# Encyclo - Concept Stew - Statistics for the Terrified: Glossary

## Copy of `Concept Stew - Statistics for the Terrified: Glossary`

The original word list, or the website it appeared on, no longer exists. This page preserves a copy of the original information, which may have been taken offline because it is outdated.

Category: Mathematics and statistics > Statistics
Date & country: 13/10/2007, UK
Words: 143

Active group
In a clinical trial, the group of patients who receive the active treatment, whose results are contrasted with those of a control group. See also Control group.

Alternative hypothesis (h1)
Sometimes we propose an alternative to the null hypothesis, to be accepted if the null hypothesis is rejected. If the null hypothesis is that the mean is three, we reject it if the mean is not three. The alternative hypothesis may state that the mean is 3.5. Note that it is possible to reject both the null and alternative hypotheses. (The statement that the mean is not equal to three is known as the complement of the null hypothesis.)

Analysis of covariance
A technique used to detect differences between groups, which removes bias caused by a separate influential variable, and also has the advantage of increasing precision.

Analysis of variance (anova)
A perversely titled technique to test whether there is a difference between a set of sample means. The simplest form of analysis of variance looks at the means classified by just one factor, eg. the factor UK nationality would produce four means: English, Irish, Scottish and Welsh. Two-way analysis of variance can handle a second factor as well, eg. Gender: male, female. This way you can look at both factors simultaneously, effectively English male, English female, Irish male, Irish female, etc. The name of the technique includes the word variance because it calculates the variation between the factor means and uses this as the basis of the test.

Antilog
Antilogarithm: returns a logged number to the standard number system. See also log.

Area under the curve (auc)
A single value summary of something measured repeatedly over time. It consists of a series of lines connecting adjacent points. The area underneath these lines is used as a summary of the values shown by these points. It can be calculated from the baseline established by the first measurement (usually preferable) or from zero. (The name of the technique is a misnomer, because there is no curve.)
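As a sketch of the arithmetic (in Python, with invented measurement times and values), the area is just the sum of the trapezia formed by the lines joining adjacent points:

```python
def auc(times, values, baseline=None):
    """Area under the lines joining (time, value) points.

    By default the first measurement is used as the baseline
    (usually preferable); pass baseline=0 to measure from zero.
    """
    if baseline is None:
        baseline = values[0]
    area = 0.0
    for i in range(1, len(times)):
        width = times[i] - times[i - 1]
        # average height of the two endpoints above the baseline
        height = ((values[i] - baseline) + (values[i - 1] - baseline)) / 2
        area += width * height
    return area

times = [0, 1, 2, 4]       # e.g. hours after treatment (hypothetical)
values = [10, 14, 12, 10]  # e.g. repeated measurements (hypothetical)

print(auc(times, values))     # 7.0, measured from the baseline of 10
print(auc(times, values, 0))  # 47.0, measured from zero
```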

Baseline
The initial data score of a subject before an experiment, used as a benchmark (or reference value) against which the experimental data is compared.

Bias
Bias is a consistent error brought about by experimental design favouring one group over another or by the investigator/data recorder favouring one group over another. In the first case it can be prevented by matching the groups, and in the second case by blinding. See also blind study.

Binary data
Data that can take only one of two values, eg. Yes/no, on/off, dead/alive. See also Data types.

Biological significance (relevance)
Getting statistical significance is not necessarily the end of the story. In a biological experiment you would also want to see biological significance, which is not always present. In biological terms, the statistically significant difference may be so small as to be irrelevant. Note that we may see biological significance in a sample, but we cannot conclude that this is a real effect in the population unless we also have statistical significance.

Blind study
Because human psychology plays a big part in how we respond to things, blinding may be employed to ensure that subjects in an experiment do not know which of the treatments they are receiving. This is used to combat bias. For example, in a clinical trial people's beliefs could affect the outcome; thus non-drug treatment may appear to be more effective on advocates of alternative medicine. See also bias, blinded evaluation, double blind, single blind.

Blinded evaluation
Unfortunately sometimes even investigators have biases (though they may not realise it). Therefore it is better if they do not know which treatment a particular subject received. In a blinded evaluation an investigator reviews and assesses outcome without knowing which particular treatment has been applied. This eliminates the possibility of both conscious and unconscious bias. See also bias, blind study, double blind, single blind.

Block randomisation
A randomisation technique often used in multi-centre clinical trials where each block contains a unique treatment allocation. The blocks are then allocated randomly within centres.

Bonferroni
When multiple testing is carried out the likelihood of finding a significant difference increases in line with the number of tests. The Bonferroni correction is applied to the significance level of each test to ensure that the overall significance level (across the complete set of tests) is brought back to the required level (usually 0.05). The correction stipulates that the significance level of each test is 0.05 divided by the number of tests. See also probability, significance level.
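The arithmetic of the correction is simple; a sketch in Python, with invented p values:

```python
# Bonferroni correction: divide the overall significance level
# by the number of tests carried out.
overall_alpha = 0.05
n_tests = 5
per_test_alpha = overall_alpha / n_tests
print(per_test_alpha)  # 0.01

# Each individual p value is then significant only if it falls
# below the corrected level.
p_values = [0.003, 0.02, 0.04, 0.2, 0.8]  # hypothetical results
significant = [p < per_test_alpha for p in p_values]
print(significant)  # [True, False, False, False, False]
```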

Box and whisker plot
A graphical display showing the range of the middle 50% of data within a box, with two extending lines (the whiskers) indicating the upper and lower extremes.

Carryover effect
This is a problem encountered in crossover trials. Sometimes the influence of the first treatment carries over when the second treatment is applied. Avoid this by having a long enough washout period before crossing over. The carryover effect makes crossover trials inappropriate for some disciplines. For example, you cannot test a teaching method this way, since once something has been learnt, its effect clearly carries over into the next period of the study. See also Crossover trials.

Case control study
In a case control study, two groups are contrasted: the subjects in one share a characteristic, usually a disease (the cases), and the subjects in the other do not (the control group). See also Control group.

Categorical data
Uses numbers or labels with no implied numeric value, e.g. cause of death: cancer, cardiovascular, respiratory, other. This is unlike ordinal data which has an order or hierarchy. See also Data types.

Central limit theorem
All means tend to be normal. For large sample sizes, the sample mean is normally distributed (or approximately normally distributed) irrespective of the distribution of the parent population. The larger the sample, the closer the sample mean is to normality.

Chi-square statistic
The test statistic arrived at during the analysis of classification tables using the chi-square test.

Chi-square test
A test used with classification tables to determine the influence (if any) of one factor (rows) on a second factor (columns) by assessing whether there is a difference in the proportion of an outcome in two or more groups. Examples of factors might be smoking (yes/no) against lung cancer (yes/no): in other words, does smoking status lead to a larger risk of lung cancer, or is it irrelevant? The chi-square test is not for use on continuous data, but specifically for counts. See also Classification table.
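A sketch of the underlying arithmetic in Python (the smoking and lung-cancer counts are invented purely for illustration):

```python
# A hypothetical 2x2 classification table: rows are smoking status,
# columns are lung cancer status.
table = [[30, 70],   # smokers:     cancer, no cancer
         [10, 90]]   # non-smokers: cancer, no cancer

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand_total = sum(row_totals)

# Expected count for each cell under the null hypothesis:
# (row total * column total) / grand total.
chi_square = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (observed - expected) ** 2 / expected

print(round(chi_square, 3))  # 12.5
```

A large chi-square statistic (relative to its null distribution) indicates that the row factor does influence the column factor.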

ci
See Confidence interval.

Classification table
This is also known as a contingency table. Data is laid out for analysis with the chi-square or Fisher's exact tests. The rows represent different groups of subjects, and the columns different outcomes. The data consists of counts of the subjects falling into each cross-classification.

Clinical significance (relevance)
Getting statistical significance is not necessarily the end of the story. In a clinical trial you would also want to see clinical significance, which is not always present. In clinical terms, the statistically significant difference may be so small as to be irrelevant. Note that we may see clinical significance in a sample, but we cannot conclude that this is a real effect in the population unless we also have statistical significance.

Clinical trial
A well-defined scientific and ethical study of the effects of a particular treatment regime. Almost always, results are compared against a control group. Clinical trials are subject to very stringent regulation and codes of practice.

Coding
Sometimes we are interested in a quality rather than a quantity, eg. Good, bad, indifferent. The best way to get a computer to handle this kind of data is to numerically code the qualities, eg:
Deteriorated -1
Stayed the same 0
Improved 1
Alternatively we may wish to classify continuous data within categories, eg. pulse rate could be coded as:
Less than 21 1
Between 21 and 50 2
Above 50 3
See also categorical data.
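A sketch of the pulse-rate coding above, in Python:

```python
# Code a continuous pulse rate into the three categories given above.
def code_pulse(rate):
    if rate < 21:
        return 1      # less than 21
    elif rate <= 50:
        return 2      # between 21 and 50
    else:
        return 3      # above 50

rates = [18, 35, 72]  # hypothetical pulse rates
print([code_pulse(r) for r in rates])  # [1, 2, 3]
```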

Confidence interval (ci)
An upper and lower limit, within which you have a stated level of confidence that the true mean lies. These are usually quoted as 95% intervals, which are constructed so that you are 95% confident that the true mean lies between the limits. To be 99% sure, you need a wider confidence interval. Increased confidence is bought at the cost of precision.
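A minimal sketch of a 95% interval for a mean, using the normal-approximation multiplier 1.96 (for small samples a t multiplier would be used instead); the data are hypothetical:

```python
from math import sqrt
from statistics import mean, stdev

data = [4.1, 5.0, 4.6, 4.9, 5.3, 4.4, 4.8, 5.1]  # invented sample
m = mean(data)
se = stdev(data) / sqrt(len(data))  # standard error of the mean

# 95% confidence interval: mean plus or minus 1.96 standard errors.
lower, upper = m - 1.96 * se, m + 1.96 * se
print(round(lower, 2), round(upper, 2))
```

Using 2.58 instead of 1.96 would give a 99% interval: more confidence, but wider limits.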

Confirmatory analyses
Studies conducted specifically to confirm a particular hypothesis.

Confounded
When the effects of two or more factors cannot be separated, e.g. a study recruiting old men and young women confounds age and gender effect. A well-designed study should seek to minimise such effects.

Contingency table
Also known as a classification table. Data is laid out for analysis with the chi-square or Fisher's exact tests. The rows represent different groups of subjects, and the columns different outcomes. The data consists of counts of the subjects falling into each cross-classification.

Continuous data
Can take any value along a continuum (eg. body temperature: 98.4, 98.46, 99.9999997 are all valid) as opposed to discrete data which can only take integer values (eg. number of children in a family). See also Data types.

Control group
A reference group involved in a study against which the active group is compared.

Correlation
Quantifies the extent to which two variables are related to each other. It is measured in the range of +1 to -1. A correlation of +1 indicates a perfect positive relationship, ie. as one goes up, the other goes up by the same amount. A correlation of -1 indicates a perfect negative relationship, ie. as one goes up, the other goes down by the same amount. A correlation of 0 indicates that the two variables are completely independent of each other. See also Linefitting.
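A sketch of the Pearson correlation coefficient, computed directly from its definition in Python; the data pairs are invented (and deliberately perfectly related):

```python
from math import sqrt

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]  # a perfect positive relationship

mx, my = sum(x) / len(x), sum(y) / len(y)
# Correlation: the sum of cross-products of deviations, scaled by
# the product of the two sums of squared deviations.
cross = sum((a - mx) * (b - my) for a, b in zip(x, y))
sx = sqrt(sum((a - mx) ** 2 for a in x))
sy = sqrt(sum((b - my) ** 2 for b in y))
r = cross / (sx * sy)
print(round(r, 6))  # 1.0
```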

Count
Exactly as it sounds, eg. returned tablet count. See also Data types.

Covariance
The variation in common between two related variables. See also Analysis of covariance.

Covariate
A secondary variable that influences the outcome of an experiment.

Critical value
The value at which a decision rule triggers significance.

Crossover trial
A study that looks at two treatments, where each individual in the study crosses over from the first treatment to the second: particularly effective because each individual acts as his or her own control. However, it is not an appropriate technique for all disciplines. See also Carryover effect.

Data
Information (usually numeric) collected on the participants in a study and on which data analysis is performed.

Data types
All data is not the same, and different types of data must be handled in different ways. See also Binary data, Categorical data, Continuous data, Count, Discrete data, Ordinal data.

Decision rule
A clear statement made at the outset of a study detailing how a decision on significance is to be made.

Degrees of freedom
This crops up everywhere in statistical tests, and is used to calculate the p value. It is a fairly deep mathematical topic and we need not go fully into it here. Broadly speaking, the larger the sample size, the larger the df: the smaller the sample, the smaller the df. However, this is modified by the number of groups you have and the parameters being estimated. A small df makes it more difficult to detect significance.

Dependent variable
The term used for the outcome variable in regression analysis. A variable that depends on another variable is called the dependent variable: the variable on which it depends is called the predictor variable (sometimes the independent variable). See also Linefitting.

Descriptive statistics
Simple summary description of data carried out before a full analysis, eg. Mean, Standard error, etc.

Discrete data
Data that can only take a small set of particular values, usually whole numbers. For instance, you cannot actually have 2.4 children. See also Data types.

Distribution
A graph plotting probability against values. There are some typical shapes: normal, uniform, exponential. The normal distribution (bell-shaped) is the most common. See also Normal distribution, population distribution, null distribution.

Double blind
When neither subject nor evaluator knows what treatment or regime has been administered. This double-blind approach reduces the risk of bias (psychological or otherwise) being introduced by either the investigator or the subjects of the study. Single-blind occurs when one of these two is aware of the treatment or regime administered. See also Bias, Blinded evaluation, Blind study, Single blind.

Economic significance (relevance)
Getting statistical significance is not necessarily the end of the story. In an economics study you would also want to see economic significance, which is not always present. In economic terms, the statistically significant difference may be so small as to be irrelevant. Note that we may see economic significance in a sample, but we cannot conclude that this is a real effect in the population unless we also have statistical significance. A new management practice that increases profits by 38p per year has no economic value.

Educational significance
Getting statistical significance is not necessarily the end of the story. In an educational study you would also want to see educational significance, which is not always present. In educational terms, the statistically significant difference may be so small as to be irrelevant. Note that we may see educational significance in a sample, but we cannot conclude that this is a real effect in the population unless we also have statistical significance. If an expensive new teaching method increases reading ability by 0.02% in a nationwide study, it has no educational significance.

Error
A misleading term: there is no mistake. This is actually the natural variation occurring in a sample.

Estimation
A set of techniques where the value of a population parameter is inferred from sample data.

Expected values
The values you would expect to get in a classification table if the null hypothesis were true.

Exploratory analyses
This is a kind of getting-to-know-your-data exercise that is carried out at the beginning of the analysis.

F distribution
The distribution of the test statistic used in analysis of variance.

F ratio
The ratio of two variances, arising from analysis of variance. A large F ratio gives rise to significant results; small ones do not.

Factor
An experiment is usually designed to investigate the effect that certain factors have on an outcome, such as sex (a two-level factor: male, female), or UK nationality (a four-level factor: English, Irish, Scottish, Welsh).

False negative
When our data leads us to believe something is untrue when it is in fact true.

False positive
When our data leads us to believe something is true when it is in fact untrue.

Field trial
Trials concerned with large populations in real life situations, for example large vaccine trials. (The term derives from agricultural experiments carried out by R.A. Fisher.)

Fisher's exact test
Used in place of the chi-square test (in a classification table scenario) when the expected value of at least one cell is less than five.

Gaussian distribution
Another name for the normal distribution, after its discoverer Gauss. See also Normal distribution.

Gradient
Also called slope. The gradient indicates how steeply a line slopes. You will come across this when using the linefitting technique. A line is described by its gradient and intercept, the point at which the line crosses the y-axis. See also Linefitting.

H0 (null hypothesis)
The null hypothesis is an aunt sally put forward to be disproved, usually stating that there is no difference or relationship between the items under study. See also H1.

H1 (alternative hypothesis)
Sometimes we propose an alternative to the null hypothesis, to be accepted if the null hypothesis is rejected. If the null hypothesis is that the mean is three, we reject it if the mean is not three. The alternative hypothesis may state that the mean is 3.5. Note that it is possible to reject both the null and alternative hypotheses. (The statement that the mean is not equal to three is known as the complement of the null hypothesis.) See also H0.

Histogram
A histogram is a bar or column chart, showing the frequency with which certain values occur in a sample. It is analogous to the distribution of a population.

Hypothesis
See also Alternative hypothesis, H0, H1, Null hypothesis.

Independent variable
A term used in linefitting for the x-axis variable, also more informatively called the predictor variable.

Intercept
One of two parameters estimated when fitting a line through pairs of data (such as height and weight), the intercept is the point at which the line crosses the y-axis.

Kruskal-Wallis test
The non-parametric equivalent of oneway analysis of variance. This test tries to find differences between three or more groups on the basis of average ranks per group.

Line of best fit
The line that best fits a set of datapoints. The best fit refers to the fact that discrepancies between the points and the line are minimised. It is described by gradient and intercept. See also Linefitting.

Linefitting
Also called regression. A method that finds the best line through a set of plotted points; a simple (and economical) way of describing the relationship between predictor and outcome variables. See also Correlation, Gradient, Intercept, Multiple regression, Regression.
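A minimal least-squares sketch in Python: it finds the gradient and intercept that minimise the squared discrepancies between the points and the line. The data pairs are invented (and lie exactly on the line y = 2x + 1):

```python
x = [1, 2, 3, 4]
y = [3, 5, 7, 9]  # lies exactly on y = 2x + 1

n = len(x)
mx, my = sum(x) / n, sum(y) / n
# Least-squares gradient: cross-products of deviations over
# squared deviations of the predictor.
gradient = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
intercept = my - gradient * mx

print(gradient, intercept)  # 2.0 1.0
```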

Log
Logarithm: also called the exponent. Expresses a number as a power of 10 or e. See also Antilog.
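A base-10 sketch in Python, showing that the antilog undoes the log:

```python
from math import log10

x = 1000
logged = log10(x)        # the power of 10 that gives x
restored = 10 ** logged  # the antilog returns the original number
print(logged, restored)
```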

Mann-Whitney test
The non-parametric equivalent of the two sample t test; essentially based on the average rank observed in each group.

Mean
An average, one of several that summarise the typical value of a set of data. The mean is the grand total divided by the number of data points.

Mean square
Appears as a column in an analysis of variance table and consists of what may loosely be called the variance between the means for each factor.

Median
The middle value in a sample sorted into ascending order. If the sample contains an even number of values, the median is defined as the mean of the middle two.

Missing values
The bugbear of many statistical experiments. This one really is what it sounds like!

Mode
The most popular point of a distribution (that is, where the shape of the distribution peaks).
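A sketch contrasting the mode with the mean and median defined above, using Python's statistics module and invented data:

```python
from statistics import mean, median, mode

data = [2, 3, 3, 5, 7, 10]

print(mean(data))    # 5: the grand total, 30, divided by 6 points
print(median(data))  # 4.0: the mean of the middle two values, 3 and 5
print(mode(data))    # 3: the most frequent value
```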

Multiple regression
This is linefitting with more than the two variables used in simple regression. The technique is used to predict one variable with the assistance of two or more predictor variables. See also Linefitting.

Multiple testing (multiplicity)
The more tests we do on the same data (eg. by subdividing the data into more and more subsets) the more likely we are to find a significant result in one of them. Unfortunately this is increasingly likely to be spurious because one in twenty tests on effectively identical populations will be significant due to a type I error. See also Type I error.

Mutually exclusive
As an example, gender can be split into two mutually exclusive groups; ie. a member of one cannot at the same time be a member of the other.

Non-parametric statistics
That branch of statistics concerned with tests that make no assumptions about the distribution of the data - for this reason also called distribution-free statistics. Because there is no distribution, there are no parameters to estimate. The Mann-Whitney and Wilcoxon tests are examples. See also Parametric statistics.

Normal distribution
The most common distribution, when extreme values are much less likely to occur than the values in the middle. This distribution is symmetrical about the mean and has a bell shape. Height and weight are good examples of variables that are normally distributed. Many naturally occurring physical measurements follow the normal distribution, which was discovered by the German mathematician Gauss. See also Distribution.

Null distribution
The distribution of the test statistic when the null hypothesis is true. See also Distribution, Null hypothesis.

Null hypothesis (h0)
The null hypothesis is an aunt sally put forward to be disproved, usually stating that there is no difference or relationship between the items under study. See also Null distribution.

Odds
The odds of an event happening is the number of times the event happens divided by the number of times it does not happen.

Odds ratio (or)
The odds ratio (or) is the ratio of the odds in one group to the odds in another.
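A sketch of the calculation with hypothetical counts:

```python
# Group A: event happened 30 times, did not happen 70 times.
# Group B: event happened 10 times, did not happen 90 times.
odds_a = 30 / 70
odds_b = 10 / 90

# The odds ratio compares the odds in the two groups.
odds_ratio = odds_a / odds_b
print(round(odds_ratio, 3))  # 3.857: the odds are nearly four times higher in group A
```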

Oneway analysis of variance
A generalisation of the two sample t test which tests for differences between three or more group means. The test calculates, amongst other things, the 'variance' between the group means - hence its name. See also Analysis of variance, Twoway analysis of variance.

Ordinal data
When categories have a natural order to them (a hierarchy), eg. improved, stayed the same, deteriorated. This is unlike categorical data where there is no natural ordering, eg. preferred colour of car; black, green, silver. See also Data types.

Outliers
A value in a sample that is so extreme as to call its validity into question.

p value
Hand-in-hand with the test statistic is the p value which indicates the probability of getting the characteristics observed in a sample if the null hypothesis were true.

Paired t test
A test on the change that occurs in a measurement carried out on the same subject under two different conditions, eg. before and after a therapeutic treatment.

Parameters
Variables not in the data, but in the model (eg. distribution, line of best fit) which describes it, eg. mean, gradient, standard deviation.

Parametric statistics
That branch of statistics based on the estimation and testing of parameters (usually normally distributed data). See also Non-parametric statistics, Parameters.

Percentile
The 10th percentile is the value (of a particular variable such as weight) beneath which 10% of the population falls. The 50th percentile is the value beneath which 50% of the population falls. These could also be written as 10%ile and 50%ile.
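A sketch in Python using the nearest-rank method (one of several common percentile definitions); the weights are invented:

```python
import math

def percentile(data, pct):
    """Nearest-rank percentile: the smallest value at or below which
    at least pct per cent of the data falls."""
    ordered = sorted(data)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

weights = [55, 60, 62, 64, 68, 70, 74, 80, 85, 90]  # hypothetical
print(percentile(weights, 10))  # 55, the 10%ile
print(percentile(weights, 50))  # 68, the 50%ile
```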

Placebo
A non-active treatment applied as a control in a study where psychological factors could affect the outcome. This allows the observation and quantification of any procedural effects involved in the trial that have nothing to do with any administered treatment. See also Placebo effect.

Placebo effect
Improvement in outcome that is consistent but not due to any treatment or non-treatment administered. This is only a factor for human beings and possibly animals where psychology of care can affect outcome. See also Placebo.

Population
The set of all possible subjects from which a sample can be drawn.

Population distribution
The distribution of the population from which a sample was drawn. See also Distribution.

Power
The ability to reject the null hypothesis when it is false.