Systematic Review Terminology

Attrition: subject units lost during the experimental/investigational period that cannot be included in the analysis (e.g. units removed due to deleterious side-effects caused by the intervention).

Bias (synonym: systematic error): the distortion of the outcome as a result of a known or unknown variable other than the intervention (i.e. the tendency to produce results that depart from the “true” result).

Confounding variable (synonym: co-variate): a variable associated with the outcome that distorts the apparent effect of the intervention.

Effectiveness: the extent to which an intervention produces a beneficial outcome under ordinary circumstances (i.e. does the intervention work?).

Effect size: the observed association between the intervention and the outcome, where the improvement or decrement in the outcome is typically expressed in standard deviation units.

Efficacy: the extent to which an intervention produces a beneficial outcome under ideally controlled circumstances (i.e. can the intervention work?).

Efficiency: the extent to which the effect of the intervention on the outcome represents value for money (i.e. the balance between cost and outcome).

Evidence-based health care: extends the application of the principles of evidence-based medicine to all professions associated with health care, including purchasing and management.

Evidence-based medicine (EBM): the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.

Fixed effects model: a mathematical model for combining the results of studies that assumes the effect of the intervention is constant across all subject populations studied. Only within-study variation is included when assessing the uncertainty of results (in contrast to a random effects model).
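
As an illustration of the usual inverse-variance approach (generic notation as a sketch, not a formula taken from the sources cited at the end of this glossary), the pooled estimate weights each study's effect size by the reciprocal of its within-study variance:

    \hat{\theta}_{FE} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i}

where \hat{\theta}_i is the effect size of study i and v_i its within-study variance.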

Forest plot: a plot illustrating individual effect sizes observed in studies included within a systematic review (incorporating the summary effect if meta-analysis is used).

Funnel plot: a graphical method of assessing bias in which the effect size of each study is plotted against some measure of study information (e.g. sample size). If the shape of the plot resembles an inverted funnel, there is no evidence of publication bias within the systematic review.

Heterogeneity: the variability between studies in terms of key characteristics (i.e. ecological variables), quality (i.e. methodology) or effect (i.e. results). Statistical tests of heterogeneity may be used to assess whether the observed variability in effect size (i.e. study results) is greater than that expected to occur purely by chance.
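
One common way of quantifying statistical heterogeneity (a standard formulation, offered here as a sketch rather than a definition from the sources below) is Cochran's Q and the derived I² statistic, for k studies with weights w_i:

    Q = \sum_{i=1}^{k} w_i \left(\hat{\theta}_i - \hat{\theta}_{FE}\right)^2, \qquad I^2 = \max\!\left(0, \frac{Q - (k - 1)}{Q}\right) \times 100\%

where I² describes the percentage of variability in effect size that is due to heterogeneity rather than chance.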

Intervention: the policy or management action under scrutiny within the systematic review.

Mean difference: the difference between the means of two groups of measurements.

Meta-analysis: a quantitative method employing statistical techniques to combine and summarise the results of studies that address the same question.
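
As an illustrative sketch only (hypothetical effect sizes and a plain inverse-variance, fixed effects pooling; not an implementation prescribed by the sources cited below):

    import numpy as np

    # Hypothetical study-level effect sizes and their within-study variances
    effects = np.array([0.30, 0.15, 0.45, 0.25])
    variances = np.array([0.04, 0.02, 0.09, 0.03])

    # Inverse-variance (fixed effects) weights
    weights = 1.0 / variances

    # Pooled (summary) effect size and its standard error
    summary_effect = np.sum(weights * effects) / np.sum(weights)
    summary_se = np.sqrt(1.0 / np.sum(weights))

    # Approximate 95% confidence interval for the summary effect
    ci_low = summary_effect - 1.96 * summary_se
    ci_high = summary_effect + 1.96 * summary_se
    print(f"Summary effect: {summary_effect:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")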

Meta-regression: a multivariable model investigating the effect sizes from individual studies, generally weighted by sample size, as a function of various study characteristics (i.e. to investigate whether study characteristics are influencing effect size).

Outcome: the effect of the intervention in a form that can be reliably measured.

Power: the ability of a study to detect an association where one exists (i.e. the larger the sample size, the greater the power and the lower the probability that a true association remains undetected).
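
In statistical terms (a standard relationship rather than a definition from the sources below), power is the complement of the Type II error rate:

    \text{power} = 1 - \beta

where \beta is the probability of failing to detect an association that is truly present.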

Precision: the proportion of relevant articles identified by a search strategy, expressed as a percentage of all articles retrieved (i.e. a measure of the ability of a search strategy to exclude irrelevant articles).
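
Expressed as a formula (standard usage, restated here for clarity):

    \text{precision} = \frac{\text{relevant articles retrieved}}{\text{total articles retrieved}} \times 100\%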

Protocol: the set of steps to be followed in a systematic review. It describes the rationale for the review, the objective(s), and the methods that will be used to locate, select and critically appraise studies, and to collect and analyse data from the included studies.

Publication bias: the possible result of an unsystematic approach to a review (e.g. research that generates a negative result is less likely to be published than that with a positive result, and this may therefore give a misleading assessment of the impact of an intervention). Publication bias can be examined via a funnel plot.

Random effects model: a mathematical model for combining the results of studies that allows for variation in the effect of the intervention amongst the subject populations studied. Both within-study variation and between-study variation are included when assessing the uncertainty of results (in contrast to a fixed effects model).
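
A sketch of the usual formulation (generic notation, not drawn from the sources below): the study weights incorporate an estimate of the between-study variance \tau^2 in addition to the within-study variance v_i:

    \hat{\theta}_{RE} = \frac{\sum_i w_i^{*} \hat{\theta}_i}{\sum_i w_i^{*}}, \qquad w_i^{*} = \frac{1}{v_i + \tau^2}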

Review: an article that summarises a number of primary studies and discusses the effectiveness of a particular intervention. It may or may not be a systematic review.

Search strategy: an a priori description of the methodology to be used to locate and identify research articles pertinent to a systematic review, as specified within the relevant protocol. It includes a list of search terms, based on the subject, intervention and outcome of the review, to be used when searching electronic databases, websites and reference lists, and when engaging with personal contacts. If required, the strategy may be modified once the search has commenced.

Sensitivity: the proportion of relevant articles identified by a search strategy as a percentage of all relevant articles on a given topic (i.e. the degree of comprehensiveness of the search strategy and its ability to identify all relevant articles on a subject).
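
As a formula (standard usage, restated here for clarity):

    \text{sensitivity} = \frac{\text{relevant articles retrieved}}{\text{all relevant articles on the topic}} \times 100\%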

Sensitivity analysis: repetition of the analysis using different sets of assumptions (with regard to the methodology or data) in order to determine the impact of variation arising from these assumptions, or uncertain decisions, on the results of a systematic review.

Standardised mean difference (SMD): an effect size measure used when studies have measured the same outcome using different scales. The mean difference is divided by an estimate of the within-group standard deviation to produce a standardised value without units.
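
In its simplest (Cohen's d) form, with generic notation offered as a sketch:

    \text{SMD} = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}

where \bar{x}_1 and \bar{x}_2 are the group means and s_{\text{pooled}} is the pooled within-group standard deviation; variants such as Hedges' g apply a small-sample correction.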

Study quality: the degree to which a study seeks to minimise bias.

Subgroup analysis: used to determine whether the effects of an intervention vary between subgroups in the systematic review. Subgroups may be pre-defined according to differences in subject populations, intervention, outcome and study design.

Subject: the unit of study to which the intervention is to be applied.

Summary effect size: the pooled effect size, generated by combining individual effect sizes in a meta-analysis.

Systematic review (synonym: systematic overview): a review of a clearly formulated question that uses systematic and explicit methods to identify, select and critically appraise relevant research, and to collect and analyse data from the studies that are included within the review. Statistical methods (meta-analysis) may or may not be used to analyse and summarise the results of the included studies. 

Weighted mean difference (WMD): a summary effect size measure for continuous data, in which studies that have measured the outcome on the same scale are pooled.
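
A sketch of the usual pooled form (generic notation), with each study's mean difference typically weighted by the inverse of its variance:

    \text{WMD} = \frac{\sum_i w_i \left(\bar{x}_{1i} - \bar{x}_{2i}\right)}{\sum_i w_i}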

 

This glossary has been compiled and adapted from:

Khan, K.S., Kunz, R., Kleijnen, J. and Antes, G. (2003). Systematic Reviews to Support Evidence-Based Medicine: how to apply findings of healthcare research. Royal Society of Medicine Press Ltd, London, UK.

NHS Centre for Reviews and Dissemination. (2001). Undertaking Systematic Reviews of Research on Effectiveness, ed. K.S. Khan, G. ter Riet, J. Glanville, A.J. Sowden and J. Kleijnen. NHS CRD Report No. 4. University of York, York, UK.