Variability refers to the extent or spread of a group of scores. Measures of variability (sometimes called measures of dispersion) summarize how widely the scores are dispersed relative to the midpoint of the data. Common measures of variability include the range, variance, and standard deviation.

Variability measures

Variability refers to how spread out the scores in a distribution are, or how much the scores vary from each other. There are four main measures of variability: range, interquartile range, variance, and standard deviation. The range is the difference between the highest and lowest scores in a distribution. It is rarely used because it considers only the two extreme scores.

The interquartile range, on the other hand, measures the spread of just the middle fifty percent of the scores. In other words, to determine the interquartile range, the score at the 25th percentile is subtracted from the score at the 75th percentile, which captures the range of the middle 50 percent of the scores. The variance is the average of the squared differences of each score from the mean.

To calculate the variance, the difference between each score and the mean is squared and then added. This sum is then divided by the number of scores minus one. When the square root of the variance is taken, we call this new statistic the standard deviation. Since the variance represents the squared differences, the standard deviation represents the actual differences, and is therefore easier to interpret and much more commonly used. However, since the standard deviation is based on the mean of the distribution, it is also affected by extreme scores in a skewed distribution.
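As an illustration, the following minimal Python sketch computes all four measures for a small set of hypothetical scores, using only the standard library; the variable names and data are illustrative.

```python
import statistics

scores = [4, 8, 6, 5, 3, 7, 9, 6, 5, 7]  # hypothetical scores

# Range: highest score minus lowest score
score_range = max(scores) - min(scores)

# Interquartile range: 75th percentile minus 25th percentile
q1, _, q3 = statistics.quantiles(scores, n=4)
iqr = q3 - q1

# Variance: sum of squared deviations from the mean, divided by n - 1
variance = statistics.variance(scores)

# Standard deviation: square root of the variance
std_dev = statistics.stdev(scores)

print(f"range={score_range}, IQR={iqr:.2f}, variance={variance:.2f}, SD={std_dev:.2f}")
```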

Background

Over the last decade, there has been a sharp increase in the number of published diagnostic study meta-analyses, and the methods for conducting such meta-analyses have evolved rapidly. Analyzing the variability in the results of primary studies is a challenge in any type of systematic review, but it is even more difficult in systematic reviews of diagnostic studies.

Estimates of test accuracy are likely to differ between the studies in a meta-analysis. This is known as variability or heterogeneity (in the broad sense of the word). Some variability in estimates can be expected simply due to chance, as a result of sampling error. Even if studies are methodologically identical and carried out in the same population, their results may differ because each study looks at only a sample of the entire theoretical population.

When there is more variability than expected due solely to chance, this is called statistical heterogeneity; some refer to it as “true heterogeneity” or simply as heterogeneity. Statistical heterogeneity indicates that the accuracy of a test differs between studies (this is sometimes called a difference in ‘true effects’). Reviewers are encouraged to look for possible explanations for these differences, as they can have important clinical implications.[4-6]

Cochran’s Q Test

When there is a single (univariate) measure of effect, Cochran’s Q test is often used to test for variability beyond chance, and I² is used to quantify this variability. Unlike reviews of interventions, which focus on a single measure of effect (e.g., a hazard ratio or odds ratio), reviews of diagnostic studies often involve meta-analyses of two correlated outcomes, namely sensitivity and specificity (the proportions of sick and non-sick people who are correctly identified).
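As a sketch of the univariate case, the following Python function computes Cochran’s Q and I² from hypothetical study effect estimates and their within-study variances; the function name and data are illustrative, not taken from any particular package.

```python
def cochran_q_and_i2(effects, variances):
    """Cochran's Q statistic and I^2 for a set of study effects.

    effects   -- per-study effect estimates (e.g., log odds ratios)
    variances -- their within-study variances
    """
    weights = [1.0 / v for v in variances]                 # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    # I^2: percentage of total variability beyond what chance would explain
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log odds ratios and variances from five studies
q, i2 = cochran_q_and_i2([0.5, 0.8, 0.3, 1.1, 0.6], [0.04, 0.09, 0.06, 0.12, 0.05])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```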

Sensitivity and specificity vary inversely with the threshold at which patients are considered ill, leading to a negative correlation between these estimates known as the threshold effect. Thresholds can be explicit, such as specific cut-off values used in laboratory tests, or implicit, such as differences in the way imaging tests are interpreted between studies.

In a meta-analysis of diagnostic tests, the explicit or implicit thresholds of the test under study may differ between studies, leading to varying estimates of sensitivity and specificity. It is clinically relevant to know how much variability exists beyond what could be attributed to chance or the threshold effect. Rather than performing two separate univariate analyses of sensitivity and specificity, where it is impossible to estimate how much of the variability is due to the threshold effect, another approach is to focus on a single parameter, such as the diagnostic odds ratio (DOR), overall accuracy, or the Youden index.
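For illustration, this hypothetical Python helper derives the single-parameter summaries mentioned above, the DOR and the Youden index, from the counts of a 2×2 diagnostic table.

```python
def single_parameter_summaries(tp, fp, fn, tn):
    """Diagnostic odds ratio and Youden index from a 2x2 table.

    tp, fp, fn, tn -- true/false positive and negative counts (hypothetical)
    """
    sensitivity = tp / (tp + fn)   # proportion of sick correctly identified
    specificity = tn / (tn + fp)   # proportion of non-sick correctly identified
    dor = (tp * tn) / (fp * fn)    # diagnostic odds ratio
    youden = sensitivity + specificity - 1
    return dor, youden

dor, youden = single_parameter_summaries(tp=90, fp=20, fn=10, tn=80)
print(f"DOR = {dor:.1f}, Youden index = {youden:.2f}")
```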

The Moses–Littenberg SROC Curve

The Moses–Littenberg summary receiver operating characteristic (SROC) curve takes this approach by modeling the relationship between accuracy and a threshold-related parameter, namely the proportion with positive test results. More recently, however, hierarchical random-effects models have been shown to be more appropriate and more intuitive, such as the bivariate random-effects model proposed by Reitsma et al., which focuses on estimating a summary point and its corresponding confidence region, and the hierarchical SROC model.
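The following sketch shows, under simplified assumptions, how the Moses–Littenberg approach fits a straight line to D = logit(TPR) − logit(FPR) (the log DOR) against S = logit(TPR) + logit(FPR) (a proxy for the positivity threshold); the data and function name are hypothetical.

```python
import math

def moses_littenberg(tprs, fprs):
    """Fit the Moses-Littenberg SROC line D = a + b*S by ordinary least squares."""
    logit = lambda p: math.log(p / (1 - p))
    d = [logit(t) - logit(f) for t, f in zip(tprs, fprs)]  # log DOR per study
    s = [logit(t) + logit(f) for t, f in zip(tprs, fprs)]  # threshold proxy
    n = len(d)
    s_bar, d_bar = sum(s) / n, sum(d) / n
    b = sum((si - s_bar) * (di - d_bar) for si, di in zip(s, d)) / \
        sum((si - s_bar) ** 2 for si in s)                 # OLS slope
    a = d_bar - b * s_bar                                  # OLS intercept
    return a, b

# Hypothetical sensitivities (TPR) and false positive rates from four studies
a, b = moses_littenberg([0.90, 0.85, 0.80, 0.95], [0.10, 0.20, 0.15, 0.30])
print(f"intercept a = {a:.2f}, slope b = {b:.2f}")
```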

The Hierarchical SROC Model (HSROC)

The HSROC model focuses on fitting a summary receiver operating characteristic (SROC) curve. Both it and the bivariate model are random-effects models, which assume that the true effects vary around a mean value according to a given distribution and estimate that distribution, as opposed to fixed-effects models, which assume that all studies share the same common effect.

The HSROC and bivariate models are identical when no covariates are included, and the parameters of one model can be used to calculate those of the other. A bivariate random-effects analysis estimates the correlation between the two outcome measures, allowing the calculation of conditional between-study variances (i.e., the variance in specificity at a fixed value of sensitivity, and vice versa). These conditional variances are smaller than the between-study variances from two separate univariate analyses of sensitivity and specificity whenever there is a (negative) correlation between the two outcome measures.
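As a rough illustration of why the conditional variances are smaller, the sketch below applies the bivariate-normal identity var(X | Y) = var(X)(1 − ρ²) to hypothetical between-study variances on the logit scale; this is a simplification of what a fitted bivariate model would report.

```python
def conditional_between_study_variance(var_sens, var_spec, rho):
    """Conditional between-study variances implied by a bivariate model.

    var_sens, var_spec -- between-study variances of logit sensitivity/specificity
    rho                -- estimated correlation between the two (often negative)

    For a bivariate normal, var(X | Y) = var(X) * (1 - rho**2), so any
    nonzero correlation makes the conditional variance smaller than the
    marginal one.
    """
    return var_sens * (1 - rho ** 2), var_spec * (1 - rho ** 2)

# Hypothetical estimates on the logit scale
cond_sens, cond_spec = conditional_between_study_variance(0.50, 0.40, rho=-0.6)
print(f"conditional var(sens) = {cond_sens:.2f}, var(spec) = {cond_spec:.2f}")
```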

Variability of estimates in surveys

Pollsters often speak of precision in terms of the “margin of error” (MOE), which describes how much survey estimates would be expected to bounce around if the survey were repeated identically many times. For probability-based surveys, the margin of error is generally based on the inherent mathematical properties of random samples. For opt-in samples, this is not possible. Instead, the MOE must be based on modeling assumptions about what other hypothetical samples would look like if the same sampling process were repeated many times. Although the interpretation is largely the same as for probability-based samples, we call it a “modeled” margin of error to explicitly acknowledge its dependence on these assumptions.

Survey Accuracy

This type of error is in addition to any systematic bias caused by under-coverage, non-response, or self-selection. For example, an estimate with an MOE of ±3 percentage points and no bias would normally fall within 3 points of the truth. If the bias were +10 points, the same margin of error would mean that estimates would normally fall 7 to 13 points above the truth, distributed in the same way but centered on the wrong value.
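A minimal sketch of this arithmetic, assuming the classical simple-random-sample formula for a proportion (opt-in surveys would instead require a modeled MOE):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Classical margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical survey: a 50% estimate from n = 1,000 respondents
moe = margin_of_error(0.50, 1_000)       # about +/- 3.1 percentage points

truth, bias = 0.50, 0.10                 # suppose a +10-point systematic bias
estimate = truth + bias                  # estimates center on the wrong value
print(f"MOE = +/-{moe:.3f}; estimates typically fall in "
      f"[{estimate - moe:.3f}, {estimate + moe:.3f}] around the truth {truth}")
```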

While sample size is generally considered the most important factor in determining the MOE, survey precision is also affected by weighting. Including more variables in the adjustment generally leads to a larger MOE, as does discarding observations when making comparisons. To see how the different procedures influence variability, we calculated the modeled MOE for each of the 81 estimates of the 24 reference variables and took the average. Without weighting, the average margin of error on the reference variables was ±1.3 percentage points for a sample size of n = 2,000. As the sample size increased, the average MOE decreased to a minimum of ±0.4 points at n = 8,000.
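One standard way to see how weighting inflates the MOE is the Kish design-effect approximation, sketched below with hypothetical weights; this is a textbook approximation, not the modeled MOE used in the study described here.

```python
import math

def weighted_moe(p, weights, z=1.96):
    """Margin of error inflated by unequal weights (Kish approximation).

    The design effect deff = n * sum(w^2) / (sum(w))^2 grows with the
    variability of the weights, so heavier weighting widens the MOE.
    """
    n = len(weights)
    deff = n * sum(w ** 2 for w in weights) / sum(weights) ** 2
    return z * math.sqrt(deff * p * (1 - p) / n)

# Hypothetical weights for 2,000 respondents: half up-weighted, half down-weighted
weights = [1.5] * 1000 + [0.5] * 1000
print(f"weighted MOE = +/-{weighted_moe(0.50, weights):.4f}")
```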

The modeled margin of error increases only slightly with the addition of political variables

A clear finding is that the use of political variables in addition to basic demographics has a minimal effect on the margin of error. For the 14 methods and all sample sizes, adding political variables to the adjustment procedure never increased the average MOE by more than 0.2 percentage points. In most cases the difference was even smaller, and in some cases the average MOE was actually lower with political variables than without them. Given this consistent pattern, the remainder of this section focuses only on procedures that adjust for both demographic and political variables.

Bibliographic References

Pocock SJ, Hughes MD, Lee RJ. Statistical problems in the reporting of clinical trials – a survey of three medical journals. N Engl J Med. 1987;317:426–32.

García-Berthou E, Alcaraz C. Incongruence between test statistics and P values in medical papers. BMC Med Res Methodol. 2004;4:13–7.

Cooper RJ, Schriger DL, Close RJ. Graphical literacy: The quality of graphs in a large-circulation journal. Ann Emerg Med. 2002;40:317–22.

Goodman SN, Altman DG, George SL. Statistical reviewing policies of medical journals. J Gen Intern Med. 1998;13:753–6.
