```r
library(tidyverse)  # tibble(), %>%, and ggplot()

rxx <- .9                         # reliability coefficient
sigma <- 15                       # observed score standard deviation
mu <- 100                         # observed score mean
sigma_ts <- sigma * rxx           # SD of estimated true scores (rxx * sigma)
sigma_e <- sigma * sqrt(1 - rxx)  # standard error of measurement
x <- seq(mu - 4 * sigma, mu + 4 * sigma, sigma / 15)  # grid of scores
y <- dnorm(x, mu, sigma_ts)       # normal density at each grid point
ts <- 120                         # an observed score

tibble(ts = ts) %>%
  ggplot() +
  ggnormalviolin::geom_normalviolin(aes(x = ts, mu = 100, sigma = sigma_ts))
```
The more reliable a score is, the more certain we can be about what it means (provided its validity is close to its reliability). Certain rules-of-thumb about score reliability are sometimes proposed:
- Base high-stakes decisions only on scores with reliability coefficients of 0.98 or better.
- Base substantive interpretations on scores with reliability coefficients of 0.90 or better.
- Base decisions about whether to administer additional tests on scores with reliability coefficients of 0.80 or more.
Such guidelines seem reasonable to me, but I do not find reliability coefficients to be intuitively easy to understand. How much uncertainty is associated with a reliability coefficient of .80? The value of the coefficient (.80) is not directly informative about individual scores. Instead, it refers to the correlation the scores have with a repeated measurement.
Another way to think about the reliability coefficient is that it is a ratio of true score variance to observed score variance. In classical test theory, an observed score (X) is influenced by a reliable component, the true score (T), and also by measurement error (e). That is
X=T+e
The true score is static for each person, and the error fluctuates randomly.
Because the true score and error are uncorrelated, the variance of X is the sum of the variance of T and the variance of e:
\sigma^2_X=\sigma^2_T+\sigma^2_e
Therefore, the reliability coefficient of an observed score is the ratio of the true score variance to the total variance:
r_{XX}=\frac{\sigma^2_T}{\sigma^2_X}
In other words, what proportion of the observed score’s variability is consistent?
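A quick simulation makes both interpretations concrete. Here is a minimal sketch (variable names are mine, chosen for illustration, not from the original post) that generates true scores and two error-laden measurements, then checks that the test-retest correlation matches the ratio of true score variance to observed score variance:

```r
# Minimal simulation of the classical test theory model X = T + e
set.seed(1)
n <- 100000
rxx <- .80                               # target reliability
sigma_x <- 15                            # observed score SD
sigma_t <- sigma_x * sqrt(rxx)           # true score SD
sigma_e <- sigma_x * sqrt(1 - rxx)       # error SD
true_score <- rnorm(n, 100, sigma_t)     # T: stable component
x1 <- true_score + rnorm(n, 0, sigma_e)  # first measurement
x2 <- true_score + rnorm(n, 0, sigma_e)  # repeated measurement

cor(x1, x2)                # correlation with a repeated measurement, about .80
var(true_score) / var(x1)  # true score variance / observed variance, about .80
```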
Okay, so what is variance? Variance is the average squared deviation from the mean. Squared quantities are not easy to think about for most of us. For this reason, I prefer to convert reliability coefficients into confidence interval widths. Confidence interval widths and reliability coefficients have a non-linear relationship:
\text{CI Width}=2z\sigma_{x}\sqrt{r_{xx}-r^2_{xx}}
Where:

- z is the z-score associated with the level of confidence you want (e.g., 1.96 for a 95% confidence interval)
- \sigma_{x} is the standard deviation of X
- r_{xx} is the classical test theory reliability coefficient for X
For index scores (μ = 100, σ = 15), a reliability coefficient of .80 is associated with a 95% confidence interval that is about 24 points wide. That to me is much more informative than knowing that 80% of the variance is reliable.
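In R, the width formula is a one-liner. The helper name `ci_width()` below is my own shorthand, not code from the original post:

```r
# Width of a confidence interval as a function of reliability
ci_width <- function(rxx, sigma = 15, z = qnorm(.975)) {
  2 * z * sigma * sqrt(rxx - rxx^2)
}
ci_width(.80)  # about 23.5, i.e., roughly 24 points
```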
Calculating the lower and upper bounds of a confidence interval for a score looks complex with all the symbols and subscripts, but after doing it a few times, it is not so bad. Basically, compute the estimated true score (\hat{T}) and then add (or subtract) the margin of error.
\hat{T}=\mu_x+r_{xx}\left(X-\mu_x\right)
\text{CI} = \hat{T} \pm z\sigma_x\sqrt{r_{xx}-r^2_{xx}}
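Putting both formulas into a small helper (again, my own sketch rather than the author's code) shows the arithmetic directly:

```r
# Confidence interval around the estimated true score for an observed score x
ci_true_score <- function(x, rxx, mu = 100, sigma = 15, conf = .95) {
  z <- qnorm(1 - (1 - conf) / 2)        # e.g., 1.96 for a 95% interval
  t_hat <- mu + rxx * (x - mu)          # estimated true score
  moe <- z * sigma * sqrt(rxx - rxx^2)  # margin of error
  c(estimate = t_hat, lower = t_hat - moe, upper = t_hat + moe)
}
ci_true_score(x = 120, rxx = .80)  # estimate 116, interval about 104.2 to 127.8
```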
The interactive app in Figure 1 below shows the non-linear relationship between reliability and 95% confidence interval widths for different observed index scores. The confidence interval is widest when the reliability coefficient is .5 and tapers to a width of 0 when the reliability coefficient is 0 or 1.
The idea that the confidence interval width is zero when reliability is perfect makes sense. However, it might be counterintuitive that the confidence interval width is also zero when the reliability coefficient is zero.
How is this possible? To make sense of this, we have to remember what the true score is. It is the long-run average score over repeated measurements (assuming no carryover effects). If a score has no reliable component, it is pure error. When r_{XX}=0, the score is X=T+e, where \sigma_T^2=0 and \sigma_e^2=\sigma_X^2. The true score has no variance, meaning that it equals the mean of X for everyone.
In other words, the true score of a completely unreliable measure is a constant. There is no uncertainty about a constant: the estimated true score is simply the mean, so the confidence interval around it has zero width.
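Evaluating the width formula across the full range of reliabilities (reusing the `ci_width()` helper sketched earlier) shows the same pattern numerically:

```r
# Width is 0 at both extremes and largest at rxx = .5
sapply(c(0, .25, .50, .75, 1), ci_width)
# about 0.0, 25.5, 29.4, 25.5, and 0.0 points, respectively
```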
Paying close attention to confidence intervals allows you to do away with rough rules-of-thumb about reliability and make more direct and accurate interpretations about individual scores.
Citation
@misc{schneider2014,
  author = {Schneider, W. Joel},
  title = {Reliability Coefficients Are for Squares},
  date = {2014-01-16},
  url = {https://wjschne.github.io/AssessingPsyche/2014-01-16-rereliability-is-for-squares/},
  langid = {en}
}