Jon Hamilton, "Alzheimer's Blood Test Raises Ethical Questions", NPR Morning Edition 3/9/2014:
An experimental blood test can identify people in their 70s who are likely to develop Alzheimer's disease within two or three years. The test is accurate more than 90 percent of the time, scientists reported Sunday in Nature Medicine.
The finding could lead to a quick and easy way for seniors to assess their risk of Alzheimer's, says Dr. Howard Federoff, a professor of neurology at Georgetown University. And that would be a "game changer," he says, if researchers find a treatment that can slow down or stop the disease.
But because there is still no way to halt Alzheimer's, Federoff says, people considering the test would have to decide whether they are prepared to get results that "could be life-altering."
But having a prediction with no prospect for a cure is not, in my opinion, the biggest problem with tests of this kind.
As we can learn from the cited publication (Mark Mapstone et al., "Plasma phospholipids identify antecedent memory impairment in older adults", Nature Medicine 3/9/2014), the "more than 90 percent of the time" accuracy is defined as "a sensitivity of 90% and specificity of 90%" for identifying participants who had unimpaired memory at the beginning, but would begin exhibiting cognitive impairment during the study.
One small point is that the size of the study was not large enough to be very certain about these numbers:
We enrolled 525 community-dwelling participants, aged 70 and older and otherwise healthy, into this 5-year observational study. Over the course of the study, 74 participants met criteria for amnestic mild cognitive impairment (aMCI) or mild Alzheimer's disease (AD) (Online Methods); 46 were incidental cases at entry, and 28 phenoconverted (Converters) from nonimpaired memory status at entry (Converterpre).
The blood test's task is to distinguish participants in the "Converterpre" category from those in the "Normal Controls" (NC) category, and 28 is not a very large number of Converterpre cases.
But the bigger problem lies in the meaning of "sensitivity" and "specificity", as explained by John Gever, "Researchers Claim Blood Test Predicts Alzheimer's", MedPage Today 3/9/2014:
If the study cohort's 5% rate of conversion from normal cognition to mild impairment or Alzheimer's disease is representative of a real-world screening population, then the test would have a positive predictive value of just 35%. That is, nearly two-thirds of positive screening results would be false. In general, a positive predictive value of 90% is considered the minimum for any kind of screening test in normal-risk individuals.
Let's unpack this. We start with a 2-by-2 "contingency table", relating test predictions and true states or outcomes:
|  | Reality is Positive (P) | Reality is Negative (N) |
|---|---|---|
| Test is Positive | True Positive (TP) | False Positive (FP) |
| Test is Negative | False Negative (FN) | True Negative (TN) |
In this context, the "sensitivity" is the true positive rate, TP/P: the proportion of real positives that test positive.
The "specificity" is the true negative rate, TN/N: the proportion of real negatives that test negative.
And 90% sensitivity and specificity sounds pretty good.
But what doctors and patients actually learn is only whether the test is positive or negative. So suppose that the true prevalence of the condition is 5%, and ask what a positive result then means. Out of 1,000 patients, there will be 0.05*1000 = 50 who are truly going to get AD; and of these, 0.9*50 = 45 will have a positive test result. But there will be 0.95*1000 = 950 who are not going to get AD; and of these, 0.1*950 = 95 will also have a positive test result.
So there will be a total of 45+95 = 140 positive test results, and of these, 45 will be true positives, or 45/140 = 32%.
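Here is the same arithmetic as a small Python function; the function name is mine, for illustration, but the formula is just the count-based reasoning above (equivalently, Bayes' rule):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(really positive | test positive), per patient rather than per 1,000."""
    true_positives = prevalence * sensitivity               # 0.05 * 0.9 = 0.045
    false_positives = (1 - prevalence) * (1 - specificity)  # 0.95 * 0.1 = 0.095
    return true_positives / (true_positives + false_positives)

print(f"{positive_predictive_value(0.05, 0.9, 0.9):.0%}")   # 32%
```

The moral is that the predictive value of a positive result depends on the base rate of the condition, not just on the test's error rates.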
Thus the real problem with a positive test result, in this case, would not be learning that you're fated to get AD and can't do anything to prevent it. Rather, it would be believing that you're 90% likely to get AD when your actual chances are much lower.
In fact, I think that the numbers might be a bit better than Gever's article suggests. According to "2012 Alzheimer's disease facts and figures" from the Alzheimer's Association:
The estimated annual incidence (rate of developing disease in a one-year period) of Alzheimer’s disease appears to increase dramatically with age, from approximately 53 new cases per 1,000 people age 65 to 74, to 170 new cases per 1,000 people age 75 to 84, to 231 new cases per 1,000 people over age 85.
Even at a rate of 53 per 1,000, the chances of "converting" within three years would be 1 − (1 − 0.053)^3 ≈ 0.151, so the positive predictive value of the test would be more like 62% than 32%. But 62% is still not 90%, and the general point is an important one.
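And here is that back-of-the-envelope revision in the same style, assuming (as the calculation above does) that the 53-per-1,000 annual incidence applies independently in each of the three years:

```python
annual_incidence = 0.053                      # new AD cases per person per year, ages 65-74
p = 1 - (1 - annual_incidence) ** 3           # chance of converting within three years
ppv = (p * 0.9) / (p * 0.9 + (1 - p) * 0.1)   # same PPV formula as above

print(f"three-year conversion probability = {p:.3f}")  # 0.151
print(f"positive predictive value = {ppv:.1%}")        # 61.5%, i.e. roughly 62%
```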
For more on the terminology involved, see the Wikipedia article on sensitivity and specificity.