"Project Talent" adds to long-range dementia predictions
Tara Bahrampour, "In 1960, about a half-million teens took a test. Now it could predict the risk of Alzheimer’s disease.", WaPo 9/21/2018:
In 1960, Joan Levin, 15, took a test that turned out to be the largest survey of American teenagers ever conducted. It took two-and-a-half days to administer and included 440,000 students from 1,353 public, private and parochial high schools across the country — including Parkville Senior High School in Parkville, Md., where she was a student. […]
Fifty-eight years later, the answers she and her peers gave are still being used by researchers — most recently in the fight against Alzheimer’s disease. A study released this month found that subjects who did well on test questions as teenagers had a lower incidence of Alzheimer’s and related dementias in their 60s and 70s than those who scored poorly.
The cited study is Alison Huang et al., "Adolescent Cognitive Aptitudes and Later-in-Life Alzheimer Disease and Related Disorders", JAMA 2018:
Findings In this cohort study of 43 014 men and 42 749 women, lower adolescent memory for words, in women, and lower mechanical reasoning, in men, were associated with higher odds of Alzheimer disease and related disorders in later life.
More specifically,
Population-based cohort study from the Project Talent–Medicare linked data set, a linkage of adolescent sociobehavioral data collected from high school students in 1960 to participants’ 2012 to 2013 Medicare Claims and expenditures data. The association between adolescent cognitive ability and risk of ADRD in later life was assessed in a diverse sample of 43 014 men and 42 749 women aged 66 to 73 years using a series of logistic regressions stratified by sex, accounting for demographic characteristics, adolescent socioeconomic status, and regional effects.
How big are these effects?
Results from logistic regressions are expressed as odds ratios and Bonferroni-corrected 95% simultaneous confidence intervals. We express cognitive aptitude measures as z scores such that change in odds ratio (OR) should be interpreted per SD disadvantage in cognitive ability. Using a Bonferroni-corrected α, low IQ (men: OR, 1.17; 95% CI, 1.04-1.32; women: OR, 1.17; 95% CI, 1.04-1.31) and low general academic aptitude (men: OR, 1.18; 95% CI, 1.05-1.33; women: OR, 1.19; 95% CI, 1.06-1.33) were significantly associated with increased odds of ADRD in later life in both men and women.
What does this really mean? This passage is helpful:
In women, low memory for words in adolescence showed the strongest association with ADRD in later life such that 1 SD disadvantage was associated with 1.16-fold increased odds (OR, 1.16; 95% CI, 1.05-1.28). In men, low memory for words was also an important indicator (OR, 1.16; 95% CI, 1.05-1.27); however, mechanical reasoning showed a slightly more robust association; 1 SD disadvantage in mechanical reasoning was associated with 1.17-fold higher odds of ADRD (OR, 1.17; 95% CI, 1.05-1.29).
What is "1.16-fold increased odds"? They tell us earlier that
In a sample of 43 014 men and 42 749 women, incidence of Medicare-reported ADRD was 2.9% in men (n = 1239) and 3.3% in women (n = 1416)
So for women in their sample, the overall odds of an ADRD diagnosis are 1416/(42749-1416), or about 0.034. If I've understood the report correctly, multiplying those odds by 1.16 would predict (after a bit of algebra) about 1634 ADRD diagnoses for women whose "memory for words" score was one standard deviation below the mean, ignoring the various statistical corrections for other factors.
[This is because
(1634/(42749-1634))/(1416/(42749-1416)) = 1.160
Here the odds ratios are pretty close to the risk ratios.]
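For concreteness, here's a minimal sketch in Python of that back-of-the-envelope calculation, using the counts quoted above (the variable names are mine):

```python
# Back-of-the-envelope check of the odds-ratio arithmetic above,
# using the counts reported for women in the study.
N = 42749        # women in the sample
baseline = 1416  # Medicare-reported ADRD diagnoses among women
OR = 1.16        # reported odds ratio per SD disadvantage in memory for words

odds0 = baseline / (N - baseline)  # baseline odds, ~0.034
odds1 = OR * odds0                 # odds at one SD below the mean

# Solve x / (N - x) = odds1 for x, the implied number of diagnoses:
x = N * odds1 / (1 + odds1)

print(round(odds0, 3))                     # 0.034
print(round(x))                            # 1634
print(round((x / (N - x)) / odds0, 3))     # 1.16  (recovers the odds ratio)
print(round((x / N) / (baseline / N), 3))  # 1.154 (risk ratio, close to the OR)
```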
These are substantial epidemiological changes, even if the predictive value for individuals remains relatively low. What accounts for them?
The authors mention the "Nun study" among other earlier indications of similar relationships — for some background, see "Writing style and dementia", 12/3/2004; "Miers dementia unlikely", 10/21/2005; "Nun study update", 8/27/2009. As that last post notes, a 2009 paper in Neurology by Diego Iacono et al. ("The Nun Study. Clinically silent AD, neuronal hypertrophy, and linguistic skills in early life") established that "higher idea density scores in early life are associated with intact cognition in late life despite the presence of AD lesions". This is one of several lines of evidence suggesting that the connection between early-life cognitive skills and later-life AD is probably not due to a difference in the degree of later-life neurodegeneration, but rather to a "cognitive reserve" effect. "Cognitive reserve" means that people who start out with better skills can function at a higher level with a given amount of physiological deterioration, and thus delay diagnosis.
But is this because of some genetic (or at least embryonic) difference in neurophysiology? Or is it because of differences in childhood experience (including nutrition, education, and other aspects of the environment)? Or some mixture of both?
I don't know of any evidence on this point. So we might as well add "protection against ADRD" to the many reasons to improve childhood nutrition and education, avoid environmental toxins, etc.
Dick Margulis said,
October 1, 2018 @ 9:05 am
"'Cognitive reserve' means that people who start out with better skills can function at a higher level with a given amount of physiological deterioration, and thus delay diagnosis."
Anecdotally, my dad, who was diagnosed with dementia (noted on his chart), "couldn't possibly have dementia" according to multiple nurses and aides at various facilities. (My sister's response was consistently "Have you looked at the chart?!?") This caused problems of various sorts, as one might imagine.
Cervantes said,
October 1, 2018 @ 9:14 am
Actually, part of it is just that high baseline IQ means that the diagnostic criteria for dementia aren't met until the person acquires a larger relative deficit. I know this from experience: it was obvious to me and my siblings that my mother was becoming demented, but she kept passing the tests the doctors gave her and didn't get a diagnosis for years. Now it's obvious to anyone, but for a long time she could compensate. It may just be that simple.
mg said,
October 1, 2018 @ 1:43 pm
The same effect as @Dick and @Cervantes point to is why it's so hard to get diagnosis and help for smart kids with learning disabilities.
As a statistician, my reaction is that the odds ratio isn't particularly high even if it's statistically significant. Especially given possible confounders (that is, other factors that are often correlated with scores on cognitive tests, including socio-economic status and lifestyle factors).
I'd like to know how much missing data there was, and what factors are associated with not having follow-up data. Also, for a study like this it would have been more appropriate to use survival-analysis methods that look at time-to-event rather than just whether the event occurs, because those methods handle censoring: situations where someone dies before a diagnosis occurs, so you don't know whether they would eventually have developed it.
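To illustrate, here is a minimal sketch of that kind of time-to-event analysis in Python, using the lifelines implementation of the Cox proportional-hazards model; the data frame, column names, and numbers below are all hypothetical, not the study's actual data:

```python
# A hypothetical sketch of a time-to-event analysis that handles censoring,
# using the lifelines package (https://lifelines.readthedocs.io/).
import pandas as pd
from lifelines import CoxPHFitter

# Made-up data, one row per subject:
#   years_to_event - years from the 1960 test to diagnosis, death,
#                    or the end of follow-up, whichever came first
#   diagnosed      - 1 if an ADRD diagnosis was observed; 0 if censored
#                    (died or reached the end of follow-up undiagnosed)
#   memory_z       - adolescent memory-for-words score, as a z score
df = pd.DataFrame({
    "years_to_event": [48.2, 52.0, 50.5, 53.1, 49.4, 51.7],
    "diagnosed":      [1,    0,    1,    0,    1,    0],
    "memory_z":       [-1.2, 0.4,  0.2,  1.1, -0.6, -0.3],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_event", event_col="diagnosed")
cph.print_summary()  # hazard ratio per SD of memory_z, with confidence intervals
```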
Trogluddite said,
October 1, 2018 @ 3:08 pm
mg said: "The same effect as @Dick and @Cervantes point to is why it's so hard to get diagnosis and help for smart kids with learning disabilities."
Likewise for autism, whether or not a learning disability is also present. In grass-roots autism communities this is widely known as 'masking' or 'passing' (i.e. the ability to pass as a non-autistic person). Sadly, it is recognised by very few healthcare professionals, and it is not mentioned in any of the common diagnostic manuals. This intellectual compensation for innate traits or impairments can have serious consequences for one's mental and physical health, not least because it can be incredibly mentally exhausting. It is one of the reasons that many autistic people are suspicious of therapies based on behavioural psychology (e.g. Applied Behavioural Analysis), as it is felt that they often disregard the hidden costs of the cognitive and attentional demands required for maintaining any modified behaviours.
There is also some debate about the prevalence of dementia in autistic people, due to concerns that similarities between common autistic traits and those of dementia, and a lifetime of practice at masking, might confound the diagnosis of dementia in autistic patients. Very little research on autism is directed at the difficulties faced by adults on the spectrum, especially geriatric care; so, as yet, it is very difficult to say how large a problem this might be.
Trogluddite said,
October 1, 2018 @ 5:21 pm
Mark Liberman said: "So we might as well add "protection against ADRD" to the many reasons to improve childhood nutrition and education, avoid environmental toxins, etc."
While those are certainly laudable aims for innumerable reasons, I'd be cautious about this assertion; there's a big difference between protection against the neurological damage itself and better initial cognitive skills as compensation. I don't know enough about the pathology of ADRD to know what effect those environmental factors have; but if the "cognitive reserve" hypothesis is correct, then improved cognition might just delay diagnosis, rather than protect from, or retard the progression of, ADRD.
Dr Carol Routledge, Director of Research at Alzheimer's Research UK, writing in New Scientist recently, hypothesised that the disappointing results achieved so far in clinical trials of ADRD treatments may be because treatment needs to begin in the earliest stages, perhaps before any noticeable cognitive deficits manifest themselves. This is a window possibly as wide as twenty years, and presumably the milder cognitive symptoms of the earlier stages would be easier to mask. If this is correct, then it is even more vital to develop routine tests that will not be biased by "cognitive reserve".
This also suggests a likely bias in the study. The subjects with ADRD were identified by already-known diagnoses, not by an independent test for neurological damage, which means that some kind of cognitive change had most likely been noticed prior to the study. The non-ADRD cohort may therefore contain a significant number of undiagnosed subjects with high "cognitive reserve", or subjects whose cognitive impairments were not deemed worth bothering a doctor about.
bks said,
October 1, 2018 @ 5:42 pm
There is never going to be a bright-line rule for dementia. Everyone not blessed with premature death is going to experience senescence and some days will be clearer than others.
Andrew Usher said,
October 1, 2018 @ 7:31 pm
Actually, that's always been my biggest question, too, when it comes to this topic: is it just a continuum between full dementia and normal cognitive decline with age, or is there really some clinical basis for drawing a line (even if it may not be clear in every case)? I haven't researched the question myself, and I doubt the impartiality of any clinical studies that find there is a distinction.
In any case the 'cognitive reserve' explanation is so obvious it should be the default hypothesis.
k_over_hbarc at yahoo.com
Dick Margulis said,
October 2, 2018 @ 5:57 am
@bks and @Andrew: There are different types of dementia that manifest quite differently (it's not all Alzheimer disease). Behavior-based diagnoses can be confirmed by autopsy, apparently. Normal cognitive decline or senescence, whatever that means to you, may indeed be on a continuum with one or another of the dementias, but not everyone experiences that. Some people remain clearheaded until the end and die because their kidneys fail, for example.
bks said,
October 2, 2018 @ 7:45 am
Dick Margulis, yes, some people are remarkably clear-headed in their 90s, but if you haven't done the work for your Nobel Prize in Physics by age 40, it's not going to happen.
V said,
October 2, 2018 @ 8:01 am
It took me years to realize I have ADHD (or learn what it is) because I was doing great in high school (from an outside perspective), and my various problems were attributed to various random causes. Not until I started having serious anxiety issues and going to therapy did it emerge that I have neurological problems, and that took two years and ruined my long-term relationship. And it took having to go to a different country to a psychiatrist, because we simply don't have people who know anything about ADHD in my native country.
Trogluddite said,
October 2, 2018 @ 10:16 am
@Andrew Usher, Dick Margulis
There is also the problem that, by definition, a decline can have no absolute measure, only a relative one. Very few of us will have a baseline measure of pre-dementia cognitive performance for comparison; and even for those of us who do, if it was measured as part of a diagnostic assessment for some other condition, it may not be strictly comparable. In addition, there is the problem of controlling for perfectly normal variation in cognitive performance due to factors such as anxiety, sleep deprivation, medication, etc. Any kind of clinical screening reliant on behavioural observation would have to be conducted over the long term, and routine measurement of cognitive performance may raise ethical issues with regard to health insurance, disclosure, etc.