by Philip D. Harvey, PhD

Dr. Harvey is Professor of Psychiatry and Behavioral Sciences, Emory University School of Medicine, Atlanta, Georgia.

Psychiatry (Edgemont). 2009;6(1):23–25

Introduction

As a major determinant of functional disability, quality of life, and adherence to medications in people with severe mental illness, cognitive impairment is an important treatment target. Furthermore, as efforts to treat cognitive functioning with pharmacological and behavioral means attract more attention, the need to assess cognitive functioning as a treatment outcome variable becomes more important. Many mental health practitioners do not have adequate access to comprehensive neuropsychological assessments, and the greater usefulness of extensive assessments compared to more abbreviated testing has been questioned.

There are several possible ways that cognitive functioning could be examined without formal cognitive assessments. These include inferences from clinical ratings of symptoms on structured rating scales, such as the Positive and Negative Syndrome Scale (PANSS). Another method would be interview-based approaches, in which the patient or an informant is interviewed about the patient's cognitive functioning and its functional implications. A variant of the interview-based approach is the use of questionnaires completed by the patient or an informant, which can be collected without contacting the informant directly.

Assessing Cognition with PANSS

In the realm of clinical ratings, the PANSS in particular has a number of items that address aspects of “function” that are broadly cognitive in nature. These include problems in abstract thinking, conceptual disorganization, stereotyped thinking, disorientation, attentional impairments, and judgment and insight. Such items were included in the PANSS in an effort to capture broadly the symptoms seen in people with schizophrenia; they are common in this population and are rated with generally high reliability. Several studies have concluded that these items have a coherent factor structure.[1] The severity scores of these items tend to be correlated with each other and to emerge as a distinct factor in exploratory factor analyses.

Examination of these indices was popular when pharmaceutical companies were examining the possibility that second-generation antipsychotic medications directly enhanced cognition in people with schizophrenia and related disorders. A number of presentations and publications described improvements on the PANSS “cognitive” factor and concluded that these improvements might be a signal of potential for improving functional outcomes in people with schizophrenia.

However, there are at least two reasons why these clinical ratings may not translate directly into the same constructs as those measured with neuropsychological testing. The first is that many of these items are defined in terms of broad impairments in cognitive processes and related behavioral abnormalities. For instance, ratings of the severity of impairments in judgment and problem solving use examples of impulsive or irrational behavior to infer impaired judgment, and such behaviors are often multiply determined. Conceptual disorganization is rated on the basis of overtly impaired speech and includes both disorganized and impoverished speech as indices of impairment. The second reason for concern is that some of these items are based on performance-based assessments, such as difficulty with simple mental status tests (e.g., counting by sevens or spelling words backwards). These tests have fixed difficulty and are easy for most individuals, leading to a non-normal distribution of scores that contrasts with the distributions of scores on neuropsychological assessments.

Studies that have directly examined the correlation between clinically rated cognitive impairments and performance on formal testing have found two major problems with these clinical ratings. The first, and most serious, is that there is minimal overlap with performance on neuropsychological testing. In one of these studies, the PANSS cognitive factor shared less than 10 percent of the variance with any test in a neuropsychological assessment battery.[2] In the second,[3] only two tests out of a 12-test battery shared at least 10 percent of the variance with the PANSS cognitive factor, and the other 10 tests shared less than 5 percent of the variance with scores on the factor.
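As a point of reference, “variance shared” is the square of the correlation coefficient, so even the strongest of these associations corresponds to a modest correlation. The arithmetic below is illustrative only and does not reproduce values reported in the cited studies:

\[
r^2 = 0.10 \;\Rightarrow\; r = \sqrt{0.10} \approx 0.32, \qquad r^2 = 0.05 \;\Rightarrow\; r = \sqrt{0.05} \approx 0.22
\]

In other words, a PANSS cognitive factor score that shares 10 percent of the variance with a neuropsychological test corresponds to a correlation of roughly 0.3, leaving approximately 90 percent of the variance in test performance unexplained.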

Both of these studies also found that the PANSS cognitive factor was very nonspecific in terms of both factorial purity and correlations with test performance. In both studies, items from the PANSS cognitive factor were typically at least as strongly correlated with the total score on the PANSS negative factor as with the cognitive factor itself. The second aspect of this nonspecificity is that, in one of the two studies, performance on neuropsychological tests was more strongly correlated with total scores on the negative symptom factor than with scores on the PANSS cognitive factor. These results suggest minimal overlap between clinical ratings and performance on testing and minimal specificity of the cognitive factor in terms of identifying uniquely cognitive features.

These findings lead to the conclusion that clinical ratings are an unacceptable proxy for performance-based assessments using cognitive tests. While these ratings appear reliable, they are not measuring the same thing as neuropsychological tests. There is also no evidence to date that ratings on this symptom factor are a good predictor of real-world disability, suggesting that they are not a reasonable predictor of anticipated future functioning either.

Assessing Cognition Through Interviews

An alternative to inferring cognitive impairment from clinical symptoms is interview-based procedures. These procedures are structured interviews that systematically assess reports of cognitive impairments. Previous efforts in this area have yielded two measures, both of which are administered similarly.[4,5] An interviewer asks a series of questions regarding cognitively relevant activities (maintaining focus on a conversation, following a TV program, figuring out how to operate household devices). Studies of these procedures have focused on both patient self-report and the reports of someone who knows the patient well, such as a relative or some type of caregiver. The research studies have then related these interview-based data to performance on neuropsychological tests.

The findings have suggested some promise, with some caveats. Caregivers or others who know the patient well can generate reports of cognitive impairments that are moderately related to test performance. Interviewers who interview both the patient and the caregiver generate even more convergent reports. Patients, however, do not generate reports of their own cognitive impairments that are correlated with their test performance. In fact, the results of these studies did not suggest that patients were consistent over- or underestimators of their performance; rather, the correlation between self-report and test performance was close to zero.

These findings suggest that using patients' self-reports of their cognitive limitations to estimate their functioning, even when these self-reports are collected with a systematic and structured procedure, gives a poor signal of current functioning. While we have known for years that patients lack awareness of their psychotic symptoms, often referred to as lack of insight, there has been a tacit assumption that other areas might not be affected. Assessments of current everyday functioning have often relied on patients' self-reports, and recent evidence has suggested that these are not particularly accurate.[6] All of these sets of data converge to suggest that reliance on patient report of cognitive or functional impairments may be problematic.

Conclusion

Inferring cognitive impairments from clinical symptom ratings does not give a strong signal relevant to performance on neuropsychological tests. Asking patients about their impairments, even using a structured and systematic interview, does not converge with objective performance either. Informants, however, give reports of cognitive impairments that are considerably more accurate, and when these reports are combined with observations of patient behavior, they lead to even more accurate estimation of current cognitive limitations.

Perhaps the best current strategy for estimating global cognitive impairment would be a combined approach: caregiver reports of how impaired they believe patients to be, combined with a very abbreviated performance-based assessment. There is a growing literature suggesting that brief assessments, taking as little as 7 to 10 minutes, give a strong signal for estimating the severity of overall impairment. These assessments are simple to perform and interpret, and they could be combined with observation of patients and information collected from those who are quite familiar with them. Such an approach would yield a highly accurate estimation of global impairments but might lack sensitivity to the rare patient who has what appears to be a focal pattern of impairment in a certain cognitive ability area.

References
1.    White L, Harvey PD, Opler L, Lindenmayer JP. Empirical assessment of the factorial structure of clinical symptoms in schizophrenia. A multisite, multimodel evaluation of the factorial structure of the Positive and Negative Syndrome Scale. The PANSS Study Group. Psychopathology. 1997;30:263–274.
2.    Harvey PD, Serper M, White L, et al. The convergence of neuropsychological testing and clinical ratings of cognitive impairment in patients with schizophrenia. Compr Psychiatry. 2001;42:306–313.
3.    Good KP, Rabinowitz J, Whitehorn D, et al. The relationship of neuropsychological test performance with the PANSS in antipsychotic naïve, first-episode psychosis patients. Schizophr Res. 2004;68:11–19.
4.    Keefe RS, Poe M, Walker TM, et al. The Schizophrenia Cognition Rating Scale (SCoRS): interview-based assessment and its relationship to cognition, real world functioning and functional capacity. Am J Psychiatry. 2006;163:426–432.
5.    Ventura J, Cienfuegos A, Boxer O, Bilder R. Clinical global impression of cognition in schizophrenia (CGI-CogS): reliability and validity of a co-primary measure of cognition. Schizophr Res. 2008;106:59–69.
6.    Bowie CR, Twamley EW, Anderson H, et al. Self-assessment of functional status in schizophrenia. J Psychiatr Res. 2007;41:1012–1018.