Miciak, J., Fletcher, J. M., Stuebing, K. K., Vaughn, S., & Tolar, T. D. (2014). Patterns of cognitive strengths and weaknesses: Identification rates, agreement, and validity for learning disabilities identification. School Psychology Quarterly, 29(1), 21. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4111129/
Henry Kissinger is known for the saying: "Israel has no foreign policy, only domestic politics". This is sometimes the situation in the learning disability field as well.
Many of the professional stances are actually
political ones. Should the identification of a
possible neurobiological cause for the low achievement be required as an essential
part of the definition? Should there be a requirement to identify deficient cognitive processes or abilities that lie at the
base of the child's difficulties in reading, writing or arithmetic?
This issue is in dispute. The position that demands identification of a cognitive basis for the low achievement strengthens psychologists, who are experts in intelligence and cognitive ability assessment.
The aim of the study presented here was to look at the
Flanagan model of learning disability definition, which requires linking low
achievement with a low cognitive ability (and at another similar model which I
will not discuss here). The model was
tested with a group of 139 sixth and seventh grade students who did not respond to intervention.
The group that conducted this research included the
renowned Jack Fletcher.
Jack M.
Fletcher, Ph.D., is a Professor of Psychology at the University of Houston. For
the past 30 years, Dr. Fletcher, a board-certified child neuropsychologist, has
worked on issues related to child neuropsychology, including studies of
children with spina bifida, traumatic brain injury, and other acquired
disorders. In the area of developmental learning and attention disorders, Dr.
Fletcher has addressed issues related to definition and classification,
neurobiological correlates, and most recently, intervention. He served on the NICHD National Advisory
Council, the Rand Reading Study Group, the National Research Council Committee
on Scientific Principles in Education Research, and the President's Commission
on Excellence in Special Education. He has published three books and over 350 papers. He was President of the International
Neuropsychological Society in 2008-2009.
Fletcher argues that "there's a big question and a lot of controversy about what
cognitive assessments add…I cannot find data that shows that cognitive
assessments, strengths and weaknesses in cognitive skills, are related to
intervention outcomes. It's very hard to
find… A bigger issue is that there is little evidence that there is additional
value added information that you get from an evaluation of cognitive skills if
you've carefully evaluated achievement levels". You can see him make this argument here (minutes 07:24-08:10). This video was shot in 2010, long before this study was published.
Here is a reminder of the Flanagan definition steps. The steps depend on each other, such that a child who doesn't "pass" the first step cannot move on to the second, a child who doesn't "pass" the second step cannot move on to the third, and so on (a small code sketch of this gating logic follows the list). The steps are:
1. Low achievement (a score at least one standard deviation below the mean) in reading, writing or arithmetic tests.
2. One or more of the following cognitive abilities is significantly below average (a score at least one standard deviation below the mean): fluid ability, visuospatial processing, auditory processing, processing speed, long term storage and retrieval, short term memory or comprehension knowledge.
3. There is a reasonable or empirical link between the poor achievement and the low ability (for example, poor reading comprehension due to deficient comprehension knowledge).
4. Most of the child's cognitive abilities are within average limits (within one standard deviation of the mean).
5. Exclusionary factors (sensory disability, intellectual disability, emotional or social disorders, cultural differences, immigration and insufficient or improper instruction) are not the main reasons for the child's low achievement.
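To make the sequential gating explicit, here is a minimal sketch in Python of how I read these five steps. Everything in it is my own illustration: the study did not publish code, the standard-score scale (mean 100, SD 15, so "one standard deviation below the mean" is a score of 85 or less) is an assumption, and the link table in step 3 holds examples only.

```python
# A minimal sketch of the Flanagan steps as a gated decision, assuming
# standard scores (mean 100, SD 15). All names and the link table are
# illustrative, not taken from the study.

CUTOFF = 85  # one standard deviation below a mean of 100 (SD = 15)

# Illustrative examples of "reasonable or empirical" links (step 3).
PLAUSIBLE_LINKS = {
    "reading_comprehension": {"comprehension_knowledge"},
    "basic_reading": {"auditory_processing"},
    "reading_fluency": {"long_term_storage_and_retrieval", "processing_speed"},
}

def flanagan_ld_decision(achievement, cognitive, exclusionary_factors_primary):
    """achievement and cognitive are dicts of standard scores; the function
    stops at the first step that fails, mirroring the gating described above."""
    # Step 1: low achievement in at least one academic domain.
    low_achievement = {d for d, s in achievement.items() if s <= CUTOFF}
    if not low_achievement:
        return False
    # Step 2: at least one cognitive ability significantly below average.
    low_abilities = {a for a, s in cognitive.items() if s <= CUTOFF}
    if not low_abilities:
        return False
    # Step 3: some low ability plausibly explains some low achievement domain.
    if not any(low_abilities & PLAUSIBLE_LINKS.get(d, set()) for d in low_achievement):
        return False
    # Step 4: most cognitive abilities within average limits (simplified here
    # to "not below the cutoff").
    if sum(1 for s in cognitive.values() if s > CUTOFF) <= len(cognitive) / 2:
        return False
    # Step 5: exclusionary factors are not the main cause of the low achievement.
    return not exclusionary_factors_primary
```

On this reading, failing any gate short-circuits the rest, which is why the study could start from low achievement (inadequate response to intervention) and only then look at the cognitive profile.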
In this study, 228 sixth and seventh grade children received Tier 2 intervention. The intervention took place in groups of
10-15 students, for one period every day for an entire school year (very
impressive). The intervention included
reading fluency, vocabulary and reading comprehension. The intervention teachers received 60 hours
of training and supervision throughout the year. They were also evaluated for their adherence
to the intervention program and their teaching quality.
In the spring of the intervention year the children took four tests (I've dropped the tests' names for the sake of reading clarity):
· A basic reading test
· A word reading efficiency test
· A reading comprehension test
· A matrix test
A child who received a low score on at least one of
the first three tests (measuring reading achievement) was considered as not
responding to the intervention. There
were 139 such children.
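As a rough illustration of this screening rule: a child is flagged as a non-responder if any one of the three reading scores is low. The cutoff below is a placeholder of mine, since the paper's exact criterion is not quoted here.

```python
# Illustrative non-responder screen; the cutoff and argument names are
# placeholders, not the study's actual criterion.
RESPONSE_CUTOFF = 85

def inadequate_responder(basic_reading, word_reading_efficiency, reading_comprehension):
    scores = (basic_reading, word_reading_efficiency, reading_comprehension)
    return any(score < RESPONSE_CUTOFF for score in scores)

print(inadequate_responder(98, 102, 80))  # True: low comprehension alone is enough
```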
At
this point the authors write that the sample reflects what will emerge in many schools that complete mass screening of all
secondary students to identify struggling readers. It includes a large number
of economically disadvantaged students (83.46% of the 139 students in this
sample) and students from linguistically and culturally diverse backgrounds
(13.53% of the 139 students in this sample). The sample of inadequate
responders includes a higher percentage of students receiving free and reduced
lunch and a larger percentage of students with a history of ESL (all students
received English-only core instruction and completed the Tier 2 intervention in
English).
The paper doesn't present data on the number of years these ESL children have been living in the US. Immigration is an exclusionary factor for learning disability. This means it's possible that a large part of the 13.53% of ESL children should not have been classified as learning disabled, being in the process of acculturation and English acquisition. It's also worth noting that a poor socioeconomic background may disrupt cognitive development, especially the development of comprehension knowledge (but not only this ability). A child from a low SES family may have poor cognitive abilities not because of disabilities but rather because of a lack of opportunities to develop them.
Exclusionary
factors were not considered in this study.
In the autumn of the year following the intervention the children
took the following tests (I omit test names for clarity):
Achievement tests:
· Word and letter identification
· Word attack
· Reading comprehension
· Spelling
· Efficiency in single word reading
· A group assessment of reading comprehension
· A test of efficiency of silent reading and reading comprehension.
The children also took cognitive tests meant to measure the CHC
abilities in order to apply Flanagan's definition. A sufficient measure of a broad cognitive ability, according to Flanagan,
consists of (at least) two tests, each measuring a different narrow ability.
In this study, long term storage and retrieval, fluid ability, short term memory, comprehension knowledge and processing speed were each measured with only one test. Hence these abilities were not sufficiently assessed. Here is how the abilities were measured:
· Auditory processing: phonological decoding efficiency, phonological awareness index. It's not clear whether two different narrow auditory abilities were measured.
· Long term storage and retrieval: naming speed test.
· Fluid ability: matrix test.
· Short term memory: spatial working memory test. The test used had no national norms; the norms were collected from the sample group itself (!)
· Comprehension knowledge: listening comprehension test. Listening comprehension is not a very clean measure of comprehension knowledge, since it is affected by other abilities as well (for example, fluid ability, short term memory and processing speed).
· Processing speed: underlining test. This test doesn't have national norms either; the norms were collected from the sample.
Visuospatial
ability was not measured at all. The authors write that this was the case
"because it is not strongly
related to LD in reading and because we had a measure of nonverbal reasoning
that should be a strength in many with reading LD. For the present study,
visual processing skill was assumed to be normal in the calculation of profile
normality".
Thus, out of seven cognitive abilities, five were measured by only one test (hence insufficiently), two of those five were assessed by tests that did not have adequate norms, and one ability was not assessed at all.
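To make this tally concrete, here is a small sketch (mine, not the authors') that applies Flanagan's "at least two tests, each measuring a different narrow ability" rule to the battery as described in this post; the labels are approximate.

```python
# Check each broad CHC ability against Flanagan's adequacy rule: at least two
# tests, each tapping a different narrow ability. The battery below summarizes
# the post's description of the study; labels are approximate.
battery = {
    "auditory processing": ["phonological decoding efficiency", "phonological awareness index"],
    "long term storage and retrieval": ["naming speed"],
    "fluid ability": ["matrices"],
    "short term memory": ["spatial working memory"],
    "comprehension knowledge": ["listening comprehension"],
    "processing speed": ["underlining"],
    "visuospatial processing": [],  # not measured; assumed normal by the authors
}

for ability, measures in battery.items():
    if len(measures) >= 2:
        # Two test names do not guarantee two *different* narrow abilities --
        # the post raises exactly this doubt for auditory processing.
        status = "two indicators (adequate only if they tap different narrow abilities)"
    elif measures:
        status = "one indicator -- insufficient under Flanagan's rule"
    else:
        status = "not measured at all"
    print(f"{ability}: {status}")
```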
The authors had three hypotheses about the links between cognitive
abilities and reading: students with
word decoding difficulties will have low phonological awareness; students with
low reading fluency will have low naming speed; students with low reading
comprehension will have poor listening comprehension.
It's possible to make more
hypotheses about other cognitive abilities' involvement in reading, but the
authors did not do this.
To the best of my understanding, the study does not present the
cognitive ability scores of students with difficulties in single word decoding,
reading fluency or reading comprehension.
Achievement test scores:
The authors present the average scores of the whole 139 student
group. The average scores of the group
in basic reading and single word decoding efficiency were within normal
limits. Their average score in spelling
was also within normal limits, in the low average range.
The group had a poor average score on silent reading efficiency and
reading comprehension and on other reading comprehension tests.
Cognitive
ability scores:
The group's average scores on phonological awareness (auditory processing), rapid naming (long term storage and retrieval) and listening comprehension (comprehension knowledge) were within average limits, in the low average range. The group's average scores on the matrix test (fluid ability), spatial working memory (short term memory), and underlining test (processing speed) were average.
Only 24 of the 139 non-responders (17%) were classified as learning disabled according to CHC theory (Flanagan's model).
The authors see this number as low, and as evidence that the Flanagan model is not efficient for identifying children with learning disabilities. However:
A. We have no way of knowing what the "real" percentage of learning disabled children in the 139 non-responder group should be. It's possible that not all children who did not respond to intervention are learning disabled. Some of them may not have responded because of exclusionary factors not assessed in this study (for instance, emotional difficulties). The group's difficulties were in reading comprehension and not in reading decoding. Because of the high percentage of children from low SES backgrounds and ESL students, it's possible that learning disability was not the main reason for many of these students' low achievement. It's possible that many of these students have reading comprehension difficulties resulting from cultural and linguistic differences. And so it may not be surprising that the CHC method identified only 24 of them as learning disabled. I wonder how many of the 139 students had a g score more than one standard deviation below the mean (meaning, had many poor broad abilities). I don't think these data are presented.
B. As
written above, there were shortcomings in the implementation of the Flanagan learning
disability definition steps in this study:
the use of only one test to measure most of the cognitive abilities; using tests
without norms; and not assessing visuospatial processing. Because of these shortcomings, I'm not sure
that a conclusion about the method's efficiency can be drawn. Fletcher and his colleagues
write that due to time considerations, they were not able to use more than one
test for each ability. But they also
write that "the addition of
extra indicators for each CHC factor would be unlikely to affect the results of
the present study" (I didn't understand why). As for the measured that lacked norms, the
authors write that "the effect
of this limitation is unlikely to change the conclusions of the study. First,
the two measures were utilized only for the purpose of establishing a “normal”
cognitive profile within the XBA [Flanagan] method. The effect of a restricted
norming sample would likely result in inflated scores and thus a higher
frequency of normal profiles. Utilizing population norms may have decreased the
number of normal cognitive profiles and decreased the number of students
identified as learning disabled. Second, weak correlations between the two
measures and all reading measures suggest that the restriction of range
displayed by the reading-impaired sample may have been minimal". But it's better to make sure that profiles are normal with tests that have good norms…
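The restricted-norming issue can be illustrated with a toy simulation (entirely mine, with made-up numbers): if the reading-impaired sample sits below the population on a measure, re-norming within that sample raises everyone's relative standing, so fewer abilities look deficient than they would under population norms.

```python
# Toy simulation of the restricted-norming argument. All numbers are made up.
import random
import statistics

random.seed(0)

POP_MEAN, POP_SD = 100.0, 15.0
CUTOFF = 85  # one SD below the population mean

# Assume (for illustration only) that the non-responder sample is centred
# somewhat below the population mean on this measure.
raw_scores = [random.gauss(92, 13) for _ in range(139)]

below_population_norms = sum(1 for s in raw_scores if s <= CUTOFF)

# Re-express each score against the sample's own mean and SD, which is what
# norming on the sample itself amounts to.
m, sd = statistics.mean(raw_scores), statistics.stdev(raw_scores)
renormed = [POP_MEAN + POP_SD * (s - m) / sd for s in raw_scores]
below_sample_norms = sum(1 for s in renormed if s <= CUTOFF)

print(f"deficient under population norms:    {below_population_norms} / {len(raw_scores)}")
print(f"deficient under within-sample norms: {below_sample_norms} / {len(raw_scores)}")
# With a below-average sample the second count is smaller: within-sample norms
# inflate relative standing, which is why tests with good national norms would
# have been preferable for establishing "normal" profiles.
```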
Furthermore, some children's disability may reside in short term memory, processing speed or visuospatial ability (while the rest of their abilities are average). If there had been good measures of short term memory and processing speed, and if visuospatial ability had been measured, it's possible that more children would have been found learning disabled.