McGill, R. J., & Busse, R. T. (2015). Incremental validity of the WJ III COG: Limited predictive effects beyond the GIA-E. School Psychology Quarterly, 30(3), 353. https://pdfs.semanticscholar.org/f5b5/d70077a1b7747a31bbcd5fb7b7dfcc38c2a3.pdf
What predicts achievement better: g or the broad abilities? Well, it depends on the model of intelligence you're using, and on how the statistical analysis was done in the research on which you base your conclusion.
The WJ III COG examiner manual encourages primary interpretation at
the broad ability level (e.g., CHC-related cluster scores). Because linking performance
in reading/writing/math to the state of the child's cognitive
abilities is a major use of intelligence tests, examining relationships between
WJ III COG cluster scores and external achievement measures is important. These
examinations are also critically important for evaluating the tenability of
several models that have been proposed for use in the identification of
specific learning disabilities (SLD) in children and adolescents. These and
similar models utilize lower-order scores, such as the WJ III COG broad ability
clusters, as a critical component for determining whether or not an individual
has a learning disability.
According to Flanagan's model, a child is learning disabled if (a rough code sketch of this checklist follows the list):
A. he has significantly poor reading/writing/math achievement.
B. he has one (or two) significantly low broad ability scores.
C. the low broad ability scores can explain the child's poor performance in reading/writing/math.
D. the child's other broad abilities are average or above average.
E. exclusionary factors (like insufficient or inappropriate instruction or emotional problems) are not better explanations of the poor reading/writing/math achievement.
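For readers who like to see the logic spelled out, here is a rough sketch of such a checklist in code. This is a minimal sketch under my own assumptions: the cut-off values are illustrative, not Flanagan's published criteria, and criterion C is left to clinical judgment.

```python
# Illustrative sketch only: the cut-off values below are assumptions for
# demonstration, not published criteria, and criterion C (the low abilities
# actually explain the poor achievement) requires clinical judgment that a
# simple rule cannot capture.
from dataclasses import dataclass

@dataclass
class Profile:
    achievement: dict                      # e.g., {"reading": 78, "math": 101}
    broad_abilities: dict                  # e.g., {"Gc": 82, "Gf": 104, ...}
    exclusionary_factors_ruled_out: bool   # instruction, emotional problems, etc. (E)

def meets_sld_pattern(p: Profile, low_cut: float = 85, avg_cut: float = 90) -> bool:
    low_achievement = [a for a, s in p.achievement.items() if s < low_cut]      # A
    low_abilities = [a for a, s in p.broad_abilities.items() if s < low_cut]    # B
    others_ok = all(s >= avg_cut for a, s in p.broad_abilities.items()
                    if a not in low_abilities)                                  # D
    return (bool(low_achievement)              # A: poor achievement
            and 1 <= len(low_abilities) <= 2   # B: one or two low broad abilities
            and others_ok                      # D: other abilities average or above
            and p.exclusionary_factors_ruled_out)  # E: exclusions ruled out
```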
Shortly after the publication of the WJ III COG, McGrew and his
colleagues utilized multiple regression to examine predictive relationships
between WJ III COG CHC clusters and standardized reading, writing and math
measures. Their analyses provided evidence for differential predictive effects
across the age span for specific CHC clusters.
I'd reviewed one of McGrew's studies in a prior post. Here are slides from that post. These slides contain valuable information that can help us plan the diagnostic process, identify the child's difficulties, and design an intervention.
But McGill and Busse argue that these studies did not control for the potential effects of the common variance shared by mental measures. That common variance is a manifestation of g. Each subtest and broad ability cluster measures both g and the specific construct it is supposed to measure. If a broad ability score is "saturated" with g, it is a good measure of g but a poorer measure of the unique construct it is supposed to capture.
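One way to formalize "saturation" (my framing in standard factor-analytic terms, not the authors'): if a cluster score is the sum of a g component, a specific component, and error, its g-saturation is simply the share of its variance that comes from g.

```latex
% Cluster score X_i decomposed into a g part, a specific part s_i, and error e_i:
%   X_i = \lambda_i g + s_i + e_i
\mathrm{Var}(X_i) = \lambda_i^2\,\mathrm{Var}(g) + \mathrm{Var}(s_i) + \mathrm{Var}(e_i),
\qquad
g\text{-saturation}(X_i) = \frac{\lambda_i^2\,\mathrm{Var}(g)}{\mathrm{Var}(X_i)}
```

The higher that proportion, the more the cluster score tells us about g and the less it tells us about the unique broad ability.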
To investigate the tenability of the recommendation that practitioners interpret primarily at the broad ability level, it is necessary to examine the incremental predictive validity of the broad abilities after controlling for the variance already accounted for by the full scale score. That is, the extent to which the broad abilities add meaningful information beyond what we get from the full scale IQ score.
The participants in this study were children and adolescents ages
6–0 to 18–11 (n = 4,722) drawn from the standardization sample for the WJ III
COG and the WJ III ACH. The WJ III COG measures
7 broad abilities: Comprehension Knowledge, Fluid Reasoning, Long Term Storage
and Retrieval, Short Term Memory, Visual Processing, Auditory Processing and Processing
Speed. The General Intellectual Ability – Extended (GIA-E) score is composed of
14 subtests, 2 subtests for each broad ability.
McGill and Busse analyzed the data with hierarchical multiple regression. In this procedure, the full scale score is entered first into a regression equation, followed by the lower-order factor or cluster scores, to predict a criterion achievement variable. This entry technique allows the predictive effects of the cluster scores to be assessed while controlling for the effects of the full scale score.
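To make the two-block entry concrete, here is a minimal sketch in Python (scikit-learn), assuming a hypothetical file of standard scores; the file name and column names are mine, not the actual WJ III variable labels.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical data file with standard scores (illustrative names only)
df = pd.read_csv("wjiii_scores.csv")

criterion = df["broad_reading"]          # a WJ III ACH cluster (assumed column name)
block1 = df[["gia_e"]]                   # Block 1: the full scale score only
block2 = df[["gia_e", "gc", "gf", "glr", "gsm", "gv", "ga", "gs"]]  # Block 2: add the 7 CHC clusters

# R^2 for each block
r2_block1 = LinearRegression().fit(block1, criterion).score(block1, criterion)
r2_block2 = LinearRegression().fit(block2, criterion).score(block2, criterion)

# Incremental validity of the clusters = the change in R^2 between blocks
delta_r2 = r2_block2 - r2_block1
print(f"R^2 for GIA-E alone:               {r2_block1:.2f}")
print(f"Incremental R^2 added by clusters: {delta_r2:.2f}")
```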
The authors found that the GIA-E accounted for statistically significant portions of the variance in each of the WJ III ACH cluster scores: Broad Reading, Basic Reading, Reading Comprehension, Broad Mathematics, Math Calculation, Math Reasoning, Broad Written Language, Basic Writing, Written Expression, Oral Expression, and Listening Comprehension (I'm not sure oral expression and listening comprehension are really achievement areas; I think they belong more to the Comprehension Knowledge cluster, and they are also affected by fluid reasoning). The GIA-E accounted for 29% (Math Calculation Skills) to 56% (Listening Comprehension; Mdn = 46%) of the criterion variable variance.
The CHC clusters, entered jointly into the second block of the regression equations, accounted for 2% (Math Reasoning) to 23% (Oral Expression; Mdn = 5%) of the incremental variance. The 23% incremental variance in Oral Expression was predicted by the Comprehension Knowledge cluster. But even this unique outcome is dubious: oral expression tests measure linguistic competency and vocabulary, that is, Comprehension Knowledge. So it's hardly surprising that Comprehension Knowledge predicts Comprehension Knowledge… None of the other broad abilities accounted for more than 5% of achievement variance beyond the GIA-E.
Although the CHC clusters contributed statistically significant portions of incremental achievement variance beyond the effects of the GIA-E, the effect size estimates were negligible. The results of the study indicate that practitioners who interpret CHC cluster scores on the WJ III COG without accounting for the effects of the GIA-E risk overestimating the predictive effects of various CHC-related abilities.
These results are fairly consistent with those that have been
obtained from other cognitive measures like the Wechsler test.
But how does all this fit with
McGrew's results presented in the slides above?
McGill and Busse write that reverse entry of the independent variables, in this case entering the broad ability clusters first, would result in the clusters accounting for approximately the same variance proportions that were attributed to the GIA-E in this study. Consequently, the GIA-E would provide little incremental prediction. Order of entry thus determines whether scores such as the GIA appear to mean everything or nothing. However, the authors argue, order of entry is not an arbitrary choice: it must be determined a priori according to the expected theoretical relationships between the variables and their causal priority. Contemporary intelligence theory (e.g., CHC) and the WJ III COG structural model support entering the GIA-E before the clusters, because the cluster scores are both theoretically and statistically subordinate to the GIA-E. Reverse entry conflicts with existing intelligence theory and violates the principle of parsimony (if you can predict something with one variable as successfully as with many variables, prefer the one over the many).
The CHC model is an integration of the Cattell-Horn model and Carroll's model. Cattell and Horn proposed a model of intelligence that included broad abilities but no g. Carroll proposed a model that included both g and broad abilities. To this day, many questions remain as to whether g reflects an actual latent ability or is merely a statistical artifact resulting from the tendency of all tests of mental ability to be positively correlated.
So, if I believe there's no g (like Cattell and Horn), the first thing I'll enter into the regression equation will be the broad abilities, and the result I'll get is that the broad abilities predict achievement pretty well. On the other hand, if I believe in g (like Carroll), I'll enter g first and find that it's the best predictor of achievement and that the broad abilities don't add any meaningful incremental value…
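A tiny simulation (my own illustration, not anything from the paper) makes this visible: when a composite and its component clusters share most of their variance, whichever block enters the regression first claims the shared variance, and the second block adds little.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# A latent general factor and seven cluster scores that all load on it
g = rng.normal(size=n)
specifics = rng.normal(size=(n, 7))
clusters = 0.8 * g[:, None] + 0.6 * specifics

# A composite of the clusters stands in for the GIA-E
gia = clusters.mean(axis=1)

# Achievement driven mostly by g, plus a small cluster-specific effect and noise
achievement = 0.7 * g + 0.2 * specifics[:, 0] + 0.65 * rng.normal(size=n)

def r2(X, y):
    """R^2 from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - np.var(y - X @ beta) / np.var(y)

r2_gia = r2(gia[:, None], achievement)
r2_clusters = r2(clusters, achievement)
r2_both = r2(np.column_stack([gia, clusters]), achievement)

print(f"GIA first:      R^2 = {r2_gia:.2f}, clusters add {r2_both - r2_gia:.2f}")
print(f"Clusters first: R^2 = {r2_clusters:.2f}, GIA adds {r2_both - r2_clusters:.2f}")
```

Whichever belief you start from determines the entry order, and the entry order determines which set of scores looks indispensable.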
What's more, it's obvious that the FSIQ predicts achievement. Of course it predicts achievement better than any single broad ability, since it embodies the influence of all the broad abilities. But this is not the interesting question. What we want to know is to what extent the components of g, that is, the broad abilities, predict achievement. I think this is a good theoretical reason to enter the broad abilities first into the regression equation, whether we believe in g or not. We don't want to use g to predict achievement because it's too broad and doesn't lead to meaningful interventions.
McGill and Busse point out that incremental validity researchers have largely relied on archived standardization data to assess the predictive effects of cognitive test scores. This is problematic, given that the two incremental validity studies that have been conducted with data from clinical samples found significantly diminished effects for the general factor, with greater portions of achievement variance accounted for by the factor-level scores.
Additionally, whereas the cognitive variables consistently accounted for large portions of achievement variance, approximately half of the variance in the WJ III ACH variables was left unpredicted in this study. What explains this remaining variance? Maybe noncognitive variables like motivation or effort.