This is a very theoretical post that I hope will help me
argue that some of the critique of psychologists' emphasis on broad abilities
over general intelligence is unjustified.
According to the excellent paper by Beaujean (2015), there are two theoretical approaches that conceptualize the
relations between g, broad cognitive abilities/group factors and narrow
cognitive abilities/subtests: the British approach (bi-factor model) and the
American approach (hierarchical model).
The American approach is shown in figure 1 (click to enlarge).
Figure 1
As is evident from the figure, this approach sees
intelligence as a hierarchical structure.
The approach works "bottom up": at the lowest level are subtests
which measure narrow abilities. Higher-order factors (broad abilities such as fluid ability and comprehension-knowledge) are formed from clusters of subtests that measure a common essence (i.e., that share common variance). Every subtest is a measure of the broad ability to which it belongs and also measures something unique. If enough broad abilities/group factors are present, and they share enough variance, their correlations can in turn be factor analyzed to find higher-order factors. The apex of this higher-order factor model often contains a single factor: general intelligence (g). Each broad ability is a
measure of g and also measures something unique. Because g is formed out of the broad abilities,
in this model the broad abilities and g are dependent on each other.
Thus, in the American approach, which sees intelligence as a
hierarchical structure, broad cognitive abilities are formed before g and precede
it. g is formed out of the broad abilities and expresses their common variance, the component they all share. In this sense, g is secondary to the broad abilities.
In a hierarchical model the common variance of the subtests
does not contribute directly to the formation of g because g is formed out of
the broad abilities (not the narrow ones).
And vice versa: g does not have a
direct influence on the subtests. It
affects broad abilities directly and the broad abilities affect subtests.
In a hierarchical model, the main difference between g and
broad abilities is their place in the hierarchy: g is superior to the broad abilities and
represents an entity that is more abstract than broad abilities.
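To make the hierarchical structure concrete, here is a minimal simulation sketch in Python (numpy only; the subtest names, broad abilities, and loadings are all invented for illustration, not taken from any actual battery). Subtest scores are generated from broad-ability factors, and the broad-ability factors are generated from g, so g reaches the subtests only indirectly, through the broad abilities.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # simulated examinees

# Higher-order (hierarchical) structure: g -> broad abilities -> subtests.
g = rng.normal(size=n)

# Each broad ability = a loading on g plus its own unique part.
fluid = 0.8 * g + 0.6 * rng.normal(size=n)
comprehension_knowledge = 0.7 * g + 0.71 * rng.normal(size=n)

# Each subtest = a loading on its broad ability plus its own unique part.
# Note that g never appears here directly: it reaches the subtests
# only through the broad abilities.
matrices = 0.8 * fluid + 0.6 * rng.normal(size=n)
series = 0.7 * fluid + 0.71 * rng.normal(size=n)
vocabulary = 0.8 * comprehension_knowledge + 0.6 * rng.normal(size=n)
information = 0.7 * comprehension_knowledge + 0.71 * rng.normal(size=n)

X = np.column_stack([matrices, series, vocabulary, information])
print(np.round(np.corrcoef(X, rowvar=False), 2))
# All subtests correlate positively; subtests from different broad abilities
# correlate only because the broad abilities share g.
```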
The British approach is presented in figure 2.
Figure 2
As you can see in the figure, g is formed directly from the
subtests, and it expresses the common component of all subtests. Each subtest is a measure of g and also measures
something unique.
The broad abilities are estimated from the covariance remaining among the subtests after accounting for g; each broad ability reflects what is common to a group of subtests once g has been removed. Each subtest still retains a component shared with neither g nor its broad ability: a unique element, specific to the narrow ability that the subtest measures. Since this approach begins with g, and g precedes the broad abilities, it works "top down".
This model is a bi-factor model: one factor is g and the other factors are the broad abilities. In a bi-factor model, the factors are independent of each
other. g is not dependent on the broad
abilities. A change in performance in
specific subtests can affect specific broad abilities but not g. A change in g will affect performance in the subtests directly, but it will not necessarily affect all broad abilities. General intelligence in this
model directly affects the subtests (not through the broad abilities, as
happens in the hierarchical model).
In a bi-factor model, the main difference between g and broad abilities is in the breadth of influence. General
intelligence affects all subtests, while each broad ability affects only the
subtests that measure it.
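For comparison, here is the analogous sketch for the bi-factor structure (again numpy only, with invented loadings). g and the group factors are generated independently of one another, and every subtest is influenced directly by g, by one group factor, and by its own unique part.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Bi-factor structure: g and the group factors are mutually independent.
g = rng.normal(size=n)
fluid_group = rng.normal(size=n)           # independent of g
comp_knowledge_group = rng.normal(size=n)  # independent of g

# Each subtest = direct effect of g + effect of its group factor + uniqueness.
matrices = 0.6 * g + 0.5 * fluid_group + 0.62 * rng.normal(size=n)
series = 0.6 * g + 0.4 * fluid_group + 0.69 * rng.normal(size=n)
vocabulary = 0.6 * g + 0.5 * comp_knowledge_group + 0.62 * rng.normal(size=n)
information = 0.6 * g + 0.4 * comp_knowledge_group + 0.69 * rng.normal(size=n)

X = np.column_stack([matrices, series, vocabulary, information])
print(np.round(np.corrcoef(X, rowvar=False), 2))
# g alone produces the correlations *between* clusters; the group factors
# add extra correlation only *within* a cluster.
```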
What's the significance of all this?
Since
g in a hierarchical model and in a bi-factor model is created out of different
sets of relations between variables, the general intelligence score derived by each model will usually be
different. One reason for this could be the composite score extremity effect (though I'm not sure; this is only my guess, and I haven't found a basis for it in the literature yet).
Suppose a child has a low score, say 6, in each subtest. Because of the composite score extremity
effect, his broad ability scores will be even lower – say 5. In a hierarchical model, since general intelligence
is formed out of the broad abilities, the composite score extremity effect will
work here as well. The child's g score will be even lower than the broad
ability scores, say 4. In a bi-factor model, the composite score extremity effect will work separately on g and on the broad abilities. When a child scores 6 on all subtests, his g score will be lower than 6. The question is how low it will be, and whether it will be lower in a bi-factor or in a hierarchical model.
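A back-of-the-envelope check of the extremity effect, using the classic formula for the z-score of an equally weighted composite (my own sketch; the scaled-score metric of mean 10 and SD 3, and the average subtest intercorrelation of .5, are assumptions for illustration):

```python
import numpy as np

def composite_z(subtest_zs, mean_r):
    """z-score of an equally weighted composite of k subtests whose
    average intercorrelation is mean_r."""
    k = len(subtest_zs)
    return sum(subtest_zs) / np.sqrt(k + k * (k - 1) * mean_r)

# A scaled score of 6 (mean 10, SD 3) is about 1.33 SD below the mean.
z = (6 - 10) / 3
print(round(composite_z([z] * 4, mean_r=0.5), 2))  # four subtests -> about -1.69
print(round(composite_z([z] * 2, mean_r=0.5), 2))  # two subtests  -> about -1.54
```

The effect grows with the number of subtests combined and shrinks as the subtests become more highly intercorrelated; whether it ends up stronger under a bi-factor or a hierarchical scoring scheme is exactly the open question above.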
The
broad ability composites will also be different in the two models. In a bi-factor model, a broad ability reflects the common essence of all the subtests that comprise it, after accounting for g. In a hierarchical model, a broad ability reflects the common essence of the subtests that comprise it; here g is optional and is not always formed. Only if there are enough broad abilities with
enough common variance can g be formed.
If g is formed in the model, each broad ability can be said to measure
both g and something unique.
Does this mean that in a bi-factor model the broad
abilities are "cleaner" – reflecting more of the unique thing they
measure and less of g? Does this mean that in a bi-factor model the broad abilities are more independent of
each other? That there is less
correlation between them? That each broad ability in a bi-factor model reflects
a skill that is more unique than in a hierarchical model?
Why is this interesting? What are the consequences of the differences
between the models? Is CHC a
hierarchical model of intelligence or a bi-factor model of intelligence?
Prof. McGill and
his colleagues are attempting to prove that after accounting for g, broad abilities
explain only a very small amount of incremental variance. For example, McGill found that the general
ability score explains 67% of the variance in reading comprehension at the age
of 17; broad abilities explain only 10% of the variance in reading
comprehension at this age. Given these
results, McGill suggests that "a more
circumspect appraisal of the importance of CHC dimensions in relationship to
the development of reading skills may be needed in the professional literature"
(McGill, 2017).
McGill and Busse (2015)
found similar results: they re-analyzed data from 6- to 19-year-old children in the norming sample of the WJ III. The children in this analysis were also given the WJ III achievement battery. McGill and Busse
found that general intelligence explained between 29% (in math calculations)
and 56% (in reading comprehension) of the variance in achievement. Broad abilities explained between 2% (in
mathematical reasoning) and 23% (in oral expression) of the incremental
variance, over and above g. It was comprehension-knowledge that predicted 23% of the variance in oral expression (but oral expression tests are also measures of comprehension-knowledge…). The other broad abilities did not explain
more than 5% of the variance in reading/writing/math over and above general
intelligence. The contribution of broad
abilities was significant but very small.
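The kind of analysis McGill and Busse report can be sketched as a blockwise (hierarchical) regression: fit achievement on g alone, then add the broad-ability clusters and look at the change in R². Below is a minimal sketch with scikit-learn, using simulated data and made-up predictor names as stand-ins for the GIA-E and the clusters, not the actual WJ III norming data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 2000

# Simulated predictors: a stand-in for g and two broad-ability clusters
# that are strongly saturated with g.
g = rng.normal(size=n)
gf = 0.7 * g + 0.71 * rng.normal(size=n)  # "fluid" cluster
gc = 0.7 * g + 0.71 * rng.normal(size=n)  # "comprehension-knowledge" cluster
reading = 0.5 * g + 0.3 * gc + 0.8 * rng.normal(size=n)

def r2(predictors, y):
    """R-squared of an ordinary least-squares fit on the given predictors."""
    X = np.column_stack(predictors)
    return LinearRegression().fit(X, y).score(X, y)

r2_g = r2([g], reading)             # block 1: g alone
r2_full = r2([g, gf, gc], reading)  # block 2: g plus the broad clusters
print(f"g alone: {r2_g:.2f}, increment for broad abilities: {r2_full - r2_g:.2f}")
```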
However, McGill and his colleagues conducted these studies from a bi-factor orientation, and thus entered g into the analysis first.
McGill and Busse (2015) write that reverse entry of the independent variables (entering the broad ability clusters first) would result in the clusters accounting for approximately the same proportions of variance that were attributed to g, so that g would provide little incremental prediction. Taken alone, this would suggest that order of entry arbitrarily determines whether scores such as g mean everything or nothing. But they argue that "order of entry is not an arbitrary
process and must be determined a priori according to expected theoretical
relationships between the variables and causal priority. Contemporary
intelligence theory (e.g., CHC) and the WJ III COG structural model support
entering the GIA-E before the clusters because the cluster scores are both
theoretically and statistically subordinate to the GIA-E" (GIA-E is the general intelligence
score).
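Their concession about reverse entry is easy to see with a small, self-contained numerical example (the correlations below are invented; they only need to be plausible). With two highly correlated predictors, the large block of variance they share with the criterion is credited to whichever predictor enters the regression first, and the second predictor is left with a small increment either way:

```python
import numpy as np

# Two standardized predictors, think "g" and a broad-ability cluster,
# correlating .80 with each other and .60 / .55 with the criterion.
R = np.array([[1.0, 0.8],
              [0.8, 1.0]])
r_y = np.array([0.6, 0.55])

r2_full = r_y @ np.linalg.solve(R, r_y)  # R^2 with both predictors entered
print(f"both predictors: {r2_full:.2f}")
print(f"g first:     {0.6 ** 2:.2f}, then increment {r2_full - 0.6 ** 2:.2f}")
print(f"broad first: {0.55 ** 2:.2f}, then increment {r2_full - 0.55 ** 2:.2f}")
# The large shared portion goes to whichever predictor is entered first;
# only the second predictor's increment looks "small".
```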
I think they may be mistaken on two counts:
The first mistake is that if indeed g is hierarchically
higher than broad abilities, we are in the domain of a hierarchical model. In a hierarchical model it is appropriate to enter g into the analysis last, not first.
The second mistake: when g is entered into the analysis first, one
works within a bi-factor framework (not within a hierarchical one). Beaujean (2015) as well as Benson et al. (2018) think that Carroll's model is a bi-factor model, not a hierarchical one. Since Carroll's model is one of the bases for the CHC model, they conclude that the CHC model is a bi-factor model. But CHC is an integration of the models of Carroll and Cattell-Horn, and in the Cattell-Horn model there is no g at all! The Cattell-Horn model comprises broad and narrow abilities and is a hierarchical model. Thus, there are good reasons to think that the CHC model is hierarchical and not bi-factorial. If it is hierarchical, it
should be built "bottom up". In
a hierarchical model, broad abilities precede g, and they should be entered
into the analysis first.
When researchers take a hierarchical
model approach, they see that broad abilities do explain substantial variance
in achievement, as found, for example, by McGrew and Wendling (2010).
Beaujean, A. A. (2015). John Carroll's views on intelligence: Bi-factor vs. higher-order models. Journal of Intelligence, 3(4), 121-136. http://www.mdpi.com/2079-3200/3/4/121/htm
Benson, N. F., Beaujean, A. A., McGill, R. J., & Dombrowski, S. C. (2018). Revisiting Carroll's survey of factor-analytic studies: Implications for the clinical assessment of intelligence. Psychological Assessment.
McGill, R. J. (2017). (Re)Examining relations between CHC broad and narrow cognitive abilities and reading achievement. Journal of Educational and Developmental Psychology, 7(1), 265. http://www.ccsenet.org/journal/index.php/jedp/article/viewFile/66066/36510
McGill, R. J., & Busse, R. T. (2015). Incremental validity of the WJ III COG: Limited predictive effects beyond the GIA-E. School Psychology Quarterly, 30(3). https://pdfs.semanticscholar.org/f5b5/d70077a1b7747a31bbcd5fb7b7dfcc38c2a3.pdf
McGrew, K. S., & Wendling, B. J. (2010). Cattell-Horn-Carroll cognitive-achievement relations: What we have learned from the past 20 years of research. Psychology in the Schools, 47(7), 651-675.