An Intelligence Quotient or IQ is a score derived from one of several different standardized tests attempting to measure intelligence. The term "IQ," a calque of the German Intelligenz-Quotient, was coined by the German psychologist William Stern in 1912 as a proposed method of scoring early modern intelligence tests for children, such as those developed by Alfred Binet and Theodore Simon in the early 20th century.[1] Although the term "IQ" is still in common use, the scoring of modern IQ tests such as the Wechsler Adult Intelligence Scale is now based on a projection of the subject's measured rank on the Gaussian bell curve with a center value (average IQ) of 100 and a standard deviation of 15, although different tests may have different standard deviations.
IQ scores have been shown to be associated with such factors as morbidity and mortality,[2] parental social status,[3] and, to a substantial degree, parental IQ. While its inheritance has been investigated for nearly a century, controversy remains over how much of IQ is heritable and over the mechanisms of that inheritance.[4][5]
IQ scores are used in many contexts: as predictors of educational achievement or special needs, by social scientists who study the distribution of IQ scores in populations and the relationships between IQ score and other variables, and as predictors of job performance and income.[6][7][8][9][10][11]
The average IQ scores for many populations have been rising at an average rate of three points per decade since the early 20th century with most of the increase in the lower half of the IQ range: a phenomenon called the Flynn effect. It is disputed whether these changes in scores reflect real changes in intellectual abilities, or merely methodological problems with past or present testing.
In 1905 the French psychologist Alfred Binet published the first modern intelligence test, the Binet-Simon intelligence scale. He had been commissioned by the French government to find a method of differentiating between children who were intellectually normal and those who were inferior; his principal goal was to identify students who needed special help in coping with the school curriculum. The test had children perform tasks such as following commands, copying patterns, naming objects, and putting things in order or arranging them properly. Along with his collaborator Theodore Simon, Binet published revisions of his intelligence scale in 1908 and 1911, the last appearing just before his untimely death at the age of 54.
In 1912, the German psychologist William Stern coined the abbreviation "I.Q.," a translation of the German Intelligenz-Quotient ("intelligence quotient"), proposing that an individual's intelligence level be measured as a quotient of their estimated "mental age" and their chronological age. A further refinement of the Binet-Simon scale was published in 1916 by Lewis M. Terman, from Stanford University, who incorporated Stern's proposal, and this Stanford-Binet Intelligence Scale formed the basis for one of the modern intelligence tests that remains in common use.
At first, IQ was calculated as a ratio with the formula

\[ \mathrm{IQ} = \frac{\text{mental age}}{\text{chronological age}} \times 100. \]
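For instance (a hypothetical case, purely for illustration), a ten-year-old who performed at the level typical of twelve-year-olds would score

\[ \mathrm{IQ} = \frac{12}{10} \times 100 = 120. \]

As noted below, such age-based quotients only make sense for children, since mental age does not continue to rise with chronological age in adults.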
H. H. Goddard, director of research at the Vineland Training School in New Jersey, saw the Binet test as an ideal way to screen students for his school. He translated Binet's work into English and advocated a more general application of the Simon-Binet Scale. He classified people as normal, imbecile, or idiot: imbeciles could develop only to a mental age of three to seven years, while idiots could not progress beyond a three-year-old level. Goddard coined a new term, "morons," for people who fell between the normal and imbecile levels. Unlike Binet, Goddard considered intelligence a single, fixed, and inborn entity that could be measured.
While Goddard extolled the value and uses of the single IQ score, Lewis M. Terman, who also believed that intelligence was hereditary and fixed, worked on revising the Simon-Binet Scale. His final product, published in 1916 as the Stanford Revision of the Binet-Simon Scale of Intelligence (also known as the Stanford-Binet), became the standard intelligence test in the United States for the next several decades. Terman convinced American educators of the need for universal intelligence testing and of the efficiency it could bring to school programming; within a few years, the Simon-Binet Scale, originally designed to identify children requiring special instructional attention, was transformed into an integral, far-reaching component of the American educational structure. Through Goddard's and Terman's efforts, the notion that intelligence tests were accurate, scientific, and valuable tools for bringing efficiency to the schools elevated the IQ score to an almost exalted position as a primary, definitive, and permanent representation of the quality of an individual. Intelligence testing thus became entrenched in the schools over the following decades.
Few people realize that the tests used today are the end result of a historical process with origins in eugenics. Many of the founding fathers of the modern testing industry, including Goddard, Terman and Carl Brigham (the developer of the Scholastic Aptitude Test), advocated eugenics and saw testing as one way of achieving eugenicist aims. Goddard's belief in the innateness and unalterability of intelligence levels, for example, was so firm that he argued for the reconstruction of society along the lines dictated by IQ scores:
If mental level plays anything like the role it seems to, and if in each human being it is the fixed quantity that many believe it is, then it is no useless speculation that tries to see what would happen if society were organized so as to recognize and make use of the doctrine of mental levels… It is quite possible to restate practically all of our social problems in terms of mental level… Testing intelligence is no longer an experiment or of doubted value. It is fast becoming an exact science… Greater efficiency, we are always working for. Can these new facts be used to increase our efficiency? No question! We only await the Human Engineer who will undertake the work.
As a result of his views on intelligence and society, Goddard lobbied for restrictive immigration laws. His "discovery" that all immigrants except those from Northern Europe were of "surprisingly low intelligence" helped motivate the tight immigration laws enacted in the 1920s. His testing led him to conclude, for example, that 87 percent of Russian immigrants were morons; this conclusion took no account of the fact that the test was given in English, with questions based on American cultural assumptions, to people who could barely speak English, if at all. Vast numbers of immigrants were deported in 1913 and 1914 on the basis of this test.
In 1939 David Wechsler published the first intelligence test explicitly designed for an adult population, the Wechsler Adult Intelligence Scale, or WAIS. Subsequent to the publication of the WAIS, Wechsler extended his scale for younger ages, creating the Wechsler Intelligence Scale for Children, or WISC. The Wechsler scales contained separate subscores for verbal and performance IQ and were thus less dependent on overall verbal ability than early versions of the Stanford-Binet scale; the WAIS was also the first intelligence scale to base scores on a standardized normal distribution rather than on an age-based quotient. Since age-based quotients worked only for children, they were replaced by a projection of the measured rank onto the Gaussian bell curve, using an average IQ of 100 as the center value and a standard deviation of 15 (or occasionally 16 or 24) points.
Thus, the modern IQ score is a mathematical transformation of a raw score on an IQ test, based on the rank of that score in a normalization sample.[12] Modern scores are sometimes referred to as "deviation IQs," while scores produced by the older age-specific method are referred to as "ratio IQs."
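As an illustration of this rank-based transformation (a minimal sketch of the arithmetic, not the scoring procedure of any particular test), a percentile rank in the normalization sample can be mapped to a deviation IQ with mean 100 and standard deviation 15 as follows:

```python
from statistics import NormalDist

def deviation_iq(percentile: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Map a percentile rank (strictly between 0 and 1) in the normalization
    sample onto a normal curve with the given mean and standard deviation."""
    return NormalDist(mu=mean, sigma=sd).inv_cdf(percentile)

# A test-taker who outscores 98% of the normalization sample:
print(round(deviation_iq(0.98)))  # ~131, about two standard deviations above the mean
```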
The two methodologies yield similar results near the middle of the bell curve, but the older ratio IQs yielded far higher scores for the intellectually gifted. For example, Marilyn vos Savant, who appeared in the Guinness Book of World Records, obtained a ratio IQ of 228. While this score could make sense using Binet's formula (and even then, only for a child), on the Gaussian curve model it would be an exceptional 7.9 standard deviations above the mean and hence virtually impossible in a population with a normal IQ distribution (see normal distribution). In addition, IQ tests like the Wechsler were not intended to discriminate reliably much beyond IQ 130, as they simply do not contain enough exceptionally difficult items.[1]
Since the publication of the WAIS, almost all intelligence scales have adopted the normal distribution method of scoring. The use of the normal distribution scoring method makes the term "intelligence quotient" an inaccurate description of the intelligence measurement, but "I.Q." still enjoys colloquial usage, and is used to describe all of the intelligence scales currently in use.
The role of genes and environment (nature and nurture) in determining IQ is reviewed in Plomin et al. (2001, 2003).[13] Until recently heritability was mostly studied in children. Various studies find the heritability of IQ between 0.4 and 0.8 in the United States;[14][15][16] that is, depending on the study, a little less than half to substantially more than half of the variation in IQ among the children studied was due to variation in their genes. The remainder was thus due to environmental variation and measurement error. A heritability in the range of 0.4 to 0.8 implies that IQ is "substantially" heritable.
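These figures refer to the standard behavior-genetic decomposition of variance (the conventional additive model; the cited studies fit more elaborate variants), in which

\[ h^2 + c^2 + e^2 = 1, \]

where \(h^2\) is the proportion of variance attributable to genes, \(c^2\) to the shared (family) environment, and \(e^2\) to the nonshared environment plus measurement error. A heritability of \(h^2 = 0.6\), for instance, leaves 40% of the variance to the two environmental terms and error.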
The heritability of intelligence was first investigated by the English scientist Francis Galton, who in his 1869 book Hereditary Genius examined how ability clustered in families, observing that the close relatives of eminent men were far more likely to be eminent themselves, thus sparking discussion about the possibility of the heritability of intelligence.
It is reasonable to expect that genetic influences on traits like IQ should become less important as one accumulates experience with age. Surprisingly, the opposite occurs: heritability measures are as low as 20% in infancy, around 40% in middle childhood, and as high as 80% in adulthood.[13] The American Psychological Association's 1995 task force on "Intelligence: Knowns and Unknowns" concluded that within the white population the heritability of IQ is "around .75." The Minnesota Study of Twins Reared Apart, a multiyear study of 100 sets of reared-apart twins started in 1979, concluded that about 70% of the variance in IQ was associated with genetic variation. Some of the correlation of IQs of twins may be a result of the effect of the maternal environment before birth, shedding some light on why the IQ correlation between twins reared apart is so robust.[4]
Environmental factors play a role in determining IQ. Proper childhood nutrition appears critical for cognitive development; malnutrition can lower IQ.
A recent study found that breastfeeding conferred about 7 additional IQ points on children with the "C" version of the FADS2 gene, while those with the "G" version saw no advantage.[20][21]
Musical training in childhood also increases IQ.[22] Recent studies have shown that training in using one's working memory may increase IQ.[23][24]
Research on the role of the environment in children's intellectual development has demonstrated that a stimulating environment can dramatically increase IQ, whereas a deprived environment can lead to a decrease in IQ. A few such research studies are described below. They suggest that IQ is anything but a fixed quantity.
A particularly interesting project on early intellectual stimulation involved twenty-five children in an orphanage. These children were seriously environmentally deprived because the orphanage was crowded and understaffed. Thirteen babies with an average age of nineteen months were transferred to the Glenwood State School, an institution for mentally retarded adult women, and each baby was put in the personal care of one of these women. Harold Skeels, who conducted the experiment, deliberately chose the most deficient of the orphans to be placed in the Glenwood School. Their average IQ was 64, while the average IQ of the twelve who stayed behind in the orphanage was 87.
In the Glenwood State School the children were placed in open, active wards with the older and relatively bright women. Their substitute mothers overwhelmed them with love and cuddling. Toys were available, they were taken on outings and they were talked to a lot. The women were taught how to stimulate the babies intellectually and how to elicit language from them.
After eighteen months, the dramatic findings were that the children who had been placed with substitute mothers, and had therefore received additional stimulation, on average showed an increase of 29 IQ points. A follow-up study was conducted two and a half years later. Eleven of the thirteen children originally transferred to the Glenwood home had been adopted and their average IQ was now 101. The two children who had not been adopted were reinstitutionalized and lost their initial gain. The control group, the twelve children who had not been transferred to Glenwood, had remained in institution wards and now had an average IQ of 66 (an average decrease of 21 points). Although the value of IQ tests is grossly exaggerated today, this astounding difference between these two groups is hard to ignore.
More telling than the increase or decrease in IQ, however, is the difference in the quality of life these two groups enjoyed. When these children reached young adulthood, another follow-up study brought the following to light: “The experimental group had become productive, functioning adults, while the control group, for the most part, had been institutionalized as mentally retarded.”
In the late 1960s, under the supervision of Rick Heber of the University of Wisconsin, a project was begun to study the effects of intellectual stimulation on children from deprived environments. In order to find a “deprived environment” from which to draw appropriate subjects for the study, Heber and his colleagues examined the statistics of different districts within the city of Milwaukee. One district in particular stood out. The residents of this district had the lowest median income and lowest level of education to be found in the city. This district also had the highest population density and rate of unemployment of any area of Milwaukee. There was one more statistic that really attracted Heber’s attention: Although this district contained only 3 percent of the city’s population, it accounted for 33 percent of the children in Milwaukee who had been labelled “mentally retarded”.
At the beginning of the project, Heber selected forty newborns from the depressed area of Milwaukee he had chosen. The mothers of the infants selected all had IQs below 80. As it turned out, all of the children in the study were black, and in many cases the fathers were absent. The forty newborns were randomly assigned, 20 to an experimental group and 20 to a control group.
Both the experimental group and the control group were tested an equal number of times throughout the project. An independent testing service was used in order to eliminate possible biases on the part of the project members. In terms of physical or medical variables, there were no observable differences between the two groups.
The experimental group entered a special program. Mothers of the experimental group children received education, vocational rehabilitation, and training in homemaking and child care. The children themselves received personalized enrichment in their home environments for the first three months of their lives, and then their training continued at a special center, five days a week, seven hours a day, until they were ready to begin first grade. The program at the center focused upon developing the language and cognitive skills of the experimental group children. The control group did not receive special education or home-based intervention and enrichment.
By the age of six, all the children in the experimental group were dramatically superior to the children in the control group in terms of IQ. This was true on all test measures, especially those dealing with language skills or problem solving. The experimental group had an average IQ of 120.7, compared with the control group's 87.2.
At the age of six the children left the center to attend the local school. By the time both groups were ten years old and in fifth grade, the IQ scores of the children in the experimental group had decreased to an average of 105 while the control group’s average score held steady at about 85. One possible reason for the decline is that schooling was geared for the slower students. The brighter children were not given materials suitable for their abilities and they began to fall back. Also, while the experimental children were in the special project center for the first six years they ate well, receiving three hot, balanced meals a day. Once they left the center and began to attend the local school, many reported going to classes hungry, without breakfast or a hot lunch.
In the developed world, some studies of personality traits show that, contrary to expectations, environmental effects can actually cause unrelated children raised in the same family ("adoptive siblings") to be as different as children raised in different families.[25][13] There are some family effects on the IQ of children, accounting for up to a quarter of the variance; however, by adulthood this correlation approaches zero.[26] For IQ, adoption studies show that, after adolescence, adoptive siblings are no more similar in IQ than strangers (IQ correlation near zero), while full siblings show an IQ correlation of 0.6. Twin studies reinforce this pattern: monozygotic (identical) twins raised separately are highly similar in IQ (0.86), more so than dizygotic (fraternal) twins raised together (0.6) and much more than adoptive siblings (~0.0).[13]

The American Psychological Association's 1995 report Intelligence: Knowns and Unknowns[15] states that there is no doubt that normal child development requires a certain minimum level of responsible care. Severely deprived, neglectful, or abusive environments must have negative effects on a great many aspects of development, including intellectual aspects. Beyond that minimum, however, the role of family experience is in serious dispute. Do differences between children's family environments (within the normal range) produce differences in their intelligence test performance? The problem here is to disentangle causation from correlation. There is no doubt that such variables as resources of the home and parents' use of language are correlated with children's IQ scores, but such correlations may be mediated by genetic as well as (or instead of) environmental factors. But how much of that variance in IQ results from differences between families, as contrasted with the varying experiences of different children in the same family? Recent twin and adoption studies suggest that while the effect of the family environment is substantial in early childhood, it becomes quite small by late adolescence. These findings suggest that differences in the life styles of families, whatever their importance may be for many aspects of children's lives, make little long-term difference for the skills measured by intelligence tests.

The report also noted: "We should note, however, that low-income and non-white families are poorly represented in existing adoption studies as well as in most twin samples. Thus it is not yet clear whether these studies apply to the population as a whole. It remains possible that, across the full range of income and ethnicity, between-family differences have more lasting consequences for psychometric intelligence."[15]
A study of French children adopted between the ages of 4 and 6 shows a possible continuing interplay of nature and nurture. The children came from poor backgrounds, with IQs that initially averaged 77, putting them near retardation. Nine years after adoption, they retook the IQ tests, and all of them did better. The amount they improved was directly related to the adopting family's status: "Children adopted by farmers and laborers had average I.Q. scores of 85.5; those placed with middle-class families had average scores of 92. The average I.Q. scores of youngsters placed in well-to-do homes climbed more than 20 points, to 98."[27] On the other hand, the degree to which these increases persisted into adulthood is not clear from the study.
Stoolmiller (1999)[28] found that the restriction of range in family environments that goes with adoption (adoptive families tend to be more similar in, for example, socio-economic status than the general population) means that the role of the shared family environment may have been underestimated in previous studies. Corrections for range restriction applied to adoption studies indicate that socio-economic status could account for as much as 50% of the variance in IQ.[28] However, the effect of restriction of range on IQ for adoption studies was examined by Matt McGue and colleagues, who wrote that "restriction in range in parent disinhibitory psychopathology and family socio-economic status had no effect on adoptive-sibling correlations [in] IQ".[29]
Eric Turkheimer and colleagues (2003),[30] working with twin rather than adoption data, included impoverished US families in their sample. The results demonstrated that the proportions of IQ variance attributable to genes and environment vary nonlinearly with socio-economic status. The models suggest that in impoverished families 60% of the variance in IQ is accounted for by the shared family environment and the contribution of genes is close to zero, while in affluent families the result is almost exactly the reverse.[31] They suggest that the role of shared environmental factors may have been underestimated in older studies, which often studied only affluent middle-class families.[32]
A meta-analysis, by Devlin and colleagues in Nature (1997),[4] of 212 previous studies evaluated an alternative model for environmental influence and found that it fits the data better than the 'family-environments' model commonly used. The shared maternal (fetal) environment effects, often assumed to be negligible, account for 20% of covariance between twins and 5% between siblings, and the effects of genes are correspondingly reduced, with two measures of heritability being less than 50%.
Bouchard and McGue reviewed the literature in 2003, arguing that Devlin's conclusions about the magnitude of heritability are not substantially different from previous reports, and that the conclusions regarding prenatal effects stand in contradiction to many previous reports.[33] They write that:
Chipuer et al. and Loehlin conclude that the postnatal rather than the prenatal environment is most important. The Devlin et al conclusion that the prenatal environment contributes to twin IQ similarity is especially remarkable given the existence of an extensive empirical literature on prenatal effects. Price (1950), in a comprehensive review published over 50 years ago, argued that almost all MZ twin prenatal effects produced differences rather than similarities. As of 1950 the literature on the topic was so large that the entire bibliography was not published. It was finally published in 1978 with an additional 260 references. At that time Price reiterated his earlier conclusion. Research subsequent to the 1978 review largely reinforces Price’s hypothesis.
Dickens and Flynn[34] postulate that the arguments regarding the disappearance of the shared family environment should apply equally well to groups separated in time, yet this is contradicted by the Flynn effect: changes there have happened too quickly to be explained by genetic adaptation. This paradox can be resolved by observing that the measure "heritability" includes both a direct effect of the genotype on IQ and indirect effects whereby the genotype changes the environment, which in turn affects IQ. That is, those with a higher IQ tend to seek out stimulating environments that further increase IQ. The direct effect may initially have been very small, but feedback loops can create large differences in IQ. In their model, an environmental stimulus can have a very large effect on IQ, even in adults, but this effect also decays over time unless the stimulus continues (the model could be adapted to include possible factors, like nutrition in early childhood, that may cause permanent effects). The Flynn effect can be explained by a generally more stimulating environment for all people. The authors suggest that programs aiming to increase IQ would be most likely to produce long-term IQ gains if they taught children how to replicate, outside the program, the kinds of cognitively demanding experiences that produce IQ gains while they are in the program, and motivated them to persist in that replication long after they have left the program.[34][35]
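As a rough illustration of the feedback dynamics Dickens and Flynn describe (a minimal sketch with invented parameters, not their published model), the following simulation lets the environment drift toward a level matched to current IQ, so that a small genetic advantage is amplified while a purely environmental boost decays once the stimulus ends:

```python
# Illustrative feedback-loop dynamics in the spirit of Dickens and Flynn.
# All parameters are invented for illustration; this is not their model.

def simulate(genotype: float, stimulus_years: int, years: int = 20) -> list:
    iq, environment = 100.0, 0.0
    history = []
    for year in range(years):
        boost = 5.0 if year < stimulus_years else 0.0  # external enrichment program
        # Environment quality drifts toward a level matched to current IQ
        # (reciprocal causation: higher IQ -> more stimulating environment).
        environment += 0.5 * ((iq - 100.0) * 0.8 - environment)
        # IQ reflects a small direct genetic effect plus environmental effects.
        iq = 100.0 + genotype + environment + boost
        history.append(round(iq, 1))
    return history

# A modest 2-point direct genetic effect snowballs to about 10 points...
print(simulate(genotype=2.0, stimulus_years=0))
# ...while a purely environmental boost fades after the program ends.
print(simulate(genotype=0.0, stimulus_years=5))
```

In this toy model, the variation observed at equilibrium greatly overstates the direct genetic effect, which is exactly the multiplier mechanism the authors invoke.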
In 2004, Richard Haier, professor of psychology in the Department of Pediatrics and colleagues at University of California, Irvine and the University of New Mexico used MRI to obtain structural images of the brain in 47 normal adults who also took standard IQ tests. The study demonstrated that general human intelligence appears to be based on the volume and location of gray matter tissue in the brain. Regional distribution of gray matter in humans is highly heritable. The study also demonstrated that, of the brain's gray matter, only about 6 percent appeared to be related to IQ.[36]
Many different sources of information have converged on the view that the frontal lobes are critical for fluid intelligence. Patients with damage to the frontal lobe are impaired on fluid intelligence tests (Duncan et al. 1995). The volumes of frontal grey matter (Thompson et al. 2001) and white matter (Schoenemann et al. 2005) have also been associated with general intelligence. In addition, recent neuroimaging studies have narrowed this association to the lateral prefrontal cortex. Duncan and colleagues (2000) showed, using positron emission tomography, that problem-solving tasks that correlated more highly with IQ also activate the lateral prefrontal cortex. More recently, Gray and colleagues (2003) used functional magnetic resonance imaging (fMRI) to show that individuals who were more adept at resisting distraction on a demanding working memory task had both a higher IQ and increased prefrontal activity. For an extensive review of this topic, see Gray and Thompson (2004).[37]
A study involving 307 children (ages six to nineteen), which measured the size of brain structures using magnetic resonance imaging (MRI) along with verbal and non-verbal abilities, has been conducted (Shaw et al. 2006). The study indicated a relationship between IQ and the structure of the cortex, the characteristic change being that the group with superior IQ scores starts with a thinner cortex at an early age, which then becomes thicker than average by the late teens.[38]
There is "a highly significant association" between the CHRM2 gene and intelligence according to a 2006 Dutch family study. The study concluded that there was an association between the CHRM2 gene on chromosome 7 and Performance IQ, as measured by the Wechsler Adult Intelligence Scale-Revised. The Dutch family study used a sample of 667 individuals from 304 families.[39] A similar association was found independently in the Minnesota Twin and Family Study (Comings et al. 2003) and by the Department of Psychiatry at the Washington University.[40]
Significant injuries isolated to one side of the brain, especially those occurring at a young age, may not significantly affect IQ.[41]
Studies reach conflicting conclusions regarding the controversial idea that brain size correlates positively with IQ. Jensen and Reed claim no direct correlation exists in nonpathological subjects.[42] A more recent meta-analysis suggests otherwise.[43]
An alternative approach has sought to link differences in neural plasticity with intelligence,[44] and this view has recently received some empirical support.[45]
Since the early twentieth century, IQ scores have increased at an average rate of around three IQ points per decade in most parts of the world.[46] This phenomenon has been named the Flynn effect (also the "Lynn-Flynn effect"), after Richard Lynn and James R. Flynn. Attempted explanations have included improved nutrition, a trend towards smaller families, better education, greater environmental complexity, and heterosis.[47] Tests are therefore renormalized periodically to keep the mean score at 100, as with the WISC-R (1974), WISC-III (1991) and WISC-IV (2003). This renormalization corrects for the drift over time and allows scores from different eras to be compared.
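As a back-of-the-envelope illustration (assuming a constant drift of three points per decade, which real renormings only approximate), a score earned against an old norm can be projected onto a newer one:

```python
# Rough Flynn-effect adjustment between norming years, assuming a constant
# drift of ~3 IQ points per decade (an idealization of the observed trend).

def flynn_adjust(score: float, norm_year: int, target_year: int,
                 points_per_decade: float = 3.0) -> float:
    drift = points_per_decade * (target_year - norm_year) / 10.0
    # Older norms are "easier," so the same raw performance earns a
    # lower score against a more recent normalization sample.
    return score - drift

# A 100 on 1974 (WISC-R) norms corresponds to roughly 91 on 2003 (WISC-IV) norms.
print(flynn_adjust(100, norm_year=1974, target_year=2003))  # 91.3
```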
Some researchers argue that the Flynn effect may have ended in some developed nations starting in the mid 1990s, namely in Denmark[48] and in Norway.[49]
Although intelligence was once thought immutable, recent research suggests that certain mental activities can change the brain's raw ability to process information, leading to the conclusion that intelligence can be altered or changed over time. Studies of the neuroscience of animals indicate that challenging activities can produce changes in gene expression patterns in the brain (see the report of degus trained to use rakes,[50] and Iriki's earlier research with macaque monkeys indicating brain changes). A study published in April 2008 by a team from the Universities of Michigan and Bern demonstrated transfer to fluid intelligence from specifically designed working-memory training.[51]
Further research will be needed to determine the extent and duration of the transfer.
Among the most controversial issues related to the study of intelligence is the observation that intelligence measures such as IQ scores vary between populations. While there is little scholarly debate about the existence of some of these differences, the reasons remain highly controversial both within academia and in the public sphere.
Persons with a higher IQ have generally lower adult morbidity and mortality. Post-Traumatic Stress Disorder,[7] severe depression,[52][9] and schizophrenia[53][54] are less prevalent in higher IQ bands.
A study of 11,282 individuals in Scotland who took intelligence tests at ages 7, 9 and 11 in the 1950s and 1960s, found an "inverse linear association" between childhood IQ scores and hospital admissions for injuries in adulthood. The association between childhood IQ and the risk of later injury remained even after accounting for factors such as the child's socioeconomic background.[55] Research in Scotland has also shown that a 15-point lower IQ meant people had a fifth less chance of seeing their 76th birthday, while those with a 30-point disadvantage were 37% less likely than those with a higher IQ to live that long.[10]
A decrease in IQ has also been shown to be an early predictor of late-onset Alzheimer's disease and other forms of dementia. In a 2004 study, Cervilla and colleagues showed that tests of cognitive ability provide useful predictive information up to a decade before the onset of dementia.[56] However, when diagnosing individuals with a higher level of cognitive ability (in this study, those with IQs of 120 or more),[57] patients should not be assessed against the standard norm but against an adjusted high-IQ norm that measures changes relative to the individual's own higher ability level. In 2000, Whalley and colleagues published a paper in the journal Neurology that examined links between childhood mental ability and late-onset dementia. The study showed that mental ability scores were significantly lower in children who eventually developed late-onset dementia than in the other children tested.[6]
Several factors can lead to significant cognitive impairment, particularly if they occur during pregnancy and childhood when the brain is growing and the blood-brain barrier is less effective. Such impairment may sometimes be permanent, or may sometimes be partially or wholly compensated for by later growth. Several harmful factors may also combine, possibly causing greater impairment.
Developed nations have implemented several health policies regarding nutrients and toxins known to influence cognitive function. These include laws requiring fortification of certain food products and laws establishing safe levels of pollutants (e.g. lead, mercury, and organochlorides). Comprehensive policy recommendations targeting reduction of cognitive impairment in children have been proposed.[58]
In terms of the effect of one's intelligence on health, high childhood IQ correlates with one's chance of becoming a vegetarian in adulthood (Gale, C. R., "IQ in childhood and vegetarianism in adulthood: 1970 British cohort study", British Medical Journal 334 (7587): 245), and inversely correlates with the chances of smoking (Taylor, M. D., "Childhood IQ and social factors on smoking behaviour, lung function and smoking-related outcomes in adulthood: linking the Scottish Mental Survey 1932 and the Midspan studies", British Journal of Health Psychology 10 (3): 399–401), becoming obese, and having serious traumatic accidents in adulthood.
Men outperform women on average by 3-4 IQ points.[59][60] Studies illustrate consistently greater variance in the performance of men compared to that of women (i.e., men are more represented at the extremes of performance)[61], and that men and women have statistically significant differences in average scores on tests of particular abilities, which even out when the overall IQ scores are weighted.[62][63]
The 1996 Task Force investigation on Intelligence sponsored by the American Psychological Association concluded that there are significant variations in IQ across races.[15] Debate over the causes of these variations centers on the relative contributions of nature and nurture, and most scientists believe there is insufficient data to resolve the respective roles of heredity and environment. One of the most notable researchers arguing for a strong hereditary basis is Arthur Jensen. In contrast, Richard Nisbett, the long-time director of the Culture and Cognition program at the University of Michigan, argues that intelligence is a matter of environment and of biased standards that praise one type of "intelligence" (success on standardized tests) over others.
In a recent editorial in the New York Times entitled "All Brains Are the Same Color," Dr. Nisbett argues against the hypothesis that IQ differences between blacks and whites are genetic. He notes that decades of research have not supported the assertion that one of our social races in the United States (for that's really the only way to define them) is biologically inferior in terms of innate intelligence. Rather, he argues, "Whites showed better comprehension of sayings, better ability to recognize similarities and better facility with analogies — when solutions required knowledge of words and concepts that were more likely to be known to whites than to blacks. But when these kinds of reasoning were tested with words and concepts known equally well to blacks and whites, there were no differences. Within each race, prior knowledge predicted learning and reasoning, but between the races it was prior knowledge only that differed."
While IQ is sometimes treated as an end unto itself, scholarly work on IQ focuses to a large extent on IQ's validity, that is, the degree to which IQ correlates with outcomes such as job performance, social pathologies, or academic achievement. Different IQ tests differ in their validity for various outcomes. Traditionally, the correlation between IQ and an outcome has been viewed as a means of predicting performance; readers should bear in mind, however, that prediction in the social sciences is not of the same character as prediction in the hard sciences.
Validity is the correlation between score (in this case cognitive ability, as measured, typically, by a paper-and-pencil test) and outcome (in this case job performance, as measured by a range of factors including supervisor ratings, promotions, training success, and tenure), and ranges between −1.0 (the score is perfectly wrong in predicting outcome) and 1.0 (the score perfectly predicts the outcome). See validity (psychometric).
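Concretely (a minimal sketch with invented numbers, purely to fix the definition), the validity coefficient is nothing more than the Pearson correlation between the two series:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical data: cognitive test scores and supervisor performance
# ratings for eight workers (invented numbers, purely for illustration).
test_scores = [85, 92, 100, 104, 110, 115, 121, 130]
performance = [2.1, 2.8, 2.6, 3.4, 3.1, 3.9, 3.7, 4.5]

# The validity coefficient is the Pearson correlation between score and outcome.
validity = correlation(test_scores, performance)
print(f"validity = {validity:.2f}")  # falls between -1.0 and 1.0
```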
Research shows that general intelligence plays an important role in many valued life outcomes. In addition to academic success, IQ correlates to some degree with job performance (see below), socioeconomic advancement (e.g., level of education, occupation, and income), and "social pathology" (e.g., adult criminality, poverty, unemployment, dependence on welfare, children outside of marriage). Recent work has demonstrated links between general intelligence and health, longevity, and functional literacy. Correlations between g and life outcomes are pervasive, though IQ does not correlate with subjective self-reports of happiness. IQ and g correlate highly with school performance and job performance, less so with occupational prestige, moderately with income, and to a small degree with law-abiding behaviour. IQ does not explain the inheritance of economic status and wealth.
One study found a correlation of .82 between g and SAT scores.[2] Another found a correlation of .81 between g and GCSE scores.[3]
Correlations between IQ scores (general cognitive ability) and achievement test scores are reported to be .81 by Deary and colleagues, with the percentage of variance accounted for by general cognitive ability ranging "from 58.6% in Mathematics and 48% in English to 18.1% in Art and Design".[64]
The American Psychological Association's report Intelligence: Knowns and Unknowns[15] states that wherever it has been studied, children with high scores on tests of intelligence tend to learn more of what is taught in school than their lower-scoring peers. The correlation between IQ scores and grades is about .50. However, this means that they explain only 25% of the variance. Successful school learning depends on many personal characteristics other than intelligence, such as persistence, interest in school, and willingness to study.
Correlations between IQ scores and total years of education are about .55, implying that differences in psychometric intelligence account for about 30% of the outcome variance. Many occupations can only be entered through professional schools which base their admissions at least partly on test scores: the MCAT, the GMAT, the GRE, the DAT, the LSAT, etc. Individual scores on admission-related tests such as these are certainly correlated with scores on tests of intelligence. It is partly because intelligence test scores predict years of education that they also predict occupational status, and income to a smaller extent.
According to Schmidt and Hunter, "for hiring employees without previous experience in the job the most valid predictor of future performance is general mental ability."[65] The validity depends on the type of job and varies across different studies, ranging from 0.2 to 0.6.[66] However, IQ correlates closely with measured cognitive ability mainly when IQ scores are below average; this rule has many exceptions (about 30%) among people with average and higher IQ scores.[67] IQ is also related to "academic tasks" (auditory and linguistic measures, memory tasks, academic achievement levels) and much less related to tasks in which precise hand work ("motor functions") is required.[68]
A meta-analysis[66] that pooled validity results across many studies encompassing thousands of workers (32,124 for cognitive ability) reports that the validity of cognitive ability for entry-level jobs is 0.54, larger than any other measure, including job try-out (0.44), experience (0.18), interview (0.14), age (−0.01), education (0.10), and biographical inventory (0.37). This implies that, across a wide range of occupations, intelligence test performance accounts for some 29% of the variance in job performance.
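The 29% figure follows directly from squaring the validity coefficient, since the squared correlation gives the proportion of variance accounted for:

\[ r^2 = 0.54^2 \approx 0.29. \]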
According to Marley Watkins and colleagues, IQ is a causal influence on future academic achievement, whereas academic achievement does not substantially influence future IQ scores.[69] Treena Eileen Rohde and Lee Anne Thompson write that general cognitive ability but not specific ability scores predict academic achievement, with the exception that processing speed and spatial ability predict performance on the SAT math beyond the effect of general cognitive ability.[70]
The American Psychological Association's report Intelligence: Knowns and Unknowns[15] states that other individual characteristics such as interpersonal skills, aspects of personality, et cetera, are probably of equal or greater importance, but at this point we do not have equally reliable instruments to measure them.[15]
Other studies question the real-world importance of whatever is measured with IQ tests, especially for differences in accumulated wealth and general economic inequality in a nation. IQ correlates highly with school performance, but the correlations decrease the closer one gets to real-world outcomes such as job performance, and are lower still with income, where IQ explains less than one sixth of the variance. Even for school grades, other factors explain most of the variance. One study found that, controlling for IQ across the entire population, 90 to 95 percent of economic inequality would continue to exist. Another study (2002) found that wealth, race, and schooling are important to the inheritance of economic status, but that IQ is not a major contributor, and the genetic transmission of IQ is even less important.
Some researchers claim that "in economic terms it appears that the IQ score measures something with decreasing marginal value. It is important to have enough of it, but having lots and lots does not buy you that much."[71][72]
Other studies show that ability and performance for jobs are linearly related, such that at all IQ levels, an increase in IQ translates into a concomitant increase in performance.[73] Charles Murray, coauthor of The Bell Curve, found that IQ has a substantial effect on income independently of family background.[74]
The American Psychological Association's report Intelligence: Knowns and Unknowns[15] states that IQ scores account for about one-fourth of the social status variance and one-sixth of the income variance. Statistical controls for parental SES eliminate about a quarter of this predictive power. Psychometric intelligence appears as only one of a great many factors that influence social outcomes.[15]
One reason why some studies find that IQ accounts for only a sixth of the variation in income is that many studies are based on young adults, many of whom have not yet completed their education. On page 568 of The g Factor, Arthur Jensen claims that although the correlation between IQ and income averages a moderate 0.4 (one sixth, or 16%, of the variance), the relationship increases with age and peaks at middle age, when people have reached their maximum career potential. In the book A Question of Intelligence, Daniel Seligman cites an IQ-income correlation of 0.5 (25% of the variance).
A 2002 study[75] further examined the impact of non-IQ factors on income and concluded that an offspring's inherited wealth, race, and schooling are more important as factors in determining income than IQ.
In addition, IQ and its correlation to health, violent crime, gross state product, and government effectiveness are the subject of a 2006 paper in the publication Intelligence. The paper breaks down IQ averages by U.S. states using the federal government's National Assessment of Educational Progress math and reading test scores as a source.[76]
There is a correlation of -0.19 between IQ scores and number of juvenile offences in a large Danish sample; with social class controlled, the correlation dropped to -0.17. Similarly, the correlations for most "negative outcome" variables are typically smaller than 0.20, which means that test scores are associated with less than 4% of their total variance. It is important to realize that the causal links between psychometric ability and social outcomes may be indirect. Children with poor scholastic performance may feel alienated. Consequently, they may be more likely to engage in delinquent behavior, compared to other children who do well.[15]
IQ is also negatively correlated with certain diseases.
Tambs et al.[77] found that occupational status, educational attainment, and IQ are individually heritable; and further found that "genetic variance influencing educational attainment ... contributed approximately one-fourth of the genetic variance for occupational status and nearly half the genetic variance for IQ". In a sample of U.S. siblings, Rowe et al.[78] report that the inequality in education and income was predominantly due to genes, with shared environmental factors playing a subordinate role.
In the United States, certain public policies and laws regarding military service,[79] education, public benefits,[80] crime,[81] and employment incorporate an individual's IQ or similar measurements into their decisions. However, in 1971, for the purpose of minimizing employment practices that disparately impacted racial minorities, the U.S. Supreme Court banned the use of IQ tests in employment, except in very rare cases[82]. Internationally, certain public policies, such as improving nutrition and prohibiting neurotoxins, have as one of their goals raising or preventing a decline in intelligence.
Alfred Binet, a French psychologist, did not believe that IQ test scales qualified to measure intelligence. He neither invented the term "intelligence quotient" nor supported its numerical expression, arguing that the scale, properly speaking, does not permit the measurement of intelligence, because intellectual qualities are not superposable and therefore cannot be measured as linear surfaces are measured.
Binet had designed the Binet-Simon intelligence scale in order to identify students who needed special help in coping with the school curriculum. He argued that with proper remedial education programs, most students regardless of background could catch up and perform quite well in school. He did not believe that intelligence was a measurable fixed entity.
Binet also cautioned against the "brutal pessimism" of regarding an individual's intelligence as a fixed quantity that cannot be increased.
Some scientists dispute psychometrics entirely. In The Mismeasure of Man, Harvard professor and paleontologist Stephen Jay Gould argued that intelligence tests were based on faulty assumptions, and traced their history of being used as the basis for scientific racism, although he did not at any point attempt to refute intelligence tests scientifically.
He spent much of the book criticizing the concept of IQ, including a historical discussion of how the IQ tests were created and a technical discussion of his argument that g is simply a mathematical artifact. Later editions of the book included criticism of The Bell Curve.
Gould did not dispute the stability of test scores, nor the fact that they predict certain forms of achievement. He did argue, however, that to base a concept of intelligence on these test scores alone is to ignore many important aspects of mental ability.
According to Dr. C. George Boeree of Shippensburg University, intelligence is a person's capacity to (1) acquire knowledge (i.e. learn and understand), (2) apply knowledge (solve problems), and (3) engage in abstract reasoning. It is the power of one's intellect, and as such is clearly a very important aspect of one's overall well-being. Psychologists have attempted to measure it for well over a century.
Several other ways of measuring intelligence have been proposed. Daniel Schacter, Daniel Gilbert, and others have moved beyond general intelligence and IQ as the sole means to describe intelligence.[84]
The American Psychological Association's report Intelligence: Knowns and Unknowns[15] states that IQ tests are not biased as predictors of social achievement for people of African descent, since they predict future performance, such as school achievement, much as they do for people of European descent.[15]
However, IQ tests may well be biased when used in other situations. A 2005 study stated that "differential validity in prediction suggests that the WAIS-R test may contain cultural influences that reduce the validity of the WAIS-R as a measure of cognitive ability for Mexican American students,"[85] indicating a weaker positive correlation relative to sampled white students. Other recent studies have questioned the culture-fairness of IQ tests when used in South Africa.[86][87] Standard intelligence tests, such as the Stanford-Binet, are often inappropriate for children with autism or dyslexia; the alternative of using developmental or adaptive skills measures provides relatively poor measures of intelligence in autistic children, and has resulted in incorrect claims that a majority of children with autism are mentally retarded.[88]
A 2006 paper argues that mainstream contemporary test analysis does not reflect substantial recent developments in the field and "bears an uncanny resemblance to the psychometric state of the art as it existed in the 1950s."[89] It also claims that some of the most influential recent studies on group differences in intelligence, in order to show that the tests are unbiased, use outdated methodology.
Some argue that IQ scores are used as an excuse for not trying to reduce poverty or otherwise improve living standards for all. Claimed low intelligence has historically been used to justify the feudal system and unequal treatment of women (but note that many studies find identical average IQs among men and women; see sex and intelligence). In contrast, others claim that the refusal of "high-IQ elites" to take IQ seriously as a cause of inequality is itself immoral.[90]
In response to the controversy surrounding The Bell Curve, the American Psychological Association's Board of Scientific Affairs established a task force in 1995 to write a consensus statement on the state of intelligence research which could be used by all sides as a basis for discussion. The full text of the report is available through several websites.[15]
In this paper the representatives of the association regret that IQ-related works are frequently written with a view to their political consequences: "research findings were often assessed not so much on their merits or their scientific standing as on their supposed political implications".
The task force concluded that IQ scores do have high predictive validity for individual differences in school achievement. They confirm the predictive validity of IQ for adult occupational status, even when variables such as education and family background have been statistically controlled. They agree that individual differences in intelligence are substantially influenced by genetics and that both genes and environment, in complex interplay, are essential to the development of intellectual competence.
They state there is little evidence to show that childhood diet influences intelligence except in cases of severe malnutrition. The task force agrees that large differences do exist between the average IQ scores of blacks and whites, and that these differences cannot be attributed to biases in test construction. The task force suggests that explanations based on social status and cultural differences are possible, and that environmental factors have raised mean test scores in many populations. Regarding genetic causes, they noted that there is not much direct evidence on this point, but what little there is fails to support the genetic hypothesis.
The APA journal that published the statement, American Psychologist, subsequently published eleven critical responses in January 1997, several of them arguing that the report failed to examine adequately the evidence for partly-genetic explanations.
A high IQ society is an organization that limits membership to people who are within a certain high percentile of IQ test results.
Many websites and magazines use the term IQ to refer to technical or popular knowledge in a variety of subjects not related to intelligence, including sex,[91] poker,[92] and American football,[93] among a wide variety of other topics. These topics are generally not standardized and do not fit within the normal definition of intelligence.
IQ reference charts are tables suggested by psychologists to divide intelligence ranges in various categories.