There has been significant controversy in the academic community about the heritability of IQ since research on the issue began in the late nineteenth century.
The heritability of IQ increases with the child's age and reaches a plateau at 14–16 years old,[9] continuing at that level well into adulthood.[12]
Eric Turkheimer and colleagues (2003) found that for children of low socioeconomic status the heritability of IQ falls almost to zero.[18][19][20][21]
The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is not explained by the environment or random chance?"[12]
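In quantitative-genetic terms, this question corresponds to a standard variance decomposition. The following is a textbook-style sketch rather than a formula taken from any of the studies cited here, and it ignores gene-environment correlation and interaction:

```latex
% Phenotypic variance partitioned into genetic and environmental components.
% P = phenotype (here, an IQ score); G = genetic effects; E = environmental effects.
\[
\operatorname{Var}(P) = \operatorname{Var}(G) + \operatorname{Var}(E),
\qquad
H^{2} = \frac{\operatorname{Var}(G)}{\operatorname{Var}(P)}
\]
% Twin and adoption studies usually work with the ACE decomposition, splitting the
% environment into shared and non-shared components:
\[
h^{2} = \frac{\operatorname{Var}(A)}{\operatorname{Var}(A) + \operatorname{Var}(C) + \operatorname{Var}(E)}
\]
% A = additive genetic effects; C = shared family environment; E = non-shared environment and error.
```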
In 2006, David Kirp, writing in The New York Times Magazine, summarized a century's worth of research as finding that about three-quarters of I.Q. differences between individuals are attributable to heredity.[39]
Judith Rich Harris suggests that this might be due to biasing assumptions in the methodology of the classical twin and adoption studies.[15][42][44][45]
The American Psychological Association's report "Intelligence: Knowns and Unknowns" (1996) asserts the necessity of a certain minimum level of responsible care for normal child development.
Environments that are severely deprived, neglectful, or abusive negatively affect various developmental aspects, including intellectual growth.
Recent twin and adoption studies indicate that the effect of the shared family environment is significant in early childhood but diminishes substantially by late adolescence.
These findings suggest that differences in family lifestyles, while potentially important for many aspects of children's lives, have little long-term impact on the skills measured by intelligence tests.
Although parents treat their children differently, such differential treatment explains only a small amount of non-shared environmental influence.
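The shared-environment claims above come from comparing correlations for identical (MZ) and fraternal (DZ) twins. As a rough illustration of that logic only, Falconer's formulas estimate the additive-genetic (a²), shared-environment (c²), and non-shared-environment (e²) shares of variance; the twin correlations below are placeholders, not values from any study cited here:

```python
# Sketch of Falconer's formulas: decompose trait variance into additive-genetic (a2),
# shared-environment (c2), and non-shared-environment (e2) components using
# monozygotic (MZ) and dizygotic (DZ) twin correlations.
# The correlation values below are illustrative placeholders only.

def falconer(r_mz: float, r_dz: float) -> dict:
    """Return rough ACE variance-component estimates from twin correlations."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic share
    c2 = 2 * r_dz - r_mz     # shared (family) environment share
    e2 = 1 - r_mz            # non-shared environment plus measurement error
    return {"a2": round(a2, 2), "c2": round(c2, 2), "e2": round(e2, 2)}

# Hypothetical pattern consistent with the text: shared environment matters in
# early childhood but is much smaller by late adolescence.
print("early childhood: ", falconer(r_mz=0.80, r_dz=0.60))  # sizeable c2
print("late adolescence:", falconer(r_mz=0.80, r_dz=0.45))  # small c2
```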
The APA report "Intelligence: Knowns and Unknowns" (1996) also stated: "We should note, however, that low-income and non-white families are poorly represented in existing adoption studies as well as in most twin samples.
It remains possible that, across the full range of income and ethnicity, between-family differences have more lasting consequences for psychometric intelligence."[15]
A study (1999) by Capron and Duyme of French children adopted between the ages of four and six examined the influence of socioeconomic status (SES).[50]
On the other hand, the effect of restriction of range in adoptive families was examined by Matt McGue and colleagues (2007), who wrote that "restriction in range in parent disinhibitory psychopathology and family socio-economic status had no effect on adoptive-sibling correlations [in] IQ".[51]
Turkheimer and colleagues (2003) argued that the proportions of IQ variance attributable to genes and environment vary with socioeconomic status.[52]
Asbury and colleagues (2005) studied the effect of environmental risk factors on verbal and non-verbal ability in a nationally representative sample of 4-year-old British twins.[35]
The argument here rests on a strong form of Spearman's hypothesis, that the heritability of different kinds of IQ tests can vary according to how closely they correlate with the general intelligence factor (g); both the empirical data and the statistical methodology bearing on this question are matters of active controversy.[55][56][57]
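One statistical approach behind such arguments is the "method of correlated vectors", in which the vector of subtests' g-loadings is correlated with the vector of their heritability estimates; the validity of this method is itself part of the controversy noted above. A minimal sketch of the mechanics, using invented numbers:

```python
# Sketch of the "method of correlated vectors": rank-correlate subtest g-loadings
# with subtest heritability estimates. All values are invented for illustration.
from scipy.stats import spearmanr

g_loadings   = [0.45, 0.55, 0.62, 0.70, 0.78, 0.85]  # hypothetical g-loadings per subtest
h2_estimates = [0.30, 0.38, 0.41, 0.52, 0.60, 0.66]  # hypothetical heritabilities per subtest

rho, p_value = spearmanr(g_loadings, h2_estimates)
print(f"rank correlation of g-loading with heritability: {rho:.2f} (p = {p_value:.3f})")
```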
A 2011 study by Tucker-Drob and colleagues reported that at age 2, genes accounted for approximately 50% of the variation in mental ability for children being raised in high socioeconomic status families, but genes accounted for negligible variation in mental ability for children being raised in low socioeconomic status families.
The authors noted that previous research had produced inconsistent results on whether or not SES moderates the heritability of IQ.
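Studies of this kind typically fit gene-by-environment moderation models in which the variance components themselves depend on SES. A generic sketch in the style of such models (not the exact specification used by Turkheimer and colleagues or by Tucker-Drob and colleagues) is:

```latex
% ACE model with paths moderated by a measured variable M (here, family SES).
% a_0, c_0, e_0 are baseline path coefficients; a_1, c_1, e_1 capture moderation by M.
\[
\operatorname{Var}(\mathrm{IQ} \mid M) =
(a_0 + a_1 M)^2 + (c_0 + c_1 M)^2 + (e_0 + e_1 M)^2
\]
% Heritability at a given SES level is the genetic share of that conditional variance:
\[
h^2(M) = \frac{(a_0 + a_1 M)^2}{(a_0 + a_1 M)^2 + (c_0 + c_1 M)^2 + (e_0 + e_1 M)^2}
\]
```

A finding like the one above corresponds to h²(M) being small at low values of M and large at high values.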
In a comprehensive review, Price (1950) argued that almost all MZ twin prenatal effects produced differences rather than similarities.
Price reiterated this conclusion in a 1978 update of the review, and research subsequent to the 1978 review largely reinforces his hypothesis (Bryan, 1993; Macdonald et al., 1993; Hall and Lopez-Rangel, 1996; see also Martin et al., 1997, box 2; Machin, 1996).
However, Flynn and a number of other researchers hold that modern life requires solving many abstract problems, which leads to rising IQ scores.[74]
A 2009 review article identified more than 50 genetic polymorphisms reported to be associated with cognitive ability in various studies, but noted that small effect sizes and a lack of replication have characterized this research so far.
The authors concluded that most reported genetic associations with general intelligence are probably false positives brought about by inadequate sample sizes.
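The point about inadequate sample sizes can be made concrete with a rough power calculation. The sketch below uses the standard Fisher-z approximation for testing a correlation and assumes, purely for illustration, a variant explaining 0.1% of trait variance; neither the effect size nor the thresholds are taken from the review itself:

```python
# Rough sample size needed to reliably detect a genetic variant that explains a tiny
# fraction of trait variance, via the Fisher-z approximation for a correlation test.
# The 0.1%-of-variance effect size is an illustrative assumption, not a quoted figure.
import math
from scipy.stats import norm

def required_n(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size to detect correlation r in a two-sided test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return math.ceil(((z_alpha + z_power) / math.atanh(r)) ** 2) + 3

r = math.sqrt(0.001)  # 0.1% of variance explained -> correlation of about 0.03
print("N for nominal p < .05:    ", required_n(r))             # thousands of subjects
print("N at genome-wide p < 5e-8:", required_n(r, alpha=5e-8)) # tens of thousands
```

Candidate-gene studies run on a few hundred subjects are badly underpowered for effects of this size, which is one way small samples end up producing unreplicable positive findings.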
When comparing pre-1963 to late 1970s data, researchers DeFries and Plomin found that the IQ correlation between parent and child living together fell significantly, from 0.50 to 0.35.[85]
The scientific consensus is that genetics does not explain average differences in IQ test performance between racial groups.
Some hereditarian researchers have nonetheless argued for a partly genetic explanation, citing as evidence the failure of known environmental factors to account for such differences, or the high heritability of intelligence within races.
For genetic differences to account for such gaps, however, the selective forces responsible would have to have acted across entire continents, with wildly different environments, and have been persistent over tens of thousands of years of tremendous cultural change.[104][105][106][107]
In light of these and similar findings, a consensus has formed that genetics does not explain differences in average IQ test performance between racial groups.