Modern intelligence testing can be traced back to Alfred Binet, a French psychologist. The French government wanted a way to identify students who would do well in school as well as those who would not benefit from standard schooling, so in 1904 it turned to Binet to create an unbiased, objective measure that would identify children needing special educational assistance. Binet and his colleagues created the first modern intelligence test by focusing on abilities such as problem-solving and attention.
From this early beginning, intelligence testing evolved through the work of Lewis Terman, a psychologist at Stanford, and William Stern, who introduced the concept of the intelligence quotient (IQ). Terman revised Binet’s test and created the Stanford-Binet intelligence test. The Stanford-Binet remains in use today, although it has gone through a number of revisions and is not as popular as the Wechsler Adult Intelligence Scale (WAIS).
Early tests calculated IQ as a ratio of “mental age” to “chronological age” (IQ = mental age / chronological age × 100). IQ is no longer calculated this way because the formula breaks down for adults: mental age levels off in late adolescence while chronological age keeps increasing, so the ratio would drag adult scores downward even when ability remains stable.
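As a rough illustration (not part of any actual test materials), here is a small Python sketch of the old ratio formula; the ages used are hypothetical, and the adult “mental age” of 16 is simply a plausible plateau value chosen to show why the formula misbehaves.

```python
def ratio_iq(mental_age, chronological_age):
    """Historical 'ratio IQ': (mental age / chronological age) * 100."""
    return mental_age / chronological_age * 100

# A 10-year-old performing at the level of a typical 12-year-old:
print(ratio_iq(12, 10))   # 120.0 -- a sensible, above-average score

# An adult whose mental age has plateaued around 16 (hypothetical value):
print(ratio_iq(16, 40))   # 40.0 -- absurdly low, purely because of the growing denominator
```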
The WAIS is normed so that the average score is 100 and the standard deviation is 15. This means that a person of exactly average intelligence would score 100 (in theory, 50% of scores fall above it and 50% below); in practice, “average” is treated as a range rather than a single number, with most scores (about 68%) falling within one standard deviation of the mean, between 85 and 115. Within that range, a score between 110 and 115 is considered high average, and a score between 85 and 90 low average. About 95% of the population scores between 70 and 130, with a little over 2% scoring below 70 and a little over 2% above 130.
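Those percentages follow directly from the bell curve the WAIS is normed to. As a sketch (assuming a perfectly normal distribution with mean 100 and standard deviation 15, which real score distributions only approximate), the shares can be computed with nothing but the Python standard library:

```python
from math import erf, sqrt

MEAN, SD = 100, 15

def share_below(score):
    """Fraction of a normal(100, 15) distribution falling below the given score."""
    z = (score - MEAN) / SD
    return 0.5 * (1 + erf(z / sqrt(2)))

# Within one standard deviation (85 to 115): about 68%
print(round(share_below(115) - share_below(85), 3))   # ~0.683

# Within two standard deviations (70 to 130): about 95%
print(round(share_below(130) - share_below(70), 3))   # ~0.954

# Each tail (below 70, or above 130): a little over 2%
print(round(share_below(70), 3))                      # ~0.023
```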
While intelligence tests are not perfect and are rightly subject to much criticism, they are still quite useful for researchers, clinicians, and educational institutions.