Intelligence Testing Introduced

Modern intelligence testing can be traced back to Alfred Binet, a French psychologist. The French government wanted a way to identify students who would do well in school, as well as students who would not benefit from schooling. They turned to Binet in 1904 to create an unbiased, objective measure that would help the government identify the children who needed special educational assistance. Binet and his colleagues created the first modern intelligence test by focusing on abilities like problem-solving and attention.
From this early beginning, intelligence testing evolved through modifications by Lewis Terman, who was at Stanford, and William Stern, who created the intelligence quotient (IQ). Terman modified Binet’s work to create the Stanford-Binet intelligence test. This test remains in use today (it has gone through a number of revisions), although it is not as popular as the Wechsler Adult Intelligence Scale (WAIS).
While early tests calculated IQ as the ratio of “mental age” to “chronological age” (multiplied by 100), IQ is no longer calculated in this manner because of the problems the ratio formula presents in adults.
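To see why the ratio formula breaks down for adults, here is a minimal sketch. The ages and scores are made-up illustrations of the general point: tested “mental age” plateaus in adolescence, so dividing by a growing chronological age drags the score down even when ability has not changed.

```python
# Ratio IQ, as used in early Binet-style tests: (mental age / chronological age) x 100
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    return mental_age / chronological_age * 100

print(ratio_iq(10, 8))    # a bright 8-year-old with a mental age of 10 scores 125.0
print(ratio_iq(16, 16))   # a typical 16-year-old scores 100.0
print(ratio_iq(16, 40))   # mental age plateaus, so a typical 40-year-old would "score" 40.0
```

The last line is the problem: an average adult would appear to lose intelligence every year, which is why modern tests score against the distribution of same-age peers instead.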
The WAIS is set to have an average score of 100 and a standard deviation of 15. This means that a person with average intelligence would have an IQ of 100 (theoretically, 50% of scores are higher and 50% are lower), and most scores fall within one standard deviation, in the range of 85 to 115. A score between 110 and 115 is considered high average, and a score between 85 and 90 is considered low average. About 95% of the population have scores between 70 and 130, with roughly 2% below 70 and 2% above 130.
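These percentages follow directly from the normal curve, and they can be checked with Python's standard library (the figures in the text are rounded versions of these values):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)   # WAIS-style scoring: mean 100, SD 15

within_1sd = iq.cdf(115) - iq.cdf(85)   # the 85-115 "average" band
within_2sd = iq.cdf(130) - iq.cdf(70)   # the 70-130 band
below_70 = iq.cdf(70)
above_130 = 1 - iq.cdf(130)

print(f"{within_1sd:.1%} of scores fall between 85 and 115")  # ~68.3%
print(f"{within_2sd:.1%} fall between 70 and 130")            # ~95.4%
print(f"{below_70:.1%} below 70, {above_130:.1%} above 130")  # ~2.3% each
```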
While intelligence tests are not perfect and are rightly subject to much criticism, they are still quite useful for researchers, clinicians, and educational institutions.

Image by Echoside.

Blog for a personal side to Alzheimer’s Disease

I received a nice email from someone who runs a blog about caring for her father, who has Alzheimer’s disease. She offers a nice caregiver’s perspective on what it is like to deal with this disease.

http://www.knowitalz.com

Volunteering as Therapy for Individuals with Dementia of the Alzheimer’s Type

The following post is a lengthy exposition on a possible link between volunteering and Alzheimer’s disease. This post is more social psychology than neuroscience (actually, it has very little to do with neuroscience). I am not asserting that volunteering can be a useful therapy for someone with Alzheimer’s disease; rather, I am making the case that there is enough evidence for research to be conducted along those lines. In other words, I see a need for someone to research whether or not volunteering is beneficial for people with Alzheimer’s disease.

Alzheimer’s disease (AD) is a serious condition that affects an estimated four million people in the United States. Most of these people are over the age of 65, since the risk of developing AD increases with age. It is also estimated that there are currently over 400,000 new cases of AD each year in the United States alone (Rodgers, 2002). The prevalence rate of Dementia of the Alzheimer’s Type (DAT), according to the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV), is “between 2% and 4% of the population over the age 65 years…[and] the prevalence increases with increasing age, particularly after age 75 years” (American Psychiatric Association [APA], 1994). (In this post, the terms AD and DAT are used interchangeably, even though DAT is the Axis I code and AD is the Axis III code in the DSM-IV. This is done because most articles about Alzheimer’s use the term “AD” in lieu of “DAT.”) For this post, I will first give the DSM-IV diagnostic criteria for DAT. Then, I will discuss the effects that volunteering has on older people. I will also provide some background theories about why volunteering has the effects that it does. Next, I will make the connection between AD and voluntarism.

DSM-IV Criteria for DAT

There are six main criteria associated with DAT as found in the DSM-IV. The first is:

“The development of multiple cognitive deficits manifested by both (1) memory impairment (impaired ability to learn new information or to recall previously learned information) [and] (2) one (or more) of the following cognitive disturbances: (a) aphasia (language disturbance), (b) apraxia (impaired ability to carry out motor activities despite intact motor function), (c) agnosia (failure to recognize or identify objects despite intact sensory function), (d) disturbance in executive functioning (i.e., planning, organizing, sequencing, abstracting)” (APA, 1994, p. 142).


At a conference

I just wanted to say that I’m at a conference in San Antonio, Texas for the next 5 days and might not be able to write any posts. Please visit the excellent sites in my Blogroll.

The Modal Model of Memory and the Serial Position Effect

I’m continuing my recent trend of basic cognitive psychology posts. The following post is about the modal model of memory, which has been highly influential for a number of decades but is slowly being modified over time. I won’t get into the more modern modifications of the modal model; rather, I present the very traditional view of memory, even if it is somewhat controversial today. For example, a number of psychologists do not believe that short-term memory really exists as a separate store (working memory fills that role). In any case, my post serves as a brief introduction to a classic view of memory and of the primacy and recency effects.

The modal model of memory has three main components: the sensory register, short-term memory (STM), and long-term memory (LTM). This Atkinson and Shiffrin model assumes that moving information from the sensory store to short-term and then long-term memory takes place in discrete stages. At any of these stages, information can be lost through interference or decay. Another assumption of this model is that information processing has to start in the sensory register; information that is attended to then moves to STM, and with rehearsal it moves on to LTM.

The serial position effect (split into the primacy and recency effects) is the finding that the first few and last few items in a list, for example a word list, are the easiest to remember. A graph of recall accuracy by position is roughly parabolic (i.e., U-shaped). The primacy effect occurs because people have time to rehearse the first few items before STM capacity is reached. The recency effect occurs because the last items are still in STM and have not yet decayed, so they are easy to recall. Items in the middle of the list are easy to forget because by then STM capacity is too full for much rehearsal, and as more items are presented, older items in STM are “pushed out.”

There are ways to eliminate the primacy or recency effects, though. If items are presented rapidly, there is no time to rehearse them and the primacy effect fades away. If a distracting task is given at the end of the list (similar to Peterson and Peterson’s 1959 study of the decay rate of STM), the recency effect disappears: STM capacity is taken up by the distractors, and the information in STM decays. These findings indicate that the systems governing the primacy and recency effects are separate. They also support the modal model, because researchers identified the primacy effect with the transfer of information from STM into LTM, while the recency effect is simply an example of information still being in STM.
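A toy version of the rehearsal-buffer account above can be simulated in a few lines. All the numbers here (buffer size, rehearsal budget, transfer rate) are assumptions chosen for illustration, not fitted parameters from any study:

```python
BUFFER_SIZE = 4           # assumed STM capacity, in items
REHEARSALS_PER_STEP = 4.0 # fixed rehearsal budget, shared by whatever is in STM
TRANSFER = 0.08           # assumed chance each rehearsal copies an item into LTM

def recall_curve(n_items: int) -> list[float]:
    buffer, rehearsals = [], [0.0] * n_items
    for i in range(n_items):
        if len(buffer) == BUFFER_SIZE:
            buffer.pop(0)                 # oldest item is "pushed out" of STM
        buffer.append(i)
        share = REHEARSALS_PER_STEP / len(buffer)
        for item in buffer:               # rehearsal is split among current items
            rehearsals[item] += share
    # recall probability: chance the item reached LTM, or 1.0 if still in STM
    return [1.0 if i in buffer else min(1.0, rehearsals[i] * TRANSFER)
            for i in range(n_items)]

curve = recall_curve(12)
```

Running this gives high values for the first items (rehearsed often while the buffer was nearly empty), a dip in the middle (each item shares rehearsal with three others before being pushed out), and perfect “recall” for the last four items, which are still sitting in the buffer: the U-shaped serial position curve.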

PET Scans and fMRI Compared

The positron emission tomography (PET) scan measures blood flow in the brain. This is accomplished by injecting a person or animal with a radioactive isotope (i.e., an unstable atom, usually a variant of oxygen with a short half-life); this isotope quickly decays. Founded on the assumption that blood flow increases in areas of the brain that are in heavy use (such as when a person is viewing an object, reading words, or performing some other cognitively intensive function), a fair portion of the injected isotopes will end up in the active parts of the brain. As an isotope decays, it releases a positron (a particle with the same mass as an electron but the opposite charge). This positron collides with an electron and the two annihilate each other, sending two gamma rays in almost exactly opposite directions. These gamma rays are picked up by the PET scanner, which then determines where in the brain they came from. Since blood concentrates where the brain is activated, there will be higher levels of the isotope there, and this shows up on the scanner as increased levels of gamma rays. The test is usually run twice (once as the control condition and once as the experimental condition). The difference between the two conditions is measured, and any difference should show which area(s) of the brain were activated.
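The two-condition subtraction logic at the end of that description can be sketched in a few lines. The numbers are invented stand-ins for regional activity measures, not real PET counts, and the threshold is an arbitrary assumption:

```python
# activity at five hypothetical brain locations, rest vs. task
control = [1.0, 1.1, 0.9, 1.0, 1.0]
task    = [1.0, 1.1, 2.4, 1.0, 1.1]

# subtract the control scan from the experimental scan, location by location
difference = [t - c for t, c in zip(task, control)]

# locations whose activity rose past an (assumed) threshold count as "activated"
THRESHOLD = 0.5
active = [i for i, d in enumerate(difference) if d > THRESHOLD]
print(active)  # only location 2 survives the subtraction
```

Everything common to both conditions (baseline metabolism, sensory input shared by the two tasks) cancels out, leaving only the activity specific to the experimental condition.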

A PET scan is similar to fMRI in that both measure blood flow in the brain, which is an indirect measure of brain activity. However, there are advantages and disadvantages to both functional brain imaging methods. PET scans are advantageous in that a person does not have to remain as still as he or she would for an fMRI. Tiny movements can obscure and ruin fMRI data, but small movements do not affect PET scans. So, for example, in a PET study a researcher could have someone read lists of words out loud, whereas the jaw movements alone would ruin the fMRI data (although this is changing to some degree as image processing becomes more sophisticated; researchers can also modify the task slightly to reduce movement artifacts in fMRI images). This is really the main advantage of PET over fMRI.

PET scanning is at a disadvantage compared to fMRI because the resolution of the scans is lower. PET can measure changes in blood flow in the brain in an area of about 5-10 cubic millimeters; fMRI can resolve down to 3 cubic millimeters, and even lower as the machines become more powerful. PET scanning is also much more expensive than fMRI, since it requires a special machine, radioactive isotopes, and multiple trials to get a scan. fMRI can be done at many hospitals around the world with little or no extra cost because of the prevalence of MRI scanners. Another disadvantage of PET is the need for radioactive isotopes, which can be given to a person only a few times before the exposure becomes unsafe.

While PET scans were, and in some situations still are, better than fMRI, they have many disadvantages overall. With higher cost, lower spatial resolution, and the need for isotopes, the disadvantages of PET scans seem to outweigh the advantages.

Image by Muffet.

Word Superiority Effect and Parallel Processing

One classic experiment on cognitive brain functioning produced the word superiority effect, reported by Dr. Reicher in 1969. In this experiment, either a word or a non-word (a string of letters) is flashed on a screen. The subject is asked whether the stimulus contained one of two letters, say a “C” or an “E.” When the stimulus did not resemble a word (e.g., XXCX), subjects correctly identified the target letter about 80% of the time. When the string of letters was similar to a word but not an actual word (e.g., FELV), subjects also correctly identified the target letter about 80% of the time. However, the interesting finding was that when the stimulus was a word (e.g., TEND), subjects were correct 90% of the time. So the word superiority effect is that subjects are most accurate in identifying a target letter when it is contained in a word, as opposed to a string of letters.

This lends support to the theory that there are things we can process in parallel and that this parallel processing (the simultaneous activation of word and letter representations) can be beneficial at times, such as helping subjects correctly identify individual letters more often when the letter is contained within a word rather than in a random string of letters. In other words, the whole word is recognized before all the letters are individually recognized. This speeds up or aids processing because there are now two routes, so to speak, to that letter: the visual stimulus (seeing the letter) and the linguistic information (knowing that the letter is in the word) are both activated and together help people recognize letters better.
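One simple way to express the two-route idea is as two independent chances to identify the letter. The probabilities below are illustrative assumptions chosen to mirror Reicher's 80%/90% figures, not a model fitted to his data:

```python
LETTER_ROUTE = 0.8  # assumed P(correct) from the visual letter route alone
WORD_ROUTE = 0.5    # assumed P(the lexical route also supplies the letter); words only

def p_correct(stimulus_is_word: bool) -> float:
    if not stimulus_is_word:
        return LETTER_ROUTE  # non-words: only the visual letter route is available
    # words: the subject fails only if BOTH independent routes fail
    return 1 - (1 - LETTER_ROUTE) * (1 - WORD_ROUTE)

print(p_correct(False))  # 0.8, like the XXCX and FELV conditions
print(p_correct(True))   # 0.9, like the TEND condition
```

The word condition comes out ahead because the lexical route rescues a share of the trials on which the visual route alone would have failed.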

Image by uncommonmuse.

Quick post

I’m really sorry for not posting as much as I would like to. I’ve been preparing for a presentation at a research conference next week (Vas-Cog) and haven’t had time to write a good post for the blog. Anyway, LiveScience has an interesting story about new research localizing visual mistake recognition in the temporal lobes. This recognition appears to occur faster than conscious awareness.

A basic introduction to fMRI and MRI

fMRI (functional magnetic resonance imaging) builds on a basic MRI (magnetic resonance imaging) by looking at blood flow. An MRI works because protons, found in the nuclei of atoms, are affected by magnetic fields. Basically, an MRI aligns a very small proportion of the protons in body tissue (it affects hydrogen the most because its nucleus is a single proton; hydrogen is also prevalent in body tissue and so it is easy to affect). Normally the protons in hydrogen are randomly oriented, which means their minute magnetic fields are also randomly oriented. When these protons are placed in the vicinity of the strong magnetic field produced by an MRI machine, some of them align with the machine’s field. The machine also produces radio waves that perturb the aligned protons, causing them to spin in a particular way. The radio waves are then turned off and the protons realign themselves with the magnetic field produced by the MRI machine. The machine picks up this realignment, and a computer processes it to create an image of the brain (or whatever else is scanned). Since protons in different tissues realign at different rates, the machine can differentiate between types of tissue (such as skull, white matter, and gray matter).

An fMRI builds on the MRI by focusing on the ratio of oxygenated to deoxygenated blood; this is the blood oxygenation level dependent (BOLD) effect. Basically, an fMRI indirectly measures brain activity by measuring changes in blood oxygenation (specifically, in hemoglobin as it deoxygenates, since deoxygenated hemoglobin has different magnetic properties). An fMRI works because as the brain processes information, blood flows to the active areas to provide the needed oxygen and glucose. The result of this process is a scan of the brain with lighter (or darker) areas where blood is flowing in greater quantity.

One example of how fMRI was used to test a cognitive neuroscience theory comes from Deibert et al. (1999), who had subjects close their eyes and try to identify objects by touch alone. The researchers discovered through fMRI that the subjects’ visual cortex was activated even though their eyes were closed. There were two possible explanations: either the objects were identified first and visual images were created afterward, or the visual image was created during the process of identification and thus helped the subjects recognize the objects. However, fMRI alone was not sufficient to decide between the two. When researchers used transcranial magnetic stimulation (TMS), they discovered that by interrupting processing in the occipital lobe they could interfere with object recognition. So the combination of fMRI and TMS showed that the visual image formed during tactile exploration is important for object recognition. While fMRI was not sufficient on its own in this case, it was key in uncovering and explaining how tactile object recognition works in the absence of visual input.

Image courtesy of MacRonin47.

Alternate assumptions to naturalism in neuroscience

This post is very different from anything I’ve previously written; it’s more philosophical than psychological and is an example of theoretical and philosophical psychology, a small but important niche within psychology that provides critical analyses of the underlying assumptions [philosophies] of psychology and the related sciences. My post is not meant to attack the neurosciences (after all, that is my field of specialization); rather, it is meant to expose the philosophical underpinnings of neuroscience. The alternative assumptions I write about are not necessarily superior, just different. Feel free to contact me with any questions or if you are interested in the references I cite.

This post is an exposition of the naturalistic assumptions in the article An fMRI Study of Personality Influences on Brain Reactivity to Emotional Stimuli by Canli et al. (2001). It will also focus on alternative assumptions. I will first explore the assumption of materialism, one half of Descartes’ dualism, and contrast this assumption with a holistic monism. Then I will discuss biological determinism as well as an alternative assumption to it, namely agency.

Materialism accounts for one half of the Cartesian dualism (and thus has been termed a one-sided dualism), the theorized split between mind and matter. It is defined as the notion that “biological explanations will (eventually) be able to fully account for and explain…psychological phenomena” (Hedges, p. 3). Materialism assumes that biology is sufficient to explain behavior. This article is focused on “the neural correlates of emotion [and personality] in healthy people” (p. 33) by using brain imaging techniques. This is an example of materialism in that the authors are looking for “the biological basis [or an objective foundation] of emotion [a subjective phenomenon]” (p. 33). The authors’ assumption of materialism will become clearer with another example. Canli et al. state: “The similarity in the dimensional structure of personality and emotion is due to a common neural substrate where personality traits moderate the processing of emotional stimuli” (p. 33; italics added). What they are saying is that neurons (the brain) are the base and that emotional processing in the brain is affected by personality traits (which they state have a “common neural substrate” with emotions). This is a one-sided dualism—the researchers attempt to explain the subjective experiences of the mind (i.e., emotion) in terms of the material, or biological, body while not including the mind in their methods.

The authors of this study sought to understand emotional responses in terms of neuroimaging. This is an example of method-driven science in that the researchers “ignored…[the] notion of the mind [being immaterial and unpredictable] and focused…on the body” (Slife, p. 13). There is no way to image emotions directly, but by assuming that they are centered in biological reactions, these researchers were able to use traditional scientific methods to measure those reactions. This materialism, or one-sided dualism, has its shortcomings. An alternative way to approach the hypothesis of how personality serves as a “middleman” between the brain and emotions is to use the assumption of a holistic monism. Whereas the authors assume that the brain (body) is the foundation of emotional experience and thus sufficient for that experience, with a monistic assumption the researchers would recognize both body and mind as necessary but not separately sufficient. This would change their study because they would look at a more inclusive picture of people, not just biology and mind but context as well. All of these conditions interact and are only understood in relation to one another. The authors would also consider qualitative measures of life experience and meaning and research those, taking a pluralistic approach.

Another prevalent assumption, which is inseparable from materialism and is in fact a subset of it, is that of biological determinism. Whereas my materialism section focused on the authors’ attempts to explain subjective experiences by their “objective” methods, this one will focus on how they explain varying emotions as caused by variations in biological factors. The authors end their paper on a strong deterministic note: “The different brain activation patterns that these pictures produce…may result in two different subjective interpretations of the identical objective experience” (p. 39). Although they hedge their statement with a may, what they are saying is that their subjects all had the same “objective experience” but because of apparent differences in how their brains responded, this difference caused the variation in subjective emotional interpretation. They imply that people’s interpretations are determined by biology, which rules out agency.

Alternatively, when viewing this article according to holistic monism, specifically agency, there would be many changes in it. First off, it would not be a problem to recognize the role agency plays in the body. The authors would assume that the body affects agency and vice versa–they constitute each other. Instead of “different brain activation patterns” (p. 39) causing different interpretations of emotion, it could be that the interpretations affect the neuronal firing instead (or an interplay of both). Also, with an alternative assumption, the following hypothesis would no longer be deterministic: “Extraversion is associated with greater brain reactivity to positive” (p. 34). The authors imply that personality traits are biologically based (see paragraph 2 of this post)–even if behaviorally influenced; therefore, biology causes personality, which causes changes in brain reaction (which people experience subjectively as emotions). Alternatively, this can be explained by “agentic factors” (Slife, p. 25), such as people choosing (even unconsciously) how to respond to the pictures. Also, instead of personality being determined by the brain, manifestations of agency (choices) in a context (e.g., experiences) could shape personality.