Archive for the ‘statistics’ Category

Are Liberals Smarter Than Conservatives?

October 22nd, 2009

Are Liberals Smarter Than Conservatives? — The American, A Magazine of Ideas.

As someone interested in intelligence and politics, I found this article very interesting. It is written from a conservative viewpoint but is quite balanced (though there are some mild jabs at the “liberal elite”).

“Who are smarter, liberals or conservatives? This is the kind of question that could spark fierce and endless debates between political opponents, but what if we could know, scientifically, that one side has the edge in brainpower? Should that change how we think about political issues?”

Click on the link above to read the rest of the article. Any thoughts about the article? Does IQ really matter? Are conservatives “dumber” than liberals or vice versa? Is it even useful to compare intelligence across the aisle, so to speak?

Prevalence of Psychologists in Argentina

October 16th, 2009

A 2008 study found that Argentina has 145 psychologists per 100,000 citizens. That is the highest rate in the world. The Wall Street Journal reports the following numbers (from 2005 – the number of psychologists in Argentina has increased since that time):

“Per Capita: Argentina topped a world ranking of psychologists per capita compiled by the World Health Organization in 2005:

Psychologists per 100,000 inhabitants

Argentina: 121.2
Denmark: 85
Finland: 79
Switzerland: 76
Norway: 68
Germany: 51.5
Canada: 35
Brazil: 31.8
USA: 31.1
Ecuador: 29.1

Also: In 2008, Argentina had 145 psychologists per 100,000 inhabitants; the capital, Buenos Aires, 789, according to a report by Modesto Alonso and Paula Gago. A 2009 national survey conducted by TNS Argentina found that 32% of respondents had at some time made a psychological consultation. That was an increase from 2006, when 26% said they had.”

Does anyone know why Argentina has much higher rates of psychologists than other countries? Buenos Aires in particular has a very high concentration of psychologists. What is also interesting is that many of the psychologists, at least as inferred from the article, have a psychodynamic background.

So why does Argentina have such a high concentration of psychologists? Looking at the list of countries with rates higher than the United States, there are a number of possible explanations. One is that psychology is valued more in those countries than it is in the United States; maybe the people there are more trusting of psychologists and more open to psychotherapy. Another possible explanation is that people in those countries have higher rates of depression, anxiety, or other psychopathology. They could also have fewer other resources to turn to for support (e.g., family, clergy, or friends). Another possible answer is that something about the countries themselves makes psychologists more prevalent. It could be political (more turmoil or less stable governments), criminal (higher rates of crime), or some other psychosocial factor. It is possible that higher rates of psychologists are related to the prevalence of socialist philosophy. Maybe psychologists in those countries are paid better than they are in countries with fewer psychologists per capita. There could be any number of reasons why psychologists are more prevalent in Argentina (and other countries, for that matter). Any additional thoughts?

Hypothesis Testing in Psychology Research

February 3rd, 2009

Hypothesis testing starts with theory. Theories are particular assumptions about the way things are. After a theory is formulated, a conceptual hypothesis is created, which is a more specific (than pure theory) prediction about the outcome of something. Next, an experimental hypothesis is created. This is where definitions are operationalized so that specific claims can be tested. For example, you could operationalize affection as the number of hugs, kisses, and other related actions. Then you hypothesize statistically in order to measure and test one of two hypotheses: the null, or H0, which represents no effect (i.e., no difference between the samples or populations tested), and an alternative hypothesis, H1.

The alternative hypothesis is that there is a difference, or an effect. It can be that one mean is greater than another, or simply that the two are not equal. The purpose of statistical testing, then, is to test the truth of a theory or part of a theory. In other words, it is a way to check whether predictions are accurate. To do this, researchers test the null hypothesis; we do not test the alternative hypothesis (which is what we think will happen). We do this because testing rests on falsification logic: a single counterexample can prove a theory wrong, but no finite number of confirming examples can prove it right, so we look for evidence that we are wrong.
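The logic above can be sketched with a small example. This is a hypothetical illustration, not from the post: invented “weekly hug count” data for two groups, with Welch’s t statistic computed by hand from standard formulas.

```python
import math

# Hypothetical data (illustrative only): weekly hug counts for two groups,
# following the post's example of operationalizing affection as hug counts.
group_a = [12, 15, 14, 10, 13, 16, 11, 14]
group_b = [9, 11, 8, 10, 12, 9, 10, 11]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    """Sample variance (divides by n - 1)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Welch's t statistic: how far the observed mean difference falls from the
# value H0 predicts (zero difference), in standard-error units.
se = math.sqrt(variance(group_a) / len(group_a) + variance(group_b) / len(group_b))
t = (mean(group_a) - mean(group_b)) / se
print(round(t, 2))  # → 3.66
```

A large |t| is evidence against H0 (no difference); note that we never “prove” H1 directly, we only fail to find support for H0, which is the falsification logic described above.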

The probability associated with a statistical test is the probability of a Type I error: rejecting the null hypothesis when the null is in fact true and should not have been rejected. In other words, it is saying there was an effect or a difference when there really was not.
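The meaning of the Type I error rate can be shown by simulation. This is a minimal sketch with made-up parameters: both groups are drawn from the same distribution, so the null is true by construction and every “significant” result is a false rejection.

```python
import random

random.seed(42)

# Simulate experiments in which H0 is TRUE: both samples come from the same
# normal distribution, so any rejection is a Type I error.
cutoff = 1.96           # two-tailed z cutoff corresponding to alpha = .05
n, trials = 50, 2000
false_rejections = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    z = diff / (2 / n) ** 0.5   # SE of the difference: sqrt(1/n + 1/n)
    if abs(z) > cutoff:
        false_rejections += 1

# The observed false-rejection rate should land near alpha = .05.
print(false_rejections / trials)
```

In other words, if you run many studies where nothing is really going on, about 5% of them will still “find” an effect at the conventional .05 level.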

The process of statistical testing can result in probability statements about the theories under consideration, but only under certain conditions. Statistical testing and hypothesizing are representative of theory when they are conceptually (verbally and operationally) connected to it. This means there has to be a logical and direct association between the statistical probability statements and the theory in order for those statements to represent the overarching theory. This link is forged by the experimental and conceptual hypotheses.

Decomposing Statistics

October 6th, 2008

Statistics are used by all but understood by few. In fact, studies have shown that 94% of people have little to no understanding of statistical methods. OK, that last statistic was made up; I wrote it to make a point, though. I could post something like that on this blog and people would believe it and possibly even repeat it. The sad thing is that it probably isn’t that far from reality. In social science and neuroscience research we use statistics to understand data and support hypotheses. This post will serve as a statistical primer. I will not discuss how to calculate statistics; rather, I will write about the underlying assumptions and theory of statistics. I will also discuss how to properly use and understand them (and hopefully avoid misusing them). I hope to help you become a more informed consumer of statistics.

When did we start using statistics and why?

Joel Best wrote a brief history of the use of statistics in his excellent book Damned Lies and Statistics: Untangling Numbers From the Media, Politicians, and Activists. [I urge everyone to read the book to be more informed about statistics. All quotes will be from the book. It provides only a superficial treatment of actual statistical methods – which he states is the case – but it provides a good theoretical background for being a critical thinker about statistics]. He states that statistics rose in popularity as governments and social activists wanted ways to track and “influence debates over social issues” (p.11). Early statistics were used almost exclusively for political purposes, especially to shape social and governmental policies. From the beginning, statistics were used for non-neutral purposes. They gave credibility to arguments.

One assumption that people erroneously make is that statistics are neutral and that they represent truth. Statistics are useful for aggregating a lot of data, but most statistical methods rest on certain assumptions about the underlying data (e.g., that they are normally distributed). Many times, however, researchers apply statistical methods and draw conclusions when those methods are not appropriate for the data. Even simple descriptive statistics (e.g., averages) can lead people to erroneous conclusions.
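A classic case of an average misleading is skewed data. Here is a small sketch with invented numbers (a hypothetical income list, not real data): one extreme value pulls the mean far away from the typical case, while the median stays put.

```python
import statistics

# Hypothetical incomes in thousands (illustrative only): nine modest
# incomes plus one extreme outlier that skews the distribution.
incomes = [30, 32, 35, 35, 36, 38, 40, 41, 43, 400]

print(statistics.mean(incomes))    # → 73.0, pulled up by the outlier
print(statistics.median(incomes))  # → 37.0, resistant to the outlier
```

Reporting “the average income is 73” would be technically true and still badly misleading about the typical person in this group, which is exactly the kind of thing a critical consumer of statistics should watch for.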

People who create statistics all have a purpose for them. Researchers are all biased and have agendas; it may just be to get their research published, or it may be for other, ulterior reasons. Social activists use statistics to create social problems (see p. 14); they are not the cause of the “problem,” but they try to raise awareness of it by turning it into a “problem” that we need to pay attention to and solve. This can often be a good thing, but activists are using statistics to give credibility to their cause (e.g., “According to the World Health Organization, between 12 percent and 25 percent of women around the world have experienced sexual violence at some time in their lives.” source). Governments also use statistics to defend their positions (e.g., “Crime rates decreased by XX% from last year. See! We are doing our job.”) and sometimes to counter the claims of activists.

The media pick up on statistics from activists because they make for a new story and might even be controversial, and controversy sells. Businesses also use statistics to promote their causes. Nor does every person or entity collect data in the same way: one police station might have different criteria for counting an assault than another.

Best proposes three general questions to ask whenever you see a statistic used:

  1. Who created this statistic?
  2. Why was this statistic created?
  3. How was this statistic created? (pp. 27-28).

Many times people don’t even know enough to ask those questions, let alone research the answers to them. After all, as Best points out, we are largely an innumerate society (and this holds true worldwide). Innumeracy is the mathematical equivalent of illiteracy. A majority of people are uncomfortable with even basic mathematics and completely oblivious to statistics. After all, mathematics is abstract and requires a lot of mental effort to use and understand. It is often not taught as well as it could be, and only reluctantly learned in school. Once out of school, people rarely need to use more than basic math, and so they forget what they learned. The other problem is that we take math (and, by extension, statistics) to be perfect and infallible (Gödel demonstrated, in effect, that this is not the case). Best describes this fallacy:

“We sometimes talk about statistics as though they are facts that simply exist, like rocks, completely independent of people, and that people gather statistics much as rock collectors pick up stones. That is wrong. All statistics are created through people’s actions: people have to decide what to count and how to count it, people have to do the counting and the other calculations, and people have to interpret the resulting statistics, to decide what the numbers mean. All statistics are social products, the results of people’s efforts” (p.27; emphasis added).

So what do you do when you watch a news program, read an article, or hear an activist or politician quote a statistic? If it makes you go, “Wow!” that is one sign you need to step back and really scrutinize it (which you should do even if it doesn’t surprise or scare you). If you agree with the point the show, article, or person is trying to make, then you especially need to step back and critique the statistic. This means you need to understand your own biases. It is easy to want only to confirm our hypotheses and beliefs and to ignore anything that might contradict them. This tendency is generally adaptive, helping us process a lot of information, but it becomes a problem when we don’t critically view statistics, especially “bad statistics” (which you can never discover without critiquing them). When you view or read a statistic, that is the time to ask yourself the three questions Best proposes and go from there. You might discover something interesting.

I’ll post more on this subject later.

Reference

Best, J. (2001). Damned Lies and Statistics: Untangling Numbers From the Media, Politicians, and Activists. Berkeley and Los Angeles, CA: University of California Press.