The Self, the Other, and Happiness

From my limited but growing experience in therapy, I have observed one underlying factor that affects how people behave, think, and feel. This one factor does not discount the effects of others, but it is a prevalent theme in the lives of many of the people I have worked with in therapy. That factor is self-centeredness, or in other words, selfishness. Any time people focus on themselves, they cannot focus on those around them. Some people are able to focus on themselves and then switch over to an outward focus; others are not very good at this. The problem with focusing on oneself is that when external events occur, their effects are all driven inward and produce change in the individual. Over time, some people develop such a dependency on external stimuli that internal, self-driven stimuli are excluded or crowded out. This is what is called an external locus of control. I am not discounting people who have what psychologists call an internal locus of control, which is often viewed as a more positive, internally driven sense of control over life, but the majority of people I have seen in therapy emphasized external events to an extreme extent. That is, they let external events control their lives and thus their emotions, thoughts, and behaviors.

My interpretation of why this occurs in some people is that everything external becomes internalized (i.e., everything outside themselves gets focused inward). If something bad happens at work (the external event), a person might twist it into a reflection of the worth of her inner self. This means that something negative (even if it was that person’s fault) becomes a reflection of that person’s character rather than simply a negative event (e.g., “I am a failure” versus “I sure made a mistake there!” – notice the difference between the negative self-evaluation and the labeling of a negative event). This is an attack on a person’s sense of self-worth, and that attack on the self can turn into a vicious cycle of self-defeating blows. Attributing negative events to one’s character is a form of self-centeredness. However, that is only part of the self-centeredness of which I am writing. What I mean by self-centeredness goes beyond locus of control – it is an attitudinal and personal characteristic of interpreting everything as being about oneself. This is not narcissistic personality disorder – it is not an overt and extreme ‘personality’ characteristic but a learned way of interpreting events. It is relatively mild, probably not noticeable to many other people (narcissism is obvious), and almost never noticeable to the individual.

This selfishness is manifest in perpetual worrying about the state of the Self instead of the Other. That does not mean the self-centered person never worries about other people; it means that he or she is never able to ‘forget’ himself or herself. I believe that true happiness comes only by forgetting oneself and serving others. One problem with this belief is that some will misunderstand it and spend all their time doing things for others at the expense of their own needs – but that is rare. One can, on average, spend the bulk of his or her time focused on others instead of on oneself. From my completely anecdotal personal experience, the people who spend the least time thinking about themselves are usually the happiest. The corollary is that those who spend the most time thinking about themselves are usually the least happy.

We all make choices. Choice – free will – is not an illusion. We all choose how we react in life – to our thoughts, to our boss, to a spouse, to others. Dr. Barbara Heise stated, “We give up our…right to choose when we say, ‘He (or she) made me angry.’ I encourage you not to give away your right to choose by handing that power over to someone else. No one can ‘make’ you angry. You make a choice to respond by being angry or by taking offense. But you can also choose to make the effort to find out what is really going on with the other person and understand their behavior—or maybe just agree to disagree.” (Source).

We are agents of our actions. We choose our attitudes and most of our thoughts. Every person on earth faces hardships of one kind or another. Some might face starvation or abuse or the loss of loved ones. Some might face loneliness or addiction or stress. Some might face anxiety or depression. But here is the key – we can choose what our attitude will be; we can choose to be happy or sad. Yes, even in depression. The choice of happiness does not mean that we are happy all the time or happy immediately; it means that we will try to respond with happiness throughout our day; it means we will work toward the goal of happiness. I know most people would say that happiness (as opposed to unhappiness) is always a goal for them, but how many people are actively choosing happiness?

The surest way to overcome unhappiness, or even anxiety or a number of other common mental health problems, is to choose to forget the self and get to work, so to speak. We can choose to be self-centered or we can choose to be other-centered. This choice and action of other-centeredness is the surest way to happiness and peace. That is the intriguing thing about focusing on others – and I mean really focusing on others. I’ve met people who spend most of their time filling the needs and wants of others and who are unhappy. Why are they unhappy? Because they resent the time spent on others. Many times this resentment is not overt, but it is obvious in their speech. If we are able to truly focus outward on others, we will find that our self takes care of itself. We get anxious because we are worried about what others think of us. We feel depressed for much the same reason – focusing inward on the self and interpreting many external events through the lens of the self. That is not necessarily bad when external events are positive, but when they are negative, it can lead to depression.

When I was young, my younger brother would on occasion do something that I found annoying. When I protested to my father, he usually replied, “Don’t be annoyed.” That lesson stuck. It does not mean I have never felt annoyed since – I do from time to time – but it helped me realize that being annoyed is a choice. What one person finds annoying, another person will not. I do not believe that most people who do something others find annoying mean to be annoying; most simply do not realize that what they are doing might bother other people. A gentle request that they stop will often solve the problem. Again, the choice is there – choose not to be annoyed. In the same manner, choose to be happy.

I do not mean to minimize the complexities of depression or anxiety, but I do not think we should give away our choice of happiness by allowing others, our biology, or other stressors to determine it. I have to admit that I do not believe in determinism; I do not think it exists. If we learn anything from quantum physics, it is that there is some level of indeterminacy to basic matter. By extrapolation, even a small uncertainty might affect larger entities, such as neurotransmitters, neurons, pathways, or whole beings. Indeterminacy does not equal free will or choice, but it is a component of it. I do not believe we should let anything hold our happiness hostage. True happiness comes from focusing on others – note that they are not determining your happiness; you are choosing to focus outwardly and happiness results, not because you are seeking it but because when you focus on others, when you serve others, happiness finds you. You open the door to it and let it into your life. The choice is there – you can choose to be self-centered and miserable or you can choose to be other-centered and happy. What do you choose?

Book Review – Leadership and Self-Deception: Getting Out of the Box

I’m going to preface my review by stating that the book is not directly about psychology; it has nothing to do with neuroscience. However, it has everything to do with interpersonal relationships and social interactions, which are two areas frequently addressed in psychology. I do not have any affiliation with The Arbinger Institute; I just enjoyed the book.

The book Leadership and Self-Deception: Getting Out of the Box was written by The Arbinger Institute. The work was derived from the ideas of C. Terry Warner, an American philosopher. The Arbinger Institute is a management training and consulting firm that works with businesses and individuals to help them improve their businesses and lives. The 168-page book is easy to read; it is written in simple prose, like a novel.

The main character in the book is Tom, a recently hired mid- to upper-level manager at the fictional company Zagrum. Throughout the book Tom mainly interacts with two other characters – Bud, his boss and the executive vice president of the company, and Kate, Zagrum’s president. Both Bud and Kate take time out of their busy schedules to train Tom about “the box,” which is self-deception.

The gist of the book is that much conflict between people is based on self-betrayal and self-deception. It comes from viewing other people as objects, as “things” that either help or hinder our own progress. The self-deception is that we are more important than other people and that they exist only to help us (or at least not stop us) self-actualize (I’m using different terms than those used in the book; the author(s), by the way, are not particularly fond of the humanistic concept of self-actualization). We also deceive ourselves when we think that if we want to have improved relationships with others – especially if those relationships are strained – then it is others who need to change and not ourselves.

Self-betrayal occurs, according to the author(s), when we are not true to the part of ourselves that is other-centered; this results in self-centeredness. In the book the author(s) give an example of how this happens. I’ll summarize that example.

At night a husband and wife are sleeping. The husband wakes up when the baby in the other room starts to cry. The husband’s first thought is to get up and get the child before his wife wakes up; after all, she works so hard all day and needs all the sleep she can get. The husband’s next thought, though, is that he too works hard all day and needs to get up early for a meeting. “Why should I get up? My responsibility in this family is to go to work and earn money so we can live. I need all the sleep I can get so I can function at my job – I have a big project to complete tomorrow. [Baby continues to cry]. Why doesn’t my wife get up and get that baby? Doesn’t she realize I need to get sleep? Okay, I know she’s awake now. Why doesn’t she get up? Now she’s just being lazy.” [And so on.]

These types of thoughts often become self-fulfilling prophecies, such that all of our own actions and thoughts inflate our self-worth (i.e., we see ourselves as good, hard-working people) while simultaneously deflating the worth of another (i.e., we ascribe certain attributes to them – “lazy” or “inconsiderate” – and then much of what we see them do after that only supports that hypothesis). While this specific example has not happened to everyone, we have all experienced similar situations. Maybe the situation was at work, where you had a thought that you should do something but then didn’t do it. When it created a problem, you were able to rationalize your behavior and blame someone else (“I would have done X had Susan done her job” or “I was just too busy with other things to get X done.”). Basically, self-betrayal results from not being true to what you [hopefully] know is the right thing to do. When we don’t do what we know is right, the normal human response is to rationalize and justify our action or inaction in order to protect our egos, so to speak. This leads us to shift the blame from ourselves onto others. We start to view others as hindering our progress; when this occurs they stop being people and start being objects (in other words, people are viewed as either starting blocks or stumbling blocks – they help or hinder us).

It is relatively straightforward to see how this can lead to interpersonal problems – at home or at work. The problem is that we do not know that we are betraying and deceiving ourselves, so we continue to ascribe most of our problems to others. The author(s) further point out that even if we recognize our self-betrayal and self-deception, we will never be completely free of these behaviors; however, we will be able to reduce them and improve our relationships with others.

Overall, this book provides an important and novel way to approach interpersonal behavior. The overarching message is that we should not worry about changing others (or even ourselves! – but I’ll let you read the book to understand that); we should instead recognize that the problem lies within ourselves and go from there. One very creative application of this philosophy is in businesses, where it is used to improve productivity, human relations, public relations, and even the profitability of the company. I’ll let my readers read the book to understand how this philosophical approach to interacting with others can help a business make more money.

One of my criticisms of this approach to interpersonal behavior is that it is fairly esoteric and difficult to grasp conceptually. That’s not necessarily a negative; however, it means that most people will really have to study and ponder the concepts in order to understand them. The book also serves only as a brief – but important – introduction to the topic, leaving one a bit unsure exactly how to implement this new attitude and these new behaviors in one’s own life (although there is enough information in the book that an astute reader can understand enough to follow this method of interpersonal interaction). This is where the Arbinger Institute’s training workshops and seminars come in. Additionally, C. Terry Warner wrote a book called Bonds That Make Us Free: Healing Our Relationships, Coming to Ourselves, which is a more complete description of the concepts found in Leadership and Self-Deception.

I recommend the book Leadership and Self-Deception: Getting Out of the Box to anyone seeking to develop insight into themselves and their interpersonal interactions.

The Death of Psychotherapy

I’m going to preface this post by stating that it was written to help me think through the relationship between neuroscience and therapy. As such, it is a philosophical journey through some of my thoughts and is not necessarily what I really believe, because I’m still working on discovering what I believe. Thought processes like this are one way I try to keep some of my beliefs about psychology and neuroscience balanced. If I start leaning too strongly one way, I’ll start looking for things that disconfirm those beliefs and see what I discover. It’s a bit of playing Devil’s Advocate with myself and a bit of philosophizing. Some of my friends and I used to do this in junior high and high school – holding philosophical discussions in which we argued for positions we didn’t necessarily believe (e.g., classic topics such as supposing that this world and universe aren’t really real but are just reflections of reality; again, that’s not something I believe, but we would speculate). What does this all have to do with psychology and neuroscience?

The brain is what drew me to psychology initially. However, I vowed I would never go into clinical psychology because I didn’t think I would like therapy or dealing with people’s problems. Over time I discovered neuropsychology. Most neuropsychologists are clinical psychologists, so in order for me to become a neuropsychologist, I had to be trained as a clinical psychologist. There are many things I enjoy about clinical psychology, but therapy is not one of them. Granted, most neuropsychologists do not actually do therapy, but we have to be trained in it. I enjoy talking with people in sessions, but I haven’t been that impressed with therapy as a whole so far. Maybe that’s just because I haven’t yet found the particular therapeutic method that really “clicks” with me. Cognitive-behavioral therapy is fine, but so much of actual therapy in practice is just plain common sense. However, not everyone has a lot of common sense, so some people need training in it. Part of me recognizes the validity of therapy, but another part of me struggles with it. Now on to my main article.

The more I study the brain and the more exposure I have to therapy (giving, not receiving), the more biased toward the brain I become. What I mean is that we continue to discover more about the brain, and the more we discover, the more behavior we can explain in terms of biology or neurophysiology, and the less important I think therapy is. I’ve written about this topic in the past but wanted to briefly revisit it; this is somewhat of a second chapter to that post. Before I continue, I want to expose one of my biases: I believe humans have free will. Even though some of my beliefs about the brain could be seen as mechanistic and deterministic, I do not believe that a strongly biological foundation for behavior rules out free will. You can still assume biological foundations without assuming determinism – if, for example, you hold a monistic set of assumptions that incorporates both mind (“nonmaterial”) and body (“material”) in one. [I have quotes around nonmaterial and material because mind is not necessarily nonmaterial and body is not necessarily material, at least philosophically speaking.] Monism is similar in spirit to a unified field theory (e.g., a Grand Unified Theory) or the Theory of Everything for which some theoretical physicists are searching. That’s not what I’m going to write about, and if it didn’t make sense, don’t worry about it (I discussed this topic in a couple of different posts: here {I linked to that post previously} and here). To summarize, I view behavior through a strong biological bias, but I do not assume determinism.

As I said earlier, the more I learn about the brain and behavior (through research and observation), the more I lean toward neuroscience and away from “traditional psychology.” However, I still appreciate the psychosocial aspects of behavior; the nature versus nurture dispute will never be resolved because both are important. The environment is important – all external stimuli are important – but the problem with downplaying biology is that it is the medium of behavior. What I mean is that everything we think, sense, perceive, or do is translated and transmitted through the firing of neurons. This means that all abnormal behavior, which is what often interests psychologists, originates in a neuron or related cell. Whether the cause of that behavior is internal or external is irrelevant; all events and stimuli are translated into patterns of neuronal firing.

This is why I think that understanding the biology of the brain is the best way to understand a person’s behavior. However, because we have an imperfect understanding of the biology of the brain, we have an imperfect understanding of the biological foundations of behavior. This means that until we have a perfect understanding, we cannot ignore the psychosocial aspects of behavior – and even with a perfect understanding we could not, because even if we understand the “translation” process, we may not understand the origin of what needs to be translated. This is where traditional talk therapy can be most beneficial. Even so, I am less and less convinced that talk therapy is the best solution for many psychological issues. Over time, as we discover more and more about the brain, therapy will become even less important.

That is a fairly radical position for a student of clinical psychology to take – it’s more in line with psychiatry, or rather, I believe it’s more in line with neuroscience. I’m not saying that therapy is useless; I’m saying that as we gain a more perfect understanding of the brain and how various chemicals interact within it, we will have less need for people to help others by “talking” through their problems. The better we understand the physiology of the brain, the more natural our pharmaceuticals will be. In other words, it will be easier to mimic and create normal brain functioning. Of course, many will ask, “What is normal?” That’s a good question.

Some may argue that with depression, for example, many people have a negative self-image and negative self-evaluations, which can lead to depression. That is true, but it’s the classic chicken-and-egg question. Which came first? Did the negative thoughts cause the depression, or did the person experiencing negative thoughts have a biological predisposition to those thoughts and to depression? In other words, it is possible that biology originally led to the negative thoughts and not vice versa. This is all speculation, but I think there is increasing evidence for this view.

The big problem with my point, though, is that at some point it does become a deterministic system: it’s possible that we could medicate away people’s free will. That would be an unacceptable outcome. There would be a lot of power in this knowledge and many opportunities for abuse. That’s an ethical discussion for a later time.

To summarize, I think that as we (speaking collectively) gain a more perfect understanding of the brain (and even of individual differences in the brain), we will be better able to eradicate and prevent many or most psychological disorders. We could potentially stop schizophrenia through genetic engineering or other modifications. Again, I’m not addressing whether or not we should, but I believe we will have the ability to at some point. This is why, at the moment, I lean more toward neuroscience than toward psychotherapy. Of course, tomorrow I could [I won’t] write a post that completely contradicts this one. As I said, this is a process. I think it’s important to argue both sides of the issue.

The Philosophy of Science

I wrote this response to someone who questioned the assertion I made (on a different website) that science is not impartial. Let me know what you think.

“I don’t really have room or time to get into a full philosophical discussion; this is a discussion that takes months. As a note, I’m not just making things up; I studied epistemology in college. One of the philosophical foundations of science (science is all based on philosophy, which is one reason that in the U.S. most science doctorates are PhDs – Doctor of Philosophy) is empiricism. I’ll quote from Wikipedia because in this case it is accurate.

“Empiricism is one of several competing views about how we know things, part of the branch of philosophy called epistemology, or “theory of knowledge”. Empiricism emphasizes the role of experience and evidence, especially sensory perception, in the formation of ideas, while discounting the notion of innate ideas.”

“In the philosophy of science, empiricism emphasizes those aspects of scientific knowledge that are closely related to evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world, rather than resting solely on a priori reasoning, intuition, or revelation. Hence, science is considered to be methodologically empirical in nature.”

There are other philosophies that compete with empiricism. Rationalism is one of those, although today some ideas from rationalism are combined with empiricism. Materialism (the view that all entities are matter and reducible to smaller entities, e.g., atoms) is another foundation of most science.

Because modern science is based on specific philosophies with specific assumptions (e.g., that all is matter), it cannot be completely impartial, because science (forgive the anthropomorphism) inherently disregards anything that is not based on the same assumptions and philosophies (e.g., religion). Science has one particular view of the world and holds that everything else is false, or at least unknowable. That’s not impartiality – that’s bias. It’s like Americans saying, “Our world view is the only correct world view.” Maybe it is true, but that does not make it less biased. Everything and everyone have biases, even the philosophies that form the foundation of science.

As I said, this is some pretty deep philosophy. People have been arguing over this for thousands of years and will be for thousands more.

One last example. We tend to believe that mathematics is perfect and unbiased. Kurt Gödel showed that it isn’t. Now, not everyone agrees with his conclusions, but he convincingly showed that any consistent formal mathematical system powerful enough to express arithmetic is incomplete – there are true statements it cannot prove, and it cannot prove its own consistency. Math does not equal science, but most science is founded on mathematical principles.

I answered your question, hopefully without coming across as a troll. As I said in my original post, I’m not trying to discredit science (science is my job), but blindly accepting that science is perfect, completely unbiased, and the only way to knowledge demonstrates as much faith in science as many people place in religion.”

After a reply that expressed complete disbelief (and also insulted my intelligence) 🙂 here’s my final response:

“I did not say that philosophy and science are the same; I just said that science is based on specific philosophies. As I said, it’s some pretty heavy stuff that most people (rightly) don’t care about. Again, I didn’t say philosophy and science are the same. The relationship (and this isn’t a perfect example) is more like philosophy:science::arithmetic:calculus.”

Am I completely off base here? I haven’t had extensive training in epistemology, but I’ve had a fair amount. I remember that in one of my classes some people just didn’t get it. They were very bright people; it’s just that philosophy requires a different way of thinking (not better, just different). It takes practice; I just happened to start having serious philosophical discussions with friends pretty early in school.

William James’ Legacy

[Image: William James in the 1890s]

William James entered the field of psychology not with a bang or an explosion but as the morning dew distilling upon fields of clover. He was a reluctant psychologist who did not even want to be called a psychologist, yet he forever changed the course of modern psychology. William James not only changed the course of American psychology; in some ways he was the Course. James did not produce original research and did not perform experiments, yet he became the driving force behind psychology. How did he accomplish this feat?

James was born to wealthy parents. His grandfather was one of the richest men in America, and William’s father inherited a considerable sum of money. William’s father did not fall into the frivolous failings of inherited money, though. He was very involved in his children’s lives, educating them himself in matters both temporal and spiritual. He was not an indulgent father but allowed the children to make their own decisions about life and to formulate their own ideas. He also provided them with opportunities to experience the world’s cultures and diversity. William and his family took a number of trips to Europe in order to be exposed to different languages, cultures, arts, and philosophies.

William initially became interested in art. He wanted to become an artist, but after studying for a time in Paris he decided that, while he was good, he was not good enough. So he decided to become a scientist. He started attending Harvard, where he worked with different scientists. He discovered that he abhorred scientific experimentation, finding it tedious. He appreciated the work of other scientists but did not want to do any experimentation himself. After graduation he started medical school at Harvard. While he enjoyed the subjects he studied, William did not want to be a practicing physician; he decided he liked philosophy best. By then James had also been exposed to the work of the great German psychologists like Wilhelm Wundt and was impressed by their research.

Building a Better Brain

Let’s look forward a number of years. Bioengineering is at the point where replacing people’s organs with lab-grown ones is standard procedure. Gone are the days of transplant patients taking anti-rejection medications for the rest of their lives. Transplanted organs are all manufactured using stem cells from the patient’s own body – from bone marrow, skin, or any number of other sources. New organs are rapidly grown using modified growth hormones to speed up their development. A complete new organ is grown within a few weeks, the surgery is performed, and the transplant patient is home within days. Because of the relatively low cost of such procedures, everyone has access to transplants. Replacing hearts, livers, lungs, kidneys, and other organs has increased life expectancy dramatically, with most people living well over 100 years. Scientists are on the verge of transplanting the first manufactured brain. Knowledge of neural networks and cognition is at the point where a person’s entire knowledge system and all of their memories can be downloaded and stored as a backup. Scientists are working on manufacturing an entire replica human body as a “clone” in case a person is seriously injured. While individual organs come fairly cheap, a whole body is prohibitively expensive. A large portion of the cost is the brain. Even though scientists have created working brains, their success rate is still only about 5% (but always getting better). They go through a lot of brains.

Some people use this new biotechnology to create backups of their bodies. Other people have started using it to enhance the performance of their existing bodies. In laboratory settings scientists are able to create organs that are effectively perfect. They are created under well-controlled conditions and don’t have to go through the gauntlet of normal development, with exposure to teratogens, fluctuations in nutrition, and all the other things that can affect development. Popular organs to replace are hearts and lungs; people are able to run faster than ever before thanks to more efficient hearts and lungs. Other people get new legs or arms with well-sculpted muscles. Still others receive nanotech implants to enhance normal biological performance. None of this is being done in the United States or the United Kingdom, but there are plenty of countries that don’t outlaw the procedures.

With body enhancement now common, many people want to enhance their brains. They want a new brain created with certain gyri a little bit bigger and the cortex a little bit thicker. Some researchers are working on improving the speed and efficiency of neurotransmission. Most of the improvements in brain design come from turning certain genes on and off at different points in development and providing the lab-grown brains optimal nutrients and stimulation. These enhancements can create brains that learn 1,000 times more in one-thousandth of the time.

I’ve taken a bit of liberty in my hypothetical treatment of bioengineering and biotechnology in the unspecified future. There is little, scientifically speaking, that stands in the way of us as humans eventually reaching this point. The question is, should we? Should we seek to create immortal and essentially all-knowing humans through science? Supposing humans can build better brains and bodies, should they control and manipulate natural biological processes to the extent that they can create “superbeings”? I’m not going to answer any of these questions; I just want to raise them. With our great advances in bioengineering, technology, and neuroscience, where do we draw the line, assuming we draw a line at all? Do we eradicate all developmental, genetic, and environmental diseases and disorders? Do we cure epilepsy, cancer, autism, Alzheimer’s disease, and every other disorder? Do we enhance some functioning, such as hearts or muscles, but not the brain?

With all advances in science, we always have to be mindful of the underlying morality and ethics of those advances. We need to make sure that our advances do not outpace our morals.

Descartes and Modern Psychology

Psychology is a field that traces its roots back thousands of years. In its earliest forms it was a subset of philosophy; Aristotle, Plato, and other Greek philosophers all posited psychological principles. Descartes is sometimes considered the founding father of modern philosophical psychology. He is possibly the most famous dualist; he believed that there was a split between mind and body. In his view, the mind influenced the body through the pineal gland, a small structure in the middle of the brain situated near the ventricles. Descartes believed that the pineal gland moved the “animal spirits” in the ventricles and sent them throughout the body, through the nerves, to control behavior and movement. This was not true, but at the time it seemed a very logical explanation, especially in light of the then-new discovery that the heart works like a pump for blood. The pineal gland was Descartes’ ideal structure for the interaction of mind and body because it sits in the middle of the brain and is a singular structure (most structures in the brain appear in both hemispheres; so, for example, there are two hippocampi, two caudates, two frontal lobes, etc.). Even with Descartes’ advances, psychology remained a branch of philosophy until the 1800s, when Wundt and other empiricists created experimental psychology laboratories. From there, the field of psychology grew exponentially into the major field it is today.

Many other people influenced psychology over the years – people such as William James, Charles Darwin, B. F. Skinner, Sigmund Freud, and Jean Piaget – but Descartes is the earliest of the “modern psychologists,” even though he was involved in so much more than psychology. Contrary to conventional wisdom, Descartes was a cart that came before the horse and led the way (pun intended: “Don’t put the cart {Descartes} before the horse.”).

Biological Determinism

As someone with a strong neurobiological orientation, I believe that the brain is the center of all behavior. What is the evidence for that belief? Remove someone’s brain and see if they have any behavior (note: I’m not endorsing this procedure; I’m merely postulating a hypothetical situation). Without the brain, there is no behavior. So the brain is necessary for behavior – but is it sufficient?

In psychology we often talk about necessary and sufficient conditions for behavior. That is, you may need a certain factor in order for a behavior to happen, but without other factors the behavior still will not occur. For example, you need water to live – it is necessary – but you also need food, so water alone is not sufficient. So, the brain is necessary for behavior, but can all behavior be explained solely by the brain? Another way of phrasing this question is, “Does biology determine all behavior?” The term for this belief is biological determinism.

To answer the question, we first have to investigate and uncover other potential influences on (causes of) behavior. If behavior is biologically determined, do people have free will? That is, do people really have the ability to consciously choose among different behaviors? Or are all behaviors simply determined at the neuronal (or genetic) level, with free will only an illusion? This post is an expansion of one of my previous posts concerning alternative assumptions to naturalism in neuroscience.

If you really believe that the brain (and, by reduction, the genes) is solely responsible for behavior, then you cannot believe that people have free will. You also cannot believe that the environment is directly responsible for behavior – it can influence behavior, but at the core your genes and your neurons create it. Alternatively, you can believe that humans have free will, that we can make choices because of or in spite of our biology. Agency can influence biology and biology can influence agency – they are not mutually exclusive categories. While the brain is a necessary condition for behavior, it is not sufficient; agency is also a factor in human behavior.

Alternative Assumptions to Naturalism in Neuroscience

This post is very different from anything I’ve previously written; it’s more philosophical than psychological and is an example of Theoretical and Philosophical Psychology, a small but important niche within psychology that provides critical analyses of the underlying assumptions [philosophies] of psychology and the related sciences. My post is not meant to attack the neurosciences (after all, that is my field of specialization); rather, it is meant to expose the philosophical underpinnings of neuroscience. The alternative assumptions I write about are not necessarily superior, just different. Feel free to contact me with any questions or if you are interested in the references I cite.

This post is an exposition of the naturalistic assumptions in the article An fMRI Study of Personality Influences on Brain Reactivity to Emotional Stimuli by Canli et al. (2001). It will also focus on alternative assumptions. I will first explore the assumption of materialism, one half of Descartes’ dualism, and contrast this assumption with a holistic monism. Then I will discuss biological determinism as well as an alternative assumption to it, namely agency.

Materialism accounts for one half of the Cartesian dualism (and thus has been termed a one-sided dualism), the theorized split between mind and matter. It is defined as the notion that “biological explanations will (eventually) be able to fully account for and explain…psychological phenomena” (Hedges, p. 3). Materialism assumes that biology is sufficient to explain behavior. This article is focused on “the neural correlates of emotion [and personality] in healthy people” (p. 33) by using brain imaging techniques. This is an example of materialism in that the authors are looking for “the biological basis [or an objective foundation] of emotion [a subjective phenomenon]” (p. 33). The authors’ assumption of materialism will become clearer with another example. Canli et al. state: “The similarity in the dimensional structure of personality and emotion is due to a common neural substrate where personality traits moderate the processing of emotional stimuli” (p. 33; italics added). What they are saying is that neurons (the brain) are the base and that emotional processing in the brain is affected by personality traits (which they state have a “common neural substrate” with emotions). This is a one-sided dualism—the researchers attempt to explain the subjective experiences of the mind (i.e., emotion) in terms of the material, or biological, body while not including the mind in their methods.

The authors of this study sought to understand emotional responses in terms of neuroimaging. This is an example of method-driven science in that the researchers “ignored…[the] notion of the mind [being immaterial and unpredictable] and focused…on the body” (Slife, p. 13). There is no way to image emotions directly, but by assuming that they are centered in biological reactions, these researchers were able to use traditional scientific methods to measure those reactions. This materialism, or one-sided dualism, has its shortcomings. An alternative way to approach the hypothesis of how personality serves as a “middleman” between the brain and emotions is to use the assumption of a holistic monism. Whereas the authors assume that the brain (body) is the foundation of emotional experience and thus sufficient for that experience, with a monistic assumption the researchers would recognize both body and mind as necessary but not separately sufficient. This would change their study because they would look at a more inclusive picture of people, not just biology and mind but context as well. All of these conditions interact and are only understood in relation to one another. The authors would also consider qualitative measures of life experience and meaning and research those, taking a pluralistic approach.

Another prevalent assumption, which is inseparable from materialism and is in fact a subset of it, is that of biological determinism. Whereas my materialism section focused on the authors’ attempts to explain subjective experiences by their “objective” methods, this one focuses on how they explain varying emotions as caused by variations in biological factors. The authors end their paper on a strong deterministic note: “The different brain activation patterns that these pictures produce…may result in two different subjective interpretations of the identical objective experience” (p. 39). Although they hedge their statement with a may, what they are saying is that their subjects all had the same “objective experience” and that differences in how their brains responded caused the variation in subjective emotional interpretation. They imply that people’s interpretations are determined by biology, which rules out agency.

Alternatively, when viewing this article according to holistic monism, specifically agency, there would be many changes in it. First off, it would not be a problem to recognize the role agency plays in the body. The authors would assume that the body affects agency and vice versa – they constitute each other. Instead of “different brain activation patterns” (p. 39) causing different interpretations of emotion, it could be that the interpretations affect the neuronal firing (or there could be an interplay of both). Also, with an alternative assumption, the following hypothesis would no longer be deterministic: “Extraversion is associated with greater brain reactivity to positive” (p. 34). The authors imply that personality traits are biologically based (see paragraph 2 of this paper) – even if behaviorally influenced; therefore, biology causes personality, which causes changes in brain reaction (which are experienced subjectively by people as emotions). Alternatively, this can be explained by “agentic factors” (Slife, p. 25), such as people choosing (even unconsciously) how to respond to the pictures. Also, instead of personality being determined by the brain, manifestations of agency (choices) in a context (e.g., experiences) could shape personality.