Sunday, March 01, 2009

Moral neuropolitics and ideology

An interesting paper was recently brought to my attention. It's all worth reading, but I want to focus on one specific passage, because I think it spotlights a very important question, and provides a springboard for discussion of a number of significant issues in political psychology.

This is the paper:

We Empathize, Therefore We Are: Toward a Moral Neuropolitics

It's by Gary Olson, who is currently Chair of the Political Science Department at Moravian College, Bethlehem, PA.

Olson begins by pointing out the important human capacity to empathize with the experienced injustice and suffering of others. Citing the empathy felt by people who viewed art depicting victims of the transatlantic slave trade, he connects this with recent neuroscience having to do with mirror neurons:
The abolitionist's most potent weapon was the dissemination of drawings of the slave ship Brooks. Rediker asserts that these images were "to be among the most effective propaganda any social movement has ever created" (p. 308).

Based on recent findings from neuroscience we can plausibly deduce that the mirror neurons of the viewer were engaged by these images of others suffering. The appeal was to the public's awakened sense of compassion and revulsion toward graphic depictions of the wholesale violence, barbarity, and torture routinely practiced on these Atlantic voyages. Rediker notes that the images would instantaneously "make the viewer identify and sympathize with the 'injured Africans' on the lower deck of the ship . . ." while also producing a sense of moral outrage (p. 315, Olson, 2008).

In our own day, the nonprofit Edge Foundation recently asked some of the world's most eminent scientists, "What are you optimistic about? Why?" In response, the prominent neuroscientist Marco Iacoboni cited the proliferating experimental work into the neural mechanisms that reveal how humans are "wired for empathy." This is the aforementioned discovery of the mirror neuron system or MNS. The work shows that the same affective brain circuits are automatically mobilized upon feeling one's own pain and the pain of others.

Iacoboni's optimism is grounded in his belief that with the popularization of scientific insights, these findings in neuroscience will seep into public awareness and " . . . this explicit level of understanding of our empathic nature will at some point dissolve the massive belief systems that dominate our societies and that threaten to destroy us" (Iacoboni, 2007, p. 14).

Given that background, the crucial passage in Olson's paper seems to me to be this:
That said, one of the most vexing problems that remains to be explained is why so little progress has been made in extending this empathic orientation to distant lives, to those outside certain in-group moral circles. That is, given a world rife with overt and structural violence, one is forced to explain why our deep-seated moral intuition doesn't produce a more ameliorating effect, a more peaceful world. Iacoboni suggests this disjuncture is explained by massive belief systems, including political and religious ones, operating on the reflective and deliberate level. As de Waal reminds us, evolutionarily, empathy is the original starting point out of which sprang culture and language. But over time, the culture filters and influences how empathy evolves and is expressed (de Waal, 2007, p. 50). These belief systems tend to override the automatic, pre-reflective, neurobiological traits that should bring people together.

Right off the bat, of course, we have a problem. As I've just been writing about (here, here), there may be serious scientific difficulties with the whole concept of a "mirror neuron system" in humans.

Olson says "we can plausibly deduce that the mirror neurons of the viewer were engaged by these images of others suffering." He does mention fairly recent (2007-8) studies that seemed to indicate the existence of mirror neurons in humans, but the skeptical opinions of neuroscientists like Gregory Hickok, whose very recent paper I wrote about, suggest that this whole issue is still up in the air.

But perhaps the issue of mirror neurons isn't all that important. One can question whether humans have mirror neurons, and whether such neurons (if they exist) actually account for empathy, but surely it would be hard to dispute that humans feel empathy at all.

Or would it? One does have to question how often empathy, even if it exists, plays a dominant role in human affairs. Sometimes it does, as the ending of the transatlantic slave trade and the eventual abolition of slavery in the U.S. attest. On the other hand, various forms of slavery still persist in the world, along with all manner of other ills: crime, genocide, war, territorial occupations, and economic exploitation.

Nevertheless, let's leave that whole question aside. Human empathy does exist in many circumstances (even if we don't have an adequate neurobiological explanation of it), yet even so, there seem to be social and cultural forces that all too often are able to override empathy. Olson evidently agrees with Iacoboni in identifying the responsible factor as "massive belief systems, including political and religious ones, operating on the reflective and deliberate level."

Let's refer to those political and religious belief systems as "ideology".

The key question, then, is how to explain the substantial power that ideology has over human social behavior – not just behavior that is culturally conditioned, but even behavior that has evolutionary, biological roots, such as the empathy that derives (perhaps) from mirror neurons. That is, we have to explain how "belief systems tend to override the automatic, pre-reflective, neurobiological traits that should bring people together."

It seems to me that this is a rather important open question that political science ought to be addressing.

Because we are dealing with phenomena that can override neurobiological traits, I think we have to look at explanations that also refer to neurobiology. It just makes the most sense to consider the problem as a whole at that level.

How is it that ideology has such a compelling influence over people? What is it that ideology has to offer? How does it fit with underlying psychological factors?

We need some frame of reference to consider these issues. For the sake of concreteness, I'm going to proceed here using a circle of ideas championed by Jonathan Haidt as a convenient reference frame.

What Haidt has proposed is a "moral foundations theory" that claims to identify five "fundamental moral values" that are held by a large number of people, to greater or lesser extents, in a wide variety of cultures around the world (and in history). I think his list is incomplete as a comprehensive foundation for "morality" in general, and there are valid questions about whether some aspects of the "values" he describes even merit consideration as part of a fundamental set. Nevertheless, each of the "values" does have its devoted adherents, and so ipso facto plays a role in social behavior in those cultures where the "value" is recognized.

Here, according to Wikipedia, are the five "fundamental moral values":
  1. Care for others, protecting them from harm. (He also referred to this dimension as Harm.)
  2. Fairness, Justice, treating others equally.
  3. Loyalty to your group, family, nation. (He also referred to this dimension as Ingroup.)
  4. Respect for tradition and legitimate authority. (He also referred to this dimension as Authority.)
  5. Purity, avoiding disgusting things, foods, actions.

Further references (listed at the end of this article): [1], [2], [3], [4].

I'm not ready to make an overall evaluation of Haidt's ideas, but let's look at them and see where they might lead.

On the basis of cross-cultural research, Haidt came up with these five distinguishable biological bases of morality. If nothing else, they should be factors that lend significant force and impact to the ideologies that leverage them.
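Haidt's own surveys famously find that people of different political persuasions weight these foundations differently. Just to fix ideas, here is a minimal sketch in Python that treats a moral outlook as a profile of weights over the five foundations; the profiles, the weights, and the crude distance measure are all my own inventions for illustration, not Haidt's instrument or data.

  # A toy representation, not Haidt's actual methodology or data: treat a
  # moral outlook as a set of weights over the five foundations. All the
  # numbers below are invented purely for illustration.

  FOUNDATIONS = ("harm", "fairness", "ingroup", "authority", "purity")

  def profile(weights):
      """Pair each foundation with a 0-to-1 endorsement weight."""
      assert len(weights) == len(FOUNDATIONS)
      return dict(zip(FOUNDATIONS, weights))

  # Two hypothetical outlooks that lean on different foundations.
  outlook_a = profile((0.9, 0.9, 0.3, 0.2, 0.1))   # leans on harm and fairness
  outlook_b = profile((0.6, 0.6, 0.8, 0.8, 0.7))   # weights all five more evenly

  def moral_distance(p, q):
      """Crude measure of disagreement: mean absolute difference in weights."""
      return sum(abs(p[f] - q[f]) for f in FOUNDATIONS) / len(FOUNDATIONS)

  print(moral_distance(outlook_a, outlook_b))

The point of the toy is simply that an ideology can be read as a characteristic weighting of the foundations, and that two outlooks can agree on care and fairness while still ending up far apart overall.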

The first two factors are (quoting from [1]) "(i) harm, care, and altruism (people are vulnerable and often need protection) or (ii) fairness, reciprocity, and justice (people have rights to certain resources or kinds of treatment)."

Haidt sees these as having evolutionary origins in kin selection and the mechanism of reciprocal altruism.

I think they could have other evolutionary origins as well. In addition, I find it a little difficult to distinguish these two factors. Both encode an obvious "golden rule" sort of morality. However that may be, it seems that mirror neurons, or something equivalent, might play a role in the neurobiology of these factors, which both relate to "empathy".
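Reciprocal altruism, in particular, is usually illustrated with the iterated prisoner's dilemma, where a strategy like tit-for-tat shows how cooperation can pay once interactions repeat. Here is a minimal sketch of that standard toy model in Python; the payoff values and round count are the conventional textbook choices, nothing specific to Haidt or Olson.

  # Minimal iterated prisoner's dilemma, the standard toy model behind
  # reciprocal altruism. Payoffs use the conventional values (mutual
  # cooperation 3, mutual defection 1, exploiting a cooperator 5, being
  # exploited 0); the round count is arbitrary.

  PAYOFF = {  # (my move, their move) -> my payoff
      ("C", "C"): 3, ("C", "D"): 0,
      ("D", "C"): 5, ("D", "D"): 1,
  }

  def tit_for_tat(history):
      """Cooperate first, then copy whatever the opponent did last round."""
      return "C" if not history else history[-1][1]

  def always_defect(history):
      return "D"

  def play(strategy_a, strategy_b, rounds=10):
      history_a, history_b = [], []   # each entry: (my move, their move)
      score_a = score_b = 0
      for _ in range(rounds):
          move_a, move_b = strategy_a(history_a), strategy_b(history_b)
          score_a += PAYOFF[(move_a, move_b)]
          score_b += PAYOFF[(move_b, move_a)]
          history_a.append((move_a, move_b))
          history_b.append((move_b, move_a))
      return score_a, score_b

  print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
  print(play(tit_for_tat, always_defect))  # (9, 14): the defector wins the pair, but both do worse than mutual cooperators

Nothing in this little model requires mirror neurons; it's just the game-theoretic logic that makes reciprocity a plausible evolutionary route to fairness-like intuitions.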

So another important question we can ask is: what are the evolutionary origins of mirror neurons (or equivalents)? Since other primates, and indeed other animals (e.g., dogs), seem to have something functionally like mirror neurons, and also notions of fairness and justice that resemble human notions, we probably need to look back further in time than the origins of hominids.

It seems to me that something like mirror neurons should be useful equipment for a member of any species that engages in intra-species combat, which is probably a large percentage of species. That doesn't mean many species necessarily have mirror neurons, but it does mean the evolution of mirror neurons could be a useful adaptation for many species. So something like mirror neurons could well be a primary evolutionary development, not a mere side effect of something else. And if empathy, altruism, etc. have roots in such a mechanism, they too are at least useful by-products of evolution, even if they were not directly adaptive in themselves. (Though there are plenty of reasons to think they are adaptive in themselves, especially if you believe in group selection.)

But perhaps the more interesting aspect of Haidt's ideas comprises the three other factors he regards as basic to many human moral codes.

Quoting again from [1], "In addition to the harm and fairness foundations, there are also widespread intuitions about ingroup-outgroup dynamics and the importance of loyalty; there are intuitions about authority and the importance of respect and obedience; and there are intuitions about bodily and spiritual purity and the importance of living in a sanctified rather than a carnal way."

Let's look at these separately. First up is group loyalty, preference for the ingroup, and fear/aversion towards the outgroup. This is pretty clearly, at least in part, a kin selection sort of thing.

There is also another clever evolutionary argument for this factor, spelled out by Choi and Bowles in [5]. They call the idea "parochial altruism", and they present computer simulation evidence for it. It has the interesting property of being able to explain the otherwise paradoxical fact that humans are a fairly warlike species, in spite of countervailing empathetic tendencies. I wrote about it here at some length. See also [6].
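Their actual model is considerably more elaborate, but the basic tension can be caricatured in a few lines of code: within a group, parochial altruists pay a personal cost, while between groups, having more of them helps win conflicts. What follows is my own toy sketch, not the Choi-Bowles simulation, and every parameter in it is invented.

  import random

  # Toy caricature of parochial altruism (not the Choi and Bowles model).
  # Within-group selection erodes the costly trait; between-group conflict
  # favors groups that have more of it. Which force wins depends on the
  # (entirely made-up) parameters below.

  random.seed(0)
  N_GROUPS = 20
  COST = 0.02            # per-generation within-group disadvantage of the trait
  CONFLICTS_PER_GEN = 4  # how often groups clash
  DRIFT = 0.03           # random between-group variation per generation

  # Each group is represented only by its fraction of parochial altruists.
  groups = [0.2] * N_GROUPS

  for generation in range(300):
      # Within-group selection plus drift.
      groups = [min(1.0, max(0.0, p - COST * p * (1 - p) + random.gauss(0, DRIFT)))
                for p in groups]
      # Between-group conflict: the group with more parochial altruists tends
      # to win, and the loser ends up with the winner's composition.
      for _ in range(CONFLICTS_PER_GEN):
          a, b = random.sample(range(N_GROUPS), 2)
          winner, loser = (a, b) if groups[a] >= groups[b] else (b, a)
          groups[loser] = groups[winner]

  print(f"final mean fraction of parochial altruists: {sum(groups) / N_GROUPS:.2f}")

When conflict is frequent enough relative to the individual cost, the trait can spread despite that cost, which is the paradox-dissolving point of the parochial altruism argument.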

There are, of course, other evolutionary arguments for group loyalty, such as basic considerations of group selection: successful groups should tend to be cohesive and behave something like kin groups, even in the absence of near kinship. This would be especially true in times of resource scarcity, which were probably not infrequent.

Among the neurobiological bases of group loyalty would be any neural capabilities that enable the detection of cheating and disloyalty. These need not be discrete neural systems or brain modules. They might be just general capabilities that enable individuals to remember the past behavior of others and reason about it in such a way as to recognize signs of loyalty or disloyalty to the group. Capability for cheater detection might be a general learning ability, like the ability to learn language. Individuals need not be born being good cheater detectors. They just need to be able to learn how to be good at it.
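Just to underline how modest the required machinery might be, here is a stripped-down sketch of cheater detection as simple bookkeeping plus a threshold. It is not drawn from any actual psychological or neurobiological model; the Reputation class, the tolerance value, and the example interactions are all invented for illustration.

  from collections import defaultdict

  # Cheater detection as plain bookkeeping rather than an innate module:
  # remember how each partner has behaved and withhold trust from anyone
  # who defects too often. The 0.25 tolerance is arbitrary; in a learning
  # account it would itself be tuned by experience.

  class Reputation:
      def __init__(self, tolerance=0.25):
          self.tolerance = tolerance
          self.records = defaultdict(lambda: {"coop": 0, "defect": 0})

      def observe(self, partner, cooperated):
          self.records[partner]["coop" if cooperated else "defect"] += 1

      def trust(self, partner):
          record = self.records[partner]
          total = record["coop"] + record["defect"]
          if total == 0:
              return True  # give strangers the benefit of the doubt
          return record["defect"] / total <= self.tolerance

  reputation = Reputation()
  for cooperated in (True, True, False, False, False):
      reputation.observe("neighbor", cooperated)
  print(reputation.trust("neighbor"))   # False: too many defections on record
  print(reputation.trust("stranger"))   # True: no record yet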

I'm not aware of neurobiological research into cheater detection mechanisms, or into other mechanisms that could support group loyalty. Studies of loyalty, cheater detection, and the conditions under which people extend trust to others should also connect up easily with the importance of "patriotism" and "solidarity" in various ideologies. This would seem to be a great area for future research.

The bottom line here is that there are very good reasons to expect ingroup/outgroup dynamics to have neurobiological underpinnings, and that these factors would strongly influence ideology. ("Deutschland uber alles." "Defend the fatherland." Etc.)

Turning to the next factor Haidt mentions: authority and the importance of respect and obedience. The psychological power of authority is well established; the Milgram experiment provides a glaring example of how social pressure can override any innate sense of empathy for others. Zimbardo's Stanford prison experiment is also relevant.

Respect for and obedience to authority pretty clearly have evolutionary roots in any social species that has a hierarchically organized social life – which includes many species, even insects. (Some very recent research shows that even ants will attack other ants that don't follow the rules.)

Interestingly, though, the degree of respect for authority varies a lot among humans. (But then, so too does the propensity to cheat.) Political scientists have long been familiar with Theodor Adorno's concept of the "authoritarian personality".

Again, respect for and obedience to authority are key features of many powerful ideologies – features that easily override empathy-based respect for peers.

This is another area that calls for much more neurobiological research. What characteristics of our neurobiology equip us to recognize and defer to authority? Is it just fear of the consequences of disobeying actors with substantial social or physical power? Are obedient personalities just the result of a kind of "Stockholm syndrome"?

Haidt's last factor is "intuitions about bodily and spiritual purity." This is, to my mind, the murkiest of the factors. Clearly, humans have evolved good instincts for avoiding contaminated or spoiled food and other sources of pathogens. Exactly how that bootstraps into elaborate ideologies featuring supernatural beings is a much bigger question.

I think there are quite a few additional factors that go into the social psychology of religion and its ideologies, including group loyalty and obedience to authority. And research into such factors seems to be pretty active these days, though not primarily into neurobiological factors. This is a large area of research all by itself. So I don't have clear ideas about how important "purity" is as a factor, by itself, that influences ideology.

Pascal Boyer has an interesting recent essay in Nature ([7]). He writes, "So is religion an adaptation or a by-product of our evolution? Perhaps one day we will find compelling evidence that a capacity for religious thoughts, rather than 'religion' in the modern form of socio-political institutions, contributed to fitness in ancestral times. For the time being, the data support a more modest conclusion: religious thoughts seem to be an emergent property of our standard cognitive capacities."

As an aside, this suggests that political behavior in general, and specific ideologies, may also turn out to be emergent properties of our standard cognitive capacities. And that's a disquieting thought. Those capacities were shaped in a time when humans were far fewer in number and had much less ability to cause large-scale problems for themselves and the rest of the world. Our inherited cognitive traits may therefore produce less sanguine outcomes today than they did in the past. In particular, religion as a common sort of ideology, and Haidt's other moral predispositions, may be less beneficial for humans now than they once were.

One token of this may be seen in moral principles that do not seem to have deep roots in evolution and neurobiology: for example, aversion to war, a commitment to truthfulness and honesty in dealing with others, sensitivity and aversion to manipulative behavior on the part of social elites, and respect and concern for the natural environment. Such principles don't even appear in Haidt's scheme.

Alternative reference frame: fear and emotions in general

All that said, Haidt's ideas are not the only way to approach the question of the neurobiological bases of ideology. Another distinct approach would center on the importance of the emotion of fear. There is, of course, a voluminous body of research on the underpinnings of fear and its opposite (trust), as mediated by anatomical structures like the amygdala and the limbic system in general.

I've just summarized a number of previous comments on this topic here. Since we're concerned with ideology in this note, it seems especially worth observing the similarities between beliefs about government and about religion, in particular the significant role that fear plays in both (see here).

Fear obviously plays a role in practical politics. How it interacts with organized ideologies is less clear. Certainly, fear of death or great harm is enough to motivate ideologies that feature institutions of authority that "protect" the populace. In any case, fear in some form or other is a strong motivator, another factor that can easily override an individual's healthier empathetic instincts.

Alternative reference frame: personality theory

Yet another direction of possible research involves trying to relate specific personality traits to ideological preferences. Perhaps the best supported of such findings could provide clues as to underlying mechanisms that link psychological tendencies to ideological features.

However, I'm skeptical. Personality traits have been defined empirically, by looking at the labels people tend to use to describe other people. The most widely accepted scheme, the "Big Five", is based on studies of language usage, in which factor analysis is used to group the trait labels given to people who exhibit particular behaviors into a small number of broad dimensions. So the scheme is entirely driven by data of a particular kind, rather than by theory.
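For readers unfamiliar with the method, here is a toy version of that factor-analytic move, with invented adjectives and synthetic data (using scikit-learn purely as a convenience): ratings that correlate across people get bundled into a common factor, and the "traits" are whatever bundles emerge.

  import numpy as np
  from sklearn.decomposition import FactorAnalysis

  # Synthetic illustration of the lexical / factor-analytic approach. Two
  # hidden traits drive ratings on six invented adjectives; factor analysis
  # then recovers which adjectives hang together. None of this is real data.

  rng = np.random.default_rng(0)
  n_people = 500
  sociability = rng.normal(size=n_people)   # hidden trait 1
  orderliness = rng.normal(size=n_people)   # hidden trait 2

  adjectives = ["talkative", "outgoing", "lively", "tidy", "punctual", "thorough"]
  ratings = np.column_stack(
      [sociability + 0.5 * rng.normal(size=n_people) for _ in range(3)] +
      [orderliness + 0.5 * rng.normal(size=n_people) for _ in range(3)]
  )

  fa = FactorAnalysis(n_components=2, random_state=0).fit(ratings)
  for adjective, loadings in zip(adjectives, fa.components_.T):
      print(f"{adjective:10s} loadings: {loadings.round(2)}")
  # The first three adjectives load heavily on one factor, the last three on
  # the other: the "traits" are whatever structure the ratings share.

The point is just that the Big Five dimensions are summaries of rating data, not constructs derived from any theory of underlying mechanism.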

That raises two problems. First, there is little logical relationship between the traits (or the behavioral characteristics associated with them) and specific ideologies. This is in contrast with Haidt's moral foundations, which do connect reasonably well with ideologies. While empirical correlations between personality traits and ideologies have been found, they usually aren't very strong.

The second problem is that there's little apparent connection between personality traits and neurobiology. Perhaps that will change as more laboratory work is done that investigates the underpinnings of emotions and behavior, but I don't have the sense that clarity is close at hand.

The net result is that personality traits aren't an obvious way to make connections with either ideology or neurobiology.

Conclusion

All in all, it certainly looks like there's a huge need for research to explore how neurobiology interacts with social behavior, politics, and ideology. Understanding the potential role of something like mirror neurons is certainly important. But I think there's a whole lot more we need to understand, especially concerning the darker sides of human nature.

References and further reading:

[1] The New Synthesis in Moral Psychology – 5/18/07 Science review article by Jonathan Haidt

[2] The Roots of Morality – 5/9/08 Science News Focus article by Greg Miller

[3] Is ‘Do Unto Others’ Written Into Our Genes? – 9/18/07 New York Times article by Nicholas Wade

[4] The Moral Instinct – 1/13/08 New York Times Magazine article by Steven Pinker

[5] The Coevolution of Parochial Altruism and War – 10/26/07 research paper in Science by Jung-Kyoo Choi and Samuel Bowles

[6] The Sharp End of Altruism – 10/26/07 Perspectives article in Science by Holly Arrow

[7] Being human: Religion: Bound to believe? – 10/23/08 essay in Nature by Pascal Boyer
