Sunday, June 27, 2010

Creativity and mental illness

The association between creativity and mental illness is something of a cliché – but that doesn't mean there's nothing to it. Standard examples include Vincent van Gogh, Robert Lowell, and John Nash.

There has been a good deal of research into the connection, along with many biographical accounts of famous creative people who also suffered from mental illness. But the neurobiological details are emerging only slowly. After all, our understanding of the biological roots of either creativity or mental illness remains fairly rudimentary.

However, one recent study does add some tantalizing clues.

Thinking Outside a Less Intact Box: Thalamic Dopamine D2 Receptor Densities Are Negatively Related to Psychometric Creativity in Healthy Individuals
Several lines of evidence support that dopaminergic neurotransmission plays a role in creative thought and behavior. Here, we investigated the relationship between creative ability and dopamine D2 receptor expression in healthy individuals, with a focus on regions where aberrations in dopaminergic function have previously been associated with psychotic symptoms and a genetic liability to schizophrenia. Scores on divergent thinking tests (Inventiveness battery, Berliner Intelligenz Struktur Test) were correlated with regional D2 receptor densities, as measured by Positron Emission Tomography, and the radioligands [11C]raclopride and [11C]FLB 457. The results show a negative correlation between divergent thinking scores and D2 density in the thalamus, also when controlling for age and general cognitive ability. Hence, the results demonstrate that the D2 receptor system, and specifically thalamic function, is important for creative performance, and may be one crucial link between creativity and psychopathology. We suggest that decreased D2 receptor densities in the thalamus lower thalamic gating thresholds, thus increasing thalamocortical information flow. In healthy individuals, who do not suffer from the detrimental effects of psychiatric disease, this may increase performance on divergent thinking tests. In combination with the cognitive functions of higher order cortical networks, this could constitute a basis for the generative and selective processes that underlie real life creativity.

Executive summary: There is a correlation between performance on a part of a common psychological test for creativity and a certain property of neurons in a brain structure called the thalamus. The association with mental illness, specifically schizophrenia, is that the same neural abnormality in the same part of the brain has also been found to correlate with various symptoms of schizophrenia.

Let's look at creativity first. It's often defined, to quote from the research paper, as "the ability to produce work that is at the same time novel and meaningful, as opposed to trivial or bizarre". A creative work should be original and unexpected, but it should also be more than just randomly different from the ordinary. It should also impress us as insightful or solve a difficult problem.

So there are several abilities a creative person should possess. They don't necessarily correlate with each other, but all or most should be present for "true" creativity. A creative artist, for instance, should be inventive and original, but also have good artistic skills. As far as the present research is concerned, we're dealing just with the aspect of creativity that comprises novelty and originality.

The psychological test used in this research is called the "Berliner Intelligenz Struktur Test". It's a general intelligence test, and it consists of several parts. One part is the "Inventiveness battery", and the specific ability it measures is called "divergent thinking".

Even within the divergent thinking component, several characteristics can be distinguished. The test may ask the participant, for example, to think of as many reasonable uses as possible for an object like a brick. The characteristics that might be scored include:
Fluency – the number of valid responses; Originality – how rare the participant's responses were among the responses of the rest of the sample; Flexibility – the number of semantic categories produced; Switching – the number of shifts between semantic categories; and Elaboration – how extensive each response is (if the task involves producing more than single words).

To do well on this test, a subject must be able to quickly produce valid responses that are unobvious and diverse in nature, not just variations on a few themes.
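For a concrete (and entirely hypothetical) illustration of how such responses might be scored, here is a Python sketch. The responses, category assignments, sample counts, and scoring formulas are all invented for the example; real scoring protocols rely on trained raters and the actual response distribution of the sample.

```python
# Hypothetical scoring sketch for a divergent-thinking task ("uses for a brick").
# All data and category labels below are invented for illustration.

def score_responses(responses, categories, sample_counts, sample_size):
    """Score one participant's ordered list of valid responses.

    categories    -- dict mapping each response to a semantic category
    sample_counts -- dict: how many participants in the sample gave each response
    sample_size   -- total number of participants in the sample
    """
    fluency = len(responses)
    # Originality: rarer responses (given by fewer participants) score higher.
    originality = sum(1 - sample_counts.get(r, 0) / sample_size for r in responses)
    cats = [categories.get(r, "other") for r in responses]
    flexibility = len(set(cats))                       # distinct semantic categories
    switching = sum(1 for a, b in zip(cats, cats[1:]) if a != b)
    return {"fluency": fluency, "originality": round(originality, 2),
            "flexibility": flexibility, "switching": switching}

scores = score_responses(
    responses=["doorstop", "paperweight", "hammer", "garden border"],
    categories={"doorstop": "weight", "paperweight": "weight",
                "hammer": "tool", "garden border": "decoration"},
    sample_counts={"doorstop": 40, "paperweight": 25, "hammer": 10,
                   "garden border": 2},
    sample_size=50)
print(scores)  # {'fluency': 4, 'originality': 2.46, 'flexibility': 3, 'switching': 2}
```

Note how the dimensions come apart: four responses in only three categories, with two category switches, and the rare "garden border" response contributing most of the originality score.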

Previous research had established that divergent thinking is influenced by the "dopaminergic" neural system, i. e., neurons whose primary neurotransmitter is dopamine. Specifically, there is a correlation between divergent thinking (as measured by the test just described) and certain variants of the dopamine D2 receptor. The present research further narrows down the relationship.

We've discussed dopamine before (list). It is involved in quite an impressive number and diversity of psychological phenomena, including appetite, addiction, risk-taking, memory, and trust. Some abnormalities of the dopaminergic system are also implicated in pathologies such as ADHD, Parkinson's disease, depression, and schizophrenia (dum-da-dum-dum).

Indeed, because dopamine is involved in so many functions, therapies for certain dopamine-related disorders can cause side effects in seemingly unrelated areas. For example, Parkinson's disease results from insufficient dopamine activity, but treatments that raise dopamine levels can cause other problems, such as pathological gambling, compulsive shopping, binge eating and other impulse control disorders. (Ref.: here.)

The reason that dopaminergic neuron abnormalities have such diverse effects is that dopaminergic neurons are common in a number of specialized areas of the brain. A dopamine abnormality will therefore affect whatever function such an area is involved in.

As far as divergent thinking is concerned, there are two brain areas of particular interest: the striatum and thalamus. Many neurons in both regions have D2 receptors. And interestingly enough, these regions are also linked with schizophrenia. As the research paper notes,
[N]etworks relevant to divergent thinking, i.e. structures and processes in associative corticostriatal-thalamocortical loops overlap to a great extent with regions and networks affected in schizophrenia and bipolar disorder. Furthermore, dopamine is known to influence processing in these networks and alterations in dopaminergic function and activity of D2 receptors have been linked to both positive and negative psychotic symptoms. Two regions appear to be of particular interest in this context: the thalamus and the striatum. Several studies have shown thalamic D2BP to be reduced in drug-naïve schizophrenia patients. Moreover, D2BP in subregions of the thalamus was found to be negatively related to total symptoms, general symptoms, positive symptoms, hostility and suspiciousness as well as grandiosity.

(D2BP refers to D2 "binding potential", which depends on the number density of D2 receptors and their ability to bind dopamine.)

Based on the known facts, the researchers decided to look for correlations between a measure of divergent thinking and D2BP in the thalamus and the striatum. What they found was that, indeed, there was a significant (p=.013) negative correlation, in a relatively small sample of healthy (non-schizophrenic) individuals, between a measure of divergent thinking and D2BP in the thalamus. There was not a similar correlation in the striatum.
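The statistical move here – correlating two variables after controlling for others (age and general cognitive ability) – is a partial correlation, which can be computed by correlating regression residuals. Here is a minimal sketch of that kind of analysis, using synthetic data rather than the study's; the sample size, effect sizes, and variable names are invented for illustration.

```python
# Sketch of a partial correlation between divergent-thinking scores and
# thalamic D2 binding potential, controlling for age and general cognitive
# ability. Data are synthetic, not the study's.
import numpy as np

rng = np.random.default_rng(0)
n = 14                                    # the study's sample was similarly small
age = rng.uniform(20, 40, n)
g = rng.normal(100, 15, n)                # general cognitive ability
d2bp = rng.normal(3.0, 0.4, n)            # thalamic D2 binding potential
# Synthetic divergent-thinking score, negatively tied to D2BP, plus noise.
dt = 50 - 8 * d2bp + 0.05 * g + rng.normal(0, 1, n)

def residuals(y, covariates):
    """Residuals of y after regressing out covariates (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial correlation = correlation of the two sets of residuals.
r = np.corrcoef(residuals(dt, [age, g]), residuals(d2bp, [age, g]))[0, 1]
print(f"partial r = {r:.2f}")             # negative, as in the study
```

The residual trick is what "controlling for age and general cognitive ability" amounts to: any part of either variable predictable from the covariates is removed before the correlation is taken.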

In other words, non-schizophrenic people who had lower dopamine activity in the thalamus tended to have higher divergent thinking scores. This is pretty interesting in itself, especially since other studies have shown lower D2BP in the thalamus to be correlated with higher scores for pathological symptoms in schizophrenics.

What, then, is known about the function of the thalamus? It's a paired structure, symmetric about the brain's midline, situated between the cerebral cortex and the midbrain. It has a number of functions, especially as a relay station between the cortex and various subcortical areas. In particular, all sensory signals (except smell) pass through substructures of the thalamus on their way to the part of the cortex that processes them. The thalamus is also thought to be important for regulation of sleep, wakefulness, and consciousness – which makes sense, as it's in a position to control what sensory signals get through.

But why do the dopaminergic neurons of the thalamus have something to do with divergent thinking? The present research doesn't explicitly say anything about that. But the researchers suggest some hypotheses:
Based on the current findings, we suggest that a lower D2BP in the thalamus may be one factor that facilitates performance on divergent thinking tasks. The thalamus contains the highest levels of dopamine D2 receptors out of all extrastriatal brain regions. Decreased D2BP in the thalamus has been suggested, firstly, to lower thalamic gating thresholds, resulting in decreased filtering and autoregulation of information flow, and, secondly, to increase excitation of cortical regions through decreased inhibition of prefrontal pyramidal neurons. The decreased prefrontal signal-to-noise ratio may place networks of cortical neurons in a more labile state, allowing them to more easily switch between representations and process multiple stimuli across a wider association range.

Stated more clearly, perhaps, though less precisely, it seems that lower dopamine activity in the thalamus may allow a freer flow of associations to reach the cortex, which is where higher-level cognition takes place. At the same time, however, if this effect is too strong, the result could be cortical activity that is, pathologically, too chaotic.




This post was chosen as an Editor's Selection for ResearchBlogging.org
de Manzano, Ö., Cervenka, S., Karabanov, A., Farde, L., & Ullén, F. (2010). Thinking outside a less intact box: Thalamic dopamine D2 receptor densities are negatively related to psychometric creativity in healthy individuals. PLoS ONE, 5(5). DOI: 10.1371/journal.pone.0010670



Further reading:

Creativity linked to mental health (5/18/10)

Link Between Creativity and Mental Illness Revealed (5/19/10)

More brains and bonkers connection: thinking out of a broken box (5/24/10)

Dopamine receptor binding potential in the thalamus and creativity (6/1/10)

Creative madness (8/1/10)

Related articles:

Sugar can be addictive (1/11/09)

Dopamine and obesity (11/17/08)


Wednesday, August 12, 2009

How does language shape the way we think?

A more neutral way to pose the question might be "Does language shape the way we think?"

This question, in fact, goes back a long way. It can be traced to the European romantic period of the early 19th century, with antecedents that are much earlier.

But a more modern form of the question is well-known to many as the Whorfian Hypothesis or the Sapir-Whorf Hypothesis. In even more modern terminology, this is the question of linguistic relativity. (That Wikipedia article gives a fairly detailed summary.)

There's a recent essay at Edge by neuroscientist Lera Boroditsky that outlines a number of the issues and gives an entertaining account of simple experiments that have been performed to test various linguistic relativity hypotheses.

How does language shape the way we think?
Humans communicate with one another using a dazzling array of languages, each differing from the next in innumerable ways. Do the languages we speak shape the way we see the world, the way we think, and the way we live our lives? Do people who speak different languages think differently simply because they speak different languages? Does learning new languages change the way you think? Do polyglots think differently when speaking different languages?

These questions touch on nearly all of the major controversies in the study of mind. They have engaged scores of philosophers, anthropologists, linguists, and psychologists, and they have important implications for politics, law, and religion. Yet despite nearly constant attention and debate, very little empirical work was done on these questions until recently. For a long time, the idea that language might shape thought was considered at best untestable and more often simply wrong. Research in my labs at Stanford University and at MIT has helped reopen this question. We have collected data around the world: from China, Greece, Chile, Indonesia, Russia, and Aboriginal Australia. What we have learned is that people who speak different languages do indeed think differently and that even flukes of grammar can profoundly affect how we see the world.

Let's first review the history of linguistic relativity. In German romantic thought of the early 19th century, language was seen as expressing the "spirit" of a nation. Implicitly, then, there was a close connection between language and the social and political attitudes of a place. However, the direction of the causal link, if any, was not clearly articulated. If anything, one could infer that society and culture affect language more than the reverse.

This might have been true before the invention of writing, but since then, language generally changes much more slowly than social attitudes. Instead of following culture, one might expect language to be one of the ways that attitudes are transmitted from generation to generation. So the natural expectation now is for language to causally affect society.

Some of the assertions in Ludwig Wittgenstein's Tractatus Logico-Philosophicus (TLP) tend to support the notion that language strongly affects thought. TLP was published in 1921 and belongs to the "early period" of Wittgenstein's thought. In his later period, represented in the posthumously published Philosophical Investigations (1953), he reversed course on a number of issues. Yet many people still regard much of TLP as powerfully insightful.

The basic thrust is that one's language places limits on what can usefully be thought, especially: "The limits of my language mean the limits of my world." (TLP, 5.6) And the concluding assertion of the book: "Whereof one cannot speak, thereof one must be silent." (TLP, 7) One interpretation is that although one might have certain thoughts one can't express in language, such thoughts would be largely futile, since they could not be communicated to others (at least not verbally) or even to oneself at a later time – perhaps only hours or even minutes later. At all events, limitations on the expressive power of a particular language place constraints on useful thinking. (To a large extent, this should be an empirical question about how much the brain uses a natural language to help with long-term storage of abstract thoughts. It seems very plausible – we need only recall a short aphorism like Wittgenstein's to access the more complex ideas. In fact, there's now a common term for this: "sound bites".)

A century after the German romantics, Edward Sapir (1884-1939), an American anthropologist and linguist, began to put the study of the relationship between language and thought on a more systematic and scientific basis. Sapir's linguistic studies adopted an anthropological approach (under the influence of Franz Boas), including field work exploring native American languages.

Although Sapir's connection with the question of a relationship between language and thought becomes more explicit in the work and writings of his student, Benjamin Lee Whorf, Sapir himself did much to encourage examination of the relationship. His introductory book on linguistics, Language: An Introduction to the Study of Speech, is still in print. In Chapter X he wrote, "Nor can I believe that culture and language are in any true sense causally related." But nevertheless, "Language and our thought-grooves are inextricably interrelated, are, in a sense, one and the same."

Benjamin Lee Whorf (1897-1941) was much more closely identified with the question, so much so that the idea of an influence of language on thought is often called simply the Whorfian Hypothesis. His collection of writings, still in print, Language, Thought, and Reality, is the work for which he is best known. Representative quote:
We dissect nature along lines laid down by our native language. The categories and types that we isolate from the world of phenomena we do not find there because they stare every observer in the face; on the contrary, the world is presented in a kaleidoscope flux of impressions which has to be organized by our minds—and this means largely by the linguistic systems of our minds.

For some time more recently, especially in the 1980s, the Whorfian Hypothesis (which wasn't actually stated explicitly by Whorf) and the idea of linguistic relativity in general were not well regarded by cognitive scientists, because they lacked precision, testable hypotheses, and (consequently) experimental evidence. Linguistic relativity was, after all, largely speculation based on anthropological observations, and the lack of experimental evidence made a skeptical view easier to hold. But since the 1990s many attempts have been made to study the question scientifically and experimentally.

Although Boroditsky's essay could be construed as implying that most of the research has been done in her laboratory, in fact it has been undertaken rather more widely. However, we won't attempt to survey the other work right now. Let's just recap some of the research findings she discusses.

One interesting group of studies is anthropological in origin, like the original investigations of Sapir and Whorf. It involves how the languages of certain Australian aborigines, including but not limited to a group known as the Kuuk Thaayorre, describe 2-dimensional space. In English, and indeed in most languages English speakers have ever heard of, positions in 2-dimensional space, in which the observer is situated, are usually described relative to the observer: left vs. right, in front of vs. behind. But for the aborigines, 2-dimensional space in which they are located is described in terms of absolute directions: north, south, east, west. Such a system has obvious advantages for navigation in possibly unfamiliar territory, by forcing the observer to remain oriented to fixed directions. It's also natural for people who spend a lot of time outdoors. This anthropological fact has been known for some time.

Many languages can apply spatial concepts to non-spatial information as well, particularly in the special case of 1-dimensional data of many types. Examples include size, weight, age, time ("earlier" vs. "later"), moral values ("better" vs. "worse"), kinship ("near" vs. "far"), and musical pitch ("high" vs. "low"). (It's now understood, of course, that pitch can be quantified as vibrational frequency, but this is a modern development.)

Each of these examples either involves things that can be quantified using numbers (size, weight, age), or, if that's not possible, has terms specialized for the particular concept. Quantifiable concepts generally have their own special terms too, which indicate magnitude in a vaguer way when numeric values aren't convenient ("big" or "small", "heavy" or "light", "old" or "young").

In the case of quantifiable concepts it's not unusual to think of attribute values spatially laid out on a line from smaller to bigger (though the actual direction in space may vary – left to right or right to left, for example). Some concepts that aren't obviously quantifiable also customarily involve spatial metaphors (kinship, pitch).

Other concepts more often are described with non-spatial terms, but sometimes admit spatial metaphors. For instance, a particular time may be "near" or "distant", like kinship. Moral values may be "higher" or "lower", like pitch.

Obviously, languages other than English may customarily use different terms and metaphors to describe the same concepts. Every language seems to have its own way of conceptualizing and describing certain things, even with quantifiable concepts. Consider time, for example. Sometimes it's represented in non-spatial terms ("earlier" vs. "later"), but spatial metaphors are also common ("before" vs. "after"). When thinking spatially of time, English speakers tend to think of it in a horizontal line, perpendicular to the observer's line of sight, from earlier to later (and usually left to right). Occasionally time is described, like kinship, in a line that runs outward from the observer. However, Mandarin arranges time in a vertical direction – the way English arranges musical pitch.

(There are obviously even more complexities in all this if similar concepts customarily used in differing languages aren't quite congruent or coextensive to begin with. For instance, the concept "moral values" is one thing if such values are customarily believed in one culture to be supernaturally ordained, but something rather different if they are thought to be products of a rational ethics.)

In mathematics there is a uniform way of dealing with such 1-dimensional concepts. This is especially true when the concepts are quantifiable, but it is also true even when there is only a greater-than/less-than relationship between specific attributes of different individuals. This kind of relationship among attribute values is called a linear ordering. However, most people, and most languages, don't express things in abstract mathematical terms. And so the actual language used for different types of linearly orderable attribute values can vary dramatically from one language to another.
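To make the uniformity concrete, here is a small Python sketch (the data and the kinship ranking are invented): whether an attribute is numeric or merely rankable, the same sort operation applies once its values are mapped onto a linear order.

```python
# Illustration: many 1-dimensional attributes reduce to the same mathematical
# structure, a linear (total) ordering, even when the everyday vocabulary
# differs. The example data and rankings are invented.

people = [
    {"name": "A", "age": 31, "kinship": "cousin"},
    {"name": "B", "age": 54, "kinship": "sibling"},
    {"name": "C", "age": 12, "kinship": "stranger"},
]

# A quantifiable attribute orders directly by its numeric value...
by_age = sorted(people, key=lambda p: p["age"])

# ...while a non-numeric attribute needs only a rank to gain the same structure.
kin_rank = {"sibling": 0, "cousin": 1, "stranger": 2}   # "near" to "far"
by_kinship = sorted(people, key=lambda p: kin_rank[p["kinship"]])

print([p["name"] for p in by_age])       # ['C', 'A', 'B']
print([p["name"] for p in by_kinship])   # ['B', 'A', 'C']
```

The point is that the key function is the only thing that changes; the ordering structure itself is identical, which is exactly why mathematics can treat size, age, kinship, and pitch uniformly even though natural languages describe them so differently.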

With all these ways of using metaphors and terminology, which vary from language to language, for describing concepts, one can't help but wonder whether the choice of terminology or metaphor in a particular language affects how a speaker of the language thinks about a concept. For instance, does use of a spatial metaphor for moral values ("higher" or "lower") predispose a speaker to think differently about the concept than a different metaphor ("conservative" or "liberal", say)? If we are comparing different languages, do the different customary metaphors employed lead to observable differences between cultures where one or the other language predominates?

So the interesting general scientific question that arises from this situation is whether the peculiarities of how a particular language describes certain types of attributes have an effect on how the speakers of the language think about the attributes. More specifically, is it possible to set up experiments that demonstrate consistently different behaviors by speakers of different languages, where the behaviors make sense in terms of how each language typically represents the attributes?

When, as is quite common, a single language offers a choice of metaphors or terminology for describing a given concept, it should be easy to set up experiments to detect different ways people think about the concept (as determined by observable behavior) according to the metaphor or terminology presented to an experimental subject. Better yet, can we measure how much, or how reliably, use of a specific metaphor affects specific behavior with respect to a given concept under given conditions?

This kind of experiment, that uses only speakers of a single language, can separate out effects due to language (as expressed in metaphor or terminology) from effects due to underlying culture, as long as experimental subjects are drawn from the same culture or classified by culture (or specific demographic or ethnic groups, for example). This is a way to deal with the questions that always arise about whether it's the language or the culture in which the language is spoken that accounts for observed differences. A similar option, which Boroditsky mentions, is to deliberately train experimental subjects to use unfamiliar metaphors or terminology drawn from languages other than the subject's native language.

Boroditsky cites a few examples of experiments that have been performed to study such questions. In addition to considering languages that use different spatial metaphors (either 1- or 2-dimensional), she mentions experiments involving other types of language differences. One example is whether and how "gender" is encoded in the grammar of a language. English is a little unusual in not having much grammatical role for gender (except for pronouns). In Russian, however, most nouns have one of three genders (masculine, feminine, neuter). In a Russian sentence, adjectives, pronouns, and even verbs that refer to a noun must agree with it in gender.

As Boroditsky notes, "some Australian Aboriginal languages have up to sixteen genders, including classes of hunting weapons, canines, things that are shiny." Although English and other Indo-European languages seem to rely on sex (i. e. genitals – same linguistic root as "gender") as the metaphor for gender, in other languages gender is simply about whatever the language, or the underlying culture, considers to be the most important categories for partitioning the world of discourse. Obviously, this can lead to large differences in how people who use different languages think about the world.

Or consider the way languages differ in how they divide up a space of perceptual qualia – color in particular. Russian, for instance, recognizes two different types of blue (varying in the brightness dimension), using different words. This can lead to observable differences in behavior between Russian and English speakers. (Of course, English also recognizes multiple types of blue, such as azure and blue-green. It may simply use hyphenated and compound words to denote many of the different types. There may be thousands of English terms. What's distinctive about Russian is that it doesn't have a single word that is applicable to something that's normally just "blue" in English.)

It would be very interesting to study linguistic effects with respect to other types of qualia, such as taste. Taste is especially interesting, since there are physiological reasons that, cross-culturally, five different "flavors" are commonly recognized: "sweet", "salty", "sour", "bitter", and "umami". The result is a 5-dimensional perceptual space, instead of the 1-dimensional space of loudness or hue. For that matter, color space is actually 3-dimensional (hue, saturation, brightness), and sound has various dimensions too (loudness, pitch, timbre, etc.)

All this raises many interesting questions. How much do languages vary in the way they conceptualize perceptual spaces? Even if spatial dimensions are the same between two languages, are the spatial axes oriented differently? And what effects do such differences have on how people think about the perceptual spaces?

Interestingly, even in the case of color, the strength of a linguistic effect on perception of hue (e. g. 1 vs. 2 types of blue) seems to depend on how much the experiment calls for cognitive processing in the left vs. the right brain hemisphere, presumably because of the special role of the left hemisphere in language processing.

And then, supposing some correlation exists between how languages represent certain attributes and how people behave in thinking about the attributes, how can it be decided whether the language has affected the thinking, as opposed to both language and thought being affected by cultural history and circumstances (such as a lifestyle that places high importance on being oriented to the directions of the compass)? Boroditsky suggests experiments that can be done, such as deliberately teaching people different ways of describing things, to find out whether and how this affects thought and behavior.

But let's begin to wrap this up by returning briefly to Boroditsky's observations of the Kuuk Thaayorre. There was a specific reason she wanted to study these people. As noted, it was known that they tended to think of (2-dimensional) spatial information using coordinates independent of the observer. That's also true in thinking about 1-dimensional data, which speakers of English (and most other languages) think of as left vs. right, relative to the observer.

Boroditsky wanted to know how the Kuuk Thaayorre conceptualized and described other 1-dimensional linear orderings, such as time, where spatial metaphors are less likely to be used in other languages. The answer is that the Kuuk Thaayorre continued to use the same absolute coordinate system for such things. For example, given pictures illustrating temporal progressions, they would tend to arrange them in a line in the east-west direction. So not only did they stick to absolute coordinates, but they chose a direction that happens to match the Sun's motion through the sky – as opposed to north-south, say. Of course, that choice might be specific to temporal information, which is naturally tied to the Sun's daily motion.

One wonders whether they would do the same with other linearly ordered data, such as taste of food (from "unpleasant" to "delicious"). One also wonders what the outcome of such experiments would be if they were done with Kuuk Thaayorre people indoors, under conditions where they would be unaware of actual compass directions.

Interestingly, speakers of English and other languages that are written in a left-right direction tend to arrange pictures illustrating temporal progression in the same left-right direction, while speakers of languages written from right to left tend to use a right-left ordering for 1-dimensional data. Right there you have an apparent effect of language on thought patterns and behavior.

Concluding thoughts

This is a vast subject. It's not at all easy to summarize even the discussion presented here. So I won't try. I'll just add some general remarks.

It has generally been assumed that if there is any causal relationship between language and thought it should be in the direction of language affecting thought instead of vice versa. This is probably because language is assumed to have arisen without conscious design, and because it's difficult to imagine what relatively complex thought would be like if it came before language. We assume that non-human animals, which lack language, can't have complex thoughts. But we know almost nothing about the origins of human language, at least tens of thousands of years ago, or what human thinking might have been like at the time. Our assumptions could be wrong. Language and complex thought probably co-evolved to some extent that we can now only guess about.

For another thing, it should be obvious that the effect of language doesn't have to be all or nothing. The Whorfian Hypothesis isn't necessarily generally true or generally false. Instead, what needs to be investigated are tendencies and correlations under many different special conditions, such as the kind of cognitive activity one suspects might be affected. One should look for the strength (i. e. probability) of an effect, and its pervasiveness (i. e. range of circumstances in which it can be observed).

Another important consideration is that language has a variety of parts, with vocabulary and grammar being the major divisions. Vocabulary and grammar probably differ a great deal in how each affects thinking.

As far as vocabulary is concerned, if a language lacks a word for a specific concept, it's considerably more difficult to think about the concept, though generally not impossible. Without a word for a concept, it's harder to keep it in mind so as to examine it logically, or to use it for classifying and organizing things. For example, a study of an Amazonian tribe, the Pirahã, who don't have words for numbers, suggests that these people have difficulty with numerical concepts in tasks where memory is involved, even though they presumably can recognize differences in the size of small collections of things. (See below for more about this study.)

Likewise, suppose we didn't have words for many types of mammals: how easily could we distinguish between a fox and a coyote, say? Whether we regard two things as the "same" or "different" depends a lot on whether we have separate, common words for the two things. (For example: different shades of the same color.) Even if we could visually perceive differences between the species, would we remember the differences long enough to keep track of other ways the species differed?

So languages with sparse vocabularies make many kinds of thinking difficult. In a language without words for abstract concepts like "liberty", "freedom", "justice", etc., it would be difficult to develop much of a political philosophy. Indeed, the introduction of such terms is often the starting point for new directions in philosophy. All this argues for a significant effect of language on thought, mediated by the language's vocabulary.

But vocabulary is a rather different aspect of language from grammar, and it is the grammar of a language that is often the focus of questions about how language affects thought. For instance, there are all the questions about the role of grammatical gender. Other important grammatical issues have to do with verbs – the way in which they involve time ("tense"), attitude towards the action ("mood"), whether action is ongoing or completed ("aspect"), issues of effect of an action ("transitivity") and so forth. A language can potentially pack quite a variety of disparate kinds of information in its verbs. Each of these aspects of grammar may affect thinking in different ways and different degrees. Research should be looking for whether certain aspects tend to be more influential than others.

Another interesting difference between vocabulary and grammar is that the former probably changes more quickly than the latter. Indeed, a language's grammar tends to change pretty slowly, and in small increments, if at all, over hundreds of years. But vocabulary can change in a matter of decades, or less. Think of the difference in English vocabulary between Shakespeare's time and ours. (With Shakespeare himself responsible for quite a few innovations.) Yet the grammar of Shakespearean English is hardly different from that of contemporary English.

What this may mean is that the grammar of a language has a more profound effect than vocabulary on thinking, because of its relative constancy. Consequently, perhaps, speakers of a language can hardly imagine that the distinctions imposed by its grammar might admit any alternative kinds of distinctions – just as speakers of English have a difficult time imagining any relevance of gender to non-living things.

Lastly (for now), I should point out the relevance of this discussion to the even more general topic of "social construction of reality", as briefly touched on here.

In that regard, Edward Sapir wrote: "No two languages are ever sufficiently similar to be considered as representing the same social reality."

A few recent relevant research reports

Language Without Numbers: Amazonian Tribe Has No Word To Express 'One,' Other Numbers (7/15/08)
An Amazonian tribe, the Piraha, with only 300 speakers of its language, apparently has no words for any natural number, even "one", though it has terms for comparing quantities ("some", "more"). Consequently the size of a very small collection (1 to 4 things) would be described with a word meaning "few", while larger collections would be "many".

Aboriginal Kids Can Count Without Numbers (8/18/08)
In contrast to the preceding report, a study of children in two Australian aboriginal communities, whose language does not contain words for numbers, showed that the children were able to copy and perform number-related tasks.

Pre-verbal Number Sense Common To Monkeys, Babies, College Kids (2/15/09)
As in a number of other studies, this research shows that human infants, as well as macaque monkeys, show an awareness of differences in the size of very small sets (2 or 3 objects), even though they do not have language. In fact, even dogs show a similar awareness under similar testing protocols. (See here.) So certainly the Piraha also have such awareness. But as is pointed out in the Piraha study, lack of words for numbers seems to significantly affect the ability to perform tasks where memory is involved.

Scientists show that language shapes perception (2/26/09)
Modern Greek, like Russian, has different words for light and dark blue, unlike English. Greek speakers automatically use different words to describe the color of the sky or the color of the ink of a (dark) blue pen. Using a measuring technique called "event related brain potentials", differences in brain activity caused by seeing different shades of blue can be detected before the appropriate color word enters consciousness. However, this doesn't necessarily mean that the brains of Greek speakers and English speakers (say) are genetically different – only, perhaps, that the cultural habit of sharply distinguishing two kinds of blue has been learned in non-verbal, as well as verbal, parts of the brain.

Expressing comparisons is possible even without language, researchers find (6/30/09)
Deaf children who had not learned a spoken language were compared with hearing children who had learned some language. Deaf children are able to use gestures to indicate a recognition that (for example) a house cat is different from a tiger. However, "as the children grew older, the comparisons diverged — deaf children's comparisons remained broader in nature, while hearing children's comparisons became more complex and focused, such as saying that the hair was brown like a brown crayon, after hearing children started using the word 'like'."


Update, 8/24/09: Here's a relevant new article I haven't had a chance to evaluate yet: Does Language Shape What We Think?

Update, 9/4/09: And here's an interesting quote from Edward Sapir (via Amira):
Human beings do not live in the objective world alone, nor alone in the world of social activity as ordinarily understood, but are very much at the mercy of the particular language which has become the medium of expression for their society. It is quite an illusion to imagine that one adjusts to reality essentially without the use of language and that language is merely an incidental means of solving specific problems of communication or reflection.

The fact of the matter is that the ‘real world’ is to a large extent unconsciously built up on the language habits of the group … We see and hear and otherwise experience very largely as we do because the language habits of our community predispose certain choices of interpretation.



Monday, August 03, 2009

Brain simulation

Recently there was a considerable splash (some would unkindly call it hype) about an ongoing simulation project that might produce an "artificial human brain" within the next 10 years:

Artificial brain '10 years away' (7/22/09)
A detailed, functional artificial human brain can be built within the next 10 years, a leading scientist has claimed.

Henry Markram, director of the Blue Brain Project, has already simulated elements of a rat brain.

He told the TED Global conference in Oxford that a synthetic human brain would be of particular use finding treatments for mental illnesses.

Around two billion people are thought to suffer some kind of brain impairment, he said.

"It is not impossible to build a human brain and we can do it in 10 years," he said.

The project isn't brand new. It was formally initiated in June 2005 by EPFL, the École Polytechnique Fédérale de Lausanne (in Switzerland), in cooperation with IBM. This Blue Brain project has been directed since the beginning by Henry Markram of EPFL. The project's name is derived from its use of an IBM Blue Gene computer for performing the simulations.

An initial goal of the project was completed after just 1.5 years, in 2006 – the simulation of a single rat cortical column. While that's a noteworthy accomplishment, keep in mind that a human cortical column contains about 6 times as many neurons (60,000 vs. roughly 10,000) as the rat equivalent.

In addition, although the architecture of each column is roughly similar to that of any other, the total number of columns in a human cerebral cortex is estimated at about 2 million. (The cerebral cortex is the outermost, and evolutionarily most recent, part of the human brain, in which the most sophisticated processing occurs.) The project is facing a rather daunting amount of simulation that needs to be performed.
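To convey the scale, here's a quick back-of-the-envelope calculation using the rough figures above – a human column of about 60,000 neurons, a rat column with one-sixth that, and roughly 2 million human cortical columns. These are the article's coarse estimates, not precise anatomy:

```python
# Rough scale of a full human-cortex simulation, using the
# approximate figures quoted in the text (illustrative only).

human_column_neurons = 60_000                    # neurons per human cortical column
rat_column_neurons = human_column_neurons // 6   # rat column has ~1/6 as many
human_cortex_columns = 2_000_000                 # estimated columns in human cortex

total_neurons = human_column_neurons * human_cortex_columns
scale_vs_rat_column = total_neurons // rat_column_neurons

print(f"Neurons to simulate: {total_neurons:.1e}")
print(f"Scale vs. the simulated rat column: {scale_vs_rat_column:,}x")
```

By these (very rough) numbers, a full human cortex is on the order of 12 million times the size of the single rat column simulated in 2006 – which gives some idea of why the remaining work is so daunting.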

The present flurry of excitement over the project stems from a presentation by Markram at the recent TEDGlobal 2009 conference.

While a video of Markram's talk doesn't seem to be available online yet, you can find some blogged notes here: Henry Markram at TEDGlobal 2009: Running notes from Session 5.

Here are some additional references on the talk and the project in general:

Note that the caption on the last video in this list observes that a complete simulation may require a computer 20,000 times as powerful as any currently existing computer. (At current rates of progress, we would see a computer "only" 1000 times as powerful in 10 years.) Such a computer would also need a memory capacity 500 times the size of the Internet. Anyone else sense a discrepancy with claims made more recently?
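The discrepancy is easy to quantify. Assuming steady exponential growth (a simplifying, Moore's-law-style assumption), a 1000-fold improvement in 10 years implies roughly one doubling per year, so the 20,000-fold improvement mentioned in the caption would take several years longer than the decade being claimed:

```python
import math

# If computing power grows 1000x in 10 years (the rate cited above),
# when does it reach the 20,000x a complete simulation may require?
# Assumes steady exponential growth -- a simplifying assumption.

years_per_1000x = 10
doubling_time = years_per_1000x / math.log2(1000)   # ~1.0 year per doubling

years_to_20000x = math.log2(20_000) * doubling_time
print(f"Doubling time: {doubling_time:.2f} years")
print(f"Years until a 20,000x speedup: {years_to_20000x:.1f}")
```

So even granting the optimistic growth rate, the needed hardware would be about 14 years away rather than 10 – and that's before considering the memory requirements at all.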

Later additions:

Competition in the wings?

Before passing on to a skeptical take on the feasibility of what the Blue Brain project proposes to do, it's rather interesting that there may be competition in this race – in part from no less than another part of IBM:

Cognitive Computing Project Aims to Reverse-Engineer the Mind (2/6/09)
"The plan is to engineer the mind by reverse-engineering the brain," says Dharmendra Modha, manager of the cognitive computing project at IBM Almaden Research Center.

In what could be one of the most ambitious computing projects ever, neuroscientists, computer engineers and psychologists are coming together in a bid to create an entirely new computing architecture that can simulate the brain’s abilities for perception, interaction and cognition. All that, while being small enough to fit into a lunch box and consuming extremely small amounts of power.

The 39-year old Modha, a Mumbai, India-born computer science engineer, has helped assemble a coalition of the country’s best researchers in a collaborative project that includes five universities, including Stanford, Cornell and Columbia, in addition to IBM.

The researchers’ goal is first to simulate a human brain on a supercomputer. Then they plan to use new nano-materials to create logic gates and transistor-based equivalents of neurons and synapses, in order to build a hardware-based, brain-like system. It’s the first attempt of its kind.

Sort of makes one wonder what's going on here, no? I don't have any further information on this project, except some details on Dr. Modha, and further references contained therein (especially the first item):


And now for the skeptical view

You knew it was coming, right?

Here's video of a debate on this topic (in general, not either IBM simulation project in particular) between John Horgan (the skeptic) and Ray Kurzweil (in rebuttal).

Ray Kurzweil and John Horgan debate whether a singularity is near or far

And here's an article by Horgan with the details of his argument:

The Consciousness Conundrum

I don't always agree with Horgan, but his case on this topic seems a little more persuasive at this point.

In support of Horgan's point of view, I could mention an article from March 2009, about the work of the Allen Institute for Brain Science. It's written by Jonah Lehrer, who earlier wrote on the Blue Brain project in one of the references cited above:

Scientists Map the Brain, Gene by Gene
One unexpected—even disheartening—aspect of the Allen Institute's effort is that although its scientists have barely begun their work, early data sets have already demonstrated that the flesh in our head is far more complicated than anyone previously imagined. ...

"The brain is just details on top of details on top of details," Hawrylycz says. "You sometimes find yourself asking questions that don't have answers, like 'Do we really need so many different combinatorial patterns of genes?' ...."

"The problem with this data," one researcher told me, "is that it's like grinding up the paint on a Monet canvas and then thinking you understand the painting." The scientists are stuck in a paradox: When they zoom in and map the brain at a cellular level, they struggle to make sense of what they see. But when they zoom out, they lose the necessary resolution. "We're still trying to find that sweet spot," Jones says. "What's the most useful way to describe the details of the brain? That's what we need to figure out." ...

Although the human atlas is years from completion, a theme is beginning to emerge: Every brain is profoundly unique, a landscape of cells that has never existed before and never will again. The same gene that will be highly expressed in some subjects will be completely absent in others.

Additional skeptical views:

I think I'll need a bit more time to digest the information here...


Monday, July 20, 2009

Want to be more creative? Get more REM sleep

REM sleep, a distinctive phase of normal sleep characterized by rapid eye movements, has traditionally been associated with vivid dreaming. New research suggests it is important for creativity in general.

Creative problem solving enhanced by REM sleep (6/8/09)
Research led by a leading expert on the positive benefits of napping at the University of California, San Diego School of Medicine suggests that Rapid Eye Movement (REM) sleep enhances creative problem-solving. The findings may have important implications for how sleep, specifically REM sleep, fosters the formation of associative networks in the brain.

The study by Sara Mednick, PhD, assistant professor of psychiatry at UC San Diego and the VA San Diego Healthcare System, and first author Denise Cai, graduate student in the UC San Diego Department of Psychology, shows that REM directly enhances creative processing more than any other sleep or wake state. ...

"We found that - for creative problems that you've already been working on - the passage of time is enough to find solutions," said Mednick. "However, for new problems, only REM sleep enhances creativity."

Mednick added that it appears REM sleep helps achieve such solutions by stimulating associative networks, allowing the brain to make new and useful associations between unrelated ideas. Importantly, the study showed that these improvements are not due to selective memory enhancements.

It might be supposed that sleep in general can aid problem solving by several mechanisms. For instance, sleep might be helpful simply by enabling avoidance of distractions that occur during wakeful periods. It is also known that memory consolidation occurs during sleep, and memories that are better established may also assist in problem solving. But there could be other benefits of sleep as well, especially specific phases of sleep.
A critical issue in sleep and cognition is whether improvements in behavioral performance are the result of sleep-specific enhancement or simply reduction of interference - since experiences while awake have been shown to interfere with memory consolidation. The researchers controlled for such interference effects by comparing sleep periods to quiet rest periods without any verbal input. ...

"Participants grouped by REM sleep, non-REM sleep and quiet rest were indistinguishable on measures of memory," said Cai. "Although the quiet rest and non-REM sleep groups received the same prior exposure to the task, they displayed no improvement on the RAT test. Strikingly, however, the REM sleep group improved by almost 40 percent over their morning performances."

The authors hypothesize that the formation of associative networks from previously unassociated information in the brain, leading to creative problem-solving, is facilitated by changes to neurotransmitter systems during REM sleep.

Here's the research abstract:

REM, not incubation, improves creativity by priming associative networks
The hypothesized role of rapid eye movement (REM) sleep, which is rich in dreams, in the formation of new associations, has remained anecdotal. We examined the role of REM on creative problem solving, with the Remote Associates Test (RAT). Using a nap paradigm, we manipulated various conditions of prior exposure to elements of a creative problem. Compared with quiet rest and non-REM sleep, REM enhanced the formation of associative networks and the integration of unassociated information. Furthermore, these REM sleep benefits were not the result of an improved memory for the primed items. This study shows that compared with quiet rest and non-REM sleep, REM enhances the integration of unassociated information for creative problem solving, a process, we hypothesize, that is facilitated by cholinergic and noradrenergic neuromodulation during REM sleep.


Further reading:

Problems are solved by sleeping (6/9/09) – BBC news article

Stages of sleep have distinct influence on process of learning and memory (2/25/09) – press release concerning prior research on REM sleep


Sunday, March 01, 2009

Moral neuropolitics and ideology

An interesting paper was recently brought to my attention. It's all worth reading, but I want to focus on one specific passage, because I think it spotlights a very important question, and provides a springboard for discussion of a number of significant issues in political psychology.

This is the paper:

We Empathize, Therefore We Are: Toward a Moral Neuropolitics

It's by Gary Olson, who is currently Chair of the Political Science Department at Moravian College, Bethlehem, PA.

Olson begins by pointing out the important human characteristic of being able to empathize with the experienced injustice and suffering of others. Citing the empathy felt by people who viewed art depicting victims of the transatlantic slave trade, Olson connects this with recent neuroscience having to do with mirror neurons:
The abolitionist's most potent weapon was the dissemination of drawings of the slave ship Brooks. Rediker asserts that these images were "to be among the most effective propaganda any social movement has ever created" (p. 308).

Based on recent findings from neuroscience we can plausibly deduce that the mirror neurons of the viewer were engaged by these images of others suffering. The appeal was to the public's awakened sense of compassion and revulsion toward graphic depictions of the wholesale violence, barbarity, and torture routinely practiced on these Atlantic voyages. Rediker notes that the images would instantaneously "make the viewer identify and sympathize with the 'injured Africans' on the lower deck of the ship . . ." while also producing a sense of moral outrage (p. 315, Olson, 2008).

In our own day, the nonprofit Edge Foundation recently asked some of the world's most eminent scientists, "What are you optimistic about? Why?" In response, the prominent neuroscientist Marco Iacoboni cited the proliferating experimental work into the neural mechanisms that reveal how humans are "wired for empathy." This is the aforementioned discovery of the mirror neuron system or MNS. The work shows that the same affective brain circuits are automatically mobilized upon feeling one's own pain and the pain of others.

Iacoboni's optimism is grounded in his belief that with the popularization of scientific insights, these findings in neuroscience will seep into public awareness and " . . . this explicit level of understanding of our empathic nature will at some point dissolve the massive belief systems that dominate our societies and that threaten to destroy us" (Iacoboni, 2007, p. 14).

Given that background, the crucial passage in Olson's paper seems to me to be this:
That said, one of the most vexing problems that remains to be explained is why so little progress has been made in extending this empathic orientation to distant lives, to those outside certain in-group moral circles. That is, given a world rife with overt and structural violence, one is forced to explain why our deep-seated moral intuition doesn't produce a more ameliorating effect, a more peaceful world. Iacoboni suggests this disjuncture is explained by massive belief systems, including political and religious ones, operating on the reflective and deliberate level. As de Waal reminds us, evolutionarily, empathy is the original starting point out of which sprang culture and language. But over time, the culture filters and influences how empathy evolves and is expressed (de Waal, 2007, p. 50). These belief systems tend to override the automatic, pre-reflective, neurobiological traits that should bring people together.

Right off the bat, of course, we have a problem. As I've just been writing about (here, here), there may be serious scientific difficulties with the whole concept of a "mirror neuron system" in humans.

Olson says "we can plausibly deduce that the mirror neurons of the viewer were engaged by these images of others suffering." He does mention fairly recent (2007-8) studies that seemed to indicate the existence of mirror neurons in humans, but the skeptical opinions of neuroscientists like Gregory Hickok, whose very recent paper I wrote about, suggest that this whole issue is still up in the air.

But perhaps the issue of mirror neurons isn't all that important. While it can be questioned whether humans have mirror neurons, and whether such mirror neurons (if they do exist) actually account for empathy in humans, surely it would be hard to dispute the existence of empathy in humans.

Or would it? One does have to question how often empathy, even if it exists, plays a dominant role in human affairs. Sometimes it does, as the abolition of the transatlantic slave trade and the eventual end of slavery in the U. S. attest. On the other hand, different forms of slavery still persist in the world, as well as all manner of other ills, such as crime, genocide, war, territorial occupations, and economic exploitation.

Nevertheless, let's leave that whole question aside. Human empathy does exist in many circumstances (even if we don't have an adequate neurobiological explanation of it), yet even so, there seem to be social and cultural forces that all too often are able to override empathy. Olson evidently agrees with Iacoboni in identifying the responsible factor as "massive belief systems, including political and religious ones, operating on the reflective and deliberate level."

Let's refer to those political and religious belief systems as "ideology".

The key question, then, is how to explain the substantial power that ideology has over human social behavior – not just behavior that is culturally conditioned, but even behavior that has evolutionary, biological roots, such as the empathy that derives (perhaps) from mirror neurons. That is, we have to explain how "belief systems tend to override the automatic, pre-reflective, neurobiological traits that should bring people together."

It seems to me that this is a rather important open question that political science ought to be addressing.

Because we are dealing with phenomena that can override neurobiological traits, I think we have to look at explanations that also refer to neurobiology. It just makes the most sense to consider the problem as a whole at that level.

How is it that ideology has such a compelling influence over people? What is it that ideology has to offer? How does it fit with underlying psychological factors?

We need some frame of reference to consider these issues. For the sake of concreteness, I'm going to proceed here using a circle of ideas championed by Jonathan Haidt as a convenient reference frame.

What Haidt has proposed is a "moral foundations theory" that claims to identify five "fundamental moral values" that are held by a large number of people, to greater or lesser extents, in a wide variety of cultures around the world (and in history). I think his list is incomplete as a comprehensive foundation for "morality" in general, and there are valid questions about whether some aspects of the "values" he describes even merit consideration as part of a fundamental set. Nevertheless, each of the "values" does have its devoted adherents, and so ipso facto plays a role in social behavior in those cultures where the "value" is recognized.

Here, according to Wikipedia, are the five "fundamental moral values":
  1. Care for others, protecting them from harm. (He also referred to this dimension as Harm.)
  2. Fairness, Justice, treating others equally.
  3. Loyalty to your group, family, nation. (He also referred to this dimension as Ingroup.)
  4. Respect for tradition and legitimate authority. (He also referred to this dimension as Authority.)
  5. Purity, avoiding disgusting things, foods, actions.

Further references (listed at the end of this article): [1], [2], [3], [4].

I'm not ready to make an overall evaluation of Haidt's ideas, but let's look at them and see where they might lead.

On the basis of cross-cultural research Haidt came up with these five distinguishable biological bases of morality. If nothing else, they should be factors that would give significant force and impact to ideologies that are leveraged from them.

The first two factors are (quoting from [1]) "(i) harm, care, and altruism (people are vulnerable and often need protection) or (ii) fairness, reciprocity, and justice (people have rights to certain resources or kinds of treatment)."

Haidt sees these as having evolutionary origins in kin selection and the mechanism of reciprocal altruism.

I think they could have other evolutionary origins as well. In addition, I find it a little difficult to distinguish these two factors. Both encode an obvious "golden rule" sort of morality. However that may be, it seems that mirror neurons, or something equivalent, might play a role in the neurobiology of these factors, which both relate to "empathy".

So another important question we can ask is: what are the evolutionary origins of mirror neurons (or equivalents)? Since other primates, and indeed other animals (e.g., dogs), seem to have something functionally like mirror neurons, and also notions of fairness and justice that resemble human notions, we probably need to look back further in time than the origins of hominids.

It seems to me that something like mirror neurons should be useful equipment for a member of any species that engages in intra-species combat, which is probably a large percentage of species. That doesn't mean many species necessarily have mirror neurons, but their evolution could well have been a useful adaptation in many cases. So something like mirror neurons could be a primary evolutionary development, not a mere side effect of something else. And if empathy, altruism, etc. have roots in such a mechanism, they too are at least useful side effects of evolution, even if they were not directly adaptive in themselves. (Though there are plenty of reasons to think they are adaptive in themselves, especially if you believe in group selection.)

But perhaps the more interesting aspect of Haidt's ideas comprises the three other factors he regards as basic to many human moral codes.

Quoting again from [1], "In addition to the harm and fairness foundations, there are also widespread intuitions about ingroup-outgroup dynamics and the importance of loyalty; there are intuitions about authority and the importance of respect and obedience; and there are intuitions about bodily and spiritual purity and the importance of living in a sanctified rather than a carnal way."

Let's look at these separately. First up is group loyalty, preference for the ingroup, and fear/aversion towards the outgroup. This is pretty clearly, at least in part, a kin selection sort of thing.

There is also another clever evolutionary argument for this factor. It's spelled out by Choi and Bowles in [5]. They call the idea "parochial altruism". The authors present computer simulation evidence for their idea. It has the interesting property of being able to explain the otherwise paradoxical fact that humans are a fairly warlike species, in spite of countervailing empathetic tendencies. I wrote about it here at some length. See also [6].

There are, of course, other evolutionary arguments for group loyalty, such as basic considerations of group selection – successful groups should tend to be cohesive and behave something like kin groups, even in the absence of near kinship. And this would be especially true in time of resource scarcity (which probably was not infrequent).

Among the neurobiological bases of group loyalty would be any neural capabilities that enable the detection of cheating and disloyalty. These need not be discrete neural systems or brain modules. They might be just general capabilities that enable individuals to remember the past behavior of others and reason about it in such a way as to recognize signs of loyalty or disloyalty to the group. Capability for cheater detection might be a general learning ability, like the ability to learn language. Individuals need not be born being good cheater detectors. They just need to be able to learn how to be good at it.

I'm not aware of neurobiological research into cheater detection mechanisms, or other mechanisms that could support group loyalty. Studies of loyalty and cheater detection and conditions for extending trust to others should also connect up easily with the importance of "patriotism" and "solidarity" in various ideologies. This would seem to be a great area for future research.

The bottom line here is that there are very good reasons to expect ingroup/outgroup dynamics to have neurobiological underpinnings, and that these factors would strongly influence ideology. ("Deutschland über alles." "Defend the fatherland." Etc.)

Turning to the next factor Haidt mentions: authority and the importance of respect and obedience. The psychological power of authority is quite well established – notably by the Milgram experiment, which provides a glaring example of how social pressure can override any innate sense of empathy for others. Zimbardo's prison experiment is also relevant.

Respect for and obedience to authority pretty clearly have evolutionary roots in any social species that has a hierarchically organized social life – which is many species, even insects. (Some very recent research shows that even ants will attack other ants that don't follow the rules.)

Interestingly, though, degree of respect for authority varies a lot in humans. (But then, so too does a propensity to cheat.) Political scientists have long known of Theodor Adorno's concept of an "authoritarian personality".

Again, respect for and obedience to authority are key features of many powerful ideologies – features that easily override empathy-based respect for peers.

This is another area that calls for much more neurobiological research. What characteristics of our neurobiology equip us to recognize and defer to authority? Is it just fear based on the consequences of disobedience to actors with substantial social/physical power? Are obedient personalities just the result of a kind of "Stockholm syndrome"?

Haidt's last factor is "intuitions about bodily and spiritual purity." This is, to my mind, the murkiest of the factors. Clearly, humans have evolved good instincts for avoiding contaminated or spoiled food and other sources of pathogens. Exactly how that bootstraps into elaborate ideologies featuring supernatural beings is a much bigger question.

I think there are quite a few additional factors that go into the social psychology of religion and its ideologies, including group loyalty and obedience to authority. And research into such factors seems to be pretty active these days, though not primarily into neurobiological factors. This is a large area of research all by itself. So I don't have clear ideas about how important "purity" is as a factor, by itself, that influences ideology.

Pascal Boyer has an interesting recent essay in Nature ([7]). He writes, "So is religion an adaptation or a by-product of our evolution? Perhaps one day we will find compelling evidence that a capacity for religious thoughts, rather than 'religion' in the modern form of socio-political institutions, contributed to fitness in ancestral times. For the time being, the data support a more modest conclusion: religious thoughts seem to be an emergent property of our standard cognitive capacities."

As an aside, this suggests that what we may find is that political behavior in general, and specific ideologies, are also emergent properties of our standard cognitive capacities. And that's a disquieting thought. Our cognitive capacities were shaped in a time when humans were far fewer in number, and had much less ability to cause large-scale problems for themselves and the rest of the world. Our inherited cognitive traits may result in less sanguine outcomes today than they did in the past. In particular, religion as a common sort of ideology, and Haidt's other moral predispositions, may be less beneficial for humans now than they may have been in the past.

One token of this may be seen in moral principles that do not seem to have deep roots in evolution and neurobiology. For example: aversion to war, faithful attention to truthfulness and honesty in dealing with others, sensitivity to and aversion towards manipulative behavior on the part of social elites, and respect and concern for the natural environment. Such principles don't even appear in Haidt's scheme.

Alternative reference frame: fear and emotions in general

All that said, Haidt's ideas are not the only way to approach the question of neurobiological bases of ideology. Another distinct approach involving neurobiology would center on the importance of the emotion of fear. There is, of course, voluminous research on the underpinnings of fear and its opposite (trust), as mediated by anatomical features like the amygdala and the limbic system in general.

I've just summarized a number of previous comments on this topic here. Since we're concerned with ideology in this note, it seems especially worth observing the similarities between beliefs about government and about religion, in particular the significant role that fear plays in both (see here).

Fear obviously plays a role in practical politics. How it interacts with organized ideologies is less clear. Certainly, fear of death or great harm is enough to motivate ideologies that feature institutions of authority that "protect" the populace. In any case, fear in some form or other is a strong motivator, another factor that can easily override an individual's healthier empathetic instincts.

Alternative reference frame: personality theory

Yet another direction of possible research involves trying to relate specific personality traits to ideological preferences. Perhaps the best supported of such findings could provide clues as to underlying mechanisms that link psychological tendencies to ideological features.

However, I'm skeptical. Personality traits, in fact, have been defined empirically by looking at the way people tend to use labels to describe other people. The most widely accepted type scheme, the "Big Five", is based on studies of language usage, in which factor analysis is employed to group certain behaviors using labels given to people who exhibit those behaviors. So it's entirely driven by data of a particular type, rather than theory.
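To make the lexical approach concrete, here is a minimal sketch of the kind of factor analysis involved. Everything here is synthetic and invented for illustration – the adjectives, the two hypothetical latent factors, and the data are not real Big Five results:

```python
import numpy as np

# Synthetic ratings of people on trait adjectives, where correlated
# adjectives are assumed to reflect one underlying latent factor.
rng = np.random.default_rng(0)
n_people = 500
# Two invented latent factors (say, "sociability" and "orderliness")
latent = rng.normal(size=(n_people, 2))

adjectives = ["talkative", "outgoing", "lively",      # load on factor 1
              "organized", "punctual", "thorough"]    # load on factor 2
loadings_true = np.array([[0.8, 0.0], [0.7, 0.0], [0.9, 0.0],
                          [0.0, 0.8], [0.0, 0.7], [0.0, 0.9]])
ratings = latent @ loadings_true.T + 0.4 * rng.normal(size=(n_people, 6))

# Factor extraction via eigendecomposition of the correlation matrix
# (principal-axis style; real studies use more refined rotation methods).
corr = np.corrcoef(ratings, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]           # sort factors, largest first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Two factors should capture most of the shared variance, grouping the
# adjectives purely from how the ratings covary -- data-driven, no theory.
print("variance explained by 2 factors:", eigvals[:2].sum() / eigvals.sum())
```

The point of the sketch is the methodological one made above: the factors fall out of correlations in how trait words are applied, with no theoretical story about mechanisms built in.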

That raises two problems. Firstly, there is little logical relationship between either the traits or the behavioral characteristics associated with them and specific ideologies. This is in contrast with Haidt's morality types that do connect reasonably well with ideologies. While empirical correlations between personality traits and ideologies have been found, they usually aren't very strong.

The second problem is that there's little apparent connection between personality traits and neurobiology. Perhaps that will change as more laboratory work is done that investigates the underpinnings of emotions and behavior, but I don't have the sense that clarity is close at hand.

The net result is that personality traits aren't an obvious way to make connections with either ideology or neurobiology.

Conclusion

All in all, it certainly looks like there's a huge need for research to explore how neurobiology interacts with social behavior, politics, and ideology. Understanding the potential role of something like mirror neurons is certainly important. But I think there's a whole lot more we need to understand, especially concerning the darker sides of human nature.

References and further reading:

[1] The New Synthesis in Moral Psychology – 5/18/07 Science review article by Jonathan Haidt

[2] The Roots of Morality – 5/9/08 Science News Focus article by Greg Miller

[3] Is ‘Do Unto Others’ Written Into Our Genes? – 9/18/07 New York Times article by Nicholas Wade

[4] The Moral Instinct – 1/13/08 New York Times Magazine article by Steven Pinker

[5] The Coevolution of Parochial Altruism and War – 10/26/07 research paper in Science by Jung-Kyoo Choi and Samuel Bowles

[6] The Sharp End of Altruism – 10/26/07 Perspectives article in Science by Holly Arrow

[7] Being human: Religion: Bound to believe? – 10/23/08 essay in Nature by Pascal Boyer


Monday, February 23, 2009

What are mirror neurons good for?

Last week I noted that there have been some serious questions raised in a paper by Gregory Hickok about the "mirror neuron theory of action understanding".

I've now read the paper, and there's a lot that could be said on the topic.

But first, what's the discussion about to begin with?

"Mirror neurons" have received a lot of speculative attention in the popular science media. I've done some of that myself. (See here, here, here, here.)

So what is a "mirror neuron"? According to Wikipedia (which is as good a source as any for popular usage of a term), "A mirror neuron is a neuron which fires both when an animal acts and when the animal observes the same action performed by another (especially conspecific) animal. Thus, the neuron 'mirrors' the behavior of another animal, as though the observer were itself acting."

That definition needs to be unpacked a little, for the sake of being specific about what's under discussion, even if we have to deviate from what is popularly understood. So, more specifically, a mirror neuron is one that's found in a motor control area of an animal's brain. It's not the only type of neuron there, but it does activate when the animal performs specific motor actions (different ones for different neurons).

The other characteristic that a mirror neuron is presumed to have is that it also activates when the animal perceives another animal performing the same action that causes activation of the neuron in the observing animal.
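The two-part definition can be captured in a toy model. This sketches only the *definition* – a unit selective for one specific action that responds whether that action is executed or merely observed; the action names and the class itself are invented for illustration, not a model of real F5 physiology:

```python
# Toy illustration of the definition of a mirror neuron: selective for a
# specific action, and active for both execution and observation of it.
class ToyNeuron:
    def __init__(self, preferred_action, mirror=True):
        self.preferred_action = preferred_action
        self.mirror = mirror  # plain motor neurons fire only on execution

    def fires(self, action, mode):
        """mode is 'execute' or 'observe'."""
        if action != self.preferred_action:
            return False          # action-specific: wrong action, no response
        return mode == "execute" or self.mirror

grasp_mirror = ToyNeuron("grasp")               # hypothetical mirror neuron
grasp_motor = ToyNeuron("grasp", mirror=False)  # ordinary motor neuron

assert grasp_mirror.fires("grasp", "execute")
assert grasp_mirror.fires("grasp", "observe")     # also fires on observation
assert not grasp_mirror.fires("tear", "observe")  # but only for its action
assert not grasp_motor.fires("grasp", "observe")  # non-mirror: execution only
```

The last two assertions mark the boundary of the definition discussed below: a mirror neuron is not just any perception-driven neuron, and not just any motor neuron.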

Notice what's not being defined as a mirror neuron. We aren't talking about just any neuron which activates when an animal perceives another animal performing some action. If that were the definition, it would be somewhat tautologous to assert that such a neuron helps the observing animal to "understand" the actions of another animal. Clearly, any animal that is capable of "understanding" the actions of other animals (of the same or even different species) must have some action-specific neurons that activate when observing actions of other animals.

It would not be pointless to discuss such things, but we would be more into the area of what it means for an animal to "understand" actions of another and how that might happen. That's a much broader issue, even applied to particular species, especially humans.

However, as soon as we agree that we're talking about certain neurons only in the limited sense mentioned above, the question arises of whether such neurons exist at all. And the interesting thing is that there are generally accepted research findings that such neurons do exist – in one particular group of animals: macaque monkeys. Macaques are Old World monkeys frequently used in neurobiological research, because (being primates) they're a lot closer to humans than, say, rats, but also a lot more convenient and less expensive to work with than humans.

But note that, again according to Wikipedia, "The only animal in which mirror neurons have been studied individually is the macaque monkey. In these monkeys, mirror neurons are found in the inferior frontal gyrus (region F5) and the inferior parietal lobule." Apparently, evidence that mirror neurons even exist in other species, especially humans, is sparse to non-existent.

That's one of the main points made in the Hickok paper I've been referring to (Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans). Specifically:
there have been a host of studies aimed at investigating the ‘‘mirror system’’ in humans, but much of this work has investigated behaviors that mirror neurons could not possibly support given their response properties in monkeys, and therefore, the connection between these behaviors and mirror neurons is tenuously based on a chain of assumptions: Mirror neurons exist in humans (there are individual cells that respond both during action execution and action perception), mirror neurons have evolved to support functions in humans that they do not support in monkeys, this evolution has conserved the functional properties found in monkeys, and mirror neurons are responsible for the behavior in question.

Unfortunately, the existence of mirror neurons in humans is still just an assumption. And further, assumptions about what aspects of human cognition mirror neurons (if they exist) might take part in significantly exceed what is known about the function of mirror neurons in monkeys. Hickok provides a partial list of human capabilities sometimes assumed to involve mirror neurons: "speech perception, music perception, empathy, altruism, emotion, theory of mind, imitation, autism spectrum disorder, among others".

Clearly, that list far surpasses merely "action understanding". And it goes far beyond the data, because macaque monkeys are not known to exhibit most of those capabilities. Indeed, they are known to have little ability for imitation, for example. So to assume that mirror neurons can be used to explain all of those things requires the assumptions that humans have analogues of macaque mirror neurons, and that such analogues play a significant role in those varied capabilities.

Such assumptions may well provide fruitful hypotheses for future research. But the research to adequately justify such ideas has scarcely begun.

On top of all that, it's not clear how important mirror neurons are for "action understanding" even in macaques. I find two main points in Hickok's paper, and this is one of them. The other is to call into question the importance of a "mirror system" (of neurons in motor-related brain areas) in humans, should such even exist, for "action understanding". Unfortunately, I don't have time in this note to go into more detail on his arguments regarding these points.

I think one has to question what exactly is meant by "action understanding" in the first place. It's certainly a valid question for macaques. We can't even be sure of what it means for a macaque to "understand" anything – since we have no introspective knowledge of the macaque mind. But here's a possible way to operationalize the notion. We might say that "understanding" means an ability to predict what might follow the observation of a specific action in another individual. For example, certain facial expressions might predict further actions, such as overt aggression or attempts at mating behavior.
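The operational definition just proposed – "understanding" as the ability to predict what follows an observed action – can be sketched as a simple transition table. The action pairs and probabilities here are invented examples, not behavioral data:

```python
# Hedged sketch: "understanding" an observed action = predicting its likely
# follow-up. Probabilities are made up purely for illustration.
transition = {
    "bared_teeth": {"attack": 0.7, "retreat": 0.3},
    "play_bow":    {"chase": 0.8, "ignore": 0.2},
}

def predict(observed_action):
    """Return the most likely follow-up to an observed action, or None
    if the action is unfamiliar (i.e., not 'understood')."""
    outcomes = transition.get(observed_action)
    if not outcomes:
        return None
    return max(outcomes, key=outcomes.get)

assert predict("bared_teeth") == "attack"
assert predict("unknown_signal") is None
```

On this operationalization, the empirical question about mirror neurons becomes whether they contribute to building or consulting such predictive mappings – which, as noted below, is not something we currently know even for macaques.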

Many animals certainly have such abilities. Dogs, for instance, obviously know what it means when another dog is seen baring its teeth. But does that mean dogs have analogues of mirror neurons that contribute to this understanding? Do we know whether mirror neurons, even in macaques, assist in predicting future behavior of other individuals? Since I'm hardly an expert in animal behavior, I can't answer those questions.

What all of these questions mean to me is not that we have to give up hope of understanding human characteristics, such as empathy, altruism, imitation, theory of mind, and so forth. Instead, the take-away is that a vast amount of further research work needs to be done in order to establish a sound scientific understanding of the neurobiology of such abilities. The hypothesis that humans have some sort of "mirror system" of neurons that plays a role in empathy, altruism, imitation, etc. is at least a plausible one, inspired by a rough analogy with macaque mirror neurons.

But almost all the research remains to be done that would (1) identify parts of the human brain that comprise a "mirror system", and (2) show that such subsystems underlie each specific case of what we might call "empathetic behavior" in humans. This would appear to be a task similar in difficulty to that of achieving a neurobiological understanding of almost any type of "higher-level" human capability, from abstraction to complex language to long-range planning.

And yet there are good evolutionary reasons for expecting that one or more "mirror systems", in some broader sense, may exist in humans. Very generally, humans have highly developed capabilities for social organization. Such capabilities are needed to enable specific aspects of human social behavior, e.g. the ability to construct elaborate social hierarchies, build complex, enduring social coalitions, and detect covert violators of important social norms. Such capabilities would be evolutionarily favored in selecting for groups that are more cohesive and unified in competition with rivals. (At least if one thinks that group selection is possible.)

If you don't happen to believe in group selection, there are other reasons that individuals possessing good "empathic skills" would have an adaptive advantage and higher fitness compared to other individuals.

For example, consider hand-to-hand combat. Humans have always fought bloodily against others of their species, especially members of unrelated tribes. The ability to rapidly and accurately predict the next hostile moves of an opponent bent on one's destruction would have to be ranked as very valuable. Obviously, some individuals are able to become very skilled in "martial arts", which rely on highly sensitive observance of the detailed behavior of others.

More generally, beyond the level of physical combat, the admonition to "know your enemy" has always been excellent advice. Sensitive understanding of the subtleties of body language is an extremely useful skill in many competitive situations, as good poker players and canny negotiators will attest. Being able to understand the goals and intentions of an opponent certainly has evolutionary survival value. And the same ability regarding others in general is just as useful for achieving cooperation.

Understanding the neurobiology of such skills is an excellent, albeit very long-range, research goal. I'm not necessarily skeptical of the possibility of such a reductionistic goal, but I don't expect it to be achieved very soon, either....

One may reasonably ask why, if the actual evidence for a mirror neuron system in humans is so questionable, the idea is also so popular and widely discussed. The answer seems to be that, if valid, the idea provides important support for the philosophy of "embodied cognition", which holds that the nature of human thinking and emotion is closely tied to the physical body. But that's far too large of an issue to start addressing here.


Monday, February 16, 2009

Gut feelings may actually reflect reliable memories

We've had a bit of discussion of "gut feelings" recently. See here, here.

I've thought for a long time that one explanation of "intuition" and "gut feelings" when applied to dealing with a particular issue or problem is that we have dealt with a similar issue or problem before, but we do not have a clear memory of having done so. Or else we have read or otherwise learned some information related to the issue or problem. But when confronted with the issue or problem again, we "intuitively" sense how to deal with it, even though we aren't conscious of remembering the previous experience or information.

Actually, this happens a lot with experts in many areas, such as law, medicine, or business. For example, a young physician examining a patient with a certain set of symptoms may recall having learned in school that the symptoms might indicate any of several different problems. And that to distinguish among the possible causes of the symptoms it is necessary to carefully examine the particulars of the situation.

An older, more experienced physician, on the other hand, may quickly settle on one specific diagnosis without consciously going over the detailed checklist of distinguishing indicators. In this latter case, the "intuition" may simply be unconscious recollection of past experience where some specific feature in the symptoms correctly tipped the balance between one diagnosis or another.

This is not an original observation (though I can't quite recall where I first saw it), but there is new research that does support it:

Gut Feelings May Actually Reflect Reliable Memories (2/8/09)
You know the feeling. You make a decision you're certain is merely a "lucky guess."

A new study from Northwestern University offers precise electrophysiological evidence that such decisions may sometimes not be guesswork after all.

The research utilizes the latest brain-reading technology to point to the surprising accuracy of memories that can't be consciously accessed.

During a special recognition test, guesses turned out to be as accurate or more accurate than when study participants thought they consciously remembered.

"We may actually know more than we think we know in everyday situations, too," said Ken Paller, professor of psychology at Northwestern.

Actually, this is so well known that psychologists have names for it: "implicit memory" or "recognition memory". It's closely related to another effect called "priming".

So when we are, sometimes, urged to go with our intuition or "gut feelings" in making a decision, it's not necessarily bad advice. We may in fact be making well-informed decisions even when we think we are using our "intuition". But the problem is that our memory, whether explicit or implicit, can be unreliable or downright wrong. So "intuition" can just as easily get us into trouble.

In particular, we may be remembering "information" that is simply untrue. As Satchel Paige is reported to have said, "It's not what you don't know that hurts you. It's what you know that just ain't so."

When faced with important decisions, and enough time to consider them, perhaps it's not a bad idea to go consult reliable sources of information, just to be sure...

The blog Neurophilosophy has a good discussion of this research and implicit memory:

The neurological basis of intuition (2/9/09)
Most of us have experienced the vague feeling of knowing something without having any memory of learning it. This phenomenon is commonly known as a "gut feeling" or "intuition"; more accurately though, it is described as implicit or unconscious recognition memory, to reflect the fact that it arises from information that was not attended to, but which is processed, and can subsequently be retrieved, without ever entering into conscious awareness.


Further reading:

Study Suggests Why Gut Instincts Work (2/8/09) – Livescience.com

Hidden memories guide choices (2/9/09) – Nature.com

Subliminal messages really do affect your decisions (2/14/09) – NewScientist.com
