Sunday, January 22, 2006

The stem cell research scandal

With relatively few exceptions, major news in science, including genuine breakthroughs, gets little coverage in the media. Some topics, of course, do make news -- things like major space missions, health crises (AIDS, avian flu), and politically controversial topics (evolution, climate change, stem cells).

But scientific scandals involving faked research hardly ever fail to draw attention, especially when they involve topics already in the news -- like stem cells. Is it possible that the very rarity of such events (think of "man bites dog") has something to do with all the commotion? Major scandals seem to surface less often than once a year, worldwide.

In any case, one does not want to excuse scientific malfeasance. It's bad for everyone concerned. It hurts the reputations of all the parties involved, and the reputation of science itself. And science is an area of human activity in which reputation is of rather more than average importance.

However, major scandals also tend to evoke somewhat melodramatic overreaction. I think the following is in that category:

Scandal over Stem-Cell Research / A hospitable environment for scientific fraud

It's an opinion piece by Spyros Andreopoulos, who is credited as "director emeritus of the Office of Communication and Public Affairs at the Stanford University School of Medicine". Some of his other writings tend to train their critical focus on scientists and scientific institutions. This, for example, argues that "We need to improve scientists' understanding of the public." That's certainly a reasonable point, but not one to get into right now.

The scientific enterprise as a whole needs critique just as much as individual scientific research. But one may also disagree about the details, so let's look at Andreopoulos' take on the stem cell scandal:

South Korean scientist Dr. Hwang Woo Suk has been regarded as one of the most brilliant researchers in his field. So why would he concoct an elaborate hoax in the pages of Nature, as his critics claim, that he had cloned a dog, and written in Science that he had created human embryonic stem cells matched to patients who might benefit from them?

Since that was published, the dog cloning claim has been verified as correct, but all the other claims have been shown to be faked. So the important question is why the fakery was perpetrated.

Perhaps the answer is nothing more than ego. But another explanation could be the culture of science itself, which puts a premium on originality, on being first to make a scientific discovery. Being second, or third, hardly counts at all.

Stop right there. In a causal sense, Andreopoulos is most likely correct with respect to Hwang Woo Suk and the rather small number of others who have been responsible for similarly egregious instances of fakery.

But consider what he's saying, which is that a very competitive and intellectually challenging profession puts great pressure on individuals to succeed and even excel to the limits of their ability -- and sometimes beyond. This isn't any different from other professions like law, politics, journalism, medicine, and (especially?) business. Fraud and scandal are no strangers to any of those professions either. I would dare to say that the levels of fraud found in science are a lot lower than in any of those other professions.

Competitiveness and striving are part of human nature, and affect large percentages of individuals in almost any type of endeavor. But there also would seem to be some positive correlation between the prestige of a profession and the degree of competitiveness one finds in it. This is understandable -- the larger the rewards, the harder people will work for them. That does produce undesirable side-effects. Occasional breaches of ethics -- and sometimes outright fraud -- are one kind. A different kind is the toll on the quality of life experienced by people who are caught up in the rat race to succeed.

Yet competitiveness within a profession has its positive effects as well. One is obviously the fact that honors and rewards (in whatever form -- wealth, power, self-esteem, or even a more active sex life) motivate people to produce the best results they can. Science certainly can't do without such motivating factors any more than the other prestige professions.

But there's also another positive effect, which benefits the scientific endeavor itself as much as those who win in competition. Science, more than most professions, needs to have a way to rate individuals in terms of the reliability and authority of their accomplishments. Reputation is all-important in science. In any given field, it is vital to recognize the individuals who have the most correct and accurate grasp of reality. And so science provides honors and rewards to identify the best and brightest. These rewards come in a variety of forms -- academic tenure at top institutions, publication in the most prestigious journals, top scores in citation indexes, membership in National Academies and the like, conference speaking invitations, prizes and awards (including Nobels and numerous ones less famous). Most everyone in any given field knows, by such tokens, who the "alphas" are, since these awards are visible to anyone who's paying any attention at all.

Of course, this is elitism. It offends our egalitarian sensibilities. But science simply can't do without such things. Life is too short to read anything but the best of the literature in any particular field. Nobody can read it all. There need to be indicia of what's best. These reputational rewards which are part of the social system of science are the scientific community's method of voting for those of its members who seem most deserving of attention -- and, of course, future research grants.

It's not a perfect system. But nothing's perfect in human social arrangements. Votes are not at all weighted equally. Those who have already achieved the highest rankings have the most heavily weighted votes. But would any other system work much better? In an egalitarian system, how would you identify people not among your immediate acquaintances who are the most deserving of having their papers read? Or most deserving of very scarce research funds? Or who run the research group that you most want to join because it has the best success prospects? These are not trivial matters.

So the bottom line is, there are limited quantities of rewards available, it's a zero-sum game, hence people compete fiercely. How could it be any other way?

It's always possible there could be another way, or at least improvements. And it would be nice if sociologists of science would apply scientific method to the fullest extent in order to determine, first, how the reward systems of science actually operate, and, second, where and how some mechanisms are "better" (in some sense) than others. With such actual information, we'd then be in a better position to make decisions about improvements.

But we've drifted quite a way from Andreopoulos' article. Let's get back to that. He has further remarks on why fakery occurs:

The causes of fakery in science are a matter of debate. Its incidence, whether episodic or widespread, could be due to individual aberrations. In "The Great Betrayal: Fraud in Science," author Horace Freeland Judson blames it on inadequate mentoring of scientists, veneration of a high volume of published research, chases for grants and glory and political pressures for practical results.

These are valid points, but in light of what I wrote above, I think that they miss the forest for the trees. Competitiveness is both inevitable and beneficial. Ideas for remediation ought to be aimed at detecting and controlling fraud and abuses rather than reining in competition.

In a quite different sphere, that of business, competition is regarded as an almost unalloyed virtue -- provided that fraud and abuses are controlled. Anti-competitive phenomena such as monopolies and cartels are seen as undesirable (at best) or evil (except by would-be monopolists themselves). Mechanisms apart from free markets themselves -- such as government regulation -- for controlling fraud and abuse are also regarded as necessary evils (at best). But even the most ardent libertarians realize that fraud is a significant enough problem that we need a legal system to control it.

I'm not the world's most ardent libertarian, but I don't think it's a big stretch to take a similar attitude in science for controlling fraud and abuse. Respect and encourage competition, but have "appropriate" control mechanisms in place. That would limit the debate somewhat to identifying such mechanisms.

What does Andreopoulos suggest?

[A]nother probable cause contributing to lapses in individual behavior could be the scientific journals themselves. I have long suspected that the insidious rise of publication costs and fierce competition among journals may have contributed a hospitable environment for fraud.

He then devotes most of the remainder of his essay to dissecting the problem with journals and offering advice to journal editors and publishers on how to reform themselves.

I'm going to mostly skip over that for a simple reason: There is a tectonic shift underway in the journal publishing business. At the same time as publication costs and prices are ballooning -- to the point where academic libraries must continually cut back on their subscriptions -- technology is threatening to transform the whole academic publishing business beyond recognition. We now have "open access" journals like those of the Public Library of Science. In physics (and allied fields like mathematics and computer science) we have arXiv.org. And we have the elephant in the room -- Google (and a few similar efforts), with things like Google Scholar and Google Book Search.

In 10 years, the journal publishing business most probably won't look anything like it does now, so trying to "reform" it is very much like aiming at a fast-moving target. On the other hand, some of the new electronic forms of publishing, such as arXiv, have much lower standards of peer review than present journals. That's certainly a worrisome matter as far as fraud is concerned.

So, to finally return to the consideration of dealing with fraud, do I have any recommendations (not that anyone's probably going to pay much attention)?

What I would say is this: At the grave risk of seeming too complacent, I think science's fraud control system is already pretty good, in spite of this nasty stem cell scandal. Human social systems are nothing if not imperfect. A great deal of "scientific method" consists of social mechanisms for controlling imperfections (of which ordinary fraud is only one kind) in human acquisition and cataloging of reliable knowledge about the real world. The method fails to the extent that any errors creep in, regardless of whether they are due to fraud or simply sloppy technique.

It's amusing how even careless or fraudulent practices have occasionally managed to yield good science. One example is Robert Millikan's measurement of the electron's charge. Another is Edwin Hubble's measurement of the Hubble constant that describes the expansion of the universe. Because of erroneous assumptions about the intrinsic brightness of certain types of stars, which Hubble used to estimate distances, the value he originally estimated for the Hubble constant was off by a factor of seven. Even though it was decades before the value of this constant was better known, the general conclusions about the evolution of the universe were pretty much on target.
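
To see what "off by a factor of seven" means here, a back-of-the-envelope sketch may help; the specific numbers below are standard textbook values supplied for illustration, not figures from Hubble's paper or from Andreopoulos' article:

\[ v = H_0\,d \qquad \text{(Hubble's law: recession velocity of a galaxy at distance } d\text{)} \]
\[ \frac{H_0\ \text{(Hubble, 1929)}}{H_0\ \text{(modern)}} \approx \frac{500\ \text{km/s/Mpc}}{70\ \text{km/s/Mpc}} \approx 7 \]
\[ t_H = \frac{1}{H_0}: \qquad t_H(500\ \text{km/s/Mpc}) \approx 2\ \text{Gyr}, \qquad t_H(70\ \text{km/s/Mpc}) \approx 14\ \text{Gyr} \]

Since a naive age for the universe scales as 1/H_0, Hubble's overestimate implied a universe only about 2 billion years old -- younger than the Earth -- and yet the qualitative conclusion, that the universe is expanding, survived the correction intact.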

There are several reasons that science is pretty effective at self-correction. One of these lies in the very competitiveness of the process. Competitiveness may encourage fraud, but for the same reasons it is even more effective at rooting fraud out. For better or worse, one of the best ways for a scientist to compete is to demonstrate that some other scientist is wrong. And the more well-known and influential a scientist is, the more points can be gained by successfully discovering an error in his/her work. This can be very confusing to outside observers. A big news story may come out one month and receive attention in the popular press. And then a few months later someone comes along with evidence that the earlier results are wrong. (This happens a lot with research in health and medicine, to say nothing of the social sciences, but no branch of science is immune.)

It's instructive to look at how rapidly self-correction actually happens. Hwang Woo Suk's first paper found to be fraudulent was published in March 2004, and the second, more important (but faked) paper in May 2005. The first public reports of problems appeared in November 2005. Hwang resigned his university post on December 23, 2005, and finally on January 10, 2006 it was announced that both the 2004 and 2005 stem cell papers were based on fabricated data. Although it took more than a year and a half for doubts to be raised, less than two months elapsed before the case was effectively closed. Hwang has admitted that mistakes were made, but has not accepted full guilt. (Criminal charges may still be filed.)

Compare that with how slowly scandals in other fields are resolved. Take business, for example -- say, the Enron and Worldcom scandals. Controversy swirled around Enron for months before it declared bankruptcy in late 2001. Top officials of Enron (Kenneth Lay and Jeffrey Skilling) still haven't admitted guilt and have not yet gone on trial for their (alleged) malfeasance. The Worldcom case was similar. It declared bankruptcy in July 2002 after the company had been under suspicion of inflating its assets for a year and a half. The company's CEO (Bernard Ebbers) never admitted guilt, but was finally convicted of fraud and sentenced to prison in July 2005.

Or how about politics? The Watergate scandal dragged on for two years from the break-in to Nixon's resignation. It was constantly in the news. All that time most of the guilty parties continued to protest their innocence. Today we have the Abramoff and DeLay scandals (among others) with the same pattern. Hardly anyone but the lowlier figures admits guilt. Resistance to the release of relevant information is found at every turn. Only lengthy judicial processes have much chance of sorting it all out.

But religion seems to provide the worst example -- the Catholic Church and its pedophile priests. The Church has known about child sexual abuse incidents for at least two decades, and done little more (before public exposure) than move the perpetrators around. One Cardinal of the Church (Bernard Law), in particular, was notorious for negligence in taking action. Although he resigned as Archbishop of Boston in December 2002, the Church subsequently rewarded him with cushy appointments in the Vatican.

It's clear enough why the wheels of justice turn pretty slowly in business and political scandals. In the case of business, the people immersed in scandal are generally quite wealthy and able to drag things out with talented teams of lawyers. In politics, those involved in a scandal generally have friends (and quite possibly accomplices) in very high places, who can directly affect law enforcement and legal proceedings, as well as the release of crucial information. Scientists charged with malfeasance generally have none of these advantages, so once wrongdoing is suspected, justice can move swiftly to investigation, verification, and correction.

So let's get back to science. Another social mechanism it uses to deter fraud is the way young scientists are socialized. Basically, they are on probation from the time they enter graduate school until they receive academic tenure -- if they ever do. Until a young scientist manages to establish a good reputation, the onus is always on him or her to establish credibility and to rigorously justify research results. Young scientists have very little power compared to senior scientists (like Hwang). They just don't have the means to coerce others to help fabricate results. And at present, most science is very much a team effort.

Not to seem too idealistic about it, but if that weren't enough, the culture of science continually reinforces the idea that science is all about the search for truth. Fraud is simply antithetical to that ideal. In contrast, the culture of business and politics is much more about misleading marketing and cunning propaganda and what one can get away with rather than what is true. That's just the obvious (but sad) fact of the matter. And in contrast to science, business and politics (and law and other professions) have a culture where it's much more every man or woman for themselves, rather than a team effort.

We shouldn't be complacent about scientific quality, ever. I don't want to whitewash the situation. Maybe the take-away from all this is that we could certainly try to understand the sociology of science better. What factors make it work as well as it seems to actually do? What factors effectively deter deviance from the norms of good scientific conduct? What factors, on the other hand, induce deviance and abuse and fraud?

So maybe part of the answer is more systematic application of science to itself.
