Tuesday, September 27, 2005

Daniel Dennett and the "intelligent design" pseudo-controversy

Seems like the "intelligent design" controversy is everywhere now, doesn't it? That's because Christian fundamentalists behind the controversy went on the offensive this summer in Kansas. In August the Kansas State Board of Education conducted a kangaroo court trial of evolution and later issued new curriculum guidelines that redefined science to make room for "intelligent design".

Also, late last year in the small town of Dover, PA, the school board dictated that a statement be read to biology classes asserting that Darwinian evolution cannot be trusted and that students should also read about ID. This fiat led 11 parents of children in the district to sue the board for unconstitutionally promoting a religious doctrine as science, and the trial of that suit is just beginning.

But the claim that the dispute between evolution and "intelligent design" is a legitimate scientific controversy is a cynical fraud, as the following article by philosopher Daniel Dennett explains -- because the people who have instigated this "controversy" champion a theory, "intelligent design", that has no scientific content at all.

ID consists of just two parts. The first part is a congeries of criticisms of evolution, many of which are based on demonstrably false claims -- such as that there is no fossil record of "intermediate forms" between species. The second part is a proffered -- but untestable -- solution (the "intelligent designer") to a specific "problem" which almost all real biologists have concluded does not exist, namely that living creatures have parts which are "irreducibly complex".

The article was originally published on August 29, but deserves to be bookmarked, because it lays out the issues so clearly.

Show Me the Science
Evolutionary biology certainly hasn't explained everything that perplexes biologists. But intelligent design hasn't yet tried to explain anything.

To formulate a competing hypothesis, you have to get down in the trenches and offer details that have testable implications. So far, intelligent design proponents have conveniently sidestepped that requirement, claiming that they have no specifics in mind about who or what the intelligent designer might be.

If intelligent design were a scientific idea whose time had come, scientists would be dashing around their labs, vying to win the Nobel Prizes that surely are in store for anybody who can overturn any significant proposition of contemporary evolutionary biology.

George Gilder, a longtime affiliate of the Discovery Institute - the conservative organization that has helped to put intelligent design on the map in the United States - has said: "Intelligent design itself does not have any content."

Since there is no content, there is no "controversy" to teach about in biology class. But here is a good topic for a high school course on current events and politics: Is intelligent design a hoax? And if so, how was it perpetrated?
(The article is also here and here.)

Dennett explains this scam pretty clearly:
the proponents of intelligent design use a ploy that works something like this. First you misuse or misdescribe some scientist's work. Then you get an angry rebuttal. Then, instead of dealing forthrightly with the charges leveled, you cite the rebuttal as evidence that there is a "controversy" to teach.

Note that the trick is content-free. You can use it on any topic. "Smith's work in geology supports my argument that the earth is flat," you say, misrepresenting Smith's work. When Smith responds with a denunciation of your misuse of her work, you respond, saying something like: "See what a controversy we have here? Professor Smith and I are locked in a titanic scientific debate. We should teach the controversy in the classrooms." And here is the delicious part: you can often exploit the very technicality of the issues to your own advantage, counting on most of us to miss the point in all the difficult details.


The matter can be stated even more succinctly: To challenge any scientific theory, all you need to do is make whatever unsupported claims you want about it. And then when scientists object, you say, "Look! There's a scientific controversy! We must teach about it." Of course, if it were taught honestly, the complete inadequacy of ID would have to be explained, and not left as a legitimate scientific alternative.

In fact, there are various issues in the theory of evolution that real scientists are debating, such as the details of the mechanisms by which evolutionary changes occur. (For instance, does evolution act at the level of genes, or organisms, or groups of organisms, or all of these?) So there is scientific controversy, and it would be very appropriate to teach about these issues. Such disputes involve issues for which there are competing scientific explanations, and teaching about them is quite reasonable. But additionally throwing in a non-scientific, untestable explanation (the "intelligent designer" which can't be described in any way) is of no value.

However, there is no legitimate scientific controversy on the main issue that ID insists should be taught, namely whether biological organisms are so "irreducibly complex" that Darwinian evolution can't account for them. It is simply dishonest to teach that this is a scientific controversy.

The truth about the contents of "intelligent design science" is that there aren't any contents. Just try to cajole one of its advocates to say what "intelligent design science" is. You will not get anywhere. All they can do is make the philosophical assertion that some unknown intelligent entity caused life as we know it to take the form it does. They cannot, or will not, specify what that entity might be (and that is how they differ from Biblical creationists). Their "science" consists of nothing but various criticisms of evolution. While it is legitimate to analyze evolution critically, that in itself does not constitute a scientific theory.

It's just as much a confusion tactic to call intelligent design a science as it is to call evolution a religion. Anti-evolutionists want to blur the distinction, but that is not intellectually honest.

"Intelligent design theory" does not even pretend to address the simplest questions. (And yet its proponents demand that evolution answer vastly more detailed questions.) For example, they would agree that there is no fossil evidence for what can be called "modern humans" that is more than about 200,000 years old. So modern humans must have appeared on Earth about that long ago. Exactly how did the first modern human arrive here? Was it brought here in a flying saucer? Did it shimmer into existence as if delivered by a Star Trek transporter? Did the Intelligent Designer fashion it our of clay then breathe life into it as Genesis says? Advocates of intelligent design cannot give physical details to answer such a question. In other words, they really have no theory at all.

The approximate date of 200,000 years doesn't really matter. Whenever or wherever the first "modern human" showed up, intelligent design theorists have no answer at all regarding the event's mechanism and what it physically consisted of. How could such a non-theory be "taught" in classrooms as "intelligent design" advocates demand? There isn't anything to teach! All they have to talk about is critiques of evolution. A collection of critiques is not a theory.

Monday, September 19, 2005

When is pi(x) > li(x)?

This question probably won't seem especially urgent to most folks, even mathematicians. But since it's related to the most famous unsolved problem in math, namely the Riemann Hypothesis, it seems worth noting a little bit of recent progress.

G. F. B. Riemann stated his hypothesis as merely an incidental remark in his work on the prime number theorem. This theorem, which Riemann worked on but didn't himself prove, concerns the distribution of prime numbers. More specifically, it concerns the function π(x), which is defined as the number of prime numbers less than or equal to x, for any real number x > 0. One form of this theorem states that π(x) is asymptotically equal to x/log(x), where log(x) refers to the natural logarithm. In other words, π(x)/(x/log(x)) approaches 1 as x becomes arbitrarily large.

There is a sharper form of this result, which can be written as
π(x) = li(x) + O((x/log(x))·e^(−√log(x)/15))
Here the notation O(f(x)) means a quantity that is never more than Cf(x) for some constant C, and li(x) is defined as:
li(x) = ∫₂ˣ dt/log(t)
In this form, the prime number theorem was proved independently by J. Hadamard and C. de la Vallée Poussin in 1896.

The exact formulas aren't that important here. The key thing is that π(x) is "pretty close to" li(x) for large x. When people computed π(x) by hand -- an arduous task -- to determine precisely how close, they always found that π(x)<li(x), and so it was conjectured that this was true for any value of x, no matter how large.
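Nowadays it's easy to check this for small x yourself. Here's a minimal Python sketch (my own illustration, using only the standard library): it counts primes with a simple sieve and evaluates li(x) by a crude midpoint rule, which is plenty accurate for seeing that π(x) < li(x) in this range.

    from math import log

    def prime_count(x):
        """pi(x): count the primes <= x with a sieve of Eratosthenes."""
        n = int(x)
        sieve = [True] * (n + 1)
        sieve[0:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                sieve[p*p::p] = [False] * len(sieve[p*p::p])
        return sum(sieve)

    def li(x, steps=100000):
        """li(x) = integral from 2 to x of dt/log(t), by the midpoint rule."""
        h = (x - 2.0) / steps
        return h * sum(1.0 / log(2.0 + (i + 0.5) * h) for i in range(steps))

    for x in (10**3, 10**4, 10**5, 10**6):
        print(x, prime_count(x), round(li(x), 1))

For x = 1000, for example, this prints 168 primes against li(1000) ≈ 176.6, and the deficit persists as far as anyone has computed directly.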

However, in 1914 J. E. Littlewood proved that this conjecture was wrong, and in fact the quantity li(x)-π(x) changes sign infinitely often. So there must be some x such that π(x)>li(x), and we can ask what the least such x is. Apparently it is very large. The first good result on how large was obtained by S. Skewes in 1955, who showed that
x < 10^(10^(10^1000))

Fortunately, that wasn't the end of the story. The exponent has been reduced considerably. Until this year, the best estimate was by H. te Riele in 1987, who showed that
x < 6.69 × 10^370
But there has just been a "substantial" improvement. Kuok Fai Chao and Roger Plymen have shown that
x < 1.3984775 × 10^316
See A new bound for the smallest $x$ with $\pi(x) > \li(x)$ for full details (if you're really curious).

How does this relate to the Riemann hypothesis? That hypothesis concerns the values of a complex function ζ(s) of a complex variable s. It says that the only "nontrivial" values s such that ζ(s)=0 all have the real part of s equal to 1/2. (The "trivial" zeros occur at the negative even integers.)

It turns out that this statement is equivalent to one about π(x) and li(x), namely that
π(x) = li(x) + O(√x·log(x))
It is known that this is the best possible result that can hold. So the Riemann hypothesis, if true, says that the "best possible" approximation is correct, even though so far we have been able to prove only a somewhat lesser degree of approximation.


Dark energy, quintessence

We mentioned dark energy and the cosmological constant just a few days ago in connection with "zero point energy". The distinction is that "dark energy" refers to an undefined energy of some sort that must exist in order for the universe to be geometrically "flat" in accordance with the equations of general relativity. In fact, there are now several lines of evidence that this dark energy must exist, even though its nature and origins are quite unknown.

The cosmological constant (denoted by the Greek letter Λ), on the other hand, is a very specific candidate for dark energy. While it would do the job admirably, there is no independent evidence for its existence. And it has conceptual problems, namely that if (as generally supposed) the cosmological constant can be explained as zero point energy, then its apparent density is a factor of about 10^120 smaller than straightforward calculations suggest it ought to be. In other words, Λ must be nonzero but finely tuned to an implausibly small number. Situations like that make physicists very nervous.

However, there are many other forms that the dark energy could take, which are generally referred to as "quintessence". The idea of quintessence was proposed in 1998 by R. R. Caldwell, R. Dave, and P. J. Steinhardt. See this overview article by Caldwell and Steinhardt for more details.

This article: Dark Energy by Caldwell is an excellent recent (2004) summary of our present knowledge of dark energy, including the evidence for its existence. The evidence is:

  1. Many measurements of the apparent luminosity of distant "standard candle" Type Ia supernovae show that the expansion of the universe is now accelerating, instead of decelerating as would be the case if there were no dark energy.

  2. The existence of dark energy predicts a phenomenon known as the integrated Sachs-Wolfe effect, which represents a slowing of the collapse of overdense regions of the universe. This prediction has been confirmed by combining data from detailed measurements of the cosmic microwave background (CMB) and the large-scale distribution of galaxies. (The article by Caldwell has a good explanation.)

  3. There is weaker circumstantial evidence from CMB and galaxy distribution data.

Given that we can now be fairly confident dark energy exists, the big question is: What is it? In particular, can we tell whether it is a result of a cosmological constant or, instead, of quintessence?

One characteristic of the cosmological constant Λ is that it is truly a constant. It is the same everywhere and for all time. Quintessence, on the other hand, gives an energy density which varies spatially (i. e., isn't homogeneous) and with time (it decreases). These differences should make it possible to distinguish the alternatives through very sophisticated measurements of the acceleration of the universe at different time periods. The instruments and space missions that could make the measurements are under design and development.

Quintessence itself can come in many possible types, and a recent technical paper, The Limits of Quintessence, by R. R. Caldwell and Eric V. Linder distinguishes two subtypes of quintessence in some detail, and both of those from Λ. If dark energy is actually quintessence, the measurements which are being developed should be able to distinguish between the subtypes. A less technical discussion of quintessence and ways to test for it appears in this news article: Finding A Way To Test For Dark Energy.

But if you're willing to tolerate a few equations (and just a pinch of calculus), we can show the essence of the difference. Your reward for following along here is that you will be able to understand the technical articles just mentioned a little better.

The important thing is a number that's conventionally written as "w". w is simply a constant of proportionality between pressure and energy density. For any given type of matter or energy the relation is this: P = wε, where P is pressure and ε is energy (or mass) density. This equation is from the theory of gases and is known as the "equation of state".

One other equation we need is called the "fluid equation". It describes how energy density, pressure, and a third quantity called the "scale factor", denoted by "a", are related in an expanding (or contracting) universe. The scale factor can be thought of as a variable yardstick that expands or contracts in proportion as the universe does. (For much more about the scale factor and equations involving it, see this.) Here is the equation:
ε&prime + 3(&epsilon+P)a′/a = 0
The prime symbol (′) in there denotes derivative with respect to time. The derivative is zero just in case the quantity is a constant. So ε′ = 0 just in case we have the equation of state &epsilon=-P, which means w=-1.

Suppose the cosmological constant Λ is the dominant form of dark energy. Since Λ is a constant, the corresponding energy density ε is constant, so ε′=0 and w=-1. In other words, dark energy being entirely the result of a cosmological constant corresponds to the parameter w=-1.

But there's no a priori reason that w couldn't be just about any varying function of time. What would it be if the dark energy were solely the result of quintessence? To answer that we need one more equation, called the "acceleration equation":
a′′/a = -(4πG/3c²)(ε+3P)
The double prime denotes the second derivative, which is interpreted as acceleration. π is the constant 3.14159..., G is Newton's gravitational constant, c is the speed of light, and a, ε, and P are as before.

What this equation says is that the acceleration of the expansion of the universe is a negative number times ε+3P. Since we now know observationally that the acceleration is positive, we must have ε+3P<0. And since P=wε by definition, we must have ε+3wε<0, hence w<-1/3. To be consistent with observations, we must have w<-1/3 if ε is the energy density corresponding to quintessence. (This also assumes Λ=0. If dark energy consists of both quintessence and a cosmological constant, which isn't impossible, things would be much more complicated.)

The bottom line of all this is that we can distinguish between quintessence and a cosmological constant as the source of dark energy (if both are not present) just by measuring accurately enough how the universe is expanding, which will tell us what w is. If w=-1, we have a cosmological constant. If -1<w<-1/3, we have some form of quintessence. (It is also conceivable that w<-1, in which case things are really weird.)
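To make the role of w concrete, here is a minimal numerical sketch (my own illustration, not taken from the articles above). For a constant w, the fluid equation gives dε/da = -3(1+w)ε/a, whose solution is ε ∝ a^(-3(1+w)): matter (w=0) dilutes as a^(-3), while a cosmological constant (w=-1) doesn't dilute at all.

    def density(a_end, w, e0=1.0, steps=100000):
        """Integrate de/da = -3(1+w)e/a from a=1 to a=a_end (simple Euler)."""
        a, e = 1.0, e0
        h = (a_end - 1.0) / steps
        for _ in range(steps):
            e += h * (-3.0 * (1.0 + w) * e / a)
            a += h
        return e

    for w in (0.0, -1.0/3.0, -2.0/3.0, -1.0):
        numeric = density(2.0, w)   # density after the universe doubles in scale
        analytic = 2.0 ** (-3.0 * (1.0 + w))
        print(f"w = {w:+.2f}: numeric {numeric:.4f}, analytic {analytic:.4f}")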

In fact, we can put slightly tighter bounds on w, since we know roughly how much dark energy there is, and this is because we know the universe is spatially flat. Let ε_t stand for the total energy density in the universe, and let ε_m be the energy density due exclusively to matter (most of which is dark matter). Careful measurements of the motions of stars in galaxies and of galaxies give us a value for ε_m. Knowing in addition that the universe is flat tells us what ε_t has to be, and hence that ε_m is about (1/3)ε_t. (Actually it's a little less, but that's close enough.) Since ε_d, the energy density of dark energy, accounts for all the rest, we have ε_d = (2/3)ε_t.

Since the expansion of the universe is accelerating, the acceleration equation implies ε_t+3P<0. But P=wε_d, since matter does not contribute to pressure (it has its own effective w=0), and hence P=w(2/3)ε_t. Plugging that in, we have ε_t+2wε_t<0, and so w<-1/2 (instead of w<-1/3).

Finally, we can indicate what the two subtypes of quintessence are that Caldwell and Linder identified in their paper. The types are distinguished according to whether the first time derivative of w, i. e. w′, is positive or negative. If w′<0, then w is decreasing with time and in the limit is -1, so in some sense the quintessence is "freezing" into a cosmological constant, which means that the acceleration of expansion will continue forever. On the other hand, if w′>0, then w will gradually increase from near -1, away from behaving like a cosmological constant, and this case is called "thawing". As the universe expands, both ε_d and ε_m decrease (the total amounts of quintessence and matter don't change, but the volume is increasing). The quantity ε_t+3P = ε_m + ε_d + 3wε_d = ε_m + (1+3w)ε_d. Now -2<1+3w<0 since -1<w<-1/3, so ε_t+3P approaches 0 as the densities decrease, and so the acceleration gradually goes to 0 and stops.

Both subtypes of quintessence can be modeled using a variety of different types of "scalar fields", but not any fields that are part of the current standard model of particle physics.

What about the case w<-1? In that scenario, acceleration increases very rapidly, leading to what is called the "big rip", in which not only the universe itself expands, but in the distant future even stars and eventually subatomic particles are torn apart. This would correspond to yet another type of quintessence, called "phantom energy". But that's a story for another time.

Related:


How Are We to Make Progress With w?


Sunday, September 18, 2005

Global warming and hurricanes

In light of what I said yesterday, "our preparation should certainly be a large investment in science and technology for coping with the possible effects we can foresee," the following may be of interest.

First, new research shows that there is a relationship between warming and the proportion of strong hurricanes: Warming world blamed for more strong hurricanes.
The study finds there has been no general increase in the total number of hurricanes, which are called cyclones when they appear outside the Atlantic. Nor is there any evidence of the formation of the oft-predicted “super-hurricanes”. The worst hurricane in any year is usually no stronger than in previous years during the study period.

But the proportion of hurricanes reaching categories 4 or 5 – with wind speeds above 56 metres per second – has risen from 20% in the 1970s to 35% in the past decade.
Second, there has been research into possible ways to disrupt or deflect hurricanes: Could humans tackle hurricanes? But merely deflecting hurricanes isn't without problems:
[H]urricane steering creates hard choices. “Choosing between a Category 3 hitting Pensacola and a Category 5 hitting New Orleans is easy. But the people of Pensacola may have something to say about it.”
Here's a full article on controlling hurricanes at Scientific American.


Social surfing and knowledge management

This sounds rather interesting: Will web users ‘Flock’ to social surfing?

The idea is to create a browser (or more precisely, to enhance an existing browser -- Firefox) "to make writing, editing, sharing and displaying web content faster and easier." Among the operations it would assist are inclusion of images and maintenance of shared bookmark collections.

There's already an intriguing service for managing shared bookmarks on scientific topics, called Connotea. Basically how it works is that you register with the service under a user name of your choice. Then by saving a special bookmark in your browser you can save the URL and other information about a page you are viewing simply by activating the bookmark. Many blogging packages provide a similar facility, which saves the page information as a blog entry in your own blog.

Connotea allows you to specify tags for each page you select. Thereafter, other users can find the link by a search on the tag. For instance, this search shows Connotea bookmarks with the tag "small rna". (You don't need to be registered with Connotea and logged in for this to work.) It's also possible to search for the bookmarks of a particular user (such as yourself), and there are a number of other capabilities.
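At its core, a service like this is just maintaining an inverted index from tags to bookmarks. Here's a toy sketch of the idea (purely illustrative -- the URLs and function names are made up, and this is not Connotea's actual interface):

    from collections import defaultdict

    bookmarks_by_tag = defaultdict(set)

    def save_bookmark(url, tags):
        """Index a URL under each of its tags."""
        for tag in tags:
            bookmarks_by_tag[tag].add(url)

    save_bookmark("http://example.org/rnai-review", {"small rna", "genetics"})
    save_bookmark("http://example.org/mirna-paper", {"small rna"})

    print(sorted(bookmarks_by_tag["small rna"]))   # both URLs above

The interesting part, of course, isn't the data structure but the fact that the tags are assigned by human judgment rather than by an algorithm.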

It seems to me that there's a very important need that "social surfing" addresses. There are now several billion pages on the Web. Most of these were created (at least indirectly) by humans. But finding specific, useful information among all the billions of pages can be quite difficult. The main tool now used is a search engine like Google. But that's an entirely automated computer process. A search process that intimately involved human judgment of hundreds or thousands of people to index information for specialized topics would be a great advance. This would appear to be what "social surfing" offers.

There is some overlap here with another form of "social surfing", namely "wikis", such as Wikipedia. This involves a group of people collaborating to organize information in a given topic area (from "everything" as with Wikipedia, to very specialized topics, such as abstract algebra). The information being organized includes Web links, but also substantive expository articles as well, and just about anything else deemed relevant, such as pictures, audio and video files, spreadsheets, databases, etc.

How long will it take, do you suppose, before sociologists take up the study of a new subfield, which might be called the "social management of knowledge"?

I think that as long ago as 40 years Doug Engelbart foresaw all this coming about, at least in a general sort of way.

Saturday, September 17, 2005

Should we continue the space program?

I don't know, but here's an interesting discussion.


Zero point energy and the Casimir effect

The all too human hope of getting "something for nothing" unavoidably affects inventors, engineers, and physicists as much as anyone else. Hence the perennial popularity of the futile quest for "perpetual motion". Akin to this, but not quite so hopeless, is the pursuit of limitless energy in the form of "zero point energy" to avert the world's looming energy crisis.

Quantum mechanics suggests that zero point energy (ZPE) must be real for a simple reason. The uncertainty principle implies that the kinetic energy of a particle can never be precisely determined, and in particular it cannot be precisely zero. So every particle must have some nonzero kinetic energy, however small. Furthermore, there is no such thing as an absolute vacuum, since "virtual particles" can also come into existence for very short but nonzero periods of time. And so even a "perfect vacuum" can contain energy, which is called the zero point energy.

However, strangely, there is still no experimental evidence that ZPE -- or "energy of the vacuum" -- is actually "real". It is often supposed that ZPE accounts for the cosmological constant, a leading candidate for dark energy. Although there is now good evidence that the cosmological constant is nonzero, it's only a guess that it has something to do with ZPE.

Indeed, it hasn't been possible to actually calculate a value for ZPE. Naive calculations predict a value that is as much as a factor of 10^120 larger than what it should be if it is responsible for the estimated value of the cosmological constant. Still, various arguments for the existence of ZPE are so good that few physicists actually doubt it.

Assuming that ZPE is real, many physicists, engineers, and would-be inventors have invested countless hours, in a bid for undying fame, hoping to find a way to capture ZPE in some economically useful way. Here is one of the latest in this tradition:

Magnetic energy? Perhaps
The nation's energy industry is struggling to recover from Hurricane Katrina. Gas prices are soaring as a result of the catastrophic storm. America's reliance on overseas oil increases every year.

And from his office in the North Bay city of Sebastopol, Mark Goldes envisions a day -- perhaps not so far off -- when none of this will be a problem.

Goldes, 73, is chief executive of a small company called Magnetic Power Inc., which has spent years researching ways to, yes, generate power using magnets. ...

What Goldes believes he's done is produce power from what physicists call zero-point energy. In simple terms, zero-point energy results from the infinitesimal motion of molecules even when seemingly at rest.
Unfortunately, as already noted, there is as yet no experimental evidence that ZPE actually exists. Interestingly enough, most physicists think there is such evidence, in the form of a phenomenon known as the Casimir effect. In a nutshell, the effect is a very small but measurable force between two very flat metal plates that are very close together. Here's one of numerous references from a usually reliable source, Physics World: The Casimir effect: a force from nothing.
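To get a feel for the size of the force, here's a quick calculation with the standard result for ideal conducting plates, P = π²ħc/(240d⁴). (The formula is the textbook one; the separations below are just sample values.)

    import math

    HBAR = 1.0546e-34   # reduced Planck constant, J*s
    C = 2.998e8         # speed of light, m/s

    def casimir_pressure(d):
        """Attractive pressure (Pa) between ideal plates at separation d (m)."""
        return math.pi**2 * HBAR * C / (240 * d**4)

    for d in (10e-9, 100e-9, 1e-6):
        print(f"d = {d:.0e} m: P = {casimir_pressure(d):.2e} Pa")

At 10 nanometers the pressure is on the order of an atmosphere; at a micron it has fallen to about a millipascal, which is why the measurements are so delicate.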

But just this year, in March, R. L. Jaffe came out with a paper demonstrating that the Casimir effect can be explained without ZPE:

The Casimir Effect and the Quantum Vacuum
In discussions of the cosmological constant, the Casimir effect is often invoked as decisive evidence that the zero point energies of quantum fields are "real". On the contrary, Casimir effects can be formulated and Casimir forces can be computed without reference to zero point energies. They are relativistic, quantum forces between charges and currents.
Note that Jaffe isn't claiming that the Casimir effect isn't a result of ZPE. Instead, he's just claiming a way to derive the Casimir effect from "the forces between charged particles in the metal plates." In this view, "The Casimir force is simply the (relativistic, retarded) van der Waals force between the metal plates."

Is this nothing but an overly fastidious academic quibble? We should be careful about supposing that. If Jaffe is right, we still lack any experimental evidence for ZPE, however right it seems theoretically -- it's still a challenge to experimental physicists. And even if ZPE is real, how it may relate to dark energy, if at all, is very mysterious.

Acknowledgement

I'm indebted to Phil Gossett for bringing the Jaffe paper to my attention in postings to a private mailing list.


Global warming: point of no return?

The bottom line: we're screwed.

Global warming 'past the point of no return'
A record loss of sea ice in the Arctic this summer has convinced scientists that the northern hemisphere may have crossed a critical threshold beyond which the climate may never recover. Scientists fear that the Arctic has now entered an irreversible phase of warming which will accelerate the loss of the polar sea ice that has helped to keep the climate stable for thousands of years.

They believe global warming is melting Arctic ice so rapidly that the region is beginning to absorb more heat from the sun, causing the ice to melt still further and so reinforcing a vicious cycle of melting and heating.

The greatest fear is that the Arctic has reached a "tipping point" beyond which nothing can reverse the continual loss of sea ice and with it the massive land glaciers of Greenland, which will raise sea levels dramatically.
Some people still think this is alarmist nonsense. However, the scientific consensus continues to grow more one-sided: global warming is real. And the evidence is coming from many directions. For instance, this:

Vegetation Growth May Quickly Raise Arctic Temperatures
Warming in the Arctic is stimulating the growth of vegetation and could affect the delicate energy balance there, causing an additional climate warming of several degrees over the next few decades. A new study indicates that as the number of dark-colored shrubs in the otherwise stark Arctic tundra rises, the amount of solar energy absorbed could increase winter heating by up to 70 percent.

Yet some people still think that the warming probably isn't even real, and many more think the effects won't be as serious as projected even if warming is real. The latter sort of remind us of the people in New Orleans who decided not to evacuate before the hurricane (either because they had no easy means to, or they simply thought they could "ride it out"). Well guess what. In that case, the effects were much worse than most feared.

And there are reports that political leaders who have previously supported efforts to reduce greenhouse gas emissions, like the UK's Tony Blair, may be changing their minds. Considering the source, don't take that as a sure thing. But it is possible that the problem is now recognized as so serious that reducing emissions, even more drastically than foreseen by the Kyoto treaty, won't avert something really bad. What's the alternative? This last reference suggests:
So what will happen instead? Blair answered: "What countries will do is work together to develop the science and technology….There is no way that we are going to tackle this problem unless we develop the science and technology to do it."
However, if warming is real, unavoidable, and likely to have more serious effects -- and possibly sooner -- than generally supposed, then perhaps science and technology won't be able to let us avoid a really bad outcome. Maybe the best we can hope for is making it a little less bad.

Pursuing the Katrina analogy, it may be that it's much too late to flee the hurricane bearing down on us or to prevent the breaching of the levees. At best we can employ science and technology in a crash program to build better pumps to clean up a little bit faster after the disaster that "no one could have foreseen" actually occurs. It may or may not be possible to significantly mitigate global warming over the next century. But part of our preparation should certainly be a large investment in science and technology for coping with the possible effects we can foresee.

Maybe that's a pessimistic scenario. But isn't disaster planning largely about planning how to handle the "worst" case?

People will be tempted to think that with just a little preparation we can "ride out" the coming global warming storm without much inconvenience. But if New Orleans is any indication, that might not be such a good idea.

And what about New Orleans itself? Rebuilding it might be the "right" thing to do. But we darn well better plan for higher sea levels and more frequent and powerful hurricanes.


Thursday, September 15, 2005

Bad science journalism

Really good article by Ben Goldacre in the Guardian: Don't dumb me down

I won't try very hard to summarize it, since it has a lot of meat. Just read it if you're interested in the subject. (And you probably are if you're reading this blog.)

The first half is a taxonomy of bad science journalism. Basically there are three kinds: "wacky stories", "scare stories", and "breakthrough stories". A common characteristic they have is their purpose, which is much more to entertain than to inform. That's of a piece with the journalistic principle that "if it bleeds, it leads". I. e., stories about war, pestilence, famine, and death sell a lot of newspapers.

The balance of the article assesses possible reasons for all the bad science writing. In general, Goldacre's thesis seems to be that it's all because science journalism tends to be practiced mostly by people who majored in humanities rather than science.

There's probably a lot of truth to that. And we can also concede that when science writing is done by people with a real science background, it often suffers because the writers would really prefer to be doing science rather than journalism. (Though there are numerous exceptions to this -- names like Sagan, Weinberg, Mayr, Watson, etc.)

But I have to wonder whether a lot of the reasons for the problems with science writing have to do not with the writers but with the audience. And by extension, with that part of society whose job is to educate the audience, namely the educational system.

There are some quotations attributed to Einstein, such as: "If you can't explain something simply, you don't know enough about it." "You do not really understand something unless you can explain it to your grandmother." "It should be possible to explain the laws of physics to a barmaid." Unfortunately, I have to disagree somewhat. There are some things that require a fair amount of education to understand properly, and most of modern science is among them. We wouldn't expect anyone to be able to explain quickly how financial derivatives work or how a symphony in sonata form is constructed, except at so high a level as to be of little actual use. Why should cosmology or molecular biology be any different?

In particular, people often complain of too much "jargon" or "technobabble" in writing about scientific and technical subjects. People want things explained in "plain English". I think that this demand is unrealistic and is an important reason that science writing is dumbed down as much as it is.

The fact is, language is built upon a hierarchy of concepts, and scientific or technical concepts correspond to carefully chosen, specialized "technical terms", "terms of art", and the like -- i. e. "jargon". This is just as true in fields like law, journalism, and the game of baseball as it is in medicine, genetics, astrophysics, and higher mathematics. Simply put, it is much simpler and more economical to explain a technical subject using the terminology appropriate to the subject, rather than in terms of "plain English", where one needs repeatedly to use long, inaccurate circumlocutions instead of the proper technical terms.

To take a specific example, consider elementary particle physics. By now terms like "protons" and "neutrons" and "electrons" are familiar enough to the public, and even, perhaps, "quarks". (Not that most of the public could give anything like a satisfactory definition of any of them.) But to really get into the subject, you need terms like "baryons", "strong force", "symmetry", "bosons", and so forth.

Of course, on first introducing such terms, you should give "plain English" definitions using words that are presumably more familiar. And ideally, describe the concepts using nice, crisp analogies and metaphors. But after that, heaven help you if you can't count on the audience to remember the definitions and you must instead repeat them every time you need to invoke the concept.

And don't make the related mistake of avoiding proper technical terms with Greek or Latin roots in favor of terms of your own devising using more "common" Anglo-Saxon vocabulary. While this may conceivably make your immediate job easier, the effort will be wasted when people in the audience are later unable to identify your ad hoc terminology with what is actually used by people in the given subject area.

The problem can be illustrated in particle physics. If instead of "bosons" you talk about "force-carrying particles", not only is the result clumsier, but the audience will eventually wonder what force it is that a "Higgs boson" carries if and when they come upon the term elsewhere. This is especially a problem with the use of mathematical terms. How many writers have the guts to use a term like "homeomorphism", even when it's appropriate?

Enough said about that. I think there is one other problem that accounts for a lot of bad science writing, and that is the use of mathematics in any way. Simply put, too many people seem to be scared to death of mathematics, so that when statistical ideas are needed to discuss a subject or a few well-chosen equations can make things clearer, writers will try to avoid them so as not to "frighten the horses". Or because an editor insists that use of equations will turn people off and depress sales.

What should be done if some math is really needed to explain something? I don't know. Punt?

In any case, failure to use proper technical terminology and failure to use mathematics (when appropriate), seem to me to be the most common characteristics of dumbed-down science writing. Both these problems arise when the writer has to assume that the audience will have a lot of trouble with mathematics and technical language. (Of course, this is only if the writer actually does sufficiently understand what he/she is writing about.)

But how can such an assumption be avoided if the audience hasn't been properly prepared after a dozen or so years of elementary and secondary education? That's a rhetorical question, I guess. Unfortunately, the assumption generally can't be avoided.

The problem's not limited to science either. How can the public understand journalism about politics or the economy or international trade if basic education in history, geography, civics, logical reasoning, and critical thinking isn't adequate?


Saturday, September 10, 2005

What deuterium tells us about dark matter

Why are cosmologists so sure about the existence of dark matter? Is it possible that this "missing matter" hasn't been found simply because astronomers haven't been clever enough to look in the right place?

Those are good questions, and there is a good answer. First, recall that there are actually two types of dark matter: baryonic and non-baryonic dark matter. Baryonic matter (whether luminous or dark) is matter that is made up of neutrons and protons, including all the forms of matter that have ever actually been observed by physicists (with the exception of very lightweight elementary particles like electrons and neutrinos).

It is possible to compute approximately the total density of matter in the universe in several ways, but mainly by observing the motions of stars within galaxies and of galaxies within galaxy clusters. These observations tell us that there is far more matter out there than occurs in luminous objects such as stars and galaxies. It is quite possible, of course, that much or even most of the matter that isn't luminous is still "ordinary" baryonic matter.

But there is one good way to tell that this cannot be the case. It turns out that during a brief period in the very early universe, between about 5 minutes and 30 minutes after the big bang, almost all of the lightweight nuclei other than protons (ordinary hydrogen) were formed in a process called nucleosynthesis. The most abundant of these nuclei was ordinary helium: helium-4, consisting of 2 protons and 2 neutrons. This amounted to about 24% of baryonic matter, with almost all the rest being ordinary hydrogen (single protons). In addition, there were very small trace amounts of deuterium (hydrogen-2), tritium (hydrogen-3), helium-3, and lithium-7. Complex calculations make it possible to predict roughly what the proportions of each of these nuclei should be. Numerous observational studies have repeatedly confirmed these predictions, and this forms some of the most solid evidence for the whole big bang model of the origin of the universe.

In these calculations it happens that the very small proportion of deuterium depends very sensitively on the ratio of the number of baryons to photons that existed at the time nucleosynthesis began. Therefore, if we knew how much deuterium was formed, we could infer what this baryon-photon ratio was. Of course, we have no way to measure the amount of deuterium right after nucleosynthesis was complete. But because deuterium has a rather fragile nucleus, it is destroyed rather than created in stars. So a measurement of the amount of deuterium in the universe today gives a lower bound for what existed in the very distant past, and probably not a bad estimate, as long as one measures the abundance of deuterium in interstellar gas, most of which has not been inside a star.

Unfortunately, until recently, the only way to measure this abundance has been spectroscopically, at ultraviolet and optical frequencies. This is a problem because in those wavelengths the spectra of hydrogen and deuterium are very similar. They are much less similar at radio frequencies, and good RF measurements have just been announced by a research group at MIT: Researchers find clue to start of universe. The work was done at MIT's Haystack Observatory.

So, what's the bottom line? What was found is that the ratio of deuterium to hydrogen in the local interstellar medium implies a photon-baryon ratio of about 2 billion to 1. That is, there must be about 2 billion photons for every baryon. (See the graph at the bottom of this page for the relationship between the photon-baryon ratio and the deuterium-hydrogen ratio.) But it's relatively easy to determine the density of photons in the universe (from measurements of the cosmic microwave background). And so we can get the density of baryons in the universe. The result is that baryons can make up only about 15% of the mass of all gravitating matter in the universe. And hence the other 85% has to be non-baryonic dark matter.
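Here's how the arithmetic goes, as a back-of-envelope sketch. The photon number density and critical density are standard values that I'm supplying; they aren't from the article.

    M_PROTON = 1.67e-27       # proton mass, kg
    N_PHOTON = 411e6          # CMB photons per cubic meter (411 per cm^3 at 2.725 K)
    RHO_CRIT = 9.5e-27        # critical density, kg/m^3 (for H0 ~ 70 km/s/Mpc)

    photon_baryon_ratio = 2e9                    # from the deuterium measurement
    n_baryon = N_PHOTON / photon_baryon_ratio    # baryons per cubic meter
    rho_baryon = n_baryon * M_PROTON

    print(f"baryon density: {rho_baryon:.2e} kg/m^3")
    print(f"fraction of critical density: {rho_baryon / RHO_CRIT:.3f}")

This gives a baryon density of about 4% of the critical density. With total gravitating matter at roughly a quarter to a third of the critical density, baryons indeed come out to about 15% of all matter.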

These conclusions about the relative abundance of deuterium and hence the amount of non-baryonic dark matter are not new, but the measurements of deuterium in the interstellar medium provide an independent confirmation of earlier results.

Additional references:

Another article about this: Deuterium at the dawn of time

Original journal article: Deuterium Abundance in the Interstellar Gas of the Galactic Anticenter from the 327 MHz Line (subscription required for full access)


DNA transcription isn't so simple...

It seems that the more you know, the more there is to know.

Up until a relatively few years ago, when it became possible to read the genetic code in DNA easily, the process by which the genetic information in DNA was used to produce proteins in cells seemed fairly simple. In a nutshell, an enzyme called RNA polymerase would "read" the portion of DNA corresponding to a particular gene and produce a transcribed copy in the form of messenger RNA, or mRNA. The mRNA, in turn, would be "read" by a complex of RNA and protein known as the ribosome to construct a protein.

However, that's a substantial oversimplification. For one thing, there isn't a 1-to-1 correspondence between genes and proteins in higher animals such as mammals. It isn't known how many different proteins there are in a typical mammal, but humans might have several hundred thousand. Yet a surprising finding from sequencing of the human genome is that there are only about 25,000 genes. Furthermore, genes themselves are not simple in structure. It has been known for some time that most genes of eukaryotic cells consist of multiple segments called exons and introns. The raw transcript of a gene contains the information from both exons and introns, and the resulting RNA is called pre-mRNA. This pre-mRNA then undergoes an editing process called splicing, performed by a "spliceosome", in which the segments corresponding to introns are removed. The strand of "mature" mRNA that results is the mRNA that is used as a template to construct proteins. It is still mostly unknown why this unused information from introns is present at all.

And if that weren't enough complexity, it turns out that a gene can be spliced in several different ways, which involve only some of the gene's original exons in different combinations. This process is called alternative splicing. The set of all possible mRNA that can be produced from an organism's genome is called the transcriptome.

Research just published now reveals that there isn't even a simple relationship between the transcriptome and the resulting set of proteins (the "proteome"). It appears that there can be many mRNA strands which are not used as templates to make proteins.

There are several papers in the September 2 issue of Science (subscription required) that report on this research. A description of one of the papers is here: Mouse Genome Much More Complex Than Expected.
An international research team consisting of more than 100 scientists has been attempting since then to isolate and analyse the entire mRNA transcripts in the mouse. Their most astonishing finding is that more than 60 per cent of all mRNAs are not protein blueprints at all. 'We don't know what the function of these RNAs is,' the Bonn neurobiologist Professor Andreas Zimmer admits. However, they seem to be extremely important: even in such different organisms as hens and mice these ostensibly so unimportant RNAs are very similar. If they really had no function they would have mutated during the course of evolution so quickly that there would nowadays be hardly any similarity between them.

Research indicates that mice have about 180,000 types of mRNA in their transcriptome. If the majority of this mRNA isn't a protein template, what is it for? According to the Wikipedia transcriptome article (based on the research),
A study of 158,807 mouse transcripts revealed that 4520 of these transcripts form antisense partners that are base pair complementary to the exons of genes. These results raise the possibility that significant numbers of "antisense RNA-coding genes" might participate in the regulation of the levels of expression of protein-coding mRNAs.
Antisense RNA is RNA that has been transcribed from the strand of DNA that is complementary to the DNA strand which contains the original gene. This means that a piece of antisense RNA can attach itself to the complementary mRNA and block it from being used to make a protein. Consequently, such antisense RNA probably plays a role in how strongly the original gene is expressed.
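The base-pairing logic is simple enough to show in a few lines. This is a toy illustration with a made-up sequence, just to make "complementary" concrete:

    # An antisense RNA is the reverse complement of the sense mRNA, so the
    # two strands can hybridize and block translation.
    RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

    def reverse_complement(rna):
        """Return the antisense (reverse-complement) strand of an RNA string."""
        return "".join(RNA_COMPLEMENT[base] for base in reversed(rna))

    sense = "AUGGCUUACGGA"                 # hypothetical mRNA fragment
    print(reverse_complement(sense))       # prints UCCGUAAGCCAU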


Friday, September 09, 2005

Caloric restriction and aging in humans

If correct, this is going to be disappointing to a lot of people who've been starving themselves in hopes of living significantly longer...

Caloric Restriction Won't Dramatically Extend Life Span In Humans
"Our message is that suffering years of misery to remain super-skinny is not going to have a big payoff in terms of a longer life," said UCLA evolutionary biologist John Phelan. "I once heard someone say caloric restriction may not make you live forever, but it sure would seem like it. Try to maintain a healthy body weight, but don't deprive yourself of all pleasure. Moderation appears to be a more sensible solution."

The excitement over the possibility of extending lifespan by reducing caloric intake is based largely on animal experiments, especially with mice. But mathematical models that compare longevity and caloric intake in humans suggest it won't work nearly so well for us.
Their mathematical model shows that people who consume the most calories have a shorter life span, and that if people severely restrict their calories over their lifetimes, their life span increases by between 3 percent and 7 percent -- far less than the 20-plus years some have hoped could be achieved by drastic caloric restriction. [Phelan] considers the 3 percent figure more likely than the 7 percent.

If you find this depressing, you might consider consoling yourself by a quick trip to the store for a box of donuts...


Tuesday, September 06, 2005

Top down or bottom up?

No, it's not a question about the best way to get a tan at the beach. And I don't know whether this merits a long philosophical meditation (though it might). But I just couldn't resist this item from Jaron Lanier:
The most glaring inconsistency, however, comes about because of the weird alliance between bible literalists and free market enthusiasts. If you believe the invisible hand of the bottom-up marketplace will always be infinitely wiser than Government, why on Earth wouldn’t you believe that the bottom-up marketplace of material forms in the primordial ooze wouldn’t be wise enough to make what we see? Wiser than any Intelligent Designer could ever be?

It's a very good point about the weakness in the main argument for "intelligent design". But Lanier also applies, elsewhere in the essay, the dichotomy to another issue that everyone has been thinking about the last few days, namely the division of responsibility between local/state governments and the federal government in the job of disaster management (as well as similar issues of environmental protection policy, for example).

As Lanier points out, the same issues arise in science itself:
A few weeks ago I was at a small meeting at a physics institute in which a Harvard law professor quizzed some scientists about such conundrums. Some of the most interesting physicists these days are interested in bottom-up approaches. Physicists like Newton and Einstein provided us with top-down global laws and starting conditions that worked stunningly well at explaining local events. That doesn’t mean a bottom-up physics is unimaginable. Maybe “pre-geometric” components self-assemble to create the fabric of space and time, so that the background assumed by familiar physical theories is actually an emergent phenomenon. These ideas are new and not well understood.

There are quite a few other areas where "bottom up" approaches may be applicable. For instance, all of the things subsumed in the trendy buzzwords "chaos", "complexity", "emergent phenomena", "self-organizing systems". This would seem to be the right way to approach theories about things as diverse as weather (including hurricanes), living things (from single cells up to ecosystems), and computer networks.

Is there a conclusion to be drawn here? Maybe just this: "It all depends..."

Monday, September 05, 2005

Possible dark matter evidence for extra dimensions

Six dimensional space
Joseph Silk of the University of Oxford, England, and his co-workers say that these extra spatial dimensions can be inferred from the perplexing behaviour of dark matter. This mysterious stuff cannot be seen, but its presence in galaxies is betrayed by the gravitational tug that it exerts on visible stars. Silk and his colleagues looked at how dark matter behaves differently in small galaxies and large clusters of galaxies. In the smaller ones, dark matter seems to be attracted to itself quite strongly. But in the large galactic clusters, this doesn’t seem to be the case. Strongly interacting dark matter should produce cores of dark material bigger than those that are actually there, as deduced from the way the cluster spins.
So writes Philip Ball in Nature and the New York Times.

Silk's basic idea is very simple. The existence of some form of dark matter has for over 30 years been inferred from various independent observational facts -- for instance the way that stars orbit around the center of a galaxy and the ways that galaxies move in large clusters of galaxies. In order to account for these motions there must be a considerable amount of gravitating matter in the universe that is not directly visible to us in any way, such as in the form of luminous stars. Furthermore, the amount of matter that must exist to account for the motions is far more than could exist in the form of "ordinary" matter composed of protons and neutrons (so-called "baryonic matter"). Indeed, for other reasons, this ordinary matter, both that which can actually seen and that which can't (because it doesn't glow as in a star), must be less than 15% of the total of all matter.

More detailed studies of the motions of galaxies in clusters reveal another problem -- there is a slight difference between the expected motions of galaxies, due to all the dark matter, in large clusters as compared to smaller ones. And one way to account for this difference would be the existence of "extra dimensions", which would cause some of the gravitational force due to the dark matter to "leak away". It turns out that three extra spatial dimensions would be needed to account for the discrepancy, if indeed this "leaking" effect is real.

In the Newtonian theory of gravity (as well as in Einstein's general relativity), gravitational force decreases as the square of distance. Silk and his collaborators calculate that if there were three additional spatial dimensions and if each were about a nanometer in extent (as opposed to billions of light years as for the three ordinary dimensions), then within that small distance gravity would decrease as the fifth power of distance. And this effect would be sufficient to explain the anomalous motion of galaxies in clusters. In addition, it was even possible to compute the approximate mass of hypothetical elementary particles that could make up the dark matter -- about 3×10^-16 times the mass of a proton. This is within the range considered possible for a hypothetical particle known as an axion, frequently suggested as the main constituent of dark matter.
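The geometric intuition can be sketched with a toy force law. With n compact extra dimensions of size L, Gauss's law suggests gravity falls off as 1/r^(2+n) for separations much smaller than L and reverts to the familiar 1/r² beyond L. The crossover model below is my own cartoon, not the calculation in Silk's paper:

    def relative_force(r, L=1e-9, n=3):
        """Toy force law with n extra dimensions of size L, normalized to 1 at r = L."""
        if r >= L:
            return (L / r) ** 2          # ordinary inverse-square regime
        return (L / r) ** (2 + n)        # inverse fifth power for n = 3, r << L

    for r in (1e-10, 1e-9, 1e-8):
        print(f"r = {r:.0e} m: relative force {relative_force(r):.3e}")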

Early reactions to Silk's idea involve a great deal of interest, but much skepticism too. However, if the idea turns out to be correct, it would provide indirect evidence for superstring theory, which requires six or seven extra "small" dimensions. The additional three or four dimensions needed by superstring theory might be even smaller than those that Silk postulates, so that they would not affect the calculations.

Further references:

A preprint of Silk's paper: Observational Evidence for Extra Dimensions from Dark Matter

Overview of dark matter with many other references: here

Overview of superstring theory with many other references: here

Blog entry: Dark Matter and Extra-Dimensional Modifications of Gravity

Blog entry: Dark matter and 3 extra dimensions

Older Arxiv preprint by Spergel and Steinhardt: Observational evidence for self-interacting cold dark matter


Saturday, September 03, 2005

Another disaster the U. S. faces

And while we're talking about disasters the U. S. is facing, here's another one.
When it comes to innovation in science and technology, the United States has been the recognized global leader since the end of World War II. But today that No. 1 position is in jeopardy as many foreign governments strengthen their educational and research programs.
Why does that matter?
Sustaining the United States’ leadership position is a serious issue – with far more at stake than national pride. Because it leads to new industries and higher-paying jobs, innovation is directly linked with economic prosperity.
There are several billion people in the world outside the U. S. who envy its standard of living. Since their cost of living is so much less, they are willing to work for much lower wages. If they can produce goods like, say, automobiles for a lot less, they will eventually get most of that business. Especially if their products are as good as -- or better than -- U. S. products. And if they become more innovative than the U. S., their products will be better -- as is pretty much already true for cars.

No wonder the U. S. automobile industry is going down the tubes. How many others will follow?
'This isn’t a problem with a short-term fix,' says Alan Porter, co-director of Georgia Tech’s Technology Policy and Assessment Center. 'Beyond our knowledge base, we have nothing else – no natural resources – that gives us a competitive edge in the global economy. We’re fine right now, but in 15 years, this could really bite us.'

"No one could have anticipated...."

Whenever things turn out disastrously wrong, the party or parties most directly responsible for avoiding, mitigating, or responding to the disaster will always plead, as G. W. Bush did, that "I don't think anybody anticipated the breach of the levees."

And yet, it usually turns out in cases where disasters are very large and very public, that they have been anticipated, by people and institutions with the professional competence to analyze the situation. Case in point: the destruction of New Orleans by hurricane:
Thousands drowned in the murky brew that was soon contaminated by sewage and industrial waste. Thousands more who survived the flood later perished from dehydration and disease as they waited to be rescued.
That was written (in National Geographic, October 2004) one year before the disaster happened. But it wasn't any amazing feat of prognostication. Simply a logical conclusion of straightforward scientific analysis of the relevant facts. (Quite a few people have pointed out this article, such as here.)

Or how about this, from Scientific American (October 2001):
A major hurricane could swamp New Orleans under 20 feet of water, killing thousands. Human activities along the Mississippi River have dramatically increased the risk, and now only massive reengineering of southeastern Louisiana can save the city.
Professionals who studied the situation all knew of the problem. They knew that New Orleans' levee system was designed for at most a category 3 hurricane, and would fail in a sufficiently stronger hurricane, like the one that finally arrived. (Which would be worse? If the people at the top actually didn't know these facts, or if they did and lied about not knowing? What a choice. Lying, incompetence, unconcern, calculated negligence, or maybe all of the above.)

Unfortunately, the U. S. government is now run by people who are openly hostile to scientific facts and scientific analysis. So it's no surprise at all that they were unable to foresee and anticipate the disaster. This is unacceptable in top governmental officials.

It isn't the fault of the professional scientists and experts who work for the government. Those among them who studied the New Orleans situation have understood for a long time what would happen when a large hurricane struck New Orleans directly. In 2001, for instance, it was reported:
New Orleans is sinking.
And its main buffer from a hurricane, the protective Mississippi River delta, is quickly eroding away, leaving the historic city perilously close to disaster.
So vulnerable, in fact, that earlier this year the Federal Emergency Management Agency ranked the potential damage to New Orleans as among the three likeliest, most catastrophic disasters facing this country.
The other two? A massive earthquake in San Francisco, and, almost prophetically, a terrorist attack on New York City.
The New Orleans hurricane scenario may be the deadliest of all.
In the face of an approaching storm, scientists say, the city's less-than-adequate evacuation routes would strand 250,000 people or more, and probably kill one of 10 left behind as the city drowned under 20 feet of water. Thousands of refugees could land in Houston.
This wasn't just some obscure potential problem, it was one of the three most likely to occur. And one of the other two has already occurred. The third, too, is 100% inevitable. It's only a matter of time.

So why didn't the people at the top of the government "anticipate" this disaster? Because they had other things on their minds, and they didn't want to be bothered with concerns that didn't fit their agenda. And most of all, because of active hostility towards scientific analysis that isn't of use to that agenda, and may even argue against it.

Well, the New Orleans disaster is now history. But what other future disasters are still out there, disasters that are almost certain to happen, yet the people in control of the government don't want to think seriously about, and don't want anyone else to think about either?

There's global warming, obviously. It may in fact play a role in the increasing frequency and destructiveness of hurricanes, though that isn't yet proven. But there are terrible consequences that can be anticipated with near certainty, such as rise of sea levels over the next century. Other coastal cities like Miami (as well as New Orleans if it's rebuilt) will be under water, eventually.

What else? How about the end of cheap oil? While we may never totally exhaust the Earth's supplies of petroleum, it will be a disaster enough if the cost of a barrel of oil reaches $200, then $400, ... As it will eventually, considering how many other large countries in the world are reaching the point in their development where they will compete with the U. S. for unavoidably contracting supplies.

Isn't it, perhaps, foolish to trust the future of the U. S. and the whole world to people who are hostile to professional scientific analysis of such problems? People who are hostile to science in any form that is inconvenient for their ideology and/or economic self-interest?

I like the way Molly Ivins puts it:
In fact, there is now a governmentwide movement away from basing policy on science, expertise and professionalism, and in favor of choices based on ideology. If you're wondering what the ideological position on flood management might be, look at the pictures of New Orleans - it seems to consist of gutting the programs that do anything.

Thursday, September 01, 2005

Intelligent falling theory

Evangelical Scientists Refute Gravity With New 'Intelligent Falling' Theory
KANSAS CITY, KS—As the debate over the teaching of evolution in public schools continues, a new controversy over the science curriculum arose Monday in this embattled Midwestern state. Scientists from the Evangelical Center For Faith-Based Reasoning are now asserting that the long-held "theory of gravity" is flawed, and they have responded to it with a new theory of Intelligent Falling.

Yikes!!    ;-)