Sunday, August 03, 2008

Proton-electron mass ratio

Physicists speculate a lot about whether (or to what extent) the laws of nature are exactly the same in all parts of the visible universe. This question is sometimes known as the "fundamental constants" problem.

There are a variety of such fundamental constants. The best known is the speed of light. Many theories that are part of "alternative physics" are based on the idea that the speed of light (in a vacuum) is not a constant, and may have been different early in the history of the universe. There is very little, if any, solid evidence for this, however.

A closely related constant is the fine structure constant, fondly known as alpha. It is related to the speed of light by the equation α = e²/(2ℎcε₀), where e is the charge of an electron, ℎ is the Planck constant, c is the speed of light, and ε₀ is the permittivity of free space.
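As a quick sanity check on that formula, here is a minimal Python sketch that plugs in rounded CODATA values for the constants (the numbers are approximations, so this is illustrative rather than a precision calculation):

```python
# Evaluate alpha = e^2 / (2*h*c*eps0) using rounded CODATA values.
e = 1.602176634e-19      # elementary charge, coulombs
h = 6.62607015e-34       # Planck constant, joule-seconds
c = 2.99792458e8         # speed of light, meters per second
eps0 = 8.8541878128e-12  # permittivity of free space, farads per meter

alpha = e**2 / (2 * h * c * eps0)
print(alpha)      # ~0.0072973..., the dimensionless fine structure constant
print(1 / alpha)  # ~137.036, the familiar reciprocal
```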

There have been attempts to determine from astronomical data whether α may vary with time. This question can be investigated since α affects atomic spectra that are easily measured in a laboratory on Earth, and which can also be observed in distant astronomical objects. So far, no evidence of variability has been found. (See here, from June 2007.)

If α doesn't change with time, it is very unlikely that the speed of light does either (unless other constants also change in just the right way).

Another important fundamental constant is the ratio of the mass of a proton to the mass of an electron. (This ratio is sometimes denoted by μ, but that's confusing, as μ is used for other quantities in physics also.) In the standard model of particle physics, these masses, and their ratio, are free parameters that are not determined by the model itself. They are just there.
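For concreteness, here is the ratio computed from rounded CODATA rest masses (a sketch for illustration, not a precision calculation):

```python
# Proton-electron mass ratio from rounded CODATA rest masses.
m_p = 1.67262192369e-27  # proton mass, kg
m_e = 9.1093837015e-31   # electron mass, kg

print(m_p / m_e)  # ~1836.15, the dimensionless ratio in question
```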

This ratio also affects atomic spectra, so it can also be investigated in astronomical studies. In April 2006, some evidence was reported for a difference between the laboratory value of the ratio and its value in distant quasars. The difference claimed was very small – 0.002% over a time span of 12 billion years. (See here, here.)
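Restated as simple arithmetic (using nothing beyond the numbers quoted above), the claimed effect amounts to:

```latex
\[
  \frac{\Delta\mu}{\mu} \approx 0.002\% = 2 \times 10^{-5}
  \ \text{over}\ 1.2 \times 10^{10}\ \text{yr}
  \;\Longrightarrow\;
  \left|\frac{\dot{\mu}}{\mu}\right| \approx 1.7 \times 10^{-15}\ \text{yr}^{-1}
\]
```

That is an average fractional drift of less than two parts in a thousand trillion per year.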

However, more recent evidence from halfway across the universe indicates that the ratio long ago was the same as it is now, although the time span covered is somewhat shorter (6 billion years):

Earth's laws still apply in distant Universe (6/19/08)
The laws of nature are the same in the distant Universe as they are here on Earth, according to new research conducted by an international team of astronomers, including Christian Henkel from the Max Planck Institute for Radio Astronomy (MPIfR) in Bonn. Their research, published today in Science, shows that one of the most important numbers in physics theory, the proton-electron mass ratio, is almost exactly the same in a galaxy 6 billion light years away as it is in Earth's laboratories - approximately 1836.15.

According to Michael Murphy, Swinburne astrophysicist and lead author of the study, it is an important finding, as many scientists debate whether the laws of nature may change at different times and in different places in the Universe. "We have been able to show that the laws of physics are the same in this galaxy half way across the visible Universe as they are here on Earth," he said.

The light actually comes from a quasar 7.5 billion light years away. But the spectral effect is due to ammonia molecules encountered when the light passes through a galaxy that is 6 billion light years distant.

Another account emphasizes that the latest result gives a constraint 10 times better than the 2006 result:

Changing physical constant may be constant after all (6/20/08)
There is good reason to trust the new result, Murphy says. The wavelengths at which the ammonia molecules absorb radiation depend more strongly on the proton-to-electron mass ratio than with other molecules, such as the molecular hydrogen that was used for the 2006 result. "Our constraint is 10 times better than those previously obtained," Murphy says.

Wim Ubachs, who led the 2006 analysis, agrees Murphy's result is "solid", but thinks there still might be a way to reconcile the two results. There remains the possibility that the constant varied between 6 billion and 12 billion years ago but has not varied since, he says.


Further reading:

Strong Limit on a Variable Proton-to-Electron Mass Ratio from Molecules in the Distant Universe – 6/20/08 research article in Science

Ammonia: Proton-electron mass ratio constant for 6 gigayears – 7/14/08 blog post on the research, and reasons why no variation in the ratio should be expected


Monday, July 21, 2008

High-temperature superconductivity

"Normal" superconductivity is a phenomenon that has been known to physicists since 1911, when it was discovered by Heike Kamerlingh Onnes. The phenomenon involves the essentially total loss of electrical resistance in certain materials – mostly metals and metallic alloys – at temperatures very close to absolute zero (0° K, which is -273.15° C). Most metallic elements, except for ferromagnetic metals and some noble metals like silver and gold, become superconducting at sufficiently low temperatures.

The low temperatures needed for normal superconductivity are now routinely maintainable in the laboratory, for instance by immersion in liquid helium, which has a boiling point of 4.22 K. But although such temperatures are routinely achievable, maintaining them is neither easy nor cheap. Nevertheless, even some very large devices, such as the Large Hadron Collider, depend on superconductivity in their operation.

It is quite easy to determine experimentally when superconductivity is occurring. Ohm's law states that V=I×R, where V is voltage, I is current, and R is resistance, so resistance is given by R=V/I. Consequently, if a current flows between two points in a material even though the voltage difference between them is zero, the resistance must be zero.
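As a toy illustration of that inference (the numbers are made up, purely to make the logic concrete):

```python
# If current flows while the measured voltage drop is zero, Ohm's law
# forces the resistance R = V/I to be zero (within measurement error).
def resistance(voltage_v, current_a):
    """Resistance in ohms, from Ohm's law R = V/I."""
    return voltage_v / current_a

print(resistance(1.0, 0.5))  # ordinary conductor: 2.0 ohms
print(resistance(0.0, 0.5))  # superconducting sample: 0.0 ohms
```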

In most materials, resistance decreases as the temperature decreases. In superconductors, by definition, there is a temperature, called the critical temperature (Tc), at which resistance abruptly becomes essentially 0. By that we mean the resistance is too low to measure, and an electric current flowing in a superconducting material does not dissipate any measurable heat. So the current (I) can flow for an indefinitely long period of time, without any loss.

Clearly, superconductivity is an extremely useful property to have in a material. It could, for example, allow transmission of electricity over wires without any loss at all, as opposed to the loss that normally occurs due to generation of heat. Unfortunately, the highest critical temperature known in a "normal" superconducting material is about 39 K. That record was established in 2001 with magnesium diboride (MgB2). Liquid helium isn't needed to keep such a material in a superconducting state, but liquid nitrogen, which is commonly used commercially and has a boiling point of 77.36 K (-195.79 °C), isn't cold enough either – nitrogen freezes at about 63 K, still well above 39 K. Colder coolants, such as liquid neon or liquid hydrogen, or mechanical cryocoolers, are required.

So it was with high hopes that the discovery in 1986 of "high-temperature" superconductors was greeted. One measure of the perceived importance of the discovery is the fact that the discoverers (Karl Müller and Johannes Bednorz) were awarded a Nobel Prize the very next year.

There were strong hopes at the time that eventually materials would be discovered that exhibited high-temperature superconductivity even at room temperatures. This would make possible the economical production of useful things like maglev trains. Although maglev trains have in fact been built that use high-temperature superconductors, the requirement for liquid nitrogen still makes them very expensive to build and maintain.

High-temperature superconductivity was first observed in ceramics based on copper oxide (CuO2). The material also incorporated the elements lanthanum and barium, and had a transition temperature of about 35 K. Just two days ago it was announced that a rather more exotic cuprate material, incorporating tin, lead, indium, barium, and thulium, had the highest Tc found so far, about 195 K (-78 °C) – that's the sublimation temperature of CO2. (See here.)

Although 195 K is a huge improvement over 35 K, it is still far below "room temperature". This still makes the routine use of high-temperature superconductors in most applications uneconomical, or at best difficult. There's an additional problem in that most high-Tc materials are ceramics, so they lack the ductility of metallic materials, which is often required in practical applications like wires and cables. Furthermore, such materials are generally tricky to manufacture at all, and require exotic elements like thulium.

For all the reasons mentioned, more than 20 years after the discovery of high-Tc materials, there is a great deal of disappointment and frustration that progress hasn't lived up to the initial high hopes.

One of the main obstacles to progress has been the surprising fact that we do not even have an adequate theory of how high-temperature superconductivity works. We do have a good theory of how "normal" superconductivity works, but that theory, even with tweaks, does not appear to be applicable at temperatures above about 40 K. Before 1986, the highest Tc known was 23 K. So at first it might seem as though 35 K was not that big an advance.

However, in 1986 it was thought that the existing theory did not apply at temperatures above about 30 K. The fact that in 1986 neither theory nor experiment anticipated a Tc of 35 K is what made the discovery so unexpected. And because we still don't have an adequate theory for most high-Tc materials, it isn't possible to figure out theoretically what sorts of materials might have a Tc exceeding the currently known upper limit.

So let's quickly review the theory of "normal" superconductivity. It's surprisingly simple (which is also why it doesn't extend beyond about 40 K). The theory is known as BCS theory, for its developers, John Bardeen, Leon Cooper, and John Schrieffer.

Electrical conductivity at ordinary temperatures is based, of course, on the largely free movement of electrons in a metallic or semi-metallic material. Resistance is simply the result of interactions that transfer energy from the electrons to atoms of the material. Eventually all the energy carried by the electrons is dissipated as heat, and the current (I) goes to 0, unless energy is supplied (say, from a battery).

According to BCS theory, at sufficiently low temperatures two electrons having opposite spins pair up with each other to form "Cooper pairs". Electrons normally repel each other due to the Coulomb force resulting from their electrical charge. But each electron also, because of Coulomb force, distorts the lattice of positively charged ions of the material. This distortion is called a "phonon". In fact, the distortion itself has a vibrational, wavelike nature – like the quantum wavelike behavior of an electron or any other subatomic particle.

A phonon has a net positive charge of nearly the same magnitude as the negative charge of an electron, so this pairing mostly cancels out the electrical charge of the electron-phonon pair. But the cancellation is only approximate, so that each electron-phonon pair has a small net charge, which may be positive or negative. Consequently, electron-phonon pairs can attract other pairs having opposite charge. The Cooper pair is this pairing between electron-phonon pairs.

Cooper pairs form only when the electrons involved have opposite spins. Consequently, a Cooper pair has zero net spin, making it a boson. This is important in BCS theory, since bosons aren't subject to the Pauli exclusion principle. The result is that many Cooper pairs can be simultaneously in the same quantum state.

At a sufficiently low temperature it becomes impossible for Cooper pairs to interact with the lattice of the material. This is because, due to the Heisenberg uncertainty principle, there is a lower limit on the amount of energy (ΔE) that can be exchanged between a Cooper pair and the lattice. If the binding energy within the Cooper pair is less than ΔE, no interaction that disrupts the pair is possible, so the pair represents a stable bound state. It can therefore move completely freely within the lattice, with no resistance at all.
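To put a rough scale on that binding energy, BCS theory ties the zero-temperature gap to the critical temperature. The following is the standard weak-coupling textbook relation (not something specific to the articles discussed here):

```latex
% Standard BCS weak-coupling result relating the energy gap to Tc:
\[
  2\Delta(0) \approx 3.53\, k_B T_c
\]
% For Tc ~ 10 K this is only a few meV (a few times 10^-22 J), so the
% pairs are fragile, and the mechanism fails as the temperature climbs
% toward a few tens of kelvins.
```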

The net result is that the electrons which are paired up within Cooper pairs can move completely freely within the lattice. And since electrons themselves still have a net charge, the effective result is an electrical current that flows with zero resistance. Another way of thinking about this is that a pair of electrons moves through the lattice accompanied by distortions of the lattice, but in such a way that no energy is transferred to the lattice.

If all this legerdemain seems a little suspicious, remember that it's a quantum effect that is possible only at very low temperatures, and that's why the BCS theory does not apply, even with any variations that physicists have been able to conceive, above approximately 40 K.

Materials capable of high-temperature superconductivity are more complex than typical "normal" superconducting materials. The latter include many metallic elements, such as mercury or lead. But the former are ceramics, which are often, but not always, based on copper oxide. In the so-called cuprate superconductors, atoms of additional elements are included between planes consisting of copper oxide. This process is referred to as "doping". It has the effect of inserting either a surplus or a deficit of electrons (called "holes" in the latter case), and it is these electrons and holes that are available for carrying electric charge in the material.

It is thought that these electrons and holes are able to pair up in some way that is analogous to Cooper pairs, and that the resulting pairs are the necessary bosonic charge carriers. But one basic problem in this field is that it has not even been possible to determine experimentally exactly what the hypothetical pairs consist of.

There is a vast theoretical and experimental literature, estimated at upwards of 100,000 published papers, dealing with the field of high-temperature superconductivity. Nevertheless, the development of an adequate theory to explain the effect is still considered to be one of the most important unsolved problems in condensed matter physics.

Now there is additional experimental work that claims to have made significant progress:

Room Temperature Superconductivity: One Step Closer To Holy Grail Of Physics (7/9/08)
The researchers have discovered where the charge 'hole' carriers that play a significant role in the superconductivity originate within the electronic structure of copper-oxide superconductors. These findings are particularly important for the next step of deciphering the glue that binds the holes together and determining what enables them to superconduct.

Dr Suchitra E. Sebastian, lead author of the study, commented, "An experimental difficulty in the past has been accessing the underlying microscopics of the system once it begins to superconduct. Superconductivity throws a manner of 'veil' over the system, hiding its inner workings from experimental probes. A major advance has been our use of high magnetic fields, which punch holes through the superconducting shroud, known as vortices - regions where superconductivity is destroyed, through which the underlying electronic structure can be probed.

"We have successfully unearthed for the first time in a high temperature superconductor the location in the electronic structure where 'pockets' of doped hole carriers aggregate. Our experiments have thus made an important advance toward understanding how superconducting pairs form out of these hole pockets."

By determining exactly where the doped holes aggregate in the electronic structure of these superconductors, the researchers have been able to advance understanding in two vital areas:

(1) A direct probe revealing the location and size of pockets of holes is an essential step to determining how these particles stick together to superconduct.

(2) Their experiments have successfully accessed the region betwixt magnetism and superconductivity: when the superconducting veil is partially lifted, their experiments suggest the existence of underlying magnetism which shapes the hole pockets. Interplay between magnetism and superconductivity is therefore indicated - leading to the next question to be addressed.

Do these forms of order compete, with magnetism appearing in the vortex regions where superconductivity is killed, as they suggest? Or do they complement each other by some more intricate mechanism? One possibility they suggest for the coexistence of two very different physical phenomena is that the non-superconducting vortex cores may behave in concert, exhibiting collective magnetism while the rest of the material superconducts.


Further reading:

A multi-component Fermi surface in the vortex state of an underdoped high-Tc superconductor – original research paper (sub. rqd.)


Wednesday, June 27, 2007

Supersymmetry and big bang nucleosynthesis

The general acceptance of big bang cosmology for the past four decades rests primarily on three solid lines of evidence. First, the observation of the general expansion of the universe (using distance measurements based on "standard candles" like supernovae) is very consistent with the Friedmann equations derived from general relativity. Second, very precise measurements of inhomogeneities in the cosmic microwave background are very consistent with what is to be expected of conditions present in the universe at the time photons decoupled from matter. Third, the abundances of several light nuclei are very close to what would be expected to be produced in the process of nucleosynthesis that should have occurred around five minutes after the big bang.

As good as the agreement between theory and observation has been where nucleosynthesis is concerned, there have been various discrepancies that required some creative thinking to resolve. One of these involves the abundance of helium-3. We discussed it here.

Another example involves lithium. Both lithium-6 and lithium-7 are calculated to have been produced in very small, but definitely nonzero, amounts. Yet some very old stars have been observed that seem to contain no lithium at all. Where did it go? The theory here is that such stars are the result of mergers between even older stars, in which all the lithium was destroyed in the cataclysmic merger. See this.

Now there is yet another anomaly involving lithium observed in certain very old stars. The interesting thing is that one theorist is viewing this as possible evidence for very heavy supersymmetric particles that may not have yet decayed out of existence at the time of primordial nucleosynthesis.

Catalyzing Primordial Nuclear Chemistry
But a remaining puzzle is the amount of primordial lithium; both Li-6 and Li-7 are unexpectedly abundant in metal-poor stars (those with very few heavier elements). For example, a much higher than expected level of Li-6 might be pointing to a primordial origin (that is, not made later in stellar cores or in supernovas), in which case the BBN model would need to be amended. Maxim Pospelov ... of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, and University of Victoria, British Columbia suggests that the anomaly can be explained if early nucleosynthesis was aided---catalyzed---by the presence of charged heavy particles, which are common in many models of particle physics.



Thursday, December 14, 2006

Physics Story of the Year

'Tis the season for articles with titles like "Story of the year," "Notable achievements of 2006," and so forth. Here's the first one I've seen so far. And with almost 4 weeks to go (from when it was posted), it's jumping the gun a little. Who knows what might happen on the remainder of our world line before the 2007 mark?

The Physics Story of the Year
The physics story of the year 2006 was, we believe, the new high precision (0.76 parts per trillion uncertainty) measurement of the electron’s magnetic moment by Gerald Gabrielse and his colleagues at Harvard University. Then in a second paper the same experimenters used the new moment in tandem with a fresh formulation of quantum electrodynamics (QED) provided by theoretical colleagues to formulate a new value for the fine structure constant (denoted by the letter alpha), the pivotal parameter which sets the overall strength of the electromagnetic force. The new value has an uncertainty of 0.7 parts per billion, the first major revision of alpha in 20 years. A comparison between this new value and values determined by other methods provides the best test yet of quantum electrodynamics (QED).

OK, that one didn't get much play in the general press, but some additional physics stories did achieve more prominence. Here are some of my favorites, with links to additional information:

  • The observation of many more supernovas at redshifts of 1, thus establishing the idea that dark energy was around even in the early universe. [More: here, here, here, here, and here. I wrote about it here.]
  • New WMAP measurements of the cosmic microwave background, including polarization information, help to sharpen cosmological numbers such as the age or the flatness of the universe. [More: here, here, here, here, here, and here. I wrote about it here.]
  • Advances in plasmonics, or "two-dimensional light". [More: here, here, and here.]
  • Advances in the study of graphene, including the discovery of a new form of the Hall effect. [More: here and here.]
  • Progress at several labs in modeling gravitational-wave emissions from black hole mergers, the kinds of events that LIGO or LISA might detect. [More: here.]
  • Measuring the presence of virtual strange quarks inside protons. [More: here and here.]
  • Heaviest baryons discovered. [More: here, here, here, and here.]
  • Investigating whether the electron/proton mass ratio changed over time. [More: here and here.]
  • Telecloning. [More: here, here, here, and here.]


Whew. Quite a list. But I think it leaves a lot out, too. There have been a number of interesting discoveries related to black holes. I've written about some of them. There have also been many advances in the related fields of spintronics, quantum information, and quantum computing. (I still haven't written about those.) Some other areas where there's been quite a lot of progress: carbon nanotubes, dark matter, and laser wakefield accelerators. And that's not the end of it.

Perhaps, if Santa puts an abundance of time under the tree for me this year, I'll tackle writing about what's happened in some of those areas.


Tuesday, September 26, 2006

Hottest topics in physics

Which research topics in physics are currently the "hottest"? It depends on how you measure "hot". In the paper An extension of the Hirsch Index: Indexing scientific topics and compounds, it's done by adapting a citation-indexing technique normally used to measure which scientists are most influential. Instead of counting how often a given author's papers are cited by others, the technique counts citations of papers that mention a specific topic in the abstract.
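For readers unfamiliar with the underlying index, here is a minimal Python sketch of the h-index computation that the paper adapts; the citation counts are hypothetical, purely for illustration:

```python
# h-index: the largest h such that at least h of the papers in a list
# have at least h citations each. The paper applies this idea to the
# set of papers whose abstracts mention a given topic, not to authors.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # 'rank' papers so far each have >= 'rank' citations
        else:
            break
    return h

# Hypothetical citation counts for papers mentioning some topic:
print(h_index([50, 18, 7, 5, 4, 3, 1]))  # -> 4
```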

Here are the highest-scoring topics, in decreasing order:

  1. carbon nanotubes
  2. nanowires
  3. quantum dots
  4. fullerenes
  5. giant magnetoresistance
  6. M-theory
  7. quantum computation
  8. teleportation
  9. superstrings
  10. heavy fermions
  11. spin valves
  12. spin glass
  13. porous silicon
  14. quantum critical point
  15. geometrical frustration
  16. quantum information


Here's a news article with a good summary: Hottest topic in physics revealed.


Saturday, September 17, 2005

Zero point energy and the Casimir effect

The all too human hope of getting "something for nothing" unavoidably affects inventors, engineers, and physicists as much as anyone else. Hence the perennial popularity of the futile quest for "perpetual motion". Akin to this, but not quite so hopeless, is the pursuit of limitless energy in the form of "zero point energy" to avert the world's looming energy crisis.

Quantum mechanics suggests that ZPE must be real for a simple reason. The uncertainty principle implies that the kinetic energy of a particle can never be precisely determined, and in particular it cannot be precisely zero. So every particle must have some nonzero kinetic energy, however small. Furthermore, there is no such thing as an absolute vacuum, since "virtual particles" can also come into existence for very short but nonzero periods of time. And so even a "perfect vacuum" can contain energy, which is called the zero point energy.
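The textbook illustration of this argument is the quantum harmonic oscillator, whose lowest possible energy is strictly positive:

```latex
% Ground-state ("zero point") energy of a harmonic oscillator with
% angular frequency omega, a direct consequence of the uncertainty
% relation Delta x * Delta p >= hbar / 2:
\[
  E_0 = \tfrac{1}{2}\hbar\omega
\]
% Summing this over the infinitely many modes of a quantum field is
% what makes naive estimates of the vacuum energy diverge.
```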

However, strangely, there is still no experimental evidence that ZPE -- or "energy of the vacuum" -- is actually "real". It is often supposed that ZPE accounts for the cosmological constant, also known as dark energy. Although there is now good evidence that the cosmological constant is nonzero, it's only a guess that it has something to do with ZPE.

Indeed, it hasn't been possible to actually calculate a value for ZPE. Naive calculations predict a value that is as much as a factor of 10¹²⁰ larger than what it should be if it is responsible for the estimated value of the cosmological constant. Still, various arguments for the existence of ZPE are so good that few physicists actually doubt it.

Assuming that ZPE is real, many physicists, engineers, and would-be inventors have invested countless hours, in a bid for undying fame, hoping to find a way to capture ZPE in some economically useful way. Here is one of the latest in this tradition:

Magnetic energy? Perhaps
The nation's energy industry is struggling to recover from Hurricane Katrina. Gas prices are soaring as a result of the catastrophic storm. America's reliance on overseas oil increases every year.

And from his office in the North Bay city of Sebastopol, Mark Goldes envisions a day -- perhaps not so far off -- when none of this will be a problem.

Goldes, 73, is chief executive of a small company called Magnetic Power Inc., which has spent years researching ways to, yes, generate power using magnets. ...

What Goldes believes he's done is produce power from what physicists call zero-point energy. In simple terms, zero-point energy results from the infinitesimal motion of molecules even when seemingly at rest.

Unfortunately, as already noted, there is as yet no experimental evidence that ZPE actually exists. Interestingly enough, most physicists think there is such evidence, in the form of a phenomenon known as the Casimir effect. In a nutshell, the effect is a very small but measurable force between two very flat metal plates that are very close together. Here's one of numerous references from a usually reliable source, Physics World: The Casimir effect: a force from nothing.
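For a sense of the magnitudes involved, here is a small Python sketch evaluating the standard textbook formula for the Casimir pressure between two ideal parallel plates (an idealized geometry, with rounded constants):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def casimir_pressure(separation_m):
    """Attractive pressure (in pascals) between ideal parallel plates,
    P = pi^2 * hbar * c / (240 * d^4), for plate separation d in meters."""
    return math.pi**2 * hbar * c / (240 * separation_m**4)

print(casimir_pressure(1e-6))  # ~1.3e-3 Pa at a 1 micrometer gap
```

The steep d⁻⁴ dependence means the force is utterly negligible at everyday distances, which is why it took until the late 1990s to measure it convincingly.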

But just this year, in March, R. L. Jaffe came out with a paper demonstrating that the Casimir effect can be explained without ZPE:

The Casimir Effect and the Quantum Vacuum
In discussions of the cosmological constant, the Casimir effect is often invoked as decisive evidence that the zero point energies of quantum fields are "real''. On the contrary, Casimir effects can be formulated and Casimir forces can be computed without reference to zero point energies. They are relativistic, quantum forces between charges and currents.

Note that Jaffe isn't claiming that the Casimir effect isn't a result of ZPE. Instead, he's just showing that the Casimir effect can be derived from "the forces between charged particles in the metal plates." In this view, "The Casimir force is simply the (relativistic, retarded) van der Waals force between the metal plates."

Is this nothing but an overly fastidious academic quibble? We should be careful about supposing that. If Jaffe is right, we still lack any experimental evidence for ZPE, however right it seems theoretically -- it's still a challenge to experimental physicists. And even if ZPE is real, how it may relate to dark energy, if at all, is very mysterious.

Acknowledgement

I'm indebted to Phil Gossett for bringing the Jaffe paper to my attention in postings to a private mailing list.


Tuesday, August 16, 2005

Einstein's Legacy

As almost everyone knows by now, 2005 is being celebrated as a very special year in the physics community, because it is the 100th anniversary of Einstein's "annus mirabilis". This "miraculous year", 1905, saw Einstein's publication of not just one spectacular paper, but of five. Most physicists would kill to have published even one of comparable quality in their whole career. At least four of these papers, and perhaps all five, would have merited a Nobel Prize, though only one actually did win the Prize for Einstein. And it had nothing to do with relativity.

All of these papers were written and published while Einstein was working as a patent examiner in Bern and before he had even been granted a doctor's degree. This circumstance has a lot to do with the magnitude of Einstein's subsequent professional reputation and popular celebrity. An interesting recent article, Einstein's Legacy -- Where are the "Einsteinians?" by Lee Smolin, one of the leading experts on quantum gravity, examines Einstein's legacy as a whole, and considers the implications for the status of the science of physics today.

Hundreds of articles have appeared describing Einstein's achievements of 1905. Here's a good one: Five papers that shook the world. Here is a brief summary of the topics of those papers:


  • The photoelectric effect -- why the energy of electrons ejected from a target by high-energy light depends on the frequency of the light and not on its intensity. James Clerk Maxwell's theory of electrodynamics could not explain this, but Einstein's paper, building on ideas due to Max Planck, succeeded. This insight culminated two decades later in quantum mechanics (which, ironically, Einstein never fully accepted).
  • Calculation of "Avogadro's number" and the size of molecules by studying their motion in a solution. This was a major step towards proving the atomic theory of matter, which (it is surprising to realize) was still far from universally accepted in 1905. The idea that matter comes in discrete chunks is closely akin to the idea of the previous paper that light also comes in discrete chunks. It was this paper that earned Einstein his doctorate.
  • Prediction of Brownian motion. Using the kinetic theory of liquids and classical hydrodynamics, Einstein derived an equation that described the erratic motion of sufficiently small particles in a liquid (see the sketch after this list). The equation was experimentally verified three years later, providing the definitive confirmation of the existence of atoms and molecules.
  • The special theory of relativity. Einstein developed this (essentially very simple) theory by taking seriously the consequences of just two postulates: (1) that the laws of electrodynamics must be valid in all reference frames in which the laws of mechanics are valid, and (2) that the speed of light is a constant that does not depend on the motions of either the observer or the emitter of the light. This theory illustrates the primary strength of Einstein's thinking: the ability to build a theory by rigorous deduction from simple principles, however counterintuitive they may have seemed.
  • The equation E=mc². This expression of the equivalence of mass and energy turns out to be a simple consequence of the theory of special relativity.

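As promised in the Brownian motion item above, here is the key 1905 relation in modern notation (a sketch, not Einstein's original presentation):

```latex
% Mean squared displacement of a small sphere of radius r suspended in
% a fluid of viscosity eta at absolute temperature T:
\[
  \langle x^2 \rangle = 2 D t, \qquad
  D = \frac{RT}{N_A} \cdot \frac{1}{6 \pi \eta r}
\]
% Observing the jiggling of visible particles thus yields Avogadro's
% number N_A -- the experimental confirmation mentioned above.
```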

Given how spectacular these results were, Smolin makes the surprising observation that
Physicists I’ve met who knew Einstein told me they found his thinking slow compared to the stars of the day. While he was competent enough with the basic mathematical tools of physics, many other physicists surrounding him in Berlin and Princeton were better at it.


If that is the case, in spite of his spectacular achievements of 1905, then what accounts for Einstein's modern reputation as a preeminent "genius"? I think Smolin puts his finger on the answer when he remarks:
Einstein’s single goal in science was to discover what he called theories of principle. These are theories that postulate general rules that all phenomena must satisfy. If such a theory is true, it must apply universally. In his study of physics he identified two existing theories of principle: the laws of motion set out by Galileo and Newton, and thermodynamics. The basic principle of the first is the relativity of uniform motion, that the speed of your own motion is impossible to detect. Einstein’s discovery of special relativity came from 10 years of meditation on how to reconcile the relativity of motion with James Clerk Maxwell’s theory of electromagnetism, which describes the propagation of light.


This characteristic was foreshadowed in the deduction of special relativity from just two major principles as described above. But it is seen most strikingly in Einstein's subsequent general theory of relativity, published in 1915. Einstein deduced this theory by pure thought, with practically nothing in the way of experimental evidence as a guide. He considered rigorously the necessary consequences of a small number of basic principles that a "reasonable" theory of gravity ought to abide by. Some of the principles were plausible, while others were far from "intuitively obvious". Nevertheless, when considered together, they led to a theory of gravity which even now, 90 years later, has yet to fail even a single experimental test.

Taking, for simplicity, a few small liberties, the main principles were:

  • The laws of physics must be the same for all observers, regardless of their state of motion, and must take the same mathematical form in all coordinate systems.
  • Spacetime can be described as a 4-dimensional mathematical object known as a manifold, which is a higher dimensional analog of a (2-dimensional) curved surface.
  • A particle that is not acted on by external forces moves through spacetime along a curve of minimal length (as measured in the manifold). (Such a curve is called a "geodesic" and this kind of motion is called "inertial").
  • The presence of mass (or equivalently, energy) causes spacetime to curve in such a way that the effects of "gravitational force" due to the mass acting on a particle are indistinguishable from motion along a geodesic in the curved spacetime.
  • The laws of special relativity apply to observers moving inertially (without acceleration).


From these general principles, Einstein was able to derive an equation that embodies the whole theory of general relativity and makes such astonishing predictions as the curvature of a light ray in the presence of matter, the expansion of the universe, and the existence of black holes. Most such phenomena were not even known experimentally in 1915 (though some were, such as the precession of the perihelion of Mercury). The curvature of light by matter was verified experimentally in 1919, and the success of this prediction was so dramatic that it made headlines in newspapers around the world. Other predictions were so counterintuitive that even Einstein was reluctant to believe them. He doubted the expansion of the universe until Edwin Hubble gave convincing evidence of it in 1929, and never accepted the idea of black holes. Indeed, many physicists have had their doubts about black holes until the evidence for them has become very strong quite recently.
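The equation referred to at the start of the preceding paragraph is, in modern notation, the Einstein field equation (shown here without the cosmological-constant term):

```latex
% Spacetime curvature (left side) is determined by the density and
% flux of mass-energy (right side):
\[
  R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8 \pi G}{c^4}\, T_{\mu\nu}
\]
```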

The history of Einstein's general theory of relativity turns certain overly simplistic ideas of "scientific method" completely on their heads. Powerful theories are not necessarily derived by "induction" from observation of accumulating experimental data. They are not necessarily even derived deductively from facts already well-verified. A far-reaching theory can, in fact, be derived logically from very general principles of what ought to be true of a useful fundamental theory.

General relativity came as such a surprise to practically all physicists because of how Einstein derived it from thought alone, with little experimental evidence. It is this, as much as the papers of 1905, which conferred upon Einstein his daunting reputation.

The passage of time has only emphasized what an astonishing accomplishment this was. The bulk of Smolin's article deals, essentially, with how the entire physics community -- including Einstein himself -- has been unable to reproduce this feat in the years since 1915.

Einstein devoted the last 30 years of his career to the search for a "unified field theory" that would encompass both gravity and electromagnetism (including quantum mechanics). Einstein failed. So have all other physicists, either in the more limited objectives that Einstein pursued or in the most general form of a "theory of everything".

The reason, almost everyone agrees, is that no one has yet been able to guess what additional fundamental principles must be postulated.

It seems likely that there are additional principles that are needed, rather than modifications to the principles from which general relativity was derived. (Quantum mechanics is rather a different story. Though it has underlying principles, it has been constructed in a more inductive manner, adapting the mathematical rules -- the basis of which no one pretends to understand -- to fit experimental facts.)

The reason that additional principles are required is that in fact it has been possible to construct a vast number of possible "theories of everything" -- as many as 10¹⁰⁰ distinct theories in the form of superstring theory alone. It would seem that physicists must be missing some essential fundamental principles that would narrow this rather large embarrassment of theories to just one. (It is also possible that many or all of this huge number of theories may in fact be realized in a multiplicity of distinct "universes" that, perhaps, comprise the "multiverse".)

But if there is in fact a single theory of everything, nobody has yet been able to offer plausible guesses about the form that the necessary principles should take in order to make the theory unique, as Einstein did so successfully for general relativity.

This doesn't necessarily mean that humans are too stupid to understand the universe. Roughly 250 years separate Einstein's theory of gravity from Isaac Newton's. We might well need another 250 years to take the next step. In any case, it may be a cause for some satisfaction that there is still so much left to learn -- as opposed to the depressing thought that physics has reached nearly the end of the road, with little left to learn.


Sunday, June 12, 2005

Pentaquarks, RIP

Not all scientific hypotheses pan out. In fact, most don't. This is disappointing to researchers eager to learn some new truth about Nature. But it is what makes science reliable and valuable. As attractive as it might seem to have the freedom to spin out a web of scientific theory to match one's imagination, it is even more worthwhile to build theoretical edifices that can be relied upon. This is actually much better for speculative theorists. It means that there is a trustworthy foundation one can build upon, without running too much risk of wasting one's time, or even a whole career, on developing theories that become worthless when the foundations that others laid turn out to be incapable of supporting further construction.

The way that science avoids such disasters is by requiring new ideas and hypotheses to meet as many strict tests against experimental data as possible. A recent article by Frank Close, as summarized in this piece On the Nonexistence of Pentaquarks, provides a good case study of this process in action.

Most of us would be better off in our personal lives as well if we'd learn to test our bright new ideas and cherished beliefs against known facts and data -- even if it takes some effort to acquire the relevant facts and data. And especially even though it means we sometimes have to give up on those ideas and beliefs when they don't pass a conscientious reality check.
