Friday, 29 June 2007

The Laws of Physics and Flexi-History

This article from New Scientist (also apparently published in the Guardian) puts into much better words the scientific basis behind the flexibility of time that I've previously hinted at. It's also very readable, so if you find my introduction boring don't give up on the article too.

In a nutshell, the author (along with several other leading physicists) believes that we have to stop viewing time as a straight line. The classical notion of a timeline as an immutable past leading inexorably into the present, where the choices we make open onto a yet-to-be-decided future, is looking increasingly dubious.

Instead, we have to focus attention on the present, the point which we as conscious beings inhabit. This, of course, would be no earth-shattering news to those of the Indian religious tradition, who have always placed the focal point on the subjective 'now' of consciousness. It is a peculiarity of the West (and specifically of the Judaeo-Christian tradition) that we attempt to understand time from 'outside'. Instead of placing our attention 'within', in between the fuzzy states of past and future, we attempt to begin from a precise objective point (where time began) and progress rationally along the continuum, admitting no difference in quality between our point of direct experience and those of memory and projective imagination. Of course, the rationalist timeline was first given expression in the Torah, the Jews placing the creation at an apparently definite, finite point in the past, from which time and reality have progressed quite happily until we reach the present moment. This was not always understood as a literal truth, mythology only having been reinterpreted literally in the last few hundred years (largely under the Christian/rationalist mindset which has created much of our current status quo). But I'm going to dismount my hobbyhorse, because I want to briefly mention another interesting synchronicity before letting you read the article yourselves.

The numerical system provides some valuable insights into this new view of time. If we conceive the 'zero-point' as the present moment of consciousness, we see that the negative and positive integers expand infinitely on either side of us. Given Georg Cantor's work in the 19th century on the number line and irrational numbers, can we project some parallel between the structure of infinitely divisible finite spaces (the interplay of the rational numbers giving the number line its structure, and the irrationals giving the line its substance), together with the unknowable complexity of the transcendental numbers, onto the quantum structure of time itself..?
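To make that interplay concrete, here is a small Python sketch (my own illustration, not part of the original post): the rationals are dense on the number line, so any irrational - sqrt(2) here - can be approximated arbitrarily well by fractions; yet the rationals are countable while the irrationals are not, which is Cantor's result.

```python
import math
from fractions import Fraction

# Illustration (mine, not the article's): the rationals are dense on the
# number line - any irrational is approximated arbitrarily well by
# fractions with large enough denominators - yet the rationals are
# countable, while the irrationals are not (Cantor's diagonal argument).
sqrt2 = math.sqrt(2)
for max_den in (10, 1000, 100000):
    approx = Fraction(sqrt2).limit_denominator(max_den)
    error = abs(float(approx) - sqrt2)
    print(f"{approx} ~ sqrt(2), off by {error:.1e}")
```

The approximations here are the classical continued-fraction convergents of sqrt(2), which is why the error shrinks so quickly as the denominator bound grows.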

The flexi-laws of physics

from New Scientist 30 June 2007

by Paul Davies

SCIENCE WORKS because the universe is ordered in an intelligible way. The most refined manifestation of this order is found in the laws of physics, the fundamental mathematical rules that govern all natural phenomena. One of the biggest questions of existence is the origin of those laws: where do they come from, and why do they have the form that they do?

Until recently this problem was considered off-limits to scientists. Their job was to discover the laws and apply them, not inquire into their form or origin. Now the mood has changed. One reason for this stems from the growing realisation that the laws of physics possess a weird and surprising property: collectively they give the universe the ability to generate life and conscious beings, such as ourselves, who can ponder the big questions.

If the universe came with any old rag-bag of laws, life would almost certainly be ruled out. Indeed, changing the existing laws by even a scintilla could have lethal consequences. For example, if protons were 0.1 per cent heavier than neutrons, rather than the other way about, all the protons coughed out of the big bang would soon have decayed into neutrons. Without protons and their crucial electric charge, atoms could not exist and chemistry would be impossible.

Physicists and cosmologists know many such examples of uncanny bio-friendly "coincidences" and fortuitous fine-tuned properties in the laws of physics. Like Baby Bear's porridge in the story of Goldilocks, our universe seems "just right" for life. It looks, to use astronomer Fred Hoyle's dramatic description, as if "a super-intellect has been monkeying with physics". So what is going on?

A popular way to explain the Goldilocks factor is the multiverse theory. This says that a god's-eye-view of the cosmos would reveal a patchwork quilt of universes, of which ours is but an infinitesimal fragment. Crucially, each patch, or "universe", comes with its own distinctive set of local by-laws. Maybe the by-laws are assigned randomly, as in a vast cosmic lottery. It is then no surprise that we find ourselves living in a patch so well suited to life, for we could hardly inhabit a bio-hostile patch. Our universe has simply hit the cosmic jackpot. Those universes that can't support life - the vast majority in fact - go unobserved.

Goldilocks enigma

The multiverse theory is a step forward, but it still leaves a lot unexplained. For a start, there has to be a universe-generating mechanism to make all those cosmic patches. There also has to be a process whereby each patch acquires a set of by-laws, perhaps at random, perhaps not. These requirements demand their own laws - which maybe we should refer to as federal laws or meta-laws - to govern the creation of law-driven universes.

In itself that is not an overriding objection. Cosmologists have concocted a way for an endless stream of big bangs to occur spontaneously throughout space and time, each triggering the birth of a "bubble" universe somewhere and somewhen in the boundless multiverse, with each bubble governed internally by its very own by-laws. However, their calculations appeal to quantum mechanics, relativity and a host of other conventional oddments from the standard tool kit of theoretical physics. Accepting such meta-laws as given - true without reason or explanation - merely shifts the mystery of the laws of physics in our universe up a level, to that of the meta-laws in the multiverse.

The basic difficulty can be traced back to the traditional concept of a physical law. Since at least the time of Isaac Newton, the laws of physics have been treated as immutable, universal, eternal relationships - infinitely precise mathematical rules that transcend the physical universe and inhabit an abstract other-worldly realm.

These perfect rules were supposedly imprinted on the universe - somehow - from outside, at the moment of cosmic creation, and haven't changed an iota since. In particular, the laws care nothing for what is actually happening in the universe, however violent the physical processes may be. So the universe depends on the laws, but the laws are strangely independent of the universe.

Four hundred years on, physicists still cling to this model of physical law, even though they have no idea what the external source of the laws might be. So long as science appeals to something outside the universe, we must abandon any hope of ultimately understanding why the universe is as it is. A large element of mystery will lie forever beyond our reach.

There is, however, another possibility: relinquish the notion of immutable, transcendent laws and try to explain the observed behaviour entirely in terms of processes occurring within the universe. As it happens, there is a growing minority of scientists whose concept of physical law departs radically from the orthodox view and whose ideas offer an ideal model for developing this picture. The burgeoning field of computer science has shifted our view of the physical world from that of a collection of interacting material particles to one of a seething network of information. In this way of looking at nature, the laws of physics are a form of software, or algorithm, while the material world - the hardware - plays the role of a gigantic computer.

Perfect past

The mathematics of the laws may be the same, but the change in perspective leads to profoundly different conclusions, as we discover when we ask just how powerful the cosmic computer may be. Every computer's performance is limited by the finite speed of its processors and the finite storage capacity of its memory. The universe is no exception.

Bits of information, even in the subatomic domain, cannot be flipped faster than a maximum rate permitted by the Heisenberg uncertainty principle of quantum mechanics. Meanwhile the storage capacity depends on the physical size of the observable universe, which is limited to the maximum distance light can have travelled since the big bang 13.7 billion years ago. From this, Seth Lloyd of the Massachusetts Institute of Technology in Cambridge has calculated that the observable universe can have processed no more than 10^120 bits of information since its birth.
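Lloyd's figure can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is my own rough estimate, not Lloyd's actual calculation: it uses the crude scaling (age of universe / Planck time)^2 and lands within a couple of powers of ten of his 10^120.

```python
import math

# Rough sanity check of Lloyd's bound (my own estimate, not his
# calculation): the number of elementary operations the observable
# universe can have performed scales roughly as the square of its age
# measured in Planck times.
t_universe = 13.7e9 * 3.156e7  # age of the universe in seconds (~4.3e17 s)
t_planck = 5.39e-44            # Planck time in seconds

ops = (t_universe / t_planck) ** 2
print(f"~10^{math.log10(ops):.0f} elementary operations")
# This crude scaling comes out within a couple of powers of ten of the
# 10^120 figure that Lloyd derives more carefully.
```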

Does it matter that the universe commands only finite computational resources? Maybe not to the traditional view of the laws of physics, according to which Mother Nature computes the action of her laws in a transcendent heaven of infinitely precise mathematical relationships. But if we replace this highly idealised view with one in which nature computes in the real universe, then Lloyd's bound has serious implications. In effect, we have no reason to suppose any physical law can be more accurate than 1 part in 10^120. Beyond that we can expect the law to break down and become fuzzy.

For most practical purposes Lloyd's number is so big it might as well be infinite. For example, the law of conservation of electric charge has been tested to only about one part in a trillion (10^12), still 108 powers of 10 too crude to reveal any possible breakdown arising from the finite information bound.

However, Lloyd's bound isn't fixed: it grows with time, and at the instant of the big bang it was zero. At the time the large-scale structure of the universe was being laid down during the first split second, the bound was still only about 10^20 - possibly small enough to have cosmological consequences. So we are led to a picture in which the laws of physics are inherent in the physical universe, and emerge with it. They start out unfocused, but rapidly sharpen and zero in on the form we observe today as the universe grows.

Flexi-laws of this sort are not a new idea. They were proposed 30 years ago by the physicist John Wheeler. The way he expressed it is that the laws of physics were not "cast in tablets of stone, from everlasting to everlasting". Rather, they emerged over time, congealing from the ferment of the big bang.

Can the flexibility in the laws explain the Goldilocks enigma? Is there enough wiggle room for the universe to somehow engineer its bio-friendliness? Freeman Dyson, one of the pioneers in the study of the biological fine-tuning mystery, wrote that the more he learned about the various accidents of physics and cosmology that permit life to arise, "the more it seems that in some sense the universe knew we were coming". Dyson's dramatic assertion raises the obvious question: how? In the first split second, when the laws were in the process of settling down, how could the universe "know about" life and consciousness coming along billions of years later? How can life today be relevant to the physics of the very early universe?

Surprisingly it can, thanks to the weirdness of quantum mechanics. Heisenberg's uncertainty principle says that even if you know the state of an atom at one moment, there is an irreducible uncertainty about what its properties will be when you observe them at a later moment. One way of expressing this is to say that the atom has many possible futures encompassed within the overall fuzziness of quantum uncertainty. What's more, the principle works just as well for the past as for the future, so an atom has many possible histories leading up to its present state. By the rules of quantum physics, all these parallel realities must meld together to yield the present state of the atom.

The same general conclusion holds if we apply quantum mechanics to the entire universe - a subject known as quantum cosmology, made famous by the work of Stephen Hawking. Since we cannot know the quantum state at the start of the universe, we must work backwards in time from our present observations and infer the past.

As Hawking has emphasised, it is a mistake to think there is a single, well-defined cosmic history connecting the big bang to the present state of the universe (New Scientist, 22 April 2006, p 28). Rather, there will be a multiplicity of possible histories, and which histories are included in the amalgam will depend on what we choose to measure today. "The histories of the universe depend on the precise question asked," Hawking said in a paper last year with Thomas Hertog. In other words, the existence of life and observers today has an effect on the past. "It leads to a profoundly different view of cosmology, and the relation between cause and effect," claims Hawking.

We can illustrate these abstract ideas from quantum physics with the help of a concrete demonstration suggested 25 years ago by Wheeler. His experiment is a variant of Thomas Young's famous 200-year-old double-slit experiment, designed to reveal the wave nature of light. A pinpoint source of light illuminates a screen punctured by a pair of parallel slits, projecting onto a second screen beyond. Light spreading out from each slit overlaps with that from the other. Where the light from both slits arrives at the image screen in phase, the waves reinforce to produce a bright band. Where they arrive out of phase, they interfere destructively, producing a dark band. The series of bright and dark bands are called interference fringes.

Mystery sets in when you turn the brightness right down. According to quantum theory, light may also be considered to consist of photons, which behave like a stream of particles. So what happens if you allow only one photon at a time to traverse the apparatus? Experiments show that although it takes a lot longer, an interference pattern does build up on the photographic screen, one photon at a time. Presumably each photon passes through only one slit, yet somehow it appears to "interfere with itself" and contribute to the pattern.
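The photon-by-photon build-up can be mimicked numerically. Below is a toy Python simulation (my own illustration; the slit separation, wavelength and screen distance are arbitrary illustrative values) that samples each photon's landing position from the standard far-field two-slit probability density, proportional to cos^2(pi*d*x / (lam*L)):

```python
import math
import random

d, lam, L = 1e-4, 5e-7, 1.0  # slit separation, wavelength, screen distance (m)

def intensity(x):
    # Far-field two-slit probability density (unnormalised);
    # maxima are the bright fringes, zeros the dark ones.
    return math.cos(math.pi * d * x / (lam * L)) ** 2

def detect_photon(rng, x_max=1e-2):
    # Rejection sampling: each photon lands at x with probability
    # proportional to intensity(x).
    while True:
        x = rng.uniform(-x_max, x_max)
        if rng.random() < intensity(x):
            return x

rng = random.Random(42)
hits = [detect_photon(rng) for _ in range(5000)]

# Histogram the hits: fringes emerge one photon at a time.
bins = [0] * 40
for x in hits:
    bins[min(39, int((x + 1e-2) / 2e-2 * 40))] += 1
for count in bins:
    print("#" * (count // 20))
```

Even though each photon arrives at a single point, the accumulated histogram shows the bright and dark bands, echoing the real single-photon experiments the article describes.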

A wily experimenter might decide to place detectors at the slits to see which one each photon goes through. Nature, however, outmanoeuvres us. Whenever you determine the path of the photons, no interference pattern results. So you have a choice: look to see where the photon is heading and destroy its wavelike behaviour, or choose not to look, and allow the photon to manifest the wave aspect of its character. It essentially boils down to a choice of particle or wave. The photon can be both, but not at the same time. The experimenter gets to decide which.

So far so good. The novel twist that Wheeler added is that you can delay your decision to look at the wave or particle aspect until long after the light has passed through the slits. Using a pair of telescopes placed at the image screen, you can look back at the slits and infer which one any given photon emerged from. Do this and you destroy the interference pattern. In effect, the observation you make affects the nature of the past - specifically, whether the photon behaved as a wave or a particle. Physicists call this strange phenomenon "quantum post-selection".

There is a temptation to assume that the light "really was" either a wave or a particle in the past, but quantum physics denies this. It is simply not possible to ascribe a well-defined past to this system. Rather, your decision to make a particular observation - what Hawking meant by "the precise question asked" - determines the nature of the past. Crucially, however, the delayed-choice experiment cannot be used to change the past, or to send information back in time.

This aspect of quantum weirdness may appear startling, but it has been tested by experiments and found to be correct. In such experiments the quantum reach into the past is only a few nanoseconds, but in principle it could be extended to billions of years. And when it comes to quantum cosmology, it can penetrate right back to the big bang itself.

So how can this backward-in-time feature of quantum mechanics explain the bio-friendliness of the universe? Well, obviously we can rule out from the multiplicity of quantum histories any that don't lead to life, because that would conflict with the basic fact of our own existence. However, in the standard quantum cosmology advocated by Hawking, all of the alternative histories, without exception, conform to exactly the same laws of physics. So while a photon travelling from a source to a screen can take many different paths, the actual laws of motion that govern its path remain the same whichever route it takes.

Wheeler's idea was more radical. He claimed that the existence of life and observers in the universe today can help bring about the very circumstances needed for life to emerge by reaching back to the past through acts of quantum observation. It is an attempt to explain the Goldilocks factor by appealing to cosmic self-consistency: the bio-friendly universe explains life even as life explains the bio-friendly universe.


As long as the laws of physics are fixed, as they are in Hawking's cosmology, their enigmatic bio-friendliness is left out of this explanatory loop. But with flexi-laws of the sort advocated by Wheeler, the way lies open for a self-consistent explanation. The fuzzy primordial laws focus in on precisely the form needed to give rise to the living organisms that eventually observe them. Cosmic bio-friendliness is therefore the result of a sort of quantum post-selection effect extended to the very laws of physics themselves.

Wheeler's ideas are far from properly worked out. They remain, as he quaintly referred to them, "an idea for an idea". However several theorists, including Yakir Aharonov, Jeff Tollaksen and others at George Mason University in Fairfax, Virginia, and myself are attempting to place the concept of flexi-laws and quantum post-selection on a sound mathematical footing.

How can we test these outlandish ideas? If the fidelity of the laws of physics really is subject to a cosmological bound, then the structure of the universe might betray some remnant of the substantial primordial fuzziness. A more direct test could come from the phenomenon of quantum entanglement, in which the quantum states of a collection of particles are linked in such a manner that an observation performed on one affects all the others simultaneously.

The key point about an entangled state is that it requires many more parameters to define it. For example, 10 atoms may have their spins aligned with or against a magnetic field. In a non-entangled state, you need only 10 bits of information - one per atom - to define the state. But if the atoms are entangled, you must specify the values of 2^10, or 1024, parameters.

As the number of particles goes up, so the number of defining parameters escalates. A state with 400 entangled particles blows the Lloyd limit - it requires more bits of information to specify it than exist in the entire observable universe. If one takes seriously the inherent uncertainty in the laws implied by Lloyd's limit, then a noticeable breakdown in fidelity should manifest itself at the level of 400 entangled particles. Such a state is by no means far-fetched. Entangled states of about a dozen particles have already been created, and experimenters have set their sights on 10,000 as part of the effort to build a quantum computer.
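The arithmetic is easy to verify. This short sketch (my own illustration) counts the amplitudes needed for n entangled two-level particles and finds the smallest n whose state description overflows a 10^120-bit budget:

```python
# The article's example: 10 entangled atoms need 2^10 parameters.
print(2 ** 10)  # 1024

LLOYD_BITS = 10 ** 120  # Lloyd's bound on bits processed since the big bang

# Smallest number of entangled two-level particles whose state
# description exceeds the bound.
n = 1
while 2 ** n <= LLOYD_BITS:
    n += 1
print(n)  # 399, in line with the article's figure of about 400
```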

In the orthodox view, the laws of physics are floating in an explanatory void. Ironically, the essence of the scientific method is rationality and logic: we suppose that things are the way they are for a reason. Yet when it comes to the laws of physics themselves, well, we are asked to accept that they exist "reasonlessly". If that were correct, then the entire edifice of science would ultimately be founded on absurdity. By bringing the laws of physics within the compass of science, and fusing nature and its laws into a mutually self-consistent explanation, we have some hope of understanding why the laws are what they are. In addition, we can begin to glimpse how we, the observers of this remarkable universe, fit into the great cosmic scheme.

Saturday, 2 June 2007

Restatement: Leibniz's Law of Identity

A while back I blogged about Leibniz's Law of Identity and how all other logical truths descend from this one necessary formula. I faced some criticism from a few people claiming that axioms such as Modus Ponens and the Law of the Excluded Middle were not evident within the Law of Identity.

Leibniz's law can be stated thus: A=A. It is evident how this principle is a necessary precursor to any subsequent logic, since if A is not identical to itself, then no other true statement can be made. If A is not A, then 1+1 no longer equals 2, as the definition of 1 becomes fluid rather than static. Staticity of terms (i.e., self-identity) is necessary for any inference from the nature of those terms. New inferences cannot be developed without this principle.

But how is A=A a sufficient basis for all other self-evident logical truth?

Modus Ponens is the basic law of inference in logic. It can be stated thus:

1. P -> Q
2. P
Therefore, Q

It states that if we accept "P entails Q" (for example, being upper class entails voting Conservative), and if P happens to be true, then Q must also be true. In effect, it states the principle that if we accept one thing leading to another, we must accept that when the one thing happens, the second does too. Common sense.

It can be restated as an equation:

if (P -> Q) then (if P then Q)

or, equivalently:

P -> Q = P -> Q

Given this form, it is clear that it is merely an affirmation of tautology. It bears precisely the same form as A=A. It holds no additional content, and is therefore not a separate truth, but simply a derivation from the Law of Identity. Leibniz's law is both the necessary and sufficient condition for the validity of Modus Ponens.
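The tautology claim can be checked mechanically. Here is a brute-force truth-table check (my own illustration): read modus ponens as the single formula ((P -> Q) and P) -> Q and confirm it is true under every assignment.

```python
from itertools import product

# Truth-table check (my own illustration): modus ponens, read as the
# single formula ((P -> Q) and P) -> Q, is true under every assignment
# of P and Q - a tautology, with no content beyond what it affirms.
def implies(a, b):
    return (not a) or b

is_tautology = all(
    implies(implies(p, q) and p, q)
    for p, q in product([True, False], repeat=2)
)
print(is_tautology)  # True
```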

The Law of the Excluded Middle is similarly obvious:

A statement must be either true or, if not true, then false. There is no in between: a statement cannot be both true and false, and cannot be neither true nor false. It must be one, and only one, of these. Without wishing to go into Logical Positivist-style delineations of some statements being meaningless, and therefore outside the realm of truth or falsehood because they contain no sensible assertion, it should be clear how the LEM is true.

We can restate the LEM as A = not-not-A, i.e., A cannot equal the negation of A. Or, once again: A=A.
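This, too, can be verified exhaustively with a truth table (my own illustration), checking both the Excluded Middle (A or not-A) and the closely related law of non-contradiction (not (A and not-A)):

```python
# Truth-table check (my own illustration): the Law of the Excluded
# Middle (A or not-A) and the law of non-contradiction
# (not (A and not-A)) each hold under every assignment of A.
lem_holds = all(a or not a for a in (True, False))
nc_holds = all(not (a and not a) for a in (True, False))
print(lem_holds, nc_holds)  # True True
```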

What Leibniz believed was that all analytic (self-evident) truth was explicitly derived from A=A. This he saw as the emanation of truth from (and within) the mind of God. Human intellect could trace this path through the use of logic. Synthetic (contingent) truth was similarly derived solely from A=A, but the process by which this happens is not accessible to the human mind. The manifestation of reality and the circumstances which surround us are necessarily derived from God's own nature and thought - therefore this is necessarily the best of all possible worlds (as it must be, if derived from the source of goodness itself).

A=A is itself a statement (albeit an obscure one) about God's nature: God is identity. "I am that I am" (Exodus 3:14).