A Primer on the Interpretations of Quantum Mechanics, Part 2

If you missed it, part 1 of this article is here:

https://the-avocado.org/2019/07/02/a-primer-on-the-interpretations-of-quantum-mechanics-part-1/

I’m not going to lie; there are some challenging concepts in here. But give it a go. At worst, you’ll have learned a few phrases you can throw around to sound pretentious.

I should say that in this and the following parts, I’m leaning very heavily on the book Quantum Mechanics and Experience by David Albert (a professor of mine from undergrad). It’s not exactly what you’d call a user-friendly book – it does go through things in a formal, mathematical way, and Albert’s writing style can take some getting used to – but it does actually assume a reader with no prior knowledge of the subject, and it is more coherent and philosophically rigorous than just about any other text you’ll find on the subject, so if you’re interested in this stuff and have some time to work through it, I heartily recommend it.

 

The Measurement Problem

In part 1, I talked about Quantum Mechanics in general and then introduced the “measurement problem”. To recapitulate a little bit:

– The Schrödinger Equation, which governs the behavior of particles in Quantum Mechanics, is completely deterministic and linear. If the Schrödinger Equation were always true, then it would seem that experiments could never actually have outcomes; but it seems patently obvious that experiments do have outcomes.

– The orthodox solution to this problem is to say that the Schrödinger Equation is not always true. In the specific situation where a measurement is being performed, a different process occurs, called the “collapse of the wave function”. The collapse of the wave function is non-deterministic and non-linear.

– However, Quantum Mechanics itself does not offer any criteria for what constitutes a “measurement”.

To be clear about the issue, let’s come back to the experimental setup I described in part 1. We have a device that measures the x-spin of an electron. We picture this as a box within which we apply a magnetic field. There is an opening in the box, into which we can shoot an electron. If the x-spin of the electron is oriented up, the electron will drift upward in the magnetic field; if it’s oriented down, it will drift downward. There is a pointer on the front of the box. When an electron drifts upward within the box, that will push the pointer toward the word “up”, and when an electron drifts downward, it will push the pointer toward the word “down”. Then a human comes along and looks at the pointer. Finally, we imagine that we have an electron in a state ψ = a|x-up> + b|x-down> – in other words, it is not in a state of definite x-spin – and we send it into the box.

If the Schrödinger Equation were always true, then once the electron enters the box, the electron/box system would end up in the state:

ψ = a|x-up>|pointer-up> + b|x-down>|pointer-down>

And once the human looks at the pointer, the electron/box/brain system would end up in the state:

ψ = a|x-up>|pointer-up>|brain-sees-up> + b|x-down>|pointer-down>|brain-sees-down>

In other words, both the measuring device and the human brain looking at it will end up in entangled superpositions of different states and there will be no definite fact about what the person looking at the device sees. This would seem to be both ridiculous (since we know that measurements do have results) and nonsensical (what does it even mean to say that a person has two contradictory mental states at the same time?).
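The linearity at the heart of the problem can be sketched in a few lines of Python (my own toy illustration, not anything from Albert's book; the labels like "pointer-up" are just the ones used above). States are dictionaries mapping basis labels to amplitudes, and the "measurement" dynamics act on each term of the superposition separately, so the superposition survives right through the pointer and into the brain:

```python
# Toy sketch: a quantum state as a dict from basis labels to amplitudes.
import math

def interact(state):
    """Linear 'measurement' dynamics: each spin term drags the pointer
    and the brain into the matching record state (both steps of the
    chain at once). Being linear, it acts on each term independently."""
    out = {}
    for label, amp in state.items():
        spin = label[0]  # 'x-up' or 'x-down'
        record = 'up' if spin == 'x-up' else 'down'
        out[(spin, f'pointer-{record}', f'brain-sees-{record}')] = amp
    return out

a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)
electron = {('x-up',): a, ('x-down',): b}
print(interact(electron))
# Both branches survive, each with its original amplitude: an entangled
# superposition of the electron, pointer, and brain, not a definite outcome.
```

The point of the sketch is just that nothing in a linear rule can ever throw one of the branches away; that's what the collapse postulate is for.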

The collapse postulate resolves this by saying that, because a measurement has occurred, the Schrödinger Equation does not hold true here. Instead, there is an a² chance that the system ends up in this state:

ψ = |x-up>|pointer-up>|brain-sees-up>

. . . and a b² chance that the system ends up in this state:

ψ = |x-down>|pointer-down>|brain-sees-down>

So if we want to use the collapse postulate as a way of making sense of Quantum Mechanics, the measurement problem is to define what exactly constitutes a “measurement”.
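The collapse postulate itself is easy to state as code. Here's a minimal sketch (my own illustration; the amplitudes are taken to be real, as in the examples above, so the probabilities are a² and b² rather than the general |a|² and |b|²):

```python
# Hedged sketch of the collapse postulate: the outcome is chosen at
# random, with probabilities given by the squared amplitudes.
import random
import math

def collapse(a, b, rng=random):
    """Return the post-collapse state for a|x-up> + b|x-down>."""
    p_up = a ** 2 / (a ** 2 + b ** 2)  # normalize, just in case
    if rng.random() < p_up:
        return '|x-up>|pointer-up>|brain-sees-up>'
    return '|x-down>|pointer-down>|brain-sees-down>'

a, b = math.sqrt(0.7), math.sqrt(0.3)
trials = [collapse(a, b) for _ in range(100_000)]
freq_up = trials.count('|x-up>|pointer-up>|brain-sees-up>') / len(trials)
print(f"observed frequency of 'up': {freq_up:.3f}  (theory says 0.700)")
```

Note what the code leaves undefined: nothing in it says *when* `collapse` gets called rather than the linear dynamics. That gap is exactly the measurement problem.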

 

The Copenhagen Interpretation

Most books will tell you that the standard interpretation, the one most physicists subscribed to throughout the 20th century, is the “Copenhagen Interpretation”. And that’s probably true, as far as it goes, but it’s an unfortunate historical fact that no one seems to be quite able to agree what exactly the Copenhagen Interpretation is.

It’s an interesting (though, for the purposes of this article, not relevant) exercise to read the writings of Niels Bohr (the Danish physicist for whom the Copenhagen Interpretation is named) and try to reconstruct his views on the foundations of Quantum Mechanics. But, as it’s generally understood, the Copenhagen Interpretation’s view of the measurement problem is probably some combination of the following claims:

– Quantum Mechanics is not a description of reality; it is just a tool for predicting the results of certain experiments.

– The precise definition of “measurement” doesn’t matter; everybody knows what a measurement is, and in practice there’s never any question as to whether one has taken place.

– Quantum Mechanics shouldn’t be applied to things like pointers and brains. Laboratories and macroscopic measurement devices are the realm of classical physics, and Quantum Mechanics applies only to sufficiently small things.

It’s easy to pick holes in these claims. But, while I do think that ultimately the Copenhagen Interpretation is not really coherent, it’s worth pausing to try to understand the thinking behind it. And when we do that I think we see that it can be read in a way that makes it non-crazy – though ultimately, still wrong.

The Copenhagen Interpretation comes out of the philosophy of logical positivism that dominated the philosophy of science in the first half of the twentieth century. Logical positivism holds, more or less, that science is ultimately only about that which is empirically observable, and that we cannot even meaningfully talk about some underlying, unobservable reality. Not to get too sidetracked, but it’s this sort of thinking that led, for instance, to the triumph of Einstein’s Special Theory of Relativity. A competing theory existed, the Lorentz Ether Theory, that yielded exactly the same predictions as Special Relativity but posited a unique absolute frame of reference – with, however, the caveat that it was impossible to actually determine what this frame was. On a logical positivist view, these two theories were identical, except that Lorentz’s theory added an unnecessary, unobservable theoretical entity.

So the thinking of people like Niels Bohr as Quantum Mechanics was taking shape was that it was a mistake to think of it as a theory about some real, underlying microscopic reality. It was not a theory about electrons and photons; it was a theory about what experimenters saw when they performed certain experiments, and terms like “electron” and “photon” were just useful theoretical constructs to be used as part of that calculus. There is, then, an impulse to dismiss questions about what is “really” going on as being meaningless metaphysical quibbling.

Now, I’m more or less a logical positivist myself, so I’m completely sympathetic to this impulse. The problem, though, is that this question of realism vs. positivism is logically distinct from, and prior to, the question about when wave function collapse occurs. One could be a realist or a positivist about any theory – but that theory still ought to be a coherent, rigorously defined theory, whether it’s thought of as describing “real” things or empirical observations. And it remains the case that the algorithm for calculating things in QM is not well-defined, because it invokes “measurement” without defining “measurement”.

But let’s look at the charitable view of the Copenhagen Interpretation, the one I said makes sense, even if it turns out to be wrong. If you’re a positivist, and if it turns out that, regardless of when you suppose the collapse of the wave function to occur, you always end up with exactly the same empirical predictions for what an observer will see, then, and only then, you could make the (valid) argument that the term “measurement” doesn’t need to be clearly defined, because that definition is irrelevant to the empirical predictions, and the empirical predictions are the ultimate output of the theory.

Suppose we put the electron through the measurement box, and suppose the rule is that collapse occurs as soon as the electron encounters the magnetic field. Does this yield any different predictions than if collapse occurs as soon as the macroscopic pointer becomes involved? Or as soon as the human brain becomes involved? If every single thing you could possibly measure gives you the same exact results regardless of where exactly along that chain the collapse happens, then, if you’re a positivist, you can truly say that it doesn’t matter.

But it turns out – and this is a point that doesn’t get talked about enough – that this is not the case. It turns out that there are, in principle, experiments you could do that yield different predictions depending on whether the wave function collapsed before the electron interacted with the pointer or after, before the brain became involved or after. If that’s the case, then why not just do those experiments and empirically settle the question of when collapse occurs? Well, it turns out that those measurements are incredibly difficult to perform – to the point where it is, for all practical purposes, impossible to settle the matter experimentally. The trouble is that the only observables for which you’d expect different results depending on at what point the wave function collapses are observables of the whole electron/device system. And to carry out such a measurement, that whole system would have to be perfectly isolated from the environment – anything it interacted with would become entangled with its wave function, and then require an even more complex measurement of the electron/device/something else system. It’s possible to isolate individual particles, or small groups of particles, to the degree required, but isolating a macroscopic object is just not feasible. The fact of the matter, then, is that even though we can definitely say that different theories of collapse are not empirically identical, we can’t, in practice, actually distinguish them empirically.
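Here's a sketch, with made-up numbers, of why the two stories differ in principle. Represent the whole electron/device system in a toy two-dimensional Hilbert space and compare the uncollapsed superposition with the collapsed state on an observable that is sensitive to the cross terms (an "interference" observable of the whole system, which is exactly the hard-to-measure kind described above):

```python
# Compare "no collapse yet" vs "already collapsed" on a whole-system
# observable. Labels and amplitudes here are illustrative.
import numpy as np

a, b = np.sqrt(0.7), np.sqrt(0.3)
up = np.array([1.0, 0.0])    # stands in for |x-up>|pointer-up>
down = np.array([0.0, 1.0])  # stands in for |x-down>|pointer-down>

psi = a * up + b * down                     # superposition (no collapse)
rho_pure = np.outer(psi, psi)               # its density matrix
rho_collapsed = a**2 * np.outer(up, up) + b**2 * np.outer(down, down)

# An observable of the *whole* system that picks up the cross terms:
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.trace(rho_pure @ A))       # 2ab ≈ 0.917: interference survives
print(np.trace(rho_collapsed @ A))  # 0.0: the interference is gone
```

Any observable that, like A, doesn't commute with the "pointer reading" gives different expectation values in the two cases; the catch, as just described, is that measuring such an observable on a macroscopic device would require isolating the whole thing perfectly from its environment.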

So, even if we’re empiricists and we only care about the observable predictions of a theory, the Copenhagen Interpretation still turns out to be unsatisfactory.

 

Consciousness-induced collapse

One of the more famous (or should we say infamous) answers to the question of when collapse occurs is that of Eugene Wigner. The intuition is that what really defines the measurement of an observable is the presence of an observer. That is, the language in which Quantum Mechanics is typically talked about (even by physicists) might seem to suggest that the act of conscious observation itself plays a central role in the laws of physics – so perhaps consciousness should be the criterion for the change from a state in which multiple possible results for a measurement coexist to one in which the measurement has a definite result.

This idea probably sounds ridiculous to you. And you’re right, it is quite ridiculous. But let’s play along and take it seriously for a moment.

What this interpretation says is that the Schrödinger Equation, with its linear, deterministic dynamics, is true at all times except when that would result in a conscious brain being in a superposition of multiple mental states. Whenever a conscious brain would end up in such a superposition, the collapse postulate takes over, and the wave function of that brain, and anything entangled with it, instead “chooses” one of the quantum states corresponding to a single mental state, with the probabilities for choosing each one given, as expected, by the amplitudes of those states.

Let’s think about how our measurement of the x-spin of an electron works under this interpretation. If the electron starts off in the state ψ = a|x-up> + b|x-down>, then when it passes into the measuring device, the electron/device system will enter the state:

ψ = a|x-up>|pointer-up> + b|x-down>|pointer-down>

The electron and the pointer are now in a superposed, entangled state, but no conscious brain is involved yet. But now, when the experimenter looks at the pointer, a conscious brain is about to enter a superposition of states in which it has two different mental states – so, according to this interpretation, that is when the collapse postulate comes into effect. The wave function “chooses” one of those two states probabilistically. And because the position of the pointer and the spin of the electron are now entangled with the wave function of the brain, it is the whole electron/pointer/brain system that “collapses”; the fact that the brain has had to choose a determinate state to end up in means that the electron and the pointer must also collapse into the corresponding determinate states. So, after the experimenter looks at the pointer, we are either in this state:

ψ = |x-up>|pointer-up>|brain-sees-up> (a² chance)

or this state:

ψ = |x-down>|pointer-down>|brain-sees-down> (b² chance)

Let’s be clear about what this interpretation is not saying. It is not saying that collapse of the wave function is an illusion – the linear dynamics of the Schrödinger Equation really are, objectively, violated when the collapse occurs. It is also not introducing some kind of relativism – it is not saying that the question of whether or not a collapse has occurred depends on which observer is asking that question. Once the experimenter in our example has looked at the pointer, a collapse has objectively occurred, and whichever of the two possible collapsed states we end up in, that is, for everyone, the appropriate wave function to describe the electron/device/experimenter system. So this is, really, truly, an objective collapse theory. Consciousness, in this interpretation, is just the criterion by which the universe decides whether a collapse occurs or whether the linear dynamics keep chugging along.

That’s perfectly fine, as far as it goes, and we have here an interpretation that is perfectly consistent and coherent. There is, a priori, no reason to think that this could not be the way the universe works. But there are two fairly obvious problems. First, we have exchanged the imprecisely defined term “measurement” for the imprecisely defined term “consciousness”. This interpretation presupposes that consciousness is a Thing, that it is a property that certain systems of particles have and that other systems do not have, and that’s that. Indeed, it presupposes an ontology of the universe that is less parsimonious than we might like. It says that the total wave function of all the particles in the universe is not a complete description of the universe; in addition to that, we need to specify which collections of those particles are conscious. Consciousness cannot be an emergent property in this interpretation; it is a fundamental, physical property. And of course this leaves you with the question of which brains to count as conscious. Could a chimpanzee collapse a wave function? Could a goldfish? Could an ant? Could Schrödinger’s famous cat? (I should perhaps note here that, while this strikes me as rather ridiculous, I expect it does not strike everyone that way. Indeed, many people already do believe that consciousness is a definite Thing that goes beyond mere physical fact, that some beings have and some don’t. If you believe in qualia, you already believe in consciousness as a fundamental constituent of the universe, and perhaps then this interpretation does not sound crazy to you).

The other thing I would call a fairly obvious problem with the consciousness-induced collapse interpretation is that, when you step back for a moment, it seems to be remarkably ad hoc. What was it that led us to think consciousness might have anything to do with the fundamental laws of physics? It seems to me that it is just the fact that the formalism of Quantum Mechanics is couched in terms of measurements and observables. But, after all, all physical theories must in the end be couched in such terms, for no more reason than that measurement and observation are the tools we use to test them. To say that consciousness must play a fundamental role in Quantum Mechanics, since Quantum Mechanics is about measurements and observables, seems to me to be rather on the order of saying that pencils must play a fundamental role in dinosaur physiology, since paleontologists use pencils to record their findings in their field notebooks.

 

The GRW Interpretation

Where else might we locate the collapse of the wave function? Well, it might seem that the difference between quantum behavior and classical behavior is one of macroscopicness. After all, it was only by investigating the behavior of small numbers of particles, on a very small scale, that we became aware of quantum physics at all. Perhaps, then, it is macroscopicness that is the criterion for wave function collapse. That is, perhaps the linear dynamics of the Schrödinger equation apply as long as a single particle, or just a few particles, are involved, but as soon as a macroscopic object (containing billions of particles) becomes entangled with the system, a collapse occurs and the whole system chooses one definite state.

The trouble with this is that any definition of “macroscopic” is going to be rather arbitrary. It would seem deeply strange for some particular size of an object to mark a sharp boundary between quantum and classical behavior – for the laws of physics to be fundamentally different depending on whether, say, 5,000,000 particles or 5,000,001 particles are involved.

But there is a relatively easy way of turning this intuition that collapse occurs at macroscopicness into something much more reasonable. Suppose that, every so often, a particle randomly undergoes a collapse into a state of definite position. In other words, there’s some – presumably very low – probability per unit time (an X% chance per second) that a particle will undergo such a collapse. The probability would have to be rather low because as far as we can tell, we never notice individual, isolated particles undergoing a collapse. But that’s fine; if the probability per second that a particle will collapse is sufficiently small, then we’d expect to be able to observe an individual particle for a very, very long time without collapse ever actually happening¹. When the collapse does occur, it follows the usual probabilistic rule of the collapse postulate that we’ve talked about before.

The trick is that if multiple particles are in an entangled state, then a collapse in any one of those particles will cause a collapse in the whole system. And if you have enough particles, then even if the chance for any particular particle to collapse is very low, the chance could be very high that at least one out of those billions of particles will collapse. So if you have a macroscopic object made up of billions of entangled particles, then you are almost guaranteed that it will undergo a wave function collapse very quickly. So what we have here is a naturalistic way of getting something that is effectively like a “collapse occurs at macroscopicness” criterion, but without any arbitrary disjunctions in the way the laws of physics work.
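Some rough arithmetic makes the point. With a per-particle collapse rate λ, the chance that at least one of N entangled particles suffers a collapse within t seconds is 1 − e^(−Nλt). Using the rate GRW themselves suggested, about 10⁻¹⁶ per particle per second (treat the exact number as an illustrative assumption):

```python
# Back-of-the-envelope GRW numbers. The rate 1e-16 per particle per
# second is the value usually quoted for the GRW proposal; the other
# numbers are illustrative.
import math

lam = 1e-16  # collapse rate per particle, per second

def p_collapse(n_particles, seconds):
    """Chance that at least one of n entangled particles collapses."""
    return 1 - math.exp(-n_particles * lam * seconds)

# A single isolated particle: essentially never collapses while we watch.
print(p_collapse(1, 3.15e7))    # a full year of watching: ~3e-9

# A macroscopic pointer (~1e23 particles): collapses almost instantly.
print(p_collapse(1e23, 1e-6))   # within a microsecond: ~0.99995
```

So the same rule that leaves a lone electron alone for millions of years pins down a pointer in a fraction of a microsecond, with no sharp dividing line anywhere.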

Well, that’s not quite right, because there’s one small wrinkle. For technical reasons, it doesn’t work to have the particles randomly collapse into states of exact, definite position. Instead, we have to have them collapse into extremely localized, but not exact, positions. In other words, even after a collapse, a particle will remain in a superposition of different position states – but these will all be in a tight region surrounding a single point. And that region could be so small that it’s well beyond the precision of any conceivable instrument, so that effectively, as far as any measurement is concerned, it is a single, definite position. This idea was proposed by Giancarlo Ghirardi, Alberto Rimini, and Tullio Weber in 1986, and is usually called the GRW interpretation².
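A toy numerical picture of such a GRW "hit" (a sketch only: the grid, the widths, and the simplified rule for picking the hit's center are all my own illustrative choices): multiply the wave function by a narrow Gaussian centered at a randomly chosen point, then renormalize. A superposition of two far-apart bumps collapses to a single localized bump.

```python
# One GRW-style localization "hit" on a discretized 1D wave function.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def bump(center, width=0.5):
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

psi = bump(-5) + bump(+5)  # a superposition of "here" and "way over there"
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Pick the hit's center with probability |psi|^2 dx (simplified rule),
# multiply by a narrow Gaussian there, and renormalize.
probs = np.abs(psi) ** 2 * dx
center = rng.choice(x, p=probs / probs.sum())
hit_width = 1.0  # stands in for the GRW localization scale
psi_after = psi * np.exp(-((x - center) ** 2) / (2 * hit_width ** 2))
psi_after /= np.sqrt(np.sum(np.abs(psi_after) ** 2) * dx)

# Essentially all of the post-hit probability now sits on one side:
left = np.sum(np.abs(psi_after[x < 0]) ** 2) * dx
print(round(left, 6), round(1 - left, 6))  # one of these is ~1
```

Note that the post-hit state is still a (very tight) spread of positions, not a mathematical point, which is exactly the wrinkle described above.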

Let’s look at our measurement example again with this interpretation in mind. We start, again, with the electron in the state ψ = a|x-up> + b|x-down>. There is, at this point, no chance whatsoever of the electron collapsing into a definite x-spin state – remember, in GRW, the collapse always occurs in terms of position, and so far the electron’s position is not correlated with its x-spin. Once it enters the measuring device, the magnetic field will cause the |x-up> part of the wave function to drift in one direction and the |x-down> part to drift in the other direction. Now, the electron’s position is correlated with its spin. At this point, there is a very tiny chance that a collapse will occur and that the electron will “choose” which side of the device it has drifted toward, and therefore what its x-spin is. But presumably this chance will be so incredibly small that for all practical purposes we can ignore it.

Next, the electron becomes entangled with the pointer, causing the pointer to point toward the word “up” if its x-spin is up, or toward “down” if its x-spin is down. The pointer consists of an astronomically large number of particles. Even though the chance for any particular particle to collapse is small, there are so many particles in the pointer that collapses are happening all the time. So as soon as the pointer’s wave function starts to separate into two components with different physical positions, a collapse is overwhelmingly likely to occur in at least one of its constituent particles, which in turn causes the wave function of the whole pointer/electron system to collapse. So, almost immediately, we end up in one of these states:

ψ = |x-up>|pointer-up> (a² chance)

or:

ψ = |x-down>|pointer-down> (b² chance)

The GRW interpretation is an eminently reasonable one, and is one that is taken seriously by the sort of people who study the foundations of Quantum Mechanics. But it is not without its problems. Some of these are of a somewhat technical, esoteric nature, but the biggest issue is that one might question whether it actually solves the measurement problem. What we wanted to avoid was ending up in a puzzling state like this, where a measurement appears to have no result:

ψ = a|x-up>|pointer-up>|brain-sees-up> + b|x-down>|pointer-down>|brain-sees-down>

By positing a probabilistic collapse, GRW makes it overwhelmingly unlikely that such a state would ever come about – so unlikely that something like that would probably never happen in many lifetimes of the universe – but it doesn’t make it impossible. And if a conscious brain being in a superposition of states is really nonsensical, then we’d probably want our theory to make it impossible, not just improbable. On the other hand, one could argue that if, as it happens, that kind of state is so improbable that in the whole history of the universe, it never occurs, then worrying about the difference between “impossible” and “practically impossible” is splitting hairs.

 

Other Options

J.S. Bell, who in addition to making important technical contributions to Quantum Mechanics was one of the few physicists to take a serious interest in foundational issues in the 1960s, 1970s, and 1980s, remarked that the measurement problem showed that “either the wavefunction, as given by the Schrödinger equation, is not everything, or it is not right.” What he meant was that if the wave function of Quantum Mechanics is a complete description of the physical state of the universe, then we need some mechanism (i.e. a collapse postulate) that somehow or other violates the Schrödinger equation, to ensure that measurements actually have outcomes. (And in the foregoing discussion, we’ve assumed this to be the case, and tried to work out just how that collapse postulate might actually work). But if we posit that there’s more to the universe than is described by the wave function, we might be able to get away without a collapse postulate. This dilemma presented by Bell allows us to categorize the possible interpretations of Quantum Mechanics in the following way:

1) Again, if it’s really true that, fundamentally, all there is in the universe are wave functions, then the Schrödinger equation cannot always be true. This is what leads to the collapse interpretations discussed here, which say that under certain conditions the Schrödinger equation is violated, and the collapse postulate takes over³.

2) Alternatively, if the Schrödinger equation is always true, then there must be something more needed than just the wave function to provide a complete physical description of the universe – something that can pick out the result of a measurement, which the wave function cannot do. These are typically called “hidden variable interpretations”. The chief interpretation that follows this approach is the pilot wave theory first sketched out by Louis de Broglie in the 1920s and later developed by David Bohm in the 1950s.

3) Finally, there is a third tradition, due to Hugh Everett, that rejects this dilemma entirely and claims that, however counterintuitive it might seem, the Schrödinger equation is always true and the wave function is a complete description of the universe. The trick here is to make sense of the very puzzling macroscopic superpositions that will arise, such as the state in which the experimenter is in a superposition of seeing the pointer in two different positions.

In this article, I’ve talked about option 1. In the next part, I’ll talk about option 2, and specifically about Bohmian mechanics. I’ve seen this called the “neo-realist” approach to QM, and while I think that nomenclature conflates separate scientific and philosophical questions, Bohm’s interpretation does offer a nice, down-to-earth, almost (dare I say it) classical way of understanding the theory. Finally, in the last part, I’ll talk about the Everett-style interpretations, which include the (in)famous “many worlds interpretation”, and, I hope, wrap things up by talking about a few other odds and ends.