Bell test experiments


A Bell test experiment or Bell's inequality experiment, also simply a Bell test, is a real-world physics experiment designed to test the theory of quantum mechanics in relation to Albert Einstein's concept of local realism. The experiments test whether or not the real world satisfies local realism, which requires the presence of some additional local variables to explain the behavior of particles like photons and electrons. To date, all Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave.
According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. If a Bell test is performed in a laboratory and the results are not thus constrained, then they are inconsistent with the hypothesis that local hidden variables exist. Such results would support the position that there is no way to explain the phenomena of quantum mechanics in terms of a more fundamental description of nature that is more in line with the rules of classical physics.
Many types of Bell test have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell test experiments". In a novel experiment conducted in 2016, over 100,000 volunteers participated in an online video game that used human choices to produce the data for researchers conducting multiple independent tests across the globe.

Overview

The Bell test has its origins in the debate between Einstein and other pioneers of quantum physics, principally Niels Bohr. One feature of the theory of quantum mechanics under debate was the meaning of Heisenberg's uncertainty principle. This principle states that if some information is known about a given particle, there is some other information about it that is impossible to know. An example of this is found in observations of the position and the momentum of a given particle. According to the uncertainty principle, a particle's momentum and its position cannot simultaneously be determined with arbitrarily high precision.
In 1935, Einstein, Boris Podolsky, and Nathan Rosen published a claim that quantum mechanics predicts that more information about a pair of entangled particles could be observed than Heisenberg's principle allowed, which would only be possible if information were travelling instantly between the two particles. This produces a paradox, which came to be known as the "EPR paradox" after the three authors. It arises because an effect observed at one location would then not be the result of a cause in its past light cone. Such action at a distance would violate the theory of relativity by allowing information to travel between the two locations faster than the speed of light.
Based on this, the authors concluded that the quantum wave function does not provide a complete description of reality. They suggested that there must be some local hidden variables at work in order to account for the behavior of entangled particles. In a theory of hidden variables, as Einstein envisaged it, the randomness and indeterminacy seen in the behavior of quantum particles would only be apparent. For example, if one knew the details of all the hidden variables associated with a particle, then one could predict both its position and momentum. The uncertainty that had been quantified by Heisenberg's principle would simply be an artifact of not having complete information about the hidden variables. Furthermore, Einstein argued that the hidden variables should obey the condition of locality: Whatever the hidden variables actually are, the behavior of the hidden variables for one particle should not be able to instantly affect the behavior of those for another particle far away. This idea, called the principle of locality, is rooted in intuition from classical physics that physical interactions do not propagate instantly across space. These ideas were the subject of ongoing debate between their proponents.
In 1964, John Stewart Bell proposed his now famous theorem, which states that no physical theory of local hidden variables can ever reproduce all the predictions of quantum mechanics. Implicit in the theorem is the proposition that the determinism of classical physics is fundamentally incapable of describing quantum mechanics. Bell expanded on the theorem to provide what would become the conceptual foundation of the Bell test experiments.
A typical experiment involves the observation of particles, often photons, in an apparatus designed to produce entangled pairs and allow for the measurement of some characteristic of each, such as their spin. The results of the experiment could then be compared to what was predicted by local realism and those predicted by quantum mechanics.
In theory, the results could be "coincidentally" consistent with both. To address this problem, Bell proposed a mathematical description of local realism that placed a statistical limit on the likelihood of that eventuality. If the results of an experiment violate Bell's inequality, local hidden variables can be ruled out as their cause. Later researchers built on Bell's work by proposing new inequalities that serve the same purpose and refine the basic idea in one way or another. Consequently, the term "Bell inequality" can mean any one of a number of inequalities satisfied by local hidden variables theories; in practice, many present-day experiments employ the CHSH inequality. All these inequalities, like the original devised by Bell, express the idea that assuming local realism places restrictions on the statistical results of experiments on sets of particles that have taken part in an interaction and then separated.
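To make the nature of this limit concrete, the short sketch below (an illustrative calculation, not taken from any particular paper) enumerates every deterministic local hidden-variable strategy in the CHSH scenario, where each side has two possible settings and each outcome is ±1, and confirms that none of them yields a CHSH combination larger than 2:

```python
from itertools import product

# In a local hidden-variable picture each hidden state fixes, in advance, a
# definite outcome (+1 or -1) for every measurement setting: A(a), A(a'),
# B(b), B(b'). The CHSH combination is then bounded for all of them.
best = 0
for A_a, A_ap, B_b, B_bp in product([+1, -1], repeat=4):
    S = A_a * B_b - A_a * B_bp + A_ap * B_b + A_ap * B_bp
    best = max(best, abs(S))

print(best)  # 2 -- the CHSH bound for deterministic local strategies
# Probabilistic local models are mixtures of these strategies, so they can
# only average such values; hence |S| <= 2 for every local hidden-variable
# theory, while quantum mechanics predicts values up to 2*sqrt(2) ~ 2.83.
```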
To date, all Bell tests have supported the theory of quantum physics, and not the hypothesis of local hidden variables.

Conduct of optical Bell test experiments

In practice most actual experiments have used light, assumed to be emitted in the form of particle-like photons, rather than the atoms that Bell originally had in mind. The property of interest is, in the best known experiments, the polarisation direction, though other properties can be used. Such experiments fall into two classes, depending on whether the analysers used have one or two output channels.

A typical CHSH (two-channel) experiment

In a typical optical experiment of the two-channel kind, for which Alain Aspect set a precedent in 1982, coincidences between the outputs on the two sides are recorded, the results being categorised as '++', '+−', '−+' or '−−' and corresponding counts accumulated.
Four separate subexperiments are conducted, corresponding to the four terms E(a, b) in the test statistic S. The settings a, a′, b and b′ are in practice generally chosen to be 0°, 45°, 22.5° and 67.5° respectively (the "Bell test angles"), these being the ones for which the quantum mechanical formula gives the greatest violation of the inequality.
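For polarisation-entangled photon pairs in the ideal case, quantum mechanics predicts a correlation of the form E(a, b) = cos 2(a − b) (this form assumes the usual Φ⁺-type state; the exact expression depends on the state used). The sketch below, an illustrative calculation rather than data from any experiment, evaluates S at the angles above and recovers the maximal quantum value 2√2:

```python
import math

def E(a_deg, b_deg):
    # Assumed quantum correlation for an ideal polarisation-entangled pair:
    # E(a, b) = cos 2(a - b), with angles given in degrees.
    return math.cos(math.radians(2 * (a_deg - b_deg)))

a, a_prime, b, b_prime = 0.0, 45.0, 22.5, 67.5   # the "Bell test angles"
S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(S)   # ~2.828 = 2*sqrt(2), the largest violation of |S| <= 2 allowed by QM
```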
For each selected value of a and b, the numbers of coincidences in each category are recorded. The experimental estimate for E is then calculated as:
E = (N++ + N−− − N+− − N−+) / (N++ + N−− + N+− + N−+)
Once all four E’s have been estimated, an experimental estimate of the test statistic
S = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′)
can be found. If S is numerically greater than 2 it has infringed the CHSH inequality. The experiment is declared to have supported the QM prediction and ruled out all local hidden variable theories.
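The bookkeeping involved is straightforward; the sketch below assembles the four E values and S from coincidence counts, using made-up counts purely as placeholders:

```python
def correlation(counts):
    # Estimate E for one subexperiment from its four coincidence counts,
    # supplied as a dict with keys '++', '+-', '-+' and '--'.
    num = counts['++'] + counts['--'] - counts['+-'] - counts['-+']
    den = counts['++'] + counts['--'] + counts['+-'] + counts['-+']
    return num / den

# Hypothetical counts for the four subexperiments (a,b), (a,b'), (a',b), (a',b').
runs = {
    ('a', 'b'):   {'++': 420, '+-': 70, '-+': 75, '--': 435},
    ('a', "b'"):  {'++': 75, '+-': 430, '-+': 425, '--': 70},
    ("a'", 'b'):  {'++': 425, '+-': 72, '-+': 68, '--': 430},
    ("a'", "b'"): {'++': 428, '+-': 74, '-+': 71, '--': 427},
}

S = (correlation(runs[('a', 'b')]) - correlation(runs[('a', "b'")])
     + correlation(runs[("a'", 'b')]) + correlation(runs[("a'", "b'")]))
print(S)   # here about 2.85; any value above 2 infringes the CHSH inequality
```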
A strong assumption has had to be made, however, to justify use of this expression for E. It has been assumed that the sample of detected pairs is representative of the pairs emitted by the source. That this assumption may not be true constitutes the fair sampling loophole.
The derivation of the inequality is given in the CHSH Bell test page.

A typical CH74 (single-channel) experiment

Prior to 1982 all actual Bell tests used "single-channel" polarisers and variations on an inequality designed for this setup. The latter is described in Clauser, Horne, Shimony and Holt's much-cited 1969 article as being the one suitable for practical use. As with the CHSH test, there are four subexperiments in which each polariser takes one of two possible settings, but in addition there are other subexperiments in which one or other polariser, or both, are absent. Counts are taken as before and used to estimate the test statistic:
S = (N(a, b) − N(a, b′) + N(a′, b) + N(a′, b′) − N(a′, ∞) − N(∞, b)) / N(∞, ∞),
where the symbol ∞ indicates absence of a polariser.
If S exceeds 0 then the experiment is declared to have infringed Bell's inequality and hence to have "refuted local realism". In order to derive the inequality, CHSH in their 1969 paper had to make an extra assumption, the so-called "fair sampling" assumption. This means that the probability of detection of a given photon, once it has passed the polarizer, is independent of the polarizer setting. If this assumption were violated, then in principle a local hidden variable model could violate the CHSH inequality.
In a later 1974 article, Clauser and Horne replaced this assumption by a much weaker "no enhancement" assumption, deriving a modified inequality; see the page on Clauser and Horne's 1974 Bell test.
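For completeness, here is a corresponding sketch for the single-channel statistic, again with made-up counts that serve only to illustrate the arithmetic:

```python
# Single-channel (CH74-style) test statistic. N[(x, y)] is the coincidence
# count with polariser settings x and y; 'inf' marks a removed polariser.
# All counts are hypothetical placeholders.
N = {
    ('a', 'b'): 430, ('a', "b'"): 72,
    ("a'", 'b'): 425, ("a'", "b'"): 428,
    ("a'", 'inf'): 498, ('inf', 'b'): 502,
    ('inf', 'inf'): 1000,
}

S = (N[('a', 'b')] - N[('a', "b'")] + N[("a'", 'b')] + N[("a'", "b'")]
     - N[("a'", 'inf')] - N[('inf', 'b')]) / N[('inf', 'inf')]
print(S)   # here 0.211; a value above 0 infringes the single-channel inequality
```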

Experimental assumptions

In addition to the theoretical assumptions made, there are practical ones. There may, for example, be a number of "accidental coincidences" in addition to those of interest. It is assumed that no bias is introduced by subtracting their estimated number before calculating S, though not everyone considers this obvious. There may also be synchronisation problems: ambiguity in recognising pairs because, in practice, they will not be detected at exactly the same time.
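The usual estimate for the accidental coincidences comes from the singles rates and the width of the coincidence window; the sketch below shows that arithmetic with made-up numbers (the formula assumes uncorrelated, Poissonian singles):

```python
# Estimate of accidental coincidences for uncorrelated, Poissonian singles:
# expected accidentals ~ R1 * R2 * tau * T, where tau is the coincidence
# window and T the measurement time. All values here are placeholders.
R1 = 50_000.0    # singles rate at detector 1 (counts per second)
R2 = 48_000.0    # singles rate at detector 2 (counts per second)
tau = 5e-9       # coincidence window (seconds)
T = 100.0        # measurement duration (seconds)

raw_coincidences = 3_500
accidentals = R1 * R2 * tau * T
corrected = raw_coincidences - accidentals
print(accidentals, corrected)   # 1200.0 and 2300.0 with these numbers
```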
Nevertheless, despite all these deficiencies of the actual experiments, one striking fact emerges: the results are, to a very good approximation, what quantum mechanics predicts. If imperfect experiments give us such excellent overlap with quantum predictions, most working quantum physicists would agree with John Bell in expecting that, when a perfect Bell test is done, the Bell inequalities will still be violated. This attitude has led to the emergence of a new sub-field of physics which is now known as quantum information theory. One of the main achievements of this new branch of physics is showing that violation of Bell's inequalities leads to the possibility of secure information transfer using quantum cryptography.

Notable experiments

Over the past thirty or so years, a great number of Bell test experiments have been conducted. The experiments are commonly interpreted to rule out local hidden variable theories, and recently an experiment has been performed that is not subject to either the locality loophole or the detection loophole. An experiment free of the locality loophole is one where for each separate measurement and in each wing of the experiment, a new setting is chosen and the measurement completed before signals could communicate the settings from one wing of the experiment to the other. An experiment free of the detection loophole is one where close to 100% of the successful measurement outcomes in one wing of the experiment are paired with a successful measurement in the other wing. This percentage is called the efficiency of the experiment. Advancements in technology have led to a great variety of methods to test Bell-type inequalities.
Some of the best known and recent experiments include:

Freedman and Clauser (1972)

Stuart Freedman and John Clauser carried out the first actual Bell test, using Freedman's inequality, a variant of the CH74 inequality.

Aspect et al. (1982)

Alain Aspect and his team at Orsay, Paris, conducted three Bell tests using calcium cascade sources. The first and last used the CH74 inequality. The second was the first application of the CHSH inequality. The third was arranged such that the choice between the two settings on each side was made during the flight of the photons.

Tittel et al. (1998)

The Geneva 1998 Bell test experiments showed that distance did not destroy the "entanglement". Light was sent in fibre optic cables over distances of several kilometers before it was analysed. As with almost all Bell tests since about 1985, a "parametric down-conversion" source was used.

Weihs et al. (1998): experiment under "strict Einstein locality" conditions

In 1998 Gregor Weihs and a team at Innsbruck, led by Anton Zeilinger, conducted an ingenious experiment that closed the "locality" loophole, improving on Aspect's 1982 experiment. The choice of measurement setting was made using a quantum process to ensure that it was random. This test violated the CHSH inequality by over 30 standard deviations, the coincidence curves agreeing with those predicted by quantum theory.

Pan et al. (2000) experiment on the GHZ state

This is the first of new Bell-type experiments on more than two particles; this one uses the so-called GHZ state of three particles.

Rowe et al. (2001): the first to close the detection loophole

The detection loophole was first closed in an experiment with two entangled trapped ions, carried out in the ion storage group of David Wineland at the National Institute of Standards and Technology in Boulder. The experiment had detection efficiencies well over 90%.

Gröblacher et al. (2007) test of Leggett-type non-local realist theories

A specific class of non-local theories suggested by Anthony Leggett is ruled out. Based on this, the authors conclude that any possible non-local hidden variable theory consistent with quantum mechanics must be highly counterintuitive.

Salart et al. (2008): separation in a Bell Test

This experiment filled a loophole by providing an 18 km separation between detectors, which is sufficient to allow the completion of the quantum state measurements before any information could have traveled between the two detectors.

Ansmann et al. (2009): overcoming the detection loophole in solid state

This was the first experiment testing Bell inequalities with solid-state qubits. This experiment surmounted the detection loophole using a pair of superconducting qubits in an entangled state. However, the experiment still suffered from the locality loophole because the qubits were only separated by a few millimeters.

Giustina et al. (2013), Larsson et al. (2014): overcoming the detection loophole for photons

The detection loophole for photons was closed for the first time by a group led by Anton Zeilinger, using highly efficient detectors. This makes photons the first system for which all of the main loopholes have been closed, albeit in different experiments.

Christensen et al. (2013): overcoming the detection loophole for photons

The Christensen et al. experiment is similar to that of Giustina et al. Giustina et al. did just four long runs with constant measurement settings. Because their experiment was not pulsed, the formation of "pairs" from the two records of measurement results had to be done after the experiment, which exposes it to the coincidence loophole. This led to a reanalysis of the experimental data in a way that removed the coincidence loophole, and fortunately the new analysis still showed a violation of the appropriate CHSH or CH inequality. The Christensen et al. experiment, on the other hand, was pulsed, and measurement settings were frequently reset in a random way, though only once every 1000 particle pairs, not every time.

Hensen et al., Giustina et al., Shalm et al. (2015): "loophole-free" Bell tests

In 2015 the first three significant-loophole-free Bell tests were published within three months by independent groups in Delft, Vienna and Boulder. All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them "loophole-free" in the sense that the remaining conceivable loopholes, such as superdeterminism, require truly exotic hypotheses and might never be closed experimentally.
The first published experiment by Hensen et al. used a photonic link to entangle the electron spins of two nitrogen-vacancy defect centres in diamonds 1.3 kilometers apart and measured a violation of the CHSH inequality. Thereby the local-realist hypothesis could be rejected with a p-value of 0.039, i.e. the chance of accidentally measuring the reported result in a local-realist world would be 3.9% at most.
Both simultaneously published experiments by Giustina et al. and Shalm et al. used entangled photons to obtain a Bell inequality violation with high statistical significance. Notably, the experiment by Shalm et al. also combined three types of random number generators to determine the measurement basis choices. One of these methods, detailed in an ancillary file, is the “'Cultural' pseudorandom source”, which involved using bit strings from popular media such as the Back to the Future films, Monty Python and the Holy Grail, and the television shows Saved by the Bell and Dr. Who.

Schmied et al. (2016): Detection of Bell correlations in a many-body system

Using a witness for Bell correlations derived from a multi-partite Bell inequality, physicists at the University of Basel were able to conclude for the first time that Bell correlations are present in a many-body system, composed of about 480 atoms in a Bose-Einstein condensate. Even though loopholes were not closed, this experiment shows the possibility of observing Bell correlations in the macroscopic regime.

Handsteiner et al. (2017): "Cosmic Bell Test" - Measurement Settings from Milky Way Stars

Physicists led by David Kaiser of the Massachusetts Institute of Technology and Anton Zeilinger of the Institute for Quantum Optics and Quantum Information and University of Vienna performed an experiment that "produced results consistent with nonlocality" by measuring starlight that had taken 600 years to travel to Earth. The experiment “represents the first experiment to dramatically limit the space-time region in which hidden variables could be relevant.”

Rosenfeld et al. (2017): "Event-Ready" Bell test with entangled atoms and closed detection and locality loopholes

Physicists at the Ludwig Maximilian University of Munich and the Max Planck Institute of Quantum Optics published results from an experiment in which they observed a Bell inequality violation using entangled spin states of two atoms with a separation distance of 398 meters in which the detection loophole, the locality loophole, and the memory loophole were closed. The violation of S = 2.221 ± 0.033 rejected local realism with a significance value of P = 1.02×10−16 when taking into account 7 months of data and 55000 events or an upper bound of P = 2.57×10−9 from a single run with 10000 events.

The BIG Bell Test Collaboration (2018): "Challenging local realism with human choices"

An international collaborative scientific effort showed that human free will can be used to close the 'freedom-of-choice loophole'. This was achieved by collecting random decisions from humans instead of random number generators. Around 100,000 participants were recruited in order to provide sufficient input for the experiment to be statistically significant.

Rauch et al. (2018): measurement settings from distant quasars

In 2018, an international team used light from two quasars as the basis for their measurement settings. This experiment pushed the timeframe for when the settings could have been mutually determined to at least 7.8 billion years in the past, a substantial fraction of the superdeterministic limit.

Loopholes

Though the series of increasingly sophisticated Bell test experiments has convinced the physics community in general that local realism is untenable, local realism can never be excluded entirely. For example, the hypothesis of superdeterminism in which all experiments and outcomes are predetermined cannot be tested.
Up to 2015, the outcome of all experiments that violate a Bell inequality could still theoretically be explained by exploiting the detection loophole and/or the locality loophole. The locality loophole means that since in actual practice the two detections are separated by a time-like interval, the first detection may influence the second by some kind of signal. To avoid this loophole, the experimenter has to ensure that particles travel far apart before being measured, and that the measurement process is rapid. More serious is the detection loophole, because particles are not always detected in both wings of the experiment. It can be imagined that the complete set of particles would behave randomly, but instruments only detect a subsample showing quantum correlations, by letting detection be dependent on a combination of local hidden variables and detector setting.
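A deliberately artificial toy model (not one proposed in the literature discussed here) makes the danger explicit: if each pair carries predetermined outcomes together with a purely local rule saying for which setting it allows detection, the post-selected coincidences can appear to violate the CHSH bound even though the model is entirely local:

```python
import random

# Toy local model exploiting the detection loophole. Each pair carries a
# "program": a target setting for each side plus the outcomes to show if
# detected. A particle is detected only when the local setting matches its
# target, so detection depends only on the hidden variable and the local
# setting -- a purely local rule.
programs = [
    # (target_a, target_b, outcome_A, outcome_B)
    ('a',  'b',  +1, +1),   # product +1 when settings are (a, b)
    ('a',  "b'", +1, -1),   # product -1 when settings are (a, b')
    ("a'", 'b',  +1, +1),   # product +1 when settings are (a', b)
    ("a'", "b'", +1, +1),   # product +1 when settings are (a', b')
]

random.seed(1)
totals = {}   # (setting_a, setting_b) -> (sum of products, coincidence count)
for _ in range(200_000):
    ta, tb, out_a, out_b = random.choice(programs)   # the hidden variable
    set_a = random.choice(['a', "a'"])               # Alice's free choice
    set_b = random.choice(['b', "b'"])               # Bob's free choice
    if set_a == ta and set_b == tb:                  # both particles detected
        s, n = totals.get((set_a, set_b), (0, 0))
        totals[(set_a, set_b)] = (s + out_a * out_b, n + 1)

E = {pair: s / n for pair, (s, n) in totals.items()}
S = E[('a', 'b')] - E[('a', "b'")] + E[("a'", 'b')] + E[("a'", "b'")]
print(S)   # 4.0: the detected subsample maximally "violates" CHSH
```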
Experimenters had repeatedly voiced that loophole-free tests could be expected in the near future. In 2015, a loophole-free Bell violation was reported using entangled diamond spins over 1.3 km and corroborated by two experiments using entangled photon pairs.
The remaining possible theories that obey local realism can be further restricted by testing different spatial configurations, methods to determine the measurement settings, and recording devices. It has been suggested that using humans to generate the measurement settings and observe the outcomes provides a further test. David Kaiser of MIT told the New York Times in 2015 that a potential weakness of the "loophole-free" experiments is that the systems used to add randomness to the measurement may be predetermined in a method that was not detected in experiments.