Loopholes in Bell test experiments


In Bell test experiments, there may be problems of experimental design or set-up that affect the validity of the experimental findings. These problems are often referred to as "loopholes". See the article on Bell's theorem for the theoretical background to these experimental efforts. The purpose of the experiment is to test whether nature is best described using a local hidden variable theory or by the quantum entanglement theory of quantum mechanics.
The "detection efficiency", or "fair sampling", problem is the most prevalent loophole in optical experiments. Another loophole that has more often been addressed is that of communication, i.e. locality. There is also the "disjoint measurement" loophole, in which multiple samples are used to obtain the correlations, as compared to "joint measurement", where a single sample is used to obtain all the correlations entering an inequality. To date, no test has simultaneously closed all loopholes.
Ronald Hanson of the Delft University of Technology claims to have performed the first Bell experiment that closes both the detection and the communication loopholes. Nevertheless, correlations of classical optical fields also violate Bell's inequality.
In some experiments there may be additional defects that make "local realist" explanations of Bell test violations possible; these are briefly described below.
Many modern experiments are directed at detecting quantum entanglement rather than ruling out local hidden variable theories, and these tasks are different, since the former accepts quantum mechanics at the outset. This is regularly done using Bell's theorem, but in this situation the theorem is used as an entanglement witness, a dividing line between entangled quantum states and separable quantum states, and is as such not as sensitive to the problems described here. In October 2015, scientists from the Kavli Institute of Nanoscience reported that the quantum nonlocality phenomenon is supported at the 96% confidence level, based on a "loophole-free Bell test" study. These results were confirmed by two studies with statistical significance over 5 standard deviations, which were published in December 2015. However, Alain Aspect writes that "no experiment can be said to be totally loophole-free."

Loopholes

Detection efficiency, or fair sampling

In Bell test experiments, one problem is that detection efficiency may be less than 100%, and this is always the case in optical experiments. This problem was first noted by Pearle in 1970, and Clauser and Horne devised another result intended to take care of it. Some results were also obtained in the 1980s, but the subject has undergone significant research in recent years. The many experiments affected by this problem deal with it, without exception, by using the "fair sampling" assumption.
This loophole changes the inequalities to be used; for example the CHSH inequality:

|E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′)| ≤ 2

is changed. When data from an experiment is used in the inequality, one needs to condition on the occurrence of a "coincidence", i.e. that a detection occurred in both wings of the experiment. This changes the inequality into

|E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′)| ≤ 4/η − 2

In this formula, η denotes the efficiency of the experiment, formally the minimum probability of a coincidence given a detection on one side. In quantum mechanics, the left-hand side reaches 2√2 ≈ 2.83, which is greater than two, but for a non-100% efficiency the latter formula has a larger right-hand side. At low efficiency (below η = 2(√2 − 1) ≈ 83%), the inequality is no longer violated.
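As a numerical sketch of this threshold (function names are illustrative, not from any standard library), the critical efficiency follows from setting the maximal quantum value 2√2 equal to the efficiency-adjusted bound 4/η − 2:

```python
import math

def adjusted_chsh_bound(eta):
    """Right-hand side of the efficiency-adjusted CHSH inequality: 4/eta - 2."""
    return 4.0 / eta - 2.0

def violation_possible(eta):
    """True if the maximal quantum value 2*sqrt(2) exceeds the adjusted bound."""
    return 2.0 * math.sqrt(2.0) > adjusted_chsh_bound(eta)

# Critical efficiency: solve 2*sqrt(2) = 4/eta - 2 for eta.
eta_critical = 2.0 * (math.sqrt(2.0) - 1.0)
print(f"critical efficiency: {eta_critical:.3f}")  # about 0.828

print(violation_possible(0.90))  # True: above the threshold
print(violation_possible(0.30))  # False: a typical optical efficiency
```

This makes concrete why experiments below roughly 83% efficiency cannot violate the adjusted inequality without an extra assumption such as fair sampling.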
All optical experiments are affected by this problem, having typical efficiencies around 5-30%. Several non-optical systems such as trapped ions, superconducting qubits and NV centers have been able to bypass the detection loophole. Unfortunately, they are all still vulnerable to the communication loophole.
There are tests that are not sensitive to this problem, such as the Clauser-Horne test, but these have the same performance as the latter of the two inequalities above; they cannot be violated unless the efficiency exceeds a certain bound. For example, if one uses the so-called Eberhard inequality, the bound is 2/3.

Fair sampling assumption

Usually, the fair sampling assumption is used in regard to this loophole. It states that the sample of detected pairs is representative of the pairs emitted, in which case the right-hand side in the equation above is reduced to 2, irrespective of the efficiency. This comprises a third postulate necessary for violation in low-efficiency experiments, in addition to the postulates of local realism. There is no way to test experimentally whether a given experiment does fair sampling, as the correlations of emitted but undetected pairs are by definition unknown.

Double detections

In many experiments the electronics are such that simultaneous + and – counts from both outputs of a polariser can never occur, only one or the other being recorded. Under quantum mechanics, they will not occur anyway, but under a wave theory the suppression of these counts will cause even the basic realist prediction to yield unfair sampling. However, the effect is negligible if the detection efficiencies are low.

Communication, or locality

The Bell inequality is motivated by the absence of communication between the two measurement sites. In experiments, this is usually ensured by separating the two sites and then making the measurement duration shorter than the time any light-speed signal would need to travel from one site to the other, or indeed, to the source. In one of Alain Aspect's experiments, inter-detector communication at light speed during the time between pair emission and detection was possible, but such communication between the time of fixing the detectors' settings and the time of detection was not. An experimental set-up without any such provision effectively becomes entirely "local", and therefore cannot rule out local realism. Additionally, the experiment design will ideally be such that the setting for each measurement, at each station, is not determined by any earlier event.
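The timing requirement can be illustrated with a hypothetical helper (not code from any actual experiment): a measurement interval closes the communication loophole only if it is shorter than the light travel time between the stations.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def spacelike_separated(separation_m, setting_to_outcome_s):
    """True if no light-speed signal could cross between the stations during
    the interval from fixing a setting to recording the outcome."""
    return setting_to_outcome_s < separation_m / SPEED_OF_LIGHT

# A few hundred meters of separation gives light a travel time of roughly
# a microsecond, so settings and outcomes must be fixed faster than that.
print(spacelike_separated(400.0, 100e-9))  # True: 100 ns interval
print(spacelike_separated(400.0, 10e-6))   # False: 10 us is too slow
```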
John Bell supported Aspect's investigation of this loophole and had some active involvement with the work, being on the examining board for Aspect's PhD. Aspect improved the separation of the sites and made the first attempt at having truly independent, random detector orientations. Weihs et al. improved on this with a separation on the order of a few hundred meters, in addition to using random settings retrieved from a quantum system. Scheidl et al. improved on this further by conducting an experiment between locations separated by a still larger distance.

Failure of rotational invariance

The source is said to be "rotationally invariant" if all possible hidden variable values are equally likely. The general form of a Bell test does not assume rotational invariance, but a number of experiments have been analysed using a simplified formula that depends upon it. It is possible that there has not always been adequate testing to justify this. Even where, as is usually the case, the actual test applied is general, if the hidden variables are not rotationally invariant this can result in misleading descriptions of the results. Graphs may be presented, for example, of coincidence rate against the difference between the settings a and b, but if a more comprehensive set of experiments had been done it might have become clear that the rate depended on a and b separately. Cases in point may be Weihs' experiment, presented as having closed the locality loophole, and Kwiat's demonstration of entanglement using an "ultrabright photon source".

Coincidence loophole

In many experiments, especially those based on photon polarization, pairs of events in the two wings of the experiment are only identified as belonging to a single pair after the experiment is performed, by judging whether or not their detection times are close enough to one another. This generates a new possibility for a local hidden variables theory to "fake" quantum correlations: delay the detection time of each of the two particles by a larger or smaller amount depending on some relationship between hidden variables carried by the particles and the detector settings encountered at the measurement station. This loophole was noted by A. Fine in 1980 and 1981, by S. Pascazio in 1986, and by J. Larsson and R. D. Gill in 2004. It turns out to be more serious than the detection loophole in that it gives more room for local hidden variables to reproduce quantum correlations, for the same effective experimental efficiency: the chance that particle 1 is accepted or measured given that particle 2 is detected.
The coincidence loophole can be ruled out entirely simply by working with a pre-fixed lattice of detection windows, short enough that most pairs of events occurring in the same window originate from the same emission, and long enough that a true pair is not separated by a window boundary.
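A minimal sketch of this fixed-lattice approach, with hypothetical detection times: each detection is assigned to the window containing it, and only windows in which both wings registered an event yield a pair.

```python
def lattice_coincidences(times_a, times_b, window):
    """Pair detections using pre-fixed windows [k*window, (k+1)*window).
    Keeps one event per wing per window (the last, if several fall in one)."""
    bins_a = {int(t // window): t for t in times_a}
    bins_b = {int(t // window): t for t in times_b}
    shared = sorted(bins_a.keys() & bins_b.keys())
    return [(bins_a[k], bins_b[k]) for k in shared]

# Hypothetical detection times; window length 1.0 in the same units.
wing_a = [0.2, 1.1, 3.4]
wing_b = [0.3, 1.2, 2.9]
print(lattice_coincidences(wing_a, wing_b, 1.0))  # [(0.2, 0.3), (1.1, 1.2)]
```

Because the window boundaries are fixed before the experiment, a local model cannot shift detection times to manufacture favorable pairings, which is what closes the loophole.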

Memory loophole

In most experiments, measurements are repeatedly made at the same two locations. Under local realism, there could be effects of memory leading to statistical dependence between subsequent pairs of measurements. Moreover, physical parameters might be varying in time. It has been shown that, provided each new pair of measurements is made with a new random pair of measurement settings, neither memory nor time inhomogeneity has a serious effect on the experiment.

Example of typical experiment

As a basis for our description of experimental errors, consider a typical experiment of CHSH type. In the experiment, the source is assumed to emit light in the form of pairs of particle-like photons, the two photons of each pair being sent off in opposite directions. When photons are detected simultaneously on both sides, the "coincidence monitor" counts a coincident detection. On each side of the coincidence monitor there are two inputs, here named the "+" and the "−" input. Each individual photon must make a choice and go one way or the other at a two-channel polarizer. For each pair emitted at the source, ideally either the + or the − input on both sides will detect a photon. The four possibilities can be categorized as ++, +−, −+ and −−. The number of simultaneous detections of each of the four types (N++, N+−, N−+ and N−−) is counted over a time span covering a number of emissions from the source. Then the following is calculated:

E = (N++ + N−− − N+− − N−+) / (N++ + N−− + N+− + N−+)

This is done with one polarizer rotated into two positions a and a′, and the other into two positions b and b′, so that we get E(a,b), E(a,b′), E(a′,b) and E(a′,b′). Then the following is calculated:

S = E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′)
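With hypothetical count data, the E and S calculations described above can be sketched as follows (the counts are invented for illustration):

```python
def correlation(n_pp, n_mm, n_pm, n_mp):
    """E = (N++ + N-- - N+- - N-+) / (N++ + N-- + N+- + N-+)."""
    return (n_pp + n_mm - n_pm - n_mp) / (n_pp + n_mm + n_pm + n_mp)

def chsh_s(e_ab, e_ab2, e_a2b, e_a2b2):
    """S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return e_ab - e_ab2 + e_a2b + e_a2b2

# Hypothetical counts for one pair of settings, roughly matching the quantum
# prediction E = cos(2 * 22.5 deg), about 0.707, at the optimal CHSH angles:
e = correlation(427, 427, 73, 73)  # 0.708

# At the optimal settings, quantum mechanics predicts S = 2*sqrt(2), about 2.83:
s = chsh_s(0.707, -0.707, 0.707, 0.707)
print(e, s)
```

A local realist model is bounded by |S| ≤ 2, so any measured S significantly above 2 favors the quantum prediction.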
Entanglement and local realism give different predicted values of S; thus the experiment gives an indication of which of the two theories better corresponds to reality.

Free choice of detector orientations

The experiment requires choice of the detectors' orientations. If this free choice were in some way denied then another loophole might be opened, as the observed correlations could potentially be explained by the limited choices of detector orientations. Thus, even if all experimental loopholes are closed, superdeterminism may allow the construction of a local realist theory that agrees with experiment.