Quantum correlation
From Wikipedia, the free encyclopedia
In Bell test experiments the term quantum correlation has come to mean the expectation value of the product of the outcomes on the two sides of the experiment. In the paper that inspired the Bell tests, John Bell's 1964 paper, it was assumed that the outcomes A and B could each take only one of two values, -1 or +1. It followed that the product, too, could only be -1 or +1, so that the average value of the product would be:
    (N++ + N-- - N+- - N-+) / Ntotal
where, for example, N++ is the number of simultaneous occurrences ("coincidences") of the outcome +1 on both sides of the experiment.
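As a minimal sketch, this average can be computed directly from the four coincidence counts. The counts below are purely illustrative, not data from any real experiment, and every emitted pair is assumed to be detected:

```python
# Illustrative coincidence counts for the four outcome pairs
# (these numbers are made up for the example).
counts = {("+", "+"): 430, ("-", "-"): 420, ("+", "-"): 80, ("-", "+"): 70}

# With perfect detectors, every emitted pair yields a coincidence,
# so Ntotal is just the sum of the four counts.
n_total = sum(counts.values())

# (N++ + N-- - N+- - N-+) / Ntotal
correlation = (
    counts[("+", "+")] + counts[("-", "-")]
    - counts[("+", "-")] - counts[("-", "+")]
) / n_total
```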
In actual experiments, though, detectors are not perfect and there are usually many null outcomes. Null results contribute nothing to the sum of products, so the correlation can still be estimated from the coincidences alone; in practice, however, instead of dividing by Ntotal it has become customary to divide by the total number of observed coincidences,
    (N++ + N-- + N+- + N-+)
The legitimacy of this method relies on the assumption that the observed coincidences constitute a fair sample of the emitted pairs.
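As a sketch of this estimator, only the denominator changes from the ideal case: with imperfect detectors the number of emitted pairs is unknown, so one normalises by the observed coincidences instead. The counts are again purely illustrative, and it is the fair-sampling assumption that licenses the substitution:

```python
# Illustrative counts from a run with imperfect detectors: many emitted
# pairs produce no coincidence at all, so Ntotal is unknown.
counts = {("+", "+"): 215, ("-", "-"): 210, ("+", "-"): 40, ("-", "+"): 35}

# Normalise by observed coincidences rather than the unknown Ntotal.
# This is unbiased only if the detected pairs are a fair sample.
n_coincidences = sum(counts.values())

estimate = (
    counts[("+", "+")] + counts[("-", "-")]
    - counts[("+", "-")] - counts[("-", "+")]
) / n_coincidences
```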
Following local realist assumptions as in Bell's 1964 paper, the estimated quantum correlation will converge after a sufficient number of trials to:
    QC(a, b) = ∫ ρ(λ) A(a, λ) B(b, λ) dλ
where a and b are detector settings and λ is the hidden variable, drawn from a distribution ρ(λ).
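To illustrate this form, here is a sketch of one toy local hidden-variable model, not the only possible one: λ is an angle drawn uniformly from [0, 2π), and each outcome is a deterministic ±1 function of the local setting and λ. The particular choices of A and B below are this example's assumptions, and the integral is approximated on a uniform grid over λ:

```python
import math

def A(setting, lam):
    # Deterministic local outcome +/-1, depending only on the
    # local setting and the shared hidden variable lam (an angle).
    return 1 if math.cos(setting - lam) >= 0 else -1

def B(setting, lam):
    # Chosen so that equal settings give perfect anticorrelation.
    return -A(setting, lam)

def lhv_correlation(a, b, n=100_000):
    # Approximate QC(a, b) = ∫ ρ(λ) A(a, λ) B(b, λ) dλ
    # with ρ(λ) uniform on [0, 2π), using n grid points.
    return sum(
        A(a, lam) * B(b, lam)
        for lam in (2 * math.pi * k / n for k in range(n))
    ) / n
```

For this model the correlation falls off linearly in |a - b| (from -1 at equal settings), rather than as the cosine that quantum mechanics predicts; that mismatch is the kind of discrepancy Bell's argument exploits.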
The quantum correlation is the key statistic in the CHSH inequality and some of the other Bell inequalities, tests of which open the way for experimental discrimination between quantum mechanics on the one hand and local realism or local hidden-variable theory on the other.
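As an illustration of how the correlations enter the CHSH inequality, the sketch below evaluates the CHSH combination S = E(a, b) - E(a, b') + E(a', b) + E(a', b') using the standard singlet-state prediction E(a, b) = -cos(a - b). Local realism bounds |S| <= 2; the settings chosen here (a hypothetical but standard choice) give |S| = 2√2:

```python
import math

def E_quantum(a, b):
    # Standard singlet-state prediction for the correlation
    # at analyser settings a and b.
    return -math.cos(a - b)

def chsh(E, a, a2, b, b2):
    # CHSH combination of four correlations;
    # any local realist model obeys |S| <= 2.
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

s = chsh(E_quantum, 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
# abs(s) == 2*sqrt(2) ≈ 2.828 for these settings, exceeding the local bound of 2
```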
References
J. S. Bell, Speakable and Unspeakable in Quantum Mechanics, Cambridge University Press, 1987. ISBN 0-521-52338-9.