It is known that John Bell used the assumption of locality in proving his famous inequality. But I don't see the need for this assumption.

Assuming that a joint probability distribution P(A, B, C) exists for the values of three variables A, B, C, Bell's inequality can easily be obtained without requiring locality of this distribution.

Denoting by ⟨AB⟩, ⟨AC⟩, and ⟨BC⟩ the three averages involved in Bell's inequality, it is sufficient to write them as algebraic sums of the different probabilities, i.e.

(1) ⟨AB⟩ = P(A=1, B=1, C=1) + P(A=1, B=1, C=-1) - P(A=1, B=-1, C=1) - P(A=1, B=-1, C=-1) … ,

(2) ⟨AC⟩ = P(A=1, B=1, C=1) - P(A=1, B=1, C=-1) + P(A=1, B=-1, C=1) … ,

(3) ⟨BC⟩ = P(A=1, B=1, C=1) - P(A=1, B=1, C=-1) - P(A=1, B=-1, C=1) … ,

where each sum runs over all eight sign assignments of A, B, C, the sign of each term being the product of the two variables being averaged.

Substituting these three formulas into Bell's inequality, one can easily check that the inequality holds. So why is the locality assumption needed here?
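
For completeness, here is a minimal sketch of that check, spelled out in my own notation (not taken from any particular text). I write the inequality in the form |⟨AB⟩ - ⟨AC⟩| ≤ 1 - ⟨BC⟩; the familiar version with +⟨BC⟩ corresponds to the anticorrelation convention of the singlet state. Nothing is used beyond the existence of P(A, B, C) and the fact that A, B, C take the values ±1:

\[
\langle AB\rangle - \langle AC\rangle
  = \sum_{a,b,c=\pm 1} (ab - ac)\, P(a,b,c)
  = \sum_{a,b,c=\pm 1} ab\,(1 - bc)\, P(a,b,c),
\]
since \(b^{2} = 1\), and therefore
\[
\bigl|\langle AB\rangle - \langle AC\rangle\bigr|
  \le \sum_{a,b,c=\pm 1} (1 - bc)\, P(a,b,c)
  = 1 - \langle BC\rangle .
\]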

NOTE: recall that the locality assumption says that the value obtained at Alice's lab for the observable A does not depend on whether the observable B or the observable C is measured at Bob's lab. The question arises from the fact that the proof of Bell's inequality seems to be independent of whether the probabilities P(A, B, C) result from local parameters/factors/influences or from non-local ones.
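
To illustrate that last point numerically, here is a small Python sketch (my own script, using NumPy; the names and the number of trials are arbitrary). It draws random joint distributions P(A, B, C) with no locality structure whatsoever, computes the three averages from them, and confirms that the inequality above is never violated:

import numpy as np

rng = np.random.default_rng(0)

# All eight sign assignments of (A, B, C).
signs = (-1, 1)
outcomes = [(a, b, c) for a in signs for b in signs for c in signs]

for _ in range(10_000):
    # Arbitrary normalized joint distribution P(A, B, C); nothing "local" about it.
    p = rng.random(8)
    p /= p.sum()

    ab = sum(pi * a * b for pi, (a, b, c) in zip(p, outcomes))
    ac = sum(pi * a * c for pi, (a, b, c) in zip(p, outcomes))
    bc = sum(pi * b * c for pi, (a, b, c) in zip(p, outcomes))

    # Bell's inequality in the |<AB> - <AC>| <= 1 - <BC> form (small tolerance for rounding).
    assert abs(ab - ac) <= 1 - bc + 1e-12

print("No violation found for any of the random joint distributions.")

The check exercises only the algebra of the three sums above, for arbitrary joint distributions; the probabilities are not constrained to come from any local model.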
