In 1971, Joseph Hafele and Richard Keating used atomic clocks to test the predictions of time dilation resulting from motion (special relativity) and from gravity (general relativity). In 1972, they reported in Science (14 Jul 1972, Vol. 177, Issue 4044, pp. 168-170) the detection of relativistic time loss as a result of motion. Setting aside for the moment the effect of gravity on time, was the detected time loss due to motion in accordance with what special relativity predicts?
In the experiment, an atomic clock A remained at the U.S. Naval Observatory, while an atomic clock B flew eastward around the world on a commercial plane. At the end of the flight, atomic clock B had lost time with respect to atomic clock A.
From the reference frame of clock A, this appears to be in accordance with the prediction of special relativity. That is, from the reference frame of clock A, the relative motion of clock B was expected to slow the passage of time for clock B, so clock B would be expected to lose time with respect to clock A. The data confirmed this prediction.
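For a rough sense of the magnitude involved, the special-relativistic prediction follows from the Lorentz factor: a clock moving at speed v loses t(1 - 1/γ) over coordinate time t, which for low speeds is approximately t·v²/2c². A minimal sketch of that arithmetic, where the cruise speed and flight duration are illustrative assumptions rather than the actual flight parameters:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def sr_time_loss(v: float, t: float) -> float:
    """Time (in seconds) that a clock moving at speed v is predicted
    to lose over coordinate time t, per special relativity:
    loss = t * (1 - 1/gamma)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return t * (1.0 - 1.0 / gamma)

# Illustrative values only (assumed, not the experiment's actual numbers):
v_plane = 250.0          # m/s, typical jet cruise speed
t_flight = 45 * 3600.0   # s, roughly two days aloft

loss = sr_time_loss(v_plane, t_flight)
print(f"predicted loss ~ {loss * 1e9:.0f} ns")  # on the order of tens of ns
```

The result is tens of nanoseconds, which is why atomic clocks were needed: the predicted offset is far below what any mechanical clock could resolve.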
How about from the reference frame of clock B? In special relativity no reference frame is privileged over another, and the same result is to be expected regardless of the reference frame from which measurements are made.
From the reference frame of clock B, clock B was stationary and clock A was moving at a high speed with respect to clock B. From the reference frame of clock B, the relative motion of clock A would be expected to slow the passage of time of clock A, so clock A would be expected to lose time with respect to clock B. Yet in the experiment, clock A gained time with respect to clock B. The data contradicted the prediction.
How could special relativity be confirmed in one reference frame and simultaneously contradicted in another? Is this a fatal flaw in the prediction of time dilation in special relativity, or is there a way to resolve the conundrum?