Can we design an experiment to prove that the speed of a particle cannot go from below the speed of light to above it? I am not talking only about accelerating particles: the experiment must be able to prove that no method, including pushing, jumping or tunneling particles from below the speed of light to above it, can exist. I am not looking for a theoretical derivation but for an actual lab experiment.
Dear Shalender, to my understanding (and this is partly gained by studying the history of science) this is not the way physics, as a scientific discipline, works: one does not begin with a set of perceived advantages exterior to science to be had if something were true and then set out to prove the truth of that thing; this is at best a form of reverse engineering. We are supposed to observe nature insofar as it is accessible to observation and through scientific methods come to an understanding of these observations within the body of established knowledge, which of course can itself be subject to revision. This understanding can be formulated in terms of laws, expressed most conveniently in mathematical language. To test whether our understanding is complete, or at least not erroneous, we use the laws we have written down to predict observations as yet to be made. Later observations are to verify these predictions. From my very personal perspective, any scientific investigation must be embedded in a greater whole; its links to the available body of knowledge must be evident to us, the researchers. Of course, one can make discoveries by serendipity while doing research, but this is only an added bonus for thinking about nature and attempting to understand it. You may wish to consult one of the better biographies of Albert Einstein, or any other great scientist for that matter, to see how they practised physics. Come to think of it, one only needs to go through the publications of Einstein (the ones he wrote in his most productive years) to realise that they invariably begin with discussing some experimental observations.
This question lies at the heart of theoretical and experimental physics. During a period in 2011 the physics community was in fact confronted with an experimental observation that, if correct, would have shown that the speed of light in vacuum is not the highest achievable speed. For the details I refer you to the attached Wikipedia entry.
http://en.wikipedia.org/wiki/Faster-than-light_neutrino_anomaly
I agree, but still there is no experiment to prove that it indeed is the maximum speed even if one tries by whatever means possible. Is it possible to design a good experiment to prove the v ≤ c limit?
When the accuracy of the speed-of-light measurement is improved, the definition of the metre is modified to keep the speed of light fixed at c.
As of now, no matter or photon has been observed to exceed the speed of light in vacuum.
In matter it can happen, and the way this is detected is as Cherenkov radiation, emitted by a charged particle moving through matter at a speed greater than the speed of light in that medium (but less than the speed of light in vacuum), a technique used routinely for decades now. In vacuum it does not happen, and this can be understood in the same way.
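As a concrete illustration of the threshold involved, here is a minimal sketch (in Python) of the standard Cherenkov condition β > 1/n and the emission angle cos θ = 1/(nβ); the refractive index of water and the particle speed are assumed illustrative values:

```python
import math

# Cherenkov radiation occurs when a charged particle's speed v exceeds the
# phase velocity of light in the medium, c/n. In terms of beta = v/c, the
# threshold is beta > 1/n, and the emission angle satisfies cos(theta) = 1/(n*beta).

def cherenkov_threshold_beta(n):
    """Minimum beta = v/c for Cherenkov emission in a medium of refractive index n."""
    return 1.0 / n

def cherenkov_angle_deg(beta, n):
    """Cherenkov cone half-angle in degrees; requires n*beta > 1."""
    if n * beta <= 1.0:
        raise ValueError("below Cherenkov threshold")
    return math.degrees(math.acos(1.0 / (n * beta)))

n_water = 1.33  # approximate refractive index of water (illustrative)
print(cherenkov_threshold_beta(n_water))   # ~0.75: the particle must exceed ~75% of c
print(cherenkov_angle_deg(0.99, n_water))  # cone angle for a near-lightspeed particle
```

The same inequality explains the last sentence of the post: in vacuum n = 1, so the threshold β > 1 can never be met.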
Dear Shalender, as my answer implied, for the time being there is no reason to suspect that the speed of light is not the upper bound to the speed of propagation. However, if and when this is shown not to be the case, it will not be an isolated event; it will demand a reconsideration of the entire body of extant theoretical physics. The principle that the speed of light is the highest achievable speed is deeply interwoven into this body. The question "Is it possible to design a good experiment to prove the v ≤ c limit?" therefore cannot be considered in isolation from that body.
Sure, not only can it be designed, but hundreds of devices that do exactly that exist, and zillions of experiments have been done using them: particle accelerators of all kinds. No shattering results as to particles exceeding the speed of light yet. As to "light exceeding the speed of light": sure, quite a few experiments have supposedly demonstrated that (a few times, at intervals of about 15 years), in particular with a presumed group velocity of a laser pulse in an amplifying medium exceeding the speed of light by orders of magnitude, but each time a (usually quite mundane) explanation was found that brought us back to the old v ≤ c.
Dear Shalender, from my perspective there are more urgent issues in physics to address than coming up with experiments disproving that the speed of light in vacuum is the highest achievable speed. Of course one should be open-minded, but there is no indication in sight that this speed limit is a source of any problem in physics. In fact, the relatively recent experimental verification of the existence of the Higgs boson has shown that the Standard Model of particle physics is more fundamental than anyone would have imagined. I should say this: showing violations of principles is not an aim in itself in physics; the aim in practising physics is understanding the observed phenomena in terms of a minimal number of basic principles. These principles are only re-evaluated when they stand in the way of our understanding and the progress of science. We are not there to outsmart ourselves.
Alexander/Behnam> I agree that normal acceleration can be ruled out and accelerators have empirically proven that. Can we also design an experiment to prove that no jump/tunneling of velocity from below C to above C is allowed?
The paper here: https://www.researchgate.net/publication/265643274_Extended_principle_of_relativity_beyond_speed_of_light_and_a_method_to_push_particles_beyond_the_speed_of_light talks about a classical method to jump/push them. I think if this experiment is disproven then it is very safe to assume that current special relativity works well even above the speed of light and that no such method can be created. This is because the theory in the paper extends special relativity beyond the speed of light by a simple extension: a single event occurring in a stationary reference frame will be observed as multiple events, or as zero events, by an observer moving above the speed of light. This is because an observer moving above the speed of light will pick up not only the photons arising from an event that are coming towards it, but also the photons going away from it. On the other hand, if the event occurs while the observer is moving away from it, then the photons from the event can never reach it. If the proposed experiment is disproven, it means that one event does not map to multiple events above the speed of light, which in turn means that no observer can actually move above the speed of light and observe events occurring below the speed of light (if an observer could, it would have to measure them twice, which would mean that the proposed extended theory is true, and so are its results).
Shalender, I have to admit that I haven't read your paper in much detail yet (sorry, but I've read too many similar claims before, so I've become lazy by default :-) ). Well, anyway, it is always the same, sigh: either you've got a negative result (i.e. v ≤ c) ...
Here's a poser for people: if you aim your spaceship at a black hole and power-dive through the horizon, how fast are you moving by the time you cross the horizon?
The answer would seem to be “more than the background speed of light”. So travelling at more than background c //appears// to be entirely legal under current theory. Causality isn't broken because light in your region is accelerated into the hole as well, so you're never travelling faster than the velocity of local light that happens to be travelling in the same direction as you – you never overtake your //own// light.
The answers to these questions sometimes depend on whether we define the rate at which light propagates as a “velocity” with a defined direction, or as an averaged round-trip “speed”. If we use “speed” in our definitions then our calculations probably won't give complete answers for situations in which light-propagation isn't isotropic. If a region has a preferred direction for light propagation (as is the case when there's a gravitational gradient), then the round-trip-based light “speed” arguments aren't necessarily valid.
----
Another complication is that groups of moving particles seem to be associated with measurable direction-dependent differences in the velocity of light in the region, so moving matter appears to be associated with the sort of lightspeed anisotropies that we usually associate with gravitational fields, meaning that particle physics may have an inbuilt excuse for ignoring some of the limitations built into special relativity.
Putting it bluntly: even if special relativity describes the theoretical operation of the principle of relativity in a perfect vacuum, once you start throwing particles through that region to perform real physics, it's no longer a perfect vacuum, and special relativity no longer has to apply.
Alexander> Very well written. BTW, seeing double, or catching photons from an event twice, is a very natural consequence of moving very fast (faster than the photons). You can eventually catch up with both photons: the one coming towards you and the one going away from you. If that does not happen, then you are not moving faster than the photons.
Now assume that you are indeed moving faster than the photons and you are approaching a stationary observer. You are emitting photons towards the stationary observer periodically. But because you are moving faster than the photons, the photons emitted by you will arrive in reverse chronological order. If the emitted photons are considered as an EM wave, the wave pattern of the EM wave will be reversed. Using the above two facts one can arrive at the particle energy and momentum via the relativistic Doppler effect and a clever formulation in which the massive particle, in the frame of reference moving faster than light, explodes into two photons.
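The reversed arrival order described above is simple kinematics, analogous to a supersonic source outrunning its own sound. A minimal sketch, in natural units with c = 1 and with hypothetical numbers, treating v > c purely as a geometric premise of the thought experiment rather than established physics:

```python
# A source starts at distance D from a stationary observer and closes at speed v.
# A photon emitted at time t must cover the remaining distance D - v*t, so it
# arrives at t + (D - v*t)/c. That arrival time *decreases* with t when v > c,
# i.e. later emissions arrive earlier, reversing the chronological order.

c = 1.0    # natural units
D = 100.0  # initial distance (hypothetical)

def arrival_time(t_emit, v):
    """Arrival time at the observer of a photon emitted at t_emit by a source
    moving towards the observer at speed v."""
    return t_emit + (D - v * t_emit) / c

for v in (0.5, 2.0):
    print(v, [arrival_time(t, v) for t in (0.0, 10.0, 20.0)])
# For v = 0.5 the arrival times increase with emission time (normal order);
# for v = 2.0 they decrease (reversed order).
```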
Given the energy and momentum formulation thus arrived at, the paper proposes a method to jump particles from below the speed of light to above it. If the experiment is indeed proven false, then we can safely assume that it is impossible to do so.
Hi Behnam. For the last half-century or so, particle physics has been operating under some anomalous and sometimes quite troubling operational guidelines and test theories, which, when we examine what they let us measure and what they don't let us measure, occasionally appear to be quite perverse.
One way of explaining these apparent perversions of normal scientific methodology is that perhaps there are indeed “indications in sight that this speed limit is a source of problems in physics”, that these may have been visible for some time, and that perhaps we've developed these tortuous methodologies and compartmentalised collections of theories, with ad-hoc rules as to when different sets of rules are or aren't supposed to apply, as a way of protecting ourselves from seeing what would otherwise seem to be some pretty glaring inconsistencies.
I think we've developed these arcane procedures and rules and modes of thought as a form of intellectual “scar tissue” that protects our delicate minds from having to deal with a situation that might otherwise be quite distressing to some people.
If you want to understand how a system may be failing, studying the distribution and characteristics of that scar tissue can be a useful exercise. Even if you think the system is fine, this sort of review is a useful "sanity check" that you'd expect a healthy research community to carry out on itself, as a matter of good practice. The fact that nobody in the mainstream community seems to be willing to do this sort of review is again, IMO, possibly evidence of more scar tissue.
My opinion is that such an experiment would not be performed. Why? Because the opposite can be shown to be true: a particle can be accelerated past the speed of light.
There are known theoretical situations where this does occur. Last year I read about the thought experiment of a particle near infinity being accelerated towards a black hole event horizon, where the escape velocity at the event-horizon radius is the speed of light. Therefore this infalling particle, once past the event horizon by a minimal amount, would be moving faster than the speed of light; but, to avoid the paradox, as it is inside the event horizon and cannot be observed from outside the horizon, there is no measuring this particle moving faster than light from outside the horizon. "We" cannot see it.
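The coincidence this thought experiment leans on, namely that the Newtonian escape velocity √(2GM/r) equals c exactly at the Schwarzschild radius r_s = 2GM/c², can be sketched numerically. The 10-solar-mass black hole is an assumed illustrative example, and the numbers are purely Newtonian; a rigorous treatment of what happens inside the horizon requires general relativity:

```python
# Newtonian escape velocity sqrt(2GM/r) evaluated at the Schwarzschild radius
# r_s = 2GM/c^2 gives exactly c, which is the coincidence behind the
# "faster than light inside the horizon" thought experiment.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

M = 10 * M_sun                  # a 10-solar-mass black hole (illustrative)
r_s = 2 * G * M / c**2          # Schwarzschild radius, roughly 30 km

v_escape = (2 * G * M / r_s) ** 0.5
print(r_s)           # horizon radius in metres
print(v_escape / c)  # ~1.0: escape velocity equals c at the horizon
```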
So, as a thought experiment, it 'disallows' setting up an experiment to prove that a particle cannot be made to pass the light barrier, since the opposite of such an experiment can be proposed, and done, even on Earth, if one had a large enough, and 'safe', black hole.
Of course, this is not the type of experiment you were thinking about. It's the opposite: it proves it can be done, it just cannot be observed/measured/reported back to the experimenter who remains alive outside the black hole.
The philosophy surrounding Special Relativity, GR, QM, and such, does allow for thought experiments, and perhaps even for accepting the results of such as the 'truth'? As proven? As reality?
The last thought I have is that all such experiments are always preceded by the theoretical considerations, math equations, proofs, and such, that the experiment is 'erected' upon. So, it's a great RG question, as it forces us all to think critically. Always a worthy goal.
Feedback?
So the experiment proposed in the paper https://www.researchgate.net/publication/265643274_Extended_principle_of_relativity_beyond_speed_of_light_and_a_method_to_push_particles_beyond_the_speed_of_light , if correct, will also prove the opposite and clearly show that particles can go above the speed of light. Either way, it will lead to a good answer.
Peter> Very well said. In fact, with all this discussion we might actually reach not a 100% but a 99% experiment. The thought experiment is also good, as long as it can also be conducted in the real world.
Shalender,
In C19th Newtonian Optics there's a lightspeed limit to how fast you can directly accelerate a particle, because the coupling efficiency between the particle and coils placed behind it or alongside it drops to zero as v tends to c_background. I know that it's commonly taught that transverse redshifts are unique to SR but the math says otherwise – transverse redshift-type effects seemed to show up in most C19th models. In the case of Newtonian Optics, there's an aberration redshift at 90 degrees (lab) that's actually stronger than the SR equivalent – Lorentz-squared rather than Lorentz – meaning that the "SR" particle accelerator lightspeed limit appeared to exist even in Newtonian theory, if we're confining ourselves to only considering the case of acceleration due to directly-applied force. These older models also tended to have associated velocity-addition formulae, but their physical significance was different to that of the SR version.
Anyhow … The NO relationships are unexpectedly resilient when you couple them to an acoustic metric, and are unexpectedly difficult to disprove using C20th data. Distinguishing between the NO relationships and the revised SR versions is often actually quite tricky (for instance both generate E=mc^2 as an exact result), so it appears that the SR community simply chose not to do the tests, and constructed their test theory in such a way that if you got a result that looked more like an NO result than an SR one, you were supposed to calibrate it out as assumed experimental error. I'd suggest that we don't actually yet know which of these two sets of equations gives the most accurate results. There may even be other sets of candidate equations to consider.
Back to the original question … one of the differences between NO/am and SR is that although both models say that you can't directly accelerate a particle to more than the speed of light, NO/am says that once you have it at 99.9...% of background lightspeed, the particle itself can radiate daughter particles that //do// initially travel at more than background c (although the velocity of light in the immediate region would have to be increased around the daughter particle in its direction of travel, to prevent it passing through its own wavefront). It's a “compound acceleration” or “indirect acceleration” mechanism that's very closely related to the indirect radiation effect that appears in acoustic-metric descriptions of black holes, for the indirect acceleration of particles outward across a gravitational horizon.
If you then describe the resulting visible physics using a coordinate system that doesn't take into account the acceleration-related fluctuations of the metric and of the effective horizon, you end up with an artificial description in which the radiation appears to be created outside the horizon as the result of particle pair-production mechanisms. At that point, you're talking about textbook Hawking radiation.
So … when you ask whether we can do an experiment to prove the impossibility of particles being made to go at more than c_background if we include mechanisms //other// than direct acceleration, I think the answer is "no".
If we //can// make particles go superfast (if only briefly) by using indirect acceleration, then quantum mechanics already has a way of statistically describing the results as a legitimate-looking Hawking radiation effect (QM dislikes absolute classical barriers).
Conversely, if we believe that every effect that exists in QM has a real-world counterpart, then heavy particles moving at 99.99% of background c ought to be able to emit lightweight daughter particles travelling initially at more than background c, to correspond with the idea that QM should predict particles tunnelling across a classical barrier.
If you wanted to test the simple Minkowski lightmetric of special relativity against models that showed more complex light behaviours, one of the ways that you'd do it would be by accelerating heavy objects (say, lead nuclei) up to as close to lightspeed as we could, then breaking them apart, and seeing whether any of the lightweight daughter particles arrived at a detector before the bulk of the associated light pulse.
If we'd ever constructed a proper test theory to compare the classical relationships of Newtonian optics against those of special relativity, checking for the presence of indirect radiation effects would have been one of the most critical tests.
Dear Shalender, it is perhaps a matter of taste, but in my opinion, in the realm of fundamental physics investigations should be guided by observations. In the case of Einstein's relativity, if you consider its historical background you will notice that there was a mounting number of observations that could not be explained by Galilean relativity; think of the Fizeau experiment, to name but one example, whose explanation in the framework of Einstein's theory of relativity is a matter of one line of trivial algebra. The questions I would ask are: what set of experimental observations would be directly clarified by relaxing the principle v ≤ c? What new vista in our understanding of the physical world would be opened by relaxing this principle? What contradictions would be produced/revived by relaxing this principle? What all-encompassing theory would be justified by relaxing this principle? Etc. In this connection, historically we have moved from v unrestricted to v ≤ c, so that returning to v unrestricted is going to revive all the old problems that over the course of the past 100 years we had believed to have solved! Note that for v/c → 0 Einstein's theory of relativity reproduces the classical non-relativistic theory.
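The closing remark, that relativity reproduces the classical theory for v/c → 0, can be checked numerically: the relativistic kinetic energy (γ − 1)mc² approaches the Newtonian ½mv² as the speed drops. A minimal sketch; the mass and speeds are arbitrary illustrative values:

```python
# Compare relativistic and Newtonian kinetic energies at decreasing speeds.
# The ratio tends to 1 as v/c -> 0, illustrating the correspondence limit.

c = 2.998e8  # speed of light, m/s

def gamma(v):
    """Lorentz factor 1 / sqrt(1 - (v/c)^2)."""
    return 1.0 / (1.0 - (v / c) ** 2) ** 0.5

def ke_relativistic(m, v):
    """Relativistic kinetic energy (gamma - 1) m c^2."""
    return (gamma(v) - 1.0) * m * c**2

def ke_newtonian(m, v):
    """Newtonian kinetic energy m v^2 / 2."""
    return 0.5 * m * v**2

m = 1.0  # kg (illustrative)
for v in (0.5 * c, 0.1 * c, 0.001 * c):
    ratio = ke_relativistic(m, v) / ke_newtonian(m, v)
    print(v / c, ratio)  # the ratio approaches 1 as v/c shrinks
```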
To summarise, the idea of disproving v ≤ c as a thing in itself (in Kantian language, a Ding an sich), in disconnection from a grand theory (i.e. grander than what we already have), is by no means appealing to me. I would personally spend no time on it. Life is too short.
Aleksei Bykov> Particle accelerators do accelerate particles above 0.707c, but they were never designed to use a time-varying field which reverses at a speed greater than 0.707c with a smooth zero crossing. The theory behind the experiment is directly based on the fact that if an observer is moving faster than light it can catch double photons from the same event. It is logical: if an observer cannot catch double photons (photons ahead of it and on its path), then it is not moving faster than light.
This fact was never considered when the theory of relativity was made. That is why it is incomplete. Hill & Cox ( http://rspa.royalsocietypublishing.org/content/468/2148/4174.abstract ) reached similar results from mathematics, but they are incomplete. The experiment is simply based on their theoretical results. In fact, if the experiment does not work out, then it can be pretty conclusive that no particle can move faster than light in the flat spacetime of special relativity.
Behnam Farid> The acceleration of particles above light speed using Hill & Cox ( http://rspa.royalsocietypublishing.org/content/468/2148/4174.abstract ) and this paper: https://www.researchgate.net/publication/265643274_Extended_principle_of_relativity_beyond_speed_of_light_and_a_method_to_push_particles_beyond_the_speed_of_light has a definite advantage if successful. As you can see, the particle reaches a negative energy state, which means it will generate the complete mc^2 energy. If a cycle can be made, pushing and pulling and extracting energy at the right point, then one can make a huge amount of energy, discarding the negative energy generated by the reverse process. Secondly, an electron moving faster than light, when accelerated, also generates negative-energy photons (due to the faster-than-light Doppler effect). These results could be game changers if correct.
But science has also predicted things, and that has had very useful outcomes in the modern world. Most notable is the design of semiconductor devices and things like tunneling diodes. Did tunneling diodes exist before they were designed and manufactured?
The question of using experiment to test a theory is not a philosophical but a practical one. If a complicated thing has to be created by design, the same phenomenon is not easy to observe in the natural universe.
Dear Shalender, but nowhere have I said that science has not produced useful things. My only contention is and has been that it is not part of the scientific tradition to go about disproving things without having a broad scientific objective. The fact is that, insofar as anyone knows, the constraint v ≤ c is not an obstacle to a better and more complete understanding of the natural world. In fact, it clarifies a whole raft of physical phenomena that otherwise would not have been clarified. Actually, the consequences of the constraint v ≤ c, leading to measurable and calculable relativistic corrections, are so pervasive in our daily lives that one of the last things one would suspect of being incorrect is this very constraint. Despite all this, I emphasize what I have emphasized earlier on this page: one should not close one's mind to the possibility of the violation of v ≤ c. However, any research in this direction must be firmly rooted in a broad scientific enquiry. Single-mindedly focusing on proving v ≤ c wrong feels very wrong-headed to me.
Behnam, Christian> I agree with you both. But in attempting to design such an experiment we can realize how limited the existing physics theory is. If SR says that going from v < c to v > c is impossible, it has to propose an experiment to verify it. Otherwise it remains open for someone to do it one day.
Dear Shalender, naturally I am not here to dictate how others should organise their research work. However, such a statement as "If SR says that going from v < c to v > c is impossible, it has to propose an experiment to verify it. Otherwise it remains open for someone to do it one day." implicitly assumes that SR is a theory about itself, with no links to the rest of physics. This is not the case. The theories of relativity (Special and General) have their roots and branches in everything discovered and invented over at least the past 150 years. The mere possibility of something is not sufficient reason to set out on a journey of investigation with no clear end, given the fact that there is not a single experiment or complete theory that would preserve the physical laws we hold as valid while doing away with v ≤ c. Yes, if you have a huge amount of experimental data at your disposal, by all means look through it to see whether there is any signature of the violation of v ≤ c. However, as I have said earlier on this page, singling out the inequality v ≤ c and going about disproving it, in the absence of any experimental and/or theoretical justification for doing so, is extremely unorthodox and in fact wrong-headed. To my best judgement, there is no ambiguity regarding scientific methods.
I think I have begun to repeat myself, so that this will be my last contribution to the discussion on this page.
Reading all the quite interesting posts for this question, I see that all my past readings on the topic have been updated and expanded, the entire thread providing a wealth of past and current theories, approaching comprehensive in quality, with a medium degree of quantity (math). It's rare for a question to have such a wonderful set of posts.
To provide 'different' content, not yet mentioned, to increase comprehensiveness of the thread, here are two more ways to exceed the speed the light.
This paragraph is very theoretical in nature: the Higgs field, and particle, gives 'mass', thus preventing a particle (with mass) from being accelerated to the speed of light. What if one could remove the Higgs interaction surrounding a particle, which could then, theoretically, be accelerated to light speed and beyond? Not sure if 'thrust' could come from ejected particles (they would have no mass), or be charge-based acceleration.
There is also the use of negative energy to warp space around the negative-energy generator: tilting space downward in front and upward behind, creating a natural lower ground state in front of the generator/spaceship, towards which the ship slides forward, continuously, allowing 'warp' speed.
Now my comments on the original question, perhaps with the angle of devil's advocate, several angles.
As for experimental proof that the speed of light cannot be exceeded, on Earth or anywhere else: science rarely, if ever, attempts such 'super broad' experiments, to prove negatives, or the entire range of negatives. For just reason.
The value of 'an' experimental proof is for 'the' theory being proved, with data. Always an admirable goal, as other scientists can independently confirm it with different experimental set-ups, or identical ones. Thus, future resources are more wisely allocated towards new goals.
Thus, the suggested experiment has little value towards its goal. Meaning, finding the resources to perform the experiment will be most difficult; I'm thinking self-funded. And when done, the resulting value to the scientific community will be quite small, if anything, unless novel experimental set-ups provide new insights for future set-ups to prove other things. That is, providing experimental proof that the speed of light cannot be exceeded, once done, provides no greater insight into the theory.
To be blunt, this entire thread argues against the above paragraph's premise. There would be value in such experimental proof, as other foolish experiments attempting to exceed light speed would not be funded, saving resources for more worthy experiments to prove other theories.
Personally, I am against the 'success' of such an experiment, as it dashes all hope to leave this solar system, well, perhaps the galaxy. I'd rather keep the dream alive. Not a very scientific position, but recognizing the "human condition."
Peter Benjamin> Great response. I also think we can break the speed of light. Look at the paper: https://www.researchgate.net/publication/265643274_Extended_principle_of_relativity_beyond_speed_of_light_and_a_method_to_push_particles_beyond_the_speed_of_light
We cannot break it unless we try to break it, and there has to be a special experiment to do it. We cannot wait to discover it accidentally.
Do we know how faster-than-light particles would behave when observed using existing scientific instruments? Can we really design experiments to understand the physics of faster-than-light particles without ever having observed one?
Dear colleagues!
Before speaking about how to exceed the speed of light, please think about the notions of speed (time and length) and about what (a particle?) will exceed it. Actually, the notion of a particle is poorly defined. Maybe in QFT (or some future theory) there will be no particles at all. SR is a local and macroscopic theory. Globally, and at very small scales, all the above-mentioned notions are poorly defined. Behnam is right: the problem of exceeding the speed of light does not exist on its own. The question may be about SR as a whole, but for now that is not an actual physical problem. SR is self-consistent within its range of applicability.
Regards,
Eugene.
Violations of Lorentz invariance are very hard to find. Special relativity is an inseparable part of quantum field theory, which describes the world of elementary particles with an almost incredible precision. Particle physics has tested special relativity in thousands of different experiments without finding a flaw: Lorentz invariance is locally exact.
As has been pointed out several times in RG, special relativity really contains only one parameter, c, the velocity of light in vacuo, which has the dimension of length/time. One is free to choose c=1 locally because any other choice only implies a rescaling of the units of length.
Variable speed of light (VSL) would be in conflict with Lorentz invariance and with Einstein's time dilation formula. Although there have been many attempts to test VSL by measuring the transverse second-order Doppler shift by Mössbauer spectroscopy, the results are claimed to be either wrong or doubtful, and in need of improved technology. The best value is from a measurement of the radius of Mercury:
(dc/dt) / c = 0 ± 2 × 10^{-12} per year.
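To put that bound in perspective, here is a quick back-of-the-envelope check (my own illustration, not from the thread; the 4.6-billion-year figure for the age of the solar system is an assumed round number) of how much c could have drifted even over geological time if the rate sat exactly at the quoted limit:

```python
# Integrate the quoted upper limit |dc/dt|/c <= 2e-12 per year over
# the approximate age of the solar system (an assumed round figure).
rate_limit = 2e-12   # fractional change of c per year (upper bound)
t_solar = 4.6e9      # years; assumed age of the solar system

max_fractional_drift = rate_limit * t_solar
print(f"max |delta c|/c over the solar system's lifetime: {max_fractional_drift:.4f}")
# Even at the limit, c could have changed by less than about 1% over ~4.6 Gyr.
```

In other words, the Mercury-radius bound constrains any cumulative variation of c to below the percent level over the lifetime of the solar system.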
On the theoretical side there are interesting generalizations of the linear Lorentz transformations to uniformly accelerated or rotating frames. Some generalized transformations predict acceleration-dependent Doppler shift and time dilation, as well as a maximal acceleration.
Hi Peter!
Three classical models that obey the principle of relativity for you to compare:
1 Special Relativity
... predicts a lightspeed limit in particle accelerators and an absolute lightspeed limit for how fast you can make a particle go, under any circumstances.
2 Newtonian Theory
... Newtonian optics predicts a lightspeed limit in particle accelerators, but allows you to go faster than background average lightspeed, provided that you make use of indirect-acceleration techniques.
3 Acoustic metrics
... relativistic acoustic metrics seem to describe a lightspeed limit in particle accelerators, but allow you to go faster than background average lightspeed, as long as your physical acceleration modifies the light-transmission properties of the metric so that you don't overtake your own light. So you can't "throw" a particle faster than the speed of light, but you can make use of indirect acceleration or indirect transmission to get it to go faster, provided that those efforts physically distort the metric.
We also have:
4 Quantum Mechanics.
The transhorizon behaviour under quantum mechanics (Hawking radiation) appears to be a statistical match for the results of classical nonlinear behaviour predicted by acoustic metrics ... so again, under quantum mechanics, you'd seem to still have a lightspeed barrier for direct acceleration, but the ability to get particles through the lightspeed barrier using indirectly-applied force, which could then be described as "tunnelling" through the barrier.
----
So we have three or four scenarios here (given that #3 and #4 might be physically indistinguishable). One classical description forbidding FTL, two allowing it, and a further non-classical model that also seems to allow it.
But all three (or four) models generate a lightspeed limit in particle accelerators, regardless of whether SR is assumed to be right or wrong, and regardless of whether FTL is assumed to be possible or impossible.
Eugene: "SR is selfconsistent in it's range of applicability."
If the "range of applicability" is defined by experience and external factors, and isn't predicted from within the theory itself, then every theory, no matter how bad, will be self-consistent in its range of applicability.
Even the theory that the Earth is flat and supported by four elephants, while fundamentally wrong on a scientific level, can still be useful to architects constructing smallish buildings where the curvature of the Earth can be ignored. The "scientifically bad" idea that the Earth is flat is actually very useful within a certain range. However, while the assumption of a flat Earth may well be extremely efficient for almost all small-scale engineering, we wouldn't consider it a credible C21st scientific theory, because in terms of scientific falsifiability it has already been shown to fail in certain key areas. It is known only to give a workable crude approximation of reality at small scales, and known not to be a real expression of the true underlying physics. We put up satellites and took pictures, and didn't see any elephants.
It's a useful quick-and-dirty description that lets us eliminate unnecessary variables and still do useful work, but "The Earth is Flat" is not a principle that gives us any deeper insight into how the real universe works. So it's possible for something to be lamentably wrong as a scientific theory and still be extremely useful in engineering. Conversely, the fact that a theory is really useful for engineering, and "good enough" for most practical purposes (so that an engineer might consider it "proved" in the field), doesn't mean it should be treated as a "good" scientific theory that is safe to use as a foundation for further theorising.
Eric,
the question is about alternatives. We need a new Copernicus.
Regards,
Eugene.
Dear Arno, I have earlier written to ResearchGate that the possibility of simply down-voting a comment must either be removed, or the identity of the person doing the down-voting be made public. In response, ResearchGate informed me that they would discuss my proposal internally. My argument for the suggestion is as follows: writing comments on the pages of RG carries a non-vanishing amount of risk to the reputation of the writer, as it can expose the writer's possible ignorance (both in general and regarding particular subject matters) to those who are better qualified than him or her.
In contrast, down-voting carries no risk for the anonymous down-voter (except perhaps at some level not visible to us, where the RG administrators may notice an unwholesome pattern in the behaviour of a user who does little more than systematically down-vote other people's comments); the down-voter, being anonymous, is essentially accountable to no one. In the extreme case, his or her mere dislike of someone's photograph could have induced him or her to down-vote that person's comment.
But aside from these issues, the essential question to be considered is: why, in a scientific forum such as this, would someone not dare to record his or her objection to a comment in a reasoned fashion and sign that reasoning with his or her actual name? At a certain level, both views are equally valid, and there is no reason why one of them should be signed (the one by the commenter) and the other (the one by the down-voter) not.
In closing, I propose that those who agree, or disagree, with my above views should undertake to make their views known to RG.
Eugene: Yes. Unfortunately, our C20th enthusiasm for SR and GR resulted in a set of procedures for assessing the validity of theories that automatically ruled out anything that did not reduce to the physics of special relativity as an =exact= solution. This meant that we ended up with almost no published research on how to compare special relativity against "sophisticated" non-SR models that still conformed to the principle of relativity.
People didn't do the work, because if they failed it would be a waste of time, and if they succeeded and somehow managed to produce a working theory, they still wouldn't be able to publish. If a theory totally agreed with SR then it was redundant, and if it made testable diverging predictions then it was classified as wrong without those predictions needing to be looked at. Admitting to being interested in non-SR theory was seen as the mark of a "crackpot" – you could have a respectable career in science, or you could work on this sort of material, but it was difficult to do both.
Matts: "Particle physics has tested special relativity in thousands of different experiments without finding a flaw"
Particle physics has tested special relativity in thousands of different experiments without (usually) =reporting= a flaw. This is not necessarily the same as saying that no flaws have been found.
For instance, in a relativistic acoustic metric there are effects that correspond to both quantum mechanics and to the results expected from non-SR classical theory. You then get a duality between classical and quantum behaviour for things like (say) Hawking radiation. Special relativity doesn't include these effects, but instead of saying that the theory has shortcomings for not including these effects, we instead classify these things as intrinsically "QM-specific" effects, and say that classical theories aren't //supposed// to predict them.
Wherever the theory fails to agree with QM, we say that these are situations where classical and quantum theories are //supposed// to diverge, and we say that when the SR result is then supplemented with the QM effects, they make an amazing match to the data.
Well, yes, they would. But it's not obvious that QM isn't acting here as a "universal band-aid" that's correcting a bad or oversimplified classical model by retrofitting all the effects that the bad theory is missing.
----
C20th SR testing also had a potentially catastrophic omission in that the major test theory assumed (wrongly) that transverse redshifts were unique to SR. It said that SR should properly be compared against a model that gave zero transverse redshift, so if you found a redshift that was comfortably "Lorentz" or higher, then you'd proved special relativity, as only SR could explain that outcome. Any redshifts //stronger// than those predicted by SR were supposed to be discarded or calibrated out as having no theoretical significance, and were automatically considered to be experimental error.
Unfortunately, if you look at REAL C19th theories, you find that most theories and models predicted the functional equivalents of transverse redshifts, with Newtonian theory (for example) predicting a Lorentz-squared redshift. Compared to C20th textbook "Classical Theory", Newtonian optics generates "SR" effects!
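To make the comparison concrete, here is a minimal numeric sketch (my own illustration; the model labels and the chosen velocity are assumptions, not from the thread) of the purely transverse frequency ratios implied by the predictions discussed above: SR's Lorentz factor sqrt(1 - v^2/c^2), the Newtonian-optics "Lorentz-squared" factor (1 - v^2/c^2), and the textbook "Classical Theory" prediction of no transverse shift at all:

```python
import math

def transverse_redshift(beta, model):
    """Frequency ratio f_observed / f_emitted for purely transverse observation.

    Models (labels are this sketch's own):
      'SR'        : Lorentz redshift, sqrt(1 - beta^2)
      'Newtonian' : Lorentz-squared redshift, 1 - beta^2
      'CT'        : textbook "Classical Theory", no transverse shift
    """
    lorentz = math.sqrt(1.0 - beta**2)
    return {"SR": lorentz, "Newtonian": lorentz**2, "CT": 1.0}[model]

beta = 0.6  # v = 0.6 c, chosen for round numbers
for model in ("CT", "SR", "Newtonian"):
    print(f"{model:9s}: f'/f = {transverse_redshift(beta, model):.3f}")
```

At v = 0.6c the three models give ratios of 1.000, 0.800 and 0.640 respectively, so the "Lorentz-to-Lorentz-squared" window discussed below is the 0.8-to-0.64 band, not the 1.0-to-0.8 band that the zero-to-Lorentz test theory examined.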
But we didn't include these possibilities in the testing procedures. Instead, we compared SR against an invented composite that we called "Classical Theory" – but "CT" only applied Newtonian theory to moving matter, not to light, so CT relationships were never internally consistent to start with.
AFAIK, there was no actual historical theory that corresponded to CT; we used it because the CT starting-point made for a nice artificial narrative for the development of SR, and because CT generated a conveniently bad set of predictions to test against. By comparison, CT makes SR look brilliant ... but by comparison, CT also makes Newtonian theory, and probably a bunch of other C19th theories, look brilliant too.
If you want to fake "amazingness" in a theory's results, you find a way to use a deliberately-rotten set of predictions as your comparison, and it seems that that's essentially what they did.
----
Worse, when we now look at acoustic metrics (which are finally coming into fashion because they seem to eliminate at least some of the disagreements between classical and quantum models), we find that the relativistic solutions that seem to be workable lie in the range "Lorentz-to-Lorentz-Squared" ("L->L^2"), with the candidate solution that appears to be required to produce a classical relativistic description of Hawking radiation being "L^2", or, "redder than SR by an additional Lorentz factor".
So, with hindsight it was critical for us to test the range "L->L^2", and also good scientific practice to do this because of the earlier Newtonian predictions, and because of all the other archaic sets of predictions in this range.
Instead we chose to specifically test the range "zero-to-Lorentz", and told experimenters that if they found anything in the range "Lorentz-to-Lorentz-squared", they were entitled to treat it as experimental error and discard it, or tweak their equipment until the shifts came back into the proper range, since any redder outcomes wouldn't be explainable in the context of the test theory being used.
So we're in the humiliating situation where if someone asks us whether Lorentz (SR), Lorentz-squared, or some other redder-than-SR-thing makes the best match to the experimental evidence, we simply don't know. We know that at least some of the experimenters did find redshift overshoots at least some of the time, but for their results to be publishable, they had to delete, discard, eliminate or explain those overshoots away. We don't know how common these redshift overshoots were or what their magnitudes tended to be before correction.
We know that SR is //magnificently// better than theories with zero transverse redshift, or those whose Doppler relationships are significantly "bluer" than SR in some other way, but we don't obviously have any reliable data to evaluate how SR compares to theories that are "redder" than SR, and that's a really bad situation.
Eric,
Evidently you gave up writing papers in 2001, and nothing you wrote up to then was published in a physics journal, nor cited by anyone other than yourself. For an active researcher these criteria are enough to leave your papers unread. This is not a judgement; there is simply no time for it.
Eric,
Newtonian mechanics, quantum mechanics and relativity cannot be proved or disproved, either by logic or by experiment. They are not theories; they are ways of thinking. I wonder whether a unique optimal way exists.
Regards,
Eugene.