Einstein's General Theory of Relativity seems to have “crashed” as a scientific theory in about 1960, and to have been "rebooted" some time in the early 1960s as "modern GR", with a set of definitions and rules that differ from those laid out by Einstein.
I'd like to know who originally made those "redesign" decisions, how the community consensus was reached, and where the changes (and their justifications) are documented.
Background: Einstein had based his theory on the General Principle of Relativity: the idea that all motion was relative, and that even “absolute” motions such as rotations and accelerations could successfully be “relativised” if bodies showing those relative motions could be associated with suitable gravitational/distortional effects. This was an idea previously proposed by Ernst Mach, and Einstein described his general theory as being the theoretical embodiment of Mach's principle.
For derivational convenience, Einstein also initially assumed that the theory should reduce to the physics of special relativity over small regions.
However, the publication of the Harwell group's 1960 paper on centrifuge redshifts (Phys. Rev. Lett. 4, 165 (1960)) apparently triggered a controversy within the community, and an appreciation that a literal application of the GPoR seemed to lead to results that were geometrically incompatible with special relativity. The consequence of treating the GPoR as a “law” then seemed to be not only the loss of Einstein's 1905 "Special" theory, but also the loss of the 1916 "General" theory that had been partly built upon it (Schild, Am. J. Phys. 28, 778 (1960)).
We were facing the unpalatable prospect of a major rewrite of theoretical physics, and although a rederivation of GR to avoid its dependency on SR had already been suggested by Einstein back in 1950 (SciAm 182, 4, 13-17), we found it easier to modify the rules of general relativity to allow the GPoR to be suspended in cases where it seemed to clash with other parts of the 1916 theory. In effect, we accepted that the original “SR+GPoR” structure was logically inconsistent, but maintained order by redefining SR's position in GR's definitional hierarchy to one in which GR could not disagree with SR “by definition”, and establishing a "failure etiquette" ("If the GPoR conflicts with SR, keep SR and suspend the GPoR").
This change seems to have happened with minimal recorded public comment or discussion. Although Schild's paper mentions discussions and "a certain lack of unanimity" in the community as to how to proceed (before he presents the "modern GR" position as unavoidable), he doesn't indicate who participated in those discussions.
I'd like to know who was on the committee, who voted for or against the change, and whether any of those concerned published anything on the nature of the 1960 crisis and the chosen response. Does anyone here remember it or have direct personal experience of what happened? Is there any historical record of the episode other than the rather skimpy Schild paper? Did anyone else publish the arguments for modifying Einstein's theory, or the contemporary arguments why GR1916 couldn't continue to be used in its pre-1960 form?
Any references to additional contemporary material would be very, very welcome.
http://dx.doi.org/10.1103/PhysRevLett.4.165
http://dx.doi.org/10.1119/1.1936000
Akira ~
Taking your comments paragraph by paragraph:
(1) As you say, two bodies with different masses approach each other with different accelerations. That is a consequence of Newton’s gravitational theory and his second and third laws. At any instant there is a relative speed. Nothing is contradicted.
(2) Frictional force is dependent on the relative speed of an object and a surface. Nothing is violated.
(3) A ball with momentum p strikes a wall and bounces off it with momentum –p. Because momentum is conserved, a momentum 2p is imparted to the wall. That may be hard for you to imagine because you think of “walls” as totally rigid and inflexible. They are not.
(4) “Did relativity theory looked after all of these contradictions which cripples classical physics. It appears to me that the only issue Einstein addressed was the problem of the Michelson-Morley experiment.” No. He addressed the fact that Maxwell’s theory required the velocity of electromagnetic radiation to be a universal constant. That violates the classical idea of combining velocities by simple addition. A modification of classical physics was necessary to address that issue. I don’t know what you mean by “the contradictions that cripple classical physics”.
(5) “Let me focus on General Relativity.” The meaning of this paragraph eludes me. If you are talking about gravitational forces, you need to be alerted to the fact that in General Relativity there is no such thing as a "gravitational force”.
(6) Quantum mechanics. Planck’s law does not imply that Maxwell’s theory is wrong. All that Planck’s law reveals is that when matter extracts energy from radiation it does so not smoothly and continuously, but in discrete chunks. When it absorbs a chunk of energy E, this energy is extracted from that portion of the radiation that has frequency f = E/h. As you rightly said “f is a variable which ranges over a continuum” – radiation that is not monochromatic carries a range of frequencies.
(7) I agree with you – “QM is a relativistic theory”. But I do not see any connection between the QM concept of “tunnelling” and the speed of light. I admit ignorance on that one.
I don't know about Schild's work, but many have thought that GR1916 is incomplete, as it lacks an energy tensor: Einstein's pseudo-tensor violates locality. Attempts to resolve this can be traced from Hilbert to Fock, Weinberg, Logunov and Grishchuk. The Equivalence Principle shows its inadequacies best in collapsing neutron stars, for which the Einstein equation has a solution with matter forming a shell enclosing a hypergravitational field, whereas the EP gets lost in topological arguments about a surface of separation and singularities. For references see journalofcosmology.com/MarshallWallis.pdf 2010
Hi Max! I don't have an opinion on Schild's own personal research, but I'm very interested in his contemporary account of an episode in which a significant-but-unnamed section of the GR community back in 1960 supposedly agreed that the general principle of relativity was fundamentally irreconcilable with special relativity, and that GR1916, presented by Einstein as including both SR and the GPoR, was therefore logically falsified since it depended on including both mutually-incompatible components! :) :) :)
I also find it intriguing that if Schild's account is true, one of the biggest theories of the C20th failed (and was quietly accepted as having failed, and modified) way back in the 1960s without journalists or newspapers (or, apparently, most physicists!) ever finding out about it.
I've probably read a reasonable amount about GR, and some of the histories have seemed exceptionally good, but not one of them has mentioned "Oh, and by the way, the theory was dismissed as logically unsound back in around 1960".
They'll tell you that the original equivalence principle is now considered "old-fashioned" or "naive", that the principle of relativity applied to noninertial motion is now considered "more of a guideline than a rule" (like the Pirate's Code in "Pirates of the Caribbean"), or that Mach's principle is now considered "controversial", but they won't tell you why. Schild's story seems to provide a reason why all these changes may have happened.
So ... can anyone else out there corroborate Schild's account of what was going on in the community in early 1960 or late 1959? Did anyone other than Schild document this stuff?
I have no access to the two papers in the links you gave (I may look them up next time I visit my institute’s library) so the following remarks are based only on the abstract of Schild’s 1960 paper.
The two aspects of General Relativity relevant to the issues raised are (1) spacetime is a (pseudo)-Riemannian manifold, and (2) the worldlines of light rays and test particles are geodesics. When the spacetime is flat (ie, Minkowskian) we have Special Relativity.
These purely mathematical concepts are self-consistent. They are unambiguous parts of the formulation of what the self-consistent and mutually consistent theories (SR and GR) are. No “controversy” in the early 1960s can change that. The logical conclusion is that the controversy faded into history and got forgotten because it arose from misunderstandings.
Many authors regard the “equivalence principle” as a foundation stone of GR. It is not. It is simply a step in Einstein’s thinking, along the path that eventually led him to the final theory of GR (one of his "thought experiments"). It has very limited validity: the observations of a constantly accelerated observer in SR and a “stationary” observer in a uniform gravitational field are identical. The spacetime of a “uniform gravitational field” is flat – the thought experiment does not go beyond the special theory! This primitive version of the equivalence principle can be applied in a curved spacetime only as an approximation in “sufficiently small regions” – a consequence of the geometrical nature of a Riemannian or pseudo-Riemannian manifold.
“…whether experiments on accelerated systems (e.g., red-shifts produced in rotating disks) can serve to verify the general theory of relativity. The answer is ‘no’.”
Of course the answer is ‘no’! The rotating disc problem is a problem in SR, not GR. The simplest approach to the problem is to employ an appropriate curvilinear coordinate system in the flat spacetime. The predicted effects (such as red shift) come from the Christoffel symbol terms. Einstein’s intuitive leap was the recognition that in a curved spacetime, gravitational phenomena should also be implicit in the Christoffel symbol terms. It then follows that it makes no sense in GR to make a distinction between “inertial” and “gravitational” effects. That is the extended version of the “equivalence principle”.
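To put a number on this: here is a minimal sketch (my own illustration, not from Schild's paper; the rim speed is a hypothetical value of the order used in the centrifuge experiments) comparing the exact SR time-dilation shift for a rim emitter with the v²/2c² “pseudo-potential” approximation:

    import math

    c = 2.998e8  # speed of light, m/s

    def fractional_shift(v):
        # Exact SR fractional frequency shift for a rim emitter seen from the axis.
        return math.sqrt(1.0 - (v / c) ** 2) - 1.0

    v = 500.0  # hypothetical rim speed, m/s
    print(f"exact shift      : {fractional_shift(v):.3e}")  # ~ -1.39e-12
    print(f"-v^2/2c^2 approx : {-v**2 / (2 * c**2):.3e}")   # ~ -1.39e-12

The two agree to many decimal places at laboratory speeds, which is why the centrifuge redshift can be read either as SR time dilation or as the effect of an equivalent “pseudo-gravitational” potential.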
“…the equivalence principle… does not lead to specific values for the bending of light rays by a star or for the perihelion rotation of a planetary orbit.”
It is well-known that Einstein had already computed the bending of light in a uniform gravitational field several years before he arrived at his gravitational theory (Ann. d. Phys. 35, 1911), employing SR and concepts from Newtonian gravitational theory. A Schwarzschild field is not a “uniform” gravitational field, so it’s hardly surprising that it gives a different result – it takes into account higher order (“tidal”) effects.
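For concreteness, a short sketch (my own, using the standard textbook formulas rather than anything from Schild): the 1911 calculation gives a grazing-ray deflection of 2GM/(c²R), while the full Schwarzschild treatment gives 4GM/(c²R):

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_sun = 1.989e30   # solar mass, kg
    R_sun = 6.957e8    # solar radius, m

    def deflection(prefactor):
        # Deflection angle (radians) for a light ray grazing the Sun.
        return prefactor * G * M_sun / (c ** 2 * R_sun)

    to_arcsec = 180.0 / math.pi * 3600.0
    print(f"1911 (uniform field) : {deflection(2) * to_arcsec:.2f} arcsec")  # ~0.87
    print(f"1915 (Schwarzschild) : {deflection(4) * to_arcsec:.2f} arcsec")  # ~1.75

The missing factor of two is the higher-order contribution referred to above.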
As for the perihelion rotation of a planetary orbit, I cannot imagine how one could conceive of trying to get that from the equivalence principle as formulated in special relativity! Gravitational phenomena are absent in SR, by definition.
Akira ~
Addressing the second of your recent posts:
“In our universe, gravitation makes stars move. Stars create gravitational field and it is dynamic as the spatial distribution of stars change in time. When you express this gravitational field using energy-stress tensor, what you get is a dead universe which does not change and the only thing which move in it is test particles and photons. Do you think this is a legitimate representation of the dynamic universe?”
The difficulty here is that General Relativity is so formidably complicated that we have to use drastic simplifying assumptions to get anywhere at all. One thing we can do is to consider a single spherically-symmetric mass and find out how light rays and “test particles” behave in its vicinity. General relativity passes all the experimental tests in that context. We presume – we conjecture – that, in nature, the dynamics of a collection of massive bodies follows the principles of GR. Unfortunately, we do not even have a GR solution for the two-body problem. But you are wrong when you say “When you express this gravitational field using energy-stress tensor, what you get is a dead universe which does not change and the only thing which move in it is test particles and photons.” The general idea is that the gravitational field (curvature) responds to the energy-momentum of “matter” and the “matter” responds to the curvature. Matter (energy-momentum) and curvature are dynamically interacting with each other. Where do you get the idea that it’s “a dead universe which does not change”?? We talk of “test particles” and “photons” (I prefer to say “light rays”…) only because the full complexity of what actually happens in nature is far beyond our computational capabilities.
“Einstein ended up with an inconsistent theory called Special Relativity Theory”
All I can think of to say to that (and to the remainder of your comments - tl;dr ) is: “Please learn some physics before posting again on ResearchGate.”
Dear Eric Baird,
As I see it, there are many opinions here from non-physicists, and you are all happily discussing basic concepts of physics. Perhaps it would be worth reflecting a little before I send you experimental, as well as theoretical, information on this issue.
Although the words "relativity" and "relational" share a common root, they refer to two different physical concepts. The principle of relativity says that for any material particle in any state of motion there exists a system of space-time coordinates in which the particle is instantaneously at rest and inertia is homogeneous and isotropic. Thus the natural (inertial) decomposition of spacetime intervals into temporal and spatial components can be defined only relative to some particular frame of reference. On the other hand, the principle of relationism tells us that the absolute intervals between material objects characterize their extrinsic positional status, without reference to any underlying system of reference. From a purely relational point of view, the concept of absolute inertia, on which the principle of relativity is based, has no meaning at all. Therefore, relativity and relationism are basically incompatible principles. Admittedly, during the years when Einstein was developing general relativity (it took even his brilliant mind more than eleven years) – and even for several years thereafter – he tended to conflate the two, since he hoped that the theory would vindicate Mach’s idea of a relational basis for inertia, but it soon became clear that general relativity is not a relational theory, at least not according to the traditional meaning of that term.
The old debate between proponents of relational and absolute motion (such as Leibniz and Clarke, respectively) loses much of its interest if continuous fields are accepted as extended physical objects permeating all of space, because this implies there are no unoccupied locations. In this context every point in the whole spacetime manifold is a vertex of actual relations between physical entities, obscuring the distinction between absolute and relational premises. Moreover, in the context of the general theory of relativity, spacetime itself constitutes a field, and this field is a dynamical element in the theory, i.e., it is an extended physical entity which not only acts upon material objects but is also acted upon by them. This renders the distinction between absolute and relational concepts even more obscure. In this context the absolute-relational question remains both relevant and unresolved – though perhaps only of historical value.
But this concept of absolute spacetime presents an ontological puzzle, because we can empirically verify the physical equivalence of all inertial states of motion, which suggests that position and velocity have no absolute physical significance, and yet changes in velocity (accelerations) do appear to have absolute significance, independent of the relations between material bodies (at least locally). If the evident relativity of position and velocity leads us to discard the idea of absolute space, how are we to understand the apparent absoluteness of acceleration? Some have argued that in order for the change in something to be ontologically real, it is necessary for the thing itself to be real, but of course that's not the case. It's perfectly possible for "the thing itself" to be an artificial conception, whereas the "change" is the ontological entity.
“…But this concept of absolute spacetime presents an ontological puzzle, because we can empirically verify the physical equivalence of all inertial states of motion….If the evident relativity of position and velocity leads us to discard the idea of absolute space..”
It seems worth commenting on the text above.
(1) To “empirically verify the physical equivalence of all inertial states of motion”, it is first necessary to define how – according to which physical theory – this empirical verification should be made. If it is based on relativity theory, then indeed such an experiment will show no difference in the case of, say, a pair of relatively moving observers – in accordance with the theoretical SR postulate of the total equivalence of all reference frames. But this postulate (and so the empirical verification in this case also) leads to an evidently illogical corollary.
(2) So in reality there is no “evident relativity of position and velocity”; the measurement of the absolute speed (relative to the absolute space) is rather simple – see “To measure the absolute speed is possible?” http://viXra.org/abs/1311.0190
Cheers
Is gravity a Newtonian force or Einstein's space-time curvature?
Akira ~
"... the em wave has a constant speed in vacuum only when there is no conducting current. To create em waves, one need electric current. So, the folklore claim that the speed of light in vacuum is constant c is false."
Yes, of course the speed of light in a material (where there are currents) is less than c. That's why a material has a "refractive index". When physicists say that the speed of light is a universal constant, they are referring to its speed in a vacuum. In a vacuum there are no currents.
I repeat: learn some physics before posting again on ResearchGate.
Dear Sergey,
It seems that you have a theory that does not need the Lorentz transformations, which belong to the Lorentz group SO(1,3) – or the Galilean SO(3) with time as an absolute variable, t = t'. Fantastic! I had doubted whether these basic concepts needed revisiting before properly answering the question about "rewriting General Relativity", but I see that I wasn't wrong: it was necessary. Confusion about the basic concepts of relativity is still alive.
Dear Daniel,
“It seems that you have a theory that does not need the Lorentz transformations….” – ???
(1) In the link http://viXra.org/abs/1311.0190 given in my previous post, Eqs. (1) and (2) are the usual Lorentz transformations (they are reduced to the standard form by using elementary algebra);
(2) The principal problems of relativity theory arise from a number of sources: first of all, from the non-legitimate absolutisation of the relativity principle into the logically nonsensical postulate of the total equivalence of all inertial reference frames; further, from the equally non-legitimate postulate that real 4D spacetime is pseudo-Euclidean (pseudo-Riemannian in the GR), with rather strange implications – that the real spacetime metric is partly imaginary; that “spacetime geometry” is some active essence that governs the kinematics and dynamics of material bodies; that time can be “dilated/accelerated” and space “contracted/stretched”; that in the GR “time travel” and “spacetime warping”, up to “wormholes”, are possible; that “there is no gravitational field/force” – and so photons don’t change their energy when moving between points with different gravitational potentials (which is wrong – see http://vixra.org/abs/1409.0031 ), etc.;
(3) All the “strange” implications above are, in fact, based on the uncertainty, within the relativity theories, of the notions that are basic in this case – what is space? what is time? how can they be acted upon so as to be transformed?
They are also based on a blind belief that if some mathematical model is adequate to reality in certain cases, then it is true always – including the belief that if the model predicts something as yet unknown, then that thing must indeed exist in reality. But mathematics in principle cannot predict anything beyond what is already in the postulates of a theory; it is nothing more than an instrument which helps to clarify concrete physical situations. Otherwise one ends up mathematically “predicting and studying the properties” of rather strange objects that cannot exist in the given concrete Matter.
Whereas all the kinematics and dynamics of material bodies (including “massless particles”) can be correctly depicted on the basis of a few quite natural postulates (which also follow quite naturally from the “Information as Absolute” conception, which, in turn, is undoubtedly true): (i) every material body moves in the 4D “Cartesian” spacetime with the speed of light only; (ii) the classical definition of the 3D momentum P=mV is always true, so every body always has a momentum P=mc; and (iii) every particle is always oriented in the 4D spacetime.
Further, it’s enough to apply the Pythagorean theorem twice.
Though for that it is necessary to understand what the time is (including that there are two times: one (“coordinate time”) is the 1D coordinate in the 4D spacetime; the true time isn’t a coordinate) and what the 3D space is;
further, to understand that both the time and the space are absolute and by no means can be transformed; to remember that the Lorentz transformations are fully valid only for rigid bodies, including rulers in a reference frame; and that the coordinate systems of any reference frames can be 4D-translated but only 3D-rotated, since in reality there exists only one t-axis. That seems enough to understand where, say, the SR/GR can be applied and where they are wrong; to measure the absolute speed, the possible randomness of the gravity force, etc…
Cheers
Dear Sergey,
I am very sorry to say that you are quite wrong in the way you use the physical concepts you mention. Let me tell you that:
1. Your equations 1, 2 and 3 are not Lorentz transformations at all. They use the gamma of special relativity in a quite doubtful form, without distinguishing what kind of mass is associated.
2. You never use General Relativity, although you use its name, and you don't even use Special Relativity when you assign a potential energy to the photon.
And your postulates are meaningless to me:
"When all kinematics and dynamics of material bodies (including “massless particles”) can be correctly depicted basing on a few quite natural (and which also quite naturally follw from the “Information as Absolute” conception, which, in turn, is undoubtedly true) postulates: (i) – every material body moves in the 4D “Cartesian” spacetime with the speed of light only; (ii) – the classical definition of the 3D momentum P=mV is true always, so every body always has a momentum P=mc; and (iii) every particle is always oriented in the 4D spacetime."
Dear Daniel,
“Your equations 1, 2 and 3 are not Lorentz transformations at all…”, etc.
-?
It seems you did not read the SS posts attentively enough and, at that, didn’t attempt to understand what is written. Again – Eqs. (1) and (2) are the Lorentz transformations, which can easily be reduced to the standard form; but in the form given in the paper http://viXra.org/abs/1311.0190 they seem to show most clearly how, in Fig. 1, the main parameters of a rod that is always moving in the 4D “Cartesian” spacetime are connected – as the second application of the Pythagorean theorem.
Whereas the “gamma” (which isn’t “of special relativity”; it is the “Lorentz factor”) appears at the first application of the Pythagorean theorem, because (see above) every particle (including “massless photons”), and so every material body, always moves in the 4D spacetime – because of the energy conservation law – with the speed of light only.
At that, there are two main types of particles. Some are created by purely spatial impacts and so after creation move in the space only – for example photons. Others are created by impacts whose 4D momentum is directed along the t-axis, and so, if they are at rest in the space, move along the t-axis only, with the speed of light, having momentum P0=m0c. If such a particle is impacted with a spatial momentum, it starts to move in the space also, so having 4D momentum P=mc and its spatial projection P=mV. At that – since the t-axis is orthogonal to the x-axis – P0=mVt = m0c and Vt
Dear Akira,
Perhaps I am not understanding you because I couldn't follow your reasoning properly, but let me remind you of some basic concepts which make the mathematical picture less simple than you (as a mathematician) might expect.
1. The Lorentz group is a continuous pseudo-orthogonal group over the real numbers R. The invariant metric is the Minkowskian one, with three types of 4-vectors: lightlike, timelike and spacelike. There are no allowed transformations between timelike and spacelike physical magnitudes, in contrast to what happens with the orthogonal group.
2- Thus it is topologically non-trivial, as it is non-compact.
3- There are four components: orthochronous proper, non-orthochronous proper, orthochronous improper and non-orthochronous improper. The Lie algebra with the infinitesimal generators is defined only on the orthochronous proper component (the restricted Lorentz group), where there are no problems defining all the diffeomorphisms that you want.
4- Take care, because mathematicians usually do not know physics, as you said, but physicists know mathematics and have even created some of it. Remember Newton and the differential calculus, Dirac and distributions, etc.
5. I recommend you look at my contributions here to see my book on electrodynamics, formulated with differential forms and as a relativistic gauge theory.
Akira ~
In the spacetime of Special Relativity we can choose the coordinate system so that measures of length and time are given by (c dt)² – dx² – dy² – dz². Lorentz transformations are simply transformations to a different coordinate system in which that expression remains unchanged. They form a group.
This is nothing more than a simple extension of Euclidean geometry: in Euclidean geometry we can choose “Cartesian coordinates”. The measure of length in any direction is then given by dx² + dy² + dz² (cf. Pythagoras…). Rotations do not change that expression. They form a group.
In introductions to Special Relativity the Lorentz transformations are given as
z’ = γ(z + vt), ct’ = γ(vz/c + ct), where γ = 1/√(1 – v²/c²). That’s just the one-parameter subgroup (the parameter is v) of the full Lorentz group, when x and y are kept fixed.
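If it helps, the invariance can be checked numerically in a few lines (my own sketch; the event coordinates are arbitrary numbers):

    import math

    def boost(v, c=1.0):
        # 4x4 boost along z, rows/columns ordered (ct, x, y, z),
        # matching z' = γ(z + vt), ct' = γ(vz/c + ct).
        g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
        return [[g, 0, 0, g * v / c],
                [0, 1, 0, 0],
                [0, 0, 1, 0],
                [g * v / c, 0, 0, g]]

    def interval(e):
        # The quadratic form (ct)^2 - x^2 - y^2 - z^2.
        ct, x, y, z = e
        return ct ** 2 - x ** 2 - y ** 2 - z ** 2

    def apply(L, e):
        return [sum(L[i][j] * e[j] for j in range(4)) for i in range(4)]

    event = [2.0, 0.3, -1.1, 0.7]
    print(interval(event))                     # 2.21
    print(interval(apply(boost(0.6), event)))  # 2.21 again, up to rounding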
I agree entirely with Akira when he says “Maxwell's theory thus has nothing to do with the so called photons”.
Black body radiation is comprised of electromagnetic waves with a continuous spectrum of frequencies. So called “photons” are discrete units of energy associated, not with the radiation, but with the interaction between radiation and matter (essentially, they are a peculiar feature of the way electrons exchange energy with the (Maxwellian) electromagnetic field). I find that I cannot agree with Einstein’s proposal that an electromagnetic wave in free space is a stream of particles. An electromagnetic wave in free space is not interacting with anything and is not observed. Quantum mechanics deals with observations. An observation is an interaction.
The crucial experimental observation is that, if an electron in a metal requires an energy E to dislodge it, it cannot be dislodged by an electromagnetic wave, however intense, if the radiation contains no frequencies greater than E/h. This experimental fact seems to justify Planck’s analysis of black body radiation and provides a little insight into why it gives the correct frequency distribution.
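A worked number may make this concrete (my own illustration; the 2.3 eV work function is a hypothetical, sodium-like value):

    h = 6.626e-34   # Planck constant, J*s
    eV = 1.602e-19  # joules per electron-volt
    c = 2.998e8     # speed of light, m/s

    E = 2.3 * eV     # hypothetical work function of the metal
    f_min = E / h    # minimum frequency able to dislodge an electron
    print(f"threshold frequency : {f_min:.2e} Hz")            # ~5.6e14 Hz
    print(f"threshold wavelength: {c / f_min * 1e9:.0f} nm")  # ~540 nm

Radiation made up entirely of lower frequencies ejects no electrons, however intense it is.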
These ideas led eventually to the development of quantum electrodynamics. When physicists use Feynman diagrams to do calculations in QED they habitually think of “photons” as if they were particles. In fact they are simply units of energy and momentum involved in a fundamental interaction. Whether they are really particles is a metaphysical question. What do we actually mean by the word “particle” anyway? Experiments don't observe particles – they measure quantities associated with interactions. We then construct theories to account for those measurements – a process that (inevitably) involves the projection of concepts arising from our experience of the world at the human scale onto the subatomic world.
to @Eric Lord
Dear Eric,
you wrote an interesting sentence: "In fact they are simply units of energy and momentum involved in a fundamental interaction."
Does it mean that there exists "energy" without its carrier? I wonder whether photons are real particles; yet if we assume the material form of the field, then we should also assume the existence of energy carriers... What do you think?
Regards!
Dear Eric Lord,
There are many effects such as the Compton effect, coherence and bosonic stimulation effects in the laser or maser, the Zeeman effect, fluorescence, phosphorescence, the photoelectric effect and so on. For a photon to exist, an electric charge needs to be accelerated! What is the difference between a photon and an electron?
Dear Akira,
Perhaps you could start looking at:
http://en.wikipedia.org/wiki/Representation_theory_of_the_Lorentz_group
Obviously this subject is not one to be settled in a single message, or by offering casual opinions on it.
Dear Sergey,
In your paper it seems that you define the p-zero very originally, and after that you draw a Fig. 1 for finding the Lorentz transformations. Fantastic!
You write:
From the Fig. 1 immediately follow the main equations of the special relativity theory (as well as of the Lorentz theory though). Lorentz transformations:
I think that everything is clear between us and this is my last message to you. Bye! Have a good New Year 2015!
Dear Akira,
Some basic information, too, on how the distributions branch of mathematics started:
http://en.wikipedia.org/wiki/Dirac_delta_function
Dear Daniel,
Well, it seems your post deserves a little comment, though.
There is nothing fantastic in the definition of p-zero in the informational model – it is only the t-component of the 4-momentum in the real 4D “Cartesian” spacetime of the given Matter. The 4-momentum P=mc is a quite natural extrapolation of the classical 3D momentum P=mV, taking into account the postulate that all material objects move in the 4D spacetime with identical 4-speeds, equal to c.
Moreover, in the SR all material objects also move in the 4D spacetime with identical 4-speeds equal to c (sometimes in textbooks c=1 here), and in the SR the 4-momentum is defined also. The natural difference: since in the SR it is postulated that the real spacetime is a 4D Minkowski continuum, the absolute value of the momentum is P=E/c=mc and is directed along the t-axis, whereas in the informational model the 4-momentum is quite naturally directed along the direction of the body’s 4-motion.
But if a body is at rest in the space – and so moves along the t-axis only – in both cases the momenta are equal: PSR=PIM=m0c.
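The arithmetic here can at least be checked directly – a minimal numerical sketch of the postulate (4-speed always c, so Vt² + V² = c²), in units where c = 1; it only restates the postulate, it does not test it:

    import math

    c = 1.0  # units with c = 1

    def vt(v):
        # Temporal speed component if every body's 4-speed equals c (the postulate).
        return math.sqrt(c ** 2 - v ** 2)

    v = 0.6
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    print(vt(v))      # 0.8
    print(c / gamma)  # 0.8 -- i.e. Vt = c/gamma, the usual time-dilation factor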
Have a good New Year 2015!
Cheers
I fully agree with Eric Lord that GR was not rewritten in the '60s as is suggested in Eric Baird's question. What is true is that in those years GR received a great advance on issues such as black holes, the microwave background, etc. Let me try to put some names to one of these subjects.
The term "black hole" was introduced by Wheeler in 1967 although the theoretical study of these object has quite a long history. The very possibility of the existence of such objects was first discussed by Michell and Laplace within the framework of the Newtonian theory at the end of the 18th century [see Barrow and Silk (1983), Israel (1987), Novikov (1990)]. In fact, in general relativity, the problem arose within a year after the theory had been developed, i.e., after Schwarzschild (1916) obtained the first exact (spherically symmetric) solution of Einstein's equations in vacuum. In addition to a singularity at the center of symmetry (at r =0), this solution had an additional singularity on the gravitational-radius surface (at r = rg ). More than a third of a century elapsed before a profound understanding of the structure of spacetime in strong gravitational fields was achieved as a result of analyses of the "unexpected" features of the Schwarzschild solution by Flamm (1916), Weyl (1917), Eddington(1924), Lemaitre (1933), Einstein and Rosen (1935), and the complete solution of the formulated problem was obtained [Synge (1950), Finkelstein (1958), Fronsdal (1959), Kruskal (1960), Szekeres (1960), Novikov (1963, 1964a)]. The length of this interval may have been influenced by the general belief that nature could not admit a body whose size would be comparable to its gravitational radius; this viewpoint was shared by the creator of general relativity himself [see e.g., Israel (1987) and references therein]. Some interest in the properties of very compact gravitational systems was stimulated in the thirties after Chandrallekhar's (1931) work on white dwarfs and the works of Landau (1932), Baade and Zwicky (1934), and Oppenheimer and Volkoff (1939) who showed that neutron stars are possible, with a radius only a few times that of the gravitational radius. The gravitational collapse of a massive star which produces a black hole was first described by Oppenheimer and Snyder(1939).
Daniel ~
“What is the difference between a photon and an electron?”
They are both ways of thinking about the exchanges of energy, momentum and angular momentum that take place when the Maxwell field and the Dirac field interact.
A field can manifest as a “particle” only when it interacts. This is as true for electrons as it is for photons (de Broglie 1924).
I wouldn’t say that GR had been reformulated, but there was definitely a different attitude going into the 50s and 60s. The central change was from Einstein’s “Schwarzschild singularities do not exist in physical reality” to mixing time and spatial components for the removal of singularities. Because of diffeomorphism invariance, the mixing of components does not change this fact (i.e. that singularities cannot form relative to outside observers), although some like to believe it does. Einstein had also made it clear that the stress-energy tensor is only provisional and meant to be replaced with a microscopic (per-particle) source; this has obviously been lost in today’s mainstream.
“We have seen, indeed, that in a more complete analysis the energy tensor can be regarded only as a provisional means of representing matter. In reality, matter consists of electrically charged particles, and is to be regarded itself as a part, in fact, the principal part, of the electromagnetic field. It is only the circumstance that we have not sufficient knowledge of the electromagnetic field of concentrated charges that compels us, provisionally, to leave undetermined in presenting the theory, the true form of this tensor.” – Albert Einstein (1921)
But then again recent observations have shown a jet of particles being emitted from 1/4th the predicted event horizon distance of a “supermassive black hole”, so it is likely that things will be going back to Einstein’s original objections (and my independently arrived at ones).
“Since the event horizon of the supermassive black hole in IC 310 is known to be about three times as large as the Sun-Earth distance, finding variable gamma-ray emission at only one quarter of this distance was a complete surprise to us.”
http://www.irb.hr/eng/Izdvojeno/A-Lightning-Inferno-at-the-Event-Horizon
There is also considerable evidence that the cosmic background radiation isn't what most believe it to be: http://adsabs.harvard.edu/abs/2014AstRv...9c...4P
Dear Eric Lord,
Photons are particles (neutral bosons) which mediate the interaction between electrons or other electric charges. Obviously Maxwell's electrodynamics cannot take any of these particles into account. The only things that exist in such an electrodynamics related to particles are the charge density and the current density.
But notice that no QED (Quantum Electrodynamics) is necessary for defining the photon; in fact Einstein introduced this concept without employing QED at all.
Another aspect of the '60s that was fundamental for the development of GR was the discovery of the Cosmic Microwave Background (CMB) radiation, which provided evidence for the Big Bang. Bell Labs radio astronomers Arno Penzias and Robert Wilson were using a large horn antenna in 1964 and 1965 to map signals from the Milky Way when they serendipitously discovered the CMB. As written in the citation, "This unexpected discovery, offering strong evidence that the universe began with the Big Bang, ushered in experimental cosmology." Penzias and Wilson shared the Nobel Prize in Physics in 1978 in honor of their findings.
The CMB is "noise" leftover from the creation of the Universe. The microwave radiation is only 3 degrees above Absolute Zero or -270 degrees C, and is uniformly perceptible from all directions. Its presence demonstrates that that our universe began in an extremely hot and violent explosion, called the Big Bang, 13.7 billion years ago.
In 1960, Bell Labs built a 20-foot horn-shaped antenna in Holmdel, NJ to be used with an early satellite system called Echo. The intention was to collect and amplify radio signals to send them across long distances, but within a few years, another satellite was launched and Echo became obsolete.
With the antenna no longer tied to commercial applications, it was now free for research. Penzias and Wilson jumped at the chance to use it to analyze radio signals from the spaces between galaxies. But when they began to employ it, they encountered a persistent "noise" of microwaves that came from every direction. If they were to conduct experiments with the antenna, they would have to find a way to remove the static.
Penzias and Wilson tested everything they could think of to rule out the source of the radiation racket. They knew it wasn’t radiation from the Milky Way or extraterrestrial radio sources. They pointed the antenna towards New York City to rule out "urban interference", and did analysis to dismiss possible military testing from their list.
Daniel,
Since the big bang theory cannot explain many aspects of the cosmic background radiation, how is it strong evidence for it (over-reliance on confirmation)? What we have is very weak evidence based upon the pure existence of something that is contradicted by many recent anomalies such as hemispherical power asymmetry, alignments and SZ cluster counts.
In fact, a plethora of observations are strictly against a universe undergoing accelerated metric expansion. Instead, the best-fitting model is that of a cosmological-scale gravitational potential in which local geodesics deflect towards its center, producing the illusion of accelerated expansion; i.e. distant objects are falling back into it (I won’t go into the details of the 3.5-sigma significance). Recent SNIa observations have also confirmed another prediction of this theory due to such a global gravitational potential (http://phys.org/news/2011-09-evidence-spacetime-cosmological-principle.html). Although this cannot rule out all interpretations of the CMB as being due to a big bang, other observations clearly dismiss redshift as being due to metric expansion and thus cause serious problems for big bang cosmology.
http://adsabs.harvard.edu/abs/2014AstRv...9c...4P
Dear Michael
You are discussing a very different issue: the acceleration (1990). I have just tried to answer Eric Baird's question about the advances in gravitation in the '60s. If you want to go into dark energy and so on, that is another story.
Daniel,
I wouldn’t say that it was completely off topic with respect to the history of GR. Recent additions to general relativity arise from purely cosmological considerations, i.e. dark energy. Although it is a replacement for the cosmological constant, it has clearly “rewritten” GR in the cosmological sense. I felt that simply referring to the 60s-70s while avoiding a discussion of the plethora of problems in LCDM and big bang cosmologies thereafter (in these particular subjects) was incomplete.
Thus the point of my post was that the CMB has not been proven to be due to a big bang, and that the development or rewriting of GR is far from complete. Although I do agree that many other proposals for the CMB are absurd (such as it being due to water on Earth), the following is not a factual statement but a belief (hence my short discussion of the CMB and its various observables with respect to cosmological dynamics/structure).
“The CMB is ‘noise’ left over from the creation of the Universe”
Eric Lord: "I have no access to the two papers in the links you gave (I may look them up next time I visit my institute’s library) so the following remarks are based only the abstract of Schild’s 1960 paper. ..."
Hi, EL!
I'm only interested in the first half of Schild's paper, in which he talks about the Harwell "centrifugal field" arguments and “some confusion on this point and even some lack of unanimity among theoretical physicists” as to how to react to the logical problems thrown up by the Harwell group paper, and then reasons that if the GPoR conflicts with SR, the only course of action is to keep SR and downgrade the GPoR (rather than the other way around) … and gets that position ratified by peer review. That seems to me to be a possibly historically significant moment. It was a decision-fork that doesn't seem (to me) to have been adequately studied or analysed.
I'm hoping that the people involved in the discussions at the time might have committed something to paper, and that this wasn't a case of the community trying to sneak something through with minimal discussion, because I'd really like to read those additional arguments.
Eric Lord: "The two aspects of General Relativity relevant to the issues raised are (1) spacetime is a (pseudo)-Riemannian manifold, and (2) the worldlines of light rays and test particles are geodesics. When the spacetime is flat (ie, Minkowskian) we have Special Relativity. "
The key phrase here is "when spacetime is flat ..."
This is where we meet a decision-fork as to how we believe a general theory of relativity ought to be constructed, because although a reduction to flat-spacetime geometry is arguably compulsory in a classical theory, a reduction to flat-spacetime physics is not. It's a design decision. I realise that that statement sounds like nonsense at first sight, but bear with me, there's a subtlety here that some mathematical physicists missed ...
It's possible to have geometrical limiting cases (in a mathematical derivation of physical theory) which appear mathematically valid, but which have no obvious physical meaning (“null” solutions), because the limiting case also represents a limit at which, depending on the type of theory, there may (by definition) no longer be any meaningful physics left for the mathematics to describe.
For the sake of argument we can consider a hypothetical “Cliffordian” universe (after W.K. Clifford, 1845-1879) in which "all physics is curvature". In such a universe, any problem involving two nearby particles with non-zero mass and non-zero relative motion will be expressible in terms of the properties of an associated curvature of the region in which both particles are embedded – the curvature contains all the information about the particles and how they are moving, and the interaction of the particles can be calculated from the interaction of their associated imprints in the background field. Assuming that the particles' physics obeys the principle of relativity, we get a relativistic acoustic metric.
A "RAM" is a counterexample to the idea that curved-spacetime physics always has to reduce to the physics of a flat-spacetime theory. Within a Cliffordian unverse, a mathematician could argue (as we currently do) that since classical curvature has to reduce to flat spacetime over small regions, that since relativistic behaviour in flat spacetime has to obey SR, the relativistic physics of that universe has, logially, to reduce to SR physics. And this might appear to be mathematically indisputable.
But they'd be wrong. Because in the “Cliffordian” scenario described, you only get effectively-flat spacetime when you zoom in so far that there's no significant physics present in a region – we can zoom in on a small effectively-flat region between our two particles, and say that SR technically applies to all observations of matter in that flattish region … but that region would only be effectively flat because (by definition) it would contain no matter to observe, and no physical observers to do the observing … it's null physics. If we try to introduce particles into our flattish region, it's no longer flat, and to get back to flatness we have to zoom in further, until all signs of interesting behaviour disappear and we find another, smaller region that's empty and effectively physics-free. So we get SR as a nominal mathematical limit, but the actual physics of the particles obeys a different set of geometrical rules. The inevitable mathematical reduction to SR geometry does not (in this case) mean that the scenario's relativistic physics reduces to SR physics.
In our "Cliffordian universe" scenario, a mathematical physicist could still argue that special relativity was unavoidable, and that therefore, it was impossible for a general theory of relativity to be incompatible with special relativity. But they'd be wrong. In a Cliffordian universe, a general theory of relativity that correctly describes that universe's physics would have to be incompatible with special relativity, because the underlying geometry that described inertial physics would be an acoustic metric rather than a Minkowski metric, and the two descriptions are geometrically different.
The “real” physics would still be closely related to special relativity because of symmetry issues, and SR could still be used as a flat-spacetime "quick-and-dirty" approximation, but there would be some relationships and behaviours that would diverge from those of an "SR-based" universe.
Interestingly, one of the most pointed differences between Cliffordian and non-Cliffordian universes is that in a Cliffordian universe, the properties of an acoustic metric give rise to gravitational horizons that fluctuate and radiate much like cosmological horizons, whereas in a universe that reduces to SR physics, that behaviour seems to be absent (giving the perfect non-radiating horizons of current textbook GR). So not only does a general theory of relativity not have to reduce to SR, since acoustic metrics generate the statistical equivalent of Hawking radiation across a curvature horizon, a general theory that doesn't reduce to SR doesn't seem to have to be in conflict with quantum mechanics, either.
"These purely mathematical concepts are self-consistent. They are unambiguous parts of the formulation of what the self-consistent and mutually consistent theories (SR and GR) are."
But if we lived in a Cliffordian universe and didn't realise it (and didn't realise that a reduction to SR physics was not appropriate in our situation), then we'd be making exactly that same argument.
We'd be saying, "Our general theory HAS to reduce to special relativity out of pure mathematical necessity, so there cannot be the slightest incompatibility between SR and general relativity ".
We'd then be able to go on and argue that since the appearance of SR was unavoidable, any principle or concept that appeared to clash with SR had to be suspended, not because of an inconsistency in the model, but because the consistent application of logic based on our knowledge that SR had to be right demanded it. Any subsequent fudge, bodge, special-case behaviour or logical compartmentalisation required to protect SR would then be seen by us not as signs of our patching up a really bad theory, but as a wonderful range of interesting and exciting new behaviours derived deterministically from our model, which demonstrated how many cool new things the theory was capable of predicting.
"No “controversy” in the early 1960s can change that. The logical conclusion is that the controversy faded into history and got forgotten because it arose from misunderstandings."
That may well be a logical conclusion if one knows for certain that one is not living in a Cliffordian universe. But to test which sort of universe we really live in, we have to apply "sanity-checks" to our belief systems. We have to be able to compare the behaviour of an acoustic-metric-based general theory against that of an SR-based general theory, so that we can check how many of the things that we currently believe support SR+GR are actually SR-specific, and how many also appear in an acoustic metric-based system. Without that outside reference to use as a comparison, we don't have an obvious way to evaluate how good or how bad our current system really is.
It might be wonderful. It might be absolutely awful. But unless we also know at least something about the characteristics of a general theory that doesn't reduce to SR, we don't have enough information to formulate a scientific opinion, and we don't know that the choice we made in 1960 was the correct one.
http://en.wikipedia.org/wiki/William_Kingdon_Clifford
Daniel ~
“Photons are particles (neutral bosons) which allow the interaction between electrons or other electric charge. Obviously Maxwell's electrodynamics cannot takes into account any of these particles. The only thing to exit in such electrodynamics related with particles is the density of charge or density of current.”
I do not see any contradiction between what I said and your response to it. The key word is interaction. A photon is a relevant concept in the context of matter interacting with radiation. The idea that it is an individual entity that can travel unchanged through empty space (where there is no interaction) seems to me to be in conflict with phenomena such as diffraction and the Doppler effect. The ancient “wave-particle duality” controversy seems to be resolved by accepting the idea that an elementary constituent of nature behaves as a “wave” when it’s not interacting, and behaves in its interactions with other constituents “as if” it consists of particles.
My interpretation is perhaps unusual but it does not seem to me to be in conflict with known physics. If I’m mistaken about that I would like someone to explain.
I mentioned QED because it is clearly a theory of interactions. A Feynman diagram is a graphic representation of an elementary interaction. The concept of a photon as a “particle” is acceptable in that context.
Eric Baird ~
Thank you for a detailed and thoughtful response.
GR is formulated in a spacetime with curvature. Mathematically, the case of zero curvature is included. There can be no contradiction there!
SR is formulated in flat spacetime. It follows that it is applicable in physical situations where effects due to curvature (ie, gravitational effects) can safely be regarded as negligible compared to other phenomena. SR has abundantly established its correctness in those situations.
In a Cliffordian universe, that argument would break down – there would be no precise distinction between “curvature effects” and “other phenomena”. The resounding success of SR therefore seems to suggest that the universe is not Cliffordian. (For example, the behaviour of an atom is analysed on the assumption that it exists in a flat spacetime; the resulting calculations based on that assumption prove to be correct to extremely high accuracy.)
I think that the conclusion is too hasty. Hidden in the argument is the assumption that spacetime is four-dimensional. The first attempt to go beyond that was the Kaluza-Klein theory (1919) in which GR is formulated in five dimensions. Gravitation and electromagnetism then both arise as curvature effects. The theory describes a simplified Cliffordian universe wherein the usual Einstein-Maxwell theory holds in a four-dimensional subspace. Further developments of that idea are multidimensional theories (often expressed in “fibre bundle” formalism) in which all gauge fields (intermediate bosons) are curvatures in a higher dimensional spacetime. The idea behind these theories is that the excess dimensions are somehow “compactified” and unobservable. SR still applies as a valid approximation in the four-dimensional subspace. It would appear that the idea that we live in a Cliffordian universe is not inconceivable.
Akira ~
“What is the definition of Lorentz group? A simple question…”
I already gave you the simplest possible answer yesterday when I said:
In the spacetime of Special Relativity we can choose the coordinate system so that measures of length and time are given by (c dt)² – dx² – dy² – dz². Lorentz transformations are simply transformations to a different coordinate system in which that expression remains unchanged. They form a group.
This is nothing more than a simple extension of Euclidean geometry: in Euclidean geometry we can choose “Cartesian coordinates”. The measure of length in any direction is then given by dx² + dy² + dz² (cf. Pythagoras…). Rotations do not change that expression. They form a group.
In introductions to Special Relativity the Lorentz transformations are given as
z’ = γ(z + vt), ct’ = γ(vz/c + ct), where γ = 1/√(1 – v²/c²). That’s just the one-parameter subgroup (the parameter is v) of the full Lorentz group, when x and y are kept fixed.
(Why ask questions on RG when the answers are readily available in textbooks?)
Dear Michael,
As far as I know, all the cosmological models still use the CMB as fundamental data. See
http://arxiv.org/pdf/0911.1955v2.pdf
Dear Eric Lord,
Contradiction is a very strong word that I don't want to use in our discussion. But let me tell you some things, to be clear:
1. Photons are particles which can exist in a vacuum, like electrons or other particles, and no interaction is assumed for their existence.
The photoelectric effect shows this whenever a door opens as you cross the beam between two photoelectric cells. It is not necessary to invoke QED (although you can do it), because that is a field theory mainly created for the renormalization of the charge and mass and for the polarization of the vacuum, in which photons are taken as the bosons of the electromagnetic interaction, as the gluons are for the quarks in QCD.
2. I understand your argument using Doppler effect because this effect is usually used with wave fronts, but this is not at all a prove that the photons cannot exist in vacuum. I attach you two papers one related with Doppler and the another with the Syncrotron that is every day using those concepts.
Dear Akira,
Let me show you what the Lorentz group is, although I think it is enough to say that it is SO(3,1), i.e. the special orthogonal continuous group which keeps the Minkowskian pseudometric invariant. But if you would like to see it more explicitly, I will try to do so.
Consider the set of linear transformations of Rⁿ preserving the scalar product of two vectors defined by the regular symmetric bilinear form g:
g(Ax, Ay) = g(x, y) for all x, y belonging to Rⁿ
This is a subgroup of GL(n,R), associated with g and characterized by the fundamental relation
Aᵀ g A = g
(I think that so far you have no problems)
The diagonal form of g is determined by the signature s of the vector space Rⁿ. To each possibility corresponds a pseudo-orthogonal group, denoted
O(n−s, s)
The pseudo-orthogonal transformations can be divided into two classes
det A = +1, det A = −1
The set of these two classes forms a group isomorphic to the cyclic group of order two, Z₂. Thus we have the isomorphism
O(n−s, s)/SO(n−s, s) ≅ Z₂
In the case of the Lorentz group we have n = 4 and s = 1, giving SO(3,1).
Is this enough for your definition?
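If you would like to check the fundamental relation numerically, here is a small sketch (my own illustrative code, not taken from any reference), using a boost along one axis:

import numpy as np

# Minkowski pseudometric, signature (-,+,+,+); any consistent convention works.
g = np.diag([-1.0, 1.0, 1.0, 1.0])

def boost_x(v, c=1.0):
    # a standard boost mixing t and x; one element of SO(3,1)
    gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
    A = np.eye(4)
    A[0, 0] = A[1, 1] = gamma
    A[0, 1] = A[1, 0] = gamma * v / c
    return A

A = boost_x(0.6)
print(np.allclose(A.T @ g @ A, g))        # True: Aᵀ g A = g
print(np.isclose(np.linalg.det(A), 1.0))  # True: det A = +1, so A is in SO(3,1)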
Akira ~
Why are you doing this? I answered your question “what is the Lorentz group?” in very elementary and simple physicists language, because, as you said, “it’s a simple question”. Daniel answered in mathematicians’ language (which, contrary to your belief, most competent physicists are very familiar with). Yet still you behave as if you haven’t taken in anything that’s being said to you.
Since you regard yourself as a mathematician rather than a physicist, you of course know about continuous one-to-one mappings of Rⁿ onto itself (automorphisms). The spacetime of Special Relativity isn’t simply R⁴, it is R⁴ together with a metric – a symmetric bilinear mapping M: R⁴ × R⁴ → R defined by M(x, y) = x₁y₁ + x₂y₂ + x₃y₃ − x₄y₄. We can consider those automorphisms P: R⁴ → R⁴ under which the interval defined by this metric is invariant: M(P(x) − P(y), P(x) − P(y)) = M(x − y, x − y). They constitute a group of automorphisms – the Poincaré group. Each element acts on the underlying R⁴ as P(x) = Lx + a. The 4×4 matrices L constitute a subgroup – the Lorentz group SO(3, 1).
That’s the pure mathematician’s definition. Its relevance to physicists comes from identifying x₁, x₂, x₃ as Cartesian coordinates in Euclidean space, and identifying x₄ as a time coordinate. That is an “inertial frame”. A Lorentz transformation then corresponds to a mapping of one inertial frame to another that, in general, consists of a spatial rotation and a relative velocity between the two frames.
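A toy numerical illustration of that invariance (my own throwaway code, with an arbitrary boost L and translation a, nothing more):

import numpy as np

g = np.diag([1.0, 1.0, 1.0, -1.0])  # matches M(x, y) = x₁y₁ + x₂y₂ + x₃y₃ − x₄y₄

def M(x, y):
    return x @ g @ y

gamma, beta = 1.25, 0.6              # γ = 1/√(1 − β²) for β = 0.6
L = np.eye(4)                        # a boost mixing x₁ and x₄
L[0, 0] = L[3, 3] = gamma
L[0, 3] = L[3, 0] = gamma * beta
a = np.array([1.0, -2.0, 0.5, 3.0])  # an arbitrary translation

def P(x):
    return L @ x + a                 # a Poincaré transformation

x = np.array([0.3, 1.0, -1.0, 2.0])
y = np.array([2.0, 0.0, 4.0, 1.0])
print(M(x - y, x - y))               # interval between two events
print(M(P(x) - P(y), P(x) - P(y)))   # same value: a cancels, L preserves M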
To deny the logical consistency of any of this is absurd. It amounts to a denial of the possibility of using coordinate systems in physics!
Dear Eric,
"The idea that it is an individual entity that can travel unchanged through empty space (where there is no interaction) seems to me to be in conflict with phenomena such as diffraction and the Doppler effect."
Light in itself is a collective phenomenon, and laser light is a collective coherent phenomenon; under such assumptions it can be better understood even in the diffraction case (Young's double slits).
That Maxwell's equations account well for these collective phenomena of photons is surely proven. The vector potential is the key to understanding both the collective classical emission and the coherent (LASER) phenomena, through the quantised vector potential.
The Doppler effect is easily derivable from the energy-momentum conservation of quanta. To obtain the classical Doppler effect, simple approximations have to be made; the two versions for material waves (moving source, 1/(1 − v/c), and moving receiver, (1 + v/c)) are valid if v ≪ c.
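For completeness, the derivation can be sketched like this (standard relativistic kinematics written out in my own notation, not quoted from the attached papers):

% photon of frequency ν, source and absorber approaching at speed v;
% energy-momentum conservation of the quanta gives the relativistic factor
\nu' = \gamma \left(1 + \frac{v}{c}\right) \nu
     = \sqrt{\frac{1 + v/c}{1 - v/c}}\;\nu ,
\qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} .
% To first order in v/c this reduces to ν' ≈ ν(1 + v/c) ≈ ν/(1 − v/c),
% recovering both classical versions at once.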
Dear Akira,
The Lorentz group has to keep invariant a symmetric tensor of second order, and therefore is generated by six elements whose composition law is given by the commutators of the tensorial algebra. Usually these generators are represented by six 4×4 matrices at every point of the differentiable manifold associated with space-time.
Perhaps you are also interested to know the aleph associated with every equivalence class related to a given inertial observer (local coordinate chart), but frankly I think that is nonsense for knowing the state of motion of a given observer.
Let me tell you that there are two Casimirs for this group of rotations and pseudo-rotations. If you also want to introduce translations, then you need to enlarge the group to the Poincaré group.
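Explicitly, in one standard presentation (my own choice of symbols: J for the three rotations, K for the three pseudo-rotations, i.e. boosts), the six generators satisfy

[J_i, J_j] = \epsilon_{ijk} J_k , \qquad
[J_i, K_j] = \epsilon_{ijk} K_k , \qquad
[K_i, K_j] = -\epsilon_{ijk} J_k ,

and the two Casimirs I mentioned are C_1 = \mathbf{J}^2 - \mathbf{K}^2 and C_2 = \mathbf{J} \cdot \mathbf{K}.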
You are welcome, Akira, and sorry if I don't write out all these elements for you.
Akira -
"...transformations must map from X to X itself. Lorentz transformations map from one inertial frame to the other."
Lorentz transformations map from X to X itself, where X is Minkowski spacetime. The image of each point x is Lx. That's the "active" interpretation of x → Lx. Physicists often think in terms of the "passive" interpretation - a coordinate transformation in which the point with coordinates x is assigned a new set of coordinates Lx. A conceptual difference that makes no difference in practice, but which may have given you the impression that physicists don't know what they are talking about!
"The simplest way to explain Lorentz group is to follow the basic definition of transformation group"
The basic definition of transformation group was implicit in what I said.
I think you know perfectly well what the Lorentz group is and are just amusing yourself by wasting our time.
I've had enough - I give up
Best wishes ~ Eric
Is there anyone that is going to answer the question: who was involved in the work? I do not think this is a question of what it means, but of who did this. Correct me if I am wrong? We could debate the work all day, but if we do not know why and who, then it is irrelevant.
Akira,
Thanks for your thoughts. I just think that we are putting the cart before the horse. Let's see what they were thinking, and then you can decide if you want to debate the problem.
We know without a shadow of a doubt that the theories we are talking about here (Quantum Mechanics, General Relativity) have some paradoxes, which means they are not the end theory; they are only the latest step in the quest for the Unified Theory. Do not get me wrong, they are the best we have, but if logic plays any role in the quest for truth then they are at best incomplete.
Dear George,
You are wrong to ask for an answer to this question, for several reasons, as is clear if you follow the different opinions.
1. The question is wrong, as Eric Lord said very early on. Nobody has rewritten General Relativity since it was created by Einstein in 1916.
2. In any case, there is a part of the question that I have tried to use to point out two important developments in the sixties:
a. The measurement and interpretation of the Cosmic Microwave Background (CMB) radiation as the main signal of the Big Bang model. The people involved in that discovery were:
Arno Penzias and Robert Wilson
who won the Nobel Prize in Physics (1978) for the discovery.
b. The other important advance in this field, black holes, was due to several people, but it was Wheeler who in 1967 coined the name while interpreting the singularities of the Schwarzschild metric. Soon Hawking and Penrose led this field of knowledge, and nowadays there are many observational candidates.
Happy 2015 to everybody!
Daniel,
It may be true that the CMB is used as fundamental observational data (I use B-mode, SZ-effect and power with respect to my own research). However, theories such as LCDM and inflation are tunable to the point where they have become unfalsifiable. Thus the CMB is a relatively poor test in such situations, because the possibility of false positives is substantial, i.e. it is nowhere near proven that the CMB is due to a big bang. Naturally, another plausible scenario is a cosmological-scale gravitational potential with a central source that emits black body radiation, i.e. some type of sink-source steady state universe. I’m still surprised that Einstein never considered a classical gravitational potential in terms of creating a steady state versus a cosmological constant, although his first attempts were for a static universe.
Daniel,
Arno Penzias and Robert Wilson were the ones who found radiation in the microwave band that was messing with their instruments, and then talked to the real researchers, Robert H. Dicke and P. J. E. Peebles, who had proposed this years before and were looking for a way to prove it. All this came from the work of George Gamow, Ralph Alpher, and Robert Herman, who were never given credit for proposing this work. The first two should not have been given the Nobel, as they were not even looking for this and published papers side by side with the others.
I think that if this were the embodiment of the big bang then it would show more variance in temperature. The fact that it is so consistent points to it being a local event that is causing this background.
I am familiar with the history and not convinced that the direction that science has taken on this being about the Big Bang is correct. I think the counter to that is that the big bang was very unlikely and just the best guess that fit our belief system at the time, so we still use it today.
Hi Akira!
If you have a range of other questions that you would like to discuss, about Karl Popper, or the Big Bang, or what might be wrong in theoretical physics in other areas, I think that the usual thing to do is to start your own question pages for those topics, or to start a general-purpose question page. That would make your questions easier for people to find who are interested in those subjects, and might make it easier for me to get an answer to my (rather more specific) queries.
I really was hoping that someone here might be able to help me, and the interjection of all these apparently-almost-unrelated topics may make that result less likely. Thanks
George: " Is there anyone that is going to answer the question, Who was involved in the work? I do not think this is a question of what it means but Who did this? Correct me if I am wrong? We could debate the work all day but if we do not know why and who then it is irreverent. "
Hi George!
Thanks for the interjection!
Yep, in the absence of a properly provided proof by Schild (IMO), I'd really like to know who else was involved, partly for context (were there any major names in favour or against?), and partly so that I have a better idea of where to search for further information. If other contemporaries of Schild who lived through the episode agreed or disagreed with his characterisation of what happened, then that'd be useful to know.
If people want to know why I want the information and why I consider it important, then I'm very happy to explain (and to learn more about other people's perspectives in the process), but yes, this was primarily a request for historical information.
Hi EricLord!
Thanks for your nice reply.
I agree that in a Cliffordian universe there's no precise distinction between “curvature effects” and “other phenomena”. The way that we traditionally approach "inertial" and "accelerated" motion is to start with simple "flat-looking" physics, derive principles based on that, and then derive additional principles to describe explicitly-curved behaviour. That was an entirely logical, pragmatic and IMO quite sensible approach. And it gave us SR and GR1916.
When I first tried a Cliffordian approach, I expected it to go completely to hell when applied to inertial problems. It seemed to me that there were two main arguments that made the approach unworkable:
Counterargument #1 was that the approach seemed to suggest that gravitational dragging effects didn’t just apply to rotational and accelerational motion (and to higher-order variations), but also to any simple relative velocity between bodies.
It seemed that the v-dragging effect produced a polarised gravitational-looking effect around a body whose terminal velocity was equal to the body's velocity. This fixed the problem of purely-local lightspeed constancy without involving special relativity, as it meant that light could be emitted at cEmitter, cross an effective terminal velocity of v and arrive at cReceiver, giving regulated local lightspeed constancy for all objects. I tried a few different propagation models, and the recession/approach shift due to motion always seemed to be the same as the "additional" effects due to curvature, so if our existing measurements weren't out by a factor of two, this meant that there was a duality principle in operation for velocity shifts and gravitational shifts –
– a photograph of a moving body, appearing Doppler-shifted and distorted by aberration effects, could be explained either by tracking signals over time (time-domain description) or by saying that this mysterious "v" parameter associated with the photographed object described a polarised gravitational field that produced the same effects (gravitational-domain description).
This all sounded great, but this "observerspace" approach (treating "apparent" behaviour as "real" for an observer whenever possible) meant that if a rock travelled at any speed wrt its background starfield, the redshifted stars to the rear would pull more strongly than the blueshifted ones ahead of the rock, by the same proportion as the observed Doppler shifts, and the rock would try to decelerate towards a standstill.
This obviously doesn’t happen, therefore the theory was wrong.
Counterargument #2 was a side-effect of the earlier "observerspace" arguments. If the gravitationally-observed positions of objects also coincided (at least to a first approximation) with their optically-observed positions, then angular aberration meant that a moving rock would see the background starfield to be concentrated ahead of it and diluted behind it, resulting in a forward free-fall acceleration towards the region of apparently densest mass, which would in turn further distort the starfield image and produce a further positive-feedback forward acceleration …
This didn’t happen in real life, either. The theory was doubly wrong.
However, both effects on the rock seemed to have the same magnitude but opposite signs, suggesting that they cancelled. From the rock's perspective, the fewer redshifted stars behind the rock each pulled more strongly, and the larger quantity of approaching blueshifted stars ahead each pulled more weakly.
In the resulting model, instead of there being a flat-spacetime physics with curved-spacetime effects layered on top, the underlying physics is curved, and what we see as flat spacetime is an "emergent" effect due to the special-case cancellation (which then fails if the relative motion is more complex).
Consequences: If we take this "balanced" situation and add one extra nearby moving star, that star's individual dragging effect isn't countered by anything, and if the star whizzes past our location, we should feel an overall tug in its direction of motion as it passes. This seems to agree with how momentum exchange appears under current physics (e.g. the slingshot effect). Another predicted result of assuming that any moving mass is associated with "gravitational"-looking dragging effects is that we'd expect the receding side of a rotating star to attract more strongly than its approaching edge, resulting in an apparent offset of the star's centre of gravity towards the receding side, giving a tendency of the rotating star to drag matter around with it. This is standard behaviour, too. So it's surprisingly difficult to find things in this model that don't coincide with known physical behaviour (one could also mention the apparent proximity-dragging effect of a cloud of particles on light passing through a region, as demonstrated by the Fizeau experiment). The difference is that we're only using a single layer of theory, not NM/SR, plus GR, plus QM.
Eric Lord: " The resounding success of SR therefore seems to suggest that the universe is not Cliffordian."
A quick PS: In the model that I was looking at, which seemed to me to be the main candidate for a relativistic acoustic metric, pretty much all of the "good stuff" of special relativity carries over. You have different shift equations (a transverse "Lorentz-squared" redshift rather than the single transverse redshift of SR), but you still get the same E=mc^2 result as SR, and because you still have "Lorentzlike" relationships, and the definition of velocity can change as you apply the two different descriptions to the same physical situation, most of the results are at least qualitatively similar, and some outcomes are precisely the same.
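To make the comparison concrete, here is a throwaway numerical sketch (my own toy code; the function names are just labels for the two candidate transverse factors, not anything from the literature):

import math

def sr_transverse(v, c=1.0):
    # SR transverse redshift factor: f_observed = f_emitted * sqrt(1 - v²/c²)
    return math.sqrt(1.0 - (v / c) ** 2)

def lorentz_squared(v, c=1.0):
    # the "Lorentz-squared" transverse redshift factor described above
    return 1.0 - (v / c) ** 2

for v in (0.1, 0.5, 0.9):
    print(v, sr_transverse(v), lorentz_squared(v))
# At low speeds the two factors differ only at second order in v/c,
# which is part of why the two descriptions are so hard to tell apart.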
You have the same particle accelerator lightspeed limit in both theories, muons stored in a circular particle accelerator ring age more slowly in both theories, and the distance travelled by an unstable particle before it decays (for a given agreed momentum) is exactly the same for both theories.
Differences appear with horizon behaviour, with indirect acceleration, and in the three-way ratio that results from comparing a moving particle's forward, rearward and rest frequencies. In theory, some SR tests should show an excess redshift wrt the SR predictions, but by an unfortunate coincidence the main C20th SR test theory had a "blind spot" when it came to excess redshifts - it assumed that since these didn't happen in either of the reference theories being compared ("SR" and "Classical Theory"), any excess redshift counted as experimental error and could legitimately be "made to go away" with calibration or compensation without affecting the legitimacy of the experiment.
So although we should probably be able to tell the two theories apart using 1970s hardware, unfortunately we don't seem to be able to tell them apart using published C20th data that was collected and processed using a major SR test theory.
Acoustic metrics (AFAIK) didn't seem to become a "subject" until the very late 1990s, so for most of the time during the Twentieth Century when we were testing SR and GR, the idea of there being another type of theory that was "relativistic" but that didn't reduce to special relativity probably didn't seem to be a logical possibility, let alone something that could be tested for.
Dear Stefano ~
Sorry I’m a bit late replying to your response to my remark “The idea that it is an individual entity that can travel unchanged through empty space (where there is no interaction) seems to me to be in conflict with phenomena such as diffraction and the Doppler effect.” (Dealing with RG discussions is getting too time-consuming…)
You refer to Young’s double slit experiment. Consider a single photon created by an interaction on one side of the barrier with the double slits – an emission of a photon. Light then passes through the two slits and another interaction takes place – absorption of a photon. To claim that they are the same photon leads to the absurdity that an indivisible “particle” has passed through both slits. What has passed through the slits is a probability wave, determining the probabilities that various absorption processes can take place. That wave is an electromagnetic wave, governed by Maxwell’s equations. What I call a “photon” is a property of interaction.
If we try to understand the Doppler effect in terms of a single photon travelling unobserved through a vacuum between emitter and receiver like a classical particle we again get a paradox: the photon arrives with an energy different from the energy it started out with.
Similarly, in Compton scattering, we consider a monochromatic electromagnetic wave scattered by an electron. We can legitimately use the photon concept to analyse this elementary interaction – a “photon” has been absorbed and re-emitted, changing the energy and momentum of the electron. This enables us to calculate the dependence on direction of the frequency of the scattered light. Observing the new frequency in a particular direction involves another elementary interaction. Again, the photon concept can be invoked. However, the idea that it is “the same photon” that has travelled in a straight line like a classical “particle”, uninteracting and unobserved, is not warranted. What has travelled is an electromagnetic wave, not a particle.
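(For reference, the standard outcome of that photon-kinematics calculation is the Compton shift

\lambda' - \lambda = \frac{h}{m_e c} \left(1 - \cos\theta\right) ,

where θ is the scattering angle and h/m_e c ≈ 2.43 × 10⁻¹² m is the electron's Compton wavelength.)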
Consider the elementary Feynman diagram that represents this. It has a “wiggly line” which it’s tempting to imagine represents the trajectory of a “particle” between two vertices representing spacetime points A and B where interactions take place. But according to QED that is just the first of a series of ever-increasingly complex diagrams, involving all points of spacetime, corresponding to virtual (i.e. unobserved) interactions. Taken together, they correspond to the unobserved and unobservable electromagnetic radiation between A and B. In this scenario the idea of a “particle” travelling between A and B with a definable trajectory is lost.
Quantum mechanics deals with observables. Observations are interactions. QM deals with the probability of a particular interaction at B, given the interaction at A. Conclusion: A “particle” is an artifact of interaction and “exists” only when it interacts. (According to de Broglie, and confirmed experimentally by Davisson and Germer, this applies to all particles, not just photons.) In the absence of interaction, we have “fields”, which, though amenable to mathematical treatment, are unobservable. The fundamental constituents of nature are thus neither “particles” nor “fields”, but “interactions” (whatever that means! To quote Wittgenstein: “Whereof one cannot speak, thereof one must be silent”.)
This is where my attempts to understand physics have led me.
Eric..
"If we try to understand the Doppler effect in terms of a single photon travelling through a vacuum between emitter and receiver like a classical particle we get a paradox: the photon arrives with an energy different from the energy it started out with."
I don't see the paradox. The energy you are talking about comes from the inertia of the atom.
Consider an isolated system: it has a center of mass which does not change. The kinetic energy of the moving absorber, measured in the reference frame of the center of mass, is communicated to the atom-photon ensemble, and the photon is absorbed at a higher frequency (approaching situation). Once the absorption takes place, the speed of the atom, in the laboratory reference frame or in the center-of-mass frame of the isolated system, slows down. The kinetic energy of the system becomes a greater mass; the photon is absorbed with higher energy.
"Taken together, they correspond to the unobserved and unobservable electromagnetic radiation between A and B. In this scenario the idea of a “particle” travelling between A and B with a definable trajectory is lost."
I don't need the concept of a particle travelling with a definable trajectory. I only know that an atom emits at a certain frequency, and if the absorber is approaching, it will detect a higher frequency and will slow down a bit due to the recoil of the quantum which is absorbed.
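(The size of that recoil is the standard free-atom recoil energy

E_R = \frac{E_\gamma^2}{2 M c^2} ,

for an atom of mass M absorbing a photon of energy E_γ; it is exactly this term that the Mössbauer effect suppresses, by making M effectively the mass of the whole crystal.)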
"The fundamental constituents of nature are thus neither “particles” nor “fields”, but “interactions” (whatever that means! To quote Wittgenstein: “Whereof one cannot speak, thereof one must be silent”.)"
Yes, the particle/wave concept has unhappy consequences.
A third way is necessary, and the simplest possible suggestion is:
a photon is just an entity absorbed and emitted, respecting the energy-momentum conservation laws. This could only fail if somebody proved that some energy is lost or gained forever during such a process, which is very hard to imagine for such a simple process.
Stefano ~
I said "This is where my attempts to understand physics have led me". Yours have led you somewhere else. That's fine so long as we are logically consistent - there's more than one way of understanding physics. I don't think we disagree in any fundamental way - your way of thinking feels a bit too "Newtonian" for my taste, that's all. At least we agree that "photon is just an entity absorbed and emitted, respecting the energy-momentum conservation laws."
Akira ~
(1) There is no statement A for which Newton’s gravitational theory together with his second and third laws imply A∧~A. As you know, MG/r² and mG/r² are accelerations, not speeds. MG/r² = dv₁/dt and mG/r² = dv₂/dt. The unique relative speed at any instant is |v₁ − v₂|.
(2) The law of friction is f = −μv where μ is a constant and v is the speed of the object relative to the surface. Newton’s second law then gives m dv/dt = −μv, which tells us how friction causes v to decrease: v = v₀ exp(−μt/m). (A numerical check is sketched at the end of this list.)
(3) “This one I already countered your argument”. No you didn’t. Are you really trying to claim that the law of conservation of momentum is invalid? Unbelievable!
(4) Maxwell’s equations contain a universal constant c (in the operator (1/c)∂/∂t), with the dimensions of velocity. That is in conflict with the Galilean relativity principle. That is why Einstein realised that a different relativity principle was needed. The Michelson-Morley experiment only confirmed experimentally what Maxwell’s theory indicated and was not central to Einstein’s concern. (It was Lorentz, not Einstein, who arrived at the Lorentz transformations by worrying over the Michelson-Morley result.) The "speed of light" is c only in a vacuum, where there are no currents and no charges. In a medium, there are currents and charges, the speed of light is not c, and is dependent on frequency. This dispersion effect in optics comes from the interaction of the Maxwell field with matter, where the radiation is absorbed and re-emitted by atoms. Currents and charges are the sources of Maxwell’s field. In a vacuum there are no currents and no charges and the appropriate Maxwell equations are the source-free equations describing the behaviour of electromagnetism in regions away from its sources. The velocity is c in those regions – your statement that “without current, there is no em waves according to Maxwell” is simply wrong.
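Here is the numerical check promised under (2) - a crude Euler integration (my own illustrative code, with arbitrary values for m, μ and v₀) compared against the closed-form solution:

import math

m, mu, v0, dt = 1.0, 0.5, 10.0, 1e-4  # arbitrary illustrative values

# integrate m dv/dt = -μv with a simple Euler step...
v, t = v0, 0.0
while t < 5.0:
    v += dt * (-mu * v / m)
    t += dt

# ...and compare with the closed form v = v₀ exp(-μt/m)
print(v, v0 * math.exp(-mu * t / m))  # the two values agree closely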
As I said before in another context, I think you know all this and are just amusing yourself by wasting our time. If that is not so, I repeat: learn some physics before posting again on ResearchGate.
Akira,
"A ball with momentum p strikes a wall and bounces off it with momentum –p. Because momentum is conserved, a momentum 2p is imparted to the wall. That may be hard for you to imagine because you think of “walls” as totally rigid and inflexible. They are not."
This is an approximation; it never happens in reality unless we consider effects at the Planck scale. No rigid or inflexible body made of atoms can be found in nature, unless we consider coherent ensembles of atoms at absolute zero, as in the Mössbauer effect. But there we approach a very particular situation.
Dear Eric Baird,
"Einstein also initially assumed that the theory should reduce to the physics of special relativity over small regions.However, the publication of the Harwell group's 1960 paper on centrifuge redshifts (Phys. Rev. Lett. 4, 165 (1960) ) apparently triggered a controversy within the community, and an appreciation that a literal application of the GPoR seemed to lead to results that were geometrically incompatible with special relativity – the consequence of the GPoR being treated as a “law” then seemed to be not only the loss of Einstein's 1905 "Special" theory, but also the loss of the 1916 "General" theory that had been partly built upon it (Schild, Am. J. Phys. 28, 778 (1960) )."
I don't think anybody has rewritten GRT since the 1960s, but doubts were certainly raised by the many experiments performed since: the (redshift) Harvard tower experiments, Vessot and Levine, spinning disks etc., as you said.
It is true that the results of the experiments performed have not been carefully analysed, and should have had consequences for the theory. Some evidence was distorted to fit the official interpretation of the theory in that period.
That it is necessary to bring order to GRT is clear, especially after 100 years.
Synge, in the 1960s, was the first to reject Einstein's assumptions about his interpretation of the equivalence principle.
Wheeler himself, in his last years of activity (the 1990s), rejected the equivalence principle as formulated by Einstein in 1907 as foundational to the theory, and supported the remarkable 1999 paper written by Lev Okun, attached.
The two interpretations of the gravitational redshift cannot both be true at once, as that article carefully and deliberately shows; only one can hold, and it is the one which respects the verified clock hypothesis.
The equivalence between an accelerated frame and a static gravitational field holds only over infinitesimal distances, if it holds at all.
Einstein's 1911 paper has to be considered conjectural, and I think this was Einstein's own initial purpose. For some predictions he was right, but his description led to the absurdity of defining a gravitational mass for photons, later excluded in GRT and QM.
The original EP, or NEP or WEP, is strictly respected, since it is experimentally verified, but it concerns the equivalence between forces exerted and masses, not fields/frames. The extension proposed by Einstein is considered by some authors a heuristic tool which works at first order in finite regions but doesn't account for the underlying nature of the phenomena.
I don't like Schild, since he gave a positive opinion of Schiff's derivation of time dilation in the 1960s. Schiff's work should have been rejected, as Rindler also argued in a 1969 paper. It involved a series of manipulations which many should have been aware of before allowing the Schiff conjecture.
It is clear that the Gravity Probe A experiment (the Vessot and Levine experiment) was not properly interpreted, and its NASA report contains issues.
Concepts like inertial reference frames and free-falling frames, defined in GRT as locally Galilean and as having the property of leaving the laws of physics unaltered, are not universally accepted, even though many take them for granted.
The validity of the Schwarzschild solution, by contrast, is undisputed within the approximations it involves, since its predictions were verified while respecting the energy-momentum conservation laws.
Article Gravitation, photons, clocks