I am looking for theories that give different observations, i.e. that are measurably different.
Dear Robert,
any metric theory satisfies the EP. Please have a look at my paper
http://arXiv.org/pdf/1108.6266.pdf
Bests
Salvatore
Just viewpoints:
The Einstein equivalence principle, as it is usually stated, is a combination of the weak and the strong equivalence principles.
The weak equivalence principle is satisfied by any gravitational theory that couples minimally and universally to matter. The specific form of the gravitational Lagrangian is irrelevant. (What this means is that it is how gravity couples to matter that's important, not what form the free gravity theory takes. So long as that coupling is identical to the way general relativity couples to matter, the weak equivalence principle holds just the same way.) Fields other than the metric are permissible, so long as they themselves do not couple to matter (so the motion of free test particles is determined entirely by geometry, i.e., by the geodesic structure of spacetime.) So theories like f(R) gravity, or the archetypal theory involving an additional (scalar) field, Jordan-Brans-Dicke theory, both satisfy the weak equivalence principle. Both these theories (or, in the case of f(R), families of theories, so long as f(R) is a homogeneous function of R) are consistent with Minkowski spacetime as the empty space solution. (By the way, an inhomogeneous f(R), i.e., a cosmological constant, means that empty space no longer corresponds to Minkowski spacetime.)
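To make "minimal and universal coupling" concrete, here is a schematic form of the actions involved (standard conventions assumed; normalizations vary between authors):
$S_{GR} = \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\,R + S_m[g_{\mu\nu},\psi]$
$S_{f(R)} = \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\,f(R) + S_m[g_{\mu\nu},\psi]$
$S_{JBD} = \frac{1}{16\pi}\int d^4x\,\sqrt{-g}\,\big[\phi R - \tfrac{\omega}{\phi}\,\nabla_\mu\phi\,\nabla^\mu\phi\big] + S_m[g_{\mu\nu},\psi]$
In all three cases the matter fields $\psi$ couple only to $g_{\mu\nu}$ and never directly to $\phi$ or to the form of $f$, which is why the weak equivalence principle holds in the same way as in general relativity.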
The strong equivalence principle places no restrictions on the gravitational theory itself; it basically amounts to restricting the theories of other fields to be generally covariant.
The observations given by JBD theory are quite different in the solar system, which is why JBD theory fails observational tests unless its dimensionless coupling constant is given an unrealistically large value. Many f(R) theories also run into this difficulty.
Brans-Dicke theory is an example. As far as I know, this theory is in agreement with observations so it is hard to distinguish between general relativity and Brans-Dicke theory.
Let us first agree that the equivalence principle is not a fundamental property of Nature. I mean we have strong observational indications for its validity, but still, they are observational indications. There is no fundamental principle of Nature that leads to the equivalence principle.
The equivalence principle is pretty much saying that the theory has to be covariant. In terms of an action principle, any action that you can write as a scalar under diffeomorphisms does the job.
Insisting on the Newtonian limit rules out negative powers of the curvature and allows only weak coupling to other fields (which would otherwise provide too much of a fifth force, like the Brans-Dicke scalar). So apart from other fields, all scalar expressions in the curvature tensor work.
If you are interested in a low-energy theory (so throw away everything that is higher than two derivatives in the field equations) you are left with Einstein's theory, possibly with a cosmological constant. But the second-derivative constraint is somewhat arbitrary, so without it you can add any higher tensor power. But the theories you obtain that way differ from General Relativity only at strong curvature and thus would not differ in laboratory or astronomical observations that can be done at present.
From my point of view, it may be worth specifying what is meant by the EEP; this way we would all start from the same basis of agreement, at least in the premises. The first Einstein formulation of 1907, applied in 1911 and reapplied by Schiff in 1960, does not seem to be the updated formulation. I'm not an expert on the EEP, so I won't define it myself.
1. The Principle of Equivalence amounts to saying that the space-time is a 4D Riemannian manifold with the metric $g_{\mu\nu}$ representing the gravitational potential.
2. The principle of general relativity requires that the Lagrangian action be invariant and the law of gravity (field equations) be covariant under general coordinate transformations. This principle dictates the specific form of the action given by the Einstein-Hilbert functional.
3. The field equations are uniquely determined using the variation under energy-momentum conservation constraints, which we call the principle of interaction dynamics (PID), due essentially to the presence of dark matter and dark energy; see the post: Dark Matter and Dark Energy: a Property of Gravity.
http://physicalprinciples.wordpress.com/2014/08/10/dark-matter-and-dark-energy-a-property-of-gravity/
It is possible to define an Equivalence Principle for manifolds with torsion (alternative theories to General Relativity (GR)) as a possible extension of the Equivalence Principle to non-Riemannian geometries, which, of course, reduces to that of GR in the limit of vanishing torsion. See M. Castagnino et al., Int. J. Mod. Phys. A, Vol. 14, No. 30 (1999), 4721-4734.
A lot of interesting answers this morning. I think in fact I see what I had been missing in several of the answers, especially those of Toth and Capozziello and a few others, but those were the ones that made it click for me. I would summarize this way:
Thanks all.
Robert,
With respect to only Einstein’s equivalence principle, there are a couple of viable theories. His version of the equivalence principle is usually stated as “The outcome of any local non-gravitational experiment in a freely falling laboratory is independent of the velocity of the laboratory and its location in spacetime”. With respect to any metric theory, Einstein’s equivalence principle remains true as previously stated by others.
However, the strong equivalence principle adds “the laws of gravity” to the locally invariant nature of matter. Consider the case where I have two masses and a ruler made of atoms. If I place the masses far away from all other objects with finite separation and measure the force between them, it would be equivalent to the force when placed for example on Earth. Therefore, gravitational potential itself must be locally invariant for each particle with respect to the space-time metric induced by all other particles under consideration. The problem is that gravitational potential only provides some information about metric curvature, i.e. there are many choices in terms of locally isotropic or anisotropic metrics. The metric I use for a single particle however is locally isotropic and has scalar curvature: R = 0.
I have gone into the details of this framework and how it connects to electrodynamics in my “theory of everything” (on ResearchGate). It is a per-particle method that uses converging continued fractions or numerical iterative methods to find the space-time metric of composite objects. However, the dynamics arise from the geodesic equations and the standard model of particle physics. Thus it does not predict gravitational waves, which still lack direct experimental confirmation. Furthermore, an event horizon can only form with an infinite amount of energy, i.e. the theory provides two verifiable predictions.
Yes Robert.
If I'm allowed to add a simple consideration.
Cutting and pasting from Prof. Capozziello's document, the EEP is:
1) Weak Equivalence Principle is valid;
2) the outcome of any local non-gravitational test experiment is independent of velocity of free-falling apparatus;
3) the outcome of any local non-gravitational test experiment is independent of where and when in the Universe it is performed.
This does not help me, at least in verifying whether L. Schiff applied it correctly.
If Leonard Schiff, in his paper “On experimental tests of the General Theory of Relativity” (1959), Am. J. Phys., 28, 340–343,
applied the EEP correctly but started from incorrect premises, and by using the EEP still got the results right, he risks casting serious doubt on the principle itself.
Having only yesterday obtained Rindler's paper, I noticed that he said of the light-bending part of Schiff's paper that "it is a chain of manipulations....", and I had already noticed, after a careful study,
that the first part (gravitational redshift) also has serious issues.
So my question is: did Schiff really make use of the EEP properly? If so, a big problem can arise.
Robert Helling, to be precise, the principle of equivalence does not say that the theory has to be covariant; that one is the principle of covariance, although it is true that only in a covariant theory can the principle of equivalence be implemented. Mathematically speaking, the principle of covariance is implemented in the theory by requiring that we work with tensors, and since covariant derivatives need a connection, the principle of equivalence amounts to requiring that the gravitational field is contained within the symmetric part of the connection (the reasoning is that the gravitational field can be made to vanish locally due to the principle of equivalence, just as the symmetric part of the connection can be made to vanish locally because of its transformation law). Then, if you have non-metricity and torsion, problems may arise (because there is no unique symmetric part of the connection, but there is still a unique gravitational field): in a theory that is metric and with a completely antisymmetric torsion, the principle of equivalence can unambiguously be implemented by saying that the gravitational field is encoded in the symmetric part of the connection, and hence in the metric tensor. So the principle of equivalence is built into all and only those theories in which the gravitational field is represented by the metric tensor, and this no matter the actual dynamical Lagrangian.
Therefore, theories with the equivalence principle are Einstein gravity, Weyl gravity and f(R)-gravity, or in general any higher-order theory of gravity, while theories in which there is no equivalence principle are, for instance, all teleparallel equivalents of gravity in their basic form, or f(T).
I have an opinion on equivalence vs. covariance, having recently studied Einstein's original statements on the two. They are nearly identical postulates. What I suspect is that given equivalence, one can always find a covariant formulation. But, and especially in regard to the geodesic principle, some things come from a particular coordinate system choice. One can also always choose another coordinate system. For certain choices, I can formulate gravity as a set of transformation laws. I believe this is equivalent to the covariant formulation, but it doesn't use tensors per se. Sort of a six of one and half a dozen of the other situation.
Robert Shuler, covariance is what tells you to use tensors, equivalence is that the theory of gravity is geometric: you must have covariance to implement equivalence (and that is why Newton never implemented equivalence, although he knew it well, since the principle of equivalence is a Galilean thing), but the converse is not true; since Einstein, we know theories of gravity that do not have the principle of equivalence in it, but we know of no theory of gravity without covariance.
By the way, I guess you all know the so called "Teleparallel Equivalent of General Relativity", constructed by Einstein himself. In that theory you do not need the Equivalence principle. Gravity is just a gauge theory...
http://www.amazon.co.uk/Teleparallel-Gravity-Introduction-Fundamental-Theories/dp/9400751427
http://www.amazon.co.uk/GAUGE-THEORIES-GRAVITATION-COMMENTARIES-Classification/dp/1848167261/ref=sr_1_2?s=books&ie=UTF8&qid=1410263669&sr=1-2&keywords=gauge+theory+of+gravity
All metric theories of gravity are compatible with equivalence principle. Scalar-tensor theories like that of Brans-Dicke or the action at a distance based Hoyle-Narlikar theory are examples of such metric theories. Essentially any gravitational theory that espouses gravity being just a manifestation of space-time geometry is compatible with weak equivalence principle since trajectories of particles depend on the metric and its derivatives, and not on their rest masses.
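The mass-independence is visible directly in the geodesic equation, which takes the same form in any metric theory with minimal coupling (standard textbook form, written here for reference):
$\frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0$
The rest mass has cancelled out, so all freely falling test particles with the same initial conditions follow the same world line.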
The equivalence principles as a whole are not compatible with every metric theory; only Einstein’s equivalence principle (EEP) is. The defining distinction between metric theories is actually the strong equivalence principle (SEP). For example, the Brans-Dicke theory violates the SEP by definition, because it has a variable gravitational constant. Some claim that Einstein’s field equations do not violate the SEP; however, this can easily be shown to be false. The entire basis of the SEP is that the nature of gravity is independent of space-time position. For example, the gravitational force between two masses will be equivalent in strongly curved regions of space versus flat ones. The SEP therefore directly implies that the gravitational potential of each particle must be locally invariant with respect to the space-time metric.
Consider what occurs if the SEP is applied on a per-particle basis. The field(s) of each particle follows distance as defined by the space-time metric (electromagnetic field, gravitational potential, etc.). Thus from a Schwarzschild-like frame of reference, the superposition laws are similar to the Newtonian approach; however, r -> r’ for each particle. Notice that if each particle has a 1/r' gravitational potential, then it is impossible to produce an event horizon in theories that satisfy the SEP.
More precisely, an event horizon will have infinite gravitational potential in the classical sense. Therefore, how can someone arrive at infinity by summing a finite number of finite values? In fact, the approach of applying a classical stress-energy tensor to a geometric field equation may be deeply flawed. The future direct detection of gravitational waves IMO is the defining point in how field equations should be approached in general relativity. If gravitational waves are not detected within 2-3 years, the current approach to general relativity would without a doubt be wrong.
Michael, I have to admit your arguments are not entirely clear to me, but anyway what I think I can say is that I detect some sort of pessimism toward the equivalence principle as a whole; also the fact that you think it may be flawed to couple the energy tensor to a geometric field equation points toward a possibility: that gravity is essentially not geometric, am I right? If so, may I ask from where you have acquired this opinion? Of course, you may also reply that it is from nowhere, and that simply it is what you like the most, but if there is a reason I'd be curious to know. Many thanks in advance.
Luca,
It isn’t that I’m against the equivalence principles, as I fully support them. It’s that nature exists in the form of discrete particles rather than classical densities, i.e. energy, momentum and pressure. The basis of my argument is what happens when the equivalence principles are applied on a per particle basis. In other words, each particle of an object will contribute a finite amount of gravitational potential to the total. For example, experimentally I could measure the gravitational force between two objects (made of N particles) even if they are included in a larger gravitational field. With a ruler composed of atoms, I could measure the force between them at various metric distances and plot their potentials. Therefore, summation rules must exist on a per particle basis for gravitational potentials, even if they are non-linear. My argument is based on performing per particle calculations with 1/r potentials, where event horizon(s) no longer form without infinite energy. My only assumptions were the three equivalence principles, a locally isotropic metric and 1/r (per particle) gravitational potentials.
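As a purely illustrative toy sketch of this summation (Newtonian 1/r per-particle potentials only, nothing specific to my per-particle metric construction; the positions and masses below are invented for the example):

import numpy as np

# Hypothetical cloud of N particles (arbitrary units, G = 1).
rng = np.random.default_rng(0)
N = 1000
positions = rng.normal(scale=1.0, size=(N, 3))   # particle positions
masses = np.full(N, 1.0 / N)                     # equal masses, total mass = 1

def total_potential(x, positions=positions, masses=masses):
    # Sum of per-particle -m_i/|x - x_i| contributions at the field point x.
    r = np.linalg.norm(positions - x, axis=1)
    return -np.sum(masses / r)

# At any point outside the particles, the result is a finite sum of finite values.
print(total_potential(np.array([5.0, 0.0, 0.0])))   # roughly -0.2 for this cloud

The point of the sketch is only that the total is always a finite sum of finite terms; replacing r by r' changes the values but not that fact.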
Although coupling a classical stress-energy tensor to a metric field theory could end up being incorrect, this does not contradict the geometric interpretation of gravity. I suppose my opinion of this originates from the current lack of gravitational wave detection and my work towards a unified field theory. I took a bottom-up approach and formulated the framework in the form of a Planck-scale space-time metric that is analogous to a 3D spring-mass system. The Hamiltonian density of these non-classical spacetime fluctuations is something similar to the Ricci scalar and a vector field, i.e. H = R + iv. The quaternion norm sqrt(HH*) provides the “vacuum energy density”, which at a particle's classical position is proportional to E = sqrt(m^2 + p^2). Now at a macroscopic scale, one can introduce a classical space-time metric with “vacuum energy density” rather than the stress-energy tensor. For a massive non-composite particle, the “vacuum energy density” is approximately proportional to 1/r due to three spatial dimensions. In fact, vacuum energy density in this framework is directly proportional to gravitational potential but with opposite sign, producing locally isotropic metrics. If you are wondering why I prefer a Planck-scale metric (similar but distinct from space-time or quantum foam), it is because space itself is the simplest answer as to Planck-scale structure and should be ruled out before more exotic approaches.
To answer the original question: The Einstein equivalence principle is simply the requirement of a covariant matter Lagrangian. So, there is a whole class of theories of gravity, named metric theories of gravity, where the gravitational field is represented by a spacetime metric $g_{\mu\nu}(x,t)$ and the interaction with matter is described in the same way as in GR, that is, with a covariant matter Lagrangian.
They may differ from GR in various things, some of them by introducing other covariant terms to the gravitational Lagrangian like f(R) instead of R alone, or some other expressions depending on $R^i{}_{jkl}$ and its derivatives, so that even the Strong Equivalence Principle holds (which requires covariance for the gravitational Lagrangian too). Others can have a non-covariant gravitational Lagrangian. Examples are Logunov's RTG and other theories of massive gravity, where gravity interacts with some background Minkowski metric which is hidden from matter, or my theory http://arxiv.org/abs/gr-qc/0205035 where this background is even a classical Newtonian spacetime.
The first two points (Minkowski metric and (weak) equivalence principle) hold for all of them in the same way as they hold for GR, by definition (all they require is that the matter Lagrangian be covariant). The only difference is my theory, in the sense that this requirement is derived here from independent first principles and does not have to be postulated.
Since the resulting theories already have different Lagrangians, it is clear that their final results are different. But the theories themselves are, of course, not derived by integrating some local principles. Some sort of "integrating" would be meaningful anyway only if it led to a unique result. In this sense, there cannot be a positive answer to the last question.
OK, but it is important to know how it can be applied. Is Schiff's application correct or not in finding the time dilation that way?
Ilja Schmelzer, the principle that requires the formalism to be covariant is not the principle of equivalence; it's the principle of covariance.
Luca, of course it is a very rough characterization to equate the SEP with covariance (and, correspondingly, the EEP with covariance of the matter Lagrangian); in a scientific article it would not be appropriate. Last but not least, it is well known that one can construct covariant formulations of all physical theories, even of those with a preferred background or a preferred frame. Thus, as a physical principle, covariance would be trivial. (That has been known since Kretschmann 1917.)
Nonetheless, for a rule of thumb of what it essentially means it remains useful. Last but not least, covariant formulations of theories with preferred backgrounds look more complex, thus, to present them one prefers non-covariant formulations in the preferred background coordinates.
Ilja,
There are covariant formulations of classical electrodynamics and these are in no way related to the SEP. The entire concept behind covariant theories is diffeomorphism, where one differential manifold or tensor ($g_{\mu\nu}$) can be mapped to another ($g'_{\mu\nu}$). Thus the equivalence principles are a completely different, unrelated concept with regard to general covariance. But I do agree that covariant theories can be constructed regardless of preferred or independent background assumptions.
The SEP is instead related to fundamental aspects of gravitation, and thus one can have a metric theory that satisfies general covariance but not the SEP. In fact, any metric theory that is directly coupled to a stress-energy tensor, and which also embodies Lorentz invariance, will predict gravitational radiation from binary systems. If the theory violates the SEP then dipole radiation is allowed; otherwise quadrupole radiation will arise when the SEP is included via stress-energy field equations. Therefore, if gravitational waves are not directly detected this would mean one of two things: 1) metric theories that are directly coupled to a classical stress-energy tensor are incorrect, or 2) the SEP is invalid. I won’t go into too much detail about gravitational wave experiments, but according to previous theoretical predictions (1990-2000) gravitational waves are ruled out to the >6 sigma level. It was only after these negative results that researchers changed the expected event rates by several orders of magnitude, due to non-detection rather than any better understanding from a theoretical standpoint. Even with the revised rates, chances are against the existence of gravitational waves regardless of any indirect evidence (although Advanced LIGO and Virgo will end the debate in 2-3 years).
Thus in response to the original question, yes there are an infinite number of theories that satisfy Einstein’s equivalence principle. One does not begin to filter out gravitational theories unless they consider more properties such as the SEP, gravitational radiation and strong gravitational fields; all of which ATM lack enough observational data to give a final answer. In terms of the other listed requirements, these properties can more or less be fulfilled through post Newtonian parameters. The equivalence of gravitational and inertial mass for example can be parameterized by the η PPN parameter.
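For reference, the η parameter mentioned here enters through the Nordtvedt effect, in which a body's gravitational self-energy can contribute to a difference between gravitational and inertial mass (standard PPN convention assumed; sign conventions for the self-energy vary):
$\frac{m_g}{m_i} = 1 + \eta\,\frac{E_{grav}}{m c^2}$
with $\eta = 0$ in general relativity; lunar laser ranging constrains η at roughly the 10^-4 level.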
@Peck:
"It’s that nature exists in the form of discrete particles rather than classical densities, i.e. energy, momentum and pressure."
And in field theory, these discrete particles simply become excitations of a continuous but operator-valued field. So, yes, we do not have classical densities. But we still have fields, not point-like objects.
Dr. Kassner,
Yes, but there is something rather significant about particles being discrete even if they are excitations of an underlying field(s). The main feature of this is wave-particle duality, as classically we can explain nature in the form of point-like particles. However, it is obvious that particles are not point-like, but instead localized excitations of underlying field(s). Note that just because discrete particles could be excitations of an underlying field, it does not mean one should take such discrete excitations and describe them with another continuous field (stress-energy tensor); at least in terms of general relativity.
Consider that the field(s) of each particle is Lorentzian and complies with the equivalence principles. When this approach is taken in contrast to applying a stress-energy tensor, an event horizon can no longer form. Furthermore, a single particle cannot emit gravitational radiation, as any collisionless form of energy transfer must take place via Bremsstrahlung. The problem is that the macroscopic application of the equivalence principles is not in agreement with the microscopic application, with the latter clearly being a more fundamental approach. It is easy to see this with basic calculus, as with any metric theory the field(s) of each particle will follow distance as defined by ds. Therefore, from the position of each particle, one can do the transformation r -> r’ for any of its fields and achieve the strong equivalence principle (and Einstein’s).
If the far-field gravitational potential for each particle is proportional to E/r’, then it is impossible to arrive at infinite gravitational potential. Take the Newtonian case where r is used instead. No matter how much mass or energy I can cram into an object, the total external gravitational potential is the sum of a finite number of finite values at each location in space. The only difference with the strong equivalence principle is that r -> r’, which has little effect on this summation process beyond non-linearity. For example, the diffeomorphism applied in no way changes the fact that the total gravitational potential is the sum of a finite number of finite values. Event horizons by definition have infinite gravitational potential and thus I believe this proves my point; i.e. that coupling a classical stress-energy tensor to a metric field equation is likely fundamentally flawed. If this simple explanation isn’t sufficient, you could use my per particle methods in “the theory of everything” to numerically prove it.
@Peck:
"Event horizons by definition have infinite gravitational potential and thus I believe this proves my point;"
I think this is the view that is flawed. First, you should rather think in terms of the metric or the spacetime curvature than in terms of gravitational potentials. Second, descriptions in which something diverges at the horizon are simply set up in bad coordinates. A horizon does not imply a singular metric. It simply is a surface which cannot be passed by any particle, because locally, it moves at the speed of light (in any local inertial system).
Dr. Kassner,
Yes, I fully describe everything in terms of a space-time metric in my actual work, I am just simplifying the discussion here. However, gravitational potential constrains several aspects of curvature and thus removes some degrees of freedom. Relative to the strong equivalence principle, the laws of gravity are claimed to be locally invariant. For masses that are at rest or only have radial motion, the geodesic equations reduce to what can be derived from potential energy and special relativity (see pgs 8 and 27 in https://www.researchgate.net/publication/236885728_The_Theory_of_Everything_Foundations_Applications_and_Corrections_to_General_Relativity?ev=prf_pub ). Since in this case gravitational potential defines the local invariance of gravity or outcome of gravitational experiments, it can be discussed without the space-time metric via superposition laws.
Finally, when you change coordinate systems away from the Schwarzschild metric (or Minkowski background frame), you also change the definition of gravitational potential. The event horizon in Einstein’s field equations is not just some imaginary boundary. Outside observers will definitely notice various characteristics that indicate the existence of an event horizon. When a particle falls towards an event horizon, it approaches the speed of light due to infinite gravitational potential. In fact, gravitational potential and redshift are directly related, where redshift goes to infinity at the event horizon. It’s like saying that a valid frame for comparison exists, but this should be discarded because frames where a comparison would be invalid also exist.
Three interesting points from Peck:
On the first point, it's noteworthy that the uncertainty of discrete particles can cause energy to flow in the reverse direction across an event horizon (Hawking radiation).
The latter two points are not much discussed. Some people have recently told me to avoid viewing redshift as gravitational potential, but I do not see any other choice. It provides a direct measure of energy differences. I posted a paper on this a while back some of you may remember (Hamiltonian analysis using time dilation). The paper was unfortunately written before JM and CF straightened me out on the coordinate velocity of light, and needs some clarification, but it is currently in review and I imagine the reviewers are struggling with it as it has been three months already.
I am gradually beginning to suspect, in the last week, that GR is not actually compatible with equivalence. This is an indirect conflict, not a "glaring" conflict. It arises in seeking compatibility of solutions of the field equation with the Newtonian limit. If the compatibility requirement is applied during derivation of a metric from equivalence, a $1+\phi$ factor arises instead of $(1-2\phi)^{-1/2}$. In 1st order post-Newtonian they are of course the same, but the former agrees more with Peck's conclusions.
Robert,
Gravitational redshift is strictly related to the gravitational potential of masses through the Schwarzschild metric in the weak field. The gravitational redshift is a result of UCR, of the clocks, of the masses which emit and absorb the photons in different ways according to their gravitational potential. It is not due to the gravitational potential of photons, which does not exist, even if, seen this way, numerically, in the static situation, it comes out right. The Doppler redshift is instead due to an energy variation, in a flat space-time. Applying energy conservation, it comes out that the emitter and observer add or subtract energy to the quantum they exchange.
Originally I was going to write: "Obviously gravitational potential of photons does not exist. Stefano, no one here means to imply that, as far as I can tell. We are using the fact that photon frequency does not change in order to measure the potential of objects which might be converted into photons via annihilation." But now I am wondering whether this is true for particles under the same conditions. A particle falling at nearly the speed of light increases its de Broglie frequency, by local measure (due to relativistic kinetic energy), in the same proportion as a photon. Interesting thought. And a photon "in a box", or confined in some way, would lose energy when lowered slowly due to Doppler shifting as it reflected from the sides of the box. As Asif's paper points out, particles may just be confined photons. Thanks for provoking the interesting thought, Stefano.
I used to believe as you suggest about the flat spacetime, but this cannot explain the coordinate velocity of light (evidenced by both Shapiro delay and bending). In my "two steps" paper I find the elusive factor in the equivalence derivation which gives the correct coordinate velocity.
Let's try to stay focused on this question ... if $1+\phi$ emerges from equivalence as the time dilation factor which preserves the Newtonian limit, and $(1-2\phi)^{-1/2}$ emerges from the field equation through the Schwarzschild solution as the time dilation factor which preserves the Newtonian limit, what is the reason for the mismatch?
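A quick symbolic check of the mismatch (sympy assumed available; φ is treated as a formal expansion parameter, sign conventions aside):

import sympy as sp

phi = sp.symbols('phi')
from_equivalence = 1 + phi                            # factor obtained from the equivalence argument
from_schwarzschild = (1 - 2*phi)**sp.Rational(-1, 2)  # factor from the Schwarzschild solution

# Both agree at first order in phi; they differ starting at second order.
print(sp.series(from_schwarzschild, phi, 0, 3))   # 1 + phi + 3*phi**2/2 + O(phi**3)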
Robert,
In the formulation I use, it is true that $(1+\phi)$ provides an identical weak-field limit with respect to Einstein’s field equations. It's interesting because the Ricci scalar of this metric when made isotropic becomes R = 0, but $R_{\mu\nu} \neq 0$. Recently I thought along the lines of modifying EFEs, where $\gamma^0$ is the first gamma matrix.
$R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R = \gamma^0 g_{\mu\nu} \Lambda, \qquad \Lambda = \phi^2/(1+\phi)^4$
However, this doesn’t really make formulating microscopic equations any easier. A few years ago I also noticed that my single particle metric and the Maxwell-Einstein vacuum solution are very similar.
$ds^2 = -A^2 dt^2 + B^2 dr^2 + C^2 d\Omega^2$
Reissner-Nordström metric ($A = B^{-1}$, $C = 1$):
$A^2 = 1 - 2\phi + [\,\phi\,(k_e Q c^2/MG)\,]^2$
Single particle ($A = B^{-1} = C^{-1}$):
$A^2 = 1 - 2\phi A + [\,\phi A\,]^2 = (1+\phi)^{-2}$
The reason for this is that the electromagnetic contribution of a single charged particle is already included in the 1/r potential. The inclusion of A in the φ components would further be similar to varying the gravitational constant in EFEs, but keep in mind that this is the solution for a single particle and not composite objects; i.e. the equivalence principles are built upon this with per particle r -> r’ and G is constant. So this led me down the path of trying to formulate some type of Brans-Dicke-Maxwell field theory, where Bremsstrahlung would be included in the dynamics. In the non-relativistic case, $d\phi/dt = (mva/r)\sin^2(\theta)$ for any collisionless deceleration in Planck units (G=c=1). Thus the electromagnetic field from charged particles does not modify the metric when compared to neutral particles; however, any free field radiation will contribute independently (in the case of Bremsstrahlung or photon). Finally, I decided not to continue along this path because it fails to include neutral fields, e.g. neutrinos. For example, just because one can superposition + and – charges to eliminate the electric field (e.g. composite particles composed of quarks), the vacuum field from which the EM field emerges should still exist. At this point, I felt it would be best to finish the unified field theory rather than attempt to formulate an ad-hoc version in terms of a classical stress-energy tensor (although I've recently been focusing on cosmology instead). Plus my per particle methods are sufficient at the moment, even though I plan to later publish a more refined article on them.
Michael, that's interesting. Some of it is beyond what I can intelligently comment on. I would not necessarily expect to get Bremsstrahlung from free fallers, but whatever you get I'd expect equivalence to be upheld. This gives a lot of people immense grief, who don't realize that uniform acceleration doesn't produce radiation, a factoid which is buried deep in the mathematics and usually glossed over.
My baseline assumption is that all particles are electromagnetic, possibly with the extension to include the 3 GUT interactions. It is not necessary to assume that for this question, just FYI. So obviously electromagnetic energy results in a gravitational field of some kind, but I don't see that it can possibly be an energy-time field as in QFT, because of the endless mass loop. I have downloaded one of your papers to look at when my energy level gets up to it.
Michael,
"however, any free field radiation will contribute independently (in the case of Bremsstrahlung or photon). "
According to what you just said, as far as I can understand, the radiation or free energy (photons) modifies the metric.
This is a really interesting fact. Einstein affirmed that the contribution of radiation to the metric was negligible, so he didn't try to include it in his equations. But that was 1915-1950.
Later, the strange behaviour of photons was confirmed: their wave function can be divided into two parts and then made to collapse at distant points depending on the presence of the detection system. In such a case, how can the metric possibly be modified by an entity whose position in time is not definite? The photon reaches a point if there is something there to acquire it; otherwise the same photon is detected somewhere else, far away.
Isn't this the case to follow Einstein's policy and take a further step, stressing the possibility that the influence of travelling zero-mass entities on the metric is null, i.e. that they don't stress the fabric of space-time at all, even though they follow its curvature?
Stefano: "Einstein affirmed that the contribution of radiation for the metric was negligible, so he didn't try to include it in his equations. "
There is no need to try. Once you write down a Lagrange formalism which includes the EM field, the action-equals-reaction principle automatically ensures that this back-reaction of EM radiation on gravity is incorporated.
The question of how the collapse of the wave function can be incorporated into the theory of gravity is a problem of quantization of gravity, it is the problem which suggests that the gravitational field has to be quantized too. But it is not a problem of incorporation of the EM field.
@Peck:
"Finally, when you change coordinate systems away from the Schwarzschild metric (or Minkowski background frame), you also change the definition of gravitational potential."
Sure. But not the physics. This is just a change of gauge, and while potentials vary under gauge transformations, physically observable results don't. Choosing a gauge where a potential becomes infinite is usually not well-advised, if you want to draw conclusions from the value of the potential. The coordinate singularity may "wash out" useful information.
"The event horizon in Einstein’s field equations are not just some imaginary boundary. Outside observers will definitely notice various characteristics that indicate the existence of an event horizon."
Event horizons are not singularities of spacetime. Whether they are just some imaginary boundary or not depends to some extent on what you mean by "imaginary". An event horizon can be present for one observer and absent for another. Example: the Rindler horizon for an accelerated observer is absent for an inertial observer. Moreover, it may be impossible for an observer to decide -- on short time scales -- whether he is inside or outside an event horizon. The precise position of an event horizon is, in principle, decidable only by a global view on the spacetime under consideration, i.e. you need knowledge of the future of all world lines in the patch where you suspect an event horizon.
"When a particle falls towards an event horizon, it approaches the speed of light due to infinite gravitational potential."
That is not generally true. In some coordinates, it approaches the speed of light, in others, its speed remains smaller than c, in yet others, its speed becomes zero. A particular value of the gravitational potential does not have any physical meaning. Even an infinite one. Potentials are not the reality. What can be said, however, is that the relative speed of the horizon with respect to the infalling particle is the speed of light.
"In fact, gravitational potential and redshift are directly related, where redshift goes to infinity at the event horizon."
That's true only for a particular definition of gravitational potential, e.g. the one based on the Schwarzschild metric, which is not well-suited for the discussion of these questions, as it becomes singular at a horizon. (And that is due entirely to the choice of Einstein synchronization for the Schwarzschild time. But near the horizon, Einstein synchronization -- the assumption that the speed of light along a curve is independent of the direction within the curve -- looks outright silly. Obviously light cannot come back from the horizon as fast as it can go there.)
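For concreteness, the Schwarzschild-based relation being referred to is (static emitter at radius r, receiver at infinity, standard form):
$1 + z = \left(1 - \frac{2GM}{rc^2}\right)^{-1/2}$
which indeed diverges as r approaches 2GM/c^2; as argued above, this reflects the behaviour of static observers and of Schwarzschild time near the horizon rather than a singularity of spacetime.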
"It’s like saying that a valid frame for comparison exists, but this should be discarded because frames where a comparison would be invalid also exist."
You've got the cart before the horse. If there is one valid frame you should discard the invalid ones but not vice versa. The fact that you can choose bad coordinates for a description (diverging at the horizon) should not prevent you from choosing good ones. Why on earth should I discard a quantitatively correct description, just because there are others that are incorrect?
@Peck:
"It's interesting because the Ricci scalar of this metric when made isotropic becomes R = 0, but Ruv ≠ 0"
How can the Ricci scalar of a metric depend on whether the latter is made isotropic or not? The Ricci scalar is coordinate independent, so it should be the same whatever coordinate transformation you apply to your metric.
@ Ilja Schmelzer, if, as you say, there is no need, why did Einstein say that?
If the tensor cannot be defined in certain situations, it means something about the relation between curvature and light. It is better to admit that the stress tensor is not affected than to quantize gravity and go nowhere.
Robert,
The case of Bremsstrahlung would only occur under specific situations based upon classical electrodynamics. For example, it only affects charged particles and cannot occur with gravitational acceleration or that of an elevator. In short, my methods allow the space-time metric at a given time to be determined from classical variables, i.e. mass, energy, temperature, pressure, velocity, etc. From here I apply the geodesic equations and standard model to time-step the simulation, where the space-time metric is then recalculated on a per particle basis. The time-dependence is only as good as our understanding of forces beyond gravity.
In terms of my “theory of everything”, particles are discrete (localized) excitations of a Planck-scale metric that is analogous to a 3d spring-mass system (infinitesimal and continuous). Thus an electron will have a central location with the energy density of the metric fluctuations proportional to ~1/r (due to three spatial dimensions). Gravity is now due to the vacuum energy density of a Planck-scale metric, which can be treated geometrically via a macroscopic space-time metric. Note that this is literally the only other possible interpretation of c4/G as being the coefficient for a source term, i.e. some type of energy density. The other classical forces arise from the time-dependence of the Planck-scale fluctuations. Say for example that the solution for two electrons at finite distance was known. Setting up the initial conditions and letting the system evolve will automatically have electrodynamics and all other classical forces emerge within the dynamics. Thus the electromagnetic field can in some way be seen as something physically real, more precisely the electromagnetic four-potential (notice that gravitational potential, electric potential and "vacuum energy density" are all ~1/r). This is perhaps why the standard model works in its rather ad hoc form, because it is not too far off.
Stefano,
In terms of quantized electromagnetic radiation (photon), this is commonly implemented via the particle’s energy; i.e. through the energy density component of the stress-energy tensor. For example, to determine a massless particle's contribution to the space-time metric, I assume it has cylindrical symmetry and plug this into a complex Helmholtz equation to approximate the field. This results in $E_0 e^{-|z|}/\sqrt{r}$ rather than $E_0/r$, where $E_0 = hc/\lambda$. The double slit experiment depends on how you interpret the results. For those who believe in a deterministic universe, the classical position of a particle or photon follows a path. In an ensemble interpretation with many initial conditions, there will be a distribution of paths providing the outcome of the double slit experiment.
Dr. Kassner,
The only way to remove these coordinate singularities is to mix spatial and time components. However, from the frame of an outside observer, the clock of infalling matter will cease to evolve as it approaches the event horizon. The frame of infalling matter once reaching the event horizon has no causal connection to the outside universe. The physical meaning of gravitational potential as derived on a per particle basis is therefore lost in such coordinate transformations. If I have equivalent gauges (and thus equivalent definitions of gravitational potential) between my per particle calculations and the Schwarzschild metric, I really don’t see the problem. I'm sure you could apply diffeomorphisms to both and arrive at equivalent definitions in other coordinate systems.
It has nothing to do with personal preference for a particular coordinate system, but instead the equivalence of gauges between the Schwarzschild metric and per particle solutions based upon the equivalence principles. However, I could have worded the phrasing of an isotropic metric clearer. There is really no choice in using a locally isotropic metric in my framework, the fundamental forms are instead used to derive a curvilinear space-time metric from a scalar field (“vacuum energy density”). Therefore, ds at each point in space will be spherically symmetric, but its derivatives will not.
Robert, researchgate just now emailed your question to me, and coincidentally on Friday I attended a presentation by physicist Russell Humphreys, where he presented his acceleration-based theory of gravity. He is a creationist ( I interviewed him at http://rsr.org/humphreys ), and has had a distinguished career at Sandia National Labs in nuclear physics, and theoretical atomic and nuclear physics, worked with Sandia’s ‘Particle Beam Fusion Project’, co-invented the special laser-triggered ‘Rimfire’ high-voltage switches, now coming into wider use, and contributed to light ion–fusion target theory. His talk was videotaped and if it's released online, I'll post the link here.
"Thanks for provoking the interesting thought, Stefano."
Thank you Robert
I spent a lot of time defending Feynman about his massive photon, which also appears in his book Lectures on Gravitation, against Okun's article. Then I had to surrender in front of the experimental evidence of the GP-A experiment, which I revisited quite carefully through its 150-page NASA report. The clock theory is the guide. Einstein's 1911 article is faulty in its application of the equivalence principle. Applying the equivalence principle to light, which in flat space-time is affected by the Doppler shift, yields a gravitational energy of photons, which cannot exist. This has to make us reflect on the applications of the WEP.
I went through the de Broglie frequency too. But it applies only to fermions and composite matter particles. In that case the de Broglie frequency is strictly tied to the gravitational potential energy of the particles.
Michael,
It is not the double-slit experiment, which is the same for fermions and photons; here I'm talking about something which is strictly a property of photons. The division of the wave packet (Penrose) and its detection far away, very far away, is not an invention; it creates huge problems for the tensor theory, which is local.
If a very powerful light beam of 10^10 gamma photons, passing near a gravitating object, bent the space-time through which it passes according to its energy content, it would also change the gravitational fields around itself; it would exchange energy with the massive gravitating object and lose its own energy. For example, in the phenomenon of gravitational lensing its frequency would change... this is really a problem...
@Peck:
"The only way to remove these coordinate singularities is to mix spatial and time components."
So what? That's what the Lorentz transformations do as well.
Also, it may be considered reasoning backward. What if you start by deriving a metric that is not time-orthogonal for the Schwarzschild geometry? Then the only way of getting the coordinate singularities is to mix spatial and time components. But relativity is about the necessity of mixing space and time components when transforming from one observer to another.
No. If the spacetime manifold is non-singular at the horizon, and this has been shown by Kruskal and Szekeres, it is a bad idea to use singular coordinates there and to believe everything derived from them.
Are you aware of the fact that the divergence of the time coordinate at the event horizon in the Schwarzschild metric is only due to the insistence on time-orthogonality, that is, Einstein synchronization? Now, Einstein synchronization requires that light takes the same time on a path to a distant event as it takes on the way back along the same path. Do you think that is a reasonable requirement near a horizon, where there is no way back? That's why Einstein synchronization fails near a horizon. And hence, Schwarzschild coordinates are not a reasonable choice there.
"However, from the frame of an outside observer, the clock of infalling matter will cease to evolve as it approaches the event horizon."
No. It will evolve until the horizon and it will disappear from view then. Just as a sufficiently distant galaxy will disappear due to the expansion of the universe. Both on the galaxy and for the infalling observer time will continue as before. We just cannot observe them anymore.
"The frame of infalling matter once reaching the event horizon has no casual connection to the outside universe. The physical meaning of gravitational potential as derived on a per particle basis is therefore lost in such coordinate transformations."
Potentials are not fully physically interpretable, because they are defined only up to a gauge. The additive infinite constant in your potential is due to a bad coordinate choice, so it presumably does not have a physical meaning. Of course, the ability of humans to interpret is hardly limited, so you may assign such a meaning to it, but it will not be an objective meaning...
I think we have come a long way from the equivalence principle, so I suggest we stop here and open another thread focusing on these last issues.
Luca,
I think maybe some aspects have drifted from the original question, but most of what is now being discussed are the details. For example, it isn’t just about having other theories that are in agreement with Einstein’s equivalence principle, but also differentiating between such theories through measurements. My main focus was on per particle microscopic solutions and how the metric relates to that of a charged object via the Maxwell-Einstein vacuum solution. Beyond possible insight as to why the stress-energy tensor may not be applicable to a metric field theory, I suppose everything about a unified field theory and vacuum energy density can be ignored in these regards.
Nonetheless, I would like to point out that there is something rather significant in achieving equivalent first order gravity with respect to Einstein’s field equations and the relation to the metric of a charged object. In EFEs, it is assumed that regardless of how much mass or particles exist in an object, the gravitational potential of the object follows distance as defined by its metric (i.e. induced by its own presence, even if there is only a single particle). This is by definition the strong equivalence principle as applied to a continuous source term, i.e. a stress-energy tensor. However, on a per particle basis, the gravitational potential of a single particle follows distance as defined by a Minkowski metric (or if other particles exist, the space-time metric induced by their presence). This is the meaning behind the φA terms and thus (1+φ) for a single particle. In weak fields this does not add up to much (A -> 1); but when you get on the order of “black holes”, EFEs begin to fail. Thus the central point I would like to focus on is that microscopic general relativity is not in agreement with macroscopic general relativity when applying the equivalence principles. This is measurable through future tests of event horizon and direct detection of gravitational waves. Furthermore, this would indicate that proper macroscopic equations (if coupled to a stress-energy tensor) would need to violate the strong equivalence principle in order to fit the more fundamental microscopic predictions.
$ds^2 = -A^2 dt^2 + B^2 dr^2 + C^2 d\Omega^2$
Reissner-Nordström metric ($A = B^{-1}$, $C = 1$):
$A^2 = 1 - 2\phi + [\,\phi k\,]^2$
Single particle ($A = B^{-1} = C^{-1}$):
$A^2 = 1 - 2\phi A + [\,\phi A\,]^2 = (1+\phi)^{-2}$
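Reading $\phi A$ as the product φ·A (which is what makes the last equality close), the algebra can be checked symbolically (sympy assumed; this only verifies the algebra above, not the underlying physics):

import sympy as sp

phi, A = sp.symbols('phi A', positive=True)

# A^2 = 1 - 2*phi*A + (phi*A)^2 is A^2 = (1 - phi*A)^2; taking the root A = 1 - phi*A:
sol = sp.solve(sp.Eq(A, 1 - phi*A), A)[0]
print(sp.simplify(sol))                        # 1/(phi + 1)
print(sp.simplify(sol**2 - (1 + phi)**(-2)))   # 0, i.e. A^2 = (1 + phi)^(-2)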
Dr. Kassner,
Coordinate singularities never occur in my formulation unless infinite energy exists, so there is no need to make such transformations in the first place. The assumption that the Schwarzschild metric has bad coordinates is a personal preference and does not change the fact that it has an equivalent definition of gravitational potential with respect to my formalism.
“It will evolve until the horizon and it will disappear from view then.” My point was that by the time it reaches the location of the event horizon in local proper coordinates (even if the event horizon does not exist in such coordinates), the event would take place after an infinite amount of time with respect to outside observers and thus is no longer causally connected with the outside universe.
Assuming that gravitational potentials as I have implemented them are not fully physically interpretable, however, is wrong. Gravitational potential is a component of the post-Newtonian formalism, and thus when determining PPN parameters for EFEs, time and spatial components are never mixed. There are many ways to view general relativity, but my aim is to take all aspects into consideration. I believe that if you actually consider what I am saying about per particle solutions and the problem with applying a classical stress-energy tensor, then it will become clear that my argument is not based upon a misunderstanding but perhaps a better understanding than most.
@Peck:
"Coordinate singularities never occur in my formulation unless infinite energy exists, so there is no need to make such transformations in the first place."
I don't think that is true. In using the Schwarzschild metric, you have a coordinate singularity in your formulation at r=2M. You cannot discuss it away by saying that r=2M is not reachable. (Which would be incompatible with the relativity principle.) It is in the formula and it can be shown to be only a coordinate singularity, removable by an appropriate coordinate transformation (which also shows that r=2M is reachable, because it is spacetime and not its coordinization that determines whether an event is reachable or not).
"The assumption that the Schwarzschild metric has bad coordinates is a personal preference and does not change the fact that it has an equivalent definition of gravitational potential with respect to my formalism."
It is not a personal preference. "Bad coordinate" is quite objective: You have bad coordinates whenever the metric becomes singular without spacetime becoming singular (i.e. the Riemann tensor remains regular).
"“It will evolve until the horizon and it will disappear from view then.” My point was that by the time it reaches the location of the event horizon in local proper coordinates (even if the event horizon does not exist in such coordinates), the event would take place after an infinite amount of time with respect to outside observers and thus is no longer casually connected with the outside universe."
But that is not true in general. The causal connection gets lost, right. But time need not become infinite, and if you take coordinates that become singular only where spacetime is singular then you will see that time is not infinite at the horizon (which is possible for the Schwarzschild geometry). I refer you to the Wikipedia article on Gullstrand-Painlevé coordinates for a nice example that I will discuss a bit more below.
"Assuming that gravitational potential as I have implemented them are not fully physically interpretable however is wrong. Gravitational potential is a component of the post Newtonian formalism and thus when determining PPN parameters for EFEs, time and spatial components are never mixed."
Since gravitational potentials can, just as any other potential, be changed by an additive constant (and in fact, in general relativity a whole additive function due to gauge invariance) without changing the physics, they cannot be fully physical. So I maintain my statement in that regard. If time and spatial components are never mixed in your theory that would be a reason to discard the theory as most likely being wrong. Mixing of time and space is a characteristic feature of relativistic theories. Any theory that is not able to reproduce that feature is probably wrong, because of the good agreement of relativity with experiment.
"There are many ways to view general relativity, but my aims are to take all aspects into consideration."
I would dispute that. On the contrary, you are focusing on a single coordinate system although taking into account all aspects would require you to be able to give a coordinate-independent description. "Time" does not diverge at the horizon, only Schwarzschild coordinate time does. Moreover, you seem to introduce elements of action at a distance with your particle view and 1/r potentials. I did not really buy your answer on my objection stating modern particle physics to be in terms of field theory so we do not have point particles but extended fields that should enter the description. You said that particles are discrete excitations of the field. Yes they are discrete but in energy, not in spatial extent. Their spatial description is by a continuous wave function or by a (continuous) operator field.
"I believe that if you actually consider what I am saying about per particle solutions and the problem with applying a classical stress-energy tensor, then it will become clear that my argument is not based upon a misunderstanding but perhaps a better understanding than most."
These are two different statements. I think that you indeed misunderstand a number of things. However, I fully agree that you have a better understanding of the theory than most posters here. (Although, of course, not as good an understanding as myself :-).)
Now let me return to the example of Gullstrand-Painlevé coordinates. The interesting thing about the GP metric (discussed in the Wikipedia article) is that it was found by Painlevé from a direct solution of the field equations of a spherically symmetric system. He did not recognize its equivalence to the Schwarzschild solution and was perplexed by the fact that there seemed to be a different solution to a problem that he had expected to be uniquely solvable. Anyway, let us imagine that Schwarzschild had never found his solution and we only had Painlevé's. Suddenly, somebody would point out that a certain coordinate transformation diagonalizes the metric and would obtain Schwarzschild's form of it. Would you then maintain that, because this had been achieved only by a transformation mixing space and time components, the Schwarzschild solution was invalid?
There are some interesting features of the Gullstrand-Painlevé form of the metric. First, a particle that falls from rest at a finite radius towards the center will reach the horizon in finite time. (And the Gullstrand-Painlevé time can be made to coincide with Schwarzschild time at the starting radius by a simple redefinition of the time origin.) Second, if you define a gravitational potential via exp(−2Φ) = g00 as usual, then the gravitational potential at the horizon will still be infinite. But, third, a particle that was at rest at infinity and freely falls towards the center will have a coordinate velocity of −1 on passing the horizon. Fourth, this is not the coordinate velocity of light as one might suspect; the velocity of infalling light is given by −1 − √(2M/r), so it becomes −2 at the horizon.
What does this tell us? Well, I would say two things, at least. The Gullstrand-Painlevé time introduces a different synchronization between events near the horizon and distant events than the Schwarzschild time (it does, however, agree with the Schwarzschild time at a certain radius, if we set t_r = t − f(r) + t_0; that radius is then given by f(r_0) = t_0). In this synchronization, a particle will pass the horizon in finite time from the point of view of the distant observer. The second thing we learn is that infinite potential does not mean that an infalling object reaches the speed of light. (This you might of course avoid by a redefinition of the potential.)
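For concreteness, here is a small numerical check of these statements (a sketch only, in geometric units G = c = 1 with M = 1; the library imports and the chosen starting radius are merely illustrative):

import numpy as np
from scipy.integrate import quad

M = 1.0
r_horizon = 2.0 * M

def infall_velocity(r):
    # dr/dt in Gullstrand-Painlevé coordinates for a particle that fell
    # from rest at infinity (the "rain" geodesic): dr/dt = -sqrt(2M/r)
    return -np.sqrt(2.0 * M / r)

def ingoing_light_velocity(r):
    # Radial null condition of the GP line element,
    # ds^2 = -(1 - 2M/r) dt^2 + 2 sqrt(2M/r) dt dr + dr^2 = 0,
    # ingoing branch: dr/dt = -(1 + sqrt(2M/r))
    return -(1.0 + np.sqrt(2.0 * M / r))

print(infall_velocity(r_horizon))        # -1.0 at the horizon
print(ingoing_light_velocity(r_horizon)) # -2.0 at the horizon

# GP coordinate time for the rain particle to go from r = 10M to the horizon:
t_fall, _ = quad(lambda r: 1.0 / np.sqrt(2.0 * M / r), r_horizon, 10.0 * M)
print(t_fall)  # finite (about 13.6 in these units)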
Dr. Kassner,
“I don't think that is true.”
I’m not sure how the Schwarzschild metric has anything to do with my formulation beyond the φA terms, but I would agree that the Schwarzschild metric requires what you have described. The only coordinate singularity that exists for the single particle metric is at r = 0 due to applying the classical 1/r approximation of the potential. Thus in this far-field approximation one could apply a coordinate transformation and arrive at no coordinate or curvature singularities.
“It is not a personal preference.”
If you define “bad coordinates” as those with coordinate singularities, then what about those with curvature singularities? I don’t think singularities alone can classify a metric as good or bad; the only factor that matters is equivalent definitions of redshift and gravitational potential when making a comparison between theories (thus my reference to the standard method of comparing post-Newtonian theories through PPN parameters and gravitational potential). If what you are saying were true, the PPN formalism would be useless and different metric theories could not be compared based upon basic quantities like curvature from rest mass, gravitational potential, non-linearity, etc.
The only difference is that the EFEs are macroscopic while my equations are microscopic. This is the significance of the φ_A terms for a single particle: after proper coupling to the particle’s electromagnetic or neutral (electroweak) field, k = 1.
A² = 1 − 2φ + [φk]² -> A² = 1 − 2φ_A + [φ_A]² = (1 + φ)⁻²
“If time and spatial components are never mixed in your theory that would be a reason to discard the theory as most likely being wrong.”
Just because there are no reasons to mix the spatial and time components in my formalism does not mean one cannot mix them. The assumption that metrics with singularities are “bad” is not a strong argument in terms of needing these transformations, let alone being a requirement for all metric Lorentzian theories.
“I would dispute that.”
To be precise, I am focusing on the test particle perspective based upon the geodesic equations and metric theories. My views on a unified field theory are irrelevant in these regards, as all particles will have fields that follow distance as defined by the metric. Thus classically particles are localized entities; whether one wishes to consider the particle and its classical fields as a single entity or treat them via the kinematic part of field strength tensors, the r -> r’ and 1/r potentials are built into the foundations of modern particle physics and general relativity. Applying the ensemble formalism (QFT) to these theories of course brings about the wave function based upon observables, but the underlying fields still exist and these are extended (e.g. Fμν, Bμν, etc.).
“But that is not true in general.” and “Gullstrand-Painlevé coordinates”
Yes, I can see how infalling matter could reach the event horizon in finite time with respect to outside observers IF the spatial and time components are mixed. However, perhaps there is a fundamental reason that diagonal space-time metrics provide a more proper frame regardless of diffeomorphism. For example, it is possible that the electromagnetic four-potential and gravitational potential are more fundamental than the electromagnetic field and space-time metric, at least in terms of discrete particles. In an experimental sense here on Earth, if we tracked matter falling into a black hole it would still take an infinite amount of time for it to reach the event horizon. Thus there is one coordinate system that is valid for what distant stationary observers see, and it is not the Gullstrand–Painlevé system.
Nonetheless, I find the different synchronization between events in these coordinates interesting, but it's like comparing apples to oranges. In general relativity one can transform apples into oranges, but once this is done a comparison can no longer be made. I’m saying that I have two apples, the macroscopic and microscopic equations of general relativity. You're saying that you can take the macroscopic "apple" and transform it into an "orange", i.e. that I cannot make a comparison because other forms exist. However, this does not change the fact that when both are in the form of apples or even oranges they are equivalent and thus comparable.
"I think that you indeed misunderstand a number of things"
The only disagreement that I can think of would arise from the assumption that a single particle's potential follows a flat Minkowski metric (or space-time metric if other sources exist). However, this is by definition the equivalence principle on a per particle basis and equates to what would be tested in gravitational constant experiments placed at various space-time locations.
@Michael.
As far as I understand, your per-particle model starts from the assumption of a Newtonian 1/r potential for all particles. Won't this be like assuming that the behavior of space-time follows the same fixed law under any stress condition? Regardless of the global stress condition, the single-particle contribution to the potential is always 1/r?
@Peck:
"If you define “bad coordinates” as those with coordinate singularities, then what about those with curvature singularities? "
But that was not my definition. I define "bad coordinates" as coordinates that are singular where spacetime is not. (And only if you are not describing events that happen far away from the singularity anyway. Schwarzschild coordinates are virtually perfect for our solar system; they are just not good for describing things near the event horizon of a black hole.)
How to decide this was first thoroughly discussed by Szekeres. If spacetime has a true singularity, then you cannot get rid of it by any choice of coordinates.
"Yes, I can see how infalling matter could reach the event horizon in finite time with respect to outside observers IF the spatial and time components are mixed. However, perhaps there is a fundamental reason that diagonal space-time metrics provide a more proper frame regardless of diffeomorphism."
No, there isn't. The reason why time-orthogonal coordinates are preferred by people (not by physics) is that they keep the feature of Einstein synchronization making the speed of light the same in two opposite directions (which facilitates the decomposition into proper time and proper space). But that is a pretty unnatural requirement near a horizon. Why would one require light that comes from the horizon to have the same velocity as light that falls into it? (In particular, if this pushes the time coordinate of the horizon to infinity?) Therefore, my suggestion is that time-non-orthogonal coordinates are essentially always superior in describing physics near a horizon. (The only exception being constructions like Kruskal coordinates, which are time-orthogonal and nonsingular, but at the price that a constant "spatial" coordinate will run into the horizon.)
Stefano,
The central assumption for a single particle without any other sources or fields is a 1/r gravitational potential. This is due to two things: 1) classically this is what is expected in the weak-field limit, and 2) when working in three spatial dimensions the potential solutions always fall off as 1/r. On a more fundamental basis, if one wishes to view classical fields as separate entities from their respective particles, then the electromagnetic or electroweak contributions to a single particle will already be included in the 1/r potential as I have shown.
However, for composite objects the potentials are transformed via r -> r’ to manually implement the equivalence principles on a per-particle basis. For example, particle A will have its field(s) follow the space-time metric induced by particle B and vice-versa. Numerically I achieve this by starting with a flat Minkowski space-time and summing up the contribution from each particle. Now I have the first iteration, which provides a space-time metric based upon 1/r solutions. Next the space-time metric from the first iteration (no longer Minkowski) is applied so that the potential of each particle follows distance as defined by the current space-time metric (1/r’). The contributions are then summed again for each particle to arrive at the space-time metric for the next iteration. Although this process rapidly converges, the exact transformation r -> r’ is reached only after an infinite number of iterations. Furthermore, one can also break objects into chunks (N) rather than particles (n), where exact solutions arise as N -> n.
Thus in the local frame the potential will always be 1/r (EEP and SEP), but for distant observers this becomes r -> r’ or undergoes Lorentz transformations.
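Schematically, the iteration just described might look like the following toy one-dimensional sketch (all names, the weak-field stretch factor 1 + φ, and the tolerance are purely illustrative assumptions, not the actual equations of the theory):

import numpy as np

masses    = np.array([1.0e-3, 1.0e-3])   # illustrative particle masses (geometric units)
positions = np.array([-5.0, 5.0])        # coordinate positions on a line
grid      = np.linspace(-20.0, 20.0, 4001)
dx        = grid[1] - grid[0]

phi = np.zeros_like(grid)                # iteration 0: flat Minkowski background

for iteration in range(100):
    stretch = 1.0 + phi                  # toy spatial stretch factor from the previous iteration
    # proper position of every grid point, obtained by summing stretched intervals
    proper = np.concatenate(([0.0], np.cumsum(stretch * dx)))[:-1]
    phi_new = np.zeros_like(grid)
    for m, x0 in zip(masses, positions):
        j0 = np.argmin(np.abs(grid - x0))
        r_prime = np.abs(proper - proper[j0])   # r -> r': distance defined by the current metric
        r_prime = np.maximum(r_prime, dx)       # regularize the point r = 0
        phi_new += m / r_prime                  # per-particle 1/r' contribution
    if np.max(np.abs(phi_new - phi)) < 1e-12:   # converged?
        break
    phi = phi_new

print(iteration, phi.max())

Each pass re-expresses every particle's 1/r potential in terms of distances measured with the metric of the previous pass; the fixed point is the self-consistent r -> r’ transformation mentioned above.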
Dr. Kassner,
I understand your point as to why you would want to transform the coordinates with respect to the Schwarzschild metric, but I just don’t see how such a thing is a fundamental requirement of metric theories. My point about curvature singularities was that EFEs predict them and they cannot be removed (while my theory does not predict them). Thus if one objects to certain metrics due to coordinate singularities, what is stopping me from applying a similar type of attitude and rejecting all solutions with curvature singularities?
I also believe that there is convincing evidence that the four-potential and gravitational potential are fundamental. For example, the Aharonov–Bohm effect offers direct proof that the four-potential is more fundamental than the electromagnetic stress-energy tensor. Of course one can make gauge transformations of the four-potential just as of the gravitational potential, which counters your earlier objection that the gravitational potential cannot be fundamental for this reason. Furthermore, nothing rules out the gravitational potential as being more fundamental than the space-time metric. As such, there is a possibility that non-diagonal metrics are improper because they do not preserve the definition of gravitational potential. Since no one has the unified field theory yet, I see no reason to rule this out nor to accept Einstein's field equations as the final answer.
Consider this: if the strong equivalence principle is valid for microscopic general relativity then it must be invalid for macroscopic general relativity and vice-versa. For metric field theories coupled to a stress-energy tensor, the macroscopic equations will have dipole radiation if the SEP is violated. Furthermore, this radiation in the context of a charged particle is exactly equivalent to bremsstrahlung. It is therefore highly likely that gravitational waves do not exist, as any radiation is instead in the form of electromagnetic or electroweak bremsstrahlung. Thus the reason why researchers had to decrease expected event rates by 100x post-2000 is that we are getting a null result; otherwise gravitational waves at present would be ruled out at the >6 sigma level. In fact, if advanced LIGO/Virgo never detect gravitational waves, this will end our debate; i.e. it will show that the gravitational potential is fundamental and GR should be applied on a per-particle basis if assuming SEP (or in the form of macroscopic equations that violate the SEP and are properly coupled to electromagnetic and electroweak fields).
I have added a condition about Newtonian limits which I think restricts the question to what I was originally trying to ask. OK, obviously any metric theory will act on matter and energy equally and preserve falling rates. But the added condition requires the global preservation of equivalence, i.e. that observations in an equivalence frame must integrate to the global theory. While this may not restrict it to just one theory, I think it rules out the Schwarzschild metric, because integrated equivalence observations cannot have a potential of the form 1/(1 − 2φ/c²)^(1/2). It will have a numerator form and a positive sign, 1 + φ/c².
Michael you said: "The central assumption for a single particle without any other sources or fields is a 1/r gravitational potential" yes ok... So I understood well, it it assumed that in the strong field the conditions are the same, the law is always that.
I seriously doubt that the contribution of the single particle in a strong field can be the same. As the Schwarzschild metric works only for weak fields, the configuration of elementary potentials proposed by you should work differently where the mass-energy contribution is very strong. Imagining that space-time always preserves that behavior, considering the non-linearity of the Einstein equations, raises some doubts for me. Unless somebody is able to show that the Einstein equations are not valid.
Stefano,
Yes, this is the strong equivalence principle, when gravitational physics is locally independent of space-time curvature, i.e. the laws will be the same in both weak and strong fields (the EEP covers all other laws like electrodynamics). In terms of a stress-energy tensor, Einstein’s field equations achieve this. However, this is a continuous source term that acts on a macroscopic level. It is this application of the strong equivalence principle to a continuous source term rather than to each particle that is responsible for event horizons, curvature singularities and gravitational waves in Einstein’s field equations. Therefore, my central point is that if the SEP is valid microscopically then it is invalid macroscopically and vice-versa.
Consider the physical case in a lab where I have two masses composed of N atoms each. One would naturally want to test the gravitational force between two particles, but this is unreasonable experimentally due to the magnitude of force. Instead I have two spherical objects where I know exactly how many particles are in each. If the strong equivalence principle is violated on a per particle basis, then the gravitational constant (G) as measured from this experiment will vary depending on its place in space-time. Note that experimentally we do not work in the form of a stress-energy tensor but instead with discrete particles. Furthermore, the test particle case (geodesic equations) directly implies that this is how general relativity should be formulated, i.e. in the form of microscopic equations with EEP and SEP. This is the basis of any metric theory with EEP/SEP in the test particle case, i.e. all of the test particle’s fields including gravitational potential will follow distance as defined by the space-time metric (ds). Unfortunately this leaves very little wiggle room for Einstein's field equations, because now they must be fully consistent between the macroscopic case (stress-energy tensor) and microscopic case (test particles) which is physically impossible.
This brings me to my second point. Let's assume the SEP is valid microscopically; what would this mean? For some macroscopic (stress-energy tensor) metric Lorentzian theories that violate SEP, there will be dipole radiation instead of quadrupole. Furthermore, if we treat the gravitational potential as arising from an underlying energy density (scalar field), applying a Lorentz transformation to such a scalar field (φ) will naturally predict dipole radiation. For a non-composite massive particle, the scalar field and its electric potential will be directly proportional; more precisely the gravitational and electric potential. Thus gravitational radiation in this sense equates to electromagnetic radiation, i.e. it is the metric curvature induced by the presence of traveling EM waves (which means k = 1 for the proper coupling constant). Also note that in U(1) the dynamics are valid for non-composite massive particles, i.e. electron and positron. For electroweak or composite particles one must go to SU(2) × U(1) or SU(3) × SU(2) × U(1) respectively. However, if you work in the framework of my "theory of everything", there is no need to worry about these aspects in regards to the space-time metric of composite objects. For example, the far-field of a neutron will be equivalent to that of an electron beyond their differences in rest energy (E/r), while all dynamics are implemented via the standard model and geodesic equations.
@Peck:
"Thus if one objects to certain metrics due to coordinate singularities, what is stopping me from applying a similar type of attitude and rejecting all solutions with curvature singularities?"
Then you cannot have a classical theory at all. To remove the curvature singularities, you probably must go to quantum mechanics. Otherwise, they can be shown to be inevitable, given certain initial conditions (Penrose, Hawking). The fundamental thing in general relativity is the spacetime manifold, not the coordinates, nor "potentials". If the manifold is non-singular, then you should not trust coordinates that give a singular result. If the manifold itself is singular, you have to face the problems arising due to this. If the coordinates are singular, but the manifold is not, all arising problems can be avoided by going to better coordinates.
"I also believe that there is convincing evidence that the four-potential and gravitational potential are fundamental. For example, the Aharonov–Bohm effect offers direct proof that the four-potential is more fundamental than the electromagnetic stress-energy tensor. "
That evidence is not convincing at all. Interesting that this fallacious argument stays in the minds of people even though it has long been shown to be unsatisfactory. In any description of observable phase differences in the Aharonov-Bohm effect, we need a closed path, so we have an integral over the magnetic flux through a surface; there is no need for a vector potential. So precisely the part of the vector potential that is not real, because it can be arbitrarily chosen, cancels out. (The part of the description for which the vector potential is useful, on the other hand, is the arbitrary part of the phase, which does not correspond to a physical quantity.) You could use the same kind of argument to "prove" that the wave function is more fundamental than, say, the energy of a particle.
Gravitational potentials are certainly not more fundamental than the metric. They are a relict of Newtonian physics, useful for visualization by Newtonian-trained minds. You can get by completely without them and use only geometric language. And geometry is more fundamental than gravitational potentials or coordinates in general relativity.
Dr. Kassner,
“Then you cannot have a classical theory at all.”
How so? My entire framework is derived from classical theory and the equivalence principles. In fact, it is directly related to electrodynamics due to parameterizing the space-time metric with a scalar field proportional to the electric potential of a charged U(1) particle, i.e. γ_g = 1 + φ and γ_g dS = ds in Planck units (φ = E/r). As previously stated, my framework unifies U(1) with general relativity because the dipole radiation emitted from φ is equivalent to that emitted by a charged particle undergoing bremsstrahlung. Thus the proper coupling constant in the Maxwell-Einstein equations gives k = 1, reducing to my metric in the single-particle case (φ -> φ_A). Exact solutions can be derived where φ at the position of the particle is proportional to E rather than infinite, resulting in smooth Riemann manifolds. This is naturally a more sound theory because it avoids problems like the black hole information paradox.
A² = B⁻² = 1 − 2φ + [φk]² -> 1 − 2φ_A + [φ_A]² = (1 + φ)⁻²
“That evidence is not convincing at all.”
With Feynman’s path-integral formalism, the scalar potential directly changes the phase of an electron’s wave function, which is measurable. For a charged particle I agree the vector potential does not directly correspond to a physical quantity, but for photons or electromagnetic radiation it is the field applied in QED. However, when you begin to apply multiple Lorentz boosts, the vector potential inevitably contributes to the scalar potential. Thus for a fully consistent U(1) theory you must apply the four-potential, which is why Feynman preferred the four-potential over the E and B fields.
Nonetheless, you have managed to avoid every sound argument I've made for gravitational potential being fundamental.
1. The connection between redshift and potential in the Schwarzschild coordinates, which you claim should be ignored because other coordinates exist (even though this is the standard frame for comparing gravitational potential between metric theories in the PPN formalism, i.e. these are the coordinates that naturally arise when a unit rest mass is placed upon the Minkowski background; furthermore, no objection to my "apples to oranges" refutation was provided).
2. The foundations of the geodesic equations and change in particle energy due to gravitational potential.
3. The situation in a lab where I have two test particles relatively at rest, so that the geodesic equations reduce to what is identically described by proper force and gravitational potential; i.e. what is measured by gravitational constant experiments (the standard weak-field reduction is written out just below this list).
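For reference, the standard weak-field, slow-motion reduction referred to in (3) is (a schematic LaTeX sketch, with φ the Newtonian potential):

g_{00} \simeq -\left(1 + \frac{2\varphi}{c^2}\right),
\qquad
\frac{d^2 x^i}{dt^2} \;\approx\; -c^2\,\Gamma^i_{\,00} \;\approx\; -\frac{\partial \varphi}{\partial x^i},

so that for two slowly moving test masses only the gravitational potential enters the measured acceleration.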
I cannot seem to figure out why you are against something that is so standard. Does gravitational potential completely describe the metric for all field theories? No. Does it completely describe some metric theories? Yes. Is it fundamental and shared by all post-Newtonian theories? Yes. It seems that your objection arises from my simple thought experiment of applying the equivalence principle on a per-particle basis rather than to a stress-energy tensor. Of course it's harder to argue against a mathematical fact of infinity through summation versus gravitational potential being fundamental, but the precedent has already been set by the PPN formalism and under case (3) where only gravitational potential matters. As I’ve said, it is more probable that gravitational waves do not exist with respect to current results, but I’ll wait until 2016-2017 when the first full science runs from advanced LIGO/Virgo are complete. If the gravitational potential is not fundamental and my theory is wrong, then they will detect something. Otherwise, there will continue to be no gravitational waves simply because they do not exist as Einstein had described them; i.e. his fatal flaw was coupling a macroscopic source term to a metric field theory and assuming the strong equivalence principle was still applicable.
@Peck:
"How so? My entire framework is derived from classical theory and the equivalence principles. "
I have not looked at your theory in any detail and do not have the time to do so right now. But I guess, it does not solve the usual problem of infinite self-energy appearing with point-like particles. Also, if it has 1/r potentials, it probably has action at a distance, which is incompatible with usual causality considerations. I think there may be a difference between what you claim your theory to achieve and what it really does. (As there is a difference between what defenders of Bohmian mechanics claim their interpretation to achieve and what it really does.) Your framework may turn out to be inconsistent. I would even say, most likely it will.
"With Feynman’s path-integral formalism, the scalar potential directly changes the phase of an electron’s wave function which is measurable."
No. Phases of wave functions are not measurable. Phase differences are. Neither the scalar potential nor the vector potential is measurable. The vector potential is determined only up to an arbitrary additive gradient, which you can choose as a gauge. The scalar potential is determined only up to an additive time derivative of an arbitrary function, which again is fixed only after choosing a gauge.
"it is the coordinates that naturally arise when a unit rest mass is placed upon the Minkowski background."
No. Gullstrand-Painlevé coordinates were directly obtained from a solution to the field equations for that case, not just by a coordinate transformation. If Schwarzschild had died one year earlier (and Droste not developed his solution independently) the Painlevé solution might have been the first, and nobody might have worried about coordinate singularities for a long time.
What you don't seem to be willing to accept is that there are no preferred coordinates, since physics is independent of coordinates. As long as coordinates do not become singular, you can use whichever set you want. When they become singular (without the Riemann curvature becoming singular), it is preferable for practical reasons to use non-diverging ones, but only in that region. However, you must account for the fact that in one set of coordinates the external time to fall into the horizon diverges whereas in another set it does not. The only way to reconcile these facts is to recognize that simultaneity at a distance is not a physical concept. Time coordinates have an indeterminate element akin to potentials. The time difference until the horizon is reached does not have physical meaning, whereas the spacetime interval does.
Regarding the question of fundamental fields and non-fundamental ones, you should maybe take into consideration, how the various fields enter the body of theory.
Simple cases first: electrodynamics. This may be introduced as follows. The solutions of the Dirac equation are determined only up to a global phase, they are physically invariant under a change of this global phase. They are not invariant under a change of local phase, because momentum is connected with the gradient of phase. Since absolute phases are, however, unobservable, it may be desirable to have a formulation of the theory, in which there is invariance with respect to a local phase change. That can be achieved by introducing an additional field Aμ, which is the gradient of some scalar and added, supplied with a coupling constant, to the momentum, so momentum changes due to phase changes are compensated by that gradient field. The canonical momentum becomes unobservable this way but the kinetic momentum, given by pμ -q Aμ remains a valid physical variable. The gradient defining the momentum operator becomes replaced by a covariant derivative for kinetic momentum. Now the condition that Aμ is a gradient field reads ∂νAμ-∂μAν=0, and as long as it is satisfied, the new gauge invariant formulation of the Dirac equation is strictly equivalent to the old one without local gauge invariance. However, if we require that the curl expression ∂νAμ-∂μAν is a dynamical object of the theory, i.e. if we reify it, then we have a new theory. We need, of course, equations of motion for the new object Fμν, which we obtain from symmetry considerations, postulating an appropriate Lagrangian. This way we obtain the Dirac equation plus the equations of the electromagnetic field and their coupling. But the quantity that is the new fundamental object of the theory, is the field strength tensor Fμν, not the auxiliary field Aμ that has no physical meaning at all prior to the introduction of the electromagnetic field tensor.
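In compact form (ħ = c = 1; sign conventions differ between textbooks), the construction just sketched reads:

\psi \to e^{\,i q \alpha(x)}\,\psi,
\qquad
A_\mu \to A_\mu + \partial_\mu \alpha,
\qquad
D_\mu \equiv \partial_\mu - i q A_\mu,
\qquad
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,

so that D_\mu\psi transforms like \psi itself and F_{\mu\nu} is invariant under the gauge transformation.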
Now for general relativity, we can proceed the same way by starting not from the Dirac equation but from special relativity. Require special relativity to be invariant under arbitrary coordinate transformations. That gives you the metric tensor and a condition between derivatives of its components that can concisely be cast into the form Rμνρσ = 0. As long as you require this quantity (which is the Riemann curvature tensor) to be zero, you are still in the realm of special relativity, albeit formulated in an arbitrary gauge (coordinate transformations are the gauge transformations). However, you can get a new theory by requiring the curvature tensor (and only it!) to be a dynamical physical object. You need to postulate a dynamics for it. Symmetry and simplicity suggest the Einstein-Hilbert action. Comparison with the Newtonian limit gives the only free constant. Then you have general relativity. And you know what is the fundamental object of general relativity, by construction: not the metric, not the coordinates, but spacetime curvature.
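Written out schematically, the two steps are:

R^{\mu}{}_{\nu\rho\sigma} = 0
\;\Longleftrightarrow\;
\text{special relativity in arbitrary coordinates},
\qquad
S_{\mathrm{EH}} = \frac{c^4}{16\pi G}\int R\,\sqrt{-g}\,\mathrm{d}^4x
\;\Longrightarrow\;
G_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu},

where the field equations follow from varying the Einstein-Hilbert action together with the matter action, whose variation supplies T_{\mu\nu}.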
Fundamental objects should correspond to observable quantities, IMHO. Others may be auxiliary and useful, but they are not fundamental.
Dr. Kassner,
“But I guess, it does not solve the usual problem of infinite self-energy appearing with point-like particles.”
Well my framework does not view particles as point-like, it simply allows them to be treated as such. For example, say that instead of the space-time metric (gμν) being fundamental, it is instead a metric manifold of each particle (kμν) that is deformed so that g^ik k_ij = δ^k_j. Thus instead of tracking the dynamics of a bunch of fields, one can introduce a space-time metric so that the deformed particle’s field(s) will locally represent the Minkowski version via g^ik k_ij = δ^k_j; i.e. one can determine the dynamics of a localized entity from its classical (point-like) position. Although I do not believe QFT is the final theory, I make no changes to the standard model of particle physics. The only changes were with respect to one or two fundamental assumptions in general relativity, which I address in the very last question of this post.
However, I do not quite understand how you claim causality is violated by having action at a distance via extended fields. Naturally one would assume that causality is violated by having faster than light messenger particles rather than localized objects, but due to abusing the uncertainty principle virtual particles are widely accepted as something real. For example, the uncertainty principle has to do with making measurements in an infinite Hilbert space (related to the uncertainty principle in Fourier transformations). More precisely, it is saying that if we know the value of one conjugate variable then there will be a spread in values for the other conjugate variable. What it does not say is that energy or information can travel faster than the speed of light and I know many others dislike the perturbative virtual particle paradigm for this reason. In fact, if one sticks with subluminal theories the only logical explanation would be that particles are not point-like, but instead extended objects.
“No. Phases of wave functions are not measurable. Phase differences are.”
Changing the phase of an electron’s wave function leads to something that is measurable and this is due to the scalar potential. But the central point of the Aharonov–Bohm effect is whether or not the four-potential is physically real rather than a mathematical construct. However, this is irrelevant in regards to Einstein’s and the strong equivalence principle. Whether one describes things in terms of force or potential, the resulting dynamics must be locally the same. Therefore, even if the four-potential or electromagnetic field is a mathematical construct, they still describe the laws of nature in a locally invariant way. This is my central point in regards to gravitational potential and the strong equivalence principle. When treated in a fully relativistic manner, gravitational potential describes the laws of nature up to a point depending on the theory of general relativity. For my theory, gravitational potential fully describes diagonal space-time metrics.
“No. Gullstrand-Painlevé coordinates were directly obtained from a solution to the field equations for that case, not just by a coordinate transformation.”
But the Gullstrand-Painlevé coordinates are not based upon a Minkowski space, as the metric is non-diagonal. Thus when placing a unit rest mass upon a Minkowski space via Einstein’s field equations, the result is the Schwarzschild metric. When you change to non-diagonal metrics, you fundamentally change the gauge and definition of gravitational potential; i.e. it would no longer be based upon a Minkowski space.
“What you don't seem to be willing to accept is that there are no preferred coordinates, since physics is independent of coordinates.”
Just because physics is independent of coordinates doesn’t mean a more fundamental coordinate system can’t exist. In the PPN formalism, one is restricted to diagonal metrics because otherwise it would be like comparing apples to oranges. Regardless of whether more fundamental coordinates exist, different theories must be put on equal grounds when compared. If I take a Minkowski space [-1, 1, 1, 1] and place a unit rest mass upon it with respect to Einstein’s field equations, I get the Schwarzschild solution. If I take my theory relative to a Minkowski space and place a single particle upon it (unit rest mass), I will get a metric that can be directly compared to the Schwarzschild coordinates in terms of potential energy, redshift and several other classical variables (i.e. PPN parameters). Thus when I begin to calculate per-particle solutions relative to this Minkowski space and event horizons no longer form, a problem arises. The problem is a disagreement between the microscopic and macroscopic application of the SEP and has nothing to do with preferred coordinates. In my theory, event horizons cannot exist relative to distant observers while in EFEs they can, a fundamental difference.
“Regarding the question of fundamental fields and non-fundamental ones, you should maybe take into consideration, how the various fields enter the body of theory.”
Yes, I understand your point as to how QFT or QED can be formulated with either Aμ or Fμν (E and B) as pointed out by Vaidman. However, I wouldn’t go as far as to say that Aμ has any less physical meaning than Fμν. They both determine the outcome of experiments and this is all that is required for the equivalence principles. The same can be said for gravitational potential in the sense of two particles/masses relatively at rest in weak or strong gravitational fields, i.e. it is the sole determinant of gravitational experiments for this case. Even though this is not applicable to some cases in EFEs, it is definitely valid for the particular case where we experimentally measure G.
“Now for general relativity, we can proceed the same way”
The problem is that reality does not exist in the form of a stress-energy tensor, but instead as discrete particles. How can one expect a theory based upon an approximation to give proper answers? If you can prove that matter physically exists in the form of a stress-energy tensor rather than discrete particles, then I will promptly concede my view.
Klaus, I would also object to “Then you cannot have a classical theory at all.” It should be, of course, "Then you cannot have classical GR at all". I have not proven any theorems of global existence, and do not even believe they will hold, but at least the black hole and big bang singularities do not appear in my theory, which is also a metric theory of gravity.
Then, I believe, you have not understood the seriousness of the Bohm-Aharonov effect. Ok, one cannot measure the potential. But if you want to describe all observables, and do this in a non-simply-connected space, the field strength is no longer sufficient. You have to introduce complex nonlocal objects into the theory - something many orders of magnitude more complex than the potentials. And even without this, the theory written in terms of A_m is much simpler than the theory written in terms of the F_mn. It is sufficient to look at the interaction term, which is local in A_m but should be nonlocal (and I would not even know how to write it down in a simple way) in terms of F_mn.
The same about gravity. How do you plan to write down the observable clock time, which is an integral over the metric, in terms of curvature?
Last but not least, what is your point against Bohmian mechanics? Where do you think the proponents claim to achieve something which they don't reach? (Unfortunately completely off-topic here, so I don't know what to do, but if there appears something to be discussed in detail we could start a new discussion for this. Same for Aharonov-Bohm effect.)
Ilja, I agree with you; it would be worth opening a separate section for the Aharonov-Bohm effect and maybe involving Yakir Aharonov himself, who is still working at Haifa University.
Ilja,
"Klaus, I would also object against “Then you cannot have a classical theory at all.” It should be, of course, "Then you cannot have classical GR at all". I have not proven any theorems of global existence, and even don't believe they will hold, but at least the black hole and big bang singularities do not appear in my theory, which is also a metric theory of gravity."
In your theory, the event horizon already is singular, which is worse, I would say. (If you have incomplete geodesics in a metric theory, it has singularities.)
"Then, I believe, you have not understood the seriousness of the Bohm-Aharonov effect. Ok, one cannot measure the potential. But if you want to describe all observables, and do this in a non-simply-connected space, the field strength is no longer sufficient."
Maybe it is you who have not understood that the vector potential can be completely eliminated from the description of the Bohm-Aharonov effect? The field strength is sufficient. The only thing that appears in the formulas describing observable phase differences is the flux of the magnetic field through a closed loop. The vector potential is a convenience. It helps to simplify the description, but it could be eliminated from the theory, in principle.
"The same about gravity. How do you plan to write down the observable clock time, which is an integral over the metric, in terms of curvature? "
Well, the observable clock time is not only an integral over the metric. Even in flat spacetime, you can have different clock times along different paths in the same metric. In any case, why should I render my life more difficult by avoiding all quantities that are not gauge-invariant and hence do not describe physical objects? I do not want to give up coordinates either.
"Last but not least, what is your point against Bohmian mechanics?"
Bohmian mechanics simply is out. It has been shown that the Bohmian trajectories are metaphysical rather than realistic. A number of (thought) experiments have been devised that allow one to non-destructively measure particle positions and to find two successive positions that do not lie on a Bohmian trajectory [1,2,3]. (Some of these may have been performed in the meantime, but since I do not doubt the correctness of the ordinary quantum mechanical description, I have no doubt about the outcome anyway.)
Hence, the only way Bohmianists can now pretend the trajectories they calculate to be realistic, is to assert that only destructive position measurements are position measurements and hence that only the endpoint of a trajectory gives you a measurable position. That is just too incredible for me.
[1] M. O. Scully, Do Bohm Trajectories Always Provide a Trustworthy Physical Picture of Particle Motion?, Physica Scripta T76, 41-46 (1998).
[2] Y. Aharonov, L. Vaidman, About position measurements which do not show the Bohmian particle position, arXiv: quant-ph/9511005v1
[3] Y. Aharonov, B-G. Englert, M. O. Scully, Protective measurements and Bohm trajectories, Phys. Lett. A 263, 137-146 (1999).
Ilja,
Have you been able to find spherical solutions to your field equations and do these predict coordinate singularities relative to a flat Minkowski reference frame? I have been searching for the proper macroscopic equations (stress-energy tensor) with respect to my microscopic solutions, where the constraints are dipole radiation rather than quadrupole and coordinate singularities cannot form. I think we are on the same page with theories predicting coordinate singularities being less desirable than those without (standard Minkowski reference frame) due to the various complications and paradoxes that arise.
Unfortunately, the physicality of 1/r potentials will not become clear until next generation experiments fail to detect gravitational waves. This will indicate that the SEP cannot be applied on a macroscopic scale, which is the cause of coordinate singularities and quadrupole gravitational radiation in EFEs.
Nonetheless, if you check out my recent publication (http://adsabs.harvard.edu/abs/2014AstRv...9c...4P ), I have demonstrated that the inferred accelerated expansion of the universe is an illusion due to local geodesics deflecting into a cosmological-scale gravitational potential. Thus with distant galaxies and clusters accelerating into the potential, our local projection of the universe provides the illusion of accelerated metric expansion (local redshift is more complicated, but fully consistent). Combining the consequences of this with my work on microscopic general relativity, there is strong evidence that EFEs are wrong and event horizons do not exist in nature.
Klaus, to be honest, I'm a little bit frustrated by a part of your answer.
Do you really think that by writing "the observable clock time, which is an integral over the metric," I have in mind something different than the integral over the trajectory of the particular clock? So I take this answer as a suggestion that I'm stupid, and, by the way, the point itself has not been answered at all.
Then, I have explicitly written that the field strength is not sufficient in a "non-simply-connected space". I would assume that you know this.
Of course, you can ask me why I would care at all about such spaces - in my theories the only spacetime is R^4. But this is a question for you - you believe in a theory which allows non-trivial topologies, so you would have to care about this if you take your own theories seriously.
By the way, if you use B-A in my preferred variant, with a toroidal coil, and exclude the inner part of the coil from the consideration (say, because it is unreachable by the electrons outside) it appears that their observable properties depend on the EM field in such unreachable parts.
This is, of course, less strong than the variant with a real non-simply-connected space - something like an artificial one, created by exclusion of unreachable parts - but, compare the very argument with your argument against Bohmian mechanics. A quite similar point, something observable depends on the wave function at points where no Bohmian particle is. Here, EM effects in regions where F_ij = 0 depend on the F_ij at places where none of the measured electrons is.
About all this weak measurement business: I do not take it seriously. In the other direction too - I have seen some claim about a weak measurement of "particle velocity" which claims to give the Bohmian velocity - and, guess what, I immediately ignored it as misguided. Quantum measurements measure properties of the quantum wave function, and nothing else. If it measures |psi(x)|^2, fine, it measures |psi(x)|^2, even if the Bohmian particle is at y =/= x.
The role of the configuration is a completely different one. It is our own state, our own universe, which is described by the configuration.
Ilja,
"Klaus, to be honest, I'm a little bit frustrated by a part of your answer.
Do you really think that by writing " the observable clock time, which is an integral over the metric,", I have in minds something different than the integral over the trajectory of the particular clock?"
Sorry, but I think this is due to your unclear formulation. I thought you had in mind an integral such as ∫sqrt(g_00) dt to emphasize the coordinate dependence of the concept. Had you spoken of an integral of the line element instead of the metric, which would have been more correct, my answer would have been that I am not aware of ever having claimed that the curvature is the only objective concept in relativity. The line element is an invariant, so it is as objective as the curvature tensor. The coefficients of the metric are coordinate dependent, so they should not be considered physical (nor the potentials arising from them). The line element is a coordinate independent combination, so the proper time is independent of coordinates or gravitational potentials. It is another objective quantity besides the curvature tensor. A scalar one.
"Then, I have explicitly written that the field strength is not sufficient in a "non-simply-connected space". I would assume that you know this."
I would assume you know that this is but an interpretation. There are claims to the opposite. And in fact, AB can be explained without reference to the potential. Here is how. The quantity that is important for the interference pattern is the phase of the wavefunction. Even if the amplitude of the wave function is zero in the region where the B field is non-zero, its phase need not be zero there. (It can be continued across the region where the B field is nonzero, and it makes a difference for the total wave function whether the phase becomes singular in that region or can be continued smoothly.) In the final formula describing the measurable phase shift, we do not need the vector potential; everything is expressible by the B field. Hence neither the explanation nor the calculation has to refer to the vector potential. It is convenient to do so, no doubt.
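Indeed, for the magnetic version of the effect the observable phase shift can be written, via Stokes' theorem, purely in terms of the enclosed flux:

\Delta\varphi
= \frac{q}{\hbar}\oint_{C}\mathbf{A}\cdot\mathrm{d}\mathbf{l}
= \frac{q}{\hbar}\int_{S}\mathbf{B}\cdot\mathrm{d}\mathbf{S}
= \frac{q\,\Phi_B}{\hbar},

so the gauge-dependent part of the vector potential drops out of anything measurable.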
Anyway, my point is not so much whether A or B are more fundamental, I find that question only mildly interesting (Feynman for example, considered potentials more fundamental, but Feynman also had no problems using advanced potentials in electrodynamics, i.e. causation backwards in time, as it were). My point was that the vector potential is not a physical object, but the field strength is. An object is characterized by the property of objectivity, i.e. independence of the observer (in a wider sense: the descriptions of different observers may differ but they will agree on the result after having accounted for the transformations between their frames of reference). This is not the case with quantities, of which the observer can freely choose a part without changing the contents of what is observable. To argue that the Aharonov-Bohm effect is not a counterexample, does not require to argue in favor of an alternative description. It is sufficient to demonstrate that the description can be made to use only gauge-independent observables. Now the quantum mechanical description always contains the kinetic momentum p-q A, which is an observable, whereas neither the canonical momentum p nor the vector potential A are gauge-independent. The requirement of local gauge invariance, i.e. invariance of the physics under an arbitrary local change of the phase of the wave function, which leads to the introduction of electromagnetic fields into QED, leaves the probability distribution for position measurements unchanged. Without introduction of the gauge field, however, the distribution for momentum measurements which are directly related to phase gradients, would be changed. The gauge field A prevents this, at the price that the canonical momentum becomes unmeasurable and only the kinetic momentum remains as a physical variable. But in the description of the two branches of the Aharonov-Bohm wave function it is precisely this quantity that appears. So all is well. Neither p nor A need to be defined, if p-q A is.
"Quantum measurements measure properties of the quantum wave function, "
Not really. Quantum measurements measure properties of the observables that are measured. Also, the standard interpretation of Bohmian trajectories, before the weak measurement business came up, was that the point where the particle was measured on the screen indeed corresponded to the endpoint of its realistic trajectory. So it was not just a property of the wave function that the measurement revealed.
But I was never convinced that the Bohmian interpretation lived up to its promise, probably because I never encountered a really sharp defender of it. None of its defenders had the analytic capabilities of a Bohr, say.
Here is a typical example of an unconvincing line of reasoning. Bohmian mechanics claims to be deterministic and some of its proponents claim there is no collapse of the wave function. They say the Bohmian position of the particle determines the outcome of a position measurement and afterwards you can throw away the empty part of the wave function. So no collapse. My point is then, if throwing away the empty part of the wave function is just an option, you could also keep it. But this would change the statistics of subsequent measurements. So it is not an option, you have to throw away the empty part of the wave function after a measurement, i.e. there is a collapse. Maybe the theory is not as deterministic after all... Fortunately for the Bohmianists, a position measurement means that the particle has been absorbed, so my counterargument can never be tested. But wait, now we do have position measurements, that restrict a particle only to a large volume and do not destroy it. And it turns out that Bohmian mechanics gives wrong results, if you do not throw away the empty part of the wave function after such a measurement...
Which-way measurements are routinely used in experiment to switch off the interference in two-slit experiments. And you can be hundred percent sure that if such a measurement tells you a particle has taken the left slit that you will find it in a destructive measurement there, too. (This can in fact be tested, probably even has been tested, by blocking the particle's path each time it has passed the which-way detector and it will always register on the blocking screen of the side indicated by the which-way detector. So these measurements give reliable information on the path of particles.) But Bohmian trajectories do not agree with the general course indicated by these measurements. Hence, they cannot be a realistic property expressing position. The term that has been coined for this is "surrealistic", and it seems as appropriate as "metaphysical" to me.
The semester starts here in Magdeburg on Monday, so I will not be able, due to teaching obligations, to follow this thread as closely anymore as before.
[1] L. Vaidman, Physical Review A 86 (4), 040101.
"The line element is a coordinate independent combination, so the proper time is independent of coordinates or gravitational potentials. It is another objective quantity besides the curvature tensor."
Fine, almost agreement, except for the minor point that the line elements also give you the curvature tensor (thus, the curvature tensor is not really "another" quantity).
For the AB-effect: "It can be continued across the region where the B field is nonzero and it makes a difference for the total wave function whether the phase becomes singular in that region or can be continued smoothly"
I know. That's why I have named this "less strong than the variant with a real non-simply-connected space", where you do not have such a possibility for continuation.
"My point was that the vector potential is not a physical object, but the field strength is."
I would say this depends on the theory. A theory which considers the potential as real but the field strength only as some derived property is also possible. The question of what can be freely chosen is also theory-dependent; in a theory which considers the potential as real there will be, of course, also an equation for the potential. □A_m = 0 together with ∂_m A^m = 0 is, as a set of equations, much simpler than the Maxwell equations in F_mn, and I have never seen a reasonably simple expression for the interaction between fermions and a gauge field expressed in terms of the F_mn. So this is, for me, an archetypical example of bad philosophy (positivism: only observables are physical) leading to bad (more complicated) physics.
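To make the comparison concrete (signs and units depend on conventions): in the Lorenz gauge the Maxwell equations collapse to decoupled wave equations, and the fermion coupling is local in the potential,

\partial_\nu F^{\nu\mu} = j^\mu
\;\;\xrightarrow{\;\partial_\mu A^\mu = 0\;}\;\;
\Box A^\mu = j^\mu,
\qquad
\mathcal{L}_{\mathrm{int}} = -\,j^\mu A_\mu = -\,q\,\bar\psi\gamma^\mu\psi\,A_\mu,

whereas, as argued above, the same interaction written in terms of F_mn alone would be nonlocal.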
About dBB theory: I agree there is a lot of unconvincing reasoning in the dBB community too. Some really have a hope that one can develop some Lorentz-covariant version of it, despite Bell's inequality ....
But dBB theory is, of course, deterministic, as can be seen from the equations. And there is no need to throw away any branches. All one needs for the collapse is the concept of the effective wave function for some subsystem. This effective wave function depends on the configuration of the environment: psi_eff(q_system) = Psi_universe(q_system, q_environment). And this effective wave function contains, automatically, only the "branch" which corresponds to the measurement result, because the measurement result is part of q_environment.
The "surrealistic" trajectories do not match naive classical expectations about trajectories as well as about what is measured in quantum measurements, that's all. This has nothing to do with not being realistic in the sense used in Bell's theorem.
So we have a rather paradoxical situation here: On the one hand, you reject the very very general and abstract concept of realism and causality as used in Bell's theorem - an extremely weak notion of realism which is quite trivially fulfilled by dBB theory.
On the other hand, you reject dBB theory for not fitting much more rigorous ideas about "realism" like that it should fit classical ideas about trajectories and should not have "surrealistic" trajectories.
The other way seems more reasonable. We have at least one theory which is realistic and causal in the abstract, weak sense used in Bell's theorem - fine, that means these principles may be preserved. We don't like particular properties of this theory, like "surrealistic trajectories"? OK, such is life. Try to find a better one, with "realistic" trajectories (whatever this means). But a theory which gives up even the weak notion of realism and causality preserved by dBB theory is clearly nothing "better" in this sense.