John Archibald Wheeler said: “Empty space is not empty.” However, there are many different models of the energy content of the vacuum. One extreme position is that the only energy density present in the vacuum is dark energy, which is about 6x10^-10 J/m^3. The standard model has 17 named particles, and each particle has its own field which fills all of spacetime. For example, the Higgs field is one of these fields, and the energy density of this field has been estimated at about 10^46 J/m^3. Quantum chromodynamics also requires an energy density at least this high. Field theory has zero-point energy, where the vacuum is assumed to contain harmonic oscillators with energy E = ½ħω at all frequencies up to the Planck frequency. This implies a Planck energy density of about 10^113 J/m^3. This is often assumed to be impossible, but the argument can be made that general relativity implies that spacetime has an impedance equal to c^3/G ≈ 4x10^35 kg/s. This tremendously large impedance is consistent with the vacuum having Planck energy density. http://onlyspacetime.com/QM-Foundation.pdf
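For readers who want to check these orders of magnitude, here is a minimal numerical sketch (Python, rounded constants; the zero-point integral assumes the standard mode density ω^2/π^2c^3, so it lands within an order of magnitude or two of the quoted 10^113 J/m^3, depending on cutoff conventions):

```python
import math

G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8      # speed of light, m/s
hbar = 1.0546e-34   # reduced Planck constant, J s

# Impedance of spacetime quoted above: c^3/G
print("c^3/G        = %.2e kg/s" % (c**3 / G))        # ~4.0e35

# Planck energy density: one Planck energy per Planck volume
E_P = math.sqrt(hbar * c**5 / G)    # Planck energy, J
l_P = math.sqrt(hbar * G / c**3)    # Planck length, m
print("E_P / l_P^3  = %.2e J/m^3" % (E_P / l_P**3))   # ~4.6e113

# Zero-point energy density: sum (1/2)*hbar*omega over all modes up to
# the Planck frequency, giving u = hbar * w_max^4 / (8 * pi^2 * c^3)
w_max = math.sqrt(c**5 / (hbar * G))                  # Planck angular frequency
u_zpe = hbar * w_max**4 / (8 * math.pi**2 * c**3)
print("zero-point u = %.2e J/m^3" % u_zpe)            # ~5.9e111

# Dark energy density quoted above, for contrast
print("dark energy  = %.1e J/m^3" % 6e-10)
```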
We do not interact directly with the energy of the vacuum, but something is giving the vacuum properties such as the constants G, c, ε0, µ0, ħ, etc. Therefore, how do you view the energy density of the vacuum?
Akira: Since your question does not have anything to do with vacuum energy, I will correspond with you privately about it.
A conjecture and question: Since the big bang can be said by some to generate everything from nothing, some of the vacuum energy must be "borrowed", perhaps from the initial fields, which are supposed to have been quite different from those we have now. This amounts to negative energy. So as you add up energy, what's negative and what's positive?
James, You bring up an interesting point about theories that say the Big Bang generated everything from nothing. This implies that half the energy in the universe is negative energy. In my view, this cannot be correct because this implies that it should be possible to violate the conservation of energy. For example, adding together equal amounts of positive and negative energy should produce an annihilation of both entities. Matter and an equal amount of negative energy would disappear without a trace. Some might say that there is no violation of the conservation of energy, but an actual experiment would appear to be a violation.
In my view the energy density in the vacuum equals Planck energy density. The impedance of spacetime (c^3/G) supports this. Furthermore, the multiple fields of the standard model can be united into a single spacetime field with multiple resonances, which the standard model now considers to be discrete fields. In this view the vacuum energy density currently exceeds the observable energy density in fermions and bosons by a factor of about 10^122 if we average over the observable universe. These concepts are explained further in a paper to be published next month. For a preprint: http://onlyspacetime.com/QM-Foundation.pdf
John,
In my unified field theory, vacuum energy density ends up being directly proportional to gravitational potential. For example, if there are no particles or free fields then there is no vacuum energy. On the other hand, a stationary single particle would have vacuum energy approximated by E/r. Realistically the value should be proportional to the classical energy of the particle at its classical position, so a better approximation would be (in Planck units),
vac_E = E*sqrt[(sin(r)/r)^2 + (-sin(r)/r^2 + cos(r)/r)^2]
To connect this with general relativity, I introduce a general Lorentz scalar (y_g) defined by,
y_g = 1 + vac_E*G/c^4
y = 1 + delta_E/E_0
In conclusion, trying to give a constant value for vacuum energy density appears to be the wrong path. Instead, vac_E defines the space-time metric and thus acts as a medium. As vac_E increases at the classical position of a particle, it causes time-dilation. As a particle moves onto an increased background level of vac_E, it experiences time-dilation.
To visualize this, treat vac_E as the Hamiltonian energy density of an underlying Planck-scale spring-mass system. If the “masses” are at rest and the springs are at equilibrium, then there is no vac_E. On the other hand, if the related vector and scalar fields are non-zero (i.e. the masses are not at rest and/or the springs are not at equilibrium), then this energy density will act as a medium for any other waves traveling through the spring-mass system. Of course, the springs and masses here are not classical, but instead related to space itself at the Planck scale; i.e. points in space are moving back and forth at the smallest of scales.
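For concreteness, a toy numerical evaluation of the formulas above (a minimal sketch in Planck units with G = c = 1; these are the conjectured expressions from this post, not established results):

```python
import math

def vac_E(r, E=1.0):
    """Conjectured vacuum energy density around a single stationary
    particle of classical energy E (Planck units, illustrative only)."""
    a = math.sin(r) / r
    b = -math.sin(r) / r**2 + math.cos(r) / r
    return E * math.sqrt(a**2 + b**2)

def y_g(r, E=1.0):
    """General Lorentz scalar y_g = 1 + vac_E*G/c^4, with G = c = 1."""
    return 1.0 + vac_E(r, E)

for r in (0.5, 1.0, 5.0, 50.0):
    print(f"r = {r:5.1f}   vac_E = {vac_E(r):.4f}   y_g = {y_g(r):.4f}")
# For large r the bracket tends to 1/r, recovering the E/r estimate above.
```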
Remi, Various people have different perceptions of vacuum energy. I happen to believe that it is possible to prove that zero point energy and vacuum perturbations are related. You are correct that energy cannot be extracted from zero point energy. In the previously cited paper I give equations which describe the distortion of spacetime (wave amplitude), frequency, energy density and the impedance of the spacetime when it is treated as an acoustic medium.
Michael, I can understand how it is possible to arrive at some of your conclusions. You said "vacuum energy density ends up being directly proportional to gravitational potential. For example, if there are no particles or free fields then there is no vacuum energy." I answer that all spacetime has an impedance of c^3/G. When a medium has impedance, it has the ability to propagate waves. The medium must be able to absorb energy and return energy to a wave. Gravitational waves undoubtedly propagate in the medium of spacetime, and they do not require proximity to matter. It is very hard to detect gravitational waves because the impedance of spacetime is so large that a wave with a very large intensity still produces a very small displacement of spacetime. The tremendously large vacuum energy density creates the large impedance and also determines the value of other constants such as c, ε0, ħ, etc. Matter distorts this background energy density. Therefore, what you consider to be spacetime with zero energy density, I consider to be a uniform, homogeneous background condition created by vacuum energy when it is not distorted.
Zero-point energy beyond ordinary quantum mechanics is plagued by unphysical anomalies. For example, there can be an infinite amount of zero-point energy in a finite amount of volume within QFT, requiring renormalization. In QFT the zero-point energy is the expectation value of the Hamiltonian; however, I arrived at my work independent of this. The Hamiltonian applied in my theory is only proportional to the Hamiltonian found in conventional physics at a particle’s classical position. This is because it is an emergent theory. For example, gravity is due to vacuum energy density acting as a medium, while all other forces emerge from the underlying Planck-scale time-dependence analogous to a spring-mass system. This is the central problem I see with the standard model, because we have taken classical fields derived from macroscopic dynamics and tried to derive a more fundamental theory with them.
Consider how an emergent theory would work with a single unified field. We have some underlying Planck-scale system analogous to a spring-mass system with localized excitations. In the two-particle case (e.g. two electrons), electrodynamics arises from the time-dependence of the spring-mass system as a whole. There shouldn’t be a separate field for each perturbation we wish to add, or built-in potentials/forces, but instead a single unified field that naturally provides the correct dynamics of our classical representation of physics (gravitation, electromagnetic field, electroweak field, Higgs field, etc.). Of course, mathematically one can get any theory to match observations with enough parameters determined from experiment. This is what has happened to the standard model of particle physics (19 parameters, to be exact).
John,
It is interesting that you mention c^3/G as an impedance, as I’ve only worked with c^4/G relative to an energy density (an energy per unit length). In Einstein’s field equations, the source term represents classical energy densities (energy, momentum, pressure), while I interpret it as vacuum energy density. I do agree that impedance is necessary for the propagation of waves on media.
However, I can prove that Einstein made serious errors in formulating general relativity. His equivalence principles were on the mark, but his implementation of them was not. The problem originates because matter does not exist in the form of a stress-energy tensor, but instead as discrete particles with extended fields. For example, consider a single particle and the strong equivalence principle: any gravitational experiment must be independent of its location in space-time. Now consider many particles that together create an object. With any metric theory, the classical potential of each particle must follow the space-time metric induced by all other particles under consideration (the test-particle case). Assuming that each particle by itself has a potential of 1/r due to three spatial dimensions (lacking an event horizon relative to the Schwarzschild metric), it would be physically impossible to create an event horizon without an infinite amount of classical energy. For example, what we would have is a finite summation of finite potentials, where one applies the transformation r -> r’ to each particle's potential. As energy or mass goes to infinity, the potential of composite objects instead becomes more point-like relative to the Minkowski reference frame, as the sums and the short numerical sketch below illustrate.
Newtonian: 1/r1 + 1/r2 + 1/r3 + … = finite
General Relativity: 1/r1’ + 1/r2’ + 1/r3’ + … = finite
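A toy numerical illustration of that claim (a sketch under my assumptions, units G = c = 1, using the 1/r-potential form of g_tt discussed later in this thread): the time-time metric coefficient only approaches zero as the summed potential diverges, so an event horizon would require infinite potential.

```python
# Each particle contributes a finite phi_i = m_i/r_i at the field point;
# with g_tt = -(1 + sum(phi))**(-2), g_tt -> 0 only as sum(phi) -> infinity.
for total_phi in (1.0, 10.0, 1e3, 1e6):
    g_tt = -(1.0 + total_phi) ** -2
    print(f"sum(phi) = {total_phi:9.0e}   g_tt = {g_tt:.3e}")
# No finite sum of finite potentials drives g_tt to zero exactly.
```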
At the moment, I have a classical unified field theory (macroscopic) that connects general relativity with electrodynamics/electroweak; I actually arrived at it by starting with solutions to the Einstein-Maxwell equations (my microscopic work is in "The Theory of Everything"). It turns out that the scalar potential is fundamental to massive particles, while the vector potential is fundamental to EM radiation or the photon, at least when considering their contribution to the space-time metric. In its current formulation, the theory is gauge invariant and thus completely valid. The results are no event horizons and no gravitational waves. Instead, it is possible to have traveling metric waves with regard to, say, bremsstrahlung (dipole radiation), i.e. metric curvature induced by the presence of electromagnetic radiation. Of course, this also predicts that we will never detect gravitational waves in the modern sense, but I’ll give next-generation detectors a few years to prove this.
Michael,
"At the moment, I have a classical unified field theory (macroscopic) that connects general relativity with electrodynamics/electroweak;
GRT has serious problems with energy conservation and does not use the Poincaré group as quantum physics does. You use gravitational potentials of single entities, 1/r, as in Newtonian theory.
If you don't first grant isolation or external non-interaction, which imply conservation of energy, you don't get anywhere with GRT; the problem is undefined. GRT can't treat the behaviour of non-stationary phenomena, which Newtonian gravitation can.
If your aim is to unify gravitation with QM, that is already a different matter.
Stefano,
Yes, in terms of microscopic energy conservation Einstein’s field equations fail due to underlying non-linearity. Macroscopic energy conservation, however, is built into Einstein’s field equations via the covariant conservation law T^ab_;b = 0.
The gravitational potential determined by the space-time metric is only 1/r for a single particle with respect to the Minkowski reference frame and requires additional terms in Einstein’s field equations. Higher orders exist for the macroscopic theory rather than quantum theory (see equations 163 and 164 in my “Theory of Everything” for insight into 1st order). Implementing the strong equivalence principle at the microscopic scale requires violating it at the macroscopic scale (and vice-versa). I’ve only been able to obtain higher orders through numerical microscopic calculations at the moment, where time-dependence needs to be implemented through the standard model of particle physics and geodesic equations.
I guess you could call it a scalar-vector-tensor theory (scalar potential, vector potential and stress-energy tensor), at least up to 1st order. Take Brans-Dicke for example: it allows time dependence of the source, scalar field and space-time metric. Properly implementing electrodynamics/electroweak/nuclear physics, however, is not so straightforward and requires a non-trivial path. Furthermore, the central problem in unifying gravity and the standard model is the presence of event horizons and curvature singularities in Einstein's field equations (non-renormalizable). Per-particle 1/r' potentials (and the associated space-time metrics) are the only logical choice if one wants a working theory. What Einstein's field equations do is make the potential induced by a source relative to its own space-time metric. Thus there is no base (quantum) unit or 1/r potential, but instead solely the Schwarzschild metric.
Michael,
"Macroscopic energy conservation however is built into Einstein’s field equations via..."
That's a non-local continuity equation, guaranteed only if I choose isolated, intrinsically stable systems which don't let energy in (they may let it out with gravitational waves): the old theme of the pseudotensor, which includes gravitational energy only if the region considered is so wide that the periphery is very far from the sources.
Stefano,
That equation is a local conservation law that balances the flux of energy with the change of energy at a point in space-time. For example, if at a point in space-time the flux of momentum is non-zero, then over time that point in space must lose/gain energy density. I would say that perhaps the conservation of vacuum energy is more of a non-local law with respect to point-like particles and their extended fields. However, at the Planck scale one would expect a local conservation law for the underlying vacuum dynamics and thus vacuum energy (non-local relative to point-like particle interpretations).
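In standard notation, this flux-balance statement is the covariant conservation law; a sketch, with the second expression being its flat-space ν = 0 component:

```latex
\nabla_\mu T^{\mu\nu} = 0, \qquad
\partial_t T^{00} + \partial_i T^{i0} = 0 ,
```

i.e. the rate of change of the energy density at a point is balanced by the divergence of the energy flux there.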
More precisely, the stress-energy tensor is the Noether current for spacetime translations, as any differentiable symmetry of a physical system's action has an associated conservation law. Of course, Einstein’s field equations allow energy from the stress-energy tensor to be transferred into gravitational waves.
The only way to define vacuum energy (density) is through the cosmological constant in general relativity. The reason is that only in this case, where time translations are a local symmetry, can energy be defined in ``absolute'' terms, on the boundary of the spacetime in question. Absent a quantum theory of gravity, however, only the ratio of the cosmological constant and the appropriately rescaled Newton's constant is relevant for describing the solutions to Einstein's equations.
The cosmological constant term is the term, of zeroth order in the derivatives of the metric, that is consistent with all symmetries of the Einstein-Hilbert action. Standard methods, described in all textbooks, allow it to be identified with the vacuum energy density. Test particles don't have anything to do with this issue. (In particular, since the energy-momentum tensor of matter vanishes in the vacuum, where, by definition, there aren't any matter excitations.)
This is one of the major unsolved problems in physics: the huge discrepancy between the dark energy (or cosmological constant, or vacuum energy) predicted in cosmology and responsible for the acceleration of the expansion (10^-46 GeV^4) and the value obtained from particle physics (10^76 GeV^4). Maybe one needs a different explanation for the dark energy.
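That mismatch is the often-quoted ~122 orders of magnitude; a one-line check of the numbers above:

```python
rho_qft   = 1e76     # GeV^4, zero-point estimate from particle physics
rho_cosmo = 1e-46    # GeV^4, value inferred from the observed acceleration
print(f"ratio = {rho_qft / rho_cosmo:.0e}")   # 1e+122
```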
The value computed from particle physics, i.e. from the zero-point contributions of the matter excitations, is known to be suspect, so there's no reason to mention it. The discrepancy is meaningless, anyway, because there isn't any reason to assume that a classical contribution, i.e. that of the cosmological constant, could be expected to have any relation to a quantum contribution, namely the zero-point energy of matter excitations, computed while neglecting the gravitational back-reaction.
Regarding dark energy, all available measurements are consistent with a contribution of the cosmological constant term.
Unfortunately, there are no real grounds for dark energy besides trying to patch up the big bang theory. Since the big bang theory has so many issues, more or less conflicting with a large amount of observational data, there’s simply no reason to rely on such assumptions. Over-reliance on confirmation rather than serious attempts at refutation is known as pseudoscience, because it can lead to false conclusions. The facts are that the CMB does not perfectly fit big bang predictions, but instead that of a non-homogeneous universe (smoothness, hemispherical power asymmetry, large-scale structure, alignments, etc.). Furthermore, angular-scale and volume-element measurements fully support a static metric rather than one undergoing accelerated metric expansion; you can only have one or the other, and these issues are far from being resolved in big bang frameworks (and no, I do not support tired light; only Doppler and gravitational redshift are needed for a fully consistent theory).
http://adsabs.harvard.edu/abs/2014AstRv...9c...4P
Without considering the possibility that dark energy does not exist and that vacuum energy density could vary with underlying fields, many viable solutions are being excluded. Furthermore, most physicists are stuck on the assumption that event horizons or curvature singularities exist in nature when this hasn’t been proven (and will never result in a consistent quantum theory, due to problems with renormalization), further impeding exploration of alternatives. QFT is perhaps the closest to properly describing vacuum energy density, but the problem is very open at this point.
Akira, I have not answered your questions because I find some of your statements too inflammatory. I have presented a broad range of original ideas in both the technical paper cited in the question above and in the 370-page book referenced in that paper. However, even when I disagree with some theories, I respect the work that has gone before. Someone once said, "We can see far because we stand on the shoulders of giants."
Michael,
"More precisely, the stress-energy tensor is the Noether current for spacetime translations, as any differentiable symmetry of a physical system's action has an associated conservation law. "
Michael, there is something missing. The stress-energy tensor is not enough to account for the Noether current. In the Einstein-Hilbert action the Noether currents are fine; it is something more than the EFE. The E-H action automatically guarantees conservation and symmetries globally.
From the E-H action I can get the EFE. But also from the FGE (field gravitation equations), as demonstrated by Richard Feynman in his book Lectures on Gravitation, I arrive at the same E-H action.
But what is the difference? In FG the conservation laws are also locally guaranteed, since the gravitational energy is local, as in Newtonian gravitation. In the EFE, by contrast, they had to define the pseudotensor to keep the differential form consistent with the conservation of energy. The pseudotensor is valid only sufficiently distant from the sources, where the system is isolated (it is obvious then that the energy is conserved).
The issue of conservation of energy in GRT is stressed in the attached document, a revisitation of the theorems of Emmy Noether. GRT is doomed to violate the conservation of energy in finite volumes unless you make it work where energy is globally conserved by default, hence isolated systems.
The fact that space-time according to GRT is "locally Minkowskian" (infinitesimally small inertial reference frames are always present) and is Minkowskian (Galilean) at infinity as well, is the price to pay: nothing moves unless I assume that some energy is there in a steady state.
Local Minkowskian-ness is a very strong, non-obvious assumption in the presence of a gravitational field. Such an assumption gives a way to get rid of the gravitational interaction at sufficiently small distances. While this might work in the presence of a single particle in deep space, I don't see how it can work in the gravitational field of a massive body. The g field locally is independent of the distance from a single point of the field; it acts the same way at "all the points"...
Dear Akira,
I was reporting what the theory says, and what you pointed out is something I point out too; in fact, GRT has issues...
Stefano
Stefano,
The conservation law for the stress-energy tensor can be obtained by manipulating the action or field equations (see sections 4.3 and 4.4 http://web.mit.edu/edbert/GR/gr5.pdf). More precisely, the conservation law(s) arise due to gauge invariance of these equations.
With any equations of this type, regions where the source exists must be separate from the vacuum. That is, unless the source also exists in the vacuum, as in theories like Brans-Dicke, e.g. a scalar field. If the source in the macroscopic equations (in order to reproduce proper microscopic predictions via the stress-energy tensor) requires something like a variable magnitude that depends on the local space-time metric (e.g. a scalar field), then the difference between source and vacuum becomes ambiguous.
Nonetheless, I do not think the stress-energy tensor is the proper source for general relativity. The only other logical choice would be vacuum energy density, which could be viewed as zero-point energy similar to QFT. It would still be possible however to produce a working macroscopic theory of general relativity with stress-energy tensor; it just wouldn't necessarily be the simplest mathematically (non-trivial).
The energy-momentum (or stress-energy) tensor is *a* source term: it can't be eliminated by fiat, any more than the cosmological constant term can. However, it doesn't contribute to the vacuum energy density, since then the matter fields take the values that make it vanish. So the only consistent contribution to the vacuum energy density is that of the cosmological constant-which experiment has shown doesn't vanish.
Stam,
I agree that the Hilbert-Einstein action can include a scalar field, but the assumption that this scalar field is related to dark energy causing accelerated metric expansion is not supported by observations. I’ve attached the results from my published work that prove beyond a doubt that the universe is not undergoing accelerated metric expansion. In fact, the alignment between large-scale B-modes and the hemispherical power asymmetry in the CMB is fully consistent with local geodesics deflecting into a global gravitational potential. This in turn provides the illusion of accelerated metric expansion, when in reality we are viewing distant objects accelerating into a cosmological-scale gravitational potential, i.e. in a single direction.
http://adsabs.harvard.edu/abs/2014AstRv...9c...4P
Akira,
This problem reaches far beyond physics in regards to proper scientific conduct. I actually got into physics because I thought it might be one of the few fields free from politics, but this couldn’t be further from the truth. Censorship is rampant against those who propose alternatives to big bang paradigms even with sufficient observational support. In fact, I would go as far to say that the big bang theory has become a religion in recent decades.
Nowhere in this discussion was it proposed that a scalar field describes dark energy-among other reasons because it's off topic. However, even if a scalar field were needed (such fields have been considered in so-called ``quintessence'' or ``k-essence'' models for dark energy, in order to describe its putative variations, that a cosmological constant doesn't describe), this would not affect the fact that a cosmological constant term is, also, inevitably, present. And, in fact, measurements indicate that such a term can be used to describe the properties of dark energy that have been measured.
Stam,
The cosmological constant is a scalar field (albeit constant), where the Ω_Λ parameter is the ratio between the cosmological constant’s energy density and the critical density of the universe. When you actually get into the specifics of so-called measurements, there is very little if any support for accelerated metric expansion.
For SNIa measurements, we simply tuned the variables to fit observations; i.e. no prediction is actually made. In fact, the two remaining fundamental cosmological tests (volume element and angular scales) are in stark contrast with accelerated metric expansion. The theory is already broken because it can’t explain the faint blue galaxy excess, which occurs prior to other bands due to the respective redshift versus magnitude distribution. The underlying angular scale bias is further confirmed with the largest objects observable at moderate to high redshift.
For the CMB, new results from Planck indicate that the smoothness parameter (sigma_8) predicts 2.5 times more clusters than observed in X-ray or optical studies. With the 13 or more parameters used to constrain the CMB in big bang cosmologies, there is little doubt that some parameters could be fudged (sigma_8) to obtain consistent parameters for Ω_Λ or others. Thus any support for a cosmological constant is extremely weak at best and should not be viewed as an inevitable aspect of nature.
Classical gravity can't predict the value of the cosmological constant (any more than it can predict the value of Newton's constant; it is sensitive only to an appropriately rescaled ratio of the two and there exist solutions for any value of this ratio). What the supernovae measurements established is the accelerated expansion of the Universe. This is consistent with a non-zero, positive, value of the cosmological constant-and is *inconsistent* with a zero value. That's the important point. (The study of the anisotropies then places constraints on quintessence or k-essence models, i.e. scalar field models, while the equation of state places others. In the presence of matter it isn't obvious how to disentangle the contribution of the cosmological constant from that of the matter fields to the term proportional to the metric. What this means, is that any matter contribution will, inevitably, contribute to a non-zero value for the cosmological constant. ) In any case, the issue is that, absent any mechanism that would imply the cosmological constant vanishes, it's there and must be taken into account. It cannot be decreed away. So people that would like to propose alternatives must propose a mechanism that makes the cosmological constant vanish, first. So far no such mechanism has been proposed.
Stam,
SNIa measurements ruled out a universe undergoing constant metric expansion, they did not prove that accelerated metric expansion is the correct mechanism. We have inferred that there is accelerated metric expansion based upon the redshift of light from distant objects, an indirect observation. Nothing has proven that such redshift is actually due to metric expansion; for example, it could also be due to gravitational redshift or relativistic Doppler effect (static metric).
Could one not use the stress-energy tensor to define all matter in the universe and the resulting cosmological-scale gravitational potential? Furthermore, if the inferred accelerated metric expansion were instead an illusion due to large-scale gravitational lensing (hence hemispherical power asymmetry), then why would we even need a cosmological constant? I can define a gravitational potential quite easily without it, e.g. the Schwarzschild metric.
The central problem is a lack of creativity and search for other solutions besides big bang foundations. Forget tired light, what about viable solutions of a static metric with dynamic universe/stress-energy tensor? For example, a water fountain is in a steady state with static metric and dynamic stress-energy tensor. In fact, this is perhaps the best analogy for the model that is in agreement with all observations (we are witnessing the "water" accelerating back into the potential with respect to our distorted view of the universe). I suppose for me it’s like being told that the Earth is flat due to a limited perspective, while in reality I know from a plethora of observations that it is round.
The reason we need to deal with the cosmological constant is that it is a term consistent with all symmetries of the theory-it's not consistent to set it to zero by fiat, but, eventually, by a dynamical mechanism. And there are many models of scalar fields, called ``quintessence'' or ``k-essence'' models; these, however, try to describe other properties of dark energy-and, once more, any such model will, inevitably, give rise to a cosmological constant contribution anyway, for the reason mentioned.
@John Macken:
Your theory of spacetime impedance is based on the general relativity assumption that gravitational waves exist and that the graviton mediates the gravitational force. If gravitational waves are not found, your theory cannot stand. In my theory, particles like the graviton can exist only after the stars are formed. But gravity existed at the Planck time. So the graviton cannot mediate gravitation.
I tend to agree with remarks of Stam Nicolis:
The value computed from particle physics, i.e. from the zero-point contributions of the matter excitations, is known to be suspect, so there's no reason to mention it. The discrepancy is meaningless, anyway, because there isn't any reason to assume that a classical contribution, i.e. that of the cosmological constant, could be expected to have any relation to a quantum contribution, namely the zero-point energy of matter excitations, computed while neglecting the gravitational back-reaction. Regarding dark energy, all available measurements are consistent with a contribution of the cosmological constant term.
My views on the energy density of the vacuum:
Normally zero-point energy is discussed in the context of the present epoch. In my theory, at the Planck epoch the ground-state fluctuations of particles can perfectly cease so that, upon annihilation of a particle anti-particle pair at the Planck energy level, the energy becomes perfectly motionless and therefore cannot be called energy. In your terminology this is zero impedance. This is the fundamental nature of the vacuum. So prior to the Planck epoch the energy density of the vacuum is zero, and at the present epoch it corresponds to the cosmological constant. In my theory nothing but kinetic energy exists at the Planck epoch. Particle masses condense out of this kinetic energy as the universe cools. And a single formula generates the entire table of standard model particles. It also predicts innumerable other particles, which contribute to the vacuum energy density at the present epoch. There is no negative energy in my theory. Energy is just energy. The stress-energy tensor does not exist in my theory because the theory does not depend on the weak-field approximation. And there is no discontinuity between the electromagnetic field and the matter field (massive particle field). There is only one continuous energy spectrum. Therefore my theory is a generalization of special relativity in which there are only null geodesics.
Stam,
Your argument appears to be that dark energy is required for big bang models and thus a cosmological constant is needed. However, you fail to prove that dark energy or big bang frameworks are indeed the only viable solution; thus such a premise is invalid. Furthermore, Einstein’s field equations are far from complete and result in a non-renormalizable quantum theory, i.e. there’s no proof that it is the only proper theory of macroscopic general relativity.
But when we actually forget all of these theoretical aspects and look at cosmological data, it is clear that this becomes moot. The big bang theory is non-repairable because the universe clearly has a static metric via a plethora of observations starting from the early 1980s. It is unfortunate that the mainstream has gone to such lengths to hide this fact and rely on the few aspects that sort of work out via many tunable parameters. Either way, you are claiming fact when in reality we are dealing with a hypothesis.
Earlier evidence of a static metric (one of many articles):
http://articles.adsabs.harvard.edu/full/1986ApJ...301..544L/0000549.000.html
Modern evidence of a static metric:
http://adsabs.harvard.edu/abs/2014AstRv...9c...4P
The cosmological constant is a term that appears in general relativity necessarily-it's not optional and this fact is independent of any considerations about the stress-energy tensor of the non-gravitational action and of any cosmological model. It's a mathematical statement, regarding the structure of the action and the equations of motion. (These statements are well-defined for the classical effects that are relevant, when the dynamics is described by the equations of motion.) The reason is that it is a term consistent with the symmetries of the theory, as much as the Ricci scalar is. It was the insight of de Sitter and Friedmann that this term leads to a Universe, whose expansion is accelerating. So, if the expansion is uniform, this term can describe it. This seems to be the case. If the expansion turns out not to be uniform, then additional fields will be needed and examples of such models, called ``quintessence'' or ``k-essence'' (if there are k of them) have been quite extensively studied. These will add, in particular, a non-trivial stress-energy tensor to Einstein's equations-in addition to the cosmological constant term, that's there anyway. (However for such fields to describe an accelerating expansion, they cannot describe known matter or radiation fields, not even dark matter, since the equation of state implies negative pressure.)
If there is a non-trivial stress-energy tensor due to the presence of such fields, (or any other fields, with positive pressure, when ordinary matter is present), that evolve and, thus, don't take their vacuum expectation values, then a static metric is no longer a solution of the equations of motion, without considerable fine-tuning between these fields and the cosmological constant. The cosmological implications of such fine-tuning were studied by Bondi, Gold and Hoyle, incidentally, who proposed steady state cosmologies-that turned out to be in contradiction to the measurements carried out by Ryle and Clarke, in particular.(cf. http://adsabs.harvard.edu/full/1962MNRAS.123..425D for one of the first reviews)
More recent quasar measurements are reported here: http://newscenter.lbl.gov/2012/11/12/boss-quasars-early-universe/
and, of course, the Planck data and their analysis are presented here: http://planck.caltech.edu/publications2015Results.html
(cf., in particular the entry on dark energy)
No one is hiding anything. However, Laviolette's article does end by stressing that the CMB doesn't fit the framework he studied.
Stam,
I’m not disagreeing with how one can mathematically arrive at a g_uv·Λ term in Einstein’s field equations, but instead disagreeing with the interpretation of cosmological redshift as being due to accelerated metric expansion. Without the latter, there is no need for a cosmological constant, and thus it is not written in stone. Recent observations of SNIa have also shown that the inferred accelerated expansion is not uniform but has a hemispherical dependence, which is explained by my recently published theory (http://phys.org/news/2011-09-evidence-spacetime-cosmological-principle.html).
I should also clarify what I mean by a dynamic stress-energy tensor. It is possible to have closed bulk flows of matter in the universe with a globally static metric. For example, consider the possibility of a collimated big bang, analogous to the jets observed from supermassive black holes. Matter would initially be more dense/hot, expand after some time (local Hubble flow) and then accelerate back into the central "black hole's" gravitational potential (the inferred accelerated expansion). Note that although the stress-energy tensor may fluctuate on small scales, each region in such a universe would have an overall constant state relative to entropy, thermodynamics and the stress-energy tensor.
Earlier attempts at static metrics such as Einstein’s failed because matter was also made static. In fact, I’m not even sure how you would properly implement a non-homogeneous universe as previously discussed with Einstein’s field equations (large-scale bulk flows and such). I simply define a linear gravitational potential relative to metric distance via microscopic general relativity and the relativistic Doppler effect, a completely valid approach. The steady state model that I have proposed, however, is different from any example in the past, so I can guarantee that any previous contradictions do not apply.
1. There has never been an agreement on the problem of radio source counts, as the debate had continued on both sides throughout the 60s, 70s and 80s (there are still recent publications on this topic). Nonetheless, observations support a static metric unless ad hoc evolution is introduced (which seems to have been ruled out in later developments).
2. The local disagreement with galaxy counts is found not to be due to evolution or mergers; recently it has been proposed that we are centered on a massive local hole. However, such a proposal is inconsistent with LCDM at the 4-5 sigma level, and these observations perfectly fit a static metric. The problem only gets worse at fainter magnitudes and as we continue to increase telescope resolution, due to angular scale bias.
3. BAO studies suffer from many problems including bias in datasets and fine-tuning (Malmquist or angular scale bias). Large studies such as SDSS have also shown that large-scale power is inconsistent with LCDM to a high degree of significance. Until these significant anomalies in BAO are resolved the studies not only have a high likelihood of being flawed, but are incomplete.
4. All objects up to high redshift support a static metric, and these measurements do not have the substantial uncertainty found in BAO measurements. For example, FR-II radio lobes of similar luminosity should be considered standard rods due to their mechanical/consistent nature (like SNIa). In fact, these fit a static metric better than almost any other object. Furthermore, the angular size of distant early-type galaxies has posed serious problems for LCDM. High redshift clusters are also about 100x too small while being fully mature and experiencing fewer than two mergers up to the present with big bang time-scales. The same thing can be said for galaxies: the number of mergers observed is insignificant relative to the required change in size for LCDM.
5. Both Planck and WMAP have demonstrated several anomalies inconsistent with LCDM including hemispherical power asymmetry, the dark flow and the sigma_8 parameter versus actual cluster counts. Again, until the theory is fully consistent with all observations it is incomplete.
6. We observe metallicity, cold baryonic matter and merger remnants increasing with redshift, the exact opposite would be expected in LCDM. These observations insist that we are witnessing older objects accelerate back into a global gravitational potential rather than looking back in time at a big bang.
So when I say hiding things, I mean that there have been many ridiculous proposals to fix theory-breaking problems in LCDM, which by themselves are incompatible with LCDM. These include local holes, disappearing galaxies, drastic hierarchical merging not supported by observations and more, rather than accepting the possibility of a static metric. Why? Because you can't have a big bang with a static metric; it would automatically rule the theory out.
*CMB images credited to NASA and WMAP science team
*Graphs from “Evidence of a Global Gravitational Potential”
http://adsabs.harvard.edu/abs/2014AstRv...9c...4P
One should distinguish theoretical issues from experimental issues. The presence of the cosmological constant term isn't mandated by experiment, but by theory. In particular, in the presence of matter such a term will be generated inevitably. So one cannot set it to zero; one must come up with a mechanism that makes it vanish. The steady state model, in particular, doesn't do that. The known ways to obtain static metrics, in its presence, are as BPS solutions in supergravity, with various brane configurations; but then the delicate point is their stability to perturbations. This is a point, mostly overlooked, but now a major topic in mathematical general relativity: it's not enough to find solutions to the equations of motion; the stability of these solutions to perturbations must also be investigated. So the issue with steady state models, in particular, lies here: perturbations will not keep the metric static (absent some symmetry that gives rise to Birkhoff-like theorems, as for the Schwarzschild case; however, even in this case, while the endpoint may be static, the intermediate stages won't be).
Perhaps I fail to understand the notion of static metric. Do you perhaps mean just the curvature? Does this notion entirely ignore inflation? BTW I think your "modern evidence for a static metric" is no longer credited.
A static metric is a metric, whose coefficients don't depend explicitly on the time-like coordinate. (Equivalently, it admits a corresponding Killing vector.) Incidentally, use of the personal/possessive pronoun can lead to ambiguities in understanding the answer.
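In components (a sketch; strictly, this condition defines a stationary metric, and a static one additionally requires the Killing vector to be hypersurface-orthogonal, i.e. no dt dx^i cross-terms):

```latex
\partial_t \, g_{\mu\nu} = 0
\quad\Longleftrightarrow\quad
\nabla_{(\mu} \xi_{\nu)} = 0
\ \ \text{for the Killing vector } \xi = \partial_t .
```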
In gravity the manifolds are Lorentzian, not Riemannian. This difference has consequences-that's how a causal structure is defined, for instance, in the former. Cf., also, https://www.dpmms.cam.ac.uk/~md384/research/stability-of-black-holes/asymptotically-ads-black.html
(Not to mention the fact that the local properties of a manifold can be quite different from the global properties, even for Riemannian manifolds.)
James,
At least with cosmological scales, a static metric would imply that the overall curvature, gravitational potential or even space-time metric is time-independent. It could however vary on smaller scales like that of galaxies versus voids within a bulk flow. I added some text until I can find time to fix the image (busy week).
Stam,
Yes, a term of the form g_uv·S (where S is a scalar field, a constant, or some combination of derivatives) is, from my own research, inevitable in proper formulations of GR, but I don’t feel that Λ is the correct component. I am currently working on a scalar-vector-tensor theory where the scalar-vector components arise from the electromagnetic stress-energy tensor (scalar and vector potentials); the trick is doing this in a gauge-invariant way. Furthermore, the theory is microscopic in that it only provides the metric/time-dependence for something like an electron or other charged particle (but the theory is renormalizable). My starting point was the following metric (1/r potential) and a relation to the Einstein-Maxwell vacuum solution.
ds^2 = -(1 + Gm/(rc^2))^(-2) c^2 dt^2 + (1 + Gm/(rc^2))^2 (dr^2 + r^2(dθ^2 + sin^2(θ) dφ^2))
R = 0
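A quick symbolic check of what this metric does in the weak field (a sketch; here x stands for the dimensionless potential Gm/(r c^2)):

```python
import sympy as sp

x = sp.symbols('x', positive=True)   # x = G*m/(r*c**2), dimensionless

g_tt_proposed = -(1 + x)**(-2)       # g_tt of the metric above
g_tt_schw     = -(1 - 2*x)           # Schwarzschild g_tt for comparison

print(sp.series(g_tt_proposed, x, 0, 3))   # -1 + 2*x - 3*x**2 + O(x**3)
# Both agree at first order in x (the Newtonian limit), but the proposed
# g_tt never reaches zero at finite x, hence no event horizon.
```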
To get first order macroscopic equations for composite objects I apply a trick I developed from microscopic methods (variable G depending on scalar field magnitude). I believe that a full order theory will properly describe binary systems with only classical radiation (EM or EW). The microscopic equations themselves are simply the application of equivalence principles to each particle rather than stress-energy tensor. For example, to preserve the outcome of local gravitational experiments, I deform the potential(s)/field(s) of each particle according to the metric induced by all other particles/sources under consideration. Thus instead of a physical space-time metric, it is the fields/frame of each particle that is becoming deformed analogous to Lorentz transformations. I’m still waiting to see if next gen gravitational wave detectors will find anything, because if they don’t it would imply that this interpretation is correct.
The idea behind having a "water fountain" like universe is that gravity and the conservation of momentum/energy will keep things in a steady state. The requirements would be (1) a constant bulk flow, (2) a central pump and (3) gravity to create a closed loop. Although entropy would vary along the bulk flow, it would be constant for the universe as a whole (unlike cyclical universes where entropy constantly increases unless one violates thermodynamics). With a renormalizable theory of GR, a central object or "pump" would be massive enough to support a color superconducting state (super-fluidity and conductivity). Furthermore, one would expect for such object to emit a near perfect black body spectrum that undergoes gravitational redshift/lensing to local observers (CMB).
What is written is a form reminiscent of the extremal Reissner-Nordstrom metric, though the exponents seem to have been switched between the time and radial parts. However the spherically symmetric part is simply multiplied by r^2 (the near horizon geometry is AdS_2 x S^2). Cf. http://www.hri.res.in/~sen/asian12.pdf for more details. (It is possible to include the contribution of the cosmological constant, too.)
They are fascinating models, with which to understand certain specific issues and the spacetime geometry is the exterior geometry of a charged black hole (it's, in fact, an example of a BPS solution, mentioned in a previous message). The Universe, as a whole, is electrically neutral, however-and the configuration of electric and magnetic fields measured doesn't seem to be consistent with that of an extremal black hole, cf. http://arxiv.org/abs/0707.2783, for instance.
Incidentally, extremal black holes have been found to be unstable, https://www.dpmms.cam.ac.uk/~md384/research/stability-of-black-holes/extremal-black-holes.html. What the endpoint of the instability is, isn't yet, fully understood. However the study of cosmological perturbations, as done since COBE, WMAP and now Planck is quite constraining on the models. That's why equations are more revealing than words-with equations the constraints are much easier to grasp.
Stam,
Yes, it is similar to the Reissner-Nordstrom metric but with a different coupling to the electromagnetic stress-energy tensor. The trick to flipping the exponents arises because a single particle’s potential follows a Minkowski reference space rather than the metric induced by its own presence (think in terms of the geodesic equations with a single test particle; its fields should follow η_uv). For example, consider the g11 term (or the comparable scalar field on the right side of the equation); you can arrive at a single-particle metric by G -> G/g11:
g11 = (1 + Gm/(rc^2))^2 = [1 - 2Gm/(g11 rc^2) + (Gm/(g11 rc^2))^2]^(-1)
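A quick numerical look at this implicit relation, treating it as a fixed-point problem (illustrative only; x stands for Gm/(r c^2)):

```python
def rhs(g11, x):
    """Right-hand side of the implicit relation above."""
    return 1.0 / (1.0 - 2.0*x/g11 + (x/g11)**2)

for x in (1e-3, 1e-2, 1e-1):
    g = 1.0
    for _ in range(200):          # iterate to the fixed point
        g = rhs(g, x)
    print(f"x = {x:.0e}   (1+x)^2 = {(1+x)**2:.6f}   fixed point = {g:.6f}")
# The closed form (1+x)^2 and the fixed point agree at first order in x
# and drift apart at O(x^2), so the equality reads as a weak-field statement.
```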
I agree that the universe as a whole should be electrically neutral, but I don’t necessarily believe a central core would need to be a supermassive black hole. For example, if the metric for a single particle has a 1/r potential, applying the strong equivalence principle and any metric theory would require composite objects to have potentials of 1/r1’ + 1/r2’ + 1/r3’ + … (r’ represents metric distance relative to a Minkowski reference space). Instead of forming an event horizon in this model, the potential is always finite and no curvature singularities form; the object's potential simply becomes more point-like as energy approaches infinity. Applying this with theoretical QCD demonstrates a large range of degenerate states beyond what is commonly available with Einstein’s field equations. What I suspect beyond diquarks (which are possible in some stars with the EFEs) is perhaps even more degenerate diquark pairs or color-superconducting states. I don’t think such object(s) would be easily distinguishable from actual black holes with event horizons, although there is evidence of relativistic jets forming prior to the theoretical location of the event horizon (or at least closer than expected) in local SMBHs.
http://chandra.harvard.edu/press/04_releases/press_010504.html
http://www.space.com/27683-rare-flickering-black-hole-jet.html
http://www.dailymail.co.uk/sciencetech/article-2853083/Scientists-lightning-sparking-supermassive-black-hole-appears-travel-faster-speed-light.html
Of course, even if there was some degenerate, extremely massive core of the universe, it would emit a black body spectrum without anisotropy (beyond our relative motion or dipole moment). The anisotropy in the CMB would be expected to arise due to hot/cold gas throughout the universe that scatters the black body radiation. For E-modes, this would range from large-scale structures to local voids/superclusters. B-modes however range from the effects of large-scale gravitational lensing to small-scale lensing of clusters or galaxies. Gravitational lensing also mixes E-modes into B-modes and vice-versa (hence the large-scale B-modes, but also relevant to higher multipoles). I’ve only developed a mathematical equation for testing large-scale B-modes, as the remaining aspects are highly tunable and require complex (supercomputer) calculations.
I am not expert in vacuum energy, but I have a philosophical question about the question to interject here. In all cases that I know of in physics, energy is only meaningful as a relative term. For example, the kinetic energy of an object depends on the coordinate system. In the object's frame it is zero. In the case of the energy in gasses or materials, we do have an absolute zero that we can calculate, and come very close to obtaining, but even then the "rest energy" of the object remains, and it may very well depend on gravitational potential, i.e. the relative position of other objects in the universe (so might "vacuum energy" according to some authors).
But in the case of the vacuum, how does one define the absolute zero reference? Can we have two regions of vacuum with different energy content, and thereby do useful work through an entropic process connecting them? I have seen numerous articles in which researchers propose to "extract" the vacuum energy, but I don't see how that is possible without violating entropy, and without extracting it how does one measure it (Casimir force or no)? In other words, what is the meaning of the question?
Robert,
I would say that you are correct in terms of energy being relative, or at least in how it can be interpreted in different frames. For example, if an observer is co-moving with an object, they may state that it has no kinetic energy. On the other hand, this object could be part of a larger system, where in the center-of-momentum frame it has kinetic energy. This is perhaps the benefit of working in a center-of-momentum frame, because the only thing that matters is particles/fields relative to each other. Thus why not simply include all particles/fields and arrive at an absolute reference frame? One could transform this to local, relativistic or other frames afterwards. Furthermore, once one has the center-of-momentum frame, you just apply the Lorentz transformation to the space-time metric (ds) to directly arrive at definitions of energy such as pressure or temperature.
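A minimal sketch of that frame-dependence: two particles on a line, one at rest in the lab frame, boosted to their center-of-momentum (COM) frame (units with c = 1; the numbers are arbitrary illustrations):

```python
import math

def boost(E, p, v):
    """Lorentz-boost a 1D energy-momentum pair by velocity v (c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v*v)
    return g*(E - v*p), g*(p - v*E)

m, v_a = 1.0, 0.6
gamma_a = 1.0 / math.sqrt(1.0 - v_a**2)
E_a, p_a = gamma_a*m, gamma_a*m*v_a   # particle A, moving in the lab
E_b, p_b = m, 0.0                     # particle B, at rest in the lab

v_com = (p_a + p_b) / (E_a + E_b)     # velocity of the COM frame
for label, E, p in (("A", E_a, p_a), ("B", E_b, p_b)):
    E2, _ = boost(E, p, v_com)
    print(f"{label}: lab KE = {E - m:.3f}   COM KE = {E2 - m:.3f}")
# B has zero kinetic energy in the lab frame but non-zero KE in the
# COM frame: kinetic energy is frame-dependent, as stated above.
```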
The same thing applies to the vacuum. For example, I’ve proposed that space itself at the Planck-scale acts analogously to a spring-mass system. If particles or fields emerge from these underlying fluctuations, there’s no guarantee that they are at rest with respect to the background spring-mass frame. This is an outcome of having a Lorentzian universe, where any relative motion would transform both the local frame and for example, a particle’s field(s). There would simply be no way to find out if the universe as a whole is stationary or moving with respect to such Planck-scale background; it would be pointless to know anyways because it would produce no observable effects in a Lorentzian universe. Nonetheless, if such theory could explain all observables in the simplest manner, then it would be the superior model. So it doesn’t matter if we can directly observe the Planck-scale, testing these theories arises from (A) does the theory fit all observations and (B) is the theory the simplest possible. I believe that this is what the question really boils down to; from all observables, how should we view vacuum energy density?
The Casimir effect does support the idea of zero-point energy as vacuum energy density. Furthermore, there are serious issues in LCDM cosmology suggesting that a constant energy density (cosmological constant) is inappropriate for arriving at a fully consistent theory (cosmology + QFT). But then again, I don’t think QFT is complete either; it’s simply closer to the correct answer. The term c^4/G in general relativity does, however, leave open the possibility of non-classical energy densities (vacuum energy density, zero-point energy, etc.) as the source (applied in my own theory of GR).
Article: Evidence of a Global Gravitational Potential
Book: The Theory of Everything: Foundations, Applications and Corr...
The Big Bang is a logically faulty concept, since it uses so-called "self-reference reasoning" or "self-reference logical loops". That kind of faulty reasoning is well known in science and leads to paradoxes and inconsistencies. See any source on self-reference.
In the case of Big Bang cosmology, the loop is like this: the Big Bang implies space inflation (redshift); then we observe a redshift, and that means there was a Big Bang.
Big Bang is a sort of concept like FLAT EARTH, the GEOCENTRIC UNIVERSE and other products of human intellectual prostitution. For cosmological redshift and the CMB, see our papers on this site or elsewhere. Have a nice day.
There are important unsolved theoretical problems in this area. Supposedly, there are forces between two very near metal plates, due to an alteration of the energy density, so the matter is subject to experiment.
In other threads I have proposed that the energy in vacuum space is large but finite. To make nearly flat space from so much energy, the vacuum potential must be partitioned into different types of energy that compete for control of curvature. The requirement is that convex and concave curvature are represented by equal amounts of energy in flat space.
My proposal is that electromagnetic energy provides convex curvature in agreement with all the rigorously integrated metrics of GR. FLRW is not in agreement, but it is not rigorously integrated. So I have recommended a correction to FLRW that largely eliminates the need for dark energy in space, and brings FLRW into agreement with the rigorously integrated metrics.
With partition theory there is an exact calculation for the amount of energy in space. A simplified form is made from an average of about 30 ZPE oscillators to use the classical laws rather than the quantum probabilities. Curved space is included by use of a partition function Z to represent the fraction of energy in the electromagnetic potential. Z is one half in flat space.
Gravitational potential in space is given in much the same way Dirac estimated his sea of energy, giving about the same results as Dirac, and about the same gravity potential as is found at the event horizon of a black hole.
Gravitational energy in the Zero Point oscillates between two states, potential and dynamic. There is a gravitational potential energy when the virtual pair is separated by one wave length, and a dynamic energy when the pair recombines at a center point. All measurements are made in curved space.
(1.1) m^2 G/λ = 2mc^2 = (1/2)(1-Z)hf    (gravitational part of the zero-point energy)
(1.2) λf = c    (light speed, wavelength, and frequency)
(1.3) m^2 = (1/2)(1-Z)(hc/G)    (virtual mass)
(1.4) h^2 f^2 = (8/(1-Z)) hc^5/G    (Planck energy squared)
(1.5) f^2 = (8/(1-Z)) c^5/(hG)    (frequency squared)
(1.6) λ^2 = ((1-Z)/8)(hG/c^3)    (wavelength squared)
The other part of the zero-point energy is represented by an LC electronic oscillator that exchanges energy between virtual static electricity and virtual magnetic fields, using the same frequency and wavelengths as the gravitational energy, considering that charged particles are always found associated with a mass, and there are no degrees of freedom in choosing a different frequency.
(1.7) q^2/m^2 = 4π ε G
(1.8) q^2 = 2π (1-Z)(ε h c)    (virtual electric charge)
The electronic capacitance C is defined.
(1.9) (1/2)(q^2/C) = (1/2) Z hf    (maximum capacitor energy)
(1.10) C = q^2/(Zhf) = 2π ((1-Z)/Z) ε λ = (2π)^2 ((1-Z)/Z) ε (λ/2π)
The magnetic inductance L is given.
(1.11) L = (1/(2π)^2)(Z/(1-Z)) μ(λ/2π) = (Z/(1-Z)) μλ/(8π^3)
Reactive impedance is given.
(1.12) (L/C)^(1/2) = (Z/(1-Z)) (μ/ε)^(1/2) / (4π^2)
(1.12.a) LC = 1/(2πf)^2    (in agreement with the classical resonance condition)
Even in this simplified model it is shown that the vacuum energy is large but finite, nearly equal in its partitions, and scaled from the classical laws in averages. These results differ from other methods only by minor differences of coefficients.
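As a check on the algebra, here is a minimal Python sketch (CODATA SI constants; Z = 1/2 assumed for flat space; the variable names are mine) that evaluates the relations above and verifies their mutual consistency:

import math

h = 6.62607015e-34       # Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
eps = 8.8541878128e-12   # vacuum permittivity, F/m
mu = 1.25663706212e-6    # vacuum permeability, H/m

Z = 0.5  # partition fraction in flat space

# (1.3), (1.5), (1.6): virtual mass, frequency, wavelength
m = math.sqrt(0.5 * (1 - Z) * h * c / G)
f = math.sqrt(8.0 / (1 - Z) * c**5 / (h * G))
lam = math.sqrt((1 - Z) / 8.0 * h * G / c**3)

# (1.8), (1.10), (1.11): virtual charge, capacitance, inductance
q = math.sqrt(2 * math.pi * (1 - Z) * eps * h * c)
C = 2 * math.pi * ((1 - Z) / Z) * eps * lam
L = (Z / (1 - Z)) * mu * lam / (8 * math.pi**3)

# Consistency checks: each printed ratio should be 1
print(lam * f / c)                            # (1.2)
print(m**2 * G / lam / (2 * m * c**2))        # (1.1)
print(q**2 / m**2 / (4 * math.pi * eps * G))  # (1.7)
print(L * C * (2 * math.pi * f)**2)           # (1.12.a)

With these values the virtual mass comes out near 2.7x10^-8 kg, about 1.25 times the Planck mass, consistent with the remark above that the differences from other methods are only in coefficients.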
There may be a connection between what I have been working on and Jerry Decker's theory; I am not sure. Suppose we think of electrons, protons, and neutrons from the perspective of quantum field theory, as the quanta of matter fields. The intensity of these fields is related to their total energy, and they "draw" energy from the quantum vacuum fluctuations. In a small region of space where these matter fields exist, the vacuum energy is reduced. However, the "stability" of the vacuum geometry depends on the energy of its quantum fluctuations, so this causes a slight weakening of the geometric structure of the vacuum. It begins to collapse just a tiny amount, leading to what we call space-time curvature.
Going to the Einstein field equations in coordinate-free form, G = kT, the right-hand side is the stress-energy tensor of matter. So the question has always haunted me: why should space-time react to the existence of matter and become curved? I ask this because in general relativity physical objects are treated as a classical continuum of matter or as a point mass, and this does not explain the "why" of space-time curvature. So must we go down to the level of elementary particles of matter, and ask how they interact with the quantum vacuum, in order to understand how gravitation is actually created?
A splish-splash of different constants does not mean there is a theory. By the way, I have never seen a partition function used in this way: it is a statistical concept used to derive thermodynamic quantities, not a meaningful variable by itself.
Granted that a capacitor can store energy, but this is entirely explainable in terms of common electrostatic or electromagnetic principles. Its gravitational energy content would be utterly negligible in comparison.
According to GRT, space-time bends according to the presence of mass. If one wants to shift to pure energy density, you must partly change the theory. I'm not saying this is impossible; maybe it makes sense. But you have to do the whole thing over and convince people properly.
All of quantum action is a statistical function, including the partition function. I showed it in the average of 30 or more ZPE oscillators, which makes a set of equations most researchers can read and understand, since it is written at the level of an advanced high-school student. The same proposal could be represented in wave equations, which might be more acceptable to specialists but not much understood by the other researchers for whom I am writing.
I've been writing and corresponding with other researchers about this partition theory for about 10 years. Symbols don't make a theory, but a complete and consistent set of equations certainly does. It is not my intent to produce a completely new cosmology. My topic is an engineering advance in equipment design for testing at high speed in deep space, where a number of disputes and inconsistencies in the science must be resolved.
High speed forces the issues: predicting what might happen to people, machines, even the vacuum of space, and how to achieve the constructions and the speed.
Concerning the original question and introduction, the impedance of free vacuum space is (µo/εo)^(1/2), or about 377 ohms.
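A quick numerical check with the SI values:

import math

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
mu0 = 1.25663706212e-6   # vacuum permeability, H/m
print(math.sqrt(mu0 / eps0))  # 376.73... ohms, the impedance of free space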
If researchers consider where in nature the physical laws reside and how they are applied, it becomes apparent that the laws are everywhere, even in vacuum space. The energy in the vacuum is then proposed as the means by which the laws are enforced.
A small energy field would not be able to enforce such laws against the power that engineers can send against it, no matter what infinity a Lorentz transformation might predict. The tiny energy density accepted by many researchers could easily be overpowered, allowing society to change the laws to its advantage. So the small-energy theory must be rejected.
Science is left with a dispute about the cosmological constant, which by observation represents nearly flat space, suggesting small energy. Quantum mechanics predicts large, even infinite, energy. At high energy, many researchers agree, QM must join with GR in some new theory.
One way to resolve the impasse that has hindered progress for decades is to look at the energy as being partitioned into equal amounts associated with each of the four fundamental forces in flat space. The equalities create enough equations to calculate an exact energy density, large but finite. From the equations also come an average frequency and wavelength of the ZPE oscillators. Then an exact calculation gives the average virtual mass and virtual electric dipole charges of the ZPE. It is a systematic development, not a splish-splash.
The results agree with the estimates of Paul Dirac within a factor of 2. Also in agreement are the results of a different method based on the gravity potential at the event horizon of a black hole.
Cosmological constant can be small when the different types of energy compete with each other for control of curvature. A number of metric solutions from GR allow this, but FLRW does not. The result is that FLRW needs dark energy and the other metrics do not.
The competition of forces is found in Lagrangian methods of celestial mechanics, where by definition potential energy including gravity is subtracted from kinetic energy to correctly predict the action.
Now some researchers are finding other ways to compute cosmic expansion without dark energy, suggesting that FLRW will eventually be changed to agree with other metrics or discarded in favor of newer methods. The proposed change removes the need for dark energy.
Matter is in equilibrium with the local ZPFs, with the radiation fields of the local matter, and with all matter in the universe. Variations in the local equilibrium for a point particle are what result in gravity.
The variability is small because the energy density is much greater than anything available to perturb it.
In the Earth's gravity the variation is about 3 parts per billion from flat space, but we can't ignore it.
I give the exact calculation, but as an average, to cover what small variation may occur.
Jerry
The quantum of action is h. I don't see why you suddenly mix it up with statistical mechanics and the dimensionless partition function. You cannot simply say "it is."
Are you sure you understand the word "action" properly, in all its classical and quantum senses?
In current theory, there is no way that I know of to get a sensible energy density from ZPE oscillators.
Thanks for all the responses.
If specialists in GR and QM could resolve their energy differences some other way, then I would be happy to read the results. On the other hand, the vacuum has a thermodynamics and a kinetics representation, suggesting to me that a partition of energy types is reasonable and necessary.
Equal partition of energy is a well-known and classical way to derive thermodynamic relationships. In my representation the partitions are equal in flat space, but unequal under the stress of curvature.
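For reference, the classical statement is the equipartition theorem: each quadratic degree of freedom carries the same average energy, ⟨E⟩ = (1/2) k_B T.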
The mixing of quantum action with statistical mechanics is not sudden. It is the result of a lifetime of reading the works of famous scientists. My result in flat space is almost identical to the published natural units, and to the results of Paul Dirac, with the small difference of a coefficient, which I define precisely in place of previous approximations by others.
The purpose of the partition is that the cosmological constant must be small while the total energy density is large. The only remedy I am aware of is one where about half of the vacuum energy is concave-curving, like gravitational potential energy, while about half is convex-curving, like kinetic energy.
http://www.physics.umd.edu/courses/Phys851/Luty/notes/diagrams.pdf
See equations (5.9) and (5.10) for comparison of partition functions.
Also, in Table 1, notice that ħ plays the same part in QFT that temperature plays in classical statistical mechanics.
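Written compactly, the correspondence is between the classical partition function Z = Σ_n exp(-E_n/(k_B T)) and the Euclidean path integral Z = ∫ Dφ exp(-S_E[φ]/ħ), so k_B T and ħ play matching roles.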
The Lagrangian function and the Lagrangian density are the well-known examples where, by definition, potential energy is always subtracted from kinetic energy. Electromagnetic energy sits in the kinetic term, meaning convex curvature centered on the longitudinal direction.
If a large-scale Lagrangian function is written, the gravity potential of galaxies and clusters is subtracted from the electromagnetic radiation and other kinetic energy, giving a slightly negative Lagrangian in the near field, where gravity prevails, and a slightly positive Lagrangian in the far field, where radiation prevails and galaxies are accelerating. The combination may be a bit unexpected, but the Lagrangian method is decades old and often used.
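A minimal Python sketch of that sign behavior for a single test mass (the central mass and distances are my own illustrative numbers; the sign convention here subtracts the magnitude of the gravity potential, as in the paragraph above):

import math

G = 6.67430e-11   # gravitational constant
M = 1.989e30      # example central mass: one solar mass
m = 1.0           # test mass, kg

def lagrangian(v, r):
    # Kinetic energy minus the magnitude of the gravitational
    # potential energy, following the sign convention above.
    T = 0.5 * m * v**2
    V = G * M * m / r
    return T - V

# Near field: circular orbit at 1 AU, where v^2 = GM/r,
# so L = GMm/(2r) - GMm/r = -GMm/(2r) < 0 (gravity prevails)
r_near = 1.496e11
v_orb = math.sqrt(G * M / r_near)
print(lagrangian(v_orb, r_near))

# Far field: the same speed with negligible potential, L > 0
print(lagrangian(v_orb, 1e20))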
Then the cosmological constant doesn’t have a problem and GR is in agreement with QM. Partition puts a precise and finite limit on average ZPE. Dark energy doesn’t exist in this representation.
It has to be the ZPE density, but partitioned into energy types to balance convex and concave curvature.
The total energy is large, like the Dirac sea, but the Lagrange density is small by the subtraction of potential from kinetic energy. GR is not violated when the cosmological constant is based on the Lagrange density.