It has been a long journey for researchers trying to unify quantum mechanics with general relativity, but so far without any clear result. Do you think the fault may lie in general relativity, given that not all of its assumptions have been proved?
Dear colleagues
I am an old, tired physicist and I was invited to this site a few days ago. I'm not here to be examined, nor to examine anybody.
So, Charles Francis, don't push me into irrelevant personal discussions. Don't worry about my ignorance, which I know is enormous; you should take care of your own weaknesses.
I've noticed a lot of aggression in this chat, especially with Stephen Crothers and recently with me.
To be aggressive is usually the way to proceed when there are no valid arguments.
Besides, as far as I can see, in this particular discussion there are a lot of assertions without scientific backing. It doesn't look like a scientific discussion.
I request that, from now on, we avoid personal remarks and stick to science-based arguments, with references. Would that be possible?
Hugo
It needs modification. It's one of the greatest theories, but it still needs some modifications to be the absolute theory.
OK, a nice beginning. The goal of this question is indeed to discuss these modifications and suggestions to make it a perfect theory.
Do you think there can be a new perception of the beginning of everything, which could then give a different insight that might help general relativity strengthen its pillars even further to be the final theory?
I think there are some assumptions in general relativity that must be reexamined, like singularities and the changing speed of light with gravity. These are real difficulties that make unification with quantum mechanics harder.
When I was nearing the end of my studies, I was hoping to get into research in GR theory. This did not happen, so I read papers by other people, but nothing more. As far as I see it, GR is a fantastic achievement, confirmed in many applications, but open to improvement. Whether that improvement will be towards unification with quantum mechanics remains to be seen.
The answer to your question depends on what you define as a perfect theory. In fact, any theory can be modified according to new aspects obtained either experimentally or theoretically. With the passing of time, theories are usually readapted, in the sense of refinements or substitutions.
Thanks to all. The point of this question is not to say that the equations of GR have problems like contradictions with experiment. What I'm trying to say is that there are assumptions that complicate the theory and make it hard to combine with quantum mechanics. The point is to reformulate the equations of GR with simpler assumptions, in a form more relevant to quantum mechanics. What do you think?
Even aside from the issue of combining GR and QM, GR may need to be reformulated anyway because of Dark Energy. There is no good theory for Dark Energy, therefore much effort is going into searches for extensions to GR for that reason alone.
Recent observations point to an accelerating universe. The acceleration is believed to be due to an exotic form of energy dubbed dark energy. The dynamical nature of dark energy in particular is the key motivation behind the idea of modifying GR.
I think the problem of describing what is called dark energy using GR is much more complicated than unifying quantum mechanics with general relativity.
I don't know; maybe it's easier to modify GR than to understand dark energy first and then modify GR.
Scalar tensor modifications of GR seem to be the most natural way forward. The dark energy conundrum is a pretty good case in point. In my opinion, such theories ought to be taken more seriously.
@Davies
I agree with you; it is easier to make these theories compatible with quantum mechanics.
General relativity is a proven theory when applied inside a specific framework, namely large masses and long distances. Without it, GPS would not work properly. But Newton's and Laplace's theories are also perfect theories when applied within their respective frames: two, or more than two, celestial bodies.
When applied at the Planck scale [1], general relativity does not work and leads to a wrong concept: the big bang (all the mass of the universe condensed into a geometrical point).
For decades now, physicists have been working hard trying to find a theory that would unify the gravitational force, which is governed by the general relativity theory, with all the other forces, which are governed by quantum mechanics.
May I suggest you visit the question "Is time quantized?" on this same network? https://www.researchgate.net/post/Is_time_quantized2
[1] http://en.wikipedia.org/wiki/Planck_scale
General relativity has indeed been a successful theory of gravity in the realm of the presently observable universe, as well as the most beautiful theory from the mathematical perspective. Further, it is to be noted that one particular extension of GR, the Einstein-Cartan theory, which includes spin as a manifestation of torsion, just as mass (energy-momentum) is a manifestation of the curvature of spacetime, is a natural extension of special relativity from the holonomy point of view, as indicated by A. Trautman ("On the structure of the Einstein-Cartan equations", Symposia Mathematica 12 (1973), pp. 139-162).

Further, this theory is also a gauge theory (gauging the Poincare group), which is a notable feature, because the microphysical counterpart of gravity, the other three interactions (electroweak and strong), is covered by GUTs, which are gauge theories. From this point of view, it could be possible that for a final unification of an extended theory of GR with QM, one has to look for a theory that understands spin as a fundamental entity, one that perhaps links the macrocosm and microcosm geometrically, with some observable features for confirmation.
I agree with Nadeem and, by implication, Susskind. In fact, reading Susskind's "War" book six years ago started me on a major project which, through various trials and mistrials, resulted in a formulation in which curved (or variable) spatial uncertainty replaces curved space, the time formulation remaining the same, and the observed effects of the "proved" portion of GR are matched well. There are some differences in the as-yet-unobserved portions of GR, such as event horizons. Since this formulation inherently stems from quantum uncertainty, there are no fundamental issues with QM, though it is a new formulation using a position-momentum field rather than a time-energy field. While I wouldn't propose that it is proved by any means, the fact that a reasonable theory exists which is indistinguishable from GR over observed realms, and does not have a dark energy or QM problem, suggests that GR has a good chance of falling in the next couple of decades. If interested, see http://dx.doi.org/10.4236/jmp.2013.41018 or, for a less technical discussion, http://InertiaFirst.com
Hi Robert,
What does your formulation of GR predict for the origin of the universe? Does it lead to a geometrical-point concentration like the big bang?
Hi Charles, if you think about it, GR only allows a big bang; it does not force one, and it certainly does not explain the cause of one. Quasi-measurement dynamics (or quantum inertia, more loosely) allows a big bang but also powers an expansion from other initial states. - Robert
Robert
I looked at your paper, but you did not make any predictions, at least not verifiable ones. It is not much use to offer new theories that have no measurable consequences. Perhaps that gives you intellectual satisfaction, but it does not motivate anyone to invest work in it.
Matts, thanks for looking at my paper. I predicted there will be no event horizons, and gave formulas which can be used to calculate orbits near massive objects. When observations permit, the theory will face simple quantitative tests. It is possible that observations of a gas cloud currently approaching the galactic center (which has otherwise been quiet since first observed) may give this opportunity. Otherwise it could be quite a while. Within the solar system, even at 2 million miles from the sun, which is incredibly close, it agrees with GR to within 13 decimal places, which is hard to measure.
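For readers who want a feel for that "13 decimal places" figure, here is a rough back-of-envelope sketch (my own arithmetic, not taken from Robert's paper), assuming any deviation between the two theories enters at second order in the dimensionless potential GM/(rc^2):

```python
# Back-of-envelope estimate (illustrative only): size of the dimensionless
# gravitational potential phi = GM/(r c^2) at 2 million miles from the Sun,
# and the scale of a hypothetical second-order deviation, phi^2.

GM_SUN = 1.327e20   # Sun's standard gravitational parameter, m^3/s^2
C = 2.998e8         # speed of light, m/s
MILE = 1609.344     # metres per mile

r = 2e6 * MILE                 # 2 million miles, in metres
phi = GM_SUN / (r * C**2)      # dimensionless potential, a few times 1e-7
second_order = phi**2          # a second-order effect lands near 1e-13

print(f"phi   = {phi:.3e}")
print(f"phi^2 = {second_order:.3e}")
```

On that assumption, a deviation of order phi^2 sits around the 13th decimal place, consistent with the figure quoted above.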
Retrospective predictions are of course not counted as heavily, and for good reason. But for what it's worth, flat space is natural and requires no tuning in quasi-measurement dynamics. It is also possible that dark energy is not necessary, but I did not try to make a quantitative analysis of that one as it has many unknowns.
I get very little satisfaction from this. I started out to understand some things about the cosmic horizon and write an informative article. On the way I got interested in the effect the horizon and distant objects might have on us, and when I looked into how to analyze this situation in GR, I slowly began to realize GR was broken and that other people had put forward good theories of inertia. All I did, really, was give some simple math by which these previous classical inertia relations can be derived from quantum mechanics. In my own opinion, there are still some questions that need answering before it is a comprehensive explanation, chief among them a satisfactory quantum mechanical explanation of space and the Lorentz contraction. That's why I have not written a book. My current focus is actually in the field of economics because I have two complete theories there, and have just finished a book about one of them. But I enjoy discussing gravity and learning about other people's ideas. Thanks for posting.
IMO, the current version of general relativity is really very bad in places.
It has some serious structural and design issues, and Einstein later said that although the way he'd constructed the theory was understandable given the situation at the time, with hindsight the way it was built could no longer be justified. Einstein already considered the theory to be "past its use-by date" over half a century ago.
Eric
Those who search for an improved GR have their hands full; their work does not profit from layman thinking, and they never need to refer to what Einstein said at some time.
@Eric, do you have a reference to that comment?
@Matts, I have academic credentials and 25-30 published papers in engineering, economics and physics. I set out only to understand certain aspects of GR, but it is too flawed to be of use. However, I completely understand that once one is making a living from it, there is no motivation to change. We have had at least two directly conflicting observations (the flatness of large-scale space, and the accelerating expansion), and the GR community will probably absorb several more without missing a step. If you really look at past revolutions, a new group of players arises and the old group never really converts over. It is a generational passing.
Robert, a reference to the Einstein comment? Sure:
Albert Einstein, Scientific American, April 1950:
"This is the reason why all attempts to obtain a deeper knowledge of the foundations of physics seem doomed to me unless the basic concepts are in accordance with general relativity from the beginning. This situation makes it difficult to use our empirical knowledge, however comprehensive, in looking for the fundamental concepts and relations of physics, and it forces us to apply free speculation to a much greater extent than is presently assumed by most physicists. I do not see any reason to assume that the heuristic significance of the principle of general relativity is restricted to gravitation and that the rest of physics can be dealt with separately on the basis of special relativity, with the hope that later on the whole may be fitted consistently into a general relativistic scheme. I do not think that such an attitude, although historically understandable, can be objectively justified. The comparative smallness of what we know today as gravitational effects is not a conclusive reason for ignoring the principle of general relativity in theoretical investigations of a fundamental character. In other words, I do not believe that it is justifiable to ask: What would physics look like without gravitation?"
In other words, although his 1916 general theory of relativity had been designed to break the problem into a layer of "nongravitational" physics (dealt with by the existing structures of SR) and a further layer of "gravitational" physics (dealt with by the further structures of GR), Einstein now felt that the theory needed to be designed around GR-style principles, "all the way down". He no longer believed that it was justifiable to make a distinction between "gravitational" and "nongravitational" physics, although there were understandable historical reasons why we'd ended up going down that path.
Although special relativity had been a reasonable answer to the question of what (relativistic) physics would look like without gravitation, Einstein in 1950 no longer seemed to consider it to have been a reasonable question.
Robert, in the subject of spatial uncertainty, have you read Kh. Namrai's stuff on stochastic QM? Namrai reckoned that the statistical mechanics of interactions between a particle and its surroundings, including quantum uncertainty regarding the particle's position and velocity, could be modelled (once an arbitrarily-high number of events had been summed to produce a smooth distribution) as a classical field that represented the particle's mass and momentum, smudged out into the surrounding region of spacetime. His sketch was like a gravity-well, but with a tilt that told you where the particle was headed, and how fast.
IMO, Namrai was essentially using QM to derive the properties of a hypothetical underlying classical mass-field that included a velocity-dependent gravitomagnetic ("frame-dragging") component. Might that be the inverse of the bit in your paper where you start with a classical field and end up with a probability field?
----
One recurring "public relations" problem with any attempt to produce a geometrical description that includes the momentum of a particle as a mass-field effect, or the momentum of a gravitational body as a distortion of its external gravitational field, is that as soon as you express velocity in terms of curvature effects, you are no longer copying Minkowski spacetime's geometrical description of how things ought to work - you're instead into "relativistic acoustic metric" territory.
Acoustic metrics are very, very cool, and support Hawking radiation across gravitational horizons, so Namsrai's approach of working backwards from QM to produce a QM-compatible classical field model was IMO a great idea ... but since the resulting geometry isn't Minkowskian, the associated relationships can't all be guaranteed to provide a perfect match to those of special relativity.
That, IMO, doesn't mean that any resulting theory would be wrong, but it does mean that it'd be likely to have trouble passing peer review, if the referees are bright enough to understand that velocity-dependent distortion effects imply non-SR relationships.
Last time I looked, SR-compliance was listed as one of the essential properties that a gravitational theory had to have in order to be classed as "credible" (and reduction to a Minkowski metric as one of the essential properties of metric theories), so it would seem that almost everyone who has tried to work on this class of problem has been blocked, and will probably continue to be blocked for the foreseeable future. The acoustic-metric guys get around the block by describing their approach as a "toy model", which gives them more latitude.
Hi Eric, thanks for the quote, and particularly for your second post. I googled Kh. Namrai and the very first entry was this thread! Nothing else was much of a match. Can you provide more info?
One of my big concerns from the first was to preserve SR. For some reason that was my compulsion, and let me tell you it wasn't easy. But I did. It requires some very intricate reasoning to model inertia this way and not come up with a preferred reference frame. But there is a "crowd" of people promoting ether theories, and I just didn't want to be lost in it.
The other problem you mentioned, of a velocity dependent distortion, doesn't occur in my theory either. I'm not quite sure how I escaped this trap, but I did, and I'm pretty sure I was thorough. You are probably the only person I have (so far) conversed with who could pick up my 2nd paper (quasi-measurement dynamics) and just casually read through it without struggling.
Link for paper mentioned above
http://www.scirp.org/journal/PaperDownload.aspx?paperID=27250
Regarding a discussion a few days back between Robert and Eric, I just wanted to mention that V. P. Belavkin did a lot of work in stochastic quantum mechanics (quantum stochastic calculus), and he made an interesting discovery. He performed a kind of GNS construction to find the irreducible matrix representation of the Ito algebra... He found that the representing vector space is not Hilbert space but pseudo-Hilbert space. In particular, the pseudo-metric for this space was the Minkowski metric. This was completely independent of relativity. We showed that the role of the Lorentz transform in this quantum probability framework was to change the intensity of a Poisson process. A final remark: Belavkin proved that every stochastic process may be dilated to a pseudo-Poisson process (this is done in the Fock-space representation of stochastic calculus originally put forward by Hudson and Parthasarathy in the early 1980s).
Hi Matthew, you are rattling along with mathematics slightly out of my normal discourse, but it sounds intriguing. I saw in your profile that you are interested in observation-generated flows of time, which is also intriguing, since I have characterized both time and inertia as generated by quantum observation (quasi-measurement, I called it), though I've no idea if it is related to your topic or a coincidence of word choice. By any chance have you run across anything that can explain at a quantum level the appearance in classical measurements of the velocity-vector-dependent Lorentz contraction? I can explain almost everything else in terms of that, but whenever I try to explain the length contraction, I quickly get circular reasoning.
I think the answer to your question is yes, see what you think:
Regarding classical measurements: Belavkin formulated a mathematical framework for the process of extracting data from an object (generally a quantum system).
In summary there is an apparatus, churning out data in the laboratory. The data it is churning out is with regards to the object under observation/measurement. However, the data is understood not as information about the object, but information about the interaction between the apparatus and the object. Given that the apparatus is 'well understood' one may then infer properties of the object by virtue of the data collected; which is joint data about the compound system: apparatus + object.
This was generally formulated as interaction dynamics between a quantum future and a classical past. This is also the generation of classical data from the measurement of a quantum object.
Lorentz contraction (i): In this `solution to the quantum measurement problem' there is a coupling parameter between the apparatus and the object. We may begin by considering the data generation to be a Poisson process. In this case we have a counting process (discrete data output) whose Poisson-intensity (the frequency of data output) is the coupling parameter. When this is zero there is no detection of the object. When the coupling diverges we get the central limit, so that a time-continuous measurement of the object results in an entangled object-apparatus diffusion process.
Lorentz contraction (ii): If you've made it this far, thanks for bearing with me. In the mathematics we have an `interaction operator' which generates the joint evolution of object and apparatus. There is a part which acts on the object, and a part which acts on the apparatus, and these two are not separable. Hence the evolution is joint. However, there is a particular operator acting, in this context, only on the apparatus. This operator/matrix transforms the interaction operator/matrix so as to rescale the magnitude of the coupling between the object and the apparatus.
Finally, Lc (iii): This particular operator is the Lorentz transform, and the object-apparatus coupling parameter has the form exp(a) for a hyperbolic angle `a' . (Why general apparatus is fundamentally hyperbolic is another story, but it is in fact related to the matrix representation of the Newtonian (not Einsteinian!) time increment dt). So, at last, where does velocity come into it? Well, if one considers the hyperbolic parameterization of velocity: v=c tanh(a), where `a' is hyperbolic angle, then the connection can be made.
Interpretation: If one wishes to make a connection with relativity, then note that the diffusive dynamics of the object, in the presence of the apparatus, corresponds to v approaching c. And v=0 when no object is detected by the apparatus. One may also wish to consider the connection with thermal phenomena. To whom the property v should be attributed may be a matter of interpretation, but I consider it to be a relative property of the apparatus and object when the two interact.
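As a concrete illustration of the hyperbolic parameterization described above (my own sketch, with illustrative numbers, not Matthew's formalism): rapidity is additive, so v = c tanh(a) automatically reproduces the relativistic velocity-addition rule, and a coupling parameter of the form exp(a) is simply rescaled by a boost.

```python
import math

C = 1.0  # units where c = 1

def v_of(a):
    """Velocity from hyperbolic angle (rapidity): v = c * tanh(a)."""
    return C * math.tanh(a)

# Rapidities add, so velocities compose by the relativistic rule:
a1, a2 = 0.5, 1.2
v_composed = v_of(a1 + a2)
v_einstein = (v_of(a1) + v_of(a2)) / (1.0 + v_of(a1) * v_of(a2) / C**2)
print(v_composed, v_einstein)   # the two agree to rounding

# A coupling of the form exp(a) is rescaled multiplicatively by a
# boost through hyperbolic angle a2:
coupling = math.exp(a1)
boosted = math.exp(a1 + a2)
print(boosted / coupling)        # equals exp(a2)
```

This is just the standard hyperbolic geometry of boosts; the connection to the apparatus-object coupling is the claim in the posts above.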
I hope that this may be of some interest/use to you Robert.
Matthew, I followed your discussion at least somewhat (I never think I understand things until I can work them out for myself) and it is very intriguing. It is along the lines of what I had been thinking it should be, of a quantum interaction (which some might interpret as a statistical interaction, but I think it goes a little beyond that) which varies because the wave function of one object evolves into different relative positions in the frame of the other object. So it interacts more, but then I could never get the Lorentz relationship particularly.
The point about the Newtonian dT is interesting, because I had concluded it basically had to be something like that. If you use the Einstein dT from SR (and of course GR includes SR, at least locally, though occasionally someone will argue with that), then it already has the Lorentz factor, and that's where the circle comes in!
I would like to stretch my brain to consider this in more detail. Have either you or Belavkin or anyone published an accessible paper addressing the Lorentz contraction in this way, or was this discussion just a raw idea you are putting forth? Thanks.
Robert, I wrote a paper that you can download from my page 'the stochastic rep. of Hamiltonian dynamics...'
In this paper I show that Hamiltonian dynamics may be dilated to a discrete object-clock interaction dynamics where this clock is a fundamental object having two degrees of freedom: past and future. The interaction transforms the state of the quantum object (which I consider to be the observer in this paper) whilst also transforming the state of the clock from future into past (an event is in the future (potential) until it is observed, then it becomes memory/data).
Anyway, this `clock' is a "massless particle in 1+1 Minkowski space", and it is derived from the Newtonian time increment. I illustrate this in a lot of detail but I do not go into the details of the original proof/construction given by Belavkin. I also go into all the details about the Lorentz transform stuff, and also details about the Dirac equations that may be derived for this dilated Hamiltonian dynamics.
The purpose of this paper was to introduce the Lorentz transform stuff, and the mathematical similarities between `quantum stochastic calculus' and SR. In particular, one can understand all the fundamentals of the matrix formulation (Belavkin Formalism) of stochastic calculus in the case of purely deterministic evolution. This is `The Stochastic Representation of Hamiltonian Dynamics'.
I gave a talk on this stuff at a conference earlier this year which I can send you if you like - it gives a good overview, and introduces the notion of `The Boosted Schrodinger Equation'.
I can give you the reference for Belavkin's paper I mention above (his DSc work) - `Chaotic States and Stochastic Integration in Quantum Systems' - a wonderful thing... but I do not recommend reading it until you are comfortable with the mathematical framework of my paper mentioned above.
Enjoy!
Hi Robert!
If you cut and paste:
"11.8. Space-Time Structure near Particles and its Influence on Particle Behavior"
into Google, then you should get at least part of the relevant chapter in Namsrai’s book, via Google Books.
That section was also published as a separate paper (with the same title), in IJTP Nov 1984 [23] No.11, pp 1031-1041, and the abstract is online at
http://link.springer.com/article/10.1007%2FBF02213415
I’m not a mathematician, but when someone’s discussing an interesting approach to an interesting general physics problem, I find that I can usually follow the gist of the argument through the accompanying text, if I have some familiarity with the approximate territory.
A lot of these problems only seem to allow a limited number of broad decision-forks in the way that they can be tackled (regardless of the terminology or the discipline used to model them), so once you know somebody’s chosen approach (in terms of their initial "design decisions") I find that that’s often enough to know the answers that their approach ought to produce.
FYI, there's also an interesting "Lorentz-squared" transform that nobody ever seems to look at. Most of its testable relationships are either identical to those of SR or are pretty similar (after appropriate parameter-remapping).
Where the normal "single" Lorentz relationship describes a flat-spacetime model of inertial physics and seems to be the cause of GR's incompatibility with QM, the "Lorentz-squared" version seems to describe a gravitomagnetic model and to allow permeable gravitational horizons ... but only seems to work in curved spacetime.
If anyone is considering building a revised version of general relativity that uses W. K. Clifford's "All physics is curvature" concept, then I think it's worth considering the "Lorentz-squared" option.
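The thread doesn't pin down exactly what the "Lorentz-squared" transform is, so the following is only a toy illustration (my own construction) of the "parameter-remapping" idea mentioned above: a squared factor (1 - v^2/c^2) can reproduce the standard time-dilation factor sqrt(1 - v^2/c^2) exactly, provided velocities are relabelled.

```python
import math

def lorentz(v):
    """Standard SR dilation factor sqrt(1 - v^2/c^2), with c = 1."""
    return math.sqrt(1.0 - v * v)

def lorentz_squared(v):
    """Hypothetical "Lorentz-squared" factor (1 - v^2/c^2), with c = 1."""
    return 1.0 - v * v

def remap(v):
    """Relabel velocity so the squared factor matches the standard one:
    solve (1 - w^2) = sqrt(1 - v^2) for w."""
    return math.sqrt(1.0 - math.sqrt(1.0 - v * v))

v = 0.6
print(lorentz(v))                 # ~0.8
print(lorentz_squared(remap(v)))  # same value after remapping
```

The point of the toy is that two theories can agree on a measured dilation factor while disagreeing about which "velocity" parameter produced it, which is how such remappings can hide behind near-identical test results.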
@Matthew, I retrieved your paper. Unfortunately I am not able to fully follow it; I do not have familiarity with enough of the notation and concepts. I tried to follow along qualitatively, and I see that on page 11 you use a form of the Lorentz transform (the ‡-unitary real Lorentz transformation), but between there and your Proposition 1, instead of emerging, the mention of λ seems to disappear. The paragraph at the top of page 14 is very interesting, about how in the Schrodinger (I assume non-relativistic) picture the future comes to the object. Then again you plunge in over my head, though some of the middle of page 15 is similar. But if I have understood your notes correctly, that the Lorentz transform may be emergent, I'd like to butt through it somehow, even if I have to take a university course in the background. Did you make any headway with my paper? I spent a couple of years figuring out how to get full relativity (no preferred frames) of time and mass observations given length contraction, which so far as I know no one else has been able to do, and I admit the argument is tenuous. But I have not expressed it with the type of formalisms you use so easily; it makes me jealous. :D
@Eric, thanks for the links. I will save these up for later. When I looked at the description of your book, I realized we are sort of headed in different directions, so I will look into Matt's material first, if I get lucky enough that he explains it to me. :)
Robert, I have assigned Wednesday as my `reading day', so I will certainly be reading your paper - I have lots of interest in inertia... but today I must clear a fallen tree.
Indeed, the paper I wrote is a little indigestible, and intended to prepare people for Belavkin's work, which is largely unknown even to professionals in the field because it is so heavy going. However, what I have been intending, sooner rather than later in view of your comments, is to write this paper in a simplified form to introduce the basic ideas. I'll try and get this done by the weekend, because I need to churn out as many papers as possible at the moment.
For the time being I would strongly recommend that you try to comfortably understand the beginning of chapter 3, up to `and so begins the algebraic realization of calculus.' In particular, note that the matrix representation of dt is an operator that transforms the future state of the clock into the past state. Then try to comfortably understand chapter 4 from the bottom of page 8, `Now, we consider...', up to equation (4.3), and then equation (4.9), which is the stochastic object-clock interaction dynamics.
Regarding the Schrodinger picture, what's interesting is that with the future now propagating towards the object, there is additional `momentum of the clock' (see also Quantum Stochastics as a Dirac Boundary Value Problem by Belavkin - but this was not done in the dilated context of Minkowski space). Belavkin began to use this principle to construct a quantum time operator as the generator of the Hamiltonian shift - corresponding to perturbation by clock momentum.
Anyway, I look forward to reading your paper tomorrow, it is all printed and ready to go! I will let you know how I get on, and I will doubtless have some questions.
Matthew, thanks for comments and suggestions, I will follow up. And eagerly awaiting your more accessible version!
Another simplification in current GR that might be unsafe:
As far as I can tell, people tend to do cosmology by first assuming a fairly smooth, roughly hyperspherical universe, and then studying patches of it by assuming a background "baseline" level of gravitational field defined by the distribution of the distant "shell" of surrounding cosmological background matter - a calculation you might /expect/ to come out pretty much the same regardless of whether you're inside a galaxy or not. By first defining an "arena" to study whose background is effectively flat (or at least shows only gross cosmological curvature), and then superimposing the (positive) gravitational effects of galaxies within that arena, we tend to assume that nowhere in the region can have a lower background gravitational field density than that initially-defined floor.
However, there's a suggestion (I think it gets a mention in MTW) that the radius "a" of the cosmological hypersphere at any given point in its history might be treated as a form of time coordinate - if we choose to take this idea literally, and treat a region's expansion rate as a measure of the region's rate of entropic timeflow, then a region with a greater rate of timeflow also ought to expand faster.
That would mean that even if we started with a nominal "floor" model, and then allowed an even slightly greater rate of timeflow out in the intergalactic voids, those more rarefied regions would expand faster and "lobe out" from the rest of the surface, with the lobing then invalidating the geometrical assumption that you can't have a field density lower than the initial calculated baseline field density - as a less-dense region "lobes", its interior becomes even more rarefied, its interior rate of timeflow further increases, and its expansion rate increases even more, as a positive-feedback process.
If this has been happening for a while, then an observer living in the centre of a well-developed "lobe" should measure a greater age for the universe than we do, and by extrapolating their own distances outwards (and naively assuming a simpler spherelike geometry) obtain a greater size for the universe than the value that we'd get doing the same exercise from our own location. If they then try to do their own "baseline" calculations, they'll get a lower value for the supposed baseline gravitational field strength than our calculated value.
As the lobing progresses, there'd seem to be no lower (positive) limit to how far the field-density could then drop in a faster-expanding region.
----
The two initial physical predictions associated with switching to a "no-floor" universe would seem to be:
(a) We'd expect large-scale maps of the distribution of matter in the universe to show a "bubbled" or "foamy" structure, with galaxies concentrated in sheets that represent the boundaries between emptier (faster-expanding) spherelike voids. This sort of distribution seems to be confirmed by recent surveys. Current theory has trouble explaining how these sheets form, given the massive timelags between different distant parts of a sheet. If the "interesting" physics that causes this distribution is happening not in the sheets but in the voids, then this isn't a problem.
(b) If we're "losing the floor", then if the field strength as we leave a galaxy drops below the floor value, the galaxy is going to be more gravitationally self-contained - lower inertia in the surrounding more rarefied regions means that a rotating galaxy appears to be embedded in a more "fluid" space than it is in a "floor" model, and its rotational inertial behaviour will diverge from the Newtonian predictions.
This is again something that we seem to have observed happening (the galaxy "rotation curve" problem), but instead of using nonlinear gravitational arguments to argue for the weakening of the galaxy's interactions with external matter, we currently prefer to strengthen the mutual coupling of matter within the galaxy by supposing the influence of large quantities of additional hypothetical "dark matter", whose only purpose is to make this gravitational correction.
I think that if we'd done a comparative study of how the existence or non-existence of a background gravitational "floor" affected our physical predictions, then these two effects would probably have dropped out of the study as the two main ways to distinguish between a "floor" universe and a "non-floor" universe. We'd have been able to predict both results, and their later verification would have strengthened the arguments in favour of a no-floor model. Instead, both results caught us out.
Eric
You understood that "radius "a" of the cosmological hypersphere at any given point in its history might be treated as a form of time coordinate".
The age of the Universe, or the lookback time t(z) from now to redshift z, is related to the cosmic scale parameter "a" by an integral over an inverse square root of the various energy contents: the a-dependence of matter is different from the a-dependence of radiation and from the a-dependence of dark energy.
The consequence of this is, as you have correctly realized, that time appears to run differently in voids than in surrounding dense matter. This has been studied as a possible solution to the dark energy problem: because we are in a void the expansion appears to accelerate.
Unfortunately this explanation does not work: we are not in an empty enough void to expect to see an acceleration large enough to match observations.
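As a rough illustration of the lookback-time integral described above, here is a minimal numeric sketch. It assumes fiducial flat-LCDM parameters (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7, radiation neglected); these particular values are my own illustrative assumptions, not taken from the discussion.

```python
import math

# Assumed fiducial parameters: flat LCDM with matter + dark energy only.
H0_km_s_Mpc = 70.0
Omega_m, Omega_L = 0.3, 0.7

MPC_KM = 3.0857e19          # kilometres per megaparsec
GYR_S = 3.156e16            # seconds per gigayear
H0 = H0_km_s_Mpc / MPC_KM   # Hubble constant in 1/s

def E(a):
    """Dimensionless expansion rate H(a)/H0 for flat LCDM."""
    return math.sqrt(Omega_m / a**3 + Omega_L)

def age_gyr(a_end=1.0, steps=100000):
    """Age of the universe at scale factor a_end: t = integral of da / (a H(a))."""
    total, da = 0.0, a_end / steps
    for i in range(steps):
        a = (i + 0.5) * da      # midpoint rule; integrand vanishes as a -> 0
        total += da / (a * E(a))
    return total / H0 / GYR_S

print(f"age today = {age_gyr():.2f} Gyr")   # roughly 13.5 Gyr for these parameters
```

The different powers of a under the square root (a^-3 for matter, a^-4 for radiation if included, a^0 for dark energy) are exactly the content-dependence Matts mentions.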
To explain dark energy, I think we need more than general relativity.
I think this energy may not even obey the conservation of energy, because the total energy of the universe is simply not sufficient to supply it; in addition, this energy seems to increase as we go towards the edges of the universe.
Hi Matts.
Do you happen to have any references to people studying this issue in enough depth to take into account the effects of “lobing”? If so, I'd be quite interested to see how far they got. The last time I checked (about five years ago) nobody seemed to know of anyone having done any work on the subject.
There were two parts to the suggestion:
(1)
You're quite right that we'd normally calculate the background field density in an empty region of deep space to be only slightly lower than our own measured background field density, and that if we chose to associate the corresponding increase in timeflow with a faster expansion rate, that expansion rate would then, in an initial calculation, be assumed to be only slightly faster.
(2)
However, the problem with that exercise is that the end result tends to invalidate the initial simplifications that we used to do that initial calculation.
If the region expands faster than its surroundings, then it'll tend to bulge out from the rest of the cosmological hypersphere, and that bulge means that the internal field-density is then lower than we'd calculate by assuming that the region still lay on the initial hypersphere surface. That would mean that the resulting internal field-density would be lower in the region than we'd calculated, the expansion rate would be greater, and the deviation from the original assumed geometry would be even greater, which means that the actual rate of timeflow goes up again, the expansion rate goes up again, and the region "balloons" out from the surrounding region like the "weak patch" on an over-inflated bicycle inner tube… it's an unstable positive-feedback situation.
In the most extreme scenario, the void would go beyond being a simple bulge or blister, and become a full blown mini-hypersphere in its own right, connected to the rest of our universe by a “neck” region marked by galaxies that have been pushed apart by the expanding lobe, and if the speed of light is faster just inside the lobe's neck connection, the neck will also tend to end up nicely “smoothed” and rounded.
Two things that we'd need to look out for if this suggestion was correct would be (1) a “foam-like” distribution of galaxies at large scales, forming wall-like structures between spherelike voids (which we've now seen, and embarrassingly, didn't predict), and, (2) any remaining stars deep in these voids would appear anomalously old (unfortunately, since the voids are almost empty by definition, these stars would be difficult to find).
I suppose that we'd also expect some sort of lensing effect, probably similar to the effects calculated for wormhole mouths. If our modelling method is sophisticated enough to include nonlinear gravitational aberration, then I suppose that the effect of the curvature around the “neck” would be to make gravitational fieldlines that enter the region deflect to one side so that, again, the fieldline density in the central region of the void is even lower than the previous arguments would suggest.
Another consequence of the idea would be that since we'd expect a milder version of the effect to apply to the regions between individual galaxies (weakening the inertial coupling between a rotating galaxy and its surroundings), we'd expect the outside rim of a rotating galaxy to find it easier to rotate with the rest of the galaxy than Newtonian mechanics would suggest, which is, again, an effect that we've already noticed but failed to predict in advance.
----
IMO, the weak point of the argument is not that it can't be used to predict strong expansion, it's more that it can predict such /strong/ positive-feedback expansion (in the voids that then form) that it's difficult to set a theoretical upper limit to the effect. Also, I suppose, the non-linear nature of the thing will make it more difficult to do exact calculations.
Eric
I haven't followed the development of this research ever since it was demonstrated that we aren't located in a deep void. I suggest you just go to arXiv.org and search for recent papers with "void" in the title.
I think it is worth directing you to the work of Laurent Nottale and his theory of Scale Relativity. The theory is founded on the principle that the apparently smooth (differentiable) geodesics of space-time at the macro-scale described by general relativity are an incomplete description of the structure of space-time at the micro-scale. In simple terms, the hypothesis states that the structure of space has both a smooth (differentiable) component at the macro-scale and a chaotic (non differentiable) component at the micro-scale.
At the macro-scale, the fractal (non-differentiable) component and its influence, is small (and generally considered unimportant in classical physics) but in the quantum domain, there is a transition at which the fractal component and its influence, dominates, with the origins of quantum properties in the underlying infinite chaos.
The hypothesis around the micro-scale structure of space-time builds on the work of Feynman, who in 1948 suggested that the typical quantum mechanical paths that are the main contributors to the path integral are non-differentiable and fractal, although the term "fractal" was only coined by Mandelbrot in 1975.
If you accept Feynman's path integral then I think that the theory offers an "intuitive" insight into how general relativity and quantum mechanics could be fundamentally linked through the geometry of space-time.
Nottale is an astrophysicist based at the National observatory in Paris (CNRS). He has made a number of predictions using his theory which have been validated by astronomical observations. This would indicate that the idea is worth further consideration and debate. He also has some interesting ideas about Dark matter which are worth reading. Happy to expand on this if anyone is interested.
You can view a number of his papers at the link below
http://luth.obspm.fr/~luthier/nottale/
Dear Turner,
I think Nottale's theory is interesting, but I don't know why it doesn't attract as much attention from the scientific community as it deserves.
I have to concur with your point; I am not myself sure why his ideas are not more widely debated. The breadth of his work and his outputs have been substantial. There is general consensus that a fresh view of quantum gravity is required, and Nottale offers this. His theoretical foundations for quantum physics, and his hypothesis of macroscopic quantum potentials which can be inserted into a macroscopic Schrödinger equation and used to model structures from biological systems to solar systems, galaxies and the Universe (with numerous examples of validation), should certainly put him in the frame for lively discussion (positive and negative) and attention. I would certainly be interested in other views on this. For those who don't have the time to read through his extensive publications list, I can recommend his most recent book "Scale Relativity and Fractal Space-Time: A New Approach to Unifying Relativity and Quantum Mechanics" (Laurent Nottale, 2011), published by Imperial College Press (ISBN 978-1-84816-650-9). I think this book will assist in introducing the theory to a broader audience in a way that is accessible for physicists.
Thanks Philip,
it is a really interesting approach which is closer to my own ideas.
Torsten
Hmm, in the context of the uncertainty principle, does space-time structure much below the level of the nucleus make any difference? Particles are not reliably localized to such dimensions.
Thanks for your comment Robert
I think that in the first instance it would be most efficient to refer you to two recent papers which summarise the concepts. These articles summarise some of the thinking around macroscopic quantum potentials and their relevance to astrophysics and biological systems, which many may not be familiar with; they are backed up by an extensive list of reviewed articles which you can download from Nottale's site.
http://luth2.obspm.fr/~luthier/nottale/arUdine.pdf
http://arxiv.org/pdf/1306.4311v1.pdf
I hope they are helpful. I would be interested in your feedback and discussion once you have had time to look at them
Regards
Philip
A short literature search shows that only a few of Nottale's papers have been published, and that they have been cited only 6 times (in addition to self-citations). Thus it is true that the science community has not paid attention. This is not a criticism.
Hi Matt. I am not quite sure where your figures come from. I just checked and Google Scholar indicates that Nottale has been cited 3342 times with 1351 since 2008.
@ Robert and all Professors
There must be a difference in the space-time representation when you go to micro-scales; even the number of dimensions may become more than four.
@Philip, thanks for the links. The papers are very detailed for a "summary", however. I think of my papers as summaries too, since I have to compact them so much to get reasonable page counts, but in fact other people have trouble getting through them, and I have trouble getting through Nottale's.
What I did get out of about the first page and a half of each, what I would consider a non-technical "summary" without getting into trying to exactly visualize it, is that a fractal space-time structure is proposed. I gather it is in addition to, not in lieu of, curved space-time structure. This might be used to justify macro-scale quantum functions (in case they are not already justified, like superconductivity, but I'm not aware of a controversy). I did not read far enough to see if the idea is extended to explain entanglement, but it seems like an intriguing path to follow.
My own preference is to damp down all the curvature and complexity in space time, and to get time essentially out of it, as it is an emergent property of motion not really a coordinate. I have already done this for gravity quite satisfactorily. I'm about to do it for Minkowski space, showing that the whole structure emerges from quantum interactions in a 3-space. : )
Thanks for your comment Robert
It is always difficult to get the right balance on feedback. I will attempt to give a summary of the theory, which is more accessible on first read and which I hope will encourage you to dig a bit deeper and stimulate more specific questions. I should stress that in such an overview that it is not possible to cover or anticipate the multitude of questions that inevitably result.
As you say, it starts with the principle that space-time is differentiable at the large scale and non-differentiable (fractal) at the micro-scale, with the transition taking place at the de Broglie scale. One of the most important implications of this is that the fractal fluctuation of space-time at the micro-scale leads to the emergence of a fractal field which generates a potential energy (“quantum potential”) and a quantum force, which are analogous with the gravitational field, gravitational potential and gravitational force respectively in the theory of General Relativity.
The fractal (non-differentiable) component exists at macroscopic scales but its influence, masked by classical motion is small (and generally considered unimportant in classical physics), but at the de Broglie scale (in the quantum domain), the fractal component and its influence, dominates, with the origins of quantum properties in an underlying infinite chaos, the effect of the non differentiable geometry of space-time.
An additional insight from the theory of Scale Relativity that I referred to is the identification of an additional potential energy (derived from the underlying macroscopic fractal field and macroscopic quantum potential), which leads to an additional macroscopic quantum force. This is evidenced by the existence of macroscopic quantum effects in, for example, Bose-Einstein condensates and superconductivity, when the de Broglie length becomes macroscopic as the motion of the particles decreases at very low temperatures. However, in addition there is evidence of this macroscopic quantum potential in the classical world, where it is seen as an additional weakly structuring force.
Nottale gives numerous examples in his work in which various forms of a macroscopic Schrödinger equation can be constructed, based not on Planck's constant but on constants that are specific to the macroscopic system. These equations can be used to describe the formation of probabilistic structures in the classical world, which have profound implications for biologists, physicists, materials scientists etc.
As an example, Nottale considers the development of a Schrodinger approach to gravitational structuration in which he takes account of the standard approach to gravitational structure formation plus this extra potential energy. He uses this approach to describe a number of apparently quantized gravitational systems.
A nice case study is diffusing space debris in orbit around the Earth, for which probability peaks are predicted at altitudes of 718 km, 1475 km and 2269 km. The predictions are in close agreement with the actual data, which peak at 850 km and 1475 km (the second peak in total accordance with prediction). The data add weight to the existence of the predicted probability density peaks, which shouldn't exist in the absence of the predicted self-organising underlying process.
His latest book, which I referred to, offers numerous further examples including our own solar system, planetary nebulae, stars and galaxies. As a further example, the proposal of missing mass in the universe (dark matter), the result of the existence of a missing energy in the energy balance of the gravitational system, may be explained by this additional potential energy of fractal geometric origin, negating the need for a dark matter hypothesis. The implications of this hypothesis, if correct, are, I am sure you will agree, profound.
The theory also offers a new physical meaning to the electrical charge and to gauge transformations and a geometric interpretation of the nature of gauge fields. As an example he proposes a new geometric derivation of Maxwell’s field equations.
In another important aspect of the theory worth mentioning briefly, quantum particles are a manifestation of the fluid of space-time geodesics, in which the various properties of the wave-particle emerge from their internal fractal geometries. The existence of quanta is a consequence of the quantization of these geometric properties, derived from the properties of the geodesic equation.
On a final note it is worth noting that many physicists studying quantum gravity (e.g. superstring theory and loop quantum gravity) have pointed out the expected fractal structure and properties of quantum space-time. The main difference is that these quantum gravity studies assume the quantum laws are fundamental. i.e. the fractal geometry of space-time at the Planck scale is a consequence of the quantum nature of physical laws, In Nottale’s approach, the quantum laws are considered as manifestations of the fractality and non differentiability of space-time, so they do not have to be added to the geometric description. Maybe these different approaches could one day meet.
I hope this extra information is useful. I am happy to provide more comments if required. It's perhaps worth mentioning that I am busy finalizing a review of Nottale's latest book (a large volume of more than 700 pages), which offers a riveting overview of more than 20 years of his work, and which I will upload to my ResearchGate site before Christmas.
Is the fractal spacetime at micro-scale and its associated potential energy something intrinsic, or arising from something else? Is this something different than vacuum energy or the same thing? Is this what you meant by "quantum potential?" You said "analogous to" various aspects of gravity. Do you mean that it may explain gravity, or it is some other force just analogous to? How is fractal geometry distinguishable from simple position uncertainty, or is it?
Hi Robert
Thanks for these questions which I find useful in helping to understand how to better explain things in future.
I am afraid I have to answer very briefly at the moment due to an urgent deadline but I hope that it helps to clarify things a bit.
Within the scale relativity framework, fractal spacetime at micro-scale is nothing other than an extended description of the fabric of space-time as defined within the theory of general relativity. Gravitational force is a manifestation of the curvature (geodesics) of space-time at the macroscale, whilst quantum laws (and effects such as position uncertainty) and gauge fields are a manifestation of the fractal fluid of geodesics of space-time at the microscale. The macroquantum potential is an additional potential energy, which is separate from gravity and the vacuum energy all of which are fundamentally linked by the geometry of space-time.
It should be noted that the theory is not yet complete with further work being required to describe how the two component parts merge at the Planck scale. It is envisaged that the fractality and curvature become mixed and accounted for on the same footing. To quote Nottale “one doesn’t combine the motion and scale covariant derivatives after they have been separately constructed. This work requires the construction of a new covariance that accounts for all effects together”. A big challenge for all theories of quantum gravity.
Happy to answer in more detail and offer greater clarity on any outstanding questions after the 11th of December if required.
So then it is another name for Wheeler's quantum foam? Perhaps with more detailed formulation? (I'm not sure how much detail Wheeler actually went in to ... almost none in his book with Misner & Thorne where I first read of it).
Hi Robert
I am not exactly sure at what level you are comparing the theory of Scale Relativity (ScR) with Wheeler's quantum foam. At the most basic conceptual level, ScR has some commonality with Wheeler's idea. However, this could also be said of a number of other approaches to quantum gravity. When I try to explain the ScR concept within lectures that I have given, at the most basic level I have used the quantum foam analogy. However, one then quickly moves on to describe the ScR theoretical framework in terms of a fractal space-time geometry at the microscale and its relation to quantum laws and gauge fields, which didn't exist when Wheeler developed the quantum foam postulate.
I am not sure if this answers your question or if you would like me to go into more detail. Happy to do this. Do you have any specific issue in mind? Otherwise the answer could be a very long one!
I have now uploaded a review of Laurent Nottale's book which I referred to. I hope it answers some of the questions and points in this thread in a little more detail.
Apologies for the missing link which is now attached
https://www.researchgate.net/publication/259373681_A_Review_of_Scale_Relativity_and_fractal_space-time?ev=prf_pub
Relating to Robert Schuler's question on the vacuum energy, I thought it might be useful to refer to a paper by Nottale on Scale Relativity and Structuration of the Universe.
One of the most difficult open questions in present cosmology is the problem of the vacuum energy density and its manifestation as an effective cosmological constant.
His solution consists in considering the vacuum as a fractal (i.e., explicitly scale dependent). As a consequence, the Planck value of the vacuum energy density (which gave rise to the normally huge discrepancies with observational limits) is relevant only at the Planck scale, and becomes irrelevant at the cosmological scale.
The paper can be found at Laurent's Research Gate site for those interested
https://www.researchgate.net/publication/251979573_Scale_Relativity_and_Structuration_of_the_Universe
Regards
Philip
Dear all
I would like to mention a new theory to you, the time of events theory; you can find the full paper on the site of the European Physical Journal Plus. This theory is based on assuming a dynamic space-time instead of the static space-time assumed in the special theory of relativity. It describes a fractal space-time geometrically. The link is
https://www.researchgate.net/publication/259996539_Solving_the_instantaneous_response_paradox_of_entangled_particles_using_the_time_of_events_theory
and the journal site for fullpaper is
http://dx.doi.org/10.1140/epjp/i2014-14023-5
Dear All
I got permission from the editors of Eur. Physical J. Plus to post a preliminary revised version of the accepted paper; you can now find it freely on my ResearchGate profile. I hope this will answer many of your questions. I am glad to discuss it with you after you have read the preliminary revised version
with best regards.
Dear All
According to fundamental particle-physics theories, energy considerations in the production and decay of matter–antimatter pairs point to common features between matter and energy: the constant velocity of the photon can be considered a property that is transmitted from matter into energy and vice versa. Differences in mass, in the structure of matter and in its related fields are explained by the relationship between length contraction (reduction in volume), relativistic mass, and the relativistic form of Newton's second law, which shows the mass variation (i.e., the infinite speed of classical mechanics is replaced by an infinite mass). Finally, with regard to mass–energy equivalence, a definition of singularity and of time under this new approach, together with an analysis, is presented.
For more see:
Definition of Singularity due to Newton's Second Law Counteracting Gravity
http://www.sjournals.com/index.php/SJPAS/article/view/602/pdf
Thanks. Please visit my work 'Discussion on Mass in a Gravitational Field'
http://pubs.sciepub.com/ijp/1/5/3/index.html
The work proposes that: Mass of an object decreases with it close to center of a gravitational field.
ΔU = Δmc^2
where U is potential energy, m is mass, and c is the speed of light
We wish the hypothesis provides some ideas for the gravitational singularity, and make the meaning of gravity more accessible.
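To get a feel for the size of the effect the proposed relation ΔU = Δmc^2 would imply, here is a small numeric sketch (my own illustration, not taken from the paper) of the fractional mass change GM/(rc^2) it predicts for an object at radius r in the Earth's field, measured relative to infinity:

```python
# Illustrative only: magnitude of the proposed mass deficit at Earth's surface
# if the change in gravitational potential energy equals (change in mass) * c^2.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24        # Earth's mass, kg
R = 6.371e6         # Earth's mean radius, m
c = 2.998e8         # speed of light, m/s

def fractional_mass_change(r):
    """Delta m / m = GM / (r c^2): the fractional mass deficit, relative to
    infinity, that the hypothesis would predict for an object at radius r."""
    return G * M / (r * c**2)

print(f"{fractional_mass_change(R):.2e}")   # about 7e-10 at the surface
```

A part-per-billion effect of this size is comparable to the measured gravitational redshift at the Earth's surface, which is why the hypothesis is hard to distinguish from standard predictions by simple experiments.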
For three years, I have really appreciated the many editors who helped to improve my work, and I sincerely thank the scientists worldwide who examined my work carefully; their opinions are very precious, and have led to significant improvements. I must say thanks again. I respond to editors, and try to explore the prospects of the work so as to motivate new ideas.
1. The hypothesis is summarized from the gravitational field, in view of the fact that E=Δmc^2 can be applied to chemical reactions or nuclear reactions, where E equals the change of potential energy. It might be extended to the electromagnetic force, the weak force and even the nuclear force, and explain the source of mass loss. Therefore, the change in potential energy of a particle is equivalent to its change in mass times the square of the speed of light. If Mass is associated with Field (MF), and this is a fundamental law of the universe, it will help us to understand the physical world and explore the origin of mass and the nature of fields. People always want to use one theory to describe the microscopic world and the macroscopic world.
2. Why is the universe homogeneous on large scales? Why does matter not concentrate at some point? What are the quasars? How does their extraordinary redshift come about? We wonder if MF could answer some of these issues. A super-massive star has a powerful gravitational field which will cause time dilation and distance extension (there the light speed is still c); the radiation capacity of the star declines, and when it collapses, it will hardly be luminous (a black hole). We think that observations in the far infrared band might be conducive to the study of black holes; the singularity might not exist. MF indicates that matter cannot go through the Schwarzschild radius in the form of mass; this might be bad news for the big bang.
3. Subatomic particles cannot be seen. There are a lot of puzzles in the field. MF might provide a new research tool; for example, MF might provide a new equation for the study of the fine structure of atomic spectra, and take into account the fine structure of atomic spectra in magnetic or electric fields. The Standard Model deviates from tests in the high-energy region. MF might be more obvious in the strong interaction. We look forward to exploring some non-accelerator experiments, for example some physical and chemical experiments in a strong electric field or at the border of an electric field. We think that when two magnets attract and approach each other, a certain external work can be supplied in the process; the mass of the magnets would be found to decrease if a balance were accurate enough.
4. We spent ten years studying moving clocks. Two clocks (327.68 MHz oven-controlled crystal oscillators) were installed on both ends of a revolving bar. The two clock signals were transmitted into a mixer in the middle of the bar to get a beat frequency (53 Hz). The sine wave of the beat frequency was adjusted to a square wave. A high-speed counter (80 MHz) was used to count the width of the square waves. When the bar rotated in a horizontal plane on the ground, the count values were always the same in different directions. The results indicate that there is no difference between the clocks, and they have nothing to do with the speed of the earth through the universe. In view of the many high-precision experiments at various elevations nowadays, we think that the principle that physics laws are the same in all reference systems is sustainable. Experiments at different gravitational radii should get the same charge-to-mass ratio. Thus, the elementary charge should decrease in a gravitational field. MF is universal, so the elementary charge also should decrease in an electric field. In other words, the process in which an electron enters a nucleus will cause two changes. First, their respective masses decrease. Second, their respective charges decrease. Changes in mass can be verified from the mass-energy equation. The decrease of the elementary charge in a gravitational field can explain the gravitational redshift. Exploratory experiment: a change of atomic energy level could be observed when an electric field is applied to the atoms of the receiver of a Mössbauer spectrometer (the atoms should be embedded in the surface of a ceramic).
5. The detection of gravitational waves: If MF can unify the four fundamental forces, research on gravitational waves can use the achievements of electromagnetic waves and greatly accelerate the research process. We suggest increasing the size of gravitational wave detectors. Perhaps we will become accustomed to studying the gravitational field from outside, as with electromagnetic waves. An experiment such as fig. 4 in my work is significant; it might be a method of researching gravitational waves indirectly. The direct method is two masses approaching and separating at high speed. The experiment might be used to measure the radiation efficiency of gravitational waves, so as to determine the dynamic equation in a gravitational field. We once carried out a test. A gyro is installed lying on a rotary table. Two motors are used: one (M1) drives the rotary table, the other (M2) drives the gyro spin (this is necessary; a gyro will lose speed fast when the direction of its axis is forced to change). The power dissipation when the two motors are powered on simultaneously is P12. Next, M1 is powered on with M2 motionless; the power dissipation is P1. Then, M2 is powered on with M1 motionless; the power dissipation is P2. We obtain P12=6(P1+P2). The difference in power dissipation is great, yet the coppery gyro does not heat up. Where does the lost energy go? Of course, the test is still very primitive.
6. Exploration of gravity: Based on MF, an object falls in the direction of gravity because its microscopic particles tend to a lower state and release energy. People expect to reduce gravity in some way; for example, the local spatial structure could be changed by a strong electric field, or the quantum levels of the particles could be raised by ultra-low temperature, among other ways. We think: a meteorite is captured by a gravitational field, and an electron is captured by an atom. Both of them send out light, and the masses of the two systems are reduced. The two might be the same thing if the rule of quantization were not enforced. Perhaps people could reduce gravity after fully understanding it some day.
7. Exploration of ultra-low temperature: Based on MF, atoms would lose part of their mass and release energy when they enter an electric field. Once this energy has been released, suddenly removing the electric field would require the atoms to regain mass and absorb heat, making it possible to reach or even go below absolute zero.
8. Light and electromagnetic fields: We have tried a number of experiments and found that various clocks (oscillators) are strongly affected in an electric or magnetic field. We conjecture that the slowing of light in glass is related to the atomic electric field. We wish to study the speed of light in a strong electric or magnetic field; the principle of the Michelson-Morley experiment may serve as a reference.
9. Example devices: a. Scanning the gravitational field: a high-frequency oscillator is kept synchronized with a satellite signal (voltage-controlled oscillator, divider, phase comparator, feedback); this high-frequency signal and a local high-frequency signal are fed into a mixer, and the output beat-note signal can be monitored. The device can be placed on a ship to scan the gravitational field of the Earth. b. A number of laser beams are adjusted so that the electric fields in a local region are superimposed and delayed, to explore reducing the ignition energy of nuclear fusion. c. MF indicates that a strong electric or magnetic field can decrease the energy or temperature of a combination reaction; this might have a wide range of applications. Etc. The relevant principles shall not be used for any military purposes.
The work also appears on
http://www.paper.edu.cn/index.php/default/releasepaper/content/201012-174
And http://vixra.org/abs/1011.0075
General relativity gave us new insight into how one can have a different explanation of the same phenomena. It is a geometrical view of gravitation, while Newton's view is dynamical. Analogies exist between the two paradigms. In a recent article we have shown that a different view also exists. The details are available at
http://www.scirp.org/journal/PaperInformation.aspx?paperID=23098
Feedback is welcome.
At the beginning of the 20th century, Newton's second law was corrected to account for the limiting speed c and the relativistic mass. At that time there was not yet a clear understanding of subatomic particles, and there was little research in high-energy physics. Moreover, the approach of relativity to physical phenomena is hyper-structural: it explains the observations of the observer, with little consideration of the intrinsic nature of the phenomena. From this point of view, through various arguments and the investigation of some physical phenomena, we have attempted to show the necessity of reviewing Newton's second law.
For more see: New Discoveries and the Necessity of Reconsidering the Perspectives on Newton's Second Law
http://article.sapub.org/10.5923.j.jnpp.20120203.02.html#Ref
Dear all
Thank you for the nice participation. What about the velocity of light in general relativity? It is assumed to change globally, but not locally, in a variable gravitational potential. Can this assumption be changed?
please join the related question on RG if you want:
"Does the assumption that the speed of light is changed with changing gravitational potential represent reality or an assumption?"
or we can discuss it here
Einstein first mentioned a variable speed of light in 1907 and reconsidered the idea more thoroughly in 1911, but he eventually gave up his VSL theory for several reasons.
http://en.wikipedia.org/wiki/Variable_speed_of_light
When we talk about VSL, a preferred reference frame is unavoidable. So the existence of a cosmic axis associated with a preferred reference frame would rule out GR as a valid theory in cosmology.
http://www.sciencedirect.com/science/article/pii/S0370269311003947
http://www.newscientist.com/article/dn23301-planck-shows-almost-perfect-cosmos--plus-axis-of-evil.html#.Uz1oB01OVol
Charles,
Einstein indeed assumed that there was no preferred reference frame in the universe, and this led to GR, but have you read the second part of my last post? Einstein was probably wrong to make that assumption.
Charles,
A preference for spiral galaxies in one sector of the sky to be left-handed or right-handed has indicated a parity-violating asymmetry in the overall universe and a preferred axis. The "axis of evil" was identified by Planck's predecessor, NASA's Wilkinson Microwave Anisotropy Probe (WMAP). Planck's map has also confirmed the presence of a mysterious alignment of the universe.
So who has made random claims?
NASA? Planck's map? Michael J. Longo or Einstein?
The existence of a preferred reference frame would certainly rule out GR, even though nobody can explain it.
Dear Charles Francis and Guoliang Liu
Thank you for your participation. I think that, according to your answers to the following question, one can tell whether there is a preferred reference frame or not.
If a clock is moving at the speed of light in a variable gravitational potential, will the clock experience time dilation or not, according to GR and according to your opinion? If your opinion does not agree with GR, please give it separately.
Dear Sadeem,
In order to answer your question, we have to assume all of our measurements refer to a clock sitting in a universal flat space-time reference frame, infinitely far from any gravitational field; in fact it is the only valid inertial reference frame in reality. Then we have to find the relationship between special-relativistic time dilation and gravitational time dilation.
I don't think GR can answer your question. My answer is in section 4.1. of my paper posted on RG.
Talking about time dilation without referring to a preferred reference frame can only lead to all kinds of paradoxes.
To answer the question: it's just a theory, but a pretty good one that has done all that has been required of it. It is also a beautiful theory. What more could you want?
In fact if you have a GPS system you are already using General Relativity in your everyday life - and you haven't got lost yet have you?
FYI:
http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.108.110801
Kentosh and Mohageg looked through a year’s worth of GPS data and found that the corrections depended in an unexpected way on a satellite’s distance above the Earth. This small discrepancy could be due to atmospheric effects or random errors, but it could also arise from a position-dependent Planck’s constant.
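For context on the size of the GPS corrections mentioned above, here is a back-of-envelope sketch (rounded constants and a nominal circular orbit are assumed; these figures are not taken from the paper cited):

```python
import math

# Standard textbook estimate of the two relativistic clock corrections in GPS.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # Earth mass, kg
c = 2.998e8          # speed of light, m/s
R_earth = 6.371e6    # Earth radius, m
r_sat = 2.6561e7     # GPS orbital radius (~20,200 km altitude), m

v = math.sqrt(G * M / r_sat)  # circular orbital speed

# Special-relativistic rate shift (satellite clock runs slow): -v^2 / (2 c^2)
sr = -v**2 / (2 * c**2)
# Gravitational rate shift (satellite clock runs fast relative to ground)
gr = G * M / c**2 * (1 / R_earth - 1 / r_sat)

total_per_day = (sr + gr) * 86400 * 1e6  # net drift, microseconds per day
print(f"SR: {sr:.3e}, GR: {gr:.3e}, net drift ≈ {total_per_day:.1f} µs/day")
```

The gravitational term dominates, giving a net drift of roughly +38 µs/day, which is why GPS clocks are deliberately detuned before launch.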
Dear Sadeem Fadhil
I have not found your article; please give a link to the paper.
The electroweak coupling constant is a variable dimensionless constant, which implies that the principle of general covariance breaks down at microscopic scales. So it is impossible to fit quantum theory into the framework of GR based on the principle of general covariance, because GR breaks down at microscopic scales as well.
A speculative theory???
For contributions to the unification of the weak and electromagnetic interaction between elementary particles, Abdus Salam, Sheldon Glashow and Steven Weinberg were awarded the Nobel Prize in Physics in 1979.
There also exist other theories with a parameter that, by a special adaptation, can reduce to GR or not, so the problem needs further observational investigation: let the labs decide on the best theory.
On the subject of evaluating theories via experiment rather than by philosophy or inspiration:
The most worthwhile (and perhaps the only?) theory-of-gravity comparison system, PPN, is described very clearly in Clifford Will's fine article "The Confrontation between General Relativity and Experiment". This is available at
http://relativity.livingreviews.org/open?pubNo=lrr-2006-3&page=articlesu5.html
Its basis is that Newtonian theory does a great job, so it has to be the zeroth-order approximation, and all else is a wrinkle on that, including Einstein's theory.
The title is a little misleading, since I suspect any theory of gravity can be put in this form: so it is possible to validate any theory directly against the tests listed in the article. The article is really about "The confrontation of theories of gravity ...".
Of course, theories that include Einstein's theory in some limit, like f(R) theories, are not differentiated in this way: we need other pragmatic indicators for that. For example, tests using the cosmic microwave background, which serve to test the cosmology derived from the particular modified theory. There are also tests on the higher-order clustering that develops in those cosmological models: those are less constraining than the CMB, but they do test what happens between recombination and the present epoch, i.e. the clustering process given the initial conditions derived from the CMB data.
I would venture to suggest that all those who have alternative theories which go beyond Einstein's GR, or even abandon Einstein, should report how well they do on the coefficients listed in the Clifford Will article, and their level of consistency with CMB and clustering data.
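As a concrete illustration of how one PPN coefficient feeds into an observable, the classic light-deflection test depends on the parameter γ (γ = 1 in GR). A minimal sketch with rounded constants:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
c = 2.998e8        # speed of light, m/s
R_sun = 6.963e8    # solar radius, m (grazing impact parameter)

def light_deflection_arcsec(gamma, b=R_sun, M=M_sun):
    """PPN light-deflection angle, (1 + gamma)/2 * 4GM/(c^2 b), in arcseconds."""
    rad = (1 + gamma) / 2 * 4 * G * M / (c**2 * b)
    return rad * 180 / math.pi * 3600

print(light_deflection_arcsec(1.0))  # GR (gamma = 1): ~1.75 arcsec
print(light_deflection_arcsec(0.0))  # "Newtonian" value: half the GR result
```

A competing theory's predicted γ can simply be substituted to see how far its deflection prediction falls from the measured value.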
Dear all
Thank you for your answers. I prefer to re-ask the question in a clearer way.
Light itself can be considered a clock through its frequency. It is obvious that this frequency will change, but remember that the ticking rate of our clocks also changes within such a varying gravitational potential. So which clock really varies in its rate in such a potential?
Sadeem,
All kinds of clocks should slow down to the same pace at the same background gravitational potential, no matter whether they are atomic clocks, simple pendulums, or even the mean lifetime of a decaying particle.
Dear all
I think there is a misunderstanding of the question. The question is about the origin of the change in frequency of a light wave in a variable potential: is it due to a change in the frequency of the wave itself, or to a difference in clock ticking rates between the source and the observation region?
I hope it is clearer now.
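To put a number on the effect being asked about: near Earth's surface the fractional frequency shift over a height difference h is Δf/f ≈ gh/c². A quick sketch using the Pound-Rebka tower height (note that GR predicts only the ratio of rates between the two locations, which is why "which clock really changed" depends on the frame one privileges):

```python
g = 9.81        # m/s^2, surface gravity (assumed uniform over the tower)
h = 22.5        # m, height of the tower in the Pound-Rebka experiment
c = 2.998e8     # m/s

# Fractional frequency shift between the bottom and top of the tower.
# GR gives only this ratio between emitter and receiver; it does not
# single out one clock as the one that "really" changed.
dff = g * h / c**2
print(f"Δf/f ≈ {dff:.2e}")  # ≈ 2.5e-15
```

A shift this small is why the experiment needed the Mössbauer effect to resolve it.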
Hi All
I think relativity has a fundamental flaw.
According to fundamental particle physics, the energetics of the production and decay of matter-antimatter pairs point to common features between matter and energy: the constant velocity of the photon can be considered a property transmitted from matter into energy and vice versa. Likewise, differences in mass, in the structure of matter, and in its related fields are explained through the relationship between length contraction (reduction in volume), relativistic mass, and the relativistic form of Newton's second law, which exhibits the mass variation (i.e., the infinite speed of classical mechanics is replaced by infinite mass).
According to general relativity, at a singularity space and time cease to exist as we know them, so the usual laws of physics break down near such a singularity. It is therefore not really possible to envision something with infinite density and zero volume.
In fact, we should first reconsider the relativistic form of Newton's second law. For more, see:
https://www.researchgate.net/publication/261355836_Quantum_Gravity_Chromo_Dynamics