There is no reason to believe Strict Relativity is valid over large distances. First, there is no evidence: no experimental measurement of any kind.
Second, this is not a Fundamental Theory. There is no intuitive support for the covariance of intervals. It cannot be derived from anything we know. Needless to say, it is a hypothesis (it cannot be proved).
SR is just a 'theory' onto which one can graft the Lorentz transformation as a metric, and nothing else. Time Dilation and Length Contraction are just the result of the chosen laws of dynamics. That interpretation is not the only one possible. They provide the accumulation point at v = c, that is all.
No physics there.
Cosmology somehow feels at home using it to derive the SR Doppler redshift, and from there it goes into another non-unique solution to Cosmology (GR) and beyond (Dark Energy and Dark Matter).
So my question is WHY? Why is it not troubling that the legs of Cosmology (the explanation of redshift, the estimation of receding velocity) rest on a theory that has no evidence supporting its validity at those distances?
The first possible test over a slightly longer distance happened with the Pioneer spacecraft. Somehow that didn't help: the Pioneer spacecraft presented the Pioneer anomaly.
For some time, I believed that the anomaly could be totally explained by the curvature of the hypersphere. It was too constant: solar wind and vacuum matter density fluctuations (a possible explanation) couldn't be that constant.
Either I made a mistake in the derivation (I derived the thing many times) or the hypersphere's possible contribution to the anomaly was only 50% of the observed one.
A 50% contribution to the deceleration precluded me from saying anything (not that anyone would listen to me..:). Too much uncertainty. The explanation based on anisotropic thermal irradiation (thus anisotropic photon pressure) was as good as anything.
Someone should try to derive the influence of the hypersphere curvature on the deceleration of the Pioneer spacecraft. Check my work. If I was wrong by a factor of 2, that will be my Biggest Blunder...:)
xyzTau
For example, in HU the 3D Universe is not flat. Its curvature is 1/(4D Radius), so just embedding the 4D spacetime in a hyperspherical surface is enough to make SR fail at longer distances.
That is one thing going against the use of Doppler Shift for relating redshift z with velocity.
Let's digress a little on history. Before relativity, the covariance (by another name) of the wave equation was known. That covariance brought about the Lorentz transform to convert fields in reference frame A into fields in reference frame B.
The Lorentz transform had no physical meaning until Einstein attached it to a metric and stated that that metric was to be used to convert dynamical quantities between reference frames. Einstein's contribution was to give a physical meaning to the Lorentz transform.
Since the metric is consistent with the covariance of intervals, the covariance of intervals has to be consistent with the covariance of the wave equation, because the metric is the Lorentz transform.
It is circular.. it makes me dizzy..:)
That said, there is no evidence that the interval will remain covariant over cosmological distances, which is the main point of the question... Which is the same as saying that there is no evidence that the SR Doppler shift can be used over cosmological distances.
I wrote the question to showcase holes in the current Cosmology... one hole at a time.
Are you saying that I can define two local reference frames separated by 10 Gly and that SR will hold water? Where is the evidence?
You observe light, absorption lines... and you conclude, within a model, a velocity (a non-observed dynamical property). Velocity/acceleration is an important ingredient in Cosmology.
The Doppler effect, for instance, is one explanation for the observed redshift. It is not the only one. My theory provides another explanation for redshift, based on the Law of Sines (a well-documented trigonometric law, valid everywhere that is Cartesian).
The Doppler effect states that, given the observed wavelength of an electromagnetic wave or its absence (absorption lines), the source is traveling at some given speed.
That speed doesn't have any other validation. That is what I mean by SR is not validated at Cosmological Distances. Compare that with the Law of Sines.
Am I wrong?
I should expect, I think, that if I turn my head 90 degrees to the left, then galaxies, even if they are 10 billion light-years in front of me, should appear to move with everything else by Pi/2 radians in my view, and thus their coordinates should change from 10 billion light-years in front of me to 10 billion light-years to my right.
The coordinates of events are affected by something called a rotation transformation.
x' = x * cos(theta) - y * sin(theta)
y' = x * sin(theta) + y * cos(theta)
We have no reason to expect, that somehow, objects nearby are subject to transformation by rotation of the observer, but somehow more distant galaxies are immune.
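A quick numeric check of the point above (a minimal plain-Python sketch; the 10-billion-light-year figure is just the illustrative distance from the text):

```python
import math

def rotate(x, y, theta):
    """Rotate the coordinates (x, y) by angle theta (radians) about the origin."""
    xp = x * math.cos(theta) - y * math.sin(theta)
    yp = x * math.sin(theta) + y * math.cos(theta)
    return xp, yp

# A galaxy 10 billion light-years straight ahead (units: light-years).
x, y = 10e9, 0.0

# Observer turns 90 degrees; the galaxy's coordinates swing to the side,
# but its distance r from the observer is untouched.
xp, yp = rotate(x, y, math.pi / 2)
r_before = math.hypot(x, y)
r_after = math.hypot(xp, yp)
```

Nothing in the transformation cares how large x and y are, which is exactly the point: rotation treats nearby and faraway objects identically.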
But for some reason, when it comes to similarly applying the rules of the Lorentz Transformation to faraway events:
c t ' = c t cosh(rapidity) - x sinh(rapidity)
x' = - c t sinh(rapidity) + x cosh(rapidity)
...there are many in the scientific community, I think, who believe that these equations should somehow only apply in regions nearby.
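To make the contrast concrete: the boost written above preserves the interval (ct)^2 - x^2 for any x whatsoever; nothing in the algebra singles out nearby events. A minimal sketch (the numbers are illustrative only):

```python
import math

def boost(ct, x, rapidity):
    """Lorentz boost along x, written with hyperbolic functions of the rapidity."""
    ctp = ct * math.cosh(rapidity) - x * math.sinh(rapidity)
    xp = -ct * math.sinh(rapidity) + x * math.cosh(rapidity)
    return ctp, xp

# An event 10 billion light-years away (ct and x in light-years).
ct, x = 1.0e9, 10e9
rap = math.atanh(0.5)   # boost to a frame moving at v = 0.5c

ctp, xp = boost(ct, x, rap)

# The interval (ct)^2 - x^2 is the same in both frames, no matter how
# large x is -- the transformation has no built-in distance cutoff.
s2 = ct**2 - x**2
s2p = ctp**2 - xp**2
```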
So to answer your basic question: "Why would anyone think that Strict Relativity is valid over Cosmological Distances?"
If by "strict relativity" you mean the application of the Lorentz Transformations, then yes, I do believe that Strict Relativity is valid over cosmological distances. If somehow the Lorentz Transformations affected nearby objects differently than faraway ones, Stellar Aberration would be a different-looking phenomenon.
In answer to another part of your question, though. You say here "Cosmology somehow feels at home using it to derive the SR Doppler effect redshift "
I don't feel that is always the case. Have a look at the wikipedia article on redshift, and you'll find that they have a table there, called "redshift summary". One of these equations
z(v) = sqrt((1 + v/c)/(1 - v/c)) - 1
states that the redshift of a photon is a function of the distant object's velocity at the time the photon was emitted. The other
z(a) = a_now/a_then -1
states that the redshift is a function of the scale of the universe at the time the photon was emitted, as a fraction of its current scale.
To my knowledge, cosmologists have, for the most part, rejected the formula for z(v), and fully embraced the formula for z(a).
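For concreteness, the two table entries can be sketched side by side (plain Python; the beta = 0.6 and a_then = 0.5 values are just illustrative picks that both land on z = 1):

```python
import math

def z_of_v(beta):
    """Relativistic longitudinal Doppler redshift, beta = v/c (recession positive)."""
    return math.sqrt((1 + beta) / (1 - beta)) - 1

def z_of_a(a_then, a_now=1.0):
    """Cosmological redshift from the scale factor at emission vs. today."""
    return a_now / a_then - 1

# For beta = 0.6 the Doppler formula gives exactly z = 1.
z1 = z_of_v(0.6)

# The same z = 1 in the scale-factor picture means the universe was half
# its present size when the photon was emitted.
z2 = z_of_a(0.5)
```

Both formulas agree at low redshift (z ~ v/c ~ fractional expansion), which is why the distinction only bites at cosmological distances.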
And here, where you say "My theory provides another explanation for redshift that is based on the Law of Sines (well documented Trigonometric Law and valid everywhere that is Cartesian)"
When you say this, are you arguing for the formula for z(a) and against z(v), or are you arguing against both formulas?
JD: I should expect, I think, that if I turn my head 90 degrees to the left, then galaxies, even if they are 10 billion light-years in front of me, should appear to move with everything else by Pi/2 radians in my view, and thus their coordinates should change from 10 billion light-years in front of me to 10 billion light-years to my right.
During the 10 years of operation of that nice Robotic Telescope (SDSS), you can consider that faraway stars didn't really move (angularly) at all. So the Declination and Right Ascension can be transformed (to take into account Earth's tilt and motion) into a Cosmic Fixed Reference Frame. So there is no need to consider the Gedanken Experiment you created.
JD: The coordinates of events are affected by something called a rotation transformation.
x' = x * cos(theta) - y * sin(theta)
y' = x * sin(theta) + y * cos(theta)
We have no reason to expect, that somehow, objects nearby are subject to transformation by rotation of the observer, but somehow more distant galaxies are immune.
There is no reason to think that a rotation of your local reference system would have a different effect on what is close and what is far. You are just changing angular coordinates. In your example you used x and y, so things were not so clear. Had you used spherical coordinates, it would be clear that you are just changing the baseline of your angular coordinate system. You are not touching the distance r.
JD: But for some reason, when it comes to similarly applying the rules of the Lorentz Transformation to faraway events:
c t ' = c t cosh(rapidity) - x sinh(rapidity)
x' = - c t sinh(rapidity) + x cosh(rapidity)
...there are many in the scientific community, I think, who believe that these equations should somehow only apply in regions nearby.
This is different, because here you are considering motion and thus a change in r. This is different from your Gedanken Experiment. That said, I am not the one who would defend Lorentz transforms on a physical basis. My theory states that they are not fundamental, that they are only an artifact of using the wrong (not as fundamental) laws of dynamics and a 4D spacetime as opposed to a 5D spacetime.
JD: So to answer your basic question: "Why would anyone think that Strict Relativity is valid over Cosmological Distances?"
If by "strict relativity" you mean the application of the Lorentz Transformations, then yes, I do believe that Strict Relativity is valid over cosmological distances.
I didn't ask for Belief. I asked if there was any evidence of its validity.
To validate Strict Relativity (specifically the Doppler shift, since I am talking about Cosmology), one needs an independent measurement of velocity at a cosmological distance (e.g. a star traveling around a black hole very far away). The difficulty is that even if you know the trajectory (from an angular perspective), to calculate velocity you still need to know the distance. So you cannot have an independent measurement with which to test the Doppler shift's validity!
JD: If somehow the Lorentz Transformations affected nearby objects differently than faraway ones, Stellar Aberration would be a different-looking phenomenon.
Please explain this statement.
JD: In answer to another part of your question, though. You say here "Cosmology somehow feels at home using it to derive the SR Doppler effect redshift "
I don't feel that is always the case. Have a look at the wikipedia article on redshift, and you'll find that they have a table there, called "redshift summary". One of these equations
z(v) = sqrt((1 + v/c)/(1 - v/c)) - 1
states that the redshift of a photon is a function of the distant object's velocity at the time the photon was emitted. The other
z(a) = a_now/a_then -1
states that the redshift is a function of the scale of the universe at the time the photon was emitted, as a fraction of its current scale.
To my knowledge, cosmologists have, for the most part, rejected the formula for z(v), and fully embraced the formula for z(a).
If that is the case, my work is done.
JD: And here, where you say "My theory provides another explanation for redshift that is based on the Law of Sines (well documented Trigonometric Law and valid everywhere that is Cartesian)"
When you say this, are you arguing for the formula for z(a) and against z(v), or are you arguing against both formulas?
Yes. I was going to do it one at a time.
It was surprising that people would come to the rescue.
Now that we have settled the LACK of validity of Strict Relativity, let's move to a theory that predicts the distances properly BUT has no parameters..:) Not a single one.
I asked George to create a perfect prediction of all SN1A distances using just $H_0$ or the 4D Radius of the Universe since in HU, $H_0=\frac{c}{R_0}$
George, faster than a speeding bullet, came up with d(z) = ln(1+z).
The plot below has the original SN1A Survey data, the Astropy L-CDM fitting, the overestimated HU predictions, and George's predictions.
NOTICE THAT HU PREDICTIONS DO NOT USE A SINGLE PARAMETER
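For readers who want to poke at the d(z) = ln(1+z) claim themselves, here is a minimal sketch (my own illustrative code; distances are in units of R0, as implied by H0 = c/R0 above):

```python
import math

def d_HU(z):
    """HU distance prediction in units of the 4D radius R0: d = ln(1 + z)."""
    return math.log(1.0 + z)

# Low-z check: ln(1+z) ~ z, i.e. the ordinary linear Hubble law d ~ cz/H0,
# since H0 = c/R0 turns R0 * z into (c/H0) * z.
low = d_HU(0.01)

# At z = 1 the predicted distance is ln 2 ~ 0.693 R0.
mid = d_HU(1.0)
```

Note there is genuinely nothing to tune here: the only scale in the formula is R0 itself.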
Cosmologists should at least consider:
What are the odds that this fellow came up with a Numerology solution that doesn't invoke Inflation..:)
Convincing Cosmologists is like Herding Cats...:)
I think you just revalidated my point that, given a value for z, there is no way to double-check the velocity of an object far away (say z = 3, or 4, or 10, or 1000).
I was trying to be charitable and say that there might be some star going at a fraction of c around a supermassive black hole in our neighborhood. In that rare but possible event, one could actually test v(z), the SR Doppler shift law. On top of that dynamics, you would also have z(a) and the six parameters of L-CDM. So the situation is untenable... Too many parameters... more than necessary.
As you might not know, the question is about Cosmology (not about local approximations), and that means z varying from 0 to 1000 or infinity, which corresponds to v varying from 0 to c.
This is part of a series of questions that I will use to guide your reasoning and permit you to understand my theory. It will also expose any skeletons I might have...:) and give you the option to bet on the theory or try to bury it..:)
Spoiler alert: the theory gets rid of two forces - the Strong and the Electroweak - but brings about a new force, the de Broglie Force (in honor of de Broglie). The de Broglie force comes about when you stop considering the wavefunction as something that just happens, without any reason, as a solution to Schrodinger's equation, and start considering it as something that happens for a reason. HU states that the WHY is a force that becomes invisible when you adopt the Copenhagen Interpretation.
Not unlike the radial waves not predictable by L-CDM but clearly predictable by HU, the change in Wavefunction interpretation allows for us to look into something that has been overlooked.
Just in case you don't know. I used my theory to create a map of the Universe directly from the SDSS datasets.
https://www.youtube.com/watch?v=ytuEctnD334&t=44s
Then I looked into the cross-section along the Declination direction (already aggregated by Right Ascension) and this is what I got:
https://www.youtube.com/watch?v=YfxqMsnAinE
A little averaging and I was able to see up to 36 acoustic waves (the Big Pop and the many-Bang Universe Theory)...:)
Needless to say, those waves were always there. Nobody looked for them because they had the wrong theory, L-CDM, telling them that the Universe is a 4D spacetime instead of a lightspeed-expanding hypersphere in a 5D spacetime.
Changing the Copenhagen Interpretation to the HU Interpretation allows for the existence of a new (useful) force, one that you can manipulate to create nuclear fusion, antimatter, or intergalactic travel.
SR only applies in flat space, which seldom occurs, and never when gravity must be accounted for. Moving on to GR helps, but doesn't answer every question. For me, the thermal radiation from the parabolic antennae should be sufficient to account for the difference observed. But I lost that argument on Wikipedia a long time ago, without sufficient references to satisfy the WP rules.
Calculate the deceleration that takes place if you mistakenly consider the universe flat instead of a lightspeed-expanding hypersphere. I did it a few times... never liked the result. I always suspected I made a mistake.
Take a shot at it and see what kind of curvature you would need to fully explain the Pioneer anomaly. My recollection is that I needed twice as much curvature as I have in the universe. Since I couldn't explain it fully, I dropped the subject. A clean answer is nice. A half-assed solution is not as nice.
I might have to take look at it again sometime, because I am not happy with my calculation. I think I missed something.
This problem is not a simple radar problem. There are details of the detection protocol that might double the contribution. That would mean the deceleration would be the centripetal acceleration for a signal traveling at the speed of light:
c^2/R0 = c^2 / (13.58E9 light-years) = 6.9954711E-10 m/s^2
The observed is 8.74E-10 m/s^2
So my theory explains 80% of the Pioneer anomaly... some 20% is left for the photon-pressure solution.
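The arithmetic behind those figures can be verified in a few lines (my own sketch; the light-year conversion is the standard IAU value, and the observed anomaly is the value quoted above):

```python
# Back-of-the-envelope check of the c^2/R0 figure quoted above.
c = 299_792_458.0                 # speed of light, m/s
ly = 9.4607304725808e15           # metres per light-year (IAU)
R0 = 13.58e9 * ly                 # 4D radius of 13.58 Gly, in metres

a_hu = c**2 / R0                  # hypersphere "centripetal" deceleration, m/s^2
a_obs = 8.74e-10                  # observed Pioneer anomaly, m/s^2

fraction = a_hu / a_obs           # fraction of the anomaly accounted for
```

Running this gives a_hu close to the 6.9954711E-10 m/s^2 quoted, and a fraction of roughly 0.80.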
I think that when I calculated the contribution, instead of c^2/R_0 I got c^2/(2 R_0),
so it would explain even less. I was never happy, because the heterodyning solution was never clear to me. If electromagnetic waves are being created at two different points of the hypersphere, that will change the result. It will be different than if it were just a bouncing back of a single EM source.
I never had time to get down to the details...:)
In addition, you people are already in a celebratory mood..:) Celebrating your great discovery...:) Who am I to pop that balloon..:)
If the wave is being reamplified and sent back, then the distance d and velocity v will play a role. The distance/velocity, together with the curvature, defines the orientation of the proper time.
The problem is not complicated but one has to delve into the details of the detection system. One has to understand exactly how the measurement is done.
Ground control has an oscillator... it sends a signal. By the time it arrives at the spacecraft, its perceived frequency will have been modified by the orientation of the proper time where the spacecraft is. The spacecraft amplifies it and sends it back. By the time it arrives back, its perceived frequency has changed again. So the heterodyning doesn't count just the increase in distance between radar pulses (as in a radar-gun experiment). It is also influenced by the orientation of proper time.
So I am not happy with my first... and second... and third... etc. attempts at solving this problem. It just happens that there might be contributions beyond curvature, and I didn't have time, nor anyone to ask, about their detection system.
Marco.
You said "During the 10 years that nice Robotic Telescope (SDSS), you can consider that far away stars didn't really move (angularly) at all."
This is incorrect.
https://en.wikipedia.org/wiki/Stellar_aberration_(derivation_from_Lorentz_transformation)
They have listed there, the stellar aberration due to the orbit of earth around the sun yearly is 20.5 arcseconds.
The aberration due to earth's rotation is 0.32 arcseconds.
This effect was first noticed by James Bradley, and if it has gone away, I haven't heard about it.
When a nearby star moves 20.5 arcseconds due to stellar aberration over the course of the year, because of Earth's motion around the Sun, galaxies billions of light-years behind it will also move 20.5 arcseconds.
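The quoted aberration angles follow from the first-order v/c approximation; here is a small sketch using standard mean values for Earth's orbital and equatorial rotation speeds:

```python
import math

RAD_TO_ARCSEC = 180 / math.pi * 3600   # radians to arcseconds
c = 299_792_458.0                      # speed of light, m/s

# Earth's mean orbital speed gives the classic annual aberration angle ~ v/c.
v_orbit = 29_780.0                     # m/s
annual = v_orbit / c * RAD_TO_ARCSEC   # ~20.5 arcseconds

# Earth's equatorial rotation speed gives the much smaller diurnal term.
v_spin = 465.0                         # m/s at the equator
diurnal = v_spin / c * RAD_TO_ARCSEC   # ~0.32 arcseconds
```

The key point for the argument: the angle depends only on the observer's velocity, not on the distance to the star, so near and far objects shift together.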
What I'm saying is that if the Lorentz Transformations (The Special Theory of Relativity) somehow FAILED at cosmological distances, then one of the things we could expect to see as a result of this failure would be that nearby stars would be affected by stellar aberration, but faraway stars wouldn't be.
-----
You said "I didn't ask for belief"
This seems rather strange, because, in another place, you said to me you have to trust the redshifts of the SDSS because you are not an astronomer. You seem willing to accept certain things on trust, but not willing to accept statements of trust from other people.
I think I know where this comes from... Somewhere along the line, you've come under the impression that "having a belief" is unscientific. That's not the case. Having an unjustifiable belief, and clinging to it regardless of evidence to the contrary is unscientific. Having a belief and refusing to acknowledge other hypotheses is unscientific.
I'll just say, I have seen no evidence that has convinced me the Lorentz Transformations fail over large distances. I am aware that other people have hypothesized that the Lorentz Transformations are only valid locally... but that seems to contradict available evidence, to me. In particular, Stellar Aberration.
-----
I mentioned that cosmologists have, for the most part, rejected the formula for z(v) and fully embraced the formula for z(a).
To which you responded, "if that is the case, my work is done"
But then you go on to say "My theory provides *another* explanation for redshift."
So I don't think your work is done. If you are providing another explanation for redshift, you are adding an additional hypothesis. If this hypothesis is consistent with the current lambda-CDM model, then your work would be done, because that's the prevalent description.
My impression, though, is that you're introducing a different hypothesis altogether, and therefore, it ought to be held up against other ideas, and generally, the people best prepared to defend ideas are those who actually believe them, but open-minded enough to acknowledge the existence of other ideas.
JD: You said "During the 10 years that nice Robotic Telescope (SDSS), you can consider that far away stars didn't really move (angularly) at all."
You're wrong.
https://en.wikipedia.org/wiki/Stellar_aberration_(derivation_from_Lorentz_transformation)
MP: You are right. There is aberration, and it is sensitive to the motion of Earth. Due to ignorance, I assumed that the SDSS data would come aberration-free and referred to a given equinox (J2000.0). That doesn't change my observations in the sense that, even if aberration were still there, my results come from aggregation along the RA or DEC coordinates, so the Banging is in the aggregated values, and small errors in angle don't matter.
#####################################################
JD: What I'm saying is that if the Lorentz Transformations (The Special Theory of Relativity) somehow FAILED at cosmological distances, then one of the things we could expect to see as a result of this failure would be that nearby stars would be affected by stellar aberration, but faraway stars wouldn't be.
I was told that Special Relativity fails over cosmological distances because acceleration shows up in measurements at cosmological distances. So, I consider that battle won.
Now, I mentioned that the topology of my Universe is a lightspeed-expanding hypersphere. The local frame is called xyzTau... Different objects at cosmological distances will each have a local frame, and the conversion from one frame to another has to be done using hyperbolic projections, or SR, or Lorentzian transforms.
That said, you can see that as we move further away, the curvature of the hypersphere matters, and it adds to any torsion already present due to tangential velocity.
So you are right that aberration will always be there, and that is not where SR (or an HU SR) fails.
My beef is just with Doppler shifts explained by SR. I was told that that is no longer the case, so I rested my case.
###############################################
JD:You said "I didn't ask for belief"
This seems rather strange, because, in another place, you said to me you have to trust the redshifts of the SDSS because you are not an astronomer. You seem willing to accept certain things on trust, but not willing to accept statements of trust from other people.
This is not a cop-out. It is just a confession of ignorance of all the details in all astronomical measurements. Every so often someone corrects me on those details, as you did about Aberration and I am extremely happy and grateful.
###############################################
JD:I think I know where this comes from... Somewhere along the line, you've come under the impression that "having a belief" is unscientific. That's not the case. Having an unjustifiable belief, and clinging to it regardless of evidence to the contrary is unscientific. Having a belief and refusing to acknowledge other hypotheses is unscientific.
I raised the issue of belief because I wanted to hear the argument. Anyone who says "I believe", I will most likely ask for a reason. That said, I will repeat that in doing my calculations I had to interact with SDSS data that has a tremendous amount of inner life (from aberration - is the aberration subtracted from the measurements? - to whether these angles depend upon the date they were taken, or whether SDSS referred the angles to a given day and hour, the J2000.0 equinox). I can only guess that they did what they had to do to make the data easy to use. I have to believe (in first approximation) that they did their homework and provide useful data. If someone (JD) tells me that SDSS data still requires aberration correction, I would be surprised, because I didn't see a timestamp among the columns (I also didn't look for one). So, my work is done under horrible conditions and contains no promise of being right. The only thing I stand by is that I did some due diligence before opening my big mouth.
###############################################
JD: I'll just say, I have seen no evidence that has convinced me the Lorentz Transformations fail over large distances. I am aware that other people have hypothesized that the Lorentz Transformations are only valid locally... but that seems to contradict available evidence, to me. In particular, Stellar Aberration.
Stellar aberration has to do with coordinate projection. That is not the case for the Doppler shift. That said, SR should be a good first approximation for calculating aberration. HU might (should, if the theory is correct) provide an extra adjustment for the curvature.
###############################################
JD:I mentioned that cosmologists have, for the most part, rejected the formula for z(v) and fully embraced the formula for z(a).
To which you responded, "if that is the case, my work is done"
But then you go on to say "My theory provides *another* explanation for redshift." So I don't think your work is done.
My work is done against SR.
My work against L-CDM requires me to challenge SN1A Survey distances (thus disinflating the Universe). That is to be done along the sequence of questions.
###############################################
JD:If you are providing another explanation for redshift, you are adding an additional hypothesis. If this hypothesis is consistent with the current lambda-CDM model, then your work would be done, because that's the prevalent description.
My impression, though, is that you're introducing a different hypothesis altogether, and therefore, it ought to be held up against other ideas, and generally, the people best prepared to defend ideas are those who actually believe them, but open-minded enough to acknowledge the existence of other ideas.
I don't believe in anything. And that includes my theory. I see evidence from observations, and I see too many variables, too much unproven physics, in L-CDM. I don't like L-CDM... I consider it ugly for introducing physics without proof (Dark Energy).
Wouldn't it look stupid if the observed Inflation were just a mirage and Gravitation were actually epoch-dependent? Since we (you) didn't derive Gravitation from a more basic theory, you have no idea what controls it, or how it depends upon time. So an epoch-dependent G is quite a tame hypothesis compared to having the CMB coming from 35 trillion light-years away (z=1080), the Universe first bursting onto the scene while expanding at 10^7 times the speed of light (or infinitely fast, if you want to consider infinite z), and adding Dark Matter and Dark Energy as major components of the Universe... These are really very BOLD hypotheses. I would say unnecessarily BOLD.
I have no stake here. If proven wrong here, I will be the happiest man alive, because I won't have to argue anymore..:)
JD,
George Dishman just proved me wrong on my assertion that Ancient Photons slow down as they approach us. I was wrong and I will not repeat that again...:)
MP: " I considered that SDSS data would come aberration free and referred to a given equinox J2000.0 equinox.... that means the small errors on angle don’t matter."
It sounds to me like they have done the right thing here... but not necessarily for the right reasons. Yes, adjusting to J2000.0 equinox, or J2001.0, or J2002.0 equinox is good, to get the parallax right. And by going back to the same month, it makes the aberration all-but-disappear. But to call the "aberration" an "error" is incorrect. Actually, come to think of it, naming it "aberration" in the first place is a bit Orwellian.
It is neither "aberration" nor "error": the coordinates really change over the course of the year. When you go out and see where the stars are in the night sky, the light is really coming from the direction you see it. They're not correcting an "error" when they adjust the coordinates to the J2000.0 position.... They're correcting for "blur".
Calling it "error" implies that the J2000.0 position is correct, and all the other positions are in error.
MP: "My work is done against SR."
As far as the supporters of Lambda-CDM are concerned, you may well be correct. I have tried to have discussions supporting an SR-consistent explanation of inflation on "PhysicsForums", and they generally delete my posts because they consider it "Original Research". You don't have to convince them that SR is wrong, because they have already adopted that view.
MP: "I see evidence from observations and I see too many variables, too much unproven physics in L-CDM. I don’t like L-CDM… I consider it ugly by introducing physics without proof (Dark Energy)."
On this, we're agreed.
MP: "I don’t believe in anything. And that includes my theory. "
I suppose that's honest, in some sense. What I find absurd is that people will say you are being scientific because you are arguing for a theory you do not believe in, while when I say I actually do believe that Lorentz Equations work at large distances, and change the coordinates of those events, they say I am being unscientific because I believe what I am saying is correct...
It's a very troubling pattern, when people who believe what they are saying is correct are considered unscientific. While people who do not believe in anything, including their own theories, are considered scientific.
So far as I know, the people who got the Nobel Prize in Physics for showing that the universe's expansion was accelerating didn't actually "believe" the theory under which they were modeling... They just said something like, "Well, let's plug in the data and see what happens." And when they plugged in the data, they found a lot of results that they found quite troubling. If I recall correctly, there's a point in this video https://www.youtube.com/watch?v=50fHoJD2YNQ&t=1444s where Nobel Prize winner Adam Riess confesses that he is just plugging the data into the model, and that instead of confirming the expectations of the model, it created new predictions: massive amounts of dark energy and dark matter.
I would give credit to Adam Riess and his team, though, for being scientific. They plugged their numbers into a model and figured out the results. They said the theory predicts massive amounts of dark energy and dark matter that we do not see.
What's NOT scientific, though, is how the press has run with this. They think that the High z Supernova team proved the existence of Dark Matter and Dark Energy. That's not the case at all. What they've done is proven that IF the Lambda-CDM model is correct, then there MUST BE undiscovered Dark Matter and Dark Energy out there. The press seems utterly unable to cope with multiple possibilities... that either there is dark matter and dark energy OR the lambda-CDM model is incorrect.
MP: "Wouldn't it look stupid if the observed Inflation were just a mirage and Gravitation were actually epoch-dependent? Since we (you) didn't derive Gravitation from a more basic theory, you have no idea what controls it, or how it depends upon time. So an epoch-dependent G is quite a tame hypothesis compared to having the CMB coming from 35 trillion light-years away (z=1080), the Universe first bursting onto the scene while expanding at 10^7 times the speed of light (or infinitely fast, if you want to consider infinite z), and adding Dark Matter and Dark Energy as major components of the Universe... These are really very BOLD hypotheses. I would say unnecessarily BOLD."
"Wouldn't it look stupid?" you ask. I think we should try to avoid that sort of terminology. (though I've been guilty of it myself.) Many of Aristotle's ideas would look stupid to you or me, because we now know better, but Aristotle was clearly a brilliant man. My motto has become "Acknowledge the Hypothesis." regardless of how absurd it sounds.
If the hypothesis is internally consistent, and consistent with available data, then it's not really stupid. Aristotle thought the sun orbited the earth and the stars were a big dome overhead because he did not see any parallax in the stars. He had no evidence to the contrary, and though he was incorrect, he wasn't stupid.
I don' t know where you're getting the 35 trillion light-year figure. I had heard it was up to 40 billion light-years across... Though I thought that distance estimate was based on magnitude measurements of the most distant galaxies and supernovae, rather than redshift..
The z=1080 figure comes from assuming that it is a 3000 Kelvin blackbody redshifted to 2.7 Kelvin. 3000 Kelvin is the temperature where hydrogen turns from plasma to a gas. The distance one would estimate to that surface would vary depending on what model you were using. The simplest model would put it at just under the speed of light times the age of the universe. But if the age of the universe is 13.7 billion years, and the most distant galaxies are 40 billion light-years away, then obviously either the simplest model is incorrect, or the data is incorrect.
People who have adopted the lambda CDM model throw out the simple model in favor of the data, and then have to introduce Dark Matter and Dark Energy to work out the details.
As for me, my "personal theory" is that the problem is with the 13.7 billion year figure. While the LOCAL universe may be 13.7 billion years old, the universe as a whole is closer to 40 billion years old. I would explain the discrepancy by using the Twin Paradox of Special Relativity. My argument would be that in the early universe, particles of the local universe accelerated, via thermal collisions, a great deal, so they aged less than the universe as a whole. By the time they were able, on average, to travel in a straight line (which they've been doing for the last 13.7 billion years) the universe as a whole had already aged about 25 billion years.
So I've had the impression there is effectively a double-hubble law... with a hubble's constant of 1/13.7 billion years out to a redshift of around 0.7, and then a smaller hubble constant, of around 1/35 or 1/40 billion years, for redshifts beyond that.
The attached video goes into more detail.
https://www.youtube.com/watch?v=UuGX2_aReew&index=13&list=PLC-qVSnsyc7_24tKNLFKrSTB5ZkwjkFuH
MP: " I considered that SDSS data would come aberration free and referred to a given equinox J2000.0 equinox.... that means the small errors on angle don’t matter."
It sounds to me like they have done the right thing here... but not necessarily for the right reasons. Yes, adjusting to J2000.0 equinox, or J2001.0, or J2002.0 equinox is good, to get the parallax right. And by going back to the same month, it makes the aberration all-but-disappear. But to call the "aberration" an "error" is incorrect. Actually, come to think of it, naming it "aberration" in the first place is a bit Orwellian.
I didn't do that. SDSS does that for me. They made the mistake of referring to J2000.0 equinox. I would never do that. I would certainly use J2002.0 as you advised.
Which is what I expected from them. Had they not eliminated aberration from their data, it would have produced an 'error' in my mapping (it would place the actual galaxies in positions slightly different from where I consider them to be). So aberration is not an error... I totally agree with you. By not taking it into consideration, I would have introduced error into my mapping. The error would be on me, never on aberration per se.
That misinterpretation (because of my ignorance of the possibility that aberration was in my data) would cause the map to be distorted (albeit slightly).
That kind of 'error' or 'misinterpretation' due to ignorance of some unknown variable is exactly what I am correcting when I point out that ignoring HU topology and epoch-dependent G introduces an overestimation of distances (one that is not small and goes to infinity at the very end)...:)
So my correction is not unlike you bringing up my potential overlooking of aberration... the only difference is that I am correcting an infinite stretch introduced by Inflation.
It is neither "aberration" nor "error": the coordinates really change over the course of the year. When you go out and see where the stars are in the night sky, the light is really coming from the direction you see it. They're not correcting an "error" when they adjust the coordinates to the J2000.0 position.... They're correcting for "blur".
Calling it "error" implies that the J2000.0 position is correct, and all the other positions are in error.
This is nitpicking on my poor Astronomese...:) It is my nth language..:)
By the way, I will reemphasize that SDSS provides the data already referred to J2000.0... You, a native speaker of Astronomese, should know that and not badger me for no reason..:)
###################################################################################################################################################################################################################################################################################
MP: "My work is done against SR."
As far as the supporters of Lambda-CDM are concerned, you may well be correct. I have tried to have discussions supporting an SR-consistent explanation of inflation on PhysicsForums, and they generally delete my posts because they consider it "Original Research." You don't have to convince them that SR is wrong, because they have already adopted that view.
Great...:) We all agree there..:)
###################################################################################################################################################################################################################################################################################
MP: "I see evidence from observations and I see too many variables, too much unproven physics in L-CDM. I don’t like L-CDM… I consider it ugly by introducing physics without proof (Dark Energy)."
On this, we're agreed.
Yeaahhh
###################################################################################################################################################################################################################################################################################
MP: "I don’t believe in anything. And that includes my theory. "
I am going to read your full response... but first I will make sure to reemphasize that my not accepting someone saying "I believe" or "I don't believe that" is just a rhetorical tool to demand a reason. The reviewer of my article told me that he didn't believe that the chain reaction could be approximated by a first-step-limited reaction.
If I could, I would beat him with Descartes' Discourse on the Method. I can only rebut arguments; I cannot rebut beliefs. See his review question and my answer.
I don't need the approximation to work for the whole time profile of 56Ni production and decay. I only need it to be a good approximation up to the peak luminosity, because that is what is measured when astronomers measure distances. The rest is irrelevant and will be taken care of by the WLR. In addition, the detonation of [C] releases a tremendous amount of energy (heats things up) and shifts equilibria to the right. In addition, carbon and oxygen are the main components of White Dwarfs. I would like to add the composition of the remnants (24Mg, 40Ca), but somehow I cannot find a relevant paper to cite. In addition, some of those isotopes might be produced after the peak luminosity.
Physics tells me that first-step-limited carbon detonation is a good approximation within the caveats I mentioned. Until someone tells me otherwise (with some argument), I guess I will have to continue searching for solid evidence or a convincing simulation.
######################################################################################################################################################################################################################################
I suppose that's honest, in some sense. What I find absurd is that people will say you are being scientific because you are arguing for a theory you do not believe in, while when I say I actually do believe that the Lorentz equations work at large distances and change the coordinates of those events, they say I am being unscientific because I believe what I am saying is correct...
Let's put things properly: if I challenge the Lorentz equations, it is because I know the answer. I know that the topology and model already allowed me to derive the laws of Gravitation and Electromagnetism from first principles, that the model allows me to see waves in the galaxy density, and that the model allowed me to fit SN1a distances... So I am being coy and reasonable in not putting that up front, and just pointing out that SR wouldn't work if this or that were true.
So my Belief and your Belief are not the same. The difference is that you didn't read my work and I didn't preface this question with it. I am trying to lead the reader step by step.
It's a very troubling pattern when people who believe what they are saying is correct are considered unscientific, while people who do not believe in anything, including their own theories, are considered scientific.
I have to disagree with you there. My belief is just the strength of the evidence behind what I believe. Nothing more. You should never believe that Inflation happened... no matter how many smiling scientists tell you so, BECAUSE it breaks all previous rules, doesn't provide a mechanism, and clearly could be just a mistake due to incorrect distances being measured. The distances are the first, second, last, and first-again item to be analyzed. It should never have been accepted that there is an SN1a 36 Gly away in a Universe that is 13.58 billion years old. Everyone should say NOOOOOOO... it can't be true..:)
Eventually someone would say: well, Gravitation is inversely proportional to distance...:), SNe chain reactions are basically first-step limited, and that would solve the problem.
It is infinitely easier to accept that than to accept infinite expansion velocity with no rule.
I believe this is suspended disbelief for the sake of jumping into the fray and getting some theoretical papers out. Nobody should be that gullible.
So far as I know, the people that got the Nobel Prize for Physics for showing that the universe was accelerating actually didn't "believe" the theory under which they were modeling... They just said something like, "Well, let's plug in the data and see what happens." And when they plugged in the data, they found a lot of results that they found quite troubling.
That is plausible deniability... They are about to toss the community inside a Black Hole and want to create an exculpatory framework....:)
If I recall correctly, there's a point in this video https://www.youtube.com/watch?v=50fHoJD2YNQ&t=1444s where Nobel Prize winner Adam Riess confesses that he is just plugging the data into the model--and instead of confirming the expectations of the model, it created new predictions--massive amounts of dark energy and dark matter.
I suspect we both agree that this is not right.
I would give credit to Adam Riess and his team, though, for being scientific. They plugged their numbers into a model, and figured out the results. They said the theory predicts massive amounts of dark energy and dark matter that we do not see.
What's NOT scientific, though, is how the press has run with this.
The press is not scientific. Something that infuriates me is the educators who come smiling onto PBS shows telling me about Inflation..>:)
They think that the High z Supernova team proved the existence of Dark Matter and Dark Energy. That's not the case at all. What they've done is proven that IF the Lambda-CDM model is correct, then there MUST BE undiscovered Dark Matter and Dark Energy out there. The press seems utterly unable to cope with multiple possibilities... that either there is dark matter and dark energy OR the lambda-CDM model is incorrect.
We totally agree here. I also believe that Cosmology and Particle Physics are trapped because of a rotten core. That is what I am trying to correct. Because of the rotten core, they have no alternative other than funding projects to find ghosts... in the hope they will find something... although very likely not the ghosts.
###################################################################################################################################################################################################################################################################################
MP: "Wouldn’t it look stupid if the observed Inflation was just a mirage and Gravitation is actually epoch-dependent. Since we (you) didn’t derive Gravitation from a more basic theory, you have no idea what controls it. How does it depend upon time… So epoch-dependent G is quite a tame hypothesis in comparison to having the CMB coming from 35 trillion light years away (z=1080) and the Universe first bursting into scene while the universe expanded at 10^7 times the speed of light (or infinite if you want to consider infinite z), adding Dark Matter and Dark Energy as major components of the Universe... These are really very BOLD hypotheses. I would say unnecessarily BOLD"
"Wouldn't it look stupid?" you ask.
I used it as a rhetorical tool... not as a personal offense, especially since you seem not to 'believe' in L-CDM. On the other hand, I would use it to describe myself had I made the same mistake. I made a stupid mistake (Ancient Photons slow down as they approach us - I projected wavelength but I forgot to project period - so my speed of light changed). I made the point to emphasize my own mistake. I believe there is nothing wrong with being wrong... Just correct yourself and move on.
I think we should try to avoid that sort of terminology. (though I've been guilty of it myself.) Many of Aristotle's ideas would look stupid to you or me, because we now know better, but Aristotle was clearly a brilliant man. My motto has become "Acknowledge the Hypothesis." regardless of how absurd it sounds. I agree!!!
Stupidity is not related to intelligence in my book. It is related to the choices one makes in selecting hypotheses. If you block the path and jump into creating Inflation before other options are properly evaluated, then you are making a choice of hypothesis for your theory or for your Cosmology.
For the lack of a better word, I would call that stupid. Given the wrong hypotheses, an intelligent person will arrive faster at the wrong answer.
If the hypothesis is internally consistent, and consistent with available data, then it's not really stupid. Aristotle thought the sun orbited the earth and the stars were a big dome overhead because he did not see any parallax in the stars. He had no evidence to the contrary, and though he was incorrect, he wasn't stupid.
Aristotle waited as long as was necessary at that time to reach his conclusions. That is not the case any more. We have a gazillion scientists churning ideas and data. One can wait ten or twenty years before concluding something and adding Dark Matter and Dark Energy to the tune of 95% of the Universe... the other way is to stop censoring me (and others) and new ideas.
I don't know where you're getting the 35 trillion light-year figure. I had heard it was up to 40 billion light-years across... though I thought that distance estimate was based on magnitude measurements of the most distant galaxies and supernovae, rather than redshift...
I predicted the positions of SN1a with the epoch-dependent correction. I can easily reverse the correction and get the 'observed' distance from z. Attached are the plot of my theory fed backwards to produce the 'observations' and the equation used. Plug in z = 1080 and you will get 35 trillion light-years.
The z=1080 figure comes from assuming that it is a 3000 Kelvin blackbody redshifted to 2.7 Kelvin. 3000 Kelvin is the temperature where hydrogen turns from plasma to a gas. The distance one would estimate to that surface would vary depending on what model you were using. The simplest model would put it at just under the speed of light times the age of the universe. But if the age of the universe is 13.7 billion years, and the most distant galaxies are 40 billion light-years away, then obviously either the simplest model is incorrect, or the data is incorrect.
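As a quick check of that figure (a sketch using only the round temperatures quoted above; the published value depends on the exact recombination temperature assumed):

```python
# A blackbody spectrum redshifted by a factor (1 + z) looks like a cooler
# blackbody with T_obs = T_emit / (1 + z), so 1 + z = T_emit / T_obs.
T_emit = 3000.0  # K, roughly where hydrogen goes from plasma to gas
T_obs = 2.725    # K, the measured CMB temperature today
z = T_emit / T_obs - 1
print(f"z = {z:.0f}")
```

With the round numbers above this gives z on the order of 1100; the commonly quoted z of roughly 1080-1090 comes from a slightly lower recombination temperature.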
People who have adopted the lambda CDM model throw out the simple model in favor of the data, and then have to introduce Dark Matter and Dark Energy to work out the details.
As for me, my "personal theory" is that the problem is with the 13.7 billion year figure. While the LOCAL universe may be 13.7 billion years old, the universe as a whole is closer to 40 billion years old. I would explain the discrepancy by using the Twin Paradox of Special Relativity. My argument would be that in the early universe, particles of the local universe accelerated, via thermal collisions, a great deal, so they aged less than the universe as a whole. By the time they were able, on average, to travel in a straight line (which they've been doing for the last 13.7 billion years) the universe as a whole had already aged about 25 billion years.
I believe you are conflagrating proper time (time in the local reference frame) with Cosmological Time (time used to time the expansion).
So I've had the impression there is effectively a double-hubble law... with a hubble's constant of 1/13.7 billion years out to a redshift of around 0.7, and then a smaller hubble constant, of around 1/35 or 1/40 billion years, for redshifts beyond that.
See my single d(z) equation. It works all the time and for all z.
The attached video goes into more detail: https://www.youtube.com/watch?v=UuGX2_aReew&index=13&list=PLC-qVSnsyc7_24tKNLFKrSTB5ZkwjkFuH
###################################################################################################################################################################################################################################################################################
https://hypergeometricaluniverse.quora.com/Second-Peer-Review-2
http://www.spoonfedrelativity.com/pages/Milne-Explosion.php
MP: Ancient Photons slow down as they approach us - I projected wavelength but I forgot to project period - so my speed of light changed
I am not sure to what this refers. But, I do know that if you say that the cosmological scale factor a(t) indicates "stretching space" on moderated forums, they will delete your post and threaten to ban your account instead of discussing the issue.
I know about censorship, and I agree with you and join you against it.
############################################################################################################################################################################################################
MP: I believe you are conflagrating proper time (time in the local reference frame) with Cosmological Time (time used to time the expansion).
Conflagrating, or conflating?
Thanks. Conflating...:) although they are very belligerent too..:)
############################################################################################################################################################################################################
My impression is that what A.E. Milne originally meant by cosmological time was a hyperbolic arc
(c tau)^2 = (ct)^2 - x^2
That is not what I mean. Cosmological time is the time that times the life of the Universe. Nobody says that the Universe is 13.58 billion years old in this reference frame but 12 billion years in that other reference frame.
The same goes for the spatial reference frame. If you point to three of the farthest supernovae, you can build a spatial reference frame for the Universe.
So schizophrenic GR scientists will say that GR and SR do not allow for the existence of an absolute time and an absolute spatial reference frame, and then turn around and give an absolute age for the Universe.
In my theory, things are simpler. See the double cross-sections of the 4D lightspeed expanding hypersphere: the right panel has xyzR and the left has xyzPhi.
Phi is the Cosmological Time. Its hyperbolic projection (hyperbolic sines and cosines are used) onto the local time tau defines how local time passes.
Notice that all reference frames have a torsional angle associated with their Fabric of Space (FS). That torsional angle speaks of their absolute velocity. We cannot see it; we can only detect the relative tilt of FS when considering two local reference frames.
The torsional angle reaches 45 degrees when the reference frame reaches the speed of light. For the sake of simplicity, HU allows the local metric associated with those reference frames to be Minkowskian. That said, that is just math and doesn't have anything to do with Physics.
You and everyone else will think that when one reaches the speed of light, local time just stands still... As you can see from the left panel, in HU nothing special happens. Our hypersphere will continue moving outwards at the speed of light.
So what happens to time?
HU says - Nothing.
The reason is that HU uses different dynamics. There is just one rule of interaction: the Quantum Lagrangian Principle (QLP) - dilators will dilate in phase with the surrounding dilaton field. Another phrasing of this is: dilaton energy is quantized and cannot change through interaction, so the dilator has to position itself where it neither does nor receives any work.
In HU, matter is made of polymers of a coherence between stationary states of deformation of the local metric. This coherence is to be understood as a shapeshifting deformation that also spins within the 4D spatial manifold as it moves radially with the rest of the Universe at the speed of light.
Electron, proton, positron, and antiproton correspond to the four phases. Depending upon which one is in phase with the current Universe, one of these four natures will be exposed. You can see the difference between electron and positron: both correspond to a small footprint (small inertial mass). The electron is considered to be a small stretch, while the positron is considered to be a small compression. The proton is considered to be a big compression, and the antiproton a big stretch.
Interaction by QLP depends upon the footprint of the coherence. So interaction only happens at phase 0, Pi, 2Pi... hence the Universe is Stroboscopic (interaction only happens at discrete cosmological times).
So, QLP tells you where dilators will be (and thus their acceleration). If you realize that the FD 4D mass is the same (or approximately the same) as that of a hydrogen atom, you can start deriving natural laws. That is what HU does. That is how I derived G to be inversely proportional to the 4D radius of the Universe.
Now, let's go back to time flow:
Interaction happens through QLP and the dilaton field. The dilaton field travels at 45 degrees at sqrt(2) c speed. That angle is defined by the relationship between hypersphere speed and space waves speed. 45 degrees is what one would expect for an equipartition of energy when the Universe was placed in motion (tangential velocity is equal to radial velocity). This is also consistent with the observed retarded potentials.
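The 45-degree / sqrt(2)c relationship stated above is just vector addition of equal radial and tangential components; a minimal sketch of that arithmetic (an illustration of the text's claim, not a derivation):

```python
import math

# Equipartition as described in the text: the tangential speed of the
# dilaton field equals the radial (hypersphere expansion) speed, both c.
v_radial = 1.0      # radial speed, in units of c
v_tangential = 1.0  # tangential component, equal by equipartition
speed = math.hypot(v_radial, v_tangential)                    # sqrt(2)
angle_deg = math.degrees(math.atan2(v_tangential, v_radial))  # 45 degrees
print(f"dilaton speed = {speed:.3f} c at {angle_deg:.0f} degrees")
```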
Interaction only affects tangential motion (another observation, since no force makes us leave the hypersphere).
HU uses that to calculate the displacement x. This is a non-relativistic displacement; the relativistic part has to do with the FS tilt. This was purposely derived for a relaxed FS, that is, one whose normal points along the radial direction.
Derivation of the same x with an arbitrary orientation of FS would provide the relativistic force.
There is a graphical way of seeing the effect of FS torsion. Let's consider a sequence of de Broglie steps while two bodies interact. First, let's see the Silver Surfer representing matter. If there is no force (and if it has been moving for a long time), its FS will be (approximately) relaxed. Under those circumstances, it will move radially.
If interaction pushes it to the left, FS will be twisted to the left (like a surfboard on a hyperspherical shockwave).
Now, let's consider a sequence of increasing tilts. The accelerating sequence showcases a dilaton field increasingly twisting FS to the left. Due to the velocity relationship between the Universe and the dilaton field, when the tilt reaches 45 degrees, displacement by interaction becomes infinitesimal.
So, by choosing the QLP, the Fundamental Dilator (FD), and the equivalence of FS torsion with acceleration, HU moves time dilation into frozen dynamics. The effects of SR's hyperbolic metric are replaced by the use of a different law for dynamics.
Traveling at high speed affects dynamics only along the direction of the relative torsion. That might explain the Bullet galaxies seen in the earliest epochs: their shape is consistent with anisotropic dynamics.
Particles at relativistic speeds, on the other hand, will not rotate around the radial direction. A neutron will rotate 180 degrees at each de Broglie step. The lifetime of a particle is affected by the torsion of its FS; FS is where nuclear energy is stored.
############################################################################################################################################################################################################
where tau is the proper time of objects moving in the Hubble flow, and t is the coordinate time for vertical world-lines..
So, if you have a look at the attached animation, you can see an animation of a cross-section of the Milne model universe, animated so that each frame represents a snapshot of the (x,y) coordinates of galaxies at a given coordinate time t.
The value of tau for any given particle in this animation would be proportional to the spread of the particles in its region.
Now, what I don't know, is which of these two variables, t, or tau, (or neither), is called "cosmological time" by modern physicists.
I know that when Milne introduced the term "cosmological time" , he was referring to the tau variable, because he knew, when you do a Lorentz Transformation of the events involved, that every observer in the system would see the same explosion centered on their own position, with age equal to their local tau variable.
I think that when people have "disproven" Milne's model, though, it is because they plugged in numbers that assumed that all of the bodies in the universe should exactly follow the Hubble flow. But we can extrapolate back and recognize that the early universe was far too dense to follow a Hubble flow. Particles would have bounced back and forth many times before they could follow a straight line trajectory.
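The hyperbolic-arc bookkeeping described above can be sketched numerically (the coordinate time and speeds below are illustrative, not values from the discussion):

```python
import math

# Milne-model relation for a particle in the Hubble flow:
#   (c*tau)^2 = (c*t)^2 - x^2, with x = v*t,
# where t is coordinate time and tau is the particle's proper time.
c = 1.0    # work in units where c = 1
t = 13.7   # coordinate time in Gyr (illustrative)
for v in (0.0, 0.5, 0.9):  # recession speed as a fraction of c
    x = v * t
    tau = math.sqrt((c * t) ** 2 - x ** 2) / c
    print(f"v = {v:.1f}c -> tau = {tau:.2f} Gyr")
```

Faster-receding particles accumulate less proper time, which is the effect invoked in the "local universe ages less than the whole" argument.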
I calculated the energy associated with the Big Bang here:
https://www.linkedin.com/pulse/big-pop-banging-universe-marco-pereira
assuming an average SN energy of 1E52 ergs, the many-Bangs released, in 26 minutes, the energy of 1E21 supernovae.
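Multiplying those quoted figures through (arithmetic only; the per-supernova energy and count are the assumptions stated above):

```python
# Total energy implied by the quoted figures.
E_sn_erg = 1e52   # assumed average energy of one supernova, in ergs
N_sn = 1e21       # number of supernova-equivalents released
total_erg = E_sn_erg * N_sn
total_joule = total_erg * 1e-7  # 1 erg = 1e-7 J
print(f"{total_erg:.1e} erg = {total_joule:.1e} J")
```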
############################################################################################################################################################################################################
MP: "If you block the path and jump into creating Inflation before other options are properly evaluated, then you are making a choice of hypothesis for your theory or for your Cosmology."
My impression... My impression was that they calculate the distance from the magnitude of identifiable objects, and treat the redshift as an independent variable.
YES
They really have pretty strong data, from examining stars--nuclear half-lives, and the speed at which nearby galaxies are receding, that this little section of the universe is about 13.7 billion years old. But they also have pretty good evidence, from measurement of Type 1a supernova magnitudes and redshifts, and Cepheids and Tully-Fisher relations, etc, that the most distant observed objects in the universe are as much as 40 billion light-years away. That's based on the inverse-square law for intensity of radiation, and whether they can properly identify the objects.
It is the discrepancy between the 13.7 billion year age, and the 40 billion light-year radius that makes them think there must be inflation.
I corrected that as follows:
Since the farther the SN, the smaller its mass; that leads to overestimation of distances (by photometric distance measurement). That takes inflation out of the picture.
######################################################
Do you agree that the experimental 40 billion light-year figure comes from measurements of magnitude, or do you think they have actually used redshift to calculate the distance, using a formula that assumes inflation, a-priori?
Of course, the 'observed 36 Gly SN1A' distance comes from magnitude. That is the weakness of the L-CDM model. That is what I corrected first.
https://www.linkedin.com/pulse/big-pop-banging-universe-marco-pereira
MP: Of course, the 'observed 36 Gly SN1A' distance comes from magnitude. That is the weakness on the L-CDM model. That is what I corrected first.
But, can you understand that if the SN1a distance measurements came from magnitude, that means that measurement did NOT come from the L-CDM model?
If you are finding a flaw with magnitude vs distance measurements, you're not arguing against the L-CDM model. You're arguing against the inverse square law of intensity.
http://hyperphysics.phy-astr.gsu.edu/hbase/vision/isql.html
You have the logic wrong.
Trying to parse "Measurements were incorrect because of the White Dwarfs were epoch dependent. "
Should I do my best to creatively interpret that statement until it is not ambiguous?
Let's put it this way... An SN1a event requires a main sequence star to have proceeded from the red-giant phase to the planetary nebula stage--which generally takes around 10-15 billion years from the birth of the star. It also requires a nearby mass source, such as a stellar cloud or nearby star, from which it can accumulate enough matter to trigger the supernova. Hence, the frequency of SN1a events should be "epoch dependent", meaning they will be more frequent in cosmological structures that are over a certain age, such as globular clusters.
However, the epoch dependency of Supernova Type 1a events has no bearing on the inverse-square law for intensity.
If I were looking for a way to correct the inverse square distance law of intensity, I would probably be considering a diagram such as the one linked below, which illustrates pretty dramatically that the proportion of light emitted behind a receding body is far less than the proportion of light emitted from a stationary body.
But you see, all I can say here is "If I were using my model, then the inverse square law of distance needs to be corrected according to the rules of aberration." I can't say "I have discovered that the inverse square law of distance is wrong."
My impression is that they do not take into account relativistic aberration in their calculations of magnitude, and it is a systematic error that they could choose to correct for, if they believe that this relativistic aberration is present. And if they corrected for it, they would find that the distances to the furthest supernova are smaller.
But I can't say "I have discovered that the law of inverse squared distance is wrong, and I have corrected it by my theory." All I can say is that I have an alternative hypothesis, which I strongly suspect, may provide a much more reasonable fit to the data, without the necessity of mysterious dark matter and dark energy.
Here's a difference. Whereas you're saying the inverse square law is WRONG, I'm saying it is RIGHT, but it includes the assumption that the light-source is stationary. What I would suggest is to remove that assumption from the formula, and find a new formula for redshift vs. distance that takes into account the recession velocity and resulting aberration.
(By the way... I don't really know exactly what methods the SDSS team or High Z Supernova Team used to figure absolute magnitudes from their data... I'm just saying what I would have suggested if I'd been there, if they wanted to adhere to "strict relativity" they should have taken into account aberration.)
https://www.google.com/search?q=stellar+aberration&espv=2&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiekMXIweDSAhWk54MKHehdDyoQ_AUIBygC&biw=1920&bih=925&dpr=1#imgrc=HfPHbe129kzApM:
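If one did want to fold recession velocity into a magnitude correction, the textbook SR ingredient would be the Doppler/beaming factor. The sketch below assumes the standard result for a point source receding directly along the line of sight (Doppler factor D = 1/(γ(1+β)), observed bolometric flux scaling as D^4); it is an illustration of that standard formula, not a claim about what the supernova survey teams actually computed:

```python
import math

# Textbook SR beaming for a directly receding point source.
# Doppler factor at theta = 180 degrees: D = 1 / (gamma * (1 + beta)).
# Observed bolometric flux of a point source: F_obs = D**4 * F_emit.
def dimming_factor(beta):
    """Factor by which a directly receding source appears dimmed."""
    gamma = 1 / math.sqrt(1 - beta**2)
    doppler = 1 / (gamma * (1 + beta))
    return doppler ** 4

# A source receding at 0.5c: D = 1/sqrt(3), so D**4 = 1/9.
print(f"{dimming_factor(0.5):.4f}")   # 0.1111
```

So within strict relativity a receding source is substantially dimmed relative to a stationary one at the same distance, which is the effect being argued about here.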
You don't need to interpret my explanation. It is not as if I didn't write an article about it and make it available to you and everyone.
https://issuu.com/marcopereira11/docs/huarticle
To understand the 'epoch-dependent Supernovae', you have to think:
There is no need to invoke aberration or the life cycle of the White Dwarf.
The breakdown of the inverse-square law takes place at Cosmological Distances, and it is due to the Quantum Lagrangian Principle on page 41 and the derivation in eqs. 80-111.
These items tell you that the QLP succeeds in deriving Natural Laws from first principles and is thus a more fundamental theory than what you have.
The application of the theory to the SN1a Union 2.1 survey required that only the line-of-sight optical path be considered as the correct path for light from prior epochs.
Putting 1+1 together, one realizes how the individual dilaton field should be added to the rest of the cosmological dilaton field. At each de Broglie step, the dilaton field creates its imprint (polarized matter or vacuum). That polarization is the starting point for the next dilaton field, so the number of wavelengths adjusts itself to become the number of de Broglie steps between two epochs.
That means that the 'effective' number of dilaton wavelengths between emitter and detector does not depend upon the angle of the line-of-sight. All emitters in a given epoch will face the same attenuation, and that will not be directly related to the 4D distance. It will be related to the number of de Broglie steps between that epoch and now.
About aberration, I can tell you that aberration will not affect magnitude measurements. From what I learned from you, it will only affect the angle of observation. Since they project everything to the J2000.0 equinox, I don't need to worry about it, and I don't.
In addition, my measurements are impervious to small errors in angle. I am integrating over all angles in RA and DEC, so the results are robust.
By the way, I would accept corrections to the theory. The way I present the theory is aimed at getting those corrections. Lessons on etiquette are not needed.
What I want is some demonstration of intelligence and knowledge to put me in my place. George did that about a small issue in my article. I immediately thanked him and moved on.
You might say that I am overstating my case. I would say otherwise: since my theory didn't use a single parameter and predicted the SN1a data, that gives me a lot of authority on the subject. You are welcome to defy it by correcting my argument.
I would say that your obstinacy about Aberration, and not reading what you are criticizing, is not proper...:)
https://issuu.com/marcopereira11/docs/huarticle
I think my obstinacy about aberration is due to the question in the original post. The question is "Why would anyone think that Strict Relativity is valid over Cosmological Distances?"
So my intent in this particular thread is not necessarily to understand every detail of your paper, but rather to defend why I would think that Strict Relativity is valid over Cosmological Distances.
That being said, I think that your model, and L-CDM, and FLRW metrics, and Eddington, and Mach, and perhaps even Einstein--all have rejected the relativity of simultaneity on cosmological scales... So when you say I am not reading what I am criticizing, realize that these criticisms are not leveled against your theory, uniquely, but rather, those who do not even acknowledge the existence of a strict relativity theory.
If you want to model the universe according to strict relativity, then you MUST invoke aberration, because the redshift of the stars is caused by their recession velocity. In the context of strict relativity, you'd be wrong if you didn't account for aberration.
If you want to model the universe according to L-CDM model, then there is no need to invoke aberration, because in the L-CDM model, the redshift is due to cosmological expansion, and the galaxies are not actually receding. Quite right, I think... it would be wrong, within the context of the L-CDM model to include an aberration adjustment factor.
And, if you want to model the universe according to your Hypergeometric model... if I understand correctly, you'd be wrong not to account for the changing of the universal gravitational constant over time. And IF the universal gravitational constant is changing over time, THEN it would stand to reason that the Chandrasekhar mass would be changing over time as well.
So my intent in this particular thread is not necessarily to understand every detail of your paper, but rather to defend why I would think that Strict Relativity is valid over Cosmological Distances.
That is nice. The answer to that will be given by counter-example. Imagine you do your SR tests over short distances (laboratory, satellites, etc.). Then we can do this Gedanken Experiment to check whether SR would work over Cosmological Distances.
I can imagine one circumstance where SR would fail. Just put the Universe in a lightspeed-expanding hypersphere. Now consider that you stay here and send me in a rocket traveling at 0.5c. Now let us go into cryogenic sleep for a billion years.
When we wake up and try to synchronize our oscillators, the frequency adjustment (which relates to time flow) will have to consider not only distance and velocity but also the curvature (not included in SR).
This scenario would be consistent with local tests of SR, but it would deliver a death blow to SR over Cosmological distances.
So, I don't think you can properly defend SR over Cosmological Distances without attacking my topology and understanding my article. That is the whole point of this question.
That being said, I think that your model, and L-CDM, and FLRW metrics, and Eddington, and Mach, and perhaps even Einstein--all have rejected the relativity of simultaneity on cosmological scales...
We don't care about Simultaneity issues since we all use Absolute reference frames. Cosmology cannot depend upon which local reference frame you use. Observations have to be cleared of references to local reference frames.
So when you say I am not reading what I am criticizing, realize that these criticisms are not leveled against your theory, uniquely, but rather, those who do not even acknowledge the existence of a strict relativity theory.
As I said, you cannot defend SR over Cosmological Distances without directly criticizing my theory. Just see the simple scenario I proposed above. I say that because I am criticizing SR over Cosmological distances, and that doesn't give you leeway to avoid criticizing me back... or to disregard my criticism altogether without even trying to defend against it...:) You will realize that I am not the only one criticizing SR over Cosmological distances. You are the only one defending it.
If you want to model the universe according to strict relativity, then you MUST invoke aberration, because the redshift of the stars is caused by their recession velocity. In the context of strict relativity, you'd be wrong if you didn't account for aberration.
Not unlike L-CDM, GR, etc., I model on an absolute frame of reference and with absolute time. In my theory, those two are explicit. They are not explicit in other theories, but they are there. For instance, the CMB is calculated after one eliminates the anisotropic redshift due to the motion of the Galaxy with respect to the Cosmological absolute reference frame. When one speaks of the age of the Universe, one is also talking about an Absolute Cosmological Time.
Aberration (and I am just parroting back what you told me) can be eliminated if the data is referred back to a J2000.0 equinox (or even better, a J2002.0 Equinox).
I don't have the expertise to move the reference date from one to the other, nor do I need to. My data and theory are not sensitive to the aberration left over after the J2000.0 equinox referral. I deal with an aggregation of 1.3 million stars. If you consider random velocities superimposed on the Hubble flow, those contributions would average out to zero.
If you want to model the universe according to L-CDM model, then there is no need to invoke aberration, because in the L-CDM model, the redshift is due to cosmological expansion, and the galaxies are not actually receding. Quite right, I think... it would be wrong, within the context of the L-CDM model to include an aberration adjustment factor.
I don't want to model the Universe according to L-CDM. That said, we both use an absolute reference frame and don't need to take aberration into consideration. In my case, any residual non-Hubble-flow velocity is supposed to average out and thus is disregarded.
And, if you want to model the universe according to your Hypergeometric model... if I understand correctly, you'd be wrong not to account for the changing of the universal gravitational constant over time. And IF the universal gravitational constant is changing over time, THEN it would stand to reason that the Chandrasekhar mass would be changing over time as well.
That is exactly what I did. I considered that G is changing over time and that caused Chandrasekhar masses to change over time. We both agree here.
I also have an extra leg of reasoning that states that the nuclear chain reaction can be approximated by a first-step-limited reaction.
I considered reaction rates and also an increase in the intermediate reaction rates. The script is on my GitHub:
https://github.com/ny2292000/TheHypergeometricalUniverse
From physical expectations, one should consider that detonation will shorten the lifetime of the intermediates. I made my guess at reasonable reaction rates. Under these circumstances, normalized Light/[C]^2 reaches 0.94 by the time of the Absolute Peak Luminosity. Considering that the WLR is applied to observations, one would expect even this small bias to be erased.
MP: "So, I don't think you can properly defend SR over Cosmological Distances without attacking my topology and understanding my article."
Perhaps not. But part of the defense of a theory is to acknowledge its existence and to give a realistic presentation of its qualities: acknowledging the existence of SR, and pointing out that SR has the Relativity of Simultaneity (which I am now calling 'temporal facing') while your topology has absolute time. This is just an acknowledgement of the differences between the two.
My "defense of SR" is a defense against it being misrepresented. This is quite different from what is meant by "defense" in a physical fight. In a physical fight, you dodge, and weave, and try to avoid getting pinned. My defense of SR is intended to do the opposite--to pin it down exactly, so that all its properties are perfectly understood.
MP: "Just put the Universe in a lightspeed expanding hypersphere."
I wouldn't have any idea how to do that, but I have attempted to "Model the universe as a lightspeed expanding sphere."
http://www.spoonfedrelativity.com/pages/Milne-Explosion.php
Within the context of SR there are only three spatial dimensions and one time dimension, so I don't think you can do hyperspheres... just expanding spheres.
MP: "Now consider that you stay here and send me in a rocket traveling a 0.5 c. "
MP: "We wake up, and try to synchronize our oscillators, "
Okay... That would correspond to a hyperbolic angle of arctanh(.5) = 0.5493 rapidians. (yes, I just made up that unit--it's a combination of rapidity and radian--to my knowledge, there is no common usage unit for rapidity.)
There are multiple definitions of the word synchronize.
1. cause to occur or operate at the same time or rate.
2. adjust (a clock or watch) to show the same time as another.
3. both of the above.
I'm using definition 1, here.
Within the context of SR, if we are temporally faced 0.5493 rapidians from each other, and half-a-billion light-years apart, then it is impossible to synchronize our watches. From my perspective, you won't get out of cryogenic sleep until long after I am dead. From your perspective, I won't get out of cryogenic sleep until long after you're dead.
Even if we were right next to each other, there is no way within the context of SR to synchronize two clocks that have a relative velocity at 0.5c. Each of the clocks would be measured by the other observer to be going cosh(.5493) = 1/sqrt(1-.5^2) = 1.1547 times slower than the other.
It's rather like saying, let's hold two meter-sticks at 30 degrees from each other while synchronizing their centimeter-marks. You either align them parallel, or you don't synchronize their centimeter marks.
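The numbers in this exchange follow from the standard SR definitions (rapidity w = arctanh(β), mutual time-dilation factor γ = cosh(w)); a minimal sketch of just those relations:

```python
import math

# Standard SR relations used above: rapidity w = artanh(beta),
# and the time-dilation factor gamma = cosh(w) = 1/sqrt(1 - beta^2).
beta = 0.5                       # relative velocity as a fraction of c

w = math.atanh(beta)             # rapidity ("rapidians" in the text)
gamma = math.cosh(w)             # mutual time-dilation factor

print(f"rapidity w      = {w:.4f}")      # 0.5493
print(f"gamma = cosh(w) = {gamma:.4f}")  # 1.1547

# The relation is symmetric: each observer measures the other's clock
# running gamma times slower, which is why "synchronizing" fails in SR.
assert abs(gamma - 1 / math.sqrt(1 - beta**2)) < 1e-12
```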
MP: " You will realize that I am not the only one criticizing SR over Cosmological distances. You are the only one defending it."
Oh, I've been keenly aware of that. I actually got on the phone with Lewis Carroll Epstein a few years ago, author of "Relativity Visualized", and he had better things to do than worry about defending SR over Cosmological distances, like painting. E. A. Milne, the original defender of SR over Cosmological distances, is long passed away. WWoods, the wikipedia user that inspired my mathematica demonstration of "Temporal Facing", remains entirely anonymous.
MP: "Not unlike, L-CDM, GR etc, I model on an absolute frame of reference and with absolute time. In my theory, those two are explicit. They are not explicit in other theories but they are there."
Now here, let's discuss "absolute time" vs. "temporal facing". In a model where temporal facing applies, there is no paradox in saying, as I did above "From my perspective, you won't get out of cryogenic sleep until long after I am dead. From your perspective, I won't get out of cryogenic sleep until long after you're dead." But in a model where absolute time applies, then all observers have the same temporal facing, and this sort of statement would seem nonsensical.
I've always thought that cosmological GR and L-CDM have an implicit assumption of absolute time, but every proponent of the theory has insisted something along the line of "it isn't an assumption" which always struck me as a bit of a dodge.
So I'm curious... When you say "they are not explicit... but they are there" it seems to go along with my general sense, that absolute time may be a sort of "presupposition".
MP: That is exactly what I did. I considered that G is changing over time and that caused Chandrasekhar masses to change over time. We both agree here.
A white dwarf goes supernova when it accumulates a mass greater-than-or-equal to the Chandrasekhar mass. So if G were decreasing, then the Chandrasekhar mass would be increasing. Right? So the most distant SN1a events would be coming to us from an event when the universe was young, and G was high, and it could be a very small white dwarf exploding. If it is a very small white dwarf, then yes, it would produce a smaller explosion--and hence be thought to be more distant than it was, if this were not taken into account.
And I confess, I'm terrible at formulas that involve inequalities. But what I'm wondering... why would the small white dwarfs wait until the Chandrasekhar mass was equal to their mass, when earlier the Chandrasekhar mass was smaller than their mass?
MP: "So, I don't think you can properly defend SR over Cosmological Distances without attacking my topology and understanding my article."
Perhaps not. But part of the defense of a theory is to acknowledge its existence and to give a realistic presentation of its qualities: acknowledging the existence of SR, and pointing out that SR has the Relativity of Simultaneity (which I am now calling 'temporal facing') while your topology has absolute time. This is just an acknowledgement of the differences between the two.
My theory also has a local time frame where issues of simultaneity can be discussed. The right panel has time in it. Two times... Phi is the cosmological time and tau is the proper time. So each moving reference frame can have its own x'tau' and be consistent with SR. I don't have any problem operationally with SR.
My theory sees the Green Band, and that makes it different. From this perspective, I can see the local curvature of the Fabric of Space (green band) and assign physics to it. Relativity misses the fourth dimension and so it couldn't do that. Cosmology is not done in one of those twisted reference frames. Once one assigns physics to the torsion of the Fabric of Space (torsion equal to velocity, as in a surfboard surfing a shockwave), one can immediately assign inertial motion to the relaxation of the FS; that is, inertial motion happens so as to relax the FS. After a few billion years of motion, objects land on a relaxed FS and just follow the Hubble flow (on average, or in the majority). That is the framework where HU develops Cosmology.
######################################################
I wouldn't have any idea how to do that
I explain it here.
https://www.linkedin.com/pulse/big-pop-banging-universe-marco-pereira
######################################################
pin it down exactly, so that all its properties are perfectly understood.
I think SR is pinned down enough.
######################################################
http://www.spoonfedrelativity.com/pages/Milne-Explosion.php
Expanding spheres don't solve any problem. L-CDM is an expanding sphere.
The current radius is infinite. The Inflationary Universe picture is supposed to be rotated around the sky at all angles. So L-CDM is a distorted 3D sphere. If your sphere is not equally distorted, it will not fit the SN1a survey and will thus be wrong.
######################################################
Okay... That would correspond to a hyperbolic angle of arctanh(.5) = 0.5493 rapidians. (yes, I just made up that unit--it's a combination of rapidity and radian--to my knowledge, there is no common usage unit for rapidity.)
Wrong. You can use simple trigonometry. You don't need to invent anything.
The current radius is 13.58 Gly. Half a billion years later it will be 14.08 Gly. The cosmological angle is 0.5/14.08 radians. You can use the absolute cosmological framework xyzR or xyzPhi.
There is no need to bother with local frameworks or simultaneity. Just make use of the CMB to get an absolute framework to refer that 0.5c to. I should have made it easier just by saying velocity c. That is an absolute velocity.
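MP's arithmetic here can be reproduced exactly as he states it (these are his hypersphere model's assumptions: a 4D radius of 13.58 Gly now, growing at c; this is not a standard SR quantity):

```python
# Sketch of MP's stated hypersphere numbers (his model's assumptions,
# not standard SR): the 4D radius grows at c.
R_now = 13.58                  # current 4D radius in Gly (MP's figure)
dt = 0.5                       # elapsed cosmological time in Gy
R_later = R_now + dt           # radius grows at c -> 14.08 Gly

separation = 0.5               # Gly, the arc separation MP quotes
angle = separation / R_later   # "cosmological angle" in radians

print(f"R_later = {R_later:.2f} Gly, angle = {angle:.4f} rad")  # 14.08, 0.0355
```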
######################################################
By trying to talk about me being dead and about not being able to synchronize watches, you miss the Forest for the Trees. You can always synchronize watches. You know what 0.5c does to your watch, so you can synchronize it... It doesn't need to be spot on... This is a Cosmology Gedanken Experiment to talk about curvature. The curvature will provide a drift NOT PREDICTABLE BY SR.
######################################################
"From my perspective, you won't get out of cryogenic sleep until long after I am dead. From your perspective, I won't get out of cryogenic sleep until long after you're dead."
You should understand that the point of presenting the hypothesis that the Universe is embedded inside a curved hypersurface is to show that that hypothesis is consistent with SR locally but makes SR fail over Cosmological distances. It is not to discuss simultaneity. This is a Cosmology argument against SR. I am saying that it is easy, very easy, to come up with topologies that are locally consistent with SR but where SR fails over longer distances, where implicit curvature exists.
######################################################
So I'm curious... When you say "they are not explicit... but they are there" it seems to go along with my general sense, that absolute time may be a sort of "presupposition".
The implicit absolute space can be seen in how you eliminate bias in the CMB. Without getting rid of the motion of the Milky Way, the CMB is not homogeneous at all.
The absolute time is implicit any time someone states the age of the Universe.
It is a dodge because things don't make sense and people have no better answer to give. The same happens with Particle-Wave duality. HU disputes that and, in doing so, uncovers the de Broglie Force.
######################################################
A white dwarf goes supernova when it accumulates a mass greater-than-or-equal to the Chandrasekhar mass. So if G were decreasing, then the Chandrasekhar mass would be increasing. Right?
Yes. See Wikipedia and the equation there, linked below. The mass is proportional to G^(-3/2).
https://en.wikipedia.org/wiki/Chandrasekhar_limit
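From the formula on that Wikipedia page, M_Ch ∝ (ħc/G)^(3/2)/(μ_e m_H)^2, so the scaling with G can be sketched directly (the factor-of-two G ratio below is purely illustrative, not a figure from either party's model):

```python
# Chandrasekhar-limit scaling from the standard formula:
# M_Ch ∝ (hbar*c/G)^(3/2) / (mu_e * m_H)^2, i.e. M_Ch ∝ G^(-3/2).
def chandrasekhar_ratio(g_then_over_g_now):
    """Ratio M_Ch(then)/M_Ch(now) for a given ratio of G values."""
    return g_then_over_g_now ** (-1.5)

# Illustrative example: if G were twice its present value in some earlier
# epoch, the Chandrasekhar mass then would be smaller by 2**-1.5 ≈ 0.354,
# so a decreasing G implies an increasing Chandrasekhar mass over time.
print(chandrasekhar_ratio(2.0))
```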
You can see more about the logic behind my theory in my newest question
https://www.researchgate.net/post/Does_the_Supernova_Chain_Reaction_Follows_C2
For my theory to be correct, many things have to fall into place.
If any one of these items is incorrect, my theory is incorrect!
Only Item 2 is outside my theory. If it is incorrect, my theory fails.
######################################################
why would the small white dwarfs wait until the Chandrasekhar mass was equal to their mass, when earlier the chandrasekhar mass was smaller than their mass?
They wait until they reach the Chandrasekhar mass because of Physics. Physics governs the Pauli exclusion principle and the forces repelling electrons (fermions). White Dwarfs will explode when they reach the Chandrasekhar mass and not earlier (unless they bump into some other star... or get eaten by a Black Hole...).
MP: Wrong. You can use simple trigonometry.
I'm not wrong.
https://en.wikipedia.org/wiki/Rapidity#In_experimental_particle_physics
w=arctanh(v/c)
MP: You can always synchronize watches.
Not if you're using Special Relativity, you can't.
Sorry, I usually try to spend a bit of time discussing your idea... but seriously, you should know better than to say that velocity is the trigonometric tangent of rapidity. That's kindergarten stuff.
You are wrong. I am using the absolute reference frame and that framework is Cartesian. There is no need to use hyperbolic tangents.
This is cosmology, not particle physics.
You were attacking Special Relativity by misrepresenting Special Relativity.
That's called a strawman argument.
MP: You should understand that the point of presenting the hypothesis that the Universe is embedded inside a curved hypersurface is to show that that hypothesis is consistent with SR locally but makes SR to fail over Cosmological distances.
You should understand that invoking a priori assumptions that SR fails over cosmological distances is not the same as making an argument for SR failing over long distances.
MP: "You can always synchronize watches. You know what 0.5 c does to your watch, so you can synchronize it..."
You cannot synchronize watches between two distant observers moving away from each other at 0.5c in the context of SR. Each observer's clock is 15% slower than the other's, until they change rapidity and match pace.
MP: this is a Cosmology Gedanken Experiment to talk about curvature. The curvature will provide the drift NOT PREDICTABLE BY SR.
I don't think this is the case... The trouble is not so much that things aren't "predictable by SR" as that it is very difficult to wrap one's mind around SR. For instance, Milne predicted the Cosmic Microwave Background Radiation by 1935, but he had no notion of how bright it would be, or how far away it was, and he hadn't quite figured on it being a surface of hydrogen recombination. He just said it would be a "continuous background of finite intensity." But then when the cosmic background radiation was actually discovered in 1964, its discovery was accidental.
Similarly, in SR, if an observer accelerates back and forth and back and forth, then he ages less than an observer who travels along without acceleration. This is called the twin paradox. But because pedagogically speaking, the twin paradox is taught as a flaw with SR, instead of a feature, people don't realize that the Twin Paradox actually predicts cosmological inflation in a Hubble Universe.
MP: I am saying that it is easy, very easy to come up with topologies that are locally consistent with SR, where SR fails over longer distances where implicit curvature exists.
Ah, ha. Name 10. Okay, just kidding. But really, no, I'm not kidding. Among the metrics that actually make some sense, I can think of Schwarzschild, Painleve-Gullstrand, and Rindler. What these metrics do is take a Minkowski space-time and envision a set of clocks moving through the space as a continuous goo. The positions and readings of those clocks can be mapped to an underlying Minkowski coordinate system. But that underlying Minkowski coordinate system is infinite in extent and does not have any implicit curvature.
It is that underlying Minkowski coordinate system on which the Lorentz Transformations apply, and SR is valid over cosmological distances. SR doesn't care that we've set up a bunch of clocks that have created a local non-Minkowski coordinate system, because it operates on the underlying Minkowski coordinates of events, not on the "curved" labels and readings of the clocks, which introduce curvature.
This seems to be a longstanding conceit of General Relativity experts, that the Lorentz Transformations can be prevented from working on cosmological scales by imagining various geometrical arrays of moving or stationary clocks in gedanken experiments. But in all these constructions, the readings of the clocks always maps to an underlying (flat) Minkowski metric, which is infinite in extent.
MP: You can always synchronize watches. You know what 0.5 c does to your watch, so you can synchronize it... It doesn't need to be spot on.... this is a Cosmology Gedanken
If I synchronized my watch with YOUR watch at 0.5c, I would have to slow my watch down by about 15%. But if there were a set of clocks that were co-moving with you, passing by my position, and I were able to read them as they went by, then I would have to speed my watch up. If we wanted to synchronize our watches, we'd have to decide which of us would keep our own watch reading, and which of us would base our clock reading on the clocks in our immediate vicinity that are passing by at 0.5c.
So yes, I could speed up my watch to match the clocks passing by my position at 0.5c in your direction, or you could speed up your watch to match the clocks passing by your position at 0.5c in my direction. Or you could slow down your watch to match the reading of my watch in your coordinate system, or I could slow down my watch to match the reading of your watch in my coordinate system.
But regardless of what we do, we're always going to agree that from my point-of-view, your watch is slower, and from your point-of-view, my watch is slower.
======================
MP: They wait until they reach Chandrasekhar mass because of Physics. Physics governs the Pauli exclusion principle and the forces repelling electron (fermions). White Dwarfs will explode when they reach Chandrasekhar mass and not earlier (unless they bump into some other star... or get eaten by a Black Hole...
Are you really sure you've got the inequality correct on this? Because of physics, a white dwarf will wait until its mass is greater than or equal to the Chandrasekhar mass, and as soon as that condition is met, it should go supernova. But if you have the gravitational constant, G, decreasing, then the Chandrasekhar mass would be increasing. That means that the white dwarf would have been greater than the Chandrasekhar mass all this time--sufficient mass to go supernova... But somehow it waits until its mass is insufficient to go supernova... And then it goes supernova?
Since G is inversely proportional to the 4D radius, the Chandrasekhar mass increases with distance (look-back time).
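To make the scaling in this exchange concrete: the Chandrasekhar mass goes as (ħc/G)^(3/2)/(μ_e m_H)^2, i.e. as G^(-3/2). The sketch below only illustrates that scaling, normalized to today's ~1.4 solar masses; the factor-of-two G values are hypothetical inputs, not values anyone in the thread claims:

```python
# Chandrasekhar mass scales as (hbar*c/G)^(3/2) / (mu_e*m_H)^2,
# i.e. as G^(-3/2). Sketch of the scaling only.

M_CH_TODAY = 1.4  # solar masses, for mu_e = 2 (carbon/oxygen white dwarf)

def chandrasekhar_mass(g_ratio):
    """M_Ch in solar masses when G is g_ratio times today's value."""
    return M_CH_TODAY * g_ratio ** -1.5

# Under the thread's hypothesis that G decreases with the 4D radius,
# a larger past G means a SMALLER past Chandrasekhar mass:
print(chandrasekhar_mass(2.0))   # ~0.49 solar masses (G twice today's)
print(chandrasekhar_mass(0.5))   # ~3.96 solar masses (G half today's)
```

This is the inequality question being debated above: with G decreasing in time, M_Ch rises in time, so whether a given dwarf ever crosses the threshold depends on which way the inequality runs.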
###############################################
I will skip arguments about simultaneity since Cosmology shouldn't be done on relative reference frames.
###############################################
Also, we all agree that the SR Doppler shift doesn't match observations (even L-CDM knows that), so I don't think it is worthwhile to debate SR's fitness for Cosmological Distances.
A quite strange discussion. After the first observational support of GR, SR was out as a base for a theory of gravity, simply because in SR light would not be deflected by gravity, but observation has shown that it is. Why discuss, after this, whether SR could be used in cosmology?
Of course SR should be used in cosmology. It's absurd that it's not.
The fact that light bends around gravitational masses does not prevent a rotation transformation when you turn your head.
Why would it prevent Lorentz Transformation, when you accelerate?
MP: Also, we all agree that SR Doppler shift doesn't match observations.
What observations are you talking about here?
I would recommend those who think one has to use SR in cosmology to start learning GR. Lorentz transformations are transformations between preferred systems of coordinates named "inertial". In GR, there are no such inertial systems of coordinates.
Of course, you are free to apply the Lorentz transformations to any system of coordinates. But this does not make much sense - it would be similar to, say, applying rotations to spherical coordinates.
No problem if you want to develop a theory better than GR. But what is the point of questioning SR in a domain where nobody for more than 100 years thinks it works?
And to find something better than GR without knowing GR is a quite hard job. Because there is a lot of empirical evidence which fits GR predictions, or at least something similar, like GR + some dark matter. You have to do all this yourself for your own theory.
If you know GR, you can solve most of these problems in a quite simple way: Show that your new theory gives, in some circumstances, the same predictions as GR. After this, it remains to care about the few situations where your theory differs from GR.
This is what I have done in my theory of gravity, http://ilja-schmelzer.de/gravity/
Without this, you have no real chance.
You are 100% correct. I stumbled into my theory without being a fully grown GR person, so due to ignorance I thought they were using the Doppler shift to calculate d(z).
Here I decided to create questions challenging my assumptions, the most important being:
https://www.researchgate.net/post/Does_the_Supernova_Chain_Reaction_Follows_C2
This question concerns how the Absolute Peak Luminosity relates to a variable gravitational constant. I claim it scales as G^(-3).
You are preaching to the Choir. I made sure I first reproduced Time Dilation, Mercury's precession and Gravitational Lensing. These are the commonalities with GR (they are really agreements with observations, not with GR itself).
Then I applied the theory to the Supernova Survey 1A and challenged the measurements there. I don't need to know GR to challenge that: if I am correct, then GR is wrong, since it fits that data.
Challenging the data results in a well-behaved Universe.
GR is a 4D spacetime theory. Mine is a 5D spacetime theory with 4 spatial coordinates.
In a 4D spacetime theory, BAO are supposed to appear angularly.
In my theory, they are supposed to appear along the distance coordinate.
I looked for them in distance using the Sloan Digital Sky Survey and found them.
You can see them in the picture below.
You can also see the parameter-free predictions of the SN1A distances from the attached equation.
You can also find the article there.
I will take a look at yours soon.
By the way, the last picture, ManyBangs North Mass, refers to the North MASS SDSS dataset. It shows the 36 bangs associated with the beginning of times. Those are observations on 10-year-old data that hadn't become visible before, just because of the failure of GR.
What you call Lorentz Ether, I call the Fabric of Space. In a 5D spacetime theory, one can have an absolute time and an absolute spatial framework without conflict with Relativity. My theory disregards all the GR algebra and considers it unnecessary, because I derived Gravitation from first principles and it is a velocity-dependent force.
It is a waste of time to do algebra on a velocity-dependent force, since it is not amenable to geodesic treatment. Since your theory has Inflation and Dark Matter, it would be in conflict with my challenge of the SN1A distances.
https://issuu.com/marcopereira11/docs/huarticle
IJ: This is what I have done in my theory of gravity, http://ilja-schmelzer.de/gravity/
Can you give an explicit form for square-root-of-negative-g?
Hmmmm, I think I may have answered my own question... If g is the diagonal, (c^2,-1,-1,-1) form metric tensor of Minkowski space, Here,
http://www.spoonfedrelativity.com/pages/EinsteinNotationAsMatricesPart4.php
I have an expression called \chi, used in equation 4.15, which, when squared, would produce negative g.
IS: I would recommend those who think one has to use SR in cosmology to start learning GR.
MP: I suspect I will bypass GR since I am correcting it.
Ilja knows this subject extremely well Marco, he's been working on it for many years and knows where the pitfalls lie.
SR does not deal with gravity at all. When cosmologists talk about redshift versus distance such as in the Hubble Law, it is assumed that the observer and the distant source are both locally at rest and the redshift is purely due to the curvature introduced in GR. If you try to use SR, you get no shift at all so no cosmologist imagines SR can be used.
The toughest constraints on gravity are those tested locally: obviously the approximate inverse-square law of Newton must be a classical limit, the orbit of Mercury should be predicted accurately, and then gravitational bending at twice the Newtonian prediction should be matched. Any theory of gravity that cannot replicate those observations is not going to go far.
Only then can that revised gravitational theory be extrapolated to see what it says about cosmological observations, will it predict any redshift at all for example?
> Can you give an explicit form for square-root-of-negative-g?
g is the determinant of the matrix $g_{mn}$, which is a 4x4 matrix. So, it is a sum of 24 terms, each of them a product of four of the $g_{mn}$. I can, in principle, do it, but I'm too lazy to do it. What would be the point? In most of the simple computations of example solutions most of the $g_{mn}$ are zero anyway, and therefore most of the terms disappear.
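A sketch of that remark: the Leibniz formula for a 4x4 determinant is indeed a signed sum over the 24 permutations, and for a diagonal metric most of those terms vanish. For the Minkowski metric in the diagonal form (c^2, -1, -1, -1) mentioned earlier, this gives det g = -c^2 and hence sqrt(-g) = c (here in units where c = 1):

```python
import math
from itertools import permutations

def det4(g):
    """Determinant of a 4x4 matrix via the Leibniz formula:
    a signed sum over the 24 permutations of the column indices."""
    total = 0.0
    for perm in permutations(range(4)):
        # sign = parity of the permutation (count inversions)
        sign = 1
        for i in range(4):
            for j in range(i + 1, 4):
                if perm[i] > perm[j]:
                    sign = -sign
        term = 1.0
        for row, col in enumerate(perm):
            term *= g[row][col]
        total += sign * term
    return total

c = 1.0  # geometric units
minkowski = [[c * c, 0, 0, 0],
             [0, -1, 0, 0],
             [0, 0, -1, 0],
             [0, 0, 0, -1]]

g_det = det4(minkowski)          # -c^2: only the diagonal term survives
print(math.sqrt(-g_det))         # sqrt(-g) = c
```

As the reply above notes, in the simple example solutions most off-diagonal g_mn are zero, so 23 of the 24 terms drop out.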
GD: it is assumed that the observer and the distant source are both locally at rest and the redshift is purely due to the curvature introduced in GR. If you try to use SR, you get no shift at all so no cosmologist imagines SR can be used.
Yes, this sounds to me like the standard sort of explanation that I've heard bandied about.
Can you give any further detail of what you mean by "observer and source locally at rest?"
Are you saying that the observer and source are approximately at rest with respect to the objects in their immediate vicinity, or they are approximately at rest with respect to each other?
Because if you say they are at rest with objects in their immediate vicinity, this does not preclude the possibility that the redshift is caused by the recession velocity of the source with respect to the observer.
Of course, if there is a recession velocity, SR would predict a redshift.
There seems to be a skipped step here, where you are concluding that the source and observer must be at rest with respect to each other... Or is it an a priori assumption that the source and observer are at rest with respect to each other?
MP: Also, we all agree that SR Doppler shift doesn't match observations
Marco said earlier that he is under the impression that there has been some modeling by cosmologists that actually acknowledges the possibility that the redshift is caused by recession velocity.
I have never seen any evidence that this was the case. All treatments of cosmological General Relativity have made an argument similar to what George Dishman has said... for example "source and observer and the distant source are both locally at rest "
GD: Ilja knows this subject extremely well Marco, he's been working on it for many years and knows where the pitfalls lie.
MP: I mean no disrespect. My point is that GR is a 4D spacetime theory; mine is 5D. Mine replicates the SN1A observations without inflation and on a Cartesian space (although it can be construed as locally Minkowskian). I am far from having the resources to devote to learning GR without a reason. If my theory were incorrect, I would just leave the problem to the people who are paid to solve it. While that doesn't happen, I am still around.
My theory has great value if the Absolute Peak Luminosity of type 1A Supernovae has a G^(-3) dependence.
I am struggling with that part of the theory right now. This problem may or may not be solvable analytically. The reason is that the 'observed apparent peak luminosities' pass through the WLR (the empirical width-luminosity relation) before becoming a measurement. I used empirical reasoning... to prove myself right... I would love to see if the analytical part of the detonation modeling allows me to gain the full G^(-3).
If you want to challenge the theory or help me prove it right, you can look at that problem. I created a question about it and have been writing about it for some time. Arnett is pushing me towards learning the modeling of detonation processes in Supernovae. It is not easy to follow the effect of an increased Chandrasekhar radius through those semi-empirical models. I am waiting to receive his book. Hopefully a clear, analytical model will be there. Up to now I can see a dependence of G^(-2)... I suppose the rest (G^-1) should be hidden in the other variables... but that is still not visible to me.
If I cannot find the missing G^-1, I will declare myself wrong and say excuse me..>:)
GD: SR does not deal with gravity at all. When cosmologists talk about redshift versus distance such as in the Hubble Law, it is assumed that the observer and the distant source are both locally at rest and the redshift is purely due to the curvature introduced in GR. If you try to use SR, you get no shift at all so no cosmologist imagines SR can be used.
GD: The toughest constraints on gravity are those tested locally: obviously the approximate inverse-square law of Newton must be a classical limit, the orbit of Mercury should be predicted accurately, and then gravitational bending at twice the Newtonian prediction should be matched. Any theory of gravity that cannot replicate those observations is not going to go far.
MP: I did that.
GD: Only then can that revised gravitational theory be extrapolated to see what it says about cosmological observations, will it predict any redshift at all for example?
MP: I did that.
Sorry, but this sounds contradictory. You do not have the resources to learn GR, which is a quite simple theory, at least if you care only about the well-known solutions for the expanding universe, but you claim to have some 5D theory which predicts the SN1a behavior - which requires a lot of knowledge about statistics for evaluating observations, the physics of the SN1a themselves, and a lot more? Then you claim to have computations which show that your theory matches the Newtonian limit, the Mercury orbit and the gravitational bending of light? Sorry, I don't believe it.
MP: I would love to see if the analytical part of the detonation modeling allows me to gain the full G^(-3).
There is no analytical solution, supernovae involve a mix of detonation and deflagration in the turbulent fluid interior, probably starting off-centre.
It's not even known whether the dominant process is a single white dwarf gaining mass from a companion star until it exceeds the Chandrasekhar Limit, or the merger of two dwarfs, both below the limit but whose sum exceeds it. It is likely that there are examples of both types but the prevalences are not known.
A lot of work is being done on this, a random couple of examples are linked. To model this, you'll need a modern supercomputer.
https://ccse.lbl.gov/Publications/aja/wka.pdf
https://www.youtube.com/watch?v=5tjd9KAPais
Let's get back to the original question. "Why would anyone believe that strict relativity is valid over cosmological distances?"
In order to answer that question, realistically, I think we need to have a set of questions to answer... let me offer a few questions
1a. You see a stationary object at 4 o'clock (120 degrees to your right), and accelerate in the 12 o'clock direction with a rapidity boost equal to 3 rapidians. (v/c = .995= tanh(3)) To what angle does the image aberrate after the boost.
1b. Assume the object in part 1a was 13 billion light-years away before the acceleration. How far away is the image right after the boost, and at what angle would you need to aim a speed-of-light signal to intercept the object?
2a. Same scenario as 1a, except this time, the object at 4 o'clock is not stationary, but traveling directly away from you with rapidity equal to 3 rapidians (v/c = .995 = tanh(3)) Does the image aberrate to the same location, or a different location, after you accelerate .995c in the 12 o'clock direction?
2b. Same as 1b, except the object was traveling directly away from you at .995c before the acceleration... figure out the angle you would have to aim a signal to intercept the object after the boost in this scenario.
3. Same as scenario 2a, 2b, except this time consider an object directly in front of you at 12 o'clock, receding at .995c (3 rapidians) before the boost. What is the direction and distance to the object after the boost?
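Question 1a above can be sketched numerically with the aberration-of-starlight formula, cos a' = (cos a + β)/(1 + β cos a), where a is measured from the boost direction. Aberration conventions vary between texts, so treat the sign convention here as an assumption rather than the definitive setup of the question:

```python
import math

def aberrate(angle_deg, beta):
    """Apparent direction of a source, for an observer boosted with speed
    beta (fraction of c) toward angle 0.  Uses the aberration-of-starlight
    form: cos(a') = (cos(a) + beta) / (1 + beta*cos(a)),
    angles measured from the direction of motion."""
    a = math.radians(angle_deg)
    cos_ap = (math.cos(a) + beta) / (1.0 + beta * math.cos(a))
    return math.degrees(math.acos(cos_ap))

beta = math.tanh(3.0)            # rapidity 3 "rapidians" -> v/c ~ 0.995
# Question 1a: source at 4 o'clock = 120 degrees from the 12 o'clock boost
print(aberrate(120.0, beta))     # ~9.9 degrees: the image swings almost dead ahead
```

That is the qualitative point of the exercise: after a rapidity-3 boost, an image that was well behind the observer's shoulder aberrates to within about ten degrees of the direction of motion.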
Now, if General Relativity somehow assumes, or concludes, "observer and source locally at rest" and SR admits the possibility of "observer and source at large relative rapidity" then obviously, the two theories should give different answers. I don't think that General Relativity makes any such assumptions or conclusions, though many cosmologists certainly believe that it does.
Special Relativity will give an explicit answer to all these questions, while (George Dishman's concept of) a cosmologist will hand-wave some excuse involving "distant locality" for not bothering to answer.
George Dishman's concept of a cosmologist: "When cosmologists talk about redshift versus distance such as in the Hubble Law, it is assumed that the observer and the distant source are both locally at rest and the redshift is purely due to the curvature introduced in GR."
https://groups.google.com/forum/#!topic/sci.physics.relativity/nFIZ9q3DQmI%5B176-200%5D
Jonathan, I'll try to answer your previous question this afternoon as I'm busy at the moment but the significance is that speeds for your question 1a in the latest post never exceed 0.01c for any galaxy. For the Solar System, the speed is 368km/s or 0.00123c
However, the redshift of the highest-redshift galaxy, GN-z11, is 11.09, so the distance between us and it was increasing by 2.24 light years per year when the light we see now was emitted, and we can extrapolate the current cosmological model to find that the separation is currently increasing by 4.32 light years per year.
Obviously you cannot treat those rates of increase of separation as simple speeds and plug them into the usual relativistic Doppler formula.
Now, I think it is done. I provided a few arguments supporting my assertion that the type 1a Supernova Absolute Peak Luminosity varies with G^-3. That would make the distances observed in the Supernova Survey incorrect by a factor of G^1.5.
I think it is now proven in a way the community can understand.
Please let me know if you are ready to abandon ship (GR) and move into HU..>:)
https://hypergeometricaluniverse.quora.com/Second-Peer-Review-2
GD: the redshift of the highest-redshift galaxy GN-z11 is 11.09 so the distance between us and it was increasing by 2.24 light years per year
GD: Obviously you cannot treat those rates of increase of separation as simple speeds and plug then into the usual relativistic Doppler formula.
Just to be clear about what "the usual relativistic Doppler formula" is - and you're free to call this "Jonathan Doolin's concept of the relativistic Doppler formula" or whatever you like - it is the formula given at https://en.wikipedia.org/wiki/Redshift
1+z = sqrt((1+v/c)/(1-v/c))
For a redshift of 11.09, then, we have
12.09^2 = (1+v/c)/(1-v/c)
v = c*(145.17/147.17) = 0.9864c
This would indicate a rapidity of tanh^-1(0.9864) = 2.492 rapidians.
But SR makes no prediction of the distance to that object. It would only tell us how fast it's moving away from us. You would have to do some more complicated work involving standard candles and magnitudes to get an estimate of the distance.
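The arithmetic above can be sketched as follows (the redshift 11.09 is GN-z11's, from the thread; the formula is the standard SR Doppler relation):

```python
import math

def beta_from_z(z):
    """Special-relativistic Doppler: 1+z = sqrt((1+b)/(1-b)), solved for b."""
    r = (1.0 + z) ** 2
    return (r - 1.0) / (r + 1.0)

z = 11.09                         # GN-z11
beta = beta_from_z(z)             # ~0.9864
rapidity = math.atanh(beta)       # ~2.492 "rapidians"
print(f"v/c = {beta:.4f}, rapidity = {rapidity:.3f}")
```

As noted above, this gives only a recession speed; SR by itself says nothing about the distance.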
GD: Jonathan, I'll try to answer your previous question this afternoon as I'm busy at the moment
Thanks. And I will work out how the answers would turn out under SR.
GD: it is assumed that the observer and the distant source are both locally at rest and the redshift is purely due to the curvature introduced in GR. If you try to use SR, you get no shift at all so no cosmologist imagines SR can be used.
JD: Yes, this sounds to me like the standard sort of explanation that I've heard bandied about.
It is really a question of terminology, each galaxy exhibits some redshift but for convenience we can split it into two parts, a value which is the average observed for all galaxies around the same distance and a second part specific to the galaxy in question, hence relative to the average. However, see below for why cosmologists don't think of the average part as a speed as such.
JD: Can you give any further detail of what you mean by "observer and source locally at rest?" Are you saying that the observer and source are approximately at rest with respect to the objects in their immediate vicinity, or they are approximately at rest with respect to each other?
The former. The average for the matter in the two vicinities is that they are moving apart by roughly 70km/s for every megaparsec of distance between them.
JD: Because if you say they are at rest with objects in their immediate vicinity, this does not preclude the possibility that the redshift is caused by the recession velocity of the source with respect to the observer.
Correct. Let me try to give you a slightly flawed analogy. Suppose you had a very sensitive speed gun and some distance from you there was a wasps' nest on a tree branch. You don't like wasps so you are prudently walking away from it. You could point the gun at each wasp buzzing round the nest and get its individual speed, then take the average, and that's a good estimate for your walking speed. You could also calculate the standard deviation of the speeds after subtracting the average and, after allowing for varying flight directions, you could find the average air-speed of a wasp.
Looking at a distant cluster of galaxies is a bit like that, we can get both the speed of the galaxies within the cluster and the mean speed of the cluster as a whole. The flaw is that there is no "nest" for the galaxies, it is the combined gravity of all the galaxies that holds them in the cluster (though you could think of the dark matter halo as represented by the "nest").
JD: Of course, if there is a recession velocity, SR would predict a redshift.
You could but using the Doppler formula will not give the right frequency shift at large distances. As long as the speed is a small fraction of the speed of light it's fine though.
JD: There seems to be a skipped step here, where you are concluding that the source and observer must be at rest with respect to each other... Or is it a priori assumption that the source and observer are at rest with respect to each other?
Neither. The starting point for cosmology is the "Cosmological Principle" that says the distribution of stuff in the universe is homogeneous and isotropic if you average over large enough volumes. From that, you get the Robertson-Walker metric and Friedmann Equations. Those describe an expanding (or contracting but that's not observed) universe and that produces a redshift due to expansion.
On top of that, you get additional Doppler from local movements. For a cluster that has been a tight group for a significant time, the Virial Theorem is a good guide to the velocity distribution and allows us to measure the total mass of the cluster (including dark matter).
To continue the above analogy, imagine two wasps' nests hanging from branches on opposite sides of a tree. As the tree grows, the nests move slowly apart. If a wasp from one nest pointed a speed gun at each of the wasps buzzing round the other nest, there would be three components to each reading: the speed of the wasp carrying the gun relative to his own nest, the rate at which the nests were separating and the speed of each of the targeted wasps relative to their nests.
JD: Marco said earlier that he is under the impression that there has been some modeling by cosmologists that actually acknowledges the possibility that the redshift is caused by recession velocity.
That is partly the case, the redshift is the combination of the Doppler from the motion of the distant galaxies relative to the mean of all matter in their locale, the stretching of the wavelength due to the Hubble expansion, and the motion of the Solar System within our local structure. We know the latter accurately by measuring the dipole in the CMB (it is 368±2km/s), the other two need to be separated mathematically (basically just averaging) to get the Hubble Law coefficient of ~70km/s per Mpc.
JD: I have never seen any evidence that this was the case. All treatments of cosmological General Relativity have made an argument similar to what George Dishman has said... for example "source and observer and the distant source are both locally at rest "
The average speed of galaxies within clusters is of the order of 0.1% to 1% of the speed of light, say around 1000km/s for a very rough "rule of thumb" guide and applies regardless of distance (well until you look so far back that clusters were much smaller).
The speed of the Hubble flow is proportional to distance and for the most distant galaxy yet detected, it was receding at around 2.2 times the speed of light. SR still applies "over there" so relativistic particles ("cosmic rays") within that galaxy aimed towards us would be moving away at no less than 1.2c and those going away from us would be receding at up to 3.2c. The speeds relative to local matter are limited by SR, the rate at which the gap between us and the galaxy increases is governed by GR and is not limited. Note that that rate of increase is not a speed at all, it is a fractional rate of change and at present it is a 1% expansion every 140 million years.
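That last figure is easy to check. A sketch assuming H0 ≈ 70 km/s/Mpc (a round value consistent with the ~70 km/s per Mpc quoted earlier in the thread, not a precise measurement):

```python
# Check the figure: H0 ~ 70 km/s/Mpc is a fractional expansion rate of
# about 1% per 140 million years.
KM_PER_MPC = 3.0857e19
SEC_PER_YR = 3.156e7

H0 = 70.0 / KM_PER_MPC                 # Hubble rate in 1/s
dt = 140e6 * SEC_PER_YR                # 140 Myr in seconds
print(f"fractional expansion in 140 Myr: {H0 * dt:.4f}")  # ~0.0100
```

This is why the rate of increase of separation is "not a speed at all": it is a fractional rate, so the separation growth in light-years per year scales with distance without bound.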
JD: v=c*(145.16/147.16) = 0.9864c
That would be right starting from the redshift, but now try working out the redshift starting from the figure that the distance is increasing by 2.24 light years per year. Standard cosmology says the distance between wave crests is stretched by the same amount as the distance between the galaxies hence the wavelength would be increased by a factor of 12.09, what is your SR alternative? Do you see why cosmologists don't approach the problem that way?
MP: I would love to see if the analytical part of the detonation modeling allows me to gain the full G^(-3).
GD: There is no analytical solution, supernovae involve a mix of detonation and deflagration in the turbulent fluid interior, probably starting off-centre.
It's not even known whether the dominant process is a single white dwarf gaining mass from a companion star until it exceeds the Chandrasekhar Limit, or the merger of two dwarfs, both below the limit but whose sum exceeds it. It is likely that there are examples of both types but the prevalences are not known.
A lot of work is being done on this, a random couple of examples are linked. To model this, you'll need a modern supercomputer.
MP: I beg to differ.
https://hypergeometricaluniverse.quora.com/Second-Peer-Review-2
MP: I beg to differ.
The first equation derives the radius from the density and the second derives the mass from it so the derivation is circular. I would think you need to start from the equation of state.
You are correct, I don't need to derive a tautology. It is just there to remind people that the density is independent of G.
This is a minor point and not relevant to any conclusion. I kept it there just because it is interesting to know whether the pressure and temperature profiles are independent of G (just shorter, more condensed). I believe they are, since rho_center is defined by the maximum density permitted by electron non-degeneracy and the pressure at the surface is zero. It is not really relevant or used anywhere.
Please read the rest of the proof. The rest is relevant.
https://physics.wustl.edu/buckley/476_576/Theory/lecture11_11.pdf
I've skimmed the rest but you are trying to extrapolate from a star burning in a stable mode to an unstable detonation of a supernova, there is no validity in that at all. The only way you could get a credible result would be to take the work being done to model the nuclear burning for various different chemical compositions and alter G to find out what effect it has.
I use equations for the detonating star. That detonation equation has the mass of the Sun as its reference.
I would expect your answer to be NO for all cases.
The reason I don't need to consider your path is the following:
########################################
I updated the calculation to include the fact that when the pressure is dominated by radiation pressure (a condition similar to the one in a Supernova), the Luminosity would be proportional to M. Under those circumstances, the Supernova Luminosity is proportional to G^(-3).
Under the ideal gas model (expansion dominated by gas pressure), the Supernova Luminosity is proportional to G^(-3.5).
The observed value, after the data is massaged by the WLR (an empirical adjustment), is closer to G^(-3).
This is an alternative theory where the most likely scenario (expansion dominated by radiation pressure) yields the exact G dependence used in the HU model.
########################################
I would say that this is a pretty good alternative to the current Cosmology, where the Universe came from nowhere (no mechanism), expanded at infinite speed, stretched space for no good reason and then decided not to stretch it so much, for no good reason either. A model that invokes Dark Energy (nowhere to be found) and Dark Matter (nowhere to be found either).
And HU did it all without free parameters, just physical hypotheses, like the one of Luminosity proportional to G^-3.
https://hypergeometricaluniverse.quora.com/Second-Peer-Review-2
GD: That would be right starting from the redshift, but now try working out the redshift starting from the figure that the distance is increasing by 2.24 light years per year. Standard cosmology says the distance between wave crests is stretched by the same amount as the distance between the galaxies hence the wavelength would be increased by a factor of 12.09, what is your SR alternative? Do you see why cosmologists don't approach the problem that way?
I think what you have done here is that someone has found, by measuring the magnitudes of the distant galaxies, the distance to the furthest galaxy to be something like 31 billion light-years.
Then, plugging this number into the formula for Hubble's Law,
v = d*H_0
you find that the velocity comes out to some 2.24 light-years per year.
Is that a correct assessment?
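A rough numerical version of that assessment, assuming H0 ≈ 70 km/s/Mpc and a present-day (comoving) distance of ~32 Gly for GN-z11. Both numbers are round illustrative values, not catalogue figures:

```python
# Rough check of the "2.24 light-years per year" figure via the Hubble law
# v = H0 * d, using assumed round values for H0 and the distance.
LY_PER_MPC = 3.262e6
C_KM_S = 299792.458

H0 = 70.0                       # km/s/Mpc (assumed)
d_mpc = 32e9 / LY_PER_MPC       # ~32 Gly converted to Mpc (assumed distance)
v_km_s = H0 * d_mpc
print(f"recession rate = {v_km_s / C_KM_S:.2f} c")   # ~2.3 c, i.e. ~2.3 ly/yr
```

With those inputs the Hubble law does land in the ballpark of the 2.24 ly/yr figure quoted above, which is the reading of GD's statement being tested here.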
I put together a little web-page today (link below) to try and explain my SR alternative. The last three paragraphs are copied below.
General Relativists are comfortable with the idea that the v(z) formula relating distance to Hubble's constant may be modified at will to fit the data.
While I do agree that local gravitational fields may indeed cause a redshift in stars, or gravitational lensing in rare instances, I do not think there is good reason to believe there is any global cosmological effect that allows objects to have relative velocities greater than the speed of light.
There are two other features of this equation, if we actually allow for the possibility that the redshift is due to recession velocity. (1) that d(m) should be affected by aberration. If an object is traveling at relativistic speeds, the intensity will not be distributed uniformly over the sphere. (2) that the more distant parts of the universe may have actually been traveling away from us for longer than 14 billion years.
http://www.spoonfedrelativity.com/pages/intensity-and-redshift.php
JD: There are two other features of this equation, if we actually allow for the possibility that the redshift is due to recession velocity. (1) that d(m) should be affected by aberration. If an object is traveling at relativistic speeds, the intensity will not be distributed uniformly over the sphere. (2) that the more distant parts of the universe may have actually been traveling away from us for longer than 14 billion years.
#################################################
The first visible flaw in your reasoning comes from the Big Bang itself. It has been assigned a redshift of 1080, that is, 35 trillion light years away.
"This recombination event happened when the temperature was around 3000 K or when the universe was approximately 379,000 years old.[12]"
see https://en.wikipedia.org/wiki/Cosmic_microwave_background
The Big Bang itself is at infinite distance from us.
By the way, a model that only fits a region of z is not an explanatory (predictive) model. It is just an approximation, valid for a range. Not unlike what we had before (Hubble plus redshifts from the Doppler effect).
I think George Dishman will have something to say about your idea that faraway galaxies have been traveling longer because "in their proper time, they have been traveling longer due to time dilation". I don't think this makes sense even in SR.
#################################################
There is a theory out there that removes that attribution (redshift due to recession velocity). It uses a hyperspherical topology, shown below (HU_ViewPast_radianABC.jpg). Considering that the reason for the redshift is just the projection of a 4D k-vector onto the local hyperplane, one can easily derive an equation for the redshift that reads like this (overestimation_DZShort.jpg), where alpha is the cosmological angle.
On the other side of the equation are the distances.
Are the magnitude readings really correct? Is the farthest SN1a in the survey actually 36 Gly away? (I am going to cast aspersions on the SNe distance measurements by challenging your P1=P2 equation.)
The answer to this question lies on two items:
a) Gravitational Constant is inversely proportional to the 4D radius (gyrogravitation.jpg).
b) Luminosity is proportional to G^-3. (www.quora.com/How-the-Luminosity-of-a-Supernova-varies-with-gravitational-Constant-G)
Follow the link to realize that the readings are incorrect and no SN1a is beyond 13.57 Gly (this corresponds to H_0=72)
The resulting predictions are plotted below as vdistancesmall.jpg:
Notice that this theory doesn't use any parameters.
In addition, when the topology predicts Acoustic Imprints on the Galaxy distribution, it predicts them to be along the distance coordinate (not along the angular coordinates). When SDSS is probed for those acoustic imprints, one finds this density profile (ManyBangsNorthMASS.png). It indicates that the Universe didn't come out of a Big Bang. It came out of a Big Pop which was followed by 36 Bangs along the initial 3 thousand years expansion at the speed of light.
JD: Is that a correct assessment?
Not really.
JD: General Relativists, are comfortable with the idea that the v(z) formula may be modified at will to fit the data, relating the distance to Hubble's constant.
That is complete nonsense. Cosmological redshift in standard science is explained as a consequence of the expansion of the universe. It follows that the wavelength of whatever light we observe must have been stretched by the same factor as the distance between us and the source. GR then relates the form of the expansion to the age of the universe: for example, while radiation dominated the energy content, the scale factor was proportional to the square root of the age, but once matter dominated, the cube of the scale factor was proportional to the square of the age. There is no flexibility about that; the form comes from the equation of state for each component. The only adjustment that can be made is how much of each type of energy the universe contains.
https://en.wikipedia.org/wiki/Scale_factor_(cosmology)#Chronology
https://en.wikipedia.org/wiki/Friedmann_equations
https://en.wikipedia.org/wiki/Equation_of_state_(cosmology)#FLRW_equations_and_the_equation_of_state
George,
Don't think I will let you pass your authoritative nonsensical advice to my friend JD.
The stuff you just said is not better than what he said...:) in fact, it is even worse.
Well, I am waiting...
Marco
MP: There is a theory out there that removes that attribution (redshift is due to recession velocity).
I don't think your HU theory is unique in this aspect, Mr. Pereira.
You, yourself, presented a graph that shows "Friedmann Lemaitre Fitting", "George Half baked prediction" and "HU overestimated distances prediction"
All of these different models remove the attribution "redshift is due to recession velocity"
They all accept data reporting "distance" and "redshift" and then try to produce a prediction of the function
d(z) = ln(1+z) ; (George Dishman?)
d(z) = c T ln(1+z); (Johan Masreliez)
d(z) = 1 - cos(alpha) + sin(alpha), where alpha = Pi/4 - asin(1/(sqrt(2)(1+z))) (Marco Pereira)
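Purely as an illustration (mine, not from the thread), the three candidate d(z) forms above can be compared numerically. The normalizations here (distances in units of R_0, and cT = 1) are assumptions for the sketch, not the authors' exact conventions:

```python
import math

def d_dishman(z):
    # d(z) = ln(1+z)
    return math.log(1 + z)

def d_masreliez(z, cT=1.0):
    # d(z) = c T ln(1+z), with cT normalized to 1 here
    return cT * math.log(1 + z)

def d_pereira(z):
    # alpha = Pi/4 - asin(1/(sqrt(2)(1+z))), the cosmological angle
    alpha = math.pi / 4 - math.asin(1.0 / (math.sqrt(2) * (1 + z)))
    return 1 - math.cos(alpha) + math.sin(alpha)

for z in (0.0, 0.5, 1.0, 2.0):
    print(f"z={z}: ln-form={d_dishman(z):.4f}  HU-form={d_pereira(z):.4f}")
```

One visible difference: d_pereira(z) stays below 1 as z grows (alpha approaches Pi/4), consistent with the claim that nothing lies beyond R_0, while the logarithmic forms grow without bound.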
MP: I am going to cast aspersions on SNe distance measurements by challenging your P1=P2 equation
You can do that if you wish, but you've already implicitly acknowledged the P1=P2 equation by accepting the SDSS data giving you a distance vs. redshift chart. Because the distances calculated are based on an idea called "standard candles". The "standard candle" argument relies on having two objects that are similar enough and consistent enough that they can be identified as the same type of object, and can be relied on to produce the same light. Inasmuch as they can prove that these objects are all the same, then P1=P2.
MP: ....Gravitational Constant is inversely proportional to the 4D radius (gyrogravitation.jpg).
MP: Notice that this theory doesn't use any parameters.
I don't know what you mean by this. First of all, your theory introduces a new parameter... You have said "Luminosity is proportional to G^-3" and you have said "Gravitational Constant is inversely proportional to the 4D radius". The constant of proportionality is a parameter, and the rate of change of gravity over distance, and the rate of change of gravity over time, would also both be parameters.
In fact, even in a model which claims "Gravity is constant", you could still say "The rate of change of gravity over distance is zero" and "The rate of change of gravity over time is zero" So in some sense, these are still parameters; it just happens that the parameters are zero.
Also, if you accept the distance vs. redshift curves, you are already implicitly accepting any parameters they may have used in calculating those distances, and in calculating those redshifts. The telescopes don't report redshifts and distances. They report brightness (magnitude), and colors (spectra) and shapes. Unless you have a clear explanation of how they have calculated the distances from the magnitudes, and you have the magnitude data itself, you cannot claim to even know what parameters your model is using.
MP: Don't think I will let you pass your authoritative nonsensical advice to my friend JD. The stuff you just said is not better than what he said
What he said was "General Relativists, are comfortable with the idea that the v(z) formula may be modified at will .." which simply isn't true, GR mandates the form of the equations in quite a straightforward manner. As you say, there are some parameters involved, the energy densities of each component of the contents, but given their respective equations of state, the evolution of the scale factor and hence redshift follow directly, there is no scope to modify the formula "at will". It is not "advice", simply a statement of fact about conventional cosmology.
JD: d(z) = ln(1+z) ; (George Dishman?)
The formula I gave in another thread based on Marco's model of a hypersphere expanding at constant rate dr/dt=c with present radius R0 is:
D0 = R0 ln(1+z)
where D0 is the present distance. If you think the formula is incorrect for that model, by all means give your alternative and show how you would derive it.
https://www.researchgate.net/post/Is_it_a_known_fact_that_the_Universe_resonated_10_times_at_the_Big_Bang/5
GD: Cosmological redshift in standard science is explained as a consequence of the expansion of the universe.
I was thinking about trying to name the competing models we have here.
I could call my model the "Modified Milne Model" or MMM or the "Suppressed Big Bang" model (SBB). In this model, we should consider the possibility that distant galaxies are moving away from each other, largely according to the relativistic redshift equation. The key difference between MM and MMM is that in the primordial universe, particles may bounce back and forth millions or billions of times with relativistic collisions, before settling into the "Hubble Flow". This would make it so that the local universe was much younger than the universe as a whole.
Then, we could refer to what George Dishman is calling "standard science" or SS. I have also called this the Friedmann, Lemaitre, Robertson, Walker (FLRW) metric, the Lambda CDM (LCDM) metric, the Standard Cosmological Model (or SCM), or the "Interesting to Peebles" model. All of these explain cosmological redshift as a consequence of the expansion of the universe. (I refer to the "Interesting to Peebles" model because he wrote a book, "Principles of Physical Cosmology", wherein he dismisses the Milne Model as "not very interesting".)
GD: . It follows that the wavelength of whatever light we observe must have been stretched by the same factor as the distance between us and the source.
It is interesting that you say this, because any time that I have said anything about "stretching space" on PhysicsForums, this usually causes my posts to be deleted, and the moderators to threaten to ban me.
MP: Don't think I will let you pass your authoritative nonsensical advice to my friend JD.
Haha. Thanks for trying to protect my impressionable naive mind. But I don't think what George is saying here is necessarily "nonsense". Even if the words are ambiguous, there may be some sensible meaning which is correct.
GD: while radiation dominated the energy content, the scale factor was proportional to the square root of the age,
Expressed as an equation, then this would be
(1) a = k_1 T^(1/2)
and if we take the derivative with respect to time,
da/dT = 1/2 k_1 T^(-1/2) = a/(2T)
GD: once matter dominated the cube of the scale factor was proportional to the square of the age.
Expressed as an equation, then this would be
(2) a = k_2 T^(2/3)
and if we take the derivative with respect to time,
da/dT = 2/3 k_2 T^(-1/3) = (2a)/(3T)
Additionally, if you look at
https://en.wikipedia.org/wiki/Hubble%27s_law#Hubble_time
You find an equation equivalent to
da/dT = aH_0 = a/T
So it seems we have three different epochs in your model, where (da/dT)/(a/T) is either 1/2, 2/3, or 1.
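Those three ratios can be checked numerically. Here is a quick sketch (my illustration, not from the thread) using a centered finite difference; for a power law a = k T^p the constant k cancels and the ratio comes out exactly p:

```python
def epoch_ratio(p, T=2.0, h=1e-6):
    # (da/dT)/(a/T) for a = T**p; the proportionality constant cancels
    a = lambda t: t ** p
    dadT = (a(T + h) - a(T - h)) / (2 * h)  # centered difference
    return dadT / (a(T) / T)

# radiation era, matter era, and a coasting a ∝ T universe
print(epoch_ratio(0.5), epoch_ratio(2 / 3), epoch_ratio(1.0))
```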
While I don't think this is completely meaningless, I would characterize it as either misleading, or inconsistent with my understanding of the idea of "cosmological inflation". From my understanding of cosmological inflation, the early universe supposedly expanded much faster than the speed of light, which should give us a model where da/dT started very high and descended, rather than starting low and increasing.
GD: "GR mandates the form of the equations in quite a straightforward manner. As you say, there are some parameters involved,"
That is enough for me to lay down this horrendous criticism.. These parameters represent physical constructs (Dark Energy included)
It is not acceptable to introduce physics for which you have no proof when a simple revision of your distances is provided.
I know that you were critical of my attempt to derive how luminosity varies with G. One of the models, using radiative pressure, gave exactly the proposed G-dependence.
I think that is a timid, humble hypothesis if you compare with the one ripping the Universe apart.
Not to mention the Banging seen along the radial direction. I cannot understand how you can see it and not immediately conclude that the Universe is a lightspeed 4D hypersphere.
Those Bangs are in the SDSS data and have been there for 10 or more years. They didn't bother to look for them because they have a 4D spacetime. I think it is appalling. It is appalling not that they didn't have the foresight to look for it but that I face any kind of barrier or disagreement when I try to explain my theory.
ps- after I wrote this last statement... it is appalling to me..but that said, if it wasn't for your bothersome objections I wouldn't have the better presentation of the theory today...:)
It is appalling... it bothers me....it is infuriating.... that said... it still makes the theory better and I cannot complain..
That said, I wouldn't mind some help
JD: While I don't think this is completely meaningless, I would characterize it as either misleading, or inconsistent with my my understanding of the idea of "cosmological inflation".
I gave the matter and radiation dominated eras as examples only, for the dark energy dominated era, the scale factor becomes exponential and that also covers inflation though with very different time constants. The combination however doesn't have an analytic solution.
I included the links because there's too much to cover in a reply of reasonable length but the maths is all there and reasonably straightforward. If there was any ambiguity in my words, those pages should resolve it.
MP: Don't think I will let you pass your authoritative nonsensical advice to my friend JD.
Haha. Thanks for trying to protect my impressionable naive mind. But I don't think what George is saying here is necessarily "nonsense".
JD, trust me...:) it is all nonsense..>:) from the first syllable to the last stretching of space...:)
I don't know how GD didn't change his ways after seeing the ManyBangsNorthMASS.jpg. This figure shows that there wasn't a single Big Bang... :) You can count at least 36 Bangs. That has to throw a monkey wrench into those GD neuronal gears...
That picture depicts density of galaxy clusters on the bump at 0.3 R_0. (directly from the SDSS dataset).
I don't know how he keeps going..:) It has to be something like cognitive decoupling...:) Somehow the image is not processed by that Brainiac GD brain...:)
If I show you 36 Bangs... you have to stop talking about a Single Bang..:) that is just common-sense.. Otherwise people might think you are missing a screw somewhere...:)
Cheers,
Okay, George, when I said "General Relativists, are comfortable with the idea that the v(z) formula may be modified at will ..", I misspoke.
What I meant was to compare what I'm calling the "SBB-Suppressed Big Bang" model--(still under development) with a variety of more "Standard Science" models with regard to Hubble's Law.
v(z) = H_0 d(m)
In the SBB model, the redshift is due almost entirely to recession velocity, so we use the inverse of the relativistic redshift equation to figure the velocity of distant objects.
v(z) = c((1+z)^2-1)/((1+z)^2+1)
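A small sketch (mine, for illustration) of this inversion, with a round-trip check against the standard relativistic Doppler factor 1+z = sqrt((1+v/c)/(1-v/c)):

```python
C = 299_792_458.0  # speed of light, m/s

def v_of_z(z):
    # invert 1+z = sqrt((1+v/c)/(1-v/c)) for v
    r = (1 + z) ** 2
    return C * (r - 1) / (r + 1)

def z_of_v(v):
    b = v / C
    return ((1 + b) / (1 - b)) ** 0.5 - 1

print(v_of_z(1.0) / C)      # z = 1 corresponds to v = 0.6 c
print(z_of_v(v_of_z(2.5)))  # round trip recovers z
```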
What I should have said, was that General Relativists do not feel this is the correct equation, so you "feel comfortable" trying out other formulas than the relativistic redshift equation to fit the data.
By contrast, General Relativists are comfortable with the notion that Hubble's Constant is actually a constant. While I feel comfortable just replacing Hubble's constant with 1/T for objects that have been traveling away from us at constant speed for time T, and with smaller numbers for objects that have been traveling away from us during the "suppressed" period of the SBB model.
GD: If you think the formula is incorrect for that model, by all means give your alternative and show how you would derive it.
Alright. Within the context of the assumptions you made... I think it would look something like this.
da/dt = a H
If we assume Hubble's constant is really a constant, we can separate the variables and integrate.
da/a = H dt
ln(a) = H t
Then take e to the power of both sides.
a = e^(Ht)
And divide a_now/a_then
a_now/a_then = e^(H(t_now - t_then))
Then, using the logic that redshift is caused by stretching of space, the wavelength quotient is equal to the scale quotient equals 1 plus the redshift.
1+z = e^(H(t_now - t_then))
Taking ln of both sides
ln(1+z) = H(t_now - t_then)
I am not super-comfortable with this calculation, because I have deep misgivings about the idea that Hubble's constant is actually a constant. But I think that is what you would have to assume in order to get a ln(1+z)
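The steps above can be checked numerically. A short sketch (mine, with an arbitrary illustrative value of H in inverse time units):

```python
import math

H = 0.07  # arbitrary illustrative value, inverse time units

a = lambda t: math.exp(H * t)  # the solution a = e^(Ht) derived above

# da/dt should equal a*H (centered finite difference)
t, h = 3.0, 1e-6
dadt = (a(t + h) - a(t - h)) / (2 * h)
assert abs(dadt - a(t) * H) < 1e-6

# 1+z = a_now/a_then, so ln(1+z) = H*(t_now - t_then)
t_then, t_now = 2.0, 9.0
z = a(t_now) / a(t_then) - 1
print(math.log(1 + z), H * (t_now - t_then))  # both ≈ 0.49
```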
GD: If you think the formula is incorrect for that model, by all means give your alternative and show how you would derive it.
JD: ln(1+z) = H(t_now - t_then)
You've got the same logarithmic relationship as I found for Marco's model (with slightly different constants, probably equivalent). I think that confirms what he should be predicting.
However, that is far from standard cosmology.
JD: I am not super-comfortable with this calculation, because I have deep misgivings about the idea that Hubble's constant is actually a constant.
In standard cosmology, it isn't. As you found earlier, in general it is proportional to 1/t; however, the effect of dark energy produces exponential expansion, so in the far future it will fall not to zero but to a constant. That gives a "de Sitter universe", see the link.
https://en.wikipedia.org/wiki/De_Sitter_universe
MP: I don't know how GD didn't change his ways after seeing the ManyBangsNorthMASS.jpg. This figure show that there wasn't a single Big Bang... :) You can count at least 36 Bangs.
You've previously described your plot as being based on two-point correlation which is the right way to go about analysing the data. However, since in your model, matter is constrained to the surface of the hypersphere, this plot would pick out circular features across the surface of the sphere. Multiple "bangs" would result in multiple concentric and isolated hyperspheres. Your plot is perpendicular to your description.
Of more concern though is how clean the plot is, the SDSS data isn't like that at all, I don't know what you are picking out but it is nothing astrophysical, that's certain. It might be an instrument artefact but more likely it looks like a bug in your software.
####################################################
You should ask how clean the SDSS data manipulation is.
My dirty fingers are all over my github repository, which I have made available to everyone from the start.
https://github.com/ny2292000/TheHypergeometricalUniverse
This shows how self-deluded you are - no matter what, you keep trying to believe I am wrong..:)
This is not a two-point correlation. This is a map. When I create a map, I call it a Map. When I do a 2-point correlation, I name it as such.
###################################################
GD: You've previously described your plot as being based on two-point correlation which is the right way to go about analysing the data.
This is the authoritative nonsense I mentioned, JD...:)
That kind of reasoning is what prevented all of SDSS and the community from finding these waves. They believe that (due to the Universe being a 4D spacetime, of course!!!) the RIGHT way to go about analysing data is to create angular 2-point autocorrelations. They are so indoctrinated (you too) that you cannot fathom that there is a 4th spatial dimension out there and we are traveling at the speed of light on a hypersurface.
This is the difference between conformism, the co-option of a bad theory, and discovery.
I was really wondering why you didn't say something. I deem you a very intelligent person. Now it is clear... self-delusion. Your powerful brain is capable of fooling your own self...:)
###################################################
This is a map that is created by keeping the cosmological angle constant (Hubble Flow). It is based on the idea that after billions of years traveling, the galaxy center of mass will be relaxed and will only travel radially.
###################################################
I only calculate their cosmological angle in the past, consider them to be moving in the Hubble flow (their Fabric of Space is relaxed) and project them into the present.
Once I have them in the present, I just plot them in a 3D Scatter plot. The one you see is distance (called alpha because distance matches the numerical value of the cosmological angle on a normalized hypersphere) versus Declination. All the RA contributions were summed up.
###################################################
Since this is a map and not a correlation across angular space (going through many cycles, I suppose), there is no reason why data would appear more than once. They certainly wouldn't appear as density waves (along the distance coordinate).
###################################################
My two-point correlation is shown below. It doesn't have 36 Bangs... I couldn't make it show them if I tried.
###################################################
Now that you understand the words that come out of my mouth, let me know what you think. Please review your nonsensical analysis, since now it is clear that I am only summing up RA (or DEC - the plot looks the same) contributions...:)
Since this is a momentous moment, when someone who espouses the Standard Cosmological Model understands that the thing is nonsense, I will add an extra figure to celebrate...:)
ps- sorry if I overdid the nonsense... it was just to keep up with the previous theme - that Inflation, Dark Energy, Expansion, and GR are nonsensical.
https://www.youtube.com/watch?v=YfxqMsnAinE
I was doing my best to follow what George Dishman had done here:
https://www.researchgate.net/post/Is_it_a_known_fact_that_the_Universe_resonated_10_times_at_the_Big_Bang/5
...Where he derived
GD: D0 = R0 ln(1+z)
and came up with
JD: ln(1+z) = H(t_now - t_then)
I actually have three concerns about this result.
(1) It was derived from the assumption that Hubble's constant is actually a constant, and I don't believe it to be constant, except insofar as 1/14 billion years, 1/14.001 billion years, and 1/13.999 billion years are all really close to each other, so it doesn't change much over a million years.
(2) GD: You've got the same logarithmic relationship as I found for Marco's model (with slightly different constants, probably equivalent). I think that confirms what he should be predicting.
Before I would claim that this is Marco's model, I would want confirmation from him that his model is consistent with such equations as
(da/dt)/a = H_0
and
1+z = a_0/a_z
(3) JD: ln(1+z) = H(t_now - t_then)
What I have derived here is unitless on the left, and is of the form (distance/distance). So if we want to assign a physical meaning to this quantity, it is "scale now minus scale then".
What physical meaning does this quantity have? If I were to imagine an analogy, I would take two model train sets. We'll have "sunshine" brand trains at 1:160 scale and "silver lining" brand trains at 1:120 scale. If I were to subtract the larger from the smaller scale, I would get 0.00208 meter/meter.
Now... what physical significance can you find in subtracting the scale of one train set from the scale of another train set? I don't think there is any. In fact, I would probably make the "apples-to-oranges" argument:
If you have a number of apples, you cannot subtract a number of oranges, and end up with a physically meaningful result.
Similarly, even though both the "sunshine brand" and "silver lining brand" train scales are unitless according to a naive physics, "silver lining meters per actual meter" and "sunshine meters per actual meter" are actually completely different things, and have no physical meaning when subtracted.
=============
GD: In standard cosmology, it isn't. As you found earlier, in general it is proportional to 1/t however the effect of dark energy produces exponential expansion so in the far future it will fall not to zero but to a constant. That gives a "de Sitter universe", see the link.
I have deep misgivings that Hubble's constant is anything other than 1/T. If Hubble's constant is anything other than 1/T, then space is stretching, and we have a changing scale factor over time. If Hubble's constant is exactly equal to 1/T, then we don't have to worry about space stretching. We can just identify which objects are part of the Hubble flow, and which objects aren't.
(2) GD: You've got the same logarithmic relationship as I found for Marco's model (with slightly different constants, probably equivalent). I think that confirms what he should be predicting.
Before I would claim that this is Marco's model, I would want confirmation from him that his model is consistent with such equations as
The first thing you have to understand is that I contest the distances. HU states that the bolometric distances are incorrect because SN1a are not all equivalent. Farther ones are weaker and thus the bolometric distance measurement would overestimate them. Since I predict different distances and contest the way people measure those distances, no other model can be like mine. It is not just a matter of which form d(z) has.
The answer to the second part: I use trigonometry. There is no logarithm anywhere in my derivation. In addition, the hyperspheres are not accelerating radially. They travel at the speed of light and that is it.
#######################################
Velocity plays no role in the redshift and thus it is irrelevant in HU. In addition, which velocity are you speaking of? HU proposes a lightspeed-moving reference frame.
The only velocity that is relevant in cosmology is related to the Hubble Flow. That velocity relationship uses a constant Hubble constant of 72 km/s/Mpc. There is no need to expand space or calculate da(t)/dt.
That said, I calculate it to recompose the mirage Current Cosmology sees when using the wrong distances. I am being positivistic (using the word wrong) on purpose. The overwhelming evidence points in my favor.
MP: The only velocity that is relevant in cosmology is related to the Hubble Flow. That velocity relationship uses a constant Hubble constant of 72. There is no need to expand space or calculate a da(t)/dt.
So, Marco, within the context of the HU model, what role does Hubble's constant play? Is it one of the parameters of your model, or have you thrown it out completely?
No. H0 = c/R0, with R0 = 13.58 Gly.
It provides the receding velocity on the current hypersphere, that is, in the absolute time frame. That is rarely used, unless you are planning a trip out there..:)
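For what it's worth, H0 = c/R0 with the quoted R0 = 13.58 Gly does come out near the 72 km/s/Mpc cited earlier in the thread. A quick unit conversion (my sketch):

```python
C_KM_S = 299_792.458   # speed of light in km/s
LY_PER_PC = 3.26156    # light-years per parsec
R0_GLY = 13.58         # the quoted 4D radius in Gly

r0_mpc = R0_GLY * 1e9 / (LY_PER_PC * 1e6)  # convert Gly to Mpc
H0 = C_KM_S / r0_mpc                        # km/s per Mpc
print(round(H0, 1))  # ≈ 72.0
```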
The absolute time runs radially on both panels below (remember that R = c*Phi, that is, the cosmological time times the speed of expansion of the Universe).
The proper times are the local frames, which can be twisted (a Fabric of Space twist corresponds to absolute velocity).
Since I can draw those figures, I can talk about what is happening in the current epoch.
MP: Ho=c/R0 with R0=13.58 Gly
So, Marco. Within the context of the HU model, is R0 a constant, or is it a function of time?
Is c a constant?
MP: remember that R=c*Phi
In the context of the HU model, what is the relationship of H0 with R?
R0 is the current 4D radius, so it is a constant.
c is constant in the proper reference frame. It slows down as the photons come closer to us in the absolute reference frame. Look at the picture below. The line AC is the line for the 4D k-vector. The projection of the k-vector on the local hyperplane clearly shows the redshifting. The absolute time goes along the radial direction, so light slows down in the absolute frame of reference.
The issue (which GD brought up) is that when we make velocity measurements we are in our hyperplane; we measure wavelength, measure period, and divide one by the other to get velocity. The proper time at point C is also radial (at that point). That proper time has to be projected onto our current proper time to get the period. If you project both wavelength and period, the speed of light remains unchanged. So, the speed of light is constant in the proper reference frame. It slows down in the absolute reference frame (as one should expect).
The c you use in the Hubble equation is the standard value.
H0 only refers to R0: it gives the current 4D radius. From there on it is just trigonometry. Once you know R0, which you make equal to 1 anyway, you can position your Supernovae and double-check that they are correctly predicted by the law of sines. Once you know d(z), you can make maps of all galaxies and discover things.
Things are very, very simple if the SN1a distances are corrected.
What is difficult is to create a theory where those distances need to be corrected.
MP: Ho=c/R0 with R0=13.58 Gly
MP: (remember that R=c*Phi, that is, Cosmological time times the expansion of the Universe.)
MP: There is no need to expand space or calculate a da(t)/dt.
Even if, for whatever reason, you find no need to calculate
(dR/dPhi)/R
It could be calculated, if you wanted to, right? And if you were to calculate this, would you get a function of Phi?
1/Phi
or would you get some constant?
H0
dR/dPhi = c, or 1 if you choose dimensionless time. R is expanding at velocity c... no ifs or buts...
(dR/dPhi)/R= c/R
which would be the H0 for that epoch. H_0 varies with epoch due to its geometric definition c/R
Why didn't you plug in R = c*Phi?
(dR/dPhi)/R= c/R = c/(c Phi) = 1/Phi
JD: I was doing my best to follow what George Dishman had done here: ...Where he derived
GD: D0 = R0 ln(1+z)
The original notes were somewhat terse as the detail was in the Wikipedia link, unfortunately I've just noticed that was in a post on the following page. It's linked below for reference.
MP: the hyperspheres are not accelerating radially. They travel at the speed of light and that is it.
MP: dR/dPhi = c or 1 depending upon if you choose dimensionalized time or not. R is expanding at velocity c... not ifs or buts...
I understand that to mean that the radius of the hypersphere increases at rate dr/dt=c.
MP: If you project both wavelength and period, the speed of light remains unchanged. So, the speed of light is constant in the proper reference frame.
I understand that to mean that light travels across the surface of the hypersphere at rate dD/dt=c where D is the proper distance.
For comoving galaxies, their location on the hypersphere does not change. Two such galaxies will therefore subtend a constant angle at the centre. If the angle is ϴ then using the convention that subscript 0 means the present and subscript z is for a source seen at redshift z, we have:
ϴ = D0/R0 = Dz/Rz
and
1+z = D0/Dz = R0/Rz
From Marco's statements the radius in general increases at dr/dt=c.
If the angle subtended at the centre between the light as it travels and the source galaxy is ϕ then dϕ/dt=c/r.
That differential equation can be solved or we can note that proper distances are measured round the circumference of the hypersphere at any epoch, so when the light starts out the proper distance between the galaxies is Dz but by the time it arrives that has increased to D0. Since the radius is increasing at rate c, that linear distance is increasing at rate ϴc. That allows us to use the standard "Ant on a Rubber Rope" calculation I linked. On that page the initial distance is (confusingly) called "c" corresponding to our Dz while the speed of the ant is "α" corresponding to the speed of light c in our analysis. The speed of the end of the rope is "v" which is our ϴc. The result for the time taken is "T = (c/v)(e^(v/α) - 1)" on Wikipedia or
T = Dz/(ϴc) (e^ϴ - 1)
in our notation. The time is also (R0-Rz)/c of course and a little bit of algebra then gives my result (unless I slipped up) of
D0 = R0 ln(1+z)
Note that doesn't use H0 at all but other approaches would be equivalent.
I'm currently playing with MATLAB so I've plotted the light path in two ways, the first is a simple iteration and the second is the analytic solution. An image of the code is attached as well as one of the plots, they are identical.
https://en.wikipedia.org/wiki/Ant_on_a_rubber_rope
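In the same spirit as the MATLAB check mentioned above, here is a Python stand-in (my sketch, not the attached code): integrate dϕ/dt = c/r with r = ct and confirm the analytic result D0 = R0 ln(1+z):

```python
import math

c = 1.0                  # work in units where c = 1
z = 1.5
t0 = 1.0                 # "now": R0 = c*t0
tz = t0 / (1 + z)        # emission time, since 1+z = R0/Rz = t0/tz

# midpoint-rule integration of dphi/dt = c/(c*t) = 1/t from tz to t0
n = 200_000
dt = (t0 - tz) / n
phi = sum(dt / (tz + (i + 0.5) * dt) for i in range(n))

D0_numeric = phi * (c * t0)               # proper distance now, D0 = theta*R0
D0_analytic = (c * t0) * math.log(1 + z)  # the claimed R0 ln(1+z)
print(D0_numeric, D0_analytic)
```

With t0 normalized to 1, the accumulated angle ϴ = ln(1+z) ≈ 0.916 for z = 1.5, matching the ant-on-a-rubber-rope result.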
GD - do you still have questions about why the SN1a Luminosity is proportional to G^(-3), and thus distances are overestimated by G^(1.5)?
Are you convinced that my data manipulation was clean and that the Universe saw 36 Bangs instead of just a Big One?..:)
Did you have fun running the scripts in the repository?...:)
######################################
GD - I didn't see your answer at first. Let's see what you said.
######################################
JD: I was doing my best to follow what George Dishman had done here: ...Where he derived
GD: D0 = R0 ln(1+z)
The original notes were somewhat terse as the detail was in the Wikipedia link, unfortunately I've just noticed that was in a post on the following page. It's linked below for reference.
MP: I plotted this answer and it doesn't match the SN1a observations.
MP: the hyperspheres are not accelerating radially. They travel at the speed of light and that is it.
MP: dR/dPhi = c or 1 depending upon if you choose dimensionalized time or not. R is expanding at velocity c... not ifs or buts...
I understand that to mean that the radius of the hypersphere increases at rate dr/dt=c.
MP: Yes. as long as you are stating R. I use r for the proper reference frame.
MP: If you project both wavelength and period, the speed of light remains unchanged. So, the speed of light is constant in the proper reference frame.
I understand that to mean that light travels across the surface of the hypersphere at rate dD/dt=c where D is the proper distance.
MP: HU has an absolute frame of reference. Don't use proper distances unless you want to make mistakes. The proper frame of reference tilts with where you are on the surface of the hypersphere. After billions of years traveling, Galaxies' Fabric of Space is deemed to be relaxed and thus Galaxies only travel radially.
For comoving galaxies, their location on the hypersphere does not change.
MP: Their Cosmological Angle doesn't change. Their location changes with Hubble Flow
Two such galaxies will therefore subtend a constant angle at the centre.
MP: Why consider different Galaxies and not us and the other galaxy. Only one galaxy can be at any given celestial direction and any given Cosmological angle. That is unique.
If the angle is ϴ then using the convention that subscript 0 means the present and subscript z is for a source seen at redshift z, we have:
ϴ = D0/R0 = Dz/Rz
and
1+z = D0/Dz = R0/Rz
MP: I didn't say that this is true. Incorrect assumption. Absolutely no justification. It is implicit in it that the redshift is due to stretching space. If you start with stretching space, no wonder you will arrive at the conclusion that space has been stretched. Compare that assumption with my assumption that the k-vector projects according to a cosine projection (a.k.a. momentum conservation).
From Marco's statements the radius in general increases at dr/dt=c.
If the angle subtended at the centre between the light as it travels and the source galaxy is ϕ then dϕ/dt=c/r.
That differential equation can be solved, or we can note that proper distances are measured round the circumference of the hypersphere at any epoch, so when the light starts out the proper distance between the galaxies is Dz but by the time it arrives that has increased to D0. Since the radius is increasing at rate c, that linear distance is increasing at rate ϴc. That allows us to use the standard "Ant on a Rubber Rope" calculation I linked. On that page the initial distance is (confusingly) called "c", corresponding to our Dz, while the speed of the ant is "α", corresponding to the speed of light c in our analysis. The speed of the end of the rope is "v", which is our ϴc. The result for the time taken is T = (c/v)(e^(v/α) − 1) on Wikipedia, or
T = Dz/(ϴc) · (e^ϴ − 1)
in our notation. The time is also (R0-Rz)/c of course and a little bit of algebra then gives my result (unless I slipped up) of
D0 = R0 ln(1+z)
Note that this doesn't use H0 at all but other approaches would be equivalent.
I'm currently playing with MATLAB so I've plotted the light path in two ways, the first is a simple iteration and the second is the analytic solution. An image of the code is attached as well as one of the plots, they are identical.
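A quick numeric sanity check on the algebra (my own sketch, with illustrative numbers): the rubber-rope travel time and the (R0 − Rz)/c travel time coincide exactly when ϴ = ln(1+z):

```python
import math

# Illustrative values (not fitted): radius now R0 and a redshift-1 source.
R0, z, c = 14.0e9, 1.0, 1.0      # light-years and years, so c = 1
Rz = R0 / (1.0 + z)              # from 1+z = R0/Rz
theta = math.log(1.0 + z)        # from D0 = R0 ln(1+z), theta = D0/R0
Dz = theta * Rz                  # proper distance (arc length) at emission

# Ant-on-a-rubber-rope travel time with initial distance Dz,
# ant speed alpha = c and rope-end speed v = theta*c:
T_rope = Dz / (theta * c) * (math.exp(theta) - 1.0)

# The radius grows at c, so the trip also takes (R0 - Rz)/c:
T_radial = (R0 - Rz) / c

print(T_rope, T_radial)          # the two expressions agree
```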
MP: Did you plot your result against the Supernova Survey data? I did; this doesn't work.
MP: Your script is also incorrect. As light travels along AC, the angle with the normal changes and the tangential velocity of light diminishes. So light doesn't travel at 45 degrees all the time. It will leave the emitter at 45 degrees. Momentum conservation will require that to change, as it impacts its momentum into polarization, and that polarization creates the EM that will propagate in the next de Broglie cycle. Light actually slows down in the Absolute Frame of Reference. As you so smartly pointed out, the speed of light doesn't change in the proper reference frame. That is because both period and wavelength have to be projected into the observer's proper frame. When you project both (they are separated by a 90-degree rotation and have to be projected onto axes also separated by 90 degrees), the end result is the same speed of light.
MP: If you project both wavelength and period, the speed of light remains unchanged. So, the speed of light is constant in the proper reference frame.
My mind could not parse meaning out of this text.
GD: I understand that to mean that light travels across the surface of the hypersphere at rate dD/dt=c where D is the proper distance.
Well, this seems a good distinction. What does Marco mean by "proper reference frame"?
If I take two galaxies in the Hubble flow, in the context of Marco's model, is the "proper distance" between them increasing over time (Recession Velocity Model), or is the "proper distance" between them remaining the same, over time? (Stretching Space Model)
Second. If I take two galaxies in the Hubble flow, in the context of Marco's model, is the speed of light constant, in such a way that it always takes the same amount of proper time for light to get between the two galaxies? (Stretching Space Model) Or does the amount of proper time it takes for the light to get from one galaxy to the other increase as they become fainter and fainter to each other? (Recession Velocity Model)
In short, when Marco says the speed of light is constant in the "proper reference frame" does he mean the proper reference frame where all of the objects in the hubble flow are stationary (Stretching Space Model), or does he mean the proper reference frame where objects in the hubble flow are moving apart? (Recession Velocity Model)
MP: do you still have questions about why the SN1a Luminosity is proportional to G^(-3) and thus Distances are overestimated by G^(1.5)?
I think your attempt to extrapolate the luminosity equation is like using a formula for how bright a candle shines based on the size of the wick to predict how big a bang you will get if you kick a bucket of nitroglycerine, based on the diameter of the bucket. Don't try that at home ;-)
MP: Are you convinced that my data manipulation was clean and that the Universe saw 36 Bangs instead of just a Big One?..:)
I am convinced it is wrong because it doesn't identify the Sloan Great Wall at z=0.78. and the layers you see are far too clean to represent anything astrophysical, you are either seeing some sort of instrumentation artefact or possibly something like a rounding limit in the maths.
MP: Did you have fun running the scripts in the repository?...:)
I'm having enough fun debugging my own software without fixing yours for you. Did you try plotting the result of the code snippet I added to my last reply? You can have the source if you have MATLAB, or it should be trivial to code it in whatever package you prefer.
MP: I plotted this answer and it doesn't match the SN1a observations.
My formula is derived from your descriptions and gives the distance only, it says nothing about luminosities so you can't check it that way.
MP: Yes. as long as you are stating R. I use r for the proper reference frame.
I use lower case letters as variables and upper case for specific values, r is the radius and varies from Rz to R0.
Be careful with the word "proper". Proper distances are measured along a surface of uniform cosmological age so along the surface of the hypersphere at a single epoch and radius. Because the surface is curved, there is no "reference frame" in the SR sense because that would be a flat surface and you can't map that onto a sphere without distortion. Similarly, the Hubble Law etc. deal with comoving galaxies which in your model move exactly radially.
MP: Galaxies' Fabric of Space is deemed to be relaxed and thus Galaxies only travel radially. For comoving galaxies, their location on the hypersphere does not change.
Yes, that's exactly what I have used.
GD: Two such galaxies will therefore subtend a constant angle at the centre.
MP: Why consider different Galaxies and not us and the other galaxy.
In the plot, we are at the outer end of the red line but the equation is valid in general, not just for us.
GD: 1+z = D0/Dz = R0/Rz
MP: I didn't say that this is true. Incorrect assumption. Absolutely no justification. It is implicit in it that the redshift is due to stretching space.
No, it makes no such assumption (and in fact that model is deprecated by those familiar with GR). What it says is that if the distance between two galaxies increases by some amount, and there are two light flashes passing through those galaxies, the distance between the flashes must increase by the same amount (since they are within the galaxies). Take a Fourier analysis of a pulsed light source and it is obvious that the wavelength of the light must be altered in the same ratio as the distance between the pulses.
MP: As light travels along AC, the angle with the normal changes and the tangential velocity of light diminishes. So light doesn't travel at 45 degrees all the time.
We discussed that at some length and you finally thanked me for solving that aspect of your model, I don't know why you've gone back to your previous view, it's incorrect. Light always moves at the speed of light locally which is always 45 degrees to the radial direction (and to a local tangent).
GD: you are either seeing some sort of instrumentation artefact or possibly something like a rounding limit in the maths.
Well said. That's almost exactly what I was thinking. (I'm very curious about it, but my hacking skills aren't up to the task of navigating GitHub)
By the way, I put a list of questions back on page 5 of this thread. This is more relevant to the original question "Why would anyone think that Strict Relativity is valid over Cosmological Distances?" It's only relevant to cosmologies where the Hubble Flow is explained by recession velocities.
But here is a link to the answers to questions 1a and 1b, as I calculated them.
http://www.spoonfedrelativity.com/pages/AberrationQuestions.php
Did you forget the link Jonathan?
I think I replied to your question some time later or maybe in another thread, the speed of the solar system relative to the CMB is 368km/s or 0.00123c so your question is moot really.
As regards the original rhetorical "Why would anyone think that Strict Relativity is valid over Cosmological Distances?", it isn't clear if that refers to SR or GR and the first two paragraphs of the detail are complete rubbish so they don't help. If the question was about SR, the answer is simply "nobody does" though of course there are bound to be a few exceptions to that generalisation ;-)
Anyway, you said you were trying to follow my previous derivation of the redshift versus distance relationship so I posted my method; do you agree with that or see any problems in it? It's based on my specific understanding of Marco's description of course, which might be flawed, but we have to leave that aspect to him if I've misinterpreted it. Do you agree with the maths though?
Regarding your earlier question:
JD: Second. If I take two galaxies in the Hubble flow, in the context of Marco's model, is the speed of light constant, in such a way that it always takes the same amount of proper time for light to get between the two galaxies? (Stretching Space Model) Or does the amount of proper time it takes for the light to get from one galaxy to the other increase as they become fainter and fainter to each other? (Recession Velocity Model)
The definition of proper distance in cosmology is well defined:
https://en.wikipedia.org/wiki/Distance_measures_(cosmology)#Proper_distance
In Marco's model, that corresponds to the length of an arc across the surface of the hypersphere.
The speed of light is always c locally, which means that it takes longer for light to get between the galaxies as the distances within the universe expand.
If the angle subtended at the centre between the light as it travels and the source galaxy is ϕ then dϕ/dt=c/r.
MP: This is wrong because of the assumption of always 45 degrees.
https://en.wikipedia.org/wiki/Central_angle#Formulas
ϕ = L/r
dϕ/dt = 1/r dL/dt
Projected onto the surface, light moves at:
dL/dt = c
hence
dϕ/dt = c/r
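For completeness (my own added step, using the same symbols as above), the differential equation just stated has a simple closed form:

```latex
\frac{d\phi}{dt} = \frac{c}{r}, \qquad r(t) = R_z + ct
\quad\Longrightarrow\quad
\phi(T) = \int_0^T \frac{c\,dt}{R_z + ct}
        = \ln\!\frac{R_z + cT}{R_z}.
```

With cT = R0 − Rz this gives ϕ = ln(R0/Rz) = ln(1+z), hence D0 = R0·ϕ = R0 ln(1+z), matching the result earlier in the thread.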
P.S. You edited your post while I was replying, you are forgetting that we measure light moving across the surface at c, in your 4D space, it would move at c√2.
GD: Did you forget the link Jonathan?
Yes I did. Added it above, and for good measure, below.
I'll come back to answer your other questions later.
http://www.spoonfedrelativity.com/pages/AberrationQuestions.php
I've added some links as a comment on the page, they should allow you to confirm your analysis.
GD: P.S. You edited your post while I was replying, you are forgetting that we measure light moving across the surface at c, in your 4D space, it would move at c√2.
This is only valid for short distances. Over longer distances the line-of-sight path is not at 45 degrees to the radial. This means that the velocity of light is not constant in the absolute frame of reference (which HU has) but is constant in the proper frame of reference.
MP: If you project both wavelength and period, the speed of light remains unchanged. So, the speed of light is constant in the proper reference frame.
My mind could not parse meaning out of this text.
Attached is the plot showing that the projections of x' to x and tau' to tau have the same angle (45 − theta). Since we are projecting both period and wavelength by the same angle, the ratio doesn't change. Hence, the velocity of light is the same in proper reference frames.
For the absolute reference frame you only project x'. As the Universe moves outwards, if the wavelength is longer, it will show as a slower speed of light. This is the same as assuming that the frequency didn't slow down... You don't do the foolish thing of measuring both frequency and wavelength...:)
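As best I can read MP's claim, it is simply that a common scale factor cancels in the ratio. A toy numeric sketch (the cos(45° − θ) rule and all values here are taken from his description, purely for illustration, not from any actual HU code):

```python
import math

# Toy illustration: if both the wavelength and the period are projected
# by the same factor cos(45deg - theta), their ratio -- the speed of
# light measured in the proper frame -- never changes.
c = 299792458.0
wavelength = 500e-9          # metres (arbitrary choice)
period = wavelength / c      # seconds, so wavelength/period = c exactly

for theta_deg in (0.0, 10.0, 30.0):
    f = math.cos(math.radians(45.0 - theta_deg))
    w_proj = wavelength * f  # projected wavelength
    p_proj = period * f      # projected period
    print(theta_deg, w_proj / p_proj)   # same ratio c every time

# Projecting only the wavelength (the "absolute frame" case described
# above) would instead give c*f, an apparently slower light speed.
```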
JD: if you want, I will hold your hand and get you through the trouble of github and anaconda...;)
Don't fear the Unknown...:)
ps- Don't ask GD about my model..:) He doesn't understand it...:)
By the way, who was the Genius that decided that the velocity of light is the observed wavelength divided by the observed period? That will always be preserved. It has to be the observed wavelength divided by the original period (which you know from absorption lines)!!!!!!!!!
GD: I think I replied to your question some time later or maybe in another thread, the speed of the solar system relative to the CMB is 368km/s or 0.00123c so your question is moot really.
And I failed to respond, at the time--I had just read your conversation with Dirk Van de Moortel regarding
Obviously, in the Special Relativity model, the CMB is receding at z~1101, so its speed is v/c = ((z+1)^2 − 1)/((z+1)^2 + 1) = 0.9999983501
That corresponds to about 7 rapidians, and the dipole anisotropy might be explained by 368 km/s. 368 km/s divided by 300,000 km/s is .0012c which is about .0012 rapidians.
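The figures above can be reproduced in a few lines; the identity atanh(β) = ln(1+z) for the relativistic Doppler formula is why the rapidity comes out near 7 (a sketch of my own, not anyone's official code):

```python
import math

z = 1101.0   # the redshift figure used above

# Relativistic Doppler recession speed for redshift z:
beta = ((z + 1.0)**2 - 1.0) / ((z + 1.0)**2 + 1.0)

# Rapidity ("rapidians" above) is atanh(beta), which for this formula
# reduces exactly to ln(1+z):
rapidity = math.atanh(beta)
print(beta, rapidity, math.log(1.0 + z))   # rapidity ~ 7.0

# The CMB dipole speed, for comparison (tiny next to ~7):
beta_dipole = 368.0 / 299792.458
print(math.atanh(beta_dipole))             # ~0.00123
```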
GD: If the question was about SR, the answer is simply "nobody does" though of course there are bound to be a few exceptions to that generalisation ;-)
We're very few in number. Unfortunately, A.E. Milne is long dead, and last I talked to Lewis Carroll Epstein was over a decade ago, and he was retired. I don't know if "nobody does" but I have noticed that I'm censored, if I talk about it on Physics Forums.
GD: Anyway, you said you were trying to follow my previous derivation of the redshift versus distance relationship so I posted my method; do you agree with that or see any problems in it?
I got this:
ln(1+z) = H(t_now - t_then)
It's unitless on the left, and has units of distance over distance on the right.
I felt that the right-hand-side was physically meaningless because it is a scale factor minus another scale factor. But two other things occurred to me.
1. I should have worked it out with the integration constant.
2. If you're looking out at distant astronomical objects, and comparing their angular diameter, then there could be situations where a simple difference in scale factor could be important. Perhaps with the integration constant, there might yet be meaning, within the context, of course, of a stretchy space universe.
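For what it's worth, here is one way to see why the right-hand side reads like a first-order stand-in: if the radius grows linearly (R = ct, so H = 1/t, consistent with Marco's dR/dt = c), then ln(1+z) = ln(t_now/t_then) exactly, while H0·(t_now − t_then) = 1 − t_then/t_now matches it only for small z. The times below are illustrative, my own choice:

```python
import math

t_now = 14.0e9       # years; illustrative, assuming R = c*t so H(t) = 1/t
for t_then in (13.9e9, 12.0e9, 7.0e9):
    z = t_now / t_then - 1.0                  # 1+z = R_now/R_then = t_now/t_then
    exact = math.log(1.0 + z)                 # ln(1+z)
    approx = (t_now - t_then) / t_now         # H0*(t_now - t_then) with H0 = 1/t_now
    print(f"z={z:.3f}  ln(1+z)={exact:.4f}  H0*dt={approx:.4f}")
# The two columns agree for small z and diverge for large z.
```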
GD: It's based on my specific understanding of Marco's description of course which might be flawed but we have to leave that aspect to him if I've misinterpreted it.
Well, from what MP said, his model is consistent with (dR/dPhi)/R = 1/Phi, where R is the radius of the universe, and Phi is the proper time. This is very similar to (da/dt)/a=1/t which would be the Hubble flow in a recession-velocity model.
However, I don't know if I could call it a recession velocity model, because he has posted a whole bunch of diagrams which make so little sense to me that I'm not sure what to ask to unravel it.
MP: (Misattributed: This was GD) The definition of proper distance in cosmology is well defined:
Proper distance roughly corresponds to where a distant object would be at a specific moment of cosmological time, which can change over time due to the expansion of the universe. Comoving distance factors out the expansion of the universe, which gives a distance that does not change in time due to the expansion of space (though this may change due to other, local factors, such as the motion of a galaxy within a cluster); the comoving distance is the proper distance at the present time.
That is surely not well-defined. Is it the simultaneous distance? Is it the distance to the image? Is it the distance as measured by meter-sticks? Do objects in the hubble flow stay at constant proper distance, or do they recede over time as the space expands? Does the distance between comoving objects in the hubble flow get closer, recede, or stay the same, with objects in the Hubble flow? Are the points at two ends of a rigid ruler comoving? Are distant galaxies comoving? I can't answer any of these questions based on that definition.
JD: Well, from what MP said, his model is consistent with (dR/dPhi)/R = 1/Phi, where R is the radius of the universe, and Phi is the proper time.
Phi is the Cosmological Time. I don't use proper time in Cosmology
I created a posting on Quora where I can embed figures and explain things better.
https://hupeerreview.quora.com/Conversation-with-JD
Feel free to ask questions there.
MP: I created a posting on Quora where I can embed figures and explain things better. https://hupeerreview.quora.com/Conversation-with-JD Feel free to ask questions there.
Good idea. I posted a response.
MP: Oops! This was GD: The definition of proper distance in cosmology is well defined: https://en.wikipedia.org/wiki/Distance_measures_(cosmology)#Proper_distance
JD: That is surely not well-defined. Is it the simultaneous distance? Is it the distance to the image? Is it the distance as measured by meter-sticks? Do objects in the hubble flow stay at constant proper distance, or do they recede over time as the space expands? Does the distance between comoving objects in the hubble flow get closer, recede, or stay the same, with objects in the Hubble flow? Are the points at two ends of a rigid ruler comoving? Are distant galaxies comoving? I can't answer any of these questions based on that definition.
I see now, the wikipedia article answered at least one of my questions.
The wikipedia article says in the previous paragraph, "The comoving distance between fundamental observers, i.e. observers that are both moving with the Hubble flow, does not change with time, as comoving distance accounts for the expansion of the universe."
Then it says "the comoving distance is the proper distance at the present time."
So I take it, from this, that we could label all of the galaxies in the Hubble Flow with coordinates according to the present epoch. Those labels represent their "comoving distance" and that comoving distance never changes, because they just represent, essentially the labels on the objects.
However, if we look at the galaxies according to their "proper distance" then it is appropriate to say the galaxies are actually moving apart, and they have a recession velocity. And in fact, the labels representing "comoving distance" are also spreading apart over time.
JD: That is surely not well-defined. Is it the simultaneous distance?
Of course. That is the only conclusion if I tell you that in Cosmology I use an ABSOLUTE reference frame. The distance used in Astronomical measurements is the distance within an epoch, not the line-of-sight distance. It is measured in meters.
JD:Is it the distance to the image?
Despite the fact that in 4D one can see a distance AC, the distance used in astronomy is AB. The reason is the breakdown of the 1/distance-squared rule of Luminosity decay. That breakdown is because of how the dilaton field decays. The dilaton field decays with the number of cycles and not the distance. The number of cycles between two epochs is the same no matter where you are in the epoch (inner hypersphere).
JD: Do objects in the hubble flow stay at constant proper distance, or do they recede over time as the space expands?
Don't ask me about proper distance, since it doesn't mean anything in Astronomy.
I interpret proper distance (proper reference frame) as anything measured on the local xyztau (the local reference frame reflects a torsion on the local Fabric of Space) as opposed to XYZPhi.
Objects in the Hubble flow have their Fabric of Space relaxed and will keep the cosmological angle alpha constant. This means that you know where they are in simultaneous time (Absolute time) on the outermost hypersphere (our current epoch).
They will recede within our hypersphere with the Hubble velocity.
JD: Does the distance between comoving objects in the hubble flow get closer, recede, or stay the same, with objects in the Hubble flow?
From what I said, since each object will keep its own cosmological angle and since the radius is increasing, the distance will increase as expected.
JD: Are the points at two ends of a rigid ruler comoving? Are distant galaxies comoving? I can't answer any of these questions based on that definition.
I am giving you an absolute reference frame. Anything you see there is what you see. There is no stretching of space nothing. All the magic is in changing the distances to the SN1a. Once they are changed, you can measure the distances to anything in the past with a ruler. You can do the same if you want to project them to the present.
The concept of comoving is overly complex and unnecessary for the figure below and HU. The figure contains only circles and angles. You can use trigonometry and answer any questions.
JD: I see now, the wikipedia article answered at least one of my questions.
The wikipedia article says in the previous paragraph, "The comoving distance between fundamental observers, i.e. observers that are both moving with the Hubble flow, does not change with time, as comoving distance accounts for the expansion of the universe."
Then it says "the comoving distance is the proper distance at the present time."
You cannot understand a new theory with the trappings of your old theory.
JD: So I take it, from this, that we could label all of the galaxies in the Hubble Flow with coordinates according to the present epoch.
Yes. That is what I did. I created the current map and investigated what was happening there.
JD: Those labels represent their "comoving distance" and that comoving distance never changes, because they just represent, essentially, the labels on the objects.
Comoving distances will change in your own model as the Universe continues to expand. That said, that is an irrelevant observation.
JD: However, if we look at the galaxies according to their "proper distance" then it is appropriate to say the galaxies are actually moving apart, and they have a recession velocity. And in fact, the labels representing "comoving distance" are also spreading apart over time.
Yes.
The DoubleCrossSections figure was missing from the quora posting. You could have stopped there and let me explain. Please see if that changes your qualms and answers all your questions.
MP: Since we are projecting both period and wavelength by the same angle, the ratio doesn't change.
We’re not doing any such “projecting” of period or wavelength by an angle. Maybe these words have meaning to you, but I’m not in your head.
MP: Lorentz transformation is a rotation, just in case you don't know. It uses hyperbolic functions in Minkowski space and just sines and cosines in Cartesian space.
MP: As the Universe moves outwards, if the wavelength is longer, it will show as a slower speed of light.
This just means that when measuring the speed of light for a redshifted beam, one should not use an observed period. One should use the period that the optical transition has (here and now), because that is the period it had when the photon was emitted.
If you measure both period and wavelength, light will never change velocity. That is built into the Lorentz Transforms, or into the Cosine projections I use here.
Thanks for correcting my grammar. Please, edit your comment in light of seeing the DoubleCrossSection.png file.
DONE
MP: Don't ask me about proper distance, since they don't mean anything in Astronomy.
Oops! I goofed. I keep putting down MP when I meant GD. Usually I fix it in time. I meant to ask GD more about what proper distance means.
By the way, I put more comments on your quora page.
https://hupeerreview.quora.com/Conversation-with-JD