In studying the Sloan Digital Sky Survey (SDSS) data along the distance dimension, I was able to see the Universe resonating 36 times during the first few minutes after the Big Bang.
It used to be 10 times...now, with inflation, it is 36 times....and growing...:)
In fact, a little better data processing points to at least 36 Bangs (see the plot).
The interval between Bangs is around one minute. This means that just after being born as a 184-light-second hyperspherical metric fluctuation, the Universe rang for 36 minutes.
From the reaction (no reaction) of my peers, I gathered that this should be a known fact and I am the only one unaware of it.
Skimming through the literature, what is known about BAO seems to be the conclusion of a spherical harmonics decomposition of the CMB with a peak at l=180 or thereabouts (1-2 degrees). My data shows me that the wavelength is 0.24 radian, nowhere near one degree.
I am trying to figure out if I overlooked something.
Among other things, a modulation of Galactic density along the radial direction would confirm that the Universe has a hyperspherical topology (4D angle=3D distance).
The link refers to the plot of the SDSS Celestial Coordinate distribution summed up along the Right Ascension coordinate. One can see the 36 ringings associated with the Big Bang.
https://www.youtube.com/watch?v=YfxqMsnAinE
https://www.youtube.com/watch?v=04JeZ1n4qNI
https://www.youtube.com/watch?v=r54AQc2BR5c&t=315s
https://issuu.com/marcopereira11/docs/huarticle
Frankly, no. It isn't a known fact. New ideas like yours and mine go through a period of review and evaluation. Essentially, if other researchers are able to improve upon your work, then eventually it gets accepted and might even become regarded as a known fact.
On the other hand your discovery might be explained by other conclusions.
Oscillations are sometimes suggested as a cause for how mass is distributed.
Oscillations are sometimes suggested as a cause for how mass is distributed.
That is my point since I am plotting Galactic density.
The 0.25-radian wavelength of my oscillations means that the current results that support Baryonic Acoustic Oscillations are overfitting, that is, they support something that doesn't exist.
The main evidence (from my simple-minded understanding) comes from the decomposition of the Cosmic Microwave Background onto a spherical harmonics basis. It shows a peak at l=180. I would be amazed if any decomposition of anything doesn't show a peak somewhere. What I am saying is that it doesn't make any sense to use that decomposition as evidence for BAO. In addition, I am seeing lambda as 0.25 of the radius of the Universe (13.58 billion light years). That doesn't correspond to 1-2 degrees anywhere.
In addition, my data (modulation along the distance dimension instead of the angular dimension) is consistent with a hyperspherical universe, so Inflation, General Relativity, Dark Energy and Dark Matter are being challenged by that data.
When I placed this question here, I wanted to hear your answer coming from someone that has skin in the game - an astronomer, astrophysicist, general relativist.
You, being an amateur like me, have no obligation to be knowledgeable, nor does your opinion have any effect on other academicians..:)
I aimed to either be attacked and go away or have them defending themselves. I have no interest in wasting time if I am wrong. On the other hand, I don't want to waste time because people are afraid of accepting criticism from an amateur.
On the other hand your discovery might be explained by other conclusions.
Why would my waves be called anything other than waves? They are expressed not by an overfitted spherical harmonics decomposition but by an eyeball-visible modulation of Galactic density. My interpretation is the only valid one here because this map is only possible using my theory.
Marco,
It is not a fact known to me. I can't form an opinion just on the basis of your RG contribution. Do you have a paper?
Matts,
Thanks for the kind reply. I am in transit. I will try to get back to you by the end of the day. Looking forward to your input.
Thanks,
Marco
I am surprised at how many people hate the big bang without having a clue of the theoretical physics and cosmology requiring it.
I don't know why would you say that to me...:)...Sorry, I've just noticed that you were referring to Amrit...:) I do have a complex of persecution... can't help it...:)
I have not one but ten bangs...;) one after the other.. I also have a Big Pop.
By the way, I don't want to direct you towards the basic article:
https://issuu.com/marcopereira11/docs/huarticle
because it is long and it will come across to someone who eats Algebra for lunch as quaint and impossible to understand.
This was going to be just a section of this larger article. Since it is so striking (no pun intended), I am considering making an article of it.
For the sake of conversation, could you please take a look at this Quora comment. I like to think aloud on Quora.
https://www.quora.com/Is-the-10-rings-Big-Bang-proof-of-a-hyperspherical-universe
It seems that I found NAO (Neutronium Acoustic Oscillations), which have longer wavelengths than the BAO (Baryonic Acoustic Oscillations, due to plasma fluctuations). Those are really not acoustic...they are just chaotic plasma fluctuations.
I also calculated the 2-point correlation and there is no 150 Mpc peak (Eisenstein).
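For anyone who wants a quick, independent look, here is a rough sketch of a pair-separation count (it is not my actual script, which is in the repository; the cz/H0 distance conversion and the H0 value below are simplifying assumptions, and the column names RA, DEC, Z follow the SDSS data model):

# Rough sketch: histogram of pair separations in one BOSS catalogue, to look
# for any feature near ~150 Mpc. The cz/H0 distance and H0 = 70 are assumed
# simplifications, not values taken from the repository scripts.
import numpy as np
from astropy.io import fits
from scipy.spatial import cKDTree

C_KMS, H0 = 299792.458, 70.0                       # speed of light (km/s), assumed H0 (km/s/Mpc)
with fits.open("galaxy_DR12v5_CMASS_North.fits.gz") as hdul:
    d = hdul[1].data
ra, dec, z = np.radians(d["RA"]), np.radians(d["DEC"]), d["Z"]
r = C_KMS * z / H0                                 # crude comoving distance in Mpc
xyz = np.column_stack((r * np.cos(dec) * np.cos(ra),
                       r * np.cos(dec) * np.sin(ra),
                       r * np.sin(dec)))
rng = np.random.default_rng(0)                     # subsample so the pair count stays cheap
sub = xyz[rng.choice(len(xyz), size=min(20000, len(xyz)), replace=False)]
tree = cKDTree(sub)
pairs = np.array(list(tree.query_pairs(r=200.0)))  # all pairs separated by less than 200 Mpc
sep = np.linalg.norm(sub[pairs[:, 0]] - sub[pairs[:, 1]], axis=1)
dd, edges = np.histogram(sep, bins=40, range=(0.0, 200.0))
print(dd)                                          # raw DD counts only; a proper xi(r) also needs randoms

A real 2-point correlation needs random catalogues and the survey mask; this only shows whether any gross feature stands out in the raw pair counts.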
So, I am 100% ignorant of many things. I just hope whatever I know might be of help.
I also challenge the idiotic idea that the Universe came out of the Big Bang. It came out of the Big Pop....:)
Instead, what we call the Big Bang came out of the decay of the Neutronium phase.
Tell me what you think (while I polish the article).
Thanks,
Marco
PS- Please run the script. I will hold your hand and explain what is different.
Run it with glueme=True for you to get into Glue and manipulate the Map... especially the alpha/Declination map. That is the one with the 10 rings.
Here is the github for you to test my calculations:
GitHub - ny2292000/TheHypergeometricalUniverse
The SDSS data contained two sets (CMASS and LOWZ), each split into North and South:
galaxy_DR12v5_CMASS_North.fits.gz
galaxy_DR12v5_LOWZ_North.fits.gz
galaxy_DR12v5_CMASS_South.fits.gz
galaxy_DR12v5_LOWZ_South.fits.gz
Marco, we should not be at the centre of BAO; they would happen all over the universe, so we should see many overlapping circles from where we are, not a simple radial pattern. For that reason I think you are measuring something different in the data.
MP: The main evidence (from my simple-minded understanding) comes from the decomposition of the Cosmic Microwave Background onto a spherical harmonics basis. It shows a peak at l=180.
Roughly speaking, when the CMB was released about 380,000 years after the universe got started, the CMB we see now was released at a distance of about 41 million light years, so a ~1 degree subtended angle corresponds to a distance of around 1.3 million light years measured around the surface. The redshift of the CMB is 1089 which gives a distance now of about 1.4 billion light years.
MP: I am seeing lambda as 0.25 of the radius of the Universe (13.58 billion light years).
That is about 3.4 billion light years so you seem to be out by a factor of ~2.4 but other factors may come into this.
Note however the present radius of the observable universe is 1090 * 41 million light years or about 46 billion, your number is closer to the Hubble Length.
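As a quick cross-check, using only the numbers quoted above: 1.3 million light years × 1090 ≈ 1.4 billion light years for the "distance now" of the ~1 degree scale; 0.25 × 13.58 billion light years ≈ 3.4 billion light years for your wavelength; 3.4/1.4 ≈ 2.4, which is the factor I mentioned; and 1090 × 41 million light years ≈ 45 billion light years, close to the usual 46 billion figure for the present radius of the observable universe.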
MP: That doesn't correspond to 1-2 degrees anywhere.
It corresponds as far as I can see to about 2.5 degrees so not too bad, but I remain sceptical that the pattern you are seeing is too clean to be trusted.
First, I would like to thank you for your reply. As you know, the best Science is done without bias. If I made a mistake, or if you, the community, made a mistake, it was an honest mistake. So, I make sure all my hypotheses are clear and I make my script and datasets available. It takes just a few minutes to see what I calculate. My github site is here:
Here is the github for you to test my calculations:
GitHub - ny2292000/TheHypergeometricalUniverse
The SDSS data contained two sets (CMASS and LOWZ), each split into North and South:
galaxy_DR12v5_CMASS_North.fits.gz
galaxy_DR12v5_LOWZ_North.fits.gz
galaxy_DR12v5_CMASS_South.fits.gz
galaxy_DR12v5_LOWZ_South.fits.gz
I used the column NZ (which I understood as number density) as a proxy for the mass associated with each of the 1.3 million objects in those galaxy datasets. If I consider 1 to be the density for each object, I still get a bump at 0.3 R0. I just miss some of the details of the oscillations.
alpha(z) is calculated using eq.20 on this article
https://issuu.com/marcopereira11/docs/huarticle
Alpha is the Cosmological Angle. That is maintained for all times (torsion of the Fabric of Space FS is mapped to motion, and after billions of years all galaxies' FS are relaxed; under those conditions, motion only occurs along the 4D radial direction).
In summary, alpha(z) is calculated using eq. 20. Alpha is preserved and maps exactly to the distance in the current epoch. NZ was used as number density.
The two links below explain everything pertaining to my Model used in the calculation:
#################################
The first thing you need to understand is that my theory proposes that we live in the hypersurface of a lightspeed expanding hypersphere. I explain how the hypersphere took off at the speed of light here:
https://www.linkedin.com/pulse/big-pop-cosmogenesis-theory-marco-pereira
Here is the youtube site with a few videos. One of them deals with glue and how to better see the oscillations
https://www.youtube.com/channel/UC9i8Z2pHA--eoyUPOfnQJJg
#################################
In the hyperspherical topology, the cosmological angle is mapped to distance in our 3D hypershell. So, it is not surprising that I would find the imprint of acoustic waves on the distance dimension. I also assign these vibrations to the Neutronium Phase of the Universe, that is, when the Universe was as dense as a neutron star. I explained that during the Big Pop, the Universe (the last layer of the Hypersphere) started expanding. That expansion created regions where neutrons could be created, and that started the Banging. So the NAO (Neutronium Acoustic Oscillations) is not what you call BAO.
The BAO that you talk about is something you detect by analysing the CMB angular distribution. Since the plasma is chaotic, there is always a characteristic dimension (2.5 degrees). So I would not consider them acoustic. I would consider them chaotic plasma fluctuations. They occurred much later (380,000 years later) than the acoustic oscillations, the NAO.
##################################
You might notice that you are talking with authority about the distances and angles in the Universe. That is also something I am challenging, so your distances and angles are model dependent. This means that for you to talk to me about my claim, you might have to make a modicum of effort to understand my theory. I can help you. My theory is trivial.
My theory requires G to be inversely proportional to the 4D radius of the Universe. G becomes epoch-dependent. This implies that Chandrasekhar masses become epoch-dependent and type 1A Supernova distances have to be recalibrated. That means that your measurements are model dependent. They do not correspond to my model.
##################################
"I remain sceptical that the pattern you are seeing is too clean to be trusted"
"The redshift of the CMB is 1089 which gives a distance now of about 1.4 billion light years."
I have exactly the opposite behavior. Inflation Theory and the other shenanigans are too dirty to be trusted..:)
Meet me at my level, that is, ask me questions and/or run the python script. It takes a few minutes to run and you can see the universe coming out of those SDSS datasets.
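If you want to see just the essential step without the full repository, here is a minimal sketch of the binning I described above (alpha_of_z is only a placeholder for eq. 20 of the article, and the column names Z and NZ follow the SDSS data model):

# Minimal sketch of the radial-density binning described above. Replace
# alpha_of_z with the actual Cosmological Angle mapping (eq. 20); Z and NZ
# are columns of the galaxy_DR12v5_* files.
import numpy as np
from astropy.io import fits

def alpha_of_z(z):
    # Placeholder for eq. 20 of the article (alpha as a function of redshift).
    return np.asarray(z)

with fits.open("galaxy_DR12v5_CMASS_North.fits.gz") as hdul:
    data = hdul[1].data

alpha = alpha_of_z(data["Z"])     # radial coordinate (Cosmological Angle)
weights = data["NZ"]              # comoving number density, used as a mass proxy

hist, edges = np.histogram(alpha, bins=200, weights=weights)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in zip(centers, hist):
    print(f"{c:.4f}  {h:.4e}")    # density profile along the radial (alpha) direction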
I will be happy to help you and accept your help. If I am wrong, I will abandon the idea. I suppose you will do the same...:)
Marco, I have my own projects to work on so I'm not going to go through your analysis; I'm happy to accept that you have applied your own theory correctly. However, based on that, you said: "My data shows me that the wavelength is 0.24 radian, nowhere near one degree." while my quick mental arithmetic indicates that your data fits the conventional model to within a factor of ~2.5, and a factor of 2 of that might arise just from converting circle radii to spherical harmonic frequency.
We all have our work to do. This is leisure... to be done when you have time and are concerned that someone is saying that the theory you use at work is flawed..:)
My theory reproduces d(z) for all type 1A Supernova explosions and does that without the use of a single parameter. L-CDM uses 6 parameters...so my theory is telling everyone that Inflation Theory is incorrect, and that includes all your remarks about distances and angles.
The 0.25 radian appears along the radial dimension. It doesn't appear along the angular dimension (so it will not appear in a spherical harmonics decomposition), so your mental arithmetic doesn't work. You have to use trigonometry to understand what a hypersphere means.
I don't want to seem mean spirited... But I have to bring you up to speed that your mental arithmetic is model dependent and your model is not parsimonious with respect to introducing unnecessary physics (Inflation, Dark Matter, Dark Energy) by over-parameterizing Cosmology (e.g. Rube Goldberg Cosmology).
The plot is the distances of type 1A Supernovae predicted by HU. No parameters required.
Thanks,
That said, I could use your help in confirming my interpretation of Comoving Number Density NZ
Here is the column mapping for the dataset. Here is the comoving number density internet explanation:
############################################
If I understand correctly, the motivation behind the Comoving Number Density is to map, to the current epoch, the number density per element of volume for an epoch defined by a redshift z.
Am I correct to infer that NZ (comoving number density) is the number density that an element of volume in another epoch would have in our epoch?
I need confirmation since this is a hypothesis I used in deriving my waves.
Thanks,
https://ned.ipac.caltech.edu/level5/Carroll/Carroll3_4.html
https://data.sdss.org/datamodel/files/BOSS_LSS_REDUX/galaxy_DRX_SAMPLE_NS.html
MP: Am I correct to infer that NZ (comoving number density) is the number density that an element of volume in another epoch would have in our epoch?
Yes, but that is model dependent; it corrects the observed density for the LCDM expansion to get the present value.
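In symbols (a sketch of the standard bookkeeping, not anything specific to these files): the comoving and proper number densities are related by n_comoving = n_proper/(1+z)³, because proper volumes were smaller by a factor (1+z)³ at redshift z; turning observed counts per redshift slice into NZ also needs the fiducial cosmology's comoving volume element dV/dz, which is why the value is model dependent and changed between DR11 and DR12.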
MP: You have to use trigonometry to understand what a hypersphere means.
The hypersphere is the standard "simple" topology model for LCDM with positive curvature so I'm well aware of that.
MP: I don't want to seem mean spirited... But I have to bring you up to speed that your mental arithmetic is model dependent ...
We both know that; my point was that it's quite simple to do the calculation (roughly) for the standard model and it fits quite well. You said your model came out at "0.24 radian, nowhere near one degree", so it is a poorer fit. You can't try to calculate LCDM quantities using your alternative model.
Thanks for clarifying the issue about comoving number density. You have no idea how difficult it is to get any information from scientists. I have been using it and I will suppose it does what it is supposed to do. I have no other option; that value is given to me by SDSS.
#################################
The hypersphere is the standard "simple" topology model for LCDM with positive curvature so I'm well aware of that.
Correct me if I am wrong: when you say that a hypersphere is a standard simple topology model for LCDM, you are not telling me that LCDM models the Universe as the hypersurface of a lightspeed expanding hypersphere? That is what I do.
I notice that they contorted their brains to add an extra dimension and still remain in a 4D Spacetime. I never understood why they didn't conclude we live in a 5D spacetime.
Am I correct in my assertions?
When you say simple I have the feeling that I am stupid and the LCDM people already did what I am doing behind my back..:) Please let me know that they didn't do that..:)
#################################
We both know that
Thanks for being a good sport.
#################################
You can't try to calculate LCDM quantities using your alternative model.
I had two qualms with the spherical harmonics decomposition. If you live in my Universe, the angle alpha (Cosmological Angle) is our distance (in a normalized Universe of radius 1). As you can see, waves would wash across the hyperspherical hypersurface (as I can testify). Their visibility in a 0.19 femtometer thick Universe wouldn't leave any angular footprint. That is, one cannot (should not) be able to see acoustics by just looking at the sky (CMB) angular distribution. The CMB will have the plasma's chaotic fluctuations, but those are not acoustics.
So when I complain about an l=180 peak in the SH decomposition, I am complaining both about the value and about the method of observation.
In addition, my theory tells me that after a few billion years the fabric of space is relaxed and alpha is preserved for all galaxies. That is, the angle I see in the past is the same angle I see in the present. There is no conversion of angle from epoch to epoch. Of course, the angle I am referring to is the Cosmological Angle and not the angle of observation. That is irrelevant for the sake of making a map.
GD: The hypersphere is the standard "simple" topology model for LCDM with positive curvature so I'm well aware of that.
MP: When you say simple I have the feeling that I am stupid ...
No, that's why I put the word in quotes, it is a specific technical term. For example, if the curvature of the universe were exactly zero, it would mean a slice through the spatial extent of the universe at a given cosmological age would satisfy Euclidean geometry. The "simple topology" for that would be a flat plane. Draw a triangle and the sum of the internal angles is 180 degrees. Draw that on a sheet of paper and then roll it into a cylinder and the angles still sum to 180. In four dimensions, a 3-torus can be formed by joining the ends of the cylinder without distortion (you can't do that in 3D space), so a 3-torus has the same geometry as the flat plane but a different topology. The plane is called "simple" and all other possibilities are "complex topology".
MP: Correct me if I am wrong: when you say that a hypersphere is a standard simple topology model for LCDM, you are not telling me that LCDM models the Universe as the hypersurface of a lightspeed expanding hypersphere? That is what I do.
Almost. If you think of the 3D surface of the hypersphere as the extent of space at a specific cosmological age then yes, that is the model. However, you have to think back to special relativity to merge in the idea of time. In that, the universe is seen as four-dimensional so there is no surface that we would call "now"; time extends into the future and past. That's known as the "block universe" philosophy.
Anyway, for the three possible geometries with the corresponding "simple" topology, see the diagram on the right of the first link (also the third link from WMAP). The sphere which has positive curvature is the expanding hypersphere. You'll find this in all introductory cosmology textbooks. The radius is R = HL/√ΩK where HL is the Hubble length and ΩK is the "curvature density". The latter has been measured by the Planck satellite to be 0.0±0.005.
MP: I notice that they contorted their brains to add an extra dimension and still remain in a 4D Spacetime. I never understood why they didn't conclude we live in a 5D spacetime.
Have a look at the second link on Kaluza-Klein theory, the fifth dimension was originally considered by Kaluza in 1919. It has evolved into modern "string theory".
MP: Am I correct in my assertions?
MP: I have the feeling .. the LCDM people already did what I am doing behind my back..:) Please let me know that they didn't do that..:)
Well I wouldn't say it was "behind your back", it's the starting point for most textbooks. The hypersphere model is also commonly known as the "balloon model of the universe", put that into Google and see how many hits you get. The fourth link is from Ned Wright whose tutorial (last link) has been around since last century.
https://en.wikipedia.org/wiki/Shape_of_the_universe#Curvature_of_the_Universe
https://en.wikipedia.org/wiki/Kaluza%E2%80%93Klein_theory
https://map.gsfc.nasa.gov/universe/uni_shape.html
http://www.astro.ucla.edu/~wright/balloon0.html
http://www.astro.ucla.edu/~wright/cosmo_01.htm
I am quite aware of the balloon allegory to explain Universe expansion. It takes a little more brain to create a theory and provide logical support to make that balloon real.
Science is about subtleties. The Lorentz Transformation existed before Relativity gave it the physical meaning (time dilation, space contraction) of a coordinate transformation in Minkowski space.
I certainly looked into Kaluza-Klein and criticized it (for rhetorical effect) as a contrived theory that introduced new physics (compact dimensions) without actually using the geodesics model (electromagnetism doesn't curve space, despite being much stronger than Gravitation). Don't even get me started on String theory..:) (which I don't know. I only know physics).
You realize that: to say that the Universe acts as a Balloon is different from saying that the Universe is a Balloon. That is exactly what I am doing. To be able to do that, I created a model for matter where matter is made of deformed space, a new Standard Model, derived Natural Laws (Gravitation and Electromagnetism), explained away the other forces, discovered a new force (the de Broglie Force) and challenged both the SDSS and type 1A Supernova interpretations of their own data.
My facetious comment about LCDM people working behind my back was to make you realize that what I am saying is new and that all those very smart fellows couldn't make the intellectual jump (because it is difficult). They just BS among themselves and created PBS shows and so on and so forth.
##############################
Thanks for the links. You know that all those links are model dependent. Curvature of space by matter is meaningless in a lightspeed expanding hyperspherical universe. Kaluza-Klein theory is a joke. It was discarded around the time it was written. String theory has no testable predictions and doesn't derive any natural law... it is physically inconsistent with Physics. An extremely small string requires an extremely large containment field. What happened to having kinetic energy inversely proportional to lambda squared? This is what I call unnecessary physics (lack of parsimony in introducing new physics).
It is possible that further observations will require that. I don't think there is any observation that requires us to discard the idea that something very localized has to be contained by some very high potential. This means that the energy levels of that string would be HUGE, and not suitable to describe larger particles.
One can always make a fabric of strings and use collective modes (they have lower energy transitions). That said, there is no reason for that.
MP: electromagnetism doesn't curve space, despite being much stronger than Gravitation.
Its energy density curves space to exactly the same extent as the same energy density of matter.
MP: You realize that: to say that the Universe acts as a Balloon is different from saying that the Universe is a Balloon.
Of course, the balloon is just a crude and somewhat flawed analogy to give laymen a simple picture. The scientific model incorporates the FLRW metric, the Friedmann Equations and the various equations of state for the components.
MP: My facetious comment about LCDM people working behind my back was to make you realize that what I am saying is new ..
The 3-sphere idea is not new, it's the simplest finite model from standard cosmology, you didn't seem to realise that based on your previous comments. The only difference appears to be that curvature is intrinsic in the standard version, you may be thinking of it as extrinsic with an embedding space.
Let's say we have a planet with two moons... they are following their geodesics. Then I extract 10 million Coulombs of electrons from one of these moons and add them to the planet. I suppose that will change the trajectory of the charged moon and not change the trajectory of the neutral moon.
My naive understanding is that since space has been modified by the charge redistribution, the neutral moon's trajectory should also be affected. If you have to change the paradigm to explain this simple Gedanken Experiment, then the paradigm is not good.
########################################
I don't think I can convince you that my theory is new, even after giving you the example that a change of interpretation of the Lorentz transform was all that Strict Relativity was about.
I am changing the interpretation so that the physical universe (as opposed to the LCDM model) is a lightspeed expanding hypersphere. LCDM model doesn't have a lightspeed expanding hypersphere in it, so it is not the same.
I am also proposing that matter is made of space deformation coherences, that is, the whole Universe is made only of space. LCDM adds two new constructs (Dark Matter and Dark Energy) which are parameters, but somehow they want those parameters to have a physical meaning. That clashes with reality, since they are nowhere to be seen, and Dark Energy/Inflation Theory is simply explained away by incorrect Supernova distance measurements, as I did.
I would direct you towards attacking my correction to the Supernova Survey distance measures first if you really want to say that my theory is not new...:)
####################
You don't need to tell me that the concept of a 3-sphere is not new. It wasn't new when it was first used in Physics. It was already a mathematical construct.
In fact, a lightspeed expanding hypersphere is new. I am sure mathematicians never used them because it has the speed of light in it. Physicists had never used them because they are not allowed to place an atom at the speed of light... never mind the whole universe.
MP: I suppose that will change the trajectory of the charged moon and not change the trajectory of the neutral moon.
The Coulomb forces between the bodies would have an effect which would cause the paths to deviate from the geodesics when uncharged. Extracting charge from one body and moving it to another requires an input of energy (like charging a battery) and that energy will very slightly alter the masses but the amount is tiny.
MP: If you have to change the paradigm to explain this simple Gedanken Experiment, then the paradigm is not good.
No change in paradigm is needed, you just do the maths with the standard equations.
MP: LCDM model doesn't have a lightspeed expanding hypersphere in it, so it is not the same. .. Physicists had never used them because they are not allowed to place an atom at the speed of light.
The simple positive curvature model is a hypersphere but since distance is measured across the 3-surface (or we would have to experience a universe with 4 spatial dimensions), it makes no sense to me to talk of the hypersphere having a speed of expansion (presumably you mean rate of change of radius). In the standard model, the speed between any two points on the surface of the hypersphere is given by the Hubble Law so you can't even define a "speed of expansion" because it is proportional to distance.
Is it possible that you cannot reason without the framework you have inside your mind at this point? Forget the standard model (I live in another model) for a second. Imagine a hypersphere - expand it at the speed of light (there is nothing else to expand other than the 4D radius). Now consider that we are living inside the lightspeed expanding hypersurface. It contains a 3D Universe, so it should be ok.
If you think a little, you will realize that that hypersphere follows Hubble law if you consider H0=c/R0 where R0 is the current 4D radius. If you do the math like I did, you will realize that this topology, plus G proportional to 1/R_4D, plus some understanding of Nuclear Chain reactions in a Supernova detonation, will render Inflation Theory and your Standard Model irrelevant.
Don't worry, I am sure you are not the only one suffering from this ailment.
MP: Forget the standard model (I live in another model)
Other than the rate of change of radius, I'm still waiting for you to say something different from the standard balloon version.
MP: Imagine a hypersphere - expand it at the speed of light (there is nothing else to expand other than the 4D radius). Now consider that we are living inside the lightspeed expanding hypersurface. It contains a 3D Universe, so it should be ok.
Right, other than the speed, that is the standard model.
MP: If you think a little, you will realize that that hypersphere follows Hubble law if you consider H0=c/R0 where R0 is the current 4D radius.
Right, or R0=c/H0. Note that c/H0 is called the Hubble Length and has the value 14.4 billion light years.
The radius also defines the curvature of the surface. The curvature can also be described in terms of an equivalent energy density ΩK, and the relation is ΩK = (c/(H0R0))²; you can think of that as just a definition of a parameter based on H0 and R0.
For your model, that means ΩK=1.0 always, however current measurements give its value as ΩK=0.0±0.005.
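As a quick check of those two statements: with R0 = c/H0 = 14.4 billion light years, ΩK = (c/(H0R0))² = 1, whereas |ΩK| < 0.005 requires R0 > (c/H0)/√0.005 ≈ 204 billion light years; that is where the observational minimum quoted below comes from.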
MP: Don't worry, I am sure you are not the only one suffering from this ailment.
Oh I won't worry, it's called "accepting the evidence" and it's what scientists do.
You are on the right track if you realise that inflation caused R to increase at an exponential rate before the expansion settled down to the present much lower rate. The observational minimum for the radius of curvature is 204 billion light years, but it's probably much larger and possibly infinite.
https://en.wikipedia.org/wiki/Hubble%27s_law#Hubble_length
Right, other than the speed, that is the standard model.
I asked you to get out of your model for a second. Somehow, having the physical universe traveling at the speed of light is not a significant difference from the Standard Model? Nor having matter being made of space? I also explained how the whole Universe was placed in motion, up to the speed of light, in 1E-24 seconds.
I don't think there is any hope for you. I mentioned that I predicted all the Supernova distances given their redshifts, which means that my theory doesn't require what you foretell to be in my future (accepting Inflation). You ask for evidence but refuse to read the article...:)...I cannot provide you with evidence by waving my hands...:)
Let's agree to something and never communicate again
MP: I asked you to get out of your model for a second.
I know but what you then say is the same as one form of the standard model, the geometry of a 3-sphere is the same whatever model you use it in, we can't avoid that commonality.
MP: I mentioned that I predicted all the Supernova distances given their redshifts,
Since the published distances are only known from the redshift, that's not surprising. The real question is can you accurately predict the non-linearity in the plot of their magnitude versus redshift? See Perlmutter's graph linked for example.
MP: You ask for evidence but refuse to read the article
I didn't ask for evidence, I asked for clarification on what aspect you thought was different to the standard 3-sphere model and you provided that, the difference is that in yours the radius increases linearly with time at the speed of light.
To test a theory, I don't look for supporting evidence, which I assume you have. The approach science takes is to look at the tests which are most likely to show a discrepancy, so the obvious aspect here is the effect that your much smaller radius would have. That should cause higher curvature, so that is the way to test your theory.

Consider the image attached. The dot is our location on the 3-sphere, r is the radial distance corresponding to some redshift and C is the circumference of a circle of given r. The ratio C/r would be 2π in Euclidean space but will be smaller on the hypersphere by an amount that depends on the sphere's radius R (and it could be greater than 2π if the curvature is negative).

Curvature is usually published in terms of the equivalent energy density ΩK and the most recent published value for that is in the Planck cosmological parameters. From the abstract: "Spatial curvature is found to be |Omega_K| < 0.005." There is more detail in the document, specifically section 6.2.4, which states "The base ΛCDM cosmology assumes an FRW metric with a flat 3-space. This is a very restrictive assumption that needs to be tested empirically." Simply, your model gives a very different value for ΩK from the empirical measurement, and that is what I think you need to address.
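For reference, the ratio in that test has a simple closed form: on a 3-sphere of radius R, a circle at radial distance r has circumference C = 2πR·sin(r/R), so C/r = 2π·sin(r/R)/(r/R), which equals 2π for r ≪ R and falls below 2π as r approaches R; that deviation is exactly what the ΩK measurement constrains.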
I agree, I doubt much more can be achieved by repeating the discussion, those are two questions that you will need to address if you want people to consider your proposal seriously.
https://inspirehep.net/record/1351741/files/Perlmutter1999.png
https://arxiv.org/abs/1502.01589
MP: I asked you to get out of your model for a second.
I know but what you then say is the same as one form of the standard model, the geometry of a 3-sphere is the same whatever model you use it in, we can't avoid that commonality.
Commonality is not identity. As I mentioned, Einstein's great feat in Strict Relativity was to state that the Lorentz Transform was a physical phenomenon. You should know that a 3-sphere in math.... or in a model where its curvature is a parameter is different from a physical universe traveling at the speed of light (I provide the mechanism for accelerating that whole universe to the speed of light in 1E-24 sec).
If you cannot understand that commonalities are less important than the distinctions, you are documenting in stone (in bits) that you really don't understand physics and are a mathematician.
##############################################
MP: I mentioned that I predicted all the Supernova distances given their redshifts,
Since the published distances are only known from the redshift, that's not surprising. The real question is can you accurately predict the non-linearity in the plot of their magnitude versus redshift? See Perlmutter's graph linked for example.
I mentioned that my prediction is a prediction because it doesn't use parameters. Perlmutter uses parameters; it is just a fitting. If you don't understand this...:)
By the way, my argument corrects those distances, so Perlmutter is fitting the wrong data, and my correction of distances plus a mechanism for redshifting eliminates any problem. As much as you want to hide behind Perlmutter, I am attacking the distances themselves... You should know that whatever anyone did based on those distances is useless...wrong...incorrect... unless you can rebut my correction to them.
##############################################
MP: You ask for evidence but refuse to read the article
I didn't ask for evidence, I asked for clarification on what aspect you thought was different to the standard 3-sphere model and you provided that, the difference is that in yours the radius increases linearly with time at the speed of light.
I didn't provide just that!
"If you do the math like I did, you will realize that this topology, plus G proportional to 1/R_4D, plus some understanding of Nuclear Chain reactions in a Supernova detonation, will render Inflation Theory and your Standard Model irrelevant."
this is a quote from our prior communications....
L-CDM is a really bad theory. One that was developed without bothering to check if the measurements were correct. Then, it accepted really stupid things like inflation (without making an effort to listen to alternatives). My alternative theory has been around since 2004. The Community read it (you didn't) and never sided with me (or supported me against censorship).
You, for instance (you may not be a scientist any longer), have the standard model so ingrained in you that you cannot write two paragraphs without injecting it back into the conversation.
Let me make it clear. I know you know the standard model... You don't need to prove it.
What you need to prove (if you want to do so) is that you can learn new things, or whether you have had your fill and are done for this life...already know too much...:)
MP: You, for instance (you may not be a scientist any longer)
I never have been; I'm a design engineer working in the field of HF radio communications. I did my degree in physics many years ago but have been studying cosmology for about 20 years, and took a course based on the CalTech cosmology course a couple of years ago.
MP: the standard model so ingrained in you that you cannot write two paragraphs without injecting it back into the conversation.
MP: You should know that a 3-sphere in math.... or in a model where its curvature is a parameter is different from a physical universe traveling at the speed of light
You said your model was a 3-sphere, Marco; it's not something I'm imposing on you. If I am not to force the standard model on you, all I can do is go by what you say. If you say your model is a 3-sphere then it will have a surface curvature that depends on the radius and that curvature can be measured. If that isn't the case, your model doesn't have the geometry of a 3-sphere.
MP: I mentioned that my prediction is a prediction because it doesn't use parameters.
Prediction of what though? Distances are obtained from redshift so if you use that, you are only showing that x=x. Of course you'll get that right but so will every other model. To be meaningful, you need to relate two different measurements, like magnitude versus redshift for example.
MP: this is a quote from our prior communications....
I've searched all 3 pages of this thread and can't see anything like your list, are you thinking of some other conversation perhaps?
MP: I mentioned that my prediction is a prediction because it doesn't use parameters.
Prediction of what though? Distances are obtained from redshift so if you use that, you are only showing that x=x. Of course you'll get that right but so will every other model. To be meaningful, you need to relate two different measurements, like magnitude versus redshift for example.
Let's see if you are smart enough to tie your shoes. I will let you consider the Universe a lightspeed expanding hypersphere. Calculate the distances of all supernovae. I will give you the redshift z. No parameters are allowed. If you insist, you can use R0 = 13.58 Gly.
If you don't like the hypersphere to be expanding, that is ok... No parameters allowed.
Please show me how do you calculate the distances... for z=1.1 for instance.
MP: Please show me how do you calculate the distances.
For the distance values you get from published sources, they should say the basis on which they were calculated, but typically for low redshift the approximation D=cz/H0 is used. For higher values of redshift, they might use a model, typically ΩM=0.3 and ΩDE=0.7, but they may still use the simpler formula as a convention even though it would be inaccurate.
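For a purely illustrative number at the z = 1.1 you asked about, the low-redshift formula would give D ≈ cz/H0 = 1.1 × 14.4 ≈ 15.8 billion light years, but at that redshift the approximation is no longer reliable, which is why a model (for example ΩM = 0.3, ΩDE = 0.7) is normally integrated instead.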
No... It seems that you cannot understand anything.
You have to calculate the distances from just z and H0 and the speed of light c. Nothing else. How come you don't understand the task I performed. I didn't use another parameter... didn't use approximations of any kind.
To be really precise, I used an approximation for the nuclear chain reaction in the Supernova... You can use that if you want to do so.
I certainly didn't tell you to introduce OmegaM or any other physics or parametrization.
Why is that so difficult for you to understand????
Do what I did...or at least try to understand how I did it.
Marco, I quoted your question: "Please show me how do you calculate the distances."
You asked me to tell you what I do, not to try to guess what you do. What I said is what astronomers do and I don't do anything different.
MP: You have to calculate the distances from just z and H0 and the speed of light c. Nothing else.
I did that, I said "typically for low redshift the approximation D=cz/H0 is used". Did you even read my reply? I think that would apply globally in your version.
MP: I certainly didn't tell you to introduce OmegaM or any other physics or parametrization.
Tough luck, matter produces gravity and gravity slows down expansion, so history of the rate of expansion depends on the density of matter in the universe.
MP: Why is that so difficult for you to understand???
Why is it so difficult for you to understand that I consider your answer incomplete because it ignores the effect of mass density?
You have R increasing linearly with time as shown in the linked image with ΩM=0.0 but we know that the universe is not devoid of matter. The numbers that fit best are around ΩM = 0.3, ΩΛ = 0.7 which again is what I said.
https://upload.wikimedia.org/wikipedia/commons/d/dc/Friedmann_universes.svg
Marco, I quoted your question: "Please show me how do you calculate the distances."
You asked me to tell you what I do, not to try to guess what you do. What I said is what astronomers do and I don't do anything different.
Let's see if you are smart enough to tie your shoes. I will let you consider the Universe a lightspeed expanding hypersphere. Calculate the distances of all supernovae. I will give you the redshift z. No parameters are allowed. If you insist, you can use R0 = 13.58 Gly.
I mentioned clearly that you shouldn't use parameters. That should tell you that I don't do what everyone does; that I created something better (no parameters, no unnecessary physics or extra degrees of freedom). The point of my work is that I am creating a theory that is better than what everyone does. I didn't come here to say I am doing the same thing or worse. By choosing to read only part of my question, it should be clear that you are not being honest... you are just using this as a rhetorical competition, instead of opening your mind for a second and learning something new.
The D=cz/H0 equation is incorrect, not an approximation. Nobody wants an approximate theory. I don't use this equation in my theory.
MP: I certainly didn't tell you to introduce OmegaM or any other physics or parametrization.
Tough luck, matter produces gravity and gravity slows down expansion, so history of the rate of expansion depends on the density of matter in the universe.
What kind of thinking person (I was going to use the word scientist, but rhetorically you will evade it by saying that you are not one...) are you, that imposes from the get-go "that mass produces gravity and slows down expansion" after I said the word hypersphere? In a hypersphere, a symmetric mass distribution eliminates the global effect of gravitation, so the density of matter has no global effect.
It is a shame that you decide just to win a discussion instead of learning something.
MP: I mentioned clearly that you shouldn't use parameters.
You also said:
MP: No... It seems that you cannot understand anything. You have to calculate the distances from just z and H0 and the speed of light c. Nothing else. How come you don't understand the task I performed. I didn't use another parameter.
Well I gave you the equation using H0 (and z and c of course), and I didn't use "another" parameter.
MP: What kind of thinking person (I was going to use the word scientist, but rhetorically you will evade it by saying that you are not one...) are you, that imposes from the get-go "that mass produces gravity and slows down expansion" after I said the word hypersphere?
I am one who knows of Birkhoff's Theorem that says the total mass of the shell (of uniform density) will be equivalent to a single mass at the centre, and who knows that GR relates the rate of change of the scale factor to the energy density of the contents via their equations of state, and that a hypersphere is the standard positive curvature solution which still includes those relations.
MP: It is a shame that you decide just to win a discussion instead of learning something.
You haven't offered anything new to learn, your rate of expansion appears to me to indicate the Milne Model (see the link), devoid of energy if GR is valid globally or possibly containing matter if SR applies globally while GR is only locally valid, and you haven't said anything about the surface curvature of a hypersphere or how you propose to reconcile it with the measurements that show it to be close to zero. I'm not "winning a discussion", I'm just waiting to see how you will answer the questions.
https://en.wikipedia.org/wiki/Milne_model
Let's drill down one by one on your "answers".
You cannot be serious in rebutting what I wrote in my answer to this:
The D=cz/H0 equation is incorrect, not an approximation. Nobody wants an approximate theory. I don't use this equation in my theory.
Are you claiming that this answer works? Can you publish this?
See the link from the SDSS site which states:
The first two sections I put in bold are what I said to you, the third bold text tells you this is the standard convention that virtually all astronomers will most often use, not just some personal view of mine.
http://skyserver.sdss.org/dr1/en/proj/advanced/hubble/conclusion.asp
(For sufficiently large d, we might expect a departure from the simple linear relation, but that's another story.)
There is no other story. I did not ask for an approximation. I did not provide an approximated theory. My question was connected with your seeming lack of understanding of my contribution. For that reason, I asked you to do what I did.
Instead, you pretend that you didn't understand the request and the context. I call that bad faith.
There is no "bad faith" involved, I answered the question you asked so if it isn't what you wanted, you need to make the question clearer.
The equation is exact for small z because the value of H0 is empirically defined as the slope of the line cz/D. However D in the equation is the proper distance now and for real observations, we look back in time due to the finite speed of light. Since H had a different value in the past, the equation is exact for D as defined but only approximate for practical observations over long time spans.
I would also note that if you picture an expanding hypersphere then the distance D between two points on the surface is proportional to the angle they subtend at the centre. If that angle is constant then D will increase at a rate dD/dt which is proportional to D and we can therefore define a constant (dD/dt) / D which, with an appropriate choice of units, is H0 so I suspect the equation holds for your model too.
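Spelling that out (using only the constant-angle picture above): D(t) = R(t)·φ with φ fixed for comoving points, so dD/dt = (dR/dt)·φ and (dD/dt)/D = (dR/dt)/R, the same value for every pair of points; with dR/dt = c this is c/R, i.e. H0 = c/R0 today, consistent with the H0 = c/R0 relation already agreed earlier in the thread.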
Now that you know that I asked you for a solution for d(z) valid for all distances using just H0 and c, can you do it?
That should have been a simple question. Just say no... nobody can. This question is designed for you to understand the limitations of what you know.
The next question is can you use GR on a 5D Spacetime without knowing the Nature of Gravitation and Electromagnetism?
The answer to this is also No. That will help you understand why you cannot use Birkhoff's theorem on a 4D hyperspherical mass or a 4D mass in a 4D hyperspherical shell traveling at the speed of light.
Next, go to my github repository and branch it.
https://github.com/ny2292000/TheHypergeometricalUniverse
run all the scripts.
I hesitate to enter what is clearly a fractious debate between two people...but my 2c steers clear of any theoretical arguments and just addresses the SDSS data presented by Marco.
When I see things such as the opening frames in the "RingingAlongDec" youtube video you linked, I immediately suspect quantization in the measurements or their presentation. The curves are just too perfect, too noiseless, too...rounded off.
There is also the glaring question, already alluded to by George in his first response here - why would these density shells be concentric on Earth? I would add, why would they be exactly aligned with something as cosmologically arbitrary as the (RA,Dec) coordinate system defined by the tilt of Earth's axis? Is it not far more likely that this pattern is connected with an observational characteristic, such as the drift-scan mode of observation of the SDSS telescope, scanning the sky in slices?
One might say, "never attribute to intricate theories that which is adequately explained by data systematics".
One last thing - terminology. Marco, you use terms like "Galactic Clusters" and "Galactic density" where you should be using "galaxy clusters" and "galaxy density". Galactic, with a capital G, is reserved for our own Milky Way Galaxy. To me and to other astronomers, "Galactic density" means the density of stars or other matter within our Galaxy.
Ray,
It might seem fractious... but I don't blame him for being a difficult student. Anyone who lives and breathes GR would try to see things from a GR or 4D spacetime perspective.
I thank you for the comment about Galaxy and Galactic. I am a foreigner and also not an Astronomer.
################################################
My conversation with George has been predicated on the idea that he is trying to learn something from me. Side arguments are the result of how difficult it is to force anyone who is intelligent and has a lot of knowledge to stop trying to use it where it doesn't belong. For instance, trying to use Birkhoff's Theorem on a 5D spacetime.
"I am one who knows of Birkhoff's Theorem "
He is a proud GR person and that is ok. It is not ok to think that GR works in a 5D Spacetime without blinking. GR was designed and tested for a 4D Spacetime. This lack of depth in the scientific method makes it difficult for me to teach. I just want him or you to relinquish your tools at the door for a second, and realize the flaws in the current view, such that you can give a new idea a fair evaluation, instead of taking an adversarial position or trying to translate everything I say into your language all the time.
################################################
Is it not far more likely that this pattern is connected with an observational characteristic, such as the drift-scan mode of observation of the SDSS telescope, scanning the sky in slices?
One might say, "never attribute to intricate theories that which is adequately explained by data systematics".
I totally agree with you. The 10-Bang is more of an observation... as far as theory goes, modeling 10 Bangs is as simple as modeling one bang...
There is a caveat. I am plotting densities; that is, the fine structure is contained in the comoving NZ, and the gross structure is contained in the zDensity (number of observations per z). The z density should be modulated by a distance-squared profile plus an edge decay. It shouldn't have oscillations. My plots should have the 2-point correlation already normalized by 1/distance-squared, so that bias is already taken into consideration.
I don't know how the drift-scan mode could modulate the comoving number density values to produce the specific profile shown in the declination plot. Remember, the dimension where the profile is located is the z dimension, not an angular dimension.
Please let me know your thoughts.
My lack of understanding of the details of data collection and data processing in Astronomy explains my effort to convince some of you to collaborate with me.
I certainly cannot do everything well... I only hope I have something to add
################################################
I asked SDSS to tell me the answer. I didn't receive a reply (other than one directing me towards a complex paper). On the other hand, the 10 bangs are present in both datasets. It is also strange... I cannot see in my head how they would be introduced as modulations along the declination dimension. I cannot see them in the visualization, so it is puzzling. It might be an artifact... but that doesn't change the theory... it just changes the number of Bangs. Now the ring along distance, that is relevant. I would expect a ring someplace. It is possible the data wasn't collected with that in mind and better data will be necessary. That doesn't change my theory nor its predictions. The data should be good enough. People are deriving conclusions from it (e.g. the 150 Mpc conclusion). So I don't know what is going on. SDSS doesn't tell me.
################################################
Below is a map of my attempt to teach anyone anything about my theory.
https://www.researchgate.net/project/The-Hypergeometrical-Universe-Theory
It has a few steps. I have to guide your reasoning since this is really outside your understanding.
################################################
I also thought about quantization. In my theory, the effect of the Waves takes place just after the Universe started moving. The tangential expansion causes the hypershell to start a phase transition from Blackholium to Neutronium. The Neutronium phase is the phase where the Universe is between a 184 light-second Black Hole and a 575 light-second Neutron Star. So in my theory the Universe goes from Blobium (just a smooth blob of metric deformation), to a 184 ls Blackholium, to a 575 ls Neutronium.
So after 390 seconds of existence, the Universe is starting to have a density that is smaller than a Neutron Star's... Around that time, density fluctuations start converting Neutrons into Electrons, Protons and antineutrinos. That creates the wave seeds.
So I see the seeding of galaxy clusters as a competition between the natural dilution of the Universe and the effect of these waves.
I haven't been able to see how those densities are distributed along the Declination dimension. I lack the time and skill to do a better job on visualization. The data and my script are available here:
https://github.com/ny2292000/TheHypergeometricalUniverse
################################################
You can see that I didn't do anything to the data (other than use the cosmological angle constancy to project the past into the present hypersphere).
I am not here to defend myself... in the sense that, I am not my theory. I am just defending a theory I created. If it is wrong, I will get out from my high horse and move on.
On the other hand, I would appreciate if everyone had the same posture.
The flaws in Science are self evident: Natural Laws with poles at zero distance, inflation, imaginary Dark Energy and Dark Matter (unsubstantiated by any reasonable evidence), and the list goes on. So, as much as my theory is tentative, the current view is much worse. That is my claim.
On my map to teach this theory I claim it doesn't have a parameter. That should be enough for you to lend me your ears for a few hours.
################################################
I am also concerned about us being at the center of the waves. I mentioned that in my updates to this project. This is not something I am hiding. In fact, I am also not hiding the fact that I asked SDSS for help on that matter and received no help, no explanation, no clarification. They directed me towards a complex paper... difficult for me to read... It diminishes my ability to provide any insight to the community.
################################################
I know my limitations. I am happy to present myself begging for information from SDSS. See my exchange below.
There is nothing wrong with being ignorant of something. Comoving Number Density was used as a mass proxy in my calculations. Since I am not an Astronomer, I can only guess at all the billions of detailed knowledge that is part of the word Comoving. I just need the finality (comoving is trying to make that density correspond to the current Universe as opposed to something faraway in space and time). I just need confirmation of that in English... I always try to cross my t's and dot my i's.
###############################
Benjamin,
I would like to ask you to be charitable and clarify a little if I can disregard the comoving part of "Comoving number density". :)
I suppose the comoving part has to do with dilation of space... and it is used to keep the meaning of number density comparable in different epochs. Am I correct?
I am just an amateur scientist..:)
I understand number density as a proxy of mass or number of stars or number of galaxies. I am just trying to make sure I am not too off in that understanding.
Not to mention the effect of the change in fiducial cosmology between DR11 and DR12 on the column NZ. Could you elaborate on whether the change in fiducial cosmology affected the interpretation of NZ?
Thanks,
Marco
From: Benjamin Alan Weaver
To: MP
Cc: "[email protected]"
Sent: Tuesday, February 28, 2017 9:40 AM
Subject: Re: [helpdesk 10598] Question about columns on dataset
Hello Marco,
Thank you for writing to the help desk. The file is described in detail here: https://data.sdss.org/datamodel/files/BOSS_LSS_REDUX/galaxy_DRX_SAMPLE_NS.html. The NZ column is described as "Comoving number density for the object's redshift using the fiducial cosmology. Note that the fiducial cosmology is different in DR11 and DR12."
Kia ora koe,
Benjamin Alan Weaver
GD: I answered the question you asked so if it isn't what you wanted, you need to make the question clearer.
I asked but you still haven't made it clear, are you asking about standard cosmology, well-known alternatives or your own model? I'll assume the latter but first explain why.
MP: Now that you know that I asked you for a solution for d(z) valid for all distances using just H0 and c, can you do it?
Suppose you asked a student in secondary school to answer this question using Newtonian mechanics:
"A stone is thrown into the air with initial vertical speed v. How long will it be until it falls back to the initial altitude (ignoring air resistance)?"
They would reason that the stone would have zero velocity at the highest point and reach that in time t=v/g where g is the acceleration due to gravity. By symmetry it would take the same time to return to the starting altitude, so the answer is t=2v/g.
If you then asked the student to give you an exact answer but without using any parameter related to the acceleration (g or the mass of the Earth etc.), he would think it was an idiotic question (and he'd be right). In standard cosmology, gravity affects the rate of expansion so for exactly the same reason you cannot answer without including that contribution in some way, so for that reason I don't think that is what you wanted, you aren't an idiot. If you do want to know the standard solution, any decent textbook will tell you how to calculate it based on the relative energy densities and the equation of state for each component.
You could be asking about an alternative cosmology such as the Milne Universe or Verlinde's version (or several others) but since you didn't specify any particular one and the answer would obviously depend on that choice, I don't think that was the case either.
That leaves your own model. You've said it is a "hypersphere expanding at the speed of light" and confirmed that the speed refers to the rate of change of the radius. In that case the answer is fairly simple.
You haven't said what relationship there is between expansion and redshift in your model so I'll have to assume the usual, if the distance between us and the source was Dz when the light was emitted and is D0 now, the wavelength of the light will have been stretched by D0/Dz = (1+z).
Assuming the radius of the 3-sphere was Rz when the light was emitted and is R0 now, for comoving galaxies the angle subtended is constant, so D0/Dz = R0/Rz = 1+z.
Putting it all together, the answer is that the current proper distance is D0 = R0 ln(1+z).
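A quick numerical sanity check of that formula (a sketch of mine, assuming only the constant rate dR/dt = c, i.e. R(t) = c*t, and light sweeping angle at dtheta/dt = c/R(t)):

import numpy as np

R0 = 1.0                       # current 4D radius (normalized)
z = 1.0                        # example redshift
Rz = R0 / (1 + z)              # radius at emission, from 1+z = R0/Rz

# with c = 1, the angle swept by light is the integral of dt/t from Rz to R0
t = np.linspace(Rz, R0, 200001)
theta = np.sum(1.0 / t[:-1]) * (t[1] - t[0])   # simple Riemann sum of the integral

print(theta, np.log(1 + z))    # both come out ~0.693 for z = 1
print("D0 =", R0 * theta)      # current proper distance, i.e. R0*ln(1+z)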
MP: That should have been a simple question. Just say no... nobody can.
It is simple, and I just answered it.
MP: My conversation with George has been predicated on the idea that he is trying to learn something from me.
ROFL, remember you didn't even know that the expanding 3-sphere was the "simple topology" standard solution for an FLRW universe with positive curvature. Had you studied the Milne model before I introduced it? I think anyone reading the thread can see that you learned more from me than vice versa.
MP: It is not ok to think that GR works in 5D Spacetime without blinking. GR was designed and tested for a 4D Spacetime.
Indeed, but it is OK if one knows about Kalusa-Klein theory which is the 5D version of GR, explains electromagnetism via the extra dimension and in which Birkhoff's Theorem is still valid.
MP: This lack of depth on the scientific method makes it difficult for me to teach.
Again, it seems you weren't even aware of Kalusa-Klein theory and imagined the extension to 5 dimensions was an original idea so it looks as though your comment applies most aptly to yourself. If you cut out the attempts at ad hominem attacks, this conversation would be much more fruitful.
MP: Comoving Number Density was used as a mass proxy in my calculations.
Number density only reflects the number of objects, not their mass. To use it as a proxy for mass density, you need to multiply by the mean mass per object. Remember galaxies grow by merging so although it's a reasonable approximation at low redshift, it will fail at higher values, the average mass will be smaller. A more significant problem would be Malmquist Bias, that will progressively eliminate dimmer objects at greater distance and seriously affect both densities. You also need to look at the maps of the sampling, obviously regions containing significant clusters will have unrepresentative higher densities, and you also need to consider the "Finger of God" effect on redshifts within clusters.
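A rough sketch of that conversion (the mean_mass_of_z function below is a made-up placeholder, not a measured mass function; real values would have to come from a published stellar-mass function):

import numpy as np

def mean_mass_of_z(z):
    # hypothetical mean galaxy mass in solar masses; purely illustrative
    return 1e11 / (1 + z)

z  = np.array([0.1, 0.3, 0.5, 0.7])
nz = np.array([4e-4, 3e-4, 2e-4, 1e-4])   # example comoving number densities, per Mpc^3

rho = nz * mean_mass_of_z(z)              # mass-density proxy, Msun per Mpc^3
print(rho)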
MP: SInce I am not an Astronomer, I can only guess all the billions of detailed knowledge that is part of the word Comoving.
That's trivial, in your model it means that the object has no motion across the surface of the hypersphere. Real galaxies move within their host clusters with velocities that depend on their specific past history giving them a local velocity relative to the mean "Hubble Flow" of expansion. Those speeds can be up to about 1% of the speed of light.
MP: I just need the finality (comoving is trying to make that density to correspond to a current Universe as opposed to something farway in space and time). I just need confirmation in English of that... I always try to cross my ts and dot my is.
My comments above are intended to explain standard astronomical terminology, they are not advocating any particular model.
You mentioned me in a recent comment so I had a look and on your profile page you say:
I will play the role of "critic" if you like, although somewhere between "sceptic" and collaborator would be more accurate. Specifically, I am suggesting questions that you could address using your model that will give you testable predictions. I've already mentioned a couple: what are you assuming for the evolution of the mass function in translating number density to mass density, and how have you handled Malmquist Bias? Ray Butler's points are excellent; I said way back at the beginning that the cleanness of your regular patterns was very suspicious, and Ray's suggestion that it is a result of an instrument idiosyncrasy is exactly what I would expect. You need to explain how you have processed your data to remove such artefacts in any paper for publication.
I have also raised a cosmological question already, how do you explain the measured near-zero curvature of the universe when your model should have high curvature, and alluded to a second.
You asked above if I could calculate D from z which I did, in return I will ask you, what is the formula in your model for the apparent magnitude of a distant point source (such as a supernova) using only the absolute magnitude and z? Remember the evidence for dark energy is the plot of magnitude versus redshift so that formula is crucial in testing your model.
The final question is similar and related to your earlier comments on BAO. What is the formula in your model for the observed angular size of a distant object using only its width measured locally and z?
I derived Gravitation from first principles. I know from its derivation that it speaks only of tangential motion (motion within the lightspeed expanding hyperspherical hypersurface). That gives me unique insight that Gravitation Law doesn't work in the bulk. Matter sitting around in a 4D spatial manifold doesn't express Gravitation or electromagnetism.
Having this view, I stated that no extension of GR into a non-compact 4D spatial manifold is warranted. To that point I added the scientific tenet that scientific theories have scope, and their automatic extension outside that scope does not follow the scientific method.
So my contention that you cannot use GR's Birkhoff theorem is because Birkhoff hadn't derived Gravitation from a more basic theory. Nobody did. Kaluza-Klein was a 5D spacetime, but their extra dimension was compact.
##########################################
You haven't said what relationship there is between expansion and redshift in your model
https://issuu.com/marcopereira11/docs/huarticle
equations 20-21. They are not hiding themselves..:) It is not a secret... you don't need to guess.
##########################################
so I'll have to assume the usual, if the distance between us and the source was Dz when the light was emitted and is D0 now, the wavelength of the light will have been stretched by D0/Dz = (1+z).
Why would I consider that space inflated? If the light was at distance D0 and I know that it hit my telescope now, I know exactly everything I need to calculate d(z) with equations 20-21.
##########################################
D0 = R0 ln(1+z)
MP: That should have been a simple question. Just say no... nobody can.
It is simple, and I just answered it.
You just flunked your test...:) See the plot below
You answered, but it doesn't work. It is not properly derived. Check your Half-Baked Solution below!!!! Poorly derived, not working...
I don't need to tell you that this equation will not predict the distances in the SN1A survey, do I?
So I don't know what you just did.
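For reference, a minimal sketch of how such a comparison could be set up, assuming Om = 0.3, OL = 0.7 and H0 = 70 km/s/Mpc for the standard side, and identifying R0 with the Hubble distance c/H0 (both choices are assumptions of this sketch, not values taken from the thread):

import numpy as np

c, H0 = 299792.458, 70.0     # km/s and an assumed H0 in km/s/Mpc
Om, OL = 0.3, 0.7            # assumed flat-LCDM densities
DH = c / H0                  # Hubble distance in Mpc, standing in for R0

def d_log(z):
    # the D0 = R0*ln(1+z) form discussed above
    return DH * np.log(1 + z)

def d_lcdm(z, n=20000):
    # flat-LCDM comoving distance: DH times the integral of dz'/E(z') from 0 to z
    zp = np.linspace(0.0, z, n)
    E = np.sqrt(Om * (1 + zp)**3 + OL)
    return DH * np.sum((1 / E[:-1] + 1 / E[1:]) / 2) * (zp[1] - zp[0])

for z in (0.1, 0.5, 1.0):
    print(z, round(d_log(z)), round(d_lcdm(z)))   # in Mpc; the two curves separate as z grows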
##########################################
You asked above if I could calculate D from z which I did, in return I will ask you, what is the formula in your model for the apparent magnitude of a distant point source (such as a supernova) using only the absolute magnitude and z? Remember the evidence for dark energy is the plot of magnitude versus redshift so that formula is crucial in testing your model.
You seem to read my mind...:0 ... that is exactly how my model proves that there is no need for Dark Matter, Dark Energy, GR, Friedmann-Lemaitre model...L-CDM
That is the point of the article. That is what I do there. You should read more ...:)
##########################################
The final question is similar and related to your earlier comments on BAO. What is the formula in your model for the observed angular size of a distant object using only its width measured locally and z?
Equation 20-21
In addition, I am challenging the concept of BAO based on angular measurements in 3D.
The reason is that those features are created by plasma, not acoustics. Acoustics happens when the Universe is as dense as a Neutron Star (575 light-seconds 4D radius). Under those conditions the speed of sound is close to the speed of light and the acoustic imprint has dimensions consistent with the radius of the Universe at the time. That is what we are seeing in the distance dimension.
The two-point correlations shown below don't contain the 150 Mpc resonance, so those conclusions might be model dependent (the result of an artifact in their calculation).
With respect to angular
##########################################
In standard cosmology, gravity affects the rate of expansion so for exactly the same reason you cannot answer without including that contribution in
You cannot use standard cosmology to criticize my theory without understanding it. If I am right, Standard Cosmology is wrong, senseless... Hence, the way to criticize my theory is to understand it and criticize it from within... Criticize the hypotheses, the derivation of the conclusions, etc.
It is like criticizing Quantum Mechanics with Classical Mechanics arguments... like saying the electron is either here or there... the same level of primitive criticism.
You are correct in stating that what I asked is difficult (if you said that). I did it. Nobody else did. So it is difficult. My point is to get you to recognize that.
##########################################
With respect to being a critic: you have to read what you criticize.
##########################################
GD: I answered the question you asked so if it isn't what you wanted, you need to make the question clearer.
I asked but you still haven't made it clear, are you asking about standard cosmology, well-known alternatives or your own model? I'll assume the latter but first explain why.
I don't know how to express it better; I apologize. My theory doesn't use parameters and replicates the SN1A distances. Not using parameters is the same as saying that I am not introducing new physics, unknown forces or undetectable matter.
If Standard Cosmology can do that, you could use Standard Cosmology
If a Well-known alternative to my theory could do that, that would be fine.
In my naivete, I thought that they couldn't do it, otherwise why are they introducing Dark Matter, Dark Energy?
##########################################
MP: SInce I am not an Astronomer, I can only guess all the billions of detailed knowledge that is part of the word Comoving.
That's trivial, in your model it means that the object has no motion across the surface of the hypersphere. Real galaxies move within their host clusters with velocities that depend on their specific past history giving them a local velocity relative to the mean "Hubble Flow" of expansion. Those speeds can be up to about 1% of the speed of light.
I said that the center of mass of each galaxy will move radially. That means that they move with the hypersphere as the Hypersphere expands. That is called the Hubble Law at short distances. Since my d(z) predicted the average, I don't think you can state that my hypothesis is incorrect.
That said, cosmology is not about exceptions. It is about averages. My theory predicts averages, not every single event in the Universe. On average, galaxies follow the Hubble flow.
Are you proposing that a cosmological theory has to predict individual galaxy velocities? How does picking, as an example, one member of a distribution challenge my assertion that on average they will move only radially?
##########################################
Kaluza-Klein
Not everything that has 5 dimensions is identical. Kaluza-Klein adds a compact dimension to allow for electromagnetic modes. My extra dimension is non-compact.
My five-dimensional spacetime contains a Fabric of Space (a preferential coordinate system that travels at the speed of light, stores nuclear energy, and is part of a theory that doesn't require the concept of mass).
So stop telling me that Kaluza-Klein is my model. Read the article if you want to give the appearance of knowing what you are talking about.
##########################################
You mentioned the word collaborator and that almost gives me pause. You have no idea how much I value both a critic and a collaborator. You might think that I am being rude. It is just exasperation.
As I mentioned, I know some of what you know. I also know what you don't know, because you decided not to read my work. Why don't we have a Skype session so that I can give you a 15-minute summary of my work?
##########################################
I provided an answer to Ray Butler's question. Please add yours.
By the way, the first person to become suspicious of the cleanliness of the bumps was me...:) I just couldn't yet find a real way to get rid of them. As I mentioned, the information for those nice features is contained in NZ (comoving number density). That has to do with Luminosity. The feature is placed on the plot at a distance derived from the redshift z.
So to fake those features one would have to modulate luminosity and redshift together. Ray's argument about angles doesn't hold water.
##########################################
With respect to Number density as a proxy for mass, that is the only number available. As for Malmquist Bias, I mentioned that there would be a bias towards detecting closer (brighter) galaxies and that it would affect the normalization of my 2-point correlation. If what we are observing in the data is correct and there is a bump at 0.3, then the Malmquist adjustment is incorrect: it presupposes a uniform distribution.
It would affect the result quantitatively, but it didn't eliminate the bumps. Malmquist bias is not supposed to add bumps to the data, so it wouldn't change the conclusions.
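A toy illustration (an entirely assumed setup, not the SDSS pipeline) of how a flux limit alone depresses the detected radial density, which any real bump would have to survive on top of:

import numpy as np

rng = np.random.default_rng(0)
N = 200000
r = 3000 * rng.random(N) ** (1 / 3)          # uniform number density inside a sphere (arbitrary units)
L = rng.lognormal(0.0, 1.0, N)               # toy luminosity distribution
flux = L / r**2                              # inverse-square dimming, nothing else
detected = flux > 2e-7                       # arbitrary survey flux limit

bins = np.linspace(0, 3000, 31)
all_counts, _ = np.histogram(r, bins)
det_counts, _ = np.histogram(r[detected], bins)
print(np.round(det_counts / np.maximum(all_counts, 1), 2))   # detected fraction falls with distance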
##########################################
I have also raised a cosmological question already, how do you explain the measured near-zero curvature of the universe when your model should have high curvature, and alluded to a second.
Again, I have to answer that your GR consideration doesn't work against a theory that is telling you that GR is wrong. It is missing a dimension. What else can I say to you?
##########################################
You need to explain how you have processed your data to remove such artefacts in any paper for publication
The conclusions about 10-Bangs and a Big Pop are not essential to the critique of the SN1A analysis/Standard Cosmology/Dark Matter/Dark Energy/GR.
That said, the features are in the data (I have to suppose SDSS got rid of systematic data-collection problems). The 10-Bangs are in z and luminosity. That doesn't depend upon angles or celestial reference frames.
I provide all my calculations in my github repository
https://github.com/ny2292000/TheHypergeometricalUniverse
##########################################
Please copy and paste again any questions I might not have answered. It is too long.
Your Half-Baked incorrectly derived d(z) yields disastrous results when compared with reality. You flunked it...:)
MP: I derived Gravitation from first principles. I know from its derivation that it speaks only of tangential motion (motion within the lightspeed expanding hyperspherical hypersurface). That gives me unique insight that Gravitation Law doesn't work in the bulk. Matter sitting around in a 4D spatial manifold doesn't express Gravitation, nor electromagnetism.
That's fine, my comments were based on the same assumption. In terms of matter affecting expansion, think of a balloon. The rubber is like the surface of the hypersphere. The tension in that acts only across the surface, not radially, but the effect is still to cause the balloon to shrink if the air is let out. Anyway, that's an aside because I derived the distance versus redshift equation based on your postulate that the hypersphere expands at a constant rate dR/dt=c.
BTW, you didn't acknowledge whether you agreed with the equation I gave as the answer, it would be polite to do that or give your alternative if you think mine was wrong.
MP: Kaluza-Klein was a 5D spacetime, but their extra dimension was compact.
Correct, hence gravity's effect was limited to the 3D surface as yours is, but let's leave that anyway since I worked with your postulate and ignored Birkhoff.
GD: I asked but you still haven't made it clear, are you asking about standard cosmology, well-known alternatives or your own model? I'll assume the latter but first explain why.
I don't know how to express it better; I apologize.
All you needed to do was say whether you wanted the equation for your theory or standard cosmology. I guessed you were testing to see if I understood how to derive the equation in your geometry.
MP: My theory doesn't use parameters and replicates the SN1A distances.
Replicates the distances based on what though? You can't claim a correlation with only one parameter. You don't seem to have grasped that supernova distances cannot be directly measured, they are always just a conversion from an observable measurement.
MP: In my naivete, I thought that they couldn't do it, otherwise why are they introducing Dark Matter, Dark Energy?
Dark matter was originally identified in the 1930s from the mean speeds of galaxies in the Coma Cluster. They are so fast the galaxies should have flown apart, but they stay as a structure. A similar picture comes from the rotational speeds of stars within galaxies. These are entirely local effects not related to large-scale cosmology. Subsequently dark matter has been confirmed in several unrelated ways, through nucleosynthesis and "bottom up" structure formation for example.
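For the cluster argument, the back-of-the-envelope estimate is a dynamical (virial) mass of order sigma^2 * R / G; a sketch with assumed, Coma-like round numbers (illustrative values only, not measurements from any specific paper):

G    = 6.674e-11              # m^3 kg^-1 s^-2
Msun = 1.989e30               # kg
Mpc  = 3.086e22               # m

sigma = 1.0e6                 # assumed velocity dispersion, 1000 km/s in m/s
R     = 3.0 * Mpc             # assumed cluster radius

M_dyn = sigma**2 * R / G      # order-of-magnitude dynamical mass
print(M_dyn / Msun)           # roughly 7e14 solar masses, far above the luminous mass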
Dark energy comes from the correlation of redshift versus magnitude for SNe so it's important that you calculate your equations.
##########################################
MP: SInce I am not an Astronomer, I can only guess all the billions of detailed knowledge that is part of the word Comoving.
GD: That's trivial, in your model it means that the object has no motion across the surface of the hypersphere. Real galaxies move within their host clusters with velocities that depend on their specific past history giving them a local velocity relative to the mean "Hubble Flow" of expansion. Those speeds can be up to about 1% of the speed of light.
MP: I said that the center of mass of the galaxy will move radially. That means that they will move on the hypersphere as the Hypersphere expands.
This may only be terminology, but if they only move radially, they don't move across the sphere; it is only the distance between them that increases. Imagine the Earth started expanding radially: London and Paris would each remain fixed at their respective latitude and longitude, but the distance between them would increase.
MP: That is called Hubble Law for short distances.
Yes, or the "Hubble Flow" at any distance.
On the other hand, the distance between a snail leaving London to visit Paris and the Eiffel Tower would vary by a combination of the snail's speed and the expansion rate.
MP: Are you proposing that a cosmological theory will have to predict individual galaxy velocities?
No, what I am warning, since you say you are not an astronomer, is that the SDSS redshift data inevitably includes both, therefore you have to be careful about taking averages, it's not trivial to count the galaxies at the right distance because the internal motion in the host cluster generates Doppler shift which mimics Hubble Flow redshift.
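A minimal sketch of the size of that effect, assuming a cluster peculiar velocity of about 1000 km/s and H0 = 70 km/s/Mpc (both assumed numbers):

c  = 299792.458    # km/s
H0 = 70.0          # assumed Hubble constant, km/s/Mpc

z_cos = 0.10                                 # cosmological redshift of the host cluster
v_pec = 1000.0                               # km/s, assumed motion of one galaxy inside the cluster

z_obs  = (1 + z_cos) * (1 + v_pec / c) - 1   # observed redshift mixes both effects
d_true = c * z_cos / H0                      # low-z Hubble-law distance, Mpc
d_est  = c * z_obs / H0                      # distance inferred if z_obs is treated as pure Hubble flow
print(d_true, d_est, d_est - d_true)         # off by roughly 15 Mpc for these numbers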
MP: You mentioned the word collaborator and that almost gives me a pause. You have no idea how much I value both a critic and a collaborator. You might think that I am being rude. It is just exasperation.
It's hard to work through this medium. I may look as though I'm just knocking your idea but that's not the intention, I'm outlining key questions that you can tackle which will develop your ability to compare the predictions of your model with actual observations.
MP: Why don't we have a skype session ..
I don't intend to get that involved. I've been out 7 nights out of the last 8 on other hobbies and I work full time as well so my free time is scarce!
MP: With respect to Number density as a proxy to mass, that is the only number available.
In the SDSS data perhaps but there are other surveys and papers you can use to get a mass evolution function.
MP: It didn't eliminate bumps.
In relation to "bumps", have a look at the attached SDSS map. In the top half, you can see by eye some arcs which are roughly centred on our location which could be the origin of your features, but if you look at the bottom half, they are not present. Now look at a slightly smaller scale, you should be able to see that the bottom region has smaller circles, about half a dozen between the centre and the edge of the plot. If you then return to the upper half, you can make out similar small circles and the merger of the edges of a few of those creates the larger features. Earth is not the centre of the universe, what astronomers do is treat the whole map equally and look for the mean radius of those circular features.
MP: Malmquist bias is supposed not to add bumps to the data, so it wouldn't change the conclusions.
That wasn't why I mentioned it.
GD: I have also raised a cosmological question already, how do you explain the measured near-zero curvature of the universe when your model should have high curvature, and alluded to a second.
MP: Again, I have to answer you that your GR consideration doesn't work against a theory that is telling you that GR is wrong. It is missing a dimension. What else can I say to you.
Curvature is a feature of any hypersphere simply from the geometry, it has nothing to do with GR.
MP: Please copy and paste again any questions I might not have answered. It is too long.
So is this reply, I'll summarise in a separate reply but probably tomorrow.
http://www.sdss.org/wp-content/uploads/2014/06/orangepie.jpg
Well it takes a lot of time to follow this discussion. George Dishman is right, though.
Hi Matts,
We have a vigorous scientific debate...:)
Below is a Word file. George, let's use this Word file... It is impossible to do a good job in this reply box.
I will wait to answer until someone gives me back the Word file.
We might end up with a book...:)
What did he say that is right?
Thanks,
Marco
MP: George let's use this word file... It is impossible to do a good job on this reply box.
I'll probably just take individual points and reply here so that others can follow. One point stands out though because if I've got this wrong, it invalidates everything else I've done on your idea.
MP: dR/dt=c is not correct, because you are using proper time. It should be dR/dPhi=1, where Phi is the cosmological time dimensionalized by multiplication by c.
MP: It is the only possible inference from my assertion that the 3D universe is the hypersurface on a lightspeed expanding hypersphere. You cannot interpret it otherwise.
Well I interpreted what you said as being a 5D model with four spatial dimensions and just one of absolute time. One of the four dimensions is suppressed so that we are only aware of three, and those three are then the surface of a sphere in 4D. The radius of that sphere increases at dR/dt=c or if you like dR/dϕ=1 in consistent units. That is an expanding glome in the mathematical sense.
Galaxies and light travelling between them, such as we see from a high redshift source, are constrained to the surface and specifically light travels at speed c across the surface, so light travels at dx/dt=c where x is the distance measured across the surface, i.e. a "great circle" distance. Now if you are saying that ϕ and t are different that would make a 6D model, not 5D, with four spatial dimensions, one suppressed, and two temporal dimensions.
I just want to check that I haven't misunderstood so can you confirm which of those is correct?
https://en.wikipedia.org/wiki/3-sphere
In relation to "bumps", have a look at the attached SDSS map. In the top half, you can see by eye some arcs which are roughly centred on our location which could be the origin of your features, but if you look at the bottom half, they are not present.
See the discussion of aggregation below to understand the appearance and disappearance of the waves.
################################################
Now look at a slightly smaller scale, you should be able to see that the bottom region has smaller circles, about half a dozen between the centre and the edge of the plot. If you then return to the upper half, you can make out similar small circles and the merger of the edges of a few of those creates the larger features. Earth is not the centre of the universe, what astronomers do is treat the whole map equally and look for the mean radius of those circular features.
Those circular features, if I understand your explanation correctly, are nothing more than plasma waves of a chaotic nature that became imprinted onto the Cosmos. That is what is seen in BAO and mistakenly asserted to be Acoustics.
This OrangePie plot only goes to z=0.15. Mine goes to 0.7. There is more to see that will tell you that we really seem to be at the center of a NAO (Neutronium Acoustic Oscillation).
My calculation is based on the same data, so they have to be consistent, especially if the data is presented in redshift. That plot is not model dependent, although it might have been adjusted to eliminate Malmquist Bias. In my framework, I would just divide the observed density by distance^2. That is model dependent.
So, the peak positions might differ a little, but they are basically the same data, so no surprise there.
The data peak visibility depends upon how you aggregate the data. If there are a few double or triple-density spots and you aggregate them with a lot of single-density (baseline) pixels, their visibility will be diminished. I aggregated the least. I didn't want to make my map pixelated. This means that most of the objects are sitting in a pixel by themselves, thus providing a baseline density. Some have double the density, or triple, or 10 times the density.
So, seeing waves requires some finesse. I provided the data and script. You can play with it. The mapping to distance is the only model-dependent item, so even if you don't pay attention to distance, you will still be able to see the 10-Bangs.
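A toy sketch (an assumed setup, not the actual SDSS processing) of why the bin width matters: a weak sinusoidal modulation on top of Poisson counts is obvious with moderate aggregation and indistinguishable from noise with almost none.

import numpy as np

rng = np.random.default_rng(1)
alpha = 0.7 * rng.random(500000)                   # mock radial coordinates
wavelength, amp = 0.05, 0.10                       # assumed toy modulation, 10% amplitude
keep = rng.random(alpha.size) < 0.5 * (1 + amp * np.sin(2 * np.pi * alpha / wavelength))
alpha = alpha[keep]

for nbins in (20, 300, 20000):
    counts, _ = np.histogram(alpha, bins=nbins)
    excess = counts.std() / np.sqrt(counts.mean()) # ~1 for pure Poisson counts
    print(nbins, round(float(excess), 2))          # the modulation stands out only with enough aggregation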
Needless to say, SDSS didn't look for it, because GR and the Standard Model describe a 4D Spacetime and thus do not expect, nor want, nor hope to find waves along the radial direction.
It is also seen that the OrangePie places us at the center of those waves.
################################################
MP: George let's use this word file... It is impossible to do a good job on this reply box.
I'll probably just take individual points and reply here so that others can follow. One point stands out though because if I've got this wrong, it invalidates everything else I've done on your idea.
Thanks for acknowledging that you could be completely wrong about everything you said. It takes intellectual courage to say so.
I will answer your items later. I will just point out that anyone and their cat knows that the introduction of a preferential reference frame (R, the Fabric of Space, and Phi, the Cosmological Dimensionalized Time) would not be consistent with Strict Relativity. The proper time and proper dimensions are presented on the right panel, where 4D spacetime lives. They can be mapped to a hyperbolic spacetime if you insist on using Standard Physics, so HU is compatible with SR.
The Fabric of Space deformations are related to velocity, acceleration, and thus Force. They are used to show how SR is a naive theory. HU explains why the speed of light is the limit, and thus it is more fundamental than SR.
Static Laws can be derived without the benefit of time, and so they can be derived on the left panel. Biot-Savart and Photon Entanglement have to do with the Right Panel.
So there are a lot of things in that picture. The Fabric of Space maps locally to proper spatial coordinates. In a global sense, the FS is always a hypersphere (locally deformed by moving matter on its footprint only). That means that the curvature of space is the inverse of the 4D radius.
For Cosmology, proper time and proper distances are mostly irrelevant.
So my theory has One Cosmological Time, XYZ (referring to the Fabric of Space). The proper xyz is the local FS deformed. Perpendicular to it is the proper time and the proper radial direction r.
Those are projections of Phi and R. Depending upon which laws of dynamics you use, those projections will be hyperbolic or not.
################################################
MP: dR/dt=c is not correct, because you are using proper time. It should be dR/dPhi=1, where Phi is the cosmological time dimensionalized by multiplication by c.
MP: It is the only possible inference from my assertion that the 3D universe is the hypersurface on a lightspeed expanding hypersphere. You cannot interpret it otherwise.
Well I interpreted what you said as being a 5D model with four spatial dimensions and just one of absolute time. One of the four dimensions is suppressed so that we are only aware of three, and those three are then the surface of a sphere in 4D. The radius of that sphere increases at dR/dt=c or if you like dR/dϕ=1 in consistent units. That is an expanding glome in the mathematical sense.
Galaxies and light travelling between them, such as we see from a high redshift source, are constrained to the surface and specifically light travels at speed c across the surface,
That is only valid for short cosmological distances. Ancient photons will display varying velocity as they come closer to us. This is due to the shifting of the FS with respect to the initial 4D k-vector. The wavelength observed is the projection of the 4D wavelength onto the local hypersurface.
################################################
so light travels at dx/dt=c where x is the distance measured across the surface, i.e. a "great circle" distance.
This reasoning is not right. The logic is more complex. You can take a look at the second plot showing how light from earlier epochs reaches us.
################################################
Now if you are saying that ϕ and t are different that would make a 6D model, not 5D, with four spatial dimensions, one suppressed, and two temporal dimensions.
I counted dimensions properly since the proper dimensions are projections. But if you disagree, that is OK. I don't care how many dimensions are attributed to the model as long as people understand the model.
################################################
I just want to check that I haven't misunderstood so can you confirm which of those is correct?
I don't understand how a lightspeed-expanding hypersphere can be interpreted in any other way than as having its 4D radius increasing at the speed of light. The reason is that any other distance points to an arbitrary point on the hypersurface and will present a different expansion rate (Hubble Law).
You are more of a mathematician than I am and seem to be capable of envisioning other scenarios...
Thanks for the OrangePie plot. My data is the same as their data, so any difference is due to aggregation and possibly efforts to eliminate Malmquist bias. I suppose that is what they did.
################################################
By the way, now that you are reevaluating everything you said about my theory (due to not reading my article), would you mind stating that this is also baloney...:)
MP: It is not ok to think that GR works in 5D Spacetime without blinking. GR was designed and tested for a 4D Spacetime.
Indeed, but it is OK if one knows about Kalusa-Klein theory which is the 5D version of GR, explains electromagnetism via the extra dimension and in which Birkhoff's Theorem is still valid.
You cannot believe this... and Kaluza is misspelled...:) You don't know how to derive Gravitation from a more fundamental theory. How can you possibly state that Gravitation is the same for Kaluza-Klein and for a model where the Universe expands at the speed of light and where the Gravitation laws were derived from first principles? Under those conditions you can easily understand that Gravitation will not move you along the radial direction. Everything we know about Gravitation is derived from observations along the Tangential direction.
Watching your video at https://www.youtube.com/watch?v=YfxqMsnAinE it appears that we have a scatter-plot of "galactic density" vs "alpha" that pretty distinctively shows 10 populations.
I don't know, though... Does each point in the plot represent a single galaxy? If so, then "galactic density" must refer to the density of that galaxy. And there are definitely 10 populations there: 10 different but similar nonlinear correlations of alpha with galactic density.
But if the determining variable upon which those ten populations were based was DEC, I would think that as the image turned we should not see uncorrelated "noise"; rather, we should see independent curves.
I imagine there is some other variable besides DEC. See if you can go back and find that third variable, because it looks to me like there is no correlation between DEC and the other two variables at all.
By the way, I have no idea what "alpha" variable represents. If I did, I might be able to guess what the third variable that varies with the ten population actually is.
SDSS provides 1.3 million objects, which I suppose are galaxies. If I have zero aggregation, the density variation will come from the Comoving Number Density. I am using CND as a proxy for galaxy mass. I believe that sometimes they might image a galaxy cluster and just quantify what they observe in the CND.
That is a guess, of course. Nobody gives me any clear information.
I then rounded the angles to one decimal (RA varies from 0.0 to 360.0, Declination varies from -90.0 to 90.0).
import numpy as np  # myGalaxy is assumed to be a pandas DataFrame with columns DEC, RA, Z, CosDEC, SinDEC, CosRA, SinRA

myGalaxy.DEC = myGalaxy.DEC.round(1)  # angles rounded to one decimal place
myGalaxy.RA = myGalaxy.RA.round(1)
I also rounded x, y, z to four decimals (so they vary from 0.0001 to 1.0000):
n = 4
pi4, sqrt2 = np.pi / 4, np.sqrt(2)    # constants used in the alpha expression
# cosmological angle alpha from redshift Z, then its Cartesian projection
myGalaxy['alpha'] = np.round(pi4 - np.arcsin(1 / sqrt2 / (1 + np.abs(myGalaxy.Z))), n)
myGalaxy['x'] = np.round(myGalaxy.alpha * myGalaxy.CosDEC * myGalaxy.CosRA, n)
myGalaxy['y'] = np.round(myGalaxy.alpha * myGalaxy.CosDEC * myGalaxy.SinRA, n)
myGalaxy['z'] = np.round(myGalaxy.alpha * myGalaxy.SinDEC, n)
This minimum aggregation is sufficient to permit the visualization of Waves. I didn't want to lose the spider web look of the Galaxy Clusters.
I didn't have time to try to make the waves more visible. My theory predicts waves along the radial direction (because the Universe is a hyperspherical hypersurface). I will be happy if people acknowledge my findings. Data and data presentation can always be improved. My script is available in the repository.
Alpha
Alpha is a cosmological angle in Radians. Since the current-epoch 4D radius is normalized to 1.0, alpha is numerically equal to the distance on the current-epoch hypersphere. Notice that one never sees anything in the current-epoch hypersphere, nor can one travel in it, since the hypersphere keeps expanding at the speed of light.
Are the Waves Real?
I didn't have time to think too much about the densities themselves. I thought about the process of obtaining them. I couldn't figure out how I could have introduced them by accident. I also don't believe they could have been introduced by the data collection process. They are in the Z, CND and DEC/RA coordinates. Z is obtained by mathematical fitting of spectra and has nothing to do with data collection. CND is done by eye or whatever they use to measure intensity; that also cannot be artificially correlated to DEC/RA. Z cannot be artificially correlated to DEC/RA. So I don't know how to get rid of what I see. I get the densities and plot them.
similar nonlinear correlations of alpha with galactic density.
A correlation between alpha and galaxy density can be created if the densities are the result of spherical acoustic waves. Acoustic waves bring matter together at a time when matter is being diluted by the Universe's expansion.
I think you mentioned quantization. I tend to believe that there is quantization in the earlier (denser) stages of the Neutronium. By the time the Bangs are occurring, density is dipping below Neutron Star density (neutrons are becoming free and decaying/releasing energy).
I envision the process as a recompression of a diluting Neutronium. The recompression occurs with increasing intensity due to the increasing amplitude of the density waves (driven by increasingly stronger Bangs).
There is a quantization of the peak density. I would not say a quantization of the profile, because the profile is continuous. What you have is a sequence of Bangs that are growing exponentially. I think that is what one would expect from a simple linear model for this process.
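A one-line version of that expectation (my gloss, with made-up numbers): in a linear instability model the amplitude obeys dA/dt = gamma*A, so Bangs spaced by a fixed interval grow geometrically.

import numpy as np

gamma, dt = 0.05, 60.0       # assumed growth rate (1/s) and assumed spacing between Bangs (s); illustrative only
n = np.arange(10)
A = np.exp(gamma * dt * n)   # amplitude of each successive Bang, starting from 1
print(np.round(A, 1))        # each peak is exp(gamma*dt) ~ 20x the previous one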
I imagine there is some other variable besides DEC
The position of any galaxy is defined by DEC, RA and Z. I showed cross-sections DEC/Z and RA/Z. Since the wave is spherical, DEC and RA have almost the same profile. There are no other variables to use.
You can see that there are waves along the DEC dimension.
https://www.youtube.com/watch?v=YfxqMsnAinE
I wasn't able to make a good display. On the other hand, my main priority is to bring the theory into discussion. The data analysis and further development are better done as a community effort. I cannot do everything. I can give advice on how to do things.
MP: I will answer your items later. I will just point out that anyone and their cat knows that the introduction of a preferential reference frame ... would not be consistent with Strict Relativity.
That's not a problem, a Lorentzian aether theory is equivalent to SR and cosmological age provides a "preferred" direction for time at large scales so I can take your hypothesis at face value.
MP: The proper time and proper dimensions are presented on the right panel where 4D spacetime lives. They can be mapped to a hyperbolic spacetime if you insist in using Standard Physics, so HU is compatible with SR.
"Proper time" has a specific meaning which is not what you describe, but the topic is not relevant for the discussion so I suggest we ignore that.
MP: For Cosmology, proper time and proper distances are mostly irrelevant.
"Proper" in cosmology is not the same as "proper" in SR, proper distance is what we are trying to calculate.
MP: Those are projections of Phi and R. Depending upon which laws of dynamics you use, those projections will be hyperbolic or not.
Locally the geometry of SR means projections are hyperbolic so that is true irrespective of any dynamics, it is a result of the kinematics.
MP: dR/dt=c is not correct, because you are using proper time.
No, proper time for light is always identically zero. As I say you are not using that term correctly.
GD: Galaxies and light travelling between them, such as we see from a high redshift source, are constrained to the surface and specifically light travels at speed c across the surface,
MP: That is only valid for short cosmological distance. Ancient photons will display varying velocity as they come closer to us.
No, the speed is determined locally. If light passes through glass, the speed is reduced by the refractive index and in vacuum it is c. Projecting between vectors doesn't change the speed due to the locally hyperbolic geometry.
GD: so light travels at dx/dt=c where x is the distance measured across the surface, i.e. a "great circle" distance.
MP: This reasoning is not right. The logic is more complex. You can take a look at the second plot showing how light from earlier epochs reach us.
MP: I counted dimensions properly since the proper dimensions are projections. But if you disagree, that is OK. I don't care how many dimensions are attributed to the model as long as people understand the model.
It is important. If you let light pass into the bulk, it spreads over 4D, so the luminosity would fall as the inverse cube of distance, which would mess up your use of apparent magnitude (and of course is completely at odds with observation).
MP: I don't understand how a lightspeed expanding hypersphere can be interpreted in any other way other than as having its 4D radius increasing at the speed of light.
That's fine, it's what I've used but I wanted to check.
MP: You are more of a mathematician than I am and seems to be capable of envision other scenarios...
It is not a different scenario, it is a question of local detail in yours. I've marked up your diagram to show an expansion of a small increment along the light path. The shell moves outward at dr/dt=c which is your "4D radius increasing at the speed of light." The projection of the light onto the shell also results in the light moving radially at c so dx/dt=0. The angle subtended at the centre of the hypersphere by the two galaxies is ϴ and the angle moved by the light from the source is ϕ. That increases at dϕ/dt=c/r where r is the radius at that moment.
The distance between the galaxies was Dz when the light was emitted (measured around the hypersphere surface between "B" and "C") and is D0 now (measured between "A" and "D").
The maths then is just solving for D0 with the definition that D0/Dz=1+z, as you say because the wavelength is projected between the 4-vectors.
The term proper time is being used as in the time in your local inertial reference frame. If you provide me a better term for it I will use it from now on.
MP: The proper time and proper dimensions are presented on the right panel where 4D spacetime lives. They can be mapped to a hyperbolic spacetime if you insist in using Standard Physics, so HU is compatible with SR.
"Proper time" has a specific meaning which is not what you describe, but the topic is not relevant for the discussion so I suggest we ignore that.
MP: For Cosmology, proper time and proper distances are mostly irrelevant.
"Proper" in cosmology is not the same as "proper" in SR, proper distance is what we are trying to calculate.
You don't seem to understand that in my theory I can calculate distance in an absolute manner. My universe has no inflation, so what I am trying to calculate is not what you calculate.
MP: Those are projections of Phi and R. Depending upon which laws of dynamics you use, those projections will be hyperbolic or not.
Locally the geometry of SR means projections are hyperbolic so that is true irrespective of any dynamics, it is a result of the kinematics.
My theory creates a new Law from which dynamics is derived, so what you just said is not true, and it is a result of you not reading my article.
MP: dR/dt=c is not correct, because you are using proper time.
No, proper time for light is always identically zero. As I say you are not using that term correctly.
dR/dPhi=1 is not just for light. It is the 4D Radius of the Universe, so it is not zero. It is not clear why you corrected my proper time reference, since there is no other name to replace it.
GD: Galaxies and light travelling between them, such as we see from a high redshift source, are constrained to the surface and specifically light travels at speed c across the surface,
Light never travels across the surface. It travels between hypersurfaces, each one defined by a de Broglie step.
MP: That is only valid for short cosmological distance. Ancient photons will display varying velocity as they come closer to us.
No, the speed is determined locally. If light passes through glass, the speed is reduced by the refractive index and in vacuum it is c. Projecting between vectors doesn't change the speed due to the locally hyperbolic geometry.
This shows ignorance of my argument. It is supported by the SN1A results. Your knowledge of light doesn't include it having a 4D k-vector, so this comment doesn't really mean anything. Just consider light traveling between adjacent hyperspheres at 45 degrees: as it travels along the line-of-sight path, its projection onto the local hyperplane will change angle. Just look at the picture.
GD: so light travels at dx/dt=c where x is the distance measured across the surface, i.e. a "great circle" distance.
MP: This reasoning is not right. The logic is more complex. You can take a look at the second plot showing how light from earlier epochs reach us.
MP: I counted dimensions properly since the proper dimensions are projections. But if you disagree, that is OK. I don't care how many dimensions are attributed to the model as long as people understand the model.
It is important. If you let light pass into the bulk, it spreads over 4D, so the luminosity would fall as the inverse cube of distance, which would mess up your use of apparent magnitude (and of course is completely at odds with observation).
This is not correct. This is based on the paradigm that light is spread out onto an 'area' and that dimensionality matters. HU derived the laws of physics based just upon distance. That means that all interactions are unidimensional, and area diffusion (of photons, gravitinos, etc.) is not the right paradigm. By the way, none of the Laws (Gravitation, Electrostatics) had been derived from anything more fundamental until HU, so there is no competition to my interpretation. Also, having a distance squared in the denominator is not the same as diffusing something onto an area. You might think that that is the only way a distance squared could arrive there. Read the article and you will see that that is not the case.
The inverse-distance-squared dependence was derived from unidimensional interaction through the HU Dynamics Law (the quantum lagrangian principle), which is more fundamental than the standard classical mechanics laws. I hate to burst your bubble, but HU is correcting the Laws of Dynamics also.
It is not what you expect. It is not what you learned, but it is what is consistent with observations, and that includes SN1A (SN1A supports the variability of light's tangential velocity while traveling across cosmological distances).
Read p.55-60
https://issuu.com/marcopereira11/docs/huarticle
MP: You are more of a mathematician than I am and seems to be capable of envision other scenarios...
It is not a different scenario, it is a question of local detail in yours. I've marked up your diagram to show an expansion of a small increment along the light path. The shell moves outward at dr/dt=c which is your "4D radius increasing at the speed of light." The projection of the light onto the shell also results in the light moving radially at c so dx/dt=0.
Light is moving radially at c - always. It is also moving at c (at short distances) along x, so dx/dt=c for light at short distances. Notice that light leaves matter at 45 degrees with the radial direction. When it arrives at point A, that angle has changed. That means that the projection of the 4D k-vector changes as light travels towards us. That means that ancient photons slow down as they approach us. This also means that light has a variable tangential velocity. That is undetectable in short experiments, but it is self-evident from the redshift of distant stars. Self-evident if you use a geometric paradigm and remember that the Universe is the polarizable medium: light propagates as the electromagnetic field generates polarization, which then regenerates the electromagnetic field. Polarization travels at constant radial velocity c, and so does light. The tangential velocity has to keep momentum conservation, that is, the 4D k-vector doesn't change direction, so its tangential projection has to change.
The angle subtended at the centre of the hypersphere by the two galaxies is ϴ and the angle moved by the light from the source is ϕ. That increases at dϕ/dt=c/r where r is the radius at that moment.
The distance between the galaxies was Dz when the light was emitted (measured around the hypersphere surface between "B" and "C") and is D0 now (measured between "A" and "D").
The maths then is just solving for D0 with the definition that D0/Dz=1+z, as you say because the wavelength is projected between the 4-vectors.
In which Universe would D0/Dz = 1+z be true? Just because it is OK for the extreme values of z, that doesn't make it correct.
D0/Dz is the same as R0/R(t) = 1/(cos(alpha) - sin(alpha)). This is high-school trigonometry..:) the Law of Sines... almost the only math one needs to understand the Universe. You do need physics to understand it, though.
See eq. 13.
See equations 14-16 in the article. This is not how one projects a 4D k-vector.
MP: My universe has no inflation, so what I am trying to calculate is not what you calculate.
Inflation finished at around 10^-32 seconds and we are talking about observation at least 300k years later so inflation is of no relevance at all. We are only talking about the current period of slow expansion.
MP: That means that Ancient photons slow down as they approach us. This also means that light has a variable tangential velocity. That is undetectable on short experiments, but it is self-evident from the redshift from distant stars.
That doesn't happen though. Redshifted light has a reduced frequency (confirmed by observations at radio frequencies) as well as a longer wavelength (confirmed by optical measurements using diffraction gratings) and the speed is still c.
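A trivial consistency check of that statement, using the standard relations (assumed here): the wavelength grows by (1+z), the frequency drops by (1+z), and their product stays at c.

c = 299792458.0                     # m/s
lam_emit = 500e-9                   # emitted wavelength, 500 nm (example)
z = 1.5

lam_obs = lam_emit * (1 + z)        # wavelength stretched by (1+z)
nu_obs  = (c / lam_emit) / (1 + z)  # frequency reduced by (1+z)
print(lam_obs * nu_obs, c)          # the product is still c, so the propagation speed is unchanged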
The difference between your geometry and mine is actually very small for typical redshifts, it only adds a very slight curve to the light path. The big difference is that you keep the expansion of your hypersphere at a constant speed whereas in the conventional model it is variable.
MP: in which Universe would D0/Dz=1+z be True?
Any. Imagine looking through a remarkable telescope at a distant galaxy and seeing a lighthouse flash once a minute. The distance between those flashes, the "wavelength", must increase in the same proportion as the distance between the remote galaxy and us. That is essentially the definition of z.
MP: D0/Dz is the same as R0/R(t) = 1/(cos(alpha)-sin(alpha))
The first part is correct but remember the geometry is hyperbolic, not Euclidean.
If you wonder where my equation comes from, note that the distance to be covered when the light is emitted is Dz but that has grown to D0 when it finally reaches us. The link explains the maths but note the variable names are completely different (for example Dz is called 'c' while the speed of light c is called 'α').
https://en.wikipedia.org/wiki/Ant_on_a_rubber_rope#An_analytical_solution
MP: My universe has no inflation, so what I am trying to calculate is not what you calculate.
Inflation finished at around 10^-32 seconds and we are talking about observation at least 300k years later so inflation is of no relevance at all. We are only talking about the current period of slow expansion.
You say potato, I say potatooo. Inflation for me means both the hyperinflationary and the slow expansion. The quality of my predictions indicates that there is no need for hyperinflation nor slow expansion. I am not going to say this again because it is becoming repetitive.
#################################################
MP: That means that Ancient photons slow down as they approach us. This also means that light has a variable tangential velocity. That is undetectable on short experiments, but it is self-evident from the redshift from distant stars.
That doesn't happen though. Redshifted light has a reduced frequency (confirmed by observations at radio frequencies) as well as a longer wavelength (confirmed by optical measurements using diffraction gratings) and the speed is still c.
Your argument is circular. You are saying that their wavelengths define how they are produced. You might fail to see it because you live in a singularity.
#################################################
The difference between your geometry and mine is actually very small for typical redshifts, it only adds a very slight curve to the light path. The big difference is that you keep the expansion of your hypersphere at a constant speed whereas in the conventional model it is variable.
There is a difference between being right and being wrong. That difference sometimes appears as a minute difference when you try to use your 'true' ansatz beyond infinitesimal variations. See the result (attached file) of your ansatz in calculating the observed Supernova distances... Totally wrong... despite being totally correct at very small distances.
#################################################
The big difference is that you keep the expansion of your hypersphere at a constant speed whereas in the conventional model it is variable.
I would say that that is a very big difference. One (yours) requires the existence of a new force (Dark Energy). Mine doesn't. It is a big difference... perhaps not big for a Mathematician, but very big for a Physicist.
#################################################
MP: in which Universe would D0/Dz=1+z be True?
Any. Imagine looking through a remarkable telescope at a distant galaxy and seeing a lighthouse flash once a minute. The distance between those flashes, the "wavelength", must increase in the same proportion as the distance between the remote galaxy and us.
First, I am surprised that you would even comment on this after I plotted the results of that model. See the attached plot. It is not right even in your own Universe.
You cannot evaluate the validity of my answers if you keep replacing my hypotheses with yours!!!
Nobody makes time measurements across cosmological distances. This is not even wrong. Light is traveling in a space that is not expanding. Don't mistake the expansion of a shockwave for the expansion of space!
The photon doesn't go around in 4D trailing the current observer (that would be a different trajectory). The photon leaves its hypersphere at 45 degrees. If it happens to be in the line of sight of the observer at the current epoch, it will be seen.
The distance that controls the decay of light (and thus the SN1A ruler) is not AC.
It is AB. This is supported by my SN1A predictions and the derivation of the Natural Laws by my theory.
In addition, given that D0/Dz is not 1+z, z2 will not fall on the same line AC. The photon emitted at z2 (a different epoch), leaving at 45 degrees, would not be along the line of sight of the photon emitted at the epoch z1.
This pesky 45 degrees rule is required such that retarded potentials remain valid at short distances. HU requires retarded potentials AT SHORT DISTANCES to be valid all the time and not only now.
At cosmological distances you have to replace the 45 degree distance and sqrt(2) c velocity by c and DeltaPhi (distance AB). Or you can just forget distance and variable speed of light and use DeltaPhi all the time with the velocity of light being c.
That is how HU corrected Newton's Law of Gravitation... Gauss' Law of Electrostatics...Biot-Savart Law.. Maxwell's Equations.
#################################################
As usual, Mathematicians stretched the Ant on an Elastic Rope way too much.
A photon traveling along that path would not satisfy the retarded potential concept. Either that, or the speed of light would change without any explanation of WHY. (Dark Energy is not an explanation).
You should go to Wikipedia and correct that nonsensical generalization.
#################################################
The first part is correct but remember the geometry is hyperbolic, not Euclidean.
You cannot evaluate the validity of my answers if you keep replacing my hypotheses with yours!!!!!!
Your geometry is hyperbolic. The XYZR is Cartesian. Hyperbolic geometry is only optional on xyztau. You don't need to have hyperbolic geometry along Cosmological distances and times. SR is only tested on local reference frames. There is no reason to consider it valid beyond that. To paraphrase an Illustrious Republican Senator, since I am Casting Asparagus on SR, I might as well state that, since SR is not proven to work at Cosmological distances (it doesn't work in my model, for that matter), there is no reason to believe it is valid for deriving Doppler shifts. Everything is based on untested Physics.
My derivation, on the other hand, depends on the Law of Sines... probably derived by a Greek Mathematician... and as solid as a rock (in a Cartesian Universe - like the one I proposed).
I still don't understand your definition of "Alpha"
"Alpha is a cosmological angle in Radians. Since the current epoch 4D radius is normalized to 1.0, the alpha is numerically equal to the distance on the current epoch hypersphere. Notice that one never sees anything in the current epoch hypersphere nor one can travel in the current epoch hypersphere since the hypersphere keeps expanding at the speed of light."
I cannot follow this.
Do you have any other source that defines this variable?
I found this equation at GitHub
https://github.com/ny2292000/TheHypergeometricalUniverse/blob/master/fit%20wavelength%20to%20distanceShort-radian-alpha.ipynb
alpha = [math.pi/4 - math.asin(1/math.sqrt(2)/(1+x)) for x in z]
Is this the variable?
yes. Alpha is the cosmological angle. The distance on the current hypersphere is R_0 * alpha
Since R_0 is normally normalized to 1, then distance is numerically equal to alpha
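For anyone who wants to reproduce the numbers, here is a runnable version of the notebook line quoted above (assuming the character missing in that line is a "1" in the numerator; at z = 0 it gives alpha = 0, and as z grows alpha approaches pi/4). The R0 value is the 13.58 Gly figure used throughout this thread.

    # Runnable sketch of the alpha(z) expression from the notebook quoted above.
    import math

    R0_Gly = 13.58                       # 4D radius of the current epoch in Gly (value used in this thread)
    z_values = [0.1, 0.5, 1.0, 3.0, 10.0]
    alpha = [math.pi/4 - math.asin(1/math.sqrt(2)/(1 + z)) for z in z_values]

    for z, a in zip(z_values, alpha):
        print(f"z = {z:5.1f}: alpha = {a:.4f} rad, distance = {a * R0_Gly:.2f} Gly")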
GD: The first part is correct but remember the geometry is hyperbolic, not Euclidean.
MP: Your geometry is hyperbolic. The XYZR is Cartesian
Yes, I realised after I posted that your extra dimension is spatial, not the usual temporal one, but I was doing some DIY and you had replied before I had a chance to correct my error. Yours is Euclidean.
I don't have time to reply to the rest at the moment.
Guganathan,
The Universe consists of all creatures.
This is not true. The Universe also consists of space, time, stars where there are no creatures... etc.
The Universe Actions
The Universe has no border. In it, many things were at rest for many years. Those things possess their own gravitational force.
There is no evidence that the Universe has no borders... Just because you cannot see one doesn't mean one doesn't exist. For instance, we could all live inside a 3D submanifold embedded in a 4D spatial manifold. Under those conditions we would be trapped within that hypersurface, not knowing what is outside of it.
########################################
Your principle looks more like pattern recognition of some known facts than anything else.
What can you do with your Universe Actions?
PS- I am asking these questions because they are easy to ask and they might put your work in perspective and preclude you from wasting time. Only a critic can save you time.
MP: Under those conditions we would be trapped within that hypersurface, not knowing what is outside of it.
The 3D surface has no boundary.
Since forces are known not to move you out of this World... :) there is no way for you to change your radial velocity or position. You don't need a physical boundary to be constrained within a 3D surface.
There is something called Newton's First Law... If there is no force affecting you radially, you will keep moving forward at the current speed. Not even quantum mechanics denies Newton's first law.
By the way, there are no rules about being trapped on a 3D Hypersurface. If the Universe is a 3D hypersurface, we are trapped there by definition.
A hypersurface is a mathematical concept up to now. I am the one bringing it into Physics. I make the rules.
That said, when you look into the past, you look across that 4D dimension. So you know what is behind you and you hope that there isn't anything coming your way...
Electromagnetism doesn't work in the reverse sense.. there are no advanced potentials...only retarded potentials also known as disadvantaged potentials...
You should take a look at my predictions of the overestimated SN1A distances using HU. I fared much better than your d(z)=ln(1+z)
You can clearly see that I use not a single parameter...:) there is only z there..
If you are not playing in the snow, you can download the repository and play with it.
https://github.com/ny2292000/TheHypergeometricalUniverse
MP: Since forces are known not to move you out of this World... :)
Then the momentum carried by EM must also be constrained to the surface, and so must the photons :-)
MP: You don't need a physical boundary to be constrained within a 3D surface.
Agreed, I was just pointing out the geometrical fact that the 3D surface has no boundary.
MP: If the Universe is a 3D hypersurface, we are trapped there by definition.
If it is by definition, then that must apply to photons too.
Going back a bit:
MP: Just consider light traveling from adjacent hyperspheres at 45 degrees.
That's the right way to look at it :-) To find the global path you then just trace out the locus that is always at 45 degrees to hypersphere.
GD: Redshifted light has a reduced frequency (confirmed by observations at radio frequencies) as well as a longer wavelength (confirmed by optical measurements using diffraction gratings) and the speed is still c.
MP: Your argument is circular. You are saying that their wavelengths define how they are produced.
No, I am saying that here on Earth we can measure both the wavelength and the frequency and the product of those tells us the speed of the light as it arrives. It is equal to the usual speed of light regardless of how far the light has travelled or how it was produced.
MP: This pesky 45 degrees rule is required such that retarded potentials remain valid at short distances. HU requires retarded potentials AT SHORT DISTANCES to be valid all the time and not only now.
And that is exactly what I drew in the detail circle in the attached image. Since dx/dt = dr/dt, the angle to the surface of the sphere is always 45 degrees. Have another look and you'll see that what is in the detail circle matches what you are telling me. Your straight line path has the angle varying along the path.
MP: As usual, Mathematicians stretched the Ant on an Elastic Rope way too much.
It's not real elastic, you can stretch it as far as you like ;-)
However, it's not wrong but now that I understand your idea better, it may be inappropriate. Rather than the usual derivation for redshift from the scale factor ratio, you would probably need to apply the Doppler effect to the rate of change of the path length.
MP: A photon traveling along that path would not satisfy the retarded potential concept. Either that or have the speed of light change without any explanation on the WHY.
That's wrong. The light moves at c relative to the piece of elastic over which it is currently moving. In your model, it moves at 45 degrees to the hypersphere with a 4D speed of c√2 at every point on the journey, which corresponds to dx/dt=c. That means it only satisfies the retarded potential if it is moving locally at c.
MP: Nobody makes time measurements across cosmological distances.
Of course they do, that's one of the key measurements for Type 1a SNe! See the linked paper by Goldhaber, I'm surprised you don't know of this. The duration of the supernovae is extended by exactly (1+z).
Systems like eMerlin also measure redshift directly from the received frequency.
https://arxiv.org/abs/astro-ph/0104382
Marco, a few times in this thread, you have said things like:
that is exactly how my model proves that there is no need for Dark Matter, Dark Energy, GR, Friedmann-Lemaitre model...L-CDM
Dark Matter was first invoked to explain phenomena on local rather than cosmological scales - the observed rotation curves of galaxies, and the observed dynamics of galaxies in clusters. If your model does away with Dark Matter, it must propose an alternative non-Dark Matter explanation with regard to observed galaxy motions. How does it fill that gap?
When a mass of metal moves in a magnetic field, or when the magnetic field through a stationary mass of metal is altered, an induced current is produced in the metal.
The forefinger, the middle finger and the thumb of the right hand are held in three mutually perpendicular directions. If the forefinger points along the direction of the magnetic field and the thumb is along the direction of motion of the conductor, then the middle finger points in the direction of the induced current. The rule is also called the generator rule.
In an induction furnace, high temperature is produced by generating eddy currents. The material to be melted is placed in a varying magnetic field of high frequency. Hence a strong eddy current is developed inside the metal. Due to the heating effect of the current, the metal melts.
1. Electromagnetic waves can be seen in all things in the universe.
2. Many things in the universe possess magnetic power.
4. When the eddy current increases, the temperature also increases in many things in the universe. Ultimately it becomes a nebula.
5. So the Earth would lack living organisms and change into a star.
8. As far as the latest findings go, there is no chance to live on other planets and there are also no living organisms on other planets. But we can try to create, artificially in an Earth laboratory, conditions/environments like those of some other planets.
9. By this artificial method, living organisms could be produced on other planets.
GL: The Earth will change into a Small Star???
No.
First, stars contain mostly hydrogen and helium. The hydrogen fuses to make heavier elements, giving off heat. The Earth is a rocky planet, so it is made of heavy elements already; the energy is mostly gone.
Second, there is a minimum mass for a star, which is about 75 times the mass of Jupiter; the Earth has about 318 times less mass than Jupiter.
You would need 24000 Earth masses of hydrogen to make the smallest normal star.
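A quick arithmetic check of those numbers (75 Jupiter masses for the smallest star, Jupiter at roughly 318 Earth masses):

    # Quick check: minimum stellar mass ~75 Jupiter masses, and Jupiter ~318
    # Earth masses, so the smallest normal star is roughly 75 * 318 Earth masses.
    min_star_in_jupiters = 75
    jupiter_in_earths = 318
    print(min_star_in_jupiters * jupiter_in_earths)   # ~23850, i.e. about 24000 Earth masses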
George, you’ve got an A+ in this exchange. Really good work. You saved me from FURTHER embarrassment on my claim that ancient photons slow down as they arrive close to us. :)
######################################################
MP: Since forces are known not to move you out of this World... :)
Then the momentum carried by EM must also be constrained to the surface, and so must the photons :-)
###################
(an aside, as the photon 4D k-vector changes direction the lost momentum must be taken by the Universe itself. It should be felt as a photonic pressure, undetectable but still there).
###################
As my good old friend Maxwell used to say, Electromagnetic waves induce polarization, which induces Electromagnetic waves.
Since the polarizable medium is in the hypersphere, photons are trapped in the hypersphere and move radially at the speed of light.
You might ask: what about the dilaton field that propagates past the hypersphere (the dilaton field propagating at angles larger than 45 degrees)?
It might or might not even exist. There is no reason to consider that the outer space (beyond the hypersphere) is even there. I am not a fan of creating space. That said, the proposed cosmogenesis is based on a dimensionality transition followed by metric fluctuation decay. Considering the model as not having anything beyond us radially is not out of consideration. We cannot see what is ahead of us. It might just be a lot of nothing. I prefer a full Cartesian Infinite Manifold... but I am open to discussion on that subject.
The other way to trap those pesky photons inside the hypersphere is that any dilaton field traveling faster than the hypersphere will never be dephased. Remember the kid on a swing: if there is no dephasing, there is no interaction. This part is like a Feynman Path diagram. The only path that matters is the path that can do something (where the dilaton field can be dephased).
In addition, take Maxwell's equations as the base for the argument.
EM -> Polarization -> EM. EM is created at each de Broglie step out of the dynamic polarization and so it is trapped.
#######################################################
MP: You don't need a physical boundary to be constrained within a 3D surface.
Agreed, I was just pointing out the geometrical fact that the 3D surface has no boundary.
Thanks
#######################################################
MP: If the Universe is a 3D hypersurface, we are trapped there by definition.
If it is by definition, then that must apply to photons too.
It is not by definition. Take Maxwell's equations as the base for the argument. EM -> Polarization -> EM. EM is created at each de Broglie step out of the dynamic polarization and so it is trapped.
#######################################################
Going back a bit:
MP: Just consider light traveling from adjacent hyperspheres at 45 degrees.
That's the right way to look at it :-) To find the global path you then just trace out the locus that is always at 45 degrees to hypersphere.
That is exactly what I said. It is consistent with Retarded Potentials.
#######################################################
#######################################################
#######################################################
GD: Redshifted light has a reduced frequency (confirmed by observations at radio frequencies) as well as a longer wavelength (confirmed by optical measurements using diffraction gratings) and the speed is still c.
MP: Your argument is circular. You are saying that their wavelengths define how they are produced.
I was wrong here and I explain my mistake below.
#######################################################
#######################################################
#######################################################
MP: This pesky 45 degrees rule is required such that retarded potentials remain valid at short distances. HU requires retarded potentials AT SHORT DISTANCES to be valid all the time and not only now.
And that is exactly what I drew in the detail circle in the attached image. Since dx/dt = dr/dt, the angle to the surface of the sphere is always 45 degrees. Have another look and you'll see that what is in the detail circle matches what you are telling me. Your straight line path has the angle varying along the path.
You are right! I struggled with that for a long time. The 45 degrees is always mentioned with short distances. That is because the short-distance propagation requires 45 degrees
Conservation of momentum requires that angle to change as the photon propagates. The 4D k-vector (4D momentum) doesn't change (no scattering is being considered in the observation of a star in the past).
How does that work?
Take a look at the always 45 degrees in my Universe timeline. It comes as red dots. So the SN1A survey doesn’t support always 45 degrees angle. That is an observational fact.
From that I concluded that, irrespective of the angle with the radial direction, tangential momentum conservation will require the radial light speed to adjust itself as the photon travels towards us.
How can light change radial speed? The answer is already explained based on EM->Polarization-> EM.
How much speed change has to take place between two adjacent de Broglie steps? Let’s see. One de Broglie step is 0.19 femtometers. The radius of the Universe is 13.58 Gly, so there are 6.7619326E41 steps.
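A quick check of that step count, assuming one step of 0.19 fm and the 13.58 Gly radius quoted above:

    # Check of the step count just quoted: 4D radius of 13.58 Gly divided by
    # a de Broglie step of 0.19 femtometers.
    LY_IN_M = 9.4607e15                  # meters per light-year
    radius_m = 13.58e9 * LY_IN_M         # 13.58 Gly in meters
    step_m = 0.19e-15                    # 0.19 femtometers in meters
    print(radius_m / step_m)             # ~6.76e41 steps, matching the figure above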
So changes in velocity and angle are very small between de Broglie steps. Each de Broglie step has a polarization (time dependent). That means that dilators are moving (their FS torsion is changing) and producing imprints on the subsequent dilaton field. That imprint will impact torsion onto the next de Broglie step.
I struggled with the question about the polarization of the vacuum. Does the vacuum become polarized by light? My answer is yes. It has to be, otherwise Maxwell’s equations would be valid only where there is matter.
So, I should say that the dilaton field propagation speed changes as photons come closer to us. Photons, themselves, are not the dilaton field. They are the spatial modulation onto the dilaton field due to the continuously changing polarization (that includes the vacuum polarization itself).
THIS MEANS THAT I MADE A MISTAKE. I HAVE BEEN SAYING THAT THE SPEED OF LIGHT SLOWS DOWN FOR ANCIENT PHOTONS AS THEY MOVE CLOSER TO US. I AM WRONG ABOUT THAT. THE SPEED OF ITS CARRIER (THE DILATON FIELD) ADJUSTS ITSELF AS THE ANGLE OF PROPAGATION WITH RESPECT TO THE FS CHANGES.
THANKS, GEORGE….:) ALL THE EXASPERATION I SUFFERED IN TRYING TO EXPLAIN MY THEORY HAS BEEN PROPERLY PAID OFF..:) AND I MEAN IT..:) I owe you a beer… :)
The extra missing piece of the puzzle has to do with equations 58-62. They are the base for the calculation of the Natural Laws. They state that the dilaton field decays in intensity with each de Broglie step (circular ring around the dilator).
The question is: how do the individual rings merge with the Whole Universe de Broglie step?
Since light decay had to do with the number of Universe Rings, I concluded that individual dilaton fields merge with the whole Universe dilaton field at each de Broglie step. This is the reason I corrected Newton’s Gravitational Law, Biot-Savart, and Maxwell’s equations for cosmological distances and for short distances also. See the equations on page 79 of
https://issuu.com/marcopereira11/docs/huarticle
I forgot to put numbers on them. The number of de Broglie steps crossed by photons from any star in a given epoch is the same, so their decay is the same. You can project the hypersphere around you as a 3D shell with all stars (emitting at 45 degrees in each direction).
Since the de Broglie steps for the individual dilaton fields and the Whole Universe aggregated dilaton field merge (they are produced again at each step), light decay does not have a straightforward distance dependence!
#######################################################
#######################################################
#######################################################
MP: As usual, Mathematicians stretched the Ant on an Elastic Rope way too much.
It's not real elastic, you can stretch it as far as you like ;-)
However, it's not wrong but now that I understand your idea better, it may be inappropriate. Rather than the usual derivation for redshift from the scale factor ratio, you would probably need to apply the Doppler effect to the rate of change of the path length.
Not sure what you mean by this. I suspect I don't belong here.
#######################################################
MP: A photon traveling along that path would not satisfy the retarded potential concept. Either that or have the speed of light change without any explanation on the WHY.
That's wrong. The light moves at c relative to the piece of elastic over which it is currently moving. In your model, it moves at 45 degrees to the hypersphere with a 4D speed of c√2 at every point on the journey, which corresponds to dx/dt=c. That means it only satisfies the retarded potential if it is moving locally at c.
I made a mistake to get into the stretched Universe of yours… I have no business being there.. Pardon, I am leaving..:)
#######################################################
MP: Nobody makes time measurements across cosmological distances.
Of course they do, that's one of the key measurements for Type 1a SNe! See the linked paper by Goldhaber, I'm surprised you don't know of this. The duration of the supernovae is extended by exactly (1+z).
Systems like eMerlin also measure redshift directly from the received frequency.
This is an amazingly good observation. I suspected you would correct me about my conclusion that Ancient Photons slow down as they approach us. I will have to drop that. THANKS
To calculate the speed of light as you understand it (seen as a ratio between wavelength and period), one has to consider not only the change in the projection of the k-vector (projection of wavelength) but also the change in direction of proper time. That projection of periods (which will match the projection of k-vectors, since proper time is perpendicular to the FS on the xyzPhi cross-section, so we would just do a similar calculation on that cross-section to see how the period projects onto our local hyperplane) will result in a constant velocity.
I MISTOOK THE SPEED OF LIGHT WITH THE SPEED OF THE DILATON FIELD, MY MISTAKE.
Mea Culpa. I was wrong and am relieved to correct it. The more I have to defend the more difficult the task is.
Thanks
PS- I told you that criticism is the only way to go!!!!
##############################################
@Ray Butler
Marco, a few times in this thread, you have said things like:
that is exactly how my model proves that there is no need for Dark Matter, Dark Energy, GR, Friedmann-Lemaitre model...L-CDM
Dark Matter was first invoked to explain phenomena on local rather than cosmological scales - the observed rotation curves of galaxies, and the observed dynamics of galaxies in clusters. If your model does away with Dark Matter, it must propose an alternative non-Dark Matter explanation with regard to observed galaxy motions. How does it fill that gap?
MP: It might be the case that the temporal order is incorrect. I should have stated that
Dark Matter was used in L-CDM instead. Since I replicated the 'corrected observed' distances without any parameters, I challenge the need for Dark Matter in Cosmology.
Your question is great.
The Galaxy rotation puzzle is 'solved' by the derived law of Gravitation. See equation 175 and 180 on
https://issuu.com/marcopereira11/docs/huarticle
I put quotation marks on 'solved' because I only argued that the lowest potential for the galaxy is achieved when all velocities are perpendicular to the radial direction and when V1 and V2 are such that their angular velocities match.
This argument was done considering a central mass. The first consideration is that w2 (the outlying star) cannot be traveling faster than the rest (while propelled by interactions with the rest of the galaxy). Since V1.V2 is in the denominator, the only way to make the potential lower is by increasing w2 until it matches w1. That is, the lowest potential state for the derived gravitational potential is when all stars are rotating together.
A more convincing argument could be made if I had done hydrodynamic simulation and proved it. I cannot do it for lack of resources. I was only able to provide this argument.
Please feel free to counter it (based on my derived potential).
@Ray Butler
Other evidence for Dark Matter is the 150 Mpc feature in the Beutler et al. article. That has been used to say that Dark Matter is anchored and cold. It brings back matter that might be diffusing from a region where dark matter is.
My 2-point correlation function clearly doesn't show that resonance and so I am assigning that observation to statistical noise.
The other evidence (gravitational lensing where there is no visible matter) is being assigned to an antimatter Universe (a lagging hypersphere) traveling just behind us.
That is the HU candidate for Dark Matter. This is consistent with the Gamma Ray Bursts. They could have been created in the early epochs, when matter and antimatter stars or black holes aligned themselves along the radial direction and annihilated into Gamma Rays. I suppose they were just stars instead of black holes.
My theory should be consistent with reality. If reality tells me that there is such a thing as Einstein's Ring, then there's got to be something in that space that curves it.
My theory derives the same gravitational lensing as GR.
MP: Take a look at the always 45 degrees in my Universe timeline. It comes as red dots.
Bingo, you've got it :-)
Compare it with the attached diagram from Ned Wright's tutorial, you'll see why I'm already familiar with the concepts.
The link takes you to the page 1 of the tutorial, the diagram comes from page 3 so you can see how your diagram compares to conventional theory.
MP: PS- I told you that criticism is the only way to go!!!!
My comments are always intended as constructive criticism where possible, many people miss that because their response is automatically defensive.
MP: So the SN1A survey doesn’t support always 45 degrees angle. That is an observational fact.
Maybe, maybe not. You have to work out how both z and the apparent magnitude vary as a function of radius to find out since those are the only observables for SNe.
http://www.astro.ucla.edu/~wright/cosmo_01.htm
MP: Take a look at the always 45 degrees in my Universe timeline. It comes as red dots.
Bingo, you've got it :-)
Compare it with the attached diagram from Ned Wright's tutorial, you'll see why I'm already familiar with the concepts.
You have been studying behind my back...:) No wonder you can correct like that..:)
The link takes you to the page 1 of the tutorial, the diagram comes from page 3 so you can see how your diagram compares to conventional theory.
MP: The reason SDSS didn't look for density waves along distance is because they studied Ned Wright's Cosmology Tutorial. It states that
The Universe is Homogeneous and Isotropic
I dispute that and the data tells me I am right...:)
I don't need to ever read conventional theory to understand that line-of-sight would fail the time-of-arrival constraint. That is why I kept erring for the longest time and repeating that light always travelled along the red dots (always at 45 degrees). That would make everything fine. One would be able to see up to a radian (Cosmological angle). Now, one can only see up to pi/4 (d=0.79). Imagine my confusion: I see that light is coming from R(T)=0 but somehow that light maps to a star pi/4 distance away. It takes a little bit of reasoning to understand that..:) The hint is to make the smallest hypersphere not a point. Then you will realize that light is not coming from the equivalent (on that epoch) distance R(t). So visibility (45 degrees emission) will make you see a very specific point on an earlier epoch in any direction.
That cosmological angle of 45 degrees will map to a distance to the center equal to the 4D radius R0. But the galaxy will be projected at pi/4 distance on the outer hypersphere. This is due to the constraint for seeing something.
Homework - Realize that we can only travel to pi/4 (or the first quadrant)... never further... perhaps at warp speed you can..:) In fact, there is a way... but it requires you to be converted into space torsion and to have a space torsion converter (to revert the process) on the other side...:) like the Stargate...:) Space torsion will propagate through time, bounce back from time zero, and track down the Stargate. Simple as butter...
MP: PS- I told you that criticism is the only way to go!!!!
My comments are always intended as constructive criticism where possible, many people miss that because their response is automatically defensive.
George, you are my dream come true..:) - minus the reading-my-work part...:)
You are like me... I also think I don't need to read no stinking Inflation Theory to criticize it...:) I certainly don't want to read this
https://arxiv.org/pdf/1607.03155.pdf
since it seems that it is exactly what I expected. Eq. 4 tells me everything I need to know. See the attachment. They are performing an angular 2-point correlation and then decomposing it in a spherical harmonics basis (just the Legendre Polynomial part), paying attention to just one angle.
I see that and consider it crazy (unless the harmonics are visible by eyeball). Anything will have some natural distance or in this case natural angle. The fact that they find a maximum in a noise decomposition supports my theory. Just compare their signal with my signal (from the same data).
I did the equivalent (2-point correlation along distance plus FFT). I can see the oscillations by eyeball. I can measure the wavelength of the oscillation just by looking where the peak is... It is not what I hoped for.. around 0.3 R0... I would rather see it at 0.24..:) more elegant... but the data is King... it tells me otherwise..
The fitting in that paper is heroic..:) but for me to understand it would be a waste of time, since it pays attention to details of angular volume and z. The reason is that the signal-to-noise stinks and that is a necessity to be able to say anything. That said, they should not have said a thing. Certainly not after seeing my results.
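For illustration only, here is a minimal sketch (not the actual scripts in the repository) of the kind of analysis described above: bin the radial distances, take the fractional density fluctuation, FFT it, and read off the dominant wavelength. Synthetic data with an injected 0.3 R0 modulation stands in for the SDSS catalogue.

    # Minimal illustrative sketch of "2-point correlation along distance plus FFT"
    # on synthetic data; the real analysis uses the SDSS BOSS catalogue.
    import numpy as np

    rng = np.random.default_rng(0)
    R0 = 1.0                              # 4D radius normalized to 1
    wavelength_true = 0.3 * R0            # injected radial modulation
    r = rng.uniform(0.1, 0.7, 200000)     # radial positions of fake "galaxies"
    keep = rng.uniform(size=r.size) < 0.5 * (1 + 0.3 * np.sin(2 * np.pi * r / wavelength_true))
    r = r[keep]

    counts, edges = np.histogram(r, bins=512, range=(0.1, 0.7))
    density = counts / counts.mean() - 1.0          # fractional over/under-density along r
    spectrum = np.abs(np.fft.rfft(density))
    freqs = np.fft.rfftfreq(density.size, d=edges[1] - edges[0])

    peak = freqs[np.argmax(spectrum[1:]) + 1]       # skip the zero-frequency bin
    print("recovered wavelength ~", 1.0 / peak)     # should come out close to 0.3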
MP: So the SN1A survey doesn’t support always 45 degrees angle. That is an observational fact.
Maybe, maybe not. You have to work out how both z and the apparent magnitude vary as a function of radius to find out since those are the only observables for SNe.
You read my mind.... you just don't read my article..;)
There is Appendix F, which explains how the Absolute Luminosity (which becomes apparent magnitude on the telescope detector) depends upon [Carbon]^2 and
how the Chandrasekhar mass depends upon G. Eq. 175 in this nice article below showcases the G dependence upon R_0 (the equation was written for the current hypersphere); change it to R(T) for other epochs.
So, you are a smart fellow ...:) and will not disappoint me..:) Follow the logic
The Arnett article was not perfectly clear, but since I am like yourself (two peas in the same pod)...:) I filled in the blanks... I concluded that the Luminosity comes from the decay reaction of 56Ni, so it is proportional to the amount of Nickel one has at any given time. The amount of Ni associated with the LUMINOSITY PEAK is equal to dN/dt times the rising edge duration of the Luminosity profile. SN1A distance measurements only pay attention to Luminosity to find that peak (it corrects the width using the WLR; see Appendix F). Due to the coasting velocity approximation, that leading-edge duration is always the same, so it is irrelevant; so, to first approximation, the Luminosity is proportional to dNi/dt or [C]^2.
Q.E.D.
That is why I corrected the distances by R(T)^(-3/2).
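For reference, the standard Chandrasekhar mass scales as G^(-3/2); the tiny check below only verifies that scaling (the dimensionless prefactor is dropped) and does not reproduce the Appendix F derivation of the distance correction itself.

    # The standard Chandrasekhar mass scales as G^(-3/2):
    #   M_Ch ~ (hbar*c/G)**1.5 / (mu_e * m_H)**2   (dimensionless prefactor omitted)
    # Quick check that doubling G reduces M_Ch by a factor 2**1.5 ~ 2.83.
    hbar, c, m_H, mu_e = 1.055e-34, 2.998e8, 1.673e-27, 2.0

    def m_ch(G):
        return (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2

    G0 = 6.674e-11
    print(m_ch(G0) / m_ch(2 * G0))        # ~2.83 = 2**1.5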
That is how one creates a theory with no stinking parameters! It is easy because they are not necessary.
By the way, you can see my d(z) performance in the plot below. You see...not a single parameter... perfect prediction...
Notice that it doesn't have numerical parameters. It has Physics or Physical constraints imposed by topology and model for interaction and model for matter. Starting from those three things, I built this parameterless theory.
The fewer degrees of freedom a theory has, the better...;) We don't want no stinking parameters!
https://arxiv.org/pdf/1607.03155.pdf
https://issuu.com/marcopereira11/docs/huarticle
http://iopscience.iop.org/article/10.1086/510375/pdf
https://en.wikipedia.org/wiki/Chandrasekhar_limit
@George Dishman
Missing signs of intelligence, George. Did you finally read my article or run the scripts?
MP: Missing signs of intelligence, George.
Ad hominem abuse won't get you anywhere, Marco.
I created this theory in 2004. I asked people to provide criticism or suggestions or any feedback. They didn't.
So, I tried the next best thing. I applied the theory to GR, L-CDM, and the SN1a Supernovae Survey based d(z), and challenged the framework from which they derive their sense of self-worth.
That is what you are talking about. Your sense of self-worth is tied to GR. You did what everyone is also doing.
There is no easy path to challenge a widespread scientific fad.
You decided not to rebut my ideas. That is your choice.
I guess, mankind and I, have the most to lose from the behavior of the community.
MP: Your sense of self-worth is tied to GR
Complete rubbish, my "sense of self-worth" is tied to the way I conduct my personal life and interact with others, GR is just a piece of maths that accurately describes gravitational effects. You should also note that I didn't use GR at any point in my discussions with you.
MP: You decided not to rebut my ideas
That is not correct, I pointed out that your version of the standard hypersphere model together with the fact that Maxwell's Equations require the speed of light to be c locally at all points along the path results in a logarithmic redshift relation. It is just a correction to your maths provided as "constructive criticism" so it is up to you to then consider whether that correction has any impact on your model. It might rebut it or you might be able to make adjustments to accommodate it, that is for you to discover.
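To make that correction concrete, here is a small numeric sketch of the logarithmic relation being described (this is the conventional calculation George refers to, not HU): with R(t) = c t and light moving locally at c along the hypersurface, the swept angle is the integral of c dt / R(t), which equals ln(1+z) when 1+z = R_obs/R_emit under the usual wavelength-stretching assumption.

    # Numeric check of the "logarithmic redshift relation": with R(t) = c*t and
    # light moving locally at c along the hypersurface, the angle swept is the
    # integral of c*dt/R(t) = dt/t, which equals ln(1+z) if 1+z = R_obs/R_emit.
    import math

    def swept_angle(t_emit, t_obs, steps=100000):
        dt = (t_obs - t_emit) / steps
        return sum(dt / (t_emit + (i + 0.5) * dt) for i in range(steps))  # midpoint rule for integral of dt/t

    t_obs = 1.0                            # current age, arbitrary units
    for z in (0.5, 1.0, 3.0):
        t_emit = t_obs / (1 + z)           # 1+z = R_obs/R_emit = t_obs/t_emit
        print(z, swept_angle(t_emit, t_obs), math.log(1 + z))   # the two numbers agree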
MP: I guess, mankind and I, have the most to lose from the behavior of the community.
I acted in good faith, giving you a fair peer review of that aspect of your model. If you cannot take that input in similar fashion, it is only your own behaviour that you need to examine.
You are correct. God only knows how much pleasure you derive from your knowledge of GR. Let's not be picky.. You should also consider SR... I am correcting both.
You might understand my frustration with you and JD (and everyone else) when I provided evidence that the Universe is hyperspherical (through the existence of hyperspherical acoustic waves) on the SDSS BOSS dataset.
I spent very long hours trying to educate you and JD and everyone else about the theory. The last argument was something that most people with basic knowledge of computers could do in half an hour: run the python scripts, see the map of the Universe and the Neutronium Acoustic Waves - that evidence is enough to disavow GR, SR, etc.
I cannot get one person to do it, see the data, and tell me that they saw the data and that at first glance they do appear as hyperspherical acoustic waves. (I wouldn't expect you to give scientific corroboration, but I would hope you would enjoy it and be puzzled).
Somehow this last step cannot be reached.
#############################################
With respect to "speed of light to be c locally at all points along the path results in a logarithmic redshift relation", that is not a requirement.
You cannot state that the speed of light is c locally at all points along the path... That is an unwarranted requirement. One only knows c locally.
#############################################
Please be understanding with my complaints. You were the closest to someone who understood the theory but somehow couldn't run some python scripts. It is frustrating.
Also, my complaint was a rebuttal to your comment about
"Ad hominen abuse won't get you anywhere Marco"
when I kindly reached out.
as if you knew some other path that would lead to better results. It was offensive to me, since I have not been given the benefit of the doubt since 2004 (13 years). That is unfair.
What I did to you was unfair also... but I hope you will understand that it is warranted.
George Dishman
I had to retract my conclusion that photons did not travel along a log spiral. They do. The solution to the conundrum is to separate the condition for tangential momentum conservation from the optical path. Those are two distinct things.
A reader called Marmoo argued against my position and eventually I realized what had to be done.
The plot would be different from the plot you have, since the Universe did not start as a point. Different current times would be seeing slightly different positions in the initial hypersphere. The radius of that hypersphere is known to be around 400,000 light years.
Hence, the plot is not correct.
Hi Marco,
MP: I had to retract my conclusion that photons did not travel along a log spiral. They do.
I'm glad we've reached some consensus on that.
MP: The plot would be different from the plot you have, since the Universe did not start as a point. Different current times would be seeing slightly different positions in the initial hypersphere.
Sure, the red lines in the standard plot are for a single observation time. The comoving observer's worldline is purely radial so for an earlier observation, the red lines would run inside those shown and intersect any circle of uniform age at a different point.
MP: The radius of that hypersphere is known around 400,000 light years.
Was that number just a typo?
That's not feasible, the Andromeda Galaxy for example is roughly 3 million light years from us which would be about the same as the circumference of the hypersphere. That would mean Andromeda was actually the Milky Way seen in light that has travelled once round the universe. The problem is that with only space for a single galaxy, every galaxy should look identical and they should be regularly spaced. That of course is not the case.
Before I say anything, I would like to apologize for being impolite.
We spent a lot of time discussing my theory. You clearly seemed to understand what I said. The final step would require you to derive a conclusion about my discovery and run a simple python script.
At that point, you stopped. You are not the only one. Every single scientist, when facing the Sloan Digital Sky Survey data, just disregards that evidence and keeps talking from the logical framework of GR (e.g. comoving distances, etc).
That explains my misbehavior. It does not justify it, but it explains it.
I don't want to misbehave again, so I just placed this apology here, which is sincere, since you were the brightest person to engage me (until I was engaged by Marmoo, not his real name..:). Scientists keep a very low profile when agreeing with me..;)
If you want to know how a famous scientist did when facing the same problem, you can check here:
https://www.researchgate.net/post/Refutal_of_General_Relativity
Cheers
Hi Marco,
MP: Before I say anything, I would like to apologize for being impolite.
Thanks. It's easy to get caught up in these discussions and perhaps take a tone that is later regretted, I've done the same myself now and then so no hard feelings.
MP: We spent a lot of time discussing my theory. You clearly seemed to understand what I said.
To a degree, the whole hypersphere model is quite familiar to me, but why you think that would produce any quantisation of observations is unclear.
MP: The final step would require you to derive a conclusion about my discovery and run a simple python script. At that point, you stopped.
I did but for good reason. The next step wouldn't be to run your script, it would be for me to independently develop my own script to test your hypothetical structure without having seen your script at all. In fact it would be better if I wrote my code in a different language to reduce the risk of a common coding error. That would take a lot of effort though and I have several projects of my own that I can't progress through lack of time, mainly in the field of gravitational waves where the O3 run of LIGO and Virgo is producing results faster than anyone can keep up.
I'm glad "Marmoo" was able to help more than I, at least now you can consider how that might affect your own code.
Best regards,
George
As they say, the Perfect is the Enemy of the Very Good
You believed you could recreate my ideas from your partial understanding. You are very bright, but not enough..;)
It would suffice for you to understand what I did. You were not under Oath to give support for my ideas.
#############################################
When you say that hyperspherical model is something you are familiar with, please show me a link to prior art where the Universe is a lightspeed expanding hyperspherical hypersurface.
If such article exists, I should place it in my references. If the article does not exist, then what are you saying?
To say that 4D spacetime is a hyperspherical hypersurface does not cut it, for the simple reason that if the Universe is a lightspeed expanding hyperspherical hypersurface, the Universe does not obey Einstein's equations, and thus one should not try to localize the discussion within an incorrect paradigm.
#############################################
You did a good job and I was wrong about the light propagation. That said, the description is not complete (I haven't written down the equations of how light propagates). Needless to say, they don't match what we have in store (Maxwell's Equations).
Marmoo placed me in an untenable position by creating paradoxes.
https://hypergeometricaluniverse.quora.com/Mar-Moo-95-Comments-and-Counting-1
Sometimes you cannot beat paradoxes..>:)
George,
To a degree, the whole hypersphere model is quite familiar to me, but why you think that would produce any quantisation of observations is unclear.
To understand quantization you have to understand how I used the extra spatial dimension to be able to see the Fabric of Space. Once I saw it, I was able to create a new model for matter based upon the Fundamental Dilator and replace Newton's Laws of Dynamics with the Quantum Lagrangian Principle.
https://hypergeometricaluniverse.quora.com/The-Fundamental-Dilator
The Fundamental Dilator is supported by experiments (Couder) and observations (Huygens).
The Fundamental Dilator is simpler and thus more Fundamental than what we have at this time.
In addition, the SDSS data refutes Singularity, Big Bang, etc.
Best regards,
Marco
I recreate the link to the data analysis.
https://showcase.dropbox.com/s/The-Hypergeometrical-Universe-Theory-J0vIT6EenofXf7QAd7heF
Don't worry, George. This is for the general public.
Hi Marco,
MP: When you say that hyperspherical model is something you are familiar with, please show me a link to prior art where the Universe is a lightspeed expanding hyperspherical hypersurface.
Split that into two aspects, the hyperspherical geometry and the rate of increase of the radius.
MP: If such article exists, I should place it in my references. If the article does not exist, then what are you saying?
The hyperspherical shape is what Friedmann published in 1922 which kicked off the whole "expanding space" model:
https://en.wikipedia.org/wiki/Alexander_Friedmann#Professorship
https://en.wikipedia.org/wiki/Shape_of_the_universe#Curvature_of_the_universe
However, the big difference with yours is that the curvature is intrinsic so there is no constraint on the rate of change of radius of curvature. Imagine a cork bobbing on slow ocean waves. At one moment it is in a trough and a little later on a crest while between those there is a moment when the surface is flat (but tilted). The radius is going between two values passing through infinity between the extremes, so the rate of change varies from zero to infinity and back twice on every wave, but the surface is rising and falling quite slowly.
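To put a number on that, here is a small sketch of the radius of curvature along a gentle wave y = A sin(kx): it is finite at the crest and effectively diverges where the profile is momentarily straight, even though the surface itself moves slowly.

    # Radius of curvature along a slow wave y = A*sin(k*x): finite at the crest,
    # effectively infinite where the profile is momentarily straight, even though
    # the surface itself rises and falls slowly.
    import math

    A, k = 1.0, 2 * math.pi / 100.0         # gentle wave: amplitude 1, wavelength 100

    def radius_of_curvature(x):
        y1 = A * k * math.cos(k * x)         # dy/dx
        y2 = -A * k * k * math.sin(k * x)    # d2y/dx2
        return float('inf') if y2 == 0 else (1 + y1 * y1) ** 1.5 / abs(y2)

    for x in (25.0, 40.0, 50.0):             # crest, flank, zero crossing
        print(x, radius_of_curvature(x))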
MP: To say that 4D spacetime is a hyperspherical hypersurface does not cut for the simple reason that if the Universe is a lightspeed expanding hyperspherical hypersurface, the Universe does not obey Einstein's equations
OK, now think about what that means, you can turn it around logically: If the universe is a hypersphere and it obeys Einstein's equations then the radius of curvature is not increasing at light speed.
A hyperspherical shape with unconstrained rate of change of radius is the accepted model and fits observations, while your hyperspherical model with the rate of increase of radius fixed at the speed of light does not. My assessment of those statements is that you have disproved the hypothesis that the rate of increase is c, not that you have disproved the standard model.
Hi George,
You shouldn't split my paradigm in two parts.
One does not split Einstein's 4D Spacetime into 3D Space and Time and say "I am familiar with it." Newton did that 300 years ago. That would be offensive to Einstein, so you can imagine that I also consider it offensive.
I cannot refer to Friedmann's paper because it is not a 4D Spatial Hypersphere. There is a difference between space and time. And there is a difference between 4 and 5 dimensions. It is also not expanding at the speed of light.
Having a Fourth Spatial dimension is what allows me to make a model about the Fabric of Space. The FS does not bob up and down. It is twisted by interaction. The torsional angle defines the Absolute Velocity with respect to the FS. Hence, HU is a theory where the cumulative point at v=c is defined by having the absolute velocity equal to c, or the torsional angle equal to 45 degrees.
As for the Curvature of the FS: I consider the FS in general as the relaxed FS, and that is just a smooth circle with a radius equal to the 4D radius of the Universe, or 13.58 billion light years.
OK, now think about what that means, you can turn it around logically: If the universe is a hypersphere and it obeys Einstein's equations then the radius of curvature is not increasing at light speed.
What I say is more than this. What I say is that if the Universe has that shape and the radius is expanding at the speed of light, NO argument can be based upon GR, de Sitter, Friedmann Model, or comoving distances, because that configuration is not a solution to Einstein's equation and anything based upon Einstein's equation is refuted by Reality.
So, this is an observational argument, done mostly outside my theory.
The observations are:
a) SDSS - Here the observations imply the hyperspherical topology. CMB homogeneity and isotropy also support the topology. Hubble Law requires lightspeed expansion.
b) Planck CMB survey. Here I point out that Dr. Fulvio Melia's R_h=ct model is better than L-CDM and thus refutes L-CDM. This means that R_h=ct supports lightspeed expansion.
c) Supernova Project. Here my epoch-dependent G corrects Supernova distances and the theory predicts all SN1a distances. The predictions are based upon the lightspeed expanding hyperspherical topology. Good predictions mean that this also supports the lightspeed expanding hyperspherical topology.
If all three support that topology, then all three refute GR.
Simple argument.
Best regards,
Marco
Dear Marco Pereira, George Dishman,
May I ask some questions to better understand the discussion? When space is expanding through a 4th spatial dimension, with the 4D radius increasing at the speed of light, then how is the redshift of objects explained?
We have at least visual evidence of a redshift z=10. The distance, along the 3D surface of the 4D hypersphere, where the expansion of space equals the speed of light is at 1 radian of the sphere. The maximum expansion would be 2πR, where we would have gone once around. The curvature of 3D space can be measured without the need to observe it in 4D space. It is observed to be flat out to the distance of the CMBR. Curvature over a distance covering one radian of the hypersphere would be clearly detectable. So the distance of the CMBR can't be one radian.
Maybe you see that I have problems with a hypersphere radius expanding at only the speed of light. Could you explain that?
Regards,
Paul Gradenwitz
Dear Marco,
MP: You shouldn't split my paradigm in two parts.
I'm not doing that, I'm just identifying the aspects where it differs from the standard model. Yours has an extra spatial dimension and a fixed rate of change of radius of curvature otherwise they are the same.
MP: And there is a difference between 4 and 5 dimensions. ... Having a Fourth Spatial dimension is what allows me to make a model about the Fabric of Space.
I have a problem with that though. I can weld three metal rods together at their ends such that each is perpendicular to the other two. I can't then add a fourth rod pointing in the fourth spatial direction. Why not?
MP: What I say is that if the Universe has that shape and the radius is expanding at the speed of light, NO argument can be based upon GR, de Sitter, Friedmann Model, or comoving distances, because that configuration is not a solution to Einstein's equation and anything based upon Einstein's equation is refuted by Reality.
No, reality is consistent with GR in the form of the standard model as the Robertson-Walker metric. You cannot impose your alternative hypotheses on the GR solution. You can only compare GR with observation, not with your alternative theory, and GR passes that test.
MP: You shouldn't split my paradigm in two parts.
I'm not doing that, I'm just identifying the aspects where it differs from the standard model. Yours has an extra spatial dimension and a fixed rate of change of radius of curvature otherwise they are the same.
Mine also wasn't proposed before, and its curvature does not depend upon the Stress Tensor or the Universe's content. My point in that part of my reply was that you said you are familiar with my paradigm. The differences are what matters. The 4D Spacetime can be mapped to a lightspeed expanding hypersphere locally - that is a homology. I used that to explain why Gravitational Lensing and other GR predictions are fortuitously successful. That is called serendipity, plus the fact that the geodesics model is forced to have Newtonian Dynamics as the asymptotic solution.
Another unwarranted parallel is that, because I have 5D, my theory has supposedly been done before by Kaluza-Klein. Kaluza-Klein is a contrivance. It is offensive.
Similarities are less important than the differences (because the similarities are the result of serendipity).
I have a problem with that though. I can weld three metal rods together at their ends such that each is perpendicular to the other two. I can't then add a fourth rod pointing in the fourth spatial direction. Why not?
Because matter is traveling together with the Universe and there isn't a force that will move matter outside this hypersurface. All forces you know are 3D forces
If you had a force that could move things outwards, then you would be able to do what you want. You have to understand what a Force is.
No, reality is consistent with GR in the form of the standard model as the Robertson-Walker metric. You cannot impose your alternative hypotheses on the GR solution. You can only compare GR with observation, not with your alternative theory, and GR passes that test.
What you call consistency is a fitting to SN1a distances. Anything can be fitted.
The Robertson-Walker Metric is not consistent with the SDSS data.
Passing tests is also what my theory does.
Here you can see it passing all tests of GR (Gravitational Lensing, Gravitational Time Contraction, Velocity Time dilation, etc).
https://www.quora.com/Can-the-hypergeometrical-universe-theory-replicate-the-general-relativity-achievements/answer/Marco-Pereira-1
Here is my theory passing the tests that GR cannot pass:
https://www.quora.com/Are-dark-matter-and-dark-energy-falsifiable/answer/Marco-Pereira-1
All my theory is consistent and supported by observations.
There is no independent support for the parameters obtained in L-CDM. Where is the support for the assertion that 85% of the Universe is Dark Matter? Where is the independent support that there is Dark Matter?
In addition, HU makes matter directly and simply from deformed space. This means that my theory can create the Universe with zero energy being released. This is what the SDSS shows to be the case.
The Standard Model is not consistent with SDSS
Dear Marco Pereira,
Marco>: Passing tests is also what my theory does.
The distance to where the redshift is z=1 is somewhere around 10 billion light years. When space is a hypersphere with a light-speed expansion rate, then one radian of arc will also have a light-speed expansion rate. Thus, with the known distance for z=1, the radius of the hypersphere should be equal to the distance of objects with z=1. At that distance we can detect curvature. The curvature should yield the same radius. Space, up to the distance of z=1, appears flat. Its curvature radius is much larger than 10 billion light years. So, either the expansion speed of your hypersphere is much higher than the speed of light, or your hypersphere does not exist. Anyway, a hypersphere of radius 10 billion light years expanding at light speed is ruled out by observation. Passing tests is good. Failing tests creates the need to rethink the theory.
Regards,
Paul Gradenwitz
Dear Paul,
I suggest you learn my model and my objections to GR before making conclusions.
When you say that the Universe is Flat, that is a statement that has a measure defined within your preferred 4D Spacetime theory.
You would have to understand what that measure looks like in a lightspeed expanding hyperspherical hypersurface. Reasoning that is just fine for a standing-still sphere is not fine for an expanding sphere.
First, check to see if your theory is consistent with Reality (SDSS, Planck CMB Survey and Supernova Project). Once you realize it is not, then you can make an informed decision and provide your best advice.
###################################
z=1 implies distance = 0.5 R_0 = 0.5 * 13.58 GLY = 6.79 GLY.
You don't see the Current Universe. Your reasoning seems to be referring to the outermost circle. That is the incorrect way of thinking about the Universe and what you can observe.
Best regards,
Marco
Dear Marco,
MP: My point in that part of my reply was that you said you are familiar with my paradigm.
No, I said I was familiar with the geometry of a hypersphere, you keep reading more into what I say.
MP: Other unwarranted parallel is that because I have 5D, my theory has been done before by Kaluza-Klein.
Almost but in their version the extra dimension is compact which is why we only experience 3D, that's an important difference.
GD: I have a problem with that though. I can weld three metal rods together at their ends such that each is perpendicular to the other two. I can't then add a fourth rod pointing in the fourth spatial direction. Why not?
MP: Because matter is traveling together with the Universe and there isn't a force that will move matter outside this hypersurface.
The question remains why is matter constrained to only three dimensions.
MP: All forces you know are 3D forces
Exactly, I'm asking why that is the case. If we live in a 4D spatial manifold, force should have four components, movement should be possible in 4D, inverse square effects should be inverse cube and so on. Telling me we live in a 3D world with 3D forces doesn't explain why that is the case.
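As a worked version of the inverse-cube remark: a conserved flux from a point source spreads over the surface of an (N-1)-sphere, whose area grows as r^(N-1), so doubling the distance dilutes it 4x in three spatial dimensions but 8x in four.

    # A conserved flux spreads over the surface of an (N-1)-sphere of radius r,
    # area ~ r**(N-1), so intensity falls as 1/r**2 in 3 spatial dimensions
    # but as 1/r**3 in 4 spatial dimensions.
    import math

    def sphere_surface_area(n_dims, r):
        # surface area of the (n_dims-1)-sphere of radius r embedded in n_dims dimensions
        return 2 * math.pi ** (n_dims / 2) / math.gamma(n_dims / 2) * r ** (n_dims - 1)

    for n in (3, 4):
        ratio = sphere_surface_area(n, 2.0) / sphere_surface_area(n, 1.0)
        print(f"{n} spatial dimensions: doubling r dilutes the flux {ratio:.0f}x")   # 4x in 3D, 8x in 4D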
MP: Another unwarranted parallel is that, because I have 5D, my theory has been done before by Kaluza-Klein.
Almost, but in their version the extra dimension is compact, which is why we only experience 3D; that's an important difference.
In Kaluza-Klein, the universe also does not travel at the speed of light, nor is matter made of coherences of stationary states of deformation of space, etc. So it is not the same and hasn't been done before.
The question remains: why is matter constrained to only three dimensions?
For that you will have to chug along and understand what a Force is.
This can only be done after you review the data and confirm that GR does not describe the Universe.
Let's solve one problem at a time. If you are really interested, the answer is here:
http://www.worldscientificnews.com/wp-content/uploads/2017/07/WSN-82-2017-1-96-1.pdf
I am not interested in discussing this at this second. The goal of this posting, for me, is to agree on a language for debate. I am providing evidence such that that language becomes Physics instead of GR.
GD: The question remains: why is matter constrained to only three dimensions?
MP: For that you will have to chug along and understand what a Force is.
Force is the rate of change of momentum, and in a 4D universe it would be a 4-vector. Nothing you've said restricts it.
Dear George,
I find our discussions great. That said, I want to answer questions in a predetermined sequence. The reason is Logic: if you eliminate competing descriptions of Reality, it is easier to educate.
If you don't want to discuss the question at hand (the evidence of an extra spatial dimension and the lightspeed expansion refuting Einstein's equations), just tell me so.
This is a public discussion and it is in my interest not only to educate you, but also educate the others who are reading this.
Best regards,
Marco
Dear Marco Pereira,
Marco>: You would have to understand what that measure looks like on a lightspeed-expanding hyperspherical hypersurface. Reasoning that is just fine for a standing-still sphere is not fine for an expanding sphere.
The model of GR, even where I disagree with that model, assumes space to expand. You assume space to expand as a hypersphere with finite radius. GR assumes space to expand with at least a much larger radius. When describing curvature there is no need to invoke a higher spatial dimension, but it is not forbidden to have one. The outcome of curvature for an expanding 3D space is the same whether there exists a 4D hyperspace or only a 4D spacetime. What is compared is, in both cases, space subject to something that appears as Hubble expansion.
Marco>: You don't see the Current Universe. Your reasoning seems to be referring to the outermost circle. That is the incorrect way of thinking about the Universe and what you can observe.
My reasoning is that the radial distance from the observer to the position where the cosmological expansion appears to give z=1 has to correspond to an arc of 1 radian on the hypersphere, if that hypersphere is expanding at light speed. I know that the expansion adds a bit of extra distance, but it is sufficient to note that the CMBR, at z=1078, appears flat to better than measurement accuracy. If, even after taking into account that space is expanding, the measurements still come to the conclusion that space appears flat out to the CMBR, then space out to z=1 is so flat that a hypersphere whose radius equals the z=1 distance from us has to be ruled out.
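To make the flatness argument quantitative, here is a small sketch (my own illustration, using a static 3-sphere slice and ignoring the expansion during light travel, which the paragraph above notes adds some extra distance). On a 3-sphere of radius R, a sphere drawn at distance d from the observer has area 4π(R·sin(d/R))² rather than the flat-space 4πd², so if R were only 13.58 Gly the geometry at d ≈ R would be far from flat:

```python
import math

# My illustration of the flatness test: on a 3-sphere of radius R, a sphere
# drawn at distance d from the observer has area 4*pi*(R*sin(d/R))**2 instead
# of the flat-space 4*pi*d**2.  A ratio of 1.0 means "looks flat".
def curved_to_flat_area_ratio(d, R):
    return (R * math.sin(d / R) / d) ** 2

R = 13.58   # hypersphere radius in Gly, the value used in this thread
for d in (1.0, 6.79, 13.58):
    print(f"d = {d:5.2f} Gly: curved/flat area ratio = {curved_to_flat_area_ratio(d, R):.3f}")
# -> roughly 0.998, 0.92 and 0.71: deviations far larger than the observed flatness
```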
Marco>: First, check to see if your theory is consistent with Reality (SDSS, Planck CMB Survey and Supernova Project). Once you realize it is not, then you can make an informed decision and provide your best advice.
I did so. You assume that my theory is GR. It is not. GR has an equation for the expansion of space. That equation is based on gravitation. If you describe space as an expanding hypersphere, then you have an equation for the expansion of the surface of that hypersphere, our 3D space. So the observation that the ratio of our physical ruler length to the scale of the universe changes over time is shared between us. We say that at some time in the past a certain large volume of current space could fit in a volume with the diameter of our physical unit of length. At the current moment that same volume is much larger than our unit of length.
So: at t(1) diameter(V) / 1L = 1 and at t(2) diameter(V) / 1L >> 1
Now both you and GR assume that the explanation of this change in the ratio between the size of space and the unit of length has to be attributed to space. Because of that, you and GR come up with mathematical equations about space to explain the observations. You state that GR has a problem in that it uses the same fundamental equation for gravitational processes in space as for the expansion of space. You seem, to me, to have postulated space as a hypersphere with an arbitrary expansion rate equal to the light speed. That, to me, seems to be ruled out by observation. My explanation for the observed change in the ratio between the size of space and the size of our unit of length is completely different from both your explanation and GR's. I don't blame space for the change. I assume that space is constant. Thus I have to conclude that our unit of length has changed over time. While that seems just a simple mathematical inversion, there is a fundamental difference in this approach. When I say the unit of length has changed, then I have to find equations of matter, and not equations of space, to explain why matter shrinks.
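As a toy illustration of this "expanding space versus shrinking rulers" point (my sketch, not Paul's own calculation), the only observable is the dimensionless ratio, so the two descriptions produce the same number:

```python
# Toy version of the argument: only the dimensionless ratio diameter(V)/L is
# observable, so "space doubled while rulers stayed fixed" and "space stayed
# fixed while rulers halved" give the same measurement.
def observed_ratio(volume_diameter, ruler_length):
    return volume_diameter / ruler_length

print(observed_ratio(2.0, 1.0))   # space expanded: 2.0
print(observed_ratio(1.0, 0.5))   # rulers shrank:  2.0 (indistinguishable)
```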
You have found that reality (SDSS, Planck CMB Survey and Supernova Project) places a big burden of explanation on GR, which seems to fail. I can agree with you about that. You have proposed to replace that theory with another theory of space with 4 spatial dimensions, where 3 spatial dimensions are accessible to matter and its forces and the 4th drives the expansion. I have proposed to change the focus to matter and leave space flat, 3D, infinitely large and infinitely old.
Regards,
Paul Gradenwitz
Dear Marco,
MP: If you don't want to discuss the question at hand: the evidence of an extra spatial dimension and the lightspeed expansion refuting Einstein's equations, just tell me so.
I'm happy to do so, but it is not acceptable logic to say that GR is wrong when you add your own modifications to the standard model. There is no reason why observations can't be compatible with more than one model, but each must be considered in isolation. The SDSS models may be compatible with your model as well as with standard GR.
Now, going back to the "elephant in the room" as the saying goes, the first step is to explain why we experience a 3D spatial manifold when you claim it is actually 4D. Force should be 4D, along with displacement, velocity and momentum in that case, but that is only one aspect. If I light a candle, the brightness should fall as the inverse cube of distance (no force involved). Simple tests like these tell me your hypothesis does not reflect the world I live in, and you are not offering any explanation of that; you want to jump ahead and add in "lightspeed expansion". That has its own problems, but let's take it one step at a time: explain how you solve the dimensionality problem first.
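To make the candle argument concrete, here is a minimal sketch (my illustration of the standard scaling argument, not code from either participant). An isotropic source in n spatial dimensions spreads its power over the surface of an (n-1)-sphere, so brightness falls as 1/r^(n-1): inverse square in 3D, inverse cube in 4D:

```python
import math

# My sketch of the scaling argument: an isotropic source in n spatial
# dimensions spreads its power over the surface of an (n-1)-sphere,
# area S(r) = 2 * pi**(n/2) * r**(n-1) / Gamma(n/2),
# so brightness falls as 1/r**(n-1).
def brightness(power, r, n):
    area = 2.0 * math.pi ** (n / 2.0) * r ** (n - 1) / math.gamma(n / 2.0)
    return power / area

for n in (3, 4):
    drop = brightness(1.0, 1.0, n) / brightness(1.0, 2.0, n)
    print(f"n = {n}: doubling the distance dims the candle by a factor of {drop:.0f}")
# -> 4 in 3D (inverse square), 8 in 4D (inverse cube)
```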
Dear George,
That problem is called Dimensional Leaking and my theory solves it.
I'm happy to do so but it is not acceptable logic to say that GR is wrong when you add your own modifications to the standard model.
First, I don't know which standard model you are speaking of: the Cosmology standard model or the QCD Standard Model.
If it is the QCD standard model: GR decided not to change Newton's Laws of Dynamics or the model for matter. It just accepted mass as it was.
The refutation of GR has nothing to do with mass. It has to do with the shape of the Universe and the rate of expansion (lightspeed).
Mass has to do with the refutation of the Standard Model of QCD. That is another discussion. It is there because SDSS tells us that there was no Singularity, no Big Bang, no Heat and Pressure at the moment of the Universe's creation.
###############################################
The SDSS models may be compatible with your model as well as with standard GR.
That is my point. SDSS is not compatible with Einstein's equations. If you take Einstein's equations from GR and change the Universe's dimensionality, you are left with nothing. That is refutation.
###############################################
Now going back to the "elephant in the room" as the saying goes, the first step is to explain why we experience a 3D spatial manifold when you claim it is actually 4D. Force should be 4D along with displacement, velocity and momentum in that case but that is only one aspect. If I light a candle, the brightness should fall as the inverse cube of distance (no force involved).
This argument you should give to the Uppsala team, who used my topology without referring to my work or replying to my email. I derived Newton's Law of Gravitation and Electromagnetism on a moving 3D hypersurface and obtained the inverse-square distance dependence. How did I do it? You have to read the article.
I am focusing on the meat and potatoes of refuting GR first, such that spurious arguments don't arise. You can see I am talking to two people, and 100% of them refuse to review the evidence and reach a conclusion.
Instead of reviewing the evidence, which is independent of my work, you decide to try to judge my work (to refute the evidence somehow). It boggles the mind..:)
Best regards,
Marco