I wonder about the source of a formula found on the internet at
http://www.pantheory.org/HF.htm :
[((z+1)^2 - 1) / ((z+1)^2 + 1)] c / H0
H0 – Hubble’s constant, c – speed of light.
“Comparing Hubble calculated distances and brightnesses with Pan Theory calculations of distances and brightnesses."
I have checked the formula against 100 galaxies with [0
Hubble's theory depends on the behavior of photons. The notion of a photon is not well comprehended. It is NOT A WAVE and its carrier is NOT THE ELECTROMAGNETIC FIELD. Waves cannot travel billions of light years through empty space and stay detectable. The EM field does not cover such huge ranges.
Dear Sir,
I am a novice in these matters and I am still confused by this formula, as I found it in a different context, i.e. as tanh(ln(1+z)); that was the question. Hubble's law (an experimental result) means for me a law of how the EM wave behaves when traveling huge distances. Hubble interpreted the EM (light) redshift as a result of the Doppler effect. Therefore this formula, when following in the steps of Hubble, can be used only for small z. On the other hand, I am convinced that the weak gravitational redshift is a more reliable interpretation of the redshift than the Doppler effect. As a consequence we must drop the dynamic nature of the universe. So, what is then left of the BB? It might be that the BB is not a BB but a phase transition of matter from an unknown origin, like dark energy. Water can remain liquid below zero, so-called supercooled water. Why not suppose that dark energy is matter below 0 K? Are there some contradictions with the laws of physics?
@Joseph,
Physics theories are guided by pragmatic applications and not so much by explanations of origins. For that reason, most physical theories are descriptions of observations rather than explanations. If a description appears to fit, then it is accepted as a valid theory. In this way, all kinds of "theories" exist that claim to describe the behavior of the universe. Our problem is that only a small part of reality is accessible to observation, while all observations are affected by changes in the format of the perceived information. Google for the Hilbert Book Model.
https://www.docs.com/hans-van-leunen
I've never heard of Pan theory, but that formula is
Distance = velocity * Time.
Where T = 1/H_0 ≈ 13.7 billion years
The expression [((z+1)^2 - 1) / ((z+1)^2 + 1)] = v/c
It is the inverse of the relativistic redshift equation 1/(1+z) = sqrt((1-v/c)/(1+v/c))
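To make the unit conversion behind T = 1/H_0 explicit, here is a small Python sketch of my own (not from the thread); the megaparsec and gigayear constants are standard values:

```python
import math

def hubble_time_gyr(h0_km_s_mpc):
    """Convert a Hubble constant in km/s/Mpc into the Hubble time 1/H0 in Gyr."""
    km_per_mpc = 3.0857e19   # kilometres in one megaparsec
    s_per_gyr = 3.1557e16    # seconds in one billion Julian years
    return km_per_mpc / h0_km_s_mpc / s_per_gyr

# 1/H0 ~ 13.7 Gyr corresponds to H0 ~ 71 km/s/Mpc;
# the lower value 67.15 km/s/Mpc used later in the thread gives ~ 14.6 Gyr.
print(hubble_time_gyr(71.4), hubble_time_gyr(67.15))
```

So the "13.7 billion years" figure implicitly assumes the higher of the two competing Hubble constants discussed later in this thread.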
Hi Jonathan,
Many thanks. I received also an answer that matches yours. Yes, you are right. The equation to convert the redshift (z) to distance is based on recessional velocity (v). Indeed, the relativistically correct equation for velocity is:
v = c [(1+z)^2 - 1]/[(1+z)^2 + 1] = tanh(ln(1+z)) c
Hubble distance equals d = v/H0. Thus, dividing the expression by H0 = 67.15 km/s per Mpc, the distance for a redshift of, e.g., 0.138 is: [(1+0.138)^2 - 1]/[(1+0.138)^2 + 1] c/H0 = 573.9 Mpc.
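The two equivalent forms of v/c and the resulting distance can be checked numerically; a quick Python sketch of my own, using the H0 = 67.15 km/s/Mpc value quoted above:

```python
import math

C_KM_S = 299792.458   # speed of light in km/s
H0 = 67.15            # Hubble constant in km/s/Mpc, as above

def beta(z):
    """v/c from [(1+z)^2 - 1] / [(1+z)^2 + 1]."""
    return ((1 + z) ** 2 - 1) / ((1 + z) ** 2 + 1)

def beta_tanh(z):
    """The equivalent form v/c = tanh(ln(1+z))."""
    return math.tanh(math.log(1 + z))

z = 0.138
d_mpc = beta(z) * C_KM_S / H0   # Hubble distance d = v/H0, in Mpc
print(beta(z), beta_tanh(z), d_mpc)   # the two betas agree; d ~ 573.9 Mpc
```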
However, I am still in doubt. The function f(z) = [(1+z)^2 - 1]/[(1+z)^2 + 1], or f(z) = tanh(ln(1+z)) in more elegant form, is convex, i.e., f''(z) < 0.
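Whatever naming convention one prefers for convex versus concave, the sign of f''(z) is easy to check numerically; a small sketch of my own:

```python
import math

def f(z):
    """f(z) = tanh(ln(1+z)), the redshift-to-(v/c) map discussed here."""
    return math.tanh(math.log(1 + z))

def f_second(z, h=1e-4):
    """Central-difference estimate of f''(z)."""
    return (f(z + h) - 2 * f(z) + f(z - h)) / h ** 2

# f''(z) comes out negative for every sampled z > 0:
# the curve rises but bends downward.
for z in (0.1, 0.5, 1.0, 5.0, 10.0):
    print(z, f_second(z))
```

Analytically f''(z) = 4[1 - 3(1+z)^2] / [(1+z)^2 + 1]^3, which is negative for all z > 0, consistent with the numerical check.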
"Estimates of galaxy distances based on indicators that are independent of cosmological redshift are fundamental to astrophysics"
Such independent indicators would be magnitudes of standard candles. I have recently heard the word "luptitudes" but I haven't yet learned what that means exactly--something to do with accounting for signal-to-noise ratios in very faint and distant galaxies.
"So, I am totally confused because convex function cannot underpin concave function."
I'm not sure what this means.
But, I can offer you another variable that might help.
Let's name this other variable "rapidity" denoted as curlyphi or varphi. This represents the hyperbolic rotation angle between our worldline and the distant galaxy's worldline.
varphi= ln(1+z)
beta = v/c = tanh(varphi)
gamma = 1/sqrt(1-beta^2) = cosh(varphi)
beta*gamma = beta/sqrt(1-beta^2)=sinh(varphi)
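These hyperbolic identities can be verified numerically; a short Python sketch of my own illustration:

```python
import math

def check_identities(z):
    """Verify the rapidity relations quoted above for one redshift z."""
    varphi = math.log(1 + z)                  # rapidity
    beta = math.tanh(varphi)                  # v/c
    gamma = 1 / math.sqrt(1 - beta ** 2)      # Lorentz factor
    assert math.isclose(gamma, math.cosh(varphi))
    assert math.isclose(beta * gamma, math.sinh(varphi))
    # the relativistic Doppler relation: 1 + z = sqrt((1 + beta)/(1 - beta))
    assert math.isclose(1 + z, math.sqrt((1 + beta) / (1 - beta)))
    return varphi, beta, gamma

print(check_identities(0.138))
```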
My grasp of what you mean by "underpin" might be a bit hazy, but if my usage of the word matches yours, you will understand when I say that z is not what underpins the space, nor does beta, gamma, or phi. But what underpins the space is a Minkowski coordinate system.
In this framework, distant objects are not required by principle to move according to a Hubble flow. They just happen to move according to a Hubble flow because of the event or events which set them into motion.
These events would tend to create an environment of equipartition in rapidity space, which in turn produces a local appearance of equipartition in velocity space (small-angle approximation: sinh(varphi) ≈ tanh(varphi) ≈ varphi), which in turn produces an apparent homogeneity over regions where the small-angle approximation holds.
Hi Jonathan,
I used the term “underpin” in its ordinary meaning like “reinforce, back, sustain, corroborate, confirm,” etc.
I am not familiar with all these funny names: varphi, beta, gamma, etc. For me it is just hyperbolic trigonometry, where a pail is called a pail. Hyperbolic geometry, to my knowledge, is connected to geometries with negative curvature. Minkowski geometry represents a positive-curvature geometry. Minkowski geometry, let us agree on the word, "underpins" the NED-D data much better than negative-curvature geometries. Hyperbolic geometry better underpins the Doppler effect.
Two functions, as you already noticed, can represent a so-called diffeomorphism, where the direct mapping and its inverse are both differentiable functions:
1+z=sqrt[(1+v/c)/(1-v/c)] and v=tanh[ln(1+z)]
represent such an example of a diffeomorphism. Unfortunately this diffeomorphism does not underpin the NED-D data. On the contrary, I found a diffeomorphism which does:
g(r) = 4(pi) { arctan(r) + r [ (-1 + r^2)/(1 + r^2)^2 ] } (mu) / r^lambda
I call the g-function a weak gravitational potential function, whose inverse, in contrast to tanh[ln(1+z)], is not convex but concave. One can check that the inverse r-values of the g-function, taken in ascending order on the interval g in [0.03; 10.86], will underpin the NED-D data, at least with the correlation coefficient 0.988238.
beta, varphi, and gamma are just words used to represent the LaTeX renderings of Greek symbols.
I could have written:
Define: rapidity= ln(1+redshift)
Define: celerity = beta = velocity/speed of light = tanh(rapidity)
Define: Lorentz Factor: gamma = 1/sqrt(1-beta^2) = cosh(rapidity)
Derive: beta*gamma = tanh(rapidity)*cosh(rapidity)=sinh(rapidity)
As for your use of underpin:
JEM: I used the term “underpin” in its ordinary meaning like “reinforce, back, sustain, corroborate, confirm,” etc.
I am wondering, now, if you might use the word "fit", as in, "this function fits the data", or "perform an ordinary least-squares regression to fit the data"
1+z=sqrt[(1+v/c)/(1-v/c)] and v=tanh[ln(1+z)]
represent such an example of a diffeomorphism. Unfortunately this diffeomorphism does not underpin the NED-D data.
Hmmm... Sorry, I'm not familiar with the NED-D data tables, but I'm looking at a table of something called NED-D data now, at
https://ned.ipac.caltech.edu/level5/NED0D/NED.4D.html
This particular table doesn't even show a value for z. It just has a column for V helio(z). Nor does it show a value for a weak gravitational potential function that I could see, unless that's the GLON or the GLAT.
So, I am totally confused because convex function cannot underpin concave function.
Is your concern here, really related to whether various functions are concave/convex, or does it have more to do with whether they are one-to-one, and onto? So long as two functions are one-to-one, and onto over their respective domains and ranges, you can do functions of functions of functions, and not lose any information.
To conclude the discussion.
It was impossible, at my level of knowledge of data analysis, to fit the data from NED-D (The Astronomical Journal, 153:37 (20pp), 2017 January, Table 3, column Mean (Mpc); compiled, in the authors' words, from 94,958 extragalactic entities collected from 34,389 galaxies, reported by 13,745 authors) with Hubble's distances tanh[ln(1+z)] c/H0, c – speed of light, H0 – Hubble constant.
Which was greater? The values in column 3, or the hubble distances?
In most situations, the calculation of the distance comes from estimations based on the object's magnitude, rather than its redshift.
Hi,
As said, the observation of The Astronomical Journal, 153:37 (20pp), 2017 January, Table 3, column Mean (Mpc), convinced me at first glance that Hubble's distances at lower z's perhaps give overly high values and, in contrast, at higher z's Hubble's law underestimates the longer distances of extragalactic objects. I need to think through this preliminary analysis result in detail. In the attached illustration more correct estimates might be useful.
JM
Ps. Tab. 3 highlights 94,958 extragalactic entities collected from 34,389 galaxies and reported by 13,745 researchers.
Well, the last comment was a year ago. I just found this query in the last week or so, so I will answer it. I was the one who derived the reference equation used on the referenced website, which is my organization's website. There is a JavaScript program, referenced above, that calculates galactic distances contrary to the Hubble formula. The formula is (10) in the above paper, and the brightness formula variation (11) differs from the inverse square law of light, since larger matter in the past (relatively speaking) would appear brighter based upon its distance.
This model proposes that dark energy and non-baryonic dark matter do not exist, as well as proposing a "simple" beginning to the universe without a Big Bang or Inflation. It is a mechanical "Theory of Everything", proposing to be able to unify all of physics under a single all-encompassing theory. Galactic redshifts are explained by a diminution-of-matter process rather than by the expansion of space. Space would appear to be expanding, but instead matter would be very slowly getting smaller, a type of scale-changing theory. New matter would be steadily created from the matter decrement, maintaining a constant density of matter and a steady-state condition conserving matter and energy. The universe accordingly would be far older but not infinite in size or age. It is also an aether theory, a single fundamental particle theory, and a single matter-innate physical-force theory. A related peer-reviewed scientific paper was published in 2014.
http://www.pantheory.org/HF.htm
Our website is http://www.pantheory.org/
The paper whereby I explained how I derived this formula is shown in this link:
Article An Alternative Universe-Scale Analytic Metrology and Related...
The derivation was made in 2013. It involved several hundred type 1a supernovae, much more data than was available when dark energy was proclaimed. The model used to derive the formula is a diminution-of-matter model, contrary to the expansion of space. The model is called the Pan Theory, which can be found on any search engine. Its basis does not take too long to explain, but a complete description of it is a 370-page book, available on the internet without cost at pantheory.org.
Dear Forrest Noble,
You might be pleased by the fact that another theory is also based on a field excitation that forms the base of all matter. This excitation is a spherical pulse response and it is a solution of the wave equation. It integrates into the Green's function of the field. I apply quaternionic field theory, and there the spherical pulse response injects volume into the affected field. That volume locally deforms the field, then spreads over the field and expands it. See: Article Generating Mass from Nothing
and https://www.researchgate.net/project/The-Hilbert-Book-Model-Project/update/5af2df924cde260d15dd9529
Hans,
I dislike the something-from-nothing ideas of Hawking, but I also propose the generation of new matter from a background field. The background field I prefer is a particulate version of the Zero Point Field. Since this is off topic I will e-mail you and we could discuss our ideas further if you are interested.
Forrest
The zero point field is not part of the Hilbert Book Model, but stochastic processes that control the universe are. You might send me a private message at LinkedIn
Dear Forrest Noble,
You have dispelled my doubts about the formula expressing distance by means of the Hubble constant through a redshift z in an elegant form. I found an article where the same formula was proposed under the name of another author. Thanks for the answer. Joseph Mullat
Thank You Joseph, Dear Sir,
The formula you noted above is one form of the Hubble formula, but the formula on the website you listed above at http://www.pantheory.org/HF.htm is the JavaScript program, where distances are calculated by my own derived equation. Distances are calculated according to redshift input data based upon a different cosmology model called the Pan Theory, which is my own model.
The Pan Theory distance equation that you referenced by the bold link directly above is:
r1 = 21.2946 log10[0.5((z+1)^0.5 - 1) + 1] (z+1)^0.5 P0
Where r1 is the calculated distance, z is the observed redshift,
and P0 is a constant equal to 1,958.0
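Reading the quoted constants literally (they are rounded, so results match the website calculator only to a fraction of a Mpc), the formula can be evaluated like this; a sketch of my own:

```python
import math

def pan_distance(z, k=21.2946, p0=1958.0):
    """Pan Theory distance r1 in Mpc, taking the quoted constants literally."""
    s = math.sqrt(z + 1)                          # (z + 1)^0.5
    return k * math.log10(0.5 * (s - 1) + 1) * s * p0

# For z = 0.52 this gives ~ 2459 Mpc, within ~ 0.4 Mpc of the value
# 2459.374986 reported later in the thread (the constants are rounded).
print(pan_distance(0.52))
```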
Below is the version of the Hubble formula that you also noted above:
d = [((z+1)^2 - 1) / ((z+1)^2 + 1)] c / H0 ,
Where d is the calculated distance, z is the observed redshift, c is the speed of light, and H0 is the designated Hubble constant. There are presently two contrary and competing Hubble constants. One is about 68 km/s/Mpc and the other about 72 km/s/Mpc, using two different methods of determination. This presently is a big controversy in mainstream cosmology.
http://www.sciencemag.org/news/2017/03/recharged-debate-over-speed-expansion-universe-could-lead-new-physics
On the other hand, it is believed that the Hubble constant, the expansion rate of the universe, is not constant at all. Present theory/hypotheses propose that instead the initial expansion of the universe was caused by a hypothetical Inflation process, and the present expansion is believed to be controlled by a hypothetical Dark Energy process.
Comparing the Hubble formula (Big Bang cosmology) with the Pan Theory formula:
Pan Theory cosmology (reference any search engine) is based upon a different explanation for observed galactic redshifts.
According to this model, instead of space expanding, matter (relatively speaking) would be getting smaller, and the rate at which time passes would be getting quicker. As such, it would appear to us that space was expanding. Such models have been called scale-changing theories. Of course the reasons for this change are part of the theory.
Where redshifts z > 1, the Pan Theory calculated distances progressively increase to multiples of the distances calculated by the Hubble formula, with no ultimate distance limit other than the limits of our equipment. Of course such great distances are contrary to Big Bang cosmology. The reason why these great distances are not recognized by mainstream astronomers would accordingly be that larger matter in the past would have produced more light per galaxy, so the greater distances would be compensated by greater luminosity. This compensation still would not match the inverse square law of light, but would match the determined observation angle of a galaxy or cluster. Instead of the universe expanding, the observable universe would be in a steady-state condition, with new matter eventually being created from a physical background field as a result of the decrement of the diminution-of-matter process.
Here is the link to the paper that explains how the Pan Theory formula above was derived.
http://www.ccsenet.org/journal/index.php/apr/article/view/32603
Here is a link to our paper explaining the major problems with the Big Bang model, and the reasoning to favor the Pan Theory model.
https://www.aijcrnet.com/journals/Vol_4_No_9_September_2014/2.pdf
best regards, Forrest Noble
Dear Forrest,
I really appreciate your prompt response.
To your information, I'm not an astronomer; in my life I have never looked through a telescope. I wrote several articles that I published privately. All these articles are summed up under one roof, which I call "Monotonous Phenomena of Issues". Only some of them are published, but not in prestigious publishing houses. For example, one article, on which I worked for about 20 years, turned out to be the most complicated non-linear economic model, with 8 parameters.
My interest in Cosmology was awakened in 1973 in connection with the use of the same "Monotonous Phenomena of Issues". These were just some of my funny ideas, which at that time I illustrated by the same monotonous phenomena. Now I'm getting close to what caused my interest in cosmology.
It was the Planck Mission of 2013, the measurement results of which I entered, just for fun, into my cosmological model. To my surprise, without any problems, my model reproduces the results with almost 100% accuracy. Working over the last 2-3 years, I published this article, unfortunately, in the very contradictory Journal of Cosmology. However, the mathematical quality of this article should be at the standard academic level. You can download my book with my illustrations and comments:
http://www.datalaundering.com/download/Experiment.pdf , total 572 pages. My cosmological efforts begin on page 455.
I am seeking contradictions in my model. Therefore, the PAN theory suits me well. I downloaded your articles and looked through the list of problems. I can immediately say that the "horizon problem" can be explained in my model. As for the "flatness problem", I do not use GR at all. The density problem is transferred to one parameter, which takes into account the relativistic density, which allows all kinds of motions of matter. In my model, matter is supposedly standing still. The problem of galaxy formation is relevant only if a timeline is used. I do not use the time parameter and, therefore, this problem does not exist.
Dear Forrest, I already told you that I'm not an astronomer. I do not understand, e.g., how an observer unknown to me came to a conclusion; see page 561 in my book. However, if page 561 is true, I can explain his observations in my model. "The Anachronistic Galaxy Problem" is confusing. I could at least somehow explain this problem, but for that I would have to become an astronomer, which is impossible. My model is much simpler than the LCDM model with its 24 parameters.
By the way, I recently discovered an article claiming that GT in the theory of the BB violates causality. As I understand it, the PAN theory suggests that matter is shrinking but moving with acceleration, and that the reduction due to acceleration manifests as an expansion with acceleration. Yes, if matter accelerates in the Universe despite shrinking, then the relativistic energy of the Universe increases, even though the average relativistic energy can decrease, which is necessary in my theory. This is my problem, because of which I have not yet abandoned attempts to explain my mathematical findings to astronomers. This can take many years.
Regards
Joseph
Dear Joseph,
I am a big fan of simplicity in physics, which you said is one of the goals of your own proposed model(s). In today's physics there seems to be little effort spent attempting to simplify theory. Almost all physicists believe that understanding the universe is very complicated. Although most equations are necessarily complicated, explanations of theory often lack logic because the theories being explained are often partly, or wholly, wrong IMO.
Because I often have views and theory contrary to mainstream models, my contrarian papers are often difficult to get published, maybe somewhat like the paper you ended up sending to the Journal of Cosmology. Our current paper might be called a "no dark matter" paper. It is contrary to the existence of dark matter. What we think is a better proposal has been presented with much data, calculations, and evidence to support it. This paper was finished more than a year and a half ago and is still in the process of our own editing, since it has been turned down by some of the major publishers who have published at least one previous paper of ours. Some rejections were expected, because contrarian papers may never be published in a mainstream journal. In this case I will keep changing the way the paper is written to eventually get it published, hopefully in a higher-profile journal. In another month or so I expect to submit this many-times-amended paper to still another mainstream journal. We spent more than 2 years on our now published "no dark energy" paper.
Although I think the Big Bang model is an incorrect cosmological model, somebody's claim that the BB model violates the principle of causality probably has flaws in its logic. In my opinion the beginning of the BB model has been adequately explained by one of the mainstream versions of the model. Where time can be explained as meaningless without change, there could have been no cause for an original Big Bang; otherwise, if there were, what would be the cause of that cause, etc.? So one ends in an infinite series and an infinite universe concerning time. There has to have been an original cause for a finite universe model. If the universe were infinite in time there also could not have been any cause for it, since time would be infinite in the past. Where the universe is defined as everything in existence, whether finite or infinite, it could not logically have had an external cause for its beginning, from a logical perspective.
I will look at your book and comments posted above, and hopefully come up with good suggestions :)
best regards, Forrest
As an afterthought I thought I would give an additional explanation for the Anachronistic Galaxy Problem with the BB model.
As I said in our related paper, this may be the biggest, and most obvious problem with the Big Bang model. This problem is well-known to astronomers but seldom discussed excepting by those making such on-going observations.
A great many galaxies at the furthest distances appear to be red, fully mature, large galaxies, some similar to the Milky Way, and some even older-looking galaxies. At these great distances only small, blue, young-appearing galaxies should exist according to the BB model. This is probably the biggest problem with the Big Bang model and the problem that will eventually result in its demise, IMO, after the James Webb has been up for a while and can see no limit to the extent of galaxies and galaxy clusters. Below are links to a few of the many such contrary observations at the presently furthest observable distances.
https://www.independent.co.uk/life-style/gadgets-and-tech/news/furthest-galaxy-from-earth-found-and-is-nearly-as-old-as-the-universe-10491373.html
https://www.space.com/11386-galaxies-formation-big-bang-hubble-telescope.html
https://www.smithsonianmag.com/smart-news/hubble-spotted-oldest-galaxy-it-has-ever-seen-180958288/
Dear Forrest,
In connection with the problem of anachronistic galaxies, I will try to describe the situation very simply, without going into details. In the LCDM theory there are claims to a very precise and voluminous description of the formation of galaxies and all this soup of matter (rather, quantum particle effects) accompanying the formation process. First some formal scheme is used, and then the scheme turns into a verbal description. It seems to me that we need to do the opposite: first we need to understand, in simple words, what an explosion is. Indeed, an explosion performed in time is an instantaneous transition of denser material into sparser material.
Let us describe this process by one parameter, the density of the relativistic energy, mu. For me it is like a description of the matter of a proton in the Large Hadron Collider moving at nearly light speed, which is equivalent to about 400 tons of mass moving at a speed of 150 km per hour. It is clear that the energy density enclosed in the volume occupied by the proton will be extremely high. In my model, for purely speculative purposes, I took the energy density to be approximately 10^15 in the inflation phase of the BB. This density corresponds to a globe 1 cm in diameter. Now it would be interesting to calculate what the density of the universe would be at the time of the anachronistic galaxies' formation, that is, somewhere 400,000,000 light years after the initial inflation. I used the NED database for this purpose, from Stark's article. The farthest galaxy in this sample is at a distance of 7,700 Mpc. The linear transformation that maps the distance to the galaxy into the density of relativistic energy gives me the result mu = 2.79 - indeed, there was a huge leap in density. If we now move to the present state of density, then, in fact, mu = 0.12457. Actually, during the 16.7 billion years that passed there was only a small leap in density. According to my calculations, there exists, by analogy with the LCDM model, a theoretical critical value mu = kappa = 0.087267 at which the phase process of energy transition to mass ends. In other words, it marks the death of the universe. Hence it is clear that our universe has almost finished its evolution.
What I said in words is a fairy tale about the dynamics of our universe. A fairy tale is a fairy tale, but maybe there is something instructive here.
Yours sincerely
Joseph
Yes, it seems likely that your analysis may be a harbinger of something important because of its apparent accuracy. I will continue to investigate your material to give my humble input into what kind of nuggets your related material might contain. Since my related views will probably not be in accord with mainstream theory, my input will just be personal opinion, probably not palatable to mainstream astronomers and theorists at the present time.
best regards, Forrest
Dear Forrest,
My duty is to help you reading my stuff. Don’t hesitate to disturb me.
I hope that my math does not do you any harm. I am thinking of putting your distance formula
r1 = 21.2946 log10[0.5((z+1)^0.5 - 1) + 1] (z+1)^0.5 P0
to the test, to fit it with my theory. I must find an interval of z's mapping the z's into my average energy density interval. I already tried to do that with
d = [((z+1)^2 - 1) / ((z+1)^2 + 1)] c / H0, but I must refresh my findings.
By the way, d = [((z+1)^2 - 1) / ((z+1)^2 + 1)] c / H0 can be rewritten in the more elegant form
d = tanh(ln(1+z)) c / H0. Best, JM
Dear Forrest,
I do not quite understand your distance formula.
Why do you have the 21.2946 constant and the P0 constant - why not merge these two constants into one? Next, is the log10 just a constant, or does it represent a log10 function, e.g. log10[0.5((z+1)^0.5 - 1) + 1]? Please make it crystal clear to me how to put z into your formula. I will really try to match your formula, in my model, with the calculus of distances via the average energy density parameter.
Best JM
Joseph,
Yes, for practical applications you could merge the two constants. But rather than calculating anything yourself you could use the programmed calculator at the link that you first posted: http://www.pantheory.org/HF.htm
The reason for the two constants being separate is that the formula was first written in terms of the natural log e, because the foundation Pan Theory has its basis in the natural log; in that formulation the first constant is different. The formula was converted to log base 10 for ease of calculation. The form of the equation, r1 = ......, above, is the way the program in the above link was written. Also, the first constant was calculated based upon theory and the observational accuracy of spectral lines, while the second was determined from the combined type 1a supernova data available in 2013, which could vary by maybe as much as one significant figure with hundreds more observations.
When using the programmed equation in the link, if you have a determined redshift z of 0.52, for instance, you would put the value of z+1 into the z input field; therefore you would put in 1.52, following the 1. This would be the relative spectral length increase over its normal beginning length, indicated as "1." From this input the calculations of my formula are determined with no other input. To compare these results with the Hubble formula, the additional input of a Hubble constant is needed. The average constant expansion rate that seemed to be used for the all-sources supernova data was 68 km/s/Mpc, although I could find none specified, since the Hubble constant can vary up to 10% depending upon the method of determination.
I haven't had time as yet but I still hope to be able to make possibly helpful suggestions concerning your related writings.
best regards, Forrest
Dear Forrest,
I am very sorry to disturb You.
I used z = 0.52 and got 2459.374986 using your PAN distance formula.
Is it correct? Is your formula in Mpc distances?
Best JM
Forgot to answer your other question.
"Is the log10 just a constant or it represents by itself a log10 function, e.g. log10[of .5((z +1).5 - 1) +1 ] "
One could consider it as the log of the function [0.5((z+1)^0.5 - 1) + 1], with (z+1)^0.5 as a separate function, or they could be considered a single combined function of z when multiplied by the constants.
Yes, that is correct. Although the distance is greater than the Hubble calculated distance, the apparent distance (observation angle) is very close to the same. I had a much longer answer written which was somehow lost. I'll elaborate more on this when I get back from traveling this weekend beginning 5/25/18.
Dear Forrest,
Thanks for your efforts. Note that your formula can be rewritten in the more elegant form:
Q ln{[(1+sqrt(1+z))/2]^sqrt(1+z)}, where Q = 18110.607641.
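The rewrite rests on the identities log10 x = ln x / ln 10 and a ln b = ln(b^a), so Q should equal 21.2946 P0 / ln 10. A quick numerical check of my own, using the rounded constants quoted in this thread, shows the two forms agree to about 0.02%:

```python
import math

K, P0 = 21.2946, 1958.0   # constants quoted earlier for the Pan formula
Q = 18110.607641          # constant quoted in the rewritten form

def pan_original(z):
    s = math.sqrt(1 + z)
    return K * math.log10(0.5 * (s - 1) + 1) * s * P0

def pan_rewritten(z):
    s = math.sqrt(1 + z)
    # note 0.5*(s - 1) + 1 == (1 + s)/2, and a*ln(b) == ln(b**a)
    return Q * math.log(((1 + s) / 2) ** s)

for z in (0.138, 0.52, 1.0, 5.0):
    a, b = pan_original(z), pan_rewritten(z)
    print(z, a, b, abs(a - b) / a)   # relative difference ~ 1.5e-4 at every z
```

The small constant residual (about 1.5e-4 at every z) is just the rounding of the quoted constants: K P0 / ln 10 ≈ 18107.8 versus the quoted Q = 18110.607641.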
I will come back with some positive statements about this formula in connection with my Relativistic Energy Average Density Scale.
Best JM
Dear Forrest,
It is very important for me to evaluate my own efforts in the field that is now rising under the cosmology nomenclature. It was not by accident that I drew attention to the PAN theory, because here, in addition to the Hubble scale, it was possible for me to compare your formula for distances to galaxies with my scale of relativistic matter density, which I mentioned earlier. I found that the Hubble scale has a "convex" character, while the PAN scale is practically linear. The Hubble scale, in simple terms, exaggerates (overestimates) the distances for small z while, as it seems to me, it underestimates the distances for large z >= 5. Which of these two scales is closer to reality is hard to say. A similar character applies to my scale of relativistic density. However, there is one advantage in my scale.
Indeed, when we talk about some kind of measurement scale, usually one point is selected on the scale corresponding to the state of our environment. So 0 degrees Celsius corresponds to the transition of water to ice. Now it is appropriate to ask what the temperature is outside the window; it's very simple now, for example, 24 degrees Celsius. Similarly, on my density scale there is a density point corresponding to the death of the universe, the critical density kappa = 0.87268. Now it is also appropriate to ask where our universe is at the moment - the answer: at the point mu = 0.12457.
Finally, we are now approaching the advantage of the density scale. Namely, it is easy to embed any scale into the density scale, e.g. the scale of redshifts, the Hubble scale, or the PAN scale. The mapping will characterize the scale by the location of its image on the density scale. For example, the recently discovered galaxy GN-z11 with z = 10.2 corresponds on the PAN scale to a density mu ~ 379.64788, at which the dimensions of the universe are negligible, ~ 0. According to Hubble, my mu density at z = 10.2 equals 52.119401. The size of the universe at the moment is ~ 3.065505. I can also note that the PAN scale at z = 10.2 corresponds to a distance of ~153.54 billion light years, while the Hubble distance is ~14.33 billion. The difference is a factor of more than 10.
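The two distance figures quoted here can be reproduced from the formulas given earlier in the thread; a Python sketch of my own (H0 = 67.15 km/s/Mpc as used earlier; 1 Mpc taken as 3.2616 million light years):

```python
import math

C_KM_S = 299792.458     # speed of light, km/s
H0 = 67.15              # Hubble constant, km/s/Mpc
MPC_TO_GLY = 3.2616e-3  # 1 Mpc ~ 3.2616 million light years

def hubble_distance_mpc(z):
    """d = [((1+z)^2 - 1)/((1+z)^2 + 1)] c/H0, in Mpc."""
    return ((1 + z) ** 2 - 1) / ((1 + z) ** 2 + 1) * C_KM_S / H0

def pan_distance_mpc(z, k=21.2946, p0=1958.0):
    """r1 = k log10[0.5((z+1)^0.5 - 1) + 1] (z+1)^0.5 p0, in Mpc."""
    s = math.sqrt(1 + z)
    return k * math.log10(0.5 * (s - 1) + 1) * s * p0

z = 10.2
print(hubble_distance_mpc(z) * MPC_TO_GLY)  # ~ 14.3 billion light years
print(pan_distance_mpc(z) * MPC_TO_GLY)     # ~ 153 billion light years
```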
Sincerely your JM
Yes, your comparison of distances calculated by the two different distance scales is correct. The Pan theory model is a scale-changing theory whereby, relatively speaking, matter becomes smaller as time progresses. From this perspective it would appear to us that the universe or space was expanding, while in fact neither would be happening.
In astronomy one indicator of galactic distances is called the observation angle. Given a galaxy of a particular diameter, the farther away one is from the galaxy, the smaller the angle needed to visually traverse the galaxy from side to side with a telescope. The average observation angle should progressively decrease as distances increase. If the measurement scale is wrong, and distances miscalculated, then this is not what will be observed.
In the Pan theory model, observation angles match distances. Based upon distances calculated with the Hubble formula, observation angles do not correlate with distances at all. The rationale for this, according to the Big Bang model, is that galaxies in the past were progressively smaller in size, even though they appear brighter than they should at the calculated distances.
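The observation-angle test described above rests on the small-angle relation theta ≈ D/d. The sketch below is only a toy illustration of that inverse-distance expectation, not the Pan theory's or the standard model's actual angular-size formula; the galaxy diameter and the list of distances are illustrative numbers I have chosen:

```python
import math

def observation_angle_arcsec(diameter_kly, distance_gly):
    """Small-angle approximation theta ~ D/d, in arcseconds."""
    theta_rad = (diameter_kly * 1e3) / (distance_gly * 1e9)  # both in light years
    return math.degrees(theta_rad) * 3600.0

# A 100,000-light-year galaxy (roughly Milky Way-sized) at increasing distances:
for d in (1.0, 5.0, 14.33, 153.54):
    print(f"{d:8.2f} Gly -> {observation_angle_arcsec(100.0, d):7.3f} arcsec")
```

In standard cosmology the angular-diameter distance behaves differently at high z, which is exactly the discrepancy the passage above is arguing about; the sketch only shows the simple "angle shrinks as 1/distance" behavior.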
In an expanding universe, the universe would have been denser in the past. The opposite has been observed, consistent with the Pan Theory model. Once the James Webb goes up and is successfully operating, I believe it will become obvious to astronomers that something is wrong with the Big Bang model. Just before the James Webb goes up, we intend to write a paper explaining what we believe the James Webb will observe that will be contrary to mainstream cosmology.
That being said, it would seem that your studies and conclusions better fit with standard cosmology. I would hope that my ideas, equations and conclusions could help you with your research but expect that it could not. Maybe the primary value of the Pan Theory is that contrary future observations may not come as a complete surprise for those familiar with this model.
respectfully, Forrest Noble
Joseph,
I gave an answer to your reworked version of my equation, but again my answer was dropped; I must occasionally be making a mistake of some kind in my postings. I will check out your form of this equation, but if they are the same, I much prefer your version to the form that I posted. The log10 form was easier to program in JavaScript than the natural-log format, but the natural-log format is the one that is an analog of the theory behind the equation, so it is the preferable format. Thanks again for your effort on my behalf. I will get back to you concerning this equation and the cosmological ideas of your own book as time permits.
with best regards, Forrest Noble
Dear Forrest,
A quick answer. I checked the LN form of your equation carefully; it should be correct. It appears to me that the constant 21.946*PO is somewhat of a calibration constant, which can be adjusted to best fit real data (18...) in the LN form. I have downloaded your article in which the PAN theory is introduced. The idea of diminishing matter perhaps suits me well. I will come back with some questions. I really appreciate our conversations. Regards, Joseph
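The log10 and natural-log forms discussed above differ only by the constant factor ln(10) ≈ 2.302585, so either form can be used if the calibration constant is rescaled accordingly. A minimal generic check; the Pan equation itself and its 21.946*PO constant are not reproduced here, and the coefficient k below is a hypothetical placeholder:

```python
import math

# For any x > 0:  log10(x) = ln(x) / ln(10),
# so a term  k * ln(x)  can equally be written  (k * ln(10)) * log10(x).
for x in (1.5, 11.2, 1000.0):
    assert abs(math.log10(x) - math.log(x) / math.log(10.0)) < 1e-12

k = 2.0   # hypothetical coefficient of the ln form
x = 11.2  # e.g. 1 + z for z = 10.2
assert abs(k * math.log(x) - (k * math.log(10.0)) * math.log10(x)) < 1e-12
print("ln and log10 forms agree after rescaling the constant")
```

This is why the two published forms of the equation can be equivalent even though their numerical constants differ.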
Dear Forrest,
Despite the fact that the "adjacent views" of the PAN theory do not agree with the main line of astronomers and theorists, it is important for me to understand the meaning of the discrepancies. Every author knows the weakest points of his theory best. I have two points that may cause objections. The first, which for me is extremely important, is how to explain the decrease in the parameter of the average energy density. The second concerns the interpretation of that same parameter.
The PAN theory states that “The alternative cosmological model proposes that matter becomes smaller in size but proportionally greater in quantity as time progresses. In the past there would have been accordingly fewer individual units of matter than there is now, but over time the density of matter in space would remain the same; as these individual units halve in size, they double in their numbers. These matter units in the future will accordingly be smaller but there will be more of them. For this reason this model is also a type of steady-state model.”
I understand the first half of the statement, although the second half remains unclear to me. Does this mean that the new atoms of matter fill the old atomic volume of space that becomes available as matter is compressed? In my model, as well, the volume of the total matter-energy increases despite the decrease in density. Yet even if my model corresponds to reality, I need to explain why the energy of atoms decreases together with the decrease in atomic volume, as the PAN theory requires; e.g., should the energy of the hydrogen atom decrease together with its atomic volume? I also consider the creation of matter, but in my model the new matter joins space "at the edge of the universe" as a result of a phase transition of dark energy. Dark energy in this context has nothing to do with repulsive gravity. I look at dark energy as a source for matter creation that can have some primary anomalies leading to secondary anomalies, i.e. to gigantic voids without matter or, vice versa, to the threads of galaxies formed during the phase transition.
It seems that in the past the density of matter was higher. However, there is evidence that this is not so. Apparently, it is necessary to rephrase the density parameter in a different way. "In physics, mass-energy equivalence states that anything having mass has an equivalent amount of energy and vice versa." The latter is emphasized by, e.g., the duality of the electron as a particle and a wave, the duality of a light beam, etc. It appears that in the past energy dominated matter, while at the moment matter dominates energy. Therefore, we can say that the average energy density in the universe reflects some synthesis of matter and energy. In this sense it is convenient to introduce the scale of matter density and in this way to reveal the dynamics of the Universe. My point, therefore, is to map distances, or any cosmological indicators such as the brightness of galaxies or their visible angles, onto the average energy density scale.
Yours sincerely
Joseph