Clearly the observable universe is big compared to the human, let alone the (sub)atomic, scale. However, we also know that the universe is very flat, which in a Friedmann-Robertson-Walker universe means that the mass density is (close to) critical. But why can't the radius of curvature (positive or negative) of the universe be, say, 10 or 20 orders of magnitude larger than the size of the observable universe, with the apparent flatness simply a consequence of our limited field of view a mere 13.7 billion years after the big bang?
Dear Rogier,
a short answer to your question is: Yes, the Universe can be curved at super-large scales. A more detailed answer involves at least two aspects.
What do we know from observations?
From the observational point of view, we know from a detailed analysis of the temperature fluctuations of the cosmic microwave background, combined with a local measurement of the Hubble expansion rate (the precise value does not matter too much here), that the curvature scale of the observable Universe is at least one to two orders of magnitude larger than the Hubble scale (the characteristic scale of the observable Universe); i.e. curvature could still be a few-per-cent effect. You ask if there could be curvature at scales 10 to 20 orders of magnitude larger than the observable Universe? The answer is yes, this could be the case.
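To put that constraint in numbers, here is a small back-of-the-envelope sketch. The values H0 = 70 km/s/Mpc and the bound |Omega_k| < 0.01 are my illustrative stand-ins for the "few per cent" level mentioned, not figures from the post:

```python
import math

# Illustrative numbers (my own, not from the post): H0 ~ 70 km/s/Mpc and
# a curvature bound |Omega_k| < 0.01, standing in for the "few per cent"
# level mentioned above.
c = 2.998e8                   # speed of light, m/s
H0 = 70e3 / 3.0857e22         # Hubble constant, 1/s (70 km/s/Mpc)
hubble_radius = c / H0        # characteristic scale of the observable Universe, m

omega_k_bound = 0.01
# For an FRW universe, R_curv = (c/H0) / sqrt(|Omega_k|), so a per-cent
# bound on Omega_k pushes the curvature radius well beyond the Hubble scale.
r_curv_min = hubble_radius / math.sqrt(omega_k_bound)
print(r_curv_min / hubble_radius)   # roughly 10 Hubble radii
```

A per-cent-level bound thus only guarantees a curvature radius an order of magnitude beyond the Hubble scale; it says nothing against curvature 10 or 20 orders of magnitude out.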
What do we believe based on theoretical ideas?
Our standard model of cosmology relies on the idea of cosmological inflation.
Inflation predicts that the observable part of the Universe is very close to being flat. It does not predict that the curvature on scales much larger than the observable domain is exactly zero. Different models of inflation make different predictions about the super-large scales. Thus the answer is again: the Universe could be curved at those super-large scales. At observable scales we do expect a departure from flatness, but only a 0.1 per mille effect (a factor of 100 smaller than the current observational constraint).
I should add that the critical density of the universe,
\rho_c = 3 H^2 / (8 \pi G),
where H is the Hubble constant and G is the gravitational constant, is measurable, and that astronomers believe that the density of the universe (including dark matter and dark energy) is indeed critical.
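As a rough numerical illustration of that formula (assuming H = 70 km/s/Mpc, my choice; the precise value is secondary here):

```python
import math

# Rough numerical check of rho_c = 3 H^2 / (8 pi G), with an assumed
# H = 70 km/s/Mpc (my choice; the precise value is secondary here).
G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
H0 = 70e3 / 3.0857e22            # Hubble constant, 1/s
rho_c = 3 * H0**2 / (8 * math.pi * G)

m_proton = 1.673e-27             # proton mass, kg
protons_per_m3 = rho_c / m_proton
print(rho_c, protons_per_m3)     # ~9e-27 kg/m^3, i.e. ~5-6 proton masses per m^3
```

It is a strikingly small density: a handful of proton masses per cubic metre.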
If I understand correctly, it is possible that the local (observable) geometry of the universe is apparently flat while, at larger scales, the global geometry is spherical...
Being a pedestrian here, and not at all a mathematician, I find it helpful to review http://en.wikipedia.org/wiki/Flatness_problem:
"... In the case of the flatness problem, the parameter which appears fine-tuned is the density of matter and energy in the universe. This value affects the curvature of space-time, with a very specific critical value being required for a flat universe."
Note that the density of matter specified in the LCDM model presumes the presence of a preponderance of cold dark matter, whose very existence can only be inferred from analyses of very complex conditions...
Also see http://en.wikipedia.org/wiki/Shape_of_the_universe. I particularly find the section http://en.wikipedia.org/wiki/Shape_of_the_universe#Two_aspects_of_shape to be helpful. It explains that there are two fundamental aspects of the geometry of the universe: its local geometry and global geometry. As I understand, these two domains are distinguished by the characteristics of the observable, or local, universe, and the global universe, which is not directly observable. An inset illustrates 3 possible local geometries - spherical, hyperbolic and flat - shown as two dimensional representations of a 3 dimensional space. It states:
"If the observable universe encompasses the entire universe, we may be able to determine the global structure of the entire universe by observation. However, if the observable universe is smaller than the entire universe, our observations will be limited to only a part of the whole, and we may not be able to determine its global geometry through measurement."
It also states,
"... For example, if the universe is a small closed loop, one would expect to see multiple images of an object in the sky [from different perspectives], although not necessarily images of the same age."
It then goes on to discuss http://en.wikipedia.org/wiki/Shape_of_the_universe#Local_geometry_.28spatial_curvature.29, stating
"... Many astronomical observations, such as those from supernovae and the Cosmic Microwave Background (CMB) radiation, show the observable universe to be very close to homogeneous and isotropic and infer it to be accelerating."
- and http://en.wikipedia.org/wiki/Shape_of_the_universe#Global_geometry...
For a somewhat different, but informed, view of the super-large scale universe see Sean Carroll's excellent article at
http://arxiv.org/pdf/hep-th/0512148v1.pdf (arXiv:hep-th/0512148).
In a similar spirit I might recommend reading Lee Smolin's article at
http://arxiv.org/pdf/hep-th/0407213v3.pdf (http://arxiv.org/abs/hep-th/0407213)
These articles, IMHO, provide a background against which "fun discussions" such as this should be held.
Speculation is undoubtedly fun - but it's not science!
Bernard,
It's too bad then that no one has, so far, been willing to offer any 'scientific' direction to this speculative discussion!
I'm jumping at the opportunity to shamelessly promote my paper "An Advancing Time Hypothesis" at:
https://www.researchgate.net/publication/236577283_An_advancing_time_hypothesis?ev=prf_pub
If the "Big Bang" was actually an eruption and advancement of time (The Big Time!) and only secondarily an expansion of space, then the universe would be a 4D sphere, with space on its surface -- apparently flat, just as the earth appears flat, because of its size.
The hypothesis predicts a Hubble constant of 70.6, which should give it some credibility. It also dispenses with a number of mysterious energies and forces -- "dark energy", for one.
Find the origin of space curvature and you come very close to your answer.
Physics is a kind of fluid dynamics. It is ruled by the differential continuity equations at small scales and by integral continuity equations at cosmic scales.
"Critical density" is applicable only for a highly approximated model of the universe, i.e. the Friedmann-Lemaître-Robertson-Walker model of exact spatial homogeneity of the entire universe, expanding/contracting all as a whole in perfect universal synchronicity. It's a very good model for some purposes, but it shouldn't be taken as the last word in cosmology. In particular, critical density is not the peg upon which to hang the fate of the universe. And the cosmological constant is not determined by pure density measurements.
Inflationary models use various quantum assumptions; so they are a mixture of (speculative) particle physics and relativistic cosmology. ("Speculative" because we don't know how to properly do quantum physics in strongly curved spacetimes, which is precisely what the Big Bang is.)
@James Dwyer, @Bernard Jones, Thanks for the pointers. They were interesting reads.
@Dominik Schwarz, thanks, this is what I suspected: while we can see pretty much to the edge of the observable universe (including the CMB), there is reason to believe it is not a big part of the whole universe.
@James Dwyer, @Bernard Jones, @Dominik Schwarz: I did not want to speculate on multiverses or even inflation. My point is that, since we have no reason to assume we can see a big part of the universe (or a part comparable to the curvature scale), it is just _modest_ to assume that we can see only a small part of it. There is no reason to doubt there exists a much bigger universe beyond the observable universe: during the last 100 years we not only got much better instruments, but the observable universe also got some 100 * 4\pi (13.7*10^9)^2 cubic light-years bigger simply because it got older.
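A quick check of that volume figure; this is deliberately naive (it treats the radius of the observable universe as growing by one light-year per year and ignores expansion):

```python
import math

# Naive check of the quoted figure: treat the radius of the observable
# universe as growing by one light-year per year (ignoring expansion),
# so 100 years adds a shell of volume ~ 100 * 4 pi R^2 cubic light-years.
R = 13.7e9                        # radius, light-years
dV = 100 * 4 * math.pi * R**2     # added shell volume, cubic light-years
print(f"{dV:.2e}")                # ~2.4e23 cubic light-years
```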
In fact it occurred to me that this point of view also gives a natural explanation of the fine-tuning problem of the density: if we assume that the curvature scale a of the universe satisfies
a >> c/H
then, if the universe is a Friedmann-Lemaître-Robertson-Walker universe with
H^2 = (8 \pi G/3) \rho - k c^2/a^2 , (with k = 0, \pm 1)
\approx (8 \pi G/3) \rho ,
the density of the universe must be (approximately) critical:
\rho \approx \rho_c = 3 H^2 / (8 \pi G)
From this point of view there is nothing very special about this critical density other than that it gives the observed Hubble constant H.
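The step from a >> c/H to a near-critical density can be made explicit: dividing the Friedmann equation by H^2 gives Omega - 1 = k c^2 / (a H)^2. A minimal sketch:

```python
# Making the argument explicit: dividing
#   H^2 = (8 pi G/3) rho - k c^2/a^2
# by H^2 gives Omega - 1 = k c^2 / (a H)^2, so the departure from
# critical density is set by the ratio of the Hubble radius c/H to
# the curvature scale a.
def omega_departure(a_over_hubble_radius):
    """|Omega - 1| for an FRWL universe with k = +/-1."""
    return 1.0 / a_over_hubble_radius**2

# Once a >> c/H, the measured density is automatically near-critical:
for ratio in (1, 10, 100, 1000):
    print(ratio, omega_departure(ratio))
```

A curvature scale of only 100 Hubble radii already forces the density to within 0.01% of critical, with nothing fine-tuned about rho itself.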
@Rogier
Indeed it is reasonable to assert that it is "just _modest_ to assume that we can see only a small part of the universe" and that "There is no reason to doubt there exists a much bigger universe beyond the observable universe ...".
However, the question is whether any attempt to produce a model or theory of this greater universe can be truly "scientific": there may be no obvious test we could propose that would falsify any such hypothesis.
This is why I see the two articles that I proposed as a "worthwhile read". The fact that they talk about string theories is not surprising - that's trendy and one of the few physics-based options for attacking this problem. However, even if there were a string theory "solution" or "model", there would be no obvious way of testing it.
The good news is that we do have a theory for homogeneity and isotropy: "inflation". Dominik, Stephen and you, Rogier, all remarked on that. Of course, the mechanism or model of inflation is currently somewhat speculative, so we are in danger of replacing one mystery with another. But that's how progress is made (as remarked in the articles I referred to).
But the even better news is that inflation theories make specific predictions about what the microwave background should look like. We have our test !! We can test inflation statistically by using the observations of the microwave background temperature fluctuations. While this does not tell us about the ultimate-scale structure of the Universe, it does suggest that the universe outside of our local "horizon" is very much like the bit that we can see.
Now that we better understand the bits of the Universe we can see, new issues have come to the fore - addressing those issues is very exciting!
Dear Rogier,
Although I am not dealing with cosmology so much, I think the main idea of a subject is what matters most. If we cannot think about the whole frame, we stay trapped in the details. From the microscopic to the macroscopic level, most of the objects that can be observed are rotating or tending to rotate. If an object of big or huge size is rotating, other objects inside it or on its surface are rotated along with it. On the other hand, as far as we know, rotating objects mostly have a circular shape. From this point of view, we can say the universe must have a circular shape, revolve around a certain axis, and have a limited size. Most importantly, this is beyond our perception, because we are so small with respect to the heavenly bodies; maybe we are nothing compared to them. Therefore this universe seems infinite to us.
I also want to make a reminder: most of the ancient scientists believed that the earth was flat, because they were on the earth. When the earth was seen from the outside, they were forced to accept its roundness. That was not a scientific approach; they could not see the truth. We do not have to make the same mistakes again. We must think about the whole frame.
Bernard,
As I understand it, the motivation for inflation was (and is) to provide a bridge mechanism spanning the gulf between the uniform, flat universe we now observe and the initial, highly energetic and chaotic conditions (thought to produce magnetic monopoles) expected to result from an initial singularity.
See http://en.wikipedia.org/wiki/Cosmic_inflation#Criticisms
"A recurrent criticism of inflation is that the invoked inflation field does not correspond to any known physical field, and that its potential energy curve seems to be an ad hoc contrivance to accommodate almost any data we could get. Paul J. Steinhardt, one of the founding fathers of inflationary cosmology, has recently become one of its sharpest critics. He calls ‘bad inflation’ a period of accelerated expansion whose outcome conflicts with observations, and ‘good inflation’ one compatible with them: “Not only is bad inflation more likely than good inflation, but no inflation is more likely than either... … Roger Penrose considered all the possible configurations of the inflaton and gravitational fields. Some of these configurations lead to inflation … Other configurations lead to a uniform, flat universe directly –without inflation. Obtaining a flat universe is unlikely overall. Penrose’s shocking conclusion, though, was that obtaining a flat universe without inflation is much more likely than with inflation –by a factor of 10 to the googol (10 to the 100) power!”"
Please see the quite accessible reference for the above excerpts:
Steinhardt, Paul J. (2011). “The inflation debate: Is the theory at the heart of modern cosmology deeply flawed?” (Scientific American, April; pp. 36-43), http://www.physics.princeton.edu/~steinh/0411036.pdf
Regarding your assertion that inflation theory is testable, Steinhardt points out that the theory itself has been fine-tuned over the years to better fit observations. On page 43 under the heading "Making Procrastinators Pay", it states:
"In light of these arguments, the oft-cited claim that cosmological data have verified the central predictions of inflationary theory is misleading, at best. What one can say is that data have confirmed predictions of the naive inflationary theory as we understood it before 1983, but this theory is not inflationary cosmology as understood today..."
Again, the principal justification for a universe originating from a hypothetical singularity is simply the maximal reverse-extrapolation of identified spacetime expansion. There is no reason I'm aware of to believe that the origin conditions were not some physical space confining the mass-energy of the universe that was (like the singularity) for some unknown reason eventually released. Applying Occam's Razor, this dimensional origination of the universe could have exhibited the observed conditions from the universe's onset...
For reference, see http://en.wikipedia.org/wiki/Extrapolation - it begins:
"In mathematics, extrapolation is the process of estimating, beyond the original observation interval, the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results. Extrapolation may also mean extension of a method, assuming similar methods will be applicable. Extrapolation may also apply to human experience to project, extend, or expand known experience into an area not known or previously experienced so as to arrive at a (usually conjectural) knowledge of the unknown..."
Dear Hikmet,
One distinct condition that might be produced by an exceedingly high velocity initial universal rotation is that condensing particles would most likely have conserved the universal spin orientation - explaining the preponderance of particles over antiparticles.
In the more relaxed conditions that have prevailed as the universe expanded and cooled, the manifestation of virtual particle pairs, for example, produces both particles and antiparticles...
This is, I think, the simplest explanation for the predominance of matter over antimatter!
@James
I agree that the nature of the inflation field is an issue - just one of many issues in cosmology, of course. There are also different species of inflation. What is interesting is that one of the simplest (and oldest) inflationary models (Starobinsky et al.) makes a prediction for the slope of the fluctuation power spectrum that is in accord with the value derived from the recent Planck data, where the inferred slope is marginally, but significantly, different from n = 1.0.
What this shows is that, notwithstanding the inherent uncertainties, inflationary theories can be invoked to explain the overall homogeneity and isotropy of the Universe, while at the same time generating the observed power spectrum for the fluctuations. I find that immensely encouraging!
Future detection of the B-mode polarization of the CMB would establish the "scalar to tensor" ratio and tell us even more about any hypothetical period of inflation. The more we know about it the better we are able to constrain theories of inflation, and perhaps rule them out.
I am not an expert in this area and maybe that is why I can feel enthusiastic about it! Maybe someone out there can amplify this aspect of the discussion? I have never written a paper about inflation, or an alternative, but it's one of those things that I would like to do if the time and the opportunity ever arose.
@Bernard Jones.
A falsification test of the hypothesis that the curvature scale of the universe is such that
a >> c/H
but that it is otherwise a "boring" FRWL universe, of which the observable universe is a perfectly normal but small piece, is that the Hubble constant and the density of the universe should match, i.e. that the density is "critical". The observable data are not in contradiction with that, so, for what it's worth, they fail to falsify that hypothesis.
Of course there can be all sorts of interesting reasons, like inflation, why the scale of the universe is so big, and these can come with their own falsifiable tests (indeed I have heard a talk by Ruth Durrer arguing that the correlation function of the CMB does just that). But I merely point out that, rather than concluding that the universe is flat, which seems very special indeed, it is more cautious to conclude that it is flat on the scale of the observable universe, which is not special at all, since neither I (nor, apparently, anybody else in this thread) see a reason why we should be seeing a part of the universe comparable to its curvature scale.
Compare the claims: "the sea is a flat surface (barring some ripples)" versus "the sea looks flat, out to the horizon, when viewed from two metres up". It gives a somewhat different perspective on "the sea flatness problem".
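The analogy can even be put in numbers (my figures, not the post's: a spherical Earth of radius 6371 km, eyes two metres above the water, horizon distance approximated by sqrt(2 R h)):

```python
import math

# The sea-flatness analogy in numbers (my assumed figures): a spherical
# Earth of radius 6371 km, observer's eyes 2 m above the water. The
# horizon distance is approximately sqrt(2 R h) for h << R.
R_earth = 6.371e6                         # m
h = 2.0                                   # m
d_horizon = math.sqrt(2 * R_earth * h)    # ~5 km
sag = d_horizon**2 / (2 * R_earth)        # how far the sea "falls away": back to ~2 m
print(d_horizon / 1e3, sag)
```

Over the entire ~5 km to the horizon, the sea drops only about 2 m below the tangent plane, which is why it looks perfectly flat from the beach.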
Bernard,
Very interesting! Coincidentally, http://en.wikipedia.org/wiki/Cosmic_inflation#Early_inflationary_models begins:
"Inflation was proposed in January 1980, by Alan Guth as a mechanism for resolving these problems.[38][39] At the same time, Starobinsky argued that quantum corrections to gravity would replace the initial singularity of the universe with an exponentially expanding deSitter phase..."
See https://www.researchgate.net/publication/222625284_A_new_type_of_isotropic_cosmological_models_without_singularity
I can't evaluate it, but I object to the (I think weak) presumption that the universe originated from a physical singularity much more than I object to proposals for an inflationary epoch of some sort. I suspect that vacuum energy is very real and is a critical factor in both spacetime expansion and gravitation. However, I do not think that gravity is produced by quantum particle interactions, but rather is a larger-scale effect produced by the local extraction of isotropic mass-energy from vacuum energy (initiated by EM interactions). Of course, these are just speculative conceptualizations on my part...
I do think that Starobinsky produced a lot of very intriguing work, but it's beyond my comprehension. BTW, I also suspect that the models that are considered to be verified by CMB and other enhanced observational data are recent developments refined over the years by analyses of enhanced data - this is the nature of complex modeling programs.
Rogier
Your conclusion that the universe "is flat on the scale of the observable universe" is wise, because all statements about the non-observable universe are speculative and non-falsifiable. Recently an extremely Fe-deficient star was observed having an age of 13.6 Gyr. That's about the scale of the observable universe.
It has been argued (arXiv 1208.3749) that we must give up the geometric interpretation of general relativity because dark matter seems not to exist. Then geometry is a convention. Gravity is described by a field in, say, flat Minkowski space and there is no flatness problem. Günter Scharf, University of Zurich
Gunter,
I can give you 12 observational arguments for the existence of dark matter, see my review arXiv:1208.3662 or J.Mod.Phys 3 (2012) 1152. Most of these arguments also show that MOND is wrong (although I don't bother to mention MOND in my review).
To state that the rotation curves of spiral galaxies are asymptotically flat is an obsolete simplification of the situation: their shape is a function of the luminosity of the galaxy, a so far unsolved problem.
Dear James,
Yes, this idea can be right, but we cannot be sure. As I said in my previous explanations, the important thing is the main idea. The universal rotation may not be quite as we think; the rotation of huge objects can be a little different from that of small ones. Also, the rotation of the universe must still be continuing. As a result of a chain of logic:
If the atoms have a rotation
and if the molecules have a rotation
and if the planets have a rotation
and if the stars have a rotation
and if the clusters have a rotation
and if the galaxies have a rotation
and if the clusters of galaxies have a rotation
then a huge rotation of the universe is mandatory. As a result, one regularity requires a larger regularity.
Gunter,
Please see http://dx.doi.org/10.1007/s10509-011-0854-z or http://arxiv.org/abs/1101.3224.
It is a very interesting problem, and I think that at this moment the answer is easy. General Relativity, and all the equations derived from it, such as the Friedmann-Robertson ones, concern the GEOMETRY of the universe, in other words the shape of the local universe. General Relativity does not say anything about the TOPOLOGY of the universe (the overall shape of the entire space). So at this moment we do not have any field theory that clearly determines the global shape of space. The fact that all the measurements support a flat local universe does not mean that the TOPOLOGY of the space is flat. The data from the flat Universe can be supported at this moment by flat, closed or open universe models; you can find at least 18 TOPOLOGIES that fit the observations. You can see articles by Neil Cornish, Janna Levin and Jean-Pierre Luminet for more details.
I don't think the answer is easy; the universe is so big, and finding its topology is not as easy as you think. Since we are inside the universe, finding evidence about its shape needs more detailed observations over very long periods of time. Really, it is not so easy. If we are not observing such events, we cannot say that there is no such event.
Hikmet,
I think you misunderstood Daniel's comment - I don't see where he suggests that the global geometry, or topology, is at all easy to determine.
In fact, his statement "[the data] can be supported at this moment by flat, closed or open universe models; you can find at least 18 TOPOLOGIES that fit the observations" suggests to me that the unobservable global topology of the universe is not definitively determinable from its local geometry...
There is a beautiful Master's Thesis on cosmic topology, "Cosmotopology", written by Hugo Buddelmeijer of the Kapteyn Institute in Groningen in 2006:
http://www.astro.rug.nl/~buddel/go/thesis.pdf
The title page is in Dutch, but the thesis itself is in English.
It is beautifully explained and illustrated and, except for the Appendices, it's not too mathematical. Recommended reading for this discussion!
Dear James,
Yes you are right, i misunderstood his explanation. I read again, he is right. I am going to read the thesis recommended by Bernard Jones.
To what precision has flatness been determined?
Assuming the universe is a sphere expanding at c (for reasons I've given elsewhere), and given an age of 13.75E9 years,
(sqrt((13.75E9)^2 + 1) - 13.75E9) * (9.4605284E12 km/lyr) = 340 km per light-year,
or 1.17E-4 lyr / Mpc.
If nothing else, it suggests a very nearly flat universe by any hypothesis.
Let's get to the beginning of the problem: why do we think the observable universe is big? Because the method we used leads us to this conclusion. So the problem may be in our instruments, our tools or our approaches - for example, in the gravity formulas used for the calculations. If the formulas give us too much gravitational force for a given mass, we will think that more mass is needed for the force balance, or vice versa. As a result, we will search for the missing mass to restore the balance. Maybe we do not need to do that. This is just an example.
The thing that I want to express is that we should check our tools and find the critical points in our instruments and our formulas.
Dominik,
"At the observable scales we expect a departure from flatness, but we expect that it is just a 0.1 per mille effect (a factor of 100 smaller than current observational constraint)."
Do you mean 1 in 10,000? What are the current methods and observational constraints?
It's worth noting that there's curvature, and then there's curvature:
We know that black holes exist. The curvature in the interior of a black hole is unbounded, going to infinity as one approaches the singularity. Now, that's not observable--that's precisely what's "black" about a black hole--but even the observable curvature (outside the event horizon) is fairly strong.
What's meant here, though, is some sort of averaged curvature, smoothed out somehow from the large-scale density of galactic clusters and the like. But I'm not sure there's a naturally occurring way to do that smoothing. Perhaps the calculations are robust to variations in how that smoothing is done; but that would be a pretty piece of mathematics, to show that robustness. Indeed, it may be that the inhomogeneities (the voids) dominate in some sense--or that it's the (still unknown) cosmological constant that trumps.
Models simple enough to solve equations on are one thing; reality may be another.
Well, thinking about it again tonight (pre-3 am this time): for curvature at 1 Mpc, I shouldn't have just multiplied the result for 1 ly (340 km) by the number of light-years in a Mpc. Just as a circle falls away from its tangent line ever faster with distance, so would the curvature of the universe. The proper equation (given my assumptions) should be:
sqrt((13.75E9)^2 + (3.261563E6)^2) - 13.75E9 = 387 ly
THAT'S a lot of curvature. Is it testable?
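Re-running that arithmetic under the same stated assumptions (a sphere of radius R = 13.75E9 ly, a chord of d = 1 Mpc = 3.261563E6 ly), both the exact expression and the small-chord approximation d^2/(2R) give the same answer:

```python
import math

# Re-running the arithmetic above under the same assumptions: a sphere
# of radius R = 13.75e9 ly and a chord of d = 1 Mpc = 3.261563e6 ly.
# The "sag" below the tangent is sqrt(R^2 + d^2) - R, which is very
# nearly d^2 / (2 R) since d << R.
R = 13.75e9            # light-years
d = 3.261563e6         # light-years per megaparsec
sag_exact = math.sqrt(R**2 + d**2) - R
sag_approx = d**2 / (2 * R)
print(sag_exact, sag_approx)   # both ~387 light-years
```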
The best evidence for "flatness" is surely the observation and analysis of the power spectrum of the cosmic background radiation (CBR). With this alone you can "discover" the cosmological constant with a value that is the same as that provided by the supernova surveys.
Even if you are not a specialist in this subject, you can see how this works with a beautiful web-based app:
http://map.gsfc.nasa.gov/resources/camb_tool/index.html
The app shows you the data: the power spectrum of the CBR, and by twiddling the sliders you can "dial-a-model" and see how it fits.
The sliders correspond to the 6 parameters that define our "standard cosmological model". It is, in fact, possible to determine more than just these 6 parameters, but they are the basic ones telling us what kind of Universe we are living in. (Recently, using this curve, there has been an interesting determination of the sum of the masses of the neutrinos that populate the Universe - if correct, that's great!).
The power spectrum data used in that app is from the WMAP experiment. The data from the more recent Planck experiment is largely in agreement - there is a slight "tension" in some areas of the parameter fitting but this makes practically no difference to the 6 basic parameters (the last decimal place of the parameter values).
IMHO this is one of the great pillars of modern cosmology, the other ones being the discovery of the cosmic expansion and the discovery of the cosmic background radiation. But that's for another discussion.
While on that web-page you might take a look at the fine
http://map.gsfc.nasa.gov/universe/
which will give you more information about this remarkable data.
Sorry, I'm a bit confused. If the last scattering surface (CMB) is the furthest we can see, does that make it the limit of the observable Universe?
@Heleri Ramler
By electromagnetic means, yes. What we see is everything in the intersection of our past light-cone with the 4D region in spacetime where the electrons and protons started to bond to become neutral gas; before that the universe was opaque to EM radiation. This took a cosmologically short time, so we essentially look at the intersection of a 3-dimensional spacelike hypersurface, roughly 380,000 years after the big bang, with our 3-dimensional backwards light-cone in 4D spacetime, i.e. a 2D surface. In theory we look at a different bit of the CMB as time progresses, but clearly our data are collected in a rather short time by cosmological standards, and the features we can observe are far too large to notice the difference.
If we could detect them, we could, in theory, "see" deeper, i.e. before the surface of last scattering, with neutrinos or gravitational waves.
Bernard,
Very cool toy tool - but http://map.gsfc.nasa.gov/media/080998/index.html illustrates not only the approx. proportions of atoms, dark matter & dark energy needed to fit the observed power spectrum data - it also illustrates the proportions of components at the last scattering. In this second pie chart, the proportions of matter : dark matter are very close to the fitted current values (with greater proportions of photons and neutrinos) - but there is no dark energy. This indicates that the toy tool is presuming some specific arrival time and rate of increase no matter what dark energy value is selected on the slider bar...
Since those unavailable parameters seem to be based mostly on the observational discrepancy between the distance estimates for type Ia supernovae derived from their luminosity and those derived from their host galaxies' redshifts without a cosmological constant, it seems their potential for error is great (since sampled type Ia SNe observations' distances are quite limited).
I think we need some more slider bars for dark energy arrival and rate of increase!
The precise fitting of cosmological model parameters to observed characteristics - especially those representing dominating physical elements whose existence cannot be confirmed much less precisely measured, such as dark matter and dark energy - leaves open the possibility that other, yet unidentified factors may provide significant contributions.
The temporal arrival of accelerating expansion is very difficult to achieve with any discrete energy source. IMO, it must be carefully considered that the acceleration of expansion may be coincident with the development of large scale structure in the universe, especially since the increasing size of voided regions necessarily produces an increasing percentage of universal spacetime whose peculiar mass-energy density is greatly diminished compared to universal averages. Please see http://arxiv.org/abs/1109.2314.
@James:
The NASA tool is just as you describe it - a "toy" that is used merely for illustrative purposes and for people to appreciate some of the range of information available in the power spectrum.
There are programs available on-line which are used in the reduction of the WMAP and Planck data - see their papers for a description and their websites for the programs and documentation. The results presented in the papers can be (and are!) checked by other scientists outside of the project - that is an important part of reducing such data and presenting scientific conclusions. The data is also available online, so you too can play with it.
Those toolsets are often supplemented by software written by the scientists involved in dealing with the data. The results of the analysis are therefore cross-checked by different people and with different techniques.
As an example, the "clean" maps removing the foreground signals from the galaxy are available in several packages that have used different cleaning algorithms. And if you do not like the way the map has been cleaned, you too can get the original data and clean it yourself.
I have studied a few of those papers in some detail and feel very satisfied that the scientists involved have not missed anything of the kind you seem to be suggesting! But you can check for yourself if you are concerned.
As with all "big science" the data analysis will continue as more data becomes available and as different and new techniques become available. We do not sit down and say "it's now done and dusted!", but we do congratulate the teams involved for the tremendous amount of thought and effort they have put into their endeavours!
Bernard,
I've expressed nothing personally or professionally against any scientist! Have you read the paper I referenced, "Does the growth of structure affect our dynamical models of the universe?" - which has also been written by scientists reviewing the works of other scientists - all deserving of respect and consideration? Its abstract states:
"Structure occurs over a vast range of scales in the universe. Our large-scale cosmological models are coarse-grained representations of what exists, which have much less structure than there really is. An important problem for cosmology is determining the influence the small-scale structure in the universe has on its large-scale dynamics and observations. Is there a significant, general relativistic, backreaction effect from averaging over structure? One issue is whether the process of smoothing over structure can contribute to an acceleration term and so alter the apparent value of the cosmological constant. If this is not the case, are there other aspects of concordance cosmology that are affected by backreaction effects? Despite much progress, this `averaging problem' is still unanswered, but it cannot be ignored in an era of precision cosmology, for instance it may affect aspects of Baryon Acoustic Oscillation observations."
IMO if large scale structure exists and has developed over time, then voided regions _cannot_ represent the universal average mass-energy density parameters specified in cosmological models - they must provide a significant, varying contribution to the universal rate of expansion. In this case, a constant vacuum energy value cannot solely produce the acceleration of spacetime expansion.
@James
I do not understand your last paragraph, but I do feel competent to comment on the paper by Chris Clarkson, George Ellis et al. about the back-reaction discussion that was started by Thomas Buchert some years ago.
This back-reaction issue has been widely discussed, of course. It is a highly technical point of general relativity and there has been much written about Buchert's idea - it is conceptually important. However, the jury is still out as to its relevance to the problem of the "dark energy". That does not mean that Thomas' idea is correct, and nor does it mean that it is wrong. We must merely await the verdict of our expert colleagues on this.
If I were forced to give an opinion on this I would express the sentiment that it seems unlikely that back-reaction is the answer to the so-called puzzle of the dark energy. The amplitudes of the inhomogeneities are simply too small.
There are, as you know, many alternative suggestions about the nature of the dark energy. We need more information, more data and more good ideas before this is settled. I hope I am still around when the answer comes!
Welcome to the world of speculation - that's just one reason why I have devoted a lifetime to doing science!
Bernard,
Conversely, I don't understand the backreaction issue. http://en.wikipedia.org/wiki/Backreaction offers little clarification, but I think it's significant that it does discuss the term in two distinct contexts:
"In theoretical physics, Back-reaction is often necessary to calculate the behavior of a particle or an object in an external field.
When the particle is considered to be infinitely light or have an infinitesimal charge, it is said that we deal with a probe and the back-reaction is neglected. However, a real object also carries a mass and charges itself.
They modify the original environment (for example, they help to curve the space in general relativity) and this modification - the back-reaction - has to be taken into account when a more accurate calculation is performed.
"In Cosmology the term Back-reaction is used for the measure of the non commutativity of the averaging procedure and the dynamical evolution of space-time. The existence of an isotropy scale is determined by the length scale at which the Back-reaction parameter vanishes. The existence of such scale still needs experimental confirmation."
It seems to me that you are discussing the term as it relates to general relativity. At any rate, please note that the review I referenced is not limited to any backreaction effect - its title is "Does the growth of structure affect our dynamical models of the universe? The averaging, backreaction and fitting problems in cosmology," http://arxiv.org/abs/1109.2314.
I think I'm referring to a much simpler issue of averaging at disparate scales. Analytic cosmological models treat the universe as a single structure with homogeneous properties. To the extent that large scale structure exists and has increased over time - this presumption of homogeneity is erroneous!
As a result, as I understand it, the expansion of spacetime is considered holistically, largely as a function of universal mass-energy density. Whatever terms are employed to determine the expected rate of expansion (without a cosmological constant, for example), if those same terms were applied to _each_ void - _and_ the mass-energy density of voids is lower than the universal average - the expansion rate estimated for voids should be greater than that estimated for the universe as a whole.
To the extent, then, that universal spacetime is increasingly composed of relatively massless voids over time, the overall expansion of spacetime should also be increasing over time. In this case, the acceleration of universal expansion may be solely the product of increasing substructure - without the application of any distinct dark energy source.
The temporal development of large scale structure could provide a natural explanation for the 7-8 billion year delay in the reacceleration of universal expansion - which, I think, has otherwise been unexplained.
To the extent that the universe is increasingly composed of ever larger structural voided regions, homogeneity applies more to the earlier universe - its presumption in computing more recent universal characteristics has been ill-founded.
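For what it's worth, the direction of the effect described above can be illustrated with a toy calculation (my own sketch in arbitrary units, using a simple Newtonian dust model - emphatically not a GR backreaction calculation, and saying nothing about whether the effect is large enough to mimic dark energy): two regions start expanding at the same rate, and the underdense one decelerates less, so it is expanding faster later on.

```python
# Toy Newtonian "dust" model (illustrative assumption only):
# a_ddot = -(4*pi*G/3) * rho * a, with rho diluting as a^-3.
# Units are chosen so that (4*pi*G/3) * rho0 * a0^3 is just the
# number `rho0` below; both regions start with a = 1 and H = 1.
def hubble_after(rho0, t_end=2.0, dt=1e-4):
    a, v = 1.0, 1.0                  # scale factor and its time derivative
    for _ in range(int(t_end / dt)):
        v += -rho0 / (a * a) * dt    # denser regions decelerate more
        a += v * dt
    return v / a                     # expansion rate H = a_dot / a

h_void = hubble_after(0.1)   # underdense "void"
h_mean = hubble_after(0.4)   # region at the (higher) mean density
print(h_void > h_mean)       # the void ends up expanding faster
```

Both regions decelerate (there is no dark energy in this sketch), but an average over a mix of such regions becomes increasingly dominated by the faster-expanding voids; whether that averaging effect is quantitatively significant is exactly the contested backreaction question.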
"Welcome to the world of speculation - that's just one reason why I have devoted a lifetime to doing science!"
You don't mean to imply that the authors of the referenced review paper or the authors of the research they review are not proper scientists, do you?
Em, another question regarding dark energy. Say something exploded in a vacuum, in space for example. How would the pieces fly apart? At constant speed, I guess? What about at later times? Has anyone ever tested it?
@Heleri Ramler
No, the pieces do not fly apart at constant speed, due to dark energy.
Dark energy is the same as a non-zero cosmological constant. Its net effect is a _negative_ pressure, i.e. increasing the volume of the universe [1] releases energy instead of costing energy. The net effect is that the expansion of the universe accelerates exponentially. This has actually been observed by measuring the Hubble "constant" using type Ia supernovae as standard candles to measure the distance and spectroscopy to measure the recession velocity. It shows a small but distinct deviation from the simple linear proportionality v = H d.
[1] More precisely, increasing the volume of a comoving 3D spacelike element.
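The deviation from v = H d can be sketched numerically. Here is a minimal example (parameter values are illustrative round numbers, not a fit to data) comparing the luminosity distance at a given redshift in a flat universe with and without a cosmological constant; the larger distance in the Λ model is why the supernovae appeared fainter than expected:

```python
import math

def luminosity_distance(z, omega_m, omega_l, h0=70.0, steps=10000):
    """Luminosity distance in Mpc for a flat universe:
    D_L = (1+z) * c * integral_0^z dz'/H(z'), by midpoint rule."""
    c = 299792.458  # speed of light, km/s
    total, dz = 0.0, z / steps
    for i in range(steps):
        zp = (i + 0.5) * dz
        hz = h0 * math.sqrt(omega_m * (1 + zp) ** 3 + omega_l)
        total += dz / hz
    return (1 + z) * c * total

d_lambda = luminosity_distance(0.5, 0.3, 0.7)  # LCDM-like parameters
d_matter = luminosity_distance(0.5, 1.0, 0.0)  # matter-only (Einstein-de Sitter)
print(d_lambda > d_matter)  # SNe look fainter when dark energy is present
```

At z = 0.5 the Λ model gives roughly 20% more luminosity distance than the matter-only model, which is the size of effect the supernova surveys detected.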
Yep, I am aware of that. But what I mean is: maybe it is just a characteristic of the vacuum, and no dark energy is needed to explain that everything starts accelerating at some point?
What we observe is a general acceleration in the expansion of the universe. To my mind the simplest explanation for this is that it's already allowed for in Einstein's formulation of gravity, in the form of the Cosmological Constant; there's no a priori reason to say what this constant is, so we can just say "this constant is determined by what we see in the large-scale behavior of the universe--and, ah hah, we now have enough observational data to identify it."
Assuming this is the appropriate model to use, then, yes, indeed, all parts of the universe show this same behavior, including any matter you want to introduce at some point and follow the future trajectories of.
Some physicists are unsatisfied with saying "accelerating expansion is explained by a non-zero Cosmological Constant" and want to find some other sort of explanation. Nothing has come to the fore as yet that has a strong following. I don't know the details of any of the proposals (I'm not even sure there are any with much in the way of detail), so I can't say if any of them would predict a similar behavior for matter introduced at some point in vacuum. But since the thing to be explained is a cosmic, utterly pervasive large-scale behavior, I would guess that any alternative explanation would also have the same sort of prediction.
The term "dark energy" is more joking than serious; it really just means "whatever it is that causes the cosmic acceleration of the expansion of the universe". It's a joking reference to "dark matter" (which really is matter and really is dark--i.e., it's not emitting light like the stars are, even though there's several times as much of it in each galaxy as the mass of the galaxy's stars); and "dark" also refers to the fact that we can't explain it--unless the "it's just the Cosmological Constant" is accepted as an explanation.
Yes - nobody knows what has actually caused the apparent reacceleration of universal expansion. The positive cosmological constant parameter (Λ - Lambda) is the solution specified in the ΛCDM concordance model, which is considered to represent positive vacuum energy. BTW, the existence of dark matter has only been inferred, especially from discrepancies between gravitational effects derived from observations and those projected from mass estimations and gravitational evaluations of very complex, compound objects, specifically spiral galaxies. It must be 'dark' since it has not been detected through electromagnetic interactions with other matter - it must be massive since it's thought to represent at least 80% of total spiral galaxy mass from complex gravitational evaluations.
The reacceleration of the universe was inferred from two studies intended to more precisely constrain the deceleration rate of universal expansion by determining distances to galaxies using the peak luminosity of type Ia supernovae, which was thought to be constant. Their results conflicted with those of the then-standard cosmological models that specified a cosmological constant parameter of 0. They were able to fit the models' estimated distances to those derived from the SNe by specifying a positive cosmological constant and a negative deceleration parameter - see http://arxiv.org/abs/astro-ph/9805201 and http://arxiv.org/abs/astro-ph/9812133.
As I understand it, it's still expected that following a period of cosmic inflation, the subsequent expansion of the universe decelerated as it expanded, owing to the gravitational attraction of its contents. The SNe evidence for reacceleration has only been identified for the period 5-7 billion years ago. As a result, dark energy is thought to have become effective only when the universe was about 7-8 billion years old.
As Steve Harris indicated, dark energy is euphemistically termed 'dark' because the actual cause of the apparent acceleration of universal expansion is still unknown.
@James Dwyer. Thank you for elaborating on the backreaction issue. I have now actually read the paper you mentioned and it was well worth the read.
By the way @all thanks for the high level of discussion on this question.
There has been so much discussion over the past few hours it's hard to keep up! Let me first correct one remark about dark matter. The rotation curves of galaxies were almost certainly the impetus for thinking in terms of the existence of some unseen cosmic component that might explain the flat rotation curves. However, if that's all there was to it there would not have been the hoo-haa about modified gravity and all that stuff.
The key evidence for the existence of dark matter (as opposed to "dark energy") comes from gravitational lensing: we know the stuff is there because the light from earlier times is distorted by the gravitational influence of some unseen gravitating matter. Pretty convincing stuff that is supported by the determination of the amount of dark matter in the Universe from the cosmic microwave background.
I'll come back to the issue of back-reaction later: I just wanted to repeat that the evidence for dark matter comes from many sources - not just rotation curves. I'll also try to address Heleri's comments on explosions.
Additional and independent evidence for a substantial non-zero cosmological constant, or some other form of "dark energy", comes from the observation of the power spectrum of the cosmic background temperature fluctuations, which provides us with features of known size (e.g. the sound horizon). The angle subtended on the sky by these features depends on the gross cosmological parameters and in particular on the Lambda term in the Einstein equations.
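As a rough worked example of that angle (using approximate published numbers - a comoving sound horizon of about 147 Mpc and typical flat-ΛCDM parameters - purely for illustration, not a precision calculation):

```python
import math

def comoving_distance(z, omega_m=0.31, omega_l=0.69, omega_r=9e-5,
                      h0=67.7, steps=200000):
    """Comoving distance c * integral_0^z dz'/H(z') in Mpc,
    for a flat universe, by midpoint rule."""
    c = 299792.458  # speed of light, km/s
    total, dz = 0.0, z / steps
    for i in range(steps):
        zp = (i + 0.5) * dz
        e = math.sqrt(omega_r * (1 + zp) ** 4
                      + omega_m * (1 + zp) ** 3 + omega_l)
        total += dz / e
    return c / h0 * total

d_star = comoving_distance(1090.0)  # distance to last scattering
r_s = 147.0                         # comoving sound horizon, Mpc (approx.)
theta = r_s / d_star                # angle subtended, radians (~0.6 degrees)
ell = math.pi / theta               # corresponding multipole, roughly 300
```

In a flat universe the comoving distance equals the comoving angular diameter distance, so the ratio above is the angle directly; the resulting multipole of roughly 300 is where the acoustic scale appears in the power spectrum, and the measured value of this angle is what constrains the parameter combination described above.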
Further, independent determinations of Lambda come from observations of the baryon acoustic oscillations in the distribution of galaxies in the Sloan Digital Sky Survey (the BOSS and BigBOSS projects).
The great thing about modern cosmology is that the conclusions we draw about our Universe come from independent and mutually corroborative measurements.
Bernard,
Regardless how wonderful modern cosmology may be, I again suggest that you read the paper I repeatedly reference: http://arxiv.org/abs/1109.2314. Again, its abstract concludes:
"... Despite much progress, this `averaging problem' is still unanswered, but it cannot be ignored in an era of precision cosmology, for instance it may affect aspects of Baryon Acoustic Oscillation observations."
Separately, regarding one 'independent' corroborative 'measurement': http://www.space.com/17234-universe-fractal-large-scale-theory.html - and many other reports of the research results of "The WiggleZ Dark Energy Survey: the transition to large-scale cosmic homogeneity," http://arxiv.org/abs/1205.6812 - consider it to observationally confirm the LCDM model's presumption of large scale homogeneity. Unfortunately this study suffers from a fatal, elementary error in analysis! Before anyone dismisses this astounding assertion regarding the crucial works of so many experts, please allow me a very brief and simple explanation.
The analysis partitions observations by redshift (z) and location within the sky. The data is analyzed as though proximal (low-z) observations reflect the small-scale characteristics of the universe, while distant (high-z) observations indicate large-scale characteristics. It's concluded that, since homogeneity increases at larger scales (higher z), cosmological models can properly represent the universe as having a homogeneous distribution of mass-energy.
The failure is that these analyses ignore that the 'large scale' observations represent _only_ the _ancient_ properties of the universe, while 'small scale' observations represent _only_ the _recent_ properties of the universe. Therefore, their observational analyses _should_ conclude that the universe has _temporally_ transitioned from an early homogeneous distribution of mass-energy to a more recent inhomogeneous distribution of mass-energy. Their observations can determine nothing regarding the large-scale structure of the recent universe - IMO it can only be concluded that the distribution of mass-energy at large scales is 'now' universally inhomogeneous - as indicated by observational evidence of recent conditions. This, of course, conflicts with the common presumption of cosmology.
This temporal development of universal structure may have everything to do with the temporal acceleration of universal expansion - please see my previous comments. If anyone can analytically refute this assessment - please do so!
Backreaction is one of 100 ideas about the cause of the cosmic acceleration. There is no particular reason to believe it explains the cause in a more fundamental way than the other 99 ideas. I don't say that it is a bad idea, just don't take it more seriously than anything else. Research is exploration in every possible direction.
Matts,
If all proposed causes of cosmic acceleration are to be categorically dismissed, presumably none being more fundamental than any other - then shouldn't the cosmological constant (Lambda) also be viewed skeptically? It is primarily distinguished by the original discoverers' use of the preexisting parameter (along with the deceleration parameter) to fine-tune their cosmological models to fit their otherwise conflicting observations...
Isn't the cosmological presumption of homogeneity at large scales also unsupported? That is the assessment in http://arxiv.org/abs/1205.6812 above prior to their faulty observational confirmation of it... If the cosmos cannot be shown to be contemporaneously homogeneous at large scales, doesn't that support consideration of the temporal development of inhomogeneity as a cause of acceleration?
Bernard,
Allow me to correct your correction:
"The key evidence for the exictence [sic] of dark matter (as opposed to "dark energy") comes from gravitational lensing: we know the stuff is there because the light from earlier times is distorted by the gravitational influence of some unseen gravitating matter. Pretty convincing stuff that is supported by the determination of the amount of dark matter in the Universe from the cosmic microwave background."
Gravitational lensing and even the acoustic oscillations of the CMBR inherently rely on mass estimations and exceedingly complex gravitational evaluations of vast distributions of large-scale, compound objects. The favored weak gravitational lensing evidence is not at all straightforward. The presence of (coincident) dark matter is inferred from the discrepancy between the gravitational effects estimated for clusters of galaxies and those derived from statistical evaluation of the minute optical distortions of often thousands of background galaxies. These methods are subject to systematic errors - researchers have reported conflicting results, and hope to refine their approaches to include some ability to verify their methods using synthetic data.
In all cases in which the presence of dark matter is inferred, gravitational evaluations of nebulous large scale structures composed of countless, interacting, compound objects are involved.
I'll concede that most physicists believe that dark matter is real, but "we" don't "know" for certain - there is no incontrovertible evidence for its existence!
No, there is not!
See http://en.wikipedia.org/wiki/Incontrovertible_evidence
"Incontrovertible evidence is a colloquial term for evidence introduced to prove a fact that is supposed to be so conclusive that there can be no other truth as to the matter; evidence so strong it overpowers contrary evidence, directing a fact-finder to a specific and certain conclusion. ..."
See http://www.merriam-webster.com/dictionary/incontrovertible
in·con·tro·vert·ible adjective \(ˌ)in-ˌkän-trə-ˈvər-tə-bəl\
: not able to be doubted or questioned
See http://www.vocabulary.com/dictionary/incontrovertible
"When something is incontrovertible, it is undeniably, absolutely, 100 percent, completely true."
- etc., for sure!
@James Dwyer, @Bernard Jones, @Matt Roos,
the difference is that there never was a good _a priori_ reason to assume that the cosmological constant is zero. So it seems to me that the right question is not why the cosmological constant is nonzero, but why it is so small!
Nonetheless, the Clarkson, Ellis, Larena, Umeh paper is still interesting, if only to properly determine how small the cosmological constant is.
@James
Your remark that
"Gravitational lensing and even acoustic oscillations of CMBR inherently rely on mass estimations and exceedingly complex gravitational evaluations of vast distributions of large scale, compound objects. The favorite weak gravitational lensing evidence is not at all straightforward. ..." etc
gives me the feeling that you have not studied the original papers in much detail. As far as I understand them, I don't think that "masses" really come into this: the analysis could be performed without knowing what mass is involved - it's mainly length scales. My own opinion is that these papers are pretty solid stuff and that the authors certainly know what they are doing.
Likewise, my own reading and comprehension of the Clarkson et al paper is well within my compass of understanding. I read almost everything that George Ellis and Thomas Buchert write - it is always interesting and stimulating. If I fail to understand a point they make I always ask them for clarification. I am an admirer of both their works.
I am disappointed that you should resort to pointing the dictionary at Matts Roos - you might do better by reading his truly excellent book on cosmology! Please do not do that when we are trying to explain our "orthodox" (ie. informed) point of view.
I generally enjoy these discussions and hearing opinions but I do not enjoy it when we are told how to spell or what words mean.
Bernard,
Thanks for your reply.
I do not appreciate having my discussion points dismissed by proclamation! I resorted to referring to dictionary terms to show that Matts' dismissive proclamation "Yes for sure there is incontrovertible evidence for its existence!" was, in fact, nonsensical! It seems that Matts didn't bother to understand the meaning of the word he'd taken from my own statement.
"... the analysis could be performed without knowing what mass is involved - it's mainly length scales." In general terms, those optical effects are used to infer the total mass responsible for producing them - the mass attributed to dark matter is the difference between that inferred total mass and approximations of the mass provided by ordinary matter.
"My own opinion is that these papers are pretty solid stuff and that the authors certainly know what they are doing."
That's comforting, but can identical results be independently reproduced? I think not!
I do not question the expertise of researchers, but I do question the accuracy of their results and the potential for large errors, since almost imperceptible optical effects imparted to thousands of faint galaxies, and the baryonic masses of large-scale weak-lensing galaxy clusters composed of many billions of discrete objects, cannot be reliably determined.
I'm disappointed that I offered a simple argument against the central cosmological presumption of persistent large-scale homogeneity and invited anyone to refute it, yet all I receive is authoritarian proclamations...
Rogier,
http://en.wikipedia.org/wiki/Cosmological_constant#History states:
"Einstein included the cosmological constant as a term in his field equations for general relativity because he was dissatisfied that otherwise his equations did not allow, apparently, for a static universe: gravity would cause a universe which was initially at dynamic equilibrium to contract. To counteract this possibility, Einstein added the cosmological constant.[3] However, soon after Einstein developed his static theory, observations by Edwin Hubble indicated that the universe appears to be expanding; this was consistent with a cosmological solution to the original general-relativity equations that had been found by the mathematician Friedmann, working on the Einstein equations of general-relativity..."
Also see http://www.theatlantic.com/technology/archive/2013/08/einstein-likely-never-said-one-of-his-most-oft-quoted-phrases/278508/
"To make his equations work, in 1917 Einstein introduced an additional term into them, expressed by the Greek letter lambda (ƛ) -- the "cosmological constant." The new term represented a repulsive force that would counter gravity's attraction, leaving the universe intact."
It must be considered that at that time the known universe was limited to what is now known to be the Milky Way galaxy - at that time the entire universe was considered to be gravitationally bound! It was as much Hubble's discovery that 'nebulae' were in fact distant, discretely gravitationally bound galaxies like our own - as the realization that they appeared to be receding - that changed Einstein's (and our) perception of the universe.
From this, it seems as though Einstein would not have explicitly added a cosmological constant term to his original field equations if he had not been motivated to prevent the eventual gravitational collapse of the known universe (galaxy). However, it also seems that the Friedmann solution to the original field equations (without a cosmological constant) indicated that the universe should expand. From that point on, the lambda parameter was specified as zero in order to eliminate its effects, effectively omitting it from the revised field equations.
The specification of a non-zero lambda value was not adopted until the LCDM model became recognized as the standard. IMO, the only justification for the positive lambda is to fit the model's results to the observations indicating universal acceleration of the average rate of expansion. IMO, there was no _a priori_ reason to adopt a _positive_ value for lambda - to the exclusion of subsequently identified potential causes of universal acceleration. IMO, it is an example of fine-tuning model parameters to fit observations...
When the universe was thought to be Ptolemaic, for centuries, all the best scientists, when confronted with anomalies, would add an epicycle to bring cosmological theory in line with observation. Dark matter, dark energy, the new cosmological constant, etc, look suspiciously similar to epicycle solutions to current problems between theory and observation. Even our favorite, most illustrious scientists can be completely wrong, in ways we can't yet imagine. A sense of certainty has never been helpful or appropriate.
Dear James,
You are absolutely right,
I was saying something similar in my previous messages, but due to my lack of knowledge, I do not participate in your discussions. I am trying to understand your ideas and your previous discussions with the other participants.
As I said before, we should not repeat the same mistakes.
@james
In throwing the dictionary at Matts you descended into mere polemics - that is not how science is done.
The ideas that have the support of the scientific community are the best we have. They are not "epicycles" that have been added to "fix" previous theories, at least not until something better comes along.
The great Tycho Brahe was a strong advocate of the epicyclic theory. His planetary data was used (with extra epicycles!) to predict the future positions of planets - and it worked pretty well. Yet less than 50 years after his data was published, Kepler came along, used Tycho's data, and the world of science changed. And 50 years after that along came Isaac Newton with another revolution. Fantastic! - and nobody thought any less of Tycho Brahe. This has happened many times in science and, hopefully, it will continue to happen.
When something better comes along you can rest assured that most scientists will adopt that as their paradigm.
If you think you have that better idea, just write it up and send it to a major journal for peer refereeing and publication. There are major journals with no page charges and no bars on who may submit articles for consideration. Even if you only have a critique of someone's procedures or analysis in a published paper - write up your critique and submit it for publication to a peer reviewed journal. I should only make the obvious remark that mere polemics are not an acceptable form of criticism.
I teach students cosmology in the hope that some of them will come up with better ideas and in doing so advance science. I am proud to say that many of my students have done just that.
The first thing we must do is define what the observable universe is. This is ambiguous. Anything beyond the cosmic background radiation (z>1099.7, when the universe was not transparent) is not visible with electromagnetic detectors. However, things happening there can be inferred in other ways (through primordial nucleosynthesis, for instance).
The mass of the maximum detectable universe (0
Bernard,
"In throwing the dictionary at Matts you descended into mrere polemics - that is not how science is done."
Not being familiar with the term "polemic" I had to Google it:
[...]
1. a strong verbal or written attack on someone or something.
"his polemic against the cultural relativism of the sixties"
synonyms: diatribe, invective, rant, tirade, broadside, attack, harangue, condemnation, criticism, stricture, admonition, rebuke; abuse; informal: blast; formal: castigation; literary: philippic
"a polemic against injustice"
argumentation, argument, debate, contention, disputation, discussion, altercation;
formal: contestation
"he is skilled in polemics"
In hopes of more completely explaining my response to Matts that you object to, I was expressing my objection to his simple contradiction of my statement:
"I'll concede that most physicists believe that dark matter is real, but "we" don't "know" for certain - there is no incontrovertible evidence for its existence!".
Matts' curt retort:
"Yes for sure there is incontrovertible evidence for its existence!"
Is this the way science is done? In my opinion, it was Matts' response that is more correctly characterized as a contentious disputation - an attack, harangue, condemnation, criticism, stricture, admonition, rebuke; abuse...
I had chosen the term "incontrovertible evidence" purposely. I could have expressed the idea similarly as 'inarguable evidence' or 'definitive proof' - perhaps that would have been clearer to a non-native English speaker. But "incontrovertible evidence" was the term that best fit my intended purpose.
My initial reaction was to mimic Matts' contentious rebuke of my statement by replying only: "No, there isn't". However, apparently unlike Matts, I do not feel that such an argumentative approach is reasonable without further explanation - so I quoted the dictionaries' definitions of the term "incontrovertible evidence" so that the reader could understand that Matts' statement was incorrect: the evidence for dark matter is _not_ "so conclusive that there can be no other truth as to the matter; evidence so strong it overpowers contrary evidence", "beyond doubt or questioning", "undeniably, absolutely, 100 percent, completely true"!
IMO, that and similar proclamations by Matts directly contradict the spirit of constructive commentary that lies at the heart of scientific discussions.
It's curious to me that you accuse me of being polemic (political) - in defense of what I consider Matts' objectionable treatment of commentators. In fact, it seems to me that you are bowing to political considerations in allowing undue dispensation to a presumably exalted member of the physics community for his bad behavior! I hope this sufficiently clarifies and justifies my comments and actions to satisfy your apparent need to defend Matts.
@James
I neither look up to nor look down upon people in general and scientists in particular. I merely hope to add clarity to discussions by addressing statements that I believe to be in error, or amplifying discussions that fall short of answering a question. My stance is almost certainly one of orthodoxy, but I have spent over 40 years working towards generating that orthodoxy, so why would anyone be surprised at my taking such a position?
I believe that a tremendous amount of progress has been made in those 40 or so years and I hope that people can share in appreciating that quite remarkable achievement which has been won by the hard work of thousands. I participate in these discussions to share that sense of progress, not to look up to or look down upon other contributors.
There is a strong sense of mutual respect among astrophysicists, which is one reason why I am proud to be a part of that particular area of research. There is also keen competition in getting results first. It's a great community that has formed a consensus through understanding the issues before it in considerable depth.
Bernard,
I am a retired professional myself - having for decades been employed by a leading global corporation in the position of Technical Fellow. I respect and appreciate your view, but any scientific orthodoxy has persisted primarily by both withstanding continual questioning and making necessary adaptations - not by contentious denials but through meticulous reexamination and careful consideration of alternatives.
'Yes there is!' is not a considerate, respectful, appropriate, helpful or even professional response to the factually correct statement: "... there is no incontrovertible evidence for its [dark matter's] existence!".
BTW - I especially appreciate the difficulty of expressing complex ideas in a second language, having been stationed long ago for a year in Germany - knowing only a few words. Contributors' excellent comments posted here constantly amaze me!
Manuel,
Your paper is very informative and (to the extent I can comprehend) very interesting - thank you. You make a very good point that increased mass density, including dust, in earlier times could have contributed to the diminished peak luminosity of type Ia supernovae relative to their host galaxies' redshift.
In any expanding universe, wouldn't mass density necessarily diminish as spacetime volume increased? Since universal spacetime is thought to have expanded enormously, mustn't mass density also have diminished enormously?
I also have a very basic question, if you could possibly explain. Per http://en.wikipedia.org/wiki/Friedmann_equations#Density_parameter
"To date, the critical density is estimated to be approximately five atoms (of monatomic hydrogen) per cubic metre, whereas the average density of ordinary matter in the Universe is believed to be 0.2 atoms per cubic metre.[4]"
Presuming that the mass of dark matter represents about 6 times the mass of ordinary matter, then the average density of mass contributed by dark matter would be equivalent to 1.2 atoms/m^3, bringing the total mass density of matter to 1.4 atoms/m^3. So the question is: assuming for the moment that Lambda = 0, if the mass density was much greater in the earlier universe than now, mightn't it have initially exceeded critical density? Is there some time-dependent condition that would also have caused the effective critical density to be much greater than now?
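A quick numerical check of the "approximately five atoms per cubic metre" figure quoted above (a back-of-the-envelope sketch; the value H0 = 70 km/s/Mpc is my assumption, not taken from the article):

```python
# Back-of-the-envelope check of the critical density, rho_c = 3 H0^2 / (8 pi G),
# expressed in hydrogen atoms per cubic metre.
# Assumption: H0 = 70 km/s/Mpc; the physical constants are standard SI values.
import math

G   = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.6735e-27    # mass of a hydrogen atom, kg
Mpc = 3.0857e22     # metres per megaparsec

H0 = 70e3 / Mpc     # Hubble constant in s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg / m^3
atoms_per_m3 = rho_crit / m_H

print(f"rho_crit ~ {rho_crit:.2e} kg/m^3, i.e. ~ {atoms_per_m3:.1f} H atoms per m^3")
```

This lands at about 5.5 atoms per cubic metre, close to the Wikipedia figure (the exact number depends on the assumed H0).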
http://en.wikipedia.org/wiki/Lambda-CDM_model#Overview states:
"The cosmological constant is denoted as \Omega_{\Lambda}, which is interpreted as the fraction of the total mass-energy density of a flat universe that is attributed to dark energy."
If this is the case, shouldn't the effects of the cosmological constant also diminish as a function of spacetime expansion?
Even more fundamentally, while gravity locally contracts spacetime, it only _condenses_ matter - it actually increases the separation of matter from the vacuum. How could it be possible for gravity to recreate a homogeneous universe composed of a dense quark-gluon plasma, for example? Wouldn't that be analogous to an entire dust cloud collapsing to form a singular object?
Thanks for your consideration!
James asks:
> In any expanding universe, wouldn't mass density necessarily diminish as spacetime volume increased? Since universal spacetime is thought to have expanded enormously, mustn't mass density also have diminished enormously?
Of course, mass density decreases enormously while the universe expands, but see below:
> I also have a very basic question, if you could possibly explain. Per http://en.wikipedia.org/wiki/Friedmann_equations#Density_parameter
"To date, the critical density is estimated to be approximately five atoms (of monatomic hydrogen) per cubic metre, whereas the average density of ordinary matter in the Universe is believed to be 0.2 atoms per cubic metre.[4]"
Presuming that the mass of dark matter represents about 6 times the mass of ordinary matter, then the average density of mass contributed by dark matter would be the equivalent to 1.2 atoms/cm (sorry), bringing the total mass density of matter to 1.4 atoms/cm. So the question is: assuming for the moment that Lambda = 0; if the mass density was much greater in the earlier universe than now, mightn't it have initially exceeded critical density? Is there some time dependent condition that would have also have caused the effective critical density to be much greater than now?
The critical density is time-dependent and decreases with time. This is why all the densities are computed relative to the critical density. There is a restriction:
Omega_m + Omega_k + Omega_Lambda + Omega_r = 1
where
Omega_m = relative mass density
Omega_k = relative curvature density (0 for the flat model)
Omega_Lambda = relative cosmological constant density (0 for the open model)
Omega_r = relative radiation density (negligible by now, dominant for a long time before the CMB)
So the mass density could exceed the critical density (Omega_m > 1) for a model where Lambda = 0, k > 0 (which makes Omega_k < 0).
> "The cosmological constant is denoted as \Omega_{\Lambda}, which is interpreted as the fraction of the total mass-energy density of a flat universe that is attributed to dark energy."
If this is the case, shouldn't the effects of the cosmological constant also diminish as a function of spacetime expansion?
Yes, Omega_Lambda is also time dependent (all four components are). With the flat model, by the time of the CMB, Omega_Lambda was negligible while Omega_m was very near 1. And the critical density was much larger than it is now.
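Manuel's point that all four density parameters (and the critical density itself) change with time can be sketched with the standard scalings rho_m ∝ a^-3, rho_r ∝ a^-4, rho_Lambda = const for a flat model. The present-day values below are typical textbook numbers assumed for illustration, not values taken from this thread:

```python
# How the density parameters evolve with scale factor a (a = 1 today) in a flat
# FLRW model. Present-day values below are assumed, typical textbook numbers.
Om0 = 0.3              # matter today
Or0 = 8e-5             # radiation today
OL0 = 1.0 - Om0 - Or0  # Lambda today (flat model: the three must sum to 1)

def omegas(a):
    """Relative densities (matter, radiation, Lambda) at scale factor a."""
    E2 = Om0 / a**3 + Or0 / a**4 + OL0   # E2 = H(a)^2 / H0^2
    return (Om0 / a**3 / E2, Or0 / a**4 / E2, OL0 / E2)

# At the CMB epoch (z ~ 1100): matter dominates and Omega_Lambda is negligible,
# though radiation still contributes this close to matter-radiation equality.
Om_cmb, Or_cmb, OL_cmb = omegas(1.0 / 1101.0)
print(f"at CMB: Omega_m = {Om_cmb:.2f}, Omega_r = {Or_cmb:.2f}, Omega_L = {OL_cmb:.1e}")
```

The sum of the three parameters stays equal to 1 at every epoch, which is just the constraint equation above for the flat (Omega_k = 0) case.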
> Even more fundamentally, while gravity locally contracts spacetime, it only _condenses_ matter - it actually increases the separation of matter from the vacuum. How could it be possible for gravity to recreate a homogeneous universe composed of a dense quark-gluon plasma, for example? Wouldn't that be analogous to an entire dust cloud collapsing to form a singular object?
Yes, with a super-critical universe, the collapse produced by gravity acting on matter would be analogous to a dust cloud collapsing to make a black hole.
Regards,
Manuel,
That's very helpful for me - thanks very much!
The point of my last question was: in a hypothetical 'expanding-collapsing universe' where gravity produces the universal collapse of matter, wouldn't the previously expanded spacetime, while contracted, necessarily remain external to the gravitationally collapsed, condensed mass?
James asks:
> The point of last question was - in a hypothetical 'expanding-collapsing universe' where gravity produces the universal collapse of matter, wouldn't the previously expanded spacetime, while contracted, necessarily remain external to the gravitationally collapsed, condensed mass?
In the Einstein equation for an expanding-collapsing universe (equation 7 in our paper with k>0, Lambda=0) R' (the expansion speed) eventually becomes negative as a combination of the first two terms, the first (the effect of gravity on mass) positive and decreasing, the second (curvature) a constant negative.
I have not studied this model in depth, but even after the contraction had begun, the whole universe would not be equivalent to a black hole until its radius got below the Schwarzschild radius, I think about 300 million years before total collapse (a long time from now). Up to that point, we would have a contracting universe with galaxies progressively nearer, but more or less as they are today, because at local cluster distances gravity dominates (at this time) over space expansion or contraction.
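Manuel's Schwarzschild-radius remark invites a curious back-of-the-envelope calculation (my own sketch with an assumed H0, not a claim about the collapse model in the paper): for a sphere of one Hubble radius filled at exactly the critical density, the Schwarzschild radius r_s = 2GM/c^2 comes out equal to the Hubble radius itself.

```python
# Schwarzschild radius of the mass inside one Hubble radius at critical density.
# Assumption: H0 = 70 km/s/Mpc; this is an illustrative sketch only.
import math

G, c = 6.674e-11, 2.998e8      # SI units
Mpc  = 3.0857e22
H0   = 70e3 / Mpc              # Hubble constant, s^-1

R_H      = c / H0                                      # Hubble radius, m
rho_crit = 3 * H0**2 / (8 * math.pi * G)               # critical density, kg/m^3
M        = rho_crit * (4.0 / 3.0) * math.pi * R_H**3   # enclosed mass, kg
r_s      = 2 * G * M / c**2                            # Schwarzschild radius, m

print(f"R_H = {R_H:.2e} m, r_s = {r_s:.2e} m, ratio = {r_s / R_H:.6f}")
```

The ratio is exactly 1 by algebra (M = c^3 / (2 G H0), so r_s = c / H0 = R_H), independent of the assumed H0 - one reason "the universe as a black hole" intuitions need careful handling.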
Thanks. I suppose another way of posing this question in terms of identified 'large scale' structure is that, if gravity produces filaments of matter while expansion produces voids, could universal scale gravitational effects actually cause voids to condense in the absence of expansion? I suspect this question is beyond the scope of universal scale cosmological models. It's really an academic question, anyway...
About voids, filaments and the rest: ...
There was a very recent international workshop on cosmic voids and filaments, held at the Lorentz Centre in Leiden (The Netherlands):
http://www.lorentzcenter.nl/lc/web/2014/604/info.php3?wsid=604&venue=Oort
The discussions presented at the meeting are becoming available on the conference website if you wish to read what the current thinking is. The program provides links to the people who participated: you may like to look at their papers on the astro-ph arXiv.
In general I don't like to quote myself in a forum like this. However, as a reply to James Dwyer's unfounded statement that there is no incontrovertible evidence for dark matter, I must quote my review article on all the evidence that there is:
arXiv:1208.3662 [physics.gen-ph] published as J. Mod. Phys. 3 (2012) 1152.
Matts,
That's fine, but apparently you still don't fully appreciate the meaning of the term 'incontrovertible'. I specifically used that term to indicate that there is no evidence for dark matter that proves that its existence is "undeniably, absolutely, 100 percent, completely true."
Just for your information, I offer a semantic point: you did not 'quote' your review article, you only referenced it. Quoting describes the copying of specific relevant text from an external source. Specific quotes that establish the physical existence of dark matter beyond any possible doubt would have been more helpful.
Science has always had difficulty with restraining its para-scientific enthusiasms. And yet there is scarcely a scientific theory that has survived more than 200 years, and many haven't survived much more than 100. Newton was once "incontrovertible", so was the particle theory of light, so was the wave theory. How long has it been since the "static, eternal universe" wasn't just incontrovertible, but its alternative not even thinkable?
Science has progressed, when it's progressed, without much need or regard for certainty. I'd even say it hasn't been helpful -- at all.
Existence of dark matter is inferred indirectly from the motion of galaxies and many other astrophysical observations. Theories of structure formation also use dark matter as a key ingredient. However, what James is trying to put forward is that there has not been any ``direct detection'' of dark matter. We all know that this statement is true. That is one of the key issues of the supersymmetry search, which is establishing the existence of the neutralino. Alas, we have not detected it yet. The sterile neutrino is not yet fully established.
One point in favour of hot dark matter can be made. This is the discovery of neutrino mass. Now it is almost beyond any doubt that neutrinos have small mass, even though the absolute scale of neutrino mass is unknown.
@Biswajoy Brahmachari
The evidence is tentative, but these papers suggest independently that a signal of dark matter has been found by the XMM-Newton X-ray satellite and that the signal is compatible with dark matter consisting of sterile left-handed neutrinos:
Alexey Boyarsky, Oleg Ruchayskiy, Dmytro Iakubovskyi, Jeroen Franse
An unidentified line in X-ray spectra of the Andromeda galaxy and Perseus galaxy cluster
http://arxiv.org/abs/1402.4119
Esra Bulbul, Maxim Markevitch, Adam Foster, Randall K. Smith, Michael
Loewenstein, Scott W. Randal
Detection of An Unidentified Emission Line in the Stacked X-ray spectrum of Galaxy Clusters
http://arxiv.org/abs/1402.2301
Update 2014-03-07: Biswajoy pointed out elsewhere in this thread that left-handed is the usual chirality. My bad, the papers clearly state that the hypothesized neutrinos have the opposite chirality from the usual one, and Biswajoy explains:
"..., any new neutrino (m < m_Z / 2) cannot be a left handed fermion. It can be sterile (which is a pure singlet fermion) or it can also be a right handed neutrino, which is also sterile as long as Z decay is concerned, as it does not couple to Z."
I suggest you read the paper "IS THE FLATNESS PROBLEM REAL ?"
Roland Triay, Gravitation & Cosmology, Vol. 3 (1997), No. 1 (9), pp. 54–60 1997 Russian Gravitational Society
Rogier,
Actually, the papers you reference both claim to have detected not Cold Dark Matter, consistent with the Lambda-CDM standard model of cosmology, but rather the decay emissions of Warm Dark Matter in the form of sterile neutrinos. As I understand it, the discovery of significant amounts of WDM would require at least a major revision of the LCDM model. Previous attempts to detect CDM have not been successful - see today's news - http://www.nature.com/news/physics-broaden-the-search-for-dark-matter-1.14795.
Also see the recent, if preliminary, results from the new, most sensitive detector in operation - http://arxiv.org/abs/1310.8214.
BTW, you might also find my recent question to be of some interest:
https://www.researchgate.net/post/Can_the_Lambda-CDM_cosmological_model_survive_the_discrepancy_between_galaxy_cluster_observations_and_CMB_projections?
@Roland - bonjour!
To download your article I had to go via Google and do a search on "Triay inflation problem real" and of course you come top of the list with a PDF (which is in fact on this website). That version of the paper has blank diagrams - could you post a version which has the diagrams since that would make some of the mathematical arguments clearer to those who are not familiar with the vagaries of the FLG model?
@James
Several high resolution N-Body models (eg: "Aquarius") have run WDM Lambda models. The differences from the standard Lambda-CDM are seen only on the smallest, galactic and smaller, scales - everything else on larger scales is the same. So no major revision is required as far as mega-parsec and greater cosmic structure is concerned.
However, in the WDM models, galaxies have a quite different satellite population than in CDM models, the most striking difference being that the population of satellite galaxies is far smaller in WDM. Maybe, in view of what we see in our own Milky Way neighborhood, that is a good thing! There have been several papers on that (including one very recent one on which I am a co-author).
@Bernard, Bonsoir (I'm in RJ at this moment)
I discovered that indeed the figures in the paper are empty! Thanks.
here is the original one .. quite a long time ago..
FLG = Friedmann-Lemaître-Gamov
at those times the scientific community rejected the cosmological constant as you know..
Bernard,
I think you're much too quick to dismiss the issues involved. Switching from a CDM to a WDM cosmology hardly seems so 'transparent' as you imply.
I only find one "Aquarius" paper out of 42 (see http://www.mpa-garching.mpg.de/aquarius/) that mentions WDM in its title (a second title is in error) - "The haloes of bright satellite galaxies in a warm dark matter universe," http://arxiv.org/abs/1104.2929. Offhand, it seems to evaluate WDM only as a solution to the cuspy halo problem for dwarf galaxies...
Also see "Too big to fail? The puzzling darkness of massive Milky Way subhaloes," http://arxiv.org/abs/1103.0007 and "The Milky Way's bright satellites as an apparent failure of LCDM," http://arxiv.org/abs/1111.2048.
Perhaps you have some specific references to illustrate your point??
Again, I suggest that you at least read the question linked in my preceding comment - I also mention the missing satellite problem, but I suspect that even a mix of HDM or WDM would significantly affect the dynamics of DM halos and galactic rotation - especially in large spiral galaxies...
@James
I do read your posts - as usual!
I only commented on one of your statements regarding what Aquarius is finding, not the entire list of posts.
The Springel webpage you refer to only covers papers up to the publication of the Wang et al paper in 2012, and the papers you are citing are only up to 2011.
A lot of work has been done since then, some of it only covered in seminars and conferences. I report on what I know of and read about for the general interest of those who read this discussion - I present no opinions since I only work on the periphery of this particular subject :)
The issue is not about Lambda or about dark matter - there is abundant evidence for these from a variety of sources (and I do not count an N-Body simulation as "evidence").
The argument has moved on to what is happening on the smallest scales - it does not concern Lambda or the existence of dark matter. But it does raise an issue over the nature of the dark matter: if it is "cold" then we may (!) have an issue with an excess of satellites in Milky-Way sized systems. If the dark matter is warm, then the problem (if there really is one) looks far less serious.
Simulations are very important: but they are only simulations of a variety of specific scenarios. I view such simulations, in part, as an exploratory tool. We have only one Universe and we cannot experiment with it, but we can experiment with a simulation and that has proved to be a powerful way of understanding cosmology.
So, if you have a different scenario, run a simulation to see how well your scenario does. Springel's "Gadget" code is in the public domain.
Bernard,
Thanks for explaining about the Aquarius papers I found - but you didn't give me much to go on stating only:
"Several high resolution N-Body models (eg: "Aquarius") have run WDM Lambda models. The differences from the standard Lambda-CDM are seen only on the smallest, galactic and smaller, scales - everything else on larger scales is the same. So no major revision is required as far as mega-parsec and greater cosmic structure is concerned."
Can you please provide links to "several" studies that "have run WDM Lambda models" that conclude "... So no major revision is required as far as mega-parsec and greater cosmic structure is concerned"?
There seems to be some continuing misunderstanding - I was specifically suggesting that you read a new RG question posting - not referring to any comment posted here. In the recent RG question posting, https://www.researchgate.net/post/Can_the_Lambda-CDM_cosmological_model_survive_the_discrepancy_between_galaxy_cluster_observations_and_CMB_projections? I ask:
"However, if the composition of universal mass-energy included HDM or WDM and less total CDM than now thought, how would that affect the enormous gravitational effects routinely attributed to CDM in the observed universe?"
and
"What other L-CDM results would be affected by the inclusion of HDM and/or WDM?"
Selectively including WDM in models of dwarf galaxy dynamics without including it in large spiral galaxies seems to be taking the approach of parametric fine-tuning of cosmological models to the extreme!
As I understand, adding WDM to the LCDM applecart would require that something must first be taken out. If the amount of universal CDM is reduced, how does that affect large scale structure, the formation and properties of protogalaxies and their evolution, the composition of DM halos intended to explain especially large spiral galaxy rotation, etc.?
Adding WDM may help with LCDM problems such as 'the small scale structure problem', 'the missing satellite problem', the cuspy halo problem' and others, but it must do so while maintaining alignment with other observations. I suggest that will require a new 'Lambda Warm and Cold Dark Matter' cosmological model.
At any rate, there's no need to continue challenging me to do my own modelling - I cannot! I'm not embarrassed by that shortcoming - I will be specifically referring to others who have - please see the references in the question posting linked above. It would be much more appropriate to continue this and other 'discussions' there - there's no need to hijack this post.
@Rogier Brussee
Thank you very much indeed for the very recent references.
Note that the total decay width of the Z boson is measured very accurately at CERN. Because the Z boson is a mediator of weak interactions, an interaction involving left-handed currents and the Z boson, we precisely know how many species of left-handed neutrinos couple to the Z boson. The number is almost exactly 3. Sometimes they are also called active neutrinos, as opposed to sterile neutrinos. Consequently, any new neutrino (m < m_Z / 2) cannot be a left-handed fermion. It can be sterile (which is a pure singlet fermion) or it can also be a right-handed neutrino, which is also sterile as far as Z decay is concerned, as it does not couple to the Z.
@James
I'm not trying to hijack anything - I have better things to do with my time, and in any case, why would I do that? I merely provide information based on what I do in my everyday life.
The Aquarius project is on-going, and there are many other projects doing similar things, ie: embedding an ultra-high mass resolution simulation in a lower mass resolution simulation. The mechanics of doing that means that the hi-res box experiences the correct boundary conditions, and the rest of the model is correctly simulated, albeit at lower mass resolution. So in the hi-res box we see the result of including either WDM or CDM. Nobody does work on simulating HDM since including it does influence scales greater than the scale of galaxies and seems to produce results that are at variance with what we see (you listed those scales in an alternate post).
Of course, it may indeed ultimately require a combination of WDM and CDM - who knows? But let's understand the issues before declaring the necessity of adding in two more things we know little or nothing about.
These works are often discussed in seminars and conferences: publication takes considerably more time - it may take a couple of years for a piece of research to appear, even on the ePrint arXiv, and after that there is the refereeing process and so on. So we have to wait until the "official" papers come out. Seminars and conferences are a vital part of overcoming the tidal wave of papers that we have little or no time to read and study. Many workshop-style conferences post the PowerPoints (or PDFs) that were presented - though that does not include the all-important discussion that often follows.
And I am serious about running Gadget. A modern laptop can run a credible simulation - but it has to be a Linux laptop or computer. This is often an important part of teaching Masters students and startup PhD students about simulation and computing. It is probably harder to analyse and visualize the results of the simulation than to run it!
Bernard,
My 'hijacking' remark was intended to suggest that Rogier might prefer that our ongoing discussion about WDM, etc., be conducted on my question posting more related to that subject rather than here. I don't understand how that would take more of your time.
Yes, simulations can be built quite reasonably - but their representations, methods and results cannot be easily verified. No one needs more amateur model results that are difficult to interpret and impossible to validate - there are quite enough already, IMO!
@James
Apologies - I thought you were saying that I was hijacking this discussion. My misunderstanding.
Numerical simulation is an essential tool in cosmology and has driven many important developments over the past 4 decades (we were running 1000 bodies in the mid-1970's!). As a specific example I might cite research into the origin of galaxy angular momentum - a subject that is still not settled since it is far more complex than we had originally imagined.
I recommend that people use Gadget simply because the "hands-on" approach gives one a feeling for the scope of such simulations, and their potential problem areas. There would be no need to use such a simulation for any other purpose than elucidation: many of the big simulations' outputs and analyses are available on-line (eg the "Bolshoi" simulations).
Of course the problem with a simulation is it has to be built on various assumptions, and that basis can be forgotten in the ensuing manipulations.
Dark energy and dark matter are two parameters in the standard curvature/flatness model that can't be zero-ed out without grossly distorting the representation. That doesn't prove they exist, only that they're needed in the model.
@Jim
Which brings us back to the flatness problem. Observational data from several directions tell us the Universe is flat and with that assumption we can run more meaningful simulations.
One trend in simulations is to see what is the effect of assuming different dark energy models - the idea being that this will suggest what kind of observational data will be needed to probe the nature of the dark energy at high redshift.
Taking the evidence for flatness at face value, and seeing that the present larger scale cosmic structure can then be understood via simulations, leads us to believe that we can ask questions about what is happening to the expansion rate at higher redshifts. That will indicate whether the dark energy is a cosmological constant or something else.
@Bernard
I'm still looking for the precision of the measure of flatness. Is there no measurable curvature out to a megaparsec? How much would have to be measurable at that distance?
@Bernard @Jim
The data suggests that we see a bit of the universe that is small compared to the curvature scale ;-). @Dominique Schwartz states that "At the observable scales we expect a departure from flatness, but we expect that it is just a 0.1 per mille effect (a factor of 100 smaller than current observational constraint)." It would be interesting to know why it is expected to be of the order 10^-4 because if that is really "just" two orders of magnitude from observable, then it sounds like a question that can one reasonably hope to answer by one or two generations of technology upgrades i.e. within my remaining life span.
@Bernard. I always thought that dark energy is the same thing as writing the cosmological constant term on the side of the stress energy tensor to give it the form of the Einstein equation without cosmological constant, but it seems that one can have different forms of DE. So what exactly makes a contribution to the stress energy tensor dark energy?
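On Rogier's question about what counts as dark energy: a common parametrisation (standard textbook material, not taken from any reply in this thread) characterises each component of the stress-energy tensor by an equation of state p = w rho c^2; its energy density then scales as rho ∝ a^(-3(1+w)). A cosmological constant is the special case w = -1 (constant density), and "dark energy" usually means any component with pressure negative enough to accelerate the expansion (w < -1/3), whether or not w is exactly -1.

```python
# Density scaling rho(a) = rho0 * a**(-3 * (1 + w)) for a constant equation of
# state p = w * rho * c^2. Standard textbook parametrisation, shown for
# illustration; the w values below are the usual special cases.
def rho(a, w, rho0=1.0):
    """Energy density at scale factor a for constant equation-of-state w."""
    return rho0 * a ** (-3.0 * (1.0 + w))

# matter (w = 0) dilutes as a^-3, radiation (w = 1/3) as a^-4,
# a cosmological constant (w = -1) does not dilute at all:
for name, w in [("matter", 0.0), ("radiation", 1 / 3), ("Lambda", -1.0)]:
    print(f"{name}: rho(a=2)/rho(a=1) = {rho(2.0, w):.4f}")
```

Probing whether the dark-energy density dilutes at all (i.e. whether w deviates from -1) at high redshift is exactly the kind of question the simulation programs Bernard describes below are meant to inform.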
The measurement of flatness (ie: curvature):
--------------
The WMAP and Planck Cosmic Microwave Background (CMB) satellites were able to directly determine large numbers of cosmological parameters, and in particular the contributions to the total energy density of baryonic matter, dark matter, dark energy and curvature. These various contributions are referred to as "density parameters". The CMB experiments are also able to determine a value for the Hubble constant (the present rate of expansion).
Quite different experiments such as gravitational lensing, baryonic acoustic oscillations, supernovae, etc., also put limits on the values of these density parameters which are fully consistent with the CMB data. The conclusions reached are therefore not simply a consequence of a single experiment and in that sense the evidence that our model makes sense is very strong indeed.
If we combine CMB data with this data from other experiments we get very strong constraints on the value of the cosmological curvature. Here I will focus on the CMB results.
I'll present this briefly here even though some respondents to this question may already have made similar comments.
The simplest summary of the CMB view is the Wikipedia article on the WMAP experiment
http://en.wikipedia.org/wiki/Wilkinson_Microwave_Anisotropy_Probe
See the last table in the article, entitled "Best-fit cosmological parameters from WMAP nine-year results".
The table reports that the curvature using WMAP alone is -0.037 (+0.044, -0.042) - the numbers in brackets are the plus-minus 68% errors on the measurement. The value is consistent with zero.
If we add in the evidence from other experiments they report
Omega_k = -0.0027 (+0.0039 -0.0038),
which is even closer to zero. To quote from the paper Abstract: ".. the universe is close to flat/Euclidean".
This result is also Table 17 of the paper reporting these results:
http://arxiv.org/abs/1212.5225
(C. L. Bennett et al. 2013 ApJS 208 20, not yet available for download without a subscription). The number you want is towards the end of the table. The paper is rather technical and discusses the various models that are fitted and the fitting techniques that are used to analyse the data and combine it with data from other experiments. But, as is usual with the WMAP group, the paper is very thorough and clearly written.
It is also in
http://arxiv.org/abs/1212.5226
(G. Hinshaw et al. 2013 ApJS 208 19)
in their Table 9. This paper discusses the technical issues in even more detail.
The WMAP website is wonderful:
http://wmap.gsfc.nasa.gov/
and offers pictures, papers and explanations at all levels.
The Baryonic Acoustic Oscillations and gravitational lensing measurements offer independent and almost equally strong constraints on the curvature. See for example Tables 6 and 7 in
http://arxiv.org/abs/1303.4666
(Anderson et al. Monthly Notices of the Royal Astronomical Society, Volume 439, Issue 1, p.83-101)
*** Conclusion: as far as the data is concerned, the Universe we see is flat.
(I should add that when reading some of the more technical papers, you should be aware that some models now pre-suppose that the universe is flat when analyzing their data.)
I hope this answers the question!
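Those Omega_k numbers can be turned into a curvature radius, which connects back to Rogier's original question. A small sketch (the Omega_k central values are the WMAP numbers quoted above; H0 = 70 km/s/Mpc is my assumption):

```python
# Translate the quoted Omega_k central values into a curvature radius,
# R_curv = (c / H0) / sqrt(|Omega_k|). H0 = 70 km/s/Mpc is assumed.
import math

c, Mpc = 2.998e8, 3.0857e22
H0 = 70e3 / Mpc                  # s^-1
R_H_Gpc = c / H0 / Mpc / 1e3     # Hubble radius in Gpc (~4.3)

def curvature_radius_Gpc(omega_k):
    """Curvature radius implied by a non-zero Omega_k, in Gpc."""
    return R_H_Gpc / math.sqrt(abs(omega_k))

for label, Ok in [("WMAP alone", -0.037), ("WMAP + other data", -0.0027)]:
    R = curvature_radius_Gpc(Ok)
    print(f"{label}: R_curv ~ {R:.0f} Gpc ({R / R_H_Gpc:.0f} x Hubble radius)")
```

Taking the central values at face value, the curvature radius comes out at roughly 5 to 20 Hubble radii, consistent with the "one to two orders of magnitude larger than the Hubble scale" statement at the top of this thread; since the values are consistent with zero, these are effectively lower limits.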
That the mass-energy density of the universe appears to be so precisely identical to the critical density, after > 13 billion years of expansion, suggests to me that these are not independent variables, i.e., that the gravitational constant may be dependent on the density of matter; that some unidentified physical mechanism is responsible for 'automatically' tuning the apparent values of one to those of the other...
@Bernard
Thanks for your detailed exposition!
@James
As I tried to point out elsewhere in this thread, if you assume
A. we can only see a small part of the "whole" universe, where small means small compared to the curvature, and
B. the Universe looks to good approximation like a version of FLRW universe,
then the observed Hubble constant equals the Hubble constant for a flat universe of the same density.
In other words, no fine tuning is required, and there is nothing particularly "critical" about the density. Observing the density to deviate from "critical" for a given observed Hubble constant would be a falsification of A or B. I guess that is one of the points of the inflation idea.
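Rogier's points A and B can be put in one line (my paraphrase using the standard Friedmann equation, not new physics): since H^2 = (8 pi G / 3) rho - k c^2 / R_curv^2, the curvature term relative to H^2 is |Omega_k| = (c / (H0 R_curv))^2, so a curvature radius of N Hubble radii makes the density differ from critical by only ~1/N^2.

```python
# If the curvature radius is N Hubble radii, the Friedmann equation
#   H^2 = (8 pi G / 3) rho - k c^2 / R_curv^2
# implies the density differs from the critical value 3 H^2 / (8 pi G)
# by a fraction |Omega_k| = 1 / N^2.
def density_offset(n_hubble_radii):
    """|rho - rho_crit| / rho_crit for a curvature radius of N Hubble radii."""
    return 1.0 / n_hubble_radii ** 2

for n in [1, 10, 100, 1e10]:
    print(f"R_curv = {n:g} Hubble radii -> |Omega_k| ~ {density_offset(n):.1e}")
```

So a curvature radius 10 orders of magnitude beyond the Hubble radius, as in the original question, would leave the density within one part in 10^20 of critical - utterly unobservable, which is why assumptions A and B make the apparent flatness unsurprising.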