It has been deduced from WMAP and Planck observations that the expansion of the universe is accelerating. This would mean that the present pressure in the universe is negative. Do we have any knowledge of the value of this pressure, or of its order of magnitude (whether obtained by calculation or by measurement)?
Dear Guibert,
yes, we do. It's a trivial exercise in cosmology. However, one has to be careful because what is called "pressure" in cosmology is often given in units of an energy density, which is however just a shortcut to avoid writing too many constant factors.
In any case, our current Universe (i.e. its energy density) basically consists of radiation (below the per-mille level), non-relativistic matter (dark and non-dark, about 31.5%) and its main component, dark energy (68.5%). Of these, radiation has pressure, but its amount is negligible. Non-relativistic matter is pressureless, and dark energy has, as you correctly say, negative pressure.
Thus to an excellent approximation the total pressure of the Universe can be set equal to the partial pressure p_DE of dark energy, which is just minus the energy density rho_DE of dark energy. The latter is given by 68.5% of the "critical density" rho_crit = 1.878e-29 g/cm^3 x h^2, where h = 0.67 is the reduced Hubble constant (both rho_crit and h are measured, not only by Planck or WMAP, but also by further astrophysical observations, although typically less precisely). Converting a mass density in g/cm^3 into an energy density (i.e. a pressure) in pascal gives a factor of 9e19, so that the total pressure in any cm^3-piece of the Universe is given by:
p_Universe = -rho_DE = -0.685 x 1.878e-29 x 0.67^2 x 9e19 Pa = -5.2e-10 Pa.
Note that in cosmology we always talk about densities rather than absolute quantities - it would not make sense to translate this value into a "total pressure" of the Universe, since we don't know the size of the Universe and also because there is no way to measure such a total amount. We also don't use SI units, but in any case, the message is yes we do know the pressure of the Universe because we have measured it.
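For concreteness, the back-of-envelope arithmetic above can be sketched in a few lines (the numbers are the ones quoted in this thread; the factor 9e19 is just c^2 expressed in the units needed to turn g/cm^3 into Pa):

```python
# Back-of-envelope estimate of the pressure of the Universe,
# using the values quoted above (a sketch, not a precision calculation).
OMEGA_DE = 0.685          # dark-energy fraction of the critical density
RHO_CRIT_H2 = 1.878e-29   # critical density in g/cm^3, per factor of h^2
H_REDUCED = 0.67          # reduced Hubble constant h
G_PER_CM3_TO_PA = 9e19    # c^2 in units that turn g/cm^3 into Pa

rho_de = OMEGA_DE * RHO_CRIT_H2 * H_REDUCED**2 * G_PER_CM3_TO_PA  # in Pa
p_universe = -rho_de      # dark energy has p = -rho
print(f"p_Universe ~ {p_universe:.1e} Pa")  # ~ -5.2e-10 Pa
```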
Best regards,
Alexander
Dear Alexander.
Excellent. Thanks for this thorough answer. And feel reassured: I'm not going to translate this value into a "total pressure of the Universe" :=)
Just a lateral question on terminology: why "reduced" Hubble constant? I noted from Planck's 2015 results that H_0 = 67.8 +- 0.9 km/s/Mpc. Does "reduced" here just mean "divided by 100" (for mathematical convenience, or is there a more fundamental reason)?
Dear Guibert,
yes, indeed, h is just the value of H_0 divided by 100 (km/s)/Mpc. There is not much of a fundamental reason for this other than history. Some 50 years ago, the exact value of the Hubble constant was not known, but it was nevertheless clear that 100 km/s per Mpc was not too bad an approximation. Thus, H_0 was taken to be "some factor of order one" (called h) times the benchmark value of 100 (km/s)/Mpc. Nowadays we just know h much more precisely than we did decades ago.
Other than this, h is convenient to use because in cosmology we often think in dimensionless numbers. For example, the energy density in dark matter is typically written as "Omega_DM h^2", the former factor being the ratio of the DM energy density to the critical energy density. The cosmological model is typically fitted to the data using Omega_DM h^2 as a parameter, because effectively the model predictions depend on this combination of parameters, which makes the fit easier.
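As a small illustration of the bookkeeping (the value 0.12 below is an illustrative round number, close to the Planck best fit for Omega_DM h^2, not a quoted result):

```python
# The reduced Hubble constant is just H0 in units of 100 (km/s)/Mpc.
h = 0.67
H0 = h * 100.0            # Hubble constant in (km/s)/Mpc

# Fits quote the combination Omega_DM * h^2; to recover the fractional
# density Omega_DM itself, divide the fitted value by h^2.
omega_dm_h2 = 0.12        # illustrative value, close to the Planck fit
Omega_DM = omega_dm_h2 / h**2
print(H0, round(Omega_DM, 3))  # 67.0 and about 0.267
```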
Best regards,
Alexander
Dear Alexander,
Ok! I understand. So the "Omega_lambda" one finds in the literature is equivalent to an "Omega_DE" corresponding to the "rho_DE" of your answer. However, referring to "lambda" as the "cosmological constant" is probably not fully correct in the literature, as it would mean that "rho_lambda" has to increase in the future (lambda no longer constant) because the density of non-relativistic matter would diminish due to the expansion... Thus, "Omega_DE" and "Omega_lambda" are perhaps equivalent now but not in the future. Or did I miss anything?
Dear Guibert,
they are basically equivalent. The cosmological standard model which provides an excellent fit to the data is called "LambdaCDM", i.e., it assumes a cosmological *constant* Lambda (leading to dark energy) and CDM (cold dark matter).
While this is the minimal model, in reality we don't know whether the dark energy is really a constant Lambda. It could in principle also be dynamic and change with time. However, we can only observe it in a single moment (at least on cosmological scales), so we don't know. Thus, dark energy is more general than a cosmological constant, but a cosmological constant is one possible realisation of dark energy (arguably the simplest one). In principle we could check whether the dark energy is constant by waiting long enough... but who's got a few billion years' time?
As a side note, we also don't know for sure whether the "C" for cold is, strictly speaking, correct. Our current data exclude hot dark matter (i.e., particles with extremely high velocities), but we don't know whether the particles are nearly at rest ("cold"). It could be something in between (often called "warm", although it can be even more complicated).
You see, we don't yet know everything in cosmology, but our measurements will improve in the future and we will certainly narrow down our picture of the Universe even further.
Best regards,
Alexander
If we consider primordial neutrinos as matter, then they were 'hot' in the early universe but became 'cold' perhaps a few million years later, depending on their mass. Since the era of radiation dominance ended around 50 thousand years after the big bang and their speeds were relativistic throughout that time, they are generally lumped in with radiation, and their density isn't sufficient for a hot dark matter model to fit.
Dear George,
yes, you're right, what I was referring to was only that the main component of DM is not allowed to be hot (the upper limit is something like 1 per cent, or so). It is true that neutrinos are cold nowadays, but that does not really change anything, because at the time when structures formed, which is where we could possibly observe their effect, they were hot.
Neutrinos however also fail to be the main component of DM because their mass is too small. I did not mention these details before, because Guibert's original question was about dark energy and I just added the DM bit to make clear that, although we do know quite a bit about our current Universe, there are still some bits we are not yet sure of.
Best regards,
Alexander
Yes, to be cautious is a necessity in cosmology, even more than in other branches of science. But isn't it just the originality (and glory) of cosmology to be able to do actual science (here, to find scientifically grounded, subtle byways) to describe and understand one single object, the universe (*)?
(*) of which we are part and which cannot be duplicated
Dear Ales,
you are absolutely right: one should always be cautious! However, dark energy is not only (and also not mainly) established by supernovae; it is instead a concordance between supernovae, baryon acoustic oscillations, galaxy clusters, and in particular the CMB. You are right that supernovae alone are not sufficient, but given the agreement between four very different datasets, I don't think any cosmologist would seriously doubt the existence of dark energy.
It is a different question as far as the details are concerned (e.g. is it constant in time or not, what is its precise value, and so on), but the data quite clearly tell us it's there. Thus, the negative pressure simply follows. Still, you are right that the number I gave earlier in this thread is just an example value based on the best-fit parameters, and it should be clarified that it's only that, so thank you for pointing this out.
Best regards,
Alexander
Dear Alexander and Ales,
Concerning the negative pressure, I just read the enclosed article, which uses the "Teleparallel Equivalent of General Relativity" (TEGR) to show that a negative pressure naturally emerges without the need to call for dark energy. I have just discovered TEGR, but it seems to have already existed at the time of Einstein. In the article, it is described as "an alternative geometrical formulation of General Relativity (rather than an alternative theory of gravity)".
Could this TEGR be an alternative to the mainstream thinking which deduces an Omega_DE of 0.685 from GR and the present CMB dataset (Planck mission)? Or is it inconsistent with it and the other observations (baryon acoustic oscillations, galaxy clusters, etc.)?
Best regards
Dear Guibert, dear Ales,
(thanks Ales for labeling me as "qualified", but maybe I am just faking very well... ;-) )
I think I can resolve the terminology. First of all, what we call "dark energy" is whatever causes the accelerated expansion of space. The latter is what observations tell us is happening, and anything qualifying as dark energy will have negative pressure (given the accelerated expansion, this is a necessity).
In that sense, whatever has negative pressure will act as "dark energy". This is possibly formulated a bit unfortunately in the paper that you mentioned, Guibert, because whatever has negative pressure would show up in observations as "dark energy" (we simply call such things dark energy).
The real subtlety however is what's behind dark energy. This we don't know for sure, but we have several theoretical guesses which, if true, would manifest themselves as dark energy in the observations. The easiest such theoretical model is a cosmological constant. This is simply a constant contribution to the energy density of the Universe that is not forbidden by any physical law; we can thus include this constant in our equations describing the Universe and fit this model to the data. This is what is done, and it does describe the data remarkably well. This is the reason that "cosmological constant" and "dark energy" are often treated on an equal footing, which is however not strictly correct.
But there could be other (more complicated) models that lead to dark energy which is *not* constant in time. This would be possible, because we ultimately measure dark energy just within a certain small time interval in the history of the Universe, so we can't tell whether it's constant or not. One such possibility is a time-dependent field (typically called "quintessence"), but it could also be some non-trivial metric perturbation, as discussed in the paper mentioned by Guibert. We cannot exclude such more complicated models, because our current data are not sufficient for that. We have basically no means to constrain dark energy at the very early stages of the Universe (not strictly true, there is e.g. some limit from BBN, but it is so weak that effectively we have hardly any constraint). Thus, our problem with these more complicated models is that we cannot distinguish them with our data.
Thus what a physicist would typically do is to go with the simplest model possible that explains all the data, which is a cosmological constant. We are of course not blind to the fact that more complicated explanations can exist, but at least our cosmological standard model is the simplest one that fits the data.
I hope this clarifies the situation a little bit: dark energy is the general phenomenon of an accelerated expansion of the Universe and thus strictly connected to negative pressure. However, whether this dark energy (or the negative pressure) is constant in time we cannot tell for sure. A cosmological constant is just the simplest explanation (maybe other researchers would say "most boring") that we have, and it fits the data very well. This does not mean that it is the ultimate conclusion, and we do explore non-minimal settings and try to understand if they could be probed by experiments or observations, but we would always try to find a model as simple as possible unless there is a good reason to make further assumptions beyond the minimal set that one truly needs.
Best regards,
Alexander
Dear Alexander,
Thanks for this very clear and thorough answer! I like the "most boring"...
So, in order to be completely in line with you: according to the simplest model, the negative pressure is thought to be constant in time, at least from the big bang till now, and probably will remain constant in the future ("...whether this dark energy (or the negative pressure) is constant in time...")... (?)
Best regards
Guibert
Dear Guibert,
yes, that's exactly right. The energy density of a cosmological constant (and hence of the negative pressure) would be constant in time and we extract its value from the data.
However, all other components in the Universe (radiation and matter) *DO* evolve with time, and their energy densities decrease. Thus, at early times, the energy density of a cosmological constant would have been the same as today, but those of radiation and matter were much higher (the one of radiation was highest, which is why that phase is called "radiation dominance"). But radiation both dilutes and redshifts with the expansion of the Universe, so its energy density decreases very quickly. Matter only dilutes, so its energy density decreases more slowly than that of radiation, and the Universe at some point entered a phase where matter was the dominant component ("matter dominance").
Nowadays, also the matter energy density has decreased so far that it is less than the energy density in dark energy, which is why the current phase is called "dark energy dominance" or "vacuum dominance" (the latter term derives from a cosmological constant being associated with the so-called vacuum energy, although we don't understand this connection very well at the moment). Thus, even though initially the dark energy density was tiny compared to anything else in the Universe, it was the only component which did not decrease with time, and it is now starting to be dominant. Why this happens just now is not known (the so-called "Why now?"-problem), but it may just be a coincidence. In any case, if dark energy is caused by a cosmological constant, then you would indeed expect this component of the Universe to be dominant at late times, i.e., when the energy densities of all other components have been diluted so far that they practically don't affect the energy balance of the Universe anymore.
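This dilution history can be sketched numerically with the standard scaling laws (a minimal sketch; the present-day fractions below are rough round numbers assumed for illustration). With a the scale factor (a = 1 today), radiation scales as a^-4, matter as a^-3, and a cosmological constant stays fixed:

```python
# Energy densities relative to today's critical density, as a function of
# the scale factor a (a = 1 today). Rough present-day fractions assumed.
OMEGA_R, OMEGA_M, OMEGA_DE = 9e-5, 0.315, 0.685

def densities(a):
    """Radiation both dilutes and redshifts (a^-4); matter only dilutes
    (a^-3); a cosmological constant does not change at all."""
    return OMEGA_R / a**4, OMEGA_M / a**3, OMEGA_DE

# Matter-dark energy equality: OMEGA_M / a^3 == OMEGA_DE
a_eq = (OMEGA_M / OMEGA_DE) ** (1 / 3)
print(f"matter = dark energy at a ~ {a_eq:.2f} (redshift z ~ {1/a_eq - 1:.2f})")
```

Running this shows matter and dark energy had equal densities at a scale factor of roughly 0.77 (redshift around 0.3), while at small a the radiation term dominates everything, exactly the sequence of eras described above.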
Everything clear now?
Best regards,
Alexander
Dear Alexander,
Thanks a lot !
But science is a permanent questioning... (sorry, but you are so good and so clear!). I thought that there is nowadays a general agreement that there was an inflationary stage around 10^-35 s after the big bang (explaining homogeneity and isotropy at large scales). So, since this inflation could have started from a "bigger than usual vacuum fluctuation" (see Mukhanov et al.), could there have been a kind of extremely short decoupling between the cosmological constant and the (average?) vacuum energy to allow the inflationary stage? At that time, the total density rho_tot was very high, and the equation of state in force was p = -rho_tot. Thus a vacuum fluctuation extending to the whole (still very hot and concentrated) universe would allow a very big "-p" to develop for the inflation, while the very low "-p" due to the cosmological constant would have remained constant at its present value (?). This is perhaps why you say "...we don't understand this connection very well at the moment..." :=)
Best regards
Guibert
Hi everybody,
let me start with Guibert's point about inflation. You are right that the inflationary phase has the same characteristics as dark energy; just the numbers don't match, since the current dark energy density is much smaller than that required for inflation. Exactly that has in some quarters triggered the notion of both being two sides of one and the same time-dependent phenomenon, but we simply don't know at the moment. On the one hand, we have not found a clear observational signature of inflation (which would be the tensor modes), so one could be agnostic and simply not care about the phase before our big bang picture is observationally established. On the other hand, one can argue that inflation certainly solves some puzzles in cosmology (e.g. it does explain the otherwise puzzling observation that the Universe is flat). Given that the data do not tell us at the moment which view is correct (if any), I think that both positions are okay, and there are arguments in favour of both. If anybody prefers one over the other, that's personal taste and that's okay, but nobody should state that one of the two views is excluded. This I think is in line with the "permanent questioning" you mention and which proper scientists have to do. And, by the way, they do: for example, the analysis of the Planck data is done with both the minimal cosmological standard model and with a bunch of non-minimal models. When looking into that, one can see that many models have no problem fitting the data, but what we call the standard model is simply the most minimal one (and thus the most predictive).
Coming to your thoughts, Ales: first of all, in my personal opinion I am very data-driven, in the sense that I accept only models which fit the data, but I don't put too strong a prior on how they should be structured. Thus, I don't have any hidden suspicion concerning the "Why now?"-problem. Logically, it may or may not be a coincidence; we cannot tell from the data. Of course it looks like a fine-tuned situation, but we are aware of several fine-tuned situations in Nature (e.g. that the distances between moon, Earth, and sun and their sizes are precisely such that the moon just covers the whole visible part of the sun during an eclipse). We don't have an explanation for this, and it is unlikely that there is one (because the size of the moon is not known to be connected to any other scale). On the other hand, fine-tuned situations always look "suspicious", which is why scientists tend to search for explanations, and this search is successful in several cases (e.g. why our eyes happen to be sensitive to exactly that type of electromagnetic radiation in which the sun is brightest). But we don't know without data, and that's the scientific method: come up with an explanation, derive testable predictions, and compare them to experiments.
Thus I would not say that we are looking for a preferred picture - on the contrary, there is certainly no shortage of scientists working out the predictions of more exotic models. But we simply have to be aware of where the line is between what we know from data and what we think we know but which is in fact just a concept that we have invented (and which may be confirmed or excluded with future data). Since we always question the state of the art, which is after all the job of a scientist, I would also not agree that we have a premature picture. The cosmological standard model is just our current best guess, for objective reasons: it provides the best fit to the data with the smallest number of parameters.
We also do not seek only confirmatory data. On the contrary, e.g. the current Planck analysis has given a very detailed account of the discrepancies with other data sets (e.g. in the measurement of the Hubble constant). But we have to quantify the impact of such discrepancies as well as we can. Particularly in an astrophysics context this is not always easy, because we cannot claim to understand all astrophysical objects that are out there. This translates into some data sets being "worse" than others, in the sense that they involve a bigger error.
The whole business is complicated. And you are right that current observations do not go back to the very beginning of things. The big bang model is however not wrong; it is just the name which is misleading (for historical reasons). In fact, as weird as it may sound, the big bang model does not need a big bang. What we call the big bang model refers to the time *AFTER* the beginning of things. Our computations basically go back until we reach the "initial condition" of the Universe, which is the earliest point for which we have data (i.e. BBN), and *AFTER* that the big bang model provides a very good fit to the data and is therefore not wrong. However, we just don't know what was there before this initial condition. It may have been an actual big bang (as suggested by extrapolating our computations back to that point) or it may have been something very different; we just don't know at the moment. We could get some insights, though, e.g. by establishing inflation through observing the tensor modes, but up to now we have not been able to do so.
It's a difficult game, but what I have described is the current state of the art.
Best regards,
Alexander
Ales,
Yes, "non-conflicting" is better than "confirmatory". Good wording is even more crucial in these complex fields. So one has to take care to be cautious not only in doing science but also in explaining it :=)
Alexander,
I come back to this "Why now?"-problem. See your earlier sentences:
"...it (dark energy density) was the only component which did not decrease with time and now starts being dominant. Why this happens just now is not known (the so-called "Why now?"-problem), but it may just be a coincidence ..."
I thought we detect the dark energy density being dominant now because we now have appropriate observational means to make this detection... According to the latest calculations, the acceleration of the expansion started about 7 Gyr ago, didn't it? We would thus have had plenty of time to detect it before. Of course, provided we had already existed at that time (!) and had developed even more sophisticated observational means than Supernovae Ia light measurements, WMAP or Planck... So in my view not really a problem of coincidence! Or does this "Why now?"-problem involve other aspects I have missed about the acceleration (not the question of the moon exactly eclipsing the sun and similar astonishing facts, for which I fully agree with you)?
Best regards
Guibert
This sounds like an application of the "Anthropic principle".
Thus "now" in the context of an accelerated expansion, and taking account of the time needed to develop life and intelligence, would mean "in the present few Gyr" (?).
I know R. Dawkins from his book "The Selfish Gene", which was debated a long time ago (a little bit controversial). Concerning L. Krauss, I have to check...
Best regards
Hi everybody,
let's not drift away to anthropic stuff which is not really scientific. I am not commenting on such things in any case.
Guibert, the "Why now?"-problem basically refers to us not understanding the size of the cosmological constant. On cosmological scales, dark energy became dominant only "just now". I don't even know the number in years, but it does not matter so much, because time is in fact not a very good physical quantity for describing the stages in the evolution of the Universe. Temperature is much better suited for that, and in terms of temperature, the Universe has hardly changed in the roughly 5 Gyr since dark energy and matter had the same energy densities. The time-temperature relation is non-linear and depends on the contents of the Universe, hence this weird combination of numbers.
Given that the value of the cosmological constant is arbitrary (or, rather, we don't know any argument for a particular value), it looks a bit weird that just at the stage of the Universe where we observe it, this otherwise unimportant number becomes dominant.
But there is nothing anthropic to that, at all. Given the value of the cosmological constant, it does become dominant "now". If it had another value, it would have become dominant earlier or would become dominant later, and we would still not understand its value.
That's it, nothing mysterious involved here. The main problem with the "Why now?" is its name, as we can see, because it triggers incorrect associations.
Best regards,
Alexander