Also known as the reversibility paradox, this is an objection to the effect that it should not be possible to derive irreversible processes from time-symmetric dynamics; or that there is an apparent conflict between the temporally symmetric character of fundamental physics and the temporal asymmetry of the second law of thermodynamics.
It has sometimes been held in response to the problem that the second law is somehow "subjective" (L. Maccone) or that entropy has an "anthropomorphic" character. I quote from an older paper by E.T. Jaynes,
http://bayes.wustl.edu/etj/articles/gibbs.vs.boltzmann.pdf
"After the above insistence that any demonstration of the second law must involve the entropy as measured experimentally, it may come as a shock to realize that, nevertheless, thermodynamics knows no such notion as the "entropy of a physical system." Thermodynamics does have the notion of the entropy of a thermodynamic system; but a given physical system corresponds to many thermodynamic systems" (p. 397).
The idea here is that there is no way to take account of every possible degree of freedom of a physical system within thermodynamics, and that measures of entropy depend on the relevancy of particular degrees of freedom in particular studies or projects.
Does Loschmidt's paradox tell us something of importance about the second law? What is the crucial difference between a "physical system" and a "thermodynamic system?" Does this distinction cast light on the relationship between thermodynamics and measurements of quantum systems?
Good question! Jaynes jumped from the necessity of a coarse-grained description to claims of "subjectivity". Of course subjectivity is important in science, but for other reasons. The Second Law requires only a coarse-grained description to satisfy the micro-macro distinction. Loschmidt's paradox refers to the micro dynamics; the Second Law refers to the macro dynamics. The paradox has nothing to do with the Law itself, but with its use as an arrow of time. The paradox would imply that there is no arrow of time at the micro level. This is possibly true, although Prigogine tried to derive the arrow of time from a (proposed) asymmetry of quantum operators. Therefore, there would be no "heat death" at the micro level. Time and entropy increase would have to be defined at the interface of micro and macro. There may be implications for epistemology, but they are not automatic!
For the correct answer, be sure to see:
Borchardt, Glenn, 2008, "Resolution of the SLT-order paradox," in Proceedings of the 15th Natural Philosophy Alliance Conference, Albuquerque, NM, United States. DOI: 10.13140/RG.2.1.1413.7768
Dear Glenn, Boltzmann himself, in "Lectures on Gas Theory," raised the conjecture of compensations between parts of the universe. It seems to me that your originality is to consider the universe as infinite, while Boltzmann considered it to be finite. I am afraid that your solution may bring harder problems, such as conceiving a physical infinity! If there is a part of the universe where entropy spontaneously decreases, the Second Law does not apply there. On the other hand, if the infinity is not physical, the non-physical parts cannot compensate for the physical effects of the Second Law!
Alfredo:
Infinite Universe Theory assumes that it is physical. It is inconceivable that the universe should have an end to it. Would there be dragons there? Seriously, the existence of matter in one place implies its existence in every place. In other words, nonexistence is impossible. Empty space is an idealization, just as solid matter is an idealization. Neither can exist.
If one focuses on some interesting complex phenomenon - your living body, the history of an ant colony or of a human empire, a rainbow, the biosphere, the earth, the solar system or some galaxy - one sees that the interesting phenomenon has only a finite duration. It will go through some decay period and finally come to an end. This will be caused both by internal constraints (like loss of initial energy, as with dying stars, or aging, as with mammals) and by influences from outside. The matter will dissipate and eventually be reused in other local (interesting) phenomena. I don't speak here about "closed thermodynamic systems" alone; I speak about frames working at a special level of organization of their own. It is possible that the second principle of thermodynamics is only a special case of a more general principle. This might be anthropomorphic - as I used the term "interesting" in this description - but it can be rigorously defined in different contexts. I find it even more interesting to understand why one will always have "interesting phenomena". Indeed, an infinite universe might be part of an answer.
Mainz, Germany
Dear all,
I don't see that an infinite universe offers a solution to the paradox. The second law holds only for a closed system into which no free energy is introduced, and to invoke an infinite universe seems to suppose that there are always additional sources of free energy which might be brought into play. But this is strictly beside the point of what happens in a closed system. At best, an infinite universe avoids the prospect of an ultimate "heat death" of the entire universe. But this seems to be a different question.
Even if there is no ultimate heat death of the universe, on the assumption that there is always additional free energy which can be brought into play, it may remain true that entropy always increases in a closed system, if such there be. But how and why could this be so if the fundamental dynamics are temporally symmetric?
Even if an infinite universe did provide a solution, and there seem to be many advocates of an infinite universe, as in the multiverse idea, this would seem to be a long reach --i.e., we might well prefer a less extravagant solution, one with more easily acceptable or testable assumptions or premises.
On the other hand, if it is held that there are in reality no closed systems, then this is to suggest that the second law fails of application. This seems implausible, since thermodynamics finds many practical applications. It was originally developed, after all, in accounts of the optimal efficiency of steam engines.
I believe that Pereira is correct that the paradox essentially involves the arrow of time. The second law is often put forward as an explanation of the arrow of time, or we might say that it appears to provide an arrow of time--evolution of physical systems from the more orderly and less likely toward the less orderly and statistically more probable. But viewed purely as a matter of statistics, the problem seems to reappear, since on purely statistical grounds, and without recourse to experimental determination of entropy, we have no reason to suppose that the more probable configurations are in the future rather than the past--or that they are not both in past and future.
On the other hand, if we bring in empirical evidence that the past was more orderly, it is thereby less probable, and the past configuration can only be explained, it would seem, by reference to a still less probable and still earlier ancestor configuration. The second law appears to provide no explanation of the low entropy past, given that the fundamental dynamics are temporally symmetric. It merely tells us that systems evolve so that entropy does not decrease--free energy remains the same or is decreased and rendered in a form which is incapable of doing any further work.
By the way, it is Maccone who speaks of the second law as "subjective," as I recall, and Jaynes who makes of it something "anthropomorphic." Might it be that our concept of physical "work" is anthropomorphic?
H.G. Callaway
I think the basic ingredient of the problem is the existence of levels of reality. When we mix levels of reality carelessly, we may get a paradox. In mathematics we do not officially have the "notion" of levels of reality. An exception is Robinson's non-standard analysis.
Mainz, Germany
Dear Drossos,
Thanks for your comment. There may indeed be something to your general caution about mixing "levels of reality," though the idea is somewhat vague and the caution may easily be overgeneralized.
The second law of thermodynamics would seem to be a paradigm of the useful and productive in the mixing of levels, since it involves both macro and micro levels of analysis, and the paradox does not engender general skepticism about the second law. Instead it tests our deeper understanding of the second law.
In the face of a paradox, or an apparent paradox, we are brought to question the various assumptions that lead on to the paradoxical conclusion, and the paradox itself does not tell us clearly which assumptions to revise or reject. Alternative possibilities normally suggest themselves, and need to be explored in a comparative fashion. That's the intellectual charm of paradox.
I do not think a similar notion of levels is emphasized in mathematics, so it remains unclear to me why you want to bring this up. On the other hand, I recall that set theory is often regarded as fundamental in mathematics --though this is perhaps not helpful in the present context of discussion? Do you have reason to think that non-standard analysis would be helpful to consideration of the Loschmidt paradox?
H.G. Callaway
From the point of view of classical, time-reversible Hamiltonian systems, every point on the constant-energy surface is equally probable (assuming some form of ergodic dynamics). And it seems impossible to associate any form of entropy to a state described by a particular point in 2N-dimensional phase space.
However, from a macroscopic point of view one only has access to the average number of particles with positions inside a (relatively small) region of 6-dimensional position-momentum space, as observed over some (relatively short) time interval. From such a point of view, not all points in phase space are equal: a huge majority of them must map to configurations which can be described as thermodynamic equilibrium. A simplified example: consider N particles placed on an interval of length L. What fraction of the volume L^N corresponds to a configuration where all particles are in the upper half of the interval?
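As a quick numerical sketch of this example (my code, not Olaussen's; it assumes unit length L = 1), the fraction can be estimated by Monte Carlo and compared with the exact value (1/2)^N, which shrinks exponentially with N:

```python
# Sketch: estimate the fraction of configurations with all N particles
# in the upper half of an interval (here L = 1, an assumed value).
# The exact answer is (1/2)**N.
import random

def fraction_upper_half(n_particles, n_samples=100_000, length=1.0):
    """Monte Carlo estimate of the phase-space fraction in question."""
    hits = 0
    for _ in range(n_samples):
        # one random configuration of n_particles positions
        if all(random.uniform(0, length) > length / 2
               for _ in range(n_particles)):
            hits += 1
    return hits / n_samples

for n in (1, 5, 10, 20):
    print(n, fraction_upper_half(n), 0.5 ** n)
```

Already for N = 20 the all-in-one-half configurations occupy under one millionth of the volume, which illustrates why the overwhelming majority of microstates look like equilibrium.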
This mapping from microstates to macrostates does not break time-reversal invariance; hence a huge deviation from equilibrium should in principle occur at times, but extremely rarely. One can simulate this behavior numerically on small systems, and observe the behavior as the number of particles increases.
There are fluctuations in thermodynamic equilibrium, responsible for transport coefficients, and the time reversible dynamics of the underlying microstates may have implications for such coefficients, as described by the Onsager reciprocity relations.
Well, let's concentrate now on closed systems. I propose a "Gedankenexperiment". Consider a closed square room without air and gravitation, and 20 perfectly elastic balls, all of the same mass. At time t = 0 one ball has velocity v along a given direction and the other 19 balls are not moving relative to the room. Now let time t run towards +infinity. The moving ball starts to have elastic collisions with other balls and with the walls of the room, the initial kinetic energy is shared with the other balls, and finally, after many collisions, all balls tend to have the same kinetic energy at some time +T. This can be an illustration of increasing entropy. Now let the movie run backwards. The entropy seems to decrease until one reaches t = 0, where only one ball moves, with velocity -v, in the opposite direction (as at the beginning). But then - on the negative side of the time axis - it starts again to make elastic collisions with the walls and with other balls and to share its kinetic energy, so for "big" negative values t = -T' we again have a situation with almost uniformly shared kinetic energy.
So, as we see, the entropy increases in both limits, as t --> +infinity and as t --> -infinity. The behavior towards 0 is decreasing from both directions (left and right), but at t = 0 there was our intervention "from outside", when we arranged a very asymmetric configuration.
So, I propose to adapt the principle in the following way: we permit entropy to decrease on some bounded intervals but there is always a neighborhood of t = + infinity and a neighborhood of t = - infinity where the entropy increases, when t runs towards those values. [This means that the entropy decreases on (-infinity, 0) - but we have reversed the arrow of time for our Gedankenexperiment, so we don't think like this.]
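This Gedankenexperiment can be sketched numerically. The sketch below is my crude stand-in, not Prunescu's exact geometry: instead of tracking positions in a square room, it applies equal-mass 2D elastic collisions to randomly chosen pairs of balls, each with a random contact direction. This conserves momentum and kinetic energy, spreads the initial energy among the balls, and -- since each collision rule is its own inverse -- can be replayed exactly backwards after negating all velocities, which is precisely Loschmidt's reversal:

```python
# Crude stand-in for the 20-ball room (my simplification): random
# equal-mass elastic pair collisions with random contact directions.
import math, random

def collide(v1, v2, n):
    """Elastic collision of equal masses: exchange the velocity
    components along the contact direction n (a unit vector)."""
    dvn = (v1[0] - v2[0]) * n[0] + (v1[1] - v2[1]) * n[1]
    return ((v1[0] - dvn * n[0], v1[1] - dvn * n[1]),
            (v2[0] + dvn * n[0], v2[1] + dvn * n[1]))

def run(vels, steps, rng):
    """Apply `steps` random collisions; record the event list so the
    trajectory can be replayed exactly in reverse."""
    vels = list(vels)
    events = []
    for _ in range(steps):
        i, j = rng.sample(range(len(vels)), 2)
        theta = rng.uniform(0, 2 * math.pi)
        n = (math.cos(theta), math.sin(theta))
        vels[i], vels[j] = collide(vels[i], vels[j], n)
        events.append((i, j, n))
    return vels, events

rng = random.Random(0)
N = 20
vels0 = [(1.0, 0.0)] + [(0.0, 0.0)] * (N - 1)  # one ball moving, 19 at rest
vels, events = run(vels0, 500, rng)

energies = [0.5 * (vx * vx + vy * vy) for vx, vy in vels]
print("max single-ball energy after 500 collisions:", max(energies))

# Loschmidt reversal: negate all velocities and replay the collisions
# backwards -- each collision rule undoes itself exactly.
back = [(-vx, -vy) for vx, vy in vels]
for i, j, n in reversed(events):
    back[i], back[j] = collide(back[i], back[j], n)
print("recovered initial state (up to sign):", back[0])
```

After a few hundred collisions the initial kinetic energy is shared among all balls, yet the reversed run reconstructs the t = 0 configuration -- exactly the two faces of the thought experiment.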
I would say finally that there is no paradox.
Dear H.G. Callaway,
There is a person, Roberto Poli, who has a theory of "Levels of Reality". Without being a specialist on your question, I think that the concept of "level of reality" causes a lot of paradoxes if it is treated without care. In mathematics, there are many notions which refer essentially to levels of reality without acknowledging it. The great example is the Axiom of Choice (AC). As I interpret it, when AC holds, we are essentially in a macroscopic world; when it does not hold, we are in non-Cantorian frameworks.
All this may not be relevant to your question, and may not contribute anything to it!
Mainz, Germany
Dear Drossos,
In very general terms, I'd say I am not a great fan of the idea of "levels of reality," as this departs from the idea of "ontological parity" --suggesting, perhaps, that some things are more real than others. But there are, no doubt, many important examples of contrasting levels of analysis.
Things may, of course, be more or less salient, relevant or to the point in particular discussions or in particular theoretical contexts, but I don't suppose this makes them more real. Things can be more or less important, interesting or salient, but not, I suppose, more or less real.
My comments here are a reply to yours and not designed to comment on Poli.
H.G. Callaway
Another thought is that thermodynamics is about order out of chaos, while quantum biology is about order out of order (Jim Al-Khalili).
Mainz, Germany
Dear Olaussen,
You wrote:
This mapping from microstates to macrostates does not break time-reversal invariance; hence a huge deviation from equilibrium should in principle occur at times, but extremely rarely. One can simulate this behavior numerically on small systems, and observe the behavior as the number of particles increases.
---end quotation
A "huge deviation" from equilibrium suggests, as I see it, spontaneous generation of a low entropy configuration in some local state of the system. But I wonder if you could say more about the statistical character of the second law in particular. Explanations of low entropy conditions by reference to fluctuations have been regularly discounted of late. What in particular do you have in mind under the heading of "transport coefficients?"
Also, the statistical independence of the states of the micro system has been regularly questioned, especially in the context of quantum systems. Quantum entanglements are relevant here. It has even been argued that the regular increase of entropy in a closed system amounts to, or reflects, a growth of entanglement. Can you say anything about this idea?
A bit more explanation of your mathematical technicalities might evoke some wider response to your interesting posting on this question. Thanks for your contribution.
H.G. Callaway
Mainz, Germany
Dear Prunescu,
I very much like your thought experiment. I think the readers of this thread should pay it some attention, and I have just gone through it carefully.
But you seem to have two conclusions, which sit there in some considerable tension with each other. You wrote:
we have reversed the arrow of time for our Gedankenexperiment, so we don't think like this].
I would say finally that there is no paradox.
--end quotation
Can you explain (away?) the apparent tension here? If time reverses itself under the operation of the second law, then that may certainly seem to be something of a paradox, since the second law and increasing entropy are often thought to provide the arrow of time.
H.G. Callaway
May I refer all readers to: J. Singh: "Great Ideas and Theories of Modern Cosmology", New York, Dover, 1970, pp. 289-290 for a statement of the paradox, and a description similar to that of M. Prunescu.
Dear Prof. Callaway,
I was not clear at the end. I repeat the conclusion.
"So, I propose to adapt the principle in the following way: we permit entropy to decrease on some bounded intervals but there is always a neighborhood of t = + infinity and a neighborhood of t = - infinity where the entropy increases, when t runs towards those values."
Your point was: "If time reverses itself under the operation of the second law, then that may certain seem to be something of a paradox, since the second law and increasing entropy is often thought to provide the arrow of time."
Time does not reverse under the operation of the second law. Time reverses because in our Gedankenexperiment we allow this. We consider reversible mechanics, with elastic collisions with solid walls and elastic collisions between balls as the only possible events. Increasing entropy seems to be the natural tendency in both directions, although the film of the movement on the negative side is not a reflection of the film of the movement on the positive side.
The paradox exists only because of an obsession of human thinkers, an obsession called "irreversibility of time". Irreversibility of time might be true, it might even be a law of nature, but it is not really visible in our experiment. It seems that people argue "time is irreversible because entropy must grow" and at the same time "entropy must grow because time is irreversible". They do not build a clear relation of causality. I think the experiment shows something like "entropy will finally grow even if time were reversible". (Watch out for future and past tense here; it is quite confusing!)
Looking at it this way, there is no paradox.
M. Prunescu:
I agree about the circular reasoning concerning the irreversibility of time and entropy growth. On this point, may I refer you to my attached article (although I think you have already downloaded it).
Loschmidt's paradox involves a deep issue: the structure of thermodynamics, as it now stands, is not similar to that of dynamics; but I think this paradox does not tell us anything about the second law of thermodynamics.
The Newton equation, Maxwell's equations, the Schrödinger equation and the master equation of dynamics are built on one foundation -- the law of conservation of energy. This is in keeping with the first law of thermodynamics in the macroscopic domain, but does not include the theoretical structure of the second law.
You can make v→-v or t→-t for a Newtonian system; you can also make Q→-Q or t→-t for the equation of the first law of thermodynamics. Both show reversal symmetry, so there is no paradox between dynamics and thermodynamics here: the Newton equation, Maxwell's equations, the Schrödinger equation and the first law of thermodynamics are built on the same foundation -- the law of conservation of energy. I call them "the first law physics".
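The v→-v reversal symmetry of Newton's equation can be checked in a few lines. The sketch below is my illustration (the harmonic oscillator and parameters are not from the post): integrate with a time-reversible (velocity Verlet) scheme, flip the velocity, integrate again for the same number of steps, and the initial state is recovered:

```python
# Minimal check of v -> -v reversal symmetry for Newton's equation,
# using a harmonic oscillator (F = -x, unit mass; my assumed example).
def verlet(x, v, force, dt, steps):
    """Velocity Verlet: a time-reversible integrator."""
    for _ in range(steps):
        a = force(x)
        x += v * dt + 0.5 * a * dt * dt
        a_new = force(x)
        v += 0.5 * (a + a_new) * dt
    return x, v

force = lambda x: -x
x0, v0 = 1.0, 0.0
x, v = verlet(x0, v0, force, 0.01, 1000)   # run forward
x, v = verlet(x, -v, force, 0.01, 1000)    # reverse velocity, run again
print(x, v)   # recovers (x0, -v0) up to floating-point roundoff
```

Note that the recovery depends on the integrator itself being time-reversal invariant, which is exactly the condition Olaussen mentions later for numerical simulations of this kind.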
It is easy to understand that the first law of thermodynamics does not involve irreversibility, and that the second law is an independent principle; so we cannot doubt the second law on the basis of the reversal symmetry of these conservation equations.
In my opinion, there is no crucial difference between a "physical system" and a "thermodynamic system". As a macroscopic theory, thermodynamics is the summation of all the microscopic dynamics; it should be a self-consistent theory, and should be mutually consistent with dynamics. But a deep issue now is that the structures of thermodynamics and dynamics are not self-similar.
The crucial step, I think, is that we need to establish a self-similar structure for thermodynamics and dynamics.
For a long time I have built the equation of motion for a system of potentially interacting material points directly from the expression of the energy, represented in micro and macro variables. The micro variables determine the motion of the material points relative to the center of mass; these motions correspond to the internal energy. The macro variables determine the motion of the center of mass of the system and correspond to the energy of motion of the system. Thus, the law of energy conservation reduces to the conservation of the sum of the internal energy and the energy of motion.
This equation, unlike the Newton equation, includes terms that take into account the work which determines the change in the internal energy. That is, it describes the motion of the system in view of the transformation of the energy of motion into internal energy. It was found that such an equation is, in general, asymmetric with respect to time.
For several years I was tormented by the question: why is the Lagrange equation for a similar system reversible, while my equation is irreversible? The answer turned out to be associated with the hypothesis of holonomic constraints used in the derivation of the Lagrange formalism of classical mechanics.
Open, for example, H. Goldstein's "Classical Mechanics" at the section "D'Alembert's principle and Lagrange's equations", and note where the hypothesis of holonomic constraints is used. You will see that it excludes the possibility of taking into account the non-linear transformation of the energy of motion into the internal energy of the system. Yet the problem of irreversibility is, as a rule, attacked on the basis of the formalism of classical mechanics!
More information can be found, for example, in: Somsikov, V.M., "Why It Is Necessary to Construct the Mechanics of Structured Particles and How to Do It."
Dear Tang Suye
Thank you for your attention and useful advice. I would like to draw your attention to the key idea which allowed the description of dissipation. The idea is that the equation of motion for the system should be obtained from the energy, split into the sum of two energies: the energy of motion and the internal energy. Only then will the equation describe the transformation between these types of energy. This transformation determines the violation of time symmetry. The indices "ij" belong to the terms which correspond to the internal energy. In an inhomogeneous field of external forces, nonlinear terms appear. These terms depend on the micro and macro variables, i.e. the micro and macro variables are coupled. This is the reason for the breaking of time symmetry.
H.G. Callaway> A bit more explanation of your mathematical technicalities
I am a bit too occupied with other projects just now to go into details (I shouldn't really spend time on RG at all). The fact that time-reversible molecular dynamics is a standard method for calculating thermodynamic quantities should convince you that my ideas are neither new nor profound.
However, one may consider a simple time-reversible microscopic model and a time-reversal-invariant mapping from microstates to macrostates (with an associated definition of "entropy"), and run a simulation registering how the "entropy" fluctuates. Starting from a minimum-"entropy" state, one sees that it first increases (what else can it do?) and mostly fluctuates around some equilibrium value, but once in a while returns to the minimum-entropy state.
However, the average return time scales exponentially with the number N of particles; hence it soon becomes difficult to observe in computer simulations -- for N greater than (say) 20 -- and very difficult for N large enough to simulate a system close to the thermodynamic limit. But there is no indication that anything fundamentally different would happen with increasing N. Running forwards or backwards in time makes no difference, as Mihai says (but his positive or negative infinities should be interpreted as some exp(k*N) computer timesteps away, for some positive constant k).
One may also run forward from a minimum entropy state for some time interval, make a time reversal, and continue running for the same time interval, and observe that you have returned to the initial state. This requires the numerical implementation to preserve time reversal invariance (to sufficient accuracy).
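The exponential scaling of the return time can be illustrated with a deliberately simplified toy model of my own choosing (not necessarily Olaussen's actual setup): the Ehrenfest urn, where N particles hop at random between the two halves of a box and the "minimum-entropy" macrostate is all particles in one half. Its mean recurrence time is 2^N hops:

```python
# Toy model of the recurrence claim (an assumed simplification):
# N particles hop between two halves of a box; "minimum entropy"
# means all particles in the left half. Mean recurrence ~ 2**N.
import random

def mean_return_time(n_particles, n_returns=200, rng=None):
    """Average number of hops between visits to the all-left state."""
    rng = rng or random.Random(0)
    left = n_particles          # start in the minimum-entropy state
    steps, returns = 0, 0
    while returns < n_returns:
        # move one randomly chosen particle to the other half
        if rng.randrange(n_particles) < left:
            left -= 1
        else:
            left += 1
        steps += 1
        if left == n_particles:
            returns += 1
    return steps / n_returns

for n in (4, 6, 8, 10):
    print(n, mean_return_time(n))   # roughly 16, 64, 256, 1024
```

Each doubling of N squares nothing but multiplies the waiting time by 2 per particle, which is why the full return becomes unobservable in simulations well before the thermodynamic limit.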
This is all classical. Quantum mechanics may be different. There are papers arguing that every "typical" pure state of a large system, when reduced to the density matrix of a (much) smaller subsystem, automatically leads to a thermal density matrix. I have forgotten how "typical" was defined, but the thermodynamic parameters of the reduced system must come from this definition.
By transport coefficients I mean things like heat conductivity, diffusion constants, and (more interestingly) coefficients describing cross effects (like a thermal gradient causing particle transport, or a gradient in chemical potential causing heat flow) -- quantities which can be computed from time-time correlation functions in the equilibrium state. It was for the latter type of coefficients that Onsager proved his reciprocity relations, using time-reversal invariance.
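The idea that a transport coefficient comes out of an equilibrium time-correlation function (the Green-Kubo approach) can be sketched with a toy example of my own, not Onsager's derivation: for a discrete AR(1) "velocity" process with known parameters, summing the measured velocity autocorrelation function reproduces the analytically known diffusion constant:

```python
# Green-Kubo sketch (assumed toy model): diffusion constant of a
# discrete AR(1) velocity process v[t+1] = a*v[t] + sigma*noise.
import random

def simulate(a=0.5, sigma=1.0, steps=100_000, rng=None):
    """Generate an equilibrium 'velocity' time series."""
    rng = rng or random.Random(0)
    v, vs = 0.0, []
    for _ in range(steps):
        v = a * v + sigma * rng.gauss(0.0, 1.0)
        vs.append(v)
    return vs

def green_kubo_D(vs, max_lag=50):
    """D = C(0)/2 + sum_{t>=1} C(t), C = velocity autocorrelation."""
    n = len(vs)
    def C(lag):
        return sum(vs[i] * vs[i + lag] for i in range(n - lag)) / (n - lag)
    return C(0) / 2 + sum(C(t) for t in range(1, max_lag))

a, sigma = 0.5, 1.0
vs = simulate(a, sigma)
# Analytic value: stationary variance s2 = sigma^2/(1-a^2),
# D = s2 * (1 + a) / (2 * (1 - a))  -> 2.0 for these parameters.
exact = (sigma**2 / (1 - a**2)) * (1 + a) / (2 * (1 - a))
print(green_kubo_D(vs), exact)
```

The same structure -- an equilibrium correlation function integrated over time -- underlies the cross coefficients to which the Onsager relations apply; time-reversal invariance is what makes the coefficient matrix symmetric.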
Mainz, Germany
Dear Olaussen,
Thanks for your reply and further comments. Very interesting.
I think it matters less what I take to be standard in the present context than what you take to be standard.
H.G. Callaway
Mainz, Germany
Dear All,
I've come across an interesting passage in David Albert's (2000) book, Time and Chance, p. 133, and I'd like to see if it may evoke some comments or discussion from readers and contributors to this thread. This has to do with the differences between classical and quantum-mechanical versions of statistical mechanics. I find this pretty convincing. See what you think. Here is the passage:
The empirical thermodynamic consequences of the classical and quantum mechanical versions of statistical mechanics are (of course) going to differ -- but those differences (as with the differences between classical and quantum mechanics themselves) turn out not to amount to much except in fairly unfamiliar sorts of circumstances -- at very low temperatures, say, or in very small containers, or on very oddly shaped energy hypersurfaces, or what have you. Elsewhere (which is to say -- in so far as things like the spreading of ordinary sorts of smoke and the cooling of ordinary sorts of soup are concerned) classical mechanics and quantum mechanics ... more or less reduce to each other, and (consequently) the empirical contents of classical and quantum thermodynamics do too.
---end quotation
The claim is that in so far as empirical thermodynamics is concerned, classical and quantum mechanical versions of statistical mechanics will make little difference. I wonder if any may be inclined to dispute this claim or tend to support it.
Notice that Albert writes, on the prior page, 132, that "the dynamical laws that govern the evolution of quantum states in time cannot possibly be invariant under time reversal."
Any comments?
H.G. Callaway
1. For most physical systems quantum statistical mechanics turns into classical statistical mechanics as the temperature T becomes large^*. Since thermodynamics can (in principle) be deduced from statistical mechanics, this ought to be true for thermodynamics also. I don't think quantum mechanics requires any modifications to the formulations of thermodynamics.
^*There may be exceptions, like the claim that QCD in 3 space dimensions as T -> oo turns into something which can be described by zero-temperature QCD in 2 space dimensions.
2. There can be a difference in principle, since quantum mechanics already relies on a probabilistic description. Perhaps there is no need to invent another statistical description for quantum statistical mechanics, at least in the canonical and grand canonical ensembles.
3. I don't (immediately) agree with what (you say) Albert writes on page 132. But then I don't know the context of this statement.
Mainz, Germany
Dear Olaussen,
I believe that if one doubts the accuracy of a quotation, it is appropriate to check the source. But, I assume you may doubt that, in Albert's words, "the dynamical laws that govern the evolution of quantum states in time cannot possibly be invariant under time reversal." It is, I think, perfectly appropriate to challenge this claim whoever makes it.
Albert's immediate context is fairly complex, but, in general terms, he argues in favor of the GRW theory in this book, holding, e.g., on p. 152 that "if anything along the lines of the GRW theory should turn out true (which, of course, is a matter for future experiments to determine) then the probabilities of universal statistical mechanics are (as a matter of fact, when you come right down to it) nothing other than the familiar probabilities of quantum mechanics."
Albert seems to be generally skeptical of claims for temporal symmetry, and his favoring of GRW, which incorporates chance collapse into the QM equations of motion, is perhaps the crucial illustration of the point for present purposes. (I suspect that much the same could be accomplished within decoherence approaches.) However, Albert also places considerable emphasis upon the "past hypothesis," to the effect that the universe started off in some particular low-entropy state --which he does not specify in detail, as I understand his position in the book. The usual idea, however, is that gravitational entropy was low in the very early universe. I think this a plausible assumption or postulate, given the idea of the historical formation of stars and galaxies, though it seems to be a matter of debate whether this requires some further explanation.
Some readers of the present thread may like to take a look at the GRW paper, which can be read on-line at the following address:
http://link.springer.com/chapter/10.1007%2FBFb0074474
Also, see the short interview with Giancarlo Ghirardi at the following address:
https://www.youtube.com/watch?v=ur-Fa2pwbSk
H.G. Callaway
Dear Olaussen,
There are some differences between statistical mechanics and thermodynamics. One important difference is that the theoretical assumptions of the two are different, and the valid range of statistical mechanics is narrower than that of thermodynamics, in that the postulate of equal a priori probability does not need to be assumed for thermodynamics. So, I think, thermodynamics can hardly (in general) be deduced from statistical mechanics. For example, the Gibbs free energy (in general) does not satisfy the postulate of equal a priori probability.
In quantum mechanics, the mathematical logic of the wave function seems to be deterministic, which differs from the equal a priori probability.
I mean that, as a first step, thermodynamics requires some modifications to the formulations of dynamics for the first law, because one puzzling problem of classical thermodynamics is that the energy classification for the internal energy was incomplete. The energy equation, and the energy terms in thermodynamics and dynamics, cannot be put into one-to-one correspondence. Many have thought that the internal energy of a thermodynamic system cannot be classified into independent types, similar to what we do in dynamics; I don't think so.
Dear Kåre, and H.G., and Tang,
TO YOUR ATTENTION:
You all seem to forget that the question is about Loschmidt's paradox. Yes, there is a big difference between classical mechanics and QM in addressing the paradox. With QM we have a big problem. See below.
Let's begin with the classical treatment. It works with trajectories, and no matter how many (and possibly identical) particles there are in the container with the gas, each one has a well-determined trajectory. It is true that for a huge number of particles we can't follow each trajectory, because of computational limitations (so we use statistics), but the evolution of the gas is deterministic. And indeed, the collisions between the particles lead from a non-equilibrium state to an equilibrium state, due to the re-distribution of velocities.
The situation with the QM is much more difficult. Let's assume for simplicity that the particles are single atoms and are identical. Then the total state of the gas should be either symmetrical in all the particles (bosons), or anti-symmetrical (fermions). In short, the particles are entangled, not independent. So, we can't represent the state of the gas as a product of the states of the individual particles, and by the uncertainty principle, no particle has at the same time a well-defined position and velocity. To the contrary, the uncertainty in the position of each particle is ALL the volume of the container.
So, first of all, if one wishes to address Loschmidt's paradox under the quantum regime, one has to formulate the paradox for this regime. For instance, to say that by reversing all the velocities we reverse the evolution of the gas is meaningless. The particles are not distinguishable and have a large indeterminacy in position. How can we attribute to some particle a velocity at a given time? How can we say where that particle is at all? And after a collision, how can we say, e.g., which particle is the one that came from the left, and which is the one that came from the right?
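[Editor's illustration.] The "deterministic yet apparently irreversible" behavior described in the classical treatment above is often demonstrated with Mark Kac's ring model -- a standard pedagogical stand-in, not Loschmidt's own gas of elastic balls: balls on a ring flip color whenever they cross a marked edge, the color imbalance decays as if irreversibly, yet reversing the motion retraces the whole history exactly. A minimal sketch (ring size, marking fraction, and step count are arbitrary choices):

```python
import random

def kac_step(colors, marked, forward=True):
    """One Kac-ring step: every ball moves one site (clockwise if forward),
    flipping its color (+1/-1) when it crosses a marked edge."""
    n = len(colors)
    if forward:
        # the ball arriving at site i came from site i-1, crossing edge i-1
        return [colors[(i - 1) % n] * (-1 if marked[(i - 1) % n] else 1)
                for i in range(n)]
    # reverse: the ball arriving at site i came from site i+1, crossing edge i
    return [colors[(i + 1) % n] * (-1 if marked[i] else 1)
            for i in range(n)]

random.seed(1)
n, steps = 2000, 10
marked = [random.random() < 0.3 for _ in range(n)]   # fixed scatterers
start = [1] * n                                      # "low entropy": all white

state = start
for _ in range(steps):
    state = kac_step(state, marked, forward=True)
imbalance = sum(state) / n    # relaxes toward 0, as if irreversibly

for _ in range(steps):
    state = kac_step(state, marked, forward=False)
# the reversed dynamics retraces the history exactly: state == start
```

Like Loschmidt's reversal of all velocities, the backward run restores the improbable initial macrostate perfectly, even though the forward run looked like monotone relaxation.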
Mainz, Germany
Dear Wechsler,
I think you are right to re-emphasize the Loschmidt paradox, and I think that should be helpful. It strikes me that there is some tendency, when someone envisages a possible solution to the paradox, for them also to come to suppose that there is no paradox. It is much easier to keep one possible solution in mind, however complex, than it is to keep multiple possible solutions in mind and do comparative evaluations of the alternative possible solutions. But this is just what is expected when we speak of a paradox. Of course, in spite of that, it may be very difficult for people to get even one alternative clearly in mind.
Formulating the paradox in relation to quantum theory, it strikes me that the most natural way is to emphasize the uniform evolution of the wave-function and to think of quantum mechanical phenomena in those terms --rather than in terms of particles. The point is often made that the evolution of the wave-function is reversible. It allows for both prediction and retrodiction of states of the wave-function. But if all the micro-level (quantum) mechanics is reversible in principle, then how is it that the second law of thermodynamics, formulated at the macro level, has its uni-directionality or temporal asymmetry? How does statistical mechanics, conducted in terms of QM, yield macro-systems constantly tending in the direction of increased entropy?
I neglect, for the present, the indeterminacy of measurements (or decoherence by other interactions) as suggesting a uni-directionality of QM --and your interesting brief suggestions concerning the varieties of quantum statistics and the factor of entanglement. More later.
H.G. Callaway
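[Editor's illustration.] The reversibility of wave-function evolution appealed to above can be seen in miniature: for any Hamiltonian, the propagator U = exp(-iHt) is unitary, so applying U and then its adjoint returns the initial state exactly. A sketch for a two-level system with H = sigma_x (hbar = 1; the state and time value are arbitrary choices):

```python
import math

def mat_vec(m, v):
    # apply a 2x2 complex matrix to a 2-component state vector
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def U(t):
    # exp(-i * sigma_x * t): unitary time evolution for H = sigma_x
    c, s = math.cos(t), math.sin(t)
    return [[c, -1j * s], [-1j * s, c]]

def dagger(m):
    # conjugate transpose; for a unitary matrix this is its inverse
    return [[m[0][0].conjugate(), m[1][0].conjugate()],
            [m[0][1].conjugate(), m[1][1].conjugate()]]

psi0 = [1 + 0j, 0 + 0j]                  # the state |0>
psi1 = mat_vec(U(0.7), psi0)             # forward evolution ("prediction")
psi2 = mat_vec(dagger(U(0.7)), psi1)     # backward evolution ("retrodiction")
# psi2 recovers psi0 up to rounding: unitary micro-dynamics is reversible
```

Nothing in this unitary picture singles out a direction of time, which is precisely what gives the paradox its bite at the quantum level.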
Sofia ~
Time reversal, T, is a well defined operation also quantum mechanically.
T is observed broken in weak interaction processes; it is also broken by the expansion of the universe^*. But I don't think any of these T-violations are relevant for the "arrow of time" we observe in everyday processes.
^* There are many much more mundane time reversal breaking processes in our environment than the expansion of the universe; this example is only for dramatization.
According to the currently best cosmological model, the universe 13.8 Gyr ago (minus some 370 kyr) was well described as a fireball in 3000 K thermal equilibrium. How does that fit with the second law of thermodynamics, in view of the universe we observe today?
Sofia, I very much agree with your answer. Now, a related question arises: how do you define entropy for an assembly of N quantum particles?
Mainz, Germany
Dear Olaussen,
You wrote:
According to the currently best cosmological model, the universe 13.8 Gyr ago (minus some 370 kyr) was well described as a fireball in 3000 K thermal equilibrium. How does that fit with the second law of thermodynamics, in view of the universe we observe today?
---end quotation
This appears to be intended as an answer to someone or something stated before, but since you neglect to say to whom or to what point you are replying, or intending to reply, the comment seems rather lost here. You state a cosmological fact which I take to be well known, and state it emphatically, by the isolation if not otherwise, in such a way as to imply that it is something needing to be said. You then add a question, though the point of your question is rather obscure.
What is the point here? Whom are you addressing, and why do you think you will do any good by means of the obscurity? Is this the way you usually work with your colleagues? Two messages back you informed us that you were really too busy to be engaged on RG. Now you seem not to want to stop, though your objectives become obscure.
I take it your point is not to denigrate the commonly interjected relevancy of cosmology here, say, in comparison to more experimentally based considerations? Posing a rhetorical question is not an appropriate criticism of anything in a wide-ranging interdisciplinary context. Think before you post.
H.G. Callaway
Dear Sofia and Raul,
1) If the particles are entangled, or not independent, we should change our perspective from individuals to correlations; that is why I suggested to professor Vyacheslav the change from subscript "i" to "ij", including vij (speed), mij (mass), pij (momentum), Eij (kinetic energy) and Fij (force). I have proved that this change can be deduced for classical mechanics and classical statistical mechanics (which can only be seen in my Chinese book), and have tried it for Bose and Fermi systems; it seems to pose no problem.
2) We need to determine which parts of the internal energy contribute to entropy; this is not clear in current theories, in that the internal energy has not yet been completely classified into independent types. Since not all of the internal energy contributes to entropy, if we can distinguish the parts of the internal energy which make no contribution to entropy, the question will become simpler. If we cannot distinguish these parts, thermodynamics and dynamics cannot be made to correspond to each other.
3) In fact, Loschmidt's paradox cannot tell us anything about the second law of thermodynamics. Raul mentioned the point of view of J. Singh in his book Great Ideas and Theories of Modern Cosmology, and I agree with Singh: (this is not the original text) if a dynamical equation satisfies T-symmetry, that is only T-symmetry of the dynamical equation, and implies that energy is conserved before and after T-reversal; it does not mean T-symmetry of the phenomenon itself. Loschmidt's paradox implies such a fact: we cannot describe the breaking of T-symmetry by these conservation equations.
Mainz, Germany
Dear all,
I continue to sense some tendency to deny that there is actually, or has ever actually been, any problem, whenever someone seems to make a reasonable or helpful suggestion regarding the solution to the problem. In English, at least, if a proposal for a solution is made, we do not say "there is no paradox" or "the paradox tells us nothing"; we say instead, "here is the solution to the paradox": ... At which point someone else may propose a different solution.
In the interest of clarification of the problem, let me point out that some other people see the matter differently. I quote from an on-line discussion of the Maccone paper by the theoretical physicist Sean Carroll:
The arrow-of-time dilemma, you will recall, arises from the tension between the apparent reversibility of the fundamental laws of physics (putting aside collapse of the wave function for the moment) and the obvious irreversibility of the macroscopic world. The latter is manifested by the growth of entropy with time, as codified in the Second Law of Thermodynamics. So a solution to this dilemma would be an explanation of how reversible laws on small scales can give rise to irreversible behavior on large scales.
The answer isn’t actually that mysterious, it’s just unsatisfying. Namely, the early universe was in a state of extremely low entropy. If you accept that, everything else follows from the nineteenth-century work of Boltzmann and others. The problem then is, why should the universe be like that? Why should the state of the universe be so different at one end of time than at the other? Why isn’t the universe just in a high-entropy state almost all the time, as we would expect if its state were chosen randomly? Some of us have ideas, but the problem is certainly unsolved.
---end quotation
See "The Arrow of Time: Still a Puzzle"
Posted on August 24, 2009 by Sean Carroll at the following link:
http://www.preposterousuniverse.com/blog/2009/08/24/the-arrow-of-time-still-a-puzzle/
The solution proposed here is just the "past hypothesis" mentioned above in connection with the quotations from Albert's book. The answer proposed is that we do not derive temporally asymmetric macro processes from time-reversible micro laws of dynamics alone--that we need, in addition, the past hypothesis. But though it is certainly plausible, as Carroll has it, to postulate a low-entropy past of the universe, the question then remains of how this could be. So, certainly, I think some may see this proposal as less appealing on those grounds. But my point here is just that there is a general problem or paradox (or apparent paradox, if you prefer), and that various possible solutions have been proposed.
Now compare what Tang Suye says, directly above, in support of the claim that "In fact, Loschmidt's paradox cannot tell us anything about the second law of thermodynamics." The argument given is as follows:
...if a dynamical equation satisfies T-symmetry, that is only T-symmetry of the dynamical equation, and implies that energy is conserved before and after T-reversal; it does not mean T-symmetry of the phenomenon itself. Loschmidt's paradox implies such a fact: we cannot describe the breaking of T-symmetry by these conservation equations.
---end quotation
More precisely, it is not that "Loschmidt's paradox implies" the "T-symmetry of a phenomenon itself." It is rather that the T-symmetrical dynamic laws of the micro level tell us that time reversal is always possible and, apparently, just as likely, though the second law tells us (regarding macro phenomena) that entropy always tends to increase. If we have "T-symmetry of the dynamical equation" (which is one assumption of the paradox, at the micro level), then how can this result in a macro development tending always in one direction? (The second law is the other element of the paradox. The problem is to show how both assumptions can possibly be true.) At the very least, it seems clear to me that Loschmidt's paradox tells us that the second law is at most a statistical generalization. But that is not the way it was first stated. Thermodynamics is not identical with statistical thermodynamics. And even granting that the second law is a statistical generalization, that still does not explain the massive predominance of developments involving increase of entropy.
H.G. Callaway
Dear H.G., this issue is complex and involves several branches of physics and philosophy. Scientists try to attack it from their expertise, formulating complicated equations or changing the boundary conditions (in the original formulation of the problem, it refers to isolated systems, not interacting with the environment), but all these attempts are insufficient because a model of the micro-macro relations and an operational concept of time are also required. There is also a deeper issue that most colleagues seem not to be aware of: the connection between quantum decoherence (loss of correlations) and the Second Law. Which is the cause of which? Is decoherence the cause of entropy increase, or vice versa? Or do they have a common cause? In Boltzmann (see my 1994 PhD thesis, in Portuguese, posted on RG) the cause of irreversibility was named the "Molecular Disorder" Principle, and conceived as a loss of correlations between particles that have collided. This principle is similar to the decoherence principle in the interpretation of quantum theory (see Zurek's work).
Mainz, Germany
Dear Pereira,
Many thanks for your helpful comments. You wrote:
Scientists try to attack it from their expertise,formulating complicated equations or changing the boundary conditions (in the original formulation of the problem, it refers to isolated systems, not interacting with the environment), ...
--end quotation
I think we certainly should not discount the expertise which scientists may helpfully bring to the exposition of the problem and of possible solutions. The more the merrier, I suppose. Again, though there are certainly systems that are practically isolated, illustrated by the successful application of the second law, this practical isolation may, as you suggest, need to be re-thought on more theoretical grounds.
Thanks, too, for emphasizing your take on the relationship of the second law to quantum coherence and decoherence. This seems to me a very intriguing and complex nest of questions, which I have repeatedly attempted to introduce in RG discussions. I wonder if you are aware of the work of Seth Lloyd, say, his book Programming the Universe (2006)?
H.G. Callaway
No; but thanks for the reference. I have these publications:
de Ponte, M. A., Cacheffo, A., Villas-Boas, C. J., Mizrahi, S. S., & Moussa, M. H. Y. (2010). Spontaneous recoherence of quantum states after decoherence. Eur. Phys. J. D, 59, 487–496.
Frasca, M. (2007). Thermodynamic limit and decoherence: rigorous results. Journal of Physics: Conference Series, 67, 012026.
Halliwell, J. J., Pérez-Mercader, J., & Zurek, W. H. (Eds.). (1994). Physical Origins of Time Asymmetry. Cambridge (UK): Cambridge University Press.
Hsiang, J.-T., & Ford, L. H. (2008). Decoherence and Recoherence in Model Quantum Systems. http://arxiv.org/abs/0811.1596v1.
Martyushev, L. M. (2010). The maximum entropy production principle: two basic questions. Philos Trans R Soc Lond B Biol Sci, 365(1545), 1333-1334.
Narnhofer, H., & Wreszinski, W. F. (2014). On reduction of the wave-packet, decoherence, irreversibility and the second law of thermodynamics. http://arxiv.org/abs/1309.2550v3.
Oza, A., Harris, D. M., Rosales, R. R., & Bush, J. W. M. (2014). Pilot-wave dynamics in a rotating frame: on the emergence of orbital quantization. J. Fluid Mech., 744, 404-429.
Oza, A., Wind-Willassen, O., Harris, D. M., Rosales, R. R., & Bush, J. W. M. (2014). Pilot-wave dynamics in a rotating frame: exotic orbits. Phys. Fluids, 26, 082101.
Prigogine, I. (1977). Time, Structure and Fluctuations. Nobel Lecture, 8 December.
Mainz, Germany
Dear Pereira,
Thanks for the references. Let's see what can be made of them. I thought the Prigogine looked especially interesting. Among your references, are there any in particular that you would recommend in the present context?
I'm surprised you hadn't taken note of Seth Lloyd on related topics.
Obviously, though, we've hit on your specialization.
H.G. Callaway
Mainz, Germany
Dear Pereira & readers,
I thought you might be inclined to take the lead on aspects of the present question, given that we seem to have hit upon your specialization. I would encourage you to do so. What do you see as the chief related issues or questions, given your impressive list of references and familiarity with the themes? What are the most important references in your list?
I'm drawn back, in degree, to the question of the isolation of systems and also to the nature of "free energy," which figures so prominently in contrast to the concept of entropy. I suspect we do require some concept of effective isolation of systems which will allow an understanding of the traditional roles of the second law, though it may be that in other, more highly theoretical contexts, isolation as a condition of application of the second law becomes problematic.
In any case, I read through the Prigogine Nobel lecture this morning, and this brought me to reflect on isolation of systems again. Perhaps you will have some relevant quotation from the lecture. Following out some cross references, I found the following interesting passage from a recent paper, concerned with the nature of life, and which references Prigogine --in connection with "dissipative structures." This may interest some readers, though perhaps, it is a bit distant from the original question here. See what you think. The paper I came across is,
"Emergent phenomena and the secrets of life"
Peter T. Macklem
Journal of Applied Physiology Published 1 June 2008 Vol. 104 no. 6, 1844-1846 DOI: 10.1152/japplphysiol.00942.2007
http://jap.physiology.org/content/104/6/1844
Quotation from Macklem:
It takes energy to create improbable configurations from disordered ones. Ilya Prigogine won the Nobel Prize for discovering that the importation and dissipation of energy into chemical systems could reverse the inexorable disintegration into disorder predicted by the second law (15). The second law only applies to closed thermodynamic systems with no exchange of energy or entropy with the environment, whereas life is an open thermodynamic system, in which energy is imported as food and oxygen and utilized in a process we call metabolism. Entropy in the form of waste products is exported. As entropy decreases, order must increase. Thus the imported energy is used to create the spontaneous development of self-organized, emergent phenomena. This is how eons of Darwinian evolution and much shorter gestational times for fetal development create the stunning order of life.
---End quotation
What strikes me as particularly interesting here is the apparent suspension of the isolation condition in thinking about the second law. It seems clear that the phenomena of life and the growth of order in living things, or even entire civilizations, are not in conflict with the second law, because we normally suppose that such processes of growth or emergence and ordering depend upon an inflow of free energy, while, strictly construed or treated as an effective idealization, "the second law only applies to closed thermodynamic systems." Still, we may treat of entropy and free energy regarding systems not fully isolated.
It seems clear, in any case that Prigogine was concerned with entropy in isolated systems but also with entropy in non-isolated systems. See, e.g., his discussion on pp. 264-265 of the Nobel lecture. It is worth noting, too, in the present context, that Prigogine was a critic of determinism.
Comments invited. Do these reflections help the present question along?
H.G. Callaway
Dear H.G., this was the topic of my PhD thesis in 1994. I also discussed the issue of time in the philosophy of science, including authors such as Paul Horwich, Larry Sklar and John Earman, who did a good job of formalizing time as an asymmetric relation and relating the time asymmetry to the Second Law (following or criticizing the pioneering efforts of Hans Reichenbach). However, I am just a philosopher of science and cannot follow sophisticated mathematical argumentation. In my thesis I focused on Boltzmann's 1872 paper with the "H-Theorem" and the almost forgotten "kinetic energy discrete model" of irreversibility. His goal was to prove irreversibility in the context of classical statistical mechanics, for mechanical, deterministic and isolated systems. This task seems to be impossible, but he tried hard and got the desired result by means of a trick - the "Stosszahlansatz" or "Principle about the number of collisions" in a coarse-grained unit of a system composed of a perfect gas in a closed container. Later he called it the "Principle of Molecular Disorder". This was his answer to the Loschmidt paradox, but soon he realized that it was not exclusively mechanical and tried to argue from a probabilistic viewpoint (in the 1896 book "Lectures on Gas Theory"). What I find most interesting is that the Principle above is very similar to the principle of decoherence, but I never found in the literature a direct statement of this similarity! In the Introduction to the Halliwell, Pérez-Mercader and Zurek 1994 book they write that "the direction of decoherence coincides with, and maybe even defines, an arrow of time. Moreover, a number of attempts to quantify the degree of classicality of a decohering system use the notion of information and entropy, providing a link with the fields of complexity and computation".
I read this text after completing my thesis and thought that my intuition was not "not even wrong", but from 1996 on I turned to the philosophy of neuroscience and lost my brief expertise on irreversibility issues.
Dear H.G., Dr. Alfredo, and the other users interested in the Loschmidt paradox,
I apologize for my question, but which of you has actually read Loschmidt's original article? Some people discuss this paradox by first describing it in some variant that they have imagined. That is not correct. People should refer to the actual work, not make modifications as they like.
It seems that the work of Loschmidt was never translated into English. I am not sure of that, and if somebody knows of such a translation, PLEASE TELL US.
I am trying to do such a translation, and as far as I have advanced with it, Loschmidt seems NOT to have worked with an ideal gas, i.e. NOT with point-like molecules, but with spherically symmetrical balls. Let me tell you that for such objects, the evolution of the gas is DETERMINISTIC. The probabilistic treatment is necessary ONLY because of our limited computing capability.
Now, if the evolution is deterministic and if it leads to an increase of entropy, how can one return to low entropy? And how can one obtain reversibility?
Dear Sofia, I read the Boltzmann reply to Loschmidt that was translated to English by Stephen Brush (see link below). Boltzmann himself used the billiard-ball model. Reversibility is obtained by means of reversing the order of time (what was "before" becomes "after"). This is of course a "thought experiment". The evolution of entropy in the H-Theorem approach was assumed to be deterministic, BUT in the calculus of the number of collisions for each unit volume a TRICK was made of discarding previous correlations between the particles. In order to fully understand the paradox it is necessary to read Boltzmann (1872) Further Studies in the Thermal Equilibrium of Gas Molecules (translation by the same Stephen Brush, in Kinetic Theory, Oxford, London: Pergamon Press, 1965).
http://www.informationphilosopher.com/solutions/scientists/boltzmann/loschmidt.html
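[Editor's illustration.] The H-theorem machinery described above can be caricatured numerically. The sketch below is emphatically not Boltzmann's collision integral: it substitutes a simple BGK-style relaxation toward a fixed Maxwellian (the bin grid, relaxation rate, and initial condition are arbitrary choices), with the molecular-chaos assumption caricatured by the fixed relaxation target. It tracks H measured relative to equilibrium, which decreases monotonically to zero:

```python
import math

# discrete velocity bins and Maxwell-Boltzmann equilibrium weights
vs = [-3 + 0.5 * k for k in range(13)]
w = [math.exp(-v * v / 2) for v in vs]
f_eq = [x / sum(w) for x in w]

def H_rel(f):
    # H relative to equilibrium: sum f ln(f/f_eq); zero exactly at f == f_eq
    return sum(p * math.log(p / q) for p, q in zip(f, f_eq) if p > 0)

f = [0.0] * 13
f[0] = 1.0               # far-from-equilibrium start: all mass in one bin

lam = 0.2                # relaxation rate per step (stand-in for collisions)
history = [H_rel(f)]
for _ in range(50):
    f = [(1 - lam) * p + lam * q for p, q in zip(f, f_eq)]
    history.append(H_rel(f))
# history decreases monotonically toward 0 as f approaches f_eq
```

The monotone decrease here is guaranteed by the convexity of relative entropy along the relaxation path; in Boltzmann's own derivation it is the Stosszahlansatz that smuggles in the one-way behavior, which is exactly where Loschmidt's objection strikes.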
Mainz, Germany
Dear Wechsler & Pereira,
Though I think it not the only viable approach, it is certainly of interest here to look to the historical sources and bring them into the discussion. It strikes me as very useful to have Boltzmann's reply to Loschmidt available to readers of this thread of discussion. Many thanks to Pereira for providing the link and brief comments on this. On the other hand, billiard-ball models seem very nineteenth-century, or mechanistic, and point particles a useful but also somewhat doubtful idealization. The historical discussions can, no doubt, cast much light on the origin of the paradox, but I think we need to consider more recent models as well.
Does anyone have a link or text of the Loschmidt? Wechsler is certainly right that it ought to be read by those concerned with the present question.
H.G. Callaway
As a mathematical philosopher who starts from the premise that events must be intrinsically probable to give information, I would say that the second law of thermodynamics is an approximation of the fluctuation theorem, regarded as a definition of increase in time, and that the first law of thermodynamics is derived from that and only applicable in those situations in which time is squared, so that it makes no difference whether the increase in time is positive or negative. In other words, it is extremely improbable that the positions and momenta of all particles in a large closed system would be such that the second law of thermodynamics would be violated. But Loschmidt's paradox assumes they would have such positions and momenta. Garbage in, garbage out: "if the moon is made of green cheese and green cheese is good to eat, then the moon is good to eat" is valid but not sound.
Dear H.G., historical considerations are important to help us to move forward.
From an analysis of the historical debate of the paradox, I identify five kinds of solution:
a) To deny the value of thought experiments - the inversion of the direction of time cannot be done in the Lab. This was the first reaction of Boltzmann, but soon he noted that there was a serious theoretical problem;
b) To appeal to special initial conditions - this was Boltzmann´s "official" reply to Loschmidt, but the same strategy did not work for Zermelo's objection to the H-Theorem (the Recurrence Theorem);
c) To appeal to the "Law of Large Numbers" - this is a popular solution in textbooks, and Boltzmann tried to justify it with cosmological conjectures (in other regions of the universe, or at other times in our region, entropy decreases), but there is an unwanted consequence: the direction of time becomes a statistical issue! I criticized this solution in my thesis;
d) To appeal to an extra-mechanical principle (the Principle about the Number of Collisions, later named the Molecular Disorder principle). This seems to me to be the better option, and has the merit of anticipating Quantum Theory and the Decoherence interpretation made by Zurek;
e) Finally, an appeal to special boundary conditions. This option was not taken into account in classical statistical mechanics, but became available with Prigogine's Non-Equilibrium Thermodynamics. The problem is that boundary conditions are contingent phenomena and cannot support a physical law. Prigogine tried to find a justification in quantum theory, but critics say that the solution (non-commutative operators) is arbitrary. The whole explanatory strategy goes full circle if the Second Law is taken as responsible for the irreversibility of the decoherence ("collapse") process.
Mainz, Germany
Dear Pereira,
Thanks for your further thoughts on the history of the problem. I was especially interested by your alternative d) immediately above, and this reminded me of a passage from an earlier posting of yours:
---you wrote---
Later he [Boltzmann] called it the "Principle of Molecular Disorder". This was his answer to the Loschmidt paradox, but soon he realized that it was not exclusively mechanical and tried to argue from a probabilistic viewpoint (in the 1896 book "Lectures on Gas Theory"). What I find most interesting is that the Principle above is very similar to the principle of decoherence, but I never found in the literature a direct statement of this similarity! In the Introduction to the Halliwell, Pérez-Mercader and Zurek 1994 book they write that "the direction of decoherence coincides with, and maybe even defines, an arrow of time. Moreover, a number of attempts to quantify the degree of classicality of a decohering system use the notion of information and entropy, providing a link with the fields of complexity and computation".
---End quotation
I think it a very interesting and engaging idea that the direction of increasing entropy and the direction of increasing decoherence coincide. We might expect, given that connection, that maximum entropy involves or implies maximal decoherence, and that would suggest a kind of special condition at thermodynamic equilibrium in terms of QM. As the wave function becomes less important or less dominant, does something else become more important, exercise greater dominance? In any case, I believe that a QM approach to the micro-level of the second law will have some significance.
I wonder if others may have come to similar speculative perspectives. People seem generally convinced that the big-bang origin had a special character of low entropy, and we believe we understand something of the effects or consequences. What, then, of thermodynamic equilibrium and maximal entropy? (The kind of picture which Hawking presents of a universe after the evaporation of black holes.) Perhaps I am headed back toward your alternative c) here, though I am sure that no one will be satisfied with mere cosmological conjectures. On the other hand, I suspect we will end up with some fragmentation of time in any case.
I hope your thoughtful remarks will evoke some further thoughts in reply from other readers of this thread.
By the way, Wechsler has the text of the Loschmidt paper available (in German) and was kind enough to pass it along to me. (It's 51 pp., and apparently she is thinking of translating it into English.)
H.G. Callaway
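[Editor's illustration.] The speculation above, that increasing decoherence and increasing entropy go together, can at least be exhibited in the simplest possible case: a qubit prepared in a pure superposition whose density-matrix off-diagonal terms are damped exponentially (a phase-damping toy model; the rate gamma is an arbitrary choice, and nothing here is specific to any of the cited papers). As coherence decays, the von Neumann entropy rises from 0 toward ln 2, the maximally mixed value:

```python
import math

def entropy_of_coherence(c):
    # rho = [[1/2, c], [c, 1/2]] has eigenvalues 1/2 + c and 1/2 - c;
    # von Neumann entropy S = -sum(lam * ln lam) over nonzero eigenvalues
    S = 0.0
    for lam in (0.5 + c, 0.5 - c):
        if lam > 0:
            S -= lam * math.log(lam)
    return S

gamma = 0.5   # assumed phase-damping rate per unit time
entropies = [entropy_of_coherence(0.5 * math.exp(-gamma * t))
             for t in range(10)]
# entropies climbs from 0 (pure superposition) toward ln 2 (fully decohered)
```

In this toy, at least, the direction of decoherence and the direction of entropy increase coincide by construction; whether one causes the other, as Pereira asks, is exactly what the toy cannot decide.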
Dear H.G.,
The most striking consequence of conjecture 'd' is that when reducing entropy (by means of a conversion of external useful energy, as in Prigogine's work) the decoherence process can also be reversed! This spectacular result was obtained by the authors of the publications below (from my previous list). The results also seem to support David Bohm's "Pilot Wave" interpretation of quantum theory.
de Ponte, M. A., Cacheffo, A., Villas-Boas, C. J., Mizrahi, S. S., & Moussa, M. H. Y. (2010). Spontaneous recoherence of quantum states after decoherence. Eur. Phys. J. D, 59, 487–496.
Oza, A., Harris, D. M., Rosales, R. R., & Bush, J. W. M. (2014). Pilot-wave dynamics in a rotating frame: on the emergence of orbital quantization. J. Fluid Mech., 744, 404-429.
Oza, A., Wind-Willassen, O., Harris, D. M., Rosales, R. R., & Bush, J. W. M. (2014). Pilot-wave dynamics in a rotating frame: exotic orbits. Phys. Fluids, 26, 082101.
Dear H.G,
Loschmidt's paradox assumes that a real dynamical process is reversible, but this cannot be confirmed by conservation equations alone. For example:
Maxwell's equations satisfy T-symmetry, but we never see a real radiated EM field reversed; this is an example of what I mean, and may be one of the micro sources of the second law. Loschmidt's paradox involves micro-dynamics and the second law; for the radiated EM field, a similar question involves the equations and the real phenomena.
Mainz, Germany
Dear all,
My sense of the state of the discussion is that we need to go back to consider the topic of decoherence and get a fuller account of it on the table. I encountered some reluctance to discuss the question when I tried to broach it directly a few weeks back, in a separate thread; but the topic seems to be very deeply involved in contemporary debates and disagreements in theoretical physics. It is not clear to me that everyone using the term means quite the same thing by it.
Would anyone like to try an elementary exposition of the topic? It matters much less, I think, whether contributors favor or disfavor the concept and the approaches involving decoherence. So, if someone has a criticism, that would be equally of interest--if we find out, by means of the criticism, exactly what we should understand under this heading.
Who can help out here?
H.G. Callaway
Oh, Kåre,
Couldn't your answer comprise only the first two logical blocks? I mean, that one single micro-state has no entropy (S = 0 because ln 1 = 0), while a macro-state has entropy (n > 1 ==> S ≠ 0).
I am not sure that what you say in the second part, i.e. about the probability that all the particles gather in a small region, has anything to do with Loschmidt's paradox. By the way, do you know the ORIGINAL FORMULATION of the paradox? I don't claim to know it myself at present; at the moment I am trying to translate Loschmidt's article into English. But, judging by what most authors say, Loschmidt spoke of the time reversibility of classical mechanics vs. the irreversibility of the change in entropy. As you said yourself, entropy is not a variable for a single micro-state. So, if for a FINITE number of particles you can get (with almost zero probability) all the particles gathered in a corner, this is not a reversed evolution of the gas. The reversed evolution is one in which we would obtain one macro-state after another, with entropy decreasing all the time.
Secondly, how do we decide whether a macroscopic state corresponds to THERMAL EQUILIBRIUM? It is a macroscopic state in which the average number of particles per unit interval of velocities obeys the Maxwell-Boltzmann distribution. Isn't that so? But that distribution says nothing about positions, i.e. about whether the particles may become gathered in one corner of the container. So is gathering in a corner thermal NON-equilibrium? I don't know the answer myself. Do you?
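The micro-state vs. macro-state point can be sketched numerically. Here is a minimal Python toy model (the two-sided box and the choice of Boltzmann's constant k = 1 are my own illustrative assumptions, not anything from the thread): a single micro-state has multiplicity W = 1 and so S = 0, while the evenly split macro-state has the largest multiplicity and hence the largest entropy.

```python
import math

def boltzmann_entropy(multiplicity, k=1.0):
    """S = k ln W, with k set to 1 for simplicity."""
    return k * math.log(multiplicity)

# A single micro-state: W = 1, so S = 0.
print(boltzmann_entropy(1))  # 0.0

# A toy "gas": N distinguishable particles, each in the left or right
# half of a box.  The macro-state "n particles on the left" has
# multiplicity W = C(N, n).
N = 50
for n in (0, 10, 25):
    W = math.comb(N, n)
    print(n, W, boltzmann_entropy(W))
# The half-and-half macro-state (n = 25) has the largest entropy.
```

The point of the sketch is only that entropy attaches to the macro-description (the count n), not to any individual micro-configuration.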
Dear H.G.,
Whether there is a similarity between decoherence and the increase of entropy is a complicated issue, and maybe that is why people have difficulty giving an answer. But let me point out a major dissimilarity between the systems discussed in Loschmidt's paradox and those that undergo decoherence: Loschmidt made his judgements about ISOLATED systems. Decoherence, to the contrary, occurs in highly NON-isolated systems, e.g. a quantum system in contact with a classical apparatus. That apparatus contains a lot of wires and all sorts of peripherals, and there is no possibility of isolating it from the environment.
Mainz, Germany
Dear Wechsler,
I have my doubts that the point concerning decoherence and the increase of entropy is included in the original or earlier accounts of QM in terms of decoherence. It may be, instead, an incidental and subsequent development--however interesting. In my understanding, the concept of decoherence arose in part as a revision of the emphasis on observation in the Copenhagen interpretation. As such, decoherence takes place in either isolated or non-isolated systems. Nor is decoherence, in my understanding, restricted to something that takes place in the interaction of a quantum system with a measuring apparatus. Quite the contrary: it takes place all the time, and everywhere.
But since we may have participants who know more or better about this concept, I have encouraged others to chime in. Many thanks for your thoughts on the matter. I think the physicists need to speak up.
H.G. Callaway
Mainz, Germany
Dear all,
Here follows a link to a short summary of quantum decoherence:
http://www.informationphilosopher.com/solutions/scientists/zurek/
I hope that readers may find this of some use.
H.G. Callaway
Decoherence is a kind of buzzword that keeps evolving to try to embrace the notion of deterministic evolution for all times, which actually goes back to Schrödinger himself. I have never been impressed by these arguments, though I do believe that deterministic evolution does hold.
On the point of thermal equilibrium, I don't believe this notion is really all that well defined. It is a coarse-grained notion, as it must be to consider a spatial inhomogeneity at all. Even in a system with no such apparent inhomogeneity, fluctuations arise and violate it in the classical case. For the quantum case, dynamical typicality is the latest craze for trying to justify the quantum ensemble results. A "pure state" is one that corresponds to a wavefunction, vs. a "mixed state" used in quantum ensembles. How is decoherence to reconcile such an ensemble picture?
My opinion is that we are about to hit the wall with both quantum measurement and thermodynamics. The ultracold gas experiments show hydrodynamic behavior to lowest order, and the higher corrections are being fit with a hodgepodge of quantum calculations grafted onto classical hydrodynamics. Ultimately I think we will find that thermodynamics fails for these systems, not merely because N is less than infinity, but because they are true general many-body wavefunctions at all times, and mixed-state ensembles have nothing to do with them.
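The pure-state vs. mixed-state distinction mentioned above can be made concrete with the purity Tr(ρ²), which equals 1 for any pure state and drops below 1 for a mixture. A minimal single-qubit sketch (my own illustration, not from the thread):

```python
import numpy as np

def purity(rho):
    """Tr(rho^2): 1 for a pure state, strictly less than 1 for a mixed state."""
    return float(np.trace(rho @ rho).real)

# Pure state |+> = (|0> + |1>)/sqrt(2): density matrix rho = |+><+|.
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_pure = np.outer(plus, plus.conj())

# Maximally mixed state: an equal classical mixture of |0> and |1>.
rho_mixed = 0.5 * np.eye(2)

print(purity(rho_pure))   # 1.0
print(purity(rho_mixed))  # 0.5
```

The mixed state has the same diagonal (the same measurement probabilities in the computational basis) as the pure state, yet its purity betrays the absence of the off-diagonal coherences.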
Dear H.G., IMHO the article on Zurek you linked to is not a good explanation of decoherence, because it relies too much on the idea of loss of information. The blame is on Zurek himself... I think a better definition refers to what happens to the decohering system: the diagonalization of Heisenberg's density matrix, as explained in the link below: "The effect of decoherence has been to move the summation sign from inside of the modulus sign to outside. As a result all the cross- or quantum interference-terms...have vanished from the transition probability calculation. The decoherence has irreversibly converted quantum behaviour (additive probability amplitudes) to classical behaviour (additive probabilities)...In terms of density matrices, the loss of interference effects corresponds to the diagonalization of the "environmentally traced over" density matrix."
http://en.wikipedia.org/wiki/Quantum_decoherence
Another thought: it seems to me that:
Boltzmann's Molecular Disorder Principle is the same as
Prigogine`s "loss of correlations", which is the same as
Zurek`s "loss of information",
but what really matters is:
the loss of interference patterns.
Why do these patterns disappear when the system interacts with the environment (or with the measuring apparatus)?
This seems to be an easy question, but I don't know the answer.
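One standard toy answer to the question of why the interference terms vanish is phase averaging over an uncontrolled environment. A minimal sketch (my own illustration; the random-phase-kick model is a drastic simplification of real environmental coupling, not a full account of decoherence): the environment imprints a random relative phase on each run, and averaging over runs wipes out the off-diagonal terms of the density matrix while leaving the diagonal (classical) probabilities intact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Start from the pure superposition |psi> = (|0> + |1>)/sqrt(2).
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
rho0 = np.outer(psi, psi.conj())

def dephased(rho, n_runs=100_000):
    """Average U rho U^dagger over random relative phases on |1>."""
    acc = np.zeros_like(rho)
    for _ in range(n_runs):
        phi = rng.uniform(0.0, 2.0 * np.pi)
        U = np.diag([1.0, np.exp(1j * phi)])
        acc += U @ rho @ U.conj().T
    return acc / n_runs

rho = dephased(rho0)
print(np.round(rho, 3))
# The diagonal entries (0.5, 0.5) survive exactly; the off-diagonal
# interference terms average out toward zero.
```

Each individual run is perfectly unitary and reversible; irreversibility enters only through our ignorance of (and averaging over) the environmental phase, which connects this picture back to the coarse-graining theme of the thread.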
Mainz, Germany
Dear All,
Here is the first paragraph of the Wikipedia piece for which Pereira provided the link:
In quantum mechanics, quantum decoherence is the loss of coherence or ordering of the phase angles between the components of a system in a quantum superposition. One consequence of this dephasing is classical or probabilistically additive behavior. Quantum decoherence gives the appearance of wave function collapse, which is the reduction of the physical possibilities into a single possibility as seen by an observer. It justifies the framework and intuition of classical physics as an acceptable approximation: decoherence is the mechanism by which the classical limit emerges from a quantum starting point and it determines the location of the quantum-classical boundary[citation needed]. Decoherence occurs when a system interacts with its environment in a thermodynamically irreversible way. This prevents different elements in the quantum superposition of the total system's wavefunction from interfering with each other. Decoherence was first introduced in 1970 by the German physicist H.-Dieter Zeh and has been a subject of active research since the 1980s.
---end quotation
I thought this interesting and considered providing this link myself when I instead linked to the information philosophy page--which I think is pretty good.
I think we have to keep in mind that the word "decoherence" is quite widely used, and there is some reason to expect that not all physicists give the term quite the same meaning--or emphasize quite the same points by its use. Especially since the term is highly theoretical, that sort of situation is bound to present semantic puzzles. It is not as though the meaning of the term could possibly be independent of the viewpoints represented in its various usages by various theorists. So, I think that sort of expectation--of a single universally recognized meaning common to the debate--should be put aside, and we should look for something more like a common core meaning, or crucial coincidences in the implicit meaning of the term.
So, consider, again, the opening sentence from the above quotation:
In quantum mechanics, quantum decoherence is the loss of coherence or ordering of the phase angles between the components of a system in a quantum superposition. One consequence of this dephasing is classical or probabilistically additive behavior.
---end quotation
This seems to me the core meaning in the usage of the advocates of the theory of decoherence. What is decoherence? It is "the loss of coherence or ordering of the phase angles between the components of a system in a quantum superposition." And notice that it is immediately added that the consequence of dephasing is "classical or probabilistically additive behavior." This seems to me a re-description or re-conceptualization of the more familiar concept of the "collapse of the wave function." Still, it is not offered as a solution to the "measurement problem"--which often seems to be a mare's nest of polemics and hardened debates and positions. The chief or most important difference is that decoherence is taken to result from various and continuous interactions of a quantum system with its environment, instead of a focus merely on the interaction with a measuring device. How exactly does a system in superposition reduce to particle-like behavior? Well, we know no more than Heisenberg, Bohr and Born did, but the over-emphasis on observation in the Copenhagen interpretation has been put aside.
Pereira comments:
but what really matters is:
the loss of interference patterns.
Why do these patterns disappear when the system interacts with the environment (or with the measuring apparatus)?
---end quotation
I think this is correct. What really matters is the loss of interference patterns. I suspect, though, that the answer to the question is the same as what we find in Heisenberg, Bohr and Born: the patterns just do disappear, and to ask why is to bark up the wrong tree; it's a misconceived question, given the indeterminacy/uncertainty principle.
I return to the quoted passage:
Quantum decoherence gives the appearance of wave function collapse, which is the reduction of the physical possibilities into a single possibility as seen by an observer. It justifies the framework and intuition of classical physics as an acceptable approximation: decoherence is the mechanism by which the classical limit emerges from a quantum starting point and it determines the location of the quantum-classical boundary.
---end quote
It only seems to me misleading to say that "decoherence is the mechanism by which the classical limit emerges from a quantum starting point "-- if this creates the expectation of finding a "mechanism" of a deterministic sort, going beyond the Born rule.
This seems to me the first step in understanding what is in question in the discussions of "decoherence," the central meaning of the term. All the reflections on information seem secondary to this.
May the physicists find my layman's errors.
H.G. Callaway
Mainz, Germany
Dear Bykov,
you wrote:
The notion of "entropy" is used in two slightly different senses. First of all, it is a statistical quantity which appears in our approximate (coarse-grained) description of macroscopic systems. In reality (at least while quantum effects can be neglected) all physics is deterministic and all states are pure, the entropy is just zero. A probability distribution over a large number of states, and the entropy calculated from it, are "anthropomorphic" in the sense that they appear due to our way of approximately describing the system, not due to some fundamental laws. The fundamental world does not know any entropy.
---end quotation
I puzzled a bit over this passage: while I understand your claim that there is one sense of "entropy" as a statistical quantity involved in our coarse-grained description of macroscopic systems, I don't really see that you specify a second sense of the word.
Instead, you claim that as regards the fundamental (reversible) laws, the "fundamental world does not know any entropy." Rather than stating two contrasting meanings of the word "entropy," you seem instead simply to restate the opening question and problem: how can the second law, which tells us that entropy generally increases, be reconciled with the temporal symmetry of fundamental dynamics--outside of "quantum effects"?
"In reality," you tell us "(at least while quantum effects can be neglected) all physics is deterministic and all states are pure, the entropy is just zero." But this sounds very much like the claim that "in reality" the second law is false; and you seem to underline this notion by your remarks about its "anthropomorphic" character. I fail to see how the second law can be even approximately true, given what you say.
Would you care to clarify? It's clear, of course, that you do not intend to say that the second law is false, but what you do say here is very curious, given that you say the increase in entropy doesn't hold "in reality." I understand by "reality" anything that has effects. It's a reality, for instance (and true), that if the temperature of your boiler equalizes with that of the environment, then you'll get no more work out of a steam engine. There are many more similar statements implied by the second law.
H.G. Callaway
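Bykov's claim that the statistical entropy is "anthropomorphic"--that it belongs to our coarse-grained description rather than to the micro-state itself--can be illustrated numerically. In this sketch (my own toy illustration; the uniform samples and bin counts are arbitrary choices) the very same micro-data is assigned a different Shannon entropy depending on how finely we choose to coarse-grain it:

```python
import math
import random

random.seed(0)
samples = [random.random() for _ in range(10_000)]  # fixed micro-data on [0, 1)

def coarse_grained_entropy(data, n_bins):
    """Shannon entropy (in nats) of the occupation frequencies of n_bins
    equal cells: one coarse-grained description of the same micro-data."""
    counts = [0] * n_bins
    for x in data:
        counts[min(int(x * n_bins), n_bins - 1)] += 1
    total = len(data)
    return -sum((c / total) * math.log(c / total) for c in counts if c)

# The "entropy of the system" depends on the chosen coarse-graining:
for n_bins in (2, 16, 256):
    print(n_bins, coarse_grained_entropy(samples, n_bins))
```

Nothing about the underlying data changes between the three lines of output; only the description does. This is one way of reading Jaynes's remark, quoted at the start of the thread, that a given physical system corresponds to many thermodynamic systems.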
Mainz, Germany
Dear Bykov,
I have substituted "generally increasing" for "always increasing" in my comments above. The point that physical interactions can take place without increase of entropy was clear to me. You have caught a layman's error.
Now, let's see if you have clarified your opening remark about two senses of "entropy."
you write:
the second meaning is the function of macroscopic parameters. In the statistical description of a macroscopic system it is exactly the same value as the statistical entropy. But once we have found an equation for it, we can forget about its statistical nature.
---end quotation
Let's recall the first meaning you gave:
First of all, it is a statistical quantity which appears in our approximate (coarse-grained) description of macroscopic systems.
---end quotation
Now, I am left wondering what function this distinction plays in what you go on to say. I think no one doubts that we can understand the entropy of a macroscopic system in terms of "energy, volume and particles number," and the like (though number of particles seems a matter of micro-description), and that the results involving macroscopic parameters depend on underlying statistical probabilities.
It strikes me that one might equally say that there is only one sense of entropy in contemporary thermodynamics, though there are two ways of calculating or understanding it--one in terms of macroscopic parameters, the other in terms of the statistics of micro-configurations. This is not to deny that there was an earlier, pre-statistical theory conducted only in terms of pressure, volume, temperature, etc. Precision is potentially helpful, but how does this particular precision help us?
You wrote, in your more recent post,
I'll try to formulate it in some other way. The real (classical) world does not know anything about probabilities. Everything is deterministic. You can (in principle) measure the positions of all the molecules in a gas, and it will not in any way disturb its further evolution according to the laws of thermodynamics. And the notion of entropy in the statistical sense is senseless; it is just always zero.
---end quote
The point here is familiar, at least since Eddington, I believe, if not earlier. Entropy seems to be a property of large collections of things (particles) which we cannot apply to individual things (particles taken one at a time). But again, what you say also seems a (partial) restatement of the problem or question initially broached in this thread: how is it that the second law is temporally asymmetric while the underlying dynamics is symmetric? Also, it doesn't follow from the inapplicability of the concept of entropy to individual particles that there is no physical correlate of entropy which does apply to individual particles, molecules, etc. If we want to understand the second law in relation to the micro-configuration, then it seems we might be interested in some physical correlate of increasing entropy.
In any case, I think you describe both the physical facts connected with the second law and also the facts connected with the usual understanding of the micro-dynamics. Of course, we might still bring in the "past hypothesis" --the idea that the explanation of the general increase of entropy is that the universe starts out in a special, low entropy state. But I take it that our question concerns whether it is possible to derive the second law from the full dynamical description of the mechanics of the micro-states of systems.
So far, it is agreed, generally, that there are many more "disordered" configurations of the statistically possible micro-states of a given macro-system, and that is the reason that the statistical approach to thermodynamics makes sense. By chance alone, the micro-state is more likely to fall into one or another "disordered" configuration--since there are so many of them in comparison to the "ordered" configurations. But saying this, of course, we are not addressing the temporally symmetric dynamics of the elements of the configuration. We get a statistical account of the second law, but not a mechanical, dynamic account in terms of temporally symmetric "fundamental" law.
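The counting argument can be made concrete with the Ehrenfest urn model (a standard toy model; the parameters here are my own choices): N balls sit in two urns, and at each step one ball, chosen uniformly at random, switches urns. Each step is as "reversible" as its inverse, yet starting from the most ordered macro-state the log-multiplicity climbs toward its maximum, simply because the near-half-and-half macro-states have overwhelmingly many micro-states.

```python
import math
import random

random.seed(1)
N = 100
left = N          # start in the lowest-entropy macro-state: all balls in the left urn

def entropy(n_left):
    """S = ln C(N, n_left): log-multiplicity of the macro-state (k = 1)."""
    return math.log(math.comb(N, n_left))

history = []
for t in range(2000):
    # Pick a ball uniformly at random and move it to the other urn.
    if random.random() < left / N:
        left -= 1
    else:
        left += 1
    history.append(entropy(left))

# The step rule is symmetric, yet the entropy climbs toward ln C(100, 50):
print(history[0], max(history), entropy(N // 2))
```

Note that this gives exactly what the paragraph above describes: a statistical account of the increase, not a derivation from time-symmetric mechanics, since the randomness is put in by hand.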
Perhaps you have not been aiming to address the original question directly, which is o.k. by me. But since you did not address your remarks to anyone in particular, or directly challenge anything said just before, it has taken greater effort to get at the intended purport of your two postings.
Perhaps, if we see eye to eye so far, you will go on to say something further.
H.G. Callaway
One pillar of the Loschmidt's paradox is time-symmetric dynamics.
What difference does it make that the dynamics is time-symmetric? The apparently symmetric differential equations give unidirectional or cyclic behavior entirely dependent on initial conditions, which come as given from another process outside the control of the differential equations; and, surprisingly, things start moving one way and always the same way.
If one inverts the t variable, that means one uses a count-down timer instead of a regular stopwatch. The initial velocity must then be inverted along with negative time, and the solution follows the same path, although, as it should in reversed time, the seconds count down from 0: 0, -1, -2, -3, etc.
As time is nothing other than state of a clock, reversing it changes nothing.
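The velocity-reversal point can be checked numerically. A minimal sketch (my own illustration; the harmonic oscillator and the leapfrog step are arbitrary choices): integrate a particle forward under time-symmetric Newtonian dynamics, reverse its velocity, integrate forward again by the same number of steps, and the trajectory retraces itself back to the initial state.

```python
def step(x, v, dt, accel):
    """One velocity-Verlet (leapfrog) step; the scheme is itself time-reversible."""
    a = accel(x)
    v_half = v + 0.5 * dt * a
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * accel(x_new)
    return x_new, v_new

def integrate(x, v, dt, n, accel):
    for _ in range(n):
        x, v = step(x, v, dt, accel)
    return x, v

accel = lambda x: -x          # harmonic oscillator with m = k = 1
x0, v0 = 1.0, 0.0
x1, v1 = integrate(x0, v0, 0.01, 500, accel)

# Reverse the velocity and run the same number of steps *forward*:
x2, v2 = integrate(x1, -v1, 0.01, 500, accel)

print(x2, v2)   # recovers (x0, -v0) up to floating-point roundoff
```

This is exactly the operation Loschmidt's paradox invokes: nothing in the micro-dynamics prefers one direction, which is why the thread locates the arrow of time elsewhere (initial conditions, coarse-graining, or the micro-macro interface).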
Mainz, Germany
Dear Wutke,
Your comment that "time is nothing other than [the] state of a clock" seems to me to contrast with the idea that time is, say, a physical dimension of space-time, or even with the idea that "time is what clocks measure." Given these alternatives, does it still follow, on your account, that "reversing it changes nothing"?
You are correct, I believe, to hold that "initial conditions" are of considerable importance concerning uni-directional development, yet, on the other hand, it seems clear that initial conditions of a system are themselves subject to development. If we know that the initial condition of a system involves low entropy, then we expect the entropy to increase from that level in subsequent development. However, if the increase of entropy is understood as purely a statistical matter of there being more ways for things to be disorderly than there are for them to be orderly, then this would suggest that the low entropy initial conditions of a system were preceded by a previous higher entropy state or condition.
If we appeal to low-entropy initial conditions, then this invites the question of how they could have come about. So, it remains unclear that our initial question can be answered fully in terms of initial conditions. But more basically, it appears that time, in contrast with the spatial dimensions, has an intrinsic direction to it. How are we to understand this in light of purely time-symmetric physics?
H.G. Callaway
Dear Andrew and Aleksei, for classical statistical mechanics "time is nothing other than the state of a clock". Loschmidt's paradox depends on this assumption. However, at the interface of micro and macro, there may be an irreversible time because of the decoherence/recoherence process. Aleksei (above) argued that "we need to distinguish between particles 1 and 2 in order to reverse the momentum of particle 1, but we do not need that for reversing all momenta simultaneously". This is an epistemological solution to the theoretical problem of reversing the decoherence process. However, could the result of the reversal (the "recoherent" state) be identical to the original coherent state? I don't know of any publication on this problem, but my opinion is that it is different, because new entanglements may be formed during the process. The recoherent state that results from the operations of decoherence and recoherence is not a micro-state like the original coherent state, but a macro-state!
If one considers time to be the state of a clock, then it derives from a physical process, for which we need some underlying dynamical parameter (which we hope is isomorphic!). This is like saying that voltage is what we measure on a meter: we make the meter to give a good approximation to the real voltage.
I have advocated that we only have a classical-like universe, with possible data storage and discrete state machines with entropy increase, because the universe is in a very particular and transient state. In other words, I don't believe that entropy is cyclic, but rather that the binary states of existence of the configurations we observe are destined to decay, so that entropy is no longer a meaningful quantity.
Mainz, Germany
Dear Chafin,
It has always seemed significant to me that clocks are compared in respect of accuracy. We suppose, or determine, that one clock, or one kind of clock, is more accurate than another. (The situation is analogous to that which arose historically regarding standards for the measurement of length.) The comparative evaluation of clocks leads to the question of how the clocks regarded as most accurate might themselves be evaluated. There seems no better kind of answer than to say that there is an underlying physical process which is known to be regular--or which is regular so far as is known. This is certainly closer to holding that "time is what clocks measure." It allows that we may eventually come to some more accurate means of measuring the passage of time.
Given the second law, and the changes we see around us, it seems true that "the universe is in a very particular and transient state." But if (as seems true) at some time in the future entropy will no longer be "a meaningful quantity," then the question remains of how it ever got to be a meaningful quantity in the first place. If the "initial condition" of the universe is (naturally) assumed to be one of low entropy, how could that situation ever have arisen? We readily postulate a low-entropy state to begin with, but this seems to call for some reasonable explanation. The problem seems to be kicked down the road into the distant (inaccessible?) reaches of cosmology.
H.G. Callaway
This thread is getting really interesting, but I am afraid I have to postpone my contributions because it's past midnight in Australia and my eyes cannot focus anymore. But before I go I will throw out one thing which I have always wanted to discuss.
In my opinion the case of entropy is similar to the case of cholesterol. We were told that eating cholesterol increases its concentration in the blood and that this is the cause of heart attacks. These days they say that cholesterol is more an indicator of a problem in the body, which produces it in excess for various reasons, than the prime cause of heart attacks.
The first thing I learned about entropy in my thermodynamics course at university was that it is a function of state. A state is described by independent variables, and entropy results from them. So how on earth is entropy the cause of irreversible processes, an arrow of time, or anything else? Whatever causes states to change dictates what entropy is, not vice versa. Entropy is just a useful indicator. As the concept expands in meaning, the confusion around it grows, peaking with the theme of relativistic entropy, on which there exist two contradictory views.
It is probably true what von Neumann once said to Shannon in relation to the entropy of information:
...You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.
When it comes to clocks and time, I confirm my view that time is the state of a reference clock, arbitrarily selected to correlate other observable states or positions. A clock does not measure time; it produces a reference state for duration. Like colour, time can be an attribute of objects. There is nothing like colour permeating space--it is a perceived phenomenon--but there is an abstraction of colour, so we treat colours as an existing omnipresent state: wherever you look there is some colour. So changing the direction of a clock's movement does nothing to the physical process we describe.
This view of time has existed since antiquity, and was most succinctly stated by Mach:
"It is utterly beyond our power to measure the changes of things by time. Quite the contrary, time is an abstraction at which we arrive through the changes of things."
This view of time guides me toward the conclusion that relative simultaneity is a failed and misunderstood concept, irrespective of Special Relativity being an acceptable theory.
It tells us that the second law is not derived from time-symmetric dynamics.
Time-symmetric dynamics can only yield the first law.
It also tells us that many thermodynamic concepts cannot correspond with the concepts of dynamics term by term--not only entropy, but also others, such as enthalpy, Gibbs free energy, etc., as well as the equation of energy conservation.
Entropy: A concept that is not a physical quantity
https://www.researchgate.net/publication/230554936_Entropy_A_concept_that_is_not_a_physical_quantity
Dear Shufeng Zhang,
I'm sure that things in physics today are not so bad. Please see this paper.
Here we discuss in more detail the connection between thermodynamics and classical mechanics.