Hi all,
I am fairly new to statistical thermodynamics and molecular simulation. I have just started with the microcanonical ensemble.
In this ensemble the energy of the isolated system remains constant. The total energy of a system is U = kinetic energy + potential energy.
Does the energy remain constant because the potential energy is zero (there is no interaction among the molecules, hence no interaction potential such as Lennard-Jones is considered) and the kinetic energy is constant (as the temperature is constant)?
Is this ensemble a collection of isolated systems or a single isolated system?
Can this single isolated system have different microstates based on the positions of the molecules, and are all these microstates degenerate? Is this true?
If not, what is it then? Please explain.
Could someone please answer? I am waiting for a response.
Thanks
Vivek Arora
The energy is constant because the equations of motion for a system in isolation (Newton's laws of motion) preserve the total energy of the system. The microcanonical ensemble is designed to respect this property.
The question of WHY it remains constant is not a sensible one, since constant N, V, E is by definition the microcanonical ensemble.
The good question is HOW it remains constant. It does so not because the potential energy is zero (usually it is not zero, unless you deal with an ideal gas, a hard-sphere model, or whatever model without a potential energy term, in which case statistical sampling methods are not the best choice), but, as you initially said, because it is the sum of the kinetic and potential energies that remains fixed.
In the special case where the potential energy is zero (unlike the LJ potential), the kinetic energy has to be constant, meaning that the temperature will be constant too.
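To make this concrete, here is a minimal, self-contained sketch of an NVE integration of a few Lennard-Jones particles with velocity Verlet (all parameter values below are arbitrary illustration choices in reduced units, not anything taken from this discussion). The point is that K and U each fluctuate while K + U stays essentially constant:

```python
# Minimal NVE sketch: a few Lennard-Jones particles integrated with velocity
# Verlet. K and U fluctuate individually, but E = K + U is (nearly) conserved.
# Reduced LJ units; parameter values are arbitrary choices for the illustration.
import numpy as np

def lj_forces(pos, eps=1.0, sig=1.0):
    """Return total potential energy and forces for a small LJ cluster (no cutoff, no PBC)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    pot = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            sr6 = (sig**2 / r2) ** 3
            pot += 4.0 * eps * (sr6**2 - sr6)
            # pair force on particle i (minus the gradient of the pair potential)
            fij = 24.0 * eps * (2.0 * sr6**2 - sr6) / r2 * rij
            forces[i] += fij
            forces[j] -= fij
    return pot, forces

rng = np.random.default_rng(0)
pos = np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0],
                [0.0, 1.12, 0.0], [0.0, 0.0, 1.12]])   # 4 particles near the LJ minimum distance
vel = 0.1 * rng.standard_normal(pos.shape)
mass, dt = 1.0, 0.002

pot, frc = lj_forces(pos)
for step in range(2000):
    vel += 0.5 * dt * frc / mass          # velocity Verlet: half kick
    pos += dt * vel                        # drift
    pot, frc = lj_forces(pos)
    vel += 0.5 * dt * frc / mass          # half kick
    kin = 0.5 * mass * np.sum(vel**2)
    if step % 500 == 0:
        print(f"step {step:5d}  K={kin:.4f}  U={pot:.4f}  E=K+U={kin + pot:.4f}")
```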
Hi Vivek. By definition the microcanonical ensemble deals with an isolated system (U, V, N are fixed). The energy U is the total energy.
Hi Vivek,
To your question: Is this ensemble a collection of isolated systems or a single isolated system?
The ensemble is an infinite collection of systems subject to identical constraints. In the case of a microcanonical ensemble, each system is isolated.
And to: Can this single isolated system have different microstates based on the positions of the molecules, and are all these microstates degenerate? Is this true?
Each system in the ensemble will have, at any given time, a microstate described by the positions and momenta of all particles. The basic assumption of equilibrium statistical mechanics is that all the different microstates are equally probable, i.e. have the same probability density.
Thanks to all of you for replying
This is in response to Fernando Vallejos' answer.
I understood why total energy remains constant.
You said that the potential energy is in fact not zero, but isn't it true that the microcanonical ensemble assumes that the particles are non-interacting? If that is true, then the interaction energy, i.e. the potential energy, is zero.
Vivek> ... that the particles are non-interacting?
No, that is not an assumption. They are assumed to interact through conservative forces --- forces that can be derived from inter-particle potentials (and perhaps also an external potential which keeps them confined to a fixed region of space). In textbook examples of the microcanonical ensemble one usually doesn't consider interparticle interactions --- it becomes mathematically too complicated. It is different when you do numerical simulations of the microcanonical ensemble, like molecular dynamics.
You mean U = potential energy due to position + energy due to motion (translational, rotational, vibrational) remains constant throughout the simulation in the microcanonical ensemble? I have seen in one book that the partition function for this ensemble is the molecular partition function, which is obtained for non-interacting particles. Is that true?
The total energy is as you describe (the potential energy will usually depend on the relative distances between particles, and also on their relative orientations for more complex molecules). The energy of each individual particle (to the extent it can be defined) will not be constant, only the sum of all parts. The explicit examples you can find in textbooks will very likely be for non-interacting particles, but I strongly doubt that any serious book will claim that this is all there is to the microcanonical ensemble. You should probably check more carefully what is actually written.
Yes, the sum of the potential energy and the energy due to motion for all the particles in the system will be constant. I understand. Thanks for putting your precious time into this discussion.
Now, to extend this discussion, I will move on to microstates. In an isolated system with N, V and E constant, the number of degenerate microstates possible depends on the positions and momenta of the molecules (basically there are infinite possibilities), and all are equally accessible to the system at any given time.
Suppose, at time t = t1, microstate = 1, the system's total potential energy = U1 (obtained by calculating the interparticle interaction potential from the particles' relative positions) and the total energy due to motion = K1 (calculated from the translational, (1/2)mv^2 = (1/2)k_B T; vibrational, (1/2)kx^2 = (1/2)k_B T; or rotational, (1/2)Iw^2 = (1/2)k_B T, contributions).
At time t = t2, microstate = 2, the system's total potential energy = U2 (considering the new positions of the particles) and the total energy due to motion = K2.
Now, according to the microcanonical ensemble, U1 + K1 = U2 + K2, right?
Now, since the temperature is fixed, the energy due to motion (rotation, vibration, translation) remains fixed, meaning K1 = K2. Is that correct?
If K1 = K2, then U1 = U2. That means only those arrangements (of particle positions) are acceptable which give a constant total potential energy; other arrangements are forbidden for this particular ensemble. Is this perception of mine correct?
I have read several books, but they don't explicitly explain it this way.
Please explain all these questions.
My write-up has been quite lengthy.
Vivek> Now, since the temperature is fixed, the energy due to motion (rotation, vibration, translation) remains fixed, meaning K1 = K2. Is that correct?
No, that is wrong.
The temperature is not a primary quantity in the microcanonical ensemble. You can define a temperature from the relation
kB T = m < v_x^2 >
where < ... > means the time average (over a very long, in principle infinite, simulation time), and v_x is the velocity in the x-direction of one specific particle.
In practice such a definition needs a very long time to obtain a reliable estimate of the temperature; hence it can be better to take the average over all directions and particles. You could also define the temperature from the time average of the rotational energies. It will be a good check of your simulations if that leads to the same result.
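As a toy illustration of this estimator, here is a minimal sketch in which the velocity samples are drawn from a Maxwell-Boltzmann distribution at a known temperature instead of coming from a real MD trajectory (all numbers are made up for the example). It shows that averaging one velocity component of one particle converges much more slowly than averaging over all particles and directions:

```python
# Toy illustration of the kinetic temperature estimator k_B*T = m*<v_x^2>.
# Velocity "trajectory" samples are drawn from a Maxwell-Boltzmann distribution
# at a known temperature (reduced units, k_B = 1, m = 1), standing in for MD data.
import numpy as np

kB, m, T_true = 1.0, 1.0, 1.5
n_steps, n_particles = 5000, 100
rng = np.random.default_rng(1)

# "Trajectory" of velocities: shape (steps, particles, 3)
v = rng.normal(0.0, np.sqrt(kB * T_true / m), size=(n_steps, n_particles, 3))

# Estimate 1: time average of v_x^2 for ONE particle (slow convergence)
T_one = m * np.mean(v[:, 0, 0] ** 2) / kB

# Estimate 2: average over all particles, directions, and time steps
T_all = m * np.mean(v ** 2) / kB

print(f"true T = {T_true}, one-component estimate = {T_one:.3f}, full average = {T_all:.3f}")
```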
Have you read any introductory books on statistical mechanics? You seem to have some very basic misunderstandings, from which you will continue to suffer until corrected.
Just to add to the previous answers:
>> 1. Is this ensemble a collection of isolated systems or a single isolated system?
Answer 1: This ensemble can be represented as a collection of isolated systems with the same number of molecules, total energy, and volume.
>> 2. Can this single isolated system have different microstates based on the positions of the molecules, and are all these microstates degenerate? Is this true? If not, what is it then? Please explain.
Answer to question 2: It follows that each microstate in the ensemble differs in the molecular positions. It preserves the total energy, in comparison with the other microstates, if every other microstate can be obtained from this one using Newton's laws of motion. This also allows one to discuss non-zero interaction potential energies between atoms or molecules.
You mean that in an isolated system, based on the particles' positions and momenta, there are infinitely many possible microstates. For example, suppose there are 4 molecules in a system of volume V with total energy E in an NVE ensemble. Again, suppose there are three possible combinations of positions, with potential energies u1, u2, and u3. Now consider the 1st combination of positions. In this configuration the four particles can have, say, 6 combinations of momenta, ensuring that the total kinetic energy from each of these combinations plus the potential energy of the 1st configuration gives the conserved E. Suppose that in the first of these 6 combinations, 2 molecules each have kinetic energy e1, 1 molecule has kinetic energy e2, and 1 has kinetic energy e3. So u1 + 2e1 + e2 + e3 = E.
So, considering all these combinations, the total number of combinations possible for this isolated system is 4x6 = 24. So this many microstates are possible for this system.
Is that correct? If not, please correct me.
So, the total partition function is given by Q = Σ_{i=1}^{4} exp(-βu_i) × Σ_{j=1}^{6} exp(-βe_j)
= Σ_{k=1}^{24} exp(-β(e_k + u_k)) = Σ_{k=1}^{24} exp(-βH_k),
where u and e are the potential and kinetic energy terms.
Is it correct? Please look at it.
Your expression is very far from correct.
It makes a little bit of sense for the canonical ensemble, but it is not right for that either. You must find an introductory book on statistical mechanics and study the various ensembles very carefully before you proceed; otherwise I don't understand how you will be able to do anything meaningful.
An ensemble is a set of systems (mental replicas of the physical system, at least as far as the known properties of the physical system are concerned). Each system in the ensemble strictly obeys the microscopic equations of motion of the system -- i.e. Hamilton's equations or Schrödinger's equation. Thus, for systems for which a potential energy of interaction can be written, the total energy of each system in the ensemble is conserved. In classical mechanics, each system in the ensemble evolves along a given trajectory through phase space: its q(t)'s and p(t)'s (positions and momenta). The trajectory lies on a surface of constant energy in phase space. There are extremely many such trajectories for a given energy E. The various ensembles arise from how we choose the trajectories in an unbiased way. For a microcanonical ensemble, for which we know nothing else except N, V, and the total microscopic energy E, each ensemble system's trajectory (corresponding to the known E) must be given equal weight. That is, no particular trajectory can be assigned to more systems than any other trajectory, and all trajectories (each having the given energy E) must be represented in the ensemble. Thermodynamic (i.e. macroscopic) properties, such as temperature T and pressure P, are then to be calculated by averaging over all systems in the ensemble -- again without bias.
If we do not know the energy of the physical system -- but, say, know only that it is in thermal equilibrium with its surroundings -- then we must set up a canonical ensemble in which systems again evolve causally along phase-space trajectories of a given energy, but we must assign trajectories of various energies to the various systems in the ensemble (as the energy of the physical system is not known). The point to be made is that macroscopic properties of a physical system are averages over microscopic systems in the chosen ensemble. Our ignorance of the microscopic details must be reflected in the unbiased way we choose to partition (distribute) the possible trajectories which are consistent with our limited knowledge of the macroscopic system.
Thank you Dr. Paul and Dr. Kare.
You mean q(t) is position and p(t) is momentum. Yes, each system will evolve along a trajectory in phase space with time, maintaining constant energy. That means each point along the trajectory represents a microstate of the system. So the partition function becomes Q = Σ_i exp(-βH_i), a sum over all the microstates, where H is the Hamiltonian, H_i = u_i(q(t)) + e_i(p(t)), u_i is the total potential energy of the system in the i-th microstate, and e_i is the total kinetic energy of the system in that microstate.
Case 1:
If the molecule is monoatomic, e_i(p(t)) = Σ_j p_tr,j^2/(2m_j), where j runs over the particles and p_tr is the linear momentum.
Case 2:
If the molecule is diatomic, e_i(p(t)) = Σ_j p_tr,j^2/(2m_j) + Σ_j p_rot,j^2/(2I_j) + vibrational kinetic energy + electronic energy + nuclear kinetic energy,
where p_rot is the angular momentum of the molecule and I is the moment of inertia.
Case 3:
If the molecules in the system are non-interacting, then u_i(q(t)) = 0, and the Hamiltonian becomes H_i = e_i(p(t)).
Is this correct?
Please go through this.
If I am wrong please correct me.
Regards,
Vivek
Just an additional comment. The macroscopic properties of the physical system are to be calculated by taking "averages of each system's time average" over all the systems in the ensemble. The average of the time averages over the trajectories can be translated into "weighted averages" over the phase space accessible to the classical system, at least for systems which are "in equilibrium" (and thus whose properties are independent of time). The weighting factor for the phase space averaging depends on the choice of ensemble -- i.e. it depends on what we know and what we do not know about the physical system. The factor Q = exp{-βH} arises for a canonical ensemble where we know the temperature, β = 1/kT, as the physical system is in thermal equilibrium with its surroundings. (H(q,p) is the classical Hamiltonian defined over the phase space.) This Q is NOT the weighting factor for a microcanonical ensemble, where we (claim to) know the exact energy E of the physical system. So you do not use Q for a microcanonical ensemble. Without getting into mathematical details, let me just suggest that the "partition function" for a microcanonical ensemble (where the known energy is E) must restrict the integration over phase space to the hypersurface on which H(q,p) = E. Thus the microcanonical ensemble partition function is usually expressed in terms of a Dirac delta function.
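For concreteness, these are the standard textbook expressions being alluded to, written in the usual classical form with the N! and h^{3N} factors that come up again further down the thread (a reminder of the standard forms, not a derivation):

```latex
% Canonical ensemble: phase-space points carry the Boltzmann weight e^{-\beta H}
\[
  Q(N,V,T) \;=\; \frac{1}{N!\,h^{3N}} \int dq\,dp\; e^{-\beta H(q,p)},
  \qquad \beta = \frac{1}{k_B T}.
\]
% Microcanonical ensemble: only the energy shell H(q,p) = E contributes
\[
  \Omega(N,V,E) \;=\; \frac{1}{N!\,h^{3N}} \int dq\,dp\; \delta\bigl(H(q,p)-E\bigr),
  \qquad S(N,V,E) \;=\; k_B \ln \Omega(N,V,E).
\]
```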
Vivek.
There is a good set of lecture notes by Daniel Arovas (linked below), with references to other good texts. If you want to do molecular simulation worthy of a PhD degree you (at least) have to master most of the material covered by those notes.
You are trying to take shortcuts by asking for help on the most elementary aspects of statistical mechanics here. But those would be shortcuts from nowhere to nowhere; you really have to learn the science yourself, preferably with some help from the staff at your university. There are also full sets of good videotaped lectures available, distributed through various channels.
I often start my exam sets on introductory statistical mechanics with a standard bonus question like "Write down the partition function in the .... canonical ensemble", just to give the candidates a good start and some self-confidence. Those who can't answer this question have obviously not studied anything for the course, and invariably flunk.
https://www-physics.ucsd.edu/students/courses/spring2010/physics210a/LECTURES/210_COURSE.pdf
The microcanonical ensemble deals with isolated systems; for such a system the energy is conserved, since no interactions with the environment are allowed.
Dear Prof. Kare Olaussen,
I have just started reading about ensembles. Sorry for presenting wrong perceptions of the different ensembles. Actually, I am going through the notes that you suggested. I think the discussion below is making sense.
Dear Prof. Paul Westhause,
Thanks for your comments; they are really helpful. I am also going through the lecture notes of Tuckerman at New York University.
In any ensemble, the partition function is given by F = ∫ f(H(x)) dx,
where H is the Hamiltonian and f(H(x)) is the phase space distribution function,
and x is a 6N-dimensional phase space vector, x = (p1, p2, ..., pN, r1, r2, ..., rN), where p and r are the momentum and position vectors of each particle.
A point in the 6N-dimensional phase space describes the dynamic state of every particle in that system, as each particle is associated with three position variables and three momentum variables. A point in this space is specified by giving a particular set of values for the 6N coordinates and momenta. The time evolution or trajectory of a system can be expressed by giving the phase space vector x(t), as a function of time.
In this sense, a point on the trajectory in phase space is said to be a microstate of the system.
The concept of phase space provides a classical (continuous-energy) analog of the partition function (sum over states), known as the phase integral. Such an integration consists of two parts: integration over the momentum components of all degrees of freedom (momentum space) and integration over the position components of all degrees of freedom (configuration space). So, the partition function is the integral along the trajectory in phase space, whereas in Boltzmann statistics the partition function is the sum of Boltzmann factors over the discretely spaced energy states.
In the microcanonical ensemble, the distribution function need only reflect the fact that energy is conserved, and it can be written as f(H(x)) = δ(H(x) - E), a Dirac delta function. The partition function is then equal to the number of microstates that give rise to a given set of macroscopic observables. H(x) is the sum of the potential and kinetic energies of all the particles in a particular microstate.
Actually, in a microcanonical ensemble (N, V, E), an isolated system evolves with time along a trajectory in the phase space constituted by positions and momenta. The trajectory belongs to the constant-energy hypersurface H(x) = E. All points on this surface correspond to the same set of macroscopic observables.
Please look at it. I think this is making sense and seems to be correct, isn't it?
In the earlier post, the mathematical expression I posted is for the canonical ensemble (N, V, T), where the total energy does not remain constant but, since the temperature is constant, the kinetic energy is constant. In the context of the present phase space integral, the previous summation over microstates becomes the phase integral.
Isn't it?
Regards,
Vivek
This is to add to the previous response by Hadjiagapiou. A system can only change its energy if it interacts with, i.e. exchanges energy with, something outside of it. There is no way to change the energy by internal processes. Therefore, the energy of an isolated system (which, by definition, does not interact with anything outside of it) must remain constant, which we take to be E0. It also means that the number of particles, the volume and all other extensive quantities, such as the total polarization, total magnetization, etc., must remain constant. Now, if we focus only on the energy and implicitly or explicitly assume that all other extensive quantities are held fixed, then all microstates of the system must lie on an energy shell of width dE around E0 in phase space.
In the approach due to Gibbs, we usually think of an ensemble of isolated systems that are all prepared (somehow, how is not important) under the same macroscopic conditions, which here relates to the value of E=E0. The ensemble you will get is determined by whether the system is in equilibrium or not; see below. You should think of a microstate to be the instantaneous state of a member of the ensemble and different members of the ensemble are supposed to be in different microstates so that one can think of the ensemble to represent the collection of all the microstates; they all must have their energies in the energy shell discussed above. This is true whether the system is in equilibrium or not. In equilibrium, all microstates appear in the ensemble with the same probability. This probability will not change in time which explains why we identify this as an equilibrium state. This is called the equi-probability assumption. If the probabilities are not all equal, we have a nonequilibrium state of the system. In due course, the probabilities of the microstates will change with time until they all become the same and we come to equilibrium.
The probability p_i with which a microstate i appears in the ensemble should not be confused with the question: what is the probability that a member has its energy given by E? The latter probability is obviously given by a delta function of argument E - E0, and it has nothing to do with whether all the p_i are equal or not, that is, with whether the system is in equilibrium or not.
I hope that this explains some of your concerns.
Well, potential energy is produced by a charge (or a mass) through a force (called a field), while kinetic energy is measured by the velocity of that charge (or mass)... in NVE, kinetic + potential = constant, but not potential = kinetic = constant... temperature affects your velocities too... an isolated system is a system that is closed and from which not even heat escapes (no energy or matter transfer, no chemical reactions, i.e. neither mass, nor quantity, nor impulse is changing)... those microstates form an energy surface or a distribution of energies in phase space, so there can also be degenerate ones with the same energy... partitioning a snapshot (or a collection of isolated systems) exists only in your mind, since it is still a single isolated system.
Thanks Dr. Gujarati for clarifying so many things.
I had made a post two days back about microcanonical ensemble.
Please look at it.
Is that correct?
Dear Vivek:
You are on the right track. However, I will make two comments on what you wrote:
1)
In this sense, a point on the trajectory in phase space is said to be a microstate of the system.
Not correct at a fundamental level, if we wish to associate a finite entropy with a macroscopically large but finite system. A microstate is really a very tiny volume element h^{3N}, where h is Planck's constant. The rationale for this comes from the quasi-classical approximation in quantum mechanics. Once you realize that each microstate occupies this tiny volume, the integration over phase space can be reduced to a discrete sum over microstates.
2)
So, the partition function is the integral along the trajectory in phase space, whereas in Boltzmann statistics the partition function is the sum of Boltzmann factors over the discretely spaced energy states.
There is really no difference, as one can change the integration over phase space into a discrete sum over E, as I said above. You can also play the same game without bringing in the idea of a microstate as I have defined it above and deal with an integration; then, instead of a sum over E, you will have an integration over E. You need to be slightly careful if you wish to deal with the microcanonical ensemble.
Rest has been explained in my previous response.
Puru
Thank you Prof. Gujrati for reading my post and explaining things.
In an ensemble consisting of S systems, each system will evolve with time along a trajectory in phase space. So for S systems there will be S trajectories in phase space. The partition function for the ensemble is then the sum or integration (whichever we call it) over all the microstates along all the trajectories. Since the number of systems is infinite, one can say that the trajectories cover the whole phase space. In other words, one can say that each point in phase space represents a microstate.
At any given time at equilibrium, each system is in a unique microstate.
The idea of ensemble averaging can also be expressed as an average over all such microstates (which comprise the ensemble). A given macroscopic property, A, and its microscopic counterpart a = a(x), which is a function of the positions and momenta of a system, i.e. of the phase space vector, are related by
A = (1/S) Σ_k a(x_k), where x_k is the microstate of the k-th system of the ensemble and k ranges from 1 to S.
In reality, measurements are made only on a single system, in which all the microscopic detailed motion is present. However, what one observes is still an average, but it is an average over time of the detailed motion, an average that also washes out the microscopic details. Thus, the time average and the ensemble average should be equivalent, i.e.
A = lim_{T→∞} (1/T) ∫_0^T a(x(t)) dt.
This statement is known as the ergodic hypothesis. A system that is ergodic is one which, given an infinite amount of time, will visit all possible microscopic states available to it (for Hamiltonian dynamics, this means it will visit all points on the constant-energy hypersurface, for N, V, E).
In other words, it states that if a system is ergodic, then the ensemble average of a property can be equated to a time average of the property over an ergodic trajectory.
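As a toy check of this equivalence (a one-dimensional harmonic oscillator at fixed energy, with made-up parameter values; only an illustration, not part of the argument above), the time average of v^2 along a single long trajectory agrees with the ensemble average over many fixed-energy replicas with random phases:

```python
# Toy check of "time average = ensemble average" for the observable a = v^2
# of a 1D harmonic oscillator at fixed energy. All parameter values are
# arbitrary illustration choices.
import numpy as np

A, omega = 1.0, 2.0                    # amplitude and angular frequency
t = np.linspace(0.0, 200.0, 200001)    # long trajectory of ONE system

# Time average along a single trajectory (phase chosen arbitrarily)
v_traj = -A * omega * np.sin(omega * t + 0.3)
time_avg = np.mean(v_traj ** 2)

# Ensemble average at ONE instant over many replicas with the same energy
# (same A and omega) but uniformly distributed phases
rng = np.random.default_rng(2)
phases = rng.uniform(0.0, 2.0 * np.pi, size=100000)
v_ens = -A * omega * np.sin(phases)
ens_avg = np.mean(v_ens ** 2)

print(f"time average of v^2      = {time_avg:.4f}")
print(f"ensemble average of v^2  = {ens_avg:.4f}")
print(f"analytic value A^2 w^2/2 = {A**2 * omega**2 / 2:.4f}")
```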
So which averaging method is followed in the MD or MC calculation of a macroscopic property? Is the time-averaging method used?
Please look at it.
Thanks
regards
Vivek
Vivek: You are asking questions as someone new to the field. Let me make an important point. No matter how many trajectories you take, each having zero width, even infinitely many, there is no way they will cover the phase space; 0 times any number is still zero, even if you take the limit as the number tends to infinity.
That is why a point should not be taken as a microstate. If you take it to represent a microstate, then you will be dealing with an infinite number of microstates. That is nonsense.
The number of systems in the ensemble is infinitely large in principle, even though the number of microstates is finite, although extremely large. Each microstate appears in the ensemble with a frequency given by its probability p_i. The sum over all the members can then be written as a sum over distinct microstates, so that A = Σ_i p_i a(x_i).
It may happen that for some systems the trajectories remain confined to a part of the phase space. In that case, the ensemble average (which considers microstates in the whole phase space) will not be equal to the time average, and the system is not ergodic.
In MD you are carrying out the temporal average, while in MC it depends on your viewpoint. Usually it is taken to be the ensemble average, as the moves are not related to any dynamics but are made probabilistically.
Hope this helps.
Puru
There are as many micro-states in one snapshot as there are particles/atoms (including dummy atoms), and you see their evolution in time; then you can sum them up, and so on.
Ergodicity simply means that you get the same averages of something by summing them up in space at one snapshot or by summing them up in time (taking many snapshots of one atom or of all of them).
Thanks Prof. Gujrati and Kuprusevicious for making things so transparent.
Actually, I am studying statistical thermodynamics and molecular simulation on my own, so I have some difficulty understanding it.
I am following some lecture notes and the books of Haile, Frenkel, etc.
Thanks
Vivek
Hi all!
After a long time I am posting this. In the meantime I was going through stat mech.
One of my confusions concerns the phase space distribution function and Liouville's theorem.
In phase space, each system can be represented by a phase space vector which changes position in the phase space with time t. Each member of the ensemble will have a phase space vector x(t), which evolves with time. The probability of finding a member's phase space vector x in the small volume dx around a point x in phase space at time t is given by
f(x,t)dx
f(x,t) is the phase space distribution function. Its properties are
1) f(x,t) ≥0
2) ∫f(x,t)dx=number of members in the ensemble
Basically, it means that f(x,t)dx is the fraction of the total number of systems present in the volume dx. When integrated over the whole phase space volume, one gets the total number of systems in the ensemble. In other words, one can say that at a particular time t, the integral over the whole phase space volume gives the total number of unique microstates from all the members of the ensemble at time t.
Is this correct?
Vivek: There are still problems with your conceptual understanding. If you regard a point in phase space as representing a member of the ensemble, then the number of members in the ensemble must be infinitely large, as the number of points is always infinitely large. So the number of points is not important. Indeed, many members of the ensemble have the same (micro)state, so the number of microstates is not the same as the number of members of the ensemble. As I wrote to you earlier, the frequency or probability p_i with which microstate i appears in the ensemble depends on what kind of ensemble you have prepared. It may represent an equilibrium ensemble or a nonequilibrium ensemble.
The concept of the distribution function depends on its normalization. Your second condition should be
∫f(x,t)dx=1.
Then f plays the role of a probability density, and f(x,t)dx is equal to the probability, or what you call the fraction, not of the total number of systems present in the volume dx but of the microstates represented by the points in dx.
I will suggest that you carefully read Sec. 1 in Landau and Lifshitz, Statistical Physics, Vol. I. This will clarify a lot of confusion for you.
Thanks a lot..
Yes, definitely, the number of microstates is not the same as the number of systems in the ensemble, and not every point in phase space represents a system. If one looks at the ensemble at any particular time t, then the number of microstates will be the same as the number of systems, because each of the systems is in a unique microstate at time t. As time passes, each system evolves through different microstates.
f(x,t)dx is the probability of finding a microstate in the volume element dx, which is the fraction of microstates in the volume element dx at time t. When this quantity is integrated over the whole phase space, it reduces to 1.
I hope this is correct.
Actually, I was going through the lecture notes of Mark Tuckerman (NYU), where it is stated that the integral of the density function over the whole phase space gives the number of systems. I was doubtful about it.
Anyway, I will go through the book you have referred.
Thanks
Vivek
There is still some problem.
You said:
"If one looks at the ensemble at any particular time t, then the number of microstates will be the same as the number of systems, because each of the systems is in a unique microstate at time t."
This is not correct. Many systems, or what I called members of the ensemble, can be in the same microstate; their frequency is the microstate probability p_i. This is true even if a member is in a unique microstate: the latter does not mean that many members cannot duplicate the same microstate.
Regarding the normalization by Tuckerman: you can normalize f(x) any way you wish. Just be consistent. I usually normalize the distribution function as if it were a probability distribution.
Puru
Thanks for explaining this so explicitly. After a long time this is clear to me.
So the probability of finding a microstate at time t in an ensemble is the count of that microstate normalized by the total number of microstates at time t.
If f(x,t) denotes the probability density function, then ∂f/∂t =0 at equilibrium.
Does this mean that at equilibrium the numbers of the different microstates remain fixed, i.e. that the frequency of each microstate remains fixed? Is this what is called the most probable distribution, which prevails as the equilibrium distribution?
Thanks,
Vivek
Yes: only the probability is constant. If, however, you also keep the number of samples fixed, then even the number of microstates is constant in equilibrium. Remember, two different people can start with two different numbers of samples. However, for each of them, as long as they do not change the number of samples, the number of microstates, once equilibrium has been reached, will not change. It will change while the system tries to approach equilibrium.
So, you have finally understood what is going on.
Good Luck!
Thanks again..
To be more explicit, suppose an ensemble consists of three systems, a, b, c, each having the same number of samples (molecules). As time passes, these systems evolve, and at a time t each of these systems is in its own microstate (out of, say, A, B, C, D, E, F, G, ... to infinity). Now, at this moment the three systems can be in either three different microstates (say A, B, and C respectively) or in some common microstates (say, a and b share microstate A while c is in microstate C, making the count of microstate A equal to 2, the count of microstate C equal to 1, and the counts of the other microstates zero).
Now say that at equilibrium (time t1) the count of microstate A is 1 (system c being in this microstate) and that of microstate G is 2 (systems a and b being in this microstate), and the counts of the other microstates are zero. Then, according to the equilibrium condition, for all times t1 + nΔt after equilibrium is reached, the ensemble will have 2 copies of microstate G and 1 of microstate A, i.e. the frequencies of microstates A and G remain fixed. This is the microscopic concept of equilibrium.
Statement 1: Two ensembles with the same number of systems and under the same macroscopic conditions (like N,V,E or N,V,T, whatever it is), but with different initial coordinates, will have the same number of microstates once equilibrium is reached. The only difference will be the time taken to reach equilibrium.
Statement 2: According to the postulate of equal a priori probabilities, for an isolated system in equilibrium all microscopic states are equally probable. Does this violate the microscopic equilibrium condition above? Where is the mistake?
Statement 3: What did you mean by samples? Is it the number of systems in an ensemble or the number of molecules in a system? If it is the number of systems, then two different people will for sure have different numbers of microstates.
This write up has been quite lengthy.
Please have a look.
Regards,
Vivek
Vivek: I am afraid I do not understand what you are asking. There may be some confusion, at least from the wording you have chosen.
When I say samples, I do not mean the particles in the system. I mentally imagine a large number of replicas or copies of the same system, all prepared under the same macroscopic conditions. Let R denote the number of replicas or copies of the same system, which I keep fixed so that it does not change with time. Each replica is in a certain microstate, and the frequency of a microstate is its probability.
Do you mean three different systems a, b and c, like an ideal gas, a real gas and a polymeric gas? Or do you mean three replicas of the same system?
You need to explain further so that I can understand you.
By three different systems a, b and c, I mean three replicas of the same system prepared under the same macroscopic conditions (say, an N,V,T or N,V,E ensemble consisting of three systems a, b, c). Each replica can be in one of an infinite number of microstates (A, B, C, D, G, ... to infinity) based on position and momentum. At a particular time t, each replica is in a unique microstate. Say systems a and b are in microstate A and system c is in microstate C; then the probability of microstate A is 2/3, that of microstate C is 1/3, and the probabilities of the other microstates are zero.
Suppose at equilibrium systems a and b are in microstate G and system c is in microstate A; then the probability of microstate G is 2/3 and that of microstate A is 1/3. Once equilibrium is reached, these probabilities remain fixed.
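A tiny bookkeeping sketch of the frequencies described in this example (the labels a, b, c and A, C follow the post; nothing here is more than counting):

```python
# Toy bookkeeping: the probability of a microstate in a finite ensemble is
# its count divided by the number of replicas. Labels follow the example above.
from collections import Counter

replicas_at_t = {"a": "A", "b": "A", "c": "C"}   # which microstate each replica is in
counts = Counter(replicas_at_t.values())
n = len(replicas_at_t)

for state, count in sorted(counts.items()):
    print(f"p({state}) = {count}/{n} = {count / n:.3f}")   # A: 2/3, C: 1/3, others: 0
```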
Now that I understand, I can answer the following. There is no way for you to conclude that you have reached equilibrium with only three samples to consider. I will think of isolated systems. In equilibrium, all microstates are equally probable, each with probability 1/W, where W is the number of distinct microstates. The probabilities in your examples do not satisfy this requirement; hence, you are not dealing with equilibrium. As each probability is 1/W, you need at least W samples to be able to deal with equilibrium.
You need to appreciate this first.
So you mean that at equilibrium the ensemble contains all the distinct microstates. In other words, we can say that at equilibrium the number of systems is equal to the number of microstates.
I know that with only three systems one can't reach equilibrium; we need a large number of systems/samples. I took three systems for the sake of simplicity. In my previous example, equilibrium had not been reached. If the ensemble is to reach equilibrium, the three distinct microstates, each with probability 1/3, must be realized.
Now one question arises in my mind: a system can have a very large (infinite) number of distinct microstates. That means one would have to start with a nearly infinite number of systems. How does one decide how many systems to start with for simulation purposes?
I can't thank you enough, sir, for pointing out my mistakes.
Thanks
vivek
You are finally grasping the mystery behind the Gibbs ensemble picture. It is merely a mathematical construct and does require at least W members, but in general a very large integral multiple of W, in the ensemble. Then one computes the p_i of the i-th microstate and takes an ensemble average at each instant. If the ensemble is not in equilibrium, the p_i are not all equal to 1/W. In time, they will all become equal to 1/W.
However, one can focus on a single sample, the system if you wish, or 3 copies as you had taken. Here I will take only a single member. Then you must watch it over a long period of time and collect all the microstates that appear over this time period. If the period is sufficiently long, you will find that the probability of a given microstate in the collection of microstates (now over time) approaches 1/W, and you obtain equilibrium. But now you are considering what is called a time average, not an ensemble average. Whether the two averages are always the same or not is part of the ergodic hypothesis of Boltzmann.
We have not discussed it so far. But the first section of the book by Landau and Lifshitz explains all this although it does not talk about the ergodic hypothesis.
Good Luck!
Thanks a lot sir........
I have read about the ergodic hypothesis. One can replace the ensemble average with a time average. In MD simulation people use the time average, whereas in MC simulation one uses the ensemble average.
So if one starts with a single system at time t = 0, at each time step the system is in some microstate. Suppose the system has taken 2W time steps, so the corresponding microstates number 2W (distinct plus repeated). Suppose each microstate appears with a frequency of 2, giving a probability of 1/W for each microstate. Once equilibrium is reached, the time average is taken to calculate the macroscopic property.
The next thing coming to my mind is that, since the number of distinct microstates is maximal at equilibrium, the entropy is maximal at equilibrium.
Thanks
Vivek
Your last paragraph is not correct. The entropy is given by the set of the probabilities {pi} of the microstates and not by their number. This you must appreciate. The entropy becomes maximum only when all microstates become equiprobable. This does not mean that the number of microstates is maximum in equilibrium; their number may be the same even out of equilibrium.
Thanks sir
Yes, I understand.
The microcanonical partition function measures the number of microstates available to a system which evolves on the constant-energy surface. Why are there N! and h^{3N} terms in the denominator of the partition function?
In a microcanonical ensemble, if two systems, (N1, V1, E1) and (N2, V2, E2), are in thermal equilibrium, the entropy follows an additivity rule.
The total partition function is given by F(E) = C ∫_0^E dE_1 f(E_1) f(E - E_1), where f is the individual system's partition function. This integration is over all possible partitions of the energy between E_1 and E - E_1 that maintain the constant overall energy E. For each such partition (E = E_1 + (E - E_1)), there are f(E_1) f(E - E_1) possible microstates.
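One way to see the additivity mentioned here is the standard maximum-term argument, sketched below in the usual textbook form:

```latex
% The integrand f(E_1) f(E - E_1) is sharply peaked at some E_1 = E_1^*, so
\[
  F(E) \;=\; C \int_0^{E} dE_1\, f(E_1)\, f(E - E_1)
       \;\approx\; C\, f(E_1^*)\, f(E - E_1^*)\,\Delta E_1 ,
\]
% and hence, up to terms negligible for macroscopic N,
\[
  S \;=\; k_B \ln F(E) \;\approx\; k_B \ln f(E_1^*) + k_B \ln f(E - E_1^*) \;=\; S_1 + S_2 .
\]
% The maximum condition, d ln f(E_1)/dE_1 = d ln f(E_2)/dE_2, is the statement
% that the two subsystems share a common temperature 1/T = dS/dE.
```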
Thanks
Vivek
Thanks a lot...
I have one more confusion, about checking whether equilibrium has been reached or not.
In molecular simulation (MD), people check the variation of E (for N,V,T), T (for N,V,E), or N (for µ,V,T) as a function of time. Once equilibrium is reached, these parameters don't change with time.
Why don't people check the probabilities of the microstates to see whether equilibrium has been reached? As equilibrium is reached, all microstates will have equal probabilities. This could be an alternative way of checking for equilibrium. Why don't people use it then?
Thanks
Vivek
Before answering your new question, let me start by noticing that there is no clear-cut way to check whether a system is at equilibrium or not, simply because different observables may relax to their equilibrium values on different timescales. For example, if you look at the time correlation functions of density fluctuations at different wave numbers, you may have states and intervals of time such that the highest wave-number correlation functions have decayed to zero while the long-wavelength fluctuations remain significantly larger than zero.
Moreover, the quantities which may be useful as indicators of equilibrium depend on the specific simulation algorithm used. For instance, in microcanonical MD one does not use the total energy, since it should be a constant of the motion, and if it is not, that is a signal of inaccurate integration. The temperature in such a case may be a useful probe.
Now, as for directly checking the equal probabilities of the microstates (in the microcanonical case, of course), it is not an easy task. The number of states is usually huge (it grows exponentially with the number of particles), and in order to get a reliable estimate of the frequency with which each state has appeared, one would have to wait until each state has been visited many times.
It is simply not feasible. But it is also useless: provided that the algorithm is guaranteed to sample the phase space with equal probability, it is enough to look for the equilibration of the average values of the relevant observables. Usually they converge well before the whole phase space has been sampled.
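For example, a crude sketch of such a check (the "instantaneous temperature" series below is synthetic, relaxing toward an arbitrary value of 1.5, and the block size and tolerance are made-up numbers): monitor block averages of an observable and regard the run as roughly equilibrated once consecutive block averages agree within the noise:

```python
# Sketch of a practical equilibration check: watch block averages of an
# observable (here a synthetic "instantaneous temperature" series) and stop
# once consecutive block averages agree within a crude tolerance.
import numpy as np

rng = np.random.default_rng(3)
steps = np.arange(20000)
T_inst = 1.5 + 0.8 * np.exp(-steps / 2000.0) + 0.05 * rng.standard_normal(steps.size)

block = 1000
block_means = T_inst.reshape(-1, block).mean(axis=1)

for k in range(1, len(block_means)):
    if abs(block_means[k] - block_means[k - 1]) < 0.02:   # made-up tolerance
        print(f"roughly equilibrated after block {k} "
              f"(steps > {k * block}), <T> ~ {block_means[k:].mean():.3f}")
        break
```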
Thank you Dr. Giorgio Pastore for nicely explaining things in details.
For a system of ideal gas, if one starts with any of the ensembles (NVT, NPT, NVE), exactly the same values of the observables (P, S, E) are obtained.
How does one decide which ensemble to start with for an MD simulation of an ideal gas system?
Thanks
Vivek
Simulating the ideal gas is probably one of the most useless and at the same time one of the most difficult things to do.
It is useless because everything is well known for the ideal gas.
It is very difficult because the ideal gas is a very special system. If it is described as a non-interacting system, it has no mechanism to equilibrate, or to go back to equilibrium if a perturbation is switched on or off. And even simulating a very weakly interacting system may be a daunting task. The problem is that weakly interacting systems are close to integrable systems, and integrable systems are not ergodic.
Why would you like to perform an MD simulation of a perfect gas? Is there any motivation? If the goal is to learn something about MD simulation, I would suggest a classical Lennard-Jones system, trying to reproduce well-known literature results.
Thanks a lot..
I understand that integrable systems are not ergodic.
For learning purposes I wanted to take an ideal gas system.
How does one decide which ensemble to start with for an MD simulation of a real system? Is it based on the system's nature?
Thanks
Vivek
In real research work the choice among different ensembles depends on many considerations. But if you are just learning the basic tools, I suggest you start with the simplest method (for MD, microcanonical simulation) and, after being sure you are able to reproduce literature results, you go on and compare step by step microcanonical simulations with other ensemble simulations.
Thanks a lot..
I have a few more confusions.
1) Why is the quantum mechanical correction 1/h^{3N} included in the partition function? Is it to make the partition function dimensionless?
2) Why is there a 1/N! term? Is it there to take care of the indistinguishability of the particles? If classical degeneracy is included, does this N! term have to be omitted?
3) This is related to the grand canonical ensemble:
In deriving the partition function, a combined system separated by a permeable wall is considered. System 1 has N1, V1, T and system 2 has N2, V2, T (N2 >> N1, V2 >> V1). Particles can be exchanged between the systems. To get the partition function of system 1 (which is supposed to be maintained at μ, V, T), the density function of system 1 is integrated over the phase space of system 1, and this integral is summed over all the values of N1 that system 1 can take.
Since the number of particles in system 1 is changing, its chemical potential (∂G/∂N)_{P,T} is also changing, because system 1 has not reached the equilibrium that would make the chemical potentials equal. Particles are transferred into system 1 unidirectionally.
If the μ, V, T condition is not maintained, why do the particle numbers change in system 1?
We know that the chemical potential remains constant when there is chemical equilibrium. In equilibrium, the number of particles going out of a system is equal to the number of particles coming in, keeping (∂G/∂N)_{P,T} constant.
I don't know how well I have been able to explain question 3.
Please go through this.
Thanks
Vivek
Are you referring to any (text)book? Quite a few of your very valid questions are answered in detail in such books, where you might be able to get answers to all or most of your questions in one go.
Apologies, since I am not answering your question.
By definition, the microcanonical ensemble describes an isolated system, so its energy remains constant. On the other hand, in the canonical ensemble the system interacts with a big reservoir at temperature T, without exchanging particles. In this case the system can have energy E with a probability proportional to exp(-E/k_B T). Nevertheless, if the system plus the reservoir is isolated, the total energy E + E' remains constant.