If we look at the laws of Newton, Schroedinger, Einstein and others, we observe that they are all second-order differential equations, ordinary or partial. Why such a coincidence? Is this an indicator that our picture of reality is just a linear projection, or is there something deeper behind this universality of the second order?
Very interesting question. Why?
Maybe, to take into account the motion of time? Or rather, the motion in 4-dimensional space?
A nice answer is in Sidney Coleman's 1961 article, http://www.rand.org/content/dam/rand/pubs/research_memoranda/2006/RM2820.pdf
``Classical electron theory from a modern standpoint''.
The issue isn't with the spatial derivatives, but the time derivatives. More than two time derivatives require that the initial conditions fix not only position and velocity, but, also, acceleration, thus force. Therefore they lead to acausal propagation. Coleman has a very nice discussion about this point, precisely. Physical systems whose equations have more than two spatial derivatives do exist.
@Stam, the Taylor series expansion (see attachment) is the interesting point of the paper you mentioned. The second order of the spatial derivatives means curvature: it seems that the following causal scheme holds for our local universe:
Time dependent entity ---> Curvature of space or space time trajectory.
But is this something really existent or is it just another projection of us?
Once more, what do you assume as known and understood? A ``time dependent entity'' does *not* have anything to do with curvature: the trajectory of a free particle is straight (in any given coordinate system); nevertheless the velocity is *not* zero, while the acceleration is.
The statement is that *if* you want causal propagation (where this can be consistently defined), *then* you can't have more than two time derivatives. If you have fields, like the electromagnetic field, then, as Coleman explains, you need to take extra care.
Newton's law: F(t) --> d^2 r(t)/dt^2, which can compute the curvature of a curve.
Schrödinger's law: d/dt Psi(r,t) --> del^2, a second spatial derivative, which can also lead to curvature, although this is not directly seen.
Einstein's law: Tμν --> Gμν; here it is self-evident.
As time varies, the result is a variation in a curvature. What curvature? That depends on the law.
My question is not about curvature, is about the second order of spatial derivatives. Why?
Because ultimately, all physics is made of waves, and other theories are approximations or reformulations. For example, geodesics in general relativity are a mere consequence of Huygens' principle.
That's maybe part of the answer. More fundamentally, Leibniz pointed out that there must be both a principle of change and a detail of what changes. In waves there are always two conjugate quantities that must interplay through a second-order differential equation. If the equation is of first order, then there must be several components, like a complex wave function or a spinor, which is the standard way of solving a second-order equation.
Later, Wheeler noted that all the laws of physics reduce to a very simple topological theorem that he dubbed ``the boundary of a boundary is zero'', or ``boundaries have no boundary''. This is algebraically expressed as something like d^2 = 0, which is second order.
All your examples contain second time derivatives, so they don't bear on your question.
Take the classical wave equation, which is a limiting case of Einstein's equations, incidentally. Higher than second spatial derivatives simply lead to dispersion phenomena, that's all. These do exist and can be described in that way. However these are ``effective'' descriptions, valid within some approximation.
(Newton's law doesn't describe causal propagation, Schrödinger's equation is non-relativistic, pertains to phase space not real space, so isn't relevant for this discussion.)
The reason is that, if you did have higher spatial derivatives, you would have a length scale, which would violate Lorentz invariance (with respect to the velocity of propagation of your wave equation, of course).
It is because these laws are directly or indirectly related to force, in other words to a quantity called acceleration. We need just two mathematical operations to define the precise position of an object. So knowing the trajectory or the particle position, which is the ultimate aim of the fundamental laws, requires second-order operations.
@Stam, for Newton's law and curvature see: http://en.wikipedia.org/wiki/Curvature . Other laws have spatial derivatives. Even so, the 2nd order is present.
@Mukesh, the need for a wave is exactly what we have to investigate. Why should a law present a wave-like solution at all? Is it something fundamental, or is it just the reaction of the material, just as Hooke's law is for a spring?
@Claude, very interesting answer! I have to read about it... Can you provide me with a relevant link? Thanks.
Curvature of a curve, of course, is related to acceleration-but, by definition, a curve is parametrized by *one* parameter. If the curve describes the position of a particle, the parameter is time. So the acceleration of the particle is proportional to the curvature of the curve. These statements involve *time* derivatives, once more, *not* spatial derivatives.
I don't think it matters whether the derivatives are spatial or with respect to time. The main question, after a first inspection, is this: What is the legitimization of waves in Physics? Are they self-existent, or are they just a low-order description of the true and unrevealed nature behind every 'law'? What is the answer to this question?
Once more: there does exist a difference between time derivatives and space derivatives, and it has to do with the causal propagation of signals. For mechanical waves it turns out, experimentally, that they can be understood as the collective behavior of matter, which is, indeed, discrete at atomic scales. For electromagnetic waves it turns out, also experimentally, that, as far as we can measure, they *cannot* be understood as the collective behavior of other degrees of freedom, but are *the* fundamental degrees of freedom of the electromagnetic field. So here the causal properties are fundamental, and from them we deduce everything else. And since electric charges are sources for the electromagnetic field, this has consequences for mechanics.
See the classical textbook of Misner, Thorne and Wheeler _Gravitation_, especially chapter 15.
For Leibniz, it is in his metaphysical system.
Demetris, such an awesome question ! Thx for it.
A brief comment: if one focuses on how concepts evolved, the ever-increasing interest in CHANGE (at least since Kepler) and in its expression first in geometrical proportions and then in more purely algebraic terms helps to give a useful historical perspective on how DEs came to grow in importance; Kepler, Galilei (particularly in Two New Sciences), and to some extent Descartes (particularly in his Geometry, including its fantastic Preface), are good works to present this in class. At the time of Newton, the sophistication of analytical expressions of space, time, and change proper, together with the concept of ACCELERATION, can help one understand how 2nd-order ODEs acquired their importance (in a way, CONSERVATION is an unavoidable, perhaps 'synthetically a priori' concept that 'followed' from CHANGE).
This does not help in answering the deeply disturbing 'why' in your question, and it deals only with the basic relations of Classical Dynamics. I think Physics as a body of knowledge extends well beyond Kepler, Newton, Schroedinger, and Einstein, and I tend to side, if anything, with the mature Poincaré, without committing myself to unification (mostly for practical reasons). Still, your WHY is very instigating.
The equations of physics link both time and space derivative, so no distinction can be made between them, or it would be purely formal. There is no third order space derivative in any fundamental theory, for the simple reason that it wouldn't be Lorentz invariant, as Lorentz invariance precisely says that there is no distinction between space and time.
Here we note that the invariance of the pseudo-norm of a 4-vector is equally expressed as a second-order (polynomial) equation. That provides a further reason, namely Pythagoras' theorem, if the concept of distance is to make sense at all. In turn, distance is associated with causal chains.
No; odd space derivatives would break parity, that's all (under assumptions of how the numerators transform under parity). Even space derivatives are fine and fourth order space derivatives do enter the equations that describe bending rods, membranes and plates. Distance has *nothing* to do with causality, absent assumptions. And among the assumptions are those that fix the spacetime structure, which means that, if we allow for gravitational fields, which is the subject of Misner, Thorne and Wheeler's book, we must be even more careful of spelling out what we're talking about.
Distance is the number of links in the causality chain, that's the raison d'être of time, and of space too. They are but representations for organizing the events according to their causal relations. Entanglement shows that other relations exist across space-time.
For these assertions to make sense you need to define what you mean by causality, by chain, by link and then how the definitions of space and time are compatible with them. Newtonian mechanics has a totally different causal structure from Lorentzian mechanics and both are consistent, if you neglect electromagnetic effects, that then, experiment shows, tell you that the causal structure of Lorentzian mechanics and
not Newtonian mechanics is compatible with electromagnetism. In particular, that space-time has *one* ``time-like'' dimension (the signature of space-time does *not* change under a Lorentz transformation). If you allow other structures, you may end up having closed time-like curves.
All these issues enter in the determination of which distances are ``space-like'', ``time-like'' or ``light-like'' (null) and whether these labels make sense.
Many of the fundamental laws in Physics are conservation principles, which in most cases can be written in the general framework of convection driven by a gradient, therefore yielding the popular 2nd-order differential operators in space. But, of course, this is usually a first approximation for most of these systems, and deeper, more sophisticated models will be needed to fully understand and link these fundamental principles.
Is that to say that rods, membranes, and plates aren't described in full detail by the Dirac and Maxwell equations, or by QED? I don't think so. Of course, a non-relativistic model does not necessarily have this constraint, but then it isn't fundamental in the sense of the question.
Indeed: the equations for rods, membranes and plates are classical, not quantum, do not describe spinors and are not invariant under transformations that leave the speed of light invariant, but, at most, the speed of sound in the material. If the material has anisotropic elastic moduli, a subgroup of the rotation group is preserved, that, indeed, for cubic groups, for instance, allows fourth order terms.
Try writing down the equations that describe the vibrations of a rod or a beam under bending moments, for instance.
Of course these are effective, not ``fundamental'', and the reason is Lorentz invariance.
I would argue that any sensible physical theory can be related to a variational principle. And if everything is nicely "smooth", the Euler-Lagrange equations do the trick at least for conservative potentials.
@Demetris: great question.
Now, to complement Markus' answer: the simplest Lagrangians leading to *differential* (as opposed to algebraic) equations are the first-order Lagrangians, but then the resulting Euler--Lagrange differential equations are of (at most) second order, which in a sense answers Demetris' original question: the fundamental equations are mostly Lagrangian and it is natural to expect that they should be of the simplest possible form, so the Lagrangians should be of the first order and the equations of motion of (at most) second order.
Note that in certain cases first-order Lagrangians produce first-order equations, as is the case e.g. for the Dirac equation, so perhaps the first order of the Lagrangians is, where applicable, more fundamental than the order of the equations. On the other hand, while the standard Lagrangian for Einstein's general relativity is of second order (i.e., of the same order as the Einstein field equations), there is also a formulation with a first-order Lagrangian.
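To make the Lagrangian point concrete, here is a minimal sketch (assuming Python with sympy is available; the harmonic-oscillator Lagrangian is just an illustrative choice) showing that a first-order Lagrangian L(q, q') yields an equation of motion that is at most second order:

import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

# First-order Lagrangian (harmonic oscillator): kinetic minus potential energy
L = sp.Rational(1, 2) * m * q(t).diff(t)**2 - sp.Rational(1, 2) * k * q(t)**2

# Euler-Lagrange equation: d/dt (dL/dq') - dL/dq = 0
eom = sp.diff(L.diff(q(t).diff(t)), t) - L.diff(q(t))
print(eom)   # m*q''(t) + k*q(t): second order, never higher

The same bookkeeping shows why a Lagrangian containing q'' would push the Euler-Lagrange equation up to fourth order.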
Suppose you could give an explanation of why "the most fundamental laws" have such or such form. In your explanation you will use terms, relations, and principles, by virtue of which the fundamental laws of Physics take the form you want to explain. Then, those concepts, relations, and principles would be more fundamental than the fundamental laws you want to explain. That's the case, for example of the explanation of the relation PV=nRT provided by atomic and molecular Physics.
But the expression ``simplest Lagrangians'' begs the question: what are the symmetries? The variational formulation implies that one must write down the *most general* Lagrangian, compatible with the symmetries of the problem: the symmetries dictate what the dynamics will be. So there is a reason, from the symmetries, why we take only two time derivatives (causality) and only two space derivatives (global Lorentz invariance)--at the fundamental level. If we relax the symmetry assumptions, e.g. global Lorentz invariance, in elasticity, we can have higher spatial derivatives.
Newtonian mechanics allows absolute simultaneity and infinite speeds, which means that the effect is simultaneous with the cause. There is then no distinction between cause and effect, so Newtonian mechanics is not causal.
The only assumption that needs to be made is causality. Then the causality network between events can be embedded in a manifold with a metric and a law of transformation that preserves this metric, so that it is always possible to find a referential where an effect and its cause are separated by one unit of distance and along a unique coordinate corresponding to time. There is naturally no absolute simultaneity. It happens that this manifold is three-dimensional (or more: 10? 26?), which is not a consequence of causality, but a feature of Nature, and/or of our cognitive functions.
The evolution of physics, guided by experimental observation, has led it to a form more compatible with causality. Actually, causality is an unspoken and unjustified assumption of every science.
From an Engineer's perspective, using a System Identification approach: a math model is good if it has effective predictive capability. Many natural phenomena exhibit oscillatory behavior, which brings to bear one or more frequencies. A second-order system is the lowest order which can reproduce oscillatory behavior. The motion of the leaves on the branch of a tree swaying in the wind follows Newton's second law, which yields ...oscillatory behavior. The fundamental laws of nature allow us to make these predictions using ``first principles'', which happen to yield a class of second-order differential equations; these may be generalized to model spatio-temporal behavior as well, e.g. the Navier-Stokes equations that govern fluid flow are based upon Newton's second law.
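A minimal numerical sketch of that statement (plain Python, semi-implicit Euler; the parameters are arbitrary choices): a first-order linear law x' = a*x can only grow or decay, while the second-order law m x'' = -k x oscillates with bounded amplitude.

import math

k, m, dt, steps = 1.0, 1.0, 0.001, 10000
x, v = 1.0, 0.0                      # initial position and velocity
for _ in range(steps):
    v += -(k / m) * x * dt           # dv/dt = -(k/m) x   (Newton's second law)
    x += v * dt                      # dx/dt = v
print(round(x, 3), round(math.cos(math.sqrt(k / m) * steps * dt), 3))
# the numerical x tracks cos(omega*t): bounded oscillation, no exponential blow-up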
For a rod for example, there is a fundamental surface which is the section of the rod. The fourth order space derivative says just that, that is, there are three dimensions of space and only one of time. But it says nothing about the distinctive natures of space and time, since the section is a feature of the rod and not of the laws of physics. There is no sixth order space derivative, which would be possible in a seven dimensional space.
We can also look at Maxwell's equations. They describe all of classical electrodynamics. But none of them are second order! They involve first derivatives of the electric and magnetic field vectors. Then can we say that `most' fundamental equations in Physics are second order? I would think that it depends on the variables we choose to describe the natural phenomenon.
Now we see that
div.E = rho/eps_0
can be converted to a second-order equation if we use the electric scalar potential phi, which satisfies
E = -grad phi.
Then in terms of phi we have a second-order differential equation, whereas in terms of the E vector we have a first-order equation. Then the question really is whether we should use phi or the E vector, that is, the electric scalar potential or the electric field vector.
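A quick symbolic check of exactly that change of variables (a sketch assuming sympy; the symbol names are only illustrative): Gauss's law is first order in E, but substituting E = -grad phi turns it into the second-order Poisson equation.

import sympy as sp

x, y, z, rho, eps0 = sp.symbols('x y z rho epsilon_0')
phi = sp.Function('phi')(x, y, z)

E = [-sp.diff(phi, w) for w in (x, y, z)]                   # E = -grad phi
div_E = sum(sp.diff(Ei, w) for Ei, w in zip(E, (x, y, z)))  # div E
print(sp.Eq(div_E, rho / eps0))                             # -laplacian(phi) = rho/eps_0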
Some popular answers till now:
0) Oscillation is present everywhere in nature: since (sin(x))'' = -sin(x), every time we need to describe an oscillation we have to use a 2nd-order DE as the simplest candidate equation.
1) The boundary of a boundary is zero: after reading Ch. 15 of Misner et al. we see that this is a generalization of the well-known Stokes theorem, see here:
http://en.wikipedia.org/wiki/Stokes'_theorem
This answer focuses on the conservation demands for physical entities: we build a law by demanding conservation of a quantity.
2) The optimization technique: we start from a Lagrangian (btw, just like in classical Economics!) and apply the Euler-Lagrange equations for minimal action (another big story is why the action should be minimal!), and so we obtain 2nd-order equations.
3) The chosen variables: if we choose this set, then we have first-order DEs, otherwise we have 2nd-order DEs, and so on.
Interesting answers, I think. Just to add one of my views here:
Beyond the particular route we use in order to end up with a proper DE describing our problem, the simplicity of the final output is apparent: it reminds us of the famous Taylor expansion f(x0)+f'(x0)(x-x0)+1/2*f''(x0)(x-x0)^2+..., so we could say that all our theories are the first two or three terms in a generalized Taylor expansion for Theories: a concept that has to be defined later, be patient!
A preliminary example now:
Let's see what the difference is between Newtonian Mechanics (NM) and Einstein's Special Relativity (SR). In SR the kinetic energy of a particle with mass m and velocity v is:
K=\frac {mc^2}{\sqrt {1-{\frac {{\upsilon}^{2}}{{c}^{2}}}}}-m{c}^{2}
or
K=m*c^2/sqrt(1-v^2/c^2)-m*c^2
By taking the Taylor series expansion around v=0 we have:
K=\frac{m}{2}\,{\upsilon}^{2}+\frac {3m}{8{c}^{2}}{\upsilon}^{4}+\frac {5m}{16c^4}\,{\upsilon}^{6}+O \left(\upsilon^8 \right)
or
K=1/2*m*v^2+(3*m/(8*c^2))*v^4+(5*m/(16*c^4))*v^6+O(v^8)
So the term 1/2*m*v^2 is Newton's kinetic energy, recovered when v << c.
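For anyone who wants to reproduce the expansion, a one-line check (a sketch assuming sympy is available):

import sympy as sp

m, c, v = sp.symbols('m c v', positive=True)
K = m * c**2 / sp.sqrt(1 - v**2 / c**2) - m * c**2   # relativistic kinetic energy
print(sp.series(K, v, 0, 8))
# m*v**2/2 + 3*m*v**4/(8*c**2) + 5*m*v**6/(16*c**4) + O(v**8)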
As the electromagnetic field can be written as a function of the potentials, with no more than space and time derivatives, it is clear that the variables to be taken are the potentials, and then the equations are second order. The current density can also be expressed as a function of the electromagnetic field, and thus of the potentials.
The action, as the integral of the Lagrangian along a path, has the properties of a distance. Indeed, the Lagrangian usually taken for a free relativistic point particle represents the length of the path, cf. the Fermat principle. We come back to geometry, and to distance in (an abstract) space-time.
I don't think there has been a serious conversation about the techniques we are using to derive the laws of Physics.
1) There are fans of optimization techniques who always find the least-action solution by using Lagrangians and the Euler-Lagrange equations, yet have never written anything about the legitimization of such a technique: I always wonder why a properly defined action should be stationary.
2) Next we have fans of the conservation laws: first they define a physical or mathematical entity, like a current density or a probability, and then they demand its conservation, thus leading to a law. But, since we are not even sure that our defined entity has to do with reality, how can we demand its conservation? I have not seen many arguments about this.
So, to be honest, we are acting like Economists, who have the optimization technique as a Holy Reading and follow the rules without any objections. Our only advantage is that electrons have no brain with which to react differently under the same conditions: we are lucky enough to deal with lower-intelligence systems.
Charles, one cuckoo does not bring the Spring! (And btw there are many other counter-arguments against QED, so that positive result is outweighed by other negative results.)
Demetris, Charles: opposing Action vs. Conservation looks like a very good exercise; it seems promising to be used, for example, in class. I like Charles' emphasis on a 'something' metaphysical in Action (which interestingly might lead to questions of the explanatory power of metaphysics in parts of Physics). I'd like to add that we can show, in class in particular, what a powerful empirical principle Conservation is, and how well it can be submitted to verification and refutation (this seems to be, at least, quite didactic); at the same time, working out the conservation of physical magnitudes may be seen as the result of 90% perspiration and 10% inspiration, as sometimes it takes a long time just to find the expression of a magnitude whose conservation can be demonstrated.
Happy New Year; Demetris, I wish you continue to find time and inspiration to offer us inspiring questions in 2014.
Conservation laws are a consequence of the Lagrangian formulation through Noether's theorem. The underlying concept is symmetry.
Newton's laws are not the only assumptions behind the Lagrangian; there is the d'Alembert principle too. They didn't lead to quantum mechanics, while the action, through the Hamilton-Jacobi equation, did. That is because the action is actually the phase of the wave function, and it is more general than the classical linear momentum, which doesn't include the electromagnetic potential, for example. Minimizing the action is keeping the trajectories where the interference is constructive.
Charles, I may have misunderstood what you meant when you wrote "There is a load of elaborate metaphysics here, to arrive at precisely the same simple equations".
At the risk of ridiculously distancing myself from the technical, empirical, and theoretical aspects of the discussion, let me say that, as a non-physicist, I would find it uncomfortable not to notice differences in the metaphysics of Newton and Leibniz, Schroedinger and Einstein, just as, as a non-biologist, I should be aware of similar differences between, say, Gould and Dawkins. Of course I can have my sympathies, but I seldom have the technical sophistication required to justify them, and I know I have to live as happily as possible with different schools of thought in these sciences. It was in this context that I said metaphysical beliefs can have explanatory power, for different schools of thought inside each discipline.
I apologize if this discussion is not really technical.
Historically, De Broglie introduced waves and used them to explain the Bohr atom. It was an inverse analogy, and not a mathematical inference. Then it was the action that made sense, not Newton's laws. The analogy between the Lagrangian formalism and the optics of short wavelengths had been known for a long time. Experiment made the decision, and only then did Schrödinger find his equation. The probabilistic interpretation came afterward to describe the transitions between levels; the Hamiltonian is only a shorthand for writing the wave equation.
Classical and quantum mechanics have the same mathematical structure, but are distinct theories, not approximations. There are no trajectories in quantum mechanics, even approximately. Every term of the Fourier decomposition of the trajectory can be separately detected. The modern formulation of forces as gauge theories has nothing to do whatsoever with Newton's laws, the propagation is free in some (abstract) space with some metric (in contrast to the "action by a medium" in Newton's view of gravitation.) The conservation of momentum is derived from the invariance by translation, without taking forces into account, and it has a direct physical interpretation as the wave vector.
@Diogenes (with a famous Greek name!), Have a nice New Year and I hope to discuss with you about interesting themes in 2014!
A possible legitimation of the principle of least action is this:
⟨S⟩ = ⟨0⟩ => ⟨δS⟩ = 0 => S is stationary
It is the cosmological principle of zero universe: everything is a vibration around zero, nothing existed, exists or will exist, but only oscillations around zero. Something like 'quantum vacuum' which is full of virtual particles.
@Claude, there exists a huge discussion about "...Fourier decomposition of the trajectory can be separately detected." What is your opinion? Do you believe that the terms of that Fourier series are self-existent entities, or are they just mathematical creations? I do not speak about the overall measurement but about every separate term a[n] |n>. I know that this is not relevant to the present topic, but I'd like to ask about it.
@Demetris, according to the type of measurement, for example position or momentum, on the same particle, a different decomposition, also called a representation, should be used. So the terms don't exist by themselves; only their sum, that is the wave function, does.
If we measure the position of a particle in a narrow beam, each one will be found near the axis. But if we measure the momentum, we get a plane wave that fills the entire space. The wave function is near zero far from the axis, but if the other plane waves are subtracted by the projection, it gets a greater value. (Rigorously speaking, it is infinitesimal because of the normalization, but the probability to subsequently find the particle is the same for every region of space.)
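That trade-off between the two representations is easy to see numerically; a small sketch (assuming numpy; the Gaussian beam profile and grid are arbitrary illustrative choices): the narrower the position-space packet, the broader its momentum-space (Fourier) distribution.

import numpy as np

x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
for sigma in (0.5, 5.0):                        # narrow and wide beams
    psi = np.exp(-x**2 / (2 * sigma**2))        # position-space amplitude
    psi_k = np.fft.fftshift(np.fft.fft(psi))    # momentum-space amplitude
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
    prob = np.abs(psi_k)**2 / np.sum(np.abs(psi_k)**2)
    print(sigma, round(float(np.sqrt(np.sum(prob * k**2))), 3))   # r.m.s. spread in k
# narrower in x (sigma = 0.5) gives a wider momentum distribution, and vice versa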
@Claude, I totally agree with you. It is the sum that has a meaning, and not any individual term: since we have to choose among an almost infinite set of basis functions, it would be rather silly if nature had a physical representation for every term we happen to involve in our Fourier series expansion. The problem is that we are using merely sinusoidal functions, which describe an oscillation, so we are confusing the normal modes of a string with the normal modes of a quantum system. A relevant topic is this:
https://www.researchgate.net/post/What_is_your_opinion_about_the_Sturm-Liouville_theory
but it did not attract much participation. You are welcome there also.
@Charles, I respect your effort (btw: your website is very well designed!) but I do not agree with you. I can distinguish between respect for somebody's hard work (I know it is hard work from my first studies in Physics) and what I (thinking critically and without dogmas) regard as closer to the truth. QED is tied to the sinusoidal functional representation. Here is the chance for you --> try to build a sine-free theory.
Happy New Year!
Maybe we can regroup the laws into two groups:
1. Laws which are valid at large scales (such as humans).
2. Laws which are valid at atomic and subatomic scales.
At subatomic scales, of course, trajectories are probabilistic in nature. The wave function satisfies a wave equation. The wave equation has a double derivative. In that domain second-order differential equations will appear automatically.
At large scales, however, one is not so sure whether one MUST use second-order differential equations. One counter-example is Maxwell's equations, where experimentally measured quantities can be handled with first derivatives as well.
Happy new year to all!
That's the main question: the one about the wave nature. Is it really present, or is what we are watching just the material's response, a forced oscillation which can always be described by a 2nd-order ODE or PDE?
I wish everybody a Happy and Creative New Year!
What is more fundamental? This question has two answers, an easy one and a difficult one. More fundamental is whatever allows one to describe more experimental facts. But if we compare classical mechanics and quantum mechanics, they are distinct theories and describe distinct sets of facts (leaving aside general relativity, which is still more difficult). A more fundamental theory should generalize and unify both theories, but it wouldn't make use of the primitive concepts of wave and corpuscle. It would likely be non-linear. Second-order differential equations would then only appear through approximations or for auxiliary functions.
There is no classical limit or classical approximation of quantum mechanics, see:
http://arxiv.org/abs/1201.0150
Corpuscles and waves together lead to an inconsistency, including between quantum mechanics and general relativity. The puzzle hasn't been solved in nearly a hundred years.
As for the Feynman path integral interpretation:
R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals, (McGraw-Hill, New York,1965)
I have only ONE question. Given the quote from page 29:
"The phase of the contribution for a given path is the action S for that path in units of the quantum action hbar...The contribution of a path has a phase proportional to the action S:
phi[x(t)]=const exp((i/hbar) S[x(t)]) (2.15)"
*Question:-->What is the legitimation of such an assumption?
I think it is a great jump without reasoning, neither mathematical (why proportional, and especially why linear?) nor physical (I leave that to the participants of the discussion).
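Not an answer to the 'why', but the mechanism that this assumption buys can be illustrated numerically. A sketch (assuming numpy; f(x) = x^2 merely stands in for the action): contributions exp(i f/hbar) with rapidly varying phase cancel, and as hbar shrinks only the region around the stationary point of f survives, which is how the stationary-action (classical) path gets singled out.

import numpy as np

def osc_integral(a, b, hbar, n=200_000):
    # crude Riemann sum of exp(i * x**2 / hbar) over [a, b]
    x = np.linspace(a, b, n)
    return np.mean(np.exp(1j * x**2 / hbar)) * (b - a)

for hbar in (1.0, 0.1, 0.01):
    near = abs(osc_integral(-1.0, 1.0, hbar))   # interval contains the stationary point x = 0
    far = abs(osc_integral(1.0, 3.0, hbar))     # no stationary point inside
    print(hbar, round(near, 4), round(far, 4))
# 'near' stays of order sqrt(pi*hbar), while 'far' is driven toward zero by the oscillations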
1: The Schroedinger equation is not actually second order.
Fundamental physical equations are defined in terms of a metric (the Euclidean metric in the non-relativistic case and the Lorentz metric in the relativistic case). It so happens that the simplest invariant differential operators of the corresponding invariance groups (rotation or Lorentz group) _which therefore depend only on data given by nature and not on the arbitrary choice of coordinates made by mere mortals_ are second order. The requirement of invariance, or more precisely of independence from the choice of coordinates, is more important than second-orderness: the Maxwell equations are first order in the fields, and the Dirac equation is also first order, but in both cases they are defined in terms of invariant (or covariant) differential operators.
Now, being first or second order is also a matter of perspective: in terms of the potentials (or if you want, the connection == covariant derivative) the Maxwell equations are second order. Likewise, Einstein's equations of general relativity are second order in the metric, but first order in the affine connection == covariant derivative. The first-order equations are not invariant but merely covariant under the gauge group (which is not a problem at all but something you have to deal with). We can fix an _arbitrary choice of gauge_ and get second-order equations. We can also study gauge invariants, of which the (curvature) fields (first derivatives of the connections; in GR, second derivatives of the metric) are the easiest to work with.
Charles, In one way or another, you have to take the limit h -> 0, since h doesn't appear in classical mechanics. You just let everything else tend to infinity, which is mathematically strictly equivalent, and besides is what is meant by letting h tend to zero. You make the wrong assumption that quantum mechanical probabilities are not different from classical ones, which isn't true, as Bell's theorem and Bell like experiments show. The Schrödinger's cat is not about whether the cat can be described by classical mechanics, it can, it's an experimental fact. The problem is the interface between quantum and classical mechanics, and all what it says is that they are distinct theories, giving a concrete example where N -> oo is the one that is incorrect.
That's the usual way of obfuscating and then saying shut up and calculate, everything is under control. Maxwell's electromagnetism was inconsistent with Galilean relativity. Yet the ether was drawn out of thin air and all difficulties were denied. Even Poincaré clung to old-fashioned beliefs and missed the great discovery by a narrow margin.
I think that Rogier brought at the conversation another big issue: That of the almost Holy Nature of Differential Geometry:
" It so happens that the simplest invariant differential operators of the corresponding invariance groups (Rotation or Lorentz group) _which therefore depend only on data given by nature and not on the arbitrary choice of coordinates made by mere mortals_ are second order. "
We have the dr.dr term, which is Pythagoras and Euclid together, although we work with Riemannian Geometry! Since our way of thinking is merely 'linear' (see my relevant question about 'linear science'), it is a corollary that our laws will be given by simple relations built out of 2nd-order differentials. Thus, we have to look a little deeper into the phenomenologically innocent Differential Geometry. What if we changed our view and, instead of Differential Geometry, we were working with Topology? What do you think about that?
Charles, when the number of particles increases, either they are independent, and it is the thermodynamical limit, not the classical one, or they aren't, and it amounts to a mass increase, since the wave function is in the configuration space, not in the ordinary one. Then the wavelength of the collective motion decreases with respect to the characteristic size of the system, as it is linked to the mass by the Planck constant, not to the size. Bell-like experiments can't be described with classical probabilities like a dice, since there is no (local) hidden variable; they require probability amplitudes. The classical limit doesn't remove entanglement.
Systems of more and more particles are found to present quantum mechanical behavior. The correspondence principle is a dogma that has never been proven, neither theoretically nor experimentally. Every hint that it is wrong is merely suppressed.
This topic is really interesting, so thank you all for this nice discussion.
Some of the answers justify second-order DEs by the existence of a variational principle.
It is worth noting that nothing precludes the use of higher-order Lagrangians to describe a physical system. Furthermore, such a description will lead to a higher-order Euler-Lagrange equation. So up to that point, there is no contradiction between higher-order theories and variational principles.
(Some problems will appear if one wants to obtain the dual Hamiltonian formulation: negative-energy modes appear in the Ostrogradsky Hamiltonian formulation.)
Hence for me the variational justification for 2nd-order DEs is false. The true problem lies in the physical properties that have to be respected (causality, Lorentz invariance). This point is the core for fundamental physical laws. Concerning effective physical laws such restrictions are less relevant, and higher-order descriptions are very frequent.
We are trapped in the concept of law covariance under the Lorentz transformations etc. What is so fundamental after all? Differential Geometry's metric g_{ij} and the inner product defined by it? Why such a dedication to all those procedures? Isn't it possible to formulate our world without second-order inner products at all? Or is this just the jail of the easiness that linear algebra offers us? I tend to believe that we are just linear human beings!
Charles, you haven't found a fault in the proof of the reference I gave. I say that taking the limit of a large number of particles is incorrect, since we get a wave function in a 3N-dimensional configuration space, not the 3 dimensions of classical mechanics, unless we are taking thermodynamical averages over an ensemble of fictitious particles, of which only one is present at a time.
Bell-type experiments are not only those designed specially to study entanglement; it is the normal situation in every quantum physical system, even if the entanglement isn't between different particles, but between parts of the wave function of the same particle. That is linked to the projection postulate, and that is what makes quantum mechanics definitively distinct from classical mechanics.
I won't continue this discussion since it is rather out of topic, and we already know we don't agree.
Here is another counter-example: the Dirac equation. It is first order in the space derivatives as well as in the time derivative. See for example:
http://en.wikipedia.org/wiki/Dirac_equation
In fact Dirac started from the Klein-Gordon equation. He wanted to formulate a relativistic wave equation which does not have double derivatives. As a result electron spin was discovered. Note that the Klein-Gordon equation cannot describe spin-half particles.
In the non-relativistic limit the Dirac equation reduces to the form
H psi = E psi
which again has second-order space derivatives, because H has a term like p^2/2m.
@Biswajoy, the interesting point of the Dirac equation is its two degrees of freedom (spin up and down), so the number 2 is also silently involved!
The Dirac equations are first-order equations, but note that they are coupled equations.
Yes, but the delivered information has 'dimension' 2; it is not a scalar-field solution. My argument is about the presence of the number 2, and not 3 or 4 or 5. The number 2 is apparent everywhere, even in 1st-order equations (Maxwell's equations are also 1st order, but with an appropriate potential they become 2nd order).
I think that a strong reason for 2nd order equations is this:
*The conception of oscillation comes from looking around us, and there also exists the demand that the amplitude be finite: remember, otherwise everything would have been destroyed in an exponential explosion. This is the ground zero of matter: the demand for stability.
*So: stability of matter --> oscillations with finite amplitude --> 2nd-order equations.
Dear Demetris,
I'm really enjoying the discussion. In this regard, I see fractional derivative models as a potential (and powerful) way to generalize the 2nd order question you've raised. In fact, the concept of oscillations with finite amplitude (and stability issues) is also preserved for fractional derivatives (i.e., the fractional derivative of harmonic functions also being harmonic functions).
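For reference, the standard result behind that remark (for the Liouville/Weyl fractional derivative, the natural choice for sinusoids) can be written as
D^{\alpha}\sin(\omega t)=\omega^{\alpha}\sin\left(\omega t+\frac{\alpha\pi}{2}\right),
so the familiar second-order case (sin)'' = -ω² sin is just the point α = 2 on a whole continuum of 'orders'.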
Dear Alfonso, I hope to enjoy the discussion in a good way :)
As for fractional derivatives, what do you think could be the net added value of involving them? Can we explain something that we couldn't explain with ordinary derivatives? I mean, apart from the nice limiting property, what else could make them scientifically worth using?
I would also like to make the following observation. Maxwell's equations are coupled first-order equations between the E and B fields. When we decouple them we get second-order differential equations for each of them. But then we also lose the connectivity between E and B, that is, the mutual dependence between E and B.
That is, we lose some information!
And, Biswajoy, not only do we lose information, but we also have a variety of gauge-fixing methods for resolving the problem of the 'constant of integration'. But then they describe a wave-like object. So, what is principal here: the wave nature of light, or our mathematical convenience in choosing the gauge?
The potential obeys a second order differential equation too, and both E and B are derived from it, that's the explanation of their connectivity. Actually, their connectivity means that there are too many degrees of freedom. The Dirac equation couples to the potential, not the fields.
Yes, I appreciate your point very much, which really contains the key:
Compare the two situations: (i) I know a function; (ii) I know its derivative. In situation (ii) I have to integrate the derivative to obtain the function, therefore one arbitrary constant comes into the discussion.
When we decouple first-order differential equations, the decoupled set becomes second order: that is, one more derivative comes in. Now if I want to recover the first-order set, one arbitrary constant is introduced.
Potentials are arbitrary to some extent. To fix this arbitrariness a gauge fixing has to be done by hand, which is also an arbitrary condition (Lorentz gauge, Coulomb gauge ...).
The point, I think, is that because potentials are scalar functions, it is much easier to handle them compared to vector quantities (E, B, ...). This is a question of the procedure of calculation to be followed.
Good point Biswajoy!
My concern is about the choice of the arbitrary constant and the interpretation of it, since it is arbitrary. Can we trust such an interpretation?
The arbitrariness is like the choice of a referential; the gauge degree of freedom is much like a rotation of the basis vectors. But when it comes to observable quantities, this arbitrariness cancels out. The Bohm-Aharonov effect is the same in every gauge, although it depends on the potential alone, because it is given by the integral along a closed path.
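Spelling that cancellation out for the closed-path integral: under a gauge transformation A -> A + grad chi,
\oint (\mathbf{A} + \nabla\chi)\cdot d\mathbf{l} = \oint \mathbf{A}\cdot d\mathbf{l} + \oint \nabla\chi\cdot d\mathbf{l} = \oint \mathbf{A}\cdot d\mathbf{l},
since the line integral of a gradient around a closed loop vanishes (for a single-valued chi); so the Bohm-Aharonov phase is indeed the same in every gauge.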
@ Claude Messe
Of course, I agree that physical laws should be independent of the choice of reference frames. Your comment on integrals of the potential along a closed path is quite interesting. Potentials are not measurable in classical electrodynamics; only the electric and magnetic fields are measurable quantities. Is there a counter-example to this in classical Physics?
I know that in quantum field theory the vector potential can be identified with the photon field, but let us confine ourselves to classical notions for the moment.
The electric and magnetic fields depend on the inertial referential. And in general relativity, even the force is relative. The only true invariants are integrals over closed surfaces. That is the case for the electric charge, which is equal to the flux of the electric field through a sphere surrounding it.
Thank you very much, Claude. What are other commonly used invariants in general relativity? Can I use the divergence theorem and convert surface integrals to volume integrals of the divergence of the quantity? For example, instead of using the flux (E.n), can I use the volume integral of div.E? Will that also be an invariant in general relativity?
@Biswajoy
Use the differential form version of the Maxwell equations and Stokes theorem. Stokes theorem in form formulation does not even depend on a metric.
\int_{boundary of 3D space time volume V} F = \int_V dF = 0
\int_{boundary of 4D space time volume W} J = \int_{4D space time volume W} dJ = \int_{4D space time volume W} d d*F = 0.
All the typical charge and flux conservation laws follow from these statements.
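The vector-calculus shadow of that d(d...) = 0 statement can even be checked symbolically; a small sketch (assuming sympy.vector; the field names are arbitrary): curl(grad) and div(curl) vanish identically, which is "the boundary of a boundary is zero" in its most familiar guise.

import sympy as sp
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')
phi = sp.Function('phi')(N.x, N.y, N.z)                     # arbitrary scalar field
F = (sp.Function('Fx')(N.x, N.y, N.z) * N.i +
     sp.Function('Fy')(N.x, N.y, N.z) * N.j +
     sp.Function('Fz')(N.x, N.y, N.z) * N.k)                # arbitrary vector field

print(curl(gradient(phi)))      # the zero vector
print(divergence(curl(F)))      # 0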
So, it seems that almost anything that has to do with materials is constrained by the demand for stability, thus leading to second order differential equations to describe its oscillation with finite amplitude.
It is because most of the laws are directly or indirectly related to force, which is defined in terms of a 2nd-order derivative. Other physical quantities are derived from force.
Force can be substituted by curvature in general relativity; thus even if we do not use force, we again use a second-order quantity.
Hi, dear Researchers! The maximum of 2nd order means that any system's evolution can be set by the positions X_i and rates of change dX_i/dt of all its components. One might think that X_i and dX_i/dt are always continuous functions of time. If the equations were of 3rd order, then d^2X_i/dt^2 would also have to be continuous. But that would greatly restrict the behavior of the systems. Indeed, you might create such a jail-like world on your computer. Take Newton's laws and add the 3rd-order term: F = ma + epsilon da/dt. Then let the system evolve... Newton's first law does hold: you might assume that if F = 0, then the physical solution has a = 0, and the other solutions are unphysical. Bye!
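Dmitri's toy law is easy to play with symbolically; a sketch (assuming sympy) of exactly that game: with F = 0 the general solution carries an extra constant tied to the initial acceleration, and unless that constant is set to zero by hand the particle changes its state of motion with no force acting.

import sympy as sp

t = sp.symbols('t')
m, eps = sp.symbols('m epsilon', positive=True)
x = sp.Function('x')

# F = m*a + epsilon*da/dt with F = 0
ode = sp.Eq(m * x(t).diff(t, 2) + eps * x(t).diff(t, 3), 0)
print(sp.dsolve(ode, x(t)))
# x(t) = C1 + C2*t + C3*exp(-m*t/epsilon): the C3 branch is the 'unphysical' extra solution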
Thus, dear Dmitri (we have the same name!), the fact that F = ma holds means that our world is basically a linear one, just as Euclid said 2300 years ago:
http://en.wikipedia.org/wiki/Euclid
What do you think?
Hi! Maybe it is because we are playing with Physics only in Inertial Reference Frames. The latter are characterized by velocity and position. So all known Physics is characterized by two kinds of things: position and its rate of change. Bye!
Hi, Demetris! Thanks for the conversation. Partial Differential Equations in general form can hardly be solved using linearization/perturbation methods. The mind of the Creator is often not like a straight line. Bye!
Dmitri, the concept of 'linear' here mainly means the superposition property: the differential operator in general is a linear one, hence the many applications of Fourier Analysis. It does not refer to the way we can solve the equations.
I would add this: most higher-order derivatives contribute so little to the aggregate that one need only consider at most second-order effects.
That does not mean those higher order effects are not there and very real and very active nonetheless.
The current discussion has left me with 'gaps', so I started the same thread on LinkedIn:
https://www.linkedin.com/groups/Why-are-most-fundamental-laws-3091009.S.5942900793303670788
There Gilbert Rooke ( http://uk.linkedin.com/in/garooke ) wrote about the process of taking the divergence, gradient, ... of tensors, and gave me the idea to write the following:
Let's start from the very basics:
(1) Suppose that we want to investigate the spatial evolution of a scalar φ(x,y,z). If we take its gradient, ∇φ = grad φ, then we obtain a vector field. Can we add a vector field to a scalar? Of course not. But after taking the divergence of the gradient we end up again with a scalar field, since div.grad φ = ∇.∇φ is a scalar. Now we can make, for example, a 'linear combination' of the form ∇.∇φ + k²φ = 0, i.e. ∇²φ + k²φ = 0, and ... we have a law!
(2) Now let's suppose that we have a vector field F(x,y,z) and we want a 'law'. If we take the divergence div F we find a scalar field. If we further take the gradient of the divergence we end up with a vector field, so we can do the same 'job' one more time and say that ∇(∇.F) + (...)F = 0 is another 'law'.
The overall conclusion is that our approach is merely a 'closed-form equation way', i.e. we always try to find linear relations between similar quantities, a scalar with a scalar after taking proper derivatives, and so on. We could ask whether the 'equation-like' project has reached its limits of predictability. Stephen Wolfram argued that we can generate complexity by using cellular automata: http://www.theeway.com/skepticc/wolfram.htm .
I think that we first have to define properly the limits of equation-like applicability. What do you say?
And what if we forget that maths is just another tool and insist on finding the equation of everything? Or, in other words, what if such a problem has no solution? Should we continue using equations, or should we try to find another way in order to reach a higher level of knowledge?