E. Schroedinger, in his article "AN UNDULATORY THEORY OF THE MECHANICS
OF ATOMS AND MOLECULES", see here:
http://prola.aps.org/pdf/PR/v28/i6/p1049_1
writes about geometrical optics (page 1055):
"The "rays" (stream lines of the flow of energy) would no longer be rectilinear
and would influence one another in a most curious way, in full contradiction with the most fundamental laws of geometrical optics."
Is the modern billiard-particle physics (particles as rays) the analogue of geometric optics, and is this the reason why it cannot explain many phenomena?
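For reference, the optics analogy can be made precise with the standard WKB substitution (a textbook sketch in modern notation, not taken from Schroedinger's paper): writing psi = A e^{iW/hbar} in the Schrodinger equation and expanding in powers of hbar recovers the ray picture at lowest order.

```latex
% Write \psi = A\,e^{iW/\hbar} and insert it into
%   i\hbar\,\partial_t \psi = -\tfrac{\hbar^2}{2m}\nabla^2\psi + V\psi ,
% then collect powers of \hbar:
\[
  O(\hbar^0):\quad \frac{\partial W}{\partial t}
      + \frac{(\nabla W)^2}{2m} + V = 0
      \qquad\text{(Hamilton--Jacobi: the classical rays)}
\]
\[
  O(\hbar^1):\quad \frac{\partial A^2}{\partial t}
      + \nabla\!\cdot\!\Bigl(A^2\,\frac{\nabla W}{m}\Bigr) = 0
      \qquad\text{(conservation of the ray flux)}
\]
```

The ray (geometric-optics) picture is exactly the truncation at O(hbar^0); the "curious mutual influence" of rays Schroedinger mentions lives in the neglected higher-order terms.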
For a brief description of Hamiltonian mechanics, look here:
http://www.atm.ox.ac.uk/user/read/mechanics/LA-notes.pdf
It is very interesting to notice that Schroedinger actually starts from the demand of a stationary Hamiltonian action W and then proposes a test solution:
W = -Et + S(x,y,z)
where E is a constant. Then he continues with geometric arguments about the surfaces of equal action, and finally he constructs his famous equation after demanding:
1) a sinusoidal wave function describing W, the famous psi function;
2) constants chosen so as to satisfy the already established experimental fact E = h*nu.
So the Schroedinger equation was born from the demand of a sinusoidal solution. That is the reason why it is of first order with respect to time.
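In modern notation, the chain of demands described above can be sketched as follows (a paraphrase of the steps, not a verbatim quote of the 1926 paper; K is Schroedinger's initially undetermined constant):

```latex
% 1. Stationary action ansatz:      W = -Et + S(x,y,z)
% 2. Hamilton--Jacobi equation:     (\nabla S)^2 = 2m\,(E - V)
% 3. Sinusoidal demand:             \psi = \sin(W/K)
%    The time dependence \sin(-Et/K + \dots) has frequency
%    \nu = E/(2\pi K), so demanding E = h\nu forces K = h/2\pi = \hbar.
% 4. Eliminating S yields the time-independent wave equation:
\[
  \nabla^2\psi + \frac{2m}{\hbar^2}\,(E - V)\,\psi = 0
  \quad\Longleftrightarrow\quad
  \nabla^2\psi + \frac{8\pi^2 m}{h^2}\,(E - V)\,\psi = 0 .
\]
```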
A very short summary is in the attached file.
After this short introduction the question can be refined:
"Since the founder of quantum mechanics started from the principle of least action and adopted a sinusoidal test solution in order to build the new theory, why should we depart from that and adopt genuine point particles at all? Is that 'particle representation' like using the rays of geometric optics to explain phenomena at distances similar to the wavelength of the light?"
Dear Professor Demetris Christopoulos
Really, it is a great and amazing task. Please send me the given link http://www.atm.ox.ac.uk/user/read/mechanics/LA-notes.pdf
I want to read it now; although I am able to download it from our Department server, right now I am at home.
Demetris
This accords with my thinking, and I will also go on to explain "why should we go on and adopt genuine point particles instead?"
Schrodinger started with the insight that Hamiltonian mechanics was the framework Nature used, and then formulated specific results within it that were consistent with observations that otherwise could not be explained.
So it must be that Hamiltonian mechanics itself is a deep physical result, and this was his great insight - the rest was just padding steps to make everything match up conceptually with real observations of time and space. What follows is the way I believe he did it:
There is an implicit (unstated and often unrecognized) assumption made in the use of Hamiltonian mechanics to derive classical solutions - it assumes the degrees of freedom represented by the measuring device do not influence the system being measured. But this assumption lies nowhere in the derivation of Hamiltonian (or Lagrangian, for that matter) mechanics. It was an extra, implicitly applied to make sense of classical solutions!
But when you make this assumption while applying this mechanics to quantum observations, it does not give the right results! Any normal mortal would give up at this point and conclude Hamiltonian mechanics was just classical mechanics in disguise. Schrodinger said instead: No, if we get rid of this external assumption, Hamiltonian mechanics works, and here is my proof. QED. (what a great pun!)
So, when you remove the assumption of measurement independence (which is NOT part of the derivation of Hamiltonian mechanics), and add the padding, you get Quantum Mechanics.
The conclusion: QM is just plain old Hamiltonian mechanics observing itself from the inside, and lo and behold it has to spit out probabilities to describe itself. Wow, who woulda thunk that! - Schrodinger did! And this is the Copenhagen interpretation.
Now, why bother to look at this any other way?
It is my view on unification of physics that there is more than one way to do this padding step. It needs to be generalized from the specific case that Schrodinger chose to understand electrodynamics, to ALSO make sense of standard particle model, cosmology, and GR. But the Hamiltonian mechanics part will remain unchanged - that is fundamental, complete, and correct.
So, how to generalize the padding:
a) Remove the rest mass term from the constraint supplied to QM.
b) Note that dx*dv in the uncertainty principle does not contain rest mass.
c) Note that dx*dx - dt*dt = 0 does not include rest mass in special relativity.
d) Complete the uncertainty principle from an inequality in its current form to an equality relationship through the use of special relativity axioms. Arggggh. Heresy, you say - the justification is a story for another day.
But to make this generalization work you also have to re-interpret the state variables in Hamiltonian phase space as point-like massless particles, all identical, and whose only property is relative location!
I am almost certain now (not quite proven yet!) that the generalized form that results is what I call "The Equation of Natural State" in Quirk theory.
When this new general form is fed into Hamiltonian mechanics, turn the crank, and rest mass and GR should pop out, along with all the familiar case studies of QM. I am also pretty sure something else will pop out - something that turns the inequality of Entropy (that other great and lonely inequality in physics!) into an equality relationship - and my best guess is it will be the extra expansion of the system (Universe) that balances the increase in Entropy.
Andrew, except for the assumption that you already mention, the sinusoidal assumption is a very tight one to start with. If Schroedinger had decided NOT to use sin(W/K) as a test function (see eq. 9 on page 1054), then what could he have managed to build? So, there exist some hidden assumptions at every initial stage of a new theory that, after decades of debates about minor topics, still remain hidden!
The irony in QM and QFT is that Schroedinger built QM in order to explain phenomena at small distances (comparable to the wavelengths) by a wave-like theory, and after so many years we still have the debate about the double slit experiment! (I.e., we still treat light as we did when we were using geometric optics!)
[I am waiting to see your work..]
Demetris,
Ah yes. You refer, I think, to the problem of convergence versus divergence of solutions justifying his choice of sine - his choice denied divergent solutions, which he did not feel could be handled by the mathematics. It is deeply related to the Nyquist frequency.
This is why I consider Schrodinger's work needs to be generalized from his specific case - the divergent solutions, I think, are in fact a real part of nature - and are needed to explain the double slit result (I shall show you how the exchange of a photon can be interpreted as a divergent solution). But first I explain how the universe does not blow itself to kingdom come in the face of general exponential assumptions and divergent solutions, even though field theory maths does blow itself apart!
This problem is a prime example of why in a generalized (unified) case you must dump the quantum field numbers of spin, charge, etc., and revert to point-like Hamiltonian state coordinates, and then somehow rebuild those same quantum numbers from coordinate states of the more general equation - I show at the end exactly how this is done!
Why do this at all? Because the divergent solutions Schrodinger was so worried about need to make sense in the mathematics - and the change of state variable to point particles makes sense of this divergence. Here is how it works:
Consider two groups of three point-like particles in Hamiltonian space. Now one of those points goes on a divergent solution - by correlation (entanglement!) one of the points in the other group also heads off on a divergent solution. Guess what! They continue on a divergent solution right until they swap groups, where they revert back to elliptical behavior. Go back to sleep folks, no apocalypse here! The elliptical and divergent solutions are both valid parts of a more general harmonic (exponential) equation - the equation of natural state! (I hope)
So we now still have two groups of three particles, and a divergent solution has been achieved without going to infinity - in this new paradigm of point-like mechanics, we call such a divergent solution an exchange of a photon! Ever wondered why nobody has written down the field theory solution to Dirac's photon equation? Now you know - in field theory one does not exist, because it is a divergent solution. In point mechanics it can and does exist - no problems at all. And we call the non-divergent elliptical solutions of Schrodinger fame the wave functions associated with rest mass!
So now to answer my final point: how do the quantum numbers like spin arise from Hamiltonian point coordinates?
Consider an electron made from three such points (I am pretty sure it is! And a neutrino from two!):
Any two points have an angular momentum relative to their normal axis with respect to the third point - so there are three independent spin angular momentum eigenstates in the one object, just like quantum spin in an electron! More importantly, none of these states is related or aligned to real spatial coordinates; they are all internally referenced only to themselves - not at all like the classical spin of a rigid body! - and indeterminate by any means until measured.
Two electrons of three points each swap points by a divergent solution, and we call this an exchange of a photon (a measurement!). This of course means the internal spin states of the two electrons completely determine the photon properties (like polarization).
But the photon itself can only contain information from one of the three internal spin states in each electron, so this is all we can measure! And we call this a collapse or decoherence. Why only one? Because we measure it with a photon, which only uses one of three points in each electron to do its job!
And the swapping of the points puts the source electron in a different spin state (the measurement affects the system). All is clear. All works as expected in the quantum world!
The neutrino is interesting: consisting of two points, its spin is not referenced internally but rather to anything it comes across! It comes in four forms - the electron neutrino references its spin to one spatial axis, the muon neutrino to two, the tau neutrino to three, and the sterile neutrino to none. The last is completely undetectable by any means (probably - it might just be detected by small anomalies in the standard model) - but nonetheless I maintain its existence - it must exist if point mechanics is right.
Andrew, when I started this topic my concern was about the wave nature of quantum mechanics and how this origin has been totally eliminated with the adoption of QED and QFT, thus leading to billiard-particle physics (BP), see another topic. But, while reading the derivation of the famous Schroedinger equation, I realised that there was a more interesting thing: that of the hidden assumption that every wave has to be of a sinusoidal form! I think it is a strong assumption, and if Schroedinger had relaxed it, QM could probably have been formulated in a more general form!
Demetris,
I agree with you - but I think the point I was making with respect to your question was that while the generalization can be made it is not compatible with field theory and so cannot be done in field theory. This is because the generalization requires a breaking of the axiom of partial differential geometry that there exists a defined differential at every location in the field.
The only way I have found to make the generalization (which I argue HAS to be made to unify physics) is to return to points.
However, it is my (unproven) view that a billiard-type system, where influence is passed locally from one particle to the next, is not a viable theory, because the Michelson-Morley experiment and its derivatives showed us that light does not propagate in a medium, and this type of concept is by default... a medium.
Note the point approach I am taking is not billiard particle physics. In my approach each particle follows a probability geodesic in phase space defined by the (extended) uncertainty and this geodesic depends simultaneously on all other point locations regardless of their space-time location.
Andrew, do you remember the Higgs boson? Is it present everywhere or not? Otherwise, how could it give mass to every potential particle of the 'vacuum'? So, the Higgs boson is a medium like the ('cursed') Aether!
Quantum Gravity and Unified Theories
This division is concerned with the unification of general relativity and quantum mechanics into a theory of quantum gravity, which should also provide a consistent framework for incorporating the other fundamental forces in nature.
http://www.aei.mpg.de/18228/03_Quantum_Gravity_and_Unified_Theories
The canonical approaches to quantum gravity emphasize the geometrical aspects and appear well suited to deal with unsolved conceptual issues of quantum gravity, such as e.g. the problem of time or the interpretation of the wave function of the universe. Important new insights have been gained over the past decade in the framework of loop quantum gravity, whose modern variants (spin foam gravity and group field theory) are among the division's main research directions. This approach, which complements and extends the old geometrodynamics approach, employs a non-perturbative and background independent framework allowing (at least in principle) to describe the fluctuations of geometry itself, and leading to a discrete structure at the Planck scale. On this basis, it is now possible to study the full quantum dynamics of gravity and space-time itself. Most recently, these concepts have been successfully applied to the study of cosmological and black hole singularities, where classical general relativity breaks down. In this way it may become possible to understand how the Big Bang singularity of classical relativity is “dissolved” in quantum cosmology.
@Mohammad, when we speak about 'quantum something' we have to remember that this is no more than a suitable solution of a boundary value problem: think of a string anchored at both ends: you will find lambda_{n} by solving the equation of motion. So, for quantum gravity, what are the boundary conditions?
Demetris,
I believe the Higgs Boson is an unstable particle in the standard model whose energy represents a point where a symmetry break in the group theory of the standard model occurs. This break probably relates to gravity.
Do I believe the Higgs boson confers mass to particles? In a word: no.
I do know for certain that it is not correct to interpret the uncertainty principle as causing the energy fluctuations we observe in the vacuum - it was never intended to be applied to space-time - it was derived to show the relationship between particles!
So what is mass? As Einstein said, it is bent space! The real question is what is space. I now have a firm but unproven view on this as well. Space is the gap between particles!
I am moderately hopeful that I will find time off work over the next two months to prove all this - it is very close one way or the other.
Probably mass is bent space at a macroscopic level, but at the micro level it is not!
That's the reason why GR and QFT cannot be complementary to each other.
Mass is whatever can be described with the classical centre-of-mass concept: either a point mass (if you love billiards), or a simple wave, or a wave packet (the traditional QM interpretation).
Demetris,
I disagree - mass is bent space at the microscopic level as well. We must first come to common understanding of what the space is that I am bending. It is not a Cartesian geometric manifold that I am bending - it can't be, because this is an ether. GR did a brilliant job of disguising this ether with relativity axioms, but somehow you just cannot disguise that last little bit of absoluteness. I now explain where that absoluteness is in current theory.
Absoluteness still sits on a cornerstone of physics in such plain sight that nobody even notices it. Even though physical law applies equally to all reference frames, each frame in relative motion must be treated as separate with a different value of total system energy associated with it. You shall not move frames mid calculation without first translating to a new system. But we are not in reality in a new system when we change reference frames even though we mathematically have to say we are. Physically, we are still in the same system (universe) as we always were.
Because of this mathematical requirement to translate to a new system when shifting frames, there must also exist one frame with a minimum system energy - the center of momentum reference frame. So let's stick to that and all will be good. But that one minimum energy frame is absolute with respect to all others. Anyone, anywhere, anytime can find the velocity vector that makes the system energy a minimum. Such a vector is surely not supposed to exist in a truly relative universe, so it must be that the principle of general relativity is not a complete definition of relativity.
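The claim that one frame minimizes the total system energy is easy to check numerically for free relativistic particles (a toy sketch of my own, with c = 1 and illustrative masses and velocities, not part of the original argument): boosting by velocity v rescales the total energy to E'(v) = gamma(v)(E - vP), which is minimized exactly at the center-of-momentum velocity v = P/E.

```python
import numpy as np

# Two free particles in 1+1 dimensions, units with c = 1 (illustrative values).
m = np.array([1.0, 2.0])              # rest masses
u = np.array([0.6, -0.3])             # lab-frame velocities
gamma = 1.0 / np.sqrt(1.0 - u**2)

E = np.sum(gamma * m)                 # total energy in the lab frame
P = np.sum(gamma * m * u)             # total momentum in the lab frame

# Total energy seen from a frame boosted to velocity v:
#   E'(v) = gamma(v) * (E - v * P)
v = np.linspace(-0.99, 0.99, 20001)
E_boosted = (E - v * P) / np.sqrt(1.0 - v**2)

v_min = v[np.argmin(E_boosted)]       # the "special" minimum-energy frame
v_com = P / E                         # analytic center-of-momentum velocity
# v_min agrees with v_com to grid precision, and the minimum energy
# equals the invariant mass sqrt(E**2 - P**2).
```

Setting dE'/dv = 0 gives vE = P analytically, so the minimum-energy frame is precisely the center-of-momentum frame, which is the absolute vector the post refers to.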
We know the ether does not exist, so why do we persist in using an ether as a starting point, only to use it to develop theories specifically designed to get rid of all trace of it! And then we expect mass to move around in it, after we have got rid of it. Let us start again, this time, by not starting with an ether.
I will now make a proven statement. There exists a type of spatial geometry where all reference frames within it agree on the total system energy, whether they are inertial or not, and this total system energy is equal to the energy of the center of momentum reference frame in Cartesian physics. No special minimum energy frame exists internal to this space - all reference frames are of equal system energy.
Now is that not interesting? I work with this fantastic structure every day in my research. You are free to shift reference frames in your calculations with no penalty or incorrectness. And I am right now possibly working toward a proof that when you bend this space microscopically you get potential energy, and when you find minimum energy configurations in this space, you get a standard model particle with rest mass equal to that potential. Of course I might be wrong - it is after all research - in which case a proof won't be forthcoming! But I will let you know if it is!
I have read through this thread only briefly, but I hope to read in more detail soon. However, I have a few points to make at this stage.
Waves and particles... As I think Lawrence Bragg once said `the future is a wave and the past is a particle'.
The point here is that if some measurable property, regarding an arbitrary question about the universe, is considered then the measurement of this property remains only a potential event. It is only speculation to consider such potential events as particles, but for the purposes of mathematics it can sometimes be appropriate to consider a finite set of potential events as point-particles (this may be somewhat in keeping with Andrew's ideas, but here the points are `times').
However, generally that which has not happened must be regarded in the context of probability - certainty of an outcome is also accounted for in this case.
-the future is a wave.
The past:
Data about a general quantum system, i.e. answers to general questions about the universe subject to the incompatibility conditions of quantum logic, is only realisable as the results of a measurement process (the concept of measurement has deep levels of abstraction outside of the physics laboratory).
We consider a system that we understand, for whatever reason, may be coupled to an instrument that we can observe. The instrument does something and we can say `the instrument is doing this because there is something there, doing something to the instrument'. This has been formalized mathematically such that measurements of a supposed system observable are realised as the results of disturbing this observable. In this case the Heisenberg uncertainty appears as a relationship between the error in the measurement of the `true' value of the observable property and the force imposed on the system by the measurement process.
Anyway, the data, say positions, may be understood as the drift of this stochastic measurement process. And thus the notion of a particle is realised as the thing that the apparatus is locating.
-the past is this output of the measurement process. It is not statistical, but actual. Its future is statistical.
But note: the apparatus is as much a part of the reality of this particle trajectory as the system with which this particle is identified.
In this way particles may be understood as a consequence of observation.
This modern quantum theory is related to atomic physics, radiation
theory and quantum optics. The Q theory is developed in terms of Hilbert space operators. The quantum mechanics of simple systems, including the harmonic oscillator, spin, and the one-electron atoms, are reviewed.
Also, methods of calculation useful in modern quantum optics are discussed. These include manipulation of coherent states, the Bloch sphere representation, and conventional perturbation theory.
A thought on the sinusoidal nature of the Schrodinger equation:
One may dilate (in a purely mathematical manner - no physical desires) the Schrodinger equation. This comes from studying `Newton-Leibniz algebra' (which is a nilpotent algebra having a single basis element: dt).
However, returning to physics, the dilated dynamics is very natural. It corresponds to a sequential measurement (or observation) of a system, with respect to an apparatus with a minimum of 2 degrees of freedom. Here, the system simply corresponds to a collection of weighted possible answers to an arbitrary question. `What are you thinking?' for example.
Generally a measurement yields an answer, but in this case we simply receive an outcome that confirms `you are thinking.' To obtain information about `what you are thinking' requires additional degrees of freedom of the measurement apparatus which will ultimately result in a Lindblad dynamics describing the evolution of the distribution of the possible `thoughts you might be thinking'. Then the measurement dynamics is generally described by non-linear filtering.
So, we may consider a question that is identifiable with a system. We assume the C*-algebra formulation of probability theory. Then we set up, mathematically, this minimal apparatus for measuring the system. This apparatus tells us nothing about the distribution of system properties, BUT we do discover something nice which may shed light on the sinusoidal nature of the Schrodinger equation...
What happens now is that we consider a sequential measurement/observation of the system. As I have mentioned, this observation simply indicates that the system is there. The next task is to construct an operator that describes the interaction of the apparatus with this system. There are some mathematical requirements for this... these requirements may of course be investigated, but the standard requirement for interaction is unitarity. This unitarity is part of the realisation of the sinusoidal nature of the Schrodinger equation, and without it we arrive at a slightly more general equation dV=LVdt. However, the exponential form of the solution still appears, and this may be regarded as a condition on the measurement process:
The order of sequential observations is preserved.
A first order differential dV=Kdt appears 'in the ignorance of the measurement process', i.e. when the apparatus space is traced over. It is exponential when the points of observation are randomised and the interactions (between system and apparatus) are independent of past and future, dV=LVdt. It corresponds to the Schrodinger equation when the interactions are regarded as unitary (with respect to the apparatus involution), dV=-iHVdt.
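A toy numerical illustration of the last distinction (my own sketch; the 2x2 generators H and L below are illustrative, not taken from the post): the unitary case dV = -iHVdt with Hermitian H preserves the norm of states, so it can only rotate phases, which is the sinusoidal structure, whereas a generic generator L in dV = LVdt produces exponential growth or decay of the norm.

```python
import numpy as np

# A Hermitian "Hamiltonian" on a 2-state system (values illustrative).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

t = 2.0
# Solve dV = -i H V dt  =>  V(t) = exp(-i H t), via eigendecomposition.
w, U = np.linalg.eigh(H)                      # H = U diag(w) U^dagger
V = U @ np.diag(np.exp(-1j * w * t)) @ U.conj().T

# Unitarity check: V preserves the norm of any state vector.
psi0 = np.array([1.0, 1.0]) / np.sqrt(2.0)
norm_t = np.linalg.norm(V @ psi0)             # stays 1 for every t

# Contrast: a generator that is not anti-Hermitian gives exponential
# growth/decay of the norm instead of pure oscillation.
L = np.array([[0.2, 0.0],
              [0.0, -0.3]])
VL = np.diag(np.exp(np.diag(L) * t))          # exp(L t) for diagonal L
norm_L = np.linalg.norm(VL @ psi0)            # drifts away from 1
```

So the "sinusoidal" character of the Schrodinger equation is exactly the statement that the propagator is a complex exponential of a Hermitian operator.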
Note: In this dilated formalism an observation maps a state of the system to its derivative. A second observation maps this derivative to its second derivative, and so on.
Matthew,
I am very interested in your post, but am struggling a little with the description.
I think you are saying the unitarity criterion leads to the sinusoidal? Is this a single global unitarity over all degrees of freedom, or local to each degree of freedom in the case of slightly more complicated scenarios? I.e., is each Sum(p(i))=1 so Sum(P)=N, or is Sum(p(i...N))=1?
I am thinking here in terms of the following concept for a photon. An electron in a hydrogen atom has several degrees of freedom, and the unitarity criterion leads to a sinusoidal solution (like a 1s orbital, for example). A second atom can be treated in the same way, but presumably in a two-atom system the unitarity criterion now has to cover both atoms? And each atom in this system need no longer be a unitary phenomenon in and of itself. So all you get is a sinusoidal solution for the total system.
But can there still exist an exchange of a photon interpreted as a swapping of a degree of freedom in each electron? In other words, not a sinusoidal solution, but two symmetric exponential divergent solutions that swap a degree of freedom each, leaving two hydrogen atoms in different states, but still with the required overall sinusoidal solution. This is not well explained, but I hope you get the point.
Dear Andrew,
So, atomic electrons and photons..
Let's, for example, first consider a system wave-function describing a distribution of electron states of a single atom. The atomic property is considered in a laboratory with respect to a measurement apparatus that is a laser. The experiment begins. An initial state of the laser is considered - a coherent state. If the atom does not respond to this incoming information then nothing happens. Laser frequency is changed, say. The incoming information is scattered. The existence of an object is inferred.
The atom is considered to spontaneously interact with the laser. This is regarded as the realization of a photon: a specific quantity of energy identified with the interaction of the laser with the atom. If it is localizable in time and space, it is because the interaction is. The mechanics is that these interactions of the atom with the laser are given by operators (unitary, otherwise renormalization is required), and these operators act at specific random instants - this is the spontaneity of the interaction. This may be translated as: the photons in the laser are separable (the coherence), and interaction operators are acting on the atom and on 1 photon. On the remaining photons this action is given by the identity operator.
The output trajectories, which are the scatterings of the laser by the atom, may no longer be regarded as independent photons, because although they have not directly interacted with one another they are all independently entangled with the atom - this is why the atom's existence may be inferred.
The reality is the data collected from the entangled output.
Now, we can trace out the apparatus (laser Fock space) and this will give us a Lindbladian evolution of the atomic electron state - this is a generalization of the Schrodinger evolution which incorporates the degrees of freedom of the photon. If one may consider an apparatus `laser' in which there are simply no transverse degrees of freedom, then any inferred object will evolve according to the Schrodinger equation. The measurement process is the same as that for the photons, only these more primitive apparatus particles have no transverse structure. That said, these apparatus particles are massless, they are independent from one another prior to interaction, and the interactions are given by unitary operators acting only on the atom and one such particle. This gives rise to the sinusoidal structure exp{iEt}.
Two atoms:
Indeed, the sinusoidal nature is of the joint system. However, one can argue as follows. An interaction is considered as that between the joint system and a single photon. The wave-function of the joint system describes the distribution over electron states of the joint (could be separable or entangled) system. The two atoms may generally be regarded as coupled for theoretical purposes. Thus an interaction dynamics may be established in which transitions in one atom correspond to the complementary transitions in the other - assuming they are neither absorbing nor radiating energy into their environment.
Let's now suppose that an apparatus/environment photon interacts with this two atom system. Let's suppose that it is absorbed. One might suppose that it is absorbed by one of the atoms in particular. On the other hand one might simply say that the joint system absorbed the photon. In order to extract more information one must take more measurements... Even if the two atoms are not interacting, the process of measurement can stimulate interaction between the two - scattered information from one atom reaches the other.
The existence of photon exchange between atoms:
In theory, one may consider one atom increasing in energy and the other losing energy in response. Given conservation of energy there is no divergence if the energy eigenvalues are bounded below, but in the absence of additional constraints there is no reason why the two atoms shouldn't gain or lose energy to one another in an overall unitary dynamics. Generally, such coupled electron transition dynamics is best described as a coupled quantum Brownian motion: creation of a state in one atom is coupled to annihilation in the other. I can provide references for this stuff if desired.
However, the reality requires verification of the existence of this system, and so the measurement process returns. And from the outside we can only understand a lump of matter in the context of its response to our questions.
If one is able to `ask' a lump of matter under measurement how it is interacting internally, then questions about internal dynamics may be answered. If one considers a lump to be composed of little bits then this lump may be `opened up' by the apparatus:
i) the lump is composed of bits that may generally be regarded as interacting with one another (including trivial non-interaction).
ii) these interactions may in principle be inferred from a fundamental measurement apparatus with additional degrees of freedom compatible with the state transitions of these bits.
iii) to do this requires that change is recognised within this system that cannot be accounted for by the apparatus.
But there are many problems..
For example, there may be lots of stuff going on inside the matter that is simply incompatible with the chosen measurement process.
Also, one can in pricniple consider space-like force holding the matter together. In the absence of observation is time necessary? Does a theory of force need to be dynamic? This is starting to re-generate questions in my mind.. now i'm thinking out loud. Space-like force - not dynamical - if thought of as dynamical then people have notions of `problems with faster than the speed of light..'
What I'm getting at is a distinction between the dynamical nature of disturbing a force and simply having a static force. The former arises from the necessity of measurement in order to verify a statement about reality. The latter is simply an attempt to infer the nature of the thing being measured - geometry. This replaces the question `why is this matter holding itself together?' with `what is this matter that is holding itself together?'. The latter question addresses the inference of the nature of matter from measurement.
Matthew,
Ok that is much clearer thanks. It is great to find someone willing to take the time to explain the status quo in a (reasonably) straightforward manner.
If you feel up to it, can you comment on whether the following is a valid interpretation of QM:
So now consider a simplified two-hydrogen-atom system (let's just assume the protons are included as a static point potential).
It is considered as a single isolated system, but also one where each atom is the instrument "observing" the other. Now if one electron is in a low energy state, and the other in a high energy state before the system is isolated, then there are two possible phenomenological interpretations of what happens after it is "isolated":
a) A dynamic but effectively instantaneous process of energy transfer from high to low by the exchange of a photon, which of course can then oscillate back the other way, ad infinitum, leading to an average where each atom cannot distinguish its own state of energy as either being high or low - i.e. the system is entangled, but curiously "measured" by itself at the same time and because the oscillation is 'instantaneous" it still counts as a valid interpretation of QM.
b) An equilibrium process where the system is a static mixture of the observable states (high-low or low-high). (Which is sort of the standard interpretation of QM, I believe.)
I don't really see a) as violating any Special Relativity criterion, because from the reference frame of the photon, it can exchange the quantum states of the two electrons in no time at all, since the space-time interval of the photon is zero. It is only from the reference frame of an electron that it takes the time dictated by the separation distance. So the implication in this (perhaps naive) reading is that the speed of light might actually be a property of space.
This raises a curious point: if the spacetime interval of the energy transfer is zero, then all this happens "in the instant" in this framework. Does that mean the exchange process in (a) also has to be instantaneous in phase space, with respect to the evolution of system phase, for this dynamic view to remain a valid QM interpretation? Or could the dynamics in phase space happen at some finite rate with respect to system phase and still be a valid QM interpretation (given it remains instantaneous in spacetime, where our actual human measurements are constrained to be done)?
Finally, is it true that QM has nothing much to say about the photon itself? That is, all it says is that one electron goes from one sinusoidal solution to another, while the other electron does the reverse, and the correlation of the two IS the photon. Does this raise the possibility of a generalization of Schrodinger to non-sinusoidal as well as sinusoidal solutions, where this generalization might have something more to say about the mechanics of what a photon does?
@Matthew, can you provide me with a link for your formalism dV=LVdt? I think I miss something. Thanks a lot.
Andrew, I would be happy to comment on your last post, but I must wait until tomorrow as it will take time and it has been a long day! But I look forward to reading it.
Demetris, have a go at reading this: http://arxiv.org/pdf/1111.7043.pdf
It's a bit heavy, but I have almost finished what I hope will be a physics-friendly version if you have any troubles. And I'm happy to answer any questions as you go through it.
Matthew,
I want to send you a very special thank you. Your unitarity argument with respect to Schrodinger derivations has provided the last piece of a puzzle in a proof I have been looking towards for the last two years.
Some Quantum Mechanical friends of mine have been arguing that the Quirk theory I have been working on cannot possibly be Quantum Mechanical - instead it (apparently) was "obvious" it was rooted in classical probability theory.
Quirk theory makes the bold claim that the only state variable of importance is the relative position difference between point-like fundamental particles. This immediately rang alarm bells with QM people, prompting the "classical probability" criticism. I can now counter this as follows:
First, convert the state variable of relative position to a state variable of probability using the unitarity criterion. This is indeed just classical probability theory. But this is OK, because it is now in a form that can be treated directly using Lucien Hardy's derivation of Quantum Mechanics. He proved that Quantum Mechanics can be derived from 5 axioms, four of which are grounded in classical probability theory. The fifth axiom, which is the key to Quantum Mechanics, relates to the requirement that there exists a continuous and reversible evolution of the probability of state variables.
Since I already know that Quirk space satisfies axiom 5 of Hardy's treatment, and now I know the rest can be treated in classical probability, establishing that Quirk space meets axioms 1 through 4 in classical probability is straightforward. Therefore Quirk space is explicitly Quantum Mechanical in nature, QED.
Since I already know Quirk space also obeys the requirements of special relativity, this greatly increases the possibility that quirk theory represents a significant unified result in physics - and all without mentioning the words "rest mass" (which is a derived result of quirk theory, rather than an input to it).
Not only that, but the unitarity argument explicitly translates the Equation of Natural State in quirk theory on to a unit circle, and the unit root vectors on this circle define the Lie Algebra for Quirk Space, and the Lie group associated with these root vectors therefore should be the Group theory of the standard particle model. I am quite encouraged again after many people have led me to doubt myself!
@Matthew, I read your and Belavkin's work.
1) Why should we introduce another set of operators when we have not even proven the need for the operators already in use? (I refer to the future-past operator.)
2) What is new? The diffusion equation is very well studied anyway, so why accept a quantum diffusion principle?
Demetris,
If I understand it right (and this is a test for me prior to Matthew's actual answer.), it is because the work derives from a different historical and philosophical starting point.
QM derives from the desire to understand particle physics. Matthew and Belavkin's work derives from an attempt to understand the process of measurement.
It has just turned out that both approaches have some deep equivalence going on, but this is hindsight rather than foresight. So it is not really a matter of needing to switch operators. Some people prefer French, some English. Go figure! Your question, I think, is like asking the French president why he does not speak more English!
It is just possible one type of approach may prove easier to generalize than the other. If nothing else it makes you think about the axiomatic underpinnings and hidden assumptions of theory.
English and French indeed. Let's delve into these languages..
Demetris,
These are good questions.
1) Why should we introduce another set of operators when we have not even proven the need for the operators already in use? (I refer to the future-past operator.)
To answer this I will attempt a brief summary of the subject. But first I will answer your second question, and the two answers will mix together.
2) What is new? The diffusion equation is very well studied anyway, so why accept a quantum diffusion principle?
Diffusion has been studied in classical contexts, but not in quantum contexts. How does one describe a quantum diffusion without the appropriate mathematical machinery? Hudson and Parthasarathy constructed the original formalism for quantum stochastic calculus, but what is this? Actually, it is a rigorous analysis of noise in which individual particles of noise can be mathematically formalized. Belavkin was independently involved, having once been the PhD student of Stratonovich; they were working on noisy quantum channels in the 70s.
Quantum stochastic calculus differs from orthodox quantum mechanics because the former has its roots in algebra and probability whilst the latter has its roots in particular areas of physics. What this means is that in the usual (orthodox) theory there are operators which are postulated from physical considerations, and therefore one would hope to prove their existence in some way (presumably by experiment...). On the other hand, quantum stochastic calculus (QSC) begins simply by defining the most general forms of maps on quantum states, and these maps intend to describe open quantum systems (i.e. gaining or losing information).
Note: In quantum probability atomic physics is simply an example, but the theory builds on general axioms of quantum logic: loss of distributivity corresponding to incompatibility: inconsistency no longer implies mutual exclusion. From this one may derive orthoprojector logic in a Hilbert space.
What is interesting is that in QSC particular examples of these general maps on quantum states can be considered. This means that constraints are imposed, for example linear maps, and the constraints leading to such maps may be formalized.
QSC has another interest on the mathematical level. If one builds a Fock space over a general Hilbert space (any number, even infinitely many, of degrees of freedom) of square-integrable functions over the real line, or a subset of this, then linear operators on this Fock space correspond to stochastic processes. This starts to unite general dynamics with algebras familiar to physics. But notice I have said linear here!
[[
Why linear? Best to look at it like this:
We simply have an algebra, and this is how we choose to formalize things. If I take an element of this algebra then linearity doesn't come into it. If I then consider a notion of states on the algebra, then what I am trying to do is take an element of this algebra and squash it into a number. The first place to start is by considering the action of the state on the element of the algebra as linear (the study of linear functionals). This is a mathematical consideration. In Hilbert space (of which Fock space is an example) this corresponds to the linear action of an operator on a `wave-function'. If the wave function is denoted by k and the operator by T then we get Tk. If k is normalized (corresponding to a unital state) but ||T|| is not 1, then in order to interpret Tk statistically we must re-normalise as Tk/||Tk||. Notice that 1/||Tk||=(k*T*Tk)^{-1/2} may be expanded into a non-linear function of k.
One of the things that Belavkin wanted to explain in that paper you read is that the dynamical (filtering) equation for such f=Tk/||Tk|| is non-linear but derived from linear action. Also note in that paper that the apparently linear operators are not really linear either, because they depend on k too!
]]
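The point of the aside can be checked numerically in a few lines. The operator T and the test vectors below are arbitrary choices of mine, purely for illustration:

```python
# The action k -> Tk is linear, but the statistically interpreted state
# k -> Tk/||Tk|| is not.  T and the test vectors are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3))              # some operator with ||T|| != 1

def filtered(k):
    """Re-normalised action Tk/||Tk||: the statistically interpretable state."""
    Tk = T @ k
    return Tk / np.linalg.norm(Tk)

k1 = np.array([1.0, 0.0, 0.0])
k2 = np.array([0.0, 1.0, 0.0])

lhs = T @ (k1 + k2)                      # linearity holds for T itself...
rhs = T @ k1 + T @ k2                    # ...these two agree

f_sum = filtered(k1 + k2)                # ...but after re-normalisation
sum_f = filtered(k1) + filtered(k2)      # the map is no longer additive
```

The re-normalised output `f_sum` always has unit norm, whereas the sum of two separately re-normalised states generally does not, which is the non-linearity derived from linear action.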
So, where are we..
QSC is a general theory of quantum probability. In this framework stochastic dynamics may be analysed in a very beautiful way, in which the individual particles of noise interact with the object whose dynamics is driven by the stochastic process. These interactions are formalized mathematically as maps, which are considered most basically as linear maps (from which non-linearity may be derived).
We are not in fact introducing `another set of operators' but we are simply considering the general form of maps from an information point of view, and then we find that the operators `already in use' may be recovered in certain circumstances. Which is nice.
For example, Lindblad dynamics is immediately obtained in this way from a linear unitary stochastic process in which the noise particles are all independent, and one by one entangling with a quantum object. In this case we don't intend to prove the existence of the operators involved, we simply assert that we now have a rigorous tool which allows us to correctly interpret these symbols.
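As a concrete (and deliberately minimal) sketch of the Lindblad dynamics just mentioned: a two-level atom whose lowering operator acts as the single Lindblad channel decays spontaneously. The rate gamma, the step size, and the first-order Euler scheme are my illustrative choices, not anything from the discussion above:

```python
# Lindblad master equation for a decaying two-level atom:
#   d(rho)/dt = L rho L* - (1/2){L*L, rho},  with L the lowering operator.
# gamma, dt and the Euler integration are illustrative assumptions.
import numpy as np

gamma, dt, steps = 1.0, 1e-4, 20000      # decay rate, time step, step count
L = np.sqrt(gamma) * np.array([[0.0, 1.0],
                               [0.0, 0.0]], dtype=complex)  # |g><e|
rho = np.array([[0.0, 0.0],
                [0.0, 1.0]], dtype=complex)                 # start excited

for _ in range(steps):
    D = (L @ rho @ L.conj().T
         - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    rho = rho + dt * D                   # the trace is preserved at every step

# The excited-state population rho[1, 1] decays as exp(-gamma t); here the
# total evolved time is t = dt * steps = 2.
```

Trace preservation falls out of the Lindblad form itself (the gain and loss terms cancel under the trace), which is the sense in which the formalism "correctly interprets these symbols".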
Diffusion equations are well studied in classical probability, but quantum Brownian motion is, at its heart, an algebraic formulation of diffusion. It handles the concepts much more naturally and provides more rigour to study this stuff, particularly in the context of quantum theory. When probability is considered algebraically it becomes a lot more structural. QSC, in a basic manner, is the Hilbert space formalism of stochastic processes. However, like quantum mechanics, things appear in the Hilbert space formalism that are simply not in the classical stochastic theory.
In light of quantum stochastic processes it seems that the diffusion equation was not well studied enough! (See the notes on the quantum logic axioms that I made a few paragraphs up.) But I think the main thing to note about quantum diffusion is the appearance of non-commuting diffusions.
I mentioned in the other thread that we have dw and df, respectively corresponding to the increments of error in the apparatus measurement and of force imposed by the apparatus measurement. In fact [df,dw]=2ihdt (h is just Planck's constant). More generally we consider creation and annihilation increments da* and da, and these satisfy the commutation relation [da,da*]=dt. At the end of the day this theory is all about the study of time in a rigorous algebraic setting. Recall that in usual quantum field theory we have [a,a*]=1; in QSC we have [a,a*]=t. It's all about evolution.
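To make the commutators above explicit: one consistent choice of quadratures (my normalisation, chosen only so that the commutator quoted above comes out, and not necessarily Belavkin's convention) combined with the Hudson-Parthasarathy Itô table gives

```latex
% One consistent convention (an assumption on my part):
% dw: measurement-error quadrature, df: force quadrature.
\begin{align*}
  dw_t &= da_t + da_t^{*}, &
  df_t &= i\hbar\,\bigl(da_t - da_t^{*}\bigr),\\
  da_t\,da_t^{*} &= dt, &
  da_t^{*}\,da_t &= da_t\,da_t = da_t^{*}\,da_t^{*} = 0,\\
  [da_t,\,da_t^{*}] &= dt, &
  [df_t,\,dw_t] &= 2i\hbar\,dt.
\end{align*}
```

The last line follows directly from the table: $df_t\,dw_t = i\hbar\,da_t\,da_t^{*} = i\hbar\,dt$ while $dw_t\,df_t = -i\hbar\,da_t\,da_t^{*} = -i\hbar\,dt$.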
Andrew,
I am glad something I said shed light on a problem you had. Time to read your posts!
Andrew,
A few notes:
when facing theories of physics there are very subtle but important things that must be considered:
i) If one speaks of such things as `isolating a system' then it is important to start developing some mathematical formulation of this isolation process. That generally goes for everything in a theory. Nothing must be neglected, attention to detail..
ii) If two electrons are considered to observe each other then how is this being done?
Indeed you touched upon what I am about to write. Either this is done by virtue of photon exchange, based on the conditioning of our knowledge of matter from experiment, or the electron transitions are simply coupled directly - a bit like an instantaneous photon, but direct coupling does not get involved with questions about space. If the two electrons are considered separate then this becomes a little more debatable.
iii) What is an electron? Now we are entering a delicate area. Take the double-slit experiment. The electron is a wave. However, when there is spontaneous interaction between the electron field and the apparatus, something happens. There is an indication that the electron becomes localized at one or other of the slits. Notice that the realization of the electron as a particle is not separable from the apparatus which measured it...
In the context of atoms it may be appropriate to make statements about localized electron orbital states, but it may not be appropriate to speak of electron rest-frames, because speculation becomes unavoidable. On the one hand we can consider an underlying structure in the world that we are trying to measure in order to verify. But on the other hand we simply take measurements and infer an underlying structure based on the observations that we make. In the latter case we can then consider our inferred structure and use it to make predictions about future observations. Of all the things I've said in the last few days that may be the most important. That is the general measurement approach to physics. The whole theory is a living thing that continues to build the world. Other theories simply try to explain it all in one go.
The same thing applies to photons as to electrons. What is a photon? All evidence of a photon as a particle is in the context of its interactions. Therefore it should not be considered outside of this context, in general. I would say that a photon is an interaction point in an electromagnetic field, and a quantum mechanical phenomenon. Having an interaction point in an electromagnetic field suggests that there is something there - this is the inference. This thing may generally be assumed to be quantum in the sense that the laws of quantum probability are a generalization of those of classical probability. We don't know exactly what is there so probability must be used, but our idea of what is there is developed by continually making these observations - which may be induced by particular experiments.
A mixture, using your two electron orbital states as an example, is simply a weighted distribution of the possibilities that may be actualized by measurement. The measurement selects just one of these possibilities - this is essentially the non-demolition principle. Selecting just one state of the mixture requires re-normalization, as the state is weighted in the context of the mixture. This is where non-linearity comes from.
However, this mixture is only real `in the ignorance of the measurement process'. This means that generally you have the thing being measured (two electron orbitals) and the apparatus. Maps are considered which describe a general form of interaction (which may be constrained under particular conditions), and the evolution arising on the thing being measured may be considered by tracing out the apparatus space - that is, simply obtaining the marginal state of the measured thing. That is what `the ignorance of the measurement process' means. So the mixture appears as the marginal state of the measured object.
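The "tracing out" just described can be made concrete in a few lines. The 50/50 entangled state below is an illustrative stand-in of mine for the orbital-apparatus correlation:

```python
# Marginal state of a measured object: take an entangled object-apparatus
# pure state, trace out the apparatus, and the object is left in a mixture.
import numpy as np

# Object basis {|low>, |high>}, apparatus pointer basis {|L>, |H>}.
# Joint pure state (|low,L> + |high,H>)/sqrt(2); product index = 2*obj + app.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)         # |low,L> and |high,H> components
rho = np.outer(psi, psi.conj())          # pure joint state: tr(rho^2) = 1

# Partial trace over the apparatus: the marginal state of the object.
rho_obj = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

# rho_obj = diag(1/2, 1/2): a proper mixture of the two possibilities,
# even though the joint object-apparatus state is pure.
```

The joint state has purity 1 while the marginal has purity 1/2: the mixture lives entirely in the ignorance of the apparatus correlations, exactly as described above.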
One final thing. Many paradoxes are resolved by simply working in a probabilistic framework/language, and probability itself should be realized as the nature of the universe as far as our attempts to understand it go. If it is regarded simply as a mathematical tool, then it will appear unsatisfactory in theories of physics.
Matthew,
I think a lot of us are seeing a convergence of information theory and physical theory. The last word has yet to be said on the relationship between signal, noise, the Nyquist frequency, information content, likelihood, entropy and so on. All these terms intersect but are treated in different frameworks in maths - we need a common framework, a great conservation law if you will for information, and when we find it, I very much suspect it will also be the law by which the universe works.
The best verbal summary I can make of where I think we are headed is: "The Universe is eternal, because the thing and nothing are mutually defining, and the Universe is what it is because it cannot be any other way."
You might be surprised to hear this from an engineer whose career of 30 years has been grounded in the practicality of building things that work! Ten years ago I set myself the goal of understanding in detail the issues around the unification of physics - and read very widely, and as deep as I needed. Experts in a single aspect of physics continually think I misunderstand them, when it is just the Tower of Babel at work - I don't have the language to express things the way they expect, because I would have to learn 20 specialized languages to do so.
What I was left with after ten years of discarding all the possibilities that could not work, was Quirk theory. It surprised the heck out of me because it was the last thing I was looking for; it was such a preposterous idea! - the closest thing to nothing, that was still something! It still may prove to be wrong (I put the odds of it being right at about 1 in 15 at the moment), but if I prove it wrong, I can leave my quest knowing that the Universe is more mysterious than I can possibly ever hope to understand, and I will go back to enjoying the sunsets!
I have approached many physics departments, thinking that holding advanced honours degrees in engineering would at least get me to an interview stage to put my case for a PhD study in this field - no such luck! So, I have been looking for serious criticism of the theory on RG, but all I have been finding is disproofs along the lines of "I think, therefore it is wrong". So frustrating!
I am now quite hopeful of a proof in the next few months, because I now have a clear methodology in my simulation software to construct a Hydrogen atom from first principles, proton and all. Because Quirk theory has no empirical parameters, this will either succeed or fail - there is no middle ground.
Matthew, first of all I'd like to tell you that I liked in a mathematical sense the QSP theory you have just described to us, and I thank you for this. It was something new and interesting for me. I quote from your reply: "In this frame-work stochastic dynamics may be analysed in a very beautiful way", and I agree.
But, since we are doing Physics, which is an experimental science among others, I'd like to be presented with a table like the next one:
\begin{tabular}{llll}
Real value & QM$_{\text{estimation}}$ & QED$_{\text{estimation}}$ & QSP$_{\text{estimation}}$ \\
... & ... & ... & ... \\
\end{tabular}
I want to see what the improvement in practical estimations is, in order to say: 'OK, it is a fine theory which better describes reality, so we have to accept it'.
Matthew, thanks for that. As an engineer I am much more concerned with the how and the what than the why. The how and the what is simply the diligent application of the correct maths in the correct way. It is not sufficient for me to just do maths (which is why I am an applied mathematician, not a real one!). My maths must come with an understandable context. Feynman was quite honest and contrite about it - he accepted quantum theory, but was under no illusion that he truly understood it - the frustration at not understanding such a simple few rules is likely why he enjoyed playing the bongo drums so much.
Andrew, you can ask detailed questions and be sure you will receive detailed answers!
Anyway I wish you good luck to your project!
Indeed, practical solutions are another matter! They will come, but it takes time. QSC is very new mathematics, just touching physicists at the moment, and it will take time for it to filter into physics and to be implemented in engineering. However, I think John Gough and Matt James are working on much more practical applications of this stuff, in perhaps more of a theoretical engineering context. John Gough has some familiarity with the Belavkin formalism of QSC and uses it. Demetris, maybe you will find some good stuff in his work too. Belavkin has a lot of work on quantum information too, and many estimates appear there.
Regarding your table Demetris, to test the results of theories we need experiments. Perhaps we consider an experiment that verifies QED, and perhaps QED is the best/simplest way to handle the data in this experiment. The problem is that the theory becomes one that gives explanations for the data in very particular experiments, and subsequent experiments are constructed still in the context of this theory. If new theory is considered then new experiments may be carried out as new ideas about `what' to measure appear. So experiment becomes somewhat synthesized to the theories.
Gravitational waves are a good example. These are predicted in a theory, and subsequently specific apparatus is built to attempt to verify these predictions. One of the problems with QSC is that people ask `what does it predict?' All I can say is that it's not trying to predict anything; it is simply a construction of dynamics that accounts for everything: it is not a theory of `what is' but a description of how we are evolving. And it allows this evolution to be generated by observation - absolutely fundamentally.
This is not quite a prediction. To make predictions one would have to take QED, say, and re-formulate it in terms of QSC (quantum stochastic calculus) machinery, and then say things like `if we constrain the system in this way, which may be achieved in the laboratory by doing this, then we can deduce that these things correspond to this and those things correspond to that...'
In QSC everything has a very particular place as far as the way all the ingredients fit together, and this alone provides additional insight into interpretation (which is important in physics). It gives a much more precise context in which the underlying ideas may be arranged.
Demetris, this may be an unsatisfactory response to the table that you would have liked, but I'm trying to reveal that QSC is more like a very logical system within which physics may be understood. In this way it is not like QM or QED, but it is closer to reality as it is trying to provide a means by which reality can be understood.
Crudely:
QM, QED: are attempts to uncover a structure that (parts of) the universe is made of, that may be verified through experiment.
Belavkin QSC: is trying to say that statements about the structure of the universe should be inferred from a fundamental principle of measurement/observation, and thus any inferred structure should be understood as a consequence (at least in part) of the observer/apparatus.
These are subtly very different things.
Demetris,
All I ever wanted to do was sit down over a cup of coffee and a whiteboard with a couple of real physicists who really understood the unification problem, and to present my hypothesis for a critical examination (or even a good belly laugh). Why? Because I had tested it against all the falsifications I could think of, and not found one that succeeded, and I was looking for second opinions as to how to go about it.
The original intent was to find a PhD supervisor, so I could learn enough to do the falsification studies/proof myself - the onus of proof of such a bold attempt is quite rightly extremely high. As it turns out, enough time has passed that I have now managed (almost) to learn enough to attempt this without help.
I will leave you with a typical quote from a senior physics professor, when I asked to come in for an interview with regard to a PhD candidature. Note this was at my Alma mater, an institute that saw fit to grant me a first class honours Bachelors degree that entitled me to go direct to a PhD study at that University. This was his response:
"I guess I should start out by being direct and say that I share the commonly encountered scepticism regarding proposed theories that aim to solve all issues of quantum and gravitational physics in one fell swoop. Beyond that scepticism---which might be criticised as no more than evidence of a closed mind---there are the endless demands on time…we are all very busy"
Quirk theory came about by a very strange route - it is a mathematically complete and unambiguous hypothesis, but its common framework is based only on principles and not evidence. But each principle on its own was already proven in physics, just never put together quite that way before. These principles came from QM, GR, Cosmology, Thermodynamics, and the Standard Model. But to make non-contradictory sense of all the principles in all these various facets of physics, only one option remained, with just three axioms: Relativity, the Quantum, and Scale Invariance.
What has physicists all riled up is that the last axiom is incompatible with field theory, in the limit - it has to be incompatible, because the structure I use works right down to the singularity while maintaining strict energy conservation at the micro level, and still allows Quantum Mechanics to be done on it. Field theories and differential geometry just cannot do this. The general response of course is that it can't possibly work because it can't possibly work, therefore it does not work; and yet my simulation program and Excel spreadsheet continue to spit out demonstrably valid solutions to the maths itself, and some analytic ones at that!
Andrew, good luck from me too! Although, maybe sunsets and mystery is the greatest path of all!
Andrew, please don't overestimate the guidance of a PhD supervisor. In time I will share with you many experiences of my own that indicate the simple truth: In Research you are ALONE!
I shall keep in touch...
Demetris, Matthew,
Perhaps it is because I had an absolute genius as a research masters supervisor. The two examiners only made one comment: why was this thesis not submitted as a PhD? There was a good reason. Our personal research standards were higher than theirs. It was not till my mid 30's that he finally convinced me I more or less knew what I was doing! I grew up absolutely convinced of my own ineptitude. Now people look at quirk theory and say I was right. Go figure. I am happy to let it stand or fall on the scientific method.
The mystery I find is in the simple. What is more mysterious than energy telling itself how to move, and it does? You can see in the maths "how" it does, and it is still mysterious. I also find mystery in emergent behavior - complexity from simple rules applied often, creating layer upon layer of whole new classes of phenomenology that just cannot be readily deduced except in hindsight. These mysteries, Matthew, as you rightly say, arise entirely from observation. And we can never know why we observe, just that we do. That is the deepest mystery of all - and I am not going there, because it cannot be answered, only wondered at.
@Matthew, my only 'problem' with the above-mentioned theory of yours is this: since it is a theory of measurability and observability, isn't it a contradiction that it is unable to define a proper experimental process in order to demonstrate its superiority? That's all. I am playing 'devil's advocate' now, but as I told you, I liked the theory and I'd like to see it being recognised by the community. I think the only way for this is by proving its superiority in the experimental arena.
@Andrew, you have touched on the question of the existence or not of justice. My personal opinion, after many years of searching, is that justice in our world simply does NOT exist. I can recount my personal objections, starting from the university of my first degree (we had the hardest schedule of all Greek Physics departments, and what? we got a degree equivalent to those of other universities... and so on). So, take it as a fact and continue your life...
Demetris, I'm so glad you brought this up. There is in fact one apparatus that for us humans is the only true apparatus: that is ourselves! There is no better way for one to understand the universe than through one's own observations!