Quote from Wikipedia 2016:
“Quantum electrodynamics.... Richard Feynman called it "the jewel of physics" for its extremely accurate predictions ... the presence of diverging integrals having no mathematical meaning. To overcome this difficulty, a technique called renormalization has been devised, producing finite results .... theory being meaningful after renormalization is that the number of diverging diagrams is finite... quantum electrodynamics displaying just three diverging diagrams.”
So renormalization has not removed all the infinities from QED? How, then, can Richard Feynman say that QED is extremely accurate? Theoretically QED is a monster, but practically a "jewel". In fact, the infinities remain:
Wikipedia: “An argument by Freeman Dyson shows that the radius of convergence of the perturbation series in QED is zero... From a modern perspective, we say that QED is not well defined as a quantum field theory to arbitrarily high energy.[24] The coupling constant runs to infinity at finite energy, signalling a Landau pole. The problem is essentially that QED appears to suffer from quantum triviality issues.”
So Richard Feynman was not objective. Renormalization is supposed to remove the infinities from the theory, but the infinities are not completely removed. Conclusion: there is no quantum field theory that is fully renormalized, or even consistently quantized. So, theoretically, we have no quantum theory of fields. Please go along the path of David Bohm's theory of quantum physics. Thank you.
André> "Can you elaborate?"
Feynman invented the technology for doing QFT-calculations in general, not only in QED (or just some specific processes in QED, like the electron anomalous magnetic moment). It is known as Feynman diagrams.
* It is manifestly invariant, combining many Lorentz symmetry breaking contributions of the old Hamiltonian formalism into single, explicitly invariant (or sometimes covariant) terms.
* It provides a visual representation of long and complicated algebraic expressions, unambiguously defined by each diagrams, which makes it easy to identify expressions which are equal, or which cancel each other. Hence it makes it possible to do visual calculations (i,e., incredibly complicated algebraic computations in your head).
* It supports intuition, interpretation and understanding of the physical mechanisms behind the processes under scrutiny, making it easy to decide which processes are possible in a QFT model (and give a fast quantitative estimate of its likelihood, if it is possible).
* It provides a clear way to communicate ideas and results. [Q: What is your model? A: Let me draw the Feynman rules. Q: Why do you say this must happen? A: Look at this Feynman diagram. Q: Isn't your model already ruled out by experiment, because then that diagram implies unobserved high probability for that process? A: Uhmm... -- oh shit! Let me think some more...]
I could continue, but you should have gotten an idea by now.
André> "To my knowledge, this was accomplished only by tweaking the electron g factor..."
This is not correct. The electron (and muon) anomalous magnetic moments can be computed without direct introduction of new parameters. In requires knowledge of the dimensionless fine structure constant, which can be obtained from completely different experiments. For higher order corrections, it also requires knowledge of the ratio between the muon and electron masses (again to be taken from experiments), and the measured energy-dependent cross-section of electron-positron annihilation to hadrons.
[Your repeated, somewhat derogatory, use of phrases like just a, only, ..., in combination with glaring mistakes, makes some of your statements quite painful to read. You should avoid polluting your posts with such traces of arrogant ignorance.]
It seems difficult to expect someone not to think of his own creation as being a jewel among other creations.
Isn't QED just a method for calculating energy levels due to electric interaction between charges?
I agree.
It's a quick fix.
But at the end of the day, renormalization was not such a bad option, because it depended on the Compton wavelength, which represents the finite photonic energy of the electron.
I can explain further if you are interested
And, oh, by the way: Feynman's calculation of the electron magnetic moment to Bohr magneton ratio was not such a good quick fix.
Can also explain
André> "Isn't QED just a method for calculating energy levels due to electric interaction between charges?"
Yes, it is "just a method" to quantitively explain how almost everything observable in nature works. Except for classical gravity, and some esoteric (but interesting for fundamental physics) phenomena of radioactivity and subatomic physics.
Andrew> by the way Feynman's calculation of the electron magnetic moment
From Julian Schwinger to Toichiro Kinoshita, Richard Feynman was not really involved in calculating the electron magnetic moment. But he invented the technology for doing the calculations efficiently. Schwinger never adopted his method. Perhaps he was too proud to do so, or perhaps he disliked that almost everyone suddenly got technology to do calculations which only he had been able to perform.
Renormalization in QFT is not really about infinities. From a "modern" (i.e., since the works of Ken Wilson and others in the 1970s) viewpoint it is about universality. I.e., that we can compute properties of nature at one space-time scale without knowing details about nature on much smaller scales -- such details only reveal themselves in the form of a few parameters which must be fixed by experiments. From a practical point of view, not a twiddle will change if everyone someday agrees that all infinities are cured by string theory.
But yes, QED is a *jewel*. And it is *the theory of almost everything*.
Dear intelligent friends, thank you for the insightful comments. P.S. To the one who gave me the dislike: why?
Actually, there's no point in discussing QED by itself anymore; the Standard Model, as a whole, has been tested in precision experiments in many ways, and the methods for performing calculations efficiently have also evolved. And this is necessary, because only such precision in both calculation and experiment can identify effects where the Standard Model does not describe everything that's measured, or where what it does describe implies that there's something more, like the measured fact that the mass of the Higgs is such that the Higgs self-coupling remains small enough for perturbation theory to be consistent. This implies that unknown particles remain to be found.
Expressions like ``jewel of physics'', however, are just personal opinions that express the joy of people at having understood something. People who haven't learned to appreciate what this means can't be expected to share such emotions, any more than people who haven't learned to appreciate music can share in the emotions aroused by a musical piece.
I agree with you, Stam, that Feynman's pride in his own achievement can't be taken as a negative mark on his record, for the very reason that you highlight.
His defining "virtual photons" to calculate the momentary energy/force intensity between charges really was a great achievement, which, from what I understand, allowed use of the easier Lagrangian for the purpose, where before only the more complex Hamiltonian was available.
But it seems to me that restricting the treatment of charge interactions to this method alone can't cover all angles.
In high-energy accelerators, for example, QED is to my knowledge superseded by relativistic mechanics, due to the inertia of charges, and by the Lorentz and Maxwell fields approach, due to the fact that charges can be guided by electric and magnetic fields.
In other words, QED definitely is one of the useful tools in the toolbox, but I would hardly qualify it as being *the theory of almost everything* as Kåre says.
QFT is a theory, but QED is a calculation method.
Kåre,
You say that Feynman invented the technology for calculating the electron magnetic moment efficiently.
Can you elaborate?
To my knowledge, this was accomplished only by tweaking the electron g factor, so that the difference from the experimentally verified value can be related to the electron's classical gyromagnetic moment calculated from theory, that is, the Bohr magneton.
To my knowledge, the real experimentally obtained electron magnetic moment was never explained from theory, or was it?
The Standard Model, and QED, is a very particular quantum field theory. There's no meaningful way to distinguish ``theory'' from ``calculational tool''; these are just words. The Standard Model, and QED in the special case where it is appropriate, provide the framework for performing calculations that are under control and can be used to design and interpret experiments.
André> "Can you elaborate?"
Feynman invented the technology for doing QFT calculations in general, not only in QED (or just some specific processes in QED, like the electron anomalous magnetic moment). It is known as Feynman diagrams.
* It is manifestly invariant, combining many Lorentz-symmetry-breaking contributions of the old Hamiltonian formalism into single, explicitly invariant (or sometimes covariant) terms.
* It provides a visual representation of long and complicated algebraic expressions, unambiguously defined by each diagram, which makes it easy to identify expressions that are equal, or that cancel each other. Hence it makes it possible to do visual calculations (i.e., incredibly complicated algebraic computations in your head).
* It supports intuition, interpretation and understanding of the physical mechanisms behind the processes under scrutiny, making it easy to decide which processes are possible in a QFT model (and to give a fast quantitative estimate of a process's likelihood, if it is possible).
* It provides a clear way to communicate ideas and results. [Q: What is your model? A: Let me draw the Feynman rules. Q: Why do you say this must happen? A: Look at this Feynman diagram. Q: Isn't your model already ruled out by experiment, because then that diagram implies unobserved high probability for that process? A: Uhmm... -- oh shit! Let me think some more...]
I could continue, but you should have gotten an idea by now.
André> "To my knowledge, this was accomplished only by tweaking the electron g factor..."
This is not correct. The electron (and muon) anomalous magnetic moments can be computed without direct introduction of new parameters. It requires knowledge of the dimensionless fine structure constant, which can be obtained from completely different experiments. For higher-order corrections, it also requires knowledge of the ratio between the muon and electron masses (again to be taken from experiment), and the measured energy-dependent cross-section of electron-positron annihilation to hadrons.
[Your repeated, somewhat derogatory, use of phrases like just a, only, ..., in combination with glaring mistakes, makes some of your statements quite painful to read. You should avoid polluting your posts with such traces of arrogant ignorance.]
Dear Kåre
Sorry if my comments offended you. This was not my intent in any way.
It is just my humble opinion that QED is a mathematical tool that makes energy and force calculations easier than with the Hamiltonian.
I do not disparage Feynman's contribution, which I know well and indeed admire.
Regarding the electron's so-called anomalous magnetic moment: I am only stating what I found in the literature regarding the ad hoc setting of the electron g factor for this purpose.
If you have a link to a formal procedure describing how the experimental electron magnetic moment can be calculated from theory without the ad hoc modified electron g factor, please give me the reference.
I will be happy to learn how this is done.
Any textbook on quantum field theory describes this calculation, cf. http://isites.harvard.edu/fs/docs/icb.topic1146665.files/III-3-AnomalousMagneticMoment.pdf
There's no such notion as an ``ad hoc setting of the electron g factor''. The g factor of a free, charged, spin-1/2 fermion is equal to 2; that's a consequence of Lorentz invariance, when the electromagnetic field is classical. Taking into account its fluctuations gives a value different from 2, which is a function of the coupling of the fermion to the electromagnetic field and may be shown to be related to its magnetic moment. It's known how to calculate it, in many different ways, and how to measure it.
But, once more, QED is part of the Standard Model, which has been tested to comparable accuracy in many other processes, not just electromagnetism. The calculations and experiments in QED are now homework exercises for advanced undergraduates and backgrounds for probing effects beyond the Standard Model.
André> "Sorry if my comments offended you. This was not my intent in any way."
I was not offended (then I would not have bothered to write a long answer); I wanted to point to behavior which can harm your reputation (and only yours).
André> stating what I found in the literature
Where did you find such statements in the literature? What kind of literature was that??
André> QED is just a mathematical tool
You mean like Newton's laws, and his invention of calculus, is just a mathematical tool? What, in your opinion, is required for something in science to be more than just a something?
Hi Stam.
Thank you so much for this link. I had never come across this development.
I am enthused by this relation that was made with the fine structure constant, which gives so precise a figure for the corrected g factor.
This will allow me to complete my own research.
Has this paper been formally published?
If so, can you give me the formal reference that I could cite as a reference?
This material is now taught; this particular part is a lecture course at Harvard. Any textbook on quantum field theory describes it. Just navigating the links leads to Schwartz's web page, the physics department, and so on.
Kåre,
The last reference I used was B. Odom, D. Hanneke, B. D'Urso and G. Gabrielse, "New Measurement of the Electron Magnetic Moment Using a One-Electron Quantum Cyclotron", Phys. Rev. Lett. 97, 030801 (2006). See ref. below.
This last experimental measurement, to my knowledge, was g/2=1.001 159 652, which is not equal to any figure that can be obtained from theory.
And yes, I think that calculus, just like QED, is a mathematical tool that we use to measure physical reality.
Newton's laws are conclusions, but calculus is a mathematical tool.
I see a difference between the tools we use to make calculations and the conclusions that can be drawn from the measurements that we can make with the measuring tools.
I have no idea what you are talking about with "behavior that can harm me". I take full responsibility for my own opinions, and to my knowledge, airing opinions is allowed on RG.
http://www.ncbi.nlm.nih.gov/pubmed/16907490
Thanks for the lead Stam. I will try to follow it to locate the actual formal publication, because I can refer only to formally published material in this instance.
The reason is that I arrived at the exact same value for the amended g factor from an entirely different angle with an infinitesimally small error margin, but without having made this logical link with alpha.
If anybody knows about the formal reference, I would greatly appreciate any info.
http://ijerd.com/paper/vol7-issue3/E0703021025.pdf
André> Newton's laws are conclusions, but calculus is a mathematical tool.
With such a classification, the QED Lagrangian is a conclusion, and Feynman diagrams a mathematical tool.
André> g/2=1.001 159 652, which is not equal to any figure that can be obtained from theory.
The figure you give, to the accuracy you give, is unproblematic. What can be calculated theoretically is (essentially) a series in the fine structure constant alpha, the latter being a number that cannot be obtained from theory. So, for comparison, alpha must be obtained by independent means, where a measurement based on the Quantum Hall Effect seems very appropriate. The problem with the most accurate measurement
(g-2)/2 = (1159.65218076 ± 0.00000027) × 10^-6
is that it is more accurate than any independent measurement of alpha. Hence, the suggestion is to use this measurement, in conjunction with the most accurate theoretical prediction, to obtain the best possible value for alpha. This better value can then be used in other theoretical predictions, like the computation of (g-2) for the muon. But clearly, in such a procedure the last digit of agreement between theory and measurement for the electron (g-2) becomes a fit, and the value of the Quantum Hall constant becomes a comparison between theory and experiment (instead of the other way around).
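To make that logic concrete, here is a minimal numerical sketch (my own illustration, using only the three lowest-order mass-independent QED coefficients, which are well established; the full Kinoshita-style computation goes to much higher order and adds mass-dependent, hadronic and electroweak pieces, so the digits here are illustrative only):

```python
import math

# Invert the lowest orders of the QED series
#   a(alpha) = C1*(alpha/pi) + C2*(alpha/pi)^2 + C3*(alpha/pi)^3
# to extract alpha from the measured electron anomaly quoted above.
C1, C2, C3 = 0.5, -0.328478965579, 1.181241456   # known QED coefficients

a_exp = 1159.65218076e-6     # measured (g-2)/2, from the text above

alpha = 2 * math.pi * a_exp  # zeroth guess: Schwinger term alone
for _ in range(10):          # simple fixed-point iteration
    x = alpha / math.pi
    alpha = 2 * math.pi * (a_exp - C2 * x**2 - C3 * x**3)

print("1/alpha =", 1.0 / alpha)   # ~137.036
```

Running this returns 1/alpha of about 137.036, in line with independent determinations; the point is exactly the one above: the extraction only works by trusting theory and measurement together.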
Kåre,
Matter of interpretation maybe, but to me, both the Lagrangian and Feynman diagrams are tools that help us to describe physical reality.
The figure I arrive at for the amended g factor is 1.00116138653, which is extremely close to that obtainable from use of the alpha constant, which is why this is of interest to me; but it still is not equal to the experimentally obtained value. This difference needs to be explained.
However, contrary to current opinion, alpha can be independently obtained from theory, which is why this relation made by Schwartz (I suppose he is the author) between the amended electron g factor and alpha is so interesting to me, since the value I get is practically equal to that obtained from use of the alpha constant.
I found from theory that alpha and the Planck constant can be related by the following relation: α = (μ₀ e² c)/(2 h), which sets alpha to this exact value from theory.
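As a quick numerical sanity check of this relation (a minimal sketch with CODATA-era values; note that the identity is algebraically equivalent to the textbook α = e²/(4πε₀ħc), via μ₀ε₀c² = 1 and ħ = h/2π, so it expresses alpha in terms of other measured constants):

```python
import math

# Check alpha = (mu0 * e^2 * c) / (2 * h) numerically.
mu0 = 4 * math.pi * 1e-7   # V s / (A m), exact in the pre-2019 SI
e   = 1.6021766e-19        # elementary charge, C (measured)
c   = 299792458.0          # speed of light, m/s, exact
h   = 6.6260700e-34        # Planck constant, J s (measured)

alpha = mu0 * e**2 * c / (2 * h)
print(alpha, 1.0 / alpha)  # ~0.0072974, ~137.036
```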
Dear Christian,
As I mentioned to Kåre, this may be a matter of opinion. My own take on this issue is as follows:
At our disposal to understand physical reality, we have the mathematical tools that we developed, plus, and a major plus at that, our reasoning ability.
1- We collect data
Most of the data that Newton had at his disposal to establish his three laws was collected by Kepler, plus the data collected before and after Kepler that came to Newton's attention.
2- We analyze the data with the mathematical tools we develop and draw conclusions with our reasoning ability.
Which is what Newton did to establish his laws.
It seems to me that Newton could only draw his conclusions from these measurements and use of his reasoning ability.
My question would then be (we are grazing epistemology here, seems to me):
What other means do we have at our disposal to explore and understand physical reality?
André> "QFT is a theory, but QED is a calculation method."
I fully agree with those who consider QED a "calculation method". And yes, it is a brilliant method. I disagree with "QFT is a theory". From a (physical) theory I expect that it is at least mathematically consistent, but I do not know of any realistic QFT (of interacting fields) that is mathematically consistent. I have no objection to the formulation: "QED is a calculation method based on a quantum field theoretical Ansatz". To formulate this Ansatz, we employ "QFT of free fields", which is well defined, but nothing other than a mathematical tool for setting up quantum mechanical multi-particle states (in Fock space). Within the context of the perturbation algorithm of QED, QFT refers to a very imaginative re-interpretation of Feynman diagrams by "interpreting" internal lines as "particles that have left their mass shell." It is obvious that imaginative prose does not constitute a theory.
In this context it is worth remembering that Feynman, in his papers on QED, describes (classical) electrodynamics "as a description of a direct interaction at a distance (albeit delayed in time) between charges (the solutions of Liénard and Wiechert)." And further on: "We shall emphasize the [direct] interaction viewpoint in this paper." Through six decades, Feynman's QED has successfully been subjected to a "politically correct" re-interpretation as a QFT. Today almost nobody realizes that Feynman diagrams describe contributions to a quantum mechanical action-at-a-distance.
Stam> "Actually, there's no point to discuss QED by itself anymore"
I absolutely disagree. There is still plenty of work to do on QED itself. Using clever recipes for removing divergences from a calculation does not mean that we have understood the origin of these divergences. None of the "physical explanations" of renormalization can explain how a realistic physical process could make an infinite number finite or vice versa. And I don't think that we can claim to understand the Standard Model before we have fully understood QED.
Dear Christian,
Since Newton was human, like you and me, we do know the origin of his thoughts.
He was born knowing nothing, just like any other human being. He learned to speak, read and write. He became interested in physics and mathematics, learned logical reasoning, learned to use the mathematical tools that already existed, developed those he needed that did not exist previously, studied what data had been experimentally gathered at the time, and the conclusions that had already been drawn, good or bad, drew his own conclusions and then wrote his Principia among other achievements.
All other discoverers did the same.
This is the only way we can gain knowledge.
I highly recommend you read a book by George Gamow titled "Gravity" (1962), where he explains how Newton derived his laws to explain Kepler's data.
André> 1- We collect data
Well, it appears that Kepler did not start by collecting data. He started with a theory, as described in Mysterium Cosmographicum (1596). To prove it, he started to collect data. Which disproved his theory. He must have been a true giant of science, since he was able to realize that his theory was wrong. That made it possible for him to discover his three laws (1609 and 1619). Because they are not that different from his initial theory.
Astrologers had also looked into the skies for centuries. Why didn't they discover the same laws much earlier? Maybe because their observations were motivated by a too different theory.
It is not enough to start by collecting data.
Kåre,
I totally disagree on this issue.
No data = nothing to analyse = no conclusion.
André> Has this paper been formally published?
This section can most likely be found in the book,
https://www.amazon.com/Quantum-Field-Theory-Standard-Model/dp/1107034736
(carefully proofread by the former NTNU student Anders Johan Andreassen).
There is a good presentation by Kinoshita here:
http://www.riken.jp/lab-www/theory/colloquium/kinoshita.pdf
The (very likely) latest theoretical prediction is this one:
http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.109.111807
The preprint version of that paper is here:
https://arxiv.org/abs/1205.5368
Kåre,
You wrote "it appears that Kepler did not start by collecting data. He started with a theory"
Doesn't this imply that Kepler needed no data to elaborate his theory?
Just like Newton, and all the others who drew more and more precise theories over time as more experimental data accumulated, Kepler had to become aware of the data gathered, and the conclusions drawn, good or bad, before him, before he could draw new conclusions, good or bad, leading even to the first draft of a personal theory.
It seems to me that it is impossible for anybody to skip this first step of acquiring current knowledge, data- and conclusion-wise, before drawing new conclusions leading to more precise theories.
Thank you so much for the further leads to locate the formal source for the alpha based g factor calculation.
Walter,
We seem to be in general agreement.
Particularly on the point that Feynman diagrams describe contributions to a quantum mechanical version of action-at-a-distance.
Actually, each QED virtual photon amounts to a momentary, or instantaneous, measure of the Coulomb force intensity, plus the precise amount of energy induced at the distance considered between two charges by the force.
I still wonder why he discouraged his followers from exploring the last remaining frontier of particle interactions:
"In many problems, for example, the close collisions of particles, we are not interested in the precise temporal sequence of events. It is of no interest to be able to say how the situation would look at each instant of time during a collision and how it progresses from instant to instant."
Page 771 in his 1949 seminal paper. See below.
I totally disagree with him on this issue, because this has prevented formal exploration of progressive interaction between charges for the past 60 years.
http://authors.library.caltech.edu/3523/1/FEYpr49c.pdf
André> Doesn't this imply that Kepler needed no data to elaborate his theory?
Of course not, since he was a true scientist and not a dreamer. But his (very wrong) theory gave him motivation to look in the right direction. I.e., to observe the orbits of planets very accurately. He did not try to correlate the apparent positions of stars with the price of tulips in Amsterdam; that would also have been a way to collect data.
André> the formal source for the alpha based g factor calculation.
The PRL by Kinoshita et al. is the first-and-foremost paper to cite, if you want to refer to the most accurate prediction. But there are of course a lot of works by a lot of people behind the full story.
Kåre,
Despite our different angles on the data issue, I see that we are nevertheless in general agreement.
I'll pay particular attention to the Kinoshita et al. reference as recommended.
I never expected to end up even discussing the electron g factor and alpha when I first commented in this unrelated thread, let alone being made aware of this link between both.
Thank you and Stam for the leads to the formal references. Greatly appreciated.
Dear Christian,
We agree on data collection.
But I do have a different opinion on how new ideas come about.
I concluded that, effectively, new ideas can only come from correlating larger pools of collected data, pools that include the conclusions drawn from the more limited data and conclusions available earlier, those previous conclusions having been drawn without knowledge of the data collected afterwards.
New ideas come from attempting to correlate the new data that could not be integrated into the previous conclusions (drawn from a lesser pool). At some point, further correlations are made from understanding new relations in the more complete pool, leading to more precise understanding and the elaboration of more precise theories.
As an example, it was impossible for Newton to even imagine that mass increases with velocity since the technology did not exist to observe and measure the mass of electrons accelerated to relativistic velocities.
After this new data was collected, Newtonian mechanics could be modified to account for the newly understood asymptotic speed-of-light limit on the velocity of massive particles.
This is how it always happened in the past.
I think that this process will go on for as long as we do not understand physical reality to satisfaction.
In the old days, with very limited computing power to process data, one really had to have some opinion about how to organize the measured data, and which patterns to look for. Kepler was perhaps lucky.
Today there is technology for handling much larger sets of data, and for automatically recognizing patterns (I believe, but someone must have defined what constitutes a *pattern*). So then it makes more sense to just start collecting data, without having explicit ideas about how to analyze it. I suspect many intelligence agencies do that...
Still, at CERN they must immediately throw away most of the generated data, due to lack of storage and processing power. This is a clear case where theoretical input is used when the data is collected. This may be a dangerous thing; how then can you find something really unexpected? But what else can be done?
You can have so much data that it contains *any* thinkable pattern! Some years ago, Roger Penrose wrote an interesting book, which included hints about how signatures from the previous universe could be present in the microwave background, in the form of circular patterns.
Someone did a search for circles: lo and behold! There were *a lot* of them! A paper was written (perhaps many). Then two Norwegian physicists did a serious statistical analysis, and found that the number of circles found (to the accuracy a circle could be defined) was just what could be statistically expected in the huge collection of data. [My memory may have distorted the true story here somewhat, but you get the picture.]
I don't know if any conclusions can be drawn from these anecdotes...
André,
I don't have problems with the passage on page 771 of Feynman's paper ("In many problems, for example..."). Feynman is telling the reader that it does not always make sense to ask questions about the "temporal order of events". Remember that he is talking about a perturbation algorithm, which principally is not different from a (time-independent) perturbation calculation in non-relativistic quantum mechanics. Here, nobody would ask, which contribution comes first. In QED the perturbation is not acting "in time", although the perturbation series may look this way. It rather generates "static" contributions to the intermediate two-particle state.
Dear Christian,
Regarding how Einstein derived the relativistic energy formula: without revealing any secret, it is public knowledge that Poincaré was very active at the time, as were Lorentz, Abraham, Kaufmann, Planck and many other discoverers, during one of the most productive periods in data acquisition, thanks to technological progress. All the while, before he came up with SR in 1905, it is well known that he spent most of his time at the patent office reading all that came about, not to mention his interest from childhood on in magnetism, physics and astronomy, and his family contacts with scientists.
From what I understand, he simply gathered enough data to be able to see the correct coherence and conclude.
As for Gravity by Gamow, I referred you to it in the context of the part where he explains how Newton related the inverse-square force to Kepler's laws, chapters 2, 3 and 4 if I recall.
As you suspect, in Kepler's time, data collection about the solar system could only be a years-long process. From what I read, Brahe and Kepler spent their lifetimes at it, like the few real scientists who were even around at the time with sufficient information to become interested.
As for perceiving coherences in data sets, this simply is a natural property of the neocortex. If patterns exist in sufficiently large data sets, coherences are perceived. It is up to us to pay attention, further analyze, confirm or reject. We all have the same equipment to think with.
As for the Ptolemaic astronomers, who were around more than a thousand years before Brahe, I have a hunch that all chances are they did not have as much data about the orbits as Brahe and Kepler ended up gathering to draw their conclusions. The same goes for Newton and then Einstein.
But as I said, this is mostly a matter of opinion.
Walter,
I understand your argument that it obviously makes no sense to explore temporal order of events in a restricted frame where time is not a factor.
But as you highlight, his defining of virtual photons simply replaces the much-argued-against "infinitesimally progressive action at a distance" concept, involving the Coulomb force, with a more favorably viewed quantum mechanical "quantized action at a distance" concept, still involving in the background the inverse-square Coulomb force for calculation of the "static" contributions to the intermediate two-particle states, at any given distance between the charges.
In physical reality, however, underlying both interpretations, close collision of particles does involve time, which in turn involves infinitesimally progressive motion of the particles during the process, and an infinitesimally progressive increase or decrease in the intensity of the force, and of the energy induced in the particles: increasing when they move towards each other and decreasing when they move away from each other.
It seems to me that his comment was interpreted at large as generally applying to real processes.
I have a view that important knowledge could be gained from such exploration, particularly in clarifying the distinction that must be made between the Coulomb force proper and the energy that can be calculated to exist in charged particles at any distance between them and that seems to be induced as a function of the inverse square of the distance by the Coulomb force.
In short, I have a definite issue with any research interdiction based on principle for any aspect of physical reality.
Dear Christian,
Actually, energy-momentum was very well known by 1905 with respect to the electron. Walter Kaufmann, to name just one, had experimented with relativistic electron velocities sufficiently to detect that the longitudinal inertia of relativistic electrons was higher than their transverse inertia, which led to the debate about electromagnetic mass, transverse mass and longitudinal mass that Poincaré, among others, discussed extensively. See one of Kaufmann's papers below.
There was a raging debate between Abraham-Kaufmann on one side and Lorentz-Einstein on the other about this, even before 1905. He definitely had a deductive theory. All theories are deductive: from the conclusions that you draw from the data and the other prior conclusions that you correlate, you elaborate, or deduce, a theory.
Regarding the Higgs boson, there was no need to predict anything, because as long as they kept cranking up the energy in the LHC, they were bound to hit it when the right energy level was reached and overshot. They simply would have given it another name if Higgs had not come up with his theory.
The possibility that this real momentary parton exists did not begin when Higgs defined it. We do not create physical reality.
My humble opinion is that even if the author you quote was convinced that "The Higgs boson could not have been discovered experimentally by accident", he can only have been wrong, given that we do not create physical reality, and that the CERN people were bound to hit it anyway in their constant search for ever more energetic scattering levels.
I can personally predict that if they keep on cranking up the energy, they will detect one yet more energetic and no doubt others yet more energetic.
A pity that these fleetingly existing partons can't help us understand normal matter, since none of them are part of atoms.
Not trying to convince you in any way though. This is just my opinion. Sorry if you find my opinions painful to read and a show of arrogant ignorance as Kåre concluded.
http://gdz.sub.uni-goettingen.de/dms/load/img/?PPN=PPN252457811_1903&DMDID=DMDLOG_0025
André> Regarding the Higgs boson, there was no need to predict anything,
Well, someone had to be convinced to build the collider in the first place.
I had a quick look at some numbers available on the net:
In 2016, the LHC has so far delivered 29.3 inverse femtobarns (fb^-1) of 13 TeV beam to the ATLAS detector, which has recorded 27.1 fb^-1 of this. At these energies, the total cross-section for proton-proton collisions is about 0.15 barn. This means there were about 4 × 10^15 collisions in the detector. The measured cross-section for production of the Higgs is about 50 pb, which translates to about 1.3 × 10^6 Higgs particles being produced (so we can expect much more accurate results to come out soon :-)).
This means that you have to identify 1 among 3 × 10^9 events to find a Higgs particle. The signatures are not that obvious either. Try opening 3 × 10^9 encrypted letters to identify the one of interest.
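For anyone who wants to redo this arithmetic, it is just unit bookkeeping (1 fb^-1 = 10^15 b^-1, 1 pb = 10^-12 b); a quick sketch:

```python
recorded_lumi = 27.1e15    # recorded integrated luminosity, in inverse barns
sigma_total   = 0.15       # total pp cross-section at 13 TeV, in barns
sigma_higgs   = 50e-12     # Higgs production cross-section (50 pb), in barns

n_collisions = recorded_lumi * sigma_total    # ~4e15 collisions
n_higgs      = recorded_lumi * sigma_higgs    # ~1.4e6 Higgs particles
print(n_collisions, n_higgs, n_collisions / n_higgs)   # 1 Higgs per ~3e9 events
```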
The Higgs particle could not have been found without knowing where and how to look. The only fairly obvious thing to discover by cranking up the energy is that the total rate of collisions increases steadily, and similar mundane features.
What many people fail to understand is that, while any given calculation will provide an answer, associating a given measurement with the result of a calculation is something different. So to discover, for instance, the Higgs boson, it's not only necessary to compute the cross section for its decay into various final states, which, multiplied by the luminosity of the collider, gives the number of expected events per unit time; it is also necessary to compute all the other processes of the Standard Model that can lead to the same final states. Many processes in proton+proton collisions lead to two photons in the final state; or to two lepton-antilepton pairs.
Similar arguments hold for trying to discover new effects. Now that the Higgs boson has been discovered, its events can be eliminated, and anything that remains is new. This will take some time to become fully operational, though.
André,
"...still involving in the background the inverse square Coulomb force..."
In QED there are no Coulomb forces. Instead there are exchanged "virtual quanta". The Coulomb force "emerges" from a non-relativistic approximation to QED. You will find this approximation, e.g., in S. S. Schweber: Relativistic Quantum Field Theory (in my Second Printing it is on pages 534/5).
"...close collision of particles does involve time, which in turn involves infinitesimally progressive motion of the particles during the process, and infinitesimally progressive intensity increase or decrease of the force..."
Suppose we were not talking about scattering but about the Hydrogen atom. Would you make a similar statement about the "infinitesimal progressive motion" of the electron relative to the proton? Probably not, because in quantum mechanics it does not make sense to use classical notions such as particle trajectories. Even the notion of temporal order becomes questionable, as Wheeler's famous delayed choice experiments show.
"I have a view that important knowledge could be gained from such exploration..."
You are absolutely right, but this exploration has already been done 66 years ago and resulted in QED.
Dear Christian,
I agree that we need to construct models to "understand" physical reality, but creating physical models that are predictive cannot be done without fitting the data that comes out as "exceptions" to existing models. This is how all successful and predictive models were established.
In Ptolemy's time that you mentioned, the pool of accumulated and confirmed data about physical reality was more restricted, so the models they could come up with were inevitably less detailed than the models that we have today.
Even if they understood the idea that matter had to be made of infinitesimally small "particles", there was no way they could even imagine the inner structure that we now know these particles have, let alone predict this inner structure. They could not describe or even predict electrons, protons and neutrons. Static electricity was known already, but unexplained and impossible to predict from their models.
Data accumulated, so we are now in Newton's time with a lot of new data that doesn't fit and cannot be predicted with Ptolemy's time model. We now have an old model with plenty of "exceptions".
Newton then elaborates a model more precise than Ptolemy's that incorporates most if not all of the "exceptions" not explained nor predicted by Ptolemy's model, from the now more complete data set. Static electricity still not explained and can't be predicted by Newton's model.
Then electricity and magnetism began to be separately understood, with Coulomb, Gauss, Ampère, Faraday and Maxwell synthesizing their findings around the wave and field concepts. And we now understand static electricity, no longer an exception to this model.
Then came Wien, Planck, Einstein, Raman and Compton, and localized electromagnetic photons with longitudinal inertia and some modicum of transverse inertia, which are an "exception" to, and can't be predicted by, Maxwell's wave model, and, surprise, are not yet convincingly integrated into a more comprehensive model!
Where are we at now?
We are now at a point in time where we have Maxwell's model PLUS new knowledge about electromagnetic energy that needs to be integrated into a more comprehensive electromagnetic model in which permanently localized electromagnetic photons will not be an exception (not talking here about QED's virtual photons).
When this is done, some time in the future more data will possibly come up that becomes an "exception" that this more comprehensive model cannot predict, and that will also need to be integrated into a yet more comprehensive model.
And so on until we have a complete model that finally won't be plagued by "exceptions".
I simplify the process to the extreme of course. But this is to illustrate that successful new models can only be made by integrating confirmed "exceptions", not by imagining hypothetical "explanations" not based on integrated data with the hope that real data will eventually fit, which you seem to consider, or maybe I misunderstood you.
Kåre,
I understand the complexity involved in processing the scattering data.
The main problem, as the energy is jacked up, is that the higher the energy resulting from the actual collisions, the shorter the lifespan of the metastable partons that momentarily congeal, so to speak, in the various metastable states before almost instantly decaying, in a cascade of intermediate states, into stable electrons, protons, neutrons, neutrinos, and electromagnetic photons with energy low enough not to generate new metastable partons themselves.
Many of these cascades are well understood and described in the European Physical Journal C and no doubt other references.
They possibly are near some limit where the lifespan of the higher metastable masses likely to be produced will be too short even to be recorded.
To my knowledge, the trend to build more and more powerful accelerators is not directly linked to the search for the Higgs, but was initially meant for studying particles that before that were only detectable from cosmic radiation. Still ongoing.
I think it just was the natural direction to take, a short two years after the SLAC facility performed exploratory non-destructive scattering to identify the inner scatterable components of protons and neutrons with, for the first time, electrons energetic enough to penetrate the nucleons.
Once the inner components of nucleons were confirmed, the step to destructive scattering was easy to take, to try to analyze the metastable particles that before then could only be detected from cosmic radiation scattering.
They are just preparing for the next round. Very predictably.
Dear Christian,
Yes. Sorry for the length of my "history of physics". This was just to emphasize the recurrence of exception accumulations each time after a more comprehensive theory was established. To me, this means that we, humanity, are still in the learning phase about physical reality. When the exception piles start to become leaner, we will be getting closer to complete understanding, which I think will one day be achieved.
I fully agree that simpler theories are superior. To me, complexity to the point of being incomprehensible is a direct sign of a bad theory.
I think that the next more encompassing theory will be simple and explanatory, just like Newton's was, because I think the really fundamental laws of nature have to be few and simple.
André> the next more encompassing theory will be simple and explanatory, just like Newton's was
Many have put their bets on String Theory for that. But it appears that Nature must be uncovered layer by layer (in scale, or inverse energy); there can be many layers before string theory. For particle physics, unless some specific hints of the next layer are found at the LHC, it is unlikely that a new accelerator will be built for a very long time.
Dmitri> we have no Quantum Theory of Field.
We do have many mathematically well defined Quantum Field Models (to be simulated on a computer, they must be well defined), defined on space-time lattices. It can be argued that we have no well defined Relativistic Quantum Theory of Fields. But it can be argued against that also: There is no field of physics with better agreement between theory and experiment than QED, best exemplified by the electron (g-2) anomaly.
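To illustrate what "well defined on a lattice" means, here is a toy sketch (my own illustration, with arbitrary bare parameters): a Euclidean phi^4 scalar field on a small 2D lattice, sampled with the Metropolis algorithm. On a finite lattice the path integral is an ordinary finite-dimensional integral, so every expectation value is manifestly finite.

```python
import numpy as np

rng = np.random.default_rng(1)
L, m2, lam = 16, 0.5, 1.0    # lattice size, bare mass^2, bare coupling
phi = np.zeros((L, L))       # the field: one real number per site

def local_action(phi, i, j):
    # Terms of the discretized Euclidean action involving site (i, j):
    # nearest-neighbour kinetic part plus m^2 phi^2/2 + lam phi^4/24.
    p = phi[i, j]
    nn = (phi[(i + 1) % L, j] + phi[(i - 1) % L, j] +
          phi[i, (j + 1) % L] + phi[i, (j - 1) % L])
    return 2 * p * p - p * nn + 0.5 * m2 * p * p + lam / 24 * p**4

def metropolis_sweep(phi, step=0.7):
    for i in range(L):
        for j in range(L):
            old, s_old = phi[i, j], local_action(phi, i, j)
            phi[i, j] = old + step * rng.uniform(-1, 1)
            # Accept with probability exp(-delta_S), else undo the change.
            if rng.uniform() >= np.exp(s_old - local_action(phi, i, j)):
                phi[i, j] = old

for _ in range(500):
    metropolis_sweep(phi)
print("<phi^2> =", (phi ** 2).mean())   # a finite, well-defined expectation value
```

Whether such lattice models possess a nontrivial continuum limit is, of course, exactly the triviality question quoted at the start of this thread.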
QED is not only a Jewel in physics, it is The Finest Jewel in Physics. There may be other fields which shine equally well on the experimental side, or on the theory side, but not to the same extent on both sides at once.
Dmitri> Please go along the path of the David Bohm's theory of Quantum Physics.
I would hardly call it a theory; it is at best a way to interpret what has been calculated much more simply in other theories. And it cannot be made to fit with special relativity. You cannot compute the electron (g-2) in Bohm's theory; not even in principle.
So it is a bad theory, and a wrong theory. Stay away from that path.
The Standard Model is simple and explanatory: it does explain all non-gravitational effects, whether classical or quantum, up to the energy scale of a few TeV, and it is taught to undergraduate students now. It does show how Newtonian mechanics is a special case. ``Simple'' is, however, not very meaningful, because it's not impersonal, whereas physics is impersonal.
Hi Stam.
From what I understand, the Higgs metastable particle prediction was made on the assumption, still to be clearly established, that there exist some "regularities" in the scale of energy levels at which metastable massive partons can momentarily materialize before decaying into simpler particles. Exponential? Logarithmic? I don't know. Rendered more complex by the complex partons.
Similar to the apparent energy scaling of the electron, then the metastable muon, and then the metastable tau, the latter two quickly decaying to the electron's stable form.
I agree that grounding research in the search for this type of regularities is possibly our only tool to finally see the whole picture.
One case that you mention, however, seems impossible: the case that a proton+proton collision could result in two photons.
The reason is simple. Protons are not elementary, but are each made of only 3 scatterable quarks, which are only marginally more massive than electrons. Below the interpenetration energy level, protons just rebound elastically off each other, both having the same sign of charge.
Above the interpenetration energy level, colliding two protons is like having our solar system "collide" with another solar system. In such a two-solar-system "collision", we can easily understand that unless both stars are directly headed for each other, both systems will simply keep on their way without any real collision; all chances are that none of the planets of either system will even come close enough to a planet of the other system to be visible, as both systems breeze through each other.
The most that can be hoped for in the LHC seems to be that one inner quark of one colliding proton hits an opposite-sign quark of the other proton, both quarks converting to energy, which means that only one third of the translational energy communicated to each proton has a chance of forming, together with the energy of the colliding quarks and that of their carrying energy, the momentary local energy overload that will generate the partons.
Quite interesting stuff.
Walter,
When you write
"The Coulomb force "emerges" from a non-relativistic approximation to QED."
That's what I meant when I wrote
"...still involving in the background the inverse square Coulomb force..."
You also write:
"Suppose we were not talking about scattering but about the Hydrogen atom. Would you make a similar statement about the "infinitesimal progressive motion" of the electron relative to the proton? Probably not "
My answer is absolutely yes, I would.
And yes also to your saying that "it does not make sense to use classical notions such as particle trajectories"
And the notion of temporal order definitely applies if you go with the concept of infinitesimally progressive motion.
You see, 66 years ago, when Feynman proposed quantizing the force by means of virtual photons, and when, as you say, this exploration was done, it was still unknown that protons are not elementary particles; this, combined with many other experimentally obtained confirmations, not the least of which is the Kotler et al. experiment confirming as recently as 2014 the inverse-cube magnetic interaction between two electrons held captive in mutual parallel spin alignment (see the first paper below), changes everything.
If interested in seeing what can be done with the hydrogen atom when using Coulomb induced infinitesimally progressive motion, you can get a glimpse of at least one possibility in the second paper below. Freshly peer-reviewed and published. This even allows limiting the QM statistical spread to within reasonable boundaries.
http://www.nature.com/articles/nature13403.epdf?referrer_access_token=yoC6RXrPyxwvQviChYrG0tRgN0jAjWel9jnR3ZoTv0PdPJ4geER1fKVR1YXH8GThqECstdb6e48mZm0qQo2OMX_XYURkzBSUZCrxM8VipvnG8FofxB39P4lc-1UIKEO1
http://www.omicsonline.com/open-access/on-adiabatic-processes-at-the-elementary-particle-level-2090-0902-1000177.php?aid=75602
The Higgs has to do with electroweak interactions, not strong interactions, so it doesn't make sense mentioning it in relation with partons.
At the LHC, when protons collide with protons, the Higgs is produced through gluon fusion. It's obvious that two protons have electric charge, so there are more particles than just two photons in the final state. The relevant subprocess is gluon + gluon -> photon + photon, and one of the ways this can be achieved is through a Higgs in the intermediate state. So, by eliminating the other ways, one finds this one.
André,
I have no doubt that a "concept of infinitesimal progressive motion" goes with "temporal order". The point I wished to make is that the language of classical physics -- I consider "infinitesimal progressive motion" as a notion of this language -- may be inappropriate for formulating real or fictitious problems in the quantum domain.
The Journal of Physical Mathematics is not identical with the Journal of Mathematical Physics -- or is it?
Walter> "Journal of Physical Mathematics"
is different, published by OMICS International, https://scholarlyoa.com/2016/05/10/new-name-same-horrible-business-omics-international/#more-7384
Dear Christian,
What a defeatist attitude on Rohrlich's part. Admitting defeat before even trying.
With all due respect, my opinion is that either he did not go to Euclid's reasoning-method school or, if he did, he did not understand the procedure.
My own understanding is that there is a general procedure which, when used, will lead to the correct solution of any problem, "scientific" or not.
Walter,
"Infinitesimal progressive motion" of charges has nothing to do with classical physics. It is the language of electromagnetism. The Coulomb force applies to charges.
And no, they are not the same journal. From what I understand, the Journal of Physical Mathematics is more mathematics oriented.
I prefer Randy Schekman's analysis:
Whoever wishes progress to resume in fundamental physics should pay attention:
https://www.theguardian.com/science/2013/dec/09/nobel-winner-boycott-science-journals
https://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science
Kåre,
Thank you for the enlightening link. Of course you have noticed that my question was not too seriously meant.
Dear Christian,
Yes, I perfectly understand that the article you refer to is not 100 % your opinion and that Rohrlich is a credible scientist.
I also understand that opinions, including my own, all are only personal views on any given issue, that have no real bearing on actual physical reality.
To me, his all-encompassing statement that scientific method does not exist, and that there is no general procedure that could lead to valid solutions, is only his opinion.
Here is a quote from Einstein on the same issue that I totally agree with:
" If we want knowledge that is certain, it must be founded on reason. Such is the case of geometry, such is the case of the principle of causality."
I have no idea on what criteria Rohrlich bases this opinion, but mine is grounded on the same criteria as Einstein's, which stems from the understanding that if all initial arguments leading to a conclusion can be irrefutably demonstrated as true, then the conclusion of the logical progression grounded strictly on these arguments can only be irrefutably true.
Without specific training, though, we tend not to become aware of this difficulty, and often remain somewhat neglectful regarding the choice of the premises that must be part of the initial restricted set considered to solve any given problem, and also regarding the rigor with which we "irrefutably" verify each premise of the restricted set.
I think that this is what leads to opinions such as Rohrlich's about scientific method.
I alluded to Euclid because, to my knowledge, the only discipline in existence that allows rigorous logical training without prior mathematical knowledge is plane geometry, or "Euclidean geometry". The fact that no prior mathematical knowledge is required places it within reach of absolutely everybody, but it seems that this discipline is no longer systematically part of schooling programs.
I know that Einstein had this training, hence his opinion completely opposite that of Rohrlich.
Regarding the relation of infinitesimally progressive motion to QM/QED, the notion is not problematic at all, in fact. It simply is not part of QM/QED. It is just a different and complementary approach to analyzing the submicroscopic level, one that allows highlighting other aspects of particle interactions.
I must say that notions like "classical physics" and all such categorizations are mostly meaningless to me. All I see is electromagnetic particles in interaction at the submicroscopic level.
Dear Christian,
I think there is a contradiction.
If you read his quote again, Rohrlich dismisses the possibility of the existence of a general procedure that can lead to the solution of a given scientific problem, which I understand to mean that it is not possible to arrive at conclusions that are certain.
Einstein on his part, says that to obtain knowledge that is certain (implying that this is possible), it must be founded on reason, and he gives geometry (I remember that he was specifically alluding to Euclid's) ...
I see a direct contradiction for the reason I gave. Of course, this is just my personal interpretation.
I am an optimist, so I believe that we can understand physical reality. We just have to keep at it with more and more precise self-consistent models until we hit it right.
Dear Christian,
Einstein never questioned QM. It is a very clean method that does its job perfectly. He questioned the Copenhagen school interpretation of QM, which is very different.
I think that scientific progress is not possible without rigorous confirmation of every parameter taken into the set considered to elaborate a model. Without this step, my view is that even self-consistency cannot be achieved, let alone correspondence with physical reality.
The method was found more than 2000 years ago by the Greeks. It is there for anybody to use at will. Since they did not have the math to support complex and abstract reasoning threads, they went for premise confirmation at every step of development of a logical thread, which is the gift they left us.
By nature, intuition is just "a hunch" that an avenue of solution could be right. It is certainly there and is a natural part of the reasoning process.
However, once a hunch (an intuition) has come up, there begins the rigorous work of assembling the probable set of parameters that could support the direction the hunch seems to lead to. Then comes the step of thorough verification and confirmation of each and every chosen parameter. Retention of the confirmed parameters. Rejection of the unconfirmed or unconfirmable parameters. Followed by rejection of the hunch if the remaining set of parameters can't support the direction intuited, and finally the thorough work of reasoning step by step until the conclusion is reached, oftentimes different from what the initial hunch seemed to lead to.
The outcome can only be a self-consistent model.
My own intuition always was that Einstein was on to something recommending Euclid theorems training. As witnessed by the fact that he came up with 2 self-consistent theories that are still going strong even after 100 years.
When Einstein talked of reason, I understood that he meant "reasoning", presumably logical reasoning from premises to conclusion, which implies thorough confirmation of all initial parameters for success, and which is implicit in his recommendation of training on Euclid's theorems.
Dear Christian,
Actually, Einstein questioned the Copenhagen school interpretation, which holds that QM completely describes submicroscopic physical reality and that it is useless to try to comprehend it more deeply due to the related Heisenberg uncertainty principle; he did not question that QM was complete for its purpose. QM is self-consistent.
I suggest reading "The Debate on Quantum Theory" by Franco Selleri to get the complete picture about the Copenhagen school of thought.
Just like Planck, Schrödinger, de Broglie and many other discoverers of the time, Einstein was simply convinced that more could eventually be understood about submicroscopic particles and their interactions.
Even today, Copenhagen school aficionados deny that electrons and protons can remain localized when accelerated to highly relativistic velocities, despite the glaring evidence that, in high energy accelerators, they are routinely guided on extremely precise trajectories, using relativistic equations and electromagnetic fields, to meet in collisions with cross sections that are way smaller than what the QM statistical spread allows at these velocities. No wonder that QM is not used in these facilities, where it is totally unfit for the job. Seems to me that the submicroscopic level is already de facto better understood in these facilities than for those who can't see beyond the self-inflicted limit of the Heisenberg uncertainty principle.
Regarding Einstein's theories, I said I found them self-consistent, enough for them to have lasted 100 years for those who equate self-consistency with matching physical reality. That is not my case.
Towards the end of his life, at the beginning of the 1950's, Einstein became convinced that the real deal lay with electromagnetism.
I fully agree with him.
At the time, the community rejected his intuition out of hand without giving it a second look, delaying related research by a good half century.
John Wheeler in 1995:
" A distinguished physicist even published in his very last years works, the main point of which is to claim that gravitation follows the pattern of electromagnetism. This thesis we cannot accept, and the community of physics, quite rightly, does not accept." "Gravitation and Inertia", Ignazio Ciufolini & John Archibald Wheeler, Princeton University Press, 1995. Page 391, right after equation (7.1.2).
I can also see that our ideals are not far apart despite our different angles on these issues.
I had no idea that just mentioning in passing that I was not surprised that Feynman was proud of his QED achievement would ignite a discussion that made me aware that my first-principles solution to the electron magnetic moment anomaly gives the same figure as the alpha-based solution, to which I can now totally relate, and that I would be involved in such an interesting conversation.
I also hugely enjoyed discussing with you. Indeed, the most interesting back and forth I have had in years. Thanks to you too.
I did not know about Einstein's late views on gravitation and electromagnetism. I think he was right then. I expressed the same "intuition" in a recent blog about Time and Space, which should be linked to Maxwell's equations. The essence of the correlation between time and space is expressed in the propagation of the photon when there is no inertia (thus the mystery of creation originates from its differentiation from the propagation of electromagnetic waves, i.e., the understanding of Maxwell's equations not in terms of the two fields but in terms of time and space). The weaving of inertia as a mechanism competing with propagation, leading to gravitation, is the formation of elementary particles with mass from the synchronization of time-space fluctuations, resulting from the influence of the past on the present, which is not the case for propagation without inertia: https://newschoolpolymerphysics.blogspot.com (the blog on Scorcese's dash moment).
Hi J.P.
I had a look at your blog and indeed found your "intuition" quite interesting regarding a possible link between electromagnetism and gravitation.
In my case, it was reading Wheeler's remark when his book came out in 1995 that got me interested in Einstein's own intuition in this regard, an intuition that I had not had myself. After exploring in search of what had come out of this idea, I observed that nobody had picked up the ball after Einstein passed away.
I was always interested in electromagnetism via the riddle posed by the fact that Maxwell's approach dealt with EM energy as waves, incompatible with the notion of localization of EM energy quanta, while Louis de Broglie asserted that it had to be possible to describe permanently localized photons while still remaining compatible with Maxwell's equations.
For inertia, the photoelectric proof demonstrates that photons have longitudinal inertia, and since light can be deflected by masses such as that of the Sun, this means that photons are bound to also have some modicum of transverse inertia. This definitely ties in with the idea of electromagnetic mass that was discussed at the beginning of the 1900's.
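For the record, whatever the interpretation one prefers, the measured deflection of a light ray grazing the Sun is the standard figure

$$\theta = \frac{4 G M_\odot}{c^2 R_\odot} \approx 1.75'' ,$$

about 8.5 microradians at the solar limb.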
Since I found absolutely no publications on these issues other than the work of de Broglie, I decided to dig in myself. The outcome was what de Broglie said was possible, that is, the description of a localized electromagnetic photon with an internal structure fully compliant with Maxwell's equations, one that incorporates longitudinal and partial transverse inertia. Maybe you will find my solution interesting. See the paper below.
Too long to explain here, but from this photon structure, it is possible to establish how photons of energy 1.022 MeV or more can mechanically decouple into massive electrons and positrons, so we now get mass from supposedly non-massive EM energy, and so on.
http://www.omicsonline.com/open-access/on-de-broglies-doubleparticle-photon-hypothesis-2090-0902-1000153.php?aid=70373
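A quick check on the 1.022 MeV figure quoted above: it is simply twice the electron rest energy, the minimum a photon must carry to convert into an electron-positron pair:

$$E_\gamma \geq 2\, m_e c^2 = 2 \times 0.511\ \mathrm{MeV} = 1.022\ \mathrm{MeV}.$$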
Dear Christian,
I have no specific reference, but it seems to me that it is general knowledge that with QM it is impossible to represent a moving electron other than as a non-localizable diffuse wave packet, the sum of whose "wavelets" amounts to the energy of the moving electron, and which cannot be related to a precise trajectory. With QM, localization becomes reality only when the wave function collapses, but then the electron is not moving.
Just recently, there was an extensive discussion on RG (I can't remember which one, but I could try to re-locate it) where it was put forward that if the electron was represented with the wave function as moving very slowly, then the wave packet could possibly be represented with a smaller cross section that could be seen as following a trajectory. I did not find this very convincing for high relativistic velocities though.
I never read nor heard that QM was used in any context in high energy accelerators, at least no mention is ever made in specialized refs that I know of. Not in Stanley Humphries Jr.'s "Principles of Charged Particle Acceleration" anyway. Only relativistic mechanics, electromagnetic fields and LRC resonance math are apparently used.
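For concreteness, the workhorse relations in such references are indeed classical: relativistic mechanics and the Lorentz force for guidance, and LC resonance for the RF cavities,

$$\frac{d(\gamma m \vec{v})}{dt} = q\left(\vec{E} + \vec{v} \times \vec{B}\right), \qquad f_0 = \frac{1}{2\pi\sqrt{LC}},$$

with no wave function appearing anywhere.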
In my view, accelerator beams do not contradict QM, because QM simply is not in the picture. From my understanding, QM can't describe the very tightly collimated individual particles that are permanently and verifiably localized "as they move" in the narrow tubes and when they actually collide at full velocity, being guided with high precision on their precise trajectories to the very moment of collision. QM describes other aspects of particle energy, but apparently not these. Just like relativistic mechanics cannot be used to describe atomic orbitals. Different tools used for different purposes.
I am no expert, so maybe you could get the real story from an experimentalist working in one of the accelerator facilities. It would be a jaw-dropping surprise to me if QM turns out to be used in guiding the beams.
If you can easily re-locate a reference to this paper showing the derivation from the Dirac equation, I would be interested. I suspect, though, that the end result of the derivation would be some of the relativistic mechanics equations, EM field equations or LRC resonance math that are used. Quite interesting if this linkup can be made.
Dear Christian,
I located the thread where wave function vs electron localization while moving was discussed.
Here it is:
https://www.researchgate.net/post/What_happens_with_two_identical_fermions_identically_polarized_whose_paths_cross_one_another
André> I never read nor heard that QM was used in any context in high energy accelerators, at least no mention is ever made in specialized refs that I know of.
I suspect this means that the effect of such "quantum diffusion" is so small that it is not even worth mentioning. There must be some classical diffusion within each particle bunch, due to Coulomb interactions between particles, which is corrected for (implicitly handling any quantum contributions also?).
But your question is interesting, as a matter of principle. And it would be very strange if nobody has spent thoughts on it. F.i. John Bell, who did some work on accelerator physics. If he has not written anything about quantum effects in that connection, it must have been because it is not worth mentioning (is my guess).
André> Just like relativistic mechanics cannot be used to describe atomic orbitals.
This is wrong. It can be used, and it is being used. Relativistic treatment of the hydrogen atom is standard textbook material; I think it was covered in my first course on quantum mechanics. I was later surprised to discover that there had been conferences devoted to relativistic effects in condensed matter. In current condensed matter research (like spintronics), the Rashba Hamiltonian is popular. It is derived from relativistic effects in atomic physics (which is a subset of QED -- which is one reason why QED is the theory of almost everything).
Of course, relativistic effects should be taken into account in atomic physics, especially for heavy atoms. Spin-orbit interaction is a relativistic effect.
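To make "relativistic treatment of the hydrogen atom" concrete: to leading order in the fine structure constant $\alpha$, the Dirac equation shifts the Bohr levels to

$$E_{n,j} \simeq -\frac{13.6\ \mathrm{eV}}{n^2}\left[1 + \frac{\alpha^2}{n^2}\left(\frac{n}{j + \tfrac{1}{2}} - \frac{3}{4}\right)\right],$$

the familiar fine structure, with the spin-orbit interaction supplying the $j$-dependence.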
But a little correction about the hydrogen atom: what is treated is not really the hydrogen atom but an electron in an external Coulomb potential. There is a small problem: in the relativistic treatment one cannot introduce center-of-mass coordinates to separate the variables.
Dear Andre,
I have not read your article yet (browsing through it has whetted my appetite though), but bouncing off your suggestion that there might be proof for the longitudinal inertia of the photon, or some kind of inertia to explain the bending of straight propagation by the influence of gravitation: this explanation may just derive from your mechanical approach, as opposed to a thermodynamical approach, to the definition of propagation and interactions. As I suggest in the Scorcese essay, it may be that a dissipative term is missing in our fundamental equations of propagation, which resolves into questioning the stability in time of a given solution (propagation vs interactions). Space and time fluctuations may be variable in time, certainly influenced by past solutions when those include inertia. I am not sure that, if dissipation is incorporated as part of the mechanism of the time vs space interplay, the photon will have any inertia at all.
Kåre,
As far as I recall, Bell was so busy with his full-time dispute with Einstein and other determinists, trying to prove that nothing could be learned about the submicroscopic level besides what QM allows, that I would be surprised if such an unrelated side issue ever crossed his mind. So no surprise there for me in his case.
You say my question is interesting as a matter of principle. Seems to me that the issue that it raises is more than rhetorical.
The real question being raised, is in fact the following:
Is QM directly being used in any way, shape or form in the actual guidance of particle beams in high energy accelerators?
The answer can only be yes or no.
If it is yes, I would greatly appreciate being referred to any article or technical ref describing how it is being used for the purpose.
Why not ask actual working experimentalists? I don't happen to know any myself.
So you say that relativistic mechanics can replace QM in specifying the possible orbitals in the hydrogen atom? I would be very interested in any paper explaining how it can account for any specific quantum number set for any of the possible orbitals.
Strangely, I never thought I would ever find myself defending QM as the only known tool that can do the job. I must be dreaming.
But my comparison was only rhetorical, just trying to illustrate that different tools are used for different purposes.
Dear J.P.
For photon longitudinal inertia at least, the photoelectric proof is what bagged Einstein his Nobel prize. This is what first attracted my attention to this issue; I then found that it made sense and fit well with other characteristics of electromagnetic energy.
If you find time to read my paper, you will be able to judge for yourself if this makes sense to you from your perspective.
Dear Christian,
Thanks for this very interesting link. Great source of info. At first glance, complementary to the Humphries reference in many respects. This will be one of my refs in the future.
But this is a long read that I don't think I can find time to really address. I browsed through it, trying to find whether QM was even mentioned as being used to guide the beams. I found nothing specific for the storage rings, nor the wigglers, nor linear acceleration.
If you can point me to where this is mentioned, I would greatly appreciate it.
Dear Christian,
The arXiv paper looks to me more theoretical than practical. Unless I misunderstand, it does not relate to current beam guidance practices.
André> As far as I recall, Bell was so busy with his full-time dispute with Einstein
I think you must be mixing up John Bell with Niels Bohr. Right?
André> So you say that relativistic mechanics can replace QM in specifying the possible orbitals in the hydrogen atoms
I did not say that, I say that there is a subfield of QED referred to as relativistic quantum mechanics, which is used to f.i. calculate the hydrogen spectrum more accurately. And the corresponding wave functions. It is even more important for the description of electrons in the inner shells of heavy atoms, since these have relativistic velocities.
André> Seems to me that the issue that it raises is more than rhetorical.
Certainly! It is a matter of doing quantitative calculations. But I believe Jagannathan summarizes the situation correctly:
"The traditional approach to accelerator optics, based mainly on classical mechanics, is working excellently from the practical point of view. However, from the point of view of curiosity, as well as with a view to explore quantitatively the consequences of possible small quantum corrections..."
I.e., it can be done, but it is not important to do (on skimming the paper, I note that he does refer to some exceptions).
Kåre,
Nope, no confusion between John Bell and Niels Bohr.
If you were talking about the John Bell that came up with the Bell theorem, he is also the one I meant.
Isn't he the John Bell who was so busy full-time trying to prove that nothing could be learned about the submicroscopic level other than what QM allowed, against the objections of Einstein and many other discoverers?
If so, this is definitely the one I meant. Strange endeavor. How could anyone prove that we know all that can be known about any aspect of physical reality!
As for the "I say you say" bit, let us put things in temporal order:
In a previous post, I wrote:
" QM describes other aspects of particles energy but apparently not these (referring here to actual guidance of particles beams in high energy accelerators). Just like relativistic mechanics cannot be used to describe atomic orbitals. Different tools used for different purposes."
Then you wrote:
"André> Just like relativistic mechanics cannot be used to describe atomic orbitals.
This is wrong. It can be used, and it is being used. Relativistic treatment of the hydrogen atom is standard textbook material; I think it was covered in my first course on quantum mechanics. "
I then commented:
"So you say that relativistic mechanics can replace QM in specifying the possible orbitals in the hydrogen atom?"
And you just wrote:
"André> So you say that relativistic mechanics can replace QM in specifying the possible orbitals in the hydrogen atoms
I did not say that, I say that there is a subfield of QED referred to..."
Well, this is not what you first wrote. You wrote:
" This is wrong. It can be used, and it is being used. Relativistic treatment of the hydrogen atom is standard textbook material; I think it was covered in my first course on quantum mechanics."
Note that I was referring to relativistic mechanics, not relativistic treatment.
As for the question of whether QM is currently being, or ever has been, directly used in any way, shape or form in the actual guidance of particle beams in high energy accelerators: as I said, the only possible answers are either yes or no.
If yes, then it should be easy to get actual confirmation from a working high energy accelerator experimentalist, along with a reference to formal documentation explaining how this is done.
It seems to me that any other argument enters the category of "No, case dismissed".
I have no doubt that many in the community would be interested in having confirmation either way.
André> Isn't he the John Bell who was so busy full-time trying to prove that nothing could be learned about the submicroscopic level other than what QM allowed, against the objections of Einstein and many other discoverers?
No, that does not describe the John Bell I knew. I don't think Einstein made any objections to his works (Einstein died in 1955, almost a decade before Bell's famous 1964 paper). I am not sure if Bell really liked quantum mechanics, in particular the quantum mechanics of fermions. But he analyzed some of its properties more deeply than anyone before him. He wrote papers on accelerator physics, applying classical mechanics, presumably relativistic classical mechanics.
When you referred to relativistic mechanics in connection with atomic physics, I automatically interpreted you to mean relativistic quantum mechanics (as should be quite obvious from my first answer); hence the confusion.
André> many in the community would be interested in having confirmation either way.
Why don't you contact someone in the accelerator group at CERN and ask (some may also be members of ResearchGate)? My hunch is that the answer will be no.
I did an estimate of how the wave function of a free particle with the mass of an electron should spread with time (as measured in its instantaneous rest frame). The answer comes out to a width of about 1 cm in 1 second (10 cm in 100 seconds, 1 meter in 10000 seconds...). This is for coherent evolution. In a real accelerator there are synchrotron radiation and other processes which destroy coherence.
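A minimal numerical sketch of that estimate, using the textbook result that a free Gaussian packet, minimized over its initial width, spreads to no less than $\sqrt{\hbar t/m}$ after time t; the script is illustrative only:

import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg

def min_width(t):
    # Minimum achievable width (m) after coherent free evolution for
    # time t, minimized over the initial Gaussian width: sqrt(hbar*t/m).
    return math.sqrt(HBAR * t / M_E)

for t in (1.0, 1.0e2, 1.0e4):
    print(f"t = {t:7.0f} s  ->  width ~ {100 * min_width(t):6.1f} cm")
# Prints roughly 1 cm, 11 cm and 108 cm: the 1 cm / 10 cm / 1 m estimate.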
Kåre,
Then we are not talking about the same John Bell. Sorry, I don't know this one's work.
The John Bell I thought you were talking about is John Stewart Bell, born in 1928, and he sure was involved in a long-winded dispute with all determinist physicists about whether or not any more info could be had about submicroscopic particles and their interactions. This is history.
Regarding whether or not I was referring to "relativistic mechanics" or "relativistic quantum mechanics", I thought it was clear enough that I was referring to "relativistic mechanics" by not inserting the word "quantum" in between "relativistic" and "mechanics".
Regarding my contacting someone at CERN to ask about QM being used to guide particle beams in their facility: I am not the one disputing the conclusion that QM is not being used for the purpose.
From all the references I have, initially and mainly the Humphries work that I already gave in reference, and now also the new ref by Chen and Reil that Christian gave me, which I scanned a little more, there is no trace of any use of QM in particle beam guiding maths, unless I missed it this time around.
The case has long been settled for me. The answer to my satisfaction is still no, it is not being used.
If you believe otherwise, up to you, or to anyone else to verify to your and/or their satisfaction.
Unless you bring me direct confirmation either way, any more argumentation on this issue is of no interest to me.
What John Stewart Bell (born 28 June 1928) did was not engage in a "long-winded dispute" about philosophical dilly-dallying. He demonstrated, by fairly simple mathematical calculations and logic, that the predictions of Quantum Mechanics, as formulated, are inconsistent with any local hidden variable explanation of the same. What he did was therefore exactly the opposite of engaging in a long-winded dispute over interpretations: he showed that the question could be decided by experiment.
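For readers who want the one-formula version of Bell's result, in the CHSH form: for any local hidden variable theory the correlations obey

$$S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \leq 2,$$

while quantum mechanics predicts values up to $2\sqrt{2} \approx 2.83$ for suitably chosen detector settings, so experiment can discriminate between the two.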
I don't know if Bell was in favor of one outcome over the other, but it has been my impression that all physicists having any opinion on the issue were certain that the outcome had to be in favor of quantum mechanics. Which has been the case for all experiments performed.
I once was involved in some play with a fully deterministic (but chaotic) model for the quantum measurement process. Alas, it was essentially dead on arrival, due to the Bell inequalities.
Regarding descriptions of mechanics, it is common to use a 2x2 matrix classification:
              Non-relativistic    Relativistic
Classical            x                  x
Quantum              x                  x
Kåre,
My conclusion is that Bell was engaged in a long-winded dispute with determinist physicists, trying to discourage on principle any further formal research to better understand the submicroscopic level.
You say that this can be decided with experiments. I say that it is impossible to prove that we know all that there is to know about physical reality.
The same recommendation based on principle can be found in Feynman's 1948 paper stating that it was of no interest to be able to say how the situation would look at each instant of time during a collision and how it progresses from instant to instant.
As I mentioned already, I totally disapprove of any research interdiction based on principle about any aspect of physical reality, and I think that this has been an unfortunate hindrance to timely exploration of the submicroscopic level.
Up to you or anybody else to differ, but this is my opinion, and I have no qualms about airing it.
Korzybski wrote in 1948:
"The evolution of our human development may be retarded, but it cannot be stopped."
I totally agree with him.
Dear Christian,
If you read back, you will see that I only said that QM was not used in high energy accelerators to guide particle beams.
I did not speculate that QM would not be applicable. I stated my conclusion that it is not being used, after not finding any QM math in the Humphries reference nor anywhere else.
I have no argument for or against. If it is being used, this would highly surprise me for the reason I gave, but if anybody can refer me to confirmation of use, I will just as readily change my tune. Until then, all evidence leads me to my current conclusion that it is not used.
Christian> the submicroscopic level is already de facto better understood in these facilities than for those who can't see beyond the self-inflicted limit of the Heisenberg uncertainty principle.
I don't think the current phase-space localizations of particles in accelerators are in conflict with the Heisenberg uncertainty relation; they are not (yet) that well localized.
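To put rough numbers on "not that well localized", here is a sketch with assumed, round-number beam parameters (loosely LHC-like; the figures are illustrative, not quoted from any facility):

import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
EV = 1.602176634e-19    # J per eV
C = 2.99792458e8        # speed of light, m/s

sigma_x = 16e-6         # assumed transverse beam size, m
p = 6.5e12 * EV / C     # momentum of an assumed 6.5 TeV proton (E ~ pc), kg*m/s

dp_min = HBAR / (2 * sigma_x)  # Heisenberg bound: dx * dp >= hbar/2
print(f"dp_min/p ~ {dp_min / p:.1e}")
# ~ 1e-15, many orders of magnitude below real beam momentum spreads (~ 1e-4).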
But one may wonder why the almost inevitable spread of the wavefunction under unitary evolution doesn't lead to a diffusion-like delocalization. I believe the answer to this is that, due to synchrotron radiation and interactions with the environment, one can no longer make superpositions of the particle wave function alone. The rules of quantum mechanics then instruct us to add probabilities instead of probability amplitudes. This leads to a classical stochastic description (where in this case the quantum-induced stochastic part is negligible compared to classical contributions), similar to how radioactive decay chains are described.
When should we add probabilities instead of amplitudes? When there is a part of the total system (often un-observed) which is different.
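In formulas, for two alternatives with amplitudes $\psi_1$, $\psi_2$ tagged by environment states $|E_1\rangle$, $|E_2\rangle$, the detection probability is

$$P = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}\left(\psi_1^{*}\psi_2\,\langle E_1|E_2\rangle\right),$$

so once the environment keeps a record, $\langle E_1|E_2\rangle \to 0$, the interference term dies and the probabilities simply add.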
Dear Andre,
I found your paper very instructive in bridging some of the ideas I have developed to understand interactions in polymers, in particular the reason for entanglements, and other possible applications of this new statistical framework to other systems in interaction, at various scales, building up matter. I am learning about issues in other domains of physics (here, particle interactions) which could perhaps be addressed from a very different new angle. Your paper in that regard is quite helpful (and very clear). Thank you.
Christian> So in my view there is no intrinsic problem with a quantum description of particle beams.
Since I have not tried to do the math, I am in no position to protest. What I do know is that it is much more difficult to prevent the spread of a wavefunction than to govern its average position. Anyway, the general considerations about coherent versus incoherent evolution should still be valid. Sometimes it may be wrong to apply the Schrödinger (or Dirac) equation for too long!
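Governing the average but not the spread is just Ehrenfest's theorem: the centroid obeys

$$\frac{d\langle x\rangle}{dt} = \frac{\langle p\rangle}{m}, \qquad \frac{d\langle p\rangle}{dt} = -\left\langle\frac{\partial V}{\partial x}\right\rangle,$$

so for potentials at most quadratic in $x$ (ideal quadrupole focusing, for instance) the mean follows the classical trajectory exactly while the width evolves on its own.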
Christian> there are indeed scientific papers that derive the usual beam optics equations on the basis of the Dirac equation.
Have you seen other works than those by Jagannathan? I get the impression that the field has gone to sleep with him. I believe the spin dynamics is already treated by quantum methods (what else can you do with spin-1/2?), and also radiation processes (since Schwinger).
Dear J.P.
Thank you for your appreciation. Glad that this new angle on EM energy might be of use in your own research.
You probably noticed that the expanded local space geometry described is not really required to establish the LC equations, although it helps tremendously in mentally "visualizing" the electromagnetic oscillation.
But if you eventually follow the trail and explore how the unidirectional half of the energy of a photon of 1.022 MeV or more mechanically acquires omnidirectional inertia as the photon converts to an electron-positron pair, whose total complements of energy display omnidirectional inertia (mass), then you will see that this cannot be mechanically explained without the 3-spaces geometry. The same applies when you get inside protons and neutrons.