It is well known that several renormalized versions of a given theory may emerge, depending on the chosen regularization method, the choice of subtraction scale/scheme, the perturbative-vs-nonperturbative implementation, etc. Hence, the above question bears upon a fundamental aspect of QFT, namely uniqueness. For additional explanations, see my responses to Remi Cornwall and Stam Nicolis below.
Response to Remi Cornwall: In response to your answer to my question, I would appreciate it if you could perhaps illustrate your answer with example(s) from QFT. (As far as I am aware, almost all renormalized versions of a given theory have "correct experimental predictive power" and "correct physics" consistent with the basic principles of QFT.)
Yes. If the bare Lagrangian is the most general one consistent with the symmetries, then the statement that it is renormalizable means that the renormalized Lagrangian, constructed when corrections beyond free fields are taken into account, will be of the same form as the bare Lagrangian; only the coefficients of the terms will change.
This is a mathematical statement and is independent of the relevance of the particular Lagrangian for describing physical phenomena.
Response to Stam Nicolis' answer to my question: Dear Dr. Nicolis, I agree with your views on my question as far as the definition of renormalization in QFT is concerned; form-invariance of the bare Lagrangian after renormalization is indeed the definition of the renormalization programme. However, my question on the 'uniqueness' of the renormalized Lagrangian did not concern that, but rather the well-known fact that widely different values of the renormalized parameters (the "coefficients of terms" in the renormalized Lagrangian, as you have put it), such as coupling strength, mass, etc., emerge in different schemes/methods of subtraction, regularization, etc. Some prominent illustrative examples could be the question of "triviality" in the self-coupled scalar field case, and the scale and scheme dependence in QCD. In that sense, the question asked concerns the right choice of these parameters for observables, purely on the basis of some theoretical principle, if known/possible. (An example could be the "Principle of Minimal Sensitivity" (PMS) proposed by P. M. Stevenson.)
Renormalizability means the coefficients are independent of the cutoff procedure; their values depend on the initial conditions of the renormalization group equations. They are also independent of the scheme, which is why it is meaningful to call them physical.
The statement about the ``right choice'' of the renormalized parameters doesn't make sense. What does make sense is what is the ``right choice'' of the bare parameters, that leads to a critical point, where the renormalization conditions fix the values of the renormalized parameters. Whether the renormalized coupling constant is, in fact, zero is a physical statement and doesn't depend on any scheme, for instance.
The bare parameters aren't physical; they are just used to construct quantities that can be shown to be independent of schemes etc. Only such physical quantities make sense.
@Stam Nicolis
If the Lagrangian is T-invariant, what will happen? (T = time-reversal operator.)
B.Rath
Response to Biswanath Rath: Dear Dr. Biswanath Rath, your question above does not at all relate to the question asked by me. Hence, may I ask you to please delink it from the threads to my question/answers. If you prefer, you may ask it as a separate question from your own page. Bimal Mahapatra
Dr. Mahapatra,
You raised an important question which, unfortunately, is often either glossed over or simply ignored in many standard QFT textbooks.
In general, the Renormalization Group flow is a multi-dimensional system of nonlinear coupled equations whose end point is typically associated with a unique fixed point (or equilibrium) solution. In general, however, such a system of equations:
a) displays sensitivity to initial conditions (for example, sensitivity to the initial choice of the bare parameters or the UV scale),
b) evolves in the presence of external perturbations (non-perturbative in nature or linked to out-of-equilibrium conditions),
c) ends up on an attractor with a more complex structure than a fixed point (such as, for example, a limit cycle or a strange attractor).
A strange attractor of this kind is associated with the onset of chaotic behavior, typically has global stability and may provide a natural explanation for the hierarchical structure of Standard Model parameters, see:
http://www.worldacademicunion.com/journal/1749-3889-3897IJNS/IJNSVol3No3Paper02.pdf
Dear Bimal Mahapatra,
It is indeed true that there is no unique prescription to calculate the renormalised version of the coupling constants, and that in general applying different subtraction methods leads to different renormalised couplings in the Lagrangian. However, this difference is unphysical in the following sense: the physical prediction of the theory is not the value of the renormalised coupling, but its scale dependence. This enters through the logarithmic terms, usually in powers of $\log (m^2/\mu^2)$, where $\mu$ is the renormalisation scale. Hence, while the finite part of the renormalised parameters is not a prediction of the theory (indeed, this finite part has to be fixed by some experiment), the scale dependence is a physical prediction, and thus all renormalisation schemes must give the same scale dependence, which is in fact true. RG flow then improves the calculation by resumming all the loop contributions to a given running.
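To make the scheme-independence of the running explicit, here is a minimal one-loop sketch for a self-coupled scalar (a standard textbook example, not tied to any specific calculation in this thread):

$$ \mu \frac{d\lambda}{d\mu} = \frac{3\lambda^2}{16\pi^2} + O(\lambda^3) \quad\Rightarrow\quad \lambda(\mu) = \frac{\lambda(\mu_0)}{1 - \frac{3\lambda(\mu_0)}{16\pi^2}\,\log(\mu/\mu_0)} . $$

Two schemes whose couplings differ by a finite redefinition $\lambda' = \lambda + c\,\lambda^2 + \dots$ share the same leading coefficient $3/(16\pi^2)$; only the initial value $\lambda(\mu_0)$ is shifted.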
Putting this a different way, as other answers have already mentioned, we can state: different renormalisation schemes only differ by the choice of initial conditions of the RG flow trajectory, i.e. the value of the coupling constants at some reference scale $\mu_0$. This has not been proven rigorously, but I would say it is a necessary condition for the renormalisation paradigm to be physically motivated.
Dear Stefano,
I agree that the scale dependence of renormalised couplings needs to be independent of the renormalisation scheme.
But what we currently know how to work with are only perturbative renormalisation schemes that may or may not be applicable to non-perturbative or far-from-equilibrium conditions. We do not have compelling clues on how to reliably define the deep UV sector of QFT. For example, is the deep UV sector compliant with all consistency requirements mandated by low-energy QFT? Can we convincingly eliminate Landau poles or deal with triviality problems? How is gravity to be accounted for at these scales?
Dear Ervin,
My previous answer should be read within the perturbative treatment of QFTs: consider, for example, the theory of a self-interacting scalar. Perturbative renormalisation of the 1-loop diagrams leads to corrections to the mass and quartic couplings, $\delta m$ and $\delta \lambda$, to a given order in perturbation theory. My claim is that the physical predictions of the theory are the scale dependence of these couplings, while their value at some scale should be measured by experiment.
Regarding your comment, much progress has been made recently on non-perturbative and out-of-equilibrium regimes. First, RG studies are a step in the direction of non-perturbative physics: RG flow resums the leading contribution of all orders in perturbation theory, by demanding that the effective action does not depend on the renormalisation scale $\mu$.
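The resummation point can be illustrated numerically. The sketch below (one-loop $\phi^4$ running only; the coupling value and scales are illustrative, not taken from any measurement) compares the RG-improved coupling with truncated fixed-order series, showing how successive perturbative orders converge toward the resummed result:

```python
# Sketch: one-loop RG running of the quartic coupling in phi^4 theory.
# The RG-improved ("resummed") coupling sums the leading logarithms that a
# fixed-order perturbative series only reproduces term by term.
import math

def lam_resummed(lam0, mu, mu0):
    """Closed-form solution of mu dlam/dmu = 3 lam^2 / (16 pi^2)."""
    t = math.log(mu / mu0)
    return lam0 / (1.0 - 3.0 * lam0 * t / (16.0 * math.pi**2))

def lam_fixed_order(lam0, mu, mu0, order):
    """Truncated series: lam0 * sum_k (3 lam0 t / 16 pi^2)^k up to 'order'."""
    t = math.log(mu / mu0)
    x = 3.0 * lam0 * t / (16.0 * math.pi**2)
    return lam0 * sum(x**k for k in range(order + 1))

lam0, mu0, mu = 0.5, 1.0, 1.0e3   # illustrative values
full = lam_resummed(lam0, mu, mu0)
# Low fixed orders undershoot; adding terms converges toward the resummed value.
# (The same closed form also exhibits the Landau pole, where the denominator
# vanishes at sufficiently large mu.)
print(full, lam_fixed_order(lam0, mu, mu0, 1), lam_fixed_order(lam0, mu, mu0, 8))
```

The function names and parameter values here are assumptions for illustration only; the physics content is just the one-loop geometric series.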
Furthermore, functional methods in RG flow can be used to determine, although perhaps not definitively prove, the existence of non-trivial interacting fixed points of the RG flow in the non-perturbative regime, which is, if you like, a way of defining a well-behaved UV completion: it should possess a conformal interacting fixed point. This is the prescription of asymptotic safety programs, which also aim to address the question of whether the current theory of gravity, based on extensions of Einstein's theory, can be UV completed in the former sense. Much progress has been made in this direction in the past years.
The question of the Landau poles is delicate, but it has been argued (e.g. https://arxiv.org/pdf/0912.0208.pdf) that the existence of the fixed point leads to a cancellation of the Landau pole of QED, rendering an asymptotically safe standard model plus gravity a candidate for UV completion.
Of course, since in the regime you are talking about we do not have much valid experimental data, most of this is highly speculative.
Ref: https://en.wikipedia.org/wiki/Asymptotic_safety_in_quantum_gravity
https://en.wikipedia.org/wiki/Ultraviolet_fixed_point
Dear Stefano,
I agree, both asymptotically safe and UV complete models are highly speculative and need to be taken with a grain of salt.
Unfortunately, as I previously mentioned, not too many textbooks and research articles point out the inherent limitations of the perturbative RG program and the speculative nature of its extensions.
I am indeed grateful for the very lively and illuminating discussion generated by my question and I sincerely thank all the participants, in particular Drs. Goldfain and Lucat. I must admit that I have been pondering this question for about three decades, ever since I read the papers of Prof. P. M. Stevenson and collaborators on the Gaussian Effective Potential (GEP) approach for the self-coupled scalar field, which demonstrated two different renormalized versions, the "precarious" and the "autonomous" as these were named by the authors, leading to very distinct physical predictions of the theory. (Let us note in that context that the GEP approach is a non-perturbative scheme and the authors have demonstrated its superiority over the loop-expansion method and the conventional order-by-order implementation of the standard perturbation approach. It may also be noted that the bare coupling can be extinguished in the UV limit in the "precarious" version, yet a finite, non-trivial value for the physical, renormalized coupling is generated, fully consistent with the prescription of the renormalization programme.) As has been noted by several authors (and pointed out by Dr. Goldfain herein), the physical implications of these results for the Standard Model are severe, particularly in the context of the magnitude of the self-coupling strength that emerges for a 125 GeV Higgs, when considered together with the "proof" of triviality.
To come back to the focus of the ongoing discussion, let me make the following additional comments:
(1) The physical content of the renormalization programme is perhaps not exhausted by the RGE ("Renormalization Group Equations") flow, the fixed-point structure of the theory, etc.
(2) A certain inherent arbitrariness prevails and may still persist, owing to the very asymptotic nature of the analysis (a very crude and naive example could perhaps be the subtraction of infinities: (infinity) - (infinity) = finite but arbitrary!), as well as to the freedom to choose initial conditions.
(3) No experiment can be performed at infinite resolution so as to decide decisively on the issues discussed/raised here, thereby necessitating further theoretical constraints over and above the standard prescriptions.
(4) It may be important to note that our present inability to transcend the standard perturbation approximation (SPA) (as defined by the power-series expansion in the coupling strength) may be irrelevant for the actual physical consequences of QFT, as has been demonstrated by alternative non-perturbative approximation schemes. Thus it is conceivable that qualitatively different consequences of the theory, such as the results on "triviality" and "asymptotic freedom", may emerge in non-perturbative approaches, if and when the same are achievable.
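The finite-part arbitrariness mentioned in (2) can be made concrete (assuming dimensional regularization, as a standard example): a one-loop divergence typically appears as

$$ \frac{1}{\epsilon} - \gamma_E + \log 4\pi + \log\frac{\mu^2}{m^2} + \text{finite constant} , $$

and the MS scheme subtracts only the $1/\epsilon$ pole, while $\overline{\rm MS}$ also removes $-\gamma_E + \log 4\pi$. The leftover finite constant is precisely the scheme-dependent "arbitrary" piece; it is absorbed into the renormalized parameters and cancels in physical observables.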
There's no problem in studying quantum field theory beyond perturbation theory-the definition of a quantum field theory at the critical point of a lattice theory has been used for more than forty years. There are non-trivial issues regarding how close one can get, in practice, to the critical point and how well one can control the lattice artifacts.
That experiments have finite resolution isn't relevant-the issue is, whether the different ways of defining the finite resolution describe different physical effects, or not. If the theory is renormalizable, it is possible to address this question; if the theory isn't, it's, by definition, impossible since, in the latter case, different regularization schemes do define different theories.
@ Stam Nicolis
"There's no problem in studying quantum field theory beyond perturbation theory-the definition of a quantum field theory at the critical point of a lattice theory has been used for more than forty years."
It is true that several non-perturbative RG methods have been successfully developed in non-equilibrium statistical mechanics, lattice field theory and condensed-matter applications. Despite these advances, it is presently unclear if these methods are universally applicable or robust enough to describe the physics of strongly coupled systems or the deep UV sector of HEP.
For example, it has been known for a while that understanding the IR limit of QCD is far from trivial, due to color confinement and chiral symmetry breaking. Challenges still exist in formulating a consistent theory for the spectrum of hadron masses outside lattice simulations. The same can be said about glueballs and the quark-gluon plasma (QGP).
Likewise, a universally accepted non-perturbative theory of Quantum Gravity is missing and nobody knows if it is even possible or falsifiable.
Confinement in non-Abelian gauge theories has been understood for SU(2) since the 1980s; cf. http://inspirehep.net/record/157909 This has been extended to SU(3) and other Lie groups, cf. the follow-up papers.
This doesn't constitute a mathematical proof, of course; but that's due to lack of control over tunneling between topological sectors. On the other hand, the rigorous construction of the trivial topological sector, https://projecteuclid.org/download/pdf_1/euclid.cmp/1104253284 does support the lattice approach referred to.
The study of chiral symmetry and its breaking in QCD has become a technical problem and is no longer a conceptual one. It's now possible to perform simulations at physical pion masses. Of course there is scope for algorithmic improvement.
It doesn't make sense to talk of a consistent theory of hadron masses outside lattice simulations-lattice QCD is the consistent description, since it does have a scaling limit.
The difference between quantum gravity and the Standard Model is that the gauge group of gravity is noncompact. That's why the lattice regularization that works for the Standard Model, doesn't work for gravity. And the non-compactness has a physical reason: the generic formation of black holes, spacetime regions that are, classically, causally disconnected.
One way to describe it consistently is through the holographic correspondence, where the quantum theory of gravity is mapped to the classical field theory on the boundary. Here there are many conceptual issues that remain to be elucidated.
@ Stam Nicolis,
I don't disagree with your assessment. My only point was that, as of today, there are too many open questions to declare that a full-blown non-perturbative understanding of field theory exists.
"It doesn't make sense to talk of a consistent theory of hadron masses outside lattice simulations-lattice QCD is the consistent description, since it does have a scaling limit."
I take a different standpoint here. Lattice simulations and algorithmic enhancements are helpful indeed, but the lack of an underlying theory indicates that our fundamental understanding of low-energy QCD remains an open issue.
"One way to describe it consistently is through the holographic correspondence, where the quantum theory of gravity is mapped to the classical field theory on the boundary. Here there are many conceptual issues that remain to be elucidated."
True, holographic correspondence is another example of an incompletely proven model. In fact, there is quite a debate in the literature today about the validity of the AdS/CFT conjecture.
I think it only fair to say that we are making progress in developing non-perturbative field theory from the bottom up, but the end is not yet in sight.
A fairly comprehensive review of challenges in QCD and strongly-coupled theory can be found in:
https://arxiv.org/abs/1404.3723
In my opinion, since QCD is a rich and complex field of research, it makes sense to continue the discussion elsewhere.
Dear Prof.Goldfain,
Thanks for the reference provided-a comprehensive review containing a wealth of material. Bimal Mahapatra
The underlying theory of the strong interaction is QCD. It describes quarks and gluons at energies above a few hundred MeV and bound states thereof at scales below that range. The coupling constant is small enough to make perturbative calculations meaningful at energies above ~1 GeV, and the lattice framework can match what happens at lower energies. That's what matters. The Lagrangian that describes quarks and gluons at ``high'' energies can describe, also, hadrons at low energies-that's what lattice QCD is all about. There aren't different theories, just different approximation schemes to the same theory. And the Lagrangian for QCD is unique. There's nothing more and all calculations are well defined.
The holographic correspondence is relevant for gravity-it's tested on strongly coupled field theories, in flat spacetime, as is QCD at ``low'' energies, since a lot is known about QCD at strong coupling. That's how it's possible to understand what one would like to prove. For theories in flat spacetime the lattice is an unambiguous formulation from first principles.
@ Stam Nicolis,
"There's nothing more and all calculations are well defined"
To the contrary, several researchers point out that lattice QCD, despite making significant progress in the last decade or so, is not yet the ultimate answer:
1) Chapter 3 in http://www.usqcd.org/documents/bsm.pdf,
2) http://iopscience.iop.org/article/10.1088/0031-8949/2013/T158/014002/meta
Also, please prove to us that lattice QCD is able to derive or provide, from first principles:
1) The QCD scale,
2) The pattern of quark masses,
3) The physics of pentaquark baryons,
4) The physics of glueballs,
5) The physics of quark-gluon plasma,
6) A definitive answer to the strong CP problem,
7) A definitive answer to the puzzle of proton radius,
8) A definitive answer to the puzzle of proton spin,
9) A definitive answer on whether gluons saturate at large occupation numbers,
10) A definitive answer on whether gluons form a color glass condensate.
Please note that this is my last reply, as I believe that these topics are way too involved to be discussed here.
Yes, to all, except the ``pattern of quark masses'' (what's, usually, called, the mass hierarchy) and the strong CP problem. It suffices to look at the technical papers on these subjects.
One shouldn't confuse technical issues (computing power, algorithms) with conceptual issues (how to describe the calculation to be done). For lattice QCD the former is the issue, not the latter. All the other questions can be framed in a way that is meaningful for lattice QCD. This doesn't mean that they aren't challenging for computation-but they're not challenging for what are the quantities to be measured, in each case, only how to measure the quantities.
The question of the mass hierarchy and of the degrees of freedom that control the theta term refers, by definition, to degrees of freedom beyond the Standard Model. For any model about them lattice QCD does provide the conceptual framework for testing their consequences, however.
Here, too, there are subtleties: QCD makes sense, whatever the quark masses are.
Cf. also D. J. Gross' 1975 Les Houches lectures, on the subject of the masses in asymptotically free theories.
Regarding the pattern of quark (or lepton) masses: it's known that this involves effects beyond the Standard Model. What these effects are isn't known, so how to describe them isn't known either.
Regarding the strong CP problem: this is, also, an indication of physics beyond the Standard Model, since new degrees of freedom would be needed to fix the scale of the theta term, were it non-zero. The Standard Model is, however, a consistent theory, whatever the value of the theta term and all experiments are consistent with it being zero, too, for the moment.
@ Stam Nicolis,
It was my intent not to continue this engaging dialog. However, I have decided to reply one more time.
“Yes, to all, except the ``pattern of quark masses'' (what's, usually, called, the mass hierarchy) and the strong CP problem. It suffices to look at the technical papers on these subjects.”
To be clear, I am not talking about modeling (or simulating) QCD-related effects, but deriving them rigorously from first principles. In this sense, the QCD scale is typically set in lattice QCD but not extracted from first principles. Likewise, lattice QCD enables modeling of the "proton radius puzzle" but is unable to derive it from first principles. The same goes for the other items listed in my previous reply.
“One shouldn't confuse technical issues (computing power, algorithms) with conceptual issues (how to describe the calculation to be done). For lattice QCD the former is the issue, not the latter. All the other questions can be framed in a way that is meaningful for lattice QCD.”
Not everyone in the field takes your position. Take a look, for instance, at the first link of my previous reply which clearly points out that “the implications of lattice QCD go far beyond the traditional studies of low-energy QCD”. To give a single example, the report cites the difficulties of not having a non-perturbative regulator for chiral gauge theories. Other similar items are discussed there as well.
In my opinion, one should refrain from overselling current trends in non-perturbative field theory prior to solving all conceptual challenges related to the Standard Model and effective field theories, in general.
If there is a natural cut-off scale of a gauge theory (say, the Standard Model), then the low-energy Lagrangian describes an effective theory, which is found by integrating out the heavier degrees of freedom. Physics-wise, what we mean by this is that external lines are not heavier than the cut-off scale, but such heavier modes may appear in internal lines. For Wilson's approach to renormalization see: http://scipp.ucsc.edu/~dine/ph295/wilsonian_renormalization.pdf
Because this is a top-down approach, in such a case the low-energy bare Lagrangian + counterterms is just an ingredient of the Wilsonian effective action.
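The Wilsonian step described above can be written compactly (Euclidean signature assumed, with $\phi_<$ the modes below the lowered cutoff $\Lambda'$ and $\phi_>$ the modes between $\Lambda'$ and the original cutoff $\Lambda$):

$$ e^{-S_{\Lambda'}[\phi_<]} = \int \mathcal{D}\phi_>\; e^{-S_{\Lambda}[\phi_< + \phi_>]} , $$

so the heavy modes $\phi_>$ appear only in internal lines, while external lines carry momenta below $\Lambda'$. Expanding $S_{\Lambda'}$ in local operators then reproduces the bare Lagrangian plus counterterms as ingredients of the effective action.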
The QCD scale, of course, is deduced from first principles, those of quantum field theory. Cf. Gross, Les Houches lectures 1975, for instance.
The proton radius can be given a precise meaning in terms of the correlation functions of quarks and gluons and these correlation functions can be measured in lattice QCD.
QCD is a vector theory, not a chiral theory; chiral symmetry is global, not gauged. So the problems-that have been, in fact, resolved-aren't relevant for lattice QCD.
It's not correct to confuse the notion of a cutoff with the notion of the scale, where one is performing the calculation. The two are distinct.
@ Stam Nicolis
"The QCD scale, of course, is deduced from first principles, those of quantum field theory. Cf. Gross, Les Houches lectures 1975, for instance;"
This does not mean that the QCD scale (Lambda QCD) can be derived from first principles. Here is why:
It is well known that Lambda QCD depends on the RG scale, the number of quark flavors and the strong coupling measured at that RG scale, see e.g.,
http://www.slac.stanford.edu/econf/C040802/papers/L010.PDF
It is also well known that the strong coupling and quark masses are considered free parameters of the QCD Lagrangian. Stated differently, the value of the strong coupling at a given RG scale cannot be predicted and has to be determined from experiments, see e.g.,
https://arxiv.org/abs/1506.05407
It follows that Lambda QCD cannot be directly predicted from theory (or derived from first principles), contrary to your assertion.
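As a minimal numerical sketch of this point (one-loop running only, $n_f = 5$, and $\alpha_s(M_Z) \approx 0.118$ supplied as experimental input; all numbers are illustrative): $\Lambda_{\rm QCD}$ follows only once the coupling at some reference scale is put in by hand.

```python
# Sketch: at one loop, Lambda_QCD is fixed by the measured coupling at a
# reference scale; the theory predicts the running, not the value itself.
import math

def lambda_qcd(alpha_s_ref, mu_ref, n_f=5):
    """One-loop Lambda_QCD: Lambda = mu * exp(-2*pi / (beta0 * alpha_s(mu)))."""
    beta0 = 11.0 - 2.0 * n_f / 3.0
    return mu_ref * math.exp(-2.0 * math.pi / (beta0 * alpha_s_ref))

def alpha_s(mu, lam, n_f=5):
    """Inverse relation: one-loop running coupling at scale mu (mu > lam)."""
    beta0 = 11.0 - 2.0 * n_f / 3.0
    return 2.0 * math.pi / (beta0 * math.log(mu / lam))

m_z = 91.1876                      # GeV, Z-boson mass used as reference scale
lam = lambda_qcd(0.118, m_z)       # experimental input: alpha_s(M_Z) ~ 0.118
# Roughly 0.09 GeV at this crude one-loop level; round-trips to the input.
print(lam, alpha_s(m_z, lam))
```

Changing the input $\alpha_s(M_Z)$ or the number of flavors changes $\Lambda_{\rm QCD}$, which is the sense in which the scale is fixed by experiment rather than derived.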
Let's agree to disagree.
The origin of mass is one thing; the quark mass hierarchy is another. It's wrong to relate them.
In any renormalizable theory, what's meaningful is the evolution of the coupling constant(s) with scale; the specific value depends on the initial conditions to the differential equations that describe it. As is well known, the initial conditions are distinct from the equations themselves.
One shouldn't mix up the initial conditions with the equations.
@ Stam Nicolis,
"The origin of mass is one thing; the quark mass hierarchy is another. It's wrong to relate them."
This has nothing to do with the point of my objections to lattice QCD.
Whether or not one relates the quark mass hierarchy to the origin of mass is completely irrelevant here. The bottom line is that, as long as lattice QCD is an approximation of an effective field framework (the Standard Model) and, as long as it operates with free parameters that are fixed by experiment, it remains an incomplete non-perturbative field theory.
Again, let's agree to disagree.
Lattice QCD isn't an approximation, that's the point-it's a definition of QCD, whose consequences can and have been checked. Once more, what matters is the existence and the properties of the scaling limit and that it can be shown to be independent of certain quantities and how it depends on others.
The Standard Model is an effective field theory, in the sense that it doesn't describe all particles and their interactions, only a subset thereof, those that are known. It's not an effective field theory, in the sense that it can be shown to be independent of the regularization framework.
It can be completely defined on the lattice, in a way that's independent of any perturbative expansion about free fields, though what's the most efficient way to perform certain calculations does have room for improvement.
Any theory has free parameters, that must be fixed by experiment and what matters are the predictions that can be made from that point on.
@ Stam Nicolis,
"Lattice QCD isn't an approximation, that's the point-it's a definition of QCD, whose consequences can and have been checked."
This contradicts your earlier statement where you asserted that:
"The Lagrangian that describes quarks and gluons at ``high'' energies can describe, also, hadrons at low energies-that's what lattice QCD is all about. There aren't different theories, just different approximation schemes to the same theory."
You also say:
"Any theory has free parameters, that must be fixed by experiment and what matters are the predictions that can be made from that point on."
According to your interpretation, the physics community must stop searching for BSM phenomena and forget about ALL the open items that make the Standard Model incomplete.
In my opinion, this viewpoint cannot be defended.
Once more:
The same Lagrangian describes asymptotically free quarks and gluons at high energies and confined quarks and gluons at low energies. The Lagrangian is that of lattice QCD. In the former régime the description can be, also, obtained using perturbation theory about free quarks and gluons. In the latter régime it's not possible to describe the interactions in terms of weakly coupled particles and what are the weakly coupled degrees of freedom isn't obvious. Lattice QCD has made many of the issues in this régime irrelevant, however, since it does offer a quantitative description.
The quantitative study of QCD backgrounds is what allows them to be eliminated from the data gathered in experiments and to reveal new physical effects. That's what ``beyond the Standard Model'' means. It doesn't mean anything else.
@ Stam Nicolis,
"That's what ``beyond the Standard Model'' means. It doesn't mean anything else."
Contrary to your assertion, there are many "beyond the Standard Model" searches outside the LHC and accelerator technology where controlling the QCD background is critical for getting robust results. Good examples include neutrino telescopes and neutrino detectors, gamma-ray bursts (GRB), and axion and Dark Matter searches from astrophysical data:
https://neutel11.wordpress.com/
https://arxiv.org/abs/1301.4097
https://arxiv.org/abs/1602.00039
https://arxiv.org/abs/1104.2836
Again, I do not dispute the enormous success brought about by lattice QCD in computational physics and data analysis at colliders. My point relates strictly to the inherent incompleteness of the SM. To be specific, lattice QCD is unable to explain or definitively settle by itself many open questions, including, but not limited to:
1) the origin of quark masses,
2) the origin of quark mixing angles and the CP violating phase,
3) the basis for conservation of leptonic and baryonic numbers,
4) the origin of the hadronization scale (Lambda QCD),
5) the source of the matter-antimatter asymmetry,
6) where do neutrino masses and mixings come from,
7) if neutrinos are Dirac or Majorana,
8) if there are sterile neutrinos,
9) the triviality problem,
10) where does the SM gauge structure come from?
11) what is the basis for fermion chirality?
It is for this reason that I consider lattice QCD, as integral part of the Standard Model, to be an incomplete framework.
One shouldn't mix up strong interactions and electroweak interactions-it doesn't make sense.
1) is beyond the Standard Model, as mentioned above.
2) Has to do with the electroweak interactions of quarks, not the strong interactions.
The CP violating phase can be described by the CKM mechanism-which, simply, expresses the fact that, generically, the mass eigenstates of the strong interactions need not be mass eigenstates of the electroweak interactions.
3) Baryon and lepton number conservation are explained as symmetries of the Standard Model, and a mechanism is predicted for how they could be approximate. So this isn't an issue. Whether it's sufficient to describe proton stability is, of course, questionable, since unknown interactions can lead to proton decay. But these aren't strong interactions anyway.
4) Lattice QCD most definitely predicts hadronization and its scale, for any number of flavors. The number of flavors is a free parameter, fixed by experiment. It doesn't make sense to expect that the number of flavors could be fixed by the theory.
5) This has been described by Sakharov in the 1960s. The non-trivial issue is that the CP violation described by the Standard Model isn't sufficient. There are new particles that contribute. Their detailed properties remain to be discovered.
6) IF neutrinos are Dirac particles, their masses are through the Brout-Englert-Higgs mechanism and the right-handed neutrinos are new particles. IF neutrinos are Majorana particles, then their masses are described through a seesaw mechanism, with new particles, not necessarily right handed neutrinos. All these particles remain to be discovered. But there's no lack of models-and these don't involve the strong interactions. So it's nonsense to discuss them when talking about lattice QCD.
7) That's the result of experiment-there's no compelling reason one way or another. And, once more, this doesn't have anything to do with QCD, since neutrinos don't have strong interactions.
8) IF the neutrinos are Dirac, the right handed neutrinos are sterile.
9) No problem. That the pure $\phi^4$ theory is trivial doesn't mean that it remains trivial when interactions with gauge fields and fermions are introduced. And, indeed, the Higgs self-coupling has been measured: it's not zero, but it is consistent with perturbation theory, and it is this result that implies the existence of new particles.
10) From experiment. That's the starting point. It's an experimental fact that the quarks and leptons have the charges they have. It's a theoretical prediction that the electric charges of quarks and leptons must cancel overall.
11) Experiment. It's an experimental fact that the weak interactions violate parity in a particular way.
As mentioned, none of these issues-apart from 1)-involves the strong interactions, so it's nonsense to mention them when discussing lattice QCD.
Contrary to what might be thought, the Standard Model does not unify the strong interactions with the electroweak interactions. How such a unification might be realized is not known and can only be found by experiment.
This isn't a problem set, where the rules can forbid using certain techniques. But it is essential to know that neutrinos don't have strong interactions. So it isn't surprising that lattice QCD doesn't have anything to say about neutrinos-it's inevitable.
In any event: the Standard Model, as well as any "Grand Unified Theory" of the strong and electroweak interactions, is a renormalizable quantum field theory. It's independent of the cutoff, but many quantities depend, in a calculable way, on the scale studied-that doesn't have anything to do with the cutoff.
@ Stam Nicolis,
I am afraid your long explanation still misses my point.
Let me repeat for the last time. QCD is an integral part of the Standard Model. And as long as the Standard Model cannot be considered complete, due to the large number of open questions and due to its 20 (or 26) free parameters, QCD cannot be considered complete either.
Of course, you are entitled to your opinion and I respect that. But I happen to disagree with it for the reasons I elaborated upon in great detail.
Of course QCD isn't complete and the Standard Model isn't either. However the reasons put forward are mostly either irrelevant for discussing this or wrong.
The number of free parameters, in particular, is irrelevant-it's not that number that implies that QCD in particular, or the Standard Model in general, is incomplete.
And, once more: QCD is the description of the strong interactions. It is completely independent of the electroweak interactions and is a consistent quantum field theory by itself. So it doesn't make sense discussing the CKM matrix or neutrinos within the context of QCD.
The only free parameters of QCD are the number of quark flavors, the quark masses and the value of the theta term. For any values the theory is complete-it's a consistent quantum field theory that allows the computation of any gauge invariant quantity unambiguously. Since the theory is asymptotically free, it's quite insensitive to high energy effects, because these are very weakly coupled, so it's much harder to probe them that way.
The electroweak sector is incomplete: even though it is a renormalizable theory, and therefore independent of the cutoff, it isn't asymptotically free, so it's sensitive to new effects at high energies. The Higgs sector, in particular, is very sensitive. It's the measurement of the Higgs self-coupling that indicates, already, the presence of new physical effects. Their details remain to be discovered. And this is very hard, much harder than many people may have expected, since, even though the theory isn't asymptotically free, the couplings remain weak enough that new effects don't couple strongly enough to known effects to be readily observable. It needs a lot of work gathering and analyzing the data. The same holds for neutrinos and so on.
It isn't known how to describe the strong interactions and the electroweak interactions within a unified framework-like the electromagnetic and weak interactions are described within the electroweak sector of the Standard Model.
@ Stam Nicolis,
"Of course QCD isn't complete and the Standard Model isn't either. However the reasons put forward are mostly either irrelevant for discussing this or wrong."
You can certainly insist on your opinion, however your opinion is not shared by the theoretical physics community at large. Many textbooks and reference articles convey a diametrically opposed viewpoint: QCD and the Standard Model are incomplete precisely because there are so many unsettled conceptual challenges and free parameters.
"And, once more: QCD is the description of the strong interactions. It is completely independent of the electroweak interactions and is a consistent quantum field theory by itself. So it doesn't make sense discussing the CKM matrix or neutrinos within the context of QCD."
QCD cannot always be considered consistent without the electroweak model. For instance:
1) Quarks participate in electroweak interactions and QCD alone cannot account for the existence of such interactions.
2) The number of quark flavors and the number of lepton flavors must match to ensure that the Standard Model is anomaly-free. Take the electroweak sector away and you end up with an inconsistent Standard Model.
"The only free parameters of QCD are the number of quark flavors, the quark masses and the value of the theta term"
Again, this statement is at odds with many references on the topic, where either the QCD scale (Lambda_QCD) or the strong coupling is also added to the list of free parameters of the QCD sector. For example,
http://wwwthep.physik.uni-mainz.de/~uhaisch/BSM10/Jenseits_des_SM_SS_10_Apr29.pdf
http://pdg.lbl.gov/2012/reviews/rpp2012-rev-qcd.pdf
http://www.iop.vast.ac.vn/theor/conferences/vsop/18/files/Schott-4.pdf
It is quite obvious that our views cannot be reconciled.
Thank you for this stimulating conversation!
That QCD doesn't account for the electroweak interactions doesn't mean it's incomplete as a theory of the strong interactions. It can be, if there are other, hitherto unknown, particles that carry color charges. But this is an experimental, not a theoretical, issue. There's no reason, from the consistency of QCD, to impose either additional quark flavors or additional interactions on the quarks-it's experiment that indicates how many quark flavors there are and that they have electromagnetic and weak interactions, while leptons don't have strong interactions.
Once more, the electroweak interactions are independent of the strong interactions, so the number and charges of the leptons and the electric charges of the quarks are irrelevant for the consistency of QCD. They're relevant for the electroweak interactions of the quarks, which carry electric and weak charges that are completely unrelated to the strong charges they carry.
The electroweak interactions provide one way of distinguishing the quark flavors, that's all.
If one removes-or adds-the quarks or the leptons in a way that's inconsistent with anomaly cancellation, the electroweak theory is inconsistent-not QCD. That's why the discovery of the tau lepton implied the existence of the tau neutrino and of the third family of quarks-though it couldn't place any constraints on their properties. (It was the GIM mechanism that implied that the charm quark was necessary for explaining properties of the weak interactions, independently of anomaly cancellation, which was discovered later-and placed bounds on its mass.)
In any renormalizable theory, not only QCD, the coupling constant and the scale are related in a particular way, defined by the theory, that expresses the fact that the theory is renormalizable-this doesn't have anything to do with the hadronization transition. Nor does it mean the theory is missing degrees of freedom. It means that the value of the coupling constant at any given scale can only be determined by the equations of the theory if it's known, through experiment, at some other scale. Lambda_QCD is the name given to the energy scale at which processes that involve hadrons, but where quarks can still be resolved in easily measurable ways, become relevant. The equations in question are first-order differential equations-their initial conditions are free parameters by definition. Their fixed points (provided they exist), or any other attractor structure, are independent of the initial conditions. Since the equations, however, are only known approximately, their fixed points, and their attractors more generally, are, also.
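The role of the experimental initial condition can be made concrete with the one-loop approximation to the QCD renormalization-group equation. A hedged sketch-the reference values (alpha_s near 0.118 at the Z mass, n_f = 5 active flavors) are illustrative inputs fixed by experiment, not outputs of the theory, and the function name is my own:

```python
import math

def alpha_s_one_loop(Q, mu=91.19, alpha_mu=0.118, n_f=5):
    """One-loop running of the strong coupling.

    alpha_mu at the scale mu (in GeV) is the free parameter, the
    "initial condition" fixed by experiment; the evolution to any
    other scale Q then follows from the theory's equations:
        alpha_s(Q) = alpha_mu / (1 + alpha_mu*b0/(2*pi)*ln(Q/mu))
    with b0 = 11 - 2*n_f/3.
    """
    b0 = 11.0 - 2.0 * n_f / 3.0
    return alpha_mu / (1.0 + alpha_mu * b0 / (2.0 * math.pi) * math.log(Q / mu))

# Asymptotic freedom: the coupling weakens at higher scales ...
print(alpha_s_one_loop(1000.0))
# ... and grows toward lower scales, where perturbation theory breaks down
# and the hadronic regime (the neighborhood of Lambda_QCD) is approached.
print(alpha_s_one_loop(2.0))
```

The denominator blowing up at some low scale is, in this approximation, the definition of Lambda_QCD; since the one-loop equation is itself only an approximation, so is that scale.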
@ Stam Nicolis,
I agree with your reply, it makes sense and it is, indeed, the way that QCD is typically presented nowadays.
My point about consistency relates, however, to a hidden subtlety that is often overlooked.
"Consistency" of a given theory means fulfilling two separate requirements: 1) internal consistency in terms of mathematical formulation and the ability to extract finite observable results, 2) consistency across the full range of available experimental observations.
There is no doubt that lattice QCD meets both consistency requirements across a large range of observations. But this range is only partial, in the sense that quarks carry electroweak charges and lattice QCD alone is "blind" to the electroweak couplings of quarks and leptons. That the electroweak sector may become inconsistent if anomaly cancellation is violated signals that radiative corrections, contributed by the emission/absorption of weak bosons and photons, may ruin the bound structure of baryons, in particular proton stability in the deep IR regime.
No; if anomaly cancellation is respected, nothing is wrong; and if it were violated, it isn't that the theory doesn't make sense-new particles would be required. For the moment the known quarks and leptons are sufficient to ensure that the electroweak theory is consistent-and in agreement with experiment. The two statements are distinct. However, the discovery of a new quark family before the discovery of a new lepton family wouldn't affect QCD as a theory. It would affect the electroweak theory and would be a prediction.
Conversely, however, the electroweak theory did make it possible to establish bounds on the mass of the top quark.
Consistency of a theory is a mathematical statement-it doesn't have anything to do with experiment. QCD is consistent. It's, also, in agreement with experiment. It describes how particles carrying color charges interact. Whether the particles carry any other charges doesn't affect its consistency.
The reason is that the electroweak interactions are independent of QCD. How they can be unified with QCD isn't known, and finding out is part of the LHC program and other projects. This does require that Standard Model signals be measured to discovery precision, which is not yet the case.
For looking for physics beyond the Standard Model it's easier to study the electroweak sector than QCD, for the reasons mentioned above.
@ Stam Nicolis,
"Consistency of a theory is a mathematical statement-it doesn't have anything to do with experiment."
I have to disagree with your interpretation. There is widespread use of "consistency with experiment" in textbooks and research articles alike.
Thanks again for a lively exchange. Our debate ends here.