The provocative question is motivated by the lack of non-perturbative regulators in quantum field theory which preserve the Lorentz symmetry. Higher order derivatives regulate the time reversal invariant sector only. Do we have to return to the idea of H. B. Nielsen about the recovery of Lorentz invariance in low energy effective theories? One may even sharpen the question by asking whether there are boost invariant theories with either Galilean or Lorentz symmetry.
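To illustrate the difficulty (a hedged sketch of the standard observation, not a reconstruction of any specific scheme), consider a higher-derivative modification of the scalar propagator:

```latex
G(p) \;=\; \frac{1}{p^2\,\bigl(1 + p^2/\Lambda^2\bigr)}
```

In Euclidean signature, $p^2 = p_0^2 + \mathbf{p}^2$ grows in every direction, so all large momenta are damped. In Minkowski signature, however, $p^2 = p_0^2 - \mathbf{p}^2$ remains bounded along directions close to the light cone even as the individual components diverge, so the suppression fails precisely in the boost-related directions.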
Yes; cf. for instance https://projecteuclid.org/euclid.cmp/1104253284 for a mathematically rigorous example in four spacetime dimensions, which describes the approach. On a more heuristic level: lattice regularization, another example of a non-perturbative regulator (beyond the one used in the paper by Magnen et al.), does break Lorentz invariance. But since Lorentz invariance is here a global symmetry, and like any global symmetry that is broken *only* by the regularization, it is restored in the scaling limit, provided this limit exists. Lattice studies have addressed this issue quantitatively, for instance in:
G. Schierholz and M. Teper,
``On the Restoration of Lorentz Invariance in SU(2) and SU(3) Lattice Gauge Theories,''
Phys. Lett. B136 (1984) 69.
S. Caracciolo, G. Curci, P. Menotti and A. Pelissetto,
``The Restoration of Poincaré Invariance and the Energy-Momentum Tensor in Lattice Gauge Theories,''
Nucl. Phys. Proc. Suppl. 17 (1990) 611.
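To make the restoration quantitative in the simplest possible setting, here is a small numerical sketch (my own toy example, not taken from the papers above): the free lattice boson's inverse propagator $\hat p^2 = \sum_\mu (2/a)^2 \sin^2(p_\mu a/2)$ breaks O(4) down to the hypercubic group, and the anisotropy between an on-axis momentum and a diagonal momentum of the same length vanishes as $O(a^2)$:

```python
import math

def lattice_p2(p, a):
    # \hat{p}^2 = sum_mu (2/a)^2 sin^2(p_mu a/2): the free lattice
    # boson's inverse propagator in momentum space
    return sum((2.0 / a) ** 2 * math.sin(pm * a / 2.0) ** 2 for pm in p)

def anisotropy(a, magnitude=1.0):
    # Compare a momentum along a lattice axis with one of the same
    # length along the 4d diagonal; exact O(4) symmetry would make
    # the two values of \hat{p}^2 coincide.
    axis = (magnitude, 0.0, 0.0, 0.0)
    diag = tuple(magnitude / 2.0 for _ in range(4))  # same |p|^2
    return abs(lattice_p2(axis, a) - lattice_p2(diag, a))

for a in (0.5, 0.25, 0.125):
    print(a, anisotropy(a))
```

Halving the lattice spacing roughly quarters the rotational-symmetry violation, as the naive $O(a^2)$ power counting suggests.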
It is possible to quantify how Lorentz invariance could be violated, cf. http://www.physics.indiana.edu/~kostelec/faq.html
If Lorentz invariance were broken at some high scale, some of the consequences are discussed here: http://arxiv.org/abs/1106.6346
The following is a non-perturbative, Lorentz-invariant theory.
Article Quarkonium and hydrogen spectra with spin-dependent relativi...
Article Periodic quantum gravity and cosmology
Mohamed: Thank you for the paper. I followed the construction rather superficially due to its heavy mathematical weaponry. As far as I could see, the proposal is a modification of the dimensionality of space, a dimensional regularization. I have seen attempts to reach a similar goal in statistical physics by a randomly modified coordination number on a lattice crystal, and I suppose that your proposal is an analytic realization of this idea. I have two questions: Have you checked that the Lorentz symmetry is preserved? And, to promote this construction to a non-perturbative procedure, one has to go beyond the level of loop integrals and regulate the path-integral expressions. Have you tried it?
Stam: Thanks for the paper from '93, I did not know about this construction. Two side questions: Have you or someone else tried to go beyond the constraint A0=0 and to accommodate a non-vanishing, time-independent A0 field, which is needed if the four-volume has finite temporal size? And do you know whether someone has tried to use this construction in the framework of the functional renormalization group method?
Now, back to the issue of the regulator: Your proposal is to regulate the Euclidean theory, and you mention the lattice, where the Euclidean symmetry is broken at the cutoff scale but recovered in the continuum limit. You are right. But I am after a regulator which keeps the symmetry without even a weak violation. If a symmetry is violated at the cutoff scale and the strength of this violation tends to vanish as the cutoff is removed, then the symmetry in question is indeed expected to be recovered in the renormalized theory. (This appears to be Nielsen's idea, redressed in a different language. Restated in the RG context, one would say that the Lorentz breaking is irrelevant at the UV fixed point.) But this expectation holds for global symmetries only. A local symmetry, broken by the cutoff at an arbitrary scale, remains broken in the renormalized theory, cf. anomalous gauge theories. The gauge theory of the Lorentz group, more precisely of the Poincaré group, is gravity. My motivation for asking the question is the suspicion that there is no (non-stringy) QFT regulator for real-time quantum gravity.
Vikram: Thank you for the interesting and original papers. A spontaneous question about the first one: Do you recover g=2 with the new relativistic equation? This is the most stringent test of an alternative spin-half wave equation, as far as I know. I was glad to read your brave second paper, one of the rare reflections on quantum physics from the time-honored Indian point of view! It is difficult to find a forum to discuss such issues, but you may try to find one; it would be very interesting. I have not found a proposal for a UV regulator in quantum field theory in the text. Please correct me if I am wrong.
Janos Polonyi:
Yes, I can recover g=2 with the new relativistic equation. Twenty years ago I wrote an article, "Anomalous Magnetic Moments of Electron and Muon." In it I mathematically calculated (g/2 - 1) for the electron = 0.0011596522 and (g/2 - 1) for the muon = 0.001165924. This was without using Feynman diagrams, which is a very laborious technique. But none of the journal editors were interested in publishing it. So I published it in book form in 1996: "The Theory of Periodic Relativity" (ISBN 0-9652280-0-2). This book is not in print, and the article is not in electronic form.
I don't need a UV regulator in my theory because it is a non-perturbative theory. Regularization and renormalization are problems of perturbative theories. Nevertheless, the explanation following eq.(4) concerning the Compton wavelength provides the regularization effect, because it avoids the infinite energy value, and this is the smallest wavelength permissible before the particle wave collapses without leaving any mass gap.
Thank you Vikram for the answer. I understand a bit better the way you think.
First a remark about the need for regularization: I think that one needs a UV regulator in a quantum field theory independently of whether the model is perturbative or not. In addition, field theories tend to be perturbative in one scale regime and non-perturbative in another. It is true that the UV divergences of an asymptotically free model are usually found with the help of the perturbation expansion, but all checks performed so far by non-perturbative, numerical simulations have confirmed the perturbative picture. One may hope that a non-asymptotically free model develops a non-Gaussian UV fixed point; otherwise it becomes trivial, like the phi4 theory in 1+3 dimensions. But even in this case one needs a regulator to define the theory.
Your modification of gravity indeed changes the high-energy behavior, and a particle cannot have energy beyond the Planck scale. But is such a classical argument enough to regulate a quantum field theory? The question is non-trivial because quantum particles do not obey the classical dispersion relations; they go off-shell. If the answer is affirmative, then you have indeed discovered a new regulator, and a number of questions pop up, perhaps the most important being whether we recover the traditional (quantum) physics well below the cutoff.
Dear Janos,
I always read your papers with great interest. The question you raise is very important, indeed. I have met it in my studies of the severe IR singularities which appear in QCD due to the self-interaction of massless gluon modes. With the help of the corresponding equation of motion I have summed them up into a Laurent expansion in powers of the mass gap squared over the gluon momentum squared. However, the dimensionless coefficients of this expansion still remained dependent on the UV regulator even after the renormalization program had been performed to make PT QCD free of UV divergences. So even NP QCD was plagued by them, thus destroying Lorentz invariance for full QCD. I was very upset by this, until I found a rather non-standard solution to the problem. The initial version of this solution was published in our book "The Mass Gap and its Applications" (WS, 2013) by V. Gogokhia and G.G. Barnafoldi. However, today I have a much more updated and developed version of the first part of this book. It makes the final proof of how QCD becomes free of IR and UV divergences much more transparent.
Best wishes
Vakhtang
Dear Vakhtang,
thank you for your answer. I have no access to your book therefore I have looked instead into your paper in J. of Math. Sciences, 197 761 (2014). It is a nice idea to separate the non-perturbative part of the gluon propagator by leaving the non-transverse part for the perturbative part and the establishment of the mass gap of QCD is indeed an impressive result. Can you make sure that the perturbative part is negligible compared to the non-perturbative terms at long distances?
There are not enough details in this paper to reconstruct the whole calculation, but it seemed as if you were working in Euclidean space-time. Can you establish the renormalization in Minkowski space-time? The usual strategy, dimensional regularization, is perturbative even if we perform a resummation of the perturbation series. The reason is that the regulator is defined for the loop integrals: we first expand in the coupling strength and only afterwards resum (a non-convergent series). A non-perturbative regulator, say the lattice, provides a well-defined mathematical expression before the perturbation series.
There is no problem with Euclidean-invariant cutoffs because the O(4) symmetry group is compact. But due to the indefinite nature of the metric in Minkowski space-time, the Lorentz group has infinite volume. As a result, we cannot define a finite-volume region in momentum space by means of Lorentz-invariant quantities. It is easy to check that there are Lorentz-symmetry-breaking terms generated during the inverse Wick rotation from Euclidean to Minkowski space-time. They are usually neglected in non-gravitational theories because they vanish as the cutoff is removed.
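The contrast can be displayed in one line (a sketch of this standard argument):

```latex
% O(4)-invariant Euclidean cutoff region: compact, finite volume
B^{E}_{\Lambda} = \{\, p \in \mathbb{R}^{4} : p_0^2 + \mathbf{p}^2 \le \Lambda^2 \,\}
% Lorentz-invariant analogue: infinite volume, since each orbit
% p_0^2 - \mathbf{p}^2 = s is a non-compact hyperboloid
B^{M}_{\Lambda} = \{\, p : |p_0^2 - \mathbf{p}^2| \le \Lambda^2 \,\}
```

Any region in momentum space defined solely by Lorentz-invariant conditions is a union of complete boost orbits and therefore has infinite volume.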
Dear Jean,
it is very interesting what you wrote, could you give a reference to these results? The point is the general lesson of the renormalization group, namely that "There are no physical constants, every observed quantity depends on the scale of observation". What appears as a constant to the engineers is a wide plateau with weak scale dependence only. Hence the question is: Does this apply to the speed of light?
The only indication that c is a physical quantity, rather than simply a velocity parameter of the Lagrangian, is that the support of the retarded photon propagator and of the spectral weight is on the light cone. What you have mentioned shows that we achieve the scale invariance of c by imposing the Lorentz boost symmetry, which involves c in a nonlinear manner, at every scale.
This suggests another way to formulate the question about a Lorentz-invariant regulator: Are we sure that the speed of light is independent of the scale of its observation? We are thus led back to the search for possible violations of Lorentz invariance at high energies. And there must be some violations, unless we find a Lorentz-invariant regulator.
Another question: Does the problem, you mention about several species appear when the mass is generated by spontaneous symmetry breaking?
Luiz, thank you for your question, it points to the following interesting accident: Chapter 1 and section 15.3 in Bjorken-Drell present formal expressions for the Noether currents together with their commutation relation, derived formally from the canonical commutation relation. The issue whether these operators are well defined is not addressed unfortunately.
The reader indeed finds the correct representation of the Poincare group, constructed by the help of the field operators. But this does not mean that the Noether currents, defined in this manner are well defined. A well defined local operator should generate well defined Green functions. Had they started to check the Green functions in the framework of the perturbation expansion they would have ended up with formally divergent loop integrals.
The accident is that the Lie algebra of the symmetry group can be verified formally, by the use of the canonical commutation relations alone. This hides the ill-defined nature of the theory, revealed only when the Green functions and other observables are sought. To render the observables finite one needs a UV regulator.
Thank you for your response, Janos. I think you are right about the need for regularization. I have done it in my article in my own way, using my own terminology. The two theories I mentioned earlier operate in two different scale regimes. Quantum gravity theory can operate only near the high-energy scale, and it is not possible to recover the traditional quantum physics (which I call the theory of charges) from it. I have defined the following regulators (UV fixed points) on page 15 of the article.
In generating the data listed in Tables I to V, we have used the following restrictions on the model.
Particle velocity cannot exceed the velocity of light.
Particles are created in particle-antiparticle pairs.
The particle and antiparticle fly in opposite directions.
Only radial motion of the particles when created.
Spin quantum numbers are restricted to 0, 0.5, 1, 2.
Only real solutions are accepted.
Because of the non-perturbative nature of the model, when the data is generated using the solution in eqn.(68), most of the unreal solutions are automatically suppressed. This model also generates a huge number of particles other than those in the standard model. Whether such particles can exist in nature, or whether some solutions need to be suppressed by way of regularization, is not clear. That a particle cannot have energy beyond the Planck scale is a very reasonable regulator, because that is the maximum energy that can be contained in a size limited by the Planck length.
Dear Jean, thanks, this is indeed a nice paper. I found two renormalization group studies in the style of your last sentence: following the multiplicative renormalization group trajectory by integrating the beta functions for several fields, Gomes, Gomes, Phys. Rev. D 85 085018 (2012), and solving the Wegner-Houghton eq. for a single scalar field Kikuchi, Progr. Theor. Phys. 127 409 (2012).
I think that the lesson of the paper you have mentioned, Iengo et al., is that we have no non-perturbative way to formulate a theory with two scalar fields and Lorentz symmetry at low energy. We cannot even start to study this model, or presumably any other which contains several fields, beyond the perturbation expansion if Lorentz invariance is imposed at low energy. This is difficult to accept, but how can we get around it?
Luiz, thank you for your explanation, I fully agree with you: The perturbative treatment of Lorentz-symmetric theories seems to be under control. I would like to know whether it is possible to go beyond the perturbation expansion. After all, the perturbation series is asymptotic. Hence it has a built-in maximal accuracy and does not give well-defined, unique predictions. The question I am trying to sort out is whether we can define (= regulate nonperturbatively) a quantum field theory without sacrificing Lorentz invariance.
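As a toy illustration of this built-in maximal accuracy (my own example, with an arbitrary coupling g, not tied to any specific theory): in the divergent asymptotic series $\sum_n (-1)^n\, n!\, g^n$ the terms first shrink and then grow, so truncating near the smallest term, at order $n \sim 1/g$, is the best one can do.

```python
import math

# Toy asymptotic series sum_n (-1)^n n! g^n: the term magnitudes
# shrink until n ~ 1/g and then grow without bound, so the partial
# sums have a built-in maximal accuracy of order exp(-1/g).
def term_magnitude(n, g):
    return math.factorial(n) * g ** n

g = 0.1
mags = [term_magnitude(n, g) for n in range(30)]
n_best = mags.index(min(mags))
print("smallest term at order", n_best, "with magnitude", min(mags))
```

Beyond that optimal truncation order, adding further terms makes the prediction worse, which is what is meant by the series not giving well-defined, unique predictions.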
Vikram, I did not see that part of your calculation, and thank you for the clarification. Lacking a solution of the problem of relativistic, nonperturbative quantum field theory, one should try different directions. Instead of introducing a traditional cutoff, a restriction on the quantum states, you are thinking about restricting the classical kinematics as well. This is a very good idea, and it would be interesting to see how it works for a scalar quantum field. A question: I do not understand how one can reconcile eq. (4) of your article "Periodic quantum gravity and cosmology" with Lorentz invariance, since the equation does not seem to be covariant. Is this not a problem in PR?
In lattice field theories that do not involve gravity, Lorentz invariance is a global symmetry broken by the regularization; thus, like any global symmetry, it is recovered in the scaling limit. If it is part of the gauge group, however, one problem is that, since the group is non-compact, the integration measure isn't well defined; that's a known issue in dynamical triangulations, for instance, and one should consult the papers of Ambjorn et al. for details. Another issue is that, since the regularization breaks a gauge symmetry, the symmetry will not be restored in the scaling limit without further tuning. So the answer is: for theories in which Lorentz invariance is a global symmetry, the breaking due to the (non-perturbative) lattice regularization is an artifact that will not affect any observable in the scaling limit, no further counterterms or fine-tuning are needed, and the non-trivial question is whether a scaling limit exists at all; for theories in which it is a local symmetry, as for any local symmetry, this is not necessarily the case, and, generically, additional fine-tuning is to be expected.
It is not true, however, that the scaling limit of a lattice theory necessarily describes the low-energy, or infrared, behavior of the theory; this depends on the nature of the fixed point that corresponds to the scaling limit in question. Therefore, the fact that the scaling limit of a lattice gauge theory is Lorentz invariant does not mean that Lorentz invariance characterizes the low-energy dynamics only. The existence of the scaling limit implies that regularization effects can, in its vicinity, be unambiguously identified and absorbed into parameter and field redefinitions.
In theories with Lifshitz scaling, Lorentz invariance is broken explicitly by the model, so the non-trivial question is, whether a Lorentz invariant scaling limit exists, since the breaking isn't just due to the regularization itself.
Janos, covariant and contravariant vectors are useful for a geometrical treatment of the theory. My theory is an energy-based theory, not a geometrical theory like general relativity or M-theory. In my theory the energy-momentum invariant is considered superior to the Lorentz invariant, which is geometrical. The reason for this is discussed in section 4 of the following article. Therefore, Lorentz invariance not reconciling with eq.(4) of my article is a drawback of Lorentz invariance.
Research Alternative explanation for orbital period decay of a pulsar
Holography does provide, in principle, a non-perturbative way to describe a Lorentz invariant quantum field theory.
Stam, thank you for your answer. You are right, the noncompact symmetry group brings its own problems. I expressed myself carelessly in responding to you. What I wanted to say is that any region bounded by invariants has infinite volume. This is related to the noncompactness of the Lorentz group, but it appears as a wider and simpler issue.
I had thought about the recovery of the global Lorentz symmetry in the continuum limit just in the way you describe the situation. What prompted me to ask this question on this forum was that it disturbed me that our most general symmetry could not be preserved in a nonperturbative quantum field theory and is only recovered with a smaller and smaller error.
I learned from Jean's answer that the situation is much more serious, namely that there are marginal operators in theories with several fields which prevent the recovery of the Lorentz symmetry. This issue has not been, and cannot be, addressed in lattice simulations.
We have no relativistic quantum field theory in a nonperturbative setting, as far as I understand it at the time being. You are right that the symmetry is broken by the regulator only, but we cannot distinguish this effect from genuine physics in the absence of a symmetric regulator. One should work out the message encoded in this lack.
Vikram, thanks for the paper, it takes time to understand your radically different approach to relativity.
Stam, this might indeed be interesting, though I would like to restrict this discussion to four-dimensional space-time, without evoking higher dimensions. I follow the development of the holographic idea only from a distance, and I have not seen any regulated, nonperturbative theory in the papers. Could you recommend some work to read?
Janos, the effects of any operator can be described on the lattice, in particular by studying the identities it satisfies; so it's not true that, if these are marginal, their effects cannot be studied on the lattice. And, as the work of Caracciolo, Menotti and Pelissetto, for instance, shows, it is possible to describe quantitatively, in practice, what it means to recover Lorentz invariance in the scaling limit. It may, of course, be the case that the effects of certain operators have not been studied; but for the case of global Lorentz invariance this is a question of practice and doesn't involve any issue of principle.
I think there's a misunderstanding here: the statement is that IF a theory has a scaling limit and Lorentz invariance is a global symmetry that's broken by the regularization, THEN the observables that have a good scaling limit are Lorentz invariant. IF the theory does NOT have a scaling limit, Lorentz invariance will not be recovered; but in that case any observable will anyway depend on the cutoff and its details. In the example quoted, it may well be that the theory in question does not have a scaling limit at all where those operators give rise to observables, i.e. they may decouple in the scaling limit.
Regarding holography, the situation seems to be that, while a lot of effort has gone into the investigation of the bulk, classical, gravitational theories, the study of the boundary quantum field theories, in the sense discussed here, isn't that extensive; there are many issues of principle to clarify. This presentation: http://hep.physics.uoc.gr/mideast6/talks/Monday/papadimitriou.pdf
seems quite relevant to the issues discussed here, however.
Stam, let us be a little more precise. We are talking about the fate of boost invariance. I think that the conservation of the energy-momentum tensor and its correct anomaly structure in Euclidean space-time, as discussed by Caracciolo, has no relation to this problem. Please correct me if I am mistaken. I understand that the Euclidean space-time symmetries are restored in the continuum limit. But I have not yet seen tests of the Lorentz symmetry. One would have to perform a Wick rotation to address this problem, a hopeless enterprise when the Monte Carlo results are used as input. Even if we accept that the Euclidean scaling laws tell us something about the Lorentz violation, the numerical determination of the logarithmic running of a marginal operator by Monte Carlo is hopeless, too.
I fully agree with you, the issue of the symmetry restoration is limited to the continuum limit.
The slides you mention are indeed interesting, and they talk about something related. But I am afraid that they are not yet at the point where one can read off the definition of a nonperturbative, Lorentz-invariant theory. My problem with this proposal is my lack of understanding of the physical reason for going to higher dimensions to settle this problem. One can accept a Kaluza-Klein-type scenario with a strongly curved extra dimension to deal with a UV problem. But holography seems to be a finite-scale issue. Do you think that it is related to UV phenomena?
Janos, I think there's some misunderstanding regarding the statements about Lorentz invariance and the lattice here. The correct statement is the following, I think: The lattice theory is in Euclidean signature, therefore the relevant symmetries are rotations and translations. On the lattice these are hypercubic rotations and lattice translations. The scaling limit of such a theory establishes the continuum limit of a Euclidean theory that has SO(4) invariance. It is this theory that, through Wick rotation, then corresponds to a theory with SO(3,1) invariance. So the energy-momentum tensor is the current of translations and rotations in the Euclidean theory, in the continuum and on the lattice. It is this statement that's the subject of Caracciolo et al. and, more recently, of http://arxiv.org/pdf/1404.2758.pdf and http://arxiv.org/pdf/1306.1173.pdf and, by other methods, http://arxiv.org/abs/1204.4146. As might be expected, in practice there are quantities whose lattice artifacts are easier to eliminate than others.
So the context is to establish SO(4) invariance in the scaling limit and then use Osterwalder-Schrader reconstruction (cf. also, http://www.matmor.unam.mx/~robert/sem/20091014_Colosi.pdf for a discussion of results that are mathematically rigorous.) For an example of what's required to deal with a lattice action, cf. http://projecteuclid.org/euclid.cmp/1104160284
I agree that there's a lot of work to be done in order to understand how quantum field theories in flat spacetime are described in detail by putative gravitational duals. Indeed, one non-trivial issue is that the only theories that have been investigated in detail are conformal field theories, i.e. those whose beta function vanishes; applications to hadronic physics, the so-called AdS/QCD correspondence, for instance, involve introducing the requisite scale by hand (cf. Polchinski-Strassler). So holography is, precisely, not a finite-scale issue for the moment, and that's an issue. As a duality, indeed, it maps UV phenomena of one theory to IR phenomena of another.
Stam, it is indeed important to make the point more precise. I agree with you about the status of the Euclidean external symmetries. My problem is related rather to the Wick rotation, where one starts with a Euclidean cutoff theory and deforms the integration contour in frequency space. This step is not Lorentz (nor Euclidean) invariant. You will naturally say at this point that the symmetry violation is exponentially small as the cutoff is removed. My point is that it is non-vanishing, and the resulting real-time theory is not exactly boost invariant. This becomes a real, practical problem in effective theories where the cutoff is finite.
We have no information about the asymptotic behavior of the integrands in a nonperturbative treatment, hence the regulator should be a restriction on the domain of integration in Fourier space. Have you seen, or can you even imagine, an analytic continuation from imaginary to real frequencies using a Lorentz-invariant integration domain? The very first step of the usual procedure, namely the integration over the frequency at fixed three-momentum, already breaks the Lorentz (or Euclidean) invariance.
I understand your point of view about the relation of the duality to this problem, thank you. My doubt about this solution is that the duality transformation cannot replace a nonperturbative regulator; one has to start with a well-defined theory.
Janos, the statement about the Wick rotation of the Euclidean theory to Lorentzian signature is more complicated; cf. the article by Menotti on the proof of reflection positivity of the Wilson action. The statement is that there is a procedure, defined by Osterwalder and Schrader, and it is this procedure that ensures the correct correspondence between the Euclidean correlation functions and the Green functions in Lorentzian signature. This doesn't mean that the procedure is straightforward, and there is some nice work by Maiani and Testa: http://inspirehep.net/record/304984 ``Final state interactions from Euclidean correlation functions'', that's relevant here and illustrates that one must think quite carefully in order to avoid mistakes.
Regarding holography, indeed, most of the work has gone in the direction of setting up and solving the equations of motion for the classical gravitational theory, using the correspondence to *define* the quantum field theory on the boundary, and carrying out some checks that the theory, defined this way, is consistent. The checks that go in the other direction are limited to quantities that can be proved to be ``protected'' against corrections, so the challenge is to show that such quantities indeed exist and to find their correspondents in the gravitational theory, which has been done for certain classes of extremal black holes.
Stam, thank you for your insistence on this point, it forces me to make my question more precise. There are two schools of quantum field theory. The first one is based on the renormalized perturbation expansion and the axiomatic methods. It refers to the renormalized theory, expressed in terms of physical, finite quantities. Another later school, initiated by Wilson, is based on cutoff theories. There are indeed relativistic field theories according to the first school as you correctly point out.
There are no realistic, renormalizable theories around, and this circumstance makes the first school rather academic. The prevailing point of view today is that all physical theories are effective. We do not want to settle the Theory of Everything before using the Standard Model, with its nonrenormalizable U(1) and Higgs sectors. If we work with a large but finite cutoff, then a relativistic field theory must be considered within the guidelines of the second school.
My point is that the Wick rotation and the Osterwalder-Schrader theorem are for renormalized theories. As soon as you introduce a cutoff, a Lorentz-invariant restriction on the domain of integration, we run into problems.
Janos, in fact my point is that both approaches are consistent with a lattice formulation. The lattice formulation does not depend on an expansion in any small parameter, for the lattice spacing itself becomes a scale. Any statement about the theory can then be translated into a statement about the identities satisfied by the correlation functions computed with a lattice action; these identities express the symmetries, whether global or local, of the theory, and they also allow a concrete investigation of the effects of lattice artifacts. So from a calculational point of view it's clear what quantities to study; it is challenging to invent the most efficient algorithms, of course.
That a pure φ^4 theory in four spacetime dimensions may, most probably, be a free theory doesn't imply that the same theory, coupled to other fields, necessarily remains free. The same statement holds for a U(1) theory. The lattice approach provides a way to study such issues quantitatively, though, of course, it is difficult in practice.
The Standard Model is a realistic theory in four spacetime dimensions that's renormalizable within perturbation theory and, in that sense, falls within the context of the Osterwalder-Schrader theorem. For the statement is: given Euclidean correlation functions and the identities they satisfy, is it possible to define Lorentzian Green functions that can be used to define a sensible theory? While the electroweak sector is more challenging to study on the lattice, there's been quite a lot of work on these issues in lattice QCD, and there doesn't seem to be a problem with Lorentz invariance, in the sense of effects that cannot be accounted for as lattice artifacts. Indeed, one motivation of Maiani and Testa was to show how careful one must be when trying to deduce the corresponding physical quantities from lattice data.
The work of T. Reisz is relevant here: for example, http://inspirehep.net/record/262767
While it is, of course, not yet possible to provide a mathematically rigorous construction of the Standard Model beyond perturbation theory, this doesn't mean that it is not possible to put bounds on any potential corrections, and to understand that, whatever the further completion of the Standard Model may be, it must reduce to the Standard Model at currently accessible scales. So there don't appear to be any issues with Lorentz invariance within the Standard Model; and there have been extensions that explore this subject, e.g. http://www.physics.indiana.edu/~kostelec/faq.html
Stam, I agree, the lattice formulation of a theory at a UV fixed point is a good example of the overlap of the two schools. The beautiful paper of Thomas Reisz, which you refer to, is indeed the clearest way to establish this connection. But the problem I would like to raise is that this overlap is restricted to Yang-Mills theories in four dimensions.
Naturally, nobody knows what happens if additional fields are introduced into a φ^4 theory, but the numerically "proven" triviality of the simple φ^4 model makes this issue rather suspect.
The renormalization of the Standard Model is an interesting issue; it represents for me the lack of communication among different sub-communities in physics. I believe that that model is not perturbatively renormalizable (the order of the words is important!). The question of whether it is renormalizable in a nonperturbative manner is completely open. I think that an affirmative answer would be very bad news for the LHC.
Janos, first of all, the putative triviality of pure φ^4 doesn't make the issue of its non-triviality, when coupled to other fields, any more or less suspect than the fact that the representation of free fields is not unique. The triviality simply means that the interaction one thinks one is adding is an illusion-a sort of coordinate artifact in the space of fields, similar in spirit to what happens, for instance, in integrable models, where the interaction is, simply, a coordinate artifact. So, in the same way that one asks what the non-integrable contributions to an integrable theory are, the meaningful question is what kind of interactions with other fields can make φ^4 non-trivial. Such a question necessarily requires a non-perturbative framework, like, for instance, the lattice. The fact that quantum corrections to the classical action of the φ^4 theory (seem to) imply that it is, in fact, free doesn't mean that the corrections to the classical action, when other fields are introduced, necessarily imply that the coupling constant of the φ^4 term vanishes in that case, also.
While the Standard Model may have non-perturbative issues, I think that the correct statement is that it is renormalizable in perturbation theory, so I don't think that it's useful to play with words. This means, simply, that each term of the loop expansion, when computing the contribution to a process, is well-defined, i.e. does not depend on any cutoff or details thereof. It does not say anything about the convergence of the loop expansion. The two statements are frequently conflated, but are logically distinct: the first refers to whether the terms of a series are well-defined, the second to whether the series converges.
For the moment, all measurements at energies up to those available seem to support the predictions made by using the perturbative expansion of the Standard Model, in particular as regards the calculation of complex QCD backgrounds, and lattice calculations about flavor physics observables start to probe non-perturbative issues quantitatively. Heavy ion collisions can probe effects that are beyond perturbation theory and here lattice calculations at finite density and temperature are useful.
That the Standard Model is an approximate description is understood and no surprise. Whether the LHC configuration (energy and luminosity) can be sensitive to non-perturbative properties of the Standard Model isn't bad news-it would be very good news, if it turns out that way. However for this statement to be effective, the result of a specific calculation is required, in order that the search strategies be designed to select the events appropriately. For the moment, this is more likely to be found in heavy ion collisions. Such a calculation can be obtained, precisely, using a lattice regularization or, for instance, holography-but the issue of the backgrounds requires study. For the moment, by imposing as selection criteria perturbative calculations, everything can be accounted for-and not all of the known processes have been measured to discovery precision yet.
It may be that it will be easier to detect perturbative effects of physics beyond the Standard Model, than it will be to detect non-perturbative effects of the Standard Model.
Stam, they used to say at the beginning that (perturbative) renormalization consists of rendering the Green functions UV finite order by order in the perturbation expansion. This was apparently not satisfactory for Landau, who abandoned quantum field theory in the fifties after having realized the consequences of the pole bearing his name. It is by now assumed in perturbative renormalization that the coupling constant remains small in the UV. As for the triviality bound for the Higgs mass, you may look at Heller et al., Nucl. Phys. B405, 555 (1993). But this problem leads us far away from the original question.
Landau may not have been satisfied-but this isn't a statement based on physics, but on personal opinion, so it's not very useful. As stressed by Coleman in his 1973 Erice lectures, consistency requires not simply ``small'' coupling constants, but controlling the leading logarithms-in the UV for applications to particle physics, in the IR for applications to critical phenomena-this last point was made by Mitter. The paper by Heller et al., in fact, studies purely scalar actions, so it doesn't bear on the issue discussed, namely what happens to the scalar self-coupling in the presence of gauge fields and fermions. While fermions are computationally challenging, the effects of gauge fields should be amenable to numerical analysis.
Personally, I believe that relativistic covariance, like charge conservation, is a renormalization axiom. (For charge conservation, this was noticed by Schwinger, cf. the so-called Schwinger terms.)
No, the three concepts are logically distinct: relativistic covariance is a statement about space(time) symmetries, charge conservation is a statement about an internal symmetry and renormalization is a statement about how the quantities that are relevant for describing systems with an infinite number of degrees of freedom depend on how *this* infinity is defined, when taking into account configurations, that don't satisfy the (classical) equations of motion.
>Do we have to return to the idea of H. B. Nielsen about the recovery of Lorentz invariance in low energy effective theories?
As we know, the main idea in this area is based on Loop Quantum Gravity.
First of all, I would like to thank you for all of your answers; they are very helpful. Thinking about them, I have realized that the problem we discuss here has nothing to do with relativity, and I re-edited the question accordingly. The point is that it is the boost invariance which is violated by the non-perturbative regulators, and this takes place in non-relativistic theories, too.
Stam, please note that the Landau pole corresponds to a leading-log resummation. You are right about the φ^4 model: we do not know what happens if additional fields are introduced. But I think, and this is my personal opinion only, that it is not even important, because the renormalized trajectory of Nature probably differs from that of the Standard Model at high energies. What matters, from the point of view of the question discussed here, is that the cutoff dependence of non-asymptotically free models is beyond the perturbation expansion, and we seem to lack methods to assure the boost symmetry.
Lev, you are right, relativistic covariance was assumed in axiomatic quantum field theory from the very beginning. The reason, I suspect, was that they worked with formal operators and with renormalized Green functions, neither containing the cutoff. By alluding to the Schwinger term you go very much ahead. The Schwinger term was the precursor of the anomaly, the universal way of breaking symmetries by any regulator. It might be that the boost symmetry is inevitably broken by the regulator, but whether it happens in a universal manner or not is another question. The circumstance that there is a perturbative, boost invariant regulator, namely dimensional regularization, makes this problem different from that of the anomalous symmetries.
Jaykov, could you please give a reference to the recovery of the Lorentz invariance in loop gravity?
Janos, the Landau pole appears when solving the beta function equation for the coupling constant, typically at one-loop order, perhaps higher, in any case at finite order. So, yes, by definition, it takes into account leading logarithm resummation, but, also by construction, it lies outside the domain of validity of the expansion. At the energy scale of the Landau pole the coupling constant has already become, by construction, so large that the expansion itself doesn't make sense-it has broken down long before that energy scale is reached-not to mention the fact that it ignores the contribution of any new degrees of freedom. Only were it possible for non-perturbative methods to probe that scale could some conclusion be reached about any physical effect it might herald. Göckeler et al. in 1997 performed a lattice study of the Landau pole in QED, http://arxiv.org/abs/hep-th/9712244 . Of course they, too, could not take into account any degrees of freedom between the scale where QED makes sense and the Landau pole.
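For concreteness, the one-loop statement can be made explicit (a standard textbook estimate, not part of the lattice study cited):

```latex
% One-loop running of the QED fine-structure constant,
%   \mu^2 \frac{d\alpha}{d\mu^2} = \frac{\alpha^2}{3\pi},
% integrates to
\alpha(\mu) = \frac{\alpha(m)}{1 - \dfrac{\alpha(m)}{3\pi}\,\ln\dfrac{\mu^2}{m^2}}\,,
% and the denominator vanishes (the Landau pole) at
\Lambda_{\mathrm{Landau}} = m\,\exp\!\left(\frac{3\pi}{2\,\alpha(m)}\right),
% which, for \alpha(m_e)\approx 1/137, lies at roughly m_e\,e^{645}:
% the one-loop expansion has lost its validity long before this scale.
```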
If the claim is that Lorentz invariance is, in fact, broken, then it's possible to parametrize this and this work has been done. The Lagrangian acquires additional terms and its existing terms acquire additional structure and these additions have consequences that have been studied, e.g. by Kostelecky and collaborators.
Dear Janos,
I apologize for the late response to your remarks and questions. These days I was out of Budapest with no access to the Internet.
The separation between the NP and PT parts in the full gluon propagator is exact and unique within our approach: the NP part (which we call intrinsically NP, INP) is transversal by construction, while the PT one is of arbitrary gauge. The INP part depends on the mass gap, while the PT part does not. The INP part is represented by a Laurent expansion in powers of (mass gap over gluon momentum)^2, i.e., it contains all the strong (NP) IR singularities possible in QCD due to the self-interaction of massless gluon modes, while the PT part contains only the PT IR singularity. And finally, the PT part diverges only logarithmically, while the coefficients of the Laurent expansion remain lambda^2-dependent constants (it is a dimensionless UV regulator).
So this became the main problem: how to separate the mass gap from these constants, which have no physical sense. The Picard theorem turned out to be extremely useful in this. It made it possible to replace the Laurent expansion close to its essential singularity by a constant whose value depends only on how the gluon momentum goes to zero: as the momentum of a free particle or as a loop variable. In the first case it should be put to zero; in the second case it leads to a finite result for the full gluon propagator, which is simply reduced to its INP component.
Now about using the Euclidean signature. It is always convenient to use it because it is free from unphysical IR singularities due to the light cone. In Minkowski space the corresponding dimensional regularization expansions, correctly implemented into the theory of distributions, are much more complicated. As pointed out by Jaffe and Witten, to prove the existence of YM theory (which assumes Lorentz invariance, in my opinion) includes establishing axiomatic properties at least as strong as those proven by Osterwalder and Schrader for Euclidean Green's functions.
Best wishes
Vahtang
To Janos Polonyi: ``Jaykov, could you please give a reference to the recovery of the Lorentz invariance in loop gravity?''
see for example
http://arxiv.org/abs/gr-qc/9705019v1
Lorentz invariance in gravitational theories is a *local* symmetry, not a global symmetry. This means, in particular, that, if it's broken by a regulator, or if it's broken by terms in the action, and one wants to study whether the effects of such terms can be eliminated in some limit, the answer is that this is a dynamical issue and additional conditions are required, as for any local symmetry, in fact; cf. for instance the papers by B. Julia and S. Silva, http://arxiv.org/abs/hep-th/0205072 .
So the questions of whether Lorentz invariance is recovered, as a gauge symmetry, in modifications of theories of gravity and that of whether Lorentz invariance is recovered, as a global symmetry, in theories that are defined on a fixed spacetime, are completely independent.
Stam, you are right, the chiral symmetry breaking indeed prevents QED from reaching the Landau pole. But the driving force of chiral symmetry breaking is the running of the coupling constant. The pole, though not accessible, is a good parametrization to understand the qualitative features of the accessible region. But these are words only, and I think that we agree about the facts, namely that the renormalization of a non-asymptotically free theory is nonperturbative.
I think that the other point you mention, the parameterization of the possible breaking of Lorentz invariance and its comparison with experiments, is very important. The earlier works of Kostelecky and his collaborators were based on string theory. One may consider the string as a rather involved regularization which preserves Lorentz invariance. It is used perturbatively, and we cannot draw conclusions about non-perturbative theories. But an enormous amount of work has indeed been done to parameterize Lorentz invariance breaking in quantum field theory models, and the effects are below the observational threshold as far as I know. The papers I know in this direction are based on the perturbation expansion, where the regulator was supposed to be Lorentz invariant. Do we have the same possibility of breaking Lorentz invariance when the regulator, rather than the vertices, violates the symmetry? I suspect that the scaling laws will be different.
Thank you for mentioning Kostelecky's name; he is strongly involved in the project of checking Lorentz invariance, and his works are relevant. He and Long have a recent paper, http://journals.aps.org/prd/abstract/10.1103/PhysRev , with an upper limit on some Lorentz invariance violation in gravity. This is interesting because quantum gravity, being the gauge theory of the Lorentz group, cannot tolerate a regulator which violates the Lorentz symmetry. Can the negative results obtained in the search for Lorentz violation in gravity be considered as a hint towards either string theories or the classical nature of the gravitational forces?
I agree completely that the recovery of a global and of a local symmetry are different. This is why the Lorentz invariance of the regulator is obligatory in quantum gravity. In other words, in the absence of a nonperturbative, Lorentz invariant regulator there is no quantum gravity.
Dear Vahtang, welcome back. Thank you for your explanation about the separation by the Picard theorem; I am starting to understand your construction, it is quite unusual and interesting. About the Euclidean signature: You are right, the Osterwalder-Schrader reconstruction theorem applies and there is no problem with Wick rotating renormalized Green functions. What I try to understand is a strange limitation of this strategy: it seems to apply to perturbatively renormalizable theories only, because we lack a nonperturbative regulator which preserves Lorentz symmetry. Only asymptotically free theories can be renormalized by the perturbation expansion, hence the Yang-Mills theory is the only four dimensional realization of the Osterwalder-Schrader theorem(???).
Jaykov, the point you have mentioned, strings, could be the solution of this problem! A short (open) string can serve as a regulator when combined with point splitting. And this is used in loop gravity in four dimensional space-time. Therefore the proposition is to use point splitting in theories in flat space-time, too. But this still introduces a new class of dynamical variables, the strings. It would be useful to understand how they avoid the infinities of the volume of regions in Minkowski space-time which are bounded by an invariant length, and whether we can repeat this trick without them. Or, if the strings form a non-dynamical, fixed set of lines, then to prove that the renormalized theory is universal, namely that it is independent of the choice of the set of strings used as a regulator. Do you know some results in this direction?
Janos, actually it is possible for a regulator to break a gauge symmetry and, nonetheless, be useful-what's required is to ensure that it is possible to impose the Ward-Takahashi identities. Another way of saying this is to realize that, while the local symmetry may be broken, if the global, BRST symmetry can be imposed, this suffices, for it is equivalent to the local symmetry. This is, indeed, what has been the basis of the ``Rome approach'' to a lattice formulation of the electroweak sector. If the regulator violates the gauge symmetry, the conditions for its recovery entail independent counterterms, that's all; if the divergences are ``hard'', however, an independent renormalization is needed (cf. the old paper by Preparata and Weisberger, who showed that the axial current requires an independent renormalization constant for this reason).
I don't think that any absence of violations of local Lorentz invariance in classical gravity has any bearing, one way or another, on whether extended objects themselves are relevant excitations-nor on the classical nature of gravity-just like any presence of such deviations would, either-they're independent issues.
It is well known that Lorentz invariant quantum field theories exist on fractal spacetime:
http://arxiv.org/pdf/0912.3142v3.pdf
Stam, thank you for mentioning the Rome Approach, it is indeed related. But it shows the difficulties at the same time: the need for infinitely many counterterms to balance the violation of the local symmetry. Can one really call such a theory renormalizable in the original sense of the word, namely that the theory generates unique, well defined predictions? There might not be a problem at a given order of the perturbation expansion. But the problem I raise is whether we can go beyond the perturbation expansion. The perturbation series is at most asymptotically convergent, hence it gives no unique predictions. It is true that the Rome Approach is applied in a non-perturbative setting, but not with Lorentz symmetry.
The Kostelecky-Long paper (the correct reference is http://arxiv.org/abs/1412.8362) is about neither classical nor quantum physics, but rather about real measurements. If there is quantum gravity, then it contributes to the observed quantities.
Jaykov, you have very good suggestions! I think that Calcagni's nice paper is a realization of dimensional regularization in real space, in a non-perturbative setting. But the author admits that it corresponds to a deformed Poincare group. This could have been said about dimensional regularization, but nobody(?) took the O(epsilon) deformation seriously in the perturbative framework.
My problem with the relativistically invariant regulator is very primitive: Consider a loop integral in a relativistic theory at vanishing external momenta and assume that the regulator is Lorentz invariant. Thus the integrand is Lorentz invariant, as well. But it diverges, because the volume between two hyperboloids is infinite. Do you see how point splitting or fractal space-time could avoid this problem? I do not yet understand the former case. The latter scheme presumably avoids the problem by a clever analytic continuation, like in the case of the power divergences, which are swept under the rug.
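To spell out the divergence being described (a sketch only): a sharp Lorentz-invariant cutoff restricts the invariant k^2, but the region it retains still has infinite volume, because each hyperboloid k^2 = const is a non-compact orbit of the Lorentz group:

```latex
% Volume of the region between the hyperboloids k^2 = 0 and k^2 = \Lambda^2,
% parametrized by k^0 = \sqrt{k^2 + \vec{k}^{\,2}} (so dk^0 = dk^2/2k^0):
\mathrm{Vol}\,\{k : 0 \le k^2 \le \Lambda^2\}
 = \int_0^{\Lambda^2}\! dk^2 \int\! d^3\vec{k}\;
   \frac{1}{2\sqrt{k^2 + \vec{k}^{\,2}}} \;=\; \infty\,,
% since the \vec{k} integration is unbounded. Hence the invariant cutoff
% \theta(\Lambda^2 - |k^2|) does not make \int d^4k\, f(k^2) finite.
```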
Luiz, I agree that the problem you raise is beyond our capabilities. But we are discussing here a different, less ambitious question. Instead of a full algebra of operators, we consider a few expectation values in the path integral representation and wonder if they can, in principle, be defined in a Lorentz invariant manner nonperturbatively. Constructive field theory came into the discussion because this formalism is supposed to possess Lorentz invariant Green functions. Is this indeed true for a bare theory, before attempting to remove the cutoff?
Janos, while lattice regularization does generate infinitely many terms, these are organized by power counting, so it isn't true that an infinite number of counterterms is needed. The notions of ``relevant'', ``irrelevant'' and ``marginal'' still apply, as in any regularization scheme. What the Rome approach highlights is that, by exploiting the global BRST symmetry, which is equivalent to the gauge symmetry, it is possible, in principle, to control the breaking of the gauge symmetry by the regulator. That's the whole point. So that's how it is possible to understand that the theory is renormalizable. The perturbation series and its convergence don't enter the picture-the issue is to describe the identities that the correlation functions on the lattice must satisfy, in order to be able to define a scaling limit at all. The relevant mathematical work is in Reisz's papers, where the way to formulate a power counting theorem on the lattice is described (though he was interested in QCD applications). Cf. also http://arxiv.org/abs/hep-lat/9707007
I have just realized that one of my colleagues here in Strasbourg, Jean-Luc Jacquot, has written papers about point splitting regularization of gauge theories, arXiv:hep-th/9609158 and arXiv:hep-th/0410251. The regularization is nonperturbative, but the loop integrals are calculated in Euclidean space-time. I believe that the problem remains open, because the Wick rotation generates an exponentially small, but finite, violation of the Lorentz symmetry.
Stam, you are right about the finite number of relevant terms in the Standard Model. But gravity is nonrenormalizable according to power counting, and there are infinitely many counterterms. Quantum gravity is a remote subject from the point of view of experimental support. I mentioned the issue of local Lorentz invariance only because I thought that the breaking of the global Lorentz symmetry by the cutoff is not too interesting (the symmetry is supposed to be weakly broken for a high enough cutoff). But the result of Iengo et al., mentioned by Jean days ago, shows that the logarithmic running of the marginal operators makes problems. It is easy to check that the logarithmic running is so slow that the problem is present even if the cutoff is at the Planck scale.
You are right, the correlation functions seen on the lattice have nothing to do with the perturbation expansion. I mentioned the nonconvergence of the perturbation expansion to demonstrate why a theory, defined by the perturbation expansion only, is not satisfactory.
Luiz, it is remarkable that we have to go as far as string theory to prove Lorentz invariance. Is the string the only Lorentz invariant regulator without the misleading structure of an analytic continuation? You mention the spectral function, which is indeed the best way to perform the Wick rotation. Do you know if these functions exist for higher order Green functions?
Janos, the Wick rotation does not violate Lorentz invariance: the Euclidian theory, defined through the Wick rotation, isn't supposed to be Lorentz invariant in the first place, but invariant under rotations and translations. So the only question, answered by Osterwalder and Schrader, is, whether, given the correlation functions of the Euclidian theory, it is possible to define the correlation functions of the Lorentzian theory. For lattice theories the relevant property is reflection positivity of the lattice action and that's the subject of the papers of Menotti and collaborators and others, too. There are, however, subtleties, as noted by Maiani and Testa. Point splitting, of course, breaks Lorentz invariance, explicitly-so this breaking doesn't have anything to do with the analytic continuation, but with the fixed length of the point splitting and it should be possible to show that all Lorentz breaking effects are proportional to the splitting and that there aren't any divergences that cancel their contribution and leave a finite part. The analytic continuation, involved in the calculation of specific integrals, although it does, of course, resemble the Wick rotation, doesn't imply that the whole theory is defined in Euclidian signature-it's a mathematical statement about the positions of the singularities of those integrals, that are defined in Lorentzian signature.
Stam, the integrals, defined over the full, infinite energy range can indeed be Wick rotated into Lorentz invariant quantities. This is what Osterwalder and Schrader considered. But the energy integral of a regulated Euclidean theory with finite cutoff involves finite energy range.
Janos, indeed-but the breaking is due to the finite energy range, not to the Euclidian signature, itself. And the finite range is due to the regularization, not the signature. In Euclidian signature this breaking is seen as breaking of rotation invariance.
Stam, you are right. But I did not say that the Euclidean signature plays any role. All I am saying is that the Wick rotated Green functions are not Lorentz invariant for finite cutoff.
Janos, they're not Lorentz invariant, even for infinite cutoff-they can be used to define Lorentz invariant functions, there's a difference. It's not the case that, at the scaling limit of the Euclidian theory, Lorentz invariance is recovered; SO(4) rotation invariance is recovered. However this is enough to allow the reconstruction of Lorentz invariant functions-cf. http://arxiv.org/abs/hep-th/9802035 mentioned above and that's what matters, not that they're rotation invariant at the scaling limit.
Stam, use an O(4) invariant sharp cutoff in Euclidean space. (Non-sharp O(4) invariant cutoffs introduce higher order time derivatives and negative norm states.) Then the Euclidean energy integration is over a finite range, the Minkowski energy integral is over the same range (only the signature of the metric was flipped) and the Wick rotated Green functions violate Lorentz invariance. I do not doubt that Euclidean invariance is recovered in renormalizable theories after the cutoff is removed. But we are talking about a regulated theory with a finite cutoff.
Janos, sure, in the presence of a cutoff there are all sorts of artifacts-but these can be identified. It isn't surprising that artifacts of the regularization scheme appear, it's inevitable. And they depend on the details of the scheme. What is, however, remarkable is that there do exist theories where certain quantities do not depend on the scheme, while others do. That's what matters.
Stam, do we agree on the statement that the Green functions of a bare theory (finite cutoff) are not Lorentz invariant?
Janos, that depends on the cutoff procedure-dimensional regularization preserves Lorentz invariance, for instance. Pauli-Villars, also, preserves Lorentz invariance. Similarly, any modification of propagators, multiplying them by a (decreasing) function of p^2/Λ^2, for instance, respects Lorentz invariance, since p^2 is a Lorentz invariant. Therefore such a cutoff respects Lorentz invariance, too. They may not respect other symmetries, of course.
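As a textbook illustration of such a propagator modification, the Pauli-Villars subtraction:

```latex
% Pauli-Villars: subtract the propagator of a heavy regulator field,
\frac{1}{p^2 - m^2} - \frac{1}{p^2 - \Lambda^2}
 \;=\; \frac{m^2 - \Lambda^2}{(p^2 - m^2)(p^2 - \Lambda^2)}
 \;\xrightarrow[\;p^2 \to \infty\;]{}\; -\,\frac{\Lambda^2 - m^2}{p^4}\,,
% which softens the UV behaviour from 1/p^2 to 1/p^4 while depending
% on the momentum only through the Lorentz invariant p^2; the original
% propagator is recovered as \Lambda \to \infty.
```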
Stam, the higher order derivatives in time, appearing in the regulator, generate negative norm states and render the theory physically unacceptable.
Stam, yes, they do not decouple. One ends up with negative norm states, a Hamiltonian which is unbounded from below and the violation of unitarity.
Janos, that depends on the dynamics: precisely, for renormalizable theories they do decouple, for non-renormalizable they don't and these statements mean, indeed, that the latter require additional degrees of freedom, to describe interactions, the former do not-said differently, non-renormalizable theories exist only as free theories, with a fixed field content. And one can test these statements, for instance, in three spacetime dimensions, where rigorous constructions have been made.
Janos, but the regularization isn't an end in itself-as long as it's possible to identify the artifacts, that's what matters-that's what the term ``artifacts'' means. It isn't true, as a general statement, that all regularization schemes lead to a Hamiltonian that's unbounded from below, or to a non-unitary Hamiltonian; indeed, one reason why the lattice regularization is useful is that it can be proved to define a Hamiltonian that's bounded from below-that's among the consequences of reflection positivity on the lattice, for just one example.
If the regularization scheme led to a Hamiltonian that were unbounded from below it wouldn't be possible to perform any meaningful calculations with it-so something else is, surely, meant.
Stam, I agree that the regularization is not an end in itself, but not with the rest. Who can tell what is an artifact and what is not? But we definitely know that a quantum field theory cannot be defined without a cutoff. The regulator is not only a mathematical trick to save the formalism; it reflects the limited information we have from using measurements with finite resolution in space-time. We know, as well, that models where our lack of information can be hidden and the cutoff can be sent to infinity cannot be taken seriously. First, they describe a dynamics which extrapolates down to zero distance with a single scaling law, a rather unrealistic and simplistic feature. Second, they form a too restrictive set. Therefore we have to live with effective theories with a finite cutoff, as far as I see.
Luiz, I know only one book by Barry Simon, "Functional Integration and Quantum Physics", but I find nothing related there. Could you please give a more precise reference.
Janos, the renormalizability of a theory doesn't imply that the theory is a valid description at any scale-it implies that it is possible to define quantities that are independent of the cutoff procedure, at any given scale, and to describe how they behave, as one varies the scale, under the assumption that no additional degrees of freedom should be taken into account-that's how these latter can be unambiguously identified, when comparing the calculations with experiment. In a non-renormalizable theory such quantities cannot be defined at all, that's the difference. Once more, these issues don't have anything to do with perturbative renormalizability, but with the existence of a scaling limit, fixed points, lines, and so on of the renormalization group flow. Perturbatively renormalizable theories are those that can be obtained by perturbing about a fixed point, essentially a Gaussian fixed point. And lattice techniques have made it possible to study more general cases, such as anisotropic gauge theories, for instance. If mathematically rigorous results are sought, the relevant papers are by Magnen and Sénéor and their collaborators, linked to above, in four spacetime dimensions.
Stam, you are right about the unambiguity of a renormalized quantum field theory. What I meant is that renormalizability is a convenience rather than a necessity. It is a convenience so long as our ignorance is swept under the rug rather than displayed in each equation. You are right about the possibility of several scaling regimes in a given renormalized theory. They incorporate a finite number of scaling regimes; there is a last one as we go up in energy, and from that on, until infinite energy, a single scaling law applies. I think that this is an unrealistic and restricting feature, because I do not believe I will see the ultimate Theory Of Everything in my lifetime. In view of the restricted set of renormalizable models, the effective theory strategy, namely to use and test theories within a given scale regime, seems more promising. This is why a renormalizable theory is not a necessity at the time being.
You are right, as well, that universality holds for both Gaussian and non-Gaussian fixed points. The perturbation expansion crept into this discussion because the only Lorentz invariant regulator I can imagine, dimensional regularization, is designed for the perturbation expansion. There is no problem in using the perturbation expansion, but if that is all we can do, then the theory is not well defined, because of the built-in minimal errors appearing around order 1/g of the asymptotic series.
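The ``minimal error'' referred to can be quantified by the standard estimate for an asymptotic series with factorially growing coefficients (a generic assumption, not specific to any particular model):

```latex
% For a series \sum_n a_n g^n with a_n \sim n!, Stirling's formula gives
|a_n g^n| \sim e^{\,n\ln n - n + n\ln g}\,,
% which is minimized at n_\ast \approx 1/g, where the smallest term is
|a_{n_\ast} g^{n_\ast}| \sim e^{-1/g}\,.
% Truncating at order n_\ast \approx 1/g therefore leaves an irreducible
% ambiguity of order e^{-1/g}, invisible at any finite order in g.
```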
Janos, renormalizability isn't a convenience-it's necessary. It doesn't ``sweep'' anything under any ``rug''. While *historically* people first found divergent expressions, then found how to define them and found that, in that case, the only consistent definitions led to finite expressions, or to free theories, this is just of interest to the history of the subject. Now we know better, that it is possible to set up the calculations in a way that doesn't encounter divergent expressions at all-but one must show that there exist quantities that don't depend on how this is done, which is what universality actually means. A theory that doesn't possess a scaling limit is sensitive to all sorts of effects and it isn't possible to state unambiguously what are artifacts of the regularization used to perform any calculation and what are physical properties. Said differently, if the theory possesses a scaling limit, universality makes sense and it's possible to say meaningfully that one is working with one theory or another theory. If the theory does not possess a scaling limit universality doesn't make sense: the different regularization schemes define different theories, because it's not possible to compare them.
In four spacetime dimensions, indeed, the work of Seiberg, Witten and Nekrasov shows that it is possible to define Lorentz invariant theories, where the non-analytic effects can be controlled in a mathematically well defined way. Using lattice techniques on simpler theories, one can obtain considerable insight, also.
My present understanding of this issue is finally summarized in https://arxiv.org/abs/1701.04068