When quantum mechanics and special relativity are combined (relativistic quantum mechanics, or relativistic quantum field theory), a new scale emerges for a particle of mass m: the Compton wavelength lambda_C = hbar/(m c). According to the Heisenberg inequalities, probing distances smaller than lambda_C requires energies higher than m c^2. Such energies bridge the gap (of 2 m c^2) separating the positive- and negative-energy solutions of the relativistic free-particle energy relation, and so may imply the creation of virtual particles. In quantum field theory, the perturbative formulation of physical processes involves summations (at each order) over virtual intermediate states. If the theory is Lorentz invariant, those summations run over an infinite number of intermediate states and, generally speaking, divergent summations produce infinities. Crudely speaking, renormalization is a general theoretical algorithm for getting rid of the infinities that appear at each order of perturbation theory in practically all quantum field theories.
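As a quick numerical check of the scales involved (my own illustration, using hard-coded CODATA values for the electron, not figures from the post above):

```python
# Reduced Compton wavelength and rest energy of the electron.
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
m_e = 9.1093837015e-31   # kg
eV = 1.602176634e-19     # J per eV

lambda_C = hbar / (m_e * c)   # reduced Compton wavelength, ~3.86e-13 m
E_rest = m_e * c**2           # rest energy, ~0.511 MeV

print(f"lambda_C = {lambda_C:.4e} m")
print(f"m c^2 = {E_rest / eV / 1e6:.4f} MeV")
```

Probing below ~4e-13 m for an electron thus means exchanging energies of order 0.5 MeV, approaching the pair-creation scale 2 m c^2 ≈ 1.02 MeV.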
There is a very interesting PhD thesis you can follow to understand renormalization: Renormalization from Classical to Quantum Physics by Arnab Kar (https://urresearch.rochester.edu/fileDownloadForInstitutionalItem.action?itemId=28739&itemFileId=144111)
At one point the author says:
"... However, the presence of virtual states leads to infinities in the calculations. The emergence of these infinities remained a puzzle for quite sometime. Following an idea by Dirac, the first step towards solving the puzzle was finally laid by Bethe, Schwinger, Feynman and Dyson in 1940s. They showed that the infinities could be avoided by redefining the coupling constants and masses of the theory. The process of redefinition of the parameters of the model became famous as renormalization in years to come. Even though it is mathematically quite cumbersome, it led to physical theories of unprecedented accuracy and elegance. The magnetic moment of the electron was calculated using this formalism to an accuracy of 15 decimal places, and agrees with experiments. Another renormalizable theory is the Yang-Mills theory. This non-abelian gauge theory describes the strong forces in the standard model. The “Higgs” mechanism, which assigns masses to the force carriers in the standard model is also a renormalizable theory. These theories in nature stand on an elegant, yet intricate model. The renormalizable theories have other issues which need to be addressed now. The YangMills theory has the million dollar prize associated with it, to address the mass gap problem. The mass gap problem in the theory deals with the existence and evaluation of the ground state and first excited state energies of the theory. The “Higgs” mechanism on the other hand has a naturalness problem/hierarchy problem associated to it. The mass of the Higgs et al. boson calculated in this theory is lighter than the Planck mass by many orders of magnitude. It is puzzling that the weak forces of nature mediated by the Higgs et al. boson are so much stronger than the gravitational forces...."
See also: Bertrand Delamotte, "A hint of renormalization", Am. J. Phys. 72, 170-184 (2004).
One definition of renormalization: renormalization is used to treat infinities arising in calculated quantities by altering the values of quantities to compensate for the effects of their self-interactions. Now, because operating with infinities is often not defined, it is smart to avoid them. My personal opinion: use the theory of distributions, which allows one to handle such singular objects (see generalized functions, test functions, the Dirac and Heaviside functions, distributions on set functions (Vasile Postolica, Harvaneanu)...).
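For concreteness, a standard identity from distribution theory (my addition, not a claim about QFT renormalization itself): singular objects are defined by their action on smooth, compactly supported test functions phi, so the derivative of the Heaviside step H is the Dirac delta:

\[
\langle H', \varphi\rangle = -\langle H, \varphi'\rangle = -\int_0^\infty \varphi'(x)\,dx = \varphi(0) = \langle \delta, \varphi\rangle .
\]

In this sense, objects that look ill-defined pointwise can be handled rigorously.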
s h s> Can anybody explain the renormalization concept in layman's words?
Physics explanation:
Consider an ideal harmonic oscillator with a natural (resonance) frequency. If you drive that oscillator with a periodic external force at exactly the resonance frequency, the oscillation amplitude will grow indefinitely (in practice it will be limited by damping).
If you have a physical (i.e. nonlinear) oscillator, its natural frequency depends on the oscillation amplitude: it is renormalised by the nonlinearities, by an amount which depends on the amplitude.
Many of you can observe this in your kitchen by listening to the noise from your refrigerator, which -- among many things -- is a nonlinear driven system with one (or more) natural periods of oscillation, the value(s) of which are amplitude dependent. You can hear the level of the noise rise and fall in a fairly systematic manner. That is because the system is driven at a fixed frequency, ultimately coming from your electricity company, surprisingly often close to a resonance frequency. When close to resonance, the amplitude rises. This renormalises the resonance frequency to a different value. So your driving force falls out of resonance, and in turn out of phase with the oscillation, which lowers the amplitude again. Which brings the system back to resonance. And so it goes...
TL;DR: Listen to your refrigerator.
Note added: As I described, renormalisation is a phenomenon of nature, caused by nonlinearities. Nature does not give a s**t about distributions and similar inventions.
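To make the amplitude-dependent frequency shift concrete, here is a minimal numerical sketch (my own construction, not from the post above), assuming an undamped Duffing-type oscillator x'' + x + beta*x^3 = 0 as the model nonlinearity:

```python
# Frequency "renormalization" in a nonlinear (Duffing-type) oscillator:
#   x'' + x + beta*x^3 = 0
# For beta > 0 the effective frequency rises with the oscillation amplitude.
import numpy as np
from scipy.integrate import solve_ivp

beta = 0.5

def rhs(t, y):
    x, v = y
    return [v, -x - beta * x**3]

def angular_frequency(amplitude, t_max=200.0):
    sol = solve_ivp(rhs, (0.0, t_max), [amplitude, 0.0],
                    dense_output=True, rtol=1e-9, atol=1e-12)
    t = np.linspace(0.0, t_max, 200_000)
    x = sol.sol(t)[0]
    # Estimate the period from upward zero crossings.
    crossings = t[1:][(x[:-1] < 0) & (x[1:] >= 0)]
    return 2 * np.pi / np.mean(np.diff(crossings))

for a in (0.1, 0.5, 1.0, 2.0):
    print(f"amplitude {a:4.1f} -> angular frequency {angular_frequency(a):.4f}")
```

For small amplitude A, perturbation theory gives omega ≈ 1 + 3*beta*A^2/8, which the printed numbers approach; at larger A the "renormalized" frequency departs strongly from the bare value 1.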
Well, dear Thierry, the first article you recommend mentions (translated from the French):
"The study of the spectrum shows that the reddening of the light results from a pumping of hydrogen atoms from 1S to 2P. Consequently, the reddenings do not have a cosmological origin."
When calculating amplitudes of physical processes in quantum field theories, we find that a change in the cut-off energy scale can be compensated by a change in the coupling strengths. This removes the cut-off dependence of physical quantities, as needed, but only at the cost of introducing a scale dependence in the coupling strengths. This purely quantum effect is usually known as renormalization. Once the cut-off dependence is removed, physical quantities become finite.
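A toy numerical illustration of that compensation (my own construction in the spirit of Delamotte's article, not a real QFT calculation): take a fake "one-loop" amplitude A(E) = g + g^2 ln(Lambda/E) with cutoff Lambda and bare coupling g. Fitting g at one reference energy E0 makes the predictions at other energies nearly cutoff-independent, even though g itself changes a lot with Lambda:

```python
import numpy as np

def amplitude(g, E, Lam):
    # Toy "one-loop" amplitude with an explicit cutoff Lam.
    return g + g**2 * np.log(Lam / E)

def fit_coupling(A0, E0, Lam):
    # Solve g + g^2*ln(Lam/E0) = A0 for the (positive) bare coupling.
    L = np.log(Lam / E0)
    return (np.sqrt(1.0 + 4.0 * L * A0) - 1.0) / (2.0 * L)

A0, E0 = 0.1, 1.0  # "measured" amplitude at reference energy E0
for Lam in (1e3, 1e6, 1e9):
    g = fit_coupling(A0, E0, Lam)
    print(f"Lambda={Lam:.0e}  g={g:.5f}  "
          f"A(E=0.1)={amplitude(g, 0.1, Lam):.5f}  "
          f"A(E=10)={amplitude(g, 10.0, Lam):.5f}")
```

The bare coupling changes substantially as Lambda grows by six orders of magnitude, while the predicted amplitudes drift only at higher order in g; a full renormalization-group treatment would remove even that residual drift.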
Measuring implies units, that is, discreteness: we approach continuous phenomena by using discrete sets. The smaller they are, the better the approximation, but it is never perfect.
We can only solve for cross sections using perturbation theory, in a coupling-constant expansion. Unfortunately, beyond the zeroth order we encounter infinities, whereas experiment has finite answers. That required humans to look at the theory and ask what the parameters entering the Hamiltonian or Lagrangian really are. They came to the conclusion that experimental results are relative to the presence of quantum fluctuations. But how does one describe "relative to quantum fluctuations" in a theoretical analysis? The answer is to identify the theory's parameters as unmeasurable "bare" infinities and split off finite parts. The infinity left over accounts for the quantum fluctuations, and the splitting off of finite parts is the kludge we call "relative to quantum fluctuations". The finite pieces depend on the experimental values that measure these left-overs. As experiment moves to new cross-section energies, the finite pieces scale. Renormalization is a fancy word for this splitting off of finite pieces. If this scheme does not remove the infinities encountered in scattering cross sections, we say the Hamiltonian or Lagrangian is unrenormalizable, which is the case for General Relativity in four dimensions using the Einstein-Hilbert action.
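Schematically (a standard textbook splitting, written here in generic notation):

\[
m_{\text{bare}}(\Lambda) = m_{\text{phys}} + \delta m(\Lambda), \qquad
e_{\text{bare}}(\Lambda) = e_{\text{phys}} + \delta e(\Lambda),
\]

where the pieces \(\delta m\) and \(\delta e\) diverge as the cutoff \(\Lambda \to \infty\) but are tuned, order by order, so that \(m_{\text{phys}}\) and \(e_{\text{phys}}\) match the measured values and the cross sections come out finite.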
Renormalization is the price we pay for choosing a wrong interaction operator in the original QFT Hamiltonian.
If you look at the interaction operator of any relativistic quantum field theory, you'll notice that it is drastically different from interactions in most familiar classical or quantum theories. In "normal" theories, the interaction acts only when there are two or more particles. Indeed, a single particle has nothing to interact with, so "normal" interaction operators are not supposed to act on one-particle states. So, in a "normal" theory we can (1) define our free particles, (2) define interactions between the particles, (3) study systems of particles, like bound states, collisions etc. with the understanding that when the particles are separated, they are non-interacting and they obey our definition in (1).
Interactions you meet in QFT are very "abnormal" in the sense that they act non-trivially on single particles and even on the vacuum (no-particle) state. In QFT, a single particle can "interact with itself". This means that the particle definitions introduced in step (1) become invalid once we introduce the interaction in step (2). Due to the self-interaction, our particles (e.g., electrons) acquire properties different from those we started with: their masses and charges get modified. The ugly thing is that these mass and charge self-interaction corrections are infinite! So, any calculation of physically interesting things in (3) becomes impossible. For example, the S-matrix gets infinite contributions in high perturbation orders.
The renormalization theory tries to fix this mess. However, it does not dare to make the whole theory consistent, finite, etc. It says roughly the following: the most important theoretical quantity is the S-matrix. From the S-matrix you can extract scattering cross-sections, energies of bound states and a few other quantities that are directly comparable with experiment. So, let us fix the S-matrix only. The idea is to add infinite counterterms to the Hamiltonian, so that when we calculate the S-matrix from the Hamiltonian, these infinite counterterms cancel order-by-order with the original infinite contributions mentioned in the previous paragraph.
The specific form of these counterterms is defined by demanding a couple of physically sensible "renormalization conditions" for the new S-matrix. These are known as the "mass renormalization condition" and the "charge renormalization condition". The mass renormalization condition basically means that we forbid self-scattering in zero-particle (vacuum) and one-particle states. The charge renormalization condition may demand (for example) that in the low-energy, long-distance limit, the scattering of charges is described by the well-known classical formulas.
Then we calculate our S-matrix from the originally defined Hamiltonian and, in each perturbation order, add counterterms to this Hamiltonian so that both renormalization conditions are satisfied.
At the end of this renormalization process we get a finite S-matrix, which satisfies (by construction) our renormalization conditions and (by sheer luck or the hand of God) is extremely accurate, agreeing with experiment better than any other theory. At the same time we get a completely useless Hamiltonian full of these counterterms. The counterterms are infinite, but we do not care, because we are not going to measure the time evolution in our experiments, so we can tolerate it if our Hamiltonian (= the generator of time evolution) is screwed up.
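In symbols, the procedure sketched above amounts to (schematically; the precise counterterms depend on the theory and on the regularization):

\[
H = H_0 + V \;\longrightarrow\; H = H_0 + V + \sum_{n \ge 2} Q_n ,
\]

where each counterterm \(Q_n\) is of \(n\)-th order in the coupling and is fixed by the mass and charge renormalization conditions, so that every order of the perturbative S-matrix comes out finite.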
The nonlinear oscillator mentioned above does not have a definite "frequency" but rather a definite period, since the motion is no longer harmonic! It has nothing to do with the constant renormalizations in QFT.
I wrote a popular explanation of why we encounter such a problem; see, for example, my article "A Toy Model of Renormalization and Reformulation" on arXiv.
Renormalization can also deal with "total energy" and with the wave functions of Maxwell's theory. Born approximations are also used as part of the wavefunction machinery, where the squared amplitude yields a probability. How you obtain your probability is everything. Probability needs to be based on independence, as Dirac once noted, not on Bose's approach, nor on others who erred there; such mistakes are important and can be risky.
The renormalization group (RG) refers to a mathematical apparatus that allows systematic investigation of the changes of a physical system as viewed at different scales. For example, in quantum electrodynamics (QED), an electron appears to be composed of electrons, positrons (anti-electrons) and photons as one views it at higher resolution, at very short distances. The electron at such short distances has a slightly different electric charge than the dressed electron seen at large distances, and this change, or "running", in the value of the electric charge is determined by the renormalization group equation.
The simple electron/photon interaction that determines the electron's charge at one renormalization point is revealed to consist of more complicated interactions at another.
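As a concrete illustration of that running (one loop, electron loop only, so qualitative rather than precise), here is a sketch of the standard leading-log solution of the QED renormalization-group equation d(alpha)/d(ln mu) = 2 alpha^2 / (3 pi):

```python
import math

def alpha_qed(mu, mu0=0.511e-3, alpha0=1.0 / 137.035999):
    # One-loop QED running with a single charged lepton (the electron).
    # Energies mu, mu0 in GeV; valid only for mu above the electron mass.
    # 1/alpha(mu) = 1/alpha(mu0) - (2/(3*pi)) * ln(mu/mu0)
    return 1.0 / (1.0 / alpha0 - (2.0 / (3.0 * math.pi)) * math.log(mu / mu0))

for mu in (0.511e-3, 1.0, 91.19):  # electron mass, 1 GeV, Z mass (in GeV)
    print(f"mu = {mu:10.4g} GeV   1/alpha = {1.0 / alpha_qed(mu):8.3f}")
```

This prints 1/alpha ≈ 137.0 at the electron mass, falling to ≈ 134.5 at the Z mass; the measured value there, about 1/128, differs because every charged fermion contributes to the running, not just the electron.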