
Renormalization

Intuitive

What does a JPEG have to do with economics and quantum gravity? All of them are about what happens when you simplify world-descriptions. A JPEG compresses an image by throwing out fine structure in ways a casual glance won't detect. Economists produce theories of human behavior that gloss over the details of individual psychology. Meanwhile, even our most sophisticated physics experiments can't show us the most fundamental building-blocks of matter, and so our theories have to make do with descriptions that blur out the smallest scales. The study of how theories change as we move to more or less detailed descriptions is known as renormalization.

Introduction to Renormalization by Simon DeDeo

An experiment to determine what a hydrogen atom consists of, or even as straightforward as measuring the mass or electric charge of a free electron, is analogous to viewing something on a computer screen, where the image is made of pixels. At low resolution you see some level of detail, but if you increase the density of pixels, the picture becomes sharper. For example, if an image with a few pixels shows a river, representations built from larger numbers of pixels will reveal finer details of its swirling waters. Likewise, at low resolution, an atom of hydrogen can just be made out to contain a powerful electric field, which grips a single electron in the atom’s outer regions. The electron itself can be discerned as a fuzzy lump of electricity, filling a single pixel and seemingly just beyond the limit of resolution. In order to see what that haze of charge is really like, we must take a closer look. Suppose we increased the pixel density by a hundred, so that for every individual large pixel that was present in the previous case, we now have a hundred fine-grain ones. Where before the electric charge was located within a single large pixel, we now find that it is situated inside the single minipixel at the center, the ninety-nine surrounding minipixels containing negatives and positives: the whirlpools of virtual electrons and positrons in the vacuum. Now increase the density by another factor of a hundred. The charge of the electron, which previously filled a single minipixel, is now found to be concentrated into a central micropixel, which is surrounded by further vacuum whirlpools. And each of the whirlpools that had previously shown up in the image made of minipixels is revealed to contain yet finer eddies. And so it continues. The image of an electron is like a fractal, repeating over and over at finer detail, forever. 
The electric charge of the electron is focused into a single pixel of an ever-decreasing size, surrounded by a vacuum of unimaginable electrical detail. I cannot imagine it, nor, I suspect, can any other physicist. We are content to follow what the mathematics and experimental data reveal. Anyway, it is not the surrounding vacuum that concerns us here; our quarry is the electric charge of the original electron. As we zoom in toward the kernel, and the smeared lump of that electron’s charge is concentrated into a smaller and smaller pixel, the density of the electric charge increases. The way that this number changes with the scale of the resolution is called the “beta function.” Going uphill, in the sense that the density of charge is increasing, the slope is positive, while downhill, decreasing, it is negative. In QED the slope of beta is positive, meaning that as your microscope zooms in, the charge concentrates more and more. According to QED, if the pixel size were to become infinitesimal, we would find the charge to be located totally within that infinitely minute point. Consequently, it would be infinitely dense.

Calculations in QED, which gave infinity as the answer, were in some cases doing so because they had summed up the contributions of all pixel sizes—from large scale all the way down to the infinitesimal extreme. However, this is not what a real experiment measures. What you see depends on the resolving power of your microscope. In our voyage into inner space, current technology enables us to resolve matter at a scale as fine as one-billionth the size of a single atom of hydrogen. This is very small, to be sure, but it is not truly infinitesimal. In practice, what we want to be able to calculate are the values of measurements made at this particular scale of resolution. Adding up the contributions of all pixel sizes is like a travel agent charging you for trips to outer space, and including in the bill exotic destinations that are as yet impossible to reach. For example, you may have traveled on the International Space Station, the limit of what is currently on offer, and expressed an interest in Lunar exploration and even a trip to Mars when these become practical. The travel agent includes these in the bill, along with twenty-first century trips to deep space that are presently science fiction. With the potential trips extrapolated into the indefinite future, the bill comes to infinity. By any rationale, this is absurd. What you really want to know is the cost of a trip to the moon or to Mars.
To do so, we may agree with the accountant that the price for a trip to the space station (plus an option on possible future trips to infinitely far other destinations) is some amount: X. Then we might expect to get a sensible answer, in terms of multiples of X, for the cost of a trip to the moon or Mars. In fact, it would be determined by the finite difference in the distances between those destinations. Analogously, infinity emerges from a typical sum in QED because the calculation has included the effects of scales of length—pixel sizes, if you prefer—that are infinitesimally small. With hindsight, this too is absurd. And as in the case with the travel bill, we need some way in QED to do the accounts for what we can measure and not include in the sums the wonders of an inner space that lie beyond vision. If we have some point of reference, where we know the correct value, the equivalent of the known cost of going to the space station, then we may be able to compute another quantity relative to the first one. That was at the core of what Feynman and Schwinger were developing. This technique of using known values, such as the charge of the electron (the cost of going to the space station in our analogy) to calculate other values, such as its magnetism (the cost to the moon or Mars), became known as renormalization.

page 44 in The Infinity Puzzle by Frank Close
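Close's travel-agent analogy can be sketched numerically (an editorial illustration, not from the book; the "destinations" and scales are invented): each "bill" alone diverges as the cutoff is pushed toward infinity, but the difference between two bills, i.e. one quantity computed relative to a known reference, stays finite and cutoff-independent.

```python
import math

def cost(scale, cutoff):
    """Toy 'travel bill': integral of dk/k from scale up to cutoff,
    i.e. ln(cutoff/scale). Diverges as the cutoff is pushed to infinity."""
    return math.log(cutoff / scale)

station = 1.0   # the known reference 'destination' (arbitrary units)
moon = 0.01     # the destination we actually want to price

for cutoff in (1e3, 1e6, 1e12):
    # Each bill alone grows without bound as the cutoff increases...
    bill_station = cost(station, cutoff)
    bill_moon = cost(moon, cutoff)
    # ...but the *relative* cost is finite and cutoff-independent:
    print(cutoff, bill_moon - bill_station)   # always ln(station/moon) = ln(100)
```

Each bill grows like ln(cutoff), yet the relative cost equals ln(station/moon) no matter how large the cutoff is; this is the sense in which one measurable quantity is computed relative to another.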

Thus in elemental silicon, where there are many electrons locked up in the chemical bonds, it is possible to pull an electron out of a chemical bond to make a hole. This hole is then mobile, and acts in every way like an extra electron added to the silicon, except that its electric charge is backward. This is the antimatter effect. Unfortunately, the hole idea makes no sense in the absence of something physically analogous to a solid's bond length, since this length fixes the density of electrons one is ripping out. Without it, the background electron density would have to be infinite. However, such a length conflicts fundamentally with the principle of relativity, which forbids space from having any preferred scales. No solution to this dilemma has ever been found. Instead, physicists have developed clever semantic techniques for papering it over. Thus instead of holes one speaks of antiparticles. Instead of a bond length one speaks of an abstraction called the ultraviolet cutoff, a tiny length scale introduced into the problem to regulate it—which is to say, to cause it to make sense. Below this scale one simply aborts one's calculations, as though the equations were becoming invalid at this scale anyway because it is, well, the bond length. One carries the ultraviolet cutoff through all calculations and then argues at the end that it is too small to be measured and therefore does not exist.

Much of quantum electrodynamics, the mathematical description of how light communicates with the ocean of electrons ostensibly pervading the universe, boils down to demonstrating the unmeasurableness of the ultraviolet cutoff. This communication, which is large, has the fascinating implication that real light involves motion of something occupying the vacuum of space, namely all those electrons (and other things as well), although the extent of this motion depends sensitively on the value of the ultraviolet cutoff, which is not known. There are endless arguments about what kinds of regularization are best, whether the cutoff is real or fictitious, whether relativity should be sacrificed, and who is too myopic to see the truth. It is just dreadful. The potential of overcoming the ultraviolet problem is also the deeper reason for the allure of string theory, a microscopic model for the vacuum that has failed to account for any measured thing.

The source of this insanity is easy to see if one simply steps back from the problem and examines it as a whole. The properties of empty space relevant to our lives show all the signs of being emergent phenomena characteristic of a phase of matter. They are simple, exact, model-insensitive, and universal. This is what insensitivity to the ultraviolet cutoff means physically.

[…]

David Pines and I coined the term "protection" as a lay-accessible synonym for the technical (and thus confusing) physics term "attractive fixed point of the renormalization group." See R. B. Laughlin and D. Pines, Proc. Natl. Acad. Sci. 97, 28 (2000).

All well-documented cases of protection in physics are characterized by invariance of scale. This idea is illustrated by the story of the incompetent director who wants to make a movie of an organ pipe sounding. This is obviously not a big money-maker, but he is an avant-garde director and believes that this film will be the ultimate Zen cinematic experience. After a few minutes of filming he decides that the pipe is too small and orders it doubled in size. The larger pipe of course sounds with a lower tone—and the cameraman is instructed to back up, so that the enlarged pipe again fills the field of view. He then begins filming again—until he realizes his mistake. In a rage he jams the developed film into the projector, flips a switch to make it run twice as fast, and confirms that, sure enough, the image and sound are exactly the same as they were before. The improvements changed nothing. The reason is that the laws of hydrodynamics responsible for the sound of the organ pipe are scale-invariant. The pipe's observed behavior remains the same if the sample size is doubled, followed by a corresponding doubling of the scales for measurement of distance and time. This process is called renormalization, and it is the traditional conceptual basis for discussing protection in physics.

Renormalizability is fundamentally lopsided. In the case of the organ pipe, for example, one can scale to larger and larger sizes forever without breakdown of the renormalization rule, but scaling in the opposite direction, to smaller sizes, works only down to the size of the atoms, at which point the laws of hydrodynamics fail. It is actually more revealing to imagine this experiment in reverse—starting from a small sample and scaling up. One finds that corrections to hydrodynamics such as atomic graininess, nonlinear viscosity rules, dependence of the flows on internal factors other than pressure, and so forth get smaller and smaller with each size change, causing the phenomenon of hydrodynamics to "emerge" in the limit of large sample size. That's the good news. The bad news is that there are other possibilities. If the average number of atoms per unit volume had been slightly higher, the universalities of crystalline solids would have emerged under renormalization instead of those of fluids. One might say that small samples contain elements of all their possible phases—just as a baby contains all the elements of various kinds of adulthood—and that the system's identity as one phase or the other develops only after some properties are pruned away and others enhanced through growth.

The technical term for diminution of some physical property, such as shear strength in fluids, under renormalization is irrelevance. Thus, in a fluid, corrections to hydrodynamics—indeed most properties of a collection of atoms one could imagine measuring—are irrelevant, as are the corrections to elastic rigidity when the system is solid. Unfortunately, irrelevance is also a strong contender for the dumbest choice for a technical term ever made. It confuses everyone, including professional physicists, on account of having multiple meanings. I could go on and on about the practice of rewarding scientists for inventing things other people can't understand, but will constrain myself and note only that one of the easiest ways to do that is to assign a new meaning to a commonly used word. You chat along, casually drop this word and, presto, anyone listening is hopelessly confused. The trick to deciphering the code is to realize that there are two versions of the word "irrelevant." One means "not germane" and applies to lots of things other than physics. The other means "doomed by principles of emergence to be unmeasurably small" and applies only to certain physical things.

The emergence of conventional principles of protection acquires an interesting twist when the system is balanced at a phase transition, so that it has trouble deciding how to organize itself. Then it can happen that everything is irrelevant except one characteristic quantity that grows without bound as the sample size increases, such as the amount of magnetism in a magnetic material. This relevant quantity ultimately decides which phase the system is in. In the case of magnetism, for example, the growth is negative if the temperature exceeds a certain value, causing the magnetism above this temperature to disappear. Being magnetic is all or nothing. There can also be quantities that neither grow nor diminish, but these so-called marginal variables characterize a special kind of stillborn phase transition that occurs rarely (i.e., never) in nature. The situation resembles a tug-of-war in which two teams wrestle back and forth—first one gaining the advantage, then the other—exhibiting the universal characteristics of a tug-of-war, all other aspects of life having become irrelevant. At last, one team accelerates the rope faster and faster to its side, and the other team loses control and gets dragged catastrophically into the mud. That there will be a winner of this contest is certain, but the time it takes for the winner to emerge is not. In principle, the contest could be drawn out for an arbitrarily long time if the two teams were arbitrarily well balanced. In practice, the balance makes them highly susceptible to external influences, such as rainstorms or heckling bystanders, which then decide the outcome rather than the natural superiority of one team over the other. This effect also occurs in balanced elections, which is why such elections mean very little.
Balanced protection occurs commonly in nature, but less so than one might anticipate because most phase transitions, like the evaporation of water, have a latent heat that forces the phases to coexist. Water is nicely in phase balance on a hot, humid day, when some is in the air and the rest in lakes and ponds. This balance is precisely what makes these days so uncomfortable, for it prevents the water in one's body from cooling it by evaporation. But if the water is placed under pressure, the heat required to turn the liquid into vapor can be made to diminish and then disappear altogether, thus obliterating the difference between the liquid and vapor. When it just barely vanishes one gets a true balance effect called critical opalescence, in which the fluid becomes milky and opaque. This effect is something like fog, but vastly more interesting because it lacks scale. The droplet size in real fog is determined by environmental factors, such as dust and microscopic bits of sea salt in the air, and could just as easily have been extremely large—the extreme case being a nearby lake. But under pressure the schizophrenia of the fluid is maximized, and its fog-like behavior exists on all scales simultaneously. While this effect is wonderfully entertaining to see, its practical use is restricted to steam turbine design, which exploits this special property of its working fluid to maximize fuel efficiency.

Balance universalities and relevance associated with phase transitions in nature cause two physical effects I call the Dark Corollaries. The melodramatic overtones are intentional, for these effects are insidious, destructive, and thoroughly evil, at least from the perspective of anyone concerned with differentiating what is true from what is not.

[…]

The second Dark Corollary I call the Barrier of Relevance. Suppose by some miracle one was able to discover the true underlying mathematical description of a thing, whatever it was, and took as one's task to solve the equations and predict the protected behavior that they imply. One would have to make approximations, of course, and in a stably protected situation, the small errors implicit in these approximations would be irrelevant in the technical sense that they would heal as one scaled up to larger and larger sample sizes. But in an unstable situation, relevant mistakes grow without bound. Rather than healing one's errors, the physical behavior amplifies them, causing one's prediction to become less and less reliable as the sample size grows. This effect is conceptually the same thing as "sensitive dependence on initial conditions" in chaos theory, but differs from it in pertaining to evolution in scale rather than evolution in time. As in chaos theory, a very small error in the procedure for solving the equations can metastasize into a gigantic error in the final result—large enough to make the result qualitatively wrong. This kind of universality destroys predictive power. Even if you had the right underlying equations, they would not be of any use for predicting the behavior you actually care about, because you could not solve them sufficiently accurately to make such predictions. This, in turn, makes them unfalsifiable. If you cannot predict certain experiments reliably, then you also cannot use these experiments to determine whether the theory is correct. The system has spontaneously generated a fundamental barrier to knowledge, an epistemological brick wall.

"A Different Universe" by Robert Laughlin
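Laughlin's picture of corrections that "heal" or grow under repeated doubling of the sample size can be sketched with a linearized renormalization step (a toy model; the coupling names and scaling dimensions here are invented for illustration): a coupling with negative scaling dimension shrinks at each blocking step (irrelevant), while one with positive dimension grows without bound (relevant) and decides the phase.

```python
def rg_step(g, y, b=2.0):
    """One blocking step: double the sample size (b = 2); near a fixed point
    a coupling with scaling dimension y transforms as g -> b**y * g."""
    return (b ** y) * g

# Invented couplings near a fixed point: (initial value, scaling dimension y)
g_irr, y_irr = 0.5, -1.0   # e.g. an atomic-graininess correction: irrelevant
g_rel, y_rel = 0.01, +1.0  # e.g. magnetization at a transition: relevant

for step in range(20):     # scale up the sample by a factor 2**20
    g_irr = rg_step(g_irr, y_irr)
    g_rel = rg_step(g_rel, y_rel)

print(g_irr)  # 0.5 / 2**20 ~ 4.8e-07: the correction 'heals' away
print(g_rel)  # 0.01 * 2**20 ~ 1.0e+04: grows without bound, decides the phase
```

The same linearized step also shows the Barrier of Relevance: a tiny error in a relevant coupling is amplified by b**y at every step, so predictions degrade as the sample grows.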


Recommended Books:

  • The Infinity Puzzle by F. Close

Concrete

Another well-known problem is connected with the fact that field theory has infinitely many degrees of freedom, and all of them have some zero point oscillations. Even for a free (photon) field we have a strongly divergent vacuum energy

$$ \int^{k_{max}} \frac{d^3k V}{(2\pi)^3} \frac{k}{2} \propto k_{max}^4 V . \tag{1.15} $$ For a gluonic field, which is self-interacting, one has, in addition to (1.15), a series of corrections in powers of $\alpha_s(k_{max})$. In both cases we get rid of this infinite energy by “renormalization”. Its physical basis is the following: we are not interested in the energy of very high frequency modes, since they are never excited in physical processes under consideration. As a result their zero point energy is an unimportant additive constant which can be put equal to zero by some convention.

Theory and phenomenology of the QCD vacuum by Edward V Shuryak
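Shuryak's estimate (1.15) can be checked by direct numerical integration (an editorial sketch in units where $\hbar = c = V = 1$): summing the zero-point energies $\omega_k/2 = k/2$ of all modes below a momentum cutoff gives an energy density that grows as the fourth power of the cutoff.

```python
import math

def vacuum_energy_density(k_max, n=100000):
    """Zero-point energy per unit volume of a free massless field:
    (1/(2*pi)**3) * integral of d^3k * (k/2) up to |k| = k_max,
    evaluated as a radial midpoint sum with measure 4*pi*k**2 dk."""
    dk = k_max / n
    total = 0.0
    for i in range(n):
        k = (i + 0.5) * dk
        total += 4 * math.pi * k**2 * (k / 2) * dk
    return total / (2 * math.pi) ** 3

e1 = vacuum_energy_density(1.0)
e2 = vacuum_energy_density(2.0)
print(e2 / e1)               # ~16: doubling k_max scales the energy by 2**4
print(e1 * 16 * math.pi**2)  # ~1: matches the closed form k_max**4/(16*pi**2)
```

The closed form follows from $\int_0^{k_{max}} 4\pi k^2 \, (k/2) \, dk / (2\pi)^3 = k_{max}^4/(16\pi^2)$, which is the $\propto k_{max}^4$ divergence of (1.15).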

Recommended Resources:

Abstract

Recommended Papers:

Why is it interesting?

The philosophy of renormalization is one of the most difficult, and controversial, in all of particle physics. Yet it underpins modern theory.

page 42 in The Infinity Puzzle by Frank Close

For physicists, infinity is a code word for disaster, the proof that you are trying to apply a theory beyond its realm of applicability. In the case of QED, if you can’t calculate something as basic as a photon being absorbed by an electron, you haven’t got a theory—it’s as fundamental as that.

page 4 in The Infinity Puzzle by Frank Close

It is possible to say that the UV divergences mean that the theory makes no physical sense, and that the subject of interacting quantum field theories is full of nonsense. (Dirac (1981))

Renormalization by Collins

Renormalization originated not from abstract theory but rather from the struggle to overcome a nasty technical problem. If one supposes that spacetime is a continuum, then in any finite volume of space there is an infinite number of degrees of freedom, and in summing their contributions to physical processes one often finds divergent, and hence meaningless, results. Renormalization originated as a technical trick to absorb these divergences into redefinitions of the couplings: it relates so called “bare” couplings, which appear in the fundamental Lagrangian and have no direct physical significance, to “renormalized” couplings, which correspond to what one actually measures in the laboratory.

Gravity from a Particle Physicists’ perspective by Roberto Percacci

FAQ

Why is renormalization of QED easy?

A reason that QED is a renormalizable theory is that electrons, positrons, and photons bubble in and out of existence, and the overall image is like a fractal—there is no means to tell at what resolution your microscope is operating. In quantum field theory, the range of a force is linked to the mass of its carrier. The electromagnetic force has infinite range, which in QED is because of the fact that the photon has zero mass. An infinite-range force appears the same—infinite range—in all microscope images, which is a reason for QED’s being renormalizable. Contrast this with a finite-range force, such as the weak force in Chapters 7–11, which will not appear the same at all microscopic scales. At high resolution, where a small distance fills the entire image, the range will appear to fill more of the picture than in a lower-resolution example covering a greater span. The finite-range weak force is carried by a massive W boson. It is this mass, and its link to a finite range of distance, that spoils the invariance of the microscopic image and contributed to the difficulty of constructing a renormalizable theory of the weak force. This analogy is described in more detail in G. ’t Hooft, In Search of the Basic Building Blocks, 67.

page 374 in The Infinity Puzzle by Frank Close
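The point about infinite-range versus finite-range forces can be illustrated with the corresponding static potentials (a standard textbook comparison, sketched here in arbitrary units; the numerical values are arbitrary): the Coulomb potential of a massless carrier rescales uniformly under a zoom, while the Yukawa potential of a massive carrier singles out the scale $1/m$ and so looks different at different resolutions.

```python
import math

def coulomb(r):
    """Static potential of a massless carrier (photon): no intrinsic scale."""
    return 1.0 / r

def yukawa(r, m):
    """Static potential of a carrier with mass m: exp(-m*r)/r, range ~ 1/m."""
    return math.exp(-m * r) / r

s = 100.0   # zoom factor of the 'microscope'
r = 3.7     # an arbitrary distance, arbitrary units
m = 1.0     # carrier mass (hypothetical value)

# Zooming in (r -> r/s) rescales the Coulomb potential by the same factor s
# at every r: the image looks the same at all resolutions.
print(coulomb(r / s) / coulomb(r))      # = s exactly, independent of r

# The Yukawa potential picks up an extra r-dependent factor exp(m*r*(1 - 1/s)):
# the mass scale spoils the invariance of the microscope image.
print(yukawa(r / s, m) / yukawa(r, m))  # != s, and depends on r
```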

Why do we need to renormalize at all?
See:

The ultraviolet infinities appear as a consequence of space-time localization of interactions, which occur at a point, rather than spread over a region. (Sometimes it is claimed that field theoretic infinities arise from the unhappy union of quantum theory with special relativity. But this does not describe all cases - later I shall discuss a non-relativistic, ultraviolet-divergent and renormalizable field theory.) Therefore choosing models with non-local interactions provides a way to avoid ultraviolet infinities. The first to take this route was Heisenberg, but his model was not phenomenologically viable. These days in string theory non-locality is built-in at the start, so that all quantum effects - including gravitational ones - are ultraviolet finite, but this has been achieved at the expense of calculability: unlike ultraviolet-divergent local quantum field theory, finite string theory has enormous difficulty in predicting definite physical effects, even though it has succeeded in reproducing previously mysterious results of quantum field theory - I have in mind the quantities associated with black-hole radiance.

The Unreasonable Effectiveness of Quantum Field Theory by R. Jackiw

The solution is given by an iterated integral, known as Dyson’s series, $U(s,t) = T\left(e^{-i \int_s^t d\tau \, H_I(\tau)}\right)$. Here time ordering is needed because the Hamiltonians evaluated at different times need not commute. Since time ordering is defined with the generalized function $\theta$, this is the point where (ultraviolet) divergences are inserted into the theory; in general one cannot multiply distributions by discontinuous functions, or equivalently, in the language of distributions, the product of two distributions is not well-defined. There are two ways of dealing with this problem: try to construct a well-defined version of T, or proceed with the calculation and try to get rid of the problems at the end. The first is the basic idea of the Epstein-Glaser approach, the second leads to renormalization, the art of removing these divergences in a physically meaningful manner. […]

Finally, a theory is called renormalizable if in every order of perturbation theory the counterterms added can be put into the Lagrangian without changing its structure. In other words, if renormalizing the Lagrangian amounts to a rescaling of its parameters.

[…]

The other approach to renormalization goes back to Stückelberg, Bogolyubov and Shirkov [BS80]. The basic idea is to find a well-defined version of the time ordering operator to take care of divergences before they can emerge. In causal perturbation theory one tries to construct the S-matrix not as a perturbation series but from a set of physical axioms such as causality and Lorentz covariance.

Wonderful Renormalization by Berghoff
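The definition quoted above, that renormalizing the Lagrangian "amounts to a rescaling of its parameters", can be written out for the standard $\phi^4$ example (a textbook illustration, not taken from Berghoff's text):

$$ \mathcal{L} = \tfrac{1}{2}(\partial\phi)^2 - \tfrac{1}{2}m^2\phi^2 - \tfrac{\lambda}{4!}\phi^4 + \tfrac{\delta_Z}{2}(\partial\phi)^2 - \tfrac{\delta_m}{2}\phi^2 - \tfrac{\delta_\lambda}{4!}\phi^4 , $$

where the last three terms are the counterterms. Because each counterterm has the same form as a term already present, they can be absorbed into bare quantities, $\phi_0 = (1+\delta_Z)^{1/2}\phi$, $m_0^2 = (m^2+\delta_m)/(1+\delta_Z)$ and $\lambda_0 = (\lambda+\delta_\lambda)/(1+\delta_Z)^2$, so the structure of $\mathcal{L}$ is unchanged and only its parameters are rescaled: this is renormalizability in the sense above.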

Infinite renormalization is needed in ordinary QM when the potential gets too singular, for example with delta-function potentials that model contact interactions. Hardly ever discussed in textbooks but important for understanding. […] A paper by Dimock (Comm. Math. Phys. 57 (1977), 51-66) shows rigorously that, at least in 2 dimensions, delta-function potentials define the correct nonrelativistic limit of local scalar field theories.

http://www.mat.univie.ac.at/~neum/physfaq/topics/whyRen
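The two-dimensional delta-function potential mentioned above makes the point concrete. In the following sketch (units and conventions are mine: $2m = \hbar = 1$, $H = -\nabla^2 - g\,\delta^2(\mathbf{r})$, momentum cutoff $\Lambda$), demanding a fixed bound-state energy $-E_b$ forces the bare coupling to vanish logarithmically as the cutoff is removed, which is infinite renormalization inside ordinary quantum mechanics.

```python
import math

def coupling(E_b, cutoff):
    """Bare strength g of the attractive 2D delta potential, tuned so that
    the bound state sits at energy -E_b when momenta are cut off at |k| = cutoff.
    From the bound-state condition 1/g = (1/(4*pi)) * ln(1 + cutoff**2 / E_b)."""
    return 4 * math.pi / math.log(1 + cutoff**2 / E_b)

E_b = 1.0  # the physical (renormalized) input: a fixed binding energy
for cutoff in (1e2, 1e6, 1e12):
    print(cutoff, coupling(E_b, cutoff))
# g(cutoff) -> 0 logarithmically as the cutoff is removed: the bare coupling
# must be 'infinitely renormalized' to keep the physical bound state fixed.
```

The condition comes from $1 = g \int^{\Lambda} \frac{d^2k}{(2\pi)^2} \frac{1}{k^2 + E_b}$, whose radial integral gives $1/g = \frac{1}{4\pi}\ln(1 + \Lambda^2/E_b)$.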

Though it was now obvious that QED was the correct theory to describe electromagnetic interactions, the renormalization procedure itself, allowing the extraction of finite results from initial infinite quantities, had remained a matter of some concern for theorists: the meaning of the renormalization ‘recipe’ and, thus, of the bare parameters remained obscure. Much effort was devoted to try to overcome this initial conceptual weakness of the theory. Several types of solutions were proposed:

(i) The problem came from the use of an unjustified perturbative expansion and a correct summation of the expansion in powers of α would solve it. Somewhat related, in spirit, was the development of the so-called axiomatic QFT, which tried to derive rigorous, non-perturbative, results from the general principles on which QFT was based.

(ii) The principle of QFT had to be modified: only renormalized perturbation theory was meaningful. The initial bare theory with its bare parameters had no physical meaning. This line of thought led to the BPHZ (Bogoliubov, Parasiuk, Hepp, Zimmerman) formalism and, finally, to the work of Epstein and Glaser, where the problem of divergences in position space (instead of momentum space) was reduced to a mathematical problem of a correct definition of singular products of distributions. The corresponding efforts much clarified the principles of perturbation theory, but disguised the problem of divergences in such a way that it seemed never to have existed in the first place.

(iii) Finally, the cut-off had a physical meaning and was generated by additional interactions, non-describable by QFT. In the 1960s some physicists thought that strong interactions could play this role (the cut-off then being provided by the range of nuclear forces). Renormalizable theories could then be thought of as theories somewhat insensitive to this additional unknown short-distance structure, a property that obviously required some better understanding. This latter point of view is in some sense close to our modern understanding, even though the cut-off is no longer provided by strong interactions.

"Phase Transitions and Renormalization Group" by Zinn-Justin

For example, in the well known textbook [15] it is explained in detail that interacting quantized fields can be treated only as operatorial distributions and hence their product at the same point is not well defined. https://arxiv.org/pdf/1004.1861.pdf

See also:

Renormalization-Prescription Ambiguity

This ambiguity arises because we are free to remove the infinities in the theory in any way we choose. Though we take care of the freedom to scale the momenta for our renormalization, through a simple subset of the general renormalization-group equations, we are still free to prescribe how much of the finite parts are left behind on taking the infinities away. This is the so-called renormalization-prescription ambiguity. Clearly any physical quantity does not depend on the renormalization scheme in which we choose to work, and so the choice of scheme is merely a matter of convenience, provided we can calculate to all orders in perturbation theory. However, in some schemes we may have to calculate many terms to obtain a good approximation, while in others just the first few terms may be sufficient. It is particularly important in perturbative QCD to find such schemes, because we have little hope of calculating beyond the first few terms in the QCD perturbative expansion. In perturbative QED, the choice of scheme is of minor consequence, because, for any reasonable choice of expansion parameter, the first few terms in the series are always a good approximation. Rather remarkably, moreover, we have a natural choice of renormalization prescription in QED: the physical or on-shell scheme. In this scheme, the low-energy limit of the differential cross section for Compton scattering is proportional to $\alpha^2$ to all orders. This $\alpha$ is just the fine-structure constant, i.e. 1/137. We can, of course, use any other (running) coupling $\alpha(Q^2)$ in any scheme, as discussed by Coquereaux. In QCD, because the gluon, unlike the photon, is not physical on-shell, we have no such "physical" definition of the coupling.

Renormalization-prescription ambiguity in perturbative quantum chromodynamics: Has Stevenson found the solution? by M. R. Pennington
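The scheme dependence described above can be made concrete already at one loop. The sketch below evaluates the leading-order running coupling $\alpha_s(Q) = 4\pi/\big(\beta_0 \ln(Q^2/\Lambda^2)\big)$ for two different $\Lambda$ parameters standing in for two renormalization prescriptions (the numerical values are purely illustrative, not fits to data); the spread between prescriptions shrinks as $Q$ grows, so at low $Q$, where only a few perturbative terms are available, the choice of scheme matters most.

```python
import math

def alpha_s(Q, Lam, nf=5):
    """One-loop running coupling: alpha_s(Q) = 4*pi / (beta0 * ln(Q^2/Lambda^2))."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (beta0 * math.log(Q**2 / Lam**2))

# Two illustrative Lambda parameters (in GeV) standing in for two
# renormalization prescriptions.
for Q in (5.0, 20.0, 100.0):  # GeV
    a1 = alpha_s(Q, Lam=0.20)
    a2 = alpha_s(Q, Lam=0.30)
    print(f"Q = {Q:6.1f} GeV  alpha_s: {a1:.4f} vs {a2:.4f}"
          f"  (relative spread {abs(a1 - a2) / a1:.1%})")
```

At leading order the two prescriptions differ by a constant shift in $\ln Q^2$, which is exactly the kind of finite ambiguity the quote refers to; including higher orders would systematically reduce the spread.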

Criticism of the Renormalization Procedure

The shell game that we play […] is technically called ‘renormalization’. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.

Feynman 1985

Most physicists are very satisfied with the situation. They say: ‘Quantum electrodynamics is a good theory and we do not have to worry about it any more.’ I must say that I am very dissatisfied with the situation, because this so-called ‘good theory’ does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small—not neglecting it just because it is infinitely great and you do not want it!

Dirac 1975

Is there a more physical alternative to renormalization?
Yes!

Although regularization and infinite renormalization of the $\delta$-function interaction strength produces physically sensible (unitary) and non-trivial scattering amplitudes, and also the possibility of bound states, it would seem preferable to arrive at the results without introducing the mathematically awkward "infinite" quantity $\Lambda \propto R^{-1}$. This can be achieved through the method of self-adjoint extension.

[…]

There can be no doubt that a $\delta$-function interaction in two- and three-dimensional Schrödinger theory can be defined, and the method of self-adjoint extension allows dispensing with infinite renormalization. Of course, the relation of these non-relativistic theories to relativistic field theories in 2+1- and 3+1-dimensional space-time is purely formal. Thus, one cannot draw any definite conclusions about the field theory models. In 1+1-dimensional space-time, the $\phi^4$ relativistic interaction rigorously goes over to the non-relativistic Schrödinger theory of a one-dimensional $\delta$-function. The possibility of carrying out the proof relies on the mildness of that field theory's ultraviolet divergences. It should also be feasible to carry out an analysis of the non-relativistic limit for the super-renormalizable 2+1-dimensional $\phi^4$ theory. While this model is known to exist, the presumed Schrödinger-theory $\delta$-function limit shows some unexpected features: the need for infinite renormalization, which is not necessary in the field theory; and the existence of a bound state regardless of the sign of the renormalized coupling, as if only an attractive non-relativistic theory exists.

In conclusion, while the status of relativistic, 3+1-dimensional $\phi^4$ field theory remains unsettled, the non-relativistic theory is not necessarily trivial. Indeed, a non-trivial scattering amplitude exists, but its construction is a subtle task. It remains to be seen whether a subtle construction of the field theory is possible.

Section I.3.B in Diverse Topics in Theoretical and Mathematical Physics by R. Jackiw

See also: Renormalized contact potential in two dimensions by R. J. Henderson and S. G. Rajeev and Renormalized path integral in quantum mechanics by R. J. Henderson and S. G. Rajeev
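The cutoff renormalization that self-adjoint extension lets one avoid can be made explicit for the two-dimensional $\delta$-function well. In the sketch below (units with $\hbar = 2m = 1$, so the kinetic energy is $k^2$; the chosen $E_B$ is arbitrary), the bare coupling is tuned as a function of the momentum cutoff so that the bound-state energy stays fixed at $E_B$, using the standard condition $1 = \frac{g}{4\pi}\ln\big((\Lambda^2+E_B)/E_B\big)$. The bare coupling then flows logarithmically to zero as the cutoff is removed, while the physical bound state survives: a dimensionless coupling has traded itself for a scale (dimensional transmutation).

```python
import math

E_B = 1.0  # chosen bound-state binding energy; it sets the scale of the theory

def g_bare(Lam):
    """Bare coupling of the 2D delta well, tuned so that the cutoff theory
    has a bound state at E_B:  1 = g/(4*pi) * ln((Lam^2 + E_B)/E_B)."""
    return 4.0 * math.pi / math.log((Lam**2 + E_B) / E_B)

def bound_state_energy(Lam, g):
    """Solve the bound-state condition 1 = g/(4*pi)*ln((Lam^2+E)/E) for E
    by bisection (the left-hand side decreases monotonically in E)."""
    f = lambda E: g / (4.0 * math.pi) * math.log((Lam**2 + E) / E) - 1.0
    lo, hi = 1e-10, Lam**2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for Lam in (1e2, 1e4, 1e6):
    g = g_bare(Lam)
    E = bound_state_energy(Lam, g)
    print(f"cutoff {Lam:.0e}: bare coupling g = {g:.4f}, bound state at E = {E:.6f}")
```

The printout shows the same bound-state energy at every cutoff while the bare coupling steadily shrinks; self-adjoint extension arrives at the same physics with a single real parameter and no cutoff at all.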

Why is the standard model renormalizable?

Note that, when a renormalization group transformation is performed, couplings, fields and operators re-arrange themselves according to their canonical dimensions. When going from high mass scales to low mass scales, coefficients with highest mass dimensions, and operators with lowest mass dimensions, become most significant. This implies that, seen from a large distance scale, the most complicated theories simplify, since complicated composite fields, as well as the coefficients they are associated with, will rapidly become insignificant. This is generally assumed to be the technical reason why all our ‘effective’ theories at the present mass scale are renormalizable field theories. Non-renormalizable coefficients have become insignificant. Even if our local Hamiltonian density may be quite ugly at the Planck scale, it will come out as a clean, renormalizable theory at scales such as the Standard Model scale, exactly as the Standard Model itself, which was arrived at by fitting the phenomena observed today.

The features of the renormalization group briefly discussed here are strongly linked to Lorentz invariance. Without this invariance group, scaling would be a lot more complex, as we can see in condensed matter physics. This is the reason why we do not plan to give up Lorentz invariance without a fight.

https://arxiv.org/pdf/1405.1548.pdf
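The point about canonical dimensions can be illustrated with dimensional analysis alone. In the sketch below (tree-level scaling only; anomalous dimensions and loop effects are ignored, and the scale choices are illustrative), the coefficient of a dimension-$d$ operator generated with order-one strength at the Planck scale appears at the electroweak scale suppressed by $(\mu/\Lambda)^{d-4}$:

```python
# Classical scaling of operator coefficients: a dimension-d operator's
# dimensionless coefficient scales as (mu/Lambda)^(d-4) when run from
# scale Lambda down to scale mu (canonical dimensional analysis only).

Lambda_planck = 1.2e19  # GeV, illustrative high scale
mu_ew = 1.0e2           # GeV, illustrative low (electroweak) scale

def suppression(d):
    """Relative size at mu_ew of a dimension-d operator generated with
    O(1) strength at Lambda_planck."""
    return (mu_ew / Lambda_planck) ** (d - 4)

for d in (4, 5, 6):
    print(f"dimension-{d} operator: suppressed by {suppression(d):.2e}")
```

A renormalizable (dimension-4) term survives unsuppressed, while dimension-5 and dimension-6 terms arrive at the low scale smaller by roughly 18 and 35 orders of magnitude respectively, which is why an ugly Planck-scale Hamiltonian can still look like a clean renormalizable theory at accessible energies.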

advanced_tools/renormalization.txt · Last modified: 2018/04/16 11:42 by jakobadmin