Monday, September 29, 2025

Ultraviolet divergence in QED

In this blog we have for many years claimed that destructive interference removes ultraviolet divergences in QED. Our argument is based on the classical limit.

Neither regularization nor renormalization is needed if one uses a mathematically correct approximation method. Ultraviolet divergences result from a Feynman diagram "hitting" the electromagnetic field with only one Green's function – which is a poor approximation of the process.


















In the diagram above, we see an electron passing close to a very heavy negative charge X.

Let us switch to the classical limit. The electron is then a macroscopic particle with a very large charge; its charge-to-mass ratio stays that of the electron, 1.8 * 10¹¹ coulombs per kilogram.

Feynman diagrams place no restriction on the mass of the particles. The particles are allowed to be macroscopic.

As the large electron passes X, it emits a classical electromagnetic wave which contains a huge (actually, infinite) number of real photons.

The electric vertex correction is about the wobbling of the electric field relative to the electron as it passes X. In particular, the far field of the electron does not have time to take part in the process. The electron appears to have a reduced mass as it passes X.

Classically, it is obvious that the inner electric field of the electron tracks the movement of the electron very accurately. The inner field is "rigid", and does not affect the movement of the electron much.


The rubber membrane model


                     #
                     #=========   sharp hammer
                     v                      keeps hitting

        ______       _____ tense rubber membrane
                   \__/
                     • e-         weight makes a pit


In the rubber membrane model of the electron electric field, we can imagine that the weight of the electron is implemented with a sharp hammer hitting the membrane at very short intervals.



                              |
                              |
                              |
                              |
                              • e-
        ^ t
        |
         -----> x


Let us analyze the Green's functions of the hammer hits if the electron stays static in space.

We see that if E ≠ 0, then there is a complete destructive interference for any

       exp(i (-E t  + p • r) / ħ).

That is expected, since the electric field is static.
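A minimal numerical sketch of this cancellation (the hit interval, the number of hits, and the values of E below are toy numbers chosen only for illustration): for a static source the spatial factor exp(i p • r / ħ) is the same for every hit, so we only need to sum the time phases of the Green's functions.

import numpy as np

# Hammer hits at regular intervals delta; the source point stays static.
hbar = 1.0
delta = 0.01                      # interval between hammer hits (toy units)
N = 100_000                       # number of hits
t_n = delta * np.arange(N)
for E in [0.0, 0.5, 1.0, 10.0]:
    S = np.sum(np.exp(-1j * E * t_n / hbar))
    print(f"E = {E:5.1f}   |sum of phases| = {abs(S):10.1f}")
# For E = 0 the phases add up to N = 100000 (fully constructive).
# For E != 0 the sum stays bounded by ~ 1 / |sin(E * delta / 2)|, which is
# negligible compared with N: the hits interfere destructively.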

Let us then assume that the charge X passes by the electron e-. The electron is accelerated, and gains some final velocity v.

For large |E|, the destructive interference still is almost complete. For what values of E is the destructive interference incomplete?

Let a be the acceleration of the electron. Let Δt be the cycle time of a wave with E ≠ 0.

During the cycle time, the electron moves a distance

       R  =  1/2 a Δt².

The wavelength is

       λ  = c Δt.

We see that if Δt is very short, then the electron moves negligibly during a cycle, compared to the wavelength λ. Intuitively, the destructive interference is strong then.

Let t be the time when the electron is accelerated. Intuitively, destructive interference is spoiled the most if the cycle time is t. That is, the wavelength is

       c t.

In this blog we have claimed that the electric field "does not have time to follow the electron", if it is at a distance c t from the electron. Destructive interference matches this.
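A small numerical sketch of the comparison above (the acceleration, its duration, and the cycle times below are toy values, not tied to any particular experiment): the drift during one cycle, compared with the wavelength, is negligible for short cycle times and becomes largest when the cycle time approaches the acceleration time t.

# Compare the drift R = 1/2 a dt^2 during one cycle with the wavelength c dt.
c = 3.0e8                 # m/s
a = 1.0e20                # m/s^2, toy acceleration
t = 1.0e-15               # s, duration of the acceleration
for dt in [1e-18, 1e-17, 1e-16, t]:
    R = 0.5 * a * dt**2
    lam = c * dt
    print(f"cycle time {dt:.0e} s:   R / lambda = {R / lam:.1e}")
# The ratio is a * dt / (2 c); at dt = t it is v_final / (2 c), where
# v_final = a * t is the velocity gained during the acceleration.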

The ultraviolet divergence is due to the fact that a Feynman diagram only hits the electromagnetic field once with a Green's function. In reality, the electron keeps hitting all the time.


Regularization and renormalization of the ultraviolet divergence in the electric vertex correction




Let us look at how Vadim Kaplunovsky handles the ultraviolet divergence in the vertex correction.







If q² = 0, then the vertex function should be 1. We decide that the "counterterm" δ₁ must have the value:






With that value, the vertex function F₁^net(q²) has the right value 1 when q² = 0.

What is the logic in this? The idea is that the infinite value of the integral F₁^loops(q²) is "renormalized" to zero when q² = 0. We calculate a difference of the integral value when q² ≠ 0, compared to the integral value when q² = 0. The difference, defined in a reasonable way, is finite, even though the integral is infinite.
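A toy numerical illustration of this kind of subtraction (the integrand below is made up to mimic the logarithmic behavior of the loop integral; it is not the actual F₁^loops expression, and the values of m² and q² are arbitrary): each cutoff integral grows without bound as the cutoff increases, but the subtracted combination settles to a finite number.

from scipy.integrate import quad

# Toy stand-in for a logarithmically divergent loop integral:
#   F(q2, L) = integral from 0 to L of  k^3 / ((k^2 + q2) (k^2 + m2))  dk
m2 = 1.0                  # toy "electron mass squared"
q2 = 4.0                  # toy momentum transfer squared

def F(q2, L):
    val, _ = quad(lambda k: k**3 / ((k**2 + q2) * (k**2 + m2)), 0.0, L, limit=200)
    return val

for L in [1e2, 1e4, 1e6]:
    a, b = F(q2, L), F(0.0, L)
    print(f"cutoff {L:.0e}:  F(q2) = {a:8.3f}   F(0) = {b:8.3f}   difference = {a - b:8.4f}")
# Both F(q2) and F(0) keep growing like log(cutoff), while the difference
# approaches the finite value  -q2 ln(q2 / m2) / (2 (q2 - m2))  ≈  -0.924.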

What is the relationship of this to our own idea in which destructive interference is used to make the integral converge?

If q² = 0, then we claim that destructive interference cancels, for the Green's function, every Fourier component for which E ≠ 0. This is equivalent to the "renormalization" in the utexas.edu paper, where a "counterterm" δ₁ erases the entire Feynman integral.

What is the meaning of the difference

      F₁^loops(q²)  -  F₁^loops(0)

for q² ≠ 0?


In the link, for q² << m²,






where λ is the "photon mass" used to regularize the infrared divergence. There is no rule for how we should choose λ. The formula is vague.


Is the electric form factor F₁(q²) a microscopic quantum effect?


We are struggling to find the analogue of the electric form factor F₁(q²) in the classical limit. If the electron is a macroscopic charge, then the wobbling of its electric field will reduce the mass of the electron, since the far electric field of the electron does not have time to react.

If the mass of the electron is reduced, and it passes a negative charge X, then X will push the electron away a little more: the momentum exchange is reduced, and the cross section is smaller.

But if X is positive, then the reduced mass of the electron allows it to come a little closer: the momentum exchange is larger and the cross section is larger.

In the literature, the form factor F₁(q²) only depends on the square q² – it does not differentiate between X being positive or negative.

The tree level diagram of e- X scattering only depends on q, not on the electron mass. Thus, the mass reduction would not even show in the Feynman integral cross section.

The quantum imitation principle, which we introduced on September 19, 2025, may solve the problem. When the electron meets X, the electron tries to "build" its electric field with a photon. But the resources of the electron only suffice to send one large photon (mass-energy ~ me) at a time. The electric form factor F₁(q²) would be a result of this shortage of resources.

In the classical limit, the electron is able to send many large photons simultaneously, and build its electric field at a high precision.

The Feynman integral may work correctly if the electron passes very close to X. Then the resources of the electron are severely limited. It may send a single large photon, attempting to build its electric field.

In the classical limit, the form factor F₁(q²) clearly is wrong. It does not describe the wobbling of the classical electric field.

We have to check if any empirical experiments have verified the factor F₁(q²). Does the anomalous magnetic moment depend significantly on F₁(q²)?


A practical calculation when e- is relativistic and meets a massive charge of size e-


Let us assume that the electron e- is relativistic and is deflected by X into a large angle. Let us try to estimate the magnitude of the electric vertex correction.

The southampton.ac.uk link above suggests that 

     F₁(q²)  ~  1  +  α / (3 π).

That is, the cross section increases by ~ 1 / 1,300.


                               ^  v ≈ c
                              /
                            /
           e- • --------    
                                ● X


The mass-energy of the part of the electron's electric field which lies at least a Compton wavelength

       λe  =  2.4 * 10⁻¹² m

away is α / (2 π) = 1/861 of the electron mass me.

Since the relativistic electron is scattered to a very large angle, its closest distance to X must be

       ~ re  =  2.8 * 10⁻¹⁵ m.

If we reduce the mass of the electron by 1/861, then as it passes X, it will come closer to X. Let the time that the electron spends close to X be

       t  =  2 re / c.

The acceleration of the electron toward X is something like

       a  =  c / t

            = c / (2 re / c)

            = c² / (2 re).

The acceleration takes the electron closer to X, very crudely:

       Δr  =  1/2 a t²

             = 1/2 c² / (2 re)  *  (2 re)²  / c²
 
             = re.

If we reduce the mass of the electron by 1/861, the impulse that the electron receives is ~ 1/861 larger. We expect the scattering cross sections to grow something like 1/861.
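The numbers used in the estimate above, collected into a short script (standard constants; the acceleration and the extra approach distance follow the crude formulas written out above, so everything is order-of-magnitude only):

import math

c     = 2.998e8                    # m/s
alpha = 1 / 137.036                # fine structure constant
r_e   = 2.8e-15                    # m, classical electron radius

print("alpha / (2 pi) =", alpha / (2 * math.pi))   # ~ 1/861
print("alpha / (3 pi) =", alpha / (3 * math.pi))   # ~ 1/1300

t  = 2 * r_e / c                   # time spent close to X
a  = c / t                         # crude acceleration, = c^2 / (2 r_e)
dr = 0.5 * a * t**2                # extra approach distance
print("t  =", t, "s")
print("a  =", a, "m/s^2")
print("dr =", dr, "m   (compare with r_e =", r_e, "m)")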

However, we do not understand how the Feynman diagram could be able to calculate this. The Feynman diagram calculates the probability amplitudes for various scattering momenta p' ≠ p, if p is the initial momentum of the electron.

Could it be that the two electron propagators in the integral become larger if we reduce the mass of the electron:


























No. The value of the propagator measures "how much" the electron is off-shell. Reducing the mass of the electron does not bring the electron closer to on-shell.


              far field
              • ---> v
             |
             | rubber band
             |
         e- • ---> v    
                                       ● X


Classically, the electric field of the electron becomes distorted when the electron bounces from X. If we treat the electron and its inner field as a single particle, that single particle (reduced electron) is "on-shell" after the bounce.

After the bounce, the electron must supply the missing momentum to the far field of the electron, and must get rid of the excess kinetic energy that the electron has. In this sense, the electron is "off-shell" after the bounce, if we treat the electron and its entire electric field as a single particle. The excess kinetic energy escapes as electromagnetic radiation.

The bounce puts the electron and its far field into an "excited state". The process can be understood like this:

-  The bounce from X converts kinetic energy of the electron into an excitation of its electric field. The excitation decays by emitting electromagnetic radiation.


The process will always radiate real photons which have large wavelengths. Elastic scattering is a process which does not happen at all. If Feynman diagrams claim that it is possible, then they miscalculate the process.


Feynman diagrams calculate the energy radiated in an inelastic collision, and subtract that energy from the elastic path


Now we figured out what the inelastic and elastic Feynman diagrams calculate. (Inelastic = a real photon is radiated. Elastic = no photon is radiated.) This interpretation is inspired by the rubber membrane model.

1.   If the electron is not under an acceleration, it will always reabsorb all the real photons which it emitted when it hit the electromagnetic field with a Green's function.

2.  The diagram which concerns a real photon emission calculates the probability amplitude of that photon being emitted. It does not calculate the electron flux. We cannot expect the sum of these amplitudes to be 1 – rather it is infinite, since an infinite number of photons are always emitted. The diagram, loosely, calculates radiated energy.

3.  The elastic Feynman diagram simply reflects the fact that if energy is radiated out, then the electron cannot reabsorb that energy.


The above interpretation explains why the infrared divergent parts of the tree level radiation diagram and the elastic diagram cancel each other exactly: they both calculate how much energy is radiated out!

Since the electron will always radiate real photons when it passes by a charge X, the elastic Feynman diagram is useless? It does not describe any process in nature. It does not calculate anything useful.

Radiated photons reduce the energy and change the momentum of the electron. The photons, all of them, must be taken into account when we calculate the momenta of the electrons coming out from the experiment.

What does this mean concerning the ultraviolet divergence? We discard the purely elastic Feynman diagram, but there are Feynman diagrams which contain both the emission and reabsorption of a virtual photon and an emission of a real photon. The ultraviolet divergence can show up there.


                    ~~~~~~
                  /                \
          e-  ---------------------
                         |      \
                         |        ~~~
                         |
          X   ---------------------


Does the elastic collision Feynman diagram calculate anything useful?


Let an electron pass by a charge X. We pretend that there is no radiation out. Then the elastic Feynman diagram is relevant. If q² varies, what does the integral calculate?


                             k
                         ~~~~
                       /            \
             e-  --------------------- 


The diagram for the free electron is above.


                             k
                          ~~~~
                        /            \
             e-  -----------------------
                              | 2
                              | q
             X   -----------------------


The diagram for the colliding electron differs from the free electron, because we have the vertex and the electron propagator marked with 2 in the diagram, as well as the photon propagator for q. The photon q propagator is factored out from F₁(q²).

The electron propagator measures how much the electron is off-shell: being more off-shell means that the absolute value of the propagator is smaller.

It is obvious that q² affects the Feynman integral, but does it have any intuitive meaning?

In the rubber membrane and the sharp hammer model, if we assume that there is no energy loss, then the hammer must hit at the same place as it did earlier. The process is the same as for the free electron. If we make the (unrealistic) assumption that there is no radiation loss, then the Feynman integral should have the same value as for the free electron.

Classically, we can think like this: we strip the far electric field of the electron off, to remove radiation losses. We keep the rigid inner field. Since the inner field is rigid, we can assume that the electron has no electric field at all, and the mass-energy of the field is in the mass of the pointlike electron. The elastic Feynman diagram is useless: we can just look at the tree-level diagram.

But could there be some microscopic effect which comes from the quantization?




***  WORK IN PROGRESS  ***

Wednesday, September 24, 2025

Infrared divergence in QED

Let us study the classical limit of electron-heavy charge scattering.



The text is probably written by Vadim Kaplunovsky.











The utexas.edu paper discusses the infrared divergence of the "electric" vertex correction in the scattering of an electron from a very heavy negatively charged particle X. The yellow circle depicts the electron bumping from the field of X, and also the virtual vertex correction photon.

Let us analyze the Feynman vertex diagram. The solid line is the electron.
















The Green's function at the birthplace of the virtual photon k "creates" the Coulomb field of the electron.










Let the virtual photon k possess the energy E and the spatial momentum P. Let |E| and |P| be very small. Then other factors in the integral are essentially constant, but the photon propagator

       1 / k²  =  1 / (-E²  +  P²)

varies a lot. The divergence has to come from photons for which -E² + P² is almost zero, that is, from almost "real", or "on-shell" photons. The paper says that the divergence is logarithmic.

Is there any reason why the integral should not diverge?


A toy model


                         1    90%
                        ------------
                     /                 \         1% + 81% + 81%
          e-  ----------- 1% --------------------------------
                     \                 /
                        ------------
                         2     90%


Let us have a toy model where the electron coming close to X has a 90% probability to emit a virtual photon of the energy 1, and the same 90% probability to emit a virtual photon of the energy 2.

The probability of the electron reabsorbing the virtual photon is 90%.

We sum the probabilities of the three paths and end up with a nonsensical figure of 163%.

What was wrong? We assumed that the paths 1 and 2, and their probabilities, are mutually exclusive. In the diagram, the electron never emits both 1 and 2. A more realistic model is one where it, in most cases, emits both 1 and 2.
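A small sketch of this bookkeeping (assuming, for illustration, that the two emissions are independent events, each with the 90% emission and 90% reabsorption probabilities of the toy model): the genuinely exclusive outcomes sum to 100%, while the three "paths" of the diagram do not.

# Per-photon outcomes and probabilities (emission 90%, reabsorption 90%):
per_photon = {
    "not emitted":         0.1,
    "emitted, reabsorbed": 0.9 * 0.9,    # 81%
    "emitted, escapes":    0.9 * 0.1,    #  9%
}
# Exclusive joint outcomes for the two photons, assuming independence:
total = sum(p1 * p2 for p1 in per_photon.values() for p2 in per_photon.values())
print("sum over exclusive joint outcomes =", total)                     # 1.0
# The naive sum of the three diagram paths treats "loop 1" and "loop 2"
# as mutually exclusive and omits the dominant "both photons" case:
print("naive sum of the diagram paths    =", 0.1 * 0.1 + 0.81 + 0.81)   # 1.63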

Classically, an electron bumping into the charge X will emit and absorb a large number of real and "almost real" photons.


                              ~~~~
                           /     ~~~
                         /      /
           e-  --------------------
                           |
           X   --------------------

    --> t


Feynman diagrams allow the electron to emit many photons, if the diagram is more complex. Does this save the mathematical correctness?

No, not in the case of real, emitted photons. If two photons are emitted, it is a distinct end result. Its probability amplitude cannot be summed with the amplitude of an end result where only one photon is emitted.

What about virtual photons?


                           ~~~~~~~
                        /      ~~        \
                     /       /       \        \
          e-   ---------------------------------
                                |
          X    ---------------------------------

     --> t 


Adding a new photon means one additional photon line and two additional vertices in the integral.

Intuitively, emitting and absorbing a small virtual photon should not change the phase of the outgoing electron much. There will be no destructive interference, and adding more photon lines should not cancel the divergence of the integral with a single line. But is this true according to the Feynman rules?

Let us check what people have written about this.


C. Anastasiou and G. Sterman (2018) present a method to remove infrared divergences. They do not say that going to two loops would help and cancel divergences at one loop.


In the classical limit, an electron emits a huge number of small photons


We know that a macroscopic accelerating charge will radiate a very large number of photons whose wavelength is large. What implications does this have for Feynman diagrams?

Let k₀ be the 4-momentum of a small real photon. The correct physical model (the classical one) says that the probability of the electron emitting just a single photon with a 4-momentum k close to k₀, that is, with a small

       |k  -  k₀|,

is essentially zero. It will always emit a huge number of small photons.

The Feynman diagram claims that the probability of such an emission is small, but it significantly differs from zero.

We conclude that the Feynman diagram calculates an incorrect result.

If an electron passes the large charge X at a relatively large distance, then we can make a wave packet to describe the electron, and the process is almost classical. Let us use the Larmor formula to calculate how many photons the electron radiates.


We assume that the electron is relativistic and passes a proton at a distance R. The acceleration is

       a  =  1 / (4 π ε₀)  *  e² / R²  *  1 / me.

The power of radiation is

       P  =  2/3  *  1 / (4 π ε₀)  *  e² a² / c³

           =  2/3  *  1 / (4 π ε₀)³  *  e⁶ / R⁴  *  1 / c³

               *  1 / me²

          =  3 * 10⁻⁴⁹  *  1 / R⁴.

The radiated energy for a relativistic electron is

       W  =  P R / c

             =  10⁻⁵⁷  /  R³.

One photon of the typical frequency has the energy

       E  =  h f

            =  h c / (2 R)

            =  10⁻²⁵  *  1 / R.

The number of photons of the typical frequency is

       n  =  W / E

            =  10⁻³² / R².

The Compton wavelength of the electron is 2.4 * 10⁻¹² m.

If the distance R = 10⁻¹⁰ m, the number of typical photons is only 10⁻¹². We conclude that the electron is solidly in the realm of microscopic particles.
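The estimate above, collected into a short script (standard SI constants; the formulas are the ones written out above, so the results are order-of-magnitude only):

import math

e, me   = 1.602e-19, 9.109e-31        # C, kg
c, h    = 2.998e8, 6.626e-34          # m/s, J s
eps0    = 8.854e-12                   # F/m
k_C     = 1 / (4 * math.pi * eps0)    # Coulomb constant
R = 1e-10                             # m, distance at which e- passes the proton

a = k_C * e**2 / R**2 / me            # acceleration
P = (2 / 3) * k_C * e**2 * a**2 / c**3   # Larmor power
W = P * R / c                         # energy radiated during the pass
E = h * c / (2 * R)                   # energy of one "typical" photon
n = W / E                             # number of typical photons
print(f"P = {P:.1e} W   W = {W:.1e} J   E_typical = {E:.1e} J   n = {n:.1e}")
# Prints n ~ 1e-12, in line with the estimate in the text.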

The classical electromagnetic wave emitted by the electron is a "bump" which lasts for the time 2 R / c. What is the Fourier decomposition of such a bump? The Fourier transform is essentially constant for frequencies below ~ c / (2 R).

The pulse is able to excite a detector which observes very-long-wavelength photons, say such that the frequency is just 1 hertz. If R = 10⁻¹⁰ m, then the radiated energy is 10⁻²⁷ J, and we are able to detect a million such photons.

The "number of photons" in the pulse is not well defined. We probably can "mine" the energy in the pulse and can extract various collections of photons, depending on the detectors we are using. Nevertheless, we are able to observe a very large number of low-energy photons. This contradicts Feynman diagrams.

Feynman diagrams work reasonably well if the quanta are large?


The analysis on the infrared divergence in the utexas.edu paper is incorrect


The paper at utexas.edu tries to explain away the divergence problem by resorting to the fact that any single detector can only observe photons whose energy is larger than some threshold energy

       ωthr.

But the explanation is incorrect. Quantum mechanics is about what could be observed, not about what a certain real-world detector observes.

The paper says that the divergence in the photon emission has the opposite sign to the divergence in the vertex correction, and that the divergences "cancel each other out". This is not possible. If the electron loses kinetic energy to a real photon, then the momentum of the outgoing electron differs from every vertex correction electron, because a vertex correction electron does not lose any energy. Electron waves with different momenta cannot cancel each other out.

Also, as we saw above, the infrared divergence is not the only problem. Another problem is that Feynman diagrams predict a far too large probability for the output of just a single photon.

Feynman diagrams simply are a wrong way to approximate a semiclassical process which produces electromagnetic radiation. No gimmick or explanation can refute the basic problem.

The Peskin-Schroeder textbook An Introduction to Quantum Field Theory (1995) contains the same strange claim as the utexas.edu paper: that we can sum probability amplitudes for processes which have different end results:











The Feynman probability amplitude for a real photon emission: we can allow divergence




















There are two diagrams. Above is one of them. The probability amplitude is
















Let |k| be very small. If we double the charge of the electron e, the mass of the electron m, and the momenta q and p, then the probability amplitude grows twofold, and the probability flux fourfold. This agrees with the Larmor formula. The classical limit is ok, in this sense.

Now we realize an important thing:

- The Feynman probability amplitude is the PRODUCT of the electron flux and the photon flux. We can ALLOW the integral to diverge, if an infinite number of photons are produced!


Classically, the process will generate an infinite number of photons, if we look at ever longer wavelengths. The Feynman formula may be a fairly good approximation?

But Feynman diagrams miscalculate the effect of the emitted photons in the scattering of the electron from the large charge X. The electron loses its kinetic energy to the radiated photons. That affects its scattering from X. Feynman diagrams only consider one emitted photon, while in the classical limit, the electron will always emit a large number of photons.

How do Feynman diagrams work at all? We calculated above that the probability to emit a photon of a "typical size" is only 10⁻¹² in a typical case of scattering. Corrections which come from photon emissions are so small that they do not spoil the accuracy.

Also, the correction mainly comes from the very rare case (probability 10⁻¹²) when the electron does emit a photon of the typical size. The Feynman diagram is correct to reduce the mass of the electron by that one photon. We can ignore small photons.

In particle accelerators, the margin of uncertainty for a scattering probability is typically on the order of 1%.


The "electric" vertex correction

            
                                ● X

       e-  • -----------       R = minimum distance
                             \
                               \


Let us look at the divergence in the vertex correction. Let us guess that the vertex correction really is about how much the electron mass must be reduced, because its far electric field does not have time to follow the electron in the abrupt scattering from X.

"Far" means something like > 2 R, if the electron is relativistic. There R is the minimum distance between the electron and X.

If we think of the far electric field of the electron built from virtual photons of various wavelengths, then, obviously, an infinite number of virtual photons are needed. This explains why the Feynman vertex integral diverges.

As we noted above, the Feynman integral calculates the product of the electron flux and the virtual photon flux.

In the classical limit, a correct approximation method would calculate the expectation value of the combined energy of the produced virtual photons, and reduce the mass of the electron by 1/2 of that value when the electron meets X. The Feynman method is erroneous in the classical limit.

Let us then analyze the process when the electron is microscopic.

The paper at utexas.edu states that the infrared divergence is logarithmic when we go to smaller |k|. Making the dimension larger than 4 makes the integral converge. This suggests that the divergence really comes from building the electric field at distances > 2 R. We can build an approximation "exponentially" by using waves of length 2ⁿ, where n is a positive integer.
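A toy illustration of what a logarithmic divergence looks like in these terms (the 1/k integrand is a stand-in, not the actual vertex integrand): every octave [2ⁿ, 2ⁿ⁺¹] contributes the same finite amount, so the total only grows with the number of octaves, that is, with the logarithm of the cutoff.

import math

# A logarithmically divergent integral, int dk / k, gets the contribution
# ln 2 from every octave [2^n, 2^(n+1)], independently of n.
per_octave = [math.log(2.0**(n + 1)) - math.log(2.0**n) for n in range(40)]
print("contribution per octave:", per_octave[0])        # ln 2 = 0.693...
print("sum over 40 octaves    :", sum(per_octave))      # 40 * ln 2
print("cutoff ratio covered   :", 2.0**40)              # ~ 1e12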











After a lot of calculations, the integral turns out to be logarithmically divergent:









where D = 4 is the dimension.




















A strange detail: the divergence has a negative sign (marked in red) in the vertex correction, while it had a positive sign in the real photon emission. Why is this?

In the Schrödinger equation, when an electron wave bounces from another negative charge, there is a 180 degree phase shift.

The vertex correction diagram has one vertex more than the real photon emission diagram. This might explain the sign difference. But it does not explain what the phase of the two-particle combination e- and γ should be. We only considered the electron wave.


How to get rid of infrared divergences?


In the case of the real photon emission, we argued above that the electron always emits an infinite number of real photons. We argued that the Feynman probability amplitude for the combination e- and γ is the product of the e- and γ fluxes. Therefore, the integral must diverge. It is the correct behavior for the integral.

We did not check if the integral agrees with a classical calculation of the emitted wave, though.

Also, we remarked that the classical electromagnetic wave cannot be divided into a fixed combination of real photons. We can "mine" energy away from a classical wave packet in many ways.

In the case of the electric vertex correction, we claimed that the electron always emits an infinite number of almost "on-shell" virtual photons, and reabsorbs them later. These virtual photons will reduce the effective mass of the electron. The scattering should be calculated with the reduced mass. In the classical limit, this is self-evident: the far field of the electron does not have time to react to the scattering of the electron e- when the electron comes close to the charge X.

The Feynman diagram calculations in the classical limit are definitely wrong. Feynman cannot handle a case when many quanta are always produced in a process.

Actually, we could say that when many quanta are created, we come to the "non-perturbative" regime. With many quanta, we can model classical processes accurately. Feynman diagrams only work in a "perturbative" setting where a single quantum is created at a time.


Conclusions


We found the reason why there is an infrared divergence in the emission of a real photon in electron scattering.

It is the expected result, and not incorrect: the probability amplitude is the product of the photon amplitude and the electron amplitude. An infinite number of photons are always emitted in the process.

However, we did not check if the Feynman diagram correctly reproduces the classical emitted electromagnetic wave. There must be lots of empirical measurements of bremsstrahlung. Feynman diagrams must reproduce the measured results. Thus, Feynman diagrams do calculate the emitted wave approximately right for microscopic processes.

In the case of the electric vertex correction, many, almost on-shell, photons are always emitted and absorbed. This explains the infrared divergence. How to fix it? Putting a suitable cut-off will remove annoying small photons, and should yield approximately correct results. Feynman diagrams can handle one photon which is rarely created. The divergence problem shows up with many simultaneously created photons.

In the utexas.edu paper it is claimed that the divergences in the real photon emission and in the electric vertex correction "cancel out" each other. That is definitely a wrong claim. A state which has a different number of particles, and a different electron 4-momentum, can never cancel another state.

We introduced a new hypothesis:

- When many quanta are always produced in a process, then the process approaches the classical limit, and the "perturbative" method of Feynman diagrams will fail.

Monday, September 22, 2025

Freeman Dyson's argument for QFT divergence: is the argument incorrect?

Let us analyze the heuristic argument given by Freeman Dyson in 1952.


Justin Bond writes about the argument by Freeman Dyson.

Suppose that the perturbative Dyson series of Feynman diagrams converges for a weak repulsion between electrons. It produces nice, well behaved results.

Let the coupling constant be a small positive number

       e²  >  0.

We assume that we can calculate a physical quantity F(e²) with a perturbative series:







Let the series be analytic in e². We can then analytically continue the series into small negative numbers

       e²  <  0,

and the perturbative series still converges.

A negative value of e² means that electrons attract electrons and positrons attract positrons.


         e- -------------------  e-
                   |
                     ------------  e+
                   |
                     ------------  e-
                   |
         e- ------------------- e-

    --> t


But then the system can tunnel into a lower energy state where we have a large number of electrons close to each other, and, elsewhere in space, an equal number of positrons close to each other. The potential energy of these gatherings is hugely negative, larger in magnitude than the mass-energy of the electrons and the positrons. The system has tunneled into a lower energy state in which a huge number of electron-positron pairs came into existence.

In the diagram above, we have a pair born spontaneously.

Such tunneling should be visible in the perturbative series, and should cause divergences in the series?

What is the problem? Is it so that we cannot determine physical quantities with a nice analytic series?


Collapse into a black hole spoils quantum mechanics?


Let us model any physical system with quantum mechanics. There is always a chance that the system will tunnel into a black hole. Is quantum mechanics useless then?

No. The probability of tunneling is negligible in, say, 1 hour.


Feynman cannot handle time-dependent phenomena?


In the Feynman approach, we work in "momentum space", and ignore the time and position coordinates. If the tunneling is time-dependent, then the Feynman approach cannot handle it. What about regularized and renormalized Feynman integrals? We know that they cannot handle "bound" states.

Freeman Dyson argued that the Feynman perturbation series cannot converge. Is this a valid conclusion? The large groups of electrons and positrons form bound states. Can the Feynman series recognize this?

What if it does converge for e² < 0, but simply does not correctly calculate the collapse of the state into a huge number of electrons and positrons?


John Baez (2016) presents a simple model where a Taylor series E(β), which is supposed to give the ground state energy of a particle in the potential

       V  =  x²  +  β x⁴,

does not converge. Barry Simon proved in 1969 that the series does not converge.

Finding the ground state energy is not what a Feynman diagram does. It is not clear if the β model is relevant for Feynman diagrams.


How does a Feynman diagram handle attractive forces in QED? Bound states cannot happen


In quantum electrodynamics, the electron and the positron have an attractive force. If they come very close to each other, they are annihilated.


       e+ --------------- ~~~~~~~  γ
                             |
       e-  --------------- ~~~~~~~  γ'

       --> t


Suppose that an annihilation is not possible. What happens?


                                             γ
                                          /
                                       /
                                    /
       e+ ----------------------------
                          |
       e-  ----------------------------
                          
       --> t


Can the pair emit a large real photon γ as bremsstrahlung? The electron and the positron move at a very slow speed, come close to each other, and emit a lot of energy in a large photon.

But this is prohibited by the Feynman rules. The rules assume that all real particles coming out from the diagram are free. They cannot be in a potential pit. The energy coming into the diagram must equal the energy coming out.

It looks like the Feynman rules prohibit the collapse suggested by Freeman Dyson. The collapse is no problem for Feynman diagrams because the collapse cannot happen in them.

The large populations of electrons and positrons suggested by Dyson cannot be created because energy would not be conserved.


Dyson's argument is not about Feynman diagrams – what does the argument then prove?


The argument seems to prove this:

- If we have an analytic, converging series which calculates correctly a physical quantity for a repulsive force, then the same series calculates something for the attractive force, but it is not the "collapse of the universe". We can say that the series miscalculates for the attractive force.


Is this a fundamental flaw? No. For the attractive force, we need another formula, if the problem is well defined at all. If the problem is not well defined, then we do not need any formula.

Specifically, it is possible that the series of Feynman diagrams does converge. Freeman Dyson did not prove that that is impossible.


Why do people care so much about the behavior of Feynman diagrams? They should not


Feynman diagrams assume that free real particles meet, and the output is another set of free particles.

1.   This only covers a very limited set of physical phenomena.

2.   The Feynman formulae are a crude mathematical approximation about what might happen.

3.   There is no a priori reason why the crude mathematical approximation would work at all. It is surprising that it does work well in many cases.


Freeman Dyson's argument against convergence forgets item 1 above.

People claiming that "new physics" must exist at the Planck scale forget item 3.

Also people claiming that we must modify quantum mechanics, to remove divergences in Feynman integrals (e.g., string model) forget item 3.

Gravity does play a role at the Planck scale. But if we study just QED, there is no need to claim that there is new physics at the Planck scale.

"Supersymmetry" was developed in order to get certain Feynman integrals to converge. Since the divergence problem is a mathematical error, there is no need to assume that supersymmetry is true. The LHC proved that if supersymmetric particles do exist, they are hard to find.

Some people hoped that the LHC would reveal extra dimensions. Extra dimensions were added to get string models to work. And string models were supposed to solve diverging Feynman integrals. Again, we see that new physics was speculated about to fix a mathematical error. That is an almost hopeless strategy. The LHC did not find any extra dimensions.


Conclusions


Freeman Dyson did not prove that series of Feynman diagrams diverge.

Divergence in Feynman diagrams is a mathematical error. This explains why the LHC did not find supersymmetric particles or extra dimensions. It is a bad strategy to fix mathematical errors by modifying the physics.

Friday, September 19, 2025

Regularization and renormalization in Feynman diagrams

Naively, the Feynman diagram of the anomalous magnetic moment might diverge logarithmically for large 4-momenta of the virtual photon in the vertex, since it is of a form

             ∫      1 / k⁴  dV.
        k ∈ ℝ⁴ 



The paper at the fnal.gov link uses dimensional regularization in 4 - 2 ε dimensions. At the end of the calculation, the paper states that diverging terms cancel each other.

The paper at the utexas.edu link utilizes various symmetries, and states at the end of the calculation that the integral converges.

We conclude that the integral probably is benign: if the integral is summed in the "natural" order of increasing 4-momenta in the virtual photon, then the integral will converge. There is no need for regularization or renormalization.

In the previous blog post we claimed that the mass-energy of the electric field of the electron is

       α / (2 π)  *  me  ≈  1/861 me.

Since the integral is benign, we can claim that the result is robust: the result does not depend on dubious regularization or renormalization procedures.


The quantum imitation principle


Quantum imitation principle. Quantum mechanics tries to imitate classical mechanics. The resolution of the imitation is restricted by the Compton wavelength associated with the energy available in the process. The imitation may in some cases be more accurate, if there is a lucky coincidence. The imitation is further restricted by quantization.


We introduced the principle above in our previous blog post. Quantum mechanics tries to imitate the energy of the electric field of the electron. But the resolution is quite poor: the Compton wavelength of the electron is

       2.4 * 10⁻¹² m,

which is a large value compared to the classical radius of the electron 2.8 * 10⁻¹⁵ m.

The imitation of the electron electric field only succeeds at the resolution of the Compton wavelength. Quantum mechanics believes that the mass-energy of the electric field is just 1/861 of the electron mass.
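As an arithmetic aside (a check on the numbers above, not a claim taken from any paper): the ratio of the Compton wavelength to the classical electron radius is itself 2 π / α ≈ 861, the same number which appears in the mass-energy estimate.

import math

alpha = 1 / 137.036
lam_C = 2.426e-12        # m, Compton wavelength h / (me c)
r_e   = 2.818e-15        # m, classical electron radius
print("lam_C / r_e    =", lam_C / r_e)              # ~ 861
print("2 pi / alpha   =", 2 * math.pi / alpha)      # ~ 861
print("alpha / (2 pi) =", alpha / (2 * math.pi))    # ~ 1/861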

The fact that we do not need regularization or renormalization in the calculation of the Feynman integral stresses that the quantum imitation principle is robust: it will produce finite values without dubious mathematical methods.


What if destructive interference cancels diverging integrals?


For classical waves, a wave whose frequency is 

       f

cannot normally produce significant waves whose frequency is > f. If we use Green's functions to construct a solution, destructive interference, in a typical case, cancels all high frequencies.

The cancellation is not absolute: if we would sum the absolute values of each wave, the integral would diverge.

We conclude that it is ok if the integral diverges in absolute values, as long as it converges when integrated in the natural order of increasing 4-momenta.

Let us calculate an example. The impulse response, i.e., the Green's function, for a static point charge is

          ∫      1 / k²  *  Real( exp(i k • r) ) dV,
     k ∈ ℝ³

where dV denotes a volume element of ℝ³, and Real takes the real part. If we integrate the absolute value of the integrand, the volume element contributes a factor k² which cancels the 1 / k², and the remaining integral over |k| diverges badly. If we use Green's functions, we cannot expect absolute convergence of integrals.
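A numerical check of this example (using the standard angular reduction of the three-dimensional Fourier integral to (4 π / r) ∫₀^K sin(k r) / k dk; the cutoff values K below are arbitrary): summed in the natural order of increasing |k|, the integral settles to 2 π² / r, while the integral of the absolute values grows linearly with the cutoff.

import numpy as np

r = 1.0
for K in [10.0, 100.0, 1000.0, 10000.0]:
    k = np.linspace(1e-6, K, 2_000_000)
    dk = k[1] - k[0]
    oscillating = 4 * np.pi / r * np.sum(np.sin(k * r) / k) * dk
    absolute = 4 * np.pi * K      # integral over |k| < K of k^2 * (1/k^2) dk
    print(f"K = {K:7.0f}   oscillating = {oscillating:7.3f}"
          f"   limit 2 pi^2 / r = {2 * np.pi**2 / r:.3f}   |.| integral = {absolute:.0f}")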


If classical wave processes cancel high-frequency waves, why do high-frequency waves remain in Feynman integrals, and cause divergences?


In this blog we have long suspected that divergences are a result of a wrong way of applying Green's functions to scattering phenomena.

The reason may be that we calculate Feynman diagrams in the "momentum space", and ignore the position of the particles.

In the case of vacuum polarization, we claimed in 2021 that the divergence comes from a sign error when considering the Dirac hole theory.


The classical limit of the "electric" vertex correction


There is no classical limit for the magnetic part of the vertex correction, since the magnetic moment is a microscopic quantum phenomenon.

But for the wobbling of the electric field in a scattering experiment, there is a very natural classical limit. If we increase the mass and the charge of colliding electrons, they start to behave like classical charges. The wobbling of the electric field will remain significant, and will affect the paths of the charges.

The classical limit should have no diverging integrals. Destructive interference should cancel all high frequencies.


                 1/2 c
        e- • --------->
                          | r
                           <---------- • e-
                               1/2 c


In this blog we have written about the natural "scale" of scattering in such an experiment. If electrons meet at relativistic speeds, and their minimum distance is r, then r is the natural scale. Waves shorter than r, or with a frequency higher than

       f  =  c / r,

should get destroyed by destructive interference. The "natural frequency" of the meeting process is f = c / r.

Note about the high 4-momentum cutoff. In the case of the anomalous magnetic moment, the cutoff for high 4-momenta of the virtual photon is determined by the mass of the electron, me. But in the electric vertex correction, the cutoff is determined by the geometry of the meetup of the electron with another charge.














The diagram is from the utexas.edu link. The solid line is an electron.

In the diagram above, the virtual photon on the right is the impulse on the electron as it travels past another electron, or another charged particle.

The virtual photon on the left describes the wobbling movement of the electric field of the electron.

The far field of the electron does not have time to take part in the meeting of the electron with another charge. This means that the electron has a reduced mass as it encounters the other charge.

The Feynman diagram should calculate the reduced mass, and the effect of reducing the mass in the meetup with the other charge.

The reduction in the mass of the electron depends on how long it takes to pass the other charge. If the time is t, then we expect that the electric field at a distance

       >  1/2 c t

cannot take part. The electron mass is effectively reduced by the mass-energy of the field beyond that distance.

The reasoning in this is entirely classical. The integral should converge if analyzed as a classical process.

But is it so that the Feynman integral diverges for large 4-momenta of the vertex photon?









The incoming electron has the 4-momentum p, the outgoing p'. The vertex photon is k.

Let us denote the 4-momentum of the impulse virtual photon by

       q  =  p'  -  p.


How can the Feynman integral "know" the mass-energy of the electric field far away?


      p' 
        \                   |
           \                |
          |  \              |
       k |   |  ~~~     q
          |  /              |
           /                |
        /                   |
     p                     Z
     e-

   ^ 
   | t


In the diagram, an electron meets a heavy nucleus Z. The nucleus gives a pure momentum (no energy) q to the electron. The virtual photon k reduces the mass of the electron in the encounter.

It has to be that the virtual photon q somehow "modulates" the Feynman integral, so that the electron mass me is suitably reduced when the electron meets the nucleus Z. Since the mass of the electron is reduced, the nucleus Z will pull it a little closer, and the electron will be scattered a little more.

If p = p', then the Feynman integral is quite simple. Does it diverge? If yes, we could "renormalize" the integral value to zero. If p ≠ p', the value of the integral will change somewhat. We can use the change as the correction to the scattering probability amplitude. Why does the change, when q is made nonzero, tell us the corrections to the probability amplitudes of the tree-level diagram?

Classically, destructive interference cancels any high-frequency phenomena (large |k|) in the process. If the Feynman integral is infinite, that means that the integral does not describe the classical process adequately.

Missing information. Let us analyze classically: in the diagram, we know the velocity of the electron, since we know p. If we know the charge of the nucleus Z, we can calculate the minimum distance of the electron from the nucleus for a given value of q:

       R(q).

But if we do not know the charge Z, we cannot know R(q). Then we cannot calculate the reduced mass of the electron. The vertex correction is not well defined then.


Let the charge of Z be +e, and let the velocities in the encounter be close to c.

If |q| is very large, close to me c, then the reduced mass of the electron should be similar to the case of the anomalous magnetic moment. A reduction of ~ 1/2 me would make scattering to large angles much more probable.

What if q contains lots of energy, not just momentum? That corresponds to a slow electron meeting a very fast nucleus Z. In that case we might be able to identify q with m in the Feynman integral. By switching to a different frame, we can make the electron to receive a lot of energy in the encounter.














The fnal.gov link has the Feynman integral in a clearer form:








If the nucleus Z passes the electron at roughly the distance of the classical electron radius 2.8 * 10⁻¹⁵ m, the electron will receive a momentum which is close to me c, and an energy which may be ~ 1/2 me c².

Classically, the encounter happens very quickly, within a distance of just ~ 2.8 * 10⁻¹⁵ m. Most of the electric field of the electron does not have time to react. Classically, the electron mass is reduced a lot, maybe to ~ 1/2 me.

But the quantum imitation principle says that quantum mechanics cannot imitate the energy of the electric field of the electron closer than its Compton wavelength 2.4 * 10⁻¹² m. Thus, the mass reduction seen by quantum mechanics is ~ 1/861 me.

Let us set q to zero for a while. The integral above, probably, diverges. We regularize and renormalize the integral to zero. We kind of pretend that the integral has a fixed value. We want to calculate the difference that setting q ≠ 0 makes to the value of the integral.

Let q then be very large, close to me c. Its energy "reduces" the electron mass m in the factor

       (l-slash  -   q-slash  +  m) 

      /  ( (l  -  q)²  -  m²)

in a major way.

The factor 1 / (2 π)⁴ at the start of the integral tones this down, so that the probability amplitudes are much smaller (by a factor ~ 1/861) than for the tree-level diagram.

Feynman miscalculates the classical limit? If q is small, then the quantum imitation principle allows quantum mechanics to imitate the classical wobbling of the electron electric field very well. Quantum mechanics should then approach the classical behavior. But the factor 1 / (2 π)⁴ tones the probability amplitudes down too much? Our note about the Missing information above suggests that, indeed, the Feynman diagram is incorrect if we study a macroscopic system.


Regularization and renormalization work because their ad hoc trick implements the destructive interference seen in the classical treatment? In the classical treatment of the process, destructive interference cancels all high-frequency waves. In the Feynman integral taken in "momentum space", this classical mechanism is missing. Regularization and renormalization are ad hoc tricks which are mathematically dubious. But they happen to implement the effect of destructive interference. That is why they produce empirically correct results?


The electric form factor F₁ and the classical limit: infrared divergences


The paper at the utexas.edu link calculates the "electric" form factor F₁(q²). Let us check if the result is reasonable for the classical limit.

The calculation of F₁(q²) contains both an "infrared" and an "ultraviolet" divergence.

Let us try to analyze what infrared divergences are in the context of a classical system. We assume that the electron and the nucleus are classical particles with a macroscopic mass and a macroscopic charge. What problem could be involved with very long wavelengths?

If we try to solve a classical wave problem with Green's functions, could it be that the low frequency spectrum requires some destructive interference, too?

Let us, once again, look at the rubber membrane model, which is hit with a sharp hammer.

Ultraviolet problems arise from the sharpness: the hammer must input an infinite energy to create the analogue of a point charge electric field.

A single hammer hit will output energy in the outgoing waves in the undulating membrane. The energy is not infinite, though.

An infrared divergence seems to be associated with the zero mass of the photon.

Could the problem be that the hammer can never create the entire Coulomb field, since the field spans the entire space? The far parts of the field must be "inherited" from earlier hammer strikes? Having a partial electric field is a sure way to end up with problems with Maxwell's equations.

The photon propagator in the Feynman integral with p = q = 0 is

       1  /  l²,

where

       l²  =  -E²  +  P²,

E is the energy and P is the spatial momentum of the photon. If the photon is real, "on-shell", the denominator is zero. This probably is the source of the divergence. What does this correspond to, classically?

Low-frequency waves DO escape as real photons. Suppose that the electron takes a time ~ t to pass the nucleus Z. Then a part of its Coulomb field farther than c t from the electron "breaks free" and escapes as electromagnetic waves. Those long waves cannot be reabsorbed by the electron. This imposes a cutoff on the infrared divergence. The Feynman integral miscalculates the probability amplitude of the electron reabsorbing the waves which broke free.


The paper at the utexas.edu link mentions that the infrared divergence is canceled if we consider the "soft photons" which the electron sends. This suggests that the Feynman integral error really is about the waves which break free. However, this does not yet explain why the integral result diverges.


Destructive interference inside a limited volume: Feynman diagrams ignore it


A Feynman diagram is in "momentum space". We imagine a wave arriving in, say, a cubic meter of volume, and interacting with another field there. Waves then leave that cubic meter. We calculate the Fourier decomposition of the departing waves.

The process should conserve energy. If the arriving wave describes an electron, and the departing wave is 100X stronger, energy is not conserved. It is even worse if the departing wave is infinitely strong.

Is there any reason why using the Green's function method should conserve energy? If the method ignores destructive interference, then it will certainly break conservation of energy.

Suppose that wave components with different momenta p,

       exp(i (p • r  -  E t))

leave the cubic meter. Can we extract the energy in the wave components separately? If yes, then we can sum the energies of the components.

In principle, we can tune an antenna which absorbs the component p, but not a nearby component p' where

       |p'  - p| > ε.

If ε is small, the antenna presumably has to be very large. It cannot fit inside a cubic meter.

Inside that cubic meter, we have to take into account destructive interference of Fourier components of waves.

We uncovered a major error in the Feynman diagram calculations. They ignore destructive interference inside a limited volume. The calculation happens in "momentum space", and assumes that the space is infinite and that Fourier components can be handled separately.

All physical experiments happen in a limited volume.

Switching to "momentum space" is a valid approximation in many situations. A cubic meter is a vast volume for microscopic processes. But the approximation, in many cases, will cause wave energy to grow infinitely. This probably is the fundamental reason for diverging Feynman integrals.

Diverging integrals mean that energy (the number of particles) is not conserved. Energy is not conserved because the integral ignores destructive interference in a finite spatial volume.

The problem is not that we "do not know the physics at the Planck scale". The problem is a simple mathematical error in the treatment of waves.

The author of this blog never understood how calculations in "momentum space" can work. The answer is that they do not work. However, the approximation in many cases can be saved with ad hoc tricks like cutoffs, regularization, and renormalization.


The Feynman path integral method is inaccurate


The path integral method assumes that we can calculate all the distinct paths of "particles", and sum the probability amplitudes of the paths, to obtain the probability amplitudes of the end results.

The method assumes that the paths are independent and other paths can be ignored. But this is a wrong assumption. Destructive interference in the volume where the process happens, plays a crucial role in conservation of energy and particle numbers. We cannot ignore other paths.

If we look at a physical process with classical waves, no one would assume that we can ignore destructive interference when we do calculations. If we hit a tense rubber membrane with a sharp hammer, the energy of the impulse response, or the Green's function, depends in a complex way on the interference of the Fourier components. If we try to sum the energies of the components separately, we certainly will obtain strange results.

Why was the mathematical error ignored in quantum field theory?

1.   Feynman diagrams produce correct results for many processes, like the tree-level diagram for electron-electron scattering. It is somewhat surprising that the diagram works in this case.

2.   Subatomic particles are viewed as mysterious and exotic. They do not necessarily obey classical physics. People speculate about new physics at the "Planck scale".

3.   The idea that a virtual photon is a "particle" is nice. A Feynman diagram is aesthetic.

4.   People confuse a probability amplitude with a classical probability. Classical probabilities often are separate and can be summed without much thinking.

5.  People did not realize that Feynman integrals should also work in the classical limit. They must be able to calculate classical wave processes, or they are erroneous.

6.   People thought that a "momentum space" exists. But all physical processes take place in a finite spatial volume. There is no momentum space.

7.   Ad hoc methods, like renormalization, remove most of the problems caused by the mathematical error.


Conclusions


We believe that we found the fundamental reason why Feynman integrals diverge in many cases. The problem is that the Feynman path integral method ignores destructive interference of waves, which takes place in a finite spatial volume.

We have to analyze the electric vertex correction in more detail. Does it miscalculate the classical limit?


Freeman Dyson presented an argument which claims that the sum of Feynman diagrams will always diverge. We will next look at that.