Monday, September 29, 2025

Ultraviolet divergence in QED

In this blog we have for many years claimed that destructive interference removes ultraviolet divergences in QED. Our argument is based on the classical limit.

Regularization and renormalization are not needed if one uses a mathematically correct approximation method. Ultraviolet divergences are a result of a Feynman diagram only "hitting" the electromagnetic field with one Green's function – which is a poor approximation of the process.


















In the diagram above, we see an electron passing close to a very heavy negative charge X.

Let us switch to the classical limit. The electron is then a macroscopic particle with a very large charge-to-mass ratio, 1.8 * 10¹¹ coulombs per kilogram.

Feynman diagrams place no restrictions on the mass of the particles. The particles are allowed to be macroscopic.

As the large electron passes X, it emits a classical electromagnetic wave which contains a huge (actually, infinite) number of real photons.

The electric vertex correction is about the wobbling of the electric field relative to the electron as it passes X. In particular, the far field of the electron does not have time to take part in the process. The electron appears to have a reduced mass as it passes X.

Classically, it is obvious that the inner electric field of the electron tracks the movement of the electron very accurately. The inner field is "rigid", and does not affect the movement of the electron much.


The rubber membrane model


                     #
                     #=========   sharp hammer
                     v                      keeps hitting

        ______       _____ tense rubber membrane
                   \__/
                     • e-         weight makes a pit


In the rubber membrane model of the electron electric field, we can imagine that the weight of the electron is implemented with a sharp hammer hitting the membrane at very short intervals.



                              |
                              |
                              |
                              |
                              • e-
        ^ t
        |
         -----> x


Let us analyze the Green's functions of the hammer hits if the electron stays static in space.

We see that if E ≠ 0, then there is a complete destructive interference for any

       exp(i (-E t  + p • r) / ħ).

That is expected, since the electric field is static.
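As a quick numerical sketch of this cancellation (our own illustration, in Python, with ħ = 1 and an arbitrary grid of hit times), we can sum the phases exp(-i E tₙ) over regularly spaced hammer hits:

import cmath

# Hammer hits at regular times t_n; the electron is static, so the Fourier
# amplitude of the summed Green's functions at the energy E is proportional
# to sum_n exp(-i E t_n).  Units with hbar = 1.
t_hits = [0.1 * n for n in range(10000)]

def summed_amplitude(E):
    return abs(sum(cmath.exp(-1j * E * t) for t in t_hits))

for E in [0.0, 0.1, 1.0, 10.0]:
    print(f"E = {E:5.1f}   |sum| = {summed_amplitude(E):9.1f}")

# The E = 0 component grows with the number of hits (10 000 here), while
# every E != 0 component stays bounded no matter how long the hammering
# continues, and the cancellation gets stronger as |E| grows.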

Let us then assume that the charge X passes by the electron e-. The electron is accelerated, and gains some final velocity v.

For large |E|, the destructive interference still is almost complete. For what values of E is the destructive interference incomplete?

Let a be the acceleration of the electron. Let Δt be the cycle time of a wave with E ≠ 0.

During the cycle time, the electron moves a distance

       R  =  1/2 a Δt².

The wavelength is

       λ  = c Δt.

We see that if Δt is very short, then the electron moves negligibly during a cycle, compared to the wavelength λ. Intuitively, the destructive interference is strong then.

Let t be the time when the electron is accelerated. Intuitively, destructive interference is spoiled the most if the cycle time is t. That is, the wavelength is

       c t.

In this blog we have claimed that the electric field "does not have time to follow the electron", if it is at a distance c t from the electron. Destructive interference matches this.

The ultraviolet divergence is due to the fact that a Feynman diagram only hits the electromagnetic field once with a Green's function. In reality, the electron keeps hitting all the time.


Regularization and renormalization of the ultraviolet divergence in the electric vertex correction




Let us look at how Vadim Kaplunovsky handles the ultraviolet divergence in the vertex correction.







If q² = 0, then the vertex function should be 1. We decide that the "counterterm" δ₁ must have the value:






With that value, the vertex function F₁^net(q²) has the right value 1 when q² = 0.

What is the logic in this? The idea is that the infinite value of the integral F₁^loops(q²) is "renormalized" to zero when q² = 0. We calculate a difference of the integral value when q² ≠ 0, compared to the integral value when q² = 0. The difference, defined in a reasonable way, is finite, even though the integral is infinite.

What is the relationship of this to our own idea, in which destructive interference is used to make the integral converge?

If q² = 0, then we claim that destructive interference cancels, for the Green's function, every Fourier component for which E ≠ 0. This is equivalent to the "renormalization" in the utexas.edu paper, where a "counterterm" δ₁ erases the entire Feynman integral.

What is the meaning of the difference

      F₁^loops(q²)  -  F₁^loops(0)

for q² ≠ 0?


In the link, for q² << m²,






where λ is the "photon mass" used to regularize the infrared divergence. There is no rule for how we should choose λ. The formula is vague.


Is the electric form factor F1(q²) a microscopic quantum effect?


We are struggling to find the analogue of the electric form factor F₁(q²) in the classical limit. If the electron is a macroscopic charge, then the wobbling of its electric field will reduce the mass of the electron, since the far electric field of the electron does not have time to react.

If the mass of the electron is reduced, and it passes a negative charge X, then X will push the electron away a little bit more: the momentum exchange is reduced, and the cross section is smaller.

But if X is positive, then the reduced mass of the electron allows it to come a little bit closer: the momentum exchange is larger and the cross section is larger.

In the literature, the form factor F₁(q²) only depends on the square q² – it does not differentiate between X being positive or negative.

The tree level diagram of e- X scattering only depends on q, not on the electron mass. Thus, the mass reduction would not even show in the Feynman integral cross section.

The quantum imitation principle, which we introduced on September 19, 2025, may solve the problem. When the electron meets X, the electron tries to "build" its electric field with a photon. But the resources of the electron only suffice to send one large photon (mass-energy ~ me) at a time. The electric form factor F₁(q²) would be a result of this shortage of resources.

In the classical limit, the electron is able to send many large photons simultaneously, and build its electric field at a high precision.

The Feynman integral may work correctly if the electron passes very close to X. Then the resources of the electron are severely limited. It may send a single large photon, attempting to build its electric field.

In the classical limit, the form factor F₁(q²) clearly is wrong. It does not describe the wobbling of the classical electric field.

We have to check if any empirical experiments have verified the factor F₁(q²). Does the anomalous magnetic moment depend significantly on F₁(q²)?


A practical calculation when e- is relativistic and meets a massive charge of size e-


Let us assume that the electron e- is relativistic and is deflected by X into a large angle. Let us try to estimate the magnitude of the electric vertex correction.

The southampton.ac.uk link above suggests that 

     F₁(q²)  ~  1  +  α / (3 π).

That is, the cross section increases by ~ 1 / 1,300.
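A quick arithmetic check of that figure (our own check):

import math

alpha = 1 / 137.036                  # fine structure constant
print(alpha / (3 * math.pi))         # ~ 7.7e-4, i.e. roughly 1 / 1,300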














***  WORK IN PROGRESS  ***

Wednesday, September 24, 2025

Infrared divergence in QED

Let us study the classical limit of electron-heavy charge scattering.



The text is probably written by Vadim Kaplunovsky.











The utexas.edu paper discusses the infrared divergence of the "electric" vertex correction in the scattering of an electron from a very heavy negatively charged particle X. The yellow circle depicts the electron bumping from the field of X, and also the virtual vertex correction photon.

Let us analyze the Feynman vertex diagram. The solid line is the electron.
















The Green's function at the birthplace of the virtual photon k "creates" the Coulomb field of the electron.










Let the virtual photon k possess the energy E and the spatial momentum P. Let |E| and |P| be very small. Then the other factors in the integral are essentially constant, but the photon propagator

       1 / k²  =  1 / (-E²  +  P²)

varies a lot. The divergence has to come from photons for which -E² + P² is almost zero, that is, from almost "real", or "on-shell" photons. The paper says that the divergence is logarithmic.

Is there any reason why the integral should not diverge?


A toy model


                         1    90%
                        ------------
                     /                 \         1% + 81% + 81%
          e-  ----------- 1% --------------------------------
                     \                 /
                        ------------
                         2     90%


Let us have a toy model where the electron coming close to X has a 90% probability to emit a virtual photon of the energy 1, and the same 90% probability to emit a virtual photon of the energy 2.

The probability of the electron reabsorbing the virtual photon is 90%.

We sum the probabilities of the three paths and end up with the nonsensical figure of 163%.

What was wrong? We assumed that the paths 1 and 2, and their probabilities, are mutually exclusive. In the diagram, the electron never emits both 1 and 2. A more realistic model is one where it, in most cases, emits both 1 and 2.
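A small bookkeeping sketch of this point (our own illustration, reusing the toy numbers above): if the two emissions are treated as independent events instead of mutually exclusive paths, the outcome probabilities sum to 1:

p_loop = 0.9 * 0.9                   # emit and later reabsorb one virtual photon

# Naive path sum, treating the three paths as mutually exclusive:
print("naive path sum:", 0.01 + p_loop + p_loop)         # 1.63, nonsensical

# If photons 1 and 2 are emitted independently (the electron usually emits
# both), the outcome probabilities sum to 1:
q = 1.0 - p_loop
outcomes = {"neither": q * q, "only 1": p_loop * q,
            "only 2": q * p_loop, "both": p_loop * p_loop}
print(outcomes, "sum:", sum(outcomes.values()))           # sum = 1.0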

Classically, an electron bumping into the charge X will emit and absorb a large number of real and "almost real" photons.


                              ~~~~
                           /     ~~~
                         /      /
           e-  --------------------
                           |
           X   --------------------

    --> t


Feynman diagrams allow the electron to emit many photons, if the diagram is more complex. Does this save the mathematical correctness?

No, not in the case of real, emitted photons. If two photons are emitted, it is a distinct end result. Its probability amplitude cannot be summed with that of an end result where only one photon is emitted.

What about virtual photons?


                           ~~~~~~~
                        /      ~~        \
                     /       /       \        \
          e-   ---------------------------------
                                |
          X    ---------------------------------

     --> t 


Adding a new photon means one additional photon line and two additional vertices in the integral.

Intuitively, emitting and absorbing a small virtual photon should not change the phase of the outgoing electron much. There will be no destructive interference, and adding more photon lines should not cancel the divergence of the integral with a single line. But is this true according to the Feynman rules?

Let us check what people have written about this.


C. Anastasiou and G. Sterman (2018) present a method to remove infrared divergences. They do not say that going to two loops would help and cancel divergences at one loop.


In the classical limit, an electron emits a huge number of small photons


We know that a macroscopic accelerating charge will radiate a very large number of photons whose wavelength is large. What implications does this have for Feynman diagrams?

Let k₀ be the 4-momentum of a small real photon. The correct physical model (classical) says that the probability of the electron emitting just a single photon with a 4-momentum

       k  ≈  k₀

is essentially zero. It will always emit a huge number of small photons.

The Feynman diagram claims that the probability of such an emission is small, but it significantly differs from zero.

We conclude that the Feynman diagram calculates an incorrect result.

If an electron passes the large charge X at a relatively large distance, then we can make a wave packet to describe the electron, and the process is almost classical. Let us use the Larmor formula to calculate how many photons the electron radiates.


We assume that the electron is relativistic and passes a proton at a distance R. The acceleration is

       a  =  1 / (4 π ε₀)  *  e² / R²  *  1 / me.

The power of radiation is

       P  =  2/3  *  1 / (4 π ε₀)  *  e² a² / c³

           =  2/3  *  1 / (4 π ε₀)³  *  e⁶ / R⁴  *  1 / c³

               *  1 / me²

          =  3 * 10⁻⁴⁹  *  1 / R⁴.

The radiated energy for a relativistic electron is

       W  =  P R / c

             =  10⁻⁵⁷  /  R³.

One photon of the typical frequency has the energy

       E  =  h f

            =  h c / (2 R)

            =  10⁻²⁵  *  1 / R.

The number of photons of the typical frequency is

       n  =  W / E

            =  10⁻³² / R².

The Compton wavelength of the electron is 2.4 * 10⁻¹² m.

If the distance R = 10⁻¹⁰ m, the number of typical photons is only 10⁻¹². We conclude that the electron is solidly in the realm of microscopic particles.
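Let us verify these order-of-magnitude figures numerically (our own check, with standard SI constants and the hypothetical distance R = 10⁻¹⁰ m):

import math

eps0, e, me = 8.854e-12, 1.602e-19, 9.109e-31
c, h = 2.998e8, 6.626e-34
k = 1 / (4 * math.pi * eps0)

R = 1e-10                                  # distance of closest approach (m)
a = k * e**2 / R**2 / me                   # acceleration
P = 2/3 * k * e**2 * a**2 / c**3           # Larmor power,      ~ 3e-49 / R^4
W = P * R / c                              # radiated energy,   ~ 1e-57 / R^3
E = h * c / (2 * R)                        # "typical" photon,  ~ 1e-25 / R
print(f"P = {P:.1e} W   W = {W:.1e} J   n = W/E = {W/E:.1e}")
# For R = 1e-10 m this gives n ~ 1e-12 photons of the typical frequency.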

The classical electromagnetic wave emitted by the electron is a "bump" which lasts for the time 2 R / c. What is the Fourier decomposition of such a bump? The Fourier transform is essentially constant for all frequencies up to ~ c / (2 R).

The pulse is able to excite a detector which observes very-long-wavelength photons, say such that the frequency is just 1 hertz. If R = 10⁻¹⁰ m, then the radiated energy is 10⁻²⁷ J, and we are able to detect a million such photons.
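A quick check of the million-photon figure (our own check): the energy of a 1 hertz photon is h * 1 Hz, so

h = 6.626e-34
W = 1e-27                   # radiated energy for R = 1e-10 m, from above
print(W / (h * 1.0))        # ~ 1.5e6, i.e. about a million 1 Hz photons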

The "number of photons" in the pulse is not well defined. We probably can "mine" the energy in the pulse and can extract various collections of photons, depending on the detectors we are using. Nevertheless, we are able to observe a very large number of low-energy photons. This contradicts Feynman diagrams.

Feynman diagrams work reasonably well if the quanta are large?


The analysis of the infrared divergence in the utexas.edu paper is incorrect


The paper at utexas.edu tries to explain away the divergence problem by resorting to the fact that any single detector can only observe photons whose energy is larger than some threshold energy

       ωthr.

But the explanation is incorrect. Quantum mechanics is about what could be observed, not about what a certain real-world detector observes.

The paper says that the divergence in the photon emission has the opposite sign to the divergence in the vertex correction, and that the divergences "cancel each other out". This is not possible. If the electron loses kinetic energy to a real photon, then the momentum of the outgoing electron differs from every vertex correction electron, because a vertex correction electron does not lose any energy. Electron waves with different momenta cannot cancel each other out.

Also, as we saw above, the infrared divergence is not the only problem. Another problem is that Feynman diagrams predict a far too large probability for the output of just a single photon.

Feynman diagrams simply are a wrong way to approximate a semiclassical process which produces electromagnetic radiation. No gimmick or explanation can refute the basic problem.

The Peskin-Schroeder textbook An Introduction to Quantum Field Theory (1995) contains the same strange claim as the utexas.edu paper, that we can sum probability amplitudes for processes which have different end results:











The Feynman probability amplitude for a real photon emission: we can allow divergence




















There are two diagrams. Above is one of them. The probability amplitude is
















Let |k| be very small. If we double the charge of the electron e, the mass of the electron m, and the momenta q and p, then the probability amplitude grows twofold, and the probability flux fourfold. This agrees with the Larmor formula. The classical limit is ok, in this sense.

Now we realize an important thing:

- The Feynman probability amplitude is the PRODUCT of the electron flux and the photon flux. We can ALLOW the integral to diverge, if an infinite number of photons are produced!


Classically, the process will generate an infinite number of photons, if we look at ever longer wavelengths. The Feynman formula may be a fairly good approximation?

But Feynman diagrams miscalculate the effect of the emitted photons in the scattering of the electron from the large charge X. The electron loses its kinetic energy to the radiated photons. That affects its scattering from X. Feynman diagrams only consider one emitted photon, while in the classical limit, the electron will always emit a large number of photons.

How do Feynman diagrams work at all? We calculated above that the probability to emit a photon of a "typical size" is only 10⁻¹² in a typical case of scattering. Corrections which come from photon emissions are so small that they do not spoil the accuracy.

Also, the correction mainly comes from the very rare case (probability 10⁻¹²) when the electron does emit a photon of the typical size. The Feynman diagram is correct to reduce the mass of the electron by that one photon. We can ignore small photons.

In particle accelerators, the margin of uncertainty for a scattering probability is typically on the order of 1%.


The "electric" vertex correction

            
                                ● X

       e-  • -----------       R = minimum distance
                             \
                               \


Let us look at the divergence in the vertex correction. Let us guess that the vertex correction really is about how much the electron mass must be reduced, because its far electric field does not have time to follow the electron in the abrupt scattering from X.

"Far" means something like > 2 R, if the electron is relativistic. There R is the minimum distance between the electron and X.

If we think of the far electric field of the electron built from virtual photons of various wavelengths, then, obviously, an infinite number of virtual photons are needed. This explains why the Feynman vertex integral diverges.

As we noted above, the Feynman integral calculates the product of the electron flux and the virtual photon flux.

In the classical limit, a correct approximation method would calculate the expectation value of the combined energy of the produced virtual photons, and reduce the mass of the electron by 1/2 of that value when the electron meets X. The Feynman method is erroneous in the classical limit.

Let us then analyze the process when the electron is microscopic.

The paper at utexas.edu states that the infrared divergence is logarithmic when we go to smaller |k|. Making the dimension larger than 4 makes the integral converge. This suggests that the divergence really comes from building the electric field at the distance > 2 R. We can build an approximation "exponentially" by using waves of length 2ⁿ, where n is a positive integer.
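A small numerical sketch of this octave picture (our own illustration, not the actual vertex integrand): an integrand ~ 1 / |k| contributes the same amount, ln 2, from every octave of momenta, so the total only grows logarithmically with the cutoff:

import math

def octave_contribution(n, steps=100000):
    # integral of dk / k over the octave [2^-(n+1), 2^-n], by a midpoint sum
    lo, hi = 2.0**(-n - 1), 2.0**(-n)
    dk = (hi - lo) / steps
    return sum(1.0 / (lo + (i + 0.5) * dk) for i in range(steps)) * dk

for n in range(5):
    print(f"octave 2^-{n+1} .. 2^-{n}:  {octave_contribution(n):.6f}")   # ~ ln 2 = 0.6931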











After a lot of calculations, the integral becomes logarithmically diverging:









where D = 4 is the dimension.




















A strange detail: the divergence has a negative sign (marked in red) in the vertex correction, while it had a positive sign in the real photon emission. Why is this?

In the Schrödinger equation, when an electron wave bounces from another negative charge, there is a 180 degree phase shift.

The vertex correction diagram has one vertex more than the real photon emission diagram. This might explain the sign difference. But it does not explain what the phase of the two-particle combination e- and γ should be. We only considered the electron wave.


How to get rid of infrared divergences?


In the case of the real photon emission, we argued above that the electron always emits an infinite number of real photons. We argued that the Feynman probability amplitude for the combination e- and γ is the product of the e- and γ fluxes. Therefore, the integral must diverge. It is the correct behavior for the integral.

We did not check if the integral agrees with a classical calculation of the emitted wave, though.

Also, we remarked that the classical electromagnetic wave cannot be divided into a fixed combination of real photons. We can "mine" energy away from a classical wave packet in many ways.

In the case of the electric vertex correction, we claimed that the electron always emits an infinite number of almost "on-shell" virtual photons, and reabsorbs them later. These virtual photons will reduce the effective mass of the electron. The scattering should be calculated with the reduced mass. In the classical limit, this is self-evident: the far field of the electron does not have time to react to the scattering of the electron e- when the electron comes close to the charge X.

The Feynman diagram calculations in the classical limit are definitely wrong. Feynman cannot handle a case when many quanta are always produced in a process.

Actually, we could say that when many quanta are created, we come to the "non-perturbative" regime. With many quanta, we can model classical processes accurately. Feynman diagrams only work in a "perturbative" setting where a single quantum is created at a time.


Conclusions


We found the reason why there is an infrared divergence in the emission of a real photon in electron scattering.

It is the expected result, and not incorrect: the probability amplitude is the product of the photon amplitude and the electron amplitude. An infinite number of photons are always emitted in the process.

However, we did not check if the Feynman diagram correctly reproduces the classical emitted electromagnetic wave. There must be lots of empirical measurements of bremsstrahlung. Feynman diagrams must reproduce the measured results. Thus, Feynman diagrams do calculate the emitted wave approximately right for microscopic processes.

In the case of the electric vertex correction, many, almost on-shell, photons are always emitted and absorbed. This explains the infrared divergence. How to fix it? Putting a suitable cut-off will remove annoying small photons, and should yield approximately correct results. Feynman diagrams can handle one photon which is rarely created. The divergence problem shows up with many simultaneously created photons.

In the utexas.edu paper it is claimed that the divergences in the real photon emission and in the electric vertex correction "cancel out" each other. That is definitely a wrong claim. A state which has a different number of particles, and a different electron 4-momentum, can never cancel another state.

We introduced a new hypothesis:

- When many quanta are always produced in a process, then the process approaches the classical limit, and the "perturbative" method of Feynman diagrams will fail.

Monday, September 22, 2025

Freeman Dyson's argument about QFT divergence: is the argument incorrect?

Let us analyze the heuristic argument given by Freeman Dyson in 1952.


Justin Bond writes about the argument by Freeman Dyson.

Suppose that the perturbative Dyson series of Feynman diagrams converges for a weak repulsion between electrons. It produces nice, well behaved results.

Let the coupling constant be a small positive number

       e²  >  0.

We assume that we can calculate a physical quantity F(e²) with a perturbative series:







Let the series be analytic in e². We can then analytically continue the series into small negative numbers

       e²  <  0,

and the perturbative series still converges.

A negative value of e² means that electrons attract electrons and positrons attract positrons.


         e- -------------------  e-
                   |
                     ------------  e+
                   |
                     ------------  e-
                   |
         e- ------------------- e-

    --> t


But then the system can tunnel into a lower energy state where we have a large number of electrons close to each other, and elsewhere in space, an equal number of positrons close to each other. The attractive potential energy of these gatherings is hugely negative: its magnitude exceeds the mass-energy of the electrons and the positrons. The system has tunneled into a lower energy state where a huge number of electron-positron pairs came into existence.

In the diagram above, we have a pair born spontaneously.

Such tunneling should be visible in the perturbative series, and should cause divergences in the series?

What is the problem? Is it so that we cannot determine physical quantities with a nice analytic series?


Collapse into a black hole spoils quantum mechanics?


Let us model any physical system with quantum mechanics. There is always a chance that the system will tunnel into a black hole. Is quantum mechanics useless then?

No. The probability of tunneling is negligible in, say, 1 hour.


Feynman cannot handle time-dependent phenomena?


In the Feynman approach, we work in "momentum space", and ignore the time and position coordinates. If the tunneling is time-dependent, then the Feynman approach cannot handle it. What about regularized and renormalized Feynman integrals? We know that they cannot handle "bound" states.

Freeman Dyson argued that the Feynman perturbation series cannot converge. Is this a valid conclusion? The large groups of electrons and positrons form bound states. Can the Feynman series recognize this?

What if it does converge for e² < 0, but simply does not calculate right the collapse of the state to a huge number of electrons and positrons?


John Baez (2016) presents a simple model where a Taylor series E(β), which is supposed to give the ground state energy of a particle in the potential

       V  =  x²  +  β x⁴,

does not converge. Barry Simon proved this in 1969.

Finding the ground state energy is not what a Feynman diagram does. It is not clear if the β model is relevant for Feynman diagrams.


How does a Feynman diagram handle attractive forces in QED? Bound states cannot happen


In quantum electrodynamics, the electron and the positron have an attractive force. If they come very close to each other, they are annihilated.


       e+ --------------- ~~~~~~~  γ
                             |
       e-  --------------- ~~~~~~~  γ'

       --> t


Suppose that an annihilation is not possible. What happens?


                                             γ
                                          /
                                       /
                                    /
       e+ ----------------------------
                          |
       e-  ----------------------------
                          
       --> t


Can the pair emit a large real photon γ as bremsstrahlung? The electron and the positron move at a very slow speed, come close to each other, and emit a lot of energy in a large photon.

But this is prohibited by the Feynman rules. The rules assume that all real particles coming out from the diagram are free. They cannot be in a potential pit. The energy coming into the diagram must equal the energy coming out.

It looks like the Feynman rules prohibit the collapse suggested by Freeman Dyson. The collapse is no problem for Feynman diagrams because the collapse cannot happen in them.

The large populations of electrons and positrons suggested by Dyson cannot be created because energy would not be conserved.


Dyson's argument is not about Feynman diagrams – what does the argument then prove?


The argument seems to prove this:

- If we have an analytic, converging series which calculates correctly a physical quantity for a repulsive force, then the same series calculates something for the attractive force, but it is not the "collapse of the universe". We can say that the series miscalculates for the attractive force.


Is this a fundamental flaw? No. For the attractive force, we need another formula, if the problem is well defined at all. If the problem is not well defined, then we do not need any formula.

Specifically, it is possible that the series of Feynman diagrams does converge. Freeman Dyson did not prove that that is impossible.


Why do people care so much about the behavior of Feynman diagrams? They should not


Feynman diagrams assume that free real particles meet, and the output is another set of free particles.

1.   This only covers a very limited set of physical phenomena.

2.   The Feynman formulae are a crude mathematical approximation about what might happen.

3.   There is no a priori reason why the crude mathematical approximation would work at all. It is surprising that it does work well in many cases.


Freeman Dyson's argument against convergence forgets item 1 above.

People claiming that "new physics" must exist at the Planck scale forget item 3.

Also people claiming that we must modify quantum mechanics, to remove divergences in Feynman integrals (e.g., string model) forget item 3.

Gravity does play a role at the Planck scale. But if we study just QED, there is no need to claim that there is new physics at the Planck scale.

"Supersymmetry" was developed in order to get certain Feynman integrals to converge. Since the divergence problem is a mathematical error, there is no need to assume that supersymmetry is true. The LHC proved that if supersymmetric particles do exist, they are hard to find.

Some people hoped that the LHC would reveal extra dimensions. Extra dimensions were added to get string models to work. And string models were supposed to solve diverging Feynman integrals. Again, we see that new physics was speculated about in order to fix a mathematical error. That is an almost hopeless strategy. The LHC did not find any extra dimensions.


Conclusions


Freeman Dyson did not prove that series of Feynman diagrams diverge.

Divergence in Feynman diagrams is a mathematical error. This explains why the LHC did not find supersymmetric particles or extra dimensions. It is a bad strategy to fix mathematical errors by modifying the physics.

Friday, September 19, 2025

Regularization and renormalization in Feynman diagrams

Naively, the Feynman diagram of the anomalous magnetic moment might diverge logarithmically for large 4-momenta of the virtual photon in the vertex, since it is of a form

             ∫      1 / k⁴  dV,
        k ∈ ℝ⁴ 

where the four-dimensional volume element grows like k³ dk, so the integrand behaves like dk / k.



The paper at the fnal.gov link uses dimensional regularization in 4 - 2 ε dimensions. At the end of the calculation, the paper states that diverging terms cancel each other.

The paper at the utexas.edu link utilizes various symmetries, and states at the end of the calculation that the integral converges.

We conclude that the integral probably is benign: if the integral is summed in the "natural" order of increasing 4-momenta in the virtual photon, then the integral will converge. There is no need for regularization or renormalization.

In the previous blog post we claimed that the mass-energy of the electric field of the electron is

       α / (2 π)  *  me  ≈  1/861 me.

Since the integral is benign, we can claim that the result is robust: the result does not depend on dubious regularization or renormalization procedures.


The quantum imitation principle


Quantum imitation principle. Quantum mechanics tries to imitate classical mechanics. The resolution of the imitation is restricted by the Compton wavelength associated with the energy available in the process. The imitation may in some cases be more accurate, if there is a lucky coincidence. The imitation is further restricted by quantization.


We introduced the principle above in our previous blog post. Quantum mechanics tries to imitate the energy of the electric field of the electron. But the resolution is quite poor: the Compton wavelength of the electron is

       2.4 * 10⁻¹² m,

which is a large value compared to the classical radius of the electron 2.8 * 10⁻¹⁵ m.

The imitation of the electron electric field only succeeds at the resolution of the Compton wavelength. Quantum mechanics believes that the mass-energy of the electric field is just 1/861 of the electron mass.

The fact that we do not need regularization or renormalization in the calculation of the Feynman integral underlines that the quantum imitation principle is robust: it will produce finite values without dubious mathematical methods.


What if destructive interference cancels diverging integrals?


For classical waves, a wave whose frequency is 

       f

cannot normally produce significant waves whose frequency is > f. If we use Green's functions to construct a solution, destructive interference, in a typical case, cancels all high frequencies.

The cancellation is not absolute: if we would sum the absolute values of each wave, the integral would diverge.

We conclude that it is ok if the integral diverges in absolute values, as long as it converges when integrated in the natural order of increasing 4-momenta.

Let us calculate an example. The impulse response, i.e., the Green's function, for a static point charge is

          ∫      1 / k²  *  Real( exp(i k • r) ) dV,
     k ∈ ℝ³

where dV denotes a volume element of ℝ³, and Real takes the real part. Taken in absolute values, the volume element ~ k² dk against the integrand 1 / k² leaves ∫ dk, which diverges badly. If we use Green's functions, we cannot expect absolute convergence of integrals.
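A one-dimensional analogue of the same phenomenon (our own illustration): the integral of sin(x) / x, taken in the natural order of increasing x, converges to π / 2, while the integral of |sin(x) / x| diverges logarithmically:

import math

def integrate(f, a, b, steps=200000):
    # simple midpoint rule
    dx = (b - a) / steps
    return sum(f(a + (i + 0.5) * dx) for i in range(steps)) * dx

f_ordered  = lambda x: math.sin(x) / x
f_absolute = lambda x: abs(math.sin(x) / x)

for K in [100.0, 1000.0, 10000.0]:
    print(f"K = {K:7.0f}   ordered: {integrate(f_ordered, 1e-9, K):.4f}"
          f"   absolute: {integrate(f_absolute, 1e-9, K):.2f}")
# The ordered integral settles near pi/2 = 1.5708; the absolute one keeps
# growing, roughly like (2/pi) * ln K.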


If classical wave processes cancel high-frequency waves, why do high-frequency waves remain in Feynman integrals, and cause divergences?


In this blog we have long suspected that divergences are a result of a wrong way of applying Green's functions to scattering phenomena.

The reason may be that we calculate Feynman diagrams in the "momentum space", and ignore the position of the particles.

In the case of vacuum polarization, we claimed in 2021 that the divergence comes from a sign error when considering the Dirac hole theory.


The classical limit of the "electric" vertex correction


There is no classical limit for the magnetic part of the vertex correction, since the magnetic moment is a microscopic quantum phenomenon.

But for the wobbling of the electric field in a scattering experiment, there is a very natural classical limit. If we increase the mass and the charge of colliding electrons, they start to behave like classical charges. The wobbling of the electric field will remain significant, and will affect the paths of the charges.

The classical limit should have no diverging integrals. Destructive interference should cancel all high frequencies.


                 1/2 c
        e- • --------->
                          | r
                           <---------- • e-
                               1/2 c


In this blog we have written about the natural "scale" of scattering in such an experiment. If electrons meet at relativistic speeds, and their minimum distance is r, then r is the natural scale. Waves shorter than r, or with a frequency higher than

       f  =  c / r,

should get destroyed by destructive interference. The "natural frequency" of the meeting process is f = c / r.

Note about the high 4-momentum cutoff. In the case of the anomalous magnetic moment, the cutoff for high 4-momenta of the virtual photon is determined by the mass of the electron, me. But in the electric vertex correction, the cutoff is determined by the geometry of the meetup of the electron with another charge.














The diagram is from the utexas.edu link. The solid line is an electron.

In the diagram above, the virtual photon on the right is the impulse on the electron as it travels past another electron, or another charged particle.

The virtual photon on the left describes the wobbling movement of the electric field of the electron.

The far field of the electron does not have time to take part in the meeting of the electron with another charge. This means that the electron has a reduced mass as it encounters the other charge.

The Feynman diagram should calculate the reduced mass, and the effect of reducing the mass in the meetup with the other charge.

The reduction in the mass of the electron depends on how long it takes to pass the other charge. If the time is t, then we expect that the electric field at a distance

       >  1/2 c t

cannot take part. The electron mass is effectively reduced by the energy of the field beyond that distance.

The reasoning in this is entirely classical. The integral should converge if analyzed as a classical process.
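A hedged numerical sketch of the mass reduction, using the standard classical expression U(> R) = e² / (8 π ε₀ R) for the Coulomb field energy outside the radius R; identifying R with ~ 1/2 c t is the assumption made above:

import math

eps0, e, me, c = 8.854e-12, 1.602e-19, 9.109e-31, 2.998e8

def field_energy_outside(R):
    # classical Coulomb field energy outside the radius R
    return e**2 / (8 * math.pi * eps0 * R)

for R in [2.8e-15, 2.4e-12, 1e-10]:   # classical radius, Compton wavelength, atomic scale
    print(f"R = {R:.1e} m   U(>R) / (me c^2) = {field_energy_outside(R) / (me * c**2):.1e}")
# Outside the classical electron radius the field carries ~ 1/2 of me c^2;
# outside the Compton wavelength only ~ 6e-4 of me c^2.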

But is it so that the Feynman integral diverges for large 4-momenta of the vertex photon?









The incoming electron has the 4-momentum p, the outgoing p'. The vertex photon is k.

Let us denote the impulse virtual photon by

       q  =  p'  -  p.


How can the Feynman integral "know" the mass-energy of the electric field far away?


      p' 
        \                   |
           \                |
          |  \              |
       k |   |  ~~~     q
          |  /              |
           /                |
        /                   |
     p                     Z
     e-

   ^ 
   | t


In the diagram, an electron meets a heavy nucleus Z. The nucleus gives a pure momentum (no energy) q to the electron. The virtual photon k reduces the mass of the electron in the encounter.

It has to be that the virtual photon q somehow "modulates" the Feynman integral, so that the electron mass me is suitably reduced when the electron meets the nucleus Z. Since the mass of the electron is reduced, the nucleus Z will pull it a little closer, and the electron will be scattered a little more.

If p = p', then the Feynman integral is quite simple. Does it diverge? If yes, we could "renormalize" the integral value to zero. If p ≠ p', the value of the integral will change somewhat. We can use the change as the correction to the scattering probability amplitude. But why should the change, when q is made nonzero, tell us the correction to the probability amplitude of the tree-level diagram?

Classically, destructive interference cancels any high-frequency phenomena (large |k|) in the process. If the Feynman integral is infinite, that means that the integral does not describe the classical process adequately.

Missing information. Let us analyze classically: in the diagram, we know the velocity of the electron, since we know p. If we know the charge of the nucleus Z, we can calculate what is the minimum distance of the electron from the nucleus for a given value of q:

       R(q).

But if we do not know the charge Z, we cannot know R(q). Then we cannot calculate the reduced mass of the electron. The vertex correction is not well defined then.
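As a sketch of the classical calculation of R(q) (our own illustration, using the standard small-angle impulse approximation q ≈ 2 Z e² / (4 π ε₀ v R), which assumes a small deflection):

import math

eps0, e, me, c = 8.854e-12, 1.602e-19, 9.109e-31, 2.998e8
k = 1 / (4 * math.pi * eps0)

def R_of_q(q, Z=1, v=c):
    # invert the small-angle impulse:  q ~ 2 Z k e^2 / (v R)
    return 2 * Z * k * e**2 / (v * q)

q = 0.1 * me * c                          # a modest momentum transfer
print(f"R(q) = {R_of_q(q, Z=1):.1e} m")   # ~ 5.6e-14 m for Z = 1
print(f"R(q) = {R_of_q(q, Z=50):.1e} m")  # ~ 2.8e-12 m for Z = 50
# Without knowing the charge Z, the same q corresponds to very different
# distances R, and hence to a different amount of "wobbling".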


Let the charge of Z be +e, and let the velocities in the encounter be close to c.

If |q| is very large, close to me c, then the reduced mass of the electron should be similar to the case of the anomalous magnetic moment. A reduction of ~ 1/2 me would make scattering to large angles much more probable.

What if q contains lots of energy, not just momentum? That corresponds to a slow electron meeting a very fast nucleus Z. In that case we might be able to identify q with m in the Feynman integral. By switching to a different frame, we can make the electron receive a lot of energy in the encounter.














The fnal.gov link has the Feynman integral in a clearer form:








If the nucleus Z passes the electron at a distance close to the classical electron radius 2.8 * 10⁻¹⁵ m, the electron will receive a momentum which is close to me c, and an energy which may be ~ 1/2 me c².

Classically, the encounter happens very quickly, within a distance of just ~ 2.8 * 10⁻¹⁵ m. Most of the electric field of the electron does not have time to react. Classically, the electron mass is reduced a lot, maybe to ~ 1/2 me.

But the quantum imitation principle says that quantum mechanics cannot imitate the energy of the electric field of the electron closer than its Compton wavelength 2.4 * 10⁻¹² m. Thus, the mass reduction seen by quantum mechanics is ~ 1/861 me.

Let us set q to zero for a while. The integral above, probably, diverges. We regularize and renormalize the integral to zero. We kind of pretend that the integral has a fixed value. We want to calculate the difference that setting q ≠ 0 makes to the value of the integral.

Let q then be very large, close to me c. Its energy "reduces" the electron mass m in the factor

       (l-slash  -   q-slash  +  m) 

      /  ( (l  -  q)²  -  m²)

in a major way.

The factor 1 / (2 π)⁴ at the start of the integral tones this down, so that the probability amplitudes are much smaller (by a factor ~ 1/861) than for the tree-level diagram.

Feynman miscalculates the classical limit? If q is small, then the quantum imitation principle allows quantum mechanics to imitate the classical wobbling of the electron electric field very well. Quantum mechanics should then approach the classical behavior. But the factor 1 / (2 π)⁴ tones the probability amplitudes down too much? Our note about the Missing information above suggests that, indeed, the Feynman diagram is incorrect if we study a macroscopic system.


Regularization and renormalization work because their ad hoc trick implements the destructive interference seen in the classical treatment? In the classical treatment of the process, destructive interference cancels all high-frequency waves. In the Feynman integral taken in "momentum space", this classical mechanism is missing. Regularization and renormalization are ad hoc tricks which are mathematically dubious. But they happen to implement the effect of destructive interference. That is why they produce empirically correct results?


The electric form factor F1 and the classical limit: infrared divergences


The paper at the utexas.edu link calculates the "electric" form factor F₁(q²). Let us check if the result is reasonable for the classical limit.

The calculation of F₁(q²) contains both an "infrared" and an "ultraviolet" divergence.

Let us try to analyze what infrared divergences are in the context of a classical system. We assume that the electron and the nucleus are classical particles with a macroscopic mass and a macroscopic charge. What problem could be involved with very long wavelengths?

If we try to solve a classical wave problem with Green's functions, could it be that the low frequency spectrum requires some destructive interference, too?

Let us, once again, look at the rubber membrane model, which is hit with a sharp hammer.

Ultraviolet problems arise from the sharpness: the hammer must input an infinite energy to create the analogue of a point charge electric field.

A single hammer hit will output energy in the outgoing waves in the undulating membrane. The energy is not infinite, though.

An infrared divergence seems to be associated with the zero mass of the photon.

Could the problem be that the hammer can never create the entire Coulomb field, since the field spans the entire space? The far parts of the field must be "inherited" from earlier hammer strikes? Having a partial electric field is a sure way to end up in problems with Maxwell's equations.

The photon propagator in the Feynman integral with p = q = 0 is

       1  /  l²,

where

       l²  =  -E²  +  P²,

and E is the energy and P is the photon spatial momentum. If the photon is real, "on-shell", the denominator is zero. This probably is the source of the divergence. What does this correspond to, classically?

Low-frequency waves DO escape as real photons. Suppose that the electron takes a time ~ t to pass the nucleus Z. Then a part of its Coulomb field farther than c t from the electron "breaks free" and escapes as electromagnetic waves. Those long waves cannot be reabsorbed by the electron. This imposes a cutoff on the infrared divergence. The Feynman integral miscalculates the probability amplitude of the electron reabsorbing the waves which broke free.


The paper at the utexas.edu link mentions that the infrared divergence is canceled if we consider the "soft photons" which the electron sends. This suggests that the Feynman integral error really is about the waves which break free. However, this does not yet explain why the integral result diverges.


Destructive interference inside a limited volume: Feynman diagrams ignore it


A Feynman diagram is in "momentum space". We imagine a wave arriving at, say, a cubic meter of volume, and interacting with another field there. Waves then leave that cubic meter. We calculate the Fourier decomposition of the departing waves.

The process should conserve energy. If the arriving wave denotes an electron, and the departing wave is 100 times stronger, energy was not conserved. It is even worse if the departing wave is infinitely strong.

Is there any reason why using the Green's function method should conserve energy? If the method ignores destructive interference, then it will certainly break conservation of energy.

Suppose that wave components with different momenta p,

       exp(i (p • r  -  E t))

leave the cubic meter. Can we extract the energy in the wave components separately? If yes, then we can sum the energies of the components.

In principle, we can tune an antenna which absorbs the component p, but not a nearby component p' where

       |p'  - p| > ε.

If ε is small, the antenna presumably has to be very large. It cannot fit inside a cubic meter.

Inside that cubic meter, we have to take into account destructive interference of Fourier components of waves.

We uncovered a major error in the Feynman diagram calculations. They ignore destructive interference inside a limited volume. The calculation happens in "momentum space", and assumes that the space is infinite and that Fourier components can be handled separately.

All physical experiments happen in a limited volume.

Switching to "momentum space" is a valid approximation in many situations. A cubic meter is a vast volume for microscopic processes. But the approximation, in many cases, will cause wave energy to grow infinitely. This probably is the fundamental reason for diverging Feynman integrals.

Diverging integrals mean that energy (the number of particles) is not conserved. Energy is not conserved because the integral ignores destructive interference in a finite spatial volume.

The problem is not that we "do not know the physics at the Planck scale". The problem is a simple mathematical error in the treatment of waves.

The author of this blog never understood how calculations in "momentum space" can work. The answer is that they do not work. However, the approximation in many cases can be saved with ad hoc tricks like cutoffs, regularization, and renormalization.


The Feynman path integral method is inaccurate


The path integral method assumes that we can calculate all the distinct paths of "particles", and sum the probability amplitudes of the paths, to obtain the probability amplitudes of the end results.

The method assumes that the paths are independent and other paths can be ignored. But this is a wrong assumption. Destructive interference in the volume where the process happens, plays a crucial role in conservation of energy and particle numbers. We cannot ignore other paths.

If we look at a physical process with classical waves, no one would assume that we can ignore destructive interference when we do calculations. If we hit a tense rubber membrane with a sharp hammer, the energy of the impulse response, or the Green's function, depends in a complex way on the interference of the Fourier components. If we try to sum the energies of the components separately, we certainly will obtain strange results.

Why was the mathematical error ignored in quantum field theory?

1.   Feynman diagrams produce correct results for many processes, like the tree-level diagram for electron-electron scattering. It is somewhat surprising that the diagram works in this case.

2.   Subatomic particles are viewed as mysterious and exotic. They do not necessarily obey classical physics. People speculate about new physics at the "Planck scale".

3.   The idea that a virtual photon is a "particle" is nice. A Feynman diagram is aesthetic.

4.   People confuse a probability amplitude with a classical probability. Classical probabilities often are separate and can be summed without much thinking.

5.  People did not realize that Feynman integrals should also work in the classical limit. They must be able to calculate classical wave processes, or they are erroneous.

6.   People thought that a "momentum space" exists. But all physical processes take place in a finite spatial volume. There is no momentum space.

7.   Ad hoc methods, like renormalization, remove most of the problems caused by the mathematical error.


Conclusions


We believe that we found the fundamental reason why Feynman integrals diverge in many cases. The problem is that the Feynman path integral method ignores destructive interference of waves, which takes place in a finite spatial volume.

We have to analyze the electric vertex correction in more detail. Does it miscalculate the classical limit?


Freeman Dyson presented an argument which claims that the sum of Feynman diagrams will always diverge. We will next look at that.

Saturday, September 13, 2025

Why does a Feynman diagram simulate the Coulomb field?

In this blog we have touched on this question many times. If we let high-speed electrons collide head-on, then their scattering can be calculated in two ways:

1.   from the Coulomb electric field, using classical physics, or

2.   from the simple Feynman diagram, by integrating the "probability amplitude".



It is easy to calculate that the Coulomb model and the Feynman model approximately agree about the scattering probabilities. But why do they agree?

Also, what is the "virtual photon" γ which the electrons exchange?

In the Feynman diagram, the photon is a Fourier component of the Green's function which describes the impulse response of the electromagnetic field. What does it mean that the other electron "absorbs" that component?


The classical limit


Nothing prevents the particles in Feynman diagrams from having macroscopic masses and charges.

If Feynman diagrams are correct, they must predict the behavior of macroscopic charges approximately right.

That is the case, at least in electron-electron scattering. It is easy to calculate that they correctly predict the scattering from the Coulomb field.

The macroscopic counterpart of the vertex correction is the interaction of a macroscopic charge with its own electric field. We have in this blog introduced the "rubber plate" model for the electric field: it is an elastic object which tries to keep up with the charge if the charge is accelerated.

With the rubber plate model, we may be able to calculate the classical vertex correction. Feynman diagrams should approximately replicate it. If that is the case, why do they calculate the same result? We do not know yet.

The classical limit of the magnetic moment μ is a more difficult case. We do not know what the classical limit is supposed to be. Does the charge move in the zitterbewegung loop at the speed of light? For a macroscopic particle, the loop length is extremely tiny,

      h / (m c).
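A quick check of how tiny the loop length h / (m c) is for a macroscopic mass (our own check):

h, c = 6.626e-34, 2.998e8
for m in [9.109e-31, 1e-3, 1.0]:        # electron, 1 gram, 1 kilogram
    print(f"m = {m:.1e} kg   h / (m c) = {h / (m * c):.1e} m")
# For 1 kg the loop length would be ~ 2e-42 m, far below any physical scale.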


The rubber membrane model once again


In the rubber membrane model, we may imagine that the charge keeps hitting the membrane with a hammer, at short intervals, emitting Green's functions. The hammer also absorbs impulses which come back from previous hits, since the membrane wants to straighten up.


                 /
              / |
           /    | γ virtual photon
           \    | 
              \ |
                 \
                    \
                     e-
   ^ t
   |


In this model, it is quite a natural assumption that the outgoing electron absorbs impulses which were emitted by the incoming electron. As if the incoming electron and the outgoing electron were different particles.

The Feynman diagram above looks sensible in this interpretation.


The "electric" F1 term in the vertex correction










In the utexas.edu link of the previous post, the vertex function has the form above. The term F₁ is the "electric" term, for which the loop correction

       F₁(q²)  →  0,

when q² → 0. In the rubber plate model of the electric field, we guess that F₁ corresponds to the effects of the elastic electric field "wobbling" when the electron meets another charge and changes course. If the electron passes by a nucleus, for example, at a large distance, then there is very little wobbling of the electric field. The wobbling does not affect much the momentum that the electron gains when it flies past the nucleus.

Question. Does the shape of the electric potential affect the electric vertex correction F₁? The electron can change course abruptly if it passes close to a very small charge. If it passes far from a large charge, then the change in the velocity vector is gradual. In the classical analogy, the wobbling in the first case should be significant, but in the second case insignificant.


In a Feynman diagram, a virtual photon q which contains just momentum, no energy, is a Fourier component wave of a type

       exp(i p x / ħ).

It does not care about the shape of the Coulomb potential it was derived from.

Let the electric field originate from a macroscopic point mass M with an electric charge Q. Then only the coefficients of the Fourier components depend on Q. What happens in a Feynman diagram if we scale the coefficients by some constant C?

The "flux" of the virtual photon q depends linearly on C.

Does the scattering probability scale with the factor C²? Obviously, yes.

 
            \               |
          γ |\             |
             |  \           |
             |    ~~~~      q
             |  /           |
             |/             |
            /               |
          e-               Q

       ^ t
       |


Let the electron e- in the Feynman diagram be macroscopic, too. The diagram above contains the horizontal virtual photon q, and the vertex virtual photon γ.

But now we have a contradiction with the classical limit. In the classical limit, the elastic electric field of the e- behaves quite differently if Q is small and e- passes it quickly, than in the case where Q is large and e- uses a lot of time to pass it far away. In both cases, the momentum q is transferred, but for Q small, the electric field of e- wobbles much more.

Note that for a large Q we could use the Schrödinger equation to calculate the path of e- nonperturbatively. Any effects from the dynamics of the electric field of e- are negligible.

We may have found a case in which Feynman diagrams miscalculate the vertex correction.


The "magnetic" F2 term in the vertex correction


The magnetic term F₂(q²) multiplies a spin matrix σ. The photon q couples to the spin, or the magnetic moment μ, of the electron.

The term F₂(q²) does not go to zero when q² approaches zero. This indicates that the magnetic moment μ is larger by roughly the fraction 1/861 than the one predicted by the Dirac equation.

We have speculated that the "bare mass" of the electron is smaller by the fraction 1/861 than the full mass me. Then the Dirac equation gives a magnetic moment which is larger by the fraction 1/861.

Classically, the "bare mass" of a point charge should be minus infinite, because its electric field has an infinite energy.

We have to analyze the Feynman vertex integral, in order to understand what logic it uses to calculate the bare mass of the electron.

Let us only consider photons with a positive energy 0 < E < me c². The probability amplitude for the electron sending a photon whose energy is

       ~ 1/2 me c²

is quite small, about 1/1,000. Otherwise the magnetic moment of the electron would be doubled.

What is the "meaning" of this 1/1,000? Let us try to "build" the electric field potential of the electron up to the distance 1/2 of the Compton wavelength λe. We need Fourier components whose wavelenght is λe.

We do have the energy me c² available in the process. In quantum mechanics, we can then describe things whose size is λe. We are able to describe the electric field of the electron down to the distance λe.

Hypothesis. Feynman diagrams are able to describe and understand the energy of the electric field of the electron down to the distance

       h / (M c)

if the mass energy M is available in the process.


Since the available mass-energy for a static electron is me, Feynman diagrams can describe the electric field's energy down to the distance

       λe  =  h / (me c).

Feynman diagrams reduce the mass of the electron by a factor 1/861, because that mass-energy is in the electric field of the electron, not in the electron itself.

This explains why our semiclassical zitterbewegung argument produces the same anomalous magnetic moment as Feynman diagrams. Both methods calculate the reduced mass of the electron in the same way, subtracting the mass-energy of the electric field farther than λe from the electron.


Why and how do Feynman diagrams understand the energy of the electric field?


Our hypothesis above claims that Feynman diagrams have a surprisingly high ability to understand and calculate things about the electric field.

Feynman diagrams describe electron-electron scattering even better: they can model strong scattering in which the distance between the electrons is only the classical electron radius,

       re  =  2.8 * 10⁻¹⁵ m.

But we only claim that they understand the electric field energy at the scale

       λe  =  2.4 * 10⁻¹² m.

Incidentally,

       re / λe  =  α / (2 π)  ≈  1/861.
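
A quick numerical check of this ratio, as a Python sketch with approximate CODATA values:

       from math import pi

       alpha = 1/137.035999       # fine-structure constant
       r_e   = 2.8179403e-15      # classical electron radius, m
       lam_e = 2.4263102e-12      # Compton wavelength h / (me c), m

       print(r_e / lam_e)         # ~ 1.161e-3
       print(alpha / (2*pi))      # ~ 1.161e-3
       print(1/861)               # ~ 1.161e-3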


The quantum mechanical principle: you cannot know things which are much smaller than the Compton wavelength?


This principle is broken by electron-electron scattering: we do get information down to the classical radius of the electron.

Why does quantum mechanics break its principle in this case?

And why does quantum mechanics obey the principle when it analyzes the field energy of the electron?

A possible solution: the electric 1 / r potential just "happens" to be such that one can describe scattering which takes place at a very small distance.

A time-independent superposition of plane waves, with wavelengths no shorter than the Compton wavelength of the electron, can describe an arbitrary electric potential down to the scale of the Compton wavelength.

But the formula for the electric potential, ~ 1 / r, happens to be such that this crude description produces the right scattering probability down to the classical electron radius.

It is a lucky coincidence.

To describe the energy of the electric field of the electron, we need quanta with an energy E. Since E is limited to me c² in the case of a slow electron, we can only describe the energy of the electric field down to the distance

       λe 

from the electron.

Quantum mechanics thinks that the electron has given up 1/861 of its mass-energy to its electric field. The rest of the mass is governed by the Dirac equation. This explains the anomalous magnetic moment.

Another interpretation: quantum mechanics thinks that in 1/861 of cases, the electron has emitted a real photon whose energy is ~ 1/2 me c². That doubles the magnetic moment of the electron in those rare cases.


Renormalization of the electron mass: the bare mass problem


We found a solution to the classic problem: what is the bare mass of the electron?

The bare mass is

       860/861 me,

for a slow electron. The electric field of the electron only carries 1/861 of me.
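
In numbers, a small Python sketch, taking 1 - α / (2 π) as the bare-mass fraction 860/861:

       from math import pi

       alpha = 1/137.035999
       me    = 9.1093837e-31                 # electron mass, kg

       bare_fraction = 1 - alpha/(2*pi)      # ~ 0.99884
       print(bare_fraction, 860/861)         # both ~ 0.99884
       print(bare_fraction * me)             # bare mass ~ 9.099e-31 kg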

We can say that the electric field very close to the pointlike electron has no energy, if it is probed with another slow electron or a slow positron.

In 2021 we tried to solve the electron mass renormalization problem by claiming that its electric field has no energy at all, since the field can be Fourier decomposed into plane waves for which p ≠ 0 and E = 0. We ran into problems trying to explain the anomalous magnetic moment.

Our solution also resolves the classical renormalization problem for the electron mass if we replace the classical description with the quantum description.

The classical renormalization problem suggested that a classical point charge is an inconsistent concept. By replacing it with the quantum description, we get rid of the inconsistency.


Conservation of the center of mass in the electron self-energy diagram


                            _____    γ  virtual photon
                          /           \
           e-  ------------------------------

   --> t


In 2020 we claimed that the virtual photon γ cannot be carrying energy E > 0 in the Feynman self-energy diagram above. We argued in the following way: 

Let an initially static electron emit the photon with E > 0. The electron starts to move in the opposite direction from the photon γ. Let the electron then somehow (magically) absorb γ. How does the electron now know where it must "jump" in order to keep the center of mass of the system static?

A possible solution to the paradox: the energy of the electric field is always outside the electron. There really is no need for the electron to jump anywhere.

In the rubber membrane & the sharp hammer model, the hammer made a pit into the rubber membrane. There is some elastic energy in the stretched membrane, and that energy never comes back to the hammer.

Thus, the self-energy diagram above only describes one step in a continuous process. The continuous process does not involve any energy transfer.

This means that 860/861 me really is the "true" reduced mass of the electron. It never absorbs the remaining 1/861 me.


The remaining renormalization problem: how to get rid of regularization?


In 2021 we claimed that we can appeal to destructive interference to make Feynman integrals converge absolutely. Any wave with a very short wavelength is canceled by destructive interference if the wavelength is much shorter than the "geometric scale" of the process.

We are not sure if our claim is correct. We have to study it further. If the electron emits a virtual photon with a very short wavelength, can we really claim that destructive interference cancels it? The system now contains two particles: the electron and the virtual photon. Does the destructive interference apply to this pair?


Why does the Green's function correctly calculate the field energy farther away than the Compton wavelength? A time-independent Green's function


This is another crucial question which we have not solved yet. If the Feynman diagram correctly handles a classical limit, then it has to know what the field energy of the Coulomb field is.

Why does the Green's function produce correct results?

Let us consider a time-independent Green's function for the Coulomb potential of a point charge. A point charge can be seen as a point "impulse" which distorts the electric field. The Coulomb field is the "impulse response".

Maxwell's equations without the charge are the "homogeneous" equation:

       H(r)  =  0.

The point charge is the perturbation which we write on the right side, a Dirac delta function:

       H(r)  =  δ³(r - r₀).
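
As a consistency check of this picture, one can verify with sympy that the Coulomb-type potential 1 / r solves the homogeneous equation away from the point charge. This is a sketch; the constants (the charge, 1 / (4 π ε₀)) are dropped.

       import sympy as sp

       x, y, z = sp.symbols('x y z', real=True)
       r = sp.sqrt(x**2 + y**2 + z**2)
       phi = 1/r                      # Coulomb-type "impulse response"

       # the Laplacian of 1/r vanishes for r != 0: the homogeneous equation holds
       laplacian = sp.diff(phi, x, 2) + sp.diff(phi, y, 2) + sp.diff(phi, z, 2)
       print(sp.simplify(laplacian))  # 0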

What is the energy required to build the Coulomb electric field?

We can build a crude 3-dimensional potential well of a radius R by combining plane waves of the form

       Real( exp(i k • r / ħ) ),

whose wavelength is

       λ  >  R.

Here Real takes the real part of the complex number, and k is the spatial momentum of the virtual photon.

What is the Fourier decomposition (transform) of the 3-dimensional function 1 / r? It is

       ~  4 π / k²

(in units where ħ = 1), where k is the spatial momentum.

This would be the familiar Feynman propagator for the photon, if k were a 4-momentum.
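
One can check the 3-dimensional Fourier transform of 1 / r with sympy, using a screening factor exp(-ε r) as a regulator and letting ε go to zero. A sketch in units where ħ = 1:

       import sympy as sp

       r, k, eps = sp.symbols('r k epsilon', positive=True)

       # angular integration of exp(-i k . r) reduces the 3D transform to
       #   4 pi / k * Integral( sin(k r) exp(-eps r), r = 0..oo )
       ft = 4*sp.pi/k * sp.integrate(sp.sin(k*r)*sp.exp(-eps*r), (r, 0, sp.oo))
       print(sp.simplify(ft))         # 4*pi/(epsilon**2 + k**2)
       print(sp.limit(ft, eps, 0))    # 4*pi/k**2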

How much energy is "carried" by each component wave, to build the electric field?

The energy of the electric field is

            ∞
       ~  ∫   1 / r⁴  * r² dr
          R

       ~  1 / R,

down to a radius R.

We obtain a reasonable model if each component exp(i k • r) carries an energy proportional to its Fourier coefficient, that is, ~ 1 / k².

To construct the potential down to a radius R, we need momenta |k| which are at most ~ 1 / R. The energy down to a radius R is then

             |k| < 1 / R
       ~     ∫             1 / k²  *  4 π k² dk
            0

       ~  1 / R.
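
Both estimates can be verified symbolically. A sympy sketch; the 1 / k² weight per mode is an assumption of the crude model above.

       import sympy as sp

       r, k, R = sp.symbols('r k R', positive=True)

       # field energy outside radius R: density ~ 1/r^4, volume element ~ r^2 dr
       E_field = sp.integrate(1/r**4 * r**2, (r, R, sp.oo))
       print(E_field)                          # 1/R

       # momentum-space estimate: weight ~ 1/k^2 per mode, momenta up to ~ 1/R
       E_modes = sp.integrate(1/k**2 * 4*sp.pi*k**2, (k, 0, 1/R))
       print(E_modes)                          # 4*pi/R, i.e. ~ 1/R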


Analysis of the time-dependent Green's function for the electric field


The Feynman integral for the anomalous magnetic moment is written out in the fnal.gov link of the previous blog post.

The photon propagator (Green's function) looks like the formula in the 3-dimensional case, but this time k is a 4-momentum

       k  = (E, px, py, pz).

For a 4-momentum k, the square k² is defined as

       k²  =  -E²  +  p²

in the metric signature (- + + +).

We should figure out what the "impulse response" in this case looks like.

Note. The electric field, presumably, is always present around the electron and is time-independent. Why does the Feynman diagram study a time-dependent impulse response? The Feynman integral would converge absolutely if we dropped the time dimension. There would be no need for the dimensional regularization 4 - 2 ε. Our note about conservation of the center of mass above suggests that the virtual photon in the self-energy diagram cannot carry any energy E at all.



The photon propagator is quite similar to the massless Klein-Gordon propagator. The massless Klein-Gordon equation is the usual wave equation.


                 #
                 # =======     sharp hammer
                 v

       ____       ____     tense rubber membrane
               \_/


The "impulse response" for the massless Klein-Gordon equation should be similar to the one when we hit a rubber membrane with a sharp hammer. But how is this related to the elastic energy of the rubber membrane when a pointlike static weight is sitting on it?

Bringing energy into the integral. Technically, the Feynman integral needs a parameter containing energy, since it has to reduce the mass-energy me c² of the electron. If we used a time-independent Fourier decomposition of the electric field, how could we introduce energy there? This is a technical explanation, but it does not help us to understand the physics.


Creating the Coulomb field around an electron, time-dependently


Suppose that the potential is initially flat around an electron. We want to create a pit in the potential, such that the pit resembles the Coulomb potential around the electron.

Let the potential suddenly change from flat to the Coulomb 1 / r potential. Let us take the Fourier decomposition of this temporally changing potential.

The Fourier decomposition probably is somewhat similar to the Green's function of Maxwell's equations, or alternatively, the Green's function of the Klein-Gordon equation.

We assume that we cannot use quanta whose mass-energy is > me c² to create the potential.

This means that we can only imitate the Coulomb field down to a distance of about λe from the electron. The resolution of quanta < me c² is no better.

But the energy of the Coulomb field at distances > λe is only

       α / (2 π)  ≈  1/861

of me c². If we always used a quantum of roughly the size ~ me c², the constructed field would carry 861 times the required energy. Let us therefore create the potential pit in such a way that only in < 1/861 of cases do we use the energy of a large quantum whose size is ~ me c².
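
A rough order-of-magnitude check with Python: we take the Coulomb energy scale e² / (4 π ε₀ λe) at the distance λe as a stand-in for the field energy outside λe (the exact field-energy integral differs by a factor of order one). Numerically it is α / (2 π) of me c².

       from math import pi

       hbar = 1.054571817e-34      # J s
       c    = 2.99792458e8         # m/s
       me   = 9.1093837e-31        # kg
       e    = 1.602176634e-19      # C
       eps0 = 8.8541878128e-12     # F/m

       lam_e = 2*pi*hbar/(me*c)              # Compton wavelength h/(me c)
       U     = e**2/(4*pi*eps0*lam_e)        # Coulomb energy scale at lam_e
       print(U / (me*c**2))                  # ~ 1.16e-3 ~ alpha/(2 pi) ~ 1/861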

The time-independent impulse response of the electric field is the Coulomb field. One can guess that a time-dependent impulse response roughly creates a Coulomb field, too. This is a heuristic reason why the propagator in the Feynman integral is the Green's function for Maxwell's equations.

How is the time-dependent Green's function related to the energy of the electric field? We restrict the Green's function only to use quanta < me c².


             <-- 2 λe -->
       ___                    ___   potential
             \________/
                    • e-


The Fourier decomposition of the time-dependent potential pit will have energies E up to me c², and momenta |p| up to me c.

The coefficients in the Fourier decomposition may be interpreted as probabilities of measuring a certain energy E if we measure the energy of the time-dependent potential pit. The probability of seeing the energy as roughly me c² must be < 1/861. The expectation value of the energy should be me c² / 861.

In the anomalous magnetic moment experiment, the virtual photon of the magnetic field B is "measuring" the reduced mass of the electron. It is a good guess that the Fourier decomposition of the pit, or alternatively the Green's function of Maxwell's equations, gives the probabilities for the various reduced masses.

We found the connection between the Green's function and the energy of the Coulomb field at a distance > λe.


Conclusions


We may have solved the century-old problem of what the mass-energy of the electric field of the electron is. It is

       α / (2 π)  *  me  ≈  1/861 me.
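
In concrete numbers, a small Python sketch: this mass-energy is only a few hundred electron volts.

       from math import pi

       alpha    = 1/137.035999
       me_c2_eV = 510998.95            # electron rest energy, eV

       print(alpha/(2*pi) * me_c2_eV)  # ~ 593 eV
       print(me_c2_eV / 861)           # ~ 593 eV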

We now have a qualitative understanding of what the anomalous magnetic moment Feynman diagram calculates: it determines the mass-energy of the electric field of the electron, and computes the magnetic moment from the reduced mass of the electron.

A brief Internet search did not reveal prior art. This may be the first time someone gives an intuitive explanation for the diagram.

We introduced a new general idea: quantum mechanics tries to imitate classical mechanics. The resolution of the imitation is restricted by the Compton wavelength associated with the energy available in the process. The imitation may in some cases be more accurate, if there is a lucky coincidence. The imitation is further restricted by quantization. Measurements of energy must return whole quanta, not fractions.

We still have to study the Feynman integral in detail. Why is the logarithmic divergence canceled by symmetric positive and negative values? What is the role of the energy E versus the momentum p? How do the gamma matrices treat me, E, and p?

Our most important remaining task is to eliminate regularization and renormalization from Feynman diagrams. In 2021 we came up with some ideas: high-energy virtual particles may be canceled by destructive interference. But do those ideas really work?