Monday, September 20, 2021

We probably solved the mystery of regularization in vacuum polarization

In this blog we have been claiming that destructive interference wipes off very high frequencies, and that there are no true divergences in Feynman integrals.


       e- ---------------------------------
                            | virtual photon q
                            |
                    k     O     -k + q    virtual pair loop
                            |
                            | virtual photon q
           ---------------------------------
     proton
            

In vacuum polarization, a sharp cutoff at the exchanged momentum |q| generates "almost right" results. We only integrate over those 4-momenta k for which the Euclidean norm of k satisfies

       |k| < |q|.

It is immediately clear that a sharp cutoff at |q| does not describe destructive interference correctly. Wave phenomena are smooth. We should find a way to make a smooth cutoff.
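
As a toy one-dimensional sketch (not the actual vacuum polarization integral; the integrand 1 / (1 + k), the damping weight exp(-k / |q|), and the helper names below are arbitrary illustrative choices), we can compare a hard cutoff at |q| with a smooth suppression factor:

    import math

    def integrate(f, a, b, n=200_000):
        # Simple midpoint-rule quadrature; accurate enough for this illustration.
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    def sharp_cutoff(q):
        # Sharp cutoff: integrate 1 / (1 + k) only up to |q|, drop everything above.
        return integrate(lambda k: 1.0 / (1.0 + k), 0.0, abs(q))

    def smooth_cutoff(q, k_max=1.0e4):
        # Smooth cutoff: keep all k, but damp the integrand gradually by exp(-k / |q|).
        return integrate(lambda k: math.exp(-k / abs(q)) / (1.0 + k), 0.0, k_max)

    for q in (1.0, 10.0, 100.0):
        print(q, round(sharp_cutoff(q), 3), round(smooth_cutoff(q), 3))

Both versions grow only logarithmically in |q|; the smooth weight simply removes the hard edge at k = |q|.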

Let us have a divergent Feynman integral K(q) where q is a parameter.

If we set q = 0, we have a "plain" version of the integral, K(0).

Let us then let q increase from zero. In the case of the vacuum polarization integral (12.448) in Hagen Kleinert's book, the absolute value of the integral decreases from K(0) to the value K(q) (the integral is actually infinite, but assume a cutoff at some very large Λ to make the values finite).

We then interpret that the part

       K(q)

is the high-frequency part of the integral, which destructive interference wipes off.

The remaining part

       K(0) - K(q)

is the correct integral value. The part which was wiped off by destructive interference has been removed. Only the relevant part of the integral remains.

This interpretation has the following advantages:

1. We do not need to speculate about new physics at a high-energy cutoff Λ. The cutoff is purely a formal mathematical tool when we calculate the effect of increasing q.

2. The sign error in the Feynman vacuum polarization integral is removed. The integral makes the Coulomb force stronger, not weaker as the literature claims.

3. This explains why various regularization methods work and generate the same numerical results. The role of regularization is just to make the integration volume large enough, but not infinite, so that we can calculate the effect of increasing q.

4. We do not need to assume the existence of the Dirac sea, or of "vacuum fluctuations" which fill the vacuum. In some literature, it is assumed that the part K(q) of the integral is something real or physical. It could be the Dirac sea of negative-energy electrons, or "vacuum energy" which fills space. We only need to assume the existence of the real particles which enter the scattering experiment.

5. We do not need to renormalize the charge of the electron.


In the case of quantum gravity, the interpretation raises the following problem: since very high-energy quanta may be black holes, how do we calculate the effect of increasing q? Maybe in gravity we have to make a real, physical cutoff.

The interpretation above refutes the Wilsonian view that low-energy QED somehow arises from unknown physics at the Planck scale. In our model, the running of the QED coupling constant is a result of well-known QED physics at the low-energy scale.


A simple calculation example


Suppose that we have a divergent integral which, we believe, describes a physical process:

                   ∞
       K(q) = ∫  1 / (1 + |q| + x) dx.
                 0

Let us assume that the correct integral I(q) for the physical process is obtained with a sharp integration cutoff at |q|, but with no q inside the integrand:

                  |q|
       I(q) = ∫  1 / (1 + x) dx.
                0

We have

       I(q) = K(0) - K(q),

where we assume that K is calculated with the integral cut off at some large Λ to make it finite, so that we can subtract the values. Indeed, with the cutoff the integrals evaluate to

       K(q) = ln(1 + |q| + Λ) - ln(1 + |q|),

       I(q) = ln(1 + |q|),

so that K(0) - K(q) differs from I(q) only by the term ln((1 + Λ) / (1 + |q| + Λ)), which vanishes as Λ → ∞.

It makes sense to say that K(q) is the "complement" of I(q).
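
A minimal numeric check of this relation, using the closed forms of the two toy integrals above (the chosen value of q and the cutoff values Λ are arbitrary):

    import math

    # Closed forms of the toy integrals, with K regulated by a cutoff Lambda:
    #   K(q) = integral of 1 / (1 + |q| + x) over 0 < x < Lambda
    #   I(q) = integral of 1 / (1 + x)       over 0 < x < |q|

    def K(q, Lam):
        return math.log(1.0 + abs(q) + Lam) - math.log(1.0 + abs(q))

    def I(q):
        return math.log(1.0 + abs(q))

    q = 5.0
    for Lam in (1.0e3, 1.0e6, 1.0e9):
        print(Lam, K(0.0, Lam) - K(q, Lam), I(q))
    # K(0) - K(q) approaches I(q) = ln(1 + |q|) as the cutoff Lambda grows.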


Summary of divergences


Let us summarize our current view on cutoffs / regularization / renormalization.

Infrared divergences. The reason for infrared divergences is that Feynman integrals erroneously treat overlapping classical probabilities as separate cases.

At the low-energy / low-frequency / large-distance scale, an electron looks like a classical particle whose position is reasonably well defined. At those frequencies, the electron emits classical electromagnetic radiation. That radiation contains many soft photons, for example, in bremsstrahlung. Thus, there is a large overlap between, say, the emission of a soft photon A and a soft photon B. The emissions are not separate cases.

A cutoff at a moderate photon energy removes almost all of the overlap in classical probabilities. That is why it works.


Ultraviolet divergences. The divergence is removed by destructive interference of high-frequency waves. The traditional regularization / renormalization process is a way to calculate what is left after destructive interference wipes off almost all of the high frequencies.

There is a sign error in the Feynman vacuum polarization integral in the literature. People have been thinking that the complement of the relevant integral value is the "physical, existing" thing. Paul Dirac thought that the infinite Dirac sea of negative-energy electrons screens almost all of the (infinite) bare charge. But the complement is the part which really does not exist physically.

We have explained why the assumption that the complement exists leads to problems with the model. Information flows from real particles to the shadow world of the complement. How are we supposed to describe the state of the complement? Why do the negative-energy electrons not form "shadow" hydrogen atoms with real protons?

The problem of ultraviolet divergences exists also in various wave problems of classical physics. If we were to use Green's functions and integrate over arbitrarily high frequencies, many problems would diverge. In classical physics we instinctively assume that very high frequencies are cut off by some process (that is, by destructive interference). In quantum field theory, this common sense was somehow lost by researchers, and they built the strange interpretation which involves a sign error, regularization, and renormalization.


The vacuum is totally empty in a particle model


Our interpretation probably removes the notion of "vacuum energy" from the calculation of the Casimir effect. There is an attractive dipole force between the metal plates. It is not that the pressure of "vacuum energy" pushes them together. We need to look at the literature to see how the Casimir effect is calculated.

In this blog we have advocated a model where particles are fundamental, and wave phenomena arise from path integrals. In a particle model, the vacuum is totally empty. There are no particles in the vacuum. There is no shadow world of negative-energy electrons, and there are no vacuum fluctuations.

Textbooks of quantum field theory usually assume the existence of a lowest-energy state which is designated as the "vacuum", even though we do not know what that state is, or whether it is unique. Particles are then created with a creation operator.

But in a particle model, the vacuum is totally empty, and real particles enter the experiment. Feynman diagrams are used to calculate the probability amplitudes of the various outcomes. There is no need for a formal creation operator. A new particle is born at a vertex of the diagram, though we could say that a creation operator creates it there.
