Monday, August 30, 2021

Feynman diagram phase rules: lots of errors?

We suspect that the sign of the probability amplitude in the Feynman vacuum polarization diagram is flipped. We have lots of open questions about the phase of the probability amplitude which various Feynman diagrams calculate.


In the link are listed the Feynman rules from Bjorken and Drell.


Let us look at the derivation of the vacuum polarization correction in the book by Hagen Kleinert (2016). It is in Section 12.16.


                           e+    q - p
                              _____
                            /           \
              ~~~~~                ~~~~~
                            \______/
   virtual           e-     q + p     virtual photon q
   photon q


We assume that q is a pure spatial momentum exchange, for example, in the direction of the x axis.

In the diagram, p is an arbitrary 4-momentum. We must integrate over all possible values of p up to some cutoff

       |p| <  Λ,

where the norm |p| is the euclidean norm, not the Minkowski space norm. That is,

       |(t, x, y, z)| = sqrt( t² + x² + y² + z² ).

Let us then look at how the 99 page document in the link determines the phase of the correction. Is it 1, i, -1, or -i?

The photon propagator (12.92) is

       -g^μν * i / q²,

where g is the Minkowski metric. In our blog we use the "east coast" sign convention, where g is

                -1      0      0     0
                 0      1      0     0
                 0      0      1     0
                 0      0      0     1.

In the matrix, the first coordinate is the time coordinate. Note that the square of the 4-momentum q² is the square in the Minkowski metric, not in the euclidean metric. In Feynman diagrams, one always uses the Minkowski metric, unless explicitly stated otherwise.

The square in the Minkowski metric, with our east coast sign convention, is

       (t, x, y, z)² = -t² + x² + y² + z².
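
As a sanity check of the sign convention, here is a small Python sketch of ours (not from any textbook) which evaluates both squares for a sample 4-vector:

import numpy as np

# East coast metric g = diag(-1, 1, 1, 1); the first coordinate is time.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
p = np.array([3.0, 1.0, 2.0, 2.0])    # a sample 4-vector (t, x, y, z)

minkowski_square = p @ g @ p          # -t² + x² + y² + z² = 0 (a lightlike vector)
euclidean_square = p @ p              #  t² + x² + y² + z² = 18
print(minkowski_square, euclidean_square)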

The propagator of the second photon line in the diagram brings the coefficient

       -i

to the Feynman formula for the vacuum polarization diagram. In literature, the calculated correction is real. Thus, the loop has to get an imaginary value. Let us find out where the coefficient i in the loop comes from.


The imaginary value of the vacuum loop integral


Formula (12.446) contains no gamma matrices and is easy to grasp. We believe that M there is the real electron mass, and

       M²

is a positive real number.

In Formulae (12.452) and (12.453) we see that the imaginary factor i comes from a contour integral around the poles of

       1 / ( p² - M² )².

Does this make sense? A real-valued integral got an imaginary value.
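
We can check the appearance of the factor i in a one-variable toy version of the same pole structure. The Python sketch below (M = 1 and the iε value are arbitrary illustration choices, and this is not the full 4-dimensional loop integral) integrates 1 / (E² - M² + iε)² along the real axis; a plain Riemann sum reproduces the residue result i π / (2 (M² - iε)^(3/2)), which is almost purely imaginary:

import numpy as np

# One-dimensional toy version of the pole structure: integrate
# 1 / (E² - M² + iε)² along the real axis and compare with the
# contour (residue) result iπ / (2 (M² - iε)^(3/2)).
M, eps = 1.0, 0.05                        # illustrative values only
E = np.linspace(-100.0, 100.0, 2_000_001)
f = 1.0 / (E**2 - M**2 + 1j * eps)**2
riemann = np.sum(f) * (E[1] - E[0])
residue = 1j * np.pi / (2.0 * (M**2 - 1j * eps)**1.5)
print(riemann)     # approximately -0.12 + 1.56j
print(residue)     # approximately -0.12 + 1.56j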

We have suggested putting the cutoff at

        |p| < |q|.

If |q| is small, the pole is not even in the integration 4-volume. The real-valued integral should stay real.

Could we define that the integral in the Feynman calculus is always made imaginary, either through a contour integral or through explicitly multiplying it by i?

The pole corresponds to a real electron. Its propagator in the Feynman calculus is infinite. If q is small, there is not enough energy to create a real electron. The role of almost real electrons needs to be studied. Are they important in vacuum polarization?


The sign flip: negative or positive contribution?


This is a harder problem than the extra i in the value of the integral.

For example, the sign of the contour integral depends on the way we travel around the poles: clockwise or counter-clockwise.


Wikipedia lists different contours: retarded, advanced, and Feynman propagators.

It is suspicious that the sign of a physical entity, the wave function, depends on an ad hoc choice of the integration contour. We need to find a more robust way to calculate the limits of the integrals close to the poles.


Finding correct phase factors for Feynman diagrams


In the Schrödinger equation, with various potentials V, the phase of a wave may change by any amount. Reflection from a hard wall shifts the phase by 180 degrees. Reflection from a soft wall may cause any phase shift.

Feynman rules determine the phase in a way which is probably too crude to be true. Multiplication by -1 or i does not cover all the possibilities.

We need to develop new rules for calculating the phase in Feynman diagrams. We can already suggest one rule:

The phase does not change over a virtual pair loop or any diagram which does not have any external input. The phase cannot change if no external factor affects the system. There is no Baron Munchausen trick.


We may think of the system as isolated if no external lines come in. In quantum mechanics we assume that the phase of a whole "system" does not change if there is no external interaction. The system may be a molecule, for example. We can perform the double slit experiment with molecules. Their phase behaves in a predictable way.

We have conservation of momentum and conservation of the speed of the center of mass. Conservation of the phase might be a similar law of nature.

Question. Can we derive conservation of the phase from other conservation laws of nature?


Why did Richard Feynman and many others think that the phase could change over a virtual pair loop? Maybe they had in mind the Dirac sea of negative energy electrons. Those electrons, if they exist, would be an external factor which would make the phase change.

The bare charge of the electron is the same as the apparent charge?

Let us assume that there really is a sign error in the traditional way that the vacuum polarization diagram is added to the plain photon propagator. Instead of weakening the electric attraction, the diagram strengthens it.


   virtual photon p
             ~~~~~~~~~~~ O ~~~~~~~~~~~
                                       virtual pair loop


Let us study the consequences.

1. Keep the cutoff very large. Then the diagram greatly enhances the electric attraction. The "bare charge" of the electron has to be minuscule in absolute value. We see that tiny charge greatly magnified when we measure the electron from a macroscopic distance, for example, in the Millikan-Fletcher experiment.

2. Put the cutoff at the natural value |p|, where p is the exchanged momentum. In this case, the natural interpretation is that the bare charge is what we measure from a macroscopic distance. At short distances, creation of virtual pairs enhances the attraction.


In Case 1, we do "renormalization": the bare charge is hidden from us. We use the charge measured from a macroscopic distance. This is an ugly complication.

In Case 2, we have to find good grounds for using |p| as the cutoff. In this blog we have studied thoroughly the destructive interference of the created virtual pairs. The interference is not trivial, because we have two particles, not just one. Case 2 is the beautiful solution. There is no need for an arbitrary cutoff or renormalization.

We can, of course, take as an axiom that there is destructive interference in this case. But it is better to prove the claim from general principles of quantum mechanics.


See the note at the start of the link. We got the correct Uehling potential with a 7% accuracy by putting the cutoff at |p|.


Is there destructive interference in virtual pairs?


We could model the two particles in the nonrelativistic case as a single particle in 6 + 1 dimensions. Then we probably get destructive interference for large 7-momenta.

What is the 7-dimensional wave function of the single particle? 

The Dirac wave function of the positron rotates in the opposite direction to the wave function of the electron. If we take the product of the two, we get a strange non-rotating wave function. Note that the classic Dirac problem of negative energy states of the electron resurfaces here. There probably is destructive interference for this strange wave, too, for large 7-momenta.

Maybe we should interpret the Dirac wave as a classical wave, and calculate the product from abstract complex waves? Then we could make both waves rotate in the same direction, and destructive interference follows.

Yet another model is that the electron wave in the loop "turns back in time". Then we have just one particle which travels back in time as a positron. In this case, there is destructive interference.

Sunday, August 29, 2021

Feynman vacuum polarization and the "complement" problem

UPDATE August 29, 2021: Our method, when putting a sharp cutoff for |k| at |p|, gives for small |p| << M the correction term

      α p² / (16 π M²),

to the photon propagator, while in the correct formula 16 is replaced with 15.
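
The two coefficients differ only by the ratio 15 / 16: the sharp-cutoff result is about 6 percent too small. A trivial Python check with an arbitrary sample value |p| = M / 10:

import math

# Compare the sharp-cutoff coefficient 1/16 with the standard 1/15
# for an arbitrary sample momentum |p| = M / 10.
alpha = 1.0 / 137.036
ratio = 0.1                                    # |p| / M
ours = alpha * ratio**2 / (16.0 * math.pi)
standard = alpha * ratio**2 / (15.0 * math.pi)
print(ours / standard)                         # 0.9375: about 6 % too small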


See Formulae (12.446) and (12.465) in the book by Hagen Kleinert (2016) (there q takes the role of our p).

If we put a smooth cutoff, we might be able to get the exact right result. The calculation shows that we get a fine estimate for the Uehling potential with our method.

We need to study large |p| next.

----

Our previous blog post was left at an intriguing problem.


        e- ----------------------------------------
                                |
                               O  vacuum polarization
                                |   virtual pair, 4-momenta
                                |   k, p - k
                                |
                                |  virtual photon, spatial
                                |  momentum p
        Z+ ----------------------------------------


There probably is a sign error in the Feynman vacuum polarization integral.

If the photon scattered from a real pair, there might be a 180 degree phase shift, but we do not agree that a photon can change its phase by scattering from "empty space", or from a virtual pair which is created by the photon itself. That would be a Baron Munchausen trick. Literature skips over the discussion of how the phase is determined and immediately assumes that the vacuum polarization diagram should be subtracted from the plain photon propagator, not added to it.

We believe that large-4-momentum virtual pairs are wiped out by destructive interference. The natural cutoff is roughly  |p|, which is the momentum exchange of the electron and the proton. For larger 4-momenta, there is almost total destructive interference.

Or, we can set a smooth cutoff function

       |p| / |k|,

where k is the 4-momentum circling the virtual pair loop.

But why does the Feynman integral seem to calculate the "complement" of the effect of virtual pairs with small 4-momentum?

We want to calculate the value of the Feynman integral

       F(p²)

of the process if the 4-momentum in the integration is restricted to be smaller than a certain cutoff Λ.

Case A. Let us first set the cutoff Λ to a very large value.

The difference

       F(0) - F(p²) = D

is the correction that we have to add to the plain Coulomb scattering probability amplitude if we let the exchanged momentum be p. This is according to the usual Feynman calculus.

Case B. Let us set the cutoff Λ to |p|. We are interested in the value

       F(p²) = D'.

We have in this case fixed the Feynman integral sign error. We add D' to the plain Coulomb scattering probability amplitude to get the corrected amplitude. This is according to our new calculus.


If D = D', then the "complement" problem is solved. Both methods give the same numerical correction.

The value of D' corresponds to the effect which is still left; D corresponds to the effect which fell out. It would not be a big surprise if D = D'.

Calculations with Feynman integrals are cumbersome. We need to study detailed calculations to get numerical values out of our new cutoff procedure. Our new cutoff could be called "regularization", but the big difference is that our cutoff has a physical motivation in destructive interference, while regularization methods are ad hoc.

Friday, August 27, 2021

Vacuum polarization makes a strong electric field stronger through a dipole?

Vacuum polarization in the Dirac sea comes from the fact that a charge deforms the paths of negative energy electrons in the sea.

For example, the density of negative energy electrons is larger close to a proton. The density of negative energy electrons is imagined to be lower somewhere far away, perhaps at infinity.

We do not like the idea of the vacuum being full of particles like negative energy electrons. We would like the vacuum to be totally empty. How could we simulate Dirac's idea without assuming a Dirac sea?

Dirac has to put a cutoff on the momenta |p| of negative energy electrons in, say, one cubic meter of empty space. Then the number of negative energy electrons in a stationary state in that cubic meter is finite, and we can calculate a finite amount of polarization caused, for example, by a proton.

The cutoff is an ugly feature. The second ugly problem is that negative energy electrons have to be able to store momentum to be able to interact with real particles. What guarantees that empty space does not soak up a macroscopic amount of momentum?

Feynman diagrams solve the second ugly problem by requiring that energy and momentum are always conserved and exit the process in real particles.

Though, as we remarked in an earlier blog post, Feynman diagrams do not conserve the speed of the center of mass. As if empty space could contain mass-energy which our process can displace. Clearly, we must add conservation of the speed of the center of mass to the rules which a Feynman diagram must obey.

The cutoff problem exists in Feynman diagrams, just as in Dirac sea vacuum polarization. 

Uehling and Serber in 1935 calculated vacuum polarization from the Dirac model, and got the correct result. The Lamb shift has been measured very precisely, and it must include the effect from the Uehling potential.


The Feynman diagram


In modern literature, the Uehling potential is calculated from the vacuum polarization Feynman diagram. The result is the same as Uehling got from Dirac's model. Clearly, the Feynman diagram has to calculate the same thing as Dirac's model, though it is not yet clear to us how it does that trick.


        e- -------------------------------------
                              |
                              |
                             O  vacuum polarization loop
                              |
                              | virtual photon p
        Z+ -------------------------------------


The Feynman diagram at first sight does not assume that the vacuum is full of particles. No such particles are drawn into the diagram. But a vacuum full of particles creeps in when we set a cutoff on the virtual pair 4-momenta in the vacuum polarization loop.

Maybe the solution is to set the cutoff at the nucleus energy?


Dirac, in the papers which can be found in the book in the link, suggested that the cutoff should be something like 1 GeV. That is roughly the mass of the proton.

We have in this blog suggested that destructive interference wipes off virtual pairs whose 4-momentum is large. In wave phenomena, long waves typically cannot create shorter waves. The nucleus in the diagram has lots of energy. Its wavelength in the time direction is very short. It could create wave phenomena whose 4-momentum is large.


                                   virtual pair loop
large virtual photon
                           ~~~ O ~~~
                         /                     \
        Z+ ------------------------------------------


In the Feynman diagram above, the nucleus creates a large virtual photon, which in turn can create a virtual pair with large 4-momentum. The electron might interact with the pair. A problem is that the diagram will have more vertices than the simpler vacuum polarization diagram, and consequently, its amplitude is smaller in the Feynman calculus.

This idea does not work, though. If the cutoff depended on the mass of the nucleus, then the charge visible to a far-away observer would depend on that mass.


A possible solution: vacuum polarization is temporary pairs which "conduct" electric field lines better?


The Uehling potential shows that the proton attracts the electron stronger than we would expect if the electron comes within the reduced Compton wavelength distance 4 * 10⁻¹³ m from the proton.

How to make electric attraction stronger? Create a temporary electric dipole between the electron and the proton. The dipole will make the attraction stronger. It "conducts" electric field lines. The energy for the creation of the dipole would come from the approaching electron.

In this blog we have thus far used electric polarization of a solid as the model of polarization. If polarization grows superlinearly with the field strength, then the attraction between the electron and the proton will appear stronger.

Since a very strong (changing?) electric field can create a real pair, we suspect that polarization really is superlinear.

But how to reconcile this very different model with the idea of the Dirac sea?


The Uehling "force" is mediated by an electron-positron pair?


The Uehling potential becomes important at the distance of the electron reduced Compton wavelength, and it drops off exponentially at larger distances.

This immediately brings to mind that the Uehling potential is an attractive force caused by a dipole virtual electron-positron pair. The mediator "particle" of the Uehling force is a pair.


     proton
           ●        -    +       •  e-
                     dipole


A dipole is born between the proton and the electron which is passing it. The pair makes the attraction between the proton and the electron stronger. A dipole "conducts" the electric line of force and makes attraction stronger.

The energy-time uncertainty principle tells us that the pair, whose energy as a real system would be 1.022 MeV, can reach at most a distance of 1/2 times the reduced Compton wavelength of the electron.
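
A rough numeric check of that reach, using the estimate Δt ≈ ħ / E with E = 1.022 MeV (the order-one factors are not meant to be exact):

# Reach of a 1.022 MeV virtual pair, estimated as c Δt ≈ ħ c / E.
hbar_c = 197.327e-15          # ħ c in MeV * m
E_pair = 1.022                # MeV, rest energy of an electron-positron pair
reach = hbar_c / E_pair       # about 1.9 * 10⁻¹³ m
lambda_bar = 386.16e-15       # reduced Compton wavelength of the electron, m
print(reach, reach / lambda_bar)   # the reach is about 1/2 of lambda_bar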

How do we calculate the probability of a dipole forming?

When the electron passes the proton, it produces a sharp pulse in the electric field of the electron. The pulse contains frequencies which are high enough to create a 1.022 MeV pair, but the total energy of the pulse is too small. A real pair cannot be created, but maybe we can create a very short-lived virtual pair?


There is a sign error in the Feynman diagram and it calculates the "complement" of the correct effect?


The following could explain things.

The Feynman vacuum polarization diagram erroneously reduces the calculated cross section. It should increase it.


We wrote on December 25, 2020 that a particle cannot change its phase on its own. The photon (momentum p) in the diagram should keep its phase, not change it by 180 degrees.

The Feynman formula, when we increase the exchanged spatial momentum |p| between the proton and the electron (make the electron go closer), leaves out the effect of virtual pairs with low 4-momentum. Think of Dirac's model. Negative energy electrons with low momentum have the polarization spread over a large spatial area. Decreasing the distance between the electron and the proton leaves out their effect.

But in the "Uehling force" model above, low 4-momentum is necessary for the pair, because otherwise destructive interference wipes it out. In that model, it is the low-4-momentum pairs which create the attractive force.

Thus, when we increase |p|, the Feynman integral (with its negative sign, or 180 degree phase change) leaves out the probability amplitude contribution, say, P, of low-4-momentum pairs.


      e- ---------------------------------------------
                                 |  virtual
                                 |  photon
                                 |  p
     Z+ ---------------------------------------------


Above we have the diagram of plain Coulomb scattering.

Since P in the Feynman calculus has a 180 degree phase shift relative to the plain Coulomb scattering probability amplitude, adding P to it would reduce the cross section. Leaving P out increases the cross section: the Coulomb force looks a little bit stronger, which is the right end result.

We suggest that in the correct calculation, P should not have a 180 degree phase shift, and it should be added to the plain Coulomb scattering amplitude. We need to study this in detail.

P is kind of a "complement" of the Feynman integral.

If this is true, empty space is truly empty. It is the proton and the electron themselves who create virtual pairs, which make the electric line of force stronger.

Open problems: why does Dirac's model calculate the Uehling potential right, even though in his model it is the bending paths of negative energy electrons which cause the polarization? It is not virtual pair dipoles like in our model.

Why does the Feynman integral formula calculate the same thing as Dirac's model? In Dirac's model the negative energy electron enters the process as an independent particle. In the Feynman diagram, it is one half of a virtual pair.

Coulomb focusing for the 2s orbital of hydrogen: it cannot explain vacuum polarization

We claimed in an earlier blog post that the electron on the 2s orbital moves radially toward the proton or away from it. Then Coulomb focusing would have no effect.


But when we look at the radial probability of the electron on the 2s orbital, we see that the probability very close to the proton is much less than it would be if the electron moved exactly radially.

From the diagram at the link we see that the electron on the 2s orbital "typically" swings to a distance of 6 Bohr radii, and then dives back toward the proton again.

Let us calculate what the total probability would be for a sphere whose radius is the reduced Compton wavelength

        r = 4 * 10⁻¹³ m = 1 / 137 Bohr radii,

if a classical electron went almost directly toward the proton.

The electron moves slowly at 6 Bohr radii, and spends most of its time there. The potential at 6 radii is -4.5 eV, and the kinetic energy there is

       4.5 eV - 13.6 eV / 4 = 1.1 eV.

The kinetic energy at r is

       511 keV / 137 = 4 keV.

We see that the classical electron moves 60 times faster at r than at its typical distance.

The probability of the classical electron being in the r-sphere is

       P' = 1 / (137 * 6 * 60) = 1 / 50,000.

However we calculated in the previous blog post that the quantum mechanical probability is

       P = 1 / 1,500,000.

The quantum mechanical probability is 1 / 30 of the classical one. We may interpret this as the quantum electron missing the r-sphere in most cases, and instead going through a sphere whose radius is

        R = 4 * 10⁻¹² m = 10 r.

The speed of an electron in that sphere is just 20 times faster than at the typical distance.
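
A short Python sketch reproducing the rough classical estimate above; the inputs are the same crude round numbers as in the text, so the outputs only match to that level of accuracy:

import math

# Rough reproduction of the classical 2s estimate above.
E_2s = -13.6 / 4                     # eV, total energy of the 2s state
ke_far = E_2s + 27.2 / 6             # eV, kinetic energy at 6 Bohr radii, about 1.1 eV
ke_r = 511e3 / 137                   # eV, kinetic energy at r = a0 / 137, about 4 keV
ke_R = 511e3 / (10 * 137)            # eV, kinetic energy at R = 10 r, about 400 eV

speed_ratio_r = math.sqrt(ke_r / ke_far)       # about 60
speed_ratio_R = math.sqrt(ke_R / ke_far)       # about 20

P_classical = 1.0 / (137 * 6 * speed_ratio_r)  # about 1 / 50,000
P_quantum = 1.0 / 1.5e6                        # from the previous blog post
print(speed_ratio_r, speed_ratio_R)
print(P_classical, P_classical / P_quantum)    # the classical estimate is about 30 x larger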

In quantum mechanics, everything is diffuse about the location of particles. It is not surprising that the electron does not swing exactly through the nucleus.

The far electric field of the electron lags behind and does not have time to take part in the sudden movement of the electron in the R-sphere. The effective mass of the electron is slightly reduced, and the electron will pass the proton slightly closer. It is as if the potential felt by the electron were slightly lower. This is a kind of extra Coulomb focusing, which is not taken into account in the Schrödinger equation and its solution.

Let us calculate the effect of the mass reduction, on an electron which flies past the proton at the distance R.


                 v
        e-  ---------->   
                                  R = impact parameter
                 
                             ● Z+ 


The kinetic energy of the electron is

       511 keV / (10 * 137) = 400 eV.

The speed of the electron is

       v = 0.04 c = 1.2 * 10⁷ m / s.

The fly-by lasts

       t = 2 * R / v = 7 * 10⁻¹⁹ s.

The force is

       F = k e² / R² = 2 * 10⁻⁵ N,

and the acceleration
 
       a = F / m_e = 2 * 10²⁵ m / s².

The acceleration makes the electron come closer to the proton by the distance:

       s = 1/2 a (t / 2)² = 10⁻¹² m.

We see that the electron turns substantially, by tens of degrees, when it is close to the proton.

The field which is farther than

       D = c t / 2 = 10⁻¹⁰ m

does not have time to react. The effective mass of the electron is

      m_e' = m_e (1 - 1 / 70,000).

Since the mass is reduced, the electron will go

       s / 70,000

closer than it would otherwise. Thus, the effective potential is reduced by

       400 eV / 280,000 = 0.0014 eV.

The quantum mechanical probability of finding the electron in the R-sphere is

       1,000

times the probability of finding it in the r-sphere, or

       1 / 1,500.

We get an effective potential reduction of

       0.0014 eV / 1,500 = 0.9 μeV.

The reduction is not of the same order of magnitude as the Uehling potential reduction 0.1 μeV.
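
The chain of estimates can be reproduced in a few lines. The step from the mass reduction to the potential reduction follows the same heuristic as in the text (the electron comes s / 70,000 closer against the local force F), so this is only an order-of-magnitude sketch, and it comes out slightly below the rounded values quoted above:

import math

# Order-of-magnitude reproduction of the fly-by estimate above.
c = 3.0e8                        # m/s
m_e = 9.109e-31                  # kg
k_e2 = 2.307e-28                 # Coulomb constant times e², N m²
r_e = 2.818e-15                  # classical electron radius, m
R = 4.0e-12                      # impact parameter, m

ke = 511e3 / (10 * 137)          # eV, about 400 eV
v = c * math.sqrt(2 * ke / 511e3)       # about 1.2 * 10⁷ m/s
t = 2 * R / v                           # about 7 * 10⁻¹⁹ s
F = k_e2 / R**2                         # about 2 * 10⁻⁵ N
a = F / m_e                             # about 2 * 10²⁵ m/s²
s = 0.5 * a * (t / 2)**2                # about 10⁻¹² m
D = c * t / 2                           # about 10⁻¹⁰ m

mass_fraction = r_e / (2 * D)              # field energy beyond D, about 1 / 70,000
dV = (F * s / 1.602e-19) * mass_fraction   # eV, about 0.001 eV
dE = dV / 1500                             # eV, weighted by the R-sphere probability
print(1 / mass_fraction, dV, dE * 1e6)     # dE comes out near 0.8 microelectronvolts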

The Coulomb focusing effect certainly exists in a classical model. Does it show up in a quantum model?


Dirac's sea of negative energy electrons


Mass reduction and Coulomb focusing cannot explain vacuum polarization effects.

Paul Dirac in his 1934 paper sketched a gas of negative energy electrons whose average location is slightly displaced by the electric field of the proton. This displacement is the vacuum polarization. E. A. Uehling and Robert Serber in 1935 calculated the correct, empirically tested, potential which is caused by vacuum polarization.

Let us have a cubic meter of empty space. If we put a ceiling on the momentum |p| that a negative energy electron can possess, we can fit just a finite number of those electrons in the cube. This is because the Pauli exclusion principle prohibits two electrons from having the exact same momentum p and spin. It is like the Fermi sea of electrons in a metal.

Vacuum polarization in Dirac's model is a collective phenomenon. In a Feynman diagram, vacuum polarization is a dynamic process with just a few particles present. One can derive the Uehling potential formula from a Feynman diagram (but one must use regularization).

Question. Why is the Feynman model equivalent to the Dirac sea model?

Thursday, August 26, 2021

Calculation of the Uehling potential for hydrogen

Let us calculate numeric values for the Uehling potential in hydrogen, so that we get a grasp of the magnitude of the effect.


Alexei M. Frolov and David M. Wardlaw (2012) found an analytic formula for the Uehling potential. They use Hartree atomic units.


Frolov and Wardlaw mention that the exponential decrease for the Uehling potential for large r is wrong when corrections to the electric field are taken into account.


The square of the 2s wave function very close to the nucleus is 1 / 2, if we use the Bohr radius as the unit of length.

Let us calculate the Uehling potential for a radius of the reduced Compton wavelength

       r = 4 * 10⁻¹³ m

of the electron.

The formula (1) in the Frolov and Wardlaw link is easy to integrate in your head. The integral for r is very roughly

       1 / (5 * 137) * 1 / 40 = 1 / 30,000

The Coulomb potential at r is

       V = -511 keV / 137 = -4 keV.

The Uehling correction to the potential is

       ΔV = -4 keV / 30,000 = -0.13 eV.

The volume of a sphere with the radius r is

       Vol = 1.6 * 10⁻⁶,

where the unit is the Bohr radius cubed.

The probability of the 2s electron being in that sphere is

       P = 1 / 2 * Vol = 0.8 * 10⁻⁶.

The fall in the energy level of 2s is

       ΔE = P ΔV
             ≈ 0.1 μeV.

In literature, the effect of vacuum polarization is given as 27 MHz or 0.1 μeV.
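
The same rough chain in a few lines of Python. The value 1 / 2 for the squared 2s wave function near the nucleus and the crude integral value 1 / 30,000 are taken from the text as given:

import math

# Rough reproduction of the 2s Uehling estimate above (lengths in Bohr radii).
uehling_integral = 1.0 / 30000.0
V_r = 511e3 / 137                      # eV, magnitude of the Coulomb potential at r
dV = -V_r * uehling_integral           # eV, about -0.13 eV

vol = 4.0 / 3.0 * math.pi * (1.0 / 137)**3   # about 1.6 * 10⁻⁶ Bohr radii cubed
psi_squared = 0.5                            # squared 2s wave function near the nucleus (from the text)
P = psi_squared * vol                        # about 0.8 * 10⁻⁶

dE = P * dV
print(dV, vol, P, abs(dE) * 1e6)       # the fall in energy is roughly 0.1 microelectronvolts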

The contribution from a radius ~ r / 10 is roughly -0.01 μeV. The contribution from r / 100 is very small.

We conclude that the Uehling potential lowers the energy of 2s, and the main contribution comes from around a radius which is the reduced Compton wavelength.

Monday, August 23, 2021

Is "vacuum polarization" really Coulomb focusing which is caused by the outer electron static field lagging behind?

On March 13, 2021 we showed that we can explain the Lamb shift by the fact that the electron static electric field "lags behind" in sudden movements, and the effective mass of the electron is reduced a little bit, causing its wave function to have a somewhat longer wavelength close to the proton.


The effect is to raise the energy of the 2s orbital 1,000 MHz relative to the 2p orbital.

But we did not calculate the effect of "Coulomb focusing" which is caused by the reduced mass of the electron. The reduced mass makes the electron pass the proton a little bit closer, which lowers the energy level of the 2s orbital.


E. A. Uehling in 1935 wrote that vacuum polarization is expected to lower the energy level of 2s. The effect is roughly -2.7% on the Lamb shift.

Classically, Coulomb focusing certainly happens because of the reduced mass of the electron. We need to calculate how large the effect is, and if it can explain the Uehling potential in the hydrogen atom.


Elliptic orbits do not close => the potential appears to be not 1 / r


If we reduce the mass of the electron when it comes close (~ 10⁻¹² m) to the proton, then the electron will come somewhat closer to the proton than it would otherwise do. The path of the electron spirals relative to the laboratory frame.

A spiral in the orbit also happens if the potential is not exactly 1 / r. Thus, the lagging behind of the far field of the electron will make the 1 / r potential appear somewhat steeper.


Calculation of the Uehling potential


We have to find a way to calculate the effect. Is it equal to the Uehling potential?

The Uehling potential decreases exponentially if we increase r. Coulomb focusing does not have an exponential law. It looks like Coulomb focusing cannot explain vacuum polarization.

The 2s orbital is radially symmetric. The electron seems to travel only radially toward the proton, and away from it. Coulomb focusing cannot have any effect on the energy of the 2s orbital.

For 2p, there should be Coulomb focusing. The Sommerfeld orbit is an ellipse whose major axis is 2 times the minor axis. We are not sure what the effect of focusing is, because vibrations in the electric field cannot escape (the electron in the 2p orbit does not radiate), and consequently, the behavior of the system is not classical.


Paul Dirac and Werner Heisenberg about vacuum polarization


We have failed to find a classical model for vacuum polarization using the framework of Feynman diagrams.

We will next look at the work of Dirac and Heisenberg in 1934. What was their view of vacuum polarization?


Dirac's paper is from April 1934.


In the link is an English translation of Heisenberg's paper Bemerkungen zur Diracschen Theorie des Positrons, Zeit. Phys. 90 (1934).

Sunday, August 22, 2021

It is an error to sum probability amplitudes from Feynman diagrams which produce a different number of particles

In the Matthew Schwartz calculation of the vertex function we encounter two "corrections" to elastic Coulomb scattering.


One of them is the claimed correction due to soft photons in bremsstrahlung.


         e- -----------------------------------------
                                  |  virtual
                                  |  photon
                                  |  p
        Z+ -----------------------------------------


Above we have elastic Coulomb scattering.
         
       
                                             soft real photon
                                             ~~~~~~~
                                           /
         e- ------------------------------------------
                                  |  virtual
                                  |  photon
                                  |  p
        Z+ ------------------------------------------


Above we have a diagram of the electron sending a photon of extremely low energy.

There cannot be any interference between the e- and Z+ in the upper diagram versus the e- and Z+ in the lower diagram. We can, in principle, observe the soft photon: that fact destroys any interference.


How to sum Feynman diagrams which produce a different number of particles?


How can we sum Feynman diagrams which produce different particles? We cannot.

Literature is silent about this problem. How do we combine elastic Coulomb scattering with bremsstrahlung of soft photons? How to calculate probabilities?

We have presented the hypothesis that elastic Coulomb scattering really does not exist. The electron always sends soft photons. However, the photons in most cases are infinitesimal and their effect is negligible.

We want to model the fly-by of an electron. How to do that?

Suggestion. If we only measure large photons of bremsstrahlung, then the emission of a photon is quite rare. Its probability might be P = 0.001. Such a photon requires that the momentum transfer |p| is relatively large. We should subtract the probability P from the probability of elastic scattering with a high momentum transfer |p|. We ignore soft photons and assume that the rest of fly-bys are elastic.


Classically the effect of soft photons is negligible


We need to find out why literature claims that soft photons could have a significant effect on the trajectory of the electron. Classically, the energy in low-frequency bremsstrahlung is very small. Classically it cannot have a significant effect on the electron.

We calculated that classical bremsstrahlung is only 0.05 eV for a mildly relativistic electron which passes a proton at the distance 2.4 * 10⁻¹² m. The photon spectrum of classical bremsstrahlung is from 0 to roughly 250 keV, and the power density is approximately constant for all frequencies f in the spectrum. The total energy of soft photons of energy, say, < 1 eV, is extremely small in the classical process. Soft photons have a negligible effect on the classical process. Why would they have a large effect on the quantum process?

Saturday, August 21, 2021

Bremsstrahlung always contains an infinite number of soft photons but that does not matter

In the previous blog post we presented a conjecture that even a single electron always emits an infinite number of soft photons when it passes a nucleus. Let us analyze this in more detail.


          superheavy electron
         e-  -------------------------->


                          ● Z+
                          macroscopic charge


Let us make the electron very heavy, so that it becomes an almost classical particle. We replace the nucleus Z+ with a macroscopic heavy body with a macroscopic charge.

Then it is obvious that the superheavy electron draws a slowly bending classical path. It receives momentum from the macroscopic charge in many small packets as it draws its path.

The Feynman diagram for the process does not depend on the masses or the charge of the nucleus. The diagram shows the nucleus sending just one packet of momentum to the electron.


Overlapping probabilities


         sectors
                   \   |   /
                     \ | /
                       ●    Z+


Let us divide the space around Z+ into many sectors. There might be a million such sectors. The electron goes through each sector as it passes the nucleus.

Let us use a separate Feynman diagram to analyze the journey through each sector.

In each sector, there is a tiny probability P that the electron emits a soft photon of a certain energy range ΔE.

If we sum all the probabilities for a very wide range of energies for every sector, we will get a sum which is much larger than 1. Is this an infrared divergence?

No. We forgot that if we sum probabilities, the cases have to be separate. If we want to know the probability of emitting exactly one soft photon, we must demand that one sector emits a single photon and other sectors emit zero photons.

Feynman integral formulas seem to lack the sophistication to ban emission of other photons besides the one which is emitted in the diagram. Feynman integrals then sum overlapping probabilities. That is the reason for the infrared divergence. The integral seems to claim that the probability of emitting a single soft photon is infinite.
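
The overlap can be illustrated with elementary probability. Assume, hypothetically, that each of N sectors independently emits a soft photon of the given energy range with probability p:

# N independent sectors, each emitting a soft photon with probability p.
N, p = 1_000_000, 3e-6

naive_sum = N * p                            # 3.0: not a probability at all
P_exactly_one = N * p * (1 - p)**(N - 1)     # about 0.15
P_at_least_one = 1 - (1 - p)**N              # about 0.95
print(naive_sum, P_exactly_one, P_at_least_one)

Summing the per-sector probabilities gives 3, while the probability of seeing exactly one photon is only about 0.15: the naive sum counts overlapping cases.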


Regularization of the infrared divergence


Classically, the electron emits a wave which is like a rounded Dirac delta distribution. There is very little total energy in soft photons. There may be a few soft photons which might be observable, but the rest of the soft photons contain infinitesimal total energy. (See the boldface "Classical number of photons" in the previous blog post.)

We can simply ignore photons whose energy is below some tiny threshold. Putting a cutoff or assigning a tiny "mass" to a photon is acceptable.

If we are interested in a cross section of some event whose probability is much less than 1, then the overlap of probabilities has a very small effect and we can ignore the overlap.

Regularization does not remove the overlap. Regularization only sweeps under the rug the ugly fact that some calculated probabilities would be > 1 because of the flawed method of summing probabilities.

Friday, August 20, 2021

Matthew Schwartz and the divergences in the vertex function and bremsstrahlung

In literature, the calculation of the vertex function requires the removal of both an ultraviolet divergence and an infrared divergence.


Matthew Schwartz (2012) in the link writes about how to remove the infrared divergence.

Our semiclassical "rubber plate" model of the static electric field of the electron explains the origin of both divergences, and why they can be removed.


      e- ------->
                                momentum exchange p
           
                           ● Z+


Let us have an electron passing close to a nucleus Z+. Let us assume that the collision is elastic, that is, no photons are radiated out in the process.

The particles exchange some amount of spatial momentum p.

Assume that the particles exchange the momentum p instantaneously. The electron receives a Dirac delta impulse. The classical spectrum of vibrations of the rubber plate, or the static electric field, around the electron would contain a significant amount of waves from arbitrarily low frequencies to arbitrarily high frequencies. 

This reflects the fact that the (generalized) Fourier transform of the Dirac delta distribution δ is a constant function:

       δ-hat(ξ) = 1.

The response of the rubber plate to a pointlike infinitely short impulse (= hitting the rubber plate with a sharp hammer) would contain a very wide spectrum of frequencies.
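
A quick numeric illustration of the flat spectrum, with a one-sample discrete impulse standing in for the Dirac delta:

import numpy as np

# A single-sample impulse has a flat magnitude spectrum:
# every frequency is present with the same weight.
signal = np.zeros(1024)
signal[0] = 1.0
spectrum = np.abs(np.fft.rfft(signal))
print(spectrum.min(), spectrum.max())    # both are exactly 1.0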


                          virtual photon q
                           ~~~~~~~~
                         /                     \
         e- --------------------------------------------
                                 | virtual
                                 | photon
                                 | p
        Z+ --------------------------------------------


Above we have the Feynman diagram of the vertex function. The electron makes a sharp turn when it receives the momentum p from the nucleus Z+. The electron sends a virtual photon q to itself. In a classical model, the photon q is a "summary" of the complex interaction which the electron has with its static electric field.


Removing the ultraviolet divergence


In a classical model we can remove extremely high frequencies from our calculations - that is, remove the ultraviolet divergence.

The electron does not really receive a Dirac delta impulse from the nucleus. Rather, the electron receives a smooth impulse spread over a considerable time which it spends close to the nucleus.

This may be the reason why we can regularize away the infinity in the Feynman integral formula.

The Feynman model assumes that the electron makes a Dirac delta impulse to a massless Klein-Gordon field and produces a Green's function response in the field. The electron later may absorb a component of the Green's function, that is, the photon q above. We sum all such absorption histories to form a picture of the whole process.


The electric form factor depends on the Planck constant h


Do Feynman formulas assume anything about the size of h? The rules and calculations in literature use natural units where h and c are set equal to 1. It is hard to follow where h would appear in the formulas. We know that h does not appear in the differential cross section of elastic Coulomb scattering, but h does appear in the cross section for bremsstrahlung.


In the link above, in formulas (88) and (93), the electric form factor depends on the value of the fine structure constant α:

       F₁(q²) = 1 - α / (2 π)
                           * (1/2 log² (-q² / m²) + divergent part + constant).

Since α is inversely proportional to h, the magnitude of the Planck constant strongly affects the electric vertex function.


When we used our rubber plate model on March 13, 2021 to calculate the Lamb shift, we put a "cutoff" at the zitterbewegung radius. That radius is

       λ / (2 π)

where

       λ = h / (m c) = 2.4 * 10⁻¹² m

is the Compton wavelength of the electron. It is not clear if the cutoff should be dropped down to the classical radius of the electron.


There has to be a "vertex function" in the classical limit, and it cannot depend on the Planck constant


The bremsstrahlung formula from a Feynman diagram depends strongly on the value of h. We explained that by the fact that the radiation is born in a very small volume of spacetime, and whether waves can escape from such a small volume to the environment depends on the wavelength, that is, on h.

However, a radio transmitter produces electromagnetic waves through a similar mechanism as bremsstrahlung, but the output is classical: it does not depend on h.

In the vertex function, waves do not need to escape to the environment. Should the vertex function depend on h if the momentum exchange is small?

There almost certainly exists a classical "vertex function". If we take a macroscopic amount of charge in our hand and swing our hand, there is some interaction between the charge and its far electric field. The interaction does not depend on the value of h. Why would this be different for a single electron in our hand? Why would its interaction depend on h?

We are not certain if the dependency of the electric form factor on the Planck constant is correct in literature.


Removing the infrared divergence


Consider classical bremsstrahlung. Let the electron pass the nucleus quite far away. The classical radiation in the process is very small, and its energy is roughly evenly divided on all frequencies from zero to about f = 1 / Δt, where Δt is the time the electron is "close" to the nucleus.

The classical wave contains an infinite number of soft photons. We see it this way:

Classical number of photons. The classical wave energy in the frequency range f / 2 ... f suffices for a small fraction of a photon, say, 0.001 photons. Similarly, the energy in the range f / 4 ... f / 2 suffices for another 0.001 photons. We can continue this reasoning and get an infinite number of soft photons. However, only a few of the photons are observable. The total energy of the rest of the photons is infinitesimal.
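
A small sketch of this counting, assuming a flat classical energy spectrum up to some maximum frequency f_max (all of the numbers below are purely illustrative; they are chosen so that each octave holds about 0.001 photons, as in the example above):

import numpy as np

# Flat classical spectrum: energy S per unit frequency, up to f_max.
# A photon in the octave [f/2, f] costs roughly h * f of energy.
h = 6.626e-34                        # J s
S = 1.33e-36                         # J / Hz, illustrative spectral density
f_max = 1.0e18                       # Hz, illustrative upper frequency

f = f_max / 2.0**np.arange(40)       # upper edges of 40 successive octaves
energy_per_octave = S * f / 2.0      # shrinks as f decreases
photons_per_octave = energy_per_octave / (h * f)      # constant: S / (2 h), about 0.001
print(photons_per_octave[0], photons_per_octave[-1])  # the same in every octave
print(energy_per_octave.sum())       # the total energy stays finite, about S * f_max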


In the Feynman formula for bremsstrahlung there is an infrared divergence. The formula thinks that just 1 photon is emitted at a time. Each emitted photon is a separate case. The classical model removes this divergence: an infinite number of photons are emitted at a time. The cases which the Feynman formula thinks are separate, actually are overlapping.


Literature uses an erroneous trick to remove the infrared divergences in the vertex function and bremsstrahlung


Both in the Schwartz link and the other link it is claimed that infrared divergences "cancel" each other in the vertex function and bremsstrahlung. 

The idea is that the vertex correction for a differential cross section σ of Coulomb scattering is something like

       σ * (1 - 1 / ε + a)

and the correction from very low-energy photons (which we cannot observe) is

       σ * (1 + 1 / ε + b).

There ε is a positive real number which goes to zero.

In the sum of corrections, the divergent term 1 / ε does not appear.

If we make ε small, then the first correction claims that the cross section is negative. That does not make sense. The trick does not solve the mathematical problem.
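
A one-line numeric version of the complaint, with placeholder constants a and b:

# Vertex and soft-photon "corrections" to a cross section sigma, as sketched above.
sigma, a, b, eps = 1.0, 0.1, 0.2, 1e-3
vertex = sigma * (1 - 1 / eps + a)      # about -999: a negative "cross section"
soft = sigma * (1 + 1 / eps + b)        # about +1001
print(vertex, soft, vertex + soft)      # the sum, 2.3, is finite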

Could it be that the "canceling" really is destructive interference? No. In quantum mechanics, interference only happens if we cannot distinguish between two histories. But even an infinitesimal photon is, in principle, observable. There is no interference between elastic Coulomb scattering histories and bremsstrahlung histories.


Bremsstrahlung contains an infinite number of soft photons for a single electron?


Suppose that we hold a macroscopic amount of charge in our hand and swing our hand. The classical electromagnetic wave then contains an infinite number of photons, like we argued above.

But Feynman diagrams, after regularization, for an individual electron claim that sending a single photon is quite rare, and sending two photons very rare.

How can we reconcile these two views?

We believe that the classical picture is correct. Even an individual electron always emits an infinite number of soft photons. This means that elastic Coulomb scattering never happens. Large photons are rare because a large momentum transfer requires the electron to come very close to the nucleus, and also because the large value of the Planck constant reduces the number of large real photons which can escape from there.


Conclusions


We are not sure if the electric form factor F₁ really should depend on the Planck constant for low-momentum transfers. We need to study this more.

The Lamb shift depends on h. In our semiclassical rubber plate model that is because the zitterbewegung radius depends on h.

Classical models explain why divergences can be removed in the electric form factor. But are these the real reason why regularization works in the QED case?

Literature uses a trick to "cancel" the infrared divergence in the electric form factor. The trick does not make sense mathematically.

When looking at the Lamb shift in literature we noticed that vacuum polarization reduces the shift by some 2.5%. Thus, vacuum polarization has to be a real phenomenon. We will study this in detail.

Wednesday, August 18, 2021

Quantum gravity: how to calculate the path integral of different geometries?

One of the perennial problems in quantum gravity is how to cope with the different spacetime geometries which can result from a physical process.

In ordinary quantum mechanics we work in euclidean spacetime or in a Minkowski space. But in quantum gravity, the physical process may result in a spacetime geometry which changes, and significantly differs, from these familiar geometries. A black hole may form, for instance.

A classical analogue is a rubber membrane. If the membrane is approximately planar, then we can define the interference pattern of waves in the membrane in a reasonable way. Simply sum the waves.

But suppose that the geometry of the membrane can change during a physical process. The plane may become a torus, for example. How can we calculate an interference pattern for fundamentally different geometries? How to sum a wave within a plane with a wave within a torus?


A possible solution



The Afshar experiment shows that different paths in a path integral can interact with each other in the course of the experiment. We must be able to calculate intermediate interference patterns to determine the fate of the experiment.

However, radically different spacetime geometries correspond to macroscopically different distributions of mass-energy in spacetime. The interference between macroscopically differing branches of history is negligible.

A tentative solution to the problem: we can (and should) sum waves for branches of the path integral if the branches only differ microscopically. This is how we do everyday quantum mechanics. Simply assume that the change in the geometry of spacetime is negligible.

On the other hand, if the branches differ macroscopically, then we ignore interference effects between them. This is how we do classical mechanics.

The solution is not beautiful. We establish an ad hoc border between the classical world and the quantum world.

Sunday, August 15, 2021

Aharonov, Rohrlich, and the energy-time-position uncertainty principle

Yakir Aharonov and Daniel Rohrlich argue in their book Quantum Paradoxes:


that the energy-time uncertainty principle does not hold in all cases.

Aharonov and Rohrlich present their argument in chapters 7 and 8. They assume a time-dependent coupling constant g(t), which differs from zero between times t = 0 and t = T, and couples the system under measurement and the measuring device.

If one can measure something with an arbitrary precision, and have T arbitrarily small, then they say the measurement can be done impulsively.


An uncertainty principle for preparation of a wave packet


A measurement is related to preparing a quantum system to a certain state.

How quickly can we prepare a photon whose energy E is known with great precision?

We need to create a wave packet whose spectrum in the Fourier decomposition is narrow and close to E. Such a wave packet necessarily is very long. How can we create such a wave packet?

If a hydrogen atom decays to a lower energy state, it sends a photon from a very small spatial volume, compared to the wavelength of the photon (λ is around 100 nm, while the diameter of the atom is only 0.1 nm). This shows that we can create a required wave packet in a small spatial volume if we have a long time available to create the packet.

On the other hand, we may imagine a very long device which creates a very long electromagnetic wave packet almost instantaneously.

               
      ---------------------------------------------------
                                     ^
                                     one finger disturbs Δt


                          v         v        v
      ---------------------------------------------------
                                ^        ^ 
                         multiple fingers disturb Δx


A classical analogue is a string. We can create a wave packet either by disturbing one location of the string for a long time, or by disturbing a long segment of the string for a very short time.

Energy-time-position uncertainty principle for a massless particle. For a prepared photon the following holds:

       ΔE * max(Δt, Δx / c) >= h,

where ΔE is the uncertainty of the energy, Δt is the time used in preparation, and Δx is the size of the device used in preparation.
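
As a numeric example with an arbitrarily chosen ΔE: a photon prepared with ΔE = 1 μeV needs either a preparation time of several nanoseconds or a device roughly a meter in size.

# Minimum preparation time or device size for a photon with dE = 1 microelectronvolt.
h = 4.136e-15            # eV s
c = 3.0e8                # m/s
dE = 1.0e-6              # eV
min_time = h / dE        # about 4 * 10⁻⁹ s
min_size = c * min_time  # about 1.2 m
print(min_time, min_size)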


Question. Does the same relation hold for measuring the energy of a photon?


What about preparing a wave packet for a massive particle like the electron?

The position-momentum uncertainty relation for the wave packet of a massive particle is

       Δx Δp >= h,

or

       Δx Δ(m v) >= h
<=>
       Δx / v * Δ(m v² / 2) >= h
<=>
       Δt ΔE >= h,

where Δt is the time it takes the electron to move the approximate length of the wave packet. This is very much analogous with the classical string example. Either we need to spend the time Δt to disturb at one location, or we need to spend a very short time to disturb along a long segment Δx.

Here we used Δ(m v² / 2) = v Δ(m v) and Δt = Δx / v. We are working with fuzzily defined uncertainties and can ignore factors that are of the order 1.

Energy-time-position uncertainty principle for a massive particle.  The following holds:

       ΔE * max(Δt, Δx / v) >= h,

where ΔE is the uncertainty in energy, Δt is the preparation time, and Δx is the length of the preparation spatial volume. The velocity of the prepared particle is v.


The relation is less strict than for a photon. If v is, for example, 0.01 c, we can make Δx very small.
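
For example, with the same arbitrary ΔE = 1 μeV as in the photon example above, an electron with v = 0.01 c can be prepared in a device a hundred times shorter:

# Minimum device size for an electron with v = 0.01 c and dE = 1 microelectronvolt.
h = 4.136e-15                     # eV s
c = 3.0e8                         # m/s
v = 0.01 * c
dE = 1.0e-6                       # eV
min_size_electron = v * h / dE    # about 0.012 m
min_size_photon = c * h / dE      # about 1.2 m
print(min_size_electron, min_size_photon)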


A practical experiment to prepare electrons with precise energy in a very short time


What practical method can prepare an electron with a sharp momentum p in a very short time in a short distance?


   source          shutter                    shutter
                             --------------------------     
       e- ------>       |   electric field    |        ----> 
                             --------------------------         p + Δp
                                          Δx                      angle α


We have a source of electrons located far away, so that we know that the momentum to the y and z directions is almost zero. We want to select electrons whose momentum in the x direction is very accurately p with a precision Δp. That corresponds to some uncertainty ΔE in the energy.

The position-momentum uncertainty relation gives the length Δx of the wave packet of a selected electron:

       Δx Δp >= h.

Let us assume that the incoming electrons have the momentum very roughly equal to p. Let us have a uniform electric field along a distance Δx. The field is only present for a very short time Δt.

We have shutters which only open the area of the electric field for electrons for some time ΔT. A shutter is a very high repulsive potential which we can create very quickly.

Then we can apply the electric field for a very short time Δt. All the electrons in the area will get the same impulse from the field. The shutter at the far end is opened and the electrons continue their journey. Electrons which are deflected to a certain angle α are the selected ones.

What should ΔT and Δx be? Almost all of the wave packet of an electron must fit in Δx. Otherwise the shutters would exert significant forces on the electron, changing its momentum significantly during the process.

The electric field changes quickly in the time Δt in a spatial volume of the length Δx. That will create, among other things, real photons whose energy is up to

       h c / Δx

and momentum up to h / Δx = Δp.

An electron departing to the angle α may have bumped into such a real photon, boosting the momentum of the electron. The boost might be up to 2 Δp, quite significant. However, we can reduce the number of such photons by making the electric field smaller. Thus, the photons do not pose a problem.

Our experiment satisfies the uncertainty relation

       ΔE Δx / v >= h,

which can be calculated like in the previous section.

Using shutters really is cheating. They, too, interact with the electron, and they are manipulated over a much longer time ΔT >> Δt.

What about removing the shutters completely? A small number of electrons will receive the impulse from the electric field partially, which will spoil our preparation to some extent. But the shutters themselves cause similar spoiling.

Let us remove the shutters. Then the interaction really is on for a very short time Δt.

What about preparing an electron in a very short distance Δx in a time Δt = h / ΔE?

Let us try to reuse the setup above. This time we let Δx be very small. But the geometry of the electric field is not suitable for our purposes. What to do?

We can create a photon with precise energy in a very small spatial volume if we can use a time Δt = h / ΔE. Then we can let the photon free an electron from an atom through the photoelectric effect. An atom occupies a very small spatial volume. In this indirect way we can prepare an electron in a small spatial volume and keep the uncertainty of the energy very small.

Is there a direct way to prepare the electron in a very small spatial volume?


Conclusions


We believe that the energy-time uncertainty principle should be replaced with an energy-time-position uncertainty principle. This is also in line with special relativity where one cannot separate time and space.

In this blog post we studied preparation of a particle with sharply defined energy. Measuring the energy precisely is a related problem, but it should be studied separately.

We should also find out what is the role of "compensating forces" in the book of Aharonov and Rohrlich, in chapter 8. The authors assume an interaction which lasts for a very short time. But how large is the spatial volume where the interaction acts?

Thursday, August 12, 2021

The piston quantum experiment of Aharonov and Rohrlich

Let us continue reading the quantum paradox book of Yakir Aharonov and Daniel Rohrlich (2005).


There is the following thought experiment in the book. A particle is in a cylinder of length L, closed with a piston. The particle is in an energy eigenstate, so that

       L = (N / 2) λ

for some natural number N > 0 and the de Broglie wavelength λ of the particle. The piston is quickly pulled a length ΔL less than λ, so that L increases.


A. Projecting eigenfunctions


The discrete Fourier transform of the old wave function in wavelengths of the form

       (L + ΔL) / (n / 2),

where n > 0, will contain Fourier components whose wavelength is smaller than λ. This suggests that the particle may end up in a higher energy eigenstate. Is this really possible?
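
We can make the projection picture concrete with a small numerical sketch. We assume (hypothetically) that right after the pull the wave function is still the old box eigenstate, extended by zero into the newly opened region, and we expand it in the eigenstates of the longer box:

import numpy as np

# Project the old eigenstate sin(N pi x / L), set to zero for x > L, onto the
# eigenstates sin(n pi x / (L + dL)) of the lengthened box.
L, dL, N = 1.0, 0.05, 3
x = np.linspace(0.0, L + dL, 20001)
dx = x[1] - x[0]
old = np.where(x <= L, np.sin(N * np.pi * x / L), 0.0) * np.sqrt(2.0 / L)

for n in range(1, 8):
    new = np.sin(n * np.pi * x / (L + dL)) * np.sqrt(2.0 / (L + dL))
    weight = (np.sum(old * new) * dx)**2
    print(n, round(weight, 4))   # nonzero weight also appears at n > N, i.e. at higher energies

The projection indeed puts some weight on higher energy states. The question in this section is whether the projection is the physically correct way to compute the new state.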


B. Interpretation as a scattering experiment


Let us look at this as a kind of scattering experiment of the particle off the piston. Then it is clear that the particle cannot pick up speed if it bounces back from a receding piston.

The "wave function" of the particle inside the cylinder really is a path integral of particle paths. Once the piston is moved, these paths will form in a new way. Energy conservation makes sure that any new stationary state has energy less or equal to the old stationary state. Thus, projecting the old wave function to new eigenfunctions is not the right way to calculate the new wave function.


C. Classical coherent wave


If, instead of a particle, we confine a classical resonant coherent electromagnetic wave inside the cylinder and pull the piston quickly, then the new waveform will probably contain Fourier components with a shorter wavelength than the original wave.

It might be that projecting the old wave to new resonant waves (eigenfunctions) is a good approximate way to calculate the new wave.

Classically, the bouncing wave exerts radiation pressure on the piston and the cylinder walls. If we pull the piston, the wave does work on the receding piston and loses energy. Classically, any high-frequency waves which appear must drain their energy from the original wave.

What if we have just a single photon bouncing in the cylinder? Can it end up in a higher energy state, somehow draining energy from the receding piston?

Suppose that it is the photon itself which moves the piston farther. Energy conservation says that the photon cannot end up in a higher energy state.
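
A supporting sketch: for normal-incidence reflection from a mirror which recedes at speed u, the standard relativistic Doppler result for the reflected frequency is, in LaTeX notation,

       f' = f \, \frac{1 - u/c}{1 + u/c} \; < \; f,

so a photon which bounces off the receding piston is always redshifted, in agreement with the energy conservation argument above.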


D. Electron gas inside the cylinder


We cannot make a coherent electron wave. Let us assume that we have a gas of electrons bouncing inside the cylinder.

A path integral of N electrons might be the right way to calculate (though not practical). The piston cannot increase the energy of an electron in the path integral.


A possible resolution of the problem


If there is a classical electromagnetic wave confined in the cylinder, then we believe that classical physics is correct. When the piston is pulled, the wave loses energy. The resulting new wave will contain some higher frequencies than the original wave. The photons at those higher frequencies drained their energy from photons of the original wave.

If there is just a single particle in the cylinder, then we believe that a scattering experiment is the way to model the behavior. The wave function of the particle really is a path integral of various paths. The particle cannot end up having higher energy than it originally had.


In their 2020 paper The quantum totalitarian property and exact symmetries, Chiara Marletto and Vlatko Vedral recommend using a path integral approach.


How does a high-energy photon drain energy from low-energy photons?


The piston must be an electric conductor to reflect the electromagnetic wave inside the cylinder. An electron in the piston can be accelerated by absorbing many low-energy photons. It is possible that the frequency of the electron's oscillation is much higher than the frequency of any of the absorbed photons. Then the electron will radiate high-energy photons.

It is an individual electron which converts the energy of a bunch of low-energy photons into the energy of high-energy photons.


Can we assume that a classical electromagnetic wave has a fixed number of photons?


Could it be that we can, after all, assume a fixed number of photons in a classical wave, and that it is the measuring device which converts the energy of low-energy photons into a high-energy photon?

A challenge in assuming a fixed number of photons is what happens if we measure a photon from an accelerating frame of reference. Suppose that a laser falling freely on Earth sends a single photon to space.

A measuring device floating freely in space would see the wave as a "chirp". The measuring device may absorb a photon from the chirp. There may be soft photons left over from the absorption, and the soft photons will escape to space.

In summary, the number of photons may be fixed in the laser beam in the inertial frame of the laser. But if one wants to absorb the energy of these photons in an accelerating frame, the number and energy of absorbed photons is not predetermined.

If the laser sends a full classical wave, then the measuring device in space will see a classical chirp wave. The measuring device will absorb photons of various frequencies. The measuring device cannot be fully resonant with a chirp. The Fourier decomposition of the remaining wave, after the measuring device, will be very complex. We may interpret that the remnant contains soft photons (and also some very high-frequency photons).
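
A crude sketch of the chirp (first-order Doppler only; we ignore gravitational redshift and relativistic corrections, and the laser frequency below is an illustrative assumption): the laser in free fall recedes from the distant floating receiver with speed v = g t, so the received frequency drifts slowly downward.

# First-order Doppler sketch of the chirp seen by a distant, freely
# floating receiver when the emitting laser is in free fall on Earth.
g = 9.81          # m/s^2
c = 2.998e8       # m/s
f0 = 4.74e14      # Hz, roughly a 633 nm laser line (illustrative)

for t in (0.0, 0.5, 1.0, 2.0):       # emission time in seconds
    f_received = f0 * (1.0 - g * t / c)
    print(t, f_received)

# The fractional drift rate is g / c, about 3e-8 per second:
# an extremely slow chirp, but a chirp nonetheless.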

Tuesday, August 10, 2021

Einstein's clock-in-the-box and the reply of Bohr to it

Yakir Aharonov and Daniel Rohrlich have written a very interesting book, Quantum Paradoxes (Wiley, 2005).


One of the paradoxes is Albert Einstein's famous 1930 clock-in-the-box thought experiment. There are photons in the box, as well as a clock. A mechanism opens a shutter for a very short time Δt when the clock says it is noon. Some photon(s) escape, and the total energy of the box is reduced by some amount E.

In the morning we can use a very long time to weigh the box, so that we know its total energy extremely precisely. In the afternoon we can repeat the weighing procedure.

We can determine E as accurately as we like. We also know the approximate time t₀ = noon when the box lost this energy E. Does this contradict the uncertainty principle

       ΔE Δt >= h / (4 π) ?

Niels Bohr came up with a sort of counter-argument. He uses a complicated procedure to weigh the box. There is a spring scale and known counterweights. After we have hung the smallest counterweight, there is still an uncertainty Δx in the vertical position of the box.

General relativity tells us that the clock runs at different speeds at different heights. Bohr calculated the uncertainty ΔT in the current time (proper time) shown by the clock. (Note that this is not the same as Δt.) We assume that we do not read the clock in the box, but try to determine the time from the laboratory clock. He proved that the uncertainty relation above holds for that particular uncertainty ΔT.
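
For reference, the usual textbook reconstruction of Bohr's estimate runs roughly as follows (a sketch, dropping factors of order 2π): reading the pointer position to within Δx leaves a momentum uncertainty Δp ≳ h / Δx; resolving a mass change Δm during a weighing time T requires the gravitational impulse g Δm T to exceed Δp; and gravitational time dilation over the height uncertainty Δx shifts the clock by ΔT ≈ (g Δx / c²) T. In LaTeX notation:

       \Delta m \gtrsim \frac{h}{g \, T \, \Delta x},
       \qquad
       \Delta T \approx \frac{g \, \Delta x}{c^2} \, T,
       \qquad
       \Delta E \, \Delta T = \Delta m \, c^2 \, \Delta T \gtrsim h.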


Bohr's argument did not prove much


However, Bohr uses a very slow procedure to weigh the box. The uncertainty principle allows us to perform the weighing much faster, in a duration t = h / (ΔE * 4 π). If we use a faster procedure, the uncertainty ΔT in the reading of the clock inside the box is probably much less than in Bohr's procedure.

Let us then analyze how one could entirely circumvent the uncertainty in the box clock reading. It is quite easy. Instead of the box energy, let us measure the energy of the photons which left the box through the shutter. We can use an arbitrarily long time for measurement, so that ΔE is extremely small.

We can fix the box statically to the laboratory frame. Then the clock inside the box runs at the same speed as the laboratory clock. The uncertainty ΔT in the reading of the clock in the box is very small. We found ΔE and ΔT which do not satisfy the uncertainty relation.

If one tries to squeeze energy E into a wave packet whose duration is roughly t, then

       ΔE t >= h / (4 π).

This is a typical example of an energy-time uncertainty principle. However, Einstein's clock-in-the-box is about the history of events and uncertainty of the time when an event happened. Why would there be an uncertainty relation about history? We do not see why that should be necessary.
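
As a sanity check of the wave-packet relation above, here is a minimal numerical sketch for a transform-limited Gaussian pulse (the pulse duration is an arbitrary illustrative value): the product of the duration and the frequency spread comes out at 1 / (4 π), i.e. ΔE Δt = h / (4 π) with E = h f.

import numpy as np

sigma_t = 1.0e-12                           # pulse duration (s), illustrative
t = np.linspace(-100 * sigma_t, 100 * sigma_t, 2**16)
dt = t[1] - t[0]

# Field amplitude of a transform-limited Gaussian pulse;
# its intensity |amp|^2 has standard deviation sigma_t.
amp = np.exp(-t**2 / (4 * sigma_t**2))

spectrum = np.fft.fftshift(np.fft.fft(amp))
f = np.fft.fftshift(np.fft.fftfreq(t.size, dt))
power = np.abs(spectrum)**2

sigma_f = np.sqrt(np.sum(f**2 * power) / np.sum(power))

print(sigma_t * sigma_f)      # approximately 0.0796
print(1 / (4 * np.pi))        # = 0.0796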

Bohr's argument did not prove the energy-time uncertainty principle, but something about uncertainties in a certain complicated physical experiment.


Analysis in literature


Let us look at the literature. Does anyone claim that Bohr proved something about the uncertainty principle?


H.-J. Treder (1975) writes that the box argument has no bearing on the fourth Heisenberg relation. We agree.


Hrvoje Nikolić (2012) has written the paper EPR before EPR: a 1930 Einstein-Bohr thought experiment revisited.

Nikolić says that neither Einstein nor Bohr was right. In section II A Nikolić writes that Einstein wanted to show that

       ΔE Δt >= h / (2 π)

does not hold.

We in this blog think that Einstein was right. We can produce a photon in a short interval of time Δt, and later measure the energy of the photon extremely precisely (very small ΔE), using a long time t for the measurement. The correct energy-time uncertainty principle says that

      ΔE t >= h / (4 π).

In section III D Nikolić writes that Einstein did not realize that measuring the mass-energy of the box can influence "ΔE (or any other property) of the photon".

We do not understand the claim. It was well known in 1930 that a photon is a quantum of light of a definite frequency f and definite energy E = h f. We can reduce the uncertainty in E by measuring the energy of the box or of the photon itself. Conservation of energy was taken for granted in 1930, as it is today.

After the shutter is opened and closed, the box and the photon(s) are, of course, entangled. Measuring the energy of the box makes the wave function "collapse" to a certain energy E of the photon(s). However, talking about a "collapse" does not affect the analysis of the process in any way.


Conclusions


Albert Einstein's thought experiment about the clock-in-the-box does not concern the energy-time uncertainty principle at all. Neither does Niels Bohr's counter-argument.

It is wrong to present the debate as a "proof" that the uncertainty principle holds.