Saturday, November 28, 2020

Matter falling into a black hole radiates away ALL its energy in gravitational waves?

The annihilation of an electron and a positron can be interpreted this way:

When the electron and the positron are very close, the immense acceleration converts ALL the energy in them into electromagnetic waves.

The annihilation process ensures conservation of energy:


If we could, through some mechanism, lower the electron and the positron to a separation smaller than the classical electron radius, 3 * 10^-15 m, we could extract more energy than the rest mass of the pair.
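We can check this scale numerically. The sketch below (plain Python with CODATA constants; the interpretation as extractable energy is the classical estimate above, not a rigorous result) finds the separation at which the Coulomb binding energy equals the rest energy of the pair:

```python
import math

# CODATA constants (SI)
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
m_e = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8           # speed of light, m/s

# Rest energy of the electron-positron pair
rest_energy_pair = 2 * m_e * c**2          # about 1.64e-13 J (1.022 MeV)

# Coulomb energy released when +e and -e are brought from infinity
# to separation d is U(d) = e^2 / (4 pi eps0 d); solve U(d) = 2 m_e c^2.
d = e**2 / (4 * math.pi * eps0 * rest_energy_pair)

print(f"separation where U = 2 m_e c^2: {d:.3e} m")
# d comes out to ~1.4e-15 m, half the classical electron radius: below
# this scale the classical estimate yields more energy than the rest
# mass of the pair.
```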

An analogous question about gravity:

Does a particle falling into a black hole radiate away ALL its energy in gravitational waves?

If yes, then we have a solution for the evaporation of black holes, as well as for the black hole information paradox: the evaporation happens as gravitational waves, and the information is preserved in the outgoing waves!

Why does a black hole not evaporate immediately, then? Because it takes a huge amount of time (as measured by an external observer) for light to climb up if the light is born close to the horizon.

Imagine a static observer close to the horizon. He sees an infalling particle accelerate at an enormous rate. How strong are the gravitational waves that the particle produces?

What about an infalling photon?


Takashi Nakamura (2006) has calculated the gravitational wave emission of an infalling particle. Some 3% of its mass-energy is radiated away in gravitational waves.

Did Nakamura take into account that the gravitons which the horizon devours are particles themselves, and that they, too, will radiate gravitational waves?

For an infalling electron, how much energy is radiated away in electromagnetic waves?



Can a black hole devour a photon at all?



Richard Hanni and Remo Ruffini calculated that the horizon of a black hole behaves like a metal sphere for the electric field. Maybe even like a superconducting metal?

Suppose that we have an electric charge q outside a black hole, and we suddenly move the charge some distance L.


We can use Edward M. Purcell's reasoning to calculate the energy of the outgoing electromagnetic pulse. The lines of force of the electric field will move suddenly, and their "bending" carries away the energy.
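Purcell's reasoning can be made quantitative with the Larmor formula: the energy in the outgoing shell of "bent" field lines equals the Larmor energy radiated during the velocity change. A small sketch (the duration and velocity change below are illustrative assumptions, not values from any reference):

```python
import math

e = 1.602176634e-19        # electron charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
c = 2.99792458e8           # speed of light, m/s

# Assumed (illustrative) kink: the charge changes its velocity by dv
# during a short time tau.
tau = 1e-20                # s
dv = 1e6                   # m/s (nonrelativistic)
a = dv / tau               # acceleration during the kink

# Larmor power P = q^2 a^2 / (6 pi eps0 c^3); the pulse carries P * tau.
energy = e**2 * a**2 * tau / (6 * math.pi * eps0 * c**3)
print(f"energy carried by the outgoing kink in the field lines: {energy:.2e} J")
```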

            charge               metal sphere
                 q ----------------O---------------
                      field line       field line

If there is a superconducting metal sphere nearby, there will be induced charges on its surface, and the electric field lines will be perpendicular to the surface. When we suddenly move the charge q, the field lines will move accordingly, and carry away the energy also along the field lines starting from the metal sphere.

Can the metal sphere absorb any of the energy? Yes, if there is resistance in it: moving charges on its surface will heat up the surface.

What about induced electric currents and the energy of their magnetic field?

If the black hole horizon behaves like a superconducting metal sphere, can the bends in the outgoing field lines become less sharp than in the incoming field lines? If yes, then the black hole could devour some of the energy in the sudden change in the electric field of the charge q.

Solved! Yes, a black hole can devour energy from electromagnetic waves. Imagine a neutron star where the gravitational potential is very low, and the speed of light very slow (as measured by an external observer).

Then a bend in a field line can spend a very long time inside a neutron star. When the bend finally comes out, it is still very sharp and carries the same energy which went in. The neutron star can keep the energy "captive" for a long time. A black hole is the limiting case where the energy stays captive forever.

This may break the "no-hair" conjecture of black holes. Does the black hole electric field store information about the earlier position of the charge q?


Freeman Dyson's argument that the QED perturbation series cannot converge, and "electric black holes"


Dyson argues that if the perturbation series converges, then it will also converge for a pathological theory where charges of the same sign attract each other.

But in the pathological theory we can create macroscopic "electric black holes" of charges, and extract more energy than the rest mass of the charges. That would break energy conservation. Dyson claimed that the perturbation series of this pathological theory cannot converge => the QED perturbation series cannot converge.

Can we somehow prevent electric black holes from forming in the pathological theory? No, because the photons which carry the energy away do not reduce the amount of charge.

In gravity, we have a better chance. Gravitons do carry away the gravitational charge (= mass-energy).

The annihilation of an electron and a positron is an empirical quantum phenomenon, and does not follow from classical electromagnetism.

There might exist a similar mechanism for the annihilation of a black hole. The annihilation just lasts an enormous time because gravitons climb up very slowly (as measured by an external observer).



What about a black hole connected to a white hole?


If a forming black hole eventually evaporates into gravitational waves or whatever, could the same happen to anything which drops into a black hole that is connected to a white hole?

We like to think that a spaceship can pass unharmed through such a wormhole. If it evaporated, it would be harmed, or else there would have to be a magical process which both allows the spaceship to pass and makes it evaporate.

A magical process does not seem plausible. This is an argument against evaporation through gravitational waves, or through any mechanism.

Friday, November 27, 2020

Every field equation with two interacting fields is nonlinear: how do we know in physics that singularities do not form?

Suppose that we have two fields φ and ψ. As free fields (no interaction) both fields are governed by a linear differential equation.

The fields might be the electromagnetic field and the Dirac field.

Theorem. If we add an interaction, then the system of φ and ψ is no longer linear.

Proof. Assume first that ψ_0 is identically zero. Pick a solution φ_0 which is not identically zero.

The pair (φ_0, ψ_0) is a solution of the combined system.

Then pick a solution (φ_1, ψ_1) where φ_1 is identically zero but ψ_1 not.

But the sum of these two solutions is not a solution, because of the interaction term. QED.
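The proof can be illustrated numerically. Below is a toy 0-dimensional stand-in for the field equations: two oscillators with a coupling term g φ ψ (my choice of interaction, for illustration only). Each "one field zero" solution is exact, but their sum leaves a nonzero residual in the φ equation:

```python
import math

g = 0.1   # interaction strength (assumed)

# Coupled toy system:
#   phi'' + phi   + g*phi*psi = 0
#   psi'' + 2 psi + g*phi*psi = 0
# Exact solution A: phi = cos(t),  psi = 0.
# Exact solution B: phi = 0,       psi = cos(sqrt(2) t).
# Their sum: phi = cos(t), psi = cos(sqrt(2) t).

def residual_phi(t):
    """Left-hand side of the phi equation for the summed solution."""
    phi = math.cos(t)
    phi_dd = -math.cos(t)                 # (cos t)'' = -cos t
    psi = math.cos(math.sqrt(2.0) * t)
    return phi_dd + phi + g * phi * psi   # = g*phi*psi, nonzero when g != 0

r = residual_phi(1.0)
print(f"residual of the phi equation at t = 1: {r:.5f}")
# A nonzero residual: the sum of two solutions is not a solution.
```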


Thus, most of the field equations in quantum physics are nonlinear.

Mathematically, it is very hard to prove that a nonlinear differential equation does not develop singularities. Christodoulou, Klainerman, and Tao have worked on this problem for the Einstein equations and the Navier-Stokes equations.

We have the same problem of smoothness for most equations in quantum physics.

http://philsci-archive.pitt.edu/13432/1/PhD%2520Bacelar%2520Valente%2520redux.pdf

Freeman Dyson gave an argument for the non-convergence of the perturbation series in QED. The argument is explained in the above link, the Ph.D. thesis of Mario Bacelar Valente.

The convergence of a perturbation series is related to the general problem of whether a system has any solutions which do not develop singularities. Thus, the convergence problem of QED is really a ubiquitous problem in quantum physics.

What about renormalization? Infinities may appear because:

1. the field equation itself develops singularities, or

2. our approximation method is bad and creates singularities.


Suppose that we have a nonlinear field equation and send a 1 eV photon to a system governed by that field equation. How do we know that:

1. The system does not output a 1 GeV photon?

2. The system does not create a singularity and collapse into a black hole?

3. There is any solution to the problem at all?

4. The perturbation series which we use to approximate the process converges and does not have huge errors?


For nonlinear field equations, we generally do not know the answer to any of the above questions.

When doing physics we assume that the system is well-behaved. If we use Green's functions to construct a solution, we assume that high momenta p in the "spike" sources are canceled by destructive interference.

The renormalization problem and the convergence problem of QED are not isolated cases. Similar problems appear for all nonlinear field equations.

Thursday, November 26, 2020

Solved! Why does the Feynman propagator model correctly the Coulomb potential 1/r?

https://journals.aps.org/pr/abstract/10.1103/PhysRev.76.769

We can read the solution from a 1949 paper by Feynman, Space-Time Approach to Quantum Electrodynamics. In Section 4 Feynman presents the Fourier decomposition of δ_+(s_21^2), which is essentially a Coulomb potential if speeds are low.

Feynman analyzes scattering using the following recipe:

For each Fourier component of 1/r, estimate the scattering of the wave function of an incoming particle.

https://en.wikipedia.org/wiki/Klein–Nishina_formula

In the derivation of the Klein-Nishina formula we see that if we have a photon field of a momentum q and an arriving electron field of a momentum p, there will be a small perturbation in the electron wave function, and the perturbation looks like a scattered (virtual) electron of a momentum p + q. We say that the scattered electron has "absorbed" a photon.

Thus, once we decompose 1/r into Fourier components, we can use Klein-Nishina like thinking to analyze scattering from each component.


The connection between Compton scattering, the Fourier decomposition of 1/r, and the Green's function for the Klein-Gordon equation


It is no coincidence that a Feynman propagator approximates (classical) Coulomb scattering well. Schrödinger's equation models classical physics well. Feynman analyzes Schrödinger's equation using a Fourier decomposition of 1/r.

But why is the Fourier decomposition of 1/r similar to the Green's function for the Klein-Gordon equation?

The reason might be that we can build a permanent source (= a static charged particle) of the electromagnetic wave equation as a sum of sources lasting an infinitesimal time. That is, the field of a static charge is an integral over Green's functions over each infinitesimal time dt.
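This can be written out explicitly. The retarded Green's function of the wave equation is a spherical delta pulse, and integrating it over the source time t' reproduces the static Coulomb form (a standard textbook identity, stated here as a consistency check):

```latex
G_{\text{ret}}(t, \mathbf{x};\, t', \mathbf{x}')
  = \frac{\delta\big(t - t' - |\mathbf{x} - \mathbf{x}'| / c\big)}
         {4 \pi\, |\mathbf{x} - \mathbf{x}'|},
\qquad
\int_{-\infty}^{\infty} G_{\text{ret}}\, dt'
  = \frac{1}{4 \pi\, |\mathbf{x} - \mathbf{x}'|}.
```

The right-hand side is exactly the Green's function of the static Poisson equation, that is, the Coulomb potential of a unit point charge.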


The mundane status of a "virtual photon"


We have a partial answer to the question: what exactly is the "virtual photon" of a momentum q which charges exchange when they scatter from each other's Coulomb potential?

The answer: the virtual photon is a weird way to say that we calculate how a particle scatters from one Fourier component (whose momentum is q) of the Coulomb 1/r potential of the other particle!

That explains how the charges "know" to exchange just one photon. They do not know anything but the 1/r potential. It is our perturbation approximation which imposes this fictional exchange of a photon.

A further question: can we reduce virtual photons in all cases to a mundane Fourier decomposition of a 1/r potential? Real photons exist separately from any 1/r potential. What about virtual photons?

Wednesday, November 25, 2020

Destructive interference gives cutoff for the vacuum polarization loop?

We have in this blog several times contemplated whether destructive interference of the waves of a virtual pair can cut off the high-momentum contributions to a vacuum polarization loop and make the Feynman integral converge.

                  virtual e-
                   _________
                  /         \
                 /           \
~~~~~~~~~ A                   B ~~~~~~~~~
 virtual   \                 /
 photon     \_______________/
                  virtual e+

time ------------>

The above vacuum polarization loop is a part of a larger Feynman diagram.

The energy E of the experiment gives us some bounds on how precisely we can know the position of A in spacetime. Let the length scale be L, for example, L = 10^-12 m for a 1 MeV collision.

Let p be the 4-momentum of the virtual electron e-, and |p| its magnitude.

If we can hypothetically measure the momentum of the virtual positron e+, then we can know p almost precisely.

How much does hypothetical destructive interference of various e- paths suppress the magnitude of the wave function of e- at B?

Since e- is virtual, it can move to any direction from A. The direction is not dictated by p.

In one dimension, if each point in a line segment of length L sends waves in unison, that is, the phase of the created wave is the same at each point at any time t_0, then there is total destructive interference within each sub-segment of length λ, where λ is the wavelength. The suppression from destructive interference goes as ~ 1 / |p|.
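A numeric check of the 1D claim (a sketch; the segment length and momenta are arbitrary choices of mine): the summed amplitude from in-phase senders on a segment equals |exp(ipL) - 1| / |p| in magnitude, so it is bounded by 2 / |p|.

```python
import cmath

L = 1.0    # length of the emitting segment (arbitrary units)

def amplitude(p, n=10000):
    """Midpoint-rule value of | integral_0^L exp(i p x) dx |."""
    dx = L / n
    return abs(sum(cmath.exp(1j * p * (k + 0.5) * dx) for k in range(n)) * dx)

a10, a100 = amplitude(10.0), amplitude(100.0)
print(f"|p| = 10:  amplitude {a10:.4f}  (bound 2/|p| = 0.2)")
print(f"|p| = 100: amplitude {a100:.5f} (bound 2/|p| = 0.02)")
# The amplitude oscillates but never exceeds 2/|p|: the suppression from
# destructive interference goes as ~ 1/|p|, as claimed.
```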

What about n dimensions and a smooth "brightness" distribution of each sender in an area of size L^n? There is certainly a lot of destructive interference, but does the suppression go as

       ~ 1 / |p|

or even better?

The Feynman vacuum polarization integral behaves very roughly as

       1 / |p|^2

over the 4-dimensional p space. It diverges badly. If we integrate over thin 3D spherical shells of radius r in the p space, we get an integral of

       r^3 / r^2

over r running from 0 to infinity. To make the integral converge, we would need a suppression factor better than 1 / |p|^2 from destructive interference.
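A crude numeric illustration of this counting (the infrared end is cut at an arbitrary r0 of my choosing, since only the large-|p| behavior matters here; s models a hypothetical extra suppression 1/|p|^s):

```python
def shell_integral(cutoff, s=0.0, n=100000, r0=0.1):
    """Midpoint rule for the integral over r in (r0, cutoff) of r^(1-s) dr.

    s = 0 is the bare shell counting r^3 / r^2 = r; s > 0 models an extra
    suppression factor 1/|p|^s from destructive interference.
    """
    dr = (cutoff - r0) / n
    return sum((r0 + (k + 0.5) * dr) ** (1.0 - s) * dr for k in range(n))

i_small, i_large = shell_integral(10.0), shell_integral(100.0)
print(i_small, i_large)     # grows like cutoff^2 / 2: quadratic divergence
c_small, c_large = shell_integral(10.0, s=3.0), shell_integral(100.0, s=3.0)
print(c_small, c_large)     # nearly cutoff-independent: converges (s > 2)
```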

We need to study the impact of various dimensions and brightness distributions on the amount of destructive interference.

But is there destructive interference? The standard way to calculate Feynman diagrams ignores any destructive interference at the internal vertices of the diagram. On the other hand, the probability amplitudes for a final outcome are summed in the calculation, and there we do have destructive interference.

In our vacuum polarization loop, the phases of e- and e+ run in opposite directions. A large momentum p in e- and the opposite momentum -p in e+ cause no destructive interference at all in the virtual photon which is produced at B, because the effects of e- and e+ cancel each other out. A question: does it make sense that the probability amplitude of the particles coming to B is much less than the amplitude of the particle flying off?

We may hypothesize about measuring the position of the virtual positron e+, and thus reducing the uncertainty in the position of the spacetime point A. But if we accept the principle that a virtual particle can move in any direction regardless of its momentum p, then our measurement would not help to pinpoint the location of A.

Tuesday, November 24, 2020

Can a linear wave equation describe the bending of waves in a potential?

https://authors.library.caltech.edu/3520/1/FEYpr49b.pdf 

We are trying to solve the mystery described in the two previous blog posts.

In the above paper, Richard P. Feynman calculates the Schrödinger equation behavior of a particle under a weak (electric) potential V.

If V is very weak, one can first solve the free particle equation, and then treat the slight change caused by V as a perturbation. Or is this really true?

If we have a beam of electrons, our experience is that an electric field deflects all electrons. A cathode ray tube works this way.

It is not that most electrons would pass the field undeflected, and there would be a small beam of "scattered" electrons.

The Schrödinger equation is a linear differential equation. If we solve the equation numerically, using small time steps Δt, can we solve the equation approximately by collecting a perturbation term which we add to the original free field solution? It cannot really be called a "perturbation" if it is able to deflect the whole beam of electrons.

The question: can the Schrödinger equation or any linear wave equation describe the bending of waves under a potential?

In optics, one calculates the refraction at the surface of glass by fitting waves that propagate in air to waves that propagate in glass. The wave equation is not used to describe the behavior at the surface.

We need to check the literature. Clearly, the perturbation method of Feynman cannot be used if we are dealing with the bending of a whole electron beam. But is the real problem in the linearity of the Schrödinger equation?


Bend a beam solution?


Suppose that we have a solution which describes the behavior of an electron beam in a V = 0 potential.

Let us then simply "bend" the beam in the spatial dimensions. The Schrödinger equation is no longer satisfied, because the nabla squared term changes its value in the bent solution. We can make the Schrödinger equation hold again if we introduce a complex-valued potential V which, when multiplied by Ψ, restores the equality in the equation. Here we assume that the wave function is nonzero everywhere.

If we are bending a plane wave, then we can choose V to be "almost" real-valued.

If our wave function were real-valued, then we would face the problem of how to restore the equality at points where Ψ(t, x) = 0.


The Feynman approximation of summing the "scattered" waves is bad


Feynman assumes that the bulk of the wave function remains as it was when V = 0, and sums the "perturbation" terms caused by a weak potential V != 0.

He then assumes that the perturbation wave propagates as a separate, "scattered" wave.

This approximation makes sense if we want to approximate the wave function for a very short spatial distance. But if we let the beam continue far away, then the approximation is very bad: it does not model the bending of the beam in any way!

Why does Feynman obtain a correct estimate for Coulomb scattering, if the approximation method is this bad? We have to find out.


Green's functions



The Wikipedia article on Green's functions (https://en.wikipedia.org/wiki/Green%27s_function) defines a Green's function as the solution for a Dirac delta source in a linear differential equation:

        L G = δ(t = t_0, x = x_0).

One can then find a solution for an initial value problem with a source f in the equation

       L g = f(t, x)

by building the source function f from Dirac deltas and summing the corresponding Green's functions G to obtain g.
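A minimal example of this recipe in one dimension (my toy choice of operator and source, for illustration): for L = d²/dx², the Green's function is G(x) = |x| / 2, since G'' = δ. Convolving G with a source f indeed yields a g with g'' = f:

```python
import math

def G(x):
    """Green's function of d^2/dx^2 in one dimension: G'' = delta."""
    return abs(x) / 2.0

def f(y):
    """A smooth localized source (toy choice)."""
    return math.exp(-y * y)

def g(x, a=-8.0, b=8.0, n=20000):
    """g(x) = integral of G(x - y) f(y) dy, by the midpoint rule."""
    dy = (b - a) / n
    total = 0.0
    for k in range(n):
        y = a + (k + 0.5) * dy
        total += G(x - y) * f(y) * dy
    return total

# Check g'' = f at a sample point by central differences.
x, h = 0.7, 0.05
g_dd = (g(x + h) - 2.0 * g(x) + g(x - h)) / h**2
print(f"g''({x}) = {g_dd:.5f},  f({x}) = {f(x):.5f}")   # they agree
```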


The Wikipedia article claims that a relativistic propagator gives the probability amplitude for a particle to travel from x to y. Is that really so?

In Feynman diagrams, a new particle is born from a "disturbance" to the free wave equation of that particle. A disturbance is natural to describe as a source to the free wave equation. In that context, a "propagator" of that source is the Green's function.

Is it correct to say that the propagator gives the probability amplitude of a "particle" to travel from x to y? Probably no. Rather, it tells the probability amplitude that a source produced a particle at x and that particle is seen at y.


A correction to Feynman's "The Theory of Positrons" paper?


The approximation method of Feynman, which we described above, is bad. One can try to fix it by treating the (small) term VΨ as a source in the Schrödinger equation. The source creates new waves which we can calculate using the mathematical Green's function.

(UPDATE Nov 27, 2020: for the Schrödinger equation, the Green's function is a sharp wave packet and its diffusion in time. It makes no difference if we treat VΨ as a source, or as a "new wave"! Thus, our "fix" really changes nothing.)

However, this does not make the electron beam bend, either. In that case, it is not a good approximation.

This new approximation method might work if the potential V is only significant in an area which is smaller than the wavelength of an electron. That may be the reason why Feynman was able to get the correct Coulomb scattering formula!

Sunday, November 22, 2020

Why does the Feynman propagator model scattering from a 1/r potential?

https://www.math.arizona.edu/~faris/methodsweb/hankel.pdf

From the link we find out that the Fourier transform of a spherically symmetric potential function

       V(r) = 1 / |r|^a,

where 1 <= a < 3, in a 3D space, is

       F(k) ~ 1 / |k|^(3 - a).

For the Coulomb potential, 1/r, the Fourier transform is

        1 / |k|^2,

which looks like the Feynman momentum space propagator for the (virtual) photon:

        1 / (p^2 + iε).
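We can verify the 1/k² form numerically by screening the potential with exp(-m r) (the standard Yukawa regularization; the values of k and m below are arbitrary) and letting m be small. The transform of exp(-m r)/r is 4π / (k² + m²):

```python
import math

def F(k, m, R=200.0, n=400000):
    """Radial Fourier integral (4 pi / k) * int_0^R sin(k r) exp(-m r) dr."""
    dr = R / n
    s = 0.0
    for j in range(n):
        r = (j + 0.5) * dr
        s += math.sin(k * r) * math.exp(-m * r) * dr
    return 4.0 * math.pi * s / k

k, m = 2.0, 0.05
print(F(k, m))                          # numeric transform of exp(-m r)/r
print(4.0 * math.pi / (k * k + m * m))  # analytic 4 pi / (k^2 + m^2)
# As m -> 0 this tends to 4 pi / k^2: the Coulomb 1/r potential has a
# 1/k^2 Fourier transform, matching the form of the photon propagator.
```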


Scattering from a 1 / r potential


Let us shoot a particle towards a Coulomb potential. We analyze this using classical physics.

particle
  ● ----->
------------------------o----------------->
x axis              charge


The center of the Coulomb potential is at the origin. We shoot the particle to the direction of the x axis from far away, starting from a random location (y, z).

The initial distance of the particle from the x axis is called the impact parameter b.

The deflection angle is

       Θ(b) = +-2 arccos(1 / sqrt(1 + γ^2)),

where γ = |C| / (2 E b). In this, C describes the field strength and E is the energy.

If we assume that the potential is very weak, then γ is very small. The deflection angle is then

       Θ(b) = +-2 arccos(1 - 1/2 * γ^2)
             ≈ +-2 γ
             ~ 1 / b.

The momentum change p of the particle is thus:

       p ~ 1 / b.

The particle receives a momentum |p'| > |p| if its impact parameter is b' < b. The probability for such a case is

       P ~ b^2 ~ 1 / p^2.

We see that the classical probability follows the formula of the Feynman propagator.

Is this a coincidence? Why would a classical scattering process follow the formula of:

1. the Fourier decomposition of the Coulomb potential, and

2. the Green's function of the massless Klein-Gordon equation (or the electromagnetic wave equation)?

If we use a potential

        V(r) = 1 / r^1.5,

then the Fourier transform of the potential is

        F(k) = 1 / k^1.5.


The scattering angle, or the gained momentum, is

       p ~ 1 / b^1.5, or
       b ~ 1 / p^(2/3).

The probability of a momentum gain |p'| > |p| is

       P ~ b^2 ~ 1 / p^(4/3).

The exponent 4/3 does not match the Fourier transform exponent 1.5!

It looks like the Coulomb potential is a special case where the Fourier transform "matches" a classical scattering probability.
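A Monte Carlo sketch of the classical estimate above (sample sizes and reference momenta are illustrative choices of mine): with b drawn uniformly from a disc, P(b' < b) ~ b², and p ~ 1/b^a gives the tail exponent 2/a, that is, 2 for the Coulomb case and 4/3 for a = 1.5:

```python
import math, random

random.seed(0)

def tail_exponent(a, n=200000):
    """Estimate x in P(|p'| > p) ~ p^(-x) for the potential 1/r^a."""
    # Impact parameter uniform over a disc of radius 1: P(b' < b) = b^2.
    bs = [math.sqrt(random.random()) for _ in range(n)]
    # Weak-field momentum transfer p ~ 1/b^a (overall constant irrelevant).
    ps = [b ** (-a) for b in bs]
    p1, p2 = 10.0, 50.0                 # reference momenta (assumed)
    f1 = sum(1 for p in ps if p > p1) / n
    f2 = sum(1 for p in ps if p > p2) / n
    return math.log(f1 / f2) / math.log(p2 / p1)

print(tail_exponent(1.0))   # should come out near 2   (Coulomb case)
print(tail_exponent(1.5))   # should come out near 4/3
```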

Friday, November 20, 2020

Why does the Feynman propagator for a photon model correctly Coulomb's law?

https://physics.stackexchange.com/questions/44418/are-the-maxwells-equations-enough-to-derive-the-law-of-coulomb

The Feynman propagator for a photon is derived from the Klein-Gordon equation.

The Klein-Gordon equation is analogous to the wave equation (for the electric field E) which one can derive from Maxwell's equations.

https://en.wikipedia.org/wiki/Electromagnetic_wave_equation

One obtains the Feynman propagator from the following question: 

What is the "response" of a wave equation to a Dirac delta like source impulse at a point in space at a point in time?

https://en.wikipedia.org/wiki/Green%27s_function

The impulse response is called the Green's function.

https://physics.stackexchange.com/questions/279723/how-to-obtain-the-explicit-form-of-greens-function-of-the-klein-gordon-equation

We know that Feynman diagrams correctly model the Coulomb scattering of electrons and positrons. The scattering is classically governed by the Coulomb force. Why do Feynman diagrams work? They are derived from a wave equation, not from the Coulomb force equation.

Monday, November 16, 2020

The logic behind the renormalization group of a quantum field theory

https://en.wikipedia.org/wiki/Renormalization_group

We are currently studying renormalization groups, in order to understand why gravity is non-renormalizable.

https://arxiv.org/abs/0709.3555

Assaf Shomer has written a 10 page explanation of the non-renormalizability of gravity.

Let us calculate a Feynman path integral, using some large number Λ as a cutoff for momenta.

In QED, the integral over a (vacuum polarization) loop diverges badly, but by setting a cutoff we can calculate results which are empirically correct! Why is that? What is going on?

Shomer requires that the partition function (= the generator for all correlation functions) stays the same regardless of the cutoff. The correlation functions tell us the physical behavior of the system. Why would a relatively arbitrary cutoff Λ give the correct behavior and not another slightly different cutoff Λ'?

Maybe the right model is to adjust the values of coupling constants for various cutoff sizes, in a way that the integral which yields the partition function has the same value regardless of the cutoff size?

Shomer derives in his paper the equation (13), which determines the RG flow: the dependence of the coupling constants on the cutoff Λ which we must impose in order to keep the partition function integral the same regardless of the cutoff.

How does this compare to the intuitive idea of scaling self-similar systems, as outlined in the Wikipedia article?

In what way does a higher Λ mean analyzing the system in more precise detail? Like we would analyze the block spin example of Leo P. Kadanoff in the Wikipedia article?

The analogy between a higher cutoff and more detail is not clear. We have to think more about this.


Bare charge and a dressed electron


The Wikipedia article contains the familiar claim that virtual electron-positron pairs around an electron can "screen" some of the charge of the electron.

As if a "dressed" electron would appear to have a smaller charge.

This claim is misleading. Consider an electron in a classical polarizable medium. It is the polarization of the medium close to the observer which screens some of the electron charge. That is, the molecules close to the observer are polarized, and cancel some of the electric field of the electron.

Suppose then that the observer moves closer to the electron. Does the electron charge appear larger then? That depends on the amount of polarization close to the electron. It is somewhat misleading to say, as in Wikipedia, that the observer "bypasses a screen of virtual particles" as he moves closer. What matters is not the bypassing but the magnitude of the polarization closer to the charge.

Let us analyze a scattering experiment of an electron and a positron using Feynman diagrams. In the diagrams, there is a vacuum polarization loop which makes the interaction weaker. The integral for the loop diverges.

But in the diagram there is no cloud of virtual pairs which would screen the charge. Why would we invent an artificial "explanation" using imagined virtual pairs?

Wednesday, November 11, 2020

Quanta magazine claims that there is progress in the black hole information paradox

https://www.quantamagazine.org/the-black-hole-information-paradox-comes-to-an-end-20201029/ 

The Quanta article says that a group of researchers first considered black hole evaporation in the context of the conjectured AdS/CFT duality. Then they were able to eliminate the link to AdS/CFT using path integrals.

The Wikipedia page:

https://en.wikipedia.org/wiki/Black_hole_information_paradox

discusses the work of Penington et al.

The claims remind us of the announcement by Stephen Hawking in 2004 that he was able to recover the information which has fallen into a black hole, using Euclidean path integrals:

https://arxiv.org/abs/hep-th/0507171

Does Hawking radiation exist? Vladimir Belinski has claimed that the calculation by Hawking is erroneous. In this blog we have raised questions about conservation of momentum if we assume that the black hole horizon radiates photons. A photon carries away a momentum p. What, and how, could absorb the opposite momentum -p?

As far as we know, no one has refuted the criticism by Belinski, and no one has shown a mechanism which would conserve momentum.

What about the claims that we can use a path integral and show that the information falling toward the black hole horizon is preserved, after all?

Let us assume that a macroscopic black hole forms, and it devours and crushes a large part of the wave function, or, of the path integral.

In quantum mechanics, one cannot simply throw away a part of the wave function or a path integral. It is a strange claim that the remaining part would be equivalent to the entire original wave function.

The horizon of a black hole is classically a one-way surface. Information can fall in, but can never come back.

Let us do a thought experiment: instead of a black hole, we have a horizon which leads to a wormhole, and the wormhole opens into a white hole in some other part of our universe. If we claim that the horizon necessarily returns the information which has passed through it, how do we explain that the same information also ends up in another part of our universe? That would violate the "no-cloning" principle of quantum mechanics.

People who claim that a black hole horizon must necessarily give up the information it has devoured, kind of claim that the universe behind the horizon is "inferior" to our own universe. They think that the entropy should be calculated based on what is on our side of the horizon, and we should ignore what is behind the horizon. That does not sound like a reasonable assumption. Why would the other side be inferior to our side?

The Quanta magazine article points at the large number of assumptions and idealizations which Penington et al. use. That is a weakness in the new work.


Does a system eventually radiate all its entropy out in an asymptotic Minkowski space?


Consider a block of glass. It is an amorphous material and contains quite a lot of entropy. Over a very long time, the glass crystallizes, and its entropy dramatically decreases. After an immense time, the block of glass will assume its lowest energy state. That state might be a single spherical crystal, spherical because of the gravitational attraction.

Hawking, Bekenstein, and others probably had this phenomenon in their mind when they conjectured that a black hole horizon must necessarily have a non-zero temperature since it encloses a lot of entropy inside.

But let us think about the wormhole example above. If we throw a block of glass through the horizon, the entropy of the glass will slowly seep out into the new part of the universe where the block of glass ends up. There is no obvious reason why the entropy should climb up the wormhole to the wrong direction and magically return back from the horizon.

If we take macroscopic one-way surfaces seriously, then the entropy will remain behind the surface. If it is a wormhole, then the entropy will pop out of a white hole. If the surface is a black hole horizon, then the entropy will remain confined behind the horizon.

Let us compare a block of glass to a black hole horizon. An observer can see that the atoms are in a disorder in the glass. He sees that there is a lot of entropy.

But if a black hole horizon rapidly becomes an ideal geometric object, and essentially black, too, then an outside observer does not see a lot of entropy there. He does know that the horizon devoured a lot of entropy, but he can no longer directly observe the disorder. This is in contrast to a block of glass. 


Do cosmological horizons somehow radiate back the entropy in the galaxies that they devoured?



In an accelerating expansion of an FLRW universe, galaxies eventually disappear behind a cosmological horizon.

If horizons generally would give back the entropy which they have devoured, would a cosmological horizon eventually return us all the information in the galaxies which it swallowed? That seems implausible.

Tuesday, November 10, 2020

The energy of a graviton has to be hf

https://en.wikipedia.org/wiki/Graviton

Wikipedia states: "it is unclear which variables might determine graviton energy."

Let us assume that we have a mass M attached to a harmonic oscillator whose frequency is f. The harmonic oscillator device A is attached to the crust of Earth.

When the mass M swings in the oscillator, it produces a dipole gravitational wave.

Earth, in turn, produces an opposite dipole wave, which - far away - almost exactly cancels the dipole wave produced by M. This is the reason why observed gravitational waves are quadrupole, not dipole.

Let us then assume that we have another harmonic oscillator B of the frequency f close to our first oscillator A.

According to quantum mechanics, the oscillator A can only lose energy in units of hf, where h is the Planck constant.

If the gravitational interaction can transfer energy from A to B, it must happen in units hf. This strongly suggests that the energy of a single graviton is hf, just as it is for a single photon.

But does the oscillator A lose energy at all? Could it be that any energy state of A is stable under the gravitational interaction and cannot decay into a lower energy state?

If the mass M is huge, then we believe that gravitation behaves in a classical way. The oscillator B will certainly start to oscillate if A oscillates. This behavior might be measurable using a Cavendish torsion balance.

https://en.wikipedia.org/wiki/Cavendish_experiment

Thus, there is every reason to believe that the oscillator A can transfer energy packets of the size hf to B.
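The quantum of energy exchanged between the oscillators can be made concrete with a minimal sketch. The frequency below is an illustrative value, not taken from the text:

```python
# Sketch: the quantum of energy exchanged between oscillators A and B,
# assuming the graviton energy is h*f, just like a photon's.
h = 6.62607015e-34   # Planck constant, J*s
f = 100.0            # oscillator frequency in Hz (hypothetical value)

E_quantum = h * f    # smallest energy packet A can lose to B
print(E_quantum)     # ~6.6e-32 J per graviton
```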


Conclusions


The energy of a graviton is most probably hf. It is the same energy as for a photon of the same frequency. The "reason" for the packet size is that a quantum harmonic oscillator can only gain or lose energy in packets of the size hf.

What does conservation of the ADM energy really mean?

https://en.wikipedia.org/wiki/ADM_formalism

The ADM formalism is supposed to prove conservation of the energy of a closed system when observed from "infinity".

But there is a conceptual problem in this. Suppose that we initially have a system S which is static, and we have a static solution of the Einstein equations. Let M be the ADM mass of S at infinity.

Let us then make some change to S. Some masses inside S move and produce gravitational waves. Conservation of energy requires that the total mass of S plus the energy of the waves stays the same.

Let us then calculate the new ADM mass as the limit at infinity. Since the speed of light is finite, no information about the change in S has yet reached infinity. The metric is the same as before, and the ADM mass is trivially the same old M!
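The point that the ADM mass is read off from the far-field metric can be illustrated numerically. A minimal sketch, assuming the Schwarzschild form of g_tt and an illustrative solar-scale mass:

```python
# ADM mass read off from the asymptotic metric:
# g_tt = -(1 - 2GM/(c^2 r))  =>  M = lim_{r->inf} c^2 r (1 + g_tt) / (2G)
G = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8     # vacuum speed of light, m/s
M_true = 2.0e30      # a solar-scale mass, illustrative

def g_tt(r):
    # time-time component of the Schwarzschild metric
    return -(1.0 - 2.0 * G * M_true / (c * c * r))

r = 1.0e10           # "large r" stand-in for the limit at infinity
M_adm = c * c * r * (1.0 + g_tt(r)) / (2.0 * G)
print(M_adm)         # recovers ~2e30 kg, the mass we put in
```

Since the far-field metric changes only at the speed of light, this limit cannot react immediately to changes deep inside S, which is exactly the conceptual problem above.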

What we would like to have is the conservation law:

(*) The total energy of the system S plus the energy of the emitted gravitational waves is conserved. We calculate the energy of the waves using some approximation method.

Almost nothing is known about the existence of solutions for the Einstein equations. Therefore, (*) is an open problem.

We need to check the original papers about the ADM formalism. What do the authors state about conservation laws?

Monday, November 9, 2020

Quantization solves the existence problem of the solutions for the Einstein equations?

https://en.wikipedia.org/wiki/Exact_solutions_in_general_relativity#Existence_of_solutions

In 1993, Demetrios Christodoulou and Sergiu Klainerman were able to prove the stability of the Minkowski vacuum under small perturbations.

But the existence of solutions for the Einstein equations remains unproven for essentially all practical cases - that is, if we have a non-symmetric, non-uniform mass distribution.


The Navier-Stokes equations

It is notoriously hard to prove the existence of smooth solutions for non-linear differential equations. The most famous example is the Clay Millennium Problem about the smooth solutions of the Navier-Stokes equations.

Let us think about a real physical fluid, say, water. A milliliter of water contains some 3 * 10^22 water molecules H2O. The Navier-Stokes equations approximate a viscous flow of a very large number of water molecules. The equations are an idealized effective theory of a macroscopic amount of water.
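The molecule count quoted above follows from the molar mass of water; a quick check:

```python
# 1 mL of water weighs about 1 g; the molar mass of H2O is about 18 g/mol.
N_A = 6.02214076e23      # Avogadro constant, 1/mol
molar_mass = 18.015      # g/mol for H2O
mass = 1.0               # g, one milliliter of water

n_molecules = mass / molar_mass * N_A
print(n_molecules)       # ~3.3e22 molecules
```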

A priori, there is no reason why the equations would make sense, or have smooth solutions, if we extend them to the case where a water molecule is infinitesimally small. The Clay Millennium Problem may have little physical relevance.

A water molecule size gives a natural cutoff scale for the Navier-Stokes equations. Approximate solutions of the equations are physically relevant provided that features whose size is of the order of a molecule do not affect the solution.

In the case of water, the quantum of water, a single molecule, saves us from the problem of the existence of smooth solutions.


Maxwell's equations

The electromagnetic field is another example of quantization. Maxwell's equations describe the behavior of a macroscopic classical field. We assume that a photon carries the energy hf, where h is the Planck constant and f is the frequency of a classical (circularly polarized) macroscopic wave.

A very large number of coherent photons form a classical macroscopic wave. But Maxwell's equations do not describe the absorption of a single photon correctly. The equations are not aware of the quanta.

Maxwell's equations are an effective theory. Does it make sense to study the smoothness of the solutions for features whose size is much less than the photon wavelength? Probably not - a very short wavelength would involve a photon of a high energy. How could such a photon be produced? Where would the energy come from?
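A rough estimate shows how quickly the required photon energy grows as the feature size shrinks (the wavelengths below are illustrative values):

```python
# Photon energy E = h*c/lambda grows without bound as the wavelength shrinks.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # vacuum speed of light, m/s
eV = 1.602176634e-19 # joules per electronvolt

for lam in (5e-7, 1e-10, 1e-15):   # visible light, X-ray, sub-nuclear scale
    E_eV = h * c / lam / eV
    print(lam, E_eV)
# A feature of the size 10^-15 m would already require a ~1.2 GeV photon.
```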


The Einstein equations


What about the existence and smoothness of solutions for the equations of General Relativity?

We do not know if the field is quantized. If it is, then we might get a cutoff scale which saves us from considering features smaller than a certain length.

We need to check if the problem in proving the stability of the Einstein equations involves very small features. If the problem is the appearance of singularities, then a cutoff scale would help.

In our earlier post today about a perpetuum mobile we again pointed at the possibility that the Einstein equations, combined with the Lagrangian of an infinitely strong vessel, may have no sensible solutions at all. This potential problem is separate from the stability and smoothness of the pure Einstein equations.


Conclusions


The smoothness and stability problems in various non-linear physical equations probably are not relevant, once we take into account the quantization of the field.

An improved perpetuum mobile for general relativity

The Schwarzschild interior solution has the peculiar property that the spatial metric is not affected by the pressure. Pressure only affects the temporal metric.

The radial spatial metric in the solution is stretched by mass-energy.

Suppose that we have an infinitely strong spherical vessel. If we place some mass-energy M at its center, then the volume of the vessel grows, while the area of the surface of the vessel remains constant.
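The volume excess can be estimated numerically for a uniform-density ball, integrating the stretched radial metric dl = dr / sqrt(1 - 2Gm(r)/(c^2 r)). The Earth-like values below are illustrative:

```python
import math

# Proper volume of a uniform-density ball of mass M and areal radius R
# exceeds the flat value 4/3 pi R^3, because mass-energy stretches the
# radial spatial metric.
G, c = 6.67430e-11, 2.99792458e8
M, R = 5.97e24, 6.37e6      # Earth-like values, illustrative

def m(r):
    # mass inside the areal radius r, uniform density
    return M * (r / R)**3

N = 200000                   # integration steps (midpoint rule)
dr = R / N
V = 0.0
for i in range(N):
    r = (i + 0.5) * dr
    g_rr = 1.0 / (1.0 - 2.0 * G * m(r) / (c * c * r))
    V += 4.0 * math.pi * r * r * math.sqrt(g_rr) * dr

V_flat = 4.0 / 3.0 * math.pi * R**3
print(V / V_flat)            # slightly above 1: the extra volume created by M
```

For Earth-like numbers the excess is tiny (of the order 10^-10), but for a compact mass inside an infinitely strong vessel the same effect is what the argument above exploits.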

Let us fill the vessel with incompressible fluid.

If we can somehow remove the mass M from the center of the vessel, then the pressure inside the vessel grows infinite, and we can, in principle, extract a large amount of energy from the system. We have a perpetuum mobile.

Last year, our attempts to construct a perpetuum mobile were thwarted by the gravitational field of the infinite pressure which develops when we try to remove M.

Let us try a new approach: we convert M instantaneously into light, and let the light escape from the vessel.

                 M
             _       _
            (_| -- |_)  vessel

Let us have a hollow tube which goes vertically through the vessel. We place the mass M inside the tube as an infinitely thin plate, like the dashes -- in the above diagram.

Let us convert M instantaneously into photons. Let us shoot the photons straight up and down from the plate.

Can the mass-energy M escape, leaving behind a vessel which has an infinite pressure?

A global observer perceives the speed of light to be "slower" close to the (large) mass M than inside the vessel. Or does he? If the energy M/2 is moving as photons, what speed does a global observer perceive?

Could it be that the infinite pressure created inside the vessel has enough time to affect the photons which carry M away, and can deflect almost all of them back to the center of the vessel?

We do not know if the Einstein equations have any solution for the setup of the sketched perpetuum mobile. It might be that the equations do not give any prediction of the behavior.

In this blog we have conjectured that the Einstein equations are too "strict" and that no physically reasonable solution for them exists for many macroscopic setups. We have suggested that a switch to some kind of a rubber model would remove the excess strictness. How would a rubber model handle our perpetuum mobile?

The speed of sound inside a neutron star CAN exceed the vacuum speed of light

https://www.nature.com/articles/s41567-020-0914-9

Evidence for quark-matter cores in massive neutron stars by Eemeli Annala, Aleksi Vuorinen et al. (Nature Physics, June 1, 2020) studies the mass distribution of a neutron star, assuming an arbitrary function f:

       pressure = f (energy density of matter).

The authors mention that most hadronic models predict that the speed of sound squared, c_s^2, is equal to 0.5 or larger for high densities (we use natural units where c = 1).
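The definition c_s^2 = dp/de (in natural units c = 1) can be sketched for a hypothetical linear equation of state; the function f below is an illustrative stand-in for the arbitrary f of the paper:

```python
# Speed of sound squared c_s^2 = dp/d(energy density), natural units c = 1.
# Hypothetical linear equation of state with slope 0.5, so c_s^2 = 0.5,
# matching the hadronic-model limit mentioned above.
def f(e, e0=1.0):
    # pressure as a function of the energy density e (illustrative)
    return 0.5 * (e - e0)

def cs2(e, h=1e-6):
    # central-difference estimate of the derivative dp/de
    return (f(e + h) - f(e - h)) / (2.0 * h)

print(cs2(2.0))   # 0.5
```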

Is it possible for the speed of sound to exceed the vacuum speed of light?

The answer is definitely yes, if we define the speed of sound as the phase speed of a well-formed periodic pressure wave (a "sine" wave) - that is, as the speed of a wave crest. A crest of an infinite sine wave does not transmit any information from a point A to a point B. The speed of the crest is not constrained by the universal signal speed limit, that is, the vacuum speed of light c.

https://motls.blogspot.com/2020/10/a-fun-calculation-of-maximum-speed-of.html

https://arxiv.org/abs/gr-qc/0703121

Lubos Motl in his blog post (2020), as well as George Ellis et al. in their arxiv paper (2007), claim that the "causal limit" for the speed of sound is the speed of light, c. But they fail to define exactly what they mean by the speed of sound.

When considering the stiffness properties of matter, the natural definition for the speed of sound is the phase speed, not the signal speed. The phase speed depends on the stiffness, and can exceed the speed of light.

                          string

      wall |--------------------------| wall

As a practical example, consider a tense string which is attached to walls at its endpoints. If we pluck the string, we can create a standing wave in it. There is no speed limit for the phase speeds of the two sine wave components of the standing wave. The standing wave does not transport any information and is not constrained by the speed of light.
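The decomposition of a standing wave into two counter-propagating components can be checked numerically; the wavenumber and angular frequency below are arbitrary illustrative values:

```python
import math

# Standing wave = sum of two counter-propagating sine waves:
#   sin(kx - wt) + sin(kx + wt) = 2 sin(kx) cos(wt)
k, w = 2.0, 5.0   # wavenumber and angular frequency (illustrative)

for x in (0.1, 0.7, 1.3):
    for t in (0.0, 0.2, 0.9):
        lhs = math.sin(k*x - w*t) + math.sin(k*x + w*t)
        rhs = 2.0 * math.sin(k*x) * math.cos(w*t)
        assert abs(lhs - rhs) < 1e-12

# The nodes, where sin(kx) = 0, stay fixed for all t: no information
# propagates along the string, even though each component's phase
# speed w/k is unconstrained by the signal speed limit.
print(w / k)      # phase speed of each component, here 2.5
```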