Thursday, May 22, 2025

Retardation and dark energy

Let us have a collapsing spherical shell of dust. In this blog we have been claiming that a clock at the center cannot know the gravity potential right "now" in the coordinate time, and will tick faster than predicted by general relativity. We claim that the rate of a clock is an "effect" of the gravity field, and the effect cannot propagate faster than light.

We have been struggling to calculate how this will affect the collapse of a uniform dust ball.

In FLRW models, the expansion of the (dust-filled) universe slows down as if gravity were newtonian: we can study a small spatially spherical volume of the universe, and we obtain the deceleration from newtonian gravity.

We believe that the dust-filled universe in an FLRW model behaves like an expanding dust ball in an asymptotically Minkowski space.

We have been struggling to incorporate the retardation effect into this newtonian model of the expansion (or collapse) of a dust ball.

Let us, once again, try to figure out what happens. We will study a collapse of a uniform dust ball in an asymptotically Minkowski space.


Retardation makes the gravity potential well shallower: a rubber sheet model


We believe that a clock inside a collapsing dust ball is not aware of the acceleration of individual dust particles far away. Therefore, the potential, as defined by the clock rate, is higher than one would calculate by assuming that the clock knows the configuration "right now" in the coordinate time.


                          bulge
      ___                                        ___  rubber sheet
            \•______----------______•/
   weight  --->                  <--- weight
      

In a rubber sheet model of gravity, a contracting ring of weights will have the rubber sheet bulged upward at the center, because the rubber at the center of the ring does not yet know that the weights in the ring have been accelerated as they fall lower.

As the weights slide lower, potential energy is released. Most of the potential energy goes to the kinetic energy of the weights. Some goes to a "longitudinal" wave in the rubber sheet, that is, to the stretching of the bulge and to the kinetic energy of the rubber sheet.

In this model, the "gravity" simulated by the rubber sheet is somewhat weaker, because not all of the released potential energy goes to the kinetic energy of the weights.

Presumably, the weakening effect grows stronger when the ring contracts. But could it entirely cancel the acceleration of the simulated "gravity" at some point? (Dark energy seems to be accelerating the expansion of the universe.)

Then all the potential energy released would at some point go to the deformation of the rubber sheet and the kinetic energy of the rubber sheet.

      
                   • -----______----- •
                              pit


Could it be that the bulge has time to flip into a "pit" at some point? Then the simulated "gravity" would appear stronger.


Modeling the retarded gravity field with a rubber sheet: a singularity appears


Let us have a mass M which is initially static. Then we start to accelerate it at a constant acceleration a.


                                                 observers
               ● ---> a      R              ×     r     ×
              M                                2            1

             ----->
      

Let observer 1 be such that he is not yet aware of the acceleration of M. 

Let us use the standard retardation rule: observer 2 sees the location of M as if M had kept moving at the constant velocity v, where v is the velocity of M in the last observation that reached observer 2.

The gravity potential V(x) seen by observer 2 is continuous in x, but the derivative dV / dx is not continuous. In a rubber sheet model, the derivative should be continuous?

                   ____
                  /               --> v
           -----
        rubber sheet


If we have a sharp turn moving in the rubber sheet, then the acceleration of the sheet is infinite at the turn. It is a singularity – nonsensical.


An electric charge which is suddenly accelerated


(figure: Purcell's diagram of the kinked electric field lines of a suddenly accelerated charge)

How does the Edward M. Purcell calculation handle the analogous case for the electromagnetic field?

The electric field lines above are continuous, but make sharp turns. Can we produce those turns with a time-varying magnetic field which does not contain a singularity?

       ∇ × E  =  -dB / dt,

       ∇ × B  =  μ₀ ε₀ dE / dt.

We can determine the curl of B from the time-varying field E. Is there a guarantee that B will not contain sources, and that the curl of E satisfies the upper equation?

If we used the standard retardation formula for the electric potential V, then at the rightmost point of the circle above,

       dE / dt

would be infinite. But the requirement that the electric field lines are continuous means that we cannot apply the retardation rule to the potential V at that point.

What about the sharp turns of the electric field lines at other places? There, dE / dt is infinite and the curl of B is infinite. That seems nonsensical. Suppose then that we make the turns smooth.

When the charge q moves to the right, the magnetic field B takes the well-known form of field lines circling the trajectory of q.

Let us accelerate q to the right. The second formula above is, at least approximately, satisfied. The same holds for the first formula.

In electromagnetism, the electric potential V does not have any direct effect on anything. It does not make clocks tick slower. Therefore, the rubber sheet model does not describe electromagnetism. The potential V can change infinitely fast inside a contracting or expanding shell of charges.

We conclude that the electromagnetic analogy is not useful for us.


A new take: a tense rubber sheet


Let us try to prove that the bulge can turn into a pit. 


                          bulge
      ___                                         ___ rubber sheet
             \•______----------______•/
   weight --->                  <--- weight


Let the rubber sheet be very tense. We can assume that the weights move much slower than the waves in the rubber sheet.

Let us assume that the weights start moving. A bulge forms. Could it turn into a pit as the rubber sheet tries to straighten itself up at the center area?

No, it is not possible.


The clock on the bulge runs fast: the observer at the bulge sees the contraction of the dust shell slow down mysteriously?


This could finally explain why the collapse of a dust shell appears to slow down. If we reverse time, then the expansion would speed up mysteriously, just like in dark energy.



***  WORK IN PROGRESS  ***



Sunday, May 11, 2025

Problems in the anthropic principle

People appeal to the anthropic principle to explain the following observation:

- the laws of nature seem to be fine-tuned to allow biological life.


Anthropic reasoning goes this way: if the laws were not fine-tuned, then there would exist no observer to wonder about the fine-tuning.

Let us analyze what exactly is involved in the reasoning. What are the assumptions?


If there can only exist one universe, then the anthropic principle does not explain anything


Suppose that the laws of nature dictate that exactly one universe must exist: not zero, not two.

Why are the laws of nature fine-tuned to allow biological life and sentient observers in that one universe?

The anthropic principle in this case does not explain anything. It could well be that the only universe is not suitable for life.

The existence of observers in our universe is an a posteriori observed fact. There is nothing a priori which requires observers.


Religious assumptions: the one universe must be fine-tuned for life


Let us assume that laws of nature dictate that exactly one universe must exist.

There might be a law of nature which requires that observers must exist in that one universe, and that I must be born as a creature which as an adult will become an observer. This would explain my a posteriori observation.

These assumptions resemble a religion: the laws of nature must be fine-tuned for life. It is like the creation myth in the Bible.

John D. Barrow and Frank J. Tipler formulated a hypothetical law of nature: the universe must be constructed in such a way that intelligent observers will arise.


A multiverse and the anthropic principle


Let us then assume that there exists a vast number of universes, with different laws of nature. Some models in "string theory" have this assumption: the multiverse.

Then there might exist a very large number of universes suitable for life. Many universes will contain observers.

Let us assume the following:

1.   My "soul" is predestined to be born inside a creature which will be sentient and become an observer as an adult.

2.   There are many universes with such creatures.


Then it is not surprising at all that I was born into a universe which contains observers.

Note that we need assumption 1, too, in addition to 2. If I were predestined to be born either as a human, a rock, or an electron, then 2 would not explain why I happen to be an observer. Why am I not an electron?


Why do I find myself being one of the first intelligent observers on Earth?


There have been a huge number of vertebrates on Earth in the past 530 million years. Only about 10 billion humans have, so far, been aware of other galaxies besides the Milky Way.

Can an anthropic principle explain why I am among the first such observers on Earth?

If Earth could not harbor life, there would exist no such observers. This is the standard anthropic argument.

But there is no reason why I should be among the first. Why not the 10²⁰th such observer?

We come back to the doomsday argument.


Observers could exist in the universe, but I could be a fish?


The a posteriori observation which I have made is that intelligent observers exist in a universe which contains billions of galaxies.

If I were a fish living in the year 2025, I would not have made such an observation, even though it is currently true.

What do anthropic principles say about this?

They say that life must exist in our universe. But do they say anything about why I am not a fish?

An analogous question: suppose that on some exoplanet there exists an observer who is far superior to humans. A man is like a fish to this superobserver. Why was I born as a human (= fish), and not as the superobserver?

Weak anthropic principles basically say that for an observer of a type A to exist, the universe must be such that A can exist. That is close to a tautology. Obviously, it cannot answer more complex questions, like "why was I not born as a fish?"

A human fetus may not be much more intelligent than an adult fish. If my "soul" chose its birthplace at random from creatures which have some rudimentary intelligence, it is extremely unlikely that I would be born as a human fetus which as an adult will be one of the first 10 billion humans to know about galaxies.


We need stronger principles than the weak anthropic principles, to explain why "I" am among the first intelligent observers on Earth


Let us look at so-called strong anthropic principles.

Barrow and Tipler (1986) proposed at least the following variants:

1.   laws of nature require that there must exist exactly one universe, and that universe must contain observers;

2.   laws of nature require that any existing universe must contain observers (e.g., in quantum mechanics, an observer is required to make the wave function collapse);

3.   laws of nature require that there must exist many universes (and some must contain observers).


All these imply that there exists at least one universe with observers. But they do not explain why I was not born as a fish. They do not explain why I was able to observe that this universe contains observers. Furthermore, they do not explain why I am among the first ~ 10 billion intelligent observers on Earth.


The Copernican principle



The Copernican principle can be stated like this: the physical location of the Solar system is "typical" in the universe. It is not the center of any important cosmological structure. The principle is strictly in contradiction with the Ptolemaic model of the Solar system where Earth is the center of everything.

Empirical observations strongly support the Copernican principle. 

What about the Copernican principle in the time dimension?

The doomsday argument is a Copernican principle with respect to time.

Do we exist at a "typical" time in the history of the universe? No. The universe is expected to be very much suitable for life for at least 1,000 billion years. We are living in a "young" universe.

The Copernican principle does not seem to hold on the surface of Earth. I am not a fish.


Conclusions


"Weak" anthropic principles are almost tautologies.

"Strong" anthropic principles contain a very brave hypothetical law of nature: a universe must necessarily produce "observers" at some point of time.

The Copernican principle is true for the spatial location of the Solar system, but it is not true with respect to the time dimension. Furthermore, on Earth the spatial Copernican principle does not hold at all: I was born a human and not a fish, even though there are many more instances of fish than of humans.

Can we conclude that there must exist a mysterious law of nature "outside our universe", which places us to the current epoch and into conscious observers called humans?

Yes, that is the natural conclusion. Note that even if we were placed at a random epoch, that, too, would constitute a law of nature: you can expect to exist at a random epoch.

Ordinary laws of physics do not say anything about where and when we can expect to exist as an observer. Laws of quantum physics do talk about a collapse of a wave function caused by an "observer". Usually, people assume that the "observer" can be any large object which causes the wave function to "decohere". That does not require that I am the observer.

We can talk about natural laws of the placement of the subject. The placement seems to be non-random.

Suppose that you buy a new computer game and choose the character you are going to play in the game. The game is the "universe" and the character is the "observer". Obviously, the character will probably not be a random character in the game.

The hypothesis that our universe is a computer game explains why we find ourselves living during a very special epoch as very special observers. The hypothesis implies strong anthropic principles: a computer game always contains "observers" if it has players.

What other hypothesis could explain our special position?

The doomsday argument is refuted because we are not born as random humans. The big oversight in the doomsday argument is that it assumes that we somehow know the prior probabilities of how we would be placed as observers. We do not know.

Suppose then that we would find ourselves as random observers. Why would we be random? What hypothesis could explain that?


In Plato's allegory of the cave, people have spent their entire life chained to the wall of a cave. They see shadows projected on the opposite wall. They do not see the real world, only shadows. However, through philosophy, one can learn about the real world.

Plato's allegory is somewhat similar to the computer game hypothesis. We are living inside a computer game. But through mathematics and logic we can learn something of the real world which exists outside the game.

When I started this blog in 2013, I, unfortunately, named it "metaphysical thoughts". The blog has been about physics, not metaphysics. The current blog post and the previous one can honestly be called "metaphysics". We finally have some content which fits the name of this blog!

Saturday, May 10, 2025

The doomsday argument

The doomsday argument was popularized in the 1980s by the astrophysicist Brandon Carter, a colleague of Stephen Hawking.


The argument is so simple that it has been invented several times in the course of history.

People estimate that ~ 100 billion individuals of homo sapiens have lived on Earth in the past 250,000 years. Suppose that I was "chosen" by some mechanism to be a random individual among all the homo sapiens who will ever live.

Then it is likely that the total number of homo sapiens who will ever live will be something like ~ 200 billion. If we assume that the population of Earth will stay at 10 billion, then homo sapiens is expected to go extinct in ~ 700 years.
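
A minimal Python sketch of this arithmetic (the steady 10 billion population and the 70-year lifespan are our assumptions for illustration):

    # Doomsday arithmetic: ~100 billion humans so far; if I am a random human,
    # roughly 100 billion more are expected to be born after me.
    past_humans = 100e9
    future_humans = past_humans

    population = 10e9            # assumed steady population of Earth
    lifespan = 70.0              # assumed average lifespan, years

    births_per_year = population / lifespan
    print(future_humans / births_per_year)   # 700 years to extinction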

How could the number of future homo sapiens be so low, only 10¹¹? If humans colonize exoplanets, there might be 10¹⁷ humans living in the next 10,000 years.

The doomsday argument is an example of anthropic reasoning.



Could it be that humans will be replaced by artificial intelligence machines?


What if artificial intelligence machines will replace humans in the future? Communication between exoplanets is slow. Each colonized exoplanet must have at least one instance of AI, autonomous from other instances. If the AI sends space probes, each probe must have an autonomous AI instance.

If humans are replaced by AI machines, we expect a very large number of autonomous, individual AI machines to exist in the future.

Why was I born as an instance of homo sapiens and not an instance of an AI machine? Or is it so that only ~ 100 billion AI machines will exist in the future?


Why was I born as homo sapiens, and not a random vertebrate?


Vertebrates appeared on Earth 530 million years ago. There has been a huge number of individuals in the past. The 100 billion instances of homo sapiens is an extremely tiny fraction of all instances of vertebrates which have lived.

Why am I not a random instance of a vertebrate, living some time in the past 530 million years?

Maybe I was destined to be born as a "self-conscious" being? But I was not born that way: a fetus or a newborn child is not self-conscious in the way that an adult is.


The self-indication assumption objection refutes the doomsday argument


The self-indication assumption is one way to refute the doomsday argument. Suppose that I am one of an almost infinite number N of "souls" who may be born as homo sapiens, or not be born at all. If I then find myself born as homo sapiens, I cannot deduce anything about the number of homo sapiens who will ever live.

Let us have two possible worlds:

- A: 200 billion homo sapiens will live;

- B: 10¹⁰⁰ homo sapiens will live.


Let, a priori, the probability of A and B be both 0.5.

Let me find myself living as the 100 billionth instance of homo sapiens in the world. Can I now deduce anything more about the probabilities of A and B?

No. In both cases, A and B, the probability of me being born as the 100 billionth instance is the same.
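
A minimal Bayesian sketch of this claim, under the soul-pool model above (the pool size N is hypothetical, chosen much larger than any population involved):

    # Worlds A and B, prior 0.5 each. Under the soul-pool model, each fixed
    # soul lands in any given birth rank with probability 1 / N in both worlds.
    N = 1e110                    # hypothetical size of the soul pool

    prior = {"A": 0.5, "B": 0.5}
    likelihood = {"A": 1 / N, "B": 1 / N}   # P(I am the 100 billionth | world)

    evidence = sum(prior[w] * likelihood[w] for w in prior)
    posterior = {w: prior[w] * likelihood[w] / evidence for w in prior}
    print(posterior)             # {'A': 0.5, 'B': 0.5}: no update at all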


Self-indication assumption and vertebrates


The self-indication assumption does not answer the question: why was I born as an intelligent vertebrate (homo sapiens), and not as an average vertebrate (a small fish)?

There may exist a huge number of homo sapiens in the future, but that does not explain why I was born as homo sapiens at a time when only a tiny fraction of all instances of vertebrates have been homo sapiens.


I was not born as a random instance of a vertebrate?


This is the natural hypothesis: I was somehow predestined to be born as a being which becomes self-aware as an adult. There was no randomness in this.

Then it is also natural to assume that I was not born as a random instance of homo sapiens.

The doomsday argument is refuted because there is no randomness.

If there is no randomness, then there is some law of nature outside our universe. That law of nature brought me here.

The doomsday argument itself assumes that there is a law of nature outside our universe: that a "random process" causes me to be born as an instance of homo sapiens.


Criticism of the self-indication assumption




Nick Bostrom (2002) argued that the self-indication assumption leads to absurd consequences. Bostrom's argument is called the "presumptuous philosopher".

Suppose that we have two possible cosmologies:

- A: there are only 10¹¹ conscious observers in the universe;

- B: there are 10¹⁰⁰ conscious observers in the universe.


A priori, A and B each have a probability 0.5. I am a conscious observer. Can I deduce whether I am in A or B?

Let us use the self-indication assumption. Let us assume that "souls" are assigned at random from the very large pool of N souls to the universe. Let us assume that I am a random soul in the pool. I realize that I was born. This implies that I am almost certainly in the universe B, not in A. Let us call the reasoning in this paragraph P.
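
A minimal sketch of the reasoning P with concrete numbers (the pool size N is again hypothetical): the probability of being born at all is proportional to the number of observers, which drives the posterior of B to nearly 1.

    # Reasoning P: P(I am born at all | world) = observers in that world / N.
    N = 1e110                    # hypothetical size of the soul pool

    prior = {"A": 0.5, "B": 0.5}
    observers = {"A": 1e11, "B": 1e100}
    likelihood = {w: observers[w] / N for w in prior}

    evidence = sum(prior[w] * likelihood[w] for w in prior)
    posterior = {w: prior[w] * likelihood[w] / evidence for w in prior}
    print(posterior)             # B gets probability ~ 1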

Nick Bostrom is quoted in Wikipedia saying that it is "absurd" if one can draw the conclusion above in the reasoning P. He believes that the absurdity discredits the self-indication assumption.

We in our blog do not think that the reasoning P is absurd – rather, P is the natural conclusion if the randomization model and the pool of N souls are correct.

The weakness in P is that we do not know if the assumptions are correct. It certainly does not look like that my soul was assigned randomly into the universe.


Conclusions


The doomsday argument probably does not hold. It is easily refuted by the self-indication assumption. It is also refuted if there is no randomness. We cannot claim that only ~ 200 billion humans will ever live.

There seems to be a law of nature which is outside our universe. That law of nature decides in what role I am born.

In quantum mechanics, the collapse of the wave function, or in the many-worlds interpretation, how we choose the branch in which we live, is an unsolved mystery. The mystery looks like the question "why was I born as this instance of homo sapiens?"

We have to think about this: what kind of a law of nature outside our universe could possibly solve the mystery in quantum mechanics?

Thursday, April 24, 2025

John Bell's inequality: did John von Neumann derive it in 1932? No

John von Neumann's 1932 book about the foundations of quantum mechanics contains a proof, which, according to von Neumann, shows that quantum mechanics cannot be simulated with a "hidden variable" theory.


Grete Hermann in 1935 claimed that the proof contains a flaw, and John Bell in 1966 came up with a similar criticism.

Who is right?


Quantum mechanics is about waves: can we model quantum mechanics with "non-wavelike" phenomena?


A wave packet which models the position and the momentum of a particle is a typical quantum mechanical object.

Can we somehow build a wave packet from "non-wavelike" phenomena?

Intuitively, it is not easy. We can build a wave packet as a sum of two wave packets, but how could we build a wave packet from, say, ten real numbers?

Suppose that we have a sample of 1,000 classical particles, each of which has a definite position and momentum. This sample can approximately model a wave packet, but not exactly.

We have not yet checked the details of the proof by von Neumann. We believe that von Neumann aimed to show that, under certain assumptions, one cannot build a wave from a (finite) number of non-wavelike subsystems.


Criticism by Hermann and Bell: a digital computer


Grete Hermann and John Bell criticized the proof by von Neumann. They said that it might still be possible to simulate wavelike behavior with hidden variables.

Suppose that we have a digital computer which models and calculates a wave packet. Certainly, we can, at least approximately, simulate the wave packet in the computer. A digital computer is not a "wavelike" object. We proved that one can, approximately, simulate a wave with non-wavelike objects.
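
As an illustration, here is a minimal sketch of such an approximate simulation: a free gaussian wave packet evolved exactly in momentum space. The grid size, the packet width, the momentum, and the units ħ = m = 1 are our arbitrary choices:

    import numpy as np

    # A digital computer approximating a quantum wave packet: a free gaussian
    # packet, evolved in momentum space (units hbar = m = 1).
    N = 1024
    x = np.linspace(-40, 40, N, endpoint=False)
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

    k0 = 2.0                                   # mean momentum of the packet
    psi = np.exp(-x**2 / 4) * np.exp(1j * k0 * x)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

    # Free evolution for time t: multiply by exp(-i k^2 t / 2) in k-space.
    t = 3.0
    psi_t = np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * k**2 * t / 2))

    print(np.sum(np.abs(psi_t)**2) * dx)       # norm stays ~ 1
    print(np.sum(x * np.abs(psi_t)**2) * dx)   # mean x ~ k0 t = 6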

Could it be that a computer could simulate a wave packet exactly?

The random number generator is a problem. A purely digital computer must contain a deterministic pseudo-random number generator. This implies that the behavior of the computer program differs from that of an idealized quantum system, and that we can, in principle, observe that the output of the program is not like that of a true quantum system. We conclude that a digital computer cannot simulate a quantum system exactly.

Maybe this is what von Neumann proved?


The de Broglie-Bohm model contains a "pilot wave"


The model of Louis de Broglie and David Bohm (1952) contains a global "pilot wave". The particle is like a small boat which sails on the pilot wave.

We know that the model can reproduce the standard quantum mechanical results, like the interference pattern in a double-slit experiment.

The model does contain a wave. It is not surprising that the model can describe wave phenomena.

But the model cannot simulate the "true" random behavior of quantum mechanics? If we initialize the particles to certain definite values, then the system certainly will not behave in a truly random way. It is just like in the case of a digital computer.


Bell's inequality (1964)



The famous inequality of John Bell (1964) shows that one cannot model the results of the Einstein-Podolsky-Rosen experiment with a "local hidden variable" theory. The state of the two particles at distant locations cannot be simulated with a model where each particle possesses a determined unique state.

The state of the two particles is "entangled". It cannot be split into two determined parts.

The result by Bell refers to locality, while the result of von Neumann does not. It is now clear that von Neumann did not prove Bell's theorem in 1932.


Did von Neumann prove a triviality?


How could a short argument about expectation values possibly rule out hidden variable models?

It has to be magic – or a triviality.

Suppose that we have a true random generator which outputs either -1 or 1 at random. Then the expectation value for its first output is 0.

But if it is a pseudo-random generator, which uses a complicated mathematical formula to generate a sequence of pseudo-random numbers, then the expectation value of the first output is -1 or 1 – not 0.
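
A toy illustration of this difference, with Python's seeded generator standing in for the "complicated mathematical formula":

    import random

    # A seeded pseudo-random generator: its first output is a fixed number,
    # so the expectation value of the first output is -1 or 1, not 0.
    def first_output(seed):
        return random.Random(seed).choice([-1, 1])

    print(first_output(42) == first_output(42))    # True: fully deterministic

    # Averaging over many different seeds mimics a true random generator,
    # but for any one fixed seed the "expectation value" is just that output.
    outputs = [first_output(s) for s in range(10000)]
    print(sum(outputs) / len(outputs))             # close to 0 over seeds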

Did John von Neumann prove this triviality?


Tom Harper (2023) has posted a video and a paper where he analyzes arguments by Louis Caruana (1995) and by Mermin and Schack (2018).
and Mermin and Schack (2018):


According to Caruana, von Neumann proves the following trivial fact:

- If quantum systems A and B are truly indistinguishable, then there cannot exist any hidden variables whose state could differ between A and B.


The criticism by John Bell (1966)



"Dispersionless", informally, means that a system is a classical particle, or some other classical, not quantum, object. It cannot be a quantum object in a superposition state.

Suppose that we have a quantum system which consists of dispersionless subcomponents. In a sense, the quantum system then is determined by hidden variables. An example: a tense plastic string behaves much like a wavelike object (quantum), but at a low level we can decompose it into atoms, which might be non-wavelike.

In section III of the paper, John Bell writes that one should not require the additivity of expectation values of different measurements for dispersionless subcomponents. Bell's statement is suspicious. The parameters of a classical particle can be measured arbitrarily accurately, without disturbing the system. That is, all measurements "commute", in the terminology of quantum mechanics. Then the expectation value of 

       measured position / meter

       + measured momentum / kg m/s

is the sum of the expectation values of each summand!
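
A minimal sketch of this additivity for classical particles (the gaussian distributions are arbitrary choices):

    import random

    # Classical (dispersionless) particles: every quantity can be read off
    # without disturbing the system, and expectation values add linearly.
    rng = random.Random(1)
    particles = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 2.0))
                 for _ in range(100000)]

    n = len(particles)
    mean_x = sum(x for x, p in particles) / n
    mean_p = sum(p for x, p in particles) / n
    mean_sum = sum(x + p for x, p in particles) / n

    print(mean_sum, mean_x + mean_p)   # equal, up to float rounding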

If we have an arbitrary hidden variable model which outputs values for measurements, then the question what is the expectation value of

       position + momentum

is not well defined. Let the hidden variable model be a computer program which outputs a number when we type either "position" or "momentum". What does the "expectation value" of position + momentum mean? We cannot type that string as the input to the computer. Should we first type "position" and then "momentum", and take the sum?


Conclusions


John von Neumann proved a triviality, something like this:

- You cannot simulate a genuine random number generator with a pseudo-random number generator, because for a pseudo-random generator you can, with a mathematical formula, predict the output of the generator!

Von Neumann used his operator algebra for the proof. The algebra is somewhat complicated, and people have had a hard time evaluating the impact and relevance of the proof.

Hidden variable models are not trivial. A trivial result will not illuminate the problem much.

Our own contribution in this blog is noting that since quantum mechanics is about waves, simplifying a system into a few real numbers is not going to work in most cases.

This implies the famous result of John Bell: if you assume that the "wave" describing particles A and B has collapsed (i.e., the wave has been reduced into a few real numbers), then you will not be able to produce the phenomena of quantum mechanics. Letting a wave collapse often destroys information – but that information is needed to reproduce quantum phenomena.

To produce the interference pattern in the double-slit experiment, you need a wave. A classical particle which is characterized by a few real numbers (its velocity components) will not do.

Since the de Broglie-Bohm model includes the pilot wave, it can produce the interference pattern with hidden variables, deterministically.

We conclude that the critics of the von Neumann proof were right: the proof is not relevant for hidden variable models.

Friday, April 18, 2025

Chyba, Hand, Hossenfelder and generating electricity from a stationary magnetic field? It does not work

Sabine Hossenfelder (2025) is advertising a strange invention, almost a perpetuum mobile, which is supposed to stand static relative to the surface of Earth and produce an electric current from:

1.   the dipole magnetic field B of Earth, and

2.   the rotation of Earth, as the device stands static relative to the surface of Earth.

The claim that such a simple device can extract energy is very suspicious.


The original paper of Chyba and Hand appeared in 2016.

J. Jeener (2018) refutes the result of Chyba and Hand.



From where would the energy to the electric current come? The dipole magnetic field of Earth does not change anything


Let us assume that Earth is a rigid charged sphere whose magnetic field is produced by charges which are static relative to Earth.

If the device of Chyba and Hand produced in a circuit loop L an electric current I which can do work, it should be able to tap the rotation energy of Earth. The device must be able to send some of the angular momentum J of Earth to space, through electromagnetic waves.

Specifically, if the current I in the circuit loop is to do work on a resistor Ω, then the radiation of angular momentum to space should grow by so much that the corresponding energy can be extracted locally in the loop.

Let us first assume that Earth is uncharged, rotating, and there is a current I in the loop L.

Let the amount of angular momentum radiated by the system be

       dJ / dt.

Does it change if we add a static dipole magnetic field B₀ to Earth?

Far away from the system, the electric field E of the radiated electromagnetic wave oscillates or rotates. The Poynting vector

      S  =  1 / μ₀  *  E  ×  B₀

is, on the average, zero. Adding the static dipole field B₀ does not change anything, in terms of the radiation.
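
A numeric sketch of this average: an E field oscillating along one axis, crossed with a static B₀ along another. The frequency and field magnitudes are arbitrary choices:

    import numpy as np

    mu0 = 4e-7 * np.pi
    t = np.linspace(0, 1, 100000)       # one second of samples
    omega = 2 * np.pi * 50.0            # 50 full oscillation periods

    E = np.zeros((t.size, 3))
    E[:, 1] = np.sin(omega * t)         # oscillating radiation field, along y
    B0 = np.array([0.0, 0.0, 1e-5])     # static dipole field, along z

    S = np.cross(E, B0) / mu0           # Poynting vector at each sample time
    print(S.mean(axis=0))               # ~ [0, 0, 0]: zero average energy flux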

The energy extraction of the device does not depend on the magnitude of B₀ in any way. We cannot say that the magnetic field B₀ of Earth helps the device in any way.

The device can extract energy from the rotation of Earth if it contains an electric charge or a permanent magnet. Then the device radiates the angular momentum of Earth slowly away in electromagnetic waves. But this is not the mechanism which Chyba and Hand allege to exist.

We conclude that the claim of Chyba and Hand is erroneous.


Electromotive force in a circuit loop



Faraday's law states that the electromotive force (the path integral of the electric field) around a circuit loop is

       -dΦ / dt,

where Φ is the magnetic flux through the loop.

The device of Chyba and Hand stands static on the surface of Earth. As time progresses, there, obviously, is no change in Φ through the loop. There is no electromotive force and there is no current.


Empirical measurements by Chyba, Hand, and Chyba (2025)



The measurements supposedly show power generation which barely exceeds the error brackets of the measurement. This is typical for claims about perpetuum mobiles. The device never produces 1 kW or 1 MW of surplus energy. That would be easy to detect.

It is always something which is very difficult to measure – because it does not really generate any power.

The same was true for "cold fusion" experiments.


Conclusions


It is extremely unlikely that the device of Chyba and Hand could produce electric power. Sabine Hossenfelder should correct her video blog to include the refutation by J. Jeener (2018).

Thursday, March 27, 2025

Retardation of clocks from acceleration in a collapsing dust ball

Let us try to estimate the effect of clock retardation in a collapsing dust ball, or in an expanding universe, if retardation is based on the acceleration of matter. That is, clocks are able to anticipate the effect of a mass shell contracting or expanding at a constant velocity.

We would assume that a clock at the center of the shell "calculates" the radius of the shell based on the latest contraction velocity v that the clock knows of.

Is this a reasonable assumption?

Let us look at a single mass M and a test mass m (or a test clock) in the field of M. The usual retardation rule is that if M moves at a constant speed v, then the test mass or test clock knows the gravity field of M as if m knew the current position of M in the laboratory coordinates.

It makes a lot of sense to assume that a test clock m can "calculate" its own gravity potential based on the assumption that masses M continue their movement at a constant velocity. The clock then adjusts its rate according to that gravity potential.

Another way to look at this is to assume that M is static in the laboratory coordinates, and the test clock m moves around. An atomic clock on the surface of Earth can adjust its ticking based on its distance from the center of Earth.

In our March 15, 2025 blog post we calculated the retardation inside a shell assuming that the clock is not aware of the contraction speed v of the shell. That yields a very large retardation effect. If the ignorance of the clock only concerns the acceleration of the shell, the retardation effect is much smaller. But is the effect still large enough to explain dark energy?


A crude calculation based on the expansion of the universe


The "current" radius of the observable universe is estimated to be 46 billion light-years. The age of the universe is 13.7 billion years.

The universe was expanding significantly faster than now, say, 6.9 billion years ago.


The scale factor in the matter-dominated phase is

       a(t)  ~  t^⅔.

The time derivative is

       da / dt  ~  1 / t^⅓.

When the age of the universe was a half of the current age, the expansion speed was

       2^⅓  =  1.26

times the current speed.
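
A quick numeric check of this figure:

    # Matter-dominated era: a(t) ~ t**(2/3), so da/dt ~ t**(-1/3).
    t_now = 13.7e9               # years, the age of the universe
    t_half = t_now / 2

    ratio = t_half**(-1/3) / t_now**(-1/3)
    print(ratio)                 # 1.2599... = 2**(1/3)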

If a clock at the center of the observable universe "calculates" its rate by assuming that the universe would still be expanding at that 1.26X speed, then the clock will overestimate its gravity potential and will tick too fast. The speed of light is "too fast" close to the center, which means a repulsive force from the center.

Could it be that the current value of matter and dark matter Ω = 0.3 has something to do with the observed acceleration of the expansion?

Let us try to estimate the repulsion, using the comoving coordinates of the "dust" (= matter and dark matter in the universe). On January 18, 2025 we argued that gravity looks very much newtonian in comoving coordinates. But we did not consider retardation of clocks then.


Conservation of energy in a rubber sheet model of gravity


We can simulate the collapse of a dust ball by letting small slippery weights slide toward a central depression in the rubber sheet.

In the rubber sheet model, longitudinal, spherically symmetric waves exist.

A rubber sheet model allows the collapse process to "oscillate". The potential energy of the weights flows in a complicated way into the elastic energy of the sheet, as well as to the kinetic energy of the weights.

We want energy conservation in the collapse process. The rubber sheet guarantees energy conservation.

In a simplistic retardation model, conservation of energy probably would be breached.

We conclude that a satisfactory retardation model must involve something similar to the rubber sheet model of gravity.

In the rubber sheet model, is it possible that the collapsing dust ball could send longitudinal waves also outward?

The total weight of the dust ball, including its kinetic energy and the longitudinal waves, stays constant. Therefore, during the early stages of the collapse, the shape of the rubber sheet outside the ball stays constant. But could it be that once the longitudinal wave inside the ball has moved past the center, it could come out of the ball?

A simpler case is a circular ring of weights sliding toward the center.


Any "delayed" spherically symmetric process requires the existence of longitudinal waves?


There are no spherically symmetric transverse waves. If we have a spherically symmetric process which alters some local parameter (e.g., slowing down of clocks), the process maybe has to be "relayed" through longitudinal waves?

If we can measure the propagation of the wave, we presumably can extract energy from the wave?

As an example, consider the density of air in a spherically symmetric vessel. Let us suddenly contract the vessel. Eventually, the air pressure inside the vessel must even out. The process happens through sound waves, which are longitudinal waves.

The literature seems to claim that there are no longitudinal waves in general relativity. Let us investigate this.


Move a mass M suddenly closer to a clock: general relativity cannot satisfy Gauss's law


             ●  -->             o
            M                clock


We suddenly move a large mass M closer to a clock. The clock slows down. Can we describe the process as a "longitudinal wave"?

According to our June 20, 2024 blog post, there probably is no solution at all for the process in general relativity: the process is "dynamic". But let us assume that a solution would exist. The metric of time,

       g₀₀,

determines the rate of the clock.

If the metric of time could change "faster than light", then we would be able to communicate faster than light: simply compare the rates of two adjacent clocks.

It is reasonable to assume that the change in the metric of time cannot propagate faster than the local speed of light.

Let us assume that M is not huge. Then the force of gravity is almost all derived from the metric of time, g₀₀?

Let us assume that instead of M, we have an electric charge Q. An approximate solution for the problem can be derived using Edward M. Purcell's approach, drawing lines of force which do not break. The lines of force can be derived from the 4-potential of the field.

But g₀₀ corresponds to the scalar potential φ only. For electromagnetism, there is gauge freedom. The scalar potential φ can be chosen in many ways. This breaks the analogy between g₀₀ and φ.


(figure: the Purcell diagram of the electric field lines of a charge which suddenly acquires a velocity v)

Can general relativity describe a "magnetic gravity" field at all? In the familiar Edward M. Purcell diagram above, the lines of force in the circular zone turn a lot. Is it possible to generate such a force by modifying the metric of space?

The figure corresponds to a charge Q which suddenly acquires a constant speed v to the right.

Let us imagine that it is a large mass M which acquires a speed v to the right, and we have test masses m floating around M to test the direction of the force at various locations. Will the force be anything like the corresponding electromagnetic force?

There is a circular "transition zone" where the lines of force turn sharply. Outside that zone, Coulomb's force and the newtonian gravity force are analogous.

The question is what happens in the circular transition zone.

If the test masses m are static, then according to the geodesic equation, only the metric of time can exert a force on m. Just outside the transition zone, the metric of time, g₀₀, is constant.


                      line of force
                      |
                      |
                       ----   ×       transition zone
                           |
                           |
                           ●
                          M


Let us look at the zone straight up from the mass M in the diagram. In the transition zone, there is a strong force to the right. That requires that the value of g₀₀ must decline steeply when we go to the right. But then, at the location marked with ×, there should be a very strong force downward. We cannot implement the lines of force with the metric g₀₀.

In electromagnetism, the lines of force require the vector potential A, in addition to the scalar potential φ, which is analogous to g₀₀.

We conclude that general relativity cannot satisfy Gauss's law for an accelerating mass M.


Gauss's law in general relativity seems to fail


In general, general relativity does not satisfy Gauss's law. In the Schwarzschild metric, test masses "float" at the event horizon, if we use the standard Schwarzschild coordinates.

For a small mass M, the Schwarzschild metric is approximately equivalent to newtonian gravity, and Gauss's law approximately holds.

In the previous section, we argued that a metric cannot satisfy Gauss's law for an accelerating mass M.

But in the FLRW model of the universe, gravity works in the newtonian way, and Gauss's law does hold.

Let us then have a spherically symmetric mass shell which suddenly starts to contract. If the change in the metric of time, g₀₀, cannot propagate faster than the local speed of light, then |g₀₀| will be larger at the center of the shell than near the shell. The geodesic equation implies that a static test mass m will feel force from the center toward the shell. This breaks Gauss's law because it would be equivalent to having negative mass at the center.

Our blog post on February 12, 2025 showed that Maxwell's equations fail for accelerating systems of charges. But in many cases, Gauss's law seems to hold for electromagnetism, either approximately or even precisely.

For general relativity, Gauss's law seems to fail, except for the FLRW model. This makes the FLRW model suspicious. Why would the universe satisfy something which is not generally true in general relativity?


Conclusions


For a collapsing shell, general relativity seems to require that a change in the metric of time, g₀₀, can propagate infinitely fast. This is a very dubious result.

The FLRW model has an (unrealistic) symmetry, and Gauss's law holds in it, even though Gauss's law does not normally hold in general relativity. This makes FLRW even more dubious than it was before.

Let us assume that the change in the metric of time can only propagate at the local speed of light. Is the propagation process "wavelike"? In physics, most propagating processes are wave phenomena. Or is the propagation "rigid" so that the rate of a clock immediately changes when it "knows" that it is in a low gravity potential?

A rubber sheet model behaves in a wavelike fashion, and there exist longitudinal waves in it.

We have definitely shown the following: we cannot assume that Gauss's law holds for gravity. The collapse of a massive dust ball is likely to differ from a simple newtonian gravity model.

In a forthcoming blog post we will analyze the collapse of a dust ball further. The magnitude of the retardation seems to be large enough, so that it can explain dark energy, but the details are still very obscure.

Saturday, March 15, 2025

Retarded slowing down of clocks in a collapse

Let us continue our analysis of the assumption that a clock cannot "know" faster than light how it should tick. It can only receive information of the gravity potential at the local speed of light.


What empirical evidence do we have for the correctness of the Einstein field equations?


We know that the Schwarzschild metric describes gravity phenomena very accurately within the Solar system.

We know that binary pulsars orbit in the way predicted by the Schwarzschild metrics of the components. Also, the power of the produced gravitational waves matches calculations where linearized Einstein equations are used, to an accuracy of < 1%.

Gravitational waves observed by LIGO match calculations made using numerical relativity programs. However, we have not checked the heuristics used in those programs.

On June 20, 2024 we showed that the Einstein field equations do not have a solution at all for a two-body system. How do LIGO numerical models handle the nonexistence of solutions? Also, the LIGO measured results have a large margin of error, something like 10%.

The Schwarzschild metric describes a static system. Gravitational waves are produced by a quadrupole system. These configurations are quite different from a collapse or an expansion of a spherically symmetric dust ball.

The only empirical data which we have about a large collapse or an expansion is what we know about the expansion of the observable universe.

We do know that a star can collapse into a neutron star or a black hole. But we do not have any detailed measured data about the process.

The only data which we have about a large expansion does not match the Einstein field equations. The equations do not predict dark energy, or the Hubble tension. They may also fail to predict the things seen by the James Webb telescope.

Since the Einstein equations fail for our (sparse) empirical data, there is a good chance that the equations describe a collapse or an expansion incorrectly.


The June 20, 2024 result about the nonexistence of "dynamic" solutions to the Einstein equations


The problem in that result seemed to be that general relativity does not possess "canonical coordinates" where we could determine kinetic energy in an unambiguous way. The corresponding problem does not occur in Minkowski space, where any inertial frame can be taken as the canonical coordinates.


Canonical coordinates require that there must be retardation in the rate of clocks?


Suppose that we can determine against a canonical time coordinate how fast a clock ticks. How does the clock know how fast it should tick? Empirically, we know that clocks tick slower in a low gravity potential. But how does the clock know that it is in a low potential?

If we can use canonical coordinates similar to Minkowski space, it is natural to assume that the information about the allowed clock rate can only spread at the speed of light. If the clock does not know that it has fallen into a lower gravity potential, then the clock maintains its rate.

How does general relativity handle this?


How does general relativity decide at which rate a clock should tick?


Empirically, we know that a clock in a static, low gravity potential ticks slower than far away in space. This has been demonstrated with atomic clocks on Earth, as well as in satellites. The phenomenon is also present in the redshift of light when it rises up from the surface of Earth. The redshift is approximately one billionth.

If we have a clock which ticks once per second, then in general relativity, the metric of time determines how many times it will tick in a second of coordinate time. There should be one tick in a second of proper time. The rate of the proper time is

       sqrt(-g₀₀)

times the rate of the coordinate time.
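
A minimal numeric sketch of this rate in the weak field of Earth, using the standard weak-field form g₀₀ = -(1 + 2 φ / c²), where φ is the newtonian potential:

    # Clock rate factor sqrt(-g00) on the surface of Earth, weak-field form.
    G = 6.674e-11                # m^3 / (kg s^2)
    M = 5.972e24                 # kg, mass of Earth
    R = 6.371e6                  # m, radius of Earth
    c = 2.998e8                  # m / s

    phi = -G * M / R                     # newtonian potential at the surface
    rate = (1 + 2 * phi / c**2) ** 0.5   # sqrt(-g00)
    print(1 - rate)                      # ~ 7e-10: "approximately one billionth"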


General relativity does not allow the metric to change "faster than light"?


This is a question which we have touched several times in this blog. How fast can a change in the metric propagate in general relativity?

People often seem to assume that it cannot propagate faster than the local speed of light. Some changes in the metric can be detected by an observer. It would open a channel of faster-than-light communication if changes in the metric could propagate faster than light.

Our example of a collapsing dust shell in the previous blog post seems to contradict this principle. People usually assume that the metric of time slows down instantaneously inside the shell, as the shell contracts.


           # -->                    ×                     <-- #
     dust shell            center            dust shell



In an online discussion, Ajay Mohan cites a book by E. Poisson. The metric inside a thin shell is assumed to be Minkowski.

However, the assumption may not be sound. Suppose that the shell is not exactly symmetric. Then the metric inside the shell should change in a complicated way as the shell contracts. An observer inside the shell can measure the metric close to him. The metric is not flat, but has a complicated form.

If we let the changes in the metric happen instantaneously inside the shell, that can open a faster-than-light communication channel.

If the shell is perfectly spherically symmetric, then slowing down the metric of time inside the shell instantaneously does not enable communication – but that is not a realistic physical configuration.

Maybe we should adopt the rule that the metric cannot change faster than light?

Then we encounter another problem. There is no matter inside a spherically symmetric collapsing shell. Does that require that the metric inside is flat? If yes, then changes in the metric of time will propagate faster than light inside the shell.

Can we find a curved metric inside the shell, such that its Ricci tensor is zero? The Schwarzschild solution is an example of a curved metric for which the Ricci tensor is zero.

Birkhoff's theorem may imply that the metric inside the shell must be flat.

The singularity theorems of Roger Penrose assume that an empty volume of space cannot focus or defocus a beam of light. But if the speed of light is faster at the center of the collapsing shell, then there is defocusing.

Hypothesis. A realistic collapsing shell has no solution in general relativity which would not allow faster-than-light communication.


Conjecture. Any solution for a spherically symmetric collapsing shell in general relativity requires the metric of time to change faster than light.


Note that we already proved on June 20, 2024 that general relativity probably does not have a solution for any realistic dynamic problem at all. The hypothesis above is probably void. But it might be that any attempt to find a solution will also lead to faster-than-light communication.

In the case of the conjecture above, there may exist a solution where the thickness of the shell is a Dirac delta function. That is not physically realistic.

We may have uncovered yet another fundamental problem in general relativity: it would allow faster-than-light communication if it had solutions at all!


What implications does faster-than-light communication have?


If we can change the undulating metric inside a shell instantaneously, that probably enables us to transfer energy faster than light. Faster-than-light energy transfer is forbidden by an energy condition.


The dominant energy condition demands that energy can never flow faster than light.

What would happen if we could send signals inside a shell faster than light? Then the physics inside the shell cannot be analogous to Minkowski space, because such signals cannot happen in Minkowski space. This would probably break an equivalence principle.


Can general relativity correctly handle the collapse of a dust ball?


On May 26, 2024 we showed that the Oppenheimer-Snyder 1939 solution is incorrect, since the comoving Tolman coordinates allow one to travel to an "earlier" time coordinate. But maybe there exists a correct solution to the problem?


The Einstein field equations are

       R_μν  -  1/2 R g_μν  +  Λ g_μν  =  8 π G / c⁴  *  T_μν,

where the cosmological constant Λ is zero and the stress-energy tensor is denoted by T. Let us use the standard Schwarzschild coordinates.


Zhang and Yi (2012) write about Birkhoff's theorem.


Willem van Oosterhuit (2019) gives Birkhoff's theorem in the following form:

Birkhoff's theorem. Any C² solution of the vacuum Einstein equations, which is spherically symmetric in an open set U, is locally isometric to the maximally extended Schwarzschild solution in U.


               #                   •  •  •  •                     #
         shell S                ball D                 shell S


We interpret the theorem in this way: let us have a collapsing spherically symmetric dust ball D and a spherically symmetric shell S enclosing D. Then the vacuum solution between S and D stays isometric (= isomorphic) to the Schwarzschild solution for a fixed mass M.

Whatever we do with S, the metric between S and D stays isometric (= isomorphic) to the Schwarzschild metric associated with a fixed M.

This means that D cannot "know" if we let S descend lower or not. The vacuum between S and D prevents any flow of information between S and D.

This implies that if we measure things with proper distances and proper time intervals, the collapse of D happens in the exact same way, regardless of what we do with S.

Let us then compare two histories:

- in history A, the shell S and D form one, almost uniform, dust ball, with an infinitesimal gap between them and we let them collapse freely;

- in history B, we use a tangential pressure within S to slow down its collapse; D collapses freely.


In history B, the metric of time, g₀₀, in the vacuum between S and D, will eventually differ from history A. The metric there will still be isomorphic to the fixed Schwarzschild metric M, but the absolute value of g₀₀ will be different in A and B.

The rate of clocks (i.e., g₀₀) in the vacuum below S depends on how low we let S descend. The gravity potential of a clock depends on how high S is.

The collapse of D happens in the exact same way, measured in proper times and proper lengths, regardless of how high S is. This implies that g₀₀ must change immediately throughout the dust ball D, if we manipulate the shell S.

We proved that the metric of time, g₀₀, changes instantaneously in the vacuum between S and D, and within D. The change in the metric propagates faster than light.

That is, the problem of the infinitely fast metric change remains if we have a dust ball enclosed in a dust shell S.


Discussion


If general relativity has solutions at all for a collapse of a uniform or a slightly nonuniform dust ball, it seems to require infinitely fast changes in the metric of time within the ball. This may even enable faster-than-light communication within the ball.

General relativity seems to break a fundamental principle of special relativity. We conclude that general relativity probably is a wrong model for a dust ball collapse.

The FLRW model of the expanding universe looks very much like a dust ball expansion in general relativity. If general relativity cannot handle a dust ball correctly, why would it handle an expanding universe correctly?

Dark energy is an indication that general relativity fails to treat an expanding universe correctly. If the expansion is accelerated, that seriously contradicts the general relativity model.


What aspects of the FLRW model have been verified empirically?


A.   Nucleosynthesis fits the FLRW model.

B.   The expansion of the universe by a factor 1,100 since the last scattering (cosmic microwave background, CMB) fits FLRW.

C.   Baryon acoustic oscillations (BAO) fit the model where the age of the universe at the last scattering was as in FLRW.


Deviations from FLRW are:

1.   the Hubble constant derived (in a complicated way) from the CMB differs from standard candle observations by 7%;

2.   the James Webb telescope sees "too many" mature galaxies when the age of the universe was just 300 million years;

3.   dark energy seems to be accelerating the expansion of the universe, while the expansion should slow down.


If retardation in the rate of clocks makes the expansion of the universe oscillate, that might explain items 1, 2, and 3. The average speed of the expansion is correctly predicted by FLRW (or a newtonian gravity model), but an oscillation in the speed of the expansion can produce even large anomalies in the smooth process.

Question. Can retardation explain cosmic inflation?


Retardation when the dust ball approaches its Schwarzschild radius


The matter and dark matter density of the observable universe is estimated to be 30% of the "critical density", Ω = 0.30.


The "current" radius of the observable universe is 46 billion light-years and its Schwarzschild radius is 14 billion light-years. Their ratio is approximately 0.30.

Maybe the accelerating expansion is associated with the (dark) matter density falling to 0.3 × the critical density?


The Friedmann equations give the flatness relation

       (1 / Ω  -  1)  ρ a²  =  -3 k c²  /  (8 π G),

where k is the spatial curvature constant. The right side is a constant. The density ρ ~ 1 / a³, where a is the scale factor. Thus,

       ρ a²  ~  1 / a.

Thus 1 / Ω - 1 grows in proportion to the scale factor a. Its value is now roughly 2. When a was 1/2, its value must have been roughly 1, or Ω = 0.5. When a was 1/4, Ω = 0.67. When a was 1/1,000, then Ω = 0.998.
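
A few lines of Python reproduce this arithmetic (a minimal sketch, assuming matter domination so that 1 / Ω - 1 grows linearly in a, normalized to the value 2 today):

# Sketch of the flatness relation above (assuming matter domination, so
# 1 / Omega - 1 is proportional to the scale factor a; normalized to 2 today).

def omega(a, k0=2.0):
    # k0 is the value of 1 / Omega - 1 at a = 1 (today)
    return 1.0 / (1.0 + k0 * a)

for a in (1.0, 0.5, 0.25, 1e-3):
    print(a, round(omega(a), 3))

# 1.0    0.333
# 0.5    0.5
# 0.25   0.667
# 0.001  0.998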

The fine-tuning, or flatness, problem is why Ω was so close to 1 in the early stages of the universe.

Let us try to calculate the effect of retardation when a dust ball collapses close to its Schwarzschild radius. The gravity potential at the edge falls quickly. We expect a large repulsive force arising from the retardation of clocks near the center: clocks at the center tick significantly faster than at the edge, and a ray of light bends from the center toward the edge of the ball.

Let us first use newtonian gravity to calculate the retardation potential. The mass of the dust ball is M and its radius is R. The gravity potential is

       V(r)  =  -G M / r                                  for  r > R,

       V(r)  =  -3/2 G M / R  +  1/2 G M r² / R³          for  r < R.

The potential at the center is

       V  =  -3/2 G M / R(t).
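
For reference, here is the piecewise potential as a short Python function (our own sketch, in units where G = M = R = 1); it also evaluates the 1/8 G M / R potential gap which we will use below:

# The piecewise potential above as a function (our own sketch, in units
# where G = M = R = 1).

def V(r, G=1.0, M=1.0, R=1.0):
    if r >= R:
        return -G * M / r                                  # exterior
    return -1.5 * G * M / R + 0.5 * G * M * r**2 / R**3    # interior

print(V(0.0))           # -1.5  : the center value -3/2 G M / R
print(V(0.5) - V(0.0))  #  0.125: the 1/8 G M / R gap used below
print(V(1.0))           # -1.0  : continuous with the exterior at r = R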

Let dR(t) / dt = -v. Then

       dV / dt  =  3/2 G M / R(t)²  *  dR(t) / dt

                =  -3/2 G M v / R(t)².

The delay for the center of the ball to learn about the decline in the potential is, very crudely, half of the radius R(t) divided by the speed of light c:

       1/2 R(t) / c.

The retardation then would mean that the potential at the center is higher than calculated in newtonian gravity, very roughly by the amount:

       ΔV  =  3/2 G M v / R(t)²  *  1/2 R(t) / c

              =  3/4 G M / R(t)  *  v / c.

We can compare this to the newtonian potential difference between r = 1/2 R(t) and the center:

       1/8 G M / R(t).

We see that if the velocity v = c / 6, then the "retardation force" would approximately cancel the newtonian gravity force when r < R(t) / 2.
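
The estimate is easy to check numerically (our own sketch, in units where G = M = R = c = 1):

# Numeric version of the estimate above (our own sketch, in units
# G = M = R = c = 1; v is the contraction speed of the surface).

def delta_V(v, G=1.0, M=1.0, R=1.0, c=1.0):
    dV_dt = 1.5 * G * M * v / R**2   # rate of fall of the center potential
    delay = 0.5 * R / c              # crude light-travel delay to the center
    return dV_dt * delay             # = 3/4 G M / R  *  v / c

newtonian_gap = 1.0 / 8.0            # 1/8 G M / R between r = R/2 and the center

for v in (0.05, 1 / 6, 0.5):
    print(v, delta_V(v), delta_V(v) >= newtonian_gap)

# At v = c/6 the retardation term just matches the newtonian gap:
# 3/4 * 1/6 = 1/8.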

When the dust ball is approaching its Schwarzschild radius, the speed of its surface dR(t) / dt is relativistic. We conclude that the retardation force can easily cancel the newtonian gravity force inside the dust ball. The order of magnitude is large enough.

However, the retardation potential close to the center is linear in r, while the newtonian gravity potential is ~ r². This would disturb the uniform density of the dust ball. Would that make the cosmic microwave background (CMB) in the sky nonuniform?

Our dust ball model has a hard time explaining the uniformity of the CMB anyway. Any phenomenon associated with the edges of the ball can easily break the uniformity of the CMB.


The uniformity of the CMB


The cosmic microwave background is uniform in every direction to one part in 100,000. A cosmological model must be able to account for this phenomenon. In ΛCDM, two ad hoc assumptions are introduced in order to explain this:

1.   the spatial topology of the universe is the 3D surface of a 4-dimensional sphere, and

2.   inflation.


It is not economical if we have to explain one observed fact with two ad hoc hypotheses. There is no evidence, apart from the hypothesized FLRW model itself, that the spatial topology differs from a 3D plane. Inflation creates energy from nothing; it runs counter to all our observations of nature: energy is conserved.

In our blog we have tried to build a model where the spatial topology is a 3D plane and the observable universe is an explosion of a dust ball. The uniformity should be explained by some mechanism which makes a uniform dust ball stay uniform when it collapses or expands.

An ad hoc solution would be to claim that in a large dust ball, we can calculate the contraction or expansion speed at a location x simply by looking at some neighborhood of x and ignoring the rest of the ball. This principle seems to hold for the gravitational attraction: locally, the expansion of the universe seems to obey newtonian gravity (with the exception of dark energy).

But why would retardation obey such a locality principle? And if it obeys that, why should we calculate the retardation based on the radius of the observable universe?


Dark energy is weakening?



Lodha et al. published their results from the Dark Energy Spectroscopic Instrument on March 18, 2025. Dark energy seems to be weakening recently.

If that really is the case, it is consistent with our retardation hypothesis: the expansion rate may even accelerate at times, but on average it should obey the formulae of the FLRW model.

Note that if ΛCDM is augmented with an "evolving" dark energy, the model becomes even more ad hoc than it was before. We can explain any deviation from the FLRW expansion rate by adding an evolving dark energy!


Retardation generates "negative mass" inside a collapsing spherical shell


Retardation makes light bend away from the central volume of a collapsing shell. This is equivalent to putting some negative mass into the central volume.

Could it be that this negative mass is relatively uniform throughout the collapsing dust ball? This could explain the uniformity of the CMB.

Let us use comoving coordinates of the dust in a collapsing dust ball. On January 18, 2025 we argued that gravity in those coordinates may look newtonian. We can draw lines of force for the gravity field in a familiar way.

Let us imagine that the collapsing dust ball consists of concentric collapsing dust shells. Could it be that, in the comoving coordinates, these shells create a fairly uniform density of "negative mass" inside the dust ball?

The density of negative mass is zero at the edge of the dust ball, but may be relatively uniform inside the ball. Then in a small subvolume of the ball, it may look almost exactly uniform.

Let us have a contracting uniform shell whose radius is R(t). The first guess for the "retardation potential" for the shell is something like

       V  ~  -r

for r < R(t). That is, the potential is highest at the center of the shell. However, this does not seem like a good guess, since the negative mass density for this potential is

       ρ  ~  1 / r.

There would be a singularity at the center, which does not look nice.
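
A quick symbolic check with sympy (our own sketch, with G set to 1) confirms the 1 / r density and its divergence at the center:

# Symbolic check of the claim above (our own sketch, with G = 1): a
# potential V ~ -r implies, through the Poisson equation in spherical
# symmetry, a source density ~ 1/r, which diverges at the center.

import sympy as sp

r = sp.symbols('r', positive=True)
V = -r                                            # trial retardation potential
laplacian = sp.diff(r**2 * sp.diff(V, r), r) / r**2
rho = laplacian / (4 * sp.pi)                     # Poisson: lap V = 4 pi G rho

print(sp.simplify(rho))                           # -1/(2*pi*r)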

In a rubber sheet model of gravity, a collapsing shell corresponds to a ring of weights moving toward a center. The rubber sheet in this case can "anticipate" linear processes: the sheet moves downward at a constant speed. But it cannot anticipate an accelerating motion of the weights.

Maybe we should base the retardation potential only on the acceleration of the masses in the collapsing dust ball?


Conclusions


We have many reasons to believe that there is retardation when the gravity potential adjusts the rate of clocks. It would be strange if a clock at the center of a collapsing spherical shell immediately knew how fast it should tick.

Retardation causes a potential which resists the collapse of the dust ball. A very naive calculation shows that retardation may at times create a repulsive force which is stronger than the attraction of gravity.

The naive retardation model is awkward since the negative mass which would create that potential would have an infinite density at the center.

We will next look at a more sophisticated model in which only the acceleration of the collapse creates retardation. Is retardation large enough to explain dark energy?

The result published on March 18, 2025 makes ΛCDM even more awkward than it was before. Dark energy density can change as time progresses. The predictive power of such a physical model is zero!