Wednesday, January 31, 2024

Gil Kalai (2019): The argument against quantum computers

Gil Kalai has written several papers in which he claims that quantum computers can never work well enough to help in solving certain computational problems much faster than digital computers. Is he right?


We will look at his 2019 paper where he outlines his pessimistic view and gives grounds for it.

But first we will present our own analysis of the problem.


Richard Feynman suggested in 1982 that quantum computers can solve problems in quantum physics (photo Wikipedia).


Nature contains many "analog computers" which are immensely faster than digital computers: the wind tunnel and the helium atom


Let us first consider a wind tunnel as a mechanical "computer". The wind tunnel is "programmed" by putting an airplane model into it. The "program" is executed by blowing air through the tunnel. The output of the program is what sensors measure about the air flow.

Simulating turbulence in a digital computer is hard. A brute force approach would require a matrix of at least (10,000)³ = 10¹² cells and at least one million computation steps for each cell. That makes 10¹⁸ steps. If the computer can perform 10⁹ operations per second, the required time is 30 years.
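A back-of-the-envelope check of this estimate, taking the figures above at face value:

```python
cells = 10_000 ** 3                 # (10,000)^3 = 10^12 grid cells
ops_per_cell = 10 ** 6              # at least one million steps per cell
total_ops = cells * ops_per_cell    # 10^18 steps in total

ops_per_second = 10 ** 9            # assumed speed of the digital computer
seconds = total_ops / ops_per_second
years = seconds / (3600 * 24 * 365)
print(f"about {years:.0f} years")   # roughly 30 years
```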

In the wind tunnel, each air molecule performs a "computation" at least 10⁹ times per second. The total number of computation steps is ~10³³ per second, or more.

Our next analog computer is the helium atom. The helium atom can compute its own energy states very quickly, and it is resilient under disturbances from the outside world. Infrared radiation does not disturb the energy levels appreciably.

Calculating the helium energy levels through brute force with a digital computer is impossible, as far as we know.

The helium atom is not a programmable computer. Rather, it is a "calculator" which can calculate the measurable properties of the atom.

Is the helium atom a "quantum computer"? The two electrons can be viewed as "generalized" qubits which interact in a complex way. It is definitely a quantum process. We could say that it is a tailor-made quantum calculator.

These computers in nature prove the following claim:

Analog and quantum computers can solve certain problems immensely faster than digital computers.

The claim is self-evident, after all. Most processes in nature are too resource-intensive to be simulated in a digital computer.


Boson sampling: an interference pattern produced by linear optics is hard to calculate with a digital computer



Scott Aaronson and Alex Arkhipov (2013) showed that certain interference patterns from linear optics are hard to compute with digital computers.

This is a special case of our own claim that nature contains many analog computers which are hard to simulate with a digital computer.


Trapped-ion quantum computers



Each ion in the trap stores one qubit. The state 1 or 0 of the qubit is determined by the state of its electrons in the orbitals. The states 0 and 1 can be different hyperfine states (whose lifetime is very long), or a ground state and a very long-lived excited state.

A laser beam can manipulate the state of each qubit, and also entangle the states of two qubits. The basic CNOT gate requires that the state of a qubit A determine whether a NOT operation is applied to a qubit B. The logic gate is implemented using a laser beam.

Since the difference between the energy levels 0 and 1 is very small, the ion trap has to be cooled to a millikelvin temperature in order to minimize spontaneous changes in a qubit state.
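As an illustration of the energy scales involved, we can compare the energy of a hyperfine qubit transition to the thermal energy at a millikelvin temperature. The 12.6 GHz splitting below is the hyperfine splitting of ¹⁷¹Yb⁺, chosen here only as a representative value:

```python
h = 6.626e-34      # Planck constant (J s)
kB = 1.381e-23     # Boltzmann constant (J/K)

f_qubit = 12.6e9   # hyperfine qubit frequency in Hz (171Yb+, an illustrative choice)
T = 1e-3           # trap temperature: 1 millikelvin

E_qubit = h * f_qubit   # energy of one qubit transition quantum
E_thermal = kB * T      # typical thermal energy at temperature T

print(E_qubit / E_thermal)   # ≈ 600: at millikelvin temperatures, thermal kicks
                             # are far weaker than the qubit splitting
```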


Wenchao Ge et al. (2018) mention that phonons in the constellation of ions in a single trap can be used to couple specific qubits. Phonons mediate interactions between the spin states of ions.

Challenges in trapped-ion computers include decoherence of qubit states through infrared photons, or through the gate operations inside the computer.

The architecture is inherently fragile: the energy states 0 and 1 of the qubits have to be close to each other, in order for us to be able to operate on them with low-energy photons or phonons. But the small energy difference makes the bits fragile. A small disturbance can make the bits flip.

A single ion trap cannot hold a large number of ions. There should exist a method of communication between different traps. For example, ions could be transported between traps. This sounds difficult.


Why does a digital computer work so well, while it seems to be impossible to build a commercial quantum computer? Insulation is the key


In trapped-ion quantum computers, the problem seems to be that it is hard to insulate individual qubits against disturbances which come from logic gate operations applied to different qubits.

Also, there seem to be problems in keeping the computer cold enough that infrared photons do not toggle qubits.

Why are these problems not present in a digital computer?

1.   In a digital computer we are dealing with "non-quantum" phenomena, which involve millions of atoms or electrons, or more. The physics is macroscopic.

2.   We can insulate the processes from each other with macroscopic electric insulators.

3.   We can use macroscopic physics to implement the bits and the logic gates, and can attain an almost error-proof function.

4.   Infrared photons are not a problem, because the processes involve so many atoms and electrons.


"Insulation" seems to be a luxury which is available in the non-quantum realm, while it is not available in quantum phenomena.


Why is the helium atom "quantum computer" not vulnerable to disturbances?


By definition, "operations" between electrons and the nucleus cannot disturb the processing because all those disturbances must be included in the "computation" which the atom performs.

Infrared photons do not appreciably disturb the helium atom. This is an empirical fact: disturbances from the photons do not affect the end result of the "calculation".

Does this give us any advice for an implementation of a programmable quantum computer? 

We could make the energy levels of the qubit states 0 and 1 differ by a large amount. But then the energy required for logic operations would be large, and that would heat up the computer. There probably are also other problems in using large quanta.

There might exist problems and algorithms which are resilient against flipped qubits. Then a quantum computer might perform well for carefully selected problems.


The analogy between a fusion reactor and a quantum computer


Conventional processing in the chemical industry uses pipes, containers, and pumps to control processes. We can use various materials like steel or glass to "insulate" chemical processes from other processes.

In a fusion reactor we cannot use matter walls to insulate and control the process. Rather, we have to use magnetic fields. It is hard to keep the dense and ultra-hot plasma tamed. The plasma must not come into contact with the matter in the walls of the reactor.

A quantum computer has a similar problem with the insulation of the process, as we explained above.

The only practical fusion reactor today is the Sun. Why does the Sun work well, while commercial reactors fail?

The gravity of the matter in the Sun implements the insulation required in a fusion reactor. Also, the large mass and the volume keep the process automatically in an equilibrium: no pipes, walls, or pumps are required.

In the helium atom "quantum computer", the large attraction of the nucleus assigns large energies to the electrons, which insulates the atom from the effects of infrared photons. The "computation process" is the "equilibrium" between the electrons and the nucleus. The helium atom as a computer is somewhat analogous to the Sun as a fusion reactor.


Gil Kalai's arguments (2019): a NISQ computer


Kalai concentrates on "noisy intermediate-scale quantum" (NISQ) computers, defined as quantum computers which have at most ~500 qubits, and where there is a significant probability that any single qubit flips on its own, or that a logic gate produces a wrong result.

Kalai basically presents conjectures about NISQ computers. The paper does not contain a detailed analysis of a trapped-ion computer, or of any other architecture.

Suppose that there is a 1% chance that a single logic gate operation corrupts a qubit. Then, after a few hundred operations, the computer typically has several flipped qubits, and the program execution is misguided. The result of the computation will mostly be garbage.

In a traditional digital computer program, flipping a single bit usually makes the program "crash": it cannot produce a sensible output.

However, if the chance of corruption is not 1%, but, say, 0.001%, then we can do quite a lot of calculation, and in most cases there is no corruption of any qubit. Then the NISQ computer does work reasonably well.
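The effect of the per-gate error rate can be made concrete with a short calculation. Assuming independent errors, the probability that a run of n gates corrupts no qubit is (1 − p)ⁿ:

```python
def p_clean(n_gates, p_err):
    """Probability that no gate corrupts a qubit, assuming independent errors."""
    return (1 - p_err) ** n_gates

# 1% error per gate: a few hundred gates almost always corrupt the state.
print(p_clean(300, 0.01))        # ≈ 0.05

# 0.001% error per gate: even ten thousand gates usually run cleanly.
print(p_clean(10_000, 0.00001))  # ≈ 0.90
```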

Kalai's conjecture is simply that corruption will remain at a high level, regardless of the architecture of the quantum computer. But Kalai does not give grounds for his conjecture, since there is no detailed analysis of any individual architecture.

Maybe there exists a quantum computer architecture which circumvents the problems of insulation?


Scott Aaronson (2006) defends quantum computers



Scott Aaronson's lecture (2006) contains counterarguments to several of Gil Kalai's claims.

Aaronson quotes a claim by Leonid Levin: 

-   300 qubits after the Hadamard initialization process contain a superposition of all the ~10⁹⁰ binary numbers N that we can express with 300 bits. That collection is too large to be "physical". The probability amplitude of each individual bit string is too small to be meaningful.


Is this a problem?

Let us consider a smaller quantum computer with just 20 qubits. Then there are only about one million distinct states. Suppose that we apply logic gates and are able to guide the state to a certain integer N < 2²⁰. There is no obvious reason why a large number of initially superposed states would be a problem at all. If we imagine the process backwards, then a single state N is divided into a superposition of one million states.

Such processes happen in nature all the time. A photon goes through a pinhole and is absorbed by an atom in the screen behind it. There may be 10²⁰ atoms which can do the absorption. Quantum mechanics seems to have no problem in superposing 10²⁰ distinct states.
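Levin's numbers, and our smaller 20-qubit example, are easy to reproduce:

```python
from math import log10, sqrt

# Levin's 300-qubit figure:
num_states = 2 ** 300             # basis states in the uniform superposition
amplitude = 1 / sqrt(2.0 ** 300)  # amplitude of each bit string after the Hadamards
print(round(log10(num_states)))   # 90: about 10^90 superposed bit strings
print(amplitude)                  # ≈ 7e-46: tiny, but a perfectly well-defined number

# The smaller 20-qubit computer discussed above:
print(2 ** 20)                    # 1048576: about one million distinct states
```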


Progress in the implementation of quantum computers is slow


The main argument against quantum computers is practical: there has been a great deal of publicity around quantum computers for two decades now, but there still do not exist implementations which could do "useful" work. There are no genuine, commercially available quantum computers.

Let us compare this to the history of digital computers. The very first machines in the 1940s could do useful work, and commercial models were available a decade later.

The progress with quantum computers alarmingly resembles the progress (if any) with fusion reactors.


Michael Brooks (2023) writes that quantum computers at this stage are "good for nothing".


Conclusions


We were not able to find any fundamental reason why quantum computers would be impossible. Nature contains many processes which can "calculate" results which are intractable with a digital computer.

Gil Kalai has repeatedly claimed that quantum computers face insurmountable problems. But he does not present a detailed proof why this has to be so.

Efforts to implement useful quantum computers have failed, despite two decades of work. This makes us pessimistic. Radically new ideas may be required.

Saturday, January 27, 2024

Biot-Savart: how natural are our assumptions?

UPDATE August 25, 2024: Our work with the Poynting vector in the past few days suggests that the inertia of a negative charge (electron) close to a positive charge (proton) is reduced, not increased. Why does a horizontally moving proton then pull a vertically approaching electron to the horizontal direction where the proton is going? Because the field energy is moving with the proton, the field energy has momentum, and the electron absorbs both the energy and the associated momentum. We will write a new analysis based on this insight.

----

On December 17, 2023 we derived the Biot-Savart law of electromagnetism from the Coulomb force and various assumptions about inertia. We wrote in the blog post that these assumptions are ad hoc – but is that really the case?


1. Assumption about the Coulomb force


This assumption is about how the test charge q "sees" the relative positions of the protons and electrons in the wire.

There is no clear reason why we should use the frame of the protons (= the laboratory frame). It is not "special" in any way.

The only frame "special" for q is the comoving frame of q itself. We conclude that this assumption is not ad hoc, but the only natural assumption.


2. Assumption of the extra inertia of a test charge q inside an electric field of another charge Q


We have discussed this many times in this blog. If q moves tangentially relative to Q, then q "carries" its field energy along to another location. We assume that the associated energy is the absolute value of the potential energy |V| between q and Q, and that it moves along with q. The energy |V| adds to the inertia of q. This is a natural assumption.

If q moves radially relative to Q, then in addition to the moving of the field energy |V| around, there is also a "flow" of energy E to q, or from q, because of the Coulomb force F between q and Q. A good guess for the inertia effect of this energy flow is that it ships energy from Q to q, or vice versa.

The energy flow E doubles the inertia effect of |V| in a radial movement of q.


3. Assumption of the extra inertia of q when there is a current in a wire


Let us have q close to a wire element where an electric current is flowing. The protons and electrons essentially cancel out each other's electric fields. Because of the canceling effect, we expect that the "flow" of energy E in the preceding section is negligible and does not cause inertia effects.

However, the potential energy |V| of q in the field of the wire electrons can be assumed to "move" along with the electrons, while the potential energy |-V| of q in the field of the protons does not move.

When q approaches the wire element, it "picks" up more of the moving potential energy |V|. Therefore, q acquires a momentum p.

These assumptions are quite natural.


4. Assumption of a paradoxical momentum exchange


This assumption is not very natural. If an electron q is close to a wire where electrons are the charge carriers, then things happen as if the charge carriers had a positive charge and moved in the direction opposite to the electrons. That is, they would be like "holes" in a semiconductor.

Can we find an intuitive explanation for this?

Let us consider the following configuration:

   
                      wire

          v <-- ●           ●
                   e-          proton



                         ^  V
                         |
                         • q = e-


The proton pulls q and gives it energy "which does not move" in the diagram. The test charge q puts that energy to the field between q and e-. The energy in that field "moves" to the left. To conserve momentum, the test charge q must itself start to move to the right. This explains away the "paradoxical" part!


Conclusions


We were able to show that the assumptions in our derivation of the Biot-Savart law are not totally ad hoc. We found intuitive grounds for all of them.

This result may help us to understand magnetic gravity. We have been wondering what is the gravitomagnetic field of a mass flow like.

Sunday, January 14, 2024

Inertia of an electric current relative to the protons

Our previous blog post shows that the Poynting vector cannot explain the inertia which electrons have relative to the protons in a suitably designed coil, which simulates a rotating charged sphere where the electric field is canceled with an opposite charge. This is expected. For a wire, the electric field typically is negligible, while the magnetic field can be very strong. The Poynting vector has a very small value.


The sliding charged tube


Let us analyze, once again, the sliding charged tube around a charged cylinder, which we presented in the previous blog post:


             E              B               E
          ^  ^  ^                          ^  ^  ^
          |   |   |      × × × ×        |   |   |

                   negative charge
                         -------------
          ++++++++++++++++++  positive charge
                         ------------- 
                              --> v


In this case, we can claim that the Poynting vector explains some of the inertia of the tube.

Let the charged tube move very slowly. Then the energy density of the magnetic field B is very small. Energy in the electric field E on the right side of the tube must flow to the left side, as the tube progresses on its journey. Presumably, this energy flow is described by the Poynting vector, and momentum is associated with it.

In the diagram above, if we point the right-hand forefinger in the direction of E and the middle finger in the direction of B, then the thumb points to the left: that is, energy flows to the left, as expected.
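The right-hand rule can be checked with a small cross-product computation. The coordinates (x to the right, y up, z out of the screen) are our own choice for the diagram:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

E = (0.0, 1.0, 0.0)    # electric field pointing up (y axis)
B = (0.0, 0.0, -1.0)   # magnetic field into the screen (the "x" symbols)

S = cross(E, B)        # direction of the Poynting vector E x B
print(S)               # (-1.0, 0.0, 0.0): energy flows to the left
```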

We can very crudely explain the inertia of the tube relative to the cylinder in the following way:

1.   The tube carries the mass-energy of its own electric field (even though that energy is "hidden" by the opposite charge of the cylinder), and

2.   an equivalent amount of inertia arises from the energy flow from the electric field on the right side of the tube to the left side.


The momentum associated with item 1 flows to the right, and the momentum associated with 2 flows to the left. The total momentum, measured in the laboratory frame, is zero. This explains why the momentum held by a pure magnetic field B without an electric field E is zero.


                 rope
                    --> v
              _________
           O_________O  wheel
           |     v <--     |
       ==============
                 frame


Let us have a pulley made of two wheels and a loop of rope, like in the diagram. If we make the pulley rotate, the system has zero momentum relative to the laboratory frame. However, the pulley has inertia relative to the frame. The inertia arises from two movements in opposite directions.


The electric field E generated by a decreasing magnetic field B


Faraday's law of induction says

       ∇ × E  =  -dB / dt.

If the magnetic field B in the above diagram decreases, then an induced electric field E' has its curl pointing to the screen. If the right hand has its thumb pointing to the screen, then the fingers tell us the direction of the field E':


             ×   ×   ×    -dB / dt

              <------
                  E'

             ------------    --> v
        ++++++++++++   cylinder
             ------------ 
                tube
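The claimed direction of E' can be sanity-checked numerically. For a clockwise field in the screen plane we take the illustrative choice E' = (y, −x); its curl should have a negative z component, i.e., point into the screen like −dB/dt:

```python
# E' = (Ex, Ey) = (y, -x): a clockwise field when z points out of the screen.
# curl_z = dEy/dx - dEx/dy, computed with central finite differences.
def curl_z(Ex, Ey, x, y, h=1e-6):
    dEy_dx = (Ey(x + h, y) - Ey(x - h, y)) / (2 * h)
    dEx_dy = (Ex(x, y + h) - Ex(x, y - h)) / (2 * h)
    return dEy_dx - dEx_dy

Ex = lambda x, y: y
Ey = lambda x, y: -x
print(curl_z(Ex, Ey, 0.3, 0.7))   # ≈ -2: the curl points into the screen
```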


The electric field E' "resists" the change in the magnetic field B. The electric field tries to push the negatively charged tube to the right and the positively charged cylinder to the left. Can we somehow explain this process with inertia?

The following model might do the job: in the pulley diagram in the previous section, the lower rope "pushes" positive charges to the left and the upper rope pushes negative charges to the right. In this model, the magnetic field B describes momentum stored in the pulley system.


A current in a closed wire loop: "topology" or a flow of negative energy?


              current
                 <--- I  
                 ______
               /             \
              |                |
                \______/

              wire loop


The sliding charged tube model qualitatively described the momenta associated with a magnetic field. It was crucial that the Poynting vector tells about an energy flow from the right to the left.

But if we have a current in a closed wire loop, then the electric field E is essentially zero everywhere. The Poynting vector does not describe any significant energy flow.

In this case, we might claim that there still is an energy flow, but it is not captured by the Poynting vector.

In the sliding tube model, we may interpret the "hole" in the electric field E around the tube as containing negative energy which moves along with the tube. Using this model, switching to a closed wire loop is no dramatic change: we still have negative energy moving along with the electrons.

The Poynting vector does not understand these energy flows: it sums the negative energy flow to the positive energy flow associated with the electric field of the tube, and thinks that the total flow is zero. It is not aware of the separate flows of positive energy and negative energy.

In this model, a pure magnetic field B involves separate flows of positive and negative energy. These flows make up the "pulley" that we described in a preceding section.


Conclusions


On December 17, 2023 we asked where the energy and momentum of a pure magnetic field B is stored. "Pure" in this context means that there is no significant electric field E present, and, consequently, the Poynting vector is essentially zero.

Our pulley model / negative energy model offers an explanation for where the momentum is stored. There is a flux of positive energy in the direction where the charges carrying the current move, and a simultaneous flux of negative energy in the same direction. We obtain zero by summing these energy fluxes – that is why the Poynting vector is zero. But the system is dynamic, even though the Poynting vector is not aware of that.