
This article was translated from Spanish by Gemini 2.5 Pro.

The day before writing this draft, Google launched a new algorithm, Quantum Echoes (perhaps echo algorithm?), which aims for high verifiability. Originally, I didn’t want to get into abstract things on the blog, but the comments under these news/reports are really hard to ignore, so let’s talk about what a quantum is.

Classical Mechanics#

We live in a world that seems orderly and predictable. An apple falls from a tree, a planet orbits the sun, a billiard ball rolls across the table.

All the phenomena mentioned above follow a set of rules that are familiar and intuitive to us.

Classical mechanics!

In the universe described by Newton, everything functions like a huge, precise clockwork mechanism. Every object in the universe, from the smallest dust particle to the most massive galaxy, has a defined position and momentum.

If we could precisely know all the initial conditions of a system at a given moment, such as the position, velocity of all particles, and the forces acting on them, we could, in principle, accurately predict the state of that system at any future time, just as we predict what time someone will return home.

This idea is known as determinism and forms the cornerstone of classical physics.

In this classical world, physical properties are continuous. Imagine walking up a smooth ramp; you can stop at any height on that ramp.

The speed of an electric car can be 11 km/h, 11.4 km/h, or 11.4514 km/h; the values can be infinitely subdivided.

Physical quantities like energy, speed, and momentum can vary smoothly, and any value within their range is allowed.

This theory has been incomparably successful in describing the macroscopic objects of our daily experience; from building bridges to launching spacecraft, classical mechanics is the physical principle we know how to use best.

The Breakdown#

However, in the late 19th and early 20th centuries, as scientists turned their attention to more microscopic domains, to the behavior of atoms and light, this perfectly predictable universe began to crack.

A series of experiments revealed strange phenomena that classical physics could not explain, shaking its centuries-old, unshakeable position.

Black-body Radiation#

Classical theory failed catastrophically in predicting the color (frequency distribution) of light emitted by a heated object (a “black body”).

The theory predicted that in the ultraviolet region, the radiation energy would tend toward infinity, whereas in actual experiments, the radiation energy gradually decreased. That is, theory and experiment did not match.

This became known as the ultraviolet catastrophe.

At this point, someone might ask, “Well, if the theory and experiment don’t match, what’s the big deal?”

The big deal is that the theory was derived from the foundations of classical mechanics, from determinism. And if the experiment did not fit the theory, it meant that, at least in this domain, the familiar classical mechanics was no longer applicable.

It’s as if the subway arrives at the same time every day, but one day it suddenly doesn’t show up. You ask a passerby, and they tell you, “There is no subway station here.”

So, how have you been getting to work?

To solve this riddle, the physicist Max Planck proposed a revolutionary hypothesis in 1900: the emission and absorption of energy is not continuous, but occurs in discrete, indivisible “packets” of energy.

He called this minimum, indivisible unit of energy a “quantum.”

This was the first appearance of the concept of the quantum, marking the beginning of a revolution in physics.

Photoelectric Effect#

By the late 19th century, it was already known that light was a wave and that metals contained free electrons. Thus, a spontaneous idea arose: could we use the light wave to knock electrons out of the metal?

Strangely, the ability to knock out electrons depended on the color (frequency) of the light, not its intensity (brightness).

Even the faintest blue light could instantly eject electrons, while the most intense red light accomplished nothing.

Albert Einstein explained this in 1905, boldly proposing that light itself is composed of these energy packets, which he called “light quanta” (later known as “photons”).

Each photon carries an amount of energy, and only when the energy of a single photon is large enough (i.e., the light’s frequency is high enough) can it knock an electron out of an atom (escaping the metal surface).

This explanation not only earned Einstein the Nobel Prize but also provided strong evidence for the particle nature of light.

We can explain this more simply. In the classical imagination, light is like a stream of water; higher brightness means a stronger current.

The metal is like a wall, and if the current is strong enough, it can dislodge some stones (electrons).

It was later discovered that light is not a stream of water, but rather a road. The brightness determines how wide the road is, not how strong the energy is. What actually carries the energy are the cars driving on the road, i.e., the photons.

Only when the car is large enough (high frequency, large energy) can it knock down that wall and dislodge the stones.

Atomic Stability#

Everyone knows that the world is composed of atoms and molecules.

But here a question arises: what is their structure?

An atom is not a small solid ball, but a system composed of a nucleus and electrons, where the electrons are particles moving around the nucleus.

However, according to the laws of classical electromagnetism, a charged particle in accelerated motion should continuously radiate energy.

Energy is finite, so an electron cannot radiate energy indefinitely. Thus, as it orbited, it would constantly lose energy, its speed would decrease, its orbital radius would shrink, and it would eventually spiral into the nucleus.

If this theory were correct, it would mean that all atoms in the universe would collapse in an instant. Our tables, the air, our bodies, and indeed the entire world, could not exist.

But the reality is that everything is fine; the world is as stable as ever. This was the great mystery scientists faced in the early 20th century.

In 1913, the physicist Niels Bohr proposed a bold hypothesis: what if electrons couldn’t move freely, but were only allowed to occupy certain specific orbits?

It’s as if the electrons could only be on the steps of a staircase, instead of running happily up a ramp.

These steps correspond to different energy states, which we call energy levels. When an electron is in one energy level, it is stable; it doesn’t emit energy and doesn’t fall.

Only when an electron “jumps” to another energy level does it absorb or release a quantum of energy, which is exactly the difference between the two levels:

$$E = h\nu$$
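To make $E = h\nu$ concrete, here is a small Python check that ties it back to the photoelectric effect above. The wavelengths and the roughly 2.1 eV work function of cesium are approximate, illustrative numbers, not precise data.

```python
# Approximate photon energies for red vs. blue light, compared with the
# (roughly 2.1 eV) work function of cesium. All numbers are illustrative.
h = 6.626e-34          # Planck constant, J*s
c = 3.0e8              # speed of light, m/s
eV = 1.602e-19         # joules per electronvolt

for color, wavelength in [("red", 650e-9), ("blue", 450e-9)]:
    nu = c / wavelength        # frequency of the light
    E = h * nu                 # energy of a single photon, E = h*nu
    print(f"{color}: {E / eV:.2f} eV")

# red  ≈ 1.91 eV -> below ~2.1 eV: no electron is ejected, however bright the light
# blue ≈ 2.76 eV -> above ~2.1 eV: electrons are ejected, however dim the light
```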

Quantum Mechanics#

In summary, energy is not continuous; it comes in discrete packets, in quanta. This is the essence of quantum thinking.

Physicists realized that the rules governing the microscopic world are radically different from those we know in the macroscopic world. A completely new theoretical system was born: “quantum mechanics.”

It is not a simple adjustment of classical mechanics but a complete paradigm revolution, specifically designed to describe the behavior of microscopic particles like atoms, electrons, and photons.

Thus, the map of physics was split in two. Classical mechanics remains the queen of the macroscopic world, while quantum mechanics is the absolute sovereign of the microscopic domain.

Both theories have achieved astonishing success in their respective fields, but the pictures of reality they paint are vastly different.

Classical physics is not wrong; rather, it must be seen as an emergent approximation of quantum mechanics at the macroscopic scale. (What? You don’t know what emergence is?)

The determinism and predictability we experience in our daily lives are not fundamental properties of the universe. Understanding this is crucial. The classical world seems so orderly because it is the result of the large-scale statistical averaging of the probabilistic behavior of countless microscopic quantum events.

It’s like observing the flow of a large crowd: we can predict its general trend, but we cannot determine the specific path of any single individual. The deterministic reality we perceive is built upon a quantum foundation filled with probability, uncertainty, and possibility.

This is not just a change in physical theory but a profound philosophical transformation that has fundamentally altered our understanding of the nature of reality. (So, are we trying to say that the probabilistic calculation of LLMs is also quantum mechanics? When in doubt, quantum mechanics!)

What is a quantum?#

The word “quantum” sounds mysterious and profound, but its core concept is exceptionally simple. It comes from the Latin quantus, meaning “how much.”

In physics, a quantum (plural quanta) refers to the smallest, indivisible discrete unit of any physical entity involved in an interaction.

It is like the “atom” of a physical property, the fundamental brick or data packet of that property.

Definition#

We can understand this concept through some concrete examples. The most famous example is light.

We normally perceive light as continuous, but in reality, it is composed of energy packets, and these packets are photons. Therefore, a photon is a quantum of light.

Similarly, electric charge is not infinitely divisible; there exists a minimum unit of charge, the elementary charge, which is the quantum of charge.

This concept was first proposed by Planck while studying black-body radiation; he hypothesized that energy could only be absorbed or emitted in integer multiples of Planck’s constant $h$ times the frequency $\nu$.

Later, Einstein, in explaining the photoelectric effect, materialized this concept by proposing the existence of light quanta.

From the noun “quantum,” we derive a more central verb: “to quantize.”

Quantization refers to the fact that the value of a physical property can only take on certain discrete, discontinuous values, like integers, rather than any arbitrary value.

In simple terms, the classical world is a ramp. In classical mechanics, physical quantities like energy and speed are like a smooth ramp.

You can stand at any height on the ramp: 1 meter, 1.1 meters, or 1.14 meters. There are no forbidden intermediate places.

In quantum mechanics, however, many physical quantities, especially in bound systems (systems formed by two or more particles bound by interaction forces, whose total energy is less than the energy of the completely separated particles), are like a staircase.

You can only be on the first, second, or third step, but never with one foot on the first step and the other on the second. (Trying to be in two places at once, huh?)

For example, the energy of an electron in an atom is quantized. It can only have certain specific, discrete energy levels, like the different steps of a staircase.

When an electron absorbs or releases energy, it “jumps” instantly from one energy level to another, a process called a quantum leap, and it never passes through intermediate states. (As we will discuss the wave function later, this should not be understood as teleportation. A quantum leap seems instantaneous, but it is actually a quantum state transition caused by an interaction.)

Atomic stability, again#

Quantization is not just an interesting microscopic phenomenon; it is the fundamental reason for the stable existence of our universe. Let’s return to the atomic stability crisis mentioned earlier: why don’t electrons fall into the nucleus?

The answer lies in the quantization of energy. Because the electron’s energy levels are quantized, there is an insurmountable energy “step.” An electron can jump to a lower energy level and release energy, but it cannot do so indefinitely.

Once it reaches the lowest energy level (the ground state), it can no longer lose any more energy because there are no more steps below it. (Or do you want to enter the Backrooms?)

This minimum energy prevents the electron from falling, thus ensuring the stability of the atom.

Therefore, the principle of quantization is the cornerstone of our world. It is because properties like energy and angular momentum are discrete at the microscopic level, in “packets,” that atoms can form stable structures, chemical bonds can unite molecules, and the material world can present the rich diversity and stable order we observe.

At a deeper level, the discovery of quantization revealed the fundamental limitations of the language of classical physics.

Classical physics is based on the assumption of continuity, and its main mathematical tool is calculus, which deals with continuous changes. The discovery of quantization means that the underlying logic of the universe is discrete, digital.

This forced physicists to adopt a completely new mathematical language, such as linear algebra and operator theory, to describe this reality built on stairs instead of ramps.

This also explains why the mathematical form of quantum mechanics seems so abstract to beginners: it is based on completely new concepts.

The three big features of quantum mechanics#

Since we are talking about quanta, we must also explain wave-particle duality, superposition, and quantum entanglement. Furthermore, these are the fundamental physical concepts for understanding quantum computing.

Wave-particle duality: Am I a wave? Oh, I’m a particle#

In the classical world, things are clearly divided into two categories.

Particles and waves.

Particles are discrete entities that occupy a defined position in space, like a billiard ball.

Waves are diffuse disturbances that propagate through space, like ripples on the surface of water.

The two are clearly distinct and do not mix.

However, in the quantum world, this clear distinction disappears.

A central principle of quantum mechanics is wave-particle duality, which states that every microscopic entity, whether it’s an electron considered a particle or light considered a wave, simultaneously possesses both particle and wave properties.

Which property is observed depends entirely on the experimental setup and the method of observation. Importantly, we can never observe both complementary properties in the same experiment.

Nothing demonstrates the strangeness of wave-particle duality better than the famous double-slit experiment.

The experimental design is very simple, but its results are enough to subvert all our intuitions about reality.

First, let’s use macroscopic objects as a reference. Imagine you have a wall with two parallel slits. You throw tennis balls randomly at this wall.

Some balls will be blocked by the wall, while others will pass through one of the slits and hit the wall behind it.

In the end, on the back wall, which we’ll call the receiving screen, each ball that passes through will leave a dot.

You will see two stripe-shaped areas corresponding to the slits, where the tennis balls are concentrated. This fits our intuition perfectly and is typical particle behavior.

Perhaps this sounds a bit complicated. Imagine looking at a wall with two slits through which you can see the scenery behind. The scenery you see is roughly equivalent to where the balls land.

Of course, you can use basketballs instead of tennis balls.

Now, let’s place the experimental setup in a water tank and use water waves instead of tennis balls. When the water waves reach the double slits, each slit becomes a new wave source, generating circular waves that expand.

These two sets of waves overlap and interfere with each other as they propagate. In some places, two wave crests meet, forming a higher crest (constructive interference).

In other places, a crest and a trough meet and cancel each other out, leaving the water surface calm (destructive interference).

Finally, on the receiving screen, we will see a series of alternating bands where the waves are strong and where the water stays calm. This is the interference pattern, the characteristic behavior of waves.

Here comes the crucial part of the experiment. We place everything in a small device and use an electron emitter to shoot electrons at a wall with two tiny slits.

We would expect to see two fringes, as with the tennis balls: the dots should simply pile up behind the two slits, roughly where the electrons land.

However, the experimental result is that, although the electrons arrive at the receiving screen one by one, like little dots, over time, the pattern these dots form is an interference pattern, just like the water waves!

This result is baffling. It seems to suggest that each individual electron, when not being observed, simultaneously passes through both slits and interferes with itself like a wave, only to finally appear as a particle at a random point on the screen.

To find out which path the electron actually took, let’s install detectors at the slits to observe the electrons as they pass.

And this is where the strangest scene in the quantum world occurs. As soon as we start observing which slit the electron passes through, the interference pattern instantly disappears.

The behavior of the electrons becomes orderly, like that of the tennis balls, leaving only two fringes on the receiving screen.

In quantum mechanics, the simple act of knowing the electron’s path completely changes the outcome of the experiment. The act of observation seems to force the electron to collapse from a diffuse, possibility-filled wave state into a particle with a defined trajectory.

Therefore, this experiment reveals the limitations of our perception of reality. Wave and particle are not ultimate descriptions of what a quantum entity is; they are rather two imperfect metaphors we have borrowed from the classical world to describe how it behaves in specific contexts.

An electron itself is neither a wave nor a particle in the classical sense; it is a quantum entity of a deeper level that our everyday language cannot accurately describe.

Our act of observation is like forcing the projection of this complex quantum entity onto one of the two classical concepts we can understand (wave or particle).

Therefore, wave-particle duality is not so much a dual identity of microscopic particles, but rather our inability to accurately describe the quantum world using classical language and intuition.

Superposition: I want the horse to run and also not eat hay#

Wave-particle duality leads us to another, even more fundamental quantum concept: superposition.

It refers to the fact that, before being measured, a quantum system can exist in a mixture of all its possible states simultaneously.

The position of an electron is not a precise point, but rather a probability cloud that extends through space, describing the probability of finding the electron in different places.

The mathematical description of this probability cloud is the wave function.

Imagine a coin spinning rapidly in the air. Before it lands, it is neither heads nor tails, but is in a dynamic, mixed state containing both possibilities.

When we catch it in our hand (measure it), its state instantly collapses, being determined as either heads or tails.

When a measurement is performed on a system in superposition, the superposition state instantly disappears, and the system (the world) randomly chooses one of the possible states to manifest. This process is called the collapse of the wave function.

We can accurately calculate the probability of each outcome using the wave function, but we can never predict which outcome we will get before the measurement. The universe, at its most fundamental level, seems to be rolling the dice.

To reveal the absurd consequences of extending the concept of superposition from the microscopic to the macroscopic world, the physicist Erwin Schrödinger designed a famous thought experiment in 1935: Schrödinger’s cat.

A (hypothetical) cat is placed in a completely sealed steel box. Inside the box, there is a small device containing a radioactive atom, a Geiger counter, and a hammer connected to a flask of poisonous cyanide.

This radioactive atom has a 50% chance of decaying in the next hour. If the atom decays, the Geiger counter will be triggered, causing the hammer to break the poison flask, and the cat will die. If the atom does not decay, the cat will remain unharmed.

According to the principle of superposition in quantum mechanics, before an observation is made, the radioactive atom is in a superposition state of “decayed” and “not decayed.”

Since the cat’s life or death is strictly linked to the state of this atom, then, before opening the box and observing, the cat itself must also be in a superposition state of “alive” and “dead.” That is, it is simultaneously alive and dead.

Of course, Schrödinger himself did not believe that a cat could be alive and dead at the same time. The purpose of his thought experiment was not to prove the correctness of quantum mechanics, but to point out sharply, through a reductio ad absurdum, how absurd it is to apply the Copenhagen school’s interpretation of quantum superposition indiscriminately to macroscopic objects.

This experiment dramatically exposes a central unresolved issue in quantum mechanics: the measurement problem.

What exactly constitutes a measurement?

The triggering of the Geiger counter?

The interaction of the cat with the poisonous gas?

Or the moment the human scientist opens the box?

Where exactly is the boundary between the quantum world of probabilistic, possibility-filled laws and the classical reality we perceive as “either/or”? Quantum theory itself does not provide a clear, non-arbitrary answer.

Quantum entanglement: You and I, heart to heart#

If superposition already challenges our cognition, quantum entanglement takes this strangeness to the extreme.

It refers to the fact that two or more quantum particles can be correlated in a special way, such that their physical properties become inseparable, forming a single unified system, no matter how far apart they are.

It is impossible to describe the state of one particle independently; its state only makes sense in relation to the other particle.

Imagine that, in some way (like the decay of an unstable particle), we produce a pair of entangled electrons. According to the law of conservation of angular momentum, their spins must be opposite.

Before being measured, each electron is in a superposition state of “spin up” and “spin down.”

Now, we separate these two electrons, sending one to the North Pole and the other to the South Pole.

When you measure the electron at the North Pole and find its spin is “up,” at that very instant, the state of the other electron, far away at the South Pole, is immediately determined to be “spin down,” and vice versa.

This correlation is instantaneous and seems to ignore spatial distance. It was this phenomenon that deeply disturbed Einstein, who called it “spooky action at a distance,” because it seemed to violate the principle of special relativity that no information or influence can travel faster than the speed of light.

To understand the uniqueness of quantum entanglement, we must distinguish it from the classical correlations we are familiar with.

Imagine a pair of gloves, one left and one right. You put them in two separate opaque boxes, randomly send one to a friend in France, and keep the other at home.

When you open your box and find it’s the right glove, you instantly know that your friend’s box must contain the left glove. This is not surprising, because this information (which one is left and which is right) was predetermined from the start. Your discovery only revealed a fact that already existed.

Quantum entanglement is completely different.

The properties of the entangled particles (like spin up or down) are not predetermined before measurement.

Both are in a superposition state containing both possibilities. It’s like saying that, before being observed, each of the two gloves is simultaneously a left glove and a right glove.

It is not until the moment one of the boxes is opened and observed that the states of both gloves are jointly and instantly determined: one becomes left, and the other becomes right.

Bell’s theorem, proposed by physicist John Bell, and numerous subsequent experiments have eloquently demonstrated that quantum reality works in this latter way, ruling out local hidden-variable explanations like the glove analogy.

It is crucial to emphasize that, although the correlation between entangled particles is instantaneous, this cannot be used for superluminal communication.

The reason is that, while the measurement of one particle instantly affects the other, the measurement result itself is completely random.

We cannot control one particle to collapse to “spin up” to send a “1” signal to the other party.

The phenomenon of quantum entanglement fundamentally challenges a basic belief, the principle of locality, which states that an object is only affected by its immediate surroundings.

It reveals to us that the universe, at its deepest level, may be non-local, that there exists a profound interconnectedness between all things that goes beyond our classical intuition.

The separation and distance we perceive may just be a macroscopic illusion.

Two entangled particles, no matter how far apart, must essentially be considered as a single, indivisible quantum system. This is precisely what Einstein found “spooky.”

Quantum computing? What’s that? Can you eat it?#

In the early 1980s, physicist Richard Feynman posed a profound question.

Can we simulate the physical world with a computer?

But he soon realized that, since nature is fundamentally quantum, not classical, any attempt to accurately simulate quantum phenomena with a classical computer would face a fundamental obstacle.

The exponential wall#

Classical computers are excellent at handling the vast majority of problems in our daily lives, but they fall short when simulating quantum systems (like complex molecular interactions or the properties of new materials). The fundamental reason lies in the exponential scaling problem.

The complete state of a quantum system is described by its wave function. For a system composed of $N$ qubits, a classical computer needs to store and process $2^N$ complex numbers (probability amplitudes) to fully describe its state.

As the number of qubits $N$ increases linearly, the required classical computational resources (memory and time) grow exponentially.

Simulating 10 qubits requires storing $2^{10} = 1024$ complex numbers, which is easy for any laptop.

Simulating 30 qubits requires $2^{30}$ complex numbers, which needs about 8 GB of memory, still within the reach of a PC.

But simulating 50 to 60 qubits… the required memory…

This insurmountable computational obstacle is often known as the exponential wall. It means that for any quantum system of moderate size, a classical computer cannot even accurately store its state, let alone simulate its dynamic evolution.
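As a rough sketch of that wall (assuming, as in the numbers above, 8 bytes per complex amplitude), a few lines of Python show how quickly the memory requirement explodes:

```python
# Memory needed to store the full state vector of N qubits classically,
# assuming 8 bytes per complex amplitude (as in the 8 GB figure above).
for n in (10, 30, 50, 60):
    amplitudes = 2 ** n
    gib = amplitudes * 8 / 2 ** 30
    print(f"{n} qubits: 2^{n} = {amplitudes:.3e} amplitudes ≈ {gib:,.0f} GiB")

# 10 qubits: a few kilobytes
# 30 qubits: 8 GiB
# 50 qubits: ~8.4 million GiB (about 8 PiB)
# 60 qubits: ~8.6 billion GiB (about 8 EiB)
```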

Faced with this challenge, Feynman proposed an idea: why not use a quantum system to simulate another quantum system?

To simulate a system that follows quantum rules, the most efficient method is to build a computer that itself operates on the principles of quantum mechanics.

This idea laid the foundation for quantum computing. Quantum computers are not intended to replace all functions of classical computers, but to be specialized devices that use the inherent properties of quantum mechanics (like superposition and entanglement) to solve specific types of problems.

Especially those that, due to their exponential complexity, are intractable for classical computers, such as simulating quantum systems, certain optimization problems, and cryptography.

The wave function!#

(Please be aware that the following content may be very abstract, counter-intuitive, and not very visual. I will do my best to describe it in words.)

To build a quantum computer, we first need a mathematical language capable of describing quantum information. The core of this language is linear algebra, which translates the physical reality of quantum states into precise mathematical objects.

(What? You’re asking me to explain linear algebra?)

The basic unit of a classical computer is the bit, which can only be in one of two defined states: 0 or 1.

The basic unit of quantum computing is the qubit, which can be $|0\rangle$, $|1\rangle$, or any superposition of both.

The mathematical description of this superposition state is called the wave function or state vector, usually denoted as $|\psi\rangle$ (this is a convenient notation called Dirac notation).

For a single qubit, its general state can be written as a linear combination of the two basis states $|0\rangle$ and $|1\rangle$:

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$$

  • $|0\rangle$ and $|1\rangle$ are the computational basis vectors. They are orthogonal unit vectors that, in a two-dimensional complex vector space, correspond to the column vectors $(1, 0)^T$ and $(0, 1)^T$, respectively.
  • The coefficients $\alpha$ and $\beta$ are complex numbers called probability amplitudes.

They not only determine the probability of the measurement outcome, but their relative phase also contains crucial information about quantum interference, which is key to quantum computing.


When we measure a qubit in a superposition state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, its wave function collapses, randomly falling into one of the basis states.

According to the Born rule of quantum mechanics, the probability of measuring $|0\rangle$ is $|\alpha|^2$, and the probability of measuring $|1\rangle$ is $|\beta|^2$; together they satisfy the normalization condition $|\alpha|^2 + |\beta|^2 = 1$.

Please note that this normalization condition is not an arbitrary convention but a direct consequence of the fundamental physical axiom of probability conservation.

Since a measurement must yield a result (either 0 or 1), the sum of the probabilities of all possible outcomes must equal 100% (i.e., 1).

This condition ensures that our mathematical description of quantum states always corresponds to the probabilistic reality of the physical world.
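A minimal NumPy sketch of these ideas, with arbitrary example amplitudes (nothing here refers to any particular quantum hardware or library API):

```python
import numpy as np

rng = np.random.default_rng()

# A single-qubit state |psi> = alpha|0> + beta|1>, stored as a length-2 complex vector.
alpha, beta = 0.6, 0.8j                  # arbitrary example amplitudes
psi = np.array([alpha, beta], dtype=complex)

# Normalization condition: |alpha|^2 + |beta|^2 = 1
print(np.sum(np.abs(psi) ** 2))          # 1.0

# Born rule: measurement probabilities are the squared magnitudes of the amplitudes
probs = np.abs(psi) ** 2                 # [0.36, 0.64]

# Simulate many measurements: each one randomly collapses the state to |0> or |1>
samples = rng.choice([0, 1], size=1000, p=probs)
print(np.bincount(samples))              # roughly [360, 640]
```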

Calculation, calculation, calculation, calculation#

We have detailed the definition of the fundamental quantum state in quantum computing. The next question is: how do these states evolve over time?

Or, in other words, how is the calculation actually performed?

Schrödinger Equation#

The time evolution of a closed quantum system (like an ideal quantum computer) is governed by the time-dependent Schrödinger equation.

$$i\hbar \frac{\partial}{\partial t} |\psi(t)\rangle = \hat{H} |\psi(t)\rangle$$

(Where $|\psi(t)\rangle$ is the state of the system (wave function) at time $t$, $\hat{H}$ is the system’s Hamiltonian operator (representing total energy), and $\hbar$ is the reduced Planck constant.)

Okay, I know you don’t want to read any further. Simply put, this formula describes how the state (wave function) of a quantum system changes over time.

That is, if we know the initial state $|\psi(0)\rangle$, this equation can tell us the state $|\psi(t)\rangle$ at any later time $t$.

If it’s still unclear, we can make an analogy: $F = ma$ is the basis of classical mechanics, and $i\hbar \frac{\partial}{\partial t} |\psi(t)\rangle = \hat{H} |\psi(t)\rangle$ is the basis of quantum mechanics.

As for how this function is calculated, we will skip it here. Basically, time is divided into many small steps, and the evolution kernel $K$ for each step is obtained by summing all paths weighted by $\text{phase} = \frac{\text{action}}{\hbar}$:

$$K(b,a)=\int\mathcal{D}[x(t)]\,e^{\frac{i}{\hbar}S[x(t)]}$$

By expanding the short-time propagator to first order in time and rearranging the path integral, in the limit, one obtains the same differential equation form, the Schrödinger equation.

Unitary operations#

The Schrödinger equation is a linear differential equation, and its solution can be expressed as a linear operator acting on the initial state. If the state of a system at time $t=0$ is $|\psi(0)\rangle$, then at a later time $t$, its state will be $|\psi(t)\rangle = U(t)\,|\psi(0)\rangle$, where $U(t) = e^{-\frac{i}{\hbar} H t}$ is called the time evolution operator. The Schrödinger equation rigorously proves that $U(t)$ must be a unitary operator.

An operator (or matrix) $U$ is unitary if its conjugate transpose $U^\dagger$ is equal to its inverse $U^{-1}$, i.e., $U^\dagger U = I$ (where $I$ is the identity matrix).

Unitarity has two crucial physical implications: probability conservation and reversibility.

A unitary transformation preserves the norm (length) of a vector. This means that if the initial state is normalized (total probability of 1), then any state after a unitary evolution will also necessarily be normalized.

This ensures that probability is neither created nor destroyed during the calculation.

Since unitary operators always have an inverse, every step of quantum evolution is theoretically reversible. We can trace back exactly to the state before the calculation by applying the inverse operation $U^\dagger$.

Unitarity can be understood simply: no matter how the system evolves or how the wave function rotates, the total probability always remains exactly 100%, no more, no less.

At this point, someone might ask, “What on earth are you talking about? I don’t understand at all!”

Here we must introduce an important concept: in quantum computing, any loss of information implies that the system has irreversibly interacted with the environment (such as a measurement or decoherence), which would destroy the quantum superposition state.

Therefore, a quantum computer must ensure that, during the calculation process, information is not discarded or copied (due to the quantum no-cloning theorem).

This requires all operations to be reversible unitary transformations.
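A small NumPy sketch of these properties, using the Pauli-X matrix as a toy Hamiltonian (an arbitrary choice for illustration) and setting $\hbar = 1$:

```python
import numpy as np

# Toy Hamiltonian: the Pauli-X matrix (Hermitian), with hbar = 1 for simplicity.
H = np.array([[0, 1], [1, 0]], dtype=complex)
t = 0.7                                   # an arbitrary evolution time

# Time evolution operator U(t) = exp(-i H t), built via eigendecomposition of H
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

# Unitarity: U†U = I
print(np.allclose(U.conj().T @ U, np.eye(2)))        # True

# Probability conservation: the norm of the state is preserved
psi0 = np.array([0.6, 0.8], dtype=complex)           # normalized: 0.36 + 0.64 = 1
psi_t = U @ psi0
print(np.linalg.norm(psi0), np.linalg.norm(psi_t))   # both 1.0

# Reversibility: applying U† traces back to the initial state
print(np.allclose(U.conj().T @ psi_t, psi0))         # True
```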

Gates#

In quantum computing, the operations performed on qubits are called quantum gates. Since any quantum computation process must be a physically allowed evolution of a closed system, every quantum gate must be represented by a unitary matrix.

This constitutes the deepest connection between quantum computing and quantum physics: the logical rules of computation are derived directly from the fundamental physical laws of the universe.

Okay, I know this is confusing, and I’ve written it confusingly. To summarize a bit: quantum computing has a fundamental formula, the Schrödinger equation. Then, to ensure the accuracy of our calculations, the entire system must remain in a quantum state, so we must ensure stability during the calculation, and for that, we use unitary operations.

Then, to calculate, we need logic gates. Quantum logic gates are different from classical ones, and the reasons for this difference are what we just discussed.

Algorithm, algorithm, algorithm, algorithm#

Quantum algorithms manipulate the wave function of qubits through a series of carefully designed quantum gates (unitary transformations), using superposition and interference to solve problems.

H#

The Hadamard gate (H) is one of the most fundamental tools for creating superposition states. Its matrix representation is:

$$H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$

When the Hadamard gate acts on a qubit in the basis state $|0\rangle$, the effect is:

$$|0\rangle \xrightarrow{H} \frac{|0\rangle + |1\rangle}{\sqrt{2}}$$

The result is a uniform superposition state, where the probability of measuring 0 or 1 is 50% each. This operation is the basis of quantum parallelism. By applying the Hadamard gate to multiple qubits, we can create a superposition state containing all $2^N$ possible inputs, allowing a quantum algorithm to compute on all these inputs simultaneously.

In short, the H gate gives us a qubit in a superposition state.
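In NumPy, this action of the H gate on $|0\rangle$ is just a small matrix-vector product (a sketch, with states stored as length-2 complex vectors):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
ket0 = np.array([1, 0], dtype=complex)                        # the state |0>

psi = H @ ket0
print(psi)               # [0.707..., 0.707...]  =  (|0> + |1>)/sqrt(2)
print(np.abs(psi) ** 2)  # [0.5, 0.5] -> a 50/50 chance of measuring 0 or 1
```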

CNOT#

The Controlled-NOT (CNOT) gate is a two-qubit gate that is a key tool for creating quantum entanglement.

Its action is as follows: if the control qubit is 1|1⟩, it flips the state of the target qubit; otherwise, it does nothing.

When the control qubit is in a superposition state, the CNOT gate can generate entanglement. A typical example is the creation of a Bell state.

(A Bell state refers to one of the following four states:

$$\begin{aligned} |\Phi^+\rangle &= \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle) \\ |\Phi^-\rangle &= \frac{1}{\sqrt{2}}(|00\rangle - |11\rangle) \\ |\Psi^+\rangle &= \frac{1}{\sqrt{2}}(|01\rangle + |10\rangle) \\ |\Psi^-\rangle &= \frac{1}{\sqrt{2}}(|01\rangle - |10\rangle) \end{aligned}$$)

The matrix representation of the CNOT gate is:

$$\text{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

For example, we put the first qubit (control) into a superposition state $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ using a Hadamard gate, and keep the second qubit (target) in $|0\rangle$. The total state of the system is $\frac{1}{\sqrt{2}}(|00\rangle + |10\rangle)$.

Now we apply a CNOT gate. For the $|00\rangle$ part, the control bit is 0, so the target bit is unchanged, remaining $|00\rangle$.

But for the $|10\rangle$ part, the control bit is 1, so the target bit is flipped, becoming $|11\rangle$.

Finally, we obtain an entangled state $|\Phi^+\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$.

This state is entangled because it cannot be written as the product of the states of two independent qubits. These two qubits have lost their individual identity and form an inseparable whole. Measuring one of them instantly determines the state of the other, no matter how far apart they are.
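The same walkthrough can be checked numerically. The sketch below assumes the basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$, with the first qubit as the control:

```python
import numpy as np

# CNOT in the basis ordering |00>, |01>, |10>, |11> (first qubit = control)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# State after the Hadamard on the control qubit: (|00> + |10>)/sqrt(2)
psi = np.array([1, 0, 1, 0], dtype=complex) / np.sqrt(2)

bell = CNOT @ psi
print(bell)   # [0.707, 0, 0, 0.707]  =  (|00> + |11>)/sqrt(2), the Bell state |Phi+>
```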

Summary#

Okay, I know you probably have no idea what all this is, so let’s do a little experiment to illustrate quantum entanglement and quantum computing.

Suppose 0 is heads and 1 is tails. We have two coins and we ask a quantum computer to flip them.

What we want is for the two coins to be entangled, so that their outcomes are always the same (00 or 11) or, after an adjustment, always opposite (01 or 10).

In this experiment, we will demonstrate the case where “both coins are heads or both are tails” (i.e., 00 and 11).

We start from the initial state of two qubits $|00\rangle$ (both heads) and sequentially perform the following two quantum operations:

The first operation is an H gate, which transforms $|0\rangle$ into the superposition state $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$.

After applying it to the first qubit, the state of the system becomes:

$$|\psi_1\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |10\rangle)$$

This means the first qubit is like a spinning coin, while the second is still heads.

The second operation is a CNOT gate, which acts on the two qubits (qubit 0 as control, qubit 1 as target).

If the control qubit is $|1\rangle$, it flips the target qubit; if it is $|0\rangle$, it does nothing.

The state of the system becomes:

$$|\psi_2\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$

That is, the outcomes of the two coins are always the same. When the first is heads, the second is also heads. When the first is tails, the second is also tails.

Their results are perfectly correlated. This state is a quantum entanglement state (a Bell state).
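Here is a self-contained NumPy sketch of the whole “two coins” experiment: it builds the state with an H gate and a CNOT gate, then simulates repeated measurements with the Born rule. (This is a simulation on a 4-component state vector, not code for any real quantum hardware.)

```python
import numpy as np

rng = np.random.default_rng()

# Gates, in the basis ordering |00>, |01>, |10>, |11> (first qubit = control)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start in |00> ("both heads"), apply H to the first qubit, then CNOT
psi = np.array([1, 0, 0, 0], dtype=complex)
psi = np.kron(H, I) @ psi    # -> (|00> + |10>)/sqrt(2)
psi = CNOT @ psi             # -> (|00> + |11>)/sqrt(2), the entangled Bell state

# Flip the two "coins" 1000 times (Born rule sampling)
probs = np.abs(psi) ** 2
outcomes = rng.choice(["00", "01", "10", "11"], size=1000, p=probs)
print({k: int(np.sum(outcomes == k)) for k in ("00", "01", "10", "11")})
# Roughly 500 "00" and 500 "11"; "01" and "10" never appear.
```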

Qubit technologies#

We’ve talked a lot about math, so now let’s tell some stories to relax: qubit technologies.

Superconducting circuits#

Superconducting quantum computing is one of the fastest-developing and most scalable routes today. Its main advantage is that it can leverage mature semiconductor micro- and nanofabrication processes to rapidly expand the number of qubits.

Its physical basis consists of cooling a macroscopic electronic circuit to extremely low temperatures, close to absolute zero (approximately 15 mK), so that it enters a superconducting state, thereby displaying controllable macroscopic quantum effects.

A superconducting qubit is essentially an artificial macroscopic quantum system that can be simplified as a nonlinear LC resonant circuit ($f = \frac{1}{2\pi\sqrt{LC}}$).

In a standard LC circuit, its energy levels are evenly spaced, like a perfect harmonic oscillator.

This means that if you try to use a microwave pulse of a specific frequency to drive the system from the ground state $|0\rangle$ to the first excited state $|1\rangle$, this pulse will also drive the system from $|1\rangle$ to the second excited state $|2\rangle$, and so on.

This uniform spacing of the energy levels (every transition has exactly the same frequency) prevents us from precisely confining the system’s operations to the two qubit states $|0\rangle$ and $|1\rangle$, making it impossible to build an effective qubit.

The key component to solving this fundamental problem is the Josephson junction (JJ).

A Josephson junction consists of two layers of superconductor separated by an extremely thin insulating barrier. Its unique physical effect allows superconducting electron pairs (Cooper pairs) to pass through this insulating layer via the quantum tunneling effect, forming a supercurrent.

The physical properties of this process give the Josephson junction a crucial attribute: a nonlinear inductance.

The presence of this nonlinear inductance completely changes the energy level structure of the LC circuit. It makes the level spacing non-uniform, so that the energy difference between the ground state $|0\rangle$ and the first excited state $|1\rangle$ (transition frequency $\omega_{01}$) is no longer equal to the energy difference between the first excited state $|1\rangle$ and the second excited state $|2\rangle$ (transition frequency $\omega_{12}$). This non-uniformity in the spacing of energy levels is called anharmonicity.

It is precisely this anharmonicity that allows us to use a precisely tuned microwave frequency to selectively drive only the $|0\rangle \leftrightarrow |1\rangle$ transition, without accidentally exciting to higher levels. This effectively turns this macroscopic circuit into a precisely controllable two-level quantum system, i.e., a qubit.
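As a rough numerical illustration, here is a tiny Python sketch: the inductance, capacitance, and the −250 MHz anharmonicity below are made-up but plausible orders of magnitude, not values from any real chip.

```python
import numpy as np

# Illustrative, made-up circuit values (not taken from any specific device)
L = 8e-9      # inductance: 8 nH
C = 0.2e-12   # capacitance: 0.2 pF

# Resonance frequency of the linear LC oscillator: f = 1 / (2*pi*sqrt(L*C))
f01 = 1 / (2 * np.pi * np.sqrt(L * C))
print(f"f01 ≈ {f01 / 1e9:.2f} GHz")   # ≈ 3.98 GHz, a transmon-like frequency scale

# With the Josephson junction's nonlinear inductance, the |1> -> |2> transition
# sits a few hundred MHz lower (an assumed anharmonicity of -250 MHz here),
# so a microwave pulse tuned to f01 drives only the |0> <-> |1> transition.
f12 = f01 - 250e6
print(f"f12 ≈ {f12 / 1e9:.2f} GHz")
```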

Among the many superconducting qubit designs, the Transmon qubit (transmission-line shunted plasma oscillation qubit) has become the standard architecture for industry leaders like IBM and Google.

The Transmon design cleverly sets the circuit parameters in a regime where the Josephson energy ($E_J$) is much greater than the charging energy ($E_C$), i.e., $E_J \gg E_C$.

The main advantage of this design is that it greatly reduces the qubit’s sensitivity to environmental charge noise, significantly extending its coherence time, which is a major breakthrough compared to earlier superconducting qubit designs. IBM’s Heron and Condor processors, and Google’s Sycamore and Willow processors, all use Transmon-based architectures.

The advantages of superconducting circuits include extremely fast gate speeds and excellent scalability.

The disadvantages are shorter coherence times, a stringent operating environment, and limited qubit connectivity.

At the end, we will provide a unified explanation of the key terms that have appeared here: gate operation speed, scalability, coherence time, and qubit connectivity. (What? You don’t know what a stringent operating environment is? Answer: 15 mK).

Ion traps#

Unlike artificial superconducting circuits, ion trap quantum computing chooses to use nature’s most perfect quantum systems, individual atoms, as qubits.

By stripping the outer electrons from an atom to charge it, these ions can be precisely manipulated using electromagnetic fields.

The core of an ion trap quantum computer is the use of individual charged atoms (ions), such as ytterbium-171 ($^{171}\text{Yb}^+$) or barium ($^{137}\text{Ba}^+$), as its qubits.

These ions are suspended in an ultra-high vacuum chamber and confined by a Paul trap, which is composed of a combination of static electric fields and alternating radio-frequency (RF) electric fields.

This combination of electromagnetic fields creates a saddle-shaped potential well in space. By rapidly rotating the potential well, the ions are dynamically confined to the center of the well, achieving excellent isolation from the external environment.

Well, this is quite abstract, so you can skim it. The main point is that in an ion trap, since all ions are confined in the same potential well, they interact and repel each other via the electrostatic Coulomb force.

This interaction causes the vibrational modes of the entire ion chain to be collective. These quantized vibrational modes are called phonons.

What is this good for? Extremely long coherence times, very high gate fidelities, all-to-all connectivity, and perfect qubit uniformity.

Sounds great, but the disadvantages are also obvious: extremely slow gate operation speeds and low scalability.

At the end, we will provide a unified explanation of the key terms that have appeared here: coherence time, gate fidelity, all-to-all connectivity, qubit uniformity, gate operation speed, and scalability.

Photonic processors#

Photonic quantum computing adopts a completely different paradigm from the matter-based qubits mentioned above. It uses the smallest unit of light energy, the photon, as a qubit, leveraging its wave-like and particle-like nature to process information.

Photonic quantum computing is mainly based on linear optical components, such as beam splitters, mirrors, and phase shifters, to manipulate photonic qubits. These components guide photons to interfere, thereby achieving quantum gate operations.

However, photons barely interact with each other naturally, which makes implementing two-qubit entanglement gates the biggest challenge in photonic quantum computing. Current solutions are often probabilistic, requiring the use of auxiliary photons and projective measurements to achieve this. This computation model is known as Measurement-Based Quantum Computation (MBQC).

The advantages of photonic processors are extreme robustness and room-temperature operation. The disadvantages are probabilistic two-qubit gates, difficulty in generating high-quality single-photon sources, and photon loss.


The superconducting route chooses to prioritize speed and scalability potential, viewing the relatively high error rate as an engineering problem that can be solved in the future through error correction and mitigation techniques.

The ion trap route, on the other hand, seeks maximum qubit quality and connectivity from the outset, accepting slower gate speeds as a trade-off.

As for photonic processors, once they overcome significant challenges like probabilistic gates and photon loss, they could completely change the paradigm for implementing and applying quantum computing, making it more accessible and easier to integrate.

Features#

Now let’s explain some of the features mentioned above.

-   Gate operation speed (gate time / gate speed)
    The time required to perform one quantum logic gate. Shorter is better, as more gates can be completed before coherence runs out. However, making gates faster generally makes it harder to maintain high fidelity and low crosstalk.
-   Scalability
    As the number of qubits, connections, and control/readout channels multiplies, can the system keep costs, error rates, interconnection, and heat dissipation under control? It also includes the ability of modular/distributed quantum computing to connect multiple QPUs (quantum processing units).
-   Coherence time
    Here we will talk about decoherence, with more details later. Decoherence is the process by which a quantum system interacts with its environment, gradually losing the phase relationships of its superposition state and evolving from a quantum state to a classical mixed state.
-   Qubit connectivity
    Refers to how many other qubits a given qubit can directly perform a two-qubit gate with (represented by CNOT, which creates quantum entanglement). Higher connectivity requires fewer intermediate swaps during compilation, resulting in lower circuit depth.
-   Gate fidelity
    Whether the result of a gate operation is the ideal one. Will be explained in more detail later.
-   All-to-all connectivity
    Any pair of qubits can directly perform a two-qubit gate, reducing routing overhead. Not all physical platforms possess this natively.
-   Qubit uniformity
    The degree of consistency in parameters such as frequency and noise among different qubits on a chip. High uniformity simplifies calibration and control, and improves scalability and manufacturing yield.
-   High-quality single-photon sources and photon loss (photonic)
    Photonic quantum computing requires single-photon sources that are on-demand, pure, and indistinguishable. At the same time, losses in channels/devices (as well as insufficient detection efficiency) must be reduced to extremely low levels; otherwise, the success rate of gates and overall scalability will be very low.

Performance#

Simply counting the number of qubits is far from sufficient to measure the true capability of a quantum computer.

A processor with hundreds of noisy, poorly connected qubits may have far less computational capability than a system with only a few dozen high-quality, fully connected qubits. Therefore, evaluating quantum performance requires a set of rigorous, multi-dimensional key metrics.

Coherence time#

Coherence time is the central metric for measuring a qubit’s ability to maintain its fragile quantum state. It defines the lifespan of quantum information before it is destroyed by environmental noise.

A longer coherence time means we have more time to execute quantum gate operations, allowing us to run deeper and more complex quantum algorithms.

T1, also known as energy relaxation time or longitudinal relaxation time, describes the characteristic time it takes for a qubit in the excited state $|1\rangle$ to spontaneously decay to the ground state $|0\rangle$ due to energy exchange with the environment.

T1 measures the stability of the qubit’s energy level populations. In the Bloch sphere model, this process can be imagined as a vector pointing to the north pole (representing the $|1\rangle$ state) gradually relaxing towards the south pole (representing the $|0\rangle$ state).

T2, also known as decoherence time or transverse relaxation time, describes the time it takes for the quantum phase information of a qubit in a superposition state (e.g., $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$) to be randomized by environmental noise.

Phase is the basis of quantum interference, and quantum interference is the key to the speed advantage of quantum algorithms. T2 measures the stability of phase relationships in a quantum superposition state.

In the Bloch sphere model, this process manifests as a vector on the equator, whose azimuthal angle information gradually becomes uncertain, eventually dispersing uniformly on the equatorial plane.

There is a fundamental relationship between T1 and T2: $T_2 \le 2T_1$. This is because any physical mechanism that causes energy relaxation (T1 process) will necessarily destroy phase relationships (T2 process).

But, conversely, pure phase noise can cause decoherence without causing energy loss. Therefore, the T2 time is usually shorter than the T1 time and is often the most critical factor limiting quantum computing performance.

In summary, coherence time acts as a limiter in quantum computing.

Although the gate operation speed determines how fast a single operation is, the coherence time determines how many consecutive operations we can perform in total before the quantum state collapses.

The longer the coherence time, the more operations we can perform.

A very simple example: imagine a person who is only awake when the sun is out and falls asleep when it’s not.

This coherence time is the duration of the sunlight. Only if the sunlight lasts longer can this person do more things in a day.
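In very rough numbers (purely illustrative orders of magnitude, not the specs of any particular machine), the “how many things fit in a day” budget looks like this:

```python
# Illustrative orders of magnitude only, not a specific device
T2 = 100e-6          # coherence time: 100 microseconds
gate_time = 50e-9    # time per two-qubit gate: 50 nanoseconds

# Rough number of gates that fit inside the coherence window
print(int(T2 / gate_time))   # ~2000 operations before the quantum state is lost
```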

Gate fidelity#

Gate fidelity is the central metric for measuring the accuracy of quantum gate operations. It quantifies the closeness between a quantum gate operation executed on physical hardware and the theoretical, ideal, noise-free mathematical transformation.

For example, a fidelity of 99.9% means there is a 0.1% probability that an error will occur when executing that gate.

Two metrics usually appear here: 1Q Gate Fidelity and 2Q Gate Fidelity.

Because two-qubit gates require precise control of the interaction between qubits, they are usually more complex and prone to errors than single-qubit gates. Therefore, the fidelity of 2Q gates is often the performance bottleneck of an entire quantum computer.

It is important to note that quantum algorithms often require the execution of thousands or even millions of quantum gates.

Even if the error rate of a single gate is low, these errors accumulate during the calculation and can eventually completely obscure the correct result.

The total fidelity of a quantum circuit can be roughly seen as the product of the fidelities of all the individual gates it contains. If a circuit contains $N$ gates and the average fidelity of each gate is $F$, then the overall success probability of the circuit is approximately $F^N$.

For example, with a seemingly high two-qubit gate fidelity of 99% ($F = 0.99$), after only 70 gate operations the overall circuit fidelity ($0.99^{70} \approx 0.495$) drops below 50%, meaning the calculation result is more than half likely to be wrong, no different from a random guess.
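The arithmetic is easy to check, and it also shows why pushing fidelity from 99% toward 99.9% matters so much:

```python
print(0.99 ** 70)    # ≈ 0.495 -> below 50% after just 70 gates at 99% fidelity
print(0.999 ** 70)   # ≈ 0.932 -> the same 70-gate circuit succeeds ~93% of the time
```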

This is the fundamental reason why today’s Noisy Intermediate-Scale Quantum (NISQ) computers can only run shallow circuits. It also explains why improving gate fidelity from 99.5% to 99.9% is a huge engineering leap, as it directly relates to expanding the scale of problems that can be computed.

Achieving high fidelity (generally required to be above 99.9%) is also a prerequisite for applying quantum error correction (QEC) codes.

As for what NISQ and QEC are, please be patient, we will explain them slowly.

Comprehensive metrics#

To overcome the limitations of single metrics, researchers have developed various holistic benchmarks, designed to evaluate the overall computational capability of a quantum computer with a single comprehensive figure.

Quantum Volume (QV), proposed by IBM, is currently the most widely adopted holistic benchmark in the industry.

It does not simply measure a single parameter but comprehensively evaluates the system’s performance by running a series of random quantum circuits of a specific shape. QV aims to answer a central question: “How large and complex a quantum circuit can this quantum computer successfully execute?”

In addition to QV, the industry is also exploring other benchmarks.

For example, the company IonQ proposed the concept of Algorithmic Qubits (#AQ).

#AQ aims to measure the effective scale of a computer by running a specific, practical algorithm (like the QAOA optimization algorithm).

The #AQ value is equal to the maximum number of qubits on which such an algorithm can be successfully run. For example, #AQ 36 means the system can successfully run a representative instance of the algorithm on 36 qubits.

The quantum era#

Although quantum hardware has seen rapid development in recent years, we must be aware that the entire field is still in a very fundamental stage: the era of Noisy Intermediate-Scale Quantum (NISQ) computing.

This means that the quantum computers we have in our hands are more like the first steam engine prototypes built by hand by genius artisans in a garage: they roar, they leak, they are not efficient, but they irrefutably demonstrate the feasibility of a completely new power paradigm.

We are at the dawn of a difficult transition, from the “patching” era of Error Mitigation to the industrial era of Quantum Error Correction (QEC).

This is why this article has not chased the daily-changing chip models or fleeting algorithm news. Instead, it has focused on understanding quantum mechanics and the fundamentals of quantum computing.

Because only by understanding the underlying meaning can one truly discern whether what is before us is real or fake, tangible or illusory.

In this world of information explosion, holding on to the core ideas is the only way to keep a pulse on this technological revolution, no matter what new, dazzling advancements emerge in the future.

Recalling the beginnings of human physics, from Newton’s deterministic, clockwork-like classical universe, to that quantum reality filled with probability, superposition, and entanglement, which is both unsettling and fascinating.

In the last century, humanity has taken a cognitive leap, beginning to truly use quantum thinking as a basis for investigating new things.

And quantum computing is the first customer of this new era. This is not just a race for computational power. Its deeper meaning is that humanity is learning to think and solve problems in a completely new language—the native language of the universe.

We are no longer satisfied with roughly simulating this world with classical 0s and 1s, but are trying to directly master the wave function itself, letting the interference of probability amplitudes guide us to the answers.

The future of quantum computers may not be to replace the computers on our desks, but to become an intellectual prosthesis for humanity’s exploration of unknown territories.

It will first open doors never seen before in those fields where classical computation is powerless, such as new drug development, materials science, and the simulation of the universe.

The future has just begun.


Thank you all for reading. I will post more about the Quantum Echoes algorithm in my notes later; it probably won’t be very long. This concludes this article. Writing it took about 7 hours in total, and if you add in the research and running some experiments, the time spent was considerably longer. I hope you enjoyed reading. See you in the next article! 👋

What is a Quantum?
https://baidu.blog.icechui.com/blog/p/qubit1
Author baidu0com
Published at October 29, 2025