The quantum computer changes the world

In October 2019, Google announced that its quantum computer, Sycamore, had done a calculation in three minutes and 20 seconds that would have taken the world’s fastest supercomputer 10,000 years. “Quantum supremacy,” Google claimed for itself. We now have a quantum computer, it was saying, capable of performing calculations that no regular, “classical” computer is capable of doing in a reasonable time.

Where do you buy a computer like that? You don’t. Google’s Sycamore can’t run Word or Chrome; it can’t even run a nice friendly game of Minesweeper. In fact, Google’s supreme quantum computer doesn’t know how to do anything other than perform one useless calculation. It resembles the huge computer in “The Hitchhiker’s Guide to the Galaxy,” which came up with 42 as the “Answer to the Ultimate Question of Life, the Universe, and Everything” – although no one knows what the question is.

The question is now being worked on in Tel Aviv, on Derech Hashalom Street. In their generic office in the city’s Nahalat Yitzhak neighborhood, three physicists who received their doctorates at Rehovot’s Weizmann Institute of Science – Nissim Ofek, 46; Yonatan Cohen, 36; and Itamar Sivan, 32 – are developing instruments of control that will tame the quantum monster.

“Ten years ago, when I took a course in quantum computing, it was considered science fiction,” Dr. Sivan, the CEO of their company, Quantum Machines, relates. “The experts said that it wouldn’t happen in our lifetime or may never happen. As a physicist, quantum computing is a dream come true. Almost all our employees are physicists, even those who work as programmers, and most of them approached us. They read about an Israeli company for quantum computing and simply couldn’t restrain themselves. There’s nothing more exciting than to learn for years about Schrödinger’s cat and about all the wild quantum effects, and then to enter a laboratory and actually build Schrödinger’s cat and leverage the theory into a prodigious force of calculation.”

Already in high school, Sivan, who was born and raised in Tel Aviv, knew that he was drawn to the mysterious world of elusive particles. “I did honors physics, and in that framework we learned a little quantum mechanics. Without mathematics at that stage, only the ideas of quantum mechanics. My brain took off. The quantization of the world, of the space around me, was very tangible. I felt that I understood the quantum world. Afterward I understood that I didn’t understand anything, but that’s not important. It’s preferable to develop an intuition for quantum at an early age – like for a language. Afterward I did military service, but I didn’t forget that magic.

“I was a bureau chief [i.e., military secretary], not the most intellectually challenging job in the army,” he continues, “and I was afraid that when I was discharged, I would be too old. You know, it’s said that all the great mathematicians achieved their breakthroughs before the age of 25. So, in parallel with army service I started undergraduate studies at the Open University. On the day after my discharge, I flew to Paris to continue my studies at the École Normale Supérieure – because there are a few other things that are also worth doing when you’re young, such as living in Paris.”

He met his partners in the project, Nissim Ofek and Yonatan Cohen, at the Weizmann Institute, where they all studied at the Center for Submicron Research, under Prof. Moty Heiblum.

Sivan: “Nissim had completed his Ph.D. and was doing a postdoc at Yale just when Yonatan and I started. At the same time, Yonatan and I established the Weizmann Institute’s entrepreneurship program. When we graduated, we asked each other: Okay, what do we know how to do in this world? The answer: quantum electronics and entrepreneurship. We really had no choice other than to found Quantum Machines.”

“QM is a singular startup,” says Prof. Amir Yacoby, a Harvard University physicist and a member of the company’s scientific advisory board. “A great many startups promise to build ever more powerful quantum computers. QM is out to support all those ambitious platforms. It’s the first company in the world that is building both the hardware and the software that will make it possible to use those computers. You have to understand that quantum computing was born in university labs before the electronics industry created dedicated devices for it. What we did was to take devices designed for classical computers and adapt them to the quantum computers. It took plenty of student years. That’s why QM looks so promising. These guys were the wretches who went through hell, who learned the needs the hard way. Today, every research group that I’m familiar with is in contact with them or has already bought the system from them. QM is generating global enthusiasm.”

We’ll return to the Israeli startup, but first we need to understand what all the fuss is about.

What we refer to as the universal computing machine was conceived by the man considered the father of computer science, Alan Turing, in 1936. Years before there were actual computers in the world, Turing proposed a read-write head that would move along a tape, read the symbol in each cell and rewrite it according to the commands it received. It sounds simplistic, but there is no fundamental difference between the theoretical Turing machine and my new Lenovo laptop. The only difference is that my Turing machine reads and writes so many cells per second that it’s impossible to discern that it’s actually calculating. As the science-fiction writer Arthur C. Clarke put it, “Any sufficiently advanced technology is indistinguishable from magic.”
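For readers who want to see the idea concretely, here is a minimal sketch of a Turing machine in Python (my own illustration, not anything from the article); the rule table is a made-up example that simply flips every bit on the tape.

```python
# A head reads one cell, writes a symbol, moves, and changes state
# according to a fixed rule table - that is the whole machine.

def run_turing_machine(tape, rules, state="scan", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# (state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),   # off the end of the tape: stop
}

print(run_turing_machine("010011", rules))  # -> 101100
```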

Classical computers perform these calculations by means of transistors. In 1947, William Shockley, Walter Brattain and John Bardeen built the first transistor – the word is an amalgam of “transfer” and “resistor.” The transistor is a kind of switch that sits within a slice of silicon and acts as the read-write cell that Turing dreamed of. Turn on the switch and the electricity flows through the transistor; turn it off, and the electricity does not flow. Hence, the use of transistors in computers is binary: if the electricity flows through the transistor, the bit, or binary digit, is 1; and if the current does not flow, the bit is 0.

With transistors, the name of the game is miniaturization. The smaller the transistor, the more of them can be packed onto a slice of silicon, and the more complex the calculations one can perform. It took a whole decade to get from the single transistor to an integrated circuit of four transistors. Ten years later, in 1965, it had become possible to squeeze 64 transistors onto a chip. At this stage, Gordon Moore, who would go on to co-found Intel, predicted that the number of transistors per silicon slice would continue to grow exponentially. Moore’s Law states that every 18 months, like clockwork, engineers will succeed in miniaturizing and fitting double the number of transistors into an integrated circuit.

Moore’s Law is a self-fulfilling fusion of a natural law and an economic prediction. A natural law, because miniaturized electrical circuits are more efficient and cheaper (it’s impossible to miniaturize a passenger plane, for example); and an economic law, because the engineers’ bosses read Moore’s article and demanded that they compress double the number of transistors in the following year. Thus we got the golden age of computers: the Intel 286, with 134,000 transistors in 1982; the 386, with 275,000 transistors, in 1985; the 486, with 1,180,235 transistors, in 1989; and the Pentium, with 3.1 million transistors, in 1993. There was no reason to leave the house.

Today, the human race is manufacturing tens of billions of transistors per second. Your smartphone has about 8.5 billion transistors. According to a calculation made by the semiconductor analyst Jim Handy, since the first transistor was created in 1947, 2,913,276,327,576,980,000,000 transistors – that’s 2.9 sextillion – have been manufactured, and within a few years there will be more transistors in the world than all the cells in all the human bodies on earth.

However, the golden age of the transistors is behind us. “Moore’s Law ceased being relevant long ago,” says Amir Yacoby. “Computers are continuing to be improved, but the pace has slowed. After all, if we’d continued to miniaturize transistors at the rate of Moore’s Law, we would have reached the stage of a transistor the size of an atom – and we would have had to split the atom.”

The conventional wisdom is that the slowdown in the rate of the improvement of classic computers is the engine driving the accelerated development of quantum computers. QM takes a different approach. “There’s no need to look for reasons to want more computing power,” Sivan says. “It’s a bottomless pit. Generate more calculating power, and we will find something to do with it. Programmers are developing cooler applications and smarter algorithms, but everything rests on the one engine of calculating power. Without that engine, the high-tech industry would not have come into being.”

“Moore’s Law,” Cohen adds, “starts to break down precisely because miniaturization has brought us to the level of solitary atoms, and quantum effects are already starting to interfere with the regular behavior of the transistors. Now we are at a crossroads. Either we continue to do battle against these effects, which is what Intel is doing, or we start harnessing them to our advantage.”

And there’s another problem with our universal Turing machine: even if we were able to go on miniaturizing transistors forever, there is a series of “hard problems” that will always be one step ahead of our computers.

“Mathematicians divide problems into complexity classes,” Cohen explains. “Class P problems are easy for a classical computer. The time it takes to solve them grows polynomially with the size of the input, hence the P. Five times three is an example of a polynomial problem: I can keep multiplying larger numbers, and my calculating time will grow only modestly with the number of digits I add to the problem. Then there are NP problems – nondeterministic polynomial time. I give you the 15 and you need to find the prime factors: five times three. Here the calculating time grows exponentially while the problem grows only linearly. NP problems are hard for classical computers. In principle, the problem can still be solved, but the calculating time becomes unrealistic.”
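To make the asymmetry concrete, here is a toy sketch (mine, not Cohen’s) in which multiplying two primes is a single step, while recovering them by trial division takes a number of steps that grows explosively with the number of digits; the primes below are arbitrary illustrative choices.

```python
def count_divisions_to_factor(n):
    """Factor n by brute-force trial division and count the steps.
    The step count grows roughly like sqrt(n) - exponentially
    in the number of digits of n."""
    steps, d = 0, 2
    while d * d <= n:
        steps += 1
        if n % d == 0:
            return d, n // d, steps
        d += 1
    return n, 1, steps

for p, q in [(101, 103), (10_007, 10_009), (104_723, 104_729)]:
    n = p * q                                  # the easy direction: one multiplication
    a, b, steps = count_divisions_to_factor(n) # the hard direction: many, many steps
    print(f"{n} = {a} x {b}  ({steps:,} trial divisions)")
```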

A classic example of an NP problem is that of the traveling salesman. Given a list of cities and the distances between every pair of them, what is the shortest route for the traveling salesman – who in the end has to return to his hometown – to take? With 14 cities, the number of possible routes is on the order of 10 to the 11th power. A standard computer performs an operation every nanosecond, or 10 to the 9th power operations per second, and thus will check all the possible routes in about 100 seconds. But if we increase the number of cities to just 22, the number of possibilities grows to 10 to the 19th power, and our computer will need some 1,600 years to find the fastest route. And if we want to figure out the route for 28 cities, the universe will die before we get the result. In contrast to the problem that Google’s quantum-supremacy computer addressed, the problem of the traveling salesman comes from the real world. Airlines, for example, would kill for a computer that could do such calculations.
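For the curious, a brute-force solver of the kind the paragraph describes might look like this (a sketch with made-up coordinates); it enumerates every ordering of the cities, which is exactly why it stops being feasible beyond a couple of dozen of them.

```python
from itertools import permutations
from math import dist

# Hypothetical city coordinates, for illustration only.
cities = {"A": (0, 0), "B": (3, 1), "C": (1, 4), "D": (5, 2), "E": (2, 6)}

def tour_length(order):
    # Close the loop: the salesman returns to the city he started from.
    loop = list(order) + [order[0]]
    return sum(dist(cities[a], cities[b]) for a, b in zip(loop, loop[1:]))

# Checks all n! orderings - fine for 5 cities, hopeless for 28.
best = min(permutations(cities), key=tour_length)
print(" -> ".join(best), f"({tour_length(best):.2f})")
```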

In fact, modern encryption is based on these same computer-defeating problems. When we enter the website of a bank, for example, the communication between us and the bank is encrypted. What is the sophisticated Enigma-like machine that prevents outsiders from hacking into our bank account? Prime numbers. Yes, most of the sensitive communication on the internet is encrypted by a protocol called RSA (standing for the surnames of Ron Rivest, the Israeli Adi Shamir, and Leonard Adleman), whose method is completely public: its security rests on the difficulty of breaking a large number down into its prime factors. Every computer is capable of cracking RSA, but it would take many years to do so. To break a 300-digit number down into its prime factors would require about 100 years of calculation. A quantum computer would solve the problem within an hour – and hack the internet.
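A textbook-sized RSA round trip (the classic toy numbers, far too small to be secure) shows the trick: anyone holding the public key (n, e) can encrypt, but decrypting requires the private exponent d, which is easy to compute only if you can factor n back into p and q.

```python
p, q = 61, 53                  # the secret primes (toy-sized)
n = p * q                      # 3233, published openly
e = 17                         # public exponent, also published
phi = (p - 1) * (q - 1)        # 3120 - computing this requires knowing p and q
d = pow(e, -1, phi)            # 2753, the private exponent

message = 65
ciphertext = pow(message, e, n)      # anyone can do this with (n, e)
decrypted = pow(ciphertext, d, n)    # only the holder of d can undo it
print(ciphertext, decrypted)         # 2790 65
```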

“The central goal of the study of quantum algorithms in the past 25 years was to try and understand what quantum computers could be used for,” says Prof. Scott Aaronson, a computer scientist from the University of Texas at Austin and a member of QM’s scientific advisory board. “People need to understand that the answer is not self-evident. Nature granted us a totally bizarre hammer, and we have to thank our good fortune that we somehow managed to find a few nails for it.”

‘Spooky action’

What is this strange hammer? Without going deeply into quantum theory, suffice it to explain that quantum mechanics is a scientific theory that is no less grounded than the Theory of General Relativity or the theory of electricity – even if it conflicts sharply with common sense. As it happens, the universe was not tailor-made for us.

Overall, quantum mechanics describes the motion of particles in space. At about the same time as Turing was envisioning his hypothetical computer, it was discovered that small particles, atomic and sub-atomic, behave as if they were waves. Shine a flashlight at two slits and look at the wall on the other side. What will we see? Alternating bands of light and shade. The two waves that form at the slits weaken or strengthen each other on the other side – like ocean waves. But what happens if we fire one particle of light, a solitary photon, at the two slits? The result will be identical to the flashlight: destructive and constructive interference of waves. The photon will split in two, pass through the two slits simultaneously and “interfere” with itself on the other side.

It’s from this experiment, which has been repeated in countless variations, that two of the odd traits of quantum mechanics are derived: what scientists call “superposition” (the particle we fired split in two and passed through the two slits in parallel) and the fact that we can predict only the probability of the photon’s position (we don’t know for certain where the particle we fired will hit). An equally strange trait is quantum entanglement. When two particles are entangled, the moment one particle “decides” where it is located, it influences the behavior of the other particle, even if it is already on the other side of the slits – or on the other side of the Milky Way. Einstein termed this phenomenon “spooky action at a distance.”
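In the standard textbook notation (not used in the article), the two traits look like this:

```latex
% Superposition: a qubit holds a weighted mix of 0 and 1; the weights fix only
% the probabilities of what a measurement will show.
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1

% Entanglement: the pair below has no qubit-by-qubit description; measuring one
% side instantly fixes what the other side will show, however far away it is.
|\Phi^{+}\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}}
```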

“The world of quantum mechanics is so bizarre that it’s insanely attractive,” Sivan suggests. “On the one hand, the results contradict common sense; on the other hand, it is one of the most solidly grounded theories.”

“The best analogy was provided by the physicist Richard Feynman, who conceived the idea of a quantum computer in 1982,” notes Cohen. Feynman compared “the world” to “a great chess game being played by the gods… We do not know what the rules of the game are; all we are allowed to do is to watch the playing. Of course, if we watch long enough, we may eventually catch on to a few of the rules.”

According to Cohen, “Until the beginning of the 20th century, physicists could only look at pawns – at the binary moves. Quantum mechanics shows us that there is a larger and far more interesting set of laws in nature: there are knights, rooks, queens.”

“Here,” adds Sivan, pointing, “this table here has an end, right? No, it doesn’t. Like the particle that passes through the slits, this table also has no defined size in space, only probability. The probability of finding a particle of the table fades exponentially at the table’s edge. In order to work with the table on an everyday basis, we can make do with the classical, simplistic description. But our world is a quantum world, and we need to know how to describe it as it truly is. And for that we need quantum computers. In order to describe a simple molecule with 300 atoms – penicillin, let’s say – we would need 2 to the 300th power classical transistors – which is more than the number of atoms in the universe. And that is only to describe the molecule at a particular moment. To run it in a simulation would require us to build another few universes,” to supply all the material needed.

But humanity is today running simulations on whole galaxies.

Sivan: “True, but humanity is really bad at that. We are simplifying, cutting corners. This table will have a boundary in a simulation, so that you can work with it. The galaxy you are simulating is composed of molecules that behave according to quantum mechanics, but in the simulation you will run, the galaxy – having no other choice – will operate according to the principles of classical mechanics. That was Feynman’s great insight: We cannot simulate a quantum world with classical computers. Only a quantum computer will know how to simulate a quantum system.”

Feynman didn’t stop at imagining a machine that would depict or simulate a quantum system – that is, a computer that would be an analog of a quantum system. He took a step forward and asked: Why not build a universal quantum calculating machine? The theoretical principles for the universal quantum computer were set forth by the Israeli-born physicist David Deutsch in 1985. A quantum computer, Deutsch stated, will not merely be equivalent to a Turing machine; it will be capable of solving every problem that a Turing machine can solve – and of cracking some problems, such as the factoring behind RSA, far faster than any Turing machine could.

“Classic computers are based on binary bits, two states, 0 or 1,” Cohen says. “But like the particle in the experiment, Schrödinger’s cat can also be in a superposition, both dead and alive, both 0 and 1. We don’t know how to do that with cats yet, but there are systems that we can bring into superposition. Every such system is called a quantum bit, or qubit. Of course, the superposition will ultimately collapse, because we need to see the result on the other side, but along the way the cat was both alive and dead, the lone photon really did pass through both slits – and the result shows it.”

Sivan: “Two classic bits can take four possible combinations: 00, 01, 10 or 11. Two quantum bits can be in all four of those combinations simultaneously: 00, also 01, also 10 and also 11. With eight qubits you reach 256 combinations. That is true exponential force. Let’s say you have a processor with a billion transistors, a billion bits, and you want to double its memory. You would have to add another billion bits. To double the memory in a quantum computer you will have to add one qubit.”
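A back-of-the-envelope sketch (mine, not the company’s) shows why: a classical simulator must store one complex amplitude for every possible combination, that is, 2 to the power of the number of qubits, so each added qubit doubles the memory required.

```python
import numpy as np

def random_state(n_qubits):
    """A normalized vector of 2**n_qubits complex amplitudes -
    the full classical description of an n-qubit register."""
    v = np.random.randn(2**n_qubits) + 1j * np.random.randn(2**n_qubits)
    return v / np.linalg.norm(v)

print(random_state(2).shape)   # (4,) - the weights of 00, 01, 10, 11

for n in (20, 30, 40, 50):
    gigabytes = 2**n * 16 / 1e9          # 16 bytes per complex amplitude
    print(f"{n} qubits -> {2**n:,} amplitudes (~{gigabytes:,.2f} GB)")
```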

How does it work? Take, for example, two simple calculations with two classical bits. In the first calculation we feed 00 into the machine, and the algorithm tells the computer to flip the second bit, so we get 01. Then we want to solve another problem: we feed in two bits in the state 11, the computer again flips the second bit, and we get 10. Two calculations, two operations. Now let’s entangle a pair of quantum bits in superposition: they are both 00 and 11. Instead of two operations, the quantum computer flips the second bit once and we get both 01 and 10. Two calculations, one operation. And it remains one operation no matter how many calculations we perform. If the classical computer is, at any given moment, in exactly one of its possible states – one combination of 0s and 1s across all its bits – the quantum computer is, at any given moment, in all of them at once.
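Here is the same example in the standard matrix picture (my own sketch, not QM’s software): a single bit-flip matrix applied to the entangled pair acts on both branches of the superposition at once.

```python
import numpy as np

# Amplitudes listed in the basis order |00>, |01>, |10>, |11>.
pair = np.array([1, 0, 0, 1]) / np.sqrt(2)     # entangled: both 00 and 11

X = np.array([[0, 1], [1, 0]])                 # the bit-flip (NOT) gate
I = np.eye(2)
flip_second_bit = np.kron(I, X)                # leave bit 1 alone, flip bit 2

print(flip_second_bit @ pair)                  # now both 01 and 10, in one operation
# -> [0.    0.707 0.707 0.   ]
```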

An important clarification is in order here. Scott Aaronson’s blog, called “Shtetl-Optimized,” carries the motto, “Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once.” That’s because a quantum computer can be in all the states at every given moment – but we, by heaven’s grace, are not quantum beings. We need an answer. That is why scientists build quantum algorithms with delicate choreography, so that all the mistaken calculations weaken one another and the calculations that contribute to the right answer reinforce one another – so that we non-quantum mortals will, with high probability, be able to measure the right answer from among the random nonsense.

“Almost every popular article is wrong on this point,” Prof. Aaronson explains. “Like Sisyphus rolling the boulder up the hill, I have been trying for 15 years to explain that if we simply measure the superposition of each of the possible answers, we will get a random answer. For that we don’t need quantum computers – you can flip a coin or spin a top. All the hopes we are pinning on quantum computing depend on our ability to increase the probability of the right answer and reduce the probability of all the wrong answers.”
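The smallest illustration of such interference (my example, not Aaronson’s): apply the Hadamard gate to a qubit twice. After one application a measurement is a fair coin flip; after the second, the two paths leading to 1 cancel out and the “right answer,” 0, emerges with certainty.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # the Hadamard gate
zero = np.array([1.0, 0.0])                    # the state |0>

after_one = H @ zero        # equal superposition: measuring now is a coin flip
after_two = H @ after_one   # the two routes to |1> have opposite signs and cancel

print(after_one)            # ~[0.707, 0.707]
print(after_two)            # ~[1.0, 0.0] - the amplitudes, not chance, decide
```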

The classical bit is encoded by an electrical current in semiconductors: if the current does not flow we get 0, and if it does flow we get 1. The quantum computing revolution hasn’t yet settled on the best way to encode quantum bits, but one leading approach encodes the qubit in a single electron shared between two atoms: the electron can be in the left atom, 0, or in the right atom, 1 – or in both of them, in superposition – at the same time. Google’s Sycamore has 53 qubits, fewer than the number of classical bits there were in the world when Moore formulated his law in 1965. All the giants – such as IBM, Intel, Microsoft and Alibaba – are in the quantum race to add qubits; the experts think that in a year or two we will see quantum computers with 100 or 200 qubits. The rate of increase is astounding, worthy of a “quantum Moore’s Law.” The question now arises: If one qubit works, and 53 qubits work together, why not create more qubits? Why not create a processor possessing hundreds, thousands, millions of qubits, hack the RSA encryption of all the banks in the world and retire on a yacht?

The answer is that quantum computers make mistakes. Classical computers make mistakes too, but we’re not aware of it, because classical computers also correct their mistakes. If, for example, a calculation is run on three classical bits, and one bit produces the result 0 while the other two produce the result 1, the processor will determine that the odd bit out was wrong and return it to state 1. Democracy. In quantum computing, democracy doesn’t work, because the voters entered the polling booth together. Think of three qubits entangled into a superposition of 000 and 111 – which is to say, three electrons that are present both in the left atom and in the right atom simultaneously. If the third bit flips by mistake, we will get a superposition of 001 and 110. If we try to correct the mistake, or even to check whether a mistake occurred, our superposition will collapse immediately and we will get 000 or 111. In other words, the qubits defeat themselves. The quantum entanglement that makes the computer marvel possible is the same entanglement that precludes adding more qubits: the electrons simply coordinate their positions, so it is impossible to ask them who made the mistake. That is a problem, because qubits are notorious for their sensitivity to the environment, and they are far more prone to mistakes than regular bits.
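Classical “democracy” really is that simple, as a short sketch shows (mine, with an arbitrary 10 percent flip rate): triplicate each bit and take a majority vote. The naive quantum analogue is forbidden, because reading the three copies would collapse them.

```python
import random

FLIP_PROB = 0.1          # arbitrary noise level chosen for the demonstration

def encode(bit):
    return [bit, bit, bit]            # triplicate the bit

def add_noise(copies):
    return [b ^ (random.random() < FLIP_PROB) for b in copies]

def majority_vote(copies):
    return int(sum(copies) >= 2)      # the "democracy" step

random.seed(0)
trials = 100_000
wrong = sum(majority_vote(add_noise(encode(1))) != 1 for _ in range(trials))
print(f"raw flip rate: {FLIP_PROB:.1%}, after majority vote: {wrong / trials:.2%}")
```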

“Classical bits do not have a continuum of possibilities,” Prof. Yacoby notes. “What is a classical bit? The electricity flows or doesn’t flow. Even if the current weakens or becomes stronger, it is still considered a current. Quantum bits are continuous: the electron can be largely in the right atom and partially in the left atom. That is their strength and that is their weakness. Therefore, every interaction with the environment affects them dramatically. If I use my regular computer and an electromagnetic wave passes through the transistor, the state of the bit does not change. The same wave passing through a qubit will cause the qubit to lose its coherence – its memory. The information will leak out to the surroundings and we will not be able to reconstruct it.”

For this reason, we will not see quantum iPads in the near – or distant – future. A classical processor performs a calculation in a nanosecond and will preserve the information for days, months, years. A quantum computer also performs a calculation in a nanosecond – and at best will manage to preserve the information for a hundredth of a microsecond. Quantum computers are so sensitive to external interference that they must be isolated from their surroundings at almost minus 273 degrees Celsius, about a hundredth of a degree above absolute zero.

“The interaction of the qubits with the environment is a serious problem, because they lose the memory,” says Yacoby. “But that only means that they are measuring something about the environment. There is a whole field of quantum sensors that enable us to learn about the traits of materials with uncanny sensitivity. Quantum clocks can measure the change in the Earth’s gravity between my nose and my chin. It’s unbelievable. Lockheed Martin is developing a cruise missile that will be able to navigate itself without GPS, solely by quantum sensitivity to minute differences in Earth’s magnetic field. And there are quite a few startups that use quantum sensors to identify cancerous cells. These are applications for which I foresee commercial success long before we actually have quantum computers.”

There’s also another game that can be played with quantum sensitivity: encryption. A quantum computer can crack RSA, the internet’s most widespread encryption protocol, because factoring poses it no problem. But given that a superposition collapses the moment the “black box” is opened to check whether the cat is dead or alive, a quantum encryption protocol will be immune by virtue of being quantum. Communication with the bank can run over a quantum channel: anyone who tries to listen in on the line will cause the superposition to collapse and hear gibberish – and the bank and the client will know that someone was listening.

But with all due respect to the benefit that can be extracted from quantum devices that cannot yet compute – only sense – humanity will benefit tremendously if we can make them compute. In our world, everything is quantum at its base. Mapping the structure of chemical molecules requires quantum computing power, and we will know how to ward off diseases only when the pharmaceutical companies are able to run quantum simulations. The neurons in our brain are quantum, and we will be able to create true artificial intelligence only when we have quantum computers that can run independent thoughts.

“It’s not the race to the moon,” Cohen says, “it’s the race to Mars. In my opinion, the greatest scientific and engineering challenge now facing the human race is the actualization of quantum computers. But in order to actualize all those dreams, we need to understand how we correct errors in qubits, how we control them. That’s what we’re doing. QM is the first company in the world that is totally focused on developing control and operating systems for quantum computers. The system we are developing has a decisive role in correcting errors. In fact, the third founder of QM, Nissim, was the first person in the world to prove that errors in quantum bits can be corrected. He didn’t show it on paper – he proved it, succeeded, demonstrated it. Instead of measuring every qubit and seeing which was wrong, it’s possible to examine whether the qubits are in the same state. If one qubit is in a different state, we’ll know that it is wrong. You can know whether you voted for a party that didn’t win without knowing the results of the election.”
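Cohen’s election analogy corresponds to what textbooks call a parity, or “syndrome,” measurement. A standard three-qubit illustration (not necessarily the scheme Ofek demonstrated at Yale, which used a different encoding) looks like this:

```latex
% Ask only "are these two qubits in the same state?" (the parities Z_1 Z_2 and
% Z_2 Z_3), never "what state is each qubit in?". The pattern of answers points
% to the flipped qubit without touching the encoded superposition.
\alpha\,|000\rangle + \beta\,|111\rangle
\;\xrightarrow{\ \text{flip on qubit 3}\ }\;
\alpha\,|001\rangle + \beta\,|110\rangle

\begin{array}{ccl}
Z_1 Z_2 & Z_2 Z_3 & \text{diagnosis} \\
+1 & +1 & \text{no error} \\
-1 & +1 & \text{qubit 1 flipped} \\
-1 & -1 & \text{qubit 2 flipped} \\
+1 & -1 & \text{qubit 3 flipped}
\end{array}
```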

QM was founded in 2018 with the aim of bypassing the problem of errant qubits with the help of some old friends: classical bits. A classical computer consists of hardware and software – a great many transistors, and a language that tells the processor which calculations to run on them. In a quantum computer, the cake has three layers: quantum hardware (that is, qubits), classical hardware that operates the quantum hardware, and software (both classical and quantum). “That is our way of having an impact on the qubits while reading the results in our world,” Sivan says. “If we were quantum beings, we would be able to speak directly with the computer – but we’re not.”

Would you like to be a quantum being? It would save you a lot of work.

“Yes, but then the other quantum beings wouldn’t buy our products.”

QM is building the classical hardware and software that will be able to send the right electric signals to the electrons – and to read the results – with minimal interference to the black wonder box. Their integrated system is called the “Quantum Orchestration Platform.”

“Today there is separate hardware for every individual quantum computer,” Cohen says. “We are building an orchestration system that can work with every such computer and will send the most correct electrical signals to the qubits. In addition, we are developing a programming language that will make it possible to program the algorithms – the commands. It’s a general quantum language, like C [the programming language]. Today there is a potpourri of languages – each quantum computer has its own. We want our language, QUA, to be established as the standard, universal language for quantum computing.”

Sound off the wall? Not all that much. Last month, QM joined the IBM Q Network, in an attempt to integrate the computer conglomerate’s programming languages into the Quantum Orchestration Platform of Sivan and his colleagues, and to publish a complete compiler (a compiler is a program that translates computer code written in one programming language into another) by the second quarter of 2020. The compiler will be able to translate every quantum programming language into the QM platform. Thus, an algorithm written at a university in Shanghai will be able to run on a quantum computer built in Google’s laboratories in, say, Mountain View.

Says Yonatan Cohen: “The major players, like Google and IBM, are still gambling. They are developing a quantum processor that is based on their own [singular] technology. And it could be that in a few years we will discover a better platform, and their processor will not have any use. We are building a system that is agnostic to quantum hardware. Our goal is to grow with the industry, no matter what direction it develops in. Because the underlying assumption is that you don’t know exactly when quantum computers will start to be practicable. Some people say three years, others say 20 years. But it’s clear to us that whoever is in the forefront when it erupts will win bigtime, because he will control the new computing force. Everyone will have to work with him, in his language, with his hardware.”

Sivan: “It’s possible that in another few years, we will look back on this decade and see an unexampled technological turning point: the moment when quantum computers went into action. That’s not another technological improvement. It’s a leap…”

A quantum leap!

Sivan: “Exactly.”

Oded Carmeli