Published on July 14th, 2019

IBM Research explains how quantum computing works and why it matters


As the technological progress codified as Moore’s Law slows down, computer scientists are turning to alternative methods of computing, such as superconducting quantum processors, to deliver computing gains in the future.

Jeffrey Welser, vice president and lab director at IBM Research at Almaden, spoke about quantum computing at the 49th annual Semicon West chip manufacturing show in San Francisco last week. I caught up with him afterward to get a layman’s view of quantum computing.

IBM also displayed a part of its IBM Q System at the show, giving us an idea of how much refrigeration technology has to be built around a current quantum processor to ensure that its calculations are accurate.

Binary digits — ones and zeroes — are the basic components of information in classical computers. Quantum bits, or qubits, exploit quantum effects instead, and a qubit can be in a state of 0, 1, or both at any given time. Quantum computers can therefore work through extremely complex calculations in parallel, but they require enormous engineering precision just to produce accurate results. IBM is working on improving this, and it may take years before the improvements take hold and give quantum computing a chance to beat classical computers, Welser said.

In a quantum processor, superconducting qubits process the quantum information and send the computation outcomes back through the system via microwave signals. The whole contraption around the processor exists mainly to cool it as much as possible, and the quantum processor sits inside a shield that protects it from electromagnetic radiation.

Here’s an edited transcript of our interview.

Above: Jeff Welser, IBM Research vice president and lab director at Almaden.

Image Credit: Dean Takahashi

VentureBeat: The usual question is, what the heck is quantum computing?

Jeff Welser: Quantum computing is a form of computing that takes advantage of some quantum effects that we believe can do certain types of algorithms much more efficiently than classical. The basic unit for a quantum computer is something we call a quantum bit, a qubit. We’re all familiar with regular bits, a one or a zero. That’s what we use for normal computation. A qubit can also be a one or a zero, but because it’s a quantum bit, it can be in a superposition of both a one and a zero at the same time. It has a probability of being either of these.
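
His description maps directly onto a small piece of linear algebra: a qubit’s state is a two-component complex vector, and the squared magnitudes of its components give the probabilities of reading out 0 or 1. Here is a minimal NumPy sketch of that idea (an illustration only, not IBM’s code):

```python
# A qubit state as a 2-component complex vector; squared amplitudes give
# the probability of measuring 0 or 1. Illustration only, not IBM's code.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)        # |0>
ket1 = np.array([0, 1], dtype=complex)        # |1>

# Equal superposition of |0> and |1> (what a Hadamard gate produces from |0>).
psi = (ket0 + ket1) / np.sqrt(2)

probs = np.abs(psi) ** 2                      # [0.5, 0.5]
rng = np.random.default_rng(0)
shots = rng.choice([0, 1], size=1000, p=probs)

print("P(0), P(1):", probs)
print("measured 0 on", int((shots == 0).sum()), "of 1000 shots")
```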

Moreover, you can entangle two qubits, or hundreds or thousands of qubits, and whenever you do an operation on one of them, it determines the state of all of them instantly, because of the entanglement. In a sense, it gives you the ability to do a massively parallel calculation. For algorithms or problems that map onto it, you can do things exponentially faster or better than you can with a classical system.
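
Entanglement is just as compact to write down. In the sketch below (again plain NumPy, not IBM’s code), a Hadamard gate followed by a CNOT turns the two-qubit state 00 into a Bell state, and the sampled measurement outcomes of the two qubits are perfectly correlated: you only ever see 00 or 11.

```python
# Entangling two qubits: H on the first qubit, then CNOT, gives the Bell
# state (|00> + |11>)/sqrt(2). Measurements of the pair are correlated.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                    # control = first qubit

state = np.zeros(4); state[0] = 1.0                # start in |00>
state = CNOT @ np.kron(H, I2) @ state              # Bell state

probs = np.abs(state) ** 2                         # ~[0.5, 0, 0, 0.5]
rng = np.random.default_rng(1)
outcomes = rng.choice(["00", "01", "10", "11"], size=1000, p=probs)
print({label: int((outcomes == label).sum()) for label in ["00", "01", "10", "11"]})
# Only "00" and "11" appear: reading one qubit tells you the other.
```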

Examples of things that can do this are chemistry and materials, which are themselves of course based on quantum chemistry. It’s all quantum effects. You can simulate those atoms or interactions with a quantum computer much more precisely and at much larger scales. The example I gave in the keynote: think about the caffeine molecule. It’s an important molecule for us every day. It has about 95 electrons, so it’s not a particularly big molecule, but if you wanted to simulate it exactly on a classical computer, you’d have to have 10 to the 48th power classical bits. For reference, there are about 10 to the 50th power atoms in the planet earth. Obviously, you’ll never do that.

With a quantum system, if it were a very robust, fault-tolerant quantum system, you could do it with only 160 qubits. The system here is a model of our 50-qubit system. We’re not that far away from 160 today. If you go on the IBM Q website, you can get access to a 16-qubit system that you can play with for fun. In some sense, we still have a few years to go where we’ll really have something that has value over classical systems, but it’s not as far away as we used to think.
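
The two figures are consistent: an n-qubit register is described by 2^n amplitudes, and 2^160 lands almost exactly at 10^48, which appears to be where the 160-qubit estimate comes from. A quick check:

```python
# Quick arithmetic check: 2^160 amplitudes vs. the ~10^48 classical bits
# quoted for simulating the caffeine molecule exactly.
import math

n_qubits = 160
amplitudes = 2 ** n_qubits
print(f"2^{n_qubits} = {amplitudes:.3e}")                          # ~1.462e+48
print("order of magnitude:", math.floor(math.log10(amplitudes)))   # 48
```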

VentureBeat: What kind of physical space are we talking about?

Welser: If you look at the system, the reason it’s structured this way is really—it’s a lot about isolating the chip. The chip is down on that bottom section where those wires are all going in. That’s the actual quantum computing chip. If we were using it, we’d have a canister and things around it to insulate it, so you couldn’t see it, but we’ve uncovered it. When it’s covered up, this whole system goes down to low pressure, but also goes down to low temperature, which is what really matters.

The top is about 40 degrees Kelvin, and then it goes down to four Kelvin, 100 milli-Kelvin, and so on. When you get down to the bottom it’s at 15 milli-Kelvin, which is 15 thousandths of a degree above absolute zero. For reference, space is about two to three Kelvin. It’s a couple hundred times colder than outer space, when you get down there.
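
The “couple hundred times colder” figure follows directly from the two temperatures he quotes:

```python
# "A couple hundred times colder than outer space": ~2.7 K cosmic background
# versus 15 mK at the bottom of the dilution refrigerator.
space_K = 2.7
fridge_K = 0.015
print(f"{space_K / fridge_K:.0f}x colder")   # ~180x
```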

The reason you have to have it so cold is you need to isolate it from any kind of interference, including thermal interference. Any thermal energy will knock the qubits out of the superposition state that we want. Even with all this isolation, the qubits will only maintain their superposition for about 100 microseconds. That’s actually very good. We’re proud of that number. But it’s still a very short time, obviously. You have to do all your calculations in that time period before you’re going to have an error generated.
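
To make that 100-microsecond window concrete, here is a rough gate-budget estimate. The gate durations are assumptions typical of superconducting qubits of that era, not figures Welser gives, so treat the exact counts as order-of-magnitude only:

```python
# Rough gate budget inside a ~100 microsecond coherence window.
# Gate durations are ASSUMED typical values for superconducting qubits,
# not numbers from the interview.
coherence_s = 100e-6            # ~100 microseconds, per the interview
single_qubit_gate_s = 50e-9     # assumed ~50 ns
two_qubit_gate_s = 300e-9       # assumed ~300 ns

print("single-qubit gates that fit:", int(coherence_s / single_qubit_gate_s))  # ~2000
print("two-qubit gates that fit:   ", int(coherence_s / two_qubit_gate_s))     # ~333
# Circuits therefore have to stay shallow, which is why error rates matter
# as much as qubit counts.
```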

Above: IBM’s quantum computer

Image Credit: IBM

VentureBeat: Is this a demo unit now?

Welser: This is a demo, yes. The components are all there. In theory, you could run it. But it’s missing the vacuum systems and things around it. The ones that are running today are in the basement of our Yorktown Heights lab in New York. We have several systems up and running there. Those are the ones you can access in the cloud. We put the first one online in May 2016. That was a five-qubit system. As I said, we now have a 16-qubit system you can use for free, and we have a 20-qubit system for people who join our network. We have a network of companies and universities, more than 70 at this point, that get access to the 20-qubit system as well.

We’ve also put together a whole open-source software infrastructure called Qiskit. It’s giving people the tools they need to try to program this. One of the challenges is, as you can guess, it’s very different programming than we’re used to. Qiskit has ways you can do individual gates to manipulate qubits if you understand that part. Over time we’re introducing libraries, so a chemist could use a library of quantum algorithms. They would understand what the high-level algorithm does, and that would translate down to running on the quantum computer.
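
For a sense of what programming “individual gates” looks like, here is a minimal Qiskit circuit in the API as it existed around the time of this interview (later Qiskit releases reorganized the execute and Aer entry points, so this is a circa-2019 sketch rather than current usage). It builds the same entangled pair discussed earlier and runs it on a local simulator:

```python
# Minimal Qiskit example using the circa-2019 API (QuantumCircuit, Aer,
# execute). Builds a two-qubit entangled circuit and simulates it locally.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)                          # put qubit 0 into superposition
qc.cx(0, 1)                      # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])       # read both qubits into classical bits

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)                    # roughly half '00' and half '11'
```

Swapping the local simulator for one of the cloud-hosted IBM Q backends Welser mentions was essentially the same pattern: choose a backend, submit the circuit, and read back the counts.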

VentureBeat: What are people finding it useful for right now?

Welser: Most of the people that are looking at it are in three main areas. One is for chemistry or materials discovery. For example, JSR, a big semiconductor polymer producer, is a member. Samsung is a member. They’re using it a lot for—they believe that when they get large enough systems, it will help them in discovering new materials with different properties for whatever application is necessary. Materials drives a lot of what goes on in consumer products, in cars, in batteries, and so on. That’s one where we believe that, in three to five years, we’ll have systems large enough that you’ll see actual benefit from them. Right now it’s really just experiments.

The next one is optimization. We have J.P. Morgan Chase and Barclays as members. They’re looking at using this for doing really large quantum Monte Carlo simulations or other optimization problems for pricing bonds or predicting the behavior of very complex financial systems. Today we do that with very large supercomputers, but it’s one of those things where—similar to the caffeine problem, you can only simulate so much. That’s more like five years out before you have a system that’s large enough.
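
For context on what the quantum version would be accelerating, here is a toy classical Monte Carlo pricer (an illustration only, not anything the banks or IBM run) that values a simple European call option by simulating many random price paths and averaging the discounted payoff. The model (geometric Brownian motion) and every parameter are assumptions made for the sketch:

```python
# Toy classical Monte Carlo pricer for a European call option, shown only
# as a reference point for the workloads quantum Monte Carlo would target.
# The model (geometric Brownian motion) and all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)
s0, strike, rate, vol, t = 100.0, 105.0, 0.02, 0.2, 1.0   # assumed inputs
n_paths = 1_000_000

z = rng.standard_normal(n_paths)
s_t = s0 * np.exp((rate - 0.5 * vol ** 2) * t + vol * np.sqrt(t) * z)
payoff = np.maximum(s_t - strike, 0.0)
price = np.exp(-rate * t) * payoff.mean()
stderr = np.exp(-rate * t) * payoff.std(ddof=1) / np.sqrt(n_paths)

print(f"estimated price: {price:.4f} +/- {stderr:.4f}")
# Classical Monte Carlo error shrinks like 1/sqrt(N); the appeal of quantum
# amplitude estimation is error that shrinks roughly like 1/N.
```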

The other one is for AI and machine learning. There are some machine learning problems that can be mapped onto quantum systems that we think will allow you to do much larger sets of parameter and feature spaces than you can do on standard systems. We just published a paper around that about six months ago. That one, again, is three to five, maybe five years out.

The one I haven’t mentioned, which most people think about, is factorization or cryptography, this idea that quantum computers can factor very large numbers potentially, and therefore could break the internet, break the encryption we use. It is true that if you had a large enough system, you could factor very large numbers, and the current types of encryption we use on the internet would be vulnerable then. But to get there you’d need a system probably in the thousands of qubits, or even millions. Those would have to be very robust, very error-free qubits, which we don’t have today. We have at least 10 years, if not 15 or 20, before we have a system large enough to do that. No immediate concern there.
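
The link between factoring and today’s encryption is concrete: an RSA public key is built from the product of two large secret primes, and anyone who can factor that product can reconstruct the private key. A toy illustration with deliberately tiny numbers (real keys use primes hundreds of digits long; the modular-inverse call needs Python 3.8+):

```python
# Toy illustration of why efficient factoring would break RSA-style
# encryption. Deliberately tiny primes; real moduli are hundreds of digits.
p, q = 61, 53                      # secret primes
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (needs Python 3.8+)

message = 42
cipher = pow(message, e, n)        # anyone can encrypt with (n, e)

# An attacker who can factor n recovers p and q, and with them the private key.
p_found = next(f for f in range(2, n) if n % f == 0)
q_found = n // p_found
d_found = pow(e, -1, (p_found - 1) * (q_found - 1))
print("recovered message:", pow(cipher, d_found, n))   # 42
```

Shor’s algorithm matters because it would do that factoring step efficiently even when n is thousands of bits long, which brute-force trial division cannot.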

Meanwhile, there are already known methods of encryption that we could be using today on classical systems that do not map well onto a quantum computer. Even when you get a very large system, they would not be vulnerable. A form of cryptography called lattice cryptography, for example. We have plenty of time to implement those sorts of things. In fact, one of the things we talk to our clients about, because a lot of our clients are large industry or government players, is that it’s too early to worry about anything breaking the internet.

If you’re archiving data, or you have data that you want to stay secure and private for 10, 15, or 20 years into the future — think about the tape archives of all the data you’re putting away — it’s not too early to start encrypting it with something like lattice cryptography, which is very feasible today. Fifteen years from now, you’re not going to want to go back and re-encrypt all the data in your archive when quantum computers come along. It’s not too early to be thinking along those lines.

Above: IBM is doing chemistry with quantum computers.

Image Credit: IBM

VentureBeat: How large an effort is this within IBM right now?

Welser: It’s a strong focus. We have a lot of work going on in our Yorktown Heights lab, as well as software work going on in the Albany lab and the Zurich lab. Part of the reason for creating this large network of universities and companies is that we need a lot of people working on it in lots of different spaces. We’re going to continue to advance the hardware part of it, of course, as well as continuing to enable the algorithm and software part, but we want lots of people out there building applications, because that’s how we’re going to figure out how to use this.

VentureBeat: How many years have you been working on it so far?

Welser: Arguably we’ve been working on this since 1981. In 1981 there was a very famous meeting that took place between a bunch of physicists. It was co-sponsored by MIT and IBM. Richard Feynman, who’s a fairly famous physicist—that’s where he basically coined the idea of quantum computing. He said he thought it would make sense to think about using the quantum effects to do computation. He also pointed out that it might be necessary to use those if you ever wanted to do chemistry simulations.

That was where the idea started to percolate around. People started to put together some of the ideas around what it would take to build it—David DiVincenzo, a physicist working for IBM in the ’90s, put together a set of five criteria it would take to build a quantum computer. We built our first seven-qubit system back in the late ’90s using trapped ions — a completely different technology — just to prove it was even possible. It wasn’t particularly usable, but it proved the concept.

In terms of the version you see here, we started working on this more like six or seven years ago, to figure out how you could build it—these are based on superconducting transmons; that’s what the actual device at the bottom is. We started building that six or seven years ago, and as I said, in May 2016 we put the first one out.

The IBM Q System One, which is our first commercial-grade version, is coming online in the near future. That’ll be for a lot of people who want a more robust system. We hope that will continue to spread the work out into more companies that aren’t so much in the depths of quantum computing, but are more generalists.

Above: One of IBM Research’s quantum computing labs.

Image Credit: IBM Research/Flickr (https://www.flickr.com/photos/ibm_research_zurich/26671252146/)

VentureBeat: There were a lot of skeptics about this early on. What are some milestones that you’ve overcome that helped defeat some of that skepticism?

Welser: We’re seeing it move forward very incrementally. A lot of the skepticism comes because there are only two algorithms that have been theoretically proven to be faster on a quantum computer. There’s Shor’s algorithm, which does factorization, and Grover’s algorithm, which is a type of search algorithm. But everything else was more speculation as far as whether or not it really will be faster.
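
Grover’s algorithm is small enough to simulate on a laptop. The NumPy sketch below (an illustration, not IBM’s code) searches an unsorted list of 8 items and finds the marked one with high probability after only 2 iterations, roughly pi/4 times the square root of 8, whereas a classical search would need around four or five lookups on average:

```python
# NumPy sketch of Grover's search over N = 8 items with one marked item.
# About pi/4 * sqrt(N) iterations concentrate the amplitude on the target.
import numpy as np

n_qubits = 3
N = 2 ** n_qubits
marked = 5                                   # the index we are searching for

state = np.full(N, 1 / np.sqrt(N))           # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1                  # flip the sign of the marked item

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # inversion about the mean

iterations = int(round(np.pi / 4 * np.sqrt(N)))      # 2 for N = 8
for _ in range(iterations):
    state = diffusion @ (oracle @ state)

probs = state ** 2
print("iterations:", iterations)
print("P(marked item):", round(float(probs[marked]), 3))   # ~0.945
```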

We’re starting to see papers getting published now where people show, “Hey, I just did this, and if you scale this up to a certain number of qubits, it’s more than you could possibly do on a classical system.” People are starting to run simulations and show that you can do this. It’s breaking down some of the skepticism.

The other thing is, we started our own road map of increasing what we call quantum volume. That is, finding ways to get the error rate down while we increase the number of qubits. That’s showing you can do deeper and deeper circuits, more and more complex algorithms. These things together are starting to make people think, “Well, this is looking much more real.” No one knows where we’ll end up, but people are starting to see that if you take it and combine it with classical computing in clever ways, you can get something that might look feasible.

VentureBeat: Is there any Moore’s Law type of benefit you get from this?

Welser: Not directly. There’s probably no direct analog. But one thing we’re looking at is that we would like to double the quantum volume every year, similar to the way Moore’s Law doubled the number of components. It’s a more complicated problem, though, because to double the quantum volume you need to do more than increase the number of qubits—that part is fairly easy, because these devices are large compared to what we normally make; they’re in the 40nm range, versus Moore’s Law transistors below 10nm today. We can easily make more qubits. That’s not a problem. But if we don’t improve the error rate on the qubits, then having more qubits doesn’t help. You need to get the error rate down.
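
IBM’s quantum volume metric formalizes the trade-off Welser describes: roughly, log2 of the quantum volume is the largest n for which square circuits (n qubits, depth n) still run successfully, and the achievable depth is limited by the error rate. The sketch below uses a crude, commonly cited approximation (achievable depth of about 1 divided by n times the per-step error rate); the numbers are illustrative, not IBM’s, but they show why adding qubits without reducing errors leaves the volume flat:

```python
# Why quantum volume needs lower error rates, not just more qubits.
# Assumptions (not from the interview): achievable depth ~ 1/(n * error),
# and log2(QV) = max over n of min(n, achievable depth).
def quantum_volume(n_qubits, error_rate):
    best = 0
    for n in range(2, n_qubits + 1):
        depth = int(1 / (n * error_rate))    # crude depth limit
        best = max(best, min(n, depth))
    return 2 ** best

for qubits, err in [(20, 0.05), (50, 0.05), (50, 0.02), (50, 0.01)]:
    print(f"{qubits:2d} qubits, error {err}: quantum volume ~ {quantum_volume(qubits, err)}")
# With a 5% error rate, going from 20 to 50 qubits does not help at all;
# cutting the error rate is what lets the volume keep doubling.
```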

We hope to find ways to continue to improve that error rate on a regular basis, so it allows the quantum volume to improve on a Moore’s Law-like pattern. But it’s very different physics involved now.
