Scaling Quantum Systems
One of the questions we are often asked is how we will scale our processors and continue to enhance system performance. The D-Wave Two™ system is based on a quantum processor that has 512 qubits. Our observation so far is that the more qubits we implement, the more computational power the processor demonstrates. Our plans for the future are to develop processors with more qubits. At this time we have 1000-qubit processors in our lab that we expect to release later this year.
However, there are more dimensions to processor performance than just the number of qubits. Inter-qubit connectivity (the number of connections between qubits) is often more important than the qubit count for representing hard problems. Our processors have been designed with the flexibility to change inter-qubit connectivity as our knowledge of performance and applications matures. The time to program a problem and to read out a solution also matters when comparing against highly optimized state-of-the-art hardware and software on 'classical' systems, so we are improving in these areas as well.
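To make the connectivity point concrete, here is an illustrative sketch (not D-Wave's actual API or problem format): a quantum annealer natively minimizes an Ising energy, and a coupling term between two qubits can only be used if those qubits are physically connected, so denser connectivity lets the hardware represent denser problems directly. All names below are hypothetical.

```python
from itertools import product

# Illustrative sketch (not D-Wave's API): a quantum annealer seeks the
# spin assignment s (each s_i = +/-1) minimizing the Ising energy
#   E(s) = sum_i h_i * s_i  +  sum_(i,j) J_ij * s_i * s_j,
# where a coupler J_ij may only be nonzero on a physically connected qubit pair.

def ising_energy(h, J, s):
    """Energy of spin assignment s (dict qubit -> +/-1) for fields h and couplers J."""
    energy = sum(h[i] * s[i] for i in h)
    energy += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return energy

# Toy 3-qubit problem: antiferromagnetic couplers on the connected pairs (0,1) and (1,2).
h = {0: 0.0, 1: 0.0, 2: 0.0}
J = {(0, 1): 1.0, (1, 2): 1.0}

# Brute-force the ground state classically (only feasible for tiny problems;
# the annealer's job is to find it for problems far too large to enumerate).
best = min(
    (dict(zip(h, spins)) for spins in product((-1, 1), repeat=len(h))),
    key=lambda s: ising_energy(h, J, s),
)
print(best, ising_energy(h, J, best))
```

With antiferromagnetic couplers, the lowest-energy assignment alternates neighboring spins; the sketch finds it by exhaustive search.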
As you may have read, the processor is implemented on a superconducting integrated circuit chip somewhat smaller than your thumbnail. The processor itself takes up a small fraction of this chip and is designed at a feature size equivalent to what the semiconductor industry had achieved in the very late 1990s. Our expectation is that we will not run out of room on the chip for the foreseeable future.
One of the benefits of implementing a quantum processor in superconducting integrated circuit technology is that there exists a well-established classical superconducting electronics technology that is naturally suited for integration with quantum devices (e.g., M. W. Johnson et al., "A scalable control system for a superconducting adiabatic quantum optimization processor", Supercond. Sci. Technol. 23, 065004 (2010) and references therein). This holds whether we are speaking about quantum annealing systems such as D-Wave's technology (e.g., R. Harris et al., Phys. Rev. B 82, 024511 (2010)), which compute by seeking the lowest energy state, or about the gate model of quantum computing. We leverage this technology extensively: more than three quarters of the electrical circuitry on our processor chip is superconducting integrated circuitry used for control and readout of the qubits.
In terms of other parts of the system, the cryogenics (the dilution refrigerator that takes the temperature down to near absolute zero) take up much more space than the processor chip they serve to cool, and their size will not need to increase to accommodate larger processors, certainly not in the foreseeable future. This means that the 15.5 kilowatts of power the entire system uses today won't grow with each new processor generation. Compare that power draw to today's supercomputers, some of which are approaching 100 megawatts.
We also face important challenges as we scale our processors towards solving ever more complex problems. These range from increasing the precision and accuracy with which problem parameters are specified, to reducing noise that contributes to problem inaccuracy and interferes with the quantum effects our processor leverages during quantum annealing (e.g., T. Lanting et al., "Entanglement in a quantum annealing processor", arXiv:1401.3500), to implementing error encoding schemes (such as K. Pudenz, T. Albash, D. Lidar, "Error-corrected quantum annealing with hundreds of qubits", Nature Comm. 5, 3243 (2014)).
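To give a flavor of what an error encoding scheme can look like, here is a hedged sketch of the classical decoding step of a repetition-style encoding of the kind studied by Pudenz et al.: each logical spin is represented by several physical spins, and after readout the logical value is recovered by majority vote, so a single flipped physical spin does not corrupt the answer. The function name and data layout are illustrative assumptions, not D-Wave's implementation.

```python
# Hedged sketch of majority-vote decoding for a repetition-style encoding
# (in the spirit of Pudenz et al.): each logical spin is stored as `copies`
# physical spins; a minority of flipped physical spins is voted away.

def majority_decode(physical, copies=3):
    """Decode a flat list of +/-1 physical spins, `copies` per logical spin."""
    logical = []
    for k in range(0, len(physical), copies):
        group = physical[k:k + copies]
        logical.append(1 if sum(group) > 0 else -1)
    return logical

# One physical spin in the second group has flipped; the vote corrects it.
readout = [1, 1, 1, -1, 1, -1]
print(majority_decode(readout))  # -> [1, -1]
```

The trade-off is overhead: encoding with three physical spins per logical spin triples the qubit cost of a problem, which is one reason encoding schemes become more practical as processors grow.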
There has been a fair amount of discussion in both the media and the scientific community about the impact of environmental noise (heat, vibration, magnetic fields, etc.) and the need for quantum error correction in quantum computing, and what that means for the D-Wave technology. One of the most attractive characteristics of quantum annealing systems such as the D-Wave system is that they are more robust against decoherence from certain types of environmental noise than other quantum systems, such as those built on a gate model, would be. D-Wave has already developed extensive 'classical' error correction methodologies, but as we scale to larger and more powerful processors it is likely that information encoding methodologies will become important. Of course there are more details to the area of error correction; those interested can find more discussion in the papers listed below.
Jeremy Hilton, VP Processor Development
Error Correction papers:
N. G. Dickson et al., "Thermally assisted quantum annealing of a 16-qubit problem", Nature Communications 4, 1903 (2013)
Ari Mizel, M. W. Mitchell, and Marvin L. Cohen, "Energy barrier to decoherence", Phys. Rev. A 63, 040302(R) (2001)
Andrew M. Childs, Edward Farhi, and John Preskill, "Robustness of adiabatic quantum computation", Phys. Rev. A 65, 012322 (2001)