### This tutorial is intended to introduce the concepts and terminology used in Quantum Computing, to provide an overview of what a Quantum Computer is, and why you would want to program one.

The material here is written using very high level concepts and is designed to be accessible to both technical and non-technical audiences. Some background in physics, mathematics and programming is useful to help understand the concepts presented in this document, although this is not a requirement.

### What you will learn

By following through the material in this primer, you will learn:

• How quantum physics gives us a new way to compute
• The similarities and differences between quantum computing and classical computing
• How the fundamental units of quantum computing (qubits) are manipulated to solve hard problems
• Why Quantum Computing is well suited to AI and machine learning applications, and how quantum computers may be used as 'AI co-processors'

# SECTION 1

## 1.1 - Conventional computing

To understand quantum computing, it is useful to first think about conventional computing. We take modern digital computers and their ability to perform a multitude of different applications for granted. Our desktop PCs, laptops and smart phones can run spreadsheets, stream live video, allow us to chat with people on the other side of the world, and immerse us in realistic 3D environments. But at their core, all digital computers have something in common. They all perform simple arithmetic operations. Their power comes from the immense speed at which they are able to do this. Computers perform billions of operations per second. These operations are performed so quickly that they allow us to run very complex high level applications. Conventional digital computing can be summed up by the diagram shown in figure 1.

Figure 1. Dataflow in a conventional computer

Although there are many tasks that conventional computers are very good at, there are still some areas where calculations seem to be exceedingly difficult. Examples include image recognition, natural language understanding (getting a computer to understand what we mean if we speak to it in our own language rather than a programming language), and tasks where a computer must learn from experience to become better at a particular task. Even though much effort and research has been poured into this field over the past few decades, progress has been slow, and the prototypes that do work usually require very large supercomputers to run them, consuming vast quantities of space and power.

We can ask the question: is there a different way of designing computing systems from the ground up? If we could start again from scratch and do something completely different, to be better at the tasks that conventional computers find hard, how would we go about building a new type of computer?

## 1.2 - A new kind of computing

Quantum computing is radically different from the conventional approach of transforming bit strings from one set of 0s and 1s to another. With quantum computing, everything changes. The physics that we use to understand bits of information and the devices that manipulate them is totally different. The way in which we build such devices is different, requiring new materials, new design rules and new processor architectures. Finally, the way we program these systems is entirely different. This document will explore the first of these issues: how replacing the conventional bit (0 or 1) with a new type of information - the qubit - can change the way we think about computing.

## 1.3 - The light switch game

To begin learning about quantum computing, it is important to understand why we can't use conventional digital computers to solve certain problems. Let's consider a mathematical problem, which we'll call the light switch game, that illustrates this point.

The light switch game involves trying to find the best settings for a bunch of switches. Here's a graphical example introducing this problem:

Figure 2. The light switch game

Let's imagine that each light switch has a number associated with it, which is chosen for you (you don't get to change this). We call this a 'bias value'. You get to choose whether to turn each light switch ON or OFF. In our game, ON = +1 and OFF = -1. We then add up all the switches' bias values multiplied by their ON/OFF values. This gives us a number. The objective of the game is to set the switches to get the lowest number. Mathematically, we call the bias values of each switch $h_i$ and the switch settings are called $s_i$.

Figure 3. Playing the light switch game - add up the bias values of each switch multiplied by their settings (which you have to choose)
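The scoring rule described above can be sketched in a few lines of Python. This is just an illustration; the bias values and switch settings below are made-up examples:

```python
# Score for the light switch game: add up each switch's bias value
# multiplied by its setting (ON = +1, OFF = -1).
def score(h, s):
    """h: list of bias values h_i; s: list of switch settings s_i (each +1 or -1)."""
    return sum(h_i * s_i for h_i, s_i in zip(h, s))

h = [1, -2, 3]        # bias values, chosen for you
s = [-1, +1, -1]      # one possible choice of switch settings
print(score(h, s))    # -> -6
```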

So depending upon which switches we set to +1 and which we set to -1, we will get a different score overall. You can try this game. Hopefully you'll find it easy because there's a simple rule to winning:

Figure 4. Working out the answer for a particular "guess" at the switch settings

We find that if we set all the switches with positive biases to OFF and all the switches with negative biases to ON and add up the result then we get the lowest overall value. Easy, right? I can give you as many switches as I want with many different bias values and you just look at each one in turn and flip it either ON or OFF accordingly.
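The winning rule is simple enough to write down directly. Here is a sketch (the helper name `best_settings` is invented for illustration): each switch is set to the opposite sign of its bias.

```python
def best_settings(h):
    """Positive bias -> OFF (-1); negative (or zero) bias -> ON (+1).
    This minimizes the score when there are no pair terms."""
    return [-1 if h_i > 0 else +1 for h_i in h]

h = [1, -2, 3]
print(best_settings(h))   # -> [-1, 1, -1]
```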

OK, let's make it harder. Now imagine that many pairs of switches have an additional rule, one which involves considering PAIRS of switches in addition to individual switches: we add a new bias value (called J) which we multiply by BOTH of the switch settings it connects, and we add the resulting value from each pair of switches to our overall number too. Still, all we have to do is decide whether each switch should be ON or OFF subject to this new rule.

Figure 5. Making the game harder by adding additional terms that depend on the settings of pairs of switches.

But now it is much, much harder to decide whether a switch should be ON or OFF, because its neighbors affect it. Even with the simple example shown with 2 switches in the figure above, you can't just follow the rule of setting them to be the opposite sign to their bias value anymore (try it!). With a complex web of switches having many neighbors, it quickly becomes very frustrating to try and find the right combination to give you the lowest value overall.

Figure 6. The light switch game with connecting terms added, generating an 'interacting' web of light switches
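With the pair terms added, the number to minimize becomes a sum over individual switches plus a sum over connected pairs. A minimal sketch (the two-switch values below are invented to show the rule breaking, as suggested in the text):

```python
def energy(h, J, s):
    """h: per-switch biases; J: dict mapping a pair (i, j) to its coupling value;
    s: switch settings (each +1 or -1)."""
    total = sum(h[i] * s[i] for i in range(len(s)))
    total += sum(J_ij * s[i] * s[j] for (i, j), J_ij in J.items())
    return total

# Two switches with a strong positive coupling: the simple
# "opposite sign of the bias" rule would set both to +1, but that loses.
h = [-1, -1]
J = {(0, 1): 3}
print(energy(h, J, [+1, +1]))   # rule's choice: -1 + -1 + 3 = 1
print(energy(h, J, [+1, -1]))   # better:       -1 + 1 - 3 = -3
```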

## 1.4 - How does quantum mechanics help?

With a couple of switches you can just try every combination of ON's and OFF's, there are only four possibilities: [ON ON], [ON OFF], [OFF ON] or [OFF OFF]. But as you add more and more switches, the number of possible ways that the switches can be set grows exponentially:

Figure 7. The exponential problem with the light switch game.

You can start to see why the game isn't much fun anymore. In fact it is difficult even for our most powerful supercomputers. Storing all those possible configurations in memory, and moving them through conventional processors to check each guess, takes a very, very long time. With only 500 switches, there isn't enough time in the universe to check all the configurations.
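The growth is easy to verify: with N switches there are 2 to the power N configurations, so even counting them (let alone scoring each one) quickly becomes hopeless:

```python
# Number of possible ON/OFF configurations for N switches.
for n in (2, 10, 50, 500):
    print(n, 2 ** n)

# 2 switches   ->  4
# 10 switches  ->  1,024
# 50 switches  ->  1,125,899,906,842,624
# 500 switches ->  a number with 151 digits
```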

Quantum mechanics can give us a helping hand with this problem. The fundamental power of a quantum computer comes from the idea that you can put bits of information into a superposition of states. You can think of this as a situation where the qubit has not yet decided which state it wants to be in. Some people like to refer to a qubit in superposition as 'being in both states at the same time'; alternatively, you can think of the qubit's state as being undecided as to whether it is +1 or -1. This means that using a quantum computer, our light switches can be ON and OFF at the same time:

Figure 8: A quantum mechanical bit of information (qubit) can reside in what is known as a superposition state, where it has not yet decided whether to be in the +1 or the -1 state (alternatively, you can think of it as being 'in both states').

Now let's consider the same bunch of switches as before, but now held in a quantum computer's memory (notice that the bias values haven't been added yet):

Figure 9. A network of connected quantum bits in superposition. The answer is in there somewhere!

Because all the light switches are ON and OFF at the same time, we know that the correct answer (the correct ON/OFF setting for each switch) is represented in there somewhere - it is just currently hidden from us. But that is OK, because quantum mechanics is going to find it for us. The D-Wave quantum computer allows you to take a 'quantum representation' like this, and extract the configuration of ONs and OFFs with the lowest value. Here's how it works:

Figure 10. The computer begins with the bits in superposition, ends with them behaving as regular classical bits, and finds the answer along the way.

You start with the system in its quantum superposition as described above, and you slowly adjust the quantum computer to turn off the quantum superposition effect. At the same time, you slowly turn up all those bias values (the h and J's from earlier). As this operation is performed, the switches slowly drop out of their superposition state and choose a classical state, either ON or OFF. At the end, each switch MUST have chosen to be either ON or OFF. The quantum mechanics working inside the computer helps the light switches settle into the right states to give the lowest overall value when you add them all up at the end. Even though with $N$ switches there are $2^N$ possible configurations it could have ended up in, it finds the lowest one, winning the light switch game. So we see that the quantum computer allows us to minimize expressions such as the one considered here:

$E(s) = \sum_i h_i s_i + \sum_{i<j} J_{ij} s_i s_j$

which can be difficult (if not impossible) for classical computers.
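For a small number of switches we can still check this classically by brute force - which is exactly the exhaustive search that becomes impossible at scale. A throwaway sketch of that baseline (not how the quantum hardware works):

```python
from itertools import product

def energy(h, J, s):
    """E(s) = sum_i h_i * s_i + sum over pairs of J_ij * s_i * s_j."""
    return (sum(h[i] * s[i] for i in range(len(s)))
            + sum(J_ij * s[i] * s[j] for (i, j), J_ij in J.items()))

def brute_force_minimum(h, J):
    """Try all 2**N settings and keep the lowest-energy one."""
    return min(product((-1, +1), repeat=len(h)),
               key=lambda s: energy(h, J, s))

h = [-1, -1]
J = {(0, 1): 3}
s = brute_force_minimum(h, J)
print(s, energy(h, J, s))   # a setting with the minimum energy, -3
```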

# SECTION 2

## 2.1 - It's a math expression - who cares?

We didn't build a machine to play a strange masochistic light switch game. The concept of finding a good configuration of binary variables (switches) in this way lies at the heart of many problems that are encountered in everyday applications. A few are shown in the figure below. Even the concept of scientific discovery itself is an optimization problem (you are trying to find the best 'configuration' of terms contributing to a scientific equation which matches real world observations).

Figure 11. Examples of applications which under the hood all involve finding good 'switch settings' and can be tackled more efficiently with quantum computers.

## 2.2 - The energy program

To understand how these problems might be cast as finding settings of switches, let's consider how the quantum computer is programmed. Recall Figure 1, where input bit strings were transformed into other bit strings via the application of a logic program. Instead of that, we now have a resource where bits can be undecided, so the computation is performed in a fundamentally different way, as shown in Figure 12. In this case, a group of qubits is initialized into superposition states, and this time an ENERGY PROGRAM (instead of a logic program) is applied to the group. The qubits go from being undecided at the beginning of the computation to all having chosen either -1 or +1 states at the end of the computation. What is an energy program? It is just those h and J numbers - the bias settings - that we introduced earlier. In the light switch game, we said that the h and J's were given to you. Well, now we see where they come from - they are the definition of the problem you are trying to solve.

Figure 12. The basic operation of a quantum computer is to supply an energy program (a series of h and J numbers) and let the computer find the switch settings (+1 and -1).

Crafting an energy program as a series of h and J values - to encode a real world problem that you care about - is exceedingly hard and time-consuming. It would be the equivalent of programming your desktop PC by sending in machine code to the microprocessors inside! Luckily, there is a better way to program quantum computers by using a quantum compiler. This process is explained in more detail in the Programming with D-Wave white paper.
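To make the idea concrete, an energy program is nothing more than data: one h value per qubit and one J value per connected pair. The sketch below is only a hypothetical representation for illustration - D-Wave's actual programming interfaces differ, and the numbers are invented:

```python
# A toy representation of an energy program: the h and J numbers
# that define the problem you want the machine to solve.
energy_program = {
    "h": {0: 0.5, 1: -0.3, 2: 1.1},       # one bias value per qubit
    "J": {(0, 1): -0.7, (1, 2): 0.4},     # one coupling per connected pair
}
```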

## 2.3 - Quantum computers can LEARN

The discipline of teaching computers to reason about the world and learn from experience is known as machine learning. It is a sub-branch of the field of artificial intelligence. Most of the code we write is fairly static - that is, given new data it will perform the same computation over and over again and make the same errors. Using machine learning we can design programs which modify their own code and therefore learn new ways to handle pieces of data that they have never seen before.

The type of applications that run very well on D-Wave quantum computers are applications where learning and decision making under uncertain conditions are required. For example, imagine if a computer was asked to classify an object based on several images of similar objects you had shown it in the past. This task is very difficult for conventional computing architectures, which are designed to follow very strict logical reasoning. If the system is shown a new image, it is hard to get it to make a general statement about the image, such as 'it looks similar to an apple'. D-Wave's processors are designed to support applications that require high level reasoning and decision making.

How can we use a quantum computer to implement learning, for example, if we want the system to recognize objects? Writing an energy program for this task would be very difficult, even using a quantum compiler, as we do not know in detail how to capture the essence of objects that the system must recognize. Luckily there is a way around this problem, as there is a mode in which the quantum computer can tweak its own energy program in response to new pieces of incoming data. This allows the machine to make a good guess at what an object might be, even if it has never seen a particular instance of it before. The following section gives an overview of this process.

## 2.4 - A computer that programs itself

In order to get the system to tweak its own energy program, you start by showing the system lots and lots of instances of the concept that you want it to learn about. An example is shown in Figure 13. Here the idea is to get the computer to learn the difference between images of different types of fruit. In order to do this, we present images (or rather, numeric representations of those images) to the system illustrating many different examples of apples, raspberries and melons. We also give the system the 'right' answer each time by telling it which switch settings (labels) it should end up selecting in each case. The system must find an energy program (shown as a question mark, as we do not know it at the beginning) so that when an image is shown to the system, it gets the labels correct each time. If it gets many examples wrong, the algorithm knows that it must change its energy program.

Figure 13. Teaching the quantum chip by allowing it to write its own energy program. The system tweaks the energy program until it labels all the examples that you show it correctly. This is also known as the 'training' or 'learning' phase.

At first the system chooses an energy program (remember that it is just a bunch of H and J values) at random. It will get many of the labellings wrong, but that doesn't matter, as we can keep showing it the examples and each time allow it to tweak the energy program so that it gets more and more labels (switch settings) correct. Once it can't do any better on the data that it has been given, we then keep the final energy program and use that as our 'learned' program to classify a new, unseen example (figure 14).
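The tweak-and-retry loop can be caricatured classically: start with random h values, check the labels they produce, and nudge any bias whose switch comes out wrong. This is only a toy analogy with a single target labelling and no J terms - it is not D-Wave's training procedure:

```python
import random

def predict(h):
    # The switch settings that minimize the score: each switch
    # opposes the sign of its bias (see the earlier winning rule).
    return [-1 if h_i > 0 else +1 for h_i in h]

def train(target, steps=50):
    """Start from a random energy program and tweak it until its
    lowest-energy settings match the target labels."""
    h = [random.uniform(-1, 1) for _ in target]
    for _ in range(steps):
        guess = predict(h)
        for i, (g, t) in enumerate(zip(guess, target)):
            if g != t:              # wrong label: nudge this bias
                h[i] -= 0.5 * t
    return h

target = [+1, -1, +1, -1]           # the 'right' labels for one example
h = train(target)                   # the learned energy program
print(predict(h) == target)         # -> True
```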

In machine learning terminology this is known as a supervised learning algorithm because we are showing the computer examples of images and telling it what the correct labels should be in order to help it learn. There are other types of learning algorithms supported by the system, even ones that can be used if labeled data is not available.

Figure 14. After the system has found a good energy program during the training phase, it can now label unseen examples to solve a real world problem. This is known as the 'testing' phase.