Optimization problems are among the hardest problems to solve.
Imagine you are building a house, and have a list of things you want to have in your house, but you can’t afford everything on your list because you are constrained by a budget. What you really want to work out is the combination of items which gives you the best value for your money.
This is an example of an optimization problem, where you are trying to find the best combination of things given some constraints. While problems with only a few choices are easy, as the number of choices grows they quickly become very hard to solve optimally. With just 270 on/off switches, there are more possible combinations than atoms in the observable universe!
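A brute-force sketch of the house-budget problem makes the point concrete (the item names, costs, and values below are invented for illustration): to guarantee the best value for money, you must check every subset of items, and the number of subsets doubles with each item added.

```python
from itertools import combinations

# Hypothetical renovation items: (name, cost, value). Numbers are
# illustrative only -- any budget problem has the same shape.
items = [("solar panels", 12, 9), ("new kitchen", 15, 10),
         ("insulation", 4, 6), ("deck", 7, 5), ("heat pump", 9, 8)]
budget = 25

best_value, best_combo = 0, ()
# Brute force: try every subset. With n items there are 2**n subsets,
# which is why this approach collapses as n grows.
for r in range(len(items) + 1):
    for combo in combinations(items, r):
        cost = sum(c for _, c, _ in combo)
        value = sum(v for _, _, v in combo)
        if cost <= budget and value > best_value:
            best_value, best_combo = value, combo

print([name for name, _, _ in best_combo], best_value)
```

For five items this finishes instantly; for the 270 on/off switches mentioned above, the same loop would need more iterations than there are atoms in the observable universe.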
These types of optimization problems exist in many different domains, such as systems design, mission planning, airline scheduling, financial analysis, web search, and cancer radiotherapy. They are some of the most complex problems in the world, with potentially enormous benefits to businesses, people and science if optimal solutions can be readily computed.
Machines learn to recognize objects by detecting recurring patterns.
When you look at a photograph it is very easy for you to pick out the different objects in the image: trees, mountains, velociraptors, and so on. This task is almost effortless for humans, but is in fact hugely difficult for computers. This is because programmers don’t know how to define the essence of a ‘tree’ in computer code.
Machine learning is the most successful approach to solving this problem: programmers write algorithms that automatically learn to recognize the ‘essences’ of objects by detecting recurring patterns in huge amounts of data. Because of the amount of data involved, and the immense number of potential combinations of data elements, this is a very computationally expensive optimization problem. As with other optimization problems, these can be mapped to the native capability of the D-Wave QPU.
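A minimal classical sketch of this idea, using a perceptron on synthetic data (nothing here is D-Wave-specific, and the clusters of points are invented for illustration): the program is never told what distinguishes the two groups, it learns a decision rule purely from recurring patterns in labeled examples.

```python
import random

# Toy "pattern recognition": two synthetic clusters of 2-D points,
# labeled +1 and -1. The perceptron learns to separate them without
# ever being told where the clusters are.
random.seed(0)
data = [((random.gauss(2, 0.5), random.gauss(2, 0.5)), 1) for _ in range(50)] + \
       [((random.gauss(-2, 0.5), random.gauss(-2, 0.5)), -1) for _ in range(50)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
for _ in range(10):                      # a few passes over the data
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
        if pred != label:                # mistake-driven update
            w[0] += label * x1
            w[1] += label * x2
            b += label

errors = sum(1 for (x1, x2), y in data
             if (1 if w[0] * x1 + w[1] * x2 + b > 0 else -1) != y)
print(errors)
```

Real image recognition replaces these two numbers per example with millions of pixels and the two weights with millions of parameters, which is what makes the underlying optimization so expensive.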
Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical...
In 1981, Nobel Prize–winning physicist Richard Feynman delivered his seminal lecture “Simulating Physics with Computers”. His idea was that, unlike a classical computer, which could only approximate a simulation of physics, a quantum computer could simulate it exactly – as quantum physics. In a paper published in 1982 he said, “I therefore believe it's true that with a suitable class of quantum machines you could imitate any quantum system, including the physical world.”
Today quantum materials simulation is being actively pursued by scientists around the world, and some see it as the first “killer application” for quantum computers.
The better our model, the better we are at predicting the future.
Many things in the world are uncertain and governed by the rules of probability. We have in our heads a model of how things will turn out in the future, and the better our model is, the better we are at predicting the future. We can also build computer models that try to capture the statistics of reality. These tend to be very complicated, involving a huge number of variables.
To check whether a computer’s statistical model represents reality, we need to be able to draw samples from it and verify that the statistics of our model match the statistics of real-world data. Monte Carlo simulation, which relies on repeated random sampling to approximate the probability of certain outcomes, is used in many industries, such as finance, energy, manufacturing, engineering, oil and gas, and the environment. For a complex model with many different variables, this is a difficult task to do quickly.
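A minimal sketch of the sampling idea, with made-up numbers: suppose a project has two task durations, each uncertain and uniform on [0, 1], and we want the probability that the total exceeds 1.5. The exact answer happens to be 0.125, so we can see how close repeated random sampling gets.

```python
import random

# Monte Carlo estimate of P(X + Y > 1.5) for two independent durations,
# each uniform on [0, 1]. The exact probability is 0.125 (a corner
# triangle of the unit square), which gives us something to check against.
random.seed(42)
n = 100_000
hits = sum(1 for _ in range(n) if random.random() + random.random() > 1.5)
estimate = hits / n
print(round(estimate, 3))
```

With two variables the exact answer is easy; the difficulty described above appears when the model has thousands of interacting variables and each sample is itself expensive to draw.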