
The Quantum Wrench Initiative

  • rinaismailati
  • 2 days ago
  • 4 min read

Every problem has a solution. Sometimes many problems share one key, especially when the underlying mathematical formulation is the same but the variables and conditions differ. Wouldn't it be interesting to have a tool, a wrench, that opens many doors?




In computational complexity theory, NP-hard problems are those that typically involve searching through an exponentially large number of possibilities, like scheduling, routing, or optimization tasks. They are difficult to solve, and a nightmare for classical tools! Combinatorial optimization problems belong to this family: there is an objective to minimize and constraints that must be respected.


Classical tools like Gurobi are great at solving many optimization problems, but they struggle when the problems become highly complex, with several overlapping constraints. On top of that, these tools return only one optimal solution. And what happens when that solution fails due to unexpected circumstances?

As an alternative, we propose Quantum Markov Chain Monte Carlo (Q-MCMC). It is a sampler that finds solutions minimizing a combinatorial optimization problem by proposing new candidate solutions and accepting those with low energy, sampling from a so-called Boltzmann distribution. Put simply, solutions that are better for a given optimization problem have lower energy and a higher probability of being sampled. After many iteration steps, the algorithm is not limited to a single solution but generates several good ones (it's a sampler!).

The quantum version of MCMC makes the process faster and finds better solutions by exploiting phenomena like quantum tunneling and quantum superposition. It helps us reach solutions where classical tools like Gurobi get stuck in local minima, caused by the overlapping constraints mentioned earlier, which create a rugged energy landscape. Compared to classical MCMC, our Q-MCMC mixes faster thanks to a better choice of parameters: it converges faster to the theoretical distribution of states/solutions.
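To make the sampling idea concrete, here is a minimal classical Metropolis sketch targeting a Boltzmann distribution. The quantum version would replace the proposal step with a quantum circuit; the toy problem and all names below are illustrative assumptions, not our production code:

```python
import math
import random

def metropolis_sample(energy, propose, x0, beta=1.0, steps=10_000, seed=0):
    """Classical Metropolis sampler targeting the Boltzmann
    distribution p(x) ~ exp(-beta * energy(x))."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    samples = []
    for _ in range(steps):
        y = propose(x, rng)                    # candidate solution
        e_y = energy(y)
        # Always accept improvements; accept worse moves with
        # probability exp(-beta * delta_E) so the chain can escape minima.
        if e_y <= e or rng.random() < math.exp(-beta * (e_y - e)):
            x, e = y, e_y
        samples.append(x)
    return samples

# Toy problem: minimize the number of 1-bits in a 6-bit string.
energy = lambda bits: sum(bits)

def flip_one(bits, rng):
    """Propose a neighbor by flipping one randomly chosen bit."""
    i = rng.randrange(len(bits))
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

samples = metropolis_sample(energy, flip_one, (1,) * 6, beta=2.0)
best = min(samples, key=energy)
```

With `beta=2.0` the chain settles into the low-energy states but keeps revisiting slightly worse ones, which is exactly what yields a pool of good solutions rather than a single answer.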


Solving a logistics problem with Q-MCMC


As an example, let's consider logistics and supply chain problems. Suppose you have thousands of customers in America that you want to serve, but a limited budget to open only P warehouses from which you deliver goods to these customers. If you open more than P warehouses, you shorten the distances to your customers, but you break your budget constraint, and there is a heavy penalty for such a proposed solution. It is important to respect the number of warehouses you can open and to choose the locations of these P warehouses wisely, so that you minimize the distances between them and your customers. Even better, they should be closer to customers with higher demand; i.e., minimize the demand × distance product over all customer-warehouse combinations.

Sometimes demand can peak, especially around holidays. In December you consider opening one extra warehouse, which of course comes with opening costs, but you open it anyway because it is beneficial to you. You now have time windows in which demand changes (more demand in December than in November). But where to put this new warehouse? Suppose you run your solver and it tells you Chicago is the best city in which to build your initial warehouse in November (as a young business you can only afford one). For peace of mind, it would be nice if the solver gave you a solution for December too (one extra warehouse, two warehouses in total for December). You run the Q-MCMC and it gives you Chicago + Los Angeles.


These solutions were generated using our Q-MCMC algorithm. Green sphere: chosen warehouse. Blue: customers.
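The demand × distance objective with a budget penalty can be written down in a few lines. The coordinates (treated as plain Euclidean points) and demand figures below are made up for illustration; only the structure of the energy function matters:

```python
import math

# Hypothetical coordinates (lat, lon) and customer demand, for illustration only.
cities = {"Chicago": (41.9, -87.6), "Los Angeles": (34.1, -118.2),
          "Boston": (42.4, -71.1), "NYC": (40.7, -74.0)}
demand = {"NYC": 120, "Los Angeles": 80, "Boston": 40}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def energy(open_sites, P, penalty=1e6):
    """Energy of a candidate solution: each customer is served by its
    nearest open warehouse, weighted by demand; exceeding the budget
    of P warehouses adds a heavy penalty per extra site."""
    cost = sum(d * min(dist(cities[c], cities[w]) for w in open_sites)
               for c, d in demand.items())
    if len(open_sites) > P:                    # broken budget constraint
        cost += penalty * (len(open_sites) - P)
    return cost

# November (P=1, Chicago only) vs. December (P=2, add Los Angeles):
e_nov = energy({"Chicago"}, P=1)
e_dec = energy({"Chicago", "Los Angeles"}, P=2)
```

A sampler that minimizes this energy will naturally prefer warehouse sets that respect the budget and sit close to high-demand customers.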



Take another example, with different settings. Demand is normal and P = 1. The solver tells you to open the warehouse in Boston for your customer in NYC. However, a sudden storm forces the warehouse to close. If you had used a standard solver, you'd be left without a backup solution. With Q-MCMC you don't need to worry: it returns the optimal solution with the highest probability, and the second (third, fourth, etc.) best solutions with lower, but still sufficiently high, probability as alternatives for emergencies and dynamic situations.


First solution (Boston, for your customer in NYC):



Second-best solution, Chicago:
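Turning raw sampler output into a ranked list of primary and backup solutions is straightforward. The sample counts below are hypothetical stand-ins for what a Q-MCMC run might return:

```python
from collections import Counter

def rank_solutions(samples):
    """Collapse raw sampler output into distinct solutions ranked by
    frequency. Lower-energy solutions are sampled more often, so the
    first entry is the best solution and the rest are ready-made backups."""
    counts = Counter(samples)
    total = len(samples)
    return [(sol, n / total) for sol, n in counts.most_common()]

# Hypothetical output of a sampler run for a P=1 warehouse choice:
samples = ["Boston"] * 70 + ["Chicago"] * 25 + ["Philadelphia"] * 5
ranked = rank_solutions(samples)
best, backup = ranked[0][0], ranked[1][0]
```

If the Boston warehouse closes, the plan falls back to Chicago without re-running the solver.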




In real life, there are many warehouses, many customers and a lot of dynamic situations.


Nurse Scheduling


Nurse scheduling, or nurse rostering, is also an NP-hard combinatorial optimization problem. What makes it particularly nasty is the overlap of several constraints, some of which are hard conditions (e.g. a nurse must not work more than a permitted number of hours, or two consecutive shifts are prohibited). It helps to take nurses' preferences into account (another constraint); holidays must be respected, skill levels need to be appropriate for a given shift, etc. Sometimes nurses cancel at the last minute and schedulers must fill their shifts quickly. A solution that respects all or most constraints will have low energy and will be sampled with higher probability by our Q-MCMC sampler. The advantage of Q-MCMC is that it gives you several good solutions, exactly to handle last-minute cancellations and other dynamic situations without re-running everything from scratch or calling a nurse who is already overworked (nurse burnout, plus more expenses for the hospital)!
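A sketch of how such a roster energy might be assembled, with large penalties for hard constraints and small ones for preferences. The mini-roster and penalty weights are illustrative assumptions:

```python
HARD_PENALTY = 1000   # hard constraints must dominate the energy
SOFT_PENALTY = 10     # preferences are soft constraints

def roster_energy(schedule, max_shifts, prefers_off):
    """Energy of a roster. schedule maps nurse -> tuple of 0/1 shift
    assignments. Violated hard constraints (workload cap, back-to-back
    shifts) add large penalties; ignored preferences add small ones."""
    e = 0
    for nurse, shifts in schedule.items():
        if sum(shifts) > max_shifts:                 # hard: workload cap
            e += HARD_PENALTY
        for s in range(len(shifts) - 1):             # hard: consecutive shifts
            if shifts[s] and shifts[s + 1]:
                e += HARD_PENALTY
        for s in prefers_off.get(nurse, ()):         # soft: preference
            if shifts[s]:
                e += SOFT_PENALTY
    return e

good = {"A": (1, 0, 1, 0), "B": (0, 1, 0, 1)}   # feasible roster
bad  = {"A": (1, 1, 1, 0), "B": (0, 1, 0, 1)}   # A overworked, back-to-back
```

A sampler minimizing this energy would strongly prefer rosters like `good` over `bad`, while still keeping a few alternative low-energy rosters on hand.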


As an analogy to our logistics example, think of nurses as variables that need to be combined with shifts and days, where days are analogous to the monthly time windows in the supply chain problem. If a ward has, say, 100 nurses, it doesn't operate with all 100 every day. The key is to predict (via ML/AI) how many nurses are expected to come to work on a given day. This also determines the number of variables, and hence qubits, used.




Other technical advantages of our Q-MCMC algorithm:

  • The result is not damaged by the noise in quantum computers. If anything, noise helps to achieve better results faster (better mixing).

  • The number of qubits is kept under control, because some constraints are handled quantumly and some classically. This encoding choice saves enough qubits (quantum bits) while still using enough quantum resources to provide an advantage over classical solvers.

  • We use intelligent mixing parameters that make the algorithm faster than classical MCMC.




© 2023 by BrainStorm.
