1
KEG PARTY!!!!! Keg Party tomorrow night Prof. Markov will give out extra credit to anyone who attends* *Note: This statement is a lie
2
Trugenberger’s Quantum Optimization Algorithm Overview and Application
3
Overview Inspiration Basic Idea Mathematical and Circuit Realizations Limitations Future Work
4
Overview Inspiration Basic Idea Mathematical and Circuit Realizations Limitations Future Work
5
Two Main Sources of Inspiration Exploiting Quantum Parallelism Analogy of Simulated Annealing
6
What is quantum parallelism? We can represent super-positions of specific instances of data in a single quantum state We can then apply a single operator to this quantum state and thereby change all instances of data in a single step
7
What is Simulated Annealing? Comes from physical annealing Iteratively heat and cool a material until there’s a high probability of obtaining a crystalline structure Can be represented as a computational algorithm Iteratively make changes to your data until there is a high probability of ending up with the data you want
8
Overview Inspiration Basic Idea Mathematical and Circuit Realizations Limitations Future Work
9
Basic Idea Use this inspiration to come up with a more generalized quantum searching algorithm Trugenberger’s algorithm does a heuristic search on the entire data set by applying a cost function to each element in the data set Goal is to find a minimal cost solution
10
The high-level algorithm Use quantum parallelism to apply the cost function to all elements of the data set simultaneously in one step Iteratively apply this cost function to the data set Number of iterations is analogous to an instance of simulated annealing
11
Overview Inspiration Basic Idea Mathematical and Circuit Realizations Limitations Future Work
12
Representing the Problem: Graph Coloring Super-position of the data elements: N instances Use n qubits to represent the N instances Each instance is encoded as a binary number I^k whose value is between 0 and 2^n - 1 (in the graph-coloring example later in the talk, n = 3 and N = 8)
13
Cost Functions in General A cost function takes a data element and returns a cost for that element In this algorithm we will want to minimize cost Data elements with lower cost are better solutions
14
Skeleton of the U operator The imaginary exponential of the cost function is the main engine of the quantum optimization
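A plausible reading of the operator described here, written in the notation of the surrounding slides and assuming the phase is the normalized cost scaled to at most pi/2 (consistent with the Cnor slides that follow): U |I^k> = exp( i * (pi/2) * Cnor(I^k) ) |I^k>. In other words, U is diagonal: it leaves the magnitude of every amplitude alone and only attaches a cost-dependent phase to each data element.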
15
What is Cnor? We know in general that exp(i*theta) = cos(theta) + i*sin(theta) Since U will need the imaginary exponential of the cost function, we want to normalize the cost function By normalizing, we ensure that the phase applied by U lies between 0 and pi/2
16
What is Cnor? C(I^k) is at most Cmax and at least Cmin Cnor is always between 0 and 1
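A natural normalization consistent with the bounds above (assumed here, since the slide's formula is not reproduced): Cnor(I^k) = ( C(I^k) - Cmin ) / ( Cmax - Cmin ), which gives 0 for a minimal-cost element and 1 for a maximal-cost element.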
17
And Cmin and Cmax? Simple to determine for graph coloring Cmin = 0 (no pair of connected vertices shares the same color) Cmax = # of edges (every pair of connected vertices shares the same color) A more general method for determining Cmin and Cmax will be introduced later
18
Fleshing out U for Graph Coloring
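A minimal sketch of what fleshing out U could look like, assuming a toy instance: the 3-vertex path graph with two colors and one qubit per vertex (this particular instance, the edge list, and the function names are illustrative assumptions chosen to match the Matlab numbers shown later):

    # Sketch: cost function and diagonal U operator for a toy graph-coloring instance.
    # Assumed instance (not from the slides): the 3-vertex path 0-1-2, two colors,
    # one qubit per vertex, so there are N = 2**3 data elements.
    import numpy as np

    edges = [(0, 1), (1, 2)]   # path graph; Cmax = number of edges = 2
    n_vertices = 3
    N = 2 ** n_vertices

    def cost(index):
        """Number of edges whose two endpoints received the same color."""
        colors = [(index >> v) & 1 for v in range(n_vertices)]
        return sum(colors[a] == colors[b] for a, b in edges)

    c_min, c_max = 0, len(edges)

    def c_nor(index):
        """Normalized cost, always between 0 and 1."""
        return (cost(index) - c_min) / (c_max - c_min)

    # U is diagonal: it attaches the phase exp(i * (pi/2) * Cnor) to each data element.
    U = np.diag([np.exp(1j * np.pi / 2 * c_nor(k)) for k in range(N)])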
19
Still don’t quite have our magic operator As written, U by itself will not lower the probability amplitude of bad states and increase the amplitude of good states If we apply U now, the probability amplitudes of both the best and worst data elements will be the same and differ only in phase
20
Take Advantage of Phase Differences We can accomplish the proper amplitude modifications by using a controlled form of the U gate Can’t be an ordinary controlled gate though
21
Ucs: The Answer to our Problems Ucs is a controlled gate that applies U to the data elements when the control bit is |0> and applies the inverse of U when the control bit is |1>
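One way to write the gate just described (control qubit written first): Ucs = |0><0| (tensor) U + |1><1| (tensor) U^-1. Since U is diagonal with unit-magnitude entries, U^-1 is just the complex conjugate of U, so Ucs is straightforward to form once U is available.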
22
Control Bits also need some modification Control bit always starts out in |0> state Before applying Ucs, we run the control bit through a Hadamard gate After applying Ucs, we run it through another Hadamard gate This gives us a nice super-position of minimal and maximal cost elements
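A small state-vector sketch of one H, Ucs, H round under the same assumed toy instance as above (3-vertex path graph, phase pi/2 * Cnor); the printed amplitudes reproduce the pattern shown on the next slide:

    # One H - Ucs - H round, simulated directly on the amplitudes.
    # Assumed toy instance (not from the slides): 3-vertex path graph, two colors.
    import numpy as np

    edges, n_vertices = [(0, 1), (1, 2)], 3
    N = 2 ** n_vertices
    cnor = np.array([sum(((k >> a) & 1) == ((k >> b) & 1) for a, b in edges) / len(edges)
                     for k in range(N)])
    phase = np.exp(1j * np.pi / 2 * cnor)            # diagonal entries of U

    data = np.full(N, 1 / np.sqrt(N))                # uniform super-position of the data
    # Control starts in |0>; the first Hadamard turns it into (|0> + |1>)/sqrt(2).
    branch0 = data * phase / np.sqrt(2)              # control |0>: U is applied
    branch1 = data * phase.conj() / np.sqrt(2)       # control |1>: U^-1 is applied
    # The second Hadamard on the control mixes the two branches.
    amp_control0 = (branch0 + branch1) / np.sqrt(2)  # = cos(pi/2 * Cnor) / sqrt(N)
    amp_control1 = (branch0 - branch1) / np.sqrt(2)  # = i * sin(pi/2 * Cnor) / sqrt(N)

    for k in range(N):
        print(format(k, "03b"), np.round(amp_control0[k], 4), np.round(amp_control1[k], 4))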
23
Matlab results for Graph Coloring

Control qubit |0> branch:
Data element   Probability amplitude
000            0
001            0.25
010            0.3536
011            0.25
100            0.25
101            0.3536
110            0.25
111            0

Control qubit |1> branch:
Data element   Probability amplitude
000            i*0.3536
001            i*0.25
010            0
011            i*0.25
100            i*0.25
101            0
110            i*0.25
111            i*0.3536
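Reading off the numbers (under the pi/2 * Cnor phase convention assumed earlier): 0.3536 is 1/sqrt(8), the uniform starting amplitude, and 0.25 is cos(pi/4)/sqrt(8). Each element ends up with amplitude cos((pi/2)*Cnor)/sqrt(8) in the |0> branch and i*sin((pi/2)*Cnor)/sqrt(8) in the |1> branch, so the zero-cost colorings 010 and 101 keep their full weight in the |0> branch while the worst colorings 000 and 111 are pushed entirely into the |1> branch.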
24
Measurement If we were to measure the control bit now and get a |0>, we'd know that the data register is left with the "first half" of the super-position:

Data element   Probability amplitude
000            0
001            0.25
010            0.3536
011            0.25
100            0.25
101            0.3536
110            0.25
111            0
25
Measurement However, if we got a |1> instead, we'd know that the data register is left with the "second half" of the super-position:

Data element   Probability amplitude
000            i*0.3536
001            i*0.25
010            0
011            i*0.25
100            i*0.25
101            0
110            i*0.25
111            i*0.3536
26
Measurement A control qubit measurement of |0> means we have a better chance of getting a lower cost state (a good solution) A control qubit measurement of |1> means we have a better chance of getting a higher cost state (a bad solution)
27
Measurement Assume the world is perfect and we always get a |0> when we measure the control qubit We can effectively increase our probability of getting good solutions and decrease the probability of getting bad solutions by iterating the H,Ucs,H operations We iterate by duplicating the circuit and adding more control qubits
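A sketch of this "ideal" iteration under the same assumed toy instance: post-selecting every control on |0> multiplies each amplitude by another cos((pi/2)*Cnor) factor per round, so high-cost elements are suppressed exponentially in the number of rounds (compare the results on the next slide):

    # "Ideal" iteration: every one of the b control qubits is measured as |0>.
    # Assumed toy instance (not from the slides): 3-vertex path graph, two colors.
    import numpy as np

    edges, n_vertices, b = [(0, 1), (1, 2)], 3, 26
    N = 2 ** n_vertices
    cnor = np.array([sum(((k >> i) & 1) == ((k >> j) & 1) for i, j in edges) / len(edges)
                     for k in range(N)])

    amp = np.full(N, 1 / np.sqrt(N), dtype=complex)
    for _ in range(b):
        amp = amp * np.cos(np.pi / 2 * cnor)   # surviving |0>-branch factor per round
    # amp is left unnormalized; renormalizing gives the post-measurement distribution.
    for k in range(N):
        print(format(k, "03b"), np.round(amp[k], 4))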
28
Matlab Results after 26 "Ideal" Iterations

Data element   Probability amplitude
000            0
001            0
010            0.3536
011            0
100            0
101            0.3536
110            0
111            0
-----------------------------------------------------------------
000            0
001            0
010            0
011            0
100            0
101            0
110            0
111            0
29
Life Isn’t Fair We don’t always get a |0> for all the control qubits when we measure Some of the qubits are bound to be measured in the |1> state Upon measuring the control qubits we can at least know the quality of our computation
30
The Tradeoff If we increase the number of control qubits (b), then we have a chance of bumping up the probability amplitudes of the lower cost solutions and canceling out the probability amplitudes of the higher cost solutions
31
The Tradeoff However, if we increase the number of control qubits (b), we ALSO lower our chances of measuring all of the control qubits in the |0> state
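Assuming the iteration is built by adding a fresh control qubit for each round (as described above), for a fixed data element each control ends up in the state cos(phi)|0> + i*sin(phi)|1> with phi = (pi/2)*Cnor, so the probability of measuring |0> on all b controls is (1/N) * sum over k of cos(phi_k)^(2b). Unless many elements have Cnor near 0, this shrinks quickly as b grows, which is exactly the tradeoff being described.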
32
Some good news As mentioned earlier, the measurement of the control qubits tells us how good or bad a particular run was Trugenberger gives an equation for the expected number of runs needed for a good result
33
Analogy to Simulated Annealing Can view b, the number of control qubits, as a sort of temperature parameter Trugenberger gives some energy distributions based on the “effective temperature” being equal to 1/b Simply an analogy to the number of iterations needed for a probabilistically good solution
34
A Whole New Meaning for k k can be seen as a certain subset of the |S> super-position of data elements For the graph coloring problem, k=3 More generally for other problems, k can vary from 1 to K where K > 1
35
Equations affected by generalization Cnor changes:
36
Equations affected by generalization U changes (this in turn changes Ucs, which utilizes U):
37
Overview Inspiration Basic Idea Mathematical and Circuit Realizations Limitations Future Work
38
U operator Constructing the U operator may itself be exponential in the number of qubits Perhaps some physical process could get around this
39
Cost Function Oracle? Trugenberger glosses over the implementation of the cost function (in fact, no implementation is suggested) Some problems may still be intractable if the cost function is too complicated
40
Only a Heuristic Trugenberger’s algorithm may not get the exact minimal solution Although, keeping in mind the tradeoff, more control qubits can be added to increase the odds of a good solution
41
Overview Inspiration Basic Idea Mathematical and Circuit Realizations Limitations Future Work
42
Future Work Look into physical feasibility of cost function and construction of Ucs Run more simulations on various problems and compare against classical heuristics Compare with Grover’s algorithm
43
Reference C. A. Trugenberger, "Quantum Optimization," July 22, 2001 (available on the LANL arXiv preprint archive)