Quantum Gibbs Sampling

Presentation transcript:

Quantum Gibbs Sampling Fernando G.S.L. Brandão Caltech Based on joint work with Krysta Svore (MSR) and Michael Kastoryano (U. Copenhagen) SQUINT 2017

Gibbs States
Thermal state of Hamiltonian H at temperature T: ⍴T = e^(−H/T) / tr(e^(−H/T))
Crucial for understanding equilibrium properties of systems at finite temperature.
Questions:
1. When can we prepare ⍴T in reasonable time?
2. What can we do with ⍴T?

Classical Gibbs States
Question 1: When can we sample from ⍴T in reasonable time?
Well-developed theory. Algorithm: Glauber dynamics (Metropolis). Consider e.g. the Ising model. Coupling to a bath is modeled by a stochastic map Q: pick a site uniformly at random (probability 1/n) and flip its spin with the Metropolis acceptance probability min(1, e^(−ΔE/T)). The stationary state is the thermal (Gibbs) state.

Classical Gibbs States
Question 1: When can we sample from ⍴T in reasonable time?
Performance: (Stroock, Zegarlinski ’92, …) efficient above the thermal phase transition temperature Tc; inefficient below it (in fact NP-hard for some models (Sly ’10, …)).
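The classical Metropolis update described above can be sketched in a few lines; this is a minimal illustration for a 1D Ising chain, with the chain length, temperature, and step count chosen only for demonstration:

```python
import math
import random

def metropolis_ising(n=20, beta=0.5, steps=20000, seed=0):
    """Minimal Metropolis sampler for a 1D Ising chain, H = -sum_i s_i s_{i+1}."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        # energy change if spin i flips (periodic boundary conditions)
        dE = 2 * s[i] * (s[(i - 1) % n] + s[(i + 1) % n])
        # Metropolis rule: accept with probability min(1, e^{-beta*dE})
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i] = -s[i]
    return s

spins = metropolis_ising()
```

Run long enough, the chain's distribution over configurations converges to the Gibbs distribution at inverse temperature beta; how fast it converges is exactly the question addressed on this slide.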

Classical Gibbs States
Question 2: What can we do with ⍴T?
Simulate thermal properties of classical models. And much more: Gibbs distributions (a.k.a. Markov random fields, Boltzmann distributions, …) play a central role in Markov chain Monte Carlo methods (e.g. estimating the volume of a convex body), machine learning (e.g. Boltzmann machines), and optimization (e.g. matrix multiplicative weights updates).

Quantum Gibbs States
1. When can we prepare ⍴T in reasonable time? (Temme et al ’11, …) Quantum analogue of Metropolis. But not as useful as the classical version: it is non-local and very hard to analyse, and there is no quantum computer to try it out on.
2. What can we do with ⍴T? We don’t know…
This talk: progress on both questions.

This Talk Part 1: Using Quantum Gibbs States for Quantum Speed-ups Part 2: Conditions for Efficient Preparation of Gibbs States

Quantum Algorithms
Exponential speed-ups: simulating quantum physics, factoring big numbers (Shor’s algorithm), …
Polynomial speed-ups: searching (Grover’s algorithm), …
Heuristics: quantum annealing, adiabatic optimization, …
Solving semidefinite programs belongs in the polynomial-speed-up category, and perhaps admits exponential speed-ups too.

Semidefinite Programming
… is an important class of convex optimization problems.
Input: n x n, s-sparse matrices C, A1, ..., Am and numbers b1, ..., bm
Output: a PSD matrix X maximizing tr(CX) subject to tr(AiX) ≤ bi
Linear programming is a special case. Many applications: operations research (location problems, scheduling, ...), bioengineering (flux balance analysis, ...), approximating NP-hard problems (max-cut, ...), field theory (conformal bootstrap), ... Also natural in quantum theory (density matrices, ...).
Algorithms:
Interior point: O((m2ns + mn2) log(1/δ))
Multiplicative weights: O(mns(⍵R)/δ2)
Lower bound: no faster than Ω(n+m) for constant parameters
(⍵: width; δ: error; R: bound on the size of the solution)
Are there quantum speed-ups for SDPs/LPs? A natural question, but unexplored so far.
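To make the problem format concrete, here is a tiny pure-Python check of a candidate SDP solution; the matrices and the candidate X are hypothetical toy data, not part of the talk:

```python
def trace_dot(A, B):
    """tr(A.B) for square matrices given as nested lists."""
    n = len(A)
    return sum(A[i][j] * B[j][i] for i in range(n) for j in range(n))

# toy instance: maximize tr(C X) subject to tr(A1 X) <= b1, X PSD
C  = [[1.0, 0.0], [0.0, 0.5]]
A1 = [[1.0, 0.0], [0.0, 1.0]]    # tr(A1 X) = tr(X), a trace constraint
b1 = 1.0
X  = [[0.7, 0.0], [0.0, 0.3]]    # a candidate PSD solution with tr(X) = 1

feasible = trace_dot(A1, X) <= b1
value = trace_dot(C, X)          # objective value tr(C X)
```

The point of the lower-bound discussion below is that even evaluating data like this scales with n and m; the quantum algorithms avoid ever writing X down explicitly.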

SDPs and Gibbs States
What do Gibbs states have to do with this?
(Jaynes ’57) Let ⍴ be a quantum state with given expectation values tr(Ai⍴) = ci. Then there is a Gibbs state of the form exp(∑i λi Ai)/tr(exp(∑i λi Ai)) with the same expectation values. Moreover, the Gibbs state is the maximum-entropy state with the right expectation values.

SDPs and Gibbs States
By Jaynes’ principle, there is an optimal solution of Gibbs form. Why? Apply the principle to Xopt/tr(Xopt) to get a Gibbs state with the same expectation values for the Aj and C. How can we find the λj’s?

SDP Duality
Primal: max tr(CX) s.t. tr(AiX) ≤ bi, X ≥ 0
Dual: min b.y s.t. ∑i yi Ai ≥ C, y ≥ 0 (y: m-dimensional vector)
Under mild conditions: Optprimal = Optdual

Size of Solutions
R parameter (primal): Tr(Xopt) ≤ R
r parameter (dual): ∑i (yopt)i ≤ r

SDP Lower Bounds
Even writing down an optimal solution takes time: primal (n x n PSD matrix X): Ω(n2); dual (m-dimensional vector y): Ω(m).
If one only wants the optimal value, these bounds can sometimes be beaten, e.g. by the multiplicative weights algorithm: O(mns(⍵R)/δ2) time.
But even just computing the optimal value requires:
Classical: Ω(n+m) (for constant r, R, s, δ)
Quantum: Ω(n1/2 + m1/2) (for constant r, R, s, δ); easy reduction to a search problem
Quantum: Ω(nm) if n ≅ m (for constant R, s, δ) (Apeldoorn, Gilyen, Gribling, de Wolf; see poster this afternoon)


Quantum Algorithm for SDP
Result 1: (B., Svore ’16) There is a quantum algorithm for solving SDPs running in time n1/2 m1/2 s2 poly(log(n, m), R, r, δ).
Input: n x n, s-sparse matrices C, A1, ..., Am and numbers b1, ..., bm. Normalization: ||Ai||, ||C|| ≤ 1.
Output: samples from y/||y||1 and the value ||y||1, and/or quantum samples from X/tr(X) and the value tr(X); value opt ± δ. (Output form similar to the HHL quantum algorithm for linear equations.)
Oracle model: we assume an oracle that, given a choice of Aj, a row k, and an index l, outputs the l-th non-zero element of that row at unit cost.
The good: unconditional quadratic speed-up in terms of n and m; close to optimal, given the Ω(n1/2 + m1/2) quantum lower bound.
The bad: “close to optimal” means no general super-polynomial speed-up, and the dependence on the other parameters is terrible: (Rr)32 δ-18. But: as of two days ago, an improvement by Ambainis: (Rr)13 δ-10.

Larger Speed-ups?
Result 2: There is a quantum algorithm for solving SDPs running in time TGibbs m1/2 poly(log(n, m), s, R, r, δ).
TGibbs := time to prepare on a quantum computer Gibbs states of the form exp(∑i νi Ai)/tr(exp(∑i νi Ai)) for real numbers |νi| ≤ O(log(n) poly(1/δ)).
Can use quantum Gibbs sampling (e.g. quantum Metropolis) as a heuristic. Exponential speed-up if thermalization is quick (poly #qubits = polylog(n)). This gives an application of quantum Gibbs sampling outside simulating physical systems.

Larger Speed-ups with “Quantum Input”
Result 3: There is a quantum algorithm for solving SDPs running in time m1/2 poly(log(n, m), s, R, r, δ, rank), with the data given in quantum form.
Quantum oracle model: there is an oracle that, given i, outputs the eigenvalues of Ai and its eigenvectors as quantum states. Here rank := max(maxi rank(Ai), rank(C)).
Idea: in this model one can easily perform the Gibbs sampling in poly(log(n), rank) time.
Question: how relevant is this model?

Partial Tomography
Problem: Given a set of measurements {Ai} (Ai2 = Ai) and access to copies of an unknown quantum state ⍴, find a description of a state σ s.t. |tr(Aiσ) − tr(Ai⍴)| ≤ ε for all i.
This is an SDP, with bj = ±tr(Aj⍴) + ε. We can implement an oracle for the bi by measuring the Ai on ⍴. Assume the measurements are efficient (poly(log(n)) time).
The quantum algorithm gives σ and the λi’s in time m1/2 poly(log(n, m), s, R, r, 1/ε, rank). For “low-rank measurements” (rank(Ai) = polylog(n)) this gives an exponential speed-up.
(Aaronson ’17) One can find σ with poly(log(n), log(m), 1/ε) samples of ⍴. Can we improve the sample complexity to match this result?

Special Case: Max Eigenvalue
Computing the max eigenvalue of C is an SDP: λmax(C) = max tr(CX) s.t. tr(X) = 1, X ≥ 0.
This is a well-studied problem. Quantum annealing (cool down −C): if we can prepare the Gibbs state ⍴β = e^(βC)/tr(e^(βC)) for β = O(log(n)/δ), we can compute the max eigenvalue to error δ.

Special Case: Max Eigenvalue
(Poulin, Wocjan ’09) Can prepare ⍴β for s-sparse C in time Õ(s n1/2) on a quantum computer.
Idea: phase estimation + amplitude amplification. Post-selecting on “0” after phase estimation gives a purification of the Gibbs state with Pr > O(1/n); amplitude amplification boosts this to Pr > 1 − o(1) with O(n1/2) iterations.
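The annealing bound is easy to check numerically in the commuting (diagonal) case, where the Gibbs expectation value tr(C ⍴β) depends only on the spectrum; the spectrum below is hypothetical toy data:

```python
import math

def gibbs_energy(eigs, beta):
    """tr(C e^{beta C}) / tr(e^{beta C}) for a matrix C with the given spectrum."""
    mx = max(eigs)
    # shift the exponents by the max eigenvalue for numerical stability
    w = [math.exp(beta * (e - mx)) for e in eigs]
    return sum(e * wi for e, wi in zip(eigs, w)) / sum(w)

eigs = [0.2, 0.5, 0.9, 1.0]            # toy spectrum; true lambda_max = 1.0
estimate = gibbs_energy(eigs, beta=50.0)
# lambda_max - estimate <= log(n)/beta, so beta = O(log(n)/delta)
# suffices for additive error delta, as stated on the slide
```

As beta grows, the Gibbs weights concentrate on the top eigenvalue, and the error decays like log(n)/beta.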

Quantizing the Arora-Kale Algorithm
General case: the quantum algorithm is based on a classical algorithm for SDPs due to Arora and Kale (2007), built on the matrix multiplicative weights method. Let’s review their method.
Assumptions: we assume bi ≥ 1; the general case reduces to this with a poly(r) blow-up in complexity. We also assume w.l.o.g. that ||Ai||, ||C|| ≤ 1. We solve the dual decision problem of finding a feasible y s.t. b.y ≤ ⍺; optimization then follows by binary search.

The Oracle
The Arora-Kale algorithm has an auxiliary subroutine (the ORACLE) which solves a simple linear program: given the current candidate Xt, it searches for a vector y s.t.
i) b.y ≤ ⍺
ii) tr((∑i yi Ai − C) Xt) ≥ 0

Arora-Kale Algorithm
Steps 1-4 (shown as equations on the slide): roughly, start from the maximally mixed state, repeatedly call the ORACLE on the current candidate Xt, update Xt by matrix multiplicative weights, and output the average ȳ of the returned yt’s.
Thm (Arora-Kale ’07): ȳ.b ≤ (1+δ)⍺. The optimal value can then be found by binary search.

Why does Arora-Kale work?
Since ȳ.b ≤ (1+δ)⍺ by construction, we must check that ȳ is (approximately) feasible. From the ORACLE, for all t: tr((∑i yti Ai − C) ⍴t) ≥ 0. We need to turn these per-round guarantees into a spectral lower bound on ∑i ȳi Ai − C; this is exactly what matrix multiplicative weights provides.

Matrix Multiplicative Weights
MMW (Arora, Kale ’07): given n x n matrices 0 ≤ Mt ≤ I and ε < ½, with Xt = exp(−ε ∑τ<t Mτ)/tr(exp(−ε ∑τ<t Mτ)):
∑t tr(Mt Xt) ≤ (1+ε) λn(∑t Mt) + ln(n)/ε, where λn denotes the minimum eigenvalue.
2-player zero-sum game interpretation: player A chooses a density matrix Xt, player B chooses a matrix 0 ≤ Mt ≤ I, and the pay-off is tr(Xt Mt). “The online strategy Xt = ⍴t is almost as good as the best fixed strategy.”
Applying this with Mt built from the ORACLE’s output: the ORACLE gives a lower bound on tr(Mt ⍴t) for every t, and MMW turns it into a lower bound on the minimum eigenvalue of ∑t Mt, i.e. approximate feasibility of ȳ.
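When all the Mt commute, the MMW update reduces to the classical Hedge update over the (fixed) eigenbasis; this diagonal toy run, with random losses, illustrates the regret bound stated above:

```python
import math
import random

# Diagonal-case illustration of matrix multiplicative weights: when the M_t
# commute, X_t = exp(-eps * sum_{tau<t} M_tau)/tr(...) is the Hedge update.
random.seed(0)
n, T, eps = 4, 200, 0.1
losses = [[random.random() for _ in range(n)] for _ in range(T)]  # entries in [0,1]

cum = [0.0] * n          # cumulative losses = eigenvalues of sum_t M_t
algo_loss = 0.0
for t in range(T):
    w = [math.exp(-eps * c) for c in cum]
    Z = sum(w)
    x = [wi / Z for wi in w]                         # diagonal density matrix X_t
    algo_loss += sum(xi * li for xi, li in zip(x, losses[t]))
    cum = [c + l for c, l in zip(cum, losses[t])]

best = min(cum)          # = lambda_min(sum_t M_t) in the diagonal case
# regret bound: algo_loss <= (1+eps)*best + ln(n)/eps
```

The online player's total loss tracks the minimum eigenvalue of the accumulated loss matrix up to the (1+ε) factor and the additive ln(n)/ε term, which is the inequality the feasibility argument relies on.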

Quantizing the Arora-Kale Algorithm
We make it quantum as follows:
1. Implement the ORACLE by Gibbs sampling to produce yt, and apply amplitude amplification to solve it in time Õ(s2 n1/2 m1/2).
2. Sparsify Mt to a sum of O(log(m)) terms.
3. Use quantum Gibbs sampling + amplitude amplification to prepare ⍴t in time Õ(s2 n1/2).

Quantizing the Arora-Kale Algorithm
Implementing the ORACLE by Gibbs sampling: we’ll show there is a feasible yt of the form yt = N qt with qt := exp(h)/tr(exp(h)). We need to simulate an oracle for the entries of h, which we do by measuring ⍴t with the Ai. Preparing each ⍴t takes time Õ(s2 n1/2), and sampling from qt requires Õ(m1/2) calls to the oracle for h, so the total time is Õ(s2 n1/2 m1/2).

Quantizing the Arora-Kale Algorithm
Sparsifying Mt to a sum of O(log(m)) terms: can show it works by the matrix Hoeffding bound. If Z1, …, Zk are independent n x n Hermitian matrices s.t. E(Zi) = 0 and ||Zi|| ≤ λ, then Pr(||∑i Zi||/k > ε) ≤ n exp(−kε2/8λ2).
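The sparsification step is just sampling terms of a long convex combination; here is a scalar toy version (in the algorithm the terms are Hermitian matrices and the matrix Hoeffding bound controls the operator-norm error, but the concentration behaviour is the same). All numbers are illustrative:

```python
import math
import random

# Approximate a long convex combination sum_i p_i * v_i by averaging
# a small number of sampled terms.
random.seed(1)
m = 1000
p = [1.0 / m] * m                         # sampling distribution
v = [math.sin(i) for i in range(m)]       # bounded terms, |v_i| <= 1
exact = sum(pi * vi for pi, vi in zip(p, v))

k = 200                                   # number of sampled terms
idx = random.choices(range(m), weights=p, k=k)
approx = sum(v[i] for i in idx) / k
# Hoeffding: the deviation is O(sqrt(log(1/failure prob)/k)) with high probability
```

With k growing only logarithmically in the desired failure probability, the sampled average is already a good proxy for the full sum, which is why O(log(m)) terms suffice in the algorithm.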

Quantum Arora-Kale, Roughly
(Slide diagram.) The same steps 1-4 as Arora-Kale, but with Mt sparsified to (M’)t and with both ⍴t and yt produced by Gibbs sampling.

Implementing the Oracle by Gibbs Sampling
The ORACLE searches for a (non-normalized) probability distribution y satisfying two linear constraints.
Claim: we can take y to be Gibbs: there are constants N, λ, μ s.t. y = N·q satisfies both constraints, where q is a Gibbs distribution determined by λ and μ.

Jaynes’ Principle
(Jaynes ’57) Let ⍴ be a quantum state with expectation values tr(Ai⍴) = ci. Then there is a Gibbs state of the form exp(∑i λi Ai)/tr(exp(∑i λi Ai)) with the same expectation values.
Drawback: no control over the size of the λi’s.

Finitary Jaynes’ Principle
(Lee, Raghavendra, Steurer ’15) Let ⍴ satisfy the given expectation-value constraints approximately. Then there is a Gibbs state of the same form, with coefficients of bounded size, satisfying the constraints up to error ε.
(Note: this was used to prove limitations of SDPs for approximating constraint satisfaction problems; see James Lee’s talk.)

Implementing the Oracle by Gibbs Sampling
Claim: there is a y of the Gibbs form above, with λ, μ < log(n)/ε and N < ⍺, satisfying the constraints.
We can therefore implement the ORACLE by exhaustively searching over λ, μ, N for a Gibbs distribution satisfying the constraints (only ⍺ log2(n)/ε3 different triples need to be checked).
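The exhaustive search is straightforward to sketch; this toy version uses two hypothetical linear constraints (vectors u, v and thresholds alpha, beta are made-up illustration data, and the real parameter ranges come from the claim above):

```python
import math

def gibbs(scores):
    """Gibbs distribution q_i proportional to exp(scores_i)."""
    mx = max(scores)
    w = [math.exp(s - mx) for s in scores]   # shift for numerical stability
    Z = sum(w)
    return [x / Z for x in w]

def toy_oracle(u, v, alpha, beta, grid=20):
    """Grid search over (N, lam, mu) for y = N*q, q_i ~ exp(lam*u_i + mu*v_i),
    satisfying sum_i y_i u_i <= alpha and sum_i y_i v_i >= beta."""
    for Ni in range(1, grid + 1):
        N = alpha * Ni / grid                # N ranges up to alpha
        for li in range(-grid, grid + 1):
            lam = 2.0 * li / grid
            for mi in range(-grid, grid + 1):
                mu = 2.0 * mi / grid
                q = gibbs([lam * ui + mu * vi for ui, vi in zip(u, v)])
                y = [N * qi for qi in q]
                if (sum(yi * ui for yi, ui in zip(y, u)) <= alpha
                        and sum(yi * vi for yi, vi in zip(y, v)) >= beta):
                    return N, lam, mu
    return None

sol = toy_oracle(u=[1.0, 2.0, 0.5], v=[0.2, 1.0, 1.5], alpha=1.5, beta=0.8)
```

Only a polynomially small grid of triples needs to be checked, which is what makes the Gibbs-form ansatz an efficient replacement for a general LP solver inside the loop.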

Part 2
Can we prepare thermal states?
Easy? Couple the system S to a bath B at the right temperature and wait. But the required size of the environment might be huge, so this may not be efficient (Terhal and DiVincenzo ’00, …).
Warning: in general it is impossible to prepare thermal states efficiently, even at constant temperature and even for classical models, when they are defined on general graphs (PCP theorem, Arora et al ’98).
Warning 2: spin glasses are not expected to thermalize.

Classical Result
(Stroock, Zegarlinski ’92; Martinelli, Olivieri ’94, …) Classically, a Gibbs state with finite correlation length can be efficiently sampled with Metropolis dynamics.
The Gibbs state has correlation length ξ if, for every pair of local observables f and g, their covariance decays as e^(−d(f,g)/ξ), where d(f,g) is the distance between their supports.

Preparing Quantum Gibbs States Efficiently
We conjecture the same holds for quantum systems. Partial result:
Thm (B., Kastoryano ’16): For a Hamiltonian H on a d-dimensional lattice with N qubits, ⍴T can be prepared on a quantum computer in time exp(logd(N)) if
1. ⍴T has finite correlation length, and
2. ⍴T is an approximate Markov state.

Quantum Approximate Markov States
A state is quantum approximate Markov if, for every tripartition into regions A, B, C with B shielding A from C, the conditional mutual information I(A:C|B) decays exponentially in the width of B.
Every Gibbs state of a commuting Hamiltonian is a Markov state.
(B., Kato ’16) The conjecture is true in 1 dimension.
The conjecture is a strengthening of the area law for thermal states.

Strengthening of Area Law
(Wolf, Verstraete, Hastings, Cirac ’07) Thermal states obey an area law for the mutual information. From the conjecture: the approximate Markov property additionally gives the rate of saturation of the area law as the separating region grows.

Conclusion and Open Problems
The theory of quantum Gibbs states is at least as rich as that of classical systems. We are still at the beginning…
Can we improve the parameters of the quantum SDP algorithm?
The quantum computer is only used for Gibbs sampling. Is this an application for small quantum computers?
Are there more meaningful exponential speed-ups?
Are quantum Gibbs states approximate Markov?
Is there a local version of quantum Metropolis? (Yes, if the Gibbs state is Markov.)
Thanks!