Checking the Consistency of Local Density Matrices


Yi-Kai Liu
Computer Science and Engineering, University of California, San Diego
y9liu@cs.ucsd.edu

The Consistency Problem

Consider an n-qubit system. We are given density matrices ρ1,…,ρm, where each ρi describes a subset Ci of the qubits. Is there a state σ (on all n qubits) whose reduced density matrices match ρ1,…,ρm? (Figure: e.g., ρ1 on C1 = {1,2,3} and ρ2 on C2 = {2,4,5}.)
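In code, each ρi is obtained from a global state σ by a partial trace over the qubits outside Ci. A minimal numpy sketch (the function name and the 0-indexed qubit convention are my own, not from the talk):

```python
import numpy as np

def reduced_density_matrix(sigma, keep, n):
    """Partial trace of an n-qubit density matrix, keeping the qubits in `keep`."""
    trace_out = [q for q in range(n) if q not in keep]
    perm = list(keep) + trace_out
    t = sigma.reshape([2] * (2 * n))
    # reorder axes so the kept qubits come first in both row and column indices
    t = t.transpose(perm + [n + q for q in perm])
    dk, dt = 2 ** len(keep), 2 ** len(trace_out)
    t = t.reshape(dk, dt, dk, dt)
    return np.einsum('ajbj->ab', t)  # sum over the traced-out indices

# Example: sigma = (Bell state on qubits 0,1) tensor |0><0| on qubit 2
bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
sigma = np.kron(np.outer(bell, bell), np.diag([1.0, 0.0]))

rho1 = reduced_density_matrix(sigma, [0, 1], 3)  # the Bell state itself
rho2 = reduced_density_matrix(sigma, [1, 2], 3)  # (I/2) tensor |0><0|
```

The consistency question then asks the converse: given such marginals, does a compatible global σ exist?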

An Example

3 qubits, ρA = ρB = |φ⟩⟨φ|, where |φ⟩ = (|00⟩ + |11⟩) / √2, with A = {1,2} and B = {2,3}. There is no state σ s.t. tr3(σ) = ρA, tr1(σ) = ρB. Can see this using strong subadditivity: S(1,2,3) + S(2) ≤ S(1,2) + S(2,3). Here S(1,2) = S(2,3) = 0 (both marginals are pure) while S(2) = 1, so any such σ would need S(1,2,3) ≤ –1, which is impossible.
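The entropies in this strong-subadditivity argument can be checked numerically. A small sketch (numpy only; entropies in bits, helper name my own):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum_k lambda_k log2 lambda_k, in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # 0 log 0 = 0 by convention
    return float(-np.sum(evals * np.log2(evals)))

bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
rho_pair = np.outer(bell, bell)   # plays the role of rho on {1,2} and on {2,3}
rho_2 = np.eye(2) / 2             # marginal of the Bell state on one qubit

S12 = von_neumann_entropy(rho_pair)  # 0: pure state
S23 = von_neumann_entropy(rho_pair)  # 0
S2 = von_neumann_entropy(rho_2)      # 1
# SSA would force S(1,2,3) <= S12 + S23 - S2 = -1, but entropy is nonnegative
```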

A More General Problem

Consider a finite quantum system. We are given a set of observables T1,…,Tr, together with expectation values t1,…,tr. Is there a state σ with these expectation values, that is, tr(Ti σ) = ti for i = 1,…,r? Consistency of local density matrices is a special case of this problem: for each subset of qubits C, knowing the density matrix for C is equivalent to knowing the expectation values of all Pauli matrices on C.
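The equivalence with Pauli expectation values comes from the expansion ρ = (1/2^k) Σ_P tr(Pρ) P over all k-qubit Pauli products P, which are an orthogonal basis for the Hermitian matrices. A quick numpy check of this identity (variable names are mine):

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def pauli_basis(k):
    """All 4^k k-fold tensor products of I, X, Y, Z."""
    mats = []
    for combo in product([I2, X, Y, Z], repeat=k):
        m = np.array([[1.0 + 0j]])
        for p in combo:
            m = np.kron(m, p)
        mats.append(m)
    return mats

# A k-qubit density matrix is determined by its Pauli expectation values.
k = 2
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho)  # random 2-qubit state

expectations = [np.trace(P @ rho) for P in pauli_basis(k)]
rho_rebuilt = sum(t * P for t, P in zip(expectations, pauli_basis(k))) / 2**k
```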

Our Results

I. For the consistency problem: If ρ1,…,ρm are consistent with some state σ > 0, then they are also consistent with a state σ’ of the following form: σ’ = (1/Z) exp(M1+…+Mm), where each Mi is a Hermitian matrix that acts on the same qubits as ρi, and Z is a normalizing factor. So the existence of σ’ is a necessary and sufficient condition for consistency.

Our Results

II. For the general problem: If there exists a state σ > 0 with expectation values t1,…,tr, then there exists a state σ’ which has the same expectation values, and is of the form: σ’ = (1/Z) exp(θ1T1+…+θrTr), where θ1,…,θr are real. This holds under a technical assumption: that T1,…,Tr and I are linearly independent over the reals.

Related Work These results were previously derived by Jaynes (1957), as part of the maximum-entropy principle for statistical inference Jaynes’ proof uses the Lagrange dual of the entropy-maximization problem We give a somewhat different proof, using the convexity of the partition function

Facts about the Partition Function

Given observables T1,…,Tr, let Z(θ) = tr(exp(θ1T1+…+θrTr)) and ψ(θ) = log Z(θ). Consider the family of states ρ(θ) = exp(θ1T1+…+θrTr) / Z(θ). Replacing Ti with Ti + αI does not change ρ(θ). The function ψ is convex and differentiable, and ∂ψ/∂θi = tr(Ti ρ(θ)).
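The gradient identity ∂ψ/∂θi = tr(Ti ρ(θ)) holds even when the Ti do not commute, and can be sanity-checked with finite differences. A sketch using scipy's matrix exponential (the observables and the test point are arbitrary choices of mine):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
T = [Z, X]  # two non-commuting observables

def psi(theta):
    """psi(theta) = log tr exp(theta_1 T_1 + theta_2 T_2)."""
    return float(np.log(np.trace(expm(sum(t * Ti for t, Ti in zip(theta, T))))))

def rho(theta):
    M = expm(sum(t * Ti for t, Ti in zip(theta, T)))
    return M / np.trace(M)

theta = np.array([0.3, -0.7])
eps = 1e-6
grads_fd, grads_exact = [], []
for i in range(2):
    e = np.zeros(2); e[i] = eps
    # central finite difference vs. the exact expression tr(T_i rho(theta))
    grads_fd.append((psi(theta + e) - psi(theta - e)) / (2 * eps))
    grads_exact.append(float(np.trace(T[i] @ rho(theta))))
```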

Expectation Values and the Partition Function

Given expectation values t1,…,tr, we can assume ti = 0 (by translating Ti appropriately). We want to find θ s.t. gradient(ψ(θ)) = 0. Example: a single qubit, want <σz> = –1 (this happens only as θ → –∞): ψ(θ) = log tr(exp(θ(σz+1))) = log(e^(2θ) + 1). (Figure: plot of ψ(θ), decreasing toward 0 as θ → –∞.)
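This single-qubit example can be reproduced directly: ψ(θ) matches the closed form log(e^(2θ) + 1), and <σz> = tanh(θ), which approaches –1 only in the limit θ → –∞. A brief check (scipy's expm; function names mine):

```python
import numpy as np
from scipy.linalg import expm

Z = np.diag([1.0, -1.0])
T = Z + np.eye(2)  # the shifted observable sigma_z + I

def psi(theta):
    return float(np.log(np.trace(expm(theta * T))))

def expect_z(theta):
    M = expm(theta * T)
    return float(np.trace(Z @ M) / np.trace(M))

# psi matches log(e^(2 theta) + 1); <sigma_z> = tanh(theta) stays above -1
```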

Expectation Values and the Partition Function

We want to find θ s.t. gradient(ψ(θ)) = 0. Another example: a single qubit, want <σz> = –1 and <σx> = –1 (not possible): ψ(θ) = log tr(exp(θ1(σz+1) + θ2(σx+1))). (Figure: surface plot of ψ(θ1,θ2).)

Proof Sketch

We prove claim II, which implies claim I as a special case. We know there is a state ρ > 0 s.t. tr(Ti ρ) = ti. We can write ρ in the form ρ(θ,φ) = exp(θ1T1+…+θrTr + φ1U1+…+φsUs) / Z(θ,φ), where T1,…,Tr, U1,…,Us are a complete set of observables. Let ui = tr(Ui ρ) be the expectation values of the Ui. We can assume ti = 0 and ui = 0 for all i.

Proof Sketch

Since the Ti and Ui are a complete set of observables, there is a unique (θ,φ) s.t. ρ(θ,φ) has the expectation values ti and ui. So gradient(ψ(θ,φ)) vanishes at exactly one point. Since ψ is a convex function, it follows that ψ(θ,φ) → ∞ as ||(θ,φ)|| → ∞, where ||(θ,φ)|| is the norm of the vector (θ,φ).

Proof Sketch

Now consider states ρ’ of the form ρ’(θ) = exp(θ1T1+…+θrTr) / Z’(θ). The partition functions of ρ’ and ρ are related: ψ’(θ) = ψ(θ,0). Hence ψ’(θ) → ∞ as ||θ|| → ∞. Since ψ’ is convex, it attains its minimum at some point θmin. Hence gradient(ψ’(θmin)) = 0, and so the state ρ’(θmin) has the desired expectation values ti.
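This proof sketch is constructive: minimizing ψ’ after shifting the Ti so that ti = 0 (equivalently, minimizing ψ’(θ) – Σ θi·ti without the shift) yields a Gibbs-form state with the desired expectation values. A numerical sketch with scipy (the target values and solver choice are mine; the targets must lie strictly inside the achievable set for a full-rank minimizer to exist):

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
T = [Z, X]
targets = np.array([-0.6, -0.3])  # want <sigma_z> = -.6, <sigma_x> = -.3

def free_energy(theta):
    # psi(theta) - theta . t; a minimizer theta* gives tr(T_i rho(theta*)) = t_i
    H = sum(t * Ti for t, Ti in zip(theta, T))
    return float(np.log(np.trace(expm(H))) - theta @ targets)

res = minimize(free_energy, x0=np.zeros(2), method='Nelder-Mead',
               options={'xatol': 1e-10, 'fatol': 1e-12})
M = expm(sum(t * Ti for t, Ti in zip(res.x, T)))
rho_opt = M / np.trace(M)
zz = float(np.trace(Z @ rho_opt))  # close to -0.6
xx = float(np.trace(X @ rho_opt))  # close to -0.3
```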

Proof Sketch

Example: we want <σz> = –.6. Here σx plays the role of the extra observables Ui, with <σx> = –.3. (Figure: contour plot of ψ(θ,φ); point (1) marks where gradient(ψ(θ,φ)) = 0, and point (2) marks where gradient(ψ’(θ)) = 0 along the slice ψ’(θ) = ψ(θ,0).)