Entanglement Probability Distribution of Random Stabilizer States. Oscar C.O. Dahlsten, Martin B. Plenio. UCL, 23 Feb 2006.

Presentation transcript:

UCL, 23 Feb 2006 Entanglement Probability Distribution of Random Stabilizer States Oscar C.O. Dahlsten, Martin B. Plenio

UCL, 23 Feb 2006 Explaining the Title The title is 'Entanglement Probability Distribution of Random Stabilizer States'. Entanglement is the amount of quantum correlations, here taken between two parties sharing a pure state. By entanglement probability distribution we mean P(E), the likelihood of having entanglement of value E. Stabilizer states are an important discrete subset of general states. By random stabilizer states we mean that we sample at random, and without bias, from states that are restricted to be stabilizer states.
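For concreteness, here is the entanglement measure presumably intended (the slide does not define it explicitly; this is the standard entropy of entanglement for pure bipartite states, not a formula taken from the slides):

```latex
E\big(|\psi\rangle_{AB}\big) \;=\; S(\rho_A) \;=\; -\mathrm{Tr}\big[\rho_A \log_2 \rho_A\big],
\qquad \rho_A = \mathrm{Tr}_B\, |\psi\rangle\langle\psi| ,
```

measured in ebits, so that E = 0 for product states and E = N_A for maximally entangled states.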

UCL, 23 Feb 2006 Talk Structure This talk aims to explain the paper: Exact Entanglement Probability Distribution in Randomised Bipartite Stabilizer States. [Dahlsten, Plenio, quant-ph/ ] 1. Introduction, aim of work 2. Entanglement Probability Distribution 3. Properties of Distribution 4. Summary and Outlook

UCL, 23 Feb 2006 Motivation Entanglement is a fundamental resource in quantum information tasks. We can classify and quantify entanglement between two parties quite well, but there is a plethora of classes for more than two parties. Here we consider two simplifications to the problem: A. Restrict the states to be ‘stabilizer states’, a discrete subset of all possible quantum states. B. Restrict entanglement types to those that are ‘typical’.

UCL, 23 Feb 2006 A. Only Stabilizer States Stabilizer states are an important discrete subset of all possible states [Gottesman, Caltech PhD thesis]. They are called stabilizer states because the state is defined by listing the Pauli matrices that 'stabilize' it, i.e. the operators g_i satisfying g_i|ψ⟩ = |ψ⟩. They can be parametrised efficiently, yet form a rich variety of states. Bipartite entanglement in stabilizer states comes in integer values, E = 0, 1, 2, ..., E_max [Audenaert, Plenio, quant-ph/ ] [Fattal et al., quant-ph/ ]
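As a concrete illustration (a standard textbook example, not taken from the slide): the two-qubit Bell state is a stabilizer state, specified by the two commuting Pauli operators X⊗X and Z⊗Z that fix it,

```latex
|\Phi^+\rangle = \tfrac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big),
\qquad
(X \otimes X)\,|\Phi^+\rangle = |\Phi^+\rangle,
\quad
(Z \otimes Z)\,|\Phi^+\rangle = |\Phi^+\rangle,
```

and its bipartite entanglement is exactly E = 1 ebit, illustrating the integer-valued entanglement mentioned above.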

UCL, 23 Feb 2006 B. Only Typical Entanglement Second simplification: consider only the typical entanglement in a completely randomised system [Hayden et al., quant-ph/ ]. Physical setting: imagine two-level atoms in a gas colliding at random, causing entanglement between energy levels. Asymptotically the system is completely randomised, and Alice and Bob are E-entangled with probability P(E). [Figure: Alice's and Bob's atoms at t=0 and t=1.]

UCL, 23 Feb 2006 Typical Entanglement cont'd. For general states it is known that the average typical/generic entanglement is near maximal (Page's conjecture). Here 'typical' is defined relative to the uniform distribution on states, given by the Haar measure on unitaries. There is a concentration of the distribution around this average with increasing N ('concentration of measure'). Is the above still true under the restriction to stabilizer states?
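For orientation, a standard statement of Page's result for Haar-random pure states, quoted from the literature rather than from the slide: for a random pure state on subsystems of dimensions d_A ≤ d_B, the average entanglement entropy is

```latex
\langle S_A \rangle \;=\; \sum_{k = d_B + 1}^{d_A d_B} \frac{1}{k} \;-\; \frac{d_A - 1}{2 d_B}
\;\approx\; \ln d_A - \frac{d_A}{2 d_B}
\quad \text{(in nats)},
```

which for qubits (d_A = 2^{N_A}, d_B = 2^{N_B}) sits only a small correction below the maximum of N_A ebits.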

UCL, 23 Feb 2006 Objective: Find the Exact P(E) The first question in this line of enquiry is: what is the typical bipartite entanglement in randomised stabilizer states? To answer this we need the probability distribution P(E). An entanglement value E is typical if P(E) is significant, atypical if P(E) is insignificant. Hence the objective is to find and study P(E) for randomised bipartite stabilizer states.

UCL, 23 Feb 2006 Overview 1. (Done) Introduction, aim of work: simplify entanglement classification by restricting to classes that are typical in stabilizer states; hence aim to find P(E) of randomised stabilizer states, where E is bipartite entanglement. Next: 2. Entanglement Probability Distribution: we derive an expression for P(E). 3. Properties of Distribution. 4. Summary and Outlook.

UCL, 23 Feb 2006 P(E) Theorem Statement Notation: the N qubits are grouped such that N_A belong to Alice (the smaller party) and N_B to Bob. The total state is pure and N = N_A + N_B. The state is restricted to be a stabilizer state, but any such state is equally likely. Then P(E), the probability of entanglement E between Alice and Bob, is given by the formula in the paper. [Formula not transcribed from the slide.]

UCL, 23 Feb 2006 Proof Outline (i) Take the probability distribution on stabilizer states to be flat, so p(state) = 1/n_tot, where n_tot is the total number of stabilizer states for the given N. Entanglement E is an integer, so P(E) = n_E / n_tot, where n_E(N, N_A) is the number of stabilizer states with entanglement E. Simplest example: N = 2, N_A = 1. An explicit count gives n_tot = 60, n_0 = 36, n_1 = 24, thus P(0) = 36/60 and P(1) = 24/60. [Figure: the set of all n_tot states partitioned into n_0 and n_1.]
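As an independent cross-check of the N = 2, N_A = 1 count quoted above, here is a minimal brute-force sketch (not from the slides; the gate set, tolerances and state canonicalisation are my own choices). It enumerates the Clifford orbit of |00⟩, which is exactly the set of two-qubit stabilizer states, and tallies the entanglement of each.

```python
import numpy as np

# Two-qubit Clifford generators: H and S on each qubit, plus CNOTs in both directions.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)
I2 = np.eye(2, dtype=complex)
CNOT_12 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
CNOT_21 = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]], dtype=complex)
gates = [np.kron(H, I2), np.kron(I2, H), np.kron(S, I2), np.kron(I2, S), CNOT_12, CNOT_21]

def canonical(psi):
    """Fix global phase and normalisation so that identical states hash identically."""
    first = psi[np.argmax(np.abs(psi) > 1e-9)]   # first nonzero amplitude
    psi = psi / first
    psi = psi / np.linalg.norm(psi)
    return tuple(np.round(psi, 6))

def entanglement(psi):
    """Entropy of entanglement (in ebits) across the 1-qubit | 1-qubit cut."""
    m = psi.reshape(2, 2)                        # rows: Alice's qubit, columns: Bob's
    evals = np.linalg.eigvalsh(m @ m.conj().T)   # eigenvalues of the reduced state rho_A
    evals = evals[evals > 1e-9]
    return float(-np.sum(evals * np.log2(evals)))

# Breadth-first search over the Clifford orbit of |00>.
start = np.array([1, 0, 0, 0], dtype=complex)
seen = {canonical(start)}
frontier = [start]
while frontier:
    nxt = []
    for psi in frontier:
        for g in gates:
            key = canonical(g @ psi)
            if key not in seen:
                seen.add(key)
                nxt.append(np.array(key))
    frontier = nxt

counts = {}
for key in seen:
    E = int(round(entanglement(np.array(key))))
    counts[E] = counts.get(E, 0) + 1

print(len(seen), counts)   # expected: 60 {0: 36, 1: 24}
```

Running this reproduces n_tot = 60, n_0 = 36 and n_1 = 24, i.e. P(0) = 36/60 and P(1) = 24/60, as stated on the slide.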

UCL, 23 Feb 2006 Proof Outline (ii) Finding n_E(N, N_A) for any N and N_A is tricky. Use three lemmas: Lemma 1: the total number of states n_tot is known [Gottesman, Aaronson quant-ph/052328] [Gross, quant-ph/ ]. Lemma 2: the number of unentangled states n_0 is known in closed form [formula not transcribed]. Lemma 3: there is an invariant ratio [formula not transcribed] (proof complicated). The lemmas together give an iterative expression for n_E. This gives P(E) as P(E) = n_E / n_tot.
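For reference, the closed form behind Lemma 1 (quoted from the Aaronson-Gottesman and Gross references rather than reconstructed from the slide image) is the standard count of N-qubit stabilizer states,

```latex
n_{\mathrm{tot}}(N) \;=\; 2^{N} \prod_{k=1}^{N} \left(2^{k} + 1\right),
```

which gives n_tot(1) = 6 and n_tot(2) = 60, consistent with the explicit N = 2 count on the previous slide.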

UCL, 23 Feb 2006 Overview 1. (Done) Introduction, aim of work. 2. (Done) Entanglement Probability Distribution derived. Next: 3. Properties of Distribution: the distribution is 'Gaussian-ish'; the average is nearly maximal; there is concentration around the average; it is similar to general states. 4. Summary and Outlook.

UCL, 23 Feb 2006 Distribution is 'Gaussian-ish' An entirely equivalent form of the distribution is [formula not transcribed], where the correction term is messy but comparatively small. Therefore P(E) is roughly one side of a Gaussian curve, centred on N/2.

UCL, 23 Feb 2006 Example of P(E) An example of P(E), for N = 12, N_A = 5.

UCL, 23 Feb 2006 Average is Nearly Maximal Recall that the maximal entanglement possible is N_A, the number of qubits in the smaller of the two groups. By the main P(E) theorem, one sees that the average entanglement ⟨E⟩ is nearly maximal for large N. Therefore if we pick stabilizer states at random we expect to get near-maximal entanglement on average.

UCL, 23 Feb 2006 Concentration at Average The distribution squeezes up around the average with increasing N. Typical entanglement for large N is thus nearly maximal. The animation to the right shows P(E) with fixed N_A but increasing N.

UCL, 23 Feb 2006 Similar to General States The average entanglement in general states is also near maximal ('Page's conjecture'). The figure below compares the averages for N = 10 and varying N_A. There is concentration around the average for general states too [Hayden et al., quant-ph/ ].

UCL, 23 Feb 2006 Summary We give the probability distribution of entanglement in randomised stabilizer states. It shows that the typical entanglement is near maximal. Surprisingly, this is very similar to the case for general states. Note: [Smith & Leung, quant-ph/ ] is also interesting. Outlook: Is there a similarity between stabilizer and general states for quantities other than entanglement? What about multipartite entanglement? What happens during the process of randomisation?