Quantum Convolutional Neural Networks (QCNN)


Quantum Convolutional Neural Networks (QCNN)
Authors: Iris Cong, Soonwon Choi, Mikhail D. Lukin
Affiliations: Department of Physics, Harvard University; Department of Physics, University of California, Berkeley
Pre-print submission: 9 Oct 2018
Slides by: Sreeraj Rajendran
Date: Nov 28, 2018

Motivation
Classical machine learning successes: self-driving vehicles, object classification, speech recognition.
Machine learning based on neural networks has recently provided significant advances for many practical applications.
Much progress has also been made in the realization of quantum information processors.
Goal: explore how quantum computers can be utilized to further enhance the performance of machine learning systems.
Machine learning + quantum many-body physics = exciting!

Many-Body Interaction Problems
Physical problems pertaining to the properties of microscopic systems made of a large number of interacting particles.
Repeated interactions between particles create quantum correlations, i.e. entanglement.
The wave function of the system is a complicated object holding a large amount of information: for N spins, 2^N amplitudes.
This makes exact or analytical calculations impractical or even impossible, and theoretical analysis difficult.

Motivation (continued)
Classical machine learning to study quantum physics: machine learning techniques can be used for efficient description of different input states of complex quantum systems.
Quantum machines to enhance machine learning for practical problems, e.g. quantum phase recognition (QPR).
Convolutional neural networks provide a successful machine learning structure for classification tasks.
Classical precedent, neural nets to model ground states and Hamiltonians: G. Carleo and M. Troyer, Science 355, 602 (2017). An artificial neural network encodes a many-body quantum state of N spins: a restricted Boltzmann machine architecture with N visible neurons (yellow dots in the figure) and M hidden neurons (gray dots). For each value of the many-body spin configuration S, the network computes the value of the wave function Ψ(S).
Quantum circuits for machine learning: Farhi and Neven.
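
A minimal sketch of that restricted Boltzmann machine ansatz (with hypothetical real-valued parameters for brevity; Carleo and Troyer use complex ones): the hidden units can be summed out analytically, leaving a closed-form amplitude for any spin configuration.

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 6, 4                              # visible spins, hidden units (hypothetical sizes)
    a = rng.normal(scale=0.1, size=N)        # visible biases
    b = rng.normal(scale=0.1, size=M)        # hidden biases
    W = rng.normal(scale=0.1, size=(M, N))   # hidden-visible couplings

    def rbm_amplitude(sigma):
        """Unnormalized amplitude Psi(S) for a spin configuration sigma in {-1, +1}^N,
        with the hidden units traced out: exp(a.sigma) * prod_i 2*cosh(b_i + W_i . sigma)."""
        return np.exp(a @ sigma) * np.prod(2 * np.cosh(b + W @ sigma))

    sigma = rng.choice([-1, 1], size=N)      # a random spin configuration
    print(rbm_amplitude(sigma))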

Classical Convolutional Neural Network (CNN)
A CNN transforms an input image into a series of feature maps (blue rectangles in the figure), and finally into an output probability distribution (purple bars).

Classical vs Quantum Deep Learning

QCNN Circuit Model
Layer structure inspired by the classical CNN.
Input: an unknown quantum state.
Convolution: apply unitaries U_i on neighboring qubits.
Pooling: reduce the system size by measuring a fraction of the qubits; unitary controlled rotations are applied to the remaining qubits based on the measurement outcomes.
Repeat convolution + pooling until the system size is small.
Fully connected layer: a unitary F on all remaining qubits.
Non-local measurement: the result is obtained by measuring a fixed number of output qubits.
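
To make the size reduction concrete, here is a small bookkeeping sketch of which qubits are measured and which survive each convolution + pooling layer (illustrative only; which qubit of each block survives pooling is a convention, here the middle one):

    def qcnn_layer_plan(num_qubits, n, depth):
        """Track which qubits survive each conv + pool layer of a QCNN.
        Pooling measures n - 1 out of every contiguous block of n qubits."""
        qubits = list(range(num_qubits))
        for layer in range(1, depth + 1):
            # convolution acts unitarily on neighboring qubits: qubit count unchanged
            kept, measured = [], []
            # leftover qubits that don't fill a complete block are ignored in this sketch
            for i in range(0, len(qubits) - len(qubits) % n, n):
                block = qubits[i:i + n]
                kept.append(block[n // 2])                    # keep the middle qubit
                measured += [q for q in block if q != block[n // 2]]
            print(f"layer {layer}: measure {measured}, keep {kept}")
            qubits = kept
        return qubits                                         # qubits entering the FC layer

    qcnn_layer_plan(num_qubits=9, n=3, depth=2)               # 9 -> 3 -> 1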

QCNN Circuit Model (continued)
Classified training data: {(|ψ_α⟩, y_α) : α = 1, ..., M}, where the |ψ_α⟩ are input states and y_α = 0 or 1 are the corresponding binary classification outputs.
The QCNN is characterized by its unitaries {U_i, V_j, F}, parameterized by coefficients c_μ, and produces an expected final measurement value for each input state.
Initialize the coefficients randomly.
Optimize by gradient descent on the mean-squared error, where η is the learning rate (see the formula below).
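
In symbols (reconstructed from the paper's description, since the slide's formula did not survive transcription): the loss is the mean-squared error over the M training pairs, and each coefficient is updated along its gradient.

    \mathrm{MSE}(\{c_\mu\}) = \frac{1}{2M} \sum_{\alpha=1}^{M}
        \left( y_\alpha - f_{\{c_\mu\}}\!\left(|\psi_\alpha\rangle\right) \right)^2,
    \qquad
    c_\mu \leftarrow c_\mu - \eta \, \frac{\partial\, \mathrm{MSE}}{\partial c_\mu}

where f_{{c_μ}}(|ψ_α⟩) is the expected value of the final measurement on input |ψ_α⟩.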

Quantum Phase Recognition (QPR)
Objective: given a quantum many-body system in an unknown ground state |ψ⟩, does |ψ⟩ belong to a particular quantum phase P?
Test setup: detect a 1D symmetry-protected topological phase P.
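
For reference, the model used for this test in the pre-print is, to the best of my reading, a 1D cluster-model Hamiltonian on N spins (X and Z are Pauli operators), whose ground states realize a Z2 × Z2 symmetry-protected topological phase:

    H = -J \sum_{i=1}^{N-2} Z_i X_{i+1} Z_{i+2}
        \;-\; h_1 \sum_{i=1}^{N} X_i
        \;-\; h_2 \sum_{i=1}^{N-1} X_i X_{i+1}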

Data Preparation
Input: ground states of the Hamiltonian.
Training data consists of ground states along the line h2 = 0, where the Hamiltonian is exactly solvable by the Jordan-Wigner transformation.
40 evenly spaced points with h1 in the range [0, 2] (gray dots in the phase-diagram figure), so M = 40.
The |ψ_α⟩ are the input states; y_α = 0 or 1 are the corresponding binary classification outputs.
Figure: the blue and red diamonds are phase-boundary points extracted from infinite-size density-matrix renormalization group (DMRG) numerical simulations, while the colors represent the expectation value of the QCNN output.
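
A tiny sketch of how such a training grid could be indexed (the actual training inputs are the corresponding ground states, which this snippet does not construct). Labeling each point assumes the critical point on the h2 = 0 line sits at h1 = J = 1, which is my reading of the exactly solvable case, not a value quoted on the slide:

    import numpy as np

    J, M = 1.0, 40
    h1_grid = np.linspace(0.0, 2.0, M)    # 40 evenly spaced points, h2 = 0 throughout
    labels = (h1_grid < J).astype(int)    # assumed: y = 1 inside the SPT phase (h1 < J), else 0
    training_points = list(zip(h1_grid, labels))
    print(training_points[:3], "...", training_points[-1])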

QCNN Test Model Architecture
Choose n = 3 (as the Hamiltonian involves 3-qubit terms).
Convolution layer C1: (n+1)-qubit unitaries beginning on every nth qubit, so U1 is a 4-qubit unitary.
Then n further convolution layers C2, C3, C4 of n-qubit unitaries U2, U3, U4 respectively.
Pooling layer: measures n − 1 out of every contiguous block of n qubits; here 2 qubits are measured per block, and a unitary V determined by the measurement results is applied to the 1 remaining qubit.
Repeat conv + pool layers to the required depth d (e.g. d = 1).
Fully connected layer: an arbitrary unitary F on the remaining N/n^d qubits (counted out below).
Classification output: the measurement outcome of the middle qubit (or any other fixed qubit).
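
With a hypothetical input size (not one quoted on the slides), the qubit count shrinks geometrically, since each conv + pool layer keeps 1 of every n = 3 qubits:

    N = 81: \quad 81 \to 27 \to 9 \to 3 \to 1,
    \qquad \text{so}\quad \frac{N}{n^d} = \frac{81}{3^4} = 1
    \text{ qubit reaches the fully connected layer at depth } d = 4.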

Learning Procedure
All parameters c_μ for the unitaries {U_i, V_j, F} are initialized to random values between 0 and 2π.
In every iteration of gradient descent, compute the derivative of the mean-squared error function (defined above) to first order with respect to each coefficient c_μ, and update c_μ accordingly.
Repeat the gradient descent procedure until the error function changes by only about 10^−5 between successive iterations.
Hyperparameter tuning: the learning rate is adapted using the bold driver technique, sketched below.
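
A minimal numerical sketch of this loop, assuming a generic black-box loss mse(c) (a toy quadratic stand-in here, not the quantum circuit itself). Gradients are estimated to first order by finite differences, and the learning rate follows the bold driver heuristic: grow it slightly after a successful step, cut it after a failed one (the 1.05 and 0.5 factors are common choices, not taken from the paper).

    import numpy as np

    rng = np.random.default_rng(1)

    def mse(c):
        # toy stand-in for the circuit's mean-squared error over the training set
        return np.sum((c - 1.0) ** 2)

    c = rng.uniform(0.0, 2 * np.pi, size=10)   # random initialization of the c_mu
    eta, eps, tol = 0.1, 1e-4, 1e-5            # learning rate, FD step, stopping threshold
    prev = mse(c)
    while True:
        # first-order (forward) finite-difference gradient w.r.t. each c_mu
        grad = np.array([(mse(c + eps * np.eye(len(c))[m]) - prev) / eps
                         for m in range(len(c))])
        cur = mse(c - eta * grad)
        if cur >= prev:                        # bold driver: failed step, halve the rate
            eta *= 0.5
            continue
        c, eta = c - eta * grad, eta * 1.05    # successful step, grow the rate
        if prev - cur < tol:                   # stop once the error barely changes
            break
        prev = cur
    print("converged:", np.round(c, 3))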

QCNN Circuit for QPR
Blue line segments: controlled-phase gates.
Blue three-qubit gates: flip the middle (target) qubit when the left and right (control) qubits both have X = −1.
Orange two-qubit gates: flip the phase of the target qubit when the X measurement of the control yields −1.
The fully connected layer applies controlled-phase gates followed by an X_i projection, effectively measuring a nonlocal observable.

QCNN Output for Varying Depths
Layer depths d = 1, 2, 3, and 4 shown in violet, yellow, red, and blue respectively.
Adding more layers gives sharper contrast between the phases.

Why QCNN Works
A QCNN circuit with multiple pooling layers can be viewed as a combination of MERA (the multiscale entanglement renormalization ansatz), an important variational approach for many-body wave functions, and nested QEC (quantum error correction), a mechanism to detect and correct quantum errors without collapsing the entire wave function. This combination makes the QCNN a powerful architecture for classifying input quantum states.

Why QCNN Works (continued)
QCNN circuits combine the multiscale entanglement renormalization ansatz and quantum error correction to mimic renormalization-group flow, making them capable of recognizing different quantum phases and the associated phase transitions.

Why QCNN Works (alternative explanation)
Although a QCNN performs non-unitary measurements in its pooling layers, similar to QEC circuits it is equivalent to a setup where all measurements are postponed to the very end and the pooling layers are replaced by unitary controlled gates acting on both the measured and unmeasured qubits. Viewed this way, the QCNN is a unitary evolution of the N input qubits followed by measurements, and the classification output is equivalent to measuring a nonlocal observable.
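
A tiny numpy check of the deferred-measurement principle behind this equivalence (a toy 2-qubit example, not the QCNN circuit itself): measuring a control qubit and applying a classically conditioned X to a target is statistically identical to applying a CNOT and measuring everything at the end.

    import numpy as np

    rng = np.random.default_rng(2)

    # random 2-qubit state; psi[x, y] is the amplitude of basis state |x y>
    psi = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    psi /= np.linalg.norm(psi)

    # Scheme A (pooling-style): measure qubit 0, apply X to qubit 1 iff the outcome is 1
    p_A = np.zeros(2)
    for x in (0, 1):
        post = psi[x][::-1] if x == 1 else psi[x]   # classically controlled X flip
        p_A += np.abs(post) ** 2                    # outcome weight p(x) is inside |post|^2

    # Scheme B (deferred): apply CNOT |x, y> -> |x, y XOR x>, then measure at the end
    phi = np.stack([psi[0], psi[1][::-1]])
    p_B = (np.abs(phi) ** 2).sum(axis=0)            # marginal distribution of qubit 1

    assert np.allclose(p_A, p_B)
    print("qubit-1 statistics agree:", p_A, p_B)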

Conclusion
A concrete circuit model for quantum classification.
An application to quantum phase recognition.
A theoretical explanation for why it works: QCNN = MERA + QEC = RG flow.

Discussion
Extend QCNN to phase detection in two and higher dimensions.
Could potentially help identify nonlocal order parameters for less-understood phases, such as quantum spin liquids or anyonic chains, which have sharp expectation values up to the phase boundaries.
Design of fault-tolerant quantum storage and quantum operations: specific noise models can be used to generate training data to optimize the design of QEC codes and operations on their code spaces.
The structural similarity of the QCNN to its classical counterpart motivates the adoption of more efficient gradient computation schemes.

Thank You