Neural Network Decoders for Quantum Error Correcting Codes

Stefan Krastanov, Liang Jiang

Goal: Given a syndrome, deduce the error (efficiently).

Error Correcting Codes (ECC) 101

Classical Binary ECC
Use N unreliable physical bits (susceptible to flips) to represent K reliable logical bits by imposing N-K parity constraints. Use the syndrome (i.e., check which constraints are violated) to deduce what error happened (and reverse it). Caveat: multiple errors can lead to the same syndrome, so finding the most probable one is the best you can do.
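To make the syndrome picture concrete, here is a minimal sketch (my own illustration, not from the slides) using the parity-check matrix of the [7,4] Hamming code; the specific matrix and error pattern are just illustrative choices.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code:
# N = 7 physical bits, K = 4 logical bits, N - K = 3 constraints (rows of H).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

e = np.zeros(7, dtype=int)
e[4] = 1                  # a single bit flip on bit 4 (0-indexed)

s = H @ e % 2             # the syndrome: which constraints are violated
print(s)                  # nonzero entries flag the broken constraints
```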

Error Model for a Qubit
Slightly more complicated than a bit… X, Z, and Y errors can happen. Y is equivalent to both X and Z happening. We need two bits per qubit to track errors.
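As a minimal sketch of this bookkeeping (illustrative, not from the slides), a Pauli error on N qubits can be stored as one bit vector for X flips and one for Z flips, with Y counted as both:

```python
import numpy as np

N = 5                               # number of physical qubits (illustrative)
x_part = np.zeros(N, dtype=int)     # X component of the error
z_part = np.zeros(N, dtype=int)     # Z component of the error

x_part[1] = 1                       # an X error on qubit 1
z_part[3] = 1                       # a Z error on qubit 3
x_part[4], z_part[4] = 1, 1         # a Y error on qubit 4 = X and Z together

# The full error is the length-2N binary vector (x_part, z_part).
error = np.concatenate([x_part, z_part])
print(error)
```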

Decoding (Correcting) the Code
e to s is easy.
s to e is impossible (it is a one-to-many relation).
s to the most probable e is difficult (infeasible / NP-hard).
s to a fairly probable e can be "easy" if there is additional structure.

Example of Additional Structure - Toric Code

Neural Networks 101
A different approach to ECC with Neural Networks was suggested by:
R35.00002: "A neural decoder for topological codes" - Torlai, Melko
Other interesting uses of Neural Networks for quantum many-body theory:
"Solving the Quantum Many-Body Problem with Artificial Neural Networks" - Carleo, Troyer (arXiv:1606.02318)

Neuron

A Nicer Neuron

Layered Neural Network

Neural Networks are “trained”: You provide training input/output pairs, and train the network to “learn” the input/output mapping.

Applying Neural Nets to Decoding

Reversal of a one-way function
Generate millions of errors e (for a toric code of distance L). Compute the corresponding syndromes s. Train the network to learn the reverse s → e mapping.
[Diagram: a fully connected network; input layer - syndrome (L² neurons per X or Z error type); output layer - error (2L² neurons per X or Z error type).]
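A minimal sketch of this data-generation step, assuming a binary parity-check matrix H for one error type of the toric code is already available (its construction is omitted here) and assuming independent errors at physical error rate p; all names are illustrative.

```python
import numpy as np

def make_training_set(H, n_samples, p=0.10, seed=0):
    """Generate (syndrome, error) pairs for supervised training.

    H         -- binary parity-check matrix, shape (n_checks, n_qubits)
    n_samples -- how many random errors to draw
    p         -- physical error rate per qubit
    """
    rng = np.random.default_rng(seed)
    errors = (rng.random((n_samples, H.shape[1])) < p).astype(int)
    syndromes = errors @ H.T % 2       # which checks each error violates
    return syndromes, errors           # network inputs, network targets

# Usage (H would be the distance-L toric code check matrix):
# X_train, y_train = make_training_set(H, n_samples=1_000_000)
```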

The Output of the Neural Network
Neural Networks are continuous (even differentiable) maps, so the output is a real number between 0 and 1, while the training data is binary (an error either happened or not). How do we decide whether the Neural Network is predicting an error or not?

Neural Networks do not provide answers; rather, they provide probability distributions over possible answers! At least in the fairly common architecture we are using. Using buzzwords is fun, but for the rest of the presentation you can think of a neural network as the easiest-to-implement black box for deducing a conditional probability distribution given some data.

Sampling the Neural Network
Measure the syndrome s.
From the neural network, get a probability distribution P_s(e) over possible errors.
Sample an e from P_s.
You can do even better: check whether s_new = s + He is trivial.
If not, continue sampling until it is (or you give up).
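A minimal sketch of this sampling loop (illustrative, not the published implementation), assuming `model` is the trained network and `H` is the parity-check matrix relating errors to syndromes:

```python
import numpy as np

def decode_by_sampling(model, H, s, max_tries=100, seed=None):
    """Sample candidate errors from the network's output distribution
    until one reproduces the measured syndrome s (or we give up)."""
    rng = np.random.default_rng(seed)
    # The network maps the syndrome to per-qubit error probabilities P_s(e).
    probs = model.predict(s[None, :], verbose=0)[0]
    for _ in range(max_tries):
        e = (rng.random(probs.shape) < probs).astype(int)   # sample an error
        if np.array_equal(H @ e % 2, s):    # residual syndrome s + He is trivial
            return e
    return None                             # gave up
```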

Application: Distance 5 Toric Code at 10% physical error rate
It beats Minimum Weight Perfect Matching (the black line) in two ways:
It knows about the correlations between X and Z errors.
It gives the most probable error, not the one with "minimal free energy".

Smarter Sampling (Message Passing / Belief Propagation)

More Layers

Bigger Codes
[Plot: performance compared against MWPM for code distances 7, 9, and 11.]

Conclusions and Work-in-progress
Works amazingly well on small instances of the Toric Code.
With ConvNets or RG algorithms it should work on large Toric Codes.
Coupled to intelligent sampling and graph algorithms it should work great on big LDPC codes.
The network can be made recurrent (have a feedback loop) in order to be used beyond single-shot decoding.
stefan.krastanov@yale.edu - Work not published yet, but I would be happy to discuss and share resources.

Further Reading (and sources for some intro graphics)
Stanford CS231n by Andrej Karpathy (cs231n.github.io)
Keras software package documentation (keras.io)
"The Unreasonable Effectiveness of Recurrent Neural Networks", A. Karpathy
"Understanding LSTM Networks", Chris Olah
"An Intuitive Explanation of Convolutional Neural Networks", Ujjwal Karn
"A theory of the learnable", Valiant

Hyperparameter Search Results
One or more hidden layers of size 4x the input layer (after that we reach diminishing returns)
Hidden layer activation: tanh
Output layer activation: sigmoid
Loss: binary crossentropy (to be discussed in more detail)
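For concreteness, a minimal Keras sketch consistent with these hyperparameters; the layer sizes follow the L² / 2L² figures quoted earlier, while the code distance, optimizer, and training settings are illustrative assumptions, not results from the slides.

```python
from tensorflow import keras
from tensorflow.keras import layers

L = 5                    # code distance (illustrative)
n_in = L * L             # syndrome bits per error type (input layer)
n_out = 2 * L * L        # error bits per error type (output layer)

model = keras.Sequential([
    keras.Input(shape=(n_in,)),
    layers.Dense(4 * n_in, activation="tanh"),    # hidden layer, 4x input size
    layers.Dense(n_out, activation="sigmoid"),    # per-qubit error probabilities
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# model.fit(X_train, y_train, epochs=10, batch_size=512)   # illustrative settings
```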