Neural Key Exchange
Presented by: Jessica Lowell
10 December 2009
CS 6750

What is a neural network?
Simple processing elements which can exhibit complex global behavior through their connections and parameters
Input, output, and hidden nodes
[Figure: example network diagram. Source: GameDev.net]

Neural networks in cryptography
First used for DES cryptanalysis (Dourlens, 1995)
Decryption (Shihab, 2006)
Pseudo-random number generation (Karras & Zorkadis, 2003)
Neural key exchange
The last of these is what we are looking at!

Why use neural networks for key exchange?
Does not require number theory
Low complexity (linear in the size of the network)
We will come back to this later.

Tree-parity machines
A type of multilayer feedforward network
One output, K hidden neurons, K*N binary inputs
Each hidden neuron outputs the sign of the sum of its inputs multiplied by their weights
The network output is binary: the product of the hidden neurons' outputs
[Figure: tree-parity machine diagram. Source: Wikimedia Commons]
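
To make the architecture concrete, here is a minimal NumPy sketch of a tree-parity machine, including the Hebbian update used later in the protocol. The class name and the parameter defaults (K=3, N=100, L=3) are illustrative assumptions, not something given on the slides.

```python
import numpy as np

class TreeParityMachine:
    """Minimal tree-parity machine sketch: K hidden units, N inputs each,
    integer weights bounded in {-L, ..., +L}."""

    def __init__(self, K=3, N=100, L=3, rng=None):
        self.K, self.N, self.L = K, N, L
        self.rng = rng or np.random.default_rng()
        self.w = self.rng.integers(-L, L + 1, size=(K, N))  # random initial weights

    def output(self, x):
        # x: K x N matrix of +/-1 inputs, one row per hidden unit.
        # Each hidden unit outputs the sign of its local field w_i . x_i.
        self.sigma = np.sign(np.sum(self.w * x, axis=1))
        self.sigma[self.sigma == 0] = -1           # break ties by convention
        self.tau = int(np.prod(self.sigma))        # output: product of hidden outputs
        return self.tau

    def hebbian_update(self, x, tau_other):
        # Hebbian rule: update only when both parties agreed, and only the
        # hidden units that agreed with the network output.
        if self.tau != tau_other:
            return
        for i in range(self.K):
            if self.sigma[i] == self.tau:
                self.w[i] += self.sigma[i] * x[i]
        np.clip(self.w, -self.L, self.L, out=self.w)  # keep weights bounded
```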

Synchronization
The basis of neural key exchange is synchronization of the weights of two tree-parity machines.
Similar to the synchronization of chaotic oscillators in chaos communications

Kanter-Kinzel-Kanter: three main ingredients
Hidden units: knowledge of the output does not uniquely determine the internal representation, so an observer cannot tell which weight vector was updated
Tree-parity machine
Bounded weights
What does this mean? An observer cannot recover the initial weight vectors from knowledge of the time-dependent synchronized keys.

Kanter-Kinzel-Kanter: the protocol
1. Initialize random weight values for each party's tree-parity machine.
2. Do until synchronization is achieved:
   2.1. Generate a random input vector X
   2.2. Compute the hidden neuron values
   2.3. Compute the output neuron value
   2.4. Compare the output values of the two tree-parity machines:
        If the outputs differ, go to 2.1
        If the outputs are the same, apply a learning rule (e.g. Hebbian, anti-Hebbian, random walk) to the weights
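
A minimal simulation of this loop, reusing the hypothetical TreeParityMachine sketch above. Note one simulation shortcut: real parties cannot compare weights directly and would instead declare synchronization after a long streak of matching outputs.

```python
import numpy as np

alice, bob = TreeParityMachine(), TreeParityMachine()
rng = np.random.default_rng()

rounds = 0
# Direct weight comparison is for the simulation only (see note above).
while not np.array_equal(alice.w, bob.w):
    x = rng.choice([-1, 1], size=(alice.K, alice.N))  # public random input (step 2.1)
    tau_a, tau_b = alice.output(x), bob.output(x)     # steps 2.2-2.3; outputs sent in the clear
    if tau_a == tau_b:                                # step 2.4: learn only on agreement
        alice.hebbian_update(x, tau_b)
        bob.hebbian_update(x, tau_a)
    rounds += 1

# After synchronization the weight matrices are identical and secret;
# in practice a hash/KDF of them would serve as the session key.
key_material = alice.w.tobytes()
```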

Why this is really intriguing
Not based on number theory
Could potentially give rise to faster key exchange
Algorithms based on number theory are potentially vulnerable to having their operations inverted by quantum computers; neural algorithms are possibly more secure

Can it be brute-forced?
An attacker would have to test all possible keys: (2L+1)^(K*N) possibilities
For a reasonable value of N, this is impossible with today's computing power.
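
As a worked instance of that count, here is the key-space size for the commonly cited parameters K=3, N=100, L=3 (assumed values; the slide does not fix them):

```python
# Each of the K*N weights takes one of 2L+1 values, giving (2L+1)^(K*N) keys.
K, N, L = 3, 100, 3
keyspace = (2 * L + 1) ** (K * N)       # 7**300
print(f"{keyspace:.2e} possible keys")  # ~ 3.4e+253
```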

Can an attacker fight fire with fire?
Can an attacker learn with their own tree-parity machine?
Kanter, Kinzel, and Kanter observed empirically that an attacker synchronizes less quickly than Alice and Bob
The attacker is less likely to make a coordinated move with either party than the two parties are to make a coordinated move with each other (Klimov et al., 2002)

So what's the catch?
Klimov, Mityagin, and Shamir found three unusual attacks to which neural key exchange is vulnerable:
1. Geometric attack
2. Genetic attack
3. Probabilistic analysis

Vulnerability: geometric attack
Based on the geometric interpretation of the action of a perceptron
The procedure:
If output A != output B, the attacker does not update C
If output A = output B and output A = output C, the attacker updates using the learning rule
Otherwise, the attacker uses a geometry-based correction to update (see the sketch below)
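
A sketch of one attacker step under this procedure, again assuming the hypothetical TreeParityMachine from earlier. The "geometry-based correction" is realized here as flipping the hidden unit with the smallest absolute local field, i.e. the unit most likely to be wrong.

```python
import numpy as np

def geometric_step(attacker, x, tau_a, tau_b):
    tau_c = attacker.output(x)
    if tau_a != tau_b:
        return                                 # parties did not update; attacker idles
    if tau_c == tau_a:
        attacker.hebbian_update(x, tau_a)      # outputs agree: learn normally
        return
    # Outputs disagree: flip the hidden unit whose local field is closest
    # to zero, then apply the learning rule as if the output had agreed.
    fields = np.abs(np.sum(attacker.w * x, axis=1))
    i = int(np.argmin(fields))
    attacker.sigma[i] = -attacker.sigma[i]
    attacker.tau = tau_a
    attacker.hebbian_update(x, tau_a)
```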

Vulnerability: genetic attack
A biologically inspired attack for a biologically inspired cryptosystem
Simulates a population of tree-parity machines trained with the same inputs as those of the two parties
"At each stage...networks whose outputs mimic those of the two parties breed and multiply, while unsuccessful networks die."
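
A deliberately simplified sketch of this idea, assuming the TreeParityMachine sketch from earlier. The published attack also branches over the internal representations consistent with each observed output; the population size and cloning scheme here are illustrative assumptions.

```python
import copy

def genetic_attack_round(population, x, tau_a, tau_b, max_size=50):
    if tau_a != tau_b:
        return population                        # the parties did not update
    # Networks whose output mimics the parties survive and are trained;
    # unsuccessful networks "die".
    survivors = [t for t in population if t.output(x) == tau_a]
    for t in survivors:
        t.hebbian_update(x, tau_a)
    # Survivors "breed and multiply": clone them back up to the population cap.
    offspring = []
    while survivors and len(survivors) + len(offspring) < max_size:
        offspring.append(copy.deepcopy(survivors[len(offspring) % len(survivors)]))
    return survivors + offspring
```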

Vulnerability: probabilistic analysis
It is easier to predict the position of a point in a bounded random walk after several moves than to guess its original position
The attacker does not know which perceptrons are updated in each round (the moves are unknown)
The attack uses dynamic programming over a probability distribution for each weight to calculate the probabilities of particular outputs
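
The random-walk observation is easy to demonstrate in a small simulation (illustrative only): walkers started at every possible position, driven by the same known moves and clipped at the bounds, coalesce, so the current position becomes predictable even though the starting point never was.

```python
import numpy as np

rng = np.random.default_rng(0)
L, steps = 3, 200

positions = np.arange(-L, L + 1)        # one walker per possible starting value
for _ in range(steps):
    move = rng.choice([-1, 1])          # the same (known) move applied to all walkers
    positions = np.clip(positions + move, -L, L)  # bounded, like the TPM weights

print(positions)  # typically all identical after this many moves: the bound
                  # erases the starting position, leaving only the moves
```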

Fixes
Authentication (Volkmer & Schaumburg, 2004)
Addition of a feedback mechanism (Prabakaran et al., 2008)
One party sending erroneous output bits that the other party can predict and remove (Allam & Abbas, 2009)

Other improvements on the original protocol
"Public channel cryptography by synchronization of neural networks and chaotic maps" (Mislovaty et al., 2003)
"Neural cryptography with feedback" (Ruttor et al., 2004)

Conclusion Neural key exchange is promising against conventional attacks and quantum computing, but needs work against some unconventional attacks.