Establishing the Equivalence between Recurrent Neural Networks and Turing Machines. Ritesh Kumar Sinha(02d05005) Kumar Gaurav Bijay(02005013)


“No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either” – Marvin Minsky.

Introduction • Plan: establish the equivalence between recurrent neural networks and Turing machines — history of neurons; definitions; constructive proof of the equivalence • Approach: conceptual understanding

Motivation • Understanding the learning patterns of the human brain – the concept of a neuron • Is the Turing machine the ultimate computing machine? • How powerful are neural networks – DFA-level, PDA-level, Turing-machine-level, or still higher?

Brain • The most complicated human organ: with it we sense, perceive, feel, think, believe, remember, utter • The body's information-processing centre • Neurons: its information-processing units

MCP Neuron • McCulloch and Pitts gave a model of a neuron in 1943 • But it is only a highly simplified model of a real neuron: positive weights (activators), negative weights (inhibitors)

Artificial Neural Networks (ANN) • Interconnected units: model neurons • Modifiable weights: model synapses

Types of ANN • Feed-forward networks: signals travel in one direction only; good at computing static functions • Neuro-fuzzy networks: combine the advantages of fuzzy reasoning and neural networks; good at modelling real-life data • Recurrent networks

Recurrent Neural Networks • Activation function: f(x) = x if x > 0, 0 otherwise
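This is the threshold-linear (rectifier-style) function; a one-line Python sketch (Python is used here for illustration and is not part of the original slides):

```python
def activation(x):
    """Threshold-linear activation from the slide: f(x) = x if x > 0, else 0."""
    return x if x > 0 else 0

print(activation(2.5), activation(-1))  # 2.5 0
```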

Turing Machine and Turing Complete Languages • Turing machine: as powerful as any other computer • Turing complete language: a programming language that can compute any function a Turing machine can compute

A language L • Four basic operations:
No operation: V ← V
Increment: V ← V + 1
Decrement: V ← max(0, V − 1)
Conditional branch: IF V ≠ 0 GOTO j
(V is any variable taking non-negative integer values, and j stands for a line number)
• L is Turing complete
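To make L's semantics concrete, here is a minimal Python interpreter for the four operations (a sketch; the tuple encoding and function name are illustrative, not from the slides):

```python
def run_L(program, variables):
    """Interpret a program in the language L. Each instruction is a tuple:
      ("nop", v)     V <- V
      ("inc", v)     V <- V + 1
      ("dec", v)     V <- max(0, V - 1)
      ("bnz", v, j)  IF V != 0 GOTO line j (lines are 1-indexed)
    Execution halts when the program counter runs off the end."""
    env = dict(variables)
    pc = 1
    while 1 <= pc <= len(program):
        instr = program[pc - 1]
        op, v = instr[0], instr[1]
        if op == "inc":
            env[v] = env[v] + 1
        elif op == "dec":
            env[v] = max(0, env[v] - 1)   # decrement clamps at zero
        elif op == "bnz" and env[v] != 0:
            pc = instr[2]
            continue
        pc += 1
    return env

# The clamped decrement never drives a variable below zero:
print(run_L([("dec", "V"), ("dec", "V")], {"V": 1}))  # {'V': 0}
```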

Turing Machine can encode Neural Network • A Turing machine can compute any computable function, by definition • The activation function, in our case, is a simple non-linear function • A Turing machine can therefore simulate our recurrent neural net • Intuitively: can't we just write code to simulate our neural net?

An example • A function that computes Y = X:
L1: X ← X − 1
Y ← Y + 1
if X ≠ 0 goto L1
(this version wrongly gives Y = 1 when X = 0; the next slide fixes it)

An example • A function that computes Y = X, now correct when X = 0:
if X ≠ 0 goto L1
X ← X + 1
if X ≠ 0 goto L2
L1: X ← X − 1
Y ← Y + 1
if X ≠ 0 goto L1
L2: Y ← Y
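The corrected program can be transcribed almost line-for-line into Python (the function name is mine; `max(0, X - 1)` mirrors L's clamped decrement):

```python
def copy_value(X):
    """Y = X, following the corrected slide: the initial branch routes
    around the loop body when X = 0, avoiding the spurious Y = 1."""
    Y = 0
    if X == 0:              # 'if X != 0 goto L1' falls through when X = 0,
        return Y            # and the goto L2 lines skip the loop entirely
    while True:             # L1:
        X = max(0, X - 1)   # X <- X - 1
        Y = Y + 1           # Y <- Y + 1
        if X == 0:          # if X != 0 goto L1
            return Y

print(copy_value(5), copy_value(0))  # 5 0
```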

L is Turing Complete : Conceptual Understanding • Idea: don't think of C++, think of 8085 ;) • Subtraction: Y = X1 − X2? Yes: decrement X1 and increment Y until X1 = 0, then decrement Y and X2 together until X2 = 0 • Multiplication, division? Yes, think of the algorithms you studied in your hardware class :)
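In the same spirit, here is a Python sketch of truncated subtraction built from L's primitives alone (variable and function names are illustrative; since L's variables are non-negative, the result is Y = max(0, X1 − X2)):

```python
def truncated_subtract(X1, X2):
    """Y = max(0, X1 - X2) using only increments and clamped decrements,
    the way a program in L would compute it."""
    Y = 0
    while X1 != 0:            # copy loop: Y <- X1
        X1 = max(0, X1 - 1)
        Y = Y + 1
    while X2 != 0:            # subtract loop: one Y-decrement per X2-decrement
        X2 = max(0, X2 - 1)
        Y = max(0, Y - 1)     # clamped, so Y never goes negative
    return Y

print(truncated_subtract(7, 3), truncated_subtract(2, 5))  # 4 0
```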

L is Turing Complete : Conceptual Understanding • If: if X = 0 goto L? Yes:
if X ≠ 0 goto L2
Z ← Z + 1 // Z is a dummy variable, initially 0
if Z ≠ 0 goto L
L2: ...

Constructing a Perceptron Network for Language L • For each variable V: an entry node N_V • For each program row i: an instruction node N_i • For a conditional branch instruction on row i: two transition nodes N_i′ and N_i″

Constructions • Variable V: node N_V • No operation: edge N_i → N_i+1

Constructions Continued • Increment operation: edges N_i → N_i+1 and N_i → N_V (positive weight into N_V) • Decrement operation: edges N_i → N_i+1 and N_i → N_V (negative weight into N_V)

Constructions Continued • Conditional branch operation: edges N_i → N_i′, N_i → N_i″, N_V → N_i″, N_i″ → N_i+1, N_i′ → N_j, N_i″ → N_j

Definitions • Legal state: all transition nodes (N_i′ and N_i″) have output 0, and exactly one instruction node N_i has output 1 • Final state: all instruction nodes have output 0

Properties • If the output y_i of N_i is 1, then the program counter is on line i • The value of variable V equals the output of N_V • Changes in the network state are driven by the non-zero nodes
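Given these properties and the activation f(x) = x for x > 0, 0 otherwise, one synchronous update of such a perceptron network can be sketched generically in Python (the dense weight layout is illustrative, not the exact construction from the slides):

```python
def step(state, weights, bias):
    """One synchronous network update: node i's new output is
    f(sum_j weights[i][j] * state[j] + bias[i]), with f(x) = max(0, x)."""
    return [
        max(0, sum(w * s for w, s in zip(row, state)) + b)
        for row, b in zip(weights, bias)
    ]

# A variable node with a unit self-loop simply holds its value between
# updates, matching the property that V equals the output of N_V:
print(step([5], [[1]], [0]))  # [5]
```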

Proof, by cases on row i: if row i is V ← V (no operation); if row i is V ← V − 1 (decrement); for V ← V + 1 (increment), the behaviour is similar

Proof Continued • If row i is: IF V ≠ 0 GOTO j

Proof Continued Thus a legal state leads to another legal state.

What about other activation functions? • A feed-forward net with a binary threshold activation is a DFA • A recurrent neural network with: 1) a sigmoid activation is Turing universal 2) the saturated-linear activation is Turing universal

Conclusion • A Turing machine can encode a recurrent neural network • A recurrent neural net can simulate a Turing complete language • So, Turing Machines are Recurrent Neural Networks!

Project • Implementation of pushdown automata (PDA) using neural networks

References [1] McCulloch, W. S., and Pitts, W.: A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5, 1943. [2] Hyotyniemi, H.: Turing machines are recurrent neural networks. In: Symposium of Artificial Networks, Vaasa, Finland, Aug 19–23, 1996. [3] Davis, Martin D., and Weyuker, Elaine J.: Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science. New York: Academic Press, 1983.

References Continued [4] J. E. Hopcroft and J. D. Ullman: Introduction to Automata Theory, Languages and Computation. Addison-Wesley. [5] Arbib, M.: Turing machines, finite automata and neural nets. Journal of the ACM (JACM), Vol. 8, Issue 4, Oct. 1961.