Presentation transcript:

Diagram: the overall architecture. Internal states and sensory data feed a Learn/Act signal into Action Development and Action Selection. A repertoire of actions Action[1], Action[2], ..., Action[N] is maintained alongside Action[i], the action currently being developed; Action Selection and Action Combining determine which actions act on the Environment.

Diagram: the architecture instantiated for this task. Internal states and sensory data feed the Learn/Act signal into Action Development and Action Selection. The acquired actions (Seek, Follow, Acquire) always run while the action being developed is trained; Action Selection gates each action with a 0/1 signal, and the selected actions output heading and speed adjustments to the Environment.
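A rough sketch of how the gated architecture above could be wired up. The slide only shows 0/1 selection signals and heading/speed outputs; the summing rule and all names below are assumptions, not taken from the slides.

```python
def combine_actions(actions, gates, sensory_data, internal_state):
    """Gated action combination (sketch): each acquired action (e.g. Seek, Follow,
    Acquire) maps sensory data and internal state to (heading_adj, speed_adj).
    Action Selection supplies a 0/1 gate per action; summing the gated outputs
    is an assumption about how Action Combining works."""
    heading_adj, speed_adj = 0.0, 0.0
    for action, gate in zip(actions, gates):
        if gate:                                  # 0/1 selection signal
            h, s = action(sensory_data, internal_state)
            heading_adj += h
            speed_adj += s
    return heading_adj, speed_adj

# Hypothetical usage: seek and follow stand in for the acquired actions.
seek = lambda sensors, state: (0.2, 0.1)
follow = lambda sensors, state: (-0.1, 0.3)
print(combine_actions([seek, follow], gates=[1, 0],
                      sensory_data=None, internal_state=None))
```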

Diagram: sensor geometry. The front of the robot is oriented at 90 degrees, with the right acoustic sensor at 45 degrees and the left acoustic sensor at 135 degrees. D1 and D0 are the distances from the sound source to the right and left sensors, respectively, and signal strength follows an inverse-square law:
Signal strength at the right sensor = sound source intensity / D1^2
Signal strength at the left sensor = sound source intensity / D0^2
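A minimal sketch of this inverse-square sensing model. The mounting radius, coordinates, and function names are hypothetical; only the sensor angles and the intensity/D^2 rule come from the slide.

```python
import math

def sensor_signal(source_intensity, source_xy, sensor_xy):
    """Inverse-square signal strength at a sensor: intensity / D^2."""
    dx = source_xy[0] - sensor_xy[0]
    dy = source_xy[1] - sensor_xy[1]
    return source_intensity / (dx * dx + dy * dy)

def sensor_position(robot_xy, robot_heading_deg, sensor_offset_deg, mount_radius):
    """Position of a sensor mounted at an angular offset from the robot heading
    (mount_radius is a hypothetical mounting distance from the robot centre)."""
    angle = math.radians(robot_heading_deg + sensor_offset_deg)
    return (robot_xy[0] + mount_radius * math.cos(angle),
            robot_xy[1] + mount_radius * math.sin(angle))

# Robot facing 90 degrees: right sensor at 45 degrees, left sensor at 135 degrees,
# as in the slide.
robot = (0.0, 0.0)
right = sensor_position(robot, 90.0, -45.0, 0.1)
left = sensor_position(robot, 90.0, +45.0, 0.1)
source = (1.0, 2.0)
print(sensor_signal(10.0, source, right), sensor_signal(10.0, source, left))
```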

Diagram: the follower's neural network. The inputs from the left and right acoustic sensors pass through the input-to-hidden-layer weights, a transfer function at the hidden layer, and the hidden-to-output-layer weights, producing two outputs: a heading adjustment and a speed adjustment.
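A minimal sketch of such a network. The hidden-layer size, the tanh transfer function, and the absence of bias terms are assumptions; the slide only fixes the two acoustic inputs and the two outputs.

```python
import numpy as np

class FollowerNet:
    """Feed-forward sketch: (left signal, right signal) -> hidden layer ->
    (heading adjustment, speed adjustment)."""

    def __init__(self, n_hidden=4, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(scale=0.5, size=(n_hidden, 2))   # input-to-hidden weights
        self.w_out = rng.normal(scale=0.5, size=(2, n_hidden))  # hidden-to-output weights

    def forward(self, left_signal, right_signal):
        x = np.array([left_signal, right_signal])
        h = np.tanh(self.w_in @ x)            # transfer function at the hidden layer
        heading_adj, speed_adj = self.w_out @ h
        return heading_adj, speed_adj

net = FollowerNet()
print(net.forward(0.8, 0.3))  # untrained weights; training would tune both weight layers
```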

Line formation. The lead robot (id = 0) heads a line of n = 12 robots and is manually controlled. The follower robots (id = 1 to n-1) are each controlled by a feed-forward neural network, and each follows the robot whose id is one less than its own (ids 0, 1, 2, 3, ..., 11).

Modified tree formation. The lead robot (id = 0) is manually controlled. The follower robots (id = 1 to n-1) are each controlled by a feed-forward neural network, and each follows the robot whose id is its own divided by 2 (the diagram shows ids 1 through 7).
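The two follow-target rules can be summarised in a short sketch. Using integer division for the tree rule is an assumption; it is what yields a binary-tree layout over the ids shown on the slide.

```python
def line_target(robot_id):
    """Line formation: each follower tracks the robot with id one less than its own."""
    return robot_id - 1

def tree_target(robot_id):
    """Modified tree formation: each follower tracks the robot whose id is its own
    divided by 2 (integer division assumed: 1 -> 0, 2 and 3 -> 1, 4 and 5 -> 2, ...)."""
    return robot_id // 2

n = 12
print([line_target(i) for i in range(1, n)])   # [0, 1, 2, ..., 10]
print([tree_target(i) for i in range(1, 8)])   # [0, 1, 1, 2, 2, 3, 3]
```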