EE141 1 Artificial Brain Organization Janusz Starzyk, Ohio University
[Title slide figure: brain regions labeled Broca's area, pars opercularis, motor cortex, somatosensory cortex, sensory associative cortex, primary auditory cortex, Wernicke's area, visual associative cortex, visual cortex]

EE141 2 Elements of Intelligence
 Abstract thinking and action planning
 Capacity to learn and memorize useful things
 Spatio-temporal memories
 Ability to talk and communicate
 Intuition and creativity
 Consciousness
 Emotions and understanding others
 Surviving in a complex environment and adaptation
 Perception
 Motor skills in relation to sensing and anticipation

EE141 3 Problems of Classical AI
 Lack of robustness and generalization
 No real-time processing
 Central processing of information by a single processor
 No natural interface to environment

EE141 4 Intelligent Behavior
 Emergent from interaction with environment
 Based on a large number of sparsely connected neurons
 Asynchronous
 Interacts with the environment through a sensory-motor system
 Value driven
 Adaptive

EE141 5 Simple Brain Organization
[Diagram: sensors, actuators, reactive associations, sensory inputs, motor outputs]

EE141 6 Simple Brain Properties
 Interacts with environment through sensors and actuators
 Uses distributed processing in sparsely connected neurons
 Uses spatio-temporal associative learning
 Uses feedback for input prediction and screening input information for novelty
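Below is a minimal Python sketch of the feedback-based input prediction and novelty screening listed above; the linear predictor, learning rate, and novelty threshold are illustrative assumptions, not values from the slides.

import numpy as np

def novelty_screen(prediction, observation, threshold=0.2):
    """Pass an input on for learning only if it differs enough from the top-down prediction."""
    error = np.linalg.norm(observation - prediction)
    return error > threshold, error

# Toy spatio-temporal association: predict the next input as a linear map of the current one.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))      # associative weights (hypothetical)
x_prev = rng.normal(size=4)

for t in range(5):
    x = rng.normal(size=4)                  # current sensory input
    prediction = W @ x_prev                 # feedback prediction of the current input
    novel, err = novelty_screen(prediction, x)
    if novel:                               # learn only from unpredicted (novel) inputs
        W += 0.05 * np.outer(x - prediction, x_prev)
    x_prev = x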

EE141 7 Brain Structure with Value System
[Diagram: sensors, actuators, value system, anticipated response, reinforcement signal, action planning, sensory inputs, motor outputs]

EE141 8 Brain Structure with Value System Properties
 Interacts with environment through sensors and actuators
 Uses distributed processing in sparsely connected neurons
 Uses spatio-temporal associative learning
 Uses feedback for input prediction and screening input information for novelty
 Develops an internal value system to evaluate its state in the environment using reinforcement learning
 Plans output actions for each input to maximize the internal state value in relation to the environment
 Uses redundant structures of sparsely connected processing elements

EE141 9 Value System in Reinforcement Learning Control
[Diagram: value system, states, controller, reinforcement signal, environment, optimization]
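A toy sketch of the optimization loop named on this slide, using a tabular TD(0) value update; the five-state environment, reward, and learning parameters are placeholder assumptions rather than anything specified in the presentation.

import random

# Toy environment: states 0..4, reaching state 4 produces the reinforcement signal.
def step(state, action):
    next_state = max(0, min(4, state + action))
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward

values = [0.0] * 5          # internal value system: one value per state
alpha, gamma = 0.1, 0.9     # learning rate and discount (assumed)

state = 0
for _ in range(200):
    action = random.choice([-1, 1])                  # controller: random exploration here
    next_state, reinforcement = step(state, action)  # reinforcement signal from environment
    # TD(0) update: the value system evaluates each state from the reinforcement signal
    values[state] += alpha * (reinforcement + gamma * values[next_state] - values[state])
    state = next_state if next_state != 4 else 0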

EE141 10 Artificial Brain Organization
[Diagram: sensors, actuators, value system, anticipated response, reinforcement signal, sensory inputs, motor outputs, action planning, understanding, decision making]

EE141 11 Artificial Brain Organization
 Learning should be restricted to unexpected situations or rewards
 The anticipated response should have the expected value
 Novelty detection should also apply to the value system
 A mechanism is needed to improve and compare the value
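A small sketch consistent with the first three points above: the anticipated value is updated only when the received reinforcement deviates from it by more than a tolerance, so learning stays restricted to unexpected outcomes. The tolerance and learning rate are assumptions.

def surprise_gated_update(expected, reward, rate=0.2, tolerance=0.05):
    """Update the anticipated value only when the outcome is unexpected (novel)."""
    surprise = abs(reward - expected)
    if surprise <= tolerance:       # expected outcome: no learning needed
        return expected, False
    return expected + rate * (reward - expected), True

expected_value = 0.0
for reward in [0.0, 0.0, 1.0, 1.0, 1.0, 1.0]:
    expected_value, learned = surprise_gated_update(expected_value, reward)
    print(f"reward={reward:.1f}  expected={expected_value:.3f}  learned={learned}")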

EE141 12 Artificial Brain Organization
[Diagram: sensors, actuators, value system, anticipated response, reinforcement signal, sensory inputs, motor outputs, action planning, understanding, improvement detection, expectation, novelty detection, inhibition, comparison]

EE141 13 Artificial Brain Organization
 The anticipated response block should learn the response that improves the value
 An RL optimization mechanism may be used to learn the optimum response for a given value system and sensory input
 Random perturbation of the optimum should be used to adjust the optimum response in case the value system changes
 A new situation will result in a new value, and WTA will choose the winner
 The problem is how to do this
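A sketch of the mechanism described above: keep the current optimum response, generate random perturbations of it, score each candidate with the value system, and let a winner-take-all choice keep the highest-valued candidate. The quadratic value function is a stand-in, not the presentation's value system.

import numpy as np

rng = np.random.default_rng(1)

def value(response):
    # Stand-in for the internal value system (assumed): best response is 3.0
    return -(response - 3.0) ** 2

best = 0.0                                                # current optimum response
for _ in range(50):
    candidates = best + rng.normal(scale=0.5, size=8)     # random perturbations of the optimum
    candidates = np.append(candidates, best)              # keep the current optimum as a candidate
    best = candidates[np.argmax(value(candidates))]       # WTA: highest-value candidate wins

print(f"learned response approx. {best:.2f}")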

EE141 14 Artificial Brain Organization

EE141 15 Artificial Brain Organization
[Diagram: positive reinforcement, negative reinforcement, sensory inputs, motor outputs]

EE141 16 Artificial Brain Selective Processing
 Sensory input is represented by increasingly abstract features in the sensory input hierarchy
 A possible implementation is to use winner-take-all (WTA) or Hebbian circuits to select the best match
 Random wiring may be used to preselect sensory features
 Uses feedback for input prediction and screening input information for novelty
 Uses redundant structures of sparsely connected processing elements
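A minimal sketch of the selective-processing idea above: fixed sparse random wiring preselects which inputs each unit sees, a winner-take-all step picks the best-matching unit, and a Hebbian update strengthens the winner. All sizes, sparsity, and rates are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
n_in, n_units = 16, 4
mask = rng.random((n_units, n_in)) < 0.3           # sparse random wiring preselects features
W = rng.random((n_units, n_in)) * mask             # weights only on wired connections

def wta_hebbian_step(x, W, mask, rate=0.1):
    """Winner-take-all selection of the best-matching unit, then a Hebbian update of its weights."""
    activations = W @ x
    winner = int(np.argmax(activations))           # WTA: only the best match learns
    W[winner] += rate * x * mask[winner]           # Hebbian: strengthen wired inputs that fired
    W[winner] /= np.linalg.norm(W[winner]) + 1e-9  # normalize to keep the competition stable
    return winner

for _ in range(100):
    x = (rng.random(n_in) < 0.2).astype(float)     # sparse binary input pattern
    wta_hebbian_step(x, W, mask)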

EE141 17 Artificial Brain Organization
[Diagram: WTA]

EE141 18 Microcolumn Organization
 V. Mountcastle argues that all regions of the brain perform the same algorithm
 SOLAR combines many groups of neurons (microcolumns) in a pseudorandom way
 Each microcolumn has the same structure
 Thus it performs the same computational algorithm, satisfying Mountcastle's principle
 The Mindful Brain: Cortical Organization and the Group-Selective Theory of Higher Brain Function, G. M. Edelman and V. B. Mountcastle, MIT Press, March 1982
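A sketch of combining identical microcolumn modules pseudorandomly, in the spirit of the SOLAR summary above; the WTA computation inside each column, the wiring fan-in, and the layer sizes are assumptions, since the slide gives no parameters.

import numpy as np

rng = np.random.default_rng(3)

class Microcolumn:
    """Identical structure in every column, so each performs the same algorithm (WTA here)."""
    def __init__(self, n_inputs, n_neurons=8):
        # Pseudorandom wiring: each column reads a random subset of the layer below.
        self.inputs = rng.choice(n_inputs, size=min(6, n_inputs), replace=False)
        self.W = rng.normal(scale=0.1, size=(n_neurons, len(self.inputs)))

    def forward(self, x):
        a = self.W @ x[self.inputs]
        out = np.zeros(len(a))
        out[np.argmax(a)] = 1.0        # same WTA computation in every column
        return out

# Combine many identical microcolumns into layers wired pseudorandomly to the layer below.
x = rng.random(32)                                              # raw sensory input (size assumed)
layer1 = [Microcolumn(32) for _ in range(5)]
hidden = np.concatenate([col.forward(x) for col in layer1])
layer2 = [Microcolumn(len(hidden)) for _ in range(3)]
output = np.concatenate([col.forward(hidden) for col in layer2])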

EE141 19 Microcolumn Organization
[Diagram: positive reinforcement, negative reinforcement, sensory inputs, motor outputs, WTA, superneuron]

EE141 20 Superneuron Organization
 Each microcolumn contains a number of superneurons
 Within each microcolumn, superneurons compete on different levels of signal propagation
 A superneuron contains a predetermined configuration of
  Sensory neurons (blue)
  Motor neurons (yellow), and
  Reinforcement neurons (positive green and negative red)
 Superneurons internally organize to perform operations of
  Input selection and recognition
  Association of sensory inputs
  Feedback-based anticipation
  Learning inhibition
  Associative value learning, and
  Value-based motor activation
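A data-structure sketch of the superneuron configuration listed above (sensory, motor, and reinforcement neuron groups) and of the within-microcolumn competition; group sizes and the activation score are arbitrary choices made only for illustration.

from dataclasses import dataclass, field
import random

random.seed(4)

@dataclass
class Superneuron:
    """Predetermined configuration of sensory, motor, and reinforcement neuron groups."""
    sensory: list = field(default_factory=lambda: [random.random() for _ in range(4)])
    motor: list = field(default_factory=lambda: [random.random() for _ in range(2)])
    reinforcement: list = field(default_factory=lambda: [random.random() for _ in range(2)])

    def activation(self, stimulus):
        # Toy activation score used for the within-microcolumn competition.
        return sum(w * s for w, s in zip(self.sensory, stimulus))

class Microcolumn:
    def __init__(self, n_superneurons=5):
        self.superneurons = [Superneuron() for _ in range(n_superneurons)]

    def compete(self, stimulus):
        """Superneurons within the microcolumn compete; the most activated one wins."""
        return max(self.superneurons, key=lambda sn: sn.activation(stimulus))

column = Microcolumn()
winner = column.compete([0.2, 0.9, 0.1, 0.5])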

EE141 21 Superneuron Organization
 Sensory neurons are primarily responsible for providing information about the environment
  They receive inputs from sensors or other sensory neurons on the lower level
  They interact with motor neurons to represent action and the state of the environment
  They provide an input to reinforcement neurons
  They help to activate motor neurons
 Motor neurons are primarily responsible for activation of motor functions
  They are activated by reinforcement neurons with the help of sensory neurons
  They activate actuators or provide an input to lower-level motor neurons
  They provide an input to sensory neurons
 Reinforcement neurons are primarily responsible for building the internal value system
  They receive inputs from reinforcement learning sensors or other reinforcement neurons on the lower level
  They receive inputs from sensory neurons
  They provide an input to motor neurons
  They help to activate sensory neurons
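A sketch of one signal-flow step between the three neuron types as described above: sensory neurons combine environment input with motor feedback, feed reinforcement neurons, and reinforcement neurons together with sensory activity drive motor neurons. All weights, sizes, and the tanh nonlinearity are assumptions.

import numpy as np

rng = np.random.default_rng(5)
n_s, n_r, n_m = 6, 3, 2
W_sr = rng.normal(scale=0.3, size=(n_r, n_s))   # sensory -> reinforcement
W_rm = rng.normal(scale=0.3, size=(n_m, n_r))   # reinforcement -> motor
W_sm = rng.normal(scale=0.3, size=(n_m, n_s))   # sensory helps activate motor
W_ms = rng.normal(scale=0.3, size=(n_s, n_m))   # motor feeds back to sensory

def step(sensor_input, prev_motor):
    sensory = np.tanh(sensor_input + W_ms @ prev_motor)       # environment input + motor feedback
    reinforcement = np.tanh(W_sr @ sensory)                    # value signal built from sensory input
    motor = np.tanh(W_rm @ reinforcement + W_sm @ sensory)     # reinforcement drive + sensory help
    return sensory, reinforcement, motor

motor = np.zeros(n_m)
for _ in range(3):
    sensory, reinforcement, motor = step(rng.random(n_s), motor)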

EE141 22 Sensory Neurons Interactions
[Diagram: WTA]

EE141 23 Sensory Neurons Functions
 Sensory neurons are responsible for
  Representation of inputs from environment
  Interactions with motor functions
  Anticipation of inputs and screening for novelty
  Selection of useful information
  Identifying invariances
  Making spatio-temporal associations
[Diagram: WTA]

EE141 24 Sensory Neurons Functions
Sensory neurons
 Represent inputs from the environment by
  Responding to activation from the lower level (summation)
  Selecting the most likely scenario (WTA)
 Interact with motor functions by
  Responding to activation from motor outputs (summation)
 Anticipate inputs and screen for novelty by
  Correlation with sensory inputs from the higher level
  Inhibition of outputs to the higher level
 Select useful information by
  Correlating their outputs with reinforcement neurons
 Identify invariances by
  Making spatio-temporal associations between neighboring sensory neurons
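A sketch of a single sensory-layer update following the operations listed above: summation of lower-level activation (plus any motor activity), WTA selection of the most likely scenario, and inhibition of the upward output when the top-down feedback already predicts it. The match threshold and all sizes are assumptions.

import numpy as np

rng = np.random.default_rng(6)
W_up = rng.random((5, 8))        # weights from lower-level inputs (sizes assumed)

def sensory_layer_step(lower_input, topdown_feedback, motor_activity=0.0, threshold=0.8):
    summed = W_up @ lower_input + motor_activity        # summation of lower-level and motor activation
    response = np.zeros_like(summed)
    response[np.argmax(summed)] = 1.0                   # WTA: most likely scenario wins
    # Anticipation: if the higher level already predicts this response, inhibit the output upward.
    match = float(response @ topdown_feedback) / (np.linalg.norm(topdown_feedback) + 1e-9)
    output_up = np.zeros_like(response) if match > threshold else response
    return response, output_up

resp, out = sensory_layer_step(rng.random(8), np.array([0.0, 1.0, 0.0, 0.0, 0.0]))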

EE141 25