Design of Self-Organizing Learning Array for Intelligent Machines. Janusz Starzyk, School of Electrical Engineering and Computer Science. Heidi Meeting.


Slide 1: Design of Self-Organizing Learning Array for Intelligent Machines
Janusz Starzyk, School of Electrical Engineering and Computer Science. Heidi Meeting, June.
Motivation: how a new understanding of the brain will lead to the creation of truly intelligent machines (from J. Hawkins, "On Intelligence").

Slide 2: Elements of Intelligence
 Abstract thinking and action planning
 Capacity to learn and memorize useful things
 Spatio-temporal memories
 Ability to talk and communicate
 Intuition and creativity
 Consciousness
 Emotions and understanding others
 Surviving in a complex environment and adaptation
 Perception
 Motor skills in relation to sensing and anticipation

Slide 3: Problems of Classical AI
 Lack of robustness and generalization
 No real-time processing
 Central processing of information by a single processor
 No natural interface to the environment
 No self-organization
 Need to write software

Slide 4: Intelligent Behavior
 Emergent from interaction with the environment
 Based on a large number of sparsely connected neurons
 Asynchronous
 Self-timed
 Interacts with the environment through the sensory-motor system
 Value driven
 Adaptive

Slide 5: Design Principles of Intelligent Systems (from Rolf Pfeifer, "Understanding Intelligence")
Design principles:
 synthetic methodology
 time perspectives
 emergence
 diversity/compliance
 frame-of-reference
Agent design:
 complete-agent principle
 cheap design
 ecological balance
 redundancy principle
 parallel, loosely coupled processes
 sensory-motor coordination
 value principle

Slide 6: The Principle of "Cheap Design"
 intelligent agents are "cheap"
 exploitation of the ecological niche
 economical (but redundant)
 exploitation of specific physical properties of interaction with the real world

Slide 7: Principle of "Ecological Balance"
 balance / task distribution between:
 morphology
 neuronal processing (nervous system)
 materials
 environment
 balance in complexity:
 given the task environment
 match in complexity of the sensory, motor, and neural systems

Slide 8: The Redundancy Principle
 redundancy is a prerequisite for adaptive behavior
 partial overlap of functionality in different subsystems
 sensory systems: different physical processes with "information overlap"

Slide 9: Generation of Sensory Stimulation through Interaction with the Environment
 multiple modalities
 constraints from morphology and materials
 generation of correlations through physical processes
 basis for cross-modal associations

Slide 10: The Principle of Sensory-Motor Coordination
 self-structuring of sensory data through interaction with the environment
 a physical process, not a "computational" one
 prerequisite for learning
(Diagram after Holk Cruse: no central control; only local neuronal communication; global communication through the environment; neuronal connections)

Slide 11: The Principle of Parallel, Loosely Coupled Processes
 Intelligent behavior is emergent from agent-environment interaction
 Large number of parallel, loosely coupled processes
 Asynchronous
 Coordinated through the agent's
 –sensory-motor system
 –neural system
 –interaction with the environment

Slide 12: The "Value Principle"
 about motivation
 evaluation of actions
 frame-of-reference: explicit and implicit values
 recent theorizing is information-theoretic (the organism tries to maintain a "flow of information")

Slide 13: Neuron Structure and Self-Organizing Principles
[Images: the human brain at birth, at 6 years old, and at 14 years old]

Slide 14: Neuron Structure and Self-Organizing Principles (Cont'd)

Slide 15: Brain Organization
[Diagram of cortical areas: Broca's area, pars opercularis, motor cortex, somatosensory cortex, sensory associative cortex, primary auditory cortex, Wernicke's area, visual associative cortex, visual cortex]

EE  V. Mountcastle argues that all regions of the brain perform the same algorithm V. Mountcastle  SOLAR combines many groups of neurons (minicolumns) in a pseudorandom way  Each microcolumn has the same structure  Thus it performs the same computational algorithm satisfying Mountcastle’s principle  VB Mountcastle (2003). Introduction [to a special issue of Cerebral Cortex on columns]. Cerebral Cortex, 13, 2-4. Minicolumn Organization and Self Organizing Learning Arrays

Slide 17: Cortical Minicolumns
"The basic unit of cortical operation is the minicolumn … It contains of the order of neurons, except in the primate striate cortex, where the number is more than doubled. The minicolumn measures of the order of µm in transverse diameter, separated from adjacent minicolumns by vertical, cell-sparse zones … The minicolumn is produced by the iterative division of a small number of progenitor cells in the neuroepithelium." (Mountcastle, p. 2)
[Image: stain of cortex in planum temporale]

Slide 18: Grouping of Minicolumns
Groupings of minicolumns seem to form the physiologically observed functional columns. The best-known example is orientation columns in V1. They are significantly bigger than minicolumns, typically around mm, and have neurons.
Mountcastle's summation: "Cortical columns are formed by the binding together of many minicolumns by common input and short range horizontal connections. … The number of minicolumns per column varies … between 50 and 80. Long range intracortical projections link columns with similar functional properties." (p. 3)

Slide 19: Sparse Connectivity
The brain is sparsely connected (unlike most neural nets). A neuron in cortex may have on the order of 100,000 synapses. There are more than neurons in the brain. Fractional connectivity is very low: 0.001%.
Implications:
 Connections are biologically expensive, since they take up space, use energy, and are hard to wire up correctly.
 Therefore, connections are valuable.
 The pattern of connection is under tight control.
 Short local connections are cheaper than long ones.
Our approximation makes extensive use of local connections for computation.
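The 0.001% figure can be checked with one line of arithmetic. The total neuron count below is an assumption (the slide leaves the number out), chosen as a commonly quoted order of magnitude; with it, the ratio matches the slide:

```python
# Back-of-envelope check of the fractional-connectivity figure on this slide.
synapses_per_neuron = 1e5   # "on the order of 100,000 synapses"
neurons = 1e10              # assumed total neuron count (order of magnitude)

# A neuron could in principle connect to any of the `neurons` others,
# but it actually reaches only `synapses_per_neuron` of them.
fraction = synapses_per_neuron / neurons
print(f"fractional connectivity = {fraction:.0e} = {fraction * 100:.3f}%")
```

With these assumed numbers the fraction comes out to 1e-05, i.e. the 0.001% quoted on the slide.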

Slide 20: Introducing the Self-Organizing Learning Array (SOLAR)
 SOLAR is a regular array of identical processing cells connected to programmable routing channels.
 Each cell in the array has the ability to self-organize by adapting its functionality in response to information contained in its input signals.
 Cells choose their input signals from the adjacent routing channels and send their output signals to the routing channels.
 Processing cells can be structured to implement minicolumns.
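As a rough illustration of the input-selection idea above (not the actual SOLAR cell algorithm), a cell can sample candidate lines from a routing channel and commit to the most informative one. The `Cell` class and the variance criterion are hypothetical stand-ins for "information contained in its input signals":

```python
import random

class Cell:
    """Hypothetical SOLAR-style cell picking inputs from a routing channel."""
    def __init__(self, channel, n_candidates=3):
        # Randomly pick candidate input lines from the adjacent channel.
        self.channel = channel
        self.candidates = random.sample(range(len(channel)), n_candidates)
        self.chosen = None

    def self_organize(self):
        def variance(line):
            xs = self.channel[line]
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)
        # Adapt by committing to the candidate line with the most variable
        # signal (a crude proxy for information content).
        self.chosen = max(self.candidates, key=variance)
        return self.chosen

random.seed(0)
# Routing channel: 4 lines, each with a short signal history.
channel = [[0, 0, 0, 0], [1, -1, 1, -1], [0, 1, 0, 1], [2, 2, 2, 2]]
cell = Cell(channel)
print("cell selected input line", cell.self_organize())
```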

Slide 21: SOLAR Hardware Architecture

Slide 22: SOLAR Routing Scheme

Slide 23: SOLAR PCB (Xilinx Virtex XCV1000)

Slide 24: System SOLAR

Slide 25: Wiring in SOLAR
Initial wiring and final wiring selection for the credit card approval problem.

Slide 26: SOLAR Classification Results

Slide 27: Associative SOLAR

Slide 28: Associations Made in SOLAR

Slide 29: Defining a Simple Brain
[Diagram: sensory inputs feed sensors, reactive associations link sensors to actuators, actuators drive motor outputs]

Slide 30: Simple Brain Properties
 Interacts with the environment through sensors and actuators
 Uses distributed processing in sparsely connected neurons
 Uses spatio-temporal associative learning
 Uses feedback for input prediction and for screening input information for novelty
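The feedback-based novelty screening in the last bullet can be sketched as follows. The predictor, threshold, and signal are illustrative assumptions, not the SOLAR mechanism itself:

```python
def screen_for_novelty(inputs, predict, threshold=0.5):
    """Pass on only the inputs that deviate from the top-down prediction."""
    novel = []
    prev = 0.0
    for x in inputs:
        expected = predict(prev)        # feedback: prediction of the next input
        if abs(x - expected) > threshold:
            novel.append(x)             # unexpected input -> worth learning from
        prev = x
    return novel

# Toy predictor: assume the signal repeats its previous value.
# The first sample is flagged as novel (startup transient from prev = 0.0).
signal = [1.0, 1.0, 1.0, 3.0, 3.0, 1.0]
print(screen_for_novelty(signal, predict=lambda prev: prev))  # [1.0, 3.0, 1.0]
```

Only the transitions survive the screen; steady, predicted inputs are inhibited, which is the point of the feedback path on this slide.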

Slide 31: Brain Structure with Value System
[Diagram: sensory inputs, sensors, value system with reinforcement signal, anticipated response, action planning, actuators, motor outputs]

Slide 32: Brain Structure with Value System Properties
 Interacts with the environment through sensors and actuators
 Uses distributed processing in sparsely connected neurons
 Uses spatio-temporal associative learning
 Uses feedback for input prediction and for screening input information for novelty
 Develops an internal value system to evaluate its state in the environment using reinforcement learning
 Plans output actions for each input to maximize the internal state value in relation to the environment
 Uses redundant structures of sparsely connected processing elements

Slide 33: Value System in Reinforcement Learning Control
[Diagram: value system, states, controller, reinforcement signal, environment, optimization]
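The control loop in this diagram can be sketched schematically. The environment dynamics, value function, and step count below are toy assumptions; only the loop structure (controller acts, environment responds, reinforcement feeds back) follows the slide:

```python
def run_loop(steps=20):
    """Toy agent-environment loop: value-driven controller steering a state."""
    state = 5.0                        # environment state (toy: distance to goal)
    value = lambda s: -abs(s)          # value system: states near 0 are better
    reinforcement = value(state)
    for _ in range(steps):
        # Controller (optimization): pick the action whose next state
        # the value system scores higher.
        action = -1.0 if value(state - 1.0) > value(state + 1.0) else 1.0
        state += action                # environment update
        reinforcement = value(state)   # reinforcement signal fed back
    return state, reinforcement

final_state, final_reward = run_loop()
print(final_state, final_reward)
```

Starting far from the goal, the controller walks the state toward 0 and then hovers near it; the reinforcement signal correspondingly rises toward its maximum.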

Slide 34: Artificial Brain Organization
[Diagram: sensory inputs, sensors, value system with reinforcement signal, anticipated response, action planning, understanding, decision making, actuators, motor outputs]

EE  Learning should be restricted to unexpected situation or reward  Anticipated response should have expected value  Novelty detection should also apply to the value system  Need mechanism to improve and compare the value Artificial Brain Organization

Slide 36: Artificial Brain Organization
[Diagram: sensory inputs, sensors, value system with reinforcement signal, anticipated response with expectation, novelty detection, inhibition, and comparison, action planning, understanding, improvement detection, actuators, motor outputs]

EE  Anticipated response block should learn the response that improves the value  A RL optimization mechanism may be used to learn the optimum response for a given value system and sensory input  Random perturbation should be applied to the optimum response to explore possible states and learn their the value  New situation will result in new value and WTA will chose the winner Artificial Brain Organization

Slide 38: Artificial Brain Organization
[Diagram: sensory inputs, motor outputs, positive reinforcement, negative reinforcement]

Slide 39: Artificial Brain Selective Processing
 Sensory inputs are represented by more and more abstract features in the sensory input hierarchy
 A possible implementation is to use winner-take-all or Hebbian circuits to select the best match
 Random wiring may be used to preselect sensory features
 Uses feedback for input prediction and for screening input information for novelty
 Uses redundant structures of sparsely connected processing elements

Slide 40: Microcolumn Organization
[Diagram: sensory inputs, motor outputs, positive and negative reinforcement, WTA, superneuron]

EE  Each microcolumn contains a number of superneurons  Within each microcolumn, superneurons compete on different levels of signal propagation  Superneuron contains a predetermined configuration of  Sensory (blue)  Motor and (yellow)  Reinforcement neurons (positive green and negative red)  Superneurons internally organize to perform operations of  Input selection and recognition  Association of sensory inputs  Feedback based anticipation  Learning inhibition  Associative value learning, and  Value based motor activation Superneuron Organization

EE  Sensory neurons are primarily responsible for providing information about environment  They receive inputs from sensors or other sensory neurons on lower level  They interact with motor neurons to represent action and state of environment  They provide an input to reinforcement neurons  They help to activate motor neurons  Motor neurons are primarily responsible for activation of motor functions  They are activated by reinforcement neurons with the help from sensory neurons  They activate actuators or provide an input to lower level motor neurons  They provide an input to sensory neurons  Reinforcement neurons are primarily responsible for building the internal value system  They receive inputs from reinforcement learning sensors or other reinforcement neurons on lower level  They receive inputs from sensory neurons  They provide an input to motor neurons  They help to activate sensory neurons Superneuron Organization

Slide 43: Sensory Neuron Interactions
[Diagram: WTA circuit with neurons S1, S2, S3, S2h, S1h]

Slide 44: Sensory Neuron Functions
Sensory neurons are responsible for:
 representation of inputs from the environment
 interactions with motor functions
 anticipation of inputs and screening for novelty
 selection of useful information
 identifying invariances
 making spatio-temporal associations
[Diagram: WTA]

Slide 45: Sensory Neuron Functions (Cont'd)
Sensory neurons:
 represent inputs from the environment by
 responding to activation from the lower level (summation)
 selecting the most likely scenario (WTA)
 interact with motor functions by
 responding to activation from motor outputs (summation)
 anticipate inputs and screen for novelty by
 correlation with sensory inputs from the higher level
 inhibition of outputs to the higher level
 select useful information by
 correlating their outputs with reinforcement neurons
 identify invariances by
 making spatio-temporal associations between neighboring sensory neurons
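The summation-then-WTA selection described in the first bullet group can be sketched as follows. The weights and input vector are illustrative assumptions, not taken from SOLAR:

```python
def wta(activations):
    """Winner-take-all: inhibit all but the strongest responder."""
    winner = max(range(len(activations)), key=lambda i: activations[i])
    return [a if i == winner else 0.0 for i, a in enumerate(activations)]

# Three sensory neurons responding to the same lower-level input vector.
inputs = [0.2, 0.9, 0.4]
weights = [[1.0, 0.1, 0.0],   # neuron S1
           [0.0, 1.0, 0.2],   # neuron S2
           [0.5, 0.5, 0.5]]   # neuron S3
# Summation step: each neuron sums its weighted inputs.
activations = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
# WTA step: only the most strongly activated neuron propagates its output.
print(wta(activations))
```

Here S2 is tuned to the dominant input component, so it wins and the other two neurons are silenced, which is the "selecting the most likely scenario" step on this slide.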


Slide 47: From Apparent Mess

Slide 48: To Clear Mind Organization
[Diagram: WTA]