EE141 1 Design of Self-Organizing Learning Array for Intelligent Machines. Janusz Starzyk, School of Electrical Engineering and Computer Science. Heidi Meeting, June. Motivation: how a new understanding of the brain will lead to the creation of truly intelligent machines (from J. Hawkins, "On Intelligence").

EE141 2 Elements of Intelligence
 Abstract thinking and action planning
 Capacity to learn and memorize useful things
 Spatio-temporal memories
 Ability to talk and communicate
 Intuition and creativity
 Consciousness
 Emotions and understanding others
 Surviving in a complex environment and adaptation
 Perception
 Motor skills in relation to sensing and anticipation

EE141 3 Problems of Classical AI
 Lack of robustness and generalization
 No real-time processing
 Central processing of information by a single processor
 No natural interface to the environment
 No self-organization
 Need to write software

EE141 4 Intelligent Behavior
 Emergent from interaction with the environment
 Based on a large number of sparsely connected neurons
 Asynchronous
 Self-timed
 Interacts with the environment through the sensory-motor system
 Value driven
 Adaptive

EE141 5 Design Principles of Intelligent Systems
Design principles:
 synthetic methodology
 time perspectives
 emergence
 diversity/compliance
 frame of reference
Agent design principles:
 complete agent principle
 cheap design
 ecological balance
 redundancy principle
 parallel, loosely coupled processes
 sensory-motor coordination
 value principle
(from Rolf Pfeifer, "Understanding Intelligence")

EE141 6 The principle of "cheap design"
 intelligent agents are "cheap"
 exploitation of the ecological niche
 economical (but redundant)
 exploitation of specific physical properties of interaction with the real world

EE141 7 Principle of "ecological balance"
 balance / task distribution between
 morphology
 neuronal processing (nervous system)
 materials
 environment
 balance in complexity
 given the task environment, match the complexity of the sensory, motor, and neural systems

EE141 8 The redundancy principle
 redundancy is a prerequisite for adaptive behavior
 partial overlap of functionality in different subsystems
 sensory systems: different physical processes with "information overlap"

EE141 9 Generation of sensory stimulation through interaction with the environment
 multiple modalities
 constraints from morphology and materials
 generation of correlations through physical processes
 basis for cross-modal associations

EE141 10 The principle of sensory-motor coordination
 self-structuring of sensory data through interaction with the environment
 a physical process, not a "computational" one
 prerequisite for learning
(Diagram after Holk Cruse: no central control; only local neuronal communication; global communication through the environment.)

EE141 11 The principle of parallel, loosely coupled processes
 Intelligent behavior is emergent from agent-environment interaction
 Large number of parallel, loosely coupled processes
 Asynchronous
 Coordinated through the agent's
–sensory-motor system
–neural system
–interaction with the environment
(A toy sketch of this coordination style follows.)
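A minimal Python sketch of loose coupling through the environment, under the assumption that each process only reads and writes a shared environment state and never calls another process directly (all names here are illustrative, not from SOLAR):

    import random

    # Two processes coordinate only through the shared environment:
    # no central controller, no direct calls between processes.
    environment = {"light": 0.0, "motor": 0.0}

    def sense_process(env):
        env["light"] = random.random()                      # update sensed state

    def motor_process(env):
        env["motor"] = 1.0 if env["light"] > 0.5 else 0.0   # react via the state

    # Loosely coupled and effectively asynchronous: any execution order works.
    for _ in range(10):
        for process in random.sample([sense_process, motor_process], 2):
            process(environment)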

EE141 12 Neuron Structure and Self-Organizing Principles
[Figure: the human brain at birth, at 6 years old, and at 14 years old]

EE141 13 Neuron Structure and Self-Organizing Principles (Cont'd)

EE141 14 Brain Organization
[Figure: cortical map labeling Broca's area, pars opercularis, motor cortex, somatosensory cortex, sensory associative cortex, primary auditory cortex, Wernicke's area, visual associative cortex, and visual cortex]
While we learn its functions, can we emulate its operation?

EE  V. Mountcastle argues that all regions of the brain perform the same algorithm V. Mountcastle  SOLAR combines many groups of neurons (minicolumns) in a pseudorandom way  Each microcolumn has the same structure  Thus it performs the same computational algorithm satisfying Mountcastle’s principle  VB Mountcastle (2003). Introduction [to a special issue of Cerebral Cortex on columns]. Cerebral Cortex, 13, 2-4. Minicolumn Organization and Self Organizing Learning Arrays

EE141 16 Cortical Minicolumns
"The basic unit of cortical operation is the minicolumn … It contains of the order of 80-100 neurons, except in the primate striate cortex, where the number is more than doubled. The minicolumn measures of the order of 40-50 µm in transverse diameter, separated from adjacent minicolumns by vertical, cell-sparse zones … The minicolumn is produced by the iterative division of a small number of progenitor cells in the neuroepithelium." (Mountcastle, p. 2)
[Figure: stain of cortex in planum temporale]

EE141 17 Grouping of Minicolumns
Groupings of minicolumns seem to form the physiologically observed functional columns. The best-known example is the orientation columns in V1. They are significantly bigger than minicolumns, typically a fraction of a millimeter across, and contain several thousand neurons (see the arithmetic below). Mountcastle's summation: "Cortical columns are formed by the binding together of many minicolumns by common input and short range horizontal connections. … The number of minicolumns per column varies … between 50 and 80. Long range intracortical projections link columns with similar functional properties." (p. 3)
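The quoted figures give a rough column size; the arithmetic below assumes the 80-100 neurons per minicolumn from the previous slide's quote:

    # Rough size of a functional column from the quoted figures:
    # 50-80 minicolumns per column, ~80-100 neurons per minicolumn.
    minicolumns_per_column = (50, 80)
    neurons_per_minicolumn = (80, 100)
    low = minicolumns_per_column[0] * neurons_per_minicolumn[0]   # 4,000
    high = minicolumns_per_column[1] * neurons_per_minicolumn[1]  # 8,000
    print(f"neurons per functional column: {low:,} to {high:,}")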

EE141 18 Sparse Connectivity
The brain is sparsely connected (unlike most neural nets). A neuron in cortex may have on the order of 100,000 synapses, and there are more than 10^10 neurons in the brain, so fractional connectivity is very low: about 0.001%. Implications:
 Connections are biologically expensive, since they take up space, use energy, and are hard to wire up correctly.
 Therefore, connections are valuable.
 The pattern of connection is under tight control.
 Short local connections are cheaper than long ones.
Our approximation makes extensive use of local connections for computation (see the sketch below).
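A quick check of the connectivity arithmetic, plus a toy locally biased wiring rule; `local_targets` and its `spread` parameter are hypothetical illustrations, not part of SOLAR:

    import random

    # Fractional connectivity = synapses per neuron / total neurons.
    synapses_per_neuron = 1e5     # ~100,000 synapses per cortical neuron
    neurons_in_brain = 1e10       # more than 10^10 neurons
    print(f"{synapses_per_neuron / neurons_in_brain:.3%}")   # -> 0.001%

    def local_targets(i, n_neurons, fan_out, spread=50):
        """Sample mostly short-range targets for neuron i, since short
        local connections are cheaper than long ones."""
        return [min(max(i + int(random.gauss(0, spread)), 0), n_neurons - 1)
                for _ in range(fan_out)]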

EE141 19 Introducing the Self-Organizing Learning Array (SOLAR)
 SOLAR is a regular array of identical processing cells connected to programmable routing channels.
 Each cell in the array has the ability to self-organize by adapting its functionality in response to information contained in its input signals.
 Cells choose their input signals from the adjacent routing channels and send their output signals to the routing channels.
 Processing cells can be structured to implement minicolumns. (A toy sketch of such a cell follows.)
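A minimal sketch of the cell concept, assuming a simple thresholded-sum cell; actual SOLAR cells select among a richer set of functions with entropy-based learning, so `Cell` and its update rule are illustrative only:

    import random

    class Cell:
        """One identical SOLAR-style processing cell (illustrative only)."""

        def __init__(self, channel_ids, fan_in=2):
            # Pseudorandom wiring: choose inputs from adjacent routing channels.
            self.inputs = random.sample(channel_ids, fan_in)
            self.threshold = 0.0

        def self_organize(self, signals):
            # Adapt functionality to input statistics: track the running mean
            # of the input sum and use it as a firing threshold.
            s = sum(signals[i] for i in self.inputs)
            self.threshold += 0.1 * (s - self.threshold)

        def output(self, signals):
            # The output would be written back to a routing channel.
            return float(sum(signals[i] for i in self.inputs) > self.threshold)

    # A regular array of identical cells sharing one set of routing channels.
    channels = list(range(16))
    array = [Cell(channels) for _ in range(64)]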

EE141 20 SOLAR Hardware Architecture

EE141 21 SOLAR Routing Scheme

EE141 22 SOLAR PCB (Xilinx Virtex XCV1000)

EE141 23 System SOLAR

EE141 24 Wiring in SOLAR: initial wiring and final wiring selection for the credit card approval problem

EE141 25 SOLAR Classification Results

EE141 26 Associative SOLAR

EE141 27 Associations made in SOLAR

EE141 28 Brain Structure with Value System Properties
 Interacts with the environment through sensors and actuators
 Uses distributed processing in sparsely connected neurons organized in minicolumns
 Uses spatio-temporal associative learning
 Uses feedback for input prediction and screens input information for novelty
 Develops an internal value system to evaluate its state in the environment using reinforcement learning
 Plans output actions for each input to maximize the internal state value in relation to the environment
 Uses redundant structures of sparsely connected processing elements

EE141 29 Possible Minicolumn Organization
[Block diagram: sensors and actuators connected through sensory inputs and motor outputs; blocks for the value system, anticipated response, reinforcement signal, action planning, understanding, improvement detection, expectation, novelty detection, inhibition, and comparison]

EE  Learning should be restricted to unexpected situation or reward  Anticipated response should have expected value  Novelty detection should also apply to the value system  Need mechanism to improve and compare the value  Anticipated response block should learn the response that improves the value  A RL optimization mechanism may be used to learn the optimum response for a given value system and sensory input  Random perturbation should be applied to the optimum response to explore possible states and learn their the value  New situation will result in new value and WTA will chose the winner Postulates for Minicolumn Organization

EE141 31 Minicolumn Selective Processing
 Sensory inputs are represented by more and more abstract features as they ascend the sensory input hierarchy
 A possible implementation is to use winner-take-all (WTA) or Hebbian circuits to select the best match (see the sketch below)
 A "sameness principle" for observed objects to detect and learn feature invariances
 Time overlap of feature-neuron activation to store temporal sequences
 Random wiring may be used to preselect sensory features
 Uses feedback for input prediction and screening input information for novelty
 Uses redundant structures of sparsely connected processing elements
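One generic way to realize the WTA-plus-Hebbian selection above; this is standard competitive learning, not the specific SOLAR circuit, and `W` (one weight row per feature neuron) is an assumed representation:

    import numpy as np

    def wta_hebbian_step(W, x, lr=0.05):
        """Winner-take-all picks the feature neuron best matching input x,
        then a Hebbian-style update moves the winner's weights toward x."""
        winner = int(np.argmax(W @ x))        # WTA: only the best match learns
        W[winner] += lr * (x - W[winner])     # move the winner toward the input
        return winner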

EE141 32 Minicolumn Organization
[Diagram: sensory, value, and motor superneurons with sensory inputs, motor outputs, and positive/negative reinforcement signals]

EE  Sensory neurons are primarily responsible for providing information about environment  They receive inputs from sensors or other sensory neurons on lower level  They interact with motor neurons to represent action and state of environment  They provide an input to reinforcement neurons  They help to activate motor neurons  Motor neurons are primarily responsible for activation of motor functions  They are activated by reinforcement neurons with the help from sensory neurons  They activate actuators or provide an input to lower level motor neurons  They provide an input to sensory neurons  Reinforcement neurons are primarily responsible for building the internal value system  They receive inputs from reinforcement learning sensors or other reinforcement neurons on lower level  They receive inputs from sensory neurons  They provide an input to motor neurons  They help to activate sensory neurons Minicolumn Organization

EE141 34 Sensory Neurons Functions
Sensory neurons
 Represent inputs from the environment by
 Responding to activation from the lower level (summation)
 Selecting the most likely scenario (WTA)
 Interact with motor functions by
 Responding to activation from motor outputs (summation)
 Anticipate inputs and screen for novelty by (see the sketch below)
 Correlation with sensory inputs from the higher level
 Inhibition of outputs to the higher level
 Select useful information by
 Correlating their outputs with reinforcement neurons
 Identify invariances by
 Making spatio-temporal associations between neighboring sensory neurons
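A minimal sketch of the summation-plus-inhibition path: bottom-up activation is summed, a top-down prediction from the higher level inhibits the anticipated part, and only novel activity propagates upward; `sensory_output`, `predicted`, and the threshold are illustrative assumptions:

    def sensory_output(bottom_up, predicted, threshold=0.1):
        """Sum lower-level inputs, let the higher-level prediction inhibit
        the anticipated part, and pass only unexpected activity upward."""
        activation = sum(bottom_up)                  # summation of lower-level inputs
        novelty = max(activation - predicted, 0.0)   # anticipated input is inhibited
        return novelty if novelty > threshold else 0.0

    # Fully predicted input produces no output; unexpected input propagates.
    assert sensory_output([0.4, 0.6], predicted=1.0) == 0.0
    assert sensory_output([0.4, 0.6], predicted=0.2) > 0.0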