
Slide 1: Design of Self-Organizing Learning Array for Intelligent Machines
Janusz Starzyk, School of Electrical Engineering and Computer Science
Heidi Meeting, June 3, 2005
Motivation: how a new understanding of the brain will lead to the creation of truly intelligent machines (from J. Hawkins, "On Intelligence")

Slide 2: Elements of Intelligence
• Abstract thinking and action planning
• Capacity to learn and memorize useful things
• Spatio-temporal memories
• Ability to talk and communicate
• Intuition and creativity
• Consciousness
• Emotions and understanding others
• Surviving in a complex environment and adaptation
• Perception
• Motor skills in relation to sensing and anticipation

Slide 3: Problems of Classical AI
• Lack of robustness and generalization
• No real-time processing
• Central processing of information by a single processor
• No natural interface to the environment
• No self-organization
• Need to write software

Slide 4: Intelligent Behavior
• Emergent from interaction with the environment
• Based on a large number of sparsely connected neurons
• Asynchronous
• Self-timed
• Interacts with the environment through a sensory-motor system
• Value driven
• Adaptive

Slide 5: Design principles of intelligent systems
Design principles:
• synthetic methodology
• time perspectives
• emergence
• diversity/compliance
• frame of reference
Agent design:
• complete agent principle
• cheap design
• ecological balance
• redundancy principle
• parallel, loosely coupled processes
• sensory-motor coordination
• value principle
(from Rolf Pfeifer, "Understanding Intelligence")

Slide 6: The principle of "cheap design"
• intelligent agents are "cheap"
• exploitation of the ecological niche
• economical (but redundant)
• exploitation of specific physical properties of interaction with the real world

Slide 7: Principle of "ecological balance"
• balance / task distribution between:
  - morphology
  - neuronal processing (nervous system)
  - materials
  - environment
• balance in complexity:
  - given the task environment
  - match in complexity of sensory, motor, and neural systems

Slide 8: The redundancy principle
• redundancy is a prerequisite for adaptive behavior
• partial overlap of functionality in different subsystems
• sensory systems: different physical processes with "information overlap"

Slide 9: Generation of sensory stimulation through interaction with the environment
• multiple modalities
• constraints from morphology and materials
• generation of correlations through physical processes
• basis for cross-modal associations

Slide 10: The principle of sensory-motor coordination
• self-structuring of sensory data through interaction with the environment
• a physical process, not a "computational" one
• prerequisite for learning
(Diagram after Holk Cruse: no central control; only local neuronal communication; global communication through the environment; neuronal connections)

Slide 11: The principle of parallel, loosely coupled processes
• Intelligent behavior is emergent from agent-environment interaction
• Large number of parallel, loosely coupled processes
• Asynchronous
• Coordinated through the agent's:
  - sensory-motor system
  - neural system
  - interaction with the environment

Slide 12: Neuron Structure and Self-Organizing Principles
[Figure: the human brain at birth, at 6 years old, and at 14 years old]

Slide 13: Neuron Structure and Self-Organizing Principles (Cont'd)

Slide 14: Brain Organization
[Figure: labeled cortical areas, including Broca's area, pars opercularis, motor cortex, somatosensory cortex, sensory associative cortex, primary auditory cortex, Wernicke's area, visual associative cortex, and visual cortex]
While we learn its functions, can we emulate its operation?

Slide 15: Minicolumn Organization and Self-Organizing Learning Arrays
• V. Mountcastle argues that all regions of the brain perform the same algorithm
• SOLAR combines many groups of neurons (minicolumns) in a pseudorandom way
• Each minicolumn has the same structure
• Thus it performs the same computational algorithm, satisfying Mountcastle's principle
Reference: V. B. Mountcastle (2003). Introduction [to a special issue of Cerebral Cortex on columns]. Cerebral Cortex, 13, 2-4.

Slide 16: Cortical Minicolumns
"The basic unit of cortical operation is the minicolumn ... It contains of the order of 80-100 neurons, except in the primate striate cortex, where the number is more than doubled. The minicolumn measures of the order of 40-50 µm in transverse diameter, separated from adjacent minicolumns by vertical, cell-sparse zones ... The minicolumn is produced by the iterative division of a small number of progenitor cells in the neuroepithelium." (Mountcastle, p. 2)
[Figure: stain of cortex in planum temporale]

Slide 17: Grouping of Minicolumns
Groupings of minicolumns seem to form the physiologically observed functional columns. The best-known example is the orientation columns in V1. They are significantly bigger than minicolumns, typically around 0.3-0.5 mm across, and have 4000-8000 neurons.
Mountcastle's summation: "Cortical columns are formed by the binding together of many minicolumns by common input and short range horizontal connections. ... The number of minicolumns per column varies ... between 50 and 80. Long range intracortical projections link columns with similar functional properties." (p. 3)
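As a quick consistency check: with roughly 80-100 neurons per minicolumn (slide 16) and 50-80 minicolumns per column, a column holds on the order of 50 × 80 = 4000 up to 80 × 100 = 8000 neurons, which matches the 4000-8000 range quoted above.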

Slide 18: Sparse Connectivity
The brain is sparsely connected (unlike most neural nets). A neuron in cortex may have on the order of 100,000 synapses, while there are more than 10^10 neurons in the brain, so the fractional connectivity is very low: 0.001%.
Implications:
• Connections are expensive biologically, since they take up space, use energy, and are hard to wire up correctly.
• Therefore, connections are valuable.
• The pattern of connection is under tight control.
• Short local connections are cheaper than long ones.
Our approximation makes extensive use of local connections for computation.
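As a quick check of the 0.001% figure, here is a minimal back-of-the-envelope sketch; the neuron and synapse counts are just the order-of-magnitude values quoted on the slide, not measured data:

```python
# Back-of-the-envelope estimate of cortical fractional connectivity,
# using the order-of-magnitude numbers quoted on the slide.
neurons = 1e10              # more than 10^10 neurons in the brain
synapses_per_neuron = 1e5   # a cortical neuron may have ~100,000 synapses

# A neuron could in principle connect to any other neuron,
# but it actually connects to only synapses_per_neuron of them.
fractional_connectivity = synapses_per_neuron / neurons

print(f"fractional connectivity = {fractional_connectivity:.0e}")        # 1e-05
print(f"as a percentage         = {fractional_connectivity * 100:.3f}%")  # 0.001%
```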

Slide 19: Introducing the Self-Organizing Learning Array (SOLAR)
• SOLAR is a regular array of identical processing cells connected to programmable routing channels.
• Each cell in the array has the ability to self-organize by adapting its functionality in response to information contained in its input signals (a simplified sketch of one such cell follows this slide).
• Cells choose their input signals from the adjacent routing channels and send their output signals to the routing channels.
• Processing cells can be structured to implement minicolumns.
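The following is a minimal, illustrative sketch of such a self-organizing cell, not the actual SOLAR hardware or its learning rule: the cell draws a few candidate signals from its routing channel and keeps the one that best separates the training classes under a simple threshold score. The class name, the separation score, and the median threshold are assumptions made for illustration only.

```python
import numpy as np

class SolarCellSketch:
    """Illustrative self-organizing cell (not the actual SOLAR circuit)."""

    def __init__(self, n_candidates=4, rng=None):
        self.rng = rng if rng is not None else np.random.default_rng()
        self.n_candidates = n_candidates
        self.input_idx = None    # index of the chosen routing-channel signal
        self.threshold = None

    def self_organize(self, channel_signals, labels):
        """Pick the candidate input that best separates the binary labels.

        channel_signals : (n_samples, n_signals) signals available on the
        adjacent routing channels; labels : (n_samples,) 0/1 class labels.
        """
        channel_signals = np.asarray(channel_signals, dtype=float)
        labels = np.asarray(labels, dtype=float)
        n_signals = channel_signals.shape[1]
        candidates = self.rng.choice(n_signals, size=self.n_candidates, replace=False)
        best_score = -np.inf
        for idx in candidates:
            x = channel_signals[:, idx]
            thr = np.median(x)                     # simple data-driven threshold
            below, above = labels[x <= thr], labels[x > thr]
            if len(below) == 0 or len(above) == 0:
                continue
            # crude separation score: class-mean difference across the threshold
            score = abs(above.mean() - below.mean())
            if score > best_score:
                best_score = score
                self.input_idx, self.threshold = idx, thr
        return self

    def output(self, channel_signals):
        """Binary output signal sent back onto the routing channel."""
        return (np.asarray(channel_signals)[:, self.input_idx] > self.threshold).astype(int)
```

In a full array, many such cells would be instantiated, each drawing inputs from neighboring routing channels, and only wiring that turns out to carry useful information is kept; compare the initial versus final wiring for the credit card approval problem on slide 24.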

Slide 20: SOLAR Hardware Architecture

Slide 21: SOLAR Routing Scheme

Slide 22: SOLAR PCB (Xilinx Virtex XCV1000)

Slide 23: SOLAR System

Slide 24: Wiring in SOLAR
Initial wiring and final wiring selection for the credit card approval problem.

Slide 25: SOLAR Classification Results

Slide 26: Associative SOLAR

Slide 27: Associations Made in SOLAR

Slide 28: Brain Structure with Value System Properties
• Interacts with the environment through sensors and actuators
• Uses distributed processing in sparsely connected neurons organized in minicolumns
• Uses spatio-temporal associative learning
• Uses feedback for input prediction and for screening input information for novelty
• Develops an internal value system to evaluate its state in the environment using reinforcement learning
• Plans output actions for each input to maximize the internal state value in relation to the environment
• Uses redundant structures of sparsely connected processing elements

Slide 29: Possible Minicolumn Organization
[Block diagram with elements: Sensors, Actuators, Value System, Anticipated Response, Reinforcement Signal, Sensory Inputs, Motor Outputs, Action Planning, Understanding, Improvement Detection, Expectation, Novelty Detection, Inhibition, Comparison]

Slide 30: Postulates for Minicolumn Organization
• Learning should be restricted to unexpected situations or reward
• The anticipated response should have the expected value
• Novelty detection should also apply to the value system
• A mechanism is needed to improve and compare the value
• The anticipated response block should learn the response that improves the value
• A reinforcement learning (RL) optimization mechanism may be used to learn the optimum response for a given value system and sensory input
• Random perturbation should be applied to the optimum response to explore possible states and learn their value (a minimal sketch of this step follows the list)
• A new situation will result in a new value, and winner-take-all (WTA) will choose the winner
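To make the perturbation-and-evaluate postulate concrete, here is a minimal sketch, assuming a scalar internal value function value_fn(sensory_input, response) and a vector-valued response; the function name, the Gaussian perturbation, and the trial count are illustrative assumptions, not the SOLAR implementation.

```python
import numpy as np

def explore_response(value_fn, sensory_input, response, n_trials=50,
                     noise_scale=0.1, rng=None):
    """Perturb the current optimum response and keep any perturbation that
    yields a higher internal value (illustrative hill climbing).

    value_fn(sensory_input, response) -> scalar internal value (assumed).
    """
    rng = rng if rng is not None else np.random.default_rng()
    best_response = np.asarray(response, dtype=float)
    best_value = value_fn(sensory_input, best_response)

    for _ in range(n_trials):
        # random perturbation around the current optimum response
        candidate = best_response + rng.normal(0.0, noise_scale, best_response.shape)
        v = value_fn(sensory_input, candidate)
        if v > best_value:          # keep the winner (WTA on the value)
            best_value, best_response = v, candidate

    return best_response, best_value
```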

Slide 31: Minicolumn Selective Processing
• Sensory inputs are represented by more and more abstract features in the sensory input hierarchy
• A possible implementation is to use winner-takes-all (WTA) or Hebbian circuits to select the best match (see the sketch after this list)
• The "sameness principle" of the observed objects is used to detect and learn feature invariances
• Time overlap of feature neuron activation is used to store temporal sequences
• Random wiring may be used to preselect sensory features
• Uses feedback for input prediction and screening input information for novelty
• Uses redundant structures of sparsely connected processing elements
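A minimal sketch of the WTA-plus-Hebbian idea mentioned above, assuming rate-coded feature neurons with a weight matrix W; the learning rate and the weight normalization are illustrative choices, not taken from SOLAR.

```python
import numpy as np

def wta_hebbian_step(W, x, lr=0.05):
    """One competitive-learning step: the best-matching feature neuron wins
    (WTA) and only its weights move toward the input (Hebbian update).

    W : (n_neurons, n_inputs) float weight matrix, x : (n_inputs,) input pattern.
    """
    activations = W @ x                      # summation of weighted inputs
    winner = int(np.argmax(activations))     # winner-takes-all selection
    # Hebbian update for the winner only, then renormalize its weights
    W[winner] += lr * (x - W[winner])
    W[winner] /= np.linalg.norm(W[winner]) + 1e-12
    return winner, W
```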

Slide 32: Minicolumn Organization
[Diagram labels: Positive Reinforcement, Negative Reinforcement, Sensory Inputs, Motor Outputs, Sensory / Value / Motor superneuron]

Slide 33: Minicolumn Organization
Sensory neurons are primarily responsible for providing information about the environment:
• They receive inputs from sensors or from other sensory neurons on a lower level
• They interact with motor neurons to represent the action and state of the environment
• They provide an input to reinforcement neurons
• They help to activate motor neurons
Motor neurons are primarily responsible for activation of motor functions:
• They are activated by reinforcement neurons with help from sensory neurons
• They activate actuators or provide an input to lower-level motor neurons
• They provide an input to sensory neurons
Reinforcement neurons are primarily responsible for building the internal value system:
• They receive inputs from reinforcement learning sensors or from other reinforcement neurons on a lower level
• They receive inputs from sensory neurons
• They provide an input to motor neurons
• They help to activate sensory neurons

Slide 34: Sensory Neuron Functions
Sensory neurons:
• Represent inputs from the environment by
  - responding to activation from the lower level (summation)
  - selecting the most likely scenario (WTA)
• Interact with motor functions by
  - responding to activation from motor outputs (summation)
• Anticipate inputs and screen for novelty by
  - correlation with sensory inputs from the higher level
  - inhibition of outputs to the higher level
• Select useful information by
  - correlating their outputs with reinforcement neurons
• Identify invariances by
  - making spatio-temporal associations between neighboring sensory neurons
(A minimal sketch of the summation, WTA, and novelty-screening steps follows this slide.)
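A minimal sketch of the summation, WTA, and novelty-screening behavior described above, assuming a simple rate-coded layer; the feedforward weight matrix W, the feedback prediction vector, and the novelty threshold are illustrative assumptions rather than the SOLAR design.

```python
import numpy as np

def sensory_layer_step(W, lower_level, higher_level_prediction, novelty_threshold=0.2):
    """Compute sensory-layer responses from lower-level activations
    (weighted summation + WTA) and pass upward only responses that differ
    from the higher-level prediction (novelty screening by inhibition).

    W : (n_neurons, n_lower) feedforward weights
    lower_level : (n_lower,) activations from the level below
    higher_level_prediction : (n_neurons,) feedback prediction of this layer
    """
    response = W @ lower_level                      # summation of lower-level inputs
    winner = int(np.argmax(response))               # WTA: most likely scenario

    # novelty = mismatch between the actual response and the predicted response
    novelty = np.abs(response - higher_level_prediction)

    # inhibition: only sufficiently "novel" responses are sent to the higher level
    output_to_higher_level = np.where(novelty > novelty_threshold, response, 0.0)
    return winner, output_to_higher_level
```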

