Lior Segev Ranit Aharonov Alon Keinan Isaac Meilijson www.math.tau.ac.il/~ruppin Localization of Function in Neurocontrollers.


Localization of Function –How does one "understand" neural information processing? –A classical, good starting point is localization of function(s) in neurocontrollers –A good model to start with is Evolutionary Autonomous Agents (EAAs) –The scope of the analysis method may be more general

Evolved neurocontrollers

Talk Overview The basic Functional Contribution Analysis (FCA) Localization of Subtasks Synaptic Analysis High-dimensional FCA Informational Lesioning Playing games in the brain, or “My fair lady”.

The basic FCA A multi-lesion approach: learning about normal, intact functioning via lesion "perturbations" Given a set of neurocontroller lesions and the agent's corresponding performance levels, how does one assign "importance" levels to the different units of the neurocontroller? The FCA: find such assignments that maximize performance prediction on unseen lesions

Lesioning Elements C1 … C6; e.g., for a lesion leaving elements 1, 3, 4, 6 intact, the predicted performance is p̂ = f(c1 + c3 + c4 + c6). The contributions and prediction function are chosen as {f, c} = argmin_{f,c} (1/2^N) Σ (p − p̂)² over the lesion configurations.

The Functional Contribution Algorithm (FCA) [diagram: an f module and a c module are optimized jointly over a training set of lesions, yielding the optimal f and c that minimize (p − p̂)²]
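The FCA fitting idea can be sketched in a few lines. This is a hypothetical, minimal implementation for illustration: the prediction function f is taken to be the identity (the full FCA also learns a nonlinear f), and the lesion masks and performance scores come from a toy simulator with known contributions.

```python
import numpy as np

def fit_contributions(masks, perf, n_iter=500, lr=0.1):
    """Fit per-element contribution values c so that the predicted
    performance f(m . c) matches the measured performance of each
    lesion mask m. Here f is the identity for simplicity."""
    n = masks.shape[1]
    c = np.full(n, perf.mean() / n)          # start from equal contributions
    for _ in range(n_iter):
        pred = masks @ c                     # p_hat for each lesion
        grad = masks.T @ (pred - perf) / len(perf)
        c -= lr * grad                       # gradient step on squared error
    return c

# Toy network: 4 elements whose true contributions are known
# only to the simulator, and 64 random multi-lesion experiments.
rng = np.random.default_rng(0)
true_c = np.array([0.4, 0.3, 0.2, 0.1])
masks = rng.integers(0, 2, size=(64, 4))     # 1 = element intact, 0 = lesioned
perf = masks @ true_c                        # measured performance per lesion
c_hat = fit_contributions(masks, perf)
print(np.round(c_hat, 2))
```

On this noise-free toy data the recovered contributions converge to the true ones; with real, noisy performance measurements the fit is evaluated on held-out lesions, as in the generalization slide below.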

The performance prediction function: predicted performance P as a function of m · c (the lesion mask dotted with the contribution vector)

Single Lesions vs. FCA

Generalization – an Adaptive Lesion Selection algorithm

Task Comparison

The Contribution Matrix – Localization and Specification

            Task 1   Task 2   …   Task P
  Neuron 1   C11      C12     …    C1P
  Neuron 2   C21      C22     …    C2P
  Neuron 3   C31      C32     …    C3P
  …
  Neuron N   CN1      CN2     …    CNP

Synaptic Analysis

Network Backbone: by weights vs. by contributions

High-dimensional FCA The inherent limitations of basic FCA (e.g., paradoxical lesioning) Compound Elements Order (dimension) of compound elements An efficient High-D algorithm for compound element selection

Complexity of Task Localization

Types of 2D Interactions Paradoxical interactions – element 1 is advantageous only if element 2 is intact Inverse paradoxical interactions – element 1 is advantageous only if element 2 is lesioned All significant 2D compound elements found belong to one of these two types (though other types are possible)
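The two interaction types above can be made concrete with a small classifier. This is an illustrative sketch, not the paper's algorithm: `perf(S)` is a hypothetical performance function returning the score of the network with exactly the elements in `S` intact, and the classification compares element i's marginal gain with element j intact versus lesioned.

```python
def interaction_type(perf, i, j, others=frozenset()):
    """Classify the 2D interaction of element i with respect to j,
    holding the elements in `others` intact. perf(S) returns the
    performance with exactly the set S intact (hypothetical helper)."""
    gain_j_intact = perf(others | {i, j}) - perf(others | {j})
    gain_j_lesioned = perf(others | {i}) - perf(others)
    if gain_j_intact > 0 >= gain_j_lesioned:
        return "paradoxical"          # i helps only when j is intact
    if gain_j_lesioned > 0 >= gain_j_intact:
        return "inverse paradoxical"  # i helps only when j is lesioned
    return "neither"

# Example: element 0 contributes 0.2 only when element 1 is intact.
perf = lambda S: 0.2 * (0 in S and 1 in S) + 0.3 * (1 in S)
print(interaction_type(perf, 0, 1))  # paradoxical
```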

Informational Lesioning Method (ILM) The paradox of the lesioning paradigm The dependence on the lesioning method Controlled lesioning – approaching the limit of intact behavior Implement a lesion as a channel whose input is the firing of the intact element and output is the firing of the lesioned element (given an input). Quantify the lesioning level as an inverse function of the Mutual Information between the input and output of the channel
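The channel view of a lesion can be sketched numerically. Assuming binary firing (an assumption for illustration), a graded lesion is modeled here as a binary symmetric channel that flips the intact element's output with some probability, and the lesioning level is read off from the mutual information between the channel's input and output.

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two
    discrete sequences x and y."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

def noisy_channel(firing, flip_prob, rng):
    """Model a graded lesion: the lesioned element's output equals
    the intact firing, flipped with probability flip_prob.
    flip_prob = 0.5 corresponds to a classical full lesion."""
    flips = rng.random(len(firing)) < flip_prob
    return np.where(flips, 1 - firing, firing)

rng = np.random.default_rng(1)
intact = rng.integers(0, 2, size=10000)  # intact binary firing trace
for q in (0.0, 0.1, 0.3, 0.5):
    lesioned = noisy_channel(intact, q, rng)
    print(q, round(mutual_information(intact, lesioned), 3))
# MI falls from ~1 bit (q = 0, no lesion) toward 0 bits (q = 0.5):
# heavier lesions preserve less information about the intact activity.
```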

ILM – In summary: Increased localization precision Portrays a spectrum of short-to-long-term functional effects of system units Approaches, within the ILM lesioning family, the limiting contribution values (CVs) of the intact state Does such a limit exist more generally? Is the beauty inherently in the eye of the beholder?

Where Game Theory meets Brain Research.. “George said: You know, we are on a wrong track altogether. We must not think of the things we could do with, but only of the things that we can’t do without.” [Three men in a boat: to say nothing of the dog!, by Jerome K. Jerome, chapter 3]

FCA and the Shapley Value The Shapley value (SH): a famed solution to cost allocation, unique within its game-theoretic axiomatic system Many functioning networks (including our EAA neurocontrollers) can be addressed within this framework An alternative formulation of the FCA is equivalent to the SH (even though the starting points and motivations are different).
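For a small network the Shapley value can be computed exactly by averaging each element's marginal contribution over all orderings. This is a generic textbook sketch, not the paper's formulation: `perf(S)` is a hypothetical performance function over intact sets, and the toy example includes a paradoxical pair (elements 2 and 3 help only together, echoing the 2D interactions above).

```python
import itertools
import numpy as np

def shapley_values(n, perf):
    """Exact Shapley values for n elements, where perf(S) gives the
    performance with exactly the elements of frozenset S intact.
    Averages each element's marginal gain over all n! orderings."""
    phi = np.zeros(n)
    perms = list(itertools.permutations(range(n)))
    for order in perms:
        intact = set()
        for elem in order:
            before = perf(frozenset(intact))
            intact.add(elem)
            phi[elem] += perf(frozenset(intact)) - before
    return phi / len(perms)

# Toy performance function: elements 0 and 1 contribute independently,
# while elements 2 and 3 add 0.2 only when both are intact.
def perf(S):
    p = 0.5 * (0 in S) + 0.3 * (1 in S)
    if 2 in S and 3 in S:
        p += 0.2
    return p

print(np.round(shapley_values(4, perf), 3))
# Elements 2 and 3 split their joint 0.2 interaction equally.
```

Exact enumeration costs n! evaluations, which is why the sampling and high-dimensional approximations mentioned in the next slide matter for real networks.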

Ongoing FCA Research Optimal lesioning? Relation to SH and more efficient algorithms (sampling, high-D..) Generalization to PPR Application to neuroscience data (reverse inactivation, TMS, fMRI) Application to gene networks?

Summary The contribution values can be efficiently determined using the simple FCA. More complex networks require higher-dimensional FCA descriptions. The minimal dimension of the FCA may provide an interesting measure of functional complexity. The importance of being lesioned (in the "right" way..) – ILM and beyond. Even if the brain is not "a society of minds", it can be analyzed with the aid of fundamental tools from game theory. www.math.tau.ac.il/~ruppin – papers (and code)

Network backbone: 2D interactions