Lecture 38: PAC Learning, VC Dimension; Self-Organization
Pushpak Bhattacharyya, CSE Dept., IIT Bombay


VC-dimension: gives a necessary and sufficient condition for PAC learnability.

Def: Let C be a concept class, i.e., it has members c1, c2, c3, … as concepts in it. (Figure: concepts c1, c2, c3, … drawn as regions inside the class C.)

Let S be a subset of U (the universe). If every subset of S can be produced by intersecting S with the ci's, then we say C shatters S.

The highest-cardinality set S that can be shattered gives the VC-dimension of C: VC-dim(C) = |S|. (VC-dim: Vapnik-Chervonenkis dimension.)

Example: a 2-dimensional surface with C = { half planes }. (Figure: the x-y plane.)

|S| = 1 can be shattered: S1 = { a }, with subsets {a} and Ø. (Figure: a single point a in the x-y plane.)

|S| = 2 can be shattered: S2 = { a, b }, with subsets {a,b}, {a}, {b}, Ø. (Figure: two points a and b in the x-y plane.)

|S| = 3 can be shattered: S3 = { a, b, c }; all eight subsets can be cut out by half planes. (Figure: three non-collinear points a, b, c in the x-y plane.)


|S| = 4 cannot be shattered: S4 = { a, b, c, d }. (Figure: four points A, B, C, D in the x-y plane.) No half plane can contain one diagonally opposite pair while excluding the other, so not all sixteen subsets are obtainable. Hence the VC-dimension of half planes in 2-D is 3.

 A concept class C is learnable for all probability distributions and all concepts in C if and only if the VC-dimension of C is finite.
 If the VC-dimension of C is d, then… (next slide)

(a) For 0 < ε < 1 and sample size at least
max[ (4/ε) log(2/δ), (8d/ε) log(13/ε) ],
any consistent function A : S_c → C is a learning function for C.
(b) For 0 < ε < 1/2 and sample size less than
max[ ((1 − ε)/ε) ln(1/δ), d(1 − 2(ε(1 − δ) + δ)) ],
no function A : S_c → H, for any hypothesis space H, is a learning function for C.

Book:
1. M. H. G. Anthony and N. Biggs, Computational Learning Theory, Cambridge Tracts in Theoretical Computer Science.
Papers:
1. L. G. Valiant, A theory of the learnable, Communications of the ACM, 27(11), 1984.
2. A. Blumer, A. Ehrenfeucht, D. Haussler and M. Warmuth, Learnability and the VC-dimension, Journal of the ACM, 1989.

Biological Motivation. (Figure: the brain.)

3 layers: cerebrum, cerebellum, higher brain. (Figure: the brain with the cerebrum, cerebellum, and higher brain marked.)

(Figure: a pyramid of needs, from "food, rest, survival" at the base, through "achievement, recognition" and "contributing to humanity", to "search for meaning" at the top.)

3 layers: cerebrum (crucial for survival), cerebellum, higher brain (responsible for higher needs).

Side areas: auditory information processing. Back of the brain: vision. There is a lot of resilience: the visual and auditory areas can do each other's job.

Left Brain and Right Brain Dichotomy. (Figure: the two hemispheres, labeled left brain and right brain.)

Left brain: logic, reasoning, verbal ability. Right brain: emotion, creativity. In music, the words are processed by the left brain and the tune by the right brain. There are maps in the brain: the limbs are mapped to regions of the brain.

Character Recognition. (Figure: input neurons feeding a grid of output neurons.)

Self-Organization, or a Kohonen network, fires a group of neurons instead of a single one. The group "somehow" produces a "picture" of the cluster. Fundamentally, SOM is competitive learning, but weight changes are applied over a neighborhood: find the winner neuron, then apply the weight change to the winner and its "neighbors".

(Figure: the winner neuron at the center; neurons on the surrounding contour are the "neighborhood" neurons.)

Weight change rule for SOM:

W_{p+δ(n)}(n+1) = W_{p+δ(n)}(n) + η(n) ( I(n) − W_{p+δ(n)}(n) )

where p is the winner neuron, the neighborhood δ(n) is a decreasing function of n, and the learning rate η(n) is also a decreasing function of n, with 0 < η(n) < η(n−1) ≤ 1.

(Figure: pictorially, the winner neuron with the neighborhood of radius δ(n) around it.) Note: convergence for the Kohonen network has not been proved except in one dimension.

(Figure: SOM architecture. An output layer of P neurons is fully connected to n input neurons via weight vectors Wp; after training, the output neurons group the inputs into clusters A, B, C, ….)