Lecture Slides for INTRODUCTION TO MACHINE LEARNING, 3rd Edition, ETHEM ALPAYDIN © The MIT Press, 2014

CHAPTER 12: LOCAL MODELS

Introduction
 Divide the input space into local regions and learn a simple (constant/linear) model in each patch
 Unsupervised: competitive, online clustering
 Supervised: radial-basis functions, mixture of experts

Competitive Learning
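The competitive-learning equations on this slide did not survive the transcript. As a minimal sketch of the winner-take-all idea, here is an online k-means update in Python (names such as `online_kmeans` and `eta` are illustrative, not from the slides): for each input, only the closest unit moves.

```python
import numpy as np

def online_kmeans(X, k, eta=0.1, epochs=10, seed=0):
    """Winner-take-all competitive learning (online k-means sketch).

    For each input x, only the closest center (the winner) is updated:
    m_i <- m_i + eta * (x - m_i).
    """
    rng = np.random.default_rng(seed)
    # Initialize the centers on randomly chosen inputs
    m = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(epochs):
        for x in rng.permutation(X):
            i = np.argmin(np.linalg.norm(m - x, axis=1))  # find the winner
            m[i] += eta * (x - m[i])                      # move it toward x
    return m
```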

Winner-Take-All Network

Adaptive Resonance Theory (Carpenter and Grossberg, 1988)
 Incremental: if the input is not covered by any existing cluster, add a new cluster; coverage is defined by the vigilance parameter, ρ
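A minimal sketch of this incremental, vigilance-gated clustering (Euclidean distance as the coverage test, and the names `art_like_clustering`, `rho`, `eta` are assumptions for illustration):

```python
import numpy as np

def art_like_clustering(X, rho, eta=0.5):
    """Incremental clustering in the spirit of ART (sketch).

    If the nearest center is farther than the vigilance threshold rho,
    a new cluster is created; otherwise the winner moves toward x.
    """
    centers = [X[0].astype(float)]
    for x in X[1:]:
        d = [np.linalg.norm(x - m) for m in centers]
        i = int(np.argmin(d))
        if d[i] > rho:
            centers.append(x.astype(float))       # not covered: new cluster
        else:
            centers[i] += eta * (x - centers[i])  # covered: update winner
    return np.array(centers)
```

In this distance-threshold formulation, a larger rho tolerates looser clusters; a smaller rho spawns more of them.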

Self-Organizing Maps (Kohonen, 1990)
 Units have a defined neighborhood; m_i is “between” m_{i−1} and m_{i+1}, and all of them are updated together
 One-dimensional map
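A sketch of the one-dimensional SOM update (the Gaussian neighborhood e(l, i) and names like `som_1d` and `sigma` are assumptions): every unit l moves toward the input, weighted by its grid distance from the winning unit i, so map neighbors stay close in input space.

```python
import numpy as np

def som_1d(X, k, eta=0.1, sigma=1.0, epochs=10, seed=0):
    """One-dimensional self-organizing map (sketch)."""
    rng = np.random.default_rng(seed)
    m = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(epochs):
        for x in rng.permutation(X):
            i = np.argmin(np.linalg.norm(m - x, axis=1))           # winner
            e = np.exp(-(np.arange(k) - i) ** 2 / (2 * sigma**2))  # neighborhood
            m += eta * e[:, None] * (x - m)                        # joint update
    return m
```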

Radial-Basis Functions
 Locally-tuned units: p_h^t = exp(−‖x^t − m_h‖² / 2s_h²), combined linearly as y^t = Σ_h w_h p_h^t + w_0
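A direct NumPy transcription of these locally-tuned units (`rbf_forward`, `centers`, `spreads` are illustrative names):

```python
import numpy as np

def rbf_forward(X, centers, spreads, w, w0):
    """Gaussian RBF network outputs (sketch).

    p_h = exp(-||x - m_h||^2 / (2 s_h^2));  y = sum_h w_h p_h + w0.
    """
    # Squared distance from every input to every center: shape (N, H)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    P = np.exp(-d2 / (2 * spreads**2))  # locally tuned activations
    return P @ w + w0, P
```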

Local vs. Distributed Representation

Training RBF (Broomhead and Lowe, 1988; Moody and Darken, 1989)
 Hybrid learning:
 First layer, centers and spreads: unsupervised k-means
 Second layer, weights: supervised gradient descent
 Alternatively, fully supervised: backpropagate the error to the centers and spreads as well
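A hedged, self-contained NumPy sketch of the hybrid scheme (batch k-means for the first layer; a least-squares solve stands in for gradient descent on the second layer; the shared-spread heuristic and all names are assumptions):

```python
import numpy as np

def train_rbf_hybrid(X, y, H, seed=0):
    """Hybrid RBF training (sketch): unsupervised first layer,
    supervised second layer."""
    rng = np.random.default_rng(seed)
    m = X[rng.choice(len(X), size=H, replace=False)].astype(float)
    for _ in range(20):  # simple batch k-means for the centers
        labels = np.argmin(((X[:, None] - m[None]) ** 2).sum(-1), axis=1)
        for h in range(H):
            if np.any(labels == h):
                m[h] = X[labels == h].mean(0)
    # One shared spread from the mean inter-center distance (a heuristic)
    dists = [np.linalg.norm(m[i] - m[j]) for i in range(H) for j in range(i + 1, H)]
    s = np.mean(dists) if dists else 1.0
    # Second layer: least squares on the unit activations (supervised)
    P = np.exp(-((X[:, None] - m[None]) ** 2).sum(-1) / (2 * s**2))
    A = np.hstack([P, np.ones((len(X), 1))])
    w_full, *_ = np.linalg.lstsq(A, y, rcond=None)
    return m, s, w_full[:-1], w_full[-1]
```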

Regression
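The regression equations on this slide were lost in the transcript; a hedged reconstruction in the book's notation, minimizing squared error and updating the second-layer weights by gradient descent:

```latex
E = \frac{1}{2}\sum_t \sum_i \left(r_i^t - y_i^t\right)^2,
\qquad
y_i^t = \sum_{h=1}^{H} w_{ih}\, p_h^t + w_{i0},
\qquad
\Delta w_{ih} = \eta \sum_t \left(r_i^t - y_i^t\right) p_h^t
```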

Classification
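Likewise for classification (a hedged reconstruction): the RBF outputs pass through a softmax and cross-entropy is minimized:

```latex
y_i^t = \frac{\exp\left(\sum_h w_{ih}\, p_h^t + w_{i0}\right)}
             {\sum_k \exp\left(\sum_h w_{kh}\, p_h^t + w_{k0}\right)},
\qquad
E = -\sum_t \sum_i r_i^t \log y_i^t
```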

Rules and Exceptions
[Figure: a global “default rule” fit together with local “exceptions”]

Rule-Based Knowledge (Tresp et al., 1997)
 Incorporation of prior knowledge (before training)
 Rule extraction (after training)
 Fuzzy membership functions and fuzzy rules

Normalized Basis Functions
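The formula on this slide did not survive; the standard normalization divides each unit's activation by the total, so the units share responsibility for each input:

```latex
g_h^t = \frac{p_h^t}{\sum_l p_l^t}
      = \frac{\exp\left(-\|x^t - m_h\|^2 / 2s_h^2\right)}
             {\sum_l \exp\left(-\|x^t - m_l\|^2 / 2s_l^2\right)}
```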

Competitive Basis Functions
 Mixture model:
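The mixture-model equation was lost; in the textbook's notation it reads (hedged reconstruction):

```latex
p\left(r^t \mid x^t\right) = \sum_{h=1}^{H} p\left(h \mid x^t\right)\, p\left(r^t \mid h, x^t\right)
```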

Regression
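For regression with competitive basis functions, the log likelihood being maximized is (hedged reconstruction, assuming Gaussian noise on each local fit):

```latex
L = \sum_t \log \sum_h g_h^t \exp\left(-\frac{1}{2}\sum_i \left(r_i^t - y_{ih}^t\right)^2\right)
```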

Classification
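And for classification (hedged reconstruction), each unit's softmax outputs enter the likelihood as:

```latex
L = \sum_t \log \sum_h g_h^t \prod_i \left(y_{ih}^t\right)^{r_i^t}
```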

EM for RBF (Supervised EM)
 E-step:
 M-step:
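The E- and M-step equations did not survive; a hedged reconstruction in the usual mixture notation, where the E-step computes each unit's responsibility for a labeled pair and the M-step re-estimates parameters weighted by those responsibilities:

```latex
\text{E-step:}\quad
f_h^t = p\left(h \mid x^t, r^t\right)
      = \frac{g_h^t\, p\left(r^t \mid h, x^t\right)}
             {\sum_l g_l^t\, p\left(r^t \mid l, x^t\right)}
\qquad
\text{M-step:}\quad
m_h = \frac{\sum_t f_h^t\, x^t}{\sum_t f_h^t},
\qquad
w_{ih} = \frac{\sum_t f_h^t\, r_i^t}{\sum_t f_h^t}
```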

Learning Vector Quantization (Kohonen, 1990)
 H units per class, prelabeled
 Given x with m_i the closest unit: move m_i toward x if their class labels match, and away from x otherwise
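A one-step sketch of this rule (`lvq_update` and `eta` are illustrative names):

```python
import numpy as np

def lvq_update(m, m_labels, x, x_label, eta=0.1):
    """One LVQ step (sketch): the closest prototype moves toward x
    if the class labels match, away from x otherwise."""
    i = np.argmin(np.linalg.norm(m - x, axis=1))
    sign = 1.0 if m_labels[i] == x_label else -1.0
    m[i] += sign * eta * (x - m[i])
    return m
```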

Mixture of Experts (Jacobs et al., 1991)
 In RBF, each local fit is a constant: w_ih, the second-layer weight
 In MoE, each local fit is a linear function of x, a local expert: w_ih^t = v_ih^T x^t
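A compact forward pass for a MoE with softmax gating (`moe_forward`, `V`, `M` are illustrative names; bias terms are folded into the inputs):

```python
import numpy as np

def moe_forward(X, V, M):
    """Mixture-of-experts output (sketch).

    Expert h gives a linear fit; softmax gating g_h(x) mixes them:
    y = sum_h g_h(x) * (v_h^T x).
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias input
    experts = Xb @ V.T                         # (N, H) linear expert outputs
    scores = Xb @ M.T                          # gating scores
    g = np.exp(scores - scores.max(1, keepdims=True))
    g /= g.sum(1, keepdims=True)               # softmax gating weights
    return (g * experts).sum(1)                # gated combination
```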

MoE as Models Combined
 Radial gating:
 Softmax gating:
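The two gating formulas were lost; hedged reconstructions in the book's notation:

```latex
\text{Radial gating:}\quad
g_h^t = \frac{\exp\left(-\|x^t - m_h\|^2 / 2s_h^2\right)}
             {\sum_l \exp\left(-\|x^t - m_l\|^2 / 2s_l^2\right)}
\qquad
\text{Softmax gating:}\quad
g_h^t = \frac{\exp\left(m_h^T x^t\right)}{\sum_l \exp\left(m_l^T x^t\right)}
```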

Cooperative MoE
 Regression

Competitive MoE: Regression

Competitive MoE: Classification

Hierarchical Mixture of Experts (Jordan and Jacobs, 1994)
 Tree of MoEs, where each MoE is an expert in a higher-level MoE
 A soft decision tree: takes a weighted (gating) average of all leaves (experts), as opposed to following a single path to a single leaf
 Can be trained with EM