The Generalised Mapping Regressor (GMR) neural network for inverse discontinuous problems
Student: Chuan LU
Promotor: Prof. Sabine Van Huffel
Daily Supervisor: Dr. Giansalvo Cirrincione

Mapping Approximation Problem
Feedforward neural networks are:
- universal approximators of nonlinear continuous functions (many-to-one, one-to-one)
- they don't yield multiple solutions
- they don't yield infinite solutions
- they don't approximate mapping discontinuities

Inverse and Discontinuous Problems
- Mapping: multi-valued, with a complex structure; a feedforward network trained by least squares only recovers the conditional average of the target data.
- Poor representation of the mapping by the least-squares approach (sum-of-squares error function) for feedforward neural networks.
- Mappings with discontinuities.
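To make the conditional-average limitation concrete, here is a minimal NumPy illustration (not part of the original slides; the two-branch relation y = ±√x and the cubic fit are made-up ingredients): a sum-of-squares fit returns a value between the two branches instead of either solution.

```python
import numpy as np

# Two-branch (multivalued) relation: for every x there are two targets, +sqrt(x) and -sqrt(x).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 2000)
y = np.sqrt(x) * rng.choice([-1.0, 1.0], size=x.size)   # branch picked at random

# Any model fitted with a sum-of-squares error tends to the conditional mean of y given x;
# a cubic polynomial fitted by least squares is enough to show the effect.
coeffs = np.polyfit(x, y, deg=3)
pred = np.polyval(coeffs, 0.81)

print(f"solutions at x = 0.81: +0.9 and -0.9; least-squares prediction: {pred:.3f}")
# The prediction is close to 0: the regressor averages the branches instead of returning both.
```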

Mixture-of-Experts
[Diagram: the input feeds a gating network and several expert networks (Network 1, Network 2, Network 3); their outputs are combined into the overall output.]
- Jacobs and Jordan (mixture-of-experts, winner-take-all): it partitions the solution between several networks.
- Bishop (ME extension, kernel blending): it uses a separate network to determine the parameters of each kernel, with a further network to determine the coefficients.

Examples #1-#4
[Figures: outputs of the mixture-of-experts (ME) and of an MLP on four test mappings.]

Generalised Mapping Regressor (GMR) (G. Cirrincione and M. Cirrincione, 1998)
Characteristics:
- approximates every kind of function or relation
  - input: a collection of components of x and y
  - output: estimation of the remaining components
- outputs all solutions, mapping branches, equilevel hypersurfaces
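GMR works on the joint (augmented) vectors z = [x, y]; which components are "input" and which are "output" is decided only at recall time. A minimal sketch of this data preparation (the helper name `make_z` and the sine example are illustrative, not from the slides):

```python
import numpy as np

def make_z(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Stack input and output samples into the augmented Z (joint) space."""
    return np.hstack([np.atleast_2d(x), np.atleast_2d(y)])

# Example: samples of y = sin(x); the Z vectors are 2-D points on the curve and
# are processed by unsupervised learning, with no fixed input/output role.
x = np.linspace(0.0, 2.0 * np.pi, 200)[:, None]
y = np.sin(x)
Z = make_z(x, y)
print(Z.shape)   # (200, 2)
```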

GMR Basic Ideas
- coarse-to-fine learning: incremental, competitive, based on mapping recovery (curse of dimensionality)
- topological neuron linking: distance, direction
- linking tracking: branches, contours
- open architecture
- function approximation -> pattern recognition
- Z (augmented) space -> unsupervised learning
- clusters -> mapping branches

GMR four phases
[Diagram: Training Set -> Learning (pool of neurons) -> Linking (links) -> Object Merging (object 1, object 2, object 3 merged) -> Recalling (INPUT -> branch 1, branch 2).]

EXIN Segmentation Neural Network (EXIN SNN)
- clustering (G. Cirrincione, 1998)
[Diagram, input/weight space: vigilance threshold around the existing neurons; a new neuron is created on the sample that falls outside it, w_4 = x_4.]
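The slides do not reproduce the EXIN SNN learning law; the sketch below only illustrates the vigilance-driven, incremental quantization idea (create a neuron when the winner lies beyond the vigilance threshold, otherwise adapt the winner). The function name, learning rate and loop structure are assumptions.

```python
import numpy as np

def exin_snn(Z, vigilance, lr=0.05, epochs=3, rng=None):
    """Incremental, vigilance-based vector quantization (sketch, not the exact EXIN SNN law)."""
    rng = rng or np.random.default_rng(0)
    weights = [Z[0].copy()]                                   # first neuron on the first sample
    for _ in range(epochs):
        for z in Z[rng.permutation(len(Z))]:
            dists = np.linalg.norm(np.asarray(weights) - z, axis=1)
            win = int(np.argmin(dists))
            if dists[win] > vigilance:                        # vigilance test: the sample is "new"
                weights.append(z.copy())                      # allocate a new neuron on the sample
            else:
                weights[win] += lr * (z - weights[win])       # adapt the winning neuron
    return np.asarray(weights)
```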

GMR Learning
- coarse quantization of the Z (augmented) space by an EXIN SNN with a high vigilance threshold ρ_z (say ρ_1)
- each resulting neuron is a branch (object) neuron

GMR Learning
- production phase in the Z (augmented) space: the Voronoi sets of the object neurons are computed and used for domain setting

GMR Learning
- fine quantization: secondary EXIN SNNs, one per Voronoi set (training sets TS#1 ... TS#5), with a lower vigilance threshold ρ_z = ρ_2 < ρ_1
- other levels are possible
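A rough sketch of this coarse-to-fine flow, reusing the `exin_snn` sketch above; the threshold names `rho1`/`rho2` and the helper name are illustrative, and the real learning phase also sets the neuron domains.

```python
import numpy as np

def coarse_to_fine(Z, rho1, rho2):
    """Two-level GMR learning sketch: coarse object neurons, then fine neurons per object."""
    objects = exin_snn(Z, vigilance=rho1)                     # coarse quantization (high threshold)
    # Production phase: assign every sample to the Voronoi set of its object neuron.
    labels = np.argmin(
        np.linalg.norm(Z[:, None, :] - objects[None, :, :], axis=2), axis=1)
    fine = []
    for j in range(len(objects)):                             # one secondary EXIN SNN per Voronoi set
        subset = Z[labels == j]
        if len(subset):
            fine.append(exin_snn(subset, vigilance=rho2))     # fine quantization (lower threshold)
    return objects, np.vstack(fine)
```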

GMR Coarse-to-fine Learning (Example)
[Figure: object neurons, their Voronoi sets, and the fine VQ neurons inside each set.]

GMR Linking
Task 1: from its Voronoi set, set up the radius r_i of neuron i (domain variable); the radius is asymmetric.

GMR Linking
Task 2: for one TS presentation (sample z_i in weight space, with distances d_1 ... d_5 to the neurons w_1 ... w_5):
- find the linking candidates by a k-NN search (accelerated by a branch-and-bound technique)
- distance test
- direction test (linking direction)
- create a link or strengthen a link
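A simplified sketch of this linking step, under the assumptions that the candidates come from a plain k-NN search and that the distance and direction tests reduce to a distance-ratio test and a cosine test; the actual GMR tests and thresholds may differ, and all names are illustrative.

```python
import numpy as np

def link_neurons(Z, W, k=3, link_factor=1.5, min_cos=0.0):
    """Accumulate link strengths between neurons (sketch of the GMR linking idea)."""
    links = np.zeros((len(W), len(W)))
    for z in Z:
        d = np.linalg.norm(W - z, axis=1)
        order = np.argsort(d)
        win, d1 = order[0], d[order[0]]                     # winner and its distance
        for j in order[1:k + 1]:                            # linking candidates (k nearest neighbours)
            if d[j] >= link_factor * d1:                    # distance test
                continue
            v_sample = z - W[win]
            v_cand = W[j] - W[win]
            denom = np.linalg.norm(v_sample) * np.linalg.norm(v_cand) + 1e-12
            if np.dot(v_sample, v_cand) / denom < min_cos:  # direction test
                continue
            links[win, j] += 1.0                            # create or strengthen the link
            links[j, win] += 1.0
    return links
```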

Branch and Bound Accelerated Linking
- a neuron tree is constructed during the learning phase (multilevel EXIN SNN learning)
- methods for the linking-candidate step (k-nearest-neighbours computation):
  - λ-BnB: keep candidates with distance < λ d_1 (λ: linking factor, predefined)
  - k-BnB: k predefined
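A sketch of the pruning idea behind the branch-and-bound candidate search, on a hypothetical two-level tree (node centres with radii, e.g. the object neurons and the spread of their fine neurons); the tree layout, names and the externally supplied bound are assumptions, not the original data structure.

```python
import numpy as np

def bnb_candidates(z, centers, radii, members, d_max):
    """Collect neurons within distance d_max of z, pruning whole tree nodes (sketch)."""
    candidates = []
    for c, r, pts in zip(centers, radii, members):
        lower_bound = np.linalg.norm(z - c) - r        # no member of the node can be closer than this
        if lower_bound > d_max:
            continue                                   # prune the whole subtree without scanning it
        for w in pts:                                  # only surviving nodes are scanned exhaustively
            d = np.linalg.norm(z - w)
            if d <= d_max:
                candidates.append((d, w))
    return sorted(candidates, key=lambda t: t[0])      # nearest candidates first

# With the λ-BnB rule, d_max would be λ times the winner distance d_1;
# with the k-BnB rule, the k best entries of the returned list are kept.
```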

GMR Linking
[Figure: branch-and-bound applied in linking; experimental results: 83 %.]

Branch and Bound (cont.)
Applying branch and bound in the learning phase (labelling):
- tree construction: k-means, EXIN SNN
- experimental results (in the 3-D example): 50% of the labelling flops are saved

GMR Linking Example
[Figure: links created between the neurons.]

GMR Merging Example

GMR Recalling Example
[Figure: level 1 and level 2 neurons along branch 1 and branch 2.]
- level one neurons: input within their domain
- level two neurons: only connected ones
- level zero neurons: isolated (noise)
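A sketch of the recall logic implied by these level definitions; branch tracking, clipping and output interpolation are omitted, and all names (`gmr_recall`, `in_dims`, `out_dims`) are illustrative.

```python
import numpy as np

def gmr_recall(x_in, W, radii, links, in_dims, out_dims):
    """Return one estimate per mapping branch for a partial input (sketch).

    W holds the neuron weights in the augmented Z space, radii their domain radii,
    links the link-strength matrix; in_dims are the indices of the given components,
    out_dims the indices to estimate.
    """
    d_in = np.linalg.norm(W[:, in_dims] - x_in, axis=1)
    level = np.zeros(len(W), dtype=int)
    level[d_in <= radii] = 1                          # level one: the input lies in the neuron domain
    for i in np.flatnonzero(level == 1):              # spread along links from the level-one neurons
        for j in np.flatnonzero(links[i] > 0):
            if level[j] == 0:
                level[j] = 2                          # level two: reached only through a link
    # Isolated neurons stay at level zero (noise).
    # Output = the complementary weight components of the level-one neurons.
    return W[level == 1][:, out_dims]
```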

Experiments
- spiral of Archimedes: ρ = aθ (a = 1)
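A possible way to generate the spiral training set in the augmented space (sampling range and density are assumptions); recalling y from a given x is multivalued because a vertical line crosses the spiral several times.

```python
import numpy as np

# Spiral of Archimedes, rho = a * theta with a = 1, sampled over three turns.
a = 1.0
theta = np.linspace(0.1, 6.0 * np.pi, 3000)
rho = a * theta
x = rho * np.cos(theta)
y = rho * np.sin(theta)
Z = np.column_stack([x, y])                # augmented Z-space training samples

# Each sign change of x is a crossing of the line x = 0, i.e. one distinct
# y solution of the inverse query "given x = 0, find y".
print(np.count_nonzero(np.diff(np.sign(x)) != 0))
```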

Experiments
- sparse regions: further normalizing + higher mapping resolution

Experiments noisy data

Experiments

GMR mapping of 8 spheres in a 3-D scene.
- contours: links among level one neurons

Conclusions
GMR is able to:
- solve inverse discontinuous problems
- approximate every kind of mapping
- yield all the solutions and the corresponding branches
GMR can be accelerated by applying tree-search techniques.
GMR needs:
- interpolation techniques
- kernels or projection techniques for high-dimensional data
- adaptive parameters

Thank you! (shi-a shi-a)

GMR Recall
[Figure: neurons w_1 ... w_8, all level/branch labels initialised to l_i = 0, b_i = 0; for the first input (radius r_1) the winner gets l_1 = 1, b_1 = 1 and a tracked neighbour gets l_3 = 2, b_3 = 1.]
- level one test
- linking tracking, with a restricted distance
- a connected neuron goes from level zero to level two and takes the branch of the winner

GMR Recall
[Figure: second input (radius r_2); w_2 now passes the level one test, l_2 = 1, and its branch label changes from b_2 = 2 to b_2 = 1.]
- level one test
- linking tracking
- branch cross

GMR Recall
[Figure: the remaining candidates are processed (e.g. l_4 = 1, b_4 = 4; l_5 = 2, b_5 = 4; l_6 = 1, b_6 = 4) ... until completion of the candidates; two branches are found.]
- level one neurons: input within their domain
- level two neurons: only connected ones
- level zero neurons: isolated (noise)
- clipping

GMR Recall
[Figure: final level/branch labels of the level one neurons, e.g. l_1 = 1, b_1 = 1; l_2 = 1, b_2 = 1; l_4 = 1, b_4 = 4; l_6 = 1, b_6 = 4.]
- Output = weight complements of the level one neurons
- Output interpolation