The tiling algorithm — Learning in feedforward layered networks: the tiling algorithm, written by Marc Mézard and Jean-Pierre Nadal.

Learning in feedforward layered networks: the tiling algorithm. Authors: Marc Mézard and Jean-Pierre Nadal. Dated 21 June 1989. Published in J. Phys.
Presentation transcript:

1 The tiling algorithm Learning in feedforward layered networks: the tiling algorithm, written by Marc Mézard and Jean-Pierre Nadal

2 Outline Introduction The tiling algorithm Simulations Concluding remarks

3 Introduction The feedforward layered system. The drawbacks of back propagation: the structure of the network has to be guessed, and the error is not guaranteed to converge to an absolute minimum with zero error. In the tiling algorithm, by contrast, units are added like tiles whenever they are needed.

4 Introduction

5 The tiling algorithm Basic notions and notation Theorem for convergence Generating the master unit Building the ancillary units: divide and conquer

6 Basic notions and notation We consider layered nets made of binary units, each of which can be in a plus or minus state. A unit i in the Lth layer is connected to the N_{L-1} units of the previous layer and has state S_i(L) = sign( Σ_{j=1..N_{L-1}} w_ij S_j(L-1) ), with the threshold absorbed as the coupling to a unit clamped at +1.
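The update rule on this slide can be sketched in plain Python (the name unit_state is illustrative; the tie at zero is broken towards +1, a convention the slide does not specify):

```python
def unit_state(weights, prev_states):
    # State of one binary unit: the sign of the weighted sum of the
    # previous layer's states.  A threshold can be absorbed as the
    # weight to an extra unit clamped at +1.
    h = sum(w * s for w, s in zip(weights, prev_states))
    return 1 if h >= 0 else -1
```

For example, unit_state([1.0, 0.0], [1, -1]) copies the first unit of the previous layer, as the weights w_1 = 1, w_j = 0 used later in the proof do.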

7 Basic notions and notation For a given set of p_0 (distinct) patterns of N_0 binary units, we want to learn a given mapping ξ^μ = (ξ_1^μ, …, ξ_{N_0}^μ) → τ^μ = ±1 (μ = 1, …, p_0).

8 Theorem for convergence To each input pattern ξ^μ there corresponds a set of values of the neurons in the Lth layer, the internal representation of pattern μ in layer L.

9 Theorem for convergence We say that two patterns belong to the same class (for layer L) if they have the same internal representation, which we call the prototype of the class. The problem then becomes one of mapping these prototypes onto the desired output.

10 Theorem for convergence master unit: the first unit in each layer. ancillary unit: all the other units in each layer, used to fulfil the faithfulness condition. faithful: two input patterns having different outputs should have different internal representations.

11 Theorem for convergence Theorem: Suppose that all the classes in layer L-1 are faithful, and that the number of errors of the master unit, e_{L-1}, is non-zero. Then there exists at least one set of weights w connecting the L-1 layer to the master unit such that e_L ≤ e_{L-1} − 1. Furthermore, one can construct explicitly one such set of weights u.

12 Theorem for convergence Proof: Let S^μ be the prototypes in layer L-1 and τ^μ be the desired outputs (+1 or −1). If the master unit of the Lth layer is connected to the (L-1)th layer with the weights w (w_1 = 1, w_j = 0 for j ≠ 1), it simply copies the master unit of layer L-1, so e_L = e_{L-1}.

13 Theorem for convergence Let μ0 be one of the patterns for which S_1^{μ0} ≠ τ^{μ0}, and let u be given by u_1 = 1 and u_j = λ τ^{μ0} S_j^{μ0} for j ≠ 1 (λ > 0); then, since S_1^{μ0} = −τ^{μ0}, pattern μ0 is mapped correctly as soon as λ is large enough.

14 Theorem for convergence Consider the other patterns μ: the overlap Σ_j S_j^{μ0} S_j^μ can be -N_{L-1}, -N_{L-1}+2, …, N_{L-1}. Because the representations in the L-1 layer are faithful, -N_{L-1} can never be obtained. Thus one can choose λ so that the patterns for which S_1^μ = τ^μ still remain correctly mapped.

15 Theorem for convergence Hence u is one particular solution which, if used to define the master unit of layer L, will give e_L ≤ e_{L-1} − 1.

16 Generating the master unit Use the pocket algorithm. If the particular set u of the previous section is taken as the initial set in the pocket algorithm, the output set w will always satisfy e_L ≤ e_{L-1} − 1.
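A minimal sketch of the pocket algorithm follows, simplified to deterministic sweeps over the patterns (the usual pocket algorithm presents patterns in random order); the function name and the toy linearly separable data (third component clamped at +1 as a threshold unit) are illustrative:

```python
def pocket_train(patterns, targets, w0, sweeps=50):
    # Pocket algorithm sketch: run perceptron updates and keep
    # ("pocket") the weight vector with the fewest errors seen so far.
    # Seeding with the constructed set u guarantees the output is at
    # least as good as u.
    def errors(w):
        return sum(1 for S, tau in zip(patterns, targets)
                   if (1 if sum(a * b for a, b in zip(w, S)) >= 0 else -1) != tau)
    w, pocket, pocket_err = list(w0), list(w0), errors(w0)
    for _ in range(sweeps):
        for S, tau in zip(patterns, targets):
            out = 1 if sum(a * b for a, b in zip(w, S)) >= 0 else -1
            if out != tau:
                # perceptron update on a misclassified pattern
                w = [a + tau * b for a, b in zip(w, S)]
                if errors(w) < pocket_err:
                    pocket, pocket_err = list(w), errors(w)
    return pocket, pocket_err

pats = [[1, 1, 1], [1, -1, 1], [-1, 1, 1], [-1, -1, 1]]  # last unit clamped +1
tgts = [1, -1, -1, -1]
best_w, best_err = pocket_train(pats, tgts, [0.0, 0.0, 0.0])
```

On this separable toy data the perceptron converges, so the pocketed error reaches zero; on non-separable data the pocket simply retains the best weights encountered.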

17 Building the ancillary units If the master unit is not equal to the desired output unit, then at least one of the two classes is unfaithful. We pick one unfaithful class and add a new unit to learn the mapping for the patterns μ belonging to this class only. Repeat this process until all classes are faithful.
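The faithfulness test that drives this loop can be sketched as follows (function names are illustrative): group patterns by their internal representation, then flag any class whose members demand different outputs.

```python
from collections import defaultdict

def classes(internal_reps):
    # Group pattern indices by their internal representation (the
    # prototype of the class) in the current layer.
    groups = defaultdict(list)
    for mu, rep in enumerate(internal_reps):
        groups[tuple(rep)].append(mu)
    return groups

def unfaithful_classes(internal_reps, targets):
    # A class is unfaithful if patterns with different desired outputs
    # share the same internal representation; each such class needs a
    # new ancillary unit trained on its patterns only.
    return [proto for proto, members in classes(internal_reps).items()
            if len({targets[mu] for mu in members}) > 1]
```

For instance, representations [[1,1],[1,1],[1,-1]] with targets [1,-1,1] yield one unfaithful class, (1,1), since its two patterns want opposite outputs.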

18 Simulations Exhaustive learning (use the full set of the 2^{N_0} patterns): parity task; random Boolean function. Generalization. Quality of convergence. Comments.

19 Parity task In the parity task for N_0 Boolean units the output should be +1 if the number of units in state +1 is even, and −1 otherwise.
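The target function of this task, as stated on the slide, is a one-liner (the name parity_target is illustrative):

```python
def parity_target(pattern):
    # +1 if the number of units in state +1 is even, -1 otherwise,
    # exactly as the parity task is defined on the slide.
    n_plus = sum(1 for s in pattern if s == 1)
    return 1 if n_plus % 2 == 0 else -1
```

For example, parity_target([1, 1, -1]) is +1 (two units up) and parity_target([1, -1, -1]) is −1 (one unit up).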

20 Parity task Table: for each hidden-layer unit i, its threshold and the couplings from the input layer to hidden unit i.

21 Parity task Table: the output unit's threshold and the couplings from the hidden layer to the output unit.

22 Random Boolean function A random Boolean function is obtained by drawing at random the output (±1 with equal probability) for each input configuration. The numbers of layers and of hidden units increase rapidly with N_0.
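Drawing such a function can be sketched as follows (the seed and function name are illustrative): one independent ±1 output per input configuration.

```python
import random
from itertools import product

def random_boolean_function(n, seed=0):
    # Draw the output (+1 or -1 with equal probability) independently
    # for each of the 2**n input configurations over {+1, -1}.
    rng = random.Random(seed)
    return {cfg: rng.choice((1, -1)) for cfg in product((1, -1), repeat=n)}
```

A random Boolean function has no structure to exploit, which is why the networks built for it grow quickly with N_0.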

23 Generalization The number of training patterns is smaller than 2^{N_0}. The N_0 input neurons are organized in a one-dimensional chain, and the problem is to find out whether the number of domain walls is greater or smaller than three.

24 Generalization domain wall: the presence of two neighbouring neurons pointing in opposite directions. When the average number of domain walls in the training patterns is three, the problem is harder than for other numbers.
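Counting domain walls on the chain is straightforward (names are illustrative; the slide does not say how a count of exactly three is classified, so the tie convention below is an assumption):

```python
def domain_walls(chain):
    # A domain wall is a pair of neighbouring neurons pointing in
    # opposite directions; count them along an open chain.
    return sum(1 for a, b in zip(chain, chain[1:]) if a != b)

def wall_target(chain, threshold=3):
    # Desired output for the generalization task: +1 if the number of
    # domain walls exceeds the threshold, -1 otherwise (the treatment
    # of exactly `threshold` walls is an assumption here).
    return 1 if domain_walls(chain) > threshold else -1
```

For example, the chain [1, 1, -1, -1, 1] has two walls, so wall_target maps it to −1.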

25 See figure 1 in page 2199 Learning in feedforward layered networks: the tiling algorithm

26 Quality of convergence To quantify the quality of convergence one might think of at least two parameters: e_L, the number of errors of the master unit of layer L; p_L, the number of distinct internal representations in each layer L.
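Both parameters are easy to measure from the internal representations of the training patterns. In this sketch (the function name is illustrative) the master unit is taken to be the first component of each representation, as defined on slide 10:

```python
def quality(internal_reps_per_layer, targets):
    # For each layer L, report (e_L, p_L): the errors of the master
    # unit (first component of each representation) and the number of
    # distinct internal representations.
    report = []
    for reps in internal_reps_per_layer:
        e_L = sum(1 for rep, tau in zip(reps, targets) if rep[0] != tau)
        p_L = len({tuple(rep) for rep in reps})
        report.append((e_L, p_L))
    return report
```

By the convergence theorem, e_L should decrease strictly from layer to layer until it reaches zero.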

27 See figure 2 in page 2200 Learning in feedforward layered networks: the tiling algorithm

28 Comments There is a lot of freedom in the choice of the unfaithful classes to be learnt. How should one choose the maximum number of iterations allowed before deciding that the perceptron algorithm has not converged?

29 Concluding remarks Presented a new strategy for building a feedforward layered network. Identified some possible roles of the hidden units: the master units and the ancillary units. Possible extensions: continuous inputs and binary outputs; conflicting data; more than one output unit.