
1 Neuro-Computing Lecture 5 Committee Machine

2 Suggested Reading: Neural Network Ensembles
- Haykin: Chapter 7
- Kuncheva: Combining Pattern Classifiers: Methods and Algorithms, Wiley, 2004.

3 Pattern Classification
[Figure: classification pipeline — processing → feature extraction → classification, illustrated with distinguishing a fork from a spoon]

4 [Figure-only slide: no text content]

5 The power of Parliament
- Many 'experts' argue about the same problem.
- The 'experts' can disagree locally, but overall all of them try to solve the same global problem.
- The collective answer is usually more stable to fluctuations in the intelligence/emotions/biases of individual 'experts'.
- There exist many formalizations of this idea.

6 Combining neural classifiers

7 Averaging Results: Mean Error for Each Network
Suppose we have L trained experts with outputs y_i(x) for a regression problem approximating h(x), each with an error e_i(x). Then we can write:

$$y_i(x) = h(x) + e_i(x)$$

Thus the sum-of-squares error for network y_i is:

$$E_i = \mathbb{E}\big[(y_i(x) - h(x))^2\big] = \mathbb{E}\big[e_i(x)^2\big]$$

where $\mathbb{E}[\cdot]$ denotes the expectation (average or mean value). Thus the average error for the networks acting individually is:

$$E_{AV} = \frac{1}{L}\sum_{i=1}^{L} E_i = \frac{1}{L}\sum_{i=1}^{L}\mathbb{E}\big[e_i(x)^2\big]$$

8 Averaging Results: Mean Error for Committee
Suppose instead we form a committee by averaging the outputs y_i(x) to get the committee prediction:

$$y_{COM}(x) = \frac{1}{L}\sum_{i=1}^{L} y_i(x)$$

This estimate will have error:

$$E_{COM} = \mathbb{E}\big[(y_{COM}(x) - h(x))^2\big] = \mathbb{E}\Big[\Big(\frac{1}{L}\sum_{i=1}^{L} e_i(x)\Big)^2\Big]$$

Thus, by Cauchy's inequality:

$$E_{COM} \leq E_{AV}$$

Indeed, if the errors are uncorrelated (and zero-mean), $E_{COM} = E_{AV}/L$, but this is unlikely in practice, as errors tend to be correlated.
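A minimal numerical sketch (not from the slides; all names and values are illustrative) of the result above: the committee error E_COM never exceeds the mean individual error E_AV, and approaches E_AV / L when the experts' errors are uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 10, 5000                      # number of experts, number of test points
x = np.linspace(0, 1, N)
h = np.sin(2 * np.pi * x)            # target function h(x)

# Each "expert" is h(x) plus its own (here: independent, zero-mean) error e_i(x)
errors = 0.3 * rng.standard_normal((L, N))
y = h + errors                       # y_i(x) = h(x) + e_i(x)

E_i = np.mean((y - h) ** 2, axis=1)  # E_i = E[e_i^2] for each expert
E_AV = E_i.mean()                    # average error of the individual networks

y_com = y.mean(axis=0)               # committee prediction: mean of the L outputs
E_COM = np.mean((y_com - h) ** 2)    # committee error

print(f"E_AV  = {E_AV:.4f}")
print(f"E_COM = {E_COM:.4f}  (close to E_AV / L = {E_AV / L:.4f} here)")
```

With correlated errors the reduction is smaller, but the inequality E_COM ≤ E_AV still holds.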

9 Methods for constructing Diverse Neural Classifiers
- Different learning machines
- Different representation of patterns
- Partitioning the training set
- Different labeling in learning

10 Methods for constructing Diverse Neural Classifiers
Different learning machines
1. Using different learning algorithms. For example, we can combine a multi-layer perceptron, a radial basis function network and decision trees in an ensemble to make a composite system (Kittler et al. 1997).
2. Using the same algorithm with different complexity. For example, using networks with a different number of nodes and layers, or nearest-neighbour classifiers with a different number of prototypes.
3. Using different learning parameters. For example, in an MLP, different initial weights, number of epochs, learning rate and momentum, or cost function can affect the generalization of the network (Windeatt and Ghaderi 1998).
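A hedged sketch of points 1–3 above, using scikit-learn (my own choice of tooling, not the lecture's code): ensemble members differ in learning algorithm, complexity and learning parameters (here initial weights are varied via random_state).

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Illustrative synthetic data
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

members = [
    MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=1),     # small MLP
    MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=2), # larger MLP, other init
    DecisionTreeClassifier(max_depth=4, random_state=3),                       # different algorithm
]
for clf in members:
    clf.fit(X, y)   # each member learns the same task with a different bias
```

The combination rules for such a set of members are covered in the later slides on combination strategies.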

11 Methods for constructing Diverse Neural Classifiers
Different representation of patterns
1. Using different feature sets or rule sets. This method is useful particularly when more than one source of information is available (Kittler et al. 1997).
2. Using decimation techniques. Even when only one feature set exists, we can produce different feature sets (Tumer and Ghosh 1996) to be used in training the different classifiers by removing different parts of this set.
3. Feature-set partitioning in a multi-net system. This method can be useful if patterns include parts which are independent, for example different parts of an identification form or of a human face. In this case, patterns can be divided into sub-patterns, each of which can be used to train a classifier. One theoretical property of neural networks is that they do not need special feature extraction for classification: the feature set can be the raw real-valued measurements (like gray levels in image processing). The number of these measurements can be high, so if we apply all of them to a single network, the curse of dimensionality will occur. To avoid this problem, they can be divided into independent parts, each one used as the input of a sub-network (Rowley 1999, Rowley et al. 1998).
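An illustrative sketch (my own, under the assumption of scikit-learn sub-classifiers) of feature-set partitioning as in point 3: the feature vector is split into disjoint parts and one sub-network is trained per part, instead of feeding one large high-dimensional input to a single network.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
parts = np.array_split(np.arange(X.shape[1]), 3)       # three disjoint feature subsets

sub_nets = []
for cols in parts:
    net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    net.fit(X[:, cols], y)                             # each sub-network sees only its part
    sub_nets.append((cols, net))

# Each (cols, net) pair can later be combined by any of the rules in slide 16.
```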

12 Methods for constructing Diverse Neural Classifiers
Partitioning the training set
1. Random partitioning, such as:
- Cross-validation (Friedman 1994) is a statistical method in which the training set is divided into k subsets of approximately equal size. A network is trained k times, each time with one of the subsets "left out"; by changing the left-out subset we obtain k different trained networks. If k equals the number of samples, it is called "leave-one-out".
- Bootstrapping, in which the mechanism is essentially the same as cross-validation, but the subsets (patterns) are chosen randomly with replacement (Jain et al. 1987, Efron 1983). This method has been used in the well-known technique of Bagging (Breiman 1999, Bauer and Kohavi 1999, Maclin and Opitz 1997, Quinlan 1996), and has yielded significant improvement in many real problems.
2. Partitioning based on spatial similarities. Mixture of Experts is based on the divide-and-conquer strategy (Jacobs 1995, Jacobs et al. 1991), in which the training set is partitioned according to spatial similarity rather than by the random partitioning of the former methods. During training, different classifiers (experts) try to deal with different parts of the input space; a competitive gating mechanism, implemented by a gating network, localizes the experts.
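A minimal bagging sketch (my own; it assumes scikit-learn decision trees as base learners and majority voting at the end), following the bootstrap idea described above: each member is trained on a resample drawn with replacement.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)

models = []
for b in range(11):                                    # 11 bootstrap replicates
    idx = rng.integers(0, len(X), size=len(X))         # sample with replacement
    models.append(DecisionTreeClassifier(random_state=b).fit(X[idx], y[idx]))

# Combine the members by majority vote
votes = np.stack([m.predict(X) for m in models])       # shape (n_models, n_samples)
y_hat = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```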

13 Methods for constructing Diverse Neural Classifiers
Partitioning the training set
3. Partitioning based on experts' ability.
- Boosting is a general technique for constructing a classifier with good performance and small error from individual classifiers whose performance is only a little better than random guessing (Drucker 1993).
a. Boosting by filtering. This approach involves filtering the training examples by different versions of a weak learning algorithm. It assumes the availability of a large (in theory, infinite) source of examples, with the examples being either discarded or kept during training. An advantage of this approach is its small memory requirement compared to the other two approaches.
b. Boosting by subsampling. This second approach works with a training sample of fixed size. The examples are resampled according to a given probability distribution during training. The error is calculated with respect to the fixed training sample.
c. Boosting by reweighting. This third approach also works with a fixed training sample, but it assumes that the weak learning algorithm can receive "weighted" examples. The error is calculated with respect to the weighted examples.
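A brief sketch of boosting by reweighting (option c above). Here I use scikit-learn's AdaBoost as a stand-in, which reweights the training examples so that later weak learners concentrate on the points earlier ones misclassified; this is an illustrative library choice, not the lecture's own implementation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Default weak learners are decision stumps; each round reweights the examples.
booster = AdaBoostClassifier(n_estimators=50, random_state=0)
booster.fit(X, y)
print("training accuracy:", booster.score(X, y))
```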

14 Methods for constructing Diverse Neural Classifiers
Different labeling in learning
If we redefine the classes, for example by giving the same label to a group of classes, the classifiers built from the different class definitions can generalize diversely. Methods such as Error-Correcting Output Codes, ECOC (Dietterich 1991, Dietterich and Bakiri 1991), binary clustering (Wilson 1996) and pairwise coupling (Hastie and Tibshirani 1996) use this technique.
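A sketch of the ECOC idea (assuming scikit-learn; dataset and parameters are illustrative): each binary classifier is trained on a relabelling of the classes given by one column of a code matrix, and a test pattern is assigned to the class with the nearest codeword.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier

X, y = load_iris(return_X_y=True)

# code_size=2.0 means the code length is 2 * n_classes binary problems
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                            code_size=2.0, random_state=0)
ecoc.fit(X, y)
print(ecoc.predict(X[:5]))
```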

15 Combination strategies
- Static combiners
- Dynamic combiners

16 Combination strategies
- Static combiners
  - Non-trainable: Voting, Sum, Product, Max, Min, Averaging, Borda Counts
  - Trainable: Weighted Averaging {DP-DT, Genetic, …}, Stacked Generalization
- Dynamic combiners: Mixture of Experts
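A small sketch (my own) of the fixed, non-trainable combiners listed above: given each member's class-posterior estimates, combine them by sum (equivalently averaging), product, max, min, or majority vote.

```python
import numpy as np

def combine(posteriors, rule="sum"):
    # posteriors: shape (n_classifiers, n_samples, n_classes),
    # e.g. np.stack([m.predict_proba(X) for m in members])
    if rule == "sum":            # sum rule; same decision as averaging
        scores = posteriors.sum(axis=0)
    elif rule == "product":
        scores = posteriors.prod(axis=0)
    elif rule == "max":
        scores = posteriors.max(axis=0)
    elif rule == "min":
        scores = posteriors.min(axis=0)
    elif rule == "vote":         # majority vote over each member's hard decision
        votes = posteriors.argmax(axis=2)                  # (n_classifiers, n_samples)
        n_classes = posteriors.shape[2]
        scores = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)], axis=1)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return scores.argmax(axis=1)                           # final class labels
```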

17 Stacked Generalization
Wolpert (1992) - In stacked generalization, the output pattern of an ensemble of trained experts serves as an input to a second-level expert
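A hedged sketch of Wolpert-style stacking using scikit-learn (my choice of tooling, not the lecture's): the level-0 experts' outputs, produced out-of-fold, become the input features of a second-level (level-1) expert.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

level0 = [("mlp", MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)),
          ("tree", DecisionTreeClassifier(max_depth=4, random_state=0))]

stack = StackingClassifier(estimators=level0,
                           final_estimator=LogisticRegression(),  # second-level expert
                           cv=5)                                   # out-of-fold level-0 outputs
stack.fit(X, y)
```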

18 Mixture of Experts Jacobs et al. (1991)
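A minimal forward-pass sketch (my own simplification, with linear experts and a linear gating network as assumptions) of the Jacobs et al. (1991) mixture-of-experts idea described in slide 12: a softmax gating network weights the experts' outputs, so different experts specialise on different regions of the input space.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 3                                  # input dimension, number of experts
W_experts = rng.standard_normal((k, d))      # one linear expert per row (illustrative)
W_gate = rng.standard_normal((k, d))         # gating network parameters

def moe_forward(x):
    expert_out = W_experts @ x               # y_i(x): one output per expert
    g = np.exp(W_gate @ x)
    g /= g.sum()                             # softmax gate: competitive weighting
    return g @ expert_out                    # combined output: sum_i g_i(x) * y_i(x)

print(moe_forward(rng.standard_normal(d)))
```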

19 Combining classifiers for face recognition
Zhao, Huang, Sun (2004) Pattern Recognition Letters

20 Combining classifiers for face recognition
Jang et al. (2004) LNCS

21 Combining classifiers for view-independent face recognition
Kim, Kittler (2006) IEEE Trans. Circuits and Systems for Video Technology

