Experimental Results: ELM, Weighted ELM, and Locally Weighted ELM (Problem 2)
Experimental setup: all training data are randomly chosen; targets are normalized to [-1, 1] and features to [0, 1]; RMSE is used as the error criterion.
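The normalization and RMSE criterion above can be sketched in Python (the function names and min-max rescaling are illustrative assumptions, not taken from the slides):

```python
import numpy as np

def normalize(a, lo, hi):
    """Linearly rescale array a into the range [lo, hi] (min-max scaling)."""
    a = np.asarray(a, dtype=float)
    return lo + (hi - lo) * (a - a.min()) / (a.max() - a.min())

def rmse(y_true, y_pred):
    """Root-mean-square error between targets and predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Features normalized to [0, 1], targets to [-1, 1], as stated above:
X = normalize(np.random.randn(100, 3), 0.0, 1.0)
y = normalize(np.random.randn(100), -1.0, 1.0)
```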
Sinc function: X = -10:0.05:10, Train: 351, Test: 50. Parameters listed as (hidden neurons, h, k).
Original ELM (10): RMSE 1.95E-1
Weighted ELM (10, 0.01): RMSE 9.41E-5
Locally Weighted ELM (10, 1, 20): RMSE 1.53E-4
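The sinc benchmark (X = -10:0.05:10 gives 401 points, of which 351 are used for training and 50 for testing) could be generated as follows; the random split matches the slides' "randomly chosen" setup, and the fixed seed is an assumption for reproducibility:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-10, 10, 401)   # MATLAB-style -10:0.05:10 (401 points)
y = np.sinc(X / np.pi)          # np.sinc(x) = sin(pi*x)/(pi*x), so this is sin(X)/X

idx = rng.permutation(len(X))
train_idx, test_idx = idx[:351], idx[351:]   # 351 training, 50 testing samples
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
```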
Function: X = -5:0.05:5, Train: 151, Test: 50. Parameters listed as (hidden neurons, h, k).
Original ELM (10): RMSE 2.81E-1
Weighted ELM (10, 0.01): RMSE 1.39E-4
Locally Weighted ELM (10, 1, 20): RMSE 8.15E-4
T2FNN: RMSE 1.3E-3
Function: x1, x2, x3 = -1:0.005:1, Train: 351, Test: 50. Parameters listed as (hidden neurons, h, k).
Original ELM (10): RMSE 1.41E-4
Weighted ELM (10, 0.01): RMSE 3.09E-6
Locally Weighted ELM (10, 1, 20): RMSE 2.61E-5
Machine CPU: 6 features, Train: 100, Test: 109. Parameters listed as (hidden neurons, h, k).
Original ELM (10): RMSE 0.111342
Weighted ELM (10, 0.9): RMSE 0.103473
Locally Weighted ELM (10, 1, 40): RMSE 0.105663
Auto Price: 15 features (1 nominal, 14 continuous), Train: 80, Test: 79. Parameters listed as (hidden neurons, h, k).
Original ELM (15): RMSE 0.201255
Weighted ELM (10, 0.9): RMSE 0.189584
Locally Weighted ELM (10, 0.9, 50): RMSE 0.193568
Cancer: 32 features, Train: 100, Test: 94. Parameters listed as (hidden neurons, h, k).
Original ELM (10): RMSE 0.533656
Weighted ELM (3, 0.9): RMSE 0.528415
Locally Weighted ELM (3, 1, 40): RMSE 0.532317
Network structure: input layer, hidden layer, output layer. The weights between the input layer and the hidden layer, and the biases of the hidden-layer neurons, are chosen randomly.
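A minimal ELM sketch matching this description: random input-to-hidden weights and biases, with the hidden-to-output weights solved in closed form via least squares. The sigmoid activation and the pseudoinverse solve are standard ELM choices assumed here; the slides do not specify them.

```python
import numpy as np

def train_elm(X, y, n_hidden, rng=None):
    """Basic ELM: random input weights W and biases b are fixed;
    output weights beta = pinv(H) @ y are solved by least squares."""
    rng = np.random.default_rng(0) if rng is None else rng
    W = rng.standard_normal((X.shape[1], n_hidden))  # input->hidden weights (random)
    b = rng.standard_normal(n_hidden)                # hidden biases (random)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden-layer output
    beta = np.linalg.pinv(H) @ y                     # hidden->output weights (solved)
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy 1-D regression with 10 hidden neurons, as in the tables above:
X = np.linspace(-5, 5, 151).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = train_elm(X, y, n_hidden=10)
y_hat = predict_elm(X, W, b, beta)
```

Because only beta is learned, training reduces to a single linear solve, which is what makes ELM fast relative to gradient-trained networks.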
Example
Locally Weighted ELM: find the k training samples nearest to each testing sample.
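The neighbor-selection step above might look like this; Euclidean distance is an assumption, since the slides do not specify the metric:

```python
import numpy as np

def k_nearest(X_train, x_test, k):
    """Return indices of the k training samples closest to x_test."""
    d = np.linalg.norm(X_train - x_test, axis=1)   # Euclidean distances
    return np.argsort(d)[:k]

# For each test point, a local model would then be trained on only these k samples:
X_train = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
idx = k_nearest(X_train, np.array([1.2]), k=2)
```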
Paper data: randomly chosen weights and biases, and the outputs of the nearest training samples (feature selection…?).