Enhanced MWO Training Algorithm to Improve Classification Accuracy


1 Enhanced MWO Training Algorithm to Improve Classification Accuracy of Artificial Neural Networks
Ahmed Abusnaina, Rosni Abdullah, Ali Kattan

2 Outline Introduction Objectives Proposed Method Results Conclusion
Ahmad Abusnaina SCDM'14

3 Introduction Artificial Neural Network (ANN)
The ANN is an interconnected group of processing units ("artificial neurons") linked by a set of adjustable weights. These neurons apply a mathematical model of information processing to accomplish a variety of tasks.

4 Introduction The most challenging aspects of ANNs are:
The learning (training) mechanism, which adjusts the neurons' weight values to minimize the error function. The mechanism of information flow, which depends on the ANN structure.

5 ANN Training Gradient-descent (GD) algorithms
Gradient-based techniques such as Back-propagation (BP) suffer from slow convergence and easily fall into local minima. Population-based metaheuristic training algorithms rely on global optimization methods to overcome these drawbacks of GD learning algorithms. The search space of the ANN weight-training process is treated as a continuous optimization problem: it is high-dimensional and multimodal, and may be corrupted by noise or missing data.
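Treating training as continuous optimization means every candidate weight vector is scored by an error function. A minimal sketch of such a fitness evaluation, assuming a one-hidden-layer network with sigmoid activations and an illustrative weight-packing order (the function and variable names are not from the paper):

```python
import numpy as np

def ann_mse_fitness(weights, X, y, n_hidden):
    """Mean-squared error of a 1-hidden-layer feed-forward ANN whose
    weights (including biases) are packed into one flat vector.
    The packing order used here is an illustrative assumption."""
    n_in, n_out = X.shape[1], y.shape[1]
    # unpack: input->hidden weights, hidden biases, hidden->output weights, output biases
    i = 0
    W1 = weights[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = weights[i:i + n_hidden]; i += n_hidden
    W2 = weights[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = weights[i:i + n_out]
    hidden = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))    # sigmoid hidden layer
    output = 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))  # sigmoid output layer
    return float(np.mean((output - y) ** 2))
```

A population-based trainer only needs this scalar score per candidate, which is why no gradient of the network is required.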

6 Objectives This paper proposes an enhanced version of MWO: the E-MWO algorithm. The proposed method is intended to overcome the weaknesses of the MWO algorithm. The E-MWO aims to improve the classification accuracy of the ANN and to minimize the ANN training time.

7 Facts About Mussels Mussels are a species of mollusk, usually thriving in rocky-shore and soft-bottom habitats. For survival, growth, and reproduction, mussels depend on physical processes, such as water flow rate, temperature, and salinity, and on biological processes, such as the amount of available food resources.

8 Mussels Wandering Optimization Algorithm (MWO)
MWO is a novel ecologically inspired meta-heuristic algorithm for global optimization, proposed by Jing An et al. (2013). MWO is inspired by mussels' movement behavior as they form bed patterns in their surrounding habitat. The population consists of N individual mussels located in a certain spatial region of the marine bed called the habitat. The habitat is mapped to the d-dimensional space Sd of the problem to be optimized, where the objective function value f(s) at each point s ∈ Sd represents the nutrition provided by the habitat.

9 Mussels Wandering Optimization Algorithm (MWO)
The MWO algorithm is composed of six steps:
1. Initialize the mussel population and the algorithm parameters.
2. Calculate the short-range density ζs and long-range density ζl for each mussel.
3. Determine the movement strategy for each mussel.
4. Update the positions of all mussels.
5. Evaluate the fitness of each mussel mi after the position update.
6. Examine the termination criteria.

10 Algorithm 1 - The MWO algorithm
1  Initialization: Set t = 0;
2  Set the algorithm parameters (including the shape parameter µ);
3  FOR (mussel mi, i = 1 to N)
4      Uniformly randomize the initial position xi(0) of mussel mi;
5      Calculate the fitness value of the initial mussel: f(xi(0));
6  END FOR
7  Find the global best mussel and record its position as xg;
8  Iteration: // G: maximum number of iterations, ε: predefined precision
9  WHILE (t < G, or f(x*) > ε)
10     Calculate the distance from mi to the other mussels by Eq. (1);
11     Calculate the short-range reference and long-range reference by Eq. (2);
12     Calculate the short-range density and long-range density by Eq. (3) and Eq. (4);
13     Compute the moving probability Pi(t) according to Eq. (5);
14     IF Pi = 1 THEN
15         Generate the step length li(t) from a Lévy distribution by Eq. (6)
16     ELSE li(t) = 0
17     END IF
18     Compute the new position coordinate x′i(t) using Eq. (7);
19     Evaluate the fitness value of the new position f(x′i(t));
20     Rank all mussels according to fitness, from best to worst;
21     Find the global best mussel and update the best position xg;
22     Set t = t + 1;
23 END WHILE
24 Output the optimized results and end the algorithm
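The loop above can be sketched in Python. The paper's Eqs. (1)-(7) are not reproduced here, so the densities, moving probability, and Lévy step below are simple stand-ins that play the same structural roles; all names and constants are illustrative:

```python
import numpy as np

def mwo_minimize(f, dim, n=30, iters=200, mu=1.5, seed=0):
    """Minimal sketch of the MWO loop on an arbitrary objective f.
    Densities, moving probability, and the Levy step are stand-ins
    for the paper's Eqs. (1)-(7), not the published formulas."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, size=(n, dim))   # initial mussel positions
    fit = np.array([f(p) for p in pos])
    g = pos[fit.argmin()].copy()                  # global-best mussel x_g
    for _ in range(iters):
        # pairwise distances (role of Eq. 1)
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
        # short/long-range references (role of Eq. 2)
        r_s, r_l = np.quantile(d, 0.25), np.quantile(d, 0.75)
        zeta_s = (d < r_s).mean(axis=1)           # short-range density (Eq. 3 role)
        zeta_l = (d < r_l).mean(axis=1)           # long-range density (Eq. 4 role)
        p_move = np.clip(0.5 + zeta_l - zeta_s, 0.0, 1.0)  # moving probability (Eq. 5 role)
        for i in range(n):
            if rng.random() < p_move[i]:
                # heavy-tailed (Levy-like) step length (Eq. 6 role)
                step = 0.1 * (1.0 - rng.random()) ** (-1.0 / mu)
                # move toward the global best (Eq. 7 role)
                pos[i] += rng.random() * step * (g - pos[i])
        fit = np.array([f(p) for p in pos])
        if fit.min() < f(g):                      # rank and update the best position
            g = pos[fit.argmin()].copy()
    return g, f(g)
```

On a simple objective such as the sphere function, the global-best fitness is monotonically non-increasing by construction.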

11 MWO Shortcomings and Proposed Solutions

Shortcoming | Lines in MWO | Proposed solution in E-MWO | Lines in E-MWO
Possibility of falling into premature convergence | 7, 18 & 21 | Introducing a new hybrid-selection scheme | 7, 19, 25, 26 & 27
Lack of explorative ability, due to a single step length at the position update | 15 | Using a multi-step length to make the algorithm more explorative | 16
The control parameters are set statically, by trial and error | 2 | Setting the value of µ dynamically and adaptively | 22, 23 & 24
The MWO terminates based on the number of iterations or a predefined precision | 9 | Using a dynamic termination criterion |

12 The Enhanced-Mussels Wandering Optimization (E-MWO)
1- Hybrid Selection Scheme to Guide the Population. The original MWO algorithm uses the global-best mussel as the guide for updating all other mussels, so MWO has a high possibility of falling into premature convergence. This problem is solved in E-MWO by using two selection schemes simultaneously: global-best and random selection. At each iteration, the mussel population keeps updating its ANN weight values as long as a new mussel becomes the global best or the global best attains a new fitness value. If the global best is repeated for T iterations with the same fitness value, another mussel is chosen randomly as the guidance mussel.
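A minimal sketch of this hybrid selection rule, assuming the population is kept sorted best-first as (fitness, mussel) pairs and that T and M (the size of the elite pool a random guide is drawn from) are tunable; all names and defaults are illustrative, not from the paper:

```python
import random

def choose_guide(history, population, T=5, M=5, rng=random):
    """Hybrid selection (sketch): follow the global best, but if its
    fitness has not changed over the last T iterations, pick a random
    mussel from the best M instead.  `history` holds the global-best
    fitness value recorded at each past iteration; `population` is a
    best-first-sorted list of (fitness, mussel) pairs."""
    stagnated = len(history) >= T and len(set(history[-T:])) == 1
    if stagnated:
        return rng.choice(population[:M])[1]   # random pick among the best M
    return population[0][1]                    # global-best guidance
```

The random branch restores diversity exactly when the global-best signal has stopped improving, which is the premature-convergence symptom the slide describes.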

13 The Enhanced-Mussels Wandering Optimization (E-MWO)
2- Adaptive Setting of E-MWO Parameters. The shape parameter (µ) is important for the MWO, as it determines the extent of mussel movement. The adaptive feature of the E-MWO sets the value of µ dynamically and adaptively, depending on two newly introduced variables: the Similarity Ratio (SR) and the Update Ratio (UR). Making µ adaptive and dynamic is essential: it avoids tedious experiments for finding a proper value of µ.
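The paper's update rule for µ (its Eq. (10)) is not reproduced on the slide, so the following is only an illustrative sketch of the idea: blend µ toward an exploration-favoring value when SR is high or UR is low (both signs of stagnation). Which numeric end of the Lévy shape range favors exploration depends on the exact Lévy formulation, so both endpoints are left as parameters:

```python
def adapt_mu(sr, ur, mu_exploit=1.2, mu_explore=2.5):
    """Sketch of adaptive shape-parameter setting (not the paper's
    Eq. (10)).  sr = similarity ratio, ur = update ratio, both in [0, 1].
    High similarity or a low update ratio suggests stagnation, so mu is
    blended toward the exploration-favoring endpoint."""
    stagnation = 0.5 * sr + 0.5 * (1.0 - ur)           # in [0, 1]
    return mu_exploit + stagnation * (mu_explore - mu_exploit)
```

This removes the need to hand-tune a single static µ: the population's own diversity statistics drive the parameter each iteration.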

14 The Enhanced-Mussels Wandering Optimization (E-MWO)
3- Multi-Step Length. The original adapted MWO uses one step length to update both the input-to-hidden and the hidden-to-output ANN weights. The E-MWO uses a separate step length for each layer-to-layer update: the first step length l1 is used when updating the input-to-hidden weights, and the second step length l2 is used when updating the hidden-to-output weights. The multi-step length makes the mussel population more diverse, and therefore more explorative.
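A sketch of the multi-step idea on a flat weight vector: the vector is split at the boundary between the two weight groups, and each part moves toward the guide mussel with its own step length. The move rule itself is simplified here (the paper's Eq. (7) is not reproduced), and the names are illustrative:

```python
import numpy as np

def multi_step_update(w, guide, split, l1, l2):
    """E-MWO multi-step update (sketch): `w` is a flat weight vector,
    split at index `split` into the input->hidden segment and the
    hidden->output segment; each segment moves toward `guide` with its
    own step length (l1, l2)."""
    new = w.copy()
    new[:split] += l1 * (guide[:split] - w[:split])   # input->hidden weights
    new[split:] += l2 * (guide[split:] - w[split:])   # hidden->output weights
    return new
```

Because the two segments can now move by different amounts, two mussels with equal single-step behavior can diverge, which is what adds diversity to the population.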

15 The Enhanced-Mussels Wandering Optimization (E-MWO)
4- Dynamic Termination Criterion. The termination criterion of the E-MWO depends on dynamic quality measures: the two newly introduced ratios, the Update Ratio UR and the Similarity Ratio SR. UR should not fall below ε1, while SR should not exceed ε2, i.e., UR > ε1 AND SR < ε2. As long as these two relations hold, the solution is still converging, so continuing the search yields better accuracy.
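The compound condition reads directly as code; a sketch combining it with the usual iteration budget (line 9 of Algorithm 2 below uses the same three clauses):

```python
def should_continue(t, ur, sr, max_iters, eps1, eps2):
    """E-MWO loop-continuation test (sketch): keep iterating while the
    iteration budget is not exhausted, the update ratio UR stays above
    eps1, and the similarity ratio SR stays below eps2."""
    return t < max_iters and ur > eps1 and sr < eps2
```

Any one clause failing (budget spent, too few mussels still improving, or the population grown too similar) stops the search.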

16 Algorithm 2 - The E-MWO algorithm
1  Initialization: Set t = 0;
2  FOR (mussel mi, i = 1 to N)
3      Uniformly randomize the initial position xi(0) of mussel mi;
4      Calculate the initial fitness value of the mussel: f(xi(0));
5  END FOR
6  Find the global best mussel and record its position as xg;
7  Set the global best mussel as the selected mussel ms;
8  Iteration:
9  WHILE (t < MaxIterations AND UR > ε1 AND SR < ε2)
10     FOR (mussel mi, i = 1 to N)
11         Calculate the distance from mi to all other mussels by Eq. (1);
12         Calculate the short-range reference and long-range reference by Eq. (2);
13         Calculate the short-range density and long-range density by Eq. (3) and Eq. (4);
14         Compute the moving probability Pi(t) according to Eq. (5);
15         IF Pi = 1 THEN
16             Generate all the step lengths lij(t) from a Lévy distribution by Eq. (6)
17         ELSE lij(t) = 0
18         END IF
19         Compute the new position coordinate x′i(t) using Eq. (7), according to ms;
20         Evaluate the fitness value of the new position f(x′i(t));
21     END FOR
22     Calculate the Similarity Ratio SR by Eq. (8);
23     Calculate the Update Ratio UR by Eq. (9);
24     Calculate the new value of the shape parameter µ by Eq. (10);
25     Rank all mussels by their fitness, find the global best mussel, and set it as the selected mussel ms;
26     IF (the global best mussel is the same for the last T iterations AND has the same fitness value) THEN
27         Select a mussel randomly from the best M of the mussel population and set it as the selected mussel ms;
28     Set t = t + 1;
29 END WHILE
30 Output the optimized results and end the algorithm

17 The E-MWO-based Training of ANN
The feed-forward ANN weights (including biases) are adjusted and tuned using the E-MWO algorithm to solve a given classification problem, as illustrated in the figure below.
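Once E-MWO has tuned a weight vector, the resulting network is scored by classification accuracy. A minimal sketch of that scoring step, where `forward` stands for any function mapping inputs to per-class scores (the decoding of the weight vector into a network is assumed done elsewhere; names are illustrative):

```python
import numpy as np

def classification_accuracy(forward, X, labels):
    """Score a candidate network: run the forward pass, take the argmax
    output neuron as the predicted class, and compare with the true
    labels.  `forward` maps an input matrix to a (samples, classes)
    score matrix."""
    scores = forward(X)
    return float(np.mean(scores.argmax(axis=1) == labels))
```

This is the quantity reported in Table 1 below, while Table 2 reports how long the optimizer took to reach it.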

18 Experimental Setup and Results
To evaluate the E-MWO optimization algorithm, four widely used benchmark classification datasets were used, obtained from the UCI Machine Learning Repository: the Iris, Glass, Wisconsin Breast Cancer, and Diabetes datasets.

19 The Effect of Shape Parameter on the MWO

20 Table 1. Classification accuracy (%) of ANN for different datasets (algorithms, in column order: E-MWO, MWO, HS-BtW, GA, BP; merged cells in the source mean some rows list fewer than five values).
Iris     Best: 100.0 / 100 / 96.6                Mean: 91.0 / 89.6 / 86.8 / 84.6
Cancer   Best: 98.5 / 99.2 / 97.8 / 97.1         Mean: 97.3 / 98.2 / 97.4 / 96.1
Diabetes Best: 92.8 / 79.0 / 77.9 / 79.2 / 78.0  Mean: 74.5 / 75.3 / 73.8 / 75.4
Glass    Best: 95.3 / 60.4 / 72.0 / 62.7 / 58.7  Mean: 49.1 / 58.8 / 45.2 / 60.1

21 Table 2. Training time of ANN for different datasets (time in seconds; – marks a value not recoverable from the source).
               E-MWO    MWO    HS-BtW       GA         BP
Iris     Best    6.0    5.0     49.0       10.0    1,132.0
         Mean    2.7    4.4     58.3        8.0      826.4
Cancer   Best   26.0   30.0        –    1,355.0    3,909.0
         Mean   25.9   29.9     64.9    1,349.4    4,097.3
Diabetes Best   16.0      –        –      175.0   16,311.0
         Mean   27.1   25.8     72.7      392.3   14,112.0
Glass    Best   20.0   18.0     64.0      740.0    6,104.0
         Mean   18.1   18.7     69.1      639.7    3,003.1

22 Conclusions & Future Works
In this paper, an enhanced version of MWO is proposed: the E-MWO. The E-MWO is adaptive with respect to the shape parameter, uses a multi-step length, uses a hybrid selection scheme, and uses a dynamic termination criterion. Feed-forward artificial neural networks have been trained by adapting the E-MWO algorithm, tackling real-world pattern-classification problems.

23 Conclusions & Future Works
The obtained results indicate that the E-MWO and MWO algorithms are superior in terms of convergence time and competitive in terms of classification accuracy. Future work includes further analysis of the MWO algorithm, focusing on the premature-convergence problem. In addition, a detailed description of the E-MWO and of how it overcomes the drawbacks of the MWO, with deeper analysis on more benchmark problems, remains to be completed; these are ongoing works by the authors.

24 Thank You Q & A

