Slides are based on Negnevitsky, Pearson Education, 2005. Lecture 12: Hybrid intelligent systems: Evolutionary neural networks and fuzzy evolutionary systems.

Lecture 12: Hybrid intelligent systems: Evolutionary neural networks and fuzzy evolutionary systems
- Introduction
- Evolutionary neural networks
- Fuzzy evolutionary systems
- Summary

Evolutionary neural networks
- Although neural networks are used for solving a variety of problems, they still have some limitations.
- One of the most common is associated with neural network training. The back-propagation learning algorithm cannot guarantee an optimal solution: in real-world applications it may converge to a set of sub-optimal weights from which it cannot escape, so the network is often unable to find a desirable solution to the problem at hand.

- Another difficulty is related to selecting an optimal topology for the neural network. The "right" network architecture for a particular problem is often chosen by means of heuristics, and designing a neural network topology is still more art than engineering.
- Genetic algorithms are an effective optimisation technique that can guide both weight optimisation and topology selection.

Encoding a set of weights in a chromosome

- The second step is to define a fitness function for evaluating the chromosome's performance. This function must estimate the performance of a given neural network; a simple choice is the sum of squared errors.
- The training set of examples is presented to the network, and the sum of squared errors is calculated. The smaller the sum, the fitter the chromosome. The genetic algorithm attempts to find a set of weights that minimises the sum of squared errors.
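As a sketch of this fitness evaluation, a chromosome can be decoded into the weights of a small feedforward network and scored by its sum of squared errors. The network shape (one hidden layer), the gene layout, and the sigmoid activation are assumptions of this sketch, not prescribed by the slides:

```python
import math

def sse_fitness(chromosome, samples, n_inputs=2, n_hidden=2):
    """Decode a flat weight chromosome into a one-hidden-layer network
    and return its sum of squared errors over the training set
    (the smaller the sum, the fitter the chromosome)."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    # assumed gene layout: hidden-neuron weights (+bias) first,
    # then output-neuron weights (+bias)
    k = (n_inputs + 1) * n_hidden
    w_hid = [chromosome[i * (n_inputs + 1):(i + 1) * (n_inputs + 1)]
             for i in range(n_hidden)]
    w_out = chromosome[k:k + n_hidden + 1]
    sse = 0.0
    for x, target in samples:
        h = [sig(sum(wi * xi for wi, xi in zip(w[:-1], x)) + w[-1])
             for w in w_hid]
        out = sig(sum(wo * hi for wo, hi in zip(w_out[:-1], h)) + w_out[-1])
        sse += (target - out) ** 2
    return sse
```

The genetic algorithm would then rank chromosomes by this value and favour those with smaller error.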

- The third step is to choose the genetic operators: crossover and mutation. A crossover operator takes two parent chromosomes and creates a single child with genetic material from both parents. Each gene in the child's chromosome is represented by the corresponding gene of the randomly selected parent.
- A mutation operator selects a gene in a chromosome and adds a small random value between −1 and 1 to each weight in this gene.
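The two operators described above can be sketched as follows; the per-gene mutation rate `p_mut` is a hypothetical parameter introduced for illustration:

```python
import random

def crossover(parent1, parent2):
    """Each child gene is copied from the corresponding gene of a
    randomly selected parent, as described above."""
    return [random.choice(pair) for pair in zip(parent1, parent2)]

def mutate(chromosome, p_mut=0.1):
    """With probability p_mut (an assumed rate), add a small random
    value between -1 and 1 to a gene's weight."""
    return [g + random.uniform(-1.0, 1.0) if random.random() < p_mut else g
            for g in chromosome]
```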

Crossover in weight optimisation

Mutation in weight optimisation

Can genetic algorithms help us in selecting the network architecture? The architecture of the network (i.e. the number of neurons and their interconnections) often determines the success or failure of the application. Usually the network architecture is decided by trial and error; there is a great need for a method of automatically designing the architecture for a particular application. Genetic algorithms may well be suited for this task.

- The basic idea behind evolving a suitable network architecture is to conduct a genetic search in a population of possible architectures.
- We must first choose a method of encoding a network's architecture into a chromosome.

Encoding the network architecture
- The connection topology of a neural network can be represented by a square connectivity matrix.
- Each entry in the matrix defines the type of connection from one neuron (column) to another (row): 0 means no connection, and 1 denotes a connection whose weight can be changed through learning.
- To transform the connectivity matrix into a chromosome, we need only string the rows of the matrix together.
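A minimal sketch of this encoding, assuming 0/1 entries and a list-of-lists matrix:

```python
def matrix_to_chromosome(conn):
    """Flatten a square connectivity matrix into a chromosome by
    stringing its rows together (row i lists the incoming
    connections of neuron i)."""
    return [bit for row in conn for bit in row]

def chromosome_to_matrix(chrom, n):
    """Inverse operation: rebuild the n-by-n connectivity matrix
    from the flattened chromosome."""
    return [chrom[i * n:(i + 1) * n] for i in range(n)]
```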

Encoding of the network topology

- Binary string representation for the network architecture
- The genetic algorithm proceeds in ten steps, as described on pages 222 and 289, respectively:
  - Step 1: Set initial values
  - Step 2: Define the fitness function
  - Step 3: Generate an initial population
  - Step 4: Decode an individual chromosome into a neural network

  - Step 5: Repeat Step 4 for each chromosome in the population
  - Step 6: Select a pair of chromosomes
  - Step 7: Create a pair of offspring chromosomes
  - Step 8: Place the created offspring chromosomes in the new population
  - Step 9: Repeat Step 6 to obtain more new chromosomes
  - Step 10: Go back to Step 4 until the fixed number of generations is reached
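The ten-step cycle can be sketched as a generic loop; the fitness, selection, crossover and mutation functions are placeholders for the problem-specific operators described above, and the loop structure is an assumption of this sketch:

```python
def evolve(init_pop, fitness, select, crossover, mutate, generations):
    """Generic GA cycle behind Steps 4-10: evaluate the population,
    breed a new population of the same size from selected parents,
    repeat for a fixed number of generations, and return the fittest
    individual in the final population."""
    pop = list(init_pop)
    for _ in range(generations):
        scored = [(fitness(ind), ind) for ind in pop]   # Steps 4-5
        new_pop = []
        while len(new_pop) < len(pop):                  # Steps 6-9
            p1, p2 = select(scored), select(scored)
            new_pop.append(mutate(crossover(p1, p2)))
        pop = new_pop                                   # Step 10
    return max(pop, key=fitness)
```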

The cycle of evolving a neural network topology

Fuzzy evolutionary systems
- Evolutionary computation is also used in the design of fuzzy systems, particularly for generating fuzzy rules and adjusting membership functions of fuzzy sets.
- In this section, we introduce an application of genetic algorithms to select an appropriate set of fuzzy IF-THEN rules for a classification problem.
- For a classification problem, a set of fuzzy IF-THEN rules is generated from numerical data.
- First, we use a grid-type fuzzy partition of an input space.

Fuzzy partition by a 3 × 3 fuzzy grid

Fuzzy partition
- Black and white dots denote the training patterns of Class 1 and Class 2, respectively.
- The grid-type fuzzy partition can be seen as a rule table.
- The linguistic values of input x1 (A1, A2 and A3) form the horizontal axis, and the linguistic values of input x2 (B1, B2 and B3) form the vertical axis.
- At the intersection of a row and a column lies the rule consequent.

In the rule table, each fuzzy subspace can have only one fuzzy IF-THEN rule, and thus the total number of rules that can be generated in a K × K grid is equal to K × K.

Fuzzy rules that correspond to the K × K fuzzy partition can be represented in a general form as:

  Rule Rij: IF x1p is Ai AND x2p is Bj THEN xp belongs to Cn, with certainty factor CF,

where xp = (x1p, x2p) is a training pattern on input space X1 × X2, p = 1, 2, ..., P, P is the total number of training patterns, Cn is the rule consequent (either Class 1 or Class 2), and CF is the certainty factor that a pattern in fuzzy subspace AiBj belongs to class Cn.

To determine the rule consequent and the certainty factor, we use the following procedure:
Step 1: Partition an input space into K × K fuzzy subspaces, and calculate the strength of each class of training patterns in every fuzzy subspace. Each class in a given fuzzy subspace is represented by its training patterns: the more training patterns, the stronger the class. In a given fuzzy subspace, the rule consequent becomes more certain when patterns of one particular class appear more often than patterns of any other class.
Step 2: Determine the rule consequent and the certainty factor in each fuzzy subspace.
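A sketch of Step 2, assuming the per-class strengths of one fuzzy subspace have already been computed in Step 1. The certainty-factor formula used here (winner strength minus the mean strength of the other classes, divided by the total strength) is one common definition and an assumption of this sketch:

```python
def rule_consequent(strengths):
    """Given the strength of each class in one fuzzy subspace,
    e.g. {'Class 1': 3.2, 'Class 2': 0.4}, return the rule
    consequent and its certainty factor.
    Assumed formula: CF = (beta_winner - mean(beta_others)) / sum(beta)."""
    total = sum(strengths.values())
    if total == 0.0:
        return None, 0.0   # no training patterns: consequent undetermined
    winner = max(strengths, key=strengths.get)
    others = [v for c, v in strengths.items() if c != winner]
    beta_bar = sum(others) / len(others) if others else 0.0
    return winner, (strengths[winner] - beta_bar) / total
```

Note how this matches the interpretation that follows: if one class holds all the patterns the factor is 1, and if two classes have equal strength it drops to 0.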

The certainty factor can be interpreted as follows:
- If all the training patterns in fuzzy subspace AiBj belong to the same class, then the certainty factor is maximum and it is certain that any new pattern in this subspace will belong to this class.
- If, however, training patterns belong to different classes and these classes have similar strengths, then the certainty factor is minimum and it is uncertain that a new pattern will belong to any particular class.

- This means that patterns in a fuzzy subspace can be misclassified. Moreover, if a fuzzy subspace does not have any training patterns, we cannot determine the rule consequent at all.
- If a fuzzy partition is too coarse, many patterns may be misclassified. On the other hand, if a fuzzy partition is too fine, many fuzzy rules cannot be obtained, because of the lack of training patterns in the corresponding fuzzy subspaces.

Training patterns are not necessarily distributed evenly in the input space. As a result, it is often difficult to choose an appropriate density for the fuzzy grid. To overcome this difficulty, we use multiple fuzzy rule tables.

Multiple fuzzy rule tables
Fuzzy IF-THEN rules are generated for each fuzzy subspace of the multiple fuzzy rule tables, and thus a complete set of rules for our case can be specified as:

  2 × 2 + 3 × 3 + 4 × 4 + 5 × 5 + 6 × 6 = 90 rules.
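A quick check that the rule count above adds up, one rule per subspace across the 2 × 2 through 6 × 6 tables:

```python
# one fuzzy IF-THEN rule per subspace of each K x K table, K = 2..6
total_rules = sum(k * k for k in range(2, 7))   # 4 + 9 + 16 + 25 + 36
```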

Once the set of rules S_ALL is generated, a new pattern, x = (x1, x2), can be classified by the following procedure:
Step 1: In every fuzzy subspace of the multiple fuzzy rule tables, calculate the degree of compatibility of the new pattern with each class.
Step 2: Determine the maximum degree of compatibility of the new pattern with each class.
Step 3: Determine the class with which the new pattern has the highest degree of compatibility, and assign the pattern to this class.
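These three steps can be sketched as follows. The rule representation (a pair of membership functions plus a consequent and certainty factor) and the use of the product mu_a(x1) × mu_b(x2) × CF as the degree of compatibility are assumptions of this sketch:

```python
def classify(pattern, rules):
    """Classify a new pattern x = (x1, x2) against a rule set.
    Each rule is assumed to be a tuple (mu_a, mu_b, consequent, cf),
    where mu_a and mu_b are the membership functions of its fuzzy sets."""
    x1, x2 = pattern
    best = {}
    for mu_a, mu_b, consequent, cf in rules:
        degree = mu_a(x1) * mu_b(x2) * cf                  # Step 1
        best[consequent] = max(best.get(consequent, 0.0), degree)  # Step 2
    return max(best, key=best.get) if best else None       # Step 3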

The number of multiple fuzzy rule tables required for an accurate pattern classification may be large. Consequently, a complete set of rules can be enormous. Meanwhile, these rules have different classification abilities, and thus by selecting only rules with high potential for accurate classification, we reduce the number of rules.

Can we use genetic algorithms for selecting fuzzy IF-THEN rules?
- The problem of selecting fuzzy IF-THEN rules can be seen as a combinatorial optimisation problem with two objectives.
- The first, more important, objective is to maximise the number of correctly classified patterns.
- The second objective is to minimise the number of rules.
- Genetic algorithms can be applied to this problem.

A basic genetic algorithm for selecting fuzzy IF-THEN rules includes the following steps:
Step 1: Randomly generate an initial population of chromosomes. The population size may be relatively small, say 10 or 20 chromosomes. Each gene in a chromosome corresponds to a particular fuzzy IF-THEN rule in the rule set defined by S_ALL.
Step 2: Calculate the performance, or fitness, of each individual chromosome in the current population.

The problem of selecting fuzzy rules has two objectives: to maximise the accuracy of the pattern classification and to minimise the size of a rule set. The fitness function has to accommodate both these objectives. This can be achieved by introducing two respective weights, w_P and w_N, in the fitness function:

  f(S) = w_P × P_S / P_ALL − w_N × N_S / N_ALL,

where P_S is the number of patterns classified successfully, P_ALL is the total number of patterns presented to the classification system, and N_S and N_ALL are the numbers of fuzzy IF-THEN rules in set S and set S_ALL, respectively.
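A direct transcription of this fitness function; the default weight values are illustrative examples (with w_P much larger than w_N, since accuracy matters more), not prescribed by the slides:

```python
def rule_set_fitness(p_s, p_all, n_s, n_all, w_p=10.0, w_n=1.0):
    """Fitness of a rule set S: reward classification accuracy,
    penalise rule-set size. w_p and w_n are assumed example weights."""
    return w_p * p_s / p_all - w_n * n_s / n_all
```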

The classification accuracy is more important than the size of a rule set. That is, the weights are chosen so that 0 < w_N ≤ w_P.

Step 3: Select a pair of chromosomes for mating. Parent chromosomes are selected with a probability associated with their fitness; a fitter chromosome has a higher probability of being selected.
Step 4: Create a pair of offspring chromosomes by applying a standard crossover operator. Parent chromosomes are crossed at the randomly selected crossover point.
Step 5: Perform mutation on each gene of the created offspring. The mutation probability is normally kept quite low. The mutation is done by multiplying the gene value by −1.
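Steps 4 and 5 can be sketched as follows, assuming genes take the values 1 (rule included in the set) and −1 (rule excluded); the default mutation rate is illustrative only:

```python
import random

def one_point_crossover(p1, p2):
    """Step 4: cross the two parents at a randomly selected point,
    producing a pair of offspring chromosomes."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def flip_mutation(chrom, p_mut=0.01):
    """Step 5: multiply a gene's value by -1 with a small probability
    (p_mut is an assumed rate), toggling the rule in or out of the set."""
    return [-g if random.random() < p_mut else g for g in chrom]
```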

Step 6: Place the created offspring chromosomes in the new population.
Step 7: Repeat Step 3 until the size of the new population becomes equal to the size of the initial population, and then replace the initial (parent) population with the new (offspring) population.
Step 8: Go to Step 2, and repeat the process until a specified number of generations (typically several hundred) has been considered.
The number of rules can be cut down to less than 2% of the initially generated set of rules.