Ch. Eick: Evolutionary Machine Learning
Learning Paradigms and General Aspects of Learning
- Different Forms of Learning: the learning agent receives feedback with respect to its actions (e.g., from a teacher).
  - Supervised Learning: feedback is received with respect to all possible actions of the agent.
  - Reinforcement Learning: feedback is received only with respect to the action the agent actually took.
  - Unsupervised Learning: learning when there is no hint at all about the correct action.
- Inductive Learning is a form of supervised learning that centers on learning a function from sets of training examples. Popular inductive learning techniques include decision trees, neural networks, nearest-neighbor approaches, discriminant analysis, and regression.
- The performance of an inductive learning system is usually evaluated using n-fold cross-validation.

Classifier Systems
- According to Goldberg [113], a classifier system is "a machine learning system that learns syntactically simple string rules to guide its performance in an arbitrary environment".
- A classifier system consists of three main components:
  - Rule and message system
  - Apportionment of credit system
  - Genetic algorithm (for evolving classifiers)
- First implemented in a system called CS1 by Holland and Reitman (1978).
- Classifier rules are condition:action strings over the alphabet {0, 1, #}, where # is a don't-care symbol, e.g. ##00:0001.
- The fitness of a classifier is defined by its surrounding environment, which pays payoff to classifiers and extracts fees from them.
- Classifier systems employ a Michigan approach (populations consist of single rules) in the context of an externally defined fitness function.
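The matching semantics of the ternary rule language above can be sketched in a few lines; the helper names are illustrative, not from the slides:

```python
# A classifier condition over {0, 1, #} matches a binary environment
# message if every position agrees, where '#' matches either bit.
def matches(condition, message):
    return len(condition) == len(message) and all(
        c == '#' or c == m for c, m in zip(condition, message))

# A classifier as a (condition, action) pair, e.g. the rule ##00:0001.
rule = ("##00", "0001")
```

With this rule, the messages 0000, 0100, 1000, and 1100 all match, since the first two positions are don't-cares.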

Challenges in Developing Michigan-style Classifier Systems
- We need a set of rules that can solve problems collaboratively, comparable to finding a good soccer or baseball team.
- We want a set of rules that covers all the important situations, not a set of rules that can only handle very specialized situations: coverage is an important issue!
- Delayed rewards pose particular problems.
- Only the rules responsible for a chosen decision should be rewarded/penalized.
- 'Lazy' and inactive rules need to be removed.
- Rules whose reward behavior is predictable are preferable to rules whose reward behavior is harder to predict; cf. prediction error and experience in XCS.
[Figure: a chain of rules r1 through r6 firing in sequence, with an example environmental reward of +5 at the end of the chain]

Bucket Brigade Algorithm
- Developed by Holland for the apportionment of credit; it relies on the model of a service economy consisting of two main components: an auction and a clearing house.
- The environment as well as the classifiers post messages.
- Each classifier maintains a bank account that measures its strength. Classifiers that match a posted string make a bid proportional to their strength. Usually, the highest-bidding classifier is selected to post its message (other, more parallel schemes are also used).
- The auction permits appropriate classifiers to post their messages. Once a classifier is selected for activation, it must clear its payment through the clearing house, paying its bid to other classifiers or to the environment for the matching messages rendered. A matched and activated classifier sends its bid to those classifiers responsible for sending the messages that matched the bidding classifier's conditions. The bid money is distributed in some manner among those classifiers.

Bucket Brigade (continued)
- Rules that cooperate with a classifier are rewarded by receiving that classifier's bid: the last classifier in a chain receives the environmental reward, and every other classifier receives its reward from its successor in the chain. A classifier's strength may also be subject to taxation; the idea underlying taxation is to punish inactive classifiers:
    T_i(t) := c_tax * S_i(t)
- The strength of a classifier is updated using the following equation, where P_i(t) is the payment (bid) made, T_i(t) the tax, and R_i(t) the reward received:
    S_i(t+1) = S_i(t) - P_i(t) - T_i(t) + R_i(t)
  A classifier bids proportionally to its strength:
    B_i = c_bid * S_i
- Genetic algorithms are used to evolve classifiers. A classifier's strength defines its fitness; fitter classifiers reproduce with higher probability (e.g., roulette-wheel selection might be employed), and binary-string mutation and crossover operators are used to generate new classifiers. Newly generated classifiers replace weak, low-strength classifiers (other schemes, such as crowding, could also be employed).
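One auction cycle of the strength update above can be sketched directly from the slide's equations. The constants and the single-predecessor simplification are illustrative assumptions, not part of the slides:

```python
# Sketch of one bucket-brigade cycle using the slide's equations:
# B_i = c_bid * S_i,  T_i(t) = c_tax * S_i(t),
# S_i(t+1) = S_i(t) - P_i(t) - T_i(t) + R_i(t).
C_BID, C_TAX = 0.1, 0.01   # illustrative constants

def step(strengths, active, predecessor, env_reward=0.0):
    """The winning classifier `active` pays its bid to `predecessor`
    (the sender of the matched message) and receives `env_reward` if it
    ends the chain; every classifier pays the inactivity tax."""
    bid = C_BID * strengths[active]
    updated = {}
    for i, s in strengths.items():
        payment = bid if i == active else 0.0   # P_i(t)
        tax = C_TAX * s                         # T_i(t)
        reward = 0.0                            # R_i(t)
        if i == predecessor:
            reward += bid         # cooperating rule receives the bid
        if i == active:
            reward += env_reward  # end-of-chain environmental payoff
        updated[i] = s - payment - tax + reward
    return updated
```

For example, with two classifiers of strength 10.0 each, if r2 wins the auction, pays its bid of 1.0 to r1, and receives an environmental reward of 5.0, then r2 ends at 13.9 and r1 at 10.9 after taxation.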

Pittsburgh-style Systems
- Populations consist of rule-sets, not of individual rules.
- No bucket brigade algorithm is necessary.
- Mechanisms to evaluate individual rules are usually missing.
- Michigan-style systems are geared towards applications with dynamically changing requirements ("models of adaptation"); Pitt-style systems rely on more static environments, assuming a fixed fitness function for rule-sets, which is not necessary in the Michigan approach.
- Pittsburgh-approach systems usually have to cope with variable-length chromosomes.
- Popular Pittsburgh-style systems include:
  - Smith's LS-1 system (learns symbolic rule-sets)
  - Janikow's GIL system (learns symbolic rules; employs operators of Michalski's inductive learning theory as its genetic operators)
  - Giordana & Saitta's REGAL (learns symbolic concept descriptions)
  - DELVAUX (learns (numerical) Bayesian rule-sets)
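The key contrast with the Michigan approach is that each Pittsburgh individual is an entire rule-set evaluated as a unit, which is why variable-length chromosomes arise naturally. A minimal, hypothetical sketch of such a rule-set fitness function (names and first-match semantics are illustrative assumptions):

```python
# One Pittsburgh individual is a whole rule-set: a list of
# (condition, action) pairs, possibly of different lengths.
def match(condition, message):
    # '#' is a don't-care symbol, as in the classifier rule language.
    return all(c == '#' or c == m for c, m in zip(condition, message))

def ruleset_fitness(ruleset, examples):
    """Externally defined fitness of one individual: the fraction of
    examples on which the first matching rule's action is correct."""
    correct = 0
    for message, target in examples:
        for condition, action in ruleset:
            if match(condition, message):
                correct += (action == target)
                break  # first-match conflict resolution
    return correct / len(examples)
```

Note that this assigns credit only to whole rule-sets; no mechanism like the bucket brigade is needed to score individual rules.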

New Trends in Learning Classifier Systems (LCS)
- Holland-style LCS work is very similar to work in reinforcement learning, especially evolutionary reinforcement learning and an approach called "Q-learning". Newer papers claim that the bucket brigade and Q-learning are basically the same thing, and that LCS can benefit from recent advances in the area of Q-learning.
- Wilson's accuracy-based XCS has received significant attention in the literature (to be covered later).
- Holland stresses the adaptive component of "his invention" in his newer work.
- Recently, many Pittsburgh-style systems have been designed that learn rule-based systems using evolutionary computing and are quite different from Holland's data-driven message-passing systems, such as:
  - Systems that learn Bayesian rules or Bayesian belief networks
  - Systems that learn fuzzy rules
  - Systems that learn first-order logic rules
  - Systems that learn PROLOG-style programs
- Work somewhat similar to classifier systems has become quite popular in the field of agent-based systems that have to learn how to communicate and collaborate in a distributed environment.

Important Parameters for XCS
XCS learns/maintains the following parameters for all its classifiers during the course of its operation:
- p is the expected payoff; it has a strong influence (combined with the rule's fitness value) on whether a matching classifier's action is selected for execution.
- ε is the error made in predicting the payoff.
- F (called fitness) denotes a classifier's "normalized accuracy"; accuracy is the inverse of the degree of error made by a classifier. F combined with a determines which classifiers are chosen to be deleted from the population, and F combined with p determines which actions of competing classifiers are selected for execution.
- a estimates the average size of the action sets this classifier has belonged to; the smaller a/F is, the less likely it becomes that this classifier is deleted.
- exp (experience) counts how often the classifier has belonged to an action set; it has some influence on the estimation of the other parameters: if exp is low, default parameters are used when estimating the other parameters (especially ε, F, and a).
- Moreover, it is important to know that only classifiers belonging to the action set are considered for reproduction.
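How these parameters evolve can be sketched with Widrow-Hoff-style updates over an action set. The learning rate `BETA` and the accuracy formula below are common XCS choices but are not stated on the slide, so treat this as an assumption-laden sketch rather than the definitive XCS update:

```python
# Hedged sketch of per-classifier XCS parameter updates for the
# parameters named on the slide: payoff prediction p, prediction error
# eps, fitness F, action-set size estimate a, and experience exp.
BETA = 0.2             # learning rate (assumed)
EPS0 = 10.0            # error below which a classifier counts as accurate
ALPHA, NU = 0.1, 5.0   # accuracy fall-off parameters (assumed)

class Classifier:
    def __init__(self, condition, action):
        self.condition, self.action = condition, action
        self.p = 10.0    # expected payoff
        self.eps = 0.0   # payoff prediction error
        self.F = 0.1     # fitness ("normalized accuracy")
        self.a = 1.0     # action-set size estimate
        self.exp = 0     # experience

def accuracy(cl):
    # Fully accurate below the eps0 threshold, decreasing in eps above it.
    if cl.eps < EPS0:
        return 1.0
    return ALPHA * (cl.eps / EPS0) ** -NU

def update_action_set(action_set, payoff):
    for cl in action_set:
        cl.exp += 1
        cl.eps += BETA * (abs(payoff - cl.p) - cl.eps)
        cl.p += BETA * (payoff - cl.p)
        cl.a += BETA * (len(action_set) - cl.a)
    # Fitness moves toward the classifier's accuracy relative to the
    # total accuracy in the action set ("normalized accuracy").
    total = sum(accuracy(c) for c in action_set)
    for cl in action_set:
        cl.F += BETA * (accuracy(cl) / total - cl.F)
```

Consistent with the last bullet, only members of the action set are passed to `update_action_set`, and only they would be considered for reproduction.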