CITS7212 Computational Intelligence CI Technologies.

Particle swarm optimisation
- A population-based stochastic optimisation technique (Eberhart and Kennedy, 1995)
- Inspired by bird-flocking
- Imagine a flock of birds searching a landscape for food:
  - Each bird is currently at some point in the landscape
  - Each bird flies continually over the landscape
  - Each bird remembers where it has been and how much food was there
  - Each bird is influenced by the findings of the other birds
- Collectively the birds explore the landscape and share the resulting food

PSO
- For our purposes:
  - The landscape represents the possible solutions to a problem (i.e. the search space)
  - Time moves in discrete steps called generations
  - At a given generation, each bird has a position in the landscape and a velocity
- Each bird knows:
  - the best-scoring point it has visited itself (its personal best, pbest)
  - the best-scoring point visited by any bird (the global best, gbest)
- At each generation, for each bird:
  - Update (stochastically) its velocity v, favouring pbest and gbest
  - Use v to update its position
  - Update pbest and gbest as appropriate

PSO
- Initialisation can be by many means, but often is just done randomly
- Termination criteria also vary, but termination is often:
  - after a fixed number of generations, or
  - after convergence is "achieved", e.g. if gbest doesn't improve for a while, or
  - after a solution is discovered that is better than a given standard
- Performance-wise:
  - a larger population usually gives better results
  - a larger number of generations usually gives better results
  - but both obviously have computational costs
- PSO is clearly an evolutionary searching algorithm, but co-operation is via gbest, rather than via crossover and survival as in EAs
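The update loop above can be sketched in Python. This is a minimal global-best PSO for minimisation; the parameter values (inertia weight w, acceleration coefficients c1 and c2, population size) are illustrative assumptions, not values prescribed by the slides.

```python
import random

def pso(f, dim, bounds, n_birds=30, n_generations=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise f over [lo, hi]^dim with a basic global-best PSO (a sketch)."""
    lo, hi = bounds
    # Random initialisation of positions; velocities start at zero
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_birds)]
    vel = [[0.0] * dim for _ in range(n_birds)]
    pbest = [p[:] for p in pos]                  # each bird's personal best
    pbest_val = [f(p) for p in pos]
    g = min(range(n_birds), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # the global best

    for _ in range(n_generations):
        for i in range(n_birds):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Stochastic velocity update, favouring pbest and gbest
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Use v to update the position, clamped to the landscape
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:               # update pbest as appropriate
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:              # update gbest as appropriate
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, `pso(lambda x: sum(xi * xi for xi in x), dim=2, bounds=(-5, 5))` should return a point near the origin. Termination here is a fixed number of generations; the other criteria listed above could be substituted.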

Ant colony optimisation
- Another population-based stochastic optimisation technique (Dorigo et al., 1996)
- Inspired by colonies of ants communicating via pheromones
- Imagine a colony of ants with a choice of two paths around an obstacle: a shorter path ABXCD vs. a longer path ABYCD
  - Each ant chooses a path probabilistically with respect to the amount of pheromone on each
  - Each ant lays pheromone as it moves along its chosen path
  - Initially 50% of ants go each way, but the ants going via X take a shorter time, so more pheromone is laid on that path
  - Later ants are biased towards ABXCD by this pheromone, which reinforces the process
  - Eventually almost all ants will choose ABXCD
- Pheromone evaporates over time to allow adaptation to changing situations

ACO
- The key points are that:
  - paths with more pheromone are more likely to be chosen by later ants
  - shorter/better paths are likely to have more pheromone
  - therefore shorter/better paths are likely to be favoured over time
- But the stochastic routing and the evaporation mean that new paths can still be explored

ACO
- Consider the application of ACO to the Traveling Salesman Problem: given n cities, find the shortest tour that visits each city exactly once
- Given m ants, each starting from a random city:
  - in each iteration, each ant chooses a city it hasn't visited yet
  - ants choose cities probabilistically, favouring links with more pheromone
  - after n iterations (i.e. one cycle), all ants have completed a tour, and each lays pheromone on every link it used
  - the shorter an ant's tour, the more pheromone it lays on each link
  - in subsequent cycles, ants tend to favour links that contributed to short tours in earlier cycles
- The shortest tour found so far is recorded and updated appropriately
- Initialisation and termination are performed similarly to PSO
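The cycle described above can be sketched as a basic Ant System in Python. The parameter names (alpha and beta weighting pheromone vs. distance, evaporation rate rho, deposit constant Q) follow the common Ant System formulation and are illustrative assumptions; the slides do not specify them.

```python
import random

def aco_tsp(dist, m=20, cycles=100, alpha=1.0, beta=2.0, rho=0.5, Q=100.0):
    """Basic Ant System sketch for the TSP on a symmetric distance matrix."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone on each link
    best_tour, best_len = None, float("inf")

    def tour_length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    for _ in range(cycles):
        tours = []
        for _ in range(m):
            start = random.randrange(n)          # each ant starts at a random city
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                c = tour[-1]
                # Choose the next city probabilistically, favouring links
                # with more pheromone and (heuristically) shorter distance
                cities = list(unvisited)
                weights = [tau[c][j] ** alpha * (1.0 / dist[c][j]) ** beta
                           for j in cities]
                nxt = random.choices(cities, weights=weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)
        # Evaporation, then deposit: shorter tours lay more pheromone per link
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour in tours:
            length = tour_length(tour)
            if length < best_len:                # record the shortest tour so far
                best_tour, best_len = tour[:], length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += Q / length
                tau[b][a] += Q / length
    return best_tour, best_len
```

On a small instance, e.g. four cities at the corners of a unit square, this reliably recovers the optimal tour of length 4.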

Learning Classifier Systems
Reading:
- M. Butz and S. Wilson, "An algorithmic description of XCS", Advances in Learning Classifier Systems, 2001
- O. Sigaud and S. Wilson, "Learning classifier systems: a survey", Soft Computing – A Fusion of Foundations, Methodologies and Applications 11(11), 2007
- R. Urbanowicz and J. Moore, "Learning classifier systems: a complete introduction, review, and roadmap", Journal of Artificial Evolution and Applications, 2009

LCSs
- Inspired by a model of human learning:
  - frequent updates of the efficacy of existing rules
  - occasional modification of governing rules
  - the ability to create, remove, and generalise rules
- LCSs simulate adaptive expert systems, adapting both the value of individual rules and the structural composition of the rule set
- LCSs are hybrid machine learning techniques, combining reinforcement learning and EAs:
  - reinforcement learning is used to update rule quality
  - an EA is used to update the composition of the rule set

Algorithm Structure
- An LCS maintains a population of condition-action-prediction rules called classifiers:
  - the condition defines when the rule matches
  - the action defines what action the system should take
  - the prediction indicates the expected reward of the action
- At each step (input), the LCS:
  - forms a match set of classifiers whose conditions are satisfied by the input
  - chooses the action from the match set with the highest average reward, weighted by classifier fitness (reliability)
  - forms the action set: the subset of classifiers from the match set that advocate the chosen action
  - executes the action and observes the returned payoff

Algorithm Structure
- Simple reinforcement learning is used to update the prediction and fitness values of each classifier in the action set
- A steady-state EA is used to evolve the composition of the classifiers in the LCS:
  - the EA executes at regular intervals to replace the weakest members of the population
  - the EA operates on the condition and action parts of classifiers
- Extra phases for rule subsumption (generalisation) and rule creation (covering) ensure that a minimal covering set of classifiers is maintained
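The sense-act-update cycle described above can be sketched as follows. This is a minimal XCS-like fragment, assuming ternary conditions over bit-string inputs and a Widrow-Hoff prediction update; the EA, covering, and subsumption phases are deliberately omitted, and the learning rate beta is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class Classifier:
    condition: str      # e.g. "1#0", where '#' matches either bit
    action: int         # the action this rule advocates
    prediction: float   # expected payoff of taking the action
    fitness: float      # reliability weight of the prediction

def matches(cond, state):
    """A condition matches when every non-'#' symbol equals the input bit."""
    return all(c == '#' or c == s for c, s in zip(cond, state))

def lcs_step(population, state, payoff_fn, beta=0.2):
    """One sense-act-update cycle of a minimal LCS sketch."""
    # Form the match set of classifiers satisfied by the input
    match_set = [cl for cl in population if matches(cl.condition, state)]
    # Choose the action with the highest fitness-weighted average prediction
    def system_prediction(a):
        adv = [cl for cl in match_set if cl.action == a]
        return (sum(cl.prediction * cl.fitness for cl in adv)
                / sum(cl.fitness for cl in adv))
    action = max({cl.action for cl in match_set}, key=system_prediction)
    # Form the action set: matching classifiers advocating the chosen action
    action_set = [cl for cl in match_set if cl.action == action]
    # Execute the action, observe the payoff, and reinforce predictions
    payoff = payoff_fn(state, action)
    for cl in action_set:
        cl.prediction += beta * (payoff - cl.prediction)
    return action, payoff
```

A full system would periodically run a steady-state EA over the conditions and actions, and create covering classifiers when the match set is empty.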

An Example
(Diagram omitted: taken from a seminar on using LCSs for fraud detection, by M. Behdad)

LCS Variants
- There are two main styles of LCS algorithm:
  1. Pittsburgh-style: each population member represents a separate rule set, each forming a permanent "team"
  2. Michigan-style: a single population of rules is maintained; rules form ad-hoc "teams" as required
- LCS variants also differ on the definition of fitness:
  - strength-based (ZCS): classifier fitness is based on the predicted reward of the classifier, not its accuracy
  - accuracy-based (XCS): classifier fitness is based on the accuracy of the classifier, not its predicted reward, thus promoting the evolution of accurate classifiers
- XCS generally performs better, although understanding when remains an open question

Fuzzy systems
- Fuzzy logic facilitates the definition of control systems that can make good decisions from noisy, imprecise, or partial information (Zadeh, 1973)
- Two key concepts:
  - Graduation: everything is a matter of degree, e.g. it can be "not cold", or "a bit cold", or "a lot cold", or ...
  - Granulation: everything is "clumped", e.g. age is young, middle-aged, or old
(Figure omitted: membership functions for young, middle-aged, and old, each mapping age to a degree between 0 and 1)
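Granulation is usually realised with overlapping membership functions. The sketch below uses trapezoidal memberships for the age example; the breakpoints (25, 40, 55, 70) are illustrative assumptions, not values given in the slides.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], is 1 on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# A hypothetical granulation of age into three overlapping fuzzy sets
def mu_young(age):       return trapezoid(age, -1, 0, 25, 40)
def mu_middle_aged(age): return trapezoid(age, 25, 40, 55, 70)
def mu_old(age):         return trapezoid(age, 55, 70, 120, 121)
```

A 30-year-old then belongs to "young" to degree 2/3 and to "middle-aged" to degree 1/3: graduation and granulation at once.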

Fuzzy Logic
- The syntax of fuzzy logic typically includes propositions ("It is raining", "CITS7212 is difficult", etc.) and Boolean connectives (and, not, etc.)
- The semantics of fuzzy logic differs from propositional logic: rather than assigning a True/False value to a proposition, we assign it a degree of truth between 0 and 1, e.g. v("CITS7212 is difficult") = 0.8
- Typical interpretations of the operators not and and are:
  - v(not p) = 1 - v(p)
  - v(p and q) = min{v(p), v(q)} (Gödel-Dummett norm)
- Different semantics may be given by varying the interpretation of and (the T-norm); anything commutative, associative, monotonic, continuous, and with 1 as an identity is a T-norm
- Other common T-norms are:
  - v(p and q) = v(p) * v(q) (product norm)
  - v(p and q) = max{v(p) + v(q) - 1, 0} (Łukasiewicz norm)

Vagueness and Uncertainty
- The product norm captures our understanding of probability or uncertainty, with a strong independence assumption: prob(Rain and Wind) = prob(Rain) * prob(Wind)
- The Gödel-Dummett norm is a fair representation of vagueness: if it's a bit windy and very rainy, it's a bit windy and rainy
- Fuzzy logic provides a unifying logical framework for all CI techniques, as CI techniques are inherently vague
  - whether or not it is actually implemented that way is another question

Fuzzy Controllers
- A fuzzy control system is a collection of rules of the form IF X [AND Y] THEN Z, e.g. IF cold AND ¬warming-up THEN open heating valve slightly
- Such rules are usually derived empirically from experience, rather than from the system itself; they attempt to mimic human-style logic
- Granulation means that the exact values of any constants (e.g. where does cold start/end?) are less important
- The fuzzy rules take observations and, according to these observations' membership of fuzzy sets, produce a fuzzy action
- The fuzzy action then needs to be defuzzified to become a precise output
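The fuzzify-infer-defuzzify pipeline can be sketched end-to-end. This minimal heating-valve controller is an illustrative assumption throughout: the membership breakpoints (fully cold at 10°C, fully hot at 30°C), the two-rule base, and the weighted-average defuzzification are all choices made for the example, not part of the slides.

```python
def fuzzy_heater(temp):
    """Map a temperature reading to a valve setting in [0, 1] (a sketch)."""
    # Fuzzification: degrees of membership in 'cold' and 'hot'
    cold = max(0.0, min(1.0, (20.0 - temp) / 10.0))   # fully cold at 10 degrees
    hot  = max(0.0, min(1.0, (temp - 20.0) / 10.0))   # fully hot at 30 degrees
    # Rule base: IF cold THEN valve open (1.0); IF hot THEN valve closed (0.0)
    rules = [(cold, 1.0), (hot, 0.0)]
    # Defuzzification: weighted average of each rule's output,
    # weighted by how strongly the rule fired
    total = sum(strength for strength, _ in rules)
    if total == 0.0:
        return 0.5   # no rule fires: fall back to a neutral valve setting
    return sum(strength * out for strength, out in rules) / total
```

At 10°C the valve is fully open, at 30°C fully closed, and at 20°C neither rule fires, so the fallback applies; intermediate readings produce graded outputs, which is exactly the granulation effect noted above.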

Fuzzy Control: Applying Fuzzy Rules
(Table omitted: a fuzzy rule table indexing temperature {Cold, Right, Hot} against d(temperature)/dt {-ve, zero, +ve}, with cell entries such as heat, cool, and no change)