CI TECHNOLOGIES CITS4404 Artificial Intelligence & Adaptive Systems

2 Key technologies
- Evolutionary algorithms
- Particle swarm optimisation
- Ant colony optimisation
- Artificial neural networks
- Learning classifier systems
- Fuzzy reasoning
- Market-based algorithms
- Bayesian reasoning
- Artificial immune systems

3 Evolutionary algorithms
- Based on the principle of evolution by natural selection
- The algorithm maintains a population of encodings
  - The structure of an encoding captures what the algorithm is allowed to vary in its search for a good solution
  - Each encoding represents a solution
  - Each solution has a corresponding fitness that describes how good that solution is
- In each generation
  - The fitnesses are used to decide which solutions survive
  - The survivors become parents, and they spawn new encodings generated via mutation and crossover
  - The children also represent solutions with fitnesses…

4 [Diagram: a population of encodings, each mapping to a solution and that solution's fitness]

5 EAs contd.
- Note that an encoding/solution, once created, never changes; its descendants will be different
- The general idea is that good solutions generate "similar" solutions, some of which may be an improvement
- Parents generate children either singly or in combination
- The initial population is generated either randomly, or using some domain knowledge
- Termination can be determined in several ways
  - A fixed number of generations
  - Until improvement ceases
  - Until a certain fitness is obtained
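As an illustration only (not the unit's reference implementation), here is a minimal generational EA sketch in Python; the encoding (a bit string), the toy OneMax fitness function, and all parameter values are assumptions chosen for the example.

```python
import random

def fitness(bits):
    # Toy fitness: count of 1-bits (OneMax); a real problem supplies its own measure
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit independently with a small probability
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(mum, dad):
    # Single-point crossover producing one child
    point = random.randrange(1, len(mum))
    return mum[:point] + dad[point:]

def evolve(pop_size=30, length=20, generations=50):
    # Initial population of random encodings
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents (truncation selection)
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        # Spawn children via crossover and mutation until the population is refilled
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())
```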

6 Particle swarm optimisation
- Based on the behaviour of a flock of birds searching for food
- The algorithm maintains a population of particles
- Each particle moves continually over the landscape
- At any moment, each particle has a position, representing a solution, and a velocity, representing its momentum
- Each particle remembers the best solution it has ever seen: its personal best, pbest
- The algorithm also remembers the global best, gbest
- In each generation
  - Each particle's velocity is updated, favouring pbest and gbest
  - Each particle's position is updated using its new velocity
  - The pbests and gbest are updated as appropriate

7 [Demo: …swarm-optimization-demo-1.html, run with population dimension = 4 and delay between iterations = 500]

8 PSO contd.
- Each particle explores different solutions in different generations
- Collectively the swarm explores the landscape
- The updating mechanisms mean that particles favour areas of the landscape known to have good solutions
  - Good solutions the particle has seen
  - Good solutions other particles have seen
- As time proceeds, the swarm focuses on a smaller and smaller area
- Eventually, the swarm will converge on the area surrounding gbest
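A minimal sketch of the update rules just described, assuming the commonly used inertia/cognitive/social weights (w, c1, c2) and a toy one-variable objective; these names and values are illustrative, not prescribed by the slides.

```python
import random

def objective(x):
    # Toy landscape to minimise; a real application supplies its own
    return (x - 3.0) ** 2

def pso(num_particles=10, iterations=100, w=0.7, c1=1.5, c2=1.5):
    # Random initial positions and velocities in one dimension
    pos = [random.uniform(-10, 10) for _ in range(num_particles)]
    vel = [0.0] * num_particles
    pbest = pos[:]                         # best position each particle has seen
    gbest = min(pbest, key=objective)      # best position the swarm has seen
    for _ in range(iterations):
        for i in range(num_particles):
            r1, r2 = random.random(), random.random()
            # Velocity update: inertia + pull towards pbest + pull towards gbest
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            # Update the personal and global bests as appropriate
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i]
                if objective(pos[i]) < objective(gbest):
                    gbest = pos[i]
    return gbest

print(pso())
```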

9 Ant colony optimisation
- Based on groups of ants communicating via pheromones
- Given a problem structured as a network, the algorithm maintains a population of ants that traverse the network
- An ant selects each step through the network probabilistically
  - It will favour "good" steps
  - It will favour steps with more pheromone
- When an ant completes a traversal, it lays pheromone on the path that it used
  - The amount of pheromone laid is proportional to the quality of the path
- Pheromone evaporates over time, to allow for adaptation in the steps selected


11 ACO contd.
- The key points are that
  - When one ant discovers something good, every ant benefits
  - Initially-random choices improve over time
- ACO applies naturally to problems involving spatial networks
  - Travelling salesman
  - Vehicle routing
  - Electronic messaging, etc.
- But many other problems can be cast as networks
  - Scheduling
  - Timetabling
  - Image processing, etc.
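As a hedged illustration of these mechanics, the sketch below applies a very small ant-colony loop to a random travelling-salesman instance; the parameter names (alpha, beta, evaporation) and their values are assumptions, not part of the unit material.

```python
import random, math

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def aco_tsp(dist, ants=10, iterations=50, alpha=1.0, beta=2.0, evaporation=0.5):
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(iterations):
        tours = []
        for _ in range(ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                current = tour[-1]
                choices = [c for c in range(n) if c not in tour]
                # Favour steps with more pheromone and shorter distance
                weights = [pheromone[current][c] ** alpha * (1.0 / dist[current][c]) ** beta
                           for c in choices]
                tour.append(random.choices(choices, weights=weights)[0])
            tours.append(tour)
        # Evaporation, then pheromone deposits proportional to tour quality
        pheromone = [[p * (1 - evaporation) for p in row] for row in pheromone]
        for tour in tours:
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a][b] += 1.0 / length
                pheromone[b][a] += 1.0 / length
    return best_tour, best_len

# Usage: a random symmetric 8-city instance
cities = [(random.random(), random.random()) for _ in range(8)]
dist = [[math.dist(a, b) or 1e-9 for b in cities] for a in cities]
print(aco_tsp(dist))
```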

12 Artificial neural networks
- Based on the structure of the brain and its processing ability
- ANNs act mainly as
  - Function approximators
  - Pattern recognisers
- An ANN is composed of one or more layers of neurons
- Each neuron is very simple
  - Power and intelligence emerge from their (usually vast!) numbers, and from the interconnections between them
- Data is fed into one end of the network (the input layer), passes through the hidden layers of the network, and emerges from the output layer
- The various layers generate progressively higher-level information

13 [Diagram: a single neuron with input links, an input function, an activation function, a bias weight, and output links; and a layered network with an input layer, hidden layer(s), and an output layer]

14 ANNs contd.
- The number of hidden layers required is determined by the complexity of the problem being solved
  - Zero hidden layers: can represent only linearly-separable functions
  - One hidden layer: can represent any continuous function
  - Multiple hidden layers: can represent any function
- ANNs can be
  - Acyclic (feed-forward): stateless processing
  - Cyclic (recurrent): supports short-term memory
- ANNs learn by fine-tuning the weights on their links, usually by one of two mechanisms
  - Back-propagation
  - Evolution or similar
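To make the layered picture concrete, here is a hedged sketch (not code from the unit) of a feed-forward network with one hidden layer, trained by back-propagation on XOR; the layer sizes, learning rate, iteration count, and use of NumPy are assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a function that needs at least one hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights and biases
lr = 0.5

for _ in range(10000):
    # Forward pass: input layer -> hidden layer -> output layer
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Back-propagate the error and fine-tune the weights on the links
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(np.round(output, 2))   # typically approaches [[0], [1], [1], [0]]
```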

15 Learning classifier systems
- Based on how "experts" solve problems and acquire skills
- The algorithm maintains a database of "condition-action-prediction" rules
  - The condition defines when the rule applies
  - The action states what the system should do
  - The prediction indicates the expected reward
- Given a problem instance, the algorithm
  - Forms a match set of rules whose conditions are satisfied
  - Chooses the action A with the best predicted performance
  - Forms the action set of rules that recommend A
  - Executes A and observes the actual performance, which is fed back to update the action set

16 [Diagram of the LCS cycle, adapted from a seminar on using LCSs for fraud detection by M. Behdad: the environment is sensed through detectors, rules in the database whose conditions match form the match set, the rules recommending the chosen action form the action set, effectors execute the action, and the reward is fed back as internal reinforcement to the current and previous action sets; the database is updated periodically by evolution, covering, and subsumption]

17 LCS contd.
- As well as direct feedback, the rule set is periodically updated
  - By subsumption, generalisation, and covering
  - By evolution or similar
- Feedback can be based on either
  - The performance obtained by using the action
  - The accuracy of a rule's prediction
- Sometimes the database is divided into semi-permanent "teams" of rules that are known to work well together
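The fragment below is a hedged sketch of just the match-set/action-set step described above, using ternary conditions ('#' matches either bit) like those in the diagram; the rules, rewards, and simple prediction-update rule are invented for illustration.

```python
# A rule: ternary condition ('#' matches 0 or 1), an action, and a predicted reward
rules = [
    {"condition": "#011", "action": 0, "prediction": 43.0},
    {"condition": "#0##", "action": 1, "prediction": 27.0},
    {"condition": "#0#1", "action": 1, "prediction": 18.0},
]

def matches(condition, state):
    return all(c == "#" or c == s for c, s in zip(condition, state))

def step(state, actual_reward, learning_rate=0.2):
    # Form the match set of rules whose conditions are satisfied by the state
    match_set = [r for r in rules if matches(r["condition"], state)]
    # Choose the action with the best predicted reward
    best_action = max(match_set, key=lambda r: r["prediction"])["action"]
    # Form the action set of rules that recommend that action
    action_set = [r for r in match_set if r["action"] == best_action]
    # Feed the observed reward back to update the action set's predictions
    for r in action_set:
        r["prediction"] += learning_rate * (actual_reward - r["prediction"])
    return best_action

print(step("0011", actual_reward=50.0))
```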

18 Fuzzy reasoning
- Based on human processing of noisy/imprecise/partial data
- Two key concepts
  - Granulation: everything is "clumped", e.g. a person can be "young", or "middle-aged", or "old"
  - Graduation: everything is a matter of degree, e.g. a day can be "not cold", or "a bit cold", or "a lot cold", or …
- Instead of saying that a state is either "cold" or "not cold", we assign a degree of truth: e.g. a state is "0.8 cold"
- Operators are changed accordingly, e.g.
  - v(not(p)) = 1 − v(p)
  - v(p and q) = min {v(p), v(q)}
- There are several alternative formulations for "and"
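As a quick check of those operator definitions, here is a tiny example using invented truth degrees v(p) = 0.8 and v(q) = 0.3:

```python
v_p, v_q = 0.8, 0.3
print(1 - v_p)         # v(not p)   = 0.2
print(min(v_p, v_q))   # v(p and q) = 0.3, using the min formulation of "and"
```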

19 Fuzzy contd.
- A fuzzy control system is a collection of rules
  - IF X [AND Y] THEN Z
  - e.g. IF cold AND not warming-up THEN increase heating slightly
- These rules attempt to mimic human-style logic
  - Granulation means that the exact values of constants are unimportant
- In each cycle, the system
  - Takes a set of observations and fuzzifies them
  - Applies all of the rules that match, generating a set of fuzzy results
  - Defuzzifies the results to get a precise output
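Below is a hedged sketch of that fuzzify/apply/defuzzify cycle for a toy heater controller; the membership functions, rules, and output levels are invented for illustration and are not the example from the slides.

```python
def fuzzify_temperature(t):
    # Degrees of membership in "cold" and "warm" (toy piecewise-linear functions)
    cold = max(0.0, min(1.0, (18.0 - t) / 10.0))
    warm = max(0.0, min(1.0, (t - 15.0) / 10.0))
    return {"cold": cold, "warm": warm}

def controller(temperature):
    truth = fuzzify_temperature(temperature)
    # Rules: IF cold THEN heat a lot; IF warm THEN heat a little
    # Each rule fires to the degree its condition is true
    rule_outputs = [
        (truth["cold"], 80.0),   # heater level recommended by rule 1
        (truth["warm"], 10.0),   # heater level recommended by rule 2
    ]
    # Defuzzify: weighted average of the recommended output levels
    total_weight = sum(w for w, _ in rule_outputs)
    if total_weight == 0:
        return 0.0
    return sum(w * level for w, level in rule_outputs) / total_weight

print(controller(12.0))   # mostly cold, so a high heating level
```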

20 [Worked example (figure from an external source): a temperature reading fuzzifies to 0.48 cool; a pressure reading fuzzifies to 0.57 low and 0.25 ok; Rule 2 fires to degree 0.48 (output P2) and Rule 3 fires to degree 0.25 (output Z)]

21 Market-based algorithms Based on the greedy operation of trading markets

22 Bayesian reasoning Based on probabilistic reasoning with learning

23 Artificial immune systems Based on the learning mechanisms of the body's defence systems