Evolutionary Search Artificial Intelligence CMSC 25000 January 25, 2007

Agenda Motivation: –Evolving a solution Genetic Algorithms –Modelling search as evolution Mutation Crossover Survival of the fittest Survival of the most diverse Conclusions

Motivation: Evolution Evolution through natural selection –Individuals pass on traits to offspring –Individuals have different traits –Fittest individuals survive to produce more offspring –Over time, variation can accumulate Leading to new species

Simulated Evolution Evolving a solution Begin with population of individuals –Individuals = candidate solutions ~chromosomes Produce offspring with variation –Mutation: change features –Crossover: exchange features between individuals Apply natural selection –Select “best” individuals to go on to next generation Continue until satisfied with solution
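A minimal sketch of this loop in Python (not from the original slides), assuming the problem supplies its own quality(x), mutate(x), and crossover(x, y) functions, each returning a score or a single new candidate; selection here is plain keep-the-best truncation, whereas the fitness schemes on later slides replace that step with probabilistic survival:

```python
import random

def evolve(initial_population, quality, mutate, crossover,
           pop_size=10, max_generations=100, target_quality=None):
    """Generic simulated evolution: vary the population, select, repeat."""
    population = list(initial_population)
    for _ in range(max_generations):
        # Produce offspring with variation: mutation and crossover.
        offspring = [mutate(random.choice(population)) for _ in range(pop_size)]
        offspring += [crossover(random.choice(population), random.choice(population))
                      for _ in range(pop_size)]
        # "Natural selection": keep the best candidates for the next generation.
        population = sorted(population + offspring, key=quality, reverse=True)[:pop_size]
        if target_quality is not None and quality(population[0]) >= target_quality:
            break  # continue until satisfied with the best solution found
    return population[0]
```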

Genetic Algorithms Applications Search parameter space for optimal assignment –Not guaranteed to find optimal, but can approach Classic optimization problems: –E.g. Travelling Salesman Problem Program design (“Genetic Programming”) Aircraft carrier landings

Genetic Algorithm Example Cookie recipes (Winston, AI, 1993), as evolving populations Individual = batch of cookies; Quality: 0-9 –Chromosome = 2 genes: Flour Quantity and Sugar Quantity, each 1-9 Mutation: –Randomly select Flour or Sugar and change it by +/- 1, staying within [1-9] Crossover: –Split 2 chromosomes between the genes & rejoin, keeping both offspring
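A sketch of the cookie chromosome and its variation operators, assuming a chromosome is a (flour, sugar) tuple with each gene in 1-9; the quality function itself is not given on the slide, so it is left out here:

```python
import random

def mutate(chromosome):
    """Randomly select the flour or sugar gene and move it by +/-1, clamped to 1..9."""
    flour, sugar = chromosome
    delta = random.choice([-1, 1])
    if random.randrange(2) == 0:
        flour = min(9, max(1, flour + delta))
    else:
        sugar = min(9, max(1, sugar + delta))
    return (flour, sugar)

def crossover(parent_a, parent_b):
    """Split both chromosomes between the two genes and rejoin, keeping both offspring."""
    (flour_a, sugar_a), (flour_b, sugar_b) = parent_a, parent_b
    return (flour_a, sugar_b), (flour_b, sugar_a)
```

Here crossover returns both offspring, matching the slide's "keeping both".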

Fitness Natural selection: most fit survive Fitness = probability of survival to the next generation Question: How do we measure fitness? –"Standard method": Relate fitness to quality Quality: 1-9; Fitness: 0-1 fitness_i = quality_i / Σ_j quality_j, computed over the current population
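A sketch of the "standard method" as described above, assuming fitness is simply quality divided by the population's total quality, so the values sum to 1 and can be used as survival probabilities:

```python
def standard_fitness(qualities):
    """fitness_i = quality_i / sum_j quality_j (values sum to 1 across the population)."""
    total = sum(qualities)
    # A chromosome with quality 0 gets fitness 0 and can never be selected --
    # exactly the problem the Moat example and the rank method address later.
    return [q / total for q in qualities]

# Example: qualities 1 and 9 give survival probabilities 0.1 and 0.9.
print(standard_fitness([1, 9]))
```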

GA Design Issues Genetic design: –Identify sets of features = genes; Constraints? Population: How many chromosomes? –Too few => inbreeding; Too many=>too slow Mutation: How frequent? –Too few=>slow change; Too many=> wild Crossover: Allowed? How selected? Duplicates?

GA Design: Basic Cookie GA Genetic design: –Identify sets of features: 2 genes: flour+sugar;1-9 Population: How many chromosomes? –1 initial, 4 max Mutation: How frequent? –1 gene randomly selected, randomly mutated Crossover: Allowed? No Duplicates? No Survival: Standard method

Basic Cookie GA Results Results are for 1000 random trials –Initial state: a single chromosome (1,1), quality 1 On average, reaches max quality (9) in 16 generations Best: max quality in 8 generations Conclusion: –Low-dimensionality search: successful even without crossover

Basic Cookie GA + Crossover Results Results are for 1000 random trials –Initial state: a single chromosome (1,1), quality 1 On average, reaches max quality (9) in 14 generations Conclusion: –Faster with crossover: combines good values in each gene –Key: Global max achievable by maximizing each dimension independently - reduces dimensionality

Solving the Moat Problem Problem: –No single step mutation can reach optimal values using standard fitness (quality=0 => probability=0) Solution A: –Crossover can combine fit parents in EACH gene However, still slow: 155 generations on average

Questions How can we avoid the 0 quality problem? How can we avoid local maxima?

Rethinking Fitness Goal: Explicit bias to the best –Remove implicit biases based on the quality scale Solution: Rank method –Ignore actual quality values except for ranking Step 1: Rank candidates by quality Step 2: Probability of selecting the ith candidate, given that candidates 1 through i-1 were not selected, is a constant p –Step 2b: The last candidate is selected if no other has been Step 3: Select candidates using these probabilities (see the sketch below)
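A sketch of the rank method, assuming the per-rank selection probability p is a parameter (the slide does not fix its value; 0.67 below is just a placeholder default):

```python
import random

def rank_select(candidates, quality, p=0.67):
    """Rank candidates by quality, then walk down the ranking, selecting the
    i-th candidate with constant probability p given that none of the earlier
    ones was selected; the last candidate is selected if no other has been."""
    ranked = sorted(candidates, key=quality, reverse=True)
    for candidate in ranked[:-1]:
        if random.random() < p:
            return candidate
    return ranked[-1]
```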

Rank Method [Worked table comparing, for each chromosome, its quality, rank, standard fitness, and rank fitness.] Results: Average over 1000 random runs on the Moat problem: 75 generations (vs. 155 for the standard method) No 0-probability entries: based on rank, not absolute quality

Diversity Diversity: –Degree to which chromosomes exhibit different genes –Rank & Standard methods look only at quality –Need diversity: escape local maxima, provide variety for crossover –“As good to be different as to be fit”

Rank-Space Method Combines diversity and quality in fitness Diversity measure: –Sum of inverse squared distances in genes Diversity rank: Avoids inadvertent bias Rank-space: –Sort on sum of diversity AND quality ranks –Best: lower left: high diversity & quality
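A sketch of the rank-space ordering under these assumptions: chromosomes are hashable (e.g. (flour, sugar) tuples, with no duplicates as in the cookie GA), diversity is the sum of inverse squared distances to already-selected individuals (a smaller sum means more diverse), and candidates are ordered by the sum of their quality rank and diversity rank, with the diversity rank breaking ties. Function and parameter names here are illustrative.

```python
def diversity_penalty(candidate, selected, distance):
    """Sum of 1/d^2 to already-selected individuals; lower means more diverse."""
    return sum(1.0 / distance(candidate, s) ** 2 for s in selected)

def rank_space_order(candidates, quality, selected, distance):
    """Order candidates by combined quality rank + diversity rank (lower is better),
    breaking ties by the diversity rank."""
    by_quality = sorted(candidates, key=quality, reverse=True)
    by_diversity = sorted(candidates,
                          key=lambda c: diversity_penalty(c, selected, distance))
    q_rank = {c: r for r, c in enumerate(by_quality)}
    d_rank = {c: r for r, c in enumerate(by_diversity)}
    return sorted(candidates, key=lambda c: (q_rank[c] + d_rank[c], d_rank[c]))
```

Selection probabilities (e.g. the rank method sketched earlier) would then be applied to this combined ordering.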

Rank-Space Method [Worked table listing, for each chromosome, its quality (Q), diversity (D), diversity rank, quality rank, combined rank, and rank-space fitness, computed with respect to the highest-ranked chromosome (5-1).] Diversity rank breaks ties After an individual is selected, the remaining candidates sum their distances to all previously selected ones Results: Average (Moat problem): 15 generations

GA’s and Local Maxima Quality metrics only: –Susceptible to local max problems Quality + Diversity: –Can populate all local maxima Including global max –Key: Population must be large enough

GA Discussion Similar to stochastic local beam search –Beam: Population size –Stochastic: selection & mutation –Local: Each generation from single previous –Key difference: Crossover – 2 sources! Why crossover? –Schema: Partial local subsolutions E.g. 2 halves of TSP tour

Question Traveling Salesman Problem –CSP-style Iterative refinement –Genetic Algorithm N-Queens –CSP-style Iterative refinement –Genetic Algorithm

Iterative Improvement Example TSP –Start with some valid tour E.g. find greedy solution –Make incremental change to tour E.g. hill-climbing - take change that produces greatest improvement –Problem: Local minima –Solution: Randomize to search other parts of space Other methods: Simulated annealing, Genetic alg’s

Machine Learning: Nearest Neighbor & Information Retrieval Search Artificial Intelligence CMSC 25000 January 25, 2007

Agenda Machine learning: Introduction Nearest neighbor techniques –Applications: Credit rating Text Classification –K-nn –Issues: Distance, dimensions, & irrelevant attributes Efficiency: –k-d trees, parallelism

Machine Learning Learning: Acquiring a function from inputs to values, based on past input-value pairs, and applying it to new inputs Learn concepts, classifications, values –Identify regularities in data

Machine Learning Examples Pronunciation: –Spelling of word => sounds Speech recognition: –Acoustic signals => sentences Robot arm manipulation: –Target => torques Credit rating: –Financial data => loan qualification

Machine Learning Characterization Distinctions: –Are output values known for any inputs? Supervised vs unsupervised learning –Supervised: training consists of inputs + true output value »E.g. letters+pronunciation –Unsupervised: training consists only of inputs »E.g. letters only Course studies supervised methods

Machine Learning Characteristics Many machine learning techniques –Supervised vs Unsupervised Supervised: Input + true labels Unsupervised: Input ONLY –Classification vs Regression Classification: Output is from a finite label set Regression: Output is continuous valued –Decision Boundary What function is learned? “Inductive Bias” –Linear, Rectangular, Voronoi diagram –Input features Discrete? Continuous? Which ones? Scaling?

Machine Learning Characterization Distinctions: –Are output values discrete or continuous? Discrete: “Classification” –E.g. Qualified/Unqualified for a loan application Continuous: “Regression” –E.g. Torques for robot arm motion Characteristic of task

Machine Learning Characterization Distinctions: –What form of function is learned? Also called “inductive bias” Graphically, the decision boundary E.g. Single, linear separator –Rectangular boundaries - ID trees –Voronoi spaces…etc…

Machine Learning Functions Problem: Can the representation effectively model the class to be learned? Motivates selection of learning algorithm Example: for a linearly separable class, a linear discriminant is GREAT, while rectangular boundaries (e.g. ID trees) are TERRIBLE Pick the right representation!

Machine Learning Features Inputs: –E.g. words, acoustic measurements, financial data –Vectors of features: E.g. word: letters –‘cat’: L1=c; L2=a; L3=t Financial data: F1 = # late payments/yr : Integer F2 = Ratio of income to expense: Real

Machine Learning Features Question: –Which features should be used? –How should they relate to each other? Issue 1: How do we define relation in feature space if features have different scales? –Solution: Scaling/normalization Issue 2: Which ones are important? –If differ in irrelevant feature, should ignore

Complexity & Generalization Goal: Predict values accurately on new inputs Problem: –Train on sample data –Can make an arbitrarily complex model to fit it –BUT it will probably perform badly on NEW data Strategy: –Limit complexity of the model (e.g. degree of the equation) –Split training and validation sets Hold out data to check for overfitting (see the sketch below)
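A minimal sketch of the holdout strategy, assuming the labeled data is a list of (features, value) pairs; the function name and fraction are illustrative choices:

```python
import random

def train_validation_split(examples, validation_fraction=0.2, seed=0):
    """Shuffle the labeled examples and hold out a fraction to check for overfitting."""
    shuffled = list(examples)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - validation_fraction))
    return shuffled[:cut], shuffled[cut:]   # (training set, validation set)
```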

Nearest Neighbor Memory- or case-based learning Supervised method: Training –Record labeled instances and feature-value vectors For each new, unlabeled instance –Identify the “nearest” labeled instance –Assign the same label Consistency heuristic: Assume that a property is the same as that of the nearest reference case (sketched below)
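A sketch of this memory-based scheme, assuming numeric feature vectors and a Euclidean distance (the choice of distance metric is discussed later in the lecture):

```python
import math

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def nearest_neighbor_label(training, query, distance=euclidean):
    """Training = list of (feature_vector, label) pairs; prediction returns the
    label of the single nearest stored instance (the consistency heuristic)."""
    _, label = min(training, key=lambda pair: distance(pair[0], query))
    return label
```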

Nearest Neighbor Example Problem: Robot arm motion –Difficult to model analytically Kinematic equations –Relate joint angles and manipulator positions Dynamics equations –Relate motor torques to joint angles –Difficult to achieve good results modeling robotic arms or human arm Many factors & measurements

Nearest Neighbor Example Solution: –Move robot arm around –Record parameters and trajectory segment Table: torques, positions, velocities, squared velocities, velocity products, accelerations –To follow a new path: Break into segments Find closest segments in table Get those torques (interpolate as necessary)

Nearest Neighbor Example Issue: Big table –First time with new trajectory “Closest” isn’t close Table is sparse - few entries Solution: Practice –As attempt trajectory, fill in more of table After few attempts, very close

Nearest Neighbor Example Credit Rating: –Classifier: Good / Poor –Features: L = # late payments/yr; R = Income/Expenses
Name  L    R    G/P
A     0    1.2  G
B     25   0.4  P
C     5    0.7  G
D     –    –    P
E     –    –    P
F     –    –    G
G     –    –    G
H     –    –    P

Nearest Neighbor Example [Scatter plot of the training instances A-H in the L (late payments/yr) vs. R (income/expense) feature plane, labeled Good or Poor.]

Nearest Neighbor Example [The same L-R scatter plot with new, unlabeled instances I, J, and K to be classified; the label for J is shown as ??.] Distance Measure: sqrt((L1-L2)^2 + (sqrt(10)*(R1-R2))^2) - a scaled distance

Nearest Neighbor Analysis Problem: –Ambiguous labeling, training noise Solution: –K-nearest neighbors Not just the single nearest instance Compare to the K nearest neighbors –Label according to the majority of the K What should K be? –Often 3; K can also be tuned on the training data (see the sketch below)
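A sketch of K-nearest-neighbor voting, assuming plain Euclidean distance and K = 3; the training data below reuses the recoverable credit-rating rows (A, B, C) purely for illustration:

```python
import math
from collections import Counter

def knn_label(training, query, k=3):
    """Compare the query to the K nearest labeled instances and take the majority label."""
    def dist(x, y):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    neighbors = sorted(training, key=lambda pair: dist(pair[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# (L = late payments/yr, R = income/expense ratio)
training = [((0, 1.2), "Good"), ((25, 0.4), "Poor"), ((5, 0.7), "Good")]
print(knn_label(training, (2, 1.0)))   # majority of the 3 neighbors -> "Good"
```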

Text Classification

Matching Topics and Documents Two main perspectives: –Pre-defined, fixed, finite topics: “Text Classification” –Arbitrary topics, typically defined by statement of information need (aka query) “Information Retrieval”

Vector Space Information Retrieval Task: –Document collection –Query specifies information need: free text –Relevance judgments: 0/1 for all docs Word evidence: Bag of words –No ordering information

Vector Space Model [Figure: document vectors plotted on axes labeled Computer, Tv, Program.] Two documents: “computer program”, “tv program” Query “computer program” matches the 1st doc exactly: distance 0 to it vs. 2 to the other Query “educational program” matches both equally: distance 1

Vector Space Model Represent documents and queries as –Vectors of term-based features Features: tied to occurrence of terms in the collection –E.g. Solution 1: Binary features: t=1 if present, 0 otherwise –Similarity: number of terms in common Dot product

Vector Space Model II Problem: Not all terms equally interesting –E.g. the vs dog vs Levow Solution: Replace binary term features with weights –Document collection: term-by-document matrix –View as vector in multidimensional space Nearby vectors are related –Normalize for vector length

Vector Similarity Computation Similarity = Dot product Normalization: –Normalize weights in advance –Normalize post-hoc, dividing by the vector lengths (cosine similarity; see the sketch below)
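A sketch of the post-hoc normalization (dot product divided by both vector lengths, i.e. cosine similarity), assuming documents and queries are represented as dicts mapping terms to weights:

```python
import math

def cosine_similarity(query_vec, doc_vec):
    """Dot product of the term-weight vectors, normalized by both vector lengths."""
    dot = sum(weight * doc_vec.get(term, 0.0) for term, weight in query_vec.items())
    norm_q = math.sqrt(sum(w * w for w in query_vec.values()))
    norm_d = math.sqrt(sum(w * w for w in doc_vec.values()))
    if norm_q == 0.0 or norm_d == 0.0:
        return 0.0
    return dot / (norm_q * norm_d)
```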

Term Weighting “Aboutness” –To what degree is this term what the document is about? –Within-document measure –Term frequency (tf): # occurrences of t in doc j “Specificity” –How surprised are you to see this term? –Collection frequency –Inverse document frequency (idf): idf(t) = log(N / df(t)), where N = # documents in the collection and df(t) = # documents containing t
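A sketch of tf·idf weighting under the definitions above (tf = raw count in the document, idf = log(N / df)); the function name and input format are illustrative, not from the slides:

```python
import math
from collections import Counter

def tfidf_vectors(documents):
    """documents: list of token lists. Returns one {term: tf * idf} dict per document."""
    n_docs = len(documents)
    doc_freq = Counter()
    for doc in documents:
        doc_freq.update(set(doc))            # df: how many documents contain each term
    vectors = []
    for doc in documents:
        tf = Counter(doc)                    # term frequency within this document
        vectors.append({term: count * math.log(n_docs / doc_freq[term])
                        for term, count in tf.items()})
    return vectors
```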

Term Selection & Formation Selection: –Some terms are truly useless Too frequent, no content –E.g. the, a, and,… –Stop words: ignore such terms altogether Creation: –Too many surface forms for same concepts E.g. inflections of words: verb conjugations, plural –Stem terms: treat all forms as same underlying

Efficient Implementations Classification cost: –Find nearest neighbor: O(n) Compute distance between unknown and all instances Compare distances –Problematic for large data sets Alternative: –Use binary search to reduce to O(log n)

Efficient Implementation: K-D Trees Divide instances into sets based on features –Binary branching: E.g. feature > value –2^d leaves after d splits; with 2^d = n leaves, the path depth is d = O(log n) –To split cases into sets (sketched below): If there is one element in the set, stop Otherwise pick a feature to split on –Find the average position of the two middle objects on that dimension »Split the remaining objects based on that average position »Recursively split the subsets
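A sketch of the construction step just described, assuming instances are (feature_vector, label) pairs and that the split feature is chosen by cycling through the dimensions (the slide leaves the choice of feature open):

```python
def build_kd_tree(instances, depth=0):
    """Recursively split (feature_vector, label) pairs into a binary tree."""
    if len(instances) <= 1:
        return {"leaf": instances}
    dim = depth % len(instances[0][0])          # cycle through the feature dimensions
    ordered = sorted(instances, key=lambda inst: inst[0][dim])
    mid = len(ordered) // 2
    # Average position of the two middle objects on the chosen dimension.
    split = (ordered[mid - 1][0][dim] + ordered[mid][0][dim]) / 2.0
    left = [inst for inst in ordered if inst[0][dim] <= split]
    right = [inst for inst in ordered if inst[0][dim] > split]
    if not left or not right:                   # guard: ties can collapse one side
        return {"leaf": instances}
    return {"dim": dim, "split": split,
            "left": build_kd_tree(left, depth + 1),
            "right": build_kd_tree(right, depth + 1)}
```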

K-D Trees: Classification [Decision-tree diagram: the root tests R > 0.825; the next level tests L > 17.5 and L > 9; deeper nodes test further R thresholds (0.6, 0.75, ...); each Yes/No path ends at a leaf labeled Good or Poor.]

Efficient Implementation: Parallel Hardware Classification cost: –# distance computations Const time if O(n) processors –Cost of finding closest Compute pairwise minimum, successively O(log n) time

Nearest Neighbor: Issues Prediction can be expensive if many features Affected by classification, feature noise –One entry can change prediction Definition of distance metric –How to combine different features Different types, ranges of values Sensitive to feature selection

Nearest Neighbor: Analysis Issue: –What is a good distance metric? –How should features be combined? Strategy: –(Typically weighted) Euclidean distance –Feature scaling: Normalization Good starting point: –(Feature - Feature_mean)/Feature_standard_deviation –Rescales all values - Centered on 0 with std_dev 1
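A sketch of that rescaling for a single feature column, assuming a list of numeric values:

```python
import statistics

def z_score(values):
    """(feature - mean) / standard deviation: centered on 0 with std_dev 1."""
    mean = statistics.mean(values)
    std = statistics.pstdev(values) or 1.0   # guard: constant features have std 0
    return [(v - mean) / std for v in values]
```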

Nearest Neighbor: Analysis Issue: –What features should we use? E.g. Credit rating: Many possible features –Tax bracket, debt burden, retirement savings, etc.. –Nearest neighbor uses ALL –Irrelevant feature(s) could mislead Fundamental problem with nearest neighbor

Nearest Neighbor: Advantages Fast training: –Just record feature vector - output value set Can model wide variety of functions –Complex decision boundaries –Weak inductive bias Very generally applicable

Summary Machine learning: –Acquire function from input features to value Based on prior training instances –Supervised vs Unsupervised learning Classification and Regression –Inductive bias: Representation of function to learn Complexity, Generalization, & Validation