Logistics Problem Set 1 Office Hours Reading Mailing List Dan’s Travel

Presentation transcript:

Logistics
- Problem Set 1: due 10/13 at start of class
- Office Hours: Monday 3:30pm, or email for another time
- Reading: R&N ch 6 (skip 5 for now)
- Mailing List: last reminder to sign up via the course web page
- Dan's Travel: off email until Mon morning; if problems come up on the PS, make the best assumption
© Daniel S. Weld

573 Topics: Perception, NLP, Multi-agent, Robotics, Reinforcement Learning, MDPs, Supervised Learning, Planning, Uncertainty, Search, Knowledge Representation, Problem Spaces, Agency
© Daniel S. Weld

Search
- Problem spaces
- Blind: depth-first, breadth-first, iterative-deepening, iterative broadening
- Informed: best-first, Dijkstra's, A*, IDA*, SMA*, DFB&B, beam, hill climbing, limited discrepancy, AO*, LAO*, RTDP
- Local search
- Heuristics: evaluation, construction, learning; pattern databases
- Online methods
- Techniques from operations research
- Constraint satisfaction
- Adversary search
© Daniel S. Weld

Combinatorial Optimization
- Nonlinear programs
- Convex programs (local optimality ⇒ global optimality; Kuhn-Tucker conditions for optimality)
- Integer programming (NP-complete)
- Linear programs (poly time)
- Flow & matching (fast!)
© Daniel S. Weld

Genetic Algorithms
- Start with a random population; the representation is serialized; states are ranked with a "fitness function"
- Produce a new generation:
  - Selection: select random pair(s), with probability ~ fitness
  - Crossover: randomly choose a "crossover point"; offspring mix halves
  - Mutation: randomly mutate bits
[figure: example bit strings illustrating selection, crossover, and mutation]
© Daniel S. Weld
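
As a rough sketch of the loop above (fitness-proportional selection, one-point crossover, bitwise mutation); the bit-string length, population size, and rates are illustrative assumptions rather than values from the slide:

```python
import random

def evolve(fitness, length=12, pop_size=100, generations=50, p_mut=0.01):
    """Bit-string GA: rank by fitness, select proportionally, crossover, mutate."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        weights = [s + 1e-9 for s in scores]                       # avoid all-zero weights
        next_gen = []
        while len(next_gen) < pop_size:
            mom, dad = random.choices(pop, weights=weights, k=2)   # selection: probability ~ fitness
            cut = random.randrange(1, length)                      # crossover point
            child = mom[:cut] + dad[cut:]                          # offspring mixes halves
            child = [b ^ (random.random() < p_mut) for b in child] # randomly mutate bits
            next_gen.append(child)
        pop = next_gen
    return max(pop, key=fitness)
```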

Properties
- Randomized, parallel beam search using a fitness function
- Importance of careful representation
© Daniel S. Weld


Experiment
- Population initialized to 300 random examples
- Fitness: given 166 random problem instances, fitness = number of these problems solved
- Results: after 10 generations, the following program was discovered; it solves all 166:
  (EQ (DU (MT CS) (NOT CS))    ; move all blocks to the table
      (DU (MS NN) (NOT NN)))   ; build the correct stack
© Daniel S. Weld


Koza Block Stacking
- Learn a program which stacks blocks
- Initial blocks can be in any orientation
- Program should make a tower spelling "Universal"
- Clever representation:
  - CS = name of the block on top of the current stack
  - TB = name of the topmost block such that it and all blocks below it are correct
  - NN = name of the next block needed above TB
- Imagine if blocks were described using x,y coordinates!
© Daniel S. Weld

Available Actions
- (MS x): if x is on the table, move it to the top of the stack
- (MT x): if x is somewhere in the stack, move the topmost block to the table
- (EQ x y): returns T if x equals y
- (NOT x): logical negation
- (DU x y): do x until y returns T
Any problems?
© Daniel S. Weld
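
Below is a hedged sketch of how these primitives could be interpreted over a simple world model (a target word, a stack, and loose blocks on a table). The `World` class, the iteration cap inside `DU`, and the use of lambdas for deferred evaluation are my assumptions for illustration, not part of Koza's actual setup.

```python
class World:
    """Toy block-stacking world: a target word, a current stack, loose blocks on a table."""
    def __init__(self, goal="UNIVERSAL", stack=None, table=None):
        self.goal = list(goal)                      # target tower, index 0 = bottom
        self.stack = list(stack or [])
        self.table = set(table if table is not None else goal)

    # Sensors
    def CS(self):                                   # block on top of the current stack
        return self.stack[-1] if self.stack else None

    def _correct_prefix(self):
        n = 0
        while n < len(self.stack) and self.stack[n] == self.goal[n]:
            n += 1
        return n

    def TB(self):                                   # topmost block such that it and all below are correct
        n = self._correct_prefix()
        return self.stack[n - 1] if n else None

    def NN(self):                                   # next block needed above TB
        n = self._correct_prefix()
        return self.goal[n] if n < len(self.goal) else None

    # Actions
    def MS(self, x):                                # if x is on the table, move it to the top of the stack
        if x in self.table:
            self.table.discard(x)
            self.stack.append(x)
            return x
        return None

    def MT(self, x):                                # if x is in the stack, move the topmost block to the table
        if x in self.stack:
            self.table.add(self.stack.pop())
            return x
        return None

def EQ(x, y): return (x == y) or None
def NOT(x):   return None if x else True
def DU(body, pred, limit=100):                      # "do until": run body until pred is true (capped here)
    for _ in range(limit):
        body()
        if pred():
            return True
    return None

# The evolved program from the experiment slide; DU's arguments are wrapped in
# lambdas because Python evaluates arguments eagerly.
w = World("UNIVERSAL", stack=list("LASRE"), table=set("UNIV"))
EQ(DU(lambda: w.MT(w.CS()), lambda: NOT(w.CS())),   # move all blocks to the table
   DU(lambda: w.MS(w.NN()), lambda: NOT(w.NN())))   # build the correct stack
assert "".join(w.stack) == "UNIVERSAL"
```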

Eisenstein's Representation
[figure: event inputs (onScan, onHit, onRammed) feed AFSMs controlling the gun, the base, and other actuators]
- Each AFSM is a REX-like program
- Fixed-length encoding: 64 operations per AFSM, ~2000 bits per genome
Adapted from Jacob Eisenstein presentation
© Daniel S. Weld

Training
- Scaled fitness; mutation pegged to diversity
- Typical parameters: 200-500 individuals; 10% copy, 88% crossover, 2% elitism
- This takes a LONG TIME!!!
  - Sample from ~25 starting positions
  - Up to 50,000 battles per generation
  - 0.2-1.0 seconds per battle
  - 20 minutes to 3 hours per generation
Adapted from Jacob Eisenstein presentation
© Daniel S. Weld
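
A small sketch of one generation step using the quoted operator proportions (10% copy, ~88% crossover, 2% elitism); the mate-pool choice and helper names are placeholder assumptions:

```python
import random

def next_generation(population, fitness, crossover, copy_rate=0.10, elite_rate=0.02):
    """Build a new population with the quoted operator mix (the remainder is crossover)."""
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: len(ranked) // 2]                    # mate pool: fitter half (an assumption)
    new = ranked[: max(1, int(elite_rate * len(ranked)))]   # elitism: best individuals survive unchanged
    while len(new) < len(population):
        if random.random() < copy_rate:                     # straight copies
            new.append(random.choice(parents))
        else:                                               # crossover of two selected parents
            a, b = random.sample(parents, 2)
            new.append(crossover(a, b))
    return new
```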

Results
- Fixed starting position, one opponent: GP crushes all opposition; beats the "showcase" tank
- Randomized starting positions: wins 80% of battles against the "learning" tank; wins 50% against the "showcase" tank
- Multiple opponents: beats 4 out of 5 "learning" tanks
- Both at once: unsuccessful
Adapted from Jacob Eisenstein presentation
© Daniel S. Weld

Example Program
 #   Function          Input 1          Input 2
 1.  Random            ignore
 2.  Divide            Const_1          Const_2
 3.  Greater Than      Line 1           Line 2
 4.  Normalize Angle   Enemy bearing
 5.  Absolute Value    Line 4
 6.  Less Than         Const_90
 7.  And               Line 6           Line 3
 8.  Multiply          Const_10
 9.  Less Than         Enemy distance   Line 8
10.  And               Line 9           Line 7
11.  Multiply          Line 10
12.  Output            Turn gun left    Line 11
(Output column values shown on the slide: 0.87, 0.5, 1, -50, 50, 1, 1, 100)
Adapted from Jacob Eisenstein presentation
© Daniel S. Weld

Functions
- Greater than, less than, equal
- + - * / %
- Absolute value
- Random number
- Constant
- And, or, not
- Normalize relative angle
Adapted from Jacob Eisenstein presentation
© Daniel S. Weld

Search
- Problem spaces
- Blind: depth-first, breadth-first, iterative-deepening, iterative broadening
- Informed: best-first, Dijkstra's, A*, IDA*, SMA*, DFB&B, beam, hill climbing, limited discrepancy, AO*, LAO*, RTDP
- Local search
- Heuristics: evaluation, construction via relaxation; pattern databases
- Online methods
- Techniques from operations research
- Constraint satisfaction
- Adversary search
© Daniel S. Weld

Admissible Heuristics
f(x) = g(x) + h(x)
- g: cost so far
- h: underestimate of remaining costs
Where do heuristics come from?
© Daniel S. Weld
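
A compact sketch of A*-style best-first search driven by f(x) = g(x) + h(x); the `successors` and `h` callbacks and the state representation are assumptions for illustration:

```python
import heapq, itertools

def astar(start, goal_test, successors, h):
    """Best-first search ordered by f = g + h; an admissible h yields optimal solutions."""
    tie = itertools.count()                          # tie-breaker so heapq never compares states
    frontier = [(h(start), next(tie), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        for nxt, cost in successors(state):          # successors yields (next_state, step_cost) pairs
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):   # keep only the cheapest known path to nxt
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
    return None
```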

Relaxed Problems
- Derive an admissible heuristic from the exact cost of a solution to a relaxed version of the problem
- For transportation planning, relax the requirement that the car has to stay on the road ⇒ Euclidean distance
- For the blocks world, distance = # move operations; heuristic = number of misplaced blocks. What is the relaxed problem? (# out of place = 2, true distance to goal = 3)
- Cost of the optimal solution to the relaxed problem ≤ cost of the optimal solution for the real problem
© Daniel S. Weld

Simplifying Integrals
- vertex = formula
- goal = closed-form formula without integrals
- arcs = mathematical transformations
- heuristic = number of integrals still in the formula
What is being relaxed?
© Daniel S. Weld

Heuristics for the eight puzzle
[figure: a start board → the goal board]
What can we relax?
© Daniel S. Weld

Importance of Heuristics
[figure: example eight-puzzle board]
h1 = number of tiles in the wrong place
h2 = sum of distances of tiles from their correct locations

 D      IDS      A*(h1)   A*(h2)
 2         10        6        6
 4        112       13       12
 6        680       20       18
 8       6384       39       25
10      47127       93       39
12     364404      227       73
14    3473941      539      113
18        --      3056      363
24        --     39135     1641

© Daniel S. Weld
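
The two heuristics from the table, written for an n-puzzle state stored as a flat tuple of tile numbers with 0 for the blank (a representation I am assuming here):

```python
def h1(state, goal):
    """Number of tiles (ignoring the blank) that are in the wrong place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal, width=3):
    """Sum of Manhattan distances of each tile from its goal location."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        total += abs(idx // width - gidx // width) + abs(idx % width - gidx % width)
    return total
```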

Need More Power!
Performance of the Manhattan distance heuristic:
- 8 Puzzle: < 1 second
- 15 Puzzle: 1 minute
- 24 Puzzle: 65,000 years
Need even better heuristics!
Adapted from Richard Korf presentation
© Daniel S. Weld

Subgoal Interactions
- Manhattan distance assumes each tile can be moved independently of the others
- It underestimates because it doesn't consider interactions between tiles
Adapted from Richard Korf presentation
© Daniel S. Weld

Pattern Databases [Culberson & Schaeffer 1996]
- Pick any subset of tiles, e.g., 3, 7, 11, 12, 13, 14, 15
- Precompute a table: the optimal cost of solving just these tiles, for all possible configurations (57 million in this case)
- Use breadth-first search back from the goal state; state = positions of just these tiles (& blank)
Adapted from Richard Korf presentation
© Daniel S. Weld
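
A hedged sketch of the precomputation: breadth-first search backward from the goal in an abstract space that tracks only the chosen tiles and the blank. The state encoding and the inclusion of the blank are simplifying assumptions; a production table such as Korf's would be stored and indexed more carefully:

```python
from collections import deque

def build_pattern_db(goal, pattern_tiles, width=4):
    """Map each abstract state (positions of pattern tiles, blank position) to its optimal cost.
    Every other tile is a 'don't care'. Here every move costs 1; for a disjoint (additive)
    DB you would only count moves that displace a pattern tile."""
    def neighbors(pos):                                   # grid cells adjacent to a flat index
        r, c = divmod(pos, width)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < width and 0 <= c + dc < width:
                yield (r + dr) * width + (c + dc)

    start = (tuple(goal.index(t) for t in pattern_tiles), goal.index(0))
    db, frontier = {start: 0}, deque([start])
    while frontier:
        tiles, blank = frontier.popleft()
        cost = db[(tiles, blank)]
        for nxt in neighbors(blank):
            # sliding the blank into cell nxt moves whatever tile sits there into the old blank cell
            new_key = (tuple(blank if p == nxt else p for p in tiles), nxt)
            if new_key not in db:
                db[new_key] = cost + 1
                frontier.append(new_key)
    return db
```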

Using a Pattern Database
- As each state is generated:
  - Use the positions of the chosen tiles as an index into the DB
  - Use the lookup value as the heuristic, h(n)
- Admissible?
Adapted from Richard Korf presentation
© Daniel S. Weld

Combining Multiple Databases
- Can choose another set of tiles and precompute multiple tables
- How to combine the table values?
- E.g., optimal solutions to Rubik's cube were first found with IDA* using pattern DB heuristics
  - Multiple DBs were used (different subsets of cubies)
  - Most problems solved optimally in 1 day
  - Compare with 574,000 years for IDDFS
Adapted from Richard Korf presentation
© Daniel S. Weld
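
Since each individual lookup is admissible, the safe way to combine several ordinary pattern databases is to take the maximum; a one-line illustration (the `abstract` functions mapping a full state to each DB's index are assumed helpers):

```python
def h_max(state, databases):
    """databases: list of (table, abstract) pairs; each lookup is admissible, so the max is too."""
    return max(table[abstract(state)] for table, abstract in databases)
```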

Drawbacks of Standard Pattern DBs
- Since we can only take the max, there are diminishing returns on additional DBs
- Would like to be able to add values
Adapted from Richard Korf presentation
© Daniel S. Weld

Disjoint Pattern DBs
- Partition the tiles into disjoint sets (e.g., tiles 1-8 and tiles 9-15)
- For each set, precompute a table
  - E.g., the 8-tile DB has 519 million entries and the 7-tile DB has 58 million
- During search, look up the heuristic value for each set; the values can be added without overestimating!
- Manhattan distance is a special case of this idea where each set is a single tile
Adapted from Richard Korf presentation
© Daniel S. Weld
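
If each database counts only moves of its own tiles, the lookups can instead be summed; a sketch under that assumption, mirroring the max-combination helper above:

```python
def h_additive(state, databases):
    """Disjoint pattern DBs: each table counts only its own tiles' moves, so the values
    can be added without overestimating (Manhattan distance is the special case where
    every set holds a single tile)."""
    return sum(table[abstract(state)] for table, abstract in databases)
```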

Performance
- 15 Puzzle: 2000x speedup vs Manhattan distance; IDA* with the two DBs shown previously solves 15 Puzzles optimally in 30 milliseconds
- 24 Puzzle: 12 million x speedup vs Manhattan; IDA* can solve random instances in 2 days
  - Requires 4 DBs as shown; each DB has 128 million entries
  - Without PDBs: 65,000 years
Adapted from Richard Korf presentation
© Daniel S. Weld