Department of Computer Science Undergraduate Events
- SAP Code Slam: Sat. Oct 13 noon to Sun. Oct 14 noon, DMP 110
- IBM Info Session: Tues. Oct 16, 5:30 pm, Wesbrook 100
- Global Relay Open House: Thurs. Oct 18, 4:30 – 6:30 pm, 220 Cambie St., 2nd Floor

Stochastic Local Search
Computer Science CPSC 322, Lecture 15 (Textbook Chpt 4.8), Oct 10, 2012

Announcements
Thanks for the feedback; we'll discuss it on Mon.
Assignment-2, on CSP, will be out on Fri (programming!).

Lecture Overview
- Recap: Local Search in CSPs
- Stochastic Local Search (SLS)
- Comparing SLS algorithms

Local Search: Summary
A useful method in practice for large CSPs:
- Start from a possible world.
- Generate some neighbors ("similar" possible worlds).
- Move from the current node to a neighbor, selected to minimize/maximize a scoring function that combines:
  - information about how many constraints are violated
  - information about the cost/quality of the solution (you want the best solution, not just a solution)
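
The two ingredients above (a scoring function over possible worlds, and neighbors that differ in one variable's value) can be sketched in Python. This is a minimal illustration, not AIspace's implementation; the dict-based assignment representation and constraints-as-predicates are assumptions made for this example.

```python
# Hypothetical CSP representation (an assumption for illustration):
# an assignment is a dict {variable: value}; a constraint is a predicate
# over assignments; domains maps each variable to its list of values.

def num_conflicts(assignment, constraints):
    """Scoring function: the number of violated constraints (lower is better)."""
    return sum(1 for c in constraints if not c(assignment))

def neighbors(assignment, domains):
    """The 'similar' possible worlds: assignments differing in one variable's value."""
    return [{**assignment, var: val}
            for var in assignment
            for val in domains[var]
            if val != assignment[var]]
```

Note that with n variables and d values per domain, this neighbor relation yields n × (d − 1) neighbors per assignment.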

Hill Climbing
NOTE: Everything that will be said for Hill Climbing is also true for Greedy Descent.

Problems with Hill Climbing
- Local maxima
- Plateaus and shoulders (flat regions of the evaluation function)

The corresponding problem for Greedy Descent is a local minimum. Example: the 8-queens problem, at a local minimum with h = 1.

Even more problems arise in higher dimensions. E.g., ridges: a sequence of local maxima not directly connected to each other; from each local maximum you can only go downhill.

Lecture Overview
- Recap: Local Search in CSPs
- Stochastic Local Search (SLS)
- Comparing SLS algorithms

Stochastic Local Search
GOAL: We want our local search to
- be guided by the scoring function
- not get stuck in local maxima/minima, plateaus, etc.
SOLUTION: We can alternate
a) Hill-climbing steps
b) Random steps: move to a random neighbor.
c) Random restarts: reassign random values to all variables.
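
The alternation of (a), (b), and (c) can be sketched as a single loop. A minimal sketch, not the AIspace implementation; the function name, the parameters `p_random_step` and `restart_every`, and the dict-based assignment representation are all assumptions for illustration.

```python
import random

def stochastic_local_search(variables, domains, constraints,
                            max_steps=10_000, p_random_step=0.2,
                            restart_every=1_000):
    """Alternate hill-climbing (greedy) steps, random steps, and random
    restarts, always keeping the best assignment found so far."""
    def num_conflicts(a):
        return sum(1 for c in constraints if not c(a))

    def random_assignment():
        return {v: random.choice(domains[v]) for v in variables}

    current = random_assignment()
    best = dict(current)
    for step in range(max_steps):
        if num_conflicts(current) == 0:
            return current                        # solution found
        if (step + 1) % restart_every == 0:       # c) random restart
            current = random_assignment()
        elif random.random() < p_random_step:     # b) random step
            var = random.choice(variables)
            current = {**current, var: random.choice(domains[var])}
        else:                                     # a) hill-climbing step
            current = min(({**current, v: d}
                           for v in variables for d in domains[v]),
                          key=num_conflicts)
        if num_conflicts(current) < num_conflicts(best):
            best = dict(current)
    return best                                   # best found so far (anytime)
```

Note the anytime behaviour: even when the step budget runs out, the incumbent (best assignment seen so far) is returned.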

Which randomized method would work best in each of these two search spaces?
(Two plots: an evaluation function over a one-variable state space, labelled A and B.)
- Greedy descent with random steps best on A; greedy descent with random restart best on B
- Greedy descent with random steps best on B; greedy descent with random restart best on A
- They are equivalent

Answer: greedy descent with random steps works best on B, and greedy descent with random restart works best on A.
But these examples are simplified extreme cases for illustration; in practice, you don't know what your search space looks like. Usually, integrating both kinds of randomization works best.

Random Steps (Walk)
Let's assume that neighbors are generated as assignments that differ in one variable's value. How many neighbors are there, given n variables with domains of d values? n × (d − 1).
One strategy is to add randomness to the selection of the variable-value pair: sometimes choose the pair according to the scoring function, and sometimes choose a random one.
E.g., in 8-queens: how many neighbors? 8 × 7 = 56.

Random Steps (Walk): two-step
Another strategy: select a variable first, then a value.
Sometimes select the variable:
1. that participates in the largest number of conflicts
2. at random, among the variables that participate in some conflict
3. at random
Sometimes choose the value:
a) that minimizes the number of conflicts
b) at random
AIspace 2a: Greedy Descent with Min-Conflict Heuristic
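
The two-step strategy can be sketched on n-queens: the variable is a conflicted column, and the value is usually the row minimizing conflicts (the min-conflict heuristic), occasionally a random row. A sketch for illustration only, not AIspace's code; the function name and the `p_random_value` parameter are assumptions.

```python
import random

def min_conflicts_queens(n=8, max_steps=10_000, p_random_value=0.05):
    """Two-step SLS for n-queens: board[c] is the row of the queen in
    column c. Pick a conflicted column, then usually the min-conflict
    row, occasionally a random one."""
    def conflicts(board, col, row):
        # Queens attack along rows and diagonals (columns are distinct).
        return sum(1 for c in range(n) if c != col and
                   (board[c] == row or abs(board[c] - row) == abs(c - col)))

    board = [random.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(board, c, board[c]) > 0]
        if not conflicted:
            return board                          # no constraint violations
        col = random.choice(conflicted)           # variable in some conflict
        if random.random() < p_random_value:
            board[col] = random.randrange(n)      # value chosen at random
        else:                                     # value minimizing conflicts
            board[col] = min(range(n), key=lambda r: conflicts(board, col, r))
    return None                                   # stagnated within the budget
```

Returning `None` on a spent step budget models stagnation: a run that never terminates must be cut off somewhere.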

A successful application of SLS: scheduling the Hubble Space Telescope, reducing the time to schedule 3 weeks of observations from one week to around 10 seconds.

Example: SLS for RNA secondary structure design
An RNA strand is made up of four bases: cytosine (C), guanine (G), adenine (A), and uracil (U). The 2D/3D structure an RNA strand folds into is important for its function.
Predicting the structure for a given strand is "easy": O(n³). But what if we want a strand that folds into a certain structure?
- Local search over strands: search for one that folds into the right structure.
- Evaluation function for a strand: run the O(n³) prediction algorithm and evaluate how different the result is from our target structure. The evaluation function is only defined implicitly, but it can be evaluated by running the prediction algorithm.
(Figure: an RNA strand, GUCCCAUAGGAUGUCCCAUAGGA, and its secondary structure; prediction is the "easy" direction, design the "hard" one.)
The best algorithm to date is a local search algorithm, RNA-SSD, developed at UBC [Andronescu, Fejes, Hutter, Condon, and Hoos, Journal of Molecular Biology, 2004].

CSP/logic: formal verification
Hardware verification (e.g., at IBM) and software verification (small to medium programs). Most progress in the last 10 years has been based on encodings into propositional satisfiability (SAT).

(Stochastic) local search advantage: the online setting
When the problem can change, this is particularly important in scheduling. E.g., a schedule for an airline covers thousands of flights and thousands of personnel assignments, and a storm can render the schedule infeasible. The goal is to repair the schedule with the minimum number of changes. This can easily be done with a local search starting from the current schedule. Other techniques usually require more time and might find a solution requiring many more changes.

SLS limitations
- Typically there is no guarantee to find a solution even if one exists. SLS algorithms can sometimes stagnate: they get caught in one region of the search space and never terminate. This makes them very hard to analyze theoretically.
- They are not able to show that no solution exists: SLS simply won't terminate, and you don't know whether the problem is infeasible or the algorithm has stagnated.

SLS advantage: anytime algorithms
When should the algorithm be stopped? When a solution is found (e.g., no constraint violations), or when we are out of time and have to act NOW.
An anytime algorithm maintains the node with the best h found so far (the "incumbent"); given more time, it can improve its incumbent.

Lecture Overview
- Recap: Local Search in CSPs
- Stochastic Local Search (SLS)
- Comparing SLS algorithms

Evaluating SLS algorithms
SLS algorithms are randomized: the time taken until they solve a problem is a random variable. It is entirely normal to see runtime variations of 2 orders of magnitude in repeated runs, e.g., 0.1 seconds in one run and 10 seconds in the next, on the same problem instance (the only difference being the random seed). Sometimes an SLS algorithm doesn't terminate at all: stagnation.
If an SLS algorithm sometimes stagnates, what is its mean runtime (across many runs)? Infinity! In practice, one often counts timeouts as some fixed large value X. Still, summary statistics such as mean or median run time don't tell the whole story; e.g., they would penalize an algorithm that often finds a solution quickly but sometimes stagnates.

Comparing stochastic algorithms: challenge
Summary statistics, such as mean, median, and mode run time, don't tell the whole story. What is the running time for the runs in which an algorithm never finishes (infinite? the stopping time?)
(Plot: % of solved runs vs. runtime/steps.)

First attempt
How can you compare three algorithms when
A. one solves the problem 30% of the time very quickly but doesn't halt in the other 70% of the cases
B. one solves 60% of the cases reasonably quickly but doesn't solve the rest
C. one solves the problem in 100% of the cases, but slowly?
(Plot: % of solved runs vs. mean runtime/steps of solved runs.)

Runtime distributions are even more effective
A runtime distribution plots runtime (or number of steps) on the x axis against the proportion (or number) of runs solved within that runtime on the y axis, i.e., P(solved by this number of steps / this time). A log scale on the x axis is commonly used.
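
Such a distribution can be computed empirically from repeated runs. A minimal sketch; the function name and the protocol (the solver returns its step count, or None on stagnation) are assumptions for illustration. Runs that stagnate or exceed the timeout simply cap the curve below 100%.

```python
def runtime_distribution(solver, n_runs=100, timeout=1_000):
    """Run `solver` n_runs times; each run returns the number of steps it
    took, or None on stagnation. Returns (steps, fraction of ALL runs
    solved within that many steps) pairs: the empirical distribution."""
    solved = sorted(s for s in (solver() for _ in range(n_runs))
                    if s is not None and s <= timeout)
    return [(s, (i + 1) / n_runs) for i, s in enumerate(solved)]
```

Plotting these pairs (with the x axis on a log scale) gives exactly the curves discussed on the following slides.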

Comparing runtime distributions
x axis: runtime (or number of steps); y axis: proportion (or number) of runs solved within that runtime, i.e., P(solved by this number of steps / this time). Typically a log scale is used on the x axis.
Question: which algorithm (blue, green, or red) is most likely to solve the problem within 7 steps?

Answer: red is most likely to solve the problem within 7 steps.

Which algorithm (blue, green, or red) has the best median performance? I.e., which algorithm takes the fewest steps to be successful in 50% of the cases?

Answer: blue has the best median performance.

Comparing runtime distributions
(Plot: one algorithm solves 28% of runs within 10 steps and then stagnates; another solves 57% within 80 steps and then stagnates; a third is slow but does not stagnate.)
Crossover points: if we run longer than 80 steps, green is the best algorithm; if we run fewer than 10 steps, red is the best algorithm.

Runtime distributions in AIspace
Let's look at some algorithms and their runtime distributions on simple scheduling problem 2 in AIspace:
1. Greedy Descent
2. Random Sampling
3. Random Walk
4. Greedy Descent with random walk

What are we going to look at in AIspace
When selecting a variable first, followed by a value:
Sometimes select the variable: 1. that participates in the largest number of conflicts, 2. at random among the variables that participate in some conflict, 3. at random. Sometimes choose the value: a) that minimizes the number of conflicts, b) at random.
AIspace terminology: Random Sampling, Random Walk, Greedy Descent, Greedy Descent Min conflict, Greedy Descent with random walk, Greedy Descent with random restart, …

Stochastic Local Search
Key idea: combine greedily improving moves with randomization. As well as improving steps, we can allow a "small probability" of:
- Random steps: move to a random neighbor.
- Random restarts: reassign random values to all variables.
Stop when a solution is found (in a vanilla CSP, …………………………) or when we run out of time (return the best solution so far). Always keep the best solution found so far.

Learning Goals for today's class
You can:
- Implement SLS with random steps (1-step and 2-step versions) and random restarts.
- Compare SLS algorithms with runtime distributions.

Next Class
- More SLS variants
- Finish CSPs
- (If time) Start planning
Assign-2 will be out on Tue. Assignments will be weighted: A0 (12%), A1…A4 (22%) each.