Search Problems Russell and Norvig: Chapter 3, Sections 3.1 – 3.3 CS121 – Winter 2003

Problem-Solving Agent (diagram: an agent in an environment, connected to it through sensors and actuators; the "?" marks the agent's choice of actions)

Problem-Solving Agent (same diagram: to choose its actions, the agent needs Actions, an Initial state, and a Goal test)

State Space and Successor Function (the Actions are represented by a state space and a successor function)

Initial State (highlighted among the same components: actions, initial state, goal test, state space, successor function)

Goal Test (highlighted among the same components)

Example: 8-puzzle (figures: an initial state and the goal state)

Example: 8-puzzle

Size of the state space:
8-puzzle → 9!/2 = 181,440 states
15-puzzle → 16!/2 ≈ 10^13 states
24-puzzle → 25!/2 ≈ 10^25 states
Searching millions of states per second, exhaustive enumeration takes about 0.18 sec, 6 days, and 12 billion years, respectively.
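These counts are easy to verify. The short Python check below (an illustration, not part of the original slides) computes the number of reachable states of each puzzle, using the fact that exactly half of all tile permutations are solvable:

from math import factorial

# Reachable states of the (n^2 - 1)-puzzle on an n x n board:
# half of all permutations of the n^2 tiles are solvable.
for side, name in [(3, "8-puzzle"), (4, "15-puzzle"), (5, "24-puzzle")]:
    reachable = factorial(side * side) // 2
    print(f"{name}: {reachable:,} reachable states (~{reachable:.1e})")

This prints 181,440 for the 8-puzzle, about 1.0 x 10^13 for the 15-puzzle, and about 7.8 x 10^24 for the 24-puzzle.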

Search Problem: state space, initial state, successor function, goal test, path cost

Search Problem (State space): each state is an abstract representation of the environment; the state space is discrete

Search Problem (Initial state): usually the current state; sometimes one or several hypothetical states ("what if …")

Search Problem (Successor function): [state → subset of states], an abstract representation of the possible actions

Search Problem (Goal test): usually a condition; sometimes the description of a state

Search Problem (Path cost): [path → positive number]; usually, path cost = sum of step costs, e.g., number of moves of the empty tile
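These five components map naturally onto a small data structure. The Python sketch below is only an illustration of the formulation (the class and field names are assumptions, not from the slides); the default step cost of 1 corresponds to counting moves of the empty tile in the 8-puzzle:

from dataclasses import dataclass
from typing import Callable, Hashable, Iterable, Tuple

@dataclass
class SearchProblem:
    """A search problem: initial state, successor function, goal test, path cost."""
    initial_state: Hashable
    successors: Callable[[Hashable], Iterable[Tuple[str, Hashable]]]  # state -> (action, next state) pairs
    is_goal: Callable[[Hashable], bool]                               # goal test: a condition on states
    step_cost: Callable = lambda state, action, nxt: 1                # e.g., one move of the empty tile

    def path_cost(self, path):
        """Path cost = sum of step costs along [(state, action, next_state), ...]."""
        return sum(self.step_cost(s, a, n) for s, a, n in path)

# Toy usage: states are integers, the only action adds 1, the goal is to reach 3.
toy = SearchProblem(initial_state=0,
                    successors=lambda s: [("+1", s + 1)],
                    is_goal=lambda s: s == 3)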

Search of State Space (a sequence of figures: exploring the state space from the initial state incrementally builds a search tree)

Simple Agent Algorithm: Problem-Solving-Agent
1. initial-state ← sense/read state
2. goal ← select/read goal
3. successor ← select/read action models
4. problem ← (initial-state, goal, successor)
5. solution ← search(problem)
6. perform(solution)
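A runnable Python version of this loop might look as follows; it reuses the SearchProblem sketch above, a very small breadth-first tree search stands in for the generic search(problem) step, and the sensing/actuation hooks and names are placeholders rather than anything prescribed by the slides:

from collections import deque

def search(problem):
    """Minimal breadth-first tree search: returns a list of actions, or None.

    No repeated-state checking, so this sketch assumes a finite, tree-shaped
    state space (blind search strategies are covered in later lectures).
    """
    frontier = deque([(problem.initial_state, [])])      # (state, actions taken so far)
    while frontier:
        state, actions = frontier.popleft()
        if problem.is_goal(state):
            return actions
        for action, nxt in problem.successors(state):
            frontier.append((nxt, actions + [action]))
    return None

def problem_solving_agent(sense_state, select_goal, action_models, perform):
    initial_state = sense_state()                        # 1. sense/read state
    goal_test = select_goal(initial_state)               # 2. select/read goal (as a test on states)
    successors = action_models                           # 3. select/read action models
    problem = SearchProblem(initial_state, successors,   # 4. problem <- (initial-state, goal, successor)
                            is_goal=goal_test)
    solution = search(problem)                           # 5. solution <- search(problem)
    if solution is not None:
        perform(solution)                                # 6. perform(solution): open-loop execution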

Example: 8-queens. Place 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal. (Figures: a solution and a non-solution.)

Example: 8-queens, Formulation #1. States: any arrangement of 0 to 8 queens on the board. Initial state: 0 queens on the board. Successor function: add a queen to any square. Goal test: 8 queens on the board, none attacked. → 64^8 states with 8 queens

Example: 8-queens, Formulation #2. States: any arrangement of k = 0 to 8 queens in the k leftmost columns with none attacked. Initial state: 0 queens on the board. Successor function: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen. Goal test: 8 queens on the board. → 2,067 states
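Formulation #2 translates almost directly into code. In the hypothetical sketch below a state is a tuple of row indices, one per already-filled column (leftmost first); the successor function only places non-attacked queens, so the goal test reduces to counting queens:

def attacks(r1, c1, r2, c2):
    """True if queens at (r1, c1) and (r2, c2) share a row, column, or diagonal."""
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def queen_successors(state):
    """Add a queen to any non-attacked square of the leftmost empty column."""
    col = len(state)                      # index of the leftmost empty column
    if col == 8:
        return                            # board full, no successors
    for row in range(8):
        if not any(attacks(row, col, r, c) for c, r in enumerate(state)):
            yield (f"row {row} in column {col}", state + (row,))

def queens_goal(state):
    """Goal test: 8 queens on the board (non-attacking by construction)."""
    return len(state) == 8

# Initial state: 0 queens on the board. Plugging everything into the earlier
# sketches, search(SearchProblem((), queen_successors, queens_goal)) finds a solution.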

Example: Robot navigation. What is the state space?

Example: Robot navigation. Cost of one horizontal/vertical step = 1; cost of one diagonal step = √2
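On such a grid, the successor function and step costs can be sketched as follows; the 8-connected grid, the coordinate convention, and the free(x, y) obstacle test are assumptions made only for illustration:

from math import sqrt

# 8-connected grid: 4 axis-aligned moves of cost 1, 4 diagonal moves of cost sqrt(2).
MOVES = [(dx, dy, 1.0 if dx == 0 or dy == 0 else sqrt(2.0))
         for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def grid_successors(state, free):
    """Yield (action, next_state, step_cost) for each neighbouring free cell.

    free(x, y) is an assumed predicate that is True when cell (x, y) is not an obstacle.
    """
    x, y = state
    for dx, dy, cost in MOVES:
        nxt = (x + dx, y + dy)
        if free(*nxt):
            yield ((dx, dy), nxt, cost)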

Example: Robot navigation

Cost of one step = ???

Example: Robot navigation

Cost of one step: length of segment
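When the robot instead moves along straight-line segments between selected points, the step cost is simply the Euclidean length of the segment, as in this small illustrative helper:

from math import dist   # Python 3.8+

def segment_cost(p, q):
    """Cost of one step = length of the straight segment from point p to point q."""
    return dist(p, q)

# A diagonal step across one unit cell costs sqrt(2), about 1.414.
assert abs(segment_cost((0, 0), (1, 1)) - 2 ** 0.5) < 1e-9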

Example: Robot navigation

Example: Assembly Planning (figures: an initial state and a goal state). Successor function: merge two subassemblies. This is a complex function: it must determine whether a collision-free merging motion exists.

Example: Assembly Planning

Assumptions in Basic Search: the environment is static; the environment is discretizable; the environment is observable; the actions are deterministic → open-loop solution

Search Problem Formulation: Real-world environment → Abstraction
Validity: Can the solution be executed? Does the state space contain the solution?
Usefulness: Is the abstract problem easier than the real-world problem?
Without abstraction an agent would be swamped by the real world

Search Problem Variants: one or several initial states; one or several goal states; the solution is the path or a goal node (in the 8-puzzle problem it is the path to a goal node, in the 8-queens problem it is a goal node); any, or the best, or all solutions

Important Parameters
Number of states in the state space:
8-puzzle → 181,440 states
15-puzzle → ~10^13 states
24-puzzle → ~10^25 states
8-queens → 2,067 states
There exist techniques to solve n-queens problems efficiently! Stating a problem as a search problem is not always a good idea.

Important Parameters: number of states in the state space; size of memory needed to store a state; running time of the successor function

Applications: route finding (airline travel, telephone/computer networks); pipe routing, VLSI routing; pharmaceutical drug design; robot motion planning; video games

Task Environment          | Observable | Deterministic | Episodic   | Static  | Discrete   | Agents
Crossword puzzle          | Fully      | Deterministic | Sequential | Static  | Discrete   | Single
Chess with a clock        | Fully      | Strategic     | Sequential | Semi    | Discrete   | Multi
Poker                     | Partially  | Strategic     | Sequential | Static  | Discrete   | Multi
Backgammon                | Fully      | Stochastic    | Sequential | Static  | Discrete   | Multi
Taxi driving              | Partially  | Stochastic    | Sequential | Dynamic | Continuous | Multi
Medical diagnosis         | Partially  | Stochastic    | Sequential | Dynamic | Continuous | Single
Image analysis            | Fully      | Deterministic | Episodic   | Semi    | Continuous | Single
Part-picking robot        | Partially  | Stochastic    | Episodic   | Dynamic | Continuous | Single
Refinery controller       | Partially  | Stochastic    | Sequential | Dynamic | Continuous | Single
Interactive English tutor | Partially  | Stochastic    | Sequential | Dynamic | Discrete   | Multi
Figure 2.6 Examples of task environments and their characteristics.

Summary: problem-solving agent; state space, successor function, search; examples: 8-puzzle, 8-queens, route finding, robot navigation, assembly planning; assumptions of basic search; important parameters

Future Classes: search strategies (blind strategies, heuristic strategies); extensions (uncertainty in state sensing, uncertainty in the action model, on-line problem solving)