1 Artificial Intelligence, CS 165A, Thursday, October 4, 2007
Intelligent agents (Ch. 2)
Blind search (Ch. 3)


2 Notes New room!

3 Review
What is an agent?
– An entity that combines cognition, perception, and action to behave autonomously, purposively, and flexibly in some environment
What does an agent do?
– An agent perceives its environment, reasons about its goals, and acts upon the environment
What does PEAS stand for (description of an agent)?
– Performance measure, Environment, Actuators, Sensors
What are we about to do right now?

4 Thursday Quiz #1
Write your name, your perm #, and the date at the top of the page.
1. Humans often solve NP-complete problems in times much shorter than the worst-case complexity bounds would suggest. Does McCarthy consider this evidence that computers are intrinsically incapable of doing what people do?
2. Why or why not?

5 Generic Agent Program
Implementing f : P* → A, or f(P*) = A
– Lookup table?
– Learning? Knowledge, past percepts, past actions
e.g., the table-driven agent:
function TABLE-DRIVEN-AGENT(percept) returns an action
  persistent: percepts, a sequence, initially empty
              table, a table of actions, indexed by percept sequences
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
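As a sketch of the lookup-table idea in Python (the percepts, actions, and table entries here are illustrative, not from the lecture):

```python
# Table-driven agent sketch: the table maps complete percept sequences
# (P*) to actions, so the action can depend on the whole history.

def make_table_driven_agent(table, default_action="noop"):
    percepts = []  # the percept history, initially empty

    def agent(percept):
        percepts.append(percept)  # append percept to percepts
        # action <- LOOKUP(percepts, table), with a fallback for
        # histories the table does not cover
        return table.get(tuple(percepts), default_action)

    return agent

# A two-entry table: note the action depends on the full history,
# not just the latest percept
agent = make_table_driven_agent({
    ("dirty",): "suck",
    ("dirty", "clean"): "right",
})
```

The closure keeps the growing percept list, which is exactly why lookup tables do not scale: the table must cover every percept sequence the agent might ever see.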

6 Basic types of agent programs
– Simple reflex agent
– Model-based reflex agent
– Goal-based agent
– Utility-based agent
– Learning agent

7 Simple Reflex Agent
Input/output associations
Condition-action rule: "If-then" rule (production rule)
– If condition then action (if in a certain state, do this)
– If antecedent then consequent

8 Simple Reflex Agent Simple state-based agent – Classify the current percept into a known state, then apply the rule for that state
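A minimal sketch of such a state-based reflex agent, assuming a toy percept format and rule set of my own:

```python
# Simple reflex agent sketch: classify the current percept into a known
# state, then fire that state's condition-action ("if-then") rule.
# The percept format and the rules are illustrative.

RULES = {
    "dirty": "suck",   # if dirty then suck
    "clean": "move",   # if clean then move on
}

def simple_reflex_agent(percept):
    state = "dirty" if percept.get("dirt") else "clean"  # classify percept
    return RULES[state]                                  # apply the rule
```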

9 Model-Based Reflex Agent Internal state – keeps track of the world, models the world

10 Model-Based Reflex Agent State-based agent – Given the current state, classify the current percept into a known state, then apply the rule for that state
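A sketch of the difference the internal state makes, with illustrative square names and actions:

```python
# Model-based reflex sketch: internal state lets the agent act on what
# it remembers about the world, not only on the current percept.
# The two-square world and action names are made up for illustration.

class ModelBasedAgent:
    def __init__(self):
        self.known_clean = set()   # internal model: squares confirmed clean

    def act(self, percept):
        loc, dirty = percept
        if dirty:
            return "suck"
        self.known_clean.add(loc)  # update the model from the percept
        # consult the model: head for a square not yet confirmed clean
        return "right" if "B" not in self.known_clean else "left"
```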

11 Goal-Based Agent Goal: immediate, or long sequence of actions? –Search and planning – finding action sequences that achieve the agent’s goals

12 Utility-Based Agent
"There are many ways to skin a cat"
Utility function: specifies degree of usefulness (happiness)
– Maps a state onto a real number
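As a sketch, assuming a made-up state representation and utility function:

```python
# Utility-based sketch: a utility function maps a state onto a real
# number; the agent picks the action whose successor state scores
# highest. The state fields here are invented for illustration.

def utility(state):
    # happier with less dirt and less distance travelled
    return -(2 * state["dirt"] + state["distance"])

def best_action(successors):
    # successors: dict mapping each action to its resulting state
    return max(successors, key=lambda a: utility(successors[a]))
```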

13 Learning Agent

14 Environments
Properties of environments
– Fully vs. partially observable
– Deterministic vs. stochastic
– Episodic vs. sequential
– Friendly vs. hostile
– Static vs. dynamic
– Discrete vs. continuous
– Single agent vs. multiagent
The environment types largely determine the agent design
The real world is partially observable, stochastic, sequential, hostile, dynamic, and continuous
– Bummer…

15 Problem Solving and Search Chapter 3

16 Problem Solving Agents
Task: Find a sequence of actions that leads to desirable (goal) states
– Must define problem and solution
Finding a solution is typically a search process in the problem space
– Solution = a path through the state space from the initial state to a goal state
– Optimal search finds the least-cost solution
Search algorithm
– Input: problem statement (incl. goal)
– Output: sequence of actions that leads to a solution
Formulate, search, execute (action)

17 Search Strategies Uninformed (blind) search (Chap. 3) –Can only distinguish goal state from non-goal state Informed (heuristic) search (Chap. 4) –Can evaluate states

18 Problem Formulation and Search
Problem formulation
– State-space description (S, S0, SG, O, g)
  S: possible states
  S0: initial state of the agent
  SG: goal state(s)
  – Or equivalently, a goal test G(S)
  O: operators, O: {S} → {S}
  – Describes the possible actions of the agent
  g: path cost function, assigns a cost to a path/action
At any given time, which possible action Oi is best?
– Depends on the goal, the path cost function, the future sequence of actions…
Agent's strategy: Formulate, Search, and Execute
– This is offline (or batch) problem solving
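The state-space description above can be sketched as a small Python class; the class and the toy problem in it are mine, not a standard interface:

```python
# State-space problem sketch mirroring (S, S0, SG, O, g).

class Problem:
    def __init__(self, initial, goal, operators, step_cost):
        self.initial = initial        # S0
        self.goal = goal              # SG
        self.operators = operators    # O: state -> iterable of successors
        self.step_cost = step_cost    # g: (s, s') -> cost of that step

    def goal_test(self, state):       # G(S)
        return state == self.goal

    def path_cost(self, path):        # total g along a path
        return sum(self.step_cost(a, b) for a, b in zip(path, path[1:]))
```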

19 Typical assumptions for simple PS agent
– Environment is observable
– Environment is static
– Environment is discrete
– Environment is deterministic

20 State-Space Diagrams
A state-space description can be represented by a state-space diagram, which shows
– States (incl. initial and goal)
– Operators/actions (state transitions)
– Path costs

21 Example: Romania
You're in Arad, Romania, and you need to get to Bucharest as quickly as possible to catch your flight.
Formulate problem
– States: various cities
– Operators: drive between cities
Formulate goal
– Be in Bucharest before flight leaves
Find solution
– Actual sequence of cities from Arad to Bucharest
– Minimize driving distance/time

22 Romania (cont.)

23 Romania (cont.)
Problem description
– {S}: cities (ci)
– S0: Arad
– SG: Bucharest
  – G(S): Is the current state (S) Bucharest?
– {O}: { ci → cj, for some i and j }
– gij:
  – Driving distance between ci and cj?
  – Time to drive from ci to cj?
  – 1?
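A sketch of this formulation in Python; the road fragment and distances follow the standard Russell & Norvig map, and the helper names are my own:

```python
# A fragment of the Romania map as an undirected weighted graph, with
# the slide's formulation: G(S) tests for Bucharest, g is driving
# distance in km.

ROADS = {
    ("Arad", "Sibiu"): 140, ("Arad", "Zerind"): 75,
    ("Arad", "Timisoara"): 118, ("Sibiu", "Fagaras"): 99,
    ("Sibiu", "Rimnicu Vilcea"): 80, ("Fagaras", "Bucharest"): 211,
    ("Rimnicu Vilcea", "Pitesti"): 97, ("Pitesti", "Bucharest"): 101,
}

def neighbors(city):                  # operators O: ci -> cj
    out = {}
    for (a, b), d in ROADS.items():
        if a == city:
            out[b] = d
        if b == city:
            out[a] = d
    return out

def goal_test(state):                 # G(S): is S Bucharest?
    return state == "Bucharest"

def path_cost(path):                  # g: sum of leg distances
    return sum(neighbors(a)[b] for a, b in zip(path, path[1:]))
```

On this fragment the route through Rimnicu Vilcea and Pitesti (418 km) beats the one through Fagaras (450 km).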

24 Possible paths
(Figure: a tree of paths out of Arad, branching through Zerind and Oradea, through Sibiu to Fagaras and R. Vilcea/Pitesti toward Bucharest, and through Timisoara to Lugoj, Mehadia, and Dobreta)
Which is best?

25 Branching Factor
If there are N possible choices at each state, then the branching factor is N
If it takes M steps (state transitions) to get to the goal state, then it may be the case that O(N^M) states have to be checked
– N = 3, M = 5 → N^M = 243
– N = 5, M = 10 → N^M = 9,765,625
– N = 8, M = 15 → N^M = 35,184,372,088,832
Ouch…. Combinatorial explosion!
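The arithmetic above, checked in code:

```python
# The slide's combinatorial-explosion arithmetic: with branching
# factor N and solution depth M, up to N**M states may be examined.

def states_to_check(branching_factor, depth):
    return branching_factor ** depth
```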

26 Abstraction
The real world is highly complex!
– The state space must be abstracted for problem-solving
– Simplify and aggregate
– Can't represent all the details
Choosing a good abstraction
– Remove as much detail as possible while retaining validity

27 Example Problem: 8-Puzzle
States: various configurations of the puzzle
Operators: movements of the blank
Goal test: state matches goal state
Path cost: each move costs 1
How many states are there? 9! = 362,880

28 One way to represent the state
Encode each board configuration as a base-9 number: the tile in square i contributes tile × 9^i. A solution is then a sequence of blank moves, each yielding a new encoded state.
Action sequence: > ^ ^ > V V < ^ < V > ^ ^ < V > > ^ < V
Goal State: 1*9^0 + 2*9^1 + 3*9^2 + 8*9^3 + 0*9^4 + 4*9^5 + 7*9^6 + 6*9^7 + 5*9^8
Start State: 5*9^0 + 4*9^1 + 0*9^2 + 6*9^3 + 1*9^4 + 8*9^5 + 7*9^6 + 3*9^7 + 2*9^8
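The two base-9 sums can be computed directly; the tile orders are read off the slide's expressions, and the helper name is mine:

```python
# Computing the slide's base-9 state encodings for the 8-puzzle.

def encode(tiles):
    # the tile in square i contributes tiles[i] * 9**i
    return sum(t * 9 ** i for i, t in enumerate(tiles))

GOAL  = [1, 2, 3, 8, 0, 4, 7, 6, 5]   # digits of the goal expression
START = [5, 4, 0, 6, 1, 8, 7, 3, 2]   # digits of the start expression
```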

29 8-Puzzle is hard (by definition)!
Optimal solution of the N-puzzle family of problems is NP-complete
– Exponential increase in computation with N
– Uninformed search will do very poorly
Ditto for the Traveling Salesman Problem (TSP)
– Start and end in Bucharest, visit every city at least once
– Find the shortest tour
Ditto for lots of interesting problems!

30 Example: MU-Puzzle
States: strings comprising the letters M, I, and U
Initial state: MI
Goal state: MU
Operators (where x stands for any string, including the null string):
1. xI → xIU ("Append U")
2. Mx → Mxx ("Replicate x")
3. xIIIx → xUx ("Replace III with U")
4. xUUx → xx ("Drop UU")
Each rule describes the whole string, not a subset
Path cost: one per step
Try it
– Can you draw the state-space diagram?
– Are you guaranteed a solution?
MI → MII → MIIII → MUI → MUIU → MUIUUIU → MUIUUIUUIUUIU → MUIIIIU → MUUIU → MIU → …

31 Clarification of the MU-puzzle rules:
1. If the string ends with I you may add U; for example, MI becomes MIU
2. If the string is Mx then you may create Mxx; for example, MI becomes MII; MIU becomes MIUIU
3. If the string contains III you may replace those three characters with U
4. If the string contains UU you may eliminate both characters
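A successor function implementing the clarified rules (a sketch; the function name is mine):

```python
# Successor function for the MU-puzzle. Rules 3 and 4 are applied at
# every position after the leading M, per the clarification above.

def successors(s):
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI -> xIU
    if s.startswith("M"):
        out.add("M" + s[1:] * 2)              # rule 2: Mx -> Mxx
    for i in range(1, len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3: III -> U
    for i in range(1, len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4: drop UU
    return out
```

Searching from MI with these operators never reaches MU: the count of I's starts at 1 and stays non-divisible by 3 (rule 2 doubles it, rule 3 subtracts 3), while MU would need zero I's.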

32 Example: The Vacuum World
Simplified world: 2 grids
States: location of vacuum, dirt in grids
Operators: move left, move right, suck dirt
Goal test: grids free of dirt
Path cost: each action costs 1
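This formulation can be sketched as a transition function; the state encoding and names are my own:

```python
# Vacuum-world sketch: state = (location, dirt_in_A, dirt_in_B),
# actions L, R, S, each with unit cost.

def step(state, action):
    loc, dirt_a, dirt_b = state
    if action == "L":
        return ("A", dirt_a, dirt_b)
    if action == "R":
        return ("B", dirt_a, dirt_b)
    # "S": suck dirt at the current square
    return (loc, False, dirt_b) if loc == "A" else (loc, dirt_a, False)

def goal_reached(state):
    return not state[1] and not state[2]   # both squares free of dirt
```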

33 The Vacuum World (cont.)
How to reach the goal?
– If starting at top left state: S, R, S
– If starting at middle far right state: L, S
What if agent doesn't know its initial state?
– You probably need to redraw the state diagram!
– Represent initial uncertainty as one "meta-state"
  – For example…
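One way to sketch the "meta-state" idea: a belief state is the set of physical states the agent might be in, and an action maps every member through the ordinary transition function. The encoding and the conformant plan below are illustrative:

```python
# Belief-state ("meta-state") sketch for the two-square vacuum world:
# state = (location, dirt_in_A, dirt_in_B).

def step(state, action):
    loc, dirt_a, dirt_b = state
    if action == "L":
        return ("A", dirt_a, dirt_b)
    if action == "R":
        return ("B", dirt_a, dirt_b)
    # "S": suck at the current square
    return (loc, False, dirt_b) if loc == "A" else (loc, dirt_a, False)

# total ignorance: all 8 physical states are possible
ALL_STATES = {(loc, da, db)
              for loc in "AB" for da in (False, True) for db in (False, True)}

def update_belief(belief, action):
    return {step(s, action) for s in belief}
```

In this toy model the action sequence R, S, L, S collapses the belief from all 8 states down to the single all-clean state, even though the agent never observes anything.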