© Copyright 2008 STI INNSBRUCK Intelligent Systems Problem Solving Methods – Lecture 7 Prof. Dieter Fensel (& James Scicluna)

Agenda
– Agents
– Environments
– Rational Agents
– Environment Types
– Problem Solving Agent Types
– Problem Types
– Problem Formulation
– Example

Agents
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
– Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators
– Robotic agent: cameras and infrared range finders for sensors; various motors for actuators

Environments
The agent function maps from percept histories to actions: f: P* → A
The agent program runs on the physical architecture to produce f:
agent = architecture + program
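The mapping f: P* → A can be made concrete with a small sketch. The table-driven construction below is an illustration, not from the slides; the two-entry table and its percept format are hypothetical.

```python
# Minimal sketch: an agent program implementing f: P* -> A by keeping the
# full percept history and looking it up in a (hypothetical) table.
def make_table_driven_agent(table):
    """Return an agent program backed by a percept-sequence -> action table."""
    percepts = []  # the percept history, an element of P*

    def program(percept):
        percepts.append(percept)
        # f maps the *whole* history so far to an action
        return table.get(tuple(percepts))

    return program

# Hypothetical two-step table for illustration:
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> Right
```

The table grows exponentially with history length, which is why the later agent types replace the table with a program.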

Vacuum-cleaner world
Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

Rational agents
An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.
Performance measure: an objective criterion for success of an agent's behavior
– E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, amount of time taken, amount of electricity consumed, amount of noise generated, etc.
– It is better to design performance measures according to what one actually wants in the environment than according to how one thinks the agent should behave.
Rationality is distinct from omniscience (all-knowing with infinite knowledge).
Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration).
An agent is autonomous if its behavior is determined by its own experience (ability to learn and adapt).

PEAS
Performance measure, Environment, Actuators, Sensors: these need to be specified in order to design an intelligent agent.
Examples:
– Medical Diagnosis System Agent
Performance measure: healthy patient, minimize costs, lawsuits
Environment: patient, hospital, staff
Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
Sensors: keyboard (entry of symptoms, findings, patient's answers)
– Automated Taxi Driver Agent
Performance measure: safe, fast, legal, comfortable trip, maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering wheel, accelerator, brake, signal, horn
Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard

Environment Types
Fully observable (vs. partially observable)
– An agent's sensors give it access to the complete state of the environment at each point in time.
Deterministic (vs. stochastic)
– The next state of the environment is completely determined by the current state and the action executed by the agent.
– If the environment is deterministic except for the actions of other agents, then the environment is strategic.
Episodic (vs. sequential)
– The agent's experience is divided into atomic episodes: each episode consists of the agent perceiving and then performing a single action, and the choice of action in each episode depends only on the episode itself.
Static (vs. dynamic)
– The environment is unchanged while an agent is deliberating.
– The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does.
Discrete (vs. continuous)
– A limited number of distinct, clearly defined percepts and actions.
Single agent (vs. multiagent)
– An agent operating by itself in an environment.

Environment types

                     Chess with a clock   Crossword puzzle   Taxi driving
Fully observable     Fully                Fully              Partially
Deterministic        Strategic            Deterministic      Stochastic
Episodic             Sequential           Sequential         Sequential
Static               Semi                 Static             Dynamic
Discrete             Discrete             Discrete           Continuous
Single agent         Single/Multi         Single             Multi

The environment type largely determines the agent design.
The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, multi-agent.

Agent Functions and Types
An agent is completely specified by the agent function mapping percept sequences to actions.
The aim is to find a way to implement the rational agent function concisely.
Four basic types in order of increasing generality, plus learning agents:
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents
– Learning agents
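The simplest of these types can be written in a few lines. Below is a sketch (the function name and percept format are assumptions) of a simple reflex agent for the vacuum world: it selects an action from the current percept only, with no history and no internal model.

```python
# Sketch of a simple reflex agent for the vacuum world: the action depends
# only on the current percept (location, status), never on past percepts.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"          # condition-action rule: dirty -> clean it
    return "Right" if location == "A" else "Left"  # otherwise move on

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))  # -> Right
```

A model-based variant would additionally track which squares are already known to be clean; a goal-based one would plan toward "no dirty locations".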

Problem Solving Agent Types
– Reflex-based
– Model-based
– Goal-based

Problem-solving agents
– Goal formulation: limiting the objectives that the agent is trying to achieve
– Problem formulation: deciding what actions and states to consider, given a goal
– Search: looking for "the best" sequence of actions that leads to the goal
– Execution

Some background for the next slides

Example: Problem-solving agent
An agent on holiday in Romania, currently in Arad; the flight leaves tomorrow from Bucharest.
– Goal formulation: limiting the objectives that the agent is trying to achieve: be in Bucharest tomorrow
– Problem formulation: what actions and states to consider, given the goal:
states: being in various cities
actions: drive between cities
– Search: choosing "the best" sequence of actions that leads to the goal (requires knowledge about actions and states):
sequence of visited cities, e.g., Arad, Sibiu, Fagaras, Bucharest
the agent uses a map of Romania
– Execution

Example: Romania (map figure)

Problem-solving agents (pseudocode figure)
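The pseudocode figure on this slide did not survive the transcript. Below is a minimal Python sketch of the formulate-goal / formulate-problem / search cycle described on the previous slide; all names, the tiny two-edge map, and the breadth-first helper are assumptions for illustration, not the deck's own code.

```python
def simple_problem_solving_agent(state, formulate_goal, formulate_problem, search):
    """One planning episode: formulate a goal and problem, then search for a plan."""
    goal = formulate_goal(state)
    problem = formulate_problem(state, goal)
    return search(problem)  # a sequence of actions, or None if search fails

# Hypothetical wiring: plan a route on a tiny fragment of a road 'map'.
edges = {"Arad": ["Sibiu"], "Sibiu": ["Bucharest"], "Bucharest": []}

def bfs(problem):
    start, goal = problem
    frontier = [(start, [])]          # (city, path taken so far)
    while frontier:
        node, path = frontier.pop(0)  # FIFO -> breadth-first
        if node == goal:
            return path
        for nxt in edges[node]:
            frontier.append((nxt, path + [nxt]))
    return None

plan = simple_problem_solving_agent(
    "Arad",
    formulate_goal=lambda s: "Bucharest",
    formulate_problem=lambda s, g: (s, g),
    search=bfs,
)
print(plan)  # -> ['Sibiu', 'Bucharest']
```

The execution phase would then pop actions off `plan` one per step, replanning once the sequence is exhausted.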

Some notes
The algorithm uses a simplified representation/notion of states and actions (abstraction). It works only if the environment is
– Static: we do not pay attention to any changes occurring in the environment in between
– Observable: the initial state is known
– Discrete: we can enumerate the actions
– Deterministic: we do not handle events occurring while executing the action sequence

Problem types
Deterministic, fully observable → single-state problem
– Agent knows exactly which state it will be in; solution is a sequence of actions
Partially observable → sensorless problem (conformant problem)
– Agent may have no idea where it is
– Reasoning about belief states
– Solution is a sequence of actions
Nondeterministic and/or partially observable → contingency problem
– Percepts provide new information about the current state
– Solution is a tree with state conditions as nodes and actions as edges
– Often search and execution are interleaved
Unknown state space → exploration problem

Example: vacuum world
Single-state, start in #5. Solution? [Right, Suck]

Example: vacuum world
Single-state, start in #5. Solution? [Right, Suck]
Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]
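The sensorless reasoning above can be checked mechanically by applying each action to every state in the belief set. The sketch below assumes an 8-state numbering consistent with the slide's answers (state = robot location, dirt in A, dirt in B); the numbering itself did not survive the transcript, so treat it as an illustration.

```python
# Belief-state update for the sensorless vacuum world.
# Assumed numbering, consistent with "Right goes to {2,4,6,8}" and "#5 or #7":
STATES = {
    1: ("A", True, True),   2: ("B", True, True),
    3: ("A", True, False),  4: ("B", True, False),
    5: ("A", False, True),  6: ("B", False, True),
    7: ("A", False, False), 8: ("B", False, False),
}
NUM = {s: n for n, s in STATES.items()}

def step(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Suck":  # cleans the current square
        return (loc, False, dirt_b) if loc == "A" else (loc, dirt_a, False)
    return state

def update(belief, action):
    """Apply an action to every state in the belief set."""
    return {NUM[step(STATES[n], action)] for n in belief}

belief = set(range(1, 9))
for act in ["Right", "Suck", "Left", "Suck"]:
    belief = update(belief, act)
print(sorted(belief))  # -> [7]: the plan funnels all 8 states into one clean state
```

The plan [Right, Suck, Left, Suck] works precisely because every action shrinks or preserves the belief set regardless of where the agent actually started.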

Example: vacuum world
Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]
Contingency
– Nondeterministic: action "Suck" may dirty a clean carpet
– Partially observable: location, dirt at current location
– Percept: [L, ¬Dirty], i.e., start in #5 or #7
Solution? [Right, if dirt then Suck]

Single-state problem formulation
A problem is defined by four items:
1. initial state, e.g., "in Arad"
2. description of actions: successor function S(x) = set of action–state pairs
– e.g., S(Arad) = { <Arad → Zerind, Zerind>, … }
3. goal test, which can be
– explicit, e.g., x = "in Bucharest"
– implicit, e.g., Checkmate(x) (an arbitrary condition (formula) over the goal state)
4. path cost (reflects the performance measure; additive)
– e.g., sum of distances, number of actions executed, etc.
– c(x,a,y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state; an optimal solution has the lowest path cost.
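The four items above can be collected into a small problem object. This is a sketch under assumptions: the class shape and field names are not from the slides, and the map fragment uses the usual Romania road distances for illustration.

```python
# A problem = initial state + successor function + goal test + step cost.
class Problem:
    def __init__(self, initial, successors, goal_test, step_cost):
        self.initial = initial        # 1. initial state, e.g. "in Arad"
        self.successors = successors  # 2. S(x) -> set of (action, state) pairs
        self.goal_test = goal_test    # 3. explicit or implicit goal test
        self.step_cost = step_cost    # 4. c(x, a, y) >= 0, additive

    def path_cost(self, path):
        """Sum of step costs along a path of (state, action, state) triples."""
        return sum(self.step_cost(x, a, y) for x, a, y in path)

# Tiny Romania fragment (standard map distances):
roads = {("Arad", "Sibiu"): 140, ("Sibiu", "Fagaras"): 99,
         ("Fagaras", "Bucharest"): 211}
problem = Problem(
    initial="Arad",
    successors=lambda x: {(f"go {b}", b) for (a, b) in roads if a == x},
    goal_test=lambda x: x == "Bucharest",
    step_cost=lambda x, a, y: roads[(x, y)],
)
path = [("Arad", "go Sibiu", "Sibiu"), ("Sibiu", "go Fagaras", "Fagaras"),
        ("Fagaras", "go Bucharest", "Bucharest")]
print(problem.path_cost(path))  # -> 450
```

Note the additivity assumption: the cost of a path is exactly the sum of its step costs, which is what makes the cost-based search strategies later in the course well defined.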

Selecting a state space
The real world is absurdly complex → the state space must be abstracted for problem solving.
(Abstract) state = set of real states
(Abstract) action = complex combination of real actions
– e.g., "Arad → Zerind" represents a set of possible complex routes with detours, rest stops, driving by taxi, bus, hitchhiking, etc.
(Abstract) solution = can be expanded into a solution in the real world
Each abstract action should be "easier" than the original problem.
Good abstraction: removing detail while maintaining validity.

Vacuum world state space graph
states? dirt and robot locations
actions? Left, Right, Suck
goal test? no dirty locations
path cost? 1 per action

Another example: 8-puzzle
states? locations of tiles ((n·n)! possible states for an n×n board)
actions? move blank left, right, up, down
goal test? = goal state (given)
path cost? 1 per move

Another example: robotic assembly
states? real-valued coordinates of robot joint angles and parts of the object to be assembled
actions? continuous motions of robot joints
goal test? complete assembly
path cost? time to execute

Another example: Romania
states? location of the agent
actions? move along edges
goal test? in Bucharest
path cost? travel distance

Tree search algorithms
Basic, general idea:
– offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states)
– the algorithm needs to know the expansion function

Tree search example

Implementation: general tree search
The strategy is hidden in the insertion step!
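The pseudocode figure for this slide did not survive the transcript. The sketch below (names and the toy problem are assumptions) shows the point the slide makes: the search skeleton is fixed, and the strategy lives entirely in how new nodes are inserted into the fringe.

```python
from collections import deque

def tree_search(problem, insert):
    """General tree search; `insert(fringe, nodes)` encodes the strategy."""
    fringe = deque([(problem["initial"], [])])  # (state, path so far)
    while fringe:
        state, path = fringe.popleft()
        if problem["goal_test"](state):
            return path + [state]
        children = [(s, path + [state]) for _, s in problem["successors"](state)]
        insert(fringe, children)  # <- the strategy is hidden here
    return None  # no cycle checking: pure *tree* search, not graph search

# Two strategies that differ only in the insertion step:
bfs_insert = lambda fringe, nodes: fringe.extend(nodes)                # FIFO queue
dfs_insert = lambda fringe, nodes: fringe.extendleft(reversed(nodes))  # LIFO stack

# Toy problem for illustration:
succ = {"A": [("go", "B"), ("go", "C")], "B": [("go", "D")], "C": [], "D": []}
problem = {"initial": "A",
           "goal_test": lambda s: s == "D",
           "successors": lambda s: succ[s]}
print(tree_search(problem, bfs_insert))  # -> ['A', 'B', 'D']
```

Swapping `bfs_insert` for `dfs_insert` (or a priority-based insert) changes the strategy without touching the loop, which is exactly why the slide locates the strategy in the insertion step.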

Representation of the state space as a tree: states vs. nodes
A state is (a representation of) a physical configuration.
A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), and depth (e.g., a node might record ACTION = Right, DEPTH = 6, PATH-COST g = 6).
The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
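The node structure and the Expand function can be sketched directly from the field list above; the dataclass form and helper names below are assumptions, not the deck's own code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: object                  # the configuration this node represents
    parent: Optional["Node"] = None
    action: Optional[str] = None   # action that led from parent to this node
    path_cost: float = 0.0         # g(x)
    depth: int = 0

def expand(node, successor_fn, step_cost):
    """Create child nodes, filling in fields via the problem's SuccessorFn."""
    return [Node(state=s, parent=node, action=a,
                 path_cost=node.path_cost + step_cost(node.state, a, s),
                 depth=node.depth + 1)
            for a, s in successor_fn(node.state)]

# Illustration with a single (hypothetical) successor:
root = Node(state="Arad")
children = expand(root, lambda s: [("go Sibiu", "Sibiu")], lambda x, a, y: 140)
print(children[0].state, children[0].depth, children[0].path_cost)
```

Keeping the parent pointer in each node is what lets a search algorithm reconstruct the full action sequence once a goal node is found, by walking back to the root.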

Search Strategies
A search strategy is defined by picking the order of node expansion.
Evaluation of search strategies:
– Completeness: the strategy is guaranteed to find a solution whenever one exists
– Termination: the algorithm terminates with a solution or with an error message if no solution exists
– Soundness: the strategy only returns admissible solutions
– Correctness: soundness + termination
– Optimality: the strategy finds the "highest-quality" solution (minimal number of operator applications or minimal cost)
– Effort: how much time and/or memory is needed to generate an output?
Main search types:
– Uninformed (blind) search
– Informed (heuristic) search … Lecture 5 (?)