Published by Paul Nelson; last modified over 8 years ago.
Slide 1: Intelligent Agent Architectures (Chapter 2 of AIMA)
Slide 2: Last class
A 2x2 view of definitions of AI:
- Thinking humanly / Thinking rationally
- Acting humanly / Acting rationally
The two axes: humanly vs. rationally, and thinking vs. acting.
Slide 3: This class
- A high-level description of intelligent agent architectures.
- Details are in subsequent chapters throughout this semester.
"Life has to be lived in the forward direction, but it only makes sense in the backward direction."
Slide 4: Agents and environments (CS480/580)
An agent interacts with its environment: it receives percepts from the environment and performs actions on it. The central question for the agent is: what action next?
Slide 5: Agent function
The agent function maps the percept sequence seen so far to an action: given everything the agent has perceived, what action next?
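The agent function on this slide can be sketched in code. A minimal, illustrative version is the table-driven agent: it records the entire percept sequence and looks it up in a table of prescribed actions. The percept and action names below are invented for illustration, not from the slides.

```python
# Sketch of an agent function: a mapping from the percept sequence
# seen so far to an action. Percept/action names are illustrative.

def table_driven_agent(table):
    """Return an agent that looks up its whole percept history in a table."""
    percepts = []  # the agent's percept sequence to date

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    return agent

# Tiny illustrative table for a toy world.
table = {
    ("A",): "go-right",
    ("A", "B"): "stop",
}

agent = table_driven_agent(table)
print(agent("A"))  # go-right
print(agent("B"))  # stop
print(agent("A"))  # NoOp: sequence ("A", "B", "A") is not in the table
```

The table grows exponentially with the length of the percept sequence, which is why the later slides replace it with reflex, model-based, goal-based, and utility-based architectures.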
Slide 6: Rational agents
- history = {s0, s1, s2, ..., sn, ...}
- Performance = f(history)
- Expected performance = E[f(history)]
- Rational ≠ omniscient: the action depends only on the percept sequence to date.
- Rational ≠ intentional ignorance: you are supposed to take information from the available sensors.
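The slide's E[f(history)] can be illustrated by estimating expected performance over simulated histories. The environment and performance measure below are toy examples of my own, assuming a stochastic world where each step is "clean" with some probability.

```python
import random

# Sketch: estimating E[f(history)] by Monte Carlo simulation.
# The environment model and performance measure are invented toy examples.

def performance(history):
    """f(history): here, the number of 'clean' states in the history."""
    return sum(1 for s in history if s == "clean")

def simulate_history(n_steps, p_clean=0.7):
    """A toy stochastic environment: each step is clean with prob p_clean."""
    return ["clean" if random.random() < p_clean else "dirty"
            for _ in range(n_steps)]

def expected_performance(n_trials=10_000, n_steps=10):
    """Average f(history) over many sampled histories."""
    total = sum(performance(simulate_history(n_steps))
                for _ in range(n_trials))
    return total / n_trials

print(expected_performance())  # close to 10 * 0.7 = 7.0
```

A rational agent chooses actions to maximize this expectation given its percept sequence, not to maximize performance on any single lucky history.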
Slide 7: PEAS (Performance measure, Environment, Actuators, Sensors)
Slide 8: Internet shopping agent
Question: how do these affect the complexity of the problem the rational agent faces?
- A lack of percepts makes things harder.
- Complex goals make things harder.
- How about the environment?
Slide 9: Environment types
- Fully observable or partially observable
- Single-agent or multi-agent
- Deterministic or stochastic
- Episodic or sequential
- Static or dynamic
- Discrete or continuous
- Known or unknown
Slide 10: Vacuum-cleaner world
Slide 11: Environment types (continued)
- Observable: the agent can "sense" its environment. Best: fully observable; worst: non-observable; typical: partially observable.
- Deterministic: the actions have predictable effects. Best: deterministic; worst: non-deterministic; typical: stochastic.
- Static: the world evolves only because of the agent's actions. Best: static; worst: dynamic; typical: quasi-static.
- Episodic: the performance of the agent is determined episodically. Best: episodic; worst: non-episodic.
- Discrete: the environment evolves through a discrete set of states. Best: discrete; worst: continuous; typical: hybrid.
- Agents: the number of agents in the environment; are they competing or cooperating?
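The dimensions above can be recorded as a small data structure, which makes it easy to compare environments. The classifications below (the two-square vacuum world vs. an automated taxi) are illustrative assumptions of mine, not taken from the slides.

```python
from dataclasses import dataclass

# Sketch: the environment dimensions from the slide as a record type.
# The example classifications are illustrative assumptions.

@dataclass(frozen=True)
class EnvironmentType:
    observable: str       # "fully" | "partially" | "non"
    deterministic: bool
    static: bool
    episodic: bool
    discrete: bool
    agents: str           # "single" | "multi"

# The simple vacuum world: every dimension is on the "easy" end.
vacuum_world = EnvironmentType(
    observable="fully", deterministic=True, static=True,
    episodic=False, discrete=True, agents="single")

# An automated taxi: every dimension is on the "hard" end.
automated_taxi = EnvironmentType(
    observable="partially", deterministic=False, static=False,
    episodic=False, discrete=False, agents="multi")
```

Reading the two records side by side shows why the vacuum world is a teaching example and driving a taxi is not.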
Slide 12: Types of agents
Slide 13: Simple reflex agents
No history information is maintained: the action depends only on the current percept.
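A simple reflex agent for the two-square vacuum world from Slide 10 makes this concrete; this is the standard AIMA reflex vacuum agent, whose action is a function of the current percept (location, status) alone.

```python
# Simple reflex agent for the two-square vacuum world (AIMA, Ch. 2).
# No history is kept: the action depends only on the current percept.

def reflex_vacuum_agent(percept):
    location, status = percept  # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"           # clean the current square
    if location == "A":
        return "Right"          # move to the other square
    return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Note the weakness the next slides address: with no memory, this agent keeps shuttling between squares forever even after both are clean.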
Slide 14: Example
How about an automated taxi?
Slide 15: Reflex agents with state
Explicit models of the environment:
- Black-box models
- Factored models: logical models, probabilistic models
State estimation
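The "state" and "state estimation" ideas on this slide can be sketched as a model-based variant of the vacuum agent: it keeps an internal model of which squares it believes are dirty, updates the model from each percept, and acts on the model. The structure of the model and the stopping rule are illustrative assumptions of mine.

```python
# Sketch of a reflex agent with state (model-based reflex agent).
# The internal model (believed_dirty) and NoOp rule are illustrative.

def model_based_vacuum_agent():
    believed_dirty = {"A": True, "B": True}  # internal state: world model

    def agent(percept):
        location, status = percept
        # State estimation: update the model from the current percept.
        believed_dirty[location] = (status == "Dirty")
        if believed_dirty[location]:
            return "Suck"
        # If the other square may still be dirty, go check it.
        other = "B" if location == "A" else "A"
        if believed_dirty[other]:
            return "Right" if other == "B" else "Left"
        return "NoOp"  # model says everything is clean: stop working

    return agent

agent = model_based_vacuum_agent()
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right (B might still be dirty)
print(agent(("B", "Clean")))  # NoOp  (model now says both squares clean)
```

Unlike the simple reflex agent, this one can idle once its model says the world is clean, because the model persists between percepts.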
Slide 16: Goal-based agents
It is not always obvious what action to do now given a set of goals.
- Search: find a path from the current state to a goal state, then execute the first operator.
- Planning: does the same for structured (non-black-box) state models.
Components: state estimation; search/planning.
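The search step on this slide ("find a path from the current state to a goal state; execute the first op") can be sketched with breadth-first search over a tiny state graph. The graph and action names are invented for illustration; blind and informed search are covered in the later chapters the course outline mentions.

```python
from collections import deque

# Sketch: goal-based action selection via breadth-first search.
# The state graph and action labels are illustrative.

def bfs_plan(start, goal, successors):
    """Return a list of actions leading from start to goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

graph = {"S": [("a", "X"), ("b", "Y")],
         "X": [("c", "G")],
         "Y": [("d", "G")],
         "G": []}

plan = bfs_plan("S", "G", lambda s: graph[s])
print(plan)     # ['a', 'c']
print(plan[0])  # 'a' -- the agent executes the first action of the plan
```

The agent then re-plans (or replays the rest of the plan) as new percepts arrive; planning proper does the same when states are factored rather than black-box.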
Slide 17: How this class fits together
- Representation mechanisms: logic (propositional, first-order), probabilistic logic
- Learning the models
- Search: blind, informed; planning
- Inference: logical resolution, Bayesian inference
Slide 18: Utility-based agents
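A utility-based agent replaces binary goals with a utility function and, under uncertainty, picks the action with the highest expected utility. The sketch below illustrates this choice rule; the outcome distributions and utility numbers are toy values of my own.

```python
# Sketch: utility-based action selection under uncertainty.
# Outcome probabilities and utilities are invented toy numbers.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "highway":  [(0.9, 10.0), (0.1, -50.0)],  # fast, but small crash risk
    "backroad": [(1.0, 6.0)],                 # slower, but safe
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # 'backroad': EU = 6.0 beats 0.9*10 + 0.1*(-50) = 4.0
```

A goal-based agent with the goal "arrive" would treat both routes as equally acceptable; the utility function is what lets the agent trade off speed against risk.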
Slide 19: Summary