1
Intelligent Agents
Katia Sycara and Joseph Giampapa, The Robotics Institute
2
What is an agent? An agent is an autonomous computational entity, which:
- is reactive and proactive
- is goal driven
- is intelligent: able to reason, plan, and sometimes learn
- has domain-specific intelligence
- interacts with humans, other agents, and the environment via sensors and effectors, in a high-level language/protocol
- anticipates user needs and reacts based on them
Wish list: friendly, understands natural language, etc.
3
Multi-Agent Systems (MAS)
An agent is more useful in the context of others:
- it can concentrate on tasks within its expertise
- it can delegate other tasks to other experts
- it can take advantage of its ability to intelligently communicate, coordinate, and negotiate
But a MAS is not just a collection of agents:
- it needs meaningful ways for agents to interact
- it needs some system design and performance evaluation
4
MAS - Two Approaches 1. Centralized design
Build a system composed of agents; it should provide good performance. Advantages may arise from:
- the possibility to develop each agent as an expert
- incorporation of non-local expertise
- it is rather simple to have multiple developers working concurrently
Example: a system within an organization
5
MAS - Two Approaches 2. Open MAS
Usually, the system has no prior static design, only the single agents within it:
- agents seek others to provide services, without knowing in advance who they are
- there is a need for an agent-finding mechanism
- other agents may be non-cooperative, untrusted, or malicious
Example: markets, the Internet
6
Centrally Designed MAS
Advantages:
- distributed loads and expertise
- simplicity and predictability, since components are known
- interaction language and protocols are agreed upon
- agents can be (and usually are) cooperative
- agents share an architecture, enabling software reuse
Disadvantages:
- costly maintenance (adding new agents may necessitate system re-design)
- may be less fault tolerant; rigid
- difficult to inter-operate with others
- does not reflect real-world requirements (not realistic)
7
Open MAS
Advantages:
- single agents or groups are designed separately (modular)
- flexible, fault tolerant
- evolutionary design
- easier to maintain
- dynamic, open society
Disadvantages:
- the overall behavior of the system is not predictable
- communication protocols, languages, and ontologies may vary across agent types
- self-interest and malicious behavior are difficult to avoid
- requires more careful design of agents and interaction protocols
8
Design and Architecture - Outline
- Design philosophies
- Information processing and needs
- Reactive architectures
- Deliberative architectures
- Layered architectures
- Belief, Desire, Intention (BDI)
- Concurrent architecture (RETSINA)
9
Agent Design Philosophies
- Agents reside in the environment: the world and other agents
- The environment can be characterized by a set of states S = {s1, s2, ...}
- Agents execute actions A = {a1, a2, ...}
- An action is a function action: S → S that affects the environment via a state change
- So, in general, an agent is a set of actions that receive input from the world and manipulate the state of the world
10
Agent - Environment Interaction
(Diagram: the agent receives sensor input from the environment and produces action output back into it.)
11
Architecture Design
A map of the internal structures of the agent, which includes:
- data structures
- operations that can be performed on them
- control and data flow between the structures
The design starts from a high-level definition and traverses through refinements. Design decisions add details, getting closer to the code level and reducing generality.
12
Information and Processing Needs
System architecture design needs knowledge of:
- what the expected inputs are
- what the required/expected outputs are
- what processing can provide this relation between input and output
For agents in particular:
- what are the possible/expected states of the world?
- how should the world state be perceived?
- how should the agent’s reasoning be affected by the world state?
- how should agent reasoning result in agent action?
13
Agent Information Needs
Is the “state of the world” sufficient? Yes; usually it is too much:
- some or most of it may be inaccessible
- it is dynamic and possibly non-deterministic
- it includes other agents, users, the Internet, etc.
Perception filters and reduces the amount of information. Design question: what is the minimal set of data, and how should it be filtered/extracted?
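To make the design question concrete, below is a minimal Python sketch of perception as a filter; the world-state fields and the climate-control concern are invented for illustration.

```python
# Minimal sketch of perception as a filter: a rich world state is reduced to the small
# percept the agent actually needs. Field names and the agent's concern are invented.

world_state = {
    "temperature": 31.5,
    "humidity": 0.62,
    "time": "14:03",
    "other_agents": ["a1", "a2"],
    "network_latency_ms": 120,
}

def perceive(state):
    """Extract only the fields a hypothetical climate-control agent cares about."""
    return {"temperature": state["temperature"], "humidity": state["humidity"]}

print(perceive(world_state))  # {'temperature': 31.5, 'humidity': 0.62}
```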
14
Agent Architectures
- Reactive architectures
- Deliberative architectures
- Layered architectures
- Belief, Desire, Intention (BDI)
- Concurrent architecture (RETSINA)
15
Agent Architectures (cont)
Reactive vs. deliberative:
- Reactive agents: upon input from the environment, they react with action execution
- Deliberative agents: upon input, a reasoning process is invoked; action is based on the results of the reasoning
Agents may populate the whole spectrum between purely reactive and maximally deliberative (rational).
16
Agent Processing Needs
Reactive agents only need to map between world states and actions. Deliberative agents need to reason for action, which may include:
- taking into account historical states of the world
- creating and maintaining an internal state
- reasoning about the world and its states
- planning and re-planning for current/future action
- learning
17
Agent Processing Needs (continued)
Collaborative (social) agents need, in addition, to:
- maintain models of other agents and the society
- reason about others
- plan collaborative activity
- reason about interaction: communication, coordination, collaboration
18
Required Agent Attributes
- Perception: a function perceive: S → P, where P = {p1, p2, ...} is a set of percepts
  - required for both reactive and deliberative agents
  - may be provided via a sensor or any other input
- Internal state I (records history)
  - not necessary for reactive agents
  - deliberative agents need to maintain information regarding past activity to allow for deliberation
- Reasoning: performed mainly by deliberative agents, but may be useful for reactive agents, too
- Learning: only in deliberative agents
19
Reactive Architecture
(Diagram: reactive architecture; the agent maps perception of the environment directly to action.)
20
Agents without State (Reactive)
- Perception is a function perceive: S → P
- Action is a function action: S → S
- Action selection is a function as: P → A
The world state results in a percept via perception, the percept results in an action selection, and the action transforms the state of the world.
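A minimal Python sketch of this stateless sense-act cycle, assuming illustrative function names and a toy thermostat-like example (neither is from the original slides):

```python
# Sketch of a purely reactive (stateless) agent loop: perceive: S -> P, as: P -> A,
# and the environment applies action: S -> S. All names here are illustrative.

def run_reactive_agent(state, perceive, select_action, apply_action, steps=5):
    """Run the sense-act loop of a stateless agent for a fixed number of steps."""
    for _ in range(steps):
        percept = perceive(state)            # world state -> percept
        action = select_action(percept)      # percept -> action (no memory consulted)
        state = apply_action(state, action)  # action transforms the world state
    return state

# Toy example: a thermostat-like agent that only reacts to the current temperature.
final = run_reactive_agent(
    state={"temp": 26.0},
    perceive=lambda s: s["temp"],
    select_action=lambda temp: "cool" if temp > 22 else "idle",
    apply_action=lambda s, a: {"temp": s["temp"] - 1.0 if a == "cool" else s["temp"]},
    steps=5,
)
print(final)  # {'temp': 22.0}
```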
21
Example: Subsumption (Brooks)
- An agent's decision making is performed by a set of task-accomplishing behaviors (TABs)
- In Brooks' implementation, each TAB is a finite state machine
- In other implementations, TABs are rules of the type situation → action, which map percepts to actions
- In the subsumption architecture, multiple behaviors can be activated simultaneously
- Action selection is based on a subsumption hierarchy: behaviors are arranged in layers, which are at different levels of abstraction (a layered architecture)
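A small sketch of subsumption-style action selection, under the assumption that behaviors are situation → action rules ordered by priority; the behaviors themselves are invented for the example and are not Brooks' code:

```python
# Sketch of subsumption-style action selection: each behavior is a situation -> action
# rule, and higher-priority behaviors subsume lower-priority ones when they fire.
# The behaviors below are hypothetical examples.

def avoid_obstacle(percept):
    return "turn_away" if percept.get("obstacle_near") else None

def wander(percept):
    return "move_random"  # always-applicable fallback behavior

def subsumption_select(layers, percept):
    """Return the action of the highest-priority behavior that fires.

    `layers` is ordered from highest priority (index 0) to lowest."""
    for behavior in layers:
        action = behavior(percept)
        if action is not None:
            return action
    return None

# Example: obstacle avoidance subsumes wandering.
print(subsumption_select([avoid_obstacle, wander], {"obstacle_near": True}))   # turn_away
print(subsumption_select([avoid_obstacle, wander], {"obstacle_near": False}))  # move_random
```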
22
Reactive: Pros and Cons
Pros:
- simplicity, economy
- computational tractability
- fault tolerance
- overall behavior emerges from component interaction
Cons:
- without a model of the environment, agents need sufficient information to determine action
- agents are "short-sighted", which may limit decision quality
- the relationship between components is not clear
23
Agents with State = Deliberative
(Diagram: deliberative architecture; perception feeds reasoning over an internal state, which drives action in the environment.)
24
Agents with State (Deliberative)
- Perception is still a function perceive: S → P
- Action is still action: S → S
- But action selection (which was as: P → A) is now the function as: I → A
- In addition, update: P × I → I is a function that updates the internal state based on percepts (and may include complex reasoning)
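A minimal sketch of the corresponding agent loop with an internal state, using the functions named above; the types and the obstacle-counting example are illustrative assumptions:

```python
# Sketch of a state-based (deliberative) agent loop:
# perceive: S -> P, update: P x I -> I, as: I -> A. All names are illustrative.

def run_deliberative_agent(state, internal, perceive, update, select_action,
                           apply_action, steps=3):
    for _ in range(steps):
        percept = perceive(state)
        internal = update(percept, internal)   # revise the internal state (may reason)
        action = select_action(internal)       # choose an action from the internal state
        state = apply_action(state, action)    # the action changes the world state
    return state, internal

# Toy example: the internal state remembers how many times an obstacle was seen.
state, internal = run_deliberative_agent(
    state={"obstacle": True},
    internal={"obstacles_seen": 0},
    perceive=lambda s: s["obstacle"],
    update=lambda p, i: {"obstacles_seen": i["obstacles_seen"] + (1 if p else 0)},
    select_action=lambda i: "detour" if i["obstacles_seen"] > 0 else "go_straight",
    apply_action=lambda s, a: {"obstacle": False} if a == "detour" else s,
)
print(state, internal)  # {'obstacle': False} {'obstacles_seen': 1}
```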
25
Agents with State: Refinement
(Diagram: refinement of the deliberative architecture; the reasoning component over the internal state is refined into planning, learning, and inference.)
26
Layered Agent Architectures
- Usually, but not always, deliberative architectures
- Decision making is performed by separating it into several software layers
- Each layer reasons at a different level of abstraction; layers interact
- Two major types:
  - vertical layers: perception input and action output are dealt with by a single layer each
  - horizontal layers: each layer directly connects to perception input and action output
27
Layers’ design
- Typically, at least two layers: one for reactive behavior and one for proactive behavior
- There is no reason not to have multiple layers
- Typology: information and control flow between the layers, e.g.:
(Diagram: an agent box with perception input and action output.)
28
Information and Control Flow
(Diagram: three layering typologies, each with perceptual input, layers 1 to n, and action output: horizontal, vertical (one pass), and vertical (two pass).)
29
Layers Pros and Cons
Horizontal:
- each layer acts like an agent, which provides independence and simplicity
- for n different behaviors we implement n layers
- competition between layers can cause incoherence
- mediation between layers is needed: exponentially complex, a control bottleneck
Vertical:
- low complexity, no control bottleneck
- less flexible and not fault tolerant: one decision needs all layers
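A small sketch contrasting the two control-flow styles; the layer functions and the mediation rule are placeholders, not taken from any of the systems discussed here:

```python
# Sketch of horizontal vs. vertical (one-pass) control flow between layers.
# Layer behavior and the mediation policy are placeholders for illustration.

def horizontal(layers, percept, mediate):
    """Every layer sees the raw percept and proposes an action; a mediator resolves them."""
    proposals = [layer(percept) for layer in layers]
    return mediate(proposals)

def vertical_one_pass(layers, percept):
    """Perceptual input enters the lowest layer; each layer passes its result up;
    the top layer produces the action output."""
    data = percept
    for layer in layers:
        data = layer(data)
    return data

# Toy usage of the horizontal style: two trivial layers and a mediator that takes
# the first non-None proposal (illustrating why mediation is needed at all).
layers = [lambda p: "avoid" if p.get("danger") else None,
          lambda p: "explore"]
print(horizontal(layers, {"danger": True},
                 mediate=lambda ps: next(a for a in ps if a is not None)))  # avoid
```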
30
TOURINGMACHINES Three layers produce suggestions for action:
- reactive: implements situation-action rules, as in Brooks’ subsumption architecture
- planning: achieves proactiveness via plans based on a library of schemas
- modeling: models of the world, other agents, and self; predicts conflicts, generates goals to resolve them
Domain of implementation: multiple vehicles
31
Example: TOURINGMACHINES
(Diagram: TOURINGMACHINES; a perception subsystem feeds the modeling, planning, and reactive layers in parallel, a control subsystem mediates among them, and an action subsystem produces the action output.)
32
INTERRAP: a vertically layered, two-pass architecture
- Layers have similar purposes as in TOURINGMACHINES
- Each layer is associated with a knowledge base
- Layers interact with each other: bottom-up for activation, top-down for execution
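A hedged sketch of the two-pass idea, assuming each layer exposes can_handle, decide, and refine operations; this illustrates bottom-up activation and top-down execution only and does not reproduce INTERRAP's actual interfaces:

```python
# Illustrative two-pass control flow: activation travels up until some layer is competent,
# and its decision travels back down to be refined into a concrete action.
# The layer interface (can_handle/decide/refine) is an assumption made for this sketch.

def two_pass_control(layers, percept):
    """`layers` is ordered bottom-up; each has can_handle(), decide(), and refine()."""
    # Bottom-up activation: find the lowest layer competent to handle this percept.
    chosen = next((i for i, layer in enumerate(layers) if layer.can_handle(percept)),
                  len(layers) - 1)
    decision = layers[chosen].decide(percept)
    # Top-down execution: lower layers refine the decision into an executable action.
    for layer in reversed(layers[:chosen]):
        decision = layer.refine(decision)
    return decision
```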
33
Example: INTERRAP
(Diagram: a cooperation layer with social knowledge, a plan layer with planning knowledge, and a behavior layer with a world model, all above a world interface that handles perception input and action output.)
34
Belief-Desire-Intention Architecture
Based on practical reasoning and deciding on actions. This involves:
- deciding what goals we want to achieve: deliberation
- deciding how to achieve these goals: means-ends reasoning
Choosing some options creates intentions. Intentions:
- usually lead to action
- should persist: once adopted, an agent should persist with the intention and attempt to achieve it
- should be dropped if the intention is clearly non-achievable, it was already achieved, or the reason for the intention is no longer there
- are related to beliefs about the future
35
BDI Architecture Components
- A set of current beliefs about the environment
- A belief revision function (brf): updates current beliefs based on perception
- An option generation function: determines available options (desires) based on beliefs and intentions
- A set of desires (current options): possible courses of action available
- A set of current intentions: the options the agent is committed to trying to perform
- A filter function (deliberation): determines new intentions based on current beliefs, desires, and intentions
- An action selection function: selects actions based on intentions
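These components compose naturally into a control loop; the sketch below is a generic BDI-style cycle with placeholder function arguments, not the exact algorithm of any particular BDI system:

```python
# Generic BDI-style control loop built from the components listed above.
# The functions passed in (brf, options, filter_fn, ...) are placeholders.

def bdi_loop(beliefs, intentions, perceive, brf, options, filter_fn, select_action,
             execute, cycles=10):
    for _ in range(cycles):
        percept = perceive()
        beliefs = brf(beliefs, percept)                       # belief revision
        desires = options(beliefs, intentions)                # option (desire) generation
        intentions = filter_fn(beliefs, desires, intentions)  # deliberation: commit
        execute(select_action(intentions))                    # act on current intentions
    return beliefs, intentions
```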
36
Schematic BDI Architecture
(Diagram: perception input enters the belief revision function (brf), which updates the beliefs; option generation produces desires; the filter selects intentions; and action selection produces the action output.)
37
BDI Pros and Cons
Cons:
- key problem: it is difficult to balance between the mental activities; for example, dropping intentions requires reconsideration, which is costly but needed; the rate of environment change helps set the reconsideration rate
- questionable: what advantage do mental states provide?
Pros:
- intuitive, provides a functional decomposition
- easy to define formally, using logic, and to convert to code
38
Concurrent Architectures (RETSINA: Sycara et al.)
- Include multiple functional and knowledge modules that work concurrently
- Coherence between the functional modules is achieved via shared databases
- Typical functional separation: communication and collaboration; planning and reasoning; action scheduling; execution and monitoring
39
Example: RETSINA Agent Architecture
40
Functional Components
- Communicator: handles incoming and outgoing messages in an ACL (agent communication language); converts requests into goals/objectives
- Planner: takes objectives and devises detailed plans to achieve them; creates tasks, actions, and new objectives; uses plan fragments from libraries
- Scheduler: schedules actions for execution
- Execution monitor: executes actions and monitors them
- Coordination/collaboration: reasons about such activities; may be internal to the planner or to the communicator
- Self-awareness: maintains a self model: load, state, etc.
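A minimal sketch of how such functional components can run concurrently and coordinate only through shared data structures; here Python queues stand in for the objective, task, and schedule databases, and the module logic is placeholder:

```python
# Minimal sketch of concurrent functional modules coordinated only through shared
# data structures. Queues stand in for the objective/task/schedule databases, and
# the module bodies are placeholder logic, not RETSINA's implementation.

import queue
import threading

objectives = queue.Queue()   # stands in for the Objective DB
tasks = queue.Queue()        # stands in for the Task DB
schedule = queue.Queue()     # stands in for the Schedule

def communicator(inbox):
    for message in inbox:                    # convert incoming requests into objectives
        objectives.put({"goal": message})

def planner():
    while True:
        objective = objectives.get()         # expand objectives into concrete tasks
        tasks.put({"task": "achieve", "goal": objective["goal"]})
        objectives.task_done()

def scheduler():
    while True:
        task = tasks.get()                   # order tasks for execution
        schedule.put(task)
        tasks.task_done()

def execution_monitor():
    while True:
        action = schedule.get()              # execute and monitor scheduled actions
        print("executing", action)
        schedule.task_done()

for module in (planner, scheduler, execution_monitor):
    threading.Thread(target=module, daemon=True).start()

communicator(["find weather report", "book meeting room"])
objectives.join(); tasks.join(); schedule.join()   # wait for the pipeline to drain
```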
41
Planning
- By incremental instantiation of plan fragments
- Conditional planning mechanisms
- Interleaving of planning, information gathering, and execution
- Declarative description of information-flow and control-flow requirements
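A toy sketch of task reduction by incremental instantiation of plan fragments; the reduction library contents are invented for illustration:

```python
# Toy task-reduction sketch: a task is expanded using a library of reductions
# (plan fragments) until only primitive actions remain. Library contents are invented.

reduction_library = {
    "answer_query": ["gather_information", "summarize", "send_reply"],
    "gather_information": ["query_source_a", "query_source_b"],
}

def decompose(task, library):
    """Recursively expand a task into a flat list of primitive actions."""
    if task not in library:          # primitive action: no reduction available
        return [task]
    actions = []
    for subtask in library[task]:
        actions.extend(decompose(subtask, library))
    return actions

print(decompose("answer_query", reduction_library))
# ['query_source_a', 'query_source_b', 'summarize', 'send_reply']
```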
42
Task decomposition
43
Knowledge Components
- Objective DB: holds the agent’s objectives
- Task DB: holds the agent’s tasks and actions before they are scheduled for execution
- Schedule: holds scheduled actions
- Task reduction library: includes a set of possible task decompositions
- Task schema library: includes plan fragments, each providing details on how to perform a task
- Beliefs DB: holds the agent’s beliefs about information relevant to its activity
44
Architecture Attributes
- Functional components do not directly interface or synchronize with each other
- Knowledge components do not directly interface or synchronize with each other
- Functional components work concurrently
These provide:
- reusability and substitutability of components
- efficient utilization of computational resources
- timely task performance
- reduced development effort
45
RETSINA Agent Functionality
- Interacts with humans and other agents
- Anticipates and satisfies human information needs
- Provides decision support
- Integrates planning, information gathering, and execution
- Acquires, uses, and disseminates timely and relevant information
- Adapts to user, task, and situation