1 Computer Science Department, California Polytechnic State University, San Luis Obispo, CA, U.S.A. Franz J. Kurfess CPE/CSC 580: Intelligent Agents
2 © Franz J. Kurfess Usage of the Slides
◆ these slides are intended for the students of my CPE/CSC 580 “Intelligent Agents” class at Cal Poly SLO
◆ if you want to use them outside of my class, please let me know (fkurfess@calpoly.edu)
◆ some of them are based on other sources, which are identified and cited
◆ I usually select a subset for each quarter, either by hiding some slides or by creating a “Custom Show” (in PowerPoint)
◆ to view these, go to “Slide Show => Custom Shows”, select the respective quarter, and click on “Show”
◆ to print them, I suggest using the “Handout” option
◆ 4, 6, or 9 per page works fine
◆ black & white should be fine; there are a few diagrams where color is important
3 © Franz J. Kurfess Course Overview
❖ Introduction: Intelligent Agents, Multi-Agent Systems, Agent Examples
❖ Agent Architectures: Agent Hierarchy, Agent Design Principles
❖ Reasoning Agents: Knowledge, Reasoning, Planning
❖ Learning Agents: Observation, Analysis, Performance Improvement
❖ Multi-Agent Interactions: Agent Encounters, Resource Sharing, Agreements
❖ Communication: Speech Acts, Agent Communication Languages
❖ Collaboration: Distributed Problem Solving, Task and Result Sharing
❖ Agent Applications: Information Gathering, Workflow, Human Interaction, E-Commerce, Embodied Agents, Virtual Environments
❖ Conclusions and Outlook
4 © Franz J. Kurfess Overview: Agent Architectures
❖ Motivation
❖ Objectives
❖ Agent Design Principles
❖ Agent Hierarchy
❖ Intentional Systems
❖ Abstract Agent Architecture
❖ Reactive Agents
❖ Important Concepts and Terms
❖ Chapter Summary
5 © Franz J. Kurfess Logistics
◆ Introductions
◆ Course Materials
◆ textbooks (see below)
◆ lecture notes (PowerPoint slides will be available on my Web page)
◆ handouts
◆ Web page: http://www.csc.calpoly.edu/~fkurfess
◆ Term Project
◆ Lab and Homework Assignments
◆ Exams
◆ Grading
6 © Franz J. Kurfess Textbooks
◆ Main Textbook
❖ [Wooldridge 2009] Michael Wooldridge, An Introduction to MultiAgent Systems, Wiley, 2009.
◆ Recommended for additional reading
❖ [Russell & Norvig 2009] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd ed., Prentice Hall, 2009.
7 © Franz J. Kurfess Bridge-In
8 © Franz J. Kurfess Pre-Test
9 © Franz J. Kurfess Motivation
10 © Franz J. Kurfess Objectives
11 © Franz J. Kurfess Objectives
❖ understand the notion of an agent, how agents are distinct from other software paradigms (e.g., objects), and the characteristics of applications that lend themselves to an agent-oriented solution
❖ understand the key issues associated with constructing agents capable of intelligent autonomous action, and the main approaches taken to developing such agents
❖ understand the key issues and approaches to high-level communication in multi-agent systems
❖ understand the key issues in designing societies of agents that can effectively cooperate in order to solve problems
❖ understand the main application areas of agent-based solutions
❖ understand the main techniques for automated decision-making in multi-agent systems: voting, forming coalitions, allocating scarce resources, and bargaining [Wooldridge 2009]
12 © Franz J. Kurfess Evaluation Criteria
13 Agent Design Principles
Autonomy
Embodiment
Belief, Desire, Intention
Social Behavior
14 [Wooldridge 2009] Autonomous Agent
An agent is a computer system that is capable of independent action on behalf of its user or owner.
[Diagram: a system embedded in an environment, connected to it by input and output]
15 © Franz J. Kurfess Embodiment and Situatedness
❖ an embodied agent has a physical manifestation, often also called a robot; software agents typically are not embodied
❖ agents are situated in an environment, often also referred to as a context
16 © Franz J. Kurfess Belief, Desire, Intention (BDI)
❖ software model developed for the design and programming of intelligent agents
❖ implements the principal aspects of Michael Bratman's theory of human practical reasoning
http://en.wikipedia.org/wiki/Belief%E2%80%93desire%E2%80%93intention_software_model
17 © Franz J. Kurfess Beliefs
❖ represent the informational state of the agent: beliefs about the world (including itself and other agents)
❖ beliefs can include inference rules for the generation of new beliefs
❖ the term belief is used instead of knowledge: it expresses the subjective nature of the information, which may change over time
18 © Franz J. Kurfess Desires
❖ represent the motivational state of the agent: situations that the agent would like to achieve
❖ goals are desires adopted for active pursuit; sets of multiple goals should be consistent, while sets of desires can be inconsistent
19 © Franz J. Kurfess Intentions
❖ represent the deliberative state of the agent: what the agent has chosen to do
❖ intentions are desires to which the agent has committed to some extent
❖ a plan is a sequence of actions to achieve an intention
❖ an event is a trigger for reactive activity by an agent
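To make the three notions concrete, here is a minimal sketch of BDI bookkeeping in Python; all names (the weather facts, the plan steps) are hypothetical illustrations, not part of any production BDI framework.

```python
# Minimal BDI bookkeeping (hypothetical example).

beliefs = {"raining", "own_umbrella"}       # informational state
desires = {"stay_dry", "enjoy_the_rain"}    # motivational state; may be inconsistent
goals = {"stay_dry"}                        # desires adopted for active pursuit (consistent subset)

# An intention commits to a goal and carries a plan: a sequence of actions.
intentions = [
    {"goal": "stay_dry", "plan": ["take_umbrella", "walk_to_office"]},
]

# An inference rule generating a new belief from existing ones.
if "raining" in beliefs:
    beliefs.add("ground_wet")

for intention in intentions:
    print("committed to", intention["goal"], "via plan", intention["plan"])
```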
20 [Wooldridge 2009] Social Ability
The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account. Some goals can only be achieved with the cooperation of others. The same holds for many computer environments: witness the Internet. Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent-communication language, and perhaps cooperate with others.
21 Agent Hierarchy
Reflex Agent
Model-Based Agent
Goal/Utility-Based Agent
Learning Agent
Reasoning Agent
22 © Franz J. Kurfess Reflex Agent Diagram
[Diagram: sensors report “what the world is like now”; condition-action rules determine “what I should do now”; actuators carry out the chosen action in the environment]
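A reflex agent can be sketched directly from the diagram: the current percept goes in, condition-action rules fire, an action comes out. The following Python fragment is a hedged illustration; the percept fields and rule set are invented for the example.

```python
# Simple reflex agent: condition-action rules over the current percept only
# (hypothetical vacuum-world example).

def reflex_agent(percept):
    """Map the current percept directly to an action via condition-action rules."""
    location, dirty = percept
    if dirty:                   # condition: current square is dirty
        return "suck"           # action
    if location == "A":
        return "move_right"
    return "move_left"

print(reflex_agent(("A", True)))    # -> suck
print(reflex_agent(("A", False)))   # -> move_right
```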
23 © Franz J. Kurfess Model-Based Reflex Agent Diagram
[Diagram: as in the reflex agent, but the agent also maintains internal state, using models of “how the world evolves” and “what my actions do” to infer “what the world is like now” before the condition-action rules select an action]
24 © Franz J. Kurfess Utility-Based Agent Diagram
[Diagram: in addition to the state and world models, the agent projects “what happens if I do an action” and evaluates “how happy will I be then” against its goals and a utility measure before deciding “what I should do now”]
25 © Franz J. Kurfess Learning Agent Diagram
[Diagram: the utility-based agent is extended with a critic that compares behavior against a performance standard, a learning element that modifies the agent, and a problem generator that suggests exploratory actions]
26 Intentional Systems
Agents as Intentional Systems
The Need for Abstraction
Representational Flexibility
Post-Declarative Systems
27 [Wooldridge 2009] Agents as Intentional Systems
When explaining human activity, it is often useful to make statements such as the following:
Janine took her umbrella because she believed it was going to rain.
Michael worked hard because he wanted to possess a PhD.
Human behavior is predicted and explained through the attribution of attitudes, such as believing and wanting, hoping, fearing, etc. The attitudes employed in such folk-psychological descriptions are called the intentional notions.
28 [Wooldridge 2009] Agents as Intentional Systems
The philosopher Daniel Dennett coined the term intentional system to describe entities ‘whose behavior can be predicted by the method of attributing belief, desires and rational acumen’. There are different ‘grades’ of intentional systems: a first-order intentional system has beliefs and desires, but no beliefs and desires about beliefs and desires. A second-order intentional system is more sophisticated: it also has beliefs and desires about beliefs and desires and other intentional states, referring both to states of others and to its own.
29 [Wooldridge 2009] Agents as Intentional Systems
The answer seems to be that while the intentional stance description is consistent,... it does not buy us anything, since we essentially understand the mechanism sufficiently to have a simpler, mechanistic description of its behavior. (Yoav Shoham)
Put crudely, the more we know about a system, the less we need to rely on animistic, intentional explanations of its behavior. But with very complex systems, a mechanistic explanation of their behavior may not be practicable. As computer systems become ever more complex, we need more powerful abstractions and metaphors to explain their operation; low-level explanations become impractical. The intentional stance is such an abstraction.
30 [Wooldridge 2009] Intentional Systems as Abstraction
The more we know about a system, the less we need to rely on animistic, intentional explanations of its behavior. With very complex systems, however, a mechanistic explanation may not be practicable. Intentions can then be used to describe complex systems at a higher level of abstraction, to express aspects like autonomy, goals, self-preservation, and social behavior.
31 [Wooldridge 2009] Agents as Intentional Systems
Additional points in favor of this idea:
Characterizing Agents: provides a familiar, non-technical way of understanding and explaining agents.
Nested Representations: offers the potential to specify systems that include representations of other systems; it is widely accepted that such nested representations are essential for agents that must cooperate with other agents.
32 [Wooldridge 2009] Post-Declarative Systems
This view of agents leads to a kind of post-declarative programming:
In procedural programming, we say exactly what a system should do.
In declarative programming, we state something that we want to achieve, give the system general information about the relationships between objects, and let a built-in control mechanism (e.g., goal-directed theorem proving) figure out what to do.
With intentional agents, we give a very abstract specification of the system and let the control mechanism figure out what to do, knowing that it will act in accordance with some built-in theory of agency.
33 Abstract Agent Architecture
Environment, States
Actions, Runs
State Transformations
Agent as Function
System
34 [Wooldridge 2009] Abstract Architecture for Agents
Assume the environment may be in any of a finite set E of discrete, instantaneous states:
E = { e, e′, … }
Agents are assumed to have a repertoire of possible actions available to them, which transform the state of the environment:
Ac = { α, α′, … }
A run, r, of an agent in an environment is a sequence of interleaved environment states and actions:
r : e0 --α0--> e1 --α1--> e2 --α2--> … --α(u−1)--> eu
35 [Wooldridge 2009] Abstract Architecture for Agents
Let:
R be the set of all such possible finite sequences (over E and Ac)
R^Ac be the subset of these that end with an action
R^E be the subset of these that end with an environment state
36 [Wooldridge 2009] State Transformer Functions
A state transformer function represents the behavior of the environment:
τ : R^Ac → 2^E
Note that environments are history dependent and non-deterministic.
If τ(r) = ∅, then there are no possible successor states to r; in this case, we say that the system has ended its run.
Formally, we say an environment Env is a triple Env = 〈E, e0, τ〉 where E is a set of environment states, e0 ∈ E is the initial state, and τ is a state transformer function.
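A direct Python transcription of this definition may help; the environment below is a hypothetical two-state weather example, with the transformer mapping a run that ends with an action to the set of possible successor states.

```python
# Abstract environment Env = <E, e0, tau> (hypothetical two-state example).

E = {"sunny", "rainy"}           # environment states
e0 = "sunny"                     # initial state

def tau(run):
    """State transformer: maps a run ending with an action to the set of
    possible successor states (non-deterministic, history dependent)."""
    last_action = run[-1]
    if last_action == "seed_clouds":
        return {"rainy"}            # deterministic effect
    if last_action == "wait":
        return {"sunny", "rainy"}   # non-deterministic outcome
    return set()                    # empty set: the run has ended

print(tau(("sunny", "wait")))       # -> {'sunny', 'rainy'} (order may vary)
```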
37 [Wooldridge 2009] Agents
An agent is a function which maps runs (ending in an environment state) to actions:
Ag : R^E → Ac
An agent makes a decision about what action to perform based on the history of the system that it has witnessed to date. Let AG be the set of all agents.
38 [Wooldridge 2009] Systems
A system is a pair containing an agent and an environment. Any system will have associated with it a set of possible runs; we denote the set of runs of agent Ag in environment Env by R(Ag, Env). (We assume R(Ag, Env) contains only terminated runs.)
39 [Wooldridge 2009] Systems
Formally, a sequence (e0, α0, e1, α1, e2, …) represents a run of an agent Ag in environment Env = 〈E, e0, τ〉 if:
1. e0 is the initial state of Env;
2. α0 = Ag(e0); and
3. for u > 0, eu ∈ τ((e0, α0, …, α(u−1))) and αu = Ag((e0, α0, …, eu)).
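Under these definitions, generating a run is mechanical. The sketch below is a self-contained Python illustration with a made-up state transformer and a trivial agent that ignores its history; it enumerates one (length-capped) run.

```python
import random

def tau(run):
    """Hypothetical state transformer: 'wait' may change the weather."""
    return {"sunny", "rainy"} if run[-1] == "wait" else set()

def generate_run(agent, e0, tau, max_steps=6):
    """Build one run (e0, a0, e1, a1, ...) of `agent` in environment <E, e0, tau>."""
    run = [e0]
    for _ in range(max_steps):
        action = agent(tuple(run))      # agent maps the run-so-far to an action
        run.append(action)
        successors = tau(tuple(run))    # possible next states
        if not successors:              # empty set: the run has ended
            break
        run.append(random.choice(sorted(successors)))  # non-deterministic choice
    return run

# Trivial agent that always waits (hypothetical):
print(generate_run(lambda run: "wait", "sunny", tau))
```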
40 Reactive Agents
Perception
Agents with State
Tasks
Utility Functions
Subsumption Architecture
41 [Wooldridge 2009] Perception
We now introduce a perception system:
[Diagram: the agent is split into a see component, which observes the environment, and an action component, which selects the action]
42 [Wooldridge 2009] Perception
The see function is the agent’s ability to observe its environment; the action function represents the agent’s decision-making process.
The output of the see function is a percept:
see : E → Per
maps environment states to percepts.
action is now a function
action : Per* → Ac
which maps sequences of percepts to actions.
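As a quick sketch (hypothetical percepts, Python), a decision process of this shape receives the whole percept sequence, even if only the last element ends up mattering; contrast this with the state-based version on the next slide.

```python
def see(state):
    """Perception: abstract an environment state into a percept (hypothetical)."""
    return "wet" if state == "rainy" else "dry"

def action(percepts):
    """Decision making over the full percept sequence (Per* -> Ac)."""
    if percepts and percepts[-1] == "wet":
        return "open_umbrella"
    return "wait"

history = [see(s) for s in ["sunny", "rainy"]]
print(action(history))   # -> open_umbrella
```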
43 [Wooldridge 2009] Agents with State
We now consider agents that maintain state:
[Diagram: the agent contains a see component, an internal state, a next function that updates the state from a percept, and an action component that selects an action from the state]
44 [Wooldridge 2009] Agents with State
An internal data structure is typically used to record information about the environment state and history. Let I be the set of all internal states of the agent. The perception function see for a state-based agent is unchanged:
see : E → Per
The action-selection function action is now defined as a mapping from internal states to actions:
action : I → Ac
An additional function next is introduced, which maps an internal state and a percept to an internal state:
next : I × Per → I
45 [Wooldridge 2009] Agent Control Loop
1. The agent starts in some initial internal state i0.
2. It observes its environment state e, and generates a percept see(e).
3. The internal state of the agent is then updated via the next function, becoming next(i0, see(e)).
4. The action selected by the agent is action(next(i0, see(e))).
5. Go to 2.
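The loop translates almost line for line into code. The sketch below is a minimal state-based agent loop in Python; the environment hookup (get_environment_state, execute) is a stub invented for the example.

```python
# State-based agent control loop (sketch; environment hookup is stubbed out).

def run_agent(i0, see, next_fn, action, get_environment_state, execute, steps=5):
    state = i0                          # 1. start in initial internal state
    for _ in range(steps):
        e = get_environment_state()     # 2. observe environment state e
        percept = see(e)                #    generate percept see(e)
        state = next_fn(state, percept) # 3. update internal state via next
        execute(action(state))          # 4. perform the selected action
                                        # 5. loop back to step 2

# Example wiring with trivial stubs:
run_agent(
    i0=(),
    see=lambda e: e,
    next_fn=lambda i, p: i + (p,),      # internal state = percept history
    action=lambda i: "wait",
    get_environment_state=lambda: "sunny",
    execute=print,
)
```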
46 [Wooldridge 2009] Tasks for Agents
Agents carry out tasks for users, so tasks must be specified by the users. The idea is to tell agents what to do without telling them how to do it.
47 [Wooldridge 2009] Utility Functions over States
One approach associates utilities with individual states; the task of the agent is then to bring about states that maximize utility. A task specification is a function
u : E → ℝ
which associates a real number with every environment state.
48 [Wooldridge 2009] Utility Functions over States
But what is the value of a run: the minimum utility of any state on the run? The maximum? The sum of the utilities of the states on the run? Their average? A disadvantage of this approach is that it is difficult to specify a long-term view when assigning utilities to individual states. One possibility is to apply a discount to states later on the run.
49 [Wooldridge 2009] Utilities over Runs
Another possibility assigns a utility not to individual states, but to runs themselves:
u : R → ℝ
This takes an inherently long-term view. Other variations incorporate the probabilities of different states emerging. Difficulties with utility-based approaches: where do the numbers come from? Humans don’t think in terms of utilities, and it is hard to formulate tasks in these terms.
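The aggregation choices from the previous slide are easy to compare side by side. The sketch below scores a run's states (the utilities are made-up numbers) under each scheme, including a discounted sum that implements the "discount states later on" idea.

```python
# Valuing a run from per-state utilities (hypothetical numbers).

state_utilities = [1.0, 0.0, 5.0, 2.0]   # u(e) for each state on the run

minimum = min(state_utilities)
maximum = max(state_utilities)
total = sum(state_utilities)
average = total / len(state_utilities)

# Discounted sum: states later on the run count for less (gamma < 1).
gamma = 0.9
discounted = sum(u * gamma**t for t, u in enumerate(state_utilities))

print(minimum, maximum, total, average, round(discounted, 3))
```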
50 Subsumption Architecture
51 © Franz J. Kurfess Background
❖ developed by Rodney Brooks in response to the limitations of traditional, symbol-oriented agent architectures: explicit representations; high-level, top-down symbol manipulation; a closed world
52 © Franz J. Kurfess Behaviors
❖ intelligent behavior is possible without explicit representation
❖ intelligent behavior does not require explicit abstract reasoning
❖ emergent properties of some complex systems may lead to intelligent behavior
53 © Franz J. Kurfess Situatedness
❖ closely intertwined with embodiment
❖ “real” intelligence is situated in the real world, not in disembodied systems like theorem provers; the “semantic gap” dissolves
❖ embodied systems have strong associations between entities in the real world and internal representations
❖ internal representations are not explicit: they are often distributed, holistic representations; the “memory location” of an external object’s representation can’t be determined, and a single basic memory unit (cell) on its own carries no meaningful information
54 © Franz J. Kurfess Emergent Intelligence
❖ intelligent behavior is an emergent property: it results from an agent’s interaction with its environment
❖ intelligence is not “endowed” by a designer; apparently intelligent behavior may be a perceptual illusion based on the observer's interpretation of the behavior
❖ probably influenced by Valentino Braitenberg’s “Vehicles”: simple vehicles that exhibit interesting behaviors; sensors are coupled with actuators; no “intelligence” on board
Examples:
http://people.cs.uchicago.edu/~wiseman/vehicles/test-run.html
http://www.youtube.com/watch?v=o6XOBdO7gkk
55 © Franz J. Kurfess Subsumption Architecture
❖ a hierarchy of task-accomplishing behaviors
❖ each behavior has a simple structure similar to a reflex agent, often described via if... then rules
❖ behaviors compete to exercise control over the agent
❖ lower layers are more primitive and have higher priority
❖ higher layers are possibly more complex, but have lower priority
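The priority scheme can be shown in a few lines. In the sketch below (a hypothetical robot example in Python), behaviors are checked from the lowest, highest-priority layer upward, and the first one whose condition fires takes control.

```python
# Subsumption-style control: lower layers are checked first and win
# (hypothetical obstacle-avoiding robot).

behaviors = [
    # (layer, condition, action) -- ordered from most primitive / highest priority
    (0, lambda p: p["obstacle_near"], "turn_away"),     # avoid collisions
    (1, lambda p: p["battery_low"],   "seek_charger"),  # stay operational
    (2, lambda p: True,               "wander"),        # default exploration
]

def subsumption_control(percept):
    for layer, condition, action in behaviors:
        if condition(percept):      # first matching (lowest) layer subsumes the rest
            return action
    return "idle"

print(subsumption_control({"obstacle_near": False, "battery_low": True}))  # -> seek_charger
```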
56 [Wooldridge 2009] A Traditional Decomposition of a Mobile Robot Control System into Functional Modules
From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985
57 [Wooldridge 2009] A Decomposition of a Mobile Robot Control System Based on Task Achieving Behaviors
From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985
58 [Wooldridge 2009] Schematic of a Module
From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985
59 [Wooldridge 2009] Levels 0, 1, and 2 Control Systems
From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985
60 © Franz J. Kurfess Situated Automata
❖ the agent is specified in a declarative language: high-level, rule-like, based on modal logic
❖ the specification is compiled into a digital machine; the machine can operate in real time with a time bound, giving concrete interpretations of “possible world” semantics
❖ compilation is equivalent to an NP-complete problem: more expressive agent specification languages are more difficult to compile, and become totally impractical at some level
61 [Wooldridge 2009] Circuit Model of a Finite-State Machine
From Rosenschein and Kaelbling, “A Situated View of Representation and Control”, 1994
f = state update function, s = internal state, g = output function
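In code, the circuit model is just two functions around a piece of state. The following Python sketch (with made-up update and output functions) mirrors the f/s/g labeling of the figure.

```python
# Circuit model of a finite-state machine: s' = f(s, input), output = g(s).
# f, g, and the input stream here are hypothetical stand-ins.

def f(s, x):
    """State update function: remember whether we have ever seen a 1."""
    return s or (x == 1)

def g(s):
    """Output function: report the current internal state."""
    return "seen_one" if s else "all_zeros"

s = False                       # internal state s
for x in [0, 0, 1, 0]:          # input stream
    s = f(s, x)                 # update state
    print(g(s))                 # produce output
```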
62 [Wooldridge 2009] Situated Automata
An agent is specified in terms of two components: perception and action. Two programs are used to synthesize agents: RULER for the perception component and GAPPS for the action component.
63 [Wooldridge 2009] RULER – Situated Automata
RULER takes as its input three components:
“[A] specification of the semantics of the [agent's] inputs (‘whenever bit 1 is on, it is raining’); a set of static facts (‘whenever it is raining, the ground is wet’); and a specification of the state transitions of the world (‘if the ground is wet, it stays wet until the sun comes out’). The programmer then specifies the desired semantics for the output (‘if this bit is on, the ground is wet’), and the compiler... [synthesizes] a circuit whose output will have the correct semantics.... All that declarative ‘knowledge’ has been reduced to a very simple circuit.” [Kaelbling, 1991]
64 [Wooldridge 2009] GAPPS – Situated Automata
The GAPPS program takes as its input:
a set of goal reduction rules (essentially rules that encode information about how goals can be achieved), and
a top-level goal.
It then generates a program that can be translated into a digital circuit that realizes the goal. The generated circuit does not represent or manipulate symbolic expressions; all symbolic manipulation is done at compile time.
65 [Wooldridge 2009] Circuit Model of a Finite-State Machine
From Rosenschein and Kaelbling, “A Situated View of Representation and Control”, 1994
“The key lies in understanding how a process can naturally mirror in its states subtle conditions in its environment and how these mirroring states ripple out to overt actions that eventually achieve goals.”
[Diagram labels: RULER (perception side), GAPPS (action side)]
66 [Wooldridge 2009] Situated Automata
The theoretical limitations of the approach are not well understood. Compilation (with propositional specifications) is equivalent to an NP-complete problem, and the more expressive the agent specification language, the harder it is to compile. (There are some deep theoretical results which say that beyond a certain expressiveness, the compilation simply can’t be done.)
67 [Wooldridge 2009] Advantages of Reactive Agents
Simplicity
Economy
Computational tractability
Robustness against failure
Elegance
68 [Wooldridge 2009] Limitations of Reactive Agents
Agents without environment models must have sufficient information available from their local environment.
If decisions are based only on the local environment, how can the agent take non-local information into account? (It has a “short-term” view.)
It is difficult to make reactive agents that learn.
Since behavior emerges from component interactions plus the environment, it is hard to see how to engineer specific agents; no principled methodology exists.
It is hard to engineer agents with large numbers of behaviors, since the dynamics of the interactions become very complex.
69 Hybrid Agents
70 © Franz J. Kurfess Hybrid Architecture
❖ an agent consists of two or more subsystems:
a deliberative subsystem, with an explicit, symbol-oriented world model used for reasoning, planning, and decision-making
a reactive subsystem, which reacts to events without complex reasoning and often has precedence
❖ a combination of classical and alternative models
❖ naturally leads to a layered architecture: control subsystems are arranged into a hierarchy with increasing levels of abstraction
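One common way to realize "the reactive subsystem often has precedence" is a simple arbiter. The Python sketch below is an invented illustration of that idea, not a specific published architecture.

```python
# Hybrid agent: the reactive layer overrides the deliberative layer (hypothetical).

def reactive_layer(percept):
    """Fast event handling; returns an action, or None to defer."""
    if percept.get("collision_imminent"):
        return "emergency_stop"
    return None

def deliberative_layer(percept, world_model):
    """Slow, model-based planning (stubbed out here)."""
    world_model["last_percept"] = percept   # update symbolic world model
    return "follow_current_plan"

def hybrid_agent(percept, world_model):
    reaction = reactive_layer(percept)
    return reaction if reaction is not None else deliberative_layer(percept, world_model)

print(hybrid_agent({"collision_imminent": True}, {}))   # -> emergency_stop
```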
71 [Wooldridge 2009] Control Frameworks
A control framework manages the interactions between the various layers.
Horizontal layering: the layers are each directly connected to the sensory input and action output; each layer itself acts like an agent, producing suggestions as to what action to perform.
Vertical layering: sensory input and action output are each dealt with by at most one layer.
72 [Wooldridge 2009] Hybrid Architectures
With m possible actions suggested by each layer and n layers, a horizontally layered mediator must consider up to m^n possible interactions, while vertical layering reduces this to m^2(n−1) interactions between adjacent layers. Horizontal layering introduces a bottleneck in the central control system; vertical layering is not fault tolerant to layer failure.
73 [Wooldridge 2009] Ferguson – TOURINGMACHINES
The perception and action subsystems interface directly with the agent’s environment; three control layers are embedded in a control framework that mediates between the layers.
74 [Wooldridge 2009] Ferguson – TOURINGMACHINES
75 [Wooldridge 2009] Ferguson – TOURINGMACHINES
The reactive layer is implemented as a set of situation-action rules, similar to a subsumption architecture. Example:
rule-1: kerb-avoidance
if is-in-front(Kerb, Observer) and speed(Observer) > 0 and separation(Kerb, Observer) < KerbThreshHold
then change-orientation(KerbAvoidanceAngle)
The planning layer constructs plans and selects actions to execute in order to achieve the agent’s goals.
76 [Wooldridge 2009] Ferguson – TOURINGMACHINES
The modeling layer contains symbolic representations of the ‘cognitive state’ of other entities in the agent’s environment. The three layers communicate with each other and are embedded in a control framework that uses control rules. Example:
censor-rule-1:
if entity(obstacle-6) in perception-buffer
then remove-sensory-record(layer-R, entity(obstacle-6))
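A censor rule of this kind simply filters what a given layer gets to see. The Python sketch below is a loose, hypothetical rendering of that idea, not Ferguson's implementation.

```python
# Loose sketch of a TouringMachines-style censor rule: the control framework
# filters percepts before they reach a layer (hypothetical rendering).

def censor_rule_1(percepts):
    """Hide obstacle-6 from the reactive layer (layer R)."""
    return [p for p in percepts if p != "entity(obstacle-6)"]

perception_buffer = ["entity(obstacle-6)", "entity(lane-marking)"]
layer_r_input = censor_rule_1(perception_buffer)
print(layer_r_input)    # -> ['entity(lane-marking)']
```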
77 [Wooldridge 2009] Müller – InteRRaP
Vertically layered, two-pass architecture.
[Diagram: a cooperation layer paired with social knowledge, a plan layer with planning knowledge, and a behavior layer with a world model; a world interface handles perceptual input and action output]
78 © Franz J. Kurfess Post-Test
79 © Franz J. Kurfess Evaluation ❖ Criteria
80 © Franz J. Kurfess Summary Agent Architectures
81 © Franz J. Kurfess Important Concepts and Terms ❖ agent ❖ agent society ❖ architecture ❖ deduction ❖ environment ❖ hybrid architecture ❖ intelligence ❖ intention ❖ multi-agent system ❖ reactivity ❖ subsumption