5-1 LECTURE 5: REACTIVE AND HYBRID ARCHITECTURES An Introduction to MultiAgent Systems

5-2 Reactive Architectures
There are many unsolved (some would say insoluble) problems associated with symbolic AI
These problems have led some researchers to question the viability of the whole paradigm, and to the development of reactive architectures
Although united by a belief that the assumptions underpinning mainstream AI are in some sense wrong, reactive agent researchers use many different techniques
In this presentation, we start by reviewing the work of one of the most vocal critics of mainstream AI: Rodney Brooks

5-3 Brooks – behavior languages
Brooks has put forward three theses:
1. Intelligent behavior can be generated without explicit representations of the kind that symbolic AI proposes
2. Intelligent behavior can be generated without explicit abstract reasoning of the kind that symbolic AI proposes
3. Intelligence is an emergent property of certain complex systems

5-4 Brooks – behavior languages
He identifies two key ideas that have informed his research:
1. Situatedness and embodiment: ‘Real’ intelligence is situated in the world, not in disembodied systems such as theorem provers or expert systems
2. Intelligence and emergence: ‘Intelligent’ behavior arises as a result of an agent’s interaction with its environment. Also, intelligence is ‘in the eye of the beholder’; it is not an innate, isolated property

5-5 Brooks – behavior languages
To illustrate his ideas, Brooks built some robots based on his subsumption architecture
A subsumption architecture is a hierarchy of task-accomplishing behaviors
Each behavior is a rather simple rule-like structure
Each behavior ‘competes’ with others to exercise control over the agent
Lower layers represent more primitive kinds of behavior (such as avoiding obstacles), and have precedence over layers further up the hierarchy
The resulting systems are, in terms of the amount of computation they do, extremely simple
Some of the robots do tasks that would be impressive if they were accomplished by symbolic AI systems
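To make the idea concrete, here is a minimal sketch (purely illustrative, not Brooks’ implementation) of a subsumption-style controller: each behavior is a condition/action pair, and the lowest layer whose condition fires takes control. The percept names and behaviors are assumptions for the example.

```python
# Minimal subsumption-style controller (illustrative sketch, not Brooks' code).
# Each behavior is a condition/action pair; layers are ordered so that lower
# (more primitive) layers take precedence over higher ones.

class Behavior:
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # percepts -> bool
        self.action = action        # percepts -> command

def subsumption_step(layers, percepts):
    """Fire the first (lowest) behavior whose condition holds."""
    for behavior in layers:            # layers[0] = most primitive, highest precedence
        if behavior.condition(percepts):
            return behavior.action(percepts)
    return "do nothing"

# Hypothetical percepts and behaviors, loosely following the robot example below.
layers = [
    Behavior("avoid",  lambda p: p["obstacle_near"], lambda p: "turn away from obstacle"),
    Behavior("wander", lambda p: True,               lambda p: "head in a random direction"),
]

print(subsumption_step(layers, {"obstacle_near": True}))   # -> turn away from obstacle
print(subsumption_step(layers, {"obstacle_near": False}))  # -> head in a random direction
```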

5-6 A Traditional Decomposition of a Mobile Robot Control System into Functional Modules From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985

5-7 A Decomposition of a Mobile Robot Control System Based on Task Achieving Behaviors From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985

5-8 Layered Control in the Subsumption Architecture (SA) From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985

5-9 Example in SA (cont’d)

5-10 Example in SA (cont’d)
Which layer’s heading vector should be accepted, and when?

5-11 Example in SA (cont’d)

5-12 Basic behaviour: Avoid obstacles
Under level 0 control, the robot finds a large empty space and then sits there
From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985

5-13 Adding New Behaviour: Wander
Under level 1 control, the robot heads off in a random direction after a few seconds

5-14 Behaviour: Travel to the most distant point
Under level 2 control, the robot finds the most distant point and heads off in that direction

5-15 Layered Control in the Subsumption Architecture: Summary
The mobile robot control problem can be decomposed in terms of behaviors rather than in terms of functional modules
It provides a way to incrementally build and test a complex mobile robot control system
Useful parallel computation can be performed
There is no need for a central control module
The frame problem is solved by not modelling the world at all

5-16 Potential Fields Methodologies
Another style of reactive architecture is based on potential fields.
Potential field styles of behaviors always use vectors to represent behaviors and vector summation to combine vectors from different behaviors to produce an emergent behavior.

5-17 Visualizing potential fields
The motor action of a behavior must be represented as a potential field.
A potential field is an array, or field, of vectors.
A vector is a mathematical construct which consists of a magnitude and a direction, (m, d).
The array represents a region of space. In most robotic applications, the space is in two dimensions, representing a bird’s eye view of the world just like a map.
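As a rough sketch (not taken from the textbook), the field can be held as a 2-D array of (magnitude, direction) vectors; here an attractive field points every cell at a goal, with a linear drop-off as one possible magnitude profile. The grid size and goal position are arbitrary.

```python
# Sketch: an attractive potential field over a 10x10 grid (illustrative values).
# Each cell stores a vector pointing at the goal; magnitude drops off linearly
# with distance (one possible magnitude profile).
import math

WIDTH, HEIGHT = 10, 10
GOAL = (7, 3)  # hypothetical goal cell

def attractive_vector(x, y, goal, max_dist=15.0):
    gx, gy = goal
    dx, dy = gx - x, gy - y
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)  # at the goal: zero vector
    magnitude = max(0.0, 1.0 - dist / max_dist)  # linear drop-off profile
    direction = math.atan2(dy, dx)               # radians
    return (magnitude, direction)

# field[y][x] is the (magnitude, direction) vector the robot would feel at (x, y)
field = [[attractive_vector(x, y, GOAL) for x in range(WIDTH)] for y in range(HEIGHT)]

m, d = field[0][0]
print(f"vector at (0,0): magnitude {m:.2f}, direction {math.degrees(d):.1f} deg")
```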

5-18 Visualizing potential fields (cont’d)

5-19 Five basic potential fields

5-20 Magnitude profiles
Magnitude profile: the way the magnitude of the vectors in the field changes (e.g., with distance from the goal or obstacle).

5-21 Example in PF
Simple navigation: a robot is heading for a goal (10.3 m in direction N) and encounters an obstacle.
Figure 4.15: A bird’s eye view of a world with a goal and obstacle, and the two active behaviors for the robot who will inhabit this world.
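A hedged sketch of how the two active behaviors could be combined: each contributes a vector at the robot’s current position, and the vectors are summed to give the emergent heading. The gains, obstacle radius and coordinates are invented for illustration.

```python
# Sketch: vector summation of an attractive (goal) and a repulsive (obstacle)
# behavior at the robot's position. Positions, gains and ranges are illustrative.
import math

def attract(pos, goal, gain=1.0):
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)
    return (gain * dx / dist, gain * dy / dist)   # unit vector toward goal, scaled

def repulse(pos, obstacle, gain=1.0, radius=3.0):
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > radius:
        return (0.0, 0.0)                          # no effect outside the radius
    strength = gain * (radius - dist) / radius     # stronger when closer
    return (strength * dx / dist, strength * dy / dist)

robot, goal, obstacle = (0.0, 0.0), (0.0, 10.3), (0.5, 2.0)  # goal 10.3 m to the "north"
vectors = [attract(robot, goal), repulse(robot, obstacle)]
resultant = (sum(v[0] for v in vectors), sum(v[1] for v in vectors))
print("emergent heading vector:", resultant)
```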

5-22 Example in PF (cont’d)

5-23 Example in PF (cont’d)

5-24 Advantages
- A continuous representation that is easy to visualize over a large region of space: easier to visualize the robot’s overall behavior.
- Easy to combine fields, and languages such as C++ support making behavioral libraries.

5-25 Disadvantages
The fields can sum to a vector with 0 magnitude: the local minima problem. Solutions:
- Earliest: always have a motor schema producing vectors with a small magnitude from random noise.
- Navigation Templates (NaTs): a smarter potential field.
- Recent: express the fields as harmonic functions (which do not have local minima of 0). Computationally expensive; they have to be implemented on a VLSI chip in order to run in real time for large areas.
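The earliest fix above can be sketched in a few lines (magnitudes are arbitrary): a noise schema always contributes a small random vector, so the summed field never sits at exactly zero magnitude.

```python
# Sketch of the "random noise" fix for local minima: a noise schema always adds
# a small random vector, so the robot cannot get stuck where the other fields
# cancel out. Magnitudes are illustrative.
import math, random

def noise_vector(magnitude=0.05):
    angle = random.uniform(0, 2 * math.pi)
    return (magnitude * math.cos(angle), magnitude * math.sin(angle))

def combine(vectors):
    vectors = list(vectors) + [noise_vector()]
    return (sum(v[0] for v in vectors), sum(v[1] for v in vectors))

# Two behaviors that exactly cancel; the noise schema still yields a nonzero heading.
print(combine([(1.0, 0.0), (-1.0, 0.0)]))
```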

5-26 Steels’ Mars Explorer: social robots
Robots interact implicitly by making / detecting changes in the environment.
Steels’ Mars explorer system, using the subsumption architecture, achieves near-optimal cooperative performance in a simulated ‘rock gathering on Mars’ domain:
The objective is to explore a distant planet and, in particular, to collect samples of a precious rock.
The location of the samples is not known in advance, but it is known that they tend to be clustered.

5-27 Steels’ Mars Explorer Rules
For individual (non-cooperative) agents, the lowest-level behavior (and hence the behavior with the highest “priority”) is obstacle avoidance:
if detect an obstacle then change direction (1)
Any samples carried by agents are dropped back at the mother-ship:
if carrying samples and at the base then drop samples (2)
Agents carrying samples will return to the mother-ship:
if carrying samples and not at the base then travel up gradient (3)

5-28 Steels’ Mars Explorer Rules
Agents will collect samples they find:
if detect a sample then pick sample up (4)
An agent with “nothing better to do” will explore randomly:
if true then move randomly (5)
Subsumption hierarchy: (1) < (2) < (3) < (4) < (5)
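Read as a prioritized list, the hierarchy simply fires the first rule whose condition holds. A small sketch of rules (1)-(5) in that style; the percept fields and action strings are hypothetical.

```python
# Sketch of the non-cooperative Mars explorer rules (1)-(5) as a prioritized list.
# Percept fields and action strings are hypothetical; the ordering encodes the
# subsumption hierarchy (1) < (2) < (3) < (4) < (5).

RULES = [
    ("1 avoid",   lambda p: p["obstacle"],                      "change direction"),
    ("2 drop",    lambda p: p["carrying"] and p["at_base"],     "drop samples"),
    ("3 return",  lambda p: p["carrying"] and not p["at_base"], "travel up gradient"),
    ("4 pick up", lambda p: p["sample_detected"],               "pick sample up"),
    ("5 explore", lambda p: True,                               "move randomly"),
]

def act(percepts):
    for name, condition, action in RULES:
        if condition(percepts):
            return name, action

percepts = {"obstacle": False, "carrying": True, "at_base": False, "sample_detected": False}
print(act(percepts))  # -> ('3 return', 'travel up gradient')
```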

5-29 Mars Explorer’s rules (cont’d)
Simple cooperation: leave a trail of radioactive crumbs if you find a sample – instead of (2) and (3):
- if carrying samples and at the base then drop samples (6)
- if carrying samples and not at the base then drop 2 crumbs and travel up gradient (7)
Use crumbs to find samples:
- if sense crumbs then pick up 1 crumb and travel down gradient (8)
Subsumption hierarchy: (1) < (6) < (7) < (4) < (8) < (5)

5-30 Situated Automata
A sophisticated approach is that of Rosenschein and Kaelbling
In their situated automata paradigm, an agent is specified in a rule-like (declarative) language, and this specification is then compiled down to a digital machine, which satisfies the declarative specification
This digital machine can operate within a provable time bound
Reasoning is done offline, at compile time, rather than online at run time

5-31 Situated Automata
The logic used to specify an agent is essentially a modal logic of knowledge
The technique depends upon the possibility of giving the worlds in possible worlds semantics a concrete interpretation in terms of the states of an automaton
“[An agent] … x is said to carry the information that P in world state s, written s ⊨ K(x,P), if for all world states in which x has the same value as it does in s, the proposition P is true.” [Kaelbling and Rosenschein, 1990]

5-32 Situated Automata
An agent is specified in terms of two components: perception and action
Two programs are then used to synthesize agents:
- RULER is used to specify the perception component of an agent
- GAPPS is used to specify the action component

5-33 Circuit Model of a Finite-State Machine
From Rosenschein and Kaelbling, “A Situated View of Representation and Control”, 1994
f = state update function
s = internal state
g = output function
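Abstractly, the compiled agent is a clocked machine that repeatedly applies f and g. A minimal sketch of that update loop, with a toy f, g and state standing in for whatever the compiler would actually produce:

```python
# Sketch of the circuit model: at each tick the machine updates its internal
# state with f and emits an action with g. The concrete f, g and state shown
# here are placeholders, not output of RULER/GAPPS.

def run_machine(f, g, s, inputs):
    """s = internal state, f = state update function, g = output function."""
    for i in inputs:
        s = f(i, s)          # new state depends on input and old state
        yield g(s)           # output depends on the current state

# Toy instantiation: the state counts consecutive 1-bits on the input line.
f = lambda i, s: s + 1 if i == 1 else 0
g = lambda s: "act" if s >= 2 else "wait"

print(list(run_machine(f, g, 0, [0, 1, 1, 1, 0])))  # ['wait', 'wait', 'act', 'act', 'wait']
```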

5-34 RULER – Situated Automata
RULER takes as its input three components:
“[A] specification of the semantics of the [agent's] inputs (‘whenever bit 1 is on, it is raining’); a set of static facts (‘whenever it is raining, the ground is wet’); and a specification of the state transitions of the world (‘if the ground is wet, it stays wet until the sun comes out’). The programmer then specifies the desired semantics for the output (‘if this bit is on, the ground is wet’), and the compiler... [synthesizes] a circuit whose output will have the correct semantics.... All that declarative ‘knowledge’ has been reduced to a very simple circuit.” [Kaelbling, 1991]
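The quoted example can be written out by hand as the kind of “very simple circuit” the compiler is said to produce; the sketch below is hand-written for illustration and is not actual RULER output.

```python
# Hand-written version of the circuit described in the quote (illustrative only,
# not generated by RULER): input bit 1 means "raining"; the single state bit
# "ground_wet" stays on until the sun comes out; the output bit asserts
# "the ground is wet".

def update(ground_wet, raining, sun_out):
    # it rains -> the ground becomes wet; wet ground stays wet until the sun comes out
    return raining or (ground_wet and not sun_out)

ground_wet = False
for raining, sun_out in [(True, False), (False, False), (False, True)]:
    ground_wet = update(ground_wet, raining, sun_out)
    print("output bit (ground is wet):", ground_wet)   # True, True, False
```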

5-35 GAPPS – Situated Automata
The GAPPS program takes as its input:
- a set of goal reduction rules (essentially rules that encode information about how goals can be achieved), and
- a top-level goal
It then generates a program that can be translated into a digital circuit in order to realize the goal
The generated circuit does not represent or manipulate symbolic expressions; all symbolic manipulation is done at compile time

5-36 Circuit Model of a Finite-State Machine
From Rosenschein and Kaelbling, “A Situated View of Representation and Control”, 1994
“The key lies in understanding how a process can naturally mirror in its states subtle conditions in its environment and how these mirroring states ripple out to overt actions that eventually achieve goals.”

5-37 Situated Automata
The theoretical limitations of the approach are not well understood
Compilation (with propositional specifications) is equivalent to an NP-complete problem
The more expressive the agent specification language, the harder it is to compile it
(There are some deep theoretical results which say that after a certain expressiveness, the compilation simply can’t be done.)

5-38 Advantages of Reactive Agents
Simplicity
Economy
Computational tractability
Robustness against failure
Elegance

5-39 Limitations of Reactive Agents
Agents without environment models must have sufficient information available from the local environment
If decisions are based on the local environment, how can the agent take non-local information into account? (i.e., it has a “short-term” view)
It is difficult to make reactive agents that learn
Since behavior emerges from component interactions plus the environment, it is hard to see how to engineer specific agents (no principled methodology exists)
It is hard to engineer agents with large numbers of behaviors (the dynamics of the interactions become too complex to understand)

5-40 Hybrid Architectures
Many researchers have argued that neither a completely deliberative nor a completely reactive approach is suitable for building agents
They have suggested using hybrid systems, which attempt to marry classical and alternative approaches
An obvious approach is to build an agent out of two (or more) subsystems:
- a deliberative one, containing a symbolic world model, which develops plans and makes decisions in the way proposed by symbolic AI
- a reactive one, which is capable of reacting to events without complex reasoning

5-41 Hybrid Architectures
Often, the reactive component is given some kind of precedence over the deliberative one
This kind of structuring leads naturally to the idea of a layered architecture, of which TOURINGMACHINES and INTERRAP are examples
In such an architecture, an agent’s control subsystems are arranged into a hierarchy, with higher layers dealing with information at increasing levels of abstraction

5-42 Hybrid Architectures
A key problem in such architectures is what kind of control framework to embed the agent’s subsystems in, to manage the interactions between the various layers
Horizontal layering: layers are each directly connected to the sensory input and action output. In effect, each layer itself acts like an agent, producing suggestions as to what action to perform.
Vertical layering: sensory input and action output are each dealt with by at most one layer
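A sketch of the horizontally layered case (the layers and the “first non-empty suggestion wins” mediation rule are invented for illustration): every layer sees the same percept and proposes an action, and a central mediator picks one.

```python
# Sketch of horizontal layering: each layer maps the same percept to a suggested
# action, and a central mediator chooses among the suggestions. The layers and
# the mediation rule are illustrative.

def reactive_layer(percept):
    return "swerve" if percept.get("obstacle") else None

def planning_layer(percept):
    return "follow planned route"

LAYERS = [reactive_layer, planning_layer]   # index 0 = highest priority here

def mediate(suggestions):
    # simple mediation rule: take the first non-None suggestion
    return next(s for s in suggestions if s is not None)

percept = {"obstacle": True}
print(mediate(layer(percept) for layer in LAYERS))   # -> swerve
```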

5-43 Hybrid Architectures
Suppose there are n layers and each layer is capable of suggesting m possible actions:
Horizontal layering: up to m^n possible interactions between layers must be considered, and the mediating control framework introduces a bottleneck into the agent’s decision making
Vertical layering: at most m^2(n-1) interactions between adjacent layers, but the design is not fault tolerant to layer failure
(For example, with m = 3 and n = 3: 3^3 = 27 interactions versus 3^2 × 2 = 18.)

5-44 Ferguson – TOURINGMACHINES
The TOURINGMACHINES architecture consists of perception and action subsystems, which interface directly with the agent’s environment, and three control layers, embedded in a control framework, which mediates between the layers

5-45 Ferguson – TOURINGMACHINES

5-46 Ferguson – TOURINGMACHINES
The reactive layer is implemented as a set of situation-action rules, a la the subsumption architecture. Example:
rule-1: kerb-avoidance
  if is-in-front(Kerb, Observer) and speed(Observer) > 0 and separation(Kerb, Observer) < KerbThreshHold
  then change-orientation(KerbAvoidanceAngle)
The planning layer constructs plans by:
- selecting from a library of plan skeletons (schemas) one that matches the agent’s goals, and
- elaborating it – selecting schemas to achieve sub-goals, etc.
There is no first-principles planning (planning from scratch)

5-47 Ferguson – TOURINGMACHINES
The modeling layer contains symbolic representations of the ‘cognitive state’ of the agent and of the other entities (agents) in the environment
- It detects / predicts conflicts and generates goals
The three layers communicate with each other and are embedded in a control framework, which uses control rules to decide which layer has control over the agent
- Control rules can suppress sensor information or action outputs between the layers. Example:
censor-rule-1:
  if entity(obstacle-6) in perception-buffer
  then remove-sensory-record(layer-R, entity(obstacle-6))
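A sketch of how such a censor rule might act, following the example above (the perception-buffer format and record names are assumptions): the control framework filters the buffer before the reactive layer R sees it.

```python
# Sketch of a control-framework censor rule in the spirit of censor-rule-1
# (buffer format and record names are illustrative): sensory records about
# obstacle-6 are removed before they reach the reactive layer R.

def censor_rule_1(perception_buffer):
    return [record for record in perception_buffer
            if record != ("entity", "obstacle-6")]   # suppress this entity for layer R

buffer = [("entity", "obstacle-6"), ("entity", "kerb-2")]
layer_r_input = censor_rule_1(buffer)
print(layer_r_input)   # -> [('entity', 'kerb-2')]
```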

5-48 Müller – InteRRaP
A vertically layered, two-pass architecture.
[Figure: three control layers – cooperation layer, plan layer, behavior layer – each paired with a knowledge base (social knowledge, planning knowledge, world model); the behavior layer sits on the world interface, which handles perceptual input and action output]

5-49 Differences between InteRRaP and TouringMachines
Each layer has an explicit knowledge base representing the world at a different level of abstraction
The way the layers interact with each other and with the environment:
- In TouringMachines, all layers get sensory input and can act
- In InteRRaP, only the lowest layer interacts with the environment; layers pass control up if they are “not competent” to deal with the situation using their knowledge bases; actions are implemented by propagating plans downwards
- Each layer performs situation recognition & goal activation and planning & scheduling
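A sketch of the two-pass control flow (the competence tests are invented for illustration): control passes up the layers until one is competent, and that layer’s response is then executed downward through the world interface.

```python
# Sketch of InteRRaP-style two-pass control (competence tests are illustrative):
# bottom-up, each layer passes control upward if it is not competent to handle
# the situation; top-down, the competent layer's response is executed below.

LAYERS = ["behaviour layer", "plan layer", "cooperation layer"]  # lowest first

def competent(layer, situation):
    if layer == "behaviour layer":
        return situation == "routine"                    # handled by reactive patterns
    if layer == "plan layer":
        return situation in ("routine", "needs planning")
    return True                                          # cooperation layer: last resort

def interrap_step(situation):
    for layer in LAYERS:                                 # upward pass
        if competent(layer, situation):
            return f"{layer} commits an action downward" # downward pass (abstracted)

print(interrap_step("needs planning"))  # -> plan layer commits an action downward
```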

5-50 Hybrid Architectures: Summary
Currently the most popular type of agent architecture
It is a pragmatic solution, but lacks the conceptual and semantic clarity of unlayered architectures
Interaction between layers is a problem in approaches where layers are independent (TouringMachines): considering all possible ways in which layers can interact is hard
This is less of a problem for two-pass vertically layered architectures (InteRRaP)