5-1 Chapter 5: REACTIVE AND HYBRID ARCHITECTURES
5-2 Reactive Architectures
There are many unsolved (some would say insoluble) problems associated with symbolic AI.
These problems have led some researchers to question the viability of the whole paradigm, and to the development of reactive architectures.
One of the most vocal critics of mainstream AI: Rodney Brooks.
5-3 Brooks – behavior languages
Brooks has put forward three theses:
1. Intelligent behavior can be generated without explicit representations of the kind that symbolic AI proposes
2. Intelligent behavior can be generated without explicit abstract reasoning of the kind that symbolic AI proposes
3. Intelligence is an emergent property of certain complex systems
5-4 Brooks – behavior languages
To illustrate his ideas, Brooks built a number of robots based on his subsumption architecture.
A subsumption architecture is a hierarchy of task-accomplishing behaviors.
Each behavior is a rather simple rule-like structure.
Each behavior ‘competes’ with others to exercise control over the agent.
Lower layers represent more primitive kinds of behavior (such as avoiding obstacles), and have precedence over layers further up the hierarchy.
5-5 Subsumption: to class a more specific category under a general principle.
Example: the rule “come to class” is subsumed by the rule “take advantage of all learning experiences”.
No complex symbolic representations.
Many behaviors fire simultaneously.
5-6 Subsumption architecture - Brooks, 1986
Modules = {task-accomplishing behaviours}
Lower layers in the hierarchy have higher priority and are able to inhibit the operations of higher layers: a total ordering.
Inhibitor (subsuming) relation: b1 < b2 if b1 has priority over b2 (total order).
The modules located at the lower end of the hierarchy are responsible for basic, primitive tasks; the higher modules reflect more complex patterns of behavior and incorporate a subset of the tasks of the subordinate modules.
5-7 Rules (note use of total order; arbitration sketched below):
1. If detect an object, change direction
2. If carrying samples and at base, drop samples
3. If carrying samples and not at base, travel up gradient and drop 2 crumbs
4. If detect sample, pick sample up
5. If sense crumbs, then pick up 1 crumb and travel down gradient
6. If true, move randomly
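A minimal Python sketch (not part of the slides) of how this total-ordered rule set could be arbitrated; the percept fields and the action strings are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Percept:
    detect_object: bool = False
    carrying_samples: bool = False
    at_base: bool = False
    detect_sample: bool = False
    sense_crumbs: bool = False

def decide(p: Percept) -> str:
    """Return the action of the highest-priority rule whose condition holds."""
    rules = [  # listed in priority order: rule 1 first (total order)
        (p.detect_object,                       "change direction"),
        (p.carrying_samples and p.at_base,      "drop samples"),
        (p.carrying_samples and not p.at_base,  "travel up gradient and drop 2 crumbs"),
        (p.detect_sample,                       "pick sample up"),
        (p.sense_crumbs,                        "pick up 1 crumb and travel down gradient"),
        (True,                                  "move randomly"),  # rule 6 always applies
    ]
    for condition, action in rules:
        if condition:
            return action

print(decide(Percept(carrying_samples=True)))  # -> travel up gradient and drop 2 crumbs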
5-8 Subsumption - rules:
- suppress information from a lower layer, to give control to a higher one
- censor (inhibit) the actions of layers, so as to control which layer will perform the actions
5-9 Multiple actions fired: Module 1 can monitor and influence the inputs and outputs of Module 2.
M1 = wanders about while avoiding obstacles (incorporates M0)
M2 = explores the environment looking for distant objects of interest while moving around (incorporates M1)
Incorporating the functionality of a subordinate control module into a higher module is performed using suppressors (which modify input signals) and inhibitors (which inhibit output).
Lower levels NORMALLY have precedence, but higher levels can choose to restrict lower levels.
[Figure: competence modules (0) avoid obstacles, (1) move around, (2) investigate environment, wired between sensors (input percepts) and effectors (output actions) through suppressor and inhibitor nodes]
5-10 All modules are operating all the time. The system uses priority and subsumption to select WHICH action to perform.
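A minimal sketch (assumed, not from the slides) of the wiring suggested by the figure on slide 5-9: every competence module runs on every tick, a suppressor node lets the higher "investigate" module replace the "move around" signal, and the obstacle-avoidance module keeps final precedence at the effectors. The module behaviors and percept keys are illustrative.

def avoid_obstacles(percepts):      # competence module (0)
    return "turn away" if percepts.get("obstacle") else None

def move_around(percepts):          # competence module (1)
    return "wander"                 # always proposes something

def investigate_env(percepts):      # competence module (2)
    return "head towards object" if percepts.get("distant_object") else None

def tick(percepts):
    m0 = avoid_obstacles(percepts)  # every module runs on every tick
    m1 = move_around(percepts)
    m2 = investigate_env(percepts)

    # Suppressor node: when module 2 has a target, it suppresses (replaces)
    # module 1's wandering signal with its own.
    heading = m2 if m2 is not None else m1

    # Module 0 normally has precedence: if an obstacle is detected, its output
    # takes over at the effectors and the heading signal is inhibited.
    return m0 if m0 is not None else heading

print(tick({"obstacle": True, "distant_object": True}))  # -> turn away
print(tick({"distant_object": True}))                    # -> head towards object
print(tick({}))                                          # -> wander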
5-11 A Decomposition Based on Task Achieving Behaviors – the layers independently achieve their actions.
From Brooks, “A Robust Layered Control System for a Mobile Robot”, 1985.
5-12 Modal Logic
Narrowly construed, modal logic studies reasoning that involves the use of the expressions ‘necessarily’ and ‘possibly’. However, the term ‘modal logic’ is used more broadly to cover a family of logics with similar rules and a variety of different symbols. A list describing the best known of these logics follows.
Modal Logic (a system of logic whose formal properties resemble certain moral and epistemological concepts; epistemology is the branch of philosophy which studies the nature and scope of knowledge)
□ It is necessary that..
◊ It is possible that..
Deontic Logic (the study of the logical relationships among propositions that assert that actions or states are morally obligatory, permissible, right or wrong)
O It is obligatory that..
P It is permitted that..
F It is forbidden that..
Temporal Logic
G It will always be the case that..
F It will be the case that..
H It has always been the case that..
P It was the case that..
Doxastic Logic (the modal logic of belief and disbelief)
Bx x believes that..
5-13 Modal logic allows the following ideas:
The laundry needs to be done.
You know the laundry needs to be done.
I know that you know that the laundry needs to be done.
You know that I know that you know that the laundry needs to be done.
I believe exercise causes weight loss. [This will affect my actions even if it is not true.]
Allows for conflicting viewpoints.
You can turn in an assignment late, but it is not required that you do so.
If I don’t fill up with gas now, in all future worlds, I run out.
If a student cheats, it is necessary that they get negative points.
If a student cheats, it is possible that they receive an F.
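As an illustration (not from the slides), some of these sentences can be written with the operators listed on the previous slide. The propositional atoms are invented for the example, and K_x ("x knows that..") is assumed here as an epistemic analogue of the doxastic operator B_x.

% Illustrative formalizations; K_x ("x knows that..") is an assumed epistemic
% operator, analogous to the doxastic B_x listed above.
\begin{align*}
K_{\mathit{you}}\,L &\quad \text{You know the laundry ($L$) needs to be done.} \\
K_{\mathit{I}}\,K_{\mathit{you}}\,L &\quad \text{I know that you know that $L$.} \\
B_{\mathit{I}}(\mathit{exercise} \rightarrow \mathit{weightloss}) &\quad \text{I believe exercise causes weight loss.} \\
P(\mathit{late}) \wedge \neg O(\mathit{late}) &\quad \text{Late submission is permitted but not obligatory.} \\
\neg \mathit{fill} \rightarrow F\,\mathit{runout} &\quad \text{If I don't fill up now, I will run out (one possible temporal reading).} \\
\mathit{cheat} \rightarrow \Box\,\mathit{penalty} &\quad \text{If a student cheats, it is necessary that they get negative points.} \\
\mathit{cheat} \rightarrow \Diamond\,\mathit{fail} &\quad \text{If a student cheats, it is possible that they receive an F.}
\end{align*}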
5-14 Advantages of Reactive Agents
Simplicity
Economy
Computational tractability
Robustness against failure
Elegance
5-15 Limitations of Reactive Agents
Agents without environment models must have sufficient information available from the local environment.
If decisions are based on the local environment, how does the agent take into account non-local information (i.e., it has a “short-term” view)?
It is difficult to make reactive agents that learn.
Since behavior emerges from component interactions plus the environment, it is hard to see how to engineer specific agents (no principled methodology exists).
It is hard to engineer agents with large numbers of behaviors (the dynamics of the interactions become too complex to understand).
5-16 Hybrid Architectures
Many researchers have argued that neither a completely deliberative nor a completely reactive approach is suitable for building agents.
They have suggested using hybrid systems, which attempt to marry the classical and alternative approaches.
An obvious approach is to build an agent out of two (or more) subsystems (see the sketch below):
- a deliberative one, containing a symbolic world model, which develops plans and makes decisions in the way proposed by symbolic AI
- a reactive one, which is capable of reacting to events without complex reasoning
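A minimal sketch of such a two-subsystem agent, assuming (as on the next slide) that the reactive component is given precedence; the reactive rules and the stand-in deliberative planner are illustrative assumptions, not a prescribed design.

def reactive_subsystem(percepts):
    """Fast, rule-like reactions; returns an action or None if no rule fires."""
    if percepts.get("obstacle"):
        return "swerve"
    if percepts.get("alarm"):
        return "stop"
    return None

def deliberative_subsystem(world_model, goals):
    """Symbolic deliberation over the world model (a trivial stand-in for a planner)."""
    for goal in goals:
        if goal not in world_model.get("achieved", set()):
            return "work towards " + goal
    return "idle"

def hybrid_agent(percepts, world_model, goals):
    action = reactive_subsystem(percepts)   # the reactive subsystem is consulted first
    if action is not None:                  # ...and takes precedence when it reacts
        return action
    return deliberative_subsystem(world_model, goals)

print(hybrid_agent({"obstacle": True}, {"achieved": set()}, ["deliver sample"]))  # swerve
print(hybrid_agent({}, {"achieved": set()}, ["deliver sample"]))                  # work towards deliver sample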
5-17 Hybrid Architectures
Often, the reactive component is given some kind of precedence over the deliberative one.
In such an architecture, an agent’s control subsystems are arranged into a hierarchy, with higher layers dealing with information at increasing levels of abstraction.
A key problem in such architectures is what kind of control framework to embed the agent’s subsystems in, to manage the interactions between the various layers.
Horizontal layering: the layers are each directly connected to the sensory input and the action output. In effect, each layer itself acts like an agent, producing suggestions as to what action to perform.
Vertical layering: sensory input and action output are each dealt with by at most one layer.
5-18 Hybrid Architectures
With m possible actions suggested by each layer and n layers:
Horizontal layering: up to m^n interactions between layers to consider; introduces a bottleneck in the central control system.
Vertical layering (two pass): at most m^2(n-1) interactions between layers; not fault tolerant to layer failure.
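A quick worked comparison of the two worst-case counts above, using illustrative values m = 3 and n = 4:

\[
\underbrace{m^{n} = 3^{4} = 81}_{\text{horizontal layering}}
\qquad \text{vs.} \qquad
\underbrace{m^{2}(n-1) = 3^{2} \cdot 3 = 27}_{\text{vertical layering, two pass}}
\]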
5-19 Horizontal architectures normally have a mediator, which decides which layer has control of the agent at each moment; this provides for consistency.
Vertical architectures may be one pass (information flows up through the layers) or two pass (information flows up, then control flows back down).
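A minimal sketch contrasting one-pass and two-pass vertical layering; the three layer functions and their behaviors are illustrative assumptions.

def sense_layer(percepts):     # bottom layer: turns raw percepts into features
    return {"blocked": percepts.get("obstacle", False)}

def plan_layer(features):      # top layer: decides an abstract command
    return "detour" if features["blocked"] else "advance"

def act_layer(command):        # bottom layer again (second pass): concrete action
    return {"detour": "turn left", "advance": "move forward"}[command]

def one_pass(percepts):
    """Information flows up; the top layer's output is the action."""
    return plan_layer(sense_layer(percepts))

def two_pass(percepts):
    """Information flows up, then control flows back down to the bottom layer."""
    command = plan_layer(sense_layer(percepts))   # upward pass
    return act_layer(command)                     # downward pass

print(one_pass({"obstacle": True}))   # -> detour
print(two_pass({"obstacle": True}))   # -> turn left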