ISTC-CNR contribution to D2.2

ISTC-CNR contribution to D2.2, September 26-27, 2005

Outline
- Issues and Methodology
  - Anticipation and Deliberation
  - Social Anticipation
- A Sample Scenario
  - Guards and Thieves
- Tools and Techniques
  - Practical Reasoning
  - Hybrid Architectures
  - The Simulator

Anticipation in Deliberation
Anticipation in high-level cognition:
- Predictions are matched not only with perceptions, but also with goals (which are not current states of affairs, and perhaps never will be)
- (Strategic) planning: considering the possible consequences of one's own actions ...

"Social" Anticipation
Anticipating intentional actions:
- for acting on them (e.g. blocking, helping, relying on them)
- for coordination ...
Bases for predicting:
- Theory of mind (I know he wants to go to...)
- Norms (It is forbidden to go to...)
- Categories (Thieves normally do...)
- Reputation (People say that...)
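
The list above can be read as a simple cue-combination scheme. The sketch below (plain C++; the cue sources, weights and all names are hypothetical illustrations, not the project's mechanism) scores an observed agent's candidate actions against the four bases and anticipates the highest-scoring one, which can then be blocked, helped, or relied upon.

// Combining several bases for predicting another agent's next action.
// Illustrative sketch only; cue weights and names are assumptions.
#include <map>
#include <string>
#include <vector>

struct SocialCues {
    std::map<std::string, double> theoryOfMind;   // "I know he wants to go to..." -> likelihood per action
    std::map<std::string, double> norms;          // "It is forbidden to go to..." -> penalty per action
    std::map<std::string, double> category;       // "Thieves normally do..."      -> prior per action
    std::map<std::string, double> reputation;     // "People say that..."          -> adjustment per action
};

double lookup(const std::map<std::string, double>& cue, const std::string& action) {
    auto it = cue.find(action);
    return it == cue.end() ? 0.0 : it->second;
}

// Score each candidate action by a weighted sum of the cues and return the most likely one.
std::string anticipate(const std::vector<std::string>& candidates, const SocialCues& cues) {
    std::string best;
    double bestScore = -1e9;
    for (const auto& a : candidates) {
        double score = 1.0 * lookup(cues.theoryOfMind, a)
                     + 0.5 * lookup(cues.category, a)
                     + 0.3 * lookup(cues.reputation, a)
                     - 0.8 * lookup(cues.norms, a);   // forbidden actions are less likely (for norm-abiding agents)
        if (score > bestScore) { bestScore = score; best = a; }
    }
    return best;
}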

Scenario: Guards and Thieves
In this scenario, robots or agents can have two different roles (guard or thief). Some objects are considered valuable, and the thief's aim is to find and pick up all of them. The thief has a store where it places all the valuables it succeeds in carrying away. The goal of the guard is to protect the valuables. [...]
The scenario raises both INDIVIDUAL and SOCIAL issues.

Individual Issues
TASKS
- Integrating different levels of action control (e.g. routine, reasoned), based on different kinds of expectations (e.g. implicit, explicit), and arbitrating between them by shifting the level of control or by mediating.
- Being able to transform the representations used at the different levels of control, e.g. learning as routinization of behaviours that are first adopted in a deliberative way, or abstracting concepts that are first learned in a trial-and-error way.
QUESTIONS
- How are high-level "decisions" realized by low-level "behaviors"?
- How can control come back from the low to the high level in case of necessity, e.g. errors?
- Skill learning: how are sequences/patterns of actions "compiled" into behaviors? How are concepts "abstracted"?
Interaction between deliberative processes and action control

A "vertical" architecture
- Having (conflicting) goals, plans and behaviors, and different kinds of expectations
- Control shift:
  - from plans to behaviors via (lack of) surprise
  - from behaviors to plans via surprise

Social Issues
QUESTIONS
- What is special about anticipating intentional agents?
- Which social skills can be realized only by means of anticipatory capabilities?
TASKS
- Anticipating the adversary's behaviour (avoiding/intercepting) by using "social" cues
- Modeling reliance, help, delegation, trust

Some examples
- Reliance: guard#1 patrols zone1 and zone2; guard#2 patrols zone2 and zone3. Guard#1 is moving towards zone2. Guard#2 can stay in zone3, relying on guard#1 to patrol zone2.
- Help: thief#1 moves towards zone1 to take obj#1; thief#2 knows that zone1 is patrolled by a guard. Thief#2 can: communicate this to thief#1; distract the guard; (try to) take obj#1 itself (realizing a goal of thief#1).

Two Instruments
- Practical Reasoning: BDI (Jadex)
  - Beliefs, Desires, Intentions
- Hybrid Architectures: AKIRA
  - Distributed control (with schemas)

Using BDI
- Beliefs: declarative knowledge
- Desires: world states that the agent is trying to reach (goals)
- Intentions: the chosen means to achieve the agent's desires, i.e. sequences of actions (plans)
Key mechanisms: representation, processing, deliberation
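
As a rough illustration of how the three attitudes drive a control cycle, here is a minimal BDI-style loop sketched in C++. It is not Jadex code: all type and hook names are invented, and the sensing, option-generation and plan-selection hooks are left to the application.

// Minimal BDI-style control cycle (illustrative sketch; NOT the Jadex API).
#include <algorithm>
#include <functional>
#include <map>
#include <optional>
#include <string>
#include <vector>

struct Beliefs { std::map<std::string, std::string> facts; };     // declarative knowledge
struct Desire  { std::string goal; double priority = 0.0; };      // world state to reach
using Step = std::function<void(Beliefs&)>;
struct Plan    { std::vector<Step> steps; };                      // chosen means = intention

// Domain-specific hooks supplied by the application (names are hypothetical).
struct Agent {
    std::function<void(Beliefs&)> sense;                                         // belief revision from percepts
    std::function<std::vector<Desire>(const Beliefs&)> options;                  // candidate desires
    std::function<std::optional<Plan>(const Beliefs&, const Desire&)> meansEnds; // plan selection

    void runOnce(Beliefs& b) const {
        sense(b);                                                 // 1. revise beliefs
        auto desires = options(b);                                // 2. generate options
        if (desires.empty()) return;
        auto chosen = *std::max_element(desires.begin(), desires.end(),
            [](const Desire& x, const Desire& y) { return x.priority < y.priority; }); // 3. deliberate
        if (auto plan = meansEnds(b, chosen))                     // 4. means-ends reasoning
            for (auto& step : plan->steps) { step(b); sense(b); } // 5. execute and monitor
    }
};

In Jadex itself, beliefs, goals and plans are declared in the agent definition, with plans implemented as Java classes; the sketch only mirrors the sense-deliberate-plan-execute cycle that those declarations feed.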

Using AKIRA
(Fuzzy-based) schema mechanisms:
- Each agent embeds a schema:
  - Rulebase (fuzzy logic)
  - Forward model (fuzzy cognitive maps)
- Context-sensitive command fusion: integrates fuzzy terms weighted by salience
- Schema selection based on prediction success: the schema that predicts better is selected for action control
- Hierarchical: different "formats" of the expectations
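
A minimal sketch of the selection principle follows (not AKIRA code; the scalar one-dimensional sensory value, the salience update rule and all names are simplifying assumptions): each schema keeps a forward model and a controller, its salience tracks how well it has been predicting, and the motor command is the salience-weighted fusion of all schemas' proposals.

// Schema competition by prediction success, with salience-weighted command fusion.
// Illustrative sketch only; it does not use the actual AKIRA API.
#include <cmath>
#include <cstddef>
#include <vector>

struct Schema {
    double salience = 1.0;                         // grows when the schema predicts well
    virtual double predict(double sensed) const = 0;   // forward model: expected next sensory value
    virtual double command(double sensed) const = 0;   // controller: motor command proposed by this schema
    virtual ~Schema() = default;
};

// One control step: compare each schema's previous prediction with the new reading,
// update saliences, and fuse the proposed commands weighted by salience.
double step(std::vector<Schema*>& schemas, std::vector<double>& lastPrediction,
            double sensed, double learningRate = 0.1) {
    double fused = 0.0, totalSalience = 0.0;
    for (std::size_t i = 0; i < schemas.size(); ++i) {
        double error   = std::abs(sensed - lastPrediction[i]);       // prediction error
        double success = std::exp(-error);                           // 1 when perfect, -> 0 when bad
        schemas[i]->salience += learningRate * (success - schemas[i]->salience);
        fused         += schemas[i]->salience * schemas[i]->command(sensed);
        totalSalience += schemas[i]->salience;
        lastPrediction[i] = schemas[i]->predict(sensed);              // prediction for the next step
    }
    return totalSalience > 0.0 ? fused / totalSalience : 0.0;         // fused motor command
}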

The Simulator
- We are interfacing our systems to a simulator (based on ODE, the Open Dynamics Engine)
- Some tasks will be approached using a set of "base actions" (scripts), e.g.:
  - Reach object_x (or location_x)
  - Focus_on object_x (or location_x)
  - ...
- ...when their realization is not our central task (e.g. in the social scenario)
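
The way such scripted base actions are exposed to the higher layers could look roughly like the following (a hypothetical interface; the actual simulator bindings are not shown here): the deliberative layer only starts a script such as reach or focus_on and polls its outcome while the simulator advances.

// Sketch of a scripted "base action" interface between the cognitive layer and the
// simulator (hypothetical names; real bindings may differ).
#include <functional>
#include <map>
#include <string>

enum class Status { Running, Succeeded, Failed };

// A base action is an opaque script: the deliberative layer starts it and polls its
// status; it does not control the low-level motion itself.
struct BaseAction {
    std::function<Status(const std::string& target)> tick;   // advance the script by one step
};

struct ActionLibrary {
    std::map<std::string, BaseAction> actions;                // e.g. "reach", "focus_on"

    // Run a script to completion (or failure), stepping the simulated world each tick.
    Status run(const std::string& name, const std::string& target,
               const std::function<void()>& stepSimulator) {
        Status s = Status::Running;
        while (s == Status::Running) {
            s = actions.at(name).tick(target);
            stepSimulator();                                  // advance the physics simulation
        }
        return s;
    }
};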

What we don't do – opportunities for "vertical" integration
- Some epistemic actions (focus_on obj_x) and some pragmatic actions (reach obj_x) are implemented as scripts
  - Mechanisms for epistemic and pragmatic actions can be added (e.g. attentional shift: a moving object resembling a thief attracts attention)
- We assume many bases for predicting; we do not develop algorithms for statistical prediction
  - Better statistical predictive capabilities can be added

Outline 2/2
- Theoretical problems we intend to tackle
- Properties of the scenario and examples of tasks
- The software and hardware instruments

Theoretical problems we intend to tackle
QUESTIONS
- Recognition of objects from the sensory flow on the basis of prediction
- Integration of the sensory flow in time
- Robustness with respect to contractions or expansions of the sensory flow
- Abstraction ... on the basis of prediction
EXAMPLES OF MECHANISMS
- Predictors with internal dynamics with parameterized durations
- Hierarchical architectures
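
As one concrete (and deliberately simplified) reading of "predictors with internal dynamics with parameterized durations", the sketch below keeps one predictor per object hypothesis, each replaying an expected sonar profile at a speed set by a duration parameter tau; the hypothesis whose predictions accumulate the least error over the sensory flow is the recognised object. The names and the one-dimensional profile representation are illustrative assumptions, not the project's mechanism.

// Predictor with internal dynamics and a parameterized duration (time constant).
// Illustrative sketch: one predictor per object hypothesis; the best-predicting one wins.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct DurationPredictor {
    std::vector<double> expectedProfile;   // expected sensor profile for one traversal of the object
    double tau = 1.0;                      // duration parameter: steps needed to traverse the profile
    double phase = 0.0;                    // internal state: current position along the profile (0..1)
    double cumulativeError = 0.0;

    double predict() const {               // expected next reading, interpolated along the profile
        if (expectedProfile.size() < 2) return expectedProfile.empty() ? 0.0 : expectedProfile.front();
        double idx = phase * (expectedProfile.size() - 1);
        std::size_t i = static_cast<std::size_t>(idx);
        if (i + 1 >= expectedProfile.size()) return expectedProfile.back();
        double frac = idx - i;
        return (1.0 - frac) * expectedProfile[i] + frac * expectedProfile[i + 1];
    }

    void update(double sensed) {           // advance the internal dynamics and score the prediction
        cumulativeError += std::abs(sensed - predict());
        phase = std::min(1.0, phase + 1.0 / tau);
    }
};

// Recognition: feed the same sensory flow to all hypotheses (possibly several taus per
// object, to tolerate contraction/expansion of the flow) and pick the best predictor.
std::size_t recognise(std::vector<DurationPredictor>& hypotheses, const std::vector<double>& flow) {
    for (double sensed : flow)
        for (auto& h : hypotheses) h.update(sensed);
    std::size_t best = 0;
    for (std::size_t i = 1; i < hypotheses.size(); ++i)
        if (hypotheses[i].cumulativeError < hypotheses[best].cumulativeError) best = i;
    return best;
}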

Properties of the scenario and examples of tasks
- Simulated and real robot with ultrasound sensors
- Composite objects larger than the agent, with: a) different shapes; b) different sizes
- The robot navigates around the objects
EXAMPLE OF TASKS
- (Find and) recognise objects on the basis of the sensory flow they generate, independently of their size

The instruments 1/2: a custom 2D C++ simulation

The instruments 2/2: Pioneer 3 robot

Routinization/compilation of sequences/patterns of actions by surprise
- Routinization/compilation of sequences/patterns of actions by surprise. The Watchdog can start by explicitly planning paths (slow and requiring many resources); after a while it "routinizes" a certain path (which no longer needs deliberate control); some expectations are used in the routines for on-line action monitoring. This is in fact a model of "skill learning" as "compilation" of sequences/patterns of actions. The routinization mechanism is based on surprise: when a planned sequence of actions no longer generates surprise, it can safely be routinized. However, the expectation is not lost: it is embedded into the routine (we call this reliability) and used for on-line, automatic action monitoring (e.g. using anticipatory classifiers).
- Passing from routine to deliberate control by surprise. The converse operation (un-compiling skills) also uses surprise. When a surprise is generated during the automatic control of the routine, the system can pass anew from the routine to deliberate control (e.g. activate a goal or a plan). This happens, for example, when the Watchdog encounters an unexpected situation on its path (e.g. a new obstacle, a closed door, another agent passing by). The routine is stopped, more resources and/or attentive control are raised, etc.
- On-line adjusting/tuning of plans and routines; combining deliberative planning with reactive plan execution (including reactive local plan optimization during execution). If the Watchdog sees an intruder, it can build a plan to reach it; if the intruder moves, it can adjust the plan on-line. Minor expectation violations (requiring only fine-tuning of actions) should be handled without "special" mechanisms such as surprise; they simply show that the system is robust and quite fault-tolerant.
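
A compact way to picture the surprise-based mechanism described above (hypothetical thresholds and names, not the actual implementation): a running average of prediction error decides when a planned sequence can be compiled into a routine, and a single large surprise during routine execution hands control back to deliberation.

// Surprise-driven routinization and de-routinization (illustrative sketch).
#include <cmath>
#include <cstddef>
#include <vector>

struct ActionStep {
    double expectedOutcome;      // expectation attached to the step ("reliability" once routinized)
    // ... motor parameters would go here
};

enum class Mode { Deliberate, Routine };

struct SkillCompiler {
    std::vector<ActionStep> sequence;    // the planned path, step by step
    Mode mode = Mode::Deliberate;
    double recentSurprise = 1.0;         // running average of prediction error
    double compileThreshold = 0.05;      // low surprise for long enough -> routinize
    double breakThreshold   = 0.5;       // large surprise during the routine -> back to deliberation

    // Called after each executed step with the observed outcome.
    void observe(std::size_t step, double observedOutcome) {
        double surprise = std::abs(observedOutcome - sequence[step].expectedOutcome);
        recentSurprise = 0.9 * recentSurprise + 0.1 * surprise;   // exponential average

        if (mode == Mode::Deliberate && recentSurprise < compileThreshold)
            mode = Mode::Routine;        // compile: the sequence no longer needs deliberate control
        else if (mode == Mode::Routine && surprise > breakThreshold)
            mode = Mode::Deliberate;     // un-compile: unexpected event, re-activate goals/plans
    }
};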

AKIRA

Visual Search

Fuzzy-based Schemas