How To Implement Social Policies. A Deliberative Agent Architecture Roberto Pedone Rosaria Conte IP-CNR Division "AI, Cognitive and Interaction Modelling" PSS (Project on Social Simulation) V.LE Marx 15, Roma. June 5-8, 2000

The Problem... Multiple agents in common environments face problems posed by a finite world (e.g., resource scarcity) and therefore by social interference. These are problems of social interdependence: shared problems requiring a common solution, i.e., a multi-agent plan (multiple actions for a unique goal). Common solutions are needed that are adopted by the majority. How can we ensure that interdependent but autonomous agents apply common solutions?

Autonomous agents? Self-sufficient: by definition, impossible! Self-interested: what does that mean? Selfish? Autonomous agents have their own criteria to filter external inputs. Beliefs: due to cognitive biases, they may not perceive or understand the problem, its being common, the solution, or its complementarity. Goals: they accept requests only when these are useful for some of their goals. BUT: there is a difference between interests and goals...

How To Achieve Common Solutions Among Autonomous Agents? Two approaches. Bottom-up: emergent, spontaneous processes among adaptive agents. Top-down: designed incentives and sanctions modifying the preferences of rational agents; this involves acquisition of solutions, enforcing mechanisms, and violation.

BUT... Evolutionary processes and Adaptive Agents: socially acceptable outcomes are doubtful! Rational agents require unrealistic conditions (specified, severe and certain sanctions).

Bottom Up. Adaptive Agents. Analogy between biological evolution and social processes: fitter individuals survive; social processes spread and evolve through imitation of fitter agents. But how can agents perceive the fitness of a strategy? "How do individuals get information about average fitness, or even observe the fitnesses of other individuals?" (Chattoe, 1998). And how to make sure that what propagates is what is socially desirable, without the intervention of some deus ex machina (the programmer)?

Top Down. Rational Agents Socially desirable effects are deliberately pursued through incentives and sanctions. Incentives and sanctions induce rational agents to act according to global interest. Rational agents take decisions by calculating the subjective expected value of actions according to their utility function. A rational decider will comply with the norm if utility of incentive (or of avoiding sanction) is higher than utility of transgression.

Effects. Rational deciders will violate a norm n_i as soon as one or more of the following conditions applies: sanctions are not imposed; sanctions are not specified; the sanction for violating n_i is lower than the value of transgression, given an equal probability (1/2) that the sanction is applied; the sanction for violating an incompatible norm is higher; the sanction for violating the norm is never or rarely applied. A fast decline in norm compliance is likely to follow from any of the above conditions. Then, what?
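The expected-utility rule behind these conditions can be sketched in a few lines. This is an illustrative toy model, not code from the presentation; the function name and the numeric values are invented for the example.

```python
# Hypothetical sketch of a rational decider facing a norm.
# The agent complies only while the expected cost of the sanction
# outweighs the value of transgression.

def complies(value_of_transgression: float,
             sanction: float,
             p_sanction: float) -> bool:
    """Expected-utility rule: comply iff E[sanction] > value of transgression."""
    return p_sanction * sanction > value_of_transgression

# With equal probability of application (1/2), a sanction must be more
# than twice the value of transgression to deter:
assert complies(value_of_transgression=10, sanction=25, p_sanction=0.5)
assert not complies(value_of_transgression=10, sanction=15, p_sanction=0.5)

# Compliance collapses as the sanction becomes rarely applied:
assert not complies(value_of_transgression=10, sanction=25, p_sanction=0.1)
```

This makes the slide's point concrete: lowering either the size or the probability of the sanction flips the same decider from compliance to violation.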

Top-Down and Bottom-Up. Top down: agents acquire S; agents decide to accept S; agents have S even if they do not apply it. Bottom up: agents infer S from others; communicate S to one another; control each other. Difference from previous approaches: S is represented as a specific object in the mind and can travel from one mind to the other. (Slide dialogue: "Do you know that S?" "She believes S. Should I?" "These guys believe S. Will they act accordingly?")

But How To Tell S? Sanction: may be sufficient (but unnecessary) for acceptance, yet it is insufficient (and unnecessary) for recognition. (Slide dialogue: "Shut up, otherwise I'll knock you down!" "This guy is crazy, better to do as he likes!")

Believed Obligations: may be insufficient for acceptance, but necessary & sufficient for recognition! (Slide dialogue: "Shut up!" "NO!" "You ought to!" "I know, but I don't care!")

This requires a cognitive deliberative agent. To communicate, infer, and control: a meta-level representation (beliefs about representations of S). To decide whether to accept or reject a believed S: a meta-meta-level representation (decisions about beliefs about representations of S). When the content of the representation is a norm, we have Deliberative Normative Agents. Levels of representation: S, BEL(S), GOAL(S), BEL(GOAL(S)).
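The nesting of representation levels can be sketched with a small wrapper type. This is an illustrative data structure, not the formalism of the original architecture; the class name `Rep` and the operator labels are invented here.

```python
# Minimal sketch of meta-level representations: a content S can be
# wrapped in BEL(...) or GOAL(...), and those wrappings nest.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rep:
    """A (possibly nested) mental representation."""
    op: str          # "NORM" at the base, "BEL" or "GOAL" above it
    content: object  # either a plain content or another Rep

S = Rep("NORM", "reciprocate-deliveries")  # the norm itself
bel_S = Rep("BEL", S)                      # meta-level: belief about S
goal_S = Rep("GOAL", S)                    # goal whose content is S
bel_goal_S = Rep("BEL", goal_S)            # meta-meta-level: BEL(GOAL(S))

def depth(r: Rep) -> int:
    """Nesting depth of a representation."""
    return 1 + (depth(r.content) if isinstance(r.content, Rep) else 0)

assert depth(S) == 1 and depth(bel_S) == 2 and depth(bel_goal_S) == 3
```

Because S is an explicit object rather than hard-wired behaviour, it can be communicated, attributed to others, and accepted or rejected, which is exactly what the slide requires of a deliberative agent.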

Deliberative Normative Agents Are able to recognize the existence of norms Can decide to adopt a norm for different reasons (different meta-goals) Can deliberately follow the norm or violate it in specific cases Can react to violations of the norm by other agents Require more realistic conditions!

Deliberative Normative Agents: can reason about norms; can communicate norms; can negotiate about norms. This implies that they have norms as mental objects, and have different levels of representations with links among them.

A Deliberative Agent Architecture With Norms (Castelfranchi et al., 1999) DESIRE I (Brazier et al., 1999)

DESIRE II: Main Components. Communication; Agent Interaction Management; Perception; World Interaction Management; Maintenance of Agent Information (agent models); Maintenance of World Information (world models); Maintenance of Society Information (social world models); Own Process Control (mental states & processing: goal generation & dynamics, belief acceptance, decision-making).

DESIRE III

DESIRE IV Own Process Control

DESIRE V: Own Process Control Components. Information flow: Norm Management (norm beliefs) -> Strategy Management (candidate goals) -> Goal Management (selected goals) -> Plan Management (chosen plan -> action).
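The information flow through Own Process Control can be sketched as a pipeline of plain functions. This is a hypothetical reduction of the slide's diagram, not DESIRE code; all names and the example norms are illustrative.

```python
# Sketch of the Own Process Control flow:
# norm beliefs -> candidate goals -> selected goals -> plan -> action.

def norm_management(norm_beliefs):
    """Only adopted norms generate normative goals (non-adopted ones
    would merely help predict other agents; not modelled here)."""
    return [n["goal"] for n in norm_beliefs if n["adopted"]]

def strategy_management(desires, normative_goals):
    """Candidate goals = own desires plus normative goals."""
    return desires + normative_goals

def goal_management(candidates, preferred):
    """Select goals; a norm supplies the preference criterion."""
    return [g for g in candidates if g in preferred] or candidates[:1]

def plan_management(goal, plans):
    """Choose a plan for the selected goal and emit its first action."""
    return plans[goal][0]

norms = [{"goal": "reciprocate", "adopted": True},
         {"goal": "hoard", "adopted": False}]
candidates = strategy_management(["profit"], norm_management(norms))
selected = goal_management(candidates, preferred=["reciprocate"])
action = plan_management(selected[0], {"reciprocate": ["deliver-goods"]})
assert action == "deliver-goods"
```

The staged structure mirrors the slide: norms enter once, at Norm Management, and then shape every later stage through the goals they generate.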

DESIRE VI: Norms and goals. Non-adopted norms: useful for coordination (predicting the behaviour of the other agents). Adopted norms: impact on goal generation, as one of the possible 'sources of goals' -> normative goals; impact on goal selection, by providing criteria (e.g., preference criteria) for selecting among existing goals.

DESIRE VII: Norms and plans Norms may generate plans Norms may select plans Norms may select actions E.g. the norm “be kind to colleagues” may lead to a preferred plan to reach a goal within an organisation.
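The "norms may select plans" point can be sketched as a filter over candidate plans: a plan is admissible only if none of its actions violates an adopted norm. This is an invented toy example built around the slide's "be kind to colleagues" norm; the action names and the forbidden-action table are assumptions.

```python
# A norm acts as a plan-selection criterion: it rules out plans
# containing actions the norm forbids.

FORBIDDEN_BY_NORM = {"be-kind-to-colleagues": {"threaten", "shout"}}

plans_for_goal = [
    ["threaten", "collect-report"],      # fast but norm-violating
    ["ask-politely", "collect-report"],  # slower but kind
]

def admissible(plan, adopted_norms):
    """True iff no action in the plan is forbidden by an adopted norm."""
    forbidden = set().union(*(FORBIDDEN_BY_NORM[n] for n in adopted_norms))
    return not forbidden & set(plan)

chosen = next(p for p in plans_for_goal
              if admissible(p, ["be-kind-to-colleagues"]))
assert chosen == ["ask-politely", "collect-report"]
```

The same filter illustrates flexibility: dropping the norm from `adopted_norms` makes the faster plan admissible again, so violation remains available as a deliberate choice rather than being ruled out by design.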

To sum up Adaptive agents: fit = socially acceptable? Rational agents are good enough if sanctions are severe, effective, and certain. Otherwise, collapse in compliance...

With Deliberative Agents Acquisition of norms online Communication and negotiation (social monitoring and control) Flexibility: –agents follow the norm whenever possible –agents violate the norm (sometimes) –agents violate the norm always if possible But graceful degradation with uncertain sanctions!

Work in Progress. DESIRE is used for simulation-based experiments on the role of deliberative agents in distributed social control: a market with heterogeneous, interdependent agents that make contracts in the morning (resource exchange under survival pressure) and deliver in the afternoon (norm of reciprocity). Violations can be found out, after which the news spreads through the group, the event is denounced, or both. Objectives: What are the effects of violation? (Under given environmental conditions, norms may be non-adaptive...) When and why do agents react to violation? What are the effects of reaction?

What To Do Next. DESIRE is complex. Computationally: too few agents; can simpler languages implement meta-level representations? Mentally: too much deliberation; emotional/affective enforcement? NB -> E (moral sense, guilt) -> NG -> NA. Emotional shortcut: others' evaluations -> SE (shame) -> NA; implicit norms, implicit normative goals. Affective computing! But integrated with meta-level representations.