1 Multi-Agent Systems University “Politehnica” of Bucharest Spring 2011 Adina Magda Florea http://turing.cs.pub.ro/mas_11 curs.cs.pub.ro

2 Course goals
Multi-agent systems (MAS) may be viewed as a collection of distributed autonomous artifacts capable of accomplishing complex tasks through interaction, coordination, collective intelligence and the emergence of patterns of behavior.
By the end of this course, you will know:
the basic ideas, models, and paradigms offered by intelligent agents and MAS
how to build multi-agent systems or select the right MAS framework for solving a problem
how to use agent technology in different areas of application
what agents bring as compared to distributed processing or object-oriented software development

3 Course content
What are agents and MAS?
Agent architectures
Communication
Knowledge representation
Distributed planning
Coordination
Auctions
Negotiation
Agent oriented programming
MAS learning
Agents and web services
Agents and MAS applications

4 Course requirements
Course grades:
Mid-term exam 20%
Final exam 30%
Course activity %
Projects %
Laboratory 20%
Requirements: min 7 lab attendances, min 50% of term activity (mid-term exam, projects, lab)
Academic Honesty Policy
It will be considered an honor code violation to give or use someone else's code or written answers, either for the assignments or the exam tests. If such a case occurs, we will take action accordingly.

5 Lecture 1: Introduction
Motivation for agents
Definitions of agents - agent characteristics, taxonomy
Agents and objects
Multi-Agent Systems
Agent’s intelligence
Areas of R&D in MAS
Exemplary application domains

6 Motivations for agents
Large-scale, complex, distributed systems: understand, build, manage
Open and heterogeneous systems - build components independently
Distribution of resources
Distribution of expertise
Need for personalization and customization
Interoperability of pre-existing systems / integration of legacy systems

7 Agent?
The term agent is used frequently nowadays in sociology, biology, cognitive psychology, social psychology, and computer science / AI.
Why agents? What are they in computer science? Do they bring us anything new in modelling and constructing our applications?
There is much discussion of what (software) agents are and of how they differ from programs in general.

8 What is an agent (in computer science)?
There is no universally accepted definition of the term agent, and there is a good deal of ongoing debate and controversy on this subject.
The agent paradigm is often taken to be one necessarily endowed with intelligence. Are all computational agents intelligent?
An agent is more often defined by its characteristics - many of which may be considered manifestations of some aspect of intelligent behaviour.

9 Agent definitions
“Most often, when people use the term ‘agent’ they refer to an entity that functions continuously and autonomously in an environment in which other processes take place and other agents exist.” (Shoham, 1993)
“An agent is an entity that senses its environment and acts upon it” (Russell, 1997)

10 “Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions.” (Hayes-Roth, 1995)
“Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program, with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user’s goals or desires.” (the IBM Agent)

11 “Agent = a hardware or (more usually) a software-based computer system that enjoys the following properties:
autonomy - agents operate without the direct intervention of humans or others, and have some kind of control over their actions and internal state;
social ability - agents interact with other agents (and possibly humans) via some kind of agent-communication language;
flexible autonomous action:
- reactivity: agents perceive their environment and respond in a timely fashion to changes that occur in it;
- pro-activeness: agents do not simply act in response to their environment, they are able to exhibit goal-directed behaviour by taking initiative.”
(Wooldridge and Jennings, 1995)

12 Identified characteristics
Two main streams of definitions:
- define an agent in isolation
- define an agent in the context of a society of agents → social dimension → MAS
Two types of definitions:
- does not necessarily incorporate intelligence
- must incorporate a kind of AI behaviour → intelligent agents

13 Agent characteristics
act on behalf of a user or another program
autonomous
sense the environment and act upon it / reactivity
purposeful action / pro-activity
function continuously / persistent software
mobility?
goals, rationality
reasoning, decision making - cognitive
learning / adaptation
interaction with other agents - social dimension
other basis for intelligence?

14 Agent Environment
Environment properties:
- Accessible vs inaccessible
- Deterministic vs nondeterministic
- Episodic vs non-episodic
- Static vs dynamic
- Open vs closed
[Diagram: the agent receives sensor input from the environment and acts upon it through action output.]
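
To make this loop concrete, here is a minimal sketch, in Python, of an agent repeatedly taking sensor input from its environment and producing action output; the toy Environment and VacuumAgent classes below are illustrative assumptions, not part of the course material.

```python
# Minimal sense-act loop: the agent repeatedly reads sensor input from the
# environment, selects an action, and the environment changes in response.
# All names here (Environment, VacuumAgent, percept, ...) are illustrative only.

class Environment:
    """A trivial two-location 'dirt' world, used only to exercise the loop."""
    def __init__(self):
        self.dirty = {"A": True, "B": True}
        self.agent_location = "A"

    def percept(self):
        # Sensor input: where the agent is and whether that square is dirty.
        return self.agent_location, self.dirty[self.agent_location]

    def execute(self, action):
        # Action output changes the environment state.
        if action == "clean":
            self.dirty[self.agent_location] = False
        elif action == "move":
            self.agent_location = "B" if self.agent_location == "A" else "A"

class VacuumAgent:
    def select_action(self, percept):
        location, is_dirty = percept
        return "clean" if is_dirty else "move"

env, agent = Environment(), VacuumAgent()
for step in range(4):                      # a few turns of the sense-act loop
    action = agent.select_action(env.percept())
    env.execute(action)
    print(step, action, env.dirty)
```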

15 Multi-agent systems
Many entities (agents) in a common environment
[Diagram: agents in a common environment, each with an influence area, connected by interactions.]

16 MAS - many agents in the same environment
Interactions among agents - high-level interactions
Interactions for:
- coordination
- communication
- organization
Coordination:
- collectively motivated / interested
- self-interested:
  - own goals / indifferent
  - own goals / competition / competing for the same resources
  - own goals / competition / contradictory goals
  - own goals / coalitions

17 Communication
- communication protocol
- communication language
- negotiation to reach agreement
- ontology
Organizational structures
- centralized vs decentralized
- hierarchical / markets
("cognitive agent" approach)
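
As a rough illustration of what a message in a high-level agent-communication language typically carries (a performative, the participants, the content, and the language/ontology needed to interpret that content), here is a sketch; the AgentMessage class and its field values are assumptions for illustration, not the API of any real ACL library.

```python
# Sketch of a high-level agent message: a performative (speech act), the
# participants, the content, and the content language / ontology that both
# agents must share to interpret it. Illustrative only, not a real ACL API.
from dataclasses import dataclass

@dataclass
class AgentMessage:
    performative: str          # e.g. "request", "inform", "propose", "accept"
    sender: str
    receiver: str
    content: str               # expression in the agreed content language
    language: str = "sl"       # identifier of the content language (assumed)
    ontology: str = "auction"  # shared vocabulary used by both agents (assumed)

# One step of a negotiation: a seller proposes a price to a buyer.
offer = AgentMessage(
    performative="propose",
    sender="seller-1",
    receiver="buyer-7",
    content="(price item-42 150)",
)
print(offer)
```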

18 How do agents acquire intelligence?
Cognitive agents
The model of human intelligence and the human perspective of the world → characterise an intelligent agent using symbolic representations and mentalistic notions:
knowledge - John knows humans are mortal
beliefs - John took his umbrella because he believed it was going to rain
desires, goals - John wants to possess a PhD
intentions - John intends to work hard in order to have a PhD
choices - John decided to apply for a PhD
commitments - John will not stop working until getting his PhD
obligations - John has to work to make a living
(Shoham, 1993)
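
A toy sketch of how such mentalistic notions might be written down as an agent's state; the MentalState class and its fields are assumptions made for illustration and do not follow any particular BDI framework.

```python
# Illustrative-only sketch: an agent state described with the mentalistic
# notions listed above (beliefs, desires/goals, intentions).
from dataclasses import dataclass, field

@dataclass
class MentalState:
    beliefs: set = field(default_factory=set)        # what the agent holds true
    desires: set = field(default_factory=set)        # states it would like to bring about
    intentions: list = field(default_factory=list)   # courses of action it is committed to

john = MentalState(
    beliefs={"humans are mortal", "it is going to rain"},
    desires={"possess a PhD"},
    intentions=["apply for a PhD", "work hard on the thesis", "take the umbrella"],
)
print(john.intentions)
```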

19 Premises
Such a mentalistic or intentional view of agents - a kind of "folk psychology" - is not just another invention of computer scientists but a useful paradigm for describing complex distributed systems.
The complexity of such a system, or the fact that we cannot know or predict the internal structure of all its components, seems to imply that we must rely on animistic, intentional explanations of system functioning and behavior.
Is this the only way agents can acquire intelligence?

20 Comparison with AI - an alternative approach to realizing intelligence: the sub-symbolic level of neural networks
An alternative model of intelligence in agent systems:
Reactive agents
Simple processing units that perceive and react to changes in their environment. They do not have a symbolic representation of the world and do not use complex symbolic reasoning.
The advocates of reactive agent systems claim that intelligence is not a property of the active entity but is distributed in the system, and stems from the interaction between the many entities of the distributed structure and the environment.

21 Exemplary problems
The wise men problem
A king, wishing to know which of his three wise men is the wisest, paints a white spot on each of their foreheads, tells them at least one spot is white, and asks each to determine the color of his own spot. After a while the smartest announces that his spot is white, reasoning as follows: "Suppose my spot were black. The second wisest of us would then see a black and a white and would reason that if his spot were black, the dumbest would see two black spots and would conclude that his spot is white on the basis of the king's assurance. He would have announced it by now, so my spot must be white."
In formalizing the puzzle, we do not wish to formalize reasoning about how fast other people reason. Therefore, we imagine that either the king asks the wise men in sequence whether they know the colors of their spots, or that he asks synchronously, "Do you know the color of your spot?", getting a chorus of noes. He asks again with the same result, but on the third asking they answer that their spots are white. Needless to say, we are also not formalizing any notion of relative wisdom.
The Prisoner's Dilemma
The two players in the game can choose between two moves, either "cooperate" or "defect". The idea is that each player gains when both cooperate, but if only one of them cooperates, the other one, who defects, will gain more. If both defect, both lose (or gain very little), but not as much as the "cheated" cooperator whose cooperation is not returned. The game and its different outcomes are summarized in the table below, where hypothetical "points" are given as an example of how the differences in result might be quantified; each cell shows (A's points, B's points).

Player A \ Player B    Defect    Cooperate
Defect                 2, 2      5, 0
Cooperate              0, 5      3, 3
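
The payoff table can also be read off programmatically; the short sketch below uses the hypothetical points from the table and simply checks that, in a single round, Defect gives A a higher payoff than Cooperate whatever B plays, even though mutual cooperation (3, 3) is better for both than mutual defection (2, 2).

```python
# Payoff matrix for one round of the Prisoner's Dilemma, indexed as
# (A's move, B's move) -> (A's points, B's points); values from the table above.
PAYOFF = {
    ("defect", "defect"):       (2, 2),
    ("defect", "cooperate"):    (5, 0),
    ("cooperate", "defect"):    (0, 5),
    ("cooperate", "cooperate"): (3, 3),
}

# For any fixed move of B, A gets strictly more by defecting...
for b_move in ("defect", "cooperate"):
    a_defect = PAYOFF[("defect", b_move)][0]
    a_coop = PAYOFF[("cooperate", b_move)][0]
    print(f"B plays {b_move}: A gets {a_defect} by defecting, {a_coop} by cooperating")
# ...yet if both players follow that reasoning they end up with (2, 2) instead of (3, 3).
```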

22 The problem of prey and predators
Problem definition: coordinate the actions of the predators so that they surround the prey animals as fast as possible, a surrounded prey animal being considered dead. The prey and the predators (both are agents) move over a space represented in the form of a grid. The objective is for the predators to capture the prey animals by surrounding them. The following hypotheses are laid down:
1. The dimension of the environment is finite.
2. The predators and the prey animals move at fixed speeds, and generally at the same speed.
3. The prey animals move in a random manner, making Brownian movements.
4. The predators can use the corners and edges to block a prey animal's path.
5. The predators have a limited perception of the world that surrounds them: they can see the prey only if it is in one of the squares within their field of perception.
Cognitive approach
Detection of prey animals
Setting up the hunting team; allocation of roles
Reorganisation of teams
Necessity for dialogue/communication and for coordination
Predator agents have goals; they appoint a leader that organizes the distribution of work and coordinates the actions.
Reactive approach
The prey emit a signal whose intensity decreases in proportion to distance - it plays the role of attractor for the predators.
Hunters emit a signal which acts as a repellent for other hunters, so as not to find themselves at the same place.
Each hunter is attracted by the prey and (weakly) repelled by the other hunters (see the sketch below).
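
A minimal sketch of the reactive rule for a single hunter on the grid, under simplifying assumptions of my own (unit-vector attraction towards the prey, weak unit-vector repulsion from the other hunters, ignoring how the signal intensity decays with distance); the next_step function is illustrative, not the original model.

```python
# Reactive predator movement: each hunter moves along the sum of an attractive
# signal from the prey and weak repulsive signals from the other hunters.
# A sketch of the reactive approach described above, with simplified signals.
import math

def unit(dx, dy):
    d = math.hypot(dx, dy) or 1.0
    return dx / d, dy / d

def next_step(hunter, prey, other_hunters, repulsion=0.3):
    # Attraction: unit vector pointing from the hunter towards the prey.
    ax, ay = unit(prey[0] - hunter[0], prey[1] - hunter[1])
    # Weak repulsion: push away from each of the other hunters, scaled down.
    for ox, oy in other_hunters:
        rx, ry = unit(hunter[0] - ox, hunter[1] - oy)
        ax, ay = ax + repulsion * rx, ay + repulsion * ry
    # Move one grid cell in the dominant direction of the combined signal.
    if abs(ax) >= abs(ay):
        return hunter[0] + (1 if ax > 0 else -1), hunter[1]
    return hunter[0], hunter[1] + (1 if ay > 0 else -1)

print(next_step(hunter=(0, 0), prey=(5, 2), other_hunters=[(1, 0), (0, 3)]))
```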

23 Emotional agents
Is intelligence only optimal action towards a goal? Only rational behaviour?
Emotional agents - a computable science of emotions
Virtual actors:
- listen, through speech recognition software, to people
- respond, in real time, with morphing faces, music, text, and speech
Emotions:
- appraisal of a situation as an event: joy, distress
- presumed value of a situation as an effect affecting another: happy-for, gloating, resentment, jealousy, envy, sorry-for
- appraisal of a situation as a prospective event: hope, fear
- appraisal of a situation as confirming or disconfirming an expectation: satisfaction, relief, fears-confirmed, disappointment
Manifest temperament, control of emotions

24 MAS links with other disciplines
[Diagram: MAS at the centre, linked to related disciplines - economic theories, decision theory, OOP/AOP, distributed systems, artificial intelligence and DAI, sociology, psychology - and to properties such as markets, autonomy, rationality, communication, learning, mobility, proactivity, cooperation, organizations, reactivity, character.]

25 Areas of R&D in MAS
Agent architectures
Knowledge representation: of the world, of itself, of the other agents
Communication: languages, protocols
Planning: task sharing, result sharing, distributed planning
Coordination, distributed search
Decision making: negotiation, markets, coalition formation
Learning
Organizational theories
Norms
Trust and reputation

26 Areas of R&D in MAS
Implementation:
- agent programming: paradigms, languages
- agent platforms
- middleware, mobility, security
Applications:
- industrial applications: real-time monitoring and management of manufacturing and production processes, telecommunication networks, transportation systems, electricity distribution systems, etc.
- business process management, decision support
- eCommerce, eMarkets
- information retrieval and filtering
- human-computer interaction
- CAI, Web-based learning, CSCW
- PDAs
- entertainment

27 Agents in action
NASA’s Earth Observing-1 satellite, which began operation in 2000, was recently turned into an autonomous agent testbed. Image Credit: NASA
NASA uses autonomous agents to handle tasks that appear simple but are actually quite complex. For example, one mission goal handled by autonomous agents is simply to not waste fuel. But accomplishing that means balancing multiple demands, such as staying on course and keeping experiments running, as well as dealing with the unexpected. "What happens if you run out of power and you're on the dark side of the planet and the communications system is having a problem? It's all those combinations that make life exciting," says Steve Chien, principal scientist for automated planning and scheduling at the NASA Jet Propulsion Laboratory in Pasadena, Calif.

28 TAC SCM Negotiation was one of the key agent capabilities tested at the conference's Trading Agent Competition. In one contest, computers ran simulations of agents assembling PCs. The agents were operating factories, managing inventories, negotiating with suppliers and buyers, and making decisions based on a range of variables, such as the risk of taking on a big order even if all the parts weren't available. If an agent made an error in judgment, the company could face financial penalties and order cancellations.

29 Butler agent
Imagine your very own mobile butler, able to travel with you and organise every aspect of your life, from the meetings you have to the restaurants you eat in.
The program works through mobile phones and is able to determine users' preferences and use the web to plan business and social events.
And like a real-life butler, the relationship between the phone agent and the user improves as they get to know each other better. The learning algorithms will allow the butler to arrange meetings without the need to consult constantly with the user to establish their requirements.

30 Robocup agents The goal of the annual RoboCup competitions, which have been in existence since 1997, is to produce a team of soccer-playing robots that can beat the human world champion soccer team by the year 2050.

31 Swarms: Intelligent Small World Autonomous Robots for Micro-manipulation
A leap forward in robotics research, combining experts in microrobotics, in distributed and adaptive systems, as well as in self-organising biological swarm systems.
Facilitate the mass-production of microrobots, which can then be employed as a "real" swarm consisting of up to 1,000 robot clients. These clients will all be equipped with limited, pre-rational on-board intelligence. The swarm will consist of a huge number of heterogeneous robots, differing in the type of sensors, manipulators and computational power. Such a robot swarm is expected to perform a variety of applications, including micro-assembly and biological, medical or cleaning tasks.

32 Intelligent IT Solutions
Goal-Directed™ Agent technology. The AdaptivEnterprise™ Solution Suite allows businesses to migrate from traditionally static, hierarchical organizations to dynamic, intelligent distributed organizations capable of addressing constantly changing business demands.
Supports a large number of variables, high variety and frequent occurrence of unpredictable external events.

33 True UAV Autonomy In a world first, truly autonomous, Intelligent Agent-controlled flight was achieved by a Codarra ‘Avatar’ unmanned aerial vehicle (UAV). The flight tests were conducted in restricted airspace at the Australian Army’s Graytown Range about 60 miles north of Melbourne. The Avatar was guided by an on-board JACK™ intelligent software agent that directed the aircraft’s autopilot during the course of the mission.

34 Information agents
Personal agents (PDA):
- provide "intelligent" and user-friendly interfaces
- observe the user and learn the user’s profile
- sort, classify and administrate e-mails, organize and schedule the user's tasks
- in general, agents that automate the routine tasks of the users
Web agents:
- tour guides
- search engines
- indexing agents - human indexing
- FAQ finders - spider indexing
- expertise finder

35 Agents in eLearning
Agents’ role in e-learning
Enhance e-learning content and experience:
- give help, advice, feedback
- act as a peer learner
- participate in assessments
- participate in simulations
- personalize the learning experience
Enhance LMSs:
- facilitate participation
- facilitate interaction
- facilitate the instructor’s activities

36 Agents for e-Commerce
E-commerce transactions:
- business-to-business (B2B)
- business-to-consumer (B2C)
- consumer-to-consumer (C2C)
Difficulties of eCommerce:
- trust
- privacy and security
- billing
- reliability

