Software Agents CS 486 January 29, 2003

What is an agent?
"Agent" is one of the more ubiquitous buzzwords in computer science today.
–It's used for almost any piece of software.
"I know an agent when I see one" (and the paperclip is not one).

Examples
–News-filtering agents
–Shopbots/price-comparison agents
–Bidding agents
–Recommender agents
–Personal assistants
–Middle agents/brokers
–Etc.

Real-world agents
–Secret agents
–Travel agents
–Real estate agents
–Sports/showbiz agents
–Purchasing agents
What do these jobs have in common?

What is an agent?
An agent is a program with:
–Sensors (inputs)
–Effectors (outputs)
–An environment
–The ability to map inputs to outputs
But what program isn't an agent, then?
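The sensors-to-effectors view above can be sketched as a minimal agent loop. This is a hypothetical illustration (a thermostat), not an example from the slides; all names are invented:

```python
# Minimal sketch of the agent abstraction: a program that maps percepts
# (sensor readings) to actions (effector commands), with internal state.

class ThermostatAgent:
    """A trivial agent: senses temperature, acts on a heater."""

    def __init__(self, target):
        self.target = target  # internal state: the goal temperature

    def act(self, percept):
        """Map one sensor reading to one effector command."""
        if percept < self.target - 1:
            return "heater_on"
        elif percept > self.target + 1:
            return "heater_off"
        return "no_op"

agent = ThermostatAgent(target=20)
print(agent.act(17))  # heater_on
print(agent.act(23))  # heater_off
```

The slide's question still applies: by this definition, almost any program with inputs and outputs qualifies, which is why the later slides add further conditions.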

What is an agent?
Can perform domain-oriented reasoning.
–Domain-oriented: the program has some specific knowledge about a particular area.
–Not completely general.
Reasoning – what does this mean?
–Does the program have to plan ahead?
–Can it be reactive?
–Must it be declarative?

What is an agent?
Agents must be able to:
–Communicate
–Negotiate
But what do these terms mean?
–Language? Are pictures and GUIs communication?
–How sophisticated is "negotiation"?
Communication should "degrade gracefully".

What is an agent?
Lives in a complex, dynamic environment
–Getting at the notion of a complicated problem
Has a set of goals
–An agent must have something it intends to do. (We'll return to this idea.)
Persistent state
–Distinguishes agents from subroutines, servlets, etc.

What is an agent?
Autonomy/autonomous execution
Webster's:
–Autonomy: "The quality or state of being self-governing"
More generally, being able to make decisions without direct guidance.
–Authority, responsibility

Autonomy
Autonomy is typically limited or restricted to a particular area.
–"Locus of decision making"
Within a prescribed range, an agent is able to decide for itself what to do.
–"Find me a flight from SF to NYC on Monday."
–Note: I didn't say what to optimize – I'm allowing the agent to make tradeoffs.

What is an agent?
Not black and white.
Like "object", it's more a useful characterization than a strict category.
It makes sense to refer to something as an agent if it helps the designer to understand it.
–Some general characteristics: autonomous, goal-oriented, flexible, adaptive, communicative, self-starting

Objects vs. Agents
So how are agents different from objects?
–Objects: passive, noun-oriented, receivers of action.
–Agents: active, task-oriented, able to take action without receiving a message.

Examples of agent technology, revisited
eBay bidding agents
–Very simple – can watch an auction and increment the price for you.
Shopping agents (Dealtime, evenBetter)
–Take a description of an item and search shopping sites.
–Are these agents?
Recommender systems (Firefly, Amazon, Launch, Netflix)
–Users rate some movies/music/things, and the agent suggests things they might like.
–Are these agents?

More examples of agents
Brokering
–Constructing links between merchants, certificate authorities, customers, and agents
Auction agents
–Negotiate payment and terms.
Conversational/NPC agents (Julia)
Remote Agent (NASA)

The Intentional Stance
We often speak of programs as if they are intelligent, sentient beings:
–The compiler can't find the linker.
–The database wants the schema to be in a different format.
–My program doesn't like that input. It expects the last name first.
Treating a program as if it is intelligent is called the intentional stance.
It doesn't matter whether the program really is intelligent; it's helpful to us as programmers to think as if it is.

The Knowledge Level
The intentional stance leads us to program agents at the knowledge level (Newell).
–Reasoning about programs in terms of: facts, goals, desires/needs/wants/preferences, and beliefs.
This is often referred to as declarative programming.
We can think of this as an abstraction, just like object-oriented programming.
–Agent-oriented programming

Example
Consider an agent that will find books for me that I'm interested in.
States: a declarative representation of outcomes.
–hasBook("Moby Dick")
Facts: categories of books, bookseller websites, etc.
Preferences: a ranking over states.
–hasBook("Neuromancer") > hasBook("Moby Dick")
–hasBook(b1) & category(b1) == SciFi > hasBook(b2) & category(b2) == Mystery

Example
Goals: find a book that satisfies my preferences.
–Take actions that improve the world state.
Beliefs: used to deal with uncertainty.
–May(likes(Chris, "Harry Potter"))
–Prob(likes(Chris, "Harry Potter")) == 0.10
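As a rough illustration of this knowledge-level state (all names, sellers, and values here are invented for the sketch, not from any real system), the book agent's facts, preference ranking, and beliefs might be represented declaratively:

```python
# Toy knowledge-level state for the book agent: facts, a preference
# ranking over outcome states, and probabilistic beliefs. Purely illustrative.

facts = {
    "category": {"Neuromancer": "SciFi", "Moby Dick": "Classic"},
    "sellers": ["bookstore_a", "bookstore_b"],  # hypothetical sellers
}

# Preferences as a ranking: earlier entries are more preferred outcomes.
preference_order = ['hasBook("Neuromancer")', 'hasBook("Moby Dick")']

def prefers(state_a, state_b):
    """True if state_a is ranked strictly above state_b."""
    return preference_order.index(state_a) < preference_order.index(state_b)

# Beliefs: degrees of confidence used to handle uncertainty.
beliefs = {'likes(Chris, "Harry Potter")': 0.10}

print(prefers('hasBook("Neuromancer")', 'hasBook("Moby Dick")'))  # True
```

Nothing here specifies *how* the agent finds a book; the point of the knowledge level is that the state is described declaratively, independent of the procedure that acts on it.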

Rational Machines
How do we determine the right thing for an agent to do?
If the agent's internal state can be described at the knowledge level, we can describe the relationship between its knowledge and its goals.
Newell's Principle of Rationality:
–If an agent has the knowledge that an action will lead to the accomplishment of one of its goals, then it will select that action.

Preferences and Utility
Agents will typically have preferences.
–This is declarative knowledge about the relative value of different states of the world.
–"I prefer ice cream to spinach."
Often, the value of an outcome can be quantified (perhaps in monetary terms).
This allows the agent to compare the utility (or expected utility) of different actions.
A rational agent is one that maximizes expected utility.

Example
Again, consider our book agent.
If I can tell it how much value I place on different books, it can use this to decide what actions to take.
–prefer(SciFi, Fantasy), prefer(SciFi, Mystery)
–like(Fantasy), like(Mystery)
–like(book) & not_buying(otherBook) -> buy(book)
How do we choose whether to buy Fantasy or Mystery?

Example
If my agent knows the value I assign to each book, it can pick the one that maximizes my utility (value – price).
–V(fantasy) = $10, p(fantasy) = $7
–V(mystery) = $6, p(mystery) = $4
–Buy fantasy.
–V(fantasy) = $10, p(fantasy) = $7
–V(mystery) = $6, p(mystery) = $1
–Buy mystery.
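The two scenarios reduce to picking the book with the highest surplus, value – price. A toy sketch using the slide's numbers (the function and dictionary names are my own):

```python
def best_purchase(options):
    """Pick the option with the highest surplus (value - price)."""
    return max(options, key=lambda name: options[name]["value"] - options[name]["price"])

# First scenario: fantasy surplus is 10 - 7 = 3, mystery surplus is 6 - 4 = 2.
books = {"fantasy": {"value": 10, "price": 7},
         "mystery": {"value": 6,  "price": 4}}
print(best_purchase(books))  # fantasy

# Second scenario: mystery's price drops to $1, so its surplus (5) wins.
books["mystery"]["price"] = 1
print(best_purchase(books))  # mystery
```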

Utility example
Game costs $1 to play.
–Choose "Red", win $2.
–Choose "Black", win $3.
A utility-maximizing agent will pick Black.
Game costs $1 to play.
–Choose Red, win 50 cents.
–Choose Black, win 25 cents.
A utility-maximizing agent will choose not to play. (If it must play, it picks Red.)

Utility example
But actions are rarely certain.
Game costs $1.
–Red: win $1.
–Black: 30% chance of winning $10, 70% chance of winning $0.
A risk-neutral agent will pick Black.
What if the amounts are in millions?
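These games all reduce to an expected-utility comparison, with "don't play" as a zero-utility option. A sketch with the slides' probabilities and payoffs (function name is my own):

```python
def expected_utility(cost, outcomes):
    """outcomes: list of (probability, payoff) pairs; returns expected net payoff."""
    return sum(p * v for p, v in outcomes) - cost

# Uncertain game: costs $1; Red pays $1 for sure,
# Black pays $10 with probability 0.3 (else $0).
choices = {
    "red": expected_utility(1, [(1.0, 1)]),               # 0.0
    "black": expected_utility(1, [(0.3, 10), (0.7, 0)]),  # about 2.0
    "no_play": 0.0,
}
print(max(choices, key=choices.get))  # black

# Second game from the previous slide: costs $1; Red pays $0.50, Black $0.25.
choices2 = {
    "red": expected_utility(1, [(1.0, 0.50)]),    # -0.5
    "black": expected_utility(1, [(1.0, 0.25)]),  # -0.75
    "no_play": 0.0,
}
print(max(choices2, key=choices2.get))  # no_play
```

A risk-neutral agent compares only these expected values; the "millions" question points at risk attitudes, which this calculation deliberately ignores.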

Rationality as a Design Principle
Provides an abstract description of behavior.
Declarative: avoids specifying how a decision is reached.
–Leaves flexibility.
Doesn't enumerate inputs and outputs.
–Scales, allows for diverse environments.
Doesn't specify "failure" or "unsafe" states.
–Leads to accomplishment of goals, not avoidance of failure.

Agents in open systems
Open systems are those in which no single party implements all the participants.
–E.g., the Internet.
System designers construct a protocol.
–Anyone who follows this protocol can participate (e.g., HTTP, TCP).
How do we build a protocol that leads to "desirable" behavior?
–What is "desirable"?

Protocol design
By treating participants as rational agents, we can exploit techniques from game theory and economics.
"Assume everyone will act to maximize their own payoff – how do we change the rules of the game so that this behavior leads to a desired outcome?"
We'll return to this idea when we talk about auctions.

Agents and Mechanisms
System designers can treat external programs as if they were rational agents.
That is, treat external programs as if they have their own beliefs, goals, and agenda to achieve.
–For example: an auction can treat bidding agents as if they are actually trying to maximize their own profit.

Agents and Mechanisms
In many cases, a system designer cannot directly control agent behavior.
–In an auction, the auctioneer can't tell people what to bid.
The auctioneer can control the mechanism.
–The "rules of the game"
Design goal: construct mechanisms that lead to self-interested agents doing "the right thing."

Mechanism Example
Imagine a communication network G with two special nodes x and y.
–Edges between nodes are agents that can forward messages.
–Each agent has a private cost t to pass a message along its edge.
–If agents reveal their t's truthfully, we can compute the shortest path between x and y.
How do we get a self-interested agent to reveal its t?

Solution
Each agent reveals a t, and the shortest path is computed.
–If an agent is not on the path, it is paid 0.
–If an agent is on the path, it is paid the cost of the shortest path that doesn't include it, minus the cost of the chosen path excluding its own cost t:
–P = g_next – (g_best – t)
For example, if I bid 10 and am on a path with cost 40, and the best solution without me is 60, I get paid 60 – (40 – 10) = 30.
The agent is thus compensated for its contribution to the solution.
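The payment rule can be written as a small function. This is an illustrative sketch (function and argument names are my own); the numbers are the slide's:

```python
def edge_payment(t, g_best, g_next, on_path):
    """Payment to an agent that declared edge cost t.

    g_best: cost of the chosen shortest path (includes t when on_path),
    g_next: cost of the best path that avoids this agent's edge.
    An agent not on the chosen path is paid nothing.
    """
    if not on_path:
        return 0
    return g_next - (g_best - t)

# Slide's numbers: I bid 10, my path costs 40, the best path without me costs 60.
print(edge_payment(10, 40, 60, on_path=True))   # 30
print(edge_payment(10, 40, 60, on_path=False))  # 0
```

Note that the payment exceeds the declared cost by exactly g_next – g_best, the amount by which the agent's edge improves the overall solution; that is the sense in which the agent is "compensated for its contribution."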

Analysis
If an agent lies:
–Was on the shortest path, still on the shortest path: the payment is no better – no benefit to lying.
–Was on the shortest path, now not on the shortest path: this means the lie was greater than g_next – g_best. But I would rather get a positive amount than 0!
–Not on the shortest path, but now is: underbidding leads to being paid the lower amount while still incurring the higher true cost. Truth would be better!

Example
Cost = 2, SP = 5, NextSP = 8.
My payment if I bid truthfully: 8 – (5 – 2) = 5. Net: 5 – 2 = 3.
If I underbid, my payment will be lower and my net utility no better.
If I overbid, I either get 0 or the same utility.
–E.g., if I bid 3, I get 8 – (5 – 3) = 6, but my net is 6 – 3 = 3.
Therefore, truth-telling is a dominant strategy.
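We can check the dominant-strategy claim for this slide's numbers (true cost 2, shortest path 5, next-best path 8) by sweeping over bids. This sketch uses a slightly different accounting than the slide: it assumes the other edges on my path cost 3 in total and recomputes the chosen path's cost from my declared bid, but it reaches the same conclusion:

```python
def net_utility(bid, true_cost=2, other_edges=3, g_next=8):
    """Net utility for an agent with cost `true_cost` that declares `bid`.

    The agent's path costs other_edges + bid; it is chosen only if that is
    no worse than the best path avoiding the agent (g_next). The payment
    follows P = g_next - (g_best - bid); the agent always incurs its TRUE cost.
    """
    g_best = other_edges + bid
    if g_best > g_next:                # bid itself off the path: paid nothing
        return 0
    payment = g_next - (g_best - bid)  # = g_next - other_edges; bid-independent
    return payment - true_cost

for bid in [0, 1, 2, 3, 5, 6]:
    print(bid, net_utility(bid))
# Every bid that keeps the agent on the path nets 3, the same as bidding
# truthfully; bidding itself off the path nets 0. Lying never helps.
```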

Adaptation and Learning
Often, it's not possible to program an agent to deal with every contingency:
–Uncertainty
–Changing domain
–Too complicated
Agents often need to adapt to changing environments or to learn more about them.

Adaptation
Adaptation involves changing an agent's model or behavior in response to a perceived change in the world.
–Reactive: the agent doesn't anticipate the future, it just updates.

Learning
Learning involves constructing and updating a hypothesis.
–An agent typically tries to build and improve some representation of the world.
–Proactive: tries to anticipate the future.
Most agents will use both learning and adaptation.

Agents in e-commerce
Agents play a fairly limited role in today's e-commerce.
–Mostly still in research labs.
Large potential in both B2C and B2B:
–Assisting in personalization
–Automating payment

Challenges for agents
Uniform protocol/language
–Web services? XML?
Lightweight, simple to use, robust
–Always a challenge
Critical mass
–Enough people need to adopt
"Killer app"
–What will the agent/IM be?