Reaching Agreements: Negotiation


Voting
Truthful voters vote for the candidate they think is best. Why would you vote for something you didn't want? (In a run-off election you may want to pick your candidate's competition; with more than two candidates, you may figure your candidate doesn't have a chance.) We vote when awarding scholarships, choosing a teacher of the year, or picking a person to hire.
Goal: rank feasible social outcomes based on the agents' individual rankings of those outcomes.
A – set of n agents
O – set of m feasible outcomes
Each agent has a preference relation <i ⊆ O × O, asymmetric and transitive.

Social choice rule (good for society)
Input: the agent preference relations (<1, …, <n).
Output: the elements of O sorted according to the input – this gives the social preference relation <* of the agent group. In other words, it creates an ordering for the group.

Desirable properties of the social choice rule:
A social preference ordering <* should exist for all possible inputs (individual preferences)
<* should be defined for every pair (o, o') ∈ O
<* should be asymmetric and transitive over O
The outcomes should be Pareto efficient: if ∀i ∈ A, o <i o', then o <* o' (do not misorder if all agree)
The scheme should be independent of irrelevant alternatives (if all agree on the relative ranking of two outcomes, the social choice should retain that ranking): if for every i ∈ A the rankings <i and <'i are based on different sets of choices but satisfy o <i o' and o <'i o' (their relative rankings are unaffected by the other choices present), then the social ranking of o and o' should be the same in both cases
No agent should be a dictator in the sense that o <i o' implies o <* o' for all preferences of the other agents

Arrow's impossibility theorem
No social choice rule satisfies all six of these conditions, so we must relax the desired attributes. For example:
We may not require <* to always be defined
We may not require that <* is asymmetric and transitive
Use the plurality protocol: all votes are cast simultaneously and the highest vote count wins. Introducing an irrelevant alternative may split the majority, causing both the old majority choice and the new irrelevant alternative to drop out of favor (the Ross Perot effect).
A binary protocol involves voting pairwise – single elimination. The order of the pairing can totally change the results (the example below is fascinating). Is this a reason for rankings (seedings) in a basketball tournament?

One voter ranks c > d > b > a
One voter ranks a > c > d > b
One voter ranks b > a > c > d
(Notice the preferences are nearly rotations of each other.)
winner(c, winner(a, winner(b, d))) = a
winner(d, winner(b, winner(c, a))) = d
winner(d, winner(c, winner(a, b))) = c
winner(b, winner(d, winner(c, a))) = b
Surprisingly, the order of pairing yields a different winner!
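The four agendas can be checked with a small sketch (plain Python; the function and variable names are illustrative, only the three rankings come from the slide):

# Sketch: agenda-dependent winners under pairwise (binary) elimination.
rankings = [
    ["c", "d", "b", "a"],   # voter 1: c > d > b > a
    ["a", "c", "d", "b"],   # voter 2: a > c > d > b
    ["b", "a", "c", "d"],   # voter 3: b > a > c > d
]

def beats(x, y):
    # x wins the pairwise vote against y if a majority rank x above y.
    votes_for_x = sum(r.index(x) < r.index(y) for r in rankings)
    return votes_for_x > len(rankings) / 2

def winner(x, y):
    return x if beats(x, y) else y

# The four agendas from the slide (innermost pair is voted on first).
agendas = [
    ("c", ("a", ("b", "d"))),
    ("d", ("b", ("c", "a"))),
    ("d", ("c", ("a", "b"))),
    ("b", ("d", ("c", "a"))),
]

def run(agenda):
    # An agenda is either a single candidate or (challenger, sub_agenda).
    if isinstance(agenda, str):
        return agenda
    challenger, rest = agenda
    return winner(challenger, run(rest))

for agenda in agendas:
    print(agenda, "->", run(agenda))   # prints a, d, c, b respectively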

Borda protocol (used if the binary protocol is too slow) = assigns an alternative |O| points for the highest preference, |O|-1 points for the second, and so on. The counts are summed across the voters and the alternative with the highest count becomes the social choice. The winner can turn into the loser, and a loser into the winner, if the lowest-ranked alternative is removed (does this surprise you?). See the table on the next slide.

Borda paradox – remove the loser and the winner changes (notice that c is always ranked ahead of the removed item d).
With d (seven voters): a > b > c > d, b > c > d > a, c > d > a > b, a > b > c > d, b > c > d > a, c > d > a > b, a > b > c > d. Borda counts: a = 18, b = 19, c = 20, d = 13, so c wins and d is last.
With the loser d removed, the same seven voters rank: a > b > c, b > c > a, c > a > b, a > b > c, b > c > a, c > a > b, a > b > c. Borda counts: a = 15, b = 14, c = 13, so a now wins and the old winner c is last.
When the loser is removed, the next loser becomes the winner!
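A minimal check of the Borda counts above (plain Python; the seven-voter profile is the one reconstructed on this slide, scoring |O| points down to 1):

# Sketch: the Borda paradox with 7 voters over {a, b, c, d}.
profile = (
    [["a", "b", "c", "d"]] * 3 +   # three voters: a > b > c > d
    [["b", "c", "d", "a"]] * 2 +   # two voters:   b > c > d > a
    [["c", "d", "a", "b"]] * 2     # two voters:   c > d > a > b
)

def borda(rankings):
    candidates = rankings[0]
    scores = {x: 0 for x in candidates}
    for r in rankings:
        for points, x in enumerate(reversed(r), start=1):
            scores[x] += points
    return scores

print(borda(profile))
# {'a': 18, 'b': 19, 'c': 20, 'd': 13}  -> c wins, d is last

# Remove the loser d and re-run Borda with the same voters:
without_d = [[x for x in r if x != "d"] for r in profile]
print(borda(without_d))
# {'a': 15, 'b': 14, 'c': 13}  -> a now wins and the old winner c is last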

Strategic (insincere) voters
Suppose your choice will likely come in second place. If you rank the rest of the group's first choice very low, you may lower that choice enough so that yours comes first.
True story: a dean's selection. Each committee member was told they had 5 points to award and could spread them out any way among the candidates; the recipient of the most points wins. I put all my points on one candidate. Most split their points. I swung the vote! What was my gamble?
We want a mechanism that gets the results as if truthful voting were done.

Typical Competition Mechanisms
Auction: allocate goods or tasks to agents through a market.
We need a richer technique for reaching agreements:
Negotiation: reach agreements through interaction.
Argumentation: resolve conflicts through debate.

Negotiation May involve: Exchange of information Relaxation of initial goals Mutual concession

Mechanisms, Protocols, Strategies
Negotiation is governed by a mechanism or a protocol: it defines the "rules of encounter" between the agents – the public rules by which the agents will come to agreements.
Given a particular protocol, how can a particular strategy be designed that individual agents can use?

Negotiation Mechanism
Negotiation is the process of reaching agreements on matters of common interest. It usually proceeds in a series of rounds, with every agent making a proposal at every round.
Issues in the negotiation process:
Negotiation Space: all possible deals that agents can make, i.e., the set of candidate deals.
Negotiation Protocol: a rule that determines the process of a negotiation: how and when a proposal can be made, when a deal has been struck, when the negotiation should be terminated, and so on.
Negotiation Strategy: when and what proposals should be made.

Protocol
Specifies the kinds of deals that can be made and the sequence of offers and counter-offers.
The protocol is like the rules of a chess game, whereas the strategy is the way in which a player decides which move to make.

Game Theory Computers make concrete the notion of strategy which is central to game playing

Mechanism Design
Mechanism design is the design of protocols for governing multi-agent interactions. Desirable properties of mechanisms are:
Convergence/guaranteed success
Maximizing global welfare: the sum of agent benefits is maximized
Pareto efficiency
Individual rationality
Stability: no agent should have an incentive to deviate from its strategy
Simplicity: low computational demands, little communication
Distribution: no central decision maker
Symmetry: we do not want agents to play different roles (all agents have the same choice of actions)
A mechanism guarantees success if it ensures that eventually agreement is certain to be reached.
It maximizes social welfare if it ensures that any outcome maximizes the sum of the utilities of the negotiation participants.
It is Pareto efficient if there is no other outcome that will make at least one agent better off without making at least one other agent worse off.
A protocol is individually rational if following the protocol is in the best interest of the negotiation participants.
A protocol is stable if it provides all agents an incentive to behave in a particular way, e.g. a Nash equilibrium: no agent has an incentive to deviate from the agreed-upon strategies.
A protocol is simple if, using it, a participant can easily determine the optimal strategy.
Distribution: no central decision maker nor a single point of failure.

Attributes not universally accepted Can’t always achieve every attribute so look at tradeoffs of choices: (for example) efficiency and stability are sometimes in conflict with each other

Negotiation Protocol
Who begins
Take turns
Build off previous offers
Give feedback (or not); tell what your utility is (or not)
Obligations
Privacy
Allowed proposals you can make as a result of negotiation history

Thought Question Why not just compute a joint solution – using linear programming?

Negotiation Process 1
Negotiation usually proceeds in a series of rounds, with every agent making a proposal at every round.
Communication during negotiation: proposal, counter-proposal, or agent i concedes.
The proposals that agents make are defined by their strategy, must be drawn from the negotiation set, and must be legal as defined by the protocol. If an agreement is reached as defined by the rule, then negotiation terminates.
Example: agreeing on a price. If agent i is going to buy a book from agent j and agent i can only afford to pay a certain price, agent i will continue to negotiate on the price until the offer from agent j is a price that agent i can pay.

Negotiation Process 2
Another way of looking at the negotiation process (one can talk about a 50/50 or a 90/10 split depending on who "moves" the farthest): proposals by Ai and proposals by Aj converge toward a point of acceptance/agreement.

Many types of interactive concession-based methods
Some use multiple-objective linear programming – this requires that the players construct a crude linear approximation of their utility functions.
Jointly Improving Direction method: start out with a neutral suggested value and continue until no joint improvements are possible. Used in the Camp David peace negotiations (Egypt/Israel – Jimmy Carter, Nobel Peace Prize 2002).

Jointly Improving Direction method
Iterate:
The mediator helps the players criticize a tentative agreement (could be the status quo)
The mediator generates a compromise direction (where each of the k issues is a dimension in k-space)
The mediator helps the players find a jointly preferred outcome along the compromise direction, and then proposes a new tentative agreement.

Typical Negotiation Problems Task-Oriented Domains(TOD): an agent's activity can be defined in terms of a set of tasks that it has to achieve. The target of a negotiation is to minimize the cost of completing the tasks. State Oriented Domains(SOD): each agent is concerned with moving the world from an initial state into one of a set of goal states. The target of a negotiation is to achieve a common goal. Main attribute: actions have side effects (positive/negative) Worth Oriented Domains(WOD): agents assign a worth to each potential state, which captures its desirability for the agent. The target of a negotiation is to maximize mutual worth (rather than worth to individual).

Complex Negotiations Some attributes that make the negotiation process complex are: Multiple attributes: Single attribute (price) – symmetric scenario. (both benefit in the same way by a cheaper price) Multiple attributes – several inter-related attributes, e.g. buying a car. The number of agents and the way they interact: One-to-one, e.g. single buyer and single seller . Many-to-one, e.g. multiple buyers and a single seller, auctions. Many-to-many, e.g. multiple buyers and multiple sellers.

Single-issue negotiation
Like money.
Symmetric: if roles were reversed, I would benefit the same way you would. If one task requires less travel, both would benefit equally by having less travel; the utility of a task is experienced the same way by whoever is assigned to it.
Non-symmetric: we would benefit differently if roles were reversed. If you delivered the picnic table, you could just throw it in the back of your van; if I delivered it, I would have to rent a U-Haul to transport it (as my car is small).

Multiple Issue negotiation Could be hundreds of issues (cost, delivery date, size, quality) Some may be inter-related (as size goes down, cost goes down, quality goes up?) Not clear what a true concession is (larger may be cheaper, but harder to store or spoils before can be used) May not even be clear what is up for negotiation (I didn’t realize not having any test was an option) (on the job…Ask for stock options, bigger office, work from home.)

How many agents are involved? One to one One to many (auction is an example of one seller and many buyers) Many to many (could be divided into buyers and sellers, or all could be identical in role) n(n-1)/2 number of pairs

Negotiation Domains:Task-oriented ”Domains in which an agent’s activity can be defined in terms of a set of tasks that it has to achieve”, (Rosenschein & Zlotkin, 1994) An agent can carry out the tasks without interference (or help) from other agents – such as ”who will deliver the mail” All resources are available to the agent Tasks redistributed for the benefit of all agents

Task-oriented Domain: Definition
How can an agent evaluate the utility of a specific deal? Utility represents how much an agent has to gain from the deal (it is always measured relative to the original allocation). Since an agent can achieve its goal on its own, it can compare the cost of achieving the goal on its own to the cost of its part of the deal. If utility < 0, the agent is worse off than performing its tasks on its own.
Conflict deal (stay with the status quo): if the agents fail to reach an agreement, no agent agrees to execute tasks other than its own; utility = 0.
Deal: a possible proposal (from the set of possible deals).

Formalization of TOD
A Task-Oriented Domain (TOD) is a triple <T, Ag, c> where:
T is a finite set of all possible tasks;
Ag = {A1, A2, …, An} is a list of participant agents;
c: 2^T → R+ defines the cost of executing each subset of tasks.
Assumptions on the cost function:
c(∅) = 0.
The cost of a subset of tasks does not depend on who carries them out (an idealized situation).
The cost function is monotonic, which means more tasks cost more (it can't cost less to take on more tasks): T1 ⊆ T2 implies c(T1) ≤ c(T2).

Redistribution of Tasks
Given a TOD <T, {A1, A2}, c>, (T1, T2) is the original assignment and (D1, D2) is the assignment after the "deal".
An encounter (instance) within the TOD is an ordered list (T1, T2) such that for all k, Tk ⊆ T. This is an original allocation of tasks that the agents might want to reallocate.
A pure deal on an encounter is a redistribution of tasks among the agents: (D1, D2), such that all tasks are reassigned: D1 ∪ D2 = T1 ∪ T2.
Specifically, the deal (D1, D2) = (T1, T2) is called the conflict deal.
For each deal δ = (D1, D2), the cost of the deal to agent k is Costk(δ) = c(Dk) (i.e., the cost to k of deal δ is the cost of Dk, k's part of the deal).

Examples of TOD Parcel Delivery: Several couriers have to deliver sets of parcels to different cities. The target of negotiation is to reallocate deliveries so that the cost of travel to each courier is minimal. Database Queries: Several agents have access to a common database, and each has to carry out a set of queries. The target of negotiation is to arrange queries so as to maximize efficiency of database operations (Join, Projection, Union, Intersection, …) . You are doing a join as part of another operation, so please save the results for me.

Possible Deals
Consider an encounter from the Parcel Delivery Domain. Suppose we have two agents. Both agents have parcels to deliver to city a and only agent 2 has parcels to deliver to city b. There are nine distinct pure deals in this encounter:
({a}, {b})
({b}, {a})
({a,b}, ∅)
(∅, {a,b})
({a}, {a,b}) – the conflict deal (the original allocation (T1, T2))
({b}, {a,b})
({a,b}, {a})
({a,b}, {b})
({a,b}, {a,b})

Figuring out the deals, knowing the union must be {a,b}:
Choices for the first agent: {a}, {b}, {a,b}, ∅. The second agent must "pick up the slack":
{a} for agent 1 → {b} or {a,b} for agent 2
{b} for agent 1 → {a} or {a,b} for agent 2
{a,b} for agent 1 → {a}, {b}, {a,b}, or ∅ for agent 2
∅ for agent 1 → {a,b} for agent 2
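A minimal sketch (plain Python, names hypothetical) that enumerates the pure deals for this encounter by brute force; it relies only on the covering condition D1 ∪ D2 = T1 ∪ T2 stated above.

# Sketch: enumerate the pure deals for the encounter (T1, T2) = ({a}, {a, b}).
from itertools import combinations

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def pure_deals(t1, t2):
    all_tasks = frozenset(t1) | frozenset(t2)
    # A pure deal (D1, D2) only has to cover every task: D1 ∪ D2 = T1 ∪ T2.
    return [(d1, d2)
            for d1 in subsets(all_tasks)
            for d2 in subsets(all_tasks)
            if d1 | d2 == all_tasks]

deals = pure_deals({"a"}, {"a", "b"})
print(len(deals))                 # 9 distinct pure deals, as listed above
for d1, d2 in deals:
    print(set(d1), set(d2))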

Utility Function for Agents
Given an encounter (T1, T2), the utility function for each agent is just the difference of costs:
Utilityk(δ) = c(Tk) − Costk(δ) = c(Tk) − c(Dk)
where δ = (D1, D2) is a deal; c(Tk) is the stand-alone cost to agent k (the cost of achieving its goal with no help); and Costk(δ) is the cost of its part of the deal.
Note that the utility of the conflict deal is always 0.

Parcel Delivery Domain (assuming the agents do not have to return home – like U-Haul)
(Figure: a distribution point with cost 1 to city a, cost 1 to city b, and cost 2 between a and b.)
Cost function: c(∅)=0, c({a})=1, c({b})=1, c({a,b})=3
Utility for agent 1 (originally {a}):
Utility1({a}, {b}) = 0
Utility1({b}, {a}) = 0
Utility1({a,b}, ∅) = -2
Utility1(∅, {a,b}) = 1
…
Utility for agent 2 (originally {a,b}):
Utility2({a}, {b}) = 2
Utility2({b}, {a}) = 2
Utility2({a,b}, ∅) = 3
Utility2(∅, {a,b}) = 0
…

Dominant Deals Deal  dominates deal ' if  is better for at least one agent and not worse for the other, i.e.,  is at least as good for every agent as ': k{1,2}, Utilityk() Utilityk(')  is better for some agent than ': k{1,2}, Utilityk()> Utilityk(') Deal  weakly dominates deal ' if at least the first condition holds (deal isn’t worse for anyone). Any reasonable agent would prefer (or go along with)  over ' if  dominates or weakly dominates '.

Negotiation Set: Space of Negotiation A deal  is called individual rational if  weakly dominates the conflict deal. (no worse than what you have already) A deal  is called Pareto optimal if there does not exist another deal ' that dominates . (best deal for x without disadvantaging y) The set of all deals that are individual rational and Pareto optimal is called the negotiation set (NS).

Utility Function for Agents (example from the previous slide)
Utility1({a}, {b}) = 0          Utility2({a}, {b}) = 2
Utility1({b}, {a}) = 0          Utility2({b}, {a}) = 2
Utility1({a,b}, ∅) = -2         Utility2({a,b}, ∅) = 3
Utility1(∅, {a,b}) = 1          Utility2(∅, {a,b}) = 0
Utility1({a}, {a,b}) = 0        Utility2({a}, {a,b}) = 0
Utility1({b}, {a,b}) = 0        Utility2({b}, {a,b}) = 0
Utility1({a,b}, {a}) = -2       Utility2({a,b}, {a}) = 2
Utility1({a,b}, {b}) = -2       Utility2({a,b}, {b}) = 2
Utility1({a,b}, {a,b}) = -2     Utility2({a,b}, {a,b}) = 0

Individually Rational Deals (eliminate any deals that are negative for either agent)
All nine deals: ({a}, {b}), ({b}, {a}), ({a,b}, ∅), (∅, {a,b}), ({a}, {a,b}), ({b}, {a,b}), ({a,b}, {a}), ({a,b}, {b}), ({a,b}, {a,b})
Individually rational: ({a}, {b}), ({b}, {a}), (∅, {a,b}), ({a}, {a,b}), ({b}, {a,b})

Pareto Optimal Deals
Pareto optimal: ({a}, {b}), ({b}, {a}), ({a,b}, ∅), (∅, {a,b})
({a,b}, ∅) gives (-2, 3), but nothing beats 3 for agent 2, so it is Pareto optimal.
Deals such as ({a}, {a,b}) are beaten by the ({a}, {b}) deal.

Negotiation Set
Individually rational deals: ({a}, {b}), ({b}, {a}), (∅, {a,b}), ({a}, {a,b}), ({b}, {a,b})
Pareto optimal deals: ({a}, {b}), ({b}, {a}), ({a,b}, ∅), (∅, {a,b})
Negotiation set (the intersection): ({a}, {b}), ({b}, {a}), (∅, {a,b})
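The whole pipeline above (utilities, individual rationality, Pareto optimality, negotiation set) can be reproduced with a short brute-force sketch; the cost table is the one from the parcel-delivery slide, everything else (names, structure) is illustrative.

# Sketch: recompute the negotiation set for the encounter (T1, T2) = ({a}, {a, b})
# with costs c(∅)=0, c({a})=c({b})=1, c({a,b})=3.
from itertools import combinations

COST = {frozenset(): 0, frozenset("a"): 1, frozenset("b"): 1, frozenset("ab"): 3}
T1, T2 = frozenset("a"), frozenset("ab")

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

deals = [(d1, d2) for d1 in subsets(T1 | T2) for d2 in subsets(T1 | T2)
         if d1 | d2 == T1 | T2]

def utilities(deal):
    d1, d2 = deal
    return (COST[T1] - COST[d1], COST[T2] - COST[d2])

def dominates(d, e):
    ud, ue = utilities(d), utilities(e)
    return all(x >= y for x, y in zip(ud, ue)) and any(x > y for x, y in zip(ud, ue))

rational = [d for d in deals if all(u >= 0 for u in utilities(d))]      # individually rational
pareto = [d for d in deals if not any(dominates(e, d) for e in deals)]  # Pareto optimal
negotiation_set = [d for d in rational if d in pareto]

for d in negotiation_set:
    print(tuple(map(set, d)), utilities(d))
# Prints the three deals above: (∅, {a,b}) with (1, 0), and ({a}, {b}), ({b}, {a}) with (0, 2).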

Negotiation Set illustrated
Create a scatter plot of the utility for i (one axis) against the utility for j (the other axis).
Only deals where both utilities are non-negative are individually rational for both (the origin is the conflict deal).
Which are Pareto optimal?

Negotiation Set in Task-oriented Domains
(Figure: utility for agent i on one axis and utility for agent j on the other; the circle delimits the space of all possible deals, E is the conflict deal, and the negotiation set is the boundary arc between B and C.)
All deals left of the line BD are not individually rational (negative utility) for agent j, so j is better off with the conflict deal E. Similarly, all deals below the line AC are not individually rational (negative utility) for agent i, so i is better off with the conflict deal E. So the negotiation set (Pareto optimal + individually rational) contains the deals in the shaded area BEC.

Negotiation Protocol
P(d) – the product of the two agents' utilities from d; a product-maximizing negotiation protocol.
One-step protocol / concession protocol:
At t ≥ 0, A offers d(A,t) and B offers d(B,t), such that:
Both deals are from the negotiation set
∀i ∈ {A,B} and ∀t > 0, Utilityi(d(i,t)) ≤ Utilityi(d(i,t-1)) – I propose something less desirable for me
Negotiation ending:
Conflict – Utilityi(d(i,t)) = Utilityi(d(i,t-1)) for both agents (neither concedes)
Agreement – ∃ j ≠ i ∈ {A,B}, Utilityj(d(i,t)) ≥ Utilityj(d(j,t)) (some agent likes the other's offer at least as much as its own)
If only A agrees, the deal is d(B,t); if only B agrees, the deal is d(A,t)
If both A and B agree, take d(k,t) such that P(d(k,t)) = max{P(d(A,t)), P(d(B,t))}
If both agree and P(d(A,t)) = P(d(B,t)), flip a coin (the product is the same, but the deals may not be the same for each agent – flip a coin to decide which deal to use)
The protocol can be run over pure deals or over mixed deals.

The Monotonic Concession Protocol – moves in one direction, towards the middle
The rules of this protocol are as follows:
Negotiation proceeds in rounds.
On round 1, agents simultaneously propose a deal from the negotiation set (an agent may re-propose the same one later).
Agreement is reached if one agent finds that the deal proposed by the other is at least as good as or better than its own proposal.
If no agreement is reached, then negotiation proceeds to another round of simultaneous proposals. An agent is not allowed to offer the other agent less (in terms of utility) than it did in the previous round: it can either stand still or make a concession. This assumes we know what the other agent values.
If neither agent makes a concession in some round, then negotiation terminates with the conflict deal.
Metadata: explanation or critique of a deal.

Condition for Consenting to an Agreement
Both agents find that the deal proposed by the other is at least as good as or better than the proposal they made themselves:
Utility1(δ2) ≥ Utility1(δ1) and Utility2(δ1) ≥ Utility2(δ2)

The Monotonic Concession Protocol
Advantages:
Symmetrically distributed (no agent plays a special role)
Ensures convergence – it will not go on indefinitely
Disadvantages:
Agents can run into conflicts
Inefficient – no guarantee that an agreement will be reached quickly

Negotiation Strategy Given the negotiation space and the Monotonic Concession Protocol, a strategy of negotiation is an answer to the following questions: What should an agent’s first proposal be? On any given round, who should concede? If an agent concedes, then how much should it concede?

The Zeuthen Strategy – a refinement of the monotonic concession protocol
Q: What should my first proposal be?
A: The best deal for you among all possible deals in the negotiation set. (This is a way of telling others what you value.)
(Figure: agent 1's best deal and agent 2's best deal lie at opposite ends of the negotiation set.)

The Zeuthen Strategy
Q: I make a proposal in every round, but it may be the same as last time. Do I need to make a concession in this round?
A: If you are not willing to risk a conflict, you should make a concession. So: how much am I willing to risk a conflict?

Willingness to Risk Conflict
Suppose you have conceded a lot. Then:
– You have lost much of your expected utility (it is closer to zero).
– In case conflict occurs, you are not much worse off.
– So you are more willing to risk conflict.
An agent's willingness to risk conflict compares what it would lose by conceding (accepting the opponent's offer) against what it would lose by standing firm and causing the conflict deal, relative to its current offer.
If both are equally willing to risk conflict, both concede.

Risk Evaluation
You have to calculate:
How much will you lose if you make a concession and accept your opponent's offer?
How much will you lose if you stand still, which causes a conflict?
risk_i = (utility agent i loses by conceding and accepting agent j's offer) / (utility agent i loses by not conceding and causing a conflict)
       = (Utilityi(δi) − Utilityi(δj)) / Utilityi(δi)
where δi and δj are the current offers of agent i and agent j, respectively.
Risk is willingness to risk conflict (1 means perfectly willing to risk a conflict).

Risk Evaluation risk measures the fraction you have left to gain. If it is close to one, you have gained little (and are more willing to risk). This assumes you know what others utility is. What one sets as initial goal affects risk. If I set an impossible goal, my willingness to risk is always higher.
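A tiny helper implementing the risk ratio above (plain Python; the convention risk = 1 when an agent's own offer is worth 0 to it is an assumption taken from the usual Zeuthen formulation). The numbers are those of the parcel example used a few slides below.

# Sketch: Zeuthen risk = (utility lost by accepting the opponent's offer)
#                        / (utility lost by provoking the conflict deal).
def risk(u_own_offer, u_opponents_offer):
    if u_own_offer == 0:
        return 1.0                     # nothing to lose by conflict: fully willing to risk it
    return (u_own_offer - u_opponents_offer) / u_own_offer

# Parcel example: agent 1 proposes (∅, {a,b}), worth 1 to itself and 0 to agent 2;
# agent 2 proposes ({a}, {b}), worth 2 to itself and 0 to agent 1.
print(risk(1, 0))   # agent 1's risk: (1 - 0) / 1 = 1.0
print(risk(2, 0))   # agent 2's risk: (2 - 0) / 2 = 1.0  -> equal risk, so both should concede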

The Risk Factor
One way to think about which agent should concede is to consider how much each has to lose by running into conflict at that point.
(Figure: Ai's best deal, Aj's best deal, and the conflict deal; the distance from an agent's current offer to the conflict deal is the maximum it still hopes to gain from agreement. Reference for figure: Rosenschein & Zlotkin, 1994.)
Risk evaluation: an agent who has made many concessions has less to lose if conflict is reached than an agent who has not made any concessions. If we had a way of measuring each agent's willingness to risk conflict, we could have the agent that is less willing to risk conflict make the concession.

The Zeuthen Strategy
Q: If I concede, then how much should I concede?
A: Enough to change the balance of risk (who has more to lose) – otherwise it will just be your turn to concede again at the next round – but not so much that you give up more than you needed to.
Q: What if both have equal risk?
A: Both concede.

About MCP and Zeuthen Strategies
Advantages:
Simple and reflects the way human negotiations work.
Stability – in Nash equilibrium – if one agent is using the strategy, then the other can do no better than use it too.
Disadvantages:
Computationally expensive – players need to compute the entire negotiation set.
Communication burden – the negotiation process may involve several steps.
(Notes on the figure: for agent i, the block to the far left represents its best deal and, moving right, the deals become progressively worse. At each negotiation step, at least one agent must take one or more steps towards the opponent, and the offers must cross each other at some point. Agents cannot backtrack, nor can they both stand still.)

Can they reach an agreement?
Parcel Delivery Domain: recall, agent 1 had a delivery to a; agent 2 had deliveries to a and b.
Negotiation set: ({a}, {b}), ({b}, {a}), (∅, {a,b})
Utility of agent 1: Utility1({a}, {b}) = 0, Utility1({b}, {a}) = 0, Utility1(∅, {a,b}) = 1
Utility of agent 2: Utility2({a}, {b}) = 2, Utility2({b}, {a}) = 2, Utility2(∅, {a,b}) = 0
First offers: agent 1 offers (∅, {a,b}); agent 2 offers ({a}, {b}). Each agent's risk of conflict is 1.
Can they reach an agreement? Who will concede?

(Figure: agent 1's best deal, agent 2's best deal, and the conflict deal; from each agent's point of view it is the other one who should concede.)
Zeuthen does not reach a settlement here: neither will concede, as there is no middle ground.

Parcel Delivery Domain: Example 2 (don't return to the distribution point)
(Figure: a distribution point connected to city a and to city d with cost 7 each; cities a–b–c–d are connected in a line with cost 1 between neighbors.)
Cost function: c(∅)=0; c({a})=c({d})=7; c({b})=c({c})=c({a,b})=c({c,d})=8; c({b,c})=c({a,b,c})=c({b,c,d})=9; c({a,d})=c({a,b,d})=c({a,c,d})=c({a,b,c,d})=10
Conflict deal: ({a,b,c,d}, {a,b,c,d})
All choices are individually rational, as neither agent can do worse than the conflict deal. ({a,c}, {b,d}) is dominated by ({a}, {b,c,d}).
Negotiation set:
({a,b,c,d}, ∅)
({a,b,c}, {d})
({a,b}, {c,d})
({a}, {b,c,d})
(∅, {a,b,c,d})

Parcel Delivery Domain: Example 2 (Zeuthen works here: both concede on equal risk)
No. | Pure deal          | Agent 1's utility | Agent 2's utility
1   | ({a,b,c,d}, ∅)     | 0                 | 10
2   | ({a,b,c}, {d})     | 1                 | 3
3   | ({a,b}, {c,d})     | 2                 | 2
4   | ({a}, {b,c,d})     | 3                 | 1
5   | (∅, {a,b,c,d})     | 10                | 0
Conflict deal: ({a,b,c,d}, {a,b,c,d}), with utility 0 for each agent.
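A quick check of the table above; the cost values come directly from the previous slide, and each agent's stand-alone cost is c({a,b,c,d}) = 10.

# Sketch: utilities for the five negotiation-set deals of Example 2.
# Utility_k(D1, D2) = 10 - c(D_k), since each agent's stand-alone cost is 10.
cost = {
    "": 0, "a": 7, "d": 7,
    "b": 8, "c": 8, "ab": 8, "cd": 8,
    "bc": 9, "abc": 9, "bcd": 9,
    "ad": 10, "abd": 10, "acd": 10, "abcd": 10,
}
deals = [("abcd", ""), ("abc", "d"), ("ab", "cd"), ("a", "bcd"), ("", "abcd")]

for d1, d2 in deals:
    print(f"({{{d1}}}, {{{d2}}}): agent 1 = {10 - cost[d1]}, agent 2 = {10 - cost[d2]}")
# ({abcd}, {}): 0, 10   ({abc}, {d}): 1, 3   ({ab}, {cd}): 2, 2
# ({a}, {bcd}): 3, 1    ({}, {abcd}): 10, 0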

What bothers you about the previous agreement?
The agents settle on (2, 2) utility, rather than an unequal outcome such as (0, 10) with higher total utility. Is there a better solution? This is fairness versus higher global utility, and it reflects the restrictions of this method (no promises about the future and no sharing of utility).

Nash Equilibrium
The Zeuthen strategy is in Nash equilibrium: under the assumption that one agent is using the strategy, the other can do no better than use it himself.
Generally Nash equilibrium is not applicable in negotiation settings because it requires both sides' utility functions, but it is of particular interest to the designer of automated agents.
It does away with any need for secrecy on the part of the programmer, since the first step reveals true desires. An agent's strategy can be publicly known, and no other agent designer can exploit the information by choosing a different strategy. In fact, it is desirable that the strategy be known, to avoid inadvertent conflicts.

State Oriented Domain
Goals are acceptable final states (a superset of TOD).
Actions have side effects – an agent doing one action might hinder or help another agent. Example: on(white, gray) has the side effect clear(black).
Negotiation: develop joint plans and schedules for the agents, to help and not hinder other agents.
Example – slotted blocks world: blocks cannot go anywhere on the table, only in slots (a restricted resource). Note how this simple change (slots) makes two workers get in each other's way even if their goals are unrelated.

State oriented domain is a bit more powerful than TOD Joint plan is used to mean “what they both do” not “what they do together” – just the joining of plans. There is no joint goal! The actions taken by agent k in the joint plan are called k’s role and is written as Jk C(J)k is the cost of k’s role in joint plan J. In TOD, you cannot do another’s task as a side effect of doing yours or get in their way. In TOD, coordinated plans are never worse, as you can just do your original task. With SOD, you may get in each other’s way Don’t accept partially completed plans.

Assumptions of SOD Agents will maximize expected utility (will prefer 51% chance of getting $100 than a sure $50) Agent cannot commit himself (as part of current negotiation) to behavior in future negotiation. Interagent comparison of utility: common utility units Symmetric abilities (all can perform tasks, and cost is same regardless of agent performing) Binding commitments No explicit utility transfer (no “money” that can be used to compensate one agent for a disadvantageous agreement)

Achievement of Final State Goal of each agent is represented as a set of states that they would be happy with. Looking for a state in intersection of goals Possibilities: Both can be achieved, at gain to both (e.g. travel to same location and split cost) Goals may contradict, so no mutually acceptable state (e.g., both need a car) Can find common state, but perhaps it cannot be reached with the primitive operations in the domain (could both travel together, but may need to know how to pickup another) Might be a reachable state which satisfies both, but may be too expensive – unwilling to expend effort (i.e., we could save a bit if we car-pooled, but is too complicated for so little gain).

What if choices don’t benefit others fairly? Suppose there are two states that satisfy both agents. State 1: one has a cost of 6 for one agent and 2 for the other. State 2: costs both agents 5. State 1 is cheaper (overall), but state 2 is more equal. How can we get cooperation (as why should one agent agree to do more)?

Mixed deal
Instead of picking the plan that is unfair to one agent (but better overall), use a lottery: assign a probability that each agent gets a certain plan. This is called a mixed deal – a deal with probability. Compute the probability so that the expected utility is the same for both.

Cost If  = (J:p) is a deal, then costi() = p*c(J)i + (1-p)*c(J)k where k is i’s opponent -the role i plays with (1-p) probability Utility is simply difference between cost of achieving goal alone and expected utility of joint plan For postman Example:

Parcel Delivery Domain (assuming the agents do not have to return home)
(Figure: a distribution point with cost 1 to city a, cost 1 to city b, and cost 2 between a and b.)
Cost function: c(∅)=0, c({a})=1, c({b})=1, c({a,b})=3
Utility for agent 1 (originally {a}):
Utility1({a}, {b}) = 0
Utility1({b}, {a}) = 0
Utility1({a,b}, ∅) = -2
Utility1(∅, {a,b}) = 1
…
Utility for agent 2 (originally {a,b}):
Utility2({a}, {b}) = 2
Utility2({b}, {a}) = 2
Utility2({a,b}, ∅) = 3
Utility2(∅, {a,b}) = 0
…

Consider deal 3 with probability: (∅, {a,b}):p means agent 1 does ∅ with probability p and {a,b} with probability 1−p. What should p be to be fair to both (equal utility)?
p(1) + (1−p)(−2) = utility for agent 1
p(0) + (1−p)(3) = utility for agent 2
Setting them equal: p(1) + (1−p)(−2) = (1−p)(3), i.e. −2 + 2p + p = 3 − 3p, so p = 5/6.
If agent 1 does no deliveries 5/6 of the time, the deal is fair.
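The same computation can be done once and for all with a small sketch (standard-library fractions only; the per-outcome utilities are the ones above, the function name is hypothetical):

# Sketch: find p making the mixed deal (∅, {a,b}):p fair (equal expected utility).
from fractions import Fraction

def fair_p(u1_if_p, u1_if_not, u2_if_p, u2_if_not):
    # Solve p*u1_if_p + (1-p)*u1_if_not = p*u2_if_p + (1-p)*u2_if_not for p.
    return Fraction(u2_if_not - u1_if_not,
                    (u1_if_p - u1_if_not) - (u2_if_p - u2_if_not))

p = fair_p(1, -2, 0, 3)                  # agent 1: 1 / -2, agent 2: 0 / 3
print(p)                                 # 5/6
print(p * 1 + (1 - p) * -2)              # agent 1's expected utility: 1/2
print(p * 0 + (1 - p) * 3)               # agent 2's expected utility: 1/2

For the deal ({a}, {b}):p of the next slide the denominator is zero: the utilities (always 0 versus always 2) can never be equalized, which is exactly the "no solution" case discussed there.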

Try again with the other choice in the negotiation set: ({a}, {b}):p means agent 1 does {a} with probability p and {b} with probability 1−p. What should p be to be fair to both (equal utility)?
p(0) + (1−p)(0) = utility for agent 1
p(2) + (1−p)(2) = utility for agent 2
0 = 2 – no solution. Can you see why we can't use a p to make this fair?

Mixed deal
All-or-nothing deal (one agent does everything): the mixed deal m = [(T1 ∪ T2, ∅) : p] ∈ NS such that P(m) = max over d ∈ NS of P(d).
Mixed deals make the solution space of deals continuous, rather than discrete as it was before.

A symmetric mechanism is in equilibrium if no one is motivated to change strategies. We choose to use one that maximizes the product of utilities (as this gives a fairer division). Try dividing a total utility of 10 (zero sum) in various ways to see when the product is maximized. We may flip between choices even if both are the same, just to avoid possible bias – like switching goals in soccer.

Examples: Cooperative – each is helped by the joint plan
Slotted blocks world: initially the white block is at slot 1 and the black block at slot 2. Agent 1 wants black in 1; agent 2 wants white in 2 (both goals are compatible). Assume a pick-up costs 1 and a set-down costs 1.
Mutually beneficial – each can pick up a block at the same time, costing each agent 2 – a win, as neither had to move the other block out of the way! If done by one agent alone, the cost would be four, so the utility to each is 2.

Examples: Compromise – both can succeed, but each is worse off than if the other agent weren't there
Slotted blocks world: initially the white block is at slot 1, the black block at slot 2, and two gray blocks at slot 3. Agent 1 wants black in 1, but not on the table. Agent 2 wants white in 2, but not directly on the table.
Alone, agent 1 could just pick up black and place it on white (and similarly for agent 2), but this would undo the other's goal. Together, all blocks must be picked up and put down.
Best plan: one agent picks up black, while the other agent rearranges (cost 6 for one, 2 for the other). Both can be happy, but with unequal roles.

Choices
Maybe each goal doesn't need to be achieved: achieving just one agent's goal costs two, while achieving both averages four per agent!
If both value the goal the same, flip a coin to decide who does most of the work: p = 1/2.
What if we don't value the goal the same way? We can't really look at utility in the same way, as the other agent's goals change the original plan.

Compromise, continued
Who should get to do the easier role? If you value the goal more, shouldn't you do more of the work to achieve the common goal? What does this mean if your partner/roommate doesn't value a clean house or a good meal?
Look at worth. If A1 assigns a worth (utility) of 3 and A2 assigns a worth (utility) of 6 to the final goal, we can use probability to make it "fair". Give A1 the cost-2 role (and A2 the cost-6 role) with probability p:
Utility for agent 1 = p(1) + (1−p)(−3)   // it loses utility if it takes the cost-6 role for a benefit of 3
Utility for agent 2 = p(0) + (1−p)(4)
Solving for p by setting the utilities equal: 4p − 3 = 4 − 4p, so p = 7/8.
Thus, I can take an unfair division and make it fair!

Example: conflict I want black on white (in slot 1) You want white on black (in slot 1) Can’t both win. Could flip a coin to decide who wins. Better than both losing. Weightings on coin needn’t be 50-50. May make sense to have person with highest worth get his way – as utility is greater. (Would accomplish his goal alone) Efficient but not fair? What if we could transfer half of the gained utility to the other agent? This is not normally allowed, but could work out well.

Example: semi-cooperative
Both agents want the contents of slots 1 and 2 swapped (and it is more efficient to cooperate). Both have (possibly) conflicting goals for the other slots.
Accomplishing one agent's goal by itself costs 26: 8 for each swap and 10 for the rest (numbers pulled out of the air).
A cooperative swap costs 4 (again, a number pulled out of the air).
Idea: work together to swap, and then flip a coin to see who gets his way for the rest.

Example: semi-cooperative, cont Winning agent: utility: 26-4-10 = 12 Losing agent: utility: -4 (as helped with swap) So with ½ probability: 12*1/2 -4*1/2 = 4 If they could have both been satisfied, assume cost for each is 24. Then utility is 2. Note, they double their utility, if they are willing to risk not achieving the goal. Note, kept just the joint part of the plan that was more efficient, and gambled on the rest (to remove the need to satisfy the other)

Negotiation Domains: Worth-oriented ”Domains where agents assign a worth to each potential state (of the environment), which captures its desirability for the agent”, (Rosenschein & Zlotkin, 1994) agent’s goal is to bring about the state of the environment with highest value we assume that the collection of agents have available a set of joint plans – a joint plan is executed by several different agents Note – not ”all or nothing” – but how close you got to goal.

Worth-oriented Domain: Definition
Can be defined as a tuple ⟨E, Ag, J, c⟩:
E: set of possible environment states
Ag: set of possible agents
J: set of possible joint plans
c: cost of executing a plan

Worth Oriented Domain Rates the acceptability of final states Allows partially completed goals Negotiation : a joint plan, schedules, and goal relaxation. May reach a state that might be a little worse that the ultimate objective Example – Multi-agent Tile world (like airport shuttle) – isn’t just a specific state, but the value of work accomplished

Worth-oriented Domains and Multiple Attributes If you want to pay for some software, then you might consider several attributes of the software such as the price, quality and support – multiple set of attributes. You may be willing to pay more if the quality is above a given limit, i.e. you can’t get it cheaper without compromising on quality. Pareto Optimal – Need to find the price for acceptable quality and support (without compromising on some attributes).

How can we calculate Utility?
Weighting each attribute, e.g. Utility = price·60 + quality·15 + support·25
Rating/ranking each attribute, e.g. price: 1, quality: 2, support: 3
Using constraints on an attribute, e.g. price ∈ [5, 100], quality ∈ [0, 10], support ∈ [1, 5]
Try to find the Pareto optimum.
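A small sketch of the weighted-sum scoring above; the weights, ranges, and sample offers are purely illustrative, and attribute values are first scaled to [0, 1] (with price inverted, since cheaper is better).

# Sketch: weighted-sum utility over multiple attributes.
WEIGHTS = {"price": 0.60, "quality": 0.15, "support": 0.25}
RANGES = {"price": (5, 100), "quality": (0, 10), "support": (1, 5)}

def utility(offer):
    def scale(attr):
        lo, hi = RANGES[attr]
        return (offer[attr] - lo) / (hi - lo)
    scores = {
        "price": 1 - scale("price"),     # cheaper is better, so invert
        "quality": scale("quality"),
        "support": scale("support"),
    }
    return sum(WEIGHTS[a] * scores[a] for a in WEIGHTS)

print(utility({"price": 40, "quality": 8, "support": 4}))
print(utility({"price": 25, "quality": 6, "support": 2}))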

Incomplete Information
We don't know the tasks of others in a TOD.
Solution: exchange the missing information, with a penalty for lying.
Possible lies:
False information (hiding letters, phantom letters)
Not carrying out a commitment

Subadditive Task-Oriented Domain
The cost of the union of two sets of tasks is at most the sum of the costs of the separate sets: for finite X, Y ⊆ T, c(X ∪ Y) ≤ c(X) + c(Y).
Example of a subadditive domain: delivering to one city saves part of the distance to the other (in a tree arrangement).
Example of a subadditive TOD with equality (= rather than <): deliveries in opposite directions – doing both saves nothing.
Not subadditive: doing both actually costs more than the sum of the pieces, say electrical power costs, where I get above a threshold and have to buy new equipment.

Decoy task We call producible phantom tasks decoy tasks (no risk of being discovered). Only unproducible phantom tasks are called phantom tasks. Example: Need to pick something up at store. (Can think of something for them to pick up, but if you are the one assigned, you won’t bother to make the trip.) Need to deliver empty letter (no good, but deliverer won’t discover lie)

Incentive-compatible Mechanism (legend for the table)
L – there exists a beneficial lie in some encounter.
T – there exists no beneficial lie.
T/P – truth is dominant if the penalty for lying is stiff enough.

Explanation of the arrow
If it is never beneficial in a mixed-deal encounter to use a phantom lie (with penalties), then it is certainly never beneficial to do so in an all-or-nothing mixed-deal encounter (which is just a subset of the mixed-deal encounters).

Concave Task-Oriented Domain
Take two sets of tasks X and Y, where X is a subset of Y, and introduce another set of tasks Z. Then:
c(X ∪ Z) − c(X) ≥ c(Y ∪ Z) − c(Y)

Tentative Explanation of Previous Chart I think Arrows show reasons we know this fact (diagonal arrows are between domains). Rule beginning is a fixed point. For example, What is true of a phantom task, may be true for a decoy task in same domain as a phantom is just a decoy task we don’t have to create. Similarly, what is true for a mixed deal may be true for an all or nothing deal (in the same domain) as a mixed deal is an all or nothing deal where one choice is empty. The direction of the relationship may depend on truth (never helps) or lie (sometimes helps). The relationships can also go between domains as sub-additive is a superclass of concave and a super class of modular.

Modular TOD
c(X ∪ Y) = c(X) + c(Y) − c(X ∩ Y)
Notice that modular domains encourage truth telling more than the others.
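A brute-force sketch that checks the three properties for a cost function given explicitly on every subset of a small task set; the two cost tables are illustrative (a "round trip" variant that happens to be modular, and the one-way parcel costs from the earlier slide, which are not even subadditive).

# Sketch: brute-force checks of subadditivity, concavity and modularity.
from itertools import combinations

def subsets(tasks):
    return [frozenset(c) for r in range(len(tasks) + 1) for c in combinations(tasks, r)]

def is_subadditive(c, tasks):
    return all(c[x | y] <= c[x] + c[y]
               for x in subsets(tasks) for y in subsets(tasks))

def is_concave(c, tasks):
    return all(c[x | z] - c[x] >= c[y | z] - c[y]
               for x in subsets(tasks) for y in subsets(tasks) if x <= y
               for z in subsets(tasks))

def is_modular(c, tasks):
    return all(c[x | y] == c[x] + c[y] - c[x & y]
               for x in subsets(tasks) for y in subsets(tasks))

tasks = {"a", "b"}
# Round-trip delivery costs (return to the distribution point): 2 each, 4 for both.
c_round = {frozenset(): 0, frozenset("a"): 2, frozenset("b"): 2, frozenset("ab"): 4}
# One-way costs from the earlier parcel slide: 1 each, 3 for both.
c_oneway = {frozenset(): 0, frozenset("a"): 1, frozenset("b"): 1, frozenset("ab"): 3}

for name, c in [("round trip", c_round), ("one way", c_oneway)]:
    print(name, is_subadditive(c, tasks), is_concave(c, tasks), is_modular(c, tasks))
# round trip: True True True   (modular, hence concave and subadditive)
# one way:    False False False (c({a,b}) = 3 > c({a}) + c({b}) = 2)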

For subadditive domain

Attributes of a task system – Concavity
c(Y ∪ Z) − c(Y) ≤ c(X ∪ Z) − c(X)
The cost that the tasks Z add to a set of tasks Y cannot be greater than the cost Z adds to a subset X of Y; expect Z to add more to the subset (as it is smaller).
At your seats – is the postmen domain concave? (No, unless restricted to trees.)
Example: Y is all shaded/blue nodes, X is the nodes in the polygon. Adding Z adds 0 to X (as the agent was going that way anyway) but adds 2 to its superset Y (as the agent was going around the loop).
Concavity implies subadditivity. Modularity implies concavity.

Examples of task systems
Database Queries: agents have access to a common database and each has to carry out a set of queries; agents can exchange the results of queries and sub-queries.
The Fax Domain: agents are sending faxes to locations on a telephone network; multiple faxes can be sent once the connection is established with the receiving node; the agents can exchange messages to be faxed.

Attributes – Modularity
c(X ∪ Y) = c(X) + c(Y) − c(X ∩ Y)
The cost of the combination of two sets of tasks is exactly the sum of their individual costs minus the cost of their intersection.
Only the Fax Domain is modular (as costs are independent). Modularity implies concavity.

3-dimensional table characterizing the relationships
Arrows indicate implied relationships between cells and implied relationships within the same domain attribute.
L means lying may be beneficial.
T means telling the truth is always beneficial.
T/P refers to lies which are not beneficial because they may always be discovered.

Incentive-Compatible Fixed Points (FP) (postmen return home)
FP1: in a Subadditive TOD, for any Optimal Negotiation Mechanism (ONM) over all-or-nothing deals, "hiding" lies are not beneficial.
Example: A1 hides its letter to c; its utility doesn't increase. (Figure: a delivery graph with edge costs 1, 4, 4, 1.)
If it tells the truth: p = 1/2, and the expected utility of [({a,b,c}, ∅) : 1/2] is 5.
If it lies: p = 1/2 (as the apparent utility is the same), but the real expected utility for agent 1 of [({a,b,c}, ∅) : 1/2] is ½(0) + ½(2) = 1 (as it still has to deliver the hidden letter itself).

FP2: in a Subadditive TOD, for any ONM over mixed deals, every "phantom" lie has a positive probability of being discovered (if the other agent is assigned the phantom delivery, you are found out).
FP3: in a Concave TOD, for any ONM over mixed deals, no "decoy" lie is beneficial (less increased cost is assumed, so probabilities would be assigned to reflect the assumed extra work).
FP4: in a Modular TOD, for any ONM over pure deals, no "decoy" lie is beneficial (modular domains tend to add the exact cost – hard to win).

FP4 example
(Table: each row pairs agent 1's share, U(1), agent 2's apparent (and actual) share, and U(2); the rows include a / bc with utility 4, b / ac, and ab / c with utility 6.)
Suppose agent 2 lies about having a delivery to c. Under the lie, the apparent benefits are no different from the real benefits.
Under the truth, the utilities are 4/2 and someone has to get the better deal (under a pure deal) – just like in this case. The lie makes no difference. (We assume there is some way of deciding who gets the better deal that is fair over time.)

Non-incentive-compatible fixed points
FP5: in a Concave TOD, for any ONM over pure deals, "phantom" lies can be beneficial.
Example (from the next slide): A1 creates a phantom letter at node c; its utility rises from 3 to 4.
Truth: p = ½, so the utility for agent 1 from ({a}, {b}) at ½ is ½(4) + ½(2) = 3.
Lie: ({b,c}, {a}) is the logical division (no probability involved); the utility for agent 1 is 6 (original cost) − 2 (deal cost) = 4.

FP6: in a Subadditive TOD, for any ONM over all-or-nothing deals, "decoy" lies can be beneficial (not harmful), as the lie changes the probability: if you deliver, I make you deliver to h as well.
Example 2 (from the next slide): A1 lies with a decoy letter to h (trying to make agent 2 think that picking up b and c is worse for agent 1 than it really is); its utility rises from 1.5 to 1.72. (If A1 ends up delivering, it does not actually deliver to h.)
If it tells the truth, p (the probability of agent 1 delivering everything) = 9/14, since p(−1) + (1−p)(6) = p(4) + (1−p)(−3) gives 14p = 9.
If it invents task h, p = 11/18, since p(−3) + (1−p)(6) = p(4) + (1−p)(−5).
Utility(p = 9/14) is p(−1) + (1−p)(6) = −9/14 + 30/14 = 21/14 = 1.5.
Utility(p = 11/18) is p(−1) + (1−p)(6) = −11/18 + 42/18 = 31/18 ≈ 1.72.
So lying helped!
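A numeric check of the two probability calculations above; the per-outcome utilities (−1/6 and 4/−3 for the truth, −3/6 and 4/−5 under the decoy) are taken straight from the slide, and the underlying delivery graph is on the next slide.

# Sketch: verify the FP6 arithmetic.  p is the probability that agent 1 delivers everything.
from fractions import Fraction

def equalizing_p(a1_if_p, a1_if_not, a2_if_p, a2_if_not):
    # Solve p*a1_if_p + (1-p)*a1_if_not = p*a2_if_p + (1-p)*a2_if_not for p.
    return Fraction(a2_if_not - a1_if_not,
                    (a1_if_p - a1_if_not) - (a2_if_p - a2_if_not))

p_truth = equalizing_p(-1, 6, 4, -3)
print(p_truth, float(p_truth * -1 + (1 - p_truth) * 6))     # 9/14, 1.5

# Under the decoy lie agent 1 *appears* to get -3 / 6; its true payoff is still -1 / 6,
# because the decoy letter to h is never actually delivered by agent 1.
p_lie = equalizing_p(-3, 6, 4, -5)
print(p_lie, float(p_lie * -1 + (1 - p_lie) * 6))           # 11/18, ~1.72 -> the lie pays off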

(Figures: postmen domain where the couriers return to the post office – a concave example with a phantom letter, and a subadditive example where h is the decoy.)

Non-incentive-compatible fixed points
FP7: in a Modular TOD, for any ONM over pure deals, "hide" lies can be beneficial (you think I have fewer tasks, so an increased load appears to cost more than it really does).
Example 3 (from the next slide): A1 hides its letter to node b.
({e}, {b}): utility for A1 (under the lie) is 0, utility for A2 (under the lie) is 4 – unfair under the lie.
({b}, {e}): utility for A1 (under the lie) is 2, utility for A2 (under the lie) is 2.
So I get sent to b, but I really needed to go there anyway, so my utility is actually 4 (as I don't go to e).

FP8: in a Modular TOD, for any ONM over mixed deals, "hide" lies can be beneficial.
Example 4: A1 hides its letter to node a. A1's utility is then 4.5 > 4 (the utility of telling the truth).
Under truth: utility of [({f,a,e}, {b,c,d}) : ½] = 4 (saves going to two).
Under the lie, dividing as [({e,f,d}, {c,a,b}) : p] would mean you always win and I always lose; since the work is the same, swapping cannot help, so in a mixed deal the shares must be unbalanced.
Try again under the lie with [({a,b}, {c,d,e,f}) : p]:
p(4) + (1−p)(0) = p(2) + (1−p)(6), so 4p = −4p + 6 and p = 3/4.
The actual utility is 3/4(6) + 1/4(0) = 4.5.
Note: when I am assigned {c,d,e,f} (1/4 of the time), I still have to deliver to node a afterwards (after completing my agreed-upon deliveries), so I end up going to 5 places – which is what I was assigned originally – for zero utility from that branch.

Modular

Conclusion
In order to use negotiation protocols, it is necessary to know when the protocols are appropriate.
TODs cover an important set of multi-agent interactions.

MAS Compromise: Negotiation process for conflicting goals
Identify potential interactions
Modify intentions to avoid harmful interactions or create cooperative situations
Techniques required:
Representing and maintaining belief models
Reasoning about other agents' beliefs
Influencing other agents' intentions and beliefs

PERSUADER – case study Program to resolve problems in labor relations domain Agents Company Union Mediator Tasks Generation of proposal Generation of counter proposal based on feedback from dissenting party Persuasive argumentation

Negotiation Methods: Case Based Reasoning Uses past negotiation experiences as guides to present negotiation (like in court of law – cite previous decisions) Process Retrieve appropriate precedent cases from memory Select the most appropriate case Construct an appropriate solution Evaluate solution for applicability to current case Modify the solution appropriately

Case Based Reasoning Cases organized and retrieved according to conceptual similarities. Advantages Minimizes need for information exchange Avoids problems by reasoning from past failures. Intentional reminding. Repair for past failure is used. Reduces computation.

Negotiation Methods: Preference Analysis
A from-scratch planning method based on multi-attribute utility theory. It gets an overall utility curve out of the individual ones and expresses the tradeoffs an agent is willing to make.
Properties of the proposed compromise:
Maximizes joint payoff
Minimizes payoff difference

Persuasive argumentation Argumentation goals Ways that an agent’s beliefs and behaviors can be affected by an argument Increasing payoff Change importance attached to an issue Changing utility value of an issue

Narrowing differences Gets feedback from rejecting party Objectionable issues Reason for rejection Importance attached to issues Increases payoff of rejecting party by greater amount than reducing payoff for agreed parties.

Experiments Without Memory – 30% more proposals Without argumentation – fewer proposals and better solutions No failure avoidance – more proposals with objections No preference analysis – Oscillatory condition No feedback – communication overhead increased by 23%

Multiple Attribute: Example
Two agents are trying to set up a meeting. The first agent wishes to meet later in the day while the second wishes to meet earlier in the day. Both prefer today to tomorrow. While the first agent assigns the highest worth to a meeting at 16:00, s/he also assigns progressively smaller worths to a meeting at 15:00, 14:00, ….
By showing flexibility and accepting a sub-optimal time, an agent can accept a lower worth which may have other payoffs (e.g. reduced travel costs).
(Figure: the worth function for the first agent, rising from 9:00 through 12:00 to a peak of 100 at 16:00. Ref: Rosenschein & Zlotkin, 1994)

Utility Graphs – convergence
(Figure: utility versus number of negotiation rounds for agent i and agent j, with the curves meeting at the point of acceptance.)
Each agent concedes in every round of negotiation, and eventually they reach an agreement.

Utility Graphs – no agreement
(Figure: utility versus number of negotiation rounds for agent i and agent j; the curves never meet.)
Agent j finds the offer unacceptable, and no agreement is reached.

Argumentation
The process of attempting to convince others of something.
Why argument-based negotiation? Game-theoretic approaches have limitations:
Positions cannot be justified – why did the agent pay so much for the car?
Positions cannot be changed – initially I wanted a car with a sun roof, but I changed my preference during the buying process.

4 modes of argument (Gilbert 1994): Logical - ”If you accept A and accept A implies B, then you must accept that B” Emotional - ”How would you feel if it happened to you?” Visceral - participant stamps their feet and show the strength of their feelings Kisceral - Appeals to the intuitive – doesn’t this seem reasonable

Logic-Based Argumentation
Basic form of argumentation: Database ⊢ (Sentence, Grounds), where
Database is a (possibly inconsistent) set of logical formulae;
Sentence is a logical formula known as the conclusion;
Grounds is a set of logical formulae such that Grounds ⊆ Database and Sentence can be proved from Grounds (we give reasons for our conclusions).

Attacking Arguments Milk is good for you Cheese is made from milk Cheese is good for you Two fundamental kinds of attack: Undercut (invalidate premise): milk isn’t good for you if fatty Rebut (contradict conclusion): Cheese is bad for bones

Attacking arguments
Derived notions of attack used in the literature:
A attacks B ≡ A undercuts B or A rebuts B
A defeats B ≡ A undercuts B, or (A rebuts B and B does not undercut A)
A strongly attacks B ≡ A attacks B and B does not undercut A
A strongly undercuts B ≡ A undercuts B and B does not undercut A

Proposition: Hierarchy of attacks
Attacks: a = u ∪ r
Defeats: d = u ∪ (r − u⁻¹)
Undercuts: u
Strongly attacks: sa = (u ∪ r) − u⁻¹
Strongly undercuts: su = u − u⁻¹

Abstract Argumentation
Concerned with the overall structure of the argument (rather than the internals of arguments).
Write x → y to indicate that "argument x attacks argument y", "x is a counterexample of y", or "x is an attacker of y", where we are not actually concerned with what x and y are.
An abstract argument system is a collection of arguments together with a relation "→" saying what attacks what.
An argument is out if it has an undefeated attacker, and in if all its attackers are defeated. (Assumption – an argument holds unless proven otherwise.)

Admissible Arguments – mutually defensible arguments
A set of arguments S attacks an argument x if some member y of S attacks x (y → x)
An argument x is acceptable with respect to S if every attacker of x is attacked by S
An argument set is conflict free if none of its members attack each other
A set is admissible if it is conflict free and each of its arguments is acceptable (any attackers are attacked)
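A brute-force sketch of these definitions over a small, purely hypothetical attack relation (the argument names and attacks below are not the ones in the figure that follows); it prints every admissible set.

# Sketch: conflict-freeness, acceptability and admissibility in an abstract argument system.
from itertools import chain, combinations

ARGS = {"a", "b", "c", "d"}
ATTACKS = {("a", "b"), ("b", "c"), ("d", "c")}        # (x, y) means "x attacks y"

def attacks(x, y):
    return (x, y) in ATTACKS

def conflict_free(s):
    return not any(attacks(x, y) for x in s for y in s)

def acceptable(x, s):
    # Every attacker of x must itself be attacked by some member of s.
    return all(any(attacks(z, y) for z in s) for y in ARGS if attacks(y, x))

def admissible(s):
    return conflict_free(s) and all(acceptable(x, s) for x in s)

all_subsets = chain.from_iterable(combinations(sorted(ARGS), r) for r in range(len(ARGS) + 1))
for s in map(set, all_subsets):
    if admissible(s):
        print(sorted(s))        # [], ['a'], ['d'], ['a', 'd'] for this attack relation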

(Figure: an abstract argument graph over arguments a, b, c, d.)
Which sets of arguments can be true? In this example, c is always attacked and d is always acceptable.

An Example Abstract Argument System