
Reaching Agreements: Negotiation


1 Reaching Agreements: Negotiation

2 Typical Competition Mechanisms
Auction: allocate goods or tasks to agents through a market. When a richer technique for reaching agreements is needed:
Negotiation: reach agreements through interaction.
Argumentation: resolve conflicts through debate.

3 Negotiation Mechanism
Negotiation is the process of reaching agreements on matters of common interest. It usually proceeds in a series of rounds, with every agent making a proposal at every round. Issues in the negotiation process:
Negotiation Space: all possible deals that agents can make, i.e., the set of candidate deals.
Negotiation Protocol: a rule that determines the process of a negotiation: how and when a proposal can be made, when a deal has been struck, when the negotiation should be terminated, and so on.
Negotiation Strategy: when and what proposals should be made.

4 Protocol
The protocol determines the kinds of deals that can be made and the sequence of offers and counter-offers that is allowed. The protocol is like the rules of a chess game, whereas the strategy is the way in which a player decides which move to make.

5 Game Theory
Computers make concrete the notion of strategy, which is central to game playing.

6 Mechanism Design
Mechanism design is the design of protocols for governing multi-agent interactions. Desirable properties of mechanisms are:
Convergence/guaranteed success: the mechanism ensures that eventually agreement is certain to be reached.
Maximising social welfare: the mechanism ensures that any outcome maximises the sum of the utilities of the negotiation participants.
Pareto efficiency: there is no other outcome that will make at least one agent better off without making at least one other agent worse off.
Individual rationality: following the protocol is in the best interest of the negotiation participants.
Stability: the protocol provides all agents an incentive to behave in a particular way; no agent should have an incentive to deviate from the agreed-upon strategy (e.g., a Nash equilibrium).
Simplicity: low computational demands and little communication; using the protocol, a participant can easily determine the optimal strategy.
Distribution: no central decision maker and no single point of failure.
Symmetry: we do not want agents to have to play different roles.

7 Attributes not universally accepted
These attributes are not universally accepted, and there can be trade-offs: efficiency and stability, for example, are sometimes in conflict with each other.

8 Protocols
What is an elevator protocol?
Which direction you face
How close to another person you stand
Whether you are allowed to talk to a stranger
Where you stand (not in front of the buttons, near the back)
What to do if a person is running for the elevator

9 Negotiation Protocol
Who begins
Taking turns
Building off previous offers
Obligations
Privacy
Which proposals are legal, as a result of the negotiation history

10 Negotiation Process 1
Negotiation usually proceeds in a series of rounds, with every agent making a proposal at every round. Communication during negotiation consists of proposals and counter-proposals, until one agent (agent i, say) concedes to the other (agent j). The proposals that agents make are defined by their strategy, must be drawn from the negotiation set, and must be legal as defined by the protocol. If an agreement is reached as defined by the agreement rule, then negotiation terminates. Example: agreeing on a price. If agent i is going to buy a book from agent j and agent i can only afford to pay a certain price, agent i will continue to negotiate on the price until the offer from agent j is a price that agent i can pay.

11 Negotiation Process 2
Another way of looking at the negotiation process: the proposals by agent Ai and the proposals by agent Aj form two converging sequences that meet at the point of acceptance/agreement. (The constraints on proposals and the termination rule are the same as on the previous slide.)

12 Typical Negotiation Problems
Task-Oriented Domains (TOD): domains in which an agent's activity can be defined in terms of a set of tasks that it has to achieve. The target of a negotiation is to minimize the cost of completing the tasks.
State-Oriented Domains (SOD): domains where each agent is concerned with moving the world from an initial state into one of a set of goal states. The target of a negotiation is to achieve a common goal. Main attribute: actions have side effects (positive/negative).
Worth-Oriented Domains (WOD): domains where agents assign a worth to each potential state, which captures its desirability for the agent. The target of a negotiation is to maximize mutual worth.

13 Single Issue Negotiation
A single issue, like money, is symmetric: what is more for you is less for me, and both would benefit equally if roles were reversed. If you get more money, I get less. If you travel less than I do, I would benefit by switching routes with you.

14 Multiple Issue Negotiation
There could be hundreds of issues (cost, delivery date, size, quality).
Some may be inter-related (as size goes down, cost goes down and quality goes up?).
It is not clear what a true concession is (larger may be cheaper, but harder to store, or it spoils before it can be used).
It may not even be clear what is up for negotiation ("I didn't realize not having any test was an option").

15 How many agents are involved?
One to one.
One to many (an auction is an example of one seller and many buyers).
Many to many (could be divided into buyers and sellers, or all could be equal); with n agents there are n(n-1)/2 pairs.

16 Negotiation Domains: Task-oriented
"Domains in which an agent's activity can be defined in terms of a set of tasks that it has to achieve" (Rosenschein & Zlotkin, 1994).
An agent can carry out the tasks without interference from other agents.
All resources are available to the agent.
Tasks are redistributed for the benefit of all agents.

17 Formalization of TOD
A Task-Oriented Domain (TOD) is a triple <T, Ag, c> where:
T is a finite set of all possible tasks;
Ag = {A1, A2, ..., An} is a list of participant agents;
c: 2^T → R+ defines the cost of executing each subset of tasks.
Assumptions on the cost function:
c(∅) = 0.
The cost of a subset of tasks does not depend on who carries them out (an idealized situation).
The cost function is monotonic: more tasks, more cost (it can't cost less to take on more tasks): T1 ⊆ T2 implies c(T1) ≤ c(T2).
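To make the formalization concrete, here is a minimal sketch in Python of how such a cost function might be represented and its assumptions checked; the dictionary values are illustrative, not from the source:

    # Toy TOD cost function, keyed by frozensets of tasks (values illustrative).
    cost = {
        frozenset(): 0,
        frozenset({'a'}): 1,
        frozenset({'b'}): 1,
        frozenset({'a', 'b'}): 3,
    }

    def is_monotonic(cost):
        # T1 subset-of T2 implies c(T1) <= c(T2), checked pairwise.
        return all(cost[t1] <= cost[t2]
                   for t1 in cost for t2 in cost if t1 <= t2)

    assert cost[frozenset()] == 0   # c(empty set) = 0
    assert is_monotonic(cost)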

18 Redistribution of Tasks
Given a TOD <T, {A1, A2}, c>:
An encounter (instance) within the TOD is an ordered list (T1, T2) such that for all k, Tk ⊆ T. This is an original allocation of tasks that the agents might want to reallocate.
A pure deal on an encounter is a redistribution of tasks among the agents: (D1, D2) such that D1 ∪ D2 = T1 ∪ T2. Specifically, (T1, T2) itself is called the conflict deal.
For each deal δ = (D1, D2), the cost of the deal to agent k is Costk(δ) = c(Dk).
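A sketch of how the pure deals of an encounter could be enumerated directly from this definition (Python; the function names are my own):

    from itertools import combinations

    def powerset(s):
        s = list(s)
        return [frozenset(c) for r in range(len(s) + 1)
                for c in combinations(s, r)]

    def pure_deals(t1, t2):
        # All (D1, D2) with D1 union D2 = T1 union T2.
        union = frozenset(t1) | frozenset(t2)
        return [(d1, d2) for d1 in powerset(union)
                for d2 in powerset(union) if d1 | d2 == union]

    # The encounter of slide 20: T1 = {a}, T2 = {a, b} gives 9 pure deals.
    assert len(pure_deals({'a'}, {'a', 'b'})) == 9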

19 Examples of TOD
Parcel Delivery: several couriers have to deliver sets of parcels to different cities. The target of negotiation is to reallocate deliveries so that the cost of travel for each courier is minimal.
Database Queries: several agents have access to a common database, and each has to carry out a set of queries. The target of negotiation is to arrange the queries so as to maximize the efficiency of database operations (join, projection, union, intersection, ...).

20 Possible Deals
Consider an encounter from the parcel delivery domain. Suppose we have two agents. Both agents have parcels to deliver to city a, and only agent 2 has parcels to deliver to city b, so T1 = {a} and T2 = {a, b}. There are nine distinct pure deals in this encounter:
({a}, {b})
({b}, {a})
({a,b}, ∅)
(∅, {a,b})
({a}, {a,b})   <- the conflict deal (T1, T2)
({b}, {a,b})
({a,b}, {a})
({a,b}, {b})
({a,b}, {a,b})

21 Utility Function for Agents
Given an encounter (T1, T2), the utility function for each agent is defined as follows:
Utilityk(δ) = c(Tk) − Costk(δ)
where δ = (D1, D2) is a deal; c(Tk) is the stand-alone cost to agent k (the cost of achieving its goal with no help); and Costk(δ) is the cost of its part of the deal. Note that the utility of the conflict deal is always 0.
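Continuing the sketch from slide 17, the utility function could be coded directly from this definition (agents indexed 0 and 1; same assumptions as before):

    def utility(k, deal, encounter, cost):
        # Utility_k(deal) = c(T_k) - c(D_k): stand-alone cost minus deal cost.
        return cost[frozenset(encounter[k])] - cost[frozenset(deal[k])]

    encounter = ({'a'}, {'a', 'b'})
    conflict = tuple(frozenset(t) for t in encounter)
    # The utility of the conflict deal is always 0, for both agents.
    assert all(utility(k, conflict, encounter, cost) == 0 for k in (0, 1))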

22 Parcel Delivery Domain (cont)
[Figure: a distribution point connected to city a and city b, each at distance 1.]
Cost function: c(∅) = 0, c({a}) = 1, c({b}) = 1, c({a,b}) = 3.
Utility for agent 1:
Utility1({a}, {b}) = 0
Utility1({b}, {a}) = 0
Utility1({a,b}, ∅) = -2
Utility1(∅, {a,b}) = 1
Utility for agent 2:
Utility2({a}, {b}) = 2
Utility2({b}, {a}) = 2
Utility2({a,b}, ∅) = 3
Utility2(∅, {a,b}) = 0

23 Dominant Deals
Deal δ dominates deal δ' if δ is better for at least one agent and not worse for the other, i.e.:
δ is at least as good for every agent as δ': ∀k ∈ {1,2}, Utilityk(δ) ≥ Utilityk(δ')
δ is better for some agent than δ': ∃k ∈ {1,2}, Utilityk(δ) > Utilityk(δ')
Deal δ weakly dominates deal δ' if at least the first condition holds. Any reasonable agent would prefer (or go along with) δ over δ' if δ dominates or weakly dominates δ'.

24 Negotiation Set: Space of Negotiation
A deal δ is called individual rational if δ weakly dominates the conflict deal (it is no worse than what you already have).
A deal δ is called Pareto optimal if there does not exist another deal δ' that dominates δ (a best deal for x without disadvantaging y).
The set of all deals that are individual rational and Pareto optimal is called the negotiation set (NS).
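Putting the pieces of the preceding slides together, a sketch of computing the negotiation set by filtering the pure deals (all helper names come from the earlier sketches):

    def dominates(d, e, encounter, cost):
        # d dominates e: at least as good for both agents, better for one.
        pairs = [(utility(k, d, encounter, cost),
                  utility(k, e, encounter, cost)) for k in (0, 1)]
        return (all(a >= b for a, b in pairs) and
                any(a > b for a, b in pairs))

    def negotiation_set(encounter, cost):
        deals = pure_deals(*encounter)
        # Individual rational: no worse than the conflict deal (utility >= 0).
        ir = [d for d in deals
              if all(utility(k, d, encounter, cost) >= 0 for k in (0, 1))]
        # Pareto optimal: no other deal dominates it.
        return [d for d in ir
                if not any(dominates(e, d, encounter, cost) for e in deals)]

    # For the running example this yields ({a},{b}), ({b},{a}), (empty,{a,b}).
    assert len(negotiation_set(encounter, cost)) == 3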

25 Utility Function for Agents
Utility1({a}, {b}) = 0        Utility2({a}, {b}) = 2
Utility1({b}, {a}) = 0        Utility2({b}, {a}) = 2
Utility1({a,b}, ∅) = -2       Utility2({a,b}, ∅) = 3
Utility1(∅, {a,b}) = 1        Utility2(∅, {a,b}) = 0
Utility1({a}, {a,b}) = 0      Utility2({a}, {a,b}) = 0
Utility1({b}, {a,b}) = 0      Utility2({b}, {a,b}) = 0
Utility1({a,b}, {a}) = -2     Utility2({a,b}, {a}) = 2
Utility1({a,b}, {b}) = -2     Utility2({a,b}, {b}) = 2
Utility1({a,b}, {a,b}) = -2   Utility2({a,b}, {a,b}) = 0

26 Individual Rational Deals
The deals that are individual rational (utility ≥ 0 for both agents) are:
({a}, {b})
({b}, {a})
(∅, {a,b})
({a}, {a,b})
({b}, {a,b})

27 Pareto Optimal Deals
({a}, {b})
({b}, {a})
({a,b}, ∅)
(∅, {a,b})

28 Negotiation Set
Individual Rational Deals: ({a}, {b}), ({b}, {a}), (∅, {a,b}), ({a}, {a,b}), ({b}, {a,b})
Pareto Optimal Deals: ({a}, {b}), ({b}, {a}), ({a,b}, ∅), (∅, {a,b})
Negotiation Set (the intersection): ({a}, {b}), ({b}, {a}), (∅, {a,b})

29 Negotiation Set illustrated
Create a scatter plot of the utility for i (one axis) against the utility for j (the other axis). Only the deals where both utilities are positive are individually rational for both agents (the origin is the conflict deal). Which are Pareto optimal?

30 Negotiation Set in Task-oriented Domains
[Figure: utilities for agents i and j plotted against each other; the circle delimits the space of all possible deals, with the conflict deal at E.]
All deals left of the line BD are not individual rational (negative utility) for agent j, so j would be better off with the conflict deal E. Similarly, all deals below the line AC are not individual rational (negative utility) for agent i, so i would be better off with the conflict deal E. So the negotiation set (Pareto optimal + individual rational deals) contains the deals that lie in the shaded area BEC.

31 The Monotonic Concession Protocol
The rules of this protocol are as follows:
Negotiation proceeds in rounds.
On round 1, agents simultaneously propose a deal from the negotiation set (an agent may re-propose the same deal later).
Agreement is reached if one agent finds that the deal proposed by the other is at least as good as or better than its own proposal.
If no agreement is reached, then negotiation proceeds to another round of simultaneous proposals. An agent is not allowed to offer the other agent less (in terms of utility) than it did in the previous round: it can either stand still or make a concession. (This assumes we know what the other agent values.)
If neither agent makes a concession in some round, then negotiation terminates with the conflict deal.
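A sketch of the round structure of the protocol, assuming a hypothetical agent object with propose() and utility_of() methods (this interface is my own, not from the source):

    def monotonic_concession(a, b, max_rounds=100):
        offer_a, offer_b = a.propose(None, None), b.propose(None, None)
        for _ in range(max_rounds):
            # Agreement: an agent likes the other's offer at least as much
            # as its own.
            if a.utility_of(offer_b) >= a.utility_of(offer_a):
                return offer_b
            if b.utility_of(offer_a) >= b.utility_of(offer_b):
                return offer_a
            next_a = a.propose(offer_a, offer_b)
            next_b = b.propose(offer_b, offer_a)
            # If neither new offer concedes any utility to the opponent,
            # negotiation terminates with the conflict deal.
            if (b.utility_of(next_a) <= b.utility_of(offer_a) and
                    a.utility_of(next_b) <= a.utility_of(offer_b)):
                return 'conflict deal'
            offer_a, offer_b = next_a, next_b
        return 'conflict deal'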

32 Condition to Consent an Agreement
Agreement is reached if one of the agents finds that the deal proposed by the other is at least as good as or better than the proposal it made itself:
Utility1(δ2) ≥ Utility1(δ1) or Utility2(δ1) ≥ Utility2(δ2)

33 The Monotonic Concession Protocol
Advantages:
Symmetrically distributed (no agent plays a special role)
Ensures convergence: it will not go on indefinitely
Disadvantages:
Agents can run into conflicts
Inefficient: no guarantee that an agreement will be reached quickly

34 Negotiation Strategy
Given the negotiation space and the Monotonic Concession Protocol, a strategy of negotiation is an answer to the following questions:
What should an agent's first proposal be?
On any given round, who should concede?
If an agent concedes, then how much should it concede?

35 The Zeuthen Strategy
Q: What should my first proposal be?
A: The best deal for you among all possible deals in the negotiation set. (This is a way of telling the other agent what you value.) Agent 1 opens with agent 1's best deal; agent 2 opens with agent 2's best deal.

36 The Zeuthen Strategy
Q: Do I need to make a concession in this round?
A: If you are not willing to risk a conflict, you should make a concession. Each agent must therefore ask itself: how much am I willing to risk a conflict?

37 Willingness to Risk Conflict
Suppose you have conceded a lot. Then:
You have lost much of your expected utility (it is closer to zero).
In case conflict occurs, you are not much worse off.
You are therefore more willing to risk conflict.
In general, an agent's willingness to risk conflict depends on the difference between the utility it would lose by making a concession and the utility it would lose by causing a conflict, relative to its current offer.

38 Risk Evaluation
To compute riski, you have to calculate:
How much you will lose if you make a concession and accept your opponent's offer.
How much you will lose if you stand still, which causes a conflict.
riski = (utility agent i loses by conceding and accepting agent j's offer) / (utility agent i loses by not conceding and causing a conflict)
      = (Utilityi(δi) − Utilityi(δj)) / Utilityi(δi)
where δi and δj are the current offers of agents i and j, respectively. Thus riski measures willingness to risk conflict (1 means perfectly willing to risk).
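The risk computation is one line of arithmetic; a sketch in Python, using the common textbook convention that the risk is 1 when the agent's own offer already has utility 0:

    def risk(u_own, u_other):
        # (Utility_i(delta_i) - Utility_i(delta_j)) / Utility_i(delta_i).
        # An agent whose current offer is worth 0 loses nothing in a
        # conflict, so it is fully willing to risk one.
        return 1.0 if u_own == 0 else (u_own - u_other) / u_own

    # Slide 42's example: each agent's opening offer gives the other 0, so
    # risk_1 = (1-0)/1 = 1 and risk_2 = (2-0)/2 = 1.
    assert risk(1, 0) == 1.0 and risk(2, 0) == 1.0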

39 The Risk Factor
One way to think about which agent should concede at each step is to consider how much each has to lose by running into conflict at that point. An agent who has made many concessions has less to lose if conflict is reached than an agent who has made none. If we have a way of measuring each agent's willingness to risk conflict, we can have the agent that is less willing to risk conflict make the concession. [Figure: Ai's best deal, Aj's best deal and the conflict deal, with the maximum loss from conflict and the maximum loss from concession marked. Figure from Rosenschein & Zlotkin, 1994.]

40 The Zeuthen Strategy
Q: If I concede, then how much should I concede?
A: Just enough to change the balance of risk (otherwise, it will simply be your turn to concede again at the next round).

41 About MCP and Zeuthen Strategies
Advantages:
Simple, and reflects the way human negotiations work.
Stability: the strategy pair is in Nash equilibrium; if one agent is using the strategy, the other can do no better than use it itself.
Disadvantages:
Computationally expensive: players need to compute the entire negotiation set.
Communication burden: the negotiation process may involve many steps.
(Figure note: for agent i, the block to the far left represents its best deal, and deals become progressively worse moving right. At each negotiation step, at least one agent must take one or more steps toward the opponent, and the two must cross each other at some point. Agents cannot backtrack, nor can they both stand still.)

42 Can they reach an agreement?
Parcel delivery domain: recall that agent 1 delivers to a, and agent 2 delivers to both a and b.
Negotiation set: ({a}, {b}), ({b}, {a}), (∅, {a,b})
Utility of agent 1: Utility1({a}, {b}) = 0, Utility1({b}, {a}) = 0, Utility1(∅, {a,b}) = 1
Utility of agent 2: Utility2({a}, {b}) = 2, Utility2({b}, {a}) = 2, Utility2(∅, {a,b}) = 0
First offers: agent 1 proposes (∅, {a,b}); agent 2 proposes ({a}, {b}). The risk of conflict is then (1 − 0)/1 = 1 for agent 1 and (2 − 0)/2 = 1 for agent 2.
Can they reach an agreement? Who will concede?

43 Conflict Deal
Here the risks are equal, so each agent thinks the other should concede ("he should concede"). Under the Zeuthen strategy, when the risks are tied both agents must concede; if neither does, the protocol ends with the conflict deal.

44 Parcel Delivery Domain: Example 2
[Figure: a distribution point and cities a, b, c, d, with edge weights 7, 7, 1, 1, 1.]
Conflict deal: ({a,b,c,d}, {a,b,c,d})
Cost function: c(∅) = 0; c({a}) = c({d}) = 7; c({b}) = c({c}) = c({a,b}) = c({c,d}) = 8; c({b,c}) = c({a,b,c}) = c({b,c,d}) = 9; c({a,d}) = c({a,b,d}) = c({a,c,d}) = c({a,b,c,d}) = 10
Negotiation set:
({a,b,c,d}, ∅)
({a,b,c}, {d})
({a,b}, {c,d})
({a}, {b,c,d})
(∅, {a,b,c,d})

45 Parcel Delivery Domain: Example 2
With stand-alone cost c({a,b,c,d}) = 10 for each agent, the utilities of the deals in the negotiation set are:
No  Pure Deal          Agent 1's Utility  Agent 2's Utility
1   ({a,b,c,d}, ∅)     0                  10
2   ({a,b,c}, {d})     1                  3
3   ({a,b}, {c,d})     2                  2
4   ({a}, {b,c,d})     3                  1
5   (∅, {a,b,c,d})     10                 0
[Figure: the five deals plotted in utility space for agents 1 and 2, with the conflict deal at the origin.]

46 Nash Equilibrium
The Zeuthen strategy is in Nash equilibrium: under the assumption that one agent is using the strategy, the other can do no better than use it himself. Generally, Nash equilibrium is not applicable in negotiation settings because it requires both sides' utility functions to be known. It is of particular interest to the designer of automated agents: it does away with any need for secrecy on the part of the programmer. An agent's strategy can be publicly known, and no other agent designer can exploit the information by choosing a different strategy. In fact, it is desirable that the strategy be known, to avoid inadvertent conflicts.

47 Task Oriented Domain
Non-conflicting jobs.
Negotiation: redistribute tasks to everyone's mutual benefit.
Example: the postmen domain.

48 State Oriented Domain
Goals are acceptable final states (a superset of TOD).
Actions have side effects: an agent doing one action might hinder or help another agent. For example, on(white, gray) has the side effect of clear(black).
Negotiation: develop joint plans and schedules for the agents, to help and not hinder other agents.
Example: the slotted blocks world, where blocks cannot go anywhere on the table, only in slots (a restricted resource).

49 Joint plan is used to mean "what they both do", not "what they do together" – just the joining of plans. There is no joint goal! The actions taken by agent k in the joint plan are called k's role and are written Jk; c(J)k is the cost of k's role in joint plan J. In a TOD you cannot do another agent's task or get in its way, so coordinated plans are never worse: you can always just do your original task. With an SOD, agents may get in each other's way. Don't accept partially completed plans.

50 Assumptions of SOD
Agents maximize expected utility (they prefer a 51% chance of getting $100 to a sure $50).
An agent cannot commit itself (as part of the current negotiation) to behavior in a future negotiation.
Interagent comparison of utility: common utility units.
Symmetric abilities (all agents can perform the tasks, and the cost is the same regardless of which agent performs them).
Binding commitments.
No explicit utility transfer (no "money" that can be used to compensate one agent for a disadvantageous agreement).

51 Achievement of Final State
The goal of each agent is represented as a set of states that it would be happy with. We are looking for a state in the intersection of the goals. Possibilities:
Both goals can be achieved, at a gain to both.
The goals may contradict, so there is no mutually acceptable state.
A common state can be found, but perhaps it cannot be reached with the primitive operations of the domain.
There may be a reachable state which satisfies both, but it may be too expensive; the agents are unwilling to expend the effort.

52 Example
Suppose there are two states that satisfy both agents.
State 1: two roles; one has a cost of 6 for one agent and 2 for the other.
State 2: two roles, but both cost 5.
State 1 is cheaper (overall), but state 2 is more equal. How should this be handled?

53 Mixed joint plans
Instead of always picking the plan that is unfair to one agent (but better overall), use a lottery: assign a probability that each agent gets a certain role. This is called a mixed joint plan – a plan with a probability. The expected utility is then the same for both (as their costs/benefits are symmetric).

54 Cost
If δ = (J:p) is a deal, then
costi(δ) = p·c(J)i + (1−p)·c(J)k
where k is i's opponent – i plays k's role with probability (1−p). Utility is simply the difference between the cost of achieving the goal alone and the expected cost under the joint plan.
We want a symmetric mechanism that is in equilibrium, where no one is motivated to change strategies. We choose the deal that maximizes the product of the utilities (as this is a fairer division). Try dividing a utility of 10 in various ways to see when the product is maximized.
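The suggested exercise is easy to check mechanically; a one-liner showing that, for a fixed total, the product of utilities is maximized at the equal split:

    # Divide a total utility of 10 between two agents in integer steps.
    best = max(((u, 10 - u) for u in range(11)), key=lambda s: s[0] * s[1])
    assert best == (5, 5)   # 5*5 = 25 beats any uneven split, e.g. 6*4 = 24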

55 Examples: Cooperative
Slotted blocks world: initially the white block is at slot 1 and the black block at slot 2. Agent 1 wants black in 1; agent 2 wants white in 2 (both goals are compatible). Assume a pick-up costs 1 and a set-down costs 1. A joint plan is mutually beneficial: each agent picks up a block at the same time, costing each agent 2 – a win, since neither had to move the other block out of the way! If done by one agent alone, the cost would be 4, so the utility to each is 2.

56 Examples: Compromise
Slotted blocks world: initially the white block is at slot 1, the black block at slot 2, and two gray blocks at slot 3. Agent 1 wants black in 1, but not on the table. Agent 2 wants white in 2, but not directly on the table. Alone, agent 1 could just pick up black and place it on white; similarly for agent 2. But together, all blocks must be picked up and put down. Best joint plan: one agent picks up black, while the other agent rearranges (cost 6 for one, 2 for the other).

57 Compromise, continued
Who should get the easier role? Look at worth. If A1 assigns a worth of 3 to its goal and A2 assigns a worth of 6, we can use probability to make the division "fair":
Assign A1 to the cost-2 role 7/8 of the time. Expected cost for A1 = 7/8·2 + 1/8·6 = 5/2, so expected utility for A1 = 3 − 5/2 = 1/2 (less than its worth).
Assign A2 to the cost-2 role 1/8 of the time. Expected cost for A2 = 1/8·2 + 7/8·6 = 11/2, so expected utility for A2 = 6 − 11/2 = 1/2 (less than its worth).
Note we have split the utility: each agent ends up 1/2 below its worth, but the agent who valued the goal more did more of the work. Lying?
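The 7/8 can be derived rather than guessed; a short check under the slide's numbers (worths 3 and 6, roles costing 2 and 6):

    # U1(p) = 3 - (2p + 6(1-p)) = 4p - 3   (A1 takes the cost-2 role w.p. p)
    # U2(p) = 6 - (2(1-p) + 6p) = 4 - 4p
    # Fair split: U1(p) = U2(p)  =>  8p = 7  =>  p = 7/8.
    p = 7 / 8
    assert 4 * p - 3 == 4 - 4 * p == 0.5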

58 Example: conflict
I want black on white (in slot 1). You want white on black (in slot 1). We can't both win. We could flip a coin to decide who wins – better than both losing. The weightings on the coin needn't be 50/50: it may make sense to have the agent with the highest worth get its way, as the resulting utility is greater (that agent would accomplish its goal alone anyway). Efficient, but not fair? What if we could transfer half of the gained utility to the other agent? For more complicated goals, the agents could also work together to accomplish the joint part of the goal, and then flip a coin for the rest (so the loser would have negative utility).

59 Example: semi-cooperative
Both agents want the contents of slots 16 and 17 swapped. Both have different goals for the other slots, but those goals could BOTH be achieved (at greater expense to both). Accomplishing one agent's goal by itself costs 26: 8 for each swap and 10 for the rest. A cooperative swap costs 4. Idea: work together on the swap, and then flip a coin to see who gets his way for the rest.

60 Example: semi-cooperative, cont
Winning agent: utility 26 − 14 = 12 (it pays 4 for the cooperative swap plus 10 for the rest). Losing agent: utility −4 (it pays the swap cost and achieves nothing else). So with probability ½ each: 12·½ − 4·½ = 4. If instead both goals were satisfied, assume the cost for each is 24; then the utility for each is 26 − 24 = 2. Note that the agents double their expected utility if they are willing to risk not achieving the goal. Also note that they kept just the joint part of the plan that was more efficient, and gambled on the rest (removing the need to satisfy the other).

61 Worth Oriented Domain
Rates the acceptability of final states and allows partially completed goals.
Negotiation: a joint plan, schedules, and goal relaxation. The agents may reach a state that is a little worse than the ultimate objective.
Example: the multi-agent tileworld (like an airport shuttle) – the goal isn't just a specific state, but the value of the work accomplished.

62 Domain Definitions
Graph (city map): G = (V, E)
v ∈ V: nodes (addresses / the post office)
e ∈ E: edges (roads)
Weight function (distance of a road): W: E → N
Letters for agent i: Li, i ∈ {A, B}, with LA ∩ LB = ∅
Cost(L) ∈ N: the weight of a minimum-weight cycle that starts at the post office, visits all vertices of L, and ends at the post office.

63 Negotiation Protocol
P(d): the product of the two agents' utilities from deal d. This gives a product-maximizing negotiation protocol (a one-step protocol, or a concession protocol):
At each t ≥ 0, A offers d(A,t) and B offers d(B,t), such that:
Both deals are from the negotiation set.
∀i ∈ {A,B} and ∀t > 0: Utilityi(d(i,t)) ≤ Utilityi(d(i,t−1)) (I am making concessions, so my utility from my own offers is non-increasing).
Negotiation ends in:
Conflict – no one will change its offer: for all i, Utilityi(d(i,t)) = Utilityi(d(i,t−1))
Agreement – ∃ j ≠ i ∈ {A,B} with Utilityj(d(i,t)) ≥ Utilityj(d(j,t)):
Only A accepts => agree on d(B,t)
Only B accepts => agree on d(A,t)
Both A and B accept => agree on the d(k,t) with P(d(k,t)) = max{P(d(A,t)), P(d(B,t))} (of those that both accept, pick the one with the higher product)
Both accept and P(d(A,t)) = P(d(B,t)) => flip a coin
Deals may be pure deals or mixed deals.

64 Mixed deal
A mixed deal adds an element of probability: the agents will perform (DA, DB) with probability p, or (DB, DA) with probability 1−p.
Costi([(DA,DB):p]) = p·Cost(Di) + (1−p)·Cost(Dj) (an expected cost)
Utilityi([d:p]) = Cost(Li) − Costi([d:p]) (the cost of doing it alone minus the expected cost of the mixed deal)
All-or-nothing deal: the mixed deal m = [(LA ∪ LB, ∅):p], 0 ≤ p ≤ 1, such that m ∈ NS and P(m) = max over d ∈ NS of P(d).
Mixed deals make the solution space of deals continuous, rather than discrete as it was before.
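A sketch of the mixed-deal cost and utility formulas in Python (cost is a function over task sets, stand_alone[i] plays the role of Cost(L_i); the names are mine):

    def mixed_cost(i, deal, p, cost):
        # Agent i performs its own role D_i with probability p and the
        # opponent's role with probability 1-p.
        return p * cost(deal[i]) + (1 - p) * cost(deal[1 - i])

    def mixed_utility(i, deal, p, stand_alone, cost):
        # Utility_i([d:p]) = Cost(L_i) - Cost_i([d:p]).
        return stand_alone[i] - mixed_cost(i, deal, p, cost)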

65 Hidden letters
Utility is figured as if letter b didn't exist. We may decide to have A2 deliver both; it doesn't cost A2 anything, but there is no benefit to A2 either – that doesn't seem fair. If we saw b, we might instead have A1 do both (as b is on A1's way).
Pure deal (one agent delivers all): [(d, ∅):1/2], expected cost 4.
Mixed deal: [(d, ∅):3/8], expected cost 3 for A1 and 5 for A2 – this splits the utility.
If A1 tells the truth and each works alone, A1's cost is 10 and A2's is 8. If A1 lies and each works alone, A1's declared cost is 6 and A2's is 8. The pure deal doesn't split the utility equally; we can compute p mathematically:
Util1 = 6 − 8p = 8 − 8(1−p) = Util2  =>  p = 3/8

66 Hidden letters – but it was a lie
Utility for agent 1:
Pure deal (one agent delivers all): expected cost under [(d, ∅):1/2] = ½(2) + ½(10) = 6 (as agent 1's "empty" half of the deal still requires delivering the hidden letter b).
Mixed deal: expected cost under [(d, ∅):3/8] = 3/8·10 + 5/8·2 = 5.
This splits the utility.

67 Phantom letters
Utility for agent 1:
Expected outcome on telling the truth: both win equally (decided by a coin flip).
Pure deal [(c, b)]: both win, but agent 1 gets more because of the lie.
Mixed deal: there is a possibility of being caught, since the other agent may discover that the letter is bogus (an all-or-nothing deal). A1 is given a higher chance of doing all the deliveries, since it apparently had the most original work to do. The mixed deal thus helps penalize an agent who is lying.

68 Subadditive Task Oriented Domain
A TOD is subadditive if the cost of the union of two task sets is less than or equal to the sum of the costs of the separate sets: for finite X, Y ⊆ T, c(X ∪ Y) ≤ c(X) + c(Y).
Example of a subadditive TOD: delivering to one city saves part of the distance to the other (say the cities are at right angles).
Example of a non-subadditive TOD: deliveries in opposite directions – doing both saves nothing.
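Subadditivity is straightforward to test on a tabulated cost function; note that the slide-22 cost table (c({a}) = c({b}) = 1, c({a,b}) = 3) fails the test, since 3 > 1 + 1:

    def is_subadditive(cost):
        # c(X u Y) <= c(X) + c(Y) for every pair of listed task sets.
        return all(cost[x | y] <= cost[x] + cost[y]
                   for x in cost for y in cost if (x | y) in cost)

    table = {frozenset(): 0, frozenset({'a'}): 1,
             frozenset({'b'}): 1, frozenset({'a', 'b'}): 3}
    assert not is_subadditive(table)   # 3 > 1 + 1: not subadditive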

69 Incentive compatible Mechanism
Legend for the incentive-compatibility table:
L – lying is beneficial
T – honesty (telling the truth) is better
T/P – lying can be beneficial, but there is a chance of being caught (penalty)

70 Concave Task Oriented Domain
Suppose we have two task sets X and Y, where X is a subset of Y, and another set of tasks Z is introduced. A TOD is concave if
c(X ∪ Z) − c(X) ≥ c(Y ∪ Z) − c(Y)
(the marginal cost of taking on Z is smaller the more tasks you already have).

71 MAS Compromise: Negotiation process for conflicting goals
Identify potential interactions.
Modify intentions to avoid harmful interactions or to create cooperative situations.
Techniques required:
Representing and maintaining belief models
Reasoning about other agents' beliefs
Influencing other agents' intentions and beliefs

72 PERSUADER
A program to resolve problems in the labor relations domain.
Agents: company, union, mediator.
Tasks:
Generation of a proposal
Generation of a counter-proposal based on feedback from the dissenting party
Persuasive argumentation

73 Persuasive argumentation
Argumentation goals – ways that an agent's beliefs and behaviors can be affected by an argument:
Increasing payoff
Changing the importance attached to an issue
Changing the utility value of an issue

74 Narrowing differences
Get feedback from the rejecting party:
Objectionable issues
Reason for rejection
Importance attached to issues
Increase the payoff of the rejecting party by a greater amount than the reduction in payoff for the agreeing parties.

75 Experiments
Without memory: 30% more proposals.
Without argumentation: fewer proposals and better solutions.
No failure avoidance: more proposals with objections.
No preference analysis: oscillatory behavior.
No feedback: communication overhead increased by 23%.

76 Multiple Attribute: Example
Two agents are trying to set up a meeting. The first agent wishes to meet later in the day, while the second wishes to meet earlier in the day. Both prefer today to tomorrow. While the first agent assigns the highest worth to a meeting at 16:00hrs, s/he also assigns progressively smaller worths to a meeting at 15:00hrs, 14:00hrs, .... By showing flexibility and accepting a sub-optimal time, an agent can accept a lower worth which may have other payoffs (e.g., reduced travel costs). [Figure: the worth function for the first agent, peaking at 100 at 16:00 and declining toward 12:00 and 9:00. Ref: Rosenschein & Zlotkin, 1994]

77 How can we calculate Utility?
Weighting each attribute, e.g. Utility = price·60 + quality·15 + support·25
Rating/ranking each attribute, e.g. price: 60, quality: 20, support: 20 (INSPIRE uses rating)
Using constraints on an attribute, e.g. price ∈ [5,100], quality ∈ [0,10], support ∈ [1,5]
Try to find the Pareto optimum.
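A sketch of the weighted-attribute approach, assuming made-up linear normalizations over the stated ranges (the weights follow the slide; nothing here is INSPIRE's actual implementation):

    weights = {'price': 60, 'quality': 15, 'support': 25}
    normalize = {
        'price':   lambda p: (100 - p) / 95,  # price in [5, 100]; lower is better
        'quality': lambda q: q / 10,          # quality in [0, 10]
        'support': lambda s: (s - 1) / 4,     # support in [1, 5]
    }

    def weighted_utility(offer):
        # Weighted sum of normalized attribute scores, in [0, 100].
        return sum(w * normalize[attr](offer[attr])
                   for attr, w in weights.items())

    print(weighted_utility({'price': 40, 'quality': 8, 'support': 3}))  # ~62.4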

78 Utility Graphs 1
Each agent concedes in every round of negotiation, and eventually they reach an agreement. [Figure: utility plotted against time/number of negotiation rounds for agent i and agent j, with the two curves meeting at the point of acceptance.]

79 Utility Graphs 2
No agreement: agent j finds the offer unacceptable. [Figure: utility plotted against time/number of negotiation rounds for agent i and agent j, with the two curves never meeting.]

80 Argumentation 1
Argumentation is the process of attempting to convince others of something. Why argument-based negotiations? Because of the limitations of game-theoretic approaches:
Positions cannot be justified. (Why did the agent pay so much for the car?)
Positions cannot be changed. (Initially I wanted a car with a sun roof, but I changed my preference during the buying process.)

81 Argumentation 2
Four modes of argument (Gilbert, 1994):
Logical: "If you accept that A and that A implies B, then you must accept that B."
Emotional: "How would you feel if it happened to you?"
Visceral: a participant stamps their feet to show the strength of their feelings.
Kisceral: appeals to the intuitive.

82 Logic Based Argumentation
The basic form of argumentation:
Database ⊢ (Sentence, Grounds)
where:
Database is a (possibly inconsistent) set of logical formulae;
Sentence is a logical formula known as the conclusion;
Grounds is a set of logical formulae such that Grounds ⊆ Database and Sentence can be proved from Grounds (we give reasons for our conclusions).

83 Attacking arguments
Two fundamental kinds of attack:
A undercuts B: A invalidates a premise of B
A rebuts B: A contradicts B
Derived notions of attack used in the literature (writing u for undercuts, r for rebuts, a for attacks):
A attacks B = A u B or A r B
A defeats B = A u B or (A r B and not B u A)
A strongly attacks B = A a B and not B u A
A strongly undercuts B = A u B and not B u A

84 Proposition: Hierarchy of attacks
Attacks: a = u ∪ r
Defeats: d = u ∪ (r − u⁻¹)
Undercuts: u
Strongly attacks: sa = (u ∪ r) − u⁻¹
Strongly undercuts: su = u − u⁻¹
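The proposition can be transcribed directly as set operations on relations (sets of ordered attacker/attacked pairs); a small sketch with toy relations:

    def inverse(rel):
        return {(b, a) for (a, b) in rel}

    def attack_hierarchy(u, r):
        # u = undercuts, r = rebuts, both sets of (attacker, attacked) pairs.
        return {
            'attacks':            u | r,
            'defeats':            u | (r - inverse(u)),
            'strongly_attacks':   (u | r) - inverse(u),
            'strongly_undercuts': u - inverse(u),
        }

    u = {('A', 'B')}                  # toy relations, illustrative only
    r = {('B', 'A'), ('A', 'C')}
    print(attack_hierarchy(u, r))
    # e.g. 'defeats' = {('A','B'), ('A','C')}: B's rebut of A is cancelled
    # because A undercuts B.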

