1 Towards a manipulative mediator
Lecture for Statistical Methods (89-326)
Yehoshua (Yoshi) Gev, yoshigev@gmail.com
Joint work with: S. Kraus, M. Gelfand, J. Wilkenfeld & E. Salmon
2 Outline
- Background on negotiation and mediation
- Our goal
- Agent design
- Experiments and results
- Conclusions
3 Background
4 Domain of negotiation
- Human-to-human negotiation
- Closed set of issues with discrete solution values
- Private utility function
- For example, in our neighbors’ dispute, the noise issue has these solution values: “Tyler will continue to be loud”, “Tyler will be quiet after 1am”, “Tyler will be quiet after 12am”, etc.
(A sketch of such a utility function follows.)
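As a concrete illustration of this domain, here is a minimal sketch of one party's private utility function over discrete solution values; the point values and the `score` helper are hypothetical, not the actual model used in the study.

```python
# Hypothetical private utility function for one party in the neighbors'
# dispute. The point values are made up for illustration; each party's
# real utilities were private and scenario-specific.
NEIGHBOR_UTILITY = {
    "noise": {
        "Tyler will continue to be loud": 10,
        "Tyler will be quiet after 1am": 60,
        "Tyler will be quiet after 12am": 90,
    },
    # ... further issues, each with its own discrete solution values
}

def score(utility, offer):
    """Sum this party's private scores over the issue values in an offer."""
    return sum(utility[issue][value] for issue, value in offer.items())

print(score(NEIGHBOR_UTILITY, {"noise": "Tyler will be quiet after 1am"}))  # 60
```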
5 Mediation
Assistance from a third party. Mediation styles:
- facilitation: organize logistics (e.g., a communication channel)
- formulation: propose new solutions; encourage the parties to move towards agreement
- manipulation: offer incentives or impose penalties
[Wilkenfeld et al., 2005]
6 Previous work
Very few automated mediators exist:
- PERSUADER [Sycara, 1991]: based on case-based reasoning (CBR) over an existing knowledge base
- AutoMed [Chalamish & Kraus, 2009]: a formulative, rule-based mediator that monitors the negotiation and proposes possible solutions, using a qualitative model to represent preferences
7 Our Goal
8 Motivation
- Design a manipulative mediator
- Create a model that includes incentives and penalties
- Decide how the mediator should make its decisions
The big question: can the mediator's authority be utilized to assist the parties?
9 Social science aspect
The research is a collaboration with political science and psychology groups, and we had to negotiate over the settings:
- Should we allow the participants to speak freely, or restrict them to a closed list of sentences?
- Should our interface act as a decision-support system (DSS), with a utilities calculator and a history of actions?
- How can we force participants to use the interface?
10 Experiments setting
Natural negotiations:
- participants negotiate via video-conferencing
- a realistic scenario (neighbors’ dispute)
- a very simple computerized system: only an interface to exchange offers, with no utility calculator
- a mediator agent can participate as a third party
GENIUS environment [Hindriks et al., 2009]
12 Agent Design
13 Pilot
We tested the system and the scenario, and tested AutoMed in the new settings.
Problems:
- AutoMed sent very few suggestions
- AutoMed’s suggestions were often not relevant
14 Modifications to AutoMed
- Choose suggestions similar to the last offers
- Treat close ranks as the same utility
- Treat partial offers as 60% of their maximal score
(A sketch of these heuristics appears below.)
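A hedged sketch of how these three heuristics could look in code; the rank tolerance, the way the 60% rule is applied, and the similarity measure are assumptions on my part, not AutoMed's actual implementation.

```python
RANK_TOLERANCE = 1    # assumption: ranks within this band count as equal
PARTIAL_FACTOR = 0.6  # from the slide: partial offers get 60% of max score

def same_utility(rank_a, rank_b):
    """Treat close ranks as the same utility."""
    return abs(rank_a - rank_b) <= RANK_TOLERANCE

def offer_score(offer, all_issues, utility):
    """Score an offer; partial offers get 60% of their maximal completion."""
    base = sum(utility[issue][value] for issue, value in offer.items())
    if len(offer) == len(all_issues):
        return base
    # Optimistically complete the offer with the best value per open issue,
    # then discount, per the "60% of maximal score" modification.
    best_rest = sum(max(utility[i].values()) for i in all_issues if i not in offer)
    return PARTIAL_FACTOR * (base + best_rest)

def similarity(candidate, last_offer):
    """Count agreed issue values, to prefer suggestions near the last offers."""
    return sum(candidate.get(i) == v for i, v in last_offer.items())
```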
15 Experiments
16 Experiment 1
Two groups:
- Control/Baseline: without a mediator
- Treatment/Tested: with a mediator
Comparison between groups on these parameters:
- dur – negotiation’s duration (seconds)
- score – each party’s score
- diff – difference between the parties’ scores
- sat – a party’s satisfaction with the result (questionnaire)
- aid – measure of the mediator’s assistance (questionnaire)
17 Experiment 1 – Results

                     N    dur    sum   diff  satA  aidA  satB  aidB
1. No mediator       15   09:33  1224  188   4.2   –     –     –
2. Simple mediator   14   10:57  1192  74    4     1.7   4.0   1.5

Hypothesis: diff1 – diff2 = 0
Unpaired two-tailed t-test: t = 2.0904, df = 27, p = 0.044 < 0.05
Conclusion: diff1 != diff2
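The raw per-negotiation diff values are not on the slide, so this sketch only reproduces the test procedure on placeholder data of the same shape (15 vs. 14 negotiations, hence df = 27); the group means and spreads below are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder data: the real per-negotiation diffs are not reported.
diff_no_mediator = rng.normal(loc=188, scale=150, size=15)
diff_mediator = rng.normal(loc=74, scale=130, size=14)

# Unpaired two-tailed t-test (equal variances assumed => df = 15 + 14 - 2 = 27)
t, p = stats.ttest_ind(diff_no_mediator, diff_mediator)
print(f"t = {t:.4f}, df = 27, p = {p:.3f}")
# Slide reports t = 2.0904, p = 0.044 < 0.05, rejecting diff1 - diff2 = 0.
```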
18 Experiment 1 – Results (cont.)
- Only one significant advantage for the mediator: diff was lower with a mediator
- Many participants disregarded the mediator
How can we make them consider the mediator?
20 Experiment 2
We implemented an animated avatar:
- a face appearing on the interface
- text-to-speech capabilities
- an opening statement
- accompanying text for suggestions
It was intended to draw the participants’ attention. How did it affect the outcomes?
21 Experiment 2 – Results

                       N    dur    sum   diff  sat   aid
1. No mediator         15   09:33  1224  188   4.23  –
2. Simple mediator     14   10:57  1193  74    4.04  1.62
3. Animated mediator   12   13:31  1245  103   4.17  2.70

Hypothesis: aid2 – aid3 = 0
Unpaired two-tailed t-test: p = 0.003 < 0.01
Conclusion: aid2 != aid3
22 Experiment 2 – Results (cont.)
Correlations:

        score  age    sat   aid
score   1
age     -0.22  1
sat     0.30   0.16   1
aid     0.43   -0.24  0.01  1

Hypothesis: aid is uncorrelated with score (r = 0)
Pearson correlation: r = 0.43; for N = 24: t = 2.234
Using a t-test: p = 0.035 < 0.05
But the pairs of samples are dependent (so really, N < 24); besides, we cannot tell the direction of the influence. Maybe a different significance test would work (the Fisher transformation). A sketch of both tests follows.
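A sketch under the slide's numbers (r = 0.43, N = 24) of both the t-test actually used and the Fisher transformation mentioned as an alternative; note the slide's own caveat that the samples are dependent, so the effective N is smaller and both p-values are optimistic.

```python
import math
from scipy import stats

r, n = 0.43, 24

# t-test for H0: r = 0, with df = n - 2 (reproduces the slide's numbers)
t = r * math.sqrt((n - 2) / (1 - r**2))
p_t = 2 * stats.t.sf(abs(t), df=n - 2)
print(f"t = {t:.3f}, p = {p_t:.3f}")  # t = 2.234, p ≈ 0.035

# Fisher transformation: atanh(r) is ~normal with SE = 1/sqrt(n - 3)
z = math.atanh(r) * math.sqrt(n - 3)
p_z = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.3f}, p = {p_z:.3f}")  # z ≈ 2.108, p ≈ 0.035
```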
23 Experiment 2 – Results (cont.)
- The participants paid more attention: aid was higher with the avatar
- Those who paid attention got higher scores: a significant correlation between aid and score (however, aid and sat were not correlated)
- But they still didn’t fully utilize the suggestions: the average score didn’t improve significantly
What should be done next?
24 Conclusions
25 Difficulties
Current problems:
- Participants disregard the mediator’s offers: they are involved in the video discussion and cannot see the high utility of the mediator’s offers. Possible solution: a more persuasive mediator, or a utility calculator.
- Almost all participants reach agreement anyway: what would be the role of the manipulator?
- Experiments with human participants are expensive. Possible solution: use peer-designed agents (PDAs) to test the mediator before experimenting with humans [Lin et al., 2010].
26 What’s next?
- Search for a setting that can exploit the mediator
- Model incentives and penalties
- Design a manipulator in that model
- More experiments…
27 Summary
- Even generic agents are restricted by their model
- Humans are not fully rational: they don’t calculate their expected score, and higher scores don’t mean higher satisfaction
- The environment affects the mediator’s influence
28 Thank you…