
1 A Cultural Sensitive Agent for Human-Computer Negotiation
Galit Haim, Ya'akov Gal, Sarit Kraus and Michele J. Gelfand

2 Motivation
Buyers and sellers interact across geographical and ethnic borders:
– electronic commerce
– crowd-sourcing
– deal-of-the-day applications
For interaction between people from different countries to succeed, an agent needs to reason about how culture affects people's decision making

3 Goals and Challenges
Can we build an agent that negotiates better than the people in each country?
Can we build a proficient negotiator with no expert-designed rules?
Can the agent be culture sensitive?
The approach:
1. Collect data in each country
2. Use machine learning
3. Build an influence diagram
Challenges: sparse data, noisy data

4 The Colored Trails (CT) Game
An infrastructure for agent design, implementation and evaluation in open environments
Designed in 2004 by Barbara Grosz and Sarit Kraus (Grosz et al., AIJ 2010)
CT is the right test-bed to use because it provides a task analogy to the real world

5 The CT Configuration
7x5 board of colored squares
One square is the goal
Each player holds a set of colored chips
Moving onto a square requires a chip of the same color
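As a rough illustration, the board and chip configuration can be captured in a small data structure. This is a hedged sketch with hypothetical names and example coordinates, not the actual Colored Trails code:

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class CTState:
    """Illustrative CT board state: a 7x5 grid, one goal square, chips per player."""
    width: int = 7
    height: int = 5
    goal: tuple = (6, 2)       # example goal square (column, row)
    position: tuple = (0, 2)   # example player position
    chips: Counter = field(default_factory=Counter)  # chips held, keyed by color

    def can_step(self, target: tuple, square_color: str) -> bool:
        # Moving onto an adjacent square requires a chip of that square's color.
        dx = abs(target[0] - self.position[0])
        dy = abs(target[1] - self.position[1])
        return dx + dy == 1 and self.chips[square_color] > 0
```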

6 CT Scenario
2 players
Multiple phases:
– communication: negotiation (alternating-offer protocol)
– transfer: chip exchange
– movement
Complete information
Agreements are not enforceable
Complex dependencies between players
Game ends when one of the players reaches the goal or does not move for three movement phases
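The phase structure and termination rules above could be sketched as a simple game loop. All helper functions and the player interface here are assumptions for illustration; the real CT server is implemented differently:

```python
def play_game(players, state, max_idle_phases=3):
    """Illustrative CT round loop: communicate, transfer, move, until termination."""
    idle_phases = 0
    while True:
        negotiation_phase(players, state)   # communication: alternating-offer protocol
        transfer_phase(players, state)      # transfer: each side decides which chips to send
        moved = movement_phase(players, state)
        idle_phases = 0 if moved else idle_phases + 1
        reached = any(p.position == state.goal for p in players)
        # End the game when a player reaches the goal or there has been
        # no movement for three consecutive movement phases.
        if reached or idle_phases >= max_idle_phases:
            return {p.name: final_score(p, state) for p in players}
```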

7 Scoring and Payment
100-point bonus for getting to the goal
5-point bonus for each chip left at the end of the game
10-point penalty for each square in the shortest path from the end position to the goal
A player's performance does not depend on the outcome for the other player
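The scoring rule translates directly into a small function (a sketch; argument names are illustrative, and the shortest-path distance is assumed to be computed elsewhere):

```python
def ct_score(reached_goal: bool, chips_left: int, squares_to_goal: int) -> int:
    """Score one player at the end of a CT game, per the rule above."""
    points = 100 if reached_goal else 0   # 100-point bonus for reaching the goal
    points += 5 * chips_left              # 5 points per chip left at the end
    points -= 10 * squares_to_goal        # 10-point penalty per square on the shortest
    return points                         # path from the end position to the goal

# Example: ending 2 squares short of the goal with 3 chips left scores 5*3 - 10*2 = -5.
```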

8 Personality, Adaptive Learning (PAL) Agent
(architecture diagram: data from a specific country → machine learning → human behavior model → decision making → take action)

9 Learning People's Reliability
Predict if the other player will keep its promise

10 Learning how People Accept Offers
Accept or reject the proposal?

11 Feature Set
Domain-independent features:
– current and resulting scores
– offer generosity
– reliability: between 0 (completely unreliable) and 1 (fully reliable)
– weighted reliability: over the previous rounds in the game
Domain-dependent feature:
– round number
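A hedged sketch of how such a feature vector might be assembled; the field names and game/offer accessors below are illustrative assumptions, not the paper's implementation:

```python
def extract_features(game, offer, include_domain_dependent=False):
    """Build the feature dictionary for one proposed offer (illustrative)."""
    features = {
        # Domain-independent features, used for all three countries.
        "current_score_agent":   game.current_score("agent"),
        "current_score_human":   game.current_score("human"),
        "resulting_score_agent": game.score_if_accepted(offer, "agent"),
        "resulting_score_human": game.score_if_accepted(offer, "human"),
        "offer_generosity":      offer.chips_offered - offer.chips_requested,
        "reliability":           game.reliability("human"),           # 0 (unreliable) to 1 (fully reliable)
        "weighted_reliability":  game.weighted_reliability("human"),  # weighted over previous rounds
    }
    if include_domain_dependent:
        # Domain-dependent feature, added only for the Lebanon model.
        features["round_number"] = game.round_number
    return features
```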

12 How to Model People's Behavior
For each culture:
– use different features
– choose the learning algorithm that minimized error using 10-fold cross-validation
In the U.S. and Israel we used only domain-independent features
In Lebanon we added domain-dependent features
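For illustration, per-culture model selection with 10-fold cross-validation could look like the following scikit-learn sketch; the particular candidate classifiers and the library are assumptions, since the slides do not name them:

```python
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

def pick_classifier(X, y):
    """Return the candidate with the lowest estimated error (highest mean 10-fold CV accuracy)."""
    candidates = [
        LogisticRegression(max_iter=1000),
        DecisionTreeClassifier(),
        GaussianNB(),
    ]
    return max(candidates, key=lambda clf: cross_val_score(clf, X, y, cv=10).mean())

# Usage sketch: one model per culture and per prediction task
# (reliability, offer acceptance), each trained on that country's data.
# reliability_model_israel = pick_classifier(X_israel, y_israel).fit(X_israel, y_israel)
```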

13 Data Collection with Sparse Data
Sources of data to train our classifiers:
– 222 game instances of people playing a rule-based agent
– U.S. and Israel: 112 game instances of people playing other people
– Lebanon: 64 additional games
Nasty agent: less reliable when fulfilling its agreements
People in Lebanon in this data set almost always kept their agreements, and as a result PAL never kept agreements

14 People Learned Reliability

15 Experiment Design
3 countries, 157 people:
– Israel: 63
– Lebanon: 48
– U.S.A.: 46
30-minute tutorial
Boards varied the dependencies between players
People were always the first proposer in the game
There was a single path to the goal

16 Decision Making
There are 3 decisions that PAL needs to make:
– Reliability: determine PAL's transfer strategy
– Accepting an offer: accept or reject a specific offer proposed by the opponent
– Proposing an offer: use backward induction over two rounds…
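A hedged sketch of the proposal decision as expected-value maximization with a two-round lookahead; the learned models, helper methods, and the exact recursion are illustrative assumptions rather than the paper's formulation:

```python
def propose_offer(game, candidate_offers, accept_model, reliability_model):
    """Pick the offer with the highest expected score under the learned models (illustrative)."""
    def expected_value(offer, depth):
        p_accept = accept_model.predict_proba(game, offer)     # chance the human accepts
        p_keep = reliability_model.predict_proba(game, offer)  # chance the human keeps it
        value_if_kept = game.score_if_kept(offer, "agent")
        value_if_broken = game.score_if_broken(offer, "agent")
        if depth == 0:
            value_if_rejected = game.no_agreement_score("agent")
        else:
            # Backward induction: a rejection now is worth the best offer
            # PAL could make in the following round.
            value_if_rejected = max(expected_value(o, depth - 1) for o in candidate_offers)
        return (p_accept * (p_keep * value_if_kept + (1 - p_keep) * value_if_broken)
                + (1 - p_accept) * value_if_rejected)

    return max(candidate_offers, key=lambda o: expected_value(o, depth=1))
```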

17 Success Rate: Getting to the Goal

18 Performance Comparison: Averages

19 Example in Lebanon
– 2 chips for 2 chips; accepted; both sent
– 1 chip for 1 chip; accepted; PAL did not send, the human sent
PAL learned that people in Lebanon were highly reliable
Games were relatively shorter; people were very reliable in the training games

20 Example in Israel
– 2 chips for 2 chips; accepted; only PAL sent
– 1 chip for 1 chip; accepted; only the human sent
– 1 chip for 1 chip; accepted; only PAL sent
– 1 chip for 3 chips; accepted; only the human sent
Games were relatively longer; people were less reliable in the training games than in Lebanon

21 Conclusions
PAL is able to learn to negotiate proficiently with people across different cultures
PAL was able to outperform people in all dependency conditions and in all countries
This is the first work to show that a computer agent can learn to negotiate with people in different countries

22 Colored Trails is easy to use for your own research
Open-source empirical test-bed for investigating decision making
Easy to design new games
Built-in functionality for conducting experiments with people
Over 30 publications
Freely available; extensive documentation: http://eecs.harvard.edu/ai/ct (or Google "colored trails")
THANK YOU
haimga@cs.biu.ac.il

