Designing Example Critiquing Interaction. Boi Faltings, Pearl Pu, Marc Torrens, Paolo Viappiani. IUI 2004, Madeira, Portugal, Wed Jan 14, 2004. LIA/HCI, EPFL.



Slide 2: Outline
- Introduction
- Stimulating expression of preferences
- Guaranteeing optimal solutions
- Conclusion

Slide 3: Motivation
Many real-world applications require people to select a most preferred outcome from a large set of possibilities (e.g., electronic catalogs). Users are usually unable to state their preferences correctly up front. People are greatly helped by seeing examples of actual solutions → example critiquing.

Slide 4: Mixed-Initiative Interaction
- The user states an initial preference
- The system shows K solutions
- The user critiques the solutions, stating a new preference
- The user picks the final choice

Slide 5: An implementation: reality
The user critiques existing solutions and trades off between different criteria.

Slide 6: What to show?
Standard approach: show the best solutions (assumption: the user model is complete and accurate). This does not, in general, stimulate new preferences.

Slide 7: New approach
Display_set = stimulate_set + optimal_set

Slide 8: What to show?
- Stimulate set: solutions that make the user aware of attribute diversity and have a high probability of becoming optimal if new preferences are stated.
- Optimal set: solutions that are optimal given the current preferences.

Slide 9: Outline
- Introduction
- Stimulating expression of preferences
- Guaranteeing optimal solutions
- Conclusion

Slide 10: Stimulating new preferences
Pareto optimality is a general concept that does not involve weights. A dominated solution can become Pareto optimal if new preferences are stated → show the solutions that have a higher probability of becoming Pareto optimal.

Slide 11: Dominance relation and Pareto optimality
Penalty table with 2 preferences (P1, P2) over solutions S1, ..., S5 [table values lost in transcription]. S1 and S2 are Pareto optimal; S3 is dominated by S1 and S2; S4 is dominated by S1; S5 is dominated by S1, S2, and S3.
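The dominance relation above can be checked mechanically. The sketch below uses a hypothetical penalty table (the slide's actual values were lost in transcription), chosen so that it reproduces the stated relations: S1 and S2 Pareto optimal, S3 dominated by S1 and S2, S4 by S1 only, S5 by S1, S2, and S3.

```python
def dominates(a, b):
    """a dominates b if a is no worse on every penalty and strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical penalty table (lower is better), consistent with the slide's
# stated dominance relations.
penalties = {
    "s1": (0.1, 0.2),
    "s2": (0.3, 0.1),
    "s3": (0.4, 0.5),
    "s4": (0.2, 0.9),
    "s5": (0.5, 0.6),
}

# A solution is Pareto optimal if no other solution dominates it.
pareto = [s for s, p in penalties.items()
          if not any(dominates(q, p) for t, q in penalties.items() if t != s)]
print(pareto)  # ['s1', 's2']
```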

Slide 12: Pareto Optimal Filters
Estimate the probability that a dominated solution can become Pareto optimal when new preferences are stated. Different Pareto filters:
- counting filter
- attribute filter
- probabilistic filter

Slide 13: Counting filter
We count the number of dominators: S1 and S2 are currently "optimal"; S4 is more promising than S3 and S5 because it has fewer dominators.
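The counting filter can be sketched in a few lines. The penalty values here are hypothetical, picked so that the dominator counts match the slide's ordering (S4 has one dominator, S3 two, S5 three):

```python
def dominates(a, b):
    # a dominates b: no worse on every penalty, strictly better on at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# hypothetical penalty table (lower is better)
penalties = {
    "s1": (0.1, 0.2), "s2": (0.3, 0.1), "s3": (0.4, 0.5),
    "s4": (0.2, 0.9), "s5": (0.5, 0.6),
}

def dominator_count(s):
    # size of the dominator set of s
    return sum(dominates(p, penalties[s]) for t, p in penalties.items() if t != s)

# rank the dominated solutions: fewest dominators first (most promising)
ranked = sorted((s for s in penalties if dominator_count(s) > 0), key=dominator_count)
print(ranked)  # ['s4', 's3', 's5']
```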

Slide 14: A new preference is added
A new column of penalties is added. S4 becomes Pareto optimal even though its new penalty (0.6) is worse than that of S3 (0.5) and S5 (0.4). The counting filter predicts that S4 has the best chance of becoming Pareto optimal when a new preference is added.

Slide 15: Hasse diagrams
User model = {p1, p2} vs. user model = {p1, p2, p3} [diagrams]. Adding preferences: the Pareto-optimal set grows and the dominance relation becomes sparser.

Slide 16: Attribute filter
A solution has n attributes A1, ..., An with domains D1, ..., Dn; a solution is a complete assignment. Preferences are modeled as penalty functions defined on the attribute domains. Look at the attribute space: if two solutions have the same value for an attribute, any penalty function defined on that attribute gives them the same penalty.

Slide 17: Attribute filter: motivation
Current preferences: on price (to minimize) and on square meters (to maximize). S2 and S3 are both dominated by S1. If we add a new preference on Location and North is preferred, S2 becomes Pareto optimal; on Transport, if Tramway is preferred to Bus, S2 becomes Pareto optimal. S3, however, will always be dominated!

Slide 18: Attribute filter
For the new preference, the dominated solution s must have a lower penalty than all of its dominators: for discrete domains, the attribute values must differ; for continuous domains, consider the extreme values.
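A minimal sketch of the discrete-domain test, with hypothetical apartment rows mirroring the slide's example (the attribute values are assumed, not taken from the slide): S2 differs from its dominator S1 on Location and Transport, so a new preference on either could make it Pareto optimal, while S3 matches S1 on every preference-free attribute and will always be dominated.

```python
# Hypothetical catalog rows; price and square meters already carry
# preferences, Location and Transport do not.
solutions = {
    "s1": {"price": 400, "sqm": 30, "location": "Centre", "transport": "Bus"},
    "s2": {"price": 450, "sqm": 25, "location": "North",  "transport": "Tramway"},
    "s3": {"price": 500, "sqm": 20, "location": "Centre", "transport": "Bus"},
}

def can_become_pareto_optimal(s, dominators, free_attrs):
    """Attribute filter, discrete case: s can escape domination only if, on some
    attribute with no stated preference yet, its value differs from the value
    of every one of its dominators."""
    return any(all(solutions[s][a] != solutions[d][a] for d in dominators)
               for a in free_attrs)

free = ["location", "transport"]  # attributes without a stated preference
print(can_become_pareto_optimal("s2", ["s1"], free))  # True
print(can_become_pareto_optimal("s3", ["s1"], free))  # False
```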

Slide 19: Probabilistic filter
Directly estimate the probability of becoming Pareto optimal: the bigger the difference on a specific attribute, the more likely the penalties will differ. [Plot: penalty vs. attribute domain]

Slide 20: Experiments
Database of actual accommodation offers (rooms for rent, studios, apartments), plus random datasets. 11 attributes, of which 4 are continuous (price, duration, square meters, distance to university) and 7 discrete (kitchen, kitchen type, bathroom, public transportation, ...).

Slide 21: Results (accommodation dataset) [figure]

Slide 22: Results (random dataset)
Average fraction of correct predictions vs. number of preferences known. [figure]

Slide 23: Outline
- Introduction
- Stimulating expression of preferences
- Guaranteeing optimal solutions
- Conclusion

Slide 24: Modelling
The true preference model P* is unknown: P* = {p*1, p*2, ..., p*k}, with target solution s_t. It is estimated through a model P = {p1, ..., pk}, where the p_i are built-in standard penalty functions; we assume a limited difference between p_i and p*_i. Penalty functions map attribute domains to the reals, p_i: D_k → R; we write p_i(s) instead of p_i(a_j(s)).

Slide 25: Selecting displayed solutions
- Dominance filters
- Utilitarian filters
- Egalitarian filters

Slide 26: Optimal Set Filters: Properties
We want:
1. To show a limited number of solutions: each filter selects k solutions to display.
2. To ensure that a solution that is Pareto optimal in D is Pareto optimal in S: each filter satisfies this, the dominance filter by definition, the utilitarian and egalitarian filters by theorem.

Slide 27: Optimal Set Filters: Properties (continued)
3. To include the target solution: only if the target solution is included can the user choose it! The probability of including the target solution in D depends on the filter. Assumption: (1 − ε)·p_i ≤ p*_i ≤ (1 + ε)·p_i.

Slide 28: Dominance filter
Display k solutions that are not dominated by any other solution.

Slide 29: Dominance filter: target solution
Plot of the probability that the target solution is included in D, as a function of the number of preferences: |P| = 3, ..., 12; k = 30, 60; m = 778, 6444. [figure]

Slide 30: Utilitarian filter
We minimize the unweighted sum of penalties, computed efficiently by branch & bound.
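The utilitarian filter reduces to ranking by penalty sum. The slide computes it with branch & bound; for a small table a plain sort illustrates the idea (penalty values hypothetical):

```python
# hypothetical penalty table (lower is better)
penalties = {
    "s1": (0.1, 0.2), "s2": (0.3, 0.1), "s3": (0.4, 0.5),
    "s4": (0.2, 0.9), "s5": (0.5, 0.6),
}

def utilitarian_filter(k):
    # best k solutions by unweighted sum of penalties
    return sorted(penalties, key=lambda s: sum(penalties[s]))[:k]

print(utilitarian_filter(3))  # ['s1', 's2', 's3']
```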

Slide 31: Utilitarian filter: probability of finding the target solution
Does not depend on m, the total number of solutions (proved analytically), and is better than for the dominance filter.

Slide 32: Egalitarian filter
Minimize F(s), the worst (maximum) penalty. In case of equality, use lexicographic order: (0.4, 0.2) is preferred to (0.4, 0.4). The probability of including the target solution is similar to that of the utilitarian filter.
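Reading F(s) as the worst penalty, the egalitarian order with the slide's lexicographic tie-break amounts to comparing penalty vectors sorted from worst to best (a leximax order); a sketch:

```python
def egalitarian_key(p):
    # compare the worst penalty first; ties are broken lexicographically on the
    # remaining penalties, again from worst to best
    return tuple(sorted(p, reverse=True))

# the slide's example: equal worst penalty, tie broken on the second component
a, b = (0.4, 0.2), (0.4, 0.4)
print(egalitarian_key(a) < egalitarian_key(b))  # True: a is preferred
```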

Slide 33: Robustness against a violated assumption
Fraction of Pareto-optimal solutions shown within the best k, for the egalitarian and utilitarian filters. [figure]

Slide 34: Outline
- Introduction
- Stimulating expression of preferences
- Guaranteeing optimal solutions
- Conclusion

Slide 35: Conclusion
- Optimal and stimulation sets put example critiquing on firmer mathematical ground.
- Suggestions for system developers: how to compensate for an incomplete or inaccurate user model.
- Experimental evaluation on real and random problems.

Slide 36: Questions


Slide 38: Pareto filters: conclusions
The counting filter already works fairly well. The attribute filter works very well when only one or two preferences are missing, but the probabilistic filter is generally the best. Correlation between attributes can affect performance. [Complexity table for the counting, attribute, probabilistic, and random filters; values lost in transcription]

Slide 39: Attribute filter (2)
Continuous domains: the best values are the extremes (assumption: the preference functions are monotonic). [Plot: penalty vs. domain]

Slide 40: [Plots of penalty functions with threshold θ over attribute values a_i(o1), a_i(o2); details lost in transcription]

Slide 41: [Plots of penalty functions with threshold θ and tolerance t over attribute values a_i(o); details lost in transcription]


Slide 43: Theorem
Given a set of m solutions S = {s1, ..., sm} and a set of penalties {p1, ..., pd}, let S' be the set of the best k solutions according to the utilitarian filter. Any solution s in S' that is not dominated by another solution of S' is Pareto optimal in S.

Slide 44: Simplified apartment domain
A very simple example: A = {Location, Rent, Rooms}; D_Location = {Centre, North, South, East, West}; D_Rent = {x | x integer, x > 0}; D_Rooms = {1, 2, 3, ...}. Preferences: the location should be the centre and the rent less than 500.

Slide 45: Penalty functions
P1 := if Location == Centre then 0 else 1
P2 := if Rent > 500 then K·(Rent − 500) else 0
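The two penalty functions translate directly into code. The scaling constant K is not given on the slide, so the value below is an arbitrary stand-in:

```python
K = 0.01  # assumed scaling constant; the slide leaves K unspecified

def p1(apartment):
    # penalty 0 for the preferred location, 1 otherwise
    return 0 if apartment["Location"] == "Centre" else 1

def p2(apartment):
    # penalty grows linearly once the rent exceeds 500
    rent = apartment["Rent"]
    return K * (rent - 500) if rent > 500 else 0

flat = {"Location": "North", "Rent": 650, "Rooms": 2}
print(p1(flat), p2(flat))  # 1 1.5
```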

Slide 46: Electronic catalogues
k attributes A1, ..., Ak with domains D1, ..., Dk; a solution is a complete assignment; write a_j(s) for the value of s on attribute j. The solution set S is a subset of D1 × D2 × ... × Dk. Preferences are modeled as penalty functions defined on the attribute domains.

Slide 47: Counting filter (*)
The dominator set S_d(s1) of a solution s1 is the subset of S consisting of the solutions that dominate s1. The counting filter orders solutions by the size of their dominator set.
