Approximating Maximum Satisfaction in Group Formation
Sean Munson, Grant Hutchins
Discrete Math, Olin College
14 December 2004

goal
– Maximize happiness, or satisfaction
– Input
  – Preferences: -1, 0, or 1 for each person
  – Worked with before
  – Work style preferences
– Satisfaction: how many preferences you meet

satisfaction
– For each team, the sum of how each person feels about each other person in the group.
– Maximize this for each set of teams
– (slide figure: an example team with satisfaction = -2)
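The definition above lends itself to a short sketch. The dictionary format and names below are assumptions for illustration, not the authors' code; preference values follow the -1/0/1 scale from the goal slide.

```python
# Sketch only: preferences assumed to be a dict-of-dicts with values -1, 0, or 1.

def team_satisfaction(team, prefs):
    """Sum of how each member feels about every other member of the team."""
    return sum(prefs[a][b] for a in team for b in team if a != b)

def total_satisfaction(teams, prefs):
    """Objective to maximize: team satisfaction summed over all teams."""
    return sum(team_satisfaction(team, prefs) for team in teams)

# Hypothetical three-person team.
prefs = {
    "A": {"B": 1, "C": -1},
    "B": {"A": 1, "C": -1},
    "C": {"A": 0, "B": 0},
}
print(team_satisfaction(["A", "B", "C"], prefs))  # (1 - 1) + (1 - 1) + (0 + 0) = 0
```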

possible combinations in our class
– Teams of 4:
– Teams of 2:
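The counts themselves did not survive in the transcript, but the number of ways to split n people into unordered teams of size k is n! / ((k!)^(n/k) · (n/k)!). A small sketch; the class size used below is purely illustrative.

```python
from math import factorial

def team_partitions(n, k):
    """Ways to split n people into unordered teams of size k (k must divide n)."""
    assert n % k == 0
    groups = n // k
    return factorial(n) // (factorial(k) ** groups * factorial(groups))

# Illustrative only -- the actual class size is not given in the transcript.
print(team_partitions(12, 4))  # 5775
print(team_partitions(12, 2))  # 10395
```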

approximations

form teams of three based on preferences

chain approximation
(slide figures: step-by-step construction of the chain)

– Fast, simple
– Variation: choose the next person based on popularity or pickiness
– Problem: only looks at one person's preferences at a time
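A rough sketch of the chain idea as it reads from these slides: each person added to the chain picks the remaining classmate they rate highest, and the finished chain is cut into consecutive teams. Names and details are assumptions, not the original code.

```python
def chain_teams(people, prefs, team_size):
    """Greedy chain heuristic (sketch): the person most recently added chooses
    the next link; the chain is then cut into consecutive teams."""
    remaining = set(people)
    current = people[0]          # arbitrary seed; a variation could instead start
    remaining.remove(current)    # with the most popular or pickiest person
    chain = [current]
    while remaining:
        # Only the current person's preferences are consulted -- the weakness
        # noted on the slide.
        nxt = max(remaining, key=lambda p: prefs[current].get(p, 0))
        chain.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return [chain[i:i + team_size] for i in range(0, len(chain), team_size)]
```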

group approximation, teams of 3

group approximation
– Includes everyone's preferences
– Still explores very little space
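The transcript does not spell out the group procedure, so the following is only a guess at its shape: greedily take the remaining triple whose members collectively rate each other highest. It consults everyone's preferences at each step, yet still examines only a tiny fraction of all partitions. It reuses the hypothetical team_satisfaction helper from the satisfaction sketch above.

```python
from itertools import combinations

def group_teams_of_three(people, prefs):
    """Speculative sketch of a group heuristic: repeatedly pull out the
    remaining triple with the highest mutual satisfaction."""
    remaining = set(people)
    teams = []
    while len(remaining) >= 3:
        best = max(combinations(remaining, 3),
                   key=lambda t: team_satisfaction(list(t), prefs))  # helper from earlier sketch
        teams.append(list(best))
        remaining -= set(best)
    if remaining:                 # any leftover students form a smaller team
        teams.append(list(remaining))
    return teams
```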

hill climbing

(slide figures: teams of students A through I; swapping members between teams raises the total satisfaction score from 1043 to 1065)
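A sketch of hill climbing over team assignments, consistent with the swap example in the figures; the structure and names are assumptions. Starting from any assignment, try swapping members between pairs of teams and keep any swap that raises satisfaction. It again relies on the hypothetical team_satisfaction helper from the earlier sketch.

```python
def hill_climb(teams, prefs, max_passes=100):
    """Local search: swap students between two teams whenever the swap
    increases their combined satisfaction; stop when no swap helps."""
    teams = [list(t) for t in teams]
    for _ in range(max_passes):
        improved = False
        for i in range(len(teams)):
            for j in range(i + 1, len(teams)):
                for a in range(len(teams[i])):
                    for b in range(len(teams[j])):
                        before = (team_satisfaction(teams[i], prefs) +
                                  team_satisfaction(teams[j], prefs))
                        teams[i][a], teams[j][b] = teams[j][b], teams[i][a]
                        after = (team_satisfaction(teams[i], prefs) +
                                 team_satisfaction(teams[j], prefs))
                        if after <= before:   # undo swaps that do not help
                            teams[i][a], teams[j][b] = teams[j][b], teams[i][a]
                        else:
                            improved = True
        if not improved:
            break
    return teams
```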

results: prediction accurate?

results

alternative: nearest neighbors
– Represent preferences of each student as a 32-number string.
– Match students based on the distance between their preference strings (d = 12)

alternative: nearest neighbors
– Requires calculating the distance between each pair of strings.
– Total comparisons: one per pair of students, i.e. n(n-1)/2 for n students
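A sketch of the pairwise distance step; the 32-entry vector format and the Euclidean metric are assumptions, since the slide does not say which distance was used.

```python
from itertools import combinations

def preference_distance(p, q):
    """Distance between two students' preference vectors (metric assumed)."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def all_pair_distances(vectors):
    """One comparison per pair of students: n*(n-1)/2 distances in total."""
    return {(i, j): preference_distance(vectors[i], vectors[j])
            for i, j in combinations(range(len(vectors)), 2)}
```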

nearest neighbors by work style
– Three questions describe students' working styles:
  – Timeliness (early to late)
  – Group style (individual to always in team)
  – Focus (hardcore to relaxed)
– Answers assigned values from 0 to 2
– Each person falls on a coordinate in 3-space

nearest neighbors approximation
(slide figure: students plotted in the work-style cube with corners (0,0,0) through (2,2,2))
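A minimal sketch of the work-style variant: map the three 0-2 answers to a point in the cube and match students by distance. The parameter names and the greedy matching below are illustrative assumptions.

```python
def work_style_point(timeliness, group_style, focus):
    """Map the three 0-2 answers to a coordinate in the cube [0, 2]^3."""
    return (timeliness, group_style, focus)

def nearest_work_style(student, candidates, points):
    """Among the unmatched candidates, return the closest work-style point."""
    return min(candidates, key=lambda c: sum(
        (points[student][d] - points[c][d]) ** 2 for d in range(3)))
```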

results: work style nearest neighbors
– Outperforms random
– Beaten by the chain and group heuristics

future work?
– Assess more seeding and convergence processes
– Weight edges based on prior experience
– Break ties with work style
– Conduct a long-term study to evaluate performance of formed teams
– Evaluate effects of the number of preferences expressed

questions