Online Algorithms – II
Amrinder Arora
Permalink: http://standardwisdom.com/softwarejournal/presentations/
Summary
Basic Concepts
–Online Problem: the problem is revealed one step at a time.
–Online Algorithm: must respond to the request sequence shown so far, not knowing the next items in the sequence.
–Competitive Ratio: the worst-case ratio of the online algorithm's cost to that of the optimal offline algorithm, over all possible request sequences.
Importance and Research Topics
–Online algorithms show up in many practical problems. Even if you are considering an offline problem, consider what the online version of that problem would be.
–Research areas include improving algorithms, improving the analysis of existing algorithms, proving tightness of analysis, and considering problem variations.
Part II only makes sense if it is better than Part I.
Two Options
–Randomized version of online job scheduling
–Online algorithms in machine learning
–Online Graph Coloring
Job Scheduling – Randomized
–No randomized algorithm for 2-machine scheduling can be better than 4/3-competitive.
–Consider any randomized algorithm A. How can we prove a lower bound on the competitive ratio of algorithm A?
Job Scheduling
Consider the job sequence 1, 1, 2 from before. Let L1 be the load of the more loaded machine and L2 the load of the less loaded machine.
–After the first two jobs, suppose E[L1] ≤ 4/3 (otherwise the adversary stops after two jobs, where OPT = 1, and the ratio is already at least 4/3). Since the total load is 2, E[L2] ≥ 2/3.
–When the job of size 2 arrives, the makespan is at least L2 + 2 wherever it is placed, so E[makespan] ≥ 8/3.
–The optimal offline makespan is 2 (the two unit jobs on one machine, the size-2 job on the other).
–Therefore, the competitive ratio of randomized algorithm A is at least (8/3)/2 = 4/3.
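As a sanity check on this argument, the following sketch (ours, not from the slides) summarizes a randomized algorithm on this instance by a single number p, the probability that the two unit jobs are split across the machines, and sweeps p to find the best achievable worst-case ratio:

```python
# Numeric check of the 4/3 lower bound on the sequence 1, 1, 2 (two machines).
# The parameter p and the function name are illustrative choices.

def worst_case_ratio(p):
    # Adversary option 1: stop after the jobs 1, 1 (OPT = 1).
    # E[makespan] = 2 * (1 - p) + 1 * p = 2 - p.
    ratio_if_stopped = 2 - p
    # Adversary option 2: also send the job of size 2 (OPT = 2).
    # From loads (2, 0) the size-2 job gives makespan 2; from (1, 1) it gives 3.
    ratio_if_continued = ((1 - p) * 2 + p * 3) / 2
    return max(ratio_if_stopped, ratio_if_continued)

best = min(worst_case_ratio(p / 1000) for p in range(1001))
print(best)  # ~1.333, i.e. 4/3, attained near p = 2/3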
Algorithm “Random Scheduler”
–A 4/3-competitive randomized algorithm.
–Tries to keep the machine loads in an expected ratio of 2:1.
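The slide does not spell out the rule, so the following is only one illustrative reading of "keep the loads in an expected ratio of 2:1": put each job on the currently lighter machine with probability 2/3. The function name and the bias constant are assumptions of this sketch.

```python
import random

def random_scheduler(jobs, rng=random.Random(0)):
    loads = [0.0, 0.0]
    for size in jobs:
        lighter = 0 if loads[0] <= loads[1] else 1
        # Favoring the lighter machine 2:1 steers the expected loads
        # toward the 2:1 split that appears in the lower-bound argument.
        machine = lighter if rng.random() < 2 / 3 else 1 - lighter
        loads[machine] += size
    return max(loads)  # makespan

print(random_scheduler([1, 1, 2]))
```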
Online Algorithms in Machine Learning
But first, let us understand classification techniques.
Classification
–Given a collection of records (the training set); each record contains a set of attributes and a class.
–Find a model for the class attribute as a function of the values of the other attributes.
–Goal: previously unseen records should be assigned a class as accurately as possible.
–A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it (see the sketch below).
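A minimal, plain-Python sketch of that methodology (ours, not from the slides): split the labeled records, "train" a trivial majority-class model, and measure accuracy on the held-out test set. The record format and names are illustrative.

```python
import random

def train_test_split(records, test_fraction=0.3, rng=random.Random(42)):
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

records = [({"income": 30}, "bad"), ({"income": 80}, "good"),
           ({"income": 25}, "bad"), ({"income": 90}, "good")]
train, test = train_test_split(records)

train_labels = [label for _, label in train]
majority = max(set(train_labels), key=train_labels.count)  # the "model"
accuracy = sum(1 for _, label in test if label == majority) / len(test)
print(majority, accuracy)
```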
Illustrating Classification Task (figure)
Examples of Classification Task
–Predicting tax returns as “clean” or “auditable”
–Predicting tumor cells as benign or malignant
–Classifying credit card transactions as legitimate or fraudulent
–Classifying secondary structures of protein as alpha-helix, beta-sheet, or random coil
–Categorizing news stories as finance, weather, entertainment, sports, etc.
Classification Techniques
–Decision tree based methods
–Rule-based methods
–Memory based reasoning
–Neural networks
–Naïve Bayes and Bayesian belief networks
–Support vector machines
Example of a Decision Tree
income < $40K:
–job > 5 yrs → good risk
–job ≤ 5 yrs → bad risk
income ≥ $40K:
–high debt → bad risk
–low debt → good risk
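The same tree written as nested conditionals; the thresholds come from the slide, while the function name, argument names, and the closed boundary cases are ours.

```python
def credit_risk(income, years_on_job, high_debt):
    if income < 40_000:
        return "good risk" if years_on_job > 5 else "bad risk"
    return "bad risk" if high_debt else "good risk"

print(credit_risk(income=30_000, years_on_job=7, high_debt=False))  # good risk
print(credit_risk(income=55_000, years_on_job=2, high_debt=True))   # bad risk
```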
Decision Tree Induction
Many algorithms:
–Hunt’s Algorithm (one of the earliest)
–CART
–ID3, C4.5
–SLIQ, SPRINT
General Structure of Hunt’s Algorithm
Let D_t be the set of training records that reach a node t. General procedure:
–If D_t contains records that all belong to the same class y_t, then t is a leaf node labeled y_t.
–If D_t is an empty set, then t is a leaf node labeled with the default class y_d.
–If D_t contains records belonging to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset.
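A compact sketch of that recursion, under assumed conventions: records are (attributes, class) pairs, and choose_split stands in for whatever attribute test the induction method selects (e.g., via an impurity measure).

```python
def hunt(records, default_class, choose_split):
    if not records:                      # empty set: leaf with the default class
        return ("leaf", default_class)
    classes = {c for _, c in records}
    if len(classes) == 1:                # all records in one class: leaf
        return ("leaf", classes.pop())
    test = choose_split(records)         # attribute test mapping a record to a branch
    branches = {}
    for attrs, cls in records:
        branches.setdefault(test(attrs), []).append((attrs, cls))
    if len(branches) == 1:               # degenerate split: stop with the majority class
        labels = [c for _, c in records]
        return ("leaf", max(classes, key=labels.count))
    return ("node", test,
            {value: hunt(subset, default_class, choose_split)
             for value, subset in branches.items()})
```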
Measures of Node Impurity
–Gini index
–Entropy
–Misclassification error
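For reference, at a node where p_i is the fraction of records in class i, these are Gini = 1 − Σ p_i², entropy = −Σ p_i log₂ p_i, and misclassification error = 1 − max p_i. A small helper (ours, not from the slides) computes all three from class counts:

```python
from math import log2

def impurities(counts):
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    gini = 1 - sum(p * p for p in ps)           # Gini index: 1 - sum(p_i^2)
    entropy = -sum(p * log2(p) for p in ps)     # entropy: -sum(p_i * log2(p_i))
    error = 1 - max(ps)                         # misclassification: 1 - max(p_i)
    return gini, entropy, error

print(impurities([5, 5]))    # maximally impure: (0.5, 1.0, 0.5)
print(impurities([10, 0]))   # pure node: (0.0, -0.0, 0.0)
```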
Different Kinds of Classifiers
–Different decision trees based on Hunt’s algorithm
–C4.5
–Naïve Bayes
–Support vector machine
Online Algorithms in Machine Learning
–Given n experts, each producing an output in {0, 1}.
–We want to predict the output.
–After each try, we are told the true result.
–Goal: over time, do “not much worse” than the best expert.
“Weighted Majority” – Algorithm 1
–Initialize the weights of all experts, w_1, …, w_n, to 1.
–At each step, take the majority decision: output 1 if the total weight of experts saying 1 is at least half of the overall weight.
–After each step, halve the weight of each expert who was wrong (leave the weights of the correct experts unchanged).
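A direct implementation of the three steps; the round-by-round input format is our choice.

```python
# Weighted Majority, Algorithm 1: majority vote by weight, then halve
# the weights of the experts that were wrong.

def weighted_majority(rounds):
    # `rounds` is a list of (expert_predictions, true_outcome) pairs.
    n = len(rounds[0][0])
    weights = [1.0] * n
    mistakes = 0
    for preds, outcome in rounds:
        weight_on_1 = sum(w for w, p in zip(weights, preds) if p == 1)
        guess = 1 if weight_on_1 >= 0.5 * sum(weights) else 0
        if guess != outcome:
            mistakes += 1
        weights = [w / 2 if p != outcome else w   # halve the wrong experts
                   for w, p in zip(weights, preds)]
    return mistakes

# Three experts; expert 0 happens to be right every time here.
print(weighted_majority([([1, 0, 0], 1), ([0, 0, 1], 0), ([1, 1, 0], 1)]))  # 1
```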
Performance of WM-A1
Claim: the number of mistakes M made by Weighted Majority – Algorithm 1 is never more than 2.41(m + lg n), where m is the number of mistakes made by the best expert and n is the number of experts.
Proof:
–Suppose WM-A1 makes M mistakes. On each mistake, at least half the total weight sat on wrong experts and was halved, so the total weight dropped by at least a quarter. Since all initial weights are 1 (initial total weight n), the final total weight is no more than n(3/4)^M.
–On each of the best expert’s m mistakes its weight was halved, so its final weight is (1/2)^m.
–The best expert’s weight is no more than the total weight, so (1/2)^m ≤ n(3/4)^M.
Performance of WM-A1
Proof (cont.):
(1/2)^m ≤ n(3/4)^M
(4/3)^M ≤ n · 2^m
M lg(4/3) ≤ lg n + m
M ≤ [1 / lg(4/3)](m + lg n)
M ≤ 2.41(m + lg n)
This establishes the claim.
Are the experts independent?
“Weighted Majority” – Algorithm 2
–Initialize the weights of all experts, w_1, …, w_n, to 1.
–At each step, make a probabilistic decision: output 1 with probability equal to the sum of the weights of the experts that say 1, divided by the total weight.
–After each step, multiply the weight of each expert who was wrong by β (leave the weights of the correct experts unchanged).
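The randomized variant differs from Algorithm 1 only in the prediction rule and the penalty factor β; again the input format is our choice.

```python
import random

def randomized_weighted_majority(rounds, beta=0.5, rng=random.Random(0)):
    weights = [1.0] * len(rounds[0][0])
    mistakes = 0
    for preds, outcome in rounds:
        total = sum(weights)
        prob_of_1 = sum(w for w, p in zip(weights, preds) if p == 1) / total
        guess = 1 if rng.random() < prob_of_1 else 0  # weight-proportional guess
        if guess != outcome:
            mistakes += 1
        weights = [w * beta if p != outcome else w    # penalize the wrong experts
                   for w, p in zip(weights, preds)]
    return mistakes

print(randomized_weighted_majority([([1, 0, 0], 1), ([0, 0, 1], 0), ([1, 1, 0], 1)]))
```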
Performance of WM-A2
Claim: the expected number of mistakes made by Weighted Majority – Algorithm 2 is never more than (m ln(1/β) + ln n)/(1 − β), where m is the number of mistakes made by the best expert and n is the number of experts.
–For β = 1/2, the bound is about 1.39m + 2 ln n.
–For β = 3/4, the bound is about 1.15m + 4 ln n.
Performance of WM-A2
Proof:
–Suppose we have seen t tries so far, and let F_i be the fraction of total weight on the wrong answer at the i-th try.
–The expected number of mistakes made by WM-A2 is M = Σ_{i=1..t} F_i, because on each try the probability of a mistake is exactly F_i.
–Suppose the best expert makes m mistakes. Its weight is multiplied by β on each of them, so its final weight is β^m.
–During round i, the total weight changes as W ← W(1 − (1 − β)F_i), since the fraction F_i of the weight is multiplied by β.
Performance of WM-A2
Proof (cont.):
–At the end of t tries, the total weight is W = n · Π_{i=1..t} (1 − (1 − β)F_i).
–The total weight is at least the best expert’s weight: n · Π_{i=1..t} (1 − (1 − β)F_i) ≥ β^m.
–Taking natural logs: ln n + Σ_{i=1..t} ln(1 − (1 − β)F_i) ≥ m ln β.
–Multiplying by −1 reverses the inequality: −ln n − Σ_{i=1..t} ln(1 − (1 − β)F_i) ≤ m ln(1/β).
–A bit of math: −ln(1 − x) ≥ x for 0 ≤ x < 1, so −Σ ln(1 − (1 − β)F_i) ≥ (1 − β) Σ F_i = (1 − β)M. Substituting:
−ln n + (1 − β)M ≤ m ln(1/β)
M ≤ (m ln(1/β) + ln n) / (1 − β)
This proves that the expected number of mistakes made by Weighted Majority – Algorithm 2 is never more than (m ln(1/β) + ln n)/(1 − β).
Why does this all matter?
http://www.fda.gov/predict
Online Graph Coloring
–Vertices arrive one by one.
–We need to assign a color immediately, and it cannot be changed later.
–What we are shown is an induced subgraph, not merely a subgraph; in other words, edges cannot arrive by themselves.
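The slides do not fix a particular algorithm; First-Fit, sketched below, is the standard baseline: give each arriving vertex the smallest color not used by its already-revealed neighbors. Passing the full edge list up front is just a convenience here; only edges to previously colored vertices ever matter, matching the induced-subgraph model above.

```python
def first_fit_coloring(arrival_order, edges):
    neighbors = {v: set() for v in arrival_order}
    for u, w in edges:
        neighbors[u].add(w)
        neighbors[w].add(u)
    color = {}
    for v in arrival_order:                # vertices in arrival order
        used = {color[u] for u in neighbors[v] if u in color}
        c = 0
        while c in used:                   # smallest color not used nearby
            c += 1
        color[v] = c
    return color

# The path 1-2-3-4 revealed in the order 1, 3, 2, 4 gets only 2 colors here.
print(first_fit_coloring([1, 3, 2, 4], [(1, 2), (2, 3), (3, 4)]))
```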
Applications of Graph Coloring
–Tasks: {T_1, …, T_n}
–Concurrency constraint: unsharable resources
–Conflict matrix C: C(i, j) = 0 if T_i and T_j need no common resources, and C(i, j) = 1 otherwise
–Conflict graph G: the graph with adjacency matrix C
–G is k-colorable iff the tasks can be scheduled in k time intervals
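Putting the pieces together (our illustration, not from the slides): build the conflict graph from C and color it greedily, so that each color class becomes one time interval. Greedy coloring yields a feasible schedule, though not necessarily one with the minimum number of intervals.

```python
def schedule(conflict):
    # conflict[i][j] == 1 iff tasks i and j share an unsharable resource.
    n = len(conflict)
    slot = {}
    for i in range(n):
        used = {slot[j] for j in range(i) if conflict[i][j] == 1}
        slot[i] = min(s for s in range(n) if s not in used)
    return slot

C = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
print(schedule(C))  # {0: 0, 1: 1, 2: 0}: the first and third tasks share a slot
```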
Graph Coloring – Offline Version
–The problem is NP-complete (reduction from 3-SAT).
–Approximation algorithm?
Online Graph Coloring
What should we aim for? 1-competitive seems unlikely; n-competitive is trivial and useless.
Online Graph Coloring (cont.)
For every positive integer k, there exists a tree T_k on 2^(k−1) vertices such that every online coloring algorithm A requires at least k colors on T_k.
Party Time!