The Multiplicative Weights Update Method
Based on Arora, Hazan & Kale (2005)
Mashor Housh, Oded Cats
Advanced Simulation Methods, Prof. Rubinstein
Outline
- Weighted Majority Algorithm: binary case; generalized
- Applications:
  - Game theory: zero-sum games
  - Linear programming: fractional packing problem
  - NP-hard problems: set cover problem
  - Artificial intelligence (boosting)
WMA – Binary case
- $N$ experts give their predictions.
- Our decision rule is a weighted majority of the expert predictions.
- Initially, all experts have the same weight in our decision rule.
- The update rule for experts that predicted incorrectly is: $w_i \leftarrow (1-\varepsilon)\, w_i$ (a code sketch follows below).
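A minimal runnable sketch of this algorithm, assuming predictions and outcomes in $\{0, 1\}$; the function name and data layout are illustrative, not from the slides:

```python
import numpy as np

def weighted_majority(predictions, outcomes, eps=0.5):
    """predictions: (T, n) array with expert predictions in {0, 1}.
    outcomes: length-T array of true outcomes in {0, 1}.
    Returns the number of mistakes the weighted-majority rule makes."""
    T, n = predictions.shape
    w = np.ones(n)                          # all experts start with equal weight
    mistakes = 0
    for t in range(T):
        # weighted majority vote: predict 1 iff the weight behind 1 dominates
        vote = 1 if w @ predictions[t] > w.sum() / 2 else 0
        if vote != outcomes[t]:
            mistakes += 1
        wrong = predictions[t] != outcomes[t]
        w[wrong] *= (1 - eps)               # penalize the incorrect experts
    return mistakes
```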
WMA – Binary case
This procedure yields gains/losses that are roughly as good as those of the best expert.
Theorem 1 – the algorithm satisfies the following bound for every expert $i$:
$m^{(t)} \le 2(1+\varepsilon)\, m_i^{(t)} + \frac{2\ln n}{\varepsilon}$
where $m_i^{(t)}$ is the number of mistakes that expert $i$ has made after $t$ steps, and $m^{(t)}$ is the number of mistakes that our algorithm has made.
WMA Binary case – Proof of Theorem 1
I. Define the 'potential function': $\Phi^{(t)} = \sum_i w_i^{(t)}$, so $\Phi^{(1)} = n$.
II. Each time we make a mistake, at least half of the total weight decreases by a factor of $(1-\varepsilon)$, so: $\Phi^{(t+1)} \le \Phi^{(t)}\left(1-\frac{\varepsilon}{2}\right)$.
III. By induction: $\Phi^{(t)} \le n\left(1-\frac{\varepsilon}{2}\right)^{m^{(t)}}$.
IV. Since all weights are positive, $\Phi^{(t)} \ge w_i^{(t)} = (1-\varepsilon)^{m_i^{(t)}}$ for every expert $i$.
V. Using $-\ln(1-\varepsilon) \le \varepsilon + \varepsilon^2$ for $\varepsilon \le \frac{1}{2}$ yields the bound (worked out below).
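The algebra behind steps III–V, reconstructed (the slide's own derivation did not survive extraction):

```latex
\begin{align*}
(1-\varepsilon)^{m_i^{(t)}} = w_i^{(t)} \le \Phi^{(t)}
  \le n\left(1-\tfrac{\varepsilon}{2}\right)^{m^{(t)}}
  &\;\Longrightarrow\;
  m_i^{(t)} \ln(1-\varepsilon) \le \ln n + m^{(t)} \ln\!\left(1-\tfrac{\varepsilon}{2}\right) \\
  &\;\Longrightarrow\;
  m^{(t)} \le \frac{\ln n - m_i^{(t)} \ln(1-\varepsilon)}{-\ln\!\left(1-\tfrac{\varepsilon}{2}\right)}
  \le \frac{2\ln n}{\varepsilon} + 2(1+\varepsilon)\, m_i^{(t)},
\end{align*}
% using  -\ln(1-\varepsilon/2) \ge \varepsilon/2  and  -\ln(1-\varepsilon) \le \varepsilon + \varepsilon^2.
```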
WMA – Binary case: Example 1
4 analysts give their predictions to the stock exchange: 3 are always wrong and the 4th is always right.
[Table: each expert's prediction vs. the market outcome over days 1–4.]
WMA – Binary case: Example 1 (Cont.)
Expert weights by day (here $\varepsilon = \frac{1}{2}$, so a wrong expert's weight is halved each day):
Day 1: experts 1–3 each weigh 1; expert 4 weighs 1; wrong vs. right = 3 : 1
Day 2: experts 1–3 each weigh 0.5; expert 4 weighs 1; wrong vs. right = 1.5 : 1
Day 3: experts 1–3 each weigh 0.25; expert 4 weighs 1; wrong vs. right = 0.75 : 1
Day 4: experts 1–3 each weigh 0.125; expert 4 weighs 1; wrong vs. right = 0.375 : 1
From day 3 onward the correct expert outweighs the wrong majority, so the weighted majority vote is correct.
WMA – Binary case: Example 1 (Cont.)
Since our fourth analyst is never wrong ($m_4^{(t)} = 0$), Theorem 1 bounds our mistakes by a constant: $m^{(t)} \le \frac{2\ln 4}{\varepsilon}$. With $\varepsilon = \frac{1}{2}$ this gives $m^{(t)} \le 4\ln 4 \approx 5.5$, and indeed the algorithm errs only on the first two days.
WMA – Binary case: Example 2
100 analysts give their predictions to the stock exchange:
- 99 predict "up" with probability 0.05;
- the 100th expert predicts "up" with probability 0.99.
The market goes up 99% of the time (a simulation sketch follows below).
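This experiment can be reproduced with the `weighted_majority` sketch above; the horizon and seed below are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1000, 100
market = (rng.random(T) < 0.99).astype(int)        # the market goes up 99% of the time
preds = (rng.random((T, n)) < 0.05).astype(int)    # 99 analysts predict "up" w.p. 0.05
preds[:, -1] = (rng.random(T) < 0.99).astype(int)  # the 100th predicts "up" w.p. 0.99
print(weighted_majority(preds, market, eps=0.5))   # mistakes track the best analyst
```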
WMA – Binary case: Example 2 (Cont.)
Generalization of the WMA
- The set of events/outcomes $P$ is no longer binary and need not be bounded.
- $M(i, j)$ is the penalty that expert $i$ pays when the outcome is $j \in P$.
- $D^{(t)}$ is the distribution associated with the experts.
- The probability of choosing expert $i$ is: $p_i^{(t)} = w_i^{(t)} \big/ \sum_k w_k^{(t)}$.
- At every round we choose an expert according to $D^{(t)}$ and follow his advice.
Generalization of the WMA (Cont.)
The update rule is: $w_i^{(t+1)} = w_i^{(t)} (1-\varepsilon)^{M(i,\, j^t)}$ for $M(i, j^t) \ge 0$, and $w_i^{(t+1)} = w_i^{(t)} (1+\varepsilon)^{-M(i,\, j^t)}$ for $M(i, j^t) < 0$ (a code sketch for the non-negative case follows below).
The expected penalty of the randomized algorithm is not much worse than that of the best expert.
Theorem 2 – for penalties in $[-1, 1]$, the algorithm guarantees for every expert $i$:
$\sum_t M(D^{(t)}, j^t) \le \frac{\ln n}{\varepsilon} + (1+\varepsilon) \sum_{t:\, M(i,j^t) \ge 0} M(i, j^t) + (1-\varepsilon) \sum_{t:\, M(i,j^t) < 0} M(i, j^t)$
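A minimal sketch of the randomized algorithm, assuming penalties in $[0, 1]$; the function name, seed handling, and data layout are illustrative:

```python
import numpy as np

def multiplicative_weights(M, outcomes, eps=0.1, seed=0):
    """M: (n, m) penalty matrix, entries assumed in [0, 1].
    outcomes: sequence of outcome indices j^t.
    Returns the total penalty paid by the randomized algorithm."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    w = np.ones(n)
    total = 0.0
    for j in outcomes:
        p = w / w.sum()                 # the distribution D over experts
        i = rng.choice(n, p=p)          # choose an expert and follow his advice
        total += M[i, j]
        w *= (1 - eps) ** M[:, j]       # multiplicative update for every expert
    return total
```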
WMA – Comparison via example
The stock market example again, this time choosing a randomized expert instead of a majority vote. [Table: expert predictions vs. the market.]
WMA – Comparison via example (Cont.): with penalty only
WMA – Comparison via example (Cont.)
WMA – Comparison via example (Cont.): with penalty and reward
Generalization of the WMA – Example
- 4 weathermen give their forecasts.
- There are four possible weather conditions.
- The payoff matrix is: [4×4 matrix over the outcomes Sunny, Cloudy, Rainy, Snowy; the values did not survive extraction.]
Generalization of the WMA – Example (Cont.)
The actual weather alternates between sunny and cloudy.
Generalization of the WMA – Example (Cont.)
The actual weather cycles through the four possible weather conditions.
Applications
To draw the analogy, define the following components for each application:
- Experts
- Events
- Payoff matrix
- Weights
- Update rule
Applications: Game Theory – Zero-Sum Games
- Experts – the row player's pure strategies
- Events – the column player's pure strategies
- Payoff matrix – $M(i, j)$ is the payoff to the row player when the row player plays strategy $i$ and the column player plays strategy $j$
- A distribution $D$ on the experts represents a mixed row strategy
- The game value (von Neumann's MinMax theorem; a Nash equilibrium) is: $\lambda^* = \min_D \max_j M(D, j) = \max_P \min_i M(i, P)$
Applications: Game Theory
Algorithm for solving a zero-sum game (a code sketch follows below):
1) Initialize the weights $w_i^{(1)} = 1$; determine $\varepsilon$ and the number of rounds $T$.
2) Draw a row strategy at random according to $D^{(t)}$.
3) The column player chooses the strategy $j^t$ that maximizes his revenue against $D^{(t)}$.
4) Update the weights: $w_i^{(t+1)} = w_i^{(t)} (1-\varepsilon)^{M(i,\, j^t)}$.
5) If $t = T$, stop. Otherwise – return to step 2.
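A sketch of this loop, assuming penalties $M[i, j] \in [0, 1]$; it returns the row player's average mixed strategy, which is approximately optimal:

```python
import numpy as np

def solve_zero_sum(M, eps=0.1, T=1000):
    """M: (n, m) penalty matrix for the row player, entries in [0, 1].
    Returns the row player's average mixed strategy over T rounds."""
    n = M.shape[0]
    w = np.ones(n)
    avg = np.zeros(n)
    for _ in range(T):
        p = w / w.sum()                 # current mixed row strategy D
        j = np.argmax(p @ M)            # column player's best response to D
        w *= (1 - eps) ** M[:, j]       # penalize rows by their loss against j
        avg += p
    return avg / T
```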
Applications: Game Theory – Example 1
The penalty matrix for the row player (columns 1, 2, 3):
Row 1: 1/4  1/3  1/2
Row 2: 1/4  1/3  1/2
Row 3: 1/4  1/3  1/2
The row player chooses the minimum of the maximum penalties; the column player chooses the maximum of the minimum penalties. Since all rows are identical, the column player always plays column 3 and the game value is 1/2.
Applications: Game Theory – Example 1 (Cont.)
Applications: Game Theory – Example 2
(1) The row player chooses a strategy randomly.
(2) The column player chooses the strategy that yields the maximum benefit for him.
(3) The weighting over the row strategies is updated.
The penalty matrix (columns 1, 2, 3; one entry was lost in extraction):
Row 1: 1/4  [value lost]  1/3
Row 2: 0    0             3/4
Row 3: 2/3  1/3           2/3
(The slide's bottom row lists the column minima: 0, 0, 1/3.)
Applications: Game Theory – Example 2 (Cont.)
Applications: Artificial Intelligence
- The objective is to learn an unknown function $c : X \to \{0, 1\}$.
- A sequence of training examples $(x, c(x))$ is given, where $x$ is drawn from $\mathcal{D}$, the fixed unknown distribution on the domain $X$.
- The learning algorithm results in a hypothesis $h$.
- The error is: $\Pr_{x \sim \mathcal{D}}\left[h(x) \ne c(x)\right]$.
Applications: Artificial Intelligence (Cont.)
- Strong learning algorithm – for every $\varepsilon, \delta > 0$, with probability $\ge 1-\delta$ it outputs a hypothesis with error $\le \varepsilon$.
- $\gamma$-weak learning algorithm – for every $\delta > 0$, with probability $\ge 1-\delta$ it outputs a hypothesis with error $\le \frac{1}{2} - \gamma$.
- Boosting – combining several moderately accurate rules-of-thumb into a single highly accurate prediction rule.
Applications: Artificial Intelligence (Cont.)
- Experts – samples in the training set
- Events – the set of all hypotheses that can be generated by the weak learning algorithm
- Payoff matrix – $M(x, h) = 1$ if $h(x) = c(x)$, and $0$ otherwise
- The final hypothesis is obtained via a majority vote among the weak hypotheses $h_1, \ldots, h_T$ (a sketch follows below)
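A sketch of the boosting loop in these MW terms; `weak_learner` is a hypothetical callback returning a weak hypothesis for a weighted sample, and the payoff convention matches the matrix above (correctly classified samples lose weight, so hard samples stay influential):

```python
import numpy as np

def boost(X, y, weak_learner, eps=0.1, T=50):
    """X, y: training samples and {0, 1} labels.
    weak_learner(X, y, p): hypothetical callback returning a hypothesis
    (a callable) that beats 1/2 on the sample weighted by p."""
    n = len(X)
    w = np.ones(n)
    hypotheses = []
    for _ in range(T):
        p = w / w.sum()                          # distribution over the samples
        h = weak_learner(X, y, p)
        hypotheses.append(h)
        correct = np.array([h(x) == c for x, c in zip(X, y)])
        w[correct] *= (1 - eps)                  # hard samples keep high weight
    def final(x):                                # majority vote among h_1..h_T
        votes = sum(h(x) for h in hypotheses)
        return 1 if votes > T / 2 else 0
    return final
```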
Applications: Linear Programming
Finding a feasible solution to a system of $m$ constraints $Ax \ge b$, $x \in P$:
- Experts – constraints
- Events – solution vectors $x$
- Payoff matrix – the distance from satisfying the constraint: $M(i, x) = A_i x - b_i$
- The final solution is the average $\bar{x} = \frac{1}{T} \sum_t x^t$
- Track cases in which there is no feasible solution
Applications: Linear Programming (Cont.)
Algorithm for finding a feasible solution to an LP problem:
1) Initialize the weights $w_i^{(1)} = 1$ and the resulting distribution $p^{(1)}$.
2) Call an oracle which solves the following feasibility problem with a single constraint plus a set of easy constraints (Plotkin, Shmoys and Tardos): find $x \in P$ such that $\sum_i p_i^{(t)} (A_i x - b_i) \ge 0$. If there is no feasible solution – break.
Applications: Linear Programming (Cont.)
Algorithm for finding a feasible solution to an LP problem (Cont.; a code sketch follows below):
3) Update the weights: $w_i^{(t+1)} = w_i^{(t)} (1-\varepsilon)^{M(i,\, x^t)/\rho}$, where $M(i, x^t) = A_i x^t - b_i$ and $\rho$ bounds $|A_i x - b_i|$ (the width).
4) Update the distribution $D^{(t+1)}$ from the new weights.
5) If $t = T$, stop and return $\bar{x}$. Otherwise – return to step 2.
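A sketch of the feasibility loop above; `oracle` is a hypothetical callback implementing step 2, and `rho` is the assumed width bound:

```python
import numpy as np

def lp_feasibility(A, b, oracle, eps=0.1, T=1000, rho=1.0):
    """Looks for x with Ax >= b, x in an easy set P.
    oracle(p, A, b): hypothetical callback returning some x in P with
    p @ (A @ x - b) >= 0, or None if no such x exists.
    rho: assumed bound on |A_i x - b_i| (the width)."""
    m = A.shape[0]
    w = np.ones(m)
    xs = []
    for _ in range(T):
        p = w / w.sum()                       # weights over the constraints
        x = oracle(p, A, b)
        if x is None:
            return None                       # the original system is infeasible
        xs.append(x)
        penalty = (A @ x - b) / rho           # signed slack of each constraint
        w *= (1 - eps) ** penalty             # violated constraints gain weight
    return np.mean(xs, axis=0)                # the average is near-feasible
```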
Applications: Linear Programming – Example
Finding a feasible solution to a sample problem. [The constraint system and its solution were shown as images and did not survive extraction.]
Applications: Fractional Vertex Covering problem
Finding a feasible solution to a system of $m$ covering constraints:
- Experts – constraints
- Events – solution vectors $x$
- Payoff matrix – $M(i, x) = A_i x - b_i$
- The final solution is the average of the oracle's solutions
- Track cases in which there is no feasible solution
Applications: Fractional Vertex Covering problem (Cont.)
Algorithm for finding a feasible solution to a Fractional Covering problem:
1) Initialize the weights $w_i^{(1)} = 1$ and the resulting distribution $p^{(1)}$.
2) Call an oracle which solves the following feasibility problem with a single constraint plus a set of easy constraints (Plotkin, Shmoys and Tardos): find $x \in P$ such that $\sum_i p_i^{(t)} (A_i x - b_i) \ge 0$. If there is no feasible solution – break.
Applications: Fractional Vertex Covering problem (Cont.)
Algorithm for finding a feasible solution to a Fractional Covering problem (Cont.):
3) Update the weights: $w_i^{(t+1)} = w_i^{(t)} (1-\varepsilon)^{M(i,\, x^t)/\rho}$.
4) Update the distribution $D^{(t+1)}$ from the new weights.
5) If $t = T$, stop and return the average solution. Otherwise – return to step 2.
Applications: Flow problems
Maximum multi-commodity flow problem: a set of source–sink pairs and capacity-constrained edges.
Applications: Flow problems (Cont.)
- Experts – edges
- Events – a flow of value $c_p$ on a path $p$, where $c_p$ is the minimum capacity of an edge on the path
- Payoff matrix – $M(e, p) = c_p / c_e$ if $e \in p$, and $0$ otherwise
- Update rule: $w_e \leftarrow w_e \left(1 + \varepsilon\, c_p / c_e\right)$ for every edge $e \in p$
- Termination rule: stop once some edge weight crosses a preset saturation threshold (a sketch follows below)
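A sketch in the spirit of the Garg-Konemann algorithm, to which this MW framing corresponds; the `shortest_path` callback, the saturation threshold, and the final scaling step are assumptions of this sketch, not taken from the slides:

```python
def max_multicommodity_flow(edges, capacity, shortest_path, eps=0.1):
    """edges: list of edge ids; capacity: dict edge -> capacity.
    shortest_path(w): hypothetical callback returning the minimum-weight
    source-sink path (over all commodity pairs) as a list of edges."""
    w = {e: 1.0 for e in edges}                # one weight per edge (expert)
    flow = {}                                  # path -> accumulated flow
    limit = (1 + eps) ** (1 / eps)             # assumed saturation threshold
    while max(w.values()) < limit:
        path = tuple(shortest_path(w))
        c = min(capacity[e] for e in path)     # bottleneck capacity c_p
        flow[path] = flow.get(path, 0.0) + c
        for e in path:                         # raise weights of used edges
            w[e] *= 1 + eps * c / capacity[e]
    usage = {e: 0.0 for e in edges}            # scale so no capacity is violated
    for p, f in flow.items():
        for e in p:
            usage[e] += f
    scale = max(1.0, max(usage[e] / capacity[e] for e in edges))
    return {p: f / scale for p, f in flow.items()}
```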
Applications: Set Cover problem
Find the minimal number of subsets in a collection $C$ whose union equals the universe $U$:
- Experts – elements of the universe
- Events – sets in the collection
- Payoff matrix – $M(e, S) = 1$ if $e \in S$, and $0$ otherwise
Applications: Set Cover problem (Cont.)
Update rule: an element's weight drops to zero once it is covered, so it pays no further penalty.
In each round we search for the set which maximizes the total weight of the elements it covers; this recovers the Greedy Set Cover Algorithm (a sketch follows below).
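A sketch of the randomized variant used in the example runs below (sample an element from the weight distribution, then take the heaviest set containing it); the function name and seed handling are illustrative:

```python
import numpy as np

def mw_set_cover(universe, sets, seed=0):
    """universe: list of elements; sets: dict name -> set of elements."""
    rng = np.random.default_rng(seed)
    w = {e: 1.0 for e in universe}
    cover = []
    while any(w.values()):                      # while some element is uncovered
        total = sum(w.values())
        p = [w[e] / total for e in universe]    # distribution over elements
        i = universe[int(rng.choice(len(universe), p=p))]
        candidates = [s for s in sets if i in sets[s]]
        if not candidates:
            return None                         # element i cannot be covered
        # the maximum-weight set among those that include element i
        best = max(candidates, key=lambda s: sum(w[e] for e in sets[s]))
        cover.append(best)
        for e in sets[best]:
            w[e] = 0.0                          # covered elements drop out
    return cover
```

Applied to the vertex-cover example below (universe = edges 1–6, sets = the node edge-sets), different seeds yield three-node covers, as in the two runs shown. Choosing the set that maximizes total covered weight directly, rather than via a sampled element, gives the deterministic greedy form.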
Applications: Vertex covering problem – Example
Applications: Vertex covering problem – Example (Cont.)
Find the minimum number of nodes (subsets) that cover all of the edges (the universe). The node edge-sets are:
{1, 2, 4}, {2, 3, 5}, {1, 3}, {4, 5, 6}, {6}
Applications: Vertex covering problem – Example (Cont.)
First run:
The maximum subset which includes edge 1 is: node = 1
The selected nodes are: c = 1
Iteration 2: the probability for edge i is p = (0, 0, 0.3333, 0, 0.3333, 0.3333)
Choose an edge following the distribution: i = 3
The maximum subset which includes edge 3 is: node = 2
The selected nodes are: c = 1, 2
Iteration 3: the probability for edge i is p = (0, 0, 0, 0, 0, 1)
Choose an edge following the distribution: i = 6
The maximum subset which includes edge 6 is: node = 4
The selected nodes are: c = 1, 2, 4
Applications: Vertex covering problem – Example (Cont.)
Second run:
Iteration 1: the probability for edge i is p = (0.1667, 0.1667, 0.1667, 0.1667, 0.1667, 0.1667)
Choose an edge following the distribution: i = 6
The maximum subset which includes edge 6 is: node = 5
The selected nodes are: c = 5
Iteration 2: the probability for edge i is p = (0.3333, 0.3333, 0.3333, 0, 0, 0)
Choose an edge following the distribution: i = 3
The maximum subset which includes edge 3 is: node = 2
The selected nodes are: c = 5, 2
Iteration 3: the probability for edge i is p = (1, 0, 0, 0, 0, 0)
Choose an edge following the distribution: i = 1
The maximum subset which includes edge 1 is: node = 1
The selected nodes are: c = 5, 2, 1
Summary
- The paper presents the multiplicative weights update method as a single meta-algorithm.
- Various fields independently developed methods that share a common ground, which can be generalized into one conceptual procedure.
- The procedure consists of determining the experts, the events, the penalty matrix, the weights, and an update rule.
- Additional relevant inputs: the error size $\varepsilon$, the width $\rho$, and the number of iterations $T$.