AGT 2012 SAMOS
Braess's Paradox
A. Kaporis, Dept. of Information & Communication Systems Eng., U. Aegean, Samos, Greece & Research Academic Computer Technology Institute, U. Patras Campus, Greece
Joint work with:
D. Fotakis, National Technical University of Athens, School of Electrical and Computer Engineering
P. Spirakis, Dept. of Computer Eng. and Informatics, U. Patras, Greece & Research Academic Computer Technology Institute, U. Patras Campus, Greece
Overview
Selfish routing on a network: an infinite number of infinitesimally small users, each wanting to minimize her travel time along the path she routes.
Selfishness yields a routing that no user wants to change (all used paths have the minimum latency), aka a Wardrop equilibrium.
But such a selfish equilibrium may increase society's cost (the sum of all users' travel times) arbitrarily.
Central question: Is there a cheap (wrt the network designer) & fair (wrt the users) way to decrease society's cost, while all users still route freely & selfishly?
Yes! Exploit Braess's Paradox.
That is: no matter how new & luxurious your network is, you can make all users travel faster by (cruelly) destroying a part of it.
A network toy example
A flow-dependent latency function per edge.
NE flow = the red Zorro-like (Z-shaped) subgraph.
NE path latency = 1 + 0 + 1 = 2.
OPT flow = the blue O-shaped subgraph.
OPT path latency = ½ + 1 = 3/2 < 2 = NE path latency.
A lazy designer just deletes edge uv & achieves the optimum routing… so it is cheap (wrt the designer)!
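The arithmetic of this toy example can be checked directly; a minimal sketch, with edge labels assumed to match the slides' figure and total flow r = 1:

```python
# Braess's toy network (edge latencies assumed from the slides' figure):
# s->v: l(x)=x,  v->t: l(x)=1,  s->w: l(x)=1,  w->t: l(x)=x,  v->w: l(x)=0
def ne_latency(r=1.0):
    # With the shortcut v->w present, the path s->v->w->t dominates both
    # two-edge paths, so at equilibrium all flow r uses it.
    return r + 0.0 + r  # l_sv(r) + l_vw(r) + l_wt(r) = 1 + 0 + 1 = 2 for r = 1

def opt_latency(r=1.0):
    # With edge v->w deleted, symmetry splits the flow evenly over
    # s->v->t and s->w->t.
    return r / 2 + 1.0  # each path: 1/2 + 1 = 3/2 for r = 1

print(ne_latency(), opt_latency())  # 2.0 1.5
```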
No user is sacrificed through slower paths… so it is fair (wrt the users)!
Opposed to Stackelberg strategies, for example:
A leader has to control selfish routing on a pair of links.
The leader is highly motivated by the optimum routing. But the users are reluctant to adopt the leader's view, since all are mad about the speedy link.
So, the leader sacrifices ½ of the flow through the slower (upper) link.
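This leader-follower outcome can be verified on a hypothetical Pigou-style instance (two links, latencies l_fast(x) = x and l_slow(x) = 1, total flow 1; the concrete numbers are assumptions, not the slides' exact figure):

```python
def total_cost(x_fast, x_slow):
    # l_fast(x) = x, l_slow(x) = 1 (assumed two-link Pigou-style instance)
    return x_fast * x_fast + x_slow * 1.0

nash = total_cost(1.0, 0.0)         # all selfish users take the fast link: cost 1.0
opt = total_cost(0.5, 0.5)          # the optimum splits the flow: cost 0.75
# Stackelberg: the leader routes 1/2 of the flow through the slow link; the
# remaining selfish users all pick the fast link (latency 1/2 < 1), and the
# induced outcome matches the optimum.
stackelberg = total_cost(0.5, 0.5)  # 0.75
```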
How about taxes?
In a large class of networks, including all networks with linear latency functions, marginal-cost taxes do not improve the cost of the Nash equilibrium [Cole, Dodis, Roughgarden].
The disutility per user (= path latency + taxes paid) is increased.
It is more expensive & complex (wrt the designer) to build (& operate) tax stations per road than to close (once & for all) some roads.
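A sketch of why taxes do not help here, on the same hypothetical two-link instance (an assumption for illustration, not the example from [Cole, Dodis, Roughgarden]): the marginal-cost tax is τ_e(x) = x·l′_e(x), and while it improves the total latency, the total disutility (latency + tax) does not improve.

```python
# Two links, l1(x) = x and l2(x) = 1, total flow 1 (assumed instance).
# Marginal-cost taxes: tau1(x) = x * l1'(x) = x, tau2(x) = 0.

untaxed_latency = 1.0 * 1.0  # untaxed Nash: all flow on link 1, total latency 1.0

# Taxed Nash: users equalize latency + tax across links: x + x = 1  =>  x = 1/2.
x = 0.5
taxed_latency = x * x + (1 - x) * 1.0           # 0.75: total latency improves...
taxed_disutility = x * (x + x) + (1 - x) * 1.0  # 1.0: ...but total disutility does not
```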
But does Braess's paradox happen only in a designer's summer-night dream? It might be just a “creature” of mathematical imagination, restricted to “breathe” only in an optimization laboratory.
No! It appears almost always in a very broad class of random graphs [Valiant, Roughgarden, EC ‘06]. It has long been observed in many large cities, such as NY [Kolata, New York Times ‘90]. “It is just as likely to occur as not” [Steinberg, ‘83].
Formally, a routing instance G consists of:
a directed graph G with source node s and destination node t;
a flow r > 0 of an infinite number of infinitesimally small users that wish to route through paths of G from node s to t;
edges endowed with flow-dependent latency functions (continuous, increasing, etc.).
G is paradox-ridden if, on a subgraph H ⊆ G, the selfish routing coincides with the optimum routing.
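For parallel-link instances with strictly increasing latencies, a Wardrop equilibrium under this definition is easy to compute numerically: all used links share a common latency L, so one can bisect on L. A sketch on a hypothetical two-link instance (not part of the talk's results):

```python
# At equilibrium, every used link e carries flow x_e = l_e^{-1}(L) for a
# common latency L, and the link flows must sum to the demand r.
def equilibrium_latency(inverse_latencies, r, lo=0.0, hi=100.0, iters=60):
    for _ in range(iters):
        mid = (lo + hi) / 2
        # total flow routed if the common latency were mid (unused links get 0)
        flow = sum(max(0.0, inv(mid)) for inv in inverse_latencies)
        if flow < r:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical instance: l1(x) = x, l2(x) = 1 + x, demand r = 1.
# Analytically: x1 = L, x2 = max(0, L - 1); x1 + x2 = 1 gives L = 1, x = (1, 0).
L = equilibrium_latency([lambda y: y, lambda y: y - 1.0], r=1.0)
print(round(L, 6))  # 1.0
```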
We focus on real-world routing instances, with latencies as in: F. Kelly, “The Mathematics of Traffic in Networks”, The Princeton Companion to Mathematics, Princeton University Press.
Designer's problems. Given an instance G:
Paradox-Ridden Recognition (ParRid): decide if G is paradox-ridden.
Best Subnetwork Equilibrium Latency (BSubEL): find the best subnetwork H_B of G and its equilibrium latency L(H_B).
Designer's problems. Given an instance G:
Minimum Latency Modification (MinLatMod): with each edge e endowed with a polynomial latency of degree d in the flow x, modify the latency so that: (i) the Euclidean distance between the coefficient vectors is minimum, & (ii) the induced common latency gives the cost of the optimum flow on G.
Previous work on Braess's paradox
First observed by D. Braess in ’68; it has inspired a vast number of papers. A nice history of it in: [Roughgarden, Selfish Routing and the Price of Anarchy, MIT, ’05]. Recent work shows that the paradox is quite likely to occur [Valiant, Roughgarden, EC ‘06]. For general latencies, ParRid is NP-hard [Roughgarden, ’05]. Also, it is hard even to approximate BSubEL beyond a critical value (4/3 for linear latencies & n/2 for polynomial ones) [Roughgarden, ’05]. MinLatMod is one of the most important problems [Magnanti, Wong: Network design and transportation planning: Models and algorithms, Transp. Sci. ’84].
Our results
Recognizing paradox-ridden instances (ParRid): This problem is NP-complete for arbitrary linear latencies. We show that it becomes polynomially solvable for the important case of strictly increasing linear latencies. Furthermore, we reduce ParRid with arbitrary linear latencies to the problem of generating all optimal basic feasible solutions of a Linear Program that describes the optimal traffic allocations to the constant-latency edges.
Our results
Best Subnetwork Equilibrium Latency (BSubEL):
For linear latencies, it is hard even to approximate BSubEL beyond the ratio 4/3.
For instances with polynomially many paths, each of polylogarithmic length, and arbitrary linear latencies, we present a subexponential-time approximation scheme. That is, for any ε > 0, the algorithm computes a subnetwork with an ε-NE whose common latency is within an additive term of ε/2 of the optimum common latency. The running time is exponential in poly(log m)/ε², where m is the number of edges.
Also, for instances with strictly increasing linear latencies that are not paradox-ridden, we show that there is an instance-dependent δ > 0 such that the equilibrium latency is within a factor of 4/3 − δ of the equilibrium latency on the best subnetwork.
Our results
Minimum Latency Modification (MinLatMod):
If the instance is not paradox-ridden, however, it is not possible to turn the optimal flow into a Nash flow just by removing edges. Enforcing the optimal flow is possible if, in addition to removing edges, the administrator can modify the latency functions. We present a polynomial-time algorithm for minimally modifying the latency functions of the edges used by the optimal flow, so that the optimal flow is enforced as a Nash flow on the subnetwork it uses, with the modified latencies.
Best Subnetwork Equilibrium Latency (BSubEL): the idea
For instances with polynomially many paths, each of polylogarithmic length, and arbitrary linear latencies, we present a subexponential-time approximation scheme for the subgraph of minimum common latency.
We have to search amongst exponentially many subgraphs to find the best one, H_B, of minimum latency L(H_B). Can we approximate H_B (best) with an H* (almost best, i.e. L(H*) ≤ L(H_B) + ε) which is “small” and thus “easy” to find?
How “small”? “Small” means that H* has only polylogarithmically many paths that receive flow, because this implies subexponentially many candidates for H*.
Central question: does there exist such a “small” H* that performs almost as well as the “big” H_B?
Is it possible for such a “small” H* to perform almost as well as a “big” H_B?
If one had to bet, she would not put her penny on “David”… But our hopes come from a seemingly unrelated topic:
Once upon a time, there was a bimatrix game with only 2 rivals. But each rival was armed with a “Goliath” set of strategies, so it was a headache to compute their best subsets of strategies to play. Until a wise old man showed them that only a “David” subset of strategies can make them almost as happy as the best “Goliath” subset of strategies.
91
AGT 2012 SAMOS 91 How does each player find a small & almost optimal subset of strategies? By the Probabilistic Method… Each rival selects sequentially & at random (according to an optimal but unknown distribution) a row (column) from her matrix. Strong tail bounds show that, with probability at least a positive constant, after polylogarithmically many random row (column) selections, her expected gain (from her random "David" subset) is close to the optimal expected gain (from her "Goliath" subset). Thus, by the Probabilistic Method, there exist "small" subsets of the rivals' strategies that come close to the optimal expected gain. So each rival only has to search exhaustively, in subexponential time, for such a "small" subset.
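The tail-bound argument on this slide can be illustrated with a minimal simulation, in the spirit of the Lipton-Markakis-Mehta sparse-support technique: sample k = O(log m / ε²) pure strategies i.i.d. from a mixed strategy p; Hoeffding-type bounds then say the empirical average payoff of the sample is within ε of the true expected payoff, except with exponentially small probability. All numbers below are made up for illustration and are not from the talk.

```python
import math
import random

random.seed(0)
m = 1000                                      # "Goliath": many pure strategies
payoff = [random.random() for _ in range(m)]  # payoffs normalized to [0, 1]
weights = [random.random() for _ in range(m)]
total = sum(weights)
p = [w / total for w in weights]              # a mixed strategy (known here,
                                              # unknown in the actual argument)

exact = sum(pi * u for pi, u in zip(p, payoff))   # true expected payoff

eps = 0.05
k = math.ceil(math.log(m) / eps ** 2)         # "David": k = O(log m / eps^2)
sample = random.choices(range(m), weights=p, k=k)
empirical = sum(payoff[i] for i in sample) / k

# |empirical - exact| < eps, except with exponentially small probability
```

Since the support of the sample has size at most k, this also shows (by the Probabilistic Method) that a k-sized subset of strategies with near-optimal expected gain exists, which is what the exhaustive subexponential search then looks for.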
AGT 2012 SAMOS 93 From a bimatrix game to an instance G (a graph!) A flow = an infinite number of players (but a bimatrix game has only 2 players)… View the flow on paths as the mixed strategy of a single player (the FLOW-player)! An instance G has no payoff matrices (but a bimatrix game has 2 such matrices)… View the edge-path adjacency matrix as the payoff matrix of the FLOW-player. There are congestion effects on G (but a bimatrix game has no congestion)… Show that congestion affects the random experiment only smoothly.
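The edge-path adjacency matrix mentioned on this slide can be made concrete on the classic 4-node Braess network. This is an illustrative sketch (the node and edge names are ours, not from the talk): entry A[e][p] is 1 iff edge e lies on path p, and the FLOW-player's mixed strategy, scaled by the traffic rate, is a path-flow vector x whose induced edge loads are the product A·x.

```python
edges = ["s-v", "s-w", "v-w", "v-t", "w-t"]
paths = {                       # the three s-t paths of the Braess network
    "s-v-t": ["s-v", "v-t"],
    "s-w-t": ["s-w", "w-t"],
    "s-v-w-t": ["s-v", "v-w", "w-t"],
}

# A[e][p] = 1 iff edge e lies on path p (the FLOW-player's "payoff" matrix).
A = [[1 if e in p_edges else 0 for p_edges in paths.values()]
     for e in edges]

# A path-flow vector x plays the role of the FLOW-player's mixed strategy;
# the edge loads are the matrix-vector product A x.
x = [0.0, 0.0, 1.0]             # everyone uses the "shortcut" path s-v-w-t
load = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(edges))]
```

Edge latencies are then evaluated at these loads, which is exactly where the congestion effects (absent in a bimatrix game) enter the random experiment.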
AGT 2012 SAMOS 101 m "short" paths from node s to t
AGT 2012 SAMOS 102 Let f be the path-flow of the subgraph with the minimum common latency.
AGT 2012 SAMOS 103 What if we search exhaustively for this precious f?
AGT 2012 SAMOS 104 Select k = Θ(log m / ε²) paths at random according to the unknown f.
AGT 2012 SAMOS 105 With positive probability, these k = Θ(log m / ε²) paths induce a common latency of at most the minimum common latency (achieved by f) + ε.
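The sampling step of these slides can be sketched numerically on the simplest topology: m parallel s-t links with linear latencies ℓ_i(x) = a[i]·x + b[i]. This is our own toy simulation, not the paper's construction: we compute the Wardrop equilibrium flow f (every used link shares a common latency L), draw k = O(log m / ε²) links at random according to f, and observe that the sampled subnetwork's equilibrium latency L_sub exceeds L only slightly.

```python
import math
import random

random.seed(1)
m, r = 200, 10.0                                  # links, total traffic rate
a = [random.uniform(0.5, 2.0) for _ in range(m)]
b = [random.uniform(0.0, 5.0) for _ in range(m)]

def equilibrium_latency(links):
    """Find the common latency L with sum_i max(0, (L - b[i]) / a[i]) = r,
    by bisection: used links (b[i] < L) each carry flow (L - b[i]) / a[i]."""
    lo = 0.0
    hi = max(b[i] for i in links) + r * max(a[i] for i in links)
    for _ in range(80):
        mid = (lo + hi) / 2
        routed = sum(max(0.0, (mid - b[i]) / a[i]) for i in links)
        lo, hi = (mid, hi) if routed < r else (lo, mid)
    return (lo + hi) / 2

L = equilibrium_latency(range(m))
f = [max(0.0, (L - b[i]) / a[i]) for i in range(m)]   # equilibrium path-flow

eps = 0.1
k = math.ceil(math.log(m) / eps ** 2)                 # k = O(log m / eps^2)
sampled = sorted(set(random.choices(range(m), weights=f, k=k)))
L_sub = equilibrium_latency(sampled)
# L <= L_sub, and on this run L_sub stays within eps of L: the small random
# subnetwork performs almost as well as the full one.
```

Links carrying substantial equilibrium flow are sampled almost surely, so the flow displaced onto the sampled subnetwork is small, which is the intuition behind the positive-probability guarantee on slide 105.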