Distributed Lagrangean Relaxation Protocol for the Generalized Mutual Assignment Problem
Katsutoshi Hirayama (平山 勝敏)
Faculty of Maritime Sciences (海事科学部), Kobe University (神戸大学)
hirayama@maritime.kobe-u.ac.jp
Summary
This work is on distributed combinatorial optimization rather than distributed constraint satisfaction. I present:
the Generalized Mutual Assignment Problem (GMAP), a distributed formulation of the Generalized Assignment Problem (GAP);
a distributed Lagrangean relaxation protocol for the GMAP;
a "noise" strategy that makes the agents (in the protocol) quickly agree on a feasible solution of reasonably good quality.
Outline
Motivation: distributed task assignment
Problem: Generalized Assignment Problem; Generalized Mutual Assignment Problem; Lagrangean Relaxation Problem
Solution protocol: Overview; Primal/Dual Problem; Convergence to Feasible Solution
Experiments
Conclusion
Motivation: distributed task assignment
Example 1: transportation domain
A set of companies, each having its own transportation jobs. Each is deliberating whether to perform a job by itself or outsource it to another company. They seek an optimal assignment that satisfies their individual resource constraints (numbers of trucks).
Company1 has {job1} and 4 trucks; Company2 has {job2, job3} and 3 trucks. The jobs are located around Kobe, Kyoto, and Tokyo.
(profit, trucks) of each job for each company:
Co.1: job1 (5,2), job2 (6,2), job3 (5,1)
Co.2: job1 (4,2), job2 (2,2), job3 (2,2)
Motivation: distributed task assignment
Example 2: information gathering domain
A set of research divisions, each having its own interests in journal subscriptions. Each is deliberating whether to subscribe to a journal by itself or outsource the subscription to another division. They seek an optimal subscription plan that does not exceed their individual budgets.
Example 3: review assignment domain
A set of PC members, each having its own review assignments. Each is deliberating whether to review a paper by itself or outsource it to another PC member/colleague. They seek an optimal assignment that does not exceed their individual maximum acceptable numbers of papers.
Problem: generalized assignment problem (GAP)
These problems can be formulated as the GAP in a centralized context.
(profit, resource requirement) of each job for each agent:
Company1 (agent1), capacity 4: job1 (5,2), job2 (6,2), job3 (5,1)
Company2 (agent2), capacity 3: job1 (4,2), job2 (2,2), job3 (2,2)
Assignment constraint: each job is assigned to exactly one agent.
Knapsack constraint: the total resource requirement of each agent does not exceed its available resource capacity.
0-1 constraint: each job is either assigned or not assigned to an agent.
Problem: generalized assignment problem (GAP)
The GAP instance can be described as the following integer program, where x_ij takes 1 if agent i is to perform job j and 0 otherwise (p_ij: profit, w_ij: resource requirement, c_i: capacity):
GAP: max Σ_i Σ_j p_ij x_ij
s.t. Σ_i x_ij = 1 for each job j (assignment constraints)
Σ_j w_ij x_ij ≤ c_i for each agent i (knapsack constraints)
x_ij ∈ {0,1} for each agent i and job j
However, in this formulation the problem must be dealt with by a super-coordinator.
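The integer program can be checked on the running example by brute force. The sketch below (the names profit, weight, and capacity are shorthand for the p_ij, w_ij, and c_i above) enumerates all 2^3 = 8 ways of assigning the three jobs to the two companies:

```python
from itertools import product

# Running example from the slides: (profit, resource) per (agent, job),
# capacities 4 (Company1) and 3 (Company2).
profit = {(1, 1): 5, (1, 2): 6, (1, 3): 5,
          (2, 1): 4, (2, 2): 2, (2, 3): 2}
weight = {(1, 1): 2, (1, 2): 2, (1, 3): 1,
          (2, 1): 2, (2, 2): 2, (2, 3): 2}
capacity = {1: 4, 2: 3}
agents, jobs = [1, 2], [1, 2, 3]

def solve_gap_bruteforce():
    """Enumerate every assignment of jobs to agents and keep the best
    one that respects all knapsack constraints."""
    best_value, best_assign = None, None
    for assign in product(agents, repeat=len(jobs)):  # assign[j-1] = agent of job j
        load = {i: 0 for i in agents}
        for j, i in zip(jobs, assign):
            load[i] += weight[(i, j)]
        if any(load[i] > capacity[i] for i in agents):
            continue  # violates a knapsack constraint
        value = sum(profit[(i, j)] for j, i in zip(jobs, assign))
        if best_value is None or value > best_value:
            best_value, best_assign = value, assign
    return best_value, best_assign

print(solve_gap_bruteforce())  # → (15, (2, 1, 1))
```

On this instance the optimum is 15, attained by giving job2 and job3 to Company1 and job1 to Company2.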
Problem: generalized assignment problem (GAP)
Drawbacks of the centralized formulation:
It causes security/privacy issues. Ex.: the strategic information of a company would be revealed.
It requires maintaining a super-coordinator (computational server).
This motivates a distributed formulation of the GAP: the generalized mutual assignment problem (GMAP).
Problem: generalized mutual assignment problem (GMAP)
The agents (not a super-coordinator) solve the problem while communicating with each other.
Company1 (agent1, capacity 4) and Company2 (agent2, capacity 3) negotiate over job1, job2, and job3.
Problem: generalized mutual assignment problem (GMAP)
Assumption: the recipient agent has the right to decide whether or not it will undertake a job.
The agents share the assignment constraints. Each sees every job with its own (profit, resource requirement) pair: Company1 (agent1, capacity 4) sees job1 (5,2), job2 (6,2), job3 (5,1); Company2 (agent2, capacity 3) sees job1 (4,2), job2 (2,2), job3 (2,2).
Problem: generalized mutual assignment problem (GMAP)
The GMAP can also be described as a set of integer programs that share the assignment constraints (from each agent's viewpoint, the x variables of the other agent are variables of others):
GMP_1: max Σ_j p_1j x_1j s.t. Σ_j w_1j x_1j ≤ c_1; x_1j + x_2j = 1 for each job j; x_ij ∈ {0,1}. Agent1 decides x_11, x_12, x_13.
GMP_2: max Σ_j p_2j x_2j s.t. Σ_j w_2j x_2j ≤ c_2; x_1j + x_2j = 1 for each job j; x_ij ∈ {0,1}. Agent2 decides x_21, x_22, x_23.
Problem: Lagrangean relaxation problem
By dualizing the assignment constraints with a Lagrangean multiplier vector μ = (μ_1, μ_2, μ_3), the following problems are obtained; each is now independent of the variables of the other agent:
LGMP_1(μ): max Σ_j (p_1j − μ_j) x_1j (plus agent1's share of the constant term Σ_j μ_j) s.t. Σ_j w_1j x_1j ≤ c_1, x_1j ∈ {0,1}. Agent1 decides x_11, x_12, x_13.
LGMP_2(μ): max Σ_j (p_2j − μ_j) x_2j (plus agent2's share of the constant term Σ_j μ_j) s.t. Σ_j w_2j x_2j ≤ c_2, x_2j ∈ {0,1}. Agent2 decides x_21, x_22, x_23.
Problem: Lagrangean relaxation problem
Two important features:
1. The sum of the optimal values of {LGMP_k(μ) | k in all of the agents} provides an upper bound on the optimal value of the GAP.
2. If, for some values of μ, all of the optimal solutions to {LGMP_k(μ) | k in all of the agents} satisfy the assignment constraints, then these optimal solutions constitute an optimal solution to the GAP.
In the two-agent example: solving LGMP_1(μ) and LGMP_2(μ) gives Opt.Value1 + Opt.Value2 = Opt.Value of the GAP, and Opt.Sol1 with Opt.Sol2 form Opt.Sol of the GAP, provided Opt.Sol1 and Opt.Sol2 satisfy the assignment constraints.
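Feature 1 can be illustrated on the running example. The sketch below brute-forces each agent's relaxed subproblem by subset enumeration; the names p, w, c and the handling of the constant term Σ_j μ_j are assumptions made for illustration:

```python
from itertools import combinations

# Per-agent data from the running example (p = profit,
# w = resource requirement, c = capacity).
p = {1: [5, 6, 5], 2: [4, 2, 2]}
w = {1: [2, 2, 1], 2: [2, 2, 2]}
c = {1: 4, 2: 3}

def lgmp_opt(k, mu):
    """Optimal value of agent k's relaxed subproblem LGMP_k(mu):
    a 0-1 knapsack with reduced profits p_kj - mu_j. The constant
    term sum(mu) is added once by the caller."""
    best = 0.0
    for r in range(4):
        for subset in combinations(range(3), r):
            if sum(w[k][j] for j in subset) <= c[k]:
                best = max(best, sum(p[k][j] - mu[j] for j in subset))
    return best

mu = [0.0, 0.0, 0.0]
bound = sum(lgmp_opt(k, mu) for k in (1, 2)) + sum(mu)
print(bound)  # an upper bound on the GAP optimum
```

At μ = 0 the bound happens to coincide with the GAP optimum (15) on this tiny instance; in general it is only an upper bound.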
Solution protocol: overview
The agents repeat the following in parallel, using P2P communication, until all of the assignment constraints are satisfied:
1. Each agent k solves LGMP_k(μ), the primal problem, using a knapsack solution algorithm.
2. The agents exchange solutions with each other.
3. Each agent k finds appropriate values for μ (i.e., solves the Lagrangean dual problem) using the subgradient optimization method.
Timeline (diagram): Agent1, Agent2, and Agent3, connected by the shared constraints, repeatedly solve the dual and primal problems and exchange solutions over time.
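One synchronous round of the protocol can be sketched as follows, with brute-force knapsacks standing in for an exact solver and a plain subgradient update (the convention G_j = 1 − Σ_k x_kj and μ_j ← μ_j − l_t G_j is an assumption here; it makes the multiplier of an over-demanded job rise):

```python
from itertools import combinations

# Running example data: p = profits, w = resource requirements,
# c = capacities (shorthand names for the slides' data).
p = {1: [5, 6, 5], 2: [4, 2, 2]}
w = {1: [2, 2, 1], 2: [2, 2, 2]}
c = {1: 4, 2: 3}
agents = (1, 2)
n_jobs = 3

def solve_primal(k, mu):
    """Agent k's knapsack over reduced profits p_kj - mu_j (brute force)."""
    best_val, best_set = 0.0, set()
    for r in range(n_jobs + 1):
        for s in combinations(range(n_jobs), r):
            if sum(w[k][j] for j in s) <= c[k]:
                v = sum(p[k][j] - mu[j] for j in s)
                if v > best_val:
                    best_val, best_set = v, set(s)
    return best_set

def one_round(mu, step=1.0):
    """One synchronous round: each agent solves its primal problem,
    solutions are exchanged, and each agent applies the same
    subgradient update to mu."""
    selections = {k: solve_primal(k, mu) for k in agents}
    g = [1 - sum(1 for k in agents if j in selections[k])
         for j in range(n_jobs)]
    new_mu = [mu[j] - step * g[j] for j in range(n_jobs)]
    feasible = all(gj == 0 for gj in g)
    return new_mu, selections, feasible

mu, sels, ok = one_round([0.0, 0.0, 0.0])
print(sels, ok)  # agent1 takes jobs {0, 1}, agent2 takes {0}: infeasible
```

At μ = 0 both agents grab job1 (indexed 0 here) and nobody takes job3, so the assignment constraints are violated and μ is updated.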
Solution protocol: primal problem
Primal problem: LGMP_k(μ).
It is a 0-1 knapsack problem (over agent k's jobs, with reduced profits p_kj − μ_j) and is solved by an exact method (i.e., an optimal solution is needed).
Example: LGMP_1(μ) is agent1's knapsack problem over job1 (5,2), job2 (6,2), and job3 (5,1) with capacity 4, where each pair is (profit, resource requirement).
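The primal problem can be solved exactly with a standard dynamic program over capacities. The μ values below are made up for illustration, and the reduced profits p_kj − μ_j are an assumption about how μ enters the knapsack:

```python
def knapsack_01(profits, weights, capacity):
    """Exact 0-1 knapsack by dynamic programming over capacities.
    Returns (optimal value, chosen item indices)."""
    n = len(profits)
    # best[cap] = (value, set of items) achievable within capacity cap
    best = [(0.0, frozenset())] * (capacity + 1)
    for j in range(n):
        new = list(best)
        for cap in range(weights[j], capacity + 1):
            val, items = best[cap - weights[j]]
            if val + profits[j] > new[cap][0]:
                new[cap] = (val + profits[j], items | {j})
        best = new
    return best[capacity]

# Agent1's primal problem at a hypothetical mu = (1, 0, 2):
# reduced profits p_1j - mu_j with the data of the running example.
mu = [1.0, 0.0, 2.0]
p1, w1, c1 = [5, 6, 5], [2, 2, 1], 4
reduced = [p1[j] - mu[j] for j in range(3)]
value, chosen = knapsack_01(reduced, w1, c1)
print(value, sorted(chosen))  # → 10.0 [0, 1]
```

Note that reduced profits can become negative as μ grows; the DP simply never adds an item that would lower the value, which is exactly the desired behavior.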
Solution protocol: dual problem
Dual problem: the problem of finding appropriate values for μ. It is solved by the subgradient optimization method.
Subgradient G_j for the assignment constraint on job j: G_j = 1 − Σ_k x_kj, computed from the exchanged solutions.
Updating rule for μ_j: μ_j ← μ_j − l_t G_j, where l_t is the step length at time t.
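A minimal sketch of one subgradient step, assuming the convention G_j = 1 − Σ_k x_kj and μ_j ← μ_j − l_t G_j:

```python
def subgradient_update(mu, selections, step):
    """One subgradient step on the dual variables.
    selections: list of job-index sets, one per agent.
    G_j = 1 - (number of agents that selected job j), so the price of
    an over-demanded job rises and that of an unselected job falls."""
    new_mu = list(mu)
    for j in range(len(mu)):
        g = 1 - sum(1 for sel in selections if j in sel)
        new_mu[j] = mu[j] - step * g
    return new_mu

# The situation from the example slide: agent1 selects {job1, job2},
# agent2 selects {job1} (jobs indexed from 0 here).
mu = subgradient_update([0.0, 0.0, 0.0], [{0, 1}, {0}], step=1.0)
print(mu)  # → [1.0, 0.0, -1.0]
```

Job1's multiplier rises (two agents want it) and job3's falls (nobody wants it), steering the agents apart in the next round.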
Solution protocol: example
Suppose agent1 (capacity 4) selects {job1, job2} and agent2 (capacity 3) selects {job1}. Then G_1 = 1 − 2 = −1, G_2 = 1 − 1 = 0, and G_3 = 1 − 0 = 1. Therefore, in the next round, μ_1 increases, μ_2 is unchanged, and μ_3 decreases, making job1 less attractive and job3 more attractive.
Note: the agents involved in job j must assign a common value to μ_j.
Solution protocol: convergence to feasible solution
A common value of μ_j ensures optimality when the protocol stops. However, there is no guarantee that the protocol will eventually stop. You could force the protocol to terminate at some point to obtain a satisfactory solution, but no feasible solution may have been found by then.
In a centralized setting, Lagrangean heuristics are usually devised to transform the "best" infeasible solution into a feasible solution. In a distributed setting, such a "best" infeasible solution is inaccessible, since it is global information.
I introduce a simple strategy that makes the agents quickly agree on a feasible solution with reasonably good quality.
Noise strategy: let the agents assign slightly different values to μ_j.
Solution protocol: convergence to feasible solution
Noise strategy: the updating rule for μ_j is replaced by μ_j ← μ_j − l_t G_j + ε_j, where ε_j is a random variable whose value is uniformly distributed over an interval controlled by δ.
This rule diversifies the agents' views on the value of μ_j and can break an oscillation in which the agents repeatedly "cluster and disperse" around some job.
For δ ≠ 0, optimality when the protocol stops no longer holds; for δ = 0, it does.
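The noisy rule can be sketched as below. The noise interval [−δ, +δ] is an assumption (the slides only say the noise is uniform and controlled by δ); each agent would run this locally with its own random draws, so agents end up with slightly different multipliers:

```python
import random

def noisy_update(mu, selections, step, delta, rng=random):
    """Subgradient step plus per-agent noise (the 'noise strategy').
    delta = 0 reduces to the plain update; larger delta diversifies
    the agents' multipliers and helps break oscillations."""
    new_mu = []
    for j in range(len(mu)):
        g = 1 - sum(1 for sel in selections if j in sel)
        eps = rng.uniform(-delta, delta)  # assumed noise interval
        new_mu.append(mu[j] - step * g + eps)
    return new_mu

# With delta = 0 this reproduces the plain subgradient update.
base = noisy_update([0.0, 0.0, 0.0], [{0, 1}, {0}], step=1.0, delta=0.0)
print(base)  # → [1.0, 0.0, -1.0]
```

With δ > 0, two agents applying this rule to the same job j will generally hold slightly different μ_j, which is exactly what trades optimality for quick agreement.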
Solution protocol: rough image
Think of the value of the objective function of the GAP as the altitude of an airplane that is controlled by multiple agents and is trying to touch down on the feasible region near the optimum: there is no window and no altimeter, but a touchdown (reaching a feasible solution) can be detected.
Experiments
Objective: clarify the effect of the noise strategy.
Settings:
Problem instances (20 in total), all feasible.
#agents ∈ {3, 5, 7}; #jobs = 5 × #agents.
Profit and resource requirement of each job: an integer randomly selected from [1, 10].
Capacity of each agent = 20.
Assignment topology: chain/ring/complete/random.
Protocol:
Implemented in Java using TCP/IP socket communication.
Step length l_t = 1.0.
δ ∈ {0.0, 0.3, 0.5, 1.0}.
20 runs of the protocol with each value of δ for each instance; a run is cut off at (100 × #jobs) rounds.
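A generator for instances in the style of these settings might look as follows. Whether the values are drawn per job or per agent-job pair, and how the assignment topology is built, are not specified on the slide; this sketch draws them per pair and omits the topology:

```python
import random

def generate_instance(n_agents, rng=random):
    """Random instance in the style of the experiments: 5 * n_agents
    jobs, profits and resource requirements drawn uniformly from
    1..10, every agent capacity equal to 20. The assignment topology
    (which agents share which jobs) is omitted for brevity."""
    n_jobs = 5 * n_agents
    profit = [[rng.randint(1, 10) for _ in range(n_jobs)]
              for _ in range(n_agents)]
    weight = [[rng.randint(1, 10) for _ in range(n_jobs)]
              for _ in range(n_agents)]
    capacity = [20] * n_agents
    return profit, weight, capacity

p, w, c = generate_instance(3)
print(len(p), len(p[0]), c[0])  # → 3 15 20
```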
Experiments
Measure the following for each instance:
Opt.Ratio: the ratio of runs where optimal solutions were found.
Fes.Ratio: the ratio of runs where feasible solutions were found.
Avg/Bst.Quality: the average/best solution quality over the runs (the objective value f of the feasible solution found, compared with the optimal value o).
Avg.Cost: the average number of rounds at which feasible solutions were found.
Experiments
Observations:
The protocol with δ = 0.0 failed to find an optimal solution for almost all of the instances.
With δ ≠ 0.0, Opt.Ratio, Fes.Ratio, and Avg.Cost were clearly improved, while Avg/Bst.Quality was kept at a "reasonable" level (average > 86%, best > 92%).
In 3 out of 6 complete-topology instances, an optimal solution was never found for any value of δ.
For many instances, increasing the value of δ generally seems to push the agents toward reaching a compromise quickly.
Conclusion
I have presented:
the generalized mutual assignment problem (GMAP);
a distributed Lagrangean relaxation protocol for it;
a noise strategy that makes the agents quickly agree on a feasible solution with reasonably good quality.
Future work:
more sophisticated techniques for updating μ;
a method for distributed calculation of an upper bound on the optimal value.