Distributed Lagrangean Relaxation Protocol for the Generalized Mutual Assignment Problem
Katsutoshi Hirayama (平山 勝敏), Faculty of Maritime Sciences (海事科学部), Kobe University (神戸大学)

Summary

- This work is on distributed combinatorial optimization rather than distributed constraint satisfaction.
- I present:
  - the Generalized Mutual Assignment Problem (GMAP), a distributed formulation of the Generalized Assignment Problem;
  - a distributed Lagrangean relaxation protocol for the GMAP;
  - a "noise" strategy that makes the agents in the protocol quickly agree on a feasible solution of reasonably good quality.

Outline

- Motivation: distributed task assignment
- Problem: Generalized Assignment Problem; Generalized Mutual Assignment Problem; Lagrangean relaxation problem
- Solution protocol: overview; primal/dual problems; convergence to a feasible solution
- Experiments
- Conclusion

Motivation: distributed task assignment

- Example 1: transportation domain (Kobe, Kyoto, Tokyo)
  - A set of companies, each having its own transportation jobs: Company1 has {job1} and 4 trucks; Company2 has {job2, job3} and 3 trucks.
  - Each company is deliberating whether to perform a job by itself or outsource it to another company.
  - They seek an optimal assignment that satisfies their individual resource constraints (numbers of trucks).

  (profit, trucks) per job:
          job1   job2   job3
  Co.1    (5,2)  (6,2)  (5,1)
  Co.2    (4,2)  (2,2)  (2,2)

Motivation: distributed task assignment

- Example 2: information gathering domain
  - A set of research divisions, each having its own interests in journal subscriptions.
  - Each is deliberating whether to subscribe to a journal by itself or outsource the subscription to another division.
  - They seek an optimal subscription plan that does not exceed their individual budgets.
- Example 3: review assignment domain
  - A set of PC members, each having its own review assignment.
  - Each is deliberating whether to review a paper by itself or outsource the review to another PC member or colleague.
  - They seek an optimal assignment that does not exceed their individual maximum acceptable numbers of papers.

Problem: generalized assignment problem (GAP)

- These problems can be formulated as the GAP in a centralized context.
- (profit, resource requirement) per job:
                        job1   job2   job3   capacity
  Company1 (agent1)     (5,2)  (6,2)  (5,1)  4
  Company2 (agent2)     (4,2)  (2,2)  (2,2)  3
- Assignment constraint: each job is assigned to exactly one agent.
- Knapsack constraint: the total resource requirement of each agent does not exceed its available resource capacity.
- 0-1 constraint: each job is either assigned or not assigned to an agent.

Problem: generalized assignment problem (GAP)

- The GAP instance can be described as an integer program:

  GAP:  max  Σ_i Σ_j p_ij x_ij
        s.t. Σ_i x_ij = 1            for every job j    (assignment constraints)
             Σ_j w_ij x_ij ≤ c_i     for every agent i  (knapsack constraints)
             x_ij ∈ {0, 1}

  where x_ij takes 1 if agent i is to perform job j, and 0 otherwise; p_ij is the profit, w_ij the resource requirement, and c_i the capacity.
- However, the problem must then be handled by a super-coordinator.
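The two-agent instance above is small enough to solve by exhaustive enumeration. The following sketch (not part of the original slides) encodes the slide data in 0-indexed lists and checks both constraint families explicitly:

```python
from itertools import product

# GAP instance from the slides: profit[i][j] and weight[i][j] for agent i, job j.
profit = [[5, 6, 5], [4, 2, 2]]
weight = [[2, 2, 1], [2, 2, 2]]
capacity = [4, 3]

def solve_gap():
    """Enumerate assignments: the assignment constraint (each job to exactly
    one agent) holds by construction, and we keep the best total profit
    among assignments that also respect every agent's knapsack constraint."""
    best = (None, None)
    for assign in product(range(2), repeat=3):   # assign[j] = agent of job j
        load = [0, 0]
        value = 0
        for j, i in enumerate(assign):
            load[i] += weight[i][j]
            value += profit[i][j]
        if all(load[i] <= capacity[i] for i in range(2)) and \
           (best[0] is None or value > best[0]):
            best = (value, assign)
    return best
```

On this instance the optimum assigns job1 to agent2 and jobs 2 and 3 to agent1, with total profit 15.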

Problem: generalized assignment problem (GAP)

- Drawbacks of the centralized formulation:
  - It causes security/privacy issues, e.g., the strategic information of a company would be revealed.
  - It needs a super-coordinator (a computational server) to be maintained.
- This motivates a distributed formulation of the GAP: the generalized mutual assignment problem (GMAP).

Problem: generalized mutual assignment problem (GMAP)

- The agents (not a super-coordinator) solve the problem while communicating with each other: Company1 (agent1, capacity 4) and Company2 (agent2, capacity 3) negotiate over job1, job2, and job3.

Problem: generalized mutual assignment problem (GMAP)

- Assumption: the recipient agent has the right to decide whether or not it will undertake a job.
- The agents share the assignment constraints on job1, job2, and job3; the (profit, resource requirement) data are as before.

Problem: generalized mutual assignment problem (GMAP)

- The GMAP can also be described as a set of integer programs, GMP_1 and GMP_2, that share the assignment constraints:

  GMP_k:  max  Σ_j p_kj x_kj
          s.t. Σ_j w_kj x_kj ≤ c_k,  x_kj ∈ {0, 1},
               plus the shared assignment constraints Σ_i x_ij = 1 (which involve the variables of others).

- Agent1 decides x_11, x_12, x_13; agent2 decides x_21, x_22, x_23.

Problem: Lagrangean relaxation problem

- By dualizing the assignment constraints with a Lagrangean multiplier vector μ, the following independent subproblems are obtained:

  LGMP_k(μ):  max  Σ_j (p_kj − μ_j) x_kj
              s.t. Σ_j w_kj x_kj ≤ c_k,  x_kj ∈ {0, 1}

  (plus the constant term Σ_j μ_j in the relaxed objective).
- Agent1 decides x_11, x_12, x_13; agent2 decides x_21, x_22, x_23; the variables of others no longer appear in an agent's subproblem.

Problem: Lagrangean relaxation problem

- Two important features:
  1. For any μ, the sum of the optimal values of {LGMP_k(μ) | k over all agents} provides an upper bound on the optimal value of the GAP.
  2. If, for some value of μ, the optimal solutions to all of the LGMP_k(μ) satisfy the assignment constraints, then together they constitute an optimal solution to the GAP.
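The upper-bound property can be checked numerically on the slide instance. This sketch (an illustration, not from the slides) computes the Lagrangean bound at μ = 0, including the constant term Σ_j μ_j, and compares it with the brute-force GAP optimum:

```python
from itertools import combinations, product

# Instance from the slides: profit[i][j], weight[i][j], capacities.
profit = [[5, 6, 5], [4, 2, 2]]
weight = [[2, 2, 1], [2, 2, 2]]
capacity = [4, 3]
mu = [0.0, 0.0, 0.0]  # any multiplier vector gives a valid bound

def lgmp_value(i):
    """Optimal value of agent i's relaxed subproblem LGMP_i(mu),
    by enumerating all job subsets that fit the knapsack."""
    best = 0.0
    for r in range(4):
        for s in combinations(range(3), r):
            if sum(weight[i][j] for j in s) <= capacity[i]:
                best = max(best, sum(profit[i][j] - mu[j] for j in s))
    return best

def gap_optimum():
    """Optimal GAP value by brute force over all job-to-agent assignments."""
    best = 0
    for assign in product(range(2), repeat=3):
        load = [0, 0]
        for j, i in enumerate(assign):
            load[i] += weight[i][j]
        if all(load[i] <= capacity[i] for i in range(2)):
            best = max(best, sum(profit[i][j] for j, i in enumerate(assign)))
    return best

# Lagrangean bound: sum of subproblem optima plus the constant term sum(mu).
upper_bound = sum(lgmp_value(i) for i in range(2)) + sum(mu)
```

At μ = 0 the bound happens to be tight on this instance (both sides equal 15); in general the bound only dominates the optimum.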

Solution protocol: overview

- The agents repeat the following steps in parallel, communicating peer-to-peer, until all of the assignment constraints are satisfied:
  1. Each agent k solves LGMP_k(μ), the primal problem, using a knapsack algorithm.
  2. The agents exchange their solutions with each other.
  3. Each agent k finds appropriate values for μ (i.e., solves the Lagrangean dual problem) using the subgradient optimization method.
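The loop above can be sketched end-to-end on the two-agent instance from the earlier slides. This is an illustrative reconstruction, not the paper's implementation (which is in Java with P2P messaging): the synchronous rounds, the fixed step length, and the deterministic tie-breaking are all assumptions of the sketch.

```python
from itertools import combinations

# Instance from the slides (2 agents, 3 jobs).
profit = [[5, 6, 5], [4, 2, 2]]
weight = [[2, 2, 1], [2, 2, 2]]
capacity = [4, 3]

def best_bundle(i, mu):
    """Agent i's primal step: exact LGMP_i(mu) by subset enumeration."""
    best_val, best_set = 0.0, ()
    for r in range(len(profit[i]) + 1):
        for s in combinations(range(len(profit[i])), r):
            if sum(weight[i][j] for j in s) <= capacity[i]:
                v = sum(profit[i][j] - mu[j] for j in s)
                if v > best_val:
                    best_val, best_set = v, s
    return set(best_set)

def run_protocol(max_rounds=50, step=1.0):
    """Round-based sketch: solve primals, 'exchange' solutions, take a
    subgradient step on mu; stop when each job is selected exactly once."""
    mu = [0.0, 0.0, 0.0]
    for round_no in range(1, max_rounds + 1):
        picks = [best_bundle(i, mu) for i in range(2)]
        counts = [sum(j in p for p in picks) for j in range(3)]
        if all(c == 1 for c in counts):          # assignment constraints hold
            value = sum(profit[i][j] for i, p in enumerate(picks) for j in p)
            return round_no, value, picks
        mu = [mu[j] + step * (counts[j] - 1) for j in range(3)]
    return None
```

On this instance the sketch reaches the optimal assignment (agent1 takes jobs 2 and 3, agent2 takes job1, profit 15) after two rounds; in general, as a later slide notes, termination is not guaranteed.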

Solution protocol: primal problem

- Primal problem: LGMP_k(μ) is a knapsack problem over agent k's jobs.
- It is solved by an exact method (i.e., an optimal solution is needed).
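Any exact 0-1 knapsack method works here; the slides do not specify one, so the sketch below uses a standard dynamic program over capacities (assuming integer resource requirements), maximizing the μ-adjusted profits p_j − μ_j:

```python
def solve_lgmp(profits, weights, capacity, mu):
    """Exact 0-1 knapsack DP for one agent's relaxed subproblem:
    maximize sum_j (profits[j] - mu[j]) * x_j
    subject to sum_j weights[j] * x_j <= capacity, x_j in {0, 1}."""
    # dp[c] = (best adjusted value using capacity at most c, chosen job set)
    dp = [(0.0, frozenset())] * (capacity + 1)
    for j in range(len(profits)):
        adj = profits[j] - mu[j]          # mu-adjusted profit of job j
        new_dp = list(dp)
        for c in range(weights[j], capacity + 1):
            cand = dp[c - weights[j]][0] + adj
            if cand > new_dp[c][0]:
                new_dp[c] = (cand, dp[c - weights[j]][1] | {j})
        dp = new_dp
    return max(dp, key=lambda t: t[0])    # (optimal value, one optimal set)
```

For agent1's data from the slides (profits 5, 6, 5; weights 2, 2, 1; capacity 4) and μ = 0, the optimal value is 11, attained e.g. by {job1, job2}.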

Solution protocol: dual problem

- Dual problem: the problem of finding appropriate values for μ, solved by the subgradient optimization method.
- Subgradient for the assignment constraint on job j:  G_j = Σ_i x_ij − 1.
- Updating rule for μ_j:  μ_j ← μ_j + l_t · G_j, where l_t is the step length at time t.
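One subgradient step can be sketched as follows (an illustration; the sign convention assumed here raises the "price" μ_j of over-demanded jobs and lowers it for unclaimed ones, matching the maximization form of the subproblems):

```python
def subgradient_update(mu, selections, step):
    """One subgradient step on the Lagrangean dual.

    mu:         dict job -> current multiplier value
    selections: dict agent -> set of jobs that agent currently selects
    step:       step length l_t at the current round

    For each job j, G_j = (number of agents selecting j) - 1, and
    mu_j moves along G_j scaled by the step length."""
    counts = {}
    for chosen in selections.values():
        for j in chosen:
            counts[j] = counts.get(j, 0) + 1
    return {j: mu[j] + step * (counts.get(j, 0) - 1) for j in mu}
```

With agent1 selecting {job1, job2} and agent2 selecting {job1} (as in the next slide's example) and l_t = 1.0, μ_1 rises by 1, μ_2 is unchanged, and μ_3 falls by 1.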

Solution protocol: example

- Suppose agent1 (capacity 4) selects {job1, job2} and agent2 (capacity 3) selects {job1}.
- Then job1 is selected twice, job2 once, and job3 not at all, so G_1 > 0, G_2 = 0, and G_3 < 0; in the next round, μ_1 increases and μ_3 decreases, steering the agents away from job1 and toward job3.
- Note: the agents involved in job j must assign a common value to μ_j.

Solution protocol: convergence to a feasible solution

- A common value of μ_j ensures optimality when the protocol stops. However, there is no guarantee that the protocol will eventually stop.
- You could force the protocol to terminate at some point to get a satisfactory solution, but by then no feasible solution may have been found. In a centralized setting, Lagrangean heuristics are usually devised to transform the "best" infeasible solution into a feasible one. In a distributed setting, such a "best" infeasible solution is inaccessible, since it is global information.
- I introduce a simple strategy that makes the agents quickly agree on a feasible solution of reasonably good quality: the noise strategy, which lets agents assign slightly different values to μ_j.

Solution protocol: convergence to a feasible solution

- Noise strategy: the updating rule for μ_j is replaced by one that adds a random variable whose value is uniformly distributed over an interval whose width is controlled by δ.
- This rule diversifies the agents' views on the value of μ_j, and can break an oscillation in which the agents repeatedly "cluster and disperse" around some job.
- For δ ≠ 0, optimality at termination no longer holds; for δ = 0, it does.
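The perturbed update can be sketched as below. The exact noise interval is a parameter of the slides; a symmetric interval [−δ, +δ] is assumed here for illustration, and each agent would draw its own noise independently so that their local copies of μ_j drift apart:

```python
import random

def noisy_update(mu, counts, step, delta, rng=None):
    """Noise-strategy sketch: a subgradient step on mu plus uniform noise
    drawn from [-delta, +delta] (assumed interval).  With delta = 0 this
    reduces to the plain deterministic update.

    mu:     dict job -> this agent's current multiplier value
    counts: dict job -> number of agents currently selecting the job
    """
    rng = rng or random.Random()
    return {j: mu[j] + step * (counts.get(j, 0) - 1) + rng.uniform(-delta, delta)
            for j in mu}
```

Setting δ = 0 recovers the common-value rule, which is why the optimality guarantee at termination holds exactly in that case.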

Solution protocol: rough image

- (Figure: the value of the objective function of the GAP descends toward the feasible region and the optimum, controlled by multiple agents. No window, no altimeter, but a touchdown can be detected.)

Experiments

- Objective: clarify the effect of the noise strategy.
- Settings:
  - Problem instances (20 in total): feasible instances; #agents ∈ {3, 5, 7}; #jobs = 5 × #agents; the profit and resource requirement of each job are integers randomly selected from [1, 10]; the capacity of each agent is 20; assignment topology: chain/ring/complete/random.
  - Protocol: implemented in Java using TCP/IP socket communication; step length l_t = 1.0; δ ∈ {0.0, 0.3, 0.5, 1.0}; 20 runs of the protocol with each value of δ for each instance; a run is cut off at 100 × #jobs rounds.

Experiments

- The following are measured for each instance:
  - Opt.Ratio: the ratio of runs in which an optimal solution was found.
  - Fes.Ratio: the ratio of runs in which a feasible solution was found.
  - Avg/Bst.Quality: the average/best value of the solution qualities.
  - Avg.Cost: the average number of rounds at which a feasible solution was found.

Experiments

- Observations:
  - The protocol with δ = 0.0 failed to find an optimal solution for almost all of the instances.
  - With δ ≠ 0.0, Opt.Ratio, Fes.Ratio, and Avg.Cost were clearly improved, while Avg/Bst.Quality was kept at a "reasonable" level (average > 86%, best > 92%).
  - In 3 out of 6 complete-topology instances, an optimal solution was never found at any value of δ.
  - For many instances, increasing the value of δ generally tends to rush the agents into reaching a compromise.

Conclusion

- I have presented:
  - the generalized mutual assignment problem (GMAP);
  - a distributed Lagrangean relaxation protocol for it;
  - a noise strategy that makes the agents quickly agree on a feasible solution of reasonably good quality.
- Future work:
  - more sophisticated techniques for updating μ;
  - a method that realizes distributed calculation of an upper bound on the optimal value.