Traveling Salesman Problem (TSP)


1 Traveling Salesman Problem (TSP) Given an n × n positive distance matrix (d_ij), find a permutation π on {0,1,2,…,n−1} minimizing ∑_{i=0}^{n−1} d_{π(i), π(i+1 mod n)}. The special case of the d_ij being actual distances on a map is called the Euclidean TSP. The special case of the d_ij satisfying the triangle inequality is called the Metric TSP. We shall construct an approximation algorithm for the metric case.
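
As a quick illustration (not part of the original slides), the objective is easy to evaluate for a given permutation. The names tour_cost, dist and perm are chosen here; dist is assumed to be a list-of-lists distance matrix:

# Sketch: cost of the tour visiting the cities in the order given by perm.
def tour_cost(dist, perm):
    n = len(perm)
    # Sum d_{perm[i], perm[(i+1) mod n]} around the whole cycle.
    return sum(dist[perm[i]][perm[(i + 1) % n]] for i in range(n))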

2 Approximating the general TSP is NP-hard If there is an efficient approximation algorithm for TSP with any approximation factor α, then P=NP. Proof: We use a modification of the reduction of Hamiltonian cycle to TSP.

3 Reduction Proof: Suppose we have an efficient approximation algorithm for TSP with approximation ratio α. Given an instance (V,E) of the Hamiltonian cycle problem, construct a TSP instance (V,d) as follows: d(u,v) = 1 if (u,v) ∈ E; d(u,v) = α·|V| + 1 otherwise. Run the approximation algorithm on instance (V,d). If (V,E) has a Hamiltonian cycle, the optimal tour has cost |V|, so the approximation algorithm must return a tour of cost at most α·|V|; such a tour cannot use any non-edge and is therefore itself a Hamiltonian cycle. Of course, if (V,E) does not have a Hamiltonian cycle, the approximation algorithm will not find one!
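
The construction is easy to sketch in code (names are illustrative; the slide gives only the distance definition):

# Build the TSP instance (V,d) from a graph with n vertices and edge list edges.
def reduction_instance(n, edges, alpha):
    big = alpha * n + 1                      # one non-edge already costs more than alpha*|V|
    d = [[big] * n for _ in range(n)]
    for u, v in edges:
        d[u][v] = d[v][u] = 1                # graph edges get distance 1
    for i in range(n):
        d[i][i] = 0
    return d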

4 [figure slide]

5 General design/analysis trick Approximation algorithms often work by constructing a relaxation that provides a lower bound, and then turning the relaxed solution into a feasible solution without increasing the cost too much. The LP relaxation of the ILP formulation of the problem is a natural choice. We may then round the optimal LP solution.

6 Not obvious that it will work….

7 Min weight vertex cover Given an undirected graph G=(V,E) with non-negative weights w(v), find a minimum weight subset C ⊆ V that covers E. Min vertex cover is the case of w(v)=1 for all v.

8 ILP formulation Find (x_v)_{v∈V} minimizing ∑ w_v x_v subject to: x_v ∈ ℤ, 0 ≤ x_v ≤ 1, and for all (u,v) ∈ E, x_u + x_v ≥ 1.

9 LP relaxation Find (x_v)_{v∈V} minimizing ∑ w_v x_v subject to: x_v ∈ ℝ, 0 ≤ x_v ≤ 1, and for all (u,v) ∈ E, x_u + x_v ≥ 1.

10 Relaxation and Rounding Solve the LP relaxation. Round the optimal solution x* to an integer solution x: x_v = 1 iff x*_v ≥ ½. The rounded solution is a cover: if (u,v) ∈ E, then x*_u + x*_v ≥ 1, and hence at least one of x_u and x_v is set to 1.
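
A minimal sketch of this algorithm, assuming scipy is available (the slides do not prescribe a solver, and all names here are illustrative):

import numpy as np
from scipy.optimize import linprog

# Weighted vertex cover by LP relaxation + rounding.
def vertex_cover_lp_round(weights, edges):
    n = len(weights)
    # Encode x_u + x_v >= 1 for each edge as -x_u - x_v <= -1.
    A_ub = np.zeros((len(edges), n))
    for k, (u, v) in enumerate(edges):
        A_ub[k, u] = A_ub[k, v] = -1.0
    b_ub = -np.ones(len(edges))
    res = linprog(c=weights, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
    # Round up every coordinate that is at least 1/2; the cost at most doubles.
    return {v for v in range(n) if res.x[v] >= 0.5}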

11 Quality of solution found Let z* = ∑ w_v x*_v be the cost of the optimal LP solution. Then ∑ w_v x_v ≤ 2 ∑ w_v x*_v, as we only round up when x*_v is at least ½. Since z* ≤ the cost of the optimal ILP solution, our algorithm has approximation ratio 2.

12 Relaxation and Rounding Relaxation and rounding is a very powerful scheme for getting approximate solutions to many NP-hard optimization problems. In addition to often giving non-trivial approximation ratios, it is known to be a very good heuristic, especially the randomized rounding version. Randomized rounding of x ∈ [0,1]: round to 1 with probability x and to 0 with probability 1−x.
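
In code, randomized rounding is a one-liner (a sketch; the names are chosen here):

import random

def randomized_round(x_frac):
    # Each coordinate becomes 1 with probability equal to its fractional value.
    return [1 if random.random() < x else 0 for x in x_frac]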

13 MAX-3-CNF Given a Boolean formula in CNF with exactly three distinct literals per clause, find an assignment satisfying as many clauses as possible.

14 Approximation algorithms Given a maximization problem (e.g. MAXSAT, MAXCUT) and an efficient algorithm that always returns some feasible solution, the algorithm is said to have approximation ratio α if, for all instances, cost(optimal sol.)/cost(sol. found) ≤ α.

15 MAX3CNF, Randomized algorithm Flip a fair coin for each variable. Assign the truth value of the variable according to the coin toss. Claim: The expected number of clauses satisfied is at least (7/8)m, where m is the total number of clauses. We say that the algorithm has an expected approximation ratio of 8/7.
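
A sketch of the coin-flipping algorithm. The clause encoding is an assumption made here: each clause is a list of signed integers, +i for variable i and -i for its negation:

import random

def random_assignment_satisfied(clauses, num_vars):
    # Flip a fair coin for each variable.
    assign = {i: random.random() < 0.5 for i in range(1, num_vars + 1)}
    def sat(lit):
        return assign[abs(lit)] if lit > 0 else not assign[abs(lit)]
    # Count the clauses with at least one true literal.
    return sum(1 for clause in clauses if any(sat(lit) for lit in clause))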

16 Analysis Let Y_i be a random variable which is 1 if the i'th clause gets satisfied and 0 if not. Let Y be the total number of clauses satisfied. Pr[Y_i = 1] = 1 if the i'th clause contains some variable and its negation. Pr[Y_i = 1] = 1 − (1/2)³ = 7/8 if the i'th clause does not include a variable and its negation. So E[Y_i] = Pr[Y_i = 1] ≥ 7/8, and by linearity of expectation, E[Y] = E[∑ Y_i] = ∑ E[Y_i] ≥ (7/8)m.

17 Remarks It is possible to derandomize the algorithm, achieving a deterministic approximation algorithm with approximation ratio 8/7. Approximation ratio 8/7 − ε is not possible for any constant ε > 0 unless P=NP (shown by Håstad using Fourier analysis (!) in 1997).

18 Min set cover Given a set system S_1, S_2, …, S_m ⊆ X, find a smallest possible subsystem covering X.

19 Min set cover vs. Min vertex cover Min set cover is a generalization of min vertex cover. Identify a vertex with the set of edges adjacent to the vertex.

20 Greedy algorithm for min set cover
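
The statement of the algorithm did not survive the transcript; the standard greedy rule that the following slides analyze — repeatedly pick the set covering the most still-uncovered elements — can be sketched as follows (names chosen here; the instance is assumed to be feasible):

def greedy_set_cover(universe, sets):
    # sets: a list of Python sets whose union is assumed to cover universe.
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Greedy rule: take the set covering the most uncovered elements.
        best = max(sets, key=lambda s: len(s & uncovered))
        cover.append(best)
        uncovered -= best
    return cover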

21 Approximation Ratio Greedy-Set-Cover does not have any constant approximation ratio (this is even true for Greedy-Vertex-Cover – exercise). We can show that it has approximation ratio H_s, where s is the size of the largest set and H_s = 1/1 + 1/2 + 1/3 + … + 1/s is the s'th harmonic number. H_s = O(log s) = O(log |X|). s may be small on concrete instances. H_3 = 11/6 < 2.

22 Analysis I Let S_i be the i'th set added to the cover. Assign to each x ∈ S_i − ∪_{j<i} S_j the cost w_x = 1/|S_i − ∪_{j<i} S_j|. The size of the cover constructed is exactly ∑_{x∈X} w_x.

23 Analysis II Let C* be the optimal cover. Size of cover produced by the greedy algorithm = ∑_{x∈X} w_x ≤ ∑_{S∈C*} ∑_{x∈S} w_x ≤ |C*| · max_S ∑_{x∈S} w_x ≤ |C*| · H_s.

24 It is unlikely that there are efficient approximation algorithms with a very good approximation ratio for MAXSAT, MIN NODE COVER, MAX INDEPENDENT SET, MAX CLIQUE, MIN SET COVER, TSP, …. But we have to solve these problems anyway – what do we do?

25 Simple approximation heuristics or LP-relaxation and rounding may find better solutions than the analysis suggests on relevant concrete instances. We can improve the solutions using local search.

26 Local Search LocalSearch(ProblemInstance x) y := feasible solution to x; while ∃ z ∈ N(y): v(z) < v(y) do y := z; od; return y;
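
A runnable version of the same scheme (a sketch: the neighborhood function and the objective v are assumed to be supplied by the caller):

def local_search(y, neighbors, value):
    # Move to any strictly improving neighbor until none exists (local optimum).
    improved = True
    while improved:
        improved = False
        for z in neighbors(y):
            if value(z) < value(y):
                y = z
                improved = True
                break
    return y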

27 To do list How do we find the first feasible solution? Neighborhood design? Which neighbor to choose? Partial correctness? Termination? Complexity? Never Mind! Stop when tired! (but optimize the time of each iteration).

28 TSP Johnson and McGeoch. The traveling salesman problem: A case study (from Local Search in Combinatorial Optimization). Covers plain local search as well as concrete instantiations of popular metaheuristics such as tabu search, simulated annealing and evolutionary algorithms. A shining example of good experimental methodology.

29 TSP The branch-and-cut method gives a practical way of solving TSP instances of 1000 cities. Even larger instances have been solved. Instances considered by Johnson and McGeoch: random Euclidean instances and random distance matrix instances of several thousand cities.

30 Local search design tasks Finding an initial solution Neighborhood structure

31 The initial tour Nearest neighbor heuristic Greedy heuristic Clarke-Wright Christofides

32 [figure slide]

33 Neighborhood design Natural neighborhood structures: 2-opt, 3-opt, 4-opt,…

34 2-opt neighborhood

35 2-opt neighborhood

36 2-optimal solution

37 3-opt neighborhood

38 3-opt neighborhood

39 3-opt neighborhood

40 Neighborhood Properties Size of the k-opt neighborhood: O(n^k). k ≥ 4 is rarely considered….

41 [figure slide]

42 [figure slide]

43 [figure slide]

44 One 3-opt move takes time O(n³). How is it possible to do local optimization on instances of size 10^6?

45 2-opt neighborhood [figure: a tour with the endpoints t1, t2, t3, t4 of the two replaced edges marked]

46 A 2-opt move If d(t1,t2) ≤ d(t2,t3) and d(t3,t4) ≤ d(t4,t1), the move is not improving. Thus we can restrict the search to tuples where either d(t1,t2) > d(t2,t3) or d(t3,t4) > d(t4,t1). WLOG, d(t1,t2) > d(t2,t3).
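
A sketch of a first-improvement 2-opt pass using the slide's labels (tour is a list of city indices and d a symmetric distance matrix; both names are chosen here). Removing (t1,t2) and (t3,t4) and adding (t2,t3) and (t4,t1) corresponds to reversing the tour segment from t2 to t4:

def two_opt_step(tour, d):
    n = len(tour)
    for i in range(n - 1):
        t1, t2 = tour[i], tour[i + 1]
        for j in range(i + 2, n):
            t4, t3 = tour[j], tour[(j + 1) % n]
            # The slide's test: if neither removed edge is longer than its
            # replacement, the move cannot be improving, so skip it.
            if d[t1][t2] <= d[t2][t3] and d[t3][t4] <= d[t4][t1]:
                continue
            if d[t1][t2] + d[t3][t4] > d[t2][t3] + d[t4][t1]:
                # Reverse t2..t4: replaces (t1,t2),(t3,t4) by (t2,t3),(t4,t1).
                tour[i + 1 : j + 1] = reversed(tour[i + 1 : j + 1])
                return True                  # applied one improving move
    return False                             # tour is 2-optimal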

47 Neighbor lists For each city, keep a static list of the other cities in order of increasing distance. When looking for a 2-opt move, for each candidate for t1 with t2 being the next city, look in the neighbor list of t2 for a t3 candidate. Stop when the distance becomes too big. For random Euclidean instances, the expected time for finding a 2-opt move is linear.
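
Building the lists is straightforward (a sketch; the names are chosen here, and d is a distance matrix):

def build_neighbor_lists(d):
    n = len(d)
    # For each city, all other cities sorted by increasing distance.
    return [sorted((c for c in range(n) if c != city), key=lambda c: d[city][c])
            for city in range(n)]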

48 Problem Neighbor lists become very big. It is very rare that one looks at an item at position > 20.

49 Pruning Only keep neighbor lists of length 20. Stop the search when the end of a list is reached.

50 Still not fast enough……

51 Don’t-look bits If a candidate for t1 was unsuccessful in a previous iteration, and its successor and predecessor have not changed, ignore the candidate in the current iteration.
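
A sketch of the bookkeeping, assuming a caller-supplied search try_improve(t1) that applies one improving move anchored at t1 and returns the cities whose tour edges changed (an empty list on failure):

from collections import deque

def run_with_dont_look_bits(n, try_improve):
    dont_look = [False] * n
    queue = deque(range(n))                  # cities with cleared bits awaiting a try
    while queue:
        t1 = queue.popleft()
        changed = try_improve(t1)
        if changed:
            queue.append(t1)                 # t1 may yield further improvements
            for c in changed:
                if dont_look[c]:             # a tour edge at c changed:
                    dont_look[c] = False     # clear the bit and reactivate c
                    queue.append(c)
        else:
            dont_look[t1] = True             # failed: ignore t1 until an edge at it changes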

52 Variant for 3-opt WLOG look for t1, t2, t3, t4, t5, t6 so that d(t1,t2) > d(t2,t3) and d(t1,t2) + d(t3,t4) > d(t2,t3) + d(t4,t5).

53 On Thursday… we’ll escape local optima using tabu search and Lin-Kernighan.