Traveling Salesman Problems Motivated by Robot Navigation Maria Minkoff MIT With Avrim Blum, Shuchi Chawla, David Karger, Terran Lane, Adam Meyerson.

A Robot Navigation Problem
A robot delivers packages in a building; the goal is to deliver them as quickly as possible.
Classic model: the Traveling Salesman Problem: find a tour of minimum length.
Additional constraints: some packages have higher priority, and there is uncertainty in the robot's behavior (battery failure, sensor error, motor-control error).

Markov Decision Process Model
State space S, with a choice of actions a ∈ A at each state s.
Transition function T(s'|s, a): the action determines a probability distribution on the next state, so a sequence of actions produces a random path through the graph.
Rewards R(s) on states: if we arrive in state s at time t, we receive discounted reward γ^t R(s) for a discount factor γ.
Goal: a policy for picking an action from any state that maximizes the total discounted reward.

Exponential Discounting
Discounting motivates the robot to reach desired states quickly. It acts like inflation: reward collected in the distant future decreases in value due to uncertainty. If at each time step the robot loses power with a fixed probability, then the probability of still being alive at time t decays exponentially, so discounting reflects the expected value of a reward.

Solving MDPs
Fixing an action at each state produces a Markov chain with transition probabilities p_vw. We can then compute the expected discounted reward ρ_v when starting at state v:
ρ_v = r_v + Σ_w p_vw γ^{t(v,w)} ρ_w
Choosing actions to optimize this recurrence is solvable in polynomial time, by linear programming or by dynamic programming (as for shortest paths).
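The fixed-policy recurrence above can be evaluated by simple fixed-point iteration. A minimal sketch, with all names and data structures illustrative rather than taken from the talk:

```python
# Sketch: evaluate rho_v = r_v + sum_w p[v][w] * gamma**t[v][w] * rho_w
# for a fixed policy, by iterating the recurrence to a fixed point.
# (Illustrative helper, not the talk's LP/DP formulation.)

def evaluate_policy(rewards, p, t, gamma, iters=1000):
    """rewards[v] = r_v; p[v][w] = transition prob; t[v][w] = travel time."""
    states = list(rewards)
    rho = {v: 0.0 for v in states}
    for _ in range(iters):
        rho = {
            v: rewards[v]
               + sum(p[v].get(w, 0.0) * gamma ** t[v][w] * rho[w]
                     for w in p[v])
            for v in states
        }
    return rho
```

For a single state with a self-loop of travel time 1, reward 1, and γ = 1/2, the iteration converges to the geometric sum 1/(1 - 1/2) = 2, as the recurrence predicts.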

Solving the wrong problem
A package can only be delivered once, so the robot should not collect a reward each time it reaches a target. One solution: expand the state space. A new state = current location × set of past locations (packages already delivered); the reward is nonzero only on states whose current location is not among the previously visited ones. Now apply the MDP algorithm. Problem: the new state space has exponential size.

Tackling an easier problem
The problem has two elements that are novel for theory: discounting of the reward based on arrival time, and a probability distribution on the outcome of actions. We set the second issue aside for now; in practice, the robot can control its errors. Even the first issue by itself is hard and interesting, and a first step towards solving the whole problem.

Discounted-Reward TSP
Given: an undirected graph G = (V, E); edge weights (travel times) d_e ≥ 0; weights on nodes (rewards) r_v ≥ 0; a discount factor γ ∈ (0, 1); a root node s.
Goal: find a path P starting at s that maximizes the total discounted reward
ρ(P) = Σ_{v ∈ P} r_v γ^{d_P(v)},
where d_P(v) is the travel time along P from s to v.
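As an illustration of this objective, a small helper (hypothetical names, not from the talk) that evaluates ρ(P) for an explicit path:

```python
# Illustrative: total discounted reward of a path P in the
# Discounted-Reward TSP objective, rho(P) = sum_v r_v * gamma**d_P(v),
# where d_P(v) is the travel time along P from the root to v.

def discounted_reward(path, edge_len, reward, gamma):
    """path: list of nodes starting at the root; edge_len[(u, v)] = d_e."""
    total, elapsed = 0.0, 0.0
    seen = set()
    for i, v in enumerate(path):
        if i > 0:
            elapsed += edge_len[(path[i - 1], v)]
        if v not in seen:            # each node's reward counts only once
            total += reward[v] * gamma ** elapsed
            seen.add(v)
    return total
```

For example, on the path s, a, b with unit-length edges, rewards 4, 2, 8 and γ = 1/2, this gives 4 + 2·(1/2) + 8·(1/4) = 7.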

Approximation Algorithms
Discounted-Reward TSP is NP-complete (and so is the more general MDP-type problem), by reduction from minimum-latency TSP, so it is intractable to solve exactly. Goal: an approximation algorithm that is guaranteed to collect at least some constant fraction of the best possible discounted reward.

Related Problems
The goal of Discounted-Reward TSP is, informally, to find a "short" path that collects "lots" of reward.
Prize-Collecting TSP: given a root vertex v, find a tour containing v that minimizes total length plus foregone (undiscounted) reward. A primal-dual 2-approximation algorithm is known [GW 95].

k-TSP: find a tour of minimum length that visits at least k vertices. A 2-approximation algorithm is known for undirected graphs, based on an algorithm for PC-TSP [Garg 99], and it can be extended to handle the node-weighted version.

Mismatch
A constant-factor approximation on length does not exponentiate well. Suppose the optimum solution reaches some vertex v at time t, for reward γ^t r. A constant-factor approximation would reach v within time 2t, for reward γ^{2t} r. Result: we get only a γ^t fraction of the optimum discounted reward, not a constant fraction.
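A tiny numeric check of this mismatch, with illustrative values:

```python
# Doubling the arrival time t squares the discount factor, so a
# 2-approximation on length keeps only a gamma**t fraction of the
# discounted reward gamma**t * r. (Values below are illustrative.)

gamma, t, r = 0.5, 10, 1.0
opt = gamma ** t * r             # reward when v is reached at time t
approx = gamma ** (2 * t) * r    # reward when v is reached at time 2t
assert approx == gamma ** t * opt
```

As t grows, the retained fraction γ^t vanishes, which is why the paper approximates the reward (or the excess) rather than the total length.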

Orienteering Problem
Find a path of length at most D that maximizes the reward collected. This is the complement of k-TSP: it approximates the reward collected instead of the length, and since it does not change the length, exponentiation does no harm. The unrooted case can be solved via k-TSP. Drawback: no constant-factor approximation for the rooted non-geometric version was previously known. Our techniques also give a constant-factor approximation for the Orienteering problem.

Our Results
Using a β-approximation for k-TSP as a subroutine:
a (3/2 β + 2)-approximation for Orienteering;
an e(3/2 β + 2)-approximation for Discounted-Reward Collection;
constant-factor approximations for tree and multiple-path versions of the problems.

Our Results
Substituting the β = 2 approximation for k-TSP announced by Garg in 1999:
a (3/2 · 2 + 2) = 5-approximation for Orienteering;
a 5e-approximation for Discounted-Reward Collection;
constant-factor approximations for tree and multiple-path versions of the problems.

Eliminating Exponentiation
Let d_v = the shortest-path distance (time) to v. Define the prize at v as π_v = γ^{d_v} r_v: the maximum discounted reward possibly collectable at v. If a given path reaches v at time t_v, define the excess e_v = t_v - d_v, the difference between the shortest path and the chosen one. Then the discounted reward collected at v is γ^{e_v} π_v.
Idea: if the excess is small, the prize approximates the discounted reward.
Fact: the excess only increases as we traverse the path; excess reflects lost time, which cannot be made up.
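The identity behind this definition, γ^{t_v} r_v = γ^{e_v} π_v, can be checked directly. A minimal sketch with illustrative names:

```python
# Sketch of the prize/excess decomposition: with pi_v = gamma**d_v * r_v
# and e_v = t_v - d_v, the discounted reward gamma**t_v * r_v factors as
# gamma**e_v * pi_v. (Names are illustrative.)

def prize(d_v, r_v, gamma):
    """Maximum discounted reward collectable at v: pi_v = gamma**d_v * r_v."""
    return gamma ** d_v * r_v

def discounted_at(t_v, d_v, r_v, gamma):
    """Discounted reward when a path reaches v at time t_v >= d_v."""
    e_v = t_v - d_v                  # the path's excess at v; never negative
    return gamma ** e_v * prize(d_v, r_v, gamma)
```

For instance, with γ = 1/2, d_v = 2, t_v = 3 and r_v = 8, the prize is 2 and the discounted reward is γ^1 · 2 = 1, matching γ^3 · 8 = 1.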

The optimum path
Assume γ = 1/2 (we can scale edge lengths). Claim: at least half of the optimum path's discounted reward R is collected before the path's excess reaches 1.
Proof by contradiction: let u be the first vertex with e_u ≥ 1, and suppose more than R/2 of the reward follows u. We can shortcut directly to u and then traverse the rest of the optimum path. This reduces all excesses after u by at least 1, and so "undiscounts" those rewards by a factor γ^(-1) = 2, doubling the discounted reward collected. But that reward was more than R/2: contradiction.

New problem: Approximate Min-Excess Path
Suppose there exists an s-t path P* with prize value Π and length l(P*) = d_t + e.
Optimization version: find an s-t path P with prize value ≥ Π that minimizes the excess l(P) - d_t over the shortest path to t; this is equivalent to minimizing the total length, as in k-TSP.
Approximation version: find an s-t path P with prize value ≥ Π that approximates the optimum excess over the shortest path to t, i.e. has length l(P) = d_t + ce. This is better than approximating the entire path length.

Using Min-Excess Path
Recall that the discounted reward at v is γ^{e_v} π_v. The prefix of the optimum discounted-reward path that stops before excess 1 collects discounted reward Σ γ^{e_v} π_v ≥ R/2, so it spans prize Σ π_v ≥ R/2 and has no vertex with excess over 1.
Guess t = the last node on the optimum path with excess e_t ≤ 1. Find a path to t of approximately (4 times) minimum excess that spans ≥ R/2 prize (we can guess R/2). Excesses are then at most 4, so γ^{e_v} π_v ≥ π_v/16, and the discounted reward on the found path is ≥ R/32.

Solving the Min-Excess Path problem
Exactly solvable case: monotonic paths. Suppose the optimum path goes through vertices in strictly increasing distance from the root. Then we can find the optimum by dynamic programming, just as we can solve longest path in an acyclic graph. Build a table: for each vertex v, is there a monotonic path through v with length l and prize π?
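One way to sketch this table (illustrative code, assuming integral prizes; not the paper's exact dynamic program): order vertices by distance from the root, allow only edges that strictly increase that distance, and record the minimum length per (end vertex, collected prize) pair.

```python
# Sketch of the monotonic-path table. Only edges (u, v) with
# dist[u] < dist[v] are allowed, so the allowed edges form a DAG and
# processing vertices in distance order is a topological order.
# Prizes are assumed integral (in practice they would be discretized).

def monotonic_table(dist, edges, prize, root):
    """dist[v]: shortest-path distance from root; edges[(u, v)] = length."""
    order = sorted(dist, key=dist.get)
    best = {(root, prize[root]): 0.0}   # (end vertex, prize) -> min length
    for v in order:
        for (u, w), l in edges.items():
            if w == v and dist[u] < dist[v]:      # monotonic edge into v
                for (end, p), length in list(best.items()):
                    if end == u:
                        key = (v, p + prize[v])
                        cand = length + l
                        if cand < best.get(key, float('inf')):
                            best[key] = cand
    return best
```

On a three-vertex example with edges s-a, a-b (length 1 each) and s-b (length 2), unit prizes on a and b, the table records both a prize-2 path to b of length 2 (via a) and a prize-1 path to b of length 2 (direct).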

Solving the Min-Excess Path problem
Approximable case: wiggly paths. The length of the path to v is l_v = d_v + e_v. If e_v > d_v then l_v > e_v > l_v/2, i.e. the path takes more than twice as long as necessary to reach v. So if we approximate l_v to a constant factor, we also approximate e_v to twice that constant factor.

Approximating path length
We can use the k-TSP algorithm to find an approximately shortest s-t path with a specified prize: connect s and t by a shortest path and merge them into a single vertex r, so that the optimum path becomes a tour; solve k-TSP with root r; then "unmerge", which can yield one or more cycles.

Decomposing the optimum path
The optimum path decomposes into monotone and wiggly segments, where more than 2/3 of each wiggly segment's length is excess. This divides the problem into independent subproblems.

Decomposition Analysis
2/3 of each wiggly segment's length is excess, and that excess accumulates into the whole path: the total excess of the wiggly segments is at most the excess of the whole path, so the total length of the wiggly segments is at most 3/2 of the path's excess.
Use the dynamic program to find the shortest (min-excess) monotonic segments collecting a target prize; use k-TSP to find approximately shortest wiggly segments collecting a target prize. This approximates length, so it approximates excess; over all monotonic and wiggly segments, it approximates the total excess.

Dynamic program for Min-Excess Path
For each pair of vertices and each (discretized) prize value, find the shortest monotonic path collecting the desired prize and an approximately shortest wiggly path collecting the desired prize. Note that there are only polynomially many subproblems. Then use dynamic programming to find the optimum pasting-together of segments.

Solving the Orienteering problem: special case
Given a path from s that collects prize π, has length ≤ D, and ends at t, the farthest point from s: for any constant integer r ≥ 1, there exists a path from s to some v with prize ≥ π/r and excess ≤ (D - d_v)/r.

Solving the Orienteering problem: general case
The path ends at an arbitrary t. Let u be the farthest point from s, and connect t to s via a shortest path. One of the resulting path segments ending at u has prize ≥ π/2 and length ≤ D, which reduces the general case to the special case. Using the 4-approximation for Min-Excess Path, we get an 8-approximation for Orienteering.

Budget Prize-Collecting Steiner Tree problem
Find a rooted tree of edge cost at most D that spans the maximum amount of prize; this is the complement of k-MST. Create an Euler tour of the optimum tree T* of cost ≤ 2D and divide this tour into two paths starting at the root, each of length ≤ D. One of them contains at least half of the total prize, and a path is a special kind of tree. So a c-approximation algorithm for Orienteering yields a 2c-approximation for Budget PCST.

Summary
We showed that the maximum discounted reward can be approximated using min-excess paths, and how to approximate a min-excess path using k-TSP. Min-excess paths can also be used to solve the rooted Orienteering problem (previously an open question), as well as the "tree" and "cycle" versions of Orienteering.

Open Questions
Non-uniform discount factors: each vertex v has its own γ_v.
Non-uniform deadlines: each vertex specifies its own deadline by which it must be visited in order to collect its reward.
Directed graphs: we used k-TSP, which is only solved for undirected graphs; for directed graphs, even standard TSP has no known constant-factor approximation. We only use k-TSP and undirectedness in the wiggly parts.

Future directions
Stochastic actions: stochasticity seems to imply directedness. A special case: forget rewards; given a choice of actions, choose so as to minimize the cover time of the graph.
Applying the discounting framework to other problems, e.g. scheduling: an exponential penalty in place of hard deadlines.