Linear Programming and Smoothed Complexity – Richard Kelley

Presentation transcript:

 Linear Programming and Smoothed Complexity – Richard Kelley

 The Nevanlinna Prize  Has anyone heard of the Nevanlinna Prize?  What about the Fields Medal?  The Gödel Prize?

About the prize  Awarded at the International Congress of Mathematicians.  Once every 4 years.  You have to be younger than 40.  Awarded for outstanding contributions in “information sciences”  Mathematical aspects of computer science  Scientific computing, optimization, and computer algebra.

 2010 Winner  Daniel Spielman  Professor of computer science at Yale.  Spielman showed how to combine worst-case complexity analysis with average-case complexity analysis.  The result is called “smoothed analysis.”  Spielman (with Shang-Hua Teng) used smoothed analysis to show that the simplex method for linear programming is “basically” a polynomial-time algorithm.

Worst-Case Thinking  Think of this as a two-player game.  One player presents an algorithm.  The other player, the adversary, selects an input to the algorithm.  The adversary wants the algorithm to run slowly.  The worst-case complexity is the running time of the algorithm on an input chosen by the best possible adversary.

Average-Case Thinking  Also a game, but against a cold and indifferent Nature.  Player one presents an algorithm.  Nature chooses an input. At random.  What should “random” mean?  Because the input is random, the running time becomes a random variable.  The expected value of this random variable is the average-case complexity.
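In symbols (standard definitions, stated here for reference rather than taken from the slides): if $T(x)$ is the running time on input $x$ and $\mathcal{D}_n$ is the assumed distribution over inputs of size $n$, then

$$\mathrm{WorstCase}(n) \;=\; \max_{|x| = n} T(x), \qquad \mathrm{AvgCase}(n) \;=\; \mathop{\mathbb{E}}_{x \sim \mathcal{D}_n} \big[\, T(x) \,\big].$$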

Can we do better?  Yes!  Combine the two!  Let’s pick up some background motivation first…

Linear Programming  A form of constrained optimization.  Common in the real world.  You want to maximize a linear function.  You have a set of linear constraints.  Your variables are required to be nonnegative.  Important in the history of computer science.  Developed during WWII for Army planning.  State secret until 1947.

 For Example…

    maximize    3x + 5y
    subject to  5x + 7.2y <= 4
                x + y     <= 8
                x, y      >= 0
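The deck only states the problem; as an illustration (not part of the original slides), the same toy LP can be solved with SciPy's linprog. Since linprog minimizes, the objective is negated.

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP from the slide: maximize 3x + 5y subject to
#   5x + 7.2y <= 4,  x + y <= 8,  x, y >= 0.
# linprog minimizes, so we pass the negated objective.
c = np.array([-3.0, -5.0])
A_ub = np.array([[5.0, 7.2],
                 [1.0, 1.0]])
b_ub = np.array([4.0, 8.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)        # optimal point, roughly (0, 0.556)
print(-res.fun)     # optimal value, roughly 2.78
```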

And in general
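The general form on this slide appears only as an image in the original; the standard statement, matching the verbal description two slides back, is

$$\begin{aligned} \text{maximize}\quad & c^{\mathsf{T}} x \\ \text{subject to}\quad & A x \le b, \\ & x \ge 0, \end{aligned}$$

where $x \in \mathbb{R}^n$ is the vector of variables, $c \in \mathbb{R}^n$ the objective, $A \in \mathbb{R}^{m \times n}$ the constraint matrix, and $b \in \mathbb{R}^m$ the right-hand side.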

Linear Programming Geometry

Problems that reduce to linear programming  Economics  Utility maximization, cost minimization, profit maximization, game theory (Nash equilibria).  Graph Theory  Matching  Connectivity  Graph coloring  Maximum flows  Minimum cuts  Spanning trees  Scheduling and allocation.  Geometry (Convex Polytopes)  Sorting!!  Linear programming is “P-complete”

Example: Sorting  How is this an optimization problem?  The biggest element in the array should have the biggest index.  The second biggest element should have the second biggest index.  Etc.  How is this a constrained optimization problem?  No element should be duplicated (assuming unique elements).  No index should contain more than one element.

Sorting: The Linear Program
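The program itself is only shown as an image in the original deck; one standard reconstruction (an assignment-style LP, not necessarily the slide's exact formulation) uses variables $y_{ij}$ meaning "element $a_i$ is placed at position $j$":

$$\begin{aligned} \text{maximize}\quad & \sum_{i=1}^{n}\sum_{j=1}^{n} j \, a_i \, y_{ij} \\ \text{subject to}\quad & \sum_{j=1}^{n} y_{ij} = 1 \quad (i = 1,\dots,n), \\ & \sum_{i=1}^{n} y_{ij} = 1 \quad (j = 1,\dots,n), \\ & y_{ij} \ge 0. \end{aligned}$$

The constraint matrix is totally unimodular, so the optimal vertices are permutation matrices, and maximizing the objective forces larger elements into larger positions, i.e., a sorted order.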

 Solving Linear Programs  Even though we’re working in a continuous space, this is a discrete problem.  Why? If an optimum exists, it is attained at a vertex of the feasible polytope, and a polytope has only finitely many vertices.  The basic idea is to walk along the vertices of the feasible region, climbing to vertices that have better and better values for the objective function.

 The Simplex Algorithm  Start at some vertex of the feasible region.  The vertex consisting of all 0’s usually works.  Look at all the neighboring vertices, and find one with a higher objective function value.  Involves keeping track of a set of “basis variables”  Variables go in and out of the basis depending on whether or not they make the objective function bigger.  Repeat until you can’t improve the objective value any more.  “Easy” to show that you’re done.
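The slides describe the method in words only; below is a minimal dense-tableau sketch of that description (my own illustration, assuming the standard form max c·x with Ax <= b, x >= 0 and b >= 0, and with no anti-cycling safeguards).

```python
import numpy as np

def simplex(c, A, b):
    """Maximize c @ x subject to A @ x <= b, x >= 0 (assumes b >= 0).

    Illustrative dense-tableau version of the walk described above:
    start at the all-slack vertex, repeatedly pivot to a neighboring
    vertex with a better objective value, stop when none exists.
    """
    m, n = A.shape
    # Tableau layout: [ A | I | b ] with the row [ -c | 0 | 0 ] at the bottom.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[-1, :n] = -c
    basis = list(range(n, n + m))            # slack variables form the first basis

    while True:
        j = int(np.argmin(T[-1, :-1]))       # entering column: most negative reduced cost
        if T[-1, j] >= -1e-9:
            break                            # no improving neighbor: optimal
        col = T[:m, j]
        ratios = np.full(m, np.inf)
        ratios[col > 1e-9] = T[:m, -1][col > 1e-9] / col[col > 1e-9]
        i = int(np.argmin(ratios))           # leaving row: minimum ratio test
        if not np.isfinite(ratios[i]):
            raise ValueError("LP is unbounded")
        T[i] /= T[i, j]                      # pivot: the entering variable joins the basis
        for r in range(m + 1):
            if r != i:
                T[r] -= T[r, j] * T[i]
        basis[i] = j

    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]

# The toy LP from the earlier slide: optimum near (0, 0.556) with value ~2.78.
print(simplex(np.array([3.0, 5.0]),
              np.array([[5.0, 7.2], [1.0, 1.0]]),
              np.array([4.0, 8.0])))
```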

More Geometry

 Complexity?  How should we talk about the “size” of an instance of the linear programming problem?  Any guesses how long it should take to run the simplex algorithm?  Usually, it’s pretty quick.  With n variables and m constraints, it typically takes about 2m iterations in practice.  In theory, though, this is an O(2^n) algorithm in the worst case.  Not a typo.  This is a perfect example of the difference between theory and practice!

Worst Case  We want to force the simplex algorithm to look at exponentially many vertices before finding the best one.  The trick is to use a (hyper-)cube.

Worst Case: Klee-Minty
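The construction on this slide is shown graphically in the original; one common way to write the Klee–Minty cube (given here as a reconstruction, not necessarily with the slide's exact constants) is the slightly squashed unit cube

$$\begin{aligned} \text{maximize}\quad & x_n \\ \text{subject to}\quad & 0 \le x_1 \le 1, \\ & \varepsilon\, x_{i-1} \;\le\; x_i \;\le\; 1 - \varepsilon\, x_{i-1} \quad (i = 2,\dots,n), \end{aligned}$$

for a fixed $0 < \varepsilon < 1/2$. Starting at the origin with Dantzig's classical pivot rule, the simplex method visits all $2^n$ vertices of this polytope before reaching the optimum.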

 The “Real World”  Huge LPs are solved all the time.  In practice the number of simplex iterations usually grows roughly linearly with the size of the problem.  This makes simplex one of the few worst-case exponential algorithms that is widely used in practice.

 Longstanding Open Problem  Why does the simplex method run so fast in practice when its worst case is exponential?  For a long time, nobody had a clue.  The solution was pretty much worked out by 2005.

 The Solution  Smoothed Analysis.  Start with an arbitrarily chosen input  Could be from the adversary.  “Shake” it a little bit by adding a small amount of random noise.  The analysis then depends on the size of the input and the magnitude of the shaking.  The idea is that an algorithm with low smoothed complexity is “almost always” well-behaved: bad inputs may exist, but they are isolated and fragile.
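To make the “shaking” concrete, here is a small experiment of my own (not from the slides): fix a base LP, add independent Gaussian noise of standard deviation sigma to the constraint data, and average the solver's reported iteration count (res.nit in SciPy) over many draws.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def smoothed_iterations(c, A, b, sigma, trials=50):
    """Average LP-solver iteration count over Gaussian perturbations of (A, b).

    This mimics the smoothed-analysis input model: an adversarial base
    instance plus independent N(0, sigma^2) noise on every entry.
    """
    counts = []
    for _ in range(trials):
        A_pert = A + sigma * rng.standard_normal(A.shape)
        b_pert = b + sigma * rng.standard_normal(b.shape)
        res = linprog(-c, A_ub=A_pert, b_ub=b_pert, bounds=(0, None))
        if res.success:
            counts.append(res.nit)   # iteration count reported by the solver
    return np.mean(counts) if counts else float("nan")

# A hypothetical base instance (any fixed LP would do for the experiment).
n = 20
c = np.ones(n)
A = np.triu(np.ones((n, n)))
b = np.arange(1, n + 1, dtype=float)

for sigma in (0.0, 0.01, 0.1):
    print(sigma, smoothed_iterations(c, A, b, sigma))
```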

In Symbols
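The slide's formula is only an image in the transcript; the usual definition, along the lines of Spielman and Teng, is

$$C_{\text{smooth}}(n, \sigma) \;=\; \max_{|x| = n}\; \mathop{\mathbb{E}}_{g \sim \mathcal{N}(0,\,\sigma^2 I)} \big[\, T(x + g) \,\big],$$

so the adversary picks the base instance, Nature supplies the noise, and we measure expected running time. Taking $\sigma \to 0$ recovers worst-case analysis, while very large $\sigma$ drowns out the adversary and approaches average-case analysis.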

Graphically – Worst-Case Complexity

Smoothed Analysis

 Questions?