Transform and Conquer. Algorithms based on the idea of transformation – in the transformation stage, the problem instance is modified to be more amenable to solution.

Presentation transcript:

Transform and Conquer

Algorithms based on the idea of transformation:
– Transformation stage: the problem instance is modified to be more amenable to solution
– Conquering stage: the transformed problem is solved
Major variations in what the transformation produces:
– Instance simplification
– Different representation
– Problem reduction

Presorting
Presorting is an old idea: sort the data first, and many questions about it become easier to answer.
– We saw this with quickhull and the closest-point problem
Some other simple presorting examples:
– Element uniqueness
– Computing the mode of n numbers

Element Uniqueness
Given a list A of n orderable elements, determine whether any element occurs more than once.

Brute force:
  for each x ∈ A
    for each y ∈ A – {x}
      if x = y return not unique
  return unique

Presorting:
  sort A
  for i ← 1 to n-1
    if A[i] = A[i+1] return not unique
  return unique

Runtime?
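A minimal Python sketch of both approaches (the function names are mine, not from the slides); the brute-force version is O(n²), the presorting version O(n log n):

```python
def unique_brute_force(a):
    """Compare every pair of elements: O(n^2)."""
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            if a[i] == a[j]:
                return False
    return True


def unique_presort(a):
    """Sort first, then compare only adjacent elements: O(n log n)."""
    s = sorted(a)
    return all(s[i] != s[i + 1] for i in range(len(s) - 1))


print(unique_brute_force([3, 1, 4, 1, 5]))  # False
print(unique_presort([3, 1, 4]))            # True
```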

Computing a Mode
A mode is a value that occurs most often in a list of numbers.
– e.g. the mode of [5, 1, 5, 7, 6, 5, 7] is 5
– If several different values occur most often, any of them can be considered the mode

"Count sort" approach (assumes all values > 0; what if they aren't?):
  max ← max(A)
  freq[1..max] ← 0
  for each x ∈ A
    freq[x] += 1
  mode ← 1
  for i ← 2 to max
    if freq[i] > freq[mode]
      mode ← i
  return mode

Runtime?
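A Python version of the frequency-table idea (a sketch; as on the slide, it assumes the values are positive integers):

```python
def mode_frequency_table(a):
    """Mode via a frequency array; assumes all values are positive integers."""
    hi = max(a)
    freq = [0] * (hi + 1)        # freq[v] = number of times value v occurs
    for x in a:
        freq[x] += 1
    return max(range(1, hi + 1), key=lambda v: freq[v])


print(mode_frequency_table([5, 1, 5, 7, 6, 5, 7]))  # 5
```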

Presort Computing Mode
  sort A                                  (A is indexed 0..n-1)
  i ← 0
  modefrequency ← 0
  while i ≤ n-1
    runlength ← 1;  runvalue ← A[i]
    while i + runlength ≤ n-1 and A[i + runlength] = runvalue
      runlength += 1
    if runlength > modefrequency
      modefrequency ← runlength;  modevalue ← runvalue
    i += runlength
  return modevalue
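The same run-scanning idea in Python (a sketch; the running time is dominated by the sort, and it works for any orderable values, not just positive integers):

```python
def mode_presort(a):
    """Sort, then scan runs of equal values and keep the longest run."""
    s = sorted(a)
    i, mode_frequency, mode_value = 0, 0, None
    while i < len(s):
        run_length, run_value = 1, s[i]
        while i + run_length < len(s) and s[i + run_length] == run_value:
            run_length += 1
        if run_length > mode_frequency:
            mode_frequency, mode_value = run_length, run_value
        i += run_length
    return mode_value


print(mode_presort([5, 1, 5, 7, 6, 5, 7]))  # 5
```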

Gaussian Elimination
This is an example of transform and conquer through representation change.
Consider a system of two linear equations:
  A11 x + A12 y = B1
  A21 x + A22 y = B2
To solve this we can rewrite the first equation to solve for x:
  x = (B1 – A12 y) / A11
and then substitute it into the second equation, which now involves only y:
  A21 (B1 – A12 y) / A11 + A22 y = B2
After we solve this for y, we can substitute back to find x.
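A small sketch of this substitution method in Python (my own helper, assuming A11 ≠ 0 and a unique solution):

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a11*x + a12*y = b1, a21*x + a22*y = b2 by substitution."""
    # Substituting x = (b1 - a12*y)/a11 into the second equation and
    # collecting the y terms gives a single linear equation in y.
    y = (b2 - a21 * b1 / a11) / (a22 - a21 * a12 / a11)
    x = (b1 - a12 * y) / a11
    return x, y


print(solve_2x2(2, 1, 1, 3, 5, 10))  # 2x + y = 5, x + 3y = 10  ->  (1.0, 3.0)
```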

Gaussian Elimination
In many applications we need to solve a system of n equations with n unknowns, e.g.:
  A11 x1 + A12 x2 + … + A1n xn = B1
  A21 x1 + A22 x2 + … + A2n xn = B2
  …
  An1 x1 + An2 x2 + … + Ann xn = Bn
If n is a large number it is very cumbersome to solve these equations using the substitution method. Fortunately there is a more elegant algorithm to solve such systems of linear equations: Gaussian elimination
– Named after Carl Friedrich Gauss

Gaussian Elimination
The idea is to transform the system of linear equations into an equivalent one in which the coefficients below the diagonal are eliminated, so we end up with a triangular matrix:

  A11 x1 + A12 x2 + … + A1n xn = B1
  A21 x1 + A22 x2 + … + A2n xn = B2
  …
  An1 x1 + An2 x2 + … + Ann xn = Bn

becomes

  A'11 x1 + A'12 x2 + … + A'1n xn = B'1
    0 x1  + A'22 x2 + … + A'2n xn = B'2
  …
    0 x1  +   0 x2  + … + A'nn xn = B'n

In matrix form we can write this as: Ax = B  →  A'x = B'

Gaussian Elimination
Why transform?
  A'11 x1 + A'12 x2 + … + A'1n xn = B'1
    0 x1  + A'22 x2 + … + A'2n xn = B'2
  …
    0 x1  +   0 x2  + … + A'nn xn = B'n
A matrix with zeros below the main diagonal is called an upper triangular matrix, and such a system is easier to solve: we solve the last equation first (it has only one unknown), substitute into the second-to-last, and so on, working our way back to the first equation. This is known as back substitution.
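A short back-substitution sketch in Python (it assumes the diagonal entries are nonzero):

```python
def back_substitute(u, b):
    """Solve U x = b where U is upper triangular with nonzero diagonal."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                     # last equation first
        known = sum(u[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - known) / u[i][i]
    return x


# 2x + y = 5 and 3y = 6  ->  y = 2, x = 1.5
print(back_substitute([[2.0, 1.0], [0.0, 3.0]], [5.0, 6.0]))  # [1.5, 2.0]
```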

Gaussian Elimination Example
Solve the following system:
  2x1 – x2 + x3 = 1
  4x1 + x2 – x3 = 5
   x1 + x2 + x3 = 0
Elimination steps: subtract 2 × row 1 from row 2, subtract ½ × row 1 from row 3, and then subtract ½ × the new row 2 from the new row 3.

Gaussian Elimination
In our example we replaced an equation with its sum or difference with a multiple of another equation. We might also need to:
– Exchange two equations
– Replace an equation with a nonzero multiple of itself

Pseudocode (A is treated as the n × (n+1) augmented matrix):
  for i ← 1 to n do
    A[i, n+1] ← B[i]                        // append B as an extra column of A
  for i ← 1 to n – 1 do
    for j ← i+1 to n do
      temp ← A[j, i] / A[i, i]              // compute the ratio before row j is modified
      for k ← i to n+1 do
        A[j, k] ← A[j, k] – A[i, k] * temp

This is an O(n³) algorithm.
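A runnable Python version of the whole procedure, forward elimination followed by back substitution (a sketch: it assumes nonzero pivots and does no row exchanges or partial pivoting):

```python
def gaussian_elimination(a, b):
    """Solve Ax = b by forward elimination and back substitution."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]   # augmented matrix
    for i in range(n - 1):                             # forward elimination
        for j in range(i + 1, n):
            factor = m[j][i] / m[i][i]                 # ratio taken before row j changes
            for k in range(i, n + 1):
                m[j][k] -= m[i][k] * factor
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                     # back substitution
        known = sum(m[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (m[i][n] - known) / m[i][i]
    return x


# The example above: 2x1 - x2 + x3 = 1, 4x1 + x2 - x3 = 5, x1 + x2 + x3 = 0
print(gaussian_elimination([[2, -1, 1], [4, 1, -1], [1, 1, 1]], [1, 5, 0]))  # [1.0, 0.0, -1.0]
```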

Textbook Chapter 6 Skipping other matrix operations, balanced trees, heaps, binary exponentiation

Problem Reduction
If you need to solve a problem, reduce it to another problem that you know how to solve; we saw this idea already with NP-complete problems.

  Problem 1 (to be solved)
     → reduction   →  Problem 2 (solvable by algorithm A)
     → algorithm A →  solution to Problem 2
     → cleanup     →  solution to Problem 1

Linear Programming
One more example of problem reduction: linear programming.
A linear program (LP) is a problem that can be expressed as follows (the so-called standard form):
  minimize (or maximize)  cx
  subject to              Ax = b
                          x ≥ 0
where x is the vector of variables to be solved for, A is a matrix of known coefficients, and c and b are vectors of known coefficients. The expression cx is called the objective function, and the equations Ax = b are called the constraints.

Linear Programming Example
Wyndor Glass produces glass windows and doors. They have 3 plants:
– Plant 1: makes aluminum frames and hardware
– Plant 2: makes wood frames
– Plant 3: produces glass and does assembly
Two products are proposed:
– Product 1: an 8-foot glass door with aluminum framing (x1)
– Product 2: a 4 ft × 6 ft wood-framed glass window (x2)
Some production capacity in the three plants is available to produce a combination of the two products. The problem is to determine the product mix that maximizes profit.

Wyndor Glass Co. Data

                      Production time per batch (hr)     Production time available
  Plant               Product 1        Product 2         per week (hr)
  1                       1                0                   4
  2                       0                2                  12
  3                       3                2                  18
  Profit per batch     $3,000           $5,000

Formulation:
  Maximize  z = 3x1 + 5x2              (objective: maximize profit, in thousands of dollars)
  Subject to
     x1        ≤ 4                     (Plant 1)
          2x2  ≤ 12                    (Plant 2)
    3x1 + 2x2  ≤ 18                    (Plant 3)
    x1, x2 ≥ 0                         (non-negativity requirements)
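As a sanity check, this formulation can be handed directly to an off-the-shelf LP solver. The sketch below uses SciPy's linprog (which minimizes, so the objective is negated); the well-known optimum for this instance is x1 = 2, x2 = 6 with z = 36, i.e. $36,000 per week.

```python
from scipy.optimize import linprog

c = [-3, -5]                 # negate to turn "maximize 3x1 + 5x2" into a minimization
A_ub = [[1, 0],              #  x1        <= 4   (Plant 1)
        [0, 2],              #       2x2  <= 12  (Plant 2)
        [3, 2]]              # 3x1 + 2x2  <= 18  (Plant 3)
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)       # expected: [2. 6.] 36.0
```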

Example 2: Spacing to Center Text
To center text we need to indent it ourselves using an appropriate number of space characters. The complication is that we have two types of spaces, the usual space and the option-space (also known as a non-breaking space), and the two have different widths.
Given three numbers:
– a (the width of a normal space),
– b (the width of an option-space), and
– c (the amount we want to indent),
find two more numbers:
– x (the number of normal spaces to use), and
– y (the number of option-spaces to use),
so that ax + by is as close as possible to c.

Spacing Problem
Visualize the problem in two dimensions, say with a = 11, b = 9, and c = 79. Each blue dot in the picture represents a combination of x normal spaces and y option-spaces; the red line represents the ideal width of 79 pixels. We want to find a blue dot that is as close as possible to the red line.

Spacing Problem
If we want to find the closest point below the line, then our constraints become:
  x ≥ 0
  y ≥ 0
  ax + by ≤ c
The linear programming problem is to maximize ax + by subject to these constraints, i.e. to find the feasible point closest to (but not beyond) the line.

Possible Solutions Brute Force
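For the instance above (a = 11, b = 9, c = 79) brute force is perfectly feasible: enumerate every non-negative integer pair (x, y) with ax + by ≤ c and keep the widest one. A small sketch (the function name is mine):

```python
def best_spacing(a, b, c):
    """Try every (x, y) with a*x + b*y <= c; return the pair closest to c."""
    best = (0, 0)
    for x in range(c // a + 1):                     # x normal spaces
        for y in range((c - a * x) // b + 1):       # y option-spaces
            if a * x + b * y > a * best[0] + b * best[1]:
                best = (x, y)
    return best


x, y = best_spacing(11, 9, 79)
print(x, y, 11 * x + 9 * y)   # 3 5 78 -- the closest we can get without exceeding 79
```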

Possible Solutions
Simplex method
– Considers only points along the boundary of the “feasible region”
– We won’t go into the algorithm here; its worst-case running time is exponential, but in practice it generally runs efficiently, in polynomial time

Knapsack Problem
We can reduce the knapsack problem to a linear programming problem whose variables are restricted to 0 or 1 (an integer program).
Discrete or 0-1 knapsack problem:
– A knapsack of capacity W
– n items of weights w1, w2, …, wn and values v1, v2, …, vn
– We can only take an entire item or leave it
Reduces to:
  Maximize    v1 x1 + v2 x2 + … + vn xn
  Subject to  w1 x1 + w2 x2 + … + wn xn ≤ W
              where each xi = 0 or 1
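A tiny sketch that evaluates this 0-1 formulation directly by enumerating every assignment (fine for small n; a real solver would use an integer-programming method such as branch and bound). The item data here are made up for illustration.

```python
from itertools import product

def knapsack_01(weights, values, capacity):
    """Maximize sum(v*x) subject to sum(w*x) <= capacity, with each x in {0, 1}."""
    best_value, best_x = 0, (0,) * len(weights)
    for x in product((0, 1), repeat=len(weights)):         # every 0-1 assignment
        weight = sum(w * xi for w, xi in zip(weights, x))
        value = sum(v * xi for v, xi in zip(values, x))
        if weight <= capacity and value > best_value:
            best_value, best_x = value, x
    return best_value, best_x


# Made-up instance: capacity 10
print(knapsack_01([7, 3, 4, 5], [42, 12, 40, 25], 10))     # (65, (0, 0, 1, 1))
```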