Algorithms for the Maximum Subarray Problem Based on Matrix Multiplication. Authors: Hisao Tamaki & Takeshi Tokuyama. Speaker: Rung-Ren Lin.


Outline: Introduction, ½-approximation, Funny matrix multiplication, Reduction, Two little programs.

Introduction

Definition: Given an m by n matrix, output the maximum subarray, i.e., a contiguous rectangular submatrix whose entries have the largest sum.
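To make the definition concrete, here is a brute-force sketch (illustrative only, not one of the paper's algorithms) that enumerates every rectangular submatrix:

```python
def max_subarray_bruteforce(A):
    """Return the largest sum over all contiguous rectangular submatrices of A."""
    m, n = len(A), len(A[0])
    best = A[0][0]
    for i1 in range(m):
        for i2 in range(i1, m):
            for j1 in range(n):
                for j2 in range(j1, n):
                    s = sum(A[i][j]
                            for i in range(i1, i2 + 1)
                            for j in range(j1, j2 + 1))
                    best = max(best, s)
    return best

# Example: the best rectangle is rows 0-1, columns 1-2, with sum -2+3+4+2 = 7.
A = [[1, -2, 3],
     [-1, 4, 2],
     [-5, -1, -1]]
print(max_subarray_bruteforce(A))  # → 7
```

This runs in far worse than O(m²n) time; the point of the talk is how much better one can do.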

History: Bentley posed this problem in 1984. Kadane's algorithm solves the 1-D problem in linear time. Kadane's idea does not carry over directly to the 2-D case, but there is an O(m²n) algorithm based on it.

Kadane's algorithm: S(i) = A(i) + max{S(i-1), 0}; the answer is the maximum S(i) over all i.
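The recurrence above translates directly into code (a minimal Python sketch, not the speaker's own implementation):

```python
def kadane(a):
    """Kadane's linear-time scan: S(i) = a[i] + max(S(i-1), 0);
    the answer is the maximum S(i) over all i."""
    best = s = a[0]
    for x in a[1:]:
        s = x + max(s, 0)   # extend the previous run, or start fresh at x
        best = max(best, s)
    return best

# Example: the best run is 4 + (-1) + 2 = 5.
print(kadane([1, -3, 4, -1, 2, -5, 3]))  # → 5
```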

Preprocessing: Given a matrix A[1..m][1..n], we can compute B[1..m][1..n] in O(mn) time such that B[i][j] is the sum of A[1..i][1..j].
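A minimal sketch of this preprocessing (the standard 2-D prefix-sum table; function names are mine, not the paper's). With the table B, the sum of any rectangle can then be read off in O(1) time:

```python
def prefix_sums(A):
    """B[i][j] = sum of A[1..i][1..j], 1-indexed; row/column 0 is zero padding."""
    m, n = len(A), len(A[0])
    B = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            B[i][j] = A[i-1][j-1] + B[i-1][j] + B[i][j-1] - B[i-1][j-1]
    return B

def rect_sum(B, i1, j1, i2, j2):
    """Sum of A[i1..i2][j1..j2] (1-indexed), in O(1) via inclusion-exclusion."""
    return B[i2][j2] - B[i1-1][j2] - B[i2][j1-1] + B[i1-1][j1-1]
```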

½-approximation

(Figure: the m by n matrix split into two halves; each half costs mn/2, so the level costs mn/2 × 2 = mn.)

(Figure: the matrix split into four quarters; mn/4 × 4 = mn per level.)

Time complexity: O(mn log m), since there are O(log m) levels of recursion and each level costs O(mn).

Funny matrix multiplication

Definition: Given two n by n matrices A and B, their funny matrix product C is defined by C_ij = max_{k=1..n} {A_ik + B_kj}.
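A naive O(n³) evaluation of this definition, as a sketch for clarity (the subcubic algorithms the slides cite are much more involved). It is ordinary matrix multiplication with (+, ×) replaced by (max, +):

```python
def funny_multiply(A, B):
    """(max, +) matrix product: C[i][j] = max over k of A[i][k] + B[k][j]."""
    n = len(A)
    return [[max(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

print(funny_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# → [[9, 10], [11, 12]]
```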

History: Funny matrix multiplication is well studied, since its computational complexity is known to be equivalent to that of the all-pairs shortest paths problem. Fredman constructed a subcubic algorithm with running time O(n³ (log log n / log n)^(1/3)).

Cont'd: Takaoka improved this to O(n³ (log log n / log n)^(1/2)) in 1992.

Reduction

Definition:
T(m, n): computing time for the m by n matrix
T_row(m, n): time for the row-centered case (subarrays crossing the center row)
T_col(m, n): time for the column-centered case (subarrays crossing the center column)
T(m, n) = T_row(m, n) + T_col(m, n) + 4T(m/2, n/2)

Cont'd:
T_center(m, n): time for the case centered on both the center row and the center column
T_row(m, n) = T_center(m, n) + 2T_row(m, n/2)
T_col(m, n) = T_center(m, n) + 2T_col(m/2, n)

(Figure: a subarray crossing both center lines decomposes into four corner parts A, B, C, D, one per quadrant.)

Cont'd: Let A, B, C, D[1..m/2][1..n/2] hold the sums of the corresponding areas. Then
X_ij = max_{k=1..m/2} {A_ik + C_jk}
Y_ij = max_{k=1..m/2} {B_ik + D_jk}
output = max_{i,j} {X_ij + Y_ij}
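The combine step above can be sketched naively (assuming all four blocks are equally sized 2-D arrays of area sums; the paper's speed-up comes from computing X and Y with fast funny matrix multiplication rather than this direct triple loop):

```python
def combine(A, B, C, D):
    """X[i][j] = max_k (A[i][k] + C[j][k]); Y[i][j] = max_k (B[i][k] + D[j][k]);
    answer for this centered case = max over i, j of X[i][j] + Y[i][j].
    A, B, C, D are assumed to have the same shape (p rows, q columns)."""
    p, q = len(A), len(A[0])
    X = [[max(A[i][k] + C[j][k] for k in range(q)) for j in range(p)]
         for i in range(p)]
    Y = [[max(B[i][k] + D[j][k] for k in range(q)) for j in range(p)]
         for i in range(p)]
    return max(X[i][j] + Y[i][j] for i in range(p) for j in range(p))

print(combine([[1, 2]], [[3, 4]], [[5, 6]], [[7, 8]]))  # → 8 + 12 = 20
```

Note that each of X and Y is a funny matrix product of one block with the transpose of another, which is what makes the reduction to funny matrix multiplication go through.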

Two little programs

13-card

Pacman

Challenges: It is difficult to search 3-D models.

The End