Algorithm Design and Analysis


Algorithm Design and Analysis
LECTURE 15: Dynamic Programming
  Segmented Least Squares
  Discussion of hw problem on spanning subgraphs with short paths
CSE 565, Adam Smith
9/27/10. A. Smith; based on slides by E. Demaine, C. Leiserson, S. Raskhodnikova, K. Wayne

Solution to Homework Exercise
We discussed the subgraph H output by the procedure in problem 31 (chap. 4).
Claim 1: H has no cycles of length 4 or less.
Claim 2: Any graph with no cycles of length 4 or less has at most n^1.5 edges.

6.3 Segmented Least Squares

Segmented Least Squares
Foundational problem in statistics and numerical analysis.
Given n points in the plane: (x1, y1), (x2, y2), ..., (xn, yn).
Find a line y = ax + b that minimizes the sum of the squared errors:
  SSE = Σ_i (y_i - a*x_i - b)^2
Solution. Calculus: the minimum error is achieved when
  a = ( n Σ_i x_i*y_i - (Σ_i x_i)(Σ_i y_i) ) / ( n Σ_i x_i^2 - (Σ_i x_i)^2 ),   b = ( Σ_i y_i - a Σ_i x_i ) / n,
which can be evaluated in O(n) time.

Historical note. The term "regression" stems from the work of Sir Francis Galton (1822-1911), a famous geneticist, who studied the heights of fathers and their sons. He found that offspring of taller-than-average parents tended to be shorter than their parents, and offspring of shorter-than-average parents tended to be taller than their parents: "regression toward mediocrity." Gauss (1809-1811) used Gaussian elimination to solve multiple linear regression. The Gauss-Markov theorem states that the least-squares estimates are the uniformly minimum variance unbiased estimates (UMVUE) among all linear unbiased estimates.
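To make the closed form concrete, here is a small Python sketch; the function name fit_line and the tuple-based input are illustrative choices, not part of the lecture slides. It assumes the x values are not all equal (otherwise the denominator below is zero).

def fit_line(points):
    # Closed-form least-squares fit of y = a*x + b in O(n) time.
    # points: list of (x, y) pairs; assumes not all x values are equal.
    n = len(points)
    sx  = sum(x for x, _ in points)
    sy  = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    sse = sum((y - a * x - b) ** 2 for x, y in points)   # sum of squared errors
    return a, b, sse

# Example: points close to y = 2x + 1
# fit_line([(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)])  ->  a ~ 1.94, b ~ 1.09, sse ~ 0.08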

Segmented Least Squares
Points lie roughly on a sequence of several line segments.
Given n points in the plane (x1, y1), (x2, y2), ..., (xn, yn) with x1 < x2 < ... < xn, find a sequence of lines that minimizes f(x).
Q. What's a reasonable choice for f(x) to balance accuracy (goodness of fit) and parsimony (number of lines)?
Note: discontinuous functions are permitted!

Segmented Least Squares
Points lie roughly on a sequence of several line segments.
Given n points in the plane (x1, y1), (x2, y2), ..., (xn, yn) with x1 < x2 < ... < xn, find a sequence of lines that minimizes:
  the sum E of the sums of the squared errors in each segment
  the number of lines L
Tradeoff function: E + c*L, for some constant c > 0.
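Intuition for c: a larger c penalizes each additional segment more heavily and pushes the solution toward fewer lines, while a smaller c tolerates more segments in exchange for a closer fit; with c = 0 the error can always be driven to zero by giving every adjacent pair of points its own segment.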

Dynamic Programming: Multiway Choice
Notation.
  OPT(j)  = minimum cost for points p1, p2, ..., pj.
  e(i, j) = minimum sum of squared errors for points pi, pi+1, ..., pj.
To compute OPT(j): the last segment uses points pi, pi+1, ..., pj for some i, and contributes cost e(i, j) + c + OPT(i-1). Minimizing over the choice of i gives the recurrence:
  OPT(j) = 0                                                     if j = 0
  OPT(j) = min over 1 ≤ i ≤ j of { e(i, j) + c + OPT(i-1) }      otherwise
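For example, with n = 3 the recurrence expands to OPT(3) = min{ e(1,3) + c + OPT(0),  e(2,3) + c + OPT(1),  e(3,3) + c + OPT(2) }, with OPT(0) = 0: the last segment either covers all three points, covers p2 and p3, or covers p3 alone.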

Segmented Least Squares: Algorithm

INPUT: n, p1, ..., pn, c

Segmented-Least-Squares() {
   M[0] = 0
   for j = 1 to n
      for i = 1 to j
         compute the least squares error e(i, j) for the segment pi, ..., pj
      M[j] = min over 1 ≤ i ≤ j of ( e(i, j) + c + M[i-1] )
   return M[n]
}

Running time. O(n^3). Bottleneck = computing e(i, j) for O(n^2) pairs, O(n) per pair using the previous formula. Can be improved to O(n^2) by pre-computing the e(i, j)'s.
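Below is a minimal Python sketch of the O(n^2) variant; the function name segmented_least_squares, the prefix-sum layout, and the traceback of chosen segments are illustrative choices, not taken from the slides. Prefix sums let each e(i, j) be evaluated in O(1), so the double loop over (i, j) dominates the running time.

def segmented_least_squares(points, c):
    # points: list of (x, y) pairs sorted by increasing x; c > 0 is the per-segment penalty.
    # Returns (minimum cost, list of segments as 1-based (i, j) index pairs).
    n = len(points)
    x = [0.0] + [p[0] for p in points]        # 1-indexed to match the slides
    y = [0.0] + [p[1] for p in points]

    # Prefix sums so each e(i, j) can be evaluated in O(1) time.
    Sx, Sy, Sxx, Sxy, Syy = ([0.0] * (n + 1) for _ in range(5))
    for k in range(1, n + 1):
        Sx[k]  = Sx[k-1]  + x[k]
        Sy[k]  = Sy[k-1]  + y[k]
        Sxx[k] = Sxx[k-1] + x[k] * x[k]
        Sxy[k] = Sxy[k-1] + x[k] * y[k]
        Syy[k] = Syy[k-1] + y[k] * y[k]

    def e(i, j):
        # Minimum sum of squared errors for fitting one line to points i..j.
        m = j - i + 1
        sx, sy = Sx[j] - Sx[i-1], Sy[j] - Sy[i-1]
        sxx, sxy, syy = Sxx[j] - Sxx[i-1], Sxy[j] - Sxy[i-1], Syy[j] - Syy[i-1]
        denom = m * sxx - sx * sx
        a = (m * sxy - sx * sy) / denom if denom != 0 else 0.0   # denom == 0: single point or equal x's
        b = (sy - a * sx) / m
        sse = syy + a*a*sxx + m*b*b - 2*a*sxy - 2*b*sy + 2*a*b*sx
        return max(sse, 0.0)                  # guard against tiny negative rounding error

    # Bottom-up DP: M[j] = min over 1 <= i <= j of e(i, j) + c + M[i-1].
    M = [0.0] * (n + 1)
    choice = [0] * (n + 1)
    for j in range(1, n + 1):
        best_cost, best_i = float("inf"), 1
        for i in range(1, j + 1):
            cost = e(i, j) + c + M[i-1]
            if cost < best_cost:
                best_cost, best_i = cost, i
        M[j], choice[j] = best_cost, best_i

    # Trace back which segments the optimum used.
    segments, j = [], n
    while j > 0:
        i = choice[j]
        segments.append((i, j))
        j = i - 1
    segments.reverse()
    return M[n], segments

For instance, on five points that lie exactly on two different lines, such as [(0, 0), (1, 1), (2, 2), (3, 0), (4, -2)] with c = 1.0, the sketch returns a cost of about 2.0 (two zero-error segments, each paying the penalty c) rather than the much larger cost of fitting a single line to all five points.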