ROMaN: Revenue-driven Overlay Multicast Networking
Varun Khare


Agenda
• Problem Statement
• Dynamic Programming
• Application to ROMaN

Problem Statement
• Dedicated server farm placed strategically over the Internet
• ISP cost: ISPs charge for bandwidth consumption at server sites

Problem Statement
• Distribute users in a meaningful fashion:
  » Optimize user delay performance, OR
  » Minimize the ISP cost of servicing the end-users

Problem Statement
• Given
  » the ISP charging function c_i and
  » the available bandwidth B_i of each of the K servers deployed,
• find the user distribution u_i at each server SRV_i for N users, where each user consumes b bandwidth, such that
  » Σ u_i = N and u_i·b ≤ B_i, and
  » Σ c_i(u_i·b), the ISP cost of deploying the group, is minimized.

Dynamic Programming
• Dynamic programming (DP) is an algorithm design technique for optimization problems: often minimizing or maximizing.
• DP solves problems by combining solutions to subproblems.
• Idea: subproblems are not independent.
  » Subproblems may share subsubproblems;
  » however, the solution to one subproblem does not affect the solutions to other subproblems of the same problem.
• DP reduces computation by
  » solving subproblems in a bottom-up fashion,
  » storing the solution to a subproblem the first time it is solved, and
  » looking up that solution whenever the subproblem is encountered again.
• Key: determine the structure of optimal solutions.

Steps in Dynamic Programming
1. Characterize the structure of an optimal solution.
2. Define the value of an optimal solution recursively.
3. Compute optimal solution values, either top-down with caching or bottom-up in a table.
4. Construct an optimal solution from the computed values.
We'll apply the above steps to our problem.
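The four steps can be sketched in code. Below is a minimal top-down version (step 3, first option) in Python; the names and the plain-callable interface are illustrative, not from the talk, and the capacity constraint u_i·b ≤ B_i is omitted for brevity, as in the talk's own example. The subproblem cost(n, k) is the minimum ISP cost of placing n users on the first k servers, and functools.lru_cache stores each subproblem's answer the first time it is solved:

```python
from functools import lru_cache

def min_cost_topdown(n_users, cost_fns):
    """Minimum ISP cost of distributing n_users over the servers whose
    charging functions are given in cost_fns (cost_fns[j] maps a user
    count on server j to that server's ISP cost)."""
    @lru_cache(maxsize=None)
    def cost(n, k):
        # Base case: a single server must host all n users.
        if k == 1:
            return cost_fns[0](n)
        # Try every split: i users on server k-1 (0-indexed),
        # the remaining n-i users on the first k-1 servers.
        return min(cost(n - i, k - 1) + cost_fns[k - 1](i)
                   for i in range(n + 1))

    return cost(n_users, len(cost_fns))
```

For instance, with the charging functions x, x/2, and x/4 used later in the talk, `min_cost_topdown(3, [lambda x: x, lambda x: x/2, lambda x: x/4])` returns 0.75.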

Optimal Substructure
• An optimal solution to distributing N users amongst K servers contains optimal solutions to distributing i users amongst j servers (i ≤ N and j ≤ K).

Let cost(n, k) = cost of distributing n users amongst k servers.
1. If k = 1, then cost(n, 1) = c_1(n).
2. If k > 1, then distribute n−i users on the first k−1 servers and i users on the k-th server: cost(n−i, k−1) + c_k(i).

Recursive Solution
• cost[i, j] = optimal ISP cost of distributing i users amongst j servers
• cost[i, 1] = c_1(i), and for j > 1:
  cost[i, j] = min over 0 ≤ m ≤ i of { cost[i−m, j−1] + c_j(m) }
• We want cost[N, K]. This gives a recursive algorithm that solves the problem.
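The recurrence can also be evaluated bottom-up, filling the table one server at a time. A hedged sketch in Python (function and variable names are mine; the per-server capacity constraint is again omitted, as in the talk's example):

```python
def min_cost_table(n_users, cost_fns):
    """cost[k][n] = minimum ISP cost of placing n users on the first k
    servers; the answer is cost[K][N] (the slide's cost[N, K])."""
    K = len(cost_fns)
    INF = float("inf")
    cost = [[INF] * (n_users + 1) for _ in range(K + 1)]
    for n in range(n_users + 1):          # base case: a single server
        cost[1][n] = cost_fns[0](n)
    for k in range(2, K + 1):             # add one server at a time
        for n in range(n_users + 1):
            # m users go to server k-1 (0-indexed), the rest to the first k-1
            cost[k][n] = min(cost[k - 1][n - m] + cost_fns[k - 1](m)
                             for m in range(n + 1))
    return cost[K][n_users]
```

The three nested loops make the O(K·N²) runtime quoted at the end of the talk easy to see, and the table uses O(K·N) space.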

Example
• Start by evaluating cost(n, 1), where n = 0, 1, …, N. Since k = 1 there is no choice of servers.
• Thereafter evaluate cost(n, 2), where n = 0, 1, …, N.

Servers and cost functions:
  K0: C_0(x) = x
  K1: C_1(x) = x/2
  K2: C_2(x) = x/4

DP table after the first pass (only column K0 filled):
  n   K0   K1   K2
  0    0
  1    1
  2    2
  3    3

Example

Servers and cost functions:
  K0: C_0(x) = x
  K1: C_1(x) = x/2
  K2: C_2(x) = x/4

DP table after filling column K1:
  n   K0   K1   K2
  0    0    0
  1    1   1/2
  2    2   2/2
  3    3   3/2

cost(3, 2) = min {
  cost(3, 1) + C_1(0) = 3
  cost(2, 1) + C_1(1) = 2 + 1/2
  cost(1, 1) + C_1(2) = 1 + 2/2
  cost(0, 1) + C_1(3) = 0 + 3/2
} = 3/2

Example
• Eventually, evaluating cost(N, K) gives the optimized cost of deploying the multicast group.
• Runtime is O(K·N²) and space complexity is O(K·N).

Servers and cost functions:
  K0: C_0(x) = x
  K1: C_1(x) = x/2
  K2: C_2(x) = x/4

Completed DP table:
  n   K0   K1   K2
  0    0    0    0
  1    1   1/2  1/4
  2    2   2/2  2/4
  3    3   3/2  3/4
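Step 4 of the recipe, constructing an actual distribution u_i rather than just its cost, can be sketched by recording the choice made at each table cell and backtracking. This is a self-contained illustration (names are mine, capacities again omitted):

```python
def distribute_users(n_users, cost_fns):
    """Return (minimum ISP cost, per-server user counts u_i) by filling
    the DP table and backtracking over the recorded choices."""
    K = len(cost_fns)
    INF = float("inf")
    cost = [[INF] * (n_users + 1) for _ in range(K + 1)]
    best = [[0] * (n_users + 1) for _ in range(K + 1)]
    cost[0][0] = 0.0                      # zero servers host zero users
    for k in range(1, K + 1):
        for n in range(n_users + 1):
            for m in range(n + 1):        # m users on server k-1 (0-indexed)
                c = cost[k - 1][n - m] + cost_fns[k - 1](m)
                if c < cost[k][n]:
                    cost[k][n], best[k][n] = c, m
    # Backtrack: best[k][n] is how many users server k-1 received.
    users, n = [0] * K, n_users
    for k in range(K, 0, -1):
        users[k - 1] = best[k][n]
        n -= best[k][n]
    return cost[K][n_users], users
```

With the example charging functions x, x/2, and x/4, this returns cost 3/4 with all three users on K2, matching the bottom-right entry of the completed table above.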

Thank You! Questions?