A Self-Adaptive Scheduling Algorithm of On-Demand Broadcasts


A Self-Adaptive Scheduling Algorithm of On-Demand Broadcasts MSWiM 2001, Rome W. Sun, W. Shi, B. Shi, W. Ji and Y. Yu, Department of Computer Science, Fudan University, Shanghai, China Presented by: Yijun Yu (now at Ghent University, Belgium) Good morning, ladies and gentlemen. The topic presented here is an algorithm that enhances on-demand broadcast mobile systems. I am not an expert in this area, but I would like to present the work of my colleagues and hope it will still draw your attention to their great work.

Presentation On-demand broadcasts Previous studies New metrics of performance The LDCF algorithm Experiments Conclusion In this presentation, we discuss the problem of on-demand broadcasts and a solution to the drawbacks of previous methods. In light of new performance metrics, the Largest Delay Cost First algorithm is presented. Experiments show the improvement of the new algorithm over the existing algorithms.

A typical on-demand broadcast system A typical on-demand broadcast mobile system responds to mobile users' pull requests by broadcasting the requested data. The users' pull requests are sent to the server over an uplink channel.

Characteristics of an On-Demand Broadcast System Versus a push-based broadcast system: An uplink channel is necessary for sending requests from users to the server The server does not know the access profiles of mobile users Timed-out requests must be considered The difference between on-demand broadcasts and purely push-based broadcasts lies in three points. First, an uplink channel is needed to send data access requests from the users to the server. Second, the server does not know the access profiles of the mobile users. Lastly, requests may fail because of timeouts.

Previous work First-Come-First-Serve (FCFS) Most-Request-First (MRF) Longest-Wait-First (LWF) How to... Reduce the average access time of mobile users? Handle a failed request that has waited for "quite" a long time? Previous work includes First-Come-First-Serve, Most-Request-First, and Longest-Wait-First. These works mainly discuss how to reduce the average access time of mobile users; however, they do not consider how to handle failed requests that have waited for quite a long time.

New metrics of performance The average cost is composed of Access Time cost (CAT) Tuning Time cost (CTT) Failure request handling cost (CF) The average response time is spent on the access time cost, the tuning time cost, and the cost of handling failed requests. This gives a new metric of performance.

Largest Delay Cost First algorithm
Input: a request sequence
Output: a broadcast schedule
while true do
    receive new requests;
    for each delayed data item D, compute the delay cost;
    broadcast the items with the largest delay cost;
end while
The Largest Delay Cost First (LDCF) algorithm takes the request sequence and generates a broadcast schedule that improves performance. The server endlessly checks for new requests, then computes the delay cost of the delayed data items to decide which items to broadcast. The critical part of the algorithm is computing or estimating the delay cost.
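As a concrete illustration, the loop above can be sketched in Python. The data structures and names below are my own assumptions, not from the paper; delay_cost stands in for the cost function defined on the following slides.

```python
# Sketch of one LDCF scheduling step. All names are illustrative
# assumptions; delay_cost() stands in for the paper's cost function.
def ldcf_step(pending, new_requests, delay_cost, items_per_bp):
    """Ingest new requests, then pick the items with the largest delay cost.

    pending: dict mapping item id -> list of outstanding requests
    new_requests: iterable of (item, request) pairs received this period
    delay_cost: function (item, requests) -> cost of delaying that item
    items_per_bp: how many items fit in one broadcast period
    """
    for item, req in new_requests:
        pending.setdefault(item, []).append(req)
    # Compute the delay cost of every item with outstanding requests.
    costs = {item: delay_cost(item, reqs) for item, reqs in pending.items()}
    # Broadcast the items with the largest delay cost.
    chosen = sorted(costs, key=costs.get, reverse=True)[:items_per_bp]
    for item in chosen:
        del pending[item]  # requests for a broadcast item are now served
    return chosen
```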

The LDCF Parameters Constants Average costs: CAT, CTT, CF Broadcast period: BP = index + data Response time limit RTL: T1 ≤ T0 + RTL Variables for an access request Q(D, Treq) Popularity factor of data D at time T: PF(D,T) Safety factor: SF(Q,T) = (Treq + RTL - T) / BP Fail rate: FR(SF) = RR(SF) / R(SF) * FR(SF-1) In order to compute the delay cost, several parameters are used. First, the average costs for access time, tuning time, and failure handling are known constants. The broadcast period (BP) covers the time for the index and data of a broadcast cycle. The server sets a response time limit on requests: the data must be broadcast at a time T1 no later than the request time T0 plus the limit RTL. The variable parameters for an access request Q for data D issued at time Treq are the popularity factor of the data at time T and the safety factor, which evaluates how urgent a request is. The fail rate is derived inductively from the fail rate at the previous safety factor level, multiplied by the remaining request ratio RR(SF)/R(SF).
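A minimal sketch of the safety-factor and fail-rate formulas above (this is my own rendering of the slide's definitions; the per-level request counts RR and R are assumed to be observed by the server):

```python
# Safety factor SF(Q,T) = (Treq + RTL - T) / BP: the number of broadcast
# periods left before request Q exceeds its response time limit.
def safety_factor(t_req, rtl, now, bp):
    return (t_req + rtl - now) / bp

# Fail rate FR(SF) = RR(SF)/R(SF) * FR(SF-1), unrolled down to level 0.
# remained[k] (RR) and received[k] (R) are assumed observed request
# counts at safety-factor level k.
def fail_rate(sf, remained, received):
    fr = 1.0
    for k in range(int(sf) + 1):
        if received[k] > 0:
            fr *= remained[k] / received[k]
    return fr
```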

The LDCF Cost Function Delay cost for request Q: DC(Q) = BP*CAT + CTT + FR(SF(Q,T))*CF Cost function for data D: Cost(D) = SUM over Q in Q(D,T) of DC(Q) = PF(D,T)*(BP*CAT + CTT) + SUM over Q in Q(D,T) of FR(SF(Q,T))*CF Given the parameters, the delay cost for a request accumulates the access time, tuning time, and failure-handling costs. The cost function for a data item D is the sum of the delay costs of the requests that involve that data item.
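The two formulas translate directly into Python. This is a sketch under the assumption that the fail rate of each pending request has already been computed; the constants follow the slide.

```python
# DC(Q) = BP*CAT + CTT + FR(SF(Q,T))*CF for a single request Q.
def request_delay_cost(bp, cat, ctt, cf, fr):
    return bp * cat + ctt + fr * cf

# Cost(D) = PF(D,T)*(BP*CAT + CTT) + sum of FR(SF(Q,T))*CF over the
# pending requests for D; len(fail_rates) plays the role of PF(D,T),
# since every pending request contributes one BP*CAT + CTT term.
def item_cost(bp, cat, ctt, cf, fail_rates):
    pf = len(fail_rates)
    return pf * (bp * cat + ctt) + sum(fr * cf for fr in fail_rates)
```

By construction, item_cost equals the sum of request_delay_cost over the item's pending requests, which is why the popularity factor can be pulled out of the first term.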

Experiment settings The following parameters are assumed: M: number of data items for broadcast = 1000 Data: number of data items in one BP Index: length of index = 6 Received request number per time slot Zipf(k): skewness of the access distribution RTL: response time limit CAT = 1, CTT = 20, CF = 2000 In our experiments, we assumed the number of data items is 1000 and the number of data items in a broadcast period is 100. The index length is 6. Zipf(k) defines the skewness of the access distribution. The average costs of access time, tuning time, and failure handling are 1, 20, and 2000, respectively.
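For reproducing such a workload, a Zipf(k)-skewed request generator might look like the following sketch. Only the parameter names M and k come from the slide; the sampling code is my own assumption about how the workload could be generated.

```python
import random

# P(item i) proportional to (1/i)**k for i = 1..m; k = 0 gives a uniform
# distribution, larger k concentrates requests on the most popular items.
def zipf_weights(m, k):
    raw = [(1.0 / i) ** k for i in range(1, m + 1)]
    total = sum(raw)
    return [w / total for w in raw]

# Draw n item requests for one time slot from the Zipf(k) distribution.
def sample_requests(m, k, n, seed=0):
    rng = random.Random(seed)
    return rng.choices(range(1, m + 1), weights=zipf_weights(m, k), k=n)
```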

1. Average Cost when the fail rate of requests is low We performed several experiments to show the advantages of the LDCF method over the LWF, FCFS, and MRF schemes. The first experiment shows the case when the fail rate is low (data = 100, request number per slot is 10.10, Zipf(0) distribution, and RTL from 1500 to 3500). FCFS is the worst, and LDCF is slightly better than the MRF and LWF methods.

2. Average Cost when the fail rate is high Here the fail rate is high because data is 1020, the request number per slot is 247.15, and RTL is only 1500. One can see that LDCF has the lowest average cost, while LWF is slightly worse.

3. Average Cost vs. BP To compare the two best methods, LDCF and LWF, more closely, we change BP so that data varies from 60 to 200, with 98.86 requests per slot and an RTL of 1500. One can see that LDCF is uniformly better than LWF.

4. Average Cost vs. RTL The same holds when comparing different RTL values, with a data item count of 100 and 98.86 requests per slot.

5. Average Cost vs. skewness of the data access distribution Lastly, the skewness of the data access distribution is varied to compare the two methods. The left chart shows the results when the Zipf(k) values are discrete points, and the right chart averages the results for Zipf(k) drawn randomly from the specified ranges. The LDCF method is better and more stable under change.

Conclusion When discussing the performance of a scheduling algorithm, we should take into account not only AT but also TT and request failures. LDCF was compared with LWF, FCFS, and MRF in several experiments, indicating that the average cost of LDCF scheduling is the lowest. The final conclusion is that one should consider not only the access time but also the tuning time and failure handling time in the model. The experiments show that the LDCF method is superior to the compared LWF, FCFS, and MRF methods. Thank you very much.

Thanks MSWiM 2001 Program Committee