Implement Prefetching Technique Using Supply Chain Theory Lixiong Chen.


Why Prefetching
- Disk I/O is extremely slow compared with data transfers within the memory system
- Frequent exposure of disk latency to users degrades the user experience
- This can be avoided if data is cached in memory before the user actually requests it

Important Concerns
- What to fetch --- choose data according to anticipated access. Not an issue when fetching sequentially, since correlated data is stored sequentially.
- When to fetch --- remains an issue for systems designed using queuing theory.

Correspondingly, these two concerns give rise to two terms:
- Prefetch degree: how much data is fetched per prefetch request
- Prefetch distance: how early the next prefetch request is issued
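
A minimal sketch of how these two parameters might drive a sequential prefetcher. The class name, the `read_block` callback, and the in-memory cache dict are illustrative assumptions, not the paper's implementation:

```python
class SequentialPrefetcher:
    def __init__(self, degree, distance):
        self.degree = degree      # blocks fetched per prefetch request
        self.distance = distance  # remaining lookahead that triggers the next prefetch
        self.cache = {}           # block number -> data
        self.next_fetch = 0       # first block not yet prefetched

    def read(self, block, read_block):
        # Issue the next prefetch once the consumer comes within
        # `distance` blocks of the prefetch frontier.
        if self.next_fetch - block <= self.distance:
            for b in range(self.next_fetch, self.next_fetch + self.degree):
                self.cache[b] = read_block(b)  # asynchronous in a real system
            self.next_fetch += self.degree
        if block not in self.cache:            # demand miss: fetch synchronously
            self.cache[block] = read_block(block)
        return self.cache[block]
```

A larger degree amortizes more disk requests per prefetch; a larger distance hides more of the disk's lead time.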

Existing prefetching techniques may become troublesome when:
- Multiple data requests are issued concurrently, but at different frequencies
- Each request stream is treated with equal importance
- Result: frequent request streams may exhaust their cached data while less frequent ones still hold unconsumed resources
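
A toy illustration of this failure mode (all numbers are assumptions for the sake of the example): two streams receive equal prefetch buffers even though stream A consumes four times faster, so A starts missing while B still holds unconsumed blocks:

```python
def simulate(buffer_each, rate_a, rate_b, ticks):
    """Equal buffer allocation for two streams with unequal demand rates."""
    buf_a, buf_b = buffer_each, buffer_each
    misses_a = 0
    for _ in range(ticks):
        if buf_a >= rate_a:
            buf_a -= rate_a        # A consumes from its prefetch buffer
        else:
            misses_a += 1          # A has run dry and must wait on the disk
        buf_b -= min(buf_b, rate_b)
    return misses_a, buf_b

misses, leftover = simulate(buffer_each=8, rate_a=4, rate_b=1, ticks=6)
# A misses on 4 of the 6 ticks, while B still holds 2 unconsumed blocks.
```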

Supply Chain Model
Supply chain model features:
- Multiple request-retrieve operations along a single process
- Customer demands should be serviced in a timely manner
- Resource allocation among different types of requests is managed

Problem Mapping
- Requests = Demand (normally distributed over data volume)
- Access time = Lead time (Trequest – Tretrieve, assumed to be i.i.d.)
- Batched prefetching = Batched replenishment
- Multilevel prefetching = Multi-echelon inventory control

The key to the problem mapping is determining the "reorder point" and the "safety inventory level". In SCM, there are two types of basic algorithms:
- Equal Time Supplies (ETS)
- Equal Safety Factors (ESF)
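
Under this mapping, the reorder point can follow the standard SCM textbook form, R = demand rate × lead time + safety stock. The function below is a sketch of that formula, not code from the paper:

```python
def reorder_point(demand_rate, lead_time, safety_stock):
    """Issue the next prefetch when on-hand cached data drops to R blocks.

    R covers expected consumption during the disk's lead time, plus a
    safety margin against demand variability.
    """
    return demand_rate * lead_time + safety_stock

# e.g. a stream consuming 2 blocks/ms with a 5 ms disk lead time and
# 3 blocks of safety stock reorders at 13 blocks.
```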

ETS
- Applied when the total safety inventory level is fixed
- Each stream's safety inventory level is set proportional to its demand rate
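
A minimal sketch of the ETS rule as described above (function and parameter names are assumptions): a fixed total safety inventory is split among streams in proportion to their demand rates:

```python
def ets_allocation(total_safety, demand_rates):
    """Split total safety inventory proportionally to each stream's demand rate."""
    total_demand = sum(demand_rates)
    return [total_safety * d / total_demand for d in demand_rates]

# A stream demanding 4 blocks/tick gets 4x the safety stock of a
# stream demanding 1 block/tick:
# ets_allocation(12, [4, 1, 1]) -> [8.0, 2.0, 2.0]
```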

ESF
- Considers the demand uncertainty of each demand stream (demand is no longer stable)
- The stream with the highest demand uncertainty receives the highest safety inventory level
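
A sketch of the ESF rule in its usual SCM textbook form (the formula and names are assumptions, not taken from the paper): every stream shares the same safety factor z, so safety stock scales with each stream's demand uncertainty over the lead time rather than its mean rate:

```python
import math

def esf_allocation(z, demand_stds, lead_time):
    """Safety stock per stream: z * sigma * sqrt(lead_time), equal z for all."""
    return [z * s * math.sqrt(lead_time) for s in demand_stds]

# With a common z, the stream with the most volatile demand receives
# the largest safety stock.
```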

Testbed
- Linux (tunable trigger distance and prefetching degree)
- Synthetic benchmarks
- Real Linux applications
- HTTP and VOD server simulation

Performance (benchmark)

Synthetic Benchmark

Real Linux Application

Server Benchmark (HTTP)

Server Benchmark (VOD)

Bullwhip Effect
- Demand variations along the supply chain are amplified as orders are placed further away from the customer
- In a multi-level memory system, data demands likewise become less stable as requests are issued further away from end users
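
A toy bullwhip simulation (the demand numbers and the naive trend-chasing forecast are assumptions for illustration): each level orders this period's downstream demand plus the change from last period, and the demand standard deviation grows at every level away from the end customer:

```python
import random
import statistics

def bullwhip(levels, periods, seed=0):
    """Return demand std dev at each supply-chain level, customer first."""
    rng = random.Random(seed)
    demand = [rng.gauss(100, 5) for _ in range(periods)]  # end-customer demand
    series = [demand]
    for _ in range(levels):
        downstream = series[-1]
        # Naive forecast: order current demand plus its most recent change.
        # Chasing the trend is what amplifies variance upstream.
        orders = [downstream[0]] + [
            2 * downstream[t] - downstream[t - 1] for t in range(1, periods)
        ]
        series.append(orders)
    return [statistics.pstdev(s) for s in series]

stds = bullwhip(levels=3, periods=200)
# Each upstream level shows a larger std dev than the one below it.
```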

Multi-level Distortion