
Distributed Parity Cache Table

Motivation

Parity updating is a high-cost operation, especially for small writes: each update must read the old data, read the old parity, write the new data, and write the new parity.

Basic ideas:
- Delay the generation of parity.
- Cached data can be reused without rereading it.
- Parity and newly written data can be cached and reused by writes to the same stripe.
- Beyond parity: the table doubles as a server-side cooperative cache.
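The four-step cost listed above is the classic RAID-5 read-modify-write. A minimal sketch of it, assuming hypothetical `read_block`/`write_block` storage callbacks (the slides do not name an API):

```python
def small_write(read_block, write_block, data_addr, parity_addr, new_data):
    """RAID-5 read-modify-write: 2 reads + 2 writes per small update.

    read_block/write_block are hypothetical storage callbacks that take
    an address and return/accept a bytes-like block (an assumption).
    """
    old_data = read_block(data_addr)      # 1. read old data
    old_parity = read_block(parity_addr)  # 2. read old parity
    # New parity: flip exactly the bits that changed in the data block.
    new_parity = bytes(p ^ od ^ nd
                       for p, od, nd in zip(old_parity, old_data, new_data))
    write_block(data_addr, new_data)      # 3. write new data
    write_block(parity_addr, new_parity)  # 4. write new parity
    return new_parity
```

Caching the old data or old parity removes steps 1 or 2, which is exactly what the delayed-parity scheme exploits.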

Distributed Parity Cache Table

- A whole stripe is more meaningful than its individual blocks. The local file system cache knows nothing about stripes; the distributed parity cache table does.
- Small-write behavior: small writes can be aggregated, and earlier reads of the stripe can be reused.
- Cooperative cache: PVFS itself provides no caching, so the table fills that role.

Architecture

Striping Size and Cache Blocks

Cache Block

Each block contains 16 KB of data plus metadata:
- DTag: data tag
- PTag: parity tag
- LRef: number of hits on this block
- GRef: number of hits on this stripe

Cache Replacement Algorithm

IF PTag is null THEN
    IF operation is READ THEN
        use LRef field & LRU
    ELSE
        IF dirty bit is set THEN
            write the parity block and the replaced block
        END IF
        write operation proceeds
        update the PTag field
    END IF
ELSE IF PTag == itself.DTag AND DTag != itself.DTag THEN
    replace the block
ELSE IF PTag != DTag THEN
    use LRef field & LRU
END IF

Performance Evaluation (1/4) – Native Calls

Performance Evaluation (2/4) – Native Calls

Performance Evaluation (3/4) – POSIX APIs

Performance Evaluation (4/4) – POSIX APIs