FAWN: A Fast Array of Wimpy Nodes

Presentation transcript:

FAWN: A Fast Array of Wimpy Nodes
D. G. Andersen¹, J. Franklin¹, M. Kaminsky², A. Phanishayee¹, L. Tan¹, V. Vasudevan¹
¹CMU  ²Intel Labs

Warning
This is not a complete presentation: it only explains some items that were left out of the authors' presentation of FAWN, such as:
- Consistent hashing
- The data log architecture

Consistent hashing (I)
A technique used in distributed hashing schemes to tolerate the loss of one or more machines.
Traditional approach: each node corresponds to a single bucket. If a node dies:
- We lose a bucket
- We must update the hash function for all nodes
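
A small illustration (not from the paper) of why the traditional approach hurts: with hash(key) mod N placement, removing a single node remaps almost every key to a different owner.

```python
import hashlib

# Illustration only: shrinking a hash(key) mod N cluster from 10 to 9 nodes
# moves the vast majority of keys to a different node.

def node_for(key: str, n_nodes: int) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % n_nodes

keys = [f"key{i}" for i in range(10_000)]
moved = sum(node_for(k, 10) != node_for(k, 9) for k in keys)
print(f"{moved / len(keys):.0%} of keys change owner")   # roughly 90%
```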

Consistent hashing (II)
- Have several buckets per node
- Organize all buckets into a ring; each bucket has a successor
- If a bucket is unavailable, move on to the next bucket on the ring

Consistent hashing (III)
- Potential problem: the neighboring bucket becomes overloaded
- Not if we associate with each physical node a set of random, disjoint buckets: virtual nodes
- When a physical node fails, its workload gets shared by several other physical nodes
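
A minimal sketch of such a ring with virtual nodes, assuming SHA-1 ring positions and an arbitrary eight virtual buckets per physical node; this is an illustration, not the FAWN implementation.

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes (illustrative sketch)."""

    def __init__(self, nodes, vnodes_per_node=8):
        # Each physical node owns several randomly placed virtual buckets.
        self.ring = sorted(
            (self._hash(f"{node}#v{i}"), node)
            for node in nodes
            for i in range(vnodes_per_node)
        )

    @staticmethod
    def _hash(s: str) -> int:
        return int(hashlib.sha1(s.encode()).hexdigest(), 16)

    def lookup(self, key: str):
        # Walk clockwise: the first bucket after the key's position owns it.
        positions = [pos for pos, _ in self.ring]
        idx = bisect_right(positions, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

    def remove_node(self, node) -> None:
        # Losing a node removes all of its virtual buckets; its keys fall
        # through to successors that belong to several different nodes.
        self.ring = [(pos, n) for pos, n in self.ring if n != node]
```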

Data log architecture
FAWN nodes face two constraints:
- Small memory
- Poor flash performance for random writes
The FAWN data log architecture therefore:
- Minimizes its RAM footprint
- Makes all writes append-only

Mapping a key to a value
Done through an in-memory hash table. FAWN uses 160-bit keys:
- The i least significant bits are the index bits
- The next 15 low-order bits are the key fragment
The index bits select a bucket; the key fragment is stored in the bucket entry:
15 bits + valid bit + 32-bit data log address = 48 bits = 6 bytes
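
A rough sketch of how such a 6-byte entry might be packed, assuming a fragment | valid bit | offset layout; the exact bit order used by FAWN-DS is an assumption here, and the helper names are hypothetical.

```python
def make_entry(key: int, log_offset: int, index_bits: int) -> tuple[int, int]:
    """Return (bucket index, packed 48-bit entry) for a 160-bit key."""
    index = key & ((1 << index_bits) - 1)              # i least significant bits
    fragment = (key >> index_bits) & 0x7FFF            # next 15 low-order bits
    entry = (fragment << 33) | (1 << 32) | log_offset  # fragment | valid | offset
    return index, entry

def entry_matches(entry: int, key: int, index_bits: int) -> bool:
    """Cheap in-memory filter: compare only the stored 15-bit key fragment.
    A match still requires verifying the full key stored in the data log."""
    if not (entry >> 32) & 1:                          # valid bit cleared
        return False
    return (entry >> 33) == ((key >> index_bits) & 0x7FFF)
```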

The data log
One data log per virtual node. Data log entries consist of:
- The full 160-bit key
- A length field
- The actual data
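
A sketch of appending one such entry, assuming the three fields are simply concatenated (20-byte key, 4-byte little-endian length, then the data); the real on-flash layout may differ.

```python
import struct

def append_entry(log_file, key_bytes: bytes, value: bytes) -> int:
    """Append one entry to the data log; return its offset for the hash table."""
    assert len(key_bytes) == 20                    # full 160-bit key
    offset = log_file.seek(0, 2)                   # append-only: always write at the end
    log_file.write(key_bytes)                      # 160-bit key
    log_file.write(struct.pack("<I", len(value)))  # length field
    log_file.write(value)                          # the actual data
    return offset
```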

Basic data store functions
- Store: adds an entry to the log and updates the corresponding hash table entry
- Lookup: locates a data log entry and checks the full key
- Invalidate: marks the hash table entry invalid and adds a delete entry to the log (for durability)
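
A sketch of Lookup built on the hypothetical helpers above; a fragment match is only a hint, so the full key read from the log must be verified before returning the value.

```python
import struct

def lookup(hash_table: dict, log_file, key_bytes: bytes, index_bits: int):
    """Return the value stored for key_bytes, or None if it is not present."""
    key = int.from_bytes(key_bytes, "big")
    index, _ = make_entry(key, 0, index_bits)       # reuse the index computation
    for entry in hash_table.get(index, []):         # a bucket may hold a few entries
        if not entry_matches(entry, key, index_bits):
            continue                                # key fragment does not match
        log_file.seek(entry & 0xFFFFFFFF)           # low 32 bits: data log offset
        stored_key = log_file.read(20)
        (length,) = struct.unpack("<I", log_file.read(4))
        if stored_key == key_bytes:                 # fragments can collide; check full key
            return log_file.read(length)
    return None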

Store (diagram)

Maintenance functions
- Split: splits a data store between an existing virtual node and a new virtual node
- Merge: merges two data stores into one
- Compact: compacts the data log and updates all hash table entries
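
An illustrative sketch of Compact in terms of the hypothetical helpers above: copy entries that are still referenced into a fresh log and rewrite their offsets. A real implementation would also compare stored offsets against the current read position to drop stale versions of overwritten keys.

```python
import struct

def compact(old_log, new_log, hash_table: dict, index_bits: int) -> None:
    """Copy live entries from old_log into new_log, rebuilding hash table offsets."""
    old_log.seek(0)
    while True:
        key_bytes = old_log.read(20)                 # 160-bit key
        if len(key_bytes) < 20:
            break                                    # reached the end of the old log
        (length,) = struct.unpack("<I", old_log.read(4))
        value = old_log.read(length)
        key = int.from_bytes(key_bytes, "big")
        index, _ = make_entry(key, 0, index_bits)
        if any(entry_matches(e, key, index_bits) for e in hash_table.get(index, [])):
            new_offset = append_entry(new_log, key_bytes, value)
            _, entry = make_entry(key, new_offset, index_bits)
            hash_table[index] = [entry]              # simplified: rebuild the bucket
```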

Split (diagram)
