File System Performance

File System Performance
CSE 451
Andrew Whitaker

Ways to Improve Performance
- Access the disk less: caching!
- Be smarter about accessing the disk:
  - Turn small operations into large operations
  - Turn scattered operations into sequential operations

Technique #1: Caching
- Memory is MUCH faster than disk
- So, cache whatever we can in memory (sketch below):
  - File buffers
  - i-nodes
  - Directory entries (name => i-node)
- Caching reads is a no-brainer
- Caching writes is more interesting…
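As a rough illustration of the caching idea, here is a minimal sketch of a fixed-size block cache in C. The names, the cache size, the direct-mapped placement, and the stubbed-out disk_read are illustrative assumptions, not the actual UNIX buffer cache; write-back of dirty victims is omitted.

#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE  4096
#define CACHE_SLOTS 1024            /* illustrative size, not a real kernel constant */

/* One cached disk block: which block it holds, whether it is valid,
 * whether it must be written back, and the data itself. */
struct cache_entry {
    uint64_t block_no;
    int      valid;
    int      dirty;                 /* set by write-back (asynchronous) writes */
    char     data[BLOCK_SIZE];
};

static struct cache_entry cache[CACHE_SLOTS];

/* Stand-in for a real disk read; fills the buffer with zeros. */
static void disk_read(uint64_t block_no, char *buf)
{
    (void)block_no;
    memset(buf, 0, BLOCK_SIZE);
}

/* Read a block, going to "disk" only on a miss.
 * (Write-back of a dirty victim is omitted in this sketch.) */
void cached_read(uint64_t block_no, char *out)
{
    struct cache_entry *e = &cache[block_no % CACHE_SLOTS];
    if (!e->valid || e->block_no != block_no) {   /* miss: fetch and fill */
        disk_read(block_no, e->data);
        e->block_no = block_no;
        e->valid = 1;
        e->dirty = 0;
    }
    memcpy(out, e->data, BLOCK_SIZE);             /* hit path is pure memory copy */
}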

Caching Writes
- Two options:
  - Synchronous: data is immediately written out to disk (AKA write-through)
  - Asynchronous: disk writes are delayed (AKA write-back)
- Programmer's perspective: what does it mean when the "write" system call returns?
  - With asynchronous writes, the data has not necessarily hit the disk

Why Use Asynchronous Writes?
- Allows us to batch up multiple writes to the same block
- Allows for better overlap of CPU and I/O: the CPU does not stall waiting for the disk
- Allows the disk scheduler to make better decisions
  - Application issues: write(a); write(b); write(c);
  - Disk performs: write(b); write(a); write(c);
- Most data updates in UNIX systems use asynchronous writes by default
- The programmer can override this with fsync(fd) (example below)
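The snippet below shows the programmer-visible side of this on a POSIX system: write() returns once the data is in the kernel's buffers, and fsync() forces it to disk. This is a minimal usage sketch; the file name is made up and error handling is trimmed.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* "log.txt" is just an example path. */
    int fd = open("log.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    const char *msg = "hello, disk\n";

    /* write() returns once the data is in the kernel's buffer cache;
     * with asynchronous (write-back) caching it may not be on disk yet. */
    if (write(fd, msg, strlen(msg)) < 0)
        return 1;

    /* fsync() blocks until the file's dirty buffers (and metadata)
     * have actually been written to the disk. */
    if (fsync(fd) < 0)
        return 1;

    close(fd);
    return 0;
}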

Problems with Asynchronous Writes
- File system state can be lost during a crash
  - Missing blocks, missing files, missing directories, storage leaks, etc.
- For this reason, meta-data updates tend to be done synchronously
  - e.g., file/directory creation or deletion

Consistency Problems
- Problems still arise, even with synchronous meta-data updates
- For example, file creation must modify an i-node and a directory entry:
  - Initialize the i-node
  - Record the <fileName, i-node> mapping in the directory
- Disks do not support atomic multi-block operations
- Which of these operations should come first to ensure safety?
  - A: Mark the i-node as in use first; a crash then leaks an unused i-node at worst, rather than leaving a directory entry that points at an uninitialized i-node (simulation below)
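To make the ordering concrete, here is a small user-level simulation in C: two ordinary files stand in for the on-disk i-node table and directory, and each update is forced to stable storage before the next one starts. The file names and record formats are invented for illustration; a real file system does this inside the kernel on raw disk blocks.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Append one record to a "metadata" file and force it to stable
 * storage before returning, mimicking a synchronous metadata update. */
static int sync_append(const char *path, const char *record)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, record, strlen(record)) < 0 || fsync(fd) < 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}

int main(void)
{
    /* Step 1: mark the i-node as allocated and push it to disk FIRST.
     * A crash after this step leaks an i-node at worst, which a
     * recovery pass can reclaim. */
    if (sync_append("inode_table.sim", "inode 17: in use\n") < 0)
        return 1;

    /* Step 2: only now publish the <name, i-node> mapping. Doing this
     * first would risk a directory entry that points at an
     * uninitialized i-node: a dangling reference. */
    if (sync_append("directory.sim", "report.txt -> inode 17\n") < 0)
        return 1;

    return 0;
}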

Dealing with Consistency Problems
- Always keep the disk in a "safe" state
- Run a recovery program (like fsck) on startup

i-check: File Consistency
- Question: is each block on exactly one list?
- Create a bit vector with as many entries as there are blocks
- Follow the free list and each i-node's block list
- When a block is encountered, examine its bit:
  - If the bit was 0, set it to 1
  - If the bit was already 1:
    - If the block is both in a file and on the free list, remove it from the free list and cross your fingers
    - If the block is in two files, call support!
- If there are any 0's left at the end, put those blocks on the free list
- (A toy version of this pass is sketched below)
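A toy version of this pass in C, run over an invented in-memory picture of the disk: small arrays stand in for the real free list and i-node block lists, and the inconsistencies are planted deliberately so each branch fires.

#include <stdio.h>

#define NUM_BLOCKS 16   /* toy disk size */

/* Toy metadata: block numbers reached from i-nodes' block lists, and
 * block numbers on the free list. Block 7 is deliberately on both
 * lists; block 0 stands in for the boot/superblock area. */
static const int file_blocks[] = { 1, 2, 4, 5, 7, 10 };
static const int free_list[]   = { 3, 6, 7, 9, 12 };

int main(void)
{
    int bit[NUM_BLOCKS] = { 0 };   /* the bit vector: one entry per block */
    size_t i;

    /* Follow each i-node's block list. */
    for (i = 0; i < sizeof(file_blocks) / sizeof(file_blocks[0]); i++) {
        if (bit[file_blocks[i]])
            printf("block %d appears in two files: call support!\n",
                   file_blocks[i]);
        bit[file_blocks[i]] = 1;
    }

    /* Follow the free list. */
    for (i = 0; i < sizeof(free_list) / sizeof(free_list[0]); i++) {
        if (bit[free_list[i]])
            printf("block %d is both in a file and on the free list:"
                   " remove it from the free list\n", free_list[i]);
        bit[free_list[i]] = 1;
    }

    /* Any block whose bit is still 0 is on no list: reclaim it. */
    for (i = 1; i < NUM_BLOCKS; i++)   /* skip block 0 (boot/superblock) */
        if (!bit[i])
            printf("block %zu is on no list: put it on the free list\n", i);

    return 0;
}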

d-check: Directory Consistency
- Do the directories form a tree? Cycles are bad!
- Does the link count of each file (i-node) equal the number of directory links to it? (Sketch below)
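In the same toy style, a sketch of the link-count half of the check; the directory table and per-i-node link counts are invented data, chosen so that exactly one mismatch is reported.

#include <stdio.h>

#define NUM_INODES 8   /* toy table size */

/* Toy input: which i-node each directory entry points at, and the
 * link count recorded in each i-node. I-node 2 has two hard links;
 * i-node 5 claims two links but has only one directory entry. */
static const int dir_entries[] = { 2, 2, 3, 5 };
static const int recorded_link_count[NUM_INODES] = { 0, 0, 2, 1, 0, 2, 0, 0 };

int main(void)
{
    int observed[NUM_INODES] = { 0 };
    size_t i;

    /* Count the directory references to each i-node. */
    for (i = 0; i < sizeof(dir_entries) / sizeof(dir_entries[0]); i++)
        observed[dir_entries[i]]++;

    /* Compare against the count stored in the i-node itself. */
    for (i = 0; i < NUM_INODES; i++)
        if (observed[i] != recorded_link_count[i])
            printf("i-node %zu: %d directory links, but i-node says %d\n",
                   i, observed[i], recorded_link_count[i]);

    return 0;
}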

Technique #2: Better Data Layout
- Recall the basic file system structure:
  - Meta-data: i-nodes, free block list
  - Data: file data, directory data
- [Diagram: metadata region followed by data region on the disk]
- Note: i-nodes are far from the data blocks they describe

Cylinder Groups
- Basic idea: group commonly accessed data and meta-data together
  - This reduces seeks
- Details:
  - The disk is partitioned into groups of cylinders
  - Data blocks from a file are all placed in the same cylinder group
  - Files in the same directory are placed in the same cylinder group
  - The i-node for a file is placed in the same cylinder group as the file's data

Cylinder Group Analysis
- Reduces or eliminates seeks for some common access patterns
- Does not address rotational delay
- Performance is workload dependent
- Performance degrades if cylinder groups become full
  - Partial solution: proactively reserve free space

Log-Structured File System
- Let's assume all reads are served from the cache
  - An iffy assumption, but let's suspend disbelief
- Q: How can we turn all writes into large, sequential writes?
- Insight: this is possible if the location of data on disk is allowed to change

A Conventional File System
- Files live at fixed locations, so file system writes must seek
- For example: write to Mathias.txt, write to Andrew.txt, write to Jill.txt
- [Diagram: Bob.txt, Joel.txt, Jill.txt, Matt.txt, Andrew.txt, Nolan.txt, Trish.txt, Mathias.txt scattered at fixed positions across the disk]

Log-Structured File System
- Use the disk as an append-only log: all writes go at the end
- The location of a file changes over time
- Old data is not over-written (until the file system becomes full)
- [Diagram: Mathias.txt, Andrew.txt, Jill.txt written consecutively at the tail, with an arrow showing the direction of log growth]

LFS Details
- Everything gets written to the log: file data, i-nodes, directories
- LFS tries to buffer many small writes into large segments
  - Typically 512 KB to 1 MB

How Can This Possibly Work?
- Q: If nothing lives at a fixed location, how do we find "the data"?
- A: Add a layer of indirection: an i-node map
  - Maps from i-node number to current location in the log
  - The map resides at a fixed location on disk, NOT in the log!
  - The map is cached in memory for performance
- (A toy version of this indirection is sketched below)
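A minimal in-memory sketch of the indirection in C: writes append a new version to the log and redirect the i-node map, and reads always go through the map. Segment buffering, checkpointing, and the on-disk layout are all omitted, and every name and size here is an illustrative assumption.

#include <stdio.h>

#define MAX_INODES   128
#define LOG_CAPACITY 1024

/* A log entry: which i-node this version belongs to, plus its contents
 * (reduced here to a single integer "payload"). */
struct log_entry {
    int inode_no;
    int payload;
};

static struct log_entry log_area[LOG_CAPACITY];
static int log_end = 0;                  /* next free slot at the log tail */

/* The i-node map: i-node number -> current location in the log.
 * -1 means "no version written yet". */
static int inode_map[MAX_INODES];

/* Write a new version of an i-node: append to the log, then redirect
 * the map. The old version stays in place as obsolete data. */
static int lfs_write(int inode_no, int payload)
{
    if (log_end >= LOG_CAPACITY)
        return -1;                       /* log full: the cleaner's problem */
    log_area[log_end] = (struct log_entry){ inode_no, payload };
    inode_map[inode_no] = log_end;
    return log_end++;
}

/* Reads always go through the map to find the latest version. */
static int lfs_read(int inode_no)
{
    int loc = inode_map[inode_no];
    return (loc < 0) ? -1 : log_area[loc].payload;
}

int main(void)
{
    for (int i = 0; i < MAX_INODES; i++)
        inode_map[i] = -1;

    lfs_write(7, 100);                   /* first version of i-node 7 */
    lfs_write(7, 200);                   /* updated version, appended at the tail */
    printf("i-node 7 -> %d (log slot %d)\n", lfs_read(7), inode_map[7]);
    return 0;
}

Running it prints the latest version of i-node 7 and the log slot it now occupies; the earlier version remains in the log as obsolete data for the cleaner to reclaim.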

What Happens When the Disk Gets Full?
- Partial solution: the disk is managed in segments, which are threaded together on disk
  - Basically, a linked list
  - But this re-introduces seeks!
- The LFS log contains a combination of valid and obsolete data

Segment Cleaner
- Goal: turn the free space scattered across partially obsolete segments back into contiguous, clean segments
- Approach (sketched below):
  - Read a segment
  - Write its live data to the end of the log
  - Presto: the segment is now clean
- This is very expensive: each live byte is read and re-written
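A sketch of the cleaning step over the same toy log representation as in the earlier sketch (the struct is repeated so this fragment compiles on its own): copy the entries the i-node map still points at to the tail, redirect the map, and drop the rest. Segment selection policy, write-cost accounting, and capacity checks are omitted.

#define SEGMENT_SIZE 64   /* toy segment length, in log entries */

struct log_entry {
    int inode_no;
    int payload;
};

/* Clean one segment: copy every entry that the i-node map still points
 * at to the tail of the log, then update the map. Entries the map no
 * longer references are obsolete and are simply dropped.
 * Returns the new log tail (bounds checking omitted for brevity). */
int clean_segment(struct log_entry *log_area, int log_end,
                  int seg_start, int *inode_map)
{
    for (int slot = seg_start; slot < seg_start + SEGMENT_SIZE; slot++) {
        /* Liveness check: the map still points at this copy. */
        if (inode_map[log_area[slot].inode_no] == slot) {
            log_area[log_end] = log_area[slot];            /* rewrite at the tail... */
            inode_map[log_area[slot].inode_no] = log_end;  /* ...and redirect the map */
            log_end++;
        }
    }
    /* The whole segment [seg_start, seg_start + SEGMENT_SIZE) is now
     * clean and can be reused for future log writes. */
    return log_end;
}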

LFS Analysis
- For reads, LFS and a traditional FS are largely equivalent
- LFS has better performance for small writes and meta-data operations
- The LFS cleaner has a large impact on performance
  - How important is this?

LFS in Practice
- LFS has been implemented, but is not widely used
- Reasons?
  - Assumptions about read behavior were not valid: reads have not gone away
  - Performance improvements were not sufficient to offset the increased complexity and higher variability