CSE 451: Operating Systems
Winter 2007
Module 17: Berkeley Log-Structured File System + File System Summary
Ed Lazowska (lazowska@cs.washington.edu)
Allen Center 570
© 2007 Gribble, Lazowska, Levy, Zahorjan

More on caching (applies to both FS and FFS)
- The cache (often called the buffer cache) is just part of system memory
- It's system-wide, shared by all processes
- It needs a replacement algorithm, usually LRU
- Even a small (4 MB) cache can be very effective
- Today's huge memories => bigger caches => even higher hit ratios
- Many file systems also "read ahead" into the cache, increasing effectiveness even further
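To make the mechanism concrete, here is a minimal sketch of a block cache with LRU replacement in C. It is illustrative only, not the actual UNIX buffer cache: the cache size, block size, and disk_read() stub are assumptions.

/* Minimal sketch of an LRU buffer cache (hypothetical sizes and stubs). */
#include <stdio.h>
#include <string.h>

#define NBUF   8        /* hypothetical cache size, in blocks */
#define BSIZE  4096     /* hypothetical block size */

struct buf {
    int      valid;          /* does this slot hold a block? */
    long     blockno;        /* which disk block it holds */
    unsigned last_used;      /* logical clock for LRU */
    char     data[BSIZE];
};

static struct buf cache[NBUF];
static unsigned clock_ticks;

/* Stand-in for a real disk read. */
static void disk_read(long blockno, char *dst) {
    memset(dst, 0, BSIZE);
    printf("disk read of block %ld\n", blockno);
}

/* Return the cached copy of blockno, reading it from disk on a miss
 * and evicting the least-recently-used slot if the cache is full. */
char *bread(long blockno) {
    struct buf *victim = &cache[0];
    for (int i = 0; i < NBUF; i++) {
        if (cache[i].valid && cache[i].blockno == blockno) {
            cache[i].last_used = ++clock_ticks;     /* hit */
            return cache[i].data;
        }
        if (!cache[i].valid ||
            cache[i].last_used < victim->last_used)
            victim = &cache[i];                     /* track LRU slot */
    }
    disk_read(blockno, victim->data);               /* miss: fill LRU slot */
    victim->valid = 1;
    victim->blockno = blockno;
    victim->last_used = ++clock_ticks;
    return victim->data;
}

int main(void) {
    bread(10); bread(20); bread(10);   /* second read of 10 hits the cache */
    return 0;
}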

Caching writes vs. reads
- Some applications assume data is on disk after a write (seems fair enough!)
- And the file system itself will have (potentially costly!) consistency problems if a crash occurs between syncs – i-nodes and file blocks can get out of whack
- Approaches (a sketch of the second follows below):
  - "write-through" the buffer cache (synchronous – slow)
  - "write-behind": maintain a queue of uncommitted blocks and flush it periodically (unreliable – this is the sync solution)
  - NVRAM: write into battery-backed RAM (expensive) and then later to disk
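The following is a minimal sketch of the "write-behind" idea, assuming a fixed-size dirty queue and a periodic flush; the queue size, flush trigger, and disk_write() stub are hypothetical, not any particular kernel's code.

/* Minimal sketch of write-behind: defer writes, flush periodically. */
#include <stdio.h>

#define QMAX 64

static long dirty[QMAX];   /* block numbers waiting to reach disk */
static int  ndirty;

static void disk_write(long blockno) {
    printf("writing block %ld to disk\n", blockno);
}

/* Called periodically (e.g., every 30 seconds by an update/sync daemon):
 * push all uncommitted blocks to disk.  A crash before this runs loses
 * everything still in the queue. */
void flush(void) {
    for (int i = 0; i < ndirty; i++)
        disk_write(dirty[i]);
    ndirty = 0;
}

/* A write just marks the block dirty; nothing touches the disk yet. */
void write_behind(long blockno) {
    if (ndirty == QMAX)
        flush();            /* queue full: flush early rather than drop */
    dirty[ndirty++] = blockno;
}

int main(void) {
    write_behind(17);
    write_behind(42);
    flush();                /* in a real system, a timer would trigger this */
    return 0;
}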

So, you can make things better, but …
- As caches get big, most reads will be satisfied from the cache
- No matter how you cache write operations, though, they eventually have to get back to disk
- Thus, most disk traffic will be write traffic
- If you eventually put blocks (i-nodes, file content blocks) back where they came from on the disk, then even if you schedule disk writes cleverly, there's still going to be a lot of head movement (which dominates disk performance) – so you simply won't be utilizing the disk effectively

LFS inspiration
- Suppose, instead, that what you wrote to disk was a log of changes made to files
  - the log includes modified data blocks and modified metadata blocks
  - buffer a huge block (a "segment") in memory – 512 KB or 1 MB
  - when it's full, write it to disk in one efficient contiguous transfer
  - right away, you've decreased seeks by a factor of 1 MB / 4 KB = 256
- So the disk contains a single big long log of changes, consisting of threaded segments

LFS basic approach
- Use the disk as a log
  - a log is a data structure that is written only at one end
- If the disk were managed as a log, there would be effectively no seeks
- The "file" is always appended to sequentially
- New data and metadata (i-nodes, directories) are accumulated in the buffer cache, then written all at once in large blocks (e.g., segments of 0.5 MB or 1 MB)
- This would greatly increase disk write throughput (see the sketch below)
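The core write path can be sketched in a few lines of C. This is not the real Sprite LFS code: the block and segment sizes, the disk-write stub, and the convention of returning the block's new log address are all assumptions made for illustration.

/* Minimal sketch of LFS's in-memory segment buffer. */
#include <stdio.h>
#include <string.h>

#define BSIZE   4096
#define SEGSIZE (1024 * 1024)            /* one 1 MB segment */
#define BLOCKS_PER_SEG (SEGSIZE / BSIZE)

static char  segbuf[SEGSIZE];            /* the segment being filled */
static int   nfilled;                    /* blocks appended so far */
static long  next_disk_addr;             /* where the log currently ends */

static void disk_write_segment(long addr, const char *buf) {
    (void)buf;
    printf("one contiguous %d-byte write at disk address %ld\n",
           SEGSIZE, addr);
}

/* Append one block (data or metadata) to the log.  Returns the disk
 * address the block will occupy, so the caller can record it in the
 * i-node or i-node map. */
long log_append(const char *block) {
    long addr = next_disk_addr + (long)nfilled * BSIZE;
    memcpy(segbuf + nfilled * BSIZE, block, BSIZE);
    if (++nfilled == BLOCKS_PER_SEG) {   /* segment full: flush it */
        disk_write_segment(next_disk_addr, segbuf);
        next_disk_addr += SEGSIZE;
        nfilled = 0;
    }
    return addr;
}

int main(void) {
    char block[BSIZE] = { 0 };
    long where = log_append(block);      /* first block lands at offset 0 */
    printf("block will be written at log address %ld\n", where);
    return 0;
}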

LFS vs. UNIX File System / FFS
[Figure: blocks written to create two 1-block files, dir1/file1 and dir2/file2, in UFS and in LFS. Legend: i-node, directory, data, i-node map. In the Unix File System the i-node, directory, and data blocks are scattered across the disk; in the Log-Structured File System the same blocks, plus the i-node map, all appear together at the end of the log.]

LFS challenges
- Locating data written in the log
  - FFS places files in a well-known location; LFS writes data "at the end of the log"
  - even locating i-nodes! In LFS, i-nodes go in the log too!
- Managing free space on the disk
  - the disk is finite, and therefore the log must be finite
  - so you cannot just keep appending to the log, ad infinitum!
  - need to recover deleted blocks in the old part of the log
  - need to fill holes created by recovered blocks
- (Note: reads are the same as in FS/FFS once you find the i-node – and writes are a ton faster)

Locating data and i-nodes
- LFS uses i-nodes to locate data, just like FS/FFS
- LFS appends i-nodes to the end of the log, just like data
  - this makes them hard to find
- Solution: use another level of indirection, "i-node maps" (see the sketch below)
  - i-node maps map file #s (i-node #s) to i-node locations
  - the locations of the i-node map blocks are kept in a checkpoint region
  - the checkpoint region has a fixed location on disk
  - i-node maps are cached in memory for performance
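A minimal sketch of that lookup path follows, with hypothetical structure names and sizes (the real Sprite LFS on-disk structures differ): the fixed-location checkpoint region locates the i-node map, and the cached i-node map locates each i-node in the log.

/* Minimal sketch of i-node lookup through an i-node map. */
#include <stdio.h>

#define MAX_INODES 1024

/* Fixed-location checkpoint region: records where the i-node map lives. */
struct checkpoint {
    long imap_addr;                  /* disk address of the i-node map */
};

/* i-node map, cached in memory: i-node number -> disk address of i-node. */
struct imap {
    long inode_addr[MAX_INODES];
};

/* Look up where i-node `ino` currently lives in the log.  After that,
 * reads proceed exactly as in FS/FFS: i-node -> block pointers -> data. */
long inode_location(const struct imap *imap, int ino) {
    if (ino < 0 || ino >= MAX_INODES)
        return -1;
    return imap->inode_addr[ino];
}

int main(void) {
    struct imap imap = { .inode_addr = { [7] = 123456 } };
    printf("i-node 7 is at disk address %ld\n", inode_location(&imap, 7));
    return 0;
}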

Free space management
- Reads are no different than in the UNIX File System or FFS, once we find the i-node for a file
  - using the i-node map, which is cached in memory, find the i-node, which gets you to the blocks
- Every write causes new blocks to be added to the current "segment buffer" in memory
  - when the segment is full, it is written to disk
- Over time, segments in the log become fragmented as we replace old blocks of files with new blocks
- We can "garbage collect" segments with little "live" data and recover the disk space

Segment cleaning
- The log is divided into (large) segments
- Segments are "threaded" on disk (a linked list)
  - segments can be anywhere
- Reclaim space by cleaning segments (sketched below)
  - read a segment
  - copy its live data to the end of the log
  - now you have a free segment you can reuse!
- Cleaning is an issue
  - costly overhead; when do you do it?
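Here is a minimal sketch of cleaning one segment. The on-disk segment layout and liveness bookkeeping are assumptions, the log_append() stub stands in for the segment-buffer sketch above, and a real cleaner would also update the i-nodes and i-node map to point at the blocks' new locations.

/* Minimal sketch of cleaning a single segment. */
#include <stdio.h>
#include <string.h>

#define BSIZE          4096
#define BLOCKS_PER_SEG 256

struct disk_segment {
    char data[BLOCKS_PER_SEG][BSIZE];
    int  live[BLOCKS_PER_SEG];   /* 1 if the block is still referenced */
    int  free;                   /* 1 once the segment can be reused */
};

/* Stub for the log_append() of the segment-buffer sketch, so this file
 * stands alone. */
static long log_append(const char *block) { (void)block; return 0; }

/* Copy the live blocks of one segment to the end of the log, then mark
 * the whole segment free. */
void clean_segment(struct disk_segment *seg) {
    int copied = 0;
    for (int i = 0; i < BLOCKS_PER_SEG; i++)
        if (seg->live[i]) {
            log_append(seg->data[i]);   /* live data moves to the log tail */
            copied++;
        }
    memset(seg->live, 0, sizeof seg->live);
    seg->free = 1;                      /* segment is now reusable */
    printf("copied %d live blocks; segment is now free\n", copied);
}

int main(void) {
    static struct disk_segment seg;     /* static: ~1 MB, too big for the stack */
    seg.live[3] = seg.live[17] = 1;     /* pretend two blocks are still live */
    clean_segment(&seg);
    return 0;
}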

Detail: cleaning
- The major problem for an LFS is cleaning, i.e., producing contiguous free space on disk
- A cleaner daemon "cleans" old segments, i.e., takes several non-full segments and compacts them, creating one full segment plus free space
- The cleaner chooses segments on disk based on:
  - utilization: how much is to be gained by cleaning them
  - age: how likely the segment is to change soon anyway
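One way to combine utilization and age is the cost-benefit rule from Rosenblum and Ousterhout's paper: prefer segments with little live data and old (stable) data. The sketch below applies that rule to an illustrative in-memory segment table; the struct fields and the example segments are hypothetical.

/* Minimal sketch of a cost-benefit segment-selection policy. */
#include <stdio.h>

struct segment {
    int    id;
    double utilization;    /* fraction of the segment still "live", 0..1 */
    double age;            /* age of the youngest live data, e.g. seconds */
};

/* benefit/cost = (free space generated * age of data) / (cost of reading
 * the segment and rewriting its live data) = ((1 - u) * age) / (1 + u)  */
static double benefit_cost(const struct segment *s) {
    return (1.0 - s->utilization) * s->age / (1.0 + s->utilization);
}

/* Pick the segment the cleaner should process next. */
const struct segment *choose_victim(const struct segment *segs, int n) {
    const struct segment *best = NULL;
    for (int i = 0; i < n; i++)
        if (!best || benefit_cost(&segs[i]) > benefit_cost(best))
            best = &segs[i];
    return best;
}

int main(void) {
    struct segment segs[] = {
        { 0, 0.90,  10.0 },   /* nearly full and recently written */
        { 1, 0.30, 500.0 },   /* mostly empty and cold: a good victim */
    };
    printf("clean segment %d first\n", choose_victim(segs, 2)->id);
    return 0;
}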

LFS summary
- As caches get big, most reads will be satisfied from the cache
- No matter how you cache write operations, though, they eventually have to get back to disk
- Thus, most disk traffic will be write traffic
- If you eventually put blocks (i-nodes, file content blocks) back where they came from, then even if you schedule disk writes cleverly, there's still going to be a lot of head movement (which dominates disk performance)

- Suppose, instead, that what you wrote to disk was a log of changes made to files
  - the log includes modified data blocks and modified metadata blocks
  - buffer a huge block (a "segment") in memory – 512 KB or 1 MB
  - when it's full, write it to disk in one efficient contiguous transfer
  - right away, you've decreased seeks by a factor of 1 MB / 4 KB = 256
- So the disk is just one big long log, consisting of threaded segments

What happens when a crash occurs?
- You lose some work
- But the log that's on disk represents a consistent view of the file system at some instant in time
Suppose you have to read a file?
- Once you find its current i-node, you're fine
- i-node maps provide the level of indirection that makes this possible
- The details aren't that important

How do you prevent overflowing the disk (because the log just keeps on growing)?
- The segment cleaner coalesces the active blocks from multiple old log segments into a new log segment, freeing the old log segments for re-use
- Again, the details aren't that important

Tradeoffs
- LFS wins, relative to FFS, on:
  - metadata-heavy workloads
  - small file writes
  - deletes (metadata requires an additional write, and FFS does this synchronously)
- LFS loses, relative to FFS, when:
  - many files are partially over-written in random order
  - the file gets splayed throughout the log
- LFS vs. JFS
  - JFS is "robust" like LFS, but data must eventually be written back "where it came from," so disk bandwidth is still an issue

LFS history
- Designed by Mendel Rosenblum and his advisor John Ousterhout at Berkeley in 1991
- Rosenblum went on to become a Stanford professor and to co-found VMware, so even if this wasn't his finest hour, he's OK
- Ex-Berkeley student Margo Seltzer (faculty at Harvard) published a 1995 paper comparing and contrasting LFS with conventional FFS, and claiming poor LFS performance in some realistic circumstances
- Ousterhout published a "Critique of Seltzer's LFS Measurements," rebutting her arguments
- Seltzer published "A Response to Ousterhout's Critique of LFS Measurements," rebutting the rebuttal
- Ousterhout published "A Response to Seltzer's Response," rebutting the rebuttal of the rebuttal

Moral of the story
- If you're going to do OS research, you need a thick skin
- It's very difficult to predict how a file system will be used
  - so it's hard to generate reasonable benchmarks, let alone a reasonable FS design
- It's very difficult to measure a file system in practice
  - performance depends on a HUGE number of parameters, involving both the workload and the hardware architecture

Summary: UFS
[Diagram: i-nodes and data blocks laid out directly on the hardware device.]

FFS
- Low throughput addressed by:
  - larger blocks
  - cylinder groups
  - hardware awareness

JFS
- Long post-crash boot times addressed by:
  - a transactional journal of changes, propagated back to the "real" file system asynchronously
- Example: file creation (journal records)
    …
    Start t
    Alloc inode 1067
    Write inode 1067 w/ [data]
    Write block 22731 w/ [data]
    Commit t
    …
[Diagram: app requests go to the main memory cache; changes are appended to the journal and then flow to the rest of the file system.]
- If data block updates are not journaled, after a crash files may contain garbage blocks
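The journal records from the file-creation example can be sketched as simple C structures. This is not any real journaling file system's on-disk format: the record types, transaction id, and append routine are illustrative; only the i-node number (1067) and block number (22731) come from the slide.

/* Minimal sketch of the journal records for the file-creation example. */
#include <stdio.h>

enum rec_type { TXN_START, ALLOC_INODE, WRITE_INODE, WRITE_BLOCK, TXN_COMMIT };

struct journal_rec {
    enum rec_type type;
    long          txn;       /* transaction id ("t" on the slide) */
    long          target;    /* i-node or block number, if any */
};

/* Append one record to the journal; in a real system this would be a
 * sequential write to the journal area of the disk. */
static void journal_append(struct journal_rec r) {
    static const char *names[] =
        { "Start", "Alloc inode", "Write inode", "Write block", "Commit" };
    printf("%s %ld\n", names[r.type], r.target ? r.target : r.txn);
}

int main(void) {
    long t = 1;   /* hypothetical transaction id */
    journal_append((struct journal_rec){ TXN_START,   t, 0 });
    journal_append((struct journal_rec){ ALLOC_INODE, t, 1067 });
    journal_append((struct journal_rec){ WRITE_INODE, t, 1067 });
    journal_append((struct journal_rec){ WRITE_BLOCK, t, 22731 });
    journal_append((struct journal_rec){ TXN_COMMIT,  t, 0 });
    /* Only after the Commit record is durable are the changes propagated
     * back to their home locations in the file system. */
    return 0;
}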

LFS
- Write throughput addressed by:
  - the file system is a log
[Diagram: dirty blocks of file1 and file2 plus i-node maps accumulate in the main memory cache and are written to disk as log segments.]