read vs. mmap Tan Li

Man mmap
#include <sys/mman.h>

void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset);
int munmap(void *start, size_t length);

mmap()'s arguments are an address hint (start), the total size to be mapped (length), the memory protection (prot), the mapping flags (flags), a file descriptor (fd) for an open file, and an offset into that file. The flags allow different options to be passed to mmap() to control the way the file is mapped into the process address space. For example, mapping a whole input file read-only:

buf = mmap(0, infilestat.st_size, PROT_READ, MAP_SHARED, infile, 0);
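As a hedged illustration of these prototypes, the following sketch maps an input file read-only, touches the mapped bytes, and unmaps it. The command-line interface and the newline-counting work are assumptions added for the example, not part of the slides.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }

    int infile = open(argv[1], O_RDONLY);
    if (infile < 0) { perror("open"); return 1; }

    struct stat infilestat;
    if (fstat(infile, &infilestat) < 0 || infilestat.st_size == 0) {
        fprintf(stderr, "cannot stat file, or file is empty\n");
        return 1;
    }

    /* Map the whole file read-only, shared, starting at offset 0. */
    char *buf = mmap(0, infilestat.st_size, PROT_READ, MAP_SHARED, infile, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* The file's contents are now ordinary memory. */
    size_t newlines = 0;
    for (off_t i = 0; i < infilestat.st_size; i++)
        if (buf[i] == '\n')
            newlines++;
    printf("%zu newline(s)\n", newlines);

    munmap(buf, infilestat.st_size);
    close(infile);
    return 0;
}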

Usage of mmap
The mmap() function allows access to resources via address space manipulations, instead of read()/write(). Once a file is mapped, all a process has to do to access it is use the data at the address to which the file was mapped. So, using pseudo-code to illustrate the way in which an existing program might be changed to use mmap(), the following:

fildes = open(...)
lseek(fildes, some_offset)
read(fildes, buf, len)
/* Use data in buf. */

becomes:

fildes = open(...)
address = mmap(0, len, PROT_READ, MAP_PRIVATE, fildes, some_offset)
/* Use data at address. */
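One detail the pseudo-code leaves out is that mmap() only accepts offsets that are multiples of the page size. The sketch below, with the invented helper name map_range(), shows one way to map an arbitrary (offset, length) range by rounding the offset down to a page boundary; it is an illustration of the conversion, not code from the slides.

#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Map 'len' bytes of 'fildes' starting at 'some_offset'.  Because mmap()
 * itself only accepts page-aligned offsets, we map from the enclosing page
 * boundary and return a pointer adjusted by the remainder.  On success,
 * *map_addr and *map_len receive what must later be passed to munmap(). */
void *map_range(int fildes, off_t some_offset, size_t len,
                void **map_addr, size_t *map_len)
{
    long page = sysconf(_SC_PAGESIZE);
    off_t aligned = some_offset - (some_offset % page);
    size_t extra = (size_t)(some_offset - aligned);

    void *addr = mmap(NULL, len + extra, PROT_READ, MAP_PRIVATE,
                      fildes, aligned);
    if (addr == MAP_FAILED)
        return NULL;

    *map_addr = addr;
    *map_len = len + extra;
    return (char *)addr + extra;   /* points at the requested offset */
}

The caller uses the returned pointer in place of buf and later passes *map_addr and *map_len to munmap().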

Advantages
With traditional file I/O using read(), data is first copied from the disk into a kernel buffer, and then copied again from the kernel buffer into the buffer the process supplied. With memory-mapped file I/O, the file's pages are brought into memory and mapped directly into the process's address space, in the region where the file is mapped; the file is then accessed through ordinary pointers into that region. I/O on the file is thus done without the overhead of handling the data twice.
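To make the two data paths concrete, here is a hedged sketch that processes the same file both ways; the 64 KiB buffer size and the byte-summing loop are arbitrary choices for illustration, not part of the original slides.

#include <stdint.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* read() path: the kernel copies each chunk into the buffer we supply. */
uint64_t sum_with_read(int fd)
{
    char buf[64 * 1024];
    uint64_t sum = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        for (ssize_t i = 0; i < n; i++)
            sum += (unsigned char)buf[i];
    return sum;
}

/* mmap() path: the file's pages are mapped into our address space and we
 * walk them directly through a pointer; there is no second copy. */
uint64_t sum_with_mmap(int fd, off_t size)
{
    unsigned char *p = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 0;
    uint64_t sum = 0;
    for (off_t i = 0; i < size; i++)
        sum += p[i];
    munmap(p, size);
    return sum;
}

Timing these two functions on a large file (for example with clock_gettime()) is one way to reproduce the kind of comparison the simulation slide below refers to.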

Disadvantages
1. There are overheads related to the algorithms the operating system must use to decide, on the fly, which pages to evict when the data set does not fit in main memory, and to the heuristics it employs to predict the memory-usage pattern and perform read-ahead.
2. On a 32-bit system you are certainly not going to be able to mmap() any file larger than 4 GB, and the real limit may be much lower (1 GB or less), since parts of the address space are reserved for the kernel or already used by the program, and the remaining space may be fragmented. For these reasons you usually want code that falls back to read() when mmap() fails (see the sketch below); this is also good for portability.
3. Use of mmap() may reduce the amount of memory available to other memory-allocation functions.
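A hedged sketch of the fallback mentioned in point 2 follows; the function name load_or_map() and its interface are invented for illustration. It tries mmap() first and, if that fails, reads the file into a heap buffer instead.

#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Return a pointer to the file's contents, either mapped or read into a
 * malloc'd buffer; *used_mmap tells the caller how to release it later
 * (munmap() if mapped, free() otherwise). */
void *load_or_map(int fd, size_t size, int *used_mmap)
{
    void *p = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p != MAP_FAILED) {
        *used_mmap = 1;
        return p;
    }

    /* mmap() failed (e.g. not enough address space): fall back to read(). */
    char *buf = malloc(size);
    if (buf == NULL)
        return NULL;
    size_t done = 0;
    while (done < size) {
        ssize_t n = read(fd, buf + done, size - done);
        if (n <= 0) { free(buf); return NULL; }
        done += (size_t)n;
    }
    *used_mmap = 0;
    return buf;
}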

Simulation Result

Usage of madvise()
The madvise() system call advises the kernel about how to handle paging input/output in the address range beginning at address start and extending for length bytes. It allows an application to tell the kernel how it expects to use some mapped or shared memory areas, so that the kernel can choose appropriate read-ahead and caching techniques. The call does not influence the semantics of the application (except in the case of MADV_DONTNEED), but may influence its performance; the kernel is free to ignore the advice.

MADV_SEQUENTIAL: expect page references in sequential order. (Hence, pages in the given range can be aggressively read ahead, and may be freed soon after they are accessed.)
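A hedged sketch of pairing madvise() with mmap() for a sequential scan; the helper name map_sequential() is invented for this example, and ignoring madvise()'s return value is deliberate since the advice is only a hint.

#include <stddef.h>
#include <sys/mman.h>

/* Map 'length' bytes of 'fd' read-only and advise the kernel that the pages
 * will be referenced sequentially, so it can read ahead aggressively and
 * drop pages soon after they are used. */
void *map_sequential(int fd, size_t length)
{
    void *addr = mmap(NULL, length, PROT_READ, MAP_PRIVATE, fd, 0);
    if (addr == MAP_FAILED)
        return NULL;
    (void)madvise(addr, length, MADV_SEQUENTIAL);  /* advice may be ignored */
    return addr;
}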

Suggestion
Two approaches are suggested: (1) madvise() + mmap(), with fread() as the backup mechanism; (2) read() with O_DIRECT, again with fread() as the backup mechanism. A sketch of the O_DIRECT variant follows.
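A hedged sketch of the second suggestion: open with O_DIRECT and fall back to ordinary buffered I/O if that fails. O_DIRECT is Linux-specific and requires the buffer, offset, and transfer size to be suitably aligned; the 4096-byte alignment below is an assumption, as the real requirement depends on the filesystem and device. The function names are invented for this example.

#define _GNU_SOURCE            /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdlib.h>

/* Open 'path' for direct (unbuffered) reads; if O_DIRECT is not supported,
 * fall back to a normal buffered open, so the caller can still use
 * read()/fread() as the backup mechanism. */
int open_for_reading(const char *path, int *is_direct)
{
    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd >= 0) {
        *is_direct = 1;
        return fd;
    }
    *is_direct = 0;
    return open(path, O_RDONLY);   /* fallback: ordinary buffered I/O */
}

/* With O_DIRECT, reads must go into a buffer aligned for the device;
 * 4096 bytes is assumed here as a common block size. */
void *alloc_direct_buffer(size_t size)
{
    void *buf = NULL;
    if (posix_memalign(&buf, 4096, size) != 0)
        return NULL;
    return buf;
}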