COT 5611 Operating Systems Design Principles Spring 2014


COT 5611 Operating Systems Design Principles Spring 2014 Dan C. Marinescu Office: HEC 304 Office hours: M-Wd 3:30 – 5:30 PM

Lecture 18
Reading assignment: Chapter 9 from the on-line text.
Last time: error-correcting codes.

Today:
- All-or-nothing and before-or-after atomicity
- Atomicity and processor management
- Processes, threads, and address spaces
- Thread coordination with a bounded buffer: the naïve approach

All-or-nothing and before-or-after atomicity
All-or-nothing atomicity  a strategy for masking failures during the execution of programs.
Before-or-after atomicity  necessary for the coordination of concurrent activities.
Applications:
- Transaction processing systems.
- Processor virtualization and processor sharing in operating systems.
- Adding multilevel memory management to pipelined processors complicates the design of the OS: a missing-page exception can occur in the "middle" of an instruction and requires an all-or-nothing interface between the processor and the OS.
- Coordination of concurrent threads and scheduling require before-or-after atomicity.
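The all-or-nothing idea can be illustrated outside the kernel as well. A minimal sketch, assuming a POSIX file system where os.rename() replaces the target atomically (the function name atomic_write is illustrative): a reader observes either the old contents or the new contents, never a partial write.

```python
import os

def atomic_write(path, data):
    """All-or-nothing file update: write to a temporary file,
    then commit with an atomic rename."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # force the data to stable storage
    os.rename(tmp, path)       # the atomic commit point (POSIX)
```

If the program crashes before the rename, the old file is untouched; after the rename, the new contents are complete.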

Processes, threads, and address spaces
Abstractions:
- Process  a program in execution.
- Address space  the container in which a process runs.
- Thread  a lightweight process; multiple threads can share the same address space, and creating an additional thread does not require the overhead associated with allocating an address space.
The distinction between process and thread is somewhat blurred in the modern OS literature.
Virtualization supports the abstractions necessary to:
1. Cope with the mismatch between processor speed and memory and I/O speed. A processor is shared among several processes/threads; a process/thread is a virtual processor.
2. Allow processes/threads to run independently of the size of the physical memory. An address space is a virtual memory container.

Thread coordination with bounded buffers
Bounded buffer  the virtualization of a communication channel.
Thread coordination:
- Locks for serialization.
- Bounded buffers for communication.
- Producer thread  writes data into the buffer.
- Consumer thread  reads data from the buffer.
Basic assumptions:
- We have only two threads.
- Threads proceed concurrently at independent speeds/rates.
- Bounded buffer  only N buffer cells.
- Messages are of fixed size and occupy exactly one buffer cell.
Spin lock  a thread keeps checking a control variable/semaphore "until the light turns green." Feasible only when the threads run on different processors (otherwise, how would the spinning thread give other threads a chance to run?).
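The naïve send/receive pair for such a bounded buffer can be sketched as follows (a toy Python version; the names send, receive, in_ptr, and out_ptr are illustrative, and the busy-waiting loops waste CPU exactly as the slide warns):

```python
import threading

N = 4                    # number of buffer cells
cells = [None] * N
in_ptr = 0               # total messages written so far
out_ptr = 0              # total messages read so far

def send(msg):
    """Producer: spin while the buffer is full, then write."""
    global in_ptr
    while in_ptr - out_ptr == N:
        pass                       # busy-wait: buffer full
    cells[in_ptr % N] = msg
    in_ptr += 1

def receive():
    """Consumer: spin while the buffer is empty, then read."""
    global out_ptr
    while in_ptr == out_ptr:
        pass                       # busy-wait: buffer empty
    msg = cells[out_ptr % N]
    out_ptr += 1
    return msg
```

With exactly one producer and one consumer, each shared counter has a single writer, which is what makes this naïve version work at all; it matches the implicit assumptions listed on the next slide.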


Implicit assumptions for the correctness of the thread-coordination implementation
- One sending and one receiving thread.
- Only one thread updates each shared variable.
- Sender and receiver threads run on different processors, to allow spin locks.
- in and out are implemented as integers large enough that they do not overflow (e.g., 64-bit integers).
- The shared memory used for the buffer provides read/write coherence.
- The memory provides before-or-after atomicity for the shared variables in and out.
- The result of executing a statement becomes visible to all threads in program order; no compiler optimization is supported.
These assumptions are not realistic!

Thread and virtual memory management
The kernel supports thread and virtual memory management.
Thread management:
- Creation and destruction of threads.
- Allocation of the processor to a ready-to-run thread.
- Handling of interrupts.
- Scheduling  deciding which of the ready-to-run threads should be allocated the processor.
Virtual memory management  maps the virtual address space of a process/thread to physical memory.
Each module runs in its own address space; if one module runs multiple threads, all of them share that address space.
Threads + virtual memory  a virtual computer for each module.

Multi-level memory systems (Lecture 26)
Both the amount of storage and the access time increase as we move down the hierarchy:
CPU registers  L1 cache  L2 cache  Main memory  Magnetic disk  Mass storage systems  Remote storage
Memory management schemes decide where data is placed in this hierarchy:
- Manual  left to the user.
- Automatic  based on memory virtualization; more effective and easier to use.

Forms of memory virtualization
- Memory-mapped files  in UNIX, mmap.
- Copy-on-write  when several threads use the same data, map the page holding the data and store the data only once in memory. This works as long as all the threads only READ the data. If one of the threads carries out a WRITE, the virtual memory handler generates an exception and the data pages are remapped so that each thread gets its own copy of the page.
- On-demand zero-filled pages  instead of allocating zero-filled pages in RAM or on the disk, the VM manager maps these pages without READ or WRITE permissions. When a thread attempts to actually READ or WRITE such a page, an exception is generated and the VM manager allocates the page dynamically.
- Virtual shared memory  several threads on multiple systems share the same address space. When a thread references a page that is not in its local memory, the local VM manager fetches the page over the network and the remote VM manager unmaps it.
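The memory-mapped-file form can be demonstrated from user space with Python's mmap module (a sketch; the file name is arbitrary). A store into the mapped region updates the file image, with the VM manager paging the data in and out behind the scenes:

```python
import mmap
import os
import tempfile

# Create a small file to serve as the backing store.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"hello world")

# Map the whole file into the address space and update it in place.
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 0)
    assert m[:5] == b"hello"       # a load reads the file contents
    m[:5] = b"HELLO"               # a store updates the file image
    m.flush()                      # force the dirty page back to disk
    m.close()
```

After the mapping is closed, an ordinary read of the file sees the modified bytes.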

Multi-level memory management and virtual memory
Two-level memory system: RAM + disk. Each page of an address space has an image on the disk. The RAM consists of blocks.
- READ and WRITE from RAM  controlled by the VM manager.
- GET and PUT from disk  controlled by a multi-level memory manager.
Old design philosophy: integrate the two to reduce the instruction count.
New approach  modular organization:
- Implement the VM manager (VMM) in hardware; it translates virtual addresses into physical addresses.
- Implement the multi-level memory manager (MLMM) in software, in the kernel; it transfers pages back and forth between RAM and the disk.
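A toy model of this modular split, with hypothetical VMM and MLMM classes (the real VMM is hardware; this sketch only mimics the control flow of a missing-page exception and its handling by the kernel):

```python
PAGE_SIZE = 4096

class PageFault(Exception):
    """Raised by the VMM when a virtual page has no RAM block."""

class VMM:
    """Hardware-style translation: virtual address -> physical address."""
    def __init__(self):
        self.page_table = {}          # virtual page -> RAM block number

    def translate(self, vaddr):
        vpage, offset = divmod(vaddr, PAGE_SIZE)
        if vpage not in self.page_table:
            raise PageFault(vpage)    # missing-page exception -> kernel
        return self.page_table[vpage] * PAGE_SIZE + offset

class MLMM:
    """Kernel software: moves page images between the disk and RAM blocks."""
    def __init__(self, vmm, num_blocks):
        self.vmm = vmm
        self.free = list(range(num_blocks))

    def handle_fault(self, vpage):
        block = self.free.pop()       # (eviction policy omitted in this sketch)
        # ... GET the page image from disk into this RAM block ...
        self.vmm.page_table[vpage] = block

vmm = VMM()
mlmm = MLMM(vmm, num_blocks=8)
try:
    vmm.translate(5 * PAGE_SIZE + 100)   # page 5 is not resident yet
except PageFault as e:
    mlmm.handle_fault(e.args[0])         # kernel brings the page in
paddr = vmm.translate(5 * PAGE_SIZE + 100)   # retry now succeeds
```

The all-or-nothing interface mentioned earlier is visible here: the faulting translation is abandoned entirely and re-executed from scratch after the MLMM installs the page.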


The price to pay  the performance of a two-level memory
The latencies: LP << LS
- LP  latency of the primary device, e.g., 10 nsec for RAM.
- LS  latency of the secondary device, e.g., 10 msec for disk.
Hit ratio h  the probability that a reference will be satisfied by the primary device.
Average latency: AS = h x LP + (1 - h) x LS.
Example: LP = 10 nsec (primary device is main memory), LS = 10 msec = 10,000,000 nsec (secondary device is the disk).
- h = 0.90  AS = 0.90 x 10 + 0.10 x 10,000,000 = 1,000,009 nsec ~ 1 msec
- h = 0.99  AS = 0.99 x 10 + 0.01 x 10,000,000 = 100,009.9 nsec ~ 0.1 msec
- h = 0.999  AS = 0.999 x 10 + 0.001 x 10,000,000 = 10,009.99 nsec ~ 0.01 msec
- h = 0.9999  AS = 0.9999 x 10 + 0.0001 x 10,000,000 = 1,009.999 nsec ~ 1 microsecond
This considerable slowdown is due to the very large discrepancy (six orders of magnitude) between the latencies of the primary and the secondary device.
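The average-latency formula is easy to check numerically (the function name avg_latency is illustrative):

```python
def avg_latency(h, lp, ls):
    """Average access time of a two-level memory:
    AS = h * LP + (1 - h) * LS, with h the hit ratio."""
    return h * lp + (1 - h) * ls

LP = 10            # primary (RAM) latency, nsec
LS = 10_000_000    # secondary (disk) latency, nsec

for h in (0.90, 0.99, 0.999, 0.9999):
    print(h, avg_latency(h, LP, LS))
```

Even at a 99.99% hit ratio the average latency is about 100x the RAM latency, which is why the hit ratio must be driven extremely close to 1.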

The performance of a two-level memory (cont'd)
The sizes: SizeP << SizeS
- SizeP  number of cells of the primary device.
- SizeS  number of cells of the secondary device; SizeS = K x SizeP with K large (1/K small).
Statement: if references occur with equal frequency to every cell of the primary and the secondary device, then h = 1/K is small and the combined memory operates essentially at the speed of the secondary device.
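Under the equal-frequency assumption the hit ratio is just the fraction of cells that live in the primary device, h = SizeP/SizeS = 1/K, which makes the statement easy to verify numerically (a sketch; the value K = 1000 is illustrative):

```python
def avg_latency_uniform(k, lp, ls):
    """Equal-frequency references over all cells: h = 1/K,
    so for large K the average latency approaches LS."""
    h = 1.0 / k
    return h * lp + (1 - h) * ls

# K = 1000, with the RAM and disk latencies (in nsec) from the example
print(avg_latency_uniform(1000, 10, 10_000_000))
```

The result is within a fraction of a percent of the full disk latency: without locality of reference, the fast primary device buys almost nothing.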

Locality of reference  the concentration of references:
- Spatial locality of reference.
- Temporal locality of reference.
Reasons for locality of reference:
- Programs consist of sequences of instructions interrupted by branches.
- Data structures group together related data elements.
Working set  the collection of references made by an application in a given time window. If the working set is larger than the number of cells of the primary device, performance degrades significantly.
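The effect of the working set exceeding the primary device can be simulated with a small LRU model (a sketch; the 16-cell primary device and the cyclic reference strings are illustrative):

```python
from collections import OrderedDict

def hit_ratio(refs, cells):
    """Simulate an LRU-managed primary device with `cells` cells and
    return the fraction of references it satisfies."""
    lru, hits = OrderedDict(), 0
    for r in refs:
        if r in lru:
            hits += 1
            lru.move_to_end(r)            # mark as most recently used
        else:
            if len(lru) == cells:
                lru.popitem(last=False)   # evict the least recently used
            lru[r] = True
    return hits / len(refs)

# Working set of 8 pages fits in 16 cells: almost every reference hits.
local = [i % 8 for i in range(10_000)]
# Working set of 64 pages exceeds 16 cells: cyclic access defeats LRU
# and every reference misses.
wide = [i % 64 for i in range(10_000)]

print(hit_ratio(local, 16), hit_ratio(wide, 16))
```

The contrast is stark: once the working set no longer fits, the hit ratio collapses and, by the latency formula of the previous slides, the memory runs at disk speed.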