1 COT 5611 Operating Systems Design Principles Spring 2014
Dan C. Marinescu
Office: HEC 304
Office hours: M-Wd 3:30 – 5:30 PM

2 Lecture 18
Reading assignment: Chapter 9 from the on-line text
Last time – Error correcting codes

3 Today
All-or-nothing and before-or-after atomicity
Atomicity and processor management
Processes, threads, and address spaces
Thread coordination with a bounded buffer – the naïve approach

4 All-or-nothing and before-or-after atomicity
All-or-nothing atomicity  a strategy for masking failures during the execution of programs.
Before-or-after atomicity  necessary for the coordination of concurrent activities.
Applications:
Transaction processing systems.
Processor virtualization and processor sharing in operating systems.
Adding multilevel memory management to pipelined processors complicates the design of the OS: a missing-page exception can occur in the “middle” of an instruction and requires an all-or-nothing interface between the processor and the OS.
Coordination of concurrent threads and scheduling requires before-or-after atomicity (see the sketch below).
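
A minimal sketch of before-or-after atomicity, using a POSIX mutex as the serializing mechanism (an illustration, not a construct from the text): the lock makes each increment appear to happen entirely before or entirely after any concurrent one.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);   /* begin the atomic action */
        counter++;                   /* no thread observes a partial update */
        pthread_mutex_unlock(&lock); /* end the atomic action */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* always 2000000 with the lock */
    return 0;
}
```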

5 Processes, threads, and address spaces
Abstractions:
Process  a program in execution.
Address space  the container in which a process runs.
Thread  a lightweight process; multiple threads can share the same address space; creating an additional thread does not require the overhead associated with allocating an address space (see the sketch below).
The distinction between process and thread is somewhat blurred in the modern OS literature.
Virtualization supports the abstractions necessary to:
1. Cope with the mismatch between processor speed and memory and I/O speed. A processor is shared among several processes/threads  a process/thread is a virtual processor.
2. Allow processes/threads to run independently of the size of the physical memory  an address space is a virtual memory container.
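
A minimal pthreads sketch (my illustration, not from the slides) of threads sharing one address space: the child thread writes a global variable and the main thread sees the new value, because both run in the same container.

```c
#include <pthread.h>
#include <stdio.h>

static int shared = 0;  /* lives in the single address space both threads share */

static void *child(void *arg) {
    shared = 42;        /* visible to the main thread */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, child, NULL); /* cheap: no new address space */
    pthread_join(t, NULL);
    printf("shared = %d\n", shared);       /* prints 42 */
    return 0;
}
```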

6 Thread coordination with bounded buffers
Bounded buffer  the virtualization of a communication channel.
Thread coordination:
Locks for serialization.
Bounded buffers for communication.
Producer thread  writes data into the buffer.
Consumer thread  reads data from the buffer.
Basic assumptions:
We have only two threads.
Threads proceed concurrently at independent speeds/rates.
Bounded buffer – only N buffer cells.
Messages are of fixed size and occupy exactly one buffer cell.
Spin lock  a thread keeps checking a control variable/semaphore “until the light turns green”; feasible only when the threads run on different processors (how else could the spinning thread give the other thread a chance to run?). A sketch of the naïve approach follows.


8 Implicit assumptions for the correctness of the thread-coordination implementation
One sending and one receiving thread.
Only one thread updates each shared variable.
Sender and receiver threads run on different processors, to allow spin locks.
in and out are implemented as integers large enough that they never overflow (e.g., 64-bit integers).
The shared memory used for the buffer provides read/write coherence.
The memory provides before-or-after atomicity for the shared variables in and out.
The result of executing a statement becomes visible to all threads in program order; no compiler optimization is applied.
These assumptions are not realistic! The sketch below shows what a real implementation has to add.
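
For instance, a real compiler may never re-read in and out from memory inside the spin loops above, and the hardware may reorder stores. A hedged sketch of one repair, using C11 atomics (my addition, not part of the lecture's model), for the visibility of the two counters:

```c
#include <stdatomic.h>

/* Sequentially consistent atomics make every load re-read memory and
   keep updates visible to all threads in program order. */
atomic_long in;   /* zero-initialized; updated only by the sender   */
atomic_long out;  /* zero-initialized; updated only by the receiver */

/* The consumer's empty test, re-evaluated on every spin iteration. */
int buffer_empty(void) {
    return atomic_load(&in) == atomic_load(&out);
}
```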

9 Thread and virtual memory management
The kernel supports thread and virtual memory management.
Thread management:
Creation and destruction of threads.
Allocation of the processor to a ready-to-run thread.
Handling of interrupts.
Scheduling – deciding which of the ready-to-run threads should be allocated the processor.
Virtual memory management  maps the virtual address space of a process/thread to physical memory.
Each module runs in its own address space; if one module runs multiple threads, all of them share one address space.
Thread + virtual memory  a virtual computer for each module.

10 Multi-level memory systems
Both the amount of storage and the access time increase as we move down the hierarchy:
CPU registers
L1 cache
L2 cache
Main memory
Magnetic disk
Mass storage systems
Remote storage
Memory management schemes  decide where the data is placed in this hierarchy:
Manual  left to the user.
Automatic  based on memory virtualization; more effective and easier to use.

11 Forms of memory virtualization
Memory-mapped files  in UNIX, mmap (see the sketch after this list).
Copy on write  when several threads use the same data, map the page holding the data and store the data only once in memory. This works as long as all the threads only READ the data. If one of the threads carries out a WRITE, the virtual memory handler generates an exception and the data pages are remapped so that each thread gets its own copy of the page.
On-demand zero-filled pages  instead of allocating zero-filled pages in RAM or on the disk, the VM manager maps these pages without READ or WRITE permissions. When a thread attempts to actually READ or WRITE such a page, an exception is generated and the VM manager allocates the page dynamically.
Virtual shared memory  several threads on multiple systems share the same address space. When a thread references a page that is not in its local memory, the local VM manager fetches the page over the network and the remote VM manager unmaps it.
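
A minimal sketch of the first form, UNIX memory-mapped files via mmap(2); the file name is hypothetical. The mapped pages enter the address space lazily and are faulted in by the VM manager on first access.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.txt", O_RDONLY);   /* hypothetical input file */
    if (fd < 0) return 1;
    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) return 1;

    /* Map the file READ-only; a WRITE here would fault, much like the
       copy-on-write case described above. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) return 1;

    fwrite(p, 1, st.st_size, stdout);      /* pages fault in on demand */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```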

12 Multi-level memory management and virtual memory
Two-level memory system: RAM + disk. Each page of an address space has an image on the disk; the RAM consists of blocks.
READ and WRITE from RAM  controlled by the VM manager.
GET and PUT from disk  controlled by a multi-level memory manager.
Old design philosophy: integrate the two to reduce the instruction count.
New approach – modular organization:
Implement the VM manager (VMM) in hardware. It translates virtual addresses into physical addresses (see the sketch below).
Implement the multi-level memory manager (MLMM) in software, in the kernel. It transfers pages back and forth between RAM and the disk.
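
A toy sketch of the translation step the hardware VMM performs, with a single-level page table; this is an illustration, not the textbook's design, and every name here (page_table, missing_page_fault) is hypothetical.

```c
#include <stdint.h>

#define PAGE_BITS 12                  /* 4 KB pages */
#define PAGE_SIZE (1u << PAGE_BITS)

typedef struct {
    uint64_t frame;     /* physical frame number */
    int      resident;  /* 0 => the page's image lives only on disk */
} pte_t;

extern pte_t page_table[];                      /* one entry per virtual page */
extern void  missing_page_fault(uint64_t page); /* handled by the MLMM */

uint64_t translate(uint64_t vaddr) {
    uint64_t page   = vaddr >> PAGE_BITS;
    uint64_t offset = vaddr & (PAGE_SIZE - 1);
    if (!page_table[page].resident)
        missing_page_fault(page);  /* MLMM GETs the page from disk */
    return (page_table[page].frame << PAGE_BITS) | offset;
}
```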


14 The price to pay – the performance of a two-level memory
The latency LP << LS:
LP  latency of the primary device, e.g., 10 nsec for RAM.
LS  latency of the secondary device, e.g., 10 msec for disk.
Hit ratio h  the probability that a reference will be satisfied by the primary device.
Average latency (AS)  AS = h x LP + (1 – h) x LS.
Example: LP = 10 nsec (primary device is main memory), LS = 10 msec = 10,000,000 nsec (secondary device is the disk).
Hit ratio h = 0.90  AS = 0.9 x 10 + 0.1 x 10,000,000 = 1,000,009 nsec ~ 1,000 microseconds = 1 msec.
Hit ratio h = 0.99  AS = 0.99 x 10 + 0.01 x 10,000,000 = 100,009.9 nsec ~ 100 microseconds = 0.1 msec.
Hit ratio h = 0.999  AS = 0.999 x 10 + 0.001 x 10,000,000 = 10,009.99 nsec ~ 10 microseconds = 0.01 msec.
Hit ratio h = 0.9999  AS = 0.9999 x 10 + 0.0001 x 10,000,000 = 1,009.999 nsec ~ 1 microsecond.
This considerable slowdown is due to the very large discrepancy (six orders of magnitude) between the latencies of the primary and the secondary device.
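
The slide's arithmetic can be checked with a few lines of C:

```c
#include <stdio.h>

int main(void) {
    const double LP = 10.0;   /* primary device latency, nsec (RAM)    */
    const double LS = 10e6;   /* secondary device latency, nsec (disk) */
    const double h[] = {0.90, 0.99, 0.999, 0.9999};

    for (int i = 0; i < 4; i++) {
        double AS = h[i] * LP + (1.0 - h[i]) * LS;  /* average latency */
        printf("h = %-7g AS = %12.3f nsec\n", h[i], AS);
    }
    return 0;
}
```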

15 The performance of a two-level memory (cont’d)
Statement: if each reference occurs with equal frequency to a cell in the primary and in the secondary device, then the combined memory will operate at the speed of the secondary device.
The size SizeP << SizeS:
SizeP  number of cells of the primary device.
SizeS  number of cells of the secondary device.
SizeS = K x SizeP, with K large (1/K small).

16 Locality of reference
Concentration of references:
Spatial locality of reference.
Temporal locality of reference.
Reasons for locality of references:
Programs consist of sequences of instructions interrupted by branches.
Data structures group together related data elements.
Working set  the collection of references made by an application in a given time window. If the working set is larger than the number of cells of the primary device, performance degrades significantly. The sketch below contrasts traversals with good and poor spatial locality.
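
An illustrative example of spatial locality (mine, not from the slides): row-major traversal of a matrix touches consecutive memory cells and hits the cache, while column-major traversal strides across rows and misses far more often.

```c
#define ROWS 4096
#define COLS 4096
static int a[ROWS][COLS];  /* C stores this array row by row */

long sum_row_major(void) {          /* good spatial locality */
    long s = 0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            s += a[i][j];           /* consecutive addresses */
    return s;
}

long sum_col_major(void) {          /* poor spatial locality */
    long s = 0;
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            s += a[i][j];           /* stride of COLS * sizeof(int) bytes */
    return s;
}
```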

