HW/Study Guide

Synchronization Make sure you understand the HW problems!

global shared int counter = 0, BUFFER_SIZE = 10 ;

Producer:
    while (1) {
        while (counter == BUFFER_SIZE) ;   // do nothing
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Consumer:
    while (1) {
        while (counter == 0) ;             // do nothing
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;                         // consume the item
    }

Identify the race condition in this version of the producer/consumer problem.
–The race condition is the incrementing and decrementing of the shared variable counter: counter++ and counter-- are not atomic, so an update made by one process can be overwritten by the other and lost.
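
A standalone sketch (not part of the HW answer; the file name, thread functions, and iteration count are invented) that makes the race visible: two threads do nothing but counter++ and counter--, yet the final value is usually not 0, because each ++ or -- is a separate load, modify, and store.

/* race.c - illustrative sketch only, not part of the HW answer.
   Compile with: gcc race.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000

int counter = 0;                     /* shared and unprotected on purpose */

void *inc(void *arg) {               /* stands in for the producer's counter++ */
    for (int i = 0; i < ITERS; i++)
        counter++;
    return NULL;
}

void *dec(void *arg) {               /* stands in for the consumer's counter-- */
    for (int i = 0; i < ITERS; i++)
        counter--;
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, inc, NULL);
    pthread_create(&b, NULL, dec, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d\n", counter);   /* 0 only if ++/-- were atomic */
    return 0;
}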

Fix this race condition using the TestAndSet hardware instruction.

global shared int counter = 0, BUFFER_SIZE = 10 ;
shared int lock = 0 ;

Producer:
    while (1) {
        while (counter == BUFFER_SIZE) ;   // do nothing
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        while (TestAndSet(lock) == 1) ;    // busy wait
        counter++;
        lock = 0 ;
    }

global shared int counter = 0, BUFFER_SIZE = 10 ;
shared int lock = 0 ;

Consumer:
    while (1) {
        while (counter == 0) ;             // do nothing
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        while (TestAndSet(lock) == 1) ;    // busy wait
        counter--;
        lock = 0 ;
    }
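
For reference, the same busy-wait lock written in standard C using the C11 atomic_flag, which is the test-and-set primitive the language provides (a sketch only; the file and function names are invented, and this is not the pseudocode form expected in the HW answer):

/* tas.c - test-and-set spin lock sketch using C11 atomic_flag.
   Compile with: gcc tas.c -pthread */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define ITERS 1000000

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear == unlocked (lock = 0) */
int counter = 0;

void acquire(void) {
    /* atomic_flag_test_and_set returns the OLD value, just like the
       TestAndSet pseudocode: spin while someone else holds the lock. */
    while (atomic_flag_test_and_set(&lock))
        ;                              /* busy wait */
}

void release(void) {
    atomic_flag_clear(&lock);          /* lock = 0 */
}

void *worker(void *arg) {
    for (int i = 0; i < ITERS; i++) {
        acquire();
        counter++;                     /* critical section */
        release();
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d (expected %d)\n", counter, 2 * ITERS);
    return 0;
}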

Now assume there is still one producer but there are now two consumers.
–Does this introduce any additional race conditions? (The correct answer is yes!)

If so, where does it occur? –The race condition occurs when the variable out is accessed since this is now shared by the two consumers. Now fix this additional race condition using a semaphore.

global shared int counter = 0, BUFFER_SIZE = 10 ;
shared int lock = 0 ;
global shared int out = 0 ;          // Now the two consumers must share out.
struct semaphore mutex = 1 ;         // Must supply the semaphore.

Consumer:
    while (1) {
        while (counter == 0) ;             // do nothing
        wait(mutex) ;
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        signal(mutex) ;
        while (TestAndSet(lock) == 1) ;    // busy wait
        counter--;
        lock = 0 ;
    }

Note that the producer code does NOT have to be modified since it does not use out.
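
A hedged sketch of just that fix in compilable form: two consumer threads share out and serialize access to it with a POSIX semaphore used as a mutex (the file name, loop bounds, and printouts are invented for illustration):

/* mutex_sem.c - "semaphore as mutex" around the shared index out.
   Compile with: gcc mutex_sem.c -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 10

int out = 0;                          /* shared by the two consumers      */
sem_t mutex;                          /* binary semaphore, initialized to 1 */

void *consumer(void *arg) {
    for (int i = 0; i < 5; i++) {
        sem_wait(&mutex);             /* wait(mutex)                      */
        int slot = out;               /* read and advance the shared index */
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);             /* signal(mutex)                    */
        printf("consumer %ld takes slot %d\n", (long)arg, slot);
    }
    return NULL;
}

int main(void) {
    pthread_t c1, c2;
    sem_init(&mutex, 0, 1);
    pthread_create(&c1, NULL, consumer, (void *)1L);
    pthread_create(&c2, NULL, consumer, (void *)2L);
    pthread_join(c1, NULL);
    pthread_join(c2, NULL);
    return 0;                         /* each slot index is taken exactly once */
}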

Assume I have just learned about using semaphores to synchronize the order in which certain statements are executed. I think this is really cool and want to give it a try. So I want to use semaphores to enforce the following execution order:
–Statement S1 of process P1 executes before statement S2 of process P2.
–Statement S2 of process P2 executes before statement S3 of process P3.
–Statement S3 of process P3 executes before statement S1 of process P1.

Use semaphores to enforce this ordering, or show how this may not be such a great idea (i.e., what is the problem here?).

This ordering cannot be enforced since it creates a cyclic waiting condition that would result in deadlock. This can be seen clearly when you look at the requested ordering of the statements: S1 → S2 → S3 → S1.
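
To see the deadlock concretely, here is an illustrative sketch with threads and POSIX semaphores (the semaphore names sa, sb, sc and the printfs are invented): each thread's first action is a wait that can only be satisfied after another thread's wait succeeds, so the program hangs and never prints anything.

/* deadlock.c - illustrative sketch; compile with: gcc deadlock.c -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t sa, sb, sc;                       /* all start at 0 */

void *p1(void *x) { sem_wait(&sc); printf("S1\n"); sem_post(&sa); return NULL; }
void *p2(void *x) { sem_wait(&sa); printf("S2\n"); sem_post(&sb); return NULL; }
void *p3(void *x) { sem_wait(&sb); printf("S3\n"); sem_post(&sc); return NULL; }

int main(void) {
    pthread_t t1, t2, t3;
    sem_init(&sa, 0, 0); sem_init(&sb, 0, 0); sem_init(&sc, 0, 0);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t3, NULL, p3, NULL);
    /* Circular wait: each post can only happen after another wait returns,
       so all three threads block forever and the joins below never finish. */
    pthread_join(t1, NULL); pthread_join(t2, NULL); pthread_join(t3, NULL);
    return 0;
}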

Now assume we have four processes and want statement S1 in P1 to execute before statement S2 in P2, which executes before S3 in P3. Also, we want statement S4 in P4 to execute after S2 in P2. Use semaphores to enforce this ordering. You must explicitly initialize any semaphores you use.

struct semaphore S1 = 0 ;
struct semaphore S2 = 0 ;

P1:             P2:             P3:             P4:
S1 ;            wait(S1) ;      wait(S2) ;      wait(S2) ;
signal(S1) ;    S2 ;            S3 ;            S4 ;
                signal(S2) ;
                signal(S2) ;
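
A compilable version of the same answer, sketched with POSIX semaphores and threads standing in for the four processes (the file name, thread functions, and printfs are invented for this illustration; P2 posts S2 twice because both P3 and P4 wait on it):

/* order.c - sketch only; compile with: gcc order.c -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s1, s2;                       /* both initialized to 0 */

void *p1(void *x) { printf("S1\n"); sem_post(&s1); return NULL; }
void *p2(void *x) { sem_wait(&s1); printf("S2\n");
                    sem_post(&s2); sem_post(&s2); return NULL; }
void *p3(void *x) { sem_wait(&s2); printf("S3\n"); return NULL; }
void *p4(void *x) { sem_wait(&s2); printf("S4\n"); return NULL; }

int main(void) {
    pthread_t t[4];
    sem_init(&s1, 0, 0);
    sem_init(&s2, 0, 0);
    pthread_create(&t[0], NULL, p1, NULL);
    pthread_create(&t[1], NULL, p2, NULL);
    pthread_create(&t[2], NULL, p3, NULL);
    pthread_create(&t[3], NULL, p4, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;                       /* S1 before S2; S3 and S4 only after S2 */
}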

Assume there is one producer process and one consumer process, and that they share a buffer with 10 slots. Implement the producer/consumer problem using semaphores. You must explicitly initialize the semaphores that you use.

BUFFER_SIZE = 10 ;
struct semaphore full = 0 ;
struct semaphore empty = 10 ;

Producer:
    while (1) {
        wait(empty) ;
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        signal(full) ;
    }

Consumer:
    while (1) {
        wait(full) ;
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        signal(empty) ;
    }
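
The same solution fleshed out as a compilable single-producer/single-consumer program using POSIX semaphores (the file name, item values, and loop counts are invented). With only one producer and one consumer, in and out are each touched by a single thread, so no extra mutex is needed, matching the answer above.

/* pc.c - bounded-buffer sketch; compile with: gcc pc.c -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 10
#define ITEMS 20

int buffer[BUFFER_SIZE];
int in = 0, out = 0;
sem_t full, empty;                  /* full = 0, empty = BUFFER_SIZE */

void *producer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);           /* block if no free slot       */
        buffer[in] = i;             /* nextProduced                */
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&full);            /* one more filled slot        */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);            /* block if nothing to consume */
        int item = buffer[out];     /* nextConsumed                */
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&empty);           /* one more free slot          */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&full, 0, 0);
    sem_init(&empty, 0, BUFFER_SIZE);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}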

Paging Assume a 16-bit virtual address space with pages that are 2048 bytes. How many pages are in the logical address space? –The number of bits required to access any byte in the page (i.e., the offset) is 11. This leaves 5 bits for the logical page. Thus there are 2^5 or 32 pages.
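
The same arithmetic checked in a few lines of C (the file and variable names are made up for this sketch):

/* pages.c - pages in a 16-bit logical address space with 2048-byte pages */
#include <stdio.h>

int main(void) {
    unsigned addr_bits   = 16;      /* 16-bit logical addresses        */
    unsigned offset_bits = 11;      /* 2048-byte pages: 2048 = 2^11    */
    unsigned page_bits   = addr_bits - offset_bits;   /* 5 bits        */
    printf("pages = %u\n", 1u << page_bits);          /* 2^5 = 32      */
    return 0;
}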

Paging Consider the page table (shown on the next slide) for some process in this same system, and assume the logical address is 2052. To what physical address will this logical address be mapped? Show the steps you took to determine this address.

Step 1: Convert 2052 to binary: 2052 = 0000 1000 0000 0100.
Step 2: Take the topmost 5 bits (00001, i.e., logical page 1) and use them as an index into the process's page table.

Step 3: Take the physical page frame number from the page table

Step 4: Concatenate the 11-bit offset (000 0000 0100) to the physical page frame number to get the final physical address = 2052.
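
The whole translation can be checked with shifts and masks. A small hedged sketch follows; since the page table slide itself is not reproduced here, the frame value below is an assumption inferred from the stated result of 2052 (logical page 1 mapping to frame 1).

/* translate.c - check the worked example with shifts and masks */
#include <stdio.h>

int main(void) {
    unsigned logical = 2052;                     /* 0000 1000 0000 0100     */
    unsigned offset  = logical & 0x7FF;          /* low 11 bits  -> 4       */
    unsigned page    = logical >> 11;            /* top 5 bits   -> page 1  */
    unsigned frame   = 1;                        /* page table entry for
                                                    page 1 (assumed)        */
    unsigned physical = (frame << 11) | offset;  /* frame concatenated with
                                                    offset                  */
    printf("page=%u offset=%u physical=%u\n", page, offset, physical);
    return 0;
}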

What is the Translation Lookaside Buffer (TLB)? –A very fast associative memory that caches page table entries. What purpose does it serve? –If a translation is found in the TLB, the hardware avoids having to go to the page table in main memory, so it avoids the latency of that extra memory access.
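
As a purely illustrative software model (a real TLB is hardware and compares all entries in parallel; the sizes, names, and trivial replacement policy below are invented), the hit/miss logic looks roughly like this:

/* tlb.c - toy model of a TLB in front of a page table */
#include <stdio.h>
#include <stdbool.h>

#define TLB_ENTRIES 8
#define OFFSET_BITS 11                 /* 2048-byte pages, as above      */
#define NUM_PAGES   32

struct tlb_entry { bool valid; unsigned page; unsigned frame; };
static struct tlb_entry tlb[TLB_ENTRIES];
static unsigned page_table[NUM_PAGES]; /* full table lives in main memory */

unsigned translate(unsigned logical, const char **how) {
    unsigned page   = logical >> OFFSET_BITS;
    unsigned offset = logical & ((1u << OFFSET_BITS) - 1);

    /* TLB lookup */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].page == page) {
            *how = "TLB hit";
            return (tlb[i].frame << OFFSET_BITS) | offset;
        }

    /* TLB miss: pay for an extra memory access to read the page table,
       then cache the mapping (trivial replacement: slot = page mod size) */
    *how = "TLB miss";
    unsigned frame = page_table[page];
    tlb[page % TLB_ENTRIES] = (struct tlb_entry){ true, page, frame };
    return (frame << OFFSET_BITS) | offset;
}

int main(void) {
    const char *how;
    page_table[1] = 1;                 /* logical page 1 -> frame 1 (assumed) */
    unsigned phys = translate(2052, &how);
    printf("physical=%u (%s)\n", phys, how);   /* first access: miss */
    phys = translate(2052, &how);
    printf("physical=%u (%s)\n", phys, how);   /* second access: hit */
    return 0;
}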

Consider a 64-bit address space with 4K pages. How many pages are there in this virtual address space? –Since the page offset requires 12 bits, this leaves 52 bits for the logical page number. Thus there are 2^52 logical pages in the address space. Is this a big number? –Yes, it is a big number (about 4.5 × 10^15 pages), which is why a flat, single-level page table is impractical here. You will most likely be asked to work through a two-level page table example.
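
As a warm-up for that, here is a minimal sketch of the index arithmetic for the common textbook two-level example: a 32-bit address with 4K pages and the page number split 10/10 into outer and inner indices (the 64-bit space above would need more levels in practice; the address value is arbitrary).

/* twolevel.c - index arithmetic for a hypothetical two-level page table */
#include <stdio.h>

int main(void) {
    unsigned long addr   = 0x01234ABCUL;          /* arbitrary example address */
    unsigned long offset = addr & 0xFFF;          /* low 12 bits               */
    unsigned long inner  = (addr >> 12) & 0x3FF;  /* next 10 bits: page table  */
    unsigned long outer  = (addr >> 22) & 0x3FF;  /* top 10 bits: directory    */
    printf("outer=%lu inner=%lu offset=%lu\n", outer, inner, offset);
    return 0;
}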