1
The Structure of the “THE”-Multiprogramming System
Edsger W. Dijkstra
Presented by Jimmy Pierce
2
Progress report on the multiprogramming effort at the Department of Mathematics at the Technological University in Eindhoven (1968).

                     EL X8       Modern
Memory Cycle Time    2.5 μs      5 - 70 ns
Address Size         27 bits     64 bits
Memory Size          32K         1 - 2 GB
3
Goals
- Reduction in turnaround time for programs of short duration
- Economic use of peripherals
- Automatic control of the backing store combined with economic use of the central processor
- Economic feasibility of using the machine for programs that need only the flexibility of the processor, not its full capacity or power
- NOT a multi-access system
- Testable
4
Guiding Principles
- Select a project as ambitious as can be justified
- Select a machine with sound basic characteristics
- Experience by itself does not lead to wisdom and understanding
- Don't incorporate features that only add work
- Don't depend on specific properties of a particular machine
- Make an effort to learn as much as possible from past experiences
5
Layered Operating System
6
- Bottom layers provide abstractions of resources
- Top layers access lower levels for resources
- Access always proceeds from the top down
7
Testing
- Done in stages
- The bottom layer is thoroughly tested before the layer above it is implemented
- Allows full system testing in manageable time: test effort adds across layers instead of multiplying
  (5 + 5 + 5 + 5 + 5 + 5 = 30 cases vs. 5 * 5 * 5 * 5 * 5 * 5 = 15,625 combinations)
- “… testing had not yet been completed, but the resulting system is guaranteed to be flawless.”
8
Level 0
Contains: processor allocation, real-time clock interrupt
Provides: processor abstraction (each process appears to have its own processor)
9
Processor Allocation
- Only the unidirectional passing of time through the various program states has meaning, not the amount of time
- Delaying a process's execution has no harmful effect on the internal logic of the process
- Thus many processes can execute independently of the number of actual processors, as long as the processors can switch among the processes
- Semaphores are used to regulate cooperation between processes
10
Semaphores
- Integer variable initialized to 1 or 0
- Two atomic operations, P and V (sketched below)
- P decrements the variable and places the calling process on a waiting list if the new value is negative
- V increments the variable and removes a process from the waiting list if the new value is non-positive
- Works because delaying a process has no ill effects
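The slides describe P and V only in words; the following is a minimal C sketch of a semaphore with this decrement-first behaviour. The struct layout, the wakeups counter, and the pthread primitives standing in for the waiting list are assumptions for illustration, not the THE system's actual uninterruptible kernel implementation.

    #include <pthread.h>

    /* Counting semaphore in the style the slide describes: the value
     * goes negative while processes are waiting, and P and V are the
     * only operations. */
    typedef struct {
        int value;                 /* initialized to 1 or 0            */
        int wakeups;               /* pending grants to waiting procs  */
        pthread_mutex_t lock;      /* makes P and V atomic             */
        pthread_cond_t  wait_list; /* stands in for the waiting list   */
    } semaphore;

    void semaphore_init(semaphore *s, int initial) {
        s->value = initial;
        s->wakeups = 0;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->wait_list, NULL);
    }

    /* P: decrement; if the new value is negative, join the waiting list. */
    void P(semaphore *s) {
        pthread_mutex_lock(&s->lock);
        if (--s->value < 0) {
            do {
                pthread_cond_wait(&s->wait_list, &s->lock);
            } while (s->wakeups < 1);
            s->wakeups--;
        }
        pthread_mutex_unlock(&s->lock);
    }

    /* V: increment; if the new value is non-positive, release one waiter. */
    void V(semaphore *s) {
        pthread_mutex_lock(&s->lock);
        if (++s->value <= 0) {
            s->wakeups++;
            pthread_cond_signal(&s->wait_list);
        }
        pthread_mutex_unlock(&s->lock);
    }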
11
Mutual Exclusion
- Protection of critical sections
- The mutex semaphore is globally accessible and initialized to 1
- Pattern:
    mutex = 1;
    P(mutex);
      ... critical section ...
    V(mutex);
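A hedged usage example of the pattern above, substituting POSIX sem_wait/sem_post for Dijkstra's P/V; the counter and thread setup are invented purely to demonstrate the critical section.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    /* Binary semaphore playing the role of the slide's globally
     * accessible mutex. */
    static sem_t mutex;
    static long shared_counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);      /* P(mutex)          */
            shared_counter++;      /* critical section  */
            sem_post(&mutex);      /* V(mutex)          */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        sem_init(&mutex, 0, 1);    /* mutex = 1 */
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", shared_counter);  /* always 200000 */
        return 0;
    }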
12
Private Semaphores
- Used to block a process while its environment is being configured (e.g. creation of a reader-writer buffer)
- A private semaphore can be released (V'd) by any process but locked (P'd) only by its owner
- Pattern, as executed by the requesting process (sketched in C below):
    mutex = 1; privateSemaphore = 0;
    P(mutex);
      ... modify global state variables ...
      V(privateSemaphore);   if the request can be granted immediately
    V(mutex);
    P(privateSemaphore);     blocks until the request is granted
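A sketch of how the private-semaphore pattern might look in C, assuming a single hypothetical resource and a simple wait list; only the owning process ever waits on its private semaphore, while any process may post it to grant the request. Names and the LIFO wait list are illustrative assumptions.

    #include <semaphore.h>
    #include <stdbool.h>

    #define MAX_PROCS 8

    static sem_t mutex;                   /* guards the global state   */
    static sem_t private_sem[MAX_PROCS];  /* one per process, init 0   */
    static bool resource_free = true;     /* the shared global state   */
    static int  wait_queue[MAX_PROCS];    /* ids of blocked requesters */
    static int  waiting = 0;

    void init_allocator(void) {
        sem_init(&mutex, 0, 1);
        for (int i = 0; i < MAX_PROCS; i++)
            sem_init(&private_sem[i], 0, 0);
    }

    void request_resource(int id) {
        sem_wait(&mutex);                 /* P(mutex)                     */
        if (resource_free) {
            resource_free = false;
            sem_post(&private_sem[id]);   /* grant own request            */
        } else {
            wait_queue[waiting++] = id;   /* record the pending request   */
        }
        sem_post(&mutex);                 /* V(mutex)                     */
        sem_wait(&private_sem[id]);       /* P(privateSemaphore): blocks
                                             unless the request was granted */
    }

    void release_resource(void) {
        sem_wait(&mutex);
        if (waiting > 0) {
            int next = wait_queue[--waiting];
            sem_post(&private_sem[next]); /* grant a waiter's request     */
        } else {
            resource_free = true;
        }
        sem_post(&mutex);
    }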
13
Level 1
Contains: segment controller
Provides: memory abstraction (each process has a segment of memory to use)
14
Memory Allocation
- A unique identifier for each segment of memory
- Detaches the segment from any particular core or drum identifier
- A segment can be located in any page, in core or on drum
- The segment identifier in core indicates whether the segment is empty or non-empty and gives its page address
- A segment swapped out of core does not have to return to the same drum page from which it originated
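The slides do not show the segment administration; the sketch below is a hypothetical segment table that illustrates the key point, namely that a segment is named by its identifier rather than by the core or drum page that currently holds it. Field names and sizes are invented.

    #include <stdint.h>

    /* Hypothetical segment table: the identifier is stable, the
     * physical page is not. */
    typedef enum { SEG_EMPTY, SEG_IN_CORE, SEG_ON_DRUM } seg_location;

    typedef struct {
        uint32_t     id;      /* unique segment identifier       */
        seg_location where;   /* empty, in core, or on drum      */
        uint32_t     page;    /* core page or drum page number   */
    } segment_entry;

    #define MAX_SEGMENTS 1024
    static segment_entry segment_table[MAX_SEGMENTS];

    /* When swapped out, a segment may land on any free drum page;
     * the page it originally came from is simply freed. */
    void swap_out(segment_entry *seg, uint32_t free_drum_page) {
        seg->where = SEG_ON_DRUM;
        seg->page  = free_drum_page;
    }

    void swap_in(segment_entry *seg, uint32_t free_core_page) {
        seg->where = SEG_IN_CORE;
        seg->page  = free_core_page;
    }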
15
Level 2
Contains: message interpreter
Provides: device abstraction (each process has its own individual device handle)
16
Message Interpreter
- Handles interaction between processes and the user
- All processes share the console using mutual synchronization
- A process sends a message to the console with an identification header
- The operator must identify which process to communicate with
- Memory segments contain the vocabulary used for messages
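A hypothetical sketch of what a console message with an identification header might look like; the field names and sizes are invented for illustration only.

    #include <stdint.h>

    /* The identification header lets the operator tell which process
     * is talking, and replies are routed back by naming that process. */
    typedef struct {
        uint16_t process_id;   /* identification header                 */
        uint16_t vocab_index;  /* index into the message vocabulary
                                  held in a memory segment              */
        char     text[72];     /* message body shown on the console     */
    } console_message;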
17
Level 3
Contains: communication units
Provides: logical communication units (buffering is handled automatically via communication channels)
18
Communication Buffering
- Mutual exclusion is needed to allocate peripherals to processes
- Processes must be able to communicate with the operator (hardware failure, anomalous conditions)
- Peripherals are wrapped by buffers and abstracted into logical communication units
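The slides give no buffering code; the following is a standard bounded-buffer sketch using counting semaphores, the usual way a peripheral is wrapped into a logical communication unit so producers and consumers never touch the raw device. Buffer size and names are assumptions.

    #include <semaphore.h>

    #define BUF_SIZE 16

    static char  buffer[BUF_SIZE];
    static int   in_idx = 0, out_idx = 0;
    static sem_t empty_slots;   /* counts free slots, starts at BUF_SIZE */
    static sem_t full_slots;    /* counts filled slots, starts at 0      */
    static sem_t buf_mutex;     /* protects the indices, starts at 1     */

    void buffer_init(void) {
        sem_init(&empty_slots, 0, BUF_SIZE);
        sem_init(&full_slots, 0, 0);
        sem_init(&buf_mutex, 0, 1);
    }

    void buffer_put(char c) {           /* called by the producing process */
        sem_wait(&empty_slots);         /* wait for room                   */
        sem_wait(&buf_mutex);
        buffer[in_idx] = c;
        in_idx = (in_idx + 1) % BUF_SIZE;
        sem_post(&buf_mutex);
        sem_post(&full_slots);          /* signal a filled slot            */
    }

    char buffer_get(void) {             /* called by the consuming process */
        sem_wait(&full_slots);          /* wait for data                   */
        sem_wait(&buf_mutex);
        char c = buffer[out_idx];
        out_idx = (out_idx + 1) % BUF_SIZE;
        sem_post(&buf_mutex);
        sem_post(&empty_slots);         /* signal a free slot              */
        return c;
    }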
19
Level 4
Contains: independent user programs
Provides: independent user code executed “simultaneously”
20
Level 5
Contains: the operator (sold separately)
21
Proving Harmonious Cooperation
- A process at rest is at its homing position
- When a process accepts a task, it leaves its homing position
- Upon completing the task, the process returns to its homing position
Facts:
22
A process performing a task will only generate a finite number of tasks for other processes
- A process can only generate tasks for processes at lower levels
- Thus there can be no circular chains of process calls
- Attention must be paid to ensure that a process requesting a segment from the segment controller remains in core when the call returns
23
It is impossible for all processes to have returned to their homing positions while a generated task is still pending and unaccepted
- Processes leave their homing positions when tasks are accepted
- Calls to lower levels cause higher levels to block until the call returns
- Processes return to their homing positions when their tasks are completed
24
Each process will eventually return to its homing position after accepting a task
- Calls cannot generate an infinite number of child tasks
- Any blocked call is due to a lower-level process performing work on the higher level's behalf
- The unidirectional flow through the layers prevents circular waits (A waiting on B waiting on C waiting on A)
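As a small illustration (not from the paper) of the last point, the sketch below shows level-ordered acquisition: if every process takes resources strictly from higher to lower levels, the circular wait A-on-B-on-C-on-A can never close. The three semaphores are hypothetical stand-ins for resources owned by three layers.

    #include <semaphore.h>

    static sem_t level2_res, level1_res, level0_res;

    void resources_init(void) {
        sem_init(&level2_res, 0, 1);
        sem_init(&level1_res, 0, 1);
        sem_init(&level0_res, 0, 1);
    }

    void do_layered_work(void) {
        /* Every process acquires strictly top-down: level 2, then 1,
         * then 0. No process ever holds a lower-level resource while
         * waiting for a higher-level one, so no wait cycle can form. */
        sem_wait(&level2_res);
        sem_wait(&level1_res);
        sem_wait(&level0_res);
        /* ... work that spans the layers ... */
        sem_post(&level0_res);
        sem_post(&level1_res);
        sem_post(&level2_res);
    }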