Chapter 1.2: Introduction
These slides, originally provided by the textbook authors, have been modified by your instructor.
1.2 Computer-System Operation
I/O devices and the CPU can execute concurrently.
A device controller is normally in charge of a particular device type.
USB controllers – in very common use for flash drives, printers, cameras, and many other external devices.
A disk controller may have several disks connected. The disk controller must synchronize data being written to and retrieved from the disks it is controlling. It deals with timing, buffering, contention, the actual data transfer and handling, parity checking, and more.
Even when the device is ready, a disk access still involves seek time, rotational delay, head select, and data transfer (see the worked timing sketch below).
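To make that last point concrete, here is a minimal sketch in C of how seek time, rotational delay, and transfer time add up for a single disk access. All of the numbers are assumptions chosen for illustration; they do not come from the slides or from any particular drive.

```c
/* A minimal sketch (hypothetical numbers) of how the pieces of one disk
 * access add up.  The values are illustrative orders of magnitude only. */
#include <stdio.h>

int main(void) {
    double seek_ms       = 8.0;    /* assumed average seek time              */
    double rpm           = 7200.0; /* assumed spindle speed                  */
    double rotational_ms = 0.5 * (60000.0 / rpm); /* average = half a rotation */
    double transfer_ms   = 0.1;    /* assumed time to transfer one block     */

    printf("approx. access time: %.2f ms\n",
           seek_ms + rotational_ms + transfer_ms);
    return 0;
}
```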
More on Controllers
A device controller typically has a local buffer providing temporary storage of data. The data is to be written to one of the devices the controller is 'controlling', or perhaps written to RAM.
Controller operations are asynchronous with respect to the CPU.
Remember, these controllers are like little computers: they execute special programs and have local buffer storage, registers for data transfer, etc. Some of these registers differ from CPU registers (shift registers, etc.).
The type of I/O transfer described here and on the next slide is effective for low-speed / low-volume I/O.
More on Controllers
When data is to be written to, say, a disk, the device controller must copy the data from its own local storage to the device itself (and conversely for a read).
The device driver, part of the operating system, starts the process by loading registers within the device controller. Among other things, the register contents indicate the operation the disk controller is to perform (read, write, addresses of the target, …).
The device controller can now transfer data to/from its local buffer. Once the data is transferred from, say, primary memory or the keyboard to/from the device controller's local storage, the device controller sends an 'interrupt' back to the device driver.
The device driver returns control back to the operating system – perhaps passing a pointer to the data just read into primary memory; sometimes status indicators are returned.
The CPU then handles the interrupt and resumes processing, possibly with the same process or some other process. (A hedged code sketch of this driver/controller hand-off follows.)
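Below is a minimal sketch of the driver/controller interaction just described, written as C against memory-mapped controller registers. The base address, register layout, command encoding, and function names are all hypothetical; real controllers and drivers define their own.

```c
/* A minimal sketch of a driver programming a device controller through
 * memory-mapped registers.  Addresses, bit values, and names are invented. */
#include <stdint.h>

#define CTRL_BASE   0x40000000u   /* assumed controller base address */
#define REG_CMD     (*(volatile uint32_t *)(uintptr_t)(CTRL_BASE + 0x00))
#define REG_ADDR    (*(volatile uint32_t *)(uintptr_t)(CTRL_BASE + 0x04))
#define REG_STATUS  (*(volatile uint32_t *)(uintptr_t)(CTRL_BASE + 0x08))

#define CMD_READ    0x1u          /* hypothetical command encoding   */
#define ST_ERROR    0x2u          /* hypothetical status bit         */

/* Driver side: load the controller's registers to start a read. */
void start_disk_read(uint32_t disk_block)
{
    REG_ADDR = disk_block;   /* target address on the device                 */
    REG_CMD  = CMD_READ;     /* desired operation; controller starts working */
    /* The driver does not spin here: the controller copies the block into
     * its local buffer and raises an interrupt when the transfer is done.   */
}

/* Invoked (via the interrupt mechanism) when the controller finishes. */
void disk_read_complete(void)
{
    if (REG_STATUS & ST_ERROR) {
        /* a real driver would record the error status here */
    }
    /* hand the data / status back to the OS, which resumes a process */
}
```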
Common Functions of Interrupts
The CPU accommodates the interrupt by transferring control through an interrupt vector located in a reserved part of memory. Each location in this interrupt vector (table) contains the address of a routine to 'handle' a particular type of interrupt. These routines are called interrupt handlers.
There are a number of classes of interrupts, and each interrupt handler is developed to address a specific class. Some typical interrupt classes (again, much more later) include:
Input/output interrupts
Supervisor calls (system calls)
Machine check interrupts (a device fails)
External interrupts (e.g., the computer operator can interrupt)
Others…
A hedged sketch of an interrupt vector follows.
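Here is a minimal sketch of an interrupt vector as a table of handler addresses. The classes mirror the list above, but the numbering, handler names, and dispatch function are invented for illustration; real hardware fixes the table's format and location.

```c
/* A minimal sketch of an interrupt vector: one handler address per class. */
#include <stddef.h>

enum irq_class { IRQ_IO = 0, IRQ_SYSCALL, IRQ_MACHINE_CHECK, IRQ_EXTERNAL, IRQ_COUNT };

typedef void (*interrupt_handler_t)(void);

static void io_handler(void)            { /* service a finished I/O operation */ }
static void syscall_handler(void)       { /* handle a supervisor call         */ }
static void machine_check_handler(void) { /* react to a hardware failure      */ }
static void external_handler(void)      { /* operator / external interrupt    */ }

/* The "reserved part of memory": handler addresses indexed by class. */
static interrupt_handler_t interrupt_vector[IRQ_COUNT] = {
    [IRQ_IO]            = io_handler,
    [IRQ_SYSCALL]       = syscall_handler,
    [IRQ_MACHINE_CHECK] = machine_check_handler,
    [IRQ_EXTERNAL]      = external_handler,
};

/* Conceptually what the hardware does: index the table by interrupt number. */
void dispatch_interrupt(enum irq_class irq)
{
    if (irq < IRQ_COUNT && interrupt_vector[irq] != NULL)
        interrupt_vector[irq]();
}
```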
Interrupt Handling
So, what does the CPU do in response to the interrupt? While what the CPU does is unique to the type of interrupt, let's assume it was running some other program:
1. The interrupt architecture must save the address of the interrupted instruction (the instruction address in, say, the executing program) and, in general, the state of your program when it was interrupted (register settings, buffer contents, etc.).
2. Determine the type of interrupt (all interrupts are not created equal).
3. Disable 'certain' interrupts from interrupting the handling of 'this' interrupt (of course, any interrupts that arrive must be recorded for future handling…).
4. Accommodate the interrupt (do what needs to be done).
5. Return to the interrupted operations – or maybe to some different process.
A sketch of this sequence, as pseudocode in C, follows the list.
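The five steps can be sketched as pseudocode in C. The structures and helper routines (save_cpu_state, mask_interrupts, and so on) are hypothetical placeholders, not a real kernel API.

```c
/* A minimal sketch of the five interrupt-handling steps above.
 * All helpers are hypothetical; this is pseudocode in C syntax. */

struct cpu_state { unsigned long pc; unsigned long regs[16]; };

extern struct cpu_state saved;                       /* where the state goes  */
extern void save_cpu_state(struct cpu_state *s);     /* step 1                */
extern int  read_interrupt_type(void);               /* step 2                */
extern void mask_interrupts(int type);               /* step 3 (later arrivals*/
extern void unmask_interrupts(void);                 /*  are kept pending)    */
extern void handle_interrupt(int type);              /* step 4 (via vector)   */
extern void restore_and_resume(struct cpu_state *s); /* step 5                */

void on_interrupt(void)
{
    save_cpu_state(&saved);           /* 1. save PC and register state        */
    int type = read_interrupt_type(); /* 2. which class of interrupt?         */
    mask_interrupts(type);            /* 3. avoid being interrupted mid-handler */
    handle_interrupt(type);           /* 4. do what needs to be done          */
    unmask_interrupts();
    restore_and_resume(&saved);       /* 5. resume this process, or another   */
}
```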
Interrupt Handling
So operating system software is highly complicated. These kinds of operating systems are said to be interrupt driven.
Much more later when we discuss Processor Management. This is only a very brief overview…
Interrupt Timeline
Consider: the CPU is busy executing some process. In the background, an I/O device, say a disk, is transferring data from itself into primary memory.
When the transfer is complete, a signal (an interrupt) is sent to the CPU. The CPU can suspend its 'current' operations and process the I/O interrupt. Of course, at this time the I/O device becomes idle.
Note that the CPU time to process the interrupt is very small (much smaller than the chart indicates).
Then the CPU resumes normal processing (whatever that might be). It might resume the suspended process or start executing a different process.
This is sufficient for now. Much more later.
[Timeline figure: CPU processing the interrupt; CPU resumes processing; I/O device is now idle again.]
1.2.2 Storage Structure
Popular Misconceptions and Basic Concepts.
For a program to be executed by the central processor (CP / CPU – note that I use the terms interchangeably), the program instructions must first be loaded into main memory (primary memory / central memory – also synonymous terms).
Programs are normally stored on disk or some other medium until needed. Then they are 'read into' primary memory in order to be executed by the CPU.
Individual instructions are executed in the CPU one at a time, once fetched from memory. Some call this the 'fetch–execute' cycle.
Memory Access – Simplification
Primary memory is the only memory the central processor can access directly.
It is normally made of high-speed, expensive semiconductor material (more later), whose cells can change state (1 to 0 and 0 to 1) very rapidly. Normally this technology is called dynamic random access memory (DRAM).
Memory Structure – Primary memory is usually organized into huge arrays. Each word is normally 32 bits (later I will tell you this is a lie!), and each word (or byte – normally eight bits) is uniquely addressable.
In memory, there is no easy way to distinguish instructions from data. The memory unit contains huge arrays of bits, bytes (eight bits), and words; how these are used is determined by the executing programs (see the sketch below).
Thus, insofar as the CPU is concerned, 'data' is loaded into or stored from internal registers in the CPU to/from primary memory.
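A tiny illustration of that point: the same bytes in memory can be viewed as individual bytes or as one word, and nothing in the memory itself says which interpretation is 'right'. The byte values below are arbitrary; this is only a sketch.

```c
/* A minimal sketch: memory is just uniquely addressable bytes/words, and the
 * meaning of the bits depends entirely on how the program interprets them. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint8_t memory[4] = { 0x01, 0x02, 0x03, 0x04 };  /* four addressable bytes */

    uint32_t word;
    memcpy(&word, memory, sizeof word);  /* the same bits viewed as one word  */

    printf("as bytes: %02x %02x %02x %02x\n",
           memory[0], memory[1], memory[2], memory[3]);
    printf("as a 32-bit word: 0x%08x\n", word);  /* value depends on endianness */
    return 0;
}
```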
Simplified Instruction Execution
Instruction Execution. Oversimplified, but: an instruction (just data, as far as primary memory is concerned) is fetched from primary memory and loaded into an instruction register in the CPU.
This machine-language instruction is decoded (electronically, certain bits in the instruction are inspected to determine what the instruction is): is it an Add, Subtract, Compare, Read, Write, …?
Depending on the parts of the instruction (certain bits), more data may have to be fetched from memory and loaded into CPU registers before the CPU can fully execute the instruction. (A decoding sketch follows.)
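Here is a minimal sketch of decoding: certain bits of a fetched instruction word are inspected to find the operation and its operands. The 32-bit layout (an 8-bit opcode and three 8-bit fields) is invented for illustration and does not match any real instruction set.

```c
/* A minimal sketch of decoding an instruction word by masking and shifting.
 * The encoding is hypothetical. */
#include <stdint.h>
#include <stdio.h>

enum opcode { OP_ADD = 0x01, OP_SUB = 0x02, OP_LOAD = 0x03, OP_STORE = 0x04 };

int main(void) {
    uint32_t instruction = 0x01020304;            /* hypothetical ADD encoding */

    uint32_t op   = (instruction >> 24) & 0xFF;   /* which operation?          */
    uint32_t dst  = (instruction >> 16) & 0xFF;   /* destination register      */
    uint32_t src1 = (instruction >>  8) & 0xFF;   /* first source              */
    uint32_t src2 =  instruction        & 0xFF;   /* second source             */

    if (op == OP_ADD)
        printf("ADD: r%u = r%u + r%u\n", dst, src1, src2);
    return 0;
}
```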
Simplified Instruction Execution – Example
Let's say we have an ADD instruction which adds two quantities, like Add X to Y (z = x + y;).
Once the ADD instruction is fetched and decoded, the CPU discovers that it must also load the current values of the two quantities still in memory. This requires two more memory accesses. These two items (the bits in memory representing the values of X and Y) are moved into separate 'registers' in the CPU.
The ADD instruction causes the CPU to add the contents of the two registers (that is, the two numbers, in bits). Part of executing the ADD instruction causes the CPU to store the sum, as found in one of the registers, back to a designated location in memory.
Instruction execution is complete – unless there was some kind of problem. The next instruction in the instruction sequence (the process) is fetched, decoded, and executed. (A toy fetch–decode–execute loop follows.)
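The whole example can be played out on a toy machine. The sketch below runs a fetch-decode-execute loop over a four-instruction 'program' that computes z = x + y; the instruction format, opcodes, and memory layout are all made up for illustration.

```c
/* A minimal sketch of z = x + y on a toy machine: fetch, decode, execute.
 * Everything here (format, opcodes, layout) is invented for illustration. */
#include <stdio.h>

enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

struct instr { int op, reg, addr; };       /* toy fixed-field instruction format */

int main(void) {
    int memory[8]    = { 0, 0, 0, 0, 0, 3, 4, 0 }; /* X at 5, Y at 6, Z at 7     */
    int registers[2] = { 0, 0 };

    struct instr program[] = {             /* the "process": z = x + y           */
        { LOAD,  0, 5 },                   /* R0 <- memory[5]   (X)              */
        { LOAD,  1, 6 },                   /* R1 <- memory[6]   (Y)              */
        { ADD,   0, 0 },                   /* R0 <- R0 + R1                      */
        { STORE, 0, 7 },                   /* memory[7] <- R0   (Z)              */
        { HALT,  0, 0 },
    };

    for (int pc = 0; ; pc++) {             /* fetch–decode–execute cycle         */
        struct instr i = program[pc];      /* fetch into the instruction register */
        if (i.op == HALT) break;           /* decode ...                         */
        else if (i.op == LOAD)  registers[i.reg] = memory[i.addr];   /* execute  */
        else if (i.op == ADD)   registers[0] += registers[1];
        else if (i.op == STORE) memory[i.addr] = registers[i.reg];
    }
    printf("Z = %d\n", memory[7]);         /* prints 7 for X=3, Y=4              */
    return 0;
}
```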
Storage Hierarchy
Of course, we'd like all the instructions and all the data to be resident in memory. Storage accesses (CPU to primary memory) take time.
This is not possible due to the cost of memory, the amount of data that might need to be processed, and the fact that main memory is volatile (lost when power is removed).
So, we have alternative storage structures – where high speed, high cost, and volatility may be traded for lower speed but very high volumes of data, and non-volatility.
Main memory – temporary; volatile; holds instructions and current data. Directly accessible by the CPU and sometimes by some disk controllers (ahead).
Secondary storage – an extension of main memory that provides huge volumes of nonvolatile storage capacity at significantly lower 'cost per bit.' Examples: magnetic disks – rigid metal or glass platters covered with magnetic recording material; jump / flash drives, CDs, magnetic tapes, and many others…
Storage Hierarchy
Storage systems are organized in the hierarchy shown on the next slide. The three main parameters are: speed of access, cost of storage per bit, and volatility.
1. Registers – clearly the fastest. High-speed flip-flops; hardware; internal to the CPU, disk controllers, and other devices.
2. Cache stores – a compromise among speed of access, cost of storage, and performance goals… Cache stores are much smaller in size than RAM and much more expensive in cost per bit, but they provide much faster data transfer rates from cache to the CPU than from primary memory to the CPU. While main memory itself can be viewed as a cache for secondary storage (such as disk storage), there are levels of cache that provide still much faster access between cache and the CPU.
In general, the faster the technology, the smaller the amount purchased, while the cost per bit rises significantly. On the other hand, some applications need very high performance, and cost may be secondary. So, …
Storage-Device Hierarchy
[Figure: the storage-device hierarchy diagram referenced on the previous slide.]
Caching
Caching may exist at several (or no, or one) levels in a computer. Information in use is 'copied' from slower to faster storage technology.
In many applications, the cache is checked first to determine if the desired information is present there (often the most recent disk accesses). If it is, the information is used directly from the cache (fast). If not, the data is copied into the cache and used there (see the lookup sketch below).
Cache storage has much less capacity than the storage being cached, so cache management is an important design problem. More on caching much later in this course.
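Here is a minimal sketch of the check-the-cache-first pattern: a tiny direct-mapped cache sitting in front of a slow backing store. The cache size, the key-to-slot mapping, and the read_from_slow_storage() stand-in are assumptions for illustration only.

```c
/* A minimal sketch of a cache lookup: hit -> use directly; miss -> fetch from
 * the slow level and copy into the cache. */
#include <stdbool.h>
#include <stdio.h>

#define CACHE_SLOTS 8

struct cache_slot { bool valid; int key; int value; };
static struct cache_slot cache[CACHE_SLOTS];

/* Stand-in for the slower level (disk, a lower cache level, ...). */
static int read_from_slow_storage(int key) { return key * 10; }

static int cached_read(int key)
{
    struct cache_slot *slot = &cache[key % CACHE_SLOTS]; /* where it would live */

    if (slot->valid && slot->key == key)     /* hit: use it directly (fast)     */
        return slot->value;

    int value = read_from_slow_storage(key); /* miss: go to the slow level ...  */
    slot->valid = true;                      /* ... and copy it into the cache  */
    slot->key   = key;
    slot->value = value;
    return value;
}

int main(void) {
    printf("%d\n", cached_read(42));  /* miss: served from slow storage         */
    printf("%d\n", cached_read(42));  /* hit: served from the cache             */
    return 0;
}
```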
Performance of Various Levels of Storage
Movement between levels of the storage hierarchy can be explicit or implicit.
[Table of representative access times and transfer rates for each level – we will discuss several of these values.]
End of Chapter 1.2