Computer System Organization
Computer-system operation:
– One or more CPUs and device controllers connect through a common bus providing access to shared memory
– Concurrent execution of CPUs and devices competing for memory cycles

Computer System Operation
– I/O devices and the CPU can execute concurrently
– Each device controller is in charge of a particular device type
– Each device controller has a local buffer
– The CPU moves data between main memory and the local buffers
– I/O proper takes place between the device and the controller's local buffer
– The device controller informs the CPU that it has finished its operation by causing an interrupt
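As a rough illustration, a controller is often programmed through a few memory-mapped registers plus its local buffer; the register layout and the 512-byte buffer below are hypothetical, chosen only to make the data movement visible.

    /* Hypothetical memory-mapped device controller (illustration only). */
    #include <stdint.h>
    #include <stddef.h>

    struct dev_controller {
        volatile uint32_t status;      /* bit 0: operation complete              */
        volatile uint32_t command;     /* written by the CPU to start an operation */
        volatile uint8_t  buffer[512]; /* the controller's local buffer          */
    };

    /* Interrupt service routine: the controller has filled its local buffer
     * and raised an interrupt; the CPU now moves the data into main memory. */
    void disk_isr(struct dev_controller *dev, uint8_t *dst)
    {
        if (dev->status & 1u) {                  /* did the operation complete? */
            for (size_t i = 0; i < sizeof dev->buffer; i++)
                dst[i] = dev->buffer[i];         /* local buffer -> main memory */
        }
        dev->status = 0;                         /* acknowledge the interrupt   */
    }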

Computer Startup and Execution
A bootstrap program is loaded at power-up or reboot
– Typically stored in ROM or EEPROM, generally known as firmware
– Initializes all aspects of the system
– Loads the operating system kernel and starts its execution
The kernel then runs and waits for an event to occur
– An interrupt can come from either hardware or software
  Hardware can send a trigger on the bus at any time
  Software triggers an interrupt by making a system call
– An interrupt stops current execution and transfers control to a fixed location
– The interrupt service routine executes, then the interrupted computation resumes
– There is usually a service routine for each device/function
  » The interrupt vector dispatches each interrupt to the appropriate routine
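As a rough sketch, the bootstrap's job can be pictured in a few lines of C; all the helper names here are hypothetical stand-ins for firmware-specific work, not a real firmware interface.

    /* Hypothetical bootstrap flow; the helpers are stubs, not a real firmware API. */
    #include <stdio.h>

    static void init_devices(void) { /* initialize CPU, memory, device controllers */ }
    static void kernel_main(void)  { printf("kernel running, waiting for events\n"); }

    /* In real firmware this would copy the kernel image from disk or flash
     * into RAM and return its entry point; here it just returns a stub. */
    static void (*load_kernel_image(void))(void) { return kernel_main; }

    void bootstrap(void)
    {
        init_devices();                        /* initialize all aspects of the system */
        void (*entry)(void) = load_kernel_image();
        entry();                               /* transfer control to the kernel       */
    }

    int main(void) { bootstrap(); return 0; }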

Interrupt Timeline

Interrupts are invoked with interrupt lines from devices
The interrupt controller chooses which interrupt request to honor
– The interrupt mask enables/disables individual interrupt lines
– A priority encoder picks the highest-priority enabled interrupt
– A software interrupt can be set/cleared by software
The CPU can disable all interrupts with an internal flag
(Figure: an interrupt controller with timer, network-interrupt, and software-interrupt inputs, an interrupt mask, and a priority encoder feeding the CPU, which has an interrupt-disable flag)
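The selection logic can be sketched in software: given the set of pending interrupt lines and the current mask, pick the highest-priority enabled line. The bit numbering and the "lower bit number = higher priority" convention below are assumptions made for illustration.

    /* Sketch of interrupt selection; lower bit number = higher priority (an
     * assumed convention). Returns the chosen line, or -1 if none is eligible. */
    #include <stdint.h>
    #include <stdio.h>

    int pick_interrupt(uint32_t pending, uint32_t mask_enabled)
    {
        uint32_t eligible = pending & mask_enabled;  /* the mask disables lines   */
        for (int line = 0; line < 32; line++)        /* priority encoder: first   */
            if (eligible & (1u << line))             /* enabled pending line wins */
                return line;
        return -1;                                   /* nothing to service        */
    }

    int main(void)
    {
        /* lines 1 and 3 pending, but line 1 is masked off */
        printf("%d\n", pick_interrupt(0x0Au, ~0x02u));  /* prints 3 */
        return 0;
    }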

Example: Network Interrupt
(Figure: a user program is executing add $r1,$r2,$r3; subi $r4,$r1,#4; slli $r4,$r4,#2 when an external network interrupt arrives. The PC is saved, all interrupts are disabled, and the CPU switches to supervisor mode. The interrupt handler (lw $r2,0($r4); lw $r3,4($r4); add $r2,$r2,$r3; sw 8($r4),$r2) transfers the network packet from the hardware to kernel buffers. The PC is then restored and execution resumes in user mode.)

Common Functions of Interrupts
– An interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines
– The interrupt architecture must save the address of the interrupted instruction
– Incoming interrupts are disabled while another interrupt is being processed, to prevent a lost interrupt
– A trap is a software-generated interrupt caused either by an error or by a user request
– An operating system is interrupt-driven
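The interrupt vector can be pictured as a table of service-routine addresses indexed by interrupt number; the sketch below is schematic and does not follow any particular architecture's vector layout.

    /* Schematic interrupt vector: a table of service-routine addresses
     * indexed by interrupt number (not any real architecture's layout). */
    #include <stdio.h>

    #define NUM_VECTORS 4

    static void timer_isr(void)    { printf("timer tick\n"); }
    static void disk_done_isr(void){ printf("disk transfer done\n"); }
    static void keyboard_isr(void) { printf("key pressed\n"); }
    static void spurious_isr(void) { /* ignore */ }

    static void (*interrupt_vector[NUM_VECTORS])(void) = {
        timer_isr, disk_done_isr, keyboard_isr, spurious_isr
    };

    /* Hardware would save the interrupted PC and arrive here with the
     * interrupt number; the dispatch itself is just an indexed call. */
    void dispatch(int irq)
    {
        if (irq >= 0 && irq < NUM_VECTORS)
            interrupt_vector[irq]();
    }

    int main(void) { dispatch(1); return 0; }   /* prints "disk transfer done" */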

Storage Structure
– Programs must be in main memory (RAM) to execute
– Von Neumann architecture
(Figure: flowchart of the instruction cycle: START; fetch the next instruction from memory into the IR; increment the PC; decode and execute the instruction in the IR; if not STOP, repeat)
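The flowchart translates almost directly into a loop; the three-instruction machine below is invented purely to make the cycle concrete.

    /* Fetch-decode-execute loop for a made-up three-instruction machine
     * (opcodes and encoding invented purely for illustration). */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_HALT = 0, OP_INC = 1, OP_PRINT = 2 };

    int main(void)
    {
        uint8_t memory[] = { OP_INC, OP_INC, OP_PRINT, OP_HALT };
        int pc = 0, acc = 0, ir;

        for (;;) {
            ir = memory[pc];        /* fetch next instruction into the IR */
            pc++;                   /* increment the PC                   */
            switch (ir) {           /* decode and execute                 */
            case OP_INC:   acc++;                     break;
            case OP_PRINT: printf("acc = %d\n", acc); break;
            case OP_HALT:  return 0;                  /* stop             */
            }
        }
    }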

Storage Structure
Ideally, we want programs and data to reside in main memory permanently, but:
– Main memory is usually too small
– Main memory is volatile: it loses its contents on power loss
Secondary storage holds large quantities of data permanently
– Magnetic disk is the most common secondary-storage device
– In practice there is a hierarchy of storage varying in speed, cost, size, and volatility

Storage-Device Hierarchy

Storage Hierarchy
– Storage systems are organized in a hierarchy according to speed, cost, and volatility
– A program in execution (i.e., a process) generates a stream of memory addresses
(Figure: the same fetch-decode-execute flowchart as in the Storage Structure slide)

Storage Hierarchy
What if the next instruction or data item is not in main memory?
– Problem: memory can be a bottleneck for processor performance
– Solution: rely on a memory hierarchy of faster memory to bridge the gap

Caching
– An important principle, performed at many levels in a computer (in hardware, in the operating system, in software)
– Information in use is copied temporarily from slower to faster storage
– The faster storage (cache) is checked first to determine if the information is there
  If it is, the information is used directly from the cache (fast)
  If not, the data is copied into the cache and used there
– What acts as a cache for the disk (i.e., secondary storage)?
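The "check the faster storage first" pattern looks the same at every level of the hierarchy; the sketch below puts a tiny direct-mapped cache in front of a simulated slow memory (the sizes and the direct-mapped organization are assumptions chosen for brevity).

    /* "Check the cache first" over a tiny direct-mapped cache
     * (sizes and mapping are illustrative assumptions). */
    #include <stdio.h>
    #include <stdbool.h>

    #define MEM_SIZE   64
    #define CACHE_SIZE  8

    static int  memory[MEM_SIZE];           /* slower storage              */
    static int  cache_data[CACHE_SIZE];     /* faster storage              */
    static int  cache_tag[CACHE_SIZE];      /* which address each slot holds */
    static bool cache_valid[CACHE_SIZE];

    int read_word(int addr)
    {
        int slot = addr % CACHE_SIZE;
        if (cache_valid[slot] && cache_tag[slot] == addr)
            return cache_data[slot];        /* hit: use the cached copy    */
        cache_data[slot]  = memory[addr];   /* miss: copy into the cache   */
        cache_tag[slot]   = addr;
        cache_valid[slot] = true;
        return cache_data[slot];
    }

    int main(void)
    {
        memory[42] = 7;
        printf("%d %d\n", read_word(42), read_word(42));  /* miss, then hit */
        return 0;
    }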

Caching Analogy
You are going to do some research on a particular topic, so you go to the library and look for a shelf that contains books on that topic
You pick up a book from the shelf, find a chair, sit down, and start reading

Caching Analogy
You find a reference to another book on the same topic that you are also interested in reading, so you stand up, go back to the same shelf, leave the first book, and pick up the other book
Then you return to the chair and start reading the second book
Later on you realize that you want to read the first book again (or another related book), so you repeat the same process (i.e., go to the shelf to find it)

Caching Analogy
Suppose that instead of taking just one book from the shelf, you take 10 books on the same topic. You find a table with a chair, put the 10 books on the table, sit there, and start reading one of them
If you need another related book, there is a good chance that it is already on your table, so you don't have to go to the shelf to get it. You can also leave the first book on the table, since there is a good chance you will need it again later

Caching Analogy
– The table is a cache for what?
– If the book that you need is on the table, you have a cache hit
– If the book that you need is not on the table, you have a cache miss
– The cache is smaller than the storage being cached
  Cache management is an important design problem
  Key issues: cache size and replacement policy

Caching
Temporal locality (locality in time)
– Recently accessed items tend to be accessed again in the near future
– So keep the most recently accessed data closer to the processor
– In the analogy?
Spatial locality (locality in space)
– Accesses are clustered in the address space
– So move blocks of contiguous words to the faster levels
– In the analogy?
– Why are "gotos" not good for locality?
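A short loop shows both kinds of locality at once: the running sum is reused on every iteration (temporal locality), and the array is walked through consecutive addresses (spatial locality). This is just an illustrative fragment.

    /* Illustration of locality in an ordinary loop. */
    #include <stdio.h>

    int main(void)
    {
        int a[1024], sum = 0;
        for (int i = 0; i < 1024; i++) a[i] = i;

        for (int i = 0; i < 1024; i++)
            sum += a[i];   /* sum:  temporal locality (reused every iteration)
                              a[i]: spatial locality (consecutive addresses)   */
        printf("%d\n", sum);
        return 0;
    }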

Caching
Statistically, only a small portion of the entire memory space is being accessed at any given time, and values in that subset are accessed repeatedly
These locality properties allow us to use a small amount of very fast memory to effectively accelerate the majority of memory accesses
The result is a memory system that can hold a large amount of information (in large, low-cost memory) yet provide nearly the same access speed as if all of the memory were very fast and expensive

Caching
Suppose a memory reference generated by the CPU results in a cache miss (i.e., the corresponding value is not in the cache)
– In the analogy, you don't have the book that you need on the table
The address is sent to main memory to fetch the desired word and load it into the cache. But suppose the cache is already full: you must replace one of the values in the cache. What would be a good policy for choosing the value to replace?
– In the analogy, the table is full; which book do you remove from the table to make room for the new one?

Caching
Given that we want to keep values that we will need again soon, what about getting rid of the one that won't be needed for the longest time?
Choosing which value to replace is called the replacement policy
We will cover this in detail in Chapter 9
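Replacing the value that will not be needed for the longest time requires knowing the future, so a common practical approximation is to evict the least-recently-used (LRU) entry. The sketch below keeps a last-used timestamp per slot, which is a simplification of what real caches do; Chapter 9 covers the policies properly.

    /* Sketch of LRU replacement for a tiny fully associative cache
     * (per-slot timestamps are a simplification of real hardware). */
    #include <stdio.h>

    #define SLOTS 3

    static int addrs[SLOTS];      /* which address each slot holds */
    static int last_used[SLOTS];  /* "time" of most recent use     */
    static int filled = 0, clock_ticks = 0;

    void cache_access(int addr)
    {
        clock_ticks++;
        for (int i = 0; i < filled; i++)
            if (addrs[i] == addr) { last_used[i] = clock_ticks; return; }  /* hit */

        int victim = 0;
        if (filled < SLOTS) {
            victim = filled++;                        /* a free slot is available   */
        } else {
            for (int i = 1; i < SLOTS; i++)           /* evict least recently used  */
                if (last_used[i] < last_used[victim]) victim = i;
            printf("evict %d for %d\n", addrs[victim], addr);
        }
        addrs[victim] = addr;
        last_used[victim] = clock_ticks;
    }

    int main(void)
    {
        int trace[] = { 1, 2, 3, 1, 4 };              /* 4 evicts 2, the LRU entry */
        for (int i = 0; i < 5; i++) cache_access(trace[i]);
        return 0;
    }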

In general …

Hit: the data appears in some block in the faster level
– Hit rate: the fraction of memory accesses found in the faster level
– Hit time: the time to access the faster level, which consists of the memory access time plus the time to determine hit/miss
Miss: the data must be retrieved from a block in the slower level
– Miss rate: 1 - (hit rate)
– Miss penalty: the time to replace a block in the faster level plus the time to deliver the block to the processor
Hit time << miss penalty
We will cover memory management in Chapters 8 and 9
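One standard way to combine these quantities (not on the slide, but a common rule of thumb) is the average memory access time: AMAT = hit time + miss rate × miss penalty. For example, assuming a hit time of 1 ns, a miss rate of 5%, and a miss penalty of 100 ns, AMAT = 1 + 0.05 × 100 = 6 ns, so with hit time << miss penalty the miss rate dominates the average.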

I/O Structure
– Storage is one of many types of I/O devices
– Each device is connected to a controller, which
  maintains local buffer storage and a set of special-purpose registers, and
  is responsible for moving data between the peripheral devices it controls and its local buffer storage
– There is a device driver for each device controller
  The driver understands the device controller and presents a uniform interface to the device for the rest of the operating system
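The uniform interface a driver presents is often pictured as a table of operations the kernel can call without knowing the device's details; the struct below is a generic sketch under that assumption, not any real kernel's driver API.

    /* Generic sketch of a driver's uniform interface (not a real kernel API). */
    #include <stddef.h>
    #include <stdio.h>

    struct device_ops {                      /* what the rest of the OS sees */
        int (*open )(void);
        int (*read )(void *buf, size_t n);
        int (*write)(const void *buf, size_t n);
    };

    /* A particular driver fills in the table with device-specific code. */
    static int console_open(void)                       { return 0; }
    static int console_read(void *buf, size_t n)        { (void)buf; (void)n; return 0; }
    static int console_write(const void *buf, size_t n) { return (int)fwrite(buf, 1, n, stdout); }

    static const struct device_ops console_driver = {
        console_open, console_read, console_write
    };

    int main(void)
    {
        console_driver.open();
        console_driver.write("hello from the driver sketch\n", 29);
        return 0;
    }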

Direct Memory Access (DMA)
– The device controller transfers a block of data directly to/from main memory
– It interrupts the CPU when the block transfer has completed
– Only one interrupt is generated per block, rather than one interrupt per byte
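As a sketch, programming a DMA transfer usually amounts to loading a source, a destination, and a length into controller registers and then waiting for a single completion interrupt; the register layout below is entirely hypothetical.

    /* Hypothetical DMA controller registers (layout invented for illustration). */
    #include <stdint.h>

    struct dma_controller {
        volatile uint32_t src;       /* device/buffer address to copy from */
        volatile uint32_t dst;       /* main-memory address to copy to     */
        volatile uint32_t count;     /* number of bytes in the block       */
        volatile uint32_t control;   /* bit 0: start transfer              */
    };

    /* The CPU sets up the whole block, then does other work; the controller
     * raises ONE interrupt when the entire block has been transferred,
     * instead of one interrupt per byte. */
    void start_dma(struct dma_controller *dma,
                   uint32_t src, uint32_t dst, uint32_t nbytes)
    {
        dma->src     = src;
        dma->dst     = dst;
        dma->count   = nbytes;
        dma->control = 1u;           /* go */
    }

    void dma_complete_isr(void)
    {
        /* block transfer finished; wake up whoever was waiting for it */
    }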

Computer System Architecture
A computer system can be organized in a number of different ways, which we can categorize roughly according to the number of general-purpose processors it has
Single-processor systems
– Range from PDAs to mainframes
– Almost all have special-purpose processors as well
  For example, PCs contain a microprocessor in the keyboard to convert keystrokes into codes to be sent to the CPU
  These do not make the system a multiprocessor

Computer System Architecture
Multiprocessor systems
– Increased throughput
  What is the speed-up ratio with N processors?
– Economy of scale
  Which is cheaper: one system with N processors, or N single-processor systems?
– Increased reliability
  Some systems are fault tolerant
– Asymmetric multiprocessing
  Each processor is assigned a specific task
  A master processor controls the system
– Symmetric multiprocessing (SMP), the most common
  No master-slave relationship among processors