Chapter 1: The Queueing Paradigm


The behavior of complex electronic systems  statistical prediction. Paradigms are fundamental models that abstract out the essential features of the system being studied. The types of prediction:

1. How many terminals can one connect to a time-sharing computer and still maintain a reasonable response time?

2. What percentage of calls will be blocked on the outgoing lines of a small business's telephone system? What improvement will result if extra lines are added?

3. What improvement in efficiency can one expect from adding a second processor to a computer system? Would it be better to spend the money on a second hard disk?

4. What is the best architecture for a space division packet switch?

5. Others? (servers, Web servers, …)

1.2 Queueing Theory Queueing theory has its roots early in the twentieth century, in the early studies of the Danish mathematician A. K. Erlang on telephone networks and in the creation of Markov models by the Russian mathematician A. A. Markov.
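Erlang's blocking result can be illustrated with a short computation. The sketch below (my illustration, not from the text) evaluates the Erlang B formula, the blocking probability of an M/M/c/c loss system and the model behind the blocked-call question above, using the standard numerically stable recursion.

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability of an M/M/c/c loss system (Erlang B).

    Uses the recursion B(c) = a*B(c-1) / (c + a*B(c-1)), which avoids
    the overflow-prone factorials of the closed-form expression.
    """
    b = 1.0  # B(0) = 1: with zero servers every call is blocked
    for c in range(1, servers + 1):
        b = (offered_load * b) / (c + offered_load * b)
    return b

# Hypothetical small-business example: 5 outgoing lines, 3 erlangs offered
blocking = erlang_b(5, 3.0)  # ≈ 0.11, i.e. about 11% of calls blocked
```

Adding lines and re-evaluating answers the "what improvement will result" question directly, e.g. `erlang_b(7, 3.0)` for two extra lines.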

1.3 Queueing Model Customers may be: calls, jobs, events, connections, requests, tasks, or packets.

Queue: buffering; Server: service.

There are three servers in parallel.

M/M/3 The characters in the first two positions indicate the arrival and service statistics: M: Markov (memoryless) statistics, D: deterministic timing, G: general (arbitrary) statistics, Geom: geometric statistics. The third position gives the number of servers.
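As an illustration of what such a model predicts (my sketch, not from the text), the following computes standard M/M/c results, the Erlang C waiting probability, mean queue length, and mean response time, for an M/M/3 queue with an assumed arrival rate of 2 and per-server service rate of 1.

```python
from math import factorial

def mm_c_metrics(lam: float, mu: float, c: int):
    """Steady-state metrics of an M/M/c queue: Poisson arrivals at rate
    lam, c parallel exponential servers each with rate mu."""
    a = lam / mu                       # offered load in erlangs
    rho = a / c                        # per-server utilization
    assert rho < 1, "queue is unstable (lam must be < c*mu)"
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    p_wait = (a**c / (factorial(c) * (1 - rho))) * p0   # Erlang C
    lq = p_wait * rho / (1 - rho)      # mean number of customers waiting
    w = lq / lam + 1.0 / mu            # mean response time, by Little's law
    return p_wait, lq, w

# Assumed rates: lam = 2 arrivals/s, mu = 1 service/s per server, 3 servers
p_wait, lq, w = mm_c_metrics(2.0, 1.0, 3)   # p_wait = 4/9, lq = 8/9, w = 13/9
```

With these assumed numbers an arriving customer waits with probability 4/9 and the mean response time is about 1.44 s.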

Network of Queues An "open" queueing network accepts and loses customers from/to the outside world. Thus the total number of customers in an open network varies with time. A "closed" network does not connect with the outside world and has a constant number of customers circulating throughout it.
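In an open network, the arrival rate at each queue follows from the traffic equations: each queue's rate equals its external arrival rate plus the flow routed to it from the other queues. A minimal sketch (the two-queue example and its rates are hypothetical):

```python
def traffic_rates(gamma, P, iters=100):
    """Solve the traffic equations lam[j] = gamma[j] + sum_i lam[i]*P[i][j]
    by fixed-point iteration; gamma holds external arrival rates and
    P[i][j] is the routing probability from queue i to queue j."""
    n = len(gamma)
    lam = list(gamma)
    for _ in range(iters):
        lam = [gamma[j] + sum(lam[i] * P[i][j] for i in range(n))
               for j in range(n)]
    return lam

# Hypothetical open network: external arrivals (rate 1.0) only at queue 0;
# after queue 0 a customer moves to queue 1 with prob 0.5 or departs;
# after queue 1 it always departs.
gamma = [1.0, 0.0]
P = [[0.0, 0.5],
     [0.0, 0.0]]
lam = traffic_rates(gamma, P)   # [1.0, 0.5]
```

In a closed network there are no gamma terms; the rates are determined only up to a constant, which is why closed networks are solved with techniques like mean value analysis instead.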

A customer that circulates in this queueing network represents the control of a computer terminal.

Service Discipline
- First In, First Out (FIFO)
- Last In, First Out (LIFO)
- LIFO with pre-emptive resume (LIFOPR)
- Processor sharing (PS) discipline
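The difference between the FIFO and LIFO disciplines can be sketched with ordinary buffers (an illustrative toy, not from the text):

```python
from collections import deque

arrivals = ["A", "B", "C"]   # customers in order of arrival

# FIFO: the server takes the oldest waiting customer first
fifo = deque(arrivals)
fifo_service = [fifo.popleft() for _ in range(len(arrivals))]  # A, B, C

# LIFO: the server takes the most recent arrival first (a stack)
lifo = list(arrivals)
lifo_service = [lifo.pop() for _ in range(len(arrivals))]      # C, B, A
```

LIFOPR additionally lets a new arrival pre-empt the customer in service and later resume it; processor sharing instead serves all waiting customers simultaneously at a proportionally reduced rate.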

State Description The state of a queueing network is a vector indicating the total number of customers in each queue at a particular time instant. With several classes of customers, the state description must also record the number of customers of each class in each queue. "Design and Implementation of the VAX Distributed File Service" by William G. Nichols and Joel S. Emer in the Digital Technical Journal No. 9, June 1989.
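For a closed network the state space can be enumerated directly: every vector of per-queue counts summing to the fixed population N. A small sketch (the queue count and population below are arbitrary):

```python
from itertools import product

def states(num_queues: int, num_customers: int):
    """All state vectors (n1, ..., nM) with n1 + ... + nM = N
    for a closed network of M queues and N circulating customers."""
    return [s for s in product(range(num_customers + 1), repeat=num_queues)
            if sum(s) == num_customers]

# 3 queues, 2 circulating customers -> C(4, 2) = 6 states,
# e.g. (2, 0, 0), (1, 1, 0), (0, 1, 1), ...
all_states = states(3, 2)
```

The combinatorial growth of this list with N and M is exactly why compact solution techniques (product form, MVA) matter.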

1.4 Case Study I: Performance Model of a Distributed File Service Important operational characteristics + system parameter estimation  Model  Analysis and simulation of the system performance. The server resources being modeled are: 1. the network interface, 2. the CPU, 3. the disk subsystem at the server.

The response time breakdown of a Web service

Design Alternatives Design alternatives  Models (adjustable)  Parameters  Analysis, simulation, measurement (artificial workload)  Performance comparison The model distinguishes two types of requests: control operations and data access operations.

Design Alternatives 1. Control operations are operations such as open and close file, which have a high computational component. 2. Data access operations are simple reads and writes. Data access operations usually have low computational requirements but require larger data transfers.

Design Alternatives To keep the model tractable, assume the service time at each service center is exponentially distributed. This yields a product-form solution (for a single class) and an approximate solution (via the multiclass mean value analysis technique).
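The multiclass case needs an approximate MVA, but the exact single-class algorithm for a closed product-form network is short enough to sketch here (the service demands below are hypothetical, not the paper's measured parameters):

```python
def mva(demands, population):
    """Exact single-class Mean Value Analysis for a closed product-form
    network; demands[k] is the mean service demand at center k."""
    K = len(demands)
    q = [0.0] * K                  # mean queue lengths (population 0)
    x = 0.0                        # system throughput
    for n in range(1, population + 1):
        # An arriving job sees the mean queue left by the other n-1 jobs
        r = [demands[k] * (1.0 + q[k]) for k in range(K)]
        x = n / sum(r)             # throughput at population n (Little's law)
        q = [x * r[k] for k in range(K)]
    return x, q

# Hypothetical two-center server: CPU demand 0.2 s, disk demand 0.1 s, 3 jobs
x, q = mva([0.2, 0.1], 3)   # throughput 14/3 jobs/s; queue lengths sum to 3
```

Re-running with altered demands is exactly how the design alternatives (e.g. a faster CPU versus a second disk) are compared.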

1.5 Case Study II: Single-bus Multiprocessor Modeling “Modeling and Performance Analysis of Single-Bus Tightly-Coupled Multiprocessors” by B. L. Bodnar and A. C. Liu in the IEEE Transactions on Computers, Vol. 38, No. 3, March 1989.

Description of the Multiprocessor Model A tightly-coupled multiprocessor (TCMP) is defined as a distributed computer system where all the processors communicate through a single (global) shared memory. A typical physical layout for such a computer structure is shown in Fig. 1.6. Figure 1.7 illustrates our queueing model for this single-bus tightly-coupled multiprocessor (SBTCMP) architecture.

Description of the Multiprocessor Model PE: processing element. BIU: bus interface unit. Each CPU and BIU has mean service rate u(i,1) and u(i,2), respectively. We also assume the CPU and BIU operate independently of each other, and that all the BIUs in the multiprocessor can be lumped together into a single "equivalent BIU".

Description of the Multiprocessor Model The branching probabilities are p(i,1), p(i,2), and p(i,3). The branching probability p(i,3) is interpreted as the probability that a task associated with PE i will join the CPU queue at PE i after using the BIU; 1 − p(i,3) is then the probability that a task associated with PE i will wait for an interrupt acknowledgment at PE i.
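The role of p(i,3) can be illustrated with a toy Monte Carlo draw (the value 0.7 and the helper below are my assumptions, not the paper's):

```python
import random

def route_after_biu(p3: float, rng: random.Random) -> str:
    """After using the BIU, a task rejoins its PE's CPU queue with
    probability p3; otherwise (prob 1 - p3) it waits for an interrupt
    acknowledgment."""
    return "cpu_queue" if rng.random() < p3 else "await_interrupt"

rng = random.Random(42)                     # seeded for reproducibility
outcomes = [route_after_biu(0.7, rng) for _ in range(10_000)]
frac_cpu = outcomes.count("cpu_queue") / len(outcomes)   # close to 0.7
```

Over many bus uses the fraction of tasks rejoining the CPU queue converges to p(i,3), which is how such branching probabilities would be estimated from measurements.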

Queueing Model of Multiprocessor Interrupt mechanism: intertask communication interrupt. The states of a task: sleep, ready, execution, … Tasks sleep in the task pool. If a task requests bus usage, it will enter the BIU queue. If the bus is free, the task will begin using the bus; otherwise, it will wait until the bus is released.

Queueing Model of Multiprocessor We assume that only one of the preceding events can occur at a given moment; that is, CPU and interrupt processes cannot occur concurrently. We also assume that a task queued for bus access will not be using the CPU. Tasks waiting for an interrupt or undergoing interrupt servicing will be referred to as "interrupt-driven tasks".

Queueing Model of Multiprocessor Tasks waiting for interrupts are modeled by a queue feeding a “lock”. The lock is drawn using a triangle to signify that it is enabled via an external stimulus. The purpose of the lock is to allow only one task to pass by it in response to an interrupt.

Queueing Model of Multiprocessor If the task that was forced to release the CPU was an interrupt-driven task, then this pre-empted task becomes the first entry on a last-come-first-served queue (i.e., a stack). If the pre-empted task was not an interrupt-driven task, then it becomes the first entry on the ready list.

Queueing Model of Multiprocessor The model not only considers the bus contention caused by multiple processors attempting to access the shared memory, but also attempts to realistically include the local behavior of the various tasks running on these same processors.

1.6 Case Study III: TeraNet, A Lightwave Network

1.7 Case Study IV: Performance Model of a Shared Medium Packet Switch

PlaNET Switching Configuration

Model of Switch