A ‘ringbuffer’ application: Introduction to process ‘blocking’ and the Linux kernel’s support for ‘sleeping’ and ‘waking’

Devices might be ‘idle’ With our previous device-driver examples (i.e., dram, cmosram), the data to be read was already there, just waiting to be input. But with certain other character devices, such as a keyboard, a program may try to read before any new data has actually been entered by the user. In such cases we prefer to wait until data arrives, rather than abandon the read.

Devices might be ‘busy’ Sometimes an application wants to ‘write’ some data to a character device, such as a printer, but the device is temporarily unable to accept more data, being still busy with processing previously written data. Again, in such situations we prefer to just wait until the device becomes ready for us to send it more data, rather than give up.

We could do ‘busy waiting’… It is possible for a device-driver to ‘poll’ a status-bit continuously until data is ready (or until a device is no longer “too busy”). Such a technique is called ‘busy waiting’, but it can waste a lot of valuable CPU time before any benefit is realized! do { status = inb( 0x64 ); } while ( ( status & READY ) == 0 );
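
Here is a minimal sketch of that busy-waiting loop as it might appear inside a driver, for illustration only. Port 0x64 is the PC keyboard-controller status register; the READY mask shown here (bit 0, “output buffer full”) is an assumption about what the slide’s fragment intended.

    #include <asm/io.h>             /* for inb() */

    #define KBD_STATUS_PORT  0x64   /* keyboard-controller status register      */
    #define READY            0x01   /* assumed mask: bit 0 = output-buffer full */

    /* Spin until the device reports that data is ready -- wastes CPU time */
    static void busy_wait_for_data( void )
    {
            unsigned char   status;

            do {
                    status = inb( KBD_STATUS_PORT );
            } while ( ( status & READY ) == 0 );
    }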

Avoid ‘busy waiting’ In a multitasking system we would want to avoid having any processes use the ‘busy waiting’ strategy whenever possible, as it ‘stalls’ any progress by other tasks – it’s a system-performance ‘bottleneck’! So modern operating systems support an alternative strategy, one which allows those tasks that could make progress to do so.

‘blocking’ while idle If a task is trying to read from a device-file when no data is present, but new data is expected to arrive, the operating system can ‘block’ that task from consuming any valuable CPU time while it is waiting, by ‘putting the task to sleep’ – yet arranging for that task to be ‘awakened’ as soon as some fresh data has actually arrived

‘blocking’ while busy Similarly, if a task is trying to ‘write’ to a device-file, but that device is ‘busy’ with previously written data, then the OS can put this task to sleep, preventing it from wasting any CPU time during its delay so that other tasks can do useful work – but arranging for this ‘sleeping’ task to be ‘woken up’ as soon as the device is no longer ‘busy’ and can accept fresh data

What does ‘sleep’ mean? The Linux kernel puts a task to sleep simply by modifying the value of its ‘state’ variable, which can hold values such as TASK_RUNNING, TASK_STOPPED, TASK_UNINTERRUPTIBLE, or TASK_INTERRUPTIBLE. Only tasks with ‘state == TASK_RUNNING’ are granted time on the CPU by the ‘scheduler’.

What does ‘wakeup’ mean? A sleeping task is one whose ‘task.state’ is equal to ‘TASK_INTERRUPTIBLE’ or to ‘TASK_UNINTERRUPTIBLE’. A sleeping task is ‘woken up’ by changing its ‘task.state’ to be ‘TASK_RUNNING’. When the Linux scheduler sees that a task is in the ‘TASK_RUNNING’ state, it grants that task some CPU time for execution.
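
To make these state changes concrete, here is a hedged sketch (not taken from the course materials) of one task putting itself to sleep and being woken by some other context, using the kernel’s set_current_state(), schedule(), and wake_up_process() calls; the ‘sleeper’ variable is just an illustrative name.

    #include <linux/sched.h>   /* current, set_current_state(), schedule() */

    static struct task_struct  *sleeper;   /* illustrative: remembers who went to sleep */

    /* Executed by the task that wants to sleep */
    static void go_to_sleep( void )
    {
            sleeper = current;                        /* remember ourselves          */
            set_current_state( TASK_INTERRUPTIBLE );  /* mark this task not-runnable */
            schedule();                               /* give up the CPU             */
            /* execution resumes here only after somebody wakes this task up */
    }

    /* Executed later, from some other context (another task, or an interrupt) */
    static void wake_the_sleeper( void )
    {
            if ( sleeper )
                    wake_up_process( sleeper );       /* state becomes TASK_RUNNING  */
    }

Note that this naive version can miss a wakeup that arrives before the sleeping task has finished marking itself asleep; avoiding such race conditions is exactly what the kernel’s wait-queue helpers, introduced on the following slides, are for.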

‘run’ queues and ‘wait’ queues In order for Linux to efficiently manage the scheduling of its various ‘tasks’, separate queues are maintained for ‘running’ tasks and for tasks that temporarily are ‘blocked’ while waiting for a particular event to occur (such as the arrival of new data from the keyboard, or the exhaustion of prior data sent to the printer)

Some tasks are ‘ready-to-run’ Those tasks that are ready-to-run comprise a sub-list of all the tasks, and they are arranged on a queue known as the ‘run-queue’. Those tasks that are blocked while awaiting a specific event are put on alternative sub-lists, called ‘wait queues’, associated with the particular event(s) that will allow a blocked task to be unblocked.

Kernel waitqueues [slide diagram of a kernel ‘waitqueue’]

Kernel’s support-routines The Linux kernel makes it easy for drivers to perform the ‘sleep’ and ‘wakeup’ actions while avoiding the potential ‘race conditions’ that are inherent in a ‘preemptive’ kernel. Your driver can use the support-routines by including the header <linux/wait.h>.

Use of Linux wait-queues #include <linux/wait.h> wait_queue_head_t my_queue; init_waitqueue_head( &my_queue ); sleep_on( &my_queue ); wake_up( &my_queue ); But you can’t unload the driver if a task stays asleep!

‘interruptible’ is preferred #include <linux/wait.h> wait_queue_head_t wq; init_waitqueue_head( &wq ); wait_event_interruptible( wq, condition ); wake_up_interruptible( &wq ); Here ‘condition’ is the expression that must become true for the sleep to end. An ‘interruptible’ sleep can also be awakened by a signal, in case you might want to ‘unload’ your driver!

A convenient ‘macro’ DECLARE_WAIT_QUEUE_HEAD( wq ); This statement can be placed outside your module’s functions (i.e., as a ‘global’ object). It combines declaration with initialization: wait_queue_head_t wq; init_waitqueue_head( &wq );
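
Putting these pieces together, here is a minimal sketch of the preferred ‘interruptible’ pattern, assuming a driver-defined flag named data_is_available (a hypothetical name). A nonzero return from wait_event_interruptible() means a signal ended the sleep, so the driver returns -ERESTARTSYS instead of proceeding.

    #include <linux/wait.h>
    #include <linux/errno.h>

    static DECLARE_WAIT_QUEUE_HEAD( wq );
    static int  data_is_available;         /* assumed driver-specific condition flag */

    /* Called by a task that must wait until data is available */
    static int wait_for_data( void )
    {
            /* sleep until the condition becomes true -- or until a signal arrives */
            if ( wait_event_interruptible( wq, data_is_available != 0 ) )
                    return -ERESTARTSYS;   /* a signal woke us up prematurely          */
            return 0;                      /* the condition is true: data is available */
    }

    /* Called by whoever makes new data available */
    static void announce_new_data( void )
    {
            data_is_available = 1;
            wake_up_interruptible( &wq );  /* unblock any task sleeping on 'wq' */
    }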

Our ‘stash’ device The device works like a public ‘clipboard’: it uses kernel memory to store its data, and it allows ‘communication’ between tasks. What one task writes, another can read!

Ringbuffer A first-in, first-out (FIFO) data-structure. It uses a storage array of finite length and two array-indices, ‘head’ and ‘tail’: data is added at the current ‘tail’ position, and data is removed from the ‘head’ position.

‘ringbuffer’ depicted [slide diagram of the circular buffer, showing the ‘head’ and ‘tail’ indices] FIFO rules: The next data to be added goes in at the current ‘tail’ position, and the next data to be removed comes from the ‘head’ position. The ringbuffer is ‘empty’ when ‘head’ equals ‘tail’, and it is ‘full’ when ‘tail’ + 1 equals ‘head’ (modulo RINGSIZE).

Ringbuffer (continued) One array-position is always left unused. The condition ‘head == tail’ means “empty”, and the condition ‘( tail + 1 ) % RINGSIZE == head’ means “full”. Both ‘head’ and ‘tail’ will “wrap around”, using the calculation: next = ( next + 1 ) % RINGSIZE;
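
A small C sketch of such a ringbuffer, following the slides’ names (‘head’, ‘tail’, RINGSIZE); the size of 512 bytes and the helper-function names are illustrative choices, not taken from the course’s actual ‘stash.c’.

    #define RINGSIZE  512                 /* illustrative size -- one slot stays unused */

    static char  ring[ RINGSIZE ];        /* the storage array                          */
    static int   head, tail;              /* remove at 'head', insert at 'tail'         */

    static int ring_is_empty( void ) { return ( head == tail ); }
    static int ring_is_full( void )  { return ( ( tail + 1 ) % RINGSIZE == head ); }

    /* Insert one byte at the 'tail' position (caller must first check for 'full') */
    static void ring_put( char ch )
    {
            ring[ tail ] = ch;
            tail = ( tail + 1 ) % RINGSIZE;       /* wraparound */
    }

    /* Remove one byte from the 'head' position (caller must first check for 'empty') */
    static char ring_get( void )
    {
            char  ch = ring[ head ];
            head = ( head + 1 ) % RINGSIZE;       /* wraparound */
            return  ch;
    }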

read-algorithm for ‘stash’ if ( ringbuffer_is_empty ) { // sleep, until another task supplies some data // or else exit if a signal is received by this task } Remove a byte from the ringbuffer; Copy the byte to user-space; Awaken any sleeping writers; return 1;
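
A hedged sketch of how that read-algorithm might look in a 2.6-era driver, combining the ringbuffer helpers and wait-queue pattern sketched above (it also assumes <linux/fs.h> and <asm/uaccess.h> for the file_operations and copy_to_user() declarations); it is not the course’s actual ‘stash.c’ code.

    static DECLARE_WAIT_QUEUE_HEAD( wq_readers );  /* readers sleep here when 'empty' */
    static DECLARE_WAIT_QUEUE_HEAD( wq_writers );  /* writers sleep here when 'full'  */

    static ssize_t
    my_read( struct file *file, char __user *buf, size_t len, loff_t *pos )
    {
            char  ch;

            if ( len == 0 ) return 0;

            /* sleep until another task supplies some data, or a signal arrives */
            if ( wait_event_interruptible( wq_readers, !ring_is_empty() ) )
                    return -ERESTARTSYS;

            ch = ring_get();                       /* remove a byte from the ringbuffer */
            if ( copy_to_user( buf, &ch, 1 ) )     /* copy the byte to user-space       */
                    return -EFAULT;

            wake_up_interruptible( &wq_writers );  /* awaken any sleeping writers       */
            return  1;
    }

With several simultaneous readers, another task could empty the buffer between the wakeup and the call to ring_get(), so a real driver would also guard the buffer with a lock, or re-test the condition in a loop.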

write-algorithm for ‘stash’ if ( ringbuffer_is_full ) { // sleep, until some data is removed by another task // or else exit if a signal is received by this task } Copy a byte from user-space; Insert this byte into ringbuffer; Awaken any sleeping readers; return 1;
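
And the mirror-image write method, under the same assumptions as the read sketch above:

    static ssize_t
    my_write( struct file *file, const char __user *buf, size_t len, loff_t *pos )
    {
            char  ch;

            if ( len == 0 ) return 0;

            /* sleep until some data is removed by another task, or a signal arrives */
            if ( wait_event_interruptible( wq_writers, !ring_is_full() ) )
                    return -ERESTARTSYS;

            if ( copy_from_user( &ch, buf, 1 ) )   /* copy a byte from user-space    */
                    return -EFAULT;
            ring_put( ch );                        /* insert this byte into the ring */

            wake_up_interruptible( &wq_readers );  /* awaken any sleeping readers    */
            return  1;
    }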

Demonstration of ‘stash’ Quick demo: we can use I/O redirection. To demonstrate a ‘write’ to /dev/stash: $ echo “Hello” > /dev/stash To demonstrate a ‘read’ from /dev/stash: $ cat /dev/stash

The ‘device’ file-node We cannot use the ‘stash.c’ device-driver until a device-node has been created that allows both ‘read’ and ‘write’ access (the SysAdmin must usually do this setup): root# mknod /dev/stash c 40 0 root# chmod a+rw /dev/stash But you can do it yourself, by using a module that resembles our ‘tempcdev.c’ demo (if you just modify its module-data appropriately).

In-class exercise #1 Download a fresh copy of our ‘tempcdev.c’ module and edit it, so that it will create the ‘/dev/stash’ device-file when you install it. Then you can try using our ‘stash.c’ demo to send data from one task to another task, by using the ‘echo’ and ‘cat’ commands.

In-class exercise #2 Add a ‘get_info()’ function to this driver to create a pseudo-file (named ‘/proc/stash’) that will show the current contents of the ringbuffer (if any) and the current values for the ‘head’ and ‘tail’ buffer-indices. Don’t forget: use ‘create_proc_info_entry()’ in your ‘init_module()’ function, and use ‘remove_proc_entry()’ during ‘cleanup’.
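
For orientation only, here is a rough sketch of that ‘get_info’ pattern. On the 2.6-era kernels these slides target, the get_info callback took (buf, start, offset, count) arguments, but the exact prototype varies with kernel version, so treat this as an assumption to check against your own kernel headers.

    #include <linux/proc_fs.h>

    /* assumed 2.6-era prototype: int get_info( char *, char **, off_t, int ) */
    static int stash_get_info( char *buf, char **start, off_t off, int count )
    {
            int  i, len = 0;

            len += sprintf( buf + len, "head=%d  tail=%d\n", head, tail );
            for ( i = head; i != tail; i = ( i + 1 ) % RINGSIZE )
                    len += sprintf( buf + len, "%c", ring[ i ] );
            len += sprintf( buf + len, "\n" );
            return  len;
    }

    /* in init_module():    create_proc_info_entry( "stash", 0, NULL, stash_get_info ); */
    /* in cleanup_module():  remove_proc_entry( "stash", NULL );                        */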