Linux Thread Programming


Linux Thread Programming
Dr. Yann-Hang Lee, yhlee@asu.edu, (480) 727-7507
Computer Science & Engineering Department, Arizona State University, Tempe, AZ 85287

Multiple Events in One Thread
Service every pending event in each pass:

void compute() {
  if (event1) action1;
  if (event2) action2;
  if (event3) action3;
  ...
}

or

for (i = 0; i < n; i++)
  if (event[i]) action[i];

Service only the first (highest-priority) pending event in each pass:

void compute() {
  if (event1) action1;
  else if (event2) action2;
  else if (event3) action3;
  ...
}

or

for (i = 0; i < n; i++) {
  if (event[i]) { action[i]; break; }
}
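
A runnable sketch of the prioritized variant, assuming the event flags are set elsewhere (e.g., by an ISR); the flag array and handler names are illustrative, not from the slide:

#include <stdio.h>

#define N_EVENTS 3

static int event[N_EVENTS];       /* flags set by ISRs or other threads */

static void action0(void) { printf("handle event 0\n"); }
static void action1(void) { printf("handle event 1\n"); }
static void action2(void) { printf("handle event 2\n"); }
static void (*action[N_EVENTS])(void) = { action0, action1, action2 };

/* One pass: service only the highest-priority (lowest-index) pending event. */
static void compute(void)
{
    for (int i = 0; i < N_EVENTS; i++) {
        if (event[i]) {
            event[i] = 0;         /* clear the flag before acting */
            action[i]();
            break;                /* at most one action per pass */
        }
    }
}

int main(void)
{
    event[1] = 1;
    event[2] = 1;
    compute();                    /* handles event 1 */
    compute();                    /* handles event 2 */
    return 0;
}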

Event and Time-Driven Programming Models
Each task is performed in a thread (an execution unit).
- Event-driven model: after initialization, the thread waits for an external trigger, then takes actions and changes system state; an ISR sets/clears the event flags.
- Time-driven model: after initialization, each period the task records start_time = time(), performs its computation, then calls Sleep(period - (time() - start_time)).

Periodic Thread in POSIX
Sleeping for a relative interval can be problematic (the wake-up time drifts); use clock_nanosleep with an absolute deadline.

void *thread_body(void *arg) {
  struct timespec next, period;
  sem_t *s_act;
  <local variables>
  <read parameters from arg>
  sem_wait(s_act);
  clock_gettime(CLOCK_MONOTONIC, &next);
  while (condition) {
    <task body>
    next = next + period;   /* advance the absolute deadline by one period */
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, 0);
  }
}
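
Note that struct timespec values cannot be added with the + operator in C; a minimal sketch of the deadline update the slide abbreviates as "next = next + period" (the helper name timespec_add is an assumption, not part of the slide):

#include <time.h>

/* Hypothetical helper: *t += *delta, assuming both inputs are normalized. */
static void timespec_add(struct timespec *t, const struct timespec *delta)
{
    t->tv_sec  += delta->tv_sec;
    t->tv_nsec += delta->tv_nsec;
    if (t->tv_nsec >= 1000000000L) {
        t->tv_sec  += 1;
        t->tv_nsec -= 1000000000L;
    }
}

/* Inside the periodic loop:
 *   timespec_add(&next, &period);
 *   clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
 */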

Thread and Process
Process: an entity to which system resources (CPU time, memory, etc.) are allocated; an address space with one or more threads executing within that address space, plus the required system resources for those threads.
Thread: a sequence of control within a process that shares the resources of that process.
Lightweight process (LWP): an LWP may share resources such as the address space, open files, etc.; clone vs. fork decides whether the address space, file descriptors, and so on are shared or not.
In the Linux kernel, threads are implemented as standard processes (LWPs) that share certain resources with other processes; there are no special scheduling semantics or data structures to represent threads.

Why Threads
Advantages:
- the overhead of creating a thread is significantly less than that of creating a process
- multitasking, i.e., one process can serve multiple clients
- switching between threads requires the OS to do much less work than switching between processes
Drawbacks:
- not as widely available as process features
- writing multithreaded programs requires more careful thought
- more difficult to debug than single-threaded programs
- on a single-processor machine, creating several threads in a program may not produce any increase in performance (there are only so many CPU cycles to be had)
("The Problem with Threads", http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf)

Shared Code and Reentrancy
A single copy of code invoked by different concurrent tasks must be reentrant:
- pure code
- variables on the task stack (parameters and locals)
- guarded global and static variables (protected with a semaphore or taskLock)
- variables in the task context

taskOne ( ) { ..... myFunc ( ); }
taskTwo ( ) { ..... myFunc ( ); }
myFunc ( ) { ..... }
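
A minimal sketch of the "guarded global" case with POSIX threads (the counter and lock names are illustrative, not from the slide):

#include <pthread.h>

static int call_count;                      /* shared state: not reentrant on its own */
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;

/* myFunc can safely be called from taskOne and taskTwo concurrently:
 * the shared counter is guarded, and everything else lives on the caller's stack. */
void myFunc(void)
{
    int local = 0;                          /* per-thread, on the stack */
    pthread_mutex_lock(&count_lock);
    call_count++;                           /* guarded global */
    pthread_mutex_unlock(&count_lock);
    (void)local;
}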

Preemption
Preemptivity: suspend the executing job and switch to another one.
- Should a job (or a portion of a job) be preemptable? e.g., transmitting a UDP packet, writing a block of data to disk, a busy-waiting loop.
- Context switch: save the current process state (PC, registers, etc.) and initiate a ready job.
Preemptivity of resources: concurrent use of resources or critical sections -- lock, semaphore, disable interrupts.
How can a context switch be triggered? Assume you want to preempt an executing job -- why?
- a higher-priority job arrives
- the running job uses up its time quantum
- the running job waits (blocks)
[State diagram: executing, ready, blocked, suspended; transitions: dispatched, wake-up]

POSIX Thread (pthread)
IEEE's POSIX Threads Model: a programming model for threads on UNIX platforms; pthreads are included in the international standards.
pthreads programming model:
- creation of threads
- managing thread execution
- managing the shared resources of the process
main thread:
- the initial thread, created when main() is invoked
- it can create daughter threads
- if the main thread returns, the process terminates even if there are still running threads in that process; to avoid terminating the entire process, end the main thread with pthread_exit()
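
A minimal sketch of that last point, assuming an illustrative worker that outlives main():

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    sleep(1);                               /* still running after main() ends */
    printf("worker finishes after main\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    printf("main thread exits, process keeps running\n");
    pthread_exit(NULL);                     /* ends only the main thread, not the process */
}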

Example of Thread Creation

#include <pthread.h>
#include <stdio.h>

void *thread_routine(void *arg) {
  printf("Inside newly created thread \n");
  return NULL;
}

int main() {
  pthread_t thread_id;     // thread handle
  void *thread_result;
  pthread_create( &thread_id, NULL, thread_routine, NULL );
  printf("Inside main thread \n");
  pthread_join( thread_id, &thread_result );
  return 0;
}
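
The same pattern can pass an argument into the thread and collect a return value through pthread_join; a small sketch (the integer payload and the name square are illustrative, not from the slide):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *square(void *arg)
{
    int n = *(int *)arg;
    int *result = malloc(sizeof *result);   /* heap storage outlives the thread */
    *result = n * n;
    return result;                          /* value delivered to pthread_join */
}

int main(void)
{
    pthread_t tid;
    int input = 7;
    void *out;

    pthread_create(&tid, NULL, square, &input);
    pthread_join(tid, &out);
    printf("square = %d\n", *(int *)out);   /* prints 49 */
    free(out);
    return 0;
}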

Linux Thread State Transition (Stallings, Operating Systems: Internals and Design Principles)

Multiple Threads -- Scheduling in Linux
Three classes of threads for scheduling purposes:
- real-time FIFO
- real-time round-robin
- timesharing (for all non-real-time processes)
Process priority:
- static -- used for real-time processes, in the range 0-99; no time quanta; processes run until they block or yield voluntarily
- dynamic -- used for conventional processes, in the range 100-139; adjusted based on time run and priority class
All real-time processes have higher priority than any conventional process.

Pthread Attribute

struct sched_param {
  int sched_priority;
};

struct pthread_attr {
  void*       stack_base;
  rt_uint16_t stack_size;    /* stack size of thread */
  rt_uint8_t  priority;      /* priority of thread */
  rt_uint8_t  detachstate;   /* detach state */
  rt_uint8_t  policy;        /* scheduler policy */
  rt_uint8_t  inheritsched;  /* inherit parent prio/policy */
};
typedef struct pthread_attr pthread_attr_t;
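
These fields are normally filled in through the portable pthread_attr_* calls; a sketch that requests the real-time FIFO class, assuming the process has the privilege to use it (otherwise pthread_create fails with EPERM):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *rt_worker(void *arg) { return NULL; }

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param sp;
    int err;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED); /* do not inherit parent policy */
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);              /* real-time FIFO class */
    sp.sched_priority = 50;                                      /* 1..99 for SCHED_FIFO */
    pthread_attr_setschedparam(&attr, &sp);

    err = pthread_create(&tid, &attr, rt_worker, NULL);
    if (err != 0)
        fprintf(stderr, "pthread_create: %s\n", strerror(err));
    else
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}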

Linux ps and /proc/[pid]/stat
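
For reference, this information can be inspected from the shell: ps -eLf prints one line per thread (the LWP column is the thread id, NLWP the thread count), and each thread of a process also appears under /proc/[pid]/task/[tid]/ with its own stat file.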

Linux Kernel Thread
A way to implement background tasks inside the kernel.

static struct task_struct *tsk;

static int thread_function(void *data)
{
  int time_count = 0;
  do {
    printk(KERN_INFO "thread_function: %d times\n", ++time_count);
    msleep(1000);
  } while (!kthread_should_stop() && time_count <= 30);
  return time_count;
}

static int hello_init(void)
{
  tsk = kthread_run(thread_function, NULL, "mythread%d", 1);
  if (IS_ERR(tsk)) { …. }
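
The matching teardown is not on the slide; a sketch of stopping the thread from the module exit path, assuming the usual module boilerplate and that the task reference is still valid when the module unloads:

static void hello_exit(void)
{
    /* kthread_stop() sets the flag polled by kthread_should_stop()
     * and waits for thread_function() to return. */
    if (!IS_ERR(tsk))
        kthread_stop(tsk);
}

module_init(hello_init);
module_exit(hello_exit);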

When Synchronization Is Necessary
A race condition can occur when the outcome of a computation depends on how two or more interleaved kernel control paths are nested.
The critical regions in exception handlers, interrupt handlers, deferrable functions, and kernel threads must be identified and protected:
- on a single CPU, a critical region can be implemented by disabling interrupts while accessing the shared data
- if the data is shared only by the service routines of system calls, a critical region can be implemented by disabling kernel preemption (interrupts remain enabled) while accessing the shared data
What about multiprocessor (SMP) systems? Different synchronization techniques are necessary for data accessed by multiple CPUs.
Note that interrupts can be nested, but they are non-blocking and are not preempted by system calls.

Thread Synchronization -- Mutex (1)
Mutual exclusion (mutex):
- guards against multiple threads modifying the same shared data simultaneously
- provides locking/unlocking of critical code sections where shared data is modified
Basic mutex functions:
  int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *mutexattr);
  int pthread_mutex_lock(pthread_mutex_t *mutex);
  int pthread_mutex_unlock(pthread_mutex_t *mutex);
  int pthread_mutex_destroy(pthread_mutex_t *mutex);
The data type pthread_mutex_t is designated for mutexes; the attributes of a mutex can be controlled using the pthread_mutex_init() function.

Example: Mutex

#include <pthread.h>
...
pthread_mutex_t my_mutex;   // should be of global scope

int main() {
  int tmp;
  ...
  tmp = pthread_mutex_init( &my_mutex, NULL );   // initialize the mutex
  // create threads
  pthread_mutex_lock( &my_mutex );
  do_something_private();
  pthread_mutex_unlock( &my_mutex );
  return 0;
}

Thread Synchronization -- Semaphore (2)
Creating a semaphore:
  int sem_init(sem_t *sem, int pshared, unsigned int value);
- initializes the semaphore object pointed to by sem
- pshared is a sharing option; a value of 0 means the semaphore is local to the calling process
- value gives the initial value of the semaphore
Terminating a semaphore:
  int sem_destroy(sem_t *sem);
Semaphore control:
  int sem_post(sem_t *sem);
  int sem_wait(sem_t *sem);
sem_post atomically increases the value of the semaphore by 1; sem_wait atomically decreases it by 1, but always waits until the semaphore has a non-zero value first.

Example: Semaphore

#include <pthread.h>
#include <semaphore.h>

sem_t semaphore;   // also a global variable, just like a mutex

void *thread_function( void *arg ) {
  sem_wait( &semaphore );
  perform_task();
  pthread_exit( NULL );
}

int main() {
  int tmp;
  tmp = sem_init( &semaphore, 0, 0 );   // initialize the semaphore
  pthread_create( &thread[i], NULL, thread_function, NULL );   // create threads
  while ( still_has_something_to_do() )
    sem_post( &semaphore );
  ...
  pthread_join( thread[i], NULL );
  sem_destroy( &semaphore );
  return 0;
}

Synchronization in the Linux Kernel
The old Linux kernel ran all system services to completion or until they blocked (waiting for I/O). When it was extended to SMP, a single lock was put around the kernel code to prevent more than one CPU at a time from running in the kernel.
Kernel preemption: a process running in kernel mode can be replaced by another process while in the middle of a kernel function. In the example, process B may be woken up by a timer and have higher priority.
Why? To reduce dispatch latency.
(Christopher Hallinan, "Embedded Linux Primer: A Practical Real-World Approach")

Spinlock
Ensures mutual exclusion using a busy-wait lock:
- if the lock is available, it is taken, the mutually exclusive action is performed, and then the lock is released
- if the lock is not available, the thread busy-waits on the lock until it is available -- it keeps spinning, wasting processor time
- if the waiting duration is short, this is faster than putting the thread to sleep and waking it up later when the lock becomes available
- really only useful in SMP systems
Spinlock with local CPU interrupt disable:
  spin_lock_irqsave( &my_spinlock, flags );
  // critical section
  spin_unlock_irqrestore( &my_spinlock, flags );
Reader/writer spinlock -- allows multiple readers when there is no writer.
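
A slightly fuller sketch of those calls in kernel code (the lock and counter names are illustrative):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_spinlock);   /* statically initialized spinlock */
static int shared_counter;

void bump_counter(void)
{
    unsigned long flags;

    /* Take the lock and disable local interrupts, so that neither another
     * CPU nor a local interrupt handler can enter the critical section. */
    spin_lock_irqsave(&my_spinlock, flags);
    shared_counter++;
    spin_unlock_irqrestore(&my_spinlock, flags);
}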

Semaphore
Kernel semaphores:
- struct semaphore: count, wait queue, and number of sleepers
- void sema_init(struct semaphore *sem, int val);   // initialize the semaphore's counter sem->count to the given value
- inline void down(struct semaphore *sem);   // try to enter the critical section by decreasing sem->count
- inline void up(struct semaphore *sem);   // release the semaphore
- a blocked thread can sleep in TASK_UNINTERRUPTIBLE or TASK_INTERRUPTIBLE (wakeable by a timer or signal)
Special case -- mutexes (binary semaphores):
- void init_MUTEX(struct semaphore *sem)
- void init_MUTEX_LOCKED(struct semaphore *sem)
Read/write semaphores
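
A short usage sketch on a recent kernel, where a count of 1 gives the binary (mutex-like) behaviour; the names my_sem, my_module_init, and enter_critical are illustrative:

#include <linux/semaphore.h>

static struct semaphore my_sem;

static int my_module_init(void)
{
    sema_init(&my_sem, 1);               /* count 1: behaves as a mutex */
    return 0;
}

void enter_critical(void)
{
    if (down_interruptible(&my_sem))     /* sleep until available; can be woken by a signal */
        return;                          /* interrupted before acquiring */
    /* ... critical section ... */
    up(&my_sem);                         /* release */
}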