Object Oriented Analysis & Design: SDL Threads
Contents
- Processes
- Thread concepts
- Creating threads
- Critical sections
- Synchronizing threads
- Using threads in games
Processes
A process consists of:
- Memory allocated to the process
- One or more threads of execution in the process
[Diagram: Process 1, Process 2, and Process 3 in memory, each with an entry (P1, P2, P3) on the scheduler queue]
- The CPU is assigned to each process in the queue in turn
- Each process has its own area of protected memory
Scheduling
On a computer with one CPU, the CPU is shared amongst processes:
- Each process waits on a queue for the CPU
- When the CPU becomes available, it is assigned to the next process which is ready to run and which has the highest priority
A process can be in one of three states:
- Running on the CPU
- Ready to run if the CPU becomes available
- Blocked, waiting for an event (like a disk read) before it is ready to execute
Preemptive Scheduling
In round-robin scheduling, each process on the queue is given the CPU in turn. What happens if a process does not want to relinquish the CPU?
- That process can hog the CPU
- Other processes will never get the CPU
There are two solutions:
- Cooperative multi-processing: each process voluntarily gives up the CPU to let other processes execute
- Preemptive multi-processing: each process gets a time slice, and when it is used up, the CPU is taken away and given to a new process
Priority
The goals of priorities are:
- Very important processes get the CPU first
- All processes get a chance at the CPU
- No process gets to hog the CPU
Hogging occurs when a process gets the CPU, uses all of its time slice, remains at high priority in the queue, and so consumes most of the CPU.
The solution is process aging: the longer a process is on the scheduler queue, the lower its priority becomes.
Thread Concepts
Each process:
- Is given the CPU once per scheduler cycle
- Has a pointer to the next instruction to be executed
But what if a single process:
- Had pointers to two or more next instructions to execute
- Had one entry in the scheduler queue for every next instruction to execute
We would have a multi-threaded process, which could have several threads of execution.
A Multi-Threaded Process
[Diagram: Process 1 contains Thread 1, Thread 2, and Thread 3, each with its own next-instruction pointer and its own entry (T1, T2, T3) on the scheduler queue]
Multi-threading allows one process to do several things at once.
Multi-core CPUs
Using multi-threading on a single CPU will yield some improvement, since one thread can execute while another thread is blocked.
Using multi-threading with multiple cores allows each thread to run on a separate core:
- This gives true parallelism
- The result is that your application can run several times faster
- This can make a huge difference in a game
Creating Threads
To use threading in SDL you should include:
- SDL_thread.h
- SDL_mutex.h
- SDL_timer.h (if using SDL_Delay or timers)
The work for a thread is done by a function which:
- Takes one void* argument pointing to data which can be passed to the thread
- Returns an int result when the thread terminates
Creating Threads

#include "SDL.h"
#include "SDL_thread.h"

int thread_worker(void *data)
{
    /* code to do the work of the thread */
    return 0;
}

int main(int argc, char **argv)
{
    SDL_Thread *t = SDL_CreateThread(thread_worker, NULL);
    int threadResult;
    SDL_WaitThread(t, &threadResult);   /* wait for the thread to finish */
    return 0;
}
Race Conditions
The threads work fine doing several things at once; trouble starts when two threads access the same data.
A thread can stop or resume at any time because:
- Its time slice expires
- It blocks waiting for a device
- There is an interrupt which the CPU must process
As a result of these unpredictable events:
- Threads proceed at their own unpredictable rates
- Unexpected things can happen when a thread changes shared data when it is not expected
Race Conditions
A man and his wife share access to a bank account, and the bank has failed to synchronize access to the shared account from ATMs.
One day:
- The man strikes it rich and wins $1,000,000 in the lottery
- The woman decides she wants to withdraw $50 from the account
- They both head to ATMs in different parts of town
We have a recipe for disaster...
Race Conditions

Time | Husband's ATM                       | Wife's ATM                | Balance
   0 |                                     |                           | $200
   1 | Start deposit of $1,000,000         |                           | $200
   2 |                                     | Start withdrawal of $50   | $200
   3 | Get balance of $200                 |                           | $200
   4 |                                     | Get balance of $200       | $200
   5 | Calculate new balance of $1,000,200 |                           | $200
   6 | Store new balance                   |                           | $1,000,200
   7 |                                     | Calculate balance of $150 | $1,000,200
   8 |                                     | Store new balance         | $150
   9 | Display balance                     |                           | $150
  10 |                                     | Display balance           | $150

Both ATMs read the old balance of $200, so the wife's store of $150 overwrites the husband's $1,000,200 and the deposit is lost.
Atomic Operations
The problem with the ATMs is:
- Both ATMs can access the balance at the same time
- One ATM cannot be guaranteed to finish before the other ATM does something
The solution is to make the access of the account from each ATM an operation which cannot be interrupted:
- This means that each ATM has exclusive access to the account until it relinquishes it
- This makes each ATM's access of the account atomic, meaning it cannot be interrupted
Critical Sections
A piece of code which must not be executed by more than one thread at a time is called a critical section.
SDL provides a mutex to mark the start and end of a critical section:
- A mutex is a variable whose value can be set or retrieved atomically
There are two kinds of mutexes:
- Binary mutexes, which are either locked or not
- Counting mutexes, which count the number of times they are locked and must be unlocked the same number of times
Mutex Operations
SDL_mutexP(SDL_mutex *m)
- Increments the lock count on the mutex
- If the count is greater than zero and was set by another thread, blocks until the mutex returns to zero
- The same thread can increment the mutex as many times as it wants
SDL_mutexV(SDL_mutex *m)
- Decrements the lock count on the mutex
- The mutex remains locked to other threads until its count returns to zero
Using Mutexes
Any code which accesses shared data should be made atomic. This can be accomplished by calling:
- SDL_mutexP(m) at the start of the code
- SDL_mutexV(m) at the end of the code
This guarantees that only one thread can execute the critical section at a time, which stops race conditions since no other thread can enter the critical section while one thread is manipulating the shared data.
Thread Communication
Ensuring that threads cannot access shared data at the same time is not enough. Sometimes, one thread cannot proceed until another thread completes some action.
This requires thread synchronization, where:
- A thread can block waiting for another thread, not consuming CPU cycles while it waits
- Another thread can signal the waiting thread(s) that it can proceed
Condition Variables
A condition variable represents a list of threads waiting for a signal to continue to run.
A thread can:
- Wait on a condition variable, suspending itself until another thread sends a signal to the condition variable
- Signal the condition variable, causing one of the threads waiting on the variable to start to execute
All operations on condition variables must be executed from within a critical section. Waiting on a condition variable will let other threads run, even though the thread which executed the wait is still inside its critical section.
Condition Variables
SDL_cond *condition = SDL_CreateCond();
- Creates a condition variable
SDL_DestroyCond(condition);
- Destroys a condition variable
SDL_CondWait(condition, mutex)
- Blocks this thread until it is signalled to continue
- Temporarily releases the lock on the mutex while it waits
SDL_CondSignal(condition)
- Wakes up one of the threads waiting on the condition
A Blocking Queue
A blocking queue is a queue onto which objects can be placed:
- Objects can be removed from the queue, but if the queue is empty, the removing thread will block until something is placed onto the queue
- Each time an object is placed onto the queue, any waiting threads are signalled that an item has been placed on the queue
The blocking queue is perfect for implementing the producer-consumer problem.
Thread Pools
Usually, it is expensive to create a new thread. When you have periodic work to perform, creating and destroying threads can consume a lot of time.
A better solution is to create a pool of threads:
- Several threads are created at once
- The threads block until work becomes available
- Work for the threads is placed on a queue
- Work on the queue is dispatched to the first available thread
- After a thread finishes performing the work, it blocks until more work becomes available
Using Threads in Games
Multi-threading a game can:
- Make use of all cores on the CPU
- Smooth out jerky animation in the game
There are many ways to incorporate threading in a game:
- A series of synchronous concurrent functions
- A series of asynchronous concurrent functions
- Parallelism based on data structures
- Parallelism based on the requirements of individual objects
Synchronous Concurrent Functions
[Diagram: Read Input, AI, Physics, Animation, and Game Logic run in parallel, feeding into Render]
The game loop is split into functions which can be executed in parallel. The rendering step must wait for all of these tasks to complete before rendering can take place.
Asynchronous Concurrent Functions
[Diagram: Game Logic and Physics run continuously on their own threads]
Calculations are performed continuously in parallel, with no synchronization. The game loop is on a separate thread and renders whatever state has been calculated when it is time to render a frame.
Data Parallelism
[Diagram: enemy positions and the player position are updated on separate threads]
The data in the game is split up and calculated in parallel. If you are moving the player and enemies, separate threads can be used to calculate the new positions of each in parallel. The split into threads is based on the data to be calculated rather than the functions to be invoked.
Parallel Objects
Each object in the game is controlled by its own thread. The thread could calculate:
- AI
- Physics
- Game logic
Not all objects necessarily move continuously:
- A thread pool could be used, and objects which move can get a thread from the pool to do their calculations
- Objects which do not have to move do not use a thread
This approach can be synchronized with rendering, or run asynchronously.