1
Fiber Based Job Systems
Seth England
2
Preemptive Scheduling
- Competition for resources
- Use of synchronization primitives to prevent race conditions in shared data
3
Preemptive Scheduling
5
There are some problems with this model:
- Locks are slow
- It is very hard to reason about the state our data is in
- We want to map multiple threads to a single engine system if necessary
6
Cooperative Scheduling
- We can improve upon all of these things
- What if we could schedule the tasks in our engine in such a way that they never try to modify the same thing at the same time?
- This would eliminate locks and make multithreading much easier to reason about
7
Cooperative Scheduling
- Instead of letting our program compete for data, we can explicitly schedule our tasks so that they no longer need to
8
Cooperative Scheduling
- It's hard to multithread an engine this way if it looks like this
9
Cooperative Scheduling
- We need to break down the tasks in the engine in order to schedule them effectively
- These broken-down tasks are usually referred to as jobs
10
Jobs
- A common solution for multithreading modern game engines is a job system
- A job is a grouping of data and a transformation
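As a concrete reference point for the later slides, here is a minimal sketch of that pairing; the field names and the bookkeeping flag are illustrative, not the talk's actual types.

```cpp
// Minimal sketch of a job: a transformation plus the data it runs on.
// The `finished` flag is assumed scheduler bookkeeping, not part of the
// talk's stated definition.
struct Job
{
    void (*transform)(void* data);  // the transformation to run
    void* data;                     // the data it transforms
    bool  finished = false;         // set once the job has completed
};
```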
11
Job System
- A collection of worker threads
- Assigns jobs to those threads from various queues
12
Job System
- In my job system there are 3 types of queues (in order of priority):
  - The stalled queue contains jobs that have run but are not finished
  - The immediate queue
  - The deferred queue
- I've seen other systems with low, medium, and high, but I find the decision is pretty binary (now or later?)
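A sketch of those three queues and their drain order, assuming the Job struct above; the container choice and all names are illustrative.

```cpp
#include <deque>
#include <initializer_list>

struct Job;  // the data + transformation pair sketched earlier

struct JobQueues
{
    std::deque<Job*> stalled;    // jobs that have run but are not finished
    std::deque<Job*> immediate;  // run as soon as a worker is free
    std::deque<Job*> deferred;   // run once nothing more urgent remains

    // Pop in priority order: stalled first, then immediate, then deferred.
    Job* PopNext()
    {
        for (std::deque<Job*>* queue : { &stalled, &immediate, &deferred })
        {
            if (!queue->empty())
            {
                Job* job = queue->front();
                queue->pop_front();
                return job;
            }
        }
        return nullptr;  // nothing to do right now
    }
};
```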
13
Fibers
- Units of execution that are manually scheduled by the user
- They are not preempted by other fibers via the OS; they are manually scheduled by the application
- They run on threads, assuming the identity of the thread they're running on by swapping out a few pieces of thread context (stack pointer, instruction pointer, etc.)
- Operating-system specific, though almost all OSes support this feature
- They can do wonderful things that we will get to later
14
Windows Functions
- CreateFiber – Takes the size of the stack you want the fiber to have, the starting address (a function pointer), and a void pointer to be passed to that function. I pass the job in through this pointer.
- Creating a fiber has nowhere near as much overhead as creating a thread; however, the stack still needs to be allocated and there's no way (on Windows) to specify a location for that stack
- This can be circumvented with a fiber pool, which we will go over later
- CreateFiber returns what is effectively a handle that lets us schedule that fiber and later delete it with DeleteFiber
15
Windows Functions
- ConvertThreadToFiber – Since fibers can only be switched to from other fibers, worker threads need to call this once they are created
- SwitchToFiber – Makes the calling thread assume the identity of the fiber, swapping out its instruction and stack pointers
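A minimal sketch of how these calls fit together; the worker fiber handle is stashed in a global purely to keep the example short, and error handling is omitted.

```cpp
#include <windows.h>

static void* g_workerFiber = nullptr;

// Matches the LPFIBER_START_ROUTINE signature CreateFiber expects.
void CALLBACK JobFiberProc(void* jobData)
{
    // ... run the job passed in through jobData ...

    // A fiber entry point should never return; hand control back instead.
    SwitchToFiber(g_workerFiber);
}

int main()
{
    // A thread must become a fiber before it can call SwitchToFiber.
    g_workerFiber = ConvertThreadToFiber(nullptr);

    // 64 KB stack; the fiber does not run until we switch to it.
    void* jobFiber = CreateFiber(64 * 1024, JobFiberProc, /*jobData*/ nullptr);

    SwitchToFiber(jobFiber);  // jump into the job...
    DeleteFiber(jobFiber);    // ...and clean it up after it switches back

    return 0;
}
```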
16
Worker Threads
- The job system uses a condition variable to signal when a worker thread needs to wake up
17
Execute Job Queue
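The code from this slide is not reproduced in the transcript; the following is a minimal sketch of a worker loop built around a condition variable, with the scheduler reduced to a single ready list of job fibers. All names are illustrative.

```cpp
#include <windows.h>

#include <condition_variable>
#include <deque>
#include <mutex>

// Reduced scheduler: one ready list instead of the stalled/immediate/deferred
// queues, which the real system would drain in priority order.
struct Scheduler
{
    std::mutex              mutex;
    std::condition_variable workAvailable;  // signalled whenever a fiber is enqueued
    std::deque<void*>       readyFibers;    // fibers for jobs that are ready to run
};

void WorkerThreadMain(Scheduler& sched)
{
    // The worker must become a fiber itself before it may call SwitchToFiber.
    ConvertThreadToFiber(nullptr);

    for (;;)
    {
        void* jobFiber = nullptr;
        {
            std::unique_lock<std::mutex> lock(sched.mutex);
            // Sleep until the job system signals that work has been enqueued.
            sched.workAvailable.wait(lock, [&] { return !sched.readyFibers.empty(); });
            jobFiber = sched.readyFibers.front();
            sched.readyFibers.pop_front();
        }

        // Assume the identity of the job's fiber; control comes back here when
        // the job finishes (or stalls) and switches back to this worker's fiber.
        SwitchToFiber(jobFiber);
    }
}
```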
18
Example Job
19
Declare Job
- Just a macro that produces the function declaration Windows requires for the entry point passed to CreateFiber
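One plausible shape for such a macro: CreateFiber expects an LPFIBER_START_ROUTINE, i.e. a function of the form void CALLBACK Fn(void*). The macro name and the example job below are illustrative.

```cpp
#include <windows.h>

#define DECLARE_JOB(JobName) void CALLBACK JobName(void* jobData)

// Expands to: void CALLBACK UpdateTransformsJob(void* jobData)
DECLARE_JOB(UpdateTransformsJob)
{
    // jobData is the pointer handed to CreateFiber, i.e. the job itself.
    // Body omitted; a real job never returns from here, it switches back to
    // the worker fiber (see the Start Job / End Job sketch later).
    (void)jobData;
}
```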
20
Start Job
- Basically sets up the "environment" of the job
- Takes note of where the stack pointer is
- Declares the start of the inner scope
21
Start Job
22
End Job
- Close the scope
- Define a symbol used to skip to the end of the job
- Mark the job as finished
- Switch to the worker thread
23
End Job
24
Memory Leaks
- This is a memory leak: the data structure never goes out of scope
- This is why we declare an inner scope, and when we want to exit a job early, we jump to the end of that scope
- The compiler will throw an error if the goto would skip the initialization of non-POD objects
25
Reusing Fibers
- Windows does not allow us to pass in our own memory to use as the stack
- To get around this, when a job ends we simply goto the start-of-job symbol, resetting the instruction pointer, and we reset the stack pointer
- The stack pointer is saved when the fiber first runs
- Now we can reuse fibers over and over
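A loose sketch of what the Start Job / End Job macros could expand to, matching the last few slides. Every name here is illustrative, and the real macros also record the stack pointer on first entry and restore it on reuse, a compiler- and platform-specific step that is only hinted at in the comments.

```cpp
#include <windows.h>

void  MarkCurrentJobFinished();  /* hypothetical: flags the running job as done */
void* WorkerFiber();             /* hypothetical: the worker thread's fiber handle */

#define START_JOB()                                                           \
    start_of_job:                                                             \
    /* real version: record the stack pointer here on the first run */        \
    {   /* inner scope: locals declared below die when this scope closes */

#define EXIT_JOB_EARLY() goto end_of_job /* early-out; destructors still run */

#define END_JOB()                                                             \
        goto end_of_job; /* the normal path also flows through the label */   \
    end_of_job: ;                                                             \
    }   /* scope closes: destructors for the job's locals run here */         \
    MarkCurrentJobFinished();        /* mark the job as finished */           \
    SwitchToFiber(WorkerFiber());    /* switch back to the worker thread */   \
    /* if this fiber is reused for a new job, control returns right here; */  \
    /* jump to the top (resetting the saved stack pointer) to start clean */  \
    goto start_of_job
```

An early exit that would jump over the declaration of a non-trivially-constructed local is rejected by the compiler, which is the error the Memory Leaks slide refers to.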
26
Enqueuing Jobs
- When SwitchToFiber is called, the state of the current fiber is saved
- When SwitchToFiber is called again on the same fiber, the fiber continues running at the same place, with the same stack, as before!
- This is the flexibility of using fibers as opposed to threads: in a job system built directly on threads, a job that returns will begin again from the start of the job function when it is next called
- We'll use this flexibility to great effect later
27
Enqueuing Jobs
- When inside a job, we can enqueue other jobs to run
- The enqueuing job is then put in the stalled queue
- The other jobs run
- When all the queued jobs have run, the job that enqueued them starts running again from where it left off
28
Enqueuing Jobs
- Jobs are created on the stack
- This is OK because the stack of that job is valid memory so long as it has not gone out of scope
29
Enqueuing Jobs
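A sketch of what a job that spawns children might look like, building on the Job struct and the DECLARE/START/END_JOB macros sketched earlier; RunJobsAndWait is an assumed helper that enqueues the children, moves the calling job to the stalled queue, and switches back to the worker.

```cpp
void SimulateEmitter(void* emitter);        // some transformation (placeholder)
void RunJobsAndWait(Job* jobs, int count);  // assumed: enqueue, stall, resume when done

DECLARE_JOB(UpdateParticlesJob)
{
    START_JOB();
    (void)jobData;  // unused in this sketch

    // The child jobs live on this fiber's stack. That is safe because the
    // stack stays valid while this job waits in the stalled queue.
    Job children[2] = {
        { SimulateEmitter, /*data*/ nullptr },
        { SimulateEmitter, /*data*/ nullptr },
    };

    // Enqueue the children, move this job to the stalled queue, and switch
    // back to the worker fiber. Once every child has finished, the scheduler
    // switches back to this fiber and execution resumes on the next line,
    // with the stack exactly as it was.
    RunJobsAndWait(children, 2);

    // ... use whatever the children produced ...

    END_JOB();
}
```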
30
Execution Manager
- A collection of "Execution Nodes"
- An execution node is a collection of one or more jobs that are queued simultaneously
- A directed dependency graph of execution nodes, where a parent is connected to one or more children and vice versa
- Connections represent a data dependency
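A sketch of the data this implies, assuming the Job struct from earlier; the field names (and the per-run counter) are illustrative.

```cpp
#include <vector>

struct ExecutionNode
{
    std::vector<Job>            jobs;      // jobs that are queued together
    std::vector<ExecutionNode*> children;  // nodes that depend on this one
    int dependencies = 0;  // how many parents must finish before this node runs
    int remaining    = 0;  // reset to `dependencies` at the start of each run
};

struct ExecutionManager
{
    std::vector<ExecutionNode*> nodes;  // every registered node (owned by the manager)
    ExecutionNode               root;   // entry point used by Execute Root
};
```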
31
Execution Manager
32
Register Execution Node
- Takes a config (name, type)
- Takes an array of transformations
- Takes an array of data
- Each data pointer will be passed to its job every time it runs
- Groups these arrays together and creates jobs out of them
- Puts these new jobs into an execution node
33
Add Dependency
- Add a reference to the child to the parent
- Increment the child's dependency counter
34
Register Execution Node
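A sketch of registration and dependency wiring as described on the previous two slides, building on the structs above; the config shape, parameter types, and names are all illustrative.

```cpp
#include <cstddef>

struct NodeConfig { const char* name; int type; };

ExecutionNode* RegisterExecutionNode(ExecutionManager& mgr,
                                     const NodeConfig& config,
                                     void (**transforms)(void*),
                                     void** data,
                                     std::size_t count)
{
    (void)config;  // name/type would feed the visual tool's export

    // The manager owns the node; cleanup is omitted in this sketch.
    ExecutionNode* node = new ExecutionNode();

    // Pair each transformation with its data pointer to form a job.
    for (std::size_t i = 0; i < count; ++i)
        node->jobs.push_back(Job{ transforms[i], data[i] });

    mgr.nodes.push_back(node);
    return node;
}

void AddDependency(ExecutionNode& parent, ExecutionNode& child)
{
    parent.children.push_back(&child);  // parent knows which nodes to release
    ++child.dependencies;               // child now waits on one more parent
}
```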
35
Execute Root
- The starting point for the execution manager
- Decrements a counter on each of its children
- If that counter reaches 0, that child is queued
- Each of the children does this in turn, so the process continues recursively
- This recursion will progress and then "unwind"; when we get back to the root node, we're done
- Note that this recursion does not result in a large runtime stack: while nodes are waiting they aren't actually running
36
Execute Root
37
Execute Job Node
- Itself a job
- Queues up the jobs in the node
- After those jobs finish, we queue up each child whose counter has reached zero (no dependencies remain)
38
Execute Job Node
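A sketch of the node-execution job and the root kick-off described above, building on the execution-node structs and the job macros sketched earlier; QueueNodeJob and RunJobsAndWait are assumed helpers, and the completion handshake back to the root is left out.

```cpp
void RunJobsAndWait(Job* jobs, int count);  // assumed: enqueue, stall, resume when done
void QueueNodeJob(ExecutionNode* node);     // assumed: enqueue ExecuteNodeJob with `node` as its data

DECLARE_JOB(ExecuteNodeJob)  // jobData is the ExecutionNode* to run
{
    START_JOB();
    ExecutionNode* node = static_cast<ExecutionNode*>(jobData);

    // Queue up every job in this node, stall, and resume once they all finish.
    RunJobsAndWait(node->jobs.data(), static_cast<int>(node->jobs.size()));

    // Release the children: a child whose last dependency just finished is
    // queued, and the same logic then repeats from that child.
    for (ExecutionNode* child : node->children)
        if (--child->remaining == 0)
            QueueNodeJob(child);

    END_JOB();
}

void ExecuteRoot(ExecutionManager& mgr)
{
    // Re-arm every node's dependency counter for this pass over the graph.
    for (ExecutionNode* node : mgr.nodes)
        node->remaining = node->dependencies;

    // Kick off the root node; the counters drive the rest of the graph.
    QueueNodeJob(&mgr.root);
}
```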
39
Developer Workflow
40
- Write code as you normally would
- Put that code in job(s)
- Put those job(s) inside of execution node(s)
- Schedule the execution node(s) in the dependency graph
- Basically, the only difference between the single-threaded and multithreaded workflow is that you have to schedule your code
41
Developer Workflow
- Write a visual tool; it will make your life much easier
- When the game exe runs, export the registered execution nodes to a text file
- The tool then reads the execution nodes from that file
- Edit connections in the tool and export a file
- When the game runs, it reads the file exported from the tool as the dependencies
- This could probably be made more robust with good reflection
42
Developer Workflow
44
Uses
- Core engine systems (graphics, physics, AI) are basically divided into 3 phases:
  - An extraction phase, where each system creates a copy of shared data
  - An update phase
  - A sync phase, where shared data is written back
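As a usage example, here is a sketch of wiring one system's three phases together with the registration and dependency calls sketched earlier; the phase functions and node names are placeholders.

```cpp
void ExtractPhysics(void* world);  // copy shared data (placeholder)
void UpdatePhysics(void* world);   // simulate on the copy (placeholder)
void SyncPhysics(void* world);     // write results back (placeholder)

void BuildPhysicsPhases(ExecutionManager& mgr, void* physicsWorld)
{
    void (*extract[])(void*) = { ExtractPhysics };
    void (*update[])(void*)  = { UpdatePhysics };
    void (*sync[])(void*)    = { SyncPhysics };
    void* data[]             = { physicsWorld };

    ExecutionNode* e = RegisterExecutionNode(mgr, { "PhysicsExtract", 0 }, extract, data, 1);
    ExecutionNode* u = RegisterExecutionNode(mgr, { "PhysicsUpdate",  0 }, update,  data, 1);
    ExecutionNode* s = RegisterExecutionNode(mgr, { "PhysicsSync",    0 }, sync,    data, 1);

    // Extract must finish before update, and update before sync.
    AddDependency(*e, *u);
    AddDependency(*u, *s);
}
```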
45
Physics Uses
- Extract: transform position and rotation
- Update:
  - Rigidbody/collider matrices
  - Multithreaded SAP (sweep and prune)
  - Collision detection, where multithreading is trivial
  - Constraints, which can be divided into non-dependent islands
  - Integration
- Sync: write results back to the transform
46
Graphics Uses
- Extract position, rotation, and scale from the transform
- Organized into "Render Lanes" of items with shared state (particles, static models, animated models)
- More info on multithreaded rendering can be found elsewhere, e.g. http://www.gdcvault.com/play/1021926/Destiny-s-Multithreaded-Rendering
47
AI Uses
- AI extracts data from the transform (position, orientation) and from physics (velocity, forces)
- AI calculates the deltas that need to be applied (forces)
- AI waits until physics is done integrating to apply these deltas
48
Postmortem
- Very few multithreading issues
- Issues were easily identifiable and fixable
- No game logic multithreading
- Very easy to use