A Pipeline for Lockless Processing of Sound Data
David Thall, Insomniac Games
or: How I Learned to Stop Worrying and Love Concurrent Programming
David Thall, Insomniac Games
Our Goal
Remove fixed pipeline optimizations from the sound engine
–Stop packing runtime sound assets from dependency graphs in the builders
Dependency graphs only describe ‘what’ to load (an expensive proposition)
–Loose-load sound assets using runtime statistics
Precache sounds that require low latency and might play soon
Load sounds on-demand if they can withstand greater latency
Learn more from “Next-Gen Asset Streaming Using Runtime Statistics”
–http://www.gdcvault.com/free/category/262/conference
Problems
Loose-loading requires us to:
–Load asynchronously to the main update
–Keep file system I/O contention to a minimum
–Defragment loaded data often enough to handle new requests
–Relocate during playback (many sounds are of indefinite length)
Unfortunately, our middleware sound API doesn’t properly support asynchronous handling of sound data:
–We can perform relocation during playback
But the call blocks on a sync-point
…so they ask the client to perform the move/fixup in a background thread.
–But the API must lock every time sound bank data is updated
The implementation uses a doubly-linked list to manage loaded sound bank data.
–Therefore, the update will break if a load or unload request occurs at the same time as a relocation request (viz., it must be synchronous)
Solutions
Attempt #1: Polling API
Is it safe to move?
–No… someone is loading or updating… skip it
–Yes… move the data to a duplicate location
»Tell the sound API about the new fixup locations
»And wait for the blocking call to return…
But the state is changing in the mixing thread
–So… the sound API can still crash anyway!
DOESN’T WORK
Solutions
Attempt #2: Sync-point Callback API
–But now we need to lock on our end to make sure we don’t relocate while they are still processing a load or unload request
–DOESN’T WORK
Attempt #3: Synchronous Updates from a Background Thread
–COULD WORK
Our Solution
A solution that works
–No blind ‘lock-and-hope’ semantics
Is designed to be malleable
–The sound API is an inherently sequential system
And can run concurrent updates on data
–Such as loads and unloads behind playback
Staged Pipeline Updates
Each stage represents a job to be completed
Each subsequent stage’s counter checks whether or not it has pending jobs
–while (g_counters[LOAD_COMPLETED] < g_counters[LOAD_REQUESTED])
Do some work on the request… then…
Increment the LOAD_COMPLETED counter (we must guarantee this happens last)
Jobs can run concurrently in separate threads without locks
And if we have a system that must run sequentially, we can manage that too.
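A minimal sketch of how such a staged counter loop might look using C++11 atomics. Only `g_counters`, `LOAD_REQUESTED`, and `LOAD_COMPLETED` come from the slide; the worker function and the memory-ordering choices are assumptions:

```cpp
#include <atomic>
#include <cstdint>

enum Stage { LOAD_REQUESTED, LOAD_COMPLETED, STAGE_COUNT };

// One monotonically increasing counter per pipeline stage.
std::atomic<uint64_t> g_counters[STAGE_COUNT];

void ProcessNextLoadRequest();  // hypothetical per-request work

// Runs on the loader thread; no locks are taken.
void UpdateLoadStage()
{
    while (g_counters[LOAD_COMPLETED].load(std::memory_order_relaxed) <
           g_counters[LOAD_REQUESTED].load(std::memory_order_acquire))
    {
        ProcessNextLoadRequest();  // do some work on the request...

        // Publish completion LAST: the release store makes every write
        // performed above visible before any downstream stage can
        // observe the incremented count.
        g_counters[LOAD_COMPLETED].fetch_add(1, std::memory_order_release);
    }
}
```

Incrementing the completion counter after all other writes is what lets the next stage poll without a lock: it never sees a count that runs ahead of the data.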
Sound Loading Algorithm
Write load requests to a command queue (or set of low and high latency queues), to be processed later...
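The deck doesn’t show the queue itself. With one producer (the game update) and one consumer (the loader thread), a lock-free single-producer/single-consumer ring buffer is a common fit; here is a sketch under that assumption, with the `LoadRequest` payload invented for illustration:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

struct LoadRequest            // hypothetical payload
{
    uint32_t soundId;
    bool     lowLatency;      // routes to the low- or high-latency queue
};

template <typename T, size_t N>  // N must be a power of two
class SpscQueue
{
public:
    bool Push(const T& item)  // producer thread only
    {
        const size_t w = m_write.load(std::memory_order_relaxed);
        if (w - m_read.load(std::memory_order_acquire) == N)
            return false;     // full: caller retries next frame
        m_slots[w & (N - 1)] = item;
        m_write.store(w + 1, std::memory_order_release);
        return true;
    }

    bool Pop(T& out)          // consumer thread only
    {
        const size_t r = m_read.load(std::memory_order_relaxed);
        if (r == m_write.load(std::memory_order_acquire))
            return false;     // empty: nothing to process yet
        out = m_slots[r & (N - 1)];
        m_read.store(r + 1, std::memory_order_release);
        return true;
    }

private:
    T m_slots[N];
    std::atomic<size_t> m_write{0};
    std::atomic<size_t> m_read{0};
};
```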
Sound Loading Algorithm
If the staging buffer is empty, begin loading a request
Sound Loading Algorithm
Once the file has been loaded into the staging buffer, signal that the load is complete
Sound Loading Algorithm
Register the loaded sound file with the sound API. Flag the request as ready for playback.
Sound Loading Algorithm
Copy the file from the staging buffer to the main buffer
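Putting the loading steps above together, one plausible shape is a small per-request state machine ticked by the loader thread. All names here are hypothetical, not the engine’s actual code:

```cpp
#include <cstdint>

// Hypothetical per-request states mirroring the steps above.
enum class LoadState { Queued, Loading, Staged, Registered, Resident };

struct SoundLoad
{
    LoadState state = LoadState::Queued;
    uint32_t  soundId = 0;
    // file handle, staging offset, main-buffer destination, etc.
};

// Assumed helpers, one per stage.
bool StagingBufferEmpty();
void BeginAsyncFileRead(SoundLoad&);
bool FileReadComplete(const SoundLoad&);
void RegisterWithSoundApi(SoundLoad&);   // sound API sees the staging address
void CopyStagingToMain(SoundLoad&);      // data moves to its final home

void TickLoad(SoundLoad& req)            // called each frame by the loader
{
    switch (req.state)
    {
    case LoadState::Queued:              // wait for the staging buffer
        if (StagingBufferEmpty())
        {
            BeginAsyncFileRead(req);
            req.state = LoadState::Loading;
        }
        break;
    case LoadState::Loading:             // signal once the read finishes
        if (FileReadComplete(req))
            req.state = LoadState::Staged;
        break;
    case LoadState::Staged:              // register; ready for playback
        RegisterWithSoundApi(req);
        req.state = LoadState::Registered;
        break;
    case LoadState::Registered:          // copy out of staging
        CopyStagingToMain(req);
        req.state = LoadState::Resident;
        break;
    case LoadState::Resident:
        break;                           // nothing left to do
    }
}
```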
Sound Unloading Algorithm
Write unload requests to a command queue, to be processed later...
Sound Unloading Algorithm
If an unload request’s file is already loaded, flag the file as ready for an unload
Sound Unloading Algorithm
Copy the file from the main buffer to the staging buffer
Free allocated memory…
Defrag the entire main buffer
Sound Unloading Algorithm
Begin unloading the sound file
Sound Unloading Algorithm
When the sound file is completely unloaded, flag the request as completed and the staging buffer as empty
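The unload path can be sketched the same way, as a per-request state machine mirroring the steps above. Again, every name below is an assumption for illustration:

```cpp
#include <cstdint>

// Hypothetical states mirroring the unload steps above.
enum class UnloadState { Queued, Flagged, Evicted, Unloading, Done };

struct SoundUnload
{
    UnloadState state = UnloadState::Queued;
    uint32_t    soundId = 0;
};

// Assumed helpers, one per stage.
bool IsLoaded(uint32_t soundId);
void CopyMainToStaging(SoundUnload&);
void FreeMainAllocation(SoundUnload&);
void DefragMainBuffer();
void BeginSoundApiUnload(SoundUnload&);
bool SoundApiUnloadComplete(const SoundUnload&);
void MarkStagingBufferEmpty();

void TickUnload(SoundUnload& req)
{
    switch (req.state)
    {
    case UnloadState::Queued:            // wait until the file is resident
        if (IsLoaded(req.soundId))
            req.state = UnloadState::Flagged;
        break;
    case UnloadState::Flagged:           // evict from the main buffer
        CopyMainToStaging(req);
        FreeMainAllocation(req);
        DefragMainBuffer();              // close the hole for new requests
        req.state = UnloadState::Evicted;
        break;
    case UnloadState::Evicted:           // hand teardown to the sound API
        BeginSoundApiUnload(req);
        req.state = UnloadState::Unloading;
        break;
    case UnloadState::Unloading:         // flag completion, free staging
        if (SoundApiUnloadComplete(req))
        {
            MarkStagingBufferEmpty();
            req.state = UnloadState::Done;
        }
        break;
    case UnloadState::Done:
        break;
    }
}
```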
Lockless API Restrictions
Message-based API
–No state queries (immediate queries are meaningless)
–No accessors
–No handles to memory
–Asynchronous
–Unidirectional
–Pass by value
–Errors are deferred / propagated
–No required client synchronization
However, the client may request a message to its input queue for synching its own state data.
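A sketch of what these restrictions might look like at the call site, reusing the `SpscQueue` from the earlier sketch: commands are plain values posted to a queue, and instead of querying state, the client asks for a marker message on its own input queue. Everything here is assumed, not the engine’s actual API:

```cpp
#include <cstdint>

// Commands are plain values: no pointers into engine memory, no handles.
struct SoundCommand
{
    enum Type : uint8_t { Load, Unload, Sync };
    Type     type;
    uint32_t soundId;    // identifies the asset by value
    uint32_t clientTag;  // echoed back on the client's input queue
};

// Fire-and-forget: never blocks on the engine, never reads engine state.
// Returns false when the queue is full; the caller retries next frame.
inline bool RequestLoad(SpscQueue<SoundCommand, 256>& toEngine, uint32_t soundId)
{
    return toEngine.Push(SoundCommand{SoundCommand::Load, soundId, 0u});
}

// Instead of asking "is it loaded yet?", the client requests a marker
// message posted back to its own input queue once the engine has
// processed everything queued before this point.
inline bool RequestSyncMarker(SpscQueue<SoundCommand, 256>& toEngine, uint32_t tag)
{
    return toEngine.Push(SoundCommand{SoundCommand::Sync, 0u, tag});
}
```

Because errors are deferred, a failed load would also arrive as a message on the client’s input queue rather than as a return value.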
Results
Updates are modular, fast and scalable
The solution is general enough to be exported for use in other staged data-processing applications
Questions?
Thank you!