1 Next-Gen Asset Streaming Using Runtime Statistics
David Thall, Insomniac Games

2 The Concept
Load and unload assets based on runtime statistics. Simple!

3 Why not just use Dependency Graphs?
Dependency graphs can only tell us ‘what’ to load
–An expensive proposition!
But what about ‘when’?
–If we can answer this, we can save a lot of memory

4 Why not just use Dependency Graphs?
Dependency graphs create hierarchical and cross-referenced interdependencies
–This causes request lists to grow exponentially

5 Why not just use Dependency Graphs?
Dependency graphs are tied to a build pipeline
–If the dependencies change, so does the built data
–This type of design tends to result in fixed-pipeline optimizations, such as data packing, which make runtime optimizations more difficult

6 Why not just use Dependency Graphs?
Dependency graphs don’t know about ‘new’ assets.
–We’d like to be able to load unrequested assets ‘on-the-fly’

7 Adding runtime statistics to the mix
Examine the constraints
–It is impossible to determine out-of-context that an asset will be used during gameplay
For example:
–Will we ever load the high-mip texture?
–Will we ever load the jumping animation?
–Will we ever load the footstep-on-snow sound?
–On the flip side, we want to know that we did in fact load it

8 Adding runtime statistics to the mix
Examine the data
–There are many types of assets that are triggered more often by game events and AI than by camera position and orientation
  Sound
  Visual FX
  Animation
–Related assets tend to get rendered in spatio-temporal clusters
  The context is similar
  This is true both within and across asset types

9 Our focus case: Sound Assets
Sound poses a particularly interesting problem, because the data is a one-dimensional function of time (not space)
Questions we need to ask:
–What sounds will need to be loaded (and when)?
–What is our maximum latency (per sound)?
  Physical Latency: How long will it take?
  Perceptual Latency: How long can we wait?
More questions (implementation details):
–How much memory will we require in the active game context?
–How much more expensive is it to load and unload in the runtime?
We’ll answer these questions later…

10 What types of statistics are useful to collect?
Simplest:
–Sound ID
–Context ID (spatiotemporal subdivisions)
–Maximum Latency (user-supplied settings)
Optional:
–Minimum bounding box
  Useful if defining context is spatially-bound
This tells us “A sound was played in this context with this maximum latency setting”. This is good info!
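As a rough illustration of the record implied above (hypothetical names and layout, not Insomniac's actual format), the per-play statistic can be very small:

    #include <cstdint>

    // Hypothetical per-event record gathered while the game runs.
    // One of these is emitted each time a sound is triggered.
    struct SoundPlayStat
    {
        uint32_t soundId;      // which sound asset was played
        uint32_t contextId;    // spatiotemporal subdivision it was played in
        uint8_t  maxLatency;   // user-supplied latency class (0 = low, 1 = med, 2 = high)
        float    bboxMin[3];   // optional: minimum bounding box of the emitters
        float    bboxMax[3];   //           observed in this context
    };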

11 How do we collect our statistics?
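The deck does not spell the mechanism out here, but one hedged sketch of collection, assuming a simple hash map keyed by (sound, context) and a per-pair play count (none of which is confirmed by the slides), could be:

    #include <unordered_map>
    #include <cstdint>

    // Hypothetical accumulator: count plays per (sound, context) pair and
    // remember the strictest latency requirement seen for that pair.
    struct PlayRecord
    {
        uint32_t playCount  = 0;
        uint8_t  maxLatency = 2;   // 0 = low, 1 = med, 2 = high (default)
    };

    class StatCollector
    {
    public:
        void OnSoundTriggered(uint32_t soundId, uint32_t contextId, uint8_t latency)
        {
            uint64_t key = (uint64_t(contextId) << 32) | soundId;
            PlayRecord& rec = m_records[key];
            ++rec.playCount;
            if (latency < rec.maxLatency)   // keep the strictest requirement
                rec.maxLatency = latency;
        }

    private:
        std::unordered_map<uint64_t, PlayRecord> m_records;
    };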

12 A closer look at latency
Latency at the game context level
–Spatial: Regions (Areas, Zones)
–Temporal: Throwing the grenade
Latency at the sound level
–High - Can be loaded on-demand from disc
–Med - Can be loaded on-demand from cache
–Low - Must be pre-cached into main memory
On-Demand --> Event triggers load
Can further subdivide at both levels
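To keep the three sound-level classes concrete, an illustrative (not source-confirmed) mapping from latency class to load strategy might read:

    #include <cstdint>

    // Illustrative latency classes; lower values mean stricter requirements.
    enum class SoundLatency : uint8_t { Low, Med, High };

    enum class LoadSource { PrecacheMainMemory, OnDemandCache, OnDemandDisc };

    LoadSource ChooseLoadSource(SoundLatency latency)
    {
        switch (latency)
        {
        case SoundLatency::Low:  return LoadSource::PrecacheMainMemory;  // before the event
        case SoundLatency::Med:  return LoadSource::OnDemandCache;       // event triggers load
        case SoundLatency::High: return LoadSource::OnDemandDisc;        // event triggers load
        }
        return LoadSource::OnDemandDisc;
    }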

13 Loose-load every sound asset
We loose-load every sound asset, because the latency requirements differ.
–Latency setting determines ‘when’ we load (the TIMEFRAME)
–The default latency is ‘high’
  Sound designers override this in special cases
  If we’re hitting the BD/DVD too much, we override it
–No complex fix-ups across contextual boundaries
Loose-loaded sounds are reference counted
–When we load from a context, we only pre-cache low-latency sounds
  And we know how much time we have to do it.
–If we don’t load from a context, the sounds get loaded on-demand
–NOTE: We still need to complete a game with this new tech to know what percentage of sounds will require low vs. high latency
  Early test results follow the 80/20 rule!
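A hedged sketch of the context-entry flow described above, assuming the runtime statistics have been baked into per-context lists (all types and names here are illustrative, not the shipped system):

    #include <unordered_map>
    #include <vector>
    #include <cstdint>

    enum class SoundLatency : uint8_t { Low, Med, High };   // same illustrative enum as above

    // One entry of a statistically-generated, per-context load list.
    struct ContextSoundEntry
    {
        uint32_t     soundId;
        SoundLatency latency;     // designer setting (plus any override)
        uint32_t     playCount;   // how often it fired in this context
    };

    // Minimal stand-in for the loose-loaded, reference-counted sound bank.
    class SoundBank
    {
    public:
        void AddRefAndPrecache(uint32_t soundId)
        {
            if (++m_refCounts[soundId] == 1)
            {
                // first reference: kick off an asynchronous load into main memory
            }
        }

        void Release(uint32_t soundId)
        {
            if (--m_refCounts[soundId] == 0)
            {
                // last reference gone: the data can be unloaded
            }
        }

    private:
        std::unordered_map<uint32_t, int> m_refCounts;
    };

    // On entering a context, only the low-latency sounds are pre-cached;
    // everything else loads on demand when its event actually fires.
    void OnEnterContext(const std::vector<ContextSoundEntry>& list, SoundBank& bank)
    {
        for (const ContextSoundEntry& e : list)
            if (e.latency == SoundLatency::Low)
                bank.AddRefAndPrecache(e.soundId);
    }

    void OnLeaveContext(const std::vector<ContextSoundEntry>& list, SoundBank& bank)
    {
        for (const ContextSoundEntry& e : list)
            if (e.latency == SoundLatency::Low)
                bank.Release(e.soundId);
    }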

14 Memory Management
Some constraints:
–Need contiguous memory available to handle load requests.
–Need to be able to do this at any time (hence, in the runtime).
–Need to do all the processing without blocking the main thread.
–Must be cheap!
Compact all memory as soon as possible.
–Statistically-tracked sounds load in bursts or spurts
  Low-latency sounds are loaded from statistically-generated lists
  High-latency sounds are loaded less often (by definition).
  Low-latency --> low memory / High-latency --> high memory
  Thus, movement is minimized, both in time and space
Defrag actively ‘playing’ sounds in the same buffer
–Any other approach unnecessarily complicates memory management
Do all loads and unloads asynchronously in a background thread.
–Keep everything lock-free.
Synchronize all moves with the renderer (and never block it)
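As one hedged illustration of the compaction idea only (a toy, single-threaded allocator; per the slide, the real system also has to stay lock-free and synchronize every move with the renderer):

    #include <cstdint>
    #include <cstddef>
    #include <cstring>
    #include <vector>

    // Toy compacting heap: sound data lives in one contiguous buffer and is
    // referenced through handles, so blocks can be slid down to close gaps.
    struct Block { size_t offset; size_t size; bool alive; };

    class CompactingSoundHeap
    {
    public:
        explicit CompactingSoundHeap(size_t bytes) : m_buffer(bytes), m_top(0) {}

        // Returns a handle (index into m_blocks), or -1 if out of space.
        int Allocate(size_t size)
        {
            if (m_top + size > m_buffer.size()) return -1;
            m_blocks.push_back({ m_top, size, true });
            m_top += size;
            return int(m_blocks.size()) - 1;
        }

        void Free(int handle) { m_blocks[handle].alive = false; }

        // Slide live blocks toward the bottom so free space stays contiguous.
        // A real system would do this incrementally and coordinate each move
        // with the audio renderer so a playing voice never reads a stale offset.
        void Compact()
        {
            size_t writeOffset = 0;
            for (Block& b : m_blocks)
            {
                if (!b.alive) continue;
                if (b.offset != writeOffset)
                {
                    std::memmove(&m_buffer[writeOffset], &m_buffer[b.offset], b.size);
                    b.offset = writeOffset;   // handles stay valid; only offsets change
                }
                writeOffset += b.size;
            }
            m_top = writeOffset;
        }

        void* Data(int handle) { return &m_buffer[m_blocks[handle].offset]; }

    private:
        std::vector<uint8_t> m_buffer;
        std::vector<Block>   m_blocks;
        size_t               m_top;
    };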

15 Results
We only require 25% to 50% of our normal memory budget.
–In other words, sound designers get 2x to 4x the memory budget.
Memory heap sizes are now manageable
–This is true for programmers ‘and’ sound designers
  Programmers can trade off memory for other requirements, such as FX
  Sound designers directly tweak sound sizes to fit memory requirements
High-latency sounds are practically free
–Remember… they are loaded and unloaded from high memory
Low-latency sound counts are smaller than originally thought (80/20)
–Only the ‘hero’ sounds tend to have psychologically-bound latency requirements
Low-latency sounds with the highest playback count in the shortest time window should get loaded first (and preempt any high-latency requests)
Cull sounds from contextual loading lists after some time threshold
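A hedged sketch of the two scheduling rules at the end of this slide, prioritizing by recent playback count and culling stale entries; the fields and the threshold are illustrative:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct TrackedSound
    {
        uint32_t soundId;
        uint32_t playsInWindow;   // playback count inside the recent time window
        float    lastPlayedTime;  // seconds, game clock
    };

    // Drop sounds that have not played for a while, then order the rest so the
    // most frequently played low-latency sounds get requested first.
    void PrioritizeLoadList(std::vector<TrackedSound>& list, float now, float cullAfterSeconds)
    {
        list.erase(std::remove_if(list.begin(), list.end(),
                       [&](const TrackedSound& s)
                       { return now - s.lastPlayedTime > cullAfterSeconds; }),
                   list.end());

        std::sort(list.begin(), list.end(),
                  [](const TrackedSound& a, const TrackedSound& b)
                  { return a.playsInWindow > b.playsInWindow; });
    }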

16 Questions?

17 Thank you!

