1
Appliances, Clusters, Microcode: Trends in Post-production Infrastructure
Tom Burns, Technicolor Creative Services
2
Disclaimer: The views expressed herein are exclusively those of the presenter and are not indicative of any Thomson / Technicolor official position, nor an endorsement of any current or future technology development or strategy.
3
Appliances
- Turnkey workstations: Bosch FGS-4000, Quantel Paintbox, Ampex ADO
- Proprietary circuit boards: expensive and time-consuming to improve
- Custom software: steep learning curve for developers
4
Clusters
- High Performance Computing: fast, low-latency network; shared storage
- All nodes work on the same task
- 3D render farm: "embarrassingly parallel", i.e. one frame per CPU (sketch below)
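The "one frame per CPU" model is simple enough to sketch. Below is a minimal illustration, not production render-farm code: the render_frame function, the frame range and the output path are hypothetical stand-ins for a real renderer invocation.

    # Minimal sketch of an "embarrassingly parallel" render-farm dispatcher.
    # Each frame is an independent task, so workers never need to communicate.
    from multiprocessing import Pool

    def render_frame(frame):
        # Placeholder for a real per-frame render command; here we just
        # return the output path the renderer would have written.
        return f"/renders/shot_010/frame_{frame:04d}.exr"

    if __name__ == "__main__":
        frames = range(1, 241)              # a 240-frame shot (hypothetical)
        with Pool(processes=8) as pool:     # roughly one worker per CPU core
            outputs = pool.map(render_frame, frames)
        print(f"rendered {len(outputs)} frames")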
5
Microcode
6
Technology Migration over time
“Software running on general-purpose computers will outlast custom hardware every time” – but it might take years to catch up
7
Up and Down the Technology Stack
How can we predict which innovations are likely to succeed?
- Business processes (VFX == "pipeline", Post == "workflow") move up the Stack
- Appliances move down the Stack
- HW & SW solutions (once paid off) become Appliances
- Software evolves much more quickly than either of these
- Confusion between continuous and discontinuous innovation is the cause of many technology product failures
8
VFX Technology Stack
9
VFX Render Farm
10
Location render farming
11
Merging infrastructure and workflow == pipelining
- Compression: JPEG-2000, H.264
- Transcoding: software, VBR, multi-pass
- Audio QC: automated, file-based, faster than real time
- Deliverables: multiple simultaneous file-based renders (sketch below)
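A file-based deliverables pass like the one above is essentially a fan-out of independent transcodes from one master. The sketch below launches several renders simultaneously; the ffmpeg arguments, codec choices and file names are illustrative assumptions, not the facility's actual settings.

    # Sketch: several simultaneous file-based deliverable renders from one
    # master file. Codecs, bitrates and paths are illustrative only; a real
    # pipeline would take these from a job description, not a hard-coded table.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    MASTER = "feature_master.mov"          # hypothetical source file

    DELIVERABLES = [
        ("h264_proxy.mp4", ["-c:v", "libx264", "-b:v", "8M"]),
        ("jp2k_master.mxf", ["-c:v", "jpeg2000"]),
        ("prores_hd.mov", ["-c:v", "prores"]),
    ]

    def transcode(job):
        out_name, codec_args = job
        cmd = ["ffmpeg", "-y", "-i", MASTER] + codec_args + [out_name]
        subprocess.run(cmd, check=True)    # each render is independent of the others
        return out_name

    with ThreadPoolExecutor(max_workers=len(DELIVERABLES)) as pool:
        for finished in pool.map(transcode, DELIVERABLES):
            print("finished", finished)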
12
Migrate bottlenecked processes to GPU
- SIMD: Single Instruction, Multiple Data (sketch below)
- Shared memory instead of a message-passing architecture
- Memory accesses are expensive; pack code and data, since computation is cheap
- RapidMind development tools
- AMD and Intel multi-core x86 CPUs; ATI/AMD FireStream 9250 GPU; NVIDIA Tesla 10P; Cell Broadband Engine (PS3)
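"Memory accesses are expensive, computation is cheap" is the heart of the SIMD argument: apply one operation to a whole packed array instead of touching elements one at a time. The snippet below is only a CPU-side NumPy illustration of that data-level parallelism, not vendor GPU code (RapidMind, FireStream and Tesla each had their own toolchains); the tile size and grade values are arbitrary.

    # Data-level parallelism in miniature: one instruction stream applied to a
    # whole packed array. NumPy runs the vectorised version in tight compiled
    # loops, so the cost is dominated by moving the data, not the arithmetic.
    import numpy as np

    tile = np.random.rand(270, 480).astype(np.float32)   # one small image tile

    # Scalar style: element-by-element, with per-element overhead.
    graded_slow = np.empty_like(tile)
    for y in range(tile.shape[0]):
        for x in range(tile.shape[1]):
            graded_slow[y, x] = tile[y, x] * 1.2 + 0.05

    # Packed / SIMD style: the same gain and lift over the whole tile at once.
    graded_fast = tile * 1.2 + 0.05

    assert np.allclose(graded_slow, graded_fast, atol=1e-5)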
13
GPU De-Bayering = Data-Level Parallelism
Packing scalar data into RGBA in texture memory suits the GPU architecture very well.
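De-Bayering is data-level parallel because every output pixel depends only on a small, fixed neighbourhood of sensor samples. Here is a CPU-side sketch with NumPy, assuming an RGGB mosaic and the simplest possible reconstruction (each 2x2 cell collapsed to one RGB pixel); a GPU version would run the same per-pixel arithmetic in a kernel, with the mosaic packed into RGBA texture memory as the slide describes.

    # Sketch: naive de-Bayer of an RGGB mosaic, collapsing each 2x2 cell into
    # one RGB pixel (half resolution). Every output pixel is computed from a
    # fixed 2x2 neighbourhood, independently of all others -- exactly the kind
    # of per-pixel work a GPU parallelises across its cores.
    import numpy as np

    def debayer_rggb_half(raw):
        r  = raw[0::2, 0::2]                    # red sample sites
        g1 = raw[0::2, 1::2]                    # green sites on red rows
        g2 = raw[1::2, 0::2]                    # green sites on blue rows
        b  = raw[1::2, 1::2]                    # blue sample sites
        g  = (g1 + g2) / 2.0
        return np.stack([r, g, b], axis=-1)     # pack per-pixel RGB planes

    raw = np.random.rand(8, 8).astype(np.float32)   # tiny fake sensor tile
    print(debayer_rggb_half(raw).shape)             # (4, 4, 3)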
14
Object intelligence migrates from blocks to files
- A file system contains a certain amount of intelligence in the file itself, in the form of the filename + numerical extension, directory placement (pathname), and attributes (e.g. atime, mtime); see the sketch below
- Post uses SAN, VFX uses NAS: performance and cost
- Intelligence "built in" at the object level allows more flexibility in the pipeline without changing the infrastructure
- [Image: HDD with 5 MB of storage, 1956]
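In practice, that "intelligence in the file itself" is recovered by parsing: show and shot come from the pathname, element and frame number from the filename, and timestamps from the filesystem attributes. The sketch below uses a hypothetical show/shot/element.####.ext naming convention rather than any real facility's scheme.

    # Sketch: recover pipeline metadata purely from a file's name, path and
    # filesystem attributes -- no database or proprietary appliance required.
    # The naming convention parsed here is a hypothetical example.
    import re
    from pathlib import Path

    FRAME_RE = re.compile(r"^(?P<element>.+)\.(?P<frame>\d{4,})\.(?P<ext>\w+)$")

    def describe(path_str):
        path = Path(path_str)
        meta = {"show": path.parts[-3], "shot": path.parts[-2]}  # from the pathname
        match = FRAME_RE.match(path.name)                        # from the filename
        if match:
            meta.update(element=match["element"],
                        frame=int(match["frame"]),
                        ext=match["ext"])
        if path.exists():                                        # from the attributes
            st = path.stat()
            meta.update(atime=st.st_atime, mtime=st.st_mtime, size=st.st_size)
        return meta

    print(describe("/san/projectX/sh010/plate_bg.0101.dpx"))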
15
Enterprise Service Bus Goals:
- Remap the fixed stack into a flexible pipeline
- Plan exit costs of HW as well as entry costs
- Adapt to changing business cycles
16
Designing an Enterprise Service Bus for Post
- Virtualize dedicated h/w processes on clusters
- Profile bottlenecks and provide GPU support for them
- Re-factor the pipeline for different projects; swap out software to adapt to business cycles
- Service orchestration: the "Project Coordinator" moves up the stack (QC, delivery, audit, monitoring)
- Scale to a distributed bus: decentralized, smart endpoints; the post facility becomes the hub of a networked community
- Open-source software such as Apache ServiceMix (ESB) and Apache CXF (web services); see the sketch below
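ServiceMix and CXF are Java/XML products, so the code below is only a toy illustration in Python, with invented topic names, of the "smart endpoints on a shared bus" idea: each service subscribes to a topic, does one job and publishes a message for the next stage, so the route can be re-wired per project without touching the services themselves.

    # Illustration only: a toy in-process "service bus" with smart endpoints
    # wired together by routing rules rather than hard-coded calls. A real
    # deployment would use an ESB such as Apache ServiceMix with JMS/CXF
    # endpoints; the topics and handlers here are invented for the sketch.
    from collections import defaultdict

    class Bus:
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, message):
            for handler in self.subscribers[topic]:
                handler(message)

    bus = Bus()

    # Smart endpoints: each knows its own job and which topic comes next.
    bus.subscribe("ingest.done", lambda m: bus.publish("qc.request", m))
    bus.subscribe("qc.request", lambda m: bus.publish("qc.passed", dict(m, qc="ok")))
    bus.subscribe("qc.passed", lambda m: bus.publish("delivery.request", m))
    bus.subscribe("delivery.request", lambda m: print("deliver", m))

    # Re-factoring the pipeline for another project is just re-wiring topics.
    bus.publish("ingest.done", {"asset": "reel_1.mxf"})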