Slide 1: Benchmarks Breakout
Slide 2: Target Applications
- Run experiments to derive platform properties as inputs to models, including properties specific to a particular workload.
- The user picks a mixture of interesting micro-workloads, and the benchmark is generated automagically (see the sketch below).
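
A minimal sketch of what "generated automagically" could mean, assuming the micro-workloads are plain Runnables and the user's mixture is a weight table. All class and workload names here are hypothetical illustrations, not the actual generator described in the session:

    import java.util.Map;
    import java.util.Random;

    // Hypothetical sketch: compose one benchmark run from a weighted
    // mixture of user-selected micro-workloads.
    public class MixtureBenchmark {
        public static void main(String[] args) {
            // The user-picked micro-workloads (assumed to be Runnables).
            Map<String, Runnable> workloads = Map.of(
                "alloc", () -> { byte[] b = new byte[1024]; b[0] = 1; },
                "fpu",   () -> { double x = 1.0; for (int i = 0; i < 1000; i++) x = Math.sqrt(x + i); }
            );
            // The user-picked mixture: workload name -> weight.
            Map<String, Double> weights = Map.of("alloc", 0.7, "fpu", 0.3);

            Random rnd = new Random(42);
            long start = System.nanoTime();
            for (int i = 0; i < 100_000; i++) {
                // Pick the next micro-workload according to its weight.
                double p = rnd.nextDouble(), acc = 0;
                for (Map.Entry<String, Double> e : weights.entrySet()) {
                    acc += e.getValue();
                    if (p < acc) { workloads.get(e.getKey()).run(); break; }
                }
            }
            System.out.printf("mixture run took %.1f ms%n", (System.nanoTime() - start) / 1e6);
        }
    }

The generated benchmark is then just the mixture replayed deterministically (note the fixed seed), so results are comparable across platforms.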
Slide 3: Target Applications
- Detect processor capabilities for virtual machine self-adaptation; micro-benchmarks for this purpose are rare.
- Also detect platform capabilities, e.g., how long it takes to start a thread, so the VM can adapt at application start time (a sketch of such a measurement follows this list).
- Investigate particular optimizations, especially in compiler development, but also optimizations applied at runtime.
- Accuracy of a few percent matters.
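
The slide names thread start-up time as an example platform capability. A minimal sketch of measuring it, assuming wall-clock System.nanoTime() is acceptable for the few-percent accuracy target (in practice that usually also requires warm-up, pinning, and outlier handling):

    import java.util.concurrent.CountDownLatch;

    // Illustrative micro-benchmark: time from Thread.start() until the
    // newly created thread actually runs.
    public class ThreadStartBenchmark {
        public static void main(String[] args) throws InterruptedException {
            final int runs = 1000;
            long total = 0;
            for (int i = 0; i < runs; i++) {
                CountDownLatch started = new CountDownLatch(1);
                long t0 = System.nanoTime();
                Thread t = new Thread(started::countDown);
                t.start();
                started.await();               // returns once the new thread has run
                total += System.nanoTime() - t0;
                t.join();
            }
            System.out.printf("mean thread start latency: %.1f us%n", total / 1e3 / runs);
        }
    }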
Slide 4: Target Applications
- Parallel workload for component measurement: isolated measurements are misleading (see the sketch after this list).
- Replacing large benchmarks: approximating the workload of a big benchmark because that one is too expensive to run.
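
One reading of "isolated measurements are bad": a component timed on an otherwise idle machine behaves differently than under realistic contention. A hedged sketch (component() and all names are stand-ins) that times the same component both ways:

    import java.util.concurrent.atomic.AtomicBoolean;

    // Sketch: time a component once in isolation and once while sibling
    // threads generate parallel background load.
    public class ParallelLoadBenchmark {
        static volatile long sink;      // keeps the JIT from removing the work

        static long component() {       // stand-in for the component under test
            long s = 0;
            for (int i = 0; i < 5_000_000; i++) s += i * 31L;
            return s;
        }

        public static void main(String[] args) throws InterruptedException {
            long t0 = System.nanoTime();
            sink = component();
            System.out.printf("isolated:   %.1f ms%n", (System.nanoTime() - t0) / 1e6);

            AtomicBoolean stop = new AtomicBoolean(false);
            int n = Math.max(Runtime.getRuntime().availableProcessors() - 1, 1);
            Thread[] load = new Thread[n];
            for (int i = 0; i < load.length; i++) {
                load[i] = new Thread(() -> { while (!stop.get()) sink = component(); });
                load[i].start();
            }
            long t1 = System.nanoTime();
            sink = component();
            System.out.printf("under load: %.1f ms%n", (System.nanoTime() - t1) / 1e6);
            stop.set(true);
            for (Thread t : load) t.join();
        }
    }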
Slide 5: Issues
- Benchmarking in a virtualized system.
- Micro-benchmarks can mislead.
- Too many benchmarks: execution time, developer attention.
- Enforcing stable conditions quickly (a sketch follows this list).
- Knowing which axes to exercise.
- Balancing the costs of benchmarking at runtime.
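
One common approach to "enforcing stable conditions quickly" is to warm up until recent timings stabilize, rather than for a fixed duration. A sketch under that assumption; the window size and 2% threshold are arbitrary illustrations, not values from the session:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sketch: warm up until the last few samples have low relative
    // variance, then take the real measurement.
    public class StableWarmup {
        static volatile long sink;

        static long once() {
            long t0 = System.nanoTime();
            long s = 0;
            for (int i = 0; i < 1_000_000; i++) s += i;
            sink = s;
            return System.nanoTime() - t0;
        }

        public static void main(String[] args) {
            Deque<Long> window = new ArrayDeque<>();
            for (int i = 0; i < 10_000; i++) {
                window.addLast(once());
                if (window.size() > 10) window.removeFirst();
                if (window.size() == 10) {
                    double mean = window.stream().mapToLong(Long::longValue).average().orElse(0);
                    double variance = window.stream()
                            .mapToDouble(v -> (v - mean) * (v - mean)).sum() / 10;
                    // Stop warming up once the coefficient of variation drops below 2%.
                    if (Math.sqrt(variance) / mean < 0.02) break;
                }
            }
            System.out.printf("steady-state sample: %d ns%n", once());
        }
    }

Stopping on a stability criterion rather than a fixed warm-up count is what makes the conditions both stable and reached quickly.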
Slide 6: Multicore?
- We keep discovering things about our platforms; there is not one person who knows it all.
- Is that acceptable? Compiler developers usually accept it; application developers run away.
- We need to know when to care. Should developers care? Not if we can do better, but can we?
- Some optimization is "cheating" (not everyone can get the same benefit).