Benchmarks Breakout
Target Applications
- Run experiments to derive platform properties as inputs to models; properties specific to a particular workload.
- The user picks a mixture of interesting micro workloads.
- The benchmark is generated automatically (see the generator sketch below).
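A minimal sketch of what such a generator could look like, assuming micro workloads are plain Runnables and the user supplies a weighted mixture; the class name GeneratedBenchmark and the alloc/arith workloads are made up for illustration, not a tool from the session:

```java
import java.util.Map;

/** Hypothetical sketch: compose user-chosen micro workloads into one benchmark. */
public class GeneratedBenchmark {
    // The mixture maps each micro workload to a relative weight (assumed design).
    private final Map<Runnable, Integer> mixture;

    GeneratedBenchmark(Map<Runnable, Integer> mixture) { this.mixture = mixture; }

    /** Run each micro workload in proportion to its weight; report total time in ns. */
    long runOnce() {
        long start = System.nanoTime();
        mixture.forEach((workload, weight) -> {
            for (int i = 0; i < weight; i++) workload.run();
        });
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // Example mixture: an allocation-heavy and an arithmetic-heavy micro workload.
        Runnable alloc = () -> { for (int i = 0; i < 10_000; i++) { byte[] b = new byte[64]; b[0] = 1; } };
        Runnable arith = () -> {
            long x = 1;
            for (int i = 0; i < 1_000_000; i++) x = x * 31 + i;
            if (x == -1) System.out.print("");   // sink so the loop is not optimized away
        };
        GeneratedBenchmark bench = new GeneratedBenchmark(Map.of(alloc, 3, arith, 1));
        for (int rep = 0; rep < 5; rep++)        // repeat to expose run-to-run variance
            System.out.println("run " + rep + ": " + bench.runOnce() / 1_000_000 + " ms");
    }
}
```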
Target Applications
- Detect processor capabilities for virtual machine self-adaptation; micro benchmarks for this are rare.
- Also detect platform capabilities (e.g., how long it takes to start a thread) and adapt at application start time (see the probe sketch below).
- Investigate particular optimizations, especially in compiler development; also optimizations at runtime.
- Accuracy of a few percent matters.
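As a concrete example of a start-time capability probe, here is a minimal Java sketch measuring thread-start latency; the class and method names are assumptions for illustration. It reports the median of repeated samples, since a single sample of such a short event cannot deliver the few-percent accuracy that matters here:

```java
import java.util.Arrays;
import java.util.concurrent.CountDownLatch;

/** Hypothetical sketch: probe thread-start latency at application start time. */
public class ThreadStartProbe {
    /** Time from Thread.start() until the new thread first runs, in nanoseconds. */
    static long measureOnce() throws InterruptedException {
        CountDownLatch started = new CountDownLatch(1);
        long[] firstRun = new long[1];
        Thread t = new Thread(() -> { firstRun[0] = System.nanoTime(); started.countDown(); });
        long begin = System.nanoTime();
        t.start();
        started.await();
        t.join();
        return firstRun[0] - begin;
    }

    public static void main(String[] args) throws InterruptedException {
        int reps = 101;
        long[] samples = new long[reps];
        for (int i = 0; i < reps; i++) samples[i] = measureOnce();
        Arrays.sort(samples);
        // Median rather than mean: robust against warmup and scheduling outliers.
        System.out.println("thread start latency ~ " + samples[reps / 2] + " ns");
    }
}
```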
Target Applications
- Parallel workload for component measurement; isolated measurements are bad (see the sketch below).
- Replacing large benchmarks: approximate the workload of a big benchmark because that one is too expensive to run.
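A sketch of measuring a component under parallel background load rather than in isolation; componentUnderTest() is a stand-in name assumed for this example, and the point is only that the load threads contend with the measured component the way a real parallel workload would:

```java
import java.util.concurrent.atomic.AtomicBoolean;

/** Hypothetical sketch: measure a component while background threads generate load. */
public class ParallelComponentMeasurement {
    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean running = new AtomicBoolean(true);
        int n = Math.max(1, Runtime.getRuntime().availableProcessors() - 1);
        Thread[] load = new Thread[n];
        for (int i = 0; i < n; i++) {
            load[i] = new Thread(() -> {       // background load: contend for the CPU
                long x = 0;
                while (running.get()) x += System.nanoTime() % 7;
                if (x == -1) System.out.print("");   // sink to keep the loop alive
            });
            load[i].start();
        }
        long start = System.nanoTime();
        componentUnderTest();                  // the component measured under load
        long elapsed = System.nanoTime() - start;
        running.set(false);
        for (Thread t : load) t.join();
        System.out.println("component under load: " + elapsed / 1_000_000 + " ms");
    }

    /** Stand-in for the real component; an assumption for this sketch. */
    static void componentUnderTest() {
        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) sum += i;
        if (sum == -1) System.out.print("");
    }
}
```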
Issues
- Benchmarking in a virtualized system.
- Micro benchmarks can mislead.
- Too many benchmarks: execution time, developer attention.
- Enforcing stable conditions quickly (see the steady-state sketch below).
- Knowing which axes to exercise.
- Balancing the costs of benchmarking at runtime.
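One plausible way to enforce stable conditions quickly is to keep sampling only until recent samples settle. This Java sketch is an assumption, not a method from the session: it stops once the coefficient of variation over a sliding window drops below a threshold (the 10-sample window and 2% threshold are illustrative choices):

```java
/** Hypothetical sketch: stop measuring as soon as conditions look stable,
 *  using the coefficient of variation over a sliding window of samples. */
public class SteadyStateCheck {
    /** True once the last `window` samples vary by less than `threshold` (0.02 = 2%). */
    static boolean isStable(long[] samples, int count, int window, double threshold) {
        if (count < window) return false;
        double mean = 0;
        for (int i = count - window; i < count; i++) mean += samples[i];
        mean /= window;
        double var = 0;
        for (int i = count - window; i < count; i++) {
            double d = samples[i] - mean;
            var += d * d;
        }
        double cv = Math.sqrt(var / window) / mean;
        return cv < threshold;
    }

    public static void main(String[] args) {
        long[] samples = new long[1000];
        int count = 0;
        while (count < samples.length) {
            samples[count++] = timedIteration();
            if (isStable(samples, count, 10, 0.02)) break;  // stop once conditions settle
        }
        System.out.println("stable after " + count + " iterations");
    }

    /** Stand-in workload iteration; an assumption for this sketch. */
    static long timedIteration() {
        long start = System.nanoTime();
        long x = 0;
        for (int i = 0; i < 1_000_000; i++) x += i;
        if (x == -1) System.out.print("");   // sink to keep the loop alive
        return System.nanoTime() - start;
    }
}
```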
Multicore?
- We keep discovering things about our platforms; there is not one person who knows ...
- Acceptable? Compiler developers usually accept it; application developers run away?
- Need to know when to care: should developers care? Not if we can do better, but can we?
- Some optimization is “cheating” (not everyone can get the same benefit).