1 Ubiquitous Memory Introspection (UMI)
Qin Zhao, NUS; Rodric Rabbah, IBM; Saman Amarasinghe, MIT; Larry Rudolph, MIT; Weng-Fai Wong, NUS
CGO 2007, March 14, 2007, San Jose, CA
2 The complexity crunch
hardware complexity ↑ + software complexity ↑
⇓
too many things happening concurrently, complex interactions and side-effects
⇓
less understanding of program execution behavior ↓
3 The importance of program behavior characterization
S/W developer
–Computation vs. communication ratio
–Function/path frequency
–Test coverage
H/W designer
–Cache behavior
–Branch prediction
System administrator
–Interaction between processes
Better understanding of program characteristics can lead to more robust software, and more efficient hardware and systems
4 Common approaches to program understanding
Overhead: space and time overhead
Level of detail: coarse-grained summaries vs. instruction-level and contextual information
Versatility: portability and ability to adapt to different uses
5 Common approaches to program understanding
                 Profiling   Simulation
Overhead         high        very high
Level of detail  very high   very high
Versatility      high        very high
6 Common approaches to program understanding
                 Profiling   Simulation   HW Counters
Overhead         high        very high    very low
Level of detail  very high   very high    very low
Versatility      high        very high    very low
7 Slowdown due to HW counters (counting L1 misses for 181.mcf on P4)
[Chart: running time in seconds for different counter configurations; % slowdown ranges from 1%, 10%, and 35% up to 325% and >2000%.]
8 Common approaches to program understanding
                 Profiling   Simulation   HW Counters   Desirable Approach
Overhead         high        very high    very low      low
Level of detail  very high   very high    very low      high
Versatility      high        very high    very low      high
9 Common approaches to program understanding
                 Profiling   Simulation   HW Counters   Desirable Approach
Overhead         high        very high    very low      low
Level of detail  very high   very high    very low      high
Versatility      high        very high    very low      high
UMI is the desirable approach.
10 Key components
Dynamic binary instrumentation
–Complete coverage, transparent, language independent, versatile, …
Bursty simulation
–Sampling and fast-forwarding techniques
–Detailed context information
–Reasonable extrapolation and prediction
11 Ubiquitous Memory Introspection
Online mini-simulations analyze short memory access profiles recorded from frequently executed code regions
Key concepts
–Focus on hot code regions
–Selectively instrument instructions
–Fast online mini-simulations
–Actionable profiling results for online memory optimizations
12 Working prototype
Implemented in DynamoRIO
–Runs on Linux
–Used on Intel P4 and AMD K7
Benchmarks
–SPEC 2000, SPEC 2006, Olden
–Server apps: MySQL, Apache
–Desktop apps: Acrobat Reader, MEncoder
13 UMI is cheap and non-intrusive (SPEC2K reference workloads on P4)
[Chart: % slowdown compared to native execution, for DynamoRIO alone and with UMI, across 168.wupwise, 171.swim, 172.mgrid, 173.applu, 177.mesa, 178.galgel, 179.art, 183.equake, 187.facerec, 188.ammp, 189.lucas, 191.fma3d, 200.sixtrack, 301.apsi, 164.gzip, 175.vpr, 176.gcc, 181.mcf, 186.crafty, 197.parser, 252.eon, 253.perlbmk, 254.gap, 255.vortex, 256.bzip2, 300.twolf, em3d, health, mst, treeadd, tsp, ft.]
Average overhead is 14%, just 1% more than DynamoRIO
14 What can UMI do for you?
Inexpensive introspection everywhere
Coarse grained memory analysis
–Quick and dirty
Fine grained memory analysis
–Expose opportunities for optimizations
Runtime memory-specific optimizations
–Pluggable prefetching, learning, adaptation
15 Coarse grained memory analysis
Experiment: measure cache misses in three ways
–HW counters
–Full cache simulator (Cachegrind)
–UMI
Report correlation between measurements
–Linear relationship of two sets of data: +1 strong positive correlation, −1 strong negative correlation, 0 no correlation
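The correlation measure used here is the standard Pearson coefficient. A minimal sketch of how such a comparison could be computed; the per-benchmark miss counts below are invented for illustration, not the paper's measurements:

```python
import math

def pearson(xs, ys):
    # Pearson correlation: covariance normalized by both standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-benchmark cache-miss counts from two measurement methods.
hw_counter   = [1200, 340, 980, 50, 2210]
umi_estimate = [1100, 400, 900, 80, 2000]
r = pearson(hw_counter, umi_estimate)  # close to 1: strong agreement
```

A value near 1, like the 0.88 reported on the next slide, means UMI's cheap estimates track the hardware counters closely even though the absolute counts differ.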
16 Cache miss correlation results
HW counter vs. Cachegrind
–Correlation 0.99
–20x to 100x slowdown
HW counter vs. UMI
–Correlation 0.88
–Less than 2x slowdown in worst case
[Chart: correlation coefficients for (HW, CG) and (HW, UMI) on a 0 to 1 scale.]
17 What can UMI do for you?
Inexpensive introspection everywhere
Coarse grained memory analysis
–Quick and dirty
Fine grained memory analysis
–Expose opportunities for optimizations
Runtime memory-specific optimizations
–Pluggable prefetching, learning, adaptation
18 Fine grained memory analysis
Experiment: predict delinquent loads using UMI
–Individual loads with cache miss rate greater than a threshold
Delinquent load set determined according to full cache simulator (Cachegrind)
–Loads that contribute 90% of total cache misses
Measure and report two metrics: recall and false positive rate, comparing the delinquent loads predicted by UMI against those identified by Cachegrind
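The two metrics can be read off the overlap of the two load sets. A small sketch, assuming recall is the fraction of Cachegrind's delinquent loads that UMI also predicts, and false positive rate is the fraction of UMI's predictions that Cachegrind does not confirm (the load addresses below are hypothetical):

```python
def recall_and_false_positive(predicted, actual):
    # recall: fraction of true delinquent loads that UMI also predicts.
    # false positive: fraction of UMI's predictions not confirmed by Cachegrind.
    predicted, actual = set(predicted), set(actual)
    hits = predicted & actual
    recall = len(hits) / len(actual)
    false_pos = len(predicted - hits) / len(predicted)
    return recall, false_pos

# Hypothetical program counters of delinquent loads, for illustration.
cachegrind_loads = {0x4008A0, 0x4008F4, 0x400B10, 0x400C3C}
umi_loads        = {0x4008A0, 0x4008F4, 0x400D88}
r, fp = recall_and_false_positive(umi_loads, cachegrind_loads)
# recall = 0.5, false positive ≈ 0.33
```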
19 UMI delinquent load prediction accuracy
                                  Recall (higher is better)   False positive (lower is better)
benchmarks with ≥ 1% miss rate    88%                         55%
benchmarks with < 1% miss rate    26%                         59%
20 What can UMI do for you?
Inexpensive introspection everywhere
Coarse grained memory analysis
–Quick and dirty
Fine grained memory analysis
–Expose opportunities for optimizations
Runtime memory-specific optimizations
–Pluggable prefetching, learning, adaptation
21 Experiment: online stride prefetching
Use results of delinquent load prediction
Discover stride patterns for delinquent loads
Insert instructions to prefetch data to L2
Compare runtime for UMI and P4 with HW stride prefetcher
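Stride discovery over a recorded address profile can be sketched as a histogram of deltas between consecutive references. The 60% dominance threshold below is an assumption for illustration; the slide does not give the actual predictor's parameters:

```python
from collections import Counter

def dominant_stride(addresses, min_fraction=0.6):
    """Return the most common delta between consecutive references,
    or None if no single stride dominates (hypothetical threshold)."""
    if len(addresses) < 2:
        return None
    deltas = Counter(b - a for a, b in zip(addresses, addresses[1:]))
    stride, count = deltas.most_common(1)[0]
    if stride != 0 and count >= min_fraction * (len(addresses) - 1):
        return stride
    return None

# A load walking an array of 8-byte elements: a stride of 8 dominates.
profile = [0x1000, 0x1008, 0x1010, 0x1018, 0x1020]
dominant_stride(profile)  # → 8
```

A discovered stride can then drive the inserted prefetch instructions, requesting addr + k*stride some iterations ahead of the delinquent load, targeting the L2 as the slide describes.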
22 Data prefetching results summary
[Chart: running time normalized to native execution (lower is better) for SW prefetching + DynamoRIO + UMI, HW prefetching + native binary, and combined prefetching + DynamoRIO + UMI, across 181.mcf, 171.swim, 172.mgrid, 179.art, 183.equake, 188.ammp, 191.fma3d, 301.apsi, em3d, mst, ft, and the average.]
The speedups more than offset the slowdowns from binary instrumentation!
23 The Gory Details
24 UMI components
–Region selector
–Instrumentor
–Profile analyzer
25 Region selector
Identify representative code regions
–Focus on traces, loops
–Frequently executed code
–Piggyback on binary instrumentor tricks
Reinforce with sampling
–Time based, or leverage HW counters
–Naturally adapt to program phases
26 Instrumentor
Record address references
–Insert instructions to record the address referenced by each memory operation
Manage profiling overhead
–Clone code trace (akin to the Arnold-Ryder scheme)
–Selective instrumentation of memory operations, e.g., ignore stack and static data
27 Recording profiles
[Diagram: code traces T1 and T2, each with an execution counter and instrumented memory operations (op1, op2, op3) writing referenced addresses into per-trace address profiles; page protection detects profile overflow, and early trace exits are handled.]
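In code form, the recording scheme amounts to a fixed-size address buffer per trace whose overflow hands the profile to an analyzer. This sketch uses an explicit length check for clarity; the real system avoids that per-access cost by letting the overflowing write hit a protected guard page. All names here are illustrative:

```python
class AddressProfile:
    def __init__(self, capacity, on_full):
        self.buf = []
        self.capacity = capacity
        self.on_full = on_full  # callback: hands the profile to the mini-simulator

    def record(self, addr):
        # In UMI this is a few inlined instructions in the cloned trace;
        # overflow is caught by a protected guard page, not an explicit check.
        self.buf.append(addr)
        if len(self.buf) >= self.capacity:
            self.on_full(self.buf)
            self.buf = []

seen = []
profile = AddressProfile(capacity=4, on_full=lambda buf: seen.append(list(buf)))
for addr in [0x10, 0x20, 0x30, 0x40, 0x50]:
    profile.record(addr)
# seen == [[0x10, 0x20, 0x30, 0x40]]; 0x50 waits for the next flush
```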
28 Mini-simulator
Triggered when the code or address profile is full
Simple cache simulator
–Currently simulates the L2 cache of the host
–LRU replacement
–Improves approximations with techniques similar to offline fast-forwarding simulators: warm-up and periodic flushing
Other possible analyzers
–Reference affinity model
–Data reuse and locality analysis
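A set-associative LRU cache simulator of the kind described above fits in a few lines. The geometry below (1024 sets, 8 ways, 64-byte lines, i.e., a 512 KB L2) is illustrative, not the paper's configuration:

```python
from collections import OrderedDict

class MiniCache:
    """Set-associative cache with LRU replacement: a toy stand-in for
    UMI's L2 mini-simulator (parameters are illustrative)."""
    def __init__(self, sets=1024, ways=8, line=64):
        self.sets, self.ways, self.line = sets, ways, line
        self.tags = [OrderedDict() for _ in range(sets)]
        self.hits = self.misses = 0

    def access(self, addr):
        block = addr // self.line
        idx, tag = block % self.sets, block // self.sets
        s = self.tags[idx]
        if tag in s:
            s.move_to_end(tag)         # refresh LRU position
            self.hits += 1
        else:
            self.misses += 1
            s[tag] = True
            if len(s) > self.ways:     # evict the least recently used line
                s.popitem(last=False)

cache = MiniCache()
for addr in range(0, 64 * 100, 64):    # 100 distinct lines: all cold misses
    cache.access(addr)
```

Replaying each recorded address profile through such a simulator yields per-load miss rates, which is exactly what the delinquent load prediction consumes; warm-up and periodic flushing compensate for the gaps between sampled profiles.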
29 Mini-simulations and parameter sensitivity (181.mcf)
[Chart: recall ratio, false positive ratio, and normalized performance vs. sampling frequency threshold from 1 to 1024.]
Regular data structures: if the sampling threshold is too high, it starts to exceed loop bounds and misses profiling important loops. An adaptive threshold is best.
30 Mini-simulations and parameter sensitivity (197.parser)
[Chart: recall ratio, false positive ratio, and normalized performance vs. address profile length from 64 to 32K.]
Irregular data structures need longer profiles to reach useful conclusions.
31 Summary
UMI is lightweight and has a low overhead
–1% more than DynamoRIO
–Can be done with Pin, Valgrind, etc.
–No added hardware necessary
–No synchronization or syscall headaches
–Other cores can do real work!
Practical for extracting detailed information
–Online and workload specific
–Instruction level memory reference profiles
–Versatile and user-programmable analysis
Facilitates migration of offline memory optimizations to an online setting
32 Future work
More types of online analysis
–Include global information
–Incremental (leverage previous execution info)
–Combine analysis across multiple threads
More runtime optimizations
–Advanced prefetch optimization, e.g., hot data stream prefetching, Markov prefetcher
–Locality enhancing data reorganization, e.g., pool allocation, cooperative allocation between different threads
Your ideas here…