1 Neural Methods for Dynamic Branch Prediction
Daniel A. Jiménez
Department of Computer Science, Rutgers University
2 The Context
- I'll be discussing the implementation of microprocessors: microarchitecture
- I study deeply pipelined, high-clock-frequency CPUs
- The goal is to improve performance, i.e., make the program go faster
- How can we exploit program behavior to make it go faster?
  - Remove control dependences
  - Increase instruction-level parallelism
3 An Example
- This C++ code computes something useful. The inner loop executes two statements each time through the loop.

int foo (int w[], bool v[], int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        if (v[i]) sum += w[i];
        else sum += ~w[i];
    }
    return sum;
}
4 An Example, continued
- This C++ code computes the same thing with three statements in the loop.
- This version is 55% faster on a Pentium 4.
- The previous version had many mispredicted branch instructions.

int foo2 (int w[], bool v[], int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        int a = w[i];
        int b = -(int) v[i];
        sum += ~(a ^ b);
    }
    return sum;
}
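The two versions above are equivalent because b = -(int)v[i] is all ones when v[i] is true (so ~(a ^ b) = a) and all zeros when v[i] is false (so ~(a ^ b) = ~a). A small self-contained sanity check of the identity:

```cpp
#include <cassert>

// Branchy original: a data-dependent branch on v[i] inside the loop.
int foo(const int w[], const bool v[], int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        if (v[i]) sum += w[i];
        else      sum += ~w[i];
    }
    return sum;
}

// Branchless version: b is all ones when v[i] is true, zero otherwise,
// so ~(a ^ b) selects a or ~a without a conditional branch.
int foo2(const int w[], const bool v[], int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        int a = w[i];
        int b = -(int)v[i];
        sum += ~(a ^ b);
    }
    return sum;
}
```

Because the second version replaces a hard-to-predict branch with arithmetic, its speed no longer depends on the branch predictor at all.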
5 How an Instruction is Processed
Processing can be divided into five stages: instruction fetch, instruction decode, execute, memory access, and write back.
6 Instruction-Level Parallelism
To speed up processing, pipelining overlaps the execution of multiple instructions, one per stage, exploiting parallelism between instructions.
7 Control Hazards: Branches
Conditional branches create a problem for pipelining: the next instruction can't be fetched until the branch has executed, several stages later.
8 Pipelining and Branches
Pipelining overlaps instructions to exploit parallelism, allowing the clock rate to be increased. Unresolved branches cause bubbles in the pipeline, where some stages are left idle.
9 Branch Prediction
A branch predictor allows the processor to speculatively fetch and execute instructions down the predicted path. Branch predictors must be highly accurate to avoid costly mispredictions!
10 Branch Predictors Must Improve
- The cost of a misprediction is proportional to pipeline depth
- Deeper pipelines allow higher clock rates by decreasing the delay of each pipeline stage
- As pipelines deepen, we need more accurate branch predictors
  - The Pentium 4 pipeline has 20 stages
  - Future pipelines will have more than 32 stages
- Simulations with SimpleScalar/Alpha: decreasing the misprediction rate from 9% to 4% results in a 31% speedup for a 32-stage pipeline
11 Overview
- Branch prediction background
- Applying machine learning to branch prediction
- Results and analysis
- Circuit-level implementation
- Future work and conclusions
12 Branch Prediction Background
13 Branch Prediction Background
- The basic mechanism: 2-level adaptive prediction [Yeh & Patt '91]
  - Uses correlations between branch history and outcome
- Examples:
  - gshare [McFarling '93]
  - agree [Sprangle et al. '97]
  - hybrid predictors [Evers et al. '96]
- This scheme is highly accurate in practice
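To make the baseline concrete, here is a minimal sketch of gshare, the 2-level scheme named above: the global history register is XORed with branch-address bits to index a table of 2-bit saturating counters. The sizes and member names are illustrative, not taken from any particular implementation.

```cpp
#include <cstdint>

// Minimal gshare sketch: HISTORY_BITS of global history XORed with the
// branch PC select a 2-bit saturating counter; values {2,3} mean "taken".
constexpr int HISTORY_BITS = 14;
constexpr int TABLE_SIZE   = 1 << HISTORY_BITS;  // exponential in history length

struct Gshare {
    uint8_t  counters[TABLE_SIZE] = {};  // 2-bit counters, start at 0 (not taken)
    uint32_t history = 0;                // global branch history register

    int index(uint32_t pc) const {
        return (pc ^ history) & (TABLE_SIZE - 1);
    }
    bool predict(uint32_t pc) const {
        return counters[index(pc)] >= 2;
    }
    void update(uint32_t pc, bool taken) {
        uint8_t& c = counters[index(pc)];
        if (taken  && c < 3) c++;
        if (!taken && c > 0) c--;
        history = ((history << 1) | (taken ? 1 : 0)) & (TABLE_SIZE - 1);
    }
};
```

Note the table must have one counter per possible (hashed) history, which is why longer histories blow up the storage, as the next slide points out.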
14 Branch Predictor Accuracy
- Larger tables and smarter organizations yield better accuracy
- Longer histories provide more context for finding correlations
- But table size is exponential in history length
- The cost is increased access delay and chip area
15 Applying Machine Learning to Branch Prediction
16 Branch Prediction is a Machine Learning Problem
- So why not apply a machine learning algorithm?
  - Replace 2-bit counters with a more accurate predictor
- Tight constraints on the prediction mechanism:
  - It must be fast and small enough to work as a component of a microprocessor
- Artificial neural networks
  - A simple model of the neural networks in brain cells
  - Learn to recognize and classify patterns
- Most neural nets are slow and complex relative to tables
  - For branch prediction, we need a small and fast neural method
17 A Neural Method for Branch Prediction
- We investigated several neural methods
  - Most were too slow, too big, or not accurate enough
- Our choice: the perceptron [Rosenblatt '62, Block '62]
  - Very high accuracy for branch prediction
  - Prediction and update are quick relative to other neural methods
  - Sound theoretical foundation: the perceptron convergence theorem
  - Proven to work well for many classification problems
18 Branch-Predicting Perceptron
- Inputs (x's) come from the branch history register
- Weights (w's) are small integers learned by on-line training
- The output (y), the dot product of the x's and w's, gives the prediction
- Training finds correlations between history and outcome
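The dot product above can be sketched in a few lines. History bits are represented in bipolar form (+1 for taken, -1 for not taken), and a bias weight w[0] with an implicit always-+1 input captures the branch's overall tendency. The history length and weight width here are illustrative.

```cpp
#include <cstdint>

constexpr int HIST_LEN = 23;  // history length used by the 4KB predictor in the talk

// One perceptron: a bias weight w[0] plus one small integer weight per history bit.
struct Perceptron {
    int8_t w[HIST_LEN + 1] = {};
};

// history[i] is +1 for a taken branch, -1 for not taken.
// Output y = w[0] + sum(w[i+1] * history[i]); predict taken when y >= 0.
int perceptron_output(const Perceptron& p, const int history[HIST_LEN]) {
    int y = p.w[0];  // bias input is a constant +1
    for (int i = 0; i < HIST_LEN; i++)
        y += p.w[i + 1] * history[i];
    return y;
}
```

In hardware this loop is, of course, not sequential: since each x is ±1, every product is just a weight or its negation, and the sum is computed by an adder tree (see the circuit-level slides later).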
19 Training Algorithm
20 Organization of the Perceptron Predictor
- Keeps a table of perceptrons, indexed by branch address
- Inputs come from the branch history register
- Predict taken if the output y ≥ 0; otherwise predict not taken
- Key intuition: table size isn't exponential in history length, so we can consider much longer histories
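The key intuition can be seen directly in the storage arithmetic: each table entry holds one weight per history bit, so total storage is N·(h+1) weights, linear in h, rather than the 2^h counters a gshare-style table needs. Entry count and the modulo hash below are assumptions for illustration.

```cpp
#include <cstdint>

// Assumed sizes for illustration: N perceptrons, each with H+1 signed-byte weights.
constexpr int N = 128;   // number of table entries
constexpr int H = 23;    // history length

int8_t table[N][H + 1];  // storage is N*(H+1) bytes: linear in H, not 2^H

// Select a perceptron by hashing the branch address (here, a simple modulo).
int8_t* lookup(uint32_t pc) {
    return table[(pc >> 2) % N];  // >>2 drops instruction-alignment bits
}
```

Doubling the history length merely doubles each row, whereas doubling a 2-level predictor's history length squares its table size.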
21 Results and Analysis for the Perceptron Predictor
22 Experimental Evaluation
- Execution- and trace-driven simulations:
  - Measure instruction throughput (IPC) and misprediction rates
  - SimpleScalar/Alpha [Burger & Austin '97]
  - Alpha 21264-like configuration: 4-wide issue, 64KB I-cache, 64KB D-cache, 512-entry BTB
  - SPECint 2000 benchmarks
- Technological estimates:
  - HSPICE for circuit delay estimates
  - Modified CACTI 2.0 [Agarwal 2000] for PHT delay estimates
23 Results: Predictor Accuracy
- At a ~4KB hardware budget, the perceptron outperforms a competitive hybrid predictor by 36%: a 1.71% misprediction rate vs. 2.66%
24 Results: Large Hardware Budgets
- The multi-component hybrid was the most accurate fully dynamic predictor known in the literature [Evers 2000]
- The perceptron predictor is even more accurate
25 Delay-Sensitive Implementation
- Even the relatively simple perceptron has high access delay
- Our solution: an overriding perceptron predictor
  - The first level is a single-cycle gshare
  - The second level is a 4KB, 23-bit-history perceptron predictor
- HSPICE total prediction delay estimates:
  - 2 cycles at 833 MHz (like the Alpha 21264)
  - 4 cycles at 1.76 GHz (like the Pentium 4)
- Compared against a 4KB hybrid predictor
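The overriding idea can be sketched as a simple control decision: the front end steers fetch with the fast first-level answer immediately, and when the slower perceptron answer arrives a few cycles later and disagrees, fetch is re-steered, paying a small fixed penalty instead of a full misprediction flush. The structure and names below are a conceptual sketch, not the actual pipeline logic.

```cpp
// Sketch of an overriding organization: a 1-cycle predictor answers first;
// a slower but more accurate predictor may override a few cycles later.
struct OverridingPrediction {
    bool fast_taken;  // single-cycle first-level (gshare) prediction
    bool slow_taken;  // multi-cycle second-level (perceptron) prediction
};

// True if the front end must re-steer fetch to the second-level prediction.
// This costs a few cycles, far less than the ~20-cycle misprediction penalty.
bool must_override(const OverridingPrediction& p) {
    return p.fast_taken != p.slow_taken;
}
```

The scheme pays off whenever the perceptron is right and gshare is wrong often enough that the small override penalty beats the full flushes it avoids.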
26 Results: IPC at a High Clock Rate
- Pentium 4-like configuration: 20-cycle misprediction penalty, 1.76 GHz
- 15.8% higher IPC than gshare, 5.7% higher than the hybrid
27 Analysis: History Length
- The fixed-length path branch predictor can also use long histories [Stark, Evers & Patt '98]
28 Analysis: Training Times
- The perceptron "warms up" faster
29 Circuit-Level Implementation of a Neural Branch Predictor
30 Circuit-Level Implementation
- Example output computation: 12 weights summed by a Wallace tree of depth 6, followed by a 14-bit carry-lookahead adder
- Delay is 2-4 cycles for longer histories
- Carry-save adders have O(1) depth; the carry-lookahead adder has O(log n) depth
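The O(1)-depth claim rests on 3:2 carry-save compression: three addends are reduced to two (a sum word and a carry word) where each output bit depends only on the three input bits in its own column, so the depth is constant regardless of width. A Wallace tree applies this repeatedly until only two addends remain for the final carry-lookahead adder. A minimal sketch of one 3:2 compression step:

```cpp
#include <cstdint>

// 3:2 carry-save compression: reduce three addends to two in O(1) gate depth.
// Each output bit depends only on the three input bits in its column, so no
// carry ripples across the word.
void csa(uint32_t a, uint32_t b, uint32_t c, uint32_t& sum, uint32_t& carry) {
    sum   = a ^ b ^ c;                           // per-column sum, carries ignored
    carry = ((a & b) | (b & c) | (a & c)) << 1;  // per-column carry, shifted left
}
```

The invariant is that sum + carry equals a + b + c, so a tree of these cells postpones all carry propagation to a single final adder.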
31 HSPICE Perceptron Simulations
- 2 cycles at 833 MHz, 4 cycles at 1.76 GHz, in 180 nm technology
32 Future Work and Conclusions
33 Future Work with the Perceptron Predictor
- Let's make the best predictor even better
  - Better representation
  - Better training algorithm
- Latency is a problem
  - Crazy people are saying that overriding organizations don't work as well as simple but large predictors [Me, HPCA 2003]
  - How can we eliminate the latency of the perceptron predictor?
34 Future Work with the Perceptron Predictor
- Value prediction
  - Predict the value of a load to mitigate memory latency
- Indirect branch prediction
  - Virtual dispatch
  - Switch statements in C
- Exit prediction
  - Predict the taken exit from predicated hyperblocks
35 Future Work: Characterizing Predictability
- Branch predictability, value predictability
- How can we characterize algorithms in terms of their predictability?
- Given an algorithm, how can we transform it so that its branches and values are easier to predict?
- How much predictability is inherent in the algorithm, and how much is an artifact of the program structure?
- How can we compare different algorithms' predictability?
36 Conclusions
- Neural predictors can improve performance for deeply pipelined microprocessors
- Perceptron learning is well-suited to microarchitectural implementation
- There is still a lot of work left to be done on the perceptron predictor in particular and on microarchitectural prediction in general
37 The End