1
Drinking from the Firehose
Cool and cold transfer prediction in the Mill™ CPU Architecture
Out-of-the-Box Computing (patents pending), IEEE-SVC, 2013/11/12
2
The Mill Architecture: transfer prediction, without delay
New with the Mill:
- Run-ahead prediction: prediction before the code is loaded
- Explicit prefetch prediction: no wasted instruction loads
- Automatic profiling: prediction in cold code
3
What is prediction?
Prediction is a micro-architecture mechanism that smooths the flow of instructions in today's slow-memory, long-pipeline CPUs. Like caches, the prediction mechanism and its success or failure are invisible to the program, except in their performance and power impact.
Present prediction methods work quite well on small, regular benchmarks run on bare machines. They break down when code has irregular flow of control, and when processes are started or switched frequently.
4
The Mill CPU
The Mill is a new general-purpose commercial CPU family. The Mill has a 10x single-thread power/performance gain over conventional out-of-order superscalar architectures, yet runs the same programs, without rewrite.
This talk explains:
- the problems that prediction is intended to alleviate
- how conventional prediction works
- the Mill CPU's novel approach to prediction
5
Talks in this series:
1. Encoding
2. The Belt
3. Cache hierarchy
4. Prediction (you are here)
5. Metadata and speculation
6. Specification
7. ...
Slides and videos of the other talks are at ootbcomp.com/docs
6
Caution: gross over-simplification! This talk tries to convey an intuitive understanding to the non-specialist. The reality is more complicated.
7
Branches vs. pipelines

    if (I == 0) F(); else G();

Do we call F() or G()?

         load I
         eql  0
         brfl lab
         call F
         ...
    lab: call G
         ...

[Pipeline diagram: cache, decode, execute, and schedule stages.]
Pipeline depth: 32 cycles (Intel Pentium 4 Prescott) vs. 5 cycles (Mill).
8
Branches vs. pipelines

    if (I == 0) F(); else G();

[Pipeline diagram: load I, eql 0, brfl, then stall, stall, stall before call G can issue.]
More stall than work!
9
So we guess...

    if (I == 0) F(); else G();

[Pipeline diagram: guess to call G, which turns out to be correct; call G issues right behind brfl.]
Guess right? No stall!
10
So we guess...

    if (I == 0) F(); else G();

[Pipeline diagram: guess to call F, which turns out to be wrong.]
Guess wrong? Mispredict stalls!
11
So we guess...

    if (I == 0) F(); else G();

[Pipeline diagram: the prediction is fixed to call G, which issues after the mispredict stall.]
Finally!
12
How the guess works

    if (I == 0) F(); else G();

[Pipeline diagram: the empty pipeline, before any instructions are issued.]
13
How the guess works

    if (I == 0) F(); else G();

[Pipeline diagram: load I, eql 0, brfl in flight, with call F injected as the guess.]
14
How the guess works

    if (I == 0) F(); else G();

[Same pipeline diagram, now showing a branch history table supplying the guess.]
15
How the guess works

    if (I == 0) F(); else G();

[Pipeline diagram: with the branch history table feeding the guess, a single stall before call G.]
Many fewer stalls!
16
So what's it cost?
When (as is typical):
- one instruction in eight is a branch
- the predictor guesses right 95% of the time
- the mispredict penalty is 15 cycles
then predict failure wastes 8.5% of cycles.
The simplest fix is to lower the miss penalty: shorten the pipeline! The Mill pipeline is five cycles, not 15, so a Mill misprediction wastes only 3% of cycles.
17
The catch: cold code
The guess is based on prior history with the branch. What happens if there is no prior history? Cold code means a random 50-50 guess.
In cold code:
- one instruction in eight is a branch
- the predictor guesses right only 50% of the time
- the mispredict penalty is 15 cycles
so predict failure wastes 48% of cycles (23% on a Mill). Ouch!
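The arithmetic behind these waste figures can be checked with a small model. This sketch is not from the talk; it uses the standard expected-stall calculation, assuming one cycle per instruction when the prediction is right plus the full penalty per mispredicted branch:

```python
def mispredict_waste(branch_rate, hit_rate, penalty_cycles):
    """Fraction of all cycles lost to mispredict stalls.

    Assumes one cycle per instruction when prediction succeeds,
    plus `penalty_cycles` of stall per mispredicted branch.
    """
    stall = branch_rate * (1.0 - hit_rate) * penalty_cycles
    return stall / (1.0 + stall)

# Warm code: 1-in-8 branches, 95% accuracy.
print(mispredict_waste(1/8, 0.95, 15))  # conventional: ~8.6% (the slide's 8.5%)
print(mispredict_waste(1/8, 0.95, 5))   # Mill, 5-cycle penalty: ~3%

# Cold code: the predictor is a coin flip.
print(mispredict_waste(1/8, 0.50, 15))  # conventional: ~48%
print(mispredict_waste(1/8, 0.50, 5))   # Mill: ~24% (the slide's 23%)
```

Small differences from the slide's figures come down to rounding; the model reproduces the conclusion that a short pipeline cuts the cold-code penalty roughly in half.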
18
But wait, it gets worse!
Cold code means no relevant Branch History contents. It also means no relevant cache contents.
[Diagram: a branch-history mispredict costs 15 cycles, but a fetch that misses all the way to DRAM costs 300+ cycles.]
19
Miss cost in cold code
In cold code, when:
- one instruction in eight is a branch
- the predictor guesses right 50% of the time
- the mispredict penalty is 15 cycles
- the cache miss penalty is 300 cycles
- a cache line is 64 bytes, holding 16 instructions
cold misses waste 96% of cycles (94% on a Mill). Ouch!
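A similar back-of-the-envelope model covers the cold-cache case. This is my own accounting, not the talk's: it assumes every line fetch misses to DRAM and amortizes the miss penalty over the instructions in the line, which lands close to (though not exactly on) the slide's 96%/94% figures:

```python
def cold_waste(branch_rate, hit_rate, mispredict_cycles,
               miss_cycles, insts_per_line):
    """Fraction of cycles wasted when every instruction fetch
    misses to DRAM and the predictor is cold.

    Per useful instruction: one execute cycle, an amortized share
    of the line-miss penalty, and the expected mispredict stall.
    """
    miss_share = miss_cycles / insts_per_line  # e.g. 300 / 16 = 18.75
    stall = branch_rate * (1.0 - hit_rate) * mispredict_cycles
    overhead = miss_share + stall
    return overhead / (1.0 + overhead)

print(cold_waste(1/8, 0.5, 15, 300, 16))  # conventional: ~95%
print(cold_waste(1/8, 0.5, 5, 300, 16))   # Mill: ~95%
```

Note what the model makes obvious: once every fetch goes to DRAM, the miss penalty dwarfs the mispredict penalty, which is why shortening the pipeline barely helps here (96% vs. 94% on the slide).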
20
What to do?
- Use bigger cache lines? Internal fragmentation means no gain.
- Fetch more lines per miss? Cache thrashing means no gain.
Nothing technical works very well.
21
What to do?
- Choose short benchmarks! No problem when the benchmark is only a thousand instructions.
- Blame the software! Code bloat is a software-vendor problem, not a CPU problem.
- Blame the memory vendor! Memory speed is a memory-vendor problem, not a CPU problem.
This approach works. (For some value of "works".)
22
Fundamental problems
- Don't know how much to load from DRAM. (The Mill knows how much will execute.)
- Can't spot branches until code is loaded and decoded. (The Mill knows where branches are, even in unseen code.)
- Can't predict spotted branches without history. (The Mill can predict in never-executed code.)
The rest of the talk shows how the Mill does this.
23
Extended Basic Blocks (EBBs)
The Mill groups code into Extended Basic Blocks: single-entry, multiple-exit sequences of instructions. Branches can only target EBB entry points; it is not possible to jump into the middle of an EBB. Execution flows through a chain of EBBs.
[Diagram: the program counter moves along a chain of EBBs, each left through one of its branches.]
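The single-entry, multiple-exit shape can be illustrated with a toy model (the class and function names are mine, not Mill terminology): an EBB is addressable only by its entry point, and control always enters at the top and leaves through exactly one exit:

```python
# Toy model of Extended Basic Blocks: a program is a map from entry
# address to EBB, and control may only ever land on an entry address.
class EBB:
    def __init__(self, entry, insts, exits):
        self.entry = entry    # the only legal branch target
        self.insts = insts    # straight-line instruction sequence
        self.exits = exits    # possible exit targets (entry addresses)

def run_chain(program, entry, pick_exit, steps):
    """Follow a chain of EBBs for `steps` blocks, starting at `entry`."""
    trace = []
    for _ in range(steps):
        ebb = program[entry]   # lookup by entry point only: no mid-EBB jumps
        trace.append(ebb.entry)
        entry = pick_exit(ebb)  # exactly one of the exits is taken
    return trace

program = {
    100: EBB(100, ["load", "eql", "brfl"], [200, 300]),
    200: EBB(200, ["call F"], [100]),
    300: EBB(300, ["call G"], [100]),
}
print(run_chain(program, 100, lambda e: e.exits[0], 4))  # [100, 200, 100, 200]
```

The `pick_exit` callback stands in for whatever decides which exit fires; the next slides show that this is exactly the thing the Mill predicts.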
24
Predicting EBBs
With an EBB organization, you don't have to predict each branch. Only one of the possibly many branches will pass control out of the EBB, so predict which one: if control enters here, predict that control will exit there.
The Mill predicts exits, not branches.
25
Representing exits
Code is sequential in memory, and is held in cache lines which are also sequential.
26
Representing exits
There is one EBB entry point and one predicted exit point, represented as the difference between them: the prediction.
27
Representing exits
Rather than a byte or instruction count, the Mill predicts:
- the number of cache lines (e.g. line count 2)
- the number of instructions in the last line (e.g. inst count 3)
28
Representing exits
Predictions also contain:
- the offset of the transfer target from the entry point (e.g. 0xabcd)
- the kind of transfer: jump, return, inner call, or outer call
Reading the example prediction (line count 2, inst count 3, target offset 0xabcd, kind jump): "When we enter the EBB: fetch two lines, decode from the entry through the third instruction in the second line, and then jump to (entry + 0xabcd)."
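The fields described above can be sketched as a record plus the rule for interpreting it. The field names and the 64-byte line size are illustrative (the line size matches the earlier slide's example, but varies by Mill family member):

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    line_count: int     # cache lines to fetch for this EBB
    inst_count: int     # instructions to decode in the last line
    target_offset: int  # exit target, relative to the EBB entry
    kind: str           # "jump", "return", "inner call", "outer call"

def interpret(entry, pred, line_bytes=64):
    """What fetch and decode do with one prediction for EBB at `entry`."""
    lines = [entry + i * line_bytes for i in range(pred.line_count)]
    return {
        "fetch_lines": lines,                      # prefetch these lines
        "decode_through": pred.inst_count,         # ...instruction N of the last line
        "next_entry": entry + pred.target_offset,  # then transfer here
        "kind": pred.kind,
    }

p = Prediction(line_count=2, inst_count=3, target_offset=0xabcd, kind="jump")
print(interpret(0x1000, p))
```

The key point the record captures: nothing in it requires looking at the code itself, which is what makes the run-ahead chaining on the next slides possible.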
29
The Exit Table
[Diagram: one prediction record, with line count 2, inst count 3, target 0xabcd, kind jump.]
30
The Exit Table
Predictions are stored in the hardware Exit Table. The Exit Table:
- is direct-mapped, with victim buffers
- is keyed by the EBB entry address and history info
- has check bits to detect collisions
- can use any history-based algorithm
Capacity varies by Mill family member.
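A direct-mapped table with check bits and a small victim buffer might be sketched as follows. This is a software illustration of the general technique, not the Mill's hardware: the sizes, the hash split, and the eviction policy are all invented for the example:

```python
class ExitTable:
    """Direct-mapped prediction store with check bits and a victim buffer."""

    def __init__(self, size=1024, victims=4):
        self.size = size
        self.slots = [None] * size   # each slot holds a (check, prediction) pair
        self.victim = []             # small fully associative overflow buffer
        self.max_victims = victims

    def _index_and_check(self, key):
        h = hash(key)
        # Low bits pick the slot; the rest serve as check bits so that
        # two keys mapping to the same slot are told apart.
        return h % self.size, h // self.size

    def insert(self, key, pred):
        idx, chk = self._index_and_check(key)
        old = self.slots[idx]
        if old is not None and old[0] != chk:
            # Collision: the displaced entry moves to the victim buffer.
            self.victim.append((idx, old[0], old[1]))
            self.victim = self.victim[-self.max_victims:]
        self.slots[idx] = (chk, pred)

    def lookup(self, key):
        idx, chk = self._index_and_check(key)
        slot = self.slots[idx]
        if slot is not None and slot[0] == chk:
            return slot[1]
        for v_idx, v_chk, v_pred in self.victim:
            if v_idx == idx and v_chk == chk:
                return v_pred
        return None  # miss: no prediction for this entry address

table = ExitTable(size=8)
table.insert(123, "pred for EBB@123")
print(table.lookup(123))
```

The victim buffer keeps recently displaced entries reachable, so two hot EBBs that collide in the direct-mapped array don't keep destroying each other's predictions.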
31
Exit chains
Starting with an entry point, the Mill can chain through successive predictions without actually looking at the code. Probe the Exit Table using the entry address (e.g. 123) as the key, returning the keyed prediction (offset +17).
32
Exit chains
Add the offset to the EBB entry address to get the next EBB entry address: 123 + 17 = 140.
33
Exit chains
Rinse and repeat: 140 + (-42) = 98, and so on. Repeat until:
- there is no prediction in the table
- an entry has been seen before (a loop)
- the chain has gone as far as you wanted
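The chaining loop, with the three stop conditions above and the slides' example numbers, can be sketched like this (a toy model: the table is a plain dict from entry address to exit offset):

```python
def chain(exit_table, entry, limit=32):
    """Follow predicted exits from `entry` until the table misses,
    a loop is detected, or the chain has gone far enough."""
    seen = set()
    path = [entry]
    while len(path) < limit:
        if entry not in exit_table:        # no prediction in the table
            break
        if entry in seen:                  # entry seen before: a loop
            break
        seen.add(entry)
        entry = entry + exit_table[entry]  # add offset to get the next entry
        path.append(entry)
    return path

# Slides' example: probe 123, offset +17 gives 140, offset -42 gives 98.
print(chain({123: 17, 140: -42}, 123))  # [123, 140, 98]
```

Note that the loop never touches the code itself; every address it produces is a candidate for prefetch, which is exactly what the next slide does with the chain.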
34
Prefetch
Predictions chained from the Exit Table go to the Prefetcher, which prefetches the predicted lines from cache/DRAM using the entry address and line count. Prefetches cannot fault or trap; a would-be fault instead stops the chaining. Prefetches are low priority, using idle cycles to memory.
35
The Prediction Cache
After prefetch, chained predictions are stored in the Prediction Cache, which is small, fast, and fully associative. Chaining from the Exit Table stops if a prediction is found to be already in the Cache (typically a loop). Chaining continues in the cache, possibly looping; a miss resumes chaining from the Exit Table.
36
The Fetcher
Predictions are chained from the Prediction Cache (following loops) to the Fetcher.
37
The Fetcher
Predictions are chained from the Prediction Cache (following loops) to the Fetcher. Using the entry address and line count, lines are fetched from the regular cache hierarchy into a microcache attached to the decoder.
38
The Decoder
Prediction chains end at the Decoder, which also receives a stream of the corresponding cache lines from the Microcache. The result is that the Decoder has a queue of predictions, and another queue of the matching cache lines, kept continuously full and available. It can decode down the predicted path at the full 30+ instructions-per-cycle speed.
39
Timing
[Diagram: Exit Table, Prefetcher, Prediction Cache, Fetcher, Microcache, and Decoder; 3 cycles plus 2 cycles make up the mispredict penalty; vertically aligned units work in parallel.]
Once started, the predictor can sustain one prediction every three cycles from the Exit Table.
40
Fundamental problems, redux
- Don't know how much to load from DRAM. (The Mill knows how much will execute.)
- Can't spot branches until code is loaded and decoded. (The Mill knows where branches are, even in unseen code.)
- Can't predict spotted branches without history. (The Mill can predict in never-executed code.)
41
Prediction feedback
All predictors use feedback from execution experience to alter predictions, tracking changing program behavior. If a prediction was wrong, it can be changed to predict what actually did happen. The Exit Table contents reflect current history for all contained predictions.
42
"All contained predictions"? Not one prediction for each EBB in the program?
No! Tables are much too small to hold predictions for all EBBs. In a conventional branch predictor, each prediction is built up over time with increasing experience of the particular branch. But if the CPU is switched to another process, the prediction is thrown away and overwritten. Every process switch is followed by a period of poor predictions while experience is built up again.
43
A second source of predictions
Like other predictors, the Mill builds predictions from experience. However, it has a second source: the program load module, which carries predictions alongside the code and static data. The load module is used when there is no experience: missing predictions are read from the load module into the exit table.
44
But there's a catch...
Loading a prediction from DRAM (or even from L2 cache) takes much longer than a mispredict penalty. By the time it's loaded, we no longer need it!
Solution: load bunches of likely-needed predictions. But which predictions are likely-needed?
45
Likely-needed predictions
Should we load on a misprediction? No: we have a prediction, it's just wrong. Should we load on a missing prediction? No: it may only be a rarely-taken path that aged out of the table.
We should bulk-load only when entering a whole new region of program activity that we haven't been to before (recently), and may stay in for a while, or re-enter. Like a function.
46
Likely-needed predictions
The Mill bulk-loads the predictions of a function when a call finds no prediction for the entry EBB.

    int main() {
        phase1();
        phase2();
        phase3();
        return 0;
    }

Each call triggers loading of the predictions for the code of that function.
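The trigger condition can be sketched as a toy model. The load-module layout, the function names, and the per-function grouping are all invented for illustration; the point is only the rule: a call whose entry EBB has no prediction triggers one bulk load for the whole function:

```python
# The load module carries per-function prediction bundles alongside the code.
# Keys are EBB entry addresses; values stand in for prediction records.
load_module = {
    "phase1": {1000: "pred@1000", 1040: "pred@1040"},
    "phase2": {2000: "pred@2000"},
}
func_entry = {"phase1": 1000, "phase2": 2000}

exit_table = {}

def on_call(func):
    """On a call: if the callee's entry EBB has no prediction in the
    Exit Table, bulk-load all of that function's predictions."""
    entry = func_entry[func]
    if entry not in exit_table:
        exit_table.update(load_module[func])  # one bulk load, not per-branch
    return exit_table[entry]

print(on_call("phase1"))  # first call: bulk-loads phase1's predictions
print(on_call("phase1"))  # second call: entry already present, no load
```

Loading per function rather than per branch is what amortizes the DRAM latency that the previous slide identified as the catch.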
47
Program phase-change
At a phase change (or for code that was swapped out long enough):
1. Recognize when a chain or misprediction leads to a call for which there is no Exit Table entry.
2. Bulk-load the predictions for the function.
3. Start the prediction chain in the called function.
4. Chaining will prefetch the predicted code path.
5. Execute as fast as the code comes in.
Overall delay: one load time for the first predictions, plus one load time for the initial code prefetch; two loads total, with everything after that overlapped in parallel. Versus conventional: one code load time per branch.
48
Where does the load module get its predictions?
The compiler can perfectly predict EBBs that contain no conditional branches: calls, returns, and jumps. A profiler can measure conditional behavior, but instrumenting the load module changes the behavior. So the Mill does it for you:
- Exit Table hardware logs experience with predictions.
- Post-processing of the log updates the load module.
- Log info is available for JITs and optimizers.
Mill programs get faster every time they run.
49
The fine print
Newly compiled predictions assume every EBB will execute to its final transfer. This policy causes all cache lines of the EBB to be prefetched, improving performance at the expense of loading unused lines. Later experience corrects the line counts.
When experience shows that an EBB in a function is almost never entered (often error-handling code), it is omitted from the bulk-load list, saving Exit Table space and memory traffic.
50
Fundamental problem summary
- Don't know how much to load from DRAM. (The Mill knows how much will execute.)
- Can't spot branches until code is loaded and decoded. (The Mill knows where the exits are.)
- Can't predict spotted branches without history. (The Mill can predict in never-executed code.)
Mill programs get faster every time they run.
51
Shameless plug
For technical info about the Mill CPU architecture: ootbcomp.com/docs
To sign up for future announcements, white papers, etc.: ootbcomp.com/mailing-list and ootbcomp.com/investor-list