1 CSE502: Computer Architecture Superscalar Decode

2 CSE502: Computer Architecture Superscalar Decode for RISC ISAs
Decode X insns. per cycle (e.g., 4-wide)
– Just duplicate the hardware
– Instructions aligned at 32-bit boundaries
[Diagram: a scalar decoder takes one 32-bit inst and produces one decoded inst; a 4-wide superscalar fetch feeds four identical decoders, producing four decoded insts per cycle]
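Because every instruction is a fixed 32-bit word at a known alignment, widening decode really is just replication: each decoder can be handed its own word with no dependence on its neighbors. A minimal C sketch of that idea (the field layout in decode_one is an illustrative, loosely RISC-V-like assumption, not any particular ISA):

    #include <stdint.h>
    #include <stdio.h>

    #define WIDTH 4  /* 4-wide superscalar decode */

    /* Hypothetical decoded form; the fields are illustrative only. */
    typedef struct {
        unsigned opcode, rd, rs1, rs2;
    } decoded_t;

    /* Decode one fixed-width 32-bit instruction (made-up field layout). */
    static decoded_t decode_one(uint32_t insn) {
        decoded_t d;
        d.opcode = insn & 0x7f;
        d.rd     = (insn >> 7)  & 0x1f;
        d.rs1    = (insn >> 15) & 0x1f;
        d.rs2    = (insn >> 20) & 0x1f;
        return d;
    }

    int main(void) {
        /* One aligned fetch group of WIDTH 32-bit words (arbitrary encodings). */
        uint32_t fetch_group[WIDTH] = {0x003100b3, 0x00512023, 0x40628133, 0x0000006f};
        decoded_t out[WIDTH];

        /* Each decoder gets its own word; there is no dependence between the
           iterations, so hardware simply replicates the decoder WIDTH times
           and performs all WIDTH decodes in the same cycle. */
        for (int i = 0; i < WIDTH; i++)
            out[i] = decode_one(fetch_group[i]);

        for (int i = 0; i < WIDTH; i++)
            printf("slot %d: opcode=0x%02x rd=%u rs1=%u rs2=%u\n",
                   i, out[i].opcode, out[i].rd, out[i].rs1, out[i].rs2);
        return 0;
    }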

3 CSE502: Computer Architecture uop Limits
How many uops can a decoder generate?
– For complex x86 insts, many are needed (10s, 100s?)
– Makes the decoder horribly complex
– Typically there's a limit to keep complexity under control
One x86 instruction → 1-4 uops
– Most instructions translate to 1.5-2.0 uops
What if a complex insn. needs more than 4 uops?

4 CSE502: Computer Architecture UROM/MS for Complex x86 Insts
UROM (microcode ROM) stores the uop equivalents
– Used for nasty x86 instructions
  – Complex ones needing > 4 uops (PUSHA or string REP.MOV)
  – Obsolete ones (like AAA)
Microsequencer (MS) handles the UROM interaction
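To make the split concrete, here is a hedged C sketch of the decode-time decision: instructions whose uop count fits the direct limit are cracked by the decoder itself, while anything beyond it is handed to a microsequencer that streams uops out of a table standing in for the UROM. The instruction names, uop sequences, and the 4-uop threshold are assumptions for illustration only.

    #include <stdio.h>
    #include <string.h>

    #define MAX_DIRECT_UOPS 4   /* a decoder emits at most 4 uops on its own */

    /* Stand-in for the microcode ROM: uop sequences for "nasty" instructions.
       Instruction names and uop sequences are illustrative only. */
    typedef struct {
        const char *x86_insn;
        const char *uops[8];
        int         n_uops;
    } urom_entry_t;

    static const urom_entry_t urom[] = {
        { "REP.MOV", { "LOAD", "STORE", "INC", "mJCC", "LOAD", "STORE" }, 6 },
        { "AAA",     { "uADJ0", "uADJ1", "uADJ2", "uADJ3", "uADJ4" },     5 },
    };

    /* Decode one instruction: if its uop count fits the direct-decode limit,
       the decoder cracks it itself; otherwise the microsequencer (MS) streams
       the sequence out of the UROM, typically over several cycles. */
    static void decode(const char *insn, int uop_count) {
        printf("%s:", insn);
        if (uop_count <= MAX_DIRECT_UOPS) {
            printf(" cracked directly into %d uop(s)\n", uop_count);
            return;
        }
        for (size_t i = 0; i < sizeof urom / sizeof urom[0]; i++) {
            if (strcmp(urom[i].x86_insn, insn) == 0) {
                printf(" handed to MS, UROM supplies:");
                for (int j = 0; j < urom[i].n_uops; j++)
                    printf(" %s", urom[i].uops[j]);
                printf("\n");
                return;
            }
        }
        printf(" complex, but no UROM entry in this sketch\n");
    }

    int main(void) {
        decode("ADD", 1);       /* simple: one uop */
        decode("ADD [mem]", 2); /* load + add: still within the direct limit */
        decode("REP.MOV", 6);   /* too many uops: goes to the UROM/MS path */
        decode("AAA", 5);
        return 0;
    }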

5 CSE502: Computer Architecture UROM/MS Example (3 uop-wide)
[Diagram: cycle-by-cycle fetch (x86 insts), decode (uops), and UROM (uops) rows for a stream containing ADD, STORE, SUB, REP.MOV, ADD [ ], XOR, ... Simple insts are cracked directly into uops (ADD, STA/STD, SUB, LOAD, ...); when REP.MOV reaches the decoder, decode stalls while the UROM supplies its uop sequence (STORE, INC, mJCC, LOAD, ...) over multiple cycles (cycle 3 through cycle n+1 in the figure), after which normal decode resumes with ADD and XOR]
Complex instructions get their uops from the mcode sequencer

6 CSE502: Computer Architecture Superscalar CISC Decode
Instruction Length Decode (ILD)
– Where are the instructions?
– Limited decode: just enough to parse prefixes, modes
Shift/Alignment
– Get the right bytes to the decoders
Decode
– Crack into uops
Do this for N instructions per cycle!

7 CSE502: Computer Architecture ILD Recurrence/Loop
PC(i) = X
PC(i+1) = PC(i) + sizeof( Mem[PC(i)] )
PC(i+2) = PC(i+1) + sizeof( Mem[PC(i+1)] )
        = PC(i) + sizeof( Mem[PC(i)] ) + sizeof( Mem[PC(i+1)] )
Can't find the start of the next insn. before decoding the first
Must do ILD serially
– ILD of 4 insns/cycle implies cycle time will be 4x
Critical component that is not pipeline-able
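The recurrence is easiest to see as a loop: the start of instruction i+1 cannot be computed until the length of instruction i is known, so the iterations below form a serial dependence chain that cannot be spread across a fetch group. The one-byte length encoding is a toy stand-in for real x86 length decoding.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy "limited decode": return the length, in bytes, of the instruction
       starting at buf.  Real x86 ILD must parse prefixes, opcode, ModRM, etc.;
       this stand-in just derives a fake length from the first byte. */
    static int insn_length(const uint8_t *buf) {
        return (buf[0] & 0x0f) + 1;            /* assumption: 1..16 bytes */
    }

    int main(void) {
        uint8_t fetch_bytes[16] =
            {0x02, 0, 0, 0x05, 0, 0, 0, 0, 0, 0x01, 0, 0x03, 0, 0, 0, 0};
        int pc = 0, n = 0;

        /* Serial ILD: the PC of instruction i+1 depends on the length of
           instruction i, so the iterations form a dependence chain. */
        while (pc < 16 && n < 4) {
            int len = insn_length(&fetch_bytes[pc]);
            printf("insn %d starts at byte %2d, length %d\n", n, pc, len);
            pc += len;                         /* next start needs this result */
            n++;
        }
        return 0;
    }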

8 CSE502: Computer Architecture Bad x86 Decode Implementation
[Diagram: a 16-byte instruction buffer feeds serial ILD (limited decode) units; each result (Length 1, Length 2, Length 3) is added to locate the next instruction (Inst 1, Inst 2, Inst 3) over Cycles 1-3, after which Decoders 1-3 decode and a left shifter moves the remainder up by the total bytes decoded]
ILD dominates cycle time; not scalable

9 CSE502: Computer Architecture Hardware-Intensive Decode
Decode from every possible instruction starting point!
Giant MUXes to select instruction bytes
[Diagram: an ILD/decoder pair at every byte position of the fetch window, with MUXes picking out the bytes that turn out to be real instruction starts]
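In software terms, the brute-force approach runs the limited length decode at every byte offset of the window (all independent, so hardware can do them in parallel), and only afterwards walks the precomputed lengths to pick out the real instruction starts; that walk is now just a lookup (the giant mux) plus an add per instruction. Same toy length function as in the earlier sketch.

    #include <stdint.h>
    #include <stdio.h>

    #define WINDOW 16

    static int insn_length(uint8_t first_byte) {
        return (first_byte & 0x0f) + 1;        /* toy length decode, 1..16 bytes */
    }

    int main(void) {
        uint8_t bytes[WINDOW] =
            {0x02, 0, 0, 0x05, 0, 0, 0, 0, 0, 0x01, 0, 0x03, 0, 0, 0, 0};
        int len_at[WINDOW];

        /* Step 1: speculative ILD at every possible starting offset.
           These are all independent, so hardware does them in parallel. */
        for (int i = 0; i < WINDOW; i++)
            len_at[i] = insn_length(bytes[i]);

        /* Step 2: pick out the real starts.  Each step is only a table lookup
           (the "giant mux") plus an add, instead of a full serial ILD. */
        for (int pc = 0, n = 0; pc < WINDOW && n < 4; n++) {
            printf("insn %d starts at byte %2d, length %d\n", n, pc, len_at[pc]);
            pc += len_at[pc];
        }
        return 0;
    }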

10 CSE502: Computer Architecture ILD in Hardware-Intensive Approach
[Diagram: instructions of 6, 4, and 3 bytes; each length feeds an adder to select the next instruction's decoder; total bytes decoded = 11]
Previous: 3 ILD + 2 add
Now: 1 ILD + 2 (mux+add)

11 CSE502: Computer Architecture Predecoding
ILD loop is hardware intensive
– Impacts latency
– Consumes substantial power
If instructions A, B, and C are decoded...
– ... lengths for A, B, and C will still be the same next time
– No need to repeat ILD
Possible to cache the ILD work
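A hedged sketch of caching the ILD work: the first time a block of bytes is seen, the serial ILD runs once and each discovered instruction start is remembered alongside the bytes; later passes over the same code reuse those marks and skip the ILD loop entirely. The one-length-entry-per-byte layout loosely mirrors per-byte predecode bits and is an assumption of this sketch.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK 16

    static int insn_length(uint8_t b) { return (b & 0x0f) + 1; }  /* toy ILD */

    /* Per-byte predecode info kept alongside the instruction bytes: 0 means
       "not an instruction start", otherwise the instruction's length. */
    typedef struct {
        uint8_t bytes[BLOCK];
        uint8_t start_len[BLOCK];
        int     predecoded;      /* has ILD already been done for this block? */
    } iline_t;

    static void predecode(iline_t *line) {
        memset(line->start_len, 0, sizeof line->start_len);
        for (int pc = 0; pc < BLOCK; ) {       /* serial ILD, done once on fill */
            int len = insn_length(line->bytes[pc]);
            line->start_len[pc] = (uint8_t)len;
            pc += len;
        }
        line->predecoded = 1;
    }

    int main(void) {
        iline_t line = { .bytes =
            {0x02, 0, 0, 0x05, 0, 0, 0, 0, 0, 0x01, 0, 0x03, 0, 0, 0, 0} };

        for (int pass = 0; pass < 2; pass++) { /* e.g. two trips around a loop */
            if (!line.predecoded)
                predecode(&line);              /* expensive only the first time */
            printf("pass %d: instruction starts at", pass);
            for (int i = 0; i < BLOCK; i++)
                if (line.start_len[i])
                    printf(" %d(len %d)", i, line.start_len[i]);
            printf("\n");
        }
        return 0;
    }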

12 CSE502: Computer Architecture Decoder Example: AMD K5 (1/2)
Compute ILD on fetch, store the ILD result in the I$
[Diagram: 8 bytes (b0 ... b7) arrive from memory and pass through predecode logic; each byte is stored in the I$ with 5 extra predecode bits (8-bit inst byte + 5-bit predecode = 13 bits per byte); the decoder consumes 16 such annotated bytes and emits up to 4 ROPs]

13 CSE502: Computer Architecture Decoder Example: AMD K5 (2/2)
Predecode makes decode easier by providing:
– Instruction start/end location (ILD)
– Number of ROPs needed per inst
– Opcode and prefix locations
Power/performance tradeoffs
– Larger I$ (array grows by 62.5%: 5 extra predecode bits per 8-bit byte)
  – Longer I$ latency, more I$ power consumption
– Remove logic from decode
  – Shorter pipeline, simpler logic
  – Cached and reused decode work means less decode power
– Longer effective L1-I miss latency (ILD happens on the fill)

14 CSE502: Computer Architecture Decoder Example: Intel P-Pro
Only Decoder 0 interfaces with the uROM and MS
If the insn. in Decoder 1 or Decoder 2 requires > 1 uop:
1) do not generate output
2) shift it to Decoder 0 on the next cycle
[Diagram: 16 raw instruction bytes feed Decoder 0 (up to 4 uops, backed by the mROM), Decoder 1 and Decoder 2 (1 uop each), and a Branch Address Calculator that can resteer fetch if needed]
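A small C sketch of that steering rule: three decode slots per cycle, where slot 0 may emit several uops (and is the only one wired to the uROM/MS), while slots 1 and 2 may emit a single uop each; a multi-uop instruction that lands in slot 1 or 2 produces nothing that cycle and is re-presented to slot 0 the next cycle. The instruction stream and uop counts are made up for illustration.

    #include <stdio.h>

    typedef struct { const char *name; int uops; } x86_insn_t;

    int main(void) {
        /* Pretend fetch stream with per-instruction uop counts (illustrative). */
        x86_insn_t stream[] = {
            {"ADD", 1}, {"ADD [mem]", 2}, {"SUB", 1},
            {"XOR", 1}, {"INC", 1}, {"STORE", 2}, {"LOAD", 1},
        };
        int n = sizeof stream / sizeof stream[0];

        int i = 0, cycle = 1;
        while (i < n) {
            printf("cycle %d:", cycle++);
            /* Slot 0: may produce several uops; only this decoder reaches the uROM/MS. */
            printf(" D0=%s(%d uops)", stream[i].name, stream[i].uops);
            i++;
            /* Slots 1 and 2: only single-uop instructions may decode here.
               A multi-uop instruction produces no output and is steered to
               slot 0 on the next cycle; decode is in order, so nothing behind
               it decodes this cycle either. */
            for (int slot = 1; slot <= 2 && i < n; slot++) {
                if (stream[i].uops > 1) {
                    printf(" D%d=(defer %s to D0 next cycle)", slot, stream[i].name);
                    break;
                }
                printf(" D%d=%s", slot, stream[i].name);
                i++;
            }
            printf("\n");
        }
        return 0;
    }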

15 CSE502: Computer Architecture Fetch Rate is an ILP Upper Bound
Instruction fetch limits performance
– To sustain an IPC of N, must sustain a fetch rate of N per cycle
– If you consume 1500 calories per day but burn 2000 calories per day, then you will eventually starve.
– Need to fetch N on average, not on every cycle
N-wide superscalar ideally fetches N insns. per cycle
This doesn't happen in practice due to:
– Instruction cache organization
– Branches
– ... and the interaction between the two

16 CSE502: Computer Architecture Instruction Cache Organization
To fetch N instructions per cycle...
– L1-I line must be wide enough for N instructions
PC register selects the L1-I line
A fetch group is the set of insns. starting at PC
– For an N-wide machine, [PC, PC+N-1]
[Diagram: PC indexes a cache line of tag/inst entries, which feed the decoder]

17 CSE502: Computer Architecture Fetch Misalignment (1/2)
If PC = xxx01001, N=4:
– Ideal fetch group is xxx01001 through xxx01100 (inclusive)
[Diagram: the L1-I array indexed by PC, with line offsets 00-11; the fetch group starting at offset 01 within its line runs past the end of the line's width]
Misalignment reduces fetch width
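The lost bandwidth is just arithmetic on the PC's offset within the line, which the short C program below walks through using the slide's numbers (4 instructions per line, a 4-wide machine, and a PC whose line offset is 01): the first access delivers only the instructions up to the line boundary, and a second access is needed for the rest.

    #include <stdio.h>

    #define N          4   /* machine width: want N insns. per cycle                */
    #define LINE_INSNS 4   /* instructions per L1-I line (as in the slide's figure) */

    /* How many of the up-to-N instructions starting at pc sit in the same
       L1-I line as pc?  Only those can be delivered by a single access. */
    static int fetched_this_cycle(unsigned pc) {
        unsigned offset = pc % LINE_INSNS;            /* position within the line */
        unsigned left_in_line = LINE_INSNS - offset;
        return left_in_line < N ? (int)left_in_line : N;
    }

    int main(void) {
        unsigned pc = 9;   /* binary ...01001, i.e. offset 01 within its line */
        int want = N, cycles = 0;

        while (want > 0) {
            int got = fetched_this_cycle(pc);
            if (got > want) got = want;
            cycles++;
            printf("cycle %d: fetch %d insn(s) starting at PC %u\n", cycles, got, pc);
            pc += (unsigned)got;   /* the next access starts where this one stopped */
            want -= got;
        }
        printf("%d-wide fetch group needed %d cycle(s)\n", N, cycles);
        return 0;
    }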

18 CSE502: Computer Architecture Fetch Misalignment (2/2)
Now it takes two cycles to fetch the N instructions
– ½ fetch bandwidth!
[Diagram: Cycle 1 fetches from PC xxx01001 (the tail of one line), Cycle 2 fetches from PC xxx01100 (the start of the next line)]
Might not be a full ½ loss if the leftover slots are combined with the next fetch group

19 CSE502: Computer Architecture Reducing Fetch Fragmentation (1/2)
Make |Fetch Group| < |L1-I Line|
[Diagram: a cache line wider than the fetch group; PC selects a group of insts somewhere within the line, which feeds the decoder]
Can deliver N insns. whenever PC is more than N from the end of the line

20 CSE502: Computer Architecture Reducing Fetch Fragmentation (2/2)
Needs a rotator to decode insns. in the correct order
[Diagram: the cache line's tag/inst entries pass through a rotator indexed by PC, producing an aligned fetch group for the decoder]
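A tiny sketch of what the rotator contributes: the I$ hands back an aligned line in array order, and a shift by the PC's line offset places the instruction at PC into decode slot 0, the next one into slot 1, and so on. The line size, group size, and slot labels are illustrative, and the sketch assumes the whole group fits within the line (PC far enough from the end, as on the previous slide).

    #include <stdio.h>

    #define LINE_INSNS 8   /* instructions per L1-I line (illustrative) */
    #define GROUP      4   /* fetch group size, smaller than the line   */

    int main(void) {
        /* One aligned line exactly as it comes out of the I$ array, slot 0 first. */
        const char *line[LINE_INSNS] = {"i0","i1","i2","i3","i4","i5","i6","i7"};
        unsigned pc_offset = 3;   /* PC points at slot 3 of this line */

        /* The rotator: shift the line by pc_offset so the instruction at PC
           lands in decode slot 0 and the rest follow in program order.
           Assumes pc_offset + GROUP <= LINE_INSNS (group fits in the line). */
        const char *aligned[GROUP];
        for (int i = 0; i < GROUP; i++)
            aligned[i] = line[pc_offset + i];

        printf("fetch group, already in decode order:");
        for (int i = 0; i < GROUP; i++)
            printf(" %s", aligned[i]);
        printf("\n");
        return 0;
    }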

