Slide 1: Pipelining
Kevin Walsh
CS 3410, Spring 2010, Computer Science, Cornell University
See: P&H Chapter 4.5
© Kavita Bala, Computer Science, Cornell University
Slide 2: The Kids
Alice and Bob. They don’t always get along…
Slide 3: The Bicycle
Slide 4: The Materials
Saw, Drill, Glue, Paint
Slide 5: The Instructions
N pieces, each built following the same sequence: Saw → Drill → Glue → Paint
Slide 6: Design 1: Sequential Schedule
Alice owns the room
Bob can enter when Alice is finished
Repeat for remaining tasks
No possibility for conflicts
Slide 7: Sequential Performance
time: 1 2 3 4 5 6 7 8 …
Elapsed time for Alice: 4
Elapsed time for Bob: 4
Total elapsed time: 4*N
Can we do better?
Latency? Throughput? Concurrency?
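As a rough sketch of the arithmetic (my own illustration, assuming each of the four steps takes one hour, which matches the 1–8 timeline above), the sequential schedule looks like this:

```python
# Sequential schedule: one bicycle occupies the whole room until all four
# steps (saw, drill, glue, paint) are done, then the next bicycle starts.
STAGE_HOURS = [1, 1, 1, 1]            # assumed: each step takes one hour

def sequential_elapsed(n_tasks):
    """Total hours when tasks run strictly back to back."""
    return n_tasks * sum(STAGE_HOURS)

print(sequential_elapsed(1))    # latency of one task: 4 hours
print(sequential_elapsed(10))   # 10 tasks: 40 hours, i.e. throughput of 1 task per 4 hours
```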
Slide 8: Design 2: Pipelined Design
Partition the room into stages of a pipeline
One person owns a stage at a time
4 stages, so 4 people (Alice, Bob, Carol, Dave) working simultaneously
Everyone moves right in lockstep
Slide 9: Pipelined Performance
time: 1 2 3 4 5 6 7 …
Latency? Throughput? Concurrency?
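Under the same one-hour-per-stage assumption, a companion sketch for the pipelined schedule: the first bicycle still takes four hours, but once the pipeline is full, one finishes every hour.

```python
# Pipelined schedule: four one-hour stages, four workers busy at once.
N_STAGES = 4
STAGE_HOURS = 1                  # assumed uniform stage time

def pipelined_elapsed(n_tasks):
    """Fill the pipeline, then one task completes per stage time."""
    return (N_STAGES + (n_tasks - 1)) * STAGE_HOURS

print(pipelined_elapsed(1))      # latency unchanged: 4 hours
print(pipelined_elapsed(10))     # 13 hours total; throughput approaches 1 task per hour
```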
Slide 10: Unequal Pipeline Stages
time: 0h 1h 2h 3h …
Stage times: 15 min, 30 min, 45 min, 90 min
Latency? Throughput? Concurrency?
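A small sketch (mine, not the slide's worked answer) of what lockstep operation implies when stage times differ: every stage waits for the slowest one, so the 90-minute stage sets the effective cycle time.

```python
# Lockstep pipeline with unequal stages: the slowest stage sets the clock.
stage_minutes = [15, 30, 45, 90]

cycle = max(stage_minutes)               # 90 min between pipeline steps
latency = cycle * len(stage_minutes)     # 360 min for one task end to end
print(f"cycle={cycle} min, latency={latency} min, throughput=1 task per {cycle} min")
```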
Slide 11: Poorly-balanced Pipeline Stages
time: 0h 1h 2h 3h …
Stage times: 15 min, 30 min, 45 min, 90 min
Latency? Throughput? Concurrency?
Slide 12: Re-balanced Pipeline Stages
time: 0h 1h 2h 3h …
Stage times: 15 min, 30 min, 45 min, 90 min
Latency? Throughput? Concurrency?
Slide 13: Splitting Pipeline Stages
time: 0h 1h 2h 3h …
Stage times: 15 min, 30 min, 45 min, 45+45 min
Latency? Throughput? Concurrency?
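Repeating that calculation for the split stages on this slide (again my own sketch, not the lecture's answer) shows why splitting the 90-minute stage into two 45-minute stages helps:

```python
# Splitting the slowest stage: 15/30/45/90 becomes 15/30/45/45/45.
for stages in ([15, 30, 45, 90], [15, 30, 45, 45, 45]):
    cycle = max(stages)                  # lockstep clock = slowest stage
    latency = cycle * len(stages)        # time for one task end to end
    print(f"{stages}: cycle={cycle} min, latency={latency} min, "
          f"throughput=1 task per {cycle} min")
# 4 stages: cycle 90, latency 360; 5 stages: cycle 45, latency 225 -> 2x throughput
```

Note the trade-off visible in the output: throughput doubles while per-task latency only improves because the faster clock no longer idles the short stages as much.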
Slide 14: Pipeline Hazards
time: 0h 1h 2h 3h …
Q: What if the glue step of task 3 depends on the output of task 1?
Latency? Throughput? Concurrency?
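One way to explore the question, under assumptions that are mine rather than the slide's (one-hour stages, tasks entering one hour apart, glue as the third stage, and a task's output available the hour after its paint step finishes): compute how many stall hours, if any, a dependent glue step must wait.

```python
# Toy stall calculator for the bicycle pipeline (assumptions noted above).
N_STAGES = 4
GLUE = 3                                    # glue is the third of four stages

def stalls_needed(producer, consumer):
    producer_ready = producer + N_STAGES    # hour the producer's finished output exists
    consumer_glue = consumer + GLUE - 1     # hour the consumer would glue with no stall
    return max(0, producer_ready - consumer_glue)

print(stalls_needed(1, 3))   # the slide's case: glue of task 3 vs. output of task 1
print(stalls_needed(1, 2))   # a tighter dependency forces bubbles into the pipeline
```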
Slide 15: Lessons
Principle: latencies can be masked by parallel execution
Pipelining:
– Identify pipeline stages
– Isolate stages from each other
– Resolve pipeline hazards
Slide 16: A Processor
[Single-cycle datapath diagram: PC, +4, instruction memory, register file, immediate extend, ALU, compare (=?), jump/branch target computation, control, data memory (addr, din, dout), new PC]
Slide 17: A Processor
[Same datapath, partitioned into five stages: Instruction Fetch, Instruction Decode, Execute, Memory, Write-Back]
Slide 18: Basic Pipeline
Five-stage “RISC” load-store architecture:
1. Instruction Fetch (IF) – get instruction from memory, increment PC
2. Instruction Decode (ID) – translate opcode into control signals and read registers
3. Execute (EX) – perform ALU operation, compute jump/branch targets
4. Memory (MEM) – access memory if needed
5. Writeback (WB) – update register file
Slides thanks to Sally McKee & Kavita Bala
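As a sketch of how these five stages overlap in time (assumed ideal conditions: one instruction issues per cycle, no hazards or stalls), the following prints the classic pipeline diagram:

```python
# Ideal five-stage pipeline diagram: instruction i occupies stage (cycle - i).
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_diagram(n_instructions):
    total_cycles = n_instructions + len(STAGES) - 1
    for i in range(n_instructions):
        cells = []
        for cycle in range(total_cycles):
            s = cycle - i                    # which stage instruction i is in this cycle
            cells.append(STAGES[s] if 0 <= s < len(STAGES) else ".")
        print(f"instr {i}: " + " ".join(f"{c:>3}" for c in cells))

pipeline_diagram(4)
# instr 0:  IF  ID  EX MEM  WB   .   .   .
# instr 1:   .  IF  ID  EX MEM  WB   .   .
# ...
```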
Slide 19: Pipelined Implementation
Break instructions across multiple clock cycles (five, in this case)
Design a separate stage for the execution performed during each clock cycle
Add pipeline registers to isolate signals between different stages
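A software sketch of the pipeline-register idea, with invented field names rather than the real datapath signals: each stage boundary gets its own register, and all of them advance together on the clock edge.

```python
# Toy pipeline registers ("latches") between the five stages. The field names
# are invented for illustration; a real datapath carries many more signals.
from dataclasses import dataclass, field

@dataclass
class Latch:
    pc: int = 0
    instr: str = "nop"

@dataclass
class Pipeline:
    # One register per stage boundary: IF/ID, ID/EX, EX/MEM, MEM/WB.
    if_id: Latch = field(default_factory=Latch)
    id_ex: Latch = field(default_factory=Latch)
    ex_mem: Latch = field(default_factory=Latch)
    mem_wb: Latch = field(default_factory=Latch)

    def clock_edge(self, fetched: Latch):
        # Every boundary register advances at once; during the cycle each stage
        # read only the register in front of it, so the stages stay isolated.
        self.mem_wb, self.ex_mem, self.id_ex, self.if_id = (
            self.ex_mem, self.id_ex, self.if_id, fetched)

p = Pipeline()
p.clock_edge(Latch(pc=0, instr="lw r1, 0(r2)"))
p.clock_edge(Latch(pc=4, instr="add r3, r1, r4"))
print(p.id_ex.instr, "->", p.if_id.instr)   # lw is one stage ahead of add
```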