Chapter 12 CPU Structure and Function

Example Register Organizations

PowerPC Register Organization

Registers
- CPU must have some working space (temporary or scratch-pad storage)
- Registers are the top level of the memory hierarchy
- Their number and function vary between processor designs
- How many? How large? How are they used?

User/Supervisor-Visible Registers
- General purpose or fixed use; byte, word, and double-word accessible
- Data: accumulator(s), integer, floating point, alphanumeric
- Address: data pointers, segment mapping
- Control: IR, PSW, SP, interrupt enable & vector(s), state/context information
- Note: the CPU architecture and the operating system are closely tied
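
As a rough illustration of how these categories can appear to software, here is a hypothetical register set sketched in Python; the names, widths, and grouping are invented for the example and are not tied to any particular architecture.

    # Hypothetical user/supervisor-visible register set; names and fields are
    # illustrative only, not a model of any real processor.
    from dataclasses import dataclass, field

    @dataclass
    class RegisterFile:
        general: dict = field(default_factory=lambda: {f"R{i}": 0 for i in range(8)})  # data/address use
        pc: int = 0               # program counter
        ir: int = 0               # instruction register
        psw: int = 0              # program status word (flags, mode/state bits)
        sp: int = 0               # stack pointer
        int_enable: bool = False  # interrupt enable
        int_vector: int = 0       # interrupt vector base

    regs = RegisterFile()
    regs.general["R1"] = 0x2A     # e.g. an integer data value
    print(hex(regs.general["R1"]), regs.psw)   # 0x2a 0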

Simplified CPU Instruction Sequence
- Fetch instruction
- Interpret instruction
- Fetch operands (calculate addresses & get data)
- Execute (process data)
- Write results (calculate addresses & store data)
A toy sketch of this sequence is given below.
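
As a concrete illustration, here is a minimal sketch of that sequence for a hypothetical two-operand register machine; the instruction format, opcodes, and register names are invented for the example.

    # Minimal sketch of fetch / interpret / fetch-operands / execute / write
    # for a hypothetical register machine. Instructions are tuples:
    # (opcode, destination register, source register 1, source register 2).
    def run(program, registers):
        pc = 0
        while pc < len(program):
            instr = program[pc]                      # fetch instruction
            opcode, dest, src1, src2 = instr         # interpret instruction
            a, b = registers[src1], registers[src2]  # fetch operands
            if opcode == "ADD":                      # execute (process data)
                result = a + b
            elif opcode == "SUB":
                result = a - b
            else:
                raise ValueError(f"unknown opcode {opcode}")
            registers[dest] = result                 # write result
            pc += 1
        return registers

    print(run([("ADD", "R2", "R0", "R1"), ("SUB", "R3", "R2", "R0")],
              {"R0": 5, "R1": 7, "R2": 0, "R3": 0}))
    # {'R0': 5, 'R1': 7, 'R2': 12, 'R3': 7}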

Instruction Cycle with Indirect Addressing

Instruction Cycle State Diagram

Speedup
Can be achieved through:
- A faster cycle time
- Implementing parallelism

Prefetch
Consider the instruction sequence as:
- Fetch instruction
- Execute instruction (often does not access main memory)
Can the computer fetch the next instruction during execution of the current instruction? This is called instruction prefetch. What are the implications of prefetch?

Improved Performance with Prefetch
Speed is improved, but not doubled. Why?
- Fetch is usually shorter than execution
- Any jump or branch means that the prefetched instructions are not the required instructions
Could we prefetch more than one instruction? Could we add "more stages" to further improve performance? (A rough estimate of the two-stage overlap is sketched below.)
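
A rough, back-of-the-envelope estimate of the two-stage overlap, with assumed (illustrative) stage times and branch frequency:

    # Estimate of two-stage fetch/execute overlap with prefetch.
    # Assumed numbers: fetch = 1 time unit, execute = 2, and 20% of
    # instructions are taken branches that waste the prefetched instruction.
    def prefetch_speedup(n, t_fetch=1.0, t_execute=2.0, branch_fraction=0.2):
        sequential = n * (t_fetch + t_execute)
        # With prefetch, each fetch overlaps the previous execute; the fetch
        # cost reappears only when a branch discards the prefetched instruction.
        overlapped = t_fetch + n * t_execute + branch_fraction * n * t_fetch
        return sequential / overlapped

    print(round(prefetch_speedup(100), 2))   # about 1.36 -- better, but far from 2.0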

Pipelining
For our purposes here, consider the instruction sequence as: instruction fetch, instruction decode, data fetch, instruction execute, result store, interrupt check.
Think of it as an "assembly line" of operations: we can begin the next instruction's assembly-line sequence before the last one has finished. In particular, we can fetch the next instruction while the present one is being decoded. This is pipelining.

A Two-Stage Instruction Pipeline

Pipeline "Stations"
Let's define a possible set of pipeline stations:
- Fetch Instruction (FI)
- Decode Instruction (DI)
- Calculate Operand addresses (CO)
- Fetch Operands (FO)
- Execute Instruction (EI)
- Write Operand (WO)
A toy timing diagram for these six stages is sketched below.
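
Under the idealized assumptions that every station takes one cycle and no hazards occur, the timing diagram for these six stages can be generated mechanically; a small sketch:

    # Ideal timing of a six-stage pipeline (FI, DI, CO, FO, EI, WO),
    # assuming one cycle per stage and no branches, stalls, or hazards.
    STAGES = ["FI", "DI", "CO", "FO", "EI", "WO"]

    def timing_diagram(n_instructions):
        total_cycles = len(STAGES) + n_instructions - 1
        rows = []
        for i in range(n_instructions):
            row = ["--"] * total_cycles
            for s, name in enumerate(STAGES):
                row[i + s] = name      # instruction i enters stage s at cycle i + s
            rows.append(row)
        return rows

    for i, row in enumerate(timing_diagram(4), start=1):
        print(f"I{i}: " + " ".join(row))
    # I1: FI DI CO FO EI WO -- -- --
    # I2: -- FI DI CO FO EI WO -- --
    # ...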

Possible Timing Diagram for Instruction Pipeline Operation
Limitation: the cycle time is set by the slowest stage, plus the overhead of transfers between stages.

The Impact of a Conditional Branch on Instruction Pipeline Operation
Instruction 3 is a conditional branch to instruction 15:

Alternative Pipeline View
Instruction 3 is a conditional branch to instruction 15:
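
The cost of that branch can also be estimated numerically. A minimal sketch, assuming the six-stage pipeline above and that the branch outcome is known only after the Execute Instruction (EI) stage:

    # Cycle count with taken branches in the six-stage pipeline, assuming the
    # branch is resolved at the EI stage, so the instructions fetched behind it
    # (instructions 4-7 in the figure) are discarded.
    STAGES = ["FI", "DI", "CO", "FO", "EI", "WO"]

    def cycles_with_branches(n_executed, n_taken_branches, resolve_stage="EI"):
        ideal = len(STAGES) + n_executed - 1        # no-branch pipeline time
        penalty = STAGES.index(resolve_stage)       # cycles wasted per taken branch
        return ideal + penalty * n_taken_branches

    # Nine instructions actually executed, one of them a taken branch:
    print(cycles_with_branches(9, 1))   # 14 ideal + 4 penalty = 18 cycles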

Speedup Factors with Instruction Pipelining
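
The curves in that figure follow from the usual expression for pipeline speedup. Assuming k stages of equal delay, n instructions, and no stalls, pipelined execution takes (k + n - 1) cycles, so the speedup can be evaluated directly:

    # Speedup of a k-stage pipeline over unpipelined execution of n instructions,
    # assuming equal stage delays and no stalls: S = n*k / (k + n - 1).
    def pipeline_speedup(n, k):
        return (n * k) / (k + n - 1)

    for n in (1, 10, 100, 1000):
        print(n, round(pipeline_speedup(n, k=6), 2))
    # 1 1.0
    # 10 4.0
    # 100 5.71
    # 1000 5.97  -- approaches k = 6 but never reaches it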

Branching – Possible Approaches
- Multiple streams
- Prefetch branch target
- Loop buffer
- Branch prediction
- Delayed branching

Multiple Streams
- Have two pipelines
- Prefetch each branch path into a separate pipeline
- Use the appropriate pipeline
Challenges:
- Leads to bus & register contention
- Multiple branches lead to further pipelines being needed

Prefetch Branch Target
- The target of the branch is prefetched in addition to the instructions following the branch
- Keep the target until the branch is executed

Loop Buffer
- Use a small, very fast memory (a "loop buffer" cache)
- Maintained by the fetch stage of the pipeline
- Check the buffer before fetching from memory
- Very good for small loops or jumps within small code sections
A minimal sketch of the lookup is given below.
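
A minimal sketch of the lookup, assuming the buffer simply retains the most recently fetched instruction words (real loop buffers hold the n most recently fetched sequential instructions); sizes and policy are illustrative only:

    # Toy loop buffer: keep recently fetched instruction words and check the
    # buffer before going to memory.
    BUFFER_WORDS = 256

    class LoopBuffer:
        def __init__(self):
            self.entries = {}                     # word address -> instruction word

        def fetch(self, address, memory):
            if address in self.entries:           # hit: small loop served from buffer
                return self.entries[address]
            instruction = memory[address]         # miss: fetch from main memory
            self.entries[address] = instruction
            if len(self.entries) > BUFFER_WORDS:  # evict the oldest entry
                self.entries.pop(next(iter(self.entries)))
            return instruction

    memory = {addr: f"instr_{addr}" for addr in range(1024)}
    buf = LoopBuffer()
    for _ in range(3):                            # a small loop over addresses 10..19
        for addr in range(10, 20):
            buf.fetch(addr, memory)               # misses only on the first pass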

Branch Prediction
- Predict branch never taken, or
- Predict branch always taken
- Predict by opcode
- Use a predict-taken/not-taken switch
- Maintain a branch history table
Which is best?

Predict Branch Taken / Not Taken
Predict never taken:
- Assume that the jump will not happen
- Always fetch the next instruction
Predict always taken:
- Assume that the jump will happen
- Always fetch the target instruction
Which is better? Consider the possible page faults caused by prefetching the target.

Branch Prediction by Opcode / Switch
Predict by opcode:
- Some instructions are more likely to result in a jump than others
- Can get up to roughly 75% success with this strategy
Taken/not-taken switch:
- Based on previous history
- Good for loops
- Perhaps well matched to a programmer's style
A sketch of a simple history-based switch is given below.
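
A sketch of a history-based taken/not-taken switch: a 2-bit saturating counter per branch, a common refinement of the single-bit switch (the specific encoding here is an assumption for illustration). Counter values 0-1 predict not taken; values 2-3 predict taken.

    # Two-bit saturating counter per branch address.
    class TwoBitPredictor:
        def __init__(self):
            self.counters = {}                   # branch address -> counter in 0..3

        def predict(self, branch_addr):
            return self.counters.get(branch_addr, 0) >= 2   # True = predict taken

        def update(self, branch_addr, taken):
            c = self.counters.get(branch_addr, 0)
            self.counters[branch_addr] = min(c + 1, 3) if taken else max(c - 1, 0)

    # A loop branch taken nine times, then not taken once at loop exit:
    predictor, misses = TwoBitPredictor(), 0
    for taken in [True] * 9 + [False]:
        if predictor.predict(0x400) != taken:
            misses += 1
        predictor.update(0x400, taken)
    print(misses)   # 3: two warm-up mispredictions, then only the loop exit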

Maintain a Branch Table
Perhaps maintain a cache table in which each entry holds three fields:
- Address of the branch instruction
- History of branching
- Target(s) of the branch
A sketch combining these fields with the 2-bit switch above is given below.
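
A minimal sketch of such a table, combining the three fields above with the 2-bit history encoding from the previous sketch; the addresses used are illustrative only:

    # Branch history table keyed by branch-instruction address. Each entry holds
    # the branching history (2-bit counter) and the last known target address.
    class BranchHistoryTable:
        def __init__(self):
            self.table = {}    # branch address -> {"history": 0..3, "target": address}

        def lookup(self, branch_addr):
            """Return (predict_taken, predicted_target), or (False, None) on a miss."""
            entry = self.table.get(branch_addr)
            if entry is None:
                return False, None
            return entry["history"] >= 2, entry["target"]

        def record(self, branch_addr, taken, target):
            entry = self.table.setdefault(branch_addr, {"history": 0, "target": target})
            entry["history"] = min(entry["history"] + 1, 3) if taken else max(entry["history"] - 1, 0)
            entry["target"] = target

    bht = BranchHistoryTable()
    bht.record(0x1000, taken=True, target=0x2000)
    bht.record(0x1000, taken=True, target=0x2000)
    print(bht.lookup(0x1000))   # (True, 8192): predict taken, redirect fetch to 0x2000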

Intel 80486 Pipelining
Fetch (Fetch)
- From cache or external memory
- Put in one of two 16-byte prefetch buffers
- Fill buffer with new data as soon as old data is consumed
- On average, 5 instructions are fetched per load
- Operates independently of the other stages to keep the buffers full
Decode stage 1 (D1)
- Opcode & address-mode info
- Uses at most the first 3 bytes of the instruction
- Can direct the D2 stage to get the rest of the instruction
Decode stage 2 (D2)
- Expands the opcode into control signals
- Computation of complex addressing modes
Execute (EX)
- ALU operations, cache access, register update
Writeback (WB)
- Update registers & flags
- Results sent to cache & bus-interface write buffers

80486 Instruction Pipeline Examples