Computer Organization and Architecture: The CPU Structure


CPU Structure
The CPU must:
- Fetch instructions
- Interpret instructions
- Fetch data
- Process data
- Write data

External View of CPU

Internal View of CPU

Registers
- The CPU must have some working space (temporary storage), called registers
- Number and function vary between processor designs
- One of the major design decisions
- Top level of the memory hierarchy

User-Visible Registers
- General purpose
- Data
- Address
- Condition codes

General Purpose Registers (1)
- May be true general purpose
- May be restricted
- May be used for data or addressing
  - Data: accumulator
  - Addressing: segment

General Purpose Registers (2)
- Make them general purpose: increases flexibility and programmer options, but increases instruction size and complexity
- Make them specialized: smaller (faster) instructions, but less flexibility

How Many GP Registers?
- Typically between 8 and 32
- Fewer means more memory references
- More does not reduce memory references and takes up processor real estate
- See also RISC

How Big?
- Large enough to hold the largest address
- Data registers should be able to hold values of most data types
- Some machines allow two contiguous registers to be used as one for holding double-length values

Condition Code Registers
- Sets of individual bits, e.g. result of last operation was zero
- Can be read (implicitly) by programs, e.g. jump if zero
- Cannot (usually) be set by programs

Control & Status Registers Program Counter Instruction Decoding Register Memory Address Register Memory Buffer Register Revision: what do these all do?

Program Status Word (PSW)
A set of bits, including the condition codes:
- Sign of last result
- Zero
- Carry
- Equal
- Overflow
- Interrupt enable/disable
- Supervisor
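As a rough sketch (not the bit layout of any real processor), the PSW can be pictured as flag bits packed into one word. The Python fragment below uses invented bit positions to show how condition codes might be set after an add:

from enum import IntFlag

class PSW(IntFlag):
    """Hypothetical status-word layout; real bit positions vary by processor."""
    SIGN       = 1 << 0   # sign of last result
    ZERO       = 1 << 1   # last result was zero
    CARRY      = 1 << 2   # carry out of the most significant bit
    EQUAL      = 1 << 3   # comparison result was equal
    OVERFLOW   = 1 << 4   # signed overflow occurred
    INT_ENABLE = 1 << 5   # interrupts enabled
    SUPERVISOR = 1 << 6   # CPU is in supervisor (kernel) mode

def flags_after_add(a: int, b: int, width: int = 8) -> PSW:
    """Compute condition codes for an unsigned add of two 'width'-bit values."""
    result = a + b
    flags = PSW(0)
    if result & (1 << width):          # carry out of the top bit
        flags |= PSW.CARRY
    result &= (1 << width) - 1         # truncate to the register width
    if result == 0:
        flags |= PSW.ZERO
    if result & (1 << (width - 1)):    # most significant bit set
        flags |= PSW.SIGN
    return flags

print(flags_after_add(0xFF, 0x01))     # 0xFF + 0x01 wraps to 0x00: CARRY and ZERO set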

Supervisor Mode
- Intel ring zero
- Also called kernel mode
- Allows privileged instructions to execute
- Used by the operating system
- Not available to user programs

Example Register Organizations

Indirect Cycle
- May require memory access to fetch operands
- Indirect addressing requires additional memory accesses
- Can be thought of as an additional instruction subcycle

Instruction Cycle with Indirect

Instruction Cycle State Diagram

Data Flow (Instruction Fetch)
Depends on CPU design. In general, for fetch:
- PC contains the address of the next instruction
- Address moved to MAR
- Address placed on the address bus
- Control unit requests a memory read
- Result placed on the data bus, copied to MBR, then to IR
- Meanwhile PC is incremented by 1

Data Flow (Data Fetch)
- IR is examined
- If indirect addressing is used, the indirect cycle is performed:
  - Rightmost N bits of MBR transferred to MAR
  - Control unit requests a memory read
  - Result (address of the operand) moved to MBR
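The two data flows above can be made concrete with a small register-transfer sketch. The memory contents, the instruction encoding (high bits opcode, low 12 bits address field) and the assumption that the operand uses indirect addressing are all invented for illustration:

# Minimal register-transfer sketch of the fetch and indirect cycles.
memory = {0x100: 0x2300,   # instruction at PC: opcode 0x2, operand address 0x300
          0x300: 0x0455,   # pointer: the real operand lives at 0x455
          0x455: 0x00AA}   # the operand itself

PC, MAR, MBR, IR = 0x100, 0, 0, 0

# --- Fetch cycle ---
MAR = PC                 # address of next instruction moved to MAR
MBR = memory[MAR]        # control unit requests memory read; result lands in MBR
IR = MBR                 # instruction copied into IR
PC += 1                  # meanwhile PC is incremented

# --- Indirect cycle (operand address is itself a pointer) ---
MAR = IR & 0x0FFF        # rightmost N bits of the instruction -> MAR
MBR = memory[MAR]        # read: MBR now holds the address of the operand
MAR = MBR                # one more access fetches the operand itself
MBR = memory[MAR]

print(f"operand = {MBR:#06x}")   # operand = 0x00aa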

Data Flow (Fetch Diagram)

Data Flow (Indirect Diagram)

Data Flow (Execute)
- May take many forms; depends on the instruction being executed
- May include:
  - Memory read/write
  - Input/Output
  - Register transfers
  - ALU operations

Data Flow (Interrupt)
Simple and predictable. The current PC is saved to allow resumption after the interrupt:
- Contents of PC copied to MBR
- Special memory location (e.g. stack pointer) loaded to MAR
- MBR written to memory
- PC loaded with the address of the interrupt handling routine
- Next instruction (first of the interrupt handler) can be fetched
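A similarly minimal sketch of the interrupt entry sequence, with made-up stack and handler addresses (a real CPU would also save status flags and use a dedicated stack-pointer register):

memory = {}
PC  = 0x0204             # address of the next instruction in the interrupted program
SP  = 0x1FFF             # hypothetical stack location used to save the old PC
HANDLER = 0x0F00         # start address of the interrupt handling routine

MBR = PC                 # contents of PC copied to MBR
MAR = SP                 # special memory location (stack) loaded into MAR
memory[MAR] = MBR        # MBR written to memory: old PC is now saved
PC  = HANDLER            # PC loaded with the handler address

print(f"saved PC {memory[SP]:#06x}, next fetch from {PC:#06x}")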

Data Flow (Interrupt Diagram)

Prefetch
- Fetch accesses main memory
- Execution usually does not access main memory
- So the next instruction can be fetched during execution of the current instruction
- Called instruction prefetch

Acceleration by Pipelining

The Instruction Cycle

Acceleration by Pipelining

Theoretical Performance
An ideal pipeline divides a task into k independent sequential subtasks:
- Each subtask requires 1 time unit to complete
- The task itself therefore requires k time units to complete
For n iterations of the task, the execution times are:
- Without pipelining: nk time units
- With pipelining: k + (n - 1) time units
Speedup of a k-stage pipeline: S = nk / [k + (n - 1)], which approaches k for large n.
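The formula is easy to check numerically. The short sketch below simply evaluates it for an arbitrary 5-stage pipeline and shows the speedup approaching k as n grows:

def pipeline_speedup(k: int, n: int) -> float:
    """Ideal speedup of a k-stage pipeline over n task iterations: nk / (k + n - 1)."""
    return (n * k) / (k + n - 1)

# With 5 stages the speedup approaches 5 as n grows:
for n in (1, 10, 100, 10000):
    print(n, round(pipeline_speedup(5, n), 2))   # 1.0, 3.57, 4.81, 5.0 (approx.)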

Acceleration by Pipelining

Pipeline Hazards

Structural Hazards

Data Hazards

Control Hazards

With a conditional branch we incur a penalty even if the branch is not taken, because we have to wait until the branch condition is available. Branch instructions therefore represent a major problem in assuring an optimal flow through the pipeline.

Brief Summary
- Structural hazards are due to resource conflicts.
- Data hazards are produced by data dependencies between instructions.
- Control hazards are produced as a consequence of branch instructions.

Branches
- Branch instructions can dramatically affect pipeline performance.
- Control operations are very frequent in current programs: 20% - 35% of the instructions executed are branches (conditional and unconditional).
- 65% of the branches actually take the branch.
- Conditional branches are much more frequent than unconditional ones (more than two times).
- More than 50% of conditional branches are taken.

Dealing with Branches
- Multiple streams
- Loop buffer
- Delayed branching
- Branch prediction
  - Static prediction
  - Dynamic prediction
  - Branch history table

Multiple Streams
- Have two pipelines
- Prefetch each branch path into a separate pipeline
- Use the appropriate pipeline once the outcome is known
- Leads to bus and register contention
- Multiple branches lead to further pipelines being needed

Loop Buffer
- Very fast memory
- Contains the n most recently fetched instructions
- Maintained by the fetch stage of the pipeline
- Check the buffer before fetching from memory
- Very good for small loops or jumps
- c.f. cache
- Used by the CRAY-1
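A loop buffer behaves like a tiny instruction store holding only the most recent fetches. The toy model below (buffer size and memory contents invented) shows why a loop that fits in the buffer hits on every iteration after the first:

from collections import OrderedDict

class LoopBuffer:
    """Toy loop buffer: keeps the n most recently fetched instructions and is
    checked before going to main memory."""

    def __init__(self, capacity: int = 8):
        self.capacity = capacity
        self.buffer = OrderedDict()          # address -> instruction word

    def fetch(self, address: int, memory: dict) -> tuple[int, bool]:
        if address in self.buffer:           # hit: no memory access needed
            return self.buffer[address], True
        word = memory[address]               # miss: fetch from main memory
        self.buffer[address] = word
        if len(self.buffer) > self.capacity: # keep only the most recent n entries
            self.buffer.popitem(last=False)
        return word, False

memory = {a: 0x1000 + a for a in range(0x20)}
lb = LoopBuffer(capacity=4)
for a in [0, 1, 2, 3, 0, 1, 2, 3]:           # a small loop fits entirely in the buffer
    _, hit = lb.fetch(a, memory)
    print(a, "hit" if hit else "miss")       # first pass misses, second pass all hits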

Delayed Branching
The idea with delayed branching is to let the CPU do some useful work during some of the cycles that would otherwise be stalled. With delayed branching the CPU always executes the instruction that immediately follows the branch, and only then alters (if necessary) the sequence of execution. The instruction after the branch is said to be in the branch delay slot.
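The effect of the delay slot can be illustrated with a toy interpreter (the instruction format is invented for this example): the instruction after a taken branch always executes before control is transferred.

program = [
    ("add",),            # 0
    ("branch", 4),       # 1: branch to index 4 ...
    ("sub",),            # 2: ... but this delay-slot instruction still executes
    ("mul",),            # 3: skipped
    ("halt",),           # 4
]

pc, pending_target, trace = 0, None, []
while True:
    op = program[pc]
    trace.append(op[0])
    next_pc = pc + 1
    if pending_target is not None:                       # a branch was seen last cycle:
        next_pc, pending_target = pending_target, None   # redirect only now
    if op[0] == "branch":
        pending_target = op[1]                           # takes effect after the delay slot
    if op[0] == "halt":
        break
    pc = next_pc

print(trace)   # ['add', 'branch', 'sub', 'halt'] - 'sub' ran in the delay slot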

Delayed Branching

Comparison

Delayed Branching

Branch Prediction
Correct branch prediction is very important and can produce substantial performance improvements:
- Static prediction
- Dynamic prediction
To take full advantage of branch prediction, we can have the instructions not only fetched but also begin execution. This is known as speculative execution.

Speculative Execution
Instructions are executed before the processor is certain that they are on the correct execution path. If the prediction turns out to be correct, execution goes on without introducing any branch penalty. If, however, the prediction is not fulfilled, the instruction(s) started in advance and all their associated data must be purged, and the state prior to their execution restored.

Static Branch Prediction
Static prediction techniques do not take execution history into consideration.
- Predict never taken (Motorola 68020): assumes the branch is not taken.
- Predict always taken: assumes the branch is taken.

Dynamic Branch Prediction
Improves the accuracy of prediction by recording the history of conditional branches.
- One-bit prediction scheme: a single bit records whether the last execution resulted in the branch being taken or not; the system predicts the same behavior as last time.
- Two-bit prediction scheme: predictions are made based on the last two instances of execution.
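A minimal sketch of the two-bit scheme as a saturating counter for a single branch (the initial counter value is an arbitrary choice); the prediction only flips after two consecutive mispredictions:

class TwoBitPredictor:
    """Counter values 0-1 predict 'not taken', 2-3 predict 'taken'."""

    def __init__(self):
        self.counter = 1                      # start weakly "not taken" (arbitrary)

    def predict(self) -> bool:
        return self.counter >= 2              # True means "taken"

    def update(self, taken: bool) -> None:
        if taken:
            self.counter = min(3, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 1)

p = TwoBitPredictor()
outcomes = [True, True, True, False, True, True]   # e.g. a loop branch
for actual in outcomes:
    print("predict", p.predict(), "actual", actual)
    p.update(actual)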

One-Bit Prediction Scheme

Two-Bit Prediction Scheme

Branch History Table History info. can be used not only to predict the outcome of a conditional branch but also to avoid recalculation of the target address. Together with bits used for prediction, the target address can be stored for later use in a branch history table. Using D. B. P with history tables up to 90% of predictions can be correct. Pentium,PowerPC620 use speculative execution with D.B.P based on a branch history table.
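A hypothetical branch history table can be sketched as a small table indexed by the branch instruction's address, with each entry holding the prediction bits and the last target address (sizes and addresses below are invented):

class BHTEntry:
    def __init__(self, target: int):
        self.counter = 2          # two-bit counter, start weakly "taken"
        self.target = target      # last known branch target

class BranchHistoryTable:
    def __init__(self):
        self.table = {}           # branch address -> BHTEntry

    def lookup(self, branch_addr: int):
        """Return (predicted_taken, predicted_target), or (False, None) on a miss."""
        entry = self.table.get(branch_addr)
        if entry is None:
            return False, None
        return entry.counter >= 2, entry.target

    def update(self, branch_addr: int, taken: bool, target: int) -> None:
        entry = self.table.setdefault(branch_addr, BHTEntry(target))
        entry.counter = min(3, entry.counter + 1) if taken else max(0, entry.counter - 1)
        entry.target = target

bht = BranchHistoryTable()
bht.update(0x400, taken=True, target=0x100)
print(bht.lookup(0x400))   # (True, 256): predicted taken, fetch from stored target 0x100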

Branch History Table