Eng. Mohammed Timraz Electronics & Communication Engineer University of Palestine Faculty of Engineering and Urban Planning Software Engineering Department.


Eng. Mohammed Timraz Electronics & Communication Engineer University of Palestine Faculty of Engineering and Urban Planning Software Engineering Department Computer System Architecture ESGD2204 Saturday, 28th March 2010 Chapter 6 Lecture 11

Chapter 6 ADVANCED COMPUTER ORGANIZATION AND DESIGN

THE BASIC COMPUTER
The Basic Computer has two components: a processor (CPU) and a memory (RAM).
– The memory holds 4096 words; 4096 = 2^12, so it takes 12 bits to select a word in memory
– Each word is 16 bits long
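As a quick check of the address-size arithmetic above, the following one-liner (an illustrative Python sketch, not part of the original slides) confirms that a 4096-word memory needs a 12-bit address:

```python
# Quick check of the address-size arithmetic above (illustrative sketch).
import math

memory_words = 4096                           # words in the Basic Computer's memory
address_bits = int(math.log2(memory_words))   # bits needed to select one word
print(address_bits)                           # 12, since 2**12 == 4096
```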

BASIC COMPUTER REGISTERS
List of BC registers:

  Register  Bits  Name                  Function
  DR        16    Data Register         Holds memory operand
  AR        12    Address Register      Holds address for memory
  AC        16    Accumulator           Processor register
  IR        16    Instruction Register  Holds instruction code
  PC        12    Program Counter       Holds address of instruction
  TR        16    Temporary Register    Holds temporary data
  INPR      8     Input Register        Holds input character
  OUTR      8     Output Register       Holds output character

(Diagram: the registers in the Basic Computer alongside the 4096 x 16 memory and CPU.)
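The register list above can be mirrored in a small Python sketch (the class and method names are made up for illustration, not from the slides); the key point is that each register keeps only as many bits as its declared width:

```python
# Hypothetical sketch: each Basic Computer register keeps only as many
# bits as its declared width.
REGISTER_WIDTHS = {
    "DR": 16, "AR": 12, "AC": 16, "IR": 16,
    "PC": 12, "TR": 16, "INPR": 8, "OUTR": 8,
}

class RegisterFile:
    def __init__(self):
        self.values = {name: 0 for name in REGISTER_WIDTHS}

    def write(self, name, value):
        width = REGISTER_WIDTHS[name]
        self.values[name] = value & ((1 << width) - 1)   # truncate to the register width

    def read(self, name):
        return self.values[name]

regs = RegisterFile()
regs.write("PC", 0x1FFF)      # a 13-bit value...
print(hex(regs.read("PC")))   # ...wraps to 12 bits: 0xfff
```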

COMMON BUS SYSTEM
(Diagram: the registers AR, PC, DR, AC, IR, TR, OUTR, and INPR, the ALU with its E flip-flop, and the 4096 x 16 memory with its Address, Read, and Write lines are all connected to a 16-bit common bus; select lines S2 S1 S0 choose which source drives the bus, and each register has load/increment/clear control inputs.)
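A minimal sketch of the bus-selection idea in Python: the select lines S2 S1 S0 pick which source drives the 16-bit bus. The select-code-to-register mapping below follows the usual textbook assignment and is an assumption for illustration, not quoted from the slide.

```python
# Minimal sketch: select lines choose which source drives the 16-bit common bus.
# The mapping of select codes to sources is an assumed, textbook-style assignment.
BUS_SOURCES = {
    0b001: "AR", 0b010: "PC", 0b011: "DR", 0b100: "AC",
    0b101: "IR", 0b110: "TR", 0b111: "MEM",
}

def drive_bus(select, register_values):
    source = BUS_SOURCES.get(select)
    if source is None:
        return 0                                # no source selected
    return register_values[source] & 0xFFFF     # the bus is 16 bits wide

values = {"AR": 0x0123, "PC": 0x0456, "DR": 0xBEEF, "AC": 0x1234,
          "IR": 0x7123, "TR": 0x0000, "MEM": 0xCAFE}
print(hex(drive_bus(0b011, values)))   # DR drives the bus -> 0xbeef
```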

Instruction Set Architecture: Critical Interface
The instruction set is the interface between software and hardware: it defines the computing system as seen by programmers.
Properties of a good abstraction:
– Lasts through many generations (portability)
– Can be used in many different ways (generality)
– Provides convenient functionality to higher levels
– Permits an efficient implementation at lower levels
What matters today is the performance of complete computer systems.

Example: MIPS
Programmable storage:
– 2^32 bytes of memory
– 31 x 32-bit GPRs r1..r31 (r0 = 0)
– 32 x 32-bit FP registers (paired for double precision)
– HI, LO, PC
Questions an ISA must answer: data types? instruction format? addressing modes?
Arithmetic/logical: Add, AddU, Sub, SubU, And, Or, Xor, Nor, SLT, SLTU, AddI, AddIU, SLTI, SLTIU, AndI, OrI, XorI, LUI, SLL, SRL, SRA, SLLV, SRLV, SRAV
Memory access: LB, LBU, LH, LHU, LW, LWL, LWR, SB, SH, SW, SWL, SWR
Control: J, JAL, JR, JALR, BEQ, BNE, BLEZ, BGTZ, BLTZ, BGEZ, BLTZAL, BGEZAL
All instructions are 32 bits and sit on a word boundary.
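Because every MIPS instruction is a fixed 32-bit word, decoding is just bit-field slicing. The sketch below (illustrative Python, not from the slides) pulls apart the standard R-type format opcode | rs | rt | rd | shamt | funct; the example word encodes add $t0, $t1, $t2:

```python
# Illustrative decoder for the fixed 32-bit MIPS R-type format.
def decode_rtype(word):
    return {
        "opcode": (word >> 26) & 0x3F,
        "rs":     (word >> 21) & 0x1F,
        "rt":     (word >> 16) & 0x1F,
        "rd":     (word >> 11) & 0x1F,
        "shamt":  (word >> 6)  & 0x1F,
        "funct":   word        & 0x3F,
    }

print(decode_rtype(0x012A4020))   # add $t0, $t1, $t2
# {'opcode': 0, 'rs': 9, 'rt': 10, 'rd': 8, 'shamt': 0, 'funct': 32}
```

The same fixed-width slicing applies to the I-type and J-type formats; only the field boundaries change.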

Computers in the 21st Century
Understanding the design techniques, machine structures, technology factors, and evaluation methods that will determine the form of computers in the 21st century.
Forces acting on computer architecture: technology, programming languages, operating systems, history, applications, compilers.
Computer architecture itself spans interface design (ISA), organization, the hardware/software boundary, and measurement & evaluation, with parallelism cutting across all of them.
Computer architecture is at a crossroads: institutionalization and renaissance; power, dependability, and multi-CPU vs. single-CPU performance.

What Computer Architecture Brings to the Table
Quantitative principles of design:
1. Take advantage of parallelism
2. Principle of locality
3. Focus on the common case
4. Amdahl’s Law
5. The processor performance equation
A culture of anticipating and exploiting advances in technology (technology performance trends).
Careful, quantitative comparisons:
– Define, quantify, and summarize relative performance
– Define and quantify relative cost
– Define and quantify dependability
– Define and quantify power
A culture of crafting well-defined interfaces that are carefully implemented and thoroughly checked.

1) Taking Advantage of Parallelism
Increasing the throughput of a server computer via multiple processors or multiple disks.
Detailed hardware design:
– Carry-lookahead adders use parallelism to speed up computing sums from linear to logarithmic in the number of bits per operand
– Multiple memory banks are searched in parallel in set-associative caches
Pipelining: overlap instruction execution to reduce the total time to complete an instruction sequence (a rough cost model is sketched below).
– Not every instruction depends on its immediate predecessor, so instructions can execute completely or partially in parallel
– Classic 5-stage pipeline: 1) Instruction Fetch (Ifetch), 2) Register Read (Reg), 3) Execute (ALU), 4) Data Memory Access (Dmem), 5) Register Write (Reg)
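A back-of-the-envelope sketch (hypothetical Python, assuming an ideal hazard-free pipeline) of why the classic 5-stage pipeline pays off: n instructions finish in n + 4 cycles instead of 5n.

```python
# Idealized cost model for the classic 5-stage pipeline (hazard-free assumption).
def unpipelined_cycles(n, stages=5):
    return n * stages

def pipelined_cycles(n, stages=5):
    return n + (stages - 1)   # fill the pipeline once, then one completion per cycle

n = 100
print(unpipelined_cycles(n))                        # 500 cycles
print(pipelined_cycles(n))                          # 104 cycles
print(unpipelined_cycles(n) / pipelined_cycles(n))  # ~4.8X, approaching 5X for large n
```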

Pipelined Instruction Execution Is Faster
(Diagram: successive instructions in program order flow through the Ifetch, Reg, ALU, Dmem, and Reg stages, each starting one clock cycle after the previous one, so once the pipeline is full an instruction completes every cycle.)

Limits to Pipelining
Hazards prevent the next instruction from executing during its designated clock cycle:
– Structural hazards: an attempt to use the same hardware to do two different things at once
– Data hazards: an instruction depends on the result of a prior instruction that is still in the pipeline
– Control hazards: caused by the delay between fetching instructions and deciding on changes in control flow (branches and jumps)
(Diagram: overlapping pipeline stages of instructions in program order, showing where these conflicts arise.)
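To make the data-hazard case concrete, here is a hedged Python sketch (the instruction tuples and register names are invented for illustration) that flags a read-after-write dependence on the immediately preceding instruction:

```python
# Hypothetical sketch: flag read-after-write (data) hazards, i.e. an instruction
# that reads a register written by its immediate predecessor.
program = [
    ("r1", ["r2", "r3"]),   # add r1, r2, r3
    ("r4", ["r1", "r5"]),   # sub r4, r1, r5  <- needs r1 from the add above
    ("r6", ["r7", "r8"]),   # and r6, r7, r8  <- independent, no stall needed
]

for i in range(1, len(program)):
    prev_dest = program[i - 1][0]
    _, sources = program[i]
    if prev_dest in sources:
        print(f"data hazard: instruction {i} reads {prev_dest} "
              f"written by instruction {i - 1}")
```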

2) The Principle of Locality => Caches ($)
The Principle of Locality: programs access a relatively small portion of the address space at any instant of time.
Two different types of locality:
– Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon (e.g., loops, reuse)
– Spatial locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., straight-line code, array access)
For 30 years, hardware has relied on locality for memory performance (processor - cache ($) - memory).
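The sketch below (a deliberately simplified direct-mapped cache in Python; the block size and line count are assumptions, not figures from the slide) shows how a sequential array walk exploits spatial locality while scattered accesses do not:

```python
# Deliberately simplified direct-mapped cache showing why locality matters.
BLOCK_SIZE = 16   # bytes per block (assumed)
NUM_LINES = 64    # cache lines (assumed)

def hit_rate(addresses):
    tags = [None] * NUM_LINES
    hits = 0
    for addr in addresses:
        block = addr // BLOCK_SIZE
        line = block % NUM_LINES
        if tags[line] == block:
            hits += 1             # reuse of a cached block (temporal/spatial locality)
        else:
            tags[line] = block    # miss: bring the block in
    return hits / len(addresses)

sequential = list(range(0, 4096, 4))                    # array walk, 4-byte words
scattered = [(i * 7919) % 65536 for i in range(1024)]   # pseudo-random addresses
print(hit_rate(sequential))   # 0.75: neighbouring words share a block
print(hit_rate(scattered))    # near 0: little locality to exploit
```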

Levels of the Memory Hierarchy

  Level           Capacity          Access Time                Cost
  CPU registers   100s of bytes     300 - 500 ps
  L1/L2 cache     10s - 100s KB     ~1 ns - ~10 ns             ~$1000s / GByte
  Main memory     GBytes            80 - 200 ns                ~$100 / GByte
  Disk            10s of TBytes     ~10 ms (10,000,000 ns)     ~$0.25 / GByte
  Tape vault      semi-infinite     seconds - minutes          ~$1 / GByte

Data is staged between levels in progressively larger transfer units: registers <-> cache move 1-8 byte instruction operands (managed by the program/compiler), the caches and memory exchange blocks (cache controller), memory <-> disk moves 4K-8K byte pages (OS), and disk <-> tape moves files of megabytes (user/operator). Upper levels are faster; lower levels are larger.
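One standard way to reason about these numbers is the average memory access time, AMAT = hit time + miss rate x miss penalty. The formula is the usual one, but the hit time, miss rate, and miss penalty in this Python sketch are assumed example values, not figures from the slide:

```python
# Back-of-the-envelope average memory access time (AMAT); example values assumed.
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

# A ~1 ns cache backed by ~100 ns DRAM, with a 5% miss rate:
print(amat(hit_time_ns=1.0, miss_rate=0.05, miss_penalty_ns=100.0))  # 6.0 ns
```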

3) Focus on the Common Case
“Make the frequent case fast and the rest right.”
Common sense guides computer design; since this is engineering, common sense is valuable.
In making a design trade-off, favor the frequent case over the infrequent case:
– E.g., the instruction fetch and decode unit is used more frequently than the multiplier, so optimize it first
– E.g., if a database server has 50 disks per processor, storage dependability dominates system dependability, so optimize it first
The frequent case is often simpler and can be made faster than the infrequent case:
– E.g., overflow is rare when adding two numbers, so improve performance by optimizing the more common case of no overflow
– This may slow down overflow handling, but overall performance improves by optimizing for the normal case
What is the frequent case, and how much can performance improve by making that case faster? => Amdahl’s Law

4) Amdahl’s Law - Partial Enhancement Limits
If an enhancement speeds up only a fraction of the execution time, the overall speedup is limited by the part that is not enhanced:

  ExTime_new = ExTime_old x [ (1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced ]

  Speedup_overall = ExTime_old / ExTime_new
                  = 1 / [ (1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced ]

Best to ever achieve (as Speedup_enhanced grows without bound):

  Speedup_maximum = 1 / (1 - Fraction_enhanced)
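The law translates directly into code. This small Python sketch (illustrative, with made-up example numbers) computes the overall speedup and its upper bound:

```python
# Amdahl's Law as code: overall speedup when only part of the execution time
# benefits from an enhancement. The demo numbers are made up for illustration.
def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

def best_possible(fraction_enhanced):
    # Limit as speedup_enhanced grows without bound
    return 1.0 / (1.0 - fraction_enhanced)

print(amdahl_speedup(0.5, 10))   # ~1.82: half the work sped up 10X
print(best_possible(0.5))        # 2.0: the unenhanced half caps the gain
```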

4) Amdahl’s Law - Partial Enhancement Limits (continued)
Example: an I/O-bound server gets a new CPU that is 10X faster, but 60% of server time is spent waiting for I/O. A 10X faster CPU is alluring, but the server ends up only about 1.6X faster.
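Working the server example through the formula shows where the quoted 1.6X comes from (a quick check in Python):

```python
# The server example worked out: only the 40% of time not spent waiting on I/O
# benefits from the 10X faster CPU.
fraction_enhanced = 0.4
speedup_enhanced = 10
overall = 1 / ((1 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)
print(round(overall, 2))   # 1.56, i.e. roughly the 1.6X quoted above
```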

5) Processor Performance Equation

  CPU time = Seconds / Program
           = (Instructions / Program) x (Cycles / Instruction) x (Seconds / Cycle)
           = Instruction Count x CPI x Clock Cycle Time

Which parts of the system influence which factor:

                     Inst Count   CPI   Clock Cycle
  Program                X
  Compiler               X        (X)
  Instruction Set        X         X
  Organization                     X        X
  Technology                                X
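In code, the equation is a three-factor product. The instruction count, CPI, and clock rate in this Python sketch are assumed example values, not data from the slide:

```python
# The processor performance equation as a three-factor product (example values assumed).
def cpu_time(instruction_count, cpi, clock_cycle_seconds):
    return instruction_count * cpi * clock_cycle_seconds

seconds = cpu_time(instruction_count=1_000_000_000,        # 10^9 instructions
                   cpi=1.5,                                # average cycles per instruction
                   clock_cycle_seconds=1 / 2_000_000_000)  # 2 GHz clock
print(seconds)   # 0.75 s
```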

What Determines a Clock Cycle?
At the transition edge(s) of each clock pulse, state devices sample and save their present input signals.
– Past: 1 cycle = the time for signals to pass through 10 levels of gates
– Today: determined by numerous time-of-flight issues plus gate delays (clock propagation, wire lengths, drivers)
(Diagram: combinational logic sitting between a latch or register at each end of the cycle.)