Overview Instruction set architecture (MIPS)


Overview
- Instruction set architecture (MIPS)
- Arithmetic operations & data
- System performance
- Processor: datapath and control
- Pipelining to improve performance
- Memory hierarchy
- I/O

Focus
- How computers work
- The MIPS instruction set architecture
- The implementation of the MIPS instruction set architecture: MIPS processor design
- Issues affecting modern processors
  - Pipelining: processor performance improvement
  - Cache: the memory system, I/O systems

Why Learn Computer Architecture?
- You want to call yourself a "computer scientist"
- Computer architecture impacts every other aspect of computer science
- You need to make a purchasing decision or offer "expert" advice
- You want to build software people use and sell many, many copies (you need performance)
Both hardware and software affect performance:
- The algorithm determines the number of source-level statements
- The language, compiler, and architecture determine the machine instructions
- The processor and memory determine how fast instructions are executed
Hence this course's emphasis on assessing and understanding performance.

Objectives
- How programs written in a high-level language (e.g., Java/C++) translate into the language of the hardware, and how the hardware executes them
- The interface between software and hardware, and how software instructs hardware to perform the needed functions
- The factors that determine the performance of a program
- The techniques that hardware designers employ to improve performance
As a consequence, you will understand what features may make one computer design better than another for a particular application.

Evolution…
In the beginning there were only bits… and people spent countless hours trying to program in machine language:
  01100011001011001110100
Finally, before everybody went insane, the assembler was invented: write mnemonics called assembly language and let the assembler translate (a one-to-one translation):
  add A,B
This wasn't for everybody, obviously (imagine how modern applications would have been possible in assembly), so high-level languages were born, and with them compilers to translate to assembly (a many-to-one translation):
  C = A*(SQRT(B)+3.0)

THE BIG IDEA
Levels of abstraction: each layer provides its own (simplified) view and hides the details of the layers below it.

Instruction Set Architecture (ISA)
ISA: an abstract interface between the hardware and the lowest-level software of a machine that encompasses all the information necessary to write a machine language program that will run correctly, including instructions, registers, memory access, I/O, and so on.
"... the attributes of a [computing] system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation." (Amdahl, Blaauw, and Brooks, 1964)
The ISA enables implementations of varying cost and performance to run identical software.
ABI (application binary interface): the user portion of the instruction set plus the operating system interfaces used by application programmers. It defines a standard for binary portability across computers.

High-level to Machine Language
High-level language program (in C)
  -> Compiler ->
Assembly language program (for MIPS)
  -> Assembler ->
Binary machine language program (for MIPS)

How Do the Pieces Fit Together?
Application
Operating System
Compiler / Firmware
Instruction Set Architecture
Instruction set processor / Memory system / I/O system
Datapath & Control
Digital Design
Circuit Design
Computer architecture is the coordination of many levels of abstraction, under a rapidly changing set of forces, through design, measurement, and evaluation.

Organization of a computer

Anatomy of a Computer: the 5 classic components
- Input (keyboard, mouse): brings data into the computer
- Output (display, printer): gets results back out
- Memory: where programs and data live when running
- Datapath ("brawn"): performs the arithmetic operations
- Control ("brain"): guides the operation of the other components based on the program's instructions
The disk is where programs and data live when not running.

Motherboard

Motherboard Layout

Moore's Law
In 1965, Gordon Moore predicted that the number of transistors that can be integrated on a die would double every 18 to 24 months (i.e., grow exponentially with time). Amazingly visionary: the million-transistor/chip barrier was crossed in the 1980s.
- 2,300 transistors, 1 MHz clock (Intel 4004), 1971
- 16 million transistors (UltraSPARC III)
- 42 million transistors, 2 GHz clock (Intel Xeon), 2001
- 55 million transistors, 3 GHz, 130 nm technology, 250 mm2 die (Intel Pentium 4), 2004
- 140 million transistors (HP PA-8500)
Note that Moore's law is not about speed predictions but about chip complexity.

Moore's Law
"Cramming More Components onto Integrated Circuits" (Gordon Moore, Electronics, 1965)
- Transistor capacity: the number of transistors per cost-effective integrated circuit doubles every 18 to 24 months
- Speed: 2x every 1.5 years (since 1985); 100x performance in the last decade

(Figure: Intel Core i9, October 2017 release)

Memory
- Dynamic Random Access Memory (DRAM)
  - The choice for main memory
  - Volatile (contents go away when power is lost)
  - Fast; relatively small
  - Capacity: 2x every 2 years (since 1996); 64x size improvement in the last decade
- Static Random Access Memory (SRAM)
  - The choice for caches
  - Much faster than DRAM, but less dense and more costly
- Magnetic disks
  - The choice for secondary memory
  - Non-volatile; slower; relatively large
  - Capacity: 2x every year (since 1997); 250x size in the last decade
- Solid-state (flash) memory
  - The choice for embedded computers

Memory (continued)
- Optical disks
  - Removable, therefore very large capacity
  - Slower than magnetic disks
- Magnetic tape
  - Even slower
  - Sequential (non-random) access
  - The choice for archival storage

DRAM Capacity Growth
Memories quadrupled in capacity every 3 years (up until 1996), a 60% increase per year for 20 years; capacity now doubles every two years (reaching 128 GB at a 0.02 µm feature size in 2017).

Trend: Memory Capacity (approx. 2x every 2 years)
Year   Size (Mbit)
1980   0.0625
1983   0.25
1986   1
1989   4
1992   16
1996   64
1998   128
2000   256
       512
       2G
2010   8G
       16G
2017   128G

Example Machine Organization
Workstation design target: 25% of cost on the processor, 25% of cost on memory (minimum memory size), and the rest on I/O devices, power supplies, and the box.
Any computer, no matter how primitive or advanced, can be divided into five parts:
1. The input devices bring data from the outside world into the computer.
2. The data are kept in the computer's memory until...
3. ...the datapath requests and processes them.
4. The operation of the datapath is controlled by the computer's controller.
5. Getting the data back to the outside world is the job of the output devices; all the work done by the computer does us no good otherwise.
The most common way to connect these five components is a network of busses.

MIPS R3000 Instruction Set Architecture
- Registers: R0-R31, PC, HI, LO
- Instruction categories: load/store, computational, jump and branch, floating point (coprocessor), memory management, special
- 3 instruction formats, all 32 bits wide:
  R-type:  OP | rs | rt | rd | sa | funct
  I-type:  OP | rs | rt | immediate
  J-type:  OP | jump target

Defining Performance Which airplane is the best?

Response Time and Throughput
- Response time: how long it takes to do a task
- Throughput: total work done per unit time, e.g., tasks/transactions/… per hour
How are response time and throughput affected by:
- Replacing the processor with a faster version?
- Adding more processors?
We'll focus on response time for now…

Relative Performance
Define Performance = 1 / Execution Time.
"X is n times faster than Y" means Performance_X / Performance_Y = Execution Time_Y / Execution Time_X = n.
Example: a program takes 10 s on machine A and 15 s on machine B.
Execution Time_B / Execution Time_A = 15 s / 10 s = 1.5, so A is 1.5 times faster than B.

Measuring Execution Time
- Elapsed time: total response time, including all aspects (processing, I/O, OS overhead, idle time); determines system performance
- CPU time: time spent processing a given job; discounts I/O time and other jobs' shares; comprises user CPU time and system CPU time
Different programs are affected differently by CPU and system performance.

CPU Clocking
Operation of digital hardware is governed by a constant-rate clock: data transfer and computation happen within a clock cycle, and state is updated at the cycle boundary.
- Clock frequency (rate): cycles per second (influenced by CPU design), e.g., 4.0 GHz = 4000 MHz = 4.0x10^9 Hz
- Clock period: duration of one clock cycle = 1 / clock rate, e.g., 250 ps = 0.25 ns = 250x10^-12 s

CPU Time (for a particular program)
CPU Time = CPU Clock Cycles x Clock Cycle Time = CPU Clock Cycles / Clock Rate
Performance is improved by:
- Reducing the number of clock cycles (cycle count)
- Increasing the clock rate
The hardware designer must often trade off clock rate against cycle count.
(Clock Frequency = Clock Rate (GHz) = 1 / Clock Period (Cycle Time).)

CPU Time Example
- Computer A: 2 GHz clock, 10 s CPU time
- Designing Computer B: aim for 6 s CPU time; B can use a faster clock, but that causes 1.2x the clock cycles of A
- How fast must Computer B's clock be?

Instruction Count and Cycles Per Instruction (CPI)
Clock Cycles = Instruction Count x CPI, so CPU Time = Instruction Count x CPI x Clock Cycle Time
- Instruction count (per program): determined by the program, the ISA, and the compiler
- CPI (average cycles per instruction): determined by the CPU hardware
- If different instructions have different CPIs, the average CPI is affected by the instruction mix

CPI Example
- Computer A: cycle time = 250 ps, CPI = 2.0
- Computer B: cycle time = 500 ps, CPI = 1.2
- Same ISA (so the same instruction count, IC). Which is faster, and by how much?
CPU Time_A = IC x 2.0 x 250 ps = IC x 500 ps
CPU Time_B = IC x 1.2 x 500 ps = IC x 600 ps
A is faster, by CPU Time_B / CPU Time_A = 600 / 500 = 1.2.

CPI in More Detail
If different instruction classes take different numbers of cycles:
Clock Cycles = sum over classes i of (CPI_i x Instruction Count_i)
Weighted average CPI = Clock Cycles / Instruction Count = sum over i of CPI_i x (IC_i / IC), where IC_i / IC is the relative frequency of class i.

CPI Example
Alternative compiled code sequences using instructions in classes A, B, C:

Class             A  B  C
CPI for class     1  2  3
IC in sequence 1  2  1  2
IC in sequence 2  4  1  1

Sequence 1: IC = 5; clock cycles = 2x1 + 1x2 + 2x3 = 10; avg. CPI = 10/5 = 2.0
Sequence 2: IC = 6; clock cycles = 4x1 + 1x2 + 1x3 = 9; avg. CPI = 9/6 = 1.5

Performance Summary: The BIG Picture
CPU Time = Instruction Count x CPI x Clock Cycle Time
Performance depends on:
- Algorithm: affects IC, possibly CPI
- Programming language: affects IC, CPI
- Compiler: affects IC, CPI
- Instruction set architecture: affects IC, CPI, and clock cycle time

Pitfall: MIPS as a Performance Metric
MIPS (millions of instructions per second) doesn't account for:
- Differences in ISAs between computers
- Differences in complexity between instructions
- CPI varying between programs on a given CPU

Concluding Remarks
- Cost/performance is improving, due to underlying technology development
- Hierarchical layers of abstraction, in both hardware and software
- Instruction set architecture: the hardware/software interface
- Execution time: the best performance measure
- Power is a limiting factor; use parallelism to improve performance