CS 465 Computer Architecture Fall 2009 Lecture 01: Introduction


CS 465 Computer Architecture Fall 2009 Lecture 01: Introduction. Daniel Barbará (cs.gmu.edu/~dbarbara) [Adapted from Computer Organization and Design, Patterson & Hennessy, © 2005, UCB]. Other handouts: course schedule with due dates. To hand out next time: HW#1.

Course Administration Instructor: Daniel Barbará, dbarbara@gmu.edu, 4420 Eng. Bldg. Text (required): Computer Organization & Design: The Hardware/Software Interface, Patterson & Hennessy, 4th edition.

Grading Information Grade determinants: Midterm exam ~25%, Final exam ~35%, Homeworks ~40%. Homeworks are due at the beginning of class (or, if it's code to be submitted electronically, by 17:00 on the due date); no late assignments will be accepted. Course prerequisite: grade of C or better in CS 367. Note: evening midterm exam.

Acknowledgements Slides adopted from Dr. Zhong Contributions from Dr. Setia Slides also adopt materials from many other universities IMPORTANT: Slides are not intended as replacement for the text You spent the money on the book, please read it!

Course Topics (Tentative) Instruction set architecture (Chapter 2) MIPS Arithmetic operations & data (Chapter 3) System performance (Chapter 4) Processor (Chapter 5) Datapath and control Pipelining to improve performance (Chapter 6) Memory hierarchy (Chapter 7) I/O (Chapter 8)

Focus of the Course How computers work MIPS instruction set architecture The implementation of MIPS instruction set architecture – MIPS processor design Issues affecting modern processors Pipelining – processor performance improvement Cache – memory system, I/O systems

Why Learn Computer Architecture? You want to call yourself a "computer scientist": computer architecture impacts every other aspect of computer science. You need to make a purchasing decision or offer "expert" advice. You want to build software people use and sell many, many copies (need performance). Both hardware and software affect performance: the algorithm determines the number of source-level statements; the language/compiler/architecture determine the machine instructions (Chapters 2 and 3); the processor/memory determine how fast instructions are executed (Chapters 5, 6, and 7). Assessing and understanding performance (Chapter 4).

Outline Today Course logistics Computer architectures overview Trends in computer architectures

Computer Systems Software: application software (word processors, email, Internet browsers, games) and systems software (compilers, operating systems). Hardware: CPU, memory, I/O devices (mouse, keyboard, display, disks, networks, ...).

Software [Slide figure: diagram of the layers of software above the hardware; the label text did not survive extraction.]

How Do the Pieces Fit Together? Layers, from top to bottom: Application, Operating System, Compiler, Firmware, Instruction Set Architecture, Instruction Set Processor / Memory system / I/O system, Datapath & Control, Digital Design, Circuit Design. Computer architecture is the coordination of many levels of abstraction, under a rapidly changing set of forces, involving design, measurement, and evaluation.

Instruction Set Architecture One of the most important abstractions is the ISA: a critical interface between HW and SW, sitting between the software above and the hardware below. Example: MIPS. Desired properties: convenience (from the software side) and efficiency (from the hardware side). The ISA is a contract between software and hardware.

What is Computer Architecture Programmer’s view: a pleasant environment Operating system’s view: a set of resources (hw & sw) System architecture view: a set of components Compiler’s view: an instruction set architecture with OS help Microprocessor architecture view: a set of functional units VLSI designer’s view: a set of transistors implementing logic Mechanical engineer’s view: a heater!

What is Computer Architecture Patterson & Hennessy: Computer architecture = Instruction set architecture + Machine organization + Hardware For this course, computer architecture mainly refers to ISA (Instruction Set Architecture) Programmer-visible, serves as the boundary between the software and hardware Modern ISA examples: MIPS, SPARC, PowerPC, DEC Alpha This class will focus on the ISA but not its implementation Analogy: we can talk about the functions of a digital clock (keeping time, displaying the time, setting the alarm) independently from its implementation (quartz crystal, LED displays, plastic buttons)

Organization and Hardware Organization: high-level aspects of a computer’s design Principal components: memory, CPU, I/O, … How components are interconnected How information flows between components E.g. AMD Opteron 64 and Intel Pentium 4: same ISA but different organizations Hardware: detailed logic design and the packaging technology of a computer E.g. Pentium 4 and Mobile Pentium 4: nearly identical organizations but different hardware details

Types of computers and their applications Desktop Run third-party software Office to home applications 30 years old Servers Modern version of what used to be called mainframes, minicomputers and supercomputers Large workloads Built using the same technology in desktops but higher capacity Expandable Scalable Reliable Large spectrum: from low-end (file storage, small businesses) to supercomputers (high end scientific and engineering applications) Gigabytes to Terabytes to Petabytes of storage Examples: file servers, web servers, database servers

Types of computers… Embedded Microprocessors everywhere! (washing machines, cell phones, automobiles, video games) Run one or a few applications Specialized hardware integrated with the application (not your common processor) Usually stringent limitations (battery power) High tolerance for failure (don’t want your airplane avionics to fail!) Becoming ubiquitous Engineered using processor cores The core allows the engineer to integrate other functions into the processor for fabrication on the same chip Using hardware description languages: Verilog, VHDL

Where is the Market? [Chart: computers sold per year, in millions, by category.] For "definitions" of desktop, servers, supercomputers (100's to 1000's of processors, Gbytes to Tbytes of main memory, Tbytes to Pbytes of secondary storage), and embedded systems (cell phones, automobile control, video games, entertainment systems (digital TVs), PDAs, etc.). The computer (IT) industry is responsible for almost 10% of the GNP of the US. The embedded market has shown the strongest growth (40% compounded annual growth compared to only 9% for desktops; where do laptops fit?). This chart does not include the low-end 8-bit and 16-bit embedded processors that are everywhere! This is a good slide to talk about performance metrics other than speed (or see if the students can come up with them), including power, space/volume, memory space, cost, and reliability.

In this class you will learn How programs written in a high-level language (e.g., Java) translate into the language of the hardware and how the hardware executes them. The interface between software and hardware and how software instructs hardware to perform the needed functions. The factors that determine the performance of a program The techniques that hardware designers employ to improve performance. As a consequence, you will understand what features may make one computer design better than another for a particular application

High-level to Machine Language High-level language program (in C) → Compiler → Assembly language program (for MIPS) → Assembler → Binary machine language program (for MIPS)

Evolution… In the beginning there were only bits, and people spent countless hours trying to program in machine language: 01100011001 011001110100. Finally, before everybody went insane, the assembler was invented: write in mnemonics called assembly language and let the assembler translate (a one-to-one translation): Add A,B. This wasn't for everybody, obviously (imagine how modern applications would have been possible in assembly), so high-level languages were born, and with them compilers to translate to assembly (a many-to-one translation): C = A*(SQRT(B)+3.0)
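To make the chain from C to assembly to machine code concrete, here is a minimal sketch in C; the MIPS instructions shown in the comments are only an illustration of what a compiler might emit (the register choices and instruction sequence are assumptions, not taken from the slides).

```c
#include <stdio.h>

int main(void) {
    int a = 5, b = 7;
    /* A compiler translates the statement below into MIPS assembly,
     * roughly (register choices are illustrative):
     *     lw   $t0, a          # load a
     *     lw   $t1, b          # load b
     *     add  $t2, $t0, $t1   # c = a + b
     *     sw   $t2, c          # store c
     * and the assembler then turns each mnemonic into a 32-bit word. */
    int c = a + b;
    printf("%d\n", c);
    return 0;
}
```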

THE BIG IDEA Levels of abstraction: each layer provides its own (simplified) view and hides the details of the level below.

Instruction Set Architecture (ISA) ISA: An abstract interface between the hardware and the lowest level software of a machine that encompasses all the information necessary to write a machine language program that will run correctly, including instructions, registers, memory access, I/O, and so on. “... the attributes of a [computing] system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation.” – Amdahl, Blaauw, and Brooks, 1964 Enables implementations of varying cost and performance to run identical software ABI (application binary interface): The user portion of the instruction set plus the operating system interfaces used by application programmers. Defines a standard for binary portability across computers.

ISA Type Sales [Chart: millions of processors sold per year, by ISA type; a PowerPoint "comic" bar chart with approximate values (see the text for correct values).] Only includes 32- and 64-bit processors. "Other" includes Samsung, HP, AMD, TI, Transmeta (same ISA as IA-32), …

Organization of a computer

Anatomy of a Computer: the 5 classic components. Input (keyboard, mouse), Output (display, printer), Memory (where programs and data live when running), Datapath ("brawn"), and Control ("brain"); the datapath and control together form the processor. Disk is where programs and data live when not running. Datapath: performs the arithmetic operations. Control: guides the operation of the other components based on the user's instructions.

PC Motherboard Closeup Processor chip is hidden under the heat sink DRAM memories are on DIMMS (dual in-line memory modules)

Inside the Pentium 4

Moore's Law In 1965, Gordon Moore predicted that the number of transistors that can be integrated on a die would double every 18 to 24 months (i.e., grow exponentially with time). Amazingly visionary: the million-transistor/chip barrier was crossed in the 1980's. 2,300 transistors, 1 MHz clock (Intel 4004), 1971. 16 million transistors (UltraSPARC III). 42 million transistors, 2 GHz clock (Intel Xeon), 2001. 55 million transistors, 3 GHz, 130nm technology, 250mm^2 die (Intel Pentium 4), 2004. 140 million transistors (HP PA-8500). Tbyte = 2^40 bytes (or about 10^12 bytes). Note that Moore's law is not about speed predictions but about chip complexity.

Processor Performance Increase [Chart: performance vs. year on a logarithmic y-axis; data points are labeled model/clock-in-MHz, from the MIPS M/120, SUN-4/260, MIPS M2000, IBM RS6000, HP 9000/750, and IBM POWER 100 through the DEC Alpha line (AXP/500, 4/266, 5/300, 5/500, 21264/600, 21264A/667) to the Intel Xeon/2000 and Pentium 4/3000.] Another PowerPoint "comic"; note that the y-axis is log scale. The rate of performance improvement has been between 1.5 and 1.6 times per year; how much longer will Moore's Law hold?

Trend: Microprocessor Capacity (Moore's Law) [Chart: transistors per chip vs. year.] Itanium II: 241 million; Pentium 4: 55 million; Alpha 21264: 15 million; Alpha 21164: 9.3 million; PowerPC 620: 6.9 million; Pentium Pro: 5.5 million; Sparc Ultra: 5.2 million. CMOS improvements: die size 2X every 3 years; line width halves every 7 years. Amazingly visionary.

Moore's Law "Cramming More Components onto Integrated Circuits", Gordon Moore, Electronics, 1965: the number of transistors per cost-effective integrated circuit doubles every 18 months ("transistor capacity doubles every 18-24 months"). Speed: 2x every 1.5 years (since '85); 100X performance in the last decade.

Trend: Microprocessor Performance

Memory Dynamic Random Access Memory (DRAM) The choice for main memory Volatile (contents go away when power is lost) Fast Relatively small DRAM capacity: 2x / 2 years (since ‘96); 64x size improvement in last decade Static Random Access Memory (SRAM) The choice for cache Much faster than DRAM, but less dense and more costly Magnetic disks The choice for secondary memory Non-volatile Slower Relatively large Capacity: 2x / 1 year (since ‘97) 250X size in last decade Solid state (Flash) memory The choice for embedded computers

Memory Optical disks: removable, therefore very large; slower than disks. Magnetic tape: even slower; sequential (non-random) access; the choice for archival storage.

DRAM Capacity Growth [Chart: DRAM chip capacity vs. year: 16K, 64K, 256K, 1M, 4M, 16M, 64M, 128M, 256M, 512M.] Memories quadrupled in capacity every 3 years (up until 1996), a 60% increase per year for 20 years; capacity is now doubling every two years.

Trend: Memory Capacity Growth of DRAM capacity per chip:

Year    Size (Mbit)
1980    0.0625
1983    0.25
1986    1
1989    4
1992    16
1996    64
1998    128
2000    256
        512
2006    2048

Now 1.4X/yr, or 2X every 2 years; more than 10,000X since 1980!

Dramatic Technology Change State-of-the-art PC when you graduate (at least…): processor clock speed 5000 MegaHertz (5.0 GigaHertz); memory capacity 4000 MegaBytes (4.0 GigaBytes); disk capacity 2000 GigaBytes (2.0 TeraBytes). New units! Mega => Giga, Giga => Tera (Kilo, Mega, Giga, Tera, Peta, Exa, Zetta, Yotta; each step is a factor of 1024). Come up with a clever mnemonic and win fame!

Example Machine Organization Workstation design target: 25% of cost on the processor, 25% of cost on memory (minimum memory size), and the rest on I/O devices, power supplies, and the box. [Figure: Computer = Processor (Control + Datapath) + Memory + Devices (Input, Output).] That is, any computer, no matter how primitive or advanced, can be divided into five parts: 1. the input devices bring data from the outside world into the computer; 2. these data are kept in the computer's memory until 3. the datapath requests and processes them; 4. the operation of the datapath is controlled by the computer's controller; and, since all the work done by the computer will not do us any good unless we can get the data back to the outside world, 5. getting the data back out is the job of the output devices. The most common way to connect these five components is a network of busses.

Example Machine Organization TI SuperSPARC TMS390Z50 in a Sun SPARCstation 20. [Block diagram: an MBus module holding the SuperSPARC (integer unit, floating-point unit, instruction cache, data cache, store buffer, reference MMU), an L2 cache controller (L2 $ CC), a DRAM controller, and the bus interface on the MBus; an L64852 MBus control chip and M-S adapter bridge to the SBus, which connects SBus cards, SBus DMA, SCSI, Ethernet, STDIO/serial, keyboard, mouse, audio, RTC, boot PROM, and floppy.]

MIPS R3000 Instruction Set Architecture Instruction categories: Load/Store, Computational, Jump and Branch, Floating Point (coprocessor), Memory Management, Special. Registers: R0 - R31, PC, HI, LO. 3 instruction formats, all 32 bits wide: OP | rs | rt | rd | sa | funct (R-type); OP | rs | rt | immediate (I-type); OP | jump target (J-type).
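As a concrete illustration of the fixed 32-bit formats, the sketch below decodes the R-type fields of one instruction word in C. The field widths (6-5-5-5-5-6) are the standard MIPS ones; the particular example word, which encodes add $t2, $t0, $t1, is chosen only for illustration.

```c
#include <stdio.h>
#include <stdint.h>

/* The three MIPS instruction formats are all 32 bits wide:
 *   R-type: OP(6) rs(5) rt(5) rd(5) sa(5) funct(6)
 *   I-type: OP(6) rs(5) rt(5) immediate(16)
 *   J-type: OP(6) jump target(26)
 * This sketch pulls the R-type fields out of one example word. */
int main(void) {
    uint32_t word = 0x01095020u;            /* encodes add $t2, $t0, $t1 */
    unsigned op    = (word >> 26) & 0x3Fu;  /* bits 31..26 */
    unsigned rs    = (word >> 21) & 0x1Fu;  /* bits 25..21 */
    unsigned rt    = (word >> 16) & 0x1Fu;  /* bits 20..16 */
    unsigned rd    = (word >> 11) & 0x1Fu;  /* bits 15..11 */
    unsigned sa    = (word >>  6) & 0x1Fu;  /* bits 10..6  */
    unsigned funct =  word        & 0x3Fu;  /* bits 5..0   */
    printf("op=%u rs=%u rt=%u rd=%u sa=%u funct=0x%x\n",
           op, rs, rt, rd, sa, funct);      /* prints op=0 ... funct=0x20 */
    return 0;
}
```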

Defining Performance Which airplane has the best performance?

Response Time and Throughput Response time: how long it takes to do a task. Throughput: total work done per unit time, e.g., tasks/transactions/… per hour. How are response time and throughput affected by replacing the processor with a faster version? By adding more processors? We'll focus on response time for now…

Relative Performance Define Performance = 1/Execution Time. "X is n times faster than Y" (the relation is written out below). Example: time taken to run a program is 10s on A and 15s on B. Execution Time_B / Execution Time_A = 15s / 10s = 1.5, so A is 1.5 times faster than B.
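Written out, the relation the example uses is:

\[
\frac{\text{Performance}_X}{\text{Performance}_Y}
  = \frac{\text{Execution time}_Y}{\text{Execution time}_X} = n,
\qquad\text{here } \frac{\text{Performance}_A}{\text{Performance}_B}
  = \frac{15\,\text{s}}{10\,\text{s}} = 1.5 .
\]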

Measuring Execution Time Elapsed time Total response time, including all aspects Processing, I/O, OS overhead, idle time Determines system performance CPU time Time spent processing a given job Discounts I/O time, other jobs’ shares Comprises user CPU time and system CPU time Different programs are affected differently by CPU and system performance

CPU Clocking Operation of digital hardware is governed by a constant-rate clock. [Figure: clock waveform; data transfer and computation happen during a clock cycle, state is updated at the end of the cycle, and the clock period is the length of one cycle.] Clock period: duration of a clock cycle, e.g., 250ps = 0.25ns = 250×10^-12 s. Clock frequency (rate): cycles per second, e.g., 4.0GHz = 4000MHz = 4.0×10^9 Hz.
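Period and rate are reciprocals, which ties the slide's two examples together:

\[
\text{Clock rate} = \frac{1}{\text{Clock period}},
\qquad
\frac{1}{250\,\text{ps}} = \frac{1}{250\times 10^{-12}\,\text{s}} = 4.0\,\text{GHz}.
\]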

CPU Time Performance improved by Reducing number of clock cycles Increasing clock rate Hardware designer must often trade off clock rate against cycle count
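The relation the slide relies on, from the chapter:

\[
\text{CPU Time} = \text{CPU Clock Cycles} \times \text{Clock Cycle Time}
              = \frac{\text{CPU Clock Cycles}}{\text{Clock Rate}}
\]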

CPU Time Example Computer A: 2GHz clock, 10s CPU time Designing Computer B Aim for 6s CPU time Can do faster clock, but causes 1.2 × clock cycles How fast must Computer B clock be?
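Working the example through with the relation above:

\[
\text{Clock Cycles}_A = \text{CPU Time}_A \times \text{Clock Rate}_A
                      = 10\,\text{s} \times 2\,\text{GHz} = 20 \times 10^{9}
\]
\[
\text{Clock Rate}_B = \frac{\text{Clock Cycles}_B}{\text{CPU Time}_B}
                    = \frac{1.2 \times 20 \times 10^{9}}{6\,\text{s}}
                    = \frac{24 \times 10^{9}}{6\,\text{s}} = 4\,\text{GHz}
\]

So Computer B needs a 4 GHz clock, twice A's clock rate.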

Instruction Count and CPI Instruction Count for a program Determined by program, ISA and compiler Average cycles per instruction Determined by CPU hardware If different instructions have different CPI Average CPI affected by instruction mix
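Putting instruction count and CPI into the CPU time equation:

\[
\text{Clock Cycles} = \text{Instruction Count} \times \text{CPI},
\qquad
\text{CPU Time} = \text{IC} \times \text{CPI} \times \text{Clock Cycle Time}
              = \frac{\text{IC} \times \text{CPI}}{\text{Clock Rate}}
\]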

CPI Example Computer A: Cycle Time = 250ps, CPI = 2.0. Computer B: Cycle Time = 500ps, CPI = 1.2. Same ISA. Which is faster, and by how much? (The worked comparison is below.)
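With the same instruction count I on both machines (same ISA, same program):

\[
\text{CPU Time}_A = I \times 2.0 \times 250\,\text{ps} = 500 \cdot I\ \text{ps},
\qquad
\text{CPU Time}_B = I \times 1.2 \times 500\,\text{ps} = 600 \cdot I\ \text{ps},
\]
\[
\frac{\text{CPU Time}_B}{\text{CPU Time}_A} = \frac{600 \cdot I}{500 \cdot I} = 1.2,
\]

so A is 1.2 times faster than B.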

CPI in More Detail If different instruction classes take different numbers of cycles, the overall CPI is a weighted average of the per-class CPIs, weighted by each class's relative frequency (see the formula below).
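The weighted-average form, with IC_i the instruction count for class i and IC the total:

\[
\text{Clock Cycles} = \sum_{i=1}^{n} \text{CPI}_i \times \text{IC}_i,
\qquad
\text{CPI} = \frac{\text{Clock Cycles}}{\text{IC}}
           = \sum_{i=1}^{n} \left(\text{CPI}_i \times \frac{\text{IC}_i}{\text{IC}}\right)
\]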

CPI Example Alternative compiled code sequences using instructions in classes A, B, C:

Class              A   B   C
CPI for class      1   2   3
IC in sequence 1   2   1   2
IC in sequence 2   4   1   1

Sequence 1: IC = 5; Clock Cycles = 2×1 + 1×2 + 2×3 = 10; Avg. CPI = 10/5 = 2.0. Sequence 2: IC = 6; Clock Cycles = 4×1 + 1×2 + 1×3 = 9; Avg. CPI = 9/6 = 1.5.

Performance Summary The BIG Picture Performance depends on Algorithm: affects IC, possibly CPI Programming language: affects IC, CPI Compiler: affects IC, CPI Instruction set architecture: affects IC, CPI, Tc
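These factors combine in the classic CPU performance equation:

\[
\text{CPU Time} = \frac{\text{Instructions}}{\text{Program}}
   \times \frac{\text{Clock cycles}}{\text{Instruction}}
   \times \frac{\text{Seconds}}{\text{Clock cycle}}
\]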

Power Trends (§1.5 The Power Wall) In CMOS IC technology. [Chart: clock rate and power for processors over time; annotations on the chart: ×1000, ×30, and a supply-voltage drop from 5V → 1V.]
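The dynamic power relation for CMOS behind the chart:

\[
\text{Power} = \text{Capacitive load} \times \text{Voltage}^{2} \times \text{Frequency}
\]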

Reducing Power Suppose a new CPU has 85% of the capacitive load of the old CPU, a 15% voltage reduction, and a 15% frequency reduction (the resulting power ratio is worked below). The power wall: we can't reduce voltage further and we can't remove more heat. How else can we improve performance?
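Applying the CMOS power relation to the proposed CPU:

\[
\frac{P_{\text{new}}}{P_{\text{old}}}
 = \frac{(C \times 0.85)\,(V \times 0.85)^{2}\,(F \times 0.85)}{C\,V^{2}\,F}
 = 0.85^{4} \approx 0.52
\]

So the new CPU dissipates roughly half the power of the old one.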

Uniprocessor Performance (§1.6 The Sea Change: The Switch to Multiprocessors) [Chart: uniprocessor performance over time.] Constrained by power, instruction-level parallelism, and memory latency.

Multiprocessors Multicore microprocessors: more than one processor per chip. Requires explicitly parallel programming. Compare with instruction-level parallelism, where the hardware executes multiple instructions at once, hidden from the programmer. Parallel programming is hard to do: programming for performance, load balancing, optimizing communication and synchronization.

SPEC CPU Benchmark Programs used to measure performance Supposedly typical of actual workload Standard Performance Evaluation Corp (SPEC) Develops benchmarks for CPU, I/O, Web, … SPEC CPU2006 Elapsed time to execute a selection of programs Negligible I/O, so focuses on CPU performance Normalize relative to reference machine Summarize as geometric mean of performance ratios CINT2006 (integer) and CFP2006 (floating-point)
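A minimal sketch of how the geometric-mean summary is computed, using a few SPECratios from the table on the next slide as sample inputs (the array contents are just an illustration); compile with -lm for the math library.

```c
#include <stdio.h>
#include <math.h>

/* Geometric mean of n performance ratios: (r1 * r2 * ... * rn)^(1/n).
 * Summing logarithms avoids overflow for long benchmark lists. */
int main(void) {
    double ratios[] = {15.3, 11.8, 11.1, 6.8};   /* sample SPECratios */
    int n = sizeof(ratios) / sizeof(ratios[0]);
    double log_sum = 0.0;
    for (int i = 0; i < n; i++)
        log_sum += log(ratios[i]);
    printf("geometric mean = %.1f\n", exp(log_sum / n));
    return 0;
}
```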

CINT2006 for Opteron X4 2356:

Name        Description                    IC×10^9   CPI     Tc (ns)  Exec time  Ref time  SPECratio
perl        Interpreted string processing  2,118     0.75    0.40     637        9,777     15.3
bzip2       Block-sorting compression      2,389     0.85             817        9,650     11.8
gcc         GNU C Compiler                 1,050     1.72    0.40     724        8,050     11.1
mcf         Combinatorial optimization     336       10.00            1,345      9,120     6.8
go          Go game (AI)                   1,658     1.09             721        10,490    14.6
hmmer       Search gene sequence           2,783     0.80             890        9,330     10.5
sjeng       Chess game (AI)                2,176     0.96    0.40     837        12,100    14.5
libquantum  Quantum computer simulation    1,623     1.61             1,047      20,720    19.8
h264avc     Video compression              3,102                      993        22,130    22.3
omnetpp     Discrete event simulation      587       2.94             690        6,250     9.1
astar       Games/path finding             1,082     1.79             773        7,020     9.1
xalancbmk   XML parsing                    1,058     2.70             1,143      6,900     6.0
Geometric mean                                                                             11.7

The benchmarks with the largest CPIs, such as mcf and omnetpp, have high cache miss rates.

SPEC Power Benchmark Power consumption of server at different workload levels Performance: ssj_ops/sec Power: Watts (Joules/sec)

SPECpower_ssj2008 for X4:

Target Load %    Performance (ssj_ops/sec)   Average Power (Watts)
100%             231,867                     295
90%              211,282                     286
80%              185,803                     275
70%              163,427                     265
60%              140,160                     256
50%              118,324                     246
40%              92,035                      233
30%              70,500                      222
20%              47,126                      206
10%              23,066                      180
0%               0                           141
Overall sum      1,283,590                   2,605

∑ssj_ops / ∑power = 493

Pitfall: Amdahl's Law (§1.8 Fallacies and Pitfalls) Improving an aspect of a computer and expecting a proportional improvement in overall performance. Example: multiply accounts for 80s out of a 100s total. How much improvement in multiply performance is needed to get a 5× overall speedup? It can't be done! (See the worked equation below.) Corollary: make the common case fast.
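The arithmetic behind "can't be done", using Amdahl's Law:

\[
T_{\text{improved}} = \frac{T_{\text{affected}}}{\text{improvement factor}} + T_{\text{unaffected}},
\qquad
\frac{100\,\text{s}}{5} = 20\,\text{s} = \frac{80\,\text{s}}{n} + 20\,\text{s}
\;\Longrightarrow\; \frac{80\,\text{s}}{n} = 0,
\]

which no finite improvement factor n can satisfy.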

Fallacy: Low Power at Idle Look back at X4 power benchmark At 100% load: 295W At 50% load: 246W (83%) At 10% load: 180W (61%) Google data center Mostly operates at 10% – 50% load At 100% load less than 1% of the time Consider designing processors to make power proportional to load

Pitfall: MIPS as a Performance Metric MIPS: Millions of Instructions Per Second Doesn’t account for Differences in ISAs between computers Differences in complexity between instructions CPI varies between programs on a given CPU
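For reference, the definition of the metric being criticized, and its relation to CPI:

\[
\text{MIPS} = \frac{\text{Instruction count}}{\text{Execution time} \times 10^{6}}
            = \frac{\text{Clock rate}}{\text{CPI} \times 10^{6}}
\]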

Concluding Remarks (§1.9) Cost/performance is improving, due to underlying technology development. Hierarchical layers of abstraction, in both hardware and software. Instruction set architecture: the hardware/software interface. Execution time: the best performance measure. Power is a limiting factor; use parallelism to improve performance.