1
Computer Architecture 6001215-3 Lecture 01: Introduction Marwan Al-Namari Alnamari_m@hotmail.com
2
Grading Information
Grade determinants:
- Quiz 1: 5%
- Midterm Exam: 20%
- Quiz 2: 5%
- Final Exam: 50%
- Presentation (+CD): 10%
- Attendance: 10%
3
Text Book
Required: Computer Organization & Design - The Hardware/Software Interface, Patterson & Hennessy, 4th Edition
4
Text Book
5
Course Topics (Tentative)
- Instruction set architecture (Chapter 2): MIPS
- Arithmetic operations & data (Chapter 3)
- System performance (Chapter 4)
- Processor (Chapter 5): datapath and control
- Pipelining to improve performance (Chapter 6)
- Memory hierarchy (Chapter 7)
- I/O (Chapter 8)
6
Focus of the Course
- How computers work
  - The MIPS (Microprocessor without Interlocked Pipeline Stages) instruction set architecture, a RISC (Reduced Instruction Set Computer) ISA
  - The implementation of the MIPS instruction set architecture: MIPS processor design
- Issues affecting modern processors
  - Pipelining: processor performance improvement
  - Cache: memory system, I/O systems
7
Why Learn Computer Architecture?
- You want to call yourself a "computer scientist": computer architecture impacts every other aspect of computer science
- You need to make a purchasing decision or offer "expert" advice
- You want to build software people use and sell many, many copies (you need performance). Both hardware and software affect performance:
  - The algorithm determines the number of source-level statements
  - Language, compiler, and architecture determine the machine instructions (Chapters 2 and 3)
  - Processor and memory determine how fast instructions are executed (Chapters 5, 6, and 7)
  - Assessing and understanding performance (Chapter 4)
8
Computer Systems
- Software
  - Application software: word processors, email, Internet browsers, games
  - Systems software: compilers, operating systems
- Hardware
  - CPU
  - Memory
  - I/O devices (mouse, keyboard, display, disks, networks, ...)
9
Software
10
How Do the Pieces Fit Together?
[Diagram: levels of abstraction, from Application and Operating System down through Compiler, Firmware, Instruction Set Architecture, instruction set processor and I/O system, Datapath & Control and memory system, Digital Design, and Circuit Design.]
- Coordination of many levels of abstraction
- Under a rapidly changing set of forces
- Design, measurement, and evaluation
11
Instruction Set Architecture (slides: D. Barbará)
[Diagram: the instruction set as the interface between software and hardware.]
- One of the most important abstractions is the ISA
- A critical interface between hardware and software
- Example: MIPS
- Desired properties:
  - Convenience (from the software side)
  - Efficiency (from the hardware side)
12
What is Computer Architecture?
- Programmer's view: a pleasant environment
- Operating system's view: a set of resources (hardware & software)
- System architecture view: a set of components
- Compiler's view: an instruction set architecture, with OS help
- Microprocessor architecture view: a set of functional units
- VLSI (Very Large-Scale Integration) designer's view: a set of transistors implementing logic
- Mechanical engineer's view: a heater!
13
What is Computer Architecture?
- Patterson & Hennessy: computer architecture = instruction set architecture + machine organization + hardware
- For this course, computer architecture mainly refers to the ISA (Instruction Set Architecture)
- The ISA is programmer-visible and serves as the boundary between software and hardware
- Modern ISA examples: MIPS, SPARC, PowerPC, DEC Alpha
14
Organization and Hardware
- Organization: the high-level aspects of a computer's design
  - Principal components: memory, CPU, I/O, ...
  - How components are interconnected
  - How information flows between components
  - E.g., AMD Opteron 64 and Intel Pentium 4: same ISA but different organizations
- Hardware: the detailed logic design and packaging technology of a computer
  - E.g., Pentium 4 and Mobile Pentium 4: nearly identical organizations but different hardware details
15
Types of Computers and Their Applications
- Desktop
  - Runs third-party software
  - Office to home applications
  - About 30 years old
- Servers
  - The modern version of what used to be called mainframes, minicomputers, and supercomputers
  - Large workloads
  - Built using the same technology as desktops, but with higher capacity: expandable, scalable, reliable
  - Large spectrum: from low-end (file storage, small businesses) to supercomputers (high-end scientific and engineering applications)
  - Gigabytes to terabytes to petabytes of storage
  - Examples: file servers, web servers, database servers
16
Types of Computers (continued)
- Embedded
  - Microprocessors everywhere! (washing machines, cell phones, automobiles, video games)
  - Run one or a few applications
  - Specialized hardware integrated with the application (not your common processor)
  - Usually stringent limitations (e.g., battery power)
  - Low tolerance for failure (you don't want your airplane avionics to fail!)
  - Becoming ubiquitous
  - Engineered using processor cores
    - The core allows the engineer to integrate other functions into the processor for fabrication on the same chip
    - Using hardware description languages: Verilog, VHDL
17
Where is the Market?
[Chart: millions of computers sold per year, by type.]
18
In this class you will learn
- How programs written in a high-level language (e.g., Java) translate into the language of the hardware, and how the hardware executes them
- The interface between software and hardware, and how software instructs the hardware to perform the needed functions
- The factors that determine the performance of a program
- The techniques that hardware designers employ to improve performance
- As a consequence, you will understand what features may make one computer design better than another for a particular application
19
High-level to Machine Language
[Diagram: a high-level language program (in C) is translated by a compiler into an assembly language program (for MIPS), which an assembler translates into a binary machine language program (for MIPS).]
20
Evolution...
- In the beginning there were only bits, and people spent countless hours trying to program in machine language: 01100011001 011001110100
- Finally, before everybody went insane, the assembler was invented: write in mnemonics called assembly language and let the assembler translate (a one-to-one translation): Add A,B
- This wasn't for everybody, obviously (imagine how modern applications would have been possible in assembly), so high-level languages were born, and with them compilers to translate to assembly (a many-to-one translation): C = A*(SQRT(B)+3.0)
21
THE BIG IDEA Levels of abstraction: each layer provides its own (simplified) view and hides the details of the next.
22
Instruction Set Architecture (ISA)
- ISA: an abstract interface between the hardware and the lowest-level software of a machine that encompasses all the information necessary to write a machine language program that will run correctly, including instructions, registers, memory access, I/O, and so on
- "... the attributes of a [computing] system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation." - Amdahl, Blaauw, and Brooks, 1964
- Enables implementations of varying cost and performance to run identical software
- ABI (application binary interface): the user portion of the instruction set plus the operating system interfaces used by application programmers; defines a standard for binary portability across computers
23
ISA Type Sales
[Approximate bar chart: millions of processors sold per year, by ISA type; see text for correct values.]
24
Organization of a computer
25
Anatomy of a Computer
The five classic components of a personal computer:
- Processor
  - Control ("brain"): guides the operation of the other components based on the program's instructions
  - Datapath ("brawn"): performs the arithmetic operations
- Memory: where programs and data live when running
- Input devices: keyboard, mouse; disk (where programs and data live when not running)
- Output devices: display, printer
26
PC Motherboard Closeup
27
Inside the Pentium 4
28
Moore's Law
- In 1965, Gordon Moore predicted that the number of transistors that can be integrated on a die would double every 18 to 24 months (i.e., grow exponentially with time)
- Amazingly visionary: the million-transistor/chip barrier was crossed in the 1980s
  - 2,300 transistors, 1 MHz clock (Intel 4004) - 1971
  - 16 million transistors (Ultra SPARC III)
  - 42 million transistors, 2 GHz clock (Intel Xeon) - 2001
  - 55 million transistors, 3 GHz, 130 nm technology, 250 mm² die (Intel Pentium 4) - 2004
  - 140 million transistors (HP PA-8500)
29
Processor Performance Increase
[Chart: performance over time for SUN-4/260, MIPS M/120, MIPS M2000, IBM RS6000, HP 9000/750, DEC AXP/500, IBM POWER 100, DEC Alpha 4/266, DEC Alpha 5/300, DEC Alpha 5/500, DEC Alpha 21264/600, DEC Alpha 21264A/667, Intel Xeon/2000, Intel Pentium 4/3000.]
30
Moore's Law Trend: Microprocessor Capacity
[Chart: transistor count per chip over time.]
- CMOS improvements: die size 2x every 3 years; line width halves every 7 years
- Example transistor counts: Itanium II: 241 million; Pentium 4: 55 million; Alpha 21264: 15 million; Alpha 21164: 9.3 million; PowerPC 620: 6.9 million; Pentium Pro: 5.5 million; Sparc Ultra: 5.2 million
31
Moore's Law
- "Cramming More Components onto Integrated Circuits" - Gordon Moore, Electronics, 1965
- The number of transistors per cost-effective integrated circuit doubles every 18 months ("transistor capacity doubles every 18-24 months")
- Speed: 2x every 1.5 years (since '85); 100x performance in the last decade
32
Trend: Microprocessor Performance
33
Memory
- Dynamic Random Access Memory (DRAM)
  - The choice for main memory
  - Volatile (contents go away when power is lost)
  - Fast
  - Relatively small
  - DRAM capacity: 2x every 2 years (since '96); 64x size improvement in the last decade
- Static Random Access Memory (SRAM)
  - The choice for cache
  - Much faster than DRAM, but less dense and more costly
- Magnetic disks
  - The choice for secondary memory
  - Non-volatile
  - Slower
  - Relatively large
  - Capacity: 2x every year (since '97); 250x size in the last decade
- Solid-state (Flash) memory
  - The choice for embedded computers
  - Non-volatile
34
Memory (continued)
- Optical disks
  - Removable, therefore very large
  - Slower than magnetic disks
- Magnetic tape
  - Even slower
  - Sequential (non-random) access
  - The choice for archival storage
35
DRAM Capacity Growth
[Chart: capacity per chip across successive generations - 16K, 64K, 256K, 1M, 4M, 16M, 64M, 128M, 256M, 512M.]
36
Trend: Memory Capacity (growth of capacity per chip)

Year | Size (Mbit)
1980 | 0.0625
1983 | 0.25
1986 | 1
1989 | 4
1992 | 16
1996 | 64
1998 | 128
2000 | 256
2002 | 512
2006 | 2048

Now 1.4x/year, or 2x every 2 years: more than 10,000x since 1980!
37
Dramatic Technology Change
- Prefixes: Kilo, Mega, Giga, Tera, Peta, Exa, Zetta, Yotta (= 10^24). Come up with a clever mnemonic - fame!
- State-of-the-art PC when you graduate (at least...):
  - Processor clock speed: 5000 MegaHertz (5.0 GigaHertz)
  - Memory capacity: 4000 MegaBytes (4.0 GigaBytes)
  - Disk capacity: 2000 GigaBytes (2.0 TeraBytes)
  - New units! Mega => Giga, Giga => Tera
38
Example Machine Organization
- Workstation design target:
  - 25% of cost on the processor
  - 25% of cost on memory (minimum memory size)
  - The rest on I/O devices, power supplies, and the box
[Diagram: computer = CPU (control + datapath) + memory + input/output devices.]
39
Example Machine Organization
TI SuperSPARC TMS390Z50 in a Sun SPARCstation 20
[Block diagram: floating-point unit, integer unit, instruction cache, reference MMU, data cache, store buffer, and bus interface on the SuperSPARC; L2 cache and cache controller on the MBus module; MBus with L64852 MBus control and M-S adapter; SBus with DRAM controller, SBus DMA, SCSI, Ethernet, STDIO (serial, keyboard, mouse, audio), RTC, boot PROM, floppy, and SBus cards.]
40
MIPS R3000 Instruction Set Architecture
- Instruction categories:
  - Load/store
  - Computational
  - Jump and branch
  - Floating point (coprocessor)
  - Memory management
  - Special
- Registers: R0-R31, PC, HI, LO
- 3 instruction formats, all 32 bits wide:
  - op | rs | rt | rd | sa | funct
  - op | rs | rt | immediate
  - op | jump target
41
Defining Performance (§1.4 Performance)
Which airplane has the best performance?
42
Response Time and Throughput
- Response time: how long it takes to do a task
- Throughput: total work done per unit time (e.g., tasks/transactions/... per hour)
- How are response time and throughput affected by:
  - Replacing the processor with a faster version?
  - Adding more processors?
- We'll focus on response time for now...
43
Relative Performance
- Define Performance = 1 / Execution Time
- "X is n times faster than Y" means Performance_X / Performance_Y = Execution Time_Y / Execution Time_X = n
- Example: time taken to run a program is 10 s on A and 15 s on B
  - Execution Time_B / Execution Time_A = 15 s / 10 s = 1.5
  - So A is 1.5 times faster than B
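The 10 s vs. 15 s example can be checked directly:

```python
# Relative performance, as defined on the slide: Performance = 1 / Execution Time.
# The same program takes 10 s on machine A and 15 s on machine B.
time_a, time_b = 10.0, 15.0

perf_a, perf_b = 1 / time_a, 1 / time_b

# perf_a / perf_b simplifies to time_b / time_a.
speedup = time_b / time_a
print(speedup)  # 1.5 -> A is 1.5 times faster than B
```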
44
Measuring Execution Time
- Elapsed time
  - Total response time, including all aspects: processing, I/O, OS overhead, idle time
  - Determines system performance
- CPU time
  - Time spent processing a given job; discounts I/O time and other jobs' shares
  - Comprises user CPU time and system CPU time
  - Different programs are affected differently by CPU and system performance
45
CPU Clocking
- Operation of digital hardware is governed by a constant-rate clock: data transfer and computation happen during a cycle, and state is updated at the clock edge
- Clock period: the duration of a clock cycle
  - e.g., 250 ps = 0.25 ns = 250 x 10^-12 s
- Clock frequency (rate): cycles per second
  - e.g., 4.0 GHz = 4000 MHz = 4.0 x 10^9 Hz
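The slide's two examples are reciprocals of each other, which is easy to verify:

```python
# Clock period and clock rate are reciprocals.
period_ps = 250                 # 250 ps = 250e-12 s
period_s = period_ps * 1e-12
rate_hz = 1 / period_s          # cycles per second
print(round(rate_hz / 1e9, 6))  # 4.0 -> a 250 ps period is a 4.0 GHz clock
```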
46
CPU Time
- CPU Time = Clock Cycles x Clock Period = Clock Cycles / Clock Rate
- Performance is improved by:
  - Reducing the number of clock cycles
  - Increasing the clock rate
- The hardware designer must often trade off clock rate against cycle count
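The two levers on the slide follow from the basic relation, CPU time = clock cycles / clock rate. A tiny sketch (the 20 billion cycle figure here is illustrative, not from the slide):

```python
def cpu_time(clock_cycles, clock_rate_hz):
    # CPU Time = Clock Cycles / Clock Rate (= Clock Cycles x Clock Period)
    return clock_cycles / clock_rate_hz

# Illustrative: 20e9 cycles at 2 GHz take 10 s; halving the cycle count
# or doubling the clock rate each cut that to 5 s.
print(cpu_time(20e9, 2e9))  # 10.0
print(cpu_time(10e9, 2e9))  # 5.0
print(cpu_time(20e9, 4e9))  # 5.0
```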
47
CPU Time Example
- Computer A: 2 GHz clock, 10 s CPU time
- Designing Computer B:
  - Aim for 6 s CPU time
  - Can use a faster clock, but that causes 1.2x as many clock cycles
- How fast must Computer B's clock be?
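A worked solution to the slide's question, as a quick calculation:

```python
# Computer A: 2 GHz clock, 10 s CPU time.
rate_a = 2e9
time_a = 10.0
cycles_a = time_a * rate_a   # 20e9 cycles

# Computer B: target 6 s, but the faster clock costs 1.2x the cycles.
time_b = 6.0
cycles_b = 1.2 * cycles_a    # 24e9 cycles
rate_b = cycles_b / time_b
print(rate_b / 1e9)          # 4.0 -> Computer B needs a 4 GHz clock
```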
48
Instruction Count and CPI
- Instruction count for a program: determined by the program, the ISA, and the compiler
- Average cycles per instruction (CPI): determined by the CPU hardware
  - If different instructions have different CPIs, the average CPI is affected by the instruction mix
49
CPI Example
- Computer A: cycle time = 250 ps, CPI = 2.0
- Computer B: cycle time = 500 ps, CPI = 1.2
- Same ISA. Which is faster, and by how much?
- A is faster: 500 ps per instruction vs. 600 ps on B, so A is 1.2 times faster
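With the same ISA the instruction count cancels, so comparing time per instruction (cycle time x CPI) settles it:

```python
# Time per instruction = Cycle Time x CPI; same ISA means same instruction count.
time_a = 250 * 2.0   # 500 ps per instruction on A
time_b = 500 * 1.2   # 600 ps per instruction on B
print(time_b / time_a)  # 1.2 -> A is 1.2 times faster than B
```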
50
CPI in More Detail
- If different instruction classes take different numbers of cycles:
  Clock Cycles = sum over classes i of (CPI_i x IC_i)
- Weighted average CPI:
  CPI = Clock Cycles / Instruction Count = sum over i of (CPI_i x IC_i / Instruction Count)
  where IC_i / Instruction Count is the relative frequency of class i
51
CPI Example
Alternative compiled code sequences use instructions in classes A, B, and C:

Class            | A | B | C
CPI for class    | 1 | 2 | 3
IC in sequence 1 | 2 | 1 | 2
IC in sequence 2 | 4 | 1 | 1

- Sequence 1: IC = 5
  - Clock cycles = 2x1 + 1x2 + 2x3 = 10
  - Avg. CPI = 10/5 = 2.0
- Sequence 2: IC = 6
  - Clock cycles = 4x1 + 1x2 + 1x3 = 9
  - Avg. CPI = 9/6 = 1.5
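The slide's cycle counts can be reproduced with the weighted-CPI formula:

```python
# CPI per instruction class, from the slide's table.
cpi = {"A": 1, "B": 2, "C": 3}

def clock_cycles(instruction_counts):
    # Clock Cycles = sum over classes of (CPI_i x IC_i)
    return sum(cpi[c] * n for c, n in instruction_counts.items())

seq1 = {"A": 2, "B": 1, "C": 2}   # IC = 5
seq2 = {"A": 4, "B": 1, "C": 1}   # IC = 6

print(clock_cycles(seq1), clock_cycles(seq1) / 5)  # 10 2.0
print(clock_cycles(seq2), clock_cycles(seq2) / 6)  # 9 1.5
```

Sequence 2 executes more instructions yet takes fewer cycles, which is exactly why instruction count alone is a poor performance metric.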
52
Performance Summary
The BIG Picture:
CPU Time = (Instructions / Program) x (Clock Cycles / Instruction) x (Seconds / Clock Cycle)
Performance depends on:
- Algorithm: affects IC, possibly CPI
- Programming language: affects IC, CPI
- Compiler: affects IC, CPI
- Instruction set architecture: affects IC, CPI, Tc
53
Power Trends (§1.5 The Power Wall)
- In CMOS IC technology: Power = Capacitive Load x Voltage^2 x Frequency
- [Chart: over time, clock frequency has grown roughly 1000x and power roughly 30x, while supply voltage dropped from 5 V to 1 V.]
54
Reducing Power
- Suppose a new CPU has:
  - 85% of the capacitive load of the old CPU
  - 15% voltage reduction and 15% frequency reduction
- The power wall:
  - We can't reduce voltage further
  - We can't remove more heat
- How else can we improve performance?
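Using the CMOS dynamic-power relation (power scales with capacitive load, voltage squared, and frequency), the new CPU's power relative to the old works out to roughly half:

```python
# Dynamic power in CMOS ~ Capacitive Load x Voltage^2 x Frequency.
# New CPU: 85% of the load, 15% lower voltage, 15% lower frequency,
# so every factor is scaled by 0.85.
ratio = 0.85 * (0.85 ** 2) * 0.85   # = 0.85^4
print(round(ratio, 2))              # 0.52 -> about half the old power
```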
55
Uniprocessor Performance (§1.6 The Sea Change: The Switch to Multiprocessors)
[Chart: single-processor performance growth over time, now constrained by power, instruction-level parallelism, and memory latency.]
56
Multiprocessors
- Multicore microprocessors: more than one processor per chip
- Require explicitly parallel programming
  - Compare with instruction-level parallelism, where the hardware executes multiple instructions at once, hidden from the programmer
  - Hard to do: programming for performance, load balancing, optimizing communication and synchronization
57
SPEC CPU Benchmark
- Programs used to measure performance, supposedly typical of actual workloads
- Standard Performance Evaluation Corporation (SPEC) develops benchmarks for CPU, I/O, Web, ...
- SPEC CPU2006
  - Elapsed time to execute a selection of programs; negligible I/O, so it focuses on CPU performance
  - Normalized relative to a reference machine
  - Summarized as the geometric mean of the performance ratios
  - CINT2006 (integer) and CFP2006 (floating-point)
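The geometric-mean summary can be sketched as follows; the two ratios used here are made-up illustrative values, not SPEC data:

```python
import math

def geometric_mean(ratios):
    # SPEC summarizes the per-program SPECratios as their geometric mean:
    # the n-th root of the product of n ratios.
    return math.prod(ratios) ** (1 / len(ratios))

# Illustrative: ratios of 2.0 and 8.0 average to 4.0 geometrically
# (an arithmetic mean would give 5.0 and overweight the larger ratio).
print(geometric_mean([2.0, 8.0]))  # 4.0
```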
58
CINT2006 for Opteron X4 2356

Name       | Description                   | IC x 10^9 | CPI   | Tc (ns) | Exec time | Ref time | SPECratio
perl       | Interpreted string processing | 2,118     | 0.75  | 0.40    | 637       | 9,777    | 15.3
bzip2      | Block-sorting compression     | 2,389     | 0.85  | 0.40    | 817       | 9,650    | 11.8
gcc        | GNU C Compiler                | 1,050     | 1.72  | 0.40    | 724       | 8,050    | 11.1
mcf        | Combinatorial optimization    | 336       | 10.00 | 0.40    | 1,345     | 9,120    | 6.8
go         | Go game (AI)                  | 1,658     | 1.09  | 0.40    | 721       | 10,490   | 14.6
hmmer      | Search gene sequence          | 2,783     | 0.80  | 0.40    | 890       | 9,330    | 10.5
sjeng      | Chess game (AI)               | 2,176     | 0.96  | 0.40    | 837       | 12,100   | 14.5
libquantum | Quantum computer simulation   | 1,623     | 1.61  | 0.40    | 1,047     | 20,720   | 19.8
h264avc    | Video compression             | 3,102     | 0.80  | 0.40    | 993       | 22,130   | 22.3
omnetpp    | Discrete event simulation     | 587       | 2.94  | 0.40    | 690       | 6,250    | 9.1
astar      | Games/path finding            | 1,082     | 1.79  | 0.40    | 773       | 7,020    | 9.1
xalancbmk  | XML parsing                   | 1,058     | 2.70  | 0.40    | 1,143     | 6,900    | 6.0

Geometric mean: 11.7
(The programs with high CPI, such as mcf, suffer high cache miss rates.)
59
SPEC Power Benchmark
- Measures the power consumption of a server at different workload levels
  - Performance: ssj_ops/sec
  - Power: Watts (Joules/sec)
60
SPECpower_ssj2008 for X4

Target Load % | Performance (ssj_ops/sec) | Average Power (Watts)
100%          | 231,867                   | 295
90%           | 211,282                   | 286
80%           | 185,803                   | 275
70%           | 163,427                   | 265
60%           | 140,160                   | 256
50%           | 118,324                   | 246
40%           | 92,035                    | 233
30%           | 70,500                    | 222
20%           | 47,126                    | 206
10%           | 23,066                    | 180
0%            | 0                         | 141
Overall sum   | 1,283,590                 | 2,605

∑ssj_ops / ∑power = 493
61
Pitfall: Amdahl's Law (§1.8 Fallacies and Pitfalls)
- Improving one aspect of a computer and expecting a proportional improvement in overall performance
- T_improved = T_affected / improvement factor + T_unaffected
- Example: multiply accounts for 80 s of a 100 s program
  - How much improvement in multiply performance is needed for a 5x overall speedup? A 5x speedup means 20 s total, but the unaffected 20 s alone already takes that long - it can't be done!
- Corollary: make the common case fast
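A quick check of the slide's example (80 s of multiply in a 100 s program) using Amdahl's Law:

```python
# Amdahl's Law: T_new = T_affected / speedup + T_unaffected.
t_affected, t_unaffected = 80.0, 20.0   # multiply time, everything else

def overall_time(multiply_speedup):
    return t_affected / multiply_speedup + t_unaffected

print(overall_time(4))    # 40.0 -> a 4x faster multiply gives only 2.5x overall
print(overall_time(1e9))  # just over 20 s: no multiply speedup can reach 20 s,
                          # so a 5x overall speedup is impossible
```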
62
Fallacy: Low Power at Idle
- Look back at the X4 power benchmark:
  - At 100% load: 295 W
  - At 50% load: 246 W (83%)
  - At 10% load: 180 W (61%)
- Google data centers mostly operate at 10%-50% load, and at 100% load less than 1% of the time
- Consider designing processors to make power proportional to load
63
Pitfall: MIPS as a Performance Metric
- MIPS: Millions of Instructions Per Second
- MIPS = Instruction Count / (Execution Time x 10^6)
- Doesn't account for:
  - Differences in ISAs between computers
  - Differences in complexity between instructions
- CPI varies between programs on a given CPU
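A minimal sketch of why MIPS misleads; the instruction counts and times below are hypothetical, chosen only to illustrate the pitfall:

```python
# MIPS = Instruction Count / (Execution Time x 10^6).
def mips(instruction_count, exec_time_s):
    return instruction_count / (exec_time_s * 1e6)

# Hypothetical: machine A runs the program as 10e9 simple instructions in 10 s;
# machine B's ISA needs only 4e9 (more complex) instructions and finishes in 8 s.
print(mips(10e9, 10.0))  # 1000.0 MIPS
print(mips(4e9, 8.0))    # 500.0 MIPS -> lower MIPS, yet B is faster (8 s < 10 s)
```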
64
Concluding Remarks (§1.9 Concluding Remarks)
- Cost/performance is improving, due to underlying technology development
- Hierarchical layers of abstraction, in both hardware and software
- Instruction set architecture: the hardware/software interface
- Execution time: the best performance measure
- Power is a limiting factor; use parallelism to improve performance