Performance
COE 301 Computer Organization
ICS 233 Computer Architecture and Assembly Language
Dr. Marwan Abu-Amara
College of Computer Sciences and Engineering
King Fahd University of Petroleum and Minerals
[Adapted from slides of Dr. M. Mudawar and Dr. A. El-Maleh, KFUPM]
Outline
Response Time and Throughput
Performance and Execution Time
Clock Cycles Per Instruction (CPI)
Single- vs. Multi-cycle CPU Performance
Amdahl's Law
Benchmarks
What is Performance?
How can we make intelligent choices about computers?
Why does some computer hardware perform better on some programs but worse on others?
How do we measure the performance of a computer?
What factors are hardware related? What factors are software related?
How does a machine's instruction set affect performance?
Understanding performance is key to understanding the underlying organizational motivation
Response Time and Throughput
Response Time: the time between the start and completion of a task, as observed by the end user
Response Time = CPU Time + Waiting Time (I/O, OS scheduling, etc.)
Throughput: the number of tasks the machine can run in a given period of time
Decreasing execution time improves throughput
Example: using a faster version of a processor
Less time to run a task means more tasks can be executed
Increasing throughput can also improve response time
Example: increasing the number of processors in a multiprocessor
More tasks can be executed in parallel
The execution time of individual sequential tasks is not changed
But less waiting time in the scheduling queue reduces response time
Book’s Definition of Performance
For some program running on machine X:
Performance_X = 1 / Execution Time_X
"X is n times faster than Y" means:
Performance_X / Performance_Y = Execution Time_Y / Execution Time_X = n
What do we mean by Execution Time?
Real Elapsed Time
Counts everything: waiting time, input/output, disk access, OS scheduling, etc.
A useful number, but often not good for comparison purposes
Our focus: CPU Execution Time
Time spent executing the program instructions
Doesn't count the waiting time for I/O or OS scheduling
Can be measured in seconds, or related to the number of CPU clock cycles
Clock Cycles
Clock cycle = Clock period = 1 / Clock rate
Clock rate = Clock frequency = cycles per second
1 Hz = 1 cycle/sec, 1 KHz = 10^3 cycles/sec, 1 MHz = 10^6 cycles/sec, 1 GHz = 10^9 cycles/sec
A 2 GHz clock has a cycle time = 1/(2×10^9) = 0.5 nanosecond (ns)
We often use clock cycles to report CPU execution time:
CPU Execution Time = CPU cycles × cycle time = CPU cycles / Clock rate
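A quick sanity check of these relations in Python (a sketch; the 4 billion cycle count is made up):

```python
# Sketch of the clock-cycle relations above (program cycle count is made up).
clock_rate = 2e9                          # 2 GHz, in cycles per second
cycle_time = 1 / clock_rate               # seconds per cycle
print(cycle_time * 1e9)                   # 0.5 ns, matching the slide

cpu_cycles = 4e9                          # assumed cycle count for some program
print(cpu_cycles * cycle_time)            # CPU cycles x cycle time = 2.0 s
print(cpu_cycles / clock_rate)            # equivalently, CPU cycles / clock rate
```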
Improving Performance
To improve performance, we need to
Reduce the number of clock cycles required by a program, or
Reduce the clock cycle time (increase the clock rate)
Example: A program runs in 10 seconds on computer X with a 2 GHz clock
What is the number of CPU cycles on computer X?
We want to design computer Y to run the same program in 6 seconds
But computer Y requires 10% more cycles to execute the program
What is the clock rate for computer Y?
Solution:
CPU cycles on computer X = 10 sec × 2×10^9 cycles/sec = 20×10^9 cycles
CPU cycles on computer Y = 1.1 × 20×10^9 = 22×10^9 cycles
Clock rate for computer Y = 22×10^9 cycles / 6 sec = 3.67 GHz
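The worked solution can be checked with a few lines of Python; this sketch just mirrors the arithmetic above:

```python
# Reproduce the computer X / computer Y example above.
time_x, rate_x = 10, 2e9              # program: 10 s on X at 2 GHz
cycles_x = time_x * rate_x            # 20e9 cycles on X
cycles_y = 1.1 * cycles_x             # Y needs 10% more cycles
rate_y = cycles_y / 6                 # Y must finish in 6 seconds
print(rate_y / 1e9)                   # ~3.67 GHz
```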
Single- vs. Multi-cycle CPU
Drawbacks of a single-cycle processor: long cycle time; all instructions take as much time as the slowest
Alternative solution: multicycle implementation
Break down instruction execution into multiple cycles
Steps per instruction class:
ALU: Instruction Fetch, Reg Read, ALU, Reg Write
Load: Instruction Fetch, Reg Read, ALU, Memory Read, Reg Write (longest delay)
Store: Instruction Fetch, Reg Read, ALU, Memory Write
Branch: Instruction Fetch, Reg Read, ALU
Jump: Instruction Fetch, Decode
Single- vs. Multi-cycle CPU
Break instruction execution into five steps:
1. Instruction fetch
2. Instruction decode and register read
3. Execution, memory address calculation, or branch completion
4. Memory access or ALU instruction completion
5. Load instruction completion
One step = one clock cycle (the clock cycle is reduced)
The first 2 steps are the same for all instructions
Cycles per instruction class: ALU & Store = 4, Branch = 3, Load = 5, Jump = 2
Clock Cycles Per Instruction (CPI)
In multi-cycle processors, instructions take different numbers of cycles to execute
Multiplication takes more time than addition
Floating-point operations take longer than integer ones
Accessing memory takes more time than accessing registers
CPI is the average number of clock cycles per instruction
Important point: changing the cycle time often changes the number of cycles required for various instructions (more later)
Example (timing diagram): 7 instructions (I1 to I7) complete in 14 cycles, so CPI = 14/7 = 2
Performance Equation To execute, a given program will require …
Some number of machine instructions
Some number of clock cycles
Some number of seconds
We can relate CPU clock cycles to the instruction count
Performance Equation (related to instruction count):
CPU cycles = Instruction Count × CPI
CPU Time = Instruction Count × CPI × cycle time
Factors Impacting Performance
Time = Instruction Count × CPI × cycle time
The instruction count (I-Count) is affected by the program, the compiler, and the ISA; the CPI is affected by the program, compiler, ISA, and processor organization; the cycle time is affected by the organization and the implementation technology.
Using the Performance Equation
Suppose we have two implementations of the same ISA
For a given program:
Machine A has a clock cycle time of 250 ps and a CPI of 2.2
Machine B has a clock cycle time of 500 ps and a CPI of 1.0
Which machine is faster for this program, and by how much?
Solution:
Both computers execute the same count of instructions = I
CPU execution time (A) = I × 2.2 × 250 ps = 550 × I ps
CPU execution time (B) = I × 1.0 × 500 ps = 500 × I ps
Computer B is faster than A by a factor = (550 × I) / (500 × I) = 1.1
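A sketch of this comparison; since the instruction count I cancels out, any positive value (here 1) gives the same ratio:

```python
# Compare machines A and B from the example; the instruction count I cancels, so use I = 1.
def cpu_time_ps(instr_count, cpi, cycle_time_ps):
    """CPU time in picoseconds = instruction count x CPI x cycle time."""
    return instr_count * cpi * cycle_time_ps

time_a = cpu_time_ps(1, 2.2, 250)     # 550 * I ps
time_b = cpu_time_ps(1, 1.0, 500)     # 500 * I ps
print(time_a / time_b)                # 1.1, so B is 1.1x faster for this program
```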
Determining the CPI
Different types of instructions have different CPIs
Let CPI_i = clocks per instruction for class i of instructions
Let C_i = instruction count for class i of instructions
CPU cycles = Σ (i = 1 to n) CPI_i × C_i
CPI = [Σ (i = 1 to n) CPI_i × C_i] / [Σ (i = 1 to n) C_i]
Designers often obtain CPI by detailed simulation
Hardware counters are also used for operational CPUs
Example on Determining the CPI
Problem:
A compiler designer is trying to decide between two code sequences for a particular machine. Based on the hardware implementation, there are three different classes of instructions: class A, class B, and class C, and they require one, two, and three cycles per instruction, respectively.
The first code sequence has 5 instructions: 2 of A, 1 of B, and 2 of C
The second sequence has 6 instructions: 4 of A, 1 of B, and 1 of C
Compute the CPU cycles for each sequence. Which sequence is faster? What is the CPI for each sequence?
Solution:
CPU cycles (1st sequence) = (2×1) + (1×2) + (2×3) = 2 + 2 + 6 = 10 cycles
CPU cycles (2nd sequence) = (4×1) + (1×2) + (1×3) = 4 + 2 + 3 = 9 cycles
The second sequence is faster, even though it executes one extra instruction
CPI (1st sequence) = 10/5 = 2
CPI (2nd sequence) = 9/6 = 1.5
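The per-class sums above are easy to tabulate; the helper below is only an illustrative sketch:

```python
# Cycles per instruction for each class: A = 1, B = 2, C = 3.
CPI = {"A": 1, "B": 2, "C": 3}

def cycles_and_cpi(counts):
    """Return (total CPU cycles, average CPI) for a dict of per-class instruction counts."""
    cycles = sum(CPI[cls] * n for cls, n in counts.items())
    return cycles, cycles / sum(counts.values())

print(cycles_and_cpi({"A": 2, "B": 1, "C": 2}))   # (10, 2.0)  -> first sequence
print(cycles_and_cpi({"A": 4, "B": 1, "C": 1}))   # (9, 1.5)   -> second sequence
```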
Second Example on CPI
Given: the instruction mix of a program on a RISC processor
What is the average CPI? What is the percentage of time used by each instruction class?
Class    Freq   CPI   CPI × Freq      % Time
ALU      50%    1     0.5×1 = 0.5     0.5/2.2 = 23%
Load     20%    5     0.2×5 = 1.0     1.0/2.2 = 45%
Store    10%    3     0.1×3 = 0.3     0.3/2.2 = 14%
Branch   20%    2     0.2×2 = 0.4     0.4/2.2 = 18%
Average CPI = 0.5 + 1.0 + 0.3 + 0.4 = 2.2
How much faster would the machine be if the load time were 2 cycles? What if two ALU instructions could be executed at once?
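The same table can be computed directly. The sketch below also answers the two follow-up questions, under the assumption that the cycle time is unchanged and that issuing two ALU instructions at once halves the effective ALU CPI:

```python
# Frequency and CPI per instruction class, from the table above.
mix = {"ALU": (0.50, 1), "Load": (0.20, 5), "Store": (0.10, 3), "Branch": (0.20, 2)}

avg_cpi = sum(f * cpi for f, cpi in mix.values())
print(avg_cpi)                                 # 2.2
for name, (f, cpi) in mix.items():
    print(name, round(f * cpi / avg_cpi, 2))   # fraction of execution time per class

# Follow-up 1: loads take 2 cycles instead of 5 (cycle time unchanged).
cpi_fast_load = avg_cpi - 0.20 * 5 + 0.20 * 2
print(avg_cpi / cpi_fast_load)                 # ~1.375x faster
# Follow-up 2: two ALU instructions per cycle, i.e. effective ALU CPI of 0.5.
cpi_dual_alu = avg_cpi - 0.50 * 1 + 0.50 * 0.5
print(avg_cpi / cpi_dual_alu)                  # ~1.13x faster
```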
Single- vs. Multi-cycle Performance
Assume the following operation times for components:
Instruction and data memories: 200 ps
ALU and adders: 180 ps
Decode and register file access (read or write): 150 ps
Ignore the delays in the PC, mux, extender, and wires
Which of the following would be faster, and by how much?
A single-cycle implementation for all instructions
A multicycle implementation optimized for every class of instructions
Assume the following instruction mix: 40% ALU, 20% loads, 10% stores, 20% branches, and 10% jumps
Single- vs. Multi-cycle Performance
Delay per instruction class (single-cycle):
ALU: 200 (instr. memory) + 150 (reg read) + 180 (ALU) + 150 (reg write) = 680 ps
Load: 200 + 150 + 180 + 200 (memory read) + 150 = 880 ps
Store: 200 + 150 + 180 + 200 (memory write) = 730 ps
Branch: 200 + 150 + 180 = 530 ps
Jump: 200 + 150 (decode and update PC) = 350 ps
For the fixed single-cycle implementation:
Clock cycle = 880 ps, determined by the longest delay (load instruction)
For the multicycle implementation:
Clock cycle = max(200, 150, 180) = 200 ps (maximum delay at any step)
Average CPI = 0.4×4 + 0.2×5 + 0.1×4 + 0.2×3 + 0.1×2 = 3.8
Speedup = 880 ps / (3.8 × 200 ps) = 880 / 760 = 1.16
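A sketch that reproduces this single- vs. multi-cycle comparison from the component delays, per-class cycle counts, and instruction mix given above:

```python
# Component delays (ps), per-class cycle counts, and the instruction mix from the slides.
mem, reg, alu = 200, 150, 180
single_delay = {"ALU": mem + reg + alu + reg,          # 680 ps
                "Load": mem + reg + alu + mem + reg,   # 880 ps
                "Store": mem + reg + alu + mem,        # 730 ps
                "Branch": mem + reg + alu,             # 530 ps
                "Jump": mem + reg}                     # 350 ps
mix    = {"ALU": 0.4, "Load": 0.2, "Store": 0.1, "Branch": 0.2, "Jump": 0.1}
cycles = {"ALU": 4,   "Load": 5,   "Store": 4,   "Branch": 3,   "Jump": 2}

single_clock = max(single_delay.values())              # 880 ps, set by the load
multi_clock  = max(mem, reg, alu)                      # 200 ps, longest single step
avg_cpi      = sum(mix[c] * cycles[c] for c in mix)    # 3.8
print(single_clock / (avg_cpi * multi_clock))          # ~1.16 speedup for multicycle
```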
MIPS as a Performance Measure
MIPS: Millions of Instructions Per Second
Sometimes used as a performance metric: a faster machine has a larger MIPS rating
MIPS specifies the instruction execution rate:
MIPS = Instruction Count / (Execution Time × 10^6) = Clock Rate / (CPI × 10^6)
We can also relate execution time to MIPS:
Execution Time = (Instruction Count × CPI) / Clock Rate = Instruction Count / (MIPS × 10^6)
Book page 61 has an example showing that a machine with a bigger MIPS rating can perform worse than a machine with a smaller MIPS rating
Drawbacks of MIPS
Three problems with using MIPS as a performance metric:
Does not take into account the capability of instructions Cannot use MIPS to compare computers with different instruction sets because the instruction count will differ MIPS varies between programs on the same computer A computer cannot have a single MIPS rating for all programs MIPS can vary inversely with performance A higher MIPS rating does not always mean better performance Example in next slide shows this anomalous behavior
MIPS example Two different compilers are being tested on the same program for a 4 GHz machine with three different classes of instructions: Class A, Class B, and Class C, which require 1, 2, and 3 cycles, respectively. The instruction count produced by the first compiler is 5 billion Class A instructions, 1 billion Class B instructions, and 1 billion Class C instructions. The second compiler produces 10 billion Class A instructions, 1 billion Class B instructions, and 1 billion Class C instructions. Which compiler produces a higher MIPS? Which compiler produces a better execution time?
Solution to MIPS Example
First, we find the CPU cycles for both compilers:
CPU cycles (compiler 1) = (5×1 + 1×2 + 1×3) × 10^9 = 10×10^9
CPU cycles (compiler 2) = (10×1 + 1×2 + 1×3) × 10^9 = 15×10^9
Next, we find the execution time for both compilers:
Execution time (compiler 1) = 10×10^9 cycles / 4×10^9 Hz = 2.5 sec
Execution time (compiler 2) = 15×10^9 cycles / 4×10^9 Hz = 3.75 sec
Compiler 1 generates the faster program (less execution time)
Now, we compute the MIPS rate for both compilers:
MIPS = Instruction Count / (Execution Time × 10^6)
MIPS (compiler 1) = (5+1+1) × 10^9 / (2.5 × 10^6) = 2800
MIPS (compiler 2) = (10+1+1) × 10^9 / (3.75 × 10^6) = 3200
So the code from compiler 2 has a higher MIPS rating, even though it is slower!
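The anomaly is easy to reproduce in a few lines; this sketch recomputes the numbers above:

```python
# Instruction counts in billions per class for each compiler; CPIs: A = 1, B = 2, C = 3.
clock_rate = 4e9
cpi = {"A": 1, "B": 2, "C": 3}

def time_and_mips(counts_in_billions):
    """Return (execution time in seconds, MIPS rating) for an instruction mix."""
    counts = {cls: n * 1e9 for cls, n in counts_in_billions.items()}
    cycles = sum(cpi[cls] * n for cls, n in counts.items())
    exec_time = cycles / clock_rate
    mips = sum(counts.values()) / (exec_time * 1e6)
    return exec_time, mips

print(time_and_mips({"A": 5, "B": 1, "C": 1}))    # (2.5 s, 2800 MIPS)  compiler 1
print(time_and_mips({"A": 10, "B": 1, "C": 1}))   # (3.75 s, 3200 MIPS) compiler 2: slower but higher MIPS
```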
Amdahl's Law
Amdahl's Law is a measure of speedup: how a computer performs after an enhancement E, relative to how it performed previously
The enhancement improves a fraction f of the execution time by a factor s; the remaining time is unaffected
Speedup(E) = Performance with E / Performance before = ExTime before / ExTime with E
ExTime with E = ExTime before × (f/s + (1 - f))
Speedup(E) = 1 / (f/s + (1 - f))
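As a sketch, Amdahl's Law is one line of code; the call shown anticipates Example 1 on the next slide (f = 0.8, s = 16):

```python
def amdahl_speedup(f, s):
    """Overall speedup when a fraction f of the execution time is improved by a factor s."""
    return 1 / (f / s + (1 - f))

print(amdahl_speedup(0.8, 16))   # 4.0: multiply is 80% of the time and is made 16x faster
```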
Example 1 on Amdahl's Law
Suppose a program runs in 100 seconds on a machine, with the multiply instruction responsible for 80 seconds of this time. How much do we have to improve the speed of multiplication if we want the program to run 4 times faster?
Solution: suppose we improve multiplication by a factor s
25 sec (4 times faster) = 80 sec / s + 20 sec
s = 80 / (25 - 20) = 80 / 5 = 16
Improve the speed of multiplication by s = 16 times
How about making the program 5 times faster?
20 sec (5 times faster) = 80 sec / s + 20 sec
s = 80 / (20 - 20) = ∞
Impossible to make the program 5 times faster!
Example 2 on Amdahl's Law
Suppose that floating-point square root is responsible for 20% of the execution time of a graphics benchmark, and ALL FP instructions are responsible for 60%. The remaining 40% is due to non-FP instructions.
Proposal 1: speed up FP SQRT by a factor of 10
Proposal 2: make ALL FP instructions 2x faster
Which proposal is better?
Answer:
Proposal 1: improve FP SQRT by a factor of 10
Speedup (FP SQRT) = 1 / (0.8 + 0.2/10) = 1.22
Proposal 2: improve ALL FP instructions by a factor of 2
Speedup = 1 / (0.4 + 0.6/2) = 1.43 (better)
Generalization of Amdahl’s Law
Let a task be split into n consecutive parts with fractions of execution time f1, f2, …, fn (summing to 1) and corresponding speedup factors s1, s2, …, sn, respectively
Then, a generalization of Amdahl's Law gives:
Speedup = 1 / (f1/s1 + f2/s2 + … + fn/sn)
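A sketch of the generalized form (the fractions are assumed to sum to 1); the example call reproduces Proposal 2 from the previous slide:

```python
def generalized_amdahl(fractions, speedups):
    """Speedup for a task split into parts with time fractions f_i and per-part speedups s_i."""
    assert abs(sum(fractions) - 1.0) < 1e-9      # the fractions must cover the whole task
    return 1 / sum(f / s for f, s in zip(fractions, speedups))

print(generalized_amdahl([0.6, 0.4], [2, 1]))    # ~1.43, matching Proposal 2 above
```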
Benchmarks Performance best obtained by running a real application
Use programs typical of the expected workload
Representative of the expected classes of applications
Examples: compilers, editors, scientific applications, graphics, ...
SPEC (Standard Performance Evaluation Corporation), www.spec.org
Various benchmarks for CPU performance, graphics, high-performance computing, Web servers, etc.
Specifies rules for running a list of programs and reporting results
Valuable indicator of performance and compiler technology
SPEC CPU 2006 (12 integer + 17 FP programs)
SPEC CPU Benchmarks Wall clock time is used as a performance metric
The benchmarks effectively measure CPU time, because they involve little I/O
Summarizing Performance Results
SPEC Ratio = Time on Reference Computer / Time on Computer Being Rated
SPEC Ratio_A / SPEC Ratio_B = (Execution Time_Ref / Execution Time_A) / (Execution Time_Ref / Execution Time_B) = Execution Time_B / Execution Time_A
The choice of the reference computer is irrelevant
Geometric Mean of SPEC Ratios = (SPEC Ratio_1 × SPEC Ratio_2 × … × SPEC Ratio_n)^(1/n)
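A minimal sketch of this summarization, using the first two benchmark times from the table on the next slide:

```python
from math import prod

def spec_ratios(ref_times, rated_times):
    """SPEC ratio = time on the reference computer / time on the computer being rated."""
    return [ref / t for ref, t in zip(ref_times, rated_times)]

def geometric_mean(values):
    return prod(values) ** (1 / len(values))

# Reference (Ultra 5) and measured times in seconds for two benchmarks.
print(geometric_mean(spec_ratios([1600, 3100], [51.5, 125.0])))   # one-number summary
```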
Execution Times & SPEC Ratios
Benchmark    Ultra 5 Time (sec)   Opteron Time (sec)   Opteron SPECratio   Itanium 2 Time (sec)   Itanium 2 SPECratio   Itanium 2 / Opteron SPECratio
wupwise      1600                 51.5                 31.06               56.1                   28.53                 0.92
swim         3100                 125.0                24.73               70.7                   43.85                 1.77
mgrid        1800                 98.0                 18.37               65.8                   27.36                 1.49
applu        2100                 94.0                 22.34               50.9                   41.25                 1.85
mesa         1400                 64.6                 21.69               108.0                  12.99                 0.60
galgel       2900                 86.4                 33.57               40.0                   72.47                 2.16
art          2600                 92.4                 28.13               21.0                   123.67                4.40
equake       1300                 72.6                 17.92               36.3                   35.78                 2.00
facerec      1900                 73.6                 25.80               86.9                   21.86                 0.85
ammp         2200                 136.0                16.14               132.0                  16.63                 1.03
lucas        2000                 88.8                 22.52               107.0                  18.76                 0.83
fma3d        2100                 120.0                17.48               131.0                  16.09                 0.92
sixtrack     1100                 123.0                8.95                68.8                   15.99                 1.79
apsi         2600                 150.0                17.36               231.0                  11.27                 0.65
Geometric Mean                                         20.86                                      27.12                 1.30
Geometric mean of the ratios = 1.30 = ratio of the geometric means = 27.12 / 20.86
Things to Remember about Performance
Two common measures: Response Time and Throughput
Response Time = duration of a single task
Throughput is a rate = number of tasks per unit of time
CPU Execution Time = Instruction Count × CPI × cycle time
MIPS = Millions of Instructions Per Second (a rate)
FLOPS = Floating-point Operations Per Second
Amdahl's Law is a measure of speedup when improving a fraction of the execution time
Benchmarks: real applications are used to compare the performance of computer systems
Geometric mean of SPEC ratios (for a set of applications)
Performance and Power Power is a key limitation
Battery capacity has improved only slightly over time
Need to design power-efficient processors
Reduce power by:
Reducing frequency
Reducing voltage
Putting components to sleep
Performance per Watt: FLOPS per Watt
Defined as performance divided by power consumption
An important metric for energy efficiency
Power in Integrated Circuits
Power is the biggest challenge facing computer design
Power must be brought in and distributed around the chip: hundreds of pins and multiple layers just for power and ground
Power is dissipated as heat and must be removed
In CMOS IC technology, dynamic power is consumed when switching transistors on and off
Dynamic Power = Capacitive Load × Voltage² × Frequency
Trends in Clock Rates and Power
Power Wall: we cannot keep increasing the clock rate
Heat must be dissipated from a 1.5 × 1.5 cm chip
The Intel 80386 (1985) consumed about 2 Watts
An Intel Core i7 running at 3.3 GHz consumes 130 Watts
This is about the limit of what can be cooled by air
Example on Power Consumption
Suppose a new CPU has 85% of the capacitive load of the old CPU, with a 15% voltage reduction and a 15% frequency reduction
Relate the power consumption of the new and old CPUs
Answer:
Power_new / Power_old = [(0.85 × C) × (0.85 × V)² × (0.85 × F)] / (C × V² × F) = 0.85⁴ ≈ 0.52
The new CPU consumes about half the power of the old CPU
The Power Wall:
We cannot reduce voltage further
We cannot remove more heat from the integrated circuit
How else can we improve performance?
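A short sketch of the calculation, applying the dynamic power formula from the previous slide:

```python
def dynamic_power(cap_load, voltage, frequency):
    """CMOS dynamic power = capacitive load x voltage^2 x frequency (relative units)."""
    return cap_load * voltage ** 2 * frequency

old = dynamic_power(1.0, 1.0, 1.0)
new = dynamic_power(0.85, 0.85, 0.85)   # 85% load, 15% lower voltage and frequency
print(new / old)                        # 0.85**4 ~= 0.52, about half the old power
```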
Moving to Multicores
Processor Performance
Move to multicore
Roughly a 35,000X improvement in processor performance between 1978 and 2012
Performance growth has slowed to about 22% per year due to power consumption and memory latency
Multicore Processors Multiprocessor on a single chip
Requires explicit parallel programming
Harder than sequential programming
Parallel programming to achieve higher performance:
Optimizing communication and synchronization
Load balancing
In addition, each core supports instruction-level parallelism:
Parallelism at the instruction level
Parallelism is extracted by the hardware or the compiler
Each core executes multiple instructions each cycle
This type of parallelism is hidden from the programmer