1 Preface: Second-handers do not examine, they repeat; they do not do, they pretend to do; they do not create, they put on a display; they are not concerned with their own ability or anyone else's, but with friendships and connections. What would become of the world without those who act, think, work, and produce? Ayn Rand (The Fountainhead)
2 Chapter 1: Fundamentals of Quantitative Design and Analysis (Computer Architecture: A Quantitative Approach, Fifth Edition)
3 Computer Technology (Introduction)
Performance improvements have come from:
- Improvements in semiconductor technology: feature size, clock speed
- Improvements in computer architectures: HLL (High-Level Language) compilers, UNIX-based OSes, RISC architectures
Together these have enabled:
- Lightweight computers
- Productivity-based programming languages: C#, Java, Python
- SaaS, virtualization, cloud
Applications have evolved: speech, sound, images, video, "augmented/extended reality", "big data"
4 Single Processor Performance (Introduction) [figure: growth of single-processor performance over time, annotated with the shift to RISC and the later move to multi-processor]
5 Current Trends in Architecture (Introduction)
- Cannot continue to leverage instruction-level parallelism (ILP) alone: single-processor performance improvement ended in 2003
- New models for performance: data-level parallelism (DLP), thread-level parallelism (TLP), request-level parallelism (RLP)
- These require explicit restructuring of the application
6 Classes of Computers
- Personal Mobile Device (PMD): smartphones, tablet computers (1.8 billion sold in 2010); emphasis on energy efficiency and real-time performance
- Desktop computers (0.35 billion): emphasis on price-performance
- Servers (20 million): emphasis on availability (downtime is very costly!), scalability, throughput
- Clusters / warehouse-scale computers: used for "Software as a Service (SaaS)", etc.; emphasis on availability ($6M per hour of downtime at Amazon.com!) and price-performance (power is ~80% of total cost!); sub-class: supercomputers, with emphasis on floating-point performance, fast internal networks, and big-data analytics
- Embedded computers (19 billion in 2010): emphasis on price
7 Parallelism (Classes of Computers)
Classes of parallelism in applications:
- Data-level parallelism (DLP): many data items operated on in parallel
- Task-level parallelism (TLP): separate tasks performed in parallel
Classes of architectural parallelism:
- Instruction-level parallelism (ILP): instruction pipelining
- Vector architectures / graphics processing units (GPUs): applying a single instruction to a collection of data
- Thread-level parallelism: exploits DLP or TLP via tightly coupled hardware that supports multithreading
- Request-level parallelism: largely decoupled tasks executed in parallel
(A short sketch contrasting data-level and task-level parallelism follows.)
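A minimal Python sketch contrasting the two application-level classes: data-level parallelism expressed as one operation applied to a whole array (the kind of work SIMD/vector hardware exploits), versus task-level parallelism expressed as independent tasks run on threads. NumPy availability, the array sizes, and the toy task are assumptions for illustration only.

```python
# Illustrative only: contrasts DLP (one operation over many data items)
# with TLP (independent tasks running concurrently). Sizes are arbitrary.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Data-level parallelism: a single logical operation applied to a whole array;
# hardware (SIMD units, GPUs) can process many elements per instruction.
a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)
c = a * b + 1.0          # element-wise multiply-add over 10^6 elements

# Task-level parallelism: largely independent tasks scheduled on threads.
def task(n):
    return sum(i * i for i in range(n))   # an arbitrary independent chunk of work

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, [100_000, 200_000, 300_000, 400_000]))

print(c[:3], results)
```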
8 Flynn's Taxonomy (Classes of Computers)
- Single instruction stream, single data stream (SISD): uniprocessor system with ILP
- Single instruction stream, multiple data streams (SIMD): vector architectures, multimedia extensions, graphics processing units
- Multiple instruction streams, single data stream (MISD): no commercial implementation
- Multiple instruction streams, multiple data streams (MIMD): tightly coupled MIMD exploits thread-level parallelism; loosely coupled MIMD exploits request-level parallelism
9 Defining Computer Architecture
- "Old" view of computer architecture: Instruction Set Architecture (ISA) design, i.e. decisions regarding registers, memory addressing, addressing modes, instruction operands, available operations, control-flow instructions, instruction encoding
- "Real" computer architecture: meets the specific requirements of the target machine; design to maximize performance within constraints of cost, power, and availability; includes the ISA, the microarchitecture (the memory system and its interconnection with the CPU), and the hardware
10 Trends in Technology
- Integrated circuit technology: transistor density +35%/year; die size +10-20%/year; overall integration +40-55%/year
- DRAM capacity: +25-40%/year (slowing)
- Flash capacity: +50-60%/year; 15-20X cheaper per bit than DRAM; standard for PMDs
- Magnetic disk technology: +40%/year; 15-25X cheaper per bit than Flash; 300-500X cheaper per bit than DRAM
- Networking technology: discussed in another course
[figure: MOS technology timeline, including on-chip caches and multi-core]
11 Bandwidth and Latency (Trends in Technology)
- Bandwidth or throughput: total work done in a given time; 10,000-25,000X improvement for processors; 300-1200X improvement for memory and disks
- Latency or response time: time between the start and completion of an event; 30-80X improvement for processors; 6-8X improvement for memory and disks
12 Bandwidth and Latency (Trends in Technology) [figure: log-log plot of bandwidth and latency milestones]
13 Transistors and Wires (Trends in Technology)
- Feature size: minimum size of a transistor or wire; from 10 microns in 1971 to 0.032 microns in 2011
- Transistor performance scales linearly with feature size, but wire delay does not improve as feature size shrinks, unlike transistor switching delay
- Integration density scales quadratically
- Linear performance and quadratic density growth present two challenges: 1. power, 2. signal propagation delay
14 Power and Energy (Trends in Power and Energy)
Problem: get power in, get heat out. Three power concerns:
1. Maximum power, needed to maintain the supply voltage
2. Thermal Design Power (TDP): characterizes sustained power consumption; used as the target for the power supply and cooling system; lower than peak power, higher than average power consumption; power is controlled via a voltage- or temperature-dependent clock rate plus a thermal-overload trip
3. Energy efficiency: energy consumption per task. Example: a CPU that uses 20% more power but takes only 70% of the time per task consumes 1.2 x 0.7 = 0.84 of the energy, i.e. it has better energy efficiency
15 Dynamic Energy and Power (Trends in Power and Energy)
- Dynamic energy (per transistor switch from 0 -> 1 or 1 -> 0): 1/2 x Capacitive load x Voltage^2
- Dynamic power: 1/2 x Capacitive load x Voltage^2 x Frequency switched
- Reducing the clock rate reduces power, not energy per task
- Reducing the voltage lowers both: supply voltages have gone from 5 V to under 1 V in 20 years
(A small numerical sketch follows.)
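A small sketch of these relations with assumed capacitance, voltage, and frequency values (purely illustrative); it also reproduces the 1.2 x 0.7 = 0.84 energy-efficiency comparison from the earlier slide.

```python
# Dynamic energy and power from the slide's formulas; numbers are illustrative.
def dynamic_energy(c_load, voltage):
    """Energy per 0->1 or 1->0 transition: 1/2 * C * V^2 (joules)."""
    return 0.5 * c_load * voltage ** 2

def dynamic_power(c_load, voltage, freq_switched):
    """Dynamic power: 1/2 * C * V^2 * f (watts)."""
    return dynamic_energy(c_load, voltage) * freq_switched

# Assumed values: 1 nF effective switched capacitance, 1.0 V, 3 GHz.
print(dynamic_power(1e-9, 1.0, 3e9))      # 1.5 W for this toy capacitance

# Lowering frequency lowers power but not energy per switch;
# lowering voltage lowers both (quadratically).
print(dynamic_power(1e-9, 1.0, 1.5e9))    # half the power, same energy per switch
print(dynamic_energy(1e-9, 0.8) / dynamic_energy(1e-9, 1.0))  # 0.64x energy at 0.8 V

# Energy-efficiency example from the earlier slide:
# 20% more power but only 70% of the time per task -> 1.2 * 0.7 = 0.84x energy.
print(1.2 * 0.7)   # 0.84
```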
16 Example (Trends in Power and Energy)
17 Power (Trends in Power and Energy)
- The Intel 80386 consumed ~4 W; a 3.3 GHz Intel Core i7 consumes 130 W
- This heat must be dissipated from a 1.5 x 1.5 cm chip, which is about the limit of what can be cooled by air
18 Increasing Energy Efficiency (Trends in Power and Energy)
- Do nothing well: turn off the clock of idle units or cores
- Dynamic voltage-frequency scaling (DVFS)
- Low-power states for DRAM and disks: impose a wake-up delay
- Overclocking: turn off some cores and run the others faster, typically about 10% above the nominal clock rate
19 Static Power (Trends in Power and Energy)
- Static power consumption = static (leakage) current x voltage
- Scales with the number of transistors and with on-chip cache (SRAM)
- To reduce it: power gating of idle sub-modules
- Race-to-halt: run at maximum speed to finish early and prolong idle periods
- The new primary metrics for evaluating design innovations: tasks per joule and performance per watt (instead of performance per mm^2)
20 Trends in Cost
Cost-related issues:
- Yield: the percentage of manufactured devices that pass the tests (doubling the yield halves the cost)
- Volume: doubling the volume decreases cost by about 10%
- Becoming a commodity: increases competition and lowers the cost
21 Integrated Circuit Cost (Trends in Cost)
- Die yield follows the Bose-Einstein formula, with: wafer yield = 100%; defects per unit area = 0.016-0.057 defects per cm^2 for 40 nm (2010); N = process-complexity factor = 11.5-15.5 (40 nm, 2010)
- The manufacturing process dictates the wafer cost, wafer yield, and defects per unit area
- The architect's design determines the die area, which in turn affects the defect rate and the cost per die
(A sketch of the cost calculation follows.)
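Using the die-cost relations as given in the textbook (dies per wafer = pi*(d/2)^2/area - pi*d/sqrt(2*area); die yield = wafer yield / (1 + defects per area * area)^N; cost per die = wafer cost / (dies per wafer * die yield)), here is a minimal Python sketch. The 0.03 defects/cm^2, N = 13.5, and 30 cm wafer diameter are assumed mid-range values picked from the ranges above, not figures from the slides.

```python
import math

def dies_per_wafer(wafer_diam_cm, die_area_cm2):
    # pi*(d/2)^2 / area is the usable wafer area in dies;
    # the second term subtracts partial dies along the wafer edge.
    return (math.pi * (wafer_diam_cm / 2) ** 2 / die_area_cm2
            - math.pi * wafer_diam_cm / math.sqrt(2 * die_area_cm2))

def die_yield(die_area_cm2, defects_per_cm2, n, wafer_yield=1.0):
    # Bose-Einstein yield model: wafer_yield / (1 + defect_density * area)^N
    return wafer_yield / (1 + defects_per_cm2 * die_area_cm2) ** n

def cost_per_die(wafer_cost, wafer_diam_cm, die_area_cm2, defects_per_cm2, n):
    good_dies = dies_per_wafer(wafer_diam_cm, die_area_cm2) * die_yield(
        die_area_cm2, defects_per_cm2, n)
    return wafer_cost / good_dies

# Assumed parameters: 30 cm wafer, $5500 processed-wafer cost, 40 nm process.
print(cost_per_die(5500, 30, 1.0, 0.03, 13.5))    # ~ $13 for a 1 cm^2 die
print(cost_per_die(5500, 30, 2.25, 0.03, 13.5))   # ~ $49; the later slide quotes $51
```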
22 Core i7 floorplan (Trends in Cost) [figure]
24 Cost of Die (Trends in Cost)
- Processed wafer cost = $5500; cost of a 1 cm^2 die = $13; cost of a 2.25 cm^2 die = $51
- Cost per die grows roughly with the square of the die area
- Additional costs: testing, packaging, test after packaging, and multi-layer fabrication masks
25 Dependability
- Systems alternate between two states of service with respect to an SLA/SLO: 1. service accomplishment, where service is delivered as specified by the SLA; 2. service interruption, where the delivered service differs from the SLA
- Failure: transition from state 1 to state 2; repair: transition from state 2 to state 1
- Module reliability: Failures In Time (FIT) = number of failures per 1 billion hours; Mean Time To Failure (MTTF) = 10^9 / FIT; Mean Time To Repair (MTTR); Mean Time Between Failures (MTBF) = MTTF + MTTR
- Module availability = MTTF / MTBF
26 Dependability
If the age of a module does not affect its probability of failure, the system failure rate is the sum of the failure rates of its parts. A short calculation sketch follows.
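A minimal sketch of the dependability arithmetic from these two slides; the MTTF and MTTR values are assumptions chosen only to exercise the formulas.

```python
# Dependability arithmetic; component values below are illustrative assumptions.
BILLION_HOURS = 1e9

def fit_from_mttf(mttf_hours):
    """FIT = failures per 10^9 device-hours, so FIT = 10^9 / MTTF."""
    return BILLION_HOURS / mttf_hours

def availability(mttf_hours, mttr_hours):
    """Availability = MTTF / MTBF, with MTBF = MTTF + MTTR."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A module with an assumed 1,000,000-hour MTTF and 24-hour MTTR:
print(fit_from_mttf(1_000_000))          # 1000 FIT
print(availability(1_000_000, 24))       # ~0.999976

# If component ages don't affect failure probability (exponential lifetimes),
# the system failure rate is the sum of the parts' failure rates.
part_mttfs = [1_000_000, 500_000, 200_000]        # assumed part MTTFs (hours)
system_failure_rate = sum(1 / m for m in part_mttfs)
print(1 / system_failure_rate)            # system MTTF = 125,000 hours
```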
27 Redundancy improves dependability
28 Measuring Performance
- Typical performance metrics: response time = execution time (desktop); throughput = total amount of work done in a given time (warehouse-scale)
- Speed of X relative to Y = Execution time of Y / Execution time of X
- Execution time: wall-clock time includes all system overheads; CPU time counts only computation time
- Benchmarks: kernels (e.g. matrix multiply) are small, key pieces of real applications; toy programs (e.g. sorting) are under 100 lines; synthetic benchmarks (e.g. Dhrystone)
- Benchmark suites: Standard Performance Evaluation Corporation (www.spec.org) and Transaction Processing Council (www.tpc.org)
29 SPEC desktop benchmark programs [figure]
30 Summarizing Performance Results: SPECRatio
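SPECRatio is the reference machine's execution time divided by the measured execution time, and SPEC summarizes a suite with the geometric mean of the per-program ratios. A small sketch with invented reference and measured times:

```python
# SPECRatio and geometric-mean summary; the execution times below are made up
# purely to illustrate the arithmetic.
import math

def spec_ratio(ref_time, measured_time):
    """SPECRatio = reference execution time / measured execution time."""
    return ref_time / measured_time

def geometric_mean(ratios):
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

ref_times = [9650, 8050, 7200]    # assumed reference times (s)
machine_a = [ 500,  700,  600]    # assumed measured times on machine A (s)
machine_b = [ 400,  900,  650]    # assumed measured times on machine B (s)

ratios_a = [spec_ratio(r, t) for r, t in zip(ref_times, machine_a)]
ratios_b = [spec_ratio(r, t) for r, t in zip(ref_times, machine_b)]

# The geometric mean makes the summary independent of the reference machine:
# the ratio of the two means equals the mean of the per-program ratios.
print(geometric_mean(ratios_a), geometric_mean(ratios_b))
print(geometric_mean(ratios_a) / geometric_mean(ratios_b))
```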
31 AMD Opteron vs. Intel Itanium 2
32 Principles of Computer Design (Principles)
- Principle of locality: programs reuse data and instructions; they spend 90% of their execution time in only 10% of the code
- Focus on the common case: Amdahl's Law gives the overall performance improvement obtained from optimizing the common case
33 Spend resources proportional to where time is spent
34 Using Amdahl's Law to compare design alternatives
35 The effect of a 4150x improvement in power supply reliability on overall system reliability. Amdahl's Law requires knowing the fraction of time, or of other resources, consumed by the new or improved component. A sketch of the law follows.
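A minimal sketch of Amdahl's Law. The 40% fraction and 10x enhancement are assumed example values, not figures from the slides; the same fraction-weighted reasoning underlies the power-supply reliability example above.

```python
# Amdahl's Law: overall speedup when only a fraction of execution time is enhanced.
def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# Assumed example: an enhancement usable 40% of the time that is 10x faster.
print(amdahl_speedup(0.4, 10))        # ~1.56x overall

# Even a near-infinite enhancement is capped by the unimproved fraction:
print(amdahl_speedup(0.4, 1e12))      # approaches 1 / (1 - 0.4) ~ 1.67x

# The same logic applies to failure rates: hugely improving one component
# (e.g., a far more reliable power supply) helps the system only in proportion
# to that component's share of the overall failure rate.
```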
36 Principles of Computer Design: The Processor Performance Equation (Principles)
37 Principles of Computer Design (Principles)
- Different instruction types have different CPIs; IC_i = the number of times instruction type i is executed in a program
- CPU clock cycles = sum over i of (IC_i x CPI_i); overall CPI = sum over i of (IC_i x CPI_i) / total instruction count
- CPUs have hardware counters that can be used to measure IC_i and CPI_i, which makes this equation easier to apply than Amdahl's Law
(A worked sketch follows.)
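A small sketch of the processor performance equation. The instruction mix, per-class CPIs, and clock rate are invented for illustration.

```python
# Processor performance equation:
# CPU time = sum_i(IC_i * CPI_i) * clock cycle time; overall CPI is the
# cycle count divided by the total instruction count.
instruction_mix = {
    # class: (instruction count, CPI) -- assumed values
    "ALU":    (50_000_000, 1.0),
    "load":   (20_000_000, 2.0),
    "store":  (10_000_000, 2.0),
    "branch": (20_000_000, 1.5),
}

clock_rate_hz = 2e9                      # assumed 2 GHz clock
cycle_time_s  = 1.0 / clock_rate_hz

total_ic     = sum(ic for ic, _ in instruction_mix.values())
total_cycles = sum(ic * cpi for ic, cpi in instruction_mix.values())

overall_cpi = total_cycles / total_ic            # weighted-average CPI (1.4 here)
cpu_time    = total_cycles * cycle_time_s        # seconds (0.07 s here)

print(overall_cpi, cpu_time)
```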
39 Putting It All Together: Performance, Price, and Power. Comparing the performance/price of three small servers with SPECpower (ssj_ops = server-side Java operations per second).
40 Including Power
- Google server utilization measurements show that less than 1% of servers operate at peak load; average utilization is 10%-50%
- [table: overall ssj_ops/W and ssj_ops/W per $1000 for the three servers]
(A sketch of how the overall metric is computed follows.)
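For context, SPECpower's overall metric divides the ssj_ops summed over all measured load levels (including active idle) by the power summed over the same levels; dividing again by price in thousands of dollars gives the per-$1000 column. The load/power pairs and the server price below are invented, not the slide's data.

```python
# Overall ssj_ops per watt: summed ssj_ops over all load levels divided by the
# summed average power over the same levels. All numbers here are invented.
load_levels = [   # (ssj_ops at this load level, average watts at this level)
    (300_000, 300),   # 100% load
    (150_000, 220),   #  50% load
    ( 30_000, 160),   #  10% load
    (      0, 120),   # active idle
]

total_ops   = sum(ops for ops, _ in load_levels)
total_watts = sum(w for _, w in load_levels)

overall_ops_per_watt = total_ops / total_watts
print(overall_ops_per_watt)               # 600 for these made-up numbers

# ssj_ops/W per $1000, as used to compare the three servers on the slide:
price_dollars = 2000                      # assumed server price
print(overall_ops_per_watt / (price_dollars / 1000))
```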
41 ISA CSCE430/830 Instruction Set Architecture (ISA)
- Serves as an interface between software and hardware; provides the mechanism by which the software tells the hardware what should be done
- The translation chain runs from software down to hardware: high-level language code (C, C++, Java, Fortran, ...) is translated by the compiler into assembly language code (architecture-specific statements), which the assembler turns into machine language code (architecture-specific bit patterns) executed by the hardware; the instruction set sits at this software/hardware boundary. A sketch of this chain follows.
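As an illustration of that chain, the sketch below takes one hypothetical C statement, shows a MIPS-style assembly equivalent, and packs the standard R-type instruction fields into a 32-bit machine word. The C statement and register choices are examples, not taken from the slides.

```python
# From C to machine code, sketched for a MIPS-style R-type "add" instruction.
# The field layout (op/rs/rt/rd/shamt/funct) follows the classic MIPS format.
# C source:        c = a + b;
# Assembly (MIPS): add $t0, $s1, $s2     # $t0 <- $s1 + $s2

def encode_r_type(op, rs, rt, rd, shamt, funct):
    """Pack the six R-type fields (6/5/5/5/5/6 bits) into a 32-bit word."""
    return (op << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

T0, S1, S2 = 8, 17, 18                              # MIPS register numbers
word = encode_r_type(0, S1, S2, T0, 0, 0b100000)    # add: opcode 0, funct 0x20
print(f"0x{word:08x}")                              # the architecture-specific bit pattern
```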
42 ISA CSCE430/830 Instruction Set Design Issues
Instruction set design issues include:
– Where are operands stored? » registers, memory, stack, accumulator
– How many explicit operands are there? » 0, 1, 2, or 3
– How is the operand location specified? » register, immediate, indirect, ...
– What type and size of operands are supported? » byte, int, float, double, string, vector, ...
– What operations are supported? » add, sub, mul, move, compare, ...
43 ISA CSCE430/830 Classifying ISAs
– Accumulator (before 1960, e.g. 68HC11), 1-address: add A  (acc <- acc + mem[A])
– Stack (1960s to 1970s), 0-address: add  (tos <- tos + next)
– Memory-Memory (1970s to 1980s), 2-address: add A, B  (mem[A] <- mem[A] + mem[B]); 3-address: add A, B, C  (mem[A] <- mem[B] + mem[C])
– Register-Memory (1970s to present, e.g. 80x86), 2-address: add R1, A  (R1 <- R1 + mem[A]); load R1, A  (R1 <- mem[A])
– Register-Register (load/store, RISC) (1960s to present, e.g. MIPS), 3-address: add R1, R2, R3  (R1 <- R2 + R3); load R1, R2  (R1 <- mem[R2]); store R1, R2  (mem[R1] <- R2)
44 ISA CSCE430/830 Code Sequence C = A + B for Four Instruction Sets
– Stack: Push A ; Push B ; Add ; Pop C
– Accumulator: Load A ; Add B ; Store C   (the Add computes acc = acc + mem[B])
– Register (register-memory): Load R1, A ; Add R1, B ; Store C, R1   (the Add computes R1 = R1 + mem[B])
– Register (load-store): Load R1, A ; Load R2, B ; Add R3, R1, R2 ; Store C, R3   (the Add computes R3 = R1 + R2)
45 ISA CSCE430/830 Types of Addressing Modes (VAX)
1. Register direct: Add R4, R3  (R4 <- R4 + R3)
2. Immediate: Add R4, #3  (R4 <- R4 + 3)
3. Displacement: Add R4, 100(R1)  (R4 <- R4 + M[100 + R1])
4. Register indirect: Add R4, (R1)  (R4 <- R4 + M[R1])
5. Indexed: Add R4, (R1 + R2)  (R4 <- R4 + M[R1 + R2])
6. Direct: Add R4, (1000)  (R4 <- R4 + M[1000])
7. Memory indirect: Add R4, @(R3)  (R4 <- R4 + M[M[R3]])
8. Autoincrement: Add R4, (R2)+  (R4 <- R4 + M[R2]; R2 <- R2 + d)
9. Autodecrement: Add R4, (R2)-  (R4 <- R4 + M[R2]; R2 <- R2 - d)
10. Scaled: Add R4, 100(R2)[R3]  (R4 <- R4 + M[100 + R2 + R3*d])
Studies by [Clark and Emer] indicate that modes 1-4 account for 93% of all operands on the VAX. A toy interpreter for a few of these modes follows.
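To make the "Action" column concrete, here is a toy Python interpreter for a few of these modes; the memory contents and register values are invented for illustration.

```python
# A toy model of a few of the addressing modes in the table above.
# Memory contents and register values are invented.
mem  = {112: 7, 12: 3, 1000: 5, 5: 11}           # sparse "memory"
regs = {"R1": 12, "R2": 4, "R3": 1000, "R4": 0}

def register_direct(reg):       return regs[reg]              # R3
def immediate(value):           return value                  # #3
def displacement(disp, reg):    return mem[disp + regs[reg]]  # 100(R1)
def register_indirect(reg):     return mem[regs[reg]]         # (R1)
def memory_indirect(reg):       return mem[mem[regs[reg]]]    # @(R3)

# Add R4, 100(R1)  ->  R4 <- R4 + M[100 + R1]
regs["R4"] += displacement(100, "R1")
print(regs["R4"])     # 7, since M[100 + 12] = 7

# Add R4, @(R3)    ->  R4 <- R4 + M[M[R3]]
regs["R4"] += memory_indirect("R3")
print(regs["R4"])     # 18, since M[M[1000]] = M[5] = 11
```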