COMPUTER ARCHITECTURE CS 6354
ISA Tradeoffs and Multi-Cores
Samira Khan, University of Virginia, Sep 6, 2017
The content and concept of this course are adapted from CMU ECE 740
AGENDA
Logistics
Review from last lecture
Fundamental concepts
ISA Tradeoffs
Multi-Cores
LOGISTICS
Project list
  Posted in Piazza
  Be prepared to spend time on the project
Sample project proposals posted
Project proposal due on Sep 20
Project proposal presentations: Sep 25 and 27
Groups of 2-3 students
PROJECT PROPOSAL
Problem: Clearly define the problem you are trying to solve.
Novelty: Did any other work try to solve the problem? How did they solve it? What are the shortcomings?
Key Idea: What is the initial idea? Why do you think it will work? How is your approach different from the prior work?
Methodology: How will you test and evaluate your idea? What tools or simulators will you use? What experiments do you need to prove/disprove your idea?
Plan: Describe the steps to finish your project. What will you accomplish at each milestone? What must you finish, and what more could you do? If you finish, can you submit it to a conference? Which conference do you think is the best fit for the work?
LITERATURE SURVEY
Goal: Critically analyze work related to your project
Pick 2-3 papers related to your project
Use the same format as the reviews:
  What is the problem the paper is solving?
  What is the key insight?
  What are the advantages and disadvantages?
  How can you do better?
Send the list of papers to the TA by Sep 15
This will become the related work section of your proposal
VECTOR PROCESSOR
-- Works (only) if parallelism is regular (data/SIMD parallelism)
   ++ Vector operations
-- Very inefficient if parallelism is irregular
   -- How about searching for a key in a linked list? (see the sketch below)
-- Memory (bandwidth) can easily become a bottleneck, especially if
   1. compute/memory operation balance is not maintained
   2. data is not mapped appropriately to memory banks
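To make the irregular-parallelism point concrete, here is a minimal C sketch (mine, not from the slides) of the linked-list search: each load address comes from the previous load, so there is no regular stream of independent elements for vector lanes to operate on.

  #include <stddef.h>

  struct node { int key; struct node *next; };

  /* The traversal is a serial chain of dependent loads --
     nothing here maps onto regular data/SIMD parallelism. */
  struct node *find(struct node *head, int key) {
      while (head != NULL && head->key != key)
          head = head->next;
      return head;   /* NULL if the key is not present */
  }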
SYSTOLIC ARRAYS: PROS AND CONS
Advantage:
  Specialized: computation needs to fit the PE organization
  Improved efficiency, simple design, high concurrency/performance
  Good at doing more with less memory bandwidth
Downside:
  Specialized: not generally applicable, because the computation needs to fit the PE functions/organization
ISA VS. MICROARCHITECTURE
What is part of the ISA vs. the uarch?
  Gas pedal: interface for "acceleration"
  Internals of the engine: implement "acceleration"
  Add instruction vs. adder implementation (sketched below)
The implementation (uarch) can vary as long as it satisfies the specification (ISA)
  Bit-serial, ripple-carry, carry-lookahead adders
  The x86 ISA has many implementations: 286, 386, 486, Pentium, Pentium Pro, ...
Uarch usually changes faster than ISA
  Few ISAs (x86, SPARC, MIPS, Alpha) but many uarchs
  Why?
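A minimal illustration of the split (my sketch, not from the slides, assuming 32-bit unsigned ints): the ISA only promises the result of an add; whether the hardware computes it bit-serially, with ripple carry, or with carry lookahead is a uarch choice. A ripple-carry version in C:

  /* Computes a + b one bit at a time, propagating the carry --
     functionally identical to the '+' the ISA specifies. */
  unsigned ripple_add(unsigned a, unsigned b) {
      unsigned sum = 0, carry = 0;
      for (int i = 0; i < 32; i++) {
          unsigned ai = (a >> i) & 1u, bi = (b >> i) & 1u;
          sum |= (ai ^ bi ^ carry) << i;
          carry = (ai & bi) | (carry & (ai ^ bi));
      }
      return sum;
  }

Any implementation of this function is interchangeable to software; that is exactly the ISA/uarch contract.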
TRADEOFFS: SOUL OF COMPUTER ARCHITECTURE
ISA-level tradeoffs
Uarch-level tradeoffs
System and task-level tradeoffs
  How to divide the labor between hardware and software
ISA-LEVEL TRADEOFFS: SEMANTIC GAP
Where to place the ISA? Semantic gap
  Closer to high-level language (HLL) or closer to hardware control signals?
  Complex vs. simple instructions
  RISC vs. CISC vs. HLL machines
    FFT, QUICKSORT, POLY, FP instructions?
    VAX INDEX instruction (array access with bounds checking)
    e.g., A[i][j][k] in one instruction, with bounds check
SEMANTIC GAP
[Figure: the semantic gap spans from the high-level language (software) down to the control signals (hardware); the ISA is the dividing line between the two.]

SEMANTIC GAP
[Figure: the same diagram with CISC and RISC marked: a CISC ISA sits closer to the high-level language, a RISC ISA closer to the hardware control signals.]
ISA-LEVEL TRADEOFFS: SEMANTIC GAP
Where to place the ISA? Semantic gap
  Closer to high-level language (HLL) or closer to hardware control signals?
  Complex vs. simple instructions
  RISC vs. CISC vs. HLL machines
    FFT, QUICKSORT, POLY, FP instructions?
    VAX INDEX instruction (array access with bounds checking)
Tradeoffs:
  Simple compiler, complex hardware vs. complex compiler, simple hardware
  Caveat: translation (indirection) can change the tradeoff!
  Burden of backward compatibility
  Performance?
    Optimization opportunity: for the VAX INDEX instruction, who (compiler vs. hardware) puts more effort into optimization?
    Instruction size, code size
X86: SMALL SEMANTIC GAP: STRING OPERATIONS
REP MOVS DEST SRC
How many instructions does this take in Alpha?
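What that single instruction does, sketched in C (my sketch, not an official definition): REP MOVS copies a block of bytes in one instruction, while a fixed-length load/store ISA such as Alpha needs a multi-instruction loop of load, store, pointer updates, count decrement, and branch for the same work.

  /* Rough semantics of REP MOVSB: copy 'count' bytes from src to dest. */
  void rep_movs(unsigned char *dest, const unsigned char *src,
                unsigned long count) {
      while (count-- != 0)
          *dest++ = *src++;   /* one iteration ~ several RISC instructions */
  }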
SMALL SEMANTIC GAP EXAMPLES IN VAX
FIND FIRST: find the first set bit in a bit field
  Helps OS resource allocation operations
SAVE CONTEXT, LOAD CONTEXT: special context-switching instructions
INSQUE, REMQUE: operations on a doubly linked list
INDEX: array access with bounds checking (sketched below)
STRING operations: compare strings, find substrings, ...
Cyclic redundancy check instruction
EDITPC: implements editing functions to display fixed-format output
Digital Equipment Corp., "VAX Architecture Handbook."
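As a concrete reading of the INDEX entry, here is a C sketch (mine, not the VAX definition verbatim) of what a bounds-checked array access folds into a single instruction:

  #include <stdlib.h>   /* abort() stands in for a hardware bounds trap */

  /* Check that i lies in [low, high], then return the selected element. */
  int index_checked(const int *a, long i, long low, long high) {
      if (i < low || i > high)
          abort();
      return a[i - low];
  }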
CISC VS. RISC
Which one is easier to optimize?

  RISC-style copy loop:    CISC:
  X: MOV                   REPMOVS
     ADD
     COMP
     JMP X
SMALL VERSUS LARGE SEMANTIC GAP
CISC vs. RISC
  Complex instruction set computer: complex instructions
    Initially motivated by "not good enough" code generation
  Reduced instruction set computer: simple instructions
    John Cocke, mid 1970s, IBM 801
    Goal: enable better compiler control and optimization
RISC motivated by
  Memory stalls (no work done in a complex instruction when there is a memory stall?)
    When is this correct?
  Simplifying the hardware: lower cost, higher frequency
  Enabling the compiler to optimize the code better
    Find fine-grained parallelism to reduce stalls
SMALL VERSUS LARGE SEMANTIC GAP
John Cocke's RISC (large semantic gap) concept: compiler generates control signals, i.e., open microcode
Advantages of small semantic gap (complex instructions)
  + Denser encoding: smaller code size, saves off-chip bandwidth, better cache hit rate (better packing of instructions)
  + Simpler compiler
Disadvantages
  - Larger chunks of work: the compiler has less opportunity to optimize
  - More complex hardware: translation to control signals and optimization needs to be done by hardware
Read Colwell et al., "Instruction Sets and Beyond: Computers, Complexity, and Controversy," IEEE Computer, 1985.
HOW HIGH OR LOW CAN YOU GO?
Very large semantic gap
  Each instruction specifies the complete set of control signals in the machine
  Compiler generates control signals
  Open microcode (John Cocke, 1970s)
  Gave way to optimizing compilers
Very small semantic gap
  ISA is (almost) the same as a high-level language
  Java machines, LISP machines, object-oriented machines, capability-based machines
EFFECT OF TRANSLATION
One can translate from one ISA to another ISA to change the semantic gap tradeoffs
Examples
  Intel's and AMD's x86 implementations translate x86 instructions into programmer-invisible microoperations (simple instructions) in hardware (see the sketch below)
  Transmeta's x86 implementations translated x86 instructions into "secret" VLIW instructions in software (code morphing software)
Think about the tradeoffs
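A sketch of what that hardware translation amounts to (illustrative only; the actual uop encoding is implementation-specific): a single x86 read-modify-write instruction such as "add [mem], reg" is cracked into simple load/add/store microoperations, shown here as equivalent C steps:

  /* x86: add [mem], reg -- one instruction, three uop-like steps. */
  void add_mem_reg(long *mem, long reg) {
      long tmp = *mem;   /* uop 1: load  */
      tmp = tmp + reg;   /* uop 2: add   */
      *mem = tmp;        /* uop 3: store */
  }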
TRANSLATION LAYER
[Figure: without translation, the ISA exposed to the programmer sits directly between the high-level language (software) and the control signals (hardware). With translation, the exposed ISA (x86) is converted by a translation layer into an internal uISA (uops) that the hardware implements.]
ISA-LEVEL TRADEOFFS: INSTRUCTION LENGTH
Fixed length: all instructions have the same length
  + Easier to decode a single instruction in hardware
  + Easier to decode multiple instructions concurrently
  -- Wasted bits in instructions (Why is this bad?)
  -- Harder-to-extend ISA (how to add new instructions?)
Variable length: instructions have different lengths (determined by opcode and sub-opcode)
  + Compact encoding (Why is this good?)
    Intel 432: Huffman encoding (sort of); 6- to 321-bit instructions. How?
  -- More logic to decode a single instruction (see the sketch below)
  -- Harder to decode multiple instructions concurrently
Tradeoffs
  Code size (memory space, bandwidth, latency) vs. hardware complexity
  ISA extensibility and expressiveness
  Performance? Smaller code vs. imperfect decode
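A minimal decoder sketch in C for a hypothetical variable-length encoding (the opcodes and lengths below are made up, not x86): the length of each instruction is only known after inspecting its opcode, so finding where the next instruction starts serializes decode.

  /* Hypothetical encoding: the first byte determines the length. */
  unsigned insn_length(const unsigned char *p) {
      switch (p[0]) {
      case 0x10: return 1;   /* no operands                     */
      case 0x20: return 3;   /* opcode + two register bytes     */
      case 0x30: return 6;   /* opcode + reg + 4-byte immediate */
      default:   return 2;   /* opcode + one register byte      */
      }
  }

With fixed 32-bit instructions none of this logic exists: the next instruction always starts at PC + 4, so many instructions can be decoded in parallel.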
ISA-LEVEL TRADEOFFS: UNIFORM DECODE
Uniform decode: the same bits in each instruction correspond to the same meaning
  Opcode is always in the same location
  Ditto operand specifiers, immediate values, ...
  Many "RISC" ISAs: Alpha, MIPS, SPARC
  + Easier decode, simpler hardware
  + Enables parallelism: generate target address before knowing the instruction is a branch
  -- Restricts instruction format (fewer instructions?) or wastes space
Non-uniform decode
  E.g., the opcode can be in the 1st through 7th byte in x86
  + More compact and powerful instruction format
  -- More complex decode logic (e.g., more logic to speculatively generate branch target)
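To make uniform decode concrete, a C sketch using the actual MIPS field layout (opcode in bits 31-26, rs in bits 25-21, rt in bits 20-16): the fields can be pulled out of every instruction word before the opcode has even been identified.

  /* The same bit positions mean the same thing in every instruction,
     so these extractions can all run in parallel. */
  unsigned opcode(unsigned insn) { return (insn >> 26) & 0x3Fu; }
  unsigned rs(unsigned insn)     { return (insn >> 21) & 0x1Fu; }
  unsigned rt(unsigned insn)     { return (insn >> 16) & 0x1Fu; }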
X86 VS. ALPHA INSTRUCTION FORMATS
[Figure: instruction format diagrams for x86 (variable length, non-uniform) and Alpha (fixed 32-bit, uniform).]
ISA-LEVEL TRADEOFFS: NUMBER OF REGISTERS
Affects: Number of bits used for encoding register address Number of values kept in fast storage (register file) (uarch) Size, access time, power consumption of register file Large number of registers: + Enables better register allocation (and optimizations) by compiler fewer saves/restores -- Larger instruction size -- Larger register file size -- (Superscalar processors) More complex dependency check logic
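A quick worked example of the encoding cost (standard arithmetic, not from the slide): with 32 architectural registers, each register specifier needs log2(32) = 5 bits, so a 3-operand instruction spends 3 x 5 = 15 bits of a 32-bit encoding on register specifiers alone; doubling to 64 registers adds one bit per specifier, i.e., 3 more bits per instruction.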
ISA-LEVEL TRADEOFFS: ADDRESSING MODES
An addressing mode specifies how to obtain an operand of an instruction
  Register
  Immediate
  Memory (displacement, register indirect, indexed, absolute, memory indirect, autoincrement, autodecrement, ...)
More modes:
  + Help better support programming constructs (arrays, pointer-based accesses)
  -- Make it harder for the architect to design
  -- Too many choices for the compiler?
    Many ways to do the same thing complicates compiler design
    Read Wulf, "Compilers and Computer Architecture"
X86 VS. ALPHA INSTRUCTION FORMATS
[Figure: instruction format diagrams, repeated from above.]

[Figure: x86 examples of register indirect, absolute, register + displacement, and register addressing.]
[Figure: x86 examples of indexed (base + index) and scaled (base + index*4) addressing.]
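The effective-address arithmetic behind the modes in the two figures above, written out in C (a sketch; 'base' and 'index' stand for register contents, 'disp' for an immediate encoded in the instruction):

  unsigned long ea_absolute(unsigned long disp)                         { return disp; }
  unsigned long ea_reg_indirect(unsigned long base)                     { return base; }
  unsigned long ea_displacement(unsigned long base, unsigned long disp) { return base + disp; }
  unsigned long ea_indexed(unsigned long base, unsigned long index)     { return base + index; }
  unsigned long ea_scaled(unsigned long base, unsigned long index)      { return base + index * 4; }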
OTHER ISA-LEVEL TRADEOFFS
Load/store vs. memory/memory architectures
Condition codes vs. condition registers vs. compare & test
Hardware interlocks vs. software-guaranteed interlocking
VLIW vs. single instruction vs. SIMD
0-, 1-, 2-, or 3-address machines (stack, accumulator, 2- or 3-operand)
Precise vs. imprecise exceptions
Virtual memory vs. not
Aligned vs. unaligned access
Supported data types
Software- vs. hardware-managed page fault handling
Granularity of atomicity
Cache coherence (hardware vs. software)
...
MULTIPLE CORES ON CHIP
Simpler and lower power than a single large core
Large-scale parallelism on chip
  AMD Barcelona: 4 cores
  Intel Core i7: 8 cores
  IBM Cell BE: 8+1 cores
  IBM POWER7: 8 cores
  Nvidia Fermi: 448 "cores"
  Intel SCC: 48 cores, networked
  Tilera TILE Gx: 100 cores, networked
  Sun Niagara II: 8 cores
MOORE'S LAW
Moore, "Cramming more components onto integrated circuits," Electronics, 1965.
MULTI-CORE
Idea: Put multiple processors on the same die
Technology scaling (Moore's Law) enables more transistors to be placed on the same die area
What else could you do with the die area you dedicate to multiple processors?
  Have a bigger, more powerful core
  Have larger caches in the memory hierarchy
  Integrate platform components on chip (e.g., network interface, memory controllers)
WHY MULTI-CORE?
Alternative: Bigger, more powerful single core
  Larger superscalar issue width, larger instruction window, more execution units, large trace caches, large branch predictors, etc.
  + Improves single-thread performance transparently to programmer and compiler
  - Very difficult to design (scalable algorithms for improving single-thread performance remain elusive)
  - Power hungry: many out-of-order execution structures consume significant power/area when scaled up. Why?
  - Diminishing returns on performance
  - Does not significantly help memory-bound application performance (scalable algorithms for this remain elusive)
MULTI-CORE VS. LARGE SUPERSCALAR
Multi-core advantages
  + Simpler cores: more power efficient, lower complexity, easier to design and replicate, higher frequency (shorter wires, smaller structures)
  + Higher system throughput on multiprogrammed workloads: reduced context switches
  + Higher system throughput in parallel applications
Multi-core disadvantages
  - Requires parallel tasks/threads to improve performance (parallel programming)
  - Resource sharing can reduce single-thread performance
  - Shared hardware resources need to be managed
  - Number of pins limits data supply for increased demand
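One way to quantify the first disadvantage (my addition, using Amdahl's law, which the slides do not invoke here): if a fraction p of a program can be parallelized across n cores, speedup = 1 / ((1 - p) + p/n). With p = 0.9 and n = 8, speedup = 1 / (0.1 + 0.9/8) ≈ 4.7, well short of 8x; the serial fraction, not the core count, quickly becomes the limit.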
WHY MULTI-CORE?
Alternative: Bigger caches
  + Improves single-thread performance transparently to programmer and compiler
  + Simple to design
  - Diminishing single-thread performance returns from cache size. Why?
  - Multiple levels complicate the memory hierarchy
CACHE VS. CORE
[Figure: the die-area tradeoff between cache capacity and number of cores.]
WHY MULTI-CORE?
Alternative: Integrate platform components on chip instead
  + Speeds up many system functions (e.g., network interface cards, Ethernet controller, memory controller, I/O controller)
  - Not all applications benefit (e.g., CPU-intensive code sections)
WHY MULTI-CORE?
Other alternatives?
  Dataflow?
  Vector processors (SIMD)?
  Integrating DRAM on chip?
  Reconfigurable logic? (general purpose?)