Computer Architecture


1 Computer Architecture
Chapter 2c. With thanks to M.J. Irwin, D. Patterson, and J. Hennessy for some lecture slide contents.

2 Branch Addressing
Branch instructions specify an opcode, two registers, and a target address. Most branch targets are near the branch, either forward or backward. (§2.10 MIPS Addressing for 32-Bit Immediates and Addresses)
I-format fields: op (6 bits), rs (5 bits), rt (5 bits), constant or address (16 bits)
PC-relative addressing: target address = PC + offset × 4, where the PC has already been incremented by 4 by this time.

3 Other Control Flow Instructions
MIPS also has an unconditional branch (jump) instruction:
j label        # go to label
Instruction format (J format): op = 0x02 (6 bits), 26-bit address.
The target is formed from the low-order 26 bits of the jump instruction, shifted left by 2 (appending 00) and concatenated with the upper 4 bits of the PC to give a 32-bit address.
[Figure: if-then-else code compilation]

4 Jump Addressing
Jump (j and jal) targets could be anywhere in the text segment, so the instruction encodes the address in its 26-bit field.
J-format fields: op (6 bits), address (26 bits)
(Pseudo)direct jump addressing: target address = PC[31:28] : (address × 4)

5 Target Addressing Example
Loop code from the earlier example, assuming Loop is placed at location 80000:

Address   Instruction                Encoded fields
80000     Loop: sll  $t1, $s3, 2      0    0   19    9    2    0
80004           add  $t1, $t1, $s6    0    9   22    9    0   32
80008           lw   $t0, 0($t1)     35    9    8    0
80012           bne  $t0, $s5, Exit   5    8   21    2
80016           addi $s3, $s3, 1      8   19   19    1
80020           j    Loop             2   20000
80024     Exit: …
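Not part of the slides: a small C sketch that checks the target-address arithmetic in this example, PC-relative for the bne and pseudo-direct for the j.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* PC-relative branch: target = (PC of branch + 4) + offset * 4 */
static uint32_t branch_target(uint32_t branch_pc, int16_t offset) {
    return (branch_pc + 4) + ((int32_t)offset << 2);
}

/* Pseudo-direct jump: target = PC[31:28] : (26-bit address field << 2) */
static uint32_t jump_target(uint32_t jump_pc, uint32_t addr_field) {
    return ((jump_pc + 4) & 0xF0000000u) | ((addr_field & 0x03FFFFFFu) << 2);
}

int main(void) {
    /* bne at 80012 with offset field 2 reaches Exit at 80024 */
    assert(branch_target(80012, 2) == 80024);
    /* j at 80020 with address field 20000 reaches Loop at 80000 */
    assert(jump_target(80020, 20000) == 80000);
    printf("targets check out\n");
    return 0;
}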

6 Aside: Branching Far Away
What if the branch destination is farther away than can be captured in 16 bits? The assembler comes to the rescue: it inserts an unconditional jump to the branch target and inverts the condition, so that
beq $s0, $s1, L1_far
becomes
bne $s0, $s1, L2
j   L1_far
L2:

7 Addressing Mode Summary

8 MIPS Organization So Far
[Datapath figure: a register file of 32 registers ($zero to $ra) with two 5-bit source addresses, one 5-bit destination address, and 32-bit data ports; a 32-bit ALU; a PC updated each cycle by PC = PC + 4 (plus branch offset × 4 for taken branches); and a byte-addressed memory of 2^30 32-bit words (big-endian), driven by a Fetch, Decode, Exec loop.]

9 MIPS Instruction Classes Distribution
Frequency of MIPS instruction classes for SPEC2006:

Instruction class   Integer   Floating point
Arithmetic          16%       48%
Data transfer       35%       36%
Logical             12%        4%
Cond. branch        34%        8%
Jump                 2%        0%

10 Synchronization
Two processors sharing an area of memory: P1 writes, then P2 reads. There is a data race if P1 and P2 don't synchronize, because the result depends on the order of the accesses.
Hardware support is required: an atomic read/write memory operation, with no other access to the location allowed between the read and the write. This could be a single instruction (e.g., an atomic swap of register ↔ memory) or an atomic pair of instructions. (§2.11 Parallelism and Instructions: Synchronization)
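Not part of the slides: a minimal C/pthreads sketch of such a data race. Two threads increment a shared counter without synchronization, so the final value depends on how their reads and writes interleave; the iteration count is an arbitrary illustrative choice.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter = counter + 1;    /* read-modify-write, not atomic */
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    pthread_create(&p1, NULL, bump, NULL);
    pthread_create(&p2, NULL, bump, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    /* Often prints less than 2000000 because updates are lost. */
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}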

11 Atomic Exchange Support
Hardware support is needed for synchronization mechanisms to avoid data races, where the results of the program can change depending on how events happen to occur. A data race is two memory accesses from different threads to the same location, at least one of which is a write.
An atomic exchange (atomic swap) interchanges a value in a register with a value in memory atomically, i.e., as one operation (instruction). Implementing an atomic exchange would require both a memory read and a memory write in a single, uninterruptible instruction. An alternative is a pair of specially configured instructions:
ll $t1, 0($s1)    # load linked
sc $t0, 0($s1)    # store conditional

12 Atomic Exchange with ll and sc
If the contents of the memory location specified by the ll are changed before the sc to the same address occurs, the sc fails (returns a zero).

try: add $t0, $zero, $s4   # $t0 = $s4 (exchange value)
     ll  $t1, 0($s1)       # load memory value to $t1
     sc  $t0, 0($s1)       # try to store exchange value to memory;
                           # on failure, $t0 will be 0
     beq $t0, $zero, try   # try again on failure
     add $s4, $zero, $t1   # put the loaded value in $s4

If the value in memory changes between the ll and the sc, the sc returns 0 in $t0, causing the code sequence to try again. Use care when inserting instructions between the ll and the sc: only register-register instructions can safely be permitted, otherwise it is possible to create deadlock where the processor can never complete the sc (for example, because of repeated page faults).
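Not from the slides: a minimal C11 sketch of the same exchange-until-success idea, used here to build a simple spinlock. The assumption that a MIPS compiler lowers atomic_exchange to an ll/sc retry loop like the one above is a reasonable expectation about the toolchain, not something the slide states.

#include <stdatomic.h>
#include <stdio.h>

static atomic_int lock = 0;

static void acquire(atomic_int *l) {
    /* Keep exchanging 1 into *l until the old value was 0 (lock was free). */
    while (atomic_exchange(l, 1) != 0)
        ;  /* spin */
}

static void release(atomic_int *l) {
    atomic_store(l, 0);
}

int main(void) {
    acquire(&lock);
    puts("in critical section");
    release(&lock);
    return 0;
}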

13 The C Code Translation Hierarchy
(§2.12 Translating and Starting a Program)
C program → compiler → assembly code → assembler → object code → linker (together with library routines) → executable → loader → machine code in memory.
Four logical phases all programs go through:
C code – x.c (or .TXT)
assembly code – x.s (or .ASM)
object code – x.o (or .OBJ)
executable – a.out (or .EXE)

14 Assembler Pseudoinstructions
Most assembler instructions represent machine instructions one-to-one. Pseudoinstructions are figments of the assembler's imagination:
move $t0, $t1     →  add $t0, $zero, $t1
blt  $t0, $t1, L  →  slt $at, $t0, $t1
                     bne $at, $zero, L
$at (register 1): assembler temporary

15 Producing an Object Module
The assembler (or compiler) translates the program into machine instructions and provides the information needed for building a complete program from the pieces:
Header: describes the contents of the object module
Text segment: translated instructions
Static data segment: data allocated for the life of the program
Relocation info: for contents that depend on the absolute location of the loaded program
Symbol table: global definitions and external references
Debug info: for associating with source code
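As a purely hypothetical illustration (no real object-file format such as ELF is being described), a C struct sketching the kind of bookkeeping such a header might record:

#include <stdint.h>

/* Hypothetical header layout: sizes and counts that let a linker locate
 * each of the pieces listed above inside the object module. */
struct obj_header {
    uint32_t text_size;         /* bytes of translated instructions */
    uint32_t static_data_size;  /* bytes of static data             */
    uint32_t reloc_entries;     /* number of relocation records     */
    uint32_t symtab_entries;    /* number of symbol-table entries   */
    uint32_t debug_size;        /* bytes of debug information       */
};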

16 Linking Object Modules
Produces an executable image:
1. Merges segments
2. Resolves labels (determines their addresses)
3. Patches location-dependent and external references
Location dependencies could instead be left for a relocating loader to fix, but with virtual memory there is no need: the program can be loaded into an absolute location in the virtual memory space.

17 Loading a Program
Load from the image file on disk into memory:
1. Read the header to determine segment sizes
2. Create the virtual address space
3. Copy text and initialized data into memory (or set page table entries so they can be faulted in)
4. Set up arguments on the stack
5. Initialize registers (including $sp, $fp, $gp)
6. Jump to the startup routine, which copies arguments to $a0, … and calls main; when main returns, it does the exit syscall

18 Dynamic Linking
Only link/load a library procedure when it is called:
Requires procedure code to be relocatable
Avoids the image bloat caused by static linking of all (transitively) referenced libraries
Automatically picks up new library versions
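Not from the slides: one way to see run-time loading from C is the POSIX dlopen interface. The library name libm.so.6 and the cos symbol are illustrative assumptions; the lazy-linkage stubs on the next slide achieve a similar effect automatically for ordinary calls into shared libraries.

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Open the shared library at run time rather than at (static) link time. */
    void *lib = dlopen("libm.so.6", RTLD_LAZY);
    if (!lib) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    /* Look up the routine by name and call it through a function pointer. */
    double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(lib);
    return 0;
}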

19 Lazy Linkage
[Figure: lazy linkage through an indirection table. A stub loads the routine ID and jumps to the linker/loader; the linker/loader code maps in the routine so that later calls reach the dynamically mapped code.]

20 Starting Java Applications
[Figure: a Java program is compiled to a simple, portable instruction set (bytecodes) for the JVM. The JVM interprets the bytecodes, and a Just-In-Time compiler compiles the bytecodes of "hot" methods into native code for the host machine.]

21 C Sort Example
Illustrates the use of assembly instructions for a C bubble sort function. Swap procedure (leaf): (§2.13 A C Sort Example to Put It All Together)

void swap(int v[], int k)
{
  int temp;
  temp = v[k];
  v[k] = v[k+1];
  v[k+1] = temp;
}

Register assignments: v in $a0, k in $a1, temp in $t0

22 The Procedure Swap
swap: sll $t1, $a1, 2    # $t1 = k * 4
      add $t1, $a0, $t1  # $t1 = v + (k * 4) (address of v[k])
      lw  $t0, 0($t1)    # $t0 (temp) = v[k]
      lw  $t2, 4($t1)    # $t2 = v[k+1]
      sw  $t2, 0($t1)    # v[k] = $t2 (v[k+1])
      sw  $t0, 4($t1)    # v[k+1] = $t0 (temp)
      jr  $ra            # return to calling routine

23 The Sort Procedure in C
Non-leaf (calls swap):

void sort(int v[], int n)
{
  int i, j;
  for (i = 0; i < n; i += 1) {
    for (j = i - 1; j >= 0 && v[j] > v[j + 1]; j -= 1) {
      swap(v, j);
    }
  }
}

Register assignments: v in $a0, n in $a1, i in $s0, j in $s1

24 The Procedure Body
         move $s2, $a0           # save $a0 into $s2          [move params]
         move $s3, $a1           # save $a1 into $s3
         move $s0, $zero         # i = 0                      [outer loop]
for1tst: slt  $t0, $s0, $s3      # $t0 = 0 if $s0 ≥ $s3 (i ≥ n)
         beq  $t0, $zero, exit1  # go to exit1 if $s0 ≥ $s3 (i ≥ n)
         addi $s1, $s0, -1       # j = i – 1                  [inner loop]
for2tst: slti $t0, $s1, 0        # $t0 = 1 if $s1 < 0 (j < 0)
         bne  $t0, $zero, exit2  # go to exit2 if $s1 < 0 (j < 0)
         sll  $t1, $s1, 2        # $t1 = j * 4
         add  $t2, $s2, $t1      # $t2 = v + (j * 4)
         lw   $t3, 0($t2)        # $t3 = v[j]
         lw   $t4, 4($t2)        # $t4 = v[j + 1]
         slt  $t0, $t4, $t3      # $t0 = 0 if $t4 ≥ $t3
         beq  $t0, $zero, exit2  # go to exit2 if $t4 ≥ $t3
         move $a0, $s2           # 1st param of swap is v (old $a0)   [pass params & call]
         move $a1, $s1           # 2nd param of swap is j
         jal  swap               # call swap procedure
         addi $s1, $s1, -1       # j –= 1                     [inner loop]
         j    for2tst            # jump to test of inner loop
exit2:   addi $s0, $s0, 1        # i += 1                     [outer loop]
         j    for1tst            # jump to test of outer loop

25 The Full Procedure
sort:  addi $sp, $sp, -20   # make room on stack for 5 registers
       sw   $ra, 16($sp)    # save $ra on stack
       sw   $s3, 12($sp)    # save $s3 on stack
       sw   $s2,  8($sp)    # save $s2 on stack
       sw   $s1,  4($sp)    # save $s1 on stack
       sw   $s0,  0($sp)    # save $s0 on stack
       …                    # procedure body
exit1: lw   $s0,  0($sp)    # restore $s0 from stack
       lw   $s1,  4($sp)    # restore $s1 from stack
       lw   $s2,  8($sp)    # restore $s2 from stack
       lw   $s3, 12($sp)    # restore $s3 from stack
       lw   $ra, 16($sp)    # restore $ra from stack
       addi $sp, $sp, 20    # restore stack pointer
       jr   $ra             # return to calling routine

26 Compiler Benefits
Comparing performance for bubble (exchange) sort: sorting 100,000 words with the array initialized to random values, on a Pentium 4 with a 3.06 GHz clock rate, a 533 MHz system bus, and 2 GB of DDR SDRAM, running Linux.

gcc optimization             Relative performance   Clock cycles (M)   Instr count (M)   CPI
None                         1.00                   158,615            114,938           1.38
O1 (medium)                  2.37                    66,990             37,470           1.79
O2 (full)                    2.38                    66,521             39,993           1.66
O3 (procedure integration)   2.41                    65,747             44,993           1.46

The unoptimized code has the best CPI and the O1 version has the lowest instruction count, but the O3 version is the fastest. Why?
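Not from the slides: a small C check of how the table's columns relate. CPI is clock cycles divided by instruction count, and at a fixed clock rate the relative performance is the ratio of cycle counts, so total cycles (instruction count × CPI) is what determines the ranking.

#include <stdio.h>

int main(void) {
    double none_cycles = 158615e6, none_insts = 114938e6;
    double o3_cycles   =  65747e6, o3_insts   =  44993e6;

    printf("CPI (none) = %.2f\n", none_cycles / none_insts);   /* ~1.38 */
    printf("CPI (O3)   = %.2f\n", o3_cycles / o3_insts);       /* ~1.46 */
    printf("O3 relative performance = %.2f\n",
           none_cycles / o3_cycles);                            /* ~2.41 */
    return 0;
}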

27 Effect of Compiler Optimization
Compiled with gcc for Pentium 4 under Linux

28 Speedup Quick vs. Bubble
Sorting in C versus Java: comparing performance for two sort algorithms (bubble sort vs. quicksort) in C and Java. The JVM/JIT is Sun/HotSpot version 1.3.1/1.3.1.

Language   Method         Opt    Relative performance        Speedup Quick vs. Bubble
                                  Bubble       Quick
C          Compiler       None   1.00                        2468
C          Compiler       O1     2.37         1.50           1562
C          Compiler       O2     2.38                        1555
C          Compiler       O3     2.41         1.91           1955
Java       Interpreted           0.12         0.05           1050
Java       JIT compiler          2.13         0.29            338

Observations: The unoptimized C program is 8.3 times faster than the interpreted Java code for bubble sort. Using the JIT makes Java 2.1 times faster than unoptimized C and within a factor of 1.13 of the most highly optimized C code. The ratios aren't as close for quicksort, presumably because it is harder to amortize the cost of runtime compilation over the shorter execution time. The last column demonstrates the impact of a better algorithm.

29 Effect of Language and Algorithm

30 Lessons Learned
Instruction count and CPI are not good performance indicators in isolation.
Compiler optimizations are sensitive to the algorithm.
Java/JIT-compiled code is significantly faster than JVM-interpreted code, and comparable to optimized C in some cases.
Nothing can fix a dumb algorithm!

31 Arrays vs. Pointers
Array indexing involves multiplying the index by the element size and adding that to the array base address. Pointers correspond directly to memory addresses and can avoid the indexing complexity. (§2.14 Arrays versus Pointers)

32 Example: Clearing an Array
clear1(int array[], int size)
{
  int i;
  for (i = 0; i < size; i += 1)
    array[i] = 0;
}

        move $t0, $zero        # i = 0
loop1:  sll  $t1, $t0, 2       # $t1 = i * 4
        add  $t2, $a0, $t1     # $t2 = &array[i]
        sw   $zero, 0($t2)     # array[i] = 0
        addi $t0, $t0, 1       # i = i + 1
        slt  $t3, $t0, $a1     # $t3 = (i < size)
        bne  $t3, $zero, loop1 # if (…) goto loop1

clear2(int *array, int size)
{
  int *p;
  for (p = &array[0]; p < &array[size]; p = p + 1)
    *p = 0;
}

        move $t0, $a0          # p = &array[0]
        sll  $t1, $a1, 2       # $t1 = size * 4
        add  $t2, $a0, $t1     # $t2 = &array[size]
loop2:  sw   $zero, 0($t0)     # Memory[p] = 0
        addi $t0, $t0, 4       # p = p + 4
        slt  $t3, $t0, $t2     # $t3 = (p < &array[size])
        bne  $t3, $zero, loop2 # if (…) goto loop2

33 Comparison of Array vs. Pointer Versions
The multiply is “strength reduced” to a shift: both versions use sll instead of mul. The array version requires the shift to be inside the loop, as part of the index calculation for the incremented i, c.f. simply incrementing the pointer. The compiler can achieve the same effect as manual use of pointers (induction variable elimination), so it is better to make the program clearer and safer. Optimizing compilers do these optimizations, and many more! (See the section on the CD-ROM.)

34 ARM & MIPS Similarities
ARM is the most popular embedded core, with a basic instruction set similar to MIPS. (§2.16 Real Stuff: ARM Instructions)

                          ARM             MIPS
Date announced            1985            1985
Instruction size          32 bits         32 bits
Address space             32-bit flat     32-bit flat
Data alignment            Aligned         Aligned
Data addressing modes     9               3
Registers                 15 × 32-bit     31 × 32-bit
Input/output              Memory mapped   Memory mapped

35 Compare and Branch in ARM
ARM uses condition codes for the result of an arithmetic/logical instruction: negative, zero, carry, and overflow are stored in the program status. It has compare instructions that set the condition codes without keeping the result. Each instruction can be conditional: the top 4 bits of the instruction word hold a condition value. This can avoid branches over single instructions, saving code space and execution time.

36 Instruction Encoding

37 The Intel x86 ISA
Evolution with backward compatibility (§2.17 Real Stuff: x86 Instructions):
8080 (1974): 8-bit microprocessor; accumulator plus 3 index-register pairs
8086 (1978): 16-bit extension to the 8080; complex instruction set (CISC)
8087 (1980): floating-point coprocessor; adds FP instructions and a register stack
80286 (1982): 24-bit addresses, MMU; segmented memory mapping and protection
80386 (1985): 32-bit extension (now IA-32); additional addressing modes and operations; paged memory mapping as well as segments

38 The Intel x86 ISA
Further evolution…
i486 (1989): pipelined, on-chip caches and FPU; compatible competitors: AMD, Cyrix, …
Pentium (1993): superscalar, 64-bit datapath; later versions added MMX (Multi-Media eXtension) instructions; the infamous FDIV bug
Pentium Pro (1995), Pentium II (1997): new microarchitecture (see Colwell, The Pentium Chronicles)
Pentium III (1999): added SSE (Streaming SIMD Extensions) and associated registers
Pentium 4 (2001): new microarchitecture; added SSE2 instructions

39 The Intel x86 ISA
And further…
AMD64 (2003): extended the architecture to 64 bits
EM64T – Extended Memory 64 Technology (2004): AMD64 adopted by Intel (with refinements); added SSE3 instructions
Intel Core (2006): added SSE4 instructions and virtual machine support
AMD64 (announced 2007): SSE5 instructions; Intel declined to follow, instead…
Advanced Vector Extensions (announced 2008): longer SSE registers, more instructions
If Intel didn’t extend with compatibility, its competitors would! Technical elegance ≠ market success.

40 Basic x86 Registers

41 Basic x86 Addressing Modes
Two operands per instruction: a source/destination operand and a second source operand, which can be a register, an immediate, or memory.
Memory addressing modes:
Address in register
Address = Rbase + displacement
Address = Rbase + 2^scale × Rindex (scale = 0, 1, 2, or 3)
Address = Rbase + 2^scale × Rindex + displacement
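Not from the slides: a small C sketch of the most general addressing mode above, address = Rbase + 2^scale × Rindex + displacement, with illustrative register values.

#include <stdint.h>
#include <stdio.h>

/* Compute the effective address for base + (index << scale) + displacement. */
static uint32_t effective_address(uint32_t base, uint32_t index,
                                  unsigned scale, int32_t displacement) {
    return base + (index << scale) + (uint32_t)displacement;  /* scale = 0..3 */
}

int main(void) {
    /* e.g., base 0x1000, index 5 scaled by 2^2 = 4, displacement 8 */
    printf("0x%x\n", effective_address(0x1000, 5, 2, 8));  /* 0x1000 + 20 + 8 = 0x101c */
    return 0;
}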

42 x86 Instruction Encoding
Variable-length encoding. Postfix bytes specify the addressing mode; prefix bytes modify the operation (operand length, repetition, locking, …).

43 Implementing IA-32
The complex instruction set makes implementation difficult. Hardware translates instructions to simpler micro-operations: simple instructions 1-to-1, complex instructions 1-to-many. The microengine is similar to RISC, and market share makes this economically viable. Performance is comparable to RISC, and compilers avoid the complex instructions.

44 Fallacies
Fallacy: powerful instructions ⇒ higher performance. Fewer instructions are required, but complex instructions are hard to implement and may slow down all instructions, including simple ones. Compilers are good at making fast code from simple instructions.
Fallacy: use assembly code for high performance. Modern compilers are better at dealing with modern processors, and more lines of code mean more errors and less productivity. (§2.18 Fallacies and Pitfalls)

45 Fallacies
Fallacy: backward compatibility ⇒ the instruction set doesn’t change. It is true that old instructions never die (backward compatibility), but new instructions are certainly added, as the growth of the x86 instruction set shows.

46 Concluding Remarks
The stored program concept (von Neumann architecture) means “everything is just bits”: numbers, characters, instructions, etc. are all stored in and fetched from memory.
Four design principles for instruction set architectures (ISAs):
Simplicity favors regularity
Smaller is faster
Make the common case fast
Good design demands good compromises

47 Concluding Remarks
The MIPS ISA offers the necessary support for HLL constructs. SPEC performance measures instruction execution in benchmark programs.

Instruction class   MIPS examples (HLL examples)                                                   SPEC2006 Int   SPEC2006 FP
Arithmetic          add, sub, addi (ops used in assignment statements)                             16%            48%
Data transfer       lw, sw, lb, lbu, lh, lhu, sb, lui (references to data structures, e.g. arrays) 35%            36%
Logical             and, or, nor, andi, ori, sll, srl (ops used in assignment statements)          12%             4%
Cond. branch        beq, bne, slt, slti, sltiu (if statements and loops)                           34%             8%
Jump                j, jr, jal (calls, returns, and case/switch)                                     2%             0%

