Preliminary: Computer Organization and Design, 3rd edition; The Designer's Guide to VHDL.


Computer Organization and Design This book and the slightly more advanced Computer Architecture: A Quantitative Approach are the dominant computer architecture textbooks. Too big to cover completely. Chapter 1: read on your own. Chapters 2-7: covered in detail, skipping some material. Chapters 8-9: coverage will depend on time. Supplemented with additional material.

Introduction We will learn not just how computers work, but also how and why computers evolved into the current generation. Computers have undergone rapid change: increasing performance at decreasing cost.

History Electronic computers did not really begin until World War II. The widespread use of microprocessors began about 35 years ago. Personal computers were not taken seriously until the introduction of the IBM PC in 1981.

Computers have produced the information revolution. Agricultural revolution: several thousand years. Industrial revolution: several hundred years. Information revolution: a couple of decades.

My History I started at NASA in 1963 and worked in the computer division. At that time computers were very expensive and very large. Programs were written on punched cards (one line per card) and fed into the computer. The state-of-the-art supercomputer that I worked with was capable of executing one million instructions per second; it cost several million dollars and occupied the major portion of a large building.

My History continued In the 60's I did early work in interactive computing techniques, a forerunner of what we do today, but not practical when it required a dedicated supercomputer connected to a display console. In 1974 I completed my PhD and started working with microprocessors.

Moore’s Law One of the founders of Intel, Dr. Gordon Moore, observed in 1965 that the number of transistors in an IC was doubling every year. He predicted that this would continue for a couple of decades and then slow to doubling every 18 months. This prediction has proved remarkably accurate, so much so that it has come to be expected, and Moore’s prediction has become known as Moore’s law. It is worth noting that ever since Moore made his prediction, the consensus at any time has been that it would last for only about ten more years. Economics, not technology, may be what actually stops Moore’s law.


SISD: single instruction stream – single data stream

MIMD: multiple instruction stream – multiple data stream

SIMD: single instruction stream – multiple data stream


Definitions Efficiency is the measure of how close we come to achieving the ideal speedup.
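In symbols (notation assumed here; the slide itself gives only the prose definition): with T_1 the time on one processor and T_p the time on p processors,

```latex
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p} = \frac{T_1}{p\,T_p}
```

Ideal speedup is S_p = p, which gives efficiency E_p = 1.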


Assumptions All operations take one unit of time. All instructions and data are available when needed; i.e., we don’t have to wait for memory or communication. This is naïve and completely unrealistic, but it can be used to teach some fundamental truths.

Conventional uniprocessor (SISD), computing A1*B1, A2*B2, A3*B3, A4*B4, …, An*Bn [timing chart omitted]

Multiprocessor MIMD with unlimited processors (SIMD gives the same results), computing A1*B1, A2*B2, …, An*Bn [timing chart omitted]

Multifunction computer (2 multiply units), computing A1*B1, A2*B2, …, An*Bn [timing chart omitted]

SISD – single processor, combining A1, A2, A3, …, An [timing chart omitted]

MIMD or SIMD – with unlimited processors, combining A1, A2, …, An [timing chart omitted]

The algorithm can affect speedup: SISD evaluation of A*(B*C*D + E) [timing chart omitted]

The algorithm can affect speedup: MIMD evaluation of A*(B*C*D + E) [timing chart omitted]

SISD Method A: evaluating A0 + A1*X + A2*X*X + A3*X*X*X + A4*X*X*X*X [timing chart omitted]

SISD Method B: evaluating A0 + A1*X + A2*X*X + A3*X*X*X + A4*X*X*X*X [timing chart omitted]

SISD Method C: evaluating the Horner form A0 + X*(A1 + X*(A2 + X*(A3 + X*(A4 + …)))) [timing chart omitted]

MIMD with unlimited processors: evaluating the Horner form A0 + X*(A1 + X*(A2 + X*(A3 + X*(A4 + …)))) [timing chart omitted]

MIMD with unlimited processors: evaluating A0 + A1*X + A2*X*X + A3*X*X*X + A4*X*X*X*X [timing chart omitted]

SIMD with unlimited processors: evaluating A0 + A1*X + A2*X*X + A3*X*X*X + A4*X*X*X*X [timing chart omitted]
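The methods above trade multiplications for structure; a small sketch (plain Python, function names invented) contrasting the naive term-by-term sum with the Horner form of Method C, counting operations for each:

```python
# Compare two of the slide's polynomial-evaluation strategies:
# the naive sum A0 + A1*X + A2*X*X + ... and Horner's rule
# A0 + X*(A1 + X*(A2 + ...)). Coefficient values are made up.

def eval_naive(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) term by term, counting multiplies and adds."""
    total, mults, adds = 0, 0, 0
    for i, a in enumerate(coeffs):
        term = a
        for _ in range(i):          # i explicit multiplications by x
            term *= x
            mults += 1
        total += term
        adds += 1
    return total, mults, adds - 1   # the first "add" just initializes the sum

def eval_horner(coeffs, x):
    """Evaluate via Horner's rule: one multiply and one add per coefficient."""
    total, mults, adds = coeffs[-1], 0, 0
    for a in reversed(coeffs[:-1]):
        total = total * x + a
        mults += 1
        adds += 1
    return total, mults, adds

coeffs = [3, 1, 4, 1, 5]            # A0..A4, arbitrary example values
x = 2
print(eval_naive(coeffs, x))        # → (109, 10, 4): O(N^2) multiplies
print(eval_horner(coeffs, x))       # → (109, 4, 4): N multiplies, N adds
```

Both produce the same value; Horner needs only N multiplies, which is why it is the preferred sequential method.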

Using an MIMD machine with unlimited processors, the time to compute is given by a recurrence on T(i) for i > 2 [the defining formulas were figures on the slide and are lost in this transcript]. The slide works an example for N = 9 [lost]; for N = 3000, T(3000) = 23.

Compute using a SISD computer with an add-multiply arithmetic unit. T(4) = 4

Pipeline examples


Using a four-segment pipeline with the restriction that the pipeline must empty before a new type of operation (add, multiply, etc.) can begin: T4(4) = 16 segment times.

Using a four-segment pipeline that does not require the pipeline to empty before a new type of operation can begin: T4(4) = 15 segment times.

EGRE 426 Fall 09 Handout 02

Pipeline examples continued from last class.


Chapter 2 The MIPS processor In this class we will focus on the MIPS processor. The MIPS is an example of a reduced instruction set computer (RISC). Hennessy and Patterson were early advocates of RISC architecture and were responsible for much of the early development of RISC concepts. Hennessy left Stanford to found MIPS Computer Systems, and the first commercial MIPS processor was introduced in 1985. MIPS processors are used in a number of embedded systems, including game consoles. We will concentrate on an older 32-bit version of the MIPS; 64-bit SIMD versions of MIPS processors are available. The impact of MIPS has been significantly reduced by the prevalence of the Intel x86 processors (CISC).

Instructions: Language of the Machine We’ll be working with the MIPS instruction set architecture, a RISC architecture similar to other architectures developed since the 1980's. Almost 100 million MIPS processors were manufactured in 2002, used by NEC, Nintendo, Cisco, Silicon Graphics, Sony, …

MIPS arithmetic All instructions have 3 operands. Operand order is fixed (destination first). Example: C code: a = b + c MIPS code: add a, b, c # a <- b + c (we’ll talk about registers in a bit) “The natural number of operands for an operation like addition is three… requiring every instruction to have exactly three operands, no more and no less, conforms to the philosophy of keeping the hardware simple” However, note that two operands are typical on most computers: add a, b # a <- a + b

MIPS arithmetic Design Principle: simplicity favors regularity. Of course this complicates some things... C code: a = b + c + d; MIPS code: add a, b, c add a, a, d On the MIPS, operands must be registers; only 32 registers are provided. Each register contains 32 bits. Design Principle: smaller is faster. Why?
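The translation above is mechanical: a chain of additions becomes a sequence of three-operand adds that accumulate into the destination. A compiler-style sketch (the helper name is invented, not from the slides):

```python
def lower_add_chain(dest, srcs):
    """Flatten dest = srcs[0] + srcs[1] + ... into three-operand MIPS adds,
    accumulating into dest, as in the slide's a = b + c + d example."""
    instrs = [f"add {dest}, {srcs[0]}, {srcs[1]}"]
    for s in srcs[2:]:
        instrs.append(f"add {dest}, {dest}, {s}")
    return instrs

print(lower_add_chain("a", ["b", "c", "d"]))
# → ['add a, b, c', 'add a, a, d']
```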

Registers vs. Memory Arithmetic instruction operands must be registers — only 32 registers are provided. The compiler associates variables with registers. What about programs with lots of variables? Use memory. [block diagram of processor (control and datapath), memory, and input/output omitted]

Memory Organization Memory is viewed as a large, single-dimension array with an address. A memory address is an index into the array. "Byte addressing" means that the index points to a byte of memory. A 32-bit word consists of 4 aligned bytes; the address of the word points to the most significant byte of the word (big endian). [figure of byte-addressed memory, 8 bits of data at each of addresses 0, 1, 2, 3, … omitted]

Memory Organization Byte addressing is used, but most data items use "words". For MIPS, a word is 32 bits or 4 bytes. There are 2^32 bytes with byte addresses from 0 to 2^32-1, and 2^30 words with byte addresses 0, 4, 8, ..., 2^32-4. Words are aligned: what are the 2 least significant bits of a word address? Registers hold 32 bits of data. [figure of word-addressed memory, 32 bits of data at each of addresses 0, 4, 8, 12, … omitted]
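The alignment question has a direct answer: the two least significant bits of an aligned word address are 00. A quick sketch (plain Python, name invented):

```python
def is_word_aligned(addr):
    """A MIPS word address is 4-byte aligned: its low 2 bits are 00."""
    return addr & 0b11 == 0

aligned = [a for a in range(0, 16) if is_word_aligned(a)]
print(aligned)  # → [0, 4, 8, 12]
```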

add $4, $5, $9 The assembler also recognizes the symbolic register names: add $a0, $a1, $t1

Load and store instructions Example: C code: A[12] = h + A[8]; MIPS code: lw $t0, 32($s3) # $t0 <- Memory[$s3+32] add $t0, $s2, $t0 # $t0 <- $s2 + $t0 sw $t0, 48($s3) # Memory[$s3+48] <- $t0 Why 32 and 48? The word indices 8 and 12 are scaled by 4 bytes per word. Can refer to registers by name (e.g., $s2, $t2) instead of number. Store word has the destination last. Remember, arithmetic operands are registers, not memory! Can’t write: add 48($s3), $s2, 32($s3)
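That index-to-offset scaling can be sketched directly (helper name invented):

```python
WORD = 4  # bytes per MIPS word

def byte_offset(index):
    """Byte offset of A[index] from the array base, for 4-byte words."""
    return index * WORD

# The slide's A[8] and A[12] accesses:
print(byte_offset(8), byte_offset(12))  # → 32 48
```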

Our First Example swap(int v[], int k) { int temp; temp = v[k]; v[k] = v[k+1]; v[k+1] = temp; } With $5 holding k and $? holding v[], can we figure out the code? swap: muli $2, $5, 4 add $2, $4, $2 lw $15, 0($2) lw $16, 4($2) sw $16, 0($2) sw $15, 4($2) jr $31

So far we’ve learned: MIPS — loading words but addressing bytes — arithmetic on registers only
Instruction Meaning
add $s1, $s2, $s3 $s1 = $s2 + $s3
sub $s1, $s2, $s3 $s1 = $s2 – $s3
lw $s1, 100($s2) $s1 = Memory[$s2+100]
sw $s1, 100($s2) Memory[$s2+100] = $s1

Machine Language Instructions, like registers and words of data, are also 32 bits long. Example: add $t1, $s1, $s2 Registers have numbers: $t1=9, $s1=17, $s2=18 Instruction format: 000000 10001 10010 01001 00000 100000 op rs rt rd shamt funct Can you guess what the field names stand for? See page 63. Board work: binary numbers
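Packing the six R-format fields into one 32-bit word can be sketched as follows (encoder name invented; field widths are 6, 5, 5, 5, 5, 6 bits):

```python
def encode_r(op, rs, rt, rd, shamt, funct):
    """Pack R-format fields (6,5,5,5,5,6 bits) into a 32-bit MIPS word."""
    return (op << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

# add $t1, $s1, $s2: op=0, rs=$s1=17, rt=$s2=18, rd=$t1=9, shamt=0, funct=0x20
word = encode_r(0, 17, 18, 9, 0, 0x20)
print(f"{word:032b}")  # → 00000010001100100100100000100000
```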

Machine Language Consider the load-word and store-word instructions. What would the regularity principle have us do? New principle: good design demands a compromise. Introduce a new type of instruction format, I-type, for data transfer instructions (the other format was R-type, for register instructions). Example: lw $t0, 32($s2) 35 18 8 32 op rs rt 16-bit number Where's the compromise? What size offset would the programmer like? Why is it only 16 bits? Why not get rid of the rs field and make the offset 6 bits bigger?
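The same packing idea for the I format (encoder name invented; the 16-bit immediate is masked so the fields cannot overlap):

```python
def encode_i(op, rs, rt, imm):
    """Pack I-format fields (6,5,5,16 bits) into a 32-bit MIPS word."""
    return (op << 26) | (rs << 21) | (rt << 16) | (imm & 0xFFFF)

# lw $t0, 32($s2): op=35, rs=$s2=18, rt=$t0=8, immediate=32
word = encode_i(35, 18, 8, 32)
print(hex(word))  # → 0x8e480020
```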

Stored Program Concept Instructions are bits. Programs are stored in memory, to be read or written just like data. Fetch & Execute Cycle: instructions are fetched and put into a special register; bits in the register "control" the subsequent actions; fetch the "next" instruction and continue. [figure of a processor attached to a memory holding data, programs, compilers, editors, etc. omitted]

Control Decision-making instructions alter the control flow, i.e., change the "next" instruction to be executed. MIPS conditional branch instructions: bne $t0, $t1, Label beq $t0, $t1, Label Example: if (i==j) h = i + j; bne $s0, $s1, Label add $s3, $s0, $s1 Label: ....

Control MIPS unconditional branch instruction: j label Example: C code: if (i!=j) h = i + j; else h = i - j; MIPS code: beq $s4, $s5, Lab1 add $s3, $s4, $s5 j Lab2 Lab1: sub $s3, $s4, $s5 Lab2: ...

So far: Instruction Meaning
add $s1,$s2,$s3 $s1 = $s2 + $s3
sub $s1,$s2,$s3 $s1 = $s2 – $s3
lw $s1,100($s2) $s1 = Memory[$s2+100]
sw $s1,100($s2) Memory[$s2+100] = $s1
bne $s4,$s5,L Next instr. is at L if $s4 ≠ $s5
beq $s4,$s5,L Next instr. is at L if $s4 = $s5
j Label Next instr. is at Label
Formats: R: op rs rt rd shamt funct I: op rs rt 16-bit address J: op 26-bit address

Control Flow We have beq and bne; what about branch-if-less-than? New instruction: slt $t0, $s1, $s2 means: if $s1 < $s2 then $t0 = 1 else $t0 = 0 Board work: binary numbers
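slt is enough to build branch-if-less-than: the assembler expands the blt pseudoinstruction into slt followed by bne. A sketch of the semantics (plain Python):

```python
def slt(s1, s2):
    """Set-on-less-than: 1 if s1 < s2, else 0 (as in slt $t0, $s1, $s2)."""
    return 1 if s1 < s2 else 0

# blt $s1, $s2, Label is synthesized as:
#   slt $at, $s1, $s2
#   bne $at, $zero, Label
at = slt(3, 7)
branch_taken = (at != 0)
print(at, branch_taken)  # → 1 True
```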

Policy of Use Conventions Register 1 ($at) is reserved for the assembler; registers 26-27 ($k0, $k1) are reserved for the operating system.

Constants Small constants are used quite frequently (50% of operands), e.g., A = A + 5; B = B + 1; C = C - 18; Possible solutions: put 'typical constants' in memory and load them, or create hard-wired registers (like $zero) for constants like one. Why not? Instead, MIPS has immediate instructions: addi $29, $29, 4 slti $8, $18, 10 andi $29, $29, 6 ori $29, $29, 4 Design Principle: make the common case fast. Which format?

How about larger constants? We'd like to be able to load a 32-bit constant into a register. We must use two instructions, starting with a new "load upper immediate" instruction: lui $t0, 1010101010101010 # $t0 = 1010101010101010 0000000000000000 (lower half filled with zeros) Then we must get the lower-order bits right: ori $t0, $t0, 1010101010101010 # $t0 = 1010101010101010 1010101010101010
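The same two-step construction, sketched in Python (the bit pattern is the slide's example):

```python
def lui(imm16):
    """Load upper immediate: place a 16-bit value in the high half, zeros below."""
    return (imm16 & 0xFFFF) << 16

def ori(reg, imm16):
    """OR immediate: zero-extends the 16-bit immediate into the low half."""
    return reg | (imm16 & 0xFFFF)

t0 = lui(0b1010101010101010)
t0 = ori(t0, 0b1010101010101010)
print(f"{t0:032b}")  # → 10101010101010101010101010101010
```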

Assembly Language vs. Machine Language Assembly provides a convenient symbolic representation: much easier than writing down numbers; e.g., the destination comes first. Machine language is the underlying reality; e.g., the destination field is no longer first. Assembly can provide 'pseudoinstructions': e.g., "move $t0, $t1" exists only in assembly and would be implemented using "add $t0, $t1, $zero". When considering performance you should count real instructions.

Overview of MIPS simple instructions, all 32 bits wide very structured, no unnecessary baggage only three instruction formats: R: op rs rt rd shamt funct I: op rs rt 16-bit address J: op 26-bit address rely on the compiler to achieve performance help the compiler where we can

Addresses in Branches and Jumps Instructions: bne $t4,$t5,Label Next instruction is at Label if $t4 ≠ $t5 beq $t4,$t5,Label Next instruction is at Label if $t4 = $t5 j Label Next instruction is at Label Formats: I: op rs rt 16-bit address J: op 26-bit address Addresses are not 32 bits. How do we handle this, as with load and store instructions?

Addresses in Branches Instructions: bne $t4,$t5,Label Next instruction is at Label if $t4≠$t5 beq $t4,$t5,Label Next instruction is at Label if $t4=$t5 Format: I: op rs rt 16-bit address We could specify a register (as lw and sw do) and add it to the 16-bit address. Use the Instruction Address Register (PC = program counter), since most branches are local (principle of locality). Jump instructions just use the high-order bits of the PC, giving address boundaries of 256 MB.
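A sketch of the two target calculations, PC-relative branching and pseudodirect jumping (the example addresses are invented):

```python
def branch_target(pc, offset16):
    """PC-relative branch: target = PC+4 + sign-extended 16-bit offset * 4."""
    if offset16 & 0x8000:                # sign-extend the 16-bit field
        offset16 -= 0x10000
    return pc + 4 + (offset16 << 2)

def jump_target(pc, addr26):
    """Pseudodirect jump: high 4 PC bits, the 26-bit field, two zero bits."""
    return (pc & 0xF0000000) | (addr26 << 2)

print(hex(branch_target(0x00400000, 3)))    # → 0x400010
print(hex(jump_target(0x00400000, 0x100)))  # → 0x400
```

The jump keeps the top 4 bits of the PC, which is why a j instruction cannot cross a 256 MB boundary.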

Page 57

To summarize:

Alternative Architectures Design alternative: provide more powerful operations. The goal is to reduce the number of instructions executed; the danger is a slower cycle time and/or a higher CPI. Let’s look (briefly) at IA-32. “The path toward operation complexity is thus fraught with peril. To avoid these problems, designers have moved toward simpler instructions”

IA-32
1978: The Intel 8086 is announced (16-bit architecture)
1980: The 8087 floating-point coprocessor is added
1982: The 80286 increases the address space to 24 bits, adds instructions
1985: The 80386 extends to 32 bits, new addressing modes
1989-1995: The 80486, Pentium, and Pentium Pro add a few instructions (mostly designed for higher performance)
1997: 57 new “MMX” instructions are added; Pentium II
1999: The Pentium III adds another 70 instructions (SSE)
2001: Another 144 instructions (SSE2)
2003: AMD extends the architecture to increase the address space to 64 bits, widens all registers to 64 bits, and makes other changes (AMD64)
2004: Intel capitulates and embraces AMD64 (calling it EM64T) and adds more media extensions
“This history illustrates the impact of the “golden handcuffs” of compatibility” “adding new features as someone might add clothing to a packed bag” “an architecture that is difficult to explain and impossible to love”

IA-32 Overview Complexity: instructions from 1 to 17 bytes long one operand must act as both a source and destination one operand can come from memory complex addressing modes, e.g., “base or scaled index with 8 or 32 bit displacement” Saving grace: the most frequently used instructions are not too difficult to build compilers avoid the portions of the architecture that are slow “what the 80x86 lacks in style is made up in quantity, making it beautiful from the right perspective”

IA-32 Registers and Data Addressing Registers in the 32-bit subset that originated with 80386

IA-32 Register Restrictions Registers are not “general purpose” – note the restrictions below

IA-32 Typical Instructions Four major types of integer instructions: data movement, including move, push, pop arithmetic and logical (destination can be a register or memory) control flow (use of condition codes/flags) string instructions, including string move and string compare

An example with a complex addressing mode: ADC AX,[SI+BP-2]

IA-32 Instruction Formats Typical formats (notice the different lengths):

Summary Instruction complexity is only one variable: lower instruction count vs. higher CPI / lower clock rate. Design Principles: simplicity favors regularity smaller is faster good design demands compromise make the common case fast Instruction set architecture: a very important abstraction indeed!

Concluding Remarks Evolution vs. Revolution “More often the expense of innovation comes from being too disruptive to computer users” “Acceptance of hardware ideas requires acceptance by software people; therefore hardware people should learn about software. And if software people want good machines, they must learn more about hardware to be able to communicate with and thereby influence hardware engineers.”