William Stallings Computer Organization and Architecture 8th Edition


William Stallings Computer Organization and Architecture 8th Edition Chapter 13 Reduced Instruction Set Computers

Major Advances in Computers (1)
The family concept: IBM System/360 (1964), DEC PDP-8; separates architecture from implementation
Microprogrammed control unit: idea by Wilkes (1951), produced in the IBM S/360 (1964)
Cache memory: IBM S/360 Model 85 (1969)

Major Advances in Computers (2)
Solid state RAM (see memory notes)
Microprocessors: Intel 4004 (1971)
Pipelining: introduces parallelism into the fetch-execute cycle
Multiple processors

The Next Step - RISC
Reduced Instruction Set Computer. Key features:
Large number of general purpose registers, or use of compiler technology to optimize register use
Limited and simple instruction set
Emphasis on optimising the instruction pipeline

Comparison of processors

Driving Force for CISC
Software costs far exceed hardware costs
Increasingly complex high level languages
Semantic gap
Leads to: large instruction sets, more addressing modes, hardware implementations of HLL statements, e.g. CASE (switch) on VAX

Intention of CISC
Ease compiler writing
Improve execution efficiency: complex operations in microcode
Support more complex HLLs

Execution Characteristics
Operations performed
Operands used
Execution sequencing
Studies have been done based on programs written in HLLs; dynamic studies are measured during the execution of the program

Operations
Assignments: movement of data
Conditional statements (IF, LOOP): sequence control
Procedure call-return is very time consuming
Some HLL instructions lead to many machine code operations

Weighted Relative Dynamic Frequency of HLL Operations [PATT82a] (CISC)
(Machine-instruction weighting is a surrogate measure of actual time spent executing; memory-reference weighting is a surrogate measure of actual time spent referencing memory)

         Dynamic Occurrence    Machine-Instruction Weighted    Memory-Reference Weighted
         Pascal     C          Pascal     C                    Pascal     C
ASSIGN   45%        38%        13%        13%                  14%        15%
LOOP     5%         3%         42%        32%                  33%        26%
CALL     15%        12%        31%        33%                  44%        45%
IF       29%        43%        11%        21%                  7%         13%
GOTO     --         3%         --         --                   --         --
OTHER    6%         1%         3%         1%                   2%         1%

Operands (Dynamic Percentage of Operand References)
Mainly local scalar variables; optimisation should concentrate on accessing local variables

                                        Pascal   C      Average
Integer constant                        16%      23%    20%
Scalar variable (80% local variables)   58%      53%    55%
Array/structure (each reference also
 requires a reference to an index
 or pointer)                            26%      24%    25%

Procedure Calls
Very time consuming
Depends on number of parameters passed: 98% of dynamically called procedures were passed fewer than six arguments; 92% used fewer than six local scalar variables
Depends on level of nesting: most programs do not do a lot of calls followed by lots of returns; it is rare to have a long uninterrupted sequence of procedure calls followed by the corresponding sequence of returns; most programs remain confined to a rather narrow window of procedure-invocation depth
Most variables are local (cf. locality of reference)

Implications
HLLs can best be supported by optimising the performance of the most time-consuming features of typical HLL programs:
Large number of registers: optimize operand referencing (reducing memory references); registers have much shorter addresses than memory references
Careful design of pipelines: branch prediction etc.
Simplified (reduced) instruction set

Large Register File
Software solution: require the compiler to allocate registers, based on the most used variables in a given time; requires sophisticated program analysis
Hardware solution: have more registers, so more variables will be in registers

Registers for Local Variables
Store local scalar variables in registers: reduces memory access
Every procedure (function) call changes locality: parameters must be passed, results must be returned, and variables from calling programs must be restored

Register Windows
Only a few parameters
Limited range of depth of call
Use multiple small sets of registers (page 488, Figure 13.1 & page 489, Figure 13.2)
Calls switch to a different set of registers
Returns switch back to a previously used set of registers

Register Windows cont.
Three areas within a register set: parameter registers, local registers, temporary registers
Temporary registers from one set overlap parameter registers from the next
This allows parameter passing without moving data
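As a rough illustration of the overlap, here is a small Python sketch (register counts, class and method names are invented for illustration, not from Stallings): the caller writes into its temporary registers, the call advances the window pointer, and the callee reads the very same physical slots as its parameters.

```python
K = 4  # registers per area (parameter, local, temporary) - illustrative

class WindowedRegisterFile:
    def __init__(self, n_windows=8):
        # Each window spans 3*K slots, but consecutive windows share K slots:
        # the caller's temporaries are the callee's parameters.
        self.regs = [0] * (n_windows * 2 * K + K)
        self.cwp = 0  # current window pointer (index of active window)

    def base(self):
        return self.cwp * 2 * K

    def set_temp(self, i, value):
        # temporaries occupy the last K slots of the current window
        self.regs[self.base() + 2 * K + i] = value

    def get_param(self, i):
        # parameters occupy the first K slots of the current window
        return self.regs[self.base() + i]

    def call(self):
        self.cwp += 1   # no data is copied: the overlap does the passing

    def ret(self):
        self.cwp -= 1

rf = WindowedRegisterFile()
rf.set_temp(0, 42)        # caller places an argument in a temporary register
rf.call()                 # switch windows on procedure call
print(rf.get_param(0))    # callee sees 42 as its first parameter - no copy
```

The key design point is that the overlap turns parameter passing into pure pointer arithmetic: only `cwp` changes on a call.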

Overlapping Register Windows

Circular Buffer diagram

Operation of Circular Buffer
When a call is made, the current window pointer is moved to show the currently active register window
If all windows are in use, an interrupt is generated and the oldest window (the one furthest back in the call nesting) is saved to memory
A saved window pointer indicates where the next saved window should be restored

Global Variables
Allocated by the compiler to memory: inefficient for frequently accessed variables
Alternatively, have a set of registers in the processor for global variables: fixed in number and available to all procedures

Registers v Cache

Large Register File                               Cache
All local scalars                                 Recently-used local scalars
Individual variables                              Blocks of memory
Compiler-assigned global variables                Recently-used global variables
Save/restore based on procedure nesting depth     Save/restore based on cache replacement algorithm
Register addressing                               Memory addressing

Referencing a local scalar in a window-based register file needs only a virtual register number and a window number; referencing a memory location in cache requires generating a full-width memory address, which is slower than using the register file. One option is to use cache memory for instructions only.

Referencing a Scalar - Window Based Register File

Referencing a Scalar - Cache

Compiler Based Register Optimization
Assume a small number of registers (16-32)
Optimizing their use is up to the compiler
HLL programs usually have no explicit references to registers (cf. C's register int)
Assign a symbolic or virtual register to each candidate variable
Map the (unlimited) symbolic registers to real registers
Symbolic registers that do not overlap can share real registers
If you run out of real registers, some variables use memory

Graph Coloring
Given a graph of nodes and edges, assign a color to each node so that adjacent nodes have different colors, using a minimum number of colors
Nodes are symbolic registers
Two registers that are live in the same program fragment are joined by an edge
Try to color the graph with n colors, where n is the number of real registers
Nodes that cannot be colored are placed in memory
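A minimal greedy sketch of this idea follows (the interference graph and register count are made up for illustration; production compilers use more refined heuristics such as Chaitin-style simplification):

```python
def color(nodes, edges, n_colors):
    """Assign each symbolic register a color (a real register) so that
    interfering registers differ; uncolorable nodes are spilled to memory."""
    adj = {v: set() for v in nodes}
    for a, b in edges:                  # build the interference graph
        adj[a].add(b)
        adj[b].add(a)
    assignment, spilled = {}, []
    for v in nodes:
        used = {assignment[u] for u in adj[v] if u in assignment}
        free = [c for c in range(n_colors) if c not in used]
        if free:
            assignment[v] = free[0]     # lowest-numbered free real register
        else:
            spilled.append(v)           # no real register available: spill
    return assignment, spilled

# Symbolic registers A..E; an edge joins registers live at the same time.
nodes = ["A", "B", "C", "D", "E"]
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
assignment, spilled = color(nodes, edges, n_colors=3)
print(assignment, spilled)   # all five fit in 3 registers, nothing spilled
```

With n_colors=2 the same graph forces a spill, which mirrors the slide's point: nodes that cannot be colored go to memory.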

Graph Coloring Approach
With reasonably sophisticated register optimization techniques, there is only marginal improvement with more than 32 registers. With minimal optimization there is little benefit in using more than 64 registers.

Why CISC (1)?
Compiler simplification? Disputed… Complex machine instructions are harder to exploit, and optimization is more difficult
Smaller programs? A program takes up less memory, but memory is now cheap; programs may not occupy fewer bits, just look shorter in symbolic form; more instructions require longer op-codes, while register references require fewer bits

Why CISC (2)?
Faster programs? There is a bias towards use of the simpler instructions; a more complex control unit means a larger microprogram control store, so simple instructions take longer to execute
It is far from clear that CISC is the appropriate solution

RISC Characteristics
One instruction per cycle: RISC machine instructions should operate as fast as microinstructions on CISC machines
Register to register operations (CISC machines also provide memory-to-memory and register-to-memory instructions)
Few, simple addressing modes: more complex addressing modes can be constructed in software from the simpler ones
Few, simple instruction formats: instruction length is fixed; field locations (e.g. op codes) are fixed; op code decoding and register operand accessing can occur simultaneously; instruction fetching is optimized; alignment on word boundaries ensures that single instructions do not cross page boundaries
Hardwired design (no microcode)
Benefits: instruction pipelining can be applied more efficiently, interrupts are handled more responsively, and more effective optimizing compilers can be developed

RISC v CISC
Not clear cut: many designs borrow from both philosophies, e.g. PowerPC and Pentium II (pages 498-499)

RISC Pipelining
Most instructions are register to register
Two phases of execution: I (instruction fetch) and E (execute: ALU operation with register input and output)
Load and store need a third phase: I, E (calculate memory address), D (memory: register-to-memory or memory-to-register operation)
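The payoff of the two-phase split can be sketched with a back-of-the-envelope cycle count (an idealized model, assuming no stalls or branch effects):

```python
def cycles(n_instructions, phases=2, pipelined=True):
    """Cycles to finish n instructions with `phases` stages each.
    Pipelined: the first instruction takes `phases` cycles, then one
    instruction completes per cycle. Sequential: no overlap at all."""
    if pipelined:
        return phases + (n_instructions - 1)
    return phases * n_instructions

print(cycles(5, pipelined=False))  # 10 cycles executed sequentially
print(cycles(5, pipelined=True))   # 6 cycles with I/E overlapped
```

As n grows the pipelined count approaches one instruction per cycle, which is exactly the "one instruction per cycle" goal stated in the RISC characteristics above.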

Effects of Pipelining

Optimization of Pipelining
Delayed branch: the branch does not take effect until after execution of the following instruction; this following instruction is the delay slot
Delayed load: the register that is the target of the load is locked by the processor; execution of the instruction stream continues until that register is required, then idles until the load completes; re-arranging instructions can allow useful work whilst loading
Loop unrolling: replicate the body of the loop a number of times and iterate the loop fewer times; reduces loop overhead, increases instruction parallelism, and improves register, data cache, or TLB locality

Loop Unrolling Twice Example

do i=2, n-1
  a[i] = a[i] + a[i-1] * a[i+1]
end do

becomes

do i=2, n-2, 2
  a[i] = a[i] + a[i-1] * a[i+1]
  a[i+1] = a[i+1] + a[i] * a[i+2]
end do
if (mod(n-2,2) == 1) then
  a[n-1] = a[n-1] + a[n-2] * a[n]
end if
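A direct Python rendition of the example above makes it easy to check that the unrolled loop computes the same array as the original (the test data and array size are illustrative):

```python
def original(a):
    n = len(a) - 1                 # treat a[1..n], Fortran-style
    for i in range(2, n):          # i = 2 .. n-1
        a[i] = a[i] + a[i-1] * a[i+1]
    return a

def unrolled(a):
    n = len(a) - 1
    i = 2
    while i <= n - 2:              # two copies of the body per iteration
        a[i] = a[i] + a[i-1] * a[i+1]
        a[i+1] = a[i+1] + a[i] * a[i+2]
        i += 2
    if (n - 2) % 2 == 1:           # odd trip count: one iteration left over
        a[n-1] = a[n-1] + a[n-2] * a[n]
    return a

data = [0, 1, 2, 3, 4, 5, 6, 7]    # index 0 unused; a[1..7]
assert original(data[:]) == unrolled(data[:])
print("unrolled loop matches original")
```

Note that the loop carries a dependence (each a[i] uses the freshly updated a[i-1]), so the unrolled version must keep the two body copies in the original order; it removes branch overhead, not the dependence.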

Normal and Delayed Branch

Address   Normal Branch   Delayed Branch   Optimized Delayed Branch
100       LOAD X, rA      LOAD X, rA       LOAD X, rA
101       ADD 1, rA       ADD 1, rA        JUMP 105
102       JUMP 105        JUMP 106         ADD 1, rA
103       ADD rA, rB      NOOP             ADD rA, rB
104       SUB rC, rB      ADD rA, rB       SUB rC, rB
105       STORE rA, Z     SUB rC, rB       STORE rA, Z
106                       STORE rA, Z
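A toy interpreter sketch of the optimized delayed-branch column (the instruction encoding here is invented for illustration): with a delayed branch, the instruction after the JUMP, in the delay slot, still executes before the branch takes effect.

```python
def run(prog, start=100):
    regs = {"rA": 0}
    mem = {"X": 10, "Z": None}
    pc, pending = start, None          # pending = branch target awaiting its delay slot
    while pc in prog:
        next_pc = pending if pending is not None else pc + 1
        pending = None
        op = prog[pc]
        if op[0] == "LOAD":            # LOAD X, rA: register <- memory
            regs[op[2]] = mem[op[1]]
        elif op[0] == "ADD":           # ADD 1, rA: add immediate to register
            regs[op[2]] += op[1]
        elif op[0] == "STORE":         # STORE rA, Z: memory <- register
            mem[op[2]] = regs[op[1]]
        elif op[0] == "JUMP":          # branch takes effect after the delay slot
            pending, next_pc = op[1], pc + 1
        pc = next_pc
    return mem["Z"]

# Optimized delayed branch: the ADD sits in the delay slot after the JUMP.
prog = {
    100: ("LOAD", "X", "rA"),
    101: ("JUMP", 105),
    102: ("ADD", 1, "rA"),             # delay slot: still executed
    105: ("STORE", "rA", "Z"),
}
print(run(prog))                       # Z ends up as X + 1 = 11
```

This shows why the optimized column works: moving the ADD into the delay slot keeps the pipeline busy while still storing the incremented value, with no NOOP needed.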

Use of Delayed Branch

Controversy
Quantitative: compare program sizes and execution speeds
Qualitative: examine issues of high level language support and use of VLSI real estate
Problems: no pair of RISC and CISC machines is directly comparable; there is no definitive set of test programs; it is difficult to separate hardware effects from compiler effects; most comparisons were done on "toy" rather than production machines
Most commercial devices are a mixture

Required Reading Stallings chapter 13 Manufacturer web sites