Slide 1: Introduction to Hardware/Architecture
David A. Patterson
http://cs.berkeley.edu/~patterson/talks
{patterson,kkeeton}@cs.berkeley.edu
EECS, University of California, Berkeley, CA 94720-1776
Slide 2: What is a Computer System?
- Coordination of many levels of abstraction
- [Figure: layers from the Application (Netscape) and Compiler / Operating System (Windows 98) down through the Instruction Set Architecture, Datapath & Control, Digital Design, and Circuit Design to transistors; software above the ISA, hardware (processor, memory, I/O system) below; assembler alongside the compiler]
Slide 3: Levels of Representation
- High-level language program (e.g., C):
  temp = v[k]; v[k] = v[k+1]; v[k+1] = temp;
- Compiler -> assembly language program (e.g., MIPS):
  lw $t0, 0($2)
  lw $t1, 4($2)
  sw $t1, 0($2)
  sw $t0, 4($2)
- Assembler -> machine language program (MIPS), e.g.:
  0000 1001 1100 0110 1010 1111 0101 1000
  1010 1111 0101 1000 0000 1001 1100 0110
  1100 0110 1010 1111 0101 1000 0000 1001
  0101 1000 0000 1001 1100 0110 1010 1111 ...
- Machine interpretation -> control signal specification
Slide 4: The Instruction Set: A Critical Interface
- The instruction set is the interface between software and hardware.
Slide 5: Instruction Set Architecture (a subset of Computer Architecture)
"... the attributes of a [computing] system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation." - Amdahl, Blaauw, and Brooks, 1964
What the ISA specifies (the SOFTWARE view):
- Organization of programmable storage
- Data types & data structures: encodings & representations
- Instruction set
- Instruction formats
- Modes of addressing and accessing data items and instructions
- Exceptional conditions
Slide 6: Anatomy: 5 Components of Any Computer
- Processor (active): Control ("brain") and Datapath ("brawn"); often called (in IBM parlance) the "CPU", for Central Processor Unit
- Memory (passive): where programs and data live when running
- Devices:
  - Input: keyboard, mouse
  - Output: display, printer
  - Disk: where programs and data live when not running
Slide 7: Technology Trends: Microprocessor Capacity
- "Moore's Law": 2X transistors/chip every 1.5 years
- Examples: Alpha 21264: 15 million; Alpha 21164: 9.3 million; PowerPC 620: 6.9 million; Pentium Pro: 5.5 million; Sparc Ultra: 5.2 million
Slide 8: Technology Trends: Processor Performance
- Processor performance has increased about 1.54X per year
- This per-year performance increase is mistakenly referred to as Moore's Law (which is about transistors/chip)
Slide 9: Computer Technology => Dramatic Change
- Processor: 2X in speed every 1.5 years; 1000X performance in the last 15 years
- Memory:
  - DRAM capacity: 2X / 1.5 years; 1000X size in the last 15 years
  - Cost per bit: improves about 25% per year
- Disk:
  - Capacity: > 2X in size every 1.5 years
  - Cost per bit: improves about 60% per year
  - 120X size in the last decade
- State-of-the-art PC "when you graduate" (1997-2001):
  - Processor clock speed: 1500 MHz (1.5 GHz)
  - Memory capacity: 500 MB (0.5 GB)
  - Disk capacity: 100 GB (0.1 TB)
  - New units! Mega => Giga, Giga => Tera
Slide 10: Integrated Circuit Costs
- Die cost = Wafer cost / (Dies per wafer x Die yield)
- Die cost goes roughly with the cube of the die area: a larger die means fewer dies per wafer and a worse yield (see the sketch below)
- [Figure: flaws scattered across the dies on a wafer]
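A minimal C sketch of the die-cost relation above, plugging in the 486DX2 row from the 1993 table two slides below (wafer cost, dies per wafer, and yield come from that table; treat this as an illustration, not a costing tool):

```c
#include <stdio.h>

/* Die cost = wafer cost / (dies per wafer * die yield).
 * Numbers below are the 486DX2 row of the 1993 table in this deck. */
int main(void) {
    double wafer_cost     = 1200.0;  /* dollars */
    double dies_per_wafer = 181.0;
    double die_yield      = 0.54;    /* 54% */

    double die_cost = wafer_cost / (dies_per_wafer * die_yield);
    printf("die cost: $%.0f\n", die_cost);  /* ~$12, matching the table */
    return 0;
}
```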
Slide 11: Die Yield (1993 data)
Raw dies per wafer, by wafer diameter and die area:

  die area (mm²):   100   144   196   256   324   400
  6"/15cm           139    90    62    44    32    23
  8"/20cm           265   177   124    90    68    52
  10"/25cm          431   290   206   153   116    90
  die yield         23%   19%   16%   12%   11%   10%

Typical CMOS process: alpha = 2, wafer yield = 90%, defect density = 2/cm², 4 test sites/wafer

Good dies per wafer (before testing!):

  die area (mm²):   100   144   196   256   324   400
  6"/15cm            31    16     9     5     3     2
  8"/20cm            59    32    19    11     7     5
  10"/25cm           96    53    32    20    13     9

Typical cost of an 8", 4-metal-layer, 0.5 um CMOS wafer: ~$2000
Slide 12: 1993 Real World Examples

  Chip          Metal   Line width  Wafer    Defects  Area    Dies/   Yield   Die
                layers  (um)        cost     /cm²     (mm²)   wafer           cost
  386DX         2       0.90        $900     1.0      43      360     71%     $4
  486DX2        3       0.80        $1200    1.0      81      181     54%     $12
  PowerPC 601   4       0.80        $1700    1.3      121     115     28%     $53
  HP PA 7100    3       0.80        $1300    1.0      196     66      27%     $73
  DEC Alpha     3       0.70        $1500    1.2      234     53      19%     $149
  SuperSPARC    3       0.70        $1700    1.6      256     48      13%     $272
  Pentium       3       0.80        $1500    1.5      296     40      9%      $417

From "Estimating IC Manufacturing Costs," by Linley Gwennap, Microprocessor Report, August 2, 1993, p. 15
Slide 13: Other Costs
- IC cost = (Die cost + Testing cost + Packaging cost) / Final test yield
- Packaging cost depends on pins and heat dissipation

  Chip          Die cost   Package pins   Package type   Package cost   Test & assembly   Total
  386DX         $4         132            QFP            $1             $4                $9
  486DX2        $12        168            PGA            $11            $12               $35
  PowerPC 601   $53        304            QFP            $3             $21               $77
  HP PA 7100    $73        504            PGA            $35            $16               $124
  DEC Alpha     $149       431            PGA            $30            $23               $202
  SuperSPARC    $272       293            PGA            $20            $34               $326
  Pentium       $417       273            PGA            $19            $37               $473
Slide 14: System Cost: 1995-96 Workstation

  System        Subsystem                   % of total cost
  Cabinet       Sheet metal, plastic        1%
                Power supply, fans          2%
                Cables, nuts, bolts         1%
                (Subtotal)                  (4%)
  Motherboard   Processor                   6%
                DRAM (64 MB)                36%
                Video system                14%
                I/O system                  3%
                Printed circuit board       1%
                (Subtotal)                  (60%)
  I/O devices   Keyboard, mouse             1%
                Monitor                     22%
                Hard disk (1 GB)            7%
                Tape drive (DAT)            6%
                (Subtotal)                  (36%)
Slide 15: Cost vs. Price
- Component cost (input: chips, displays, ...)
- + Direct costs, +33% (making it: labor, scrap, returns, ...)
- + Gross margin, +25-100% (overhead: R&D, rent, marketing, profits, ...)
- + Average discount, +50-80% (commission: channel profit, volume discounts)
- Component cost, direct costs, and gross margin together give the average selling price; adding the average discount gives the list price (see the sketch below)
- The figure also gives each layer's share of list price for workstations vs. PCs: (25-31%), (33-45%), (8-10%), (33-14%)
- Q: What % of company income goes to research and development (R&D)?
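As a rough, hypothetical walk up that stack in C: the markups come from the slide, while the $500 starting component cost and the particular midpoints chosen are assumptions for illustration.

```c
#include <stdio.h>

/* Component cost -> direct cost -> average selling price -> list price,
 * using the markups from the slide (+33%, +25-100%, +50-80%). The $500
 * component cost and the midpoints picked are illustrative assumptions. */
int main(void) {
    double component   = 500.0;
    double with_direct = component * 1.33;    /* + direct costs             */
    double asp         = with_direct * 1.60;  /* + gross margin (midpoint)  */
    double list        = asp * 1.65;          /* + average discount         */
    printf("component $%.0f -> +direct $%.0f -> ASP $%.0f -> list $%.0f\n",
           component, with_direct, asp, list);
    return 0;
}
```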
Slide 16: Outline
- Review of five technologies: processor, memory, disk, network, systems
  - Description / history / performance model
  - State of the art / trends / limits / innovation
- Common themes across technologies
  - Performance: per access (latency) + per byte (bandwidth)
  - Fast: capacity, bandwidth, cost; slow: latency, interfaces
  - Moore's Law affects all chips in the system
Slide 17: Processor Trends / History
- Microprocessor: main CPU of "all" computers
  - Before 1986: +35%/yr performance increase (2X / 2.3 yr)
  - After 1987 (RISC): +60%/yr performance increase (2X / 1.5 yr)
- Cost fixed at ~$500/chip; power is whatever can be cooled
- History of innovations to reach 2X / 1.5 yr:
  - Pipelining (helps seconds/clock, i.e., clock rate)
  - Out-of-order execution (helps clocks/instruction)
  - Superscalar (helps clocks/instruction)
  - Multilevel caches (helps clocks/instruction)
- CPU time = Seconds/Program = (Instructions/Program) x (Clocks/Instruction) x (Seconds/Clock), worked out in the sketch below
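A tiny C illustration of that CPU-time identity. The instruction count and CPI are made-up workload numbers; the 600 MHz clock matches the Alpha 21264 described later in the deck.

```c
#include <stdio.h>

/* "Iron law" of processor performance from this slide:
 * CPU time = instructions * CPI * clock period.
 * The workload numbers below are assumptions for illustration. */
int main(void) {
    double instructions = 1e9;      /* assumed: 1 billion dynamic instructions */
    double cpi          = 1.5;      /* assumed clocks per instruction */
    double clock_hz     = 600e6;    /* e.g., the 600 MHz Alpha 21264 later on */

    double seconds = instructions * cpi / clock_hz;
    printf("CPU time: %.2f s\n", seconds);   /* 1e9 * 1.5 / 6e8 = 2.5 s */
    return 0;
}
```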
Slide 18: Pipelining is Natural!
- Laundry example: Ann, Brian, Cathy, and Dave each have one load of clothes to wash, dry, fold, and put away
- Washer takes 30 minutes
- Dryer takes 30 minutes
- "Folder" takes 30 minutes
- "Stasher" takes 30 minutes to put clothes into drawers
Slide 19: Sequential Laundry
- Sequential laundry takes 8 hours for 4 loads
- [Figure: loads A-D run one after another in 30-minute steps on a timeline from 6 PM to 2 AM]
Slide 20: Pipelined Laundry: Start Work ASAP
- Pipelined laundry takes 3.5 hours for 4 loads!
- [Figure: loads A-D overlapped, each stage 30 minutes, on the 6 PM to 2 AM timeline]
Slide 21: Pipeline Hazard: Stall
- A depends on D; stall, since the folder is tied up
- [Figure: loads A-F on the timeline with a bubble in the pipeline]
Slide 22: Out-of-Order Laundry: Don't Wait
- A depends on D; the rest continue; more resources are needed to allow out-of-order completion
- [Figure: loads A-F on the timeline with a bubble]
Slide 23: Superscalar Laundry: Parallel per Stage
- More resources; can the hardware match the mix of parallel tasks?
- [Figure: loads A-F (light, dark, and very dirty clothing) processed in parallel on the timeline]
Slide 24: Superscalar Laundry: Mismatch Mix
- The task mix underutilizes the extra resources
- [Figure: loads A-D (light and dark clothing only) on the timeline]
Slide 25: State of the Art: Alpha 21264
- 15M transistors
- Two 64 KB caches on chip; 16 MB L2 cache off chip
- Clock: 600 MHz (fastest Cray supercomputer, the T90: 2.2 ns cycle)
- 90 watts
- Superscalar: fetches up to 6 instructions/clock cycle, retires up to 4 instructions/clock cycle
- Out-of-order execution
Slide 26: Today's Situation: Microprocessor (MIPS MPUs)

  Feature                R5000      R10000        10k/5k
  Clock rate             200 MHz    195 MHz       1.0x
  On-chip caches         32K/32K    32K/32K       1.0x
  Instructions/cycle     1 (+ FP)   4             4.0x
  Pipe stages            5          5-7           1.2x
  Model                  in-order   out-of-order  ---
  Die size (mm²)         84         298           3.5x
    without cache, TLB   32         205           6.3x
  Development (man-yr)   60         300           5.0x
  SPECint_base95         5.7        8.8           1.6x
Slide 27: Memory History/Trends/State of the Art
- DRAM: main memory of all computers
  - Commodity chip industry: no company has > 20% share
  - Packaged in SIMMs or DIMMs (e.g., 16 DRAMs/SIMM)
- State of the art: $152 for a 128 MB DIMM (16 64-Mbit DRAMs), 10 ns x 64 bits (800 MB/sec)
- Capacity: 4X / 3 yrs (60%/yr), i.e., Moore's Law
- MB/$: +25%/yr
- Latency: -7%/year; bandwidth: +20%/yr (so far)
- source: www.pricewatch.com, 5/21/98
Slide 28: Memory Summary
- DRAM: rapid improvements in capacity, MB/$, and bandwidth; slow improvement in latency
- The processor-memory interface (cache + memory bus) is the bottleneck to delivered bandwidth
  - Like a network, the memory "protocol" is a major overhead
Slide 29: Processor Innovations/Limits
- Low-cost, low-power embedded processors
  - Lots of competition and innovation
  - Integer performance of an embedded processor is roughly 1/2 that of a desktop processor
  - StrongARM 110: 233 MHz, 268 MIPS, 0.36 W typical, $49
- Very Long Instruction Word (Intel/HP IA-64 "Merced")
  - Multiple operations per instruction; the compiler controls parallelism
- Consolidation of the desktop industry? Innovation?
  - [Figure: instruction-set families: PowerPC, PA-RISC, MIPS, Alpha, SPARC, IA-64, x86]
Slide 30: Processor Summary
- SPEC performance doubling every 18 months
  - Growing CPU-DRAM performance gap and "tax"
  - Running out of ideas, competition? Back to 2X / 2.3 yrs?
- Processor tricks not as useful for transactions?
  - Clock-rate increases compensated by CPI increases?
  - When > 100 MIPS on TPC-C?
- Cost fixed at ~$500/chip; power is whatever can be cooled
- Embedded processors are promising
  - 1/10 the cost, 1/100 the power, 1/2 the integer performance?
Slide 31: Processor Limit: DRAM Gap
- Alpha 21264: a full cache miss, measured in instructions executed: 180 ns / 1.7 ns = 108 clocks x 4, or 432 instructions (see the sketch below)
- Caches in the Pentium Pro: 64% of the area, 88% of the transistors
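The same calculation written out in C; the latency, cycle time, and issue width are the slide's numbers, and the result rounds to slightly fewer clocks than the slide's 108.

```c
#include <stdio.h>

/* Cost of a full cache miss in lost instruction slots, using the
 * slide's Alpha 21264 numbers (~180 ns memory latency, ~1.7 ns cycle,
 * 4-wide issue). Rounding differs slightly from the slide. */
int main(void) {
    double miss_latency_ns = 180.0;
    double cycle_ns        = 1.7;
    int    issue_width     = 4;

    double clocks = miss_latency_ns / cycle_ns;   /* ~106 clocks */
    double slots  = clocks * issue_width;         /* ~424 instruction slots */
    printf("~%.0f clocks, ~%.0f instruction issue slots per miss\n",
           clocks, slots);
    return 0;
}
```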
Slide 32: The Goal: Illusion of Large, Fast, Cheap Memory
- Fact: large memories are slow; fast memories are small
- How do we create a memory that is large, cheap, and fast (most of the time)?
- Answer: a hierarchy of levels
  - Similar to the principle of abstraction: hide the details of multiple levels
Slide 33: Hierarchy Analogy: Term Paper in Library
- Working on a paper in the library at a desk
- Option 1: every time you need a book:
  - Leave the desk to go to the shelves (or stacks)
  - Find the book
  - Bring one book back to the desk
  - Read the section you are interested in
  - When done with the section, leave the desk and go to the shelves carrying the book
  - Put the book back on the shelf
  - Return to the desk to work
  - Next time you need a book, go back to the first step
Slide 34: Memory Hierarchy Analogy: Library
- Option 2: every time you need a book:
  - Leave some books on the desk after fetching them
  - Only go to the shelves when you need a new book
  - When you go to the shelves, bring back related books in case you need them; sometimes you'll need to return books not used recently to make space for new books on the desk
  - Return to the desk to work
  - When done, replace the books on the shelves, carrying as many as you can per trip
- Illusion: the whole library is on your desktop
- The buzzword "cache" comes from the French for hidden treasure
Slide 35: Why Hierarchy Works: Natural Locality
- The Principle of Locality: programs access a relatively small portion of the address space at any instant of time
- [Figure: probability of reference plotted across the address space, 0 to 2^n - 1]
- What programming constructs lead to the Principle of Locality? (See the sketch below.)
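One answer, as an illustrative C fragment (not from the slides): ordinary loops over arrays produce both kinds of locality.

```c
#include <stdio.h>

/* Ordinary code like this loop is why locality arises. */
int main(void) {
    int a[1024];
    long sum = 0;
    for (int i = 0; i < 1024; i++) {
        a[i] = i;
    }
    for (int i = 0; i < 1024; i++) {
        /* Spatial locality: a[i] and a[i+1] sit next to each other in memory.
         * Temporal locality: sum, i, and the loop's own instructions are
         * reused on every iteration. */
        sum += a[i];
    }
    printf("%ld\n", sum);
    return 0;
}
```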
Slide 36: Memory Hierarchy: How Does It Work?
- Temporal locality (locality in time): keep the most recently accessed data items closer to the processor
  - Library analogy: recently read books are kept on the desk
  - The block is the unit of transfer (like a book)
- Spatial locality (locality in space): move blocks consisting of contiguous words to the upper levels
  - Library analogy: bring back nearby books on the shelves when you fetch a book, hoping you might need them later for your paper
Slide 37: Memory Hierarchy Pyramid
- [Figure: the Central Processor Unit (CPU) at the top ("upper"), with Level 1, Level 2, Level 3, ..., Level n below ("lower"); the size of memory at each level grows, and cost/MB falls, with increasing distance from the CPU]
- Data cannot be in level i unless it is also in level i+1
Slide 38: Big Idea of Memory Hierarchy
- Temporal locality: keep recently accessed data items closer to the processor
- Spatial locality: move contiguous words in memory to the upper levels of the hierarchy
- Use smaller and faster memory technologies close to the processor
  - Fast hit time in the highest level of the hierarchy
  - Cheap, slow memory furthest from the processor
- If the hit rate is high enough, the hierarchy has an access time close to that of the highest (and fastest) level and a size equal to that of the lowest (and largest) level
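A standard way to quantify that last point (not stated on the slide) is average access time = hit time + miss rate x miss penalty. A small C sketch with assumed numbers:

```c
#include <stdio.h>

/* Average memory access time = hit time + miss rate * miss penalty.
 * Standard relation; the specific numbers are assumptions. */
int main(void) {
    double hit_time_ns     = 2.0;    /* assumed L1 hit time */
    double miss_rate       = 0.05;   /* assumed 5% miss rate */
    double miss_penalty_ns = 100.0;  /* assumed DRAM access */

    double amat = hit_time_ns + miss_rate * miss_penalty_ns;
    printf("AMAT = %.1f ns\n", amat);  /* 2 + 0.05*100 = 7 ns */
    return 0;
}
```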
Slide 39: Recall: 5 Components of Any Computer
- Processor (active): Control ("brain"), Datapath ("brawn")
- Memory (passive): where programs and data live when running
- Devices: Input (keyboard, mouse), Output (display, printer, disk, network)
- Focus now shifts to I/O
Slide 40: Disk Description / History
- 1973: 1.7 Mbit/sq. in., 140 MBytes
- 1979: 7.7 Mbit/sq. in., 2,300 MBytes
- [Figure: disk anatomy: platters, arm, head, sector, track, cylinder; embedded processor (ECC, SCSI) and track buffer]
- source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"
Slide 41: Disk History
- 1989: 63 Mbit/sq. in., 60,000 MBytes
- 1997: 1,450 Mbit/sq. in., 2,300 MBytes (2.5" diameter)
- 1997: 3,090 Mbit/sq. in., 8,100 MBytes (3.5" diameter)
- 2000: 10,100 Mbit/sq. in., 25,000 MBytes
- 2000: 11,000 Mbit/sq. in., 73,400 MBytes
- source: N.Y. Times, 2/23/98, page C3
Slide 42: State of the Art: Ultrastar 72ZX
- 73.4 GB, 3.5-inch disk
- 2 cents/MB
- 16 MB track buffer
- 11 platters, 22 surfaces
- 15,110 cylinders
- 7 Gbit/sq. in. areal density
- 17 watts (idle)
- 0.1 ms controller time
- 5.3 ms average seek (a 1-track seek => 0.6 ms)
- 3 ms = 1/2 rotation
- 37 down to 22 MB/s to the media
- Latency = Queuing time + Controller time + Seek time + Rotation time + Size / Bandwidth (the first four terms are per access; the last is per byte); see the sketch below
- source: www.ibm.com; www.pricewatch.com; 2/14/00
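Plugging this drive's numbers into the latency formula in C, assuming no queuing delay and a 64 KB request at the outer-track media rate (both are assumptions, not slide data):

```c
#include <stdio.h>

/* Disk access latency = queuing + controller + seek + rotation +
 * transfer (size / bandwidth), using the Ultrastar 72ZX numbers on
 * this slide plus an assumed 64 KB request and no queuing. */
int main(void) {
    double queuing_ms    = 0.0;
    double controller_ms = 0.1;
    double seek_ms       = 5.3;
    double rotation_ms   = 3.0;           /* half a rotation on average */
    double size_mb       = 64.0 / 1024.0; /* 64 KB request, assumed */
    double bandwidth_mbs = 37.0;          /* outer-track media rate */

    double transfer_ms = size_mb / bandwidth_mbs * 1000.0;
    double latency_ms  = queuing_ms + controller_ms + seek_ms
                       + rotation_ms + transfer_ms;
    printf("latency ~= %.1f ms (transfer alone %.1f ms)\n",
           latency_ms, transfer_ms);       /* ~10.1 ms */
    return 0;
}
```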
Slide 43: Disk Limit
- Continued advance in capacity (60%/yr) and bandwidth (40%/yr)
- Slow improvement in seek and rotation (8%/yr)
- Time to read the whole disk (checked in the sketch below):

  Year    Sequentially    Randomly
  1990    4 minutes       6 hours
  2000    12 minutes      1 week

- Dynamically change the data layout to reduce seek and rotation delay? Leverage space vs. spindles?
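A back-of-the-envelope check of the year-2000 row in C. The capacity, media rate, sector size, and per-access time below are assumptions chosen to be plausible for the drives on the surrounding slides, not numbers from this slide.

```c
#include <stdio.h>

/* Rough reproduction of the year-2000 row above. Assumptions: 25 GB
 * disk, ~35 MB/s sustained media rate, 512-byte sectors, ~12 ms per
 * random access (seek + rotation + overhead). */
int main(void) {
    double capacity_bytes = 25000.0 * 1e6;
    double media_rate_bps = 35.0 * 1e6;
    double sector_bytes   = 512.0;
    double random_ms      = 12.0;

    double seq_min   = capacity_bytes / media_rate_bps / 60.0;
    double rand_days = capacity_bytes / sector_bytes * random_ms / 1000.0 / 86400.0;
    printf("sequential: ~%.0f minutes, random: ~%.1f days\n", seq_min, rand_days);
    /* ~12 minutes sequentially, ~6.8 days (about a week) randomly */
    return 0;
}
```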
Slide 44: A Glimpse into the Future?
- IBM microdrive for digital cameras: 340 MBytes
- Disk target in 5-7 years?
  - Building block: a 2006 MicroDrive with a 9 GB disk and 50 MB/sec from the disk
  - 10,000 nodes fit into one rack!
Slide 45: Disk Summary
- Continued advance in capacity, cost/bit, and bandwidth; slow improvement in seek and rotation
- External I/O bus is a bottleneck to transfer rate and cost? => move to fast serial lines (FC-AL)?
- What to do with the increasing speed of the embedded processor inside the disk?
Slide 46: Connecting to Networks (and Other I/O)
- Bus: a shared medium of communication that can connect to many devices
- A PC contains a hierarchy of buses
Slide 47: Buses in a PC
- The memory bus connects CPU and memory; PCI is the internal (backplane) I/O bus; SCSI is an external I/O bus (1 to 15 disks); an Ethernet interface connects to the local area network
- Data rates (peak), computed in the sketch below:
  - Memory: 100 MHz, 8 bytes wide => 800 MB/s
  - PCI: 33 MHz, 4 bytes wide => 132 MB/s
  - SCSI: "Ultra2" (40 MHz), "Wide" (2 bytes) => 80 MB/s
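Each peak figure is just clock rate times transfer width, as this small C check shows (peak, not delivered, bandwidth):

```c
#include <stdio.h>

/* Peak bus bandwidth = clock rate (MHz) * bytes per transfer,
 * reproducing the three peak figures on this slide. */
int main(void) {
    struct { const char *name; double mhz; double bytes; } bus[] = {
        {"Memory bus",        100.0, 8.0},
        {"PCI",                33.0, 4.0},
        {"SCSI Ultra2 Wide",   40.0, 2.0},
    };
    for (int i = 0; i < 3; i++) {
        printf("%-18s %6.0f MB/s peak\n", bus[i].name, bus[i].mhz * bus[i].bytes);
    }
    return 0;
}
```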
Slide 48: Why Networks?
- Originally: sharing I/O devices between computers (e.g., printers)
- Then: communicating between computers (e.g., file transfer protocol)
- Then: communicating between people (e.g., email)
- Then: communicating between networks of computers (the Internet, WWW)
Slide 49: Types of Networks
- Local Area Network (Ethernet)
  - Inside a building: up to 1 km
  - (Peak) data rate: 10 Mbits/sec, 100 Mbits/sec, 1000 Mbits/sec
  - Run and installed by network administrators
- Wide Area Network
  - Across a continent (10 km to 10,000 km)
  - (Peak) data rate: 1.5 Mbits/sec to 2,500 Mbits/sec
  - Run and installed by telephone companies
Slide 50: ABCs of Networks: 2 Computers
- Starting point: send bits between 2 computers
- A queue (first in, first out) on each end
- Can send both ways ("full duplex")
- The information sent is called a "message" (messages are also called packets)
Slide 51: A Simple Example: 2 Computers
- What is the message format? (Similar in idea to an instruction format.) Fixed size? How many bits?
- Request/Response: 1 bit
  - 0: please send data from the address in your memory
  - 1: this packet contains the data corresponding to the request
- Address/Data: 32 bits
- Header (and trailer): information needed to deliver the message; payload: the data in the message (one word above)
Slide 52: Questions About the Simple Example
- What if more than 2 computers want to communicate?
  - Need a computer "address field" in the packet to identify which computer should receive it (destination) and which computer it came from, for the reply (source)
- Format: header with Net ID destination (5 bits), Net ID source (5 bits), Request/Response (1 bit); payload of Address/Data (32 bits)
Slide 53: Questions About the Simple Example
- What if the message is garbled in transit?
  - Add redundant information that is checked when the message arrives to be sure it is OK
  - An 8-bit sum of the other bytes, called a "checksum", is carried in a trailer; upon arrival, compare the checksum to the sum of the rest of the information in the message
- Format: header (Net ID dest., Net ID source, Request/Response), payload (Address/Data, 32 bits), trailer (checksum, 8 bits); see the sketch below
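A C sketch of this packet and its checksum. Field widths are rounded up to whole bytes for simplicity (the slides use 1-2 bits for the type and 5 bits per network ID), so treat the layout as illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the packet built up on these slides, with byte-wide fields. */
struct packet {
    uint8_t  type;       /* request/response */
    uint8_t  dest;       /* destination network ID */
    uint8_t  src;        /* source network ID */
    uint32_t addr_data;  /* 32-bit address or data payload */
    uint8_t  checksum;   /* trailer: 8-bit sum of the other bytes */
};

/* "8-bit sum of other bytes": sum every field except the checksum. */
static uint8_t packet_checksum(const struct packet *p) {
    uint8_t sum = (uint8_t)(p->type + p->dest + p->src);
    const uint8_t *b = (const uint8_t *)&p->addr_data;
    for (size_t i = 0; i < sizeof(p->addr_data); i++)
        sum = (uint8_t)(sum + b[i]);
    return sum;
}

int main(void) {
    struct packet p = { .type = 1, .dest = 3, .src = 7, .addr_data = 0x1234 };
    p.checksum = packet_checksum(&p);
    printf("checksum matches on arrival: %d\n",
           packet_checksum(&p) == p.checksum);
    return 0;
}
```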
Slide 54: Questions About the Simple Example
- What if the message never arrives?
  - If the sender is told it has arrived (and the receiver is told the reply has arrived), the message can be resent upon failure
  - Don't discard a message until you get an "ACK" (acknowledgment); also, if the checksum fails, don't send an ACK
- Request/Response grows to 2 bits:
  - 00: request (please send data from Address)
  - 01: reply (message contains the data corresponding to the request)
  - 10: acknowledge (ACK) request
  - 11: acknowledge (ACK) reply
- Rest of the format as before: Net ID dest. and source (5 bits each), Address/Data (32 bits), checksum (8 bits)
Slide 55: Observations About the Simple Example
- Simple questions such as those above lead to more complex procedures to send and receive messages, and to more complex message formats
- Protocol: the algorithm for properly sending and receiving messages (packets)
Slide 56: Ethernet (Popular LAN) Packet Format
- Fields: Preamble (8 bytes), Dest Addr (6 bytes), Src Addr (6 bytes), Length of Data (2 bytes), Data (0-1500 bytes), Pad (0-46 bytes), Check (4 bytes); see the struct sketch below
- The preamble marks the beginning of the packet
- Unique address per Ethernet network interface card, so you can just plug it in and use it (privacy issue?)
- The pad ensures the minimum packet is 64 bytes (easier to find the packet on the wire)
- Header + trailer: 24 bytes + pad
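The same field list written as a C struct (a sketch only: the variable-length data and pad are declared at their maximum sizes, and the check field is shown as a plain 4-byte word):

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the Ethernet packet fields listed on this slide. Real
 * packets carry 0-1500 data bytes and only enough pad to reach the
 * 64-byte minimum; here both are declared at their maximum sizes. */
struct ethernet_packet {
    uint8_t  preamble[8];  /* marks the beginning of the packet */
    uint8_t  dest_addr[6]; /* unique per network interface card */
    uint8_t  src_addr[6];
    uint16_t length;       /* length of data, 2 bytes */
    uint8_t  data[1500];   /* 0-1500 bytes of payload */
    uint8_t  pad[46];      /* 0-46 bytes, to reach the 64-byte minimum */
    uint32_t check;        /* 4-byte check */
};

int main(void) {
    printf("maximum packet size: %zu bytes\n", sizeof(struct ethernet_packet));
    return 0;
}
```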
Slide 57: Software Protocol to Send and Receive
- SW send steps:
  1. Application copies data to an OS buffer
  2. OS calculates the checksum and starts a timer
  3. OS sends the data to the network interface hardware and tells it to start
- SW receive steps (the mirror image):
  3. OS copies the data from the network interface hardware into an OS buffer
  2. OS calculates the checksum; if OK, send an ACK; if not, delete the message (the sender resends when its timer expires)
  1. If OK, the OS copies the data to the user address space and signals the application to continue
Slide 58: Protocol for Networks of Networks (WAN)?
- Internetworking: allows computers on independent and incompatible networks to communicate reliably and efficiently
  - Enabling technology: software standards that allow reliable communication without reliable networks
  - A hierarchy of software layers, each layer responsible for a portion of the overall communication task; such layered standards are called protocol families or protocol suites
- Abstraction to cope with the complexity of communication vs. abstraction for the complexity of computation
Slide 59: Protocol for Networks of Networks
- Transmission Control Protocol / Internet Protocol (TCP/IP)
  - This protocol family is the basis of the Internet, a WAN protocol
  - IP makes a best effort to deliver
  - TCP guarantees delivery
  - TCP/IP is so popular it is used even when communicating locally, even across a homogeneous LAN
Slide 60: FTP from Stanford to Berkeley
- BARRNet is the WAN for the Bay Area
  - T3 is a 45 Mbit/s leased line (WAN); FDDI is a 100 Mbit/s LAN
- IP sets up the connection; TCP sends the file
- [Figure: Hennessy's machine at Stanford reaches Patterson's at Berkeley via Ethernet, FDDI, and T3 links through BARRNet]
Slide 61: Protocol Family Concept
- [Figure: a message passes logically between peers at the same level, but actually travels down through the layers, with each layer adding its own header and trailer, across the physical link, and back up at the receiver]
Slide 62: Protocol Family Concept
- The key to protocol families is that communication occurs logically at the same level of the protocol, called peer-to-peer, but is implemented via services at the next lower level
- The danger is lower performance at each level if the family is implemented as a strict hierarchy (e.g., multiple checksums)
Slide 63: Message, TCP/IP Packet, Ethernet Packet, Protocols
- The application sends a message
- TCP breaks it into 64 KB segments and adds a 20-byte TCP header
- IP adds a 20-byte IP header and sends it to the network
- If Ethernet, the result is broken into 1500-byte packets with Ethernet headers and trailers (24 bytes)
- All headers and trailers have a length field, destination, ... (overhead estimated in the sketch below)
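A simplified payload-efficiency estimate for this layering in C. It assumes every 1500-byte Ethernet data field carries its own 20-byte TCP and 20-byte IP header, which simplifies the 64 KB segmentation scheme shown above, plus 24 bytes of Ethernet header/trailer per packet.

```c
#include <stdio.h>

/* Simplified per-packet overhead for the stack on this slide. */
int main(void) {
    double eth_data   = 1500.0;        /* Ethernet data field */
    double tcp_ip_hdr = 20.0 + 20.0;   /* TCP + IP headers, assumed per packet */
    double eth_ovh    = 24.0;          /* Ethernet header + trailer */

    double user_bytes = eth_data - tcp_ip_hdr;   /* 1460 application bytes */
    double wire_bytes = eth_data + eth_ovh;      /* 1524 bytes on the wire */
    printf("payload efficiency: %.1f%%\n",
           100.0 * user_bytes / wire_bytes);     /* ~95.8% */
    return 0;
}
```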
Slide 64: Shared vs. Switched Networks
- Shared media vs. switched: in a switched network, many pairs communicate at the same time over "point-to-point" connections
- The aggregate bandwidth in a switched network is many times that of a shared network
  - Point-to-point links are faster, since there is no arbitration and the interface is simpler
- [Figure: nodes on a shared medium vs. nodes connected through a crossbar switch]
Slide 65: Heart of Today's Data Switch
- Convert the incoming serial bit stream into, say, 128-bit words
- Unpack the header to find the destination and place the message into the memory of the proper outgoing port; this works as long as the memory is much faster than the switch rate
- Convert 128-bit words back into a serial bit stream
Slide 66: Network Media (if time)
- Twisted pair: copper, 1 mm thick, twisted to avoid antenna effects (telephone)
- Coaxial cable: copper core, insulator, braided outer conductor, plastic covering; used by cable companies: high bandwidth, good noise immunity
- Fiber optics: the three parts are the cable, the light source, and the light detector
  - Transmitter: LED or laser diode; receiver: photodiode
  - Light is guided along the silica core by total internal reflection at the silica-air boundary
Slide 67: I/O Pitfall: Relying on Peak Data Rates
- Using the peak transfer rate of a portion of the I/O system to make performance projections or performance comparisons
- Peak bandwidth measurements are often based on unrealistic assumptions about the system, or are unattainable because of other system limitations
  - In the earlier example, peak bandwidth of FDDI vs. 10 Mbit Ethernet is 10:1, but the delivered bandwidth ratio (due to software overhead) is 1.01:1
  - Peak PCI bandwidth is 132 MB/s, but combined with memory it is often < 80 MB/s
Slide 68: Network Description/Innovations
- Shared media vs. switched: pairs communicate at the same time
- Aggregate bandwidth in a switched network is many times that of a shared one
  - Point-to-point is faster: only a single destination, simpler interface
  - Serial line: 1-5 Gbit/sec
- Moore's Law applies to switches, too
  - One chip: 32 x 32 switch, 1.5 Gbit/sec links, $396 => 48 Gbit/sec aggregate bandwidth (AMCC S2025)
Slide 69: Network History/Limits
- TCP/UDP/IP protocols for WAN/LAN in the 1980s
- Lightweight protocols for LANs in the 1990s
- The limit is standards and efficient software protocols
  - 10 Mbit Ethernet in 1978 (shared)
  - 100 Mbit Ethernet in 1995 (shared, switched)
  - 1000 Mbit Ethernet in 1998 (switched)
  - FDDI; ATM Forum for scalable LANs (still meeting)
- The internal I/O bus limits delivered bandwidth
  - 32-bit, 33 MHz PCI bus = 1 Gbit/sec
  - Future: 64-bit, 66 MHz PCI bus = 4 Gbit/sec
Slide 70: Network Summary
- Fast serial lines and switches offer high bandwidth and low latency over reasonable distances
- Protocol software development and standards-committee bandwidth limit the rate of innovation
  - Ethernet forever?
- The internal I/O bus interface to the network is the bottleneck to delivered bandwidth and latency
Slide 71: Network Summary
- Protocol suites allow heterogeneous networking
  - Another use of the principle of abstraction
  - Protocols allow operation in the presence of failures
  - Standardization is key for LANs and WANs
- The integrated circuit is revolutionizing network switches as well as processors
  - A switch is just a specialized computer
- High-bandwidth networks with slow software overheads don't deliver their promise
Slide 72: Systems: History, Trends, Innovations
- Cost/performance leaders come from the PC industry
- Transaction processing and file service are based on Symmetric Multiprocessor (SMP) servers
  - 4-64 processors
  - Shared-memory addressing
- Decision support is based on SMPs and clusters (shared nothing)
- Clusters of low-cost, small SMPs are getting popular
Slide 73: 1997 State of the Art System: PC
- $1140 OEM
- One 266 MHz Pentium II
- 64 MB DRAM
- 2 UltraDMA EIDE disks, 3.1 GB each
- 100 Mbit Ethernet interface
- (PennySort winner)
- source: www.research.microsoft.com/research/barc/SortBenchmark/PennySort.ps
Slide 74: 1997 State of the Art SMP: Sun E10000
- TPC-D, Oracle 8, 3/98
- SMP: 64 x 336 MHz CPUs, 64 GB DRAM, 668 disks (5.5 TB)
- Cost breakdown:
  - Disks, shelves: $2,128k
  - Boards, enclosures: $1,187k
  - CPUs: $912k
  - DRAM: $768k
  - Power: $96k
  - Cables, I/O: $69k
  - HW total: $5,161k
- [Figure: 16 boards, each with processors, memory, and a crossbar bridge, connected by a data crossbar switch and 4 address buses; bus bridges lead to strings of SCSI disks]
- source: www.tpc.org
Slide 75: State of the Art Cluster: Tandem/Compaq SMP
- ServerNet switched network
- Rack-mounted equipment
- SMP node: 4 Pentium Pros, 3 GB DRAM, 3 disks (6 per rack)
- 10 disk shelves per rack at 7 disks per shelf
- Total: 6 SMPs (24 CPUs, 18 GB DRAM), 402 disks (2.7 TB)
- TPC-C, Oracle 8, 4/98; cost breakdown:
  - CPUs: $191k
  - DRAM: $122k
  - Disks + controllers: $425k
  - Disk shelves: $94k
  - Networking: $76k
  - Racks: $15k
  - HW total: $926k
Slide 76: 1997 Berkeley Cluster: Zoom Project
- 3 TB storage system
  - 370 x 8 GB disks, 20 x 200 MHz Pentium Pro PCs, 100 Mbit switched Ethernet
  - System cost is a small delta (~30%) over the raw disk cost
- Application: San Francisco Fine Arts Museum server
  - 70,000 art images online
  - Zoom in 32X; try it yourself: www.Thinker.org (statue)
Slide 77: User Decision Support Demand vs. Processor Speed
- CPU speed: 2X / 18 months ("Moore's Law")
- Database demand: 2X / 9-12 months ("Greg's Law")
- The result is a growing database-processor performance gap
Slide 78: Berkeley Perspective on the Post-PC Era
- The PostPC era will be driven by 2 technologies:
  1. "Gadgets": tiny embedded or mobile devices
     - Ubiquitous: in everything
     - e.g., successors to the PDA, cell phone, wearable computers
  2. Infrastructure to support such devices
     - e.g., successors to big fat web servers and database servers
Slide 79: Intelligent RAM: IRAM
- Microprocessor & DRAM on a single chip:
  - 10X capacity vs. SRAM
  - On-chip memory latency 5-10X better, bandwidth 50-100X better
  - Improves energy efficiency 2X-4X (no off-chip bus)
  - Serial I/O 5-10X vs. buses
  - Smaller board area/volume
- IRAM advantages extend to:
  - A single-chip system
  - A building block for larger systems
- [Figure: conventional design (processor and caches from a logic fab, DRAM chips from a DRAM fab, connected by a bus and I/O) vs. IRAM (processor, caches, DRAM, and I/O together on one chip)]
Slide 80: Other Examples: IBM "Blue Gene"
- 1 PetaFLOPS in 2005 for $100M?
- Application: protein folding
- Blue Gene chip
  - 32 multithreaded RISC processors + ?? MB embedded DRAM + a high-speed network interface on a single 20 x 20 mm chip
  - 1 GFLOPS per processor
- 2' x 2' board = 64 chips (2K CPUs)
- Rack = 8 boards (512 chips, 16K CPUs)
- System = 64 racks (512 boards, 32K chips, 1M CPUs)
- Total: 1 million processors in just 2000 sq. ft.
Slide 81: Other Examples: Sony PlayStation 2
- Emotion Engine: 6.2 GFLOPS, 75 million polygons per second (Microprocessor Report, 13:5)
  - Superscalar MIPS core + vector coprocessor + graphics/DRAM
  - Claim: "Toy Story" realism brought to games
Slide 82: The Problem Space: Big Data
- Big demand for enormous amounts of data
  - Today: high-end enterprise and Internet applications
    - Enterprise decision support, data-mining databases
    - Online applications: e-commerce, mail, web, archives
  - Future: infrastructure services, richer data
    - Computational and storage back ends for mobile devices
    - More multimedia content
    - More use of historical data to provide better services
- Today's SMP server designs can't easily scale
- Bigger scaling problems than performance!
Slide 83: The Real Scalability Problems: AME
- Availability: systems should continue to meet quality-of-service goals despite hardware and software failures
- Maintainability: systems should require only minimal ongoing human administration, regardless of scale or complexity
- Evolutionary growth: systems should evolve gracefully in terms of performance, maintainability, and availability as they are grown/upgraded/expanded
- These are problems at today's scales, and will only get worse as systems grow
Slide 84: ISTORE-1 Hardware Platform
- 80-node x86-based cluster, 1.4 TB storage
  - Cluster nodes are plug-and-play, intelligent, network-attached storage "bricks": a single field-replaceable unit to simplify maintenance
  - Each node is a full x86 PC with 256 MB DRAM and an 18 GB disk
  - More CPU than NAS; fewer disks per node than a cluster
- ISTORE chassis: 80 nodes, 8 per tray; 2 levels of switches (20 x 100 Mbit/s, 2 x 1 Gbit/s); environment monitoring (UPS, redundant power supplies, fans, heat and vibration sensors, ...)
- Intelligent disk "brick": portable PC CPU (Pentium II/266) + DRAM, redundant NICs (4 x 100 Mb/s links), diagnostic processor, disk, in a half-height canister
Slide 85: Conclusion
- IRAM is attractive for two Post-PC applications because of its low power, small size, and high memory bandwidth
  - Gadgets: embedded/mobile devices
  - Infrastructure: intelligent storage and networks
- PostPC infrastructure requires:
  - New goals: availability, maintainability, evolution
  - New principles: introspection, performance robustness
  - New techniques: isolation/fault insertion, software scrubbing
  - New benchmarks: measure and compare AME metrics
Slide 86: Questions?
Contact us if you're interested:
email: patterson@cs.berkeley.edu
http://iram.cs.berkeley.edu/