Dezső Sima September 2008 (Ver. 1.0) © Sima Dezső, 2008 Overall design space of main memories

Contents 1. Design space of MMs 2. Underlying principles of the implementation of MMs 3. Performance considerations 4. References

1. Design space of MMs — Figure: Design space of processors: Von Neumann computational model; Underlying principle of operation; Instruction Set Architecture (ISA); Microarchitecture; Underlying principles of implementation; Principles of attaching memory and I/O.

1. Design space of MMs — Figure: Design space of main memories (MM): Underlying principle of operation; Control Set Architecture (CSA); Microarchitecture of the MM; Underlying principles of implementation.

1. Design space of MMs — Underlying principle of operation: basic operation; refreshing (not discussed). Figure: Underlying principle of operation of DRAM devices.

1. Design space of MMs — Basic operation of DRAM devices (assuming device/bank/row/column addressing).
Figure: Basic operation of DRAM devices.
Reads: Activate – Read – Precharge – Activate. The Read command follows the Activate command after tRCD, the read data (RD) follow the Read command after tCL, and tRP separates the Precharge command from the next Activate.
Writes: Activate – Write – Precharge – Activate. The Write command follows the Activate command after tRCD, the write data (WD) follow after tCL, and tWR (write recovery) must elapse before the Precharge; tRP again separates the Precharge from the next Activate.
Legend: C: Command; AD: Device address; AB: Bank address; AR: Row address; AC: Column address.
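To make the timing parameters concrete, here is a minimal C sketch (not from the presentation) of how tRCD, tCL, tRP and the burst length combine into a random read latency; the cycle counts are illustrative, roughly DDR2-like assumptions.

```c
/* A minimal sketch (not vendor code) of how the slide's timing parameters
 * combine into a random read access time. Parameter values are illustrative
 * (roughly DDR2-800-like), not taken from the presentation. */
#include <stdio.h>

int main(void) {
    /* All values in memory-clock cycles. */
    int tRCD = 5;          /* Activate (row open) -> Read/Write command delay */
    int tCL  = 5;          /* Read command -> first data (CAS latency)        */
    int tRP  = 5;          /* Precharge -> next Activate to the same bank     */
    int burst_cycles = 2;  /* 4-beat burst on a double data rate bus          */

    /* Row miss: the bank must be precharged and re-activated first. */
    int row_miss = tRP + tRCD + tCL + burst_cycles;
    /* Row hit: the row is already open, only the column access remains. */
    int row_hit  = tCL + burst_cycles;

    printf("row-hit  read latency: %2d cycles\n", row_hit);
    printf("row-miss read latency: %2d cycles\n", row_miss);
    return 0;
}
```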

2. Underlying principles of the implementation of MMs — Figure: Main dimensions of the design space of the underlying principles of implementation of MMs: one/two-level implementation; managing the DRAM status; principle of communication (signal grouping for communication); bus topology; type of signaling; type of synchronisation.

2. Underlying principles of the implementation of MMs — One/two-level implementation. One-level implementation: the MM is built up of DRAM devices. Figure: One/two-level implementation of main memories.

2. Underlying principles of the implementation of MMs — Figure: One-level implementation of the main memory (XDR memory of the PlayStation 3) [1].

One/two-level implementation. One-level implementation: the MM is built up of DRAM devices. Two-level implementation: the MM is built up of modules; the modules consist of DRAM devices. Figure: One/two-level implementation of main memories.

2. Underlying principles of the implementation of MMs — Figure: Two-level memory implementation (DDR2 modules on an MSI motherboard) [2].

One/two-level implementation compared — one-level (MM built up of DRAM devices) vs. two-level (MM built up of modules consisting of DRAM devices):
Type of mounting: typically soldered vs. typically socketed.
Expandability: not expandable vs. easily expandable.
Board space needed: large vs. small.
Signal integrity: good vs. unfavorable.
E.g.: XDR memories (and the earliest PC main memories) vs. all other types of main memories.
Figure: One/two-level implementation of main memories.

2. Underlying principles of the implementation of MMs — Managing DRAM status: along with the basic operation (all other types of main memories), or detached from the basic operation via a second dedicated interface (RDRAM, XDR). Figure: Options to manage DRAM status. This dimension of the design space is not discussed further.

2. Underlying principles of the implementation of MMs — Principle of communication:
Parallel bus based: signals are transferred over a parallel bus in one cycle (e.g. 64 bits in each cycle).
Packet-based: signals are transferred over a serial bus in a number of cycles (e.g. 16 cycles/packet on a 1-bit wide bus, or 4 cycles/packet on a 4-bit wide bus).
Figure: Principles of communication used in main memories.
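The cycle counts quoted in the figure follow from simple arithmetic: cycles per packet = packet bits / bus width, rounded up. A small C sketch, assuming a 16-bit packet purely for illustration:

```c
/* Illustrative arithmetic for the slide's examples: transferring the same
 * command/address information over buses of different widths. The 16-bit
 * packet size is an assumption for illustration, not from the presentation. */
#include <stdio.h>

static int cycles_needed(int packet_bits, int bus_width_bits) {
    /* Round up: a partially filled last cycle still costs a full cycle. */
    return (packet_bits + bus_width_bits - 1) / bus_width_bits;
}

int main(void) {
    int packet_bits = 16;
    printf("1-bit serial bus : %d cycles/packet\n", cycles_needed(packet_bits, 1));  /* 16 */
    printf("4-bit narrow bus : %d cycles/packet\n", cycles_needed(packet_bits, 4));  /*  4 */
    printf("64-bit parallel  : %d cycle(s)\n",      cycles_needed(packet_bits, 64)); /*  1 */
    return 0;
}
```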

2. Underlying principles of the implementation of MMs — Signal grouping for communication: line multiplexing (in case of parallel bus based transmission) or the packet concept (in case of packet based transmission).

2. Underlying principles of the implementation of MMs () are unidiredctional (they flow in one direction, from the MC to the MM) Data Due to the basic operation Commands and addresses row and column addresses are not used at the same time, that is they may be multiplexed without any performance penalty. Line multiplexing In order to avoid performance impediments data are transferred on a private bus, instead of being multiplexed with commands and addresses. is bidirectional (read data flow from the MC to the MM, write data from the MM to the MC). Assumptions for line multiplexing Additional addresses needed (such as device or bank addresses) are transferred. along with row and column addresses. Read and write data may be multiplexed to reduce cost with a low performance penalty.

2. Underlying principles of the implementation of MMs — Line multiplexing: multiplexing row addresses/column addresses vs. read and write data.
Row address/column address multiplexing: not multiplexed (first asynchronous DRAMs, before Mostek’s MK4096) or multiplexed (asynchronous DRAMs from the MK4096 on; synchronous SDRAMs).
Read/write data multiplexing: multiplexed (bidirectional data bus) or not multiplexed (unidirectional read and write buses).
Figure: Multiplexing row and column addresses vs. read and write data.

2. Underlying principles of the implementation of MMs — Packet concept: different packet concepts are used for RDRAM, XDR and FB-DIMM memories.

2. Underlying principles of the implementation of MMs — The packet concept of RDRAM memories (1). Figure: RDRAM: different packet types for the Activate/Precharge and Read/Write commands (Activate and Read shown) [9].

2. Underlying principles of the implementation of MMs — The packet concept of RDRAM memories (2). Figure: RDRAM memory access packets [10]: row packets over the ROW bus, column packets over the COL bus, data packets over the bidirectional data bus (DQA/DQB), and CR R/W packets over the Serial bus. (CR: Control Register; R/W: Read/Write)

2. Underlying principles of the implementation of MMs — Packet concept: different packet concepts for RDRAM, XDR and FB-DIMM memories.
RDRAM: different packet types (transferred via different buses) for Activate/Precharge and Read/Write commands; bidirectional data packets; control register read/write packets.

2. Underlying principles of the implementation of MMs — The packet concept of XDR memories. Figure: The packet concept of XDR memories [11]: request packets, data packets, CR R/W packets.

2. Underlying principles of the implementation of MMs — Packet concept: different packet concepts for RDRAM, XDR and FB-DIMM memories.
RDRAM: different packet types (transferred via different buses) for Activate/Precharge and Read/Write commands; bidirectional data packets; control register read/write packets.
XDR: a unified packet type for memory accesses (called request packets, carrying the Activate/Precharge and Read/Write commands); bidirectional data packets; control register read/write packets.

2. Underlying principles of the implementation of MMs — The packet concept of FB-DIMM memories. Figure: The packet concept of FB-DIMM memories: southbound packets (memory controller → module → module) carry commands and write data; northbound packets (module → module → memory controller) carry read data and status.

2. Underlying principles of the implementation of MMs — Packet concept: different packet concepts for RDRAM, XDR and FB-DIMM memories.
RDRAM: different packet types (transferred via different buses) for Activate/Precharge and Read/Write commands; bidirectional data packets; control register read/write packets.
XDR: a unified packet type for memory accesses (called request packets, carrying the Activate/Precharge and Read/Write commands); bidirectional data packets; control register read/write packets.
FB-DIMM: a unified packet type for all commands (called southbound packets), including memory accesses and control register reads/writes, containing up to 3 commands, or write data and a single command; unidirectional read data (and status) packets (called northbound packets). A sketch of the two southbound layouts follows.
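As a purely hypothetical illustration of the two southbound layouts described above, the following C sketch models a frame as either up to 3 commands, or write data plus a single command; all field names and sizes are invented, the real frame format is defined by the JEDEC FB-DIMM specification [8].

```c
/* Hypothetical model of the slide's two southbound frame layouts.
 * Field names/sizes are invented for illustration only. */
#include <stdio.h>
#include <stdint.h>

typedef struct { uint32_t cmd; } Command;  /* placeholder command encoding */

typedef enum { SB_THREE_COMMANDS, SB_DATA_PLUS_COMMAND } SbFrameKind;

typedef struct {
    SbFrameKind kind;
    union {
        Command cmds[3];           /* command-only frame: up to 3 commands */
        struct {
            uint8_t write_data[8]; /* write-data frame: payload ...        */
            Command cmd;           /* ... plus a single command            */
        } data;
    } u;
} SouthboundFrame;

/* Northbound frames travel the other way, carrying read data or status. */

int main(void) {
    SouthboundFrame f = { .kind = SB_THREE_COMMANDS };
    f.u.cmds[0].cmd = 0x1; f.u.cmds[1].cmd = 0x2; f.u.cmds[2].cmd = 0x3;
    printf("frame kind: %d, first cmd: 0x%X\n", f.kind, (unsigned)f.u.cmds[0].cmd);
    return 0;
}
```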

2. Underlying principles of the implementation of MMs — Bus topology: multi-drop bus (stub-bus, fly-by) vs. point-to-point connection (daisy-chained). A multi-drop bus allows more than one device/module to be connected to the bus; a point-to-point connection interconnects two units (e.g. a memory controller and a module). Stub-bus: connection via slots (sockets), e.g. MC – DIMM – DIMM – DIMM – DIMM. Figure: Bus topologies used to connect DRAM devices or modules to the memory controller.

2. Underlying principles of the implementation of MMs — Figure: Stub bus topology [3].

Bus topology: multi-drop bus (stub-bus, fly-by) vs. point-to-point connection (daisy-chained). Stub-bus: connection via slots (sockets), e.g. MC – DIMM – DIMM – DIMM – DIMM. Fly-by: connection via soldering, e.g. MC – DRAM – DRAM – DRAM – DRAM. Figure: Bus topologies used to connect DRAM devices or modules to the memory controller.

2. Underlying principles of the implementation of MMs — Fly-by topology. Figure: Fly-by topology of the RQ bus in a two-channel XDR memory with two XDR devices/channel [5].

Bus topology: multi-drop bus (stub-bus, fly-by) vs. point-to-point connection (daisy-chained). Stub-bus: connection via slots (sockets). Fly-by: connection via soldering. Daisy-chained: units connected to each other (outputs to inputs), e.g. MC – DIMM – DIMM – DIMM – DIMM. Figure: Bus topologies used to connect DRAM devices or modules to the memory controller.

2. Underlying principles of the implementation of MMs — Figure: Daisy-chained topology connecting the AMBs in FB-DIMM memories [4]. (There are two Command/Address (C/A) buses to reduce the loading caused by the 9 to 36 DRAMs mounted on the module.)

Bus topology: multi-drop bus (stub-bus, fly-by) vs. point-to-point connection (daisy-chained). Stub-bus: connection via slots (sockets). Fly-by: connection via soldering. Daisy-chained: units connected to each other (outputs to inputs). Point-to-point: e.g. MC – DRAM. Figure: Bus topologies used to connect DRAM devices or modules to the memory controller.

2. Underlying principles of the implementation of MMs — Point-to-point. Figure: Point-to-point topology of the data bus (DQ) in a two-channel XDR memory with two XDR devices/channel [5].

Bus topology — connecting DRAM devices to the MC. Figure: Overview of bus topologies connecting DRAM devices to the memory controller:
Multi-drop stub-bus: used in very early PCs.
Fly-by bus: used in RDRAMs (except the Serial bus) and in XDR/XDR2 (memory requests, control register reads/writes).
Point-to-point bus: used in XDR/XDR2 (read/write data).

2. Underlying principles of the implementation of MMs — Bus topology: connecting DIMMs to the MC. Figure: Overview of bus topologies connecting DRAM modules to the memory controller:
Multi-drop stub-bus: used in parallel connected main memories (FPM/EDO/SDRAM, DDR/DDR2/DDR3).
Fly-by bus: used in RIMMs (with fly-by device connection on the module).

2. Underlying principles of the implementation of MMs — Figure: Contrasting the interconnection of RIMM modules with that of DIMMs [12].

2. Underlying principles of the implementation of MMs — Connecting DIMMs to the MC. Figure: Overview of bus topologies connecting DRAM modules to the memory controller:
Multi-drop stub-bus: parallel connected main memories (FPM/EDO/SDRAM, DDR/DDR2/DDR3).
Fly-by bus: RIMMs (with fly-by device connection on the module).
Daisy-chained bus: FB-DIMMs (to connect the AMBs).
Point-to-point bus: not feasible for connecting multiple DIMMs.

2. Underlying principles of the implementation of MMs — Figure: Assessing bus topologies connecting DRAM devices/modules to the memory controller.
Signal integrity: stub-bus unfavorable (due to transmission line discontinuities); fly-by better; daisy-chained good; point-to-point excellent.
Peak transfer rate (recently): up to 4.8 Gb/s on multi-drop buses (with increasingly sophisticated termination); up to 16 Gb/s on point-to-point connections.
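The practical consequence of these per-pin rates can be sketched with back-of-the-envelope arithmetic; the pin counts below are assumed example values (a 14-bit link vs. a 16-bit data bus), not figures from the presentation.

```c
/* Back-of-the-envelope bandwidth arithmetic for the topologies above.
 * Per-pin rates follow the slide (4.8 and 16 Gb/s); the bus widths are
 * assumed example values. */
#include <stdio.h>

static double bandwidth_GBps(double gbit_per_pin, int data_pins) {
    return gbit_per_pin * data_pins / 8.0;  /* Gb/s * pins -> GB/s */
}

int main(void) {
    printf("14 pins @ 4.8 Gb/s : %4.1f GB/s\n", bandwidth_GBps(4.8, 14));
    printf("16 pins @ 16  Gb/s : %4.1f GB/s\n", bandwidth_GBps(16.0, 16));
    return 0;
}
```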

2. Underlying principles of the implementation of MMs — Bus topologies of parallel connected synchronous MMs (Summary 1):
Synchronous DRAMs (except DDR3):
  read/write data: DQ [3:0/7:0/15:0] (I/O), stub bus;
  commands: CS, RAS, CAS, WE (I), stub bus;
  addresses: BA [7:0], A [N:0] (I), stub bus.
DDR3:
  read/write data: DQ [3:0/7:0/15:0] (I/O), stub bus;
  commands: CS, RAS, CAS, WE (I), fly-by;
  addresses: BA [7:0], A [N:0] (I), fly-by.

2. Underlying principles of the implementation of MMs — Bus topologies of serial connected MMs (Summary 2):
RDRAM:
  read/write data: DQA [8:0], DQB [8:0] (I/O), fly-by;
  row bus: ROW [2:0] (I), fly-by;
  column bus: COL [4:0] (I), fly-by;
  serial interface: CMD, SIO1, SIO0 (I/O), daisy-chained.
XDR:
  read/write data: DQ [15:0] (I/O), point-to-point;
  memory requests: RQ [11:0] (I), fly-by;
  control register (CR) reads/writes: SDI (I), SDO (O), daisy-chained.
FB-DIMM (AMBs – memory controller):
  read data/device status: PN [13:0] (O), daisy-chained;
  memory requests/write data/CR reads or writes: PS [9:0] (I), daisy-chained.

2. Underlying principles of the implementation of MMs — Figure: Bus topologies of current MMs to connect DRAM devices or modules to the memory controller (address/control bus vs. data bus; multi-drop stub-bus or fly-by bus vs. point-to-point/daisy-chained):
SDRAM, DDR, DDR2 (modules, and devices on the modules): stub-bus address/control and data.
DDR3: fly-by address/control bus, stub-bus data bus.
RDRAM (devices, modules): fly-by address/control and data.
XDR, XDR2 (devices): fly-by address/control bus, point-to-point data bus.
FB-DIMM (AMBs on the modules): daisy-chained.
TBI (devices): point-to-point.

2. Underlying principles of the implementation of MMs — Figure: Signal types used in MMs for control, address and data signals.
Voltage referenced (single ended, latched against a reference voltage V_REF): TTL (5 V), used in FPM/EDO; LVTTL (3.3 V), used in FPM/EDO and SDRAM; SSTL: SSTL2 (DDR), SSTL1.8 (DDR2), SSTL1.5 (DDR3); RSL (RDRAM).
Differential (latched against the common mode voltage V_CM of the signal pair S+/S–): LVDS (FB-DIMMs); DRSL (XDR data).
The trend is toward smaller voltage swings, from volts (TTL/LVTTL) down to a few hundred mV (RSL, DRSL, LVDS).
Legend: LVTTL: Low Voltage TTL; LVDS: Low Voltage Differential Signaling; (D)RSL: (Differential) Rambus Signaling Level; SSTL: Stub Series Terminated Logic; V_CM: Common Mode Voltage; V_REF: Reference Voltage.

2. Underlying principles of the implementation of MMs — Voltage swing vs. signal rise/fall time. Smaller voltage swings mean shorter signal rise/fall times and thus higher speed grades, but also a lower voltage budget, i.e. higher requirements for signal integrity.
Q = C_in × V = I × t_R, hence t_R ≈ C_in × V / I
where Q is the charge on the input capacitance of the line, C_in is the input capacitance of the line, V is the voltage swing, I is the current strength of the driver, and t_R is the rise time.
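Plugging illustrative numbers into t_R ≈ C_in × V / I shows why smaller swings shorten rise times; the capacitance, driver current and swing values below are assumptions for illustration only.

```c
/* Numerical illustration of the slide's relation t_R ~ C_in * V / I.
 * Component values are assumed for illustration only. */
#include <stdio.h>

int main(void) {
    double C_in = 2e-12;  /* 2 pF input capacitance of the line */
    double I    = 10e-3;  /* 10 mA driver current               */

    /* LVTTL-, SSTL-, RSL-like voltage swings (V), illustrative values */
    double swings[] = { 3.3, 0.8, 0.2 };
    for (int i = 0; i < 3; i++) {
        double t_R = C_in * swings[i] / I;  /* seconds */
        printf("swing %.1f V -> t_R ~ %6.1f ps\n", swings[i], t_R * 1e12);
    }
    return 0;
}
```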

2. Underlying principles of the implementation of MMs — Bus topologies and signaling of parallel connected MMs (Summary 1):
Synchronous DRAMs (except DDR3):
  read/write data: DQ [3:0/7:0/15:0] (I/O), stub bus, voltage referenced;
  commands: CS, RAS, CAS, WE (I), stub bus, voltage referenced;
  addresses: BA [7:0], A [N:0] (I), stub bus, voltage referenced.
DDR3:
  read/write data: DQ [3:0/7:0/15:0] (I/O), stub bus, voltage referenced;
  commands: CS, RAS, CAS, WE (I), fly-by, voltage referenced;
  addresses: BA [7:0], A [N:0] (I), fly-by, voltage referenced.

2. Underlying principles of the implementation of MMs — Bus topologies and signaling of serial connected MMs (Summary 2):
RDRAM:
  read/write data: DQA [8:0], DQB [8:0] (I/O), fly-by, voltage referenced;
  row bus: ROW [2:0] (I), fly-by, voltage referenced;
  column bus: COL [4:0] (I), fly-by, voltage referenced;
  serial interface: CMD, SIO1, SIO0 (I/O), daisy-chained, CMOS.
XDR:
  read/write data: DQ [15:0] (I/O), point-to-point, differential;
  memory requests: RQ [11:0] (I), fly-by, voltage referenced;
  control register (CR) reads/writes: SDI (I), SDO (O), daisy-chained, voltage referenced.
FB-DIMM (AMBs – memory controller):
  read data/device status: PN [13:0] (O), daisy-chained, differential;
  memory requests/write data/CR reads or writes: PS [9:0] (I), daisy-chained, differential.

2. Underlying principles of the implementation of MMs — Synchronisation (capturing control/address information):
Central synchronization: a central clock signal is used to latch the signals.
Source synchronization: the sourcing device (MC or DRAM) sends a strobe signal along with the signals sent.
Mesochronous synchronization: the sender reference clock and the receiver reference clock share the same frequency, but not a fixed phase relationship.

2. Underlying principles of the implementation of MMs — Central clocking (SDRAM). Figure: Central clocking of the address, command and data lines in an SDRAM device while writing random data [6]. Address, command and data lines are latched by the rising edge of the central clock (CLK).

2. Underlying principles of the implementation of MMs — Source synchronous clocking of the data lines (DDR). Figure: Source synchronous clocking (DDR SDRAMs) of the data lines in a DDR device while writing random data [7]. Command and address lines are latched by the differential clock (CK, CK#), but write data are latched by the rising edge of the source synchronous data strobe (DQS). (t_DQSS: Write command to first DQS latching transition.)

2. Underlying principles of the implementation of MMs — Mesochronous clocking (FB-DIMM). Figure: Mesochronous clocking used to synchronise the AMBs in FB-DIMM memories [8].

Figure: Synchronisation alternatives.
Capturing control/address information: central synchronization (SDRAM; DDR (1), DDR2 (2), DDR3 (2)), source synchronization (RDRAM; XDR (3), XDR2 (3)), mesochronous synchronization (FB-DIMM).
Capturing data: central synch. (SDRAM), source synch. (DDR (1), DDR2 (2), DDR3 (2); RDRAM; XDR (3), XDR2 (3)), mesochron. synch. (FB-DIMM).
(1): Phase alignment for data reads/writes. (2): Phase alignment for data reads/writes by read/write leveling. (3): Phase alignment for all signals by FlexPhase.

3. Performance considerations (1) Figure: Peak memory size vs peak bandwidth (BW) of particular DRAM technologies in Intel’s desktop chipsets

3. Performance considerations (2) — Figure: Peak memory size vs. peak bandwidth (BW) of particular DRAM technologies in Intel’s server chipsets, IBM’s QS2x blades and Sun’s T2: DDR (reg), DDR2 (reg), RDRAM, FB-DIMM and XDR technologies, shown for P4 servers, Core 2 servers, the QS20/QS21/QS22 blades and Sun’s T2 (BW axis from 1.06 to 25.6 GB/s; memory sizes up to about 51.2 GB).
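The bandwidth values on the BW axis of such charts follow directly from the module data rates: peak BW of a 64-bit (8-byte) module equals the transfer rate times 8 bytes, e.g. DDR2-800 yields the 6.4 GB/s point. A short C sketch, using standard JEDEC module rates as examples:

```c
/* How the BW-axis values of such charts arise: peak BW of a 64-bit
 * (8-byte wide) module = transfer rate (MT/s) * 8 bytes. The module
 * names/rates are standard JEDEC values, used here as examples. */
#include <stdio.h>

int main(void) {
    struct { const char *name; double mtps; } m[] = {
        { "SDR-133   (PC133)     ",  133.0 },
        { "DDR-400   (PC3200)    ",  400.0 },
        { "DDR2-800  (PC2-6400)  ",  800.0 },
        { "DDR3-1333 (PC3-10600) ", 1333.0 },
    };
    for (int i = 0; i < 4; i++)  /* MT/s * 8 B / 1000 -> GB/s */
        printf("%s : %5.2f GB/s\n", m[i].name, m[i].mtps * 8.0 / 1000.0);
    return 0;
}
```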

4. References
[1]: Yeung P., „Solving System Engineering Challenges in High Speed Memory Designs,” Rambus Design Seminar, Feb.
[2]:
[3]: Reddy A., „XDR and XDR2 Overview,” RDF, Oct., Taiwan
[4]: McTague M. & David H., „Fully Buffered DIMM (FB-DIMM) Design Considerations,” Intel Developer Forum, Feb. 18, 2004
[5]: Yoshitomi Y., „Elpida DRAM Solutions to Advanced Digital Consumer Electronic Systems,” Rambus Design Seminar, Hsinchu, Elpida, June
[6]: Synchronous DRAM, 64 Mbit, MT48LC16M4A2, MT48LC16M8A2, MT48LC16M16A2, Micron Technology, Inc., Oct.
[7]: Double Data Rate (DDR) SDRAM MT46V128M4, MT46V64M8, MT46V32M16, Micron Technology, Inc., 2000
[8]: FBDIMM Specification: High Speed Differential Point-to-Point Link at 1.5 V, JESD8-18, JEDEC, Sept. 2006

[9]: Direct Rambus Architecture and Measurements, MindShare
[10]: Crisp R., „Direct Rambus Technology: The New Main Memory Standard,” IEEE Micro, Nov./Dec. 1997, pp.
[11]: Ishikawa T., „Elpida XDRAM,” RDF, Oct., Taiwan
[12]: DeMone P., „Direct Rambus Memory,” Real World Technologies