1
Combining Memory and a Controller with Photonics through 3D-Stacking to Enable Scalable and Energy-Efficient Systems
Aniruddha N. Udipi, Naveen Muralimanohar*, Rajeev Balasubramonian, Al Davis, Norm Jouppi*
University of Utah and *HP Labs
2
Memory Trends - I
Multi-socket, multi-core, multi-thread
High bandwidth requirement: 1 TB/s by 2017
Edge-bandwidth bottleneck: pin count, per-pin bandwidth
Signal integrity and off-chip power
Limited number of DIMMs, without melting the system (or setting up in the tundra!)
3
Memory Trends - II
The job of the memory controller is hard
18+ timing parameters for DRAM!
Maintenance operations: refresh, scrub, power down, etc.
Several DIMM and controller variants; hard to provide interoperability
Need processor-side support for new memory features
Now throw in heterogeneity: memristors, PCM, STT-RAM, etc.
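To make the timing burden concrete, here is a minimal sketch (not from the talk) of the per-bank bookkeeping a controller does for a handful of common JEDEC DDRx timing constraints. The parameter names are standard, but the values and the class structure are illustrative placeholders, not tied to any specific DRAM part or to the authors' simulator.

```python
# Minimal sketch of per-bank timing bookkeeping in a DRAM memory controller.
# Parameter names follow common JEDEC DDRx conventions; the values below are
# illustrative placeholders only.
TIMING_NS = {
    "tRCD": 14.0,   # ACTIVATE -> READ/WRITE to the same bank
    "tRP":  14.0,   # PRECHARGE -> next ACTIVATE to the same bank
    "tRAS": 35.0,   # ACTIVATE -> PRECHARGE to the same bank
    "tCCD": 5.0,    # column command -> column command
    "tRRD": 6.0,    # ACTIVATE -> ACTIVATE to different banks
    # ... a real controller tracks 18+ such constraints, plus refresh (tRFC/tREFI)
}

class Bank:
    def __init__(self):
        self.last_activate = None
        self.last_precharge = None

    def can_issue_read(self, now_ns):
        """READ is legal only tRCD after the most recent ACTIVATE."""
        return (self.last_activate is not None and
                now_ns - self.last_activate >= TIMING_NS["tRCD"])

    def can_issue_activate(self, now_ns):
        """ACTIVATE is legal only tRP after the most recent PRECHARGE."""
        return (self.last_precharge is None or
                now_ns - self.last_precharge >= TIMING_NS["tRP"])
```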
4
Improving the interface
Memory interface under severe pressure
1. Memory Interconnect: efficient application of silicon photonics, without modifying DRAM dies
2. Communication protocol: streamlined slot-based interface
[Figure: CPU with memory controller (MC) connected to DIMMs]
5
PART 1 – Memory Interconnect
6
Silicon Photonic Interconnects
We need something that can break the edge-bandwidth bottleneck
Ring-modulator-based photonics:
Off-chip laser source
Indirect modulation using resonant rings
Relatively cheap coupling on- and off-chip
DWDM for high bandwidth density: as many as 67 wavelengths possible, limited by free spectral range and coupling losses between rings (Source: Xu et al., Optics Express 16(6), 2008)
DWDM: 64 λ × 10 Gbps/λ = 80 GB/s per waveguide
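The per-waveguide bandwidth figure follows directly from the wavelength count and per-wavelength rate; a quick check of the arithmetic (variable names are mine):

```python
# Per-waveguide bandwidth under DWDM: wavelength count x per-wavelength rate.
wavelengths = 64            # lambdas multiplexed on one waveguide
rate_gbps_per_lambda = 10   # Gbps per wavelength

total_gbps = wavelengths * rate_gbps_per_lambda   # 640 Gbps
total_gbytes_per_s = total_gbps / 8               # 80 GB/s per waveguide
print(total_gbytes_per_s)                         # 80.0
```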
7
Static Photonic Energy
Photonic interconnects: large static power dissipation (ring tuning); much lower dynamic energy consumption, relatively independent of distance
Electrical interconnects: relatively small static power dissipation; large dynamic energy consumption
Should not over-provision photonic bandwidth; use it only where necessary
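A rough way to see why over-provisioning photonics hurts: model each link's energy per transferred bit as a static (ring-tuning) term amortized over the bits actually moved, plus a per-bit dynamic term, and compare photonic and electrical links as utilization varies. This is a toy model of my own; the constants are arbitrary placeholders chosen only to make the crossover visible, not numbers from the talk.

```python
# Toy model: energy per transferred bit as a function of link utilization.
# Photonic link: high static (tuning) power, tiny per-bit dynamic energy.
# Electrical link: small static power, larger per-bit dynamic energy.
# All constants are illustrative placeholders, not measured values.

def energy_per_bit_pj(static_mw, dyn_pj_per_bit, utilization, peak_gbps):
    """Amortize the static power over the bits actually transferred."""
    bits_per_s = utilization * peak_gbps * 1e9
    static_pj_per_bit = (static_mw * 1e-3) / bits_per_s * 1e12
    return static_pj_per_bit + dyn_pj_per_bit

for util in (0.01, 0.10, 0.50):
    photonic = energy_per_bit_pj(static_mw=100.0, dyn_pj_per_bit=0.1,
                                 utilization=util, peak_gbps=640)
    electrical = energy_per_bit_pj(static_mw=1.0, dyn_pj_per_bit=1.0,
                                   utilization=util, peak_gbps=640)
    print(f"util={util:.2f}  photonic={photonic:.2f} pJ/b  electrical={electrical:.2f} pJ/b")
# With these placeholders, electrical wins at 1% and 10% utilization and
# photonic wins at 50%: photonics pays off only on busy links.
```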
8
The Questions We’re Trying to Answer
What should the role of electrical signaling be?
How do we make photonics less invasive to memory die design?
Should we replace all interconnects with photonics? On-chip too?
What should the role of 3D be in an optically connected memory?
Should we be designing photonic DRAM dies? Stacks? Channels?
9
Contributions Beyond Prior Work
Beamer et al. (ISCA 2010): first paper on fully integrated optical memory; studied the electrical-optical balance point; focused on losses and proposed photonic power guiding
We build upon this: focus on tuning power constraints, the effect of low-swing wires, and the effect of 3D and daisy-chaining
10
Energy Balance Within a DRAM Chip
[Figure: photonic energy vs. electrical energy within a DRAM chip]
11
Single Die Design
[Figure: energy for 1 photonic DRAM die, with full-swing vs. low-swing on-chip wires]
46% energy reduction going from the best full-swing config (4 stops) to the best low-swing config (1 stop)
Similar to a state-of-the-art design, based on prior work
Argues for a specially designed photonic DRAM
More efficient on-chip electrical communication provides the added benefit of allowing fewer photonic resources
12
3D Stacking Imminent for Capacity
Simply stack photonic dies?
Vertical coupling and hierarchical power guiding suggested by prior work; this is our baseline design
But: more photonic rings in the channel, with exactly the same number active as before
Energy-optimal point shifts towards fewer “stops”; a single set of rings becomes optimal
2.4x energy consumption, for 8x capacity
[Figure: 8 optimally designed photonic DRAM dies, based on published papers from memory manufacturers]
13
Key Idea – Exploiting TSVs
Move all photonic components to a separate interface die, shared by several memory dies
Photonics off-chip only; TSVs for inter-die communication
Best of both worlds: high BW and low static energy
Efficient low-swing wires on-die
[Figure: 8 optimally designed photonic DRAM dies vs. 8 commodity DRAM dies on a single photonic interface die]
14
Photonic Interface die
Proposed Design
ADVANTAGE 1: Increased activity factor, more efficient use of photonics
ADVANTAGE 2: Rings are co-located; easier to isolate or tune thermally
ADVANTAGE 3: Not disruptive to the design of commodity memory dies
[Figure: processor and memory controller connected by a waveguide to DIMMs of DRAM chips stacked on a photonic interface die]
15
Energy Characteristics
[Figure: energy with a single die on the channel, and with four 8-die stacks on the channel]
Static energy trumps distance-independent dynamic energy
16
Final System
[Figure: processor and memory controller connected by a waveguide to DIMMs of DRAM chips stacked on a photonic interface die]
23% reduced energy consumption
4X capacity per channel
Potential for performance improvements due to increased bank count
Less disruptive to memory die design
But: makes the job of the memory controller difficult!
17
PART 2 – Communication Protocol
18
The Scalability Problem
Large capacity, high bandwidth, and evolving technology trends will increase pressure on the memory interface
Processor-side support required for every memory innovation
Current micro-management requires several signals; heavy pressure on the address/command bus
Worse with several independent banks and large amounts of state
19
Proposed Solution
Release the MC's tight control; make the memory stack more autonomous
Move mundane tasks to the interface die:
Maintenance operations (refresh, scrub, etc.)
Routine operations (DRAM precharge, NVM wear leveling)
Timing control (18+ constraints for DRAM alone)
Coding and any other special requirements
20
What would it take to do this?
“Back-pressure” from the memory
But a “free-for-all” would be inefficient; needs explicit arbitration
Novel slot-based interface: the memory controller retains control over the data bus; the memory module only needs an address, and returns data
21
Memory Access Operation
[Timeline figure: request arrival, issue, start looking, first free slot S1, backup slot S2; already-reserved slots marked x; time axis]
Slot: cache-line data bus occupancy
x: reserved slot
ML: memory latency = address latency + bank access + data bus latency
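A minimal sketch of how the slot reservation described above (and the speculative backup slot covered in the backup slides) might work. The data structure, function names, and the backup-slot spacing are mine, offered only as an illustration under those assumptions; latencies are in abstract slot units.

```python
# Sketch of the slot-based interface: the memory controller owns the data-bus
# schedule and, per request, reserves the first free slot at or after the
# earliest time data could be ready (ML = address latency + bank access +
# data bus latency), plus a speculative backup slot farther out for the
# uncommon case (bank conflict, refresh, low-power wakeup).

BACKUP_GAP = 8  # slots between the primary and backup reservation (assumed)

class DataBusSchedule:
    def __init__(self):
        self.reserved = set()   # slot indices already promised to requests

    def _first_free(self, start):
        slot = start
        while slot in self.reserved:
            slot += 1
        return slot

    def reserve(self, arrival_slot, memory_latency_slots):
        """Reserve a primary slot S1 and a speculative backup slot S2."""
        earliest = arrival_slot + memory_latency_slots  # start looking here
        s1 = self._first_free(earliest)
        s2 = self._first_free(s1 + BACKUP_GAP)
        self.reserved.update((s1, s2))
        return s1, s2

    def release_backup(self, s2):
        """Common case: data returned in S1, so S2 can be reused."""
        self.reserved.discard(s2)

# Example: a request arriving at slot 3 with an ML of 5 slots.
bus = DataBusSchedule()
s1, s2 = bus.reserve(arrival_slot=3, memory_latency_slots=5)
print(s1, s2)           # 8 16 (with an empty schedule)
bus.release_backup(s2)  # data arrived on time; free the backup slot
```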
22
Advantages
Plug and play: everything is interchangeable and interoperable; only interface-die support required (communicate ML)
Better support for heterogeneous systems: easier DRAM-NVM data movement on the same channel
More innovation in the memory system, without processor-side support constraints
Fewer commands between processor and memory: energy and performance advantages
23
Target System and Methodology
Terascale memory node in an Exascale system: 1 TB of memory, 1 TB/s of bandwidth
Assuming 80 GB/s per channel, we need 16 channels, with 64 GB per channel
2 GB dies x 8 dies per stack x 4 stacks per channel
Focus on the design of a single channel
In-house DRAM simulator + SIMICS
PARSEC, STREAM, and synthetic random traffic; maximum traffic load used, just below channel saturation
This is a very aggressive design target
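The channel sizing falls out of simple arithmetic; a quick check using the numbers on the slide (variable names are mine):

```python
# Sizing a 1 TB / 1 TB/s memory node out of 80 GB/s photonic channels.
target_capacity_gb = 1024     # 1 TB
channel_bw_gb_s = 80          # per channel
die_capacity_gb = 2
dies_per_stack = 8
stacks_per_channel = 4

capacity_per_channel = die_capacity_gb * dies_per_stack * stacks_per_channel
channels = target_capacity_gb // capacity_per_channel
aggregate_bw = channels * channel_bw_gb_s

print(capacity_per_channel)   # 64 GB per channel
print(channels)               # 16 channels
print(aggregate_bw)           # 1280 GB/s, above the 1 TB/s target
```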
24
Performance Impact – Synthetic Traffic
< 9% latency impact, even at maximum load
Virtually no impact on achieved bandwidth
25
Performance Impact – PARSEC/STREAM
Apps have very low BW requirements
Scaled-down system shows similar trends
26
Tying it together – The Interface Die
27
Summary of Design
Proposed 3D-stacked interface die with 2 major functions:
1. Holds photonic devices for electrical-optical-electrical conversion; photonics only on the busy shared bus between this die and the processor; intra-memory communication is all-electrical, exploiting TSVs and low-swing wires
2. Holds device controller logic; handles all mundane/routine tasks for the memory devices (refresh, scrub, coding, timing constraints, sleep modes, etc.)
Processor-side controller deals with more important functions such as scheduling and channel arbitration
Simple speculative slot-based interface
28
Key Contributions
Efficient application of photonics: 23% lower energy; 4X capacity, with potential for performance improvements; minimally disruptive to memory die design; a single memory die design for photonics and electronics
Streamlined memory interface: more interoperability and flexibility; innovation without processor-side changes; support for heterogeneous memory
29
Backup Slides
30
Laser Power Calculation
The detectors need to receive some minimum amount of photonic power in order to reliably determine 0/1; this depends on their “sensitivity”
Going from source to destination, there are several points of power loss: the waveguide, the rings, splitters, couplers, etc.
Work backwards to determine the total input laser power required
Also some concerns about “non-linearity” when total path loss exceeds a certain amount; rule of thumb ~20 dB
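Working backwards from detector sensitivity through the loss budget is just addition in dB; a minimal sketch of that calculation, where the individual component losses and the sensitivity value are illustrative placeholders rather than the numbers used in the paper:

```python
# Laser power budget: the laser must supply the detector sensitivity plus
# every loss along the optical path. Losses add in dB. The values below are
# illustrative placeholders only.
detector_sensitivity_dbm = -20.0   # minimum power the detector needs for 0/1

path_losses_db = {
    "coupler (on/off chip)": 1.0,
    "waveguide propagation": 2.0,
    "ring through/drop loss": 3.0,
    "splitters": 1.5,
}

total_loss_db = sum(path_losses_db.values())
required_laser_dbm = detector_sensitivity_dbm + total_loss_db

print(f"total path loss: {total_loss_db:.1f} dB")                 # 7.5 dB
print(f"required laser power: {required_laser_dbm:.1f} dBm per wavelength")

# The slide's rule of thumb: keep total path loss under ~20 dB to avoid
# non-linearity concerns.
assert total_loss_db < 20.0
```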
31
What if trimming only costs 50 µW/ring?
[Figure: energy with full-swing vs. low-swing on-chip wires, assuming 50 µW/ring trimming power]
32
Concentrated vs. Distributed
Energy considerations: more electrical traversal if rings are concentrated in one location; same photonic energy
BW considerations: entire BW used for a single cache line in the concentrated design; smaller serialization delay and lower queuing delay reduce overall memory access latency
Going from 1 to 8 stops: 67% latency increase for a 12% energy reduction; not worth it!
Distributed rings with cache lines striped across several arrays is one option, but this would increase overfetch
33
Thermal Impact
Unlike prior work, this is not memory stacked on the processor: there is only relatively cool DRAM, with no super-hot layer at the bottom, so thermal issues are not a big concern
Thermal simulations with HotSpot 5.0 show less than a 0.1 K temperature rise due to the additional activity on the interface die
34
Intermediate Design – Single Stack
Decision on the extent of photonic penetration is very similar to the single-die system, except for the addition of TSVs (this came virtually for free in the fully photonic 3D design)
But absolute energy comes down dramatically due to the elimination of idling rings: 48% less energy than simply stacking 8 photonic dies together
[Figure: 8 commodity DRAM dies on a single photonic interface die]
35
Handling the Uncommon Case
Memory controller (by design) no longer deals with the minutiae of per-bank state
Data may not be available at the reserved “slot” due to bank conflicts, refresh, or low-power wakeup
Speculatively reserve a second slot farther away
Slot-2 is far enough away that it can be reused by a subsequent request in the common case; the uncommon case suffers a latency penalty based on the location of Slot-2
Beyond Slot-2, requests simply have to be retried
36
Optically Connected Memory
Photonics is clearly useful off-chip: breaks the pin barrier and improves socket-edge BW, reduces energy consumption, allows larger capacity
Beyond this, what? How can we best apply photonics to the rest of the memory system?