Microarchitectural Wire Management for Performance and Power in Partitioned Architectures
Rajeev Balasubramonian, Naveen Muralimanohar, Karthik Ramani, Venkatanand Venkatachalapathy
University of Utah, February 14, 2005
Processor Architecture
Overview / Motivation
- Wire delays hamper performance.
- Power is incurred in the movement of data:
  - 50% of dynamic power is in interconnect switching (Magen et al., SLIP 2004).
  - The MIT Raw processor's on-chip network consumes 36% of total chip power (Wang et al., 2003).
- An abundant number of metal layers is available.
Wire characteristics
- Wire resistance and capacitance per unit length are set by wire width and spacing: width affects R and C, spacing affects the coupling C.
- These in turn determine delay (delay scales with RC) and bandwidth (wires per unit metal area).
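In first-order terms (my notation, not from the slides; w is width, s is spacing, t is thickness, l is length, rho is resistivity):

```latex
R_{wire} \;\propto\; \frac{\rho\, l}{w\, t},
\qquad
C_{wire} \;\approx\; C_{ground} + C_{coupling},
\quad C_{coupling} \;\propto\; \frac{l}{s},
\qquad
\text{delay} \;\propto\; R_{wire}\, C_{wire}
```

Widening a wire cuts its resistance, and larger spacing cuts coupling capacitance, so both reduce RC delay; the price is that fewer wires fit in a fixed metal budget, i.e. lower bandwidth.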
Design space exploration: tuning wire width and spacing
[Figure: baseline B wires of width/spacing d compared with wires of width/spacing 2d; the wider, more widely spaced wires have lower resistance and capacitance, and hence lower delay, but lower bandwidth per unit metal area.]
Transmission lines
- Similar to L wires: extremely low delay.
- But constraining implementation requirements:
  - Large width
  - Large spacing between wires
  - Design of sensing circuits
- Implemented in test CMOS chips.
Design space exploration: tuning repeater size and spacing
- Traditional wires: large repeaters at delay-optimal spacing.
- Power-optimal wires: smaller repeaters and increased spacing, trading delay for power.
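A minimal sketch of this tradeoff using a standard first-order repeated-wire model; every constant below (R0, C0, Rw, Cw, the two repeater configurations) is an illustrative assumption, not a value from the study:

```python
# First-order repeated-wire model: a wire of total resistance Rw and
# capacitance Cw is split into n segments, each driven by a repeater of
# size k (multiples of a minimum inverter with output resistance R0 and
# input capacitance C0).
def wire_delay(Rw, Cw, n, k, R0=10e3, C0=1e-15):
    r_seg, c_seg = Rw / n, Cw / n
    t_driver = 0.7 * (R0 / k) * (c_seg + k * C0)   # repeater charging its segment
    t_wire = r_seg * (0.4 * c_seg + 0.7 * k * C0)  # distributed wire RC
    return n * (t_driver + t_wire)

def wire_energy(Cw, n, k, C0=1e-15, vdd=1.0):
    # dynamic energy per transition: wire capacitance plus repeater input capacitance
    return (Cw + n * k * C0) * vdd ** 2

Rw, Cw = 4e3, 2e-12  # assumed totals for a long global wire
for label, n, k in [("delay-optimal", 20, 50), ("power-optimal", 10, 15)]:
    print(label, wire_delay(Rw, Cw, n, k), wire_energy(Cw, n, k))
```

With these made-up numbers the smaller, less frequent repeaters save roughly a quarter of the switching energy while roughly doubling the wire delay, which is the qualitative behavior the slide describes.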
Design space exploration: resulting wire types
- B wires: delay optimized
- W wires: bandwidth optimized
- P wires: power optimized
- PW wires: power and bandwidth optimized
- L wires: fast, low bandwidth
Heterogeneous interconnects: inter-cluster global interconnect
- 72 B wires: repeaters sized and spaced for optimal delay.
- 18 L wires: wide wires with large spacing; occupy more area, but offer low latencies.
- 144 PW wires: poor delay, high bandwidth, low power.
Outline
- Overview
- Design space exploration
- Heterogeneous interconnects
- Employing L wires for performance
- PW wires: the power optimizers
- Evaluation
- Results
- Conclusion
L1 cache pipeline (baseline)
- Effective address transfer from the cluster to the LSQ / L1 D-cache: 10 cycles
- Memory dependence resolution: 5 cycles
- Cache access: 5 cycles
- Data returns at cycle 20
Exploiting L wires in the L1 cache pipeline
- 8-bit (low-order) address transfer on L wires: 5 cycles, overlapped with the full effective-address transfer on B wires (10 cycles)
- Partial memory dependence resolution: 3 cycles
- Cache access: 5 cycles
- Data returns at cycle 14
L wires: accelerating cache access
- Transmit the LSB bits of the effective address on L wires.
- Partial comparison of loads and stores in the LSQ:
  - Faster memory disambiguation
  - Introduces false dependences (< 9%)
- Indexing the data and tag RAM arrays:
  - The LSB bits can prefetch data out of the L1 cache
  - Reduces the access latency of loads
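A toy sketch of partial memory disambiguation using only the early-arriving low-order address bits; the 8-bit field width and the function names are illustrative assumptions:

```python
LSB_MASK = 0xFF  # only the low-order 8 bits arrive early on the L wires

def partial_disambiguate(load_addr_lsb, earlier_store_lsbs):
    """Conservatively check a load against earlier, in-flight stores using
    only the low-order address bits. Returns True if the load may depend
    on some earlier store."""
    return any((s & LSB_MASK) == (load_addr_lsb & LSB_MASK)
               for s in earlier_store_lsbs)

# No partial match: the load provably conflicts with no earlier store and can
# issue early. A partial match may be a false dependence (different full
# addresses sharing the same low-order bits); it delays the load but never
# violates correctness. The slide reports fewer than 9% false dependences.
print(partial_disambiguate(0x1234, [0x5678, 0x9ABC]))  # False -> issue early
print(partial_disambiguate(0x1234, [0xFF34]))          # True  -> possible conflict
```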
L wires: narrow bit-width operands
- Transfer 10-bit integers on L wires.
- Schedule wakeup operations earlier.
- Reduction in the branch mispredict penalty.
- A predictor table of 8K two-bit counters:
  - Identifies 95% of all narrow bit-width results
  - Accuracy of 98%
- Implemented in the PowerPC!
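A minimal sketch of such a predictor: a PC-indexed table of 8K two-bit saturating counters that learns which instructions produce results fitting in the 10-bit field carried on L wires. The table size and operand width come from the slide; the indexing hash and the threshold are assumptions:

```python
TABLE_SIZE = 8192            # 8K two-bit saturating counters
NARROW_BITS = 10             # operand width that fits on the L-wire field
counters = [0] * TABLE_SIZE  # each counter is 0..3; predict "narrow" when >= 2

def index(pc):
    return (pc >> 2) % TABLE_SIZE  # assumed simple PC hash

def predict_narrow(pc):
    return counters[index(pc)] >= 2

def train(pc, result):
    i = index(pc)
    is_narrow = -(1 << (NARROW_BITS - 1)) <= result < (1 << (NARROW_BITS - 1))
    if is_narrow:
        counters[i] = min(3, counters[i] + 1)
    else:
        counters[i] = max(0, counters[i] - 1)

# Predicted-narrow results are forwarded on the low-latency L wires; when the
# full result turns out to be wider than predicted, it is re-sent on the
# regular wires and dependent instructions replay.
```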
PW wires: power/bandwidth efficient
- Idea: steer non-critical data through the energy-efficient PW interconnect.
  - Transfer of data at instruction dispatch
  - Transfer of input operands to a remote register file (covered by the long dispatch-to-issue latency)
  - Store data
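One way to picture the steering policy: transfers whose latency is hidden anyway go to PW wires, while latency-critical ones keep using the delay-optimized B wires. The criticality classes are the ones listed on the slide; the function itself is a hypothetical illustration, not the paper's mechanism:

```python
def choose_interconnect(transfer):
    """Steer an inter-cluster transfer to a wire class based on criticality."""
    non_critical = {
        "dispatch_operand_copy",  # register values copied at instruction dispatch
        "remote_input_operand",   # hidden by the long dispatch-to-issue latency
        "store_data",             # store data is rarely on the critical path
    }
    if transfer in non_critical:
        return "PW"   # slower but low-energy, high-bandwidth
    return "B"        # default delay-optimized wires

print(choose_interconnect("store_data"))   # PW
print(choose_interconnect("load_result"))  # B
```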
Evaluation methodology: four-cluster system
- A dynamically scheduled clustered processor with 4 clusters, modeled in Simplescalar-3.0.
- Centralized front-end, I-cache and D-cache, LSQ, and branch predictor.
- Crossbar interconnect latencies to the L1 D-cache:
  - B wires: 2 cycles
  - L wires: 1 cycle
  - PW wires: 3 cycles
Evaluation methodology: sixteen-cluster system
- A dynamically scheduled 16-cluster processor modeled in Simplescalar-3.0.
- Centralized I-cache, D-cache, and LSQ; clusters connected through crossbars and a ring interconnect.
- Ring hop latencies:
  - B wires: 4 cycles
  - PW wires: 6 cycles
  - L wires: 2 cycles
IPC improvements: L wires
- Adding L wires improves performance by 4% on the four-cluster system and by 7.1% on the sixteen-cluster system.
Four-cluster system: ED² gains
[Table: for each link configuration (144 B; PW; PW + 36 L; B + 36 L) the slide lists relative metal area, IPC, relative processor energy (10%), and relative ED² under the 10% and 20% interconnect-energy assumptions; the numeric entries did not survive extraction.]
Sixteen-cluster system: ED² gains
[Table: columns are Link, IPC, relative processor energy (20%), and relative ED² (20%); link configurations include 144 B, PW + 36 L, B, and B + 36 L; the numeric entries did not survive extraction.]
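For reference, the ED² metric weights energy by the square of execution time, with delay taken as the inverse of IPC. A small helper; the example numbers are hypothetical, chosen only to be in the ballpark of the reported gains:

```python
def relative_ed2(rel_energy, rel_ipc):
    """Energy-delay-squared relative to the baseline.
    rel_energy: new_energy / base_energy
    rel_ipc:    new_IPC / base_IPC (execution time scales as 1 / IPC)"""
    rel_delay = 1.0 / rel_ipc
    return rel_energy * rel_delay ** 2

# Hypothetical example: IPC improves by 7% while energy stays roughly flat.
print(relative_ed2(rel_energy=0.98, rel_ipc=1.07))  # ~0.86, i.e. ~14% ED^2 gain
```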
Conclusions
- Exposing the wire design space to the architecture: a case for microarchitectural wire management.
- A low-latency, low-bandwidth network alone improves performance by up to 7%.
- ED² improvements of about 11% compared to a baseline processor with a homogeneous interconnect.
- Entails additional hardware complexity.
Future work
- A preliminary evaluation looks promising.
- The heterogeneous interconnect entails complexity.
- Design of heterogeneous clusters.
- Energy-efficient interconnects.
Questions and Comments? Thank you!
Backup
L wires: accelerating cache access
- TLB access for page lookup: transmit a few bits of the virtual page number on L wires.
- Prefetch data out of the L1 cache and TLB.
- 18 L wires: 6 tag bits, 8 L1 index bits, and 4 TLB index bits.

Link latencies (cycles):
  Wire type | Crossbar delay | Ring hop delay
  PW wires  | 3              | 6
  B wires   | 2              | 4
  L wires   | 1              | 2
Model parameters
- Simplescalar-3.0 with separate integer and floating-point queues
- 32 KB 2-way instruction cache
- 32 KB 4-way data cache
- 128-entry 8-way instruction and data TLBs
Overview/Motivation: wire implementations
- Three wire implementations are employed in this study:
  - B wires: traditional; optimal delay, but huge power consumption.
  - L wires: faster than B wires, but lower bandwidth.
  - PW wires: reduced power consumption and higher bandwidth compared to B wires, at the cost of increased delay.