Presentation transcript: "Programmable processors for wireless base-stations"

1 Programmable processors for wireless base-stations
Sridhar Rajagopal December 11, 2003

2 Wireless rates vs. clock rates
[Figure: log-scale plot vs. year (1996-2006) of clock frequency (MHz), W-LAN data rate (Mbps), and cellular data rate (Mbps). Clock frequency grows from 200 MHz to ~4 GHz; W-LAN rates reach 2-10 Mbps; cellular rates grow from 9.6 Kbps to ~1 Mbps.]
Need to process 100X more bits per clock cycle today than in 1996

3 Base-stations need horsepower
[Figure: base-station processing pipeline — RF (analog), then 'chip rate' processing in ASIC(s) and/or ASSP(s) and FPGA(s), 'symbol rate' processing in DSP(s), decoding in co-processor(s), and control and protocol on a DSP or RISC processor.]
- Sophisticated signal processing for multiple users
- Need 100s of arithmetic operations to process 1 bit
- Base-stations require > 100 ALUs

4 Power efficiency and flexibility
- Power efficiency implies not wasting power – it does not imply low power
- "Wireless gets blacked out too: Trying to use your cell phone during the blackout was nearly impossible. What went wrong?" – Paul R. La Monica, CNN/Money, August 16, 2003
- Wireless systems are getting harder to design
  - Evolving standards, compatibility issues
  - More base-stations per unit area raise operational and maintenance costs
- Flexibility provides power efficiency
  - Base-stations rarely operate at full capacity
  - Varying users, data rates, spreading, modulation, coding
  - Adapt resources to needs

5 Thesis addresses the following problem
I want to design programmable processors for wireless base-stations with 100s of ALUs:
(a) map wireless algorithms onto these processors
(b) make them power-efficient (adapt resources to needs)
(c) decide #ALUs and clock frequency
How programmable? – as programmable as possible

6 Choice: Stream processors
- Single processors won't do
  - ILP and subword parallelism are not sufficient
  - Register file explosion with increasing ALUs
- Multiprocessors
  - Data parallelism in wireless systems
  - SIMD (vector) processors appropriate
- Stream processors – media processing
  - Share characteristics with wireless systems
  - Shown potential to support 100s of ALUs
  - Cycle-accurate simulator and compiler tools available

7 Thesis contributions
(a) Mapping algorithms on stream processors
  - Designing data-parallel algorithm versions
  - Tradeoffs between packing, ALU utilization and memory
  - Reduced inter-cluster communication network
(b) Improving power efficiency in stream processors
  - Adapting compute resources to workload variations
  - Varying voltage and frequency to meet real-time requirements
(c) Design exploration between #ALUs and clock frequency to minimize power consumption
  - Fast real-time performance prediction

8 Outline
- Background
  - Wireless systems
  - Stream processors
- Mapping algorithms to stream processors
- Power efficiency
- Design exploration
- Broad impact and future work

9 Wireless workloads
                          2G (1996)         3G (2004)            4G (?)
  Users                   32                –                    –
  Data rates              16 Kbps/user      128 Kbps/user        1 Mbps/user
  Algorithms              Single-user       Multi-user           MIMO
  Estimation              Correlator        Max. likelihood      –
  Detection               Matched filter    Interference canc.   Chip equalizer
  Decoding                Viterbi           –                    LDPC
  Theoretical min
  ALUs @ 1 GHz            > 2               > 20                 > 200

10 Key kernels studied for wireless
- FFT – media processing
- QRD – media processing
- Outer product updates
- Matrix–vector operations
- Matrix–matrix operations
- Matrix transpose
- Viterbi decoding
- LDPC decoding

11 Characteristics of wireless
- Compute-bound
- Finite precision
- Limited temporal data reuse – streaming data
- Data parallelism
- Static, deterministic, regular workloads
- Limited control flow

12 Parallelism levels in wireless systems
    int i, a[N], b[N], sum[N];        // 32 bits
    short int c[N], d[N], diff[N];    // 16 bits, packed

    for (i = 0; i < N; ++i) {         // N = 1024
        sum[i]  = a[i] + b[i];
        diff[i] = c[i] - d[i];
    }

- Instruction Level Parallelism (ILP) – DSP
- Subword parallelism (MMX) – DSP
- Data Parallelism (DP) – vector processor
- DP can decrease by increasing ILP and MMX, e.g. via loop unrolling (see the sketch below)
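A minimal sketch of the loop-unrolling point, reusing the declarations above and assuming N is a multiple of 4: after unrolling, each iteration carries four independent additions that can be scheduled as ILP on one cluster's adders, so the remaining loop-level DP drops from N to N/4.

    /* Unrolled by 4: DP traded for ILP. */
    for (i = 0; i < N; i += 4) {
        sum[i]     = a[i]     + b[i];
        sum[i + 1] = a[i + 1] + b[i + 1];
        sum[i + 2] = a[i + 2] + b[i + 2];
        sum[i + 3] = a[i + 3] + b[i + 3];
    }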

13 Stream Processors: multi-cluster DSPs
[Figure: stream processor organization — a micro-controller issues the same VLIW instruction to identical arithmetic clusters; each cluster contains adders (+) and multipliers (*) and exploits ILP and MMX internally, while DP is exploited across clusters; clusters read from and write to the Stream Register File (SRF), backed by internal memory. A single cluster corresponds to a VLIW DSP.]
- Identical clusters, same operations; adapt #clusters to DP (toy sketch below)
- Power down unused FUs and clusters
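A toy illustration of the SIMD-across-clusters idea, assuming 8 clusters, arrays a, b, out as on slide 12, and n a multiple of 8; this is a software analogy, not the machine's actual execution model:

    #define CLUSTERS 8
    /* Each "cluster" applies the same operation to its own element in
       lockstep: DP across clusters, ILP/MMX inside each cluster. */
    for (int i = 0; i < n; i += CLUSTERS)
        for (int c = 0; c < CLUSTERS; ++c)   /* conceptually parallel */
            out[i + c] = a[i + c] + b[i + c];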

14 Programming model
Communication (stream program):

    stream<int>   a(1024);
    stream<int>   b(1024);
    stream<int>   sum(1024);
    stream<half2> c(512);
    stream<half2> d(512);
    stream<half2> diff(512);

    add(a, b, sum);
    sub(c, d, diff);

Computation (kernels):

    kernel add(istream<int> a, istream<int> b, ostream<int> sum) {
        int inputA, inputB, output;
        loop_stream(a) {
            a >> inputA;
            b >> inputB;
            output = inputA + inputB;
            sum << output;
        }
    }

    kernel sub(istream<half2> c, istream<half2> d, ostream<half2> diff) {
        int inputC, inputD, output;
        loop_stream(c) {
            c >> inputC;
            d >> inputD;
            output = inputC - inputD;
            diff << output;
        }
    }

"Your new hardware won't run your old software" – Balch's law

15 Outline
- Background
  - Wireless systems
  - Stream processors
- Mapping algorithms to stream processors
- Power efficiency
- Design exploration
- Broad impact and future work

16 Viterbi needs odd-even grouping
[Figure: 16-state trellis X(0)-X(15) showing the DP vector, regular add-compare-select (ACS), and ACS re-ordered for SWAPs.]
Exploiting Viterbi DP in SWAPs:
- Use register exchange (RE) instead of regular traceback
- Re-order ACS and RE

17 Performance of Viterbi decoding
[Figure: frequency needed to attain real-time (MHz, log scale 1-1000) vs. number of clusters, for constraint lengths K = 5, 7, 9, comparing a DSP, the maximum-DP mapping, and the ideal case.]
A C64x (w/o co-processor) needs ~200 MHz for real-time

18 Patterns in inter-cluster comm
- Inter-cluster comm network is fully connected
- Structure in access patterns can be exploited:
  - Broadcasting: matrix-vector multiplication, matrix-matrix multiplication, outer product updates (sketch below)
  - Odd-even grouping: transpose, packing, Viterbi decoding
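A small sketch of why matrix-vector multiplication exposes the broadcast pattern, assuming an n-by-n matrix A, vectors x and y with y zero-initialized, and rows i striped across clusters; in step j every cluster needs the same value x[j]:

    /* y = A*x, accumulated column by column: x[j] is broadcast to all
       clusters, each of which updates the rows it owns. */
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)     /* i striped across clusters */
            y[i] += A[i][j] * x[j];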

19 Odd-even grouping
Packing:
- Overhead when input and output precisions are different
- Not always beneficial for performance
- Odd-even grouping required to bring data to the right cluster
Matrix transpose:
- Better done in ALUs than in memory – shown to have an order-of-magnitude better performance
- Done in ALUs as repeated odd-even groupings

20 Odd-even grouping
Example (sketched in code below): (0 1 2 3 4 5 6 7) → (0 2 4 6 1 3 5 7)
[Figure: 4 clusters holding data 0/4, 1/5, 2/6, 3/7, connected by a fully-connected inter-cluster communication network whose wires span the entire chip length.]
- Wires spanning the entire chip length limit clock frequency and limit scaling
- O(C^2) wires, O(C) interconnections, 8 cycles
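A minimal C sketch of one grouping pass, with a plain array standing in for data spread across clusters; the matrix transpose of slide 19 can be built from repeated passes like this.

    /* Odd-even grouping: even-indexed elements to the first half,
       odd-indexed elements to the second half (n even).
       (0 1 2 3 4 5 6 7) -> (0 2 4 6 1 3 5 7) */
    void odd_even_group(const int *in, int *out, int n) {
        for (int i = 0; i < n / 2; ++i) {
            out[i]         = in[2 * i];
            out[n / 2 + i] = in[2 * i + 1];
        }
    }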

21 A reduced inter-cluster comm network
[Figure: reduced network for 4 clusters holding data 0/4, 1/5, 2/6, 3/7 — multiplexers and demultiplexers with pipelining registers, supporting broadcasting and odd-even grouping using only nearest-neighbor interconnections.]
- O(C log(C)) wires, O(C) interconnections, 8 cycles

22 Outline
- Background
  - Wireless systems
  - Stream processors
- Mapping algorithms to stream processors
- Power efficiency
- Design exploration
- Broad impact and future work

23 Flexibility needed in workloads
[Figure: operation count (GOPs) and minimum ALUs needed at 1 GHz for workloads (Users, Constraint length) from (4,7) to (32,9), spanning a 2G base-station (16 Kbps/user) and a 3G base-station (128 Kbps/user).]
3G workload varies from ~1 GOPs (4 users, constraint-length-7 Viterbi) to ~23 GOPs (32 users, constraint-length-9 Viterbi)

24 Flexibility affects DP*
U – users, K – constraint length, N – spreading gain, R – decoding rate

  Workload (U,K)   Estimation f(U,N)   Detection f(U,N)   Decoding f(U,K,R)
  (4,7)            32                  4                  16
  (4,9)            32                  4                  64
  (8,7)            32                  8                  16
  (8,9)            32                  8                  64
  (16,7)           32                  16                 16
  (16,9)           32                  16                 64
  (32,7)           32                  32                 16
  (32,9)           32                  32                 64

*Data parallelism is defined as the parallelism available after subword packing and loop unrolling

25 When DP changes: 4 → 2 clusters
- Data is not in the right SRF banks
- Overhead in bringing data to the right banks:
  - Via memory
  - Via the inter-cluster communication network

26 Adapting #clusters to Data Parallelism
[Figure: clusters connected to SRF banks through an adaptive multiplexer network; configurations shown for no reconfiguration, 4:2 reconfiguration, 4:1 reconfiguration, and all clusters off.]
- Unused clusters are turned off using voltage gating to eliminate static and dynamic power dissipation

27 Cluster utilization variation
[Figure: cluster utilization (%) vs. cluster index for workloads (32,7) and (32,9) on a 32-cluster processor.]
(32,9) = 32 users, constraint length 9 Viterbi

28 Frequency variation on 32 clusters
[Figure: real-time frequency (MHz, up to ~1200) on 32 clusters for workloads (4,7) through (32,9), broken down into busy time, microcontroller (uC) stalls, and memory stalls.]

29 Operation
- Dynamic voltage-frequency scaling when the system changes significantly
  - Users, data rates, …
  - Coarse time scale (every few seconds)
- Turn off clusters when parallelism changes significantly or memory operations exceed real-time requirements
  - Finer time scales (100s of microseconds; policy sketch below)
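A minimal sketch of this two-time-scale policy; all helper names are hypothetical stand-ins for the real runtime hooks, not from the thesis:

    #include <stdbool.h>

    extern bool workload_changed(void);        /* coarse: every few seconds   */
    extern int  required_frequency_mhz(void);  /* from real-time prediction   */
    extern void set_voltage_frequency(int mhz);
    extern int  measure_dp(void);              /* fine: ~100s of microseconds */
    extern int  active_clusters(void);
    extern void gate_clusters(int n_off);      /* voltage gating (slide 26)   */

    void adapt_step(void) {
        if (workload_changed())                /* users, data rates, ...      */
            set_voltage_frequency(required_frequency_mhz());
        int dp = measure_dp();
        if (dp < active_clusters())            /* DP dropped: gate the extras */
            gate_clusters(active_clusters() - dp);
    }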

30 Power: Voltage Gating & Scaling
Power can change from ~12 W to 300 mW (40x savings), depending on workload changes

31 Outline
- Background
  - Wireless systems
  - Stream processors
- Mapping algorithms to stream processors
- Power efficiency
- Design exploration
- Broad impact and future work

32 Deciding ALUs vs. clock frequency
- No independent variables: clusters (c), adders (a), multipliers (m), frequency (f), and voltage are all coupled
- Trade-offs exist
- How to find the right combination for lowest power?

33 Static design exploration
- Execution time = static, predictable part (computations) + dynamic part (memory stalls, microcontroller stalls)
- Also helps in quickly predicting real-time performance (sketch below)
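A minimal sketch of how such a decomposition might predict real-time performance; the formula and names are illustrative assumptions, with compute cycles taken from the static schedule and stalls modeled as a measured fraction of run time:

    /* Predicted minimum clock frequency (Hz) to meet a real-time deadline.
       Useful cycles available are f * deadline * (1 - stall_fraction),
       which must cover the statically known compute_cycles. */
    double realtime_frequency(double compute_cycles,
                              double stall_fraction,   /* 0 <= s < 1 */
                              double deadline_seconds) {
        return compute_cycles / (deadline_seconds * (1.0 - stall_fraction));
    }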

34 Sensitivity analysis is important
- We have a capacitance model [Khailany2003]
- Not all equations are exact
- Need to see how variations affect the solutions

35 Design exploration methodology
- 3 types of parallelism: ILP, MMX, DP
- For best performance (power), maximize the use of all three:
  - Maximize ILP and MMX at the expense of DP (loop unrolling, packing)
  - Schedule on a sufficient number of adders/multipliers
  - If DP remains, set clusters = DP (no other way to exploit that parallelism)

36 Setting clusters, adders, multipliers
- If there is sufficient DP, frequency decreases linearly with clusters
- Set clusters depending on DP and the execution-time estimate
- To find adders and multipliers:
  - Let the compiler schedule the algorithm workloads across different numbers of adders and multipliers and report the execution time
  - Put all the numbers into the power equation
  - Compare the increase in capacitance due to added ALUs and clusters against the benefit in execution time
  - Choose the solution that minimizes power (search-loop sketch below)
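A minimal sketch of that search loop, assuming a generic dynamic-power model P ≈ C·V²·f; cap_model, cycles_for, and voltage_for are hypothetical stand-ins for the capacitance model [Khailany2003], the compiler's schedule lengths, and the voltage-frequency relation, and the search ranges are illustrative:

    #include <float.h>

    extern double cap_model(int c, int a, int m);   /* switched capacitance   */
    extern double cycles_for(int c, int a, int m);  /* compiler schedule len  */
    extern double voltage_for(double freq_hz);      /* V needed to run at f   */

    void explore(double deadline, int *best_c, int *best_a, int *best_m) {
        double best_power = DBL_MAX;
        for (int c = 1; c <= 64; c *= 2)            /* clusters               */
            for (int a = 1; a <= 5; ++a)            /* adders per cluster     */
                for (int m = 1; m <= 3; ++m) {      /* multipliers per cluster*/
                    double f = cycles_for(c, a, m) / deadline;
                    double v = voltage_for(f);
                    double p = cap_model(c, a, m) * v * v * f;  /* P = CV^2f */
                    if (p < best_power) {
                        best_power = p;
                        *best_c = c; *best_a = a; *best_m = m;
                    }
                }
    }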

37 Design exploration for clusters (c)
[Figure: DP vs. ILP trade-off per algorithm, for a sufficiently large number of adders and multipliers per cluster.]
- Explore Algorithm 1: 32 clusters
- Explore Algorithm 2: 64 clusters
- Explore Algorithm 3: 64 clusters
- Explore Algorithm 4: 16 clusters

38 Clusters: frequency and power
[Figure: real-time frequency f(c) (MHz) vs. clusters, and normalized power vs. clusters for the 3G workload, with power curves for exponents p = 1, 2, 3 (power ∝ f^p).]
- p = 1: 32 clusters minimize power
- p = 2: 64 clusters
- p = 3: 64 clusters

39 ALU utilization with frequency
[Figure: real-time frequency (MHz, ~500-1100) as a function of #adders (1-5) and #multipliers per cluster for the 3G workload; each design point is annotated with its FU utilization pair (+,*), e.g. (32,28), (55,62), (78,45).]

40 Choice of adders and multipliers
  (·, fp)     Optimal adders/cluster   Optimal multipliers/cluster   Power (cluster/total)
  (0.01, 1)   2                        1                             30 / 61
  (0.01, 2)   –                        –                             –
  (0.01, 3)   3                        –                             25 / 58
  (0.1, 1)    –                        –                             52 / 69
  (0.1, 2)    –                        –                             –
  (0.1, 3)    –                        –                             51 / 68
  (1, 1)      –                        –                             86 / 89
  (1, 2)      –                        –                             84 / 87
  (1, 3)      –                        –                             –

41 Exploration results
Final design conclusion:
- Clusters: 64
- Multipliers/cluster: 1 (multiplier utilization: 62%)
- Adders/cluster: 3 (adder utilization: 55%)
- Real-time frequency: ~1 GHz for 128 Kbps/user
- Exploration done in seconds…

42 Outline
- Background
  - Wireless systems
  - Stream processors
- Mapping algorithms to stream processors
- Power efficiency
- Design exploration
- Broad impact and future work

43 Broader impact
- Results not specific to base-stations
  - High-performance, low-power system designs
  - Concepts can be extended to handsets
- Mux network applicable to all SIMD processors
- Power efficiency in scientific computing
- Results #2 and #3 (power efficiency, design exploration) applicable to all stream applications
  - Multimedia, MPEG, …

44 Future work
- Don't believe the model is the reality ("the proof is in the pudding")
  - Fabrication needed to verify concepts; so far: cycle-accurate simulator and extrapolated power models
- LDPC decoding (in progress)
  - Sparse matrix requires permutations over large data
  - Indexed SRF may help
- 3G requires 1 GHz at 128 Kbps/user
- 4G equalization at 1 Mbps breaks down (expected)

45 Need for new architectures, definitions and benchmarks
- Road ends for conventional architectures [Agarwal2000]
- Wide range of architectures – DSP, ASSP, ASIP, reconfigurable, stream, ASIC, programmable
  - Difficult to compare and contrast
  - Need new definitions that allow comparisons
- Wireless workloads
  - Typically ASIC designs
  - A SPEC-like benchmark needed for programmable designs

46 Conclusions
- Utilizing 100s of ALUs per clock cycle and mapping algorithms is not easy in programmable architectures
  - Data-parallel algorithm versions need to be designed and mapped
  - Power efficiency needs to be provided
  - Design exploration needed to decide #ALUs to meet real-time constraints
- My thesis lays the initial foundations

