Baseband LTE Compression
Jinseok Choi and Brian L. Evans
Wireless Networking & Communications Group, The University of Texas at Austin
In collaboration with Robert W. Heath, Jr., and Jeonghun Park
Hi, I'm Jinseok Choi, working with Professor Brian Evans. I'm going to introduce a new spatial compression method for LTE baseband signals in Cloud Radio Access Networks.
Traditional Radio Access Network

Network Trend
- Rapidly growing mobile traffic
- Dense antenna deployment
- Cell size reduction

Limitations
- Interference
- High operating and capital expenditures

BS | RRH | BBU | UE
A traditional RAN is organized as shown in the picture on the left: each cell has a remote radio head co-located with its baseband processing unit. Rapidly growing mobile traffic forces antennas to be deployed more densely with smaller cells, and this exposes the limitations of the traditional RAN: handoffs become harder to manage, and operating (OPEX) and capital (CAPEX) expenditures rise [Ericsson, Akamai, 2013].
RRH: remote radio head; BBU: baseband processing unit; UE: user equipment
Cloud Radio Access Network

Cloud Radio Access Networks (C-RANs)
- Separate radio heads and baseband processing units
- Share processing resources in the cloud
- Increase energy efficiency vs. traditional RANs
- Support growing mobile traffic

Radio Interface
- Transports complex-baseband wireless samples
- Needs an expensive link to support high data rates

To deal with these trends and limitations, Cloud Radio Access Networks have been proposed: they increase energy efficiency over traditional RANs while supporting growing mobile traffic, and they make it easier to share processing resources in the cloud. Their defining feature is the separation of remote radio heads from baseband processing units. The Common Public Radio Interface (CPRI) is one of the standards for the fronthaul link between RRHs and BBUs; it transports complex-baseband wireless samples. The issue with this link is that supporting high data rates requires expensive links.
fronthaul | cloud | RRH: remote radio head; BBU: baseband processing unit; UE: user equipment
Challenge: Fronthaul Capacity Constraints

CPRI Data Rates Per Sector (fronthaul links)
Number of Antennas | 10 MHz LTE | 20 MHz LTE
2 | 1.2288 Gbps | 2.4576 Gbps
4 | 2.4576 Gbps | 4.9152 Gbps
8 | 4.9152 Gbps | 9.8304 Gbps

- Very expensive links
- Poor scalability to LTE-A (100 MHz)

These are the so-called fronthaul capacity constraints. As the table of CPRI data rates per sector shows, the fronthaul needs very expensive links and scales poorly to LTE-A. Therefore, compressing the baseband I/Q samples before sending them over the fronthaul links is necessary.
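The table entries can be reproduced from first principles. The sketch below assumes a common CPRI parameterization: 15-bit I and Q words, a 16/15 control-word overhead, and 8b/10b line coding; these bit widths and overhead factors are assumptions for illustration, not values stated on the slide.

```python
def cpri_rate_gbps(n_ant, fs_msps, bits=15):
    """CPRI line rate for one sector: fs samples/s per antenna,
    two words (I and Q) of `bits` each, times overhead factors."""
    payload = fs_msps * 1e6 * n_ant * 2 * bits   # bits/s of raw I/Q
    payload *= 16 / 15                           # control-word overhead
    payload *= 10 / 8                            # 8b/10b line coding
    return payload / 1e9

# 10 MHz LTE is sampled at 15.36 Msps; 2 antennas -> 1.2288 Gbps
print(cpri_rate_gbps(2, 15.36))
```

Doubling either the bandwidth or the antenna count doubles the rate, which is why the table scales the way it does.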
Solution I: Time-Domain Compression [Nieman & Evans, 2013]

- Lloyd-Max quantization: MMSE quantization for Gaussian signals (W antennas)
- Noise-shaping filter: 5th-order IIR Chebyshev Type II filter that pushes noise power into the guard band
*Operations in the uplink are reciprocal

Lloyd-Max Quantization
- Minimizes MSE for a given probability density function
- Derives quantization levels in closed form

Noise Shaping
- Shapes quantization noise into the guard band
- Increases SQNR

The time-domain I/Q samples are quantized with Lloyd-Max quantization, followed by recursive error feedback for noise shaping. In theory, further compression by cyclic-prefix removal and resampling can achieve up to 5.3x compression. Operations in the uplink are reciprocal.
Noise Shaping Effect
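The Lloyd-Max design loop described above can be sketched as follows, alternating between nearest-neighbor decision boundaries and centroid (conditional-mean) level updates. Training on Gaussian samples and the iteration count are illustrative assumptions, not details from the slide.

```python
import numpy as np

def lloyd_max(samples, n_levels, iters=50):
    """Design a Lloyd-Max (MMSE) scalar quantizer from training samples:
    alternate midpoint decision boundaries with centroid level updates."""
    levels = np.linspace(samples.min(), samples.max(), n_levels)
    for _ in range(iters):
        bounds = (levels[:-1] + levels[1:]) / 2   # nearest-neighbor boundaries
        idx = np.digitize(samples, bounds)        # cell index of each sample
        for k in range(n_levels):
            cell = samples[idx == k]
            if cell.size:                         # centroid (conditional mean)
                levels[k] = cell.mean()
    return levels, bounds

# 3-bit quantizer for a unit-variance Gaussian source
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
levels, bounds = lloyd_max(x, 8)
mse = np.mean((x - levels[np.digitize(x, bounds)]) ** 2)
```

For a unit-variance Gaussian the 8-level Lloyd-Max quantizer attains an MSE of roughly 0.035, noticeably better than a uniform quantizer over the same range.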
Validation: Time-Domain Compression [Nieman & Evans, 2013]

Contributions
- Achieves 3x compression
- Keeps error vector magnitude (EVM) < 2%

Limitations
- Each antenna's baseband I/Q stream is compressed separately

Results shown for 5 MHz and 1.4 MHz bandwidths, Channel Quality Indicator (CQI) = 15, Ped. A channel.
Solution II: Spatial-Domain Compression

Main idea
- Exploit space-time correlation between antennas
- Can be applied to the LTE uplink

Split point
- Time-domain I/Q samples, to reduce complexity
Split Point | CPRI | Spatial-domain compression

The main idea, as mentioned earlier, is to exploit the space-time correlation between antennas, and this can be applied to the LTE uplink. The split point for this method is at the time-domain I/Q samples between the RRH and the BBU, to reduce complexity. After the antennas receive the user signals, the signals are down-converted, filtered, sampled, and compressed. The compressed signal is transported over the physical link and decompressed at the BBU.
System Model

LTE Uplink: Single-Carrier FDMA (localized frequency-domain multiple access)
A | B | C | D | DFT (M) | IDFT (N)
Single-antenna UEs; Mr antennas per RRH; frequency-domain and time-domain received signals

We develop a compression method for the LTE uplink, which uses localized single-carrier frequency-domain multiple access (SC-FDMA). Suppose T users send their signals to Mr antennas, where Mr is larger than T in a small cell. The small cell size and densely deployed RRHs, each with a large number of antennas, result in correlated received signals. The intuition is to exploit this space-time correlation to compress the baseband LTE samples.
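The transmit chain in the model (M-point DFT spreading, localized mapping onto contiguous subcarriers, N-point IDFT) can be sketched as follows; the unitary normalization and the subcarrier offset are illustrative choices, not taken from the slide.

```python
import numpy as np

def scfdma_modulate(symbols, n_fft, first_subcarrier):
    """Localized SC-FDMA: M-point DFT precoding, contiguous mapping
    onto the N-point grid, then IDFT back to the time domain."""
    m = len(symbols)
    spread = np.fft.fft(symbols) / np.sqrt(m)          # DFT spreading
    grid = np.zeros(n_fft, dtype=complex)
    grid[first_subcarrier:first_subcarrier + m] = spread
    return np.fft.ifft(grid) * np.sqrt(n_fft)

def scfdma_demodulate(signal, m, first_subcarrier):
    """Inverse chain: DFT to the frequency domain, de-map, IDFT de-spread."""
    grid = np.fft.fft(signal) / np.sqrt(len(signal))
    return np.fft.ifft(grid[first_subcarrier:first_subcarrier + m]) * np.sqrt(m)
```

A noiseless roundtrip recovers the user symbols exactly, which is a quick sanity check on the normalizations.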
Solution II: Spatial-Domain Compression

Compression block: (a) PCA dimension reduction, (b) adaptive quantization
Remote radio equipment: AFE, PCA dimension reduction, adaptive quantization, link
Base station processor: dequantization + PCA decompression, joint symbol detection, PHY processing

(a) Principal Component Analysis (PCA)
- Forms the received signal matrix Y of OFDM samples
- Y = V T, where V is an eigenvector matrix and T is the de-correlated matrix
- Achieves a low-rank approximation of the original received signal matrix by keeping only the major principal components

To exploit the space-time correlation between the received signals, we use principal component analysis followed by adaptive quantization on the block matrix Y, which consists of the received signals: Mr is the number of antennas and Nb is the block length, i.e., the number of samples taken for each compression. The matrix Y can be reconstructed from the eigenvector matrix V and the de-correlated matrix T. Using PCA, we achieve a low-rank approximation by keeping only the major principal components. We then only need to transmit th eigenvectors and their de-correlated vectors to recover Y at the BBU with minimal distortion, possibly even with some noise reduction. This reduces the number of samples transmitted over the physical link.
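A minimal sketch of the PCA step on the Mr × Nb block: eigendecomposition of the spatial covariance, keep the top eigenvectors, and form the de-correlated matrix. The matrix sizes follow the slide; the toy rank-16 channel is an assumption for illustration.

```python
import numpy as np

def pca_compress(Y, rank):
    """Split the Mr x Nb block Y into V (top-`rank` eigenvectors of the
    spatial covariance) and T = V^H Y (de-correlated matrix), so that
    Y is approximated by V @ T."""
    w, V = np.linalg.eigh(Y @ Y.conj().T)      # Mr x Mr spatial covariance
    V = V[:, np.argsort(w)[::-1][:rank]]       # keep the major components
    T = V.conj().T @ Y
    return V, T

# Toy rank-16 scenario: 32 antennas, block length 1096, light noise.
rng = np.random.default_rng(1)
Y = rng.standard_normal((32, 16)) @ rng.standard_normal((16, 1096))
Y += 0.01 * rng.standard_normal((32, 1096))
V, T = pca_compress(Y, 16)
rel_err = np.linalg.norm(Y - V @ T) / np.linalg.norm(Y)   # small residual
```

Only V (Mr × th) and T (th × Nb) cross the fronthaul instead of the full Mr × Nb block, which is where the sample-count reduction comes from.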
Solution II: Spatial-Domain Compression

(b) Adaptive Quantization-Bit Allocation (Q at the radio head, Q^-1 after the link, before baseband processing)
- Adaptively allocate quantization bits based on the quantization noise power
- T is the de-correlated matrix: the rows ti have lower amplitude as i increases
- The range for eigenvector entries is fixed at [-1, 1] (unitary vectors); the eigenvector bit width Qv is adaptively selected
- Compression ratio (CR)
Legend: quantization information; quantization bits
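The compression-ratio accounting implied by the slide compares the raw block (Mr·Nb samples) against the th eigenvectors plus th de-correlated rows actually sent. The bit widths below (Q0 = Qv = 15, an average of 7 bits for the de-correlated rows) are hypothetical choices for illustration, not values from the slide:

```python
def compression_ratio(mr, nb, th, q0=15, qv=15, qt_avg=7):
    """Raw bits per block over transmitted bits per block.
    Sent: th eigenvectors of mr entries at qv bits each, plus th
    de-correlated rows of nb entries at qt_avg bits on average."""
    raw = mr * nb * q0
    sent = th * (mr * qv + nb * qt_avg)
    return raw / sent

# 32 antennas, block length 1096, 16 components kept
cr = compression_ratio(mr=32, nb=1096, th=16)
```

With these assumed bit widths the accounting lands near 4x, in the same ballpark as the 32-antenna result reported later; shrinking qt_avg for the low-amplitude rows is what pushes the ratio up.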
Validation: Link-Level Simulation

Simulation Setting
- Modulation: 64-QAM
- Number of antennas: 8 / 16 / 32 / 64
- Number of users: 4
- Resource blocks per user: 12 (48 of 50 total)
- Compression block length: Nb = 1096 (1024 + CP)
- Channel: Ped. A channel model

Parameters for LTE Transmission
Transmission BW [MHz]:     1.4   | 3    | 5    | 10    | 15    | 20
Occupied BW [MHz]:         1.08  | 2.7  | 4.5  | 9.0   | 13.5  | 18.0
Guard band [MHz]:          0.32  | 0.3  | 0.5  | 1.0   | 1.5   | 2.0
Sampling frequency [MHz]:  1.92  | 3.84 | 7.68 | 15.36 | 23.04 | 30.72
FFT size:                  128   | 256  | 512  | 1024  | 1536  | 2048
Occupied subcarriers:      72    | 180  | 300  | 600   | 900   | 1200
Resource blocks:           6     | 15   | 25   | 50    | 75    | 100
CP samples (normal):       9x6, 10x1 | 18x6, 20x1 | 36x6, 40x1 | 72x6, 80x1 | 108x6, 120x1 | 144x6, 160x1
CP samples (extended):     32    | 64   | 128  | 256   | 384   | 512

We performed a link-level simulation to validate our algorithm for the 10 MHz case: 64-QAM modulation with 8, 16, 32, and 64 antennas; the number of users fixed at 4; 12 resource blocks per user, 48 of 50 blocks total (96% of full usage); compression block length 1096; and the Pedestrian A channel model.
[Fundamentals of LTE, Arunabha Ghosh, Jun Zhang, Jeffrey G. Andrews, Rias Muhamed, 2010]
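The rows of the parameter table are tied together by the 15 kHz subcarrier spacing: the sampling frequency is the FFT size times 15 kHz, and the occupied bandwidth is 12 subcarriers per resource block times 15 kHz. A quick consistency check:

```python
SUBCARRIER_KHZ = 15    # LTE subcarrier spacing
RB_SUBCARRIERS = 12    # subcarriers per resource block

def lte_derived(fft_size, n_rb):
    """Sampling frequency and occupied bandwidth, both in MHz."""
    fs = fft_size * SUBCARRIER_KHZ / 1000
    occupied = n_rb * RB_SUBCARRIERS * SUBCARRIER_KHZ / 1000
    return fs, occupied

# 10 MHz case used in the simulation: FFT size 1024, 50 resource blocks
fs, occ = lte_derived(1024, 50)   # matches the 15.36 MHz / 9.0 MHz column
```

The same relations reproduce every column of the table, e.g. 2048 × 15 kHz = 30.72 MHz for the 20 MHz bandwidth.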
Validation: 32 Antennas

Analysis
- Matrix degrees of freedom = 16 (4 channel taps, 4 users); low-rank approximation is effective
- Compression + noise reduction; adaptive quantization-bit allocation is effective
- Achieves 4.0x compression with a 0.3% EVM gain

This is the EVM and BER of the 32-antenna compression as a function of SNR. Since the rank of Y is 16 (without noise) with 32 signal vectors, low-rank approximation is effective: we keep 16 eigenvectors and the corresponding de-correlated vectors, and we also apply adaptive quantization-bit allocation. The method achieves 4.0x compression with a 0.3% EVM gain, performing much better than the linear quantization case at a higher compression rate. Note that there is a region where it outperforms even the no-compression case due to noise reduction: using all 16 eigenvectors of the rank-16 matrix keeps most of the signal information while discarding noise-dominant components. Beyond about 26 dB, however, where the noise is much smaller than the signal, there is a slight performance loss because the discarded vectors carry some leaked signal information.
Validation: 64 Antennas

Analysis
- Matrix degrees of freedom = 16 (4 channel taps, 4 users); low-rank approximation is very effective
- Compression + effective noise reduction; adaptive quantization-bit allocation is very effective
- Achieves 8.0x compression with a 0.5% EVM gain

This is the EVM and BER of the 64-antenna compression as a function of SNR. Since the rank of Y is 16 (without noise) with 64 signal vectors, low-rank approximation is very effective: we keep 16 eigenvectors and the corresponding de-correlated vectors, and we also apply adaptive quantization-bit allocation. The method achieves 8.0x compression with a 0.5% EVM gain, performing much better than the linear quantization case at a higher compression rate. Again there is a region where the method beats even the no-compression case due to noise reduction, since all 16 eigenvectors of the rank-16 matrix keep most of the information while discarding noise-dominant components. The performance crossover now sits about 2 dB higher than in the 32-antenna case, at around 28 to 29 dB.
Compression Ratio vs. Estimated Complexity: Uplink
PCA 64-Rx | PCA 32-Rx | PCA 16-Rx | PCA 8-Rx
Guo et al., 12 | Nieman & Evans, 13 | Samardzija et al., 12 | Nanba & Agata, 13 | Vosoughi, Wu & Cavallaro, 12 | Ren et al., 14
*Arithmetic complexity before quantization
Compression Ratio vs. Estimated Complexity: Downlink
Guo et al., 12 | Nieman & Evans, 13 | Samardzija et al., 12 | Vosoughi, Wu & Cavallaro, 12 | Ren et al., 14
*Arithmetic complexity before quantization
Contributions & Limitations

Spatial-Domain Compression
- Achieves 1.9x / 2.5x / 4.0x / 8.0x compression for the 8 / 16 / 32 / 64-antenna cases with 4 users
- Provides a noise-reduction effect in favorable network environments
- Proposes a possible solution for future communication network trends

Future Work
- Develop a fast algorithm based on the power method to find the major principal components
- Determine the optimal block size and quantization bit widths
- Develop a spatial compression algorithm for 2 to 8 antennas; Slepian-Wolf coding: separate encoding is as efficient as joint encoding
Block diagram for Slepian-Wolf coding: independent encoding of two correlated data streams. H: entropy
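The future-work item on a power-method-based fast algorithm could look like the sketch below: power iteration with deflation extracts the leading eigenvectors of the spatial covariance one at a time. The iteration count and deflation scheme are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def top_components(Y, n_comp, iters=100, seed=0):
    """Power iteration with deflation: extract the leading eigenvectors
    of Y Y^H without a full eigendecomposition."""
    cov = Y @ Y.conj().T
    rng = np.random.default_rng(seed)
    comps = []
    for _ in range(n_comp):
        v = rng.standard_normal(cov.shape[0])
        for _ in range(iters):
            v = cov @ v
            v /= np.linalg.norm(v)           # power iteration step
        lam = v @ cov @ v                    # Rayleigh quotient (eigenvalue)
        comps.append(v)
        cov = cov - lam * np.outer(v, v)     # deflate the found component
    return np.array(comps).T
```

Each component costs O(Mr^2) per iteration versus O(Mr^3) for a full eigendecomposition, which is the motivation when only th << Mr components are needed.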
Thank you
Cloud RAN: the Future

Key Assumption: C-RAN in a Massive MIMO Environment
- 5G (mmWave)
- Number of antennas per RRH: large Mr
- Smaller cell size
- Number of antennas >> number of users

Solution: spatial-domain compression, to achieve a large compression rate with a large number of antennas

Based on the current network trend, we assume that the C-RAN of the future will operate in a massive MIMO environment, a candidate for 5G technology. The number of antennas per RRH will become large and cell sizes will shrink further, so the antennas will greatly outnumber the users. Based on this assumption, I proposed spatial-domain compression as a possible solution for achieving a large compression rate with a large number of antennas.
Validation: 8 Antennas

Analysis
- Matrix degrees of freedom = 8 (4 channel taps, 4 users); low-rank approximation is very poor
- Adaptive quantization-bit allocation is still possible
- Achieves 1.9x compression with a 0.3% EVM loss

This is the EVM and BER of the 8-antenna compression as a function of SNR; the red line is our method and the black line is the no-compression case. Since the rank of Y is 8 in this case, with only 8 signal vectors low-rank approximation is very poor, so we use all 8 eigenvectors; we can still apply adaptive quantization-bit allocation. The method achieves 1.9x compression with a 0.3% EVM loss.
Validation: 16 Antennas

Analysis
- Matrix degrees of freedom = 16 (4 channel taps, 4 users); low-rank approximation is poor
- Adaptive quantization-bit allocation is possible
- Achieves 2.5x compression with a 0.5% EVM loss

This is the EVM and BER of the 16-antenna compression as a function of SNR; the red line is our method, the black line is the no-compression case, and the blue line is the compression case that simply reduces the quantization bits of a linear quantizer. Since the rank of Y is 16 (without noise) with 16 signal vectors, low-rank approximation is still poor, but we use 13 eigenvectors at the expense of a little information loss caused by discarding the 3 least important eigenvectors and their corresponding de-correlated vectors. We also apply adaptive quantization-bit allocation. The method achieves 2.5x compression with a 0.5% EVM loss; even though the adaptive quantization bits are roughly selected, it performs similarly to or even slightly better than the linear quantization case.
References

1. K. Nieman and B. Evans, "Time-domain compression of complex-baseband LTE signals for cloud radio access networks," Proc. IEEE Global Conference on Signal and Information Processing (GlobalSIP), Dec. 2013, pp. 1198-1201. [Lloyd-Max quantization, noise shaping; 3.0x compression (5.3x in theory), UL & DL]
2. B. Guo et al., "CPRI compression transport for LTE and LTE-A signal in C-RAN," Proc. 7th International ICST Conference on Communications and Networking in China (CHINACOM), 2012. [Resampling, block scaling, non-linear quantization; 3.3x compression, UL & DL]
3. D. Samardzija et al., "Compressed transport of baseband signals in radio access networks," IEEE Transactions on Wireless Communications, vol. 11, no. 9, pp. 3216-3225, 2012. [Dithering signals in the multi-link case; 3.0x compression, UL & DL]
References (Cont'd)

4. S. Nanba and A. Agata, "A new IQ data compression scheme for front-haul link in centralized RAN," Proc. IEEE 24th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC Workshops), 2013. [I/Q sample width reduction, Free Lossless Audio Codec (FLAC) lossless compression; 2.0x compression, UL]
5. A. Vosoughi, M. Wu, and J. R. Cavallaro, "Baseband signal compression in wireless base stations," Proc. IEEE Global Communications Conference (GLOBECOM), 2012. [Sample quantizing; 2.0x-3.5x compression (UL), 2.3x-4.0x compression (DL)]
6. Y. Ren et al., "A compression method for LTE-A signals transported in radio access networks," Proc. 21st International Conference on Telecommunications (ICT), 2014. [Downsampling, modified block AGC; 3.3x compression, UL & DL]