1
ECE 6332, Fall 2014 Wireless Communication Zhu Han Department of Electrical and Computer Engineering Class 21 Apr. 7th, 2014
2
Outline
MIMO/space-time coding
Trellis-coded modulation
BICM
Video transmission (optional)
Unequal error protection and joint source-channel coding
Homework 4 – will be announced by email.
3
MIMO Model (T: time index, W: noise)
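For reference, a minimal statement of the narrowband MIMO input-output relation that the slide's figure presumably depicts, using the labels above (the vector/matrix dimensions are the usual assumptions):

```latex
% y[T]: n_r x 1 received vector, H: n_r x n_t channel matrix,
% x[T]: n_t x 1 transmitted vector, W[T]: n_r x 1 additive noise
\[
  \mathbf{y}[T] = \mathbf{H}\,\mathbf{x}[T] + \mathbf{W}[T]
\]
```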
4
Alamouti Space-Time Code
Transmitted signals are orthogonal => simplified receiver
Redundancy in time and space => diversity
Equivalent diversity gain to maximum ratio combining => smaller terminals
Transmission pattern: at time n, antenna 1 sends d0 and antenna 2 sends d1; at time n + T, antenna 1 sends -d1* and antenna 2 sends d0*.
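To illustrate why the receiver simplifies, here is a small NumPy sketch (not from the slides) of the 2x1 Alamouti scheme: the two received samples are combined using only conjugates and additions, and each symbol estimate scales by |h0|^2 + |h1|^2, exactly the maximum-ratio-combining gain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK symbols (d0, d1), a flat channel h = [h0, h1], and a little noise
d = (rng.choice([-1, 1], 2) + 1j * rng.choice([-1, 1], 2)) / np.sqrt(2)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
noise = 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

# Alamouti transmission: time n sends (d0, d1), time n+T sends (-d1*, d0*)
r0 = h[0] * d[0] + h[1] * d[1] + noise[0]                       # received at time n
r1 = -h[0] * np.conj(d[1]) + h[1] * np.conj(d[0]) + noise[1]    # received at time n+T

# Linear combining at the receiver
d0_hat = np.conj(h[0]) * r0 + h[1] * np.conj(r1)
d1_hat = np.conj(h[1]) * r0 - h[0] * np.conj(r1)

# Both estimates are scaled by |h0|^2 + |h1|^2, so a simple slicer recovers d0, d1
gain = np.abs(h[0])**2 + np.abs(h[1])**2
print(np.round(d0_hat / gain, 2), np.round(d1_hat / gain, 2), "vs", np.round(d, 2))
```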
5
Space-Time Code Performance
An STBC maps a block of K input symbols onto a block of T output symbols, transmitted from n_t antennas via a constellation mapper.
The code rate is R = K/T.
If R = 1 the STBC has full rate; if T = n_t the code has minimum delay.
The detector is linear!
6
BLAST – Bell Labs Layered Space-Time Architecture
V-BLAST was implemented in 1998 by Bell Labs (40 bps/Hz).
Steps for V-BLAST detection:
1. Ordering: choosing the best channel
2. Nulling: using ZF or MMSE
3. Slicing: making a symbol decision
4. Canceling: subtracting the detected symbol
5. Iteration: returning to the first step to detect the next symbol
(Figure: symbol-to-antenna mapping over time for V-BLAST vs. D-BLAST.)
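A rough NumPy sketch of the ZF-based version of this loop (the QPSK slicer, the 4x4 channel, and the noise level are illustrative assumptions, not part of the slides):

```python
import numpy as np

def vblast_zf_detect(H, y):
    """Detect a QPSK vector x from y = H x + n by ZF nulling with
    ordering and successive cancellation (V-BLAST style)."""
    H, y = H.copy(), y.copy()
    remaining = list(range(H.shape[1]))
    x_hat = np.zeros(H.shape[1], dtype=complex)
    while remaining:
        G = np.linalg.pinv(H)                          # nulling matrix (ZF)
        k = np.argmin(np.sum(np.abs(G)**2, axis=1))    # ordering: best post-ZF SNR
        z = G[k] @ y                                   # nulling
        s = (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)  # slicing (QPSK)
        x_hat[remaining[k]] = s
        y = y - H[:, k] * s                            # canceling
        H = np.delete(H, k, axis=1)                    # iterate on remaining streams
        remaining.pop(k)
    return x_hat

rng = np.random.default_rng(1)
n_t, n_r = 4, 4
x = (rng.choice([-1, 1], n_t) + 1j * rng.choice([-1, 1], n_t)) / np.sqrt(2)
H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
y = H @ x + 0.05 * (rng.standard_normal(n_r) + 1j * rng.standard_normal(n_r))
print(np.round(vblast_zf_detect(H, y), 2))
print(np.round(x, 2))
```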
7
Trellis-Coded Modulation
1. Combines encoding and modulation (using Euclidean distance only).
2. Allows parallel transitions in the trellis.
3. Gives significant coding gain (3-4 dB) without bandwidth expansion.
4. Has the same complexity (same amount of computation, decoding time, and memory).
5. Has great potential for fading channels.
6. Widely used in modems.
8
Set Partitioning
1. Branches diverging from the same state must have the largest distance.
2. Branches merging into the same state must have the largest distance.
3. Codes should be designed to maximize the length of the shortest error-event path for fading channels (equivalent to maximizing diversity).
4. By satisfying the first two criteria, the coding gain can be increased.
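A small numeric check (an illustrative sketch, not from the slides) of the idea behind rules 1 and 2: each level of 8-PSK set partitioning increases the minimum Euclidean distance inside a subset, so subsets assigned to diverging/merging branches can be kept far apart internally.

```python
import numpy as np

# 8-PSK constellation (unit energy)
pts = np.exp(2j * np.pi * np.arange(8) / 8)

def min_dist(subset):
    """Minimum Euclidean distance between points within one subset."""
    return min(abs(subset[i] - subset[j])
               for i in range(len(subset)) for j in range(i + 1, len(subset)))

# Level 0: full 8-PSK; level 1: two QPSK subsets; level 2: four antipodal pairs
levels = [("8-PSK",        [pts]),
          ("QPSK subsets", [pts[0::2], pts[1::2]]),
          ("pairs",        [pts[0::4], pts[1::4], pts[2::4], pts[3::4]])]

for name, subsets in levels:
    print(name, round(min(min_dist(s) for s in subsets), 3))
# Intra-subset distances grow: ~0.765 -> ~1.414 -> 2.0
```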
9
Coding Gain: about 3 dB
10
Bit-Interleaved Coded Modulation
Coded bits are interleaved prior to modulation.
The performance of this scheme is quite desirable, and it is relatively simple to implement (from a complexity standpoint).
Transmit chain: binary encoder -> bitwise interleaver -> M-ary modulator -> channel. Receive chain: soft demodulator -> bitwise deinterleaver -> soft decoder.
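A minimal end-to-end sketch of that chain (the rate-1/3 repetition "code", random block interleaver, and Gray-mapped QPSK are toy assumptions standing in for a real encoder and constellation):

```python
import numpy as np

rng = np.random.default_rng(2)

# --- transmitter: binary encoder -> bitwise interleaver -> M-ary modulator ---
info = rng.integers(0, 2, 64)
coded = np.repeat(info, 3)                 # toy rate-1/3 repetition "encoder"
perm = rng.permutation(coded.size)         # bitwise interleaver
interleaved = coded[perm]
b = interleaved.reshape(-1, 2)             # 2 bits per Gray-mapped QPSK symbol
tx = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

# --- channel (AWGN) ---
sigma2 = 0.5
rx = tx + np.sqrt(sigma2 / 2) * (rng.standard_normal(tx.size)
                                 + 1j * rng.standard_normal(tx.size))

# --- receiver: soft demodulator -> bitwise deinterleaver -> soft decoder ---
# For Gray QPSK the per-bit LLRs separate into real/imag components
llr = np.empty_like(interleaved, dtype=float)
llr[0::2] = 2 * np.sqrt(2) * rx.real / sigma2
llr[1::2] = 2 * np.sqrt(2) * rx.imag / sigma2
deint = np.empty_like(llr)
deint[perm] = llr                          # bitwise deinterleaver
decided = (deint.reshape(-1, 3).sum(axis=1) < 0).astype(int)  # combine repeats
print("bit errors:", np.count_nonzero(decided != info))
```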
11
BICM Performance (figure): minimum Eb/N0 (dB) versus code rate R, comparing CM and BICM for modulation alphabet sizes M = 2, 4, 16, 64; AWGN channel, noncoherent detection.
12
Video Standard
Two camps: –H.261, H.263, H.264 –MPEG-1 (VCD), MPEG-2 (DVD), MPEG-4
Spatial redundancy: JPEG –Intraframe compression –DCT compression + Huffman coding
Temporal redundancy –Interframe compression –Motion estimation
13
Discrete Cosine Transform (DCT)
Example 8x8 block of pixel intensities (0 = black, 255 = white):
120 108  90  75  69  73  82  89
127 115  97  81  75  79  88  95
134 122 105  89  83  87  96 103
137 125 107  92  86  90  99 106
131 119 101  86  80  83  93 100
117 105  87  72  65  69  78  85
100  88  70  55  49  53  62  69
 89  77  59  44  38  42  51  58
14
DCT and Huffman Coding
DCT coefficients of the block above (small values quantized to zero):
700  90 100   0   0   0   0   0
 90   0   0   0   0   0   0   0
-89   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0
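The coefficient matrix can be roughly reproduced by running a 2-D DCT over the pixel block two slides back; a SciPy sketch follows (the exact transform scaling and quantization used in the slide are not stated, so values will only approximately match):

```python
import numpy as np
from scipy.fftpack import dct

block = np.array([
    [120, 108,  90,  75,  69,  73,  82,  89],
    [127, 115,  97,  81,  75,  79,  88,  95],
    [134, 122, 105,  89,  83,  87,  96, 103],
    [137, 125, 107,  92,  86,  90,  99, 106],
    [131, 119, 101,  86,  80,  83,  93, 100],
    [117, 105,  87,  72,  65,  69,  78,  85],
    [100,  88,  70,  55,  49,  53,  62,  69],
    [ 89,  77,  59,  44,  38,  42,  51,  58],
], dtype=float)

# Separable 2-D DCT (type-II, orthonormal): transform columns, then rows
coeffs = dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)
print(np.round(coeffs).astype(int))
# The energy concentrates in a few low-frequency (top-left) coefficients; after
# quantization the remaining small values become zero, and the long zero runs
# compress well with run-length + Huffman coding.
```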
15
Basis vectors
16
Using DCT in JPEG DCT on 8x8 blocks
17
Comparison of DFT and DCT
18
Quantization and Coding
Zonal coding: coefficients outside the zone mask are zeroed.
The coefficients outside the zone may contain significant energy, so local variations are not reconstructed properly.
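A quick sketch of zonal coding as just described (the triangular zone shape and its size are illustrative assumptions):

```python
import numpy as np

def zonal_mask(n=8, zone=4):
    """Triangular low-frequency zone: keep coefficients with row + col < zone."""
    r, c = np.indices((n, n))
    return (r + c) < zone

def zonal_code(coeffs, zone=4):
    """Zero every DCT coefficient outside the zone mask. Energy carried by the
    discarded coefficients is lost, which is why sharp local variations in the
    block are not reconstructed properly."""
    return np.where(zonal_mask(coeffs.shape[0], zone), coeffs, 0.0)

# Example: keep only the 10 lowest-frequency coefficients of a random 8x8 block
print(zonal_code(np.random.default_rng(4).standard_normal((8, 8))))
```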
19
30:1 compression vs. 12:1 compression
20
Motion Compensation
I-frame –Independently reconstructed
P-frame –Forward predicted from the last I-frame or P-frame
B-frame –Forward and backward predicted from the last/next I-frame or P-frame
Transmitted as: I P B B B P B B B
21
Motion Prediction
22
Motion Compensation Approach (cont.)
Motion vectors
–A static background is a very special case; in general we must consider the displacement of each block.
–The motion vector tells the decoder exactly where in the previous image to get the data.
–The motion vector is zero for a static background.
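A compact sketch (not from the slides) of block-matching motion estimation: an exhaustive search over a small window for the displacement that minimizes the sum of absolute differences, which is the motion vector sent to the decoder (block size and search range are assumptions).

```python
import numpy as np

def motion_vector(prev, cur, top, left, bsize=16, search=7):
    """Displacement (dy, dx) of the block at (top, left) in `cur` relative to
    `prev` that minimizes the sum of absolute differences (SAD)."""
    block = cur[top:top + bsize, left:left + bsize].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > prev.shape[0] or x + bsize > prev.shape[1]:
                continue
            sad = np.abs(prev[y:y + bsize, x:x + bsize].astype(int) - block).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv  # (0, 0) for a static background, as noted above

# Toy check: shift a random frame by (2, 3) pixels and recover the motion vector
rng = np.random.default_rng(3)
prev = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(prev, shift=(2, 3), axis=(0, 1))
print(motion_vector(prev, cur, top=16, left=16))   # -> (-2, -3)
```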
23
Motion estimation for different frames (figure): the current block Y is predicted from data available in an earlier frame (X) and from data available in a later frame (Z).
24
A typical group of pictures in display order: I B B B P B B B P B B B P (frames 1-13).
The same group of pictures in coding order: frames 1 5 2 3 4 9 6 7 8 13 10 11 12, i.e. I P B B B P B B B P B B B.
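The reordering can be reproduced with a short sketch (assuming the IBBBP... pattern shown above): each anchor frame (I or P) is coded before the B-frames that depend on it.

```python
def coding_order(display):
    """Reorder a GOP given in display order so that every B-frame comes after
    the anchor (I or P) that follows it in display order."""
    order, pending_b = [], []
    for idx, ftype in enumerate(display, start=1):
        if ftype == 'B':
            pending_b.append(idx)      # hold B-frames until the next anchor
        else:
            order.append(idx)          # emit the anchor first
            order.extend(pending_b)    # then the B-frames it closes
            pending_b = []
    return order + pending_b

display = list('IBBBPBBBPBBBP')
print(coding_order(display))   # -> [1, 5, 2, 3, 4, 9, 6, 7, 8, 13, 10, 11, 12]
```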
25
Coding of Macroblock (figure): spatial sampling relationship for MPEG-1. A macroblock consists of four luminance (Y) blocks, numbered 0-3, plus one CB and one CR color-difference block (4 and 5); luminance samples are taken more densely than color-difference samples.
26
A Simplified MPEG Encoder (block diagram): input frames pass through a frame reorder, prediction encoder, DCT, quantizer, variable-length coder, and transmit buffer to the output. A local reconstruction loop (de-quantizer, inverse DCT, reference frame store, motion predictor) supplies the prediction and the motion vectors, and a rate controller sets the quantizer scale factor from the buffer fullness.
27
MPEG Standards
MPEG stands for the Moving Picture Experts Group, an ISO/IEC working group established in 1988 to develop standards for digital audio and video formats. There are five MPEG standards being used or in development. Each compression standard was designed with a specific application and bit rate in mind, although MPEG compression scales well with increased bit rates. They include:
–MPEG-1
–MPEG-2
–MPEG-4
–MPEG-7
–MPEG-21
–MP3
28
MPEG Standards
MPEG-1: Designed for up to 1.5 Mbit/s. Standard for the compression of moving pictures and audio. It was based on CD-ROM video applications and is a popular standard for video on the Internet, transmitted as .mpg files. In addition, layer 3 of MPEG-1 audio is the most popular standard for digital compression of audio, known as MP3. MPEG-1 is the compression standard for VideoCD, the most popular video distribution format throughout much of Asia.
MPEG-2: Designed for between 1.5 and 15 Mbit/s. Standard on which digital television set-top boxes and DVD compression are based. It is based on MPEG-1 but designed for the compression and transmission of digital broadcast television. The most significant enhancement over MPEG-1 is its ability to efficiently compress interlaced video. MPEG-2 scales well to HDTV resolutions and bit rates, obviating the need for an MPEG-3.
MPEG-4: Standard for multimedia and Web compression. MPEG-4 uses object-based compression, similar in nature to the Virtual Reality Modeling Language. Individual objects within a scene are tracked separately and compressed together to create an MPEG-4 file. This results in very efficient, highly scalable compression, from low bit rates to very high ones. It also allows developers to control objects independently in a scene and therefore introduce interactivity.
MPEG-7: This standard, currently under development, is also called the Multimedia Content Description Interface. When released, the group hopes it will provide a framework for multimedia content that includes information on content manipulation, filtering, and personalization, as well as the integrity and security of the content. Contrary to the previous MPEG standards, which describe actual content, MPEG-7 will represent information about the content.
MPEG-21: Work on this standard, also called the Multimedia Framework, has just begun. MPEG-21 will attempt to describe the elements needed to build an infrastructure for the delivery and consumption of multimedia content, and how they will relate to each other.
29
JPEG
JPEG stands for the Joint Photographic Experts Group, also an ISO/IEC working group, which develops standards for continuous-tone image coding. JPEG is a lossy compression technique for full-color or gray-scale images that exploits the fact that the human eye will not notice small color changes.
JPEG 2000 is an initiative that provides an image coding system using compression techniques based on wavelet technology.
30
DV
DV is a high-resolution digital video format used with video cameras and camcorders. The standard uses DCT to compress the pixel data and is a form of lossy compression. The resulting video stream is transferred from the recording device via FireWire (IEEE 1394), a high-speed serial bus capable of transferring data at up to 50 MB/s.
–H.261 is an ITU standard designed for two-way communication over ISDN lines (video conferencing) and supports data rates that are multiples of 64 Kbit/s. The algorithm is based on DCT, can be implemented in hardware or software, and uses intraframe and interframe compression. H.261 supports CIF and QCIF resolutions.
–H.263 is based on H.261, with enhancements that improve video quality over modems. It supports CIF, QCIF, SQCIF, 4CIF, and 16CIF resolutions.
–H.264
31
HDTV 4-7 Mbps 25 - 27 Mbps
32
Unequal Error Protection
Multiple description coding
Video –Base layer vs. enhancement layer
33
Unequal Error Protection
Packets of different importance are protected with different amounts of channel coding.
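A toy illustration of the idea (the packet classes, repetition codes, and channel model are assumptions made for this sketch): more important packets get a stronger code, so over the same channel they survive with much higher probability.

```python
from math import comb

def bit_error_after_repetition(p, n):
    """Residual bit error rate of a rate-1/n repetition code with majority
    voting over a binary symmetric channel with crossover probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n // 2) + 1, n + 1))

def packet_success(p, n, bits=1000):
    """Probability that every bit of a `bits`-bit packet decodes correctly."""
    return (1 - bit_error_after_repetition(p, n)) ** bits

p = 0.01  # raw channel bit error rate
# Base-layer packets get rate-1/5 protection, enhancement-layer packets get none
for label, n in [("base layer  (rate 1/5)", 5), ("enhancement (rate 1/1)", 1)]:
    print(label, round(packet_success(p, n), 4))
```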
34
Joint Source-Channel Coding
Bandwidth is limited: the more bits spent on source data, the fewer remain for channel protection. What is the best tradeoff?