1
Video and Streaming Media Andy Dozier
2
Approach
Video Standards
– Analog Video
– Digital Video
Video Quality Parameters
– Frame Rate
– Color Depth
– Resolution
Encoding/Decoding Standards
3
Video Standard Summary
Analog Video
– Composite
– Component
Digital Video
4
Composite Video Overview
Optimized for wireless broadcast operation
– Frequency allocations are controlled by the FCC
– 54 MHz to 806 MHz (68 channels)
– 6 MHz allocated per channel
Utilizes a single communication channel
– Coaxial cable transmission
– Terrestrial broadcast
Lowest resolution
5
Composite Video Overview (cont’d)
Defined by the National Television Systems Committee (NTSC)
– Interface standard (System M-NTSC) documented in ANSI T1.502-1988
M-NTSC features
– Color or monochrome
– 30 frames/second
– 525 horizontal scan lines (483 usable)
6
Interlacing
A refresh rate of 30 frames/second exhibits flicker
– One frame is a complete image at a point in time
The solution is to divide each frame into two “fields”
– One field consists of either all odd, or all even scan lines
– Odd and even scan lines are “interlaced”
– 262.5 horizontal scan lines/field
Each field is refreshed at a rate of 30/second
– 60 fields/second total
Phosphor persistence allows the eye to perceive both fields at the same time
– Eliminates the flicker problem
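The odd/even split described above can be sketched in a few lines of Python (illustrative only; NTSC line numbering and field timing details are omitted):

```python
# Interlacing sketch: split one frame's scan lines into two fields.
def interlace(frame_lines):
    """Split a frame (a list of scan lines) into odd and even fields."""
    odd_field = frame_lines[0::2]   # lines 1, 3, 5, ...
    even_field = frame_lines[1::2]  # lines 2, 4, 6, ...
    return odd_field, even_field

frame = list(range(1, 526))         # 525 scan lines, numbered 1..525
odd, even = interlace(frame)
print(len(odd), len(even))          # 263 262 (averages to 262.5 lines/field)
```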
7
Composite Video Resolution
Horizontal/vertical dimension ratio is 4:3
Usable horizontal scan lines = 483
To render a horizontal line consistently, the image line must cover more than one scan line
– Number of horizontal image lines = 70% of the number of horizontal scan lines
– Vertical resolution is 0.7 × 483, or 338 horizontal line/space pairs
The same resolution is required horizontally
– 4/3 × 338, or 450 vertical line/space pairs
Composite video resolution is therefore equivalent to 450 × 338 pixels
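The arithmetic on this slide can be checked directly (the 70% figure is the slide's utilization factor; truncation is used to match the slide's rounded 450):

```python
# Composite video resolution from the slide's figures.
usable_scan_lines = 483
utilization = 0.7                                  # ~70% of scan lines resolve image lines

vertical_pairs = int(utilization * usable_scan_lines)   # 338 line/space pairs
horizontal_pairs = int(4 / 3 * vertical_pairs)          # 450 pairs for a 4:3 aspect ratio
print(vertical_pairs, horizontal_pairs)                 # 338 450
```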
8
Composite Video Features:
– Single wire or channel
– NTSC standard
– Suitable for broadcasting
– Lowest resolution: equivalent to 450 × 338 pixels
9
Color Theory
Color theory is based on the psychophysical properties of human color vision
– First stated by Hermann Grassmann of Germany in 1854
Any color can be matched by an additive combination of different amounts of three additive primary colors
– Additive primary colors are different from subtractive primary colors
– Red/Green/Blue (RGB)
In video, phosphors emit light, therefore we use additive primaries
10
Definitions
The intrinsic nature of a color is called Hue, or “U”
The intensity of a color is called Saturation, or “V”
Hue and saturation taken together define color, or Chrominance (C)
– Hue + Saturation = Chrominance = C
Brightness is described as luminous flux
– Luminance = Y
C and Y together totally describe the color sensation
11
Color Spatial Resolution
For most images, the fine detail picked up by the human eye is conveyed by changes in luminance
– The eye cannot pick up the color of very small objects
For very small areas of a scene, the eye is much more sensitive to changes in luminance (brightness)
For large areas, the eye responds mostly to color
12
Analog Component Video
The NTSC committee wanted a color TV signal system compatible with the existing black-and-white (monochrome) system
The signal was split into components:
– Luminance (Y)
– Chrominance (C)
This signal system accounts for the eye's varying sensitivity to different colors:
Y = 0.30 R + 0.59 G + 0.11 B
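The luminance equation can be expressed directly in code; the weights are from the slide, and the helper function name is illustrative:

```python
# NTSC luminance from RGB, using the slide's weights.
def luminance(r, g, b):
    """Y = 0.30 R + 0.59 G + 0.11 B, for components on a common scale (e.g. 0..1)."""
    return 0.30 * r + 0.59 * g + 0.11 * b

print(luminance(1.0, 1.0, 1.0))   # ~1.0: white carries full brightness
print(luminance(0.0, 1.0, 0.0))   # 0.59: green dominates perceived brightness
```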
13
Analog Component Video (cont’d)
A variety of signal systems are used to provide color displays
Composite signal systems embed the chrominance information into the transmitted signal
Systems which keep the Y, C, U, and V information separate are referred to as component video systems
– Digital and analog versions exist
– Component video provides higher fidelity
14
Analog Component Video: YUV
Features:
– Separates Y, U, and V
– Current color TV system
– Y, U, and V are combined for transmission
– Used for color TV receivers
15
Analog Component Video: Y/C
Features:
– Separates Y and C
– Intermediate quality
– 2-wire system, called “S-Video”
– Used for hand-held cameras (Hi-8, Super VHS)
16
Analog Component Video: RGB
Features:
– Separates R, G, and B signals
– Easily transformed into other signal systems (Y/C, YUV)
– Used for color monitors
17
Digital Video
Major disadvantages of analog techniques:
– Susceptibility to electromagnetic noise
– Quality degrades with multiple generations of copies
Digital video techniques represent component signals as streams of 1s and 0s
– Eliminates degradation over multiple copy generations
– Excellent noise immunity
– Can be stored on hard disk drives, DVD, and CD-ROM
– Can be transported via data networks
18
Digital Video Features
Generated by digitizing analog video signals
– Composite digital: D2 standard
– Component digital: D1 standard
Image quality is defined by three parameters:
– Frame resolution and scaling
– Color depth
– Frame rate
19
Frame Resolution and Scaling
Each frame (image) is represented by an array of pixels
If the pixel array equals the monitor resolution, the image fills the monitor screen
– Example: 640 × 480 pixels
Partial-screen images may also be displayed (scaled)
Using a full-screen resolution of 640 × 480 pixels:
– 320 × 240 pixels would fill 1/4 of the screen
– 160 × 120 pixels would fill 1/16 of the screen
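The scaling examples work out as simple fractions of the full-screen pixel count:

```python
# Fraction of a 640 x 480 screen covered by a scaled frame.
FULL_W, FULL_H = 640, 480

def screen_fraction(w, h):
    return (w * h) / (FULL_W * FULL_H)

print(screen_fraction(320, 240))   # 0.25   -> 1/4 of the screen
print(screen_fraction(160, 120))   # 0.0625 -> 1/16 of the screen
```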
20
Scaling of Image Size: Full Screen, 1/4 Screen, 1/16 Screen
21
Color Depth
Color depth is the number of bits used to represent the color of each pixel
It determines the maximum number of colors that can be represented, and therefore the “realism” of the image
Example:
– Red = 8 bits/pixel
– Green = 8 bits/pixel
– Blue = 8 bits/pixel
– 24 bits/pixel allows representation of 16.7 million colors
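The color count follows directly from the bit depth:

```python
# Number of representable colors at a given color depth.
def max_colors(bits_per_pixel):
    return 2 ** bits_per_pixel

print(max_colors(24))   # 16777216, i.e. ~16.7 million colors
print(max_colors(8))    # 256
```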
22
Frame Rate
The number of times per second an image is refreshed controls image quality
– Flicker
– Jerkiness of motion
Some encoding systems allow adjustment of the frame rate to stay within the bandwidth allocated by the network
– Basic Rate ISDN allows a maximum of 128 kbps
– Most high-quality videoconferencing systems use at least 384 kbps
23
Digital Video Bandwidth Requirements
Consider the following:
– Frame rate = 60 frames/second
– Color depth = 24 bits/pixel
– Frame size = 640 × 480 pixels
This example would require 442.37 Mbps to transmit uncompressed video in real time
Compression techniques are needed to make video transmission affordable
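The 442.37 Mbps figure is just the product of the three parameters:

```python
# Uncompressed bandwidth for the slide's example.
frame_rate = 60            # frames/second
bits_per_pixel = 24
width, height = 640, 480

bps = width * height * bits_per_pixel * frame_rate
print(bps / 1e6)           # 442.368 Mbps (the slide rounds to 442.37)
```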
24
Digital Video Bandwidth Requirements
Uncompressed D-1 video requires 270 Mbps
It is still impractical to transport an uncompressed D-1 signal over the wide area
– Bandwidth is too expensive
It is also difficult to transport over the local area
– Requires Gigabit Ethernet
25
Video Stream Bandwidth
26
Intraframe Compression
The eye is less sensitive to small-scale changes in color than to small-scale changes in intensity
This means a video imaging system can “throw away” some of the color information in each frame and still appear realistic to the human eye
– Color sampling can easily be reduced (sub-sampling)
When this is done independently within each frame, the technique is referred to as “intraframe” compression
27
Intraframe Compression: Color Subsampling
The previous example would require 221 Mbps @ 4:1:1
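The 221 Mbps figure follows from the effective bits/pixel under 4:1:1 subsampling, with 8-bit samples as in the color-depth example (luma is kept for every pixel, each chroma channel for one pixel in four):

```python
# Effective bits/pixel and bit rate under 4:1:1 chroma subsampling.
luma_bits, chroma_bits = 8, 8

bpp_444 = luma_bits + 2 * chroma_bits             # 24 bits/pixel, no subsampling
bpp_411 = luma_bits + (2 * chroma_bits) // 4      # 12 bits/pixel at 4:1:1

bps_444 = 640 * 480 * bpp_444 * 60 / 1e6          # 442.368 Mbps
bps_411 = 640 * 480 * bpp_411 * 60 / 1e6          # 221.184 Mbps, the slide's 221 Mbps
print(bpp_411, bps_411)
```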
28
Alternative Intraframe Compression Techniques
The key to successful intraframe techniques is that each frame be preserved at the highest resolution possible
– Allows editing on a “frame by frame” basis
The approach is to “throw away” information that cannot be perceived by the human eye by adjusting parameters
29
Alternative Intraframe Compression (cont’d)
30
JPEG
The Joint Photographic Experts Group (JPEG) developed a compression standard for 24-bit “true color” photographic images
– Single-frame encoding technology
The technique utilizes intraframe compression
– Subsampling of chroma information
– The algorithm quantizes 8 × 8 blocks of pixels
Achieves an image compression ratio of 2:1 to 30:1 over uncompressed images
– One image equals one video frame
31
Motion JPEG
Utilizes JPEG encoding for each frame
– 30 frames/second
– Variable compression ratios (2:1 to 30:1)
Allows editing on a “frame by frame” basis
– Industry standard for high-definition storage and retrieval
One drawback: the MJPEG standard does not encode audio
– A proprietary solution is required
One hour of broadcast video at a 6:1 compression ratio requires 13 GBytes
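The 13 GB figure can be reproduced if one assumes a CCIR 601-style baseline (720 × 486 active pixels, 16 bits/pixel 4:2:2, 30 fps); the slide does not state its baseline, so this is an assumption:

```python
# One hour of 6:1 Motion JPEG, under an assumed CCIR 601-style baseline.
uncompressed_bps = 720 * 486 * 16 * 30      # ~168 Mbps (assumed baseline)
compressed_bps = uncompressed_bps / 6       # 6:1 compression
gbytes_per_hour = compressed_bps * 3600 / 8 / 1e9
print(round(gbytes_per_hour, 1))            # ~12.6, in line with the slide's 13 GB
```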
32
Interframe Compression
Significant compression must be achieved to transport and handle video streams over wide area networks (WANs)
– Achieved by “interframe” compression
– Adjustment of image parameters
– Data compression achieved by dropping redundant information between frames
The most common interframe compression technique available today is MPEG
33
MPEG Compression
Achieving significant compression ratios requires predictive techniques
These techniques encode one complete frame periodically and “predict” the changes between these “key frames”
– MPEG encodes a complete frame every 16th frame
Example: a “talking head,” where only the lips and head of the speaker are moving
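A toy sketch of the key-frame idea, using the slide's 16-frame period (real MPEG uses motion-compensated prediction with several frame types; this only marks which frames carry complete information):

```python
# Toy interframe sketch: a complete "I" (key) frame every 16 frames,
# predicted "P" frames in between.
GOP = 16

def frame_types(n_frames):
    return ["I" if i % GOP == 0 else "P" for i in range(n_frames)]

types = frame_types(32)
print(types[0], types[1], types[16])   # I P I
```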
34
MPEG Encoding Scheme
35
MPEG Disadvantages
Because complete frame information is available only every sixteen frames (~every ½ second), video editing is more difficult
Sound may need to be correlated to the frame of choice
36
Encoding Techniques
Encoders are now available at reasonable prices that bring compressed bit rates into an affordable range (< 1.5 Mbits/sec)
Two types of encoders are available:
– Symmetric
– Asymmetric
Symmetric encoders can encode in real time
– Used for video streaming applications
Asymmetric encoders cannot encode in real time
– Used for CD and DVD applications
37
Encoder/Decoder (Codec) Types
38
Streaming Video
Originally, video was played via the “download and play” method
For long video clips, it is more desirable to start playing without waiting for the entire file to download
– Streaming video
– Requires isochronous playback
– Achieved by buffering
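A minimal sketch of the buffering decision behind streaming playback (the threshold is illustrative; real players adapt it to measured network conditions):

```python
# Start playback only once the buffer holds enough data to ride out
# network jitter, so delivery to the decoder stays isochronous.
def can_start(buffered_seconds, threshold_seconds=2.0):
    return buffered_seconds >= threshold_seconds

print(can_start(0.5))   # False: keep filling the buffer
print(can_start(3.0))   # True: begin playback while the download continues
```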
39
Download and Play
40
Isochronous Playback
41
Video Streaming
42
Video Editing and Authoring
To create useful applications, it is necessary to capture multiple streams and combine them into one
Multiple rates may also be required for different users
After the streams are captured, an “editing and authoring” process is required
43
Video Editing Process