1
Week 8 - Tutorial Interactive Digital Moving Image Production | CU3003NI | - Pratik Man Singh Pradhan
2
Media Encoding
3
Media Encoding Overview
Why and how audio and video are encoded.
4
Encoding
- Media encoding refers to the conversion of media files from one form to another (compression).
- Encoding is performed for the following purposes:
  - Compressing a file to a smaller size (data/frame size)
  - Making it usable on a particular device / software player
- Practically all audio and video is encoded and compressed for distribution.
- Uncompressed audio and video are retained for archiving and re-use / re-encoding.
5
Encoding > Decoding Flow
[Diagram: sources (webcam, microphone, OB unit / studio control room) feed uncompressed video and audio into an encoding engine, which produces a compressed data file or compressed stream; this travels via local storage or a transport network (www) to a decoding engine, which turns it back into a playable data file or stream.]
6
Transcoding
- The techniques used for transcoding are the same as for encoding.
- The goal of transcoding is not to get a file down to a small size (compression).
- Transcoding can be seen as 'translating' from one form to another while maintaining maximum quality.
- Example: some editing systems may not be capable of processing a particular type of video, so footage is transcoded to a form that can be used (see the sketch below).
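A minimal sketch of that example in Python, assuming the FFmpeg CLI is installed and on the PATH; the file names are hypothetical and the exact codec choices depend on the target editing system. The point is that the target is a high-quality intermediate codec, not a small delivery file.

```python
import subprocess

# Transcode camera footage to an edit-friendly intermediate codec.
# Hypothetical file names; assumes the ffmpeg CLI is installed.
# The goal is maximum quality, not a small file, so a high-bitrate
# intra-frame codec (ProRes here) is used rather than a delivery codec.
subprocess.run([
    "ffmpeg",
    "-i", "camera_footage.mov",   # source the editing system cannot read
    "-c:v", "prores",             # transcode video to Apple ProRes
    "-c:a", "pcm_s16le",          # keep audio as uncompressed 16-bit PCM
    "edit_ready.mov",             # QuickTime container the editor accepts
], check=True)
```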
7
Digital Media Files - Containers (Wrappers)
- Encoded media is stored within container formats.
- Containers 'store' encoded video and/or audio 'streams'.
- Containers also hold metadata needed for the player to make 'sense' of the enclosed media formats.
- Container formats include QuickTime (MOV), RealMedia (RM), MPEG and OGG (an open source format).
- IMPORTANT: container formats do not describe the manner in which a file has been encoded.
  - A QT file might not play in QuickTime on a particular machine.
  - The software requires the appropriate codec to be installed (see the inspection sketch below).
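A minimal sketch of how to see the container and the codecs separately, assuming the ffprobe CLI (part of FFmpeg) is installed; the file name is hypothetical. The wrapper alone (e.g. .mov) says nothing about whether the streams inside can be decoded on this machine.

```python
import json
import subprocess

# Inspect a media file: report the container (format) and the codec of
# each stream inside it. Assumes ffprobe is installed; "clip.mov" is a
# hypothetical input file.
out = subprocess.run(
    ["ffprobe", "-v", "error",
     "-show_entries", "format=format_name:stream=codec_type,codec_name",
     "-of", "json", "clip.mov"],
    capture_output=True, text=True, check=True,
).stdout

info = json.loads(out)
print("Container:", info["format"]["format_name"])
for stream in info["streams"]:
    print(stream["codec_type"], "codec:", stream["codec_name"])
```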
8
Digital Media Files - Codecs
- Whether or not a file will play depends on its codec.
- Codec refers to the particular encoding method (algorithm) used to compress and decompress a piece of media (COmpress - DECompress).
- Codecs specifically describe the type of video or audio compression used.
- Certain codecs play almost universally (MPEG-4).
- Some codecs may require plugins to be installed for playback (Vorbis (OGG), VP3 (Theora)).
9
Encoding Applications
- Encoding is done at the following points:
  - A/V production applications (from the timeline): Final Cut Pro (native and via Compressor), Pro Tools
  - Within bespoke compression applications: Adobe Media Encoder (PC/Mac), Compressor (Apple), MediaCoder (open source)
  - As import/export options on media players: iTunes (import), QuickTime Pro (export options)
  - On websites such as YouTube (FFmpeg server-side encoder)
- Some encoding applications offer more control than others.
10
Lossless and Lossy Compression - Lossless
- Lossless refers to any file type that is a true (verbatim) copy of the original.
- No quality is lost in saving a file in the following formats:
  - Lossless audio - FLAC, WavPack, Monkey's Audio, ALAC
  - Lossless video - Animation codec, Huffyuv, uncompressed
  - Lossless graphics - GIF, PNG, TIFF
- A basic example of a lossless compression method is RLE (Run-Length Encoding).
- Using the following as an abstraction of the data used to store a segment of audio: [AAAAABBCCCCCDEEEEEEE] = 20 bytes.
- RLE looks at the 'run lengths' (repeated adjacent runs of data) and summarises them as A5B2C5D1E7 = 10 bytes (see the sketch below).
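A minimal sketch of run-length encoding in Python (the slide does not prescribe an implementation); the token format simply mirrors the A5B2C5D1E7 example.

```python
import re

# Minimal run-length encoder/decoder matching the slide's example.
def rle_encode(data: str) -> str:
    out = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        out.append(f"{data[i]}{run}")   # e.g. five adjacent A's become "A5"
        i += run
    return "".join(out)

def rle_decode(encoded: str) -> str:
    # Lossless: expanding every token reproduces the original exactly.
    return "".join(symbol * int(count)
                   for symbol, count in re.findall(r"([A-Za-z])(\d+)", encoded))

original = "AAAAABBCCCCCDEEEEEEE"        # 20 symbols
packed = rle_encode(original)            # "A5B2C5D1E7" -> 10 symbols
assert rle_decode(packed) == original    # verbatim copy recovered
print(original, "->", packed)
```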
11
Lossless and Lossy Compression - Lossy
- Lossy refers to file formats and codecs where a file may look or sound acceptable, or as good as the original, but is in fact a degraded copy.
- Lossy file formats include:
  - Lossy audio - AAC, MP3, Vorbis
  - Lossy video - M2V, H.264
  - Lossy graphics - JPEG
- Lossy compression approximates data in order to make easily represented sequences of data.
- A (very) basic example uses a similar scenario as before:
  - AAAAABAAAAA represents a signal or series of pixels (11 bytes).
  - The compression could represent it as A5B1A5 (6 bytes, lossless).
  - Lossy compression decides that the discrepancy is not significant enough to record, so it approximates the B back to A (A11 = 3 bytes, lossy; see the sketch below).
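A toy sketch of that idea in Python; this is not any real codec, and the run-length threshold is purely illustrative. Runs judged too short to matter are absorbed into their neighbour before run-length coding, so data is changed and cannot be recovered.

```python
# Toy 'lossy' pass: runs shorter than min_run are treated as negligible
# and absorbed into the neighbouring run, so "AAAAABAAAAA" collapses to
# a single run of 11 A's before run-length coding.
def lossy_runs(data: str, min_run: int = 2) -> str:
    runs = []
    for symbol in data:                       # build [symbol, count] runs
        if runs and runs[-1][0] == symbol:
            runs[-1][1] += 1
        else:
            runs.append([symbol, 1])

    kept = []
    for symbol, count in runs:
        if count < min_run and kept:
            kept[-1][1] += count              # approximate: the short run is rewritten
        elif kept and kept[-1][0] == symbol:
            kept[-1][1] += count              # merge runs that now touch
        else:
            kept.append([symbol, count])
    return "".join(f"{s}{c}" for s, c in kept)

print(lossy_runs("AAAAABAAAAA"))  # "A11" -- 3 symbols, but the lone B is lost
```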
12
Redundancy
- File compression uses systems based around redundancy.
- Redundant elements are parts of the sound or image that do not need to be recorded (written) as data in the compressed file.
- Audio uses psychoacoustic principles to determine which sounds can be omitted without adversely affecting the overall quality (low/high frequencies, hiss, overlapping sounds).
- Video uses pixel colour data to determine redundancies.
- Different codecs and encoders view and process these redundancies in different ways (algorithms), with different results.
- Redundancy can be broken into two categories:
  - Objective redundancy
  - Subjective redundancy
13
Objective Redundancy in Imagery
- An area of pure black is detected (the area spans 15,300 pixels, all black).
- The area is mapped between 4 points (the corners of the green rectangle).
- 15,300 pieces of information can be reduced to 5 pieces of information.
- That information can then be decoded in the player and rendered exactly as it was (see the sketch below).
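A toy sketch of that reduction in Python, assuming NumPy; the frame dimensions are chosen only so the region contains exactly 15,300 pixels, as on the slide.

```python
import numpy as np

# A solid-black region is described by its bounding rectangle plus one
# colour value: 5 numbers instead of one value per pixel, and it can be
# reconstructed exactly at decode time.
frame = np.zeros((90, 170), dtype=np.uint8)   # 90 x 170 = 15,300 black pixels
x0, y0, x1, y1 = 0, 0, 170, 90                # corners of the detected region
colour = 0
encoded = (x0, y0, x1, y1, colour)            # 5 pieces of information

decoded = np.full((y1 - y0, x1 - x0), colour, dtype=np.uint8)
assert (decoded == frame).all()               # rendered exactly as it was
```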
14
Subjective Redundancy in Imagery
- An area is detected where pixels are similar in colour (all black / dark grey).
- The encoder decides that the difference is negligible (it won't be noticed).
- The area is mapped similarly to before, using 1 colour value.
- Information has been discarded, and the quality of the compressed file is less than the original.
15
Compressing
- The goal of compression is to get the smallest file size while retaining maximum 'meaningful' information (fidelity/clarity).
- Compression is always a trade-off between quality and file size.
- The same principle applies to audio/video as to graphics.
- Always work from a high-quality source.
- Never compress already-compressed media (generation loss; see the sketch below).
- Always retain (archive) a high-quality original for future work.
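A small demonstration of generation loss in Python, assuming Pillow and NumPy are installed; the "master" frame here is just random test content, not real footage. Each re-encode is another lossy step, and the error relative to the original can never return to zero.

```python
import io
import numpy as np
from PIL import Image

# Repeatedly re-encode an image as JPEG and measure the error against the
# original after each generation. Assumes Pillow and NumPy.
original = Image.fromarray(
    (np.random.rand(64, 64, 3) * 255).astype(np.uint8))  # stand-in "master"

current = original
for generation in range(1, 6):
    buf = io.BytesIO()
    current.save(buf, format="JPEG", quality=70)   # lossy encode
    buf.seek(0)
    current = Image.open(buf).convert("RGB")       # decode the degraded copy
    error = np.abs(np.asarray(current, dtype=int)
                   - np.asarray(original, dtype=int)).mean()
    print(f"generation {generation}: mean pixel error {error:.2f}")
```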
16
THE END