Data Hiding in Image and Video Part I: Fundamental Issues and Solutions ECE 738 Class Presentation By Tanaphol Thaipanich

Introduction
Data hiding/digital watermarking = schemes that embed secondary data in digital media.
Applications: ownership protection, access control, authentication.
Data hiding is a communication problem: the embedded data is the signal to be transmitted.

Introduction
Embedding capacity vs. robustness is the central trade-off.
Distortion must be kept imperceptibly small for commercial or artistic reasons.
Actual noise conditions matter: overestimating the noise wastes capacity; underestimating it corrupts the embedded bits.
Embedding capacity is unevenly distributed: the number of embeddable bits varies from location to location.

Data Hiding Framework

Key Elements in a Data Hiding System
Upper layers are built on top of the basic system to obtain additional functionality.
Three key elements:
(1) a mechanism for embedding one bit;
(2) a perceptual model to ensure imperceptibility;
(3) modulation/multiplexing techniques to convey multiple bits.

Two Basic Embedding Mechanisms
Type I: Additive Embedding
Adds the secondary data to the host signal: I1 - I0 = f(b).
The host signal I0 is the major noise source in blind detection.
Knowledge of I0 at the detector enhances detection performance (a minimal sketch follows below).
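
For concreteness, here is a minimal Python sketch of Type I embedding with a pseudo-random spreading sequence and a correlation detector. The function names, the sequence construction, and the strength parameter alpha are illustrative choices, not taken from the paper:

```python
import numpy as np

def embed_additive(host, bit, alpha=0.5, seed=42):
    """Type I additive embedding: I1 = I0 + f(b), with f(b) = +/-alpha*s,
    where s is a pseudo-random spreading sequence known to the detector."""
    s = np.random.default_rng(seed).standard_normal(host.shape)
    return host + (1 if bit else -1) * alpha * s

def detect_additive(received, seed=42, host=None):
    """Correlation detector. Blind detection treats the host as noise;
    when the original host is available, subtracting it removes the
    dominant interference term."""
    s = np.random.default_rng(seed).standard_normal(received.shape)
    d = received - host if host is not None else received
    return 1 if np.dot(d, s) > 0 else 0

rng = np.random.default_rng(0)
host = 10.0 * rng.standard_normal(4096)     # host power >> watermark power
marked = embed_additive(host, bit=1)
print(detect_additive(marked))              # blind detection
print(detect_additive(marked, host=host))   # non-blind detection
```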

Two Basic Embedding Mechanisms
Type II: Relationship Enforcement Embedding
Deterministically enforces a relationship on the host signal: b = g(I1).
To minimize perceptual distortion, I1 is kept close to I0.
Detection needs no knowledge of I0: the information about b is carried entirely in I1.

Comparison of Type I and Type II
Capacity vs. robustness under blind detection.
Simplified additive model (Type I): for simplicity, the total noise M_i is modeled as i.i.d. N(0, σ_M²).
The optimal detector (minimum probability of error) uses a normalized correlation as its detection statistic.

Comparison of Type I and Type II
Under each hypothesis, T_N is Gaussian with unit variance and mean ±sqrt(Σ_i s_i² / σ_M²), the square root of the ratio of total watermark energy to noise power.
To minimize the probability of error, raise that ratio (sketched below). What should we do?
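
A sketch of the correlation-detector analysis this slide summarizes, reconstructed from the stated model (antipodal signaling of one bit b over a length-N sequence s_i, noise i.i.d. N(0, σ_M²)); treat it as an assumption-laden reconstruction rather than a quote from the paper:

```latex
% Received samples: d_i = b\,s_i + M_i, \quad b \in \{+1, -1\}
T_N \;=\; \frac{\sum_{i=1}^{N} d_i\, s_i}{\sqrt{\sigma_M^2 \sum_{i=1}^{N} s_i^2}}
\;\sim\; \mathcal{N}\!\left(\pm\sqrt{\tfrac{\sum_i s_i^2}{\sigma_M^2}},\; 1\right),
\qquad
P_e \;=\; Q\!\left(\sqrt{\tfrac{\sum_i s_i^2}{\sigma_M^2}}\right)
```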

Comparison of Type I and Type II
Given the same noise power:
A watermark with higher power raises robustness at the cost of distortion.
Using a longer signal per bit raises robustness at the cost of embedding capacity.

Comparison of Type I and Type II
Type II has no interference from the host media and codes one bit in a small number of host components, giving high capacity.
Example: odd-even embedding, which quantizes a coefficient to an even or odd multiple of a step size Q according to the bit (a sketch follows below).
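
A minimal Python sketch of odd-even embedding and its blind extraction; the interface and the choice of Q here are illustrative assumptions:

```python
import numpy as np

def embed_odd_even(coeff, bits, Q):
    """Type II odd-even embedding: round each coefficient to the nearest
    multiple of Q whose parity (even/odd multiple) encodes the bit."""
    k = np.round(coeff / Q).astype(int)
    mismatch = (k % 2) != bits             # parity does not match the bit yet
    # step to the adjacent multiple with the correct parity, on the near side
    k = np.where(mismatch, k + np.where(coeff >= k * Q, 1, -1), k)
    return k * Q

def extract_odd_even(coeff, Q):
    """Extraction needs no knowledge of the original host:
    b = g(I1) = parity of the nearest multiple of Q."""
    return np.round(coeff / Q).astype(int) % 2

Q = 8.0
host = np.array([13.2, -3.7, 40.1, 0.4])
bits = np.array([1, 0, 0, 1])
marked = embed_odd_even(host, bits, Q)
print(extract_odd_even(marked, Q))         # -> [1 0 0 1]
```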

Comparison of Type I and Type II
Type II robustness comes from the quantization step, which creates a tolerance zone: perturbations within (-Q/2, Q/2) leave the bit intact, so a larger Q means more tolerance.
Assuming host components within ±Q of kQ are uniformly distributed, the embedding error is uniform on (-Q, Q), giving MSE = Q²/3.
Hence a larger Q also means larger distortion.

Comparison of Type I and Type II: Summary
Type I: excellent robustness and invisibility when the original host is available at the detector. Under blind detection, a longer watermark per bit is more robust but gives less capacity.
Type II: suited to high-data-rate hiding applications that do not have to survive strong noise.

Quantified Capacity Study
Type I embedding. Channel model: CICO (continuous input, continuous output).
The additive noise (host interference plus processing noise) is modeled as i.i.d. Gaussian.
Shannon channel capacity: C = W log2(1 + S/N), here C_CICO = W log2(1 + A²/(σ_I² + σ²)), where
A² = power of the embedded signal,
σ_I² = power of the host signal,
σ² = power of the processing noise (σ_I² >> σ²),
W = 1/2 (with the bandwidth taken at MTF50; see the MTF note at the end).

Quantified Capacity Study
Type II embedding. Channel model: DIDO (discrete input, discrete output), an equivalent binary symmetric channel.
C_DIDO = 1 - h_p, where h_p is the binary entropy:
h_p = p·log2(1/p) + (1-p)·log2(1/(1-p)),
and p is the crossover probability, i.e., the probability that noise pushes a marked coefficient across a quantization decision boundary.

Capacity Comparison for Type I & II
Problem setting: fix the MSE introduced by the embedding process to E² to control perceptual quality.
Type I: power of the embedded signal = E²; Gaussian processing noise of power σ²; host interference σ_I = 10E.
Type II: MSE = Q²/3 = E², so Q = √3·E.
(The sketch below computes both capacities under these settings.)
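
A small Python sketch evaluating both capacity formulas under this problem setting. The crossover probability for Type II uses a first-order approximation, p ≈ 2Q(Q/(2σ)) in terms of the Gaussian tail function, which is my assumption rather than the paper's exact expression:

```python
import numpy as np
from scipy.stats import norm

E = 1.0                    # fixed embedding MSE budget (E^2 = 1)
sigma_I = 10 * E           # host interference (blind Type I detection)
Qstep = np.sqrt(3) * E     # Type II step size so that Qstep^2 / 3 = E^2

def h2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 0.5)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

for sigma in [0.1, 0.5, 1.0, 2.0, 5.0]:          # processing-noise levels
    # Type I (CICO): host interference adds to the processing noise.
    c1 = 0.5 * np.log2(1 + E**2 / (sigma_I**2 + sigma**2))
    # Type II (DIDO): noise beyond Qstep/2 flips the nearest-multiple parity.
    p = min(2 * norm.sf(Qstep / (2 * sigma)), 0.5)
    c2 = 1 - h2(p)
    print(f"sigma={sigma:4.1f}  C_TypeI={c1:.4f} bits  C_TypeII={c2:.4f} bits")
```

With these numbers, Type II is near 1 bit per sample at small σ and collapses once σ approaches the step size, while Type I stays small but degrades slowly, matching the conclusion on the next slide.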

Capacity Comparison for Type I & II
Type I is suitable for strong-noise conditions; Type II is useful under low-noise conditions.

Multi-level Embedding
Designing for a single target WNR (watermark-to-noise ratio) is brittle:
if the actual noise is stronger than the target, no data can be extracted;
if the actual noise is weaker, embedding capacity is wasted.

Multi-level Embedding
Instead, use two target WNR values: a fraction α1 of the embedded data survives a WNR of x1, and all of the embedded data survives a higher WNR of x2.
Too many embedding levels, however, lead to degradation.

Handling Uneven Embedding Capacity
Unevenly distributed embedding capacity comes from the non-stationary nature of perceptual sources. For example, changes made in smooth areas are easier to perceive than changes in textured areas.
Goal: embed as many bits as possible in each region (highest capacity). But the detector then needs to be told how many bits each region carries, and conveying that side information is a large overhead that lowers the net capacity.

Handling Uneven Embedding Capacity
An easy way to avoid the overhead: embed a fixed number of bits in each region, so no side information is needed. This is CER (Constant Embedding Rate).
CER must keep the fixed number of bits small and make each region large, which results in significant waste of embedding capacity.
Does increasing the region size really help?

Handling Uneven Embedding Capacity
Deciding what is embeddable vs. unembeddable:
apply a blockwise DCT (8x8 or 16x16 blocks) and compare the magnitude of each DC/AC coefficient with a perceptual threshold.
Smooth regions yield no embeddable coefficients (a sketch of this classification follows below).
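
A minimal Python sketch of this per-block classification. The single global threshold is a stand-in assumption; an actual perceptual model uses frequency-dependent just-noticeable thresholds:

```python
import numpy as np
from scipy.fft import dctn

def embeddable_counts(image, block=8, threshold=16.0):
    """Count embeddable coefficients per block: AC coefficients whose
    magnitude exceeds a perceptual threshold."""
    h, w = image.shape
    counts = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            c = dctn(image[i:i+block, j:j+block].astype(float), norm='ortho')
            c[0, 0] = 0.0                  # ignore the DC term in the count
            counts.append(int(np.sum(np.abs(c) > threshold)))
    return np.array(counts)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))  # stand-in for a real image
counts = embeddable_counts(img, block=8)
print("fraction of blocks with zero embeddable coefficients:",
      np.mean(counts == 0))
```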

Handling Uneven Embedding Capacity
20% of 8x8 blocks are smooth, and still 15% of 16x16 blocks are smooth.
Increasing the block size is therefore ineffective at reducing the number of segments with zero embeddable coefficients. What should we do?

Backup Embedding
Embed the same data in multiple areas.
Backup embedding with L locations uses as many coefficients as a block L times larger. Why is it better?
Shuffling is a generalization of backup embedding: the block size shrinks to 1 and the locations are specified by a permutation function.

Quick Questions
Q1: Why is embedding capacity important?
Q2: What is the MTF (Modulation Transfer Function)?
Q3: What is shuffling?

Shuffling
The permutation function can be random or non-random. We focus on the completely random permutation, under which all permutations are equiprobable (each with probability 1/S!).

Shuffling
Under a completely random permutation, let m_r/N be the fraction of the N segments that have exactly r embeddable coefficients.
Consider the simple scenario of throwing balls into holes (the resulting expectation is sketched below).
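
The balls-in-holes model makes the per-segment count hypergeometric. A reconstructed sketch of the expectation the slide alludes to, assuming S total coefficients, P of them embeddable, and segments of size k = S/N (my notation, not necessarily the paper's):

```latex
E\!\left[\frac{m_r}{N}\right] \;=\; \frac{\binom{P}{r}\binom{S-P}{k-r}}{\binom{S}{k}},
\qquad
E\!\left[\frac{m_0}{N}\right] \;=\; \frac{\binom{S-P}{k}}{\binom{S}{k}} \;\approx\; \left(1 - \tfrac{P}{S}\right)^{k}
```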

Shuffling
Before shuffling: 20% of segments have no embeddable coefficient (8x8 case).
After shuffling: E[m_0/N] = 0.002%, so almost all segments have embeddable coefficients.

Shuffling
For an image with a high fraction of embeddable coefficients, use small segments to get high capacity; for a smooth image, use large segments to ensure no segment has zero embeddable coefficients.
What does shuffling actually do? It reallocates embeddable coefficients from non-smooth regions to smooth regions.
Any drawbacks?

Practical Considerations
Shuffling makes the probability of a segment with no embeddable coefficient very low, but there is still a chance of it.
Solution: use a primary and a secondary shuffle that differ significantly from each other.
As discussed before, shuffling increases sensitivity, but that is acceptable for applications whose benefit comes from the hidden data.

Variable Embedding Rate (VER)
VER allows more data to be embedded when the average overhead is relatively small compared with the average embedding capacity per segment.
The main issue is how to convey the side information:
it can use the same or a different embedding mechanism than the payload, and allocating more energy to it makes it more robust but reduces capacity.
Q: Doesn't embedding the side information with the same mechanism create a "key in a locked box" problem?

Variable Embedding Rate (VER)
It does not, because part of the embedded data is predetermined. Example: the detector decodes the data using all candidate shuffles; the shuffle that decodes the predetermined bits accurately is the one actually in use.
The same idea applies to the segment size (to limit complexity, only a primary and a secondary size are used). A sketch of this candidate search follows below.
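
A minimal Python sketch of resolving the "key in a locked box" by trying candidate shuffles against a predetermined sync pattern, reusing the odd-even extractor idea from earlier. The sync pattern, step size Q, and permutation handling are illustrative assumptions:

```python
import numpy as np

SYNC = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # predetermined (known) bits
Q = 8.0

def embed_sync(n, perm, rng):
    """Build a toy coefficient array whose first de-shuffled positions
    carry SYNC via odd-even embedding, then apply the shuffle."""
    coeffs = Q * (2 * rng.integers(0, 8, size=n))   # even multiples: bit 0
    coeffs[:len(SYNC)] += Q * SYNC                  # odd multiple where bit=1
    return coeffs[perm]                             # shuffle

def extract_sync(coeffs, perm):
    """De-shuffle with a candidate permutation, then read the parity bits."""
    deshuffled = coeffs[np.argsort(perm)]
    return np.round(deshuffled[:len(SYNC)] / Q).astype(int) % 2

rng = np.random.default_rng(7)
n = 64
primary, secondary = rng.permutation(n), rng.permutation(n)
marked = embed_sync(n, primary, rng)

for name, perm in {"primary": primary, "secondary": secondary}.items():
    ok = np.array_equal(extract_sync(marked, perm), SYNC)
    # only 'primary' should match (with overwhelming probability)
    print(name, "matches sync pattern:", ok)
```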

Thank you
Any questions?

MTF
MTF (Modulation Transfer Function): the spatial frequency response of an imaging system.
As used here, the MTF applies only in the horizontal direction.
MTF50, MTF10, and MTF2 are the frequencies at which the response drops to 50%, 10%, and 2%, used as cut-off points.