Deep Learning Hierarchical Representations for Image Steganalysis


Deep Learning Hierarchical Representations for Image Steganalysis Le Li 31446468 ll393@njit.edu

Abstract · Introduction · Prevailing Image Steganalysis Methods · The Proposed CNN for Steganalysis · Experimental Results and Analysis

Introduction Background: Nowadays, the most secure steganographic schemes are content-adaptive: they tend to embed the secret data in regions with complex content, where the embedding traces are less detectable. Corresponding to the development of image steganography, substantial progress has also been made in steganalysis, which aims to reveal the presence of hidden messages in images; the current state-of-the-art method in the spatial domain is the SRM.

Prevailing Image Steganalysis Methods Nowadays, the prevailing steganalysis methods in the spatial domain are the Spatial Rich Model (SRM) [4] and its several variants, which mainly consist of the following three steps: 1. Noise Residual Computation; 2. Feature Extraction; 3. Binary Classification. The framework of image steganalysis methods

Prevailing Image Steganalysis Methods 1. Noise Residual Computation: For a test image X = (x_ij), compute the noise residuals R = (r_ij) from a pixel predictor: r_ij = x̂_ij(N_ij) − c · x_ij, where N_ij is a local neighborhood of pixel x_ij, x̂_ij is a predictor of c · x_ij computed from that neighborhood, and c is the residual order.
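The residual step above can be sketched in a few lines of numpy. This uses the simplest possible predictor, x̂_ij = x_{i,j+1} with c = 1 (a first-order horizontal difference); the real SRM applies a whole bank of kernels up to 5×5, and the quantization/truncation constants q and T below are typical but illustrative values:

```python
import numpy as np

def first_order_residual(x):
    """Simplest SRM-style residual: r[i,j] = x[i,j+1] - x[i,j]."""
    x = x.astype(np.int32)          # avoid uint8 wrap-around on subtraction
    return x[:, 1:] - x[:, :-1]

def quantize_truncate(r, q=1, T=2):
    """Quantize and truncate residuals to [-T, T], as SRM does,
    so the subsequent co-occurrence features stay low-dimensional."""
    return np.clip(np.round(r / q).astype(np.int32), -T, T)

img = np.array([[10, 12, 11],
                [13, 13, 20]], dtype=np.uint8)
res = first_order_residual(img)     # [[2, -1], [0, 7]]
qt  = quantize_truncate(res)        # [[2, -1], [0, 2]]
```

Truncation is what makes the weak, low-amplitude stego signal dominate: large residuals from image content are clipped, while the ±1 embedding changes survive intact.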

Prevailing Image Steganalysis Methods 2. Feature Extraction: The quantized and truncated residuals are aggregated into co-occurrence histograms of neighboring residual values, which form the SRM feature vector.
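The co-occurrence idea can be shown with a toy numpy sketch. SRM actually uses fourth-order co-occurrences (4-tuples of adjacent residuals); pairs keep the example small while preserving the idea of histogramming joint residual values:

```python
import numpy as np

def cooccurrence_pairs(qr, T=2):
    """Histogram of horizontally adjacent residual pairs, each in [-T, T].
    Illustrative 2nd-order version of SRM's 4th-order co-occurrences."""
    size = 2 * T + 1
    hist = np.zeros((size, size), dtype=np.int64)
    left  = qr[:, :-1] + T          # shift values into non-negative indices
    right = qr[:, 1:] + T
    np.add.at(hist, (left, right), 1)   # unbuffered scatter-add
    return hist

qr = np.array([[2, -1, 0],
               [0,  2, 2]])
h = cooccurrence_pairs(qr)
# pairs counted: (2,-1), (-1,0), (0,2), (2,2)
```

`np.add.at` is used instead of `hist[left, right] += 1` because the latter silently drops repeated index pairs.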

Prevailing Image Steganalysis Methods 3. Binary Classification: The final step of steganalysis is to classify an image as cover or stego using an elaborately designed classifier (a support vector machine (SVM) or an ensemble classifier), which must be trained through supervised learning prior to practical application.
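The ensemble classifier commonly paired with SRM aggregates many Fisher Linear Discriminants trained on random feature subspaces. A minimal numpy sketch of a single FLD base learner follows; the "features" here are entirely synthetic stand-ins, not real SRM features:

```python
import numpy as np

def fld(X_cover, X_stego):
    """Single Fisher Linear Discriminant: w = Sw^{-1} (mu1 - mu0),
    thresholded at the midpoint of the class means."""
    mu0, mu1 = X_cover.mean(axis=0), X_stego.mean(axis=0)
    Sw = np.cov(X_cover.T) + np.cov(X_stego.T)      # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu0)
    thresh = w @ ((mu0 + mu1) / 2)
    return w, thresh

rng = np.random.default_rng(0)
cover = rng.normal(0.0, 1.0, (200, 4))   # synthetic cover features
stego = rng.normal(0.5, 1.0, (200, 4))   # synthetic stego features (shifted)
w, t = fld(cover, stego)
stego_rate = (stego @ w > t).mean()      # most stego samples flagged
cover_rate = (cover @ w > t).mean()      # few cover samples flagged
```

The full ensemble classifier would train many such FLDs on random subspaces of the high-dimensional SRM features and decide by majority vote.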

THE PROPOSED CNN FOR STEGANALYSIS Convolutional Neural Network Architecture The three key steps in the framework of modern steganalysis can be well simulated by a CNN model: 1. The residual computation is implemented through convolution; 2. The cascade of multiple convolutional layers can be trained to learn and extract features of the original data; 3. For the classification step, the softmax classifier in the CNN plays the role of the SVM or ensemble classifier.
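The three-way correspondence can be illustrated end-to-end with a toy numpy forward pass: one high-pass "convolution" (residual), ReLU plus pooling (feature extraction), and a softmax over two classes (classification). All weights here are made up for illustration; this is not the paper's network:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive valid-mode cross-correlation (what CNNs call 'convolution')."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

def softmax(z):
    e = np.exp(z - z.max())     # shift for numerical stability
    return e / e.sum()

hp = np.array([[-1.0, 1.0]])                  # high-pass kernel -> residual step
W_fc = np.array([[1.0], [-1.0]])              # made-up 2-class linear weights
b_fc = np.zeros(2)

img = np.arange(16.0).reshape(4, 4)
feat = np.maximum(conv2d_valid(img, hp), 0).mean()   # ReLU + global avg pool
probs = softmax(W_fc @ np.array([feat]) + b_fc)      # softmax "classifier"
```

On this image every horizontal difference is 1, so the pooled feature is exactly 1.0 and the softmax outputs a valid two-class distribution.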

THE PROPOSED CNN FOR STEGANALYSIS The architecture of the proposed CNN

THE PROPOSED CNN FOR STEGANALYSIS Initialization With High-Pass Filters in SRM We initialize the weights of the first convolutional layer of our network with the high-pass filter kernels used in SRM instead of random values. This initialization strategy acts like a regularizer: it dramatically narrows the feasible parameter space and facilitates convergence of the network. Moreover, these high-pass filters make the network concentrate on the embedding artifacts introduced by steganography rather than on the complex image content.
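As a concrete example, one of the 5×5 high-pass kernels from the SRM filter bank (often called the "KV" kernel) can serve as such an initializer. The sketch below builds it in numpy and checks the defining high-pass property; in a framework like PyTorch one would copy it into the first layer's weight tensor before training:

```python
import numpy as np

# The 5x5 "KV" high-pass kernel from the SRM filter bank, usable as an
# initializer for the first convolutional layer instead of random weights.
KV = (1.0 / 12.0) * np.array([
    [-1,  2,  -2,  2, -1],
    [ 2, -6,   8, -6,  2],
    [-2,  8, -12,  8, -2],
    [ 2, -6,   8, -6,  2],
    [-1,  2,  -2,  2, -1],
], dtype=np.float64)

# A high-pass kernel's coefficients sum to zero: flat (constant) image
# regions produce zero response, so only high-frequency content -- such
# as embedding noise -- passes through to the rest of the network.
flat_patch = np.full((5, 5), 7.0)
response = (KV * flat_patch).sum()
```

This zero-sum property is exactly why such an initialization steers the network toward the stego signal and away from image content.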

THE PROPOSED CNN FOR STEGANALYSIS Activation Functions: Applying activation functions to neurons makes them respond selectively to useful signals in the input, resulting in so-called sparse features. ReLU: ReLU is a notable choice for the convolutional layers in a CNN and can be formulated as f(x) = max(0, x). TLU: The Truncated Linear Unit is slightly modified from ReLU; it passes its input unchanged on [−T, T] and clips it to ±T outside that interval.
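The two activations are one line each in numpy. Unlike ReLU, the TLU is symmetric about zero, so it preserves the sign of the weak, roughly sign-symmetric stego residuals instead of discarding all negative responses:

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: f(x) = max(0, x)."""
    return np.maximum(0.0, x)

def tlu(x, T=3.0):
    """Truncated Linear Unit: identity on [-T, T], clipped to +-T outside."""
    return np.clip(x, -T, T)

x = np.array([-5.0, -1.0, 0.0, 2.0, 9.0])
# relu(x)   -> [ 0,  0, 0, 2, 9]   (negative responses lost)
# tlu(x, 3) -> [-3, -1, 0, 2, 3]   (sign kept, extremes clipped)
```

T is a hyperparameter; the experiments below sweep it and find T = 3 works best.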

Experimental Results and Analysis It is observed that, for all three steganographic schemes involved, TLU-CNN achieves consistently better performance in terms of detection error than the baseline ReLU-CNN for most of the tested parameter values, with the best performance obtained when T = 3.

Experimental Results and Analysis The network is named TLU_n if the ReLUs in its first n convolutional layers are replaced by TLUs while the ReLUs in the other layers remain unchanged. As shown in the table, despite similar results from TLU_1 to TLU_3, detection accuracy becomes worse from TLU_4 onward. This is because, as the network becomes deeper, the distribution of the TLU output tends to coincide with that of ReLU; it is therefore preferable to use ReLUs in the relatively deep convolutional layers. Considering that the computation of ReLU can be accelerated by the cuDNN library and that TLU_3 needs more training epochs, we choose TLU_1 as the TLU-CNN model.

SCA-TLU-CNN In image steganography, the probability of each pixel being modified during embedding, i.e., the so-called selection channel, may also be known to the steganalyst. TLU-CNN becomes its selection-channel-aware version, SCA-TLU-CNN, when this statistical measure of embedding probabilities is incorporated into the network design. SCA-TLU-CNN: the CNN-based steganalyzer in which knowledge of the selection channel is properly utilized.
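One common way selection-channel-aware methods exploit the embedding-probability map β is to propagate it through the absolute values of the first-layer kernels, so that high-probability regions contribute more to the early activations. The numpy sketch below is illustrative only; the additive merge rule and all values are assumptions, not the paper's exact formulation:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive valid-mode cross-correlation."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

K = np.array([[-1.0, 1.0]])          # first-layer high-pass kernel
img  = np.array([[10., 12., 11.]])   # toy cover/stego image row
beta = np.array([[0.0, 0.9, 0.1]])   # hypothetical embedding probabilities

residual = conv2d_valid(img, K)                 # ordinary residual branch
# Selection-channel branch: run beta through |K|, since a modification at
# any pixel covered by the kernel support can perturb the residual.
sca_term = conv2d_valid(beta, np.abs(K))
merged = np.abs(residual) + sca_term            # illustrative merge rule
```

The key idea is only the |K| propagation: the sign of the kernel is irrelevant to *whether* a pixel's modification can affect a residual, so the probability map is filtered with absolute values.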

Experimental Results and Analysis Detection errors P_E for 3 state-of-the-art steganographic schemes: (a) WOW. (b) S-UNIWARD. (c) HILL.

Thanks !