Damageless Information Hiding Technique using Neural Network
Keio University, Graduate School of Media and Governance
Kensuke Naoe

Abstract
An information hiding technique that embeds no data into the target content.
- Pattern recognition model: a neural network serves as the classifier (extraction key).
- Advantages and disadvantages of the approach.

Outline
- Background
- Motivation
- Current problem
- Proposed method
- Experimental results
- Future work
- Conclusion

Background
- Emergence of the Internet: content is widely distributed.
- Information hiding provides reliability:
  - Digital watermarking for digital rights management
  - Steganography for covert channels

Motivation and current problem
- Using one information hiding algorithm together with another can strengthen the security of the content:
  - Digital watermarking
  - Steganography
  - Fingerprinting
- There are many effective information hiding algorithms, but they are difficult to combine:
  - One algorithm may obstruct data embedded earlier by another.
  - Applying another information hiding algorithm may require recalculating the fingerprint of the content.

Research Objective
- To hide, or to relate, certain information without embedding any data into the target content.
- Ability to be combined with other information hiding algorithms to strengthen security.

Proposed Method
Approach:
- Map the hidden information onto a pattern recognition model, with a neural network as the classifier (extraction key).
- Only the proper extraction key leads to the proper hidden signal.

Why use a neural network?
Neural networks have:
- Tolerance to noise
- Error correction and complementation
- An additional-learning characteristic (new patterns can be learned incrementally)
The proposed method uses a multi-layered perceptron model with backpropagation learning (supervised learning).

Proposed Method (Embedding)
1. Frequency transformation of the content.
2. Selection of feature sub-blocks.
3. Use the feature values as input values for the neural network.
4. Generation of the classifier (extraction key), with the hidden signal as the teacher signal.
The coordinates of the feature sub-blocks also form part of the extraction key.
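A minimal sketch of this embedding (key-generation) procedure, assuming an 8x8 block DCT as the frequency transform and a hand-rolled NumPy multi-layer perceptron; the function names, parameter values, and scaling are illustrative and not taken from the paper.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix; C @ block @ C.T gives the 2-D DCT.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(n)
    C[1:, :] *= np.sqrt(2 / n)
    return C

C8 = dct_matrix(8)

def block_features(image, coords):
    # Steps 1-3: DCT each selected 8x8 sub-block and use its coefficients
    # (DC + AC values) as the input vector for the neural network.
    feats = []
    for (r, c) in coords:
        block = image[r:r + 8, c:c + 8].astype(float)
        feats.append((C8 @ block @ C8.T).ravel())
    return np.concatenate(feats) / 255.0   # rough scaling for illustration only

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_extraction_key(image, coords, hidden_bits, n_hidden=10,
                         lr=0.5, epochs=5000):
    # Step 4: train a three-layer perceptron so that the feature values of
    # the chosen sub-blocks map onto the hidden signal (teacher signal).
    x = block_features(image, coords)          # input vector
    t = np.asarray(hidden_bits, dtype=float)   # teacher signal
    rng = np.random.default_rng(0)
    W1 = rng.normal(0, 0.1, (n_hidden, x.size))
    W2 = rng.normal(0, 0.1, (t.size, n_hidden))
    for _ in range(epochs):
        h = sigmoid(W1 @ x)
        y = sigmoid(W2 @ h)
        # backpropagation of the output error (squared-error gradient)
        d_out = (y - t) * y * (1 - y)
        d_hid = (W2.T @ d_out) * h * (1 - h)
        W2 -= lr * np.outer(d_out, h)
        W1 -= lr * np.outer(d_hid, x)
    # the trained weights plus the sub-block coordinates form the extraction key
    return {"W1": W1, "W2": W2, "coords": coords}
```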

Proposed Method (Extraction)
1. Frequency transformation of the content.
2. Selection of the feature sub-blocks.
3. Use the feature values as input values for the neural network.
4. Apply the classifier (extraction key); the hidden signal appears as the output signal.
The coordinates of the feature sub-blocks are taken from the extraction key.
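Continuing the same illustrative sketch (it reuses block_features and sigmoid defined above): extraction recomputes the feature values of the key's sub-blocks from the content and feeds them through the trained network. A wrong key, with different weights or different sub-block coordinates, will not reproduce the hidden signal.

```python
def extract_hidden_signal(image, key, threshold=0.5):
    # Steps 1-3: recompute the DCT feature values of the key's sub-blocks,
    # then step 4: apply the classifier (extraction key) to recover the bits.
    x = block_features(image, key["coords"])
    h = sigmoid(key["W1"] @ x)
    y = sigmoid(key["W2"] @ h)
    return (y >= threshold).astype(int)
```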

What is a neural network?
- A neuron (nerve cell) only receives a signal and dispatches a signal to the neurons it is connected to.
- When organically connected, these neurons can process complicated tasks.
- A network built from such neurons is called a neural network.
- The multi-layered perceptron model is often used as a non-linear pattern classifier.

Calculation of the network
The input value of neuron j is the sum of products of the network weights and the output values from the previous layer:
x_j = sum_i w_ij * y_i,   y_j = f(x_j)
where y_1 ... y_N are the outputs of the previous layer and w_1j ... w_Nj are the weights into neuron j.
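A tiny NumPy illustration of this computation (the numbers are arbitrary examples):

```python
import numpy as np

y_prev = np.array([0.2, 0.7, 0.1])   # outputs y_1 .. y_N of the previous layer
w_j    = np.array([0.5, -0.3, 0.8])  # weights w_1j .. w_Nj into neuron j

x_j = np.dot(w_j, y_prev)            # x_j = sum_i w_ij * y_i
y_j = 1.0 / (1.0 + np.exp(-x_j))     # output after the sigmoid activation
```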

Generating the classifier (extraction key)
1. Frequency transformation of the content.
2. Selection of feature sub-blocks.
3. Use the feature values as input values for the neural network.
4. Generation of the classifier (extraction key), with the hidden signal as the teacher signal.
The coordinates of the feature sub-blocks also form part of the extraction key.

Patterns and signals to conceal
(pairs of an input pattern and its corresponding hidden signal)

Backpropagation learning
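For reference, the standard backpropagation update for a three-layer sigmoid network trained by gradient descent on the squared error (learning rate eta); this is the same rule used in the training sketch above:

delta_k = (y_k - t_k) * y_k * (1 - y_k)              (error at output neuron k)
delta_j = (sum_k w_jk * delta_k) * h_j * (1 - h_j)   (error at hidden neuron j)
w_jk <- w_jk - eta * delta_k * h_j
w_ij <- w_ij - eta * delta_j * x_i                   (weight updates)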

Network 1

Network 2

Network 3

Network 4

Network 5

Further experiments
Can the proposed method extract the hidden signal from a high-pass filtered image or a JPEG-compressed image?
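One possible way to run such a robustness test, sketched under the assumption that Pillow is available and that the key and extraction routine from the earlier sketch are reused: re-compress the image as JPEG and check whether the extraction key still recovers the hidden signal.

```python
import io
import numpy as np
from PIL import Image

def jpeg_attack(image_array, quality=75):
    # Re-encode the grayscale image as JPEG to simulate lossy compression
    # before running the extraction step again.
    buf = io.BytesIO()
    Image.fromarray(image_array.astype(np.uint8)).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf).convert("L"))

# attacked = jpeg_attack(original_gray_image, quality=75)
# bits = extract_hidden_signal(attacked, key)   # key from the training sketch above
```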

Network 1

Network 2

Network 3

Network 4

Network 5

Network 1

Network 2

Network 3

Network 4

Network 5

Future work
- Because the method relies on the positions of the feature sub-blocks, it is weak against geometric attacks such as rotation, enlargement, and shrinking.
- Key sharing has to rely on other security technologies.

Conclusion
- An information hiding technique that hides data without embedding anything into the target content, by using a neural network.
- It can be combined with other information hiding algorithms.

Thank you

Appendix

Tradeoffs for information hiding
Capacity (amount of data to be embedded):
- Watermarking (digital rights management): not important; a small amount is enough.
- Steganography (covert channel): important; the more the better.
- Fingerprinting (integrity check): not important.
Robustness (tolerance against attacks on the container):
- Watermarking: important; must not be destroyed.
- Steganography: not important; the content and the hidden data are not related.
- Fingerprinting: important; should be fragile against alteration.
Invisibility (transparency of the hidden data):
- Watermarking: important; should not disturb the content.
- Steganography: important; its existence should be kept secret.
- Fingerprinting: not important; its existence can be disclosed.

Three-layered perceptron model
- Three-layer, feed-forward model (input layer, hidden layer, output layer)
- Sigmoid input (activation) function
- Backpropagation learning

Sigmoid function
The input (activation) function for the multi-layered perceptron model: f(x) = 1 / (1 + e^(-x)). Its curve is shaped like the letter S.

Selection of feature values
Each feature sub-block is an 8x8 block; it contains the DC value and various AC values (low, middle, and high frequency).
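As an illustration only (the exact frequency-band boundaries are not given in this transcript), the 2-D DCT coefficients of an 8x8 block can be split into the DC term and low-, middle-, and high-frequency AC regions:

```python
import numpy as np

def dct_bands(block_dct):
    # block_dct: 8x8 matrix of DCT coefficients (e.g. C8 @ block @ C8.T above).
    # Use the sum of the row and column indices as a simple frequency measure.
    idx = np.add.outer(np.arange(8), np.arange(8))
    dc = block_dct[0, 0]
    low = block_dct[(idx > 0) & (idx <= 4)]    # low-frequency AC coefficients
    mid = block_dct[(idx > 4) & (idx <= 9)]    # middle-frequency AC coefficients
    high = block_dct[idx > 9]                  # high-frequency AC coefficients
    return dc, low, mid, high
```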

Number of hidden neurons = 10, threshold = 0.05

Number of hidden neurons = 10, threshold = 0.1

Number of hidden neurons = 20, threshold = 0.1