Adversarial Learning for Security System
Mentee: Ximin Lin, Mentor: Wei Yang
Department of Electrical and Computer Engineering, College of Engineering, University of Illinois at Urbana-Champaign

MOTIVATION
Deep learning and machine learning are widely used in security systems because of their high classification accuracy. However, robustness varies across network designs, and once a system is compromised by adversarial inputs, the consequences can be catastrophic. For example, with self-driving cars becoming a hot topic, deep learning networks are used for traffic sign recognition; if a car misclassifies a stop sign as a right-turn sign, a serious traffic accident can happen. (image from https://www.nrc.nl/nieuws/2017/03/12/sxsw2-zo-kan-je-trump-alles-laten-zeggen-a1549927)
To compromise classification, changing an image by only a few pixels can make the classifier misclassify one class as another. To defend against this kind of attack, we first use adversarial crafting to generate images that fool the deep learning model, and then retrain the model on the crafted adversarial images. By repeating this process, the model becomes more and more robust.

INTRODUCTION
In this project, we perform two preliminary studies:
- Crafting adversarial image samples
- Retraining the model with the crafted adversarial samples
For image classification, we adopt the convolutional neural network model from TensorFlow. (image from http://luizgh.github.io/libraries/2015/12/08/getting-started-with-lasagne/)

Adversarial Crafting
(Figure: visualization of perturbed pictures at changes of 50, 30, and 20.)

Future work
Adversarial crafting for malware detection: the paper "Adversarial Perturbations Against Deep Neural Networks for Malware Classification" describes an algorithm for adversarial crafting against malware detectors. I will try to implement the algorithms described in the paper in the future.
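The pixel-level crafting loop described above ("change a few pixels until the classifier flips") can be sketched in miniature. This is an assumption-laden illustration, not the project's implementation: a toy linear classifier in NumPy stands in for the TensorFlow CNN, and the function names (`predict`, `craft_adversarial`) and the simple saliency heuristic are hypothetical.

```python
import numpy as np

def predict(weights, x):
    """Toy linear classifier standing in for the CNN: returns class scores."""
    return weights @ x

def craft_adversarial(weights, x, target, max_changes=20, step=0.5):
    """Greedy pixel-level crafting: repeatedly perturb the single pixel
    that most increases the target-class score relative to the others,
    until the model predicts the target class or the budget runs out."""
    x = x.copy()
    for _ in range(max_changes):
        if np.argmax(predict(weights, x)) == target:
            break  # model now misclassifies the input as the target class
        # Saliency: how strongly each pixel pushes the target score up
        # while pushing the other class scores down.
        grad_target = weights[target]
        grad_others = weights.sum(axis=0) - weights[target]
        saliency = grad_target - grad_others
        i = np.argmax(np.abs(saliency))        # most influential pixel
        x[i] += step * np.sign(saliency[i])    # nudge it toward the target
    return x
```

With a two-pixel, two-class toy model, a handful of single-pixel changes is enough to flip the predicted class, mirroring the few-pixel attack described above.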
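The craft-then-retrain loop can likewise be sketched under simplifying assumptions: here a linear softmax model stands in for the CNN, a gradient-sign (FGSM-style) perturbation stands in for the crafting step, and `retrain_with_adversarial` is a hypothetical name rather than project code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def retrain_with_adversarial(weights, images, labels, lr=0.1, rounds=5, eps=0.3):
    """Adversarial retraining loop: each round crafts adversarial samples
    from the current model, then trains on clean + adversarial data with
    the true labels, hardening the model against the crafting procedure."""
    n_classes = weights.shape[0]
    for _ in range(rounds):
        adv_images = []
        for x, y in zip(images, labels):
            p = softmax(weights @ x)
            onehot = np.eye(n_classes)[y]
            grad_x = weights.T @ (p - onehot)             # dLoss/dInput
            adv_images.append(x + eps * np.sign(grad_x))  # gradient-sign perturbation
        X = np.vstack([images, adv_images])
        Y = np.concatenate([labels, labels])   # adversarial samples keep true labels
        # One pass of softmax-regression gradient descent (stand-in for CNN training).
        for x, y in zip(X, Y):
            p = softmax(weights @ x)
            grad_w = np.outer(p - np.eye(n_classes)[y], x)
            weights = weights - lr * grad_w
    return weights
```

Because each round's adversarial samples are crafted against the current model, repeating the loop forces the model to remain correct on inputs that previously fooled it, which is the intuition behind the robustness increase reported below.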
We will continue to apply image classification to traffic sign detection for self-driving cars.
Algorithm: find the pixel whose smallest perturbation causes the largest distortion in the output. Keep finding and changing such pixels until the model misclassifies the image as the targeted class.

CONCLUSIONS
For image classification, we observe an increase in the number of pixels that must be changed for almost all four test pictures, which shows that retraining is useful for increasing the model's robustness. For malware detection, we initially used all zeros for the weights and biases, but the neural network only reached 50% accuracy no matter how many more training steps we ran. After switching to weights drawn randomly from a normal distribution, the model sometimes performed well (up to 96% accuracy) but sometimes stayed at 50%.

Retraining
The basic algorithm is to retrain the model with the newly crafted samples from our adversarial generative network. By passing these samples through the same model, we expect the robustness of the model to increase.

ACKNOWLEDGEMENTS
Papernot, Nicolas, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. "The limitations of deep learning in adversarial settings." In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372-387. IEEE, 2016.
Papernot, Nicolas, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. "Distillation as a defense to adversarial perturbations against deep neural networks." In 2016 IEEE Symposium on Security and Privacy (SP), pp. 582-597. IEEE, 2016.
McDaniel, Patrick, Nicolas Papernot, and Z. Berkay Celik. "Machine learning in adversarial settings." IEEE Security & Privacy 14, no. 3 (2016): 68-72.
Sharif, Mahmood, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. "Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition." In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528-1540. ACM, 2016.
Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. "Adversarial examples in the physical world." arXiv preprint arXiv:1607.02533 (2016).
Google TensorFlow tutorial for experts.