Adversarial Learning for Security Systems
Mentee: Ximin Lin; Mentor: Wei Yang
Department of Electrical and Computer Engineering, College of Engineering, University of Illinois at Urbana-Champaign

MOTIVATION
Deep learning and machine learning are now widely used in security because of their high classification accuracy. However, the robustness of a deep neural network depends on its design, and once a system is compromised by adversarial inputs the consequences can be catastrophic. For example, as self-driving cars become a hot topic, deep neural networks are used for traffic sign recognition; if the car misclassifies a stop sign as a right-turn sign, a serious traffic accident can happen. To compromise the classifier, changing an image by only a few pixels can make it misclassify one class as another. To defend against this kind of attack, we first use adversarial crafting to create images that fool the deep learning model, and then retrain the model on the crafted adversarial images. By repeating this process, the model becomes more and more robust.

INTRODUCTION
In this project we perform two preliminary studies: (1) crafting adversarial image samples, and (2) retraining the model on the crafted adversarial samples. For image classification we adopt the convolutional neural network model from the TensorFlow tutorial.

Adversarial Crafting
Algorithm: find the pixel whose smallest perturbation causes the largest distortion in the model's output, change it, and keep finding and changing such pixels until the model misclassifies the image as the targeted class (see the code sketch after the Conclusions).
[Figure: visualization of the perturbed pictures, with changes of 50, 30, and 20.]

Retraining
The basic algorithm is to retrain the model with the newly crafted samples from our adversarial generation step. By feeding these samples back through the same model during training, we expect the model's robustness to increase (see the retraining sketch after the Conclusions).

Future work
Adversarial crafting for malware detection: the paper "Adversarial Perturbations Against Deep Neural Networks for Malware Classification" describes an algorithm for adversarial crafting against malware detectors, and I will try to implement that algorithm in the future. We will also continue to apply the image classification work to traffic sign detection for self-driving cars.

CONCLUSIONS
For image classification, we observe that after retraining, the number of pixels that must be changed to fool the model increases for almost all four test pictures, which indicates that retraining is effective at increasing the model's robustness. For malware detection, we initially set all weights and biases to zero, but the neural network only reached 50% accuracy no matter how many more training steps were run. After switching to a random-number generator that draws the initial weights from a normal distribution, the model sometimes performs well, up to 96% accuracy, but sometimes still stays at 50% (see the initialization sketch below).
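The crafting search described in the Adversarial Crafting section can be sketched as a greedy loop. This is a minimal sketch, not the poster's actual code: predict_fn (returning class probabilities for a flattened image) and grad_fn (returning the gradient of the target-class score with respect to each pixel) are assumed helper callables, and the step size and pixel budget are illustrative.

    import numpy as np

    def craft_adversarial(x, target, predict_fn, grad_fn, step=0.2, max_changes=100):
        """Greedily perturb the pixels that most move the model toward `target`."""
        x_adv = x.copy()
        changed = []
        for _ in range(max_changes):
            if int(np.argmax(predict_fn(x_adv))) == target:
                return x_adv                       # model now predicts the target class
            grad = grad_fn(x_adv, target).copy()   # per-pixel sensitivity of the target score
            grad[changed] = 0.0                    # never revisit an already-changed pixel
            i = int(np.argmax(np.abs(grad)))       # pixel with the largest effect
            x_adv[i] = np.clip(x_adv[i] + step * np.sign(grad[i]), 0.0, 1.0)
            changed.append(i)
        return x_adv                               # budget exhausted; attack may have failed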
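The Retraining section can likewise be sketched as a loop that alternates crafting and fitting. This sketch assumes a Keras-style model with a fit() method and a hypothetical craft_fn such as the routine above; it is not the poster's implementation.

    import numpy as np

    def adversarial_retrain(model, x_train, y_train, craft_fn, rounds=3):
        """Alternately craft adversarial samples and retrain the model on them."""
        for _ in range(rounds):
            # Craft one adversarial example per clean sample, keeping the true label.
            x_adv = np.array([craft_fn(x) for x in x_train])
            # Mix clean and adversarial data so accuracy on clean inputs is preserved.
            x_mix = np.concatenate([x_train, x_adv])
            y_mix = np.concatenate([y_train, y_train])
            model.fit(x_mix, y_mix, epochs=1)
        return model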
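The weight-initialization issue noted in the Conclusions can be illustrated with TensorFlow variables. The layer sizes below are illustrative assumptions, not values from the poster; the comparison only shows the two schemes that were tried.

    import tensorflow as tf

    N_IN, N_HIDDEN = 784, 128

    # All-zero initialization: every hidden unit starts identical and receives
    # identical gradients, so training can stall near chance (about 50%) accuracy.
    W_zero = tf.Variable(tf.zeros([N_IN, N_HIDDEN]))
    b_zero = tf.Variable(tf.zeros([N_HIDDEN]))

    # Random draws from a truncated normal distribution break that symmetry and
    # usually let training progress, though results can still vary from run to run.
    W_rand = tf.Variable(tf.random.truncated_normal([N_IN, N_HIDDEN], stddev=0.1))
    b_rand = tf.Variable(tf.constant(0.1, shape=[N_HIDDEN]))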
"Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition." In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp ACM, 2016. Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. "Adversarial examples in the physical world." arXiv preprint arXiv: (2016). Google Tensorflow Tutorial for experts