Privacy-preserving and Secure AI

Presentation transcript:

Privacy-preserving and Secure AI
FCAI Minisymposium

Machine Learning in the Presence of Adversaries
Focus: failures that arise when machine learning is applied without thinking about adversaries.
N. Asokan, http://asokan.org/asokan/, @nasokan
(joint work with Mika Juuti, Jian Liu, Andrew Paverd and Samuel Marchal)

Machine Learning is ubiquitous
The ML market is expected to grow by 44% annually over the next five years [1]. In 2016, companies invested up to $9 billion in AI-based startups [2]. Machine learning and deep learning are attracting more and more attention: industry use of ML is growing, a rising proportion of startups build on it, searches for "deep learning" keep climbing, and CTOs present visions of an ML-powered future. This is great, and promises to empower society, but it also attracts more and more interest from adversaries.
[1] http://www.marketsandmarkets.com/PressReleases/machine-learning.asp
[2] McKinsey Global Institute, "Artificial Intelligence: The Next Digital Frontier?"

How do we evaluate ML-based systems?
Effectiveness of inference: measures of accuracy.
Performance: inference speed and memory consumption.
...
The challenge: meet these criteria even in the presence of adversaries.
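To make the criteria concrete, here is a minimal sketch, in Python with scikit-learn, of measuring the two standard criteria; the model and data are hypothetical stand-ins, not from the talk. The talk's point is that these same numbers should also hold up when inputs are chosen by an adversary.

```python
# Minimal sketch: evaluate an ML system on the usual criteria
# (accuracy and inference speed). Model and data are stand-ins.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Effectiveness: accuracy on held-out data.
print("accuracy:", model.score(X_te, y_te))

# Performance: mean inference latency per example.
start = time.perf_counter()
model.predict(X_te)
print("latency/example:", (time.perf_counter() - start) / len(X_te))
```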

Security and Privacy of Machine Learning

Adversarial examples
Which class is this? An image correctly classified as a school bus, plus 0.1 times a visually imperceptible perturbation, yields an image the model confidently classifies as an ostrich.
Szegedy et al., "Intriguing Properties of Neural Networks", 2014. https://arxiv.org/abs/1312.6199v4
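A minimal sketch of how such a perturbation can be computed, using the fast gradient sign method (FGSM) of Goodfellow et al., a follow-up to the Szegedy et al. attack. The logistic-regression model and data below are toy stand-ins, not the image classifier on the slide:

```python
# FGSM-style adversarial example on a toy logistic-regression model:
# perturb the input in the direction that increases the loss.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

x, label = X[0], y[0]
# Gradient of the logistic loss w.r.t. the input x: (p - y) * w.
p = model.predict_proba(x.reshape(1, -1))[0, 1]
grad = (p - label) * model.coef_[0]

eps = 0.1  # perturbation budget, echoing the "+ 0.1 * noise" on the slide
x_adv = x + eps * np.sign(grad)

# The prediction may or may not flip with this small budget on a toy model.
print("original prediction:   ", model.predict(x.reshape(1, -1))[0])
print("adversarial prediction:", model.predict(x_adv.reshape(1, -1))[0])
```

Against deep image classifiers, such small perturbations reliably change the predicted class while remaining imperceptible to humans.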

Machine Learning pipeline
Data owners contribute a dataset; an analyst runs a trainer (built on ML libraries) to produce an ML model; a prediction service provider exposes the model through an API; clients query the API for predictions.
Where is the adversary? What is its target?

Compromised input: evade the model
A malicious client perturbs its input (x → x + e) so that the prediction service misclassifies it: for example, a physically altered traffic sign is still read by humans as "Speed limit 80 km/h" but is misunderstood by the model.
Dang et al., "Evading Classifiers by Morphing in the Dark", ACM CCS '17. https://arxiv.org/abs/1705.07535
Evtimov et al., "Robust Physical-World Attacks on Deep Learning Models". https://arxiv.org/abs/1707.08945
Zhang et al., "DolphinAttack: Inaudible Voice Commands", ACM CCS '17. https://arxiv.org/abs/1708.09537

Malicious client: invert the model, infer membership
A malicious client can use the prediction API to reconstruct training data (model inversion) or to determine whether a particular record was in the training set (membership inference), effectively stealing the data owners' data.
Shokri et al., "Membership Inference Attacks Against Machine Learning Models", IEEE S&P '17. https://arxiv.org/pdf/1610.05820.pdf
Fredrikson et al., "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures", ACM CCS '15. https://www.cs.cmu.edu/~mfredrik/papers/fjr2015ccs.pdf
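A minimal sketch of the intuition behind membership inference: overfitted models are more confident on training members than on non-members, so even a simple confidence threshold leaks membership. This is a simplified stand-in for Shokri et al.'s shadow-model attack; the victim model and data are hypothetical:

```python
# Confidence-thresholding membership inference against a deliberately
# overfitted "victim" model. All names and data are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# The victim is trained (and overfits) on the "member" half only.
victim = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

def guess_member(x, threshold=0.95):
    """Guess 'member' when the victim's top-class confidence is high."""
    return victim.predict_proba(x.reshape(1, -1)).max() >= threshold

members = np.mean([guess_member(x) for x in X_in])
nonmembers = np.mean([guess_member(x) for x in X_out])
print(f"flagged as members: {members:.2f} (train) vs {nonmembers:.2f} (test)")
```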

Malicious client: extract/steal the model
By repeatedly querying the prediction API and observing the responses, a malicious client can train a substitute model that closely mimics the provider's model.
Tramèr et al., "Stealing Machine Learning Models via Prediction APIs", USENIX Security '16. https://arxiv.org/abs/1609.02943
Papernot et al., "Practical Black-Box Attacks against Machine Learning", ASIACCS '17. http://doi.acm.org/10.1145/3052973.3053009
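A minimal sketch of extraction in the style of Tramèr et al.: query the victim's prediction API on chosen inputs and fit a substitute model to the returned labels. The "API" below is a local stand-in for a remote service:

```python
# Model extraction sketch: label random queries via the victim's API,
# then fit a substitute model that agrees with the victim.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)

def prediction_api(queries):
    """Stand-in for the remote prediction API the adversary can query."""
    return victim.predict(queries)

# Adversary: sample queries, label them via the API, fit a substitute.
rng = np.random.default_rng(0)
queries = rng.normal(size=(5000, 10))
stolen = LogisticRegression(max_iter=1000).fit(queries, prediction_api(queries))

# Agreement between the stolen model and the victim on fresh inputs.
test = rng.normal(size=(1000, 10))
print("agreement:", np.mean(stolen.predict(test) == victim.predict(test)))
```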

Malicious prediction service: profile users
The prediction service sees every query: when client X asks "Is this app malicious?", the service can record "X uses app" in its database and profile users from their queries. Oblivious prediction protocols such as MiniONN let clients obtain predictions without revealing their inputs to the server.
Malmi and Weber, "You Are What Apps You Use: Demographic Prediction Based on User's Apps", ICWSM '16. https://arxiv.org/abs/1603.00059
Liu et al., "Oblivious Neural Network Predictions via MiniONN Transformations", ACM CCS '17. https://ssg.aalto.fi/research/projects/mlsec/ppml/
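A core building block of such oblivious protocols is additive secret sharing. Below is a toy sketch of only the sharing step (a real protocol such as MiniONN also computes on the shares with cryptographic techniques, which is not shown); the query values are made up:

```python
# Additive secret sharing of a client query modulo 2**32: each share
# alone is uniformly random and reveals nothing about the query.
import numpy as np

rng = np.random.default_rng(0)
MOD = 2**32
query = np.array([3, 15, 42], dtype=np.uint64)  # private input, integer-encoded

# Client keeps one share and sends the other to the server.
client_share = rng.integers(0, MOD, size=query.shape, dtype=np.uint64)
server_share = (query - client_share) % MOD  # uniformly random on its own

# Reconstruction is possible only with both shares.
assert np.array_equal((client_share + server_share) % MOD, query)
```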

Compromised toolchain: adversary inside the training pipeline
A malicious training library or trainer can make the model memorize its training data, so that the adversary can later extract that data with crafted queries, violating the data owners' privacy.
Song et al., "Machine Learning Models that Remember Too Much", ACM CCS '17. https://arxiv.org/abs/1709.07886
Hitaj et al., "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning", ACM CCS '17. http://arxiv.org/abs/1702.07464
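As a toy illustration of one technique from Song et al., a malicious toolchain can hide secret bits in the least significant mantissa bits of the model's float32 parameters, with negligible effect on accuracy. The parameter values and secret below are made up:

```python
# LSB-encoding sketch: stash secret bits in the low-order bits of
# float32 model parameters; the change to each value is ~1e-8.
import numpy as np

params = np.array([0.1234, -0.5678, 0.9012], dtype=np.float32)
secret_bits = np.array([1, 0, 1], dtype=np.uint32)  # one bit per parameter

# Embed: overwrite the least significant mantissa bit of each parameter.
raw = params.view(np.uint32)
raw = (raw & ~np.uint32(1)) | secret_bits
leaky_params = raw.view(np.float32)

# Extract: anyone who knows the scheme can read the bits back out.
recovered = leaky_params.view(np.uint32) & np.uint32(1)
print("recovered bits:  ", recovered)              # [1 0 1]
print("parameter change:", leaky_params - params)  # imperceptible
```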

Malicious data owner: influence the ML model (model poisoning)
An adversary who controls part of the training data can steer the model's behavior, as happened with Microsoft's Tay chatbot ("Hi Tay!") and with disturbing content targeting children on YouTube.
https://www.theguardian.com/technology/2016/mar/26/microsoft-deeply-sorry-for-offensive-tweets-by-ai-chatbot
https://www.theguardian.com/technology/2017/nov/07/youtube-accused-violence-against-young-children-kids-content-google-pre-school-abuse
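A minimal sketch of the simplest form of poisoning, label flipping: a malicious data owner contributes mislabeled training data and degrades the model. The data and the 30% poisoning rate are made up for illustration:

```python
# Label-flipping poisoning sketch: compare a model trained on clean
# labels against one trained after an adversary flips 30% of them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The malicious data owner flips the labels of 30% of the training set.
rng = np.random.default_rng(0)
poisoned_y = y_tr.copy()
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, poisoned_y)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```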

FCAI goals in privacy-preserving and secure AI
Develop realistic models of adversary capabilities.
Understand the privacy and security threats posed by these adversaries.
Develop effective countermeasures.
Privacy and security concerns in AI applications are multilateral: no one size fits all.

Agenda for today
Security and machine learning:
- Application: Luiza Sayfullina, "Android Malware Classification: How to Deal With Extremely Sparse Features"
- Attack/defense: Mika Juuti, "Stealing DNN Models: Attacks and Defenses"
- Application: Nikolaj Tatti (F-Secure), "Reducing False Positives in Intrusion Detection"
Privacy of machine learning:
- Antti Honkela, "Differential Privacy and Machine Learning"
- Mikko Heikkilä, "Differentially Private Bayesian Learning on Distributed Data"
- Adrian Flanagan (Huawei), "Privacy Preservation with Federated Learning in Personalized Recommendation Systems"