Presentation on theme: "Privacy-preserving and Secure AI" — Presentation transcript:

1 Privacy-preserving and Secure AI
FCAI Minisymposium

2 Machine Learning in the presence of adversaries
Focus: failures that arise when machine learning is applied without thinking about adversaries. N. Asokan @nasokan (joint work with Mika Juuti, Jian Liu, Andrew Paverd and Samuel Marchal)

3 Machine Learning is ubiquitous
The ML market is expected to grow by 44% annually over the next five years [1]. In 2016, companies invested up to $9 billion in AI-based startups [2]. Machine learning and deep learning are getting more and more attention: industry use of ML is growing, a rising proportion of startups use ML, and Google searches for deep learning keep climbing. [Speaker notes: mention CTO talks about usage of ML and visions of the future. Conclude with: this is great, and promises to empower society with good things, but it also attracts more and more interest from adversaries.]
[1] [2] McKinsey Global Institute, "Artificial Intelligence: The Next Digital Frontier?"

4 How do we evaluate ML-based systems?
Effectiveness of inference: measures of accuracy. Performance: inference speed and memory consumption. ... The goal: meet these criteria even in the presence of adversaries.

5 Security and Privacy of Machine Learning

6 Adversarial examples
[Image: a school bus, plus 0.1 times a crafted noise pattern, yields an image that still looks like a school bus. Which class is this? The model answers "ostrich".] Szegedy et al., "Intriguing Properties of Neural Networks".
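Szegedy et al. found such perturbations with an optimization-based search; a later, simpler method that illustrates the same effect is the fast gradient sign method (FGSM) of Goodfellow et al. A minimal PyTorch sketch, where model, x and label are hypothetical placeholders (a classifier returning logits, a batched image in [0, 1], and its true class):

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, label, eps=0.1):
        # Take one step in the direction that increases the classifier's
        # loss; eps = 0.1 mirrors the scaling shown on the slide.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()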

7 Machine Learning pipeline
[Pipeline diagram: Data owners contribute a Dataset; an Analyst uses a Trainer built on ML Libs to produce an ML model; a Prediction Service Provider exposes the model to Clients through an API.] [Speaker notes: walk through the components; interact with the crowd: where would the adversary be, and what is its target?]
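To make the roles concrete, here is a minimal sketch of the pipeline, assuming scikit-learn as the ML library; all names are hypothetical:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    class PredictionService:
        # Trainer + Libs + ML model behind an API, as in the diagram.
        def __init__(self, dataset):
            X, y = dataset                               # from data owners
            self.model = LogisticRegression().fit(X, y)  # Trainer + Libs

        def predict_api(self, x):
            # The API boundary: the attacks on the next slides enter either
            # here (client side) or through the dataset (data-owner side).
            return self.model.predict_proba(np.atleast_2d(x))[0]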

8 Evade model (compromised input)
[Pipeline diagram: a client submits a compromised input, x → x + e, to the prediction API; traffic signs are misunderstood, e.g. a sign is read as "Speed limit 80 km/h".] Attack: evade the model. Dang et al., "Evading Classifiers by Morphing in the Dark", ACM CCS '17. Evtimov et al., "Robust Physical-World Attacks on Deep Learning Models". Zhang et al., "DolphinAttack: Inaudible Voice Commands", ACM CCS '17.
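In the black-box setting of Dang et al., the adversary has only query access to the API. The toy random-search loop below conveys that setting (it is not their actual morphing algorithm; predict_api is the hypothetical service sketched earlier):

    import numpy as np

    def black_box_evade(predict_api, x, wanted, eps=0.05, tries=1000, seed=0):
        # Add small random perturbations and query the model, keeping the
        # first candidate that the API assigns to the adversary's target class.
        rng = np.random.default_rng(seed)
        for _ in range(tries):
            cand = np.clip(x + eps * rng.standard_normal(x.shape), 0.0, 1.0)
            if np.argmax(predict_api(cand)) == wanted:
                return cand   # evading input found
        return None           # query budget exhausted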

9 Invert model, infer membership
[Pipeline diagram: a malicious client sends inference queries through the API and reconstructs stolen data.] Attack: invert the model, infer membership of records in the training data. Shokri et al., "Membership Inference Attacks Against Machine Learning Models", IEEE S&P '17. Fredrikson et al., "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures", ACM CCS '15.
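Membership inference exploits the observation that models tend to be more confident on their training members. Shokri et al. train shadow models to learn this signal; the even simpler confidence-threshold variant below (hypothetical names, threshold chosen for illustration) conveys the idea:

    def infer_membership(predict_api, x, true_label, threshold=0.9):
        # Guess "training member" when the model is suspiciously confident
        # in the true label; in practice the threshold is tuned, e.g. on
        # shadow models trained on similar data.
        return predict_api(x)[true_label] > threshold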

10 Extract/steal model
[Pipeline diagram: a malicious client queries the prediction API and assembles a stolen copy of the ML model.] Attack: extract/steal the model. Tramer et al., "Stealing ML Models via Prediction APIs", USENIX SEC '16. Papernot et al., "Practical Black-Box Attacks against Machine Learning", ASIACCS '17.
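The core of model extraction is learning a substitute from query-label pairs. A toy sketch in that spirit follows (the actual attacks of Tramer et al. and Papernot et al. use far more query-efficient strategies; predict_api and dim are hypothetical):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def extract_substitute(predict_api, dim, n_queries=5000, seed=0):
        # Label random inputs with the victim API, then fit a local model.
        rng = np.random.default_rng(seed)
        X = rng.uniform(0.0, 1.0, size=(n_queries, dim))      # synthetic queries
        y = np.array([np.argmax(predict_api(x)) for x in X])  # victim's labels
        return DecisionTreeClassifier().fit(X, y)             # stolen substitute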

11 Malicious prediction service
[Pipeline diagram: here the prediction service itself is the adversary. Client X asks "Is this app malicious?"; the provider adds "X uses app" to its own Database and profiles its users.] Attack: profile users. Malmi and Weber, "You Are What Apps You Use: Demographic Prediction Based on User's Apps", ICWSM '16. A defense: Liu et al., "Oblivious Neural Network Predictions via MiniONN Transformations", ACM CCS '17.
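MiniONN makes predictions oblivious: the server never sees the client's input in the clear. The full protocol combines additive secret sharing with further cryptographic machinery; the sketch below shows only the sharing step, with a toy modulus:

    import numpy as np

    Q = 2**31 - 1  # toy modulus; real protocols choose parameters carefully

    def share(x_int, rng):
        # Split an integer-encoded input into two additive shares;
        # neither share alone reveals anything about x_int.
        # Usage: s0, s1 = share(x_int, np.random.default_rng(0))
        r = rng.integers(0, Q, size=x_int.shape)
        return r, (x_int - r) % Q

    def reconstruct(s0, s1):
        # Only both shares together recover the input.
        return (s0 + s1) % Q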

12 Compromised toolchain: adversary inside training pipeline
[Pipeline diagram: the adversary sits inside the training pipeline, controlling the Libs and Trainer; later, a crafted query to the API extracts stolen training data.] Attack: violate the privacy of the training data. Song et al., "Machine Learning Models that Remember Too Much", ACM CCS '17. Hitaj et al., "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning", ACM CCS '17.
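One technique in Song et al. is LSB encoding: a malicious training library hides bits of the training data in the low-order bits of the model's parameters, with negligible effect on accuracy. A toy NumPy sketch of the idea (secret_bits is a 0/1 array with one bit per weight):

    import numpy as np

    def lsb_encode(weights, secret_bits):
        # Overwrite the least significant mantissa bit of each float32
        # weight with one secret bit; the weights barely change.
        w = np.asarray(weights, dtype=np.float32).copy().view(np.uint32)
        w = (w & ~np.uint32(1)) | secret_bits.astype(np.uint32)
        return w.view(np.float32)

    def lsb_decode(weights):
        # Anyone who later obtains the model can read the bits back out.
        return np.asarray(weights, dtype=np.float32).view(np.uint32) & np.uint32(1)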

13 Influence ML model (model poisoning)
[Pipeline diagram: a malicious data owner injects poisoned data into the Dataset and thereby influences the trained ML model. The "Hi Tay!" example alludes to Microsoft's Tay chatbot, which was steered into offensive behavior by coordinated malicious input.] Attack: influence the ML model (model poisoning).
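A minimal illustration of poisoning is label flipping: before the dataset reaches the trainer, a malicious owner relabels part of one class. A toy sketch with hypothetical names:

    import numpy as np

    def flip_labels(y, target_class, flip_to, fraction=0.1, seed=0):
        # Relabel a fraction of target_class examples as flip_to, biasing
        # whatever model is subsequently trained on this dataset.
        rng = np.random.default_rng(seed)
        idx = np.flatnonzero(y == target_class)
        chosen = rng.choice(idx, size=int(fraction * len(idx)), replace=False)
        y = y.copy()
        y[chosen] = flip_to
        return y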

14 FCAI goals in privacy-preserving and secure AI
Develop realistic models of adversary capabilities
Understand the privacy and security threats posed by these adversaries
Develop effective countermeasures
Privacy and security concerns in AI applications are multilateral: no one size fits all

15 Agenda for today
Security and machine learning
Application: Luiza Sayfullina, Android Malware Classification: How to Deal with Extremely Sparse Features
Attack/defense: Mika Juuti, Stealing DNN Models: Attacks and Defenses
Application: Nikolaj Tatti (F-Secure), Reducing False Positives in Intrusion Detection
Privacy of machine learning
Antti Honkela, Differential Privacy and Machine Learning
Mikko Heikkilä, Differentially Private Bayesian Learning on Distributed Data
Adrian Flanagan (Huawei), Privacy Preservation with Federated Learning in Personalized Recommendation Systems

