Download presentation
Presentation is loading. Please wait.
1
The AI Security Paradox
Dr. Computer Engineering and Computer Science University of Louisville - cecs.louisville.edu/ry Director – CyberSecurity Lab @romanyam /roman.yampolskiy
2
@romanyam roman.yampolskiy@louisville.edu
What is AI Safety? + = Cybersecurity AI Safety & Security AI From internal (AI) and external (humans) agents. Science and engineering aimed at creating safe and secure machines.
3
Future of Cybersecurity
@romanyam Future of Cybersecurity
4
AI for Cybersecurity (Example-IBM Watson)
@romanyam AI for Cybersecurity (Example-IBM Watson)
5
AI IS in Charge @romanyam roman.yampolskiy@louisville.edu
Energy: Nuclear Power Plants Utilities: Water Plants/Electrical Grid Military: Nuclear Weapons Communications: Satellites Stock Market: 75+% of all trade orders generated by Automated Trading Systems Aviation: Uninterruptible Autopilot System
6
What is Next? SuperIntelligence is Coming …
@romanyam What is Next? SuperIntelligence is Coming … 6
7
@romanyam roman.yampolskiy@louisville.edu
SuperSmart
8
Ultrafast Extreme Events (UEEs)
@romanyam SuperFast Ultrafast Extreme Events (UEEs) Abrupt Rise of New Machine Ecology Beyond Human Response Time. By Johnson et al. Nature. Scientific Reports 3, #2627 (2013)
9
@romanyam roman.yampolskiy@louisville.edu
SuperComplex "That was a little-known part of the software that no airline operators or pilots knew about."
10
SuperViruses @romanyam roman.yampolskiy@louisville.edu
Relying on Kindness of Machines? The Security Threat of Artificial Agents. By Randy Eshelman and Douglas Derrick. JFQ 77, 2nd Quarter 2015.
11
@romanyam roman.yampolskiy@louisville.edu
SuperSoldiers
12
@romanyam roman.yampolskiy@louisville.edu
SuperConcerns “The development of full artificial intelligence could spell the end of the human race.” “I think we should be very careful about artificial intelligence” “… there’s some prudence in thinking about benchmarks that would indicate some general intelligence developing on the horizon.” "I am in the camp that is concerned about super intelligence" “…eventually they'll think faster than us and they'll get rid of the slow humans…”
13
Taxonomy of Pathways to Dangerous AI
@romanyam Taxonomy of Pathways to Dangerous AI Roman V. Yampolskiy. Taxonomy of Pathways to Dangerous Artificial Intelligence. 30th AAAI Conference on Artificial Intelligence (AAAI-2016). 2nd International Workshop on AI, Ethics and Society (AIEthicsSociety2016). Phoenix, Arizona, USA. February 12-13th, 2016. Deliberate actions of not-so-ethical people (on purpose – a, b) [Security] Hackers, criminals, military, corporations, governments, cults, psychopaths, etc. Side effects of poor design (engineering mistakes – c, d) [Safety] Bugs, misaligned values, bad data, wrong goals, etc. Miscellaneous cases, impact of the surroundings of the system (environment – e, f) [Safety]/[Security] Soft errors, SETI Runaway self-improvement process (Independently – g, h) [Safety] Wireheading, Emergent Phenomena, “Treacherous Turn” Purposeful design of dangerous AI is just as likely to include all other types of safety problems and will have the direst consequences, that is the most dangerous type of AI, and the one most difficult to defend against.
14
Who Could be an Attacker?
@romanyam Who Could be an Attacker? Militaries developing cyber-weapons and robot soldiers to achieve dominance. Governments attempting to use AI to establish hegemony, control people, or take down other governments. Corporations trying to achieve monopoly, destroying the competition through illegal means. Hackers attempting to steal information, resources or destroy cyberinfrastructure targets. Doomsday cults attempting to bring the end of the world by any means. Psychopaths trying to add their name to history books in any way possible. Criminals attempting to develop proxy systems to avoid risk and responsibility. With AI as a Service anyone is a potential bad actor.
15
@romanyam roman.yampolskiy@louisville.edu
What Might They Do? Terrorist acts Infrastructure sabotage Hacking systems/robots Social Engineering Attacks Privacy violating datamining Resource depletion (crash stock market)
16
AI Confinement Problem
@romanyam AI Confinement Problem 16
17
@romanyam roman.yampolskiy@louisville.edu
AI Regulation
18
@romanyam roman.yampolskiy@louisville.edu
Security VS Privacy
19
@romanyam roman.yampolskiy@louisville.edu
Conclusions AI failures and attacks will grow in frequency and severity proportionate to AI’s capability. Governments need to work to ensure protection of citizens.
20
The End! /Roman.Yampolskiy Roman.Yampolskiy@louisville.edu
Director, CyberSecurity Lab Computer Engineering and Computer Science University of Louisville - cecs.louisville.edu/ry @romanyam /Roman.Yampolskiy All images used in this presentation are copyrighted to their respective owners and are used for educational purposes only.
Similar presentations
© 2025 SlidePlayer.com. Inc.
All rights reserved.