The AI Security Paradox
Dr. Roman Yampolskiy - Roman.Yampolskiy@louisville.edu
Computer Engineering and Computer Science, University of Louisville - cecs.louisville.edu/ry
Director, CyberSecurity Lab
@romanyam /roman.yampolskiy
What is AI Safety? AI + Cybersecurity = AI Safety & Security: the science and engineering aimed at creating machines that are safe and secure from both internal (AI) and external (human) agents.
Future of Cybersecurity
AI for Cybersecurity (Example: IBM Watson)
AI IS in Charge
Energy: nuclear power plants
Utilities: water plants / electrical grid
Military: nuclear weapons
Communications: satellites
Stock market: 75+% of all trade orders generated by automated trading systems
Aviation: uninterruptible autopilot system
What is Next? SuperIntelligence is Coming…
SuperSmart
SuperFast: Ultrafast Extreme Events (UEEs)
Johnson et al., "Abrupt Rise of New Machine Ecology Beyond Human Response Time," Scientific Reports 3, 2627 (2013).
SuperComplex
"That was a little-known part of the software that no airline operators or pilots knew about."
SuperViruses
Randy Eshelman and Douglas Derrick, "Relying on Kindness of Machines? The Security Threat of Artificial Agents," Joint Force Quarterly 77, 2nd Quarter 2015.
SuperSoldiers
SuperConcerns
"The development of full artificial intelligence could spell the end of the human race." (Stephen Hawking)
"I think we should be very careful about artificial intelligence." (Elon Musk)
"… there's some prudence in thinking about benchmarks that would indicate some general intelligence developing on the horizon." (Barack Obama)
"I am in the camp that is concerned about super intelligence." (Bill Gates)
"… eventually they'll think faster than us and they'll get rid of the slow humans…" (Steve Wozniak)
Taxonomy of Pathways to Dangerous AI
Roman V. Yampolskiy, "Taxonomy of Pathways to Dangerous Artificial Intelligence," 30th AAAI Conference on Artificial Intelligence (AAAI-2016), 2nd International Workshop on AI, Ethics and Society (AIEthicsSociety2016), Phoenix, Arizona, USA, February 12-13, 2016.
Deliberate actions of not-so-ethical people (on purpose - a, b) [Security]: hackers, criminals, militaries, corporations, governments, cults, psychopaths, etc.
Side effects of poor design (engineering mistakes - c, d) [Safety]: bugs, misaligned values, bad data, wrong goals, etc.
Miscellaneous cases, impact of the system's surroundings (environment - e, f) [Safety]/[Security]: soft errors, SETI.
Runaway self-improvement process (independently - g, h) [Safety]: wireheading, emergent phenomena, "treacherous turn."
Purposefully designed dangerous AI is likely to include all other types of safety problems and will have the direst consequences; it is the most dangerous type of AI and the one most difficult to defend against.
Who Could be an Attacker?
Militaries developing cyber-weapons and robot soldiers to achieve dominance.
Governments attempting to use AI to establish hegemony, control people, or take down other governments.
Corporations trying to achieve monopoly, destroying the competition through illegal means.
Hackers attempting to steal information or resources, or to destroy cyber-infrastructure targets.
Doomsday cults attempting to bring about the end of the world by any means.
Psychopaths trying to add their name to history books in any way possible.
Criminals attempting to develop proxy systems to avoid risk and responsibility.
With AI as a Service, anyone is a potential bad actor.
What Might They Do?
Terrorist acts
Infrastructure sabotage
Hacking systems/robots
Social engineering attacks
Privacy-violating data mining
Resource depletion (e.g., crashing the stock market)
AI Confinement Problem
AI Regulation
Security vs. Privacy
Conclusions
AI failures and attacks will grow in frequency and severity in proportion to AI's capability.
Governments need to work to ensure the protection of their citizens.
The End!
Roman.Yampolskiy@louisville.edu
Director, CyberSecurity Lab
Computer Engineering and Computer Science, University of Louisville - cecs.louisville.edu/ry
@romanyam /Roman.Yampolskiy
All images used in this presentation are copyrighted to their respective owners and are used for educational purposes only.