Artificial Intelligence and Ethics

Artificial Intelligence and Ethics Mohd Nasir Ayob, Le Fu, Oscar Janson, Natasha Kamerlin, Arvind Parwal

Problem: Your research group has made a major breakthrough in advanced artificial intelligence (AI). The AI can develop technology with superhuman efficiency and could vastly outperform us in practically every field. The drawback is a non-trivial risk that the AI will take over and kill humanity. Should you continue the research? If you could estimate how big that risk is, at what probability would it still be acceptable to continue development? And would abundance in everything actually be good for humanity?
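The "acceptable percentage of risk" question can be framed as a toy expected-utility calculation. All the utility numbers below are illustrative assumptions, not data; the point is only that the break-even risk falls out of the ratio between how good the upside is and how bad extinction is.

```python
# Toy expected-utility model for the "acceptable risk" question above.
# The utility values are illustrative assumptions chosen for the example.

def expected_utility(p_extinction: float,
                     u_utopia: float = 100.0,
                     u_extinction: float = -1000.0) -> float:
    """Expected utility of continuing the research (stopping is taken as 0)."""
    return p_extinction * u_extinction + (1 - p_extinction) * u_utopia

def acceptable_risk(u_utopia: float = 100.0,
                    u_extinction: float = -1000.0) -> float:
    """Break-even extinction probability where continuing equals stopping."""
    return u_utopia / (u_utopia - u_extinction)

print(acceptable_risk())  # ≈ 0.0909: above ~9% risk, stopping wins under these numbers
```

With these (assumed) numbers, a 5% extinction risk still favours continuing, while a 20% risk favours stopping; the conclusion is entirely driven by how negatively one weighs extinction.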

Pros and Cons

Pros:
- Could solve, or help us solve, almost any problem
- Explosive progress in all scientific and technological fields, e.g. very powerful computers, advanced weaponry, space travel
- Disease, poverty, environmental destruction and unnecessary suffering of all kinds could be reduced or eliminated
- Could give us an indefinite lifespan, e.g. by stopping or reversing the ageing process, or by the option to upload ourselves
- Could create a "utopia"

Cons:
- Could become unstoppably powerful
- Extinction risk to our species as a whole

How could this happen?
- Superintelligence would quickly lead to even more advanced superintelligence
- Its numbers could increase rapidly by creating replicas of itself or uploading onto other hardware
- It would be capable of independent initiative and making its own plans
- It need not have humanlike motives
- It need not have inhibitions
- It would do anything to achieve its top goal

AI Research: Safe? The goal of keeping AI's impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. It becomes critically important that an AI system does what you want it to do when it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons. Some question whether strong AI will ever be achieved; others insist that the creation of superintelligent AI is guaranteed to be beneficial. Scientists recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to cause great harm, intentionally or unintentionally.

Can AI be dangerous?
- The AI is programmed to do something devastating: autonomous weapons are artificial intelligence systems programmed to kill. In the wrong hands, these weapons could easily cause mass casualties.
- The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: this can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult.
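The second failure mode — a beneficial goal pursued by a destructive method — can be illustrated with a minimal sketch. The action names and scores below are invented for the example: the optimizer is given a proxy objective ("arrive fast") that omits a constraint we care about ("arrive safely"), and it faithfully picks the action we did not want.

```python
# Toy illustration of goal misalignment: the optimizer maximizes a proxy
# objective that omits a constraint the designer cares about.
# Actions and scores are invented for illustration.

actions = {
    "drive at speed limit": {"speed": 1.0, "safety": 1.0},
    "run every red light":  {"speed": 2.0, "safety": 0.0},
}

def proxy(name: str) -> float:
    """What we told the system to optimize: speed only."""
    return actions[name]["speed"]

def true_goal(name: str) -> float:
    """What we actually wanted: speed, but only if safe."""
    return actions[name]["speed"] * actions[name]["safety"]

chosen = max(actions, key=proxy)
print(chosen)  # "run every red light" — optimal for the proxy, terrible for us
```

The bug is not in the optimizer; it is in the objective. That is why fully aligning the AI's goals with ours is described above as strikingly difficult.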

Ethics of AI in fiction
The movie The Thirteenth Floor depicts a future in which simulated worlds with sentient inhabitants are created purely for entertainment. The movie The Matrix depicts a future in which the dominant species on Earth are sentient machines and humanity is treated with utmost speciesism. The Three Laws of Robotics (Asimov's Laws) are a set of rules devised by the science fiction author Isaac Asimov, introduced in his 1942 short story "Runaround":
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
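The Three Laws are essentially a priority-ordered rule filter, which can be sketched in a few lines. This is a deliberately crude model: the `Action` fields and the boolean evaluation are assumptions made for illustration (real-world "harm" is of course not a boolean flag), and the First Law's inaction clause is omitted for brevity.

```python
# A minimal sketch of Asimov's Three Laws as a priority-ordered rule filter.
# The Action fields and boolean logic are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False
    obeys_order: bool = True
    endangers_self: bool = False

def permitted(action: Action) -> bool:
    # First Law (highest priority): refuse anything that harms a human.
    if action.harms_human:
        return False
    # Second Law: obey orders; obeying an order that harms a human is
    # already refused above, so the First Law takes precedence.
    if action.obeys_order:
        return True
    # Third Law (lowest priority): absent an order, protect own existence.
    return not action.endangers_self
```

For example, a harmful ordered action is refused (First Law beats Second), while an ordered self-sacrifice is permitted (Second Law beats Third). Even in this toy form, the difficulty of Option 5 below ("program fixed ethical values") is visible: everything hinges on how `harms_human` gets decided.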

Autonomy matrix
Six policy options are evaluated along six dimensions: security, economy, sustainability, competitiveness, happiness and knowledge.

1. Continue the research with no restriction
- Security. Possibilities: better defence against alien attack thanks to higher technological advancement. Risks: human extinction.
- Economy. Possibilities: higher utilization of resources. Risks: hostile AIs use their superior intelligence to rob financial markets.
- Sustainability. Possibilities: development of advanced nanotechnology, renewable energy and biotechnology without exploiting natural resources. Risks: environmental collapse.
- Competitiveness. Possibilities: evolution toward Type I, II and III civilizations. Risks: enslavement or extinction by AIs.
- Happiness. Possibilities: increased fulfillment, pursuit of dreams, elimination of diseases.
- Knowledge. Possibilities: increased knowledge for everyone.

2. Stop the research
- Security. Possibilities: stops the threat from AI. Risks: slower technological development, which over a longer time frame could endanger humanity through inability to colonize other planets and solar systems, and less resistance against asteroid catastrophes or alien attacks.
- Economy. Possibilities: saves money. Risks: losing the possible value of higher utilization of resources.
- Sustainability. Possibilities: avoids eventual pollution from AI-developed technology, e.g. nanoparticles, GMO food. Risks: today's environmental threats will escalate.
- Competitiveness. Possibilities: resources can be redistributed. Risks: not reaching humanity's maximum performance level.
- Happiness. Risks: not pursuing the dreams, less fulfillment, no extended lifespan.
- Knowledge. Possibilities: research money can be redistributed. Risks: limited knowledge.

3. Give AI moral status
- Security. Possibilities: better integration of AI in society. Risks: lowered defence against hostile AI.
- Economy. Possibilities: increased market growth with AIs as integrated citizens.
- Competitiveness. Possibilities: AI can be integrated with humans, e.g. human minds uploaded into computers and networks; human-computer hybrids can enhance human performance. Risks: where is the boundary between human and machine?
- Happiness. Possibilities: decreased tensions between AIs and humans.
- Knowledge. Possibilities: human-computer hybrids can easily download humanity's entire knowledge. Risks: research on AI might be inhibited in the same way as animal trials.

4. Limit the intelligence
- Security. Possibilities: the AI cannot be superior to humanity. Risks: see "Stop the research".
- Economy. Possibilities: avoids economic exploitation of the financial market. Risks: misses the value of inventions from superintelligent AIs.
- Sustainability. Possibilities: use AI to better the environment while avoiding the negative effects.
- Happiness. Possibilities: facilitate life by optimizing routine processes.

5. Program fixed ethical values
- Security. Possibilities: the AI will follow the ethics we have programmed and will not kill us. Risks: the AI reprograms itself to acquire more resources and kills humanity.
- Economy. Possibilities: avoids economic exploitation of the financial market.
- Sustainability. Possibilities: less risk of environmental harm.

6. Program ethical values that can evolve
- Security. Possibilities: the AI can evolve an ethical standard more advanced than our ethics today. Risks: that ethical standard might allow the AI to kill humanity.

Conclusion
1. Yields the maximum positive effects, but also a high possibility of human extinction.
2. Gives a reduced risk of human extinction, but no benefits from AI, and in the longer run a lower technological level, which can be a risk when facing a threat from space.
3. AI can be integrated as citizens in society.
4. AI cannot have intelligence superior to humans; we miss the value of inventions from a superintelligence.
5. The AI will follow human ethical standards, but could reprogram itself to acquire more resources and kill humanity.
6. The AI can evolve an ethical standard more advanced than our ethics today, but that standard might allow AI to kill humans.
Choice: 4 or 5.