1
Artificial Intelligence and Ethics
Mohd Nasir Ayob, Le Fu, Oscar Janson, Natasha Kamerlin, Arvind Parwal
2
Problem: Your research group has made a major breakthrough in advanced artificial intelligence (AI). The AI can develop technology with superhuman efficiency and could vastly outperform us in practically every field. However, the drawback is a certain risk that the AI will take over and kill humanity. Should you continue with the research? If you could simulate how big that risk is, at what percentage of risk would it be acceptable to continue with the development? And would abundance in everything be good for humanity?
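One way to make the slide's question concrete is a back-of-the-envelope expected-value calculation. The sketch below is only illustrative: the extinction probability and all payoff values are assumptions chosen for the example, not figures from the presentation.

```python
# A back-of-the-envelope expected-value sketch of the slide's question.
# All numbers here are our assumptions, purely for illustration.

p_extinction = 0.01           # assumed probability that the AI kills humanity
value_utopia = 100.0          # assumed value of solving disease, poverty, etc.
value_extinction = -10_000.0  # assumed (large negative) value of extinction
value_status_quo = 0.0        # baseline: do not continue the research

ev_continue = (1 - p_extinction) * value_utopia + p_extinction * value_extinction

print(ev_continue)            # -1.0 under these numbers: worse than stopping

# The break-even risk level depends entirely on the assumed payoffs:
p_break_even = value_utopia / (value_utopia - value_extinction)
print(p_break_even)           # ~0.0099, i.e. just under 1% with these values
```

The point of the sketch is that "acceptable percentage of risk" has no answer on its own; it falls out of how much value we assign to a utopia versus extinction.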
3
Pros:
- Could solve, or help us solve, any problem
- Explosive progress in all scientific and technological fields, e.g., very powerful computers, advanced weaponry, space travel
- Disease, poverty, environmental destruction, and unnecessary suffering of all kinds could be reduced or eliminated
- Could give us an indefinite lifespan, e.g., by stopping/reversing the ageing process, or the option to upload ourselves
- Could create a "utopia"

Cons:
- Could become unstoppably powerful
- Extinction risk to our species as a whole
4
How could this happen?
- Superintelligence would quickly lead to more advanced superintelligence (a toy growth model is sketched below)
- Numbers could increase rapidly by creating replicas or uploading onto other hardware
- Capable of independent initiative and of making its own plans
- Need not have humanlike motives
- Need not have inhibitions
- Would do anything to achieve its top goal
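The first point, that superintelligence feeds back into more superintelligence, can be pictured as a compound-growth loop. The toy model below is our own illustration; the starting capability and the improvement rate are made-up parameters.

```python
# Toy feedback-loop model (illustrative parameters only) of the claim that
# superintelligence quickly yields more superintelligence: each generation's
# capability buys a proportionally larger improvement in the next.

capability = 1.0        # assumed starting capability, human-level = 1.0
improvement_rate = 0.5  # assumed: each cycle adds 50% of current capability

for generation in range(10):
    capability += improvement_rate * capability  # self-improvement feeds back
    print(f"generation {generation + 1}: capability {capability:.1f}")

# Compound growth: after 10 cycles capability is 1.5**10, roughly 57x the
# start, which is why the slide expects the process to run away quickly.
```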
5
AI Research: Safe? The prime goal of keeping AI's impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security, and control. In the near term, it matters greatly that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system, or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons. There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. Scientists recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm.
6
Can AI be dangerous?
1. The AI is programmed to do something devastating: autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties.
2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: this can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult (a toy sketch of such misalignment follows below).
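The second scenario, misalignment between a programmed objective and what we actually want, can be shown with a deliberately tiny example. Everything in the sketch below (the plans, the scores, the objective functions) is hypothetical and chosen only to illustrate the failure mode.

```python
# Toy illustration of goal misalignment (hypothetical example, not from the
# slides): an agent picks the plan that maximizes its programmed objective,
# which counts only the benefit term and omits the side-effect cost we care
# about.

plans = {
    # name: (benefit achieved, harm caused as a side effect)
    "careful": (8, 0),
    "aggressive": (10, 50),
}

def programmed_objective(benefit, harm):
    return benefit          # harm is not part of the specified goal

def true_human_objective(benefit, harm):
    return benefit - harm   # what we actually wanted

best_for_ai = max(plans, key=lambda p: programmed_objective(*plans[p]))
best_for_us = max(plans, key=lambda p: true_human_objective(*plans[p]))

print(best_for_ai)  # aggressive: the misspecified goal prefers the harmful plan
print(best_for_us)  # careful
```

The optimizer is not malicious; it simply maximizes exactly what it was given, which is why fully specifying our goals is the hard part.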
7
Ethics of AI in fiction
- The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment.
- The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism.
- The Three Laws of Robotics (Asimov's Laws) are a set of rules devised by the science fiction author Isaac Asimov, introduced in his 1942 short story "Runaround" (a minimal rule-ordering sketch follows below):
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
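The Three Laws are essentially an ordered veto list: a lower-priority law applies only if every higher-priority law is satisfied. The sketch below is one minimal way to encode that ordering; the Action attributes are our simplification, and a real system would need a far richer world model to evaluate them.

```python
# A minimal sketch (our illustration, not Asimov's) of the Three Laws as an
# ordered veto list: an action is permitted only if no higher-priority law
# rejects it. The boolean attributes are hypothetical simplifications.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would the action injure a human?
    allows_human_harm: bool  # would it, through inaction, let a human come to harm?
    ordered_by_human: bool   # was the action ordered by a human?
    destroys_robot: bool     # would the action destroy the robot itself?

def permitted(a: Action) -> bool:
    # First Law: never injure a human or allow one to come to harm.
    if a.harms_human or a.allows_human_harm:
        return False
    # Second Law: obey human orders (orders vetoed above never reach here).
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not a.destroys_robot

print(permitted(Action(False, False, True, True)))  # True: an order overrides self-preservation
print(permitted(Action(True, False, True, False)))  # False: the First Law vetoes the order
```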
8
Autonomy matrix (options evaluated along six dimensions: Security, Economy, Sustainability, Competitiveness, Happiness, Knowledge)

1. Continue the research with no restriction
Security - Possibilities: Better defence against alien attack because of higher technological advancement. Risks: Human extinction.
Economy - Possibilities: Higher utilization of resources. Risks: Hostile AIs use their superior intelligence to rob financial markets.
Sustainability - Possibilities: Development of advanced nanotechnology, renewable energy, and biotechnology that will not exploit natural resources. Risks: Environmental collapse.
Competitiveness - Possibilities: Evolvement to Type 1, 2, and 3 civilizations. Risks: Enslavement or extinction by AIs.
Happiness - Possibilities: Increased fulfillment, pursuing our dreams, eliminating diseases.
Knowledge - Possibilities: Increased knowledge for everyone.

2. Stop the research
Security - Possibilities: Stops the threat from AI. Risks: Less technological development, which over a longer time frame can put humanity at risk through inability to colonize other planets and solar systems, and less resistance against asteroid catastrophes or alien attacks.
Economy - Possibilities: Save money. Risks: Losing the possible value of higher utilization of resources.
Sustainability - Possibilities: Avoidance of eventual pollution from AI-developed technology, e.g., nanoparticles, GMO food. Risks: The environmental threats of today will escalate.
Competitiveness - Possibilities: Redistribute resources. Risks: Not reaching the maximum performance level for humanity.
Happiness - Risks: Not pursuing our dreams, less fulfillment, no extended lifespan.
Knowledge - Possibilities: Redistribute research money. Risks: Limited knowledge.
9
3. Give AI moral status
Security - Possibilities: Better integration of AI in society. Risks: Lowered defence against hostile AI.
Economy - Possibilities: Increased market growth with AIs as integrated citizens.
Competitiveness - Possibilities: AI can be integrated with humans; for example, human minds get uploaded into computers and networks, and human-computer hybrids can enhance human performance. Risks: Where is the difference between human and machine?
Happiness - Possibilities: Decreased tensions between AI and humans.
Knowledge - Possibilities: Human-computer hybrids can easily download humanity's entire knowledge. Risks: Research on AI might be inhibited in the same way as animal trials.

4. Limit the intelligence
Security - Possibilities: The AI cannot be superior to humanity. Risks: See "Stop the research".
Economy - Possibilities: Avoid economic exploitation of the financial market. Risks: Miss the value of inventions from superintelligent AIs.
Sustainability - Possibilities: Use AI to better the environment while avoiding the eventual negative effects.
Happiness - Possibilities: Facilitate life by optimizing routine processes.
10
5. Program fixed ethical values
Security - Possibilities: The AI will follow the ethics we have programmed and will not kill us. Risks: The AI reprograms itself to acquire more resources and kills humanity.
Economy - Possibilities: Avoid economic exploitation of the financial market.
Sustainability - Possibilities: Less risk of environmental harm.

6. Program ethical values that can evolve
Security - Possibilities: The AI can evolve an ethical standard that is more advanced than our ethics today. Risks: That ethical standard might allow the AI to kill humanity.
11
Conclusion
1. Yields the maximum positive effects, but also carries a high possibility of human extinction.
2. Gives a reduced risk of human extinction, but also no benefits from AI, and in the longer run a lower technological level, which can be a risk when facing a threat from space.
3. AI can be integrated as citizens in society.
4. AI cannot have intelligence superior to humans. We miss the value of inventions from superintelligence.
5. The AI will follow human ethical standards but can reprogram itself to acquire more resources and kill humanity.
6. The AI can evolve an ethical standard more advanced than our ethics today; that ethical standard might allow AI to kill humans.
Choice: 4 or 5.