[8] Neuromorphic AI could be one way to create morally capable robots, as it aims to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.
[11] Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit – or if they end up developing human 'weaknesses' as well: selfishness, pro-survival attitudes, inconsistency, scale insensitivity, etc.
For simple decisions, Nick Bostrom and Eliezer Yudkowsky have argued that decision trees (such as ID3) are more transparent than neural networks and genetic algorithms,[13] while Chris Santos-Lang argued in favor of machine learning on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".
In the review of 84[17] ethics guidelines for AI, 11 clusters of principles were found: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity.
[35] The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it.
AI systems may be less accurate for black people, as was the case in the development of an AI-based pulse oximeter that overestimated blood oxygen levels in patients with darker skin, causing issues with their hypoxia treatment.
[47] Since current large language models are predominately trained on English-language data, they often present the Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise.
Ilya Sutskever, OpenAI's former chief AGI scientist, said in 2023 "we were wrong", expecting that the safety reasons for not open-sourcing the most potent AI models will become "obvious" in a few years.
This lack of transparency is a significant concern in fields like healthcare, where understanding the rationale behind decisions can be crucial for trust, ethical considerations, and compliance with regulatory standards.
[71] A special case of the opaqueness of AI is that caused by it being anthropomorphised, that is, assumed to have human-like characteristics, resulting in misplaced conceptions of its moral agency.
[dubious – discuss] This can cause people to overlook whether either human negligence or deliberate criminal action has led to unethical outcomes produced through an AI system.
[82] AI has been slowly making its presence more known throughout the world, from chat bots that seemingly have answers for every homework question to Generative artificial intelligence that can create a painting about whatever one desires.
"[100] Pamela McCorduck counters that, speaking for women and minorities "I'd rather take my chances with an impartial computer", pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.
[120] In 2024, the Defense Advanced Research Projects Agency funded a program, Autonomy Standards and Ideals with Military Operational Values (ASIMOV), to develop metrics for evaluating the ethical implications of autonomous weapon systems by testing communities.
"[123] From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill and that is why there should be a set moral framework that the AI cannot override.
The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.
[127] Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but research investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".
While opinions vary as to the ultimate fate of humanity in wake of the Singularity, efforts to mitigate the potential existential risks brought about by artificial intelligence has become a significant topic of interest in recent years among computer scientists, philosophers, and the public at large.
Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled unintended consequences could arise.
[137] AI researchers such as Stuart J. Russell,[138] Bill Hibbard,[102] Roman Yampolskiy,[139] Shannon Vallor,[140] Steven Umbrello[141] and Luciano Floridi[142] have proposed design strategies for developing beneficial machines.
Examples include Nvidia's [143] Llama Guard, which focuses on improving the safety and alignment of large AI models, [144] and Preamble's customizable guardrail platform.
[145] These systems aim to address issues such as algorithmic bias, misuse, and vulnerabilities, including prompt injection attacks, by embedding ethical guidelines into the functionality of AI models.
[145] Other tools focus on applying structured constraints to inputs, restricting outputs to predefined parameters,[146] or leveraging real-time monitoring mechanisms to identify and address vulnerabilities.
[147] These efforts reflect a broader trend in ensuring that artificial intelligence systems are designed with safety and ethical considerations at the forefront, particularly as their use becomes increasingly widespread in critical applications.
[149] The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organization.
The widespread preoccupation with industrialization and mechanization in the 19th and early 20th century, however, brought ethical implications of unhinged technical developments to the forefront of fiction: R.U.R – Rossum's Universal Robots, Karel Čapek's play of sentient robots endowed with emotions used as slave labor is not only credited with the invention of the term 'robot' (derived from the Czech word for forced labor, robota)[176] but was also an international success after it premiered in 1921.
George Bernard Shaw's play Back to Methuselah, published in 1921, questions at one point the validity of thinking machines that act like humans; Fritz Lang's 1927 film Metropolis shows an android leading the uprising of the exploited masses against the oppressive regime of a technocratic society.
During the second half of the twentieth and the first decades of the twenty-first century, popular culture, in particular movies, TV series and video games have frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics.
This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them.
Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.