Before the 21st century, the ethics of machines had largely been the subject of science fiction, mainly due to the limitations of computing and artificial intelligence (AI).
[7] A variety of perspectives on this nascent field can be found in the collected edition Machine Ethics,[8] which stems from the AAAI Fall 2005 Symposium on Machine Ethics.
[12] In 2014, the US Office of Naval Research announced that it would distribute $7.5 million in grants over five years to university researchers to study questions of machine ethics as applied to autonomous robots.[13] The same year, Nick Bostrom's Superintelligence: Paths, Dangers, Strategies, which described machine ethics as the "most important...issue humanity has ever faced", reached #17 on The New York Times list of best-selling science books.
[22][23] In 2009, in an experiment at the École Polytechnique Fédérale de Lausanne's Laboratory of Intelligent Systems, AI robots were programmed to cooperate with each other and tasked with searching for a beneficial resource while avoiding a poisonous one. The robots were grouped into clans, with successful members' digital code passed on to the next generation, and over successive generations some learned to signal deceptively about the beneficial resource rather than share its location.
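To make the selection dynamic concrete, the toy sketch below shows how deceptive signalling can win out under a simple genetic algorithm. It is loosely inspired by the EPFL setup rather than a reproduction of it: the gene encoding, fitness function, and every parameter are invented for illustration.

```python
# Toy genetic algorithm in which an "honest signalling" gene is selected
# against: honest agents attract competitors to their resource, so their
# payoff shrinks. All numbers are illustrative assumptions.
import random

random.seed(0)
POP, GENS = 100, 200

def fitness(honesty: float) -> float:
    # Signalling honestly attracts crowding at the agent's food source,
    # reducing its own share; deception avoids that cost.
    crowding_cost = 0.8 * honesty
    return 1.0 - crowding_cost + random.gauss(0, 0.05)

# Each agent carries one gene: its probability of signalling honestly.
pop = [random.random() for _ in range(POP)]

for _ in range(GENS):
    ranked = sorted(pop, key=fitness, reverse=True)
    parents = ranked[: POP // 2]                       # truncation selection
    pop = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.02)))
           for _ in range(POP)]                        # mutated offspring

print(f"mean honesty after {GENS} generations: {sum(pop) / len(pop):.2f}")
```

Under these assumptions the honesty gene collapses toward zero within a few dozen generations, echoing how deception emerged in the experiment even though no robot was programmed to lie.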
Academics and technical experts meeting in 2009 to discuss the potential impact of robots and computers noted that some machines have acquired various forms of semi-autonomy, including the ability to find power sources on their own and to independently choose targets to attack with weapons.
[26] The U.S. Navy funded a report indicating that as military robots become more complex, greater attention should be paid to the implications of their ability to make autonomous decisions.
[29] Preliminary work has been conducted on methods of integrating artificial general intelligences (full ethical agents, in James Moor's sense) with existing legal and social frameworks.
[30] Big data and machine learning algorithms have become popular in numerous industries, including online advertising, credit ratings, and criminal sentencing, with the promise of providing more objective, data-driven results. They have also been identified as a potential way to perpetuate social inequalities and discrimination.
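A minimal sketch of how this can happen: even when a protected attribute is withheld from a model, correlated proxy features let a classifier trained on historically biased labels reproduce the disparity. The data, feature names, and coefficients below are synthetic assumptions, not drawn from any real lending system.

```python
# Synthetic demonstration: a model that never sees the protected attribute
# still reproduces historical bias through a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                 # protected attribute (0 or 1)
zip_area = group + rng.normal(0, 0.3, n)      # proxy correlated with group
income = rng.normal(0, 1, n)                  # legitimate feature

# Historical approval labels encode past discrimination: group 1 was
# approved less often at the same income level.
approved = (income - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([income, zip_area])       # `group` itself is excluded
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
```

The point is not the specific numbers but the mechanism: "data-driven" does not mean bias-free when the training labels themselves reflect biased decisions.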
[31] In an effort to be fairer and reduce the imprisonment rate, the U.S. judicial system has begun using quantitative risk assessment software when making decisions about bail and sentencing.
[32] A 2016 ProPublica report analyzed recidivism risk scores calculated by one of the most commonly used tools, the Northpointe COMPAS system, and looked at outcomes over two years. It found that black defendants who did not reoffend were nearly twice as likely as white defendants to be incorrectly flagged as high risk.
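The core of such an audit can be stated in a few lines: compare error rates across groups. The sketch below uses entirely synthetic scores and outcomes (COMPAS's inputs and weightings are proprietary, and nothing here reproduces ProPublica's actual analysis); it only shows the kind of false-positive and false-negative comparison the report performed.

```python
# Error-rate audit on synthetic data: do high-risk flags fall more often on
# one group's non-reoffenders than the other's? All data here is invented.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

group = rng.integers(0, 2, n)                  # defendant group label
reoffended = rng.random(n) < 0.35              # observed two-year outcome
# Hypothetical scores that run systematically higher for group 1:
score = 4.0 * reoffended + 1.5 * group + rng.normal(0, 2, n)
high_risk = score > 3.0

for g in (0, 1):
    m = group == g
    fpr = (high_risk[m] & ~reoffended[m]).sum() / (~reoffended[m]).sum()
    fnr = (~high_risk[m] & reoffended[m]).sum() / reoffended[m].sum()
    print(f"group {g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

A tool can be reasonably well calibrated overall and still show this kind of asymmetry; that tension between calibration and equal error rates is what made the COMPAS debate so persistent.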
[32] It has been argued that such pretrial risk assessments violate Equal Protection rights on the basis of race, due to factors including possible discriminatory intent by the algorithm itself, under a theory of partial legal capacity for artificial intelligences.
[33] In 2016, the Obama administration's Big Data Working Group, which oversaw various big-data regulatory frameworks, released reports warning of "the potential of encoding discrimination in automated decisions" and calling for "equal opportunity by design" for applications such as credit scoring.
[34][35] The reports encourage discourse among policy-makers, citizens, and academics alike, but recognize that no solution yet exists for preventing bias and discrimination from being encoded into algorithmic systems.
In March 2018, in an effort to address rising concerns over machine learning's impact on human rights, the World Economic Forum and Global Future Council on Human Rights published a white paper with detailed recommendations on how best to prevent discriminatory outcomes in machine learning.
[36] The World Economic Forum developed four recommendations based on the UN Guiding Principles of Human Rights to help address and prevent discriminatory outcomes in machine learning.[36]
In January 2020, Harvard University's Berkman Klein Center for Internet and Society published a meta-study of 36 prominent sets of principles for AI, identifying eight key themes: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values.
Isaac Asimov's Three Laws of Robotics are not usually considered suitable for an artificial moral agent,[39] but researchers have studied whether Kant's categorical imperative could be used instead.
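To see why rule sets like the Three Laws are considered unsuitable, it helps to look at what a direct encoding would require. The toy filter below (the Action fields and rules are invented for illustration) checks rules in the Laws' priority order, and in doing so hard-codes exactly the judgments, such as what counts as "harm" and which orders are legitimate, that the rules were supposed to settle.

```python
# Toy "top-down" moral filter in the spirit of Asimov's Laws. The booleans
# assume someone has already decided what counts as harm or a valid order,
# which is precisely the hard part the Laws leave unresolved.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool        # who decides this, and how far ahead?
    disobeys_order: bool
    endangers_self: bool

def permitted(action: Action) -> bool:
    if action.harms_human:
        return False                     # First Law overrides everything
    if action.disobeys_order:
        return False                     # Second Law yields only to the First
    return not action.endangers_self     # Third Law has lowest priority

print(permitted(Action("fetch coffee", False, False, False)))        # True
print(permitted(Action("restrain a bystander", True, False, False))) # False
```

Everything contentious lives in the boolean inputs rather than the filter itself; a Kantian universalizability test runs into the same formalization problem.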
The risks of having a machine learn values directly from the humans around it can be seen in Microsoft's Tay, a chatbot that learned to repeat racist and sexually charged tweets posted by the users it interacted with.
In one thought experiment, a Genie Golem with unlimited powers presents itself to the reader and declares that it will return in 50 years, demanding to be provided with a definite set of morals that it will then immediately act upon.
In the 1950s, Isaac Asimov considered the issue in his I, Robot stories; at the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems.