By 2014, both physicist Stephen Hawking and business magnate Elon Musk had publicly voiced the opinion that superhuman artificial intelligence could provide incalculable benefits, but could also end the human race if deployed incautiously.[3]
At the time, Hawking and Musk both sat on the scientific advisory board for the Future of Life Institute, an organisation working to "mitigate existential risks facing humanity".
The institute drafted an open letter directed to the broader AI research community,[4] and circulated it to the attendees of its first conference in Puerto Rico during the first weekend of 2015.[7]
According to Bloomberg Business, Professor Max Tegmark of MIT circulated the letter in order to find common ground between signatories who consider superintelligent AI a significant existential risk, and signatories such as Professor Oren Etzioni, who believe the AI field was being "impugned" by a one-sided media focus on the alleged risks.
... What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an "intelligence explosion"? Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this; therefore more research is necessary to find and validate a robust solution to the "control problem".[10]
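The following is an illustrative sketch, not drawn from the letter or its research agenda, of the kind of agent the phrase "reinforcement learning and simple utility functions" refers to: a tabular Q-learning loop that maximizes a single scalar reward in a toy one-dimensional world. The environment, reward values, and hyperparameters are invented for illustration only.

```python
import random
from collections import defaultdict

# Illustrative only: a toy agent that maximizes a single scalar "utility"
# (reward) via tabular Q-learning. The environment and rewards below are
# invented for this sketch; the letter itself contains no code.

ACTIONS = [-1, +1]          # move left or right on a 1-D line of states
GOAL, CLIFF = 5, -5         # reaching GOAL pays +10; falling off CLIFF pays -10

def step(state, action):
    """Return (next_state, reward, done) for the toy 1-D world."""
    nxt = state + action
    if nxt >= GOAL:
        return nxt, 10.0, True
    if nxt <= CLIFF:
        return nxt, -10.0, True
    return nxt, -0.1, False  # small per-step cost encourages short paths

def train(episodes=2000, alpha=0.1, gamma=0.95, epsilon=0.1):
    q = defaultdict(float)   # Q[(state, action)] -> estimated utility
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)                      # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])   # exploit
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Standard Q-learning update: nudge the estimate toward the
            # observed reward plus the discounted best future value.
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

if __name__ == "__main__":
    q = train()
    # The learned policy simply maximizes the scalar reward; nothing in the
    # objective encodes what the designer "really" wanted beyond that number.
    print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(CLIFF + 1, GOAL)})
```

The point the passage summarizes is that an objective of this kind specifies nothing beyond the scalar reward itself, which is why the cited research agenda treats specifying and keeping control over such objectives as an open problem rather than one solved by existing tools.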
Signatories include physicist Stephen Hawking, business magnate Elon Musk, the entrepreneurs behind DeepMind and Vicarious, Google's director of research Peter Norvig,[1] Professor Stuart J. Russell of the University of California, Berkeley,[11] and other AI experts, robot makers, programmers, and ethicists.