Superintelligence: Paths, Dangers, Strategies

Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would most likely follow surprisingly quickly.

For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved mathematical conjecture) could create and act upon a subgoal of transforming the entire Earth into some form of computronium (hypothetical material optimized for computation) to assist in the calculation.[7]

In the same month, business magnate Elon Musk made headlines by agreeing with the book that artificial intelligence is potentially more dangerous than nuclear weapons.[8][9][10]

Bostrom's work on superintelligence has also influenced Bill Gates's concern for the existential risks facing humanity over the coming century.[14]

The science editor of the Financial Times found that Bostrom's writing "sometimes veers into opaque language that betrays his background as a philosophy professor" but convincingly demonstrates that the risk from superintelligence is large enough that society should start thinking now about ways to endow future machine intelligence with positive values.[3]

Some of Bostrom's colleagues suggest that nuclear war presents a greater threat to humanity than superintelligence, as does the future prospect of the weaponisation of nanotechnology and biotechnology.[2]

"[2] Ronald Bailey wrote in the libertarian Reason that Bostrom makes a strong case that solving the AI control problem is the "essential task of our age".