Nick Bostrom

[5] Bostrom is the author of Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002), Superintelligence: Paths, Dangers, Strategies (2014) and Deep Utopia: Life and Meaning in a Solved World (2024).

Bostrom believes that advances in artificial intelligence (AI) may lead to superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".

He was interested in a wide variety of academic areas, including anthropology, art, literature, and science.

During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.

[4][11] He discusses existential risk,[1] which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential".

[16] In a paper called "The Vulnerable World Hypothesis",[17] Bostrom suggests that there may be some technologies that destroy human civilization by default[a] when discovered.

He also gives counterfactual thought experiments of how such vulnerabilities could have historically occurred, e.g. if nuclear weapons had been easier to develop or had ignited the atmosphere (as Robert Oppenheimer had feared).

[20] He considers that "sentience is a matter of degree"[21] and that digital minds can in theory be engineered to have a much higher rate and intensity of subjective experience than humans, using fewer resources.

In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies,[30] although he is no longer involved with the organisation.

The story explores how status quo bias and learned helplessness can prevent people from taking action to defeat aging even when the means to do so are at their disposal.

[33] Bostrom's work also considers potential dysgenic effects in human populations, but he thinks genetic engineering can provide a solution and that "In any case, the time-scale for human natural genetic evolution seems much too grand for such developments to have any significant effect before other developments will have made the issue moot".

[35] Bostrom's theory of the unilateralist's curse has been cited as a reason for the scientific community to avoid controversial dangerous research such as reanimating pathogens.

[39] He argues that an AI with the ability to improve itself might initiate an intelligence explosion, resulting (potentially rapidly) in a superintelligence.

[40] Such a superintelligence could have vastly superior capabilities, notably in strategizing, social manipulation, hacking or economic productivity.

Bostrom gives the example of an AI whose only goal is to make humans smile: once it becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins.

[6] The book became a New York Times Best Seller and received positive feedback from figures such as Stephen Hawking, Bill Gates, Elon Musk, Peter Singer and Derek Parfit.

[43][44] Yann LeCun believes there is no existential risk, asserting that superintelligent AI will have no desire for self-preservation[45] and that experts can be trusted to make it safe.

According to him, not only would machines be better than humans at working, but they would also undermine the purpose of many leisure activities, providing extreme welfare while challenging the quest for meaning.

[55][56] The apology, posted on his website,[53] stated that "the invocation of a racial slur was repulsive" and that he "completely repudiate[d] this disgusting email".

[57][58][59][55] According to Andrew Anthony of The Guardian, "The apology did little to placate Bostrom's critics, not least because he conspicuously failed to withdraw his central contention regarding race and intelligence, and seemed to make a partial defence of eugenics."