Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord.[1]
Sharing an office and working closely with the Centre for Effective Altruism, the institute's stated objective was to focus research where it could make the greatest positive difference for humanity in the long term.[2]
In 2014, its researchers published several books on AI risk, including Stuart Armstrong's Smarter Than Us and Bostrom's Superintelligence: Paths, Dangers, Strategies.
In a 2002 paper, Bostrom defined an "existential risk" as one "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential".[15][16]
FHI researchers have also studied the impact of technological progress on social and institutional risks, such as totalitarianism, automation-driven unemployment, and information hazards.
FHI's bioethics research focused on the potential consequences of gene therapy, life extension, brain implants and brain–computer interfaces, and mind uploading.[20]
FHI's focus was on methods for assessing and enhancing human intelligence and rationality, as a way of shaping the speed and direction of technological and social progress.