Connor Leahy

He has warned of the existential risk from artificial general intelligence and has called for regulation such as "a moratorium on frontier AI runs" implemented through a cap on compute.[2] Leahy is sceptical of reinforcement learning from human feedback as a solution to the alignment problem.

But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding."[8] He was one of the signatories of the 2023 open letter from the Future of Life Institute calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."[9][10]

In November 2023, Leahy was invited to speak at the inaugural AI Safety Summit. He worried that the summit would fail to address the risks from "god-like AI" stemming from the AI alignment problem, arguing: "If you build systems that are more capable than humans at manipulation, business, politics, science and everything else, and we do not control them, then the future belongs to them, not us." He co-founded the campaign group ControlAI to advocate for governments to implement a pause on the development of artificial general intelligence.