After earning a master's degree in computer science, he pursued a PhD in machine learning at the Australian National University under the supervision of Marcus Hutter.[2]
Leike completed a six-month postdoctoral fellowship at the Future of Humanity Institute before joining DeepMind to focus on empirical AI safety research,[2] where he collaborated with Shane Legg.[1]
In June 2023, he and Ilya Sutskever became co-leaders of OpenAI's newly introduced "superalignment" project, which aimed to determine, within four years, how to align future artificial superintelligences to ensure their safety.[4]
In May 2024, Leike announced his resignation from OpenAI, following the departures of Ilya Sutskever, Daniel Kokotajlo, and several other AI safety employees from the company.
Leike wrote that "Over the past years, safety culture and processes have taken a backseat to shiny products", and that he "gradually lost trust" in OpenAI's leadership.