On May 25, 2019, Zeng led the team that published the Beijing Artificial Intelligence Principles, proposed as an initiative for the long-term research, governance and planning of AI, and the "realization of beneficial AI for mankind and nature".[1][2][3][4]
In July 2023, Zeng addressed the United Nations Security Council in a meeting on the risks posed by recent strides in artificial intelligence.[7]
He said that AI models "cannot be trusted as responsible agents that can help humans to make decisions", and warned of the risk of extinction posed by both near-term and long-term AI, arguing that "in the long term, we haven't given superintelligence any practical reasons why they should protect humans".
Zeng stated that humans should always be responsible for final decision-making on the use of nuclear weapons, and that the United Nations must produce an international framework on AI development and governance to ensure global peace and security.[8][9]
In October 2023, UN Secretary-General António Guterres announced the creation of an advisory body on issues surrounding the international governance of AI, of which Zeng would be a member.