Geoffrey Hinton

With David Rumelhart and Ronald J. Williams, Hinton co-authored a highly cited 1986 paper that popularised the backpropagation algorithm for training multi-layer neural networks,[13] although they were not the first to propose the approach.
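As a rough illustration of what backpropagation does (a minimal sketch, not the 1986 paper's formulation), the Python fragment below trains a tiny two-layer network on the XOR problem by propagating the output error backwards through the layers; the layer sizes, sigmoid activation, learning rate, and iteration count are arbitrary choices for this example.

    import numpy as np

    # Minimal illustrative backpropagation for a two-layer network on XOR.
    # All sizes, the learning rate, and the activation are assumptions made
    # for brevity, not details taken from the 1986 paper.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4))   # input -> hidden weights
    W2 = rng.normal(size=(4, 1))   # hidden -> output weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass
        h = sigmoid(X @ W1)      # hidden activations
        out = sigmoid(h @ W2)    # network output

        # Backward pass: push the error gradient back layer by layer
        d_out = (out - y) * out * (1 - out)   # gradient at the output layer
        d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

        # Gradient-descent weight updates
        W2 -= 0.5 * h.T @ d_out
        W1 -= 0.5 * X.T @ d_h

    print(out.round(2))  # approaches [[0], [1], [1], [0]]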

AlexNet, an image-recognition network designed in collaboration with his students Alex Krizhevsky[21] and Ilya Sutskever for the 2012 ImageNet challenge,[22] was a breakthrough in the field of computer vision.

Hinton received the 2018 Turing Award, often referred to as the "Nobel Prize of Computing", together with Yoshua Bengio and Yann LeCun, for their work on deep learning.

He was also awarded the 2024 Nobel Prize in Physics, jointly with John Hopfield, for foundational discoveries and inventions that enable machine learning with artificial neural networks.

"[31] He has voiced concerns about deliberate misuse by malicious actors, technological unemployment, and existential risk from artificial general intelligence.

He continued his studies at the University of Edinburgh, where he was awarded a PhD in artificial intelligence in 1978 for research supervised by Christopher Longuet-Higgins.

In 2004, Hinton and collaborators successfully proposed launching a new CIFAR program, Neural Computation and Adaptive Perception[44] (NCAP, today named Learning in Machines & Brains).

In 2012, he co-founded DNNresearch Inc. with Alex Krizhevsky and Ilya Sutskever, his two graduate students in the University of Toronto's department of computer science.

Hinton's research concerns ways of using neural networks for machine learning, memory, perception, and symbol processing.

Notable former PhD students and postdoctoral researchers from his group include Peter Dayan,[64] Sam Roweis,[64] Max Welling,[64] Richard Zemel,[38][2] Brendan Frey,[3] Radford M. Neal,[4] Yee Whye Teh,[5] Ruslan Salakhutdinov,[6] Ilya Sutskever,[7] Yann LeCun,[65] Alex Graves,[64] Zoubin Ghahramani,[64] and Peter Fitzhugh Brown.

His certificate of election for the Royal Society reads: "Geoffrey E. Hinton is internationally known for his work on artificial neural nets, especially how they can be designed to learn without the aid of a human teacher."

In 2016, he was elected a foreign member of the National Academy of Engineering "for contributions to the theory and practice of artificial neural networks and their application to speech recognition and computer vision".

He won the 2016 BBVA Foundation Frontiers of Knowledge Award in the Information and Communication Technologies category "for his pioneering and highly influential work" on endowing machines with the ability to learn.

Together with Yann LeCun and Yoshua Bengio, Hinton won the 2018 Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.

In 2021, he received the Dickson Prize in Science from Carnegie Mellon University,[81] and in 2022 the Princess of Asturias Award in the Scientific Research category, along with Yann LeCun, Yoshua Bengio, and Demis Hassabis.

In 2024, he was jointly awarded the Nobel Prize in Physics with John Hopfield "for foundational discoveries and inventions that enable machine learning with artificial neural networks."

When New York Times reporter Cade Metz asked Hinton to explain in simpler terms how the Boltzmann machine could "pretrain" backpropagation networks, Hinton quipped that Richard Feynman reportedly said: "Listen, buddy, if I could explain it in a couple of minutes, it wouldn't be worth the Nobel Prize."

That same year, he received the VinFuture Prize grand award alongside Yoshua Bengio, Yann LeCun, Jen-Hsun Huang, and Fei-Fei Li for groundbreaking contributions to neural networks and deep learning algorithms.

In 2025, he was awarded the Queen Elizabeth Prize for Engineering jointly with Yoshua Bengio, Bill Dally, John Hopfield, Yann LeCun, Jen-Hsun Huang, and Fei-Fei Li.

"[31] However, in a March 2023 interview with CBS, he said that "general-purpose AI" might be fewer than 20 years away and could bring about changes "comparable in scale with the industrial revolution or electricity.

Hinton was previously optimistic about the economic effects of AI, noting in 2018: "The phrase 'artificial general intelligence' carries with it the implication that this sort of single robot is suddenly going to be smarter than you."

Hinton moved from the U.S. to Canada in part due to disillusionment with Ronald Reagan-era politics and disapproval of military funding of artificial intelligence.

In August 2024, Hinton co-authored a letter with Yoshua Bengio, Stuart Russell, and Lawrence Lessig in support of SB 1047, a California AI safety bill that would require companies training models costing more than US$100 million to perform risk assessments before deployment.

In 2016, from left to right: Russ Salakhutdinov, Richard S. Sutton, Geoffrey Hinton, Yoshua Bengio, and Steve Jurvetson.