AlphaGo Zero

"[3] Demis Hassabis, the co-founder and CEO of DeepMind, said that AlphaGo Zero was so powerful because it was "no longer constrained by the limits of human knowledge".

David Silver, one of the first authors of DeepMind's papers published in Nature on AlphaGo, said that it is possible to create generalized AI algorithms by removing the need to learn from humans.

AlphaGo Zero's neural network was trained using TensorFlow, with 64 GPU workers and 19 CPU parameter servers.[1]
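As a rough illustration of that distributed setup, the sketch below builds a TensorFlow cluster specification with GPU workers and CPU parameter servers. The host names, ports, and the way each process would be launched are placeholder assumptions for this example, not DeepMind's actual configuration.

```python
# Illustrative sketch only: a parameter-server layout similar in spirit to the
# 64-GPU-worker / 19-parameter-server setup described in the paper's Methods.
# Host names and ports below are placeholders.
import tensorflow as tf

cluster = tf.train.ClusterSpec({
    # GPU workers compute gradients on batches drawn from self-play games.
    "worker": ["worker%d.example:2222" % i for i in range(64)],
    # CPU parameter servers hold and update the shared network weights.
    "ps": ["ps%d.example:2222" % i for i in range(19)],
})

# Each process would then be started with its own role, for example:
# server = tf.distribute.Server(cluster, job_name="worker", task_index=0)
```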

AlphaGo Zero appeared to develop the skills required to beat top humans within just a few days, whereas the earlier AlphaGo took months of training to achieve the same level.[11]

According to Hassabis, AlphaGo's algorithms are likely to be of the most benefit to domains that require an intelligent search through an enormous space of possibilities, such as protein folding (see AlphaFold) or accurately simulating chemical reactions.[15]

Mark Pesce of the University of Sydney called AlphaGo Zero "a big technological advance" taking us into "undiscovered territory".[17]

Chinese Go professional Ke Jie commented on the remarkable accomplishments of the new program: "A pure self-learning AlphaGo is the strongest."[21]

AlphaZero (AZ) is a more generalized variant of the AlphaGo Zero (AGZ) algorithm, and is able to play shogi and chess as well as Go.[8]

AZ and AGZ differ in a number of implementation details.[8]

An open-source program, Leela Zero, based on the ideas from the AlphaGo papers, is available.
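To make concrete how a single self-play algorithm can be pointed at Go, chess, or shogi alike, the minimal sketch below separates an abstract game interface from the learning loop. All names here (Game, mcts_policy, self_play_episode) are illustrative assumptions rather than code from the AlphaZero paper, and the tree search is stubbed out with random move selection.

```python
# Illustrative sketch only: a game-agnostic structure in the spirit of AlphaZero.
# Names and signatures are assumptions for this example, not code from the papers.
import random


class Game:
    """Abstract game interface; a concrete Go, chess, or shogi adapter implements these."""

    def initial_state(self):
        raise NotImplementedError

    def legal_moves(self, state):
        raise NotImplementedError

    def next_state(self, state, move):
        raise NotImplementedError

    def is_terminal(self, state):
        raise NotImplementedError

    def outcome(self, state):
        """Game result in {+1, 0, -1} from the first player's perspective."""
        raise NotImplementedError


def mcts_policy(game, network, state):
    # Stand-in for the network-guided Monte Carlo tree search: to keep the
    # sketch self-contained it simply plays a random legal move.
    return random.choice(game.legal_moves(state))


def self_play_episode(game, network):
    """Play one game against itself and return (state, move, result) training examples."""
    state, history = game.initial_state(), []
    while not game.is_terminal(state):
        move = mcts_policy(game, network, state)
        history.append((state, move))
        state = game.next_state(state, move)
    result = game.outcome(state)
    return [(s, m, result) for s, m in history]
```

In the actual systems, the random stub is replaced by a search guided by the policy/value network, and the examples collected from self-play are used to update that network; this is the sense in which no human game records are required.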