Computer Go

Many of the algorithms, such as alpha-beta minimax, that performed well as AIs for checkers and chess fell apart on Go's 19x19 board, as there were too many branching possibilities to consider.

The application of Monte Carlo tree search to Go algorithms provided a notable improvement in the late 2000s, with programs finally able to achieve a low-dan level: that of an advanced amateur.

High-dan amateurs and professionals could still exploit these programs' weaknesses and win consistently, but computer performance had advanced past the intermediate (single-digit kyu) level.

The tantalizing unmet goal of defeating the best human players without a handicap, long thought unreachable, brought a burst of renewed interest.

"So I think it will be even more difficult to programme a computer to play a reasonable game of Go than of chess."

Prior to 2015, the best Go programs only managed to reach amateur dan level.

In April 1981, Jonathan K. Millen published an article in Byte discussing Wally, a Go program with a 15x15 board that fit within the KIM-1 microcomputer's 1 KB of RAM.[10]

Bruce F. Webster published an article in the magazine in November 1984 discussing a Go program he had written for the Apple Macintosh, including the MacFORTH source.[11]

Programs for Go were weak; a 1983 article estimated that they were at best equivalent to 20 kyu, the rating of a naive novice player, and they often restricted themselves to smaller boards.[12]

Programs that played on the Internet Go Server (IGS) on 19x19 boards had around 20–15 kyu strength in 2003, after substantial improvements in hardware.

In the 1994 World Computer Go Championship, the winning program, Go Intellect, lost all three games against youth players despite receiving a 15-stone handicap.

Monte Carlo programs, by contrast, "score" a terminal leaf of the tree by repeated random playouts (similar to Monte Carlo strategies for other problems) rather than with a hand-crafted evaluation function.
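A minimal sketch in Python of this playout-based scoring; the position interface (to_move, copy(), is_over(), legal_moves(), play(), winner()) is a hypothetical stand-in for an engine's board code, not any particular library:

```python
import random

def playout_value(position, rng=None):
    """Play uniformly random legal moves from `position` until the game ends,
    returning 1 if the side to move at `position` wins and 0 otherwise.
    The position object is an assumed engine interface, for illustration only."""
    rng = rng or random
    side = position.to_move
    node = position.copy()
    while not node.is_over():
        node.play(rng.choice(node.legal_moves()))
    return 1 if node.winner() == side else 0

def monte_carlo_score(position, n_playouts=1000, rng=None):
    """Score a leaf of the search tree as the fraction of random playouts won from it."""
    wins = sum(playout_value(position, rng) for _ in range(n_playouts))
    return wins / n_playouts
```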

In May 2017, AlphaGo beat Ke Jie, who at the time was ranked top in the world,[29][30] in a three-game match during the Future of Go Summit.

Many considered a strong Go-playing program something that could be achieved only in the far future, as a result of fundamental advances in general artificial intelligence technology.

Therefore, during this period, a large part of the computer Go development effort was focused on ways of representing human-like expert knowledge and combining it with local search to answer questions of a tactical nature.

The large board size prevents an alpha-beta searcher from achieving deep look-ahead without significant search extensions or pruning heuristics.

However, within time and memory constraints, it is not generally possible to determine with complete accuracy which moves could affect the 'life' of a group of stones.

This involves playing out sequences of hypothetical moves on the board up to a certain depth, then using an evaluation function to estimate the value of the resulting position for the current player.
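For illustration, a hedged sketch of such depth-limited lookahead, written as negamax-style alpha-beta; the static evaluate() function and the position interface are the same hypothetical placeholders as in the playout sketch above:

```python
def alphabeta(position, depth, alpha, beta, evaluate):
    """Depth-limited alpha-beta search in negamax form.  `evaluate` is a static
    evaluation function returning a score from the point of view of the side to move."""
    if depth == 0 or position.is_over():
        return evaluate(position)
    best = float("-inf")
    for move in position.legal_moves():
        child = position.copy()
        child.play(move)
        # Negamax convention: a good score for the opponent is a bad score for us.
        score = -alphabeta(child, depth - 1, -beta, -alpha, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff: the opponent already has a better alternative elsewhere
    return best
```

On a 19x19 board the large number of legal moves keeps the reachable depth of such a search very small, which is the difficulty described above.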

In order to quickly store a full-sized Go board in a transposition table, a hashing technique for mathematically summarizing the board position is generally necessary.

Zobrist hashing is very popular in Go programs because it has low collision rates, and can be iteratively updated at each move with just two XORs, rather than being calculated from scratch.
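A small, self-contained sketch of the idea; the flat point indexing, the 64-bit key size, and the fixed seed are illustrative choices, not requirements:

```python
import random

BOARD_POINTS = 19 * 19
BLACK, WHITE = 0, 1

# One random 64-bit key per (point, colour) pair, generated once at start-up.
_rng = random.Random(0)
ZOBRIST_KEYS = [[_rng.getrandbits(64) for _ in (BLACK, WHITE)]
                for _ in range(BOARD_POINTS)]

def toggle_stone(h, point, colour):
    """XOR the key for (point, colour) into the running hash.  XOR is its own
    inverse, so the same call both adds and removes a stone; updating the hash
    after a move costs a couple of XORs instead of rehashing all 361 points."""
    return h ^ ZOBRIST_KEYS[point][colour]

# Example of incremental updates as stones are played and captured:
h = 0
h = toggle_stone(h, 72, BLACK)   # Black plays at point 72
h = toggle_stone(h, 73, WHITE)   # White plays at point 73
h = toggle_stone(h, 72, BLACK)   # the black stone at 72 is captured: the same XOR removes it
```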

In 1996, Tim Klinger and David Mechner acknowledged the beginner-level strength of the best AIs and argued that "it is our belief that with better tools for representing and maintaining Go knowledge, it will be possible to develop stronger Go programs."

Most of the relatively successful results come from programmers' individual skills at Go and their personal conjectures about Go, rather than from formal mathematical assertions; they try to make the computer mimic the way they play Go.

Even if an expert system recognizes a pattern and knows how to play a local skirmish, it may miss a deeper strategic problem looming in the future.[citation needed]

This problem can be mitigated by adding some domain knowledge to the move generation and a greater level of search depth on top of the random evolution of the playouts.
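One hedged illustration of adding such domain knowledge to playout move generation: instead of picking uniformly, the policy below samples moves with weights boosted by simple tactical tests. The captures() and saves_own_atari() helpers are hypothetical, standing in for whatever knowledge an engine actually encodes:

```python
import random

def biased_playout_move(position, rng=None):
    """Pick a playout move non-uniformly so that tactically urgent moves are
    tried more often than under a purely uniform random policy.  Assumes the
    position always has at least one legal move (e.g. pass)."""
    rng = rng or random
    moves, weights = [], []
    for move in position.legal_moves():
        w = 1.0                            # baseline weight for any legal move
        if position.captures(move):        # hypothetical helper: move captures stones
            w *= 10.0
        if position.saves_own_atari(move): # hypothetical helper: rescues a group in atari
            w *= 5.0
        moves.append(move)
        weights.append(w)
    return rng.choices(moves, weights=weights, k=1)[0]
```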

Monte Carlo-based Go engines have a reputation for being much more willing than human players to play tenuki, moves elsewhere on the board, rather than continue a local fight.

For example, Crazy Stone learns move generation patterns from several hundred sample games, using a generalization of the Elo rating system.
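The generalization of the Elo rating system used here is essentially a Bradley–Terry-style model over move patterns: each pattern or feature has a learned strength, a move's strength is the product of the strengths of the features it matches, and its selection probability is its share of the total strength of all legal moves. The sketch below illustrates only that selection model, with made-up feature names and strengths rather than Crazy Stone's actual patterns or training procedure:

```python
import math

def move_probabilities(candidate_features, pattern_gamma):
    """Return the probability of each candidate move being played, where
    `candidate_features` maps move -> list of matched features and
    `pattern_gamma` maps feature -> learned strength (assumed given)."""
    strength = {move: math.prod(pattern_gamma[f] for f in feats)
                for move, feats in candidate_features.items()}
    total = sum(strength.values())
    return {move: s / total for move, s in strength.items()}

# Illustrative use with invented features and strengths:
gammas = {"capture": 30.0, "3x3-pattern-17": 4.2, "self-atari": 0.06}
probs = move_probabilities(
    {"D4": ["3x3-pattern-17"], "Q16": ["capture"], "A1": ["self-atari"]},
    gammas,
)
```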

Computer Go research results are being applied to other similar fields such as cognitive science, pattern recognition and machine learning.

The series of Ing prizes was set to expire either 1) in the year 2000 or 2) when a program could beat a 1-dan professional at no handicap, for a top prize of 40,000,000 NT dollars.

The main problem is that Go playing software, which usually communicates using the standardized Go Text Protocol (GTP), will not always agree with respect to the alive or dead status of stones.

While there is no general way for two different programs to "talk it out" and resolve the conflict, this problem is avoided for the most part by using Chinese, Tromp-Taylor, or American Go Association (AGA) rules in which continued play (without penalty) is required until there is no more disagreement on the status of any stones on the board.
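Under such area-scoring rules, once continued play has settled the status of every group, the final position can be scored mechanically: each side gets a point for every stone of its colour plus every empty point that reaches only its colour. A minimal sketch, assuming the board is given as a dictionary mapping coordinates to 'B', 'W', or '.':

```python
def area_score(board):
    """Area-score a finished position.  `board` maps (row, col) -> 'B', 'W',
    or '.' for every point of the grid.  An empty region counts for a colour
    only if it borders stones of that colour alone; regions bordering both
    colours (or none) are neutral."""
    def neighbours(p):
        r, c = p
        return [q for q in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)) if q in board]

    score = {'B': 0, 'W': 0}
    seen = set()
    for p, colour in board.items():
        if colour in score:
            score[colour] += 1            # every stone on the board counts for its owner
        elif p not in seen:
            # Flood-fill this empty region and record which colours border it.
            region, borders, stack = [], set(), [p]
            seen.add(p)
            while stack:
                q = stack.pop()
                region.append(q)
                for n in neighbours(q):
                    if board[n] == '.':
                        if n not in seen:
                            seen.add(n)
                            stack.append(n)
                    else:
                        borders.add(board[n])
            if len(borders) == 1:         # the region reaches only one colour
                score[borders.pop()] += len(region)
    return score
```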

The CGOS Go Server usually sees programs resign before a game has even reached the scoring phase, but nevertheless supports a modified version of Tromp-Taylor rules requiring a full playout.