Evaluation function

For several games such as chess, shogi and go, a significant body of evidence now exists as to the general composition of their evaluation functions.

Deeper search shifts the weight of the evaluation away from near-term tactical factors and toward subtle long-horizon positional motifs.

An evaluation function also implicitly encodes the value of the right to move, which can vary from a small fraction of a pawn to a win or loss.

Historically, in computer chess, the terms of an evaluation function were constructed (i.e. handcrafted) by the engine developer, as opposed to being discovered through training neural networks.

The general approach for constructing a handcrafted evaluation function is as a linear combination of various terms, each weighted according to its judged influence on the value of a position.
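Such a linear combination can be sketched as follows. This is a hedged illustration only: the term names and centipawn weights below are placeholders, not the terms of any particular engine.

```python
# Sketch of a handcrafted evaluation as a linear combination of weighted
# terms, scored in centipawns (100 ~ a one-pawn material advantage).
# All term names and weights here are illustrative assumptions.
WEIGHTS = {
    "material": 1.0,       # material balance, already in centipawns
    "mobility": 4.0,       # centipawns per extra legal move
    "passed_pawns": 20.0,  # centipawns per passed pawn
    "king_safety": -15.0,  # penalty per weakness near the king
    "tempo": 10.0,         # small bonus for the side to move
}

def evaluate(features: dict) -> int:
    """Linear combination of weighted terms, from White's point of view."""
    score = sum(WEIGHTS[name] * value for name, value in features.items())
    return round(score)

# Example position: White is up a pawn, slightly more mobile, and to move.
features = {
    "material": 100,   # +1 pawn
    "mobility": 3,     # 3 more legal moves than Black
    "passed_pawns": 1,
    "king_safety": 0,
    "tempo": 1,
}
print(evaluate(features))  # 142
```

Tuning in such engines then reduces to adjusting the weight vector, historically by hand or by automated parameter tuning against test positions.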

Initially, neural network based evaluation functions generally consisted of a single neural network for the entire evaluation, with input features selected from the board and an integer output, normalized to the centipawn scale so that a value of 100 is roughly equivalent to a material advantage of one pawn.
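The shape of such a single-network evaluation can be sketched as below. The architecture, layer sizes, random weights and the 768-feature input encoding (12 piece types times 64 squares) are illustrative assumptions, not any engine's actual design; only the final scaling step reflects the centipawn convention described above.

```python
# Hedged sketch of a one-network evaluation whose raw output (in pawns)
# is scaled to an integer on the centipawn scale: 100 ~ one pawn.
import numpy as np

rng = np.random.default_rng(0)

# Input: 768 binary features (12 piece types x 64 squares), flattened.
W1 = rng.normal(scale=0.05, size=(768, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.05, size=(32, 1))
b2 = np.zeros(1)

def evaluate(planes: np.ndarray) -> int:
    """Tiny two-layer network; output scaled to integer centipawns."""
    h = np.maximum(0.0, planes @ W1 + b1)  # ReLU hidden layer
    pawns = float(h @ W2 + b2)             # raw value in units of pawns
    return int(round(100 * pawns))         # normalize to centipawns

x = np.zeros(768)
x[0] = 1.0  # switch on one illustrative piece-square feature
print(evaluate(x))
```

With untrained random weights the output is meaningless; training against game outcomes or deeper-search scores is what gives the scale its interpretation.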

The distributed computing project Leela Chess Zero was started shortly after the publication of DeepMind's AlphaZero paper, to attempt to replicate its results.

The values in the tables are bonuses or penalties for the location of each piece on each square, and encode a composite of many subtle factors that are difficult to quantify analytically.
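A piece-square table lookup can be sketched as follows. The knight values below are illustrative placeholders encoding the rule of thumb that knights are stronger when centralized; they are not a tuned table from any engine.

```python
# Hedged sketch of a piece-square table: one bonus/penalty per square,
# in centipawns, for a single piece type (here, a knight for White).
KNIGHT_TABLE = [  # indexed a1=0 .. h8=63, from White's point of view
    -50, -40, -30, -30, -30, -30, -40, -50,
    -40, -20,   0,   0,   0,   0, -20, -40,
    -30,   0,  10,  15,  15,  10,   0, -30,
    -30,   5,  15,  20,  20,  15,   5, -30,
    -30,   5,  15,  20,  20,  15,   5, -30,
    -30,   0,  10,  15,  15,  10,   0, -30,
    -40, -20,   0,   0,   0,   0, -20, -40,
    -50, -40, -30, -30, -30, -30, -40, -50,
]

def square_index(file: str, rank: int) -> int:
    """Map algebraic coordinates like ('e', 4) to an index 0..63."""
    return (rank - 1) * 8 + (ord(file) - ord("a"))

def knight_bonus(file: str, rank: int) -> int:
    return KNIGHT_TABLE[square_index(file, rank)]

print(knight_bonus("e", 4))  # central knight: +20
print(knight_bonus("a", 1))  # corner knight: -50
```

A full material-plus-table evaluation sums such lookups over every piece on the board, one table per piece type and color.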

In fact, the most basic NNUE architecture is simply the 12 piece-square tables described above: a neural network with only one layer and no activation functions.[16]
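The equivalence can be demonstrated directly: a single linear layer with no activation over 768 binary piece-square features computes exactly the same value as summing 12 piece-square table lookups, because the weight vector is just the 12 tables laid end to end. The random table values below are placeholders, not trained weights.

```python
# Hedged sketch: a one-layer, activation-free network over piece-square
# features is identical to a sum of 12 piece-square table lookups.
import numpy as np

rng = np.random.default_rng(1)

# One weight per (piece type, square): 12 tables of 64 entries each.
tables = rng.integers(-50, 51, size=(12, 64)).astype(float)
w = tables.reshape(768)  # the same values viewed as one weight vector

def eval_linear(x: np.ndarray) -> float:
    """One linear layer, no activation: a single dot product."""
    return float(w @ x)

def eval_tables(pieces: list[tuple[int, int]]) -> float:
    """Sum of table lookups for each (piece_type, square) on the board."""
    return float(sum(tables[p, s] for p, s in pieces))

# A toy position: piece type 0 on square 12, piece type 7 on square 60.
pieces = [(0, 12), (7, 60)]
x = np.zeros(768)
for p, s in pieces:
    x[p * 64 + s] = 1.0

assert eval_linear(x) == eval_tables(pieces)  # identical by construction
```

Larger NNUE networks add hidden layers and activations on top of this same sparse input encoding.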

Historically, evaluation functions in computer Go took into account the territory controlled, the influence of stones, the number of prisoners, and the life and death of groups on the board.

However, modern Go-playing programs such as AlphaGo, Leela Zero, Fine Art, and KataGo largely use deep neural networks in their evaluation functions, and output a win/draw/loss percentage rather than a value in number of stones.