In machine learning, automatic basis function construction (or basis discovery) is the mathematical method of finding a set of task-independent basis functions that map the state space to a lower-dimensional embedding while still representing the value function accurately.
Automatic basis construction is independent of prior knowledge of the domain, which allows it to perform well where expert-constructed basis functions are difficult or impossible to create.
In reinforcement learning (RL), most real-world Markov Decision Process (MDP) problems have large or continuous state spaces, which typically require some sort of approximation to be represented efficiently.
A Markov decision process (MDP) with finite state space and fixed policy is defined by the 5-tuple $(S, A, P, R, \gamma)$, which includes the finite state space $S$, the finite action space $A$, the transition model $P$, the reward function $R$, and the discount factor $\gamma \in (0,1)$. For a fixed policy $\pi$, the Bellman equation is defined as:

$V^{\pi} = R + \gamma P^{\pi} V^{\pi}.$

When the number of elements in $S$ is small, $V^{\pi}$ can be stored in tabular form; when $S$ is large or continuous, it is commonly approximated via a linear combination of basis functions $\phi_1, \dots, \phi_k$:

$\hat{V}^{\pi}(s) = \sum_{i=1}^{k} w_i \phi_i(s),$

where the weights $w_i$ are learned from data.
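As a concrete illustration, here is a minimal sketch of these two steps for a small tabular MDP, assuming a known transition matrix `P` and reward vector `r`; the toy numbers and the hand-picked basis `Phi` are purely illustrative.

```python
import numpy as np

def exact_value_function(P, r, gamma):
    """Solve the Bellman equation V = r + gamma * P @ V exactly."""
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - gamma * P, r)

def linear_approximation(Phi, V):
    """Least-squares projection of V onto the span of the basis Phi."""
    w, *_ = np.linalg.lstsq(Phi, V, rcond=None)
    return Phi @ w, w

# Toy 3-state chain under a fixed policy (illustrative numbers).
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])
r = np.array([0.0, 0.0, 1.0])
gamma = 0.95

V = exact_value_function(P, r, gamma)
Phi = np.column_stack([np.ones(3), np.arange(3.0)])  # two hand-picked basis functions
V_hat, w = linear_approximation(Phi, V)
```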
A good construction method should produce a compact set of basis functions that represent the value function accurately and can be computed efficiently.

Proto-value functions: in this approach, Mahadevan analyzes the connectivity graph between states to determine a set of basis functions.[3] The normalized graph Laplacian is defined as:

$L = I - D^{-1/2} W D^{-1/2},$

where $W$ is the adjacency matrix of the undirected graph $(N, E)$ formed by the states of the fixed-policy MDP, and $D$ is the diagonal degree matrix of its nodes.
In a discrete state space, the adjacency matrix $W$ can be constructed by simply checking whether two states are connected, and $D$ can be calculated by summing up each row of $W$. In a continuous state space, the random-walk Laplacian $L_r = I - D^{-1} W$ can be used instead. This spectral framework can be used for value function approximation (VFA): given the fixed policy, the edge weights are determined by the transition probabilities between the corresponding states, and the eigenvectors of the Laplacian with the smallest eigenvalues (the smoothest functions on the graph) are taken as basis functions.
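A minimal sketch of this construction, assuming states are treated as connected whenever the fixed policy can transition between them in either direction; the function name and the guard against isolated nodes are illustrative choices.

```python
import numpy as np

def proto_value_functions(P, k):
    """Return k proto-value functions: eigenvectors of the normalized
    graph Laplacian with the smallest eigenvalues."""
    W = ((P > 0) | (P.T > 0)).astype(float)   # undirected adjacency
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)                          # node degrees
    d_safe = np.maximum(d, 1e-12)              # avoid division by zero
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d_safe))
    L = np.eye(len(d)) - D_inv_sqrt @ W @ D_inv_sqrt  # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)       # symmetric, sorted ascending
    return eigvecs[:, :k]                      # k smoothest eigenvectors
```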
To obtain smoother value approximations, diffusion wavelets can be used.[3]

Krylov basis construction uses the actual transition matrix instead of the random-walk Laplacian. This method assumes that the transition model $P$ and the reward $r$ are available.
The vectors in the Neumann series $V^{\pi} = \sum_{i=0}^{\infty} (\gamma P)^{i} r$ are denoted $y_i = P^{i} r$. The Krylov space spanned by $\{y_0, y_1, \dots, y_{m-1}\}$ is enough to represent any value function,[4] where $m$ is the degree of the minimal polynomial of $I - \gamma P$.
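Under these assumptions (known `P` and `r`), generating the Krylov vectors is straightforward; the sketch below also orthonormalizes them with a QR decomposition for numerical stability, a common practical choice rather than part of the definition.

```python
import numpy as np

def krylov_basis(P, r, m):
    """Generate the Krylov vectors y_i = P^i r for i = 0..m-1."""
    vecs = [r]
    for _ in range(m - 1):
        vecs.append(P @ vecs[-1])      # dilate the reward by P
    Y = np.column_stack(vecs)
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the same space
    return Q
```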
In particular, the value function can be written as:

$V^{\pi} = \sum_{i=0}^{m-1} \beta_i P^{i} r$

for some coefficients $\beta_i$. The Bellman error basis function (BEBF) approach builds the basis incrementally; for a current approximation $\hat{V}$, the Bellman error is defined as:

$\epsilon = r + \gamma P \hat{V} - \hat{V}.$
Loosely speaking, the Bellman error points toward the optimal value function.[6] The sequence of BEBFs forms a basis in which each new function is orthogonal to the space spanned by the previous ones; thus, with a sufficient number of BEBFs, any value function can be represented exactly.
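A sketch of the incremental construction, assuming the first basis function is the reward itself and each fit is done with a model-based least-squares fixed point (one of several possible choices for the inner fitting step):

```python
import numpy as np

def bebf_basis(P, r, gamma, k):
    """Build k Bellman error basis functions incrementally."""
    Phi = r.reshape(-1, 1)                     # first BEBF: the reward
    for _ in range(k - 1):
        # Fit the value function on the current basis (LSTD-style
        # fixed point, computable here because P and r are known).
        A = Phi.T @ (Phi - gamma * (P @ Phi))
        b = Phi.T @ r
        w = np.linalg.solve(A, b)
        V_hat = Phi @ w
        # The Bellman error of the fit becomes the next basis function.
        eps = r + gamma * (P @ V_hat) - V_hat
        Phi = np.column_stack([Phi, eps])
    return Phi
```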
Bellman Average Reward Bases (or BARBs)[7] are similar to Krylov bases, but the reward function is dilated by the average-adjusted transition matrix $P - P^{*}$, where $P^{*} = \lim_{N \to \infty} \frac{1}{N} \sum_{t=0}^{N-1} P^{t}$ is the limiting (Cesàro average) matrix of $P$.[8] BARBs converge faster than BEBFs and Krylov bases when $\gamma$ is close to $1$.
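As an illustration only: if the dilation above is read as a Krylov-style recursion with $P - P^{*}$ in place of $P$ (an assumption of this sketch, which may differ in detail from the algorithm in [7]), the construction might look like this, with $P^{*}$ approximated by a finite Cesàro average:

```python
import numpy as np

def barb_basis(P, r, k, horizon=1000):
    """Sketch of average-reward-adjusted basis generation (illustrative)."""
    n = P.shape[0]
    # Approximate the Cesaro limit P* = lim (1/N) sum_{t<N} P^t.
    P_star, Pt = np.zeros((n, n)), np.eye(n)
    for _ in range(horizon):
        P_star += Pt
        Pt = Pt @ P
    P_star /= horizon
    vecs = [r]
    for _ in range(k - 1):
        vecs.append((P - P_star) @ vecs[-1])   # dilate by P - P*
    return np.column_stack(vecs)
```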
There are two principal types of basis construction methods.
The first type of methods is reward-sensitive, like Krylov bases and BEBFs; these dilate the reward function geometrically through the transition matrix. However, when the discount factor $\gamma$ approaches $1$, Krylov bases and BEBFs converge slowly, because the error of Krylov-based methods is restricted by a Chebyshev polynomial bound.[5] To solve this problem, methods such as BARBs have been proposed.
BARBs are an incremental variant of Drazin bases, and converge faster than Krylov bases and BEBFs when $\gamma$ is close to $1$.
The other type is reward-insensitive: proto-value basis functions derived from the graph Laplacian. This method uses the graph structure of the state space, but the construction of the adjacency matrix makes it hard to analyze.