Thompson sampling

Thompson sampling,[1][2][3] named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration–exploitation dilemma in the multi-armed bandit problem.

It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.

Consider a set of contexts 𝒳, a set of actions 𝒜, and rewards in ℝ. In each round, the player obtains a context x ∈ 𝒳, plays an action a ∈ 𝒜, and receives a reward r ∈ ℝ following a distribution that depends on the context and the issued action. The aim of the player is to play actions so as to maximize the cumulative rewards.

The elements of Thompson sampling are as follows:[3]: sec. 4

- a likelihood function P(r | θ, a, x);
- a set Θ of parameters θ of the distribution of r;
- a prior distribution P(θ) on these parameters;
- past observations triplets D = {(x; a; r)};
- a posterior distribution P(θ | D) ∝ P(D | θ) P(θ), where P(D | θ) is the likelihood function.

Thompson sampling consists of playing the action a* ∈ 𝒜 according to the probability that it maximizes the expected reward; that is, action a* is chosen with probability

  ∫ 𝟙[ E(r | a*, x, θ) = max_{a′} E(r | a′, x, θ) ] P(θ | D) dθ,

where 𝟙 is the indicator function.

In practice, the rule is implemented by sampling. In each round, parameters θ* are drawn from the posterior P(θ | D), and an action a* is chosen that maximizes E(r | θ*, a*, x), i.e. the expected reward given the sampled parameters, the action, and the current context.
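For Bernoulli rewards with Beta priors on each arm's success probability, the sampling rule above has a simple closed form. The following sketch assumes a context-free bandit with independent Beta(1, 1) priors per arm; function and variable names are illustrative:

```python
import random

def thompson_step(successes, failures):
    """One round of Thompson sampling for a Bernoulli bandit.

    successes[a] and failures[a] hold the observed counts for arm a,
    so its posterior is Beta(successes[a] + 1, failures[a] + 1).
    Returns the index of the arm to play this round.
    """
    # Draw theta* ~ P(theta | D) for every arm, then act greedily on the draw.
    samples = [random.betavariate(successes[a] + 1, failures[a] + 1)
               for a in range(len(successes))]
    return max(range(len(samples)), key=samples.__getitem__)

# Usage: after observing reward r in {0, 1} for the played arm,
# update successes[arm] += r and failures[arm] += 1 - r.
```

Because each arm's draw is a sample from its posterior, an arm is played exactly with the probability that it is optimal under the current beliefs, matching the integral above.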

Conceptually, this means that the player instantiates their beliefs randomly in each round according to the posterior distribution, and then acts optimally according to them.

In most practical applications, it is computationally onerous to maintain and sample from a posterior distribution over models. As such, Thompson sampling is often used in conjunction with approximate sampling techniques.[3]

History

Thompson sampling was originally described by Thompson in 1933.[1] It was subsequently rediscovered numerous times independently in the context of multi-armed bandit problems.[4][5][6][7][8][9] A first proof of convergence for the bandit case was shown in 1997.[6] A related approach (see Bayesian control rule) was published in 2010.[5] In 2010 it was also shown that Thompson sampling is instantaneously self-correcting.[9] Asymptotic convergence results for contextual bandits were published in 2011.[7] Thompson sampling has been widely used in many online learning problems, including A/B testing in website design and online advertising,[10] and accelerated learning in decentralized decision making.[11] A Double Thompson Sampling (D-TS) algorithm[12] has been proposed for dueling bandits, a variant of the traditional MAB, where feedback comes in the form of pairwise comparisons.

Probability matching

Probability matching is a decision strategy in which predictions of class membership are proportional to the class base rates.

Thus, if in the training set positive examples are observed 60% of the time, and negative examples are observed 40% of the time, the observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances, and a class label of "negative" on 40% of instances.
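As an illustrative sketch of the 60/40 example above (the function name and simulation length are invented for illustration), a probability-matching predictor draws each label with probability equal to its base rate, in contrast to a maximizing predictor that always outputs the majority class:

```python
import random

def probability_matching_predict(base_rates):
    """Draw a class label with probability equal to its base rate."""
    labels = list(base_rates)
    weights = [base_rates[c] for c in labels]
    return random.choices(labels, weights=weights)[0]

random.seed(1)
rates = {"positive": 0.6, "negative": 0.4}
draws = [probability_matching_predict(rates) for _ in range(10_000)]
share_positive = draws.count("positive") / len(draws)
# share_positive lands near 0.6 (matching), not 1.0 (maximizing)
```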

Bayesian control rule

A generalization of Thompson sampling to arbitrary dynamical environments and causal structures, known as Bayesian control rule, has been shown to be the optimal solution to the adaptive coding problem with actions and observations.

[5] In this formulation, an agent is conceptualized as a mixture over a set of behaviours.

If these behaviours have been chosen according to the maximum expected utility principle, then the asymptotic behaviour of the Bayesian control rule matches the asymptotic behaviour of the perfectly rational agent.

Let the actions issued by an agent up to time t be denoted a₁, a₂, …, aₜ, and the observations gathered by the agent up to time t be denoted o₁, o₂, …, oₜ. Then, the agent issues the action aₜ₊₁ with probability[5]

  P(aₜ₊₁ | â₁:ₜ, o₁:ₜ),

where the "hat"-notation âₜ denotes the fact that aₜ is a causal intervention and not an ordinary observation. If the agent holds beliefs θ ∈ Θ over its behaviours, then the Bayesian control rule becomes

  P(aₜ₊₁ | â₁:ₜ, o₁:ₜ) = ∫_Θ P(aₜ₊₁ | θ, â₁:ₜ, o₁:ₜ) P(θ | â₁:ₜ, o₁:ₜ) dθ,

where P(θ | â₁:ₜ, o₁:ₜ) is the posterior distribution over the parameter θ given actions â₁:ₜ and observations o₁:ₜ.

In practice, the Bayesian control rule amounts to sampling, at each time step, a parameter θ* from the posterior distribution P(θ | â₁:ₜ, o₁:ₜ), where the posterior distribution is computed using Bayes' rule by only considering the (causal) likelihoods of the observations o₁, o₂, …, oₜ and ignoring the (causal) likelihoods of the actions a₁, a₂, …, aₜ, and then by sampling the action aₜ₊₁* from the action distribution P(aₜ₊₁ | θ*, â₁:ₜ, o₁:ₜ).
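The two-step recipe, sample a behaviour from the posterior and then sample an action from that behaviour, can be sketched in a toy setting. Everything below is an illustrative assumption: two candidate behaviours over actions {0, 1}, each pairing a made-up observation likelihood with a made-up policy:

```python
import random

# Toy behaviours (names and models are illustrative assumptions): each
# behaviour theta pairs an observation likelihood P(o | theta, a) with
# a policy P(a | theta) over the actions {0, 1}.
BEHAVIOURS = {
    "copier": {  # believes the observation tends to echo the action
        "obs_lik": lambda o, a: 0.9 if o == a else 0.1,
        "policy": lambda: random.choice([0, 0, 0, 1]),
    },
    "noise": {   # believes observations are unrelated to actions
        "obs_lik": lambda o, a: 0.5,
        "policy": lambda: random.choice([0, 1]),
    },
}

def update_posterior(posterior, action, observation):
    """Bayes update over behaviours using only the observation
    likelihoods; the action is treated as a causal intervention and
    contributes no likelihood term of its own."""
    unnorm = {name: p * BEHAVIOURS[name]["obs_lik"](observation, action)
              for name, p in posterior.items()}
    z = sum(unnorm.values())
    return {name: p / z for name, p in unnorm.items()}

def bayesian_control_step(posterior):
    """Sample theta* from the posterior over behaviours, then sample
    the next action from the chosen behaviour's policy."""
    names = list(posterior)
    theta_star = random.choices(names,
                                weights=[posterior[n] for n in names])[0]
    return BEHAVIOURS[theta_star]["policy"]()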

Relationship to upper-confidence-bound (UCB) algorithms

Thompson sampling and upper-confidence-bound algorithms share a fundamental property that underlies many of their theoretical guarantees.

Roughly speaking, both algorithms allocate exploratory effort to actions that might be optimal and are in this sense "optimistic".

Leveraging this property, one can translate regret bounds established for UCB algorithms to Bayesian regret bounds for Thompson sampling[13] or unify regret analysis across both these algorithms and many classes of problems.
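For comparison, a UCB-style rule replaces the posterior draw with a deterministic optimistic index. A common form is UCB1, sketched here under the assumption that every arm has been played at least once:

```python
import math

def ucb1_select(counts, means, t):
    """Pick the arm maximizing the UCB1 index: empirical mean plus an
    exploration bonus that shrinks as an arm is played more often.

    counts[a]: times arm a has been played (assumed >= 1);
    means[a]: empirical mean reward of arm a; t: current round.
    """
    indices = [means[a] + math.sqrt(2 * math.log(t) / counts[a])
               for a in range(len(counts))]
    return max(range(len(indices)), key=indices.__getitem__)
```

Both rules direct play toward arms that could plausibly be optimal: Thompson sampling through the randomness of the posterior draw, UCB1 through the deterministic confidence bonus.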

Concrete example of Thompson sampling applied to simulate treatment efficacy evaluation
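One hedged sketch of such a simulation (the three treatment efficacies 0.7/0.5/0.3 and the trial length are invented for illustration): patients arrive one at a time and are assigned to a treatment by Beta-Bernoulli Thompson sampling, so allocation drifts toward the treatment that works best.

```python
import random

random.seed(42)

# Hypothetical true efficacies of three candidate treatments (unknown
# to the algorithm; used only to simulate patient outcomes).
TRUE_EFFICACY = [0.7, 0.5, 0.3]

successes = [0, 0, 0]  # posterior for treatment t: Beta(successes[t] + 1,
failures = [0, 0, 0]   #                                failures[t] + 1)

for patient in range(2000):
    # Draw one efficacy estimate per treatment from its posterior,
    # then assign the patient to the treatment with the highest draw.
    draws = [random.betavariate(successes[t] + 1, failures[t] + 1)
             for t in range(3)]
    chosen = max(range(3), key=draws.__getitem__)
    outcome = random.random() < TRUE_EFFICACY[chosen]
    successes[chosen] += outcome
    failures[chosen] += not outcome

allocation = [successes[t] + failures[t] for t in range(3)]
# Most patients end up on the most effective treatment (index 0).
```

Early on, the posteriors are wide and all treatments are tried; as evidence accumulates, draws for the weaker treatments rarely win, so exploration of them tapers off on its own.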