Domain adaptation

Domain adaptation addresses the challenge of training a model on one data distribution (the source domain) and applying it to a related but different data distribution (the target domain).

A common example is spam filtering, where a model trained on emails from one user (source domain) is adapted to handle emails for another user with significantly different patterns (target domain).

Domain adaptation techniques can also leverage unrelated data sources to improve learning.

When multiple source distributions are involved, the problem extends to multi-source domain adaptation.

Domain adaptation is a specialized area within transfer learning.[1]

In domain adaptation, the source and target domains share the same feature space but differ in their data distributions.

In contrast, transfer learning encompasses broader scenarios, including cases where the target domain’s feature space differs from that of the source domain(s).

Common distribution shifts are classified as follows:[3][4]

- Covariate shift: the input distribution p(x) changes between domains while the conditional p(y | x) stays the same.
- Prior (label) shift: the label distribution p(y) changes while p(x | y) stays the same.
- Concept shift (concept drift): the relationship p(y | x) itself changes between domains.

Domain adaptation problems typically assume that some data from the target domain is available during training.

Usually in supervised learning (without domain adaptation), we suppose that the examples (x, y) ∈ X × Y are drawn i.i.d. from a distribution D_S over X × Y. The objective is then to learn a model h : X → Y (from a hypothesis class H) such that it commits the least error possible for labelling new examples coming from the distribution D_S.

The main difference between supervised learning and domain adaptation is that in the latter situation we study two different (but related) distributions D_S and D_T over X × Y. The domain adaptation task then consists of transferring knowledge from the source distribution D_S to the target distribution D_T.

The major issue is the following: if a model is learned from a source domain, what is its capacity to correctly label data coming from the target domain?
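
The size of this gap can be made concrete with a small simulation. In the sketch below (an illustrative setup, not from the cited literature), source and target inputs differ only by a covariate shift; a linear classifier trained on the source loses accuracy on the target, and importance weighting with the density ratio p_T(x)/p_S(x) — known here only because we chose both densities — recovers most of it:

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Both domains share the labeling rule y = 1[|x| > 1] (no concept shift),
# but their input distributions differ (covariate shift).
x_s = rng.normal(0.0, 1.0, 2000)   # source inputs
x_t = rng.normal(2.0, 1.0, 2000)   # target inputs
y_s = (np.abs(x_s) > 1).astype(int)
y_t = (np.abs(x_t) > 1).astype(int)

# A linear model fits the symmetric source labels poorly and transfers
# badly to the target region around x = 2.
plain = LogisticRegression().fit(x_s[:, None], y_s)
acc_plain = plain.score(x_t[:, None], y_t)

# Importance weighting: weight each source example by p_T(x) / p_S(x).
# In practice this density ratio has to be estimated from data.
w = norm.pdf(x_s, loc=2.0) / norm.pdf(x_s, loc=0.0)
weighted = LogisticRegression().fit(x_s[:, None], y_s, sample_weight=w)
acc_weighted = weighted.score(x_t[:, None], y_t)

print(f"target accuracy without weighting: {acc_plain:.2f}")
print(f"target accuracy with weighting:    {acc_weighted:.2f}")
```

Reweighting focuses the learner on the part of the input space the target domain actually occupies, which is why it helps even though no target labels are used.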

One method of adaptation consists of iteratively "auto-labeling" the target examples.[7][8]

The principle is simple: a model h is learned from the labeled source examples, h automatically labels some target examples, and a new model is learned from the newly labeled examples. Note that other iterative approaches exist, but they usually need labeled target examples.[9]
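
This iterative principle can be sketched as self-training: fit a model on the labeled source data, auto-label the target examples the model is confident about, and refit on the union. The confidence threshold and round count below are illustrative choices, not values from the cited work:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(Xs, ys, Xt, rounds=5, threshold=0.9):
    """Iteratively auto-label confident target examples and retrain."""
    model = LogisticRegression().fit(Xs, ys)
    for _ in range(rounds):
        proba = model.predict_proba(Xt)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        # Union of the true source labels and the current pseudo-labels.
        X_aug = np.vstack([Xs, Xt[confident]])
        y_aug = np.concatenate([ys, proba[confident].argmax(axis=1)])
        model = LogisticRegression().fit(X_aug, y_aug)
    return model
```

Each round the decision boundary drifts toward the target distribution. Methods of this kind can fail when early pseudo-labels are wrong and the errors reinforce themselves, which is why only high-confidence target examples are added.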

The goal is to find or construct a common representation space for the two domains.[10][11]

The objective is to obtain a space in which the domains are close to each other while maintaining good performance on the source labeling task.

This can be achieved through adversarial machine learning techniques, where feature representations of samples from the two domains are encouraged to be indistinguishable.
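
Adversarial alignment needs a full training loop, but the same idea — making source and target feature statistics indistinguishable — also has lightweight, closed-form instances. One is correlation alignment (CORAL), which whitens the source features and re-colors them with the target covariance; a minimal numpy sketch (the eps regularizer is an implementation choice):

```python
import numpy as np

def coral(Xs, Xt, eps=1e-5):
    """Align source features to the target domain by matching
    second-order statistics (covariances) in closed form."""
    def mat_pow(M, p):
        # power of a symmetric positive-definite matrix via eigendecomposition
        vals, vecs = np.linalg.eigh(M)
        return (vecs * np.power(vals, p)) @ vecs.T
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    # Whiten the source features, then re-color with target statistics.
    return Xs @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5)
```

A source classifier is then trained on coral(Xs, Xt) instead of Xs. Unlike adversarial methods, the alignment itself needs no labels and no gradient-based training.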

The goal is to construct a Bayesian hierarchical model, essentially a factorization model for count data, to derive domain-dependent latent representations that allow both domain-specific and globally shared latent factors.[12][13]
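
The hierarchical idea — domain-specific parameters drawn around globally shared ones — can be illustrated with the simplest possible case, partial pooling of per-domain means (a toy setup, not the model from the cited references):

```python
import numpy as np

rng = np.random.default_rng(1)

# Generative story: each domain's mean is drawn around a shared global mean
# (between-domain variance tau2); observations scatter around their domain
# mean (within-domain variance sigma2). Both variances are known here.
tau2, sigma2, n = 1.0, 4.0, 20
global_mean = 5.0
domain_means = global_mean + rng.normal(0.0, np.sqrt(tau2), size=3)
data = [m + rng.normal(0.0, np.sqrt(sigma2), size=n) for m in domain_means]

# The posterior estimate for each domain shrinks its sample mean toward the
# grand mean; small or noisy domains borrow strength from the others.
grand = np.mean([x.mean() for x in data])
shrink = tau2 / (tau2 + sigma2 / n)
pooled = [grand + shrink * (x.mean() - grand) for x in data]
```

In a full hierarchical model the shared and domain-specific parameters are inferred jointly, which is what lets globally shared and domain-specific latent factors coexist in one representation.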

Several compilations of domain adaptation and transfer learning algorithms have been implemented over the past decades.[14]

Figure: Distinction between the usual machine learning setting and transfer learning, and positioning of domain adaptation.