Sparse approximation

Sparse approximation (also known as sparse representation) theory deals with sparse solutions for systems of linear equations. Techniques for finding these solutions and exploiting them in applications have found wide use in image processing, signal processing, machine learning, medical imaging, and more.

Consider a linear system of equations $x = D\alpha$, where $D$ is an underdetermined $m \times p$ matrix ($m < p$) and $x \in \mathbb{R}^m$, $\alpha \in \mathbb{R}^p$. The matrix $D$ (typically assumed to be full-rank) is referred to as the dictionary, and $x$ is a signal of interest. Due to the underdetermined nature of $D$, this linear system admits in general infinitely many possible solutions, and among these we seek the one with the fewest non-zeros. Put formally, we solve

$$\min_\alpha \|\alpha\|_0 \quad \text{subject to} \quad x = D\alpha,$$

where $\|\alpha\|_0 = \#\{i : \alpha_i \neq 0,\ i = 1, \ldots, p\}$ is the $\ell_0$ pseudo-norm, which counts the number of non-zero components of $\alpha$. This problem is known to be NP-hard.
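To make the combinatorial nature of the $\ell_0$ problem concrete, here is a minimal brute-force sketch in Python/NumPy (the helper name is illustrative, not from any library); the exhaustive enumeration over supports is exactly the exponential search that makes the problem NP-hard:

```python
# Brute-force l0 solver: try every support of size 1, 2, ... and accept the
# first one whose least-squares fit reproduces x exactly (up to tol).
# Exponential in p -- a didactic toy, feasible only for tiny dictionaries.
from itertools import combinations

import numpy as np


def l0_brute_force(D, x, tol=1e-10):
    m, p = D.shape
    for k in range(1, p + 1):               # grow the allowed sparsity level
        for support in combinations(range(p), k):
            sub = D[:, list(support)]       # candidate atoms
            coef = np.linalg.lstsq(sub, x, rcond=None)[0]
            if np.linalg.norm(x - sub @ coef) <= tol:
                alpha = np.zeros(p)
                alpha[list(support)] = coef
                return alpha                # sparsest exact representation
    return np.zeros(p)                      # only reached when x == 0
```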

Sparsity of $\alpha$ implies that only a few ($k \ll p$) of its components are non-zero. The underlying motivation for such a sparse decomposition is the desire to provide the simplest possible explanation of $x$ as a linear combination of as few as possible columns from $D$, also referred to as atoms. As such, the signal $x$ can be viewed as a molecule composed of a few fundamental elements taken from $D$.

While the problem posed above is indeed NP-hard, its solution can often be found using approximation algorithms.

One such option is a convex relaxation of the problem, obtained by replacing the $\ell_0$ pseudo-norm with the $\ell_1$-norm, where $\|\alpha\|_1$ simply sums the absolute values of the entries in $\alpha$:

$$\min_\alpha \|\alpha\|_1 \quad \text{subject to} \quad x = D\alpha.$$

This is known as the basis pursuit (BP) algorithm, which can be handled using any linear programming solver.
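Since the $\ell_1$ objective is linear once $\alpha$ is split into non-negative parts $u - v$, the BP problem can indeed be handed to an off-the-shelf LP solver. A minimal sketch assuming SciPy is available (the function name basis_pursuit is an illustrative choice):

```python
# Basis pursuit as a linear program: alpha = u - v with u, v >= 0, so that
# ||alpha||_1 = sum(u + v) and x = D @ alpha becomes [D, -D] @ [u; v] = x.
import numpy as np
from scipy.optimize import linprog


def basis_pursuit(D, x):
    m, p = D.shape
    c = np.ones(2 * p)                      # minimize sum(u) + sum(v)
    A_eq = np.hstack([D, -D])               # equality constraints
    res = linprog(c, A_eq=A_eq, b_eq=x, bounds=(0, None), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v


# Toy usage: recover a 1-sparse representation in a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((10, 30))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
alpha_true = np.zeros(30)
alpha_true[7] = 1.5
x = D @ alpha_true
print(np.nonzero(np.abs(basis_pursuit(D, x)) > 1e-6)[0])  # should print [7]
```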

An alternative approximation method is a greedy technique, such as the matching pursuit (MP), which finds the location of the non-zeros one at a time.
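A minimal sketch of plain matching pursuit, assuming the atoms (columns of $D$) have unit norm; the function name and stopping rule are illustrative choices:

```python
# Matching pursuit: greedily pick the atom most correlated with the current
# residual, peel off its contribution, and stop after k picks or once the
# residual is negligible.
import numpy as np


def matching_pursuit(D, x, k, tol=1e-8):
    residual = x.astype(float)
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        correlations = D.T @ residual        # inner products with all atoms
        j = int(np.argmax(np.abs(correlations)))
        alpha[j] += correlations[j]          # coefficient update (unit-norm atoms)
        residual = residual - correlations[j] * D[:, j]
        if np.linalg.norm(residual) <= tol:  # error threshold reached
            break
    return alpha
```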

Interestingly, under mild conditions on $D$ (quantified via the spark of the matrix, its mutual coherence, or the restricted isometry property) and for a sufficiently low level of sparsity $k$ in the solution, the sparse representation problem can be shown to have a unique solution, and BP and MP are guaranteed to find it perfectly.[1][2][3]

Often the observed signal $x$ is noisy. By relaxing the equality constraint and imposing an $\ell_2$-norm on the data-fitting term, the sparse decomposition problem becomes

$$\min_\alpha \|\alpha\|_0 \quad \text{subject to} \quad \|x - D\alpha\|_2^2 \leq \epsilon^2,$$

or, put in a Lagrangian form,

$$\min_\alpha \lambda\|\alpha\|_0 + \frac{1}{2}\|x - D\alpha\|_2^2,$$

where $\lambda$ replaces $\epsilon$ in balancing sparsity against the representation error.

Just as in the noiseless case, these two problems are NP-hard in general, but can be approximated using pursuit algorithms. More specifically, replacing the $\ell_0$ with the $\ell_1$-norm yields the basis pursuit denoising problem

$$\min_\alpha \lambda\|\alpha\|_1 + \frac{1}{2}\|x - D\alpha\|_2^2,$$

closely related to the LASSO of the statistical literature.
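One standard way to approximate this Lagrangian $\ell_1$ problem is iterative soft-thresholding (ISTA), a proximal-gradient scheme; a minimal sketch, with the step size taken from the spectral norm of $D$:

```python
# ISTA for min_alpha lam * ||alpha||_1 + 0.5 * ||x - D @ alpha||_2^2:
# alternate a gradient step on the quadratic term with entrywise shrinkage.
import numpy as np


def ista(D, x, lam, n_iter=500):
    L = np.linalg.norm(D, ord=2) ** 2        # Lipschitz constant of the gradient
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = alpha - D.T @ (D @ alpha - x) / L                      # gradient step
        alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return alpha
```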

Similarly, matching pursuit can be used for approximating the solution of the above problems, finding the locations of the non-zeros one at a time until the error threshold is met.

Here as well, theoretical guarantees suggest that BP and MP lead to nearly optimal solutions, depending on the properties of $D$ and the cardinality $k$ of the solution.[4][5][6] Another interesting theoretical result refers to the case in which $D$ is a unitary matrix. Under this assumption, the problems posed above (with either the $\ell_0$ or the $\ell_1$ penalty) admit closed-form solutions in the form of non-linear shrinkage.[4]
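For instance, in the $\ell_1$ case with unitary $D$ the solution decouples entrywise into soft-thresholding of the analysis coefficients $D^T x$ (a sketch; the $\ell_0$ case would use hard-thresholding instead):

```python
# Closed-form solution of the Lagrangian l1 problem when D is unitary:
# since ||x - D @ alpha||_2 = ||D.T @ x - alpha||_2, each entry is solved
# independently by soft-thresholding.
import numpy as np


def unitary_shrinkage(D, x, lam):
    beta = D.T @ x                           # coefficients in the unitary basis
    return np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)
```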

There are several variations to the basic sparse approximation problem.

Structured sparsity: In the original version of the problem, any of the atoms in the dictionary can be picked. In the structured sparsity model, instead of picking atoms individually, groups of them are to be picked; these groups can be overlapping and of varying size. The objective is to represent $x$ as sparsely as possible while forcing the support to follow this group structure.[7]
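As an illustration, group-structured models replace the entrywise shrinkage by a group-level one; the sketch below is the proximal operator of a non-overlapping group-sparsity penalty (an illustrative choice, not a method prescribed by the text):

```python
# Group soft-thresholding: a whole group of coefficients survives only if
# its joint l2 norm exceeds the threshold, and is then shrunk as a block.
import numpy as np


def group_shrinkage(beta, groups, lam):
    beta = np.asarray(beta, dtype=float)
    alpha = np.zeros_like(beta)
    for g in groups:                         # g: list of atom indices in one group
        norm_g = np.linalg.norm(beta[g])
        if norm_g > lam:
            alpha[g] = (1.0 - lam / norm_g) * beta[g]
    return alpha                             # untouched groups remain all-zero
```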

Collaborative (joint) sparse coding: The original version of the problem is defined for a single signal $x$. In the collaborative (joint) sparse coding model, a set of signals is available, each believed to emerge from (nearly) the same set of atoms from $D$.

In this case, the pursuit task aims to recover a set of sparse representations that best describe the data while forcing them to share the same (or close-by) support.[8]
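One greedy pursuit for this setting is simultaneous OMP, which scores each atom by its total correlation with all current residuals so that a single shared support emerges; a minimal sketch, with the signals stored as the columns of X:

```python
# Simultaneous OMP: one support shared by all signals; at each step pick the
# atom with the largest aggregate correlation, then jointly refit by least
# squares and recompute the residuals.
import numpy as np


def somp(D, X, k):
    support = []
    residual = X.astype(float)
    coef = np.zeros((0, X.shape[1]))
    for _ in range(k):
        scores = np.linalg.norm(D.T @ residual, axis=1)   # one score per atom
        support.append(int(np.argmax(scores)))            # shared support update
        sub = D[:, support]
        coef = np.linalg.lstsq(sub, X, rcond=None)[0]     # joint refit
        residual = X - sub @ coef
    return support, coef                     # shared support and coefficients
```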

Other structures: More broadly, the sparse approximation problem can be cast while forcing a specific desired structure on the pattern of non-zero locations in $\alpha$.

Two cases of interest that have been extensively studied are a tree-based structure and, more generally, a Boltzmann-distributed support.[9]

As already mentioned above, various approximation (also referred to as pursuit) algorithms have been developed for addressing the sparse representation problem. The main families are the greedy methods, such as matching pursuit and its orthogonal variant (OMP), and the convex relaxation methods, such as basis pursuit and basis pursuit denoising, handled with linear-programming or iterative-shrinkage solvers; several of these were sketched above.

Sparse approximation ideas and algorithms have been extensively used in signal processing, image processing, machine learning, medical imaging, and more. In most of these applications, the unknown signal of interest is modeled as a sparse combination of a few atoms from a given dictionary, and this is used as the regularization of the problem.

These problems are typically accompanied by a dictionary learning mechanism that aims to fit $D$ so as to best match the model to the given data.
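A minimal MOD-style sketch of such a mechanism (an illustrative toy, not a tuned implementation): alternate sparse coding of the data against the current dictionary with a least-squares dictionary update. It reuses the matching_pursuit sketch from above:

```python
# Toy dictionary learning: code every column of the data matrix X with
# k-sparse matching pursuit, then update the dictionary as D = X @ pinv(A)
# and renormalize its atoms (the MOD-style update).
import numpy as np


def learn_dictionary(X, p, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], p))
    D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
    for _ in range(n_iter):
        A = np.column_stack([matching_pursuit(D, x, k) for x in X.T])
        D = X @ np.linalg.pinv(A)                         # least-squares update
        norms = np.linalg.norm(D, axis=0)
        D /= np.maximum(norms, 1e-12)                     # renormalize safely
    return D, A
```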

The use of sparsity-inspired models has led to state-of-the-art results in a wide set of applications.[12][13][14]

Recent work suggests that there is a tight connection between sparse representation modeling and deep learning.