When too many patterns are stored, spurious memories appear and quickly proliferate, so that the energy landscape becomes disordered and retrieval is no longer possible.
Quantum associative memories[2][3][4] (in their simplest realization) store patterns in a unitary matrix U acting on the Hilbert space of n qubits.
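As a toy illustration of this storage scheme (a minimal numpy sketch under the simplifying assumption that the stored patterns are orthonormal; the variable names are ours, not from the cited proposals), a unitary U can be built whose first columns are the patterns, so that applying U to a basis-state "address" retrieves the corresponding pattern:

```python
import numpy as np

n = 2                              # number of qubits
dim = 2 ** n
rng = np.random.default_rng(0)

# Two orthonormal patterns to store (simplifying assumption).
p1 = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
p2 = np.array([0.0, 0.0, 1.0, -1.0]) / np.sqrt(2)
P = np.column_stack([p1, p2])

# Complete {p1, p2} to an orthonormal basis of the n-qubit space.
M = np.column_stack([P, rng.standard_normal((dim, dim - 2))])
Q, _ = np.linalg.qr(M)             # columns 2..dim-1 are orthogonal to span(P)
U = np.column_stack([P, Q[:, 2:]]) # unitary whose column i is the image of |i>

assert np.allclose(U.conj().T @ U, np.eye(dim))
address = np.eye(dim)[:, 0]        # basis state |00>
print(U @ address)                 # retrieves the stored pattern p1
```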
Many quantum machine learning algorithms in this category are based on variations of the quantum algorithm for linear systems of equations[33] (colloquially called HHL, after the paper's authors) which, under specific conditions, performs a matrix inversion using an amount of physical resources growing only logarithmically in the dimensions of the matrix.
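Note that HHL does not return the classical solution vector; it prepares a quantum state whose amplitudes are proportional to the entries of A⁻¹b. A purely classical numpy sketch of that target state, under HHL's usual assumptions that A is Hermitian and b is normalized:

```python
import numpy as np

# HHL's usual assumptions: A Hermitian, b normalized.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 0.0])

x = np.linalg.solve(A, b)          # classical solve: cost grows with the dimension
x_state = x / np.linalg.norm(x)    # HHL instead prepares this normalized state |x>,
print(x_state)                     # with resources scaling logarithmically in the
                                   # dimension (under sparsity and conditioning caveats)
```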
Researchers are extensively studying VQCs because they harness the power of quantum computation to learn in a short time while also using fewer parameters than their classical counterparts.
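For illustration, a minimal variational quantum circuit in PennyLane (a hedged sketch, not any specific published model): four rotation parameters are trained by a classical optimizer to drive an expectation value toward a target.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params):
    # Two-qubit variational ansatz with only four trainable parameters.
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RY(params[2], wires=0)
    qml.RY(params[3], wires=1)
    return qml.expval(qml.PauliZ(0))

def cost(params):
    return (circuit(params) + 1.0) ** 2    # drive <Z_0> toward -1

params = np.array([0.1, 0.2, 0.3, 0.4], requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.4)
for _ in range(100):
    params = opt.step(cost, params)
print(cost(params))                        # close to zero after training
```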
Quantum walks have been proposed to enhance Google's PageRank algorithm[53] as well as the performance of reinforcement learning agents in the projective simulation framework.
As the depth of a quantum circuit grows on NISQ devices, the noise level rises, posing a significant challenge to the accurate computation of costs and gradients during training.[citation needed]
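On hardware, gradients of such circuits are commonly estimated with the parameter-shift rule, which evaluates the same circuit at shifted parameter values; any noise in those evaluations therefore enters the gradient estimate directly. A sketch for a single Pauli-rotation parameter (simulated noiselessly here with PennyLane):

```python
import pennylane as qml
import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def f(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))       # f(theta) = cos(theta)

def parameter_shift_grad(theta, s=np.pi / 2):
    # Exact for gates generated by a Pauli operator; on real hardware each
    # term is a separate, noisy circuit evaluation.
    return (f(theta + s) - f(theta - s)) / (2 * np.sin(s))

theta = 0.7
print(parameter_shift_grad(theta))         # ~ -sin(0.7), the analytic gradient
```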
Sampling from high-dimensional probability distributions is at the core of a wide spectrum of computational techniques with important applications across science, engineering, and society.
A computationally hard problem, which is key for some relevant machine learning tasks, is the estimation of averages over probabilistic models defined in terms of a Boltzmann distribution.[61]
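Concretely, for a Boltzmann distribution p(s) ∝ exp(−E(s)) over spin configurations s, learning requires averages such as ⟨s_i s_j⟩. A brute-force Python sketch for a small Ising-type model makes the bottleneck explicit: the exact sum runs over all 2^n configurations, which is precisely what sampling methods, classical or quantum, try to avoid.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 4                                      # number of spins
J = rng.standard_normal((n, n))
J = (J + J.T) / 2                          # symmetric couplings

def energy(s):
    return -0.5 * s @ J @ s

# Exact Boltzmann average of s_0 * s_1: the sum has 2**n terms.
states = [np.array(s) for s in itertools.product([-1, 1], repeat=n)]
weights = np.array([np.exp(-energy(s)) for s in states])
probs = weights / weights.sum()
avg = sum(p * s[0] * s[1] for p, s in zip(probs, states))
print(avg)
```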
Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks.[63]
Complementary work that appeared roughly simultaneously showed that quantum annealing can be used for supervised learning in classification tasks.[62]
The same device was later used to train a fully connected Boltzmann machine to generate, reconstruct, and classify down-scaled, low-resolution handwritten digits, among other synthetic datasets.
This problem was, to some extent, circumvented by introducing bounds on the quantum probabilities, allowing the authors to train the model efficiently by sampling.[69]
The same quantum methods also permit efficient training of full Boltzmann machines and multi-layer, fully connected models that lack well-known classical counterparts.
The term is claimed by a wide range of approaches, including the implementation and extension of neural networks using photons, layered variational circuits or quantum Ising-type models.
The main strategy is to carry out an iterative optimization process on NISQ[79] devices without the need for quantum error correction, mitigating the negative impact of noise, which may even be partially absorbed into the optimized circuit parameters.[83]
Although the QCNN model does not include a direct quantum analogue of the classical pooling operation, the fundamental idea of the pooling layer is retained to preserve the model's validity. Its function is to shrink the representation's spatial size while preserving crucial features, which reduces the number of parameters, streamlines network computation, and helps control over-fitting.
Translational invariance, which requires identical blocks of parameterized quantum gates within a layer, is a distinctive feature of the QCNN architecture.[84]
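A hedged PennyLane sketch of these two ingredients (illustrative only, not the exact circuits of the cited works): a convolution layer applies the same parameterized two-qubit block to neighboring pairs, and a pooling layer entangles each discarded qubit with a neighbor that survives to the next layer.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=4)

def conv_block(params, w1, w2):
    # The SAME parameters are reused on every pair: translational invariance.
    qml.RY(params[0], wires=w1)
    qml.RY(params[1], wires=w2)
    qml.CNOT(wires=[w1, w2])

def pool_block(param, source, sink):
    # Entangle, then stop using `source`: the active register is halved.
    qml.CRZ(param, wires=[source, sink])

@qml.qnode(dev)
def qcnn(params):
    for w1, w2 in [(0, 1), (2, 3)]:          # convolution layer
        conv_block(params[0:2], w1, w2)
    pool_block(params[2], source=0, sink=1)  # pooling layer
    pool_block(params[2], source=2, sink=3)
    conv_block(params[3:5], 1, 3)            # convolution on surviving qubits
    return qml.expval(qml.PauliZ(3))

print(qcnn(np.array([0.1, 0.2, 0.3, 0.4, 0.5])))
```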
Dissipative QNNs (DQNNs) are constructed from layers of qubits coupled by perceptrons, the network's building blocks, each implemented as an arbitrary unitary.[85][86]
As the name implies, the information in the input states is transported through the network in a feed-forward fashion, with each layer-to-layer transition realized as a map acting on the qubits of two adjacent layers.
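A numpy sketch of a single such transition in its smallest instance (one input qubit, one output qubit; the naming is ours): a random "perceptron" unitary acts on both qubits, and tracing out the input qubit yields the next layer's state.

```python
import numpy as np
from scipy.stats import unitary_group

rho_in = np.array([[1.0, 0.0], [0.0, 0.0]])   # input-layer qubit, state |0><0|
out0 = np.array([[1.0, 0.0], [0.0, 0.0]])     # output qubit initialized to |0><0|

U = unitary_group.rvs(4, random_state=42)     # arbitrary "perceptron" unitary
rho = U @ np.kron(rho_in, out0) @ U.conj().T  # act on input + output qubits

# Partial trace over the input qubit gives the next layer's state.
rho_out = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
print(rho_out, np.trace(rho_out).real)        # a valid density matrix, trace 1
```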
Unlike the approach taken by other quantum-enhanced machine learning algorithms, HQMMs can be viewed as models inspired by quantum mechanics that can be run on classical computers as well.[90]
Additionally, since classical HMMs are a particular kind of Bayes net, an exciting aspect of HQMMs is that the techniques used show how we can perform quantum-analogous Bayesian inference, which should allow for the general construction of the quantum versions of probabilistic graphical models.[95]
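A minimal numpy sketch of this style of update (a special case with one Kraus operator per observation symbol; a general HQMM allows several): the belief state is a density matrix, and observing a symbol applies the corresponding Kraus operator and renormalizes, in direct analogy with the Bayesian belief update of a classical HMM.

```python
import numpy as np

# Belief state over a 2-dimensional latent space, as a density matrix.
rho = np.eye(2) / 2                          # maximally uncertain prior

# One Kraus operator per observation symbol, normalized so that
# sum_y K_y^dagger K_y = I.
K = {
    0: np.array([[1.0, 0.0], [0.0, 0.5]]),
    1: np.array([[0.0, 0.0], [0.0, np.sqrt(0.75)]]),
}
assert np.allclose(sum(k.conj().T @ k for k in K.values()), np.eye(2))

def update(rho, y):
    # Quantum analogue of Bayes' rule: condition the belief on observing y.
    numer = K[y] @ rho @ K[y].conj().T
    prob = np.trace(numer).real              # probability of observing y
    return numer / prob, prob

rho, p = update(rho, y=1)
print(p, rho)                                # updated belief after observing y = 1
```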
For this purpose, gates instead of features act as players in a coalitional game with a value function that depends on measurements of the quantum circuit of interest.
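The resulting attribution is the standard Shapley value, computed over coalitions of gates. In the plain-Python sketch below, a toy value function stands in for the circuit measurement of the cited work; the gate names and scores are hypothetical.

```python
from itertools import combinations
from math import factorial

gates = ["RY_0", "CNOT_01", "RY_1"]          # hypothetical gates as "players"

def value(coalition):
    # Toy stand-in for a measured property of the circuit when only the
    # gates in `coalition` are active (e.g., an expectation value).
    scores = {"RY_0": 0.4, "CNOT_01": 0.3, "RY_1": 0.1}
    v = sum(scores[g] for g in coalition)
    if "RY_0" in coalition and "CNOT_01" in coalition:
        v += 0.2                             # a pairwise interaction term
    return v

def shapley(player, players):
    others = [g for g in players if g != player]
    n, total = len(players), 0.0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            w = factorial(r) * factorial(n - r - 1) / factorial(n)
            total += w * (value(set(S) | {player}) - value(set(S)))
    return total

for g in gates:
    print(g, shapley(g, gates))              # the values sum to value(all gates)
```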
Additionally, a quantum version of the classical technique known as LIME (Local Interpretable Model-Agnostic Explanations)[99] has also been proposed, known as Q-LIME.[106]
The earliest experiments were conducted using the adiabatic D-Wave quantum computer; for instance, a 2009 demonstration used regularized boosting with a nonconvex objective function to detect cars in digital images.[107]
Many experiments followed on the same architecture, and leading tech companies have shown interest in the potential of quantum machine learning for future technological implementations.[113]
An all-optical linear classifier implemented with non-linear photonics realized a perceptron model capable of learning the classification boundary iteratively from training data through a feedback rule.[114]
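The feedback rule in question is the classical perceptron update; in the experiment the weights were encoded optically, but the learning logic can be sketched classically (this sketch is illustrative, not the published optical implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 2))            # training points
y = np.sign(X[:, 0] + 0.5 * X[:, 1])         # linearly separable labels

w = np.zeros(2)
eta = 0.1                                    # learning rate
for _ in range(20):                          # passes over the data
    for xi, yi in zip(X, y):
        if np.sign(w @ xi) != yi:            # misclassified?
            w += eta * yi * xi               # perceptron feedback rule
print(w)                                     # normal vector of the learned boundary
```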
A core building block in many learning algorithms is to calculate the distance between two vectors: this was first experimentally demonstrated for up to eight dimensions using entangled qubits in a photonic quantum computer in 2015.[119]
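The standard primitive behind such distance estimation is the swap test, which estimates the squared overlap |⟨a|b⟩|² of two states; for normalized vectors with real, non-negative overlap, the Euclidean distance then follows from ‖a−b‖² = 2 − 2⟨a|b⟩. A hedged PennyLane sketch (assuming a recent version providing qml.StatePrep):

```python
import pennylane as qml
import numpy as np

dev = qml.device("default.qubit", wires=3)

a = np.array([1.0, 0.0])                     # |a> = |0>
b = np.array([1.0, 1.0]) / np.sqrt(2)        # |b> = |+>

@qml.qnode(dev)
def swap_test():
    qml.StatePrep(a, wires=1)
    qml.StatePrep(b, wires=2)
    qml.Hadamard(wires=0)                    # ancilla
    qml.CSWAP(wires=[0, 1, 2])
    qml.Hadamard(wires=0)
    return qml.probs(wires=0)

p0 = swap_test()[0]                          # P(0) = (1 + |<a|b>|^2) / 2
overlap_sq = 2 * p0 - 1
dist_sq = 2 - 2 * np.sqrt(overlap_sq)        # valid for real, non-negative overlap
print(overlap_sq, dist_sq)                   # 0.5 and ~0.586 for these states
```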
However, a more recent publication from 2021 could not reproduce these claims for neural network weight initialization and found no significant advantage of using QRNGs over PRNGs.[120]
The work also demonstrated that generating fair random numbers with a gate-based quantum computer is a non-trivial task on NISQ devices, and that QRNGs are therefore typically much more difficult to use in practice than PRNGs.