Boson sampling

Specifically, implementing a boson sampling machine requires reliable sources of single photons (currently the most widely used are parametric down-conversion crystals), as well as a linear interferometer.[8]

Finally, the scheme also necessitates high-efficiency single-photon counting detectors, such as those based on current-biased superconducting nanowires, which perform the measurements at the output of the circuit.

Specifically, the proofs of the exact boson sampling problem cannot be directly applied here, since they are based on the #P-hardness of estimating the exponentially small probability of a specific measurement outcome.

Therefore, if the linear optical circuit implements a Haar-random unitary matrix, the adversarial sampler will not be able to detect which of the exponentially many probabilities matters for the hardness argument, and so cannot concentrate its errors on that particular outcome.

Although the probability of a specific measurement outcome at the output of the interferometer is related to the permanent of submatrices of a unitary matrix, a boson sampling machine does not allow its estimation.

Therefore, the estimate obtained from a boson sampler is not more efficient than running the classical polynomial-time algorithm by Gurvits for approximating the permanent of any matrix to within an additive error.[17]
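For concreteness, the following is a minimal sketch of this kind of additive-error permanent estimation, based on Glynn's formula with random ±1 vectors; the function names and the NumPy-based implementation are illustrative, not taken from the cited works.

```python
import numpy as np

def glynn_estimator(A, x):
    """One sample of Glynn's formula: prod_j x_j * prod_i (A @ x)_i.
    Its average over uniformly random x in {-1, +1}^n equals per(A)."""
    return np.prod(x) * np.prod(A @ x)

def estimate_permanent(A, num_samples=20000, rng=None):
    """Randomized additive-error approximation of the permanent, in the spirit
    of Gurvits' polynomial-time algorithm: average many Glynn estimators; the
    error shrinks like 1/sqrt(num_samples) on an additive scale set by the
    matrix norm."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    samples = [glynn_estimator(A, rng.choice([-1.0, 1.0], size=n))
               for _ in range(num_samples)]
    return sum(samples) / num_samples

# Example: the permanent of the 4x4 all-ones matrix is 4! = 24.
print(estimate_permanent(np.ones((4, 4))))
```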

As already mentioned above, the implementation of a boson sampling machine requires a reliable source of many indistinguishable photons, and this requirement currently remains one of the main difficulties in scaling up the complexity of the device.

Namely, despite recent advances in photon generation techniques using atoms, molecules, quantum dots, and color centers in diamond, the most widely used method remains parametric down-conversion (PDC).

The main advantages of PDC sources are the high photon indistinguishability, good collection efficiency, and relatively simple experimental setups. Their main drawback is the probabilistic nature of the photon-pair generation, which makes the rate of simultaneous M-photon events from M separate sources fall off exponentially with M.

Recently, however, a new scheme has been proposed to make the best use of PDC sources for the needs of boson sampling, greatly enhancing the rate of M-photon events.

This approach, named scattershot boson sampling,[18][19] consists of connecting N (with N > M) heralded single-photon sources to different input ports of the linear interferometer.

Whenever any M of the N sources fire simultaneously, the heralding detectors reveal which input ports are occupied, and that run is kept as a valid boson sampling event. For N ≫ M, this results in an exponential improvement in the M-photon generation rate with respect to the usual, fixed-input boson sampling with M sources.
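As a rough illustration of this scaling (a simplified binomial model, not the analysis from the cited works), one can compare the probability of an M-fold heralding event when N probabilistic sources are available versus a fixed set of M sources, each firing independently with pair-generation probability p per pulse:

```python
from math import comb

def m_photon_event_probability(n_sources, m_photons, p):
    """Probability that exactly m_photons of n_sources independent heralded
    sources each emit one photon pair in a given pulse (binomial model)."""
    return comb(n_sources, m_photons) * p**m_photons * (1 - p)**(n_sources - m_photons)

p = 0.01   # illustrative per-pulse pair-generation probability
M = 5      # photons injected into the interferometer per run

fixed = m_photon_event_probability(M, M, p)          # all M fixed sources must fire
scattershot = m_photon_event_probability(20, M, p)   # any M out of N = 20 sources may fire
print(scattershot / fixed)   # enhancement factor, roughly binomial(N, M) for small p
```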

Furthermore, scattershot boson sampling has also recently been implemented with six photon-pair sources coupled to integrated photonic circuits of nine and thirteen modes, an important leap towards a convincing experimental demonstration of quantum computational supremacy.

Such a twofold scattershot boson sampling model is also computationally hard, as can be proven by exploiting the symmetry of quantum mechanics under time reversal.

This is precisely equivalent to scattershot boson sampling, except that the measurement of the herald photons is deferred until the end of the experiment instead of taking place at the beginning.[21]

Finally, a linear optics platform for implementing a boson sampling experiment in which the input single photons undergo an active (non-linear) Gaussian transformation is also available.[8]

These experiments altogether constitute proof-of-principle demonstrations of an operational boson sampling device, and a route towards its larger-scale implementations.

A first scattershot boson sampling experiment was recently implemented[20] using six photon-pair sources coupled to integrated photonic circuits with 13 modes.[30]

The output of a universal quantum computer running, for example, Shor's factoring algorithm, can be efficiently verified classically, as is the case for all problems in the non-deterministic polynomial-time (NP) complexity class.
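For contrast, a trivial sketch of such classical verification for factoring (multiplying the claimed factors back together; the primality checks, also polynomial-time, are omitted):

```python
def verify_factoring(n, factors):
    """Classically verify a claimed factorization of n by multiplication.
    (A full check would also confirm each factor is prime, which is
    likewise possible in polynomial time.)"""
    product = 1
    for f in factors:
        product *= f
    return product == n and all(f > 1 for f in factors)

print(verify_factoring(15, [3, 5]))   # True
print(verify_factoring(15, [2, 7]))   # False
```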

This is not the case for the output of a boson sampling machine: since it is related to the problem of estimating matrix permanents (which falls into the #P-hard complexity class), it is not understood how to verify correct operation for large versions of the setup.
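To make the obstacle concrete, even checking a single outcome probability against theory requires computing an exact permanent, whose cost grows exponentially with the photon number. The sketch below (an illustration, not a method from the cited works) uses Ryser-style exact evaluation together with the standard relation P = |per(U_sub)|^2 for collision-free inputs and outputs:

```python
import numpy as np
from itertools import combinations

def permanent_ryser(A):
    """Exact permanent via Ryser's formula; the ~2^n scaling is what makes
    brute-force verification of large boson sampling devices impractical."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            row_sums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total

# Toy 2x2 "submatrix" of an interferometer unitary, selected by the occupied
# input and output modes; the outcome probability is |per(U_sub)|^2.
U_sub = np.array([[0.6, 0.8], [0.8, -0.6]])
print(abs(permanent_ryser(U_sub)) ** 2)
```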

It has been argued that, in a so-called symmetric setting in which the sampler is treated as a black box with no knowledge of the implemented unitary, its output could not be efficiently distinguished from uniform sampling. However, within current technologies the assumption of a symmetric setting is not justified (the full measurement statistics and the description of the implemented circuit are accessible), and therefore the above argument does not apply.

It is then possible to define a rigorous and efficient test to discriminate the boson sampling statistics from an unbiased probability distribution.

The opportunity then exists to tune between ideally indistinguishable (quantum) and perfectly distinguishable (classical) data and measure the change in a suitably constructed metric.

One can analyze the probability of finding k-fold coincidence measurement outcomes (without any multiply populated input mode), which is significantly higher for distinguishable particles than for bosons, owing to the bunching tendency of the latter.[28]
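A minimal numerical sketch of this comparison (an illustration using the standard permanent formulas and a toy random unitary; none of it is taken from the cited experiments):

```python
import numpy as np
from itertools import combinations, permutations

def permanent(A):
    """Exact permanent by summing over permutations (fine for small matrices)."""
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def coincidence_probability(U, input_modes, distinguishable):
    """Total probability that each injected particle exits in a distinct output
    mode (a k-fold coincidence): |permanent|^2 of the relevant submatrix for
    bosons, permanent of the element-wise squared moduli for distinguishable
    particles."""
    k, m = len(input_modes), U.shape[0]
    total = 0.0
    for output_modes in combinations(range(m), k):
        sub = U[np.ix_(list(output_modes), list(input_modes))]
        if distinguishable:
            total += float(np.real(permanent(np.abs(sub) ** 2)))
        else:
            total += abs(permanent(sub)) ** 2
    return total

# Toy random unitary on 6 modes (QR of a complex Gaussian matrix), 3 photons.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6)))
inputs = [0, 1, 2]
print("bosons:         ", coincidence_probability(U, inputs, distinguishable=False))
print("distinguishable:", coincidence_probability(U, inputs, distinguishable=True))
```

Typically the bosonic coincidence probability comes out lower than the distinguishable one, reflecting photon bunching.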

A different approach to confirming that the boson sampling machine behaves as theory predicts is to make use of fully reconfigurable optical circuits.

This approach also allows one to exclude other physical models, such as mean-field states, which mimic some collective multiparticle properties (including bosonic clouding).

This scalable scheme, however, is rather promising in light of the considerable development in the construction and manipulation of coupled superconducting qubits, and specifically the D-Wave machine.[40]

Coarse-grained boson sampling has been proposed as a resource for decision and function problems that are computationally hard, and may thus have cryptographic applications.[41][42][43]

The first related proof-of-principle experiment was performed with a photonic boson sampling machine (fabricated by a direct femtosecond laser-writing technique),[44] and confirmed many of the theoretical predictions.

Gaussian boson sampling has also been analyzed as a search component for computing the binding propensity between molecules of pharmacological interest.