This technique allows sub-second (~10 Hz) volumetric imaging of large volumes (~0.1 to 1 mm on each side) with ~1 μm spatial resolution under conditions of weak scattering and semi-transparency, a combination that had not been achieved by other methods.[2] The Stanford University Computer Graphics Laboratory published their first prototype LFM in 2006[1] and has been working at the cutting edge since then.
LFM can be built upon the traditional setup of a wide-field fluorescence microscope with a standard CCD or sCMOS camera.
In addition, the apertures and focal lengths of each lens, as well as the dimensions of the sensor and the microlens array, should all be chosen properly to ensure that there is neither overlap nor empty space between adjacent subimages behind the corresponding microlenses.
By simply summing all the pixels in each subimage behind a microlens (equivalent to collecting all the radiation coming from different angles that falls on the same position), the image is focused exactly on the plane that is conjugate to the microlens array plane: E(s,t) = ∬ L(s,t,u,v) du dv, where (s,t) indexes the microlens position, (u,v) the angular coordinates within a subimage, and the origin of the coordinate system of each subimage is located on the principal optical axis of the corresponding microlens.
Consequently, by synthetically refocusing at a series of depths, a focal stack is generated, recapitulating an instantaneous 3D image of the object space.
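The pixel-summing synthesis described above can be sketched in a few lines. This is a toy NumPy example; the sensor layout (square subimages tiling the sensor) and the subimage size are illustrative assumptions, not the parameters of any specific instrument:

```python
import numpy as np

def refocus_conjugate_plane(raw, n_angles):
    """Synthesize the image focused on the plane conjugate to the
    microlens array by summing all pixels of each subimage.

    raw      : 2D sensor image; each microlens covers an
               (n_angles x n_angles) block of pixels (a subimage).
    n_angles : number of angular samples per microlens (assumed square).
    """
    h, w = raw.shape
    assert h % n_angles == 0 and w % n_angles == 0
    # Reshape so the axes are (lens_y, angle_y, lens_x, angle_x),
    # then integrate over the two angular axes.
    lf = raw.reshape(h // n_angles, n_angles, w // n_angles, n_angles)
    return lf.sum(axis=(1, 3))

# Toy sensor image: 2 x 2 microlenses, 3 x 3 pixels each.
raw = np.arange(36, dtype=float).reshape(6, 6)
img = refocus_conjugate_plane(raw, 3)   # one pixel per microlens
```

Summing over the angular axes is exactly the "collect all rays hitting the same position" integral; refocusing at other depths would additionally shear the 4D array before the sum.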
[5] In addition, any reconstructed 2D image focused at an arbitrary depth corresponds to a 2D slice of the 4D light field in the Fourier domain, where the complexity of the refocusing algorithm can be reduced from O(n⁴) to O(n² log n).
This measurement can easily be done by placing a fluorescent bead at the center of the original focal plane and recording its light field, from which the PSF's 3D shape is ascertained by synthetically focusing at varied depths.
Given that the PSF is acquired with the same LFM setup and digital refocusing procedure as the focal stack, this measurement correctly reflects the angular range of rays captured by the objective (including any falloff in intensity); therefore, this synthetic PSF is actually free of noise and aberrations.
The shape of the PSF can be considered identical everywhere within our desired field of view (FOV); hence, multiple measurements can be avoided.
In the Fourier domain, the actual intensity of the voxels has a very simple relation with the focal stack and the PSF: F{FS} = F{V} · F{PSF}, where FS is the focal stack, V the voxel intensities, PSF the point spread function, and F{·} denotes the 3D Fourier transform (the focal stack being the volume convolved with the PSF).
However, it may not be possible to solve the equation above directly, given that the aperture is of limited size, which results in the PSF being bandlimited (i.e., its Fourier transform has zeros).
Instead, the estimate of the voxel intensities is improved iteratively by calculating the difference between the actual focal stack FS and the estimated focal stack PSF ⊗ V̂^(k), and using this residual to correct the current estimate (a constrained iterative deconvolution in the spatial domain).
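As a sketch of this kind of iterative refinement, the 1D toy example below uses a van Cittert-style update (add the residual between the measured and re-blurred signals, then clip negatives). The kernel, step size, and iteration count are illustrative assumptions, not values from the original work:

```python
import numpy as np

def iterative_deconv(fs, psf, n_iter=200, gamma=1.0):
    """Constrained iterative deconvolution (van Cittert-style sketch):
    add the residual between the measured focal stack and the current
    estimate re-blurred by the PSF, enforcing nonnegativity."""
    v = fs.copy()
    for _ in range(n_iter):
        reblurred = np.convolve(v, psf, mode="same")
        v = np.clip(v + gamma * (fs - reblurred), 0.0, None)
    return v

# Two point emitters blurred by a normalized 3-tap PSF.
truth = np.zeros(32)
truth[10], truth[20] = 1.0, 0.5
psf = np.array([0.25, 0.5, 0.25])
fs = np.convolve(truth, psf, mode="same")   # simulated blurred signal
est = iterative_deconv(fs, psf)             # sharpened estimate
```

Because the PSF is bandlimited, frequencies where its transform vanishes are never recovered; the nonnegativity clip is the "constraint" that keeps the iteration stable.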
Since digital refocusing can be viewed as a shear of the 4D light field followed by a projection down to 2D, by the projection-slice theorem the result is proportional to a dilated 2D slice of the 4D Fourier transform of the light field. Precisely, a refocused image can be generated from the 4D Fourier spectrum of a light field by extracting a 2D slice, applying an inverse 2D Fourier transform, and scaling.
If only samples of the light field are available, then instead of using the Fourier slice theorem for continuous signals mentioned above, a discrete Fourier slice theorem, which is a generalization of the discrete Radon transform, is adopted to compute the refocused images.
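A lower-dimensional analogue makes the slice relationship concrete: projecting a 2D image along one axis is equivalent to inverse-transforming the central slice of its 2D DFT. This NumPy sketch demonstrates that 2D-to-1D case, not the full 4D-to-2D case used in LFM:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((16, 16))

# Spatial-domain route: project (sum) the image along axis 0.
proj = img.sum(axis=0)

# Fourier-domain route: take the 2D DFT, keep only the central
# slice orthogonal to the projection direction (the zero-frequency
# row), and apply a 1D inverse DFT.
central_slice = np.fft.fft2(img)[0, :]
proj_fourier = np.fft.ifft(central_slice).real
```

The two routes agree exactly; the Fourier route pays off when many projections (refocused images) are needed from one precomputed spectrum, which is the source of the O(n⁴) to O(n² log n) speedup.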
Under the condition of incoherent propagation among different voxels, the light field transmission from the object space to the sensor can be linearly linked by a measurement matrix H, in which the information of the PSF is incorporated: f = Hg, where g is the vector of voxel intensities in the object space and f is the vectorized sensor measurement. In the ray-optics scenario, a focal stack is generated via synthetic focusing of rays, and deconvolution with a synthesized PSF is then applied to diminish the blurring caused by the wave nature of light. In the wave-optics approach, by contrast, the measurement matrix, describing the light field transmission, is calculated directly from the propagation of waves.
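The measurement-matrix picture can be sketched with a toy NumPy example: each column of H holds the vectorized sensor pattern of a unit emitter in one voxel, so the sensor reading for any volume is a single matrix-vector product. The random H here is an illustrative stand-in for patterns computed from wave propagation:

```python
import numpy as np

# Toy forward model: each column of H is the flattened sensor
# pattern produced by a unit emitter in one voxel.
n_pixels, n_voxels = 64, 10
rng = np.random.default_rng(1)
H = rng.random((n_pixels, n_voxels))   # column j: pattern of voxel j

g = np.zeros(n_voxels)
g[3] = 2.0                             # one emitter, brightness 2
f = H @ g                              # simulated sensor measurement
```

Linearity (from incoherent emission) is what allows superposing the per-voxel patterns; the measurement for a single emitter is just that voxel's column scaled by its brightness.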
Unlike traditional optical microscopes, whose PSF shape is invariant (e.g., an Airy pattern) with respect to the position of the emitter, an emitter in each voxel generates its own unique pattern on the sensor of an LFM. This pattern is the light distribution produced on the sensor plane when an isotropic point source of unit amplitude is placed at the corresponding 3D position in the object space.
An additional weighting term is added to match the fact that a PSF contributes more at the center of a voxel than at the edges.
The linear superposition integral is based on the assumption that the fluorophores in each infinitesimal volume element experience an incoherent, stochastic emission process, owing to their rapid, random fluctuations.
Again, due to the limited bandwidth, the photon shot noise, and the huge dimensions of the matrix, it is impossible to directly solve the inverse problem as ĝ = H⁻¹f.
Instead, the stochastic relation between a discrete light field and the FOV more closely resembles f ~ Poisson(Hg).
Based on the idea of maximizing the likelihood of the measured light field f given a volume g, the estimate can be refined with the multiplicative update g^(k+1) = diag(Hᵀ1)⁻¹ diag(Hᵀ diag(Hg^(k))⁻¹ f) g^(k), where the operator diag(·) retains the diagonal elements of a matrix and sets its off-diagonal elements to zero.
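This maximum-likelihood scheme is the Richardson-Lucy iteration written in matrix form. Below is a minimal NumPy sketch; the random H, the small dimensions, and the iteration count are illustrative assumptions, and real LFM measurement matrices are vastly larger and sparse:

```python
import numpy as np

def richardson_lucy(f, H, n_iter=2000):
    """Maximum-likelihood iteration for the model f ~ Poisson(H g):
    a multiplicative update that keeps the estimate nonnegative."""
    g = np.ones(H.shape[1])              # flat, positive initialization
    norm = H.T @ np.ones(H.shape[0])     # sensitivity term H^T 1
    for _ in range(n_iter):
        ratio = f / np.maximum(H @ g, 1e-12)   # f / (H g), elementwise
        g = g * (H.T @ ratio) / norm
    return g

# Noise-free toy problem: 40 sensor pixels, 5 voxels.
rng = np.random.default_rng(2)
H = rng.random((40, 5))
g_true = np.array([0.0, 1.0, 0.0, 2.0, 0.5])
f = H @ g_true
g_est = richardson_lucy(f, H)
```

Because the update is multiplicative, a positive initialization stays nonnegative throughout, which is the natural physical constraint on voxel intensities.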
Starting with initial work at Stanford University applying light field microscopy to calcium imaging in larval zebrafish (Danio rerio),[10] a number of articles have since applied light field microscopy to functional neural imaging, including measuring the dynamic activity of neurons across the whole brain of C. elegans,[11] whole-brain imaging in larval zebrafish,[11][12] imaging calcium and voltage activity sensors across the brain of fruit flies (Drosophila) at up to 200 Hz,[13] and fast imaging of 1 mm × 1 mm × 0.75 mm volumes in the hippocampus of mice navigating a virtual environment.