Hyperacuity

Visual acuity is measured by the smallest letters that can be distinguished on a chart and is governed by the anatomical spacing of the mosaic of sensory elements on the retina.

Yet spatial distinctions can be made on a finer scale still: misalignment of borders can be detected with a precision up to 10 times better than visual acuity, as Ewald Hering showed as early as 1899.

Light impinges on the mosaic of receptor cells, rods and cones, which covers the retinal surface without gaps or overlap, much like the detecting pixels in the sensor plane of a digital camera.

When two separate short lines are imaged on such a mosaic, the read-out of the difference in their locations achieves a precision that transcends the dimensions of the mosaic elements.
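
As a rough illustration of how a location read-out can beat the grid spacing, the sketch below models the mosaic as a one-dimensional pixel array and estimates each line's position as its intensity-weighted centroid (its "light center"). The grid size, blur width, and line positions are illustrative assumptions, not physiological values.

```python
import numpy as np

def sample_line(position, n_pixels=16, blur=1.2):
    """Image of a thin line at `position` (in pixel units), blurred by
    a Gaussian point-spread function and sampled on the pixel grid."""
    x = np.arange(n_pixels)
    return np.exp(-0.5 * ((x - position) / blur) ** 2)

def centroid(intensities):
    """Intensity-weighted mean position: its precision is limited by
    noise and contrast, not by the pixel spacing."""
    x = np.arange(len(intensities))
    return np.sum(x * intensities) / np.sum(intensities)

a = sample_line(7.30)   # first short line
b = sample_line(7.42)   # second line, offset by 0.12 pixel
print(centroid(b) - centroid(a))  # ~0.12, well below one pixel
```

With noise-free samples the recovered offset matches the true 0.12-pixel displacement; in practice precision is set by signal quality rather than by the pixel pitch.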

The hyperacuity apparatus draws on signals from a range of individual receptor cells, usually at more than one location in the stimulus space, and this has implications for performance in these tasks.

Low contrast, close proximity of neighboring stimuli (crowding), and temporal asynchrony of pattern components are examples of factors that reduce performance.

While none of them has gained empirical support so far, the plausibility of the former had been critically questioned because of the discrete nature of neural firing.[7]

The optics of the human eye are extremely simple, the main imaging component being a single-element lens that can change its strength under muscular control.

Overington and his team sought, and found, a way to approximate a hexagonal matrix while at the same time retaining a conventional Cartesian layout for processing.
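
One common way to realize such a scheme, offered here only as a hedged sketch and not as Overington's actual design, is to stagger alternate rows of an ordinary array by half a sample and compress the row pitch by √3/2: the sample geometry becomes hexagonal while indexing stays Cartesian.

```python
import numpy as np

def hex_sample_positions(rows, cols, pitch=1.0):
    """Return (y, x) centre coordinates of a hexagonal lattice stored
    in a conventional rows x cols Cartesian array."""
    y = np.arange(rows)[:, None] * pitch * np.sqrt(3) / 2   # compressed row pitch
    x = np.arange(cols)[None, :] * pitch                    # column spacing
    x = x + (np.arange(rows)[:, None] % 2) * (pitch / 2)    # stagger odd rows
    return np.broadcast_to(y, (rows, cols)), x

ys, xs = hex_sample_positions(4, 5)
# each interior sample now has six nearest neighbours at equal distance,
# yet it is still addressed by a plain array index (row, col)
```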

Although there are many and varied spatial interactions evident in the early neural networks of the human visual system, only a few are of great importance in high-fidelity information sensing.

The general finding from primate receptive field studies is that any such local group yields no output for uniform input illumination.
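
That defining property is easy to state computationally: the weights of the local group sum to zero, so a uniform input produces no response while an edge does. The sketch below uses a textbook 3 × 3 Laplacian-style kernel as a stand-in for a measured primate receptive field.

```python
import numpy as np
from scipy.signal import convolve2d

# Centre-surround "local group" whose weights sum to zero
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]]) / 8.0
assert kernel.sum() == 0          # hence zero response to uniform light

uniform = np.full((5, 5), 100.0)
edge = np.tile([0.0, 0.0, 100.0, 100.0, 100.0], (5, 1))

print(convolve2d(uniform, kernel, mode='valid'))  # all zeros
print(convolve2d(edge, kernel, mode='valid'))     # responds only at the edge
```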

Very useful further evidence about the processes going on in this area comes from the electron-microscopy studies of Kolb.[11] These clearly show the neural structures that lead to difference signals being transmitted further.

Such a separation of positive and negative components is fully compatible with retinal physiology and is one possible function for the known pair of midget bipolar channels for each receptor.
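
A minimal sketch of that suggested division of labor: the signed difference signal is split by half-wave rectification into an ON (positive) and an OFF (negative) channel, since a firing rate cannot go negative. The values below are purely illustrative.

```python
import numpy as np

# Illustrative signed difference signal from a local group
difference_signal = np.array([0.0, 2.5, -1.0, 0.3, -3.2])

on_channel = np.maximum(difference_signal, 0.0)    # carries the positive part
off_channel = np.maximum(-difference_signal, 0.0)  # carries the negative part

# the original signed signal remains recoverable downstream
assert np.allclose(on_channel - off_channel, difference_signal)
```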

This 30-degree separation of orientations agrees with the angular spacing of such units that John Canny deduced to be desirable from a mathematical approach.[16]

In the absence of specific details, it seemed that a roughly optimal compromise between computational efficiency and simplicity on the one hand and adequate orientational tuning on the other would be an operator of extent 5 × 1 pixels.
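
The sketch below, an assumption-laden illustration rather than the cited design, builds six orientation channels spaced 30 degrees apart, each taking a directional first difference along a 5 × 1 line of sample points.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oriented_response(image, y, x, theta_deg, length=5):
    """Directional first difference along a 5-sample line through (y, x)
    at orientation theta_deg, using bilinear interpolation."""
    t = np.linspace(-(length - 1) / 2, (length - 1) / 2, length)
    th = np.deg2rad(theta_deg)
    ys, xs = y + t * np.sin(th), x + t * np.cos(th)
    samples = map_coordinates(image, [ys, xs], order=1)
    return samples[-1] - samples[0]   # net change along the line

image = np.tile(np.arange(16, dtype=float), (16, 1))  # horizontal ramp
for theta in range(0, 180, 30):      # 0, 30, ..., 150 degrees
    # response is largest along the gradient and vanishes across it,
    # demonstrating the orientational tuning of each channel
    print(theta, oriented_response(image, 8, 8, theta))
```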

Furthermore, the interplay of first- and second-difference data provides a very powerful means of analyzing motion, stereo, color, texture, and other scene properties.[19]
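
As one hedged example of this interplay: the first difference can flag the pixel interval containing an edge, and linearly interpolating the zero crossing of the second difference then localizes that edge to sub-pixel precision. The blurred-edge profile below is an illustrative assumption.

```python
import numpy as np

x = np.arange(12, dtype=float)
profile = 1.0 / (1.0 + np.exp(-(x - 5.3) / 0.8))   # edge centred at 5.3

d1 = np.diff(profile)        # first difference: peaks near the edge
d2 = np.diff(profile, n=2)   # second difference: changes sign there

i = int(np.argmax(d1))       # coarse location, good to about one pixel
# d2[i-1] >= 0 >= d2[i]; interpolate the zero crossing between them
subpixel = i + d2[i - 1] / (d2[i - 1] - d2[i])
print(i + 0.5, subpixel)     # coarse ~5.5 versus refined ~5.33
```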

Hyperacuity has been identified in many animal species, for example in the detection of prey by the electric fish,[20] in echolocation in the bat,[21] and in the ability of rodents to localize objects based on mechanical deformations of their whiskers.[22]

In clinical vision tests,[23] hyperacuity has a special place because its processing is at the interfaces of the eye's optics, retinal functions, activation of the primary visual cortex, and the perceptual apparatus.

Figure: Ewald Hering's model, published in 1899, of how a Vernier acuity stimulus is coded by a receptor array. Receptors marked c signal a different position code along the horizontal direction from either the position-a code or the position-b code.[1]

Figure: Acuity/resolution versus hyperacuity/localization. Top: two stars imaged on the mosaic of retinal receptor cells can be resolved only if their separation leaves at least one intervening mosaic element with a detectably different intensity; otherwise the pattern is indistinguishable from a single elongated star. Bottom: two targets can be localized relative to each other to values transcending the spacing of the mosaic units; the hyperacuity mechanism achieves this by identifying, with sub-pixel precision, the light center of each target across all the pixels it covers.