[7] As the distractors represent the differing individual features of the target more equally amongst themselves (the distractor-ratio effect), reaction time (RT) increases and accuracy decreases.
[12] In many cases, top-down processing affects conjunction search by eliminating stimuli that are incongruent with one's prior knowledge of the target description, which ultimately allows for more efficient identification of the target.
[21][22] It is also possible to measure the role of attention within visual search experiments by calculating the slope of reaction time as a function of the number of distractors present.
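This slope can be estimated with an ordinary linear fit of mean RT against display set size. The sketch below is a minimal illustration in Python; the data values are hypothetical and not drawn from any cited study.

```python
import numpy as np

# Hypothetical mean reaction times (ms) for displays of each set size
# (target plus distractors); illustrative values only.
set_sizes = np.array([4, 8, 16, 32])
mean_rts = np.array([520, 560, 640, 800])

# Fit RT = slope * set_size + intercept. The slope (ms/item) indexes
# search efficiency: slopes near zero suggest parallel "pop-out" search,
# while steeper slopes suggest serial, attention-demanding search.
slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
print(f"search slope: {slope:.1f} ms/item, baseline RT: {intercept:.0f} ms")
```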
[35] Experiments show that these features include luminance, colour, orientation, motion direction, and velocity, as well as some simple aspects of form.
Evidence that attention, and thus later visual processing, is needed to integrate two or more features of the same object is shown by the occurrence of illusory conjunctions, in which features do not combine correctly. For example, if a display of a green X and a red O is flashed on a screen so briefly that the later visual process of a serial search with focal attention cannot occur, the observer may report seeing a red X and a green O.
[37] Preattentive processes are those performed in the first stage of the FIT model, in which the simplest features of the object are analysed, such as colour, size, and arrangement.
Chan and Hayward[37] have conducted multiple experiments supporting this idea by demonstrating the role of dimensions in visual search.
In exploring whether focal attention can reduce the costs caused by dimension-switching in visual search, they found that their results supported the mechanisms of feature integration theory over other search-based approaches.
In the guided search model by Jeremy Wolfe,[39] information from top-down and bottom-up processing of the stimulus is used to create a ranking of items in order of their attentional priority.
During efficient search, reaction time is largely independent of the number of distractors; in contrast, during inefficient search, the reaction time to identify the target increases linearly with the number of distractor items present.
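The ranking idea behind guided search can be illustrated with a rough sketch (this is not Wolfe's actual implementation; the weights, item count, and target position below are illustrative assumptions). Each item's activation is a weighted sum of its bottom-up salience and its top-down match to the target description, and attention visits items in descending order of activation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical display of 8 items: bottom-up salience (local feature
# contrast) and top-down match to the target description, scaled 0-1.
bottom_up = rng.random(8)
top_down = rng.random(8)

# Combined activation map; the weights are illustrative, not model parameters.
w_bu, w_td = 0.4, 0.6
activation = w_bu * bottom_up + w_td * top_down

# Attention inspects items in descending order of activation until the
# target (assumed here to be item 5) is reached; the visit rank is a
# stand-in for reaction time.
order = np.argsort(activation)[::-1]
target_index = 5
rank = int(np.where(order == target_index)[0][0]) + 1
print(f"target inspected at position {rank} of {len(order)}")
```

Under this scheme, strong top-down guidance pushes the target toward the front of the inspection order, producing efficient search; weak guidance approximates a random serial scan, producing the linear set-size effect described above.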
[40][41][42][43] Ashbridge, Walsh, and Cowey (1997)[44] demonstrated that conjunction search was impaired when transcranial magnetic stimulation (TMS) was applied to the right parietal cortex 100 milliseconds after stimulus onset.
The authors further found that, for conjunction search, the superior parietal lobe and the right angular gyrus are activated bilaterally during fMRI experiments.
In contrast, Leonards, Sunaert, Van Hecke and Orban (2000)[46] identified that significant activation is seen during fMRI experiments in the superior frontal sulcus primarily for conjunction search.
This research hypothesises that activation in this region may in fact reflect working memory for holding and maintaining stimulus information in order to identify the target.
[48][49][50] Moreover, single-cell recording studies in monkeys have found that the superior colliculus is involved in both the selection of the target during visual search and the initiation of movements.
[52] Conversely, Bender and Butter (1987)[53] found no evidence of pulvinar nucleus involvement in visual search tasks when testing monkeys.
It has been shown that during visual exploration of complex natural scenes, both humans and nonhuman primates make highly stereotyped eye movements.
Research has suggested that effective visual search may have developed as a necessary skill for survival, where being adept at detecting threats and identifying food was essential.
[62] Debate is ongoing as to whether faces and objects are detected and processed by different systems, and whether each has category-specific regions for recognition and identification.
[73] This could be due to evolutionary development, as the ability to identify faces that appear threatening to the individual or group is deemed critical to survival.
[74] More recently, it was found that faces can be efficiently detected in a visual search paradigm if the distractors are non-face objects;[75][76][77] however, it is debated whether this apparent 'pop-out' effect is driven by a high-level mechanism or by low-level confounding features.
Event-related potentials (ERPs) showed longer latencies and lower amplitudes in older subjects than in young adults at the P3 component, which is related to activity of the parietal lobes.
[95] An experiment conducted by Tales et al. (2000)[93] investigated the ability of patients with AD to perform various types of visual search tasks.
Studies have consistently shown that autistic individuals perform better, with lower reaction times, in feature and conjunctive visual search tasks than matched controls without autism.
[100] Second, autistic individuals show superior performance in discrimination tasks between similar stimuli and therefore may have an enhanced ability to differentiate between items in the visual search display.
In the past decade, there has been extensive research into how companies can maximise sales using psychological techniques derived from visual search to determine how products should be positioned on shelves.
Pieters and Warlop (1999)[103] used eye-tracking devices to assess the saccades and fixations of consumers while they visually searched an array of products on a supermarket shelf.
Their research suggests that consumers specifically direct their attention to products with eye-catching properties such as shape, colour or brand name.
The study suggests that consumers primarily use efficient search, concluding that they do not focus on items that share very similar features.
It was found that for exploratory search, individuals would pay less attention to products that were placed in visually competitive areas such as the middle of the shelf at an optimal viewing height.