Eye trackers are used in research on the visual system, in psychology, in psycholinguistics, marketing, as an input device for human-computer interaction, and in product design.
In addition, eye trackers are increasingly being used for assistive and rehabilitative applications such as controlling wheelchairs, robotic arms, and prostheses.
Louis Émile Javal observed in 1879 that reading does not involve a smooth sweeping of the eyes along the text, as previously assumed, but a series of short stops (called fixations) and quick saccades.
The first non-intrusive eye trackers were built by Guy Thomas Buswell in Chicago, using beams of light reflected from the eye and recorded on film.
If this hypothesis is correct, then when a subject looks at a word or object, he or she also thinks about it (processes it cognitively) for exactly as long as the recorded fixation.
However, gaze-contingent techniques offer a way to disentangle overt and covert attention, that is, to differentiate what is fixated from what is actually processed.
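As an illustration of a gaze-contingent technique, the sketch below implements a minimal moving-window manipulation: everything outside a window centred on the current gaze sample is masked, so that only what is actually fixated remains visible. The function name, window radius and mask value are illustrative; a real experiment would update the window on every gaze sample delivered by the tracker.

```python
import numpy as np

def moving_window(stimulus: np.ndarray, gaze_xy: tuple[int, int],
                  radius: int = 60, mask_value: int = 128) -> np.ndarray:
    """Return a copy of the stimulus in which everything outside a circular
    window centred on the current gaze position is replaced by a mask value
    (the classic gaze-contingent moving-window paradigm)."""
    h, w = stimulus.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    gx, gy = gaze_xy
    outside = (xs - gx) ** 2 + (ys - gy) ** 2 > radius ** 2
    masked = stimulus.copy()
    masked[outside] = mask_value
    return masked

# Example: a synthetic grey-scale "page" with a gaze sample at (200, 120).
page = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
frame = moving_window(page, (200, 120))
```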
Such analyses can include how users react to drop-down menus or where they focus their attention on a website, so that a developer knows where to place an advertisement.
Video-based eye trackers typically use the corneal reflection (the first Purkinje image) and the center of the pupil as features to track over time.
It is a very lightweight approach that, in contrast to current video-based eye trackers, requires little computational power, works under different lighting conditions, and can be implemented as an embedded, self-contained wearable system.
[37] Bright-pupil tracking creates greater iris/pupil contrast, allowing more robust eye-tracking with all iris pigmentation, and greatly reduces interference caused by eyelashes and other obscuring features.
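A rough sketch of how these features might be extracted is given below, assuming a pair of infrared frames taken with on-axis (bright-pupil) and off-axis (dark-pupil) illumination. The thresholds, kernel size and single-glint assumption are illustrative rather than taken from any particular tracker.

```python
import cv2
import numpy as np

def pupil_glint_vector(bright: np.ndarray, dark: np.ndarray) -> tuple[float, float]:
    """Estimate the pupil-centre-to-corneal-reflection vector from a pair of
    IR eye images taken with on-axis (bright-pupil) and off-axis (dark-pupil)
    illumination. The pupil is the main region that differs between the two
    frames; the corneal reflection (first Purkinje image) is the brightest spot."""
    # Pupil: difference image -> threshold -> largest contour centroid.
    diff = cv2.subtract(bright, dark)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    m = cv2.moments(max(contours, key=cv2.contourArea))
    px, py = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Corneal reflection: brightest pixel in the (blurred) dark-pupil frame.
    _, _, _, (gx, gy) = cv2.minMaxLoc(cv2.GaussianBlur(dark, (5, 5), 0))

    # The pupil-glint difference vector is what calibration maps to screen
    # coordinates; it is largely insensitive to small head movements.
    return gx - px, gy - py
```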
Eye tracking in human–computer interaction (HCI) typically investigates the scanpath for usability purposes, or as a method of input in gaze-contingent displays, also known as gaze-based interfaces.
Graphical presentation is rarely the basis of research results, since graphics are limited in terms of what can be analysed; research relying on eye tracking usually requires quantitative measures of the eye-movement events and their parameters. The following visualisations are the most commonly used:

Animated representations of a point on the interface: used when visual behaviour is examined individually, indicating where the user focused their gaze at each moment, complemented with a small path that indicates the previous saccade movements, as seen in the image.

Heat maps: an alternative, static representation, used mainly for the aggregated analysis of visual exploration patterns in a group of users.
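A heat map of this kind can be produced by accumulating fixation durations on a pixel grid and blurring the result. The sketch below does this with NumPy and SciPy on synthetic data; the screen size, kernel width and duration weighting are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
import matplotlib.pyplot as plt

def fixation_heatmap(fix_x, fix_y, durations, screen=(1920, 1080), sigma=40):
    """Aggregate fixations from one or many users into a duration-weighted
    heat map: accumulate fixation durations per pixel, then blur."""
    w, h = screen
    grid = np.zeros((h, w))
    for x, y, d in zip(fix_x, fix_y, durations):
        if 0 <= int(x) < w and 0 <= int(y) < h:
            grid[int(y), int(x)] += d
    return gaussian_filter(grid, sigma=sigma)

# Example with synthetic fixations (positions in px, durations in ms).
rng = np.random.default_rng(0)
heat = fixation_heatmap(rng.normal(960, 200, 500),
                        rng.normal(540, 150, 500),
                        rng.uniform(100, 400, 500))
plt.imshow(heat, cmap="hot")
plt.axis("off")
plt.show()
```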
[46] The relative position of eye and head, even with constant gaze direction, influences neuronal activity in higher visual areas.
An accurate and reliable calibration is essential for obtaining valid and repeatable eye movement data, and this can be a significant challenge for non-verbal subjects or those who have unstable gaze.
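Calibration is typically performed by having the subject fixate a small set of known on-screen targets and fitting a mapping from the tracker's raw features to screen coordinates. The sketch below fits a second-order polynomial mapping from pupil-glint vectors to screen positions by least squares; the feature set and function names are illustrative, and commercial trackers use their own (often proprietary) models.

```python
import numpy as np

def fit_calibration(pg_vectors: np.ndarray, screen_points: np.ndarray) -> np.ndarray:
    """Fit a second-order polynomial mapping from pupil-glint vectors (N x 2,
    recorded while the subject fixates known calibration targets) to screen
    coordinates (N x 2), by least squares. Needs at least 6 targets."""
    vx, vy = pg_vectors[:, 0], pg_vectors[:, 1]
    design = np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])
    coeffs, *_ = np.linalg.lstsq(design, screen_points, rcond=None)
    return coeffs  # 6 x 2 coefficient matrix

def apply_calibration(coeffs: np.ndarray, pg_vector) -> np.ndarray:
    """Map a single pupil-glint vector to an estimated (x, y) screen position."""
    vx, vy = pg_vector
    feats = np.array([1.0, vx, vy, vx * vy, vx**2, vy**2])
    return feats @ coeffs
```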
[48] Interpretation of the results still requires some level of expertise, however, because a misaligned or poorly calibrated system can produce wildly erroneous data.
[54] Specific applications include the tracking of eye movements in language reading, music reading, human activity recognition, the perception of advertising, the playing of sports, distraction detection and cognitive-load estimation in drivers and pilots, and the operation of computers by people with severe motor impairment.
[23] In the field of virtual reality, eye tracking is used in head-mounted displays for a variety of purposes, including reducing processing load by rendering full graphical detail only in the area within the user's gaze (foveated rendering).
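A minimal sketch of the idea, assuming gaze and pixel positions in screen pixels and an illustrative pixels-per-degree conversion, is to choose a level of detail from the angular distance between a pixel (or tile) and the gaze point:

```python
import math

def detail_level(pixel_xy, gaze_xy, ppd=35.0, thresholds_deg=(5.0, 15.0)):
    """Pick a level of detail for a pixel (or tile) from its angular distance
    to the gaze point: full detail in the fovea, progressively coarser shading
    in the periphery. `ppd` (pixels per degree) and the eccentricity
    thresholds are illustrative values, not taken from any particular headset."""
    dx = pixel_xy[0] - gaze_xy[0]
    dy = pixel_xy[1] - gaze_xy[1]
    eccentricity_deg = math.hypot(dx, dy) / ppd
    if eccentricity_deg < thresholds_deg[0]:
        return 0   # full resolution
    if eccentricity_deg < thresholds_deg[1]:
        return 1   # half resolution
    return 2       # quarter resolution
```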
[55] In recent years, the increased sophistication and accessibility of eye-tracking technologies have generated a great deal of interest in the commercial sector.
By examining fixations, saccades, pupil dilation, blinks and a variety of other behaviors, researchers can determine a great deal about the effectiveness of a given medium or product.
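Fixation and saccade events are usually not delivered directly by the hardware but detected from raw gaze samples. A common approach is velocity-threshold identification (I-VT), sketched below; the 30°/s threshold and the pixels-per-degree factor are illustrative, setup-dependent values.

```python
import numpy as np

def ivt_classify(x, y, t, velocity_threshold=30.0, ppd=35.0):
    """Velocity-threshold (I-VT) classification of raw gaze samples into
    fixation/saccade labels. Coordinates are in pixels, timestamps in
    seconds; `ppd` converts pixels to visual degrees."""
    x, y, t = map(np.asarray, (x, y, t))
    dist_deg = np.hypot(np.diff(x), np.diff(y)) / ppd
    velocity = dist_deg / np.diff(t)                 # deg/s between samples
    labels = np.where(velocity < velocity_threshold, "fixation", "saccade")
    return np.append(labels, labels[-1])             # pad to original length
```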
Specifically, eye-tracking can be used to assess search efficiency, branding, online advertisements, navigation usability, overall design and many other site components.
This information allows researchers to assess in great detail how often a sample of consumers fixates on the target logo, product or advertisement.
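In practice this is often quantified with areas of interest (AOIs): rectangles drawn around the logo, product or advertisement, against which each fixation is tested. A minimal sketch, with hypothetical data structures, follows.

```python
def aoi_metrics(fixations, aoi):
    """Count fixations and total dwell time inside a rectangular area of
    interest (AOI), e.g. a logo or ad placement. `fixations` is a list of
    (x, y, duration_ms) tuples; `aoi` is (left, top, right, bottom) in px."""
    left, top, right, bottom = aoi
    hits = [(x, y, d) for x, y, d in fixations
            if left <= x <= right and top <= y <= bottom]
    return {
        "fixation_count": len(hits),
        "dwell_time_ms": sum(d for _, _, d in hits),
        "hit_rate": len(hits) / len(fixations) if fixations else 0.0,
    }
```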
[24] One study used deep learning to examine images of drivers and determine their level of drowsiness by "classify[ing] eye states."
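The study's own model and data are not reproduced here; the sketch below only illustrates the general shape of such an eye-state classifier, using a deliberately small PyTorch CNN with illustrative layer sizes. Drowsiness would then be inferred from how long the eyes remain classified as closed (for example with a PERCLOS-style measure).

```python
import torch
from torch import nn

class EyeStateNet(nn.Module):
    """Tiny CNN that labels a cropped grey-scale eye image as open (0) or
    closed (1). Architecture and input size (24x24) are illustrative only."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 6 * 6, 2)   # for 24x24 input crops

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = EyeStateNet()
logits = model(torch.randn(8, 1, 24, 24))   # batch of 8 eye crops
states = logits.argmax(dim=1)               # 0 = open, 1 = closed
```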
[26] People with severe motor impairment can use eye tracking to interact with computers,[59] as it is faster than single-switch scanning techniques and intuitive to operate.
[60][61] Motor impairment caused by cerebral palsy[62] or amyotrophic lateral sclerosis often affects speech, and users with severe speech and motor impairment (SSMI) use a type of software known as an augmentative and alternative communication (AAC) aid,[63] which displays icons, words and letters on screen[64] and uses text-to-speech software to generate spoken output.
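Gaze-controlled AAC keyboards commonly use dwell-time selection: an icon or letter is triggered once the gaze has rested on it for longer than a threshold. A minimal sketch of that logic, with an illustrative 800 ms dwell threshold and hypothetical target identifiers, is shown below.

```python
class DwellSelector:
    """Trigger a selection when gaze stays on the same on-screen target
    (icon, letter, word) for longer than a dwell threshold."""
    def __init__(self, dwell_ms: float = 800.0):
        self.dwell_ms = dwell_ms
        self.current = None
        self.since = None

    def update(self, target_id, now_ms: float):
        """Feed the target currently under the gaze point (or None).
        Returns the target id once per completed dwell, else None."""
        if target_id != self.current:
            self.current, self.since = target_id, now_ms
            return None
        if target_id is not None and now_ms - self.since >= self.dwell_ms:
            self.since = float("inf")   # fire once, wait for gaze to move away
            return target_id
        return None

# Example: feed it hit-tested gaze samples with timestamps in milliseconds.
selector = DwellSelector()
for t_ms, target in [(0, "A"), (400, "A"), (900, "A"), (950, "B")]:
    chosen = selector.update(target, t_ms)
    if chosen:
        print("selected:", chosen)      # prints "selected: A" at 900 ms
```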
[73] Research on interaction with multi-functional displays in a simulator environment showed that eye tracking can significantly improve response times and reduce perceived cognitive load compared with existing systems.
Recent studies have investigated eye-gaze-controlled interaction with head-up displays (HUDs), which eliminates eyes-off-road distraction.
[93] In addition, cartographers have employed eye tracking to investigate various factors affecting map reading, including attributes such as color or symbol density.
[107][108] With the aid of machine learning techniques, eye tracking data may indirectly reveal information about a user's ethnicity, personality traits, fears, emotions, interests, skills, and physical and mental health condition.
Eye activities are not always under volitional control, e.g., "stimulus-driven glances, pupil dilation, ocular tremor, and spontaneous blinks mostly occur without conscious effort, similar to digestion and breathing".