If, however, the pilot devotes time to interpreting the weather gauge and manipulating the aircraft accordingly, only to discover that the flight turbulence has not changed, the pilot may be inclined to ignore future error recommendations conveyed by the gauge, a form of automation complacency that leads to disuse.
Conversely, omission errors occur when automated devices fail to detect or indicate problems and the user does not notice because they are not properly monitoring the system.[12] In such cases the human decision-maker misses the automation failure, whether through low vigilance or overtrust in the system.
Automation bias also renders users more likely to conclude their assessment of a situation too hastily after being prompted by an automated aid to take a specific course of action.[13] One study, published in the Journal of the American Medical Informatics Association, found that the position and prominence of advice on a screen can affect the likelihood of automation bias, with prominently displayed advice, correct or not, being more likely to be followed; another study, however, seemed to discount the importance of this factor.[13] According to another study, a greater amount of on-screen detail can make users less "conservative" and thus increase the likelihood of automation bias.[13] One study showed that making individuals accountable for their performance or the accuracy of their decisions reduced automation bias.[5] "The availability of automated decision aids," states one study by Linda Skitka, "can sometimes feed into the general human tendency to travel the road of least cognitive effort."[5] One study also found that when users are made aware of the reasoning process employed by a decision support system, they are likely to adjust their reliance accordingly, thus reducing automation bias.[5] Training that focuses on automation bias in aviation has succeeded in reducing omission errors by student pilots.
A pilot who comes to perceive the gauge as reliable, however, may divert attention away from it; in so doing, the pilot becomes a complacent monitor, thereby running the risk of missing critical information conveyed by the automated gauges.
Distrusting the automation and checking it closely, by contrast, creates scenarios in which the operator may expend unnecessary cognitive resources when the automation is in fact reliable, but also increases the odds of identifying potential errors in the weather gauges should they occur.
To calibrate the pilot's perception of reliability, automation should be designed to maintain workload at appropriate levels while also ensuring the operator remains engaged with monitoring tasks.
One 1998 study found that pilots with approximately 440 hours of flight experience detected more automation failures than did nonpilots, although both groups showed complacency effects.[12] In a 2005 study, experienced air-traffic controllers used a high-fidelity simulation of an ATC (Free Flight) scenario that involved the detection of conflicts among "self-separating" aircraft.
When the device failed near the end of the simulation, considerably fewer controllers detected the conflict than when the situation was handled manually.
Yet while clinical decision support systems (CDSS), when used properly, bring about an overall improvement in performance, they also cause errors that may go unrecognized because of automation bias.
Given the potentially serious consequences of automation bias in the health-care field, it is especially important to be aware of this problem when it occurs in clinical settings.
One study found more automation bias among older users, but it was noted that this could be a result not of age but of experience.[13] A 2005 study found that when primary-care physicians used electronic sources such as PubMed, Medline, and Google, there was a "small to medium" increase in correct answers, while in an equally small percentage of instances the physicians were misled by their use of those sources and changed correct answers to incorrect ones.[12] Automation bias can be a crucial factor in the use of intelligent decision support systems for military command-and-control operations.
One 2004 study found that automation bias effects have contributed to a number of fatal military decisions, including friendly-fire killings during the Iraq War.
Researchers have sought to determine the proper level of automation for decision support systems in this field.
This problem is discussed, for example, in the National Transportation Safety Board's report on the fatal collision between an Uber test vehicle and the pedestrian Elaine Herzberg.[21] Excessively checking and questioning automated assistance can increase time pressure and task complexity, thus reducing its benefit, so some automated decision support systems balance positive and negative effects rather than attempting to eliminate negative effects.