The author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."[4]
McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked."
This underappreciation has been noted in fields as diverse as computer chess,[10] marketing,[11] agricultural automation,[8] hospitality[12] and optical character recognition.
"These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world."[11]
Marvin Minsky writes: "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name."[4]
AI tools to advise humans regarding weapons deployment were developed by both sides and saw very limited use during that time.[4] They believe this constantly shifting failure to recognise AI continues to undermine human recognition of security threats in the present day.[citation needed]
Patty Tascarella wrote in 2006: "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding."[25]
When IBM's chess-playing computer Deep Blue defeated Garry Kasparov in 1997, public perception of chess playing shifted from a difficult mental task to a routine operation.[10] Notably, John McCarthy, the AI pioneer who coined the term "artificial intelligence", was disappointed by Deep Blue.