The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association for Artificial Intelligence").
There were two major "winters", approximately 1974–1980 and 1987–2000,[3] as well as several smaller episodes. Enthusiasm and optimism about AI have generally increased since their low point in the early 1990s.
Beginning around 2012, interest in artificial intelligence (and especially the sub-field of machine learning) from the research and corporate communities led to a dramatic increase in funding and investment, culminating in the current (as of 2025) AI boom.
Natural language processing (NLP) research has its roots in the early 1930s, beginning with work on machine translation (MT).
[4] However, significant advances and applications began to emerge after the publication of Warren Weaver's influential 1949 memorandum on machine translation, later reprinted in Machine Translation of Languages: Fourteen Essays.
Headlines about the 1954 Georgetown–IBM experiment proclaimed phrases like "The bilingual machine," "Robot brain translates Russian into King's English,"[7] and "Polyglot brainchild."[8] However, the actual demonstration involved the translation of a curated set of only 49 Russian sentences into English, with the machine's vocabulary limited to just 250 words.
[6] To put things into perspective, a 2006 study by Paul Nation found that humans need a vocabulary of around 8,000 to 9,000 word families to comprehend written texts with 98% accuracy.
By 1964, the National Research Council had become concerned about the lack of progress and formed the Automatic Language Processing Advisory Committee (ALPAC) to look into the problem.
The "winter" of neural network approach came to an end in the middle 1980s, when the work of John Hopfield, David Rumelhart and others revived large scale interest.
[15] In 1973, Professor Sir James Lighthill was asked by the UK Parliament to evaluate the state of AI research in the United Kingdom.
The debate "The general purpose robot is a mirage" from the Royal Institution was Lighthill versus the team of Donald Michie, John McCarthy and Richard Gregory.
J. C. R. Licklider, the founding director of DARPA's computing division, believed in "funding people, not projects",[23] and he and several successors allowed AI's leaders (such as Marvin Minsky, John McCarthy, Herbert A. Simon and Allen Newell) to spend the money almost any way they liked.
After the passage of the Mansfield Amendment in 1969, however, DARPA's money was directed at specific projects with identifiable goals, such as autonomous tanks and battle management systems.
[28] As described in one account,[29] in 1971 the Defense Advanced Research Projects Agency (DARPA) began an ambitious five-year experiment in speech understanding.
[32] In a 1976 article in Proceedings of the IEEE, Reddy reviewed the progress in speech understanding at the end of the DARPA project.
One in every 11 ACM members was in SIGART.
In the 1980s, a form of AI program called an "expert system" was adopted by corporations around the world.
The first commercial expert system was XCON, developed at Carnegie Mellon for Digital Equipment Corporation, and it was an enormous success: it was estimated to have saved the company 40 million dollars over just six years of operation.
[37] Later desktop computers built by Apple and IBM would also offer a simpler and more popular architecture on which to run LISP applications.
The few remaining expert system shell companies were eventually forced to downsize and search for new markets and software paradigms, like case-based reasoning or universal database access.
The objectives of Japan's Fifth Generation project were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.
According to HP Newquist in The Brain Makers, "On June 1, 1992, The Fifth Generation Project ended not with a successful roar, but with a whimper."
[43][44] In 1983, in response to the Fifth Generation project, DARPA again began to fund AI research through the Strategic Computing Initiative.
As originally proposed, the project would begin with practical, achievable goals, which even included artificial general intelligence as a long-term objective.
[45] Jack Schwartz, who ascended to the leadership of IPTO in 1987, dismissed expert systems as "clever programming" and cut funding to AI "deeply and brutally", "eviscerating" SCI.
A few projects survived the funding cuts, including a pilot's assistant and an autonomous land vehicle (which were never delivered), and the DART battle management system, which (as noted above) was successful.
[49][50] In the late 1990s and early 21st century, AI technology became widely used as an element of larger systems,[51][52] but the field is rarely credited for these successes.
A turning point came in 2012, when AlexNet (a deep learning network) won the ImageNet Large Scale Visual Recognition Challenge with half as many errors as the second-place winner.
[59] The 2022 release of OpenAI's AI chatbot ChatGPT, which as of January 2023 had over 100 million users,[60] has reinvigorated the discussion about artificial intelligence and its effects on the world.