The Georgetown experiment, which involved the successful fully automatic translation of more than sixty Russian sentences into English in 1954, was one of the earliest recorded projects.[4]
The experiment's success ushered in an era of significant funding for machine translation research in the United States.
Interest grew in statistical models for machine translation, which became more common in the 1980s as available computational power increased and became less expensive.
Although no autonomous system provides "fully automatic high quality translation of unrestricted text,"[5][6][7] many programs are now available that can produce useful output within strict constraints.[9]
In the mid-1930s the first patents for "translating machines" were applied for by Georges Artsrouni, for an automatic bilingual dictionary using paper tape.[3]
Early systems used large bilingual dictionaries and hand-coded rules to fix the word order in the final output, an approach that was eventually considered too restrictive given the linguistic developments of the time.
At the time, such semantic ambiguity could only be resolved by writing source texts for machine translation in a controlled language that uses a vocabulary in which each word has exactly one meaning.
The ALPAC report recommended, however, that tools be developed to aid translators – automatic dictionaries, for example – and that some research in computational linguistics should continue to be supported.
In the US, the main exceptions to the ensuing decline in machine translation research were the founders of SYSTRAN (Peter Toma) and Logos (Bernard Scott), who established their companies in 1968 and 1970 respectively and served the US Department of Defense.
In 1970, the SYSTRAN system was installed for the United States Air Force, and in 1976 it was adopted by the Commission of the European Communities.[13]
While research in the 1960s concentrated on limited language pairs and input, demand in the 1970s was for low-cost systems that could translate a range of technical and commercial documents.[citation needed]
As a result of the improved availability of microcomputers, a market emerged for lower-end machine translation systems.
With its fifth-generation computer project, Japan intended to leapfrog its competition in computer hardware and software, and one effort that many large Japanese electronics firms (Fujitsu, Toshiba, NTT, Brother, Catena, Matsushita, Mitsubishi, Sharp, Sanyo, Hitachi, NEC, Panasonic, Kodensha, Nova, Oki) found themselves involved in was creating software for translating into and from English.[citation needed]
Research during the 1980s typically relied on translation through some variety of intermediary linguistic representation involving morphological, syntactic, and semantic analysis.
A defining feature of the statistical approaches, by contrast, was the neglect of syntactic and semantic rules and a reliance instead on the manipulation of large text corpora.[14][15]
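As an illustrative sketch of this statistical formulation, translation is commonly framed as a noisy-channel problem: given a source-language sentence $f$, the system searches for the target-language sentence $\hat{e}$ that maximizes

$$\hat{e} = \arg\max_{e} P(e \mid f) = \arg\max_{e} P(f \mid e)\, P(e),$$

where the language model $P(e)$ is estimated from monolingual corpora and the translation model $P(f \mid e)$ from parallel corpora, rather than from hand-written linguistic rules.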
In various research projects in Europe (such as TC-STAR)[17] and in the United States (STR-DUST and the DARPA Global Autonomous Language Exploitation program), solutions for automatically translating parliamentary speeches and broadcast news were developed.
In these scenarios, the domain of the content was no longer limited to any specialized area; rather, the speeches to be translated covered a variety of topics.[citation needed]
Further advances in attention layers, transformer architectures, and backpropagation techniques have made neural machine translation (NMT) systems flexible, and they have been adopted in most machine translation, summarization, and chatbot technologies.
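As a brief sketch of the attention mechanism in its standard scaled dot-product form, a transformer layer computes

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V,$$

where $Q$, $K$, and $V$ are matrices of query, key, and value vectors and $d_k$ is the key dimension; this allows the model to weight every position in the source and target sequences against every other position when producing a translation.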