In 1629, René Descartes proposed a universal language, with equivalent ideas in different tongues sharing one symbol.[4]
The idea of using digital computers for translation of natural languages was proposed as early as 1947 by England's A. D. Booth[5] and, in the same year, by Warren Weaver at the Rockefeller Foundation.
"The memorandum written by Warren Weaver in 1949 is perhaps the single most influential publication in the earliest days of machine translation.
In 1954, a rudimentary translation of English into French was demonstrated on the APEXC machine at Birkbeck College (University of London).
A similar application, also pioneered at Birkbeck College at the time, was reading and composing Braille texts by computer.
A Georgetown University MT research team, led by Professor Michael Zarechnak, followed in 1951 and gave a public demonstration of its Georgetown-IBM experiment system in 1954.[10][11]
David G. Hays "wrote about computer-assisted language processing as early as 1957" and "was project leader on computational linguistics at Rand from 1955 to 1968."
Real progress was much slower, however, and after the ALPAC report (1966), which found that the ten-year-long research had failed to fulfill expectations, funding was greatly reduced.
Beginning in the late 1980s, as computational power increased and became less expensive, more interest was shown in statistical models for machine translation.
By 1998, "for as little as $29.95" one could "buy a program for translating in one direction between English and a major European language of your choice" to run on a PC.[14]
MT on the web started with SYSTRAN offering free translation of small texts (1996) and then providing this via AltaVista Babelfish,[14] which racked up 500,000 requests a day (1997).[14]
Atlantic Magazine wrote in 1998 that "Systran's Babelfish and GlobaLink's Comprende" handled "Don't bank on it" with a "competent performance."[18]
Franz Josef Och (the future head of Translation Development at Google) won DARPA's speed MT competition (2003).[21][22][23]
A deep learning-based approach to MT, neural machine translation, has made rapid progress in recent years.
However, the current consensus is that the so-called human parity achieved is not real, being based wholly on limited domains, language pairs, and certain test benchmarks;[24] that is, it lacks statistical power.
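As an illustrative sketch (a standard formulation in the neural MT literature, not a claim about any particular system discussed here), a neural machine translation model directly estimates the probability of a target sentence e given a source sentence f, factorized one token at a time:

    P(e \mid f) = \prod_{i=1}^{|e|} P(e_i \mid e_{<i}, f)

where each factor is computed by a single neural network trained end to end on parallel text.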
A shallow approach ("ask the user about each ambiguity") would, by Piron's estimate, automate only about 25% of a professional translator's job, leaving the harder 75% still to be done by a human.
Heuristic or statistics-based MT takes input from various sources in the standard form of a language.
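For illustration (this is the standard noisy-channel formulation from the statistical MT literature, not drawn from the sources cited above), such a system translates a source-language sentence f by searching for the target-language sentence e that maximizes

    \hat{e} = \arg\max_{e} P(e \mid f) = \arg\max_{e} P(f \mid e)\, P(e)

where P(f \mid e) is a translation model estimated from bilingual text and P(e) is a language model estimated from monolingual text.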
Due to their portability, such instruments have come to be designated as mobile translation tools, enabling mobile business networking between partners speaking different languages, facilitating foreign language learning, and allowing unaccompanied travel to foreign countries without the intermediation of a human translator.[citation needed]
Within these languages, the focus is on key phrases and quick communication between military members and civilians through the use of mobile phone apps.[52]
The Information Processing Technology Office in DARPA hosted programs like TIDES and the Babylon translator.[53]
The notable rise of social networking on the web in recent years has created yet another niche for the application of machine translation software – in utilities such as Facebook, or in instant messaging clients such as Skype, Google Talk, and MSN Messenger.
Lineage W gained popularity in Japan because of its machine translation features, which allow players from different countries to communicate.[54]
Despite being labelled an unworthy competitor to human translation in 1966 by the Automated Language Processing Advisory Committee put together by the United States government,[55] the quality of machine translation has now improved to such levels that its application in online collaboration and in the medical field is being investigated.[59][60]
Legal language poses a significant challenge to machine translation tools due to its precise nature and atypical use of normal words.
In certain applications, however, e.g., product descriptions written in a controlled language, a dictionary-based machine-translation system has produced satisfactory translations that require no human intervention save for quality inspection.[68]
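A minimal sketch of the dictionary-based idea follows (in Python; the lexicon, function name, and example phrase are hypothetical, and real systems also handle morphology, agreement, and word order):

    # A minimal, hypothetical sketch of dictionary-based translation for a
    # controlled language (illustrative only; real systems also handle
    # morphology, agreement, and word order).
    LEXICON = {  # tiny English-to-French lexicon (hypothetical)
        "red": "rouge",
        "cotton": "coton",
        "shirt": "chemise",
    }

    def translate(sentence: str) -> str:
        """Word-for-word lookup; unknown tokens pass through unchanged."""
        return " ".join(LEXICON.get(tok.lower(), tok) for tok in sentence.split())

    print(translate("red cotton shirt"))  # -> "rouge coton chemise"

Controlled languages make this approach viable by restricting vocabulary and sentence structure so that simple lookup rarely misfires.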
Relying exclusively on unedited machine translation ignores the fact that communication in human language is context-embedded and that it takes a person to comprehend the context of the original text with a reasonable degree of probability.[69]
The late Claude Piron wrote that machine translation, at its best, automates the easier part of a translator's job; the harder and more time-consuming part usually involves doing extensive research to resolve ambiguities in the source text, which the grammatical and lexical exigencies of the target language require to be resolved.[70]
In addition to disambiguation problems, decreased accuracy can occur due to varying levels of training data for machine translation programs.
Two videos uploaded to YouTube in April 2017 show the two Japanese hiragana characters えぐ (e and gu) being repeatedly pasted into Google Translate; the resulting translations quickly degrade into nonsensical phrases such as "DECEARING EGG" and "Deep-sea squeeze trees", which are then read in increasingly absurd voices.[71][72] The full-length version of the video had 6.9 million views as of March 2022.[73]
In the early 2000s, options for machine translation between spoken and signed languages were severely limited.