Dictionary-based machine translation

[1] Dictionary-based machine translation can also be used to speed up manual translation, provided the person carrying it out is fluent in both languages and therefore capable of correcting syntax and grammar.

LMT, introduced around 1990,[2] is a Prolog-based machine-translation system that works on specially made bilingual dictionaries, such as the Collins English-German (CEG), which have been rewritten in an indexed form that computers can read easily.
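The core idea of dictionary-based translation can be illustrated with a minimal word-for-word sketch. The entries and function below are purely illustrative assumptions, not LMT's actual data, dictionary format, or API; note that a real system such as LMT does considerably more, since bare lookup leaves syntax and grammar uncorrected:

```python
# Toy sketch of dictionary-based (word-for-word) machine translation.
# The English-German entries below are illustrative examples, not taken
# from the actual Collins English-German (CEG) dictionary.

BILINGUAL_DICT = {
    "the": "die",
    "cat": "Katze",
    "drinks": "trinkt",
    "milk": "Milch",
}

def translate(sentence: str) -> str:
    """Translate word by word; words missing from the dictionary
    are passed through unchanged."""
    words = sentence.lower().rstrip(".").split()
    return " ".join(BILINGUAL_DICT.get(w, w) for w in words)

print(translate("The cat drinks milk."))  # -> die Katze trinkt Milch
```

Even this toy example shows the approach's limits: article gender, word order, and inflection are ignored, which is why fluent post-editing or a richer system is needed.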

Furthermore, PanEBMT supports multiple incremental operations on its corpus, which facilitates a biased translation used for filtering purposes.

In "Le Ton beau de Marot: In Praise of the Music of Language", Douglas Hofstadter demonstrates what a complex task translation is.

As Kay puts it, "More substantial successes in these enterprises will require a sharper image of the world than any that can be made out simply from the statistics of language use" (Parallel Text Processing: Alignment and Use of Translation Corpora, p. xvii).

Developments in lexical semantics and computational linguistics between 1990 and 1996 allowed natural language processing (NLP) to flourish and gain new capabilities, which benefited machine translation in general.

This method emerged in response to two problems plaguing the statistical extraction of bilingual lexicons: "(1) How can noisy parallel corpora be used?

[7] After the 1980s, machine translation became mainstream again, enjoying even greater popularity than in the 1950s and 1960s and expanding rapidly, largely on the basis of the text-corpora approach.

The basic concept of machine translation can be traced back to the 17th century in the speculations surrounding "universal languages and mechanical dictionaries".

[7] The first truly practical machine-translation proposals were made in 1933 by Georges Artsrouni in France and Petr Trojanskij in Russia.

This engineering feat mesmerised the public and the governments of both the US and the USSR, which consequently provided large-scale funding for machine-translation research.

Thus machine translation lost popularity until the 1980s, when advances in linguistics and technology helped revitalise interest in the field.

The lesson taught by the RUSLAN experiment is that a transfer-based approach to translation retains its quality regardless of how closely related the languages are.

This is because queries tend to be short, often just a couple of words; although this provides little context, translating them is, for practical reasons, far more feasible than translating whole documents.
