Meaning–text theory (MTT) is a theoretical linguistic framework, first put forward in Moscow by Aleksandr Žolkovskij and Igor Mel’čuk,[1] for the construction of models of natural language.
The theory provides a large and elaborate basis for linguistic description and, due to its formal character, lends itself particularly well to computer applications, including machine translation, phraseology, and lexicography.
Linguistic models in meaning–text theory operate on the principle that language consists in a mapping from the content or meaning (semantics) of an utterance to its form or text (phonetics).
Lexemes with a purely grammatical function, such as lexically governed prepositions, are not included at this level of representation; values of inflectional categories that are derived from the SemR but implemented by the morphology are represented as subscripts on the lexical nodes that bear them.
Syntactic relations between lexical items at this level are not restricted and are considered to be completely language-specific, although many are believed to be similar (or at least isomorphic) across languages.
This is the first representational level at which linear precedence is considered linguistically significant, effectively grouping word order with morphological processes and prosody as one of the three non-lexical means by which languages can encode syntactic structure.
The lexicon in meaning–text theory is represented by an explanatory combinatorial dictionary (ECD)[9][10] which includes entries for all of the LUs of a language along with information speakers must know regarding their syntactics (the LU-specific rules and conditions on their combinatorics).
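As a rough illustration of the idea, an ECD-style entry can be thought of as a record bundling an LU's definition, its combinatorial restrictions, and its lexical-function values. The sketch below is a simplified data-structure analogy, not actual ECD notation; the field names and the sample entry are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an ECD-style lexicon entry. Each lexical unit (LU)
# carries its semantic definition, its syntactics (LU-specific combinatorial
# restrictions, e.g. which preposition an argument takes), and its lexical-
# function values. All names here are illustrative, not MTT notation.

@dataclass
class ECDEntry:
    lexeme: str
    definition: str                                   # simplified semantic gloss
    government: dict[str, str] = field(default_factory=dict)
    lexical_functions: dict[str, list[str]] = field(default_factory=dict)

# A toy entry, loosely modeled on the Magn examples discussed in the text.
rain = ECDEntry(
    lexeme="RAIN",
    definition="water falling from clouds in drops",
    lexical_functions={"Magn": ["heavy"]},
)

print(rain.lexical_functions["Magn"])
```

The point of the analogy is that the ECD stores combinatorial knowledge per LU, so a speaker-specific fact like "rain collocates with heavy" lives in the entry itself rather than in a general grammar rule.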
[14] An example of a simple LF is Magn(L), which represents collocations used in intensification such as heavy rain, strong wind, or intense bombardment.
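Since a lexical function maps an LU to the collocates expressing a given meaning, Magn can be sketched as a simple lookup. The table below only restates the examples from the text plus a couple of hypothetical additions; it is an illustrative toy, not a fragment of a real ECD.

```python
# Hypothetical sketch: the lexical function Magn modeled as a mapping from a
# lexical unit to the intensifiers that collocate with it. Entries follow the
# examples in the text (heavy rain, strong wind, intense bombardment).

MAGN = {
    "rain": ["heavy"],
    "wind": ["strong"],
    "bombardment": ["intense"],
}

def magn(lu: str) -> list[str]:
    """Return the intensifying collocates of the given LU, i.e. Magn(LU)."""
    return MAGN.get(lu, [])

print(magn("rain"))   # ['heavy']
print(magn("snow"))   # [] -- no Magn value recorded for this LU
```

This captures the key property of LFs: the meaning ("intense") is constant, but its lexical realization depends idiosyncratically on the LU it applies to.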