[1] Morris and Hirst[1] propose that lexical chains exploit semantic context to interpret words, concepts, and sentences.
Building on this intuition, they identify lexical chains in text documents and build their structure following Halliday and Hasan's[2] observations.
Silber and McCoy[6] also investigate text summarization, but their approach constructs lexical chains in linear time.
Budanitsky and Hirst[9][10] compare several measures of semantic distance and relatedness using lexical chains in conjunction with WordNet.
Moldovan and Novischi[12] study the use of lexical chains for finding topically related words for question answering systems.
According to their findings, topical relations via lexical chains improve the performance of question answering systems when combined with WordNet.
Ercan and Cicekli[14] explore the effects of lexical chains in the keyword extraction task through a supervised machine learning perspective.
Wei et al.[15] combine lexical chains and WordNet to extract sets of semantically related words from texts and use them for clustering.
Their approach uses an ontological hierarchical structure to provide a more accurate assessment of similarity between terms during the word sense disambiguation task.
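Such hierarchy-based similarity can be illustrated with a Wu-Palmer-style measure, which scores two terms by the depth of their lowest common subsumer in a hypernym tree. The sketch below uses a small hand-made taxonomy as a stand-in for WordNet (the words and edges are illustrative, not real WordNet data, and this is not Wei et al.'s exact formulation):

```python
# Toy hypernym taxonomy standing in for WordNet's noun hierarchy
# (illustrative data only; each word maps to its single hypernym).
TAXONOMY = {
    "dog": "canine",
    "wolf": "canine",
    "canine": "mammal",
    "cat": "feline",
    "feline": "mammal",
    "mammal": "animal",
    "animal": None,  # root
}

def path_to_root(word):
    """Return the hypernym path from word up to the taxonomy root."""
    path = [word]
    while TAXONOMY.get(path[-1]) is not None:
        path.append(TAXONOMY[path[-1]])
    return path

def depth(word):
    """Depth counted from the root (root has depth 1)."""
    return len(path_to_root(word))

def wu_palmer(w1, w2):
    """Wu-Palmer similarity: 2 * depth(LCS) / (depth(w1) + depth(w2))."""
    ancestors1 = set(path_to_root(w1))
    # lowest common subsumer: first ancestor of w2 also subsuming w1
    lcs = next(a for a in path_to_root(w2) if a in ancestors1)
    return 2 * depth(lcs) / (depth(w1) + depth(w2))

print(wu_palmer("dog", "wolf"))  # shares the close subsumer "canine": 0.75
print(wu_palmer("dog", "cat"))   # only shares the distant "mammal": 0.5
```

Terms joined near the leaves of the hierarchy score higher than terms whose only common ancestor sits near the root, which is the intuition behind using an ontological structure for disambiguation.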
Even though the applicability of lexical chains is diverse, there is little work exploring them with recent advances in NLP, more specifically with word embeddings.
Gonzales et al. [17] use word-sense embeddings to produce lexical chains that are integrated with a neural machine translation model.
In FLLC II, the lexical chains are assembled dynamically according to the semantic content of each term evaluated and its relationship with adjacent neighbors.
The semantic relationship is obtained through WordNet, which works as a ground truth indicating which lexical structure connects two words (e.g., hypernymy, hyponymy, meronymy).
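The dynamic assembly described above can be sketched as a greedy pass over the token stream: each word joins the current chain if WordNet links it to a word already in that chain, and otherwise opens a new chain. In this minimal sketch, a hypothetical relation table stands in for WordNet lookups, and the attachment rule is deliberately simplified; it is not the authors' exact FLLC II algorithm:

```python
# Hypothetical relation table standing in for WordNet queries; each pair is
# assumed to be connected by some lexical structure (hypernymy, meronymy, ...).
RELATIONS = {
    frozenset({"car", "vehicle"}),   # hypernymy
    frozenset({"vehicle", "truck"}), # hyponymy
    frozenset({"car", "wheel"}),     # meronymy
}

def related(w1, w2):
    """True if the two words share a semantic relation in the stand-in table."""
    return frozenset({w1, w2}) in RELATIONS

def build_chains(tokens):
    """Greedy chain assembly: attach a token to the current chain when it is
    related to any word already in that chain; otherwise start a new chain."""
    chains = []
    for tok in tokens:
        if chains and any(related(tok, w) for w in chains[-1]):
            chains[-1].append(tok)
        else:
            chains.append([tok])
    return chains

print(build_chains(["car", "vehicle", "truck", "banana"]))
# → [['car', 'vehicle', 'truck'], ['banana']]
```

"banana" shares no relation with the open chain, so it starts a new one; a fuller implementation would also record which relation justified each attachment and disambiguate senses before chaining.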