Never-Ending Language Learning

NELL was programmed by its developers to identify a basic set of fundamental semantic relationships between a few hundred predefined categories of data, such as cities, companies, emotions and sports teams.[1]
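
A predefined ontology of this kind can be thought of as a set of categories plus typed relations between them. The sketch below is purely illustrative (the category and relation names are hypothetical, not NELL's actual schema), showing how a candidate fact could be checked against a relation's argument types:

```python
# Hypothetical sketch of a NELL-style ontology: predefined categories and
# typed relations between them. Names are illustrative, not NELL's schema.
CATEGORIES = {"city", "company", "emotion", "sportsTeam", "country"}

# Each relation constrains the categories of its two arguments.
RELATIONS = {
    "cityInCountry": ("city", "country"),
    "teamPlaysInCity": ("sportsTeam", "city"),
    "companyHeadquarteredIn": ("company", "city"),
}

def is_well_typed(relation, subject_category, object_category):
    """Check a candidate fact against the relation's argument types."""
    sig = RELATIONS.get(relation)
    return sig is not None and sig == (subject_category, object_category)

print(is_well_typed("cityInCountry", "city", "country"))     # True
print(is_well_typed("cityInCountry", "company", "country"))  # False
```

Type constraints like these let a reading system reject many spurious extractions before they ever enter the knowledge base.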

Since the beginning of 2010, the Carnegie Mellon research team has run NELL around the clock, sifting through hundreds of millions of web pages for connections between what it already knows and what it newly reads, forming new connections in a manner intended to mimic the way humans learn new information.

Oren Etzioni of the University of Washington lauded the system's "continuous learning, as if NELL is exercising curiosity on its own, with little human help".[3]

By 2018, NELL had "acquired a knowledge base with 120 million diverse, confidence-weighted beliefs (e.g., servedWith(tea, biscuits)), while learning thousands of interrelated functions that continually improve its reading competence over time".[1]
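
A confidence-weighted belief pairs a relational triple with a score reflecting how strongly the evidence supports it. The following minimal sketch (with made-up beliefs and confidence values; only servedWith(tea, biscuits) comes from the source) shows one plausible way such a knowledge base could be represented and queried:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Belief:
    """A single confidence-weighted belief: relation(subject, obj)."""
    relation: str
    subject: str
    obj: str
    confidence: float  # in [0.0, 1.0]

# Hypothetical miniature knowledge base in NELL's style.
kb = [
    Belief("servedWith", "tea", "biscuits", 0.93),
    Belief("cityInCountry", "pittsburgh", "usa", 0.99),
    Belief("teamPlaysSport", "steelers", "football", 0.97),
]

def query(kb, relation, min_confidence=0.9):
    """Return beliefs for a relation at or above a confidence threshold."""
    return [b for b in kb if b.relation == relation
            and b.confidence >= min_confidence]

print(query(kb, "servedWith"))
```

Thresholding on confidence lets downstream consumers trade recall for precision without discarding weaker beliefs outright.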

A 2023 paper commented that "While the never-ending part seems like the right approach, NELL still had the drawback that its focus remained much too grounded on object-language descriptions, and relied on web pages as its only source, which significantly influenced the type of grammar, symbolism, slang, etc."[9]