[1][2] In 1981, Ehud Shapiro introduced several ideas that would shape the field in his new approach of model inference, an algorithm employing refinement and backtracing to search for a complete axiomatisation of given examples.
[1] Stephen Muggleton and Wray Buntine introduced predicate invention and inverse resolution in 1988.
FOIL, introduced by Ross Quinlan in 1990,[7] was based on upgrading the propositional learning algorithms AQ and ID3.
[8] Golem, introduced by Muggleton and Feng in 1990, went back to a restricted form of Plotkin's least general generalisation algorithm.
[8][10][11] Aleph, a descendant of Progol introduced by Ashwin Srinivasan in 2001, is still one of the most widely used systems as of 2022.
[10] At around the same time, the first practical applications emerged, particularly in bioinformatics, where by 2000 inductive logic programming had been successfully applied to drug design, carcinogenicity and mutagenicity prediction, and elucidation of the structure and function of proteins.
The success of those initial applications and the lack of progress in recovering larger traditional logic programs shaped the focus of the field.
The technique of meta-interpretive learning was pioneered with the Metagol system introduced by Muggleton, Dianhuan Lin, Niels Pahlavi and Alireza Tamaddoni-Nezhad in 2014.
[14] This allows ILP systems to work with fewer examples and has brought successes in learning string transformation programs, answer set grammars and general algorithms.
As of 2022, learning from entailment is by far the most popular setting for inductive logic programming.
The necessity requirement does not impose a restriction on h, but forbids the generation of any hypothesis as long as the positive facts are explainable without it.
The goal is then to output a hypothesis that is complete, meaning every positive example is a model of B ∪ H, and consistent, meaning no negative example is a model of B ∪ H.
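To make these two conditions concrete, the following toy check is a minimal sketch under assumed representations: the program (the union of B and H) is a list of propositional (head, body) pairs, and each example is a Herbrand interpretation given as a set of atoms.

def is_model(interpretation, program):
    # An interpretation is a model if, for every clause whose body it
    # satisfies, it also contains the clause's head.
    return all(head in interpretation
               for head, body in program
               if body <= interpretation)

def complete_and_consistent(program, positives, negatives):
    complete = all(is_model(e, program) for e in positives)        # every e+ is a model
    consistent = not any(is_model(e, program) for e in negatives)  # no e- is a model
    return complete, consistent

# Example with B ∪ H containing the single rule  flies :- bird.
program = [("flies", {"bird"})]
print(complete_and_consistent(program,
                              positives=[{"bird", "flies"}, set()],
                              negatives=[{"bird"}]))   # prints (True, True)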
Bottom-up methods to search the subsumption lattice have been investigated since Plotkin's first work on formalising induction in clausal logic in 1970.
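For instance, Plotkin's least general generalisation (lgg) of two compatible atoms is computed argument-wise: identical subterms are kept, while each pair of differing subterms is replaced by a variable, with the same variable reused whenever the same pair recurs. A minimal sketch, assuming a toy representation of constants as strings and compound terms as (functor, argument, ...) tuples:

def lgg(s, t, table=None):
    # table maps each pair of differing subterms to its variable,
    # so recurring pairs receive the same variable (Plotkin's rule).
    if table is None:
        table = {}
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # same functor and arity: generalise the arguments pointwise
        return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
    return table.setdefault((s, t), f"X{len(table)}")

# lgg of p(a, f(a)) and p(b, f(b)) is p(X0, f(X0)):
print(lgg(("p", "a", ("f", "a")), ("p", "b", ("f", "b"))))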
Two types of inverse resolution operator are in use in inductive logic programming: V-operators and W-operators.
[23] Inverse resolution was first introduced by Stephen Muggleton and Wray Buntine in 1988 for use in the inductive logic programming system Cigol.
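As an illustration, absorption, one of the V-operators, reconstructs a missing parent of a resolution step: given a parent clause q ← A and the resolvent p ← A, B, it recovers the other parent p ← q, B. A propositional sketch, with the (head, body-set) clause representation chosen here purely for illustration:

def absorption(parent, resolvent):
    q, body_a = parent          # parent clause  q :- A
    p, body_ab = resolvent      # resolvent      p :- A, B
    if not body_a <= body_ab:
        raise ValueError("parent body must be contained in the resolvent body")
    # the reconstructed parent replaces A by the atom q:  p :- q, B
    return (p, (body_ab - body_a) | {q})

# From  bird :- feathers, wings  and  flies :- feathers, wings, light
# absorption yields  flies :- bird, light.
print(absorption(("bird", {"feathers", "wings"}),
                 ("flies", {"feathers", "wings", "light"})))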
[23] The ILP systems Progol,[11] Hail[24] and Imparo[25] find a hypothesis H using the principle of inverse entailment[11] for theories B, E, H: B ∧ H ⊨ E ⟺ B ∧ ¬E ⊨ ¬H.
Therefore, an alternative hypothesis search can be conducted using the inverse subsumption (anti-subsumption) operation instead, which is less non-deterministic than anti-entailment.
Questions arise about the completeness of the hypothesis search procedure of a specific inductive logic programming system.
[30] Evolutionary algorithms in ILP use a population-based approach to evolve hypotheses, refining them through selection, crossover, and mutation.
Methods like EvoLearner have been shown to outperform traditional approaches on structured machine learning benchmarks.
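The following generic loop sketches such a population-based search; fitness, crossover and mutate are placeholder functions standing in for a concrete system's coverage-based scoring and clause-level operators, not the interface of EvoLearner or any other actual system:

import random

def evolve(population, fitness, crossover, mutate,
           generations=50, elite=0.2, mutation_rate=0.3):
    for _ in range(generations):
        # selection: keep the best-scoring fraction of hypotheses
        population.sort(key=fitness, reverse=True)
        survivors = population[:max(2, int(elite * len(population)))]
        children = []
        while len(survivors) + len(children) < len(population):
            a, b = random.sample(survivors, 2)
            child = crossover(a, b)          # recombine two hypotheses
            if random.random() < mutation_rate:
                child = mutate(child)        # e.g. add or drop a literal
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)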
It can be considered as a form of statistical relational learning within the formalism of probabilistic logic programming.
In parameter learning, one is given the structure (the clauses) of H and the goal is to infer the probability annotations of the given clauses, while in structure learning the goal is to infer both the structure and the probability parameters of H. Just as in classical inductive logic programming, the examples can be given as facts or as (partial) interpretations.
[35] Parameter learning for languages following the distribution semantics has been performed by using an expectation-maximisation algorithm or by gradient descent.
An expectation-maximisation algorithm consists of a cycle in which expectation and maximisation steps are repeatedly performed: in the expectation step, the distribution of the hidden variables is computed according to the current values of the probability parameters, while in the maximisation step, new parameter values are computed from those expectations.
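As a generic illustration of this cycle, consider the classic two-coin problem (a textbook example, not the algorithm of any particular probabilistic logic programming system): each session of ten flips was produced by one of two coins of unknown bias, and the identity of the coin is the hidden variable.

from math import comb

sessions = [5, 9, 8, 4, 7]    # heads observed in each session of n flips
n = 10
theta_a, theta_b = 0.6, 0.5   # initial guesses for the two coin biases

def likelihood(theta, heads):
    return comb(n, heads) * theta**heads * (1 - theta)**(n - heads)

for _ in range(20):
    # E-step: posterior probability that each session used coin A
    weights = [likelihood(theta_a, h) /
               (likelihood(theta_a, h) + likelihood(theta_b, h))
               for h in sessions]
    # M-step: re-estimate the biases from the expected head counts
    heads_a = sum(w * h for w, h in zip(weights, sessions))
    heads_b = sum((1 - w) * h for w, h in zip(weights, sessions))
    theta_a = heads_a / sum(w * n for w in weights)
    theta_b = heads_b / sum((1 - w) * n for w in weights)

print(round(theta_a, 3), round(theta_b, 3))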
Their approach involves generating the underlying graphical model in a preliminary step and then applying expectation-maximisation.
[35][37] In the same year, Wannes Meert et al. introduced a method for learning the parameters and structure of ground probabilistic logic programs by considering the Bayesian networks equivalent to them and applying techniques for learning Bayesian networks.
[38][35] ProbFOIL, introduced by De Raedt and Ingo Thon in 2010, combined the inductive logic programming system FOIL with ProbLog.
[39][35] In 2011, Elena Bellodi and Fabrizio Riguzzi introduced SLIPCASE, which performs a beam search among probabilistic logic programs by iteratively refining probabilistic theories and optimising the parameters of each theory using expectation-maximisation.
[40] Its extension SLIPCOVER, proposed in 2014, uses bottom clauses generated as in Progol to guide the refinement process, thus reducing the number of revisions and exploring the search space more effectively.
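In outline, such a search can be sketched as a generic beam search; here refinements stands in for a system's theory refinement operator and score for the likelihood of a theory after its parameters have been optimised, both names being assumptions for illustration:

def beam_search(initial, refinements, score, beam_width=5, steps=10):
    beam = [initial]
    best = initial
    for _ in range(steps):
        # expand every theory in the beam with all of its refinements
        candidates = [r for theory in beam for r in refinements(theory)]
        if not candidates:
            break
        # keep only the beam_width highest-scoring refinements
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
        if score(beam[0]) > score(best):
            best = beam[0]
    return best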
Text taken from A History of Probabilistic Inductive Logic Programming, Fabrizio Riguzzi, Elena Bellodi and Riccardo Zese, Frontiers Media.