This process would iterate as each new fact in the knowledge base could trigger additional rules in the inference engine.[1]
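A minimal sketch of this cycle in Python (the facts and rules are invented for illustration, not drawn from any real system): each pass fires every rule whose antecedent holds, and the loop repeats until no rule adds a new fact.

```python
# Minimal forward-chaining cycle: fire rules until no new facts appear.
facts = {"socrates is a man"}

# Each rule is a propositional IF-THEN pair: (antecedent, consequent).
rules = [
    ("socrates is a man", "socrates is mortal"),
    ("socrates is mortal", "socrates will die"),
]

changed = True
while changed:                      # iterate: new facts may trigger more rules
    changed = False
    for antecedent, consequent in rules:
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)   # executing the rule adds a new fact
            changed = True

print(sorted(facts))
# ['socrates is a man', 'socrates is mortal', 'socrates will die']
```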
Additionally, the concept of 'inference' has expanded to include the process through which trained neural networks generate predictions or decisions.
This type of inference plays a crucial role in applications such as image recognition, natural language processing, and autonomous vehicles.
The inference phase in these applications is typically characterized by a high volume of data inputs and real-time processing requirements.
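As a rough illustration of this phase, consider a single forward pass over a batch of inputs; the layer shapes and randomly drawn weights below are placeholders standing in for a genuinely trained model.

```python
import numpy as np

# Hypothetical "trained" parameters for a tiny two-layer classifier;
# in practice these would be loaded from a real training run.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def infer(batch: np.ndarray) -> np.ndarray:
    """Forward pass only: no learning happens at inference time."""
    hidden = np.maximum(batch @ W1 + b1, 0.0)   # ReLU layer
    logits = hidden @ W2 + b2
    return logits.argmax(axis=1)                # predicted class per input

# A high-volume input stream is typically processed in batches like this.
predictions = infer(rng.normal(size=(32, 4)))
print(predictions.shape)   # (32,)
```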
Prior to the development of expert systems and inference engines, artificial intelligence researchers focused on more powerful theorem prover environments that offered much fuller implementations of first-order logic.
Focusing on IF-THEN statements (applied through the inference rule logicians call modus ponens: from 'IF P THEN Q' and 'P', conclude 'Q') still gave developers a very powerful general mechanism for representing logic, but one that could be implemented efficiently with the available computational resources.
This integration of the inference engine with a user interface led to the second early advancement of expert systems: explanation capabilities, in which the system could justify a conclusion by presenting the chain of rules that produced it.
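A minimal sketch of such an explanation facility, assuming a toy rule format and invented medical-style rules (not the historical interface): each derived fact records the premise that produced it, so the system can answer "why?".

```python
# Each derived fact records the premise that justified it.
rules = [("patient has fever", "patient may have infection"),
         ("patient may have infection", "order blood culture")]

justification = {"patient has fever": None}   # None means "given by the user"

for antecedent, consequent in rules:          # one forward pass, in rule order
    if antecedent in justification and consequent not in justification:
        justification[consequent] = antecedent

def explain(fact: str) -> None:
    """Replay the chain of rules that led to this conclusion."""
    premise = justification[fact]
    if premise is None:
        print(f"{fact} (given)")
    else:
        print(f"{fact} BECAUSE {premise}")
        explain(premise)

explain("order blood culture")
# order blood culture BECAUSE patient may have infection
# patient may have infection BECAUSE patient has fever
# patient has fever (given)
```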
Executing the rules often results in new facts or goals being added to the knowledge base, which triggers the cycle to repeat.
In forward chaining, the engine looks for rules whose antecedent (left-hand side) matches some fact in the knowledge base.
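The matching step can be sketched with a simple pattern convention (the `?x` variable syntax below is illustrative only; production engines rely on far more sophisticated matching algorithms such as Rete).

```python
# Match a rule antecedent containing a variable against the knowledge base.
# Facts are (predicate, subject) tuples; names starting with "?" are variables.
facts = [("man", "socrates"), ("man", "plato"), ("dog", "fido")]
rule = {"if": ("man", "?x"), "then": ("mortal", "?x")}

def match(pattern, fact):
    """Return a {variable: value} binding if the fact fits the pattern."""
    bindings = {}
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            bindings[p] = f
        elif p != f:
            return None
    return bindings

for fact in facts:
    b = match(rule["if"], fact)
    if b is not None:
        pred, var = rule["then"]
        print((pred, b[var]))   # new fact instantiated from the consequent
# ('mortal', 'socrates')
# ('mortal', 'plato')
```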
Lisp was a frequent platform for early AI research due to its strong support for symbolic manipulation.
Prolog focused primarily on backward chaining and also featured various commercial versions and optimizations for efficiency and robustness.[5]
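For contrast with forward chaining, a backward chainer in the Prolog spirit starts from a goal and recursively tries to establish the antecedents that would prove it; this propositional sketch, with invented rules, omits Prolog's unification.

```python
# Goal-driven (backward) chaining over propositional IF-THEN rules.
facts = {"it is raining"}
rules = {"ground is wet": ["it is raining"],     # goal -> antecedents
         "shoes get muddy": ["ground is wet"]}

def prove(goal: str) -> bool:
    """Work backward from the goal to known facts."""
    if goal in facts:
        return True
    antecedents = rules.get(goal)
    return antecedents is not None and all(prove(a) for a in antecedents)

print(prove("shoes get muddy"))   # True
```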
As expert systems prompted significant interest from the business world, various companies, many of them started or guided by prominent AI researchers, created productized versions of inference engines.