Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural programming code.
They were created in the 1970s and then proliferated in the 1980s,[8] and at the time were widely regarded as the future of AI, before the advent of successful artificial neural networks.
In the late 1950s, soon after the information age had arrived, researchers started experimenting with the prospect of using computer technology to emulate human decision making.
For example, biomedical researchers started creating computer-aided systems for diagnostic applications in medicine and biology.
However, researchers realized that there were significant limitations in using traditional methods such as flow charts,[13][14] statistical pattern matching,[15] or probability theory.
The idea that "intelligent systems derive their power from the knowledge they possess rather than from the specific formalisms and inference schemes they use"[23], as Feigenbaum said, was a significant step forward at the time, since earlier research had focused on heuristic computational methods, culminating in attempts to develop very general-purpose problem solvers (most notably the joint work of Allen Newell and Herbert Simon).
Lance Elliot wrote: "The British Nationality Act was passed in 1981 and shortly thereafter was used as a means of showcasing the efficacy of using Artificial Intelligence (AI) techniques and technologies, doing so to explore how the at-the-time newly enacted statutory law might be encoded into a computerized logic-based formalization."
Universities offered expert system courses and two-thirds of the Fortune 500 companies applied the technology in daily business activities.
Interest was international, with the Fifth Generation Computer Systems project in Japan and increased research funding in Europe.
Expert systems were a natural fit for new PC-based shells that promised to put application development into the hands of end users and experts.
Until then, the main development environment for expert systems had been high-end Lisp machines from Xerox, Symbolics, and Texas Instruments.
With the rise of the PC and client–server computing, vendors such as Intellicorp and Inference Corporation shifted their priorities to developing PC-based tools.
Also, new vendors, often financed by venture capital (such as Aion Corporation, Neuron Data, Exsys, VP-Expert, and many others[33][34]), started appearing regularly.
Surprisingly, the combination of these rules resulted in an overall design that exceeded the capabilities of the experts themselves and in many cases outperformed their human counterparts.
At the start of these early studies, researchers were hoping to develop entirely automatic (i.e., completely computerized) expert systems.
This situation changed radically after Richard M. Karp published his breakthrough paper "Reducibility among Combinatorial Problems" in the early 1970s.
Thanks to the work of Karp, together with that of other scholars such as Hubert L. Dreyfus,[37] it became clear that there are inherent limits, as well as possibilities, in the design of computer algorithms.
Other researchers suggest that expert systems caused inter-company power struggles when the IT organization lost its exclusive control over software modifications to users or knowledge engineers.
In early expert systems such as Mycin and Dendral, the facts in the knowledge base were represented mainly as flat assertions about variables.
In later expert systems developed with commercial shells, the knowledge base took on more structure and used concepts from object-oriented programming.
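To make the contrast concrete, the following Python sketch is purely illustrative (it does not reproduce Mycin's or Dendral's actual data structures, and the attribute names are invented): the early style amounts to flat assertions about variables, while the later shell style groups related attributes into class-like objects with named slots.

```python
# Illustrative sketch only; attribute names and values are hypothetical.

# Early, "flat" style: each fact is a bare assertion about a variable.
flat_facts = {
    ("organism-1", "gram_stain"): "gram-negative",
    ("organism-1", "morphology"): "rod",
    ("patient-1", "age"): 52,
}

# Later, shell-era style: related facts are grouped into structured objects
# (frames/classes) with named slots, borrowing from object-oriented programming.
class Organism:
    def __init__(self, name, gram_stain, morphology):
        self.name = name
        self.gram_stain = gram_stain
        self.morphology = morphology

organism_1 = Organism("organism-1", gram_stain="gram-negative", morphology="rod")
```

Rules in the later systems could then refer to named slots of a structured object (for example, the gram stain of a particular organism) rather than to loose variable names, which is what gave those knowledge bases their additional structure.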
The inference engine may also include explanation capabilities, so that it can explain to a user the chain of reasoning used to reach a particular conclusion by tracing back over the rule firings that produced the assertion.
A simple example of forward chaining would be to assert Man(Socrates) to the system and then trigger the inference engine.
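A minimal Python sketch of this example (a toy formulation, not the syntax of any actual expert system shell) shows the idea: asserting Man(Socrates) lets a rule of the form Man(x) => Mortal(x) fire, and recording each firing provides the kind of trace that the explanation facility described above can replay.

```python
# Toy forward-chaining sketch; the rule format and names are hypothetical,
# not those of any particular expert system shell.

rules = [
    {"name": "R1", "if": "Man", "then": "Mortal"},   # Man(x) => Mortal(x)
]

facts = {("Man", "Socrates")}   # assert Man(Socrates) to the system
trace = []                      # record of rule firings, used for explanation

# Trigger the inference engine: keep firing rules until no new facts appear.
changed = True
while changed:
    changed = False
    for rule in rules:
        for predicate, subject in list(facts):
            conclusion = (rule["then"], subject)
            if predicate == rule["if"] and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{rule['name']}: {rule['if']}({subject}) => {rule['then']}({subject})")
                changed = True

print(facts)   # now also contains ('Mortal', 'Socrates')
print(trace)   # ['R1: Man(Socrates) => Mortal(Socrates)']
```

Keeping the history of rule firings separate from the fact base is one simple way an inference engine can answer "how was this concluded?" by tracing back over the rules that fired.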
A significant area for research was the generation of explanations from the knowledge base in natural English rather than simply by showing the more formal but less intuitive rules.
With an expert system shell it was possible to enter a few rules and have a prototype developed in days rather than the months or years typically associated with complex IT projects.
Also, as expert systems moved from prototypes in the lab to deployment in the business world, issues of integration and maintenance became far more critical.
Performance could be especially problematic because early expert systems were built using tools (such as earlier Lisp versions) that interpreted code expressions without first compiling them.
This provided a powerful development environment, but with the drawback that it was virtually impossible to match the efficiency of the fastest compiled languages (such as C).
These issues were resolved mainly by the client–server paradigm shift, as PCs were gradually accepted in the IT environment as a legitimate platform for serious business system development and as affordable minicomputer servers provided the processing power needed for AI applications.
Other problems are related to overfitting and overgeneralization when known facts are used to generalize to other cases not described explicitly in the knowledge base.
The general problem it solved, designing a solution given a set of constraints, was one of the most successful areas for early expert systems in business domains such as salespeople configuring Digital Equipment Corporation (DEC) VAX computers and mortgage loan application development.