In ASP and Datalog, logic programs have only a declarative reading, and their execution is performed by means of a proof procedure or model generator whose behaviour is not meant to be controlled by the programmer.
The use of mathematical logic to represent and execute computer programs is also a feature of the lambda calculus, developed by Alonzo Church in the 1930s.
Foster and Elcock's Absys, on the other hand, employed a combination of equations and lambda calculus in an assertional programming language that places no constraints on the order in which operations are performed.
[4] Logic programming, with its current syntax of facts and rules, can be traced back to debates in the late 1960s and early 1970s about declarative versus procedural representations of knowledge in artificial intelligence.
Advocates of procedural representations were mainly centered at MIT, under the leadership of Marvin Minsky and Seymour Papert.
[5] Planner, developed by Carl Hewitt at MIT, was the first language to emerge within this proceduralist paradigm.
[7] For the sake of efficiency, Planner used a backtracking control structure so that only one possible computation path had to be stored at a time.
Hayes (1973) developed an equational language, Golux, in which different procedures could be obtained by altering the behavior of the theorem prover.
[13] In the meanwhile, Alain Colmerauer in Marseille was working on natural-language understanding, using logic to represent semantics and using resolution for question-answering.
The use of Prolog as a practical programming language was given great momentum by the development of a compiler by David H. D. Warren in Edinburgh in 1977.
Experiments demonstrated that Edinburgh Prolog could compete with the processing speed of other symbolic programming languages such as Lisp.
The FGCS project aimed to use logic programming to develop advanced Artificial Intelligence applications on massively parallel computers.
Although the project initially explored the use of Prolog, it later adopted the use of concurrent logic programming, because it was closer to the FGCS computer architecture.
[19] In the meanwhile, more declarative logic programming approaches, including those based on the use of Prolog, continued to make progress independently of the FGCS project.
Work in this field became prominent around 1977, when Hervé Gallaire and Jack Minker organized a workshop on logic and databases in Toulouse.
[34] The difference between the two declarative semantics can be seen with the definitions of addition and multiplication in successor arithmetic, which represents the natural numbers 0, 1, 2, ... as a sequence of terms of the form 0, s(0), s(s(0)), ....
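In Prolog-like syntax, these definitions can be written as follows (a sketch of the standard clauses; the variable names are illustrative):

```prolog
% add(X, Y, Z) holds when X + Y = Z in successor notation.
add(X, 0, X).
add(X, s(Y), s(Z)) :- add(X, Y, Z).

% multiply(X, Y, Z) holds when X * Y = Z, defined via add/3.
multiply(_, 0, 0).
multiply(X, s(Y), W) :- multiply(X, Y, Z), add(X, Z, W).
```

For example, the query ?- add(s(0), s(0), Z). succeeds with Z = s(s(0)), i.e. 1 + 1 = 2.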
But, if we are now told that tom is violent, the conclusion that tom should be punished will be reinstated. The completion of this program makes the negative conditions explicit, turning the clauses into if-and-only-if definitions of their predicates. The notion of completion is closely related to John McCarthy's circumscription semantics for default reasoning,[36] and to Ray Reiter's closed world assumption.
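The program under discussion can be sketched as follows (a reconstruction with illustrative predicate names, using not for negation as failure):

```prolog
% A thief is punished unless rehabilitation can be concluded.
should_receive_sanction(X, punishment) :-
    is_a_thief(X),
    not should_receive_sanction(X, rehabilitation).

% A thief who is a minor and not violent is rehabilitated instead.
should_receive_sanction(X, rehabilitation) :-
    is_a_thief(X),
    is_a_minor(X),
    not is_violent(X).

is_a_thief(tom).
is_a_minor(tom).
```

With these facts, rehabilitation is concluded for tom and punishment is not. Adding the fact is_violent(tom) makes the rehabilitation clause fail, reinstating the conclusion that tom should be punished and illustrating the non-monotonic character of negation as failure.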
In the satisfiability semantics, negation is interpreted according to the classical definition of truth in an intended or standard model of the logic program.
[43] The simplest metaprogram is the so-called "vanilla" meta-interpreter, in which true represents an empty conjunction, (B,C) is a composite term representing the conjunction of B and C, and the predicate clause(A,B) means that there is a clause of the form A :- B. Metaprogramming is an application of the more general use of a metalogic or metalanguage to describe and reason about another language, called the object language.
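The vanilla meta-interpreter is standardly written as three clauses:

```prolog
% solve(Goal) succeeds when Goal is derivable from the clauses
% represented by the clause/2 predicate.
solve(true).
solve((B,C)) :- solve(B), solve(C).
solve(A) :- clause(A, B), solve(B).
```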
In his popular Introduction to Cognitive Science,[44] Paul Thagard includes logic and rules as alternative approaches to modelling human thinking.
This suggests that Thagard's conclusion (page 56) that "Much of human knowledge is naturally described in terms of rules, and many kinds of thinking such as planning can be modeled by rule-based systems" also applies to logic programs.
[45] They show how the non-monotonic character of logic programs can be used to explain human performance on a variety of psychological tasks.
In The Proper Treatment of Events,[46] Michiel van Lambalgen and Fritz Hamm investigate the use of constraint logic programming to code "temporal notions in natural language by looking at the way human beings construct time".
This difficulty does not arise, however, when logic programs are used to represent the existing, explicit rules of a business organisation or legal authority.
[52] The SLD resolution rule of inference is neutral about the order in which subgoals in the bodies of clauses can be selected for solution.
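A small hypothetical illustration (not from the text): the two definitions below differ only in the order of their body subgoals, yet under Prolog's fixed left-to-right selection rule one fails finitely on an unsolvable query while the other does not terminate.

```prolog
edge(a, b).
edge(b, c).

% Subgoal order 1: edge/2 is selected first, so each recursive
% call starts from a concrete node; ?- path(c, a). fails finitely.
path(X, X).
path(X, Z) :- edge(X, Y), path(Y, Z).

% Subgoal order 2: logically equivalent, but the recursive call
% is selected first; ?- path2(c, a). recurses forever before any
% edge/2 subgoal can prune the search.
path2(X, X).
path2(X, Z) :- path2(X, Y), edge(Y, Z).
```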
For example, the toy blocks world example above can be implemented without frame axioms using destructive change of state; the sequence of move events and the resulting locations of the blocks can then be computed by executing a query against the destructively updated program. Various extensions of logic programming have been developed to provide a logical framework for such destructive change of state.
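One way to sketch such an implementation in standard Prolog (an illustration with hypothetical block names, not necessarily the original code) uses assert and retract to update the on/2 relation in place:

```prolog
:- dynamic on/2.

% Initial state: block b is on block a, and a is on the table.
on(a, table).
on(b, a).

% move(Object, Place) destructively updates the state: the old
% location is retracted and the new one asserted, so no frame
% axioms are needed to carry unchanged facts forward.
move(Object, Place) :-
    retract(on(Object, _)),
    assert(on(Object, Place)).
```

After the query ?- move(b, table). the program contains on(b, table) in place of on(b, a).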
Constraint predicates are not defined by the facts and rules in the program, but are predefined by some domain-specific model-theoretic structure or theory.
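As an illustration (assuming SWI-Prolog's clpfd library, which is not mentioned in the text), the constraint predicates #=/2 and #</2 below are interpreted over the integers by the constraint solver rather than being defined by program clauses:

```prolog
:- use_module(library(clpfd)).

% X and Y range over 0..9; the arithmetic constraints are handled
% by the predefined finite-domain solver, not by SLD resolution
% against facts and rules.
pair(X, Y) :-
    [X, Y] ins 0..9,
    X + Y #= 10,
    X #< Y.
```

The query ?- pair(X, Y), label([X, Y]). then enumerates solutions such as X = 1, Y = 9.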
In ALP, these predicates are declared as abducible (or assumable), and are used as in abductive reasoning to explain observations, or more generally to add new facts to the program (as assumptions) to solve goals.
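A standard small illustration of abduction (the classic wet-grass example, used here on the assumption that it matches the surrounding discussion) declares two abducible predicates and uses them to explain an observation:

```prolog
% Program: the grass is wet if it rained or the sprinkler was on.
grass_is_wet :- rained.
grass_is_wet :- sprinkler_was_on.

% Abducible (assumable) predicates: rained/0 and sprinkler_was_on/0
% have no defining clauses. Given the observation grass_is_wet, an
% abductive proof procedure returns {rained} or {sprinkler_was_on}
% as alternative hypotheses that explain it.
```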
In this case there are many alternative solutions, including ones in which tick is an event that marks the passage of time without initiating or terminating any fluents.
The development of concurrent logic programming was given a big impetus in the 1980s when it was chosen as the systems programming language of the Japanese Fifth Generation Project (FGCS).