No first-order theory, however, has the strength to uniquely describe, up to isomorphism, a structure with an infinite domain, such as the natural numbers or the real line.
The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce.[6]
In first-order logic, however, sentences such as "Socrates is a philosopher" may be framed as statements that a certain individual or non-logical object has a property, here the property expressed by the predicate "x is a philosopher".
Consequently, "x is a philosopher" alone does not have a definite truth value of true or false, and is akin to a sentence fragment.
The variable x in the previous formula can be universally quantified, for instance, with the first-order sentence "For every x, if x is a philosopher, then x is a scholar".
The universal quantifier "for every" in this sentence expresses the idea that the claim "if x is a philosopher, then x is a scholar" holds for all choices of x.
The existential quantifier "there exists" expresses the idea that the claim "x is a philosopher and x is not a scholar" holds for some choice of x.
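With unary predicate symbols Phil and Schol (names chosen here purely for illustration), these two sentences can be written symbolically as

    ∀x (Phil(x) → Schol(x))    and    ∃x (Phil(x) ∧ ¬Schol(x)).

Note that the second sentence is logically equivalent to the negation of the first.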
An interpretation (or model) of a first-order formula specifies what each predicate means, and the entities that can instantiate the variables.
These entities form the domain of discourse or universe, which is usually required to be a nonempty set.
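For instance (the particular domain and predicate interpretations here are hypothetical), an interpretation might take the domain to be D = {a, b}, interpret Phil as {a, b}, and interpret Schol as {a}. Under this interpretation the sentence ∀x (Phil(x) → Schol(x)) is false, since b is a philosopher who is not a scholar, while ∃x (Phil(x) ∧ ¬Schol(x)) is true, witnessed by b.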
For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases.
A common convention is that ¬ is evaluated first; ∧ and ∨ are evaluated next; the quantifiers are evaluated next; and → is evaluated last. Moreover, extra punctuation not required by the definition may be inserted to make formulas easier to read.
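For example, under this convention the formula ¬Phil(x) ∧ Schol(x) → Schol(x) abbreviates the fully parenthesized ((¬Phil(x)) ∧ Schol(x)) → Schol(x).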
A formula φ is said to be satisfied by an interpretation if it remains true regardless of which individuals from the domain of discourse are assigned to its free variables.
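For a finite domain, satisfaction can be checked by brute force. The following is a minimal Python sketch; the domain, predicate names, and encoding are illustrative assumptions, not features of any standard library:

    # Illustrative finite interpretation: domain {a, b}, with Phil and
    # Schol interpreted as subsets of the domain.
    domain = {"a", "b"}
    phil = {"a", "b"}
    schol = {"a"}

    # "For every x, if x is a philosopher, then x is a scholar"
    # (implication P -> Q encoded as (not P) or Q)
    universal = all((x not in phil) or (x in schol) for x in domain)

    # "There exists x such that x is a philosopher and not a scholar"
    existential = any((x in phil) and (x not in schol) for x in domain)

    print(universal)    # False: "b" is in phil but not in schol
    print(existential)  # True: witnessed by "b"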
Tarski and Givant (1987) showed that the fragment of first-order logic that has no atomic sentence lying in the scope of more than three quantifiers has the same expressive power as relation algebra.
A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation.
Moreover, such restrictions are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule.
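A standard illustration of the danger is variable capture during substitution: naively substituting y for the free variable x in the formula ∃y (x ≠ y) yields ∃y (y ≠ y). The original formula is true whenever the domain has at least two elements, whereas the result is never true, so the bound variable must first be renamed.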
Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemas of logical axioms.
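For example, one common logical axiom schema is φ → (ψ → φ), where every formula obtained by substituting formulas for φ and ψ counts as an axiom; the sole rule of inference in many such systems is modus ponens, which derives ψ from φ and φ → ψ. (The exact choice of axiom schemas varies between presentations.)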
When first-order logic without equality is studied, it is necessary to amend the statements of results such as the Löwenheim–Skolem theorem so that only normal models are considered.
This result was established independently by Alonzo Church and Alan Turing in 1936 and 1937, respectively, giving a negative answer to the Entscheidungsproblem posed by David Hilbert and Wilhelm Ackermann in 1928.
He established two theorems for systems of this type.

Although first-order logic is sufficient for formalizing much of mathematics and is commonly used in computer science and other fields, it has certain limitations.
For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm for provability is impossible.
This has led to the study of interesting decidable fragments, such as C2: first-order logic with two variables and the counting quantifiers ∃^{≥n} and ∃^{≤n}, which assert the existence of at least n and at most n elements satisfying a formula, respectively.
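For example (with a hypothetical binary predicate E for the edge relation of a graph), the C2 sentence ∀x ∃^{≥3}y E(x, y) expresses that every node has at least three outgoing edges, using only the two variables x and y.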
Others change the expressive power more significantly, by extending the semantics through additional quantifiers or other new logical symbols.
It is also sufficient to have two predicate symbols of arity 2 that define projection functions from an ordered pair to its components.
Infinitely long sentences arise in areas of mathematics including topology and model theory.
The most commonly studied infinitary logics are denoted L_{αβ}, where the subscripts α and β are each either cardinal numbers or the symbol ∞.
In the logic L_{∞ω}, arbitrary conjunctions or disjunctions are allowed when building formulas, and there is an unlimited supply of variables.
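For example, in a logic allowing countable disjunctions, such as L_{ω₁ω}, and assuming a constant symbol 0 and a unary function symbol S for successor, the single infinitary sentence

    ∀x (x = 0 ∨ x = S(0) ∨ x = S(S(0)) ∨ …)

asserts that every element is denoted by some numeral; together with first-order axioms stating that S is injective and that 0 is not a successor, this characterizes the natural numbers up to isomorphism, which no first-order theory can do.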
For example, it is possible to create axiom systems in second-order logic that uniquely characterize the natural numbers and the real line.
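For example, the second-order induction axiom

    ∀P ((P(0) ∧ ∀n (P(n) → P(S(n)))) → ∀n P(n))

quantifies over all subsets P of the domain (here 0 and S are the usual constant and successor symbols); together with the first-order successor axioms it determines the natural numbers up to isomorphism.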
Thus, complicated heuristic functions are developed to attempt to find a derivation in less time than a blind search would require.
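As an illustrative sketch of the idea, and not any particular prover's algorithm, the following Python program runs a propositional resolution loop that always processes the lightest clause first, a simple size-based heuristic; production provers refine the same "given clause" structure with far more elaborate weightings:

    import heapq
    from itertools import count

    def resolve(c1, c2):
        """All resolvents of two clauses; a clause is a frozenset of
        (name, polarity) literals."""
        resolvents = []
        for (name, pol) in c1:
            if (name, not pol) in c2:
                r = (c1 - {(name, pol)}) | (c2 - {(name, not pol)})
                resolvents.append(frozenset(r))
        return resolvents

    def refute(clauses, weight=len):
        """Best-first resolution: return True if the clause set is
        unsatisfiable, i.e. the empty clause is derivable."""
        tiebreak = count()  # keeps heap comparisons total
        queue = [(weight(c), next(tiebreak), c) for c in clauses]
        heapq.heapify(queue)
        seen = set(clauses)
        processed = []
        while queue:
            _, _, given = heapq.heappop(queue)
            if not given:   # empty clause: contradiction found
                return True
            for other in processed:
                for r in resolve(given, other):
                    if r not in seen:
                        seen.add(r)
                        heapq.heappush(queue, (weight(r), next(tiebreak), r))
            processed.append(given)
        return False        # saturated without contradiction: satisfiable

    # The set {p, p -> q, not q} is unsatisfiable:
    cnf = [frozenset({("p", True)}),
           frozenset({("p", False), ("q", True)}),
           frozenset({("q", False)})]
    print(refute(cnf))  # True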
In this setting, theorem provers are used to verify the correctness of programs and of hardware such as processors with respect to a formal specification.
Because such analysis is time-consuming and thus expensive, it is usually reserved for projects in which a malfunction would have grave human or financial consequences.