Foundations of mathematics

These foundations were tacitly assumed to be definitive until the introduction of infinitesimal calculus by Isaac Newton and Gottfried Wilhelm Leibniz in the 17th century.

This new area of mathematics involved new methods of reasoning and new basic concepts (continuous functions, derivatives, limits) that were not well founded, but had astonishing consequences, such as the deduction from Newton's law of gravitation that the orbits of the planets are ellipses.

During the 19th century, progress was made towards elaborating precise definitions of the basic concepts of infinitesimal calculus, notably the natural and real numbers.

Their adequacy to their physical origins no longer belongs to mathematics, although their relation with reality is still used to guide mathematical intuition: physical reality is still used by mathematicians to choose axioms, to find which theorems are interesting to prove, and to obtain indications of possible proofs.

Most civilisations developed some mathematics, mainly for practical purposes, such as counting (merchants), surveying (delimitation of fields), prosody, astronomy, and astrology.

It seems that ancient Greek philosophers were the first to study the nature of mathematics and its relation with the real world.

In the Posterior Analytics, Aristotle (384–322 BC) laid down the logic for organizing a field of knowledge by means of primitive concepts, axioms, postulates, definitions, and theorems.

Aristotle's logic reached its high point with Euclid's Elements (300 BC), a treatise on mathematics structured with very high standards of rigor: Euclid justifies each proposition by a demonstration in the form of chains of syllogisms (though they do not always conform strictly to Aristotelian templates).

Aristotle's syllogistic logic, together with its exemplification by Euclid's Elements, are recognized as scientific achievements of ancient Greece, and remained as the foundations of mathematics for centuries.

This geometrical view of non-integer numbers remained dominant until the end of the Middle Ages, although the rise of algebra led to considering them independently of geometry, which implicitly treats them as foundational primitives of mathematics.

For example, the transformations of equations introduced by Al-Khwarizmi and the cubic and quartic formulas discovered in the 16th century result from algebraic manipulations that have no geometric counterpart.

In 1637, René Descartes published La Géométrie, in which he showed that geometry can be reduced to algebra by means of coordinates, which are numbers determining the position of a point.
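As a simple illustration of this reduction (in modern notation, not Descartes's own): once a point is identified with a pair of coordinates (x, y), a geometric object such as the circle of radius r centered at the origin becomes a purely algebraic equation,

```latex
\[
  x^2 + y^2 = r^2 ,
\]
```

so that geometric questions about the circle (for example, whether a given point lies on it) reduce to algebraic computations with numbers.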

Isaac Newton (1642–1727) in England and Leibniz (1646–1716) in Germany independently developed the infinitesimal calculus for dealing with mobile points (such as planets in the sky) and variable quantities.

Despite its lack of firm logical foundations, infinitesimal calculus was quickly adopted by mathematicians and validated by its numerous applications, in particular the fact that planetary trajectories can be deduced from Newton's law of gravitation.

This need for quantification over infinite sets is one of the motivations for the development of higher-order logics during the first half of the 20th century.

Karl von Staudt developed a purely geometric approach to this problem by introducing "throws" that form what is presently called a field, in which the cross ratio can be expressed.
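For context, the cross ratio mentioned here is, in one standard modern analytic formulation (conventions for sign and ordering vary between authors), defined for four collinear points A, B, C, D in terms of signed distances:

```latex
\[
  (A, B;\, C, D) \;=\; \frac{\overline{AC} \cdot \overline{BD}}{\overline{BC} \cdot \overline{AD}} .
\]
```

Von Staudt's achievement was to recover this quantity by purely geometric (synthetic) means, without presupposing the real numbers.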

The problem of the equivalence between the analytic and synthetic approaches was apparently solved completely only with Emil Artin's book Geometric Algebra, published in 1957.

This began with Charles Sanders Peirce in 1881 and Richard Dedekind in 1888, who defined a natural number as the cardinality of a finite set.
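A later set-theoretic construction in the same spirit as this cardinality definition (the von Neumann encoding, which postdates Dedekind) encodes each natural number as the set of its predecessors, so that the number n literally is a set with n elements. A minimal sketch:

```python
# Von Neumann encoding of the naturals: 0 is the empty set,
# and n + 1 is n together with {n}.  The number encoded by a
# set is then simply its cardinality.

def zero():
    return frozenset()

def succ(n):
    # n + 1 = n ∪ {n}
    return n | {n}

def card(n):
    # the natural number represented by n is its cardinality
    return len(n)

three = succ(succ(succ(zero())))
print(card(three))  # 3
```

The point of such constructions is that arithmetic facts about the naturals become facts about finite sets, grounding the numbers in a prior (set-theoretic) foundation.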

For example, Henri Poincaré stated that axioms can only be demonstrated in their finite application, and concluded that it is "the power of the mind" which allows conceiving of the indefinite repetition of the same act.

A dramatic change arose with the work of Georg Cantor who was the first mathematician to systematically study infinite sets.

The first led to intuitionism and constructivism, and consisted of restricting the logical rules so as to remain closer to intuition, while the second, which has been called formalism, holds that a theorem is true if it can be deduced from axioms by applying inference rules (a formal proof), and that no "truth" of the axioms is needed for the validity of a theorem.
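The formalist notion of deducibility can be illustrated with a toy proof system (a hypothetical illustration, not any logic used by the formalists themselves): theorems are exactly the strings reachable from the axioms by repeatedly applying a single inference rule, here modus ponens.

```python
# A toy "formalist" proof system: formulas are strings, the axioms are a
# set of formulas plus a list of implications, and a theorem is anything
# derivable by repeatedly applying modus ponens (from P and P -> Q, infer Q).

def derive(axioms, implications, max_steps=100):
    """Close the axioms under modus ponens and return all theorems."""
    theorems = set(axioms)
    for _ in range(max_steps):
        new = {q for p, q in implications if p in theorems and q not in theorems}
        if not new:
            break
        theorems |= new
    return theorems

axioms = {"A"}
implications = [("A", "B"), ("B", "C")]  # read ("A", "B") as the axiom A -> B
print(sorted(derive(axioms, implications)))  # ['A', 'B', 'C']
```

Note that nothing in the derivation appeals to what "A", "B", or "C" mean; validity is a purely syntactic matter, which is exactly the formalist stance described above.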

This formula game enables us to express the entire thought-content of the science of mathematics in a uniform manner and develop it in such a way that, at the same time, the interconnections between the individual propositions and facts become clear ...

Hermann Weyl posed these very questions to Hilbert: What "truth" or objectivity can be ascribed to this theoretic construction of the world, which presses far beyond the given, is a profound philosophical problem.

As noted by Weyl, formal logical systems also run the risk of inconsistency; in Peano arithmetic, this arguably has already been settled with several proofs of consistency, but there is debate over whether or not they are sufficiently finitary to be meaningful.

Some theories tend to focus on mathematical practice, and aim to describe and analyze the actual working of mathematicians as a social group.

Bertrand Russell and Alfred North Whitehead championed this theory initiated by Gottlob Frege and influenced by Richard Dedekind.

Several set theorists followed this approach and actively searched for axioms that may be considered as true for heuristic reasons and that would decide the continuum hypothesis.

Typically, they see this as ensured by remaining open-minded, practical, and busy, and as potentially threatened by becoming overly ideological, fanatically reductionistic, or lazy.

It is just that philosophical principles have not generally provided us with the right preconceptions. Weinberg believed that any undecidability in mathematics, such as the continuum hypothesis, could potentially be resolved, despite the incompleteness theorem, by finding suitable further axioms to add to set theory.

Gödel's completeness theorem establishes an equivalence in first-order logic between the formal provability of a formula and its truth in all possible models.
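In symbols: for a first-order theory T and a sentence φ, semantic truth in every model coincides with formal derivability,

```latex
\[
  T \models \varphi \quad \Longleftrightarrow \quad T \vdash \varphi .
\]
```

The left-hand side says φ holds in every model of T; the right-hand side says φ has a formal proof from T.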