Nonstandard calculus

Non-rigorous calculations with infinitesimals were widely used before Karl Weierstrass sought to replace them with the (ε, δ)-definition of limit starting in the 1870s.

For almost one hundred years thereafter, mathematicians such as Richard Courant viewed infinitesimals as being naive and vague or meaningless.[1]

Contrary to such views, Abraham Robinson showed in 1960 that infinitesimals are precise, clear, and meaningful, building upon work by Edwin Hewitt and Jerzy Łoś.

According to Howard Keisler, "Robinson solved a three hundred year old problem by giving a precise treatment of infinitesimals."

The use of infinitesimals can be found in the foundations of calculus independently developed by Gottfried Leibniz and Isaac Newton starting in the 1660s.

John Wallis refined earlier techniques of indivisibles of Cavalieri and others by exploiting an infinitesimal quantity he denoted 1/∞ in area calculations.[3]

They drew on the work of such mathematicians as Pierre de Fermat, Isaac Barrow and René Descartes.

In early calculus the use of infinitesimal quantities was criticized by a number of authors, most notably Michel Rolle and Bishop Berkeley in his book The Analyst.

Augustin-Louis Cauchy developed a versatile spectrum of foundational approaches, including a definition of continuity in terms of infinitesimals and a (somewhat imprecise) prototype of an ε, δ argument in working with differentiation.

Karl Weierstrass formalized the concept of limit in the context of a (real) number system without infinitesimals.

Following the work of Weierstrass, it eventually became common to base calculus on ε, δ arguments instead of infinitesimals.

Robinson's approach, known as nonstandard analysis, uses technical machinery from mathematical logic to create a theory of hyperreal numbers that interprets infinitesimals in a manner allowing a Leibniz-like development of the usual rules of calculus.

An alternative approach, developed by Edward Nelson, finds infinitesimals on the ordinary real line itself, and involves a modification of the foundational setting by extending ZFC through the introduction of a new unary predicate "standard".

Discarding the "error term" is accomplished by an application of the standard part function.
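As a toy illustration of discarding the error term (a sketch not from the article; dual numbers, in which ε² = 0 exactly, are a deliberately simplified stand-in for genuine infinitesimals), the standard part function can be modelled as reading off the real coefficient:

```python
from dataclasses import dataclass

# Dual numbers a + b*eps with eps^2 = 0: a simplified stand-in for
# hyperreals, in which the "error term" vanishes automatically.
@dataclass(frozen=True)
class Dual:
    a: float  # standard (real) part
    b: float  # coefficient of the infinitesimal eps

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __sub__(self, other):
        return Dual(self.a - other.a, self.b - other.b)

    def __mul__(self, other):
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + a2*b1)*eps
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def st(u: Dual) -> float:
    """Standard part: discard the infinitesimal error term."""
    return u.a

# Derivative of f(x) = x^2 at x = 3: expand f(x + eps) - f(x) = 6*eps.
eps = Dual(0.0, 1.0)
x = Dual(3.0, 0.0)
diff = (x + eps) * (x + eps) - x * x
print(diff.b)  # the eps-coefficient 6.0 is the derivative f'(3)
```

Division of 6ε by ε is simulated here by reading off the ε-coefficient, since ε itself is not invertible in the dual numbers.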

Dispensing with infinitesimal error terms was historically considered paradoxical by some writers, most notably George Berkeley.

Once the hyperreal number system (an infinitesimal-enriched continuum) is in place, a large part of the technical difficulties has already been absorbed at the foundational level.

Thus, the epsilon, delta techniques that some believe to be the essence of analysis can be implemented once and for all at the foundational level, and the students needn't be "dressed to perform multiple-quantifier logical stunts on pretense of being taught infinitesimal calculus", to quote a recent study.[4]

More specifically, the basic concepts of calculus such as continuity, derivative, and integral can be defined using infinitesimals without reference to epsilon, delta.

To give an intuitive idea for the hyperreal approach, note that, naively speaking, nonstandard analysis postulates the existence of positive numbers ε which are infinitely small, meaning that ε is smaller than any standard positive real, yet greater than zero.
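Such an ε can be pictured through the sequence construction behind the hyperreals (a naive sketch: a proper ultrapower construction quotients by an ultrafilter, while here "eventually" is simply tested at one large index):

```python
from fractions import Fraction

# Naive sequence picture behind the hyperreals (sketch only: the real
# construction uses an ultrafilter; "eventually" is tested at one index).
def eps(n):                     # the infinitesimal: the sequence 1/n
    return Fraction(1, n)

def standard(r):                # a standard real as a constant sequence
    return lambda n: Fraction(r)

def eventually_less(x, y, index=10**6):
    return x(index) < y(index)

# eps is below every standard positive real tested, yet above zero.
for r in (1, Fraction(1, 100), Fraction(1, 10**5)):
    assert eventually_less(eps, standard(r))
assert eventually_less(standard(0), eps)
print("0 < eps < r for each tested standard r > 0")
```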

Writing ≈ for the relation of being infinitely close as above, the definition can be extended to arbitrary (standard or nonstandard) points as follows: a function f is microcontinuous at x if whenever x′ ≈ x, one has f*(x′) ≈ f*(x).

A function f on an interval I is uniformly continuous if its natural extension f* in I* has the following property:[5] for every pair of hyperreals x and y in I*, if x ≈ y then f*(x) ≈ f*(y).

This definition has a reduced quantifier complexity when compared with the standard (ε, δ)-definition.

It has the same quantifier complexity as the definition of uniform continuity in terms of sequences in standard calculus, which however is not expressible in the first-order language of the real numbers.

Example 3: similarly, the failure of uniform continuity for the squaring function is due to the absence of microcontinuity at a single infinite hyperreal point.
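This failure can be illustrated in the same naive sequence picture (a sketch under simplifying assumptions, not the article's machinery): at an infinite point H, the point H + 1/H is infinitely close to H, yet the squares differ by 2 + 1/H², which is not infinitesimal:

```python
from fractions import Fraction

# Sequence sketch of the failure of microcontinuity of x -> x^2 at an
# infinite point H (naive picture, ultrafilter ignored).
def H(n):                       # an infinite hyperreal: the sequence n
    return Fraction(n)

def H_close(n):                 # H + 1/H, infinitely close to H
    return Fraction(n) + Fraction(1, n)

n = 10**6
gap = H_close(n) - H(n)             # 1/n -> 0: an infinitesimal difference
sq_gap = H_close(n)**2 - H(n)**2    # 2 + 1/n^2 -> 2: not infinitesimal
print(gap, sq_gap)
```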

By the transfer principle, the natural extension of the Dirichlet function takes the value 1 at aₙ.

While the thrust of Robinson's approach is that one can dispense with the approach using multiple quantifiers, the notion of limit can be easily recaptured in terms of the standard part function st: lim f(x) = L as x → a if and only if whenever the difference x − a is infinitesimal, the difference f(x) − L is infinitesimal as well, or in formulas: if st(x) = a then st(f(x)) = L; cf. the (ε, δ)-definition of limit.
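The recaptured notion of limit can be checked in a crude finite stand-in (a sketch with an arbitrarily chosen function; a small h plays the role of the infinitesimal difference x − a):

```python
# Crude finite stand-in for the standard-part characterization of a limit:
# small h models an infinitesimal difference x - a.  The test function is
# chosen arbitrarily and is undefined at x = 2 itself.
def f(x):
    return (x * x - 4) / (x - 2)

a, L = 2.0, 4.0
for h in (1e-3, 1e-6, -1e-6):          # x - a "infinitesimal", either sign
    assert abs(f(a + h) - L) < 1e-2    # then f(x) - L is small too
print("f(x) -> 4 as x -> 2")
```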

The standard (ε, δ)-style definition, on the other hand, does have quantifier alternations: for every ε > 0 there exists δ > 0 such that for every x, if |x − a| < δ then |f(x) − L| < ε.

To show that a real continuous function f on [0,1] has a maximum, let N be an infinite hyperinteger.
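The hyperfinite idea, partitioning [0, 1] into N equal pieces and taking the maximum over the grid points, can only be mimicked in ordinary Python with a large finite N (a sketch; the test function is arbitrary):

```python
from fractions import Fraction

# Finite-N stand-in for the hyperfinite argument: partition [0, 1] into N
# equal pieces and take the maximum of f over the grid points i/N.  In the
# nonstandard proof N is an infinite hyperinteger, and the standard part
# of the grid maximum is the true maximum of f.
def grid_max(f, N):
    return max(f(Fraction(i, N)) for i in range(N + 1))

f = lambda x: x * x * (1 - x)          # true maximum 4/27 at x = 2/3
for N in (10, 100, 1000):
    print(N, float(grid_max(f, N)))
# the grid maxima approach the true maximum 4/27 ≈ 0.148148
```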

Note that a similar result holds for differentiability at the endpoints a, b provided the sign of the infinitesimal h is suitably restricted.
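For illustration (a sketch with a hypothetical helper and an arbitrary test function), at the left endpoint a = 0 of [0, 1] only positive h keeps a + h inside the interval, and the quotient then approximates the one-sided derivative:

```python
# One-sided derivative at the endpoint a = 0 of [0, 1]: only h > 0 keeps
# a + h inside the interval (hypothetical helper; f chosen arbitrarily).
def one_sided_derivative(f, a, h):
    return (f(a + h) - f(a)) / h

f = lambda x: x ** 2
for h in (1e-3, 1e-6, 1e-9):           # positive h only
    print(h, one_sided_derivative(f, 0.0, h))
# the quotients shrink toward the one-sided derivative f'(0+) = 0
```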

For the second theorem, the Riemann integral is defined as the limit, if it exists, of a directed family of Riemann sums; these are sums of the form ∑ f(ξᵢ)(xᵢ₊₁ − xᵢ), where a = x₀ ≤ ξ₀ ≤ x₁ ≤ … ≤ xₙ₋₁ ≤ ξₙ₋₁ ≤ xₙ = b. Such a sequence of values is called a partition or mesh, and max (xᵢ₊₁ − xᵢ) the width of the mesh.
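Riemann sums of this form are easy to compute for an equal-width mesh (a sketch; the function f(x) = x² and the interval [0, 1] are chosen purely for illustration):

```python
from fractions import Fraction

# Riemann sum  sum_i f(xi_i) * (x_{i+1} - x_i)  for an equal-width
# partition of [a, b], sampling xi_i at the left endpoint of each piece.
def riemann_sum(f, a, b, n):
    width = Fraction(b - a, n)         # width of the mesh: (b - a)/n
    return sum(f(a + i * width) * width for i in range(n))

f = lambda x: x * x                    # integral over [0, 1] is 1/3
for n in (10, 100, 1000):
    print(n, float(riemann_sum(f, 0, 1, n)))
# the sums approach 1/3 as the width of the mesh shrinks
```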

One immediate application is an extension of the standard definitions of differentiation and integration to internal functions on intervals of hyperreal numbers.