Regula falsi

As an example, consider problem 26 in the Rhind papyrus, which asks for a solution of (written in modern notation) the equation x + x/4 = 15.

Simple false position is aimed at solving problems involving direct proportion.
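As a brief worked sketch of simple false position applied to the Rhind problem above (the trial value 4 is chosen here only so that the fraction comes out evenly): guess x = 4, which gives 4 + 4/4 = 5 rather than the required 15; since 15/5 = 3, the guess is rescaled by 3 to give x = 3 · 4 = 12, and indeed 12 + 12/4 = 15.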

Double false position is aimed at solving more difficult problems that can be written algebraically in the form: determine x such that f(x) = b, if it is known that f(x1) = b1 and f(x2) = b2. Double false position is mathematically equivalent to linear interpolation.
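In modern notation this can be sketched as follows (the error symbols e1 and e2 are introduced here only for illustration): writing e1 = b1 − b and e2 = b2 − b for the errors of the two trial values x1 and x2, the rule computes

x = \frac{x_2 e_1 - x_1 e_2}{e_1 - e_2},

which is exactly the value obtained by linear interpolation through (x1, b1) and (x2, b2), and is the exact solution whenever f is affine.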

Indeed, the rule as given by Robert Recorde in his Ground of Artes (c. 1542) is:[2] Gesse at this woorke as happe doth leade.

Double false position arose in late antiquity as a purely arithmetical algorithm.

In the ancient Chinese text The Nine Chapters on the Mathematical Art, the procedure was justified by concrete arithmetical arguments, then applied creatively to a wide variety of story problems, including one involving what we would call secant lines on a conic section.[6]

Between the 9th and 10th centuries, the Egyptian mathematician Abu Kamil wrote a now-lost treatise on the use of double false position, known as the Book of the Two Errors (Kitāb al-khaṭāʾayn).

The oldest surviving writing on double false position from the Middle East is that of Qusta ibn Luqa (10th century), an Arab mathematician from Baalbek, Lebanon.

Within the tradition of medieval Muslim mathematics, double false position was known as hisāb al-khaṭāʾayn ("reckoning by two errors").[7]

Leonardo of Pisa (Fibonacci) devoted Chapter 13 of his book Liber Abaci (AD 1202) to explaining and demonstrating the uses of double false position, terming the method regulis elchatayn after the al-khaṭāʾayn method that he had learned from Arab sources.

For instance, Tartaglia translated the Latinized version of Pacioli's term into the vernacular "false positions" in 1556.

Regula Falsi appears as the Latinized version of Rule of False as early as 1690.[2]

Several 16th-century European authors felt the need to apologize for the name of the method in a science that seeks to find the truth.

For instance, in 1568 Humphrey Baker says:[2] The Rule of falsehoode is so named not for that it teacheth anye deceyte or falsehoode, but that by fayned numbers taken at all aduentures, it teacheth to finde out the true number that is demaunded, and this of all the vulgar Rules which are in practise) is ye most excellence.

The method of false position provides an exact solution for linear functions, but more direct algebraic techniques have supplanted its use for these functions.

In numerical analysis, however, double false position lives on as a bracketing method for root-finding. Bracketing methods start with two x-values, initially found by trial and error, at which f(x) has opposite signs. They then produce a sequence of shrinking intervals [ak, bk] such that, at the kth step, (ak, bk) contains a root of f.

A point strictly between these two values is then selected and used to create a smaller interval that still brackets a root.

The simplest variation, called the bisection method, calculates the solution estimate as the midpoint of the bracketing interval.

Since each step halves the width of the bracketing interval, every 3 iterations the method gains approximately a factor of 2³ = 8, i.e. roughly a decimal place, in accuracy.
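A minimal sketch of the bisection method in Python (the function name bisect, the tolerance, and the iteration cap are assumptions introduced for illustration, not taken from the text):

def bisect(f, a, b, tol=1e-12, max_iter=200):
    # Assumes a < b and that f(a), f(b) have opposite signs.
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2          # midpoint of the current bracket
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c
        if fa * fc < 0:          # root lies in [a, c]
            b, fb = c, fc
        else:                    # root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2

Each pass halves the bracket; three passes shrink it by 2³ = 8, which is the source of the "roughly a decimal place per three iterations" figure above.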

The convergence rate of the bisection method could possibly be improved by using a different solution estimate.

The regula falsi method calculates the new solution estimate as the x-intercept of the line segment joining the endpoints of the function on the current bracketing interval.

This line is a secant or chord of the graph of the function f. In point-slope form, its equation is given by

y - f(b_k) = \frac{f(b_k) - f(a_k)}{b_k - a_k}\,(x - b_k).

Now choose ck to be the x-intercept of this line, that is, the value of x for which y = 0, and substitute these values to obtain

f(b_k) + \frac{f(b_k) - f(a_k)}{b_k - a_k}\,(c_k - b_k) = 0.

Solving this equation for ck gives:

c_k = b_k - f(b_k)\,\frac{b_k - a_k}{f(b_k) - f(a_k)} = \frac{a_k f(b_k) - b_k f(a_k)}{f(b_k) - f(a_k)}.

This last symmetrical form has a computational advantage: as a solution is approached, ak and bk will be very close together, and nearly always of the same sign. Subtracting two such nearly equal numbers would lose significant digits; because f(ak) and f(bk) always have opposite signs, the subtractions in the numerator and the denominator of the symmetrical form are effectively additions and do not suffer from this cancellation.
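A minimal sketch of the resulting iteration in Python, using the symmetrical update above (the function name, the stopping test on |f(ck)|, and the iteration cap are illustrative assumptions):

def regula_falsi(f, a, b, tol=1e-12, max_iter=200):
    # Assumes f(a) and f(b) have opposite signs, so [a, b] brackets a root.
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        # x-intercept of the secant through (a, f(a)) and (b, f(b)),
        # written in the symmetrical form discussed above
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:          # root lies in [a, c]: c replaces b
            b, fb = c, fc
        else:                    # root lies in [c, b]: c replaces a
            a, fa = c, fc
    return c

For example, regula_falsi(lambda x: x**2 - 2, 1.0, 2.0) approximates the square root of 2.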

The fact that regula falsi always converges, and has versions that do well at avoiding slowdowns, makes it a good choice when speed is needed. However, the plain method can be slow: when f is, for example, convex or concave over the whole bracketing interval, the secant step updates only one endpoint, and the other endpoint is retained at every iteration. As a result, unlike the bisection method, the width of the bracket does not tend to zero (unless the zero is at an inflection point around which sign(f) = −sign(f″)).

As a consequence, the linear approximation to f (x), which is used to pick the false position, does not improve as rapidly as possible.

Regula falsi's failure mode is easy to detect: The same end-point is retained twice in a row.

The problem is easily remedied by picking instead a modified false position, chosen to avoid slowdowns due to those relatively unusual unfavorable situations.

The Illinois algorithm halves the y-value of the retained end point in the next estimate computation when the new y-value (that is, f(ck)) has the same sign as the previous one (f(ck−1)), meaning that the end point of the previous step will be retained.[10][12]
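A minimal sketch of this modification in Python (the function name illinois, the tolerance, and the iteration cap are illustrative assumptions, not taken from the text):

def illinois(f, a, b, tol=1e-12, max_iter=200):
    # Regula falsi with the Illinois modification; assumes f(a), f(b) have opposite signs.
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    side = 0                     # which end point was retained on the previous step
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:          # root lies in [a, c]: c replaces b, a is retained
            b, fb = c, fc
            if side == -1:       # a was also retained last step: halve its stored y-value
                fa *= 0.5
            side = -1
        else:                    # root lies in [c, b]: c replaces a, b is retained
            a, fa = c, fc
            if side == +1:       # b was also retained last step: halve its stored y-value
                fb *= 0.5
            side = +1
    return c

The side variable records which end point was retained on the previous step, so the halving is applied exactly when the same end point is about to be retained twice in a row.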

Ford (1995) summarizes and analyzes this and other similar superlinear variants of the method of false position.

The value of the function at this point is queried, and the interval is then reduced to bracket the root by keeping the sub-interval with function values of opposite sign on each end.

This approach is observed to outperform both bisection and interpolation-based methods on both smooth and non-smooth functions.

Figure: The first two iterations of the false position method; the red curve shows the function f and the blue lines are the secants.
Figure: Plot of a function F, its exact root (point K), and the approximated root.