[20] Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web,[21] R in statistics,[22][23] J, K and Q in financial analysis, and XQuery/XSLT for XML.
The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT).
[48] John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style?".
In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs.
Scheme was the first dialect of Lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming.
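The benefit of required tail-call optimization can be illustrated with a sketch in JavaScript (the language used for the examples later in this article). In a tail-recursive function the recursive call is the last action performed, so an implementation with tail-call optimization can reuse the current stack frame rather than growing the stack:

```javascript
// Tail-recursive summation: the recursive call is in tail position,
// so nothing remains to be done after it returns. An accumulator
// carries the partial result forward.
function sumTo(n, acc = 0) {
  if (n === 0) return acc;       // base case: accumulator holds the answer
  return sumTo(n - 1, acc + n);  // tail call: eligible for frame reuse
}
```

Note that while ES2015 specified proper tail calls for JavaScript, most engines do not implement them; Scheme, by contrast, requires them, which makes recursion a practical substitute for loops.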
[citation needed] The lazy functional language Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell.
Because Miranda was proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementation releases have been ongoing since 1990.
This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one.
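A minimal sketch of this idea in JavaScript, using a curried addition function (the names here are illustrative, not from the original):

```javascript
// Curried addition: add(x) returns a function still awaiting y.
const add = x => y => x + y;

// The successor function is addition partially applied to 1.
const succ = add(1);
```

Applying `succ` to any natural number yields the next one, without the programmer having to write a dedicated successor function.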
Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations.
Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples.
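As an informal sketch, a fold collapses a list into a single value with a combining function, while an unfold grows a list from a seed (the helper names below are illustrative assumptions):

```javascript
// Fold (catamorphism): collapse a list using a combining function
// and an initial accumulator.
const fold = (f, z, xs) => xs.reduce(f, z);

// Unfold (anamorphism): build a list from a seed. step(seed)
// returns null to stop, or [value, nextSeed] to continue.
const unfold = (step, seed) => {
  const out = [];
  for (let r = step(seed); r !== null; r = step(r[1])) out.push(r[0]);
  return out;
};

// Summation expressed as a fold; counting down expressed as an unfold.
const sum = xs => fold((a, b) => a + b, 0, xs);
const countdown = n => unfold(s => (s === 0 ? null : [s, s - 1]), n);
```

Many common recursive list functions (sum, length, map, filter) are instances of fold; generating ranges or sequences is an instance of unfold.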
Some special-purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata).
Hughes 1984 argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams.
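The producer/consumer separation Hughes describes can be approximated in JavaScript with generators, which are evaluated on demand (this is an illustrative sketch, not an example from the original text):

```javascript
// Producer: an infinite stream of natural numbers, generated lazily.
function* naturals() {
  for (let n = 0; ; n++) yield n;
}

// Transformer: lazily keep only elements satisfying a predicate.
function* filter(pred, stream) {
  for (const x of stream) if (pred(x)) yield x;
}

// Consumer: take the first k elements, forcing only what it needs.
function take(k, stream) {
  const out = [];
  for (const x of stream) {
    if (out.length === k) break;
    out.push(x);
  }
  return out;
}

// The producer and consumer are written independently; laziness
// ensures only three even numbers are ever computed.
const firstEvens = take(3, filter(n => n % 2 === 0, naturals()));
```

The producer need not know how much data the consumer wants, and the consumer need not know how the data is generated, which is precisely the modularity argument.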
[2] Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis.
Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code.
The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and add them all, storing the final sum in the variable "result".
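A sketch of what such a pair of examples looks like (the array contents and the second variable name are assumptions for illustration; the text specifies only that the final sum is stored in "result"):

```javascript
const numList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

// Imperative version: an explicit loop mutating an accumulator.
let result = 0;
for (let i = 0; i < numList.length; i++) {
  if (numList[i] % 2 === 0) {
    result += numList[i] * 10;
  }
}

// Functional version: a pipeline of pure functions, no mutation.
const result2 = numList
  .filter(n => n % 2 === 0)    // keep the even numbers
  .map(n => n * 10)            // multiply each by 10
  .reduce((a, b) => a + b, 0); // sum them
```

Both compute the same value; the functional version names the intent of each step (filter, map, reduce) rather than spelling out the loop machinery.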
This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations.
[citation needed] Alternative methods such as Hoare logic and uniqueness have been developed to track side effects in programs.
[86] For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations.
Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion.
Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly).
Launchbury 1993[66] discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008[94] give some practical advice for analyzing and fixing them.
In a benchmark provided by the library's author, calling map is 4% slower than an equivalent for loop and has the same allocation profile,[95] which can be attributed to various compiler optimizations, such as inlining.
Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#.
[112] Helix, since version 24.03, supports previewing the AST as S-expressions, which are also a core feature of the Lisp programming language family.
Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far remain primarily academic in nature.
[10][12][117][118][119] Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers[3][4] and has been applied to problems such as training-simulation software[5] and telescope control.
[15] Haskell, though initially intended as a research language,[17] has also been applied in areas such as aerospace systems, hardware design and web programming.
[124] Scala has been widely used in data science,[125] while ClojureScript,[126] Elm,[127] and PureScript[128] are among the functional frontend programming languages used in production.
Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, in a manner similar to Gröbner basis optimizations, and also to satisfy regulatory frameworks such as the Comprehensive Capital Analysis and Review.