Reactive programming

With this paradigm, it is possible to express static data streams (e.g., arrays) or dynamic data streams (e.g., event emitters) with ease, and also to declare dependencies within the associated execution model, which facilitates the automatic propagation of changed data through the flow.
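The automatic propagation described above can be sketched in a few lines of Python. This is an illustrative minimal example; the names `Signal` and `derive` are hypothetical and do not come from any particular library.

```python
# Minimal sketch of reactive value propagation (illustrative names,
# not from any particular library).

class Signal:
    """A value that notifies its dependents when it changes."""
    def __init__(self, value):
        self._value = value
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set(self, value):
        self._value = value
        for callback in self._subscribers:
            callback(value)

    def get(self):
        return self._value


def derive(source, fn):
    """Create a signal automatically kept equal to fn(source)."""
    out = Signal(fn(source.get()))
    source.subscribe(lambda v: out.set(fn(v)))
    return out


a = Signal(1)
b = derive(a, lambda v: v * 2)   # the dependency is declared once ...
a.set(10)                        # ... and propagation is automatic
print(b.get())                   # -> 20
```

The key point is that `b` is never assigned again after its declaration; the changed value of `a` flows to it through the recorded dependency.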

Another example is a hardware description language such as Verilog, where reactive programming enables changes to be modeled as they propagate through circuits.

Reactive programming has been proposed as a way to simplify the creation of interactive user interfaces and near-real-time system animation.

For example, in a model–view–controller (MVC) architecture, reactive programming can allow changes in an underlying model to be reflected automatically in an associated view.
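A minimal sketch of this MVC idea in Python, assuming a simple observer-style wiring (the `Model` and `View` classes are illustrative, not from any particular framework):

```python
# Illustrative sketch: a view that re-renders automatically whenever
# the model it observes changes (not any particular framework's API).

class Model:
    def __init__(self, value):
        self._value = value
        self._observers = []

    def observe(self, callback):
        self._observers.append(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for callback in self._observers:
            callback(new)

class View:
    def __init__(self, model):
        self.rendered = ""
        model.observe(self.render)   # wiring done once, up front
        self.render(model.value)

    def render(self, value):
        self.rendered = f"<span>{value}</span>"

model = Model("hello")
view = View(model)
model.value = "world"     # no explicit view update is needed
print(view.rendered)      # -> <span>world</span>
```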

Another approach involves the specification of general-purpose languages that include support for reactivity.

In such a graph, nodes represent the act of computing, and edges model dependency relationships.

Such a runtime uses this graph to keep track of the computations that must be executed anew once an involved input changes value.
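Such a runtime could be sketched as follows; this is a simplified illustration, and the `Graph` API shown here is hypothetical:

```python
# Sketch of a runtime that records a dependency graph and re-executes
# only the computations downstream of a changed input (illustrative).

from collections import defaultdict, deque

class Graph:
    def __init__(self):
        self.values = {}                      # node name -> current value
        self.compute = {}                     # node name -> recompute function
        self.dependents = defaultdict(list)   # edge: input -> reader nodes

    def input(self, name, value):
        self.values[name] = value

    def node(self, name, deps, fn):
        self.compute[name] = lambda: fn(*(self.values[d] for d in deps))
        for d in deps:
            self.dependents[d].append(name)   # record dependency edges
        self.values[name] = self.compute[name]()

    def set(self, name, value):
        self.values[name] = value
        # Propagate along dependency edges, recomputing each dependent.
        queue = deque(self.dependents[name])
        while queue:
            n = queue.popleft()
            self.values[n] = self.compute[n]()
            queue.extend(self.dependents[n])

g = Graph()
g.input("x", 2)
g.node("square", ["x"], lambda x: x * x)
g.node("label", ["square"], lambda s: f"x^2 = {s}")
g.set("x", 5)
print(g.values["label"])   # -> x^2 = 25
```

Only `square` and `label` are re-executed when `x` changes; unrelated nodes would be left untouched.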

In this case, the information propagated along the graph's edges consists only of deltas describing how the upstream node changed.

This approach is especially important when nodes hold large amounts of state data, which would otherwise be expensive to recompute from scratch.
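Delta propagation can be illustrated with an aggregation node that applies only the change from its input rather than recomputing its state from scratch (a hypothetical sketch, not any library's API):

```python
# Sketch of delta propagation: a running-sum node applies the change
# (delta) from its input instead of re-summing everything (illustrative).

class SumNode:
    def __init__(self, items):
        self.total = sum(items)     # expensive: computed from scratch once

    def apply_delta(self, old, new):
        # Only the difference travels along the edge, so the update is
        # O(1) no matter how many items the node aggregates.
        self.total += new - old

items = list(range(1_000_000))
node = SumNode(items)
node.apply_delta(old=items[42], new=99)   # item 42 changed to 99
print(node.total)
```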

Delta propagation is essentially an optimization that has been extensively studied in the discipline of incremental computing, and applying it requires the runtime to address the view-update problem.

This problem is most famously associated with database systems, which are responsible for maintaining views over changing data.

This can, however, have performance implications, such as delaying the delivery of values (due to the order of propagation).

In some cases, therefore, reactive languages permit glitches: developers must be aware that values may temporarily fail to correspond to the program source, and that some expressions may evaluate more than once (for instance, t > seconds may evaluate twice: once when the new value of seconds arrives, and again when t updates).
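The t > seconds glitch mentioned above can be demonstrated with a naive, order-of-arrival update scheme (an illustrative sketch, not any particular language's semantics):

```python
# Sketch of a glitch under naive propagation (illustrative): with
# t = seconds + 1, the expression t > seconds should always be True,
# but it is briefly observed as False.

log = []

seconds = 0
t = seconds + 1

def on_seconds_changed(new_seconds):
    global seconds, t
    seconds = new_seconds
    # Naive runtime: re-evaluate t > seconds as soon as seconds changes ...
    log.append(t > seconds)     # glitch: t is still stale here
    # ... then update t, which triggers a second evaluation.
    t = seconds + 1
    log.append(t > seconds)

on_seconds_changed(5)
print(log)   # -> [False, True]: evaluated twice, once with a stale t
```

A glitch-free runtime would instead update nodes in topological order, so that t is refreshed before any expression reading it is re-evaluated.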

Reactive programming languages can range from very explicit ones, where data flows are set up using arrows, to implicit ones, where the data flows are derived from language constructs that look similar to those of imperative or functional programming.

Reactive programming libraries for dynamic languages (such as the Lisp "Cells" and Python "Trellis" libraries) can construct a dependency graph from runtime analysis of the values read during a function's execution, allowing data flow specifications to be both implicit and dynamic.
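The runtime-analysis technique used by such libraries can be sketched as follows; note that this is a simplified illustration in the same spirit, not the actual Cells or Trellis API:

```python
# Sketch of implicit dependency tracking (illustrative, not the real
# Cells/Trellis API): while a formula runs, every cell it reads
# registers itself as one of the formula's dependencies.

_active = []   # stack of cells whose formulas are currently evaluating

class Cell:
    def __init__(self, value=None, formula=None):
        self.formula = formula
        self.dependents = set()
        self.value = value if formula is None else self._run()

    def _run(self):
        _active.append(self)
        try:
            return self.formula()
        finally:
            _active.pop()

    def get(self):
        if _active:                              # a formula is reading us:
            self.dependents.add(_active[-1])     # record the dependency
        return self.value

    def set(self, value):
        self.value = value
        for dep in list(self.dependents):
            dep.value = dep._run()               # re-run affected formulas

price = Cell(10)
qty = Cell(3)
total = Cell(formula=lambda: price.get() * qty.get())
qty.set(4)
print(total.value)   # -> 40
```

The data-flow specification is implicit: `total` never names its inputs up front, and the graph is rediscovered on every run, so dependencies can change dynamically from one evaluation to the next.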

Here, differentiated reactive programming could potentially be used to give the spell checker lower priority, allowing it to be delayed while keeping other data flows instantaneous.

Naively propagating a change using a stack could be problematic, because the update complexity can become exponential if the data structure has a certain shape.
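One such shape is a chain of "diamond" dependencies, where depth-first propagation re-evaluates each node once per path from the root. The sketch below (illustrative, with a hypothetical graph encoding) counts evaluations to make the blow-up visible:

```python
# Sketch of why naive stack-based (depth-first) propagation can blow
# up: in a chain of n diamond-shaped dependencies, the final node is
# re-evaluated 2**n times, once per path from the root (illustrative).

from collections import defaultdict

dependents = defaultdict(list)
evaluations = defaultdict(int)

def diamond_chain(n):
    # root -> (a0, b0) -> join0 -> (a1, b1) -> join1 -> ...
    prev = "root"
    for i in range(n):
        a, b, join = f"a{i}", f"b{i}", f"join{i}"
        dependents[prev] = [a, b]
        dependents[a] = [join]
        dependents[b] = [join]
        prev = join
    return prev

def propagate_dfs(node):
    for d in dependents[node]:
        evaluations[d] += 1
        propagate_dfs(d)     # naive: recurse immediately, no ordering

last = diamond_chain(10)
propagate_dfs("root")
print(evaluations[last])     # -> 1024 evaluations of the final node
```

A runtime that first sorts the affected nodes topologically would evaluate each node exactly once instead.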

This could potentially make reactive programming highly memory-consuming.

For example, the observer pattern commonly describes data-flows between whole objects/classes, whereas object-oriented reactive programming could target the members of objects/classes.
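Member-level reactivity could be sketched with a Python descriptor; the `ReactiveAttr` and `Thermostat` names are hypothetical, chosen only to illustrate attaching subscribers to a single attribute rather than to a whole object:

```python
# Sketch of member-level reactivity (illustrative): subscribers are
# attached to one attribute of an object, not to the object as a whole.

class ReactiveAttr:
    """Descriptor that notifies subscribers when one member changes."""
    def __set_name__(self, owner, name):
        self.name = "_" + name

    def __get__(self, obj, objtype=None):
        return getattr(obj, self.name, None)

    def __set__(self, obj, value):
        setattr(obj, self.name, value)
        for callback in getattr(obj, "_subs", {}).get(self.name, []):
            callback(value)

class Thermostat:
    temperature = ReactiveAttr()   # only this member is reactive

    def watch_temperature(self, callback):
        self._subs = getattr(self, "_subs", {})
        self._subs.setdefault("_temperature", []).append(callback)

seen = []
t = Thermostat()
t.watch_temperature(seen.append)
t.temperature = 21.5
print(seen)   # -> [21.5]
```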

Not only does this facilitate event-based reactions, but it makes reactive programs instrumental to the correctness of software.

An example of a rule-based reactive programming language is Ampersand, which is founded on relation algebra.