Constraint programming

Constraint programming (CP)[1] is a paradigm for solving combinatorial problems that draws on a wide range of techniques from artificial intelligence, computer science, and operations research.

Constraints differ from the common primitives of imperative programming languages in that they do not specify a step or sequence of steps to execute, but rather the properties of a solution to be found.
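
As a minimal sketch of this idea (assuming SWI-Prolog and its clpfd library; the predicate name digit_pair is illustrative), the clause below only states properties that a solution must have, not the steps for computing one:

    :- use_module(library(clpfd)).

    % The body states *what* must hold, not *how* to find X and Y:
    % two decimal digits whose sum is 10, with X strictly below Y.
    digit_pair(X, Y) :-
        [X, Y] ins 0..9,    % the domains of the variables
        X + Y #= 10,        % property: the values sum to 10
        X #< Y.             % property: X is smaller than Y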

Solving the constraints typically draws upon standard methods like chronological backtracking and constraint propagation, but may use customized code like a problem-specific branching heuristic.
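
For instance, in SWI-Prolog's clpfd library (assumed here as the host system; the predicate name solve_with_heuristic is illustrative), the options passed to labeling/2 choose the branching heuristic applied on top of constraint propagation:

    :- use_module(library(clpfd)).

    % "ff" (first-fail) branches on the variable with the smallest remaining
    % domain first; "bisect" splits a chosen domain in half rather than
    % trying its values one by one.
    solve_with_heuristic(Vars) :-
        Vars = [X, Y, Z],
        Vars ins 1..100,
        X + Y + Z #= 123,
        X #< Y, Y #< Z,
        labeling([ff, bisect], Vars).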

Constraint logic programming, a variant of logic programming in which constraints are embedded in a logic program, is due to Jaffar and Lassez,[2] who in 1987 extended a specific class of constraints that were introduced in Prolog II.

The first implementations of constraint logic programming were Prolog III, CLP(R), and CHIP.

Constraint programming and logic programming share many important features, like logical variables and backtracking.

Today most Prolog implementations include one or more libraries for constraint logic programming.

The difference between the two paradigms is largely in their styles and approaches to modeling the world.

Definition — A constraint satisfaction problem on finite domains (or CSP) is defined by a triplet ⟨X, D, C⟩, where X = {x1, ..., xn} is the set of variables of the problem, D = {D(x1), ..., D(xn)} is the set of their domains (for each variable xi, D(xi) is the finite set of values that xi may take), and C = {C1, ..., Cm} is the set of constraints, each of which restricts the values that some subset of the variables may take simultaneously.

Three categories of constraints exist: extensional constraints, defined by explicitly listing the tuples of values that satisfy them; arithmetic constraints, defined by arithmetic expressions over the variables using operators such as <, >, ≤, ≥, =, ≠; and logical constraints, defined by an explicit semantics, such as AllDifferent or AtMost.

Definition — An assignment (or model) maps variables of the CSP to values in their domains; an assignment is partial when only some of the variables are assigned a value, and total when every variable is assigned a value.

Definition — A solution of a CSP is a total assignment that satisfies all the constraints of the problem.
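
As a concrete instance, the CSP with variables x and y, domains D(x) = D(y) = {1, 2, 3} and the single constraint x < y has exactly three solutions: (1, 2), (1, 3) and (2, 3). A sketch of this CSP, assuming SWI-Prolog's clpfd library (the predicate name small_csp is illustrative), is:

    :- use_module(library(clpfd)).

    % Two variables with domain {1,2,3} and one constraint; label/1
    % enumerates the total assignments that satisfy it.
    small_csp(X, Y) :-
        [X, Y] ins 1..3,
        X #< Y,
        label([X, Y]).

Querying small_csp(X, Y) enumerates the three solutions in turn on backtracking.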

A constraint optimization problem (COP) is a constraint satisfaction problem associated with an objective function; an optimal solution is a solution that minimizes (or maximizes) the value of that function. During the search for the solutions of a COP, a user may wish to find a single solution, to enumerate all solutions, or to find an optimal solution together with a proof of its optimality.

Languages for constraint-based programming follow one of two approaches:[3] the refinement model or the perturbation model. Constraint propagation in constraint satisfaction problems is a typical example of the refinement model, and formula evaluation in spreadsheets is a typical example of the perturbation model.

The refinement model is more general: since it does not restrict variables to a single value, it can lead to several solutions to the same problem.

However, the perturbation model is more intuitive for programmers using languages that mix imperative, object-oriented, and constraint features.

Constraint propagation techniques enforce forms of local consistency, removing from the domains of the variables values that cannot appear in any solution. This leads to a reduction of the search space, making the problem easier to solve by some algorithms.
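
For instance (assuming SWI-Prolog's clpfd library; the predicate name narrowed is illustrative), posting a single inequality is enough for a bounds-propagating solver to shrink both domains before any search takes place:

    :- use_module(library(clpfd)).

    % From X in 0..9, Y in 0..9 and X #> Y + 5, propagation alone can
    % narrow the domains to X in 6..9 and Y in 0..3: every removed value
    % appears in no solution of the constraint.
    narrowed(X, Y) :-
        X in 0..9,
        Y in 0..9,
        X #> Y + 5.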

Backtracking search is a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.
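
The sketch below (plain Prolog without a constraint library; the predicate names are illustrative) makes this explicit for the N-queens problem: queens are placed row by row, and a partial placement is abandoned as soon as the new queen attacks an earlier one, which triggers backtracking to the previous choice point.

    % queens(+N, -Qs): Qs is a placement of N non-attacking queens,
    % given as one column per row.
    queens(N, Qs) :-
        numlist(1, N, Columns),
        place(Columns, [], Qs).

    place([], Placed, Placed).
    place(Columns, Placed, Qs) :-
        select(Col, Columns, Rest),     % choose a column for the next row (choice point)
        \+ attacks(Col, Placed),        % abandon the candidate if it attacks a placed queen
        place(Rest, [Col|Placed], Qs).

    % A queen in column Col attacks a previously placed queen if they share
    % a diagonal (columns are all distinct by construction).
    attacks(Col, Placed) :-
        nth1(Dist, Placed, Q),
        abs(Col - Q) =:= Dist.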

Local search is an incomplete method for finding a solution to a problem.

It is based on iteratively improving an assignment of the variables until all constraints are satisfied.
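
A minimal sketch of such a method for the N-queens problem (a min-conflicts style repair, assuming SWI-Prolog; the predicate names are illustrative) starts from a random assignment and repeatedly moves one queen to the column that minimizes the number of attacking pairs; because the step budget can run out before all conflicts disappear, the method is incomplete:

    :- use_module(library(random)).
    :- use_module(library(lists)).

    % local_queens(+N, +MaxSteps, -Qs): try to repair a random placement of
    % N queens (one column per row) within MaxSteps moves; fails if the
    % budget is exhausted first.
    local_queens(N, MaxSteps, Qs) :-
        length(Qs0, N),
        maplist(random_between(1, N), Qs0),     % random initial assignment
        improve(Qs0, N, MaxSteps, Qs).

    improve(Qs, _, _, Qs) :-
        conflicts(Qs, 0), !.                    % no attacking pair: all constraints hold
    improve(Qs, N, Steps, Result) :-
        Steps > 0,
        random_between(1, N, Row),              % pick a row to repair
        numlist(1, N, Cols),
        findall(C-Col,
                ( member(Col, Cols),
                  set_nth(Row, Qs, Col, Qs1),
                  conflicts(Qs1, C) ),
                Scored),
        keysort(Scored, [_-Best|_]),            % column giving the fewest conflicts
        set_nth(Row, Qs, Best, Qs2),
        Steps1 is Steps - 1,
        improve(Qs2, N, Steps1, Result).

    % conflicts(+Qs, -C): number of pairs of queens attacking each other.
    conflicts(Qs, C) :-
        findall(1,
                ( nth1(I, Qs, Qi), nth1(J, Qs, Qj), I < J,
                  ( Qi =:= Qj ; abs(Qi - Qj) =:= J - I ) ),
                L),
        length(L, C).

    % set_nth(+Index, +List, +Value, -NewList): replace the Index-th element.
    set_nth(1, [_|T], X, [X|T]).
    set_nth(I, [H|T], X, [H|T1]) :-
        I > 1, I0 is I - 1,
        set_nth(I0, T, X, T1).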

Dynamic programming refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner.
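
As a small illustration of that decomposition (using SWI-Prolog's tabling directive as an assumed memoization mechanism; the example is generic dynamic programming rather than anything specific to constraint solving):

    :- table fib/2.      % cache sub-problem results so each is solved once

    % fib(+N, -F): F is the N-th Fibonacci number, obtained by combining
    % the solutions of the two smaller sub-problems.
    fib(0, 0).
    fib(1, 1).
    fib(N, F) :-
        N > 1,
        N1 is N - 1, N2 is N - 2,
        fib(N1, F1), fib(N2, F2),
        F is F1 + F2.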

The syntax for expressing constraints over finite domains depends on the host language.

The following is a Prolog program that solves the classical alphametic puzzle SEND+MORE=MONEY in constraint logic programming. The interpreter creates a variable for each letter in the puzzle.
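
A version of this program, written here as a sketch against SWI-Prolog's clpfd library (the predicate name sendmore is illustrative, and other Prolog systems use slightly different constraint syntax), is:

    :- use_module(library(clpfd)).

    sendmore(Digits) :-
        Digits = [S, E, N, D, M, O, R, Y],   % one variable per letter
        Digits ins 0..9,                     % every letter stands for a decimal digit
        S #\= 0,                             % leading digits cannot be zero
        M #\= 0,
        all_different(Digits),               % distinct letters denote distinct digits
                     1000*S + 100*E + 10*N + D
                   + 1000*M + 100*O + 10*R + E
        #= 10000*M + 1000*O + 100*N + 10*E + Y,
        label(Digits).                       % search for concrete values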

The operator ins is used to specify the domains of these variables, so that they range over the set of values {0,1,2,3, ..., 9}.

The constraints S #\= 0 and M #\= 0 state that these two variables cannot take the value zero; when the interpreter evaluates them, it reduces the domains of these two variables by removing the value 0 from them.

Then, the constraint all_different(Digits) is considered; it does not reduce any domain, so it is simply stored.

Constraint propagation may solve the problem by reducing all domains to a single value; it may prove that the problem has no solution by reducing a domain to the empty set; but it may also terminate without proving either satisfiability or unsatisfiability.
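
For example, the query below (against the sendmore/1 sketch above) makes label/1 perform the search that remains after propagation has stopped, and returns the well-known unique assignment, corresponding to 9567 + 1085 = 10652:

    ?- sendmore([S, E, N, D, M, O, R, Y]).
    S = 9, E = 5, N = 6, D = 7, M = 1, O = 0, R = 8, Y = 2.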