If the algorithm can prove this fact, it can directly consider a different value for the earlier variable, skipping the intermediate ones.
In practice, backjumping algorithms use the lowest index they can efficiently prove to be a safe jump.
These methods have different costs, but the higher cost of finding a higher safe jump may be traded off against the reduced amount of search that results from skipping parts of the search tree.
The simplest condition in which backjumping is possible is when all values of a variable have been proved inconsistent without further branching.
In this case, the node of the current variable is a leaf of the search tree (corresponding to nodes that have only leaves as children in the figures of this article).
A safe jump can be found by simply evaluating, for every value of the variable, the shortest prefix of the current partial solution that is inconsistent with that value; the maximal index obtained this way is a safe jump.
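This per-value computation can be sketched as follows. The sketch assumes a hypothetical predicate `conflicts_with` that reports whether a prefix of the current partial solution is inconsistent with a candidate value for the current variable; all names here are illustrative, not part of any standard API.

```python
def shortest_inconsistent_prefix(assignment, value, conflicts_with):
    """Smallest k such that the first k assigned values are already
    inconsistent with the candidate value, or None if no prefix is."""
    for k in range(len(assignment) + 1):
        if conflicts_with(assignment[:k], value):
            return k
    return None

def safe_jump(assignment, domain, conflicts_with):
    """Maximum, over all candidate values, of the shortest inconsistent
    prefix length: every value is ruled out by the first `jump` assigned
    values, so the search can retract to that point directly."""
    jump = 0
    for value in domain:
        k = shortest_inconsistent_prefix(assignment, value, conflicts_with)
        assert k is not None, "not a dead end: some value is consistent"
        jump = max(jump, k)
    return jump

# Toy dead end: with x0=1, x1=2, x2=3 assigned, value 0 for the current
# variable already conflicts with x0, and value 1 conflicts with x1.
def conflicts_with(prefix, value):
    if value == 0:
        return len(prefix) >= 1 and prefix[0] == 1
    return len(prefix) >= 2 and prefix[1] == 2

print(safe_jump([1, 2, 3], [0, 1], conflicts_with))  # → 2
```

Here the jump index 2 means that the first two assignments alone rule out every value of the current variable, so the search can safely retract to the second variable without reconsidering the third.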
The previous algorithm only backjumps when the values of a variable can be shown inconsistent with the current partial solution without further branching.
In other words, it allows for a backjump only at leaf nodes in the search tree.
An internal node of the search tree represents an assignment of a variable that is consistent with the previous ones.
If no solution extends this assignment, the previous algorithm always backtracks: no backjump is done in this case.
As a result, searching for a prefix that is inconsistent with these values of the last variable does not succeed.
This return is due to a number of dead ends, points where the algorithm has proved a partial solution inconsistent.
To backjump further, the algorithm has to take into account that the impossibility of finding solutions is due to these dead ends. In particular, the safe jumps are the indexes of prefixes that still render these dead ends inconsistent partial solutions.
Due to the potentially high number of nodes in the subtree of a node, the information that is relevant to backjumping from it is collected during the visit of that subtree.
In other words, a backjump indicates that the visit of a region of the search tree had been a mistake.
This fact can be exploited by collecting, in each node, a set of previously assigned variables whose evaluation suffices to prove that no solution exists in the subtree rooted at the node.
Since nodes that are skipped over by backjumping are never retracted from, their sets are automatically ignored.
The rationale of graph-based backjumping is that a safe jump can be found by checking which of the previously assigned variables share a constraint with the current one. When all values of the current variable have been tried, this set contains the indexes of the variables whose evaluations suffice to prove that no solution can be found by visiting the subtree rooted at the current node.
As a result, the algorithm can backjump to the highest index in this set.
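Assuming variables indexed 0, 1, … and assigned in index order, and a hypothetical `graph` mapping each variable to the set of variables it shares a constraint with, this jump can be sketched as:

```python
def graph_based_jump(var, graph):
    """Graph-based backjumping target for `var` once all of its values
    have failed: the highest-indexed earlier variable that shares a
    constraint with it (variables are assigned in index order).
    Returns -1 when no earlier neighbour exists, in which case the
    failure does not depend on any earlier assignment."""
    earlier = [u for u in graph[var] if u < var]
    return max(earlier, default=-1)

# Hypothetical constraint graph fragment: variable 4 shares
# constraints only with variables 1 and 6.
graph = {4: {1, 6}}
print(graph_based_jump(4, graph))  # → 1
```

Note that only neighbours with a lower index matter: later variables are unassigned when `var` is reached, so they cannot be responsible for its values failing.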
When retracting from a leaf node, the set of variables that are in a constraint with it is created and "sent back" to its parent, or to an ancestor in case of backjumping.
At every node, the highest index of a variable that is in one of the constraints collected at the leaves is a safe jump.
While the violated constraint chosen at each leaf does not affect the safety of the resulting jump, choosing constraints whose variables have the lowest possible indices allows jumping further back. In other words, excluding common variables, the constraint whose indices are all lower is preferred.
In a leaf node, the algorithm chooses the lowest index such that the partial solution up to that index is inconsistent with the value of the variable at the leaf. Among the constraints that are violated by this evaluation, it chooses the most preferred one, and collects all the indices of its variables that are lower than the chosen index. When the search later retracts from the variable at that index, the highest collected index identifies a safe jump.
In particular, the algorithm collects, in each node, all sets coming from its descendants that have not been skipped by backjumping.
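A sketch of this set propagation, under the assumption that the collected conflict sets are plain sets of variable indices (`conflict_sets` being those gathered from the node's non-skipped descendants; all names are illustrative):

```python
def merge_and_jump(var, conflict_sets):
    """Once every value of `var` has failed, merge the sets collected
    from its non-skipped descendants; the highest index in the merged
    set (excluding `var` itself) is the safe jump, and the remaining
    indices are passed up to the jump target."""
    merged = set().union(*conflict_sets)
    merged.discard(var)
    jump = max(merged, default=-1)  # -1: nothing to jump to
    return jump, merged - {jump}

# Two descendants blamed {0, 2} and {1, 4}; from variable 4 the
# search jumps to variable 2, handing it the set {0, 1}.
print(merge_and_jump(4, [{0, 2}, {1, 4}]))  # → (2, {0, 1})
```

Passing the remaining indices up to the jump target is what lets a later failure there jump further back still, instead of plain backtracking one step at a time.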
Conflict-directed backjumping was proposed for constraint satisfaction problems by Patrick Prosser in his seminal 1993 paper.[4]