In mathematics, the Chvátal–Sankoff constants are mathematical constants that describe the lengths of longest common subsequences of random strings. Although the existence of these constants has been proven, their exact values are unknown.
They are named after Václav Chvátal and David Sankoff, who began investigating them in the mid-1970s.
There is one Chvátal–Sankoff constant $\gamma_k$ for each positive integer k, where k is the number of characters in the alphabet from which the random strings are drawn.
The sequence of these numbers decreases in inverse proportion to the square root of k.[3] However, some authors write "the Chvátal–Sankoff constant" to refer to $\gamma_2$, the constant defined in this way for binary alphabets.[4]

A common subsequence of two strings S and T is a string whose characters appear in the same order (not necessarily consecutively) both in S and in T. The problem of computing a longest common subsequence has been well studied in computer science.
It can be solved in polynomial time by dynamic programming;[5] this basic algorithm has additional speedups for small alphabets (the Method of Four Russians),[6] for strings with few differences,[7] for strings with few matching pairs of characters,[8] etc.
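For concreteness, the following Python sketch implements that basic quadratic-time dynamic program for the length of a longest common subsequence; the function name and the test strings are illustrative choices rather than anything taken from the cited sources.

```python
def lcs_length(s, t):
    """Length of a longest common subsequence of s and t, computed by
    the standard quadratic-time dynamic program."""
    # previous[j] holds the LCS length of s[:i-1] and t[:j]; current[j]
    # holds the LCS length of s[:i] and t[:j].  Only two rows are kept.
    previous = [0] * (len(t) + 1)
    for i in range(1, len(s) + 1):
        current = [0] * (len(t) + 1)
        for j in range(1, len(t) + 1):
            if s[i - 1] == t[j - 1]:
                current[j] = previous[j - 1] + 1
            else:
                current[j] = max(previous[j], current[j - 1])
        previous = current
    return previous[len(t)]


print(lcs_length("ABCBDAB", "BDCABA"))  # prints 4; one such subsequence is "BCBA"
```

Keeping the full table rather than only two rows would also allow an actual longest common subsequence, not just its length, to be recovered by backtracking.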
This problem and its generalizations to more complex forms of edit distance have important applications in areas that include bioinformatics (in the comparison of DNA and protein sequences and the reconstruction of evolutionary trees), geology (in stratigraphy), and computer science (in data comparison and revision control).
[7] One motivation for studying the longest common subsequences of random strings, given already by Chvátal and Sankoff, is to calibrate the computations of longest common subsequences on strings that are not random.
[1] The Chvátal–Sankoff constants describe the behavior of the following random process.
Given parameters n and k, choose two length-n strings S and T from the same k-symbol alphabet, with each character of each string chosen uniformly at random, independently of all the other characters.
Compute a longest common subsequence of these two strings, and let $\lambda_{n,k}$ be the random variable whose value is the length of this subsequence. Then, as n grows, the expected value of $\lambda_{n,k}$ is (up to lower-order terms) proportional to n, and the kth Chvátal–Sankoff constant $\gamma_k$ is the constant of proportionality:
$$\gamma_k = \lim_{n\to\infty} \frac{\operatorname{E}[\lambda_{n,k}]}{n}.$$
It follows from a lemma of Michael Fekete[9] that this limit exists and equals the supremum of the values $\operatorname{E}[\lambda_{n,k}]/n$.[2]
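To make this definition concrete, the sketch below estimates the ratio $\operatorname{E}[\lambda_{n,k}]/n$ by Monte Carlo simulation. It is only an illustration under the assumptions stated above (uniformly random strings, illustrative parameter choices); because $\gamma_k$ is the supremum of these ratios over n, finite-n estimates tend to fall somewhat below the true constants.

```python
import random


def lcs_length(s, t):
    """Standard quadratic-time dynamic program for the LCS length."""
    previous = [0] * (len(t) + 1)
    for i in range(1, len(s) + 1):
        current = [0] * (len(t) + 1)
        for j in range(1, len(t) + 1):
            if s[i - 1] == t[j - 1]:
                current[j] = previous[j - 1] + 1
            else:
                current[j] = max(previous[j], current[j - 1])
        previous = current
    return previous[len(t)]


def estimate_ratio(n, k, trials=50, seed=0):
    """Monte Carlo estimate of E[lambda_{n,k}] / n for two independent,
    uniformly random length-n strings over a k-symbol alphabet."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s = [rng.randrange(k) for _ in range(n)]
        t = [rng.randrange(k) for _ in range(n)]
        total += lcs_length(s, t)
    return total / (trials * n)


# Finite-n averages underestimate gamma_k; the binary case should come
# out roughly around 0.8, and larger alphabets give smaller ratios.
print(estimate_ratio(n=200, k=2))
print(estimate_ratio(n=200, k=4))
```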
The exact values of the Chvátal–Sankoff constants remain unknown, but rigorous upper and lower bounds have been proven.
Because $\gamma_k$ is the supremum of the values $\operatorname{E}[\lambda_{n,k}]/n$, which each depend only on a finite probability distribution, one way to prove rigorous lower bounds on $\gamma_k$ is to compute $\operatorname{E}[\lambda_{n,k}]$ exactly for some fixed n; however, this method scales exponentially in n, so it can only be carried out for small values of n, leading only to weak lower bounds.
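As an illustration of this brute-force idea only (a sketch for exposition, not the computations actually used in the literature), the following code computes $\operatorname{E}[\lambda_{n,k}]$ exactly by enumerating all $k^{2n}$ equally likely pairs of strings, so each printed ratio is a rigorous, if weak, lower bound on $\gamma_k$.

```python
from fractions import Fraction
from itertools import product


def lcs_length(s, t):
    """Standard quadratic-time dynamic program for the LCS length."""
    previous = [0] * (len(t) + 1)
    for i in range(1, len(s) + 1):
        current = [0] * (len(t) + 1)
        for j in range(1, len(t) + 1):
            if s[i - 1] == t[j - 1]:
                current[j] = previous[j - 1] + 1
            else:
                current[j] = max(previous[j], current[j - 1])
        previous = current
    return previous[len(t)]


def exact_expected_lcs(n, k):
    """Exact value of E[lambda_{n,k}], obtained by enumerating all
    k**(2*n) equally likely pairs of length-n strings."""
    total = 0
    for s in product(range(k), repeat=n):
        for t in product(range(k), repeat=n):
            total += lcs_length(s, t)
    return Fraction(total, k ** (2 * n))


# Each ratio E[lambda_{n,2}]/n is a valid lower bound on gamma_2, but the
# enumeration grows as 4**n and is hopeless beyond very small n.
for n in range(1, 7):
    bound = exact_expected_lcs(n, 2) / n
    print(n, bound, float(bound))
```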
In his Ph.D. thesis, Vlado Dančík pioneered an alternative approach in which a deterministic finite automaton is used to read symbols of two input strings and produce a (long but not optimal) common subsequence of these inputs.
The behavior of this automaton on random inputs can be analyzed as a Markov chain, the steady state of which determines the rate at which it finds elements of the common subsequence for large values of n. This rate is necessarily a lower bound on the Chvátal–Sankoff constant.[10]
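The sketch below is not Dančík's construction, only a deliberately trivial, one-state illustration of the same principle: an automaton with no buffer reads one symbol from each string per step and records a match whenever the two symbols agree. On uniformly random inputs each step matches with probability 1/k, so the corresponding one-state Markov chain finds matches at rate 1/k per character, giving the very weak bound $\gamma_k \geq 1/k$; buffering recently seen characters, as Dančík's automata do, raises this rate substantially.

```python
import random


def diagonal_matches(s, t):
    """Toy memoryless 'automaton': reads s[i] and t[i] together and records
    a match whenever they agree.  The matched positions form a common
    subsequence of s and t, generally a far from optimal one."""
    return sum(1 for a, b in zip(s, t) if a == b)


def simulated_rate(n, k, trials=500, seed=0):
    """Empirical matches-per-character rate of the toy automaton on
    uniformly random length-n strings over a k-symbol alphabet."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s = [rng.randrange(k) for _ in range(n)]
        t = [rng.randrange(k) for _ in range(n)]
        total += diagonal_matches(s, t)
    return total / (trials * n)


# The steady-state analysis of this one-state chain predicts a rate of
# exactly 1/k, so these should print values close to 0.5 and 0.25.
print(simulated_rate(n=1000, k=2))
print(simulated_rate(n=1000, k=4))
```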
By using Dančík's method, with an automaton whose state space buffers the most recent h characters from its two input strings, and with additional techniques for avoiding the expensive steady-state Markov chain analysis of this approach, Lueker (2009) was able to perform a computerized analysis with h = 15 that proved $\gamma_2 \geq 0.788071$.
Similar methods can be generalized to non-binary alphabets.
Lower bounds obtained in this way are known for various values of k.[4]

Dančík & Paterson (1995) also used automata-theoretic methods to prove upper bounds on the Chvátal–Sankoff constants, and again Lueker (2009) extended these results by computerized calculations, obtaining in particular the upper bound $\gamma_2 \leq 0.826280$.
This result disproved a conjecture of J. Michael Steele that $\gamma_2 = 2/(1+\sqrt{2}) \approx 0.828427$, since that value exceeds the upper bound.

In the limit of large k, the constants $\gamma_k$ decrease in inverse proportion to the square root of k. More precisely,[3] $\lim_{k\to\infty} \gamma_k\sqrt{k} = 2$.

There has also been research into the distribution of values of the longest common subsequence, generalizing the study of the expectation of this value.
For instance, the standard deviation of the length of the longest common subsequence of random strings of length n is known to be proportional to the square root of n.[13] One complication in performing this sort of analysis is that the random variables describing whether the characters at different pairs of positions match each other are not independent of each other.
In a more mathematically tractable simplification of the longest common subsequence problem, the allowed matches between pairs of symbols are controlled not by whether those symbols are equal to each other but by independent random variables that equal 1 with probability 1/k and 0 with probability (k − 1)/k. For this simplified model, it has been shown that the distribution of the longest common subsequence length is governed by the Tracy–Widom distribution.
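The sketch below simulates this simplified model under the assumptions just described (the helper names are illustrative): each of the n × n position pairs is declared a match independently with probability 1/k, and the same dynamic program used for longest common subsequences then returns the length of the longest chain of matches that increases in both coordinates.

```python
import random


def longest_chain_of_matches(match):
    """Length of the longest sequence of matched cells (i, j) that is
    strictly increasing in both coordinates, computed with the same
    dynamic program used for longest common subsequences."""
    rows = len(match)
    cols = len(match[0]) if rows else 0
    previous = [0] * (cols + 1)
    for i in range(1, rows + 1):
        current = [0] * (cols + 1)
        for j in range(1, cols + 1):
            if match[i - 1][j - 1]:
                current[j] = previous[j - 1] + 1
            else:
                current[j] = max(previous[j], current[j - 1])
        previous = current
    return previous[cols]


def random_match_matrix(n, k, rng):
    """n-by-n 0/1 matrix whose entries are 1 independently with
    probability 1/k, replacing the equality test between symbols."""
    return [[1 if rng.random() < 1.0 / k else 0 for _ in range(n)]
            for _ in range(n)]


rng = random.Random(0)
n, k = 200, 2
lengths = [longest_chain_of_matches(random_match_matrix(n, k, rng))
           for _ in range(20)]
# These lengths sample the distribution whose limiting fluctuations are
# described by the Tracy-Widom distribution.
print(min(lengths), sum(lengths) / len(lengths), max(lengths))
```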