In mathematics, Hall's conjecture is an open question on the differences between perfect squares and perfect cubes.
This question arose from consideration of the Mordell equation in the theory of integer points on elliptic curves.
The original version of Hall's conjecture, formulated by Marshall Hall, Jr. in 1970, says that there is a positive constant C such that for any integers x and y for which y² ≠ x³, one has |y² − x³| > C·√|x|. Hall suggested that perhaps C could be taken as 1/5, which was consistent with all the data known at the time the conjecture was proposed.
In 1965, Davenport proved an analogue of the above conjecture in the case of polynomials: if f(t) and g(t) are nonzero polynomials over the complex numbers C such that g(t)³ ≠ f(t)² in C[t], then deg(g(t)³ − f(t)²) ≥ (1/2)·deg g(t) + 1.

The weak form of Hall's conjecture, stated by Stark and Trotter around 1980, replaces the square root on the right side of the inequality by any exponent less than 1/2: for any ε > 0, there is some constant c(ε) depending on ε such that for any integers x and y for which y² ≠ x³, one has |y² − x³| > c(ε)·|x|^(1/2 − ε).

The original, strong, form of the conjecture with exponent 1/2 has never been disproved, although it is no longer believed to be true, and the term Hall's conjecture now generally means the version with the ε in it.
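Davenport's bound is sharp already in the smallest case deg g = 2, deg f = 3, where it predicts deg(g³ − f²) ≥ 2. A minimal sketch in plain Python (polynomials as coefficient lists, constant term first; the specific pair g(t) = t² + 2, f(t) = t³ + 3t is an illustrative extremal example, not taken from the text) checks this directly:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, constant term first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_sub(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def deg(p):
    """Degree: index of the highest nonzero coefficient."""
    return max(i for i, c in enumerate(p) if c != 0)

# g(t) = t^2 + 2 and f(t) = t^3 + 3t attain Davenport's bound:
# g^3 - f^2 = (t^6 + 6t^4 + 12t^2 + 8) - (t^6 + 6t^4 + 9t^2) = 3t^2 + 8.
g = [2, 0, 1]
f = [0, 3, 0, 1]
diff = poly_sub(poly_mul(poly_mul(g, g), g), poly_mul(f, f))
assert deg(diff) == deg(g) // 2 + 1  # degree 2 = (1/2)*deg g + 1
```

The leading t⁶ and t⁴ terms of g³ and f² cancel, leaving exactly the minimal degree the theorem allows.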
For example, in 1998, Noam Elkies found the example 447884928428402042307918² − 5853886516781223³ = −1641843, for which compatibility with Hall's conjecture would require C to be less than 0.0214 ≈ 1/50, so roughly 10 times smaller than the original choice of 1/5 that Hall suggested.
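The arithmetic in Elkies's example, and the resulting upper bound on C, can be checked directly; a quick Python sketch (variable names are illustrative):

```python
# Elkies's 1998 example: x^3 is extremely close to a perfect square.
x = 5853886516781223
y = 447884928428402042307918
assert y**2 - x**3 == -1641843

# If |y^2 - x^3| > C*sqrt(x) is to hold, then C must be below this ratio,
# which comes out near 0.021 -- roughly ten times smaller than Hall's 1/5.
bound = abs(y**2 - x**3) / x**0.5
print(bound)
```

Python's arbitrary-precision integers make the exact check of the 24-digit square against the 48-digit cube trivial.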
The list of known good examples is complete for small values of x (the first 44 entries in the table) but may be incomplete past that point.
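Small entries of such a table can be reproduced by brute force. A sketch in Python (the search limit and names are illustrative): for each x it computes the distance from x³ to the nearest perfect square and keeps the "good" examples where that distance is below √x, i.e. where the Hall ratio √x / |y² − x³| exceeds 1:

```python
from math import isqrt

def hall_defect(x):
    """Smallest |y^2 - x^3| over integers y: the distance from x^3
    to the nearest perfect square (exact big-integer arithmetic)."""
    cube = x**3
    y = isqrt(cube)
    return min(cube - y * y, (y + 1) ** 2 - cube)

# Keep x whose cube is unusually close to a square; d == 0 (x a perfect
# square, so x^3 is itself a square) is excluded as a degenerate case.
good = {}
for x in range(2, 10_000):
    d = hall_defect(x)
    if 0 < d < x**0.5:
        good[x] = d
```

The search recovers the first classical examples, such as x = 2 (3² − 2³ = 1) and x = 5234 (378661² − 5234³ = 17), though a naive scan like this becomes far too slow for the large-x entries found by Elkies's lattice-based methods.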