Galactic algorithms were so named by Richard Lipton and Ken Regan,[1] because they will never be used on any data sets on Earth.
An example is the fastest known way to multiply two numbers, which needs O(n log n) bit operations; but as the constants hidden by the big O notation are large, it is never used in practice.
The authors state: "we are hopeful that with further refinements, the algorithm might become practical for numbers with merely billions or trillions of digits."
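A hedged back-of-the-envelope sketch shows why such constants matter. The constant C and the quadratic baseline below are hypothetical, chosen only to illustrate how a large hidden constant pushes the crossover point of an O(n log n) method far past everyday input sizes:

```python
# Hypothetical illustration: a large hidden constant pushes the crossover
# point of an O(n log n) method past any practical input size.
# C is an invented constant, not the real one.
import math

C = 1e6  # hypothetical constant hidden by the big O notation

def fast_cost(n):
    return C * n * math.log2(n)   # assumed cost of the asymptotically fast method

def schoolbook_cost(n):
    return n ** 2                 # ordinary long multiplication, constant taken as 1

# Find the first power of two where the "faster" method actually wins.
n = 2
while fast_cost(n) >= schoolbook_cost(n):
    n *= 2
print(f"crossover near n = {n:,} digits")  # tens of millions of digits for this C
```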
The AKS primality test is the most theoretically sound of any known algorithm that can take an arbitrary number and tell if it is prime: it is provably deterministic, unconditionally correct, and polynomial-time. All other known algorithms fall short on at least one of these criteria, but the shortcomings are minor and the calculations are much faster, so they are used instead.
Elliptic curve primality proving (ECPP) in practice runs much faster than AKS, but it has never been proven to run in polynomial time.
There is also a deterministic version of the Miller–Rabin test, which runs in polynomial time over all inputs, but whose correctness depends on the generalized Riemann hypothesis (which is widely believed but not proven).
The existence of these (much) faster alternatives means AKS is not used in practice.
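A minimal sketch of the Miller–Rabin test follows (not a reference implementation). With the fixed witness set shown, the test is known to be deterministically correct for all n below roughly 3.3 × 10^24, which covers all 64-bit inputs; beyond that, the unconditional guarantee is lost:

```python
# Miller-Rabin primality test with a fixed witness set. Deterministically
# correct for n < 3.3e24; probabilistic (or GRH-conditional) beyond that.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    witnesses = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    if n in witnesses:
        return True
    if any(n % p == 0 for p in witnesses):
        return False
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

print([p for p in range(50) if is_prime(p)])  # [2, 3, 5, 7, ..., 47]
```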
Further extensions of Strassen's recursive matrix multiplication algorithm, using sophisticated group theory, are the Coppersmith–Winograd algorithm and its slightly better successors, needing roughly O(n^2.37) multiplications.
These are galactic: "We nevertheless stress that such improvements are only of theoretical interest, since the huge constants involved in the complexity of fast matrix multiplication usually make these algorithms impractical."[5]
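For context, here is a minimal sketch of Strassen's recursive scheme, the starting point these galactic extensions build on. The power-of-two size restriction and the cutoff value are simplifying assumptions; real implementations pad inputs and tune the fallback threshold:

```python
# Strassen's recursive multiplication: seven half-size products instead of
# eight, giving O(n^2.807). Assumes square matrices of power-of-two size.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:
        return A @ B  # ordinary multiplication for small blocks
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])

A, B = np.random.rand(128, 128), np.random.rand(128, 128)
assert np.allclose(strassen(A, B), A @ B)
```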
"[5] Claude Shannon showed a simple but asymptotically optimal code that can reach the theoretical capacity of a communication channel.
It requires assigning a random code word to every possible n-bit message, then decoding by finding the closest code word.
If n is chosen large enough, this beats any existing code and can get arbitrarily close to the capacity of the channel.
However, an n big enough to beat existing codes is also completely impractical.[6] These codes, though never used, inspired decades of research into more practical algorithms that today can achieve rates arbitrarily close to channel capacity.
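A toy version of the scheme makes the impracticality visible: the codebook has 2^k entries and decoding scans all of them, so realistic n and k are hopeless. The parameters below are illustrative, and the channel is modeled as a binary symmetric channel:

```python
# Toy random-coding sketch: a random n-bit code word per k-bit message,
# decoded by exhaustive nearest-neighbor search in Hamming distance.
import numpy as np

rng = np.random.default_rng(0)
k, n, p = 4, 32, 0.1                             # message bits, code bits, flip probability
codebook = rng.integers(0, 2, size=(2 ** k, n))  # one random code word per message

def transmit(message_index):
    noise = rng.random(n) < p                    # binary symmetric channel
    return codebook[message_index] ^ noise

def decode(received):
    distances = (codebook != received).sum(axis=1)  # Hamming distance to every word
    return int(distances.argmin())                  # exhaustive search over 2^k words

errors = sum(decode(transmit(m)) != m for m in range(2 ** k) for _ in range(100))
print(f"block error rate: {errors / (100 * 2 ** k):.3f}")
```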
Testing whether a graph contains a fixed graph as a minor can be done in polynomial time, and the big O notation hides a constant that depends superexponentially on the fixed minor. That constant cannot be reasonably computed, as it is greater than 2 pentated by 4, or 2 tetrated by 65536, that is, an exponential tower of 65536 twos.
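A few lines of code show how hopeless that is. The helper below is illustrative: already the fifth level of the tower dwarfs anything physically representable, let alone a tower of height 65536:

```python
# Tetration growth: 2 tetrated by k is a tower of k twos.

def tetrate(base, height):
    result = 1
    for _ in range(height):
        result = base ** result
    return result

for k in range(1, 6):
    digits = len(str(tetrate(2, k)))
    print(f"2 tetrated by {k}: {digits} decimal digit(s)")
# The last line reports 19,729 digits for 2 tetrated by 5 (i.e., 2**65536).
```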
In cryptography jargon, a "break" is any attack faster in expectation than brute force – i.e., performing one trial decryption for each possible key.
For many cryptographic systems, breaks are known, but are still practically infeasible with current technology.
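A back-of-the-envelope sketch shows the scale involved; the attack speed below is a hypothetical figure chosen only for illustration:

```python
# Why a known "break" can still be infeasible: expected brute-force effort
# for a 128-bit key at a hypothetical trillion trial decryptions per second.
key_bits = 128
trials_per_second = 1e12                 # hypothetical attacker speed
expected_trials = 2 ** (key_bits - 1)    # brute force succeeds halfway on average
seconds_per_year = 3.15e7
years = expected_trials / trials_per_second / seconds_per_year
print(f"expected time: {years:.2e} years")  # ~5.4e18 years
# Even an attack a thousand times faster than brute force stays infeasible.
```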
For several decades, the best known approximation to the traveling salesman problem in a metric space was the very simple Christofides algorithm, which produced a tour at most 50% longer than the optimum.[12]
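As a brief usage sketch, recent versions of the networkx library are assumed here to ship a Christofides implementation; on any metric instance the returned tour is at most 1.5 times optimal:

```python
# Christofides heuristic on a small Euclidean instance via networkx
# (the christofides helper is assumed available in recent releases).
import networkx as nx
from networkx.algorithms.approximation import christofides

points = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 5)]
G = nx.complete_graph(len(points))
for i, j in G.edges:
    (x1, y1), (x2, y2) = points[i], points[j]
    G[i][j]["weight"] = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5  # Euclidean metric

tour = christofides(G)
# Close the tour explicitly in case the endpoint is not repeated.
cycle = tour if tour[0] == tour[-1] else tour + [tour[0]]
length = sum(G[u][v]["weight"] for u, v in zip(cycle, cycle[1:]))
print(cycle, round(length, 2))
```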
A single algorithm, "Hutter search", can solve any well-defined problem in an asymptotically optimal time, barring some caveats. It works by searching through all possible algorithms (ordered by runtime) while simultaneously searching through all possible proofs (ordered by length), looking for a proof of correctness for each algorithm. Since each proof of correctness is of finite size, it "only" adds a constant and does not affect the asymptotic runtime.
Hutter search is related to Solomonoff induction, which is a formalization of Bayesian inference.
All computable theories (as implemented by programs) which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories.
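A toy sketch of that weighting, with hand-written stand-ins for enumerated programs, illustrates how shorter consistent theories dominate the prediction:

```python
# Toy Solomonoff-style weighting. Each "theory" is (description length in
# bits, prediction function) and gets prior weight 2^(-length); the theories
# are hypothetical stand-ins, not a real enumeration of programs.

observations = [0, 1, 0, 1, 0, 1]

theories = [
    (3, lambda t: t % 2),                  # "keep alternating forever"
    (6, lambda t: t % 2 if t < 6 else 1),  # "alternate six times, then all ones"
    (4, lambda t: 0),                      # "always zero" (inconsistent, dropped)
]

# Keep only theories that perfectly describe the previous observations.
consistent = [(L, f) for (L, f) in theories
              if all(f(t) == obs for t, obs in enumerate(observations))]

total = sum(2.0 ** -L for L, _ in consistent)
p_one = sum(2.0 ** -L for L, f in consistent if f(len(observations)) == 1)
print(f"P(next = 1) = {p_one / total:.3f}")  # ~0.111: the shorter theory dominates
```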
Simulated annealing, when used with a logarithmic cooling schedule, has been proven to find the global optimum of any optimization problem; however, cooling that slowly leads to entirely impractical runtimes.[15] Nevertheless, knowing this ideal algorithm exists has led to practical variants that are able to find very good (though not provably optimal) solutions to complex optimization problems.[16]
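A minimal sketch of annealing under the proven logarithmic schedule T(t) = c / ln(1 + t) follows; the objective function, the constant c, and the step count are illustrative choices, and practical variants cool much faster while giving up the guarantee:

```python
# Simulated annealing with the logarithmic cooling schedule T(t) = c/ln(1+t).
import math
import random

def objective(x):
    # A simple multimodal function on [-10, 10] with its global minimum near x = -0.5.
    return x * x + 10 * math.sin(3 * x)

random.seed(0)
x = random.uniform(-10, 10)
best_x, c = x, 2.0
for t in range(1, 200_000):
    temperature = c / math.log(1 + t)  # the provably convergent, galactic schedule
    candidate = min(10, max(-10, x + random.gauss(0, 0.5)))
    delta = objective(candidate) - objective(x)
    # Always accept improvements; accept uphill moves with Boltzmann probability.
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    if objective(x) < objective(best_x):
        best_x = x
print(f"best x = {best_x:.4f}, objective = {objective(best_x):.4f}")
```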
The expected linear time MST algorithm is able to discover the minimum spanning tree of a graph with m edges and n vertices in O(m + n) expected time.[17] However, the constant factor hidden by the big O notation is huge enough to make the algorithm impractical.[19]
Researchers have found a hash table algorithm that achieves the provably best-possible[20] asymptotic performance in terms of time-space tradeoff.[21] But it remains purely theoretical: "Despite the new hash table’s unprecedented efficiency, no one is likely to try building it anytime soon."