Computationally bounded adversary

In previous models, the best that could be done was ensuring correct decoding of fewer than d/2 errors, where d is the minimum Hamming distance of the code.
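
As a rough illustration (a Python sketch, not taken from the article; the toy repetition code is an assumption of the example), the following computes the minimum distance d of a small code and the corresponding classical unique-decoding radius ⌊(d−1)/2⌋:

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

# A toy code: the 3-fold repetition code on 2 message bits.
code = ["000000", "000111", "111000", "111111"]

d = min(hamming_distance(a, b) for a, b in combinations(code, 2))
print(d)              # minimum distance d = 3
print((d - 1) // 2)   # classical unique-decoding radius: 1 error
```

For this code any single error can be corrected, but two errors in the same block can already move the received word closer to a different codeword.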

The problem with this approach is that it does not take into account the amount of computing power actually available to the adversary.

Rather, it only concerns itself with how many bits of a given code word can change and still have the message decode properly.

In the computationally bounded adversary model, the channel – the adversary – is restricted to performing only a reasonable amount of computation when deciding which bits of the code word to change.

Once the channel has been restricted in this way, it becomes possible to construct codes that are faster to encode and decode than previous methods and that can still handle a large number of errors.

The guarantee that an algorithm will succeed no matter what is, of course, highly alluring.

A real-life adversary cannot spend an indefinite amount of time examining a message in order to find the one error pattern which an algorithm would struggle with.

In the worst case, Quicksort makes O(n²) comparisons; however, such an occurrence is rare.

Suppose an adversary wishes to force the Quicksort algorithm to make O(n²) comparisons. It can only do so by supplying an input that happens to line up badly with the algorithm's pivot choices; when it cannot predict those choices, it would in effect have to test orderings of the input one by one in the hope of finding a bad one, which is far more work than simply sorting the list.
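
A rough sketch of this comparison (illustrative Python; the iterative quicksort, the first-element pivot rule, and the randomized-pivot variant are choices of the example, not taken from the article):

```python
import random

def quicksort_comparisons(a, randomized):
    """Iterative quicksort on a copy of `a`; returns the number of element
    comparisons performed.  With randomized=True the pivot is picked at random,
    so no fixed input reliably forces the quadratic worst case."""
    a = list(a)
    comparisons = 0
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        p = random.randint(lo, hi) if randomized else lo
        a[lo], a[p] = a[p], a[lo]          # move the chosen pivot to the front
        pivot, i = a[lo], lo + 1
        for j in range(lo + 1, hi + 1):
            comparisons += 1
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[lo], a[i - 1] = a[i - 1], a[lo]  # place the pivot in its final position
        stack.append((lo, i - 2))
        stack.append((i, hi))
    return comparisons

n = 2000
adversarial_input = list(range(n))   # already sorted: worst case for the fixed pivot rule
print(quicksort_comparisons(adversarial_input, randomized=False))  # roughly n*n/2
print(quicksort_comparisons(adversarial_input, randomized=True))   # roughly n*log2(n)
```

On the same sorted input, the fixed pivot rule performs on the order of n²/2 comparisons, while the randomized variant stays near n log n, since no fixed input can anticipate its random choices.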

Similarly, it is unreasonable to assume an adversary for an encoding and decoding system would be able to test every single error pattern in order to find the most effective one.

Even if the attacker is bounded, it is still possible that they could overcome the stochastic model with a bit of cleverness.

The stochastic model has no real way to defend against this sort of attack, and as such is unsuited to dealing with the kind of "intelligent" threats one would prefer to be protected against.

Therefore, a computationally bounded adversarial model has been proposed as a compromise between the two.

Since any computationally bounded adversary could in O(n) time flip a coin for each bit, it is intuitively clear that any encoding and decoding system which can work against this adversary must also work in the stochastic noise model.
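
A minimal sketch of this observation (the helper name and the noise parameter p are assumptions of the example, not from the article): simulating stochastic noise takes a single linear pass.

```python
import random

def simulate_stochastic_noise(codeword_bits, p=0.1):
    """A computationally bounded adversary can mimic the stochastic noise model
    in O(n) time: flip each bit independently with probability p (p is simply a
    parameter of this sketch)."""
    return [bit ^ (random.random() < p) for bit in codeword_bits]
```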

The converse is less simple; however, it can be shown that any system which works in the stochastic noise model can also efficiently encode and decode against a computationally bounded adversary, and only at an additional cost which is polynomial in n.[1] The following method to accomplish this was designed by Dick Lipton and is taken from [1]:

Let E be an encoder and D a decoder for the stochastic noise model, each of which runs in polynomial time.

Furthermore, let both the sender and receiver share a random permutation function π and a random pattern R. To send a message x, the sender transmits Y = π(E(x)) + R; since π and R are random, the transmitted word looks like random noise to the channel. The channel adds its own error pattern N, so the receiver gets Y + N; to decode, the receiver removes R, applies the inverse permutation π⁻¹, and is left with E(x) + π⁻¹(N).

Similarly to the Quicksort comparison above, if the channel wants to do something smart, it must first test all the permutations.

However, this is infeasible for a computationally bounded adversary, so the most it can do is make a random error pattern N. Since the channel does not know π, the unpermuted pattern π⁻¹(N) is just as random, and the decoder D can handle it exactly as it would in the stochastic noise model.
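
A minimal sketch of this construction in Python (not from the reference; the 5-fold repetition code merely stands in for a real stochastic-noise encoder E and decoder D, and the burst-error adversary is an arbitrary example):

```python
import random

REP = 5  # 5-fold repetition: the placeholder code corrects up to 2 errors per block

def E(message_bits):
    """Placeholder stochastic-noise encoder: repeat each bit REP times."""
    return [b for b in message_bits for _ in range(REP)]

def D(codeword_bits):
    """Placeholder stochastic-noise decoder: majority vote within each block."""
    return [int(sum(codeword_bits[i:i + REP]) > REP // 2)
            for i in range(0, len(codeword_bits), REP)]

def transmit(x, pi, R):
    """Send Y = pi(E(x)) + R, where '+' is bitwise XOR."""
    c = E(x)
    return [c[pi[i]] ^ R[i] for i in range(len(c))]

def receive(y_plus_n, pi, R):
    """Compute E(x) + pi^{-1}(N): strip R, undo the permutation, then decode."""
    stripped = [bit ^ R[i] for i, bit in enumerate(y_plus_n)]
    unpermuted = [0] * len(stripped)
    for i, bit in enumerate(stripped):
        unpermuted[pi[i]] = bit            # apply the inverse permutation
    return D(unpermuted)

x = [1, 0, 1, 1, 0, 0, 1, 0]                   # message bits
n = REP * len(x)
pi = random.sample(range(n), n)                # shared secret permutation
R = [random.randint(0, 1) for _ in range(n)]   # shared secret random pad

Y = transmit(x, pi, R)
N = [1] * 3 + [0] * (n - 3)                    # the adversary flips a burst of 3 bits
received = [y ^ e for y, e in zip(Y, N)]

# The burst lands on effectively random codeword positions after unpermuting,
# so decoding succeeds with high probability.
print(receive(received, pi, R) == x)
```

Because the channel knows neither π nor R, even a deliberately structured error pattern is scattered over effectively random codeword positions once the receiver unpermutes, which is why the stand-in stochastic-noise decoder succeeds with high probability.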

By assuming a computationally bounded adversary, it is possible to design a locally decodable code which is both efficient and near-optimal, with a negligible error probability.

These codes are used in complexity theory for applications such as self-correcting computations, probabilistically checkable proof systems, and worst-case to average-case hardness reductions in the construction of pseudorandom generators.

They are useful in cryptography as a result of their connection with private information retrieval protocols.[2]

Furthermore, it is possible to construct codes which surpass known bounds for worst-case codes—specifically, unique decoding with a

An illustration of the method. The first row gives the initial encoded message; the second, after random permutation and random R; the third, after the adversary adds N; the fourth, after unpermuting; the fifth, the encoded message with the adversary's error removed.