Random oracles first appeared in complexity theory, where they were used to argue that complexity class separations may face relativization barriers; the most prominent case is the P vs NP problem, whose two classes were shown in 1981 to be distinct relative to a random oracle almost surely.[1]
They made their way into cryptography through a 1993 publication by Mihir Bellare and Phillip Rogaway, which introduced them as a formal cryptographic model for use in reduction proofs.
However, such a proof establishes these properties only in the random oracle model, ensuring that no major design flaws are present.
In 1986, Amos Fiat and Adi Shamir[5] showed a major application of random oracles – the removal of interaction from protocols used to create signatures.
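The idea behind this transform is that the verifier's random challenge in an identification protocol can be replaced by a hash of the prover's first message (with the hash modelled as a random oracle), so no interaction is needed. The sketch below illustrates this on a Schnorr-style identification protocol; the toy group parameters, hash choice, and function names are illustrative assumptions, not details from the original paper, and the parameters are far too small for real use.

```python
# Minimal sketch of the Fiat-Shamir transform on a Schnorr-style protocol.
# All concrete choices below (toy group, SHA-256, names) are assumptions
# made for illustration only.
import hashlib
import secrets

# Toy Schnorr group: g = 2 has prime order q = 11 in Z_23^* (never use in practice).
p, q, g = 23, 11, 2

def hash_to_challenge(commitment: int, message: bytes) -> int:
    """Model the random oracle with a hash: challenge = H(commitment || message) mod q."""
    digest = hashlib.sha256(commitment.to_bytes(32, "big") + message).digest()
    return int.from_bytes(digest, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1      # secret key
    return x, pow(g, x, p)                # (secret key, public key)

def sign(x: int, message: bytes):
    k = secrets.randbelow(q - 1) + 1      # ephemeral nonce
    r = pow(g, k, p)                      # commitment (prover's first message)
    e = hash_to_challenge(r, message)     # challenge derived by hashing, not by a verifier
    s = (k + e * x) % q                   # response
    return r, s

def verify(y: int, message: bytes, sig) -> bool:
    r, s = sig
    e = hash_to_challenge(r, message)
    return pow(g, s, p) == (r * pow(y, e, p)) % p

x, y = keygen()
signature = sign(x, b"example message")
assert verify(y, b"example message", signature)
```

Deriving the challenge from a hash of the commitment and the message is what turns the interactive identification protocol into a non-interactive signature scheme.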
In 1989, Russell Impagliazzo and Steven Rudich[6] showed a limitation of random oracles – namely that their existence alone is not sufficient for secret-key exchange.[7]
Oracle cloning with improper domain separation breaks security proofs and can lead to successful attacks.[11]
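As an illustration of domain separation, one common approach is to derive several "clones" of a single hash function by prefixing each query with a distinct, unambiguously encoded label, so that queries to one clone can never coincide with queries to another. The following sketch assumes SHA-256 as the underlying hash; the labels and helper function are purely illustrative.

```python
# Minimal sketch of oracle cloning via domain separation, assuming SHA-256
# stands in for the random oracle.  Labels and names are illustrative.
import hashlib

def make_oracle(label: bytes):
    """Derive a clone H_label(x) = SHA-256(len(label) || label || x)."""
    prefix = len(label).to_bytes(1, "big") + label   # length prefix avoids ambiguous splits
    def oracle(data: bytes) -> bytes:
        return hashlib.sha256(prefix + data).digest()
    return oracle

H_commit = make_oracle(b"commitment")
H_key    = make_oracle(b"key-derivation")

# Proper separation: the same input yields unrelated outputs under each clone.
assert H_commit(b"m") != H_key(b"m")

# Improper cloning (no separation): both "oracles" are literally the same
# function, so a proof that treats them as independent no longer applies.
H_bad1 = lambda data: hashlib.sha256(data).digest()
H_bad2 = lambda data: hashlib.sha256(data).digest()
assert H_bad1(b"m") == H_bad2(b"m")
```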
In general, if a protocol is proven secure, attacks on that protocol must either fall outside what was proven or break one of the assumptions in the proof; for instance, if the proof relies on the hardness of integer factorization, breaking this assumption requires discovering a fast integer factorization algorithm.
Although the Baker–Gill–Solovay theorem[12] showed that there exists an oracle A such that P^A = NP^A, subsequent work by Bennett and Gill[13] showed that for a random oracle B (a function from {0,1}^n to {0,1} such that each input element maps to each of 0 or 1 with probability 1/2, independently of the mapping of all other inputs), P^B ⊊ NP^B with probability 1.
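Such a random oracle can be pictured as being lazily sampled: the value on each input is an independent fair coin flip, fixed the first time that input is queried and remembered afterwards. A small sketch of this view, in which the class name and the parameter n are assumptions made only for the example:

```python
# Minimal sketch of a lazily sampled random oracle B from n-bit strings to {0,1}.
import secrets

class RandomOracle:
    def __init__(self, n: int):
        self.n = n
        self.table = {}                            # remembers previously answered queries

    def query(self, x: str) -> int:
        """Return B(x) for an n-bit string x, sampling 0 or 1 with probability 1/2."""
        assert len(x) == self.n and set(x) <= {"0", "1"}
        if x not in self.table:
            self.table[x] = secrets.randbelow(2)   # independent of all other inputs
        return self.table[x]

B = RandomOracle(n=8)
assert B.query("01010101") == B.query("01010101")  # consistent on repeated queries
```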