Ray Solomonoff (July 25, 1926 – December 7, 2009)[1][2] was an American mathematician who invented algorithmic probability[3] and the General Theory of Inductive Inference (also known as Universal Inductive Inference),[4] and was a founder of algorithmic information theory.[5]
He was an originator of the branch of artificial intelligence based on machine learning, prediction, and probability.[10]
Algorithmic probability is a mathematically formalized combination of Occam's razor[11][12][13][14] and the Principle of Multiple Explanations.[10]
Although he is best known for algorithmic probability and his general theory of inductive inference, he made many other important discoveries throughout his life, most of them directed toward his goal in artificial intelligence: to develop a machine that could solve hard problems using probabilistic methods.
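In modern notation (a standard textbook formulation, not a quotation from Solomonoff's own papers), the algorithmic probability of a binary string x is usually written as:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) \,=\, x*} 2^{-|p|}
```

Here U is a universal prefix Turing machine, the sum runs over programs p whose output begins with x, and |p| is the length of p in bits. Shorter (simpler) programs contribute exponentially more weight, which is how Occam's razor enters; at the same time, every program consistent with the data contributes something, which reflects the Principle of Multiple Explanations.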
From 1947 to 1951 he attended the University of Chicago, studying under professors such as Rudolf Carnap and Enrico Fermi, and graduated with an M.S.
From his earliest years he was motivated by the pure joy of mathematical discovery and by the desire to explore where no one had gone before.[citation needed]
At the age of 16, in 1942, he began to search for a general method to solve mathematical problems.
Solomonoff wanted to pursue a bigger question: how to make machines more generally intelligent, and how computers could use probability for this purpose.
He wrote three papers, two with Anatol Rapoport, in 1950–52,[16] which are regarded as the earliest statistical analysis of networks.
Prior to the 1960s, the usual method of calculating probability was based on frequency: taking the ratio of favorable results to the total number of trials.
As part of this work, he produced the philosophical foundation for the use of Bayes' rule of causation for prediction.
Solomonoff showed, and in 1964 proved, that the choice of machine, while it could add a constant factor, would not change the probability ratios very much.
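In its usual modern statement (the standard invariance theorem, paraphrased here rather than quoted from the 1964 papers), changing the reference machine from U to V alters the distribution by at most a multiplicative constant that depends on the two machines but not on the data:

```latex
\exists\, c_{U,V} > 0 \;\text{ such that }\;
\frac{1}{c_{U,V}}\, M_V(x) \;\le\; M_U(x) \;\le\; c_{U,V}\, M_V(x)
\quad \text{for all strings } x.
```

In other words, probability ratios computed with different universal machines agree up to a fixed constant, so asymptotic predictions do not depend on the choice of machine.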
The general consensus in the scientific community, however, was to associate this type of complexity with Kolmogorov, who was more concerned with randomness of a sequence.
[20] He then shows how this idea can be used to generate the universal a priori probability distribution and how it enables the use of Bayes' rule in inductive inference.
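Concretely (again in standard textbook notation rather than Solomonoff's original), once the universal distribution M is taken as the prior over sequences, Bayesian prediction reduces to a ratio of M-values:

```latex
P(x_{t+1} = a \mid x_1 \dots x_t) \;=\; \frac{M(x_1 \dots x_t\, a)}{M(x_1 \dots x_t)}.
```

The probability of the next symbol is simply the weight of all programs that continue the observed sequence with a, divided by the weight of all programs that produce the observed sequence at all.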
Other scientists who had been at the 1956 Dartmouth Summer Conference (such as Newell and Simon) were developing the branch of Artificial Intelligence that used machines governed by fact-based if-then rules.
Solomonoff was developing the branch of Artificial Intelligence that focused on probability and prediction; his specific view of A.I. described machines that were governed by the universal a priori probability distribution.
In 1968 he found a proof for the efficacy of Algorithmic Probability,[21] but, mainly because of a lack of general interest at that time, did not publish it until 10 years later.
There will always be descriptions outside that system's search space, which will never be acknowledged or considered, even in an infinite amount of time.
In many of his papers he described how to search for solutions to problems and in the 1970s and early 1980s developed what he felt was the best way to update the machine.
About 1984, at an annual meeting of the American Association for Artificial Intelligence (AAAI), it was decided that probability was in no way relevant to A.I.
A protest group formed, and the next year there was a workshop at the AAAI meeting devoted to "Probability and Uncertainty in AI."[22]
As part of the protest at the first workshop, Solomonoff gave a paper on how to apply the universal distribution to problems in A.I.
Throughout his career Solomonoff was concerned with the potential benefits and dangers of A.I., discussing it in many of his published reports.
In 1997,[28] 2003, and 2006 he showed that incomputability and subjectivity are both necessary and desirable characteristics of any high-performance induction system.
In February 2008, he gave the keynote address at the conference "Current Trends in the Theory and Application of Computer Science" (CTTACS), held at Notre Dame University in Lebanon.