Info-metrics is an interdisciplinary approach to scientific modeling, inference and efficient information processing.
It is the science of modeling, reasoning, and drawing inferences under conditions of noisy and limited information.
From the point of view of the sciences, this framework is at the intersection of information theory, statistical methods of inference, applied mathematics, computer science, econometrics, complexity theory, decision analysis, modeling, and the philosophy of science.
Info-metrics provides a constrained optimization framework to tackle under-determined or ill-posed problems – problems where there is not sufficient information for finding a unique solution.
Such problems are very common across all sciences: available information is incomplete, limited, noisy and uncertain.
Info-metrics is useful for modeling, information processing, theory building, and inference problems across the scientific spectrum.
The info-metrics framework can also be used to test hypotheses about competing theories or causal mechanisms.
Info-metrics evolved from the classical maximum entropy formalism, which is based on the work of Shannon.
Since the mid-1980s, and especially since the mid-1990s, the maximum entropy approach has been generalized and extended to handle a larger class of problems in the social and behavioral sciences, especially for complex problems and data.
Define the informational content of a single outcome x with probability p(x) as h(x) = -log p(x) (measured in bits when the logarithm is taken in base 2).
Observing an outcome at the tails of the distribution (a rare event) provides much more information than observing another, more probable, outcome.
The entropy[1] is the expected information content of an outcome of the random variable X whose probability distribution is P: H(P) = Σ_k p_k h(x_k) = -Σ_k p_k log p_k.
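As a brief illustration of these two definitions, the following sketch (in Python, assuming NumPy is available; the function names are chosen here purely for illustration) computes the informational content of a single outcome and the entropy of a distribution:

```python
import numpy as np

def self_information(p, base=2.0):
    """Informational content h(x) = -log(p) of an outcome with probability p."""
    return -np.log(p) / np.log(base)

def entropy(P, base=2.0):
    """Shannon entropy H(P) = sum_k p_k * h(x_k), skipping zero-probability outcomes."""
    P = np.asarray(P, dtype=float)
    nz = P > 0
    return float(np.sum(P[nz] * self_information(P[nz], base)))

# A rare outcome carries more information than a common one:
print(self_information(0.01))   # about 6.64 bits
print(self_information(0.5))    # 1 bit
# The uniform distribution over six outcomes has the maximal entropy log2(6):
print(entropy([1/6] * 6))       # about 2.585 bits
```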
Consider the problem of modeling and inferring the unobserved K-dimensional probability distribution of a discrete random variable, given just the mean (expected value) of that variable.
Within the info-metrics framework, the solution is to maximize the entropy of the random variable subject to two constraints: the observed mean and normalization (the probabilities must sum to one).
This yields the usual maximum entropy solution.
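A direct numerical treatment of this basic problem can be sketched as follows. This is a minimal illustration, not a standard library routine: it assumes SciPy is available and that the only constraints are the observed mean and normalization.

```python
import numpy as np
from scipy.optimize import minimize

def max_entropy_given_mean(values, target_mean):
    """Maximize -sum_k p_k log p_k over the probability simplex, subject to
    normalization (sum_k p_k = 1) and a mean constraint (sum_k p_k x_k = target_mean)."""
    values = np.asarray(values, dtype=float)
    K = len(values)

    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)          # guard against log(0)
        return np.sum(p * np.log(p))

    constraints = [
        {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},            # normalization
        {"type": "eq", "fun": lambda p: p @ values - target_mean},   # mean constraint
    ]
    bounds = [(0.0, 1.0)] * K
    p0 = np.full(K, 1.0 / K)                 # start from the uniform distribution
    return minimize(neg_entropy, p0, bounds=bounds, constraints=constraints).x
```

For a six-sided die (values 1 through 6) with a target mean of 3.5, this recovers the uniform distribution; any other target mean tilts the solution toward one end of the support.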
The solutions to that problem can be extended and generalized in several ways.
For example, the same approach can be used for continuous random variables, for all types of conditional models (e.g., regression, inequality, and nonlinear models), and for many constraints.
A classic illustration is inference based on information resulting from repeated independent experiments.
The following example is attributed to Boltzmann and was further popularized by Jaynes.
The experiment consists of independent repetitions of tossing the same die.
Suppose one observes only the empirical mean value, y, of N tosses of a six-sided die, and seeks to infer the probability that each face will show up in the next toss of the die.
Maximizing the entropy (using log base 2) subject to these two constraints, the observed mean and normalization, yields the most uninformed solution that is consistent with the observed information.
If the die is fair, with a mean of 3.5, the solution is the uniform distribution in which each face has probability 1/6. If the die is unfair (or loaded) with a mean of 4, the resulting maximum entropy solution is an exponentially tilted distribution that places more probability on the higher faces than on the lower ones.
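Under a single mean constraint the maximum entropy solution takes the exponential form p_k ∝ exp(-λk), where the Lagrange multiplier λ is chosen so that the implied mean matches the observed one. A minimal sketch of computing it, assuming SciPy is available (the helper names are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)                     # the six faces of the die

def tilted_dist(lam):
    """Maximum entropy distribution p_k proportional to exp(-lam * k) over faces 1..6."""
    logits = -lam * faces
    w = np.exp(logits - logits.max())       # subtract the max for numerical stability
    return w / w.sum()

def solve_loaded_die(target_mean):
    """Find lam so that the tilted distribution has the observed mean."""
    gap = lambda lam: tilted_dist(lam) @ faces - target_mean
    lam = brentq(gap, -50.0, 50.0)          # the mean is monotone in lam, so one root exists
    return tilted_dist(lam)

p = solve_loaded_die(4.0)
print(np.round(p, 4))                       # probabilities increase from face 1 to face 6
print(p @ faces)                            # approximately 4.0
```

For a target mean of 3.5 the multiplier is zero and the uniform distribution is recovered; for a mean of 4 the probabilities increase monotonically from face 1 to face 6.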
A similar approach can be applied to the problem of choosing the weights of a portfolio of assets. Using the investor's preferences and constraints, together with the observed information, such as the market mean return and the covariances of the assets over some time period, the entropy maximization framework can be used to find the optimal portfolio weights.
In this case, the entropy of the portfolio weights represents the portfolio's diversity.
This framework can be modified to include other constraints, such as minimal variance or maximal diversity.
That model involves inequalities and can be further generalized to include short sales.
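As a rough sketch of this kind of portfolio problem (a simplified illustration, not the model from the cited literature), the following maximizes the entropy of long-only portfolio weights subject to a budget constraint and a minimum required mean return; the asset returns and the target are made-up numbers, and a covariance-based minimum-variance constraint could be added in the same way:

```python
import numpy as np
from scipy.optimize import minimize

def max_entropy_weights(mean_returns, target_return):
    """Maximize the entropy of the portfolio weights (their diversity), subject to the
    weights summing to one and the expected portfolio return reaching target_return."""
    mean_returns = np.asarray(mean_returns, dtype=float)
    K = len(mean_returns)

    def neg_entropy(w):
        w = np.clip(w, 1e-12, 1.0)           # guard against log(0)
        return np.sum(w * np.log(w))

    constraints = [
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},                     # budget: weights sum to 1
        {"type": "ineq", "fun": lambda w: w @ mean_returns - target_return},  # return at least the target
    ]
    bounds = [(0.0, 1.0)] * K                # long-only; relaxing this is needed for short sales
    w0 = np.full(K, 1.0 / K)                 # start from the fully diversified portfolio
    return minimize(neg_entropy, w0, bounds=bounds, constraints=constraints).x

# Made-up mean returns for four assets and a 7% target return, purely for illustration:
print(np.round(max_entropy_weights([0.05, 0.07, 0.10, 0.03], 0.07), 3))
```

Allowing short sales requires relaxing the non-negativity bounds and redefining the entropy over a suitable transformation of the weights, since entropy is only defined for non-negative, normalized quantities.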
More such examples and related code can be found in [3][4]. An extensive list of work related to info-metrics can be found at http://info-metrics.org/bibliography.html.

Marco Frittelli. "The minimal entropy martingale measure and the valuation problem in incomplete markets".
"A generalized information theoretical approach to tomographic reconstruction".