The min-entropy, in information theory, is the smallest of the Rényi family of entropies, corresponding to the most conservative way of measuring the unpredictability of a set of outcomes, as the negative logarithm of the probability of the most likely outcome.
The min-entropy is never greater than the ordinary or Shannon entropy (which measures the average unpredictability of the outcomes) and that in turn is never greater than the Hartley or max-entropy, defined as the logarithm of the number of outcomes with nonzero probability.
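This chain of inequalities can be illustrated numerically; the following sketch uses a made-up example distribution, not one taken from the article:

```python
import math

# Hypothetical example distribution (any finite distribution works here).
p = [0.5, 0.25, 0.125, 0.125]

h_min = -math.log2(max(p))                     # min-entropy
h_shannon = -sum(x * math.log2(x) for x in p)  # Shannon entropy
h_max = math.log2(sum(1 for x in p if x > 0))  # Hartley (max-) entropy

# The chain described above: H_min <= H <= H_max.
assert h_min <= h_shannon <= h_max
print(h_min, h_shannon, h_max)  # 1.0 1.75 2.0
```

All three entropies coincide exactly when the distribution is uniform over its support.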
As with the classical Shannon entropy and its quantum generalization, the von Neumann entropy, one can define a conditional version of min-entropy.
To interpret a conditional information measure, suppose Alice and Bob were to share a bipartite quantum state $\rho_{AB}$, with Alice holding system $A$ and Bob holding system $B$.
The conditional entropy measures the average uncertainty Bob has about Alice's state upon sampling from his own system.
This concept is useful in quantum cryptography, in the context of privacy amplification (see, for example, [1]).
If $P = (p_1, \dots, p_n)$ is a classical finite probability distribution, its min-entropy can be defined as[2]

$$H_{\min}(P) = \log \frac{1}{P_{\max}}, \qquad P_{\max} \equiv \max_i p_i.$$

One way to justify the name of the quantity is to compare it with the more standard definition of entropy, which reads

$$H(P) = \sum_i p_i \log \frac{1}{p_i},$$

and can thus be seen as the arithmetic average of the quantities $\log(1/p_i)$; the min-entropy is instead the smallest of these quantities, $\min_i \log(1/p_i) = \log(1/P_{\max})$, hence the name. From an operational perspective, the min-entropy equals the negative logarithm of the probability of successfully guessing the outcome of a random draw from $P$.
This is because it is optimal to guess the element with the largest probability and the chance of success equals the probability of that element.
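This operational reading can be checked by simulation; the distribution and sample size below are arbitrary choices for illustration, not from the article:

```python
import math
import random

random.seed(0)
p = [0.6, 0.3, 0.1]      # hypothetical distribution
best = p.index(max(p))   # optimal strategy: always guess the most likely outcome

# Empirical success rate of guessing a random draw from p.
n = 100_000
draws = random.choices(range(len(p)), weights=p, k=n)
success_rate = draws.count(best) / n

# 2^{-H_min} equals the optimal guessing probability, here 0.6.
h_min = -math.log2(max(p))
assert abs(success_rate - 2 ** -h_min) < 0.01
```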
A natural way to generalize "min-entropy" from classical to quantum states is to leverage the simple observation that quantum states define classical probability distributions when measured in some basis.
There is however the added difficulty that a single quantum state can result in infinitely many possible probability distributions, depending on how it is measured.
A natural path is then, given a quantum state $\rho$, to define $H_{\min}(\rho)$ so that it equals the negative logarithm of the probability of successfully guessing the outcome of any measurement of $\rho$:

$$H_{\min}(\rho) = -\log \max_{\{\Pi_i\}_i} \max_i \operatorname{tr}(\Pi_i \rho),$$

where the maximization is performed over all rank-one POVMs $\{\Pi_i\}_i$, whose elements $\Pi_i$ represent the measurement outcomes in the POVM formalism, and $\operatorname{tr}(\Pi_i \rho)$ is therefore the probability of observing the $i$-th outcome when the state is $\rho$. (The restriction to rank-one measurements is needed: the trivial POVM $\{I\}$ would otherwise make the guessing probability equal to one for every state.)

A more concise method to write the double maximization is to observe that any element of such a POVM is a Hermitian operator dominated by a rank-one projection $|\psi\rangle\langle\psi|$, and thus we can equivalently maximize directly over unit vectors:

$$H_{\min}(\rho) = -\log \max_{|\psi\rangle} \langle\psi|\rho|\psi\rangle.$$

In fact, this maximization can be performed explicitly, and the maximum is obtained when $|\psi\rangle$ is (any of) the eigenvector(s) of $\rho$ with largest eigenvalue, resulting in

$$H_{\min}(\rho) = -\log \lVert\rho\rVert_{\mathrm{op}} = -\log \lambda_{\max}(\rho),$$

where $\lVert\rho\rVert_{\mathrm{op}}$ is the operator norm of $\rho$ and $\lambda_{\max}(\rho)$ its largest eigenvalue.
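The fact that the maximization yields the largest eigenvalue can be verified numerically; this is a sketch with a hypothetical qubit state, assuming `numpy` is available:

```python
import numpy as np

# Hypothetical qubit density matrix (unit trace, positive semidefinite).
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])

lam_max = np.linalg.eigvalsh(rho).max()
h_min = -np.log2(lam_max)

# The top eigenvector attains the maximum of <psi|rho|psi>.
w, v = np.linalg.eigh(rho)
psi = v[:, -1]
assert np.isclose(psi @ rho @ psi, lam_max)

# No other unit vector exceeds lambda_max (random sanity check).
rng = np.random.default_rng(0)
for _ in range(1000):
    u = rng.normal(size=2)
    u /= np.linalg.norm(u)
    assert u @ rho @ u <= lam_max + 1e-12
```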
Let $\rho_{AB}$ be a bipartite density operator on the space $\mathcal{H}_A \otimes \mathcal{H}_B$. The min-entropy of $A$ conditioned on $B$ is defined to be

$$H_{\min}(A|B)_\rho \equiv -\inf_{\sigma_B} D_{\max}(\rho_{AB} \,\|\, I_A \otimes \sigma_B),$$

where the infimum ranges over all density operators $\sigma_B$ on the space $\mathcal{H}_B$, and $D_{\max}(\rho\|\sigma) \equiv \inf\{\lambda : \rho \le 2^{\lambda}\sigma\}$ is the maximum relative entropy. The smooth min-entropy $H_{\min}^{\epsilon}(A|B)_\rho$ is defined as the supremum of $H_{\min}(A|B)_{\rho'}$ over all states $\rho'_{AB}$ that are $\epsilon$-close to $\rho_{AB}$.
These quantities can be seen as generalizations of the von Neumann entropy.
Indeed, the von Neumann entropy can be expressed as

$$S(A|B)_\rho = \lim_{\epsilon \to 0} \lim_{n \to \infty} \frac{1}{n} H_{\min}^{\epsilon}(A^n|B^n)_{\rho^{\otimes n}}.$$

This is called the fully quantum asymptotic equipartition theorem.
For example, the smooth min-entropy satisfies a data-processing inequality:[4]

$$H_{\min}^{\epsilon}(A|B)_\rho \ge H_{\min}^{\epsilon}(A|BC)_\rho.$$

Henceforth, we shall drop the subscript $\rho$ from the min-entropy when it is obvious from the context on what state it is evaluated.
Suppose an agent had access to a quantum system $B$ whose state $\rho_B^x$ depends on a classical variable $X$ distributed according to $P_X(x)$. Such a situation is described by the classical-quantum state

$$\rho_{XB} = \sum_x P_X(x)\, |x\rangle\langle x| \otimes \rho_B^x,$$

where $\{|x\rangle\}$ form an orthonormal basis. The probability that the agent guesses $X$ correctly using an optimal measurement strategy is

$$p_{\mathrm{g}}(X|B) = \sum_x P_X(x) \operatorname{tr}(E_x \rho_B^x),$$

where $\{E_x\}$ is the POVM that maximizes this expression. It can be shown[5] that this optimum can be expressed in terms of the min-entropy as

$$p_{\mathrm{g}}(X|B) = 2^{-H_{\min}(X|B)}.$$

If the state $\rho_{XB}$ is classical, this reduces to the expression above for the guessing probability.
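In the special case of classical side information (all $\rho_B^x$ diagonal in a common basis), the optimal strategy is simply to guess, for each observed $b$, the most likely $x$. A small numerical sketch with a made-up joint distribution, not from the article:

```python
import numpy as np

# Hypothetical joint distribution p(x, b): rows index the secret x,
# columns index the classical side information b.
p_xb = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.20, 0.10]])
assert np.isclose(p_xb.sum(), 1.0)

# Optimal guessing probability with side information: sum_b max_x p(x, b).
p_guess = p_xb.max(axis=0).sum()
h_min_cond = -np.log2(p_guess)   # H_min(X|B) = -log2 p_g(X|B)

# Without side information only max_x P_X(x) is achievable.
p_x = p_xb.sum(axis=1)
assert p_guess >= p_x.max()      # conditioning can only help the guesser
```

Here the guessing probability rises from $0.4$ (no side information) to $0.55$ (with side information), so $H_{\min}(X|B) < H_{\min}(X)$.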
The proof is from a 2008 paper by König, Schaffner, and Renner, and uses the machinery of semidefinite programming.
From the definition of the min-entropy, we have

$$H_{\min}(A|B) = -\inf_{\sigma_B} \inf_{\lambda} \left\{ \lambda : \rho_{AB} \le 2^{\lambda} (I_A \otimes \sigma_B) \right\}.$$

This can be re-written as

$$2^{-H_{\min}(A|B)} = \inf_{\sigma_B} \operatorname{Tr}(\sigma_B)$$

subject to the conditions

$$\sigma_B \ge 0, \qquad I_A \otimes \sigma_B \ge \rho_{AB},$$

where $\sigma_B$ now ranges over arbitrary positive semidefinite operators, absorbing the factor $2^{\lambda}$. We notice that the infimum is taken over compact sets and hence can be replaced by a minimum.
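The optimization defining the min-entropy can be probed numerically in a simple case. This sketch uses a hypothetical product state and the identity $2^{D_{\max}(\rho\|\sigma)} = \lambda_{\max}(\sigma^{-1/2}\rho\,\sigma^{-1/2})$, valid for full-rank $\sigma$; for product states $\rho_{AB} = \rho_A \otimes \rho_B$ the infimum is attained at $\sigma_B = \rho_B$, giving $2^{-H_{\min}(A|B)} = \lambda_{\max}(\rho_A)$:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_density(d):
    """Random full-rank density matrix of dimension d."""
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = m @ m.conj().T
    return rho / np.trace(rho).real

def inv_sqrt(m):
    """Inverse square root of a positive definite matrix."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(1.0 / np.sqrt(w)) @ v.conj().T

def dmax_exp(rho, sigma):
    """2^{D_max(rho||sigma)} = lambda_max(sigma^{-1/2} rho sigma^{-1/2})."""
    s = inv_sqrt(sigma)
    return np.linalg.eigvalsh(s @ rho @ s).max()

# Hypothetical, well-conditioned product state rho_AB = rho_A (x) rho_B.
rho_a = np.array([[0.6, 0.1], [0.1, 0.4]])
rho_b = np.array([[0.5, 0.2], [0.2, 0.5]])
rho_ab = np.kron(rho_a, rho_b)
I2 = np.eye(2)

# sigma_B = rho_B attains the infimum: 2^{-H_min(A|B)} = lambda_max(rho_A).
opt = dmax_exp(rho_ab, np.kron(I2, rho_b))
assert np.isclose(opt, np.linalg.eigvalsh(rho_a).max())

# Other choices of sigma_B never do better (random sanity check).
for _ in range(200):
    assert dmax_exp(rho_ab, np.kron(I2, rand_density(2))) >= opt - 1e-9
```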
This then can be expressed succinctly as a semidefinite program: minimize $\operatorname{Tr}(\sigma_B)$ subject to $\sigma_B \ge 0$ and $I_A \otimes \sigma_B \ge \rho_{AB}$. The primal problem involves the map $\operatorname{Tr}^*$, the adjoint of the partial trace over $A$, whose action on operators $X$ on $B$ can be written as

$$\operatorname{Tr}^*(X) = I_A \otimes X.$$

We can express the dual problem as a maximization over operators $E_{AB}$ on the space $AB$: maximize $\operatorname{Tr}(E_{AB}\rho_{AB})$ subject to $\operatorname{Tr}_A(E_{AB}) = I_B$ and $E_{AB} \ge 0$.
Using the Choi–Jamiołkowski isomorphism, we can write $E_{AB} = (I_A \otimes \mathcal{E}^{\dagger})(|\phi^{+}\rangle\langle\phi^{+}|)$ for some map $\mathcal{E}$, where the maximally entangled state $|\phi^{+}\rangle$ is defined over the space $AA'$. This means that we can express the objective function of the dual problem as

$$\operatorname{Tr}(E_{AB}\rho_{AB}) = \operatorname{Tr}\big( (I_A \otimes \mathcal{E})(\rho_{AB})\, |\phi^{+}\rangle\langle\phi^{+}| \big),$$

as desired.
Suppose the state $\rho_{XB}$ is a partly classical state as above. Then the quantity that we are after reduces to

$$\max_{\mathcal{E}} \sum_x P_X(x) \langle x| \mathcal{E}(\rho_B^x) |x\rangle.$$

We can interpret $\mathcal{E}$ as a guessing strategy, and this then reduces to the interpretation given above, where an adversary wants to find the string $x$ given access to quantum side information $B$.