Mathematically, the logit is the inverse of the standard logistic function $\sigma(x) = 1/(1 + e^{-x})$, so the logit is defined as
$$\operatorname{logit}(p) = \sigma^{-1}(p) = \ln\frac{p}{1-p} \qquad \text{for } p \in (0, 1).$$
Because of this, the logit is also called the log-odds, since it equals the logarithm of the odds $\frac{p}{1-p}$, where $p$ is a probability. Thus, the logit is a type of function that maps probability values from $(0, 1)$ to real numbers in $(-\infty, +\infty)$. The base of the logarithm is of little importance as long as it is used consistently; for each choice of base, the logit function takes values between negative and positive infinity.
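This inverse relationship can be checked numerically; the following is a minimal sketch, assuming NumPy and SciPy are available (their `scipy.special.logit` and `scipy.special.expit` implement the logit and the standard logistic function):

```python
# Minimal sketch (assumes NumPy and SciPy): the logit maps probabilities in (0, 1)
# to the whole real line, and the standard logistic function (expit) inverts it.
import numpy as np
from scipy.special import expit, logit  # expit(x) = 1 / (1 + exp(-x))

p = np.array([0.001, 0.25, 0.5, 0.75, 0.999])
x = logit(p)                       # ln(p / (1 - p)), the log-odds
print(x)                           # roughly [-6.91, -1.10, 0.0, 1.10, 6.91]
print(np.allclose(expit(x), p))    # True: the logistic function undoes the logit
```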
The "logistic" function of any number $\alpha$ is given by the inverse-logit:
$$\operatorname{logit}^{-1}(\alpha) = \sigma(\alpha) = \frac{1}{1 + e^{-\alpha}} = \frac{e^{\alpha}}{e^{\alpha} + 1}.$$
The difference between the logits of two probabilities is the logarithm of the odds ratio ($R$), thus providing a shorthand for writing the correct combination of odds ratios only by adding and subtracting:
$$\ln R = \ln\left(\frac{p_1/(1-p_1)}{p_2/(1-p_2)}\right) = \ln\left(\frac{p_1}{1-p_1}\right) - \ln\left(\frac{p_2}{1-p_2}\right) = \operatorname{logit}(p_1) - \operatorname{logit}(p_2).$$
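As a quick numerical illustration of this identity (a sketch assuming SciPy; the probabilities $p_1$ and $p_2$ below are arbitrary example values):

```python
# Check that logit(p1) - logit(p2) equals the log of the odds ratio R.
# The probabilities below are illustrative values only.
import math
from scipy.special import logit

p1, p2 = 0.8, 0.3
R = (p1 / (1 - p1)) / (p2 / (1 - p2))                     # odds ratio
print(math.isclose(logit(p1) - logit(p2), math.log(R)))   # True
```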
Several approaches have been explored to adapt linear regression methods to a domain where the output is a probability value $(0, 1)$ instead of any real number $(-\infty, +\infty)$. In many cases, such efforts have focused on modeling this problem by mapping the range $(0, 1)$ to $(-\infty, +\infty)$ and then running the linear regression on these transformed values.[2]
In 1934, Chester Ittner Bliss used the cumulative normal distribution function to perform this mapping and called his model probit, an abbreviation for "probability unit".[2]
In 1944, Joseph Berkson used the log of odds and called this function logit, an abbreviation for "logistic unit", following the analogy for probit: "I use this term [logit] for $\ln p/q$ following Bliss, who called the analogous function which is linear on $x$ for the normal curve 'probit'."
"Log odds was used extensively by Charles Sanders Peirce (late 19th century).
[7] Barnard also coined the term lods as an abstract form of "log-odds",[8] but suggested that "in practice the term 'odds' should normally be used, since this is more familiar in everyday life".
Closely related to the logit function is the probit function: the logit is the quantile function of the logistic distribution, while the probit is the quantile function of the standard normal distribution. The probit function is denoted $\Phi^{-1}(x)$, where $\Phi(x)$ is the CDF of the standard normal distribution:
$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^{2}/2}\, dt.$$
As shown in the graph on the right, the logit and probit functions are extremely similar when the probit function is scaled so that its slope at $y = 0$ matches the slope of the logit.
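This similarity can also be checked numerically. The sketch below (assuming NumPy and SciPy) rescales the probit by $4/\sqrt{2\pi}$, which matches the logit's slope of 4 at $p = 1/2$ (where both curves cross $y = 0$), and prints the largest gap between the two curves away from the tails:

```python
# Compare the logit with a probit rescaled to have the same slope at p = 0.5.
# logit'(0.5) = 4 and probit'(0.5) = sqrt(2*pi), so the scale factor is 4 / sqrt(2*pi).
import numpy as np
from scipy.special import logit
from scipy.stats import norm

p = np.linspace(0.05, 0.95, 181)
scale = 4.0 / np.sqrt(2.0 * np.pi)          # about 1.60
scaled_probit = scale * norm.ppf(p)         # probit(p) = Phi^{-1}(p)
print(np.max(np.abs(logit(p) - scaled_probit)))  # maximum gap on [0.05, 0.95]
```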