Bernoulli distribution

Figure: Three examples of the Bernoulli distribution.

In probability theory and statistics, the Bernoulli distribution, named after Swiss mathematician Jacob Bernoulli,[1] is the discrete probability distribution of a random variable which takes the value 1 with probability $p$ and the value 0 with probability $q = 1 - p$.

Less formally, it can be thought of as a model for the set of possible outcomes of any single experiment that asks a yes–no question.

Such questions lead to outcomes that are Boolean-valued: a single bit whose value is success/yes/true/one with probability $p$ and failure/no/false/zero with probability $q$.

It can be used to represent a (possibly biased) coin toss where 1 and 0 would represent "heads" and "tails", respectively, and $p$ would be the probability of the coin landing on heads (or vice versa, where 1 would represent tails and $p$ would be the probability of tails).

The Bernoulli distribution is a special case of the binomial distribution where a single trial is conducted (so $n$ would be 1 for such a binomial distribution).
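As a minimal sketch of this relationship (assuming the SciPy library is available; the parameter value p = 0.3 is an arbitrary example, not from the text), the Bernoulli probability mass function can be checked numerically against a binomial probability mass function with a single trial:

```python
from scipy.stats import bernoulli, binom

p = 0.3  # arbitrary example parameter

for k in (0, 1):
    # A Bernoulli(p) pmf should coincide with a binomial pmf using n = 1 trial
    assert abs(bernoulli.pmf(k, p) - binom.pmf(k, 1, p)) < 1e-12

print(bernoulli.pmf(0, p), bernoulli.pmf(1, p))  # 0.7 0.3
```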

It is also a special case of the two-point distribution, for which the possible outcomes need not be 0 and 1.

If $X$ is a random variable with a Bernoulli distribution, then:

$$\Pr(X=1) = p = 1 - \Pr(X=0) = 1 - q.$$

The probability mass function $f$ of this distribution, over possible outcomes $k$, is

$$f(k;p) = \begin{cases} p & \text{if } k = 1, \\ q = 1-p & \text{if } k = 0. \end{cases}$$

This can also be expressed as

$$f(k;p) = p^{k}(1-p)^{1-k} \qquad \text{for } k \in \{0,1\},$$

or as

$$f(k;p) = pk + (1-p)(1-k) \qquad \text{for } k \in \{0,1\}.$$

The Bernoulli distribution is a special case of the binomial distribution with $n = 1$.[4]
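As a minimal sketch in plain Python (the helper name bernoulli_pmf is not from the text; it simply evaluates the closed form $p^{k}(1-p)^{1-k}$ shown above), the probability mass function could be written as:

```python
def bernoulli_pmf(k: int, p: float) -> float:
    """Probability mass function of a Bernoulli(p) variable at k in {0, 1}."""
    if k not in (0, 1):
        raise ValueError("a Bernoulli outcome k must be 0 or 1")
    # Equivalent to: p if k == 1 else 1 - p
    return p**k * (1 - p) ** (1 - k)

print(bernoulli_pmf(1, 0.3))  # 0.3
print(bernoulli_pmf(0, 0.3))  # 0.7
```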

The kurtosis goes to infinity for high and low values of $p$, but for $p = \tfrac{1}{2}$ the two-point distributions, including the Bernoulli distribution, have a lower excess kurtosis, namely −2, than any other probability distribution.
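As a sketch of this behaviour (pure Python; it uses the standard closed form for the excess kurtosis of a Bernoulli variable, $(1-6pq)/(pq)$, which is not stated explicitly in the text above), the excess kurtosis grows without bound as $p$ approaches 0 or 1 and equals −2 at $p = \tfrac{1}{2}$:

```python
def bernoulli_excess_kurtosis(p: float) -> float:
    # Standard closed form for a Bernoulli(p) variable: (1 - 6*p*q) / (p*q), with q = 1 - p
    q = 1 - p
    return (1 - 6 * p * q) / (p * q)

for p in (0.001, 0.1, 0.5, 0.9, 0.999):
    print(p, bernoulli_excess_kurtosis(p))
# Large positive values near p = 0 or 1; exactly -2.0 at p = 0.5
```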

The Bernoulli distributions for $0 \le p \le 1$ form an exponential family.

The maximum likelihood estimator of $p$ based on a random sample is the sample mean.
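As a minimal sketch (pure Python; the function name bernoulli_mle and the sample values are illustrative, not from the text), the maximum likelihood estimate of $p$ is simply the fraction of ones in the observed sample:

```python
def bernoulli_mle(sample):
    """Maximum likelihood estimate of p: the sample mean of 0/1 observations."""
    return sum(sample) / len(sample)

observations = [1, 0, 1, 1, 0, 1, 0, 1]  # illustrative data
print(bernoulli_mle(observations))  # 0.625
```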

The expected value of a Bernoulli random variable $X$ is

$$\operatorname{E}[X] = p.$$

This is because for a Bernoulli distributed random variable $X$ with $\Pr(X=1) = p$ and $\Pr(X=0) = q$ we find

$$\operatorname{E}[X] = \Pr(X=1)\cdot 1 + \Pr(X=0)\cdot 0 = p \cdot 1 + q \cdot 0 = p.$$

The variance of a Bernoulli distributed $X$ is

$$\operatorname{Var}[X] = pq = p(1-p).$$

We first find

$$\operatorname{E}[X^{2}] = \Pr(X=1)\cdot 1^{2} + \Pr(X=0)\cdot 0^{2} = p \cdot 1^{2} + q \cdot 0^{2} = p = \operatorname{E}[X].$$

From this follows

$$\operatorname{Var}[X] = \operatorname{E}[X^{2}] - \operatorname{E}[X]^{2} = \operatorname{E}[X] - \operatorname{E}[X]^{2} = p - p^{2} = p(1-p) = pq.$$

With this result it is easy to prove that, for any Bernoulli distribution, its variance will have a value inside $[0, \tfrac{1}{4}]$.
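As a quick numerical sketch (pure Python; the parameter p = 0.3, the sample size, and the random seed are arbitrary choices for illustration), a simulation agrees with the closed forms $\operatorname{E}[X] = p$ and $\operatorname{Var}[X] = p(1-p)$:

```python
import random

random.seed(0)
p = 0.3
n = 100_000

draws = [1 if random.random() < p else 0 for _ in range(n)]  # Bernoulli(p) samples
mean = sum(draws) / n
var = sum((x - mean) ** 2 for x in draws) / n

print(mean, p)            # sample mean close to p
print(var, p * (1 - p))   # sample variance close to p*(1-p) = 0.21
```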

When we take the standardized Bernoulli distributed random variable $\frac{X - \operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}$ we find that this random variable attains $\frac{q}{\sqrt{pq}}$ with probability $p$ and attains $-\frac{p}{\sqrt{pq}}$ with probability $q$. Thus we get the skewness

$$\gamma_1 = \operatorname{E}\!\left[\left(\frac{X - \operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}\right)^{3}\right] = p \cdot \left(\frac{q}{\sqrt{pq}}\right)^{3} + q \cdot \left(-\frac{p}{\sqrt{pq}}\right)^{3} = \frac{pq^{3} - qp^{3}}{\left(\sqrt{pq}\right)^{3}} = \frac{pq\,(q-p)}{\left(\sqrt{pq}\right)^{3}} = \frac{q-p}{\sqrt{pq}}.$$

The raw moments are all equal because $1^{k} = 1$ and $0^{k} = 0$:

$$\operatorname{E}[X^{k}] = \Pr(X=1)\cdot 1^{k} + \Pr(X=0)\cdot 0^{k} = p \cdot 1 + q \cdot 0 = p = \operatorname{E}[X].$$
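As a sketch (pure Python; p = 0.3 is an arbitrary example value), the skewness computed directly from the standardized outcomes matches the closed form $(q-p)/\sqrt{pq}$, and every raw moment $\operatorname{E}[X^{k}]$ equals $p$:

```python
import math

p, q = 0.3, 0.7
sd = math.sqrt(p * q)

# Skewness: expectation of the cubed standardized variable
skew = p * (q / sd) ** 3 + q * (-p / sd) ** 3
print(skew, (q - p) / sd)  # both ~0.8729

# Raw moments E[X**k] = p for every k, since 1**k == 1 and 0**k == 0
print([p * 1**k + q * 0**k for k in range(1, 6)])  # [0.3, 0.3, 0.3, 0.3, 0.3]
```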

The central moment of order $k$ is given by

$$\mu_k = q(-p)^{k} + pq^{k}.$$

The first six central moments are

$$\begin{aligned}
\mu_1 &= 0, \\
\mu_2 &= p(1-p), \\
\mu_3 &= p(1-p)(1-2p), \\
\mu_4 &= p(1-p)\bigl(1 - 3p(1-p)\bigr), \\
\mu_5 &= p(1-p)(1-2p)\bigl(1 - 2p(1-p)\bigr), \\
\mu_6 &= p(1-p)\bigl(1 - 5p(1-p)\bigl(1 - p(1-p)\bigr)\bigr).
\end{aligned}$$

The higher central moments can be expressed more compactly in terms of $\mu_2$ and $\mu_3$:

$$\begin{aligned}
\mu_4 &= \mu_2\bigl(1 - 3\mu_2\bigr), \\
\mu_5 &= \mu_3\bigl(1 - 2\mu_2\bigr), \\
\mu_6 &= \mu_2\bigl(1 - 5\mu_2(1 - \mu_2)\bigr).
\end{aligned}$$
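As a numerical sketch (pure Python; p = 0.3 is arbitrary), the closed form $\mu_k = q(-p)^{k} + pq^{k}$ reproduces the compact expressions in $\mu_2$ and $\mu_3$:

```python
p, q = 0.3, 0.7

def mu(k: int) -> float:
    # Central moment of order k for a Bernoulli(p) variable
    return q * (-p) ** k + p * q**k

print([round(mu(k), 6) for k in range(1, 7)])
# Check the compact forms in terms of mu_2 and mu_3
print(round(mu(4), 6) == round(mu(2) * (1 - 3 * mu(2)), 6))
print(round(mu(5), 6) == round(mu(3) * (1 - 2 * mu(2)), 6))
print(round(mu(6), 6) == round(mu(2) * (1 - 5 * mu(2) * (1 - mu(2))), 6))
```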

The first six cumulants are

$$\begin{aligned}
\kappa_1 &= p, \\
\kappa_2 &= \mu_2, \\
\kappa_3 &= \mu_3, \\
\kappa_4 &= \mu_2\bigl(1 - 6\mu_2\bigr), \\
\kappa_5 &= \mu_3\bigl(1 - 12\mu_2\bigr), \\
\kappa_6 &= \mu_2\bigl(1 - 30\mu_2(1 - 4\mu_2)\bigr).
\end{aligned}$$

Entropy is a measure of uncertainty or randomness in a probability distribution.

For a Bernoulli random variable $X$ with success probability $p$ and failure probability $q = 1 - p$, the entropy $H(X)$ is defined as:

$$H(X) = -\Pr(X=0)\ln \Pr(X=0) - \Pr(X=1)\ln \Pr(X=1) = -(q\ln q + p\ln p).$$

The entropy is maximized when $p = 0.5$, indicating the highest level of uncertainty when both outcomes are equally likely.
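As a small sketch (pure Python, entropy in nats; the grid of p values is illustrative), the binary entropy $-(q\ln q + p\ln p)$ peaks at $p = 0.5$:

```python
import math

def bernoulli_entropy(p: float) -> float:
    # Entropy in nats: -(q*ln q + p*ln p), with the convention 0*ln 0 = 0
    q = 1 - p
    return -sum(x * math.log(x) for x in (p, q) if x > 0)

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, round(bernoulli_entropy(p), 4))
# Maximum value ln 2 ~ 0.6931 occurs at p = 0.5
```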

Fisher information measures the amount of information that an observable random variable $X$ carries about an unknown parameter $p$.

For the Bernoulli distribution, the Fisher information with respect to the parameter $p$ is given by:

$$I(p) = \frac{1}{pq}.$$

Proof: The likelihood function for a Bernoulli random variable $X$ is

$$L(p;X) = p^{X}(1-p)^{1-X}.$$

This represents the probability of observing $X$ given the parameter $p$. Taking the logarithm gives the log-likelihood $\ln L(p;X) = X\ln p + (1-X)\ln(1-p)$, whose second derivative with respect to $p$ is $-\frac{X}{p^{2}} - \frac{1-X}{(1-p)^{2}}$. The Fisher information is the negative expected value of this second derivative:

$$I(p) = -\operatorname{E}\!\left[\frac{\partial^{2}}{\partial p^{2}}\ln L(p;X)\right] = \frac{p}{p^{2}} + \frac{1-p}{(1-p)^{2}} = \frac{1}{p} + \frac{1}{1-p} = \frac{1}{p(1-p)} = \frac{1}{pq}.$$
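As a numerical sketch (pure Python; the expectation is taken exactly over the two outcomes rather than by simulation, and p = 0.3 is arbitrary), the negative expected second derivative of the log-likelihood, estimated by finite differences, agrees with $1/(pq)$:

```python
import math

def log_likelihood(p: float, x: int) -> float:
    # ln L(p; x) = x*ln p + (1 - x)*ln(1 - p)
    return x * math.log(p) + (1 - x) * math.log(1 - p)

def fisher_information(p: float, h: float = 1e-5) -> float:
    # Negative expected second derivative, via a central finite difference in p
    info = 0.0
    for x, prob in ((1, p), (0, 1 - p)):
        second = (log_likelihood(p + h, x) - 2 * log_likelihood(p, x)
                  + log_likelihood(p - h, x)) / h**2
        info -= prob * second
    return info

p = 0.3
print(fisher_information(p), 1 / (p * (1 - p)))  # both ~4.7619
```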

Figure: The probability mass function of a Bernoulli experiment along with its corresponding cumulative distribution function.