In statistics, separation is a phenomenon associated with models for dichotomous or categorical outcomes, including logistic and probit regression.
If the outcome values are (seemingly) perfectly determined by the predictor (e.g., y = 0 whenever x ≤ 2 and y = 1 whenever x > 2), the condition "complete separation" is said to occur.
Separation matters because it can cause problems with the estimation of regression coefficients.
For example, maximum likelihood (ML) estimation relies on maximizing the likelihood function. With completely separated data in a logistic regression, the likelihood has no maximum in the interior of the parameter space: it increases without bound as the coefficients grow, so the "maximum" lies on the boundary of the parameter space, leading to "infinite" estimates and to problems with providing sensible standard errors.
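A minimal sketch of this behavior, using a hypothetical completely separated data set (y = 0 whenever x ≤ 2, y = 1 whenever x > 2): holding the decision boundary fixed at x = 2.5 while steepening the slope makes the log-likelihood increase toward 0 indefinitely, so no finite maximizer exists.

```python
import math

# Hypothetical completely separated data: y = 0 whenever x <= 2, y = 1 otherwise
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0,   0,   0,   1,   1,   1]

def log_likelihood(beta0, beta1):
    """Logistic log-likelihood: sum_i [y_i log p_i + (1 - y_i) log(1 - p_i)]."""
    ll = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

# Fix the decision boundary at x = 2.5 (beta0 = -2.5 * beta1) and steepen the
# slope: the log-likelihood keeps improving, approaching (but never reaching) 0.
for b1 in (1.0, 5.0, 10.0):
    print(b1, log_likelihood(-2.5 * b1, b1))
```

In an actual fitting routine this shows up as coefficient estimates that diverge as the optimizer iterates, often accompanied by convergence warnings.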
Alternatively, one may avoid the problems associated with likelihood maximization by switching to a Bayesian approach to inference.[6]