In statistics, point estimation involves the use of sample data to calculate a single value (known as a point estimate since it identifies a point in some parameter space) which is to serve as a "best guess" or "best estimate" of an unknown population parameter (for example, the population mean).
A point estimator can also be contrasted with a distribution estimator; examples of the latter are given by confidence distributions, randomized estimators, and Bayesian posteriors.
[1] Most importantly, we prefer point estimators that have the smallest mean squared error.
Consistency concerns whether the point estimate converges to the true value of the parameter as the sample size increases.
[1] Generally, the distribution of the population must be taken into account when assessing the efficiency of an estimator.
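As a hedged illustration of these properties, the following minimal sketch (the normal model, the sample sizes, and the choice of sample mean versus sample median are assumptions for illustration, not taken from the source) simulates two point estimators of a population mean and compares their mean squared errors as the sample size grows; both errors shrink with n (consistency), and the sample mean attains the smaller error under the normal model (efficiency).

```python
# Minimal simulation sketch: compare the mean squared error (MSE) of two point
# estimators of a normal population mean -- the sample mean and the sample
# median -- at increasing sample sizes. The normal model and constants below
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_mu, sigma = 5.0, 2.0

for n in (10, 100, 1000):
    samples = rng.normal(true_mu, sigma, size=(5000, n))   # 5000 replications
    mean_est = samples.mean(axis=1)                        # sample mean per replication
    median_est = np.median(samples, axis=1)                # sample median per replication
    mse_mean = np.mean((mean_est - true_mu) ** 2)
    mse_median = np.mean((median_est - true_mu) ** 2)
    print(f"n={n:5d}  MSE(mean)={mse_mean:.4f}  MSE(median)={mse_median:.4f}")
```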
In many cases, however, the raw data, which are too numerous and too costly to store, are not suitable for this purpose, and the sample is instead summarized by statistics from which the point estimate is computed.
[6][8] The Minimum Message Length (MML) point estimator is grounded in Bayesian information theory and is not as directly related to the posterior distribution.
Special cases of Bayesian filters, such as the Kalman filter and the particle filter, are important, and several methods of computational statistics have close connections with Bayesian analysis. Below are some commonly used methods of estimating unknown parameters that are expected to provide estimators having some of these important properties.
The method of maximum likelihood (MLE) attempts to find the values of the unknown parameters that maximize the likelihood function. It assumes a known model (for example, the normal distribution) and chooses the values of the parameters in that model that maximize the likelihood function, so as to obtain the best match with the observed data.
[10] However, owing to its simplicity, this method is not always accurate and can easily be biased.
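The sketch below is a minimal illustration of maximum likelihood estimation, assuming a normal model; the synthetic data, the starting values, and the use of scipy.optimize.minimize are assumptions for illustration rather than a prescribed implementation.

```python
# Minimal MLE sketch under an assumed normal model: estimate (mu, sigma) by
# numerically minimizing the negative log-likelihood of the observed data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=1.5, size=200)   # stand-in sample (assumption)

def neg_log_likelihood(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf                             # keep the scale parameter positive
    return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

result = minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = result.x
print(f"MLE: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```

For the normal model the maximizer has a closed form (the sample mean and the square root of the uncorrected sample variance); the latter illustrates the bias remark above, since the maximum likelihood estimator of the variance divides by n rather than n − 1 and is therefore biased.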
In the method of least squares, we consider the estimation of parameters using some specified form of the expectation and second moment of the observations.
For fitting a curve of the form y = f(x, β₀, β₁, …, βₚ) to the data (xᵢ, yᵢ), i = 1, 2, …, n, we may use the method of least squares.
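As a minimal sketch of this idea, the example below fits an assumed quadratic form f(x, β₀, β₁, β₂) = β₀ + β₁x + β₂x² to synthetic data by minimizing the sum of squared residuals; the functional form, the data, and the use of np.linalg.lstsq are illustrative assumptions.

```python
# Minimal least-squares sketch: fit y = b0 + b1*x + b2*x^2 (an assumed form)
# to noisy data by solving the linear least-squares problem.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 50)
y = 1.0 + 2.0 * x - 0.3 * x**2 + rng.normal(scale=1.0, size=x.size)

# Design matrix with columns [1, x, x^2]
X = np.column_stack([np.ones_like(x), x, x**2])
beta_hat, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print("least-squares estimates:", beta_hat)   # should be near (1.0, 2.0, -0.3)
```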
[2] The minimum-variance unbiased estimator (MVUE) minimizes the risk (expected loss) of the squared-error loss function.
A median-unbiased estimator minimizes the risk of the absolute-error loss function.
The best linear unbiased estimator (BLUE) is characterized by the Gauss–Markov theorem, which states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, provided that the errors in the linear regression model are uncorrelated, have equal variances, and have an expected value of zero.
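A hedged simulation sketch of that setting follows; the design matrix, coefficient values, and error variance are assumptions chosen for illustration. With uncorrelated, equal-variance, zero-mean errors, the simulated OLS estimates average out to the true coefficients (unbiasedness), and their empirical covariance is close to the theoretical σ²(XᵀX)⁻¹.

```python
# Simulation sketch of the Gauss-Markov setting: repeatedly draw errors that
# are iid with mean zero and constant variance, compute the OLS estimate via
# the normal equations, and compare its empirical behaviour with theory.
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 100, 1.0
beta = np.array([1.0, 2.0])                       # assumed true coefficients
X = np.column_stack([np.ones(n), rng.uniform(0.0, 5.0, n)])

estimates = []
for _ in range(5000):
    y = X @ beta + rng.normal(0.0, sigma, n)                 # errors: iid, mean 0, variance sigma^2
    estimates.append(np.linalg.solve(X.T @ X, X.T @ y))      # OLS via normal equations
estimates = np.array(estimates)

print("mean of OLS estimates:", estimates.mean(axis=0))      # close to beta (unbiased)
print("empirical covariance:\n", np.cov(estimates.T))
print("theoretical sigma^2 (X^T X)^{-1}:\n", sigma**2 * np.linalg.inv(X.T @ X))
```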
We can calculate the upper and lower confidence limits of an interval from the observed data.
[1] In general, with a normally distributed sample mean, x̄, and a known value for the population standard deviation, σ, a 100(1−α)% confidence interval for the true population mean μ is formed by taking x̄ ± e, with e = z(1−α/2) · σ/√n, where z(1−α/2) is the 100(1−α/2)% cumulative value (quantile) of the standard normal distribution and n is the number of data values in the sample.
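The formula above can be computed directly; the sketch below assumes illustrative sample values, a known σ, and α = 0.05 (a 95% interval).

```python
# Minimal sketch of the known-sigma confidence interval for a mean:
# x_bar +/- z_(1 - alpha/2) * sigma / sqrt(n). Data and sigma are assumptions.
import numpy as np
from scipy.stats import norm

data = np.array([4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4, 5.2])   # stand-in sample
sigma = 0.4            # assumed known population standard deviation
alpha = 0.05           # 95% confidence level

x_bar = data.mean()
z = norm.ppf(1 - alpha / 2)            # z_(1 - alpha/2), the standard normal quantile
e = z * sigma / np.sqrt(data.size)     # margin of error
print(f"{100 * (1 - alpha):.0f}% CI for mu: ({x_bar - e:.3f}, {x_bar + e:.3f})")
```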
[12] Here two limits, say lₙ and uₙ, are computed from the set of observations, and it is claimed with a certain degree of confidence (measured in probabilistic terms) that the true value of γ(θ) lies between lₙ and uₙ.
Thus we obtain an interval (lₙ, uₙ) which we expect to include the true value of γ(θ).
[2] Interval estimation provides a range of values within which the parameter is expected to lie.
It generally gives more information than a point estimate and is preferred when making inferences.