In statistics, a generalized p-value is an extended version of the classical p-value, which, except in a limited number of applications, provides only approximate solutions.
Conventional statistical methods do not provide exact solutions to many statistical problems, such as those arising in mixed models and MANOVA, especially when the problem involves a number of nuisance parameters.
As a result, practitioners often resort to approximate or asymptotic statistical methods that are valid only when the sample size is large.[1] Use of approximate and asymptotic methods may lead to misleading conclusions or may fail to detect truly significant results in experiments.
While conventional statistical methods do not provide exact solutions to such problems as testing variance components or ANOVA under unequal variances, exact tests for such problems can be obtained based on generalized p-values.[1][2] To overcome the shortcomings of the classical p-value, Tsui and Weerahandi[2] extended the classical definition so that exact solutions can be obtained for such problems as the Behrens–Fisher problem and testing variance components.
This is accomplished by allowing test variables to depend on observable random vectors as well as their observed values, as in the Bayesian treatment of the problem, but without having to treat constant parameters as random variables.
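This idea is often stated more formally along the following lines (a sketch of the Tsui–Weerahandi-style definition; the notation T(X; x, θ, δ) is ours):

```latex
% A generalized test variable T = T(X; x, \theta, \delta) depends on the
% observable random vector X, its observed value x, the parameter of
% interest \theta, and the nuisance parameters \delta, and is required to
% satisfy three conditions:
% (i)   the observed value T(x; x, \theta, \delta) is free of \delta;
% (ii)  for fixed x, the distribution of T is free of \delta;
% (iii) for fixed x and \delta, \Pr(T \le t \mid \theta) is monotone in \theta.
% The generalized p-value is then an extreme-tail probability of T, e.g.
\[
  p = \Pr\bigl(T(X; x, \theta_0, \delta) \ge T(x; x, \theta_0, \delta)\bigr),
\]
```

Conditions (i) and (ii) make the tail probability computable without knowing the nuisance parameters, and condition (iii) makes it usable as evidence for one-sided hypotheses about θ.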
To describe the idea of generalized p-values in a simple example, consider a situation of sampling from a normal population with mean μ and variance σ². Let X̄ and S² be the sample mean and the sample variance of a sample of size n, with observed values x̄ and s². Inferences on all unknown parameters can be based on the distributional results

Z = √n(X̄ − μ)/σ ~ N(0, 1)   and   U = nS²/σ² ~ χ²(n−1).

Now suppose we need to test the coefficient of variation, ρ = μ/σ. While the problem is not trivial with conventional p-values, the task can be easily accomplished based on the generalized test variable

T = (x̄/s)·√(U/n) − Z/√n,

whose observed value is ρ and whose distribution is free of the nuisance parameters. For instance, the generalized p-value for testing H₀: ρ ≤ ρ₀ against H₁: ρ > ρ₀ is p = Pr(T ≤ ρ₀), a quantity that can be easily evaluated via Monte Carlo simulation or using the non-central t-distribution.
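The Monte Carlo evaluation can be sketched as follows, taking the generalized test variable to be T = (x̄/s)·√(U/n) − Z/√n with Z ~ N(0,1) and U ~ χ²(n−1); the sample size n, the observed values x̄ and s, and the null value ρ₀ below are made-up illustrative numbers, and the non-central t expression follows by rearranging the event T ≤ ρ₀:

```python
# Monte Carlo evaluation of a generalized p-value for the coefficient of
# variation rho = mu/sigma, cross-checked against the non-central
# t-distribution. The numbers n, xbar, s, rho0 are hypothetical.
import numpy as np
from scipy import stats

n = 20                 # sample size (assumed)
xbar, s = 10.0, 4.0    # observed sample mean and standard deviation (assumed)
rho0 = 2.0             # null value of the coefficient of variation (assumed)

rng = np.random.default_rng(0)
m = 400_000
Z = rng.standard_normal(m)         # Z = sqrt(n)(Xbar - mu)/sigma ~ N(0, 1)
U = rng.chisquare(n - 1, size=m)   # U = n S^2 / sigma^2 ~ chi^2(n - 1)

# Generalized test variable: its observed value is rho, and its
# distribution involves no nuisance parameters.
T = (xbar / s) * np.sqrt(U / n) - Z / np.sqrt(n)

# Monte Carlo estimate of p = Pr(T <= rho0) for H0: rho <= rho0.
p_mc = np.mean(T <= rho0)

# The same probability via the non-central t-distribution:
# T <= rho0  iff  (Z + sqrt(n) rho0) / sqrt(U / (n-1)) >= (xbar/s) sqrt(n-1),
# and the left-hand side is non-central t with n-1 df, noncentrality sqrt(n) rho0.
p_t = stats.nct.sf((xbar / s) * np.sqrt(n - 1), df=n - 1, nc=np.sqrt(n) * rho0)

print(p_mc, p_t)   # the two evaluations should agree closely
```

The simulation needs only draws from the standard normal and chi-squared distributions, which is what makes the approach practical even when no closed-form distribution is available.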