[1] A greedy optimisation procedure, and thus a faster version, was subsequently developed.
[2][3] The RVM has an identical functional form to the support vector machine, but provides probabilistic classification.
[4] Compared with the support vector machine (SVM), the Bayesian formulation of the RVM avoids the SVM's set of free parameters (which usually require cross-validation-based tuning).
However, RVMs use an expectation-maximization (EM)-like learning method and are therefore at risk of converging to a local minimum.
This is unlike the standard sequential minimal optimization (SMO)-based algorithms employed by SVMs, which are guaranteed to find a global optimum (of the convex problem).
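The EM-like scheme mentioned above can be sketched as an iterative re-estimation loop: alternately compute the posterior over the weights given the current hyperparameters, then re-estimate those hyperparameters, pruning basis functions whose precision diverges. The following is a minimal numpy sketch for the regression case; the function name `rvm_regression`, the pruning threshold, and the small epsilons guarding divisions are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def rvm_regression(Phi, t, n_iter=100, prune_thresh=1e6):
    """Hypothetical helper: sparse Bayesian regression in the style of the RVM.

    Phi : (N, M) design matrix (one column per basis function).
    t   : (N,) targets.
    Returns the posterior mean weights of the surviving basis functions
    and the indices of those basis functions.
    """
    Phi = np.asarray(Phi, dtype=float)
    t = np.asarray(t, dtype=float)
    N = Phi.shape[0]
    alpha = np.ones(Phi.shape[1])   # per-weight precision hyperparameters
    beta = 1.0                      # noise precision
    keep = np.arange(Phi.shape[1])  # indices of surviving basis functions
    mu = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        # E-step-like: Gaussian posterior over weights given alpha, beta
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ t
        # M-step-like: re-estimate hyperparameters (evidence maximisation)
        gamma = np.clip(1.0 - alpha * np.diag(Sigma), 0.0, 1.0)
        alpha = gamma / (mu ** 2 + 1e-12)
        beta = (N - gamma.sum()) / (np.sum((t - Phi @ mu) ** 2) + 1e-12)
        # prune weights whose precision diverges: the posterior pins them to zero
        mask = alpha < prune_thresh
        Phi, alpha, mu, keep = Phi[:, mask], alpha[mask], mu[mask], keep[mask]
    return mu, keep
```

The pruning step is what produces the RVM's sparsity: the surviving columns play the role of relevance vectors. Because each pass only re-estimates the hyperparameters locally, the loop can settle in a local minimum of the evidence, which is the drawback noted above.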