The MM algorithm is an iterative optimization method that exploits the convexity of a function to find its maxima or minima.
The name “MM” stands for “Majorize-Minimization” or “Minorize-Maximization”, depending on whether the desired optimization is a minimization or a maximization.
The expectation–maximization (EM) algorithm can be treated as a special case of the MM algorithm.[1][2] However, the EM algorithm usually involves conditional expectations, while the MM algorithm centers on convexity and inequalities, which makes it easier to understand and apply in most cases.[3]
The historical basis for the MM algorithm can be dated back to at least 1970, when Ortega and Rheinboldt were performing studies related to line search methods.[4] The same concept continued to reappear in different areas in different forms.
In 2000, Hunter and Lange put forth "MM" as a general framework.
Since then, researchers have applied the method in a wide range of subject areas, such as mathematics, statistics, machine learning, and engineering.
The MM algorithm works by finding a surrogate function that minorizes or majorizes the objective function. Optimizing the surrogate function either improves the value of the objective function or leaves it unchanged.

Taking the minorize-maximization version, let $f(\theta)$ be the objective concave function to be maximized. At step $m$ of the algorithm, $m = 0, 1, \ldots$, the constructed function $g(\theta \mid \theta_m)$ is called the minorized version of the objective function (the surrogate function) at $\theta_m$ if

$$g(\theta \mid \theta_m) \le f(\theta) \quad \text{for all } \theta,$$
$$g(\theta_m \mid \theta_m) = f(\theta_m).$$

Then, maximize $g(\theta \mid \theta_m)$ instead of $f(\theta)$, and let

$$\theta_{m+1} = \arg\max_{\theta} g(\theta \mid \theta_m).$$

This iterative method guarantees that $f(\theta_m)$ will converge to a local optimum or a saddle point as $m$ goes to infinity.
Majorize-Minimization is the same procedure but with a convex objective to be minimized.
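To make the procedure concrete, the following is a minimal sketch of majorize-minimization in Python (the function name mm_median, the data, and the iteration budget are illustrative choices, not from the article). It minimizes $f(x) = \sum_i |x - a_i|$, whose minimizer is a median of the data, by replacing each absolute-value term with a quadratic majorizer and minimizing the surrogate in closed form at every step.

```python
import numpy as np

def mm_median(a, x0=0.0, iters=100, eps=1e-12):
    """Majorize-minimization for f(x) = sum_i |x - a_i| (a 1-D median).

    At the current iterate x_m, each term is majorized by the quadratic
        |x - a_i| <= (x - a_i)**2 / (2|x_m - a_i|) + |x_m - a_i| / 2,
    which touches |x - a_i| at x = x_m. Minimizing the sum of these
    quadratics gives a weighted-average update, so f never increases.
    """
    x = x0
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(x - a), eps)  # majorizer weights (eps guards /0)
        x = np.sum(w * a) / np.sum(w)             # closed-form argmin of the surrogate
    return x

a = np.array([1.0, 2.0, 7.0, 9.0, 10.0])
print(mm_median(a))  # converges toward the median, 7.0
```

Each update is a weighted least-squares step; this reweighting view is why MM schemes of this kind are also known as iteratively reweighted least squares.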
One can use any inequality to construct the desired majorized/minorized version of the objective function.
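For instance, the inequality of arithmetic and geometric means already yields the quadratic majorizer of the absolute value used in the sketch above (a standard construction, stated here in the notation $\theta_m$ from the iteration): for any $c > 0$,

$$|\theta| = \sqrt{\frac{\theta^2}{c} \cdot c} \;\le\; \frac{\theta^2}{2c} + \frac{c}{2},$$

with equality exactly when $c = |\theta|$. Choosing $c = |\theta_m|$ therefore gives a surrogate $g(\theta \mid \theta_m) = \theta^2/(2|\theta_m|) + |\theta_m|/2$ that majorizes $|\theta|$ and touches it at $\theta = \theta_m$, as the construction requires.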