Typically, a Markov decision process is used to compute a policy of actions that maximizes some utility with respect to expected rewards.
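As a concrete illustration of policy computation, the sketch below applies value iteration, a standard dynamic-programming algorithm, to a tiny made-up MDP. The transition probabilities, rewards, and discount factor are illustrative assumptions, not taken from any particular application.

```python
import numpy as np

# Minimal value-iteration sketch for a 2-state, 2-action MDP.
# P[a, s, s'] is the probability of moving to state s' when taking
# action a in state s; R[a, s] is the expected immediate reward.
# All numbers here are invented for illustration.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
    [[0.5, 0.5], [0.1, 0.9]],   # transitions under action 1
])
R = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
])
gamma = 0.95  # discount factor on future rewards

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality update:
    #   Q(a, s) = R(a, s) + gamma * sum_{s'} P(a, s, s') V(s')
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

# The greedy policy picks, in each state, the action with highest Q-value.
policy = Q.argmax(axis=0)
print(V, policy)
```

The loop converges because the Bellman update is a contraction for any discount factor below one; the resulting policy maximizes expected discounted reward for this toy model.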
Solving POMDPs exactly is known to be computationally intractable (PSPACE-complete in the finite-horizon case), but recent approximation techniques have made them useful for a variety of applications, such as controlling simple agents or robots.
Modeling a problem as a Markov random field is useful because it implies that the joint distribution over the variables factorizes into local potential functions over the cliques of the graph, so that each vertex depends directly only on its neighbors.
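The factorization can be made concrete with a toy example. The sketch below builds a pairwise Markov random field on a three-node chain of binary variables; the potential tables are invented for illustration, and the joint distribution is the normalized product of edge potentials.

```python
import itertools
import numpy as np

# Toy pairwise Markov random field on a 3-node chain A - B - C,
# each variable binary. Potential tables are illustrative assumptions.
# The joint factorizes over the graph's edges (its maximal cliques):
#   P(a, b, c) = psi_AB(a, b) * psi_BC(b, c) / Z
psi_AB = np.array([[2.0, 1.0], [1.0, 3.0]])
psi_BC = np.array([[1.0, 2.0], [4.0, 1.0]])

# Partition function Z: sum of the unnormalized product over all states.
Z = sum(psi_AB[a, b] * psi_BC[b, c]
        for a, b, c in itertools.product(range(2), repeat=3))

def joint(a, b, c):
    """Joint probability of one configuration of (A, B, C)."""
    return psi_AB[a, b] * psi_BC[b, c] / Z

total = sum(joint(a, b, c) for a, b, c in itertools.product(range(2), repeat=3))
print(total)
```

Because A and C share no edge, they are conditionally independent given B, which is exactly the local Markov property the graph encodes.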
Hierarchical Markov models can be applied to categorize human behavior at various levels of abstraction.
[3] Both have been used for behavior recognition,[4] and certain conditional independence properties between different levels of abstraction in the model allow for faster learning and inference.
[6][7] Markov chains have been used as a forecasting method for several topics, for example price trends,[8] wind power,[9] and solar irradiance.
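A minimal sketch of such forecasting, assuming discretized price movements as states and an invented transition matrix (not fitted to real data): the forecast distribution k steps ahead is the current state distribution multiplied by the k-th power of the transition matrix.

```python
import numpy as np

# Markov-chain forecasting sketch. States discretize a price trend;
# T[i, j] = P(next state is j | current state is i). The matrix
# entries are illustrative assumptions, not estimates from data.
states = ["down", "flat", "up"]
T = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# Current state distribution: we observe "flat" with certainty.
p0 = np.array([0.0, 1.0, 0.0])

# Forecast 3 steps ahead: p0 times T cubed.
p3 = p0 @ np.linalg.matrix_power(T, 3)
print(dict(zip(states, p3.round(3))))
```

In practice the transition matrix would be estimated from historical data, for example by counting observed state-to-state transitions and normalizing each row.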