Solving Optimization Problems Using MM & EM Algorithms

D Reddy, D Reddy
2018 unpublished
We discuss an optimization method that relies heavily on convexity arguments and is particularly useful in high-dimensional problems such as image reconstruction. This article applies the method to maximization problems; when it succeeds, the MM algorithm substitutes a simple optimization problem for a difficult one. Simplicity can be attained by avoiding large matrix inversions, linearizing the optimization, separating the variables of the problem, and dealing with equality and inequality constraints.
MM algorithms are useful extensions of the well-known class of EM algorithms, which revolve around some notion of missing data. Data can be missing in the ordinary sense of a failure to record certain observations on certain cases, or in a more theoretical sense. In particular, competing statistical methods of maximization must incorporate special techniques to cope with parameter constraints.

Key points: allele frequency estimation, linear logistic regression, Bradley-Terry model of ranking, Poisson processes, hidden Markov chains.

Introduction: Most practical optimization problems resist exact solution. We discuss an iterative optimization method that relies directly on convexity arguments and is particularly useful in high-dimensional problems such as image reconstruction. This iterative method is called the MM algorithm. In minimization problems, the first M of MM stands for majorize and the second M for minimize; in maximization problems, the first M stands for minorize and the second M for maximize. In simplifying the original problem, we pay the price of iteration and a possibly slower rate of convergence. Statisticians have vigorously developed a special case of the MM algorithm called the EM algorithm, which revolves around notions of missing data; the MM formulation is more general, has a more obvious connection to convexity, and relies less heavily on difficult statistical principles.

Maximum likelihood is the dominant form of estimation in applied statistics, so methods for finding maximum likelihood estimates are of paramount importance. We can think of the E, or expectation, step of the EM algorithm as filling in the missing data; this action replaces the likelihood of the observed data by a minorizing surrogate function. This surrogate function is then maximized in the M step. Because the surrogate function is usually much simpler than the likelihood, we can often solve the M step analytically; the price we pay is that the EM algorithm is iterative. A desirable feature is that the EM algorithm handles parameter constraints gracefully, since constraint satisfaction is by definition built into the solution of the M step. On the other hand, the EM algorithm often converges at an extremely slow rate in a neighborhood of the maximum point, and this rate directly reflects the amount of missing data in a problem. In the absence of concavity, there is also no guarantee that the EM algorithm will converge to the global maximum.
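To make the majorize-minimize idea concrete: writing f for the objective and g(θ | θ_k) for the surrogate built at the current iterate θ_k, majorization requires g(θ | θ_k) ≥ f(θ) for all θ with equality at θ = θ_k, so any θ that lowers the surrogate also lowers f. The sketch below is a minimal illustration of ours, not code from the article; the quadratic surrogate for |x − θ|, the function name mm_median, and the test data are all assumed for illustration. It computes a sample median by repeatedly minimizing such a surrogate, so each iteration reduces to a weighted mean, exactly the kind of simple problem substituted for a difficult one.

```python
import numpy as np

def mm_median(x, theta0=None, tol=1e-8, max_iter=500, eps=1e-12):
    """Minimize f(theta) = sum_i |x_i - theta| by a majorize-minimize iteration.

    At the current iterate theta_k, each term |x_i - theta| is majorized by
    (x_i - theta)**2 / (2 * |x_i - theta_k|) + |x_i - theta_k| / 2,
    which touches it at theta = theta_k.  Minimizing the sum of these
    quadratics yields a weighted-mean update, so every step is elementary.
    """
    x = np.asarray(x, dtype=float)
    theta = float(np.mean(x)) if theta0 is None else float(theta0)
    for _ in range(max_iter):
        # Majorization weights; eps guards against division by zero.
        w = 1.0 / np.maximum(np.abs(x - theta), eps)
        # Minimize the quadratic surrogate: a weighted mean of the data.
        theta_new = np.sum(w * x) / np.sum(w)
        if abs(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.standard_cauchy(101)
    # The MM iterate and the exact sample median should agree closely.
    print(mm_median(data), np.median(data))
```

As the introduction notes, the simplification comes at the cost of iteration: the surrogate must be rebuilt and re-minimized at every step, but no matrix inversion or line search is ever required.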
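The first key point, allele frequency estimation, is the classic setting for the E-step/M-step pattern described above. The following sketch is ours and assumes the standard ABO blood-group gene-counting formulation; the phenotype counts, the starting frequencies, and the name em_abo are illustrative choices, not values from the article. The E step fills in the unobserved genotype counts (phenotype A may be genotype AA or AO, phenotype B may be BB or BO), and the M step re-estimates the allele frequencies by counting alleles in the completed data, with the constraint p + q + r = 1 satisfied automatically at every iteration, as the remark on constraint handling suggests.

```python
def em_abo(n_a, n_b, n_ab, n_o, iters=200, tol=1e-10):
    """EM (gene-counting) estimates of ABO allele frequencies (p, q, r).

    Observed data are the four phenotype counts; the 'missing data' are the
    genotype counts hidden inside phenotypes A and B.
    """
    n = n_a + n_b + n_ab + n_o
    p, q, r = 1 / 3, 1 / 3, 1 / 3            # starting allele frequencies
    for _ in range(iters):
        # E step: expected genotype counts under the current (p, q, r),
        # assuming Hardy-Weinberg proportions.
        n_aa = n_a * p ** 2 / (p ** 2 + 2 * p * r)
        n_ao = n_a - n_aa
        n_bb = n_b * q ** 2 / (q ** 2 + 2 * q * r)
        n_bo = n_b - n_bb
        # M step: count alleles among the 2n genes in the completed data.
        p_new = (2 * n_aa + n_ao + n_ab) / (2 * n)
        q_new = (2 * n_bb + n_bo + n_ab) / (2 * n)
        r_new = 1.0 - p_new - q_new          # constraint holds by construction
        if max(abs(p_new - p), abs(q_new - q), abs(r_new - r)) < tol:
            return p_new, q_new, r_new
        p, q, r = p_new, q_new, r_new
    return p, q, r

if __name__ == "__main__":
    # Hypothetical phenotype counts, chosen only to exercise the sketch.
    print(em_abo(n_a=186, n_b=38, n_ab=13, n_o=284))
```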