EM ALGORITHM FOR THE MULTIVARIATE MIXTURE MODEL
The most popular approach for maximizing the log-likelihood function is the EM algorithm. It is an iterative algorithm which, starting from an initial guess of the parameters Θ(0), generates a sequence of estimates Θ(1),Θ(2),…,Θ(j),… with increasing log-likelihood (i.e., log p(X|Θ(j)) > log p(X|Θ(j-1))). Each iteration j of the algorithm consists of two steps, called the expectation step (E-step) and the maximization step (M-step), followed by a convergence check. For GMMs these steps are defined as follows:
E-step: Given the set of mixture parameters Θ(j-1) from the previous iteration, for each m=1,…,K and i=1,…,N, the posterior probability (responsibility) that the feature vector xi was generated by the mth component is computed as:
\(h_{m}^{(j)}({\bf x}^{i})={\alpha_{m}^{(j-1)}\,p_{m}({\bf x}^{i}\,\vert\, \theta_{m}^{(j-1)}) \over \sum\nolimits_{k=1}^{K}\alpha_{k}^{(j-1)}\,p_{k}({\bf x}^{i}\,\vert\, \theta_{k}^{(j-1)})}, \)
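To make the E-step concrete, the following is a minimal sketch in Python, assuming NumPy and SciPy are available. The variable names (X for the N×d data matrix, alphas for the mixture weights, mus and covs for the Gaussian component parameters) are illustrative assumptions, not the paper's notation or implementation. The sketch computes the responsibilities h_m(x^i) from the formula above together with the log-likelihood log p(X|Θ) used in the convergence check; the computation is done in log space for numerical stability.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def e_step(X, alphas, mus, covs):
    """E-step sketch: responsibilities h_m(x^i) and log p(X | Theta).

    X: (N, d) data matrix; alphas: (K,) mixture weights;
    mus: list of K mean vectors; covs: list of K covariance matrices.
    (Hypothetical names chosen for illustration.)
    """
    N, K = X.shape[0], len(alphas)
    log_weighted = np.empty((N, K))
    for m in range(K):
        # log of the numerator: alpha_m * p_m(x^i | theta_m)
        log_weighted[:, m] = np.log(alphas[m]) + multivariate_normal.logpdf(
            X, mean=mus[m], cov=covs[m])
    # log of the denominator: sum_k alpha_k * p_k(x^i | theta_k), computed stably
    log_norm = logsumexp(log_weighted, axis=1, keepdims=True)
    resp = np.exp(log_weighted - log_norm)  # h_m^{(j)}(x^i), rows sum to 1
    return resp, log_norm.sum()             # responsibilities, log p(X | Theta)
```

The returned log-likelihood can be compared across iterations j to implement the convergence check described above, e.g., stopping when its increase falls below a small threshold.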