*Chapter 10*

**Gaussian Mixture Model**

## 10.1 Introduction

Each Gaussian distribution is specified by a mean and a covariance. In a mixture of *M* Gaussians, every component additionally carries a *weight*, so a Gaussian mixture model (GMM) has three parameters per component: the mean, the covariance, and the weight. The following equation represents a GMM with *M* components:

$$p\left(x \mid \theta\right) = \sum_{k=1}^{M} w_k \, p\left(x \mid \theta_k\right)$$

where *w*_{k} is the weight of the *k*th component, and θ_{k} = (µ_{k}, Σ_{k}) collects the mean and covariance of the *k*th component. The component density *p*(*x*|θ_{k}) is a *D*-variate Gaussian of the following form:

$$p\left(x \mid \theta_k\right) = p\left(x \mid \mu_k, \Sigma_k\right) = \frac{1}{(2\pi)^{D/2} \left|\Sigma_k\right|^{1/2}} \exp\left\{ -\frac{1}{2} (x - \mu_k)^{\top} \Sigma_k^{-1} (x - \mu_k) \right\}$$
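The *D*-variate Gaussian density above can be translated directly into code. The following is a minimal NumPy sketch (the function name `gaussian_density` is my own choice, not from the text); it is a didactic implementation and does not address numerical concerns such as log-space computation.

```python
import numpy as np

def gaussian_density(x, mu, cov):
    """D-variate Gaussian density p(x | mu, cov), evaluated term by term
    from the formula: normalizer 1 / ((2*pi)^(D/2) * |cov|^(1/2))
    times exp(-0.5 * (x - mu)' cov^{-1} (x - mu))."""
    D = len(mu)
    diff = x - mu
    norm = 1.0 / ((2 * np.pi) ** (D / 2) * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)
```

For example, in one dimension with µ = 0 and Σ = 1 this reduces to the standard normal density, whose value at x = 0 is 1/√(2π).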

The weights *w*_{k} are nonnegative, and their sum over the *M* components equals 1, i.e., $\sum_{k=1}^{M} w_k = 1$; this constraint ensures that *p*(*x*|θ) is itself a valid probability density.
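Putting the pieces together, the full mixture density is just the weighted sum of component densities. The sketch below (function names are my own) checks the weight constraint and reuses the component formula from the equation above, this time via `np.linalg.solve` for the quadratic form:

```python
import numpy as np

def component_density(x, mu, cov):
    """D-variate Gaussian N(x; mu, cov), as in the component formula above."""
    D = len(mu)
    diff = x - mu
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / (
        (2 * np.pi) ** (D / 2) * np.sqrt(np.linalg.det(cov)))

def gmm_density(x, weights, means, covs):
    """p(x | theta) = sum_k w_k * p(x | theta_k); the weights must sum to 1."""
    assert np.isclose(np.sum(weights), 1.0), "mixture weights must sum to 1"
    return sum(w * component_density(x, m, c)
               for w, m, c in zip(weights, means, covs))
```

As a sanity check, an equal-weight mixture of two identical components must reproduce the density of a single component.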
