
Machine Learning

by Mohssen Mohammed, Muhammad Badruddin Khan, Eihab Mohammed Bashier
August 2016
Intermediate to advanced
204 pages
English
CRC Press

Chapter 10

Gaussian Mixture Model

10.1 Introduction

Each Gaussian distribution is characterized by its mean and covariance. In a mixture of M Gaussian distributions, each component additionally carries a weight, a third parameter associated with every component of a Gaussian mixture model (GMM). The following equation represents a GMM with M components:

p(x|\theta) = \sum_{k=1}^{M} w_k \, p(x|\theta_k)

where w_k represents the weight of the kth component and \theta_k = (\mu_k, \Sigma_k) collects the mean and covariance of the kth component. The term p(x|\theta_k) is the Gaussian density of the kth component, a D-variate Gaussian function of the following form:

p(x|\theta_k) = p(x|(\mu_k, \Sigma_k)) = \frac{1}{(2\pi)^{D/2}\,|\Sigma_k|^{1/2}} \exp\left\{-\frac{1}{2}(x-\mu_k)^{\top}\Sigma_k^{-1}(x-\mu_k)\right\}

The weights w_k are nonnegative and, summed over the M components, equal one: \sum_{k=1}^{M} w_k = 1.
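The two equations above can be sketched directly in code. The following is a minimal NumPy illustration (not from the book): `gaussian_density` implements the D-variate Gaussian density, and `gmm_density` forms the weighted sum over components; the two-component parameters at the bottom are invented purely for demonstration.

```python
import numpy as np

def gaussian_density(x, mu, cov):
    """D-variate Gaussian density p(x | mu, cov)."""
    D = len(mu)
    diff = x - mu
    norm = 1.0 / ((2 * np.pi) ** (D / 2) * np.linalg.det(cov) ** 0.5)
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)

def gmm_density(x, weights, mus, covs):
    """Mixture density: sum over k of w_k * p(x | theta_k)."""
    return sum(w * gaussian_density(x, mu, cov)
               for w, mu, cov in zip(weights, mus, covs))

# Illustrative two-component, 2-D mixture (hypothetical parameters).
weights = [0.6, 0.4]                                   # must sum to 1
mus = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.eye(2), np.eye(2)]

x = np.array([0.0, 0.0])
print(gmm_density(x, weights, mus, covs))
```

At x = (0, 0) the first component dominates, since the second component's mean is far away relative to its unit covariance; this is the intuition behind the weighted-sum form of the mixture.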

Publisher Resources

ISBN: 9781315354415