Mixture of experts

The idea behind a mixture of experts is to fit a separate linear regression to each subspace of the original data space and to combine them with weighting (gating) functions that smoothly assign weight to each linear regression depending on where the input lies.
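As a minimal sketch of this idea, the following R code combines two linear experts with a logistic gating function. The expert coefficients and the gate's crossover point are illustrative assumptions, not fitted values from the book:

```r
# Two linear "experts", each intended for one region of the input space
# (coefficients are illustrative assumptions)
expert1 <- function(x) 1 + 2.5 * x
expert2 <- function(x) 35 - 1.5 * x

# Logistic gating function: weight shifts from expert1 to expert2
# around x = 10 (crossover point assumed for illustration)
gate <- function(x) 1 / (1 + exp(-(x - 10)))

# Mixture-of-experts prediction: gate-weighted combination of the experts
moe_predict <- function(x) (1 - gate(x)) * expert1(x) + gate(x) * expert2(x)
```

Far to the left of the crossover the prediction follows `expert1` almost exactly; far to the right it follows `expert2`, with a smooth transition in between.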

Consider an example dataset generated with the following toy code:


x1 = runif(20, 0, 10)   # inputs for the first regime (range assumed; not defined in the excerpt)
x2 = runif(20, 10, 20)  # inputs for the second regime (range assumed)
e1 = rnorm(20, 0, 2)    # Gaussian noise for the first regime
e2 = rnorm(20, 0, 3)    # Gaussian noise for the second regime
y1 = 1 + 2.5 * x1 + e1
y2 = 35 - 1.5 * x2 + e2

Plotting the result, and doing a simple linear regression on it, gives the following:

Figure: Mixture of experts — scatter plot of the generated data with a single linear regression line.
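The plot and the naive single-regression fit can be reproduced with a sketch like the following. The input ranges for `x1` and `x2` (and the seed) are assumptions, since they are not defined in the excerpt:

```r
set.seed(1)                     # for reproducibility (an addition, not in the book)
x1 <- runif(20, 0, 10)          # assumed input range for the first regime
x2 <- runif(20, 10, 20)         # assumed input range for the second regime
y1 <- 1 + 2.5 * x1 + rnorm(20, 0, 2)
y2 <- 35 - 1.5 * x2 + rnorm(20, 0, 3)

x <- c(x1, x2)                  # concatenate the two sub-datasets
y <- c(y1, y2)
plot(x, y)                      # scatter plot of the combined data
fit <- lm(y ~ x)                # a single linear regression over everything
abline(fit)                     # the one fitted line from the figure
```

The residuals of `fit` stay far larger than the noise in either regime, which is exactly what the figure shows.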

Obviously, the linear regression does not capture the behavior of the data at all. It barely captures ...
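One standard way to actually capture both regimes is an EM-style fit. The sketch below hand-rolls EM for a two-component mixture of linear regressions in base R. This is a simplification of a full mixture of experts (for brevity, the mixing weights here do not depend on x), and the fitting procedure is an assumption for illustration, not the book's own code; the data generation matches the toy code above with assumed input ranges:

```r
set.seed(2)
x1 <- runif(20, 0, 10); x2 <- runif(20, 10, 20)   # assumed input ranges
y1 <- 1 + 2.5 * x1 + rnorm(20, 0, 2)
y2 <- 35 - 1.5 * x2 + rnorm(20, 0, 3)
x <- c(x1, x2); y <- c(y1, y2)

n <- length(x); K <- 2
# Initialize responsibilities with a soft split at the median of x
resp <- cbind(ifelse(x < median(x), 0.9, 0.1),
              ifelse(x < median(x), 0.1, 0.9))
for (iter in 1:50) {
  # M-step: one weighted least-squares fit per component
  fits <- lapply(1:K, function(k) lm(y ~ x, weights = resp[, k]))
  sds  <- sapply(1:K, function(k)
    sqrt(sum(resp[, k] * residuals(fits[[k]])^2) / sum(resp[, k])))
  pis  <- colMeans(resp)
  # E-step: posterior probability of each component for each point
  dens <- sapply(1:K, function(k) pis[k] * dnorm(y, fitted(fits[[k]]), sds[k]))
  resp <- dens / rowSums(dens)
}
slopes <- sapply(fits, function(f) unname(coef(f)[2]))
```

After convergence the two recovered slopes are close to the generating values 2.5 and -1.5, one per regime, instead of the single compromise slope of the naive fit.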
