Encoding and decoding

Training an RBM can be thought of as two passes: a forward encoding pass (construction) and a backward decoding pass (reconstruction). In an unsupervised setting, where we want to train the network to model the distribution of the input data, the forward and backward passes are done as follows.

In a forward pass, the raw input values from the data (for example, pixel values from an image) are represented by the visible nodes. They are then multiplied by the weights, and the hidden bias values are added (note that the visible bias values are not used in the forward pass). The resulting values are passed through an activation function, typically the sigmoid, which produces the activation probabilities of the hidden nodes.
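A minimal NumPy sketch of the two passes might look like the following. The dimensions, the weight matrix W, the hidden bias b_h, the visible bias b_v, and the use of a sigmoid activation are all illustrative assumptions, not code from this book:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed illustrative sizes: 6 visible units, 3 hidden units
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(6, 3))   # weights between visible and hidden layers
b_h = np.zeros(3)                        # hidden bias (used in the forward pass)
b_v = np.zeros(6)                        # visible bias (used only in the backward pass)

v = rng.integers(0, 2, size=6).astype(float)  # raw input, e.g. binarized pixels

# Forward pass (encoding/construction): visible -> hidden
h_prob = sigmoid(v @ W + b_h)                 # activation probabilities of hidden nodes
h = (rng.random(3) < h_prob).astype(float)    # sample binary hidden states

# Backward pass (decoding/reconstruction): hidden -> visible
v_recon = sigmoid(h @ W.T + b_v)              # reconstruction of the input
```

Note that the same weight matrix W is shared by both passes (transposed on the way back), which is what makes the backward pass a reconstruction of the input rather than a separate decoder.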
