Index

A

Activation file
Architecture of CUDA

B

Backpropagation of errors
    chain rule
    derivative of
    hidden layers
    hidden neurons
    mean squared error
    MLFN
    neuron activations
    parameters
    SoftMax outputs

C

Clear all data option
Conjugate gradient optimization
    change
    concept of
    difficulties
    dimensions
    function minimization
    implementations
    multivariate
    negative gradient
    power of
    second-order methods
    simple quadratic function
    traditional backpropagation, momentum
    vectors
CUDA code, RBM training
    bias vectors and weight matrix
    calling parameter list
    components
    development
    device and clean up, parameters
    epoch loop
    fetching
    gradient length and dot product, reduction
    hidden-to-visible analysis
    issue efficiency and stall reasons
    load balancing
    memory access statistics
    occupancy
    pipe utilization
    initialization ...

Get Deep Belief Nets in C++ and CUDA C: Volume 1: Restricted Boltzmann Machines and Supervised Feedforward Networks now with the O’Reilly learning platform.
