Index
A
Activation file
Architecture of CUDA
B
Backpropagation of errors
  chain rule
  derivative of
  hidden layers
  hidden neurons
  mean squared error
  MLFN
  neuron activation
  parameters
  SoftMax outputs
C
Clear all data option
Conjugate gradient optimization
  change
  concept of
  difficulties
  dimensions
  function minimization
  implementations
  multivariate
  negative gradient
  power of
  second-order methods
  simple quadratic function
  traditional backpropagation, momentum
  vectors
CUDA code, RBM training
  bias vectors and weight matrix
  calling parameter list
  components
  development
  device and clean up, parameters
  epoch loop
  fetching
  gradient length and dot product, reduction
  hidden-to-visible analysis
  issue efficiency and stall reasons
  load balancing
  memory access statistics
  occupancy
  pipe utilization
  initialization ...