We will replicate the exact model described in the original DeepSpeech paper. As explained earlier, the model consists of both recurrent and non-recurrent layers. Let us now look at the get_layers function in the code:
with tf.name_scope('Lyr1'):
    # First fully connected layer: biases drawn from a small random normal,
    # weights from a Xavier initializer; the input width accounts for the
    # n_ctx frames of context on each side of the current frame
    B1 = tf.get_variable(name='B1', shape=[n_h],
                         initializer=tf.random_normal_initializer(stddev=0.046875))
    H1 = tf.get_variable(name='H1', shape=[n_inp + 2*n_inp*n_ctx, n_h],
                         initializer=tf.contrib.layers.xavier_initializer(uniform=False))
    logits1 = tf.add(tf.matmul(X_batch, H1), B1)
    # Clipped ReLU, min(max(0, x), 20), as used in the paper
    relu1 = tf.nn.relu(logits1)
    clipped_relu1 = tf.minimum(relu1, 20.0)
    # Dropout with a keep probability of 0.5
    Lyr1 = tf.nn.dropout(clipped_relu1, 0.5)

with tf.name_scope('Lyr2'):
    B2 = tf.get_variable(name='B2', shape=[n_h],
                         initializer=tf.random_normal_initializer(stddev=0.046875))
    ...
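Layers 2 and 3 repeat this dense clipped-ReLU-plus-dropout pattern, after which the paper places a bidirectional recurrent layer. That part of get_layers is elided above; the following is only a minimal sketch of what such a layer could look like in TF 1.x, continuing from the snippet's names (n_h, Lyr3). The identifiers rnn_inp, seq_len, and batch_size are assumptions introduced here for illustration, not the book's exact code:

# Sketch only: a bidirectional simple-RNN layer with the same clipped
# ReLU activation; seq_len, batch_size, and rnn_inp are assumed names
clipped_relu = lambda x: tf.minimum(tf.nn.relu(x), 20.0)
fw_cell = tf.contrib.rnn.BasicRNNCell(n_h, activation=clipped_relu)
bw_cell = tf.contrib.rnn.BasicRNNCell(n_h, activation=clipped_relu)
# bidirectional_dynamic_rnn expects [batch, time, features] inputs,
# so the flat [batch*time, n_h] output of layer 3 is reshaped first
rnn_inp = tf.reshape(Lyr3, [batch_size, -1, n_h])
(out_fw, out_bw), _ = tf.nn.bidirectional_dynamic_rnn(
    fw_cell, bw_cell, rnn_inp, sequence_length=seq_len, dtype=tf.float32)
# The paper sums the forward and backward activations: h(4) = h(f) + h(b)
Lyr4 = tf.add(out_fw, out_bw)

Note that summing the forward and backward outputs, rather than concatenating them, keeps the layer width at n_h, which is what the paper's fourth-layer equation specifies.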