
Practical Convolutional Neural Networks

by Mohit Sewak, Md. Rezaul Karim, Pradeep Pujari
February 2018
Intermediate to advanced
218 pages
5h 31m
English
Packt Publishing
Content preview from Practical Convolutional Neural Networks

Attention mechanism for image captioning

From the introduction so far, it should be clear that the attention mechanism operates on a sequence of objects, assigning each element in the sequence a weight for a specific iteration of the required output. With every subsequent step, not only the weights in the attention mechanism but also the sequence itself can change. Attention-based architectures are therefore essentially sequence networks, best implemented in deep learning with RNNs (or their variants).
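Because the weights are recomputed at every output step from the decoder's current hidden state, the core of the mechanism fits in a few lines. The following NumPy sketch shows an additive (Bahdanau-style) attention step; the matrices W_enc and W_dec, the vector v, and all the shapes are illustrative assumptions rather than code from this chapter:

import numpy as np

# Additive attention sketch (illustrative shapes and parameters, not this book's code):
#   encoder_states: T sequence elements, each a D-dimensional vector (T x D)
#   decoder_state:  the RNN decoder's hidden state at the current output step (H,)

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_step(encoder_states, decoder_state, W_enc, W_dec, v):
    # Score every sequence element against the current decoder state.
    scores = np.tanh(encoder_states @ W_enc + decoder_state @ W_dec) @ v   # (T,)
    weights = softmax(scores)                                              # (T,), sums to 1
    # The context vector is the weighted sum of the sequence elements.
    context = weights @ encoder_states                                     # (D,)
    return context, weights

# Toy dimensions: T elements, D features, H decoder units, A attention units.
T, D, H, A = 5, 8, 8, 16
rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(T, D))
W_enc, W_dec, v = rng.normal(size=(D, A)), rng.normal(size=(H, A)), rng.normal(size=(A,))

# Because the decoder state changes at every output step, so do the attention weights.
for decoder_state in rng.normal(size=(3, H)):
    _, weights = attention_step(encoder_states, decoder_state, W_enc, W_dec, v)
    print(np.round(weights, 3))

Running the loop prints a different weight vector for each of the three decoder steps, each summing to 1, which is exactly the step-by-step reweighting described above.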

The question now is: how do we apply sequence-based attention to a static image, especially one represented by a convolutional neural network (CNN)? Well, let's take an example that sits right between text and images to understand this. Assume ...
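A common way to bridge this gap, sketched below with the feature-map shape and a simple dot-product score as illustrative assumptions, is to treat the spatial grid of a convolutional feature map as the sequence: an H x W x C feature map becomes H*W location vectors of dimension C, and attention then weights image regions instead of words:

import numpy as np

# Illustrative sketch only: the feature-map shape and the random decoder state are
# placeholders, and the scoring is a plain dot product for brevity.
rng = np.random.default_rng(1)
H, W, C = 7, 7, 512                          # e.g. the last conv layer of a CNN backbone
feature_map = rng.normal(size=(H, W, C))

# Flatten the spatial grid: H*W "sequence elements", each a C-dimensional region vector.
locations = feature_map.reshape(H * W, C)    # (49, 512)

# At each captioning step, the decoder state scores every region; the context vector
# fed to the next-word predictor is the weighted average of the region vectors.
decoder_state = rng.normal(size=C)           # placeholder decoder hidden state
scores = locations @ decoder_state           # (49,) one score per image region
weights = np.exp(scores - scores.max())
weights /= weights.sum()                     # softmax over the image regions
context = weights @ locations                # (512,) visual context for this step
print(weights.shape, context.shape)

As the decoder's hidden state evolves while the caption is generated, these weights shift from one image region to another, mirroring the step-by-step reweighting of a sequence.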



Publisher Resources

ISBN: 9781788392303