11 Hardware Implementation of RNN Using FPGA
Nikhil Bhosale*, Sayali Battuwar, Gunjan Agrawal and S.D. Nagarale
Department of Electronics and Telecommunication, Pimpri Chinchwad College of Engineering, Pune, India
Abstract
The recurrent neural network (RNN) is today an important machine learning technology, widely used in sequence-related applications, and the long short-term memory (LSTM) architecture enhances the basic RNN at the cost of more complex arithmetic logic. To achieve high accuracy, researchers keep building larger LSTM networks, which consume considerable time and energy. RNNs can learn and store data sequences [4]. Because an RNN is recurrent, it can be difficult to parallelize all of its computations on general-purpose hardware: CPUs provide little parallelism, and the sequential dependencies of the RNN model limit the parallelism a GPU can exploit. We use Python to demonstrate the hardware implementation of an LSTM recurrent network on Xilinx FPGAs [6, 7]. This chapter also surveys FPGA platforms to investigate FPGA applications within the scope of this project. In this project, we designed a recurrent neural network (RNN) and implemented a hardware interface on the PYNQ board equipped with the Xilinx PYNQ-Z2 [5]. In addition to rich programmable logic resources [ ...
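For reference, the "complex arithmetic logic" of an LSTM cell that the abstract mentions amounts to four gated matrix-vector products per time step. The sketch below shows the standard LSTM equations in plain NumPy; the function name, stacked-weight layout, and shapes are illustrative assumptions, not the chapter's actual FPGA implementation.

import numpy as np

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # One standard LSTM time step (sketch only).
    # Assumed stacked parameter shapes: W (4H, D), U (4H, H), b (4H,),
    # holding input/forget/candidate/output blocks in that order.
    H = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b          # all four pre-activations at once
    i = 1.0 / (1.0 + np.exp(-z[0:H]))     # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))   # forget gate
    g = np.tanh(z[2*H:3*H])               # candidate cell state
    o = 1.0 / (1.0 + np.exp(-z[3*H:4*H])) # output gate
    c_t = f * c_prev + i * g              # new cell state (long-term memory)
    h_t = o * np.tanh(c_t)                # new hidden state (short-term memory)
    return h_t, c_t

Each step depends on h_prev and c_prev from the previous step, which is exactly the sequential dependency that limits CPU/GPU parallelism and motivates a custom FPGA datapath.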