Author

F. Keddous

Bio: F. Keddous is an academic researcher. The author has contributed to research in topics: Computer science. The author has an h-index of 1 and has co-authored 1 publication, which has received 1 citation.

Papers
Journal ArticleDOI
TL;DR: The obtained results, and the comparison with other works designed to accelerate the same types of architectures, show the efficiency and the competitiveness of the proposed accelerator design by significantly improved performance and resource utilization.
Abstract: We present a new efficient OpenCL-based accelerator for large-scale Convolutional Neural Networks, called “Fast Inference on FPGAs for Convolution Neural Network” (FFCNN). FFCNN is based on a deeply pipelined OpenCL kernel architecture. As pointed out before, high-level synthesis tools such as the OpenCL framework can easily port codes originally designed for CPUs/GPUs to FPGAs, but it is still difficult to make OpenCL codes run efficiently on FPGAs. This work aims to propose an efficient FPGA implementation of OpenCL high-performance computing applications. To do so, data reuse and task mapping techniques are also presented to improve design efficiency. In addition, the following motivations were taken into account when developing FFCNN:
• FFCNN has been designed to be easily implemented with the Intel OpenCL SDK-based FPGA design flow.
• In FFCNN, different techniques have been integrated to improve memory bandwidth and throughput.
A performance analysis is conducted on two deep CNNs for large-scale image classification. The obtained results, and the comparison with other works designed to accelerate the same types of architectures, show the efficiency and the competitiveness of the proposed accelerator design by significantly improved performance and resource utilization.
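The data-reuse technique the abstract mentions is commonly realized on FPGAs with on-chip line buffers and a sliding window, so that each input pixel is fetched from external memory only once. The sketch below is a minimal, hypothetical C illustration of that pattern for a 3x3 convolution; it is not the authors' FFCNN code, and the sizes (W, H, K) are assumptions chosen for illustration. In an actual OpenCL FPGA kernel, the line buffers and window would map to on-chip shift registers inside a pipelined loop.

```c
#define W 8   /* image width  (illustrative) */
#define H 8   /* image height (illustrative) */
#define K 3   /* kernel size */

/* Sliding-window 3x3 convolution using two line buffers.
 * Each input pixel is read from the input array exactly once;
 * the previous two rows are kept in small on-chip buffers,
 * which is the data-reuse pattern FPGA pipelines rely on. */
void conv3x3(const float in[H][W], const float k[K][K],
             float out[H - 2][W - 2])
{
    float line0[W], line1[W];   /* line buffers: two previous rows */
    float win[K][K];            /* 3x3 sliding-window registers */

    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            float px = in[y][x];          /* single read per pixel */

            /* shift the window left and insert the new column */
            for (int i = 0; i < K; i++)
                for (int j = 0; j < K - 1; j++)
                    win[i][j] = win[i][j + 1];
            win[0][K - 1] = line0[x];
            win[1][K - 1] = line1[x];
            win[2][K - 1] = px;

            /* rotate the line buffers for the next row */
            line0[x] = line1[x];
            line1[x] = px;

            /* emit an output once the window is fully populated */
            if (y >= K - 1 && x >= K - 1) {
                float acc = 0.0f;
                for (int i = 0; i < K; i++)
                    for (int j = 0; j < K; j++)
                        acc += win[i][j] * k[i][j];
                out[y - (K - 1)][x - (K - 1)] = acc;
            }
        }
    }
}
```

Because every pixel crosses the memory interface once instead of up to nine times, this style of buffering is what lets a deeply pipelined kernel sustain one output per clock cycle without saturating external memory bandwidth.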

1 citations


Cited by
Journal ArticleDOI
TL;DR: The development history and application fields of some representative neural networks are introduced and the importance of studying deep learning technology is pointed out, as well as the reasons and advantages of using FPGA to accelerate deep learning.
Abstract: Deep learning based on neural networks has been widely used in image recognition, speech recognition, natural language processing, automatic driving, and other fields and has made breakthrough progress. FPGAs stand out in the field of accelerated deep learning with advantages such as flexible architecture and logic units, a high energy-efficiency ratio, strong compatibility, and low delay. In order to track the latest research results of FPGA-based neural network optimization technology in time and to keep abreast of current research hotspots and application fields, the related technologies and research contents are reviewed. This paper introduces the development history and application fields of some representative neural networks and points out the importance of studying deep learning technology, as well as the reasons for and advantages of using FPGAs to accelerate deep learning. Several common neural network models are introduced. Moreover, this paper reviews the current mainstream FPGA-based neural network acceleration technologies, methods, accelerators, and acceleration framework designs and their latest research status, pointing out the difficulties currently facing FPGA-based neural network applications and the corresponding solutions, and prospecting future research directions. We hope that this work can provide insightful research ideas for researchers engaged in the field of FPGA-based neural network acceleration.

3 citations