Proceedings ArticleDOI

PhotoFourier: A Photonic Joint Transform Correlator-Based Neural Network Accelerator

TLDR
In this article, the authors propose PhotoFourier, a JTC-based convolutional neural network accelerator that achieves more than 28× better energy-delay product than state-of-the-art photonic neural network accelerators.
Abstract
The last few years have seen considerable work on the challenge of low-latency, high-throughput convolutional neural network inference. Integrated photonics has the potential to dramatically accelerate neural networks because of its low-latency nature. Combined with the concept of the Joint Transform Correlator (JTC), the computationally expensive convolution function can be computed essentially instantaneously (in the time of flight of light) at almost no cost. This ‘free’ convolution computation provides the theoretical basis of the proposed PhotoFourier JTC-based CNN accelerator. PhotoFourier addresses a myriad of challenges posed by on-chip photonic computing in the Fourier domain, including 1D lenses and high-cost optoelectronic conversions. The proposed PhotoFourier accelerator achieves more than 28× better energy-delay product compared to state-of-the-art photonic neural network accelerators.
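
To make the ‘free’ convolution claim concrete, the sketch below (plain NumPy, not the paper's hardware model or PhotoFourier's actual pipeline) emulates the joint transform correlator principle: an input tile and a kernel are placed side by side, Fourier-transformed (optically this is done by a lens), detected by a square-law photodetector, and Fourier-transformed again; the off-axis terms of the result are the cross-correlation that a CNN convolution layer computes. All array sizes and values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
tile = rng.standard_normal((32, 32))          # input feature-map tile (illustrative size)
kernel = np.zeros((32, 32))
kernel[:3, :3] = rng.standard_normal((3, 3))  # 3x3 CNN kernel, zero-padded

# Joint input plane: tile and kernel placed side by side with a fixed separation.
joint = np.zeros((32, 96))
joint[:, :32] = tile
joint[:, 64:] = kernel

# First Fourier transform (optically performed by a lens), then square-law
# detection of the joint spectrum, as a photodetector array would measure it.
joint_spectrum_intensity = np.abs(np.fft.fft2(joint)) ** 2

# Second Fourier transform of the detected intensity.
output_plane = np.fft.fft2(joint_spectrum_intensity)

# The output plane holds on-axis autocorrelation terms plus two off-axis
# cross-correlation terms of (tile, kernel), displaced by the tile-kernel
# separation (with FFT wraparound); those off-axis terms are the convolution
# result a CNN layer needs.
print(output_plane.shape)

In the optical system the two Fourier transforms happen at the speed of light; what remains costly is the square-law detection and readout, which is why the abstract singles out optoelectronic conversions as a key overhead.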



Citations
Journal ArticleDOI

Training Neural Networks for Execution on Approximate Hardware

Tianmu Li, +2 more
08 Apr 2023
TL;DR: In this paper, the authors demonstrate how training needs to be specialized for approximate hardware and propose methods that speed up the training process by up to 18×, a significant speedup over conventional deep-learning training.
Proceedings ArticleDOI

Michelson interferometric methods for full optical complex convolution

TL;DR: In this paper, the authors present the first demonstration of simultaneous amplitude and phase modulation of an optical two-dimensional signal in the Fourier plane of a thin lens: two spatial light modulators (SLMs) arranged in a Michelson interferometer modulate the amplitude and the phase while both sit in the focal plane of two Fourier lenses.
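
As a rough illustration of the Fourier-plane operation this work realizes optically, the sketch below (NumPy, with assumed sizes and random data rather than the authors' Michelson setup) applies a complex-valued filter, with independent amplitude and phase, to an image spectrum and transforms back, which corresponds to a complex circular convolution in the image domain.

import numpy as np

rng = np.random.default_rng(1)
image = rng.standard_normal((64, 64))         # illustrative input

# Complex Fourier-plane filter: independent amplitude and phase modulation,
# the role played by the two SLMs in the interferometric setup.
amplitude = rng.uniform(0.0, 1.0, (64, 64))
phase = rng.uniform(-np.pi, np.pi, (64, 64))
complex_filter = amplitude * np.exp(1j * phase)

# 4f-style processing emulated with FFTs: transform, multiply by the complex
# filter in the Fourier plane, transform back. The Fourier-domain product is a
# circular convolution in the image domain.
output = np.fft.ifft2(np.fft.fft2(image) * complex_filter)
print(output.dtype, output.shape)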
Proceedings ArticleDOI

Design and testing of silicon photonic 4F system for convolutional neural networks

TL;DR: Convolution, the main operation a CNN has to perform, has a high computational cost, raising power consumption and latency, especially for large matrices. In this paper, the implementation of the main components of a silicon photonic 4F system and the modeling of non-idealities that might occur are presented.
Journal ArticleDOI

DOTA: A Dynamically-Operated Photonic Tensor Core for Energy-Efficient Transformer Accelerator

TL;DR: In this article, DOTA, a customized high-performance and energy-efficient photonic Transformer accelerator consisting of a crossbar array of interference-based optical vector dot-product engines, is proposed to overcome the fundamental limitations of existing ONNs.
References
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Posted Content

Deep Residual Learning for Image Recognition

TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Journal ArticleDOI

ImageNet classification with deep convolutional neural networks

TL;DR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Proceedings ArticleDOI

You Only Look Once: Unified, Real-Time Object Detection

TL;DR: Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background, and outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.
Proceedings ArticleDOI

Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation

TL;DR: R-CNN, as discussed by the authors, combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training on an auxiliary task followed by domain-specific fine-tuning yields a significant performance boost.