Proceedings ArticleDOI

O2NN: Optical Neural Networks with Differential Detection-Enabled Optical Operands

TL;DR
Wang et al. propose a novel ONN engine, O2NN, based on wavelength-division multiplexing and differential detection to enable high-performance, robust, and versatile photonic neural computing when both operands are light signals.
Abstract
Optical neuromorphic computing has demonstrated promising performance with ultra-high computation speed, high bandwidth, and low energy consumption. Traditional optical neural network (ONN) architectures realize neuromorphic computing via electrical weight encoding. However, previous ONN design methodologies can only handle static linear projection with stationary synaptic weights, and thus fail to support efficient and flexible computing when both operands are dynamically encoded light signals. In this work, we propose a novel ONN engine, O2NN, based on wavelength-division multiplexing and differential detection to enable high-performance, robust, and versatile photonic neural computing with both operands carried by light. Balanced optical weights and augmented quantization are introduced to enhance the representability and efficiency of our architecture. Static and dynamic variations are discussed in detail, and a knowledge-distillation-based solution is given for robustness improvement. Discussions of hardware cost and efficiency are provided for a comprehensive comparison with prior work. Simulation and experimental results show that the proposed ONN architecture provides flexible, efficient, and robust support for high-performance photonic neural computing with fully optical operands under low-bit quantization and practical variations.
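To make the core mechanism concrete, here is a minimal numerical sketch (not the authors' implementation) of how differential detection can recover the product of two light-encoded operands: each vector element rides on its own wavelength, the two operands interfere in an ideal 50:50 combiner, and a balanced photodetector subtracts the two output intensities. The function names and the idealized, lossless interference model are illustrative assumptions.

```python
import numpy as np

def differential_product(x, w):
    """Per-channel product via interference plus balanced detection.

    For real, phase-aligned amplitudes, an ideal 50:50 combiner yields
    intensities |x + w|^2 / 2 and |x - w|^2 / 2 at its two ports, so the
    differential photocurrent is 0.5*(x+w)**2 - 0.5*(x-w)**2 = 2*x*w:
    the product of the two operands appears directly in the current.
    Sign is handled algebraically here; on hardware, balanced optical
    weights encode sign through the differential detector pair.
    """
    i_plus = 0.5 * (x + w) ** 2
    i_minus = 0.5 * (x - w) ** 2
    return i_plus - i_minus

def o2nn_dot(x, w):
    """Dot product of two optically encoded vectors.

    Each element occupies its own WDM wavelength channel; the detector
    integrates over all wavelengths, summing per-channel products into
    one output current (rescaled here so the summed 2*x*w equals x . w).
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    return 0.5 * differential_product(x, w).sum()

x = np.array([0.2, 0.5, 0.9])   # dynamically encoded light operand
w = np.array([0.7, -0.3, 0.4])  # balanced (signed) optical weight
assert np.isclose(o2nn_dot(x, w), x @ w)
```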


Citations
Proceedings ArticleDOI

ELight: Enabling Efficient Photonic In-Memory Neurocomputing with Life Enhancement

TL;DR: In this paper, a synergistic optimization framework, ELight, is proposed to minimize the overall write effort for efficient and reliable optical in-memory neurocomputing; it reduces the total number of writes and the dynamic power while maintaining comparable accuracy.
Journal ArticleDOI

Light in AI: Toward Efficient Neurocomputing With Optical Neural Networks—A Tutorial

TL;DR: This tutorial gives an overview of state-of-the-art cross-layer co-design methodologies for scalable, robust, and self-learnable ONN designs spanning the circuit, architecture, and algorithm levels.
Posted Content

Towards Memory-Efficient Neural Networks via Multi-Level in situ Generation

TL;DR: In this article, a general and unified framework is proposed to trade expensive memory transactions for ultra-fast on-chip computation, translating directly to performance improvement; it jointly explores the intrinsic correlations and bit-level redundancy within DNN kernels via a multi-level in situ generation mechanism with mixed-precision bases.
Journal ArticleDOI

Silicon Photonics for Future Computing Systems

TL;DR: In this paper, the authors provide an overview of silicon photonics technology and its applications in the design and improvement of current and future computing systems, and discuss several research opportunities for advancing the application of silicon-on-insulator (SOI) waveguides.
Journal ArticleDOI

ELight: Toward Efficient and Aging-Resilient Photonic In-Memory Neurocomputing

TL;DR: This work proposes a holistic solution, ELight, to tackle both the aging issue and the post-aging reliability issue: a proactive aging-aware optimization framework minimizes the overall PCM write cost, and a post-aging tolerance scheme overcomes the effects of aged PCM.
References
Proceedings Article

Attention is All you Need

TL;DR: This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
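For reference, the scaled dot-product attention at the heart of this architecture, as a minimal single-head numpy sketch (no masking, no multi-head projections):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the paper's core operation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)         # shape (4, 8)
```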
Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude, and a power efficiency improvement of three orders of magnitude, over state-of-the-art electronics.
Posted Content

DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients

TL;DR: DoReFa-Net, a method to train convolutional neural networks with low-bitwidth weights and activations using low-bitwidth parameter gradients, is proposed and achieves prediction accuracy comparable to that of 32-bit counterparts.
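A minimal numpy sketch of the paper's k-bit weight quantizer (forward pass only; the straight-through estimator used for gradients is omitted):

```python
import numpy as np

def quantize_k(r, k):
    """Uniform quantizer: maps r in [0, 1] onto 2^k evenly spaced levels."""
    n = 2 ** k - 1
    return np.round(r * n) / n

def quantize_weights(w, k):
    """DoReFa-style k-bit weight quantization (forward pass)."""
    t = np.tanh(w)
    r = t / (2 * np.abs(t).max()) + 0.5  # squash weights into [0, 1]
    return 2 * quantize_k(r, k) - 1      # map back to [-1, 1]

w = np.linspace(-2.0, 2.0, 9)
print(quantize_weights(w, k=2))          # only 2**2 = 4 distinct values
```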
Journal ArticleDOI

Neuromorphic photonic networks using silicon photonic weight banks.

TL;DR: First observations of a recurrent silicon photonic neural network, in which connections are configured by microring weight banks, are reported, and a mathematical isomorphism between the silicon photonic circuit and a continuous neural network model is demonstrated through dynamical bifurcation analysis.