Open Access · Journal Article · DOI

InP photonic integrated multi-layer neural networks: Architecture and performance analysis

Bin Shi
- 01 Jan 2022
- Vol. 7, Iss. 1, p. 010801
TLDR
In this paper, the authors investigated the impact of fully monolithically integrated linear and nonlinear functions on the all-optical neuron output with respect to the number of synapses per neuron and the data rate.
Abstract
We demonstrate the use of a wavelength converter, based on cross-gain modulation in a semiconductor optical amplifier (SOA), as a nonlinear function co-integrated within an all-optical neuron realized with SOA and wavelength-division multiplexing technology. We investigate the impact of fully monolithically integrated linear and nonlinear functions on the all-optical neuron output with respect to the number of synapses per neuron and the data rate. Results suggest that the number of inputs can scale up to 64 while guaranteeing a large input power dynamic range of 36 dB with negligible error introduction. We also investigate the performance of its nonlinear transfer function by tuning the total input power and data rate: the monolithically integrated neuron performs about 10% better in accuracy than the corresponding hybrid device at the same data rate. These all-optical neurons are then used to simulate a 64:64:10 two-layer photonic deep neural network for handwritten-digit classification, which shows an 89.5% best-case accuracy at 10 GS/s. Moreover, we analyze the energy consumption per synaptic operation for the full end-to-end system, which includes the transceivers, the optical neural network, and the electrical control part. This investigation shows that when the number of synapses per neuron is >18, the energy per operation is <20 pJ (6 times higher than when considering only the optical engine). The computation speed of this two-layer all-optical neural network system is 47 TMAC/s, 2.5 times faster than state-of-the-art graphics processing units, while its energy efficiency of 12 pJ/MAC is 2 times better. This result underlines the importance of scaling photonic integrated neural networks on chip.
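As a rough illustration of the architecture described above, the sketch below builds a 64:64:10 two-layer network with a saturating stand-in for the SOA cross-gain-modulation nonlinearity and checks the quoted throughput against the abstract's own numbers. The activation shape, the random weights, and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Parameters taken from the abstract: a 64:64:10 two-layer network
# operating at 10 GS/s. Everything else below is an assumption.
N_IN, N_HID, N_OUT = 64, 64, 10
SAMPLE_RATE = 10e9  # samples per second (10 GS/s)

def soa_like_activation(x, p_sat=1.0):
    """Stand-in saturating nonlinearity (assumption): mimics the
    gain-compression behavior of an SOA-based wavelength converter.
    The real transfer function is device-specific."""
    return x / (1.0 + np.abs(x) / p_sat)

def forward(x, w1, w2):
    """Forward pass of the 64:64:10 network; in the photonic device the
    weights would be set by the WDM synaptic elements."""
    h = soa_like_activation(w1 @ x)
    return soa_like_activation(w2 @ h)

rng = np.random.default_rng(0)
w1 = rng.normal(size=(N_HID, N_IN)) / np.sqrt(N_IN)
w2 = rng.normal(size=(N_OUT, N_HID)) / np.sqrt(N_HID)
y = forward(rng.normal(size=N_IN), w1, w2)

# Throughput check using only numbers quoted in the abstract:
macs_per_inference = N_IN * N_HID + N_HID * N_OUT   # 4096 + 640 = 4736
mac_rate = macs_per_inference * SAMPLE_RATE          # ~4.74e13 MAC/s
print(f"{macs_per_inference} MAC/inference -> {mac_rate / 1e12:.1f} TMAC/s")
```

Running the arithmetic, 64×64 + 64×10 = 4736 MACs per inference at 10 GS/s gives about 47.4 TMAC/s, consistent with the 47 TMAC/s system throughput quoted in the abstract.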



Citations
Journal ArticleDOI

All-optical ultrafast ReLU function for energy-efficient nanophotonic deep learning

TL;DR: This work experimentally demonstrates an all-optical Rectified Linear Unit (ReLU), the most widely used nonlinear activation function in deep learning, using a periodically poled thin-film lithium niobate nanophotonic waveguide, achieving ultra-low energies in the regime of femtojoules per activation with near-instantaneous operation.
Journal ArticleDOI

Deep learning in light–matter interactions

TL;DR: The emerging opportunities and challenges of deep learning in photonics are discussed, shining light on how deep learning advances photonics.
Journal ArticleDOI

Artificial optoelectronic spiking neuron based on a resonant tunnelling diode coupled to a vertical cavity surface emitting laser

TL;DR: This article investigates an opto-electro-optical (O/E/O) artificial neuron built with a resonant tunnelling diode (RTD) coupled to a photodetector as a receiver and a vertical-cavity surface-emitting laser as a transmitter.
Journal ArticleDOI

A Codesigned Integrated Photonic Electronic Neuron

TL;DR: This article proposes a precision-scalable integrated photonic-electronic Multiply-Accumulate Neuron (PEMAN), which relies on an analog photonic engine to perform reduced-precision multiplications at high speed and low power, and an electronic front end that performs accumulation and applies the nonlinear activation function by means of nonlinear encoding in the analog-to-digital converter (ADC).
Proceedings ArticleDOI

Leveraging Lithium Niobate on Insulator Technology for Photonic Analog Computing

TL;DR: In this article, the authors exploit cascaded low-loss, low-driving-voltage travelling-wave Lithium Niobate on Insulator (LNOI) modulators to perform multiply-accumulate operations at high speed and with low power consumption.
References
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: This article proposes a graph transformer network (GTN) for document recognition and shows how gradient-based learning can synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Journal ArticleDOI

Loihi: A Neuromorphic Manycore Processor with On-Chip Learning

TL;DR: Loihi is a 60-mm2 chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon, and can solve LASSO optimization problems with an energy-delay product over three orders of magnitude better than conventional solvers running on a CPU at iso-process/voltage/area.
Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and a power-efficiency improvement of three orders of magnitude over state-of-the-art electronics.
Journal ArticleDOI

TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip

TL;DR: This work developed TrueNorth, a 65 mW real-time neurosynaptic processor that implements a non-von Neumann, low-power, highly parallel, scalable, and defect-tolerant architecture, and successfully demonstrated the use of TrueNorth-based systems in multiple applications, including visual object recognition.
Journal ArticleDOI

Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations

TL;DR: Neurogrid is a mixed analog-digital neuromorphic system that simulates large-scale neural models in real time using 16 Neurocores, modeling the axonal arbor, synapse, dendritic tree, and soma.