Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated, enabling a computational speed enhancement of at least two orders of magnitude and a power-efficiency improvement of three orders of magnitude over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: The original architecture is improved, achieving PROPCNNs with translation invariance by replacing strided convolution and fully-connected operations with spectral pooling and global average pooling.

6 citations

Journal ArticleDOI
18 Apr 2022-Optica
TL;DR: This work proposed and fabricated, for the first time, a photonic spiking neuron chip based on an integrated Fabry–Pérot laser with a saturable absorber, and proposed time-multiplexed spike encoding to realize a functional PSNN far beyond the hardware integration scale limit.
Abstract: Photonic neuromorphic computing has emerged as a promising avenue toward building low-latency and energy-efficient non-von Neumann computing systems. Photonic spiking neural networks (PSNNs) exploit brain-like spatiotemporal processing to realize high-performance neuromorphic computing. However, the nonlinear computation of PSNNs remains a significant challenge. Here, we proposed and fabricated a photonic spiking neuron chip based on an integrated Fabry–Pérot laser with a saturable absorber (FP-SA) for the first time. The nonlinear neuron-like dynamics, including temporal integration, threshold and spike generation, refractory period, and cascadability, were experimentally demonstrated, offering an indispensable fundamental building block for constructing PSNN hardware. Furthermore, we proposed time-multiplexed spike encoding to realize a functional PSNN far beyond the hardware integration scale limit. PSNNs with single/cascaded photonic spiking neurons were experimentally demonstrated to realize hardware-algorithm collaborative computing, showing the capability to perform classification tasks with a supervised learning algorithm, which paves the way for multi-layer PSNNs for solving complex tasks.
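As a rough illustration of the neuron-like dynamics listed in the abstract (temporal integration, threshold-driven spike generation, and a refractory period), here is a minimal leaky integrate-and-fire sketch in Python. It is a generic textbook model with illustrative, dimensionless parameters, not a simulation of the FP-SA device.

```python
import numpy as np

def lif_neuron(inputs, tau=20.0, threshold=1.0, refractory=5):
    """Leaky integrate-and-fire: integrate the input, spike at threshold,
    then stay silent for a fixed refractory period."""
    v, refr, spikes = 0.0, 0, []
    for x in inputs:
        if refr > 0:             # refractory period: input is ignored
            refr -= 1
            spikes.append(0)
            continue
        v += x - v / tau         # temporal integration with leak
        if v >= threshold:       # threshold crossing -> spike generation
            spikes.append(1)
            v, refr = 0.0, refractory
        else:
            spikes.append(0)
    return spikes

spike_train = lif_neuron(np.random.default_rng(0).uniform(0, 0.3, 200))
```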

5 citations

DOI
TL;DR: In this article, a multi-transverse-mode optical processor (MTMOP) was proposed to measure the optical phase required for programming the optical processor, without the use of conventional optical phase detection techniques (e.g., coherent detection).
Abstract: We design a Multi-Transverse-Mode Optical Processor (MTMOP) on 220 nm thick silicon photonics exploiting the first two quasi-transverse-electric modes (TE0 and TE1). The objective is to measure the optical phase required for programming the optical processor without using conventional optical phase detection techniques (e.g., coherent detection). In the proposed design, we use a novel but simple building block that converts optical phase into optical power. Mode TE0 carries the main optical signal, while mode TE1 is used for programming purposes. The MTMOP operation relies on the fact that TE0 and TE1 propagate with different group velocities through a mode-sensitive phase shifter: a waveguide of 0.96 μm width underneath a titanium-tungsten heater. When the width of the phase shifter is increased to 4 μm, propagation becomes mode-insensitive. We use an unbalanced Mach-Zehnder interferometer (MZI) with a mode-sensitive phase shifter in one arm and a mode-insensitive phase shifter in the other. We set the bias of the phase shifters so that the TE0 components propagating in the two arms interfere constructively, while this is not the case for TE1. Hence, we detect the phase shift applied to TE0 by measuring the variation in the optical power of TE1. To the best of our knowledge, this design is the first attempt at realizing a programmable optical processor with a fully integrated programming unit exploiting multimode silicon photonics.
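The phase-to-power conversion described above can be caricatured in a few lines: assume, purely for illustration, that the probe mode TE1 accumulates r times the phase applied to the signal mode TE0 in the mode-sensitive arm, and invert the MZI's cosine response to recover the phase from a power reading. The ratio r and the phase range are made-up values, not device parameters from the paper.

```python
import numpy as np

def mzi_bar_power(dphi):
    """Normalized bar-port power of a 2x2 MZI versus arm phase difference."""
    return np.cos(dphi / 2.0) ** 2

r = 1.6                                      # assumed TE1/TE0 phase-sensitivity ratio
phi_te0 = np.linspace(0.1, 1.5, 5)           # phases applied to the signal mode TE0
p_te1 = mzi_bar_power(r * phi_te0)           # TE1 power read at the monitor output
phi_rec = 2 * np.arccos(np.sqrt(p_te1)) / r  # invert within the unambiguous range
assert np.allclose(phi_rec, phi_te0)
```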

5 citations

Journal ArticleDOI
TL;DR: This paper systematically investigates the optical sequential logic and pipelining in electronic-photonic computing, which together offer a solution to potential problems in latency and power budget as the size of electronic-Photonic computing circuits scales up considerably to achieve much more complex functions.
Abstract: The recent rapid progress in integrated photonics has catalyzed the development of integrated optical computing in this post-Moore's law era. Electronic-photonic digital computing, as a new paradigm to achieve high-speed and power-efficient computation, has begun to attract attention. In this paper, we systematically investigate the optical sequential logic and pipelining in electronic-photonic computing, which together offer a solution to potential problems in latency and power budget as the size of electronic-photonic computing circuits scales up considerably to achieve much more complex functions. Pipelining and sequential logic open up the possibility of high-speed very-large-scale electronic-photonic digital computing.

5 citations


Cites background from "Deep learning with coherent nanophotonic circuits"

  • ...Fortunately, the state-of-the-art modulators [34]–[36] and photodetectors [37], [38] are capable of providing ultrasmall t_si and t_pd, for example 10 ps or even less....

    [...]

Proceedings ArticleDOI
23 Jun 2019
TL;DR: In this article, feedback-controlled microring resonators (MRRs) are used as activation functions in neural networks (NNs), overcoming the interconnect problem of CMOS NN implementations.
Abstract: To overcome the interconnect problem of CMOS Neural Network (NN) implementations (increased power consumption while inhibiting speed), small-scale linear optics-based solutions have been proposed to replace the electronic NN layer in multiple works — e.g. [1–3]. Nevertheless, an all-optical NN is difficult to achieve as it would imply substituting the existing electro-optic signal conversion and digital-driven activation function necessary between NN layers. In this work, we demonstrate how feedback controlled microring resonators (MRR) can be used as activation functions in NNs. The design we focus on is shown in Fig. 1-a. Pulses of light at different frequencies carry signals while weights are applied using PIN ring modulators with proper free spectral range. Pulses are used to ensure that the detuning due to heating of the device is mostly avoided. The light is then coupled in the main ring resonator responsible for the non-linear transfer function. The power dependent response is governed by an interplay between free carrier dispersion and free carrier absorption [4]. An electronic feedback loop will ensure carrier lifetime control, crucial for output stability and reproducibility. The output of this resonator is then filtered again to extract the necessary signal, before passing it to the next NN layer.
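A toy model of the power-dependent ring response mentioned above: assume the resonance detuning shifts linearly with the stored power (a crude stand-in for free-carrier dispersion) and find the steady state by damped fixed-point iteration. All parameters are illustrative assumptions, not the authors' device model; the point is only to show how such a ring yields a smooth nonlinear activation.

```python
import numpy as np

def ring_activation(p_in, detuning=1.5, shift=2.0, linewidth=1.0, n_iter=200):
    """Steady-state intracavity power of a ring whose resonance shifts
    with stored power; a damped fixed-point iteration finds the solution."""
    p_cav = 0.0
    for _ in range(n_iter):
        d = detuning - shift * p_cav                 # power-dependent detuning
        p_new = p_in / (1.0 + (d / linewidth) ** 2)  # Lorentzian buildup
        p_cav = 0.5 * p_cav + 0.5 * p_new            # damping for stability
    return p_cav

for p in np.linspace(0.0, 2.0, 5):
    print(f"P_in={p:.2f} -> P_cav={ring_activation(p):.3f}")
```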

5 citations


Additional excerpts

  • ...[1-3]....

    [...]

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on the ImageNet LSVRC-2010 contest, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
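For orientation, a compact PyTorch sketch of the five-convolutional-layer / three-fully-connected-layer architecture the abstract describes. Channel counts follow the paper; the exact strides and paddings, and the omission of the original two-GPU split and local response normalization, are simplifying assumptions.

```python
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    """AlexNet-style network: five conv layers (some max-pooled),
    three fully-connected layers, 1000-way output, dropout in the FC stack."""
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(True),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(True),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(True),
            nn.MaxPool2d(3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5),                      # the "dropout" regularizer
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(True),
            nn.Dropout(0.5),
            nn.Linear(4096, 4096), nn.ReLU(True),
            nn.Linear(4096, num_classes),         # softmax applied in the loss
        )

    def forward(self, x):                         # x: (N, 3, 224, 224)
        return self.classifier(torch.flatten(self.features(x), 1))
```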

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
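The heart of the deep Q-network update fits in a few lines: regress Q(s, a) toward a one-step bootstrapped target. The sketch below is a generic illustration of that target, not the paper's full agent (no replay buffer, target-network schedule, or convolutional encoder).

```python
import numpy as np

GAMMA = 0.99  # discount factor (illustrative)

def td_target(reward, next_q_values, done):
    """One-step target y = r + gamma * max_a' Q(s', a');
    terminal transitions use the reward alone."""
    return reward if done else reward + GAMMA * float(np.max(next_q_values))

# e.g., with Q-values for two actions in the next state:
y = td_target(reward=1.0, next_q_values=np.array([0.2, 0.7]), done=False)
```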

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]
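The transformations quoted in these excerpts follow the singular-value decomposition M = UΣV†: each weight matrix is realized as unitary interferometer meshes (U and V†, programmable with phase shifters) plus per-channel scaling (Σ). A NumPy sketch of the underlying factorization:

```python
import numpy as np

M = np.random.default_rng(0).normal(size=(4, 4))  # an arbitrary weight matrix
U, s, Vh = np.linalg.svd(M)                       # M = U @ diag(s) @ Vh
assert np.allclose(U @ np.diag(s) @ Vh, M)        # M^(1) = U^(1) Sigma^(1) V^(1)†
# U and Vh are unitary (implementable as interferometer meshes);
# diag(s) is a set of per-mode attenuations/amplifications.
```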

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
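A minimal gradient-descent autoencoder sketch in PyTorch, mirroring the structure in the abstract (a small central code layer trained to reconstruct its input). Layer sizes and data are illustrative, and the paper's layer-wise pretraining used to initialize the weights is omitted.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                 # encoder ... small code ... decoder
    nn.Linear(784, 256), nn.Sigmoid(),
    nn.Linear(256, 30),                # small central layer (the "code")
    nn.Linear(30, 256), nn.Sigmoid(),
    nn.Linear(256, 784),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.rand(64, 784)                # stand-in batch of input vectors
for _ in range(100):
    loss = nn.functional.mse_loss(model(x), x)  # reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()
```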

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

    [...]
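The forward-propagation / back-propagation recipe quoted above, sketched for a one-hidden-layer network in plain NumPy; sizes, data, and the learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(32, 8)), rng.normal(size=(32, 2))   # toy training data
W1, W2 = 0.1 * rng.normal(size=(8, 16)), 0.1 * rng.normal(size=(16, 2))
lr = 0.01
for _ in range(100):
    H = np.tanh(X @ W1)               # forward propagation, hidden layer
    P = H @ W2                        # forward propagation, output layer
    dP = (P - Y) / len(X)             # gradient of mean squared error
    dW2 = H.T @ dP                    # back-propagate to output weights
    dH = (dP @ W2.T) * (1 - H ** 2)   # through the tanh nonlinearity
    dW1 = X.T @ dH
    W1 -= lr * dW1; W2 -= lr * dW2    # gradient-descent weight update
```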