Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017 - Nature Photonics, Vol. 11, Iss. 7, pp. 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude, and three orders of magnitude in power efficiency, over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
20 Jul 2018
TL;DR: A protocol for training photonic neural networks based on adjoint methods by physically backpropagating an optical error signal and calculating the gradient of the network with respect to its tunable degrees of freedom.
Abstract: Recently, integrated optics has gained interest as a hardware platform for implementing machine learning algorithms. Of particular interest are artificial neural networks, since matrix-vector multiplications, which are used heavily in artificial neural networks, can be done efficiently in photonic circuits. The training of an artificial neural network is a crucial step in its application. However, currently on the integrated photonics platform there is no efficient protocol for the training of these networks. In this work, we introduce a method that enables highly efficient, in situ training of a photonic neural network. We use adjoint variable methods to derive the photonic analogue of the backpropagation algorithm, which is the standard method for computing gradients of conventional neural networks. We further show how these gradients may be obtained exactly by performing intensity measurements within the device. As an application, we demonstrate the training of a numerically simulated photonic artificial neural network. Beyond the training of photonic machine learning implementations, our method may also be of broad interest to experimental sensitivity analysis of photonic systems and the optimization of reconfigurable optics platforms.
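The in-situ gradient idea above can be mimicked numerically: for a phase-parameterized interferometer, the gradient of an intensity-based loss can be estimated from output measurements alone. A minimal sketch, using a central finite difference as a numerical stand-in for the adjoint-field intensity measurement; the `mzi` parameterization here is illustrative, not the paper's exact device model:

```python
import numpy as np

def mzi(theta):
    """A single 2x2 rotation, standing in for one tunable interferometer."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def loss(theta, x, target):
    """Squared error between output intensities and a target intensity pattern."""
    y = mzi(theta) @ x
    return float(np.sum((y ** 2 - target) ** 2))

def grad(theta, x, target, eps=1e-6):
    """Central finite difference: a stand-in for the gradient that the adjoint
    method would extract from intensity measurements inside the device."""
    return (loss(theta + eps, x, target) - loss(theta - eps, x, target)) / (2 * eps)

theta = 0.3
x = np.array([1.0, 0.0])
target = np.array([0.0, 1.0])        # route all power to the second port
for _ in range(200):
    theta -= 0.1 * grad(theta, x, target)
```

Gradient descent drives the phase toward pi/2, the cross state of this toy interferometer.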

301 citations


Cites methods from "Deep learning with coherent nanopho..."

  • ...Although we focus our discussion on one particular recently proposed hardware implementation [3], our conclusions are derived starting from Maxwell’s equations, and the ideas could therefore extend to other photonic neural network platforms, as well as to other applications....


  • ...[3], we assume that the matrix-vector multiplications are implemented using an Optical Interference Unit (OIU), consisting of a mesh of reconfigurable Mach-Zehnder interferometers....

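A single reconfigurable Mach-Zehnder interferometer acts as a tunable 2x2 unitary on two waveguide modes, and a mesh of such blocks composes larger unitaries (Reck/Clements style). A sketch under common textbook conventions; the exact phase placement varies between implementations:

```python
import numpy as np

def mzi(theta, phi):
    """One reconfigurable Mach-Zehnder interferometer: an external phase
    shifter followed by a tunable beam-splitter section (one common
    parameterization; actual device conventions differ)."""
    external = np.array([[np.exp(1j * phi), 0], [0, 1]])
    internal = np.array([[np.cos(theta), 1j * np.sin(theta)],
                         [1j * np.sin(theta), np.cos(theta)]])
    return internal @ external

def embed(u2, n, i):
    """Embed a 2x2 block acting on modes (i, i+1) into an n-mode identity."""
    m = np.eye(n, dtype=complex)
    m[i:i + 2, i:i + 2] = u2
    return m

# Chain a few randomly configured MZIs on 4 modes; since each block is
# unitary, the composed transfer matrix stays unitary.
rng = np.random.default_rng(0)
n = 4
u = np.eye(n, dtype=complex)
for layer in range(6):
    i = layer % (n - 1)
    u = embed(mzi(rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi)), n, i) @ u
```

Unitarity is what lets the mesh implement lossless matrix-vector multiplication on optical amplitudes.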

Journal ArticleDOI
28 Jul 2020
TL;DR: Striking results that leverage physics to enhance the computing capabilities of artificial neural networks, using resistive switching materials, photonics, spintronics and other technologies are reviewed.
Abstract: Neuromorphic computing takes inspiration from the brain to create energy-efficient hardware for information processing, capable of highly sophisticated tasks. Systems built with standard electronics achieve gains in speed and energy by mimicking the distributed topology of the brain. Scaling-up such systems and improving their energy usage, speed and performance by several orders of magnitude requires a revolution in hardware. We discuss how including more physics in the algorithms and nanoscale materials used for data processing could have a major impact in the field of neuromorphic computing. We review striking results that leverage physics to enhance the computing capabilities of artificial neural networks, using resistive switching materials, photonics, spintronics and other technologies. We discuss the paths that could lead these approaches to maturity, towards low-power, miniaturized chips that could infer and learn in real time.

292 citations

Journal ArticleDOI
TL;DR: It is proved that the Pockels effect remains strong even in nanoscale devices, and shown as a practical example data modulation up to 50 Gbit s−1.
Abstract: The electro-optical Pockels effect is an essential nonlinear effect used in many applications. The ultrafast modulation of the refractive index is, for example, crucial to optical modulators in photonic circuits. Silicon has emerged as a platform for integrating such compact circuits, but a strong Pockels effect is not available on silicon platforms. Here, we demonstrate a large electro-optical response in silicon photonic devices using barium titanate. We verify the Pockels effect to be the physical origin of the response, with r₄₂ = 923 pm V⁻¹, by confirming key signatures of the Pockels effect in ferroelectrics: the electro-optic response exhibits a crystalline anisotropy, remains strong at high frequencies, and shows hysteresis on changing the electric field. We prove that the Pockels effect remains strong even in nanoscale devices, and show as a practical example data modulation up to 50 Gbit s⁻¹. We foresee that our work will enable novel device concepts with an application area largely extending beyond communication technologies. Electro-optic modulators based on epitaxial barium titanate (BTO) integrated on silicon exhibit speeds up to 50 Gbit s⁻¹ while the Pockels coefficient of the BTO film is found to be approaching the bulk value.

283 citations

Journal ArticleDOI
01 Nov 2020
Affiliations: Department of Electrical and Computer Engineering, National University of Singapore; Center for Intelligent Sensors and MEMS, National University of Singapore; Hybrid-Integrated Flexible (Stretchable) Electronic Systems Program, National University of Singapore; NUS Suzhou Research Institute (NUSRI), Suzhou, China; NUS Graduate School for Integrative Science and Engineering, National University of Singapore

267 citations

Journal ArticleDOI
TL;DR: The unique material properties, structural transformation, and thermo-optic effects of well-established classes of chalcogenide PCMs are outlined and the emerging deep learning-based approaches for the optimization of reconfigurable MSs and the analysis of light-matter interactions are discussed.
Abstract: Nanophotonics has garnered intensive attention due to its unique capabilities in molding the flow of light in the subwavelength regime. Metasurfaces (MSs) and photonic integrated circuits (PICs) enable the realization of mass-producible, cost-effective, and highly efficient flat optical components for imaging, sensing, and communications. In order to enable nanophotonics with multi-purpose functionalities, chalcogenide phase-change materials (PCMs) have been introduced as a promising platform for tunable and reconfigurable nanophotonic frameworks. Integration of non-volatile chalcogenide PCMs with unique properties such as drastic optical contrasts, fast switching speeds, and long-term stability grants substantial reconfiguration to the more conventional static nanophotonic platforms. In this review, we discuss state-of-the-art developments as well as emerging trends in tunable MSs and PICs using chalcogenide PCMs. We outline the unique material properties, structural transformation, electro-optic, and thermo-optic effects of well-established classes of chalcogenide PCMs. The emerging deep learning-based approaches for the optimization of reconfigurable MSs and the analysis of light-matter interactions are also discussed. The review is concluded by discussing existing challenges in the realization of adjustable nanophotonics and a perspective on the possible developments in this promising area.

265 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A large deep convolutional neural network achieved state-of-the-art performance on ImageNet classification; it consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
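The top-1 and top-5 error metrics quoted above are straightforward to compute from per-class scores. A small illustration with made-up numbers (not ImageNet data), using top-3 in place of top-5 for a 5-class toy:

```python
import numpy as np

def topk_error(scores, labels, k):
    """Fraction of samples whose true label is NOT among the k highest scores."""
    topk = np.argsort(scores, axis=1)[:, -k:]      # indices of the k best classes
    hits = np.any(topk == labels[:, None], axis=1)
    return 1.0 - hits.mean()

# Toy scores for 4 samples over 5 classes (illustrative values only).
scores = np.array([
    [0.10, 0.20, 0.60, 0.05, 0.05],   # true class 2 -> top-1 hit
    [0.30, 0.25, 0.20, 0.15, 0.10],   # true class 1 -> top-1 miss, top-3 hit
    [0.05, 0.10, 0.15, 0.30, 0.40],   # true class 0 -> miss even in top-3
    [0.50, 0.10, 0.10, 0.20, 0.10],   # true class 0 -> top-1 hit
])
labels = np.array([2, 1, 0, 0])

top1 = topk_error(scores, labels, 1)   # 2 of 4 missed -> 0.5
top3 = topk_error(scores, labels, 3)   # 1 of 4 missed -> 0.25
```

Top-5 error is more forgiving than top-1, which is why the paper reports both.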

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and it will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
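The backpropagation procedure the abstract describes, with each layer computing a representation from the previous layer and gradients flowing back through the chain rule, fits in a few lines. A minimal sketch on XOR; the architecture and hyperparameters are arbitrary choices for illustration, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])         # XOR: not linearly separable

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: each layer computes a representation of the previous one.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: the error signal is propagated layer by layer.
    dp = p - y                           # cross-entropy gradient w.r.t. pre-sigmoid
    dh = (dp @ W2.T) * (1.0 - h ** 2)    # chain rule through tanh
    W2 -= lr * h.T @ dp; b2 -= lr * dp.sum(axis=0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)
```

After training, the network separates XOR, which no single linear layer can do.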

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
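The temporal-difference update at the heart of Q-learning (and, with a network in place of the table, of the deep Q-network) can be shown on a toy chain environment. This tabular sketch is purely illustrative and unrelated to the paper's Atari setup:

```python
import numpy as np

n_states, n_actions = 5, 2            # 5-state chain; actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(0)
Q = np.ones((n_states, n_actions))    # optimistic init encourages exploration

for episode in range(500):
    s = 0
    for step in range(100):
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0      # reward only at the goal state
        # Temporal-difference target; no bootstrap past the terminal state.
        target = r if s2 == n_states - 1 else r + gamma * np.max(Q[s2])
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if s == n_states - 1:
            break
```

The learned greedy policy walks right from every state, and Q[3, right] converges to the immediate reward of 1.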

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....


  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....


  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....


  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

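The four transformations listed above follow from the singular value decomposition M = UΣV†, with each factor mapped onto an optical interference unit. A quick numerical check of that factorization, with random matrices standing in for trained weights:

```python
import numpy as np

rng = np.random.default_rng(42)
M1 = rng.normal(size=(4, 4))          # stand-in for the first weight matrix
M2 = rng.normal(size=(4, 4))          # stand-in for the second weight matrix

# SVD factors: unitaries U, V and the diagonal of singular values Σ.
U1, s1, V1h = np.linalg.svd(M1)       # transformations (1) U^(1)Σ^(1) and (2) V^(1)
U2, s2, V2h = np.linalg.svd(M2)       # transformations (3) U^(2)Σ^(2) and (4) V^(2)

# Applying V†, then Σ, then U reproduces the layer's linear step M @ x.
x = rng.normal(size=4)
y1 = U1 @ (np.diag(s1) @ (V1h @ x))
```

Only the diagonal Σ stage requires amplification or attenuation; U and V† are unitary and can be realized losslessly by interferometer meshes.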

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: An effective way of initializing the weights is described that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
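The principal-components baseline mentioned above reduces dimensionality by projecting onto the top singular vectors of the centered data. A short sketch on synthetic low-rank data; the sizes and noise level are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic data that is essentially 2-dimensional, embedded in 10 dimensions.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

# PCA via the SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vh = np.linalg.svd(Xc, full_matrices=False)
k = 2
codes = Xc @ Vh[:k].T                     # low-dimensional codes
recon = codes @ Vh[:k] + X.mean(axis=0)   # linear reconstruction
err = np.mean((X - recon) ** 2)
```

A deep autoencoder replaces this single linear projection with a nonlinear encoder/decoder pair, which is what lets it beat PCA on data that is not linearly structured.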

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium; it reviews deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....
