Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: A graph-topological approach is introduced that defines the general class of feedforward networks and identifies columns of non-interacting nodes that can be adjusted in parallel by nullifying the power in one output of each node via optoelectronic feedback onto adjustable phase shifters or couplers.
Abstract: Reconfigurable photonic mesh networks of tunable beamsplitter nodes can linearly transform $N$ -dimensional vectors representing input modal amplitudes of light for applications such as energy-efficient machine learning hardware, quantum information processing, and mode demultiplexing. Such photonic meshes are typically programmed and/or calibrated by tuning or characterizing each beam splitter one-by-one, which can be time-consuming and can limit scaling to larger meshes. Here we introduce a graph-topological approach that defines the general class of feedforward networks and identifies columns of non-interacting nodes that can be adjusted simultaneously. By virtue of this approach, we can calculate the necessary input vectors to program entire columns of nodes in parallel by simultaneously nullifying the power in one output of each node via optoelectronic feedback onto adjustable phase shifters or couplers. This parallel nullification approach is robust to fabrication errors, requiring no prior knowledge or calibration of node parameters and reducing programming time by a factor of order $N$ to being proportional to the optical depth (number of node columns). As a demonstration, we simulate our programming protocol on a feedforward optical neural network model trained to classify handwritten digit images from the MNIST dataset with up to 98% validation accuracy.
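The nullification step at the heart of this protocol can be sketched in a few lines of numpy. This is a minimal illustration, assuming a common 2x2 tunable-beamsplitter convention (not necessarily the authors' exact parameterization), with the optoelectronic feedback loop emulated as a parameter sweep that minimizes the power in one output port:

```python
import numpy as np

def mzi(theta, phi):
    # one tunable beamsplitter node (a common 2x2 convention; the paper's
    # exact parameterization may differ)
    return np.array([[np.cos(theta), -np.exp(-1j * phi) * np.sin(theta)],
                     [np.exp(1j * phi) * np.sin(theta), np.cos(theta)]])

def nullify(v, steps=200):
    # emulate the optoelectronic feedback loop: sweep (theta, phi) and keep
    # the setting that minimizes the power in the second output port
    best, best_p = (0.0, 0.0), np.inf
    for theta in np.linspace(0.0, np.pi, steps):
        for phi in np.linspace(0.0, 2.0 * np.pi, steps):
            p = abs((mzi(theta, phi) @ v)[1]) ** 2
            if p < best_p:
                best_p, best = p, (theta, phi)
    return best, best_p

theta_phi, residual = nullify(np.array([1.0, 1j]) / np.sqrt(2))  # residual power ~ 0
```

In the real protocol the feedback acts on hardware phase shifters rather than a software sweep, and entire columns of such nodes are nulled simultaneously.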

63 citations


Cites background or methods from "Deep learning with coherent nanopho..."

  • ...Our parallel calibration protocol is similar in principle to current calibration protocols [3], [6], [7], [10], but with notable differences (e....

  • ...[6], with nonlinearity layers implemented on the computer) or a fully integrated all-optical neural network (as in [26], with nonlinearities implemented on the device)....

  • ...Parallel nullification is therefore a promising option for realizing machine learning models on reconfigurable devices [6], [31]....

  • ...tions, including photonic neural networks [6], universal linear quantum computing [3], and photon random walks [7], may need to have the mesh implement some specific matrix that is calculated externally....

  • ..., thermal crosstalk [6]) and beamsplitter fabrication errors....

Journal ArticleDOI
TL;DR: A coherent linear neuron architecture that relies on a dual-IQ modulation cell as its basic neuron element, introducing distinct optical elements for weight amplitude and weight sign representation and exploiting binary optical carrier phase-encoding for positive/negative number representation is demonstrated.
Abstract: Neuromorphic photonics aims to transfer the high-bandwidth and low-energy credentials of optics into neuromorphic computing architectures. In this effort, photonic neurons seek to combine the optical interconnect segments with optics that can realize all critical constituent neuromorphic functions, including the linear neuron stage and the activation function. However, to align this new platform with well-established neural network training models and allow the photonic hardware to benefit from best-in-class training algorithms, the following requirements must be met: i) the linear photonic neuron has to handle both positive and negative weight values, and ii) the activation function has to closely follow the widely used mathematical activation functions that have already demonstrated strong performance in neural networks. Herein, we demonstrate a coherent linear neuron architecture that relies on a dual-IQ modulation cell as its basic neuron element, introducing distinct optical elements for weight amplitude and weight sign representation and exploiting binary optical carrier phase-encoding for positive/negative number representation. We present experimental results of a typical IQ modulator performing as an elementary two-input linear neuron cell and successfully implementing all-optical linear algebraic operations with 104-ps long optical pulses. We also provide the theoretical proof and formulation of how to extend a dual-IQ modulation cell into a complete N-input coherent linear neuron stage that requires only a single-wavelength optical input and avoids the resource-consuming Wavelength Division Multiplexing (WDM) weighting schemes.
An 8-input coherent linear neuron is then combined with an experimentally validated optical sigmoid activation function into a physical layer simulation environment, with respective training and physical layer simulation results for the MNIST dataset revealing an average accuracy of 97.24% and 94.37%, respectively.
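The binary phase-encoding idea for signed numbers can be sketched numerically. This is an idealized illustration (lossless fields, function names are mine, not the paper's): magnitude is carried on the field amplitude, sign on a 0-or-pi carrier phase, and coherent interference of the weighted fields yields the signed dot product:

```python
import numpy as np

def encode(a):
    # binary optical carrier phase encoding (sketch): magnitude -> field
    # amplitude, sign -> 0 or pi carrier phase
    a = np.asarray(a, dtype=float)
    return np.abs(a) * np.exp(1j * np.pi * (a < 0))

def coherent_linear_neuron(x, w):
    # coherent summation of weighted fields; the phases multiply, so
    # e^{i*pi} * e^{i*pi} = 1 and signed products interfere correctly
    return np.sum(encode(w) * encode(x)).real
```

Because the two pi phases cancel on a product of two negatives, the detected field reproduces the signed algebra without separate "positive" and "negative" wavelength channels.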

63 citations


Cites background or methods from "Deep learning with coherent nanopho..."

  • ...attempting to transfer the neuromorphic computing principles over optics [9], [10]....

  • ...This design follows the Reck-proposal [9] and requires N² MZIs for an N-input configuration, scaling quadratically with the fan-in value....

  • ...As such, it requires in total 3N + 2 phase-shifting elements, implying significant benefits compared to the N²-scaling architectures [9] suggested so far for lower losses and lower power consumption requirements....

  • ...This has led to the introduction of neuromorphic photonics [9]–[11] as a new scientific area, indicating...
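The scaling comparison quoted in these excerpts can be sanity-checked with a couple of counting functions (the exact Reck mesh uses N(N−1)/2 MZIs, which is the quadratic scaling referred to; the 3N + 2 figure is the one claimed by this paper):

```python
def reck_mzi_count(n):
    # triangular Reck mesh: N(N-1)/2 Mach-Zehnder interferometers
    return n * (n - 1) // 2

def cln_phase_shifter_count(n):
    # the coherent-linear-neuron figure quoted above: 3N + 2 phase shifters
    return 3 * n + 2
```

For an 8-input neuron the counts are already comparable (28 MZIs vs 26 phase shifters), and the gap widens quadratically with fan-in.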

Journal ArticleDOI
TL;DR: This work analyzes the information-processing capacity of coherent optical networks formed by diffractive surfaces that are trained to perform an all-optical computational task between a given input and output field-of-view and determines mathematical rules describing the performance limits of the networks in relation to the number of diffractive surfaces.
Abstract: Precise engineering of materials and surfaces has been at the heart of some of the recent advances in optics and photonics. These advances around the engineering of materials with new functionalities have also opened up exciting avenues for designing trainable surfaces that can perform computation and machine learning tasks through light-matter interaction and diffraction. Here, we analyze the information processing capacity of coherent optical networks formed by diffractive surfaces that are trained to perform an all-optical computational task between a given input and output field-of-view. We show that the dimensionality of the all-optical solution space covering the complex-valued transformations between the input and output fields-of-view is linearly proportional to the number of diffractive surfaces within the optical network, up to a limit that is dictated by the extent of the input and output fields-of-view. Deeper diffractive networks that are composed of larger numbers of trainable surfaces can cover a higher dimensional subspace of the complex-valued linear transformations between a larger input field-of-view and a larger output field-of-view, and exhibit depth advantages in terms of their statistical inference, learning and generalization capabilities for different image classification tasks, when compared with a single trainable diffractive surface. These analyses and conclusions are broadly applicable to various forms of diffractive surfaces, including e.g., plasmonic and/or dielectric-based metasurfaces and flat optics that can be used to form all-optical processors.

63 citations

Journal ArticleDOI
20 Apr 2019
TL;DR: This work proposes a way to perform linear operations using complex optical media such as multimode fibers or scattering media as a computational platform driven by wavefront shaping to offer the prospect of reconfigurable, robust, and easy-to-fabricate linear optical analog computation units.
Abstract: Performing linear operations using optical devices is a crucial building block in many fields ranging from telecommunications to optical analog computation and machine learning. For many of these applications, key requirements are robustness to fabrication inaccuracies, reconfigurability, and scalability. We propose a way to perform linear operations using complex optical media such as multimode fibers or scattering media as a computational platform driven by wavefront shaping. Given a large random transmission matrix representing light propagation in such a medium, we can extract any desired smaller linear operator by finding suitable input and output projectors. We demonstrate this concept by finding input wavefronts using a spatial light modulator that cause the complex medium to act as a desired complex-valued linear operator on the optical field. We experimentally build several 16×16 operators and discuss the fundamental limits of the scalability of our approach. It offers the prospect of reconfigurable, robust, and easy-to-fabricate linear optical analog computation units.
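The core idea of carving a small operator out of a large random transmission matrix can be sketched with a least-squares construction. The dimensions and the pseudoinverse step below are illustrative assumptions, not the authors' exact experimental procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, k = 256, 64, 16        # medium modes and target operator size (illustrative)

# random complex transmission matrix standing in for the scattering medium
T = (rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))) / np.sqrt(2 * n_in)
A = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))   # desired 16x16 operator

T_out = T[:k, :]                    # output projector: keep k detector modes
X = np.linalg.pinv(T_out) @ A       # input projector: wavefronts displayed on the SLM

v = rng.normal(size=k)              # any input vector
y = T_out @ (X @ v)                 # medium plus projectors acting as A
```

Because the retained k rows of a large random matrix are almost surely full rank, the pseudoinverse yields an exact right inverse and the medium acts as the desired operator A on any input.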

62 citations

Journal ArticleDOI
TL;DR: In this paper, an integrated end-to-end photonic deep neural network (PDNN) was proposed to perform sub-nanosecond image classification through direct processing of the optical waves impinging on the on-chip pixel array as they propagate through layers of neurons.
Abstract: Deep neural networks with applications from computer vision to medical diagnosis1-5 are commonly implemented using clock-based processors6-14, in which computation speed is mainly limited by the clock frequency and the memory access time. In the optical domain, despite advances in photonic computation15-17, the lack of scalable on-chip optical non-linearity and the loss of photonic devices limit the scalability of optical deep networks. Here we report an integrated end-to-end photonic deep neural network (PDNN) that performs sub-nanosecond image classification through direct processing of the optical waves impinging on the on-chip pixel array as they propagate through layers of neurons. In each neuron, linear computation is performed optically and the non-linear activation function is realized opto-electronically, allowing a classification time of under 570 ps, which is comparable with a single clock cycle of state-of-the-art digital platforms. A uniformly distributed supply light provides the same per-neuron optical output range, allowing scalability to large-scale PDNNs. Two-class and four-class classification of handwritten letters with accuracies higher than 93.8% and 89.8%, respectively, is demonstrated. Direct, clock-less processing of optical data eliminates analogue-to-digital conversion and the requirement for a large memory module, allowing faster and more energy efficient neural networks for the next generations of deep learning systems.

62 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network achieved state-of-the-art performance on ImageNet, as discussed by the authors; it consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
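The "dropout" regularizer mentioned in this abstract can be sketched as follows; this is the inverted-dropout variant commonly used today, a modern convention rather than the paper's original formulation:

```python
import numpy as np

def dropout(x, p, rng, train=True):
    # inverted dropout: zero each activation with probability p during
    # training and rescale by 1/(1-p) so the expected value is unchanged;
    # at test time the layer is the identity
    if not train:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)
```

Randomly silencing units prevents co-adaptation of features, which is why it was effective against overfitting in the large fully-connected layers.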

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....
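The staging in these excerpts is the singular-value decomposition M = U Σ V of each layer's weight matrix: the unitaries map onto programmable interferometer meshes and the diagonal onto attenuation/gain. A numpy sketch with a toy 4x4 matrix (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))          # one layer's weight matrix (toy example)

# M = U @ diag(s) @ Vh: the unitaries are programmed onto interferometer
# meshes (OIUs), the singular values onto attenuators/amplifiers
U, s, Vh = np.linalg.svd(M)

x = np.array([1.0, 0.0, 2.0, -1.0])  # input mode amplitudes
y = U @ (s * (Vh @ x))               # apply the layer stage by stage
```

Applying the three factors in sequence reproduces M exactly, which is why two unitary meshes plus a diagonal stage suffice per matrix.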

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....
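The forward-propagation / backpropagation loop described in this excerpt can be sketched as a minimal two-layer network on a toy regression task (a generic numpy illustration, not the paper's photonic implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))
y = X @ np.array([1.0, -2.0, 0.5])            # toy regression target

W1 = rng.normal(size=(3, 8)) * 0.1            # weighting matrices of each layer
W2 = rng.normal(size=(8,)) * 0.1
for _ in range(500):
    h = np.tanh(X @ W1)                       # forward propagation
    pred = h @ W2
    err = pred - y                            # gradient of 0.5*MSE w.r.t. pred
    gW2 = h.T @ err / len(X)                  # backpropagate through layer 2
    gh = np.outer(err, W2) * (1.0 - h ** 2)   # ...and through the tanh
    gW1 = X.T @ gh / len(X)
    W2 -= 0.2 * gW2                           # gradient-descent weight update
    W1 -= 0.2 * gW1
loss = np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)
```

Each iteration computes the output by forward propagation and then optimizes the weighting matrices with gradients propagated backward, exactly the training scheme the snippet cites as [16].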
