Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and a power-efficiency improvement of three orders of magnitude over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: In this article, the authors proposed and validated an all-optical reservoir computing (RC) scheme based on a silicon microring (MR) and time multiplexing, where the input layer is encoded in the intensity of a pump beam, which is nonlinearly transferred to the free carrier concentration in the MR and imprinted on a secondary probe.
Abstract: Photonic implementations of reservoir computing (RC) promise to reach ultra-high bandwidths of operation with moderate training effort. Several optoelectronic demonstrations have reported state-of-the-art performance on hard tasks such as speech recognition, object classification and time-series prediction. Scaling these systems in space and time faces challenges in control complexity, size and power demand, which can be relieved by integrated optical solutions. Silicon photonics can be the disruptive technology to achieve this goal. However, experimental demonstrations have so far focused on spatially distributed reservoirs, where the massive use of splitters/combiners and the interconnection loss limit the number of nodes. Here, we propose and validate an all-optical RC scheme based on a silicon microring (MR) and time multiplexing. The input layer is encoded in the intensity of a pump beam, which is nonlinearly transferred to the free-carrier concentration in the MR and imprinted on a secondary probe. We harness the free-carrier dynamics to create a chain-like reservoir topology with 50 virtual nodes. We give proof-of-concept demonstrations of RC by solving two nontrivial tasks: the delayed XOR and the classification of Iris flowers. This forms the basic building block from which larger hybrid spatio-temporal reservoirs with thousands of nodes can be realized with a limited set of resources.
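The time-multiplexing idea in this abstract can be sketched in a few lines: one physical nonlinear node is reused for many "virtual" nodes, and only a linear readout is trained. The leaky-tanh dynamics, leak factor and mask below are toy stand-ins for the microring's free-carrier nonlinearity (only N_virtual = 50 is taken from the cited work).

```python
import numpy as np

# Minimal sketch of time-multiplexed reservoir computing with virtual
# nodes. The leaky-tanh update is a toy stand-in for the free-carrier
# nonlinearity; leak and mask values are illustrative assumptions.

rng = np.random.default_rng(0)
N_virtual = 50                            # virtual nodes, as in the cited work
leak = 0.8                                # hypothetical relaxation factor
mask = rng.uniform(-1.0, 1.0, N_virtual)  # per-node input mask

def reservoir_states(u):
    """Map a 1-D input sequence to an array of (len(u), N_virtual) states."""
    x = np.zeros(N_virtual)
    states = []
    for sample in u:
        for i in range(N_virtual):
            # x[i - 1] couples node i to its predecessor (and, at i = 0,
            # to the last node of the previous sample), giving the
            # chain-like topology described in the abstract.
            x[i] = leak * x[i - 1] + (1 - leak) * np.tanh(mask[i] * sample)
        states.append(x.copy())
    return np.array(states)

# Delayed-XOR task: the target is XOR of the current and previous bit.
u = rng.integers(0, 2, 400)
target = u ^ np.roll(u, 1)
X = reservoir_states(u.astype(float))

# Only the linear readout is trained (here by ridge regression).
W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N_virtual), X.T @ target)
pred = (X @ W > 0.5).astype(int)
accuracy = (pred[1:] == target[1:]).mean()
```

The key design point mirrored here is that training touches only the readout weights W; the reservoir itself stays fixed.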

4 citations

Posted Content
TL;DR: An optical machine vision system that uses trainable matter in the form of diffractive layers to transform and encode the spatial information of objects into the power spectrum of the diffracted light, which is used to perform optical classification of objects with a single-pixel spectroscopic detector.
Abstract: Machine vision systems mostly rely on lens-based optical imaging architectures that relay the spatial information of objects onto high pixel-count opto-electronic sensor arrays, followed by digital processing of this information. Here, we demonstrate an optical machine vision system that uses trainable matter in the form of diffractive layers to transform and encode the spatial information of objects into the power spectrum of the diffracted light, which is used to perform optical classification of objects with a single-pixel spectroscopic detector. Using a time-domain spectroscopy setup with a plasmonic nanoantenna-based detector, we experimentally validated this framework in the terahertz spectrum to optically classify the images of handwritten digits by detecting the spectral power of the diffracted light at ten distinct wavelengths, each representing one class/digit. We also report the coupling of this spectral encoding achieved through a diffractive optical network with a shallow electronic neural network, separately trained to reconstruct the images of handwritten digits based solely on the spectral information encoded in these ten distinct wavelengths within the diffracted light. These reconstructed images demonstrate task-specific image decompression and can also be cycled back as new inputs to the same diffractive network to improve its optical object classification. This unique framework merges the power of deep learning with the spatial and spectral processing capabilities of trainable matter, and can also be extended to other spectral-domain measurement systems to enable new 3D imaging and sensing modalities integrated with spectrally encoded classification tasks performed through diffractive networks.
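The readout rule in this abstract is simple to state in code: each of ten wavelengths is assigned one digit class, and the predicted class is the wavelength carrying the most diffracted power. The "spectra" below are random stand-ins with an artificial boost at the correct wavelength; no diffraction is simulated.

```python
import numpy as np

# Sketch of the single-detector spectral readout: class = brightest
# wavelength. The power matrix is a random stand-in, not a simulation
# of the diffractive network.

rng = np.random.default_rng(1)
n_samples, n_classes = 100, 10
true_labels = rng.integers(0, n_classes, n_samples)

power = rng.uniform(0.0, 1.0, (n_samples, n_classes))
power[np.arange(n_samples), true_labels] += 2.0  # idealized spectral encoding

pred = power.argmax(axis=1)          # predicted digit = brightest wavelength
accuracy = (pred == true_labels).mean()
```

With the idealized boost the argmax readout is always correct; the experimental system approximates this encoding through the trained diffractive layers.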

4 citations

Journal ArticleDOI
TL;DR: In this article, the authors proposed an architecture using resonant metamaterial waveguides loaded with Ge2Sb2Te5 (GST) nanoantennas, and presented a numerical study of its performance.
Abstract: Heterogeneous integration of phase change materials (PCM) into photonic integrated circuits is of current interest for all-optical signal processing and photonic in-memory computing. The basic building block consists of waveguides or resonators embedded with state-switchable PCM cells evanescently coupled to the optical mode. Despite recent advances, further improvements are desired in performance metrics like switching speeds, switching energies, device footprint, and fan-out. We propose an architecture using resonant metamaterial waveguides loaded with Ge2Sb2Te5 (GST) nanoantennas, and present a numerical study of its performance. Our proposed design is predicted to have a write energy of 16 pJ, an erase energy of 190 pJ (three to four times lower than previous reports), and an order-of-magnitude improvement in the write-process figure of merit. Additional advantages include lowered ON-state insertion loss and GST volume reduction.

4 citations

Journal ArticleDOI
22 Oct 2021
TL;DR: In this paper, the authors show how an integrated coherent network of micro-ring resonators can be used in reconfigurable photonic processors, where control of the phase in the different arms of the coherent network determines the implemented functionality.
Abstract: Silicon photonics have widespread applications in optical communications, photonic sensors, and quantum information processing systems. Different photonic integrated circuits often require similar basic functional elements such as tunable filters, optical switches, wavelength de-multiplexers, optical delay lines, and polarization crosstalk unscrambling. Other optical signal processing functional elements may be needed in specific applications, for example, the differentiation with respect to time of time-varying optical signals and the implementation of very high extinction interferometers in some integrated quantum photonic circuits. Just as reconfigurable electronic processors in microelectronics have advantages in terms of ready availability and low cost from large-volume generic manufacturing and are useful for configuration into different functionalities in the form of field-programmable gate arrays, here, we show how an integrated coherent network of micro-ring resonators can be used in reconfigurable photonic processors. We demonstrate the implementation of optical filters, optical delay lines, optical space switching fabric, high extinction ratio Mach–Zehnder interferometer, and photonic differentiation in a reconfigurable network where the control of the phase in the different arms of the coherent network can determine the implemented functionality.

4 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A large, deep convolutional neural network achieved state-of-the-art performance on ImageNet; as discussed by the authors, it consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
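The "dropout" regularizer named in this abstract can be sketched as follows (inverted-dropout convention, a common implementation choice not specified in the abstract): during training each activation is zeroed with probability p and the survivors are rescaled by 1 / (1 - p) so the expected activation is unchanged; at test time the layer is an identity.

```python
import numpy as np

# Sketch of inverted dropout. Shapes and p = 0.5 are illustrative.

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    if not training:
        return activations                      # no-op at test time
    mask = rng.random(activations.shape) >= p   # keep with probability 1 - p
    return activations * mask / (1.0 - p)       # rescale survivors

h = np.ones((4, 8))           # toy hidden-layer activations
h_train = dropout(h, p=0.5)   # entries are now either 0.0 or 2.0
```

Randomly deleting co-adapted hidden units this way is what made the fully-connected layers of the network above generalize despite their 60 million parameters.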

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
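The backpropagation rule this abstract describes — propagating the output error backwards to obtain each layer's parameter gradient — can be shown on a two-layer toy network. The network size, data, and learning rate below are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Minimal sketch of backpropagation on a two-layer network with a
# tanh hidden layer and a sigmoid output, trained by full-batch
# gradient descent on a toy separable task.

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 3))                 # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0     # toy binary targets

W1 = 0.1 * rng.standard_normal((3, 5))
W2 = 0.1 * rng.standard_normal((5, 1))
lr = 0.5

def forward(X):
    h = np.tanh(X @ W1)                          # hidden representation
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))        # sigmoid output
    return h, out

for _ in range(500):
    h, out = forward(X)
    # Backward pass: propagate the error through each layer in turn.
    d_out = out - y                              # dLoss/dlogit (cross-entropy)
    dW2 = h.T @ d_out / len(X)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)        # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h / len(X)
    W2 -= lr * dW2                               # gradient-descent step
    W1 -= lr * dW1

_, out = forward(X)
loss = -np.mean(y * np.log(out) + (1 - y) * np.log(1 - out))
```

Each layer's gradient depends only on its own inputs and the error signal from the layer after it, which is exactly the recursive structure the abstract refers to.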

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
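At the heart of the deep Q-network described above is the temporal-difference Q-learning update, which moves Q(s, a) toward r + γ·max over a' of Q(s', a'). A tiny deterministic chain MDP with a tabular Q (an illustrative stand-in, not the Atari setup) makes the update concrete; the DQN replaces the table with a deep network.

```python
import numpy as np

# Tabular Q-learning on a 5-state chain: moving right eventually reaches
# a rewarding end state. All parameters here are toy assumptions.

n_states, n_actions = 5, 2        # actions: 0 = left, 1 = right
gamma, alpha = 0.9, 0.5
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

for _ in range(2000):
    s = rng.integers(0, n_states)               # random exploration
    a = rng.integers(0, n_actions)
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0  # reward at the right end
    td_target = r + gamma * Q[s_next].max()     # bootstrap from next state
    Q[s, a] += alpha * (td_target - Q[s, a])    # temporal-difference update

policy = Q.argmax(axis=1)   # greedy policy: always move right
```

The learned greedy policy moves right in every state, i.e. toward the reward, once the value of the end state has propagated back along the chain.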

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....


  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....


  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....


  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

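The factorization these excerpts describe can be checked numerically: each weight matrix M is realized optically via its singular value decomposition M = U Σ V†, with the unitaries implemented by interferometer meshes and Σ by attenuation/amplification. The 4×4 random matrix below is an illustrative assumption, not the paper's data.

```python
import numpy as np

# SVD factorization of a weight matrix, as used to map neural-network
# layers onto the optical interference unit (OIU).

rng = np.random.default_rng(0)
M1 = rng.standard_normal((4, 4))     # stands in for the first weight matrix

U, s, Vh = np.linalg.svd(M1)         # M1 = U @ diag(s) @ Vh
Sigma = np.diag(s)

x = rng.standard_normal(4)           # input vector of optical amplitudes
y_direct = M1 @ x
y_optical = U @ (Sigma @ (Vh @ x))   # apply V†, then Σ, then U in sequence
```

Applying the factors in sequence reproduces the direct matrix-vector product, which is why two OIU transformations suffice per weight matrix.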

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, the authors describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....
