Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
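The linear layers of such an optical neural network are built from meshes of Mach-Zehnder interferometers (MZIs), each applying a programmable 2×2 unitary to a pair of optical modes. A minimal numerical sketch of one MZI (the beamsplitter convention and the function name `mzi` are illustrative choices, not taken from the paper):

```python
import numpy as np

def mzi(theta, phi):
    """Transfer matrix of a single Mach-Zehnder interferometer:
    two 50:50 beamsplitters around an internal phase shifter (theta),
    followed by an external phase shifter (phi). Convention is illustrative."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50:50 beamsplitter
    inner = np.diag([np.exp(1j * theta), 1.0])      # internal phase shift
    outer = np.diag([np.exp(1j * phi), 1.0])        # external phase shift
    return outer @ bs @ inner @ bs

U = mzi(0.7, 1.3)
x = np.array([0.6, 0.8])  # input optical field amplitudes
y = U @ x
# Any MZI setting is unitary, so total optical power is conserved:
print(np.allclose(np.vdot(y, y).real, np.vdot(x, x).real))  # True
```

Because every MZI is unitary, a triangular or rectangular mesh of them can realize an arbitrary N×N unitary on the spatial modes, which is what makes the passive matrix multiplication possible.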
Citations
Journal ArticleDOI
20 Dec 2018
TL;DR: Progress in such “programmable nanophotonic processors” as well as emerging applications of the technology to problems including classical and quantum information processing and machine learning are covered.
Abstract: Advances in photonic integrated circuits have recently enabled electrically reconfigurable optical systems that can implement universal linear optics transformations on spatial mode sets. This review paper covers progress in such “programmable nanophotonic processors” as well as emerging applications of the technology to problems including classical and quantum information processing and machine learning.

263 citations


Cites background or methods from "Deep learning with coherent nanopho..."

  • ...This adaptation of deep neural networks to integrated photonics was tested on a simple vowel recognition problem [2]....


  • ...PNPs implementing matrices or quantum gates [2,15] (which can be specified as unitary matrices) are generally programmed using a category (2) protocol....


  • ...Some machine learning algorithms, including neural networks, appear suited for analog computing architectures, including analog complementary metal-oxide semiconductor (CMOS) circuits [69], memristor arrays [70,71], photonic networks [2], and magnetic devices [72]....


  • ...[2], it is possible to directly map the mathematical description of a multilayer perceptron, the most basic form of deep neural network, onto arrays of PNPs connected by nonlinear optical components....


  • ...PNPs are already finding applications in proof-of-concept demonstrations including classical computing systems [1–3], quantum computing systems [15], self-calibrating mode mixers [26], and matrix processors [2,15,27]....


Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive review of the development of silicon photonics and the foundry services which enable the productization, including various efforts to develop and release PDK devices.
Abstract: Many breakthroughs in the laboratories often do not bridge the gap between research and commercialization. However, silicon photonics bucked the trend, with industry observers estimating the commercial market to close in on a billion dollars in 2020 [45] . Silicon photonics leverages the billions of dollars and decades of research poured into silicon semiconductor device processing to enable high yield, robust processing, and most of all, low cost. Silicon is also a good optical material, with transparency in the commercially important infrared wavelength bands, and is a suitable platform for large-scale photonic integrated circuits. Silicon photonics is therefore slated to address the world's ever-increasing needs for bandwidth. It is part of an emerging ecosystem which includes designers, foundries, and integrators. In this paper, we review most of the foundries that presently enable silicon photonics integrated circuits fabrication. Some of these are pilot lines of major research institutes, and others are fully commercial pure-play foundries. Since silicon photonics has been commercially active for some years, foundries have released process design kits (PDK) that contain a standard device library. These libraries represent optimized and well-tested photonic elements, whose performance reflects the stability and maturity of the integration platforms. We will document the early works in silicon photonics, as well as its commercial status. We will provide a comprehensive review of the development of silicon photonics and the foundry services which enable the productization, including various efforts to develop and release PDK devices. In this context, we will report the long-standing efforts and contributions that previously IME/A*STAR and now AMF has dedicated to accelerating this journey.

251 citations

Journal ArticleDOI
TL;DR: The Towards Oxide-Based Electronics (TO-BE) Action, a four-year project recently run in Europe, involved several hundred scientists from 29 EU countries as participants.

251 citations


Cites background from "Deep learning with coherent nanopho..."

  • ...An impressive step towards this paradigm has been realized recently in silicon integrated photonic meshes comprising hundreds of optical components in millimeter-sized chips, demonstrating key aspects of an optical neural network processing [635]....


Journal ArticleDOI
20 Jun 2018
TL;DR: A network of up to 2025 diffractively coupled photonic nodes is demonstrated, forming a large-scale recurrent neural network; reinforcement learning is realized using a digital micromirror device.
Abstract: Photonic neural network implementation has been gaining considerable attention as a potentially disruptive future technology. Demonstrating learning in large-scale neural networks is essential to establish photonic machine learning substrates as viable information processing systems. Realizing photonic neural networks with numerous nonlinear nodes in a fully parallel and efficient learning hardware has been lacking so far. We demonstrate a network of up to 2025 diffractively coupled photonic nodes, forming a large-scale recurrent neural network. Using a digital micro mirror device, we realize reinforcement learning. Our scheme is fully parallel, and the passive weights maximize energy efficiency and bandwidth. The computational output efficiently converges, and we achieve very good performance.

245 citations

Journal ArticleDOI
TL;DR: This work proposes an optoelectronic reconfigurable computing paradigm by constructing a diffractive processing unit (DPU) that can efficiently support different neural networks and achieve a high model complexity with millions of neurons.
Abstract: There is an ever-growing demand for artificial intelligence. Optical processors, which compute with photons instead of electrons, can fundamentally accelerate the development of artificial intelligence by offering substantially improved computing performance. There has been long-term interest in optically constructing the most widely used artificial-intelligence architecture, that is, artificial neural networks, to achieve brain-inspired information processing at the speed of light. However, owing to restrictions in design flexibility and the accumulation of system errors, existing processor architectures are not reconfigurable and have limited model complexity and experimental performance. Here, we propose the reconfigurable diffractive processing unit, an optoelectronic fused computing architecture based on the diffraction of light, which can support different neural networks and achieve a high model complexity with millions of neurons. Along with the developed adaptive training approach to circumvent system errors, we achieved excellent experimental accuracies for high-speed image and video recognition over benchmark datasets and a computing performance superior to that of cutting-edge electronic computing platforms. Linear diffractive structures are by themselves passive systems but researchers here exploit the non-linearity of a photodetector to realize a reconfigurable diffractive ‘processing’ unit. High-speed image and video recognition is demonstrated.

245 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network achieving state-of-the-art ImageNet classification is presented; it consists of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
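The "dropout" regularization mentioned in this abstract randomly zeroes each activation with some probability during training. The sketch below uses the common "inverted dropout" variant, which rescales the surviving units so the expected activation is unchanged (this is a modern convention, not necessarily the exact 2012 formulation):

```python
import numpy as np

def dropout(a, p, rng, train=True):
    """Inverted dropout: zero each unit with probability p at train
    time and scale survivors by 1/(1-p); identity at test time."""
    if not train or p == 0.0:
        return a
    mask = rng.random(a.shape) >= p   # keep each unit with prob. 1-p
    return a * mask / (1.0 - p)

rng = np.random.default_rng(0)
a = np.ones(100_000)
h = dropout(a, p=0.5, rng=rng)
# Roughly half the units are zeroed, but the mean stays close to 1.0
print(h.mean())
```

At test time (`train=False`) the layer is the identity, so no rescaling of weights is needed — this is precisely why the inverted form is popular.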

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....


  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....


  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....


  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

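The mapping in the excerpts above rests on the singular-value decomposition: any real weight matrix M factors as M = U Σ V†, where U and V† are unitary (implementable as interferometer meshes) and Σ is diagonal (implementable as per-mode attenuation or gain). A quick numerical check of that factorization, assuming nothing beyond standard NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))   # an arbitrary trained weight matrix

# SVD: M = U @ diag(s) @ Vh, with U and Vh unitary and s >= 0
U, s, Vh = np.linalg.svd(M)

# The optical implementation applies Vh first (one interferometer mesh),
# then diag(s) (attenuators/amplifiers), then U (a second mesh):
x = rng.standard_normal(4)
y_optical = U @ (np.diag(s) @ (Vh @ x))

print(np.allclose(y_optical, M @ x))  # True
```

This is why a general (non-unitary) layer costs two programmable meshes plus a diagonal stage, exactly as in transformations (1)-(4) above.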

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

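The training loop described in this excerpt — forward propagation through the layers, then weight updates via backpropagation — can be sketched for a one-hidden-layer network. Layer sizes, the learning rate, and the toy dataset below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 3))               # 64 samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy binary labels

W1 = rng.standard_normal((3, 8)) * 0.5         # input -> hidden weights
W2 = rng.standard_normal((8, 1)) * 0.5         # hidden -> output weights
lr = 0.5

for _ in range(500):
    # Forward propagation
    h = np.tanh(X @ W1)                        # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))        # sigmoid output
    # Backpropagation (sigmoid + cross-entropy gives this output gradient)
    g_out = (p - y) / len(X)
    dW2 = h.T @ g_out
    g_hid = (g_out @ W2.T) * (1 - h ** 2)      # chain rule through tanh
    dW1 = X.T @ g_hid
    # Gradient-descent weight updates
    W2 -= lr * dW2
    W1 -= lr * dW1

p = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1) @ W2)))
acc = ((p > 0.5) == (y > 0.5)).mean()
print(acc)
```

The same two ingredients — a forward pass and gradients propagated backward through each layer — are what the optical architecture must reproduce, with the matrix products executed photonically.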