Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017 - Vol. 11, Iss. 7, pp. 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and a power-efficiency improvement of three orders of magnitude over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
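The architecture summarized above implements each weight matrix through its singular value decomposition, M = U Σ V†, with the unitaries realized by interferometer meshes and the diagonal by optical attenuation or amplification. A minimal numerical sketch of that decomposition (an illustration only, not the authors' code):

```python
import numpy as np

# Hypothetical sketch (not the authors' code): an optical neural-network
# layer applies a weight matrix M decomposed as M = U Σ V†, where U and
# V† are unitary (implementable as Mach-Zehnder interferometer meshes)
# and Σ is a diagonal of singular values (realizable with attenuators).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))      # an arbitrary real weight matrix
U, S, Vh = np.linalg.svd(M)          # M = U @ diag(S) @ Vh

x = rng.standard_normal(4)           # input signal amplitudes

# "Optical" evaluation: apply the V† mesh, then Σ, then the U mesh.
y_optical = U @ (S * (Vh @ x))
y_direct = M @ x
print(np.allclose(y_optical, y_direct))  # True
```

The two evaluations agree exactly, which is the point of the decomposition: an arbitrary (non-unitary) linear layer reduces to two unitary operations plus elementwise scaling.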
Citations
Journal ArticleDOI
TL;DR: In this article, the authors show that the sensitivity of the network laser spectrum to the spatial shape of the pump profile enables control of the lasing spectrum through non-uniform pump patterns.
Abstract: Recently, random lasing in complex networks has shown efficient lasing over more than 50 localised modes, promoted by multiple scattering over the underlying graph. If controlled, these network lasers can lead to fast-switching multifunctional light sources with a synthesised spectrum. Here, we observe both in experiment and theory high sensitivity of the network laser spectrum to the spatial shape of the pump profile, with some modes, for example, increasing in intensity by 280% when 7% of the pump beam is switched off. We solve the nonlinear equations within the steady-state ab initio laser theory (SALT) approximation over a graph and show selective lasing of around 90% of the strongest-intensity modes, effectively programming the spectrum of the lasing networks. In our experiments with polymer networks, this high sensitivity enables control of the lasing spectrum through non-uniform pump patterns. We propose the underlying complexity of the network modes as the key element behind efficient spectral control, opening the way for the development of optical devices with wide impact on on-chip photonics for communication, sensing, and computation.

5 citations

DOI
TL;DR: A multi-level encoding and decoding scheme that could be adapted to a variety of photonic information processing architectures for photonic neural networks, photonic tensor cores, and programmable photonics.
Abstract: The resurgence of artificial intelligence, enabled by deep learning and high-performance computing, has driven a dramatic increase in the accuracy of deep learning models, which has come at the cost of computational complexity. The fundamental operations in deep learning models are matrix multiplications, and large-scale matrix operations and data-centric tasks have hit bottlenecks in the performance and scalability of current digital electronic hardware. Recent research on photonic processors has found solutions that enable applications in machine learning, neuromorphic computing and high-performance computing using basic photonic processing elements on integrated silicon photonic platforms. However, efficient and scalable photonic computing requires an information encoding/decoding scheme. Here, we propose a multi-level encoding and decoding scheme, and experimentally demonstrate it with a wavelength-multiplexed silicon photonic processor. We also discuss the scalability of our proposed scheme by introducing a photonic general matrix multiply (GEMM) compiler, and consider the effects of speed, bit precision, and noise. Our proposed scheme could be adapted to a variety of photonic information processing architectures for photonic neural networks, photonic tensor cores, and programmable photonics.
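To illustrate the bit-precision trade-off the abstract mentions, here is a hedged sketch (the uniform quantizer below is invented for illustration and is not the paper's encoding scheme): matrix entries are snapped onto 2^bits discrete levels before a matrix-vector product, and the error against full precision is measured.

```python
import numpy as np

# Illustrative sketch only (scheme invented here, not the paper's
# encoding): quantize matrix entries onto 2**bits uniform levels,
# mimicking a multi-level amplitude encoding, then compare the
# quantized matrix-vector product against full precision.
def quantize(a, bits):
    lo, hi = a.min(), a.max()
    step = (hi - lo) / (2 ** bits - 1)
    return lo + np.round((a - lo) / step) * step

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 8))
x = rng.standard_normal(8)

for bits in (2, 4, 8):
    err = np.linalg.norm(quantize(W, bits) @ x - W @ x)
    print(bits, err)  # error generally shrinks as bit precision grows
```

The error of the quantized product scales with the level spacing, which halves for each extra bit; noise in a real photonic system sets a floor on how many levels are actually distinguishable.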

5 citations

Posted Content
TL;DR: A new family of neural networks based on the Schrödinger equation (SE-NET) is shown, which enables stable training even for deep SE-NET models because the unitarity of the system is preserved during training.
Abstract: We show a new family of neural networks based on the Schrödinger equation (SE-NET). In this analogy, the trainable weights of the neural networks correspond to the physical quantities of the Schrödinger equation. These physical quantities can be trained using the complex-valued adjoint method. Since the propagation of the SE-NET can be described by the evolution of physical systems, its outputs can be computed by using a physical solver. As a demonstration, we implemented the SE-NET using the finite difference method. The trained network is transferable to actual optical systems. Based on this concept, we show a numerical demonstration of end-to-end machine learning with an optical frontend. Our results extend the application field of machine learning to hybrid physical-digital optimizations.
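The stability argument above rests on the unitarity of Schrödinger evolution. A minimal 1-D illustration (the discretization details here are assumptions, not the paper's implementation): a Crank-Nicolson finite-difference propagator is exactly unitary, so the state norm is conserved no matter how many steps are applied.

```python
import numpy as np

# Assumed 1-D discretization, not the paper's code: the Crank-Nicolson
# step (I + i dt/2 H)^-1 (I - i dt/2 H) is unitary for Hermitian H, so
# propagation preserves the state norm - the property credited with
# stabilizing training of deep SE-NETs.
N, dx, dt = 64, 0.1, 0.01
V = np.zeros(N)                       # potential (the trainable weights)

# Discrete Hamiltonian H = -(1/2) d^2/dx^2 + V on a uniform grid
H = (np.diag(np.full(N, 1.0 / dx**2) + V)
     - np.diag(np.full(N - 1, 0.5 / dx**2), 1)
     - np.diag(np.full(N - 1, 0.5 / dx**2), -1))

I = np.eye(N)
step = np.linalg.solve(I + 0.5j * dt * H, I - 0.5j * dt * H)

psi = np.exp(-((np.arange(N) - N / 2) ** 2) / 20).astype(complex)
psi /= np.linalg.norm(psi)            # normalized initial wave packet
for _ in range(100):
    psi = step @ psi
print(np.linalg.norm(psi))            # stays ~1.0 after 100 steps
```

Training the potential V changes how the wave packet propagates, but never the norm, which is what keeps gradients well-conditioned in deep stacks of such layers.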

5 citations


Cites background or methods from "Deep learning with coherent nanopho..."

  • ...tem performance. On the basic research side, optical neuromorphic accelerators are being intensively investigated as candidates for the DNN processor. They typically use a Mach-Zehnder interferometer [37] or a discrete diffractive optical element [30, 5, 3] as a weight element. Adjoint optimization of these elements has already been shown in [30, 3, 20]. In general, increasing the number of nodes for ...


  • ... structure; e.g. compressed sensing, computational imaging [39, 11, 6], and optical communication [19]. Another application is an optical processor for ultrafast and energy-efficient inference engines [37]. This is because the operation of the transferred network is performed at the speed of light, which does not require any principal energy consumption. Scalability of physical SE-NET: Our SE-NET-ba...


Journal ArticleDOI
TL;DR: In this paper, a machine-learned quantum gate driven by a classical control is used to achieve phase-covariant cloning in a reinforcement learning scenario, with the fidelity of the clones as the reward.
Abstract: We report on an experimental implementation of a machine-learned quantum gate driven by a classical control. The gate learns optimal phase-covariant cloning in a reinforcement learning scenario with the fidelity of the clones as the reward. In our experiment, the gate learns to achieve nearly the optimal cloning fidelity allowed for this particular class of states. This makes it a proof of present-day feasibility and practical applicability of the hybrid machine learning approach combining quantum information processing with classical control. Moreover, our experiment can be directly generalized to larger interferometers, where the computational cost of the classical computer is much lower than the cost of boson sampling.

5 citations

DOI
TL;DR: In this paper, a hybrid photonic-electronic computing architecture was proposed to perform large-scale coherent matrix-matrix multiplication, bypassing the requirements of high-speed electronic readout and frequent reprogramming of photonic weights.
Abstract: Advances in deep learning research over the past decade have been enabled by an increasingly unsustainable demand for compute power. This trend has dramatically outpaced the slowing growth in the performance and efficiency of electronic computing hardware. Here, we propose a hybrid photonic-electronic computing architecture which leverages a photonic crossbar array and homodyne detection to perform large-scale coherent matrix-matrix multiplication. This approach bypasses the requirements of high-speed electronic readout and frequent reprogramming of photonic weights, which significantly reduces energy consumption and latency in the limit of large matrices: two major factors limiting efficiency for many analog computing approaches.
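Conceptually, a crossbar programmed with one matrix computes a matrix-vector product per shot, and a full matrix-matrix product falls out by streaming the columns of the second operand through it. A plain-numpy sketch of that idea (an illustration of the math, not the proposed hardware):

```python
import numpy as np

# Conceptual sketch, not the proposed hardware: a crossbar holding A
# computes one matrix-vector product per shot; the full product
# C = A @ B is obtained by streaming the columns of B through it and
# stacking the resulting output vectors.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))   # weights held in the crossbar
B = rng.standard_normal((3, 5))   # activations streamed column by column

C = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])
print(np.allclose(C, A @ B))      # True
```

Because A stays fixed across all columns of B, the weights need to be programmed only once per matrix, which is the reprogramming cost the abstract says the architecture avoids.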

5 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network, consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on ImageNet, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
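A toy numerical illustration of the backpropagation procedure this abstract describes (the network, data, and hyperparameters below are invented for illustration): the error at the output is propagated backward layer by layer, yielding a gradient for each weight matrix.

```python
import numpy as np

# Toy illustration of backpropagation (invented setup, not from the
# paper): a two-layer tanh network fits y = x on a few points. The
# output error flows backward through W2 to produce gradients for both
# weight matrices.
rng = np.random.default_rng(3)
W1 = rng.standard_normal((4, 1)) * 0.5   # hidden layer weights
W2 = rng.standard_normal((1, 4)) * 0.5   # output layer weights
x = np.linspace(-1, 1, 8).reshape(1, -1)
y = x                                    # target: the identity map

for _ in range(500):
    h = np.tanh(W1 @ x)                  # forward pass, hidden layer
    out = W2 @ h                         # forward pass, output layer
    err = out - y
    gW2 = err @ h.T                      # gradient at the output layer
    gh = W2.T @ err                      # error propagated backward
    gW1 = (gh * (1 - h**2)) @ x.T        # gradient at the hidden layer
    W1 -= 0.01 * gW1                     # gradient-descent updates
    W2 -= 0.01 * gW2

mse = np.mean((W2 @ np.tanh(W1 @ x) - y) ** 2)
print(mse)  # much smaller than the initial loss
```

The same pattern (forward pass, backward error propagation, per-layer weight update) scales to the deep convolutional and recurrent networks the abstract surveys.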

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....


  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....


  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....


  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....


Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....
