Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and a power-efficiency enhancement of three orders of magnitude over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: In this paper, a novel quantum-optical memristor based on integrated photonics and acting on single photons is introduced and experimentally demonstrated; its practical potential is underlined by numerically simulated instances of quantum reservoir computing, which predict an advantage of quantum memristors over classical architectures.
Abstract: Quantum computer technology harnesses the features of quantum physics for revolutionizing information processing and computing. As such, quantum computers use physical quantum gates that process information unitarily, even though the final computing steps might be measurement-based or non-unitary. The applications of quantum computers cover diverse areas, reaching from well-known quantum algorithms to quantum machine learning and quantum neural networks. The last of these is of particular interest by belonging to the promising field of artificial intelligence. However, quantum neural networks are technologically challenging as the underlying computation requires non-unitary operations for mimicking the behavior of neurons. A landmark development for classical neural networks was the realization of memory-resistors, or "memristors". These are passive circuit elements that keep a memory of their past states in the form of a resistive hysteresis and thus provide access to nonlinear gate operations. The quest for realising a quantum memristor led to a few proposals, all of which face limited technological practicality. Here we introduce and experimentally demonstrate a novel quantum-optical memristor that is based on integrated photonics and acts on single photons. We characterize its memristive behavior and underline the practical potential of our device by numerically simulating instances of quantum reservoir computing, where we predict an advantage in the use of our quantum memristor over classical architectures. Given recent progress in the realization of photonic circuits for neural networks applications, our device could become a building block of immediate and near-term quantum neuromorphic architectures.

32 citations

Journal ArticleDOI
TL;DR: In this paper, an optronic convolutional neural network (OPCNN) is proposed in which all computation operations are executed in optics, while data transmission and control are executed in electronics.
Abstract: Although deeper convolutional neural networks (CNNs) generally obtain better performance on classification tasks, they incur higher computation costs. To address this problem, this study proposes the optronic convolutional neural network (OPCNN), in which all computation operations are executed in optics while data transmission and control are executed in electronics. In the OPCNN, we implement convolutional layers for multiple input images with a lenslet 4f system, downsampling layers with optically strided convolution, nonlinear activation by adjusting the camera's response curve, and fully connected layers with optical dot products. The OPCNN demonstrates good performance on classification tasks in simulations and experiments, and it outperforms other current optical convolutional neural networks owing to its more complex architecture. The scalability of the OPCNN contributes to building deeper networks for complicated datasets.
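The 4f system mentioned above performs convolution optically: the first lens Fourier-transforms the input field, a mask in the Fourier plane multiplies it by the kernel's spectrum, and the second lens transforms back. A minimal numerical sketch of that principle (illustrative sizes and names, not the paper's code):

```python
import numpy as np

def fourf_convolve(image, kernel):
    """Circular convolution via the Fourier domain, as a 4f system realizes it."""
    mask = np.fft.fft2(kernel, s=image.shape)   # kernel spectrum = Fourier-plane mask
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

image = np.random.rand(8, 8)
delta = np.zeros((8, 8)); delta[0, 0] = 1.0     # identity (delta) kernel

# A delta kernel leaves the input unchanged, confirming the transform pair.
assert np.allclose(fourf_convolve(image, delta), image)
```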

32 citations

Journal ArticleDOI
TL;DR: Research is summarized and future opportunities for AI in the domains of photonics, nanophotonics, plasmonics and photonic materials discovery, including metamaterials, are discussed.
Abstract: Artificial intelligence (AI) is the most important new methodology in scientific research since the adoption of quantum mechanics and it is providing exciting results in numerous fields of science and technology. In this review we summarize research and discuss future opportunities for AI in the domains of photonics, nanophotonics, plasmonics and photonic materials discovery, including metamaterials.

32 citations


Cites methods from "Deep learning with coherent nanopho..."

  • ...Shen et al. proposed a theoretical fully optical neural network architecture where each layer of the network is composed of an optical interference unit (OIU) to perform the linear matrix multiplication and an optical nonlinear unit (ONU) that acts as the nonlinear activation (figure 9(a)) [97]....


  • ...They experimentally demonstrated that this system is capable of vowel recognition with an accuracy comparable to that of a conventional digital computer [95]....


  • ...So far, two different approaches have been used for the physical realization of photonic networks: the first was suggested by Shen et al. [95] and relied on nanophotonic circuits; the other, proposed by Lin et al. [96], is based on diffractive optical elements....


Journal ArticleDOI
TL;DR: This work proposes the design of an optical ANN-based imaging system that has the ability to self-study image signals from an incoherent light source in different colors and shows that the signals transmitted through the multimode fiber can be used for image identification purposes and can be reconstructed using ANNs with a low number of nodes.
Abstract: The rapid growth of applications that rely on artificial neural network (ANN) concepts gives rise to a staggering increase in the demand for hardware implementations of neural networks. New types of hardware that can support the requirements of high-speed associative computing while maintaining low power consumption are sought, and optical artificial neural networks fit the task well. Inherently, optical artificial neural networks can be faster, support larger bandwidth, and produce less heat than their electronic counterparts. Here we propose the design of an optical ANN-based imaging system that has the ability to self-study image signals from an incoherent light source in different colors. Our design consists of a combination of a multimode fiber and a multi-core optical fiber realizing a neural network. We show that the signals, transmitted through the multimode fiber, can be used for image identification purposes and can also be reconstructed using ANNs with a low number of nodes. An all-optical solution can then be achieved by realizing these networks with the multi-core optical neural network fiber.

32 citations

Journal ArticleDOI
01 Jun 2021
TL;DR: This paper is an extensive study of the feasibility of training deep neural networks that can be deployed on photonic hardware employing sinusoidal activation elements, along with the development of methods for successfully training these networks while taking into account the physical limitations of the hardware.
Abstract: Deep learning (DL) has achieved state-of-the-art performance in many challenging problems. However, DL requires powerful hardware for both training and deployment, increasing the cost and energy requirements and rendering large-scale applications especially difficult. Recognizing these difficulties, several neuromorphic hardware solutions have been proposed, including photonic hardware that can process information close to the speed of light and can benefit from the enormous bandwidth available on photonic systems. However, the effect of using these photonic neuromorphic architectures, which impose additional constraints that are not usually considered when training DL models, is not yet fully understood. The main contribution of this paper is an extensive study of the feasibility of training deep neural networks that can be deployed on photonic hardware employing sinusoidal activation elements, along with the development of methods that allow these networks to be trained successfully while taking into account the physical limitations of the hardware. Different DL architectures and four datasets of varying complexity were used to extensively evaluate the proposed method.
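As a toy illustration of the constraint this paper studies, the usual ReLU in a dense layer can be replaced by the bounded, periodic response a sinusoidal photonic element imposes. The network below is a generic sketch with arbitrary sizes, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights

def forward(x):
    # sin() stands in for the sinusoidal transfer function of a photonic
    # activation element; its bounded, periodic output is what makes
    # training these networks nontrivial compared to ReLU networks.
    return np.sin(x @ W1) @ W2

y = forward(rng.normal(size=(4, 2)))
assert y.shape == (4, 1)
```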

32 citations


Cites background from "Deep learning with coherent nanopho..."

  • ...Perhaps among the most promising solutions for providing hardware implementations of deep neural networks is using photonics [11]....


  • ...These provide significant advantages over the currently used solutions, often outperforming them by several orders of magnitude [11]....


References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....


  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....


  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....


  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

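The factorization underlying these transformations is the singular value decomposition M = UΣV†: the unitary factors map onto lossless interferometer meshes (OIUs), while Σ needs only per-channel attenuation or gain. A quick NumPy check of the factorization on an arbitrary 4×4 matrix (illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

U, s, Vh = np.linalg.svd(M)   # M = U @ diag(s) @ Vh

# U and Vh are unitary, hence realizable as lossless interferometer meshes;
# diag(s) requires only single-mode attenuators/amplifiers.
assert np.allclose(U @ U.conj().T, np.eye(4))
assert np.allclose(Vh @ Vh.conj().T, np.eye(4))
assert np.allclose(U @ np.diag(s) @ Vh, M)
```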

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, the authors describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

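The forward-propagation / backpropagation loop described in that snippet can be sketched in a few lines; the tiny network, random data and learning rate below are arbitrary placeholders, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 3)); Y = rng.normal(size=(32, 1))   # training data
W1 = rng.normal(size=(3, 5)) * 0.1                           # weight matrices
W2 = rng.normal(size=(5, 1)) * 0.1

for _ in range(200):
    H = np.tanh(X @ W1)                 # forward propagation
    err = H @ W2 - Y
    loss = np.mean(err ** 2)
    gW2 = H.T @ err * (2 / len(X))      # backpropagate the error
    gH = err @ W2.T * (1 - H ** 2)      # through the tanh nonlinearity
    gW1 = X.T @ gH * (2 / len(X))
    W1 -= 0.1 * gW1; W2 -= 0.1 * gW2    # gradient-descent weight update

# After training, the network beats the best constant predictor (the mean).
assert loss < np.mean((Y - Y.mean()) ** 2)
```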