Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
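The coherent circuits in this architecture carry out matrix operations with meshes of Mach-Zehnder interferometers (MZIs). A single MZI's unitary transfer matrix can be sketched numerically; the parameterization below is one common convention, assumed here for illustration:

```python
import numpy as np

def mzi(theta, phi):
    """Transfer matrix of a single Mach-Zehnder interferometer: an outer
    phase shifter (phi) followed by a tunable coupler (theta). This is an
    illustrative parameterization, not the paper's exact convention."""
    return np.array([
        [np.exp(1j * phi) * np.sin(theta), np.cos(theta)],
        [np.exp(1j * phi) * np.cos(theta), -np.sin(theta)],
    ])

U = mzi(0.3, 1.1)
# Any lossless MZI is unitary, so it preserves total optical power:
assert np.allclose(U @ U.conj().T, np.eye(2))
```

Cascading such 2×2 blocks in a mesh builds up the larger unitary matrices used for the network's linear layers.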
Citations
Proceedings ArticleDOI
07 Jul 2019
TL;DR: A matrix computation processor based on an on-chip optical linear neural network, which solves different matrix equations by self-learning technology, showing possibility of usage in the field of optical artificial intelligence.
Abstract: In this paper, we present a matrix computation processor based on an on-chip optical linear neural network. The processor solves different matrix equations by self-learning, showing the potential of such circuits for optical artificial intelligence.
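The notion of solving a matrix equation by self-learning can be illustrated as iterative gradient descent on the residual ‖Ax − b‖²; the matrix, learning rate, and iteration count below are toy values, not the on-chip procedure:

```python
import numpy as np

# Toy stand-in for "solving a matrix equation by self-learning":
# gradient descent on ||Ax - b||^2 converges to the solution of Ax = b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.zeros(2)
lr = 0.05
for _ in range(500):
    grad = 2 * A.T @ (A @ x - b)   # gradient of the squared residual
    x -= lr * grad

# The exact solution of this system is x = [2, 3]
assert np.allclose(x, [2.0, 3.0], atol=1e-6)
```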

3 citations

Journal ArticleDOI
TL;DR: Based on device properties, indirect feedback tuning (IFT) is proposed to simultaneously alleviate thermal and process variations and can improve the BER of silicon photonic networks to 10−9 under different variation situations.
Abstract: Silicon photonics is the leading candidate technology for high-speed and low-energy-consumption networks. Thermal and process variations are the two main challenges to achieving high-reliability photonic networks. Thermal variation is due to heat generated by the application, floorplan, and environment, while process variation is caused by fabrication variability in the deposition, masking, exposure, etching, and doping steps. Tuning techniques are therefore required to overcome the impact of these variations and efficiently stabilize the performance of silicon photonic networks. We extend our previous optical switch integration model, BOSIM, to support variation and thermal analyses. Based on device properties, we propose indirect feedback tuning (IFT) to simultaneously alleviate thermal and process variations. IFT can improve the BER of silicon photonic networks to 10⁻⁹ under different variation situations. Compared to state-of-the-art techniques, IFT achieves up to a 1.52 × 10⁸ times bit-error-rate improvement and 4.11× better heater energy efficiency. Indirect feedback does not require high-speed optical signal detection; thus, the circuit design of IFT saves up to 61.4% of the power and 51.2% of the area compared to state-of-the-art designs.
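The feedback-tuning idea can be illustrated as a toy control loop that steers a heater phase back to an interferometer's transmission peak; this is a sketch of the control principle only, not the paper's IFT algorithm:

```python
import numpy as np

def transmission(phase):
    """Idealized MZI bar-port response; peak transmission at zero phase."""
    return np.cos(phase / 2.0) ** 2

drift = 0.6    # hypothetical thermally induced phase error (radians)
heater = 0.0   # phase applied by the heater (the control variable)
gain = 0.5
for _ in range(100):
    x = drift + heater
    heater += gain * (-np.sin(x) / 2.0)  # gradient ascent on transmission

# The loop cancels the drift and restores near-peak transmission:
assert transmission(drift + heater) > 0.999
```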

3 citations


Cites methods from "Deep learning with coherent nanopho..."

  • ...The inference unit of the optical neural network was fabricated and proposed in 2016 [6]....

    [...]

Proceedings ArticleDOI
03 Mar 2019
TL;DR: An optical neuron is experimentally demonstrated using an optical logistic sigmoid activation function, yielding a 100% improvement compared to the state of the art, employing a sequence of 100 ps long pulses.
Abstract: We experimentally demonstrate an optical neuron using an optical logistic sigmoid activation function. Successful thresholding at 4 different power levels was achieved, yielding a 100% improvement compared to the state of the art, employing a sequence of 100 ps long pulses.

3 citations

Proceedings ArticleDOI
26 Feb 2020
TL;DR: An all-optical sigmoid activation function as well as a single-λ linear neuron is presented, and its linear-algebra operational credentials are experimentally demonstrated by means of a typical IQ modulator operated at 10 Gbaud.
Abstract: Neuromorphic computing has emerged as a highly promising alternative computing paradigm, owing to its potential to rapidly increase computational efficiency beyond the limits imposed by the end of Moore's law. The first electronic neuromorphic chips, such as IBM's TrueNorth and Intel's Loihi, revealed tremendous improvements in computational speed and density; however, they still operate at MHz rates. To this end, neuromorphic photonic integrated circuits can further increase computational speed and density, drawing on a large portfolio of GHz-bandwidth, low-energy components. Herein, we present an all-optical sigmoid activation function as well as a single-λ linear neuron. The all-optical sigmoid activation function comprises a Semiconductor Optical Amplifier Mach-Zehnder Interferometer (SOA-MZI) configured in a differentially biased scheme, followed by an SOA. Its thresholding capabilities have been experimentally demonstrated with 100 ps optical pulses. We then introduce an all-optical phase-encoded weighting scheme and experimentally demonstrate its linear-algebra operational credentials by means of a typical IQ modulator operated at 10 Gbaud.
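The sigmoid thresholding behavior can be sketched numerically; the threshold and steepness below are hypothetical, and the optical implementation uses an SOA-MZI rather than this formula:

```python
import numpy as np

def sigmoid(p_in, p_thresh=1.0, steepness=8.0):
    """Logistic sigmoid applied to optical input power (hypothetical
    threshold and steepness values, for illustration only)."""
    return 1.0 / (1.0 + np.exp(-steepness * (p_in - p_thresh)))

levels = np.array([0.2, 0.8, 1.2, 1.8])  # four hypothetical power levels
out = sigmoid(levels)
# Below-threshold inputs are suppressed, above-threshold inputs pass:
assert out[0] < 0.01 and out[-1] > 0.99
```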

3 citations


Cites background from "Deep learning with coherent nanopho..."

  • ...Inspired by the well-known speed and energy benefits of photonics that are gradually turning interconnection into the stronghold of optical technologies [5–9], recent research efforts are already attempting to transfer the neuromorphic computing principles over optics [10,11]....

    [...]

  • ...The I and Q MZIs are driven by the VI and VQ differential voltages imprinting on the real part of EI and EQ the I and Q signals, respectively, with the phase shifter at its Q branch being controlled by a VPM voltage level to achieve the orthogonality between I and Q via a π/2 phase shift....

    [...]

  • ...The respective DC power supplies are employed in order to properly bias the I and Q MZIs as well as to define the 0 or π phase shift at the phase shifter, determining in this way whether addition or subtraction will be carried out between w1x1 and w2x2....

    [...]

  • ...Coherent layouts that exploit the phase of the optical carrier electric field for sign encoding purposes can yield single-wavelength and single-laser linear neuron deployments, but have been demonstrated so far only in a rather complex spatial layout for matrix multiplication purposes with multiple cascaded MZIs [10]. This design follows the Reck proposal and requires N² MZIs for an N-input configuration, scaling quadratically with the fan-in value....

    [...]

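The phase-encoded sign convention quoted above (a 0 or π carrier phase selecting addition or subtraction) can be sketched with complex field arithmetic; this is an illustrative field model, not the experimental IQ-modulator setup:

```python
import numpy as np

def weighted_field(x, w):
    """Encode a signed weight optically: magnitude as amplitude,
    sign as a 0 or pi phase on the carrier (illustrative model)."""
    phase = 0.0 if w >= 0 else np.pi
    return np.abs(w) * x * np.exp(1j * phase)

x1, x2 = 0.7, 0.4        # hypothetical input signal amplitudes
w1, w2 = 0.5, -0.9       # hypothetical signed weights
field = weighted_field(x1, w1) + weighted_field(x2, w2)  # coherent sum
# Coherent addition of the fields realizes w1*x1 + w2*x2, sign included:
assert np.isclose(field.real, w1 * x1 + w2 * x2)
```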

Proceedings ArticleDOI
01 Oct 2020
TL;DR: This paper proposes an adaptive data-driven initialization approach for recurrent photonic neural networks that is activation-agnostic, while it takes into account the actual distribution of the data used to train the network, overcoming a number of significant limitations of existing approaches.
Abstract: Photonic Deep Learning (DL) accelerators are among the most promising approaches for providing fast and energy efficient neural network implementations for several applications. However, photonic accelerators require using different activation functions compared to those typically used in DL. This renders the training process especially difficult to tune, often requiring several trials just for selecting the appropriate initialization hyper-parameters for the network. This process becomes even more difficult for recurrent networks, where exploding gradient phenomena can further destabilize the training process. In this paper, we propose an adaptive data-driven initialization approach for recurrent photonic neural networks. The proposed method is activation-agnostic, while it takes into account the actual distribution of the data used to train the network, overcoming a number of significant limitations of existing approaches. The proposed method is simple and easy to implement, yet it leads to significant improvements in the performance of DL models, as it was experimentally demonstrated using two large-scale challenging time-series datasets.
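The data-driven, activation-agnostic initialization idea can be sketched as rescaling random weights so that pre-activations computed on a real training batch have unit variance; this is an assumed, LSUV-style simplification, not the paper's exact procedure:

```python
import numpy as np

# Draw random weights, then rescale each output unit so pre-activations
# on an actual data batch have unit variance, whatever activation follows.
rng = np.random.default_rng(1)
X = rng.standard_normal((256, 32)) * 5.0   # training batch, arbitrary scale
W = rng.standard_normal((32, 16))
W /= (X @ W).std(axis=0)                   # per-output-unit rescaling

# Pre-activations are now well-scaled regardless of the data's raw scale:
assert np.allclose((X @ W).std(axis=0), 1.0)
```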

3 citations


Cites background from "Deep learning with coherent nanopho..."

  • ...Photonic implementations of DL models are among the most promising approaches for providing very fast and low energy neuromorphic solutions for DL applications [4]....

    [...]

References
Proceedings Article
03 Dec 2012
TL;DR: A large, deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
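The dropout regularization named in the abstract can be sketched in a few lines; this is the inverted-dropout variant for illustration (the original applied dropout with p = 0.5 to the fully-connected layers and rescaled at test time instead):

```python
import numpy as np

def dropout(activations, p=0.5, rng=None, training=True):
    """Inverted dropout: randomly zero units during training and rescale
    the survivors so the expected activation is unchanged."""
    if not training:
        return activations                       # use every unit at test time
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(activations.shape) >= p    # drop each unit with prob p
    return activations * mask / (1.0 - p)        # rescale to keep expectation

a = np.ones((4, 8))
out = dropout(a, rng=np.random.default_rng(0))
# Every unit is either zeroed or scaled by 1/(1-p) = 2:
assert set(np.unique(out)) <= {0.0, 2.0}
```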

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
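The backpropagation procedure described above can be sketched with a toy two-layer network; shapes, data, and learning rate are hypothetical:

```python
import numpy as np

# Forward pass through two layers, then chain-rule gradients propagated
# from the output layer back to the first layer.
rng = np.random.default_rng(2)
X = rng.standard_normal((16, 4))
y = rng.standard_normal((16, 1))
W1 = rng.standard_normal((4, 8)) * 0.1
W2 = rng.standard_normal((8, 1)) * 0.1

def mse(W1, W2):
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

before = mse(W1, W2)
for _ in range(200):
    h = np.tanh(X @ W1)                        # forward, hidden layer
    err = h @ W2 - y                           # output-layer error signal
    gW2 = h.T @ err                            # gradient at the output layer
    gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2))  # chain rule back to layer 1
    W2 -= 0.01 * gW2
    W1 -= 0.01 * gW1

# Repeated gradient steps reduce the training error:
assert mse(W1, W2) < before
```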

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
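The temporal-difference rule underlying the deep Q-network can be sketched in tabular form; the transition and hyperparameters below are toy values, and the DQN replaces the table with a deep network:

```python
import numpy as np

# One tabular Q-learning update from a single observed transition.
n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9              # learning rate and discount factor

s, a, r, s_next = 0, 1, 1.0, 2       # hypothetical (state, action, reward, next state)
td_target = r + gamma * Q[s_next].max()   # bootstrapped return estimate
Q[s, a] += alpha * (td_target - Q[s, a])  # move Q toward the TD target

assert Q[0, 1] == 0.5
```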

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]
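The quoted passages above describe implementing a weight matrix through its singular value decomposition, M = UΣV†, with the unitaries realized by interferometer meshes. A numerical sketch of that factorization (random matrix, hypothetical sizes):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
U, s, Vh = np.linalg.svd(M)          # M = U @ diag(s) @ Vh

x = rng.standard_normal(4)
# Apply V^dagger, then the diagonal Sigma, then U -- the staged pipeline:
y = U @ (np.diag(s) @ (Vh @ x))
assert np.allclose(y, M @ x)
```

The two unitary factors are exactly the kind of transformation an MZI mesh can realize, while Σ is a simple per-channel scaling.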

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
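The encode/bottleneck/decode structure described above can be sketched with a toy linear autoencoder; this is purely illustrative, as the paper's autoencoders are deep, nonlinear, and pretrained layer by layer:

```python
import numpy as np

# Encode 8-D data through a 2-D bottleneck and decode it back, trained
# by gradient descent on the reconstruction error.
rng = np.random.default_rng(4)
z = rng.standard_normal((200, 2))
X = z @ rng.standard_normal((2, 8))     # data with 2-D latent structure
We = rng.standard_normal((8, 2)) * 0.1  # encoder weights (8 -> 2)
Wd = rng.standard_normal((2, 8)) * 0.1  # decoder weights (2 -> 8)

def recon_error():
    return float(np.mean((X @ We @ Wd - X) ** 2))

before = recon_error()
for _ in range(1000):
    err = (X @ We @ Wd - X) / len(X)    # averaged reconstruction error
    gWd = (X @ We).T @ err              # gradient for the decoder
    gWe = X.T @ (err @ Wd.T)            # gradient for the encoder
    Wd -= 0.01 * gWd
    We -= 0.01 * gWe

# Training reduces the reconstruction error through the bottleneck:
assert recon_error() < before
```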

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

    [...]