Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017 - Vol. 11, Iss. 7, pp. 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: Results indicate that a combination of epitaxial rare-earth oxide thin films and horizontal slot waveguides provides a promising platform for light amplification and generation on silicon.
Abstract: We have epitaxially grown high-quality single-crystal rare-earth oxide thin films, including Gd2O3 and erbium-incorporated (ErGd)2O3, on silicon-on-insulator substrate, and investigated their optical properties when embedded in horizontal slot waveguides. (ErGd)2O3 with an erbium concentration in the mid-1021 cm−3 range shows well-resolved Stark-split photoluminescence emission peaks in the telecommunications band and a photoluminescence lifetime-concentration product as large as 2.67×1018 s·cm−3 at room temperature. Using these materials, horizontal slot waveguides with strong optical confinement in low-refractive-index rare-earth oxide layers have been fabricated for silicon-based integrated active photonic devices. Thanks to the strong light-matter interaction, a large waveguide modal absorption of 88 dB/cm related to erbium ions is achieved, leading to a large potential optical gain. Intense emissions from the waveguides are also observed, with a radiation efficiency on the order of 10−4. These results indicate that a combination of epitaxial rare-earth oxide thin films and horizontal slot waveguides provides a promising platform for light amplification and generation on silicon.

8 citations

Posted Content
TL;DR: This work proposes an ultrahigh speed, spiking neuromorphic processor architecture built upon single flux quantum (SFQ) based artificial neurons (JJ-Neuron), which has the potential to provide higher performance and power efficiency over the state of the art including CMOS, memristors and nanophotonics devices.
Abstract: Artificial neural networks inspired by brain operations can improve the possibilities of solving complex problems more efficiently. Today's computing hardware, on the other hand, is mainly based on von Neumann architecture and CMOS technology, which is inefficient at implementing neural networks. For the first time, we propose an ultrahigh-speed, spiking neuromorphic processor architecture built upon single flux quantum (SFQ) based artificial neurons (JJ-Neuron). The proposed architecture has the potential to provide higher performance and power efficiency than the state of the art, including CMOS, memristors and nanophotonic devices. The JJ-Neuron offers ultrafast spiking capability, trainability with commodity design software even after fabrication, and compatibility with commercial CMOS and SFQ foundry services. We experimentally demonstrate the soma part of the JJ-Neuron for various activation functions together with peripheral SFQ logic gates. The neural network is then trained on the IRIS dataset, showing a 100% match with the results of offline training at 1.2×10^10 synaptic operations per second (SOPS) and 8.57×10^11 SOPS/W performance and power efficiency, respectively. In addition, scalability to 10^18 SOPS and 10^17 SOPS/W is shown, which is at least five orders of magnitude more efficient than state-of-the-art CMOS circuits and one order of magnitude more efficient than estimates for nanophotonics-based architectures.

8 citations

Proceedings ArticleDOI
05 Mar 2021
TL;DR: A complete portfolio of neuromorphic photonic subsystems and architectures is presented, highlighting their utilization in practical application scenarios for time-series classification and fiber transmission links, along with a promising roadmap when plasmo-photonic hardware is adopted.
Abstract: Neuromorphic computing has emerged as a highly promising compute alternative, migrating from von Neumann architectures towards mimicking the human brain for sustaining computational power increases within a reduced power consumption envelope. Electronic neuromorphic chips like IBM’s TrueNorth, Intel’s Loihi and Mythic’s AI platform reveal a tremendous performance improvement in terms of computational speed and density; at the same time, neuromorphic photonic layouts are constantly gaining ground in exploiting their large component portfolio for enabling GHz-bandwidth and low-energy neurons. Progressing in tight synergy with appropriate training techniques, this evolution has already started to translate into performance improvements in end-to-end applications, highlighting the practical perspectives of the new neural network hardware when effectively synergized with new training frameworks. Herein, we present a complete portfolio of neuromorphic photonic subsystems and architectures, highlighting their utilization in practical application scenarios for time series classification and fiber transmission links. Our work extends along feed-forward and recurrent photonic NN models, demonstrating experimental results together with the required training methods for bridging the gap between software-deployed NNs and the photonic hardware. We report on the experimentally validated performance of a 10GHz photonic time series classification engine, presenting also preliminary results on how photonic neurons can replace DSP modules in end-to-end fiber transmission schemes. The perspectives of these layouts to yield energy and area efficiency benefits are discussed through a detailed energy and area breakdown of neuromorphic photonic technologies, highlighting a promising roadmap when plasmo-photonic hardware is adopted.

8 citations

Posted Content
TL;DR: The Photonic Recurrent Ising Sampler (PRIS), an algorithm tailored for photonic parallel networks, that can sample distributions of arbitrary Ising problems, is presented.
Abstract: The inability of conventional electronic architectures to efficiently solve large combinatorial problems motivates the development of novel computational hardware. There has been much effort recently toward developing photonic networks which exploit fundamental properties enshrined in the wave nature of light and of its interaction with matter: high-speed, low-power, optical passivity, and parallelization. However, unleashing the true potential of photonic architectures requires the development of featured algorithms which optimally exploit these fundamental properties. We here present the Photonic Recurrent Ising Sampler (PRIS), a heuristic method tailored for photonic parallel networks that allows for fast and efficient sampling from distributions of combinatorially hard Ising problems. The PRIS provides sample solutions which converge in probability to the ground state of arbitrary Ising models. By running the PRIS at various noise levels, we probe the critical behavior of universality classes and their critical exponents. In addition to the attractive features of photonic networks, the PRIS relies on intrinsic dynamic noise and eigenvalue dropout to find ground states more efficiently. Our work paves the way to orders-of-magnitude speedups in heuristic methods via photonic implementations of the PRIS. We also hint at a broader class of (meta)heuristic algorithms derived from the PRIS, such as combined simulated annealing on the noise and eigenvalue dropout levels.
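The recurrent sampling loop the abstract describes can be sketched in NumPy: repeatedly multiply the spin state by the coupling matrix, inject dynamic noise, and threshold back to ±1. This is a minimal illustration, not the paper's photonic implementation; the eigenvalue-dropout step is omitted and all names and parameter values are assumptions.

```python
import numpy as np

def pris_step(spins, J, noise_std, rng):
    """One recurrent update: multiply the spin vector by the coupling
    matrix, inject dynamic noise, and threshold back to +/-1."""
    field = J @ spins + rng.normal(0.0, noise_std, size=spins.shape)
    return np.where(field >= 0.0, 1.0, -1.0)

def sample_ground_state(J, steps=500, noise_std=0.5, seed=0):
    """Run the recurrent sampler, keeping the lowest-energy state seen."""
    rng = np.random.default_rng(seed)
    energy = lambda s: -0.5 * s @ J @ s
    spins = rng.choice([-1.0, 1.0], size=J.shape[0])
    best, best_e = spins.copy(), energy(spins)
    for _ in range(steps):
        spins = pris_step(spins, J, noise_std, rng)
        if energy(spins) < best_e:
            best, best_e = spins.copy(), energy(spins)
    return best, best_e

# Small ferromagnetic model: the ground state has all six spins aligned.
J = np.ones((6, 6)) - np.eye(6)
state, e = sample_ground_state(J)
```

In a photonic realization the matrix-vector product would be carried out optically in a single pass, which is where the claimed speedups originate; the noise that a software sampler must generate explicitly is intrinsic to the hardware.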

8 citations


Cites background or methods from "Deep learning with coherent nanopho..."

  • ...There has been much effort recently in developing photonic networks that can exploit fundamental properties enshrined in the wave nature of light and of its interaction with matter: high-speed, low-power, optical passivity, and parallelization [1], [2]....


  • ...As the PRIS can be implemented with high-speed parallel photonic networks, the on-chip real time of a unit step can be less than a nanosecond [1] (and the initial setup time for a given Ising model is typically of the order of microseconds with thermal phase shifters)....


Journal ArticleDOI
TL;DR: This article compares the response of different electro-optic architectures where part of the input optical signal is converted into the electrical domain and used to self-phase modulate the intensity of the remaining optical signal.
Abstract: The implementation of nonlinear activation functions is one of the key challenges that optical neural networks face. To date, different approaches have been proposed, including switching to digital implementations, electro-optical, or all-optical ones. In this article, we compare the response of different electro-optic architectures where part of the input optical signal is converted into the electrical domain and used to self-phase modulate the intensity of the remaining optical signal. These architectures are made up of Mach-Zehnder interferometers (MZI) and microring resonators (MRR). We have compared the corresponding transfer functions with commonly used activation functions in state-of-the-art machine learning models and carried out an in-depth analysis of the capabilities of those architectures to generate the proposed activation functions. We demonstrate that a ring-assisted MZI and a two-ring-assisted MZI present the highest expressivity among the proposed structures. To the best of our knowledge, this is the first time that a quantified analysis of the capabilities of optical devices to mimic state-of-the-art activation functions is presented. The obtained activation functions are benchmarked on two machine learning examples: a classification task using the Iris dataset, and image recognition using the MNIST dataset. We use complex-valued feed-forward neural networks and get test accuracies of 97% and 95%, respectively.
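The self-phase-modulation scheme described above can be sketched numerically: tap off a fraction of the input power, detect it, and let the resulting phase shift modulate the remaining light through an interferometer. The functional form and all parameter values below are illustrative assumptions, not the article's measured transfer functions.

```python
import numpy as np

def electro_optic_activation(z, alpha=0.1, gain=np.pi, phi_b=np.pi / 4):
    """A fraction `alpha` of the input power is photodetected; the
    resulting signal drives a phase shifter that self-phase modulates
    the remaining light through an MZI-like transfer function.
    All parameters here are illustrative, not the paper's."""
    detected_power = alpha * np.abs(z) ** 2
    phase = 0.5 * (gain * detected_power + phi_b)
    # The surviving field sqrt(1 - alpha) * z passes an interferometer
    # whose transmission depends on the detected power.
    return np.sqrt(1.0 - alpha) * np.exp(-1j * phase) * np.cos(phase) * z

z = np.linspace(0.0, 2.0, 50).astype(complex)   # input field amplitudes
out = electro_optic_activation(z)
```

Sweeping the bias phase `phi_b` reshapes the amplitude response between ReLU-like and sigmoid-like curves, which is the expressivity knob the article's comparison turns on.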

8 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A large, deep convolutional neural network, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on ImageNet classification, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
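The abstract's "60 million parameters" figure can be checked from the layer shapes it describes. The script below uses the commonly cited AlexNet layer configuration (the exact shapes come from the published paper, not this excerpt); the grouped convolutional layers see only half of the preceding feature maps because the network was split across two GPUs.

```python
# (filters, kernel_h, kernel_w, input_channels_per_group) for the five
# convolutional layers; layers 2, 4 and 5 are grouped across two GPUs.
conv = [
    (96, 11, 11, 3),
    (256, 5, 5, 48),
    (384, 3, 3, 256),
    (384, 3, 3, 192),
    (256, 3, 3, 192),
]
# (output_units, input_units) for the three fully-connected layers;
# the first takes the flattened 6x6x256 final feature map.
fc = [(4096, 6 * 6 * 256), (4096, 4096), (1000, 4096)]

params = sum(f * kh * kw * c + f for f, kh, kw, c in conv)  # weights + biases
params += sum(out * inp + out for out, inp in fc)
```

The total comes to roughly 61 million, consistent with the "60 million parameters" in the abstract, with the vast majority sitting in the first fully-connected layer.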

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....


  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....


  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....


  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

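The factorization behind these excerpts is the singular value decomposition: a weight matrix M = U Σ V† is implemented as two unitary stages (realizable as MZI meshes) around a diagonal scaling stage. A minimal NumPy check of that decomposition, with arbitrary matrix values rather than the device's calibrated weights:

```python
import numpy as np

rng = np.random.default_rng(42)
M = rng.normal(size=(4, 4))        # an arbitrary real weight matrix

# Factor M = U @ Sigma @ Vh; the unitaries U and Vh map onto MZI meshes
# and the diagonal Sigma onto per-channel attenuation/amplification.
U, s, Vh = np.linalg.svd(M)
Sigma = np.diag(s)

x = rng.normal(size=4)             # input signal vector
# Physical order of the quoted transformations: V first, then U @ Sigma.
y = (U @ Sigma) @ (Vh @ x)
```

Applying the two stages in sequence reproduces the direct product M @ x, which is why two passes through the optical interference unit suffice per weight matrix.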

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
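The core idea of the abstract, squeezing data through a small central layer and tuning the weights by gradient descent on reconstruction error, can be sketched with a linear toy. This linear version can at best recover the PCA subspace the abstract compares against, unlike the paper's deep nonlinear pretrained network; all sizes, rates, and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 8))  # correlated data
X -= X.mean(axis=0)

k = 2                                  # width of the small central layer
W_enc = rng.normal(scale=0.1, size=(8, k))
W_dec = rng.normal(scale=0.1, size=(k, 8))
mse = lambda: np.mean((X @ W_enc @ W_dec - X) ** 2)

init_loss = mse()
lr = 1e-3
for _ in range(2000):
    code = X @ W_enc                   # encode to low-dimensional codes
    err = code @ W_dec - X             # reconstruction error
    # gradient descent on the mean squared reconstruction error
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)
loss = mse()
```

The paper's contribution is precisely what this toy omits: with deep nonlinear layers, plain gradient descent from random weights tends to stall, so a layer-wise pretraining scheme supplies initial weights close to a good solution before fine-tuning.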

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

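The training procedure the excerpt cites, forward propagation through the layers followed by back-propagation of the error gradient, can be sketched on a toy task. The task, network sizes, and learning rate below are illustrative assumptions, not the paper's optical implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # training inputs
Y = np.array([[0], [1], [1], [0]], float)               # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = np.tanh(x @ W1 + b1)          # forward propagation, hidden layer
    return h, sigmoid(h @ W2 + b2)    # output layer

_, out0 = forward(X)
loss0 = np.mean((out0 - Y) ** 2)

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    # back propagation: chain-rule gradients of the squared error
    d_out = (out - Y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

_, out = forward(X)
loss1 = np.mean((out - Y) ** 2)
```

In the cited optical architecture only the forward pass runs in hardware; the weight matrices found by a loop like this are then programmed into the phase shifters.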