Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
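As context for how such an optical network computes, the cited passages below note that the paper's optical interferometer unit realizes a weight matrix through its singular value decomposition, M = UΣV†: two unitary interferometer meshes and a diagonal attenuation stage. A minimal numerical sketch of that factorization (random placeholder values; this simulates only the math, not the photonics):

```python
import numpy as np

# Simulate the factorization an optical interferometer unit realizes:
# a weight matrix M decomposes as M = U @ diag(s) @ Vt (SVD), where the
# unitaries U, Vt map to interferometer meshes and s to attenuators.
# Matrix and input values below are random placeholders.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))          # a trained 4x4 weight matrix
U, s, Vt = np.linalg.svd(M)              # M == U @ diag(s) @ Vt
x = rng.standard_normal(4)               # an input signal vector

y_direct = M @ x                         # direct matrix-vector product
y_staged = U @ (np.diag(s) @ (Vt @ x))   # applied stage by stage, optical-style
assert np.allclose(y_direct, y_staged)
```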
Citations
Journal ArticleDOI
TL;DR: In this article, the authors study the energy efficiency of integrated silicon photonic MAC circuits based on Mach-Zehnder modulators and microring resonators, and describe the bounds on energy efficiency and scaling limits for NxN optical networks with today's technology, based on the optical and electrical link budget.
Abstract: Digital accelerators in the latest generation of CMOS processes support multiply and accumulate (MAC) operations at energy efficiencies spanning 10-to-100 fJ/Op. But the operating speed for such MAC operations is often limited to a few hundreds of MHz. Optical or optoelectronic MAC operations on today's SOI-based silicon photonic integrated circuit platforms can be realized at speeds of tens of GHz, leading to much lower latency and higher throughput. In this paper, we study the energy efficiency of integrated silicon photonic MAC circuits based on Mach-Zehnder modulators and microring resonators. We describe the bounds on energy efficiency and scaling limits for NxN optical networks with today's technology, based on the optical and electrical link budget. We also describe research directions that can overcome the current limitations.
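The throughput gap this abstract describes can be put in rough numbers. A back-of-envelope sketch using the abstract's order-of-magnitude figures; the specific rates picked below (500 MHz, 20 GHz) are illustrative assumptions from the stated ranges, not measured values:

```python
# Order-of-magnitude comparison using the abstract's figures. The exact
# rates below are illustrative picks from the stated ranges.
elec_rate_hz = 500e6           # electronic MAC lane: "a few hundreds of MHz"
phot_rate_hz = 20e9            # photonic MAC lane: "tens of GHz"
speedup = phot_rate_hz / elec_rate_hz          # per-lane throughput advantage

elec_energy_per_op_j = 100e-15                 # 100 fJ/Op (upper end of range)
elec_power_w = elec_energy_per_op_j * elec_rate_hz  # ~50 uW per electronic lane

assert speedup == 40.0
```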

22 citations

Journal ArticleDOI
TL;DR: In this article, a noise-resilient deep learning coherent photonic neural network layout is proposed that operates at 10GMAC/sec/axon compute rates and follows a noise-resilient training model.
Abstract: The explosive growth of deep learning applications has triggered a new era in computing hardware, targeting the efficient deployment of multiply-and-accumulate operations. In this realm, integrated photonics has come to the foreground as a promising energy-efficient deep learning technology platform for enabling ultra-high compute rates. However, although integrated photonic neural network layouts have already successfully entered the deep learning era, their compute rate and noise-related characteristics still fall short of their promise as high-speed photonic engines. Herein, we demonstrate experimentally a noise-resilient deep learning coherent photonic neural network layout that operates at 10GMAC/sec/axon compute rates and follows a noise-resilient training model. The coherent photonic neural network has been fabricated as a silicon photonic chip and its MNIST classification performance was experimentally evaluated to support accuracy values of >99% and >98% at 5 and 10GMAC/sec/axon, respectively, offering 6× higher on-chip compute rates and >7% accuracy improvement over state-of-the-art coherent implementations.

22 citations

Journal ArticleDOI
TL;DR: In this paper, the authors show that the system can display resonator and integrator features depending on parameters and that multiple pulses can be emitted in response to larger perturbations.
Abstract: Semiconductor lasers with coherent forcing are expected to behave similarly to simple neuron models in response to external perturbations, as long as the physics describing them can be approximated by that of an overdamped pendulum with fluid torque. Beyond the validity range of this approximation, more complex features can be expected. We perform experiments and numerical simulations which show that the system can display resonator and integrator features depending on parameters and that multiple pulses can be emitted in response to larger perturbations.

22 citations

Journal ArticleDOI
TL;DR: The proposed methodology opens up a new route for realizing ultra-wideband illusion scattering of electromagnetic wave, which is important for stealth and microwave applications.
Abstract: We propose a full optimization procedure for designing mantle cloaks enclosing arbitrary objects, using sub-wavelength conformal frequency selective surface (FSS). Rely on the scattering cancellation principle of mantle cloak characterized by an average surface reactance, a personal computer can achieve this design procedure. By combing a Bayesian optimization (BO) with an electromagnetic solver, we can automatically find the optimal parameters of a conformal mantle cloak which can nearly cancel the scattering from the enclosed objects. It is shown that the results obtained by our method coincide with those from a rigorous analytical model and the numerical results by full parametric scanning. The proposed methodology opens up a new route for realizing ultra-wideband illusion scattering of electromagnetic wave, which is important for stealth and microwave applications.

22 citations

Journal ArticleDOI
TL;DR: In this article, the authors proposed a double lateral Si3N4 waveguide for germanium-on-silicon photodetectors (Ge-on-Si PDs), which can serve as a novel waveguide-integrated coupling configuration.
Abstract: Up to now, the light coupling schemes of germanium-on-silicon photodetectors (Ge-on-Si PDs) could be divided into three main categories: (1) vertical (or normal-incidence) illumination, which can be from the top or back of the wafer/chip, and waveguide-integrated coupling including (2) butt coupling and (3) evanescent coupling. In evanescent coupling the input waveguide can be positioned on top, at the bottom, or lateral to the absorber. Here, to the best of our knowledge, we propose the first concept of a Ge-on-Si PD with double lateral silicon nitride (Si3N4) waveguides, which can serve as a novel waveguide-integrated coupling configuration: double lateral coupling. The Ge-on-Si PD with double lateral Si3N4 waveguides features a uniform optical field distribution in the Ge region, which is very beneficial for improving the operation speed at high input power. The proposed Ge-on-Si PD is comprehensively characterized by static and dynamic measurements. The typical internal responsivity is evaluated to be 0.52 A/W at an input power of 25 mW. An equivalent circuit model and a theoretical 3 dB opto-electrical (OE) bandwidth investigation of the Ge-on-Si PD with lateral coupling are implemented. Based on small-signal (S21) radio-frequency measurements, under 4 mA photocurrent, a 60 GHz bandwidth operating at −3 V bias voltage is demonstrated. When the photocurrent is increased to 12 mA, the 3 dB OE bandwidth still reaches 36 GHz. With 1 mA photocurrent, clear eye-diagram openings are experimentally obtained for 70, 80, 90, and 100 Gbit/s non-return-to-zero (NRZ) and 100, 120, 140, and 150 Gbit/s four-level pulse amplitude modulation signals, without utilizing any offline digital signal processing at the receiver side. In order to verify the high-power handling performance in high-speed data transmission, we investigate the eye diagram variations as the photocurrent increases. Clear open electrical eye diagrams of 60 Gbit/s NRZ under 20 mA photocurrent are also obtained.
Overall, the proposed lateral Si3N4 waveguide structure is flexibly extendable to a light coupling configuration of PDs, which makes it very attractive for developing high-performance silicon photonic integrated circuits in the future.

22 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network achieving state-of-the-art ImageNet performance is described, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
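The reported top-1/top-5 error rates follow a standard convention: a prediction counts as correct under top-k if the true label appears among the k highest-scoring classes. A small illustrative sketch (the scores and labels below are invented):

```python
# Top-k error: a sample counts as correct if the true label is among the
# k classes with the highest scores. Scores/labels below are invented.
def topk_error(scores, labels, k=5):
    wrong = 0
    for s, y in zip(scores, labels):
        topk = sorted(range(len(s)), key=lambda i: s[i], reverse=True)[:k]
        if y not in topk:
            wrong += 1
    return wrong / len(labels)

scores = [[0.1, 0.9, 0.3, 0.2, 0.6, 0.5, 0.4],   # true label 1: in the top 5
          [0.8, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]]   # true label 1: ranked last
labels = [1, 1]
assert topk_error(scores, labels, k=5) == 0.5    # one of two samples missed
```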

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
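The backpropagation step this abstract describes is the chain rule applied layer by layer: each layer's parameter gradient is computed from the error signal propagated back from the layer above. A minimal sketch for a one-hidden-layer network, with the analytic gradient checked against a finite-difference estimate (all shapes and values are illustrative):

```python
import numpy as np

# One-hidden-layer network: forward pass, backprop via the chain rule,
# and a finite-difference check of one weight's gradient. Values random.
rng = np.random.default_rng(0)
x = rng.standard_normal(3)            # input
y = rng.standard_normal(2)            # target
W1 = rng.standard_normal((4, 3))      # input -> hidden weights
W2 = rng.standard_normal((2, 4))      # hidden -> output weights

def loss(W1, W2):
    h = np.tanh(W1 @ x)               # hidden representation
    return 0.5 * np.sum((W2 @ h - y) ** 2)

# Backward pass: propagate the error signal layer by layer
h = np.tanh(W1 @ x)
d_out = W2 @ h - y                     # dL/d(output)
dW2 = np.outer(d_out, h)               # dL/dW2
d_h = W2.T @ d_out                     # dL/dh
dW1 = np.outer(d_h * (1 - h**2), x)    # dL/dW1  (tanh' = 1 - tanh^2)

# Numerical check on a single weight, W1[0, 0]
eps = 1e-6
W1p, W1m = W1.copy(), W1.copy()
W1p[0, 0] += eps
W1m[0, 0] -= eps
num_grad = (loss(W1p, W2) - loss(W1m, W2)) / (2 * eps)
assert abs(num_grad - dW1[0, 0]) < 1e-6
```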

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
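At the core of the deep Q-network is the temporal-difference update the abstract alludes to, which in the tabular (non-neural) case reads Q(s,a) ← Q(s,a) + α(r + γ·max Q(s',·) − Q(s,a)). A sketch of that update with toy states, actions and rewards:

```python
# Tabular Q-learning update (the non-neural core of a deep Q-network):
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
# States, actions and rewards below are toy values.
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    best_next = max(Q[s_next].values())        # greedy value of next state
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

Q = {0: {0: 0.0, 1: 0.0},                      # two states, two actions each
     1: {0: 1.0, 1: 2.0}}
q_update(Q, s=0, a=0, r=1.0, s_next=1)
assert abs(Q[0][0] - 1.4) < 1e-9               # 0.5 * (1.0 + 0.9 * 2.0)
```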

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]
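Non-ideality (3) in the quoted list, finite phase-setting precision, is straightforward to quantify: with b bits over a 2π range, the worst-case phase error after rounding is half a quantization step, π/2^b. A small sketch (the target phase below is an arbitrary illustrative value):

```python
import math

# Worst-case error when a phase is set with b-bit precision over 2*pi:
# half a quantization step, i.e. pi / 2**b. Target phase is arbitrary.
bits = 16
step = 2 * math.pi / 2**bits          # smallest settable phase increment
max_err = step / 2                    # = pi / 2**16, about 4.8e-5 rad

phase = 1.2345                        # some desired phase (radians)
quantized = round(phase / step) * step
assert abs(quantized - phase) <= max_err
```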

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
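The principal components analysis baseline the autoencoder is compared against can itself be written in a few lines: project centered data onto its top-k right singular vectors to get low-dimensional codes, then reconstruct. A sketch on synthetic data (dimensions and k chosen arbitrarily):

```python
import numpy as np

# PCA via SVD: encode centered data as its top-k principal components,
# then reconstruct. Synthetic data; k chosen arbitrarily.
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 10))
X = X - X.mean(axis=0)                     # center the data
U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 3
codes = X @ Vt[:k].T                       # low-dimensional codes, shape (100, 3)
X_hat = codes @ Vt[:k]                     # reconstruction from the codes

err = np.sum((X - X_hat) ** 2)             # residual energy beyond top-k PCs
assert err < np.sum(X ** 2)                # the codes capture some variance
```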

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

    [...]