Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Proceedings ArticleDOI
17 Aug 2020
TL;DR: This work proposes a hybrid opto-electronic computing architecture targeting the acceleration of DNNs based on the residue number system (RNS), and combines the use of Wavelength Division Multiplexing (WDM) and RNS for efficient execution.
Abstract: Deep Neural Networks (DNNs) are currently used in many fields, including critical real-time applications. Due to their compute-intensive nature, speeding up DNNs has become an important topic in current research. We propose a hybrid opto-electronic computing architecture targeting the acceleration of DNNs based on the residue number system (RNS). In this novel architecture, we combine the use of Wavelength Division Multiplexing (WDM) and RNS for efficient execution. WDM is used to enable a high level of parallelism while reducing the number of optical components needed, decreasing the area of the accelerator. Moreover, RNS is used to build optical components with short optical critical paths. In addition to speed, this has the advantage of lowering the optical losses and reducing the need for high laser power. Our RNS compute modules use one-hot encoding and thus enable fast switching between the electrical and optical domains. In this work, we demonstrate how to implement the different DNN computational kernels using WDM-enabled RNS-based integrated photonics. We provide an accelerator architecture that uses our designed components and perform a design space exploration to select efficient architecture parameters. Compared to memristor crossbars, our residue matrix-vector multiplication unit has two orders of magnitude higher peak performance. Our experimental evaluation using DNN benchmarks illustrates that our architecture can perform more than 19 times faster than state-of-the-art GPUs under the same power budget.
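The residue arithmetic underlying this accelerator can be sketched in a few lines. The moduli set and vectors below are illustrative choices, not the paper's actual parameters; the point is that each residue channel multiplies and accumulates independently, which is what keeps the per-channel optical critical path short.

```python
# Sketch of residue-number-system (RNS) arithmetic. The moduli set is an
# illustrative choice, not the paper's.
MODULI = (7, 11, 13)  # pairwise coprime; dynamic range = 7*11*13 = 1001

def to_rns(x):
    """Split an integer into independent residues (computable in parallel)."""
    return tuple(x % m for m in MODULI)

def rns_mac(a_res, b_res, acc_res):
    """Multiply-accumulate channel-wise: each modulus works independently."""
    return tuple((acc + a * b) % m
                 for a, b, acc, m in zip(a_res, b_res, acc_res, MODULI))

def from_rns(res):
    """Recover the integer via the Chinese Remainder Theorem."""
    M = 1
    for m in MODULI:
        M *= m
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return x % M

# Dot product of two small vectors, computed entirely in residues:
a, b = [3, 5, 2], [4, 1, 6]
acc = (0, 0, 0)
for ai, bi in zip(a, b):
    acc = rns_mac(to_rns(ai), to_rns(bi), acc)
assert from_rns(acc) == sum(x * y for x, y in zip(a, b))  # 29
```

In hardware each residue channel maps to a separate (small-modulus) optical compute unit, so no carries propagate between channels.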

10 citations


Cites background from "Deep learning with coherent nanopho..."

  • ...Recent works which implemented the optical NN accelerators with different technologies have been proposed, including microdisk weight banks [33], microring weight banks [56], diffractive optical layers [31], and Mach-Zehnder interferometers [48]....

    [...]

Journal ArticleDOI
TL;DR: In this article, the authors demonstrated a 2 × 2 Mach-Zehnder interferometer (MZI) TO switch with a high extinction ratio of more than 27 dB and a switching rise/fall time of 4.92/4.97 μs.
Abstract: The mid-infrared (MIR, 2–20 μm) waveband is of great interest for integrated photonics in many applications such as on-chip spectroscopic chemical sensing and optical communication. Thermo-optic switches are essential to large-scale integrated photonic circuits at MIR wavebands. However, current technologies require a thick cladding layer or high driving voltages, or may introduce high losses at MIR wavelengths, limiting performance. This paper demonstrates thermo-optic (TO) switches operating at 2 μm built by integrating graphene onto silicon-on-insulator (SOI) structures. The remarkable thermal and optical properties of graphene make it an excellent heater material. The low loss of graphene at MIR wavelengths can reduce the required cladding thickness for the thermo-optic phase shifter from micrometers to tens of nanometers, resulting in a lower driving voltage and power consumption. The modulation efficiency of the microring resonator (MRR) switch was 0.11 nm/mW. The power consumption for an 8-dB extinction ratio was 5.18 mW (0.8 V modulation voltage), and the rise/fall time was 3.72/3.96 μs. Furthermore, we demonstrated a 2 × 2 Mach-Zehnder interferometer (MZI) TO switch with a high extinction ratio of more than 27 dB and a switching rise/fall time of 4.92/4.97 μs. A comprehensive analysis of how the device structure and the graphene Fermi level affect device performance was also performed. The theoretical figure of merit (2.644 mW−1μs−1) of graphene heaters is three orders of magnitude higher than that of metal heaters. These results indicate graphene is an exceptional nanomaterial for future MIR optical interconnects.
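As a quick sanity check on the quoted MRR numbers (assuming the usual definitions of modulation efficiency and switching power, which may differ slightly from the paper's conventions):

```python
# Back-of-the-envelope checks on the reported graphene-heater figures.
eff_nm_per_mW = 0.11          # MRR modulation efficiency
p_switch_mW   = 5.18          # power for an 8-dB extinction ratio
er_dB         = 8.0

# Resonance shift achieved at the quoted switching power:
shift_nm = eff_nm_per_mW * p_switch_mW
print(f"resonance shift ~ {shift_nm:.3f} nm")   # ~0.570 nm

# Power cost per dB of extinction at this operating point:
print(f"~ {p_switch_mW / er_dB:.2f} mW/dB")     # ~0.65 mW/dB
```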

10 citations

Proceedings ArticleDOI
09 Sep 2019
TL;DR: The data show that the strong optical contrast between the 2H and 1T’ structures persists even as the thermodynamic barrier between them is reduced by alloying, which bodes well for alloy design of phase-change materials.
Abstract: We use the (Mo,W)Te2 system to explore the potential of transition metal dichalcogenides (TMDs) as phase-change materials for integrated photonics. We measure the complex optical constant of MoTe2 in both the 2H and 1T’ phases by spectroscopic ellipsometry. We find that both phases have a large refractive index, which is good for a confined light-matter interaction volume. The change Δn between phases is of order unity, which is large and comparable to established phase-change materials. However, both phases have large optical loss, which limits the figure of merit throughout the measured range. We further measure the NIR reflectivity of MoTe2 and Mo0.91W0.09Te2 in both the 2H and 1T’ phases. The data show that the strong optical contrast between the 2H and 1T’ structures persists even as the thermodynamic barrier between them is reduced by alloying. This bodes well for alloy design of phase-change materials.

10 citations


Cites background from "Deep learning with coherent nanopho..."

  • ...Integrated photonic circuits offer the possibility to process massive data flows with faster speed and lower energy consumption than electronic circuits, and to construct beyond-von Neumann computing architectures including functions such as compute-in-memory and deep learning.(1,2) To fulfill the requirement of aggressively miniaturized integration, photonic materials should have strong light-matter interaction and process compatibility with other materials....

    [...]

Journal ArticleDOI
Yin Xu, Chenxi Zhu, Xin Hu, Yue Dong, Bo Zhang, Yi Ni
TL;DR: In this paper, the authors proposed an on-chip silicon device with shallowly etched rectangular slots on the top surface of silicon nanowire for mode-division-multiplexing (MDM) transmission.
Abstract: Ever-increasing capacity requirements of optical interconnects drive the emergence and fast development of mode-division-multiplexing (MDM) transmission on-chip, where efficient mode control and conversion components become indispensable. Here, we propose an on-chip silicon TM0-to-TM1 mode-order converter by leveraging shallowly etched rectangular slots on the top surface of a silicon nanowire. The mode conversion region consists of two rectangular slots on the same side of a silicon nanowire and a smaller one between them to realize the efficient mode-order conversion from the input TM0 to the output TM1 mode with the help of multimode interference and accumulated phase difference. By studying the etching pattern on the silicon nanowire in detail, we have achieved an on-chip TM0-to-TM1 mode-order converter with a high conversion efficiency of 97.5% and low modal crosstalk of less than −23 dB in a conversion length of 11.8 µm; further, the insertion loss is only 0.29 dB at the wavelength of 1.55 µm. Moreover, the device working bandwidth and fabrication tolerance are also analyzed. Note that the proposed shallowly etched slots on the silicon nanowire can also be further developed to achieve on-chip TM0-to-TM2 mode-order conversion. With these characteristics, such a device could boost the development of MDM transmission on-chip with more TM-polarized mode channels.
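The dB figures quoted above translate to linear power ratios as follows (standard conversions, not taken from the paper):

```python
import math

def db_to_linear(db):
    """Convert a power ratio in dB to a linear ratio."""
    return 10 ** (db / 10)

# Modal crosstalk below -23 dB: the unwanted mode carries < ~0.5% of the power.
print(f"crosstalk < {db_to_linear(-23) * 100:.2f}% of power")  # < 0.50%

# Insertion loss of 0.29 dB: ~93.5% of the input power is transmitted.
print(f"transmission ~ {db_to_linear(-0.29) * 100:.1f}%")      # ~93.5%

# Conversion efficiency of 97.5% expressed in dB:
print(f"conversion ~ {10 * math.log10(0.975):.2f} dB")         # ~-0.11 dB
```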

10 citations

Proceedings ArticleDOI
24 Jan 2021
TL;DR: In this article, the authors propose a noise-aware approach for training neural networks realized on photonic hardware, which can alleviate some of the limitations that hinder its application, including the need to re-train DL models to be compliant with the underlying hardware architecture, as well as the existence of various noise sources.
Abstract: Photonic-based neuromorphic hardware holds the credentials for providing fast and energy efficient implementations of computationally complex Deep Learning (DL) models. At the same time, the unique nature of neuromorphic photonics also imposes a number of limitations that hinder its application, including the need to re-train DL models in order to be compliant with the underlying hardware architecture, as well as the existence of various noise sources, which are prevalent in virtually all neuromorphic photonic architectures and negatively affect the accuracy of the deployed models. In this paper we propose a novel noise-aware approach for training neural networks realized on photonic hardware, which can alleviate some of these limitations. To this end we first provide an extensive characterization of the various noise sources that affect sigmoid-based recurrent photonic architectures, as well as provide an extensive study on the effect of various signal-to-noise-ratio (SNR) levels on the performance of such DL models. The effectiveness of the proposed method is demonstrated on a challenging forecasting problem that involves high frequency financial time series using a state-of-the-art recurrent photonic architecture, which naturally fits the requirements of such latency-critical applications. Apart from providing more accurate models, the proposed method opens several interesting future research directions on co-designing neuromorphic photonics, including developing DL models that can work on lower SNRs, leading to more energy efficient solutions.
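The core idea of noise-aware training can be sketched with a toy model: inject noise into the forward pass so the learned weights tolerate the noise present at inference time. The additive-Gaussian noise model and logistic task below are simplifications of mine; the paper characterizes several distinct photonic noise sources and uses a recurrent photonic architecture.

```python
import numpy as np

# Noise-aware training sketch: Gaussian noise is added to the pre-activation
# during training only, so the weights learned are robust to that noise level.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary task: labels come from a linear threshold of the inputs.
X = rng.normal(size=(200, 4))
y = (X @ np.array([1.0, -2.0, 0.5, 1.5]) > 0).astype(float)

w = np.zeros(4)
lr, sigma = 0.5, 0.3   # sigma models an assumed noise floor (SNR level)

for _ in range(300):
    z = X @ w + sigma * rng.normal(size=len(y))  # noisy pre-activation
    p = sigmoid(z)
    w -= lr * X.T @ (p - y) / len(y)             # logistic-loss gradient

acc = np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))
print(f"clean-inference accuracy: {acc:.2f}")
```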

10 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
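The top-1/top-5 error rates quoted above are computed by checking whether the true class appears among the k highest-scoring predictions. A minimal version (the scores and labels below are made up for illustration):

```python
import numpy as np

def top_k_error(scores, labels, k):
    """Fraction of rows whose true label is NOT among the k best classes."""
    topk = np.argsort(scores, axis=1)[:, -k:]    # k highest-scoring classes
    hits = [labels[i] in topk[i] for i in range(len(labels))]
    return 1.0 - np.mean(hits)

scores = np.array([[0.1, 0.5, 0.2, 0.9],
                   [0.8, 0.1, 0.05, 0.05],
                   [0.2, 0.3, 0.4, 0.1]])
labels = np.array([3, 1, 2])   # true classes

print(top_k_error(scores, labels, 1))  # 0.333... (the middle row misses)
print(top_k_error(scores, labels, 2))  # 0.0
```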

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
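The temporal-difference update that the deep Q-network approximates with a neural network can be shown in its tabular form. The 3-state chain environment below is invented for illustration; the update rule Q(s,a) += α·(r + γ·max Q(s',·) − Q(s,a)) is the standard one.

```python
import numpy as np

# Tabular Q-learning on a tiny chain MDP: start at state 0, the rightmost
# state is terminal and pays reward 1. Actions: 0 = left, 1 = right.
rng = np.random.default_rng(0)
n_states, n_actions = 3, 2
alpha, gamma = 0.1, 0.9
Q = np.zeros((n_states, n_actions))

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < 0.1 else int(np.argmax(Q[s]))
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # temporal-difference target (no bootstrap from the terminal state)
        target = r + gamma * np.max(Q[s_next]) * (s_next != n_states - 1)
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1)[:-1])   # learned greedy policy: always go right
```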

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]
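Two of the non-idealities listed in the first excerpt, finite phase-setting precision (16 bits) and finite photodetection dynamic range (30 dB), can be modeled directly. The functions below are a simplified sketch of mine, not the paper's noise model:

```python
import numpy as np

def quantize_phase(phi, bits=16):
    """Round phases to a grid of 2**bits levels over [0, 2*pi)."""
    step = 2 * np.pi / (2 ** bits)
    return np.round(phi / step) * step

def clip_dynamic_range(power, dr_db=30.0):
    """Signals more than dr_db below the maximum vanish into the noise floor."""
    floor = np.max(power) * 10 ** (-dr_db / 10)
    return np.maximum(power, floor)

phi = np.array([0.123456, 1.234567, 3.0])
print(np.max(np.abs(quantize_phase(phi) - phi)))  # < 2*pi/2**16 ~ 9.6e-5 rad

p = np.array([1.0, 1e-2, 1e-5])
print(clip_dynamic_range(p))   # the weakest signal is raised to the 1e-3 floor
```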

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
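A minimal linear autoencoder trained by gradient descent illustrates the encode-bottleneck-decode idea from the abstract. The paper's networks are deep and rely on a layer-wise pretraining scheme, which this single-layer sketch omits entirely:

```python
import numpy as np

# Toy linear autoencoder: compress 2-D data through a 1-D bottleneck.
# The data is anisotropic, so a good code captures the dominant direction.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.2]])

d, k, lr = 2, 1, 0.01
W_enc = rng.normal(scale=0.1, size=(d, k))
W_dec = rng.normal(scale=0.1, size=(k, d))

for _ in range(500):
    H = X @ W_enc                # low-dimensional code
    R = H @ W_dec                # reconstruction
    G = 2 * (R - X) / len(X)     # gradient of the squared error wrt R
    W_dec -= lr * H.T @ G
    W_enc -= lr * X.T @ (G @ W_dec.T)

err = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(f"reconstruction MSE: {err:.3f}")   # far below the data variance
```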

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

    [...]
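The forward-propagation/back-propagation training loop described in the excerpt can be sketched for a tiny two-layer network. XOR is a standard toy task, not one from the paper:

```python
import numpy as np

# Forward pass + backpropagation for a two-layer network learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 1.0
for _ in range(5000):
    # forward propagation through the two layers
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward propagation of the cross-entropy gradient
    dp = (p - y) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)     # chain rule through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= lr * grad

p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
print((p > 0.5).astype(int).ravel())   # [0 1 1 0] once XOR is learned
```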