Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
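The interferometer-mesh approach behind this architecture relies on factoring each weight matrix into unitary and diagonal parts. A minimal numerical sketch, assuming a generic real weight matrix, with `numpy` standing in for the optical hardware:

```python
import numpy as np

# Sketch: an ONN weight matrix M factors by SVD into M = U @ S @ Vh,
# where U and Vh are unitary (implementable as Mach-Zehnder
# interferometer meshes) and S is diagonal (implementable as optical
# attenuation/amplification). Matrix values and sizes are illustrative.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(M)
S = np.diag(s)

# The optical circuit applies Vh, then S, then U to the input field.
x = rng.standard_normal(4)
y_optical = U @ (S @ (Vh @ x))

assert np.allclose(y_optical, M @ x)  # matches the direct matrix-vector product
```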
Citations
Journal ArticleDOI
TL;DR: Improvements to D2NNs are introduced by changing the training loss function and reducing the impact of vanishing gradients in the error back-propagation step, and D2NNs are combined with electronic networks to create hybrid classifiers that significantly reduce the number of input pixels into an electronic network using an ultra-compact front-end D2NN with a layer-to-layer distance of a few wavelengths.
Abstract: Optical machine learning offers advantages in terms of power efficiency, scalability, and computation speed. Recently, an optical machine learning method based on diffractive deep neural networks (D2NNs) has been introduced to execute a function as the input light diffracts through passive layers, designed by deep learning using a computer. Here, we introduce improvements to D2NNs by changing the training loss function and reducing the impact of vanishing gradients in the error back-propagation step. Using five phase-only diffractive layers, we numerically achieved a classification accuracy of 97.18% and 89.13% for optical recognition of handwritten digits and fashion products, respectively; using both phase and amplitude modulation (complex-valued) at each layer, our inference performance improved to 97.81% and 89.32%, respectively. Furthermore, we report the integration of D2NNs with electronic neural networks to create hybrid classifiers that significantly reduce the number of input pixels into an electronic network using an ultra-compact front-end D2NN with a layer-to-layer distance of a few wavelengths, also reducing the complexity of the successive electronic network. Using a five-layer phase-only D2NN jointly optimized with a single fully connected electronic layer, we achieved a classification accuracy of 98.71% and 90.04% for the recognition of handwritten digits and fashion products, respectively. Moreover, the input to the electronic network was compressed by >7.8 times down to 10 × 10 pixels. Beyond creating low-power and high-frame rate machine learning platforms, D2NN-based hybrid neural networks will find applications in smart optical imager and sensor design.
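As a rough structural illustration of the phase-only diffractive layers described above, the sketch below models each layer as an element-wise phase mask followed by a toy free-space propagation step. Note the unitary FFT used here is only a stand-in: real D2NN designs use Rayleigh-Sommerfeld diffraction.

```python
import numpy as np

# Toy sketch (assumption: propagation modeled by a unitary 2-D FFT, a
# simplification of the diffraction integrals used in actual D2NNs).
# A phase-only layer multiplies the incident field by exp(i*phi);
# training would optimize each phi via error back-propagation.
def phase_layer(field, phi):
    return field * np.exp(1j * phi)

def propagate(field):
    return np.fft.fft2(field, norm="ortho")  # toy free-space propagation

rng = np.random.default_rng(1)
field = np.ones((8, 8), dtype=complex)           # incident plane wave
for _ in range(5):                               # five diffractive layers
    phi = rng.uniform(0, 2 * np.pi, field.shape) # trainable phase mask
    field = propagate(phase_layer(field, phi))

intensity = np.abs(field) ** 2                   # detector reads intensity
```

Because the phase masks and the unitary FFT both preserve total power, the detected intensity sums to the input power, mirroring the passive (lossless) character of the diffractive layers.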

139 citations


Cites background from "Deep learning with coherent nanophotonic circuits"

  • ...optical implementations of various neural network architectures [4]–[10], with a recent resurgence [11]–[22], following the avail-...

    [...]

Journal ArticleDOI
TL;DR: In this article, a fully functional all-optical neural network (AONN) is presented, in which linear operations are programmed by spatial light modulators and Fourier lenses, and optical nonlinear activation functions are realized with electromagnetically induced transparency in laser-cooled atoms.
Abstract: Artificial neural networks (ANNs) are now widely used in industry and play increasingly important roles in fundamental research. Although most ANN hardware systems are electronic, optical implementation is particularly attractive because of its intrinsic parallelism and low energy consumption. Here, we propose and demonstrate a fully functional all-optical neural network (AONN), in which linear operations are programmed by spatial light modulators and Fourier lenses, and optical nonlinear activation functions are realized with electromagnetically induced transparency in laser-cooled atoms. Moreover, all the errors from different optical neurons are independent, so the AONN can scale up to a larger system size with the final error remaining at a level similar to that of a single neuron. We confirm its capability and feasibility in machine learning by successfully classifying the order and disorder phases of a typical statistical Ising model. The demonstrated AONN scheme can be used to construct various ANNs of different architectures with intrinsic parallel computation at the speed of light.
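The scaling argument about independent neuron errors can be checked with a small Monte Carlo sketch; the additive-Gaussian error model and normalized weights below are assumptions for illustration:

```python
import numpy as np

# If each optical neuron contributes an independent error of std sigma,
# a normalized weighted sum over N neurons has an error comparable to
# (in fact smaller than) sigma, rather than growing with N. Correlated
# errors, by contrast, would accumulate.
rng = np.random.default_rng(7)
sigma, trials = 0.05, 20_000

stds = {}
for N in (4, 64, 1024):
    w = np.full(N, 1.0 / N)                        # normalized weights
    errs = (w * rng.normal(0.0, sigma, (trials, N))).sum(axis=1)
    stds[N] = errs.std()                           # shrinks like 1/sqrt(N)
```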

139 citations

Journal ArticleDOI
TL;DR: In this article, the authors discuss the potential of silicon photonics, whose key drivers arise from the quality of silicon wafers and the superior processing capabilities developed and funded by the microelectronics industry.
Abstract: Silicon (Si) photonics research and development started more than 30 years ago and has intensified in the last 15 years as levels of device functionality, photonic integration, and commercialization have all increased [1]. The key drivers for using Si in photonics arise from the quality of wafers and the superior processing capabilities developed and funded by the microelectronics industry. It has the promise to revolutionize the photonics industry in the same way that CMOS design and processing revolutionized the microelectronics industry, by driving down photonic chip cost while enabling higher levels of photonic integration and functionality. Commercialization, so far, has focused on optical communication (Telecom [2] and data center [3]) and biosensing [4], with a wealth of future application areas, including high-performance computing [5], automotive (lidar) [6], optical switches [7], and artificial intelligence [8].

138 citations

Journal ArticleDOI
TL;DR: In this article, a multimode photonic computing core consisting of an array of programmable mode converters based on on-waveguide metasurfaces made of phase-change materials is demonstrated.
Abstract: Neuromorphic photonics has recently emerged as a promising hardware accelerator, with significant potential speed and energy advantages over digital electronics for machine learning algorithms, such as neural networks of various types. Integrated photonic networks are particularly powerful in performing analog computing of matrix-vector multiplication (MVM) as they afford unparalleled speed and bandwidth density for data transmission. Incorporating nonvolatile phase-change materials in integrated photonic devices enables indispensable programming and in-memory computing capabilities for on-chip optical computing. Here, we demonstrate a multimode photonic computing core consisting of an array of programmable mode converters based on on-waveguide metasurfaces made of phase-change materials. The programmable converters utilize the refractive index change of the phase-change material Ge2Sb2Te5 during phase transition to control the waveguide spatial modes with a very high precision of up to 64 levels in modal contrast. This contrast is used to represent the matrix elements, with 6-bit resolution and both positive and negative values, to perform MVM computation in neural network algorithms. We demonstrate a prototypical optical convolutional neural network that can perform image processing and recognition tasks with high accuracy. With a broad operation bandwidth and a compact device footprint, the demonstrated multimode photonic core is promising toward large-scale photonic neural networks with ultrahigh computation throughputs. Integrated optical computing requires programmable photonic and nonlinear elements. The authors demonstrate a phase-change metasurface mode converter, which can be programmed to control the waveguide mode contrast, and build an optical convolutional neural network to perform image processing tasks.
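A hypothetical sketch of the 6-bit signed weight representation described above: matrix elements are rounded onto discrete levels in [-31, +31]. The exact level mapping used by the hardware is an assumption for illustration.

```python
import numpy as np

# Quantize matrix elements to 6-bit resolution with sign, then compare
# the quantized matrix-vector product to the full-precision one.
# (Hypothetical mapping; the device's calibration differs in detail.)
def quantize_6bit(W):
    scale = np.max(np.abs(W))
    levels = np.round(W / scale * 31)     # signed integer levels
    return levels / 31 * scale, levels

rng = np.random.default_rng(2)
W = rng.standard_normal((3, 3))
Wq, levels = quantize_6bit(W)

# MVM with quantized weights closely approximates the full-precision one.
x = rng.standard_normal(3)
rel_err = np.linalg.norm(Wq @ x - W @ x) / np.linalg.norm(W @ x)
```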

136 citations

Journal ArticleDOI
24 Sep 2019
TL;DR: The bottleneck and paradigm shift of digital computing are reviewed, along with an array of PAXEL architectures and applications, including artificial neural networks, reservoir computing, pass-gate logic, decision making, and compressed sensing.
Abstract: In the emerging Internet-of-things, cyber-physical-system-embedded society, big data analytics needs huge computing capability with better energy efficiency. With Moore's law for electronic integrated circuits coming to an end and parallel processing facing the throughput limitation governed by Amdahl's law, there is strong motivation to explore a novel frontier of data processing in the post-Moore era. Optical fiber transmission has made remarkable advances over the last three decades. A record aggregated transmission capacity of a wavelength-division-multiplexing system per single-mode fiber has reached 115 Tbit/s over 240 km. It is time to turn our attention from data transport by photons to data processing by photons. A photonic accelerator (PAXEL) is a special class of processor placed at the front end of a digital computer, optimized to perform a specific function faster and with less power consumption than an electronic general-purpose processor. It can process images or time-serial data either in an analog or digital fashion on a real-time basis. With maturing manufacturing technology for optoelectronic devices and a diverse array of computing architectures at hand, prototyping PAXEL becomes feasible by leveraging, e.g., cutting-edge miniature and power-efficient nanostructured silicon photonic devices. In this article, first the bottleneck and the paradigm shift of digital computing are reviewed. Next, we review an array of PAXEL architectures and applications, including artificial neural networks, reservoir computing, pass-gate logic, decision making, and compressed sensing. We assess the potential advantages and challenges of each of these PAXEL approaches to highlight the scope for future work toward practical implementation.

136 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network achieving state-of-the-art ImageNet classification performance is described; it consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
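For reference, the "dropout" regularization mentioned in the abstract can be sketched in a few lines. This shows the commonly used "inverted dropout" variant, a later refinement rather than the paper's exact formulation:

```python
import numpy as np

# During training each activation is zeroed with probability p, and the
# survivors are rescaled by 1/(1-p) so the expected activation is
# unchanged; at test time the layer is then used without any masking.
def dropout(activations, p, rng):
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(3)
a = np.ones(10_000)
dropped = dropout(a, p=0.5, rng=rng)
# Roughly half the units are zeroed, while the mean stays close to 1.
```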

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
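The back-propagation procedure the abstract describes, deriving each layer's parameter updates from the error signal propagated backward through the chain rule, can be sketched for a toy two-layer network. Shapes, learning rate, and the squared loss are illustrative assumptions:

```python
import numpy as np

# A two-layer network fitting a single target by gradient descent.
rng = np.random.default_rng(4)
x = rng.standard_normal(3)
target = 1.0
W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal(4)

for _ in range(200):
    h = np.tanh(W1 @ x)            # hidden representation
    y = W2 @ h                     # network output
    dL_dy = 2 * (y - target)       # squared-loss gradient
    dL_dW2 = dL_dy * h             # output-layer weight gradient
    dL_dh = dL_dy * W2             # error propagated back to hidden layer
    dL_dW1 = np.outer(dL_dh * (1 - h**2), x)  # chain rule through tanh
    W2 -= 0.05 * dL_dW2
    W1 -= 0.05 * dL_dW1
```

After a few hundred steps the output converges toward the target, which is the essence of adjusting internal parameters layer by layer from the back-propagated error.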

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
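The temporal-difference update at the core of this approach can be sketched in tabular form; the deep Q-network replaces the table with a neural network over pixels, and the toy chain environment and hyperparameters below are assumptions for illustration:

```python
import numpy as np

# Tabular Q-learning on a toy 5-state chain: the agent starts at state 0
# and receives reward 1 on reaching state 4. Action 0 moves left,
# action 1 moves right.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.3
rng = np.random.default_rng(6)

for _ in range(500):                  # training episodes
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy exploration.
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Temporal-difference update toward the bootstrapped target.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
```

After training, the greedy policy moves right from every non-terminal state, illustrating how value estimates propagate backward from the reward.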

23,074 citations


"Deep learning with coherent nanophotonic circuits" refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: An effective way of initializing the weights is described that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
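The comparison with principal components analysis invites a minimal sketch: for a purely linear autoencoder the optimal small central layer coincides with PCA, which stands in here for the gradient-based fine-tuning described in the abstract. The data and code size are illustrative assumptions:

```python
import numpy as np

# Linear "autoencoder" via SVD/PCA: a 2-unit central layer forces the
# 10-dimensional inputs through a 2-dimensional code and back.
rng = np.random.default_rng(5)
X = rng.standard_normal((100, 10)) @ rng.standard_normal((10, 10))
X = X - X.mean(axis=0)                # center the data

_, _, Vt = np.linalg.svd(X, full_matrices=False)
code = X @ Vt[:2].T                   # "encoder": project to 2-D codes
X_hat = code @ Vt[:2]                 # "decoder": reconstruct inputs

recon_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

A nonlinear deep autoencoder, initialized and fine-tuned as the paper describes, can beat this PCA baseline, which is exactly the claim of the abstract.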

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanophotonic circuits" refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

    [...]