Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: In this paper, a review of photonic matrix multiplication methods is presented, mainly including the plane light conversion method, Mach-Zehnder interferometer method and wavelength division multiplexing method.
Abstract: Matrix computation, as a fundamental building block of information processing in science and technology, contributes most of the computational overheads in modern signal processing and artificial intelligence algorithms. Photonic accelerators are designed to accelerate specific categories of computing in the optical domain, especially matrix multiplication, to address the growing demand for computing resources and capacity. Photonic matrix multiplication has much potential to expand the domains of telecommunication and artificial intelligence, benefiting from its superior performance. Recent research in photonic matrix multiplication has flourished and may provide opportunities to develop applications that are unachievable at present by conventional electronic processors. In this review, we first introduce the methods of photonic matrix multiplication, mainly including the plane light conversion method, Mach-Zehnder interferometer method and wavelength division multiplexing method. We also summarize the developmental milestones of photonic matrix multiplication and the related applications. Then, we review their detailed advances in applications to optical signal processing and artificial neural networks in recent years. Finally, we comment on the challenges and perspectives of photonic matrix multiplication and photonic acceleration.

59 citations
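As a concrete illustration of one of the methods the review names, the wavelength division multiplexing scheme can be sketched numerically: each input vector element rides on its own wavelength, a weight bank attenuates each wavelength, and a photodetector sums the weighted powers into one dot product. This is a minimal sketch with made-up values, not code from the review.

```python
import numpy as np

# Illustrative WDM matrix-vector multiply: one row of weights per output
# photodetector, one wavelength per input element (values are assumed).
rng = np.random.default_rng(0)
W = rng.uniform(0, 1, (3, 4))   # per-wavelength weight bank (attenuations)
x = rng.uniform(0, 1, 4)        # input optical powers, one per wavelength

# each photodetector integrates its weighted wavelengths incoherently,
# yielding one element of the matrix-vector product
y_photonic = np.array([np.sum(W[i] * x) for i in range(3)])

assert np.allclose(y_photonic, W @ x)   # matches an electronic matmul
```

The point of the sketch is that the optical summation at the detector plays the role of the accumulate step in a conventional multiply-accumulate unit.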

Journal ArticleDOI
TL;DR: Optimized by unique structures such as photonic crystal waveguide, slot waveguide, and microring resonator, these 2D material-based photonic devices can be further improved in light-matter interactions, providing a powerful design for silicon photonic integrated circuits.
Abstract: 2D materials, such as graphene, black phosphorus and transition metal dichalcogenides, have gained persistent attention in the past few years thanks to their unique properties for optoelectronics. More importantly, introducing 2D materials into silicon photonic devices will greatly promote the performance of optoelectronic devices, including improvement of response speed, reduction of energy consumption, and simplification of fabrication process. Moreover, 2D materials meet the requirements of complementary metal-oxide-semiconductor compatible silicon photonic manufacturing. A comprehensive overview and evaluation of state-of-the-art 2D photonic integrated devices for telecommunication applications is provided, including light sources, optical modulators, and photodetectors. Optimized by unique structures such as photonic crystal waveguide, slot waveguide, and microring resonator, these 2D material-based photonic devices can be further improved in light-matter interactions, providing a powerful design for silicon photonic integrated circuits.

59 citations

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate PCM-clad silicon photonic switches with a low insertion loss of 1 dB and a compact coupling length of 30 µm while maintaining a small crosstalk less than 10 dB over a bandwidth of 30 nm.
Abstract: An optical equivalent of the field-programmable gate array (FPGA) is of great interest to large-scale photonic integrated circuits. Previous programmable photonic devices relying on the weak, volatile thermo-optic or electro-optic effect usually suffer from a large footprint and high energy consumption. Phase change materials (PCMs) offer a promising solution due to the large non-volatile change in the refractive index upon phase transition. However, the large optical loss in PCMs poses a serious problem. Here, by exploiting an asymmetric directional coupler design, we demonstrate PCM-clad silicon photonic 1 × 2 and 2 × 2 switches with a low insertion loss of ~1 dB and a compact coupling length of ~30 µm while maintaining a small crosstalk less than ~10 dB over a bandwidth of 30 nm. The reported optical switches will function as the building blocks of the meshes in the optical FPGAs for applications such as optical interconnects, neuromorphic computing, quantum computing, and microwave photonics.

59 citations

Journal ArticleDOI
TL;DR: In this paper, a mesh of silicon photonic Mach-Zehnder interferometers self-configures and resets itself after the mixing is significantly perturbed, without turning off the beams.
Abstract: Propagation of light beams through scattering or multimode systems may lead to randomization of the spatial coherence of the light. Although information is not lost, its recovery requires a coherent interferometric reconstruction of the original signals, which have been scrambled into the modes of the scattering system. Here, we show that we can automatically unscramble four optical beams that have been arbitrarily mixed in a multimode waveguide, undoing the scattering and mixing between the spatial modes through a mesh of silicon photonic Mach-Zehnder interferometers. Using embedded transparent detectors and a progressive tuning algorithm, the mesh self-configures automatically and resets itself after the mixing is significantly perturbed, without turning off the beams. We demonstrate the recovery of four separate 10 Gbit/s information channels, with residual crosstalk between beams of −20 dB. This principle of self-configuring and self-resetting in optical systems should be applicable in a wide range of optical applications.

59 citations
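The core linear-algebra idea in the abstract can be sketched in a few lines: a multimode system mixes N beams through some unitary U, and a mesh configured to realize U† undoes the mixing exactly. This sketch sets the inverse directly rather than finding it by the authors' progressive power-minimization tuning; the random "scatterer" is an assumption for illustration.

```python
import numpy as np

# Idealized unscrambling: mixing by a random unitary U, recovery by a mesh
# set to U^H (in the paper this setting is found automatically by tuning).
rng = np.random.default_rng(1)
N = 4
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
U, _ = np.linalg.qr(A)               # random unitary "multimode scatterer"

signals = np.eye(N)                  # four independent channels
scrambled = U @ signals              # arbitrary mode mixing
recovered = U.conj().T @ scrambled   # mesh configured as the inverse

crosstalk = np.max(np.abs(recovered - np.eye(N)))
assert crosstalk < 1e-10             # channels separated again
```

In practice the mesh never knows U explicitly; the embedded detectors let it converge to the same inverse transformation by minimizing the power leaking into the wrong outputs.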

Journal ArticleDOI
TL;DR: A new approach to ONNs based on integrated Kerr micro-combs that is programmable, highly scalable and capable of reaching ultra-high speeds is reported, demonstrating the building block of the ONN — a single neuron perceptron — by mapping synapses onto 49 wavelengths.
Abstract: Optical artificial neural networks (ONNs) have significant potential for ultra-high computing speed and energy efficiency. We report a new approach to ONNs based on integrated Kerr micro-combs that is programmable, highly scalable and capable of reaching ultra-high speeds, demonstrating the building block of the ONN — a single neuron perceptron — by mapping synapses onto 49 wavelengths to achieve a single-unit throughput of 11.9 Giga-OPS at 8 bits per OP, or 95.2 Gbps. We test the perceptron on handwritten-digit recognition and cancer-cell detection — achieving over 90% and 85% accuracy, respectively. By scaling the perceptron to a deep learning network using off-the-shelf telecom technology we can achieve high throughput operation for matrix multiplication for real-time massive data processing.

58 citations
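The quoted throughput figures are internally consistent and worth a quick arithmetic check: 11.9 Giga-OPS at 8 bits per operation corresponds to the stated 95.2 Gbps.

```python
# Arithmetic check of the quoted figures (no photonics modeled here):
# 11.9 Giga-OPS x 8 bits per OP = 95.2 Gbps single-unit throughput.
giga_ops = 11.9          # quoted single-unit throughput, Giga-operations/s
bits_per_op = 8          # quoted precision
gbps = giga_ops * bits_per_op
assert abs(gbps - 95.2) < 1e-9
```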


Cites background or methods from "Deep learning with coherent nanopho..."

  • ...Spatially multiplexed schemes such as integrated coherent photonic circuits [3] and diffractive frameworks [10], have successfully demonstrated classification tasks involving vowels and handwritten digits with low-power passive operation, although with a tradeoff...

    [...]

  • ...When trained on enough data, ANNs can outperform humans and other computational algorithms [1-5] in tasks ranging from image recognition and language translation to risk evaluation and, interestingly, sophisticated board games [6]....

    [...]

References
Proceedings Article
03 Dec 2012
TL;DR: A large deep convolutional neural network, consisting of five convolutional layers (some followed by max-pooling layers) and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on ImageNet classification, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.

73,978 citations
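The "dropout" regularization the abstract mentions is simple enough to sketch: during training each hidden unit is zeroed with probability p. This sketch uses the modern "inverted dropout" variant (scaling by 1/(1−p) at training time so no scaling is needed at test time), which differs slightly from the original paper's test-time scaling; values are assumed.

```python
import numpy as np

# Inverted dropout on a batch of hidden activations (illustrative values).
rng = np.random.default_rng(42)
p = 0.5                                  # drop probability
h = np.ones((1000, 100))                 # pretend hidden-layer activations

mask = rng.random(h.shape) >= p          # keep each unit with probability 1-p
h_train = h * mask / (1.0 - p)           # scale survivors to preserve the mean

# expected activation is unchanged, so test time needs no correction
assert abs(h_train.mean() - 1.0) < 0.05
```

Dropping units at random prevents co-adaptation: no hidden unit can rely on a specific other unit being present, which is the regularizing effect the authors found so effective.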

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations
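The backpropagation mechanism this abstract describes, using the loss gradient to tell each layer how to change its parameters, fits in a short sketch. The toy network, data, and learning rate below are assumptions for illustration, not anything from the paper.

```python
import numpy as np

# A 2-layer network trained by backpropagation on a toy regression task.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = (X[:, :1] * X[:, 1:]).clip(min=0)   # arbitrary nonlinear target

W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def forward(X):
    h = np.maximum(X @ W1, 0)           # ReLU hidden layer
    return h, h @ W2

_, out0 = forward(X)
loss0 = np.mean((out0 - y) ** 2)

for _ in range(800):
    h, out = forward(X)
    g_out = 2 * (out - y) / len(X)      # dLoss/d(output)
    gW2 = h.T @ g_out                   # gradient for the output layer
    g_h = (g_out @ W2.T) * (h > 0)      # propagate backward through the ReLU
    gW1 = X.T @ g_h                     # gradient for the hidden layer
    W2 -= 0.05 * gW2                    # gradient-descent updates
    W1 -= 0.05 * gW1

_, out1 = forward(X)
assert np.mean((out1 - y) ** 2) < loss0  # training reduced the loss
```

Each layer's parameters are updated from a gradient computed using only the layer's own inputs and the error signal passed back from the layer above, which is what makes the procedure scale to deep stacks.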

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 

23,074 citations
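The temporal-difference update at the heart of the deep Q-network can be shown in tabular form, which is what the deep network replaces when handcrafted or low-dimensional state is unavailable. The 1-D corridor environment below is an assumption for illustration.

```python
import numpy as np

# Tabular Q-learning on a 5-state corridor: moving right from the last
# state yields reward 1; everything else yields 0 (toy environment).
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9             # learning rate, discount factor

rng = np.random.default_rng(0)
for _ in range(3000):
    s = rng.integers(n_states)      # uniform exploration of (state, action)
    a = rng.integers(n_actions)
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if (a == 1 and s == n_states - 1) else 0.0
    # TD update toward r + gamma * max_a' Q(s', a')
    Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])

# the learned greedy policy moves right in every state
assert all(Q[s, 1] > Q[s, 0] for s in range(n_states))
```

The DQN applies this same update rule, but with Q(s, a) parameterized by a deep network trained end-to-end from raw pixels rather than stored in a table.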


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]
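The excerpts above describe the singular value decomposition underlying the OIU stages: each weight matrix M^(i) factors as U Σ V, with the unitaries realized by MZI meshes and Σ by attenuation. A minimal numerical sketch of that factorization (random matrices assumed for illustration):

```python
import numpy as np

# Each weight matrix M is split by SVD into the two optical stages the
# excerpt names: "U Σ" (unitary mesh + attenuators) and "V" (unitary mesh).
rng = np.random.default_rng(7)
M1 = rng.normal(size=(4, 4))
M2 = rng.normal(size=(4, 4))

def svd_stages(M):
    U, s, Vh = np.linalg.svd(M)        # M = U @ diag(s) @ Vh
    return U @ np.diag(s), Vh          # stage "UΣ" and stage "V"

x = rng.normal(size=4)
for M in (M1, M2):
    US, Vh = svd_stages(M)
    y_optical = US @ (Vh @ x)          # light passes V first, then UΣ
    assert np.allclose(y_optical, M @ x)
```

Because any unitary can be built from a mesh of 2×2 interferometers, this factorization is what lets an arbitrary (non-unitary) weight matrix be implemented with passive optics plus per-mode gain or loss.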

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.

16,717 citations
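The abstract's claim, that a multilayer network with a small central layer can learn low-dimensional codes by gradient descent, can be sketched with the simplest possible case: a tied-weight linear autoencoder, whose optimum spans the same subspace PCA finds. The data, sizes, and learning rate below are assumptions for illustration.

```python
import numpy as np

# Tied-weight linear autoencoder: encode with W, decode with W.T, and
# minimize reconstruction error by gradient descent (toy rank-2 data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))  # rank-2 data in 10-D

W = rng.normal(scale=0.1, size=(10, 2))   # encoder to a 2-D central code
lr = 0.005
loss0 = np.mean((X - X @ W @ W.T) ** 2)   # initial reconstruction error

for _ in range(1000):
    R = X @ W @ W.T - X                   # reconstruction residual
    grad = 2 * (X.T @ R @ W + R.T @ X @ W) / len(X)
    W -= lr * grad

loss1 = np.mean((X - X @ W @ W.T) ** 2)
assert loss1 < loss0                      # the code captures the structure
```

The paper's contribution is the layer-wise pretraining that makes this same gradient descent work for deep nonlinear autoencoders, where a random initialization alone gets stuck.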

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium; it reviews deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

    [...]