scispace - formally typeset
Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Nature Photonics-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and a power-efficiency improvement of three orders of magnitude over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and a power-efficiency improvement of three orders of magnitude over state-of-the-art electronics.
Citations
Journal ArticleDOI
06 Jan 2021-Nature
TL;DR: In this paper, a universal optical vector convolutional accelerator operating at more than ten TOPS (trillions (10^12) of operations per second, or tera-ops per second) was demonstrated, generating convolutions of images with 250,000 pixels, sufficiently large for facial image recognition.
Abstract: Convolutional neural networks, inspired by biological visual cortex systems, are a powerful category of artificial neural networks that can extract the hierarchical features of raw data to provide greatly reduced parametric complexity and to enhance the accuracy of prediction. They are of great interest for machine learning tasks such as computer vision, speech recognition, playing board games and medical diagnosis [1–7]. Optical neural networks offer the promise of dramatically accelerating computing speed using the broad optical bandwidths available. Here we demonstrate a universal optical vector convolutional accelerator operating at more than ten TOPS (trillions (10^12) of operations per second, or tera-ops per second), generating convolutions of images with 250,000 pixels—sufficiently large for facial image recognition. We use the same hardware to sequentially form an optical convolutional neural network with ten output neurons, achieving successful recognition of handwritten digit images at 88 per cent accuracy. Our results are based on simultaneously interleaving temporal, wavelength and spatial dimensions enabled by an integrated microcomb source. This approach is scalable and trainable to much more complex networks for demanding applications such as autonomous vehicles and real-time video recognition. An optical vector convolutional accelerator operating at more than ten trillion operations per second is used to create an optical convolutional neural network that can successfully recognize handwritten digit images with 88 per cent accuracy.
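Stripped of the photonics, the time-wavelength interleaving described above reduces to a sliding dot product over a serialized pixel stream: each output sample is one multiply-accumulate pass of the kernel against a window of the stream. A minimal plain-Python sketch (function name and toy kernel are illustrative, not from the paper):

```python
# Toy sketch: a convolution computed as a sliding dot product over a
# serialized pixel stream, the operation the photonic accelerator
# performs by interleaving time and wavelength channels.
def conv1d_stream(pixels, kernel):
    k = len(kernel)
    out = []
    for t in range(len(pixels) - k + 1):
        # one multiply-accumulate pass per output sample
        out.append(sum(kernel[i] * pixels[t + i] for i in range(k)))
    return out

print(conv1d_stream([1, 2, 3, 4, 5], [1, 0, -1]))  # → [-2, -2, -2]
```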

375 citations

Journal ArticleDOI
TL;DR: This work proposes a design for an optical convolutional layer based on an optimized diffractive optical element and demonstrates in simulation and with an optical prototype that the classification accuracies of the optical systems rival those of the analogous electronic implementations, while providing substantial savings on computational cost.
Abstract: Convolutional neural networks (CNNs) excel in a wide variety of computer vision applications, but their high performance also comes at a high computational cost. Despite efforts to increase efficiency both algorithmically and with specialized hardware, it remains difficult to deploy CNNs in embedded systems due to tight power budgets. Here we explore a complementary strategy that incorporates a layer of optical computing prior to electronic computing, improving performance on image classification tasks while adding minimal electronic computational cost or processing time. We propose a design for an optical convolutional layer based on an optimized diffractive optical element and test our design in two simulations: a learned optical correlator and an optoelectronic two-layer CNN. We demonstrate in simulation and with an optical prototype that the classification accuracies of our optical systems rival those of the analogous electronic implementations, while providing substantial savings on computational cost.
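At its core, the learned optical correlator described above picks the stored template with the largest inner product against the input, an operation a diffractive optical element can perform in free space before any electronics. A toy sketch under that reading (templates and names are illustrative):

```python
# Toy sketch (hedged): classify an input by which stored template
# yields the largest correlation (inner product) with it.
def correlate(a, b):
    return sum(x * y for x, y in zip(a, b))

def classify(image, templates):
    scores = [correlate(image, t) for t in templates]
    return scores.index(max(scores))

# Two 2x2 "templates" flattened to vectors: vertical vs horizontal bar.
templates = [[1, 0, 1, 0], [1, 1, 0, 0]]
print(classify([1, 0, 1, 0], templates))  # → 0: matches the vertical bar
```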

342 citations

Journal ArticleDOI
TL;DR: In this paper, a new class of optical phase change materials (O-PCMs) based on Ge-Sb-Se-Te (GSST) was proposed to break the coupling between refractive index and optical loss, allowing low-loss performance.
Abstract: Optical phase change materials (O-PCMs), a unique group of materials featuring exceptional optical property contrast upon a solid-state phase transition, have found widespread adoption in photonic applications such as switches, routers and reconfigurable meta-optics. Current O-PCMs, such as Ge–Sb–Te (GST), exhibit large contrast of both refractive index (Δn) and optical loss (Δk), simultaneously. The coupling of both optical properties fundamentally limits the performance of many applications. Here we introduce a new class of O-PCMs based on Ge–Sb–Se–Te (GSST) which breaks this traditional coupling. The optimized alloy, Ge2Sb2Se4Te1, combines broadband transparency (1–18.5 μm), large optical contrast (Δn = 2.0), and significantly improved glass forming ability, enabling an entirely new range of infrared and thermal photonic devices. We further demonstrate nonvolatile integrated optical switches with record low loss and large contrast ratio and an electrically-addressed spatial light modulator pixel, thereby validating its promise as a material for scalable nonvolatile photonics. Here, the authors introduce optical phase change materials based on Ge–Sb–Se–Te that break the coupling between refractive index and optical loss, enabling low-loss performance. They demonstrate low losses in nonvolatile photonic circuits and electrically addressed pixelated switching.

336 citations

Journal ArticleDOI
Wei Ma1, Feng Cheng1, Yihao Xu1, Qinlong Wen1, Yongmin Liu1 
TL;DR: This work proposes to represent metamaterials and model the inverse design problem in a probabilistically generative manner, enabling elegant, interpretable investigation of the complex structure–performance relationship and solving the one-to-many mapping issue that is intractable in a deterministic model.
Abstract: The research of metamaterials has achieved enormous success in the manipulation of light in a prescribed manner using delicately designed subwavelength structures, so-called meta-atoms. Even though modern numerical methods allow for the accurate calculation of the optical response of complex structures, the inverse design of metamaterials, which aims to retrieve the optimal structure according to given requirements, is still a challenging task owing to the nonintuitive and nonunique relationship between physical structures and optical responses. To better unveil this implicit relationship and thus facilitate metamaterial designs, it is proposed to represent metamaterials and model the inverse design problem in a probabilistically generative manner, enabling elegant investigation of the complex structure-performance relationship in an interpretable way, and solving the one-to-many mapping issue that is intractable in a deterministic model. Moreover, to alleviate the burden of numerical calculations when collecting data, a semisupervised learning strategy is developed that allows the model to utilize unlabeled data in addition to labeled data in an end-to-end training. On a data-driven basis, the proposed deep generative model can serve as a comprehensive and efficient tool that accelerates the design, characterization, and even new discovery in the research domain of metamaterials, and photonics in general.
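The one-to-many mapping issue can be illustrated without any metamaterial physics: when the forward map from structure to response is not injective, a deterministic inverse must discard valid designs, whereas a sampling-based (generative) inverse can recover all of them. A toy sketch, with a quadratic forward model standing in purely for illustration:

```python
import random

# Toy illustration of the one-to-many inverse problem: the "structures"
# s and -s produce the same "response" s * s, so a deterministic
# inverse returns only one design, while sampling recovers both.
def forward(s):
    return s * s

def generative_inverse(target, draws=1000):
    random.seed(0)  # fixed seed keeps this toy deterministic
    found = set()
    for _ in range(draws):
        s = random.choice([-3, -2, -1, 1, 2, 3])
        if forward(s) == target:
            found.add(s)
    return sorted(found)

print(generative_inverse(4))  # → [-2, 2]: two structures, one response
```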

333 citations


Cites background from "Deep learning with coherent nanophotonic circuits"

  • ...realize all-optical implementation of various machine learning and artificial intelligence techniques as demonstrated by recent exciting works [58, 59]....


Journal ArticleDOI
20 Aug 2018
TL;DR: This work uses Deep Neural Networks to classify and reconstruct a large database of handwritten digits from the intensity of the speckle patterns that result after the images have propagated through multimode fibers (MMF).
Abstract: Deep neural networks (DNNs) are used to classify and reconstruct the input images from the intensity of the speckle patterns that result after the inputs are propagated through multimode fiber (MMF). We were able to demonstrate this result for fibers up to 1 km long by training the DNNs with a database of 16,000 handwritten digits. Better recognition accuracy was obtained when the DNNs were trained to first reconstruct the input and then classify based on the recovered image. We observed remarkable robustness against environmental instabilities and tolerance to deviations of the input pattern from the patterns with which the DNN was originally trained.

310 citations

References
Proceedings Article
03 Dec 2012
TL;DR: As discussed by the authors, a large, deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
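The "dropout" regularization mentioned above zeroes each unit at random during training. A minimal sketch using the now-common "inverted" scaling, where kept activations are rescaled so their expectation matches inference time (the original work instead scaled outputs at test time):

```python
import random

# Sketch of dropout with inverted scaling: each activation is kept
# with probability p_keep and divided by p_keep, or zeroed otherwise.
def dropout(activations, p_keep=0.5, training=True):
    if not training:
        return activations[:]  # inference: pass activations through
    return [a / p_keep if random.random() < p_keep else 0.0
            for a in activations]

print(dropout([1.0, 2.0], p_keep=1.0))  # → [1.0, 2.0] (nothing dropped)
```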

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
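The backpropagation procedure described here can be reduced to its smallest case: one linear unit, squared error, and the chain rule propagating the loss gradient back to the parameters. A sketch under those assumptions (all names illustrative):

```python
# Minimal sketch of one forward/backward pass for a single linear
# neuron with squared-error loss; deep networks chain this rule
# layer by layer.
def train_step(w, b, x, target, lr=0.1):
    y = w * x + b                 # forward propagation
    grad_y = 2.0 * (y - target)   # d(loss)/dy for loss = (y - target)^2
    w -= lr * grad_y * x          # chain rule: dy/dw = x
    b -= lr * grad_y              # chain rule: dy/db = 1
    return w, b

w, b = 0.0, 0.0
for _ in range(200):
    w, b = train_step(w, b, x=1.0, target=3.0)
print(round(w + b, 3))  # the neuron's output converges toward 3.0
```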

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
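The deep Q-network builds on the temporal-difference update of tabular Q-learning; the network replaces the table below with a function of raw pixels. A tabular sketch of that update (state and action names are illustrative):

```python
# Sketch of the one-step temporal-difference update at the heart of
# Q-learning, which the deep Q-network approximates with a neural
# network over raw pixel inputs.
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    best_next = max(q[next_state].values())   # greedy bootstrap value
    td_target = reward + gamma * best_next    # one-step return estimate
    q[state][action] += alpha * (td_target - q[state][action])

q = {"s0": {"left": 0.0, "right": 0.0},
     "s1": {"left": 0.0, "right": 0.0}}
q_update(q, "s0", "right", reward=1.0, next_state="s1")
print(q["s0"]["right"])  # → 0.5: half of the 1.0 reward is absorbed
```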

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....


  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....


  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....


  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

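The four transformations in the excerpts above follow from the singular-value decomposition M = UΣV, with the unitary factors mapped to lossless interferometer meshes and the diagonal Σ to attenuation. A plain-Python sketch for the real 2×2 case (rotations stand in for general unitaries; all names illustrative), showing that applying the factors in sequence matches applying M directly:

```python
import math

# Sketch: a real 2x2 matrix M built as M = U @ S @ V, where U and V
# are rotations (stand-ins for interferometer meshes) and S is a
# diagonal of singular values (stand-in for attenuators).
def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(m, v):
    return [sum(m[i][j] * v[j] for j in range(2)) for i in range(2)]

U = rot(0.3)
S = [[2.0, 0.0], [0.0, 0.5]]
V = rot(1.1)
M = matmul(U, matmul(S, V))

x = [1.0, -1.0]
# Passing the signal through V, then S, then U (four OIU-like stages
# in the paper's scheme) equals applying M in one step.
staged = apply(U, apply(S, apply(V, x)))
direct = apply(M, x)
print(all(abs(a - b) < 1e-9 for a, b in zip(staged, direct)))  # → True
```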

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: This article describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
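The autoencoder objective in this abstract, encoding to a small central code and reconstructing the input, can be shown with a fixed toy linear code (no training; the encoder and decoder here are hand-picked, purely illustrative):

```python
# Toy autoencoder objective: encode 2-D points to a 1-D code, decode
# back, and measure squared reconstruction error. Points on the
# diagonal are coded losslessly; points off it are not.
def encode(x):          # 2-D input -> 1-D code (mean of coordinates)
    return 0.5 * (x[0] + x[1])

def decode(c):          # 1-D code -> 2-D reconstruction
    return [c, c]

def reconstruction_error(x):
    r = decode(encode(x))
    return sum((a - b) ** 2 for a, b in zip(x, r))

print(reconstruction_error([2.0, 2.0]))  # → 0.0 (on the diagonal)
print(reconstruction_error([3.0, 1.0]))  # → 2.0 (off the diagonal)
```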

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....
