Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and a power-efficiency improvement of three orders of magnitude over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: A hybrid on-chip full-transceiver consisting of a deterministically integrated detector coupled to a selected nanowire quantum dot through a filtering circuit made of a silicon nitride waveguide and a ring resonator filter is demonstrated.
Abstract: Integration of superconducting nanowire single-photon detectors and quantum sources with photonic waveguides is crucial for realizing advanced quantum integrated circuits. However, scalability is hindered by stringent requirements on high-performance detectors. Here we overcome the yield limitation by controlled coupling of photonic channels to pre-selected detectors based on measuring critical current, timing resolution, and detection efficiency. As a proof of concept of our approach, we demonstrate a hybrid on-chip full-transceiver consisting of a deterministically integrated detector coupled to a selected nanowire quantum dot through a filtering circuit made of a silicon nitride waveguide and a ring resonator filter, delivering 100 dB suppression of the excitation laser. In addition, we perform extensive testing of the detectors before and after integration in the photonic circuit and show that the high performance of the superconducting nanowire detectors, including timing jitter down to 23 ± 3 ps, is maintained. Our approach is fully compatible with wafer-level automated testing in a cleanroom environment.

23 citations

Journal ArticleDOI
TL;DR: In this paper, a quantum resonant tunneling (QRT) nanostructure monolithically integrated into a sub-λ metal-cavity nanolight-emitting diode (nanoLED) was proposed for spike-based operation of interest for neuromorphic optical computing.
Abstract: Event-activated biological-inspired subwavelength (sub-λ) photonic neural networks are of key importance for future energy-efficient and high-bandwidth artificial intelligence systems. However, a miniaturized light-emitting nanosource for spike-based operation of interest for neuromorphic optical computing is still lacking. In this work, we propose and theoretically analyze a novel nanoscale photonic neuron circuit. It is formed by a quantum resonant tunneling (QRT) nanostructure monolithically integrated into a sub-λ metal-cavity nanolight-emitting diode (nanoLED). The resulting optical nanosource displays a negative differential conductance which controls the all-or-nothing optical spiking response of the nanoLED. Here we demonstrate efficient activation of the spiking response via high-speed nonlinear electrical modulation of the nanoLED. A model that combines the dynamical equations of the circuit, which account for the nonlinear voltage-controlled current characteristic, with rate equations that take into account the Purcell enhancement of the spontaneous emission is used to provide a theoretical framework for investigating the optical spiking dynamics of the neuromorphic nanoLED. We show that inhibitory- and excitatory-like optical spikes at multi-gigahertz speeds can be achieved upon receiving exceptionally low (sub-10 mV) synaptic-like electrical activation signals, lower than biological voltages of 100 mV, and with remarkably low energy consumption, in the range of 10–100 fJ per emitted spike. Importantly, the energy per spike is roughly constant and almost independent of the incoming modulating frequency signal, which is markedly different from conventional current-modulation schemes. This method of spike generation in neuromorphic nanoLED devices paves the way for sub-λ incoherent neural elements for fast and efficient asynchronous neural computation in photonic spiking neural networks.
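The "all-or-nothing" spiking behaviour described above can be illustrated with a generic leaky integrate-and-fire neuron. This is a minimal sketch of the excitability concept only, not the paper's QRT/nanoLED rate-equation model; all parameters are arbitrary illustration choices.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a generic illustration of
# all-or-nothing spiking, NOT the paper's QRT/nanoLED rate-equation model.
# Threshold and leak values are arbitrary.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Integrate inputs with leak; emit a spike (1) and reset when the
    membrane potential crosses the threshold, else emit 0."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x
        if v >= threshold:
            spikes.append(1)
            v = 0.0          # reset after the all-or-nothing spike
        else:
            spikes.append(0)
    return spikes

# A sub-threshold drive never spikes; a supra-threshold drive spikes repeatedly.
weak = simulate_lif([0.05] * 20)    # small inputs: leak dominates, no spikes
strong = simulate_lif([0.6] * 20)   # large inputs: periodic spiking
print(sum(weak), sum(strong))       # 0 10
```

The qualitative point matches the abstract: below threshold the response stays quiescent, above it the device fires a full spike regardless of how far the input exceeds threshold.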

23 citations

Journal ArticleDOI
TL;DR: In this paper, a supervised machine learning-based classification of quantum emitters as "single" or "not-single" based on their sparse autocorrelation data is implemented, yielding a classification accuracy of over 90% within an integration time of less than a second and realizing roughly a hundredfold speedup over the conventional Levenberg-Marquardt approach.
Abstract: Deterministic nanoassembly may enable unique integrated on-chip quantum photonic devices. Such integration requires a careful large-scale selection of nanoscale building blocks such as solid-state single-photon emitters by the means of optical characterization. Second-order autocorrelation is a cornerstone measurement that is particularly time-consuming to realize on a large scale. We have implemented supervised machine learning-based classification of quantum emitters as "single" or "not-single" based on their sparse autocorrelation data. Our method yields a classification accuracy of over 90% within an integration time of less than a second, realizing roughly a hundredfold speedup compared to the conventional, Levenberg-Marquardt approach. We anticipate that machine learning-based classification will provide a unique route to enable rapid and scalable assembly of quantum nanophotonic devices and can be directly extended to other quantum optical measurements.
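As a much-simplified stand-in for the paper's supervised classifier, the standard criterion g2(0) < 0.5 distinguishes a single-photon emitter from a multi-photon one. The sketch below estimates g2(0) from a coincidence histogram and applies that threshold; the bin layout and counts are invented, and the paper's actual method is a trained ML model operating on sparse data.

```python
# Toy stand-in for the paper's classifier: label an emitter "single" when its
# estimated second-order autocorrelation at zero delay satisfies g2(0) < 0.5,
# the standard single-photon criterion. Counts and bin indices are invented.

def g2_zero(coincidences, zero_bin, side_bins):
    """Estimate g2(0) as the zero-delay coincidence count normalised by the
    mean count in far-from-zero reference bins."""
    baseline = sum(coincidences[i] for i in side_bins) / len(side_bins)
    return coincidences[zero_bin] / baseline

def classify(coincidences, zero_bin=2, side_bins=(0, 4)):
    return "single" if g2_zero(coincidences, zero_bin, side_bins) < 0.5 else "not-single"

print(classify([100, 98, 12, 101, 99]))   # deep antibunching dip -> "single"
print(classify([100, 98, 80, 101, 99]))   # shallow dip -> "not-single"
```

The speedup reported in the paper comes from making this decision reliably from very few accumulated coincidences, where a full Levenberg-Marquardt curve fit would need far longer integration.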

23 citations

Posted Content
TL;DR: This paper outlines how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output, and proposes using knowledge distillation combined with noise injection during training to achieve more noise robust networks.
Abstract: The success of deep learning has brought forth a wave of interest in computer hardware design to better meet the high demands of neural network inference. In particular, analog computing hardware has been heavily motivated specifically for accelerating neural networks, based on either electronic, optical or photonic devices, which may well achieve lower power consumption than conventional digital electronics. However, these proposed analog accelerators suffer from the intrinsic noise generated by their physical components, which makes it challenging to achieve high accuracy on deep neural networks. Hence, for successful deployment on analog accelerators, it is essential to be able to train deep neural networks to be robust to random continuous noise in the network weights, which is a somewhat new challenge in machine learning. In this paper, we advance the understanding of noisy neural networks. We outline how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output. To combat this, we propose using knowledge distillation combined with noise injection during training to achieve more noise robust networks, which is demonstrated experimentally across different networks and datasets, including ImageNet. Our method achieves models with as much as two times greater noise tolerance compared with the previous best attempts, which is a significant step towards making analog hardware practical for deep learning.
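The noise-injection half of the recipe described above can be sketched in a few lines: draw fresh Gaussian perturbations of the weights on every forward pass during training, so the model learns parameters that work on average under perturbation. This is a minimal one-weight illustration (fitting y = 2x); the knowledge-distillation component and all hyperparameters are omitted or invented.

```python
import random

# Sketch of Gaussian weight-noise injection during training (the knowledge-
# distillation half of the paper's recipe is omitted). We fit y = 2x with a
# single weight; noise is drawn fresh for every forward pass, so the learned
# weight must be robust to random continuous perturbation.
random.seed(0)

def train(noise_std=0.2, lr=0.05, steps=2000):
    w = 0.0
    for _ in range(steps):
        x = random.uniform(-1, 1)
        w_noisy = w + random.gauss(0, noise_std)   # inject weight noise
        err = w_noisy * x - 2.0 * x                # target function y = 2x
        w -= lr * err * x                          # SGD step on the clean weight
    return w

w = train()
print(round(w, 1))   # converges near the true weight 2.0
```

Because the injected noise has zero mean, the expected gradient still points at the noiseless optimum; what changes is that accuracy is retained when the deployed (analog) weights are themselves perturbed.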

23 citations

Journal ArticleDOI
01 Dec 2021-PhotoniX
TL;DR: In this article, a systematic view of recent advancements in nanophotonic components designed by intelligence algorithms is presented, manifesting a development trend from performance optimization towards inverse creation of novel designs.
Abstract: Applying intelligence algorithms to conceive nanoscale meta-devices has become a flourishing and extremely active scientific topic over the past few years. Inverse design of functional nanostructures is at the heart of this topic, in which artificial intelligence (AI) furnishes various optimization toolboxes to speed up prototyping of photonic layouts with enhanced performance. In this review, we offer a systematic view of recent advancements in nanophotonic components designed by intelligence algorithms, manifesting a development trend from performance optimization towards inverse creation of novel designs. To illustrate the interplay between the two fields, AI and photonics, we take meta-atom spectral manipulation as a case study to introduce algorithm operating principles, and subsequently review their manifold usages across a set of popular meta-elements. Proceeding from individually optimized components to practical systems, we discuss algorithm-assisted nanophotonic designs and examine their mutual benefits. We further comment on a set of open questions, including the reasonable application of advanced algorithms, the cost of acquiring data, and algorithm benchmarking. Overall, we envision that mounting photonic-targeted methodologies will substantially push forward functional artificial meta-devices, to the profit of both fields.

23 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network achieving state-of-the-art ImageNet classification performance is presented; it consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
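The "dropout" regularization mentioned in the abstract can be sketched in a few lines. This is the common "inverted" variant (rescaling survivors by 1/(1-p) during training so no scaling is needed at test time), which is a standard implementation choice rather than necessarily the exact formulation used in the paper.

```python
import random

# Minimal sketch of dropout: during training each activation is zeroed with
# probability p; the "inverted" variant rescales survivors by 1/(1-p) so the
# expected activation is unchanged and inference needs no extra scaling.
random.seed(42)

def dropout(activations, p=0.5, training=True):
    if not training:
        return list(activations)           # identity at inference time
    keep = 1.0 - p
    return [a / keep if random.random() < keep else 0.0 for a in activations]

acts = [0.5, 1.0, 1.5, 2.0]
print(dropout(acts))                  # some units zeroed, survivors scaled x2
print(dropout(acts, training=False))  # unchanged at test time
```

By randomly disabling units, dropout prevents co-adaptation between neurons in the large fully-connected layers, which is where the paper applies it.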

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
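The temporal-difference learning the abstract alludes to can be illustrated with its simplest tabular form. The paper's deep Q-network replaces the lookup table below with a convolutional network over raw pixels; the corridor environment and all hyperparameters here are invented for illustration.

```python
import random

# Tabular Q-learning sketch of the temporal-difference update. Environment:
# a 1-D corridor of 5 cells, reward 1.0 for reaching the right end,
# actions 0 = left, 1 = right. Epsilon-greedy exploration.
random.seed(1)

N, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):                       # training episodes
    s = 0
    while s != GOAL:
        if random.random() < eps:
            a = random.randint(0, 1)                      # explore
        else:
            a = max((0, 1), key=lambda a: Q[s][a])        # exploit
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # TD update: move Q(s,a) toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The greedy policy should now move right in every non-terminal state.
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)])
```

The same update rule drives DQN training, except that the "table write" becomes a gradient step on the network parameters and transitions are replayed from a stored buffer.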

23,074 citations


"Deep learning with coherent nanopho..." refers to background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]
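The excerpts above implement each weight matrix M as a sequence of stages: a unitary, a diagonal matrix of singular values, and another unitary (the singular-value decomposition M = UΣV†, with the V stage applied first). A minimal real-valued 2×2 sketch, with arbitrary angles and singular values, checks that applying the stages one by one matches applying M directly:

```python
import math

# Real-valued 2x2 illustration of implementing a matrix M as staged
# rotation -> scaling -> rotation, mirroring the SVD-based on-chip scheme.
# Angles and singular values below are arbitrary illustration choices.

def rot(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def matvec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

U, V = rot(0.7), rot(-0.3)          # unitary (here: rotation) stages
Sigma = [[2.0, 0.0], [0.0, 0.5]]    # diagonal stage of singular values
M = matmul(matmul(U, Sigma), V)     # M = U Sigma V (V plays the role of V†)

x = [1.0, -2.0]
staged = matvec(U, matvec(Sigma, matvec(V, x)))   # stage-by-stage, as on chip
direct = matvec(M, x)
print(all(abs(s - d) < 1e-12 for s, d in zip(staged, direct)))   # True
```

In the optical setting the unitaries are realized by meshes of interferometers and the diagonal stage by attenuation/amplification, which is why each trained matrix maps onto the four OIU transformations listed above.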

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers to methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

    [...]
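The forward-propagation / back-propagation procedure described in the excerpt can be sketched for a tiny 2-2-1 network with tanh hidden units. The example, input, target and learning rate are illustrative choices, not taken from the paper; the point is only the chain-rule flow of gradients from output to hidden layer.

```python
import math, random

# Minimal forward/back-propagation sketch for a 2-2-1 network with tanh
# hidden units and a linear output. All values are illustrative.
random.seed(3)

x, t = [1.0, 0.0], 1.0                      # one training example (input, target)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

def forward():
    h = [math.tanh(W1[j][0]*x[0] + W1[j][1]*x[1] + b1[j]) for j in range(2)]
    return h, W2[0]*h[0] + W2[1]*h[1] + b2

def loss():
    return (forward()[1] - t) ** 2

def backprop_step(lr=0.1):
    global b2
    h, y = forward()
    err = y - t                              # output error signal
    for j in range(2):
        dh = err * W2[j] * (1 - h[j] ** 2)   # chain rule through tanh
        W2[j] -= lr * err * h[j]             # output-layer update
        W1[j][0] -= lr * dh * x[0]           # hidden-layer updates
        W1[j][1] -= lr * dh * x[1]
        b1[j] -= lr * dh
    b2 -= lr * err

before = loss()
for _ in range(100):
    backprop_step()
after = loss()
print(after < before)   # gradient steps reduced the error on this example
```

In the optical implementation discussed in the main paper, this training is performed offline in the same way; only the trained matrices are then mapped onto the photonic hardware for inference.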