Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude, and a power-efficiency enhancement of three orders of magnitude, over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: This review proposes the concept of an intelligent photonic system (IPS), illustrating it as a developing architecture with three different versions, and discusses the challenges towards an IPS and provides some prospects for the future development.
Abstract: The emerging intelligence technologies represented by deep learning have broadened their applications to various fields. Beyond conventional electronics-based processing systems, the convergence of photonics and artificial intelligence (AI) technology enhances the performance and learning ability of AI. In this review, we propose the concept of an intelligent photonic system (IPS), illustrating it as a developing architecture with three different versions. For each version of IPS, we review several representative studies. Moreover, we discuss the challenges towards an IPS and provide some prospects for future development.

6 citations

Proceedings ArticleDOI
01 Jul 2019
TL;DR: Iris flower classification is demonstrated for the first time by implementing a trained 3-layer neural network with an SOA-based InP cross-connect chip, with accuracy 9.2% lower than that obtained with a computer.
Abstract: We demonstrate for the first time Iris flower classification by implementing a trained 3-layer neural network with an SOA-based InP cross-connect chip. Classification accuracy of 85.8% is achieved, 9.2% lower than that obtained with a computer.

6 citations

Journal ArticleDOI
TL;DR: In this paper, out-of-equilibrium driving of a strongly coupled pair of photonic integrated Kerr microresonators, which at the single-particle level generate well-understood dissipative Kerr solitons, is shown to exhibit emergent nonlinear phenomena.
Abstract: Emergent phenomena are ubiquitous in nature and refer to spatial, temporal, or spatiotemporal pattern formation in complex nonlinear systems driven out of equilibrium that is not contained in the microscopic descriptions at the single-particle level. Examples range from novel phases of matter in both quantum and classical many-body systems, to galaxy formation or neural dynamics. Two characteristic phenomena are length scales that exceed the characteristic interaction length and spontaneous symmetry breaking. Recent advances in integrated photonics indicate that the study of emergent phenomena is possible in complex coupled nonlinear optical systems. Here we demonstrate that out-of-equilibrium driving of a strongly coupled ("dimer") pair of photonic integrated Kerr microresonators, which at the "single-particle" (i.e. individual resonator) level generate well understood dissipative Kerr solitons, exhibit emergent nonlinear phenomena. By exploring the dimer phase diagram, we find unexpected and therefore unpredicted regimes of soliton hopping, spontaneous symmetry breaking, and periodically emerging (in)commensurate dispersive waves. These phenomena are not included in the single-particle description and related to the parametric frequency conversion between hybridized supermodes. Moreover, by controlling supermode hybridization electrically, we achieve wide tunability of spectral interference patterns between dimer solitons and dispersive waves. Our findings provide the first critical step towards the study of emergent nonlinear phenomena in soliton networks and multimode lattices.

6 citations

Journal ArticleDOI
14 Nov 2018
TL;DR: The concept of optical dynamic range compression is introduced, and its utility in non-uniform quantization, in enhancing the signal-to-noise ratio, in reshaping a signal's statistical distribution, and in extending the detection range in light detection and ranging systems is discussed.
Abstract: We introduce the concept of optical dynamic range compression and discuss its utility in non-uniform quantization, in enhancing the signal-to-noise ratio, in reshaping a signal's statistical distribution, and in extending the detection range in light detection and ranging (lidar) systems. The technology represents a photonic hardware accelerator that reduces the burden on the dynamic range of the photodetection and data-acquisition stages, including the required number of bits of the analog-to-digital converter. The energy of photons that are intentionally blocked can be harvested using a two-photon photovoltaic effect. Implementations using other approaches are also discussed.
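Dynamic range compression has a familiar electronic-domain analogue in companding, e.g. the μ-law curve used in telephony. The sketch below (plain Python, an illustrative analogy rather than the paper's optical implementation) shows how a logarithmic compressor assigns more quantization levels to small amplitudes, which is the essence of the non-uniform quantization discussed above:

```python
import math

def mu_law_compress(x, mu=255.0):
    """Compress a signal value in [-1, 1] with the mu-law curve,
    boosting small amplitudes so a uniform quantizer spends more
    of its levels on them."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def quantize(y, bits=8):
    """Uniformly quantize a compressed value in [-1, 1] to 2**bits levels."""
    levels = 2 ** (bits - 1)
    return round(y * levels) / levels
```

A weak input of 0.05 is mapped to roughly 0.47 before quantization, so it survives an 8-bit quantizer that would otherwise represent it with only a handful of levels.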

6 citations

Proceedings ArticleDOI
18 Apr 2021
TL;DR: This work demonstrates the ability of optical soliton crystal micro-combs to exceed other approaches in performance for the most demanding practical optical communications applications.
Abstract: We report ultrahigh-bandwidth applications of Kerr microcombs to optical neural networks and to optical data transmission, at data rates from 44 terabits/s (Tb/s) to approaching 100 Tb/s. Convolutional neural networks (CNNs) are a powerful category of artificial neural networks that can extract the hierarchical features of raw data to greatly reduce the network complexity and enhance the accuracy of machine learning tasks such as computer vision, speech recognition, playing board games and medical diagnosis [1-7]. Optical neural networks can dramatically accelerate the computing speed to overcome the inherent bandwidth bottleneck of electronics. We use a new and powerful class of micro-comb called soliton crystals that exhibit robust operation and stable generation as well as a high intrinsic efficiency with an extremely low spacing of 48.9 GHz. We demonstrate a universal optical vector convolutional accelerator operating at 11 tera-operations per second (TOPS) on 250,000-pixel images for 10 kernels simultaneously, enough for facial image recognition. We use the same hardware to sequentially form a deep optical CNN with ten output neurons, achieving successful recognition of the full 10 digits with 900-pixel handwritten-digit images. We also report record-high data transmission over standard optical fiber from a single optical source, at 44.2 Tb/s over the C-band, with a spectral efficiency of 10.4 bits/s/Hz, using the coherent data modulation format 64-QAM. We achieve error-free transmission across 75 km of standard optical fiber in the lab and over a field trial on a metropolitan optical fiber network. Our work demonstrates the ability of optical soliton-crystal micro-combs to exceed other approaches in performance for the most demanding practical optical communications applications.
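The core operation the accelerator parallelizes is an ordinary sliding-window convolution. As a point of reference, a minimal (and deliberately unoptimized) software version of valid-mode 2D convolution, unrelated to the photonic hardware itself:

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image
    and sum elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(ow)]
            for i in range(oh)]
```

A CNN applies many such kernels to the same image; the accelerator's advantage is evaluating these multiply-accumulate sums for 10 kernels in parallel at optical line rates.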

6 citations


Cites background from "Deep learning with coherent nanophotonic circuits"

  • ...While other approaches achieve limited integration of the weight and sum circuits [8, 12] (probably the most challenging issue), advanced integrated light sources have not been demonstrated....


  • ...We note that handwritten digit recognition, although widely employed as a benchmark test in digital hardware, is still (for full 10 digit (0 - 9) recognition) beyond the capability of existing analog reconfigurable ONNs....


  • ...Optical neural networks (ONNs) [8-18] are promising candidates for next-generation neuromorphic computation, since they have the potential to overcome the bandwidth bottleneck of their electrical counterparts [6, 19-22] and achieve ultra-high computing speeds....


  • ...As such, although in some approaches [8, 11, 12], the latency is low due to the short physical path lengths, the computing speed remains very low due to the absence of high-speed data interfaces (i....


  • ...Significant progress has been made in highly parallel, high-speed and trainable ONNs [8-18, 23-27], including approaches that have...


References
Proceedings Article
03 Dec 2012
TL;DR: A large deep convolutional neural network, consisting of five convolutional layers (some followed by max-pooling layers) and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on ImageNet classification, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
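The quoted figure of 60 million parameters can be sanity-checked by tallying the layer sizes commonly reported for this network (the per-layer dimensions below are an assumption, not stated in the abstract; the grouped convolutions reflect the original two-GPU split):

```python
def conv_params(k, c_in, c_out, groups=1):
    """Weights plus biases of a k x k convolution layer."""
    return k * k * (c_in // groups) * c_out + c_out

def fc_params(n_in, n_out):
    """Weights plus biases of a fully-connected layer."""
    return n_in * n_out + n_out

total = (conv_params(11, 3, 96)
         + conv_params(5, 96, 256, groups=2)
         + conv_params(3, 256, 384)
         + conv_params(3, 384, 384, groups=2)
         + conv_params(3, 384, 256, groups=2)
         + fc_params(256 * 6 * 6, 4096)   # conv output flattened to 6x6x256
         + fc_params(4096, 4096)
         + fc_params(4096, 1000))
# total comes out near 61 million, consistent with the quoted ~60 million,
# with the three fully-connected layers holding the large majority of it
```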

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
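The deep Q-network builds on the classical temporal-difference Q-learning update, replacing the table with a deep network over raw pixels. A minimal sketch of that underlying tabular rule (a toy illustration, not the paper's deep agent):

```python
import random
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One temporal-difference Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def epsilon_greedy(Q, s, actions, eps=0.1):
    """Pick a random action with probability eps, else the greedy one."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])
```

The deep Q-network's contribution is making this update stable when Q is a convolutional network trained end-to-end from high-dimensional sensory input.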

23,074 citations


"Deep learning with coherent nanophotonic circuits" refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....


  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....


  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....


  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....


Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
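The dimensionality-reduction idea can be sketched with a toy linear autoencoder: a 1-unit bottleneck trained by plain gradient descent recovers the dominant direction of 2-D points lying near a line. This is a pure-Python illustration of the encode/decode/reconstruct loop, orders of magnitude smaller than the deep networks discussed (which also need the initialization scheme the paper describes):

```python
import random

def train_autoencoder(data, dim, epochs=500, lr=0.05, seed=0):
    """Linear autoencoder with a 1-unit bottleneck: encode h = enc . x,
    decode x_hat = dec * h; both weight vectors are fit by gradient
    descent on the squared reconstruction error."""
    rnd = random.Random(seed)
    enc = [rnd.uniform(-0.1, 0.1) for _ in range(dim)]
    dec = [rnd.uniform(-0.1, 0.1) for _ in range(dim)]
    for _ in range(epochs):
        for x in data:
            h = sum(e * xi for e, xi in zip(enc, x))
            err = [d * h - xi for d, xi in zip(dec, x)]       # x_hat - x
            common = 2 * sum(ej * dj for ej, dj in zip(err, dec))  # dL/dh
            for i in range(dim):
                dec[i] -= lr * 2 * err[i] * h
                enc[i] -= lr * common * x[i]
    return enc, dec

def reconstruct(enc, dec, x):
    h = sum(e * xi for e, xi in zip(enc, x))
    return [d * h for d in dec]
```

For points on the diagonal of the plane, the trained bottleneck reconstructs them almost exactly from a single scalar code, which is the autoencoder's dimensionality-reduction claim in miniature.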

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanophotonic circuits" refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

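The forward-propagation / back-propagation recipe in that quote can be sketched end to end. A deliberately tiny one-hidden-layer sigmoid network trained on the OR function (pure Python, an illustration of reference [16]'s training loop, not the paper's optical implementation):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, hidden=3, epochs=3000, lr=1.0, seed=0):
    """Train a 2-input, one-hidden-layer sigmoid network:
    forward-propagate each example, then backpropagate the
    squared-error gradient through both weight matrices."""
    rnd = random.Random(seed)
    W1 = [[rnd.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]  # +1 bias input
    W2 = [rnd.uniform(-1, 1) for _ in range(hidden + 1)]                  # +1 bias unit
    for _ in range(epochs):
        for x, t in data:
            # forward pass
            xb = list(x) + [1.0]
            h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W1]
            hb = h + [1.0]
            y = sigmoid(sum(w * v for w, v in zip(W2, hb)))
            # backward pass: chain rule through the sigmoid units
            dy = (y - t) * y * (1 - y)
            for j in range(hidden):
                dh = dy * W2[j] * h[j] * (1 - h[j])
                for i in range(3):
                    W1[j][i] -= lr * dh * xb[i]
            for j in range(hidden + 1):
                W2[j] -= lr * dy * hb[j]
    def predict(x):
        xb = list(x) + [1.0]
        hb = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W1] + [1.0]
        return sigmoid(sum(w * v for w, v in zip(W2, hb)))
    return predict
```

The photonic schemes above aim to execute exactly the forward pass of such a trained network in the optical domain; training (the backward pass) is typically still done offline in electronics.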