Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: In this article, the authors review how 3D-2D transformations are tackled using iterative techniques and neural networks in a variety of fields such as optical tomography, additive manufacturing, and 3D optical memories.
Abstract: The prospect of massive parallelism of optics enabling fast and low energy cost operations is attracting interest for novel photonic circuits where 3-dimensional (3D) implementations have a high potential for scalability. Since the technology for data input–output channels is 2-dimensional (2D), there is an unavoidable need to take 2D-nD transformations into account. Similarly, the 3D-2D and its reverse transformations are also tackled in a variety of fields such as optical tomography, additive manufacturing, and 3D optical memories. Here, we review how these 3D-2D transformations are tackled using iterative techniques and neural networks. This high-level comparison across different, yet related fields could yield a useful perspective for 3D optical design.

3 citations

Journal ArticleDOI
TL;DR: An optical reservoir computing system in free space is proposed, using second-harmonic generation for nonlinear kernel functions and a scattering medium to enhance reservoir nodes interconnection and has potential for parallel data processing tasks such as video prediction, speech translation, and so on.
Abstract: We propose and experimentally demonstrate an optical reservoir computing system in free space, using second-harmonic generation for nonlinear kernel functions and a scattering medium to enhance reservoir nodes interconnection. We test it for one-step and multi-step prediction of Mackey-Glass time series with different input-mapping methods on a spatial light modulator. For one-step prediction, we achieve 1.8 × 10-3 normalized mean squared error (NMSE). For the multi-step prediction, we explore two different mapping methods: linear-combination and concatenation, achieving 16-step prediction with NMSE as low as 3.5 × 10-4. Robust and superior for multi-step prediction, our approach and design have potential for parallel data processing tasks such as video prediction, speech translation, and so on.
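The NMSE figures quoted above follow the usual convention for Mackey-Glass benchmarks. A minimal sketch, assuming the common normalization by the target variance (the paper may use a slightly different denominator):

```python
import numpy as np

# Sketch of normalized mean squared error (NMSE) as commonly reported
# for time-series prediction benchmarks. Normalizing by the variance of
# the target is one common convention (assumption).
def nmse(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

y = np.array([1.0, 2.0, 3.0, 4.0])
perfect = nmse(y, y)                            # 0.0: exact prediction
baseline = nmse(y, np.full_like(y, y.mean()))   # 1.0: predicting the mean
```

Under this convention an NMSE of 3.5 × 10-4 means the prediction error is about 0.035% of the signal's variance, far below the trivial predict-the-mean baseline of 1.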

3 citations

Journal ArticleDOI
28 Dec 2022-ACS Nano
TL;DR: In this paper, an artificial-intelligence-enhanced metamaterial waveguide sensing platform (AIMWSP) was proposed for aqueous mixture analysis in the MIR.
Abstract: As miniaturized solutions, mid-infrared (MIR) waveguide sensors are promising for label-free compositional detection of mixtures leveraging plentiful absorption fingerprints. However, the quantitative analysis of liquid mixtures is still challenging using MIR waveguide sensors, as the absorption spectrum overlaps for multiple organic components accompanied by strong water absorption background. Here, we present an artificial-intelligence-enhanced metamaterial waveguide sensing platform (AIMWSP) for aqueous mixture analysis in the MIR. With the sensitivity-improved metamaterial waveguide and assistance of machine learning, the MIR absorption spectra of a ternary mixture in water can be successfully distinguished and decomposed to single-component spectra for predicting concentration. A classification accuracy of 98.88% for 64 mixing ratios and 92.86% for four concentrations below the limit of detection (972 ppm, based on 3σ) with steps of 300 ppm are realized. Besides, the mixture concentration prediction with root-mean-squared error varying from 0.107 vol % to 1.436 vol % is also achieved. Our work indicates the potential of further extending this sensing platform to MIR spectrometer-on-chip aiming for the data analytics of multiple organic components in aqueous environments.
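The "972 ppm, based on 3σ" figure refers to the standard 3-sigma limit-of-detection convention. A sketch with purely illustrative numbers (the blank-noise and sensitivity values below are assumptions chosen only so the result lands near the reported order of magnitude, not the paper's data):

```python
# Sketch of the 3-sigma limit of detection (LOD):
#   LOD = 3 * (standard deviation of the blank signal) / sensitivity,
# where sensitivity is the slope of the calibration curve.
blank_noise_std = 0.009   # std of repeated blank measurements (a.u.), assumed
sensitivity = 2.78e-5     # signal change per ppm of analyte (a.u./ppm), assumed

lod_ppm = 3 * blank_noise_std / sensitivity
# With these illustrative numbers, LOD is roughly 970 ppm.
```

The interesting result in the abstract is that machine learning still classifies concentrations *below* this classical LOD, at 300 ppm steps.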

3 citations

Journal ArticleDOI
25 Mar 2022-PhotoniX
TL;DR: In this paper, the impact ionization coefficient ratio is identified as one crucial parameter for avalanche photodiode optimization, as it significantly affects the excess noise and the gain-bandwidth product (GBP).
Abstract: High-speed optical interconnects of data centers and high performance computers (HPC) have become the rapid development direction in the field of optical communication owing to the explosive growth of market demand. Currently, optical interconnect systems are moving towards higher capacity and integration. High-sensitivity receivers with avalanche photodiodes (APDs) are receiving more attention due to their capability to enhance gain bandwidth. The impact ionization coefficient ratio is one crucial parameter for avalanche photodiode optimization, as it significantly affects the excess noise and the gain-bandwidth product (GBP). The development of silicon-germanium (Si-Ge) APDs is promising thanks to the low impact ionization coefficient ratio of silicon, the simple structure, and the CMOS-compatible process. Separate absorption charge multiplication (SACM) structures are typically adopted in Si-Ge APDs to achieve high bandwidth and low noise. This paper reviews design and optimization in high-speed Si-Ge APDs, including advanced APD structures, APD modeling and APD receivers.

3 citations

Journal ArticleDOI
TL;DR: In this paper, a new architecture capable of power monitoring any waveguide segment in a programmable feedforward photonic circuit is proposed, targeting circuits that solve complex tasks using photonics-accelerated matrix multiplication on a chip and that may require calibration and training mechanisms.
Abstract: Programmable feedforward photonic meshes of Mach–Zehnder interferometers are computational optical circuits that have many classical and quantum computing applications including machine learning, sensing, and telecommunications. Such devices can form the basis of energy-efficient photonic neural networks, which solve complex tasks using photonics-accelerated matrix multiplication on a chip, and which may require calibration and training mechanisms. Such training can benefit from internal optical power monitoring and physical gradient measurement for optimizing controllable phase shifts to maximize some task merit function. Here, we design and experimentally verify a new architecture capable of power monitoring any waveguide segment in a feedforward photonic circuit. Our scheme is experimentally realized by modulating phase shifters in a 6 × 6 triangular mesh silicon photonic chip, which can non-invasively (i.e., without any internal “power taps”) resolve optical powers in a 3 × 3 triangular mesh based on response measurements in only two output detectors. We measure roughly 3% average error over 1000 trials in the presence of systematic manufacturing and environmental drift errors and verify scalability of our procedure to more modes via simulation.

3 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A large deep convolutional neural network achieved state-of-the-art performance on ImageNet, as discussed by the authors; it consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
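The "dropout" regularizer mentioned above can be sketched in a few lines. This is the "inverted dropout" variant common today, where survivors are rescaled at training time (an assumption; the original paper instead rescaled weights at test time):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of inverted dropout: each activation is zeroed with probability
# p during training, and the survivors are scaled by 1/(1-p) so the
# expected activation is unchanged. At test time it is a no-op.
def dropout(x, p=0.5, training=True):
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p   # keep each unit with prob 1-p
    return x * mask / (1.0 - p)

x = np.ones((4, 8))
out_train = dropout(x, p=0.5)            # entries are 0.0 or 2.0
out_eval = dropout(x, p=0.5, training=False)  # identical to x
```

Randomly silencing units prevents co-adaptation of features, which is why it was effective against overfitting in the large fully-connected layers.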

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
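The temporal-difference learning that the abstract compares to dopaminergic signaling is the core of Q-learning, which the deep Q-network approximates with a neural network. A tabular toy sketch (states, actions, and values here are illustrative, not from the paper):

```python
import numpy as np

# Sketch of the temporal-difference (TD) update behind Q-learning on a
# toy 2-state, 2-action problem. A deep Q-network replaces this table
# with a neural network over high-dimensional inputs (e.g. pixels).
Q = np.zeros((2, 2))          # Q[state, action] value estimates
alpha, gamma = 0.5, 0.9       # learning rate, discount factor

def td_update(Q, s, a, r, s_next):
    # TD target: immediate reward plus discounted best future value
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

# One transition: from state 0, take action 1, get reward 1.0, land in state 1.
td_update(Q, 0, 1, 1.0, 1)
# Q[0, 1] moves halfway (alpha = 0.5) toward the target of 1.0, giving 0.5.
```

The "phasic signals" analogy in the abstract refers to the TD error, the `target - Q[s, a]` term above.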

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]
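The four-pass factorization in the excerpts above rests on the singular value decomposition: each weight matrix M splits into two unitaries (implementable as lossless interferometer meshes) and a diagonal (implementable as per-mode attenuation/gain). A minimal numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the SVD factorization behind the OIU passes: M = U @ S @ Vh,
# with U and Vh unitary and S diagonal. Applying Vh first, then U @ S,
# reproduces the action of M, mirroring transformations (2) then (1).
M = rng.standard_normal((4, 4))       # an arbitrary real weight matrix
U, s, Vh = np.linalg.svd(M)

x = rng.standard_normal(4)            # an input vector of mode amplitudes
y = (U @ np.diag(s)) @ (Vh @ x)       # two sequential physical passes
```

Because U and Vh are unitary, they conserve optical power; only the diagonal Σ stage needs loss or gain, which is what makes the decomposition natural for photonic hardware.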

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

    [...]
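The training loop the excerpt describes, forward propagation through the weight matrices followed by backpropagation to optimize them, can be sketched on a toy one-hidden-layer network (sizes, data, and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of forward propagation + backpropagation on a tiny network:
# 2 inputs -> 3 tanh hidden units -> 1 linear output, trained by
# gradient descent on mean squared error over a toy regression target.
W1 = rng.standard_normal((3, 2)) * 0.5
W2 = rng.standard_normal((1, 3)) * 0.5
x = rng.standard_normal((2, 16))            # 16 training samples
y = np.sin(x.sum(axis=0, keepdims=True))    # toy target function

def mse():
    return np.mean((W2 @ np.tanh(W1 @ x) - y) ** 2)

mse0 = mse()
lr = 0.05
for _ in range(500):
    h = np.tanh(W1 @ x)                      # forward propagation
    err = (W2 @ h) - y                       # output error
    gW2 = err @ h.T / x.shape[1]             # backpropagated gradients
    gh = W2.T @ err
    gW1 = (gh * (1 - h ** 2)) @ x.T / x.shape[1]
    W2 -= lr * gW2                           # weight updates
    W1 -= lr * gW1

mse_final = mse()   # lower than mse0 after training
```

In the optical architecture, these same weight matrices are what get baked into the interferometer meshes after training, which is why inference can then be passive.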