Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: In this article, the authors propose a blind source separation (BSS) algorithm to undo modal crosstalk in a short-reach multimode optical fiber interconnect for intra-data-center communications.
Abstract: Space-division multiplexing is a widely used technique to improve data-carrying capacities in both wireless and optical communication systems. However, tightly packed spatial channels cause severe crosstalk. High data rates and large channel counts impose severe constraints on resolving the crosstalk using traditional digital signal processing algorithms and electronic circuits. To address these issues, this paper presents a silicon photonic system combining high-speed silicon photonic devices with a novel blind source separation (BSS) algorithm. We first demonstrate the use of photonic BSS to undo modal crosstalk in a short-reach multimode optical fiber interconnect for intra-data-center communications. The proposed photonic BSS system inherits the advantages of photonic matrix processors and the "blindness" of BSS, leading to superior energy and cost efficiency and reduced latency, while allowing the signals to be recovered at a sub-Nyquist sampling rate and in a free-running mode, and offering unmatched agility in signal format and data rate. The feasibility of using photonic processors for mode crosstalk equalization has recently been demonstrated with the aid of training sequences. Our approach, photonic BSS, in contrast, can tackle the more difficult problem of making the receiver transparent to any data rate and modulation format, and workable with slow and cost-effective electronics. In addition, we find that photonic BSS has a much better scaling law for space-division multiplexing (SDM)-based communication systems than digital signal processing (DSP). Compared with state-of-the-art DSP, photonic BSS can improve system power consumption, processing speed, and latency by several orders of magnitude, particularly for high-capacity communications with high data rates per channel and a large number of channels. Photonic BSS has the added advantage of being agnostic to transmission content, making it exceptional at protecting communication privacy.
This paper also discusses our previous work in demonstrating photonic BSS for privacy protection in wireless multiple-input multiple-output (MIMO) communications using silicon photonic microring resonator (MRR) weight banks.
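The blind-unmixing step can be sketched numerically: whiten the crosstalk-corrupted mixtures, then find the rotation that maximizes non-Gaussianity. This is a generic toy ICA in software, not the authors' photonic algorithm; the 2×2 crosstalk matrix and the kurtosis contrast are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
# Two independent, non-Gaussian "transmitted" signals (uniform and Laplacian).
s = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
A = np.array([[0.9, 0.4],
              [0.3, 0.8]])           # unknown crosstalk (mixing) matrix, assumed here
x = A @ s                            # what the receiver observes

# Whiten the observations so the remaining ambiguity is a pure rotation.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

def kurt(y):
    y = (y - y.mean()) / y.std()
    return np.mean(y ** 4) - 3.0     # excess kurtosis: zero for a Gaussian

def rot(t):
    c, s_ = np.cos(t), np.sin(t)
    return np.array([[c, -s_], [s_, c]])

# "Blind" step: pick the rotation that maximizes total non-Gaussianity,
# without ever seeing the sources or a training sequence.
best = max(np.linspace(0, np.pi / 2, 180),
           key=lambda t: sum(abs(kurt(row)) for row in rot(t) @ z))
y = rot(best) @ z

# Each recovered channel should correlate strongly with one true source.
match = np.abs(np.corrcoef(np.vstack([y, s]))[:2, 2:])
```

Up to the usual permutation and sign ambiguity of BSS, each row of `match` has one entry close to 1.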

11 citations

Journal ArticleDOI
TL;DR: In this article, the authors apply the dynamics-based recognition approach to an optoelectronic delay system and demonstrate that the use of the delay system allows for image recognition and nonlinear classifications using only a few control signals.
Abstract: Deep learning is the backbone of artificial intelligence technologies, and it can be regarded as a kind of multilayer feedforward neural network. An essence of deep learning is information propagation through layers. This suggests that there is a connection between deep neural networks and dynamical systems in the sense that information propagation is explicitly modeled by the time-evolution of dynamical systems. In this study, we perform pattern recognition based on the optimal control of continuous-time dynamical systems, which is suitable for physical hardware implementation. The learning is based on the adjoint method to optimally control dynamical systems, and the deep (virtual) network structures based on the time evolution of the systems are used for processing input information. As a key example, we apply the dynamics-based recognition approach to an optoelectronic delay system and demonstrate that the use of the delay system allows for image recognition and nonlinear classifications using only a few control signals. This is in contrast to conventional multilayer neural networks, which require a large number of weight parameters to be trained. The proposed approach provides insight into the mechanisms of deep network processing in the framework of an optimal control problem and presents a pathway for realizing physical computing hardware.
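The adjoint method the abstract relies on can be illustrated on a toy scalar system. The dynamics x_{t+1} = tanh(x_t + u_t) and the quadratic terminal loss are illustrative assumptions, not the paper's optoelectronic delay model: run the dynamics forward, then propagate an adjoint variable backward to obtain the gradient of the loss with respect to every control signal in a single pass.

```python
import numpy as np

def forward(u, x0=0.1):
    xs = [x0]
    for ut in u:
        # Toy discretized dynamics: x_{t+1} = tanh(x_t + u_t)
        xs.append(np.tanh(xs[-1] + ut))
    return xs

def adjoint_grad(u, target, x0=0.1):
    xs = forward(u, x0)
    lam = xs[-1] - target                 # dL/dx_T for L = 0.5 * (x_T - target)^2
    g = np.zeros(len(u))
    for t in reversed(range(len(u))):
        jac = 1.0 - xs[t + 1] ** 2        # d tanh(z)/dz at z = x_t + u_t
        g[t] = lam * jac                  # dL/du_t
        lam = lam * jac                   # adjoint recursion: dL/dx_t
    return g

u = np.linspace(-0.5, 0.5, 8)
g = adjoint_grad(u, target=0.7)

# Sanity check against a finite difference on one control component.
def loss(u):
    return 0.5 * (forward(u)[-1] - 0.7) ** 2

eps = 1e-6
u_pert = u.copy()
u_pert[3] += eps
fd = (loss(u_pert) - loss(u)) / eps
```

The backward sweep costs one extra pass through the dynamics, regardless of how many control signals there are, which is why the approach scales to physical hardware.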

11 citations

Journal ArticleDOI
TL;DR: In this paper, a lensless opto-electronic neural network architecture for machine vision applications was proposed, which optimizes a passive optical mask by means of a task-oriented neural network design, performs the optical convolution operation using the lensless architecture, and reduces the device size and the amount of computation required.
Abstract: Machine vision faces bottlenecks in computing power consumption and large amounts of data. Although opto-electronic hybrid neural networks can provide assistance, they usually have complex structures and are highly dependent on a coherent light source; therefore, they are not suitable for natural lighting environment applications. In this paper, we propose a novel lensless opto-electronic neural network architecture for machine vision applications. The architecture optimizes a passive optical mask by means of a task-oriented neural network design, performs the optical convolution calculation operation using the lensless architecture, and reduces the device size and amount of calculation required. We demonstrate the performance of handwritten digit classification tasks with a multiple-kernel mask in which accuracies of as much as 97.21% were achieved. Furthermore, we optimize a large-kernel mask to perform optical encryption for privacy-protecting face recognition, thereby obtaining the same recognition accuracy performance as no-encryption methods. Compared with the random MLS pattern, the recognition accuracy is improved by more than 6%.
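The optical front end described above amounts to a convolution with a fixed, non-negative mask; only a small digital head is trained. A minimal numerical sketch follows; the 3×3 mask values are illustrative, not the paper's task-optimized mask.

```python
import numpy as np

def conv2d_valid(img, mask):
    # Direct 2-D correlation, standing in for what the passive mask does optically.
    H, W = img.shape
    h, w = mask.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * mask)
    return out

mask = np.array([[0.0, 0.2, 0.0],
                 [0.2, 0.2, 0.2],
                 [0.0, 0.2, 0.0]])          # non-negative: a passive mask only attenuates

rng = np.random.default_rng(0)
y = conv2d_valid(rng.random((8, 8)), mask)       # placeholder "scene"
flat = conv2d_valid(np.ones((8, 8)), mask)       # constant scene -> constant response
```

The non-negativity constraint in the mask is the key physical restriction of a passive incoherent element, which is why the mask must be co-designed with the downstream digital network.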

11 citations

Journal ArticleDOI
TL;DR: The first experimental optical absorption spectra of isolated CdSe2+ and Cd2Se2+ species in the photon energy range ℏω = 1.9-4.9 eV are presented and an excellent overall agreement between experimental spectra and excited state calculations is found.
Abstract: We present the first experimental optical absorption spectra of isolated CdSe2+ and Cd2Se2+ species in the photon energy range ℏω = 1.9–4.9 eV. We probe the optical response by measuring photodissociation cross sections and combine our results with time-dependent density functional theory and equation-of-motion coupled cluster calculations. Structural candidates for the time-dependent excited state calculations are generated by a density functional theory based genetic algorithm as a global geometry optimization tool. This approach allows us to determine the cluster geometries present in our molecular beams by a comparison of experimental spectra with theoretical predictions for putative global minimum candidates. For CdSe2+, an excellent agreement between the global minimum and the experimental results is presented. We identify the global minimum geometry of Cd2Se2+ as a trapezium, which is built up of a neutral Se2 and a cationic Cd2+ unit, in contrast to what was previously proposed. We find an excellent overall agreement between experimental spectra and excited state calculations. We further study the influence of total and partial charges on the optical and geometric properties of Cd2Se2 and compare our findings to CdSe quantum dots and to bulk CdSe.

11 citations

Posted Content
TL;DR: In this article, the authors present the design and analysis of cascadable all-optical NAND gates based on diffractive neural networks; the gates can be cascaded to perform complex logical functions by successively feeding the output of one diffractive NAND gate into another.
Abstract: Owing to its potential advantages such as scalability, low latency and power efficiency, optical computing has seen rapid advances over the last decades. A core unit of a potential all-optical processor would be the NAND gate, which can be cascaded to perform an arbitrary logical operation. Here, we present the design and analysis of cascadable all-optical NAND gates using diffractive neural networks. We encoded the logical values at the input and output planes of a diffractive NAND gate using the relative optical power of two spatially-separated apertures. Based on this architecture, we numerically optimized the design of a diffractive neural network composed of 4 passive layers to all-optically perform NAND operation using the diffraction of light, and cascaded these diffractive NAND gates to perform complex logical functions by successively feeding the output of one diffractive NAND gate into another. We demonstrated the cascadability of our diffractive NAND gates by using identical diffractive designs to all-optically perform AND and OR operations, as well as a half-adder. Cascadable all-optical NAND gates composed of spatially-engineered passive diffractive layers can serve as a core component of various optical computing platforms.
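The cascadability claim is purely logical and easy to verify in software. Below, a Boolean stand-in replaces the diffractive gate (the threshold decode mimics the relative-power encoding across two apertures), and AND, OR, XOR and the half-adder are built from NAND alone, mirroring the paper's demonstrations.

```python
# Bit values at each input/output plane are encoded in the relative optical
# power of two apertures; a threshold decode stands in for that here.
def decode(p_zero, p_one):
    return 1 if p_one > p_zero else 0

def NAND(a, b):
    # Stand-in for one diffractive gate: light is routed to the "1" aperture
    # unless both inputs are 1 (the passive layers do this by diffraction).
    return decode(p_zero=a * b, p_one=1 - a * b)

# Cascading: the output of one NAND feeds the input of the next.
def NOT(a):      return NAND(a, a)
def AND(a, b):   return NOT(NAND(a, b))
def OR(a, b):    return NAND(NOT(a), NOT(b))
def XOR(a, b):
    n = NAND(a, b)
    return NAND(NAND(a, n), NAND(b, n))

def half_adder(a, b):
    return XOR(a, b), AND(a, b)          # (sum, carry)

table = {(a, b): half_adder(a, b) for a in (0, 1) for b in (0, 1)}
```

This is of course only the logic-level view; the paper's contribution is making the physical gate robust enough that its optical output can drive the next gate's input directly.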

11 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
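The top-1 and top-5 error rates quoted above have a precise definition: a prediction counts as correct under top-k if the true label is among the k highest-scoring classes. A short sketch (the random scores are placeholder data, not real model outputs):

```python
import numpy as np

def top_k_error(scores, labels, k):
    # scores: (n_samples, n_classes); labels: (n_samples,)
    topk = np.argsort(scores, axis=1)[:, -k:]       # indices of the k largest scores
    hit = (topk == labels[:, None]).any(axis=1)     # true label among the top k?
    return 1.0 - hit.mean()

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
random_scores = rng.normal(size=(1000, 10))         # placeholder, 10 classes
perfect_scores = np.eye(10)[labels]                 # true class always scores highest

e1 = top_k_error(random_scores, labels, 1)          # near 0.9 for random guessing
e5 = top_k_error(random_scores, labels, 5)          # near 0.5
e_perfect = top_k_error(perfect_scores, labels, 1)  # exactly 0.0
```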

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
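The backpropagation mechanism this abstract describes fits in a few lines of numpy. A minimal sketch, using XOR as a toy task because it needs a hidden layer (the network size and learning rate are arbitrary choices, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR: not linearly separable

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: each layer re-represents the previous layer's output.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass (backpropagation): push the error signal back layer by layer.
    dlogit = p - y                       # gradient of cross-entropy wrt output logit
    dW2 = h.T @ dlogit;  db2 = dlogit.sum(axis=0)
    dh = (dlogit @ W2.T) * (1.0 - h ** 2)
    dW1 = X.T @ dh;      db1 = dh.sum(axis=0)
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.1 * G                     # plain gradient descent step

pred = (p > 0.5).astype(float)
```

The chain-rule structure of the backward pass is exactly the "indicate how a machine should change its internal parameters" step the abstract refers to.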

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
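The temporal-difference update at the core of the deep Q-network can be shown with a table instead of a network. A toy sketch on a 5-state chain (the environment and hyperparameters are illustrative; DQN replaces the table with a deep network fed raw pixels, but the update rule is the same):

```python
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)   # reward at the right end

for _ in range(500):
    s = int(rng.integers(0, n_states))
    for _ in range(20):
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        # TD update: move Q(s, a) toward the bootstrapped target
        # r + gamma * max_a' Q(s', a').
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)             # greedy policy: go right everywhere
```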
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]
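The factorization the bullet points describe is just a singular value decomposition: each weight matrix M splits into a unitary, a diagonal, and another unitary, so one layer becomes a cascade of two programmed meshes. A numerical sketch (the 4×4 matrix and input are arbitrary; this shows the algebra, not the interferometer calibration):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))             # an arbitrary trained weight matrix
x = rng.normal(size=4)                  # input amplitudes (illustrative)

U, sigma, Vh = np.linalg.svd(M)         # M = U @ diag(sigma) @ Vh
step1 = Vh @ x                          # the "V"-type transformation: one unitary mesh
step2 = (U * sigma) @ step1             # the "UΣ" transformation: a second mesh plus
                                        # per-channel attenuation (U * sigma scales
                                        # U's columns by the singular values)
# step2 equals M @ x: the cascade of the two steps applies the full matrix.
```

Because the unitary parts are lossless and Σ is pure attenuation, this split is what lets an interferometer mesh implement an arbitrary real matrix.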

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
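The PCA baseline this abstract compares against can be stated in a few lines: project centered data onto its top-k principal components and reconstruct. A sketch with synthetic low-rank-plus-noise data (an illustrative assumption); the paper's deep autoencoders learn a nonlinear version of this code and outperform this linear baseline.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 10, 3
# Data with low-dimensional structure plus noise.
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + 0.1 * rng.normal(size=(n, d))
X = X - X.mean(axis=0)                  # PCA assumes centered data

U, s, Vt = np.linalg.svd(X, full_matrices=False)
codes = X @ Vt[:k].T                    # low-dimensional codes (the "central layer")
X_hat = codes @ Vt[:k]                  # best rank-k linear reconstruction
err = np.mean((X - X_hat) ** 2)         # small: residual is essentially the noise
```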

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

    [...]