Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: In this paper, a 4-layer serial optical neural network (SONN) was constructed and trained for classification of both analog and digital signals with simulated accuracy rates of over 79.2% with proper individuality variance rates.
Abstract: Deep learning is able to functionally mimic the human brain and thus, it has attracted considerable recent interest. Optics-assisted deep learning is a promising approach to improve forward-propagation speed and reduce the power consumption of electronic-assisted techniques. However, present methods are based on a parallel processing approach that is inherently ineffective in dealing with the serial data signals at the core of information and communication technologies. Here, we propose and demonstrate a sequential optical deep learning concept that is specifically designed to directly process high-speed serial data. By utilizing ultra-short optical pulses as the information carriers, the neurons are distributed at different time slots in a serial pattern, and interconnected to each other through group delay dispersion. A 4-layer serial optical neural network (SONN) was constructed and trained for classification of both analog and digital signals with simulated accuracy rates of over 79.2% with proper individuality variance rates. Furthermore, we performed a proof-of-concept experiment of a pseudo-3-layer SONN to successfully recognize the ASCII codes of English letters at a data rate of 12 gigabits per second. This concept represents a novel one-dimensional realization of artificial neural networks, enabling a direct application of optical deep learning methods to the analysis and processing of serial data signals, while offering a new overall perspective for temporal signal processing.

8 citations

Journal ArticleDOI
TL;DR: In this paper, the authors show that achromatic phase shift can lead to an optical switch with a far broader bandwidth, overcoming the limitations of conventional switches, which has the potential to open up opportunities for on-chip processing of ultrabroadband optical pulses.
Abstract: Switching and routing of broadband optical signals is important for a number of emerging applications involving reprogrammable optical processors and microwave photonic signal processing. Conventional optical switches, based on static refractive-index modulation, are fundamentally limited in their switching bandwidth by dispersive phase shifts. The authors show that dynamic refractive-index modulation can lead to an achromatic phase shift, and thus an optical switch with a far broader bandwidth, overcoming the limitations of conventional switches. This has the potential to open up opportunities for on-chip processing of ultrabroadband optical pulses.

8 citations

Journal ArticleDOI
TL;DR: In this paper, the impact of a nonlinear port on the measured statistical electromagnetic properties of a ray-chaotic complex enclosure in the short wavelength limit was investigated, where a vector network analyzer is upgraded with a high power option which enables calibrated scattering parameter measurements up to +43 dBm.
Abstract: The Random Coupling Model (RCM) is a statistical approach for studying the scattering properties of linear wave chaotic systems in the semi-classical regime. Its success has been experimentally verified in various over-moded wave settings, including both microwave and acoustic systems. It is of great interest to extend its use to nonlinear systems. This paper studies the impact of a nonlinear port on the measured statistical electromagnetic properties of a ray-chaotic complex enclosure in the short wavelength limit. A Vector Network Analyzer is upgraded with a high power option which enables calibrated scattering (S) parameter measurements up to +43 dBm. By attaching a diode to the excitation antenna, amplitude-dependent S-parameters are observed. We have systematically studied how the key components in the RCM are affected by this nonlinear port, including the radiation impedance, short ray orbit corrections, and statistical properties. By applying the newly developed radiation efficiency extension to the RCM, we find that the diode admittance increases with excitation amplitude. This reduces the amount of power entering the cavity through the port, so that the diode effectively acts as a protection element.

8 citations

Journal ArticleDOI
TL;DR: Dang et al. propose BPLight-CNN, a photonic and memristor-based CNN architecture for end-to-end training and prediction, which achieves at least 34x speedup, 34x improvement in computational efficiency, and 38.5x energy savings during training.
Abstract: Author(s): Dang, D; Chittamuru, SVR; Pasricha, S; Mahapatra, R; Sahoo, D | Abstract: Training deep learning networks involves continuous weight updates across the various layers of the deep network while using a backpropagation algorithm (BP). This results in expensive computation overheads during training. Consequently, most deep learning accelerators today employ pre-trained weights and focus only on improving the design of the inference phase. The recent trend is to build a complete deep learning accelerator by incorporating the training module. Such efforts require an ultra-fast chip architecture for executing the BP algorithm. In this article, we propose a novel photonics-based backpropagation accelerator for high performance deep learning training. We present the design for a convolutional neural network, BPLight-CNN, which incorporates the silicon photonics-based backpropagation accelerator. BPLight-CNN is a first-of-its-kind photonic and memristor-based CNN architecture for end-to-end training and prediction. We evaluate BPLight-CNN using a photonic CAD framework (IPKISS) on deep learning benchmark models including LeNet and VGG-Net. The proposed design achieves (i) at least 34x speedup, 34x improvement in computational efficiency, and 38.5x energy savings, during training; and (ii) 29x speedup, 31x improvement in computational efficiency, and 38.7x improvement in energy savings, during inference compared to the state-of-the-art designs. All these comparisons are done at a 16-bit resolution; and BPLight-CNN achieves these improvements at a cost of approximately 6% lower accuracy compared to the state-of-the-art.

8 citations

Journal ArticleDOI
TL;DR: In this paper, the authors describe the characteristics of a large integrated linear optical device containing Mach-Zehnder interferometers and describe its potential use as a physically unclonable function.
Abstract: In this article we describe the characteristics of a large integrated linear optical device containing Mach-Zehnder interferometers and describe its potential use as a physically unclonable function. We propose that any tunable interferometric device of practical scale will be intrinsically unclonable and will possess an inherent randomness that can be useful for many practical applications. The device under test has the additional use-case as a general-purpose photonic manipulation tool, with various applications based on the experimental results of our prototype. Once our tunable interferometric device is set to work as a physically unclonable function, we find that there are approximately 6.85×10^35 challenge-response pairs, where each challenge can be quickly reconfigured by tuning the interferometer array for subsequent challenges.

8 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network achieving state-of-the-art ImageNet classification performance is presented; it consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
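The "dropout" regularization the abstract mentions can be sketched in a few lines. This is the standard inverted-dropout formulation with illustrative names, not the paper's implementation:

```python
import numpy as np

def dropout(activations, p_drop, rng):
    """Inverted dropout: zero each unit with probability p_drop,
    rescale survivors by 1/(1 - p_drop) so expected activation is unchanged."""
    mask = (rng.random(activations.shape) >= p_drop) / (1.0 - p_drop)
    return activations * mask

rng = np.random.default_rng(0)
h = np.ones((4, 8))
h_train = dropout(h, 0.5, rng)  # applied at training time only
h_test = h                      # at test time the full activations are used
```

Because survivors are rescaled during training, no rescaling is needed at inference time.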

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
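The temporal-difference update at the heart of the deep Q-network can be illustrated in tabular form; the paper replaces the table with a deep network, and the state/action sizes and hyperparameters here are illustrative only:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q[s, a] toward the TD target
    r + gamma * max_a' Q[s_next, a']."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((3, 2))  # toy table: 3 states, 2 actions
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
```

The DQN agent approximates Q with a convolutional network over raw pixels and trains it toward the same TD target using experience replay.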

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]
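The factorization these snippets rely on, each weight matrix M^(i) split by singular value decomposition into unitaries U^(i), V^(i) and a diagonal Σ^(i) so the unitary parts can be mapped onto interferometer meshes, can be checked numerically. An illustrative NumPy sketch, not the paper's code:

```python
import numpy as np

# SVD splits a real weight matrix M into a rotation U, a diagonal scaling S,
# and another rotation Vt; applying Vt, then S, then U recovers M exactly.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
U, s, Vt = np.linalg.svd(M)

M_rebuilt = U @ np.diag(s) @ Vt
```

The two unitary factors are what an interferometer mesh can implement losslessly; the diagonal Σ carries the only amplification or attenuation.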

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
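The fine-tuning step the abstract describes, gradient descent on reconstruction error through a small central layer, can be sketched with a linear autoencoder. This is a minimal illustration and omits the paper's layer-wise pretraining:

```python
import numpy as np

# Linear autoencoder: map 8-D inputs through a 2-D central layer and back,
# then fine-tune both maps by gradient descent on the reconstruction error.
rng = np.random.default_rng(0)
X = (rng.standard_normal((100, 8)) @ rng.standard_normal((8, 8))) * 0.1

W_enc = rng.standard_normal((8, 2)) * 0.1   # encoder weights
W_dec = rng.standard_normal((2, 8)) * 0.1   # decoder weights

def reconstruction_error(X, W_enc, W_dec):
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

err_before = reconstruction_error(X, W_enc, W_dec)
lr = 0.05
for _ in range(500):
    code = X @ W_enc                              # low-dimensional code
    grad_out = 2.0 * (code @ W_dec - X) / X.shape[0]
    grad_dec = code.T @ grad_out
    grad_enc = X.T @ (grad_out @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
err_after = reconstruction_error(X, W_enc, W_dec)
```

The paper's point is that with good weight initialization this descent works even for deep nonlinear autoencoders, whose codes then beat PCA.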

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

    [...]
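The forward-propagation-then-backpropagation loop this snippet refers to can be sketched for a tiny two-layer network. An illustrative NumPy sketch of the generic procedure, not the optical implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 4))                       # training inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy labels

W1 = rng.standard_normal((4, 8)) * 0.5  # first weight matrix
W2 = rng.standard_normal((8, 1)) * 0.5  # second weight matrix

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(X, y, W1, W2):
    return np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2)

loss_before = loss(X, y, W1, W2)
lr = 0.3
for _ in range(300):
    h = sigmoid(X @ W1)                                   # forward propagation
    out = sigmoid(h @ W2)
    d_out = 2.0 * (out - y) / len(y) * out * (1.0 - out)  # output-layer error
    d_h = (d_out @ W2.T) * h * (1.0 - h)                  # error backpropagated to layer 1
    W2 -= lr * (h.T @ d_out)                              # weight updates
    W1 -= lr * (X.T @ d_h)
loss_after = loss(X, y, W1, W2)
```

Each pass computes outputs layer by layer, then propagates the error backward to update every weight matrix, which is exactly the training step the quoted sentence attributes to reference [16].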