Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: In this paper, a broadband nonvolatile electrically controlled 2 × 2 programmable unit in silicon photonics based on the phase-change material Ge2Sb2Te5 is presented.
Abstract: Programmable photonic integrated circuits (PICs) have recently gained significant interest because of their potential in creating next-generation technologies ranging from artificial neural networks and microwave photonics to quantum information processing. The fundamental building block of such programmable PICs is a 2 × 2 programmable unit, traditionally controlled by the thermo-optic or free-carrier dispersion effect. However, these implementations are power-hungry and volatile and have a large footprint (typically >100 μm). Therefore, a truly "set-and-forget"-type 2 × 2 programmable unit with zero static power consumption is highly desirable for large-scale PICs. Here, we report a broadband nonvolatile electrically controlled 2 × 2 programmable unit in silicon photonics based on the phase-change material Ge2Sb2Te5. The directional coupler-type programmable unit exhibits a compact coupling length (64 μm), small insertion loss (∼2 dB), and minimal crosstalk (<−8 dB) across the entire telecommunication C-band while maintaining a record-high endurance of over 2800 switching cycles without significant performance degradation. This nonvolatile programmable unit constitutes a critical component for realizing future generic programmable silicon photonic systems.

23 citations

Journal ArticleDOI
TL;DR: An ultra-power-efficient 2 × 2 Si Mach-Zehnder interferometer optical switch with III-V/Si hybrid metal-oxide-semiconductor (MOS) phase shifters is demonstrated, promising for fabricating large-scale Si photonic integrated circuits that require efficient, low-loss, and high-speed optical phase control.
Abstract: We have demonstrated an ultra-power-efficient 2 × 2 Si Mach–Zehnder interferometer optical switch with III-V/Si hybrid metal-oxide-semiconductor (MOS) phase shifters. The efficient low-loss phase modulation enables low-crosstalk and broadband switching in conjunction with multimode interference couplers consisting of tapered input and output ports. Owing to the negligible gate leakage current in the hybrid MOS capacitor, the power consumption required for switching is 0.18 nW, approximately 10⁷ times smaller than that of a Si thermo-optic phase shifter. We also demonstrated a switching time of less than 20 ns. The III-V/Si hybrid MOS phase shifter is promising for fabricating large-scale Si photonic integrated circuits that require efficient, low-loss, and high-speed optical phase control.

23 citations

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate ultra-low-power refractive index tuning in a hybrid barium titanate (BTO)-silicon nitride (SiN) platform integrated on silicon.
Abstract: As the optical analogue to integrated electronics, integrated photonics has already found widespread use in data centers in the form of optical interconnects. As global network traffic continues its rapid expansion, the power consumption of such circuits becomes a critical consideration. Electrically tunable devices in photonic integrated circuits contribute significantly to the total power budget, as they traditionally rely on inherently power-consuming phenomena such as the plasma dispersion effect or the thermo-optic effect for operation. Here, we demonstrate ultra-low-power refractive index tuning in a hybrid barium titanate (BTO)-silicon nitride (SiN) platform integrated on silicon. We achieve tuning by exploiting the large electric field-driven Pockels effect in ferroelectric BTO thin films of sub-100 nm thickness. The extrapolated power consumption for tuning a free spectral range (FSR) in racetrack resonator devices is only 106 nW/FSR, several orders of magnitude less than many previous reports. We demonstrate the technological potential of our hybrid BTO-SiN technology by compensating thermally induced refractive index variations over a temperature range of 20 °C and by using our platform to fabricate tunable multiresonator optical filters. Our hybrid BTO-SiN technology significantly advances the field of ultra-low-power integrated photonic devices and allows for the realization of next-generation efficient photonic circuits for use in a variety of fields, including communications, sensing, and computing.

22 citations

Journal ArticleDOI
TL;DR: In this paper, a polarization-multiplexed all-optical diffractive processor is proposed that performs multiple, arbitrarily selected linear transformations through a single diffractive network trained using deep learning: an array of pre-selected linear polarizers is positioned between trainable, isotropic transmissive diffractive materials, and different target linear transformations are uniquely assigned to different combinations of input/output polarization states.
Abstract: Research on optical computing has recently attracted significant attention due to the transformative advances in machine learning. Among different approaches, diffractive optical networks composed of spatially-engineered transmissive surfaces have been demonstrated for all-optical statistical inference and performing arbitrary linear transformations using passive, free-space optical layers. Here, we introduce a polarization-multiplexed diffractive processor to all-optically perform multiple, arbitrarily-selected linear transformations through a single diffractive network trained using deep learning. In this framework, an array of pre-selected linear polarizers is positioned between trainable transmissive diffractive materials that are isotropic, and different target linear transformations (complex-valued) are uniquely assigned to different combinations of input/output polarization states. The transmission layers of this polarization-multiplexed diffractive network are trained and optimized via deep learning and error-backpropagation by using thousands of examples of the input/output fields corresponding to each one of the complex-valued linear transformations assigned to different input/output polarization combinations. Our results and analysis reveal that a single diffractive network can successfully approximate and all-optically implement a group of arbitrarily-selected target transformations with a negligible error when the number of trainable diffractive features/neurons (N) approaches N_p N_i N_o, where N_i and N_o represent the number of pixels at the input and output fields-of-view, respectively, and N_p refers to the number of unique linear transformations assigned to different input/output polarization combinations. This polarization-multiplexed all-optical diffractive processor can find various applications in optical computing and polarization-based machine vision tasks.
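The scaling condition at the end of the abstract follows from a simple parameter-counting argument, sketched here as a reading aid (the notation matches the abstract; the approximate-equality form is a paraphrase, not the paper's exact statement):

```latex
% Each complex-valued linear map from an N_i-pixel input field to an
% N_o-pixel output field has N_i N_o independent complex entries.
% Implementing N_p such maps in a single diffractive network therefore
% requires on the order of
\[
    N \approx N_p \, N_i \, N_o
\]
% trainable diffractive features/neurons, which is the regime in which
% the abstract reports negligible approximation error.
```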

22 citations

Journal ArticleDOI
TL;DR: A new kind of photoelectronic synaptic transistor is proposed, using Al-Zn-O (AZO) as the coplanar gate and a laterally-coupled poly(vinyl alcohol) (PVA) electrolyte membrane as the neurotransmitter; key synaptic functions such as excitatory postsynaptic current and paired-pulse facilitation were successfully emulated.
Abstract: The hardware implementation of biological synapses is very necessary for a new brain-like neuromorphic computation system. In recent years, optoelectronic synaptic devices have become the application platform for next generation neuromorphic system and artificial neural network. Here, a new kind of photoelectronic synaptic transistors are proposed using the Al-Zn-O (AZO) as coplanar gate and the laterally-coupled poly (vinyl alcohol) (PVA) electrolyte membrane as neurotransmitter. The key synaptic functions such as excitatory postsynaptic current (EPSC) and paired-pulse-facilitation (PPF) were successfully emulated. More importantly, by exposing an ultraviolet (UV) laser, the transformation of short-term memory (STM) to long-term memory (LTM) can be mimicked in our neuromorphic devices. Furthermore, an energy-band diagram is finally proposed for a better understanding of the underlying mechanism of LTM behavior. These results represent an important step toward the next-generation neural networks enabled by photo-electric hybrid nano-electronics, and point to the potential of more sophisticated neuromorphic computations.

22 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A large, deep convolutional neural network, consisting of five convolutional layers (some followed by max-pooling layers) and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on ImageNet classification, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
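The "dropout" regularizer mentioned in the abstract can be sketched in a few lines. This is the common "inverted dropout" formulation (rescaling at training time), which is an assumption here rather than the paper's exact recipe:

```python
import numpy as np

def dropout(activations, p=0.5, rng=None, train=True):
    """Zero each activation with probability p during training,
    rescaling survivors by 1/(1-p) to preserve the expected value."""
    if not train or p == 0.0:
        return activations                      # identity at test time
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= p   # keep each unit with prob 1-p
    return activations * mask / (1.0 - p)       # inverted-dropout rescaling

h = np.ones((2, 4))                             # toy hidden activations
out = dropout(h, p=0.5, rng=np.random.default_rng(0))
# With all-ones input and p=0.5, every output is either 0 or 1/(1-p) = 2.
assert set(np.unique(out)).issubset({0.0, 2.0})
```

Randomly dropping units this way discourages co-adaptation between features, which is the effect the abstract credits for reducing overfitting in the fully-connected layers.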

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 

23,074 citations


"Deep learning with coherent nanophotonic circuits" refers to background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]
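The four transformations in the bullets above follow the singular-value decomposition, under which any weight matrix factors as M = U Σ V†, so two programmable unitary blocks plus a diagonal amplitude stage suffice. A short numerical sketch makes this concrete (the matrix size and random data are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary complex weight matrix, standing in for one layer's M.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# SVD: M = U @ diag(s) @ Vh, with U and Vh unitary and s >= 0.
U, s, Vh = np.linalg.svd(M)
Sigma = np.diag(s)

# Applying Vh first, then Sigma, then U reproduces the action of M,
# mirroring the staged transformations (V then UΣ) in the excerpt.
x = rng.normal(size=4) + 1j * rng.normal(size=4)
assert np.allclose(M @ x, U @ Sigma @ (Vh @ x))
```

The unitary factors U and Vh are what an interferometer mesh can implement directly; only the diagonal Σ requires amplitude modulation.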

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
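The reconstruction objective the abstract describes can be sketched with a linear encoder/decoder trained by gradient descent; this is a deliberately simplified stand-in (the paper's networks are deep and nonlinear, and its contribution is the weight-initialization scheme, not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 10-dimensional data (product of Gaussians), normalized.
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))
X /= X.std()

d_code = 3                                   # small central "code" layer
W_enc = rng.normal(size=(10, d_code)) * 0.1  # encoder weights
W_dec = rng.normal(size=(d_code, 10)) * 0.1  # decoder weights
lr = 0.01

initial = np.mean((X @ W_enc @ W_dec - X) ** 2)
for _ in range(2000):
    code = X @ W_enc                         # encode to low-dimensional code
    err = code @ W_dec - X                   # reconstruction error
    W_dec -= lr * code.T @ err / len(X)      # gradient step on decoder
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)  # gradient step on encoder

final = np.mean((X @ W_enc @ W_dec - X) ** 2)
assert final < initial                       # reconstruction improved
```

For this linear case the optimum coincides with projecting onto the top principal components; the paper's point is that deep nonlinear autoencoders can beat that baseline when initialized well.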

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanophotonic circuits" refers to methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using backpropagation [16]....

    [...]
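The training procedure described in the excerpt above (forward propagation of training data, then gradient-based optimization of the weight matrices) can be sketched for a single linear layer; the sizes, learning rate, and squared loss are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))                 # training inputs
W_true = rng.normal(size=(8, 3))
Y = X @ W_true                               # targets from a known linear map

W = np.zeros((8, 3))                         # weight matrix to be learned
lr = 0.5
for _ in range(1000):
    Y_hat = X @ W                            # forward propagation
    grad = X.T @ (Y_hat - Y) / len(X)        # backpropagated gradient of MSE
    W -= lr * grad                           # gradient-descent weight update

# The learned weights reproduce the targets on the training data.
assert np.mean((X @ W - Y) ** 2) < 1e-3
```

In a multilayer network the same gradient is propagated backward through each layer in turn; this single-layer case keeps the gradient exact and the example self-contained.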