Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017 - Nature Photonics, Vol. 11, Iss. 7, pp. 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude, and three orders of magnitude in power efficiency, over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: In this paper, a power-efficient and operationally simple AlGaAs-on-insulator microcomb source is used to drive CMOS SiPh engines for optical data transmission and microwave photonics, respectively.
Abstract: Microcombs have sparked a surge of applications over the last decade, ranging from optical communications to metrology. Despite their diverse deployment, most microcomb-based systems rely on a tremendous amount of bulk equipment to fulfill their desired functions, which is complicated, expensive and power-consuming. On the other hand, foundry-based silicon photonics (SiPh) has had remarkable success in providing versatile functionality in a scalable and low-cost manner, but its available chip-based light sources lack the capacity for parallelization, which limits the scope of SiPh applications. Here, we bridge these two technologies by using a power-efficient and operationally simple AlGaAs-on-insulator microcomb source to drive CMOS SiPh engines. We present two important chip-scale photonic systems, for optical data transmission and microwave photonics respectively: the first microcomb-based integrated photonic data link is demonstrated, based on a 4-level pulse-amplitude modulation (PAM-4) scheme with a 2 Tbps aggregate rate, and a highly reconfigurable microwave photonic filter with an unprecedented integration level is constructed, using a time-stretch scheme. Such synergy of microcomb and SiPh integrated components is an essential step towards the next generation of fully integrated photonic systems.
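As a sanity check on the headline figure, PAM-4 carries 2 bits per symbol, so the aggregate rate is simply (comb lines) × (symbol rate) × 2. The line count and baud rate below are illustrative assumptions, not figures from the paper:

```python
# Aggregate rate of a comb-driven PAM-4 link: lines x baud x bits/symbol.
# The 50-line / 20-GBd split is a hypothetical example, not the paper's numbers.
num_lines = 50           # wavelength channels supplied by the microcomb (assumed)
symbol_rate = 20e9       # symbols per second per channel (assumed)
bits_per_symbol = 2      # PAM-4 encodes 2 bits per symbol

aggregate_bps = num_lines * symbol_rate * bits_per_symbol
print(f"{aggregate_bps / 1e12:.1f} Tbps")  # -> 2.0 Tbps
```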

41 citations

Journal ArticleDOI
TL;DR: In this paper, two typical neuroevolution algorithms are used to determine the hyper-parameters of ONNs and optimize the weights (phase shifters) in the connections.
Abstract: Recently, optical neural networks (ONNs) integrated into photonic chips have received extensive attention because they are expected to perform the same pattern-recognition tasks as electronic platforms with higher efficiency and lower power consumption. However, efficient learning algorithms for training ONNs on an on-chip integrated system have been lacking. In this article, we propose a novel learning strategy based on neuroevolution to design and train ONNs. Two typical neuroevolution algorithms are used to determine the hyper-parameters of ONNs and to optimize the weights (phase shifters) in the connections. To demonstrate the effectiveness of the training algorithms, the trained ONNs are applied to classification tasks on an iris plants dataset, a wine recognition dataset and modulation-format recognition. The calculated results demonstrate that the accuracy and stability of the neuroevolution-based training algorithms are competitive with those of traditional learning algorithms. In comparison to previous works, we introduce an efficient training method for ONNs and demonstrate their broad application prospects in pattern recognition, reinforcement learning and so on.
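To make the strategy concrete, here is a minimal (1+λ)-style evolution-strategy sketch over a vector of phase-shifter settings. It illustrates the neuroevolution idea only; `onn_accuracy` is a hypothetical black box that programs the phases and returns a validation accuracy, and none of this is the authors' actual algorithm:

```python
import numpy as np

# Minimal evolution strategy over phase-shifter settings -- a sketch of the
# neuroevolution idea, not the authors' algorithm. `onn_accuracy` is a
# hypothetical black box that programs the phases on-chip (or in simulation)
# and returns a validation accuracy to maximize.
def evolve_phases(onn_accuracy, n_phases, generations=200, offspring=16, sigma=0.1):
    rng = np.random.default_rng(0)
    parent = rng.uniform(0, 2 * np.pi, n_phases)      # initial phase vector
    best = onn_accuracy(parent)
    for _ in range(generations):
        # Mutate the parent and keep the best child if it does at least as well.
        children = parent + sigma * rng.standard_normal((offspring, n_phases))
        scores = [onn_accuracy(np.mod(c, 2 * np.pi)) for c in children]
        i = int(np.argmax(scores))
        if scores[i] >= best:
            parent, best = np.mod(children[i], 2 * np.pi), scores[i]
    return parent, best
```

Gradient-free updates like this are attractive on-chip because they only require forward evaluations of the hardware, not a differentiable model of it.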

40 citations

Journal ArticleDOI
20 Oct 2021
TL;DR: In this article, the authors present a deterministic approach that corrects circuit errors by locally correcting hardware errors within individual optical gates, and apply it to simulations of large-scale optical neural networks and infinite impulse response filters implemented in programmable photonics, finding that both remain resilient to component error well beyond modern-day process tolerances.
Abstract: Programmable photonic circuits of reconfigurable interferometers can be used to implement arbitrary operations on optical modes, providing a flexible platform for accelerating tasks in quantum simulation, signal processing, and artificial intelligence. A major obstacle to scaling up these systems is static fabrication error, where small component errors within each device accrue to produce significant errors within the circuit computation. Mitigating this error usually requires numerical optimization dependent on real-time feedback from the circuit, which can greatly limit the scalability of the hardware. Here we present a deterministic approach to correcting circuit errors by locally correcting hardware errors within individual optical gates. We apply our approach to simulations of large-scale optical neural networks and infinite impulse response filters implemented in programmable photonics, finding that they remain resilient to component error well beyond modern-day process tolerances. Our results highlight a potential way to scale up programmable photonics to hundreds of modes with current fabrication processes.
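A quick way to see why static component error matters at scale is to perturb every gate in a cascade and measure how far the overall transfer matrix drifts. The sketch below does this for a chain of 2×2 rotations (a toy stand-in for an interferometer mesh); it illustrates the error-accrual problem the paper addresses, not its correction procedure:

```python
import numpy as np

# Error accrual in a cascade of 2x2 "gates": compare an ideal chain of
# rotations to one whose angles carry static fabrication error. A toy model
# of the problem, not the paper's local correction algorithm.
def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def mean_chain_error(n_gates, sigma, trials=100):
    rng = np.random.default_rng(0)
    errs = []
    for _ in range(trials):
        thetas = rng.uniform(0, 2 * np.pi, n_gates)
        ideal = perturbed = np.eye(2)
        for t in thetas:
            ideal = rotation(t) @ ideal
            perturbed = rotation(t + sigma * rng.standard_normal()) @ perturbed
        errs.append(np.linalg.norm(ideal - perturbed))
    return np.mean(errs)

for n in (10, 100, 1000):
    print(n, mean_chain_error(n, sigma=0.01))  # grows roughly like sigma * sqrt(n)
```

Because uncorrected errors compound across the mesh, correcting each gate locally, as the paper proposes, keeps the whole-circuit error bounded.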

40 citations

Journal ArticleDOI
Pascal Stark, Folkert Horst, Roger Dangel, Jonas R. Weiss, Bert Jan Offrein
TL;DR: The ability of integrated photonics to operate at very high speeds opens opportunities for time-critical real-time applications, while chip-level integration paves the way to cost-effective manufacturing and assembly.
Abstract: Photonics offers exciting opportunities for neuromorphic computing. This paper specifically reviews the prospects of integrated optical solutions for accelerating inference and training of artificial neural networks. Calculating the synaptic function of such networks is computationally very expensive and does not scale well on state-of-the-art computing platforms. Analog signal processing, using linear and nonlinear properties of integrated optical devices, offers a path toward substantially improving the performance and power efficiency of these artificial intelligence workloads. The ability of integrated photonics to operate at very high speeds opens opportunities for time-critical real-time applications, while chip-level integration paves the way to cost-effective manufacturing and assembly.
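For scale, the synaptic function of a dense layer is a matrix-vector product, so its multiply-accumulate (MAC) count grows with the product of adjacent layer widths; this is the workload analog optics executes in propagation. The layer sizes below are purely illustrative:

```python
# Multiply-accumulate (MAC) count of the synaptic function for a small dense
# network. Layer widths are illustrative assumptions, not from the paper.
layers = [784, 1024, 1024, 10]
macs = sum(n_in * n_out for n_in, n_out in zip(layers, layers[1:]))
print(f"{macs:,} MACs per forward pass")  # ~1.9 million for this toy network
```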

40 citations

Journal ArticleDOI
TL;DR: This perspective is focused on recent progress in the implementation of functional oxide thin films into photovoltaic and neuromorphic applications toward the envisioned goal of self-powered photovoltaic neuromorphic systems, or a solar brain.
Abstract: New device concepts and new computing principles are needed to balance our ever-growing appetite for data and information with the realization of the goals of increased energy efficiency, reduction in CO2 emissions, and the circular economy. Neuromorphic or synaptic electronics is an emerging field of research aiming to overcome the von Neumann bottleneck of current computers by building artificial neuronal systems that mimic extremely energy-efficient biological synapses. The introduction of photovoltaic and/or photonic aspects into these neuromorphic architectures will produce self-powered adaptive electronics, but may also open new possibilities in artificial neuroscience, artificial neural communications, sensing, and machine learning, which would enable, in turn, a new era for computational systems owing to the possibility of attaining high bandwidths with much-reduced power consumption. This perspective is focused on recent progress in the implementation of functional oxide thin films into photovoltaic and neuromorphic applications toward the envisioned goal of self-powered photovoltaic neuromorphic systems, or a solar brain.

40 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network achieved state-of-the-art performance on ImageNet classification; the network consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
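The layer layout described in the abstract is compact enough to sketch directly. Below is a hedged single-GPU approximation in PyTorch: the five-conv/three-FC structure and dropout follow the paper, while the 227×227 input convention and padding choices are common re-implementation assumptions rather than details stated here:

```python
import torch.nn as nn

# Five convolutional layers (some followed by max-pooling) and three
# fully-connected layers ending in a 1000-way classifier, as described above.
# Assumes a 227x227 RGB input; softmax is applied inside the training loss.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),  # "dropout" regularizer
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),
)
```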

73,978 citations

Journal ArticleDOI
28 May 2015 - Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015 - Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
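The learning signal at the core of the deep Q-network is the temporal-difference target r + γ·max_a′ Q(s′, a′), toward which the network's Q(s, a) prediction is regressed. Below is a schematic numpy sketch of just that target computation; the full agent adds experience replay and a target network:

```python
import numpy as np

# Bootstrapped TD targets for Q-learning: r + gamma * max_a' Q(s', a'),
# zeroing the bootstrap term on terminal transitions. A schematic sketch of
# the DQN learning signal, not the full training loop.
def td_targets(q_next, rewards, dones, gamma=0.99):
    # q_next: (batch, n_actions) Q-values at next states; rewards/dones: (batch,)
    return rewards + gamma * (1.0 - dones) * q_next.max(axis=1)

print(td_targets(np.array([[1.0, 2.0], [0.5, 0.0]]),
                 rewards=np.array([0.0, 1.0]),
                 dones=np.array([0.0, 1.0])))   # -> [1.98, 1.0]
```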

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2).... (A minimal sketch of this decomposition follows these excerpts.)

    [...]
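The decomposition behind these four transformations is the singular value decomposition M = UΣV†: the two unitaries map onto programmable interferometer meshes and the diagonal Σ onto attenuation or gain. Here is a minimal numpy sketch of that factorization, together with a crude model of the 16-bit phase precision mentioned in the first excerpt; the quantization model is our illustrative assumption, not the paper's:

```python
import numpy as np

# A trained weight matrix factors as M = U @ diag(s) @ Vh (SVD); the unitaries
# U and Vh map onto interferometer meshes and diag(s) onto attenuation/gain,
# giving the four transformations quoted above. The 16-bit phase quantization
# below is an illustrative model of finite phase-setting precision.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(M)
assert np.allclose(U @ np.diag(s) @ Vh, M)  # exact reconstruction before quantization

def quantize_phases(A, bits=16):
    # Round each entry's phase to the nearest multiple of 2*pi / 2**bits.
    step = 2 * np.pi / 2**bits
    return np.abs(A) * np.exp(1j * np.round(np.angle(A) / step) * step)

M_q = quantize_phases(U) @ np.diag(s) @ quantize_phases(Vh)
print(np.linalg.norm(M_q - M))  # small residual from finite phase precision
```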

Journal ArticleDOI
28 Jul 2006 - Science
TL;DR: In this article, an effective way of initializing the weights is described that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
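The initialization strategy is greedy layer-wise pretraining: train each layer to reconstruct its own input, then stack the learned encoders and fine-tune. Below is a minimal sketch with tied-weight linear layers and plain gradient descent; the paper pretrains nonlinear layers with restricted Boltzmann machines, so the linear variant and all sizes here are simplifying assumptions:

```python
import numpy as np

# Greedy layer-wise pretraining with tied-weight linear autoencoder layers
# trained by gradient descent on reconstruction error. A simplified sketch of
# the stacking idea; the paper's pretraining uses RBMs and nonlinear units.
def pretrain_stack(X, layer_sizes, lr=0.01, epochs=300, seed=0):
    rng = np.random.default_rng(seed)
    weights, H = [], X
    for k in layer_sizes:
        W = 0.01 * rng.standard_normal((H.shape[1], k))
        for _ in range(epochs):
            E = H @ W @ W.T - H                              # reconstruction error
            W -= lr * (2 / len(H)) * (H.T @ E + E.T @ H) @ W # grad of ||E||^2
        weights.append(W)
        H = H @ W                     # this layer's code feeds the next layer
    return weights                    # initialization for whole-stack fine-tuning

stack = pretrain_stack(np.random.default_rng(1).standard_normal((256, 64)), [32, 8])
```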

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; the weighting parameters in each matrix are subsequently optimized using backpropagation [16].... (A minimal sketch of this training loop follows this excerpt.)

    [...]
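For completeness, here is a minimal forward-propagation/backpropagation loop of the kind the excerpt describes, in plain numpy; the toy data, sizes and learning rate are illustrative assumptions. In the ONN setting, each trained weight matrix would then be decomposed and programmed onto the meshes as in the SVD sketch above:

```python
import numpy as np

# One-hidden-layer network trained by forward propagation + backpropagation.
# Toy task and hyperparameters are illustrative, not from the paper.
rng = np.random.default_rng(0)
X = rng.standard_normal((128, 4))                # toy inputs
y = (X[:, :1] * X[:, 1:2] > 0).astype(float)     # toy binary labels
W1 = 0.5 * rng.standard_normal((4, 8))
W2 = 0.5 * rng.standard_normal((8, 1))

for _ in range(2000):
    h = np.tanh(X @ W1)                          # forward propagation
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))          # sigmoid output
    dp = (p - y) / len(X)                        # cross-entropy gradient
    dh = (dp @ W2.T) * (1.0 - h**2)              # backpropagate through tanh
    W2 -= 0.5 * (h.T @ dp)                       # gradient-descent updates
    W1 -= 0.5 * (X.T @ dh)
```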