Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017 - Nature Photonics, Vol. 11, Iss. 7, pp. 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and a power-efficiency enhancement of three orders of magnitude over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: A unified model for thermo-optic feedback tuning that can be specialized to different applications is presented, its recent advances are reviewed, and its future trends are discussed.
Abstract: As Moore’s law approaches its end, electronics is hitting its power, bandwidth, and capacity limits. Photonics can overcome the performance limits of electronics but lacks a practical photonic register and flexible control. Combining electronics and photonics provides the best of both worlds and is widely regarded as an important post-Moore direction. For stability and for dynamic operation, feedback tuning of photonic devices is required. In silicon photonics, the thermo-optic effect is the most frequently used tuning mechanism owing to its high efficiency and low loss; however, it brings new design requirements and, with them, new design challenges. Emerging applications such as optical phased arrays, optical switches, and optical neural networks employ large numbers of photonic devices, so printed-circuit-board (PCB) tuning solutions are no longer suitable. Electronic-photonic-converged solutions with compact footprints will play an important role in system scalability. In this paper, we present a unified model for thermo-optic feedback tuning that can be specialized to different applications, review its recent advances, and discuss its future trends.
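As a flavor of what such feedback tuning involves, here is a minimal Python sketch of a heater-based lock on a microring resonance; the Lorentzian line shape, gains, and heater model are illustrative assumptions, not values from the paper:

```python
def transmission(detuning, linewidth=1.0):
    """Lorentzian notch of an all-pass microring (toy model)."""
    return 1.0 - 1.0 / (1.0 + (2.0 * detuning / linewidth) ** 2)

detuning = 2.5       # initial offset from resonance, in linewidths (assumed)
kp = 2.0             # proportional gain (assumed)
heater_gain = 0.08   # detuning shift per unit controller output (assumed)
setpoint = 0.5       # lock point on the side of the resonance

for step in range(200):
    error = setpoint - transmission(detuning)   # photodiode feedback signal
    detuning += heater_gain * kp * error        # heater accumulates the drive

print(f"locked reading: {transmission(detuning):.3f} (target {setpoint})")
```

The unified-model framing in the abstract suggests the same loop structure applies whether the tuned device is a ring, an MZI phase shifter, or a phased-array element; only the transfer curve and gains change.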

4 citations


Cites background from "Deep learning with coherent nanopho..."

  • ...1(b)[2]], EPC is moving to the next design hierarchy, that is, electronic-photonic heterogeneously-converging integrated circuits....


  • ...(b) Design hierarchy of electronic-photonic convergence[2]....


Proceedings ArticleDOI
18 Jan 2021
TL;DR: In this paper, the authors review several physical synthesis techniques for advanced neural network processors and argue that datapath design is an essential methodology in such flows due to the organized computational graph of neural networks.
Abstract: The remarkable breakthroughs in deep learning have led to a dramatic thirst for computational resources to tackle interesting real-world problems. Various neural network processors have been proposed for the purpose, yet far fewer discussions have been made on the physical synthesis for such specialized processors, especially in advanced technology nodes. In this paper, we review several physical synthesis techniques for advanced neural network processors. We especially argue that datapath design is an essential methodology in the above procedures due to the organized computational graph of neural networks. As a case study, we investigate a wafer-scale deep learning accelerator placement problem in detail.
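To make the datapath argument concrete, here is a toy comparison (entirely illustrative, not from the paper) of total Manhattan wirelength for an 8×8 systolic mesh of processing elements placed on a regular grid versus placed randomly; exploiting the organized computational graph keeps nearest-neighbor links short:

```python
import random

def wirelength(coords, n):
    """Total Manhattan length of nearest-neighbor links in an n x n mesh."""
    total = 0
    for r in range(n):
        for c in range(n):
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < n and cc < n:
                    (x1, y1), (x2, y2) = coords[(r, c)], coords[(rr, cc)]
                    total += abs(x1 - x2) + abs(y1 - y2)
    return total

n = 8
regular = {(r, c): (r, c) for r in range(n) for c in range(n)}
slots = list(regular.values())
random.shuffle(slots)
scrambled = dict(zip(regular.keys(), slots))

print("regular grid    :", wirelength(regular, n))    # every link has length 1
print("random placement:", wirelength(scrambled, n))  # much longer wires
```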

4 citations

Journal ArticleDOI
TL;DR: Sagnac interference has been widely used for reflection manipulation, precision measurements, and spectral engineering in optical systems, as mentioned in this paper. It offers attractive advantages: a reduced system complexity without the need for phase control between different pathways, a high degree of stability against external disturbance, and a low wavelength dependence.
Abstract: As a fundamental optical approach to interferometry, Sagnac interference has been widely used for reflection manipulation, precision measurements, and spectral engineering in optical systems. Compared to other interferometry configurations, it offers attractive advantages by yielding a reduced system complexity without the need for phase control between different pathways, thus offering a high degree of stability against external disturbance and a low wavelength dependence. The advance of integration fabrication techniques has enabled chip-scale Sagnac interferometers with greatly reduced footprint and improved scalability compared to more conventional approaches implemented by spatial light or optical fiber devices. This facilitates a variety of integrated photonic devices with bidirectional light propagation, showing new features and capabilities compared to unidirectional-light-propagation devices, such as Mach–Zehnder interferometers (MZIs) and ring resonators (RRs). This paper reviews functional integrated photonic devices based on Sagnac interference. First, the basic theory of integrated Sagnac interference devices is introduced, together with comparisons to other integrated photonic building blocks, such as MZIs, RRs, photonic crystal cavities, and Bragg gratings. Next, the applications of Sagnac interference in integrated photonics, including reflection mirrors, optical gyroscopes, basic filters, wavelength (de)interleavers, optical analogues of quantum physics, and others, are systematically reviewed. Finally, the open challenges and future perspectives are discussed.
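As one piece of the basic theory such a review covers: a Sagnac loop mirror built from a single 2×2 coupler with power coupling ratio K has, in the lossless case, reflectance R = 4K(1 − K) and transmittance T = (1 − 2K)². A quick numerical check (a generic textbook formula, not code from the paper):

```python
import numpy as np

K = np.linspace(0.0, 1.0, 5)   # power coupling ratio of the 2x2 coupler
R = 4 * K * (1 - K)            # loop-mirror reflectance (lossless case)
T = (1 - 2 * K) ** 2           # loop-mirror transmittance

for k, r, t in zip(K, R, T):
    print(f"K={k:.2f}  R={r:.2f}  T={t:.2f}  R+T={r + t:.2f}")
# A 3 dB coupler (K = 0.5) gives R = 1: the loop acts as a perfect mirror.
```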

4 citations

Peer ReviewDOI
TL;DR: In this article, the physical implementation of basic optical calculations, such as differentiation and integration, using metamaterials is reviewed, together with the realization of all-optical artificial neural networks.
Abstract: The explosion in the amount of information being processed is prompting the need for new computing systems beyond existing electronic computers. Photonic computing is emerging as an attractive alternative because it performs calculations at the speed of light, offers the chance for massive parallelism, and consumes extremely little energy. We review the physical implementation of basic optical calculations, such as differentiation and integration, using metamaterials, and introduce the realization of all-optical artificial neural networks. We start with concise introductions of the mathematical principles behind such optical computation methods and present the advantages, current problems that need to be overcome, and the potential future directions in the field. We expect that our review will be useful for both novice and experienced researchers in the field of all-optical computing platforms using metamaterials.
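A sketch of the differentiation example: a metamaterial differentiator applies the spatial-frequency transfer function H(k) = ik to the incident field, which is exactly a multiplication by ik in Fourier space. The numerical analogue (illustrative only, not code from the paper):

```python
import numpy as np

# Differentiate a Gaussian "beam profile" by filtering in Fourier space,
# the operation a metamaterial differentiator performs optically.
N, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
field = np.exp(-x**2)                        # input field profile
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # spatial frequencies

derivative = np.fft.ifft(1j * k * np.fft.fft(field)).real
exact = -2 * x * np.exp(-x**2)               # analytic d/dx of the Gaussian
print("max error:", np.max(np.abs(derivative - exact)))  # near machine precision
```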

4 citations

Posted Content
TL;DR: A fiber-compatible scheme for measurement and feed-forward is presented, its performance benchmarked by remote preparation of single-photon polarization states at telecom wavelengths; the methods are useful for photonic quantum experiments including computing, communication, and teleportation.
Abstract: Both photonic quantum computation and the establishment of a quantum internet require fiber-based measurement and feed-forward in order to be compatible with existing infrastructure. Here we present an all-fiber scheme for measurement and feed-forward, whose performance is benchmarked by carrying out remote preparation of single-photon polarization states at telecom wavelengths. The result of a projective measurement on one photon deterministically controls the path a second photon takes with ultrafast optical switches. By placing well-calibrated passive polarization optics in the paths, we achieve a measurement and feed-forward fidelity of (99.0 ± 0.5)%. Our methods are useful for photonic quantum experiments including computing, communication, and teleportation.
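For reference, the quoted fidelity for a pure target state |ψ⟩ and prepared state ρ is F = ⟨ψ|ρ|ψ⟩. A minimal check on a made-up polarization state (the state and noise level below are hypothetical, not the experiment's data):

```python
import numpy as np

# Fidelity F = <psi| rho |psi> between a pure target polarization state
# and a prepared, slightly mixed state. All values here are made up.
psi = np.array([1, 1j]) / np.sqrt(2)        # target: circular polarization
target = np.outer(psi, psi.conj())

p = 0.99                                     # assumed preparation quality
rho = p * target + (1 - p) * np.eye(2) / 2   # small white-noise admixture

fidelity = np.real(psi.conj() @ rho @ psi)
print(f"F = {fidelity:.3f}")                 # 0.995
```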

4 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network with five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance in ImageNet classification, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
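A compact PyTorch rendition of the architecture described above (a common single-GPU variant: it keeps the five-conv/three-FC layout, ReLU nonlinearities, and dropout, but omits the original two-GPU split and local response normalization):

```python
import torch
import torch.nn as nn

# Five convolutional layers (some followed by max-pooling) and three
# fully-connected layers producing 1000 class scores, as in the abstract.
alexnet = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),  # "dropout"
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),   # logits; the 1000-way softmax sits in the loss
)

logits = alexnet(torch.randn(1, 3, 227, 227))  # one fake 227x227 RGB image
print(logits.shape)  # torch.Size([1, 1000])
```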

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
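A minimal numpy sketch of the mechanism described here: forward propagation computes each layer's representation from the previous layer's, and backpropagation sends the error gradient back to update the parameters (a generic textbook example, not code from the review):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))    # 4 samples, 3 features
y = rng.normal(size=(4, 1))    # regression targets
W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 1))

for step in range(500):
    # Forward propagation: layer by layer.
    h = np.tanh(x @ W1)
    pred = h @ W2
    # Backpropagation: the chain rule sends the error signal backwards.
    grad_pred = 2 * (pred - y) / len(x)
    grad_W2 = h.T @ grad_pred
    grad_h = grad_pred @ W2.T
    grad_W1 = x.T @ (grad_h * (1 - h**2))   # tanh'(z) = 1 - tanh(z)^2
    W1 -= 0.1 * grad_W1
    W2 -= 0.1 * grad_W2

print("final loss:", float(np.mean((np.tanh(x @ W1) @ W2 - y) ** 2)))
```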

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
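The update at the heart of the deep Q-network is the temporal-difference target y = r + γ·max_a′ Q(s′, a′). A stripped-down PyTorch version of one training step on a fake batch (network sizes and hyperparameters are placeholders, not the paper's setup):

```python
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net.load_state_dict(q_net.state_dict())  # periodically synced copy
opt = torch.optim.RMSprop(q_net.parameters(), lr=1e-3)
gamma = 0.99

# One fake batch of (state, action, reward, next_state, done) transitions.
s = torch.randn(32, 4); a = torch.randint(0, 2, (32, 1))
r = torch.randn(32); s2 = torch.randn(32, 4); done = torch.zeros(32)

q_sa = q_net(s).gather(1, a).squeeze(1)          # Q(s, a)
with torch.no_grad():                            # TD target from frozen net
    y = r + gamma * (1 - done) * target_net(s2).max(dim=1).values
loss = nn.functional.smooth_l1_loss(q_sa, y)     # penalize the TD error
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```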

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....


  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....


  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....


  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

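The third and fourth excerpts above describe the paper's singular-value-decomposition scheme, M^(i) = U^(i)Σ^(i)V^(i), split across optical interference units (OIUs), while the first excerpt lists finite phase-setting precision among the accuracy limits. The numpy sketch below reconstructs a weight matrix from its SVD factors and then jitters the unitaries' phases to mimic limited phase precision; the jitter model is a crude assumption, not the paper's error analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))     # a trained 4x4 weight matrix
U, S, Vh = np.linalg.svd(M)     # M = U @ diag(S) @ Vh

# The unitaries plus the diagonal realize M exactly, as in the excerpts.
print("exact reconstruction error:",
      np.max(np.abs(U @ np.diag(S) @ Vh - M)))

# Crude model of finite phase precision: jitter each unitary with random
# output phases scaled by 2**-bits (this keeps the matrices unitary).
def jitter(Q, bits):
    phases = rng.uniform(-np.pi, np.pi, size=Q.shape[0]) / 2**bits
    return np.diag(np.exp(1j * phases)) @ Q

for bits in (4, 8, 16):
    M_noisy = jitter(U, bits) @ np.diag(S) @ jitter(Vh, bits)
    err = np.linalg.norm(M_noisy - M) / np.linalg.norm(M)
    print(f"{bits:2d}-bit phases -> relative error {err:.1e}")
```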

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: An effective way of initializing the weights is described that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
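A minimal PyTorch autoencoder of the shape the abstract describes: a multilayer network with a small central code layer, trained to reconstruct its input by gradient descent (a generic sketch; the paper's layer-wise pretraining of the initial weights is not shown):

```python
import torch
import torch.nn as nn

# The encoder squeezes 784-dim inputs to a 2-dim central code; the decoder
# mirrors it. Training minimizes reconstruction error, as in the abstract.
autoencoder = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 2),                 # small central layer (the "code")
    nn.Linear(2, 128), nn.ReLU(),
    nn.Linear(128, 784),
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

x = torch.rand(64, 784)                # stand-in for a batch of images
for step in range(100):                # fine-tuning by gradient descent
    loss = nn.functional.mse_loss(autoencoder(x), x)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```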

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....
