Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
01 Dec 2021-PhotoniX
TL;DR: In this article, the authors review several nonlinear optical applications, such as electric-field-induced second-harmonic generation, entangled photon pair generation, terahertz generation, all-optical modulation, and high-harmonic generation, for which they envision meta-optics may have distinct advantages over bulk counterparts.
Abstract: Nonlinear optical effects have enabled numerous applications such as laser frequency conversion, ultrafast electro-optic and all-optical modulation. Both gaseous and bulk media have conventionally been used for free-space nonlinear optical applications, yet they often require complex phase-matching techniques for efficient operation and may have limited operation bandwidth due to material absorption. In the last decade, meta-optics made of subwavelength antennas or films have emerged as novel nonlinear optical media that may potentially overcome certain limitations of bulk crystals. Due to resonant enhancements of the pump laser field as well as the use of materials with extreme nonlinearity such as epsilon-near-zero materials, meta-optics can achieve strong nonlinear responses with a subwavelength thickness. Here, we review several nonlinear optical applications, such as electric-field-induced second-harmonic generation, entangled photon pair generation, terahertz generation, all-optical modulation, and high-harmonic generation, for which we envision meta-optics may have distinct advantages over their bulk counterparts. We outline the challenges still faced by nonlinear meta-optics and point out some potential directions.

4 citations

Proceedings ArticleDOI
24 Feb 2020
TL;DR: Experimental results comparing the performance of the CIM to quantum annealers (QAs) on two classes of NP-hard optimization problems: ground state calculation of the Sherrington-Kirkpatrick (SK) model and MAX-CUT show an exponential performance penalty relative to CIMs.
Abstract: The coherent Ising machine (CIM) is a network of optical parametric oscillators (OPOs) that solves for the ground state of Ising problems through OPO bifurcation dynamics. Here, we present experimental results comparing the performance of the CIM to quantum annealers (QAs) on two classes of NP-hard optimization problems: ground state calculation of the Sherrington-Kirkpatrick (SK) model and MAX-CUT. While the two machines perform comparably on sparsely-connected problems such as cubic MAX-CUT, on problems with dense connectivity, the QA shows an exponential performance penalty relative to CIMs. We attribute this to the embedding overhead required to map dense problems onto the sparse hardware architecture of the QA, a problem that can be overcome in photonic architectures such as the CIM.
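The Ising mapping this abstract relies on can be made concrete: MAX-CUT on a graph is equivalent to finding the ground state of an Ising Hamiltonian with couplings on the edges. Below is a minimal brute-force sketch (function names are our own, and a real CIM solves this through OPO bifurcation dynamics rather than enumeration):

```python
import itertools
import numpy as np

def ising_energy(J, s):
    """Ising energy H(s) = sum_{i<j} J_ij s_i s_j for spins s_i = +/-1."""
    s = np.asarray(s)
    return float(np.sum(np.triu(J, 1) * np.outer(s, s)))

def max_cut_brute_force(edges, n):
    """MAX-CUT as an Ising ground-state search: put J_ij = +1 on each edge,
    so minimizing H maximizes the number of edges crossing the spin partition."""
    J = np.zeros((n, n))
    for i, j in edges:
        J[min(i, j), max(i, j)] = 1.0
    best_s, best_e = None, float("inf")
    for candidate in itertools.product([-1, 1], repeat=n):
        e = ising_energy(J, candidate)
        if e < best_e:
            best_e, best_s = e, candidate
    cut = sum((1 - best_s[i] * best_s[j]) // 2 for i, j in edges)
    return best_s, cut

# 4-cycle graph: the alternating partition cuts all four edges
spins, cut = max_cut_brute_force([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
print(cut)  # 4
```

The exponential cost of this enumeration is exactly what the CIM and QA hardware compete to avoid; the embedding overhead discussed above concerns how densely-coupled J matrices map onto sparse physical hardware.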

4 citations


Cites background from "Deep learning with coherent nanopho..."

  • ...Photonics offers unique advantages for information processing, including high data rates [8], low latency, low power consumption [9], elimination of the interconnect bottleneck [10], and the ability to perform linear algebra operations with passive optics [11,12]. Motivated by this, we have proposed an optical annealer called the coherent Ising machine...

    [...]

Journal ArticleDOI
TL;DR: In this article, a method for learning the precision of each layer of a pre-trained model without retraining network weights was proposed, reducing energy consumption by up to 89% for computer vision models and by 24% for natural language processing models such as BERT.
Abstract: Analog electronic and optical computing exhibit tremendous advantages over digital computing for accelerating deep learning when operations are executed at low precision. Although digital architectures support programmable precision to increase efficiency, analog computing architectures today only support a single, static precision. In this work, we characterize the relationship between the effective number of bits (ENOB) of precision of analog processors, which is limited by noise, and digital bit precision for quantized neural networks. We propose extending analog computing architectures to support dynamic levels of precision by repeating operations and averaging the result, decreasing the impact of noise. To utilize dynamic precision, we propose a method for learning the precision of each layer of a pre-trained model without retraining network weights. We evaluate this method on analog architectures subject to shot noise, thermal noise, and weight noise and find that employing dynamic precision reduces energy consumption by up to 89% for computer vision models such as Resnet50 and by 24% for natural language processing models such as BERT. In one example, we apply dynamic precision to a shot-noise limited homodyne optical neural network and simulate inference at an optical energy consumption of 2.7 aJ/MAC for Resnet50 and 1.6 aJ/MAC for BERT with <2% accuracy degradation, implying that the optical energy consumption is unlikely to be the dominant cost.
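The repeat-and-average scheme described above can be sketched numerically: averaging k independent noisy executions shrinks the noise standard deviation by sqrt(k), i.e. roughly 0.5·log2(k) extra effective bits. A minimal illustration, with additive Gaussian noise standing in for the shot/thermal/weight noise of a real analog processor (function names and noise model are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(W, x, noise_std):
    """One noisy analog pass: the exact product plus additive Gaussian read-out noise."""
    y = W @ x
    return y + rng.normal(0.0, noise_std, size=y.shape)

def averaged_matvec(W, x, noise_std, repeats):
    """Repeat the noisy operation and average the results: the noise standard
    deviation shrinks by sqrt(repeats), adding ~0.5*log2(repeats) effective bits."""
    return np.mean([analog_matvec(W, x, noise_std) for _ in range(repeats)], axis=0)

W = rng.standard_normal((64, 64))
x = rng.standard_normal(64)
exact = W @ x

err_single = np.std(analog_matvec(W, x, 0.1) - exact)
err_avg16 = np.std(averaged_matvec(W, x, 0.1, 16) - exact)
print(err_single > err_avg16)  # True: 16 repeats cut the noise roughly fourfold
```

The per-layer precision learning in the paper then amounts to choosing `repeats` separately for each layer to meet an accuracy target at minimum total energy.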

4 citations

Journal ArticleDOI
TL;DR: In this paper, the authors review the recent progress of silicon-based slow-light electro-optic modulators towards future communication requirements and discuss the existing challenges and development directions of silicon-based slow-light electro-optic modulators for practical applications.
Abstract: As an important optoelectronic integration platform, silicon photonics has achieved significant progress in recent years, demonstrating the advantages on low power consumption, low cost, and complementary metal–oxide–semiconductor (CMOS) compatibility. Among the different silicon photonics devices, the silicon electro-optic modulator is a key active component to implement the conversion of electric signal to optical signal. However, conventional silicon Mach–Zehnder modulators and silicon micro-ring modulators both have their own limitations, which will limit their use in future systems. For example, the conventional silicon Mach–Zehnder modulators are hindered by large footprint, while the silicon micro-ring modulators have narrow optical bandwidth and high temperature sensitivity. Therefore, developing a new structure for silicon modulators to improve the performance is a crucial research direction in silicon photonics. Meanwhile, slow-light effect is an important physical phenomenon that can reduce the group velocity of light. Applying slow-light effect on silicon modulators through photonics crystal and waveguide grating structures is an attractive research point, especially in the aspect of reducing the device footprint. In this paper, we review the recent progress of silicon-based slow-light electro-optic modulators towards future communication requirements. Beginning from the principle of slow-light effect, we summarize the research of silicon photonic crystal modulators and silicon waveguide grating modulators in detail. Simultaneously, the experimental results of representative silicon slow-light modulators are compared and analyzed. Finally, we discuss the existing challenges and development directions of silicon-based slow-light electro-optic modulators for the practical applications.

4 citations

Journal ArticleDOI
TL;DR: It is believed that PAICs may play a critical role in the deployment of data processing technology with the conceivable exhaustion of Moore’s Law.
Abstract: Artificial intelligence chips (AICs) are the intersection of integrated circuits and artificial intelligence (AI), involving structure design, algorithm analysis, chip fabrication and application scenarios. Due to their excellent ability in data processing, AICs show a long-term industrial prospect in big data services, cloud centers, etc. However, with the conceivable exhaustion of Moore's Law, the size of traditional electronic AICs (EAICs) is gradually approaching the limit, and an architectural update is highly required. Photonic artificial intelligence chips (PAICs) utilize light propagation in silicon waveguides, offering high parallelism, fast calculation speed and low latency. Owing to this light-based operation, PAICs perform well in anti-electromagnetic interference and energy conservation. This invited paper summarizes the recent research on PAICs. The characteristics of different hardware structures are discussed, the currently widely used training algorithm is given, and the Photonic Design Automation (PDA) simulation platform is introduced. In addition, the authors' related work on PAICs is presented, and we believe that PAICs may play a critical role in the deployment of data processing technology.

4 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A large, deep convolutional neural network, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on ImageNet, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
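The backpropagation procedure this abstract describes, using the chain rule to carry the output error back through each layer and update its parameters, can be illustrated with a minimal two-layer network (a toy sketch on synthetic data, not the networks discussed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data with a known linear target (easy for a small network to fit)
X = rng.standard_normal((32, 3))
y = (X @ np.array([1.0, -2.0, 0.5]))[:, None]

W1 = 0.5 * rng.standard_normal((3, 8))   # input -> hidden weights
W2 = 0.5 * rng.standard_normal((8, 1))   # hidden -> output weights

lr, losses = 0.05, []
for _ in range(200):
    h = np.tanh(X @ W1)                  # forward pass: hidden representation
    out = h @ W2                         # forward pass: output layer
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backward pass: chain rule from the output error back to each weight matrix
    dW2 = h.T @ err / len(X)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dh / len(X)
    W2 -= lr * dW2                       # gradient-descent weight updates
    W1 -= lr * dW1

print(losses[0] > losses[-1])  # True: backprop training reduces the loss
```

Each layer's gradient is computed from the layer above it, which is exactly the "indicate how a machine should change its internal parameters" step in the abstract.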

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]
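The transformations listed above follow the singular-value decomposition M = UΣV†, with the unitaries realized by interferometer meshes and Σ by per-mode gain or attenuation. A small NumPy check of that two-stage factorization (illustrative only; real-valued matrices for simplicity):

```python
import numpy as np

rng = np.random.default_rng(42)

M = rng.standard_normal((4, 4))          # one layer's weight matrix

# SVD: M = U @ diag(sigma) @ Vh. Optically, the unitaries U and Vh map to
# passive interferometer meshes and diag(sigma) to per-mode gain/attenuation.
U, sigma, Vh = np.linalg.svd(M)

def apply_layer(x):
    x = Vh @ x               # the "V" transformation (its adjoint acts first)
    x = sigma * x            # singular-value scaling, Sigma
    return U @ x             # the unitary part of the "U Sigma" stage

x = rng.standard_normal(4)
print(np.allclose(apply_layer(x), M @ x))  # True
```

Because U and Vh are unitary, both can be implemented passively, which is the basis for the claim above that a trained network can compute without additional energy input.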

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

    [...]