Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Proceedings ArticleDOI
03 Oct 2022
TL;DR: In this paper, the authors review the major achievements in recent years, showing how high integration can lead to better performance but can also limit the scalability of the overall system.
Abstract: Photonic tensor core circuits have been widely explored as possible hardware accelerators for the next generation of machine learning applications, due to the large bandwidth, low latency, and energy savings that light offers. Many architectures have been presented, especially exploiting photonic integrated circuits. However, most of the proposed solutions lack some features, such as integration, scalability, or energy saving. In this paper, we review the major achievements in recent years, showing how high integration can lead to better performance but can also limit the scalability of the overall system.

3 citations

Journal ArticleDOI
30 May 2022-PhotoniX
TL;DR: In this paper, an intelligent processor composed of photonic and electronic neurons for OAM spectrum measurement was proposed, where optical layers extract invisible topological charge information from incoming light and a shallow electronic layer predicts the exact spectrum.
Abstract: Orbital angular momentum (OAM) detection underpins almost all aspects of vortex beams' advances such as communication and quantum analogy. Conventional schemes are frustrated by low speed, complicated systems, and limited detection range. Here, we devise an intelligent processor composed of photonic and electronic neurons for OAM spectrum measurement in a fast, accurate and direct manner. Specifically, optical layers extract invisible topological charge information from incoming light and a shallow electronic layer predicts the exact spectrum. The integration of optical computing promises a compact single-shot system with high speed and energy efficiency (optical operations / electronic operations ~ $10^{3}$), necessitating neither a reference wave nor repetitive steps. Importantly, our processor is endowed with salient generalization ability and robustness against diverse structured light and adverse effects (mean squared error ~ $10^{-5}$). We further raise a universal model interpretation paradigm to reveal the underlying physical mechanisms in the hybrid processor, as distinct from conventional 'black-box' networks. Such an interpretation algorithm can improve the detection efficiency up to 25-fold. We also complete the theory of the optoelectronic network, enabling its efficient training. This work not only contributes to explorations of OAM physics and applications but also broadly inspires advanced links between intelligent computing and physical effects.

3 citations

Proceedings ArticleDOI
01 Sep 2020
TL;DR: This work presents the classification performance analysis of a single layer optical neural network implemented by Reck and Clements meshes in the presence of experimental imperfections.
Abstract: Mach Zehnder Interferometer-based reconfigurable structures are promising candidates for fast and power-efficient computations in optical neural networks. This work presents the classification performance analysis of a single layer optical neural network implemented by Reck and Clements meshes in the presence of experimental imperfections.
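The building block behind both Reck and Clements meshes can be sketched in a few lines. The phase-shifter placement below is one common convention, assumed for illustration; whatever the convention, the transfer matrix of a Mach-Zehnder interferometer built from two 50:50 couplers and two phase shifters is unitary, which is why meshes of them can realize arbitrary unitary weight matrices.

```python
import numpy as np

# Transfer matrix of a 50:50 directional coupler (beamsplitter)
BS = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                  [1j, 1]])

def mzi(theta, phi):
    """Mach-Zehnder interferometer: an input phase shifter (phi) and an
    internal phase shifter (theta) sandwiched between two 50:50 couplers.
    This placement is one common convention, assumed here."""
    P_theta = np.diag([np.exp(1j * theta), 1.0])  # internal arm phase
    P_phi = np.diag([np.exp(1j * phi), 1.0])      # input phase
    return BS @ P_theta @ BS @ P_phi

# Any setting of the phases yields a unitary 2x2 transfer matrix,
# so a mesh of MZIs (Reck or Clements layout) is unitary as well.
T = mzi(0.7, 1.3)
assert np.allclose(T.conj().T @ T, np.eye(2))
```

Experimental imperfections of the kind the paper analyzes (phase errors, coupler imbalance) perturb these matrices away from the ideal unitaries, which is what degrades classification accuracy.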

3 citations


Cites methods from "Deep learning with coherent nanopho..."

  • ...In [4] and [5], the ONNs implemented by a Reck structure achieved a classification accuracy approximately 25% lower than that of a digital NN trained on the same datasets....

    [...]

Journal ArticleDOI
TL;DR: In this article, the effect of electrostatic doping on the optical properties of transition metal dichalcogenides (TMDs) at near infrared (NIR) wavelengths was investigated.
Abstract: Two dimensional materials such as graphene and transition metal dichalcogenides (TMDs) are promising for optical modulation, detection, and light emission since their material properties can be tuned on-demand via electrostatic doping. The optical properties of TMDs have been shown to change drastically with doping in the wavelength range near the excitonic resonances. However, little is known about the effect of doping on the optical properties of TMDs away from these resonances, where the material is transparent and therefore could be leveraged in photonic circuits. Here, we probe the electro-optic response of monolayer TMDs at near infrared (NIR) wavelengths (i.e. deep in the transparency regime), by integrating them on silicon nitride (SiN) photonic structures to induce strong light–matter interaction with the monolayer. We dope the monolayer to carrier densities of ($7.2 \pm 0.8$) $\times$ $10^{13}\,\textrm{cm}^{-2}$, by electrically gating the TMD using an ionic liquid. We show a strong electro-refractive response in monolayer tungsten disulphide (WS$_2$) at NIR wavelengths by measuring a large change in the real part of the refractive index, $\Delta$n = $0.53$, with only a minimal change in the imaginary part, $\Delta$k = $0.004$. The ratio of the doping-induced phase change to the induced absorption measured for WS$_2$ ($\Delta$n/$\Delta$k $\sim 125$), a key figure of merit for photonics, is an order of magnitude higher than that of bulk materials like silicon ($\Delta$n/$\Delta$k $\sim 10$), making WS$_2$ ideal for various photonic applications. We further utilize this strong tunable effect to demonstrate an electrostatically gated SiN-WS$_2$ phase modulator using a WS$_2$-HfO$_2$ (Hafnia)-ITO (Indium Tin Oxide) capacitive configuration that achieves a phase modulation efficiency (V$_\pi$L) of 0.8 V $\cdot$ cm with an RC-limited bandwidth of 0.3 GHz.
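The quoted figures translate into simple back-of-the-envelope numbers. The 1550 nm operating wavelength and the 1 mm device length below are illustrative assumptions, not values taken from the abstract:

```python
# Back-of-the-envelope numbers from the reported figures.
wavelength = 1550e-9   # m; an assumed NIR operating point, not from the abstract
delta_n = 0.53         # reported change in the real part of the refractive index
v_pi_L = 0.8           # V*cm; reported phase modulation efficiency

# Propagation length over which the full index change accumulates a pi
# phase shift: delta_phi = 2*pi*delta_n*L/lambda = pi  =>  L = lambda/(2*delta_n)
L_pi = wavelength / (2 * delta_n)   # ~1.46 micrometres

# Drive voltage of a hypothetical 1 mm (0.1 cm) long modulator
v_pi = v_pi_L / 0.1                 # 8 V
```

The micrometre-scale L_pi reflects the unusually large index change available under ionic-liquid gating; the practical modulator figure is the quoted V$_\pi$L.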

3 citations

Journal ArticleDOI
01 Jan 2023
TL;DR: A comprehensive survey of literature on intelligent computing is presented in this paper, covering its theory fundamentals, the technological fusion of intelligence and computing, important applications, challenges, and future perspectives.
Abstract: Computing is a critical driving force in the development of human civilization. In recent years, we have witnessed the emergence of intelligent computing, a new computing paradigm that is reshaping traditional computing and promoting digital revolution in the era of big data, artificial intelligence, and internet of things with new computing theories, architectures, methods, systems, and applications. Intelligent computing has greatly broadened the scope of computing, extending it from traditional computing on data to increasingly diverse computing paradigms such as perceptual intelligence, cognitive intelligence, autonomous intelligence, and human–computer fusion intelligence. Intelligence and computing have undergone paths of different evolution and development for a long time but have become increasingly intertwined in recent years: Intelligent computing is not only intelligence oriented but also intelligence driven. Such cross-fertilization has prompted the emergence and rapid advancement of intelligent computing. Intelligent computing is still in its infancy, and an abundance of innovations in the theories, systems, and applications of intelligent computing is expected to occur soon. We present the first comprehensive survey of literature on intelligent computing, covering its theory fundamentals, the technological fusion of intelligence and computing, important applications, challenges, and future perspectives. We believe that this survey is highly timely and will provide a comprehensive reference and cast valuable insights into intelligent computing for academic and industrial researchers and practitioners.

3 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on the ImageNet benchmark, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
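A quick parameter count shows where most of the 60 million parameters sit. The 6×6×256 flattened size of the last pooling output is assumed from the standard AlexNet layout; the abstract itself only states the totals:

```python
# Where AlexNet's ~60 million parameters live: almost entirely in the
# three fully connected layers named in the abstract.
def dense_params(n_in, n_out):
    """Weights plus biases of one fully connected layer."""
    return (n_in + 1) * n_out

fc_layers = [
    dense_params(6 * 6 * 256, 4096),  # flattened pool output -> fc6 (assumed size)
    dense_params(4096, 4096),         # fc6 -> fc7
    dense_params(4096, 1000),         # fc7 -> final 1000-way softmax
]
total_fc = sum(fc_layers)  # ~58.6M of the ~60M parameters
```

The dominance of the dense layers is one reason dropout was applied there specifically.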

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
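The backpropagation step described above can be sketched for a tiny two-layer network. The data and layer sizes are invented for illustration; the point is the chain rule carrying the output error back to each weight matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy regression problem: 8 samples, 3 inputs, 4 hidden units, 1 output
X = rng.normal(size=(8, 3))
y = rng.normal(size=(8, 1))
W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

def loss(W1, W2):
    h = np.tanh(X @ W1)                    # forward pass, hidden layer
    return float(np.mean((h @ W2 - y) ** 2))

# Backward pass: propagate the output error through each layer
h = np.tanh(X @ W1)
err = 2 * (h @ W2 - y) / y.size            # dLoss / d(output)
gW2 = h.T @ err                            # gradient for the output weights
gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2))  # gradient through the tanh layer

# One small gradient-descent step reduces the loss
before = loss(W1, W2)
after = loss(W1 - 0.01 * gW1, W2 - 0.01 * gW2)
assert after < before
```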

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
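The temporal-difference update at the heart of the deep Q-network can be illustrated in tabular form; the deep variant replaces the table with a neural network over raw pixels. The two-state environment below is invented purely for illustration:

```python
import numpy as np

# Tabular Q-learning on a toy 2-state, 2-action environment.
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9

def step(state, action):
    """Invented dynamics: action 1 in state 0 moves to state 1, reward 1;
    everything else returns to state 0 with reward 0."""
    if state == 0 and action == 1:
        return 1, 1.0
    return 0, 0.0

rng = np.random.default_rng(0)
for _ in range(500):
    s = int(rng.integers(n_states))
    a = int(rng.integers(n_actions))
    s2, r = step(s, a)
    # The Q-learning temporal-difference update
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

# The rewarding action ends up valued higher than the alternative
assert Q[0, 1] > Q[0, 0]
```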

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]
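The factorization these excerpts rely on can be sketched with a singular value decomposition: the unitary factors map onto interferometer meshes and the diagonal factor onto per-channel amplitude scaling. This is a generic sketch of the idea, not the paper's exact parameterization:

```python
import numpy as np

# A trained weight matrix M is realized optically via M = U @ Sigma @ Vh:
# U and Vh are unitary (lossless MZI meshes), Sigma is diagonal
# (attenuation/gain per channel).
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))   # a hypothetical 4x4 weight matrix

U, s, Vh = np.linalg.svd(M)
Sigma = np.diag(s)

# The unitary factors are implementable as lossless interferometer meshes...
assert np.allclose(U.conj().T @ U, np.eye(4))
assert np.allclose(Vh.conj().T @ Vh, np.eye(4))
# ...and the product reconstructs the original weight matrix.
assert np.allclose(U @ Sigma @ Vh, M)
```

This is why the excerpts pair each layer's matrix with two mesh settings: one for U^(i)Σ^(i) and one for V^(i).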

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
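The principal components analysis baseline the abstract compares against is exactly what an optimal linear autoencoder recovers: encoding with the top-k principal directions and decoding with their transpose gives the best possible rank-k linear reconstruction. A minimal sketch on toy data of hypothetical size:

```python
import numpy as np

# A linear "autoencoder" with optimal weights is PCA: project onto the
# top-k principal directions, reconstruct with their transpose.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))  # correlated toy data
X -= X.mean(axis=0)                                         # center

U, s, Vh = np.linalg.svd(X, full_matrices=False)
k = 3
encode = Vh[:k].T          # 10 -> 3: the small central layer
decode = Vh[:k]            # 3 -> 10: linear reconstruction
X_hat = X @ encode @ decode

# Eckart-Young: the residual equals the energy in the discarded components,
# the minimum achievable by any rank-k linear map.
err_k = np.sum((X - X_hat) ** 2)
assert np.isclose(err_k, np.sum(s[k:] ** 2))
```

The paper's point is that nonlinear deep autoencoders can beat this linear optimum, provided the weights are initialized well.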

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

    [...]