Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables an enhancement of at least two orders of magnitude in computational speed and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
TL;DR: In this article, the authors review the recent progress of integrated circuits and optoelectronic chips, focusing on the research status, technical challenges, and development trends of devices, chips, and integration technologies of typical ICs and optoelectronic chips.
Abstract: Integrated circuits (ICs) and optoelectronic chips are the foundation stones of the modern information society. The IC industry has been driven by the so-called “Moore’s law” for the past 60 years and has now entered the post-Moore’s-law era. In this paper, we review the recent progress of ICs and optoelectronic chips, focusing on the research status, technical challenges and development trends of devices, chips and integration technologies of typical ICs and optoelectronic chips. The main contents include the development trajectory of IC and optoelectronic chip technology, IC design and processing technology, emerging memory and chip architectures, brain-like chip structures and their mechanisms, heterogeneous integration, quantum chip technology, silicon photonics chip technology, integrated microwave photonic chips, and optoelectronic hybrid integrated chips.

30 citations

Journal ArticleDOI
TL;DR: A multifunctional inverse-sensing approach for a specific environment is developed that can reconstruct the information carried by scattered photons, characterize multiple optical parameters simultaneously, and be upgraded dynamically after learning more data.
Abstract: Inverse sensing is an important research direction that provides new perspectives for optical sensing. Its primary challenge is that scattered photons have complicated profiles, which makes a general solution hard to derive. Instead of a general solution, it is more feasible and practical to derive a solution for a specific environment. With deep learning, we develop a multifunctional inverse-sensing approach for a specific environment. This approach can reconstruct the information carried by scattered photons and characterize multiple optical parameters simultaneously, and its functionality can be upgraded dynamically after learning more data. It has a wide measurement range and can characterize optical signals behind obstructions. Its high anti-noise performance, flexible implementation, and extremely high threshold to optical damage or saturation make it useful for a wide range of applications, including self-driving cars, space technology, data security, biological characterization, and integrated photonics.

30 citations

Journal ArticleDOI
TL;DR: In this paper, the adjoint-gradient method is used to design optical systems composed of multiple unique metasurfaces aligned in sequence and separated by distances much larger than the design wavelengths, which enables thousands or millions of independent design variables to be optimized in parallel, with little or no intervention required by the user.
Abstract: Metasurfaces are an emerging technology that may supplant many of the conventional optics found in imaging devices, displays, and precision scientific instruments. Here, we develop a method for designing optical systems composed of multiple unique metasurfaces aligned in sequence and separated by distances much larger than the design wavelengths. Our approach is based on computational inverse design, also known as the adjoint-gradient method. This technique enables thousands or millions of independent design variables (e.g., the shapes of individual meta-atoms) to be optimized in parallel, with little or no intervention required by the user. The assumptions underlying our method are as follows: we use the local periodic approximation to determine the phase-response of a given meta-atom, we use the scalar wave approximation to propagate light fields between metasurface layers, and we do not consider multiple reflections between metasurface layers (analogous to a sequential-optics ray-tracer). To demonstrate the broad applicability of our method, we use it to design an achromatic doublet metasurface lens, a spectrally-multiplexed holographic element, and an ultra-compact optical neural network for classifying handwritten digits.
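
The enabling ingredient here is that one adjoint (reverse-mode) gradient evaluation yields the derivative of the objective with respect to every design variable at once, so thousands of parameters can be updated in parallel. A minimal sketch of that pattern follows, using a stand-in linear forward model and a least-squares objective; the model, dimensions, and learning rate are illustrative assumptions, not the paper's metasurface solver:

```python
import numpy as np

# Toy "inverse design": choose N design variables (e.g. meta-atom widths)
# so that a simple forward model matches a target response.
rng = np.random.default_rng(0)
N = 10_000                          # thousands of design variables, updated in parallel
A = rng.standard_normal((200, N)) / np.sqrt(N)   # stand-in forward model (linear)
target = rng.standard_normal(200)                # desired field / response

params = np.zeros(N)
lr = 0.5
for step in range(200):
    response = A @ params                      # forward simulation
    residual = response - target
    grad = A.T @ residual                      # one "adjoint" pass gives the gradient for all N variables
    params -= lr * grad                        # every design variable is updated in parallel
    if step % 50 == 0:
        print(step, float(0.5 * residual @ residual))
```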

30 citations

Journal ArticleDOI
TL;DR: In this paper, the authors introduce a near-term experimental platform for realizing an associative memory, which can simultaneously store many memories by using spinful bosons coupled to a degenerate multimode optical cavity, with the cavity modes serving as synapses, connecting a network of superradiant atomic spin ensembles.
Abstract: We introduce a near-term experimental platform for realizing an associative memory. It can simultaneously store many memories by using spinful bosons coupled to a degenerate multimode optical cavity. The associative memory is realized by a confocal cavity QED neural network, with the cavity modes serving as the synapses, connecting a network of superradiant atomic spin ensembles, which serve as the neurons. Memories are encoded in the connectivity matrix between the spins, and can be accessed through the input and output of patterns of light. Each aspect of the scheme is based on recently demonstrated technology using a confocal cavity and Bose-condensed atoms. Our scheme has two conceptually novel elements. First, it introduces a new form of random spin system that interpolates between a ferromagnetic and a spin-glass regime as a physical parameter is tuned---the positions of ensembles within the cavity. Second, and more importantly, the spins relax via deterministic steepest-descent dynamics, rather than Glauber dynamics. We show that this nonequilibrium quantum-optical scheme has significant advantages for associative memory over Glauber dynamics: These dynamics can enhance the network's ability to store and recall memories beyond that of the standard Hopfield model. Surprisingly, the cavity QED dynamics can retrieve memories even when the system is in the spin glass phase. Thus, the experimental platform provides a novel physical instantiation of associative memories and spin glasses as well as provides an unusual form of relaxational dynamics that is conducive to memory recall even in regimes where it was thought to be impossible.
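
The storage-and-recall mechanism described above can be illustrated with the classical Hopfield construction on which it builds: patterns are written into a connectivity matrix, and a corrupted input relaxes back to a stored memory under deterministic descent dynamics. The sketch below is that textbook classical model only (sizes, patterns, and the Hebbian storage rule are illustrative), not the confocal-cavity implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_spins, n_memories = 200, 10

# Store random +/-1 patterns in the connectivity matrix (Hebbian rule).
memories = rng.choice([-1, 1], size=(n_memories, n_spins))
J = (memories.T @ memories) / n_spins
np.fill_diagonal(J, 0.0)

# Recall: start from a corrupted memory and relax by deterministic descent,
# each spin aligning with its local field.
state = memories[0].copy()
flip = rng.choice(n_spins, size=40, replace=False)
state[flip] *= -1                       # corrupt 20% of the pattern

for _ in range(20):
    state = np.sign(J @ state)
    state[state == 0] = 1

print("overlap with stored memory:", float(state @ memories[0]) / n_spins)
```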

30 citations


Cites background from "Deep learning with coherent nanopho..."

  • ...In contrast, early work aiming to use classical, optics-based spin representations sought to take advantage of the natural parallelism of light propagation [17–19]; such work continues in the context of silicon photonic integrated circuits and other coupled classical oscillator systems [20, 21]....

    [...]


Journal ArticleDOI
TL;DR: Realization of integrated quantum photonics is a key step toward scalable quantum applications such as quantum computing, sensing, information processing, and quantum material metrology.
Abstract: Realization of integrated quantum photonics is a key step toward scalable quantum applications such as quantum computing, sensing, information processing, and quantum material metrology. To enable ...

29 citations

References
Proceedings Article
03 Dec 2012
TL;DR: State-of-the-art image-classification performance was achieved by a deep convolutional neural network, as discussed by the authors, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
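
Dropout, the regularization method cited above, randomly zeroes hidden activations during training and rescales the survivors so the expected activation is unchanged at test time. A minimal sketch of the commonly used inverted-dropout formulation; the array shapes and drop probability are illustrative:

```python
import numpy as np

def dropout(x, p_drop, rng, training=True):
    """Inverted dropout: zero each unit with probability p_drop during training
    and rescale the rest, so no rescaling is needed at test time."""
    if not training or p_drop == 0.0:
        return x
    keep = rng.random(x.shape) >= p_drop
    return x * keep / (1.0 - p_drop)

rng = np.random.default_rng(0)
activations = rng.standard_normal((4, 4096))     # e.g. a fully-connected layer's output
print(dropout(activations, p_drop=0.5, rng=rng).mean())
```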

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
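
The temporal-difference signal referred to above is the standard Q-learning update; in the paper the Q-function is a deep convolutional network trained from pixels, whereas this illustrative sketch uses a lookup table and a stand-in random environment (all sizes and hyperparameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1

def step(state, action):
    """Stand-in environment: random transition, reward only in the last state."""
    next_state = int(rng.integers(n_states))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(10_000):
    # epsilon-greedy action selection
    action = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # temporal-difference (Q-learning) update
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
    state = next_state
```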

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

    [...]

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

    [...]

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

    [...]

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

    [...]
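
The excerpts above describe how each weight matrix is factorized by singular-value decomposition (unitary factors realized by the interferometer mesh, the diagonal factor by attenuation or amplification) and how finite phase precision and photodetection noise limit the achievable resolution. A small numerical sketch of both points follows; the matrix sizes, the element-wise 16-bit phase rounding, and the noise amplitude are illustrative assumptions, not the device-level error model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Two complex-valued weight matrices, as in a two-layer network (sizes are illustrative).
M1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# SVD: M = U @ diag(s) @ Vh.  Light traverses V, then U*Sigma, per layer,
# matching transformations (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2), (4) V^(2).
U1, s1, V1h = np.linalg.svd(M1)
U2, s2, V2h = np.linalg.svd(M2)

def quantize_phase(A, bits=16):
    """Crude non-ideality model: round each element's phase to the nearest
    multiple of 2*pi / 2**bits (illustrative, not an MZI-level error model)."""
    step = 2 * np.pi / 2**bits
    return np.abs(A) * np.exp(1j * step * np.round(np.angle(A) / step))

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
ideal = M2 @ (M1 @ x)

y = x
for U, s, Vh in ((U1, s1, V1h), (U2, s2, V2h)):
    y = quantize_phase(Vh) @ y                       # V stage
    y = quantize_phase(U) @ (s * y)                  # U*Sigma stage
    y = y + 1e-3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # detection noise

print("relative error from non-idealities:",
      float(np.linalg.norm(y - ideal) / np.linalg.norm(ideal)))
```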

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
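
A linear two-layer autoencoder trained by gradient descent to reconstruct its input through a narrow central layer illustrates the construction described above; this toy version omits the layer-wise pretraining that is the paper's actual contribution, and the sizes, data, and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_dim, n_code = 500, 50, 5

X = rng.standard_normal((n_samples, n_dim))

# Encoder and decoder weights; the narrow "code" layer forces compression.
W_enc = 0.1 * rng.standard_normal((n_dim, n_code))
W_dec = 0.1 * rng.standard_normal((n_code, n_dim))
lr = 0.05

for epoch in range(1000):
    code = X @ W_enc                    # low-dimensional codes
    recon = code @ W_dec                # reconstruction of the input
    err = recon - X
    # Gradient descent on the reconstruction error (backpropagation through two layers).
    grad_dec = code.T @ err / n_samples
    grad_enc = X.T @ (err @ W_dec.T) / n_samples
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("mean squared reconstruction error:", float((err ** 2).mean()))
```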

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

    [...]
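
The excerpt above summarizes the standard training loop: forward-propagate inputs through the weight matrices, then adjust each matrix by backpropagating the output error. A minimal dense-network sketch of that loop (layer sizes, random data, and the learning rate are illustrative; in the paper the trained matrices are subsequently mapped onto the photonic circuit):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, n_samples = 4, 8, 3, 256

# Toy data: random inputs, random integer class labels.
X = rng.standard_normal((n_samples, n_in))
y = rng.integers(n_out, size=n_samples)
Y = np.eye(n_out)[y]                      # one-hot targets

W1 = 0.5 * rng.standard_normal((n_in, n_hidden))
W2 = 0.5 * rng.standard_normal((n_hidden, n_out))
lr = 0.5

for epoch in range(200):
    # Forward propagation through the two weight matrices.
    h = np.tanh(X @ W1)
    logits = h @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)     # softmax output layer

    # Backpropagation of the cross-entropy gradient to each weight matrix.
    d_logits = (p - Y) / n_samples
    grad_W2 = h.T @ d_logits
    d_h = (d_logits @ W2.T) * (1 - h ** 2)
    grad_W1 = X.T @ d_h

    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```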