Journal ArticleDOI

Deep learning with coherent nanophotonic circuits

01 Jul 2017-Vol. 11, Iss: 7, pp 441-446
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and a power-efficiency improvement of three orders of magnitude over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Citations
Journal ArticleDOI
Sashi Satpathy
26 Jan 2022-eLight
TL;DR: In this article, a set of transmissive diffractive surfaces are trained to all-optically reconstruct images of arbitrary objects that are completely covered by unknown, random phase diffusers.
Abstract: Abstract Imaging through diffusers presents a challenging problem with various digital image reconstruction solutions demonstrated to date using computers. Here, we present a computer-free, all-optical image reconstruction method to see through random diffusers at the speed of light. Using deep learning, a set of transmissive diffractive surfaces are trained to all-optically reconstruct images of arbitrary objects that are completely covered by unknown, random phase diffusers. After the training stage, which is a one-time effort, the resulting diffractive surfaces are fabricated and form a passive optical network that is physically positioned between the unknown object and the image plane to all-optically reconstruct the object pattern through an unknown, new phase diffuser. We experimentally demonstrated this concept using coherent THz illumination and all-optically reconstructed objects distorted by unknown, random diffusers, never used during training. Unlike digital methods, all-optical diffractive reconstructions do not require power except for the illumination light. This diffractive solution to see through diffusers can be extended to other wavelengths, and might fuel various applications in biomedical imaging, astronomy, atmospheric sciences, oceanography, security, robotics, autonomous vehicles, among many others.

50 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a reinforcement learning (RL) experiment in which the learning of an agent is boosted by utilizing a quantum communication channel with the environment, and further show that combining it with classical communication enables such an improvement to be evaluated and allows optimal control of the learning progress.
Abstract: Increasing demand for algorithms that can learn quickly and efficiently has led to a surge of development within the field of artificial intelligence (AI). An important paradigm within AI is reinforcement learning (RL), where agents interact with environments by exchanging signals via a communication channel. Agents can learn by updating their behaviour based on obtained feedback. The crucial question for practical applications is how fast agents can learn to respond correctly. An essential figure of merit is therefore the learning time. While various works have made use of quantum mechanics to speed up the agent's decision-making process, a reduction in learning time has not been demonstrated yet. Here we present a RL experiment where the learning of an agent is boosted by utilizing a quantum communication channel with the environment. We further show that the combination with classical communication enables the evaluation of such an improvement, and additionally allows for optimal control of the learning progress. This novel scenario is therefore demonstrated by considering hybrid agents, that alternate between rounds of quantum and classical communication. We implement this learning protocol on a compact and fully tunable integrated nanophotonic processor. The device interfaces with telecom-wavelength photons and features a fast active feedback mechanism, allowing us to demonstrate the agent's systematic quantum advantage in a setup that could be readily integrated within future large-scale quantum communication networks.

49 citations

Journal ArticleDOI
TL;DR: Ozcan et al. applied a pruning algorithm to select an optimized ensemble of D2NNs that collectively improved the image classification accuracy, achieving blind testing accuracies of 61.14% and 62.13% on the classification of CIFAR-10 test images.
Abstract: A plethora of research advances have emerged in the fields of optics and photonics that benefit from harnessing the power of machine learning. Specifically, there has been a revival of interest in optical computing hardware due to its potential advantages for machine learning tasks in terms of parallelization, power efficiency and computation speed. Diffractive deep neural networks (D2NNs) form such an optical computing framework that benefits from deep learning-based design of successive diffractive layers to all-optically process information as the input light diffracts through these passive layers. D2NNs have demonstrated success in various tasks, including object classification, the spectral encoding of information, optical pulse shaping and imaging. Here, we substantially improve the inference performance of diffractive optical networks using feature engineering and ensemble learning. After independently training 1252 D2NNs that were diversely engineered with a variety of passive input filters, we applied a pruning algorithm to select an optimized ensemble of D2NNs that collectively improved the image classification accuracy. Through this pruning, we numerically demonstrated that ensembles of N = 14 and N = 30 D2NNs achieve blind testing accuracies of 61.14 ± 0.23% and 62.13 ± 0.05%, respectively, on the classification of CIFAR-10 test images, providing an inference improvement of >16% compared to the average performance of the individual D2NNs within each ensemble. These results constitute the highest inference accuracies achieved to date by any diffractive optical neural network design on the same dataset and might provide a significant leap to extend the application space of diffractive optical image classification and machine vision systems. Scientists in the USA have demonstrated significant improvements in the performance of diffractive optical networks, marking a major step forward for their use in optics-based computation and machine learning.
There is renewed interest in optical computing hardware due to its potential advantages, including parallelization, power efficiency, and computation speed. Diffractive optical networks utilize deep learning-based design of successive diffractive layers to all-optically process information as the light is transmitted from the input to the output plane. Led by Aydogan Ozcan, a team of researchers from the University of California, Los Angeles, has significantly improved the statistical inference performance of diffractive optical networks using feature engineering and ensemble learning. Using a pruning algorithm, they searched through 1,252 unique diffractive networks to design ensembles of desired size that substantially improve the overall system’s all-optical image classification accuracy.
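The pruning step described above can be sketched as a greedy forward selection over a pool of trained models. The function below is a simplified, hypothetical stand-in for the authors' search (the name `greedy_ensemble` and the toy data are ours); it works on precomputed class-probability outputs and, at each step, adds the model that most improves the averaged ensemble's accuracy.

```python
import numpy as np

def greedy_ensemble(probs, labels, size):
    """Greedily grow an ensemble: at each step add the model whose class
    probabilities, summed into the current ensemble, maximize accuracy.
    probs: (n_models, n_samples, n_classes); labels: (n_samples,).
    A simplified stand-in for the paper's pruning search."""
    chosen = []
    running = np.zeros(probs.shape[1:])
    for _ in range(size):
        accs = [((running + p).argmax(axis=1) == labels).mean() for p in probs]
        for i in chosen:                  # never pick the same model twice
            accs[i] = -1.0
        best = int(np.argmax(accs))
        chosen.append(best)
        running += probs[best]
    return chosen

rng = np.random.default_rng(0)
pool = rng.random((6, 40, 10))            # 6 toy models, 40 samples, 10 classes
labels = rng.integers(0, 10, size=40)
chosen = greedy_ensemble(pool, labels, 3)
```

The first model chosen is simply the most accurate individual; later picks reward models whose errors complement those already in the ensemble.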

49 citations

Journal ArticleDOI
TL;DR: In this article, a 4 × 4 reconfigurable Mach-Zehnder interferometer (MZI)-based linear optical processor is investigated through its theoretical analyses and characterized experimentally.
Abstract: A 4 × 4 reconfigurable Mach–Zehnder interferometer (MZI)-based linear optical processor is investigated through its theoretical analyses and characterized experimentally. The linear transformation matrix of the structure is theoretically determined using its building block, which is a 2 × 2 reconfigurable MZI. To program the device, the linear transformation matrix of a given application is decomposed into that of the constituent MZIs of the structure. Thus, the required phase shifts for implementing the transformation matrix of the application by means of the optical processor are determined theoretically. Due to random phase offsets in the MZIs resulting from fabrication process variations, they are initially configured through an experimental protocol. The presented calibration scheme makes it straightforward to characterize the MZIs, mitigate possible input phase errors, and determine the bar and cross states of each MZI for tuning it to the required state before programming the device. After the configuration process, the device can be programmed to construct the linear transformation matrix of the application. In this regard, using the required bias voltages, the phase shifts obtained from the decomposition process are applied to the phase shifters of the MZIs in the device.
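The decomposition described above bottoms out in the 2 × 2 MZI transfer matrix. A minimal numerical sketch, assuming one common beamsplitter/phase-shifter convention (the fabricated device may differ in layout details):

```python
import numpy as np

def mzi(theta, phi):
    """2x2 Mach-Zehnder interferometer transfer matrix: two 50:50
    beamsplitters around an internal phase shifter (theta), preceded by
    an external input phase shifter (phi). One common convention; the
    device in the paper may differ in detail."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beamsplitter
    inner = np.diag([np.exp(1j * theta), 1.0])       # internal phase shift
    outer = np.diag([np.exp(1j * phi), 1.0])         # input phase shift
    return bs @ inner @ bs @ outer

U = mzi(0.7, 1.3)        # an arbitrary setting; always unitary
bar = mzi(np.pi, 0.0)    # theta = pi: the "bar" state (inputs pass straight through)
cross = mzi(0.0, 0.0)    # theta = 0: the "cross" state (inputs swap ports)
```

In this convention the power splitting ratio is sin²(θ/2), so sweeping the internal phase moves the MZI continuously between the cross and bar states used in the calibration protocol.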

49 citations


Cites methods from "Deep learning with coherent nanopho..."

  • ...Depending on the application of interest, the DMM section can be used to extend the 4 × 4 optical processor to a larger structure by cascading it with another SU(4), as in [6] to perform SVD optically, or in [3] for deep-learning matrix multiplications in neural network applications....

  • ...In this regard, the phase shifters in such a mesh of MZIs are employed for its simple experimental calibration, which makes the structure a suitable candidate to serve as a reconfigurable linear optical processor [1]–[3]....

  • ...This paper presents the theoretical and experimental analysis of a 4 × 4 reconfigurable MZI-based optical processor that can be configured for a given application to perform linear functions, such as matrix multiplications in computational systems like optical neural networks [3], [4], quantum transport simulations [5],...

Journal ArticleDOI
TL;DR: In this paper, single-layer holographic perceptrons were trained as decryptors to perform optical inference of single or whole classes of keys through symmetric and asymmetric decryption; the decryptors were designed for operation in the near-infrared region.
Abstract: Optical machine learning has emerged as an important research area that, by leveraging the advantages inherent to optical signals, such as parallelism and high speed, paves the way for a future where optical hardware can process data at the speed of light. In this work, we present such optical devices for data processing in the form of single-layer nanoscale holographic perceptrons trained to perform optical inference tasks. We experimentally show the functionality of these passive optical devices in the example of decryptors trained to perform optical inference of single or whole classes of keys through symmetric and asymmetric decryption. The decryptors, designed for operation in the near-infrared region, are nanoprinted on complementary metal-oxide–semiconductor chips by galvo-dithered two-photon nanolithography with axial nanostepping of 10 nm [1,2], achieving a neuron density of >500 million neurons per square centimetre. This power-efficient commixture of machine learning and on-chip integration may have a transformative impact on optical decryption [3], sensing [4], medical diagnostics [5] and computing [6,7].

48 citations

References
Proceedings Article
03 Dec 2012
TL;DR: State-of-the-art image classification performance was achieved by a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
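The "dropout" regularizer mentioned in the abstract can be sketched in a few lines. The snippet below uses the now-standard inverted-dropout formulation (rescaling surviving activations at training time rather than rescaling at test time, a detail that differs from the original description), with names of our choosing:

```python
import numpy as np

def dropout(activations, rate, rng, train=True):
    """Inverted dropout: during training, zero each activation with
    probability `rate` and rescale survivors by 1/(1-rate), so the
    expected activation is unchanged and test time needs no scaling."""
    if not train or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

rng = np.random.default_rng(0)
acts = np.ones(10_000)
dropped = dropout(acts, 0.5, rng)    # roughly half zeros, half 2.0s
```

Because each surviving unit is scaled up, the mean activation stays close to the undropped value, which is what lets the same network run unchanged at inference.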

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 

23,074 citations


"Deep learning with coherent nanopho..." refers background or methods in this paper

  • ...The computational resolution of ONNs is limited by practical non-idealities, including (1) thermal crosstalk between phase shifters in interferometers, (2) optical coupling drift, (3) the finite precision with which an optical phase can be set (16 bits in our case), (4) photodetection noise and (5) finite photodetection dynamic range (30 dB in our case)....

  • ...(3) Once a neural network is trained, the architecture can be passive, and computation on the optical signals will be performed without additional energy input....

  • ...We used four instances of the OIU to realize the following matrix transformations in the spatial-mode basis: (1) U^(1)Σ^(1), (2) V^(1), (3) U^(2)Σ^(2) and (4) V^(2)....

  • ...Transformations (1) and (2) realize the first matrix M^(1), and (3) and (4) implement M^(2)....

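The transformations quoted above rest on the singular-value decomposition: any weight matrix M factors as a unitary, a diagonal, and another unitary (M = UΣV†), with the unitaries realizable as programmed MZI meshes and the diagonal as attenuation. A quick numerical check of the factorization itself, in plain NumPy with nothing device-specific:

```python
import numpy as np

rng = np.random.default_rng(42)
M = rng.standard_normal((4, 4))      # an arbitrary 4x4 weight matrix

# SVD: M = U @ diag(s) @ Vh, with U and Vh unitary -- the form the
# optical interference units implement as cascaded MZI meshes plus
# a diagonal attenuation stage.
U, s, Vh = np.linalg.svd(M)
recon = U @ np.diag(s) @ Vh
```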

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: In this article, an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data is described.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
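The abstract's comparison of a small-central-layer autoencoder against principal components can be illustrated with a toy tied-weight linear autoencoder trained by gradient descent. This is our own minimal construction, not the paper's deep, layer-wise pretrained network:

```python
import numpy as np

# Tied-weight linear autoencoder: a 2-D central layer encodes 5-D inputs
# and its transpose decodes them, trained by gradient descent on the
# reconstruction error.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 5))  # rank-2 data
W = rng.standard_normal((5, 2)) * 0.1     # encoder; decoder is tied as W.T

def recon_err(W):
    return ((X - (X @ W) @ W.T) ** 2).mean()

err_init = recon_err(W)
for _ in range(300):
    E = (X @ W) @ W.T - X                 # reconstruction residual
    grad = 2.0 * (X.T @ E @ W + E.T @ X @ W) / len(X)
    W -= 0.005 * grad                     # gradient-descent step
err_final = recon_err(W)
```

Because the data are rank 2 and the code is 2-D, gradient descent can drive the reconstruction error toward zero; for a linear network the learned subspace coincides with the leading principal components.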

16,717 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.

14,635 citations


"Deep learning with coherent nanopho..." refers methods in this paper

  • ...ANNs can be trained by feeding training data into the input layer and then computing the output by forward propagation; weighting parameters in each matrix are subsequently optimized using back propagation [16]....

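The training loop described in the excerpt, forward propagation to compute outputs and backpropagation to update the weight matrices, amounts to a couple of matrix products each way. A minimal, generic NumPy sketch on toy data (textbook backpropagation, nothing optical about it):

```python
import numpy as np

# Toy one-hidden-layer network: forward propagation computes the output,
# backpropagation sends the loss gradient back through each weight matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 4))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # separable toy labels
W1 = rng.standard_normal((4, 8)) * 0.5
W2 = rng.standard_normal((8, 1)) * 0.5

def forward(W1, W2):
    h = np.tanh(X @ W1)                                # hidden activations
    return h, 1.0 / (1.0 + np.exp(-(h @ W2)))          # sigmoid output

def xent(p):
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

loss_init = xent(forward(W1, W2)[1])
for _ in range(300):
    h, p = forward(W1, W2)
    g_out = (p - y) / len(X)               # gradient at the pre-sigmoid output
    g_h = g_out @ W2.T * (1.0 - h ** 2)    # backpropagate through tanh
    W2 -= 0.5 * (h.T @ g_out)              # update output weights
    W1 -= 0.5 * (X.T @ g_h)                # update hidden weights
loss_final = xent(forward(W1, W2)[1])
acc = ((forward(W1, W2)[1] > 0.5) == y).mean()
```

Both gradients are computed from the same forward pass before either weight matrix is updated, which is the pattern the quoted sentence describes.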