Journal ArticleDOI

Ultrafast machine vision with 2D material neural network image sensors

04 Mar 2020-Nature (Nature Publishing Group)-Vol. 579, Iss: 7797, pp 62-66
TL;DR: It is demonstrated that an image sensor can itself constitute an ANN that simultaneously senses and processes optical images without latency, and can be trained to classify and encode images with high throughput.
Abstract: Machine vision technology has taken huge leaps in recent years, and is now becoming an integral part of various intelligent systems, including autonomous vehicles and robotics. Usually, visual information is captured by a frame-based camera, converted into a digital format and processed afterwards using a machine-learning algorithm such as an artificial neural network (ANN)1. The large amount of (mostly redundant) data passed through the entire signal chain, however, results in low frame rates and high power consumption. Various visual data preprocessing techniques have thus been developed2-7 to increase the efficiency of the subsequent signal processing in an ANN. Here we demonstrate that an image sensor can itself constitute an ANN that can simultaneously sense and process optical images without latency. Our device is based on a reconfigurable two-dimensional (2D) semiconductor8,9 photodiode10-12 array, and the synaptic weights of the network are stored in a continuously tunable photoresponsivity matrix. We demonstrate both supervised and unsupervised learning and train the sensor to classify and encode images that are optically projected onto the chip with a throughput of 20 million bins per second.
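The abstract describes synaptic weights stored as tunable photoresponsivities, so the matrix-vector product of an ANN layer is computed in the photodiode array itself during detection. A minimal NumPy sketch of that idea follows; the sizes loosely match the paper's demonstration (a 3×3-pixel array and 3 classes), but the random responsivity matrix and the classification readout are illustrative, not the paper's trained device.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sizes loosely follow the paper's demonstration (3x3 pixels, 3 classes),
# but the responsivity values here are random, not trained.
N, M = 9, 3

# Each photodiode (i, j) contributes a photocurrent R[j, i] * P[i]; currents
# on a shared output line sum by Kirchhoff's law, so detection itself
# performs the matrix-vector product I = R @ P of one ANN layer.
R = rng.normal(0.0, 0.1, (M, N))   # tunable photoresponsivity matrix = weights

def sensor_forward(P, R):
    """Output currents of the photodiode array for projected intensities P."""
    return R @ P                   # analogue summation of photocurrents

def classify(P, R):
    """Class of the projected image = output line with the largest current."""
    return int(np.argmax(sensor_forward(P, R)))

P = rng.uniform(0.0, 1.0, N)       # flattened 3x3 image of light intensities
print(classify(P, R))
```

Because the multiply-accumulate happens during photodetection, there is no separate sensing-then-processing stage, which is what the abstract means by sensing and processing "without latency".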
Citations
Journal ArticleDOI
TL;DR: The opportunities, progress and challenges of integrating two-dimensional materials with in-memory computing and transistor-based computing technologies, from the perspective of matrix and logic computing, are discussed.
Abstract: Rapid digital technology advancement has resulted in a tremendous increase in computing tasks imposing stringent energy efficiency and area efficiency requirements on next-generation computing. To meet the growing data-driven demand, in-memory computing and transistor-based computing have emerged as potent technologies for the implementation of matrix and logic computing. However, to fulfil the future computing requirements new materials are urgently needed to complement the existing Si complementary metal–oxide–semiconductor technology and new technologies must be developed to enable further diversification of electronics and their applications. The abundance and rich variety of electronic properties of two-dimensional materials have endowed them with the potential to enhance computing energy efficiency while enabling continued device downscaling to a feature size below 5 nm. In this Review, from the perspective of matrix and logic computing, we discuss the opportunities, progress and challenges of integrating two-dimensional materials with in-memory computing and transistor-based computing technologies. This Review discusses the recent progress and future prospects of two-dimensional materials for next-generation nanoelectronics.

402 citations

Journal ArticleDOI
17 Nov 2020
TL;DR: In this paper, the authors examine the concept of near-sensor and in-sensor computing in which computation tasks are moved partly to the sensory terminals, exploring the challenges facing the field and providing possible solutions for the hardware implementation of integrated sensing and processing units using advanced manufacturing technologies.
Abstract: The number of nodes typically used in sensory networks is growing rapidly, leading to large amounts of redundant data being exchanged between sensory terminals and computing units. To efficiently process such large amounts of data, and decrease power consumption, it is necessary to develop approaches to computing that operate close to or inside sensory networks, and that can reduce the redundant data movement between sensing and processing units. Here we examine the concept of near-sensor and in-sensor computing in which computation tasks are moved partly to the sensory terminals. We classify functions into low-level and high-level processing, and discuss the implementation of near-sensor and in-sensor computing for different physical sensing systems. We also analyse the existing challenges in the field and provide possible solutions for the hardware implementation of integrated sensing and processing units using advanced manufacturing technologies. This Perspective examines the concept of near-sensor and in-sensor computing in which computation tasks are moved partly to the sensory terminals, exploring the challenges facing the field and providing possible solutions for the hardware implementation of integrated sensing and processing units using advanced manufacturing technologies.

297 citations

Journal ArticleDOI
TL;DR: This work proposes an optoelectronic reconfigurable computing paradigm by constructing a diffractive processing unit (DPU) that can efficiently support different neural networks and achieve a high model complexity with millions of neurons.
Abstract: There is an ever-growing demand for artificial intelligence. Optical processors, which compute with photons instead of electrons, can fundamentally accelerate the development of artificial intelligence by offering substantially improved computing performance. There has been long-term interest in optically constructing the most widely used artificial-intelligence architecture, that is, artificial neural networks, to achieve brain-inspired information processing at the speed of light. However, owing to restrictions in design flexibility and the accumulation of system errors, existing processor architectures are not reconfigurable and have limited model complexity and experimental performance. Here, we propose the reconfigurable diffractive processing unit, an optoelectronic fused computing architecture based on the diffraction of light, which can support different neural networks and achieve a high model complexity with millions of neurons. Along with the developed adaptive training approach to circumvent system errors, we achieved excellent experimental accuracies for high-speed image and video recognition over benchmark datasets and a computing performance superior to that of cutting-edge electronic computing platforms. Linear diffractive structures are by themselves passive systems but researchers here exploit the non-linearity of a photodetector to realize a reconfigurable diffractive ‘processing’ unit. High-speed image and video recognition is demonstrated.
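A diffractive layer of the kind this abstract describes can be sketched as a programmable phase mask followed by free-space propagation and square-law photodetection, the detector's |·|² supplying the nonlinearity the summary mentions. The sketch below is a loose illustration only (Fraunhofer far field via a single FFT, arbitrary sizes, a band-based readout invented for the example), not the authors' DPU architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def diffractive_layer(field, phase):
    """Phase modulation, far-field propagation (Fraunhofer approximation via
    a single FFT), then square-law photodetection, whose |.|^2 supplies the
    nonlinearity between optical layers."""
    modulated = field * np.exp(1j * phase)        # programmable phase mask
    far_field = np.fft.fftshift(np.fft.fft2(modulated))
    return np.abs(far_field) ** 2                 # detected intensity

img = rng.uniform(0, 1, (32, 32))                 # toy input image as a real field
phase = rng.uniform(0, 2 * np.pi, (32, 32))       # trainable phase parameters

out = diffractive_layer(img, phase)

# A readout could integrate intensity over predefined detector regions,
# one region per class (4 horizontal bands here, purely for illustration).
scores = [band.sum() for band in np.array_split(out, 4, axis=0)]
print(int(np.argmax(scores)))
```

In a trainable system the phase values would be the optimised parameters; the adaptive training the abstract mentions exists precisely because a physical realisation of such a layer accumulates errors that an idealised simulation like this one does not capture.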

245 citations

Journal ArticleDOI
01 Dec 2021
TL;DR: In this paper, the authors highlight a few emerging trends in photonics that they think are likely to have major impact at least in the upcoming decade, spanning from integrated quantum photonics and quantum computing, through topological/non-Hermitian photonics, to AI-empowered nanophotonics and photonic machine learning.
Abstract: Let there be light–to change the world we want to be! Over the past several decades, and ever since the birth of the first laser, mankind has witnessed the development of the science of light, as light-based technologies have revolutionarily changed our lives. Needless to say, photonics has now penetrated into many aspects of science and technology, turning into an important and dynamically changing field of increasing interdisciplinary interest. In this inaugural issue of eLight, we highlight a few emerging trends in photonics that we think are likely to have major impact at least in the upcoming decade, spanning from integrated quantum photonics and quantum computing, through topological/non-Hermitian photonics and topological insulator lasers, to AI-empowered nanophotonics and photonic machine learning. This Perspective is by no means an attempt to summarize all the latest advances in photonics, yet we wish our subjective vision could fuel inspiration and foster excitement in scientific research especially for young researchers who love the science of light.

184 citations

DOI
25 Nov 2021
TL;DR: In this paper, the development of 2D field-effect transistors for use in future VLSI technologies is reviewed, and the key performance indicators for aggressively scaled 2D transistors are discussed.
Abstract: Field-effect transistors based on two-dimensional (2D) materials have the potential to be used in very large-scale integration (VLSI) technology, but whether they can be used at the front end of line or at the back end of line through monolithic or heterogeneous integration remains to be determined. To achieve this, multiple challenges must be overcome, including reducing the contact resistance, developing stable and controllable doping schemes, advancing mobility engineering and improving high-κ dielectric integration. The large-area growth of uniform 2D layers is also required to ensure low defect density, low device-to-device variation and clean interfaces. Here we review the development of 2D field-effect transistors for use in future VLSI technologies. We consider the key performance indicators for aggressively scaled 2D transistors and discuss how these should be extracted and reported. We also highlight potential applications of 2D transistors in conventional micro/nanoelectronics, neuromorphic computing, advanced sensing, data storage and future interconnect technologies. This Review examines the development of field-effect transistors based on two-dimensional materials and considers the challenges that need to be addressed for the devices to be incorporated into very large-scale integration (VLSI) technology.

178 citations

References
Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics and video games.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI
01 Jan 1988-Nature
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
Abstract: We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.
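The procedure the abstract describes, a forward pass, an error measure at the output, and repeated weight adjustments propagated backwards through the network, can be written out for a tiny two-layer sigmoid network. The XOR task, layer sizes, learning rate and iteration count below are illustrative choices for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Tiny 2-3-1 sigmoid network trained on XOR (task and sizes are illustrative).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 3)); b1 = np.zeros(3)
W2 = rng.normal(0, 1, (3, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: compute each layer's representation from the previous one.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error derivative through the weights.
    d_out = (out - y) * out * (1 - out)      # error derivative at the output
    d_h = (d_out @ W2.T) * h * (1 - h)       # error derivative at the hidden units
    # Repeatedly adjust the weights to reduce the output error.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(float(np.mean((out - y) ** 2)))        # final mean squared error
```

XOR is the classic example here because no single-layer perceptron can solve it: the hidden units must learn internal features, which is exactly the ability the abstract credits to back-propagation.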

23,814 citations

Journal Article
01 Jan 1972-Optik
TL;DR: In this article, an algorithm is presented for the rapid solution of the phase of the complete wave function whose intensities in the diffraction and imaging planes of an imaging system are known.
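This 1972 Optik reference is the Gerchberg–Saxton phase-retrieval algorithm. A minimal sketch, assuming the two planes are related by a Fourier transform: alternate between the imaging and diffraction planes, enforcing the known amplitude in each while keeping the current phase estimate. The array sizes and the synthetic test data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gerchberg_saxton(image_amp, diffr_amp, n_iter=200):
    """Recover a phase consistent with the known amplitudes in the imaging
    plane (image_amp) and the diffraction plane (diffr_amp), the two planes
    being related by a Fourier transform."""
    phase = rng.uniform(0, 2 * np.pi, image_amp.shape)  # random initial guess
    for _ in range(n_iter):
        field = image_amp * np.exp(1j * phase)          # enforce image amplitude
        F = np.fft.fft2(field)
        F = diffr_amp * np.exp(1j * np.angle(F))        # enforce diffraction amplitude
        phase = np.angle(np.fft.ifft2(F))               # keep only the phase estimate
    return phase

# Synthetic test: build a field with a known phase, keep only the two intensities.
true_phase = rng.uniform(0, 2 * np.pi, (16, 16))
image_amp = np.ones((16, 16))
diffr_amp = np.abs(np.fft.fft2(image_amp * np.exp(1j * true_phase)))

est = gerchberg_saxton(image_amp, diffr_amp)
err = np.abs(np.fft.fft2(image_amp * np.exp(1j * est))) - diffr_amp
print(float(np.mean(err ** 2)))   # residual amplitude mismatch, diffraction plane
```

The alternating projections never increase the amplitude mismatch, which is why the residual printed at the end is expected to be small after a few hundred iterations.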

5,197 citations

Journal ArticleDOI
TL;DR: In this article, the authors examine the methods used to synthesize transition metal dichalcogenides (TMDCs) and discuss their properties, with particular attention to their charge density wave, superconductive and topological phases, along with their applications in devices with enhanced mobility and the use of strain engineering to tailor their properties.
Abstract: Graphene is very popular because of its many fascinating properties, but its lack of an electronic bandgap has stimulated the search for 2D materials with semiconducting character. Transition metal dichalcogenides (TMDCs), which are semiconductors of the type MX2, where M is a transition metal atom (such as Mo or W) and X is a chalcogen atom (such as S, Se or Te), provide a promising alternative. Because of its robustness, MoS2 is the most studied material in this family. TMDCs exhibit a unique combination of atomic-scale thickness, direct bandgap, strong spin–orbit coupling and favourable electronic and mechanical properties, which make them interesting for fundamental studies and for applications in high-end electronics, spintronics, optoelectronics, energy harvesting, flexible electronics, DNA sequencing and personalized medicine. In this Review, the methods used to synthesize TMDCs are examined and their properties are discussed, with particular attention to their charge density wave, superconductive and topological phases. The use of TMDCs in nanoelectronic devices is also explored, along with strategies to improve charge carrier mobility, high frequency operation and the use of strain engineering to tailor their properties. Two-dimensional transition metal dichalcogenides (TMDCs) exhibit attractive electronic and mechanical properties. In this Review, the charge density wave, superconductive and topological phases of TMDCs are discussed, along with their synthesis and applications in devices with enhanced mobility and with the use of strain engineering to improve their properties.

3,436 citations