Author

Thomas Ferreira de Lima

Bio: Thomas Ferreira de Lima is an academic researcher from Princeton University. The author has contributed to research in topics: Neuromorphic engineering & Photonics. The author has an h-index of 24 and has co-authored 81 publications receiving 1,991 citations.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: First observations of a recurrent silicon photonic neural network, in which connections are configured by microring weight banks, are reported, and a mathematical isomorphism between the silicon photonic circuit and a continuous neural network model is demonstrated through dynamical bifurcation analysis.
Abstract: Photonic systems for high-performance information processing have attracted renewed interest. Neuromorphic silicon photonics has the potential to integrate processing functions that vastly exceed the capabilities of electronics. We report first observations of a recurrent silicon photonic neural network, in which connections are configured by microring weight banks. A mathematical isomorphism between the silicon photonic circuit and a continuous neural network model is demonstrated through dynamical bifurcation analysis. Exploiting this isomorphism, a simulated 24-node silicon photonic neural network is programmed using a “neural compiler” to solve a differential system emulation task. A 294-fold acceleration against a conventional benchmark is predicted. We also propose and derive power consumption analysis for modulator-class neurons that, as opposed to laser-class neurons, are compatible with silicon photonic platforms. At increased scale, neuromorphic silicon photonics could access new regimes of ultrafast information processing for radio, control, and scientific computing.
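The continuous neural network model that the paper maps onto the photonic circuit can be illustrated with a minimal numerical sketch. This is not the authors' "neural compiler": the 24-node size matches the simulated network in the abstract, but the standard continuous-time recurrent form dy/dt = -y + W·σ(y) + b and the random weights below are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_ctrnn(W, b, y0, dt=1e-3, steps=5000):
    """Forward-Euler integration of dy/dt = -y + W @ sigmoid(y) + b."""
    y = y0.copy()
    for _ in range(steps):
        y += dt * (-y + W @ sigmoid(y) + b)
    return y

rng = np.random.default_rng(0)
n = 24                                   # matches the simulated network size
W = 0.1 * rng.standard_normal((n, n))    # placeholder interconnection weights
b = rng.standard_normal(n)               # placeholder biases
y_final = simulate_ctrnn(W, b, np.zeros(n))
print(y_final.shape)  # (24,)
```

In the hardware analogy, the rows of W correspond to microring weight-bank settings and the leaky -y term to the carrier or cavity relaxation of each node.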

518 citations

Journal ArticleDOI
TL;DR: Recent advances in integrated photonic neuromorphic systems are reviewed, current and future challenges are discussed, and the advances in science and technology needed to meet those challenges are outlined.
Abstract: Research in photonic computing has flourished due to the proliferation of optoelectronic components on photonic integration platforms. Photonic integrated circuits have enabled ultrafast artificial neural networks, providing a framework for a new class of information processing machines. Algorithms running on such hardware have the potential to address the growing demand for machine learning and artificial intelligence, in areas such as medical diagnosis, telecommunications, and high-performance and scientific computing. In parallel, the development of neuromorphic electronics has highlighted challenges in that domain, in particular, related to processor latency. Neuromorphic photonics offers sub-nanosecond latencies, providing a complementary opportunity to extend the domain of artificial intelligence. Here, we review recent advances in integrated photonic neuromorphic systems, discuss current and future challenges, and outline the advances in science and technology needed to meet those challenges.

454 citations

Journal ArticleDOI
TL;DR: The field is reaching a critical juncture at which there is a shift from studying single devices to studying an interconnected network of lasers; the recent research in the information processing abilities of such lasers, dubbed “photonic neurons,” “laser neurons,” or “optical neurons,” is reviewed.
Abstract: Recently, there has been tremendous interest in excitable optoelectronic devices and in particular excitable semiconductor lasers that could potentially enable unconventional processing approaches beyond conventional binary-logic-based approaches. In parallel, there has been renewed investigation of non-von Neumann architectures driven in part by incipient limitations in aspects of Moore’s law. These neuromorphic architectures attempt to decentralize processing by interweaving interconnection with computing while simultaneously incorporating time-resolved dynamics, loosely classified as spiking (a.k.a. excitability). The rapid and efficient advances in CMOS-compatible photonic interconnect technologies have led to opportunities in optics and photonics for unconventional circuits and systems. Effort in the budding research field of photonic spike processing aims to synergistically integrate the underlying physics of photonics with bio-inspired processing. Lasers operating in the excitable regime are dynamically analogous with the spiking dynamics observed in neuron biophysics but roughly 8 orders of magnitude faster. The field is reaching a critical juncture at which there is a shift from studying single devices to studying an interconnected network of lasers. In this paper, we review the recent research in the information processing abilities of such lasers, dubbed “photonic neurons,” “laser neurons,” or “optical neurons.” An integrated network of such lasers on a chip could potentially grant the capacity for complex, ultrafast categorization and decision making to provide a range of computing and signal processing applications, such as sensing and manipulating the radio frequency spectrum and for hypersonic aircraft control.
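The excitable ("spiking") behavior this review attributes to semiconductor lasers can be demonstrated with a toy model. The sketch below uses the FitzHugh-Nagumo equations, a textbook stand-in for excitability, rather than the laser rate equations analyzed in the paper; the parameters and thresholds are conventional illustrative values.

```python
# FitzHugh-Nagumo excitable dynamics (a stand-in for excitable laser models):
#   dv/dt = v - v^3/3 - w + I_ext
#   dw/dt = eps * (v + a - b*w)
# Below threshold the system relaxes quietly; above it, it emits spikes.

def spike_count(i_ext, t_end=200.0, dt=0.01, a=0.7, b=0.8, eps=0.08):
    v, w = -1.2, -0.6          # start near the resting fixed point
    spikes, above = 0, False
    for _ in range(int(t_end / dt)):
        dv = v - v**3 / 3 - w + i_ext
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        if v > 1.0 and not above:     # rising crossing of the spike threshold
            spikes, above = spikes + 1, True
        elif v < 0.0:                 # spike has ended; re-arm the detector
            above = False
    return spikes

print(spike_count(0.0))   # sub-threshold drive: no spikes
print(spike_count(0.5))   # supra-threshold drive: repetitive spiking
```

The all-or-nothing response to input amplitude is the defining property the review exploits; in the lasers it surveys the same qualitative dynamics run roughly 8 orders of magnitude faster than in biological neurons.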

213 citations

Journal ArticleDOI
TL;DR: This work describes the performance of photonic and electronic hardware underlying neural network models using multiply-accumulate operations, and investigates the limits of analog electronic crossbar arrays and on-chip photonic linear computing systems.
Abstract: It has long been known that photonic communication can alleviate the data movement bottlenecks that plague conventional microelectronic processors. More recently, there has also been interest in its capabilities to implement low-precision linear operations, such as matrix multiplications, fast and efficiently. We characterize the performance of photonic and electronic hardware underlying neural network models using multiply-accumulate operations. First, we investigate the limits of analog electronic crossbar arrays and on-chip photonic linear computing systems. Photonic processors are shown to have advantages in the limit of large processor sizes (>100 µm), large vector sizes (N > 500), and low noise precision (≤4 bits). We discuss several proposed tunable photonic MAC systems, and provide a concrete comparison between deep learning and photonic hardware using several empirically validated device and system models. We show significant potential improvements over digital electronics in energy (>10²), speed (>10³), and compute density (>10²).
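The multiply-accumulate (MAC) accounting the paper uses can be sketched as a back-of-envelope calculation. The per-MAC energies below are illustrative placeholders, not figures taken from the paper; only the N > 500 regime and the ~10² energy ratio echo the abstract.

```python
# MAC-based energy accounting for a matrix-vector product.

def matvec_macs(n):
    """An n x n matrix-vector product costs n*n multiply-accumulates."""
    return n * n

E_DIGITAL_PER_MAC = 1e-12   # ~1 pJ/MAC: assumed digital-electronic figure
E_PHOTONIC_PER_MAC = 1e-14  # ~10 fJ/MAC: assumed photonic figure

n = 512                     # vector size in the regime (N > 500) favoring photonics
macs = matvec_macs(n)
e_digital = macs * E_DIGITAL_PER_MAC
e_photonic = macs * E_PHOTONIC_PER_MAC
print(macs)                    # 262144
print(e_digital / e_photonic)  # 100.0 (ratio of the two assumed figures)
```

Because both technologies perform the same N² MACs, the comparison reduces to per-MAC energy, which is why the paper's analysis is framed entirely in those units.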

187 citations

Journal ArticleDOI
TL;DR: The silicon photonics modulator neuron constitutes the final piece needed to make photonic neural networks fully integrated on currently available silicon photonic platforms.
Abstract: There has been a recently renewed interest in neuromorphic photonics, a field promising to access pivotal and unexplored regimes of machine intelligence. Progress has been made on isolated neurons and analog interconnects; nevertheless, this renewal has yet to produce a demonstration of a silicon photonic neuron capable of interacting with other like neurons. We report a modulator-class photonic neuron fabricated in a conventional silicon photonic process line. We demonstrate behaviors of transfer function configurability, fan-in, inhibition, time-resolved processing, and, crucially, autaptic cascadability -- a sufficient set of behaviors for a device to act as a neuron participating in a network of like neurons. The silicon photonic modulator neuron constitutes the final piece needed to make photonic neural networks fully integrated on currently available silicon photonic platforms.
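A modulator-class neuron of the kind described above can be sketched under simplifying assumptions: weighted optical inputs are summed in a photodetector, and the resulting photocurrent drives an electro-optic modulator whose power transmission follows a cos² transfer function (a generic Mach-Zehnder model, not the exact device of the paper; `v_pi` and the quadrature bias are illustrative).

```python
import numpy as np

def modulator_neuron(inputs, weights, p_pump=1.0, v_pi=1.0, bias=0.5):
    """Fan-in -> photodetection -> electro-optic nonlinearity -> output power."""
    photocurrent = np.dot(weights, inputs)        # weighted sum (fan-in)
    phase = np.pi * (photocurrent / v_pi + bias)  # drive voltage vs. V_pi
    return p_pump * np.cos(phase / 2) ** 2        # cos^2 modulator transfer

x = np.array([0.2, 0.5, 0.1])     # optical input powers from other neurons
w = np.array([0.8, -0.4, 0.3])    # a negative weight acts as inhibition
y = modulator_neuron(x, w)
print(0.0 <= y <= 1.0)            # output stays within the pump budget: True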

175 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
01 Jul 2017
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
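This architecture implements each weight matrix through its singular-value decomposition, M = U·diag(s)·Vᵀ, with the unitary factors realized as meshes of Mach-Zehnder interferometers and the singular values as attenuators or amplifiers. A numerical sanity check of that factorization (not a device model) is straightforward:

```python
import numpy as np

# Any real weight matrix factors as M = U @ diag(s) @ Vh, where U and Vh
# are unitary (implementable as lossless interferometer meshes) and diag(s)
# is a diagonal scaling (implementable as per-channel gain/loss).

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))          # example 4x4 weight matrix
U, s, Vh = np.linalg.svd(M)

unitary_ok = np.allclose(U @ U.T, np.eye(4))       # mesh-implementable factor
reconstructs = np.allclose(U @ np.diag(s) @ Vh, M)  # factorization is exact
print(unitary_ok)     # True
print(reconstructs)   # True
```

The decomposition explains the architecture's generality: since every matrix admits an SVD, two interferometer meshes plus a diagonal stage suffice for any linear layer.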

1,955 citations

01 Jan 1979
TL;DR: This special issue aims at gathering recent advances in learning with shared information methods and their applications in computer vision and multimedia analysis, with papers addressing interesting real-world computer vision and multimedia applications especially encouraged.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that we have some classes containing lots of training data and many classes containing a small amount of training data. Therefore, how to use frequent classes to help learn rare classes, for which it is harder to collect training data, is an open question. Learning with shared information is an emerging topic in machine learning, computer vision and multimedia analysis. There are different levels of components that can be shared during concept modeling and machine learning stages, such as sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters and sharing training examples, etc. Regarding specific methods, multi-task learning, transfer learning and deep learning can be seen as using different strategies to share information. These learning-with-shared-information methods are very effective in solving real-world large-scale problems. This special issue aims at gathering the recent advances in learning with shared information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art works, as well as literature reviews, are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged.
Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters, and sharing training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for the specific computer vision or multimedia problem
• Survey papers regarding the topic of learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.

1,758 citations

Journal ArticleDOI
07 Sep 2018-Science
TL;DR: 3D-printed D2NNs are created that implement classification of images of handwritten digits and fashion products, as well as the function of an imaging lens at a terahertz spectrum.
Abstract: Deep learning has been transforming our ability to execute advanced inference tasks using computers. Here we introduce a physical mechanism to perform machine learning by demonstrating an all-optical diffractive deep neural network (D2NN) architecture that can implement various functions following the deep learning-based design of passive diffractive layers that work collectively. We created 3D-printed D2NNs that implement classification of images of handwritten digits and fashion products, as well as the function of an imaging lens at a terahertz spectrum. Our all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can execute; will find applications in all-optical image analysis, feature detection, and object classification; and will also enable new camera designs and optical components that perform distinctive tasks using D2NNs.
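One diffractive layer of a D2NN can be sketched with textbook scalar diffraction: free-space propagation via the angular-spectrum method, followed by a phase mask (here a random placeholder standing in for a learned, 3D-printed layer). The grid size, wavelength, and distance below are illustrative, not the paper's terahertz design values.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z (scalar angular-spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

n = 64
rng = np.random.default_rng(2)
field = np.ones((n, n), dtype=complex)                        # uniform input
phase_mask = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))   # one "layer"
out = angular_spectrum(field * phase_mask, wavelength=1e-3, dx=2e-3, z=0.05)
print(out.shape)  # (64, 64)
```

A full D2NN stacks several such phase-mask-plus-propagation stages and reads the result as intensity on detector regions; training adjusts the mask phases, after which inference is passive and runs at the speed of light.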

1,145 citations