Author

Andrew Lines

Bio: Andrew Lines is an academic researcher from Intel. The author has contributed to research on asynchronous communication and asynchronous systems. The author has an h-index of 13 and has co-authored 21 publications receiving 1,713 citations.

Papers
Journal ArticleDOI
TL;DR: Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon, and can solve LASSO optimization problems with over three orders of magnitude superior energy-delay product compared to conventional solvers running on a CPU at iso-process/voltage/area.
Abstract: Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon. It integrates a wide range of novel features for the field, such as hierarchical connectivity, dendritic compartments, synaptic delays, and, most importantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitive Algorithm, Loihi can solve LASSO optimization problems with over three orders of magnitude superior energy-delay product compared to conventional solvers running on a CPU at iso-process/voltage/area. This provides an unambiguous example of spike-based computation outperforming all known conventional solutions.
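For readers unfamiliar with the Locally Competitive Algorithm (LCA) referenced above, the sketch below shows how rate-based LCA dynamics approximately solve a LASSO problem. It is a plain NumPy illustration of the underlying algorithm only, not Loihi's spiking on-chip implementation; the dictionary, step size, and thresholds are illustrative choices.

```python
import numpy as np

def lca_lasso(x, Phi, lam=0.1, tau=10.0, dt=1.0, steps=500):
    """Minimal rate-based Locally Competitive Algorithm (LCA) sketch.

    Approximately solves the LASSO problem
        min_a 0.5 * ||x - Phi @ a||^2 + lam * ||a||_1
    by integrating leaky neuron potentials u with lateral inhibition.
    Illustrative only; not Loihi's spiking implementation.
    """
    n_atoms = Phi.shape[1]
    u = np.zeros(n_atoms)                  # membrane potentials
    b = Phi.T @ x                          # feed-forward drive
    G = Phi.T @ Phi - np.eye(n_atoms)      # lateral inhibition weights
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u += (dt / tau) * (b - u - G @ a)  # LCA dynamics
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Example: recover a sparse code for a signal built from 5 dictionary atoms
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm dictionary atoms
x = Phi[:, :5] @ rng.standard_normal(5)
a_hat = lca_lasso(x, Phi)
print("nonzero coefficients:", np.count_nonzero(np.abs(a_hat) > 1e-3))
```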

2,331 citations

Patent
04 May 2009
TL;DR: In this paper, a gate-level circuit description corresponding to the circuit design is generated, and a minimal number of buffers is added to selected ones of the pipelines such that a performance constraint is satisfied.
Abstract: Methods and apparatus are described for optimizing a circuit design. A gate-level circuit description corresponding to the circuit design is generated. The gate-level circuit description includes a plurality of pipelines across a plurality of levels. Using a linear programming technique, a minimal number of buffers is added to selected ones of the pipelines such that a performance constraint is satisfied.
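As an illustration of the kind of formulation the abstract describes, here is a toy linear program (using SciPy's linprog for convenience) that minimizes the total buffering added to pipeline channels subject to loop-throughput constraints. The channel names, loop model, and constraint form are invented for this sketch and are not taken from the patent.

```python
import numpy as np
from scipy.optimize import linprog

# Toy slack-matching-style LP (illustrative only): choose how many pipeline
# buffers b_c to add on each channel so that every handshake loop meets a
# target cycle time tau, while minimizing the total number of added buffers.
#
# Constraint per loop l:  tokens_l + sum_{c in loop l} b_c >= latency_l / tau
# (a loop sustains roughly one token per latency/occupancy cycle time).

channels = ["f0", "f1", "f2", "ack"]        # hypothetical channel names
loops = [
    {"channels": ["f0", "f1", "ack"], "tokens": 2, "latency": 12.0},
    {"channels": ["f1", "f2", "ack"], "tokens": 1, "latency": 9.0},
]
tau = 3.0                                   # target cycle time (ns)

c = np.ones(len(channels))                  # objective: total buffers added
A_ub, b_ub = [], []
for loop in loops:
    # Rewrite the constraint as  -sum(b_c) <= tokens - latency/tau
    row = [-1.0 if ch in loop["channels"] else 0.0 for ch in channels]
    A_ub.append(row)
    b_ub.append(loop["tokens"] - loop["latency"] / tau)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(channels))
# linprog returns fractional counts; a real flow would round up or use an ILP.
print("buffers per channel:", dict(zip(channels, np.round(res.x, 2))))
```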

74 citations

Patent
26 Jan 2004
TL;DR: In this article, the authors describe a system-on-a-chip (SoC) that includes a plurality of synchronous modules, each synchronous module having an associated clock domain characterized by a data rate.
Abstract: Methods and apparatus are described relating to a system-on-a-chip which includes a plurality of synchronous modules, each synchronous module having an associated clock domain characterized by a data rate, the data rates comprising a plurality of different data rates. The system-on-a-chip also includes a plurality of clock domain converters. Each clock domain converter is coupled to a corresponding one of the synchronous modules, and is operable to convert data between the clock domain of the corresponding synchronous module and an asynchronous domain characterized by transmission of data according to an asynchronous handshake protocol. An asynchronous crossbar is coupled to the plurality of clock domain converters, and is operable in the asynchronous domain to implement a first-in-first-out (FIFO) channel between any two of the clock domain converters, thereby facilitating communication between any two of the synchronous modules.
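A behavioral software analogy of the scheme above, assuming nothing beyond the abstract: two modules running at different data rates exchange data through a bounded FIFO, which is the role the clock domain converters and asynchronous crossbar play in hardware. Rates and sizes are invented for illustration.

```python
import threading, queue, time

channel = queue.Queue(maxsize=4)        # bounded FIFO ~ asynchronous channel

def fast_module():                      # producer in a "fast" clock domain
    for word in range(8):
        channel.put(word)               # blocks when FIFO is full (backpressure)
        time.sleep(0.01)                # ~100 words/s

def slow_module():                      # consumer in a "slow" clock domain
    for _ in range(8):
        word = channel.get()            # blocks when FIFO is empty
        print("received", word)
        time.sleep(0.05)                # ~20 words/s

t1 = threading.Thread(target=fast_module)
t2 = threading.Thread(target=slow_module)
t1.start(); t2.start()
t1.join(); t2.join()
```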

71 citations

Patent
13 Jul 2004
TL;DR: In this article, a static random access memory (SRAM) is provided including a plurality of SRAM state elements and SRAM environment circuitry, which is operable to interface with external asynchronous circuitry and to enable reading of and writing to the state elements in a delay-insensitive manner.
Abstract: A static random access memory (SRAM) is provided including a plurality of SRAM state elements and SRAM environment circuitry. The SRAM environment circuitry is operable to interface with external asynchronous circuitry and to enable reading of and writing to the SRAM state elements in a delay-insensitive manner provided that at least one timing assumption is met.
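The delay-insensitive interfacing mentioned above is commonly achieved with 1-of-N data encodings. The sketch below shows generic dual-rail (1-of-2) encoding with completion detection as a general illustration of delay-insensitive signaling, not the patent's specific circuitry.

```python
# Dual-rail (1-of-2) encoding: each bit uses two wires, and a word is known
# to be valid only when every bit has exactly one wire asserted. An all-zero
# "spacer" separates successive tokens (return-to-zero signaling).

def encode_dual_rail(word, width):
    """Encode an integer as (false_wire, true_wire) pairs, LSB first."""
    return [(((word >> i) & 1) ^ 1, (word >> i) & 1) for i in range(width)]

def is_complete(rails):
    """Completion detection: every bit pair carries exactly one 1."""
    return all((f ^ t) == 1 for f, t in rails)

SPACER = [(0, 0)] * 4

token = encode_dual_rail(0b1010, 4)
print(token, is_complete(token), is_complete(SPACER))
```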

46 citations

Patent
05 Jan 2006
TL;DR: In this paper, a shared memory is described having a plurality of receive and transmit ports characterized by a first data rate and a memory array of banks characterized by a second data rate, with buffering used to decouple operation of the receive and transmit ports at the first data rate from operation of the memory array at the second data rate.
Abstract: A shared memory is described having a plurality of receive ports and a plurality of transmit ports characterized by a first data rate. A memory includes a plurality of memory banks organized in rows and columns and is characterized by a second data rate. Non-blocking receive and transmit crossbar circuitry is operable to connect any of the receive and transmit ports, respectively, with any of the memory banks. Buffering is operable to decouple operation of the receive and transmit ports at the first data rate from operation of the memory array at the second data rate. Scheduling circuitry is operable to facilitate striping of each data segment of a frame across the memory banks in one of the rows, and to facilitate striping of successive data segments of the frame across successive rows in the array.
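To make the striping scheme concrete, the following toy address-mapping sketch spreads each data segment across the banks of one row and walks successive segments down successive rows. The row/column counts and segment size are invented for illustration and are not taken from the patent.

```python
NUM_ROWS, NUM_COLS, SEG_BYTES = 4, 8, 64
CHUNK = SEG_BYTES // NUM_COLS            # bytes of a segment per bank

def stripe(frame_bytes, start_row=0):
    """Yield (row, column, chunk) placements for one frame."""
    segments = [frame_bytes[i:i + SEG_BYTES]
                for i in range(0, len(frame_bytes), SEG_BYTES)]
    for seg_idx, seg in enumerate(segments):
        row = (start_row + seg_idx) % NUM_ROWS       # successive rows
        for col in range(NUM_COLS):                  # stripe across banks
            chunk = seg[col * CHUNK:(col + 1) * CHUNK]
            if chunk:
                yield row, col, chunk

frame = bytes(range(200))
for row, col, chunk in list(stripe(frame))[:5]:
    print(f"row {row} bank {col}: {len(chunk)} bytes")
```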

39 citations


Cited by
Journal ArticleDOI
27 Nov 2019 - Nature
TL;DR: An overview of the developments in neuromorphic computing for both algorithms and hardware is provided and the fundamentals of learning and hardware frameworks are highlighted, with emphasis on algorithm–hardware codesign.
Abstract: Guided by brain-like ‘spiking’ computational frameworks, neuromorphic computing—brain-inspired computing for machine intelligence—promises to realize artificial intelligence while reducing the energy requirements of computing platforms. This interdisciplinary field began with the implementation of silicon circuits for biological neural routines, but has evolved to encompass the hardware implementation of algorithms with spike-based encoding and event-driven representations. Here we provide an overview of the developments in neuromorphic computing for both algorithms and hardware and highlight the fundamentals of learning and hardware frameworks. We discuss the main challenges and the future prospects of neuromorphic computing, with emphasis on algorithm–hardware codesign. The authors review the advantages and future prospects of neuromorphic computing, a multidisciplinary engineering concept for energy-efficient artificial intelligence with brain-inspired functionality.

877 citations

Journal ArticleDOI
TL;DR: This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras.
Abstract: Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of μs), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
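The per-pixel event generation described above can be summarized with a simplified contrast-threshold model: a pixel emits an event (t, x, y, polarity) whenever its log-brightness changes by more than a threshold C since its last event. The sketch below is a generic idealization of this model, not the survey's or any specific sensor's pipeline.

```python
import numpy as np

def events_from_frames(frames, times, C=0.2):
    """frames: (T, H, W) intensity stack; returns a list of (t, x, y, polarity)."""
    logI = np.log(frames.astype(np.float64) + 1e-3)
    ref = logI[0].copy()                      # log intensity at each pixel's last event
    events = []
    for t in range(1, len(frames)):
        delta = logI[t] - ref
        ys, xs = np.where(np.abs(delta) >= C) # pixels that crossed the threshold
        for y, x in zip(ys, xs):
            polarity = 1 if delta[y, x] > 0 else -1
            events.append((times[t], x, y, polarity))
            ref[y, x] = logI[t, y, x]         # reset reference level
    return events

# Example: a brightening then dimming 4x4 scene
frames = np.stack([np.full((4, 4), 50), np.full((4, 4), 80), np.full((4, 4), 60)])
print(len(events_from_frames(frames, times=[0.0, 0.001, 0.002])), "events")
```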

697 citations

Journal ArticleDOI
TL;DR: In this paper, the spin degree of freedom of electrons and/or holes, which can also interact with their orbital moments, is described with respect to the spin generation methods as detailed in Sections 2–9.

614 citations

Journal ArticleDOI
01 Jul 2018
TL;DR: This Review Article examines the development of organic neuromorphic devices, considering the different switching mechanisms used in the devices and the challenges the field faces in delivering neuromorphic computing applications.
Abstract: Neuromorphic computing could address the inherent limitations of conventional silicon technology in dedicated machine learning applications. Recent work on silicon-based asynchronous spiking neural networks and large crossbar arrays of two-terminal memristive devices has led to the development of promising neuromorphic systems. However, delivering a compact and efficient parallel computing technology that is capable of embedding artificial neural networks in hardware remains a significant challenge. Organic electronic materials offer an attractive option for such systems and could provide biocompatible and relatively inexpensive neuromorphic devices with low-energy switching and excellent tunability. Here, we review the development of organic neuromorphic devices. We consider different resistance-switching mechanisms, which typically rely on electrochemical doping or charge trapping, and report approaches that enhance state retention and conductance tuning. We also discuss the challenges the field faces in implementing low-power neuromorphic computing, such as device downscaling and improving device speed. Finally, we highlight early demonstrations of device integration into arrays, and consider future directions and potential applications of this technology.

568 citations

Journal ArticleDOI
31 Jul 2019 - Nature
TL;DR: The Tianjic chip is presented, which integrates neuroscience-oriented and computer-science-oriented approaches to artificial general intelligence to provide a hybrid, synergistic platform and is expected to stimulate AGI development by paving the way to more generalized hardware platforms.
Abstract: There are two general approaches to developing artificial general intelligence (AGI)1: computer-science-oriented and neuroscience-oriented. Because of the fundamental differences in their formulations and coding schemes, these two approaches rely on distinct and incompatible platforms2–8, retarding the development of AGI. A general platform that could support the prevailing computer-science-based artificial neural networks as well as neuroscience-inspired models and algorithms is highly desirable. Here we present the Tianjic chip, which integrates the two approaches to provide a hybrid, synergistic platform. The Tianjic chip adopts a many-core architecture, reconfigurable building blocks and a streamlined dataflow with hybrid coding schemes, and can not only accommodate computer-science-based machine-learning algorithms, but also easily implement brain-inspired circuits and several coding schemes. Using just one chip, we demonstrate the simultaneous processing of versatile algorithms and models in an unmanned bicycle system, realizing real-time object detection, tracking, voice control, obstacle avoidance and balance control. Our study is expected to stimulate AGI development by paving the way to more generalized hardware platforms. The ‘Tianjic’ hybrid electronic chip combines neuroscience-oriented and computer-science-oriented approaches to artificial general intelligence, demonstrated by controlling an unmanned bicycle.

545 citations