Author

Garrett S. Rose

Bio: Garrett S. Rose is an academic researcher from the University of Tennessee. The author has contributed to research in topics: Neuromorphic engineering & Memristor. The author has an h-index of 32 and has co-authored 164 publications receiving 4,031 citations. Previous affiliations of Garrett S. Rose include Florida Polytechnic University & Mitre Corporation.


Papers
Posted Content
TL;DR: An exhaustive review of the research conducted in neuromorphic computing since the inception of the term is provided to motivate further work by illuminating gaps in the field where new research is needed.
Abstract: Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history. We begin with a 35-year review of the motivations and drivers of neuromorphic computing, then look at the major research areas of the field, which we define as neuro-inspired models, algorithms and learning approaches, hardware and devices, supporting systems, and finally applications. We conclude with a broad discussion on the major research topics that need to be addressed in the coming years to see the promise of neuromorphic computing fulfilled. The goals of this work are to provide an exhaustive review of the research conducted in neuromorphic computing since the inception of the term, and to motivate further work by illuminating gaps in the field where new research is needed.
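To make the notion of a neuro-inspired model concrete, the sketch below implements a leaky integrate-and-fire (LIF) neuron, one of the simplest spiking models used throughout the neuromorphic literature. It is a generic illustration, not a model from the survey; the time constant, threshold, and drive current are arbitrary illustrative values.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron over a current trace.

    Illustrative default parameters (not from the survey); returns the
    membrane-potential trace and the list of spike times.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Forward-Euler step: leak toward rest plus input drive.
        v += dt / tau * (v_rest - v) + i_in * dt
        if v >= v_thresh:        # threshold crossing emits a spike...
            spikes.append(t * dt)
            v = v_reset          # ...and the potential resets
        trace.append(v)
    return np.array(trace), spikes

# A constant supra-threshold drive yields a regular spike train.
trace, spikes = lif_neuron(np.full(100, 0.06))
print(len(spikes), "spikes; first at t =", spikes[0])
```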

570 citations

Journal ArticleDOI
TL;DR: This work relates logic encryption to fault propagation analysis in IC testing and develops a fault analysis-based logic encryption technique that enables a designer to controllably corrupt the outputs.
Abstract: Globalization of the integrated circuit (IC) design industry is making it easy for rogue elements in the supply chain to pirate ICs, overbuild ICs, and insert hardware Trojans. Due to supply chain attacks, the IC industry is losing approximately $4 billion annually. One way to protect ICs from these attacks is to encrypt the design by inserting additional gates such that correct outputs are produced only when specific inputs are applied to these gates. The state-of-the-art logic encryption technique inserts gates randomly into the design, but does not necessarily ensure that wrong keys corrupt the outputs. Our technique ensures that wrong keys corrupt the outputs. We relate logic encryption to fault propagation analysis in IC testing and develop a fault analysis-based logic encryption technique. This technique enables a designer to controllably corrupt the outputs. Specifically, to maximize the ambiguity for an attacker, this technique targets 50% Hamming distance between the correct and wrong outputs (ideal case) when a wrong key is applied. Furthermore, this 50% Hamming distance target is achieved using a smaller number of additional gates when compared to random logic encryption.
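The 50% Hamming-distance target can be made concrete with a short sketch: evaluate a key-gated netlist under the correct key and a wrong key, and measure the average fraction of corrupted output bits over random input patterns. The toy circuit and keys below are hypothetical stand-ins, not the paper's benchmarks; a fault analysis-based insertion would choose key-gate locations so that this measured corruption approaches 50%.

```python
import random

def encrypted_circuit(x, key):
    """Toy 4-input, 3-output netlist with two XOR key gates inserted
    on internal wires; a wrong key flips downstream logic. Purely a
    hypothetical stand-in for an encrypted design."""
    a, b, c, d = x
    k0, k1 = key
    w1 = (a & b) ^ k0            # key gate on internal wire w1
    w2 = (c | d) ^ k1            # key gate on internal wire w2
    return (w1 ^ w2, w1 & d, w2 | a)

def avg_output_corruption(key_good, key_bad, trials=10_000):
    """Average Hamming distance (as a fraction of output bits) between
    correct-key and wrong-key outputs over random input patterns."""
    corrupted = 0
    for _ in range(trials):
        x = [random.getrandbits(1) for _ in range(4)]
        good = encrypted_circuit(x, key_good)
        bad = encrypted_circuit(x, key_bad)
        corrupted += sum(g != b for g, b in zip(good, bad))
    return corrupted / (trials * 3)

print(f"output corruption: {avg_output_corruption((0, 0), (1, 1)):.1%}")
```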

420 citations

Journal ArticleDOI
TL;DR: The results show that the hardware-based training scheme proposed in the paper can alleviate, and even cancel out, most of the noise in a memristor crossbar array applied to brain-state-in-a-box (BSB) neural networks.
Abstract: By mimicking highly parallel biological systems, neuromorphic hardware provides the capability of information processing within a compact and energy-efficient platform. However, the traditional von Neumann architecture and limited signal connections have severely constrained the scalability and performance of such hardware implementations. Recently, many research efforts have investigated the use of memristors in neuromorphic systems, owing to the similarity of memristors to biological synapses. In this paper, we explore the potential of a memristor crossbar array that functions as an autoassociative memory and apply it to brain-state-in-a-box (BSB) neural networks. In particular, the recall and training functions of a multianswer character recognition process based on the BSB model are studied. The robustness of the BSB circuit is analyzed and evaluated based on extensive Monte Carlo simulations, considering input defects, process variations, and electrical fluctuations. The results show that the hardware-based training scheme proposed in this paper can alleviate and even cancel out most of the noise.
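For intuition, the recall function of the BSB model can be sketched in a few lines: the state vector is repeatedly multiplied by the connection matrix (the operation a memristor crossbar performs in the analog domain) and clipped to the unit hypercube until it saturates at a stored corner. This is a minimal idealized sketch with illustrative constants, not the paper's trained circuit, and it omits the device noise the Monte Carlo analysis addresses.

```python
import numpy as np

def bsb_recall(A, x0, alpha=0.25, lam=1.0, steps=50):
    """BSB recall: iterate x <- clip(alpha*A@x + lam*x, -1, 1) until the
    state saturates at a corner of the hypercube (a stored pattern).
    alpha and lam are illustrative values, not the paper's settings."""
    x = x0.copy()
    for _ in range(steps):
        # A @ x is the matrix-vector product a memristor crossbar
        # computes in analog; clip models amplifier saturation.
        x = np.clip(alpha * A @ x + lam * x, -1.0, 1.0)
    return x

# Store two bipolar patterns with a simple outer-product (Hebbian) rule.
p1 = np.array([1., -1., 1., -1.])
p2 = np.array([1., 1., -1., -1.])
A = (np.outer(p1, p1) + np.outer(p2, p2)) / 4.0

noisy = np.clip(p1 + 0.4 * np.random.randn(4), -1, 1)  # corrupted probe
print(bsb_recall(A, noisy))  # converges to the corner storing p1
```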

348 citations

Journal ArticleDOI
TL;DR: In this article, a series of architecturally scalable quantum annealing (QA) processors consisting of networks of manufactured interacting spins (qubits) were built, and the energy eigenspectrum of two- and eight-qubit systems within one such processor was measured.
Abstract: Entanglement lies at the core of quantum algorithms designed to solve problems that are intractable by classical approaches. One such algorithm, quantum annealing (QA), provides a promising path to a practical quantum processor. We have built a series of architecturally scalable QA processors consisting of networks of manufactured interacting spins (qubits). Here, we use qubit tunneling spectroscopy to measure the energy eigenspectrum of two- and eight-qubit systems within one such processor, demonstrating quantum coherence in these systems. We present experimental evidence that, during a critical portion of QA, the qubits become entangled and that entanglement persists even as these systems reach equilibrium with a thermal environment. Our results provide an encouraging sign that QA is a viable technology for large-scale quantum computing.
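For readers unfamiliar with QA, the energy eigenspectrum being measured is that of a transverse-field Ising Hamiltonian interpolating between a driver term and a problem term. The sketch below diagonalizes a two-qubit instance across the anneal; the field and coupling values are illustrative, not the processor's actual schedule.

```python
import numpy as np

# Two-qubit transverse-field Ising model, the Hamiltonian family that
# underlies quantum annealing. Field/coupling values are illustrative.
I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

h, J = 0.1, -1.0                                   # illustrative h_i, J_12
H_driver = np.kron(sx, I2) + np.kron(I2, sx)       # transverse field
H_problem = h * (np.kron(sz, I2) + np.kron(I2, sz)) + J * np.kron(sz, sz)

# Sweep the anneal parameter s from 0 (driver only) to 1 (problem only)
# and record the energy eigenspectrum -- the quantity the experiment
# probes with qubit tunneling spectroscopy.
for s in np.linspace(0.0, 1.0, 5):
    H = -(1.0 - s) * H_driver + s * H_problem
    print(f"s = {s:.2f}  spectrum = {np.round(np.linalg.eigvalsh(H), 3)}")
```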

252 citations

Journal ArticleDOI
TL;DR: In this paper, a series of scalable quantum annealing (QA) processors consisting of networks of manufactured interacting spins (qubits) were built, and the authors used qubit tunneling spectroscopy to measure the energy eigenspectrum of two- and eight-qubit systems.
Abstract: Entanglement lies at the core of quantum algorithms designed to solve problems that are intractable by classical approaches. One such algorithm, quantum annealing (QA), provides a promising path to a practical quantum processor. We have built a series of scalable QA processors consisting of networks of manufactured interacting spins (qubits). Here, we use qubit tunneling spectroscopy to measure the energy eigenspectrum of two- and eight-qubit systems within one such processor, demonstrating quantum coherence in these systems. We present experimental evidence that, during a critical portion of QA, the qubits become entangled and that entanglement persists even as these systems reach equilibrium with a thermal environment. Our results provide an encouraging sign that QA is a viable technology for large-scale quantum computing.

199 citations


Cited by
Journal ArticleDOI
18 Jun 2016
TL;DR: This work proposes a novel PIM architecture, called PRIME, to accelerate NN applications in ReRAM-based main memory; it distinguishes itself from prior work on NN acceleration with significant performance improvement and energy saving.
Abstract: Processing-in-memory (PIM) is a promising solution to address the "memory wall" challenges for future computer systems. Prior proposed PIM architectures put additional computation logic in or near memory. The emerging metal-oxide resistive random access memory (ReRAM) has shown its potential to be used for main memory. Moreover, with its crossbar array structure, ReRAM can perform matrix-vector multiplication efficiently, and has been widely studied to accelerate neural network (NN) applications. In this work, we propose a novel PIM architecture, called PRIME, to accelerate NN applications in ReRAM-based main memory. In PRIME, a portion of the ReRAM crossbar arrays can be configured as accelerators for NN applications or as normal memory for a larger memory space. We provide microarchitecture and circuit designs to enable these morphable functions with an insignificant area overhead. We also design a software/hardware interface for software developers to implement various NNs on PRIME. Benefiting from both the PIM architecture and the efficiency of using ReRAM for NN computation, PRIME distinguishes itself from prior work on NN acceleration, with significant performance improvement and energy saving. Our experimental results show that, compared with a state-of-the-art neural processing unit design, PRIME improves performance by ~2360× and reduces energy consumption by ~895× across the evaluated machine learning benchmarks.
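The crossbar operation PRIME builds on can be sketched briefly: matrix entries are mapped to device conductances, the input vector to word-line voltages, and each bit-line current sums the products, yielding an analog matrix-vector multiply that an ADC then digitizes. The model below is deliberately idealized (no device variation; signed weights handled by a simple linear offset rather than the paired arrays real designs use), and all parameter values are illustrative.

```python
import numpy as np

def crossbar_mvm(weights, v_in, g_min=1e-6, g_max=1e-4, adc_bits=8):
    """Idealized ReRAM-crossbar matrix-vector multiply.

    Weights map linearly to device conductances and inputs to word-line
    voltages; each bit-line current sums the products (Kirchhoff's
    current law) and is then digitized. A simplified sketch: signed
    weights are handled by a linear offset here, and device variation
    is ignored. All parameter values are illustrative.
    """
    w_max = np.abs(weights).max()
    G = g_min + (weights / w_max + 1.0) / 2.0 * (g_max - g_min)
    i_out = G.T @ v_in                       # analog column currents
    levels = 2 ** adc_bits - 1               # uniform ADC quantization
    span = i_out.max() - i_out.min() + 1e-30
    return np.round((i_out - i_out.min()) / span * levels) / levels

W = np.random.randn(128, 64)    # e.g., one NN layer's weight matrix
v = np.random.rand(128)         # input activations as voltages
print(crossbar_mvm(W, v)[:5])
```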

1,197 citations

Journal ArticleDOI
01 Dec 1983-Nature
The Art of Electronics.

1,146 citations

Journal ArticleDOI
TL;DR: In this paper, a comprehensive survey of the most important aspects of DL, including enhancements recently added to the field, is provided, along with the challenges and suggested solutions to help researchers understand the existing research gaps.
Abstract: In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. Moreover, it has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks, matching or even beating human performance. One of the benefits of DL is the ability to learn from massive amounts of data. The DL field has grown quickly in the last few years and has been used extensively to successfully address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art of DL, each of them has tackled only one aspect of it, leading to an overall lack of knowledge about the field. Therefore, in this contribution, we propose a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to provide a more comprehensive survey of the most important aspects of DL, including the enhancements recently added to the field. In particular, this paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution Network (HRNet). Finally, we present the challenges and suggested solutions to help researchers understand the existing research gaps. This is followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.
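As a concrete anchor for the CNN material the review surveys, the sketch below spells out the single operation every CNN layer repeats: a 2-D convolution (implemented, as in most DL frameworks, as cross-correlation) followed by a ReLU nonlinearity. It is a didactic NumPy sketch, not code from any surveyed architecture.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# One conv + ReLU stage: the building block that, stacked with pooling
# and fully connected layers, yields architectures from AlexNet onward.
img = np.random.rand(28, 28)
vertical_edge = np.array([[1., 0., -1.]] * 3)    # hand-made 3x3 filter
feature_map = np.maximum(conv2d(img, vertical_edge), 0.0)
print(feature_map.shape)  # (26, 26)
```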

1,084 citations