Author

Frank Vahid

Bio: Frank Vahid is an academic researcher at the University of California, Riverside. He has contributed to research in topics including Cache and Software, has an h-index of 48, and has co-authored 251 publications receiving 8,140 citations. His previous affiliations include the University of California and the University of California, Irvine.


Papers
Book
01 Jul 1994
TL;DR: A book on embedded-system specification and design, covering models and architectures, specification languages, translation to VHDL, system partitioning, design-quality estimation, and specification refinement into synthesizable models, with an answering-machine example in SpecCharts.
Abstract: 1. Introduction. 2. Models and Architectures. 3. Specification Languages. 4. A Specification Example. 5. Translation to VHDL. 6. System Partitioning. 7. Design Quality Estimation. 8. Specification Refinement into Synthesizable Models. 9. System-Design Methodology and Environment. Appendix: Answering Machine in SpecCharts. Bibliography. Index.

609 citations

Book
17 Oct 2001
TL;DR: A textbook on embedded system design covering custom single-purpose processors (hardware), general-purpose processors (software), standard peripherals, memory, interfacing, state machine and concurrent process models, control systems, and IC and design technology, illustrated with a digital camera example.
Abstract: Preface. Introduction. Custom Single-Purpose Processors: Hardware. General-Purpose Processors: Software. Standard Single-Purpose Processors: Peripherals. Memory. Interfacing. Digital Camera Example. State Machine and Concurrent Process Models. Control Systems. IC Technology. Design Technology. Appendix A: Online Resources. Index.

352 citations

Proceedings ArticleDOI
01 May 2003
TL;DR: This work introduces a novel cache architecture intended for embedded microprocessor platforms that can be configured by software to be direct-mapped, two-way, or four-way set associative, using a technique the authors call way concatenation, with very little size or performance overhead.
Abstract: Energy consumption is a major concern in many embedded computing systems. Several studies have shown that cache memories account for about 50% of the total energy consumed in these systems. The performance of a given cache architecture is largely determined by the behavior of the application using that cache. Desktop systems have to accommodate a very wide range of applications and therefore the manufacturer usually sets the cache architecture as a compromise given current applications, technology and cost. Unlike desktop systems, embedded systems are designed to run a small range of well-defined applications. In this context, a cache architecture that is tuned for that narrow range of applications can have both increased performance as well as lower energy consumption. We introduce a novel cache architecture intended for embedded microprocessor platforms. The cache can be configured by software to be direct-mapped, two-way, or four-way set associative, using a technique we call way concatenation, having very little size or performance overhead. We show that the proposed cache architecture reduces energy caused by dynamic power compared to a way-shutdown cache. Furthermore, we extend the cache architecture to also support a way shutdown method designed to reduce the energy from static power that is increasing in importance in newer CMOS technologies. Our study of 23 programs drawn from Powerstone, MediaBench and Spec2000 show that tuning the cache's configuration saves energy for every program compared to conventional four-way set-associative as well as direct mapped caches, with average savings of 40% compared to a four-way conventional cache.
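The way-concatenation idea in this abstract lends itself to a short software model: a fixed pool of physical banks is grouped into four, two, or one logical way(s), so total capacity stays constant while software tunes the associativity. The C sketch below is only an illustrative model under assumed parameters (four 2 KB banks, 32-byte lines, a `ways` argument standing in for the configuration register); it is not the authors' hardware design, where the selection is done with a few gates on the index and tag paths rather than with arithmetic.

```c
/*
 * Illustrative model (not the paper's circuit) of way concatenation:
 * four physical banks act as 4, 2, or 1 logical way(s), so capacity
 * stays constant while software selects the associativity.
 */
#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES  32u                /* assumed line size           */
#define BANK_LINES  64u                /* 2 KB bank / 32 B line       */
#define NUM_BANKS   4u                 /* physical banks = max ways   */

typedef struct { uint32_t tag; int valid; } line_t;
static line_t cache[NUM_BANKS][BANK_LINES];

/* 'ways' plays the role of the software-visible configuration register:
   1 = direct-mapped, 2 = two-way, 4 = four-way. Returns 1 on hit.     */
static int access_cache(uint32_t addr, unsigned ways)
{
    uint32_t line_addr   = addr / LINE_BYTES;
    uint32_t sets        = BANK_LINES * (NUM_BANKS / ways); /* more sets as ways shrink */
    uint32_t index       = line_addr % sets;
    uint32_t tag         = line_addr / sets;

    /* Upper index bits pick which banks are concatenated into this set. */
    uint32_t bank_base   = (index / BANK_LINES) * ways;
    uint32_t set_in_bank = index % BANK_LINES;

    for (unsigned w = 0; w < ways; w++) {
        line_t *l = &cache[bank_base + w][set_in_bank];
        if (l->valid && l->tag == tag)
            return 1;                                       /* hit  */
    }
    cache[bank_base][set_in_bank].tag   = tag;              /* naive fill on miss */
    cache[bank_base][set_in_bank].valid = 1;
    return 0;                                               /* miss */
}

int main(void)
{
    /* Same address twice in four-way mode: miss, then hit. */
    printf("%s\n", access_cache(0x1000, 4) ? "hit" : "miss");
    printf("%s\n", access_cache(0x1000, 4) ? "hit" : "miss");
    return 0;
}
```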

323 citations

Journal ArticleDOI
TL;DR: An extensive set of technical challenges is enumerated, and specific applications are used to elaborate and provide insight into each concept of the Cyber-Physical System.
Abstract: The Cyber-Physical System (CPS) is a term describing a broad range of complex, multi-disciplinary, physically-aware next-generation engineered systems that integrate embedded computing technologies (the cyber part) into the physical world. In order to define and understand CPS more precisely, this article presents a detailed survey of the related work, discussing the origin of CPS, its relations to other research fields, prevalent concepts, and practical applications. Further, this article enumerates an extensive set of technical challenges and uses specific applications to elaborate and provide insight into each concept. CPS is a very broad research area and therefore has diverse applications spanning different scales. Additionally, next-generation technologies are expected to play an important role in CPS research. All CPS applications need to be designed considering cutting-edge technologies, necessary system-level requirements, and overall impact on the real world.

263 citations

Journal ArticleDOI
TL;DR: The authors highlight existing tools and methods for solving the key problems of system specification and design, including specification capture, design exploration, hierarchical modeling, software and hardware synthesis, and cosimulation, and describe a "specify-explore-refine" methodology.
Abstract: Embedded-system specification and design consists of describing a system's desired functionality and mapping that functionality for implementation by a set of system components such as processors, ASICs, memories, and buses. This paper discusses the key problems of system specification and design, including specification capture, design exploration, hierarchical modeling, software and hardware synthesis, and cosimulation. The authors highlight existing tools and methods for solving those problems and describe a "specify-explore-refine" methodology for meeting today's embedded-system product development requirements.

187 citations


Cited by
More filters
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known especially about mud-dominated calciclastic submarine fan systems. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations

Journal ArticleDOI
TL;DR: The hardware aspects of reconfigurable computing machines, from single-chip architectures to multi-chip systems, including internal structures and external coupling, are explored, along with the software that targets these machines.
Abstract: Due to its potential to greatly accelerate a wide variety of applications, reconfigurable computing has become a subject of a great deal of research. Its key feature is the ability to perform computations in hardware to increase performance, while retaining much of the flexibility of a software solution. In this survey, we explore the hardware aspects of reconfigurable computing machines, from single chip architectures to multi-chip systems, including internal structures and external coupling. We also focus on the software that targets these machines, such as compilation tools that map high-level algorithms directly to the reconfigurable substrate. Finally, we consider the issues involved in run-time reconfigurable systems, which reuse the configurable hardware during program execution.

1,666 citations

Patent
14 Jun 2016
TL;DR: Newness and distinctiveness are claimed in the features of ornamentation shown inside the broken-line circle in the accompanying representation.
Abstract: Newness and distinctiveness is claimed in the features of ornamentation as shown inside the broken line circle in the accompanying representation.

1,500 citations

Proceedings ArticleDOI
07 Aug 2002
TL;DR: A packet-switched platform for single-chip systems that scales well to an arbitrary number of processor-like resources, in which the on-chip communication infrastructure comprises the physical, data link, and network layers of the OSI protocol stack.
Abstract: We propose a packet switched platform for single chip systems which scales well to an arbitrary number of processor like resources. The platform, which we call Network-on-Chip (NOC), includes both the architecture and the design methodology. The NOC architecture is an m × n mesh of switches and resources are placed on the slots formed by the switches. We assume a direct layout of the 2-D mesh of switches and resources providing physical- and architectural-level design integration. Each switch is connected to one resource and four neighboring switches, and each resource is connected to one switch. A resource can be a processor core, memory, an FPGA, a custom hardware block or any other intellectual property (IP) block, which fits into the available slot and complies with the interface of the NOC. The NOC architecture essentially is the on-chip communication infrastructure comprising the physical layer, the data link layer and the network layer of the OSI protocol stack. We define the concept of a region, which occupies an area of any number of resources and switches. This concept allows the NOC to accommodate large resources such as large memory banks, FPGA areas, or special purpose computation resources such as high performance multi-processors. The NOC design methodology consists of two phases. In the first phase a concrete architecture is derived from the general NOC template. The concrete architecture defines the number of switches and shape of the network, the kind and shape of regions and the number and kind of resources. The second phase maps the application onto the concrete architecture to form a concrete product.
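The mesh template described in this abstract can be sketched as a small data structure: each switch owns one resource slot and links to up to four neighbouring switches. The C fragment below is a hypothetical illustration only; the 4×4 size, the field names, and the Manhattan hop-count routine are assumptions made for the sketch, not details taken from the paper.

```c
/*
 * Hypothetical sketch of the NOC template: an M x N mesh of switches,
 * each attached to one resource slot and to up to four neighbours.
 * Sizes and names are assumptions, not taken from the paper.
 */
#include <stdio.h>
#include <stdlib.h>

#define M 4                            /* mesh rows    (assumed) */
#define N 4                            /* mesh columns (assumed) */

typedef struct {
    int row, col;
    int resource_id;                   /* the single resource on this slot      */
    int north, south, east, west;      /* neighbour switch ids, -1 at mesh edge */
} noc_switch;

static noc_switch mesh[M * N];

static int sw_id(int r, int c) { return r * N + c; }

static void build_mesh(void)
{
    for (int r = 0; r < M; r++) {
        for (int c = 0; c < N; c++) {
            noc_switch *s = &mesh[sw_id(r, c)];
            s->row = r;
            s->col = c;
            s->resource_id = sw_id(r, c);               /* one resource per switch */
            s->north = (r > 0)     ? sw_id(r - 1, c) : -1;
            s->south = (r < M - 1) ? sw_id(r + 1, c) : -1;
            s->west  = (c > 0)     ? sw_id(r, c - 1) : -1;
            s->east  = (c < N - 1) ? sw_id(r, c + 1) : -1;
        }
    }
}

/* Manhattan hop count between two slots, as a stand-in for whatever
   routing a concrete NOC architecture would actually use. */
static int hops(int a, int b)
{
    return abs(mesh[a].row - mesh[b].row) + abs(mesh[a].col - mesh[b].col);
}

int main(void)
{
    build_mesh();
    printf("hops from slot 0 to slot %d: %d\n", M * N - 1, hops(0, M * N - 1));
    return 0;
}
```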

1,304 citations