Author

Uwe Meyer-Baese

Bio: Uwe Meyer-Baese is an academic researcher from Florida State University. The author has contributed to research on topics including field-programmable gate arrays and digital signatures, has an h-index of 16, and has co-authored 91 publications receiving 1,438 citations. Previous affiliations of Uwe Meyer-Baese include Florida A&M University and the Florida A&M University – Florida State University College of Engineering.


Papers
Book
15 Sep 2001
TL;DR: This edition has a new chapter on microprocessors, new sections on special functions using MAC calls, intellectual property core design and arbitrary sampling rate converters, and over 100 new exercises.
Abstract: Field-Programmable Gate Arrays (FPGAs) are revolutionizing digital signal processing as novel FPGA families are replacing ASICs and PDSPs for front-end digital signal processing algorithms. So the efficient implementation of these algorithms is critical and is the main goal of this book. It starts with an overview of today's FPGA technology, devices, and tools for designing state-of-the-art DSP systems. A case study in the first chapter is the basis for more than 40 design examples throughout. The following chapters deal with computer arithmetic concepts, theory and the implementation of FIR and IIR filters, multirate digital signal processing systems, DFT and FFT algorithms, advanced algorithms with high future potential, and adaptive filters. Each chapter contains exercises. The Verilog source code and a glossary are given in the appendices, while the accompanying CD-ROM contains the examples in VHDL and Verilog code as well as the newest Altera "Quartus II web edition" software. This edition has a new chapter on microprocessors, new sections on special functions using MAC calls, intellectual property core design and arbitrary sampling rate converters, and over 100 new exercises.
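As a rough illustration of the MAC-based filter structures the book's design examples target (the sketch below is generic and not taken from the book or its CD-ROM; the coefficients are arbitrary), a direct-form FIR filter can be written as:

```python
# Generic sketch (not from the book): a direct-form FIR filter, i.e. the
# multiply-accumulate (MAC) structure that FPGA DSP blocks implement.
def fir_filter(samples, coeffs):
    """Convolve an input sample stream with FIR coefficients."""
    taps = [0.0] * len(coeffs)          # delay line (shift register)
    out = []
    for x in samples:
        taps = [x] + taps[:-1]          # shift in the newest sample
        acc = 0.0
        for tap, c in zip(taps, coeffs):
            acc += tap * c              # one MAC operation per tap
        out.append(acc)
    return out

# Example: 4-tap moving-average filter applied to a short ramp.
print(fir_filter([1, 2, 3, 4, 5], [0.25, 0.25, 0.25, 0.25]))
```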

615 citations

Journal ArticleDOI
TL;DR: A procedure for intellectual property protection of digital circuits called IPP@HDL is presented, which relies on hosting the bits of the digital signature within memory structures or combinational logic that are part of the system at the high level description of the design.
Abstract: In this paper, a procedure for intellectual property protection (IPP) of digital circuits called IPP@HDL is presented. Its aim is to protect the author's rights in the development and distribution of reusable modules by means of an electronic signature. The technique relies on hosting the bits of the digital signature within memory structures or combinational logic that are part of the system, at the high-level description of the design. Thus, the area of the system is not increased and the signature is difficult to change or remove without damaging the design. The technique also includes a procedure for secure signature extraction requiring minimal modifications to the system and without interfering with its normal operation. The benefits of the presented procedure are illustrated with programmable logic and cell-based application-specific integrated circuit examples with several signature lengths. These design examples show no performance degradation and a negligible area increase, while probabilistic analyses show that the proposed IPP scheme offers high resistance against attacks.
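The following toy sketch conveys the general idea of hosting signature bits in memory contents the design already contains; it is not the IPP@HDL procedure itself, and the lookup table, addresses, and signature bits are invented for illustration:

```python
# Toy sketch of the general idea (not the IPP@HDL procedure): hide
# signature bits in don't-care entries of an existing lookup table so
# no extra area is added. All values below are hypothetical.
SIGNATURE_BITS = [1, 0, 1, 1, 0, 0, 1, 0]          # example signature
DONT_CARE_ADDRS = [3, 7, 11, 12, 19, 25, 28, 30]   # LUT entries never used in normal operation

def embed(lut, sig_bits, addrs):
    """Write signature bits into don't-care LUT entries."""
    marked = list(lut)
    for bit, addr in zip(sig_bits, addrs):
        marked[addr] = bit
    return marked

def extract(lut, addrs):
    """Recover the signature by reading the reserved entries."""
    return [lut[addr] for addr in addrs]

lut = [0] * 32                                     # 32-entry LUT (functional contents omitted)
marked = embed(lut, SIGNATURE_BITS, DONT_CARE_ADDRS)
assert extract(marked, DONT_CARE_ADDRS) == SIGNATURE_BITS
```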

123 citations

Journal ArticleDOI
TL;DR: A novel customizable architecture of a neuromorphic robust optical flow (multichannel gradient model) based on reconfigurable hardware with the properties of the cortical motion pathway is presented, thus obtaining a useful framework for building future complex bioinspired real-time systems with high computational complexity.
Abstract: Motion estimation from image sequences, called optical flow, has been deeply analyzed by the scientific community. Despite the number of different models and algorithms, none of them covers all problems associated with real-world processing. This paper presents a novel customizable architecture of a neuromorphic robust optical flow (multichannel gradient model) based on reconfigurable hardware with the properties of the cortical motion pathway, thus obtaining a useful framework for building future complex bioinspired real-time systems with high computational complexity. The presented architecture is customizable and adaptable, while emulating several neuromorphic properties, such as the use of several information channels of small bit width, mirroring the nature of the brain. This paper includes resource usage and performance data, as well as a comparison with other systems. This hardware platform has many application fields in difficult environments due to its bioinspired nature and robustness properties, and it can be used as a starting point in more complex systems.
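As a much-simplified stand-in for the multichannel gradient model (the sketch below is a generic gradient-based flow estimate, not the architecture described in the paper; the patch data is synthetic), motion can be recovered from image derivatives roughly as follows:

```python
# Simplified gradient-based flow estimate for one image patch; a generic
# stand-in, far simpler than the multichannel gradient model in the paper.
import numpy as np

def gradient_flow_patch(Ix, Iy, It):
    """Least-squares flow (u, v) from spatial/temporal derivatives of a patch."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # N x 2 gradient matrix
    b = -It.ravel()                                  # brightness-constancy residual
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)     # solves A @ [u, v] ~= b
    return flow                                      # [u, v] in pixels/frame

# Example with synthetic derivatives of a 5x5 patch shifted by (1, 0):
rng = np.random.default_rng(0)
Ix = rng.standard_normal((5, 5))
Iy = rng.standard_normal((5, 5))
It = -Ix * 1.0 - Iy * 0.0          # consistent with a pure horizontal shift
print(gradient_flow_patch(Ix, Iy, It))   # approximately [1.0, 0.0]
```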

86 citations

Book ChapterDOI
02 Sep 2002
TL;DR: A new demonstration of the synergy between the residue number system (RNS) and FPL technology is presented and a new RNS-based direct digital synthesizer that does not need a scaler circuit is introduced.
Abstract: Currently, several design barriers inhibit the implementation of high-precision digital signal processing (DSP) systems with field-programmable logic (FPL) devices. A new demonstration of the synergy between the residue number system (RNS) and FPL technology is presented in this paper. The quantifiable benefits of this approach are studied in the context of a high-end communications digital receiver. A new RNS-based direct digital synthesizer (DDS) that does not need a scaler circuit is introduced. The programmable decimation FIR filter is based on the arithmetic benefits associated with Galois fields and supports tuning the IF frequency as well as its bandwidth. Results show the proposed methodology requires fewer resources than classical designs, while the throughput advantage is about 65%.
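A minimal sketch of the RNS principle the paper builds on (the moduli and tuning word below are arbitrary choices, not the paper's): a DDS-style phase accumulator kept as independent residues with carry-free, channel-wise addition, converted back to a conventional value only when needed:

```python
# Hedged illustration of residue number system (RNS) arithmetic; the
# moduli and tuning word are arbitrary, not those used in the paper.
MODULI = (7, 11, 13, 15)                 # pairwise coprime moduli; dynamic range = 15015

def to_rns(x, moduli=MODULI):
    return tuple(x % m for m in moduli)

def rns_add(a, b, moduli=MODULI):
    """Carry-free addition: each residue channel is independent."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, moduli))

def from_rns(residues, moduli=MODULI):
    """Chinese remainder theorem reconstruction."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # modular inverse of Mi mod m
    return x % M

# Phase accumulator stepping by a tuning word of 1234, three iterations:
phase = to_rns(0)
step = to_rns(1234)
for _ in range(3):
    phase = rns_add(phase, step)
print(from_rns(phase))                   # 3702
```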

68 citations

Journal ArticleDOI
TL;DR: A useful framework for building bio-inspired systems in real-time environments with reduced computational complexity is presented, and a complete quantization study of a neuromorphic robust optical flow architecture, using properties found in the cortical motion pathway, is performed.
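A minimal sketch of the kind of experiment such a quantization study involves (the bit widths and test signal below are arbitrary, not the paper's setup): quantize a signal to a fixed-point grid and measure the error introduced at each word length.

```python
# Sketch of a fixed-point quantization experiment; bit widths and test
# signal are arbitrary examples, not the paper's configuration.
import math

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

signal = [math.sin(2 * math.pi * k / 32) for k in range(32)]
for bits in (4, 6, 8):
    err = max(abs(s - quantize(s, bits)) for s in signal)
    print(f"{bits} fractional bits: max quantization error = {err:.5f}")
```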

47 citations


Cited by
01 Apr 1997
TL;DR: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind, emphasizing the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity.
Abstract: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind. The emphasis is on the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity. Topics covered include an introduction to the concepts in cryptography, attacks against cryptographic systems, key use and handling, random bit generation, encryption modes, and message authentication codes. Recommendations on algorithms and further reading are given at the end of the paper. This paper should enable the reader to build, understand, and evaluate system descriptions and designs based on the cryptographic components described in the paper.
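As an example of one topic the paper surveys, message authentication codes, the following sketch uses Python's standard hmac and hashlib modules; the key and message are placeholders, not material from the paper:

```python
# Message authentication code example using Python's standard library;
# the key and message below are placeholders.
import hmac
import hashlib

key = b"shared-secret-key"              # placeholder secret key
message = b"firmware image v1.2"        # placeholder message

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The verifier recomputes the tag and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))   # True -> message is authentic and unmodified
```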

2,188 citations

Journal ArticleDOI
TL;DR: Simulation results for a set of ISCAS-89 benchmark circuits and the advanced-encryption-standard IP core show that high levels of security can be achieved at less than 5% area and power overhead under delay constraint.
Abstract: Hardware intellectual-property (IP) cores have emerged as an integral part of modern system-on-chip (SoC) designs. However, IP vendors are facing major challenges to protect hardware IPs from IP piracy. This paper proposes a novel design methodology for hardware IP protection using netlist-level obfuscation. The proposed methodology can be integrated in the SoC design and manufacturing flow to simultaneously obfuscate and authenticate the design. Simulation results for a set of ISCAS-89 benchmark circuits and the advanced-encryption-standard IP core show that high levels of security can be achieved at less than 5% area and power overhead under delay constraint.
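A toy sketch in the spirit of key-based netlist obfuscation (the circuit, key-gate placement, and key below are invented for illustration and are not the paper's methodology): XOR "key gates" are inserted so the circuit computes the intended function only when the correct key is applied.

```python
# Toy key-based logic obfuscation sketch; circuit and key are invented.
CORRECT_KEY = (1, 1)                     # hypothetical 2-bit key

def original_circuit(a, b, c):
    return (a & b) ^ c                   # intended function

def obfuscated_circuit(a, b, c, k0, k1):
    n1 = (a & b) ^ k0                    # key gate on an internal net
    n2 = c ^ k1                          # key gate on an input path
    return n1 ^ n2

# With the correct key the obfuscated netlist matches the original...
assert all(
    obfuscated_circuit(a, b, c, *CORRECT_KEY) == original_circuit(a, b, c)
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
)
# ...while a wrong key corrupts the truth table.
assert any(
    obfuscated_circuit(a, b, c, 1, 0) != original_circuit(a, b, c)
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
)
```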

468 citations

Journal ArticleDOI
TL;DR: This work uses a first-published methodology to compare one commercial and three academic tools on a common set of C benchmarks, aiming at an in-depth evaluation in terms of performance and resource use.
Abstract: High-level synthesis (HLS) is increasingly popular for the design of high-performance and energy-efficient heterogeneous systems, shortening time-to-market and addressing today's system complexity. HLS allows designers to work at a higher level of abstraction by using a software program to specify the hardware functionality. Additionally, HLS is particularly interesting for designing field-programmable gate array circuits, where hardware implementations can be easily refined and replaced in the target device. Recent years have seen much activity in the HLS research community, with a plethora of HLS tool offerings, from both industry and academia. All these tools may have different input languages, perform different internal optimizations, and produce results of different quality, even for the very same input description. Hence, it is challenging to compare their performance and understand which is the best for the hardware to be implemented. We present a comprehensive analysis of recent HLS tools, as well as overview the areas of active interest in the HLS research community. We also present a first-published methodology to evaluate different HLS tools. We use our methodology to compare one commercial and three academic tools on a common set of C benchmarks, aiming at performing an in-depth evaluation in terms of performance and the use of resources.

433 citations

Proceedings Article
01 Jan 2006
TL;DR: In this article, a variational model for optic flow computation based on non-linearised and higher order constancy assumptions is proposed, which is also capable of dealing with large displacements.
Abstract: In this paper, we suggest a variational model for optic flow computation based on non-linearised and higher-order constancy assumptions. Besides the common grey value constancy assumption, gradient constancy, as well as the constancy of the Hessian and the Laplacian, are also proposed. Since the model strictly refrains from a linearisation of these assumptions, it is also capable of dealing with large displacements. For the minimisation of the rather complex energy functional, we present an efficient numerical scheme employing two nested fixed point iterations. Following a coarse-to-fine strategy, it turns out that this provides a theoretical foundation for so-called warping techniques, hitherto justified only on an experimental basis. Since our algorithm consists of the integration of various concepts, ranging from different constancy assumptions to numerical implementation issues, a detailed account of the effect of each of these concepts is included in the experimental section. The superior performance of the proposed method is demonstrated by significantly smaller estimation errors when compared to previous techniques. Further experiments also confirm excellent robustness under noise and insensitivity to parameter variations.
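A representative form of such an energy functional, assuming only the grey value and gradient constancy terms (the paper also adds Hessian and Laplacian constancy, and its exact weighting may differ), where Ψ denotes a robust penalizer, γ and α are weights, and w = (u, v) is the flow field:

```latex
E(u, v) = \int_{\Omega} \Psi\Big( \big|I(\mathbf{x} + \mathbf{w}) - I(\mathbf{x})\big|^{2}
          + \gamma\, \big|\nabla I(\mathbf{x} + \mathbf{w}) - \nabla I(\mathbf{x})\big|^{2} \Big)\, d\mathbf{x}
          + \alpha \int_{\Omega} \Psi\big( |\nabla u|^{2} + |\nabla v|^{2} \big)\, d\mathbf{x}
```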

426 citations

Journal ArticleDOI
15 Jul 2014
TL;DR: This tutorial will provide a review of some of the existing counterfeit detection and avoidance methods, and discuss the challenges ahead for implementing these methods, as well as the development of new detection and avoidance mechanisms.
Abstract: As the electronic component supply chain grows more complex due to globalization, with parts coming from a diverse set of suppliers, counterfeit electronics have become a major challenge that calls for immediate solutions. Currently, there are a few standards and programs available that address the testing for such counterfeit parts. However, not enough research has yet addressed the detection and avoidance of all counterfeit parts (recycled, remarked, overproduced, cloned, out-of-spec/defective, and forged documentation) currently infiltrating the electronic component supply chain. Even if they work initially, all these parts may have reduced lifetime and pose reliability risks. In this tutorial, we will provide a review of some of the existing counterfeit detection and avoidance methods. We will also discuss the challenges ahead for implementing these methods, as well as the development of new detection and avoidance mechanisms.

424 citations