
Showing papers on "Logarithmic number system" published in 2001


Journal ArticleDOI
TL;DR: In this paper, the authors discuss employing alternative number systems to reduce power dissipation in portable devices and high-performance systems, focusing on two representations that differ markedly from conventional linear ones: the logarithmic number system (LNS) and the residue number system (RNS).
Abstract: The authors discuss employing alternative number systems to reduce power dissipation in portable devices and high-performance systems. They focus on two alternative number systems that are quite different from the conventional linear number representations, namely the logarithmic number system (LNS) and the residue number system (RNS). Both have recently attracted the interest of researchers for their low-power properties. The authors address aspects of the conventional arithmetic representations, the impact of logarithmic arithmetic on power dissipation, and discuss the low-power aspects of residue arithmetic.
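
For readers unfamiliar with LNS, the property underlying most of its low-power arguments is that a value x is stored as a fixed-point version of log2(x), so multiplication and division collapse into addition and subtraction of the stored words. A minimal sketch of the idea (the 16-bit fractional format below is illustrative, not taken from the paper):

```python
import math

FRAC = 16  # fractional bits of the log word (an illustrative format)

def lns_encode(x):
    # store round(log2(x) * 2**FRAC); x > 0 assumed, sign handled separately
    return round(math.log2(x) * (1 << FRAC))

def lns_decode(e):
    return 2.0 ** (e / (1 << FRAC))

def lns_mul(a, b):   # multiplication is just integer addition of the log words
    return a + b

def lns_div(a, b):   # division is integer subtraction
    return a - b

# 3.5 * 12.25 = 42.875, recovered to within the format's precision
print(lns_decode(lns_mul(lns_encode(3.5), lns_encode(12.25))))
```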

88 citations


Proceedings ArticleDOI
11 Jun 2001
TL;DR: The potential for reducing power dissipation in a digital system using the logarithmic number system (LNS) is investigated; the average number of logic transitions is shown to be reduced, compensating for the power-dissipation overhead of the unavoidable linear-to-logarithmic and logarithmic-to-linear conversions.
Abstract: The potential of reducing power dissipation in a digital system using the logarithmic number system (LNS) is investigated. To provide a quantitative measure of power savings, the equivalence of an LNS to a linear fixed-point system is initially explored. The bit assertion activity of an LNS-encoded signal is studied for both uniform and correlated Gaussian inputs. It is shown that LNS reduces the average bit assertion probability by more than 50%, in certain cases, over an equivalent linear representation. Finally, the impact of LNS on the hardware architecture and, thus, on power dissipation is discussed. It is found that the average number of logic transitions is reduced by several times for certain arithmetic operations and word lengths, thus compensating for the power-dissipation overhead due to the unavoidable linear-to-logarithmic and logarithmic-to-linear conversion.
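
The bit-activity claim is easy to probe with a short simulation. The sketch below is a rough illustration only: the 12-bit formats, the 5-integer/7-fraction split of the log word, and the uniform input are assumptions rather than the paper's experimental setup; it simply compares the average probability that a bit is asserted in a linear fixed-point encoding versus a logarithmic encoding of the same samples.

```python
import numpy as np

def bit_activity(words, bits):
    """Average probability that a bit is asserted (= 1) over all bit positions."""
    return float(np.mean([((words >> k) & 1).mean() for k in range(bits)]))

BITS, FRAC = 12, 7                                  # hypothetical 12-bit words, 7 fractional log bits
rng = np.random.default_rng(0)
x = rng.uniform(2.0**-BITS, 1.0, 100_000)           # uniform samples in (0, 1)

linear = (x * (1 << BITS)).astype(np.int64)                   # linear fixed point
lns = np.round(-np.log2(x) * (1 << FRAC)).astype(np.int64)    # -log2(x), fixed point

print("linear bit activity:", bit_activity(linear, BITS))
print("LNS bit activity   :", bit_activity(lns, BITS))
```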

76 citations


Patent
Manjirnath Chatterjee
22 Mar 2001
TL;DR: In this article, a system and method for producing an output logarithmic digital signal from an input digital signal having a plurality of bit values in which the output signal has a precision defined by a parameter is described.
Abstract: A system and method for producing an output logarithmic digital signal from an input digital signal having a plurality of bit values, in which the output logarithmic signal has a precision defined by a parameter, is described. The system (45) includes a search circuit (50), an interpolation circuit (55) coupled with the search circuit, a shift circuit (60) coupled with the interpolation circuit, and a combiner (65) that produces an output logarithmic digital signal (90) from a received search circuit output (75) and a received shift circuit output (88).
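
The abstract does not disclose the circuits' internals, but the search/interpolate/shift/combine flow is reminiscent of the classic leading-one-detection plus fractional-interpolation route to a binary logarithm. The sketch below illustrates that generic route only (it is not the patented design, and the frac_bits precision parameter is an assumption):

```python
def approx_log2(x, frac_bits=8):
    """Fixed-point approximation of log2(x) for a positive integer x.

    search       : locate the leading one (integer part of the logarithm)
    interpolate  : treat the bits below it as a fraction f in [0, 1) and use
                   the first-order approximation log2(1 + f) ~= f
    shift/combine: scale to frac_bits fractional bits and merge the two parts
    """
    k = x.bit_length() - 1                     # position of the leading one
    f = ((x - (1 << k)) << frac_bits) >> k     # scaled fractional part
    return (k << frac_bits) | f

print(approx_log2(1000) / 2**8)   # ~9.95, versus log2(1000) ~ 9.97
```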

29 citations


Proceedings ArticleDOI
11 Jun 2001
TL;DR: By adopting both generalizations of the double-base number system, large reductions in hardware complexity are shown to be achievable compared to an equivalent-precision logarithmic number system.
Abstract: A recently introduced double-base number representation has proved to be successful in improving the performance of several algorithms in cryptography and digital signal processing. The index-calculus version of this number system can be regarded as a two-dimensional extension of the classical logarithmic number system. This paper builds on previous special results by generalizing the number system both in multiple dimensions (multiple bases) and by the use of multiple digits. Adopting both generalizations, the paper shows that large reductions in hardware complexity are achievable compared to an equivalent-precision logarithmic number system.
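
As background, a double-base number system writes an integer as a sum of terms 2^a * 3^b, and the single-digit "index-calculus" form keeps only the exponent pair (a, b), which is why it behaves like a two-dimensional LNS. The greedy conversion below is a standard textbook construction, not the algorithm or hardware of this paper; it merely shows how short double-base expansions tend to be:

```python
def greedy_dbns(n):
    """Greedy double-base expansion of a positive integer n into terms 2**a * 3**b."""
    terms = []
    while n > 0:
        # pick the largest 2**a * 3**b that does not exceed n
        best = max((1 << a) * 3**b
                   for a in range(n.bit_length())
                   for b in range(20)
                   if (1 << a) * 3**b <= n)
        terms.append(best)
        n -= best
    return terms

print(greedy_dbns(2001))   # [1944, 54, 3], i.e. 2^3*3^5 + 2*3^3 + 3
```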

24 citations


Book ChapterDOI
27 Aug 2001
TL;DR: An implementation of complete RLS Lattice and Normalised RLS Lattice cores for Virtex is presented; the cores outperform (4-5 times) a standard DSP solution based on 32-bit floating-point TMS320C3x/4x 50 MHz processors.
Abstract: We present an implementation of complete RLS Lattice and Normalised RLS Lattice cores for Virtex. The cores accept 24-bit fixed-point inputs and produce a 24-bit fixed-point prediction error. Internally, the computations are based on 32-bit logarithmic arithmetic. On a Virtex XCV2000E-6, the cores occupy 22% and 27% of the slices respectively and run at 45 MHz. The cores outperform (4-5 times) the standard DSP solution based on 32-bit floating-point TMS320C3x/4x 50 MHz processors.

16 citations


Proceedings ArticleDOI
01 Jan 2001
TL;DR: The ELM device is described and its operation illustrated using an example from a class of RLS algorithms; simulation work has suggested that the logarithmic number system can deliver approximately twofold improvements in speed and accuracy.
Abstract: In contrast to all other microprocessors, which use floating-point for their real arithmetic, the European Logarithmic Microprocessor is the world's first device to use the logarithmic number system for this purpose. Simulation work has already suggested that this can deliver approximately twofold improvements in speed and accuracy. This paper describes the ELM device, and illustrates its operation using an example from a class of RLS algorithms.

13 citations


Proceedings ArticleDOI
04 Sep 2001
TL;DR: A design is given for a quadratic interpolator needed by the logarithmic number system (LNS), to minimize memory requirements and system complexity, at the expense of a slight increase in approximation error.
Abstract: A design is given for a quadratic interpolator needed by the logarithmic number system (LNS). Unlike previous LNS designs that have attempted to produce results consistently better than a floating-point representation of the same word size (32 bits), the design goal is to minimize memory requirements and system complexity, at the expense of a slight increase in approximation error. Simulation results have shown this goal causes only a modest impact on overall accuracy, but the memory savings are significant. Despite a slight increase in error compared to prior LNS implementations, on average, the error is still less than conventional number representations satisfying the IEEE-754 standard. Proposed applications for the interpolator include multimedia, signal processing, graphics and reconfigurable computing.
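
The function an LNS adder must evaluate is s(d) = log2(1 + 2^d) for d = (smaller log word) - (larger log word) <= 0; interpolators such as the one in this paper approximate s from a small table. The sketch below is a generic three-point (quadratic) Lagrange interpolator with an arbitrarily chosen table spacing, intended only to show the principle and the kind of error involved, not the paper's optimised design:

```python
import math

def s(d):                         # the LNS addition function, defined for d <= 0
    return math.log2(1.0 + 2.0**d)

H = 0.25                                                 # illustrative table spacing
TABLE = {i: s(i * H) for i in range(int(-16 / H), 1)}    # samples of s over [-16, 0]

def s_quadratic(d):
    """Quadratic (3-point Lagrange) interpolation of s(d) from the table."""
    i = max(int(-16 / H) + 1, min(-1, math.floor(d / H)))   # keep the stencil in range
    x0, x1, x2 = (i - 1) * H, i * H, (i + 1) * H
    y0, y1, y2 = TABLE[i - 1], TABLE[i], TABLE[i + 1]
    return (y0 * (d - x1) * (d - x2) / ((x0 - x1) * (x0 - x2)) +
            y1 * (d - x0) * (d - x2) / ((x1 - x0) * (x1 - x2)) +
            y2 * (d - x0) * (d - x1) / ((x2 - x0) * (x2 - x1)))

worst = max(abs(s_quadratic(-k / 512) - s(-k / 512)) for k in range(1, 16 * 512))
print(f"max interpolation error over (-16, 0): {worst:.2e}")
```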

13 citations


Proceedings ArticleDOI
06 May 2001
TL;DR: The equivalence of an LNS to a linear fixed-point system is initially explored and a related theorem is introduced; LNS is shown to reduce the average bit assertion probability by more than 60%, in certain cases, over an equivalent linear representation.
Abstract: In this paper, the logarithmic number system (LNS) is exploited to save power. The equivalence of an LNS to a linear fixed-point system is initially explored and a related theorem is introduced. Then, activity is studied for both uniform and correlated Gaussian input distributions. LNS is shown to reduce the average bit assertion probability by more than 60%, in certain cases, over an equivalent linear representation. The impact of LNS on power dissipation through architecture simplification is also discussed.

11 citations


Proceedings ArticleDOI
30 Mar 2001
TL;DR: This paper resurrects an early proposal to express complex numbers in a single "binary" representation and provides a fail-safe procedure for obtaining the quotient of two complex numbers expressed in this representation.
Abstract: Computer operations involving complex numbers, essential in such applications as digital signal processing and image processing, are usually performed in a "divide-and-conquer" approach dealing separately with the real and imaginary parts and then accumulating the results. There have been several proposals to treat complex numbers as a single unit but all seem to have floundered on the basic problem of the division process without which, of course, it is impossible to carry out all but the most basic arithmetic. This paper resurrects an early proposal to express complex numbers in a single "binary" representation and provides a fail-safe procedure for obtaining the quotient of two complex numbers expressed in this representation.
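
The abstract does not say which "single binary" representation the paper revives. One classical candidate is the base (-1 + i) system, in which every Gaussian integer has a finite expansion over the digits {0, 1}; whether or not this is the system the paper adopts, a short sketch conveys what such a representation looks like:

```python
def to_base_m1pi(a, b):
    """Encode the Gaussian integer a + b*i in base beta = -1 + i with digits {0, 1}.

    beta divides (a + b*i) exactly when a + b is even, so the next digit is (a + b) mod 2;
    dividing (a + b*i) by beta gives ((b - a)/2) + (-(a + b)/2)*i.
    """
    digits = []
    while (a, b) != (0, 0):
        d = (a + b) & 1
        a -= d
        a, b = (b - a) // 2, -(a + b) // 2
        digits.append(d)
    return "".join(map(str, reversed(digits))) or "0"

print(to_base_m1pi(2, 0))   # "1100": 2 = beta**3 + beta**2
print(to_base_m1pi(3, 4))   # the expansion of 3 + 4i
```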

9 citations


Patent
05 Feb 2001
TL;DR: A logarithmic arithmetic unit is described that computes the logarithm of floating-point data by combining a scaled exponent part, a table lookup on the leading bits of the fixed-point part, and a short division, carried to a precision decided from the exponent, that refines the result.
Abstract: A logarithmic arithmetic unit includes: a first logarithmic operation part multiplying an exponent part of floating-point data by a prescribed value; a logarithmic table memory outputting a logarithmic value corresponding to bit data expressing the digits higher than a prescribed digit of the fixed-point part of the floating-point data; a divisional precision decision part deciding divisional precision on the basis of the exponent part; a division part dividing a dividend, obtained by subtracting the bit data from the fixed-point part, by a divisor of the bit data and obtaining a result of division with a number of digits set on the basis of the divisional precision; a second logarithmic operation part obtaining the logarithmic value of the fixed-point part divided by the bit data; and a sum operation part adding the outputs from the first and second logarithmic operation parts and the logarithmic table memory to each other.
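
Stripped of its fixed-point datapath detail, the decomposition the abstract describes is log(x) = (scaled exponent) + log(leading bits of the mantissa, from a table) + log(mantissa / leading bits, a value near 1 obtained by a short division). A floating-point Python sketch of that decomposition follows (base 2 and a 4-bit leading field are illustrative choices; this is not the patented fixed-point design):

```python
import math

def log2_split(x, hi_bits=4):
    """Compute log2(x), x > 0, as exponent + table term + residual term."""
    m, e = math.frexp(x)              # x = m * 2**e with m in [0.5, 1)
    m, e = m * 2.0, e - 1             # renormalise so m is in [1, 2)
    # "bit data": the top hi_bits fractional bits of m (a truncation h <= m)
    h = math.floor(m * (1 << hi_bits)) / (1 << hi_bits)
    table_term = math.log2(h)         # in hardware: a small ROM indexed by those bits
    r = (m - h) / h                   # short division; r lies in [0, 2**-hi_bits)
    residual_term = math.log2(1.0 + r)   # log of a value near 1, cheap to approximate
    return e + table_term + residual_term

print(log2_split(1234.5), math.log2(1234.5))   # the two agree to rounding error
```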

9 citations


Proceedings ArticleDOI
W.G. Natter, B. Nowrouzian
01 Jan 2001
TL;DR: This paper is concerned with an overview of the salient features of combining the online and digit-serial arithmetic techniques for the design, development, and hardware implementation of algorithms for high-speed arithmetic operations.
Abstract: This paper is concerned with an overview of the salient features of combining the online and digit-serial arithmetic techniques for the design, development, and hardware implementation of algorithms for high-speed arithmetic operations. The online technique processes digital signals as generated and consumed by current practical analog-to-digital and digital-to-analog converters. The digit-serial technique permits a trade-off between speed and area in a corresponding hardware implementation. As online operations require redundant representations, carries do not propagate through long paths, thereby reducing the delays in hardware implementation. Moreover, the most significant digits of the result of an online operation can be fed as the inputs to other online operations after a small number of bit-serial clock cycles called latency. The area of online operations was a concern in the past, but one can now circumvent this problem by employing the digit-serial technique.
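
The digit-serial half of the combination is the easier one to picture: a W-bit word is consumed D bits per clock, so the arithmetic hardware is only D bits wide and a word completes in W/D cycles. A minimal sketch of least-significant-digit-first digit-serial addition (the online, most-significant-digit-first redundant variant that the paper combines with it is more involved and not shown; the 16-bit/4-bit parameters are illustrative):

```python
def digit_serial_add(a, b, word_bits=16, digit_bits=4):
    """Add two word_bits-wide unsigned words, digit_bits per clock cycle.

    Only a digit_bits-wide adder plus a one-bit carry register is needed;
    a result takes word_bits / digit_bits cycles (the speed/area trade-off).
    """
    mask = (1 << digit_bits) - 1
    carry, result = 0, 0
    for cycle in range(word_bits // digit_bits):
        da = (a >> (cycle * digit_bits)) & mask
        db = (b >> (cycle * digit_bits)) & mask
        s = da + db + carry
        result |= (s & mask) << (cycle * digit_bits)
        carry = s >> digit_bits
    return result & ((1 << word_bits) - 1)

print(hex(digit_serial_add(0x1234, 0x0FFF)))   # 0x2233
```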

Proceedings ArticleDOI
24 Jul 2001
TL;DR: Preliminary experiments suggest modest LNS-FPGA implementations, like the design of this paper, are more cost effective than pure software and can be as cost effective as more expensive L NS- FPGA implementations that attempt to maximise speed.
Abstract: Field Programmable Gate Arrays (FPGAs) have some difficulty with the implementation of floating-point operations. In particular, devoting the large number of slices needed by floating-point multipliers prohibits incorporating floating point into smaller, less expensive FPGAs. An alternative is the Logarithmic Number System (LNS), where multiplication and division are easy and fast. LNS also has the advantage of lower power consumption than fixed point. The problem with LNS has been the implementation of addition. There are many price/performance tradeoffs in the LNS design space between pure software and specialised high-speed hardware. This paper focuses on a compromise between these extremes. We report on a small RISC core of our own design (loosely inspired by the popular ARM processor) in which only 4 percent additional investment in FPGA resources beyond that required for the integer RISC core more than doubles the speed of LNS addition compared to a pure software approach. Our approach shares resources in the datapath of the non-LNS parts of the RISC so that the only significant cost is the decoding and control for the LNS instruction. Since adoption of LNS depends on its cost effectiveness (e.g., FLOPs/slice), we compare our design against an earlier LNS ALU implemented in a similar FPGA. Our preliminary experiments suggest modest LNS-FPGA implementations, like ours, are more cost effective than pure software and can be as cost effective as more expensive LNS-FPGA implementations that attempt to maximise speed. Thus, our LNS-RISC fits in the Virtex-300, which is not possible for a comparable design.

Dissertation
01 Jan 2001
TL;DR: This thesis describes the design and VLSI implementation of a chip performing complex multiplication and addition in the complex logarithmic number system using the new architecture, together with a Content Addressable Read Only Memory designed specifically for this work.
Abstract: High Speed Complex Multiply and Add Operation Using Complex Logarithmic Number System. Derek Yiu Chung So, Master of Applied Science, Graduate Department of Electrical and Computer Engineering, University of Toronto, 2001. Digital Signal Processing (DSP) applications are becoming increasingly important and are computationally intensive. An architecture was proposed recently which uses a high-radix redundant CORDIC algorithm to perform complex logarithmic number system arithmetic. This architecture is shown to be superior to existing arithmetic units using the floating-point number system. This thesis describes the design and VLSI implementation of a chip performing complex multiplication and addition in the complex logarithmic number system using the new architecture. It also describes the design of a Content Addressable Read Only Memory developed specifically for this work. The design is implemented in a 0.18 µm, single-poly, six-metal-layer, salicide CMOS technology and fabricated through the Canadian Microelectronics Corporation by Taiwan Semiconductor Manufacturing Company. The size of the core is 1897 µm × 870 µm, and HSPICE simulation shows the chip is able to operate at 294 MHz (1.8 V, 25 °C) with a throughput of 2.35 GFLOPS.
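
In a complex logarithmic number system a nonzero value z is held as the pair (log|z|, arg z), so complex multiplication and division reduce to two independent real additions or subtractions; the expensive parts, which the thesis attacks with a high-radix redundant CORDIC, are the conversions and complex addition. A floating-point sketch of the representation (illustrative only, not the chip's fixed-point format):

```python
import cmath, math

def clns_encode(z):
    """Complex LNS: store (log2 of the magnitude, angle in radians); z != 0 assumed."""
    return (math.log2(abs(z)), cmath.phase(z))

def clns_decode(p):
    return (2.0 ** p[0]) * cmath.exp(1j * p[1])

def clns_mul(x, y):
    # a complex multiply collapses to two independent real additions
    return (x[0] + y[0], x[1] + y[1])

a, b = clns_encode(3 + 4j), clns_encode(1 - 2j)
print(clns_decode(clns_mul(a, b)))   # ~ (11 - 2j), since (3+4j)*(1-2j) = 11 - 2j
```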

Proceedings ArticleDOI
02 Sep 2001
TL;DR: Four types of VLSI architectures for the hardware realization of the FLOS-CM algorithm are introduced and a logarithmic architecture is shown to require up to 50% less area and be 14% faster than a linear fixed-point arithmetic counterpart.
Abstract: Four types of VLSI architectures for the hardware realization of the FLOS-CM algorithm are introduced in this paper. Each architecture is appropriate for a particular environment. The FLOS-CM algorithm is found to be amenable to implementation using logarithmic arithmetic. A logarithmic architecture is shown to require up to 50% less area and be 14% faster than a linear fixed-point arithmetic counterpart. In terms of Area×Time and Area×Time² complexities, the logarithmic architecture is up to 120% better.