Author

Mauricio Ayala-Rincón

Bio: Mauricio Ayala-Rincón is an academic researcher from the University of Brasília. The author has contributed to research in the topics of rewriting and mathematical proof. The author has an h-index of 16 and has co-authored 140 publications receiving 1,054 citations. Previous affiliations of Mauricio Ayala-Rincón include Universidade do Estado de Santa Catarina.


Papers
Book ChapterDOI
02 Jul 2007
TL;DR: A two-dimensional version of KB3D, mechanically verified in the Prototype Verification System (PVS), that computes resolution maneuvers optimal with respect to ground speed and heading changes.
Abstract: Highly accurate positioning systems and new broadcasting technology have enabled air traffic management concepts where the responsibility for aircraft separation rests with pilots rather than with air traffic controllers. The Formal Methods Group at the National Institute of Aerospace and NASA Langley Research Center has proposed and formally verified an algorithm, called KB3D, for distributed three-dimensional conflict resolution. KB3D computes resolution maneuvers where only one component of the velocity vector, i.e., ground speed, vertical speed, or heading, is modified. Although these maneuvers are simple for a pilot to implement, they are not necessarily optimal from a geometrical point of view. In general, optimal resolutions require combining all the components of the velocity vector. In this paper, we propose a two-dimensional version of KB3D, which we call KB2D, that computes resolution maneuvers that are optimal with respect to ground speed and heading changes. The algorithm has been mechanically verified in the Prototype Verification System (PVS). The verification relies on algebraic proof techniques for the manipulation of the geometrical concepts relevant to the algorithm, as well as on standard deductive techniques available in PVS.

52 citations
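
The KB2D entry above turns on a standard geometric test: a conflict is predicted when the relative trajectory of two aircraft enters a protected disk within a lookahead time. As a hedged illustration only, the Python sketch below implements that textbook 2D detection test; it is not the PVS-verified development, and the separation minimum D and lookahead T are illustrative values, not parameters taken from the paper.

```python
# Hedged sketch: 2D pairwise conflict detection of the kind KB2D's resolution
# maneuvers are checked against. D and T are illustrative, not the paper's.
import math

def horizontal_conflict(sx, sy, vx, vy, D=5.0, T=300.0):
    """Return True if the relative trajectory s + t*v enters the disk of
    radius D around the origin for some t in [0, T].

    (sx, sy): relative position of the intruder (e.g., nautical miles)
    (vx, vy): relative velocity (e.g., nautical miles per second)
    """
    a = vx * vx + vy * vy          # |v|^2
    b = 2.0 * (sx * vx + sy * vy)  # 2 s.v
    c = sx * sx + sy * sy - D * D  # |s|^2 - D^2
    if c <= 0.0:                   # already inside the protected zone
        return True
    if a == 0.0:                   # no relative motion, separation constant
        return False
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                 # closest approach stays outside radius D
        return False
    t_in = (-b - math.sqrt(disc)) / (2.0 * a)  # first crossing time
    return 0.0 <= t_in <= T

# Intruder 30 nm ahead, closing at 0.1 nm/s: loss of separation at t = 250 s.
print(horizontal_conflict(30.0, 0.0, -0.1, 0.0))   # True
print(horizontal_conflict(30.0, 20.0, -0.1, 0.0))  # False, passes abeam
```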

Proceedings ArticleDOI
13 Apr 2011
TL;DR: This work presents an architecture for computing matrix inversions on a reconfigurable FPGA with single-precision floating-point representation, whose main unit is the processing component for Gauss-Jordan elimination.
Abstract: This work presents an architecture for computing matrix inversions on a reconfigurable FPGA with single-precision floating-point representation, whose main unit is the processing component for Gauss-Jordan elimination. This component consists of smaller arithmetic units, organized to maintain the accuracy of the results without the need to internally normalize and de-normalize the floating-point data. The implementation of the operations and of the whole unit takes advantage of the resources available in the Virtex-5 FPGA. The implementation improves on the performance and resource consumption of more elaborate architectures, whose implementations are too complex for low-cost applications. Benchmarks are performed against solutions previously implemented in FPGA and in software, such as Matlab.

36 citations
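
As a rough point of reference for the entry above, the sketch below is a plain-software Gauss-Jordan inversion of the kind an FPGA core would be validated against. It is a minimal double-precision model of the textbook algorithm, not the paper's single-precision hardware datapath; the partial-pivoting choice is an assumption, since the abstract does not describe one.

```python
# Hedged sketch: a software golden model of Gauss-Jordan matrix inversion.
# Plain Python floats (double precision); pivoting strategy is an assumption.
def gauss_jordan_inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    # Augment A with the identity: [A | I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry to the diagonal.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular (or nearly so)")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the diagonal entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate the column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The right half of the augmented matrix is now the inverse of A.
    return [row[n:] for row in M]

A = [[4.0, 7.0], [2.0, 6.0]]
print(gauss_jordan_inverse(A))  # ~[[0.6, -0.7], [-0.2, 0.4]]
```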

Journal ArticleDOI
TL;DR: The validation of five dispatching algorithms for elevator systems, implemented on Spartan-3 FPGA-based boards in an integrated approach that reduces area and improves performance, is described.

35 citations
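
The entry above names five dispatching algorithms without detailing them, so nothing below is a reconstruction of the paper's designs. As a hedged illustration of what an elevator dispatching policy looks like, here is the classic nearest-car baseline heuristic:

```python
# Hedged sketch: the nearest-car heuristic, a classic baseline dispatching
# policy for illustration only; not one of the paper's five algorithms.
def nearest_car(car_floors, hall_call):
    """Assign a hall call to the car whose current floor is closest."""
    return min(range(len(car_floors)),
               key=lambda i: abs(car_floors[i] - hall_call))

# Cars idle at floors 1, 5, and 9; a call from floor 6 goes to car 1 (floor 5).
print(nearest_car([1, 5, 9], 6))  # 1
```

Production dispatchers also weigh travel direction, committed stops, and car load, which is where the five validated algorithms would differ from this baseline.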

Journal ArticleDOI
21 Nov 2010
TL;DR: A parameterizable floating-point library of arithmetic operators for FPGAs was implemented, and a tradeoff analysis of the hardware implementation was performed, enabling the designer to choose a suitable bit-width representation and its associated error, as well as the area cost, elapsed time, and power consumption of each arithmetic operator.
Abstract: Many scientific and engineering applications must perform large numbers of arithmetic operations efficiently, with high precision and a large dynamic range. Commonly, these applications are implemented on personal computers, taking advantage of floating-point arithmetic and high operating frequencies. However, most common software architectures execute instructions sequentially, following the von Neumann model, and consequently several delays are introduced in the data transfer between program memory and the Arithmetic Logic Unit (ALU). Several mobile applications must operate with high performance, in terms of both computational accuracy and execution time, as well as with low power consumption. Modern Field Programmable Gate Arrays (FPGAs) are a suitable solution for high-performance embedded applications given the flexibility of their architectures and their parallel capabilities, which allow the implementation of complex algorithms and performance improvements. This paper describes a parameterizable floating-point library of arithmetic operators for FPGAs. A general architecture was implemented for addition/subtraction and multiplication, and two different architectures, based on the Goldschmidt and Newton-Raphson algorithms, were implemented for division and square root. Additionally, a tradeoff analysis of the hardware implementation was performed, which enables the designer to choose, for general-purpose applications, a suitable bit-width representation and its associated error, as well as the area cost, elapsed time, and power consumption of each arithmetic operator. Synthesis results demonstrate the effectiveness of the implemented cores on commercial FPGAs and show that the most critical parameter is the consumption of dedicated Digital Signal Processing (DSP) slices. Simulation results compute the mean square error (MSE) and maximum absolute error, demonstrating the correctness of the implemented floating-point library and providing an experimental error analysis. The Newton-Raphson algorithm achieves MSE results similar to the Goldschmidt algorithm, operating at similar frequencies; however, the former saves more logic area and dedicated DSP blocks.

33 citations
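
The division and square-root cores above are based on Goldschmidt and Newton-Raphson iterations. As a hedged sketch, the Python below shows the Newton-Raphson recurrences in software; the seeds and iteration counts are illustrative stand-ins for the bit-width/accuracy tradeoff the paper analyzes, and hardware implementations would typically seed from a lookup table.

```python
# Hedged sketch: Newton-Raphson iterations for reciprocal and inverse square
# root, each roughly doubling the number of correct bits per step. Seeds and
# iteration counts are illustrative, not the paper's hardware parameters.
def nr_reciprocal(d, iterations=5):
    """Approximate 1/d for moderate positive d."""
    x = 0.1 if d > 1.0 else 1.0   # crude seed; hardware would use a table
    for _ in range(iterations):
        x = x * (2.0 - d * x)      # x_{k+1} = x_k * (2 - d * x_k)
    return x

def nr_inv_sqrt(a, iterations=5):
    """Approximate 1/sqrt(a); sqrt(a) then follows as a * (1/sqrt(a))."""
    x = 0.5
    for _ in range(iterations):
        x = x * (1.5 - 0.5 * a * x * x)  # x_{k+1} = x_k * (3 - a * x_k^2) / 2
    return x

print(7.0 * nr_reciprocal(7.0))  # ~1.0, so nr_reciprocal(7.0) ~ 1/7
print(2.0 * nr_inv_sqrt(2.0))    # ~1.41421, i.e. sqrt(2)
```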

Proceedings ArticleDOI
24 Mar 2010
TL;DR: A flexible FPGA implementation of a parameterizable floating-point library for computing sine, cosine, arctangent, and exponential functions using the CORDIC algorithm, which, by sharing the same resources, can operate in two modes.
Abstract: Computation of floating-point transcendental functions is important in a wide variety of scientific applications, where area cost, error, and latency are key requirements. This paper describes a flexible FPGA implementation of a parameterizable floating-point library for computing sine, cosine, arctangent, and exponential functions using the CORDIC algorithm. The novelty of the proposed architecture is that, by sharing the same resources, the CORDIC algorithm can be used in two operation modes, allowing it to compute the sine, cosine, or arctangent functions. Additionally, in the case of the exponential function, the architecture switches automatically between the CORDIC and a Taylor-series approach, which improves the precision characteristics of the circuit, specifically for small input values after the argument reduction. Synthesis of the circuits and an experimental analysis of the errors have demonstrated the correctness and effectiveness of the implemented cores and allow the designer to choose, for general-purpose applications, a suitable bit-width representation and number of iterations of the CORDIC algorithm.

32 citations
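
The core above computes sine, cosine, and arctangent with the CORDIC algorithm. As a hedged illustration, the Python sketch below runs rotation-mode CORDIC for sine and cosine; it uses plain floats, whereas the paper's point is a parameterizable hardware datapath, so the iteration count here only loosely mirrors the paper's precision knob.

```python
# Hedged sketch: rotation-mode CORDIC for sine/cosine using only shift-like
# scalings by 2^-i and additions. Illustrative software model, not the core.
import math

def cordic_sin_cos(theta, iterations=24):
    """Return (sin(theta), cos(theta)) for theta in [-pi/2, pi/2] (radians)."""
    # Precomputed micro-rotation angles atan(2^-i) and the CORDIC gain K.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0      # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return y * K, x * K                     # (sin, cos) after gain correction

s, c = cordic_sin_cos(math.pi / 6)
print(s, c)  # ~0.5, ~0.8660
```

Vectoring mode, which the shared core also supports, recovers the arctangent by driving the y component to zero instead of the residual angle z.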


Cited by
01 Apr 1997
TL;DR: This paper gives a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind, emphasizing the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity.
Abstract: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind. The emphasis is on the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity. Topics covered include an introduction to the concepts in cryptography, attacks against cryptographic systems, key use and handling, random bit generation, encryption modes, and message authentication codes. Recommendations on algorithms and further reading are given at the end of the paper. This paper should enable the reader to build, understand, and evaluate system descriptions and designs based on the cryptographic components described in the paper.

2,188 citations

Journal ArticleDOI
TL;DR: This highly successful textbook, widely regarded as the “bible of computer algebra”, gives a thorough introduction to the algorithmic basis of the mathematical engine in computer algebra systems.
Abstract: Computer algebra systems are now ubiquitous in all areas of science and engineering. This highly successful textbook, widely regarded as the “bible of computer algebra”, gives a thorough introduction to the algorithmic basis of the mathematical engine in computer algebra systems. Designed to accompany one- or two-semester courses for advanced undergraduate or graduate students in computer science or mathematics, its comprehensiveness and reliability have also made it an essential reference for professionals in the area. Special features include: detailed study of algorithms, including time analysis; implementation reports on several topics; complete proofs of the mathematical underpinnings; and a wide variety of applications (among others, in chemistry, coding theory, cryptography, computational logic, and the design of calendars and musical scales). A great deal of historical information and illustration enlivens the text. In this third edition, errors have been corrected and much of the Fast Euclidean Algorithm chapter has been renovated.

937 citations
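
The third edition's renovated chapter covers the fast (half-GCD style) Euclidean algorithm. As a hedged aside, the sketch below is only the classical extended Euclidean algorithm, the quadratic baseline that the fast variant accelerates; it is standard material, not code from the textbook.

```python
# Hedged sketch: classical extended Euclidean algorithm (the baseline the
# textbook's fast Euclidean algorithm improves asymptotically).
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    s0, s1, t0, t1 = 1, 0, 0, 1
    while b != 0:
        q, a, b = a // b, b, a % b       # one Euclidean division step
        s0, s1 = s1, s0 - q * s1         # carry the Bezout coefficients along
        t0, t1 = t1, t0 - q * t1
    return a, s0, t0

print(extended_gcd(240, 46))  # (2, -9, 47): 2 = -9*240 + 47*46
```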

Journal ArticleDOI
TL;DR: This survey presents a comprehensive investigation of PSO, including its modifications, extensions, and applications to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology.
Abstract: Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we cover advances in PSO itself, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we survey applications of PSO in the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.

836 citations
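
As a hedged companion to the survey entry above, the sketch below implements the canonical global-best PSO update (inertia plus cognitive and social pulls) that the surveyed variants modify. The parameter values w, c1, and c2 are common textbook defaults, not values prescribed by the survey.

```python
# Hedged sketch: canonical global-best PSO minimizing f over a box.
# w, c1, c2 are typical textbook defaults, assumed for illustration.
import random

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    """Minimize f over [lo, hi]^dim with basic particle swarm optimization."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best position/value
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity: inertia + cognitive pull + social pull
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:                  # update personal best
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:                 # and the swarm's global best
                    G, gbest = X[i][:], fx
    return G, gbest

sphere = lambda x: sum(v * v for v in x)
print(pso(sphere, dim=3))  # best point near the origin, value near 0
```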

Posted Content
TL;DR: An exhaustive review of the research conducted in neuromorphic computing since the inception of the term is provided to motivate further work by illuminating gaps in the field where new research is needed.
Abstract: Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast with the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history. We begin with a 35-year review of the motivations and drivers of neuromorphic computing, then look at the major research areas of the field, which we define as neuro-inspired models, algorithms and learning approaches, hardware and devices, supporting systems, and finally applications. We conclude with a broad discussion on the major research topics that need to be addressed in the coming years to see the promise of neuromorphic computing fulfilled. The goals of this work are to provide an exhaustive review of the research conducted in neuromorphic computing since the inception of the term, and to motivate further work by illuminating gaps in the field where new research is needed.

570 citations

Book ChapterDOI
01 Jan 2002
TL;DR: This chapter presents the basic concepts of term rewriting that are needed in this book and suggests several survey articles that can be consulted.
Abstract: In this chapter we will present the basic concepts of term rewriting that are needed in this book. More details on term rewriting, its applications, and related subjects can be found in the textbook of Baader and Nipkow [BN98]. Readers versed in German are also referred to the textbooks of Avenhaus [Ave95], Bündgen [Bun98], and Drosten [Dro89]. Moreover, there are several survey articles [HO80, DJ90, Klo92, Pla93] that can also be consulted.

501 citations
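
As a hedged illustration of the basic notion the chapter introduces, the sketch below rewrites first-order terms to normal form with a two-rule system encoding Peano addition. Terms are nested Python tuples; this is a toy interpreter for exposition, not code from any of the cited textbooks.

```python
# Hedged sketch: rewriting to normal form with a tiny first-order rule set.
# Terms are nested tuples like ("add", ("s", "0"), "0"); the rules encode
# Peano addition. Toy illustration only.
def rewrite_step(term, rules):
    """Apply the first matching rule at the root, else recurse into subterms."""
    for matches, build_rhs in rules:
        binding = matches(term)
        if binding is not None:
            return build_rhs(binding), True
    if isinstance(term, tuple):
        head, *args = term
        for i, arg in enumerate(args):
            new, changed = rewrite_step(arg, rules)
            if changed:
                return (head, *args[:i], new, *args[i + 1:]), True
    return term, False

def normal_form(term, rules):
    """Rewrite until no rule applies (terminates for terminating systems)."""
    changed = True
    while changed:
        term, changed = rewrite_step(term, rules)
    return term

# Rules: add(0, y) -> y   and   add(s(x), y) -> s(add(x, y)).
rules = [
    (lambda t: {"y": t[2]}
         if isinstance(t, tuple) and t[0] == "add" and t[1] == "0" else None,
     lambda b: b["y"]),
    (lambda t: {"x": t[1][1], "y": t[2]}
         if isinstance(t, tuple) and t[0] == "add"
         and isinstance(t[1], tuple) and t[1][0] == "s" else None,
     lambda b: ("s", ("add", b["x"], b["y"]))),
]

two = ("s", ("s", "0"))
print(normal_form(("add", two, two), rules))  # s(s(s(s(0)))) as nested tuples
```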