
Showing papers by "Vincenzo Piuri published in 2000"


Journal ArticleDOI
TL;DR: In this paper, the authors describe the specification and design of a geographically distributed system, based on standard commercial components, that provides workbenches in networked computer laboratories for preliminary familiarization and experimentation with instrumentation and measurement procedures.
Abstract: Evolution and cost of measurement equipment, continuous training, and distance learning make it difficult to provide a complete set of updated workbenches to every student. For preliminary familiarization and experimentation with instrumentation and measurement procedures, the use of virtual equipment is often considered more than sufficient from the didactic point of view, while the hands-on approach with real instrumentation and measurement systems remains necessary to complete and refine the student's practical expertise. The creation and distribution of workbenches in networked computer laboratories therefore become attractive and convenient. This paper describes the specification and design of a geographically distributed system based on standard commercial components.
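
As an illustration of the distributed-workbench idea (not the paper's actual design), the sketch below exposes a simulated instrument over TCP using SCPI-like text commands and queries it from a remote client; the names, port, and protocol are all assumptions.

```python
# Minimal sketch of a networked "virtual workbench": a server exposes a
# simulated instrument through SCPI-like text commands over TCP, and a
# remote client queries it. Entirely illustrative.
import socket
import threading
import time
import random

def instrument_server(host="127.0.0.1", port=5025):
    # Serve SCPI-like commands, one per line, for a single client.
    srv = socket.create_server((host, port))
    conn, _ = srv.accept()
    with conn:
        for line in conn.makefile("r"):
            cmd = line.strip().upper()
            if cmd == "*IDN?":
                conn.sendall(b"SIMULATED-DMM,MODEL0,0,1.0\n")
            elif cmd == "MEAS:VOLT?":
                # A real workbench would trigger the instrument here.
                conn.sendall(f"{4.9 + 0.2 * random.random():.4f}\n".encode())
            else:
                conn.sendall(b"ERR\n")
    srv.close()

threading.Thread(target=instrument_server, daemon=True).start()
time.sleep(0.2)                    # give the server time to start listening

client = socket.create_connection(("127.0.0.1", 5025))
f = client.makefile("rw")
for query in ("*IDN?", "MEAS:VOLT?"):
    f.write(query + "\n")
    f.flush()
    print(query, "->", f.readline().strip())
client.close()
```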

167 citations


Journal ArticleDOI
TL;DR: This paper presents an efficient composite technique for track profile extraction with real-time image processing: high throughput is obtained by algorithmic prefiltering that restricts the image area containing the track profile, while high accuracy is achieved by neural reconstruction of the profile itself.
Abstract: Checking railway status is critical to guarantee high operating safety, proper maintenance schedule, and low maintenance and operating costs. This operation consists of the analysis of the rail profile and level as well as overall geometry and undulation. Traditional detection systems are based on mechanical devices in contact with the track. Innovative approaches are based on laser scanning and image analysis. This paper presents an efficient composite technique for track profile extraction with real-time image processing. High throughput is obtained by algorithmic prefiltering to restrict the image area containing the track profile, while high accuracy is achieved by neural reconstruction of the profile itself.
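
A minimal sketch of the two-stage idea, on an assumed synthetic image (not the authors' pipeline): a cheap prefilter restricts the search to a band and picks the brightest pixel per column (the laser stripe), and a small Gaussian RBF fit reconstructs a smooth profile from the noisy candidates.

```python
# Stage 1: algorithmic prefilter restricts the search area.
# Stage 2: RBF least-squares fit reconstructs a smooth profile.
import numpy as np

H, W = 200, 320
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.05, (H, W))            # background noise
true = 100 + 15 * np.sin(np.arange(W) / 40.0)  # "true" rail profile (rows)
img[np.round(true).astype(int), np.arange(W)] += 1.0  # bright laser stripe

# Prefilter: only search a +/-30-row band around a coarse estimate.
band = 30
coarse = int(np.median(np.argmax(img, axis=0)))
lo, hi = max(coarse - band, 0), min(coarse + band, H)
cand = lo + np.argmax(img[lo:hi, :], axis=0)   # candidate row per column

# Neural-style reconstruction: Gaussian RBF features, linear least squares.
centers = np.linspace(0, W - 1, 20)
sigma = (W - 1) / 19
x = np.arange(W)
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))
w, *_ = np.linalg.lstsq(Phi, cand.astype(float), rcond=None)
profile = Phi @ w                              # reconstructed smooth profile
print("max abs error vs. true profile:", np.max(np.abs(profile - true)))
```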

118 citations


Journal ArticleDOI
TL;DR: This paper presents the architecture of an efficient two-dimensional DCT/IDCT transform processor implemented as a serial-parallel systolic array that does not require transposition.
Abstract: The Discrete Cosine and Inverse Discrete Cosine Transforms are widely used tools in many digital signal and image processing applications. The complexity of these algorithms often requires dedicated hardware support to satisfy the performance requirements of hard real-time applications. This paper presents the architecture of an efficient implementation of a two-dimensional DCT/IDCT transform processor via a serial-parallel systolic array that does not require transposition.
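
For reference, the separable decomposition such processors exploit fits in a few lines: with the orthonormal DCT-II matrix C, the forward transform is Y = C X C^T and the inverse is X = C^T Y C, so the row and column passes chain without an explicit transposition of the intermediate. The block size N = 8 below is an assumption.

```python
# Separable 2-D DCT/IDCT via the orthonormal DCT-II matrix.
import numpy as np

def dct_matrix(N=8):
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)  # DC row scaling for orthonormality
    return C

C = dct_matrix(8)
X = np.random.default_rng(1).random((8, 8))
Y = C @ X @ C.T          # forward 2-D DCT (row pass, then column pass)
X_rec = C.T @ Y @ C      # inverse 2-D DCT
print("round-trip error:", np.max(np.abs(X - X_rec)))  # ~1e-15
```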

38 citations


Proceedings ArticleDOI
01 May 2000
TL;DR: HRBF networks are reframed for the regular-sampling case and compared with wavelet decomposition; results show that HRBF networks are much more tolerant to parameter errors occurring in the configuration phase, while they are more sensitive to errors occurring after the network has been configured.
Abstract: Hierarchical Radial Basis Function Networks (HRBF) have recently been introduced as a tool for adaptive multiscale image reconstruction from range data. They are based on local operations on the data and are able to give a sparse approximation. In this paper HRBF are reframed for the regular-sampling case and compared with wavelet decomposition. Results show that HRBF, thanks to their constructive approach to approximation, are much more tolerant to errors in the parameters when the errors occur in the configuration phase, while they are more sensitive to errors that occur after the network has been configured.
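
A constructive HRBF toy example on a regularly sampled 1-D signal (scales and test signal are assumptions): each layer adds Gaussians on a grid with half the previous spacing, with weights taken locally from the current residual rather than from a global optimization.

```python
# Constructive hierarchical RBF sketch: each level fits the residual left by
# the coarser levels, using a grid of Gaussians at half the previous spacing.
import numpy as np

x = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * 3 * x) + 0.3 * np.sign(x - 0.5)  # smooth part + edge

recon = np.zeros_like(signal)
for level in range(5):                       # grid spacing halves at each level
    n_centers = 4 * 2 ** level
    centers = np.linspace(0, 1, n_centers)
    sigma = 1.0 / n_centers
    residual = signal - recon
    # Constructive step: weight = residual at the sample nearest each center.
    idx = np.clip(np.round(centers * (len(x) - 1)).astype(int), 0, len(x) - 1)
    w = residual[idx]
    G = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma ** 2))
    recon += (G @ w) / G.sum(axis=1)         # normalized RBF layer
    rms = np.sqrt(np.mean((signal - recon) ** 2))
    print(f"level {level}: RMS residual = {rms:.4f}")
```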

5 citations


Proceedings ArticleDOI
25 Oct 2000
TL;DR: This paper presents a low-cost approach to concurrent error detection in a high-performance CORDIC processor based on a conditional-sum addition scheme; the specific characteristics of the CORDIC computation allow fault detection with only a small increase in circuit complexity and latency.
Abstract: This paper presents a low-cost approach to concurrent error detection in a high-performance CORDIC processor based on a conditional-sum scheme. The specific characteristics of the CORDIC computation and the processor allow fault detection with only a small increase in circuit complexity and latency. The detection scheme is based on the use of AN codes for the arithmetic part and on duplication of the rotation-direction generators. Granular pipelining has been applied to provide a variety of performance tradeoffs, all with the same fault-detection capabilities.
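
A fixed-point sketch of the two mechanisms (word lengths and A = 3 are assumptions, and this is a software analogy, not the paper's circuit): the angle accumulator, which only adds constants, is AN-coded so a faulty addition breaks divisibility by A, while the rotation-direction generator is duplicated and compared.

```python
# Rotation-mode CORDIC with two concurrent checks: an AN-coded angle
# accumulator (all constants pre-multiplied by A = 3) and a duplicated,
# compared rotation-direction generator.
import math

FRAC = 16                                   # fixed-point fractional bits
A = 3                                       # AN-code multiplier
ATAN = [round(math.atan(2.0 ** -i) * (1 << FRAC)) * A for i in range(16)]
K = 1.0
for i in range(16):
    K *= math.sqrt(1 + 2.0 ** (-2 * i))     # CORDIC gain

def cordic_sin_cos(theta):
    x = round((1.0 / K) * (1 << FRAC))      # pre-scaled by 1/K
    y = 0
    z = round(theta * (1 << FRAC)) * A      # AN-coded angle accumulator
    z_dup = z                               # duplicate direction-generator state
    for i in range(16):
        d = 1 if z >= 0 else -1
        d_dup = 1 if z_dup >= 0 else -1
        assert d == d_dup, "direction generators disagree: fault detected"
        x, y = x - d * (y >> i), y + d * (x >> i)
        z -= d * ATAN[i]
        z_dup -= d_dup * ATAN[i]
        assert z % A == 0, "AN-code check failed: fault detected"
    return x / (1 << FRAC), y / (1 << FRAC)

c, s = cordic_sin_cos(0.7)
print(c - math.cos(0.7), s - math.sin(0.7))  # both ~1e-4
```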

4 citations


Proceedings ArticleDOI
01 May 2000
TL;DR: Parametric results show that the LYSFT dramatically outperforms RT and RST in improving the quality level; a cost model relating the defect level to the number of tests performed is proposed, and the LYSFT is shown to be the most cost-effective strategy.
Abstract: In this paper, approaches to testing and evaluating the quality level (QL) of multichip module (MCM) instrumentation are proposed. An MCM is generally composed of a number of sets (or strata) of chips with different known-good yields (KGYs); it is said to be stratified because each stratum is procured from a separate manufacturer. A stratified technique is proposed for testing MCM instrumentation in the presence of uneven KGYs across strata. This approach, referred to as lowest yield-stratum first testing (LYSFT), exploits the unevenness of the KGY between strata to order the testing of the chips and improve the QL. For comparison purposes, exhaustive testing (ET), random testing (RT) and random stratified testing (RST) are also evaluated. A Markov-chain model is used to analyze the quality levels of these testing approaches and is solved analytically in O(SN^3) for the LYSFT (where S is the number of strata and N is the number of chips in the MCM). A cost model is proposed as a figure of merit relating the defect level to the number of tests performed. Parametric results show that the LYSFT dramatically outperforms RT and RST in improving the quality level. A considerable reduction in tests can be achieved by the LYSFT at a very small loss in quality level compared with ET. Based on the proposed cost model, the LYSFT is also shown to be the most cost-effective strategy.
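
A back-of-the-envelope sketch of why testing the lowest-yield stratum first pays off (strata sizes, KGY values, and the test budget are assumptions): with a budget of T tests, the quality level of the untested remainder is the product of its yields, which is maximized by spending the budget on the lowest-KGY chips.

```python
# Compare the quality level achieved by LYSFT-style ordering vs. random
# testing under the same test budget.
import random

strata = {"A": (10, 0.99), "B": (10, 0.95), "C": (10, 0.80)}  # (chips, KGY)
chips = [kgy for _, (n, kgy) in strata.items() for _ in range(n)]

def quality_level(tested_idx):
    """QL = probability that every untested chip is good."""
    ql = 1.0
    for i, y in enumerate(chips):
        if i not in tested_idx:
            ql *= y
    return ql

T = 12                                       # test budget
lysft = set(sorted(range(len(chips)), key=lambda i: chips[i])[:T])
rand = set(random.Random(0).sample(range(len(chips)), T))
print(f"LYSFT  QL: {quality_level(lysft):.4f}")  # tests all of stratum C first
print(f"Random QL: {quality_level(rand):.4f}")
```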

4 citations


Proceedings ArticleDOI
01 May 2000
TL;DR: This paper presents a methodology for selecting the best composite model for an embedded system in a constrained environment; the envisaged constraints are computational complexity and latency, which address SW and HW realizations, respectively.
Abstract: The main goal when designing an embedded system is performance maximization. Nevertheless, physical constraints such as silicon area and/or power consumption take an active part in the system design by limiting, most of the time, the solution space and hence reducing the system performance. In this paper we present a methodology for selecting the best composite model for an embedded system in a constrained environment. The envisaged constraints are computational complexity and latency, which address SW and HW realizations, respectively. It is assumed that the algorithm to be implemented in the embedded system is not given and must be constructed by relying on some (input, output) examples. Models considered for the system identification phase can be linear (e.g., ARMAX), nonlinear (e.g., neural-based models) or composite (a suitable mix of linear and nonlinear models). The best solution is then selected from the candidates to optimally satisfy the application requirements.
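
A toy version of the selection loop (the models, cost metric, and budget are assumptions; polynomial fits stand in for the paper's ARMAX/neural candidates): identify a few candidates from the examples, estimate each one's multiply-accumulate cost per sample as a stand-in for the SW/HW constraints, and keep the most accurate candidate within budget.

```python
# Constrained model selection from (input, output) examples.
import numpy as np

rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 500)
y = np.tanh(1.5 * u) + 0.05 * rng.normal(size=500)  # unknown system + noise

def fit_poly(deg):
    coef = np.polyfit(u, y, deg)
    err = np.sqrt(np.mean((np.polyval(coef, u) - y) ** 2))
    return err, deg                          # ~deg MACs per sample (Horner)

# Candidates: a linear model and two nonlinear (polynomial) models.
candidates = {f"poly-{d}": fit_poly(d) for d in (1, 3, 7)}

BUDGET = 5                                   # max MACs per sample
feasible = {k: v for k, v in candidates.items() if v[1] <= BUDGET}
best = min(feasible, key=lambda k: feasible[k][0])
for k, (err, macs) in candidates.items():
    print(f"{k}: rmse={err:.4f}, macs={macs}, feasible={macs <= BUDGET}")
print("selected:", best)
```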

3 citations


Proceedings ArticleDOI
01 Jul 2000
TL;DR: The paper addresses the issues related to the integration of soft-computing paradigms within conventional development environments for embedded systems and carries out the analysis at a behavioural abstraction level.
Abstract: There is an ever-increasing use of embedded systems; fast prototyping, time to market and severe implementation constraints must be faced to provide an effective, low-cost solution for a given application. To this end, several algorithmic formalisms are available to describe and validate complex systems at a behavioural level in order to minimise development costs and facilitate the integration of design and implementation constraints. Unfortunately, a soft-computing paradigm cannot be directly manipulated by conventional development environments for embedded systems unless an algorithmic description is available. In general, such a description is the result of a training procedure which, once the most suitable soft-computing paradigm has been selected, configures it. The paper addresses the issues related to the integration of soft-computing paradigms within conventional development environments for embedded systems. The analysis is carried out at a behavioural abstraction level.
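
The "algorithmic description after training" step can be sketched as follows (entirely illustrative, not the paper's flow): a tiny model is configured by least squares and then emitted as a plain arithmetic function, the kind of behavioural-level description a conventional design flow can manipulate.

```python
# Configure a tiny model from examples, then emit it as a C-like source
# string suitable for a conventional behavioural design environment.
import numpy as np

x = np.linspace(-1, 1, 50)
t = x ** 2                                   # toy target function
Phi = np.vstack([np.ones_like(x), x, x * x]).T
w = np.linalg.lstsq(Phi, t, rcond=None)[0]   # "training" = least squares

c_source = (
    "float model(float x) {\n"
    f"    return {w[0]:.6f}f + {w[1]:.6f}f * x + {w[2]:.6f}f * x * x;\n"
    "}\n"
)
print(c_source)                              # behavioural-level description
```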

1 citation


Proceedings ArticleDOI
25 Oct 2000
TL;DR: The paper deals with the design of concurrently (on-line) self-testing control units of the microprogram type; the approach can easily be extended to cover other control-unit architectures.
Abstract: The paper deals with the design of concurrently (on-line) self-testing control units. Control units of the microprogram type are considered; at the same time, the material can easily be extended to cover other control-unit architectures.
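
One classic concurrent-checking scheme for such units can be sketched as follows (the word format and micro-program are assumptions, not the paper's design): each microinstruction word in the control store carries a parity bit that the sequencer verifies on every fetch, flagging faults on-line.

```python
# Parity-protected control store: a fault in a stored microinstruction is
# detected concurrently, at the moment the word is fetched.
def parity(word):
    return bin(word).count("1") & 1

# 8-bit microinstruction payloads; bit 8 stores even parity over the payload.
MICROCODE = [op | (parity(op) << 8) for op in (0x1A, 0x2B, 0x3C, 0x0F)]

def fetch(addr):
    word = MICROCODE[addr]
    payload, p = word & 0xFF, (word >> 8) & 1
    if parity(payload) != p:
        raise RuntimeError(f"control-store fault detected at address {addr}")
    return payload

for a in range(len(MICROCODE)):
    print(f"addr {a}: microinstruction 0x{fetch(a):02X}")
```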

1 citation