Author
Sergio Carrato
Other affiliations: University of Siegen
Bio: Sergio Carrato is an academic researcher at the University of Trieste. He has contributed to research on topics including detectors and interpolation, has an h-index of 18, and has co-authored 120 publications receiving 1,579 citations. His previous affiliations include the University of Siegen.
Papers published on a yearly basis
Papers
TL;DR: The performance, relative merits and limitations of each of the approaches are comprehensively discussed and contrasted, and the related topic of camera operation recognition is also reviewed.
Abstract: Temporal video segmentation is the first step towards automatic annotation of digital video for browsing and retrieval. This article gives an overview of existing techniques for video segmentation that operate on both uncompressed and compressed video streams. The performance, relative merits and limitations of each of the approaches are comprehensively discussed and contrasted. The gradual development of the techniques, and how the uncompressed-domain methods were tailored and applied in the compressed domain, are considered. In addition to the algorithms for shot boundary detection, the related topic of camera operation recognition is also reviewed.
447 citations
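Among the uncompressed-domain techniques covered by surveys of this kind, one of the classic hard-cut detectors thresholds the histogram difference between consecutive frames. The sketch below illustrates that general idea only; it is not any specific algorithm from the paper, and the bin count and threshold are arbitrary illustrative choices:

```python
import numpy as np

def detect_cuts(frames, bins=32, thresh=0.4):
    """Flag a hard cut wherever the normalised grey-level histograms of
    two consecutive frames differ by more than `thresh` (L1 distance,
    scaled to [0, 1]). Parameters are illustrative, not from the paper."""
    cuts = []
    prev = None
    for i, f in enumerate(frames):
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        h = h / h.sum()                       # normalise for frame size
        if prev is not None:
            d = 0.5 * np.abs(h - prev).sum()  # 0 = identical, 1 = disjoint
            if d > thresh:
                cuts.append(i)
        prev = h
    return cuts
```

A histogram comparison like this is robust to small object motion (which barely changes the global histogram) but, as the survey discusses for this family of methods, it can miss cuts between shots with similar colour statistics.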
16 Sep 1996
TL;DR: A novel scheme for edge-preserving image interpolation is introduced, which is based on the use of a simple nonlinear filter which accurately reconstructs sharp edges, with superior performances with respect to other interpolation techniques.
Abstract: A novel scheme for edge-preserving image interpolation is introduced, which is based on the use of a simple nonlinear filter which accurately reconstructs sharp edges. Simulation results show the superior performances of the proposed approach with respect to other interpolation techniques.
161 citations
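The paper's exact nonlinear operator is not reproduced here, but the underlying idea of pulling an interpolated sample toward the smoother of its two sides can be sketched in one dimension. The weighting formula and the constant `k` below are assumptions for illustration only:

```python
import numpy as np

def edge_adaptive_upsample_1d(x, k=1.0):
    """Double the length of a 1-D signal. Each new sample is a weighted
    mean of its two neighbours; the weight leans toward the side with
    less local variation, so a sharp step is not smeared as it would be
    by plain linear interpolation. Simplified sketch, not the paper's filter."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    out = np.empty(2 * n - 1)
    out[::2] = x                               # keep original samples
    for i in range(n - 1):
        dl = abs(x[i] - x[max(i - 1, 0)])      # variation left of the gap
        dr = abs(x[min(i + 2, n - 1)] - x[i + 1])  # variation right of it
        w = (dr + k) / (dl + dr + 2 * k)       # lean toward the flatter side
        out[2 * i + 1] = w * x[i] + (1 - w) * x[i + 1]
    return out
```

On the sequence `[0, 1, 2, 10]`, the new sample between 1 and 2 lands below the linear midpoint 1.5, staying with the smooth left ramp and leaving the step to 10 sharp.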
TL;DR: An image coding system is proposed which, thanks to both the peculiar sample distribution and a suitably designed interpolation scheme, yields good subjective quality images at low bit rates, avoiding the annoying artefacts which are typical of block-based coding techniques such as JPEG.
Abstract: A novel irregular and nonuniform sampling scheme for image data is presented in this paper. Its main characteristics are the particular distribution of the samples along image edges and in textured areas, which is obtained following a multiresolution approach, and the low computational complexity. As an example of application, an image coding system is proposed which, thanks to both the peculiar sample distribution and a suitably designed interpolation scheme, yields good subjective quality images at low bit rates, avoiding the annoying artefacts which are typical of block-based coding techniques such as JPEG.
58 citations
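As a rough sketch of the general idea (not the paper's multiresolution scheme), one can concentrate samples where the gradient magnitude is large, add a coarse regular grid for the smooth regions, and rebuild the image by scattered-data interpolation. The sample fraction, grid step, and reconstruction method below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.interpolate import griddata

def edge_biased_sampling(img, frac=0.05, grid_step=8):
    """Keep the `frac` strongest-gradient pixels plus a coarse grid and
    the four corners, then rebuild the image by linear scattered-data
    interpolation. Illustrative sketch only, not the paper's scheme."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    n = int(frac * img.size)
    edge_idx = np.argsort(mag.ravel())[-n:]        # strongest edges
    rows, cols = np.unravel_index(edge_idx, img.shape)
    gr, gc = np.mgrid[0:img.shape[0]:grid_step, 0:img.shape[1]:grid_step]
    # corners guarantee the convex hull covers the whole image
    rows = np.concatenate([rows, gr.ravel(),
                           [0, 0, img.shape[0] - 1, img.shape[0] - 1]])
    cols = np.concatenate([cols, gc.ravel(),
                           [0, img.shape[1] - 1, 0, img.shape[1] - 1]])
    pts = np.stack([rows, cols], axis=1)
    vals = img[rows, cols]
    yi, xi = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    rec = griddata(pts, vals, (yi, xi), method='linear')
    return np.nan_to_num(rec, nan=float(img.mean()))
```

A coder built on this idea would transmit only the sample positions and values; the decoder runs the interpolation step to reconstruct the image.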
TL;DR: In this article, a nonlinear spatio-temporal filter capable of attenuating noise in image sequences without corrupting image details is presented, and the results of several simulations on real-world sequences are shown.
Abstract: A nonlinear spatio-temporal filter capable of significantly attenuating noise in image sequences without corrupting image details is presented. The characteristics of the filter are described, and the results of several simulations on real-world sequences are shown. A real-time implementation of the algorithm on a latest-generation DSP is also described. It is shown that, by suitably exploiting the computational capability of the DSP, it is possible to process CIF images at 10 frames/s.
47 citations
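The paper's operator is not reproduced here; the following is a generic motion-adaptive recursive filter that captures the same trade-off, namely strong temporal averaging in static areas and little smoothing where a large frame difference signals motion. The gain function and the constant `k` are illustrative assumptions:

```python
import numpy as np

def temporal_denoise(frames, k=10.0):
    """Motion-adaptive recursive temporal filter (generic sketch, not the
    paper's filter). Each pixel blends the incoming frame with its
    previously filtered value; the update gain g is small where the
    frame difference is small (static: average the noise away) and near
    1 where it is large (motion: follow the input, avoid smearing)."""
    frames = np.asarray(frames, dtype=float)
    out = frames[0].copy()
    results = [out.copy()]
    for f in frames[1:]:
        d = np.abs(f - out)        # per-pixel frame difference
        g = d / (d + k)            # ~0 in static areas, -> 1 on motion
        out = out + g * (f - out)
        results.append(out.copy())
    return np.stack(results)
```

Being recursive, the filter needs only one frame of memory, which is one reason this family of algorithms maps well onto a DSP.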
01 Oct 2008 - Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment
TL;DR: In this article, the authors presented a new two-dimensional detector setup, based on cross delay line technology, specifically developed for time-resolved experiments and particularly suited to work in conjunction with pump-and-probe systems.
Abstract: We present a new two-dimensional detector setup, based on cross delay line technology, specifically developed for time-resolved experiments and particularly suited to work in conjunction with pump-and-probe systems. Thanks to the particular architecture of the acquisition electronics, the detector is able to correlate each event with the time at which it occurred, in a way which preserves the picosecond time resolution of pump-and-probe techniques and, more generally, can perform time-resolved acquisition on the nanosecond or picosecond scale. The count rate of the acquisition setup, up to more than 4 Mcounts/s in time-resolved mode, exceeds the performance of the best two-dimensional detectors working in counting mode presently available on electron analysers. First experimental results, obtained both in bench tests and in UHV conditions, where the detector was mounted on an electron analyser, confirm the validity of the approach and show the potential of time-resolved acquisition applied to electron spectroscopy analysis.
46 citations
Cited by
01 Jan 1990
TL;DR: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented.
Abstract: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.
2,933 citations
TL;DR: Simulation results demonstrate that the new interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional linear interpolation.
Abstract: This paper proposes an edge-directed interpolation algorithm for natural images. The basic idea is to first estimate local covariance coefficients from a low-resolution image and then use these covariance estimates to adapt the interpolation at a higher resolution based on the geometric duality between the low-resolution covariance and the high-resolution covariance. The edge-directed property of covariance-based adaptation attributes to its capability of tuning the interpolation coefficients to match an arbitrarily oriented step edge. A hybrid approach of switching between bilinear interpolation and covariance-based adaptive interpolation is proposed to reduce the overall computational complexity. Two important applications of the new interpolation algorithm are studied: resolution enhancement of grayscale images and reconstruction of color images from CCD samples. Simulation results demonstrate that our new interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional linear interpolation.
1,933 citations
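A compact sketch of the covariance-based idea is given below: the weights relating a low-resolution pixel to its four diagonal neighbours are estimated by least squares over a local window, then reused at the finer scale (the geometric duality), with a plain average as the smooth-area fallback of the hybrid scheme. Window size, variance threshold, and the least-squares formulation are simplifications for illustration, not the paper's implementation:

```python
import numpy as np

def nedi_center(img, i, j, win=4, var_thresh=8.0):
    """Estimate the new high-resolution pixel at the centre of the 2x2
    low-resolution block with top-left corner (i, j). Smooth patches use
    a plain average (hybrid fallback); elsewhere, weights fitted on the
    low-resolution grid are reused at the finer scale. Simplified sketch."""
    img = np.asarray(img, dtype=float)
    corners = np.array([img[i, j], img[i, j + 1],
                        img[i + 1, j], img[i + 1, j + 1]])
    if corners.var() < var_thresh:          # smooth area: bilinear-style mean
        return float(corners.mean())
    # Fit: each low-res pixel in the window as a combination of its
    # four diagonal neighbours (least-squares estimate of the weights).
    rows, targets = [], []
    for r in range(max(i - win, 1), min(i + win, img.shape[0] - 1)):
        for c in range(max(j - win, 1), min(j + win, img.shape[1] - 1)):
            rows.append([img[r - 1, c - 1], img[r - 1, c + 1],
                         img[r + 1, c - 1], img[r + 1, c + 1]])
            targets.append(img[r, c])
    alpha, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    # Geometric duality: apply the same weights to the 2x2 corners.
    return float(alpha @ corners)
```

Because the weights are fitted locally, they automatically align with an arbitrarily oriented step edge passing through the window, which is what plain bilinear interpolation cannot do.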
01 Jan 2017
1,687 citations
TL;DR: This work divides the problem of detecting pedestrians from images into different processing steps, each with attached responsibilities, and separates the different proposed methods with respect to each processing stage, favoring a comparative viewpoint.
Abstract: Advanced driver assistance systems (ADASs), and particularly pedestrian protection systems (PPSs), have become an active research area aimed at improving traffic safety. The major challenge of PPSs is the development of reliable on-board pedestrian detection systems. Due to the varying appearance of pedestrians (e.g., different clothes, changing size, aspect ratio, and dynamic shape) and the unstructured environment, it is very difficult to cope with the demanded robustness of this kind of system. Two problems arising in this research area are the lack of public benchmarks and the difficulty in reproducing many of the proposed methods, which makes it difficult to compare the approaches. As a result, surveying the literature by enumerating the proposals one after another is not the most useful way to provide a comparative point of view. Accordingly, we present a more convenient strategy to survey the different approaches. We divide the problem of detecting pedestrians from images into different processing steps, each with attached responsibilities. Then, the different proposed methods are analyzed and classified with respect to each processing stage, favoring a comparative viewpoint. Finally, discussion of the important topics is presented, putting special emphasis on the future needs and challenges.
1,021 citations
01 Oct 1996
TL;DR: The self-organizing map method, which converts complex, nonlinear statistical relationships between high-dimensional data into simple geometric relationships on a low-dimensional display, can be utilized for many tasks: reduction of the amount of training data, speeding up learning, nonlinear interpolation and extrapolation, generalization, and effective compression of information for its transmission.
Abstract: The self-organizing map (SOM) method is a new, powerful software tool for the visualization of high-dimensional data. It converts complex, nonlinear statistical relationships between high-dimensional data into simple geometric relationships on a low-dimensional display. As it thereby compresses information while preserving the most important topological and metric relationships of the primary data elements on the display, it may also be thought to produce some kind of abstractions. The term self-organizing map signifies a class of mappings defined by error-theoretic considerations. In practice they result in certain unsupervised, competitive learning processes, computed by simple-looking SOM algorithms. Many industries have found the SOM-based software tools useful. The most important property of the SOM, orderliness of the input-output mapping, can be utilized for many tasks: reduction of the amount of training data, speeding up learning, nonlinear interpolation and extrapolation, generalization, and effective compression of information for its transmission.
845 citations
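The unsupervised, competitive learning process described above can be illustrated with a minimal SOM training loop: a best-matching unit is found for each random sample, and it and its map neighbours are moved toward that sample. The 1-D chain topology and the shrinking schedules for radius and learning rate below are arbitrary illustrative choices, not Kohonen's reference implementation:

```python
import numpy as np

def train_som(data, grid=8, iters=2000, seed=0):
    """Fit a 1-D chain of `grid` units to `data` (shape: samples x dims).
    Each step: pick a random sample, find the best-matching unit (BMU),
    and pull the BMU and its chain neighbours toward the sample with a
    Gaussian neighbourhood whose radius and learning rate shrink over time."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(data.min(0), data.max(0), size=(grid, data.shape[1]))
    idx = np.arange(grid)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(1))     # competitive step
        sigma = 2.0 * (1 - t / iters) + 0.5        # neighbourhood radius
        lr = 0.5 * (1 - t / iters) + 0.01          # learning rate
        h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)             # cooperative update
    return w
```

The neighbourhood function `h` is what produces the orderliness of the input-output mapping noted in the abstract: adjacent units on the chain are dragged toward similar inputs and therefore end up representing nearby regions of the data space.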