scispace - formally typeset
Author

Anna Tonazzini

Bio: Anna Tonazzini is an academic researcher at the National Research Council. She has contributed to research on image restoration and blind signal separation, has an h-index of 23, and has co-authored 98 publications receiving 1564 citations. Her previous affiliations include the International School for Advanced Studies and the Istituto di Scienza e Tecnologie dell'Informazione.


Papers
Journal Article (DOI)
M. Bramanti, Emanuele Salerno, Anna Tonazzini, S. Pasini, A. Gray
TL;DR: An acoustic pyrometry method reconstructs temperature maps inside power plant boilers from the times-of-flight of acoustic waves along straight paths in a cross-section of the boiler; via an integral relationship, these times depend on the temperature of the gaseous medium along the paths.
Abstract: The paper presents an acoustic pyrometry method for the reconstruction of temperature maps inside power plant boilers. It is based on measuring times-of-flight of acoustic waves along a number of straight paths in a cross-section of the boiler; via an integral relationship, these times depend on the temperature of the gaseous medium along the paths. On this basis, 2D temperature maps can be reconstructed using suitable inversion techniques. The structure of a particular system for the measurement of the times-of-flight is described, and two classes of reconstruction algorithms are presented. The algorithms proposed have been applied to both simulated and experimental data measured in power plants of the Italian National Electricity Board (ENEL). The results obtained appear fairly satisfactory, considering the small data sets that it was possible to acquire in the tested boilers.

148 citations
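The integral relationship between time-of-flight and temperature reduces to a closed form when the temperature is uniform along a path. A minimal sketch in Python (the gas constants and the uniform-temperature assumption are illustrative, not taken from the paper):

```python
import math

GAMMA = 1.4   # adiabatic index (diatomic-gas assumption, illustrative)
R = 8.314     # universal gas constant, J/(mol*K)
M = 0.029     # molar mass of the gas mixture, kg/mol (air-like, assumed)

def speed_of_sound(temp_k):
    """Speed of sound c = sqrt(gamma*R*T/M) in an ideal gas at T kelvin."""
    return math.sqrt(GAMMA * R * temp_k / M)

def path_temperature(length_m, tof_s):
    """Mean temperature along a straight path, inverted from one
    time-of-flight: T = (M / (gamma*R)) * (L / t)**2."""
    c = length_m / tof_s
    return M * c * c / (GAMMA * R)

# Round trip: simulate the flight time across a 10 m path at 1200 K,
# then recover the temperature from that time-of-flight.
tof = 10.0 / speed_of_sound(1200.0)
print(round(path_temperature(10.0, tof)))  # -> 1200
```

Reconstructing a full 2D map, as in the paper, requires many such paths and a tomographic inversion step, since each time-of-flight only constrains a line integral along its path.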

Journal Article (DOI)
TL;DR: A new model for bleed-through in grayscale document images is proposed, based on the availability of the recto and verso pages, and it is shown that blind source separation can be successfully applied in this case too.
Abstract: Ancient documents are usually degraded by the presence of strong background artifacts. These are often caused by the so-called bleed-through effect, a pattern that interferes with the main text due to seeping of ink from the reverse side. A similar effect, called show-through and due to the nonperfect opacity of the paper, may appear in scans of even modern, well-preserved documents. These degradations must be removed to improve human or automatic readability. For this purpose, when a color scan of the document is available, we have shown that a simplified linear pattern overlapping model allows us to use very fast blind source separation techniques. This approach, however, cannot be applied to grayscale scans. This is a serious limitation, since many collections in our libraries and archives are now only available as grayscale scans or microfilms. We propose here a new model for bleed-through in grayscale document images, based on the availability of the recto and verso pages, and show that blind source separation can be successfully applied in this case too. Some experiments with real ancient documents are presented and described.

131 citations
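The simplified linear overlap model referred to above can be sketched with a 2x2 mixing matrix acting on the recto and verso layers. In this toy version the mixing coefficients and the flattened "images" are invented, and the matrix is inverted directly rather than estimated blindly as in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
recto = rng.uniform(0.0, 1.0, size=100)   # clean recto text layer (toy data)
verso = rng.uniform(0.0, 1.0, size=100)   # clean verso text layer (toy data)

A = np.array([[1.0, 0.3],    # observed recto = recto + 0.3 * verso
              [0.25, 1.0]])  # observed verso = 0.25 * recto + verso
observed = A @ np.vstack([recto, verso])

# Once the mixing matrix is known (or blindly estimated), the source
# layers come back by solving the linear system.
restored = np.linalg.solve(A, observed)
print(bool(np.allclose(restored, np.vstack([recto, verso]))))  # -> True
```

The blind setting differs precisely in that A is unknown and must be inferred from the statistics of the observed images.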

Journal Article (DOI)
TL;DR: A novel approach to restoring digital document images, viewing the problem as one of separating overlapped texts and then reformulating it as a blind source separation problem, approached through independent component analysis techniques, which have the advantage that no models are required for the background.
Abstract: We propose a novel approach to restoring digital document images, with the aim of improving text legibility and OCR performance. These are often compromised by the presence of artifacts in the background, derived from many kinds of degradations, such as spots, underwritings, and show-through or bleed-through effects. So far, background removal techniques have been based on local, adaptive filters and morphological-structural operators to cope with frequent low-contrast situations. For the specific problem of bleed-through/show-through, most work has been based on the comparison between the front and back pages. This, however, requires a preliminary registration of the two images. Our approach is based on viewing the problem as one of separating overlapped texts and then reformulating it as a blind source separation problem, approached through independent component analysis techniques. These methods have the advantage that no models are required for the background. In addition, we use the spectral components of the image at different bands, so that there is no need for registration. Examples of bleed-through cancellation and recovery of underwriting from palimpsests are provided.

110 citations

Journal Article (DOI)
TL;DR: In this paper, an independent component analysis (ICA) algorithm is proposed to separate signals of different origin in sky maps at several frequencies. But it works without prior assumptions on either the frequency dependence or the angular power spectrum of the various signals; rather, it learns directly from the input data how to identify the statistically independent components, on the assumption that all but one of the components have non-Gaussian distributions.
Abstract: We implement an independent component analysis (ICA) algorithm to separate signals of different origin in sky maps at several frequencies. Owing to its self-organizing capability, it works without prior assumptions on either the frequency dependence or the angular power spectrum of the various signals; rather, it learns directly from the input data how to identify the statistically independent components, on the assumption that all but, at most, one of the components have non-Gaussian distributions. We have applied the ICA algorithm to simulated patches of the sky at the four frequencies (30, 44, 70 and 100 GHz) used by the Low Frequency Instrument of the European Space Agency's Planck satellite. Simulations include the cosmic microwave background (CMB), the synchrotron and thermal dust emissions, and extragalactic radio sources. The effects of the angular response functions of the detectors and of instrumental noise have been ignored in this first exploratory study. The ICA algorithm reconstructs the spatial distribution of each component with rms errors of about 1 per cent for the CMB, and 10 per cent for the much weaker Galactic components. Radio sources are almost completely recovered down to a flux limit corresponding to about 0.7σCMB, where σCMB is the rms level of the CMB fluctuations. The signal recovered has equal quality on all scales larger than the pixel size. In addition, we show that for the strongest components (CMB and radio sources) the frequency scaling is recovered with per cent precision. Thus, algorithms of the type presented here appear to be very promising tools for component separation. On the other hand, we have been dealing here with a highly idealized situation. Work to include instrumental noise, the effect of different resolving powers at different frequencies and a more complete and realistic characterization of astrophysical foregrounds is in progress.

105 citations
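As a toy illustration of ICA-style separation (not the paper's specific self-organizing algorithm), one can mix two non-Gaussian signals, whiten the mixtures, and then search for the rotation that maximizes non-Gaussianity of the outputs; the sources, mixing matrix, and kurtosis contrast below are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
s1 = np.sign(rng.standard_normal(5000))   # binary source (sub-Gaussian)
s2 = rng.laplace(size=5000)               # heavy-tailed source (super-Gaussian)
S = np.vstack([s1, s2])
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S   # unknown linear mixing

# Whiten the mixtures; afterwards the remaining demixing is a rotation.
X = X - X.mean(axis=1, keepdims=True)
vals, vecs = np.linalg.eigh(np.cov(X))
Z = np.diag(vals ** -0.5) @ vecs.T @ X

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def kurt(y):
    y = (y - y.mean()) / y.std()
    return np.mean(y ** 4) - 3.0

# Grid-search the rotation angle maximizing non-Gaussianity of the outputs.
theta = max((np.deg2rad(d) for d in range(90)),
            key=lambda t: sum(abs(kurt(y)) for y in rot(t) @ Z))
Y = rot(theta) @ Z

# Each recovered component should match one source up to sign and scale.
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(bool(corr.max(axis=1).min() > 0.95))
```

The non-Gaussianity assumption quoted in the abstract is exactly what makes the rotation identifiable: for jointly Gaussian whitened data, every rotation would look the same.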

Journal Article (DOI)
TL;DR: This paper proposes an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps, and finds that a source model accounting for local autocorrelation is able to increase robustness against noise, even space variant.
Abstract: This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have been proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even space variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated, as well.

83 citations
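The Bayesian flavor of the noisy mixing model X = AS + N can be sketched with a plain i.i.d. Gaussian source prior standing in for the MRF (the paper's mean-field machinery, edge variables, and blind estimation of A are all omitted; the numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((2, 1000))          # toy sources, unit-variance prior
A = np.array([[1.0, 0.5], [0.3, 1.0]])      # mixing matrix (assumed known here)
sigma = 0.1                                  # noise standard deviation
X = A @ S + sigma * rng.standard_normal((2, 1000))

# Posterior mean of S given X under a unit-variance Gaussian prior:
#   E[S | X] = (A^T A / sigma^2 + I)^(-1) A^T X / sigma^2
W = np.linalg.inv(A.T @ A / sigma**2 + np.eye(2)) @ A.T / sigma**2
S_hat = W @ X
mse = np.mean((S_hat - S) ** 2)
print(bool(mse < 0.05))  # small reconstruction error at this noise level
```

Replacing the i.i.d. Gaussian prior with an MRF, as the paper does, couples neighboring pixels and is what makes the mean-field approximation necessary in the E-step.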


Cited by
01 Jan 1990
TL;DR: An overview is presented of the self-organizing map algorithm, on which the papers in this issue are based.
Abstract: An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article.

2,933 citations

Journal Article (DOI)
TL;DR: More than 200 applications of neural networks in image processing are categorised into a novel two-dimensional taxonomy, and the specific constraints each type of task poses to a neural-based approach are discussed in detail.
Abstract: We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hopfield neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension specifies the type of task performed by the algorithm: preprocessing, data reduction/feature extraction, segmentation, object recognition, image understanding and optimisation. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses specific constraints to a neural-based approach. These specific conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and specifically to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments.

1,100 citations

Journal Article (DOI)
Martin Rein
TL;DR: The fluid dynamic phenomena of liquid drop impact (bouncing, spreading, splashing and coalescence) are described and reviewed, and the specific conditions under which these phenomena occurred in experiments are analyzed.
Abstract: The fluid dynamic phenomena of liquid drop impact are described and reviewed. These phenomena include bouncing, spreading and splashing on solid surfaces, and bouncing, coalescence and splashing on liquid surfaces. Further, cavitation and the entrainment of gas into an impacted liquid may be observed. In order to distinguish properly between the results of different experiments different impact scenarios are discussed. The specific conditions under which the above phenomena did occur in experiments are analyzed and the characteristics of drop impact phenomena are described in detail.

1,081 citations

Journal Article (DOI)
TL;DR: In this paper, the impact of drops impinging one by one on a solid surface is studied experimentally and theoretically, and it is shown that the splashing threshold corresponds to the onset of a velocity discontinuity propagating over the liquid layer on the wall.
Abstract: The impact of drops impinging one by one on a solid surface is studied experimentally and theoretically. The impact process is observed by means of a charge-coupled-device camera, its pictures processed by computer. Low-velocity impact results in spreading and in propagation of capillary waves, whereas at higher velocities splashing (i.e. the emergence of a cloud of small secondary droplets, absent in the former case) sets in. Capillary waves are studied in some detail in separate experiments. The dynamics of the extension of liquid lamellae produced by an impact in the case of splashing is recorded. The secondary-droplet size distributions and the total volume of these droplets are measured, and the splashing threshold is found as a function of the impact parameters. The pattern of the capillary waves is predicted to be self-similar. The calculated wave profile agrees well with the experimental data. It is shown theoretically that the splashing threshold corresponds to the onset of a velocity discontinuity propagating over the liquid layer on the wall. This discontinuity shows several aspects of a shock. In an incompressible liquid such a discontinuity can only exist in the presence of a sink at its front. The latter results in the emergence of a circular crown-like sheet virtually normal to the wall and propagating with the discontinuity. It is predicted theoretically and recorded in the experiment. The crown is unstable owing to the formation of cusps at the free rim at its top edge, which results in the splashing effect. The onset velocity of splashing and the rate of propagation of the kinematic discontinuity are calculated and the theoretical results agree fairly well with the experimental data. The structure of the discontinuity is shown to match the outer solution.

767 citations

01 Jan 1983
TL;DR: The neocognitron recognizes stimulus patterns correctly without being affected by shifts in position or even by considerable distortions in shape of the stimulus patterns.
Abstract: Suggested by the structure of the visual nervous system, a new algorithm is proposed for pattern recognition. This algorithm can be realized with a multilayered network consisting of neuron-like cells. The network, “neocognitron”, is self-organized by unsupervised learning, and acquires the ability to recognize stimulus patterns according to the differences in their shapes: Any patterns which we human beings judge to be alike are also judged to be of the same category by the neocognitron. The neocognitron recognizes stimulus patterns correctly without being affected by shifts in position or even by considerable distortions in shape of the stimulus patterns.

649 citations