Iterative reconstruction

About: Iterative reconstruction is a research topic. Over its lifetime, 41,296 publications have been published within this topic, receiving 841,132 citations.


Papers
Journal ArticleDOI
TL;DR: A systematic reconstruction-based method for deciding the highest-order Zernike moments required in a classification problem is developed, and the superiority of Zernike moment features over regular moments and moment invariants was experimentally verified.
Abstract: The problem of rotation-, scale-, and translation-invariant recognition of images is discussed. A set of rotation-invariant features is introduced. They are the magnitudes of a set of orthogonal complex moments of the image known as Zernike moments. Scale and translation invariance are obtained by first normalizing the image with respect to these parameters using its regular geometrical moments. A systematic reconstruction-based method for deciding the highest-order Zernike moments required in a classification problem is developed. The quality of the reconstructed image is examined through its comparison to the original one. The orthogonality property of the Zernike moments, which simplifies the process of image reconstruction, makes the suggested feature selection approach practical. Features of each order can also be weighted according to their contribution to the reconstruction process. The superiority of Zernike moment features over regular moments and moment invariants was experimentally verified.

1,971 citations
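
As an illustration of the features this paper describes, here is a minimal NumPy sketch that computes the magnitudes of an image's Zernike moments over the unit disc, which are the rotation-invariant features in question. The function names and interface are choices made for this sketch, not the paper's, and the paper's scale and translation normalization via regular geometric moments is assumed to have been applied to `img` beforehand.

```python
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_{n,|m|}(rho)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_magnitudes(img, order):
    """Return |A_nm| for all valid (n, m) with n <= order.

    The image is mapped onto the unit disc and pixels outside it are
    ignored. A rotation of the image only changes the phase of each
    complex moment A_nm, so the magnitudes are rotation invariant.
    """
    h, w = img.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    feats = {}
    for n in range(order + 1):
        for m in range(-n, n + 1):
            if (n - abs(m)) % 2:          # n - |m| must be even
                continue
            basis = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
            A = (n + 1) / np.pi * np.sum(img[mask] * basis[mask])
            feats[(n, m)] = abs(A)
    return feats
```

A call such as `zernike_magnitudes(img, 8)` returns a feature dictionary; the paper's contribution is the reconstruction-based criterion for choosing that maximum order, which this sketch leaves as a parameter.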

Journal Article
TL;DR: The general principles behind all EM algorithms are discussed, the specific algorithms for emission and transmission tomography are derived in detail, and the specification of necessary physical features such as source and detector geometries is discussed.
Abstract: Two proposed likelihood models for emission and transmission image reconstruction accurately incorporate the Poisson nature of photon counting noise and a number of other relevant physical features. As in most algebraic schemes, the region to be reconstructed is divided into small pixels. For each pixel a concentration or attenuation coefficient must be estimated. In the maximum likelihood approach these parameters are estimated by maximizing the likelihood (probability of the observations). EM algorithms are iterative techniques for finding maximum likelihood estimates. In this paper we discuss the general principles behind all EM algorithms and derive in detail the specific algorithms for emission and transmission tomography. The virtues of the EM algorithms include (a) accurate incorporation of a good physical model, (b) automatic inclusion of non-negativity constraints on all parameters, (c) an excellent measure of the quality of a reconstruction, and (d) global convergence to a single vector of parameter estimates. We discuss the specification of necessary physical features such as source and detector geometries. Actual reconstructions are deferred to a later time.

1,921 citations
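
The emission-tomography algorithm the abstract derives reduces to the well-known multiplicative MLEM update; a minimal sketch, assuming a dense system matrix `A` of detection probabilities and a vector `y` of measured counts (the names and interface are chosen here for illustration):

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum-likelihood EM reconstruction for emission tomography.

    A : (n_detectors, n_pixels) system matrix; A[i, j] is the
        probability that an emission in pixel j is counted by
        detector i.
    y : (n_detectors,) vector of measured Poisson counts.
    """
    sens = A.sum(axis=0)              # per-pixel sensitivity, sum_i a_ij
    lam = np.ones(A.shape[1])         # flat, strictly positive start
    for _ in range(n_iter):
        proj = A @ lam                # expected counts under current lam
        ratio = y / np.maximum(proj, eps)
        # multiplicative update: non-negativity is preserved for free,
        # which is virtue (b) in the abstract
        lam *= (A.T @ ratio) / np.maximum(sens, eps)
    return lam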

Journal ArticleDOI
TL;DR: The co-localization coefficients can provide relevant quantitative information about the positional relation between biological objects or processes; the procedure was tested on images of real biological specimens.
Abstract: A method to measure the degree of co-localization of objects in confocal dual-colour images has been developed. This image analysis produced two coefficients that represent the fraction of co-localizing objects in each component of a dual-channel image. The generation of test objects with a Gaussian intensity distribution, at well-defined positions in both components of dual-channel images, allowed an accurate investigation of the reliability of the procedure. To do that, the co-localization coefficients were determined before degrading the image with background, cross-talk and Poisson noise. These synthesized sources of image deterioration represent sources of deterioration that must be dealt with in practical confocal imaging, namely dark current, non-specific binding and cross-reactivity of fluorescent probes, optical cross-talk and photon noise. The degraded images were restored by filtering and cross-talk correction. The co-localization coefficients of the restored images were not significantly different from those of the original undegraded images. Finally, we tested the procedure on images of real biological specimens. The results of these tests correspond with data found in the literature. We conclude that the co-localization coefficients can provide relevant quantitative information about the positional relation between biological objects or processes.

1,888 citations
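
The two coefficients come from measuring, for each channel, what fraction of its intensity lies in pixels where the other channel is present. A minimal sketch, in which simple per-channel thresholds stand in for the background, cross-talk, and noise corrections the paper applies before measurement:

```python
import numpy as np

def manders_coefficients(ch1, ch2, thr1=0.0, thr2=0.0):
    """Co-localization coefficients for a dual-channel image.

    M1 is the fraction of channel-1 intensity found in pixels where
    channel 2 is above its threshold; M2 is the converse. The
    thresholds are a stand-in for the background and cross-talk
    correction applied before measurement.
    """
    ch1 = np.asarray(ch1, dtype=float)
    ch2 = np.asarray(ch2, dtype=float)
    m1 = ch1[ch2 > thr2].sum() / ch1.sum()
    m2 = ch2[ch1 > thr1].sum() / ch2.sum()
    return m1, m2
```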

Proceedings ArticleDOI
01 Sep 2009
TL;DR: Experimental results in image denoising and demosaicking tasks with synthetic and real noise show that the proposed method outperforms the state of the art, making it possible to effectively restore raw images from digital cameras at a reasonable speed and memory cost.
Abstract: We propose in this paper to unify two different approaches to image restoration: On the one hand, learning a basis set (dictionary) adapted to sparse signal descriptions has proven to be very effective in image reconstruction and classification tasks. On the other hand, explicitly exploiting the self-similarities of natural images has led to the successful non-local means approach to image restoration. We propose simultaneous sparse coding as a framework for combining these two approaches in a natural manner. This is achieved by jointly decomposing groups of similar signals on subsets of the learned dictionary. Experimental results in image denoising and demosaicking tasks with synthetic and real noise show that the proposed method outperforms the state of the art, making it possible to effectively restore raw images from digital cameras at a reasonable speed and memory cost.

1,812 citations
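
The joint decomposition of a group of similar patches on a shared subset of dictionary atoms can be illustrated with simultaneous orthogonal matching pursuit. This toy sketch assumes a fixed dictionary with unit-norm columns and omits the dictionary learning and the rest of the denoising pipeline:

```python
import numpy as np

def somp(D, Y, n_atoms):
    """Simultaneous OMP: code a group of similar signals on a shared
    subset of dictionary atoms, enforcing joint sparsity.

    D : (dim, n_total_atoms) dictionary with unit-norm columns.
    Y : (dim, group_size) stack of similar patches, one per column.
    Returns a coefficient matrix X whose rows share a common support.
    """
    residual = Y.copy()
    support = []
    coef = np.zeros((0, Y.shape[1]))
    for _ in range(n_atoms):
        # pick the atom most correlated with the whole group at once
        scores = np.linalg.norm(D.T @ residual, axis=1)
        scores[support] = -np.inf      # never re-select a chosen atom
        support.append(int(np.argmax(scores)))
        Ds = D[:, support]
        # joint least-squares fit of the group on the current support
        coef, *_ = np.linalg.lstsq(Ds, Y, rcond=None)
        residual = Y - Ds @ coef
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[support, :] = coef
    return X
```

Selecting atoms by their correlation with the whole group at once is what couples the codes of similar patches, the "natural manner" of combination the abstract refers to.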

Journal ArticleDOI
TL;DR: An iterative algorithm based on recent work in compressive sensing is developed; it minimizes the total variation of the image subject to the constraints that the estimated projection data are within a specified tolerance of the available data and that the values of the volume image are non-negative.
Abstract: An iterative algorithm, based on recent work in compressive sensing, is developed for volume image reconstruction from a circular cone-beam scan. The algorithm minimizes the total variation (TV) of the image subject to the constraint that the estimated projection data is within a specified tolerance of the available data and that the values of the volume image are non-negative. The constraints are enforced by the use of projection onto convex sets (POCS) and the TV objective is minimized by steepest descent with an adaptive step-size. The algorithm is referred to as adaptive-steepest-descent-POCS (ASD-POCS). It appears to be robust against cone-beam artifacts, and may be particularly useful when the angular range is limited or when the angular sampling rate is low. The ASD-POCS algorithm is tested with the Defrise disk and jaw computerized phantoms. Some comparisons are performed with the POCS and expectation-maximization (EM) algorithms. Although the algorithm is presented in the context of circular cone-beam image reconstruction, it can also be applied to scanning geometries involving other x-ray source trajectories.

1,786 citations
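
A much-simplified sketch of the ASD-POCS alternation on a generic linear system A x = b: a POCS pass (one ART sweep toward data consistency plus a non-negativity clamp) followed by TV steepest descent whose step is scaled to the size of the POCS update. The helper names and parameter choices here are assumptions; the published algorithm enforces the data-tolerance constraint and adapts its parameters more carefully than this.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of a smoothed total-variation objective (rough sketch;
    boundaries are handled crudely via edge padding and np.roll)."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div                # grad TV(u) = -div(grad u / |grad u|)

def asd_pocs_sketch(A, b, shape, n_iter=20, n_tv=10, tv_scale=0.2):
    """Alternate a data-consistency POCS pass with TV steepest descent."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x_prev = x.copy()
        # POCS part: one ART sweep toward A x = b, then clamp negatives
        for i in range(A.shape[0]):
            ai = A[i]
            x += (b[i] - ai @ x) / (ai @ ai + 1e-12) * ai
        x = np.maximum(x, 0.0)
        # adapt the TV step to the size of the data-consistency update
        step = tv_scale * np.linalg.norm(x - x_prev)
        img = x.reshape(shape)
        for _ in range(n_tv):
            g = tv_gradient(img)
            gnorm = np.linalg.norm(g)
            if gnorm > 0:
                img = img - step * g / gnorm
        x = img.ravel()
    return x.reshape(shape)
```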


Network Information
Related Topics (5)
Image processing: 229.9K papers, 3.5M citations, 92% related
Pixel: 136.5K papers, 1.5M citations, 91% related
Image segmentation: 79.6K papers, 1.8M citations, 91% related
Convolutional neural network: 74.7K papers, 2M citations, 89% related
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    704
2022    1,549
2021    1,744
2020    2,051
2019    2,271
2018    2,084