
Showing papers in "IEEE Transactions on Image Processing in 1992"


Journal Article•DOI•
TL;DR: A scheme for image compression that takes into account psychovisual features both in the space and frequency domains is proposed and it is shown that the wavelet transform is particularly well adapted to progressive transmission.
Abstract: A scheme for image compression that takes into account psychovisual features both in the space and frequency domains is proposed. This method involves two steps. First, a wavelet transform is used to obtain a set of biorthogonal subclasses of images: the original image is decomposed at different scales using a pyramidal algorithm architecture. The decomposition is along the vertical and horizontal directions and keeps the number of pixels required to describe the image constant. Second, according to Shannon's rate distortion theory, the wavelet coefficients are vector quantized using a multiresolution codebook. To encode the wavelet coefficients, a noise shaping bit allocation procedure which assumes that details at high resolution are less visible to the human eye is proposed. In order to allow the receiver to recognize a picture as quickly as possible at minimum cost, a progressive transmission scheme is presented. It is shown that the wavelet transform is particularly well adapted to progressive transmission.

3,925 citations
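
The pyramidal decomposition described above is easy to illustrate: one analysis level splits the image into four subbands along the vertical and horizontal directions while keeping the total number of coefficients equal to the number of pixels, and the low-pass band is decomposed again at the next scale. The NumPy sketch below uses a plain Haar filter pair purely as a stand-in for the paper's biorthogonal wavelet filters, and it omits the vector quantization and bit allocation stages.

```python
import numpy as np

def haar_analysis_2d(img):
    """One level of a separable 2-D Haar decomposition.

    Returns four subbands (LL, LH, HL, HH); together they contain exactly
    as many coefficients as the input image, mirroring the pyramidal
    decomposition in the abstract (which uses biorthogonal filters, not Haar).
    """
    img = np.asarray(img, dtype=float)
    assert img.shape[0] % 2 == 0 and img.shape[1] % 2 == 0

    # Analysis along columns (vertical direction).
    lo = (img[0::2, :] + img[1::2, :]) / np.sqrt(2.0)
    hi = (img[0::2, :] - img[1::2, :]) / np.sqrt(2.0)

    # Analysis along rows (horizontal direction).
    ll = (lo[:, 0::2] + lo[:, 1::2]) / np.sqrt(2.0)
    lh = (lo[:, 0::2] - lo[:, 1::2]) / np.sqrt(2.0)
    hl = (hi[:, 0::2] + hi[:, 1::2]) / np.sqrt(2.0)
    hh = (hi[:, 0::2] - hi[:, 1::2]) / np.sqrt(2.0)
    return ll, lh, hl, hh

def pyramid(img, levels=3):
    """Recursively decompose the LL band to build a multiresolution pyramid."""
    bands = []
    ll = img
    for _ in range(levels):
        ll, lh, hl, hh = haar_analysis_2d(ll)
        bands.append((lh, hl, hh))
    return ll, bands

if __name__ == "__main__":
    image = np.random.rand(256, 256)
    ll, bands = pyramid(image, levels=3)
    print(ll.shape, [b[0].shape for b in bands])  # (32, 32) [(128, 128), (64, 64), (32, 32)]
```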


Journal Article•DOI•
Arnaud E. Jacquin•
TL;DR: The author proposes an independent and novel approach to image coding, based on a fractal theory of iterated transformations, that relies on the assumption that image redundancy can be efficiently exploited through self-transformability on a block-wise basis and approximates an original image by a fractal image.
Abstract: The author proposes an independent and novel approach to image coding, based on a fractal theory of iterated transformations. The main characteristics of this approach are that (i) it relies on the assumption that image redundancy can be efficiently exploited through self-transformability on a block-wise basis, and (ii) it approximates an original image by a fractal image. The author refers to the approach as fractal block coding. The coding-decoding system is based on the construction, for an original image to encode, of a specific image transformation (a fractal code) which, when iterated on any initial image, produces a sequence of images that converges to a fractal approximation of the original. It is shown how to design such a system for the coding of monochrome digital images at rates in the range of 0.5-1.0 b/pixel. The fractal block coder has performance comparable to state-of-the-art vector quantizers.

1,386 citations
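
A hedged sketch of the decoding half of fractal block coding: the fractal code is modeled here as one record per range block, holding a domain-block position, a contrast scaling, and a brightness offset, and decoding simply iterates that block-wise transformation starting from an arbitrary image. The record layout, block sizes, and parameter names are illustrative assumptions, not Jacquin's actual code format.

```python
import numpy as np

def decode_fractal(code, image_shape, block=8, n_iter=10):
    """Iterate a block-wise contractive transformation until it approximately
    converges to the fractal approximation of the encoded image.

    `code` is assumed to map each range-block position (i, j) to
    (di, dj, scale, offset): the top-left corner of a (2*block x 2*block)
    domain block, a contrast scaling (|scale| < 1), and a brightness offset.
    It is assumed to contain one entry per non-overlapping range block.
    """
    img = np.zeros(image_shape)          # any initial image works
    for _ in range(n_iter):
        out = np.empty_like(img)
        for (i, j), (di, dj, scale, offset) in code.items():
            dom = img[di:di + 2 * block, dj:dj + 2 * block]
            # Shrink the domain block to range-block size by 2x2 averaging.
            dom = 0.25 * (dom[0::2, 0::2] + dom[1::2, 0::2] +
                          dom[0::2, 1::2] + dom[1::2, 1::2])
            out[i:i + block, j:j + block] = scale * dom + offset
        img = out
    return img
```

Encoding is the expensive half: for every range block it searches candidate domain blocks (and transformed versions of them) for the scaling and offset that minimize the approximation error.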


Journal Article•DOI•
TL;DR: The 2-D orthogonal wavelet transform decomposes images into coefficients that are both spatially and spectrally local; coding them hierarchically gave high to acceptable quality reconstruction of the Miss America and Lena monochrome images at a fraction of a bit per pixel.
Abstract: The 2-D orthogonal wavelet transform decomposes images into coefficients that are both spatially and spectrally local. The transformed coefficients were coded hierarchically and individually quantized in accordance with the local estimated noise sensitivity of the human visual system (HVS). The algorithm can be mapped easily onto VLSI. For the Miss America and Lena monochrome images, the technique gave high to acceptable quality reconstruction at rates of 0.3-0.2 and 0.64-0.43 bits per pixel (bpp), respectively.

857 citations


Journal Article•DOI•
TL;DR: An error analysis based on an objective mean-square-error (MSE) criterion is used to motivate regularization and two approaches for choosing the regularization parameter and estimating the noise variance are proposed.
Abstract: The application of regularization to ill-conditioned problems necessitates the choice of a regularization parameter which trades fidelity to the data with smoothness of the solution. The value of the regularization parameter depends on the variance of the noise in the data. The problem of choosing the regularization parameter and estimating the noise variance in image restoration is examined. An error analysis based on an objective mean-square-error (MSE) criterion is used to motivate regularization. Two approaches for choosing the regularization parameter and estimating the noise variance are proposed. The proposed and existing methods are compared and their relationship to linear minimum-mean-square-error filtering is examined. Experiments are presented that verify the theoretical results.

551 citations
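
To make the trade-off concrete, the sketch below restores a blurred, noisy 1-D signal with Tikhonov-regularized deconvolution in the Fourier domain and selects the regularization parameter with a simple discrepancy-style rule tied to an assumed noise variance. This is only a toy stand-in for the MSE-based selection criteria developed in the paper; the function names and the circular-convolution model are assumptions.

```python
import numpy as np

def tikhonov_restore(y, h, lam):
    """Frequency-domain Tikhonov-regularized deconvolution (identity regularizer)."""
    H = np.fft.fft(h, n=len(y))
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))

def choose_lambda(y, h, noise_var, grid=np.logspace(-6, 1, 50)):
    """Pick lambda so the residual power matches an assumed noise variance:
    a discrepancy-principle stand-in for the selection rules in the paper."""
    H = np.fft.fft(h, n=len(y))
    best, best_gap = grid[0], np.inf
    for lam in grid:
        x = tikhonov_restore(y, h, lam)
        residual = y - np.real(np.fft.ifft(np.fft.fft(x) * H))
        gap = abs(np.mean(residual ** 2) - noise_var)
        if gap < best_gap:
            best, best_gap = lam, gap
    return best
```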


Journal Article•DOI•
T. Kim•
TL;DR: Owing to the structure of SMVQ and OMVQ, simple variable length noiseless codes can achieve as much as 60% bit rate reduction over fixed-length noiseless codes.
Abstract: A class of vector quantizers with memory that are known as finite state vector quantizers (FSVQs) in the image coding framework is investigated. Two FSVQ designs, namely side match vector quantizers (SMVQs) and overlap match vector quantizers (OMVQs), are introduced. These designs take advantage of the 2-D spatial contiguity of pixel vectors as well as the high spatial correlation of pixels in typical gray-level images. SMVQ and OMVQ try to minimize the granular noise that causes visible pixel block boundaries in ordinary VQ. For 512 by 512 gray-level images, SMVQ and OMVQ can achieve communication quality reproduction at an average of 1/2 b/pixel per image frame, and acceptable quality reproduction. Because block boundaries are less visible, the perceived improvement in quality over ordinary VQ is even greater. Owing to the structure of SMVQ and OMVQ, simple variable length noiseless codes can achieve as much as 60% bit rate reduction over fixed-length noiseless codes.

319 citations
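
The side-match idea can be sketched as follows: because blocks are encoded in raster order, the already-reconstructed neighbors above and to the left of the current block constrain which codewords are plausible, so only a small "state" sub-codebook whose border pixels best match those neighbors needs to be searched. The sketch below assumes a codebook stored as a (K, block, block) NumPy array; codebook design, the overlap variant (OMVQ), and the variable-length noiseless coding are omitted.

```python
import numpy as np

def side_match_encode(img, codebook, block=4, state_size=8):
    """Toy side-match VQ encoder.

    For each block (raster order), rank codewords by how well their top row and
    left column match the bottom row of the block above and the right column of
    the block to the left, then do a full search only within the best
    `state_size` codewords.  Returns the chosen codeword indices.
    """
    H, W = img.shape
    rec = np.zeros_like(img, dtype=float)   # reconstruction used for side matching
    indices = []
    for i in range(0, H, block):
        for j in range(0, W, block):
            x = img[i:i + block, j:j + block].astype(float)

            # Side-match cost against the causal neighbors (if present).
            cost = np.zeros(len(codebook))
            if i > 0:
                top = rec[i - 1, j:j + block]
                cost += ((codebook[:, 0, :] - top) ** 2).sum(axis=1)
            if j > 0:
                left = rec[i:i + block, j - 1]
                cost += ((codebook[:, :, 0] - left) ** 2).sum(axis=1)

            state = np.argsort(cost)[:state_size]              # sub-codebook
            err = ((codebook[state] - x) ** 2).sum(axis=(1, 2))
            k = state[np.argmin(err)]
            indices.append(int(k))
            rec[i:i + block, j:j + block] = codebook[k]
    return indices
```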


Journal Article•DOI•
TL;DR: Convolution backprojection (CBP) image reconstruction has been proposed as a means of producing high-resolution synthetic-aperture radar (SAR) images by processing data directly in the polar recording format which is the conventional recording format for spotlight mode SAR.
Abstract: Convolution backprojection (CBP) image reconstruction has been proposed as a means of producing high-resolution synthetic-aperture radar (SAR) images by processing data directly in the polar recording format which is the conventional recording format for spotlight mode SAR. The CBP algorithm filters each projection as it is recorded and then backprojects the ensemble of filtered projections to create the final image in a pixel-by-pixel format. CBP reconstruction produces high-quality images by handling the recorded data directly in polar format. The CBP algorithm requires only 1-D interpolation along the filtered projections to determine the precise values that must be contributed to the backprojection summation from each projection. The algorithm is thus able to produce higher quality images by eliminating the inaccuracies of 2-D interpolation, as well as using all the data recorded in the spectral domain annular sector more effectively. The computational complexity of the CBP algorithm is O(N^3).

319 citations
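
A minimal sketch of the filter-then-backproject structure, written for the simpler parallel-beam tomography analogue: each projection is ramp-filtered in the frequency domain, and backprojection needs only 1-D interpolation (np.interp) along the filtered projection for every pixel. Spotlight-mode SAR specifics (polar-format phase history, complex data, the exact filter) are not reproduced here.

```python
import numpy as np

def convolution_backprojection(sinogram, angles_deg, n):
    """Reconstruct an n x n image from projections via filtered backprojection.

    sinogram: (num_angles, num_detectors) array of projections.
    Each projection is ramp-filtered, then backprojected using only
    1-D interpolation along the filtered projection.
    """
    num_angles, num_det = sinogram.shape

    # Ramp filter applied per projection in the frequency domain.
    ramp = np.abs(np.fft.fftfreq(num_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Pixel coordinates centered at the origin.
    xs = np.arange(n) - (n - 1) / 2.0
    X, Y = np.meshgrid(xs, xs)
    det = np.arange(num_det) - (num_det - 1) / 2.0

    image = np.zeros((n, n))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate of each pixel for this view; 1-D interpolation only.
        t = X * np.cos(theta) + Y * np.sin(theta)
        image += np.interp(t, det, proj)
    return image * np.pi / num_angles
```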


Journal Article•DOI•
TL;DR: Experiments are presented which show that GCV is capable of yielding good identification results, and a comparison of the GCV criterion with maximum-likelihood (ML) estimation shows that GCV often outperforms ML in identifying the blur and image model parameters.
Abstract: The point spread function (PSF) of a blurred image is often unknown a priori; the blur must first be identified from the degraded image data before restoring the image. Generalized cross-validation (GCV) is introduced to address the blur identification problem. The GCV criterion identifies model parameters for the blur, the image, and the regularization parameter, providing all the information necessary to restore the image. Experiments are presented which show that GCV is capable of yielding good identification results. A comparison of the GCV criterion with maximum-likelihood (ML) estimation shows that GCV often outperforms ML in identifying the blur and image model parameters.

251 citations
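
For the simplest special case (known blur, unknown regularization parameter, circulant model) the GCV criterion reduces to a ratio that can be evaluated cheaply in the Fourier domain, as sketched below. Identifying the blur and image model parameters, as in the paper, amounts to minimizing the same kind of criterion over those parameters as well; the function names here are illustrative.

```python
import numpy as np

def gcv_score(y, h, lam):
    """Generalized cross-validation score for Tikhonov-regularized deconvolution,
    evaluated in the Fourier domain (circulant blur model, identity regularizer):
    GCV(lam) = (1/n)||(I - A)y||^2 / [(1/n) tr(I - A)]^2."""
    n = len(y)
    H = np.fft.fft(h, n=n)
    Y = np.fft.fft(y)
    A = np.abs(H) ** 2 / (np.abs(H) ** 2 + lam)   # diagonal of the influence matrix
    residual_energy = np.sum(np.abs((1.0 - A) * Y) ** 2) / n   # = ||(I - A)y||^2
    denom = (np.sum(1.0 - A) / n) ** 2
    return residual_energy / (n * denom)

def pick_lambda_by_gcv(y, h, grid=np.logspace(-6, 1, 60)):
    scores = [gcv_score(y, h, lam) for lam in grid]
    return grid[int(np.argmin(scores))]
```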


Journal Article•DOI•
TL;DR: A modified Hopfield neural network model for regularized image restoration is presented, which allows negative autoconnections for each neuron and allows a neuron to have a bounded time delay to communicate with other neurons.
Abstract: A modified Hopfield neural network model for regularized image restoration is presented. The proposed network allows negative autoconnections for each neuron. A set of algorithms using the proposed neural network model is presented, with various updating modes: sequential updates; n-simultaneous updates; and partially asynchronous updates. The sequential algorithm is shown to converge to a local minimum of the energy function after a finite number of iterations. Since an algorithm which updates all n neurons simultaneously is not guaranteed to converge, a modified algorithm is presented, which is called a greedy algorithm. Although the greedy algorithm is not guaranteed to converge to a local minimum, the l_1 norm of the residual at a fixed point is bounded. A partially asynchronous algorithm is presented, which allows a neuron to have a bounded time delay to communicate with other neurons. Such an algorithm can eliminate the synchronization overhead of synchronous algorithms.

233 citations


Journal Article•DOI•
TL;DR: Three fast search routines to be used in the encoding phase of vector quantization (VQ) image compression systems are presented and show that the proposed algorithms need only 3-20% of the number of mathematical operations required by a full search.
Abstract: Three fast search routines to be used in the encoding phase of vector quantization (VQ) image compression systems are presented. These routines, which are based on geometric considerations, provide the same results as an exhaustive (or full) search. Examples show that the proposed algorithms need only 3-20% of the number of mathematical operations required by a full search and fewer than 50% of the operations required by recently proposed alternatives.

154 citations
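
One common family of geometric elimination rules, sketched below as a representative example rather than a reproduction of the paper's three routines, combines a codeword-norm ordering (using ||x - c|| >= | ||x|| - ||c|| |) with a partial-distance early exit. The routine still returns exactly the codeword a full search would find.

```python
import numpy as np

def fast_full_search(x, codebook):
    """Nearest codeword to x (identical result to a full search) with pruning.

    Codewords are visited in order of | ||c|| - ||x|| |; once
    (||c|| - ||x||)^2 exceeds the best distance found so far, no remaining
    codeword can win, since ||x - c|| >= | ||x|| - ||c|| |.
    A partial-distance check prunes inside the loop.
    """
    x = np.asarray(x, dtype=float)
    norms = np.linalg.norm(codebook, axis=1)
    xn = np.linalg.norm(x)
    order = np.argsort(np.abs(norms - xn))

    best_idx = order[0]
    best_dist = np.sum((codebook[best_idx] - x) ** 2)
    for idx in order[1:]:
        if (norms[idx] - xn) ** 2 >= best_dist:
            break                                  # all later codewords are farther in norm
        d = 0.0
        for cj, xj in zip(codebook[idx], x):       # partial-distance search
            d += (cj - xj) ** 2
            if d >= best_dist:
                break
        if d < best_dist:
            best_idx, best_dist = idx, d
    return int(best_idx)
```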


Journal Article•DOI•
TL;DR: A system model and its corresponding inversion for synthetic aperture radar (SAR) imaging are presented and it is shown that the transformed data provide samples of the spatial Fourier transform of the target's reflectivity function.
Abstract: A system model and its corresponding inversion for synthetic aperture radar (SAR) imaging are presented. The system model incorporates the spherical nature of a radar's radiation pattern at far field. The inverse method based on this model performs a spatial Fourier transform (Doppler processing) on the recorded signals with respect to the available coordinates of a translational radar (SAR) or target (inverse SAR). It is shown that the transformed data provide samples of the spatial Fourier transform of the target's reflectivity function. The inverse method can be modified to incorporate deviations of the radar's motion from its prescribed straight line path. The effects of finite aperture on resolution, reconstruction, and sampling constraints for the imaging problem are discussed.

138 citations


Journal Article•DOI•
TL;DR: Computationally efficient multiframe Wiener filtering algorithms that account for both intraframe (spatial) and interframe (temporal) correlations are proposed for restoring image sequences that are degraded by both blur and noise.
Abstract: Computationally efficient multiframe Wiener filtering algorithms that account for both intraframe (spatial) and interframe (temporal) correlations are proposed for restoring image sequences that are degraded by both blur and noise. One is a general computationally efficient multiframe filter, the cross-correlated multiframe (CCMF) Wiener filter, which directly utilizes the power and cross power spectra of only N x N matrices, where N is the number of frames used in the restoration. In certain special cases the CCMF lends itself to a closed-form solution that does not involve any matrix inversion. A special case is the motion-compensated multiframe (MCMF) filter, where each frame is assumed to be a globally shifted version of the previous frame. In this case, the interframe correlations can be implicitly accounted for using the estimated motion information. Thus the MCMF filter requires neither explicit estimation of cross correlations among the frames nor matrix inversion. Performance and robustness results are given.

Journal Article•DOI•
TL;DR: It is shown that dual polarization SAR data can yield segmentation results similar to those obtained with fully polarimetric SAR data, and the performance of the MAP segmentation technique is evaluated.
Abstract: A statistical image model is proposed for segmenting polarimetric synthetic aperture radar (SAR) data into regions of homogeneous and similar polarimetric backscatter characteristics. A model for the conditional distribution of the polarimetric complex data is combined with a Markov random field representation for the distribution of the region labels to obtain the posterior distribution. Optimal region labeling of the data is then defined as maximizing the posterior distribution of the region labels given the polarimetric SAR complex data (maximum a posteriori (MAP) estimate). Two procedures for selecting the characteristics of the regions are then discussed. Results using real multilook polarimetric SAR complex data are given to illustrate the potential of the two selection procedures and evaluate the performance of the MAP segmentation technique. It is also shown that dual polarization SAR data can yield segmentation results similar to those obtained with fully polarimetric SAR data.

Journal Article•DOI•
TL;DR: An adaptive morphological filter is constructed on the basis of the NOP and NCP that can remove any details consisting of fewer pixels than a given number N, while preserving the other details.
Abstract: Novel types of opening operator (NOP) and closing operator (NCP) are proposed. An adaptive morphological filter is then constructed on the basis of the NOP and NCP. The filter can remove any details consisting of fewer pixels than a given number N, while preserving the other details. Efficient algorithms are also developed for the implementation of the NOP and NCP.
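
For binary images, the "remove any detail with fewer than N pixels while preserving everything else" behavior can be illustrated with an area-opening built from connected components, as below. This is only a stand-in for the NOP/NCP construction, which is formulated for gray-scale morphology; scipy.ndimage supplies the component labeling.

```python
import numpy as np
from scipy import ndimage

def remove_small_details(mask, n_pixels):
    """Drop connected foreground components with fewer than n_pixels pixels
    (an area-opening).  Applying the same to the complement removes small
    holes (an area-closing)."""
    labels, num = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, num + 1))
    keep = np.zeros(num + 1, dtype=bool)
    keep[1:] = sizes >= n_pixels           # background label 0 stays False
    return keep[labels]

def adaptive_filter(mask, n_pixels):
    opened = remove_small_details(mask, n_pixels)            # remove small bright details
    closed = ~remove_small_details(~opened, n_pixels)        # remove small dark details
    return closed
```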

Journal Article•DOI•
TL;DR: Using DPCM and PCM in HDTV subband coding, it is found that QMFs have an edge over the rest of the filter sets in subband compression.
Abstract: The authors compare the subband compression capabilities of eight filter sets (consisting of linear-phase quadrature mirror filters (QMFs), perfect reconstruction filters, and nonlinear phase wavelets) at different bit rates, using a filter-based bit allocation procedure. Using DPCM and PCM in HDTV subband coding, it is found that QMFs have an edge over the rest.

Journal Article•DOI•
TL;DR: In this article, a distance transformation technique for a binary digital image using a gray-scale mathematical morphology approach is presented, which can significantly reduce the tremendous cost of global operations to that of small neighborhood operations suitable for parallel pipelined computers.
Abstract: A distance transformation technique for a binary digital image using a gray-scale mathematical morphology approach is presented. Applying well-developed decomposition properties of mathematical morphology, one can significantly reduce the tremendous cost of global operations to that of small neighborhood operations suitable for parallel pipelined computers. First, the distance transformation using mathematical morphology is developed. Then several approximations of the Euclidean distance are discussed. The decomposition of the Euclidean distance structuring element is presented. The decomposition technique employs a set of 3 by 3 gray scale morphological erosions with suitable weighted structuring elements and combines the outputs using the minimum operator. Real-valued distance transformations are considered during the processes and the result is approximated to the closest integer in the final output image.
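
Iterating small weighted 3x3 neighborhood operations and combining them with a minimum is essentially how chamfer distance transforms work, so a two-pass chamfer sketch (with the common 3/4 integer weights as one possible approximation of Euclidean distance) gives the flavor of the decomposition without reproducing the paper's gray-scale erosion formulation.

```python
import numpy as np

def chamfer_distance(mask, w_ortho=3, w_diag=4):
    """Two-pass 3x3 chamfer distance transform of a binary image.

    Distances grow from the background (mask == 0); dividing by w_ortho gives
    an approximation of the Euclidean distance, in the spirit of the
    decomposition into small weighted neighborhood operations.
    """
    INF = 10 ** 9
    h, w = mask.shape
    d = np.where(mask, INF, 0).astype(np.int64)

    # Forward pass: propagate from the top-left using the upper half of the 3x3 mask.
    for i in range(h):
        for j in range(w):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + w_ortho)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i - 1, j - 1] + w_diag)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i - 1, j + 1] + w_diag)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + w_ortho)

    # Backward pass: propagate from the bottom-right using the lower half of the mask.
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + w_ortho)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i + 1, j - 1] + w_diag)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i + 1, j + 1] + w_diag)
            if j < w - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + w_ortho)

    return d
```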

Journal Article•DOI•
TL;DR: A formulation for maximum-likelihood (ML) blur identification based on parametric modeling of the blur in the continuous spatial coordinates makes it possible to find the ML estimate of the extent of arbitrary point spread functions that admit a closed-form parametric description in the continuous coordinates.
Abstract: A formulation for maximum-likelihood (ML) blur identification based on parametric modeling of the blur in the continuous spatial coordinates is proposed. Unlike previous ML blur identification methods based on discrete spatial domain blur models, this formulation makes it possible to find the ML estimate of the extent, as well as other parameters, of arbitrary point spread functions that admit a closed-form parametric description in the continuous coordinates. Experimental results are presented for the cases of 1-D uniform motion blur, 2-D out-of-focus blur, and 2-D truncated Gaussian blur at different signal-to-noise ratios.

Journal Article•DOI•
TL;DR: The authors show that full-search entropy-constrained vector quantization of image subbands results in the best performance, but is computationally expensive.
Abstract: Vector quantization for entropy coding of image subbands is investigated. Rate distortion curves are computed with mean square error as a distortion criterion. The authors show that full-search entropy-constrained vector quantization of image subbands results in the best performance, but is computationally expensive. Lattice quantizers yield a coding efficiency almost indistinguishable from optimum full-search entropy-constrained vector quantization. Orthogonal lattice quantizers were found to perform almost as well as lattice quantizers derived from dense sphere packings. An optimum bit allocation rule based on a Lagrange multiplier formulation is applied to subband coding. Coding results are shown for a still image.

Journal Article•DOI•
H.J. Trussell, S. Fogel•
TL;DR: A modification of the Landweber iteration is developed to utilize the space-variant PSF to produce an estimate of the original image.
Abstract: Sequential imaging cameras are designed to record objects in motion. When the speed of the objects exceeds the temporal resolution of the shutter, the image is blurred. Because objects in a scene are often moving in different directions at different speeds, the degradation of a recorded image may be characterized by a space-variant point spread function (PSF). The sequential nature of such images can be used to determine the relative motion of various parts of the image. This information can be used to estimate the space-variant PSF. A modification of the Landweber iteration is developed to utilize the space-variant PSF to produce an estimate of the original image.
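
The basic Landweber iteration is x_{k+1} = x_k + alpha * H^T (y - H x_k), where H applies the (possibly space-variant) PSF. The sketch below takes the forward operator and its adjoint as callables, since a space-variant PSF has no single convolution kernel; the nonnegativity projection is one common modification, not necessarily the one developed in the paper.

```python
import numpy as np

def landweber(y, forward, adjoint, alpha=1.0, n_iter=100, nonneg=True):
    """Landweber iteration: x <- x + alpha * adjoint(y - forward(x)).

    `forward` applies the blur operator H (e.g. a space-variant PSF) and
    `adjoint` applies its transpose H^T; alpha must satisfy
    0 < alpha < 2 / ||H||^2 for convergence.
    """
    x = np.zeros_like(y, dtype=float)
    for _ in range(n_iter):
        x = x + alpha * adjoint(y - forward(x))
        if nonneg:
            x = np.maximum(x, 0.0)     # optional constraint; one common modification
    return x

# Illustration only, with a space-invariant PSF standing in for the operator:
if __name__ == "__main__":
    from scipy.ndimage import convolve
    psf = np.ones((5, 5)) / 25.0
    H = lambda img: convolve(img, psf, mode="reflect")
    Ht = lambda img: convolve(img, psf[::-1, ::-1], mode="reflect")
    blurred = H(np.random.rand(64, 64))
    estimate = landweber(blurred, H, Ht, alpha=1.0, n_iter=50)
```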

Journal Article•DOI•
TL;DR: A phase unwrapping algorithm is presented which works for 2D data known only within a set of nonconnected regions with possibly nonconvex boundaries and the main application addressed is magnetic resonance imaging (MRI) where phase maps are useful.
Abstract: Phase unwrapping refers to the determination of phase from modulo 2 pi data, some of which may not be reliable. In 2D, this is equivalent to confining the support of the phase function to one or more arbitrarily shaped regions. A phase unwrapping algorithm is presented which works for 2D data known only within a set of nonconnected regions with possibly nonconvex boundaries. The algorithm includes the following steps: segmentation to identify connectivity, phase unwrapping within each segment using a Taylor series expansion, phase unwrapping between disconnected segments along an optimum path, and filling of phase information voids. The optimum path for intersegment unwrapping is determined by a minimum spanning tree algorithm. Although the algorithm is applicable to any 2D data, the main application addressed is magnetic resonance imaging (MRI) where phase maps are useful.
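
The local unwrapping step, integrating phase differences wrapped back into (-pi, pi], fits in a few lines for the 1-D case, as sketched below. The paper's actual contributions (segmentation, Taylor-series unwrapping within segments, minimum-spanning-tree ordering between segments, and void filling) are not reproduced.

```python
import numpy as np

def unwrap_1d(wrapped):
    """Unwrap a 1-D phase signal by summing phase differences wrapped into (-pi, pi].

    Equivalent in spirit to numpy.unwrap; a 2-D algorithm applies the same idea
    along a chosen path within each reliable segment.
    """
    wrapped = np.asarray(wrapped, dtype=float)
    d = np.diff(wrapped)
    d_wrapped = (d + np.pi) % (2 * np.pi) - np.pi        # principal-value differences
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d_wrapped)))

if __name__ == "__main__":
    true_phase = np.linspace(0, 12 * np.pi, 400)
    wrapped = np.angle(np.exp(1j * true_phase))          # modulo-2*pi data
    print(np.allclose(unwrap_1d(wrapped), true_phase))   # True
```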

Journal Article•DOI•
TL;DR: It is demonstrated in a particularly simple manner in the Fourier domain, using a vector analog of the well-known projection slice theorem, that the solenoidal part of v(r) is uniquely determined by the line integrals of v(r).
Abstract: The problem of reconstructing a vector field v(r) from its line integrals (through some domain D) is generally undetermined since v(r) is defined by two component functions. When v(r) is decomposed into its irrotational and solenoidal components, it is shown that the solenoidal part is uniquely determined by the line integrals of v(r). This is demonstrated in a particularly simple manner in the Fourier domain using a vector analog of the well-known projection slice theorem. In addition, under the constraint that v(r) is divergenceless in D, a formula for the scalar potential phi(r) is given in terms of the normal component of v(r) on the boundary of D. An important application of vector tomography, i.e., reconstructing a fluid velocity field from reciprocal acoustic travel time measurements or Doppler backscattering measurements, is considered.

Journal Article•DOI•
TL;DR: Several major deconvolution techniques commonly used for seismic applications are studied and adapted for ultrasonic NDE (nondestructive evaluation) applications and their relative merits are presented based on a complete set of simulations on some real ultrasonic pulse echoes.
Abstract: Several major deconvolution techniques commonly used for seismic applications are studied and adapted for ultrasonic NDE (nondestructive evaluation) applications. Comparisons of the relative merits of these techniques are presented based on a complete set of simulations on some real ultrasonic pulse echoes. Methods that rely largely on a reflection seismic model, such as one-at-a-time L1 spike extraction and MVD (minimum variance deconvolution), are not suitable for the NDE applications discussed here because they are limited by their underlying model. L2 and Wiener filtering, on the other hand, do not assume such a model and are, therefore, more flexible and suitable for these applications. The L2 solutions, however, are often noisy due to numerical ill-conditioning. This problem is partially solved in Wiener filtering, simply by adding a constant desensitizing factor q. The computational complexities of these Wiener filtering-based techniques are relatively moderate and are, therefore, more suitable for potential real-time implementations.
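
The role of the constant desensitizing factor q is easiest to see in a frequency-domain sketch: the least-squares (inverse-filter) solution divides by |H(f)|^2 and blows up where the pulse spectrum is small, while adding q to the denominator stabilizes it. The signal construction below is illustrative only.

```python
import numpy as np

def wiener_deconvolve(echo, pulse, q=1e-2):
    """Deconvolve a measured ultrasonic echo by the transducer pulse.

    The constant q desensitizes the division where |H(f)| is small; q -> 0
    recovers the (often noisy) least-squares / inverse-filter solution.
    """
    n = len(echo)
    H = np.fft.fft(pulse, n=n)
    Y = np.fft.fft(echo)
    R = np.conj(H) * Y / (np.abs(H) ** 2 + q)
    return np.real(np.fft.ifft(R))

if __name__ == "__main__":
    t = np.linspace(0, 1, 512)
    pulse = np.exp(-((t - 0.05) ** 2) / 2e-4) * np.cos(2 * np.pi * 40 * t)
    reflectivity = np.zeros_like(t)
    reflectivity[[100, 180, 300]] = [1.0, -0.6, 0.8]      # sparse reflectors
    echo = np.real(np.fft.ifft(np.fft.fft(reflectivity) * np.fft.fft(pulse, n=512)))
    echo += 0.01 * np.random.randn(512)
    estimate = wiener_deconvolve(echo, pulse, q=1e-2)
```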

Journal Article•DOI•
TL;DR: The authors describe an adaptive buffer instrumented version of 2-D ECSBC, for use with fixed-rate channels which completely eliminates buffer overflow/underflow problems and provides relative performance evaluations.
Abstract: The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive buffer instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to the corresponding ideal 2-D ECSBC system.

Journal Article•DOI•
TL;DR: The novelty of this approach is that the transform coefficients of all image blocks are coded and transmitted in absolute magnitude order and the resulting ordered-by-magnitude transmission is accomplished without sacrificing coding efficiency by using partition priority coding.
Abstract: An approach based on the block discrete cosine transform (DCT) is presented. The novelty of this approach is that the transform coefficients of all image blocks are coded and transmitted in absolute magnitude order. The resulting ordered-by-magnitude transmission is accomplished without sacrificing coding efficiency by using partition priority coding. Coding and transmission are adaptive to the characteristics of each individual image and are therefore very efficient. Another advantage of this approach is its high progression effectiveness. Since the largest transform coefficients that capture the most important characteristics of images are coded and transmitted first, this method is well suited for progressive image transmission. Further compression of the image data is achieved by multiple distribution entropy coding, a technique based on arithmetic coding. Experiments show that the approach compares favorably with previously reported DCT and subband image codecs.
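
The ordered-by-magnitude idea can be sketched independently of the entropy coder: compute the block DCTs, then emit coefficients globally in decreasing absolute magnitude, so a decoder that stops after any prefix still has the most significant coefficients of every block. Partition priority coding and the multiple distribution arithmetic coder are not reproduced; scipy.fft.dctn supplies the block DCT.

```python
import numpy as np
from scipy.fft import dctn

def coefficients_by_magnitude(img, block=8):
    """Block DCT of an image, with coefficients listed in decreasing |magnitude|.

    Each entry is (block_row, block_col, u, v, value); transmitting this list
    in order sends the largest coefficients of the whole image first, which is
    what makes the scheme naturally progressive.
    """
    h, w = img.shape
    entries = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(img[i:i + block, j:j + block].astype(float), norm="ortho")
            for u in range(block):
                for v in range(block):
                    entries.append((i // block, j // block, u, v, c[u, v]))
    entries.sort(key=lambda e: abs(e[4]), reverse=True)
    return entries

if __name__ == "__main__":
    img = np.random.rand(32, 32)
    stream = coefficients_by_magnitude(img)
    print(stream[0])     # the single largest DCT coefficient in the image
```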

Journal Article•DOI•
J.A. Vlontzos, Sun-Yuan Kung•
TL;DR: A hierarchical system for character recognition with hidden Markov model knowledge sources which solve both the context sensitivity problem and the character instantiation problem is presented, thus permitting real-time multifont and multisize printed character recognition as well as handwriting recognition.
Abstract: A hierarchical system for character recognition with hidden Markov model knowledge sources which solve both the context sensitivity problem and the character instantiation problem is presented. The system achieves 97-99% accuracy using a two-level architecture and has been implemented using a systolic array, thus permitting real-time (1 ms per character) multifont and multisize printed character recognition as well as handwriting recognition.

Journal Article•DOI•
TL;DR: The results obtained consistently demonstrate the efficacy of the proposed TDJPL implementations, and illustrate the success in its use for adaptive restoration of images.
Abstract: The two dimensional (2D) joint process lattice (TDJPL) and its implementations for image restoration applications are examined. A 2D adaptive lattice algorithm (TDAL) is first developed. Convergence properties of the algorithm are covered for the 2D adaptive lattice least mean squares (TDAL-LMS) case. The complexity of the normalized algorithm is slightly more than that of the TDAL-LMS, but it is a faster-converging algorithm. Implementations of the proposed TDJPL estimator as a 2D adaptive lattice noise canceler and as a 2D adaptive lattice line enhancer are then considered. The performance of both schemes is evaluated using artificially degraded image data at different signal-to-noise ratios (SNRs). The results show that substantial noise reduction is achieved and that a large improvement in mean square error is obtained even at very low input SNR. The results consistently demonstrate the efficacy of the proposed TDJPL implementations and illustrate the success of their use for adaptive restoration of images.

Journal Article•DOI•
TL;DR: A 2D recursive low-pass filter with adaptive coefficients for restoring images degraded by Gaussian noise is proposed and can easily be extended so that simultaneous noise removal and edge enhancement is possible.
Abstract: A 2D recursive low-pass filter with adaptive coefficients for restoring images degraded by Gaussian noise is proposed. Some of the ideas developed are also applicable to non-Gaussian noise. The adaptation is performed with respect to three local image features (edges, spots, and flat regions), for which detectors are developed by extending some existing methods. It is demonstrated that the filter can easily be extended so that simultaneous noise removal and edge enhancement is possible. A comparison with other approaches is made. Some examples illustrate the performance of the filter.

Journal Article•DOI•
A.J. Devaney•
TL;DR: The problem of reconstructing the complex index of refraction of a weakly inhomogeneous scattering object from measurements of the magnitude (intensity) of the transmitted wavefields in a set of scattering experiments within the context of diffraction tomography (DT) is addressed.
Abstract: The problem of reconstructing the complex index of refraction of a weakly inhomogeneous scattering object from measurements of the magnitude (intensity) of the transmitted wavefields in a set of scattering experiments within the context of diffraction tomography (DT) is addressed. It is shown that high quality approximate reconstructions can be obtained from such intensity data using standard reconstruction procedures of DT. The physical basis for the success of these procedures when applied to intensity data is discussed and computer simulations are presented comparing the approximate reconstructions generated from intensity data with the optimum reconstructions generated from both the magnitude and phase of the transmitted wavefields.

Journal Article•DOI•
TL;DR: It is shown that the imaging information obtained by the inversion of phased array scan data is equivalent to the image reconstructed from its synthesized array counterpart.
Abstract: The author presents a system model and inversion for the beam-steered data obtained by linearly varying the relative phase among the elements of an array, also known as phased array scan data. The system model and inversion incorporate the radiation pattern of the array's elements. The inversion method utilizes the time samples of the echoed signals for each scan angle instead of range focusing. It is shown that the temporal Fourier transform of the phased array scan data provides the distribution of the spatial Fourier transform of the reflectivity function for the medium to be imaged. The extent of this coverage is related to the array's length and the temporal frequency bandwidth of the transmitted pulsed signal. Sampling constraints and the reconstruction procedure for the imaging system are discussed. It is shown that the imaging information obtained by the inversion of phased array scan data is equivalent to the image reconstructed from its synthesized array counterpart.

Journal Article•DOI•
TL;DR: A vector quantization scheme based on the classified vector quantization (CVQ) concept, called predictive classified vector quantization (PCVQ), is presented, which achieves bit rate reductions over CVQ ranging from 20 to 32% for two commonly used color test images while maintaining the same acceptable image quality.
Abstract: A vector quantization scheme based on the classified vector quantization (CVQ) concept, called predictive classified vector quantization (PCVQ), is presented. Unlike CVQ where the classification information has to be transmitted, PCVQ predicts it, thus saving valuable bit rate. Two classifiers, one operating in the Hadamard domain and the other in the spatial domain, were designed and tested. The classification information was predicted in the spatial domain. The PCVQ schemes achieved bit rate reductions over the CVQ ranging from 20 to 32% for two commonly used color test images while maintaining the same acceptable image quality. Bit rates of 0.70-0.93 bits per pixel (bpp) were obtained depending on the image and PCVQ scheme used.

Journal Article•DOI•
TL;DR: A two-dimensional method which uses a full-plane image model to generate a more accurate filtered estimate of an image that has been corrupted by additive noise and full-plane blur is presented.
Abstract: A two-dimensional method which uses a full-plane image model to generate a more accurate filtered estimate of an image that has been corrupted by additive noise and full-plane blur is presented. Causality is maintained within the filtering process by using multiple concurrent block estimators. In addition, true state dynamics are preserved, resulting in an accurate Kalman gain matrix. Simulation results on a test image corrupted by additive white Gaussian noise are presented for various image models and compared to those of the previous block Kalman filtering methods.