
Showing papers in "Journal of Mathematical Imaging and Vision in 1993"


Journal ArticleDOI
TL;DR: The iterative image recovery algorithm described in this paper uses a numerical finite-element solution to the diffusion equation as the photon propagation model to compare the influence of absorbing and scattering inhomogeneities embedded in a homogeneous tissue sample on boundary measurements.
Abstract: The development of an optical tomographic imaging system for biological tissue based on time-resolved near-infrared transillumination has received considerable interest recently. The reconstruction problem is ill posed because of scatter-dominated photon propagation, and hence it requires both an accurate and fast transport model and a robust solution convergence scheme. The iterative image recovery algorithm described in this paper uses a numerical finite-element solution to the diffusion equation as the photon propagation model. The model itself is used to compare the influence of absorbing and scattering inhomogeneities embedded in a homogeneous tissue sample on boundary measurements to estimate the possibility of separating absorption and scattering images. Images of absorbers and scatterers reconstructed from both mean-time-of-flight and logarithmic intensity data are presented. It is found that mean-time-of-flight data offer increased resolution for reconstructing the scattering coefficient, whereas intensity data are favorable for reconstructing absorption.

176 citations


Journal ArticleDOI
TL;DR: This article gives an axiomatic derivation of how a multiscale representation of derivative approximations can be constructed from a discrete signal, so that it possesses an algebraic structure similar to that possessed by the derivatives of the traditional scale-space representation in the continuous domain.
Abstract: This article shows how discrete derivative approximations can be defined so that scale-space properties hold exactly also in the discrete domain. Starting from a set of natural requirements on the first processing stages of a visual system, the visual front end, it gives an axiomatic derivation of how a multiscale representation of derivative approximations can be constructed from a discrete signal, so that it possesses an algebraic structure similar to that possessed by the derivatives of the traditional scale-space representation in the continuous domain. A family of kernels is derived that constitute discrete analogues to the continuous Gaussian derivatives. The representation has theoretical advantages over other discretizations of the scale-space theory in the sense that operators that commute before discretization commute after discretization. Some computational implications of this are that derivative approximations can be computed directly from smoothed data and that this will give exactly the same result as convolution with the corresponding derivative approximation kernel. Moreover, a number of normalization conditions are automatically satisfied. The proposed methodology leads to a scheme of computations of multiscale low-level feature extraction that is conceptually very simple and consists of four basic steps: (i) large-support convolution smoothing, (ii) small-support difference computations, (iii) point operations for computing differential geometric entities, and (iv) nearest-neighbour operations for feature detection. Applications demonstrate how the proposed scheme can be used for edge detection and junction detection based on derivatives up to order three.

117 citations
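The four-step scheme above maps directly onto a few lines of array code. The following is a minimal sketch of that pipeline, not the paper's implementation: it smooths with the discrete analogue of the Gaussian kernel T(n; t) = exp(-t) I_n(t) (modified Bessel functions), takes small-support central differences, forms the gradient magnitude as the point operation, and marks local maxima with a nearest-neighbour test. The truncation radius and the test image are illustrative choices.

```python
import numpy as np
from scipy.special import ive                    # exponentially scaled modified Bessel I_n
from scipy.ndimage import convolve1d, maximum_filter

def discrete_gaussian_kernel(t, radius=None):
    """Discrete analogue of the Gaussian: T(n; t) = exp(-t) I_n(t), truncated."""
    if radius is None:
        radius = int(np.ceil(4 * np.sqrt(t))) + 1
    n = np.arange(-radius, radius + 1)
    return ive(np.abs(n), t)                     # ive(n, t) = exp(-t) * I_n(t)

def smooth(image, t):
    """(i) large-support separable smoothing."""
    k = discrete_gaussian_kernel(t)
    return convolve1d(convolve1d(image, k, axis=0, mode='reflect'), k, axis=1, mode='reflect')

def gradient_magnitude(image, t):
    """(ii) small-support differences and (iii) a point operation on the results."""
    L = smooth(image, t)
    Lx = convolve1d(L, np.array([0.5, 0.0, -0.5]), axis=1)   # sign convention is irrelevant for the magnitude
    Ly = convolve1d(L, np.array([0.5, 0.0, -0.5]), axis=0)
    return np.hypot(Lx, Ly)

def local_maxima(grad):
    """(iv) nearest-neighbour operation: keep 3x3 local maxima as edge candidates."""
    return (grad == maximum_filter(grad, size=3)) & (grad > 0)

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[:, 32:] = 1.0              # synthetic vertical step edge
    g = gradient_magnitude(img, t=2.0)
    print(local_maxima(g).sum(), "candidate edge points")
```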


Journal ArticleDOI
TL;DR: In this paper, a diagrammar of differential invariants and tensors is proposed for the analysis of local image structure, based on scale-space theory, tensor calculus, and the theory of invariants.
Abstract: We present a formalism for studying local image structure in a systematic, coordinate-independent, and robust way, based on scale-space theory, tensor calculus, and the theory of invariants. We concentrate on differential invariants. The formalism is of general applicability to the analysis of grey-tone images of various modalities, defined on a D-dimensional spatial domain. We propose a “diagrammar” of differential invariants and tensors, i.e., a diagrammatic representation of image derivatives in scale-space together with a set of simple rules for representing meaningful local image properties. All local image properties on a given level of inner scale can be represented in terms of such diagrams, and, vice versa, all diagrams represent coordinate-independent combinations of image derivatives, i.e., true image properties. We present complete and irreducible sets of (nonpolynomial) differential invariants appropriate for the description of local image structure up to any desired order. Any differential invariant can be expressed in terms of polynomial invariants, pictorially represented by closed diagrams. Here we consider a complete, irreducible set of polynomial invariants up to second order (inclusive). Examples of differential invariants up to fourth order (inclusive), calculated for synthetic, noise-perturbed, 2-dimensional test images, are included to illustrate the main theory.

103 citations
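As a point of reference, a few familiar low-order polynomial invariants of this kind (standard tensor-notation examples, written here with Einstein summation over repeated spatial indices of the scale-space image L, not quoted verbatim from the paper) are:

```latex
\begin{align*}
  L              &\;: \text{the image intensity itself,}\\
  L_i L_i        &\;: \text{the squared gradient magnitude,}\\
  L_{ii}         &\;: \text{the Laplacean,}\\
  L_{ij} L_{ji}  &\;: \text{the sum of squared second derivatives,}\\
  L_i L_{ij} L_j &\;: \text{the second derivative along the gradient, scaled by } L_k L_k.
\end{align*}
```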


Journal ArticleDOI
TL;DR: The local cosine transform (LCT) can be added as an optional step for improving the quality of existing DCT (JPEG) encoders by reducing and smoothing the blocking effect while improving image quality.
Abstract: This paper presents the local cosine transform (LCT) as a new method for the reduction and smoothing of the blocking effect that appears at low bit rates in image coding algorithms based on the discrete cosine transform (DCT). In particular, the blocking effect appears in the JPEG baseline sequential algorithm. Two types of LCT were developed: LCT-IV is based on the DCT type IV, and LCT-II is based on DCT type II, which is known as the standard DCT. At the encoder side the image is first divided into small blocks of pixels. Both types of LCT have basis functions that overlap adjacent blocks. Prior to the DCT coding algorithm a preprocessing phase in which the image is multiplied by smooth cutoff functions (or bells) that overlap adjacent blocks is applied. This is implemented by folding the overlapping parts of the bells back into the original blocks, and thus it permits the DCT algorithm to operate on the resulting blocks. At the decoder side the inverse LCT is performed by unfolding the samples back to produce the overlapped bells. The purpose of the multiplication by the bell is to reduce the gaps and inaccuracies that may be introduced by the encoder during the quantization step. LCT-IV and LCT-II were applied on images as a preprocessing phase followed by the JPEG baseline sequential compression algorithm. For LCT-IV, the DCT type IV replaced the standard DCT as the kernel of the transform coding. In both cases, for the same low bit rates the blocking effect was smoothed and reduced while the image quality in terms of mean-square error became better. Subjective tests performed on a group of observers also confirm these results. Thus the LCT can be added as an optional step for improving the quality of existing DCT (JPEG) encoders. Advantages over other methods that attempt to reduce the blocking effect due to quantization are also described.

73 citations
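The folding/unfolding step is the heart of the scheme. Below is a minimal 1-D sketch of that idea, written as smooth pairwise rotations of the samples straddling each block boundary; the rotation profile plays the role of the bell, and the block size, overlap, and profile are illustrative assumptions rather than the paper's exact LCT-II or LCT-IV construction (no quantization is performed between encoding and decoding here).

```python
import numpy as np
from scipy.fft import dct, idct

def _bell_angles(overlap):
    # Smooth monotone rotation profile standing in for the cutoff bell (assumed choice).
    return (np.arange(overlap) + 0.5) * (np.pi / (4 * overlap))

def fold(x, block=8, overlap=4):
    """Fold the overlapping parts of the bells back into the original blocks."""
    y = x.astype(float).copy()
    c, s = np.cos(_bell_angles(overlap)), np.sin(_bell_angles(overlap))
    for m in range(block, len(x), block):            # every interior block boundary
        for k in range(overlap):
            u, v = y[m + k], y[m - 1 - k]
            y[m + k], y[m - 1 - k] = c[k] * u + s[k] * v, c[k] * v - s[k] * u
    return y

def unfold(y, block=8, overlap=4):
    """Invert the boundary rotations to recover the overlapped bells."""
    x = y.astype(float).copy()
    c, s = np.cos(_bell_angles(overlap)), np.sin(_bell_angles(overlap))
    for m in range(block, len(y), block):
        for k in range(overlap):
            a, b = x[m + k], x[m - 1 - k]
            x[m + k], x[m - 1 - k] = c[k] * a - s[k] * b, s[k] * a + c[k] * b
    return x

if __name__ == "__main__":
    x = np.sin(np.linspace(0, 6, 64))
    coeffs = dct(fold(x).reshape(-1, 8), norm='ortho', axis=1)    # blockwise DCT on folded samples
    xr = unfold(idct(coeffs, norm='ortho', axis=1).ravel())       # decoder: inverse DCT, then unfold
    print(np.allclose(x, xr))                                     # perfect reconstruction without quantization
```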


Journal ArticleDOI
TL;DR: This paper introduces a new extension of the finite-dimensional spline-based approach for incorporating edge information, and derives explicit formulas for these edge warps, evaluates the quadratic form expressing bending energies of their formal combinations, and shows the resulting spectrum of edge features in typical scenes.
Abstract: In many current medical applications of image analysis, objects are detected and delimited by boundary curves or surfaces. Yet the most effective multivariate statistics available pertain to labeled points (landmarks) only. In the finite-dimensional feature space that landmarks support, each case of a data set is equivalent to a deformation map deriving it from the average form. This paper introduces a new extension of the finite-dimensional spline-based approach for incorporating edge information. In this implementation edgels are restricted to landmark loci: they are interpreted as pairs of landmarks at infinitesimal separation in a specific direction. The effect of changing edge direction is a singular perturbation of the thin-plate spline for the landmarks alone. An appropriate normalization yields a basis for image deformations corresponding to changes of edge direction without landmark movement; this basis complements the basis of landmark deformations ignoring edge information. We derive explicit formulas for these edge warps, evaluate the quadratic form expressing bending energies of their formal combinations, and show the resulting spectrum of edge features in typical scenes. These expressions will aid all investigations into medical images that entail comparisons of anatomical scene analyses to a normative or typical form.

70 citations
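For orientation, the landmark-only thin-plate spline that this edge extension perturbs can be written down compactly. The sketch below fits a standard 2-D thin-plate spline to a handful of landmarks and evaluates its bending-energy quadratic form; it is an assumed, generic formulation (kernel U(r) = r^2 log r^2, energy reported up to a constant factor) and does not implement the paper's edgel warps themselves.

```python
import numpy as np

def tps_kernel(r2):
    """U(r) = r^2 log r^2, evaluated on squared distances (0 at coincident points)."""
    with np.errstate(divide='ignore', invalid='ignore'):
        return np.where(r2 > 0, r2 * np.log(r2), 0.0)

def fit_tps(src, dst):
    """Solve the standard TPS interpolation system [[K, P], [P^T, 0]] for 2-D landmarks."""
    n = len(src)
    K = tps_kernel(((src[:, None, :] - src[None, :, :]) ** 2).sum(-1))
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.zeros((n + 3, 2)); rhs[:n] = dst
    params = np.linalg.solve(L, rhs)
    W = params[:n]                                   # non-affine (bending) part
    bending = np.trace(W.T @ K @ W)                  # bending energy, up to a constant factor
    return params, bending

if __name__ == "__main__":
    src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [.5, .5]])
    dst = src.copy(); dst[4] += [0.1, 0.2]           # displace the central landmark only
    _, energy = fit_tps(src, dst)
    print(round(float(energy), 4))
```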


Book ChapterDOI
TL;DR: It is shown that the radar imaging problem can be set up as a problem of inference on the wavelet coefficients of an image corrupted by additive noise, and a simple hypothesis-testing technique for solving the problem at a prespecified significance level is studied.
Abstract: We consider the problem of forming radar images under a diffuse-target statistical model for the reflections off a target surface. The desired image is the scattering function S(f, τ), which describes the second-order statistics of target reflectivity in delay-Doppler coordinates. Our estimation approach is obtained by application of the maximum-likelihood principle and a regularization procedure based on a wavelet representation for the logarithm of S(f, τ). This approach offers the ability to capture significant components of ln S(f, τ) at different resolution levels and guarantees nonnegativity of the scattering function estimates. We show that the radar imaging problem can be set up as a problem of inference on the wavelet coefficients of an image corrupted by additive noise. A simple hypothesis-testing technique for solving the problem at a prespecified significance level is studied. The significance level of the test is selected according to the desired noise/resolution trade-off. The regularization technique is applicable to a broad class of speckle-noise reduction problems.

47 citations
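The inference step can be illustrated with a one-dimensional stand-in. The sketch below (an assumed setup, not the authors' estimator) treats each wavelet coefficient of a noisy signal as a test statistic for the hypothesis that the underlying coefficient is zero under additive Gaussian noise of known scale sigma, keeps only coefficients significant at level alpha, and reconstructs; the wavelet, decomposition level, and test signal are illustrative choices.

```python
import numpy as np
import pywt
from scipy.stats import norm

def denoise_by_significance(signal, sigma, alpha=0.01, wavelet='db4', level=4):
    """Keep a detail coefficient only if it rejects H0: 'coefficient = 0' at level alpha."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    thresh = sigma * norm.ppf(1 - alpha / 2)                 # two-sided critical value
    kept = [coeffs[0]] + [np.where(np.abs(c) > thresh, c, 0.0) for c in coeffs[1:]]
    return pywt.waverec(kept, wavelet)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.repeat([0.0, 2.0, -1.0, 3.0], 256)            # piecewise-constant stand-in for ln S
    noisy = clean + rng.normal(0.0, 0.5, clean.size)
    est = denoise_by_significance(noisy, sigma=0.5, alpha=0.01)
    print(np.mean((est - clean) ** 2) < np.mean((noisy - clean) ** 2))   # smaller error than the raw data
```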


Journal ArticleDOI
TL;DR: An automatic method for identification of the center point of the left ventricle of the myocardium during systole in two-dimensional short-axis echocardiographic images is described, providing a first step toward the long-term goal of automatic recognition of all the endocardial and epicardial borders.
Abstract: An automatic method for identification of the center point of the left ventricle of the myocardium during systole in two-dimensional short-axis echocardiographic images is described. This method, based on the use of large matched filters, identifies a single fixed center point during systole by locating three key features: the epicardial border along the posterior wall, the epicardial border along the anterior wall, and the endocardial border along the anterior wall. Thus it provides a first step toward the long-term goal of automatic recognition of all the endocardial and epicardial borders. An index (or normalized output value) associated with the filter used to approximate the epicardial boundary along the posterior wall provides an indication of the quality of the image and a reliability measurement of the estimate. When this method was tested on 207 image sequences, 18 images were identified by this index (applied to the end diastolic frame) as unsuitable for processing. In the remaining 189 image sequences, 173 of the automatically defined center points were judged to be in good agreement with estimates made on the end diastolic frame by an independent expert observer. Thus only 16 automatically defined centers were judged to be in poor agreement. Comparisons of the computer and expert-observer estimates were also made for the three key border locations.

26 citations
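The core operation, matched filtering with a large template and a normalized output value, can be sketched generically. The code below is not the paper's posterior/anterior wall templates, and the test image is synthetic: it correlates the image with a template, takes the peak of the normalized cross-correlation as the detected location, and uses the peak value itself as a crude quality/reliability index.

```python
import numpy as np
from skimage.feature import match_template

def detect(image, template):
    """Return the best-match location and its normalized cross-correlation score."""
    ncc = match_template(image, template, pad_input=True)
    peak = np.unravel_index(np.argmax(ncc), ncc.shape)
    return peak, float(ncc[peak])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.zeros((128, 128))
    img[60:68, 30:100] = 1.0                     # synthetic bright band standing in for a border
    img += rng.normal(0.0, 0.1, img.shape)
    tmpl = np.zeros((16, 70)); tmpl[4:12, :] = 1.0   # band-shaped template
    (row, col), score = detect(img, tmpl)
    print(row, col, round(score, 2))             # a low score would flag the image as unreliable
```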


Book ChapterDOI
TL;DR: A group theoretic approach to image representation and analysis is presented in this article, where the concept of a wavelet transform is extended to incorporate different types of groups and the wavelet approach is generalized to Lie groups that satisfy conditions of compactness and commutability.
Abstract: A group theoretic approach to image representation and analysis is presented. The concept of a wavelet transform is extended to incorporate different types of groups. The wavelet approach is generalized to Lie groups that satisfy conditions of compactness and commutability and to groups that are determined in a particular way by subgroups that satisfy these conditions. These conditions are fundamental to finding the invariance measure for the admissibility condition of a mother wavelet-type transform. The following special cases of interest in image representation and in biological and computer vision are discussed: 2- and 3-D rigid motion, similarity and Lorentzian groups, and 2-D projective groups obtained from 3-D camera rotation.

24 citations


Journal ArticleDOI
Peter Veelaert
TL;DR: The main result is the proof of the equivalence of local flatness, evenness, and the chord property for certain infinite digital point sets in spaces of arbitrary dimension.
Abstract: This paper investigates the properties of digital hyperplanes of arbitrary dimension. We extend previous results that have been obtained for digital straight lines and digital planes, namely, Hung's evenness, Rosenfeld's chord, and Kim's chordal triangle property. To characterize digital hyperplanes we introduce the notion of digital flatness. We make a distinction between flatness and local flatness. The main tool we use is Helly's First Theorem, a classical result on convex sets, by means of which precise and verifiable conditions are given for the flatness of digital point sets. The main result is the proof of the equivalence of local flatness, evenness, and the chord property for certain infinite digital point sets in spaces of arbitrary dimension.

24 citations
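For readers who want the two classical ingredients spelled out, here are standard statements of Helly's theorem and the chord property, recalled from the literature rather than quoted from the paper:

```latex
% Helly's theorem in R^d: if X_1, ..., X_m (m > d) are convex subsets of R^d and every
% d+1 of them have a point in common, then all of them have a point in common:
\forall\, i_0 < \dots < i_d:\ \bigcap_{j=0}^{d} X_{i_j} \neq \emptyset
\quad\Longrightarrow\quad
\bigcap_{i=1}^{m} X_i \neq \emptyset.

% Rosenfeld's chord property for a digital set D in the plane: for all p, q in D and
% every point r of the real line segment [p, q] there is a point d in D with
\max\bigl(|r_x - d_x|,\ |r_y - d_y|\bigr) < 1.
```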


Book ChapterDOI
TL;DR: This paper studies the reconstruction of a tomographic image from the wavelet transform of its projections with a 1-D analyzing wavelet and shows that this allows a 2-D wavelet decomposition of the image to be reconstructed.
Abstract: In tomography an image is reconstructed from its projections from different directions. In this paper we study the reconstruction of a tomographic image from the wavelet transform of its projections with a 1-D analyzing wavelet. We then show that it allows us to reconstruct a 2-D wavelet decomposition of the image. The properties of the generated 2-D analyzing wavelet are studied. When the 1-D analyzing wavelet is even, the 2-D analyzing wavelet is isotropic. The extension of this idea to directional wavelets is also presented. The wavelet transform obtained in this case is defined with respect to a scale parameter and a rotation angle. For illustration, results on simulated and x-ray computerized tomography medical images are presented.

14 citations


Book ChapterDOI
TL;DR: A simple focusing technique for wavelet decompositions is developed that allows us to single out interesting parts of an image and obtain variable compression rates over the image.
Abstract: We develop a simple focusing technique for wavelet decompositions. This allows us to single out interesting parts of an image and obtain variable compression rates over the image. We also study similar techniques for image enhancement.
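One plausible reading of such a focusing step (an assumed interpretation, not necessarily the authors' exact construction) is to keep detail coefficients of a 2-D wavelet decomposition only inside a region of interest, so that fine detail, and hence bit rate, is spent where the image is interesting while the rest is represented coarsely:

```python
import numpy as np
import pywt

def focused_reconstruction(img, roi_mask, wavelet='haar', level=3):
    """Zero detail coefficients outside the region of interest, keep the coarse approximation."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]
    for (cH, cV, cD) in coeffs[1:]:
        ry = np.linspace(0, roi_mask.shape[0] - 1, cH.shape[0]).astype(int)   # nearest-neighbour
        rx = np.linspace(0, roi_mask.shape[1] - 1, cH.shape[1]).astype(int)   # resampling of the mask
        m = roi_mask[np.ix_(ry, rx)]
        out.append((cH * m, cV * m, cD * m))
    return pywt.waverec2(out, wavelet)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    roi = np.zeros((128, 128)); roi[32:96, 32:96] = 1.0      # the "interesting" part of the image
    print(focused_reconstruction(img, roi).shape)
```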

Book ChapterDOI
TL;DR: Two different representations associated with the Heisenberg group are presented in order to incorporate multiscale resolution with the ability to observe new information on scale resolution that may be useful for image compression and extension.
Abstract: This paper presents two different representations associated with the Heisenberg group in order to incorporate multiscale resolution.

Journal ArticleDOI
TL;DR: The first and second moments of the granulometric pattern spectrum are expressed for a random binary image that is formed either as the union of a deterministic signal with random point noise or as the set subtraction of random point noise from a deterministic signal.
Abstract: The first and second moments of the granulometric pattern spectrum are expressed for a random binary image that is formed either as the union of a deterministic signal with random point noise or as the set subtraction of random point noise from a deterministic signal. The granulometry is generated by a vertical (or horizontal) linear structuring element, and there are no constraints placed on the structure of the uncorrupted signal. Because the noise is random, the image on which the granulometry is run is random. Hence the pattern spectrum is a random function with random-variable moments. For both the union and subtractive cases, expressions are found for the expectation of the pattern-spectrum mean and variance, where the expectation is relative to the noise intensity. In each case a recursive formula is obtained for the key expression.
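As a concrete (assumed) discrete illustration of the quantities involved, the sketch below runs a granulometry by vertical linear structuring elements on a binary image formed as the union of a deterministic signal with random point noise, and computes the pattern-spectrum mean and variance from the normalized spectrum.

```python
import numpy as np
from scipy.ndimage import binary_opening

def vertical_pattern_spectrum(img, max_len=70):
    """Mass removed by openings with vertical lines of increasing length (the pattern spectrum)."""
    areas = [img.sum()]
    for k in range(2, max_len + 1):
        areas.append(binary_opening(img, structure=np.ones((k, 1), bool)).sum())
    return -np.diff(np.asarray(areas, float))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    signal = np.zeros((100, 100), bool); signal[20:80, 45:55] = True   # deterministic signal
    noise = rng.random((100, 100)) < 0.01                               # random point noise
    ps = vertical_pattern_spectrum(signal | noise)                      # union case
    sizes = np.arange(2, ps.size + 2)
    p = ps / ps.sum()
    mean = (sizes * p).sum()
    var = ((sizes - mean) ** 2 * p).sum()
    print(round(float(mean), 2), round(float(var), 2))                  # pattern-spectrum mean and variance
```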

Journal ArticleDOI
TL;DR: A method for recovering the spectrum of both the interface and the body reflectance for images composed of dichromatic surfaces by using finite-dimensional linear models to approximate spectral functions is presented.
Abstract: A method for recovering the spectrum of both the interface and the body reflectance for images composed of dichromatic surfaces is presented. An important assumption is that the spectrum of the interface component is the same as that of the illuminant. It is also assumed that the image is presegmented into dichromatic patches, that surfaces possess specularities, and that these highlights change geometrically differently from the shading. The method is based on minimizing the sum of squares of deviations from the dichromatic model over all the patches in the image, by using finite-dimensional linear models to approximate spectral functions. We point out shortcomings in the accuracy of such models when specularities are present in images. Results are presented for synthesized images made up of shaded patches with highlights. It is shown that the method does reasonably well in recovering interface and body colors as well as the illuminant spectrum and body spectral reflectance function.

Journal ArticleDOI
TL;DR: A general deterministic theory for the study of phase transitions when a single parameter, namely, the temperature, is varied is developed, based on a technique known as the mean-field approximation.
Abstract: The use of Gibbs random fields (GRF) to model images poses the important problem of the dependence of the patterns sampled from the Gibbs distribution on its parameters. Sudden changes in these patterns as the parameters are varied are known as phase transitions. In this paper we concentrate on developing a general deterministic theory for the study of phase transitions when a single parameter, namely, the temperature, is varied. This deterministic framework is based on a technique known as the mean-field approximation, which is widely used in statistical physics. Our mean-field theory is general in that it is valid for any number of gray levels, any pairwise interaction potential, any neighborhood structure or size, and any set of constraints imposed on the desired images. The mean-field approximation is used to compute closed-form estimates of the critical temperatures at which phase transitions occur for two texture models widely used in the image modeling literature: the Potts model and the autobinomial model. The mean-field model allows us to gain insight into the Gibbs model behavior in the neighborhood of these temperatures. These analytical results are verified by computer simulations that use a novel mean-field descent algorithm. An important spinoff of our mean-field theory is that it allows us to compute approximations for the correlation functions of GRF models, thus bridging the gap between neighborhood-based and correlation-based a priori image models.
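The flavour of such a mean-field analysis can be conveyed with a small simulation. The following is a minimal sketch (an assumed form of the update, not the paper's derivation or its descent algorithm) of mean-field iterations for a Q-state Potts model on a 4-neighbour lattice; sweeping the inverse temperature past its critical value moves the fixed point from a disordered state to an ordered, nearly single-label one.

```python
import numpy as np

def mean_field_potts(shape=(32, 32), Q=3, beta=1.0, iters=200, seed=0):
    """Iterate q_i(s) proportional to exp(beta * sum of neighbours' q_j(s)) to a fixed point."""
    rng = np.random.default_rng(seed)
    q = rng.random(shape + (Q,)); q /= q.sum(-1, keepdims=True)    # site-wise label probabilities
    for _ in range(iters):
        field = (np.roll(q, 1, 0) + np.roll(q, -1, 0) +
                 np.roll(q, 1, 1) + np.roll(q, -1, 1))             # mean field from the 4 neighbours
        e = np.exp(beta * field)
        q = e / e.sum(-1, keepdims=True)
    return q

if __name__ == "__main__":
    for beta in (0.2, 0.5, 1.0, 1.5):
        order = mean_field_potts(beta=beta).max(-1).mean()          # ~1/Q when disordered, ~1 when ordered
        print(f"beta = {beta:.1f}   order parameter ~ {order:.2f}")
```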

Book ChapterDOI
TL;DR: A fast algorithm using parallelism for compactly supported wavelet transforms that satisfy m-scale scaling equations for m ≥ 2 is established, and several special examples are provided that demonstrate that the new formulation and algorithm offer unique advantages over existing wavelet algorithms.
Abstract: This paper provides a new formulation of wavelet transforms in terms of generalized matrix products. After defining the generalized matrix product, a fast algorithm using parallelism for compactly supported wavelet transforms that satisfy m-scale scaling equations for m ≥ 2 is established. Several special examples, such as the Fourier-wavelet matrix expansion and wavelet decompositions and reconstructions, that demonstrate that the new formulation and algorithm offer unique advantages over existing wavelet algorithms are provided.
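To make the matrix point of view concrete, here is a minimal sketch using an ordinary dense matrix (not the paper's generalized matrix products or its parallel algorithm): one stage of the orthogonal 2-scale Haar wavelet transform written as multiplication by an analysis matrix W, so that analysis is y = W x and reconstruction is x = W^T y.

```python
import numpy as np

def haar_analysis_matrix(n):
    """n x n orthogonal matrix: top rows average sample pairs, bottom rows difference them."""
    assert n % 2 == 0
    W = np.zeros((n, n))
    for i in range(n // 2):
        W[i, 2 * i] = W[i, 2 * i + 1] = 1.0 / np.sqrt(2.0)       # scaling (lowpass) rows
        W[n // 2 + i, 2 * i] = 1.0 / np.sqrt(2.0)                 # wavelet (highpass) rows
        W[n // 2 + i, 2 * i + 1] = -1.0 / np.sqrt(2.0)
    return W

if __name__ == "__main__":
    x = np.arange(8, dtype=float)
    W = haar_analysis_matrix(8)
    y = W @ x                              # analysis as a matrix product
    print(np.allclose(W.T @ y, x))         # W is orthogonal, so W.T inverts it
```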

Journal ArticleDOI
TL;DR: It is proved that three (or more) views of three points are sufficient to decide if the motion of the points conserves angular momentum and, if it does, to compute a unique 3D interpretation.
Abstract: Monocular observers perceive as three-dimensional (3D) many displays that depict three points rotating rigidly in space but rotating about an axis that is itself tumbling. No theory of structure from motion currently available can account for this ability. We propose a formal theory for this ability based on the constraint of Poinsot motion, i.e., rigid motion with constant angular momentum. In particular, we prove that three (or more) views of three (or more) points are sufficient to decide if the motion of the points conserves angular momentum and, if it does, to compute a unique 3D interpretation. Our proof relies on an upper semicontinuity theorem for finite morphisms of algebraic varieties. We discuss some psychophysical implications of the theory.

Journal ArticleDOI
TL;DR: The paper focuses on a method for direct rendering of an implicitly defined surface that minimizes the number of operations; the method can also be used to efficiently evaluate a bivariate polynomial on a rectangular grid and in parallel.
Abstract: The display of an implicitly defined surface is obtained by its projection onto the viewing plane. Ray casting is a technique that accomplishes this projection by firing a mapping ray through each pixel of the screen into the world space. The intersection points of this ray with the surface are found; these points are further tested to determine which one is visible and is within the viewing volume. Finally, if a point that satisfies the above conditions is found, then the point is further processed for the determination of its shading value.
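A generic version of this pipeline is easy to sketch. The code below is an illustrative implementation, not the paper's optimized polynomial-evaluation scheme: for each pixel it fires a ray, brackets the first sign change of the implicit function F along the ray by sampling, refines the intersection by bisection, and shades the visible point with a simple Lambertian term using the gradient of F as the surface normal (the sphere, camera, and light are assumed test data).

```python
import numpy as np

F = lambda p: np.dot(p, p) - 1.0                         # implicit unit sphere: F(x, y, z) = 0

def grad(p, h=1e-5):
    """Numerical gradient of F, used as the (unnormalized) surface normal."""
    e = np.eye(3) * h
    return np.array([(F(p + e[i]) - F(p - e[i])) / (2 * h) for i in range(3)])

def cast(origin, direction, t_max=10.0, samples=200):
    """Find the nearest intersection of the ray origin + t*direction with F = 0, if any."""
    ts = np.linspace(0.0, t_max, samples)
    vals = np.array([F(origin + t * direction) for t in ts])
    hits = np.nonzero(vals[:-1] * vals[1:] < 0)[0]
    if hits.size == 0:
        return None
    a, b = ts[hits[0]], ts[hits[0] + 1]
    for _ in range(30):                                   # bisection refinement of the root
        m = 0.5 * (a + b)
        if F(origin + a * direction) * F(origin + m * direction) < 0:
            b = m
        else:
            a = m
    return origin + 0.5 * (a + b) * direction

if __name__ == "__main__":
    light = np.array([1.0, 1.0, -1.0]); light /= np.linalg.norm(light)
    eye = np.array([0.0, 0.0, -3.0])
    img = np.zeros((32, 32))
    for i, y in enumerate(np.linspace(-1, 1, 32)):
        for j, x in enumerate(np.linspace(-1, 1, 32)):
            d = np.array([x, y, 1.0]); d /= np.linalg.norm(d)
            hit = cast(eye, d)
            if hit is not None:
                n = grad(hit); n /= np.linalg.norm(n)
                img[i, j] = max(0.0, float(np.dot(n, light)))   # Lambertian shading value
    print(f"{int((img > 0).sum())} lit pixels")
```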