
Showing papers in "Journal of Mathematical Imaging and Vision in 2003"


Journal ArticleDOI
TL;DR: Some recent results in statistical modeling of natural images that attempt to explain patterns of non-Gaussian behavior of image statistics, i.e. high kurtosis, heavy tails, and sharp central cusps are reviewed.
Abstract: Statistical analysis of images reveals two interesting properties: (i) invariance of image statistics to scaling of images, and (ii) non-Gaussian behavior of image statistics, i.e. high kurtosis, heavy tails, and sharp central cusps. In this paper we review some recent results in statistical modeling of natural images that attempt to explain these patterns. Two categories of results are considered: (i) studies of probability models of images or image decompositions (such as Fourier or wavelet decompositions), and (ii) discoveries of underlying image manifolds while restricting to natural images. Applications of these models in areas such as texture analysis, image classification, compression, and denoising are also considered.

561 citations
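As a toy illustration of the non-Gaussian statistics described above, the sketch below compares the kurtosis of Gaussian samples with that of Laplacian samples; the Laplacian is a common stand-in for the heavy-tailed, sharply cusped marginals of natural-image derivative or wavelet coefficients (choosing it here is our assumption, not the paper's model):

```python
import numpy as np

def kurtosis(x):
    """Fourth standardized moment; equals 3 for a Gaussian, larger for heavy tails."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2

rng = np.random.default_rng(0)
gauss = rng.normal(size=200_000)      # Gaussian baseline, kurtosis ~3
laplace = rng.laplace(size=200_000)   # heavy tails and a central cusp, kurtosis ~6

print(kurtosis(gauss))
print(kurtosis(laplace))
```

The excess of the second value over 3 is exactly the "high kurtosis" signature the surveyed models try to explain.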


Journal ArticleDOI
TL;DR: A fully automated, non-rigid image registration algorithm that not only produces accurate and smooth solutions but also allows for an automatic rigid alignment and an implementation based on the numerical solution of the underlying Euler-Lagrange equations.
Abstract: A fully automated, non-rigid image registration algorithm is presented. The deformation field is found by minimizing a suitable measure subject to a curvature based constraint. It is a well-known fact that non-rigid image registration techniques may converge poorly if the initial position is not sufficiently near to the solution. A common approach to address this problem is to perform a time-consuming rigid pre-registration step. In this paper we show that the new curvature registration not only produces accurate and smooth solutions but also allows for an automatic rigid alignment. Thus, in contrast to other popular registration schemes, the new method no longer requires a pre-registration step. Furthermore, we present an implementation of the new scheme based on the numerical solution of the underlying Euler-Lagrange equations. The real discrete cosine transform is the backbone of our implementation and leads to a stable and fast O(N log N) algorithm, where N denotes the number of voxels. Finally, we report on some numerical test runs.

295 citations


Journal ArticleDOI
TL;DR: New methodologies and associated theorems for retrieving complete stored patterns from noisy or incomplete patterns using morphological associative memories are presented, derived from the notions of morphological independence, strong independence, minimal representations of pattern vectors, and kernels.
Abstract: Morphological neural networks are based on a new paradigm for neural computing. Instead of adding the products of neural values and corresponding synaptic weights, the basic neural computation in a morphological neuron takes the maximum or minimum of the sums of neural values and their corresponding synaptic weights. By taking the maximum (or minimum) of sums instead of the sum of products, morphological neuron computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we restrict our attention to morphological associative memories. After a brief review of morphological neural computing and a short discussion of the properties of morphological associative memories, we present new methodologies and associated theorems for retrieving complete stored patterns from noisy or incomplete patterns using morphological associative memories. These methodologies are derived from the notions of morphological independence, strong independence, minimal representations of pattern vectors, and kernels. Several examples are provided in order to illuminate these novel concepts.

99 citations
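A minimal sketch of the max-of-sums computation and of one standard autoassociative memory construction from the morphological associative memory literature (the tiny pattern matrix is illustrative only): with w_ij taken as the minimum over stored patterns of x_i − x_j, every uncorrupted pattern is recalled perfectly, since w_ii = 0 and w_ij + x_j ≤ x_i.

```python
import numpy as np

def max_product(W, x):
    """Morphological 'max product': y_i = max_j (w_ij + x_j), i.e. max of sums."""
    return np.max(W + x[None, :], axis=1)

# Stored patterns as rows of X (hypothetical values).
X = np.array([[1., 4., 2.],
              [0., 3., 5.],
              [2., 2., 2.]])

# w_ij = min over patterns of (x_i - x_j).
W = np.min(X[:, :, None] - X[:, None, :], axis=0)

# Perfect recall of every stored pattern from uncorrupted input.
for x in X:
    assert np.allclose(max_product(W, x), x)
```

Note that, unlike a linear associative memory, nothing here is a sum of products; the nonlinearity is built in before any thresholding.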


Journal ArticleDOI
TL;DR: Several analysis tools, such as statistics, line moments and invariants, Fourier and other series expansions, curvature scale space image, wavelet, and Radon transform are described.
Abstract: In this paper, a systematic review of various contour functions and methods of their analysis, as applied in the field of shape description and characterization, is presented. Contour functions are derived from planar object outlines and are used as an intermediate representation from which various shape properties can be obtained. All the functions are introduced and analyzed following the same scheme, thus making it possible to compare various representations. Although only a small subset of contour functions is included in the survey (cross-section, radius-vector, support, width, parametric, complex, tangent-angle, curvature, polynomial, and parametric cubic), the paper demonstrates a multitude of techniques for shape description that are based on this approach. Several analysis tools, such as statistics, line moments and invariants, Fourier and other series expansions, curvature scale space image, wavelet, and Radon transform are described.

90 citations
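For concreteness, one of the simplest contour functions surveyed above, the radius-vector (centroid distance) function, combined with magnitude Fourier descriptors, can be sketched as follows (the sampled ellipse and the choice of eight descriptors are illustrative assumptions):

```python
import numpy as np

# Hypothetical contour: an ellipse sampled at N boundary points.
N = 256
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
contour = np.c_[3 * np.cos(t), np.sin(t)]

# Radius-vector function: distance from the centroid to each boundary point.
centroid = contour.mean(axis=0)
r = np.linalg.norm(contour - centroid, axis=1)

# Fourier descriptors of r: dividing by |F[0]| gives scale invariance,
# taking magnitudes discards the starting-point (phase) dependence.
F = np.fft.fft(r)
descriptors = np.abs(F[1:9]) / np.abs(F[0])
```

Scaling the shape or changing the starting point of the traversal leaves `descriptors` unchanged, which is exactly the kind of invariance the surveyed analysis tools aim for.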


Journal ArticleDOI
TL;DR: This work proposes and analyzes extensions of the Mumford-Shah functional for color images based on the concept of images as surfaces and reviews the relevant theoretical background and computer vision literature.
Abstract: We propose and analyze extensions of the Mumford-Shah functional for color images. Our main motivation is the concept of images as surfaces. We also review most of the relevant theoretical background and computer vision literature.

83 citations


Journal ArticleDOI
TL;DR: It becomes clear in this paper that the theories of connectivity classes and of hyperconnectivity unify all relevant notions of connectivity, and provide a solid theoretical foundation for studying classical and fuzzy approaches to connectivity.
Abstract: Connectivity is a concept of great relevance to image processing and analysis. It is extensively used in image filtering and segmentation, image compression and coding, motion analysis, pattern recognition, and other applications. In this paper, we provide a theoretical tour of connectivity, with emphasis on those concepts of connectivity that are relevant to image processing and analysis. We review several notions of connectivity, which include classical topological and graph-theoretic connectivity, fuzzy connectivity, and the theories of connectivity classes and of hyperconnectivity. It becomes clear in this paper that the theories of connectivity classes and of hyperconnectivity unify all relevant notions of connectivity, and provide a solid theoretical foundation for studying classical and fuzzy approaches to connectivity, as well as for constructing new examples of connectivity useful for image processing and analysis applications.

77 citations


Journal ArticleDOI
TL;DR: A new technique for minimizing the energy in Nitzberg, Mumford and Shiota's variational model of segmentation with depth is described; it avoids explicit detection and connection of T-junctions.
Abstract: Given an image that depicts a scene with several objects in it, the goal of segmentation with depth is to automatically infer the shapes of the objects and the occlusion relations between them. Nitzberg, Mumford and Shiota formulated a variational approach to this problem: in their model, the solution is obtained as the minimizer of an energy. We describe a new technique of minimizing their energy that avoids explicit detection/connection of T-junctions.

73 citations


Journal ArticleDOI
TL;DR: A new method, based on an analytic representation of the flow, is proposed for extracting the vortices, sources, and sinks from the dense motion field preliminarily estimated between two images of a fluid video.
Abstract: In this paper we propose a new method to extract the vortices, sources, and sinks from the dense motion field preliminarily estimated between two images of a fluid video. This problem is essential in meteorology, for instance to identify and track depressions or convective clouds in satellite images. The knowledge of such points additionally allows a compact representation of the flow, which is very useful in both experimental and theoretical fluid mechanics. The method we propose here is based on an analytic representation of the flow. This approach has the advantage of being robust, simple, and fast, and requires few parameters.

66 citations
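The paper's method rests on an analytic representation of the flow; as elementary background, the sketch below only shows the pointwise quantities that characterize such singular points, curl for vortices and divergence for sources and sinks, evaluated on two ideal synthetic fields (our own toy examples):

```python
import numpy as np

def div_curl(u, v, dx):
    """Divergence and scalar curl of a 2D vector field via finite differences."""
    du_dy, du_dx = np.gradient(u, dx)   # np.gradient returns axis-0 (y) first
    dv_dy, dv_dx = np.gradient(v, dx)
    return du_dx + dv_dy, dv_dx - du_dy

y, x = np.mgrid[-1:1:65j, -1:1:65j]
dx = x[0, 1] - x[0, 0]

# Ideal vortex (-y, x): pure rotation, curl = 2, divergence = 0.
div_v, curl_v = div_curl(-y, x, dx)
# Ideal source (x, y): pure expansion, divergence = 2, curl = 0.
div_s, curl_s = div_curl(x, y, dx)
```

Vortices show up as extrema of the curl and sources/sinks as positive/negative extrema of the divergence of the estimated motion field.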


Journal ArticleDOI
TL;DR: A new set theoretic interpretation of recording and recall in binary autoassociative morphological memories (AMMs) is given, and a generalization using fuzzy set theory is provided.
Abstract: Morphological neural networks (MNNs) are a class of artificial neural networks whose operations can be expressed in the mathematical theory of minimax algebra. In a morphological neural net, the usual sum of weighted inputs is replaced by a maximum or minimum of weighted inputs (in this context, the weighting is performed by summing the weight and the input). We speak of a max product or a min product, respectively. In recent years, a number of different MNN models and applications have emerged. The emphasis of this paper is on morphological associative memories (MAMs), in particular on binary autoassociative morphological memories (AMMs). We give a new set theoretic interpretation of recording and recall in binary AMMs and provide a generalization using fuzzy set theory.

60 citations


Journal ArticleDOI
TL;DR: To improve the robustness of Associative Morphological Memories to general noise, a construction method based on the extrema point preservation of Erosion/Dilation Morphological Scale Spaces is proposed.
Abstract: Associative Morphological Memories are the analogue, on the lattice algebra (ℝ, +, ∨, ∧), of Linear Associative Memories. They have excellent recall properties for noiseless patterns. However, they suffer from sensitivity to specific noise models, which can be characterized as erosive and dilative noise. To improve their robustness to general noise we propose a construction method that is based on the extrema point preservation of the Erosion/Dilation Morphological Scale Spaces. Here we report on their application to the tasks of face localization in grayscale images and appearance-based visual self-localization of a mobile robot.

58 citations


Journal ArticleDOI
TL;DR: Algorithms for estimating affine motion from video image sequences are presented; they utilize properties of the Radon transform to estimate image motion in a multiscale framework and achieve very accurate results.
Abstract: The demand for more effective compression, storage, and transmission of video data is ever increasing. To make the most effective use of bandwidth and memory, motion-compensated methods rely heavily on fast and accurate motion estimation from image sequences to compress not the full complement of frames, but rather a sequence of reference frames, along with “differences” between these frames which result from estimated frame-to-frame motion. Motivated by the need for fast and accurate motion estimation for compression, storage, and transmission of video, as well as other applications of motion estimation, we present algorithms for estimating affine motion from video image sequences. Our methods utilize properties of the Radon transform to estimate image motion in a multiscale framework to achieve very accurate results. We develop statistical and computational models that motivate the use of such methods, and demonstrate that it is possible to improve the computational burden of motion estimation by more than an order of magnitude, while maintaining the degree of accuracy afforded by the more direct, and less efficient, 2-D methods.
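This is not the authors' affine algorithm, but a minimal sketch of the projection property it builds on: translating an image shifts its axis projections (the 0° and 90° Radon projections), so cheap 1D correlations recover a 2D shift. The random test image and the pure-translation model are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))
true_shift = (5, 9)
moved = np.roll(np.roll(img, true_shift[0], axis=0), true_shift[1], axis=1)

def est_shift(p, q):
    """Lag of the peak of the circular cross-correlation between projections p, q."""
    p = p - p.mean()
    q = q - q.mean()
    corr = np.fft.ifft(np.fft.fft(q) * np.conj(np.fft.fft(p))).real
    return int(np.argmax(corr))

# Row sums and column sums are the two axis-aligned Radon projections.
est_r = est_shift(img.sum(axis=1), moved.sum(axis=1))
est_c = est_shift(img.sum(axis=0), moved.sum(axis=0))
```

Each 2D motion parameter is recovered from 1D data, which is the source of the order-of-magnitude savings over direct 2-D matching that the abstract mentions.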

Journal ArticleDOI
TL;DR: Algorithms to measure key parameters of air-sea gas transfer during night-time are presented, showing how to combine physical modeling and quantitative digital image processing algorithms to identify transport models.
Abstract: The study of dynamical processes at the sea surface interface using infrared image sequence analysis has gained tremendous popularity in recent years. Heat is transferred by similar transport mechanisms as gases relevant to global climatic changes. These similarities lead to the use of infrared cameras to remotely visualize and quantitatively estimate parameters of the underlying processes. Relevant parameters that provide important evidence about the models of air-sea gas transfer are the temperature difference across the thermal sub layer, the probability density function of surface renewal and the flow field at the surface. Being a driving force in air sea interactions, it is of equal importance to measure heat fluxes. In this paper we will present algorithms to measure the above parameters of air-sea gas transfer during night-time and show how to combine physical modeling and quantitative digital image processing algorithms to identify transport models. The image processing routines rely on an extension of optical flow computations to incorporate brightness changes in a total least squares (TLS) framework. Statistical methods are employed to support a model of gas transfer and estimate its parameters. Measurements in a laboratory environment were conducted and results verified with ground truth data gained from traditional measurement techniques.

Journal ArticleDOI
TL;DR: In this paper tube methods for reconstructing discontinuous data from noisy and blurred observation data are considered and it is shown that discrete bounded variation-regularization and the taut-string algorithm select reconstructions in a tube.
Abstract: In this paper tube methods for reconstructing discontinuous data from noisy and blurred observation data are considered. It is shown that discrete bounded variation (BV)-regularization (commonly used in inverse problems and image processing) and the taut-string algorithm (commonly used in statistics) select reconstructions in a tube. A version of the taut-string algorithm applicable to higher dimensional data is proposed. This formulation results in a bilateral contact problem which can be solved very efficiently using an active set strategy. As a by-product it is shown that the Lagrange multiplier of the active set strategy is an efficient parameter for edge detection.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the deep structure of a scale space image and focused on scale space critical points, points with vanishing gradient with respect to both spatial and scale direction.
Abstract: We investigate the deep structure of a scale space image. We concentrate on scale space critical points—points with vanishing gradient with respect to both spatial and scale direction. We show that these points are always saddle points. They turn out to be extremely useful, since the iso-intensity manifolds through these points provide a scale space hierarchy tree and induce a “pre-segmentation”: a segmentation without a priori knowledge. Furthermore, both these scale space saddles and the so-called catastrophe points form the critical points of the parameterised critical curves—the curves along which the spatial critical points move in scale space. This enables one to localise these two types of special points relatively easily and automatically. Experimental results concerning the hierarchical representation and pre-segmentation are given and correspond to a fair degree with both the mathematical and the intuitive forecast.

Journal ArticleDOI
TL;DR: A 2-D version of Leap-Frog is applied to a non-optimization problem in computer vision, namely the recovery (so far as possible) of an unknown surface from 3 noisy camera images.
Abstract: 1-D Leap-Frog (L. Noakes, J. Austral. Math. Soc. A, Vol. 64, pp. 37–50, 1999) is an iterative scheme for solving a class of nonquadratic optimization problems. In this paper a 2-D version of Leap-Frog is applied to a non-optimization problem in computer vision, namely the recovery (so far as possible) of an unknown surface from 3 noisy camera images. This contrasts with previous work on photometric stereo, in which noise is added to the gradient of the height function rather than to the camera images. Given a suitable initial guess, 2-D Leap-Frog is proved to converge to the maximum-likelihood estimate for the vision problem. Performance is illustrated by examples.

Journal ArticleDOI
TL;DR: This work analyzes the possibility of using multiplicative operator splittings to process images from different perspectives and examines the potential utility of multiple timestep methods combined with AOS schemes as a means to expedite the diffusion process.
Abstract: Operator splitting is a powerful concept used in many diverse fields of applied mathematics for the design of effective numerical schemes. Following the success of the additive operator splitting (AOS) in performing efficient nonlinear diffusion filtering on digital images, we analyze the possibility of using multiplicative operator splittings to process images from different perspectives. We start by examining the potential of using fractional step methods to design a multiplicative operator splitting as an alternative to AOS schemes. By means of a Strang splitting, we attempt to use numerical schemes that are known to be more accurate in linear diffusion processes and apply them on images. Initially we implement the Crank-Nicolson and DuFort-Frankel schemes to diffuse noisy signals in one dimension and devise a simple extrapolation that enables the Crank-Nicolson scheme to be used with high accuracy on these signals. We then combine the Crank-Nicolson in 1D with various multiplicative operator splittings to process images. Based on these ideas we obtain some interesting results. However, from the practical standpoint, due to the computational expenses associated with these schemes and the questionable benefits in applying them to perform nonlinear diffusion filtering when using long timesteps, we conclude that AOS schemes are simple and efficient compared to these alternatives. We then examine the potential utility of using multiple timestep methods combined with AOS schemes as a means to expedite the diffusion process. These methods were developed for molecular dynamics applications and are used efficiently in biomolecular simulations. The idea is to split the forces exerted on atoms into different classes according to their behavior in time, and assign longer timesteps to nonlocal, slowly-varying forces such as the Coulomb and van der Waals interactions, whereas local forces such as bond and angle interactions are treated with smaller timesteps.
Multiple timestep integrators can be derived from the Trotter factorization, a decomposition that bears a strong resemblance to a Strang splitting. Both formulations decompose the time propagator into trilateral products to construct multiplicative operator splittings which are second order in time, with the possibility of extending the factorization to higher order expansions. While a Strang splitting is a decomposition across spatial dimensions, where each dimension is subsequently treated with a fractional step, the multiple timestep method is a decomposition across scales. Thus, multiple timestep methods are a realization of the multiplicative operator splitting idea. For certain nonlinear diffusion coefficients with favorable properties, we show that a simple multiple timestep method can improve the diffusion process.

Journal ArticleDOI
TL;DR: This paper proposes a generalized shock filter model for one-dimensional signal restoration, and proposes an effective numerical scheme to discretize the proposed model, and derives a two-dimensional numerical scheme directly from the one-dimensional model following a space-split strategy.
Abstract: Considerable interest has recently been given to signal processing models based on partial differential equations. Successively improved models based on hyperbolic partial differential equations have been proposed in the literature. These models yield interesting results; however, it would be of great interest to generalize them in order to increase their efficiency. In this paper, we propose a generalized shock filter model for one-dimensional signal restoration. After justifying the existence and uniqueness of the solutions in an adequate vector space, we propose an effective numerical scheme to discretize the proposed model, and derive a two-dimensional numerical scheme directly from the one-dimensional model following a space-split strategy. We then prove a stability result for both schemes. We conclude our study by providing high-quality experimental results for one- and two-dimensional signal enhancement and restoration, and showing the influence the shock speed control has on processing time.
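The generalized model described above extends the classical Osher–Rudin shock filter u_t = −sign(u_xx)|u_x|. As background, a minimal upwind discretization of that baseline (not the paper's scheme; grid, step size, and iteration count are illustrative choices) can be sketched as follows:

```python
import numpy as np

def shock_filter_1d(u, n_iter=200, dt=0.4):
    """Osher-Rudin shock filter u_t = -sign(u_xx)|u_x| with upwind differencing."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        dp = np.roll(u, -1) - u          # forward difference
        dm = u - np.roll(u, 1)           # backward difference
        s = np.sign(dp - dm)             # sign of u_xx
        # one-sided gradient magnitudes for the upwind form u_t + s|u_x| = 0
        gp = np.sqrt(np.maximum(dm, 0)**2 + np.minimum(dp, 0)**2)
        gm = np.sqrt(np.minimum(dm, 0)**2 + np.maximum(dp, 0)**2)
        u -= dt * (np.maximum(s, 0) * gp + np.minimum(s, 0) * gm)
    return u

x = np.linspace(-4, 4, 200)
blurred = 0.5 * (1 + np.tanh(x))         # smoothed step edge
sharp = shock_filter_1d(blurred)         # values collapse toward the plateaus
```

On each side of the inflection point the signal moves toward its nearest plateau, so the blurred edge steepens into a near step without overshooting the original range.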

Journal ArticleDOI
TL;DR: This paper gives a complete categorization of all ambiguous configurations for a 1D (calibrated or uncalibrated) perspective camera, irrespective of the number of points and views, for solving structure and motion problems for 1D retina vision.
Abstract: In this paper we investigate, determine and classify the critical configurations for solving structure and motion problems for 1D retina vision. We give a complete categorization of all ambiguous configurations for a 1D (calibrated or uncalibrated) perspective camera irrespective of the number of points and views. It is well-known that the calibrated and uncalibrated cases are linked through the circular points. This link enables us to solve both cases simultaneously. Another important tool is the duality in exchanging points and cameras and its corresponding Cremona transformation. These concepts are generalized to the 1D case and used for the investigation of ambiguous configurations. Several examples and illustrations are also provided to explain the results and to provide geometrical insight.

Journal ArticleDOI
TL;DR: A rigorous proof of Döhler's results is given and his approach is generalized to the d-dimensional case.
Abstract: Median filters are frequently used in signal analysis because of their smoothing properties and their insensitivity with respect to outliers in the data. Since median filters are nonlinear filters, the tools of linear theory are not applicable to them. One approach to dealing with nonlinear filters consists in investigating their root images (fixed elements, or signals transparent to the filter). Whereas for one-dimensional median filters the set of all root signals can be completely characterized, this is not true for higher dimensional filters. In 1989, Döhler stated a result on certain root images for two-dimensional median filters. Although Döhler's results are true for a wide class of median filters, his arguments were not correct and his assertions do not hold universally. In this paper we give a rigorous proof of Döhler's results. Moreover, his approach is generalized to the d-dimensional case.
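A root signal can be demonstrated in a few lines: any monotone signal passes unchanged through a window-3 median filter, which is the starting point of the 1D root characterization mentioned above (the `median3` helper and the sample signals are our own):

```python
import numpy as np

def median3(x):
    """One pass of a window-3 median filter; the two endpoints are kept fixed."""
    y = x.copy()
    y[1:-1] = np.median(np.c_[x[:-2], x[1:-1], x[2:]], axis=1)
    return y

monotone = np.array([0., 1., 1., 2., 5., 5., 7., 9.])
oscillating = np.array([0., 9., 1., 7., 2., 8., 3., 9.])

# Monotone signals are roots: the filter leaves them unchanged.
assert np.array_equal(median3(monotone), monotone)
# A rapidly oscillating signal is not a root.
assert not np.array_equal(median3(oscillating), oscillating)
```

In one dimension the full root set (locally monotone signals) is known; the paper's contribution concerns the much harder two- and d-dimensional cases.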

Journal ArticleDOI
TL;DR: The implementation of a visual navigation system using a new method for preprocessing and organizing discrete scalar volume data of any dimension on external storage is described; the results of initial experiments with three-dimensional volume data are presented, and future extensions of the DMT organizing technology are outlined.
Abstract: We present a new method for preprocessing and organizing discrete scalar volume data of any dimension on external storage. We describe our implementation of a visual navigation system using our method. The techniques have important applications for out-of-core visualization of volume data sets and image understanding. The applications include extracting isosurfaces in a manner that helps reduce both I/O and disk seek time, a priori topologically correct isosurface simplification (prior to extraction), and producing a visual atlas of all topologically distinct objects in the data set. The preprocessing algorithm computes regions of space that we call topological zone components, so that any isosurface component (contour) is completely contained in a zone component and all contours contained in a zone component are topologically equivalent. The algorithm also constructs a criticality tree that is related to the recently studied contour tree. However, unlike the contour tree, the zones and the criticality tree hierarchically organize the data set. We demonstrate that the techniques work on both irregularly and regularly gridded data, and can be extended to data sets with nonunique values, by the mathematical analysis we call Digital Morse Theory (DMT), so that perturbation of the data set is not required. We present the results of our initial experiments with three dimensional volume data (CT) and describe future extensions of our DMT organizing technology.

Journal ArticleDOI
TL;DR: An imaging, image processing, and image analysis framework for facilitating the separation of flow and chemistry effects on local flame front structures is presented that allows a more detailed investigation of turbulent flame phenomena.
Abstract: We present an imaging, image processing, and image analysis framework for facilitating the separation of flow and chemistry effects on local flame front structures. Image data of combustion processes are obtained by a novel technique that combines simultaneous measurements of distribution evolutions of OH radicals and of instantaneous velocity fields in turbulent flames. High-speed planar laser induced fluorescence (PLIF) of OH radicals is used to track the response of the flame front to the turbulent flow field. Instantaneous velocity field measurements are simultaneously performed using particle image velocimetry (PIV). Image analysis methods are developed to process the experimentally captured data for the quantitative study of turbulence/chemistry interactions. The flame image sequences are smoothed using nonlinear diffusion filtering and flame boundary contours are automatically segmented using active contour models. OH image sequences are analyzed using a curve matching algorithm that incorporates level sets and geodesic path computation to track the propagation of curves representing successive flame contours within a sequence. This makes it possible to calculate local flame front velocities, which are strongly affected by turbulence/chemistry interactions. Since the PIV data resolves the turbulent flow field, the combined technique allows a more detailed investigation of turbulent flame phenomena.

Journal ArticleDOI
TL;DR: The symbolic method of the classical theory of invariants is described, and its results are extended and implemented as an algorithm for computing algebraic invariants of projective, affine, and Euclidean transformations.
Abstract: Combining implicit polynomials and algebraic invariants for representing and recognizing complicated objects proves to be a powerful technique. In this paper, we explore the findings of the classical theory of invariants for the calculation of algebraic invariants of implicit curves and surfaces, a theory largely disregarded in the computer vision community under a shadow of skepticism. Here, the symbolic method of the classical theory is described, and its results are extended and implemented as an algorithm for computing algebraic invariants of projective, affine, and Euclidean transformations. A list of some affine invariants of 4th degree implicit polynomials generated by the proposed algorithm is presented along with the corresponding symbolic representations, and their use in recognizing objects represented by implicit polynomials is illustrated through experiments. An affine invariant fitting algorithm is also proposed and its performance is studied.

Journal ArticleDOI
TL;DR: A new method, based on curve evolution, for the reconstruction of a 3D curve from two different projections is presented; it minimizes an energy functional whose associated PDE is solved using the level set formulation, with existence and uniqueness results.
Abstract: We present a new method, based on curve evolution, for the reconstruction of a 3D curve from two different projections. It is based on the minimization of an energy functional. Following the work on geodesic active contours by Caselles et al. (in Int. Conf. on Pattern Recognition, 1996, Vol. 43, pp. 693–737), we then transform the problem of minimizing the functional into a problem of geodesic computation in a Riemannian space. The Euler-Lagrange equation of this new functional is derived and its associated PDE is solved using the level set formulation, for which existence and uniqueness results are given. We apply the model to the reconstruction of a vessel from a biplane angiography.

Journal ArticleDOI
TL;DR: A new shared weight network architecture that contains both neural network and morphological network functionality is introduced that provides speed-up of two orders of magnitude compared to a Pentium III 500 MHz software implementation.
Abstract: We propose a system for solving pixel-based multi-spectral image classification problems with high throughput pipelined hardware. We introduce a new shared weight network architecture that contains both neural network and morphological network functionality. We then describe its implementation on Reconfigurable Computers. The implementation provides speed-up for our system in two ways. (1) In the optimization of our network, using Evolutionary Algorithms, for new features and data sets of interest. (2) In the application of an optimized network to large image databases, or directly at the sensor as required. We apply our system to 4 feature identification problems of practical interest, and compare its performance to two advanced software systems designed specifically for multi-spectral image classification. We achieve comparable performance in both training and testing. We estimate speed-up of two orders of magnitude compared to a Pentium III 500 MHz software implementation.

Journal ArticleDOI
TL;DR: The mathematics of this Lie group model based on linear vector fields is investigated further and it is proved that these groups are able to transform any generic shape to any other, and shape variabilities may be modelled by elements of a (k − 1)²-dimensional vector space.
Abstract: The problem of modelling variabilities of ensembles of objects leads to the study of the ‘shape’ of point sets and of transformations between point sets. Linear models are only able to describe adequately variabilities between shapes that are sufficiently close, and require the computation of a mean configuration. Olsen and Nielsen (2000) introduced a Lie group model based on linear vector fields and showed that this model could describe a wider range of variabilities than linear models. The purpose of this paper is to investigate the mathematics of this Lie group model further and determine its expressibility. This is a necessary foundation for any future work on inference techniques in this model. Let Σ^k_m denote Kendall's shape space of sets of k points in m-dimensional Euclidean space (k > m): this consists of point sets up to equivalence under rotation, scaling and translation. Not all linear transformations on point sets give well-defined transformations of shapes. However, we show that a subgroup of transformations determined by invertible real matrices of size k − 1 does act on Σ^k_m. For m > 2, this group is maximal, whereas for m = 2, the maximal group consists of the invertible complex matrices. It is proved that these groups are able to transform any generic shape to any other. Moreover, we establish that for k > m + 1 this may be done via one-parameter subgroups. Each one-parameter subgroup is given by exponentiation of an arbitrary (k − 1) × (k − 1) matrix. Shape variabilities may thus be modelled by elements of a (k − 1)²-dimensional vector space.

Journal ArticleDOI
TL;DR: The advantage of this new technique over other known algorithms is that it generates a minimal sequence of not necessarily convex subsets of the elementary square, which means subsets with smaller cardinality are generated and a faster implementation of the corresponding dilations and erosions can be achieved.
Abstract: This paper presents a greedy algorithm for decomposing convex structuring elements as a sequence of Minkowski additions of subsets of the elementary square (i.e., the 3 × 3 square centered at the origin). The technique proposed is very simple and it is based on algebraic and geometric properties of Minkowski additions. Besides its simplicity, the advantage of this new technique over other known algorithms is that it generates a minimal sequence of not necessarily convex subsets of the elementary square. Thus, subsets with smaller cardinality are generated and a faster implementation of the corresponding dilations and erosions can be achieved. Experimental results, proof of correctness and analysis of computational time complexity of the algorithm are also given.
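The payoff of such a decomposition is that one dilation by a large structuring element becomes a chain of cheap dilations by 3 × 3 factors. The greedy algorithm itself is not reproduced here; the sketch below only verifies the underlying algebraic fact on the simplest case, a 5 × 5 square written as the Minkowski sum of the elementary square with itself (point sets are plain Python sets of offsets).

```python
def minkowski(A, B):
    # Minkowski addition of two point sets (sets of (x, y) offsets)
    return {(ax + bx, ay + by) for ax, ay in A for bx, by in B}

def dilate(points, se):
    # binary dilation of a point set by a structuring element
    return minkowski(points, se)

sq3 = {(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)}      # elementary square
sq5 = {(x, y) for x in range(-2, 3) for y in range(-2, 3)}  # 5x5 square

# the 5x5 element decomposes as sq3 (+) sq3 ...
assert minkowski(sq3, sq3) == sq5

# ... so dilating twice by the factor equals dilating once by the whole element
obj = {(0, 0), (3, 0)}
assert dilate(dilate(obj, sq3), sq3) == dilate(obj, sq5)
```

The paper's contribution is finding such factor sequences greedily for arbitrary convex elements, allowing non-convex (hence smaller-cardinality) factors.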

Journal ArticleDOI
TL;DR: A unique intrinsic coordinate system for non-singular bounded quartics is defined that incorporates usable alignment information contained in the polynomial representation, a complete set of geometric invariants, and thus an associated canonical form for a quartic.
Abstract: This paper outlines a new geometric parameterization of 2D curves where the parameterization is in terms of geometric invariants and parameters that determine intrinsic coordinate systems. This new approach handles two fundamental problems: single-computation alignment, and recognition of 2D shapes under Euclidean or affine transformations. The approach is model-based: every shape is first fitted by a quartic represented by a fourth degree 2D polynomial. Based on the decomposition of this equation into three covariant conics, we are able, in both the Euclidean and the affine cases, to define a unique intrinsic coordinate system for non-singular bounded quartics that incorporates usable alignment information contained in the polynomial representation, a complete set of geometric invariants, and thus an associated canonical form for a quartic. This representation permits shape recognition based on 11 Euclidean invariants, or 8 affine invariants. This is illustrated in experiments with real data sets.
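The first step of the pipeline, fitting an implicit fourth-degree polynomial to a point set, can be sketched as an ordinary least-squares problem: stack the 15 monomials x^i y^j with i + j ≤ 4 into a design matrix and take the right singular vector of smallest singular value as the coefficient vector (with ‖a‖ = 1 as normalization). This is a generic implicit-fit sketch, not the paper's specific fitting procedure; the function names are hypothetical.

```python
import numpy as np

def quartic_design(pts):
    # columns are the monomials x^i y^j with i + j <= 4 (15 terms)
    x, y = pts[:, 0], pts[:, 1]
    cols = [x**i * y**j for i in range(5) for j in range(5 - i)]
    return np.stack(cols, axis=1)

def fit_implicit_quartic(pts):
    # least-squares fit of f(x, y) = 0 subject to ||coeffs|| = 1:
    # the right singular vector for the smallest singular value
    M = quartic_design(pts)
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[-1]

# sample an ellipse (a degenerate quartic), fit, and check the residual
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.stack([2.0 * np.cos(t), np.sin(t)], axis=1)
a = fit_implicit_quartic(pts)
residual = np.abs(quartic_design(pts) @ a).max()
```

The ellipse satisfies a quadratic equation, which lies in the span of the quartic monomials, so the fitted polynomial vanishes on the samples up to numerical precision.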

Journal ArticleDOI
TL;DR: A method to locally restore multiple blurred images that yields an extended boundary of the restored image and a method for finding solutions that are compactly (finitely) supported to the multichannel deconvolution equation when the impulse responses are also compactly supported distributions.
Abstract: We introduce a method to locally restore multiple blurred images that yields an extended boundary of the restored image. In particular, we provide a method for finding solutions that are compactly (finitely) supported to the multichannel deconvolution equation when the impulse responses are also compactly supported distributions. This method allows us to broaden the scope of calculating solutions for any particular type of function beyond what was possible in D.F. Walnut (J. Fourier Anal. Appl., Vol. 4, No. 6, pp. 669–709, 1998). During reconstruction, noise from the blurred images is added locally in neighborhoods dependent on the support of the restoration filters. Numerical simulations indicate that these filters may greatly amplify the noise, but with a slight regularization of the filters the results obtained show the effectiveness of this method.
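A minimal sketch of the multichannel idea, under simplifying assumptions (1-D signals, two short FIR blur kernels whose transfer functions share no common zero): find compactly supported restoration filters a1, a2 with a1 ∗ h1 + a2 ∗ h2 = δ by solving a small linear system over convolution matrices. This is a toy Bezout-type computation, not the paper's construction.

```python
import numpy as np

def conv_matrix(h, n):
    # matrix representing convolution with h applied to a length-n filter
    m = len(h) + n - 1
    M = np.zeros((m, n))
    for i, hi in enumerate(h):
        for j in range(n):
            M[i + j, j] = hi
    return M

h1 = np.array([1.0, 1.0])    # blur channel 1: transfer function 1 + z
h2 = np.array([1.0, -1.0])   # blur channel 2: transfer function 1 - z
n = 2                        # support length allowed for each restoration filter

# solve a1 * h1 + a2 * h2 = delta in the least-squares sense
A = np.hstack([conv_matrix(h1, n), conv_matrix(h2, n)])
delta = np.zeros(len(h1) + n - 1)
delta[0] = 1.0
coeffs, *_ = np.linalg.lstsq(A, delta, rcond=None)
a1, a2 = coeffs[:n], coeffs[n:]

recon = np.convolve(a1, h1) + np.convolve(a2, h2)   # should equal delta
```

Because the two channels have no common spectral zero, compactly supported filters exist; with a single blurred channel no such FIR inverse is possible, which is the point of the multichannel setting.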

Journal ArticleDOI
TL;DR: Principal normal vectors are defined for points on the digital boundary of a binary planar object, and digital curvature flow is defined as the motion of boundary points according to these principal normal vectors.
Abstract: In this paper, we define digital curvature flow for planar digital objects. We define the principal normal vectors for points on the digital boundary of a binary planar object. Digital curvature flow is defined as the motion of points on the boundary curve according to principal normal vectors. Therefore, digital curvature flow is a digital version of curvature flow. Furthermore, this transformation could also be considered as discrete curvature flow on isotetic polygons, all edges of which are parallel to the axes of an orthogonal coordinate system.
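For intuition about what curvature flow does to a closed boundary, the sketch below runs a generic discrete curve-shortening step on a polygon, using the discrete Laplacian of the vertices as a stand-in for the curvature vector (principal normal scaled by curvature). This is not the paper's digital (isotetic) construction, only a continuous-flavoured analogue on an axis-aligned square.

```python
import numpy as np

def curvature_flow_step(P, dt=0.1):
    # one explicit step of discrete curve-shortening flow on a closed polygon:
    # p_{i-1} - 2 p_i + p_{i+1} approximates the curvature vector at p_i
    L = np.roll(P, 1, axis=0) - 2 * P + np.roll(P, -1, axis=0)
    return P + dt * L

# vertices of an axis-aligned (isotetic) square, centered at the origin
P = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
Q = P
for _ in range(10):
    Q = curvature_flow_step(Q)
# the square shrinks toward its centroid, as curvature flow predicts
```

The paper's digital version replaces this real-valued update with moves of boundary points on the grid, but the qualitative behaviour (convex shapes shrink) is the same.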