Showing papers on "Kernel (image processing) published in 1993"


Journal ArticleDOI
TL;DR: In this article, a class of vector-space bases is introduced for sparse representation of discretizations of integral operators, where an operator with a smooth, nonoscillatory kernel possessing a finite number of singularities in each row or column is represented in these bases as a sparse matrix, to high precision.
Abstract: A class of vector-space bases is introduced for the sparse representation of discretizations of integral operators. An operator with a smooth, nonoscillatory kernel possessing a finite number of singularities in each row or column is represented in these bases as a sparse matrix, to high precision. A method is presented that employs these bases for the numerical solution of second-kind integral equations in time bounded by $O(n\log ^2 n)$, where n is the number of points in the discretization. Numerical results are given which demonstrate the effectiveness of the approach, and several generalizations and applications of the method are discussed.

378 citations
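
For orientation, a minimal numpy sketch of the compression phenomenon the paper exploits: expressed in an orthonormal wavelet basis (plain Haar here, not the authors' purpose-built bases), the matrix of a smooth-kernel integral operator with a diagonal singularity becomes approximately sparse after thresholding. The kernel K(s, t) = log|s − t|, the grid size, and the threshold are illustrative stand-ins.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix of size n (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        m = H.shape[0]
        top = np.kron(H, [1.0, 1.0])            # averaging rows
        bot = np.kron(np.eye(m), [1.0, -1.0])   # detail rows
        H = np.vstack([top, bot]) / np.sqrt(2.0)
    return H

n = 256
x = (np.arange(n) + 0.5) / n
# Smooth kernel with a diagonal singularity (small fudge avoids log 0).
A = np.log(np.abs(x[:, None] - x[None, :]) + 1e-12)
H = haar_matrix(n)
B = H @ A @ H.T                      # the operator in the wavelet basis
tol = 1e-6 * np.abs(B).max()
B_sparse = np.where(np.abs(B) > tol, B, 0.0)
print("kept %.2f%% of entries" % (100.0 * (B_sparse != 0).mean()))

# The truncated operator still applies accurately:
v = np.random.default_rng(0).standard_normal(n)
err = np.linalg.norm((B_sparse - B) @ v) / np.linalg.norm(B @ v)
print("relative apply error: %.2e" % err)
```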


Journal ArticleDOI
TL;DR: In this article, a maximum-likelihood (ML) algorithm for estimation and correction of phase errors induced in synthetic-aperture-radar (SAR) imagery is proposed.
Abstract: We develop a maximum-likelihood (ML) algorithm for estimation and correction (autofocus) of phase errors induced in synthetic-aperture-radar (SAR) imagery. Here, M pulse vectors in the range-compressed domain are used as input for simultaneously estimating M − 1 phase values across the aperture. The solution involves an eigenvector of the sample covariance matrix of the range-compressed data. The estimator is then used within the basic structure of the phase gradient autofocus (PGA) algorithm, replacing the original phase-estimation kernel. We show that, in practice, the new algorithm provides excellent restorations to defocused SAR imagery, typically in only one or two iterations. The performance of the new phase estimator is demonstrated essentially to achieve the Cramer–Rao lower bound on estimation-error variance for all but small values of target-to-clutter ratio. We also show that for the case in which M is equal to 2, the ML estimator is similar to that of the original PGA method but achieves better results in practice, owing to a bias inherent in the original PGA phase estimation kernel. Finally, we discuss the relationship of these algorithms to the shear-averaging and spatial correlation methods, two other phase-correction techniques that utilize the same phase-estimation kernel but that produce substantially poorer performance because they do not employ several fundamental signal-processing steps that are critical to the algorithms of the PGA class.

160 citations
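
The core of the estimator is easy to sketch: for a dominant-scatterer model the range-compressed data matrix is approximately rank one, so the principal eigenvector of its sample covariance carries the aperture phase error. A hedged numpy illustration; the synthetic scene, noise level, and random-walk phase error are assumptions, and the surrounding PGA machinery (centering, windowing, iteration) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 32, 512            # pulses (aperture positions) x range bins

# Synthetic range-compressed data: one scene row repeated over the aperture,
# corrupted by an unknown phase error and additive noise.
targets = rng.standard_normal((1, N)) + 1j * rng.standard_normal((1, N))
clean = np.repeat(targets, M, axis=0)
phase_err = np.cumsum(0.5 * rng.standard_normal(M))   # random-walk phase error
data = clean * np.exp(1j * phase_err)[:, None]
data += 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# ML estimate: principal eigenvector of the M x M sample covariance matrix.
C = data @ data.conj().T / N
w, V = np.linalg.eigh(C)
v = V[:, -1]                          # eigenvector of the largest eigenvalue
est = np.angle(v)
est -= est[0]                         # the error is determined up to a constant
ref = phase_err - phase_err[0]
print("rms error (rad):",
      np.sqrt(np.mean(np.angle(np.exp(1j * (est - ref))) ** 2)))
```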


Journal ArticleDOI
TL;DR: Two canonical decompositions of mappings between complete lattices are presented, based on the elementary mappings of mathematical morphology (erosions, anti-erosions, dilations, and anti-dilations) and on the concept of morphological connection, which extends the notion of Galois connection.

126 citations
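
The decomposition theory is stated abstractly for complete lattices; as a concrete instance, the sketch below implements flat grayscale erosion and dilation on the lattice of images and checks the adjunction (a Galois-type connection) that underlies the morphological-connection concept. The cross-shaped structuring element and periodic boundary handling are arbitrary choices for illustration.

```python
import numpy as np

def dilation(f, B):
    """Flat grayscale dilation: supremum of f over the structuring element."""
    out = np.full_like(f, f.min())
    for di, dj in B:
        out = np.maximum(out, np.roll(np.roll(f, di, 0), dj, 1))
    return out

def erosion(f, B):
    """Flat grayscale erosion: infimum of f over the reflected element."""
    out = np.full_like(f, f.max())
    for di, dj in B:
        out = np.minimum(out, np.roll(np.roll(f, -di, 0), -dj, 1))
    return out

B = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]    # 4-connected cross
f = np.random.default_rng(2).integers(0, 256, (8, 8))
g = np.random.default_rng(3).integers(0, 256, (8, 8))

# Adjunction: dilation(g) <= f everywhere iff g <= erosion(f) everywhere.
assert np.all(dilation(g, B) <= f) == np.all(g <= erosion(f, B))
# In particular the opening (dilation of the erosion) never exceeds f:
assert np.all(dilation(erosion(f, B), B) <= f)
```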


Journal ArticleDOI
TL;DR: This article gives an axiomatic derivation of how a multiscale representation of derivative approximations can be constructed from a discrete signal, so that it possesses an algebraic structure similar to that possessed by the derivatives of the traditional scale-space representation in the continuous domain.
Abstract: This article shows how discrete derivative approximations can be defined so that scale-space properties hold exactly also in the discrete domain. Starting from a set of natural requirements on the first processing stages of a visual system, the visual front end, it gives an axiomatic derivation of how a multiscale representation of derivative approximations can be constructed from a discrete signal, so that it possesses an algebraic structure similar to that possessed by the derivatives of the traditional scale-space representation in the continuous domain. A family of kernels is derived that constitute discrete analogues to the continuous Gaussian derivatives. The representation has theoretical advantages over other discretizations of the scale-space theory in the sense that operators that commute before discretization commute after discretization. Some computational implications of this are that derivative approximations can be computed directly from smoothed data and that this will give exactly the same result as convolution with the corresponding derivative approximation kernel. Moreover, a number of normalization conditions are automatically satisfied. The proposed methodology leads to a scheme of computations of multiscale low-level feature extraction that is conceptually very simple and consists of four basic steps: (i) large support convolution smoothing, (ii) small support difference computations, (iii) point operations for computing differential geometric entities, and (iv) nearest-neighbour operations for feature detection. Applications demonstrate how the proposed scheme can be used for edge detection and junction detection based on derivatives up to order three.

117 citations
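
A sketch of the construction, assuming scipy: the discrete analogue of the Gaussian is T(n; t) = e^(−t) I_n(t), with I_n the modified Bessel function, and the commutativity property quoted above (differencing smoothed data equals convolving with the derivative-approximation kernel) can be checked numerically. The scale, test signal, and truncation radius are arbitrary.

```python
import numpy as np
from scipy.special import ive   # exponentially scaled modified Bessel I_n

def discrete_gaussian(t, radius=None):
    """Discrete analogue of the Gaussian: T(n; t) = exp(-t) * I_n(t)."""
    if radius is None:
        radius = int(np.ceil(4 * np.sqrt(t))) + 1
    n = np.arange(-radius, radius + 1)
    return ive(n, t)            # ive(n, t) = exp(-t) * I_n(t)

t = 4.0                              # scale parameter (variance)
x = np.zeros(101); x[30:70] = 1.0    # a 1-D step-like test signal
T = discrete_gaussian(t)
smoothed = np.convolve(x, T, mode="same")

# Central difference applied to smoothed data ...
dx = np.convolve(smoothed, [0.5, 0.0, -0.5], mode="same")
# ... equals convolution with the corresponding derivative kernel
# (one of the commutativity properties derived in the article):
dkernel = np.convolve(T, [0.5, 0.0, -0.5], mode="full")
dx2 = np.convolve(x, dkernel, mode="same")
assert np.allclose(dx[5:-5], dx2[5:-5])   # identical away from the borders
```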


Journal ArticleDOI
TL;DR: The authors present a kernel design technique based on using the Radon transform of the modulus of the ambiguity function of the signal for determination of angles and distances of radially distributed contents of the autoterms in the ambiguity domain, which effectively reduces the cross-terms and noise for linear FM signals.
Abstract: The authors present a kernel design technique based on using the Radon transform of the modulus of the ambiguity function of the signal for determination of angles and distances of radially distributed contents of the autoterms in the ambiguity domain. The proposed kernel effectively reduces the cross-terms and noise for linear FM signals. The result is a tool for high-resolution time-frequency representation of nonstationary, primarily linear FM signals.

102 citations
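
A rough numpy illustration of the geometry the kernel design relies on (not the paper's design procedure itself): the modulus of the ambiguity function of a linear FM signal concentrates along a line through the origin of the ambiguity plane, which a Radon transform can then locate. The chirp parameters and the asymmetric ambiguity-function definition used here are assumptions.

```python
import numpy as np

fs = 1000.0
t = np.arange(512) / fs
sig = np.exp(2j * np.pi * (50.0 * t + 200.0 * t ** 2))  # linear FM, 400 Hz/s

# Asymmetric discrete ambiguity function: for each lag tau, an FFT over t of
# the instantaneous autocorrelation s(t + tau) * conj(s(t)).  (np.roll wraps
# at the edges, which slightly contaminates large lags; fine for a sketch.)
lags = np.arange(-128, 129)
A = np.array([np.fft.fft(np.roll(sig, -m) * np.conj(sig)) for m in lags])

# For a linear FM signal |A| concentrates along a line through the origin of
# the (tau, theta) plane; the paper finds such lines with a Radon transform
# of |A| and builds a kernel that keeps the auto-term ridge while rejecting
# cross-terms, which lie away from the origin.
freqs = np.fft.fftfreq(sig.size, 1.0 / fs)
ridge = freqs[np.argmax(np.abs(A), axis=1)]
print(np.polyfit(lags / fs, ridge, 1)[0])   # slope ~ chirp rate (400 Hz/s)
```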


Proceedings ArticleDOI
27 Apr 1993
TL;DR: An image registration algorithm that achieves subpixel accuracy using a frequency-domain technique is proposed; it is efficient compared with conventional approaches based on interpolation or correlation in the spatial or frequency domain.
Abstract: An image registration algorithm which achieves subpixel accuracy using a frequency-domain technique is proposed. This approach is efficient compared with the conventional approaches based on interpolation or correlations in the spatial/frequency domain. This approach can achieve subpixel accuracy registration even when images contain aliasing errors due to undersampling. The FFTs (fast Fourier transforms) of the images have computational complexities smaller than the interpolation or convolution computations by orders of magnitude. The accuracy of the proposed approach is demonstrated through computer simulations for different types of images.

75 citations
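
A minimal sketch of frequency-domain registration in the spirit of the paper, assuming the classic phase-correlation formulation: the normalized cross-power spectrum of the two images yields a correlation surface whose peak gives the shift, refined here by a parabolic fit around the peak (the paper's exact subpixel estimator may differ).

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the translation of b relative to a by phase correlation,
    with a parabolic fit around the correlation peak for subpixel accuracy."""
    R = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    R /= np.abs(R) + 1e-12                    # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(R))
    i, j = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(cm, c0, cp):                   # 1-D parabolic peak interpolation
        d = cm - 2.0 * c0 + cp
        return 0.0 if d == 0.0 else 0.5 * (cm - cp) / d

    n0, n1 = corr.shape
    di = refine(corr[(i - 1) % n0, j], corr[i, j], corr[(i + 1) % n0, j])
    dj = refine(corr[i, (j - 1) % n1], corr[i, j], corr[i, (j + 1) % n1])
    wrap = lambda v, n: (v + n / 2) % n - n / 2   # map to a signed shift
    return wrap(i + di, n0), wrap(j + dj, n1)

rng = np.random.default_rng(0)
a = rng.standard_normal((128, 128))
b = np.roll(a, (5, -3), axis=(0, 1))          # b is a shifted by (5, -3)
print(phase_correlate(a, b))                  # ~ (5.0, -3.0)
```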


Patent
26 Nov 1993
TL;DR: In this article, a two-dimensional median filter with a diamond-shaped five-point kernel is used to remove speckle artifacts from an ultrasound image.
Abstract: A method for reducing speckle artifact in an ultrasound image using a two-dimensional median filter having a diamond-shaped five-point kernel. The entire pixel image data is passed through the filter in a manner such that the center point of the kernel is effectively stepped down each range vector in sequence. The magnitudes of the pixel data at each of the five points in the kernel are compared and the value which has the middle magnitude is adopted as a new pixel value, which is substituted for the old pixel value at the center point. After a new filtered vector has been formed from the new pixel values produced at successive center points by stepping down one acoustic vector, the kernel is shifted by one vector and stepped down range again. This process continues through the entire set of vectors until a new set of filtered vectors is formed. This filter will remove speckle holes on the order of one pixel in size while preserving good edge definition.

68 citations
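
The filter itself is easy to state in numpy; the sketch below applies the diamond-shaped five-point median to interior pixels and shows a one-pixel speckle hole being removed while a straight edge survives. The vector-by-vector stepping order described in the patent is replaced by a vectorized equivalent.

```python
import numpy as np

def diamond_median(img):
    """Five-point median filter with a diamond (plus-shaped) kernel: each
    interior pixel is replaced by the median of itself and its four axial
    neighbours; border pixels are left unchanged in this sketch."""
    out = img.copy()
    stack = np.stack([
        img[1:-1, 1:-1],      # center
        img[:-2, 1:-1],       # up
        img[2:, 1:-1],        # down
        img[1:-1, :-2],       # left
        img[1:-1, 2:],        # right
    ])
    out[1:-1, 1:-1] = np.median(stack, axis=0)
    return out

# A one-pixel speckle hole is removed while the vertical edge is preserved:
img = np.zeros((7, 7)); img[:, 3:] = 10.0; img[2, 5] = 0.0
print(diamond_median(img))
```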


Journal ArticleDOI
TL;DR: The convolution/superposition method of dose calculation has the potential to become the preferred technique for radiotherapy treatment planning; however, under the extreme conditions of a short SSD, a large field size, and a high incident photon energy, the parallel kernel approximation results in discrepancies that may be clinically unacceptable.
Abstract: The convolution/superposition method of dose calculation has the potential to become the preferred technique for radiotherapy treatment planning. When this approach is used for therapeutic x-ray beams, the dose spread kernels are usually aligned parallel to the central axis of the incident beam. While this reduces the computational burden, it is more rigorous to tilt the kernel axis to align it with the diverging beam rays that define the incident direction of primary photons. We have assessed the validity of the parallel kernel approximation by computing dose distributions using parallel and tilted kernels for monoenergetic photons of 2, 6, and 10 MeV; source-to-surface distances (SSDs) of 50, 80, and 100 cm; and for field sizes of 5 × 5, 15 × 15, and 30 × 30 cm². Over most of the irradiated volume, the parallel kernel approximation yields results that differ from tilted kernel calculations by 3% or less for SSDs greater than 80 cm. Under extreme conditions of a short SSD, a large field size and high incident photon energy, the parallel kernel approximation results in discrepancies that may be clinically unacceptable. For 10-MeV photons, we have observed that the parallel kernel approximation can overestimate the dose by up to 4.4% of the maximum on the central axis for a field size of 30 × 30 cm² applied with an SSD of 50 cm. Very localized dose underestimations of up to 27% of the maximum dose occurred in the penumbral region of a 30 × 30 cm² field of 10-MeV photons applied with an SSD of 50 cm.

61 citations


Patent
09 Mar 1993
TL;DR: In this paper, the authors present a system and method for evaluating the performance of a computer program, or software performance evaluation tool, consisting of an analyzer module for analyzing a binary image of a program and making the modifications necessary to measure performance; a kernel for measuring and storing runtime performance information; and a post processor for processing said runtime performance information, correlating it with static information, and displaying the resulting information to a user.
Abstract: A system and method for evaluating the performance of a computer program, or software performance evaluation tool. The system comprises an analyzer module for analyzing a binary image of said program and making modifications necessary to measure performance; a kernel for measuring and storing runtime performance information; and a post processor for processing said runtime performance information, correlating it with static information, and displaying resulting information to a user. The analyzer determines boundaries of a plurality of regions and routines within the binary image and inserts breakpoints at the boundaries. The analyzer also examines the binary image of a computer program instruction by instruction in order to instrument and bind the binary image to the kernel module, and uses an abstraction of a target computer in order to instrument the computer program.

54 citations


Journal ArticleDOI
TL;DR: A synthesis-by-analysis model for texture replication or simulation that can closely replicate a given textured image or produce another image that, although distinct from the original, has the same general visual characteristics and the same first- and second-order gray-level statistics as the original image.
Abstract: A synthesis-by-analysis model for texture replication or simulation is presented. This model can closely replicate a given textured image or produce another image that, although distinct from the original, has the same general visual characteristics and the same first- and second-order gray-level statistics as the original image. The proposed texture synthesis algorithm contains three distinct components: a moving-average (MA) filter, a filter excitation function, and a gray-level histogram. The analysis portion of the texture synthesis algorithm derives the three from a given image. The synthesis portion convolves the MA filter kernel with the excitation function, adds noise, and modifies the histogram of the result. The advantages of this texture model over others include conceptually and computationally simple and robust parameter estimation, inherent stability, parsimony in the number of parameters, and synthesis through convolution. The authors describe a procedure for deriving the correct MA kernel using a signal enhancement algorithm, demonstrate the effectiveness of the model by using it to mimic several diverse textured images, discuss its applicability to the problem of infrared background simulation, and include detailed algorithms for the implementation of the model.
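
A sketch of the synthesis portion only (the analysis portion, which estimates the MA kernel from a given image by signal enhancement, is the paper's contribution and is not reproduced): convolve an assumed MA kernel with a white-noise excitation, add noise, and impose a first-order histogram by rank ordering. The kernel and target histogram below are stand-ins.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

# Assumed MA kernel (separable binomial shape, purely illustrative).
k = np.outer([1.0, 4.0, 6.0, 4.0, 1.0], [1.0, 2.0, 1.0])
k /= k.sum()
excitation = rng.standard_normal((128, 128))              # filter excitation
textured = convolve2d(excitation, k, mode="same", boundary="wrap")
textured += 0.05 * rng.standard_normal(textured.shape)    # additive noise

# Impose a target first-order histogram by rank ordering (exact histogram
# specification); a real use would take the histogram of the given image.
target = rng.integers(0, 256, textured.size).astype(float)
out = np.empty_like(textured.ravel())
out[np.argsort(textured.ravel())] = np.sort(target)
out = out.reshape(textured.shape)     # synthetic texture, desired histogram
```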

Proceedings ArticleDOI
01 Dec 1993
TL;DR: This paper presents a hard real-time kernel, called HARTIK, specifically designed to handle robotics applications with predictable response time, and shows the functionality of the kernel by presenting a concrete example of a robot system that has to explore unknown objects by visual and force feedback.
Abstract: This paper presents a hard real-time kernel, called HARTIK, specifically designed to handle robotics applications with predictable response time. The main relevant features of this kernel include: direct specification of time constraints, such as periods and deadlines; preemptive scheduling; coexistence of hard, soft, and non-real-time tasks; separation between time constraints and importance; deadline tolerance; dynamic guarantee of critical tasks; and graceful degradation in overload conditions. The functionality of the kernel is then shown by presenting a concrete example of a robot system that has to explore unknown objects by visual and force feedback.

Proceedings ArticleDOI
12 May 1993
TL;DR: In this paper, an image processing algorithm was developed for machine recognition of weevils and/or weevil damage in film x-ray images of wheat kernels [8 bits, (0.25 mm)2/pixel].
Abstract: An image processing algorithm has been developed for machine recognition of weevils and/or weevil damage in film x-ray images of wheat kernels [8 bits, (0.25 mm)²/pixel]. The 8 bit grey scale image is converted to a binary image of interior edges and lines using a Laplacian mask, zero threshold, and background removal. In undamaged kernels the predominant feature of this image is a line representing the central crease of the kernel. In insect-damaged kernels this feature is disrupted and additional edges or lines are seen at angles to the crease. The algorithm uses convolution masks to look for intersections (45 or 90 degree angles with 4 or 5 pixel length sides) at 8 orientations. Recognition varies with insect stage; at least 50% of infested kernels are machine recognized by the 4th instar (26 - 28 days). This is comparable to 50% recognition by humans at 25.5 days for images of similar resolution. False positive responses are limited to 0.5%.
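
A hedged sketch of the preprocessing stage described above (the intersection-detection masks and the film-specific background removal are not reproduced): Laplacian mask, zero threshold, and removal of background pixels, yielding the binary interior-edge image. The stand-in image and background mask are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def interior_edge_image(gray, background):
    """Binary image of interior edges/lines: Laplacian mask, zero threshold,
    then background removal, as in the preprocessing described above."""
    lap = convolve2d(gray.astype(float), laplacian, mode="same")
    edges = lap > 0.0                 # zero threshold
    return edges & ~background        # keep responses inside the kernel only

gray = np.random.default_rng(0).integers(0, 256, (64, 64))  # stand-in image
background = gray < 40                                      # stand-in mask
binary = interior_edge_image(gray, background)
```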

Patent
27 Jul 1993
TL;DR: In this paper, a dedicated inverse discrete cosine transform (IDCT) processor comprising a controller and an array of accumulators is provided for performing n×n inverse discrete cosine transforms.
Abstract: In a digital image processing system, a dedicated inverse discrete cosine transform (IDCT) processor comprising a controller and an array of accumulators is provided for performing n×n inverse discrete cosine transforms. The controller controls the computation of the output vector using a forward mapping procedure. The controller causes the k unique kernel values of the reconstruction kernel of each non-zero transform domain coefficient to be selectively accumulated by the array of accumulators, where k equals at most (n² + 2n)/8. The array of accumulators comprises accumulator blocks sharing an input and a control line designed to exploit the symmetry characteristics of the reconstruction kernels. Each accumulator block is designed to perform a limited number of distinct operations. The accumulator blocks are logically grouped. p symmetry selection bits and q operation configuration selection bits, where 2^q equals n⁴/2^(p−2), are provided on the control line for selecting the appropriate symmetry and operation configuration. As a result, the IDCT can be performed efficiently while keeping the implementation cost of the IDCT processor relatively low.
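
A small numpy sketch of the forward-mapping idea (ignoring the patent's hardware-level symmetry folding that reduces each kernel to its k unique values): the output block is accumulated as the sum of each non-zero coefficient's reconstruction kernel, the outer product of two 1-D DCT basis vectors, and agrees with a library IDCT. Block size and coefficient values are arbitrary.

```python
import numpy as np
from scipy.fft import idctn

n = 8

def basis(u):
    """Orthonormal 1-D DCT basis vector for frequency index u."""
    c = np.cos(np.pi * u * (2 * np.arange(n) + 1) / (2.0 * n))
    return c * (np.sqrt(1.0 / n) if u == 0 else np.sqrt(2.0 / n))

coeffs = np.zeros((n, n))
coeffs[0, 0], coeffs[1, 2], coeffs[3, 1] = 50.0, -12.0, 7.0   # sparse block

# Forward mapping: accumulate each non-zero coefficient's reconstruction
# kernel; zero coefficients cost nothing, which is the point of the scheme.
block = np.zeros((n, n))
for u, v in zip(*np.nonzero(coeffs)):
    block += coeffs[u, v] * np.outer(basis(u), basis(v))

assert np.allclose(block, idctn(coeffs, norm="ortho"))
```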

Journal ArticleDOI
TL;DR: In this paper, the reachability of a second-order integro-differential equation is proved based on a new kind of unique continuation property, and the main significance of the result is that there is no restriction on the size of the memory kernel.
Abstract: Reachability for a second-order integro-differential equation is proved. The method is based upon a new kind of unique continuation property. The main significance of the result is that there is no restriction on the size of the memory kernel.

Journal ArticleDOI
TL;DR: A Hough transform line finding algorithm in which the voting kernel is a smooth function of differences in both line parameters, decided in terms of a hypothesis testing approach and adjusted to give optimal results is considered.
Abstract: In this paper we consider a Hough transform line finding algorithm in which the voting kernel is a smooth function of differences in both line parameters. The shape of the voting kernel is decided in terms of a hypothesis testing approach, and the shape is adjusted to give optimal results. We show that this new kernel is robust to changes in the distribution of the underlying noise and the implementation is very fast, taking typically 2-3 s on a Sparc 2 workstation for a 256 × 256 image.
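
A minimal sketch of soft voting, assuming a Gaussian kernel in the rho direction only (the paper derives the kernel shape in both line parameters from a hypothesis test and tunes it for optimality): each edge point spreads its vote over neighbouring rho bins instead of incrementing a single accumulator cell.

```python
import numpy as np

def soft_hough(points, n_theta=180, n_rho=200, extent=256.0, sigma=5.0):
    """Hough line accumulator with Gaussian-smoothed votes along rho."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = extent * np.sqrt(2.0)
    rho_bins = np.linspace(-rho_max, rho_max, n_rho)
    drho = rho_bins[1] - rho_bins[0]
    acc = np.zeros((n_theta, n_rho))
    rows = np.arange(n_theta)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # one rho per theta
        centers = (rho - rho_bins[0]) / drho            # fractional bin index
        for k in range(-3, 4):                          # +-3 bins of support
            idx = np.round(centers).astype(int) + k
            ok = (idx >= 0) & (idx < n_rho)
            w = np.exp(-0.5 * ((idx - centers) * drho / sigma) ** 2)
            acc[rows[ok], idx[ok]] += w[ok]
    return acc, thetas, rho_bins

pts = [(t, 0.5 * t + 20.0) for t in range(0, 200, 2)]   # points on a line
acc, thetas, rhos = soft_hough(pts)
ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
print("theta = %.2f rad, rho = %.1f" % (thetas[ti], rhos[ri]))
```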

Patent
06 Sep 1993
TL;DR: In this paper, the kernels are spread and settle in grooves in the belt so as to be oriented in essentially the same direction, and then the neural network determines to which of a plurality of predetermined classes each kernel belongs.
Abstract: In automatic evaluation of cereal kernels or like granular products handled in bulk, the kernels are conveyed on a vibrating belt conveyor (15). Owing to the vibrations, the kernels are spread and settle in grooves (14) in the belt so as to be oriented in essentially the same direction. A video camera (40) produces digital images of all the kernels on the belt. The kernels are identified in the images, and for each kernel input signals are produced to a neural network based on the picture element values for the picture elements representing the kernel. Then the neural network determines to which of a plurality of predetermined classes each kernel belongs.

02 Jan 1993
TL;DR: The problem of finding a smallest set of edges whose addition makes an input undirected graph satisfy a given connectivity requirement is investigated; the thesis also covers the implementation of PRAM-based parallel graph algorithms on a massively parallel SIMD computer.
Abstract: Graphs play an important role in modeling the underlying structure of many real-world problems. In this thesis, we investigate several interesting graph problems. The first part of this thesis focuses on the problem of finding a smallest set of edges whose addition makes an input undirected graph satisfy a given connectivity requirement. This is a fundamental graph-theoretic problem that has a wide variety of applications in designing reliable networks and database systems. We have obtained efficient sequential and parallel algorithms for finding smallest augmentations to satisfy 3-edge-connectivity, 4-edge-connectivity, biconnectivity, triconnectivity, and four-connectivity. Our parallel algorithms are developed on the PRAM model. Our approach in obtaining these results is to first construct a data structure that describes all essential information needed to augment the input graph, e.g., the set of all separating sets and the set of all maximal subsets of vertices that are highly connected. Based on this data structure, we obtain a lower bound on the number of edges that need to be added and prove that this lower bound can be always reduced by one by properly adding an edge. The second part of the thesis focuses on the implementation of PRAM-based efficient parallel graph algorithms on a massively parallel SIMD computer. This work was performed in two phases. In the first phase, we implemented a set of parallel graph algorithms with the constraint that the size of the input cannot be larger than the number of physical processors. For this, we first built a kernel which consists of commonly used routines. Then we implemented efficient parallel graph algorithms by calling routines in the kernel. In the second phase, we addressed and solved the issue of allocating virtual processors in our programs. Under our current implementation scheme, there is no bound on the number of virtual processors used in the programs as long as there is enough memory to store all the data required during the computation. The performance data obtained from extensive testing suggests that the extra overhead for simulating virtual processors is moderate and the performance of our code tracks theoretical predictions quite well.

Journal ArticleDOI
TL;DR: In this paper, a set of valid filters is characterized in Fourier space by a general filter equation, and all the solutions to this filter equation may be found by the application of additive or multiplicative corrections to arbitrary functions.
Abstract: Different filters may be applied for accurate reconstruction of a three-dimensional (3-D) image from two-dimensional (2-D) parallel projections with the use of filtered back projection. The set of such valid filters is characterized in Fourier space by a general filter equation. We demonstrate that all the solutions to this filter equation may be found by the application of additive or multiplicative corrections to arbitrary functions. One may find the object-space convolution kernel by taking the inverse 2-D Fourier transform of a valid filter. This kernel contains singularities, and the problem of deriving explicit expressions by means of limits of regular functions is discussed in detail. Another approach to 3-D reconstruction is to form planar integrals from the projection data and to use 3-D Radon inversion procedures. Such an approach is shown here to be equivalent to filtered backprojection, where the corresponding filter is closely related to the weighting scheme used in the formation of the planar integrals.

Proceedings ArticleDOI
27 Apr 1993
TL;DR: The authors explain how to use symmetric convolution to implement a multiband filter bank for finite-length data that restricts the number of samples in the subbands but still gives perfect reconstruction.
Abstract: The authors describe symmetric convolution and its use for the nonexpansive implementation of multirate filter banks for images. Symmetric convolution is a formalized approach to convolving symmetric FIR (finite impulse response) filters with symmetrically extended data. It is efficient because the discrete sine and cosine transforms can be used to perform the convolution as a transform-domain multiplication. The authors explain how to use symmetric convolution to implement a multiband filter bank for finite-length data that restricts the number of samples in the subbands but still gives perfect reconstruction.
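
A self-checking numpy/scipy sketch of the central identity, under the half-sample symmetric extension implicit in the DCT-II: convolving the symmetrically extended signal with a zero-phase symmetric FIR filter equals pointwise multiplication of the signal's DCT-II coefficients by the filter's real frequency response. Filter length and signal are arbitrary.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
N, K = 64, 4
x = rng.standard_normal(N)
half = rng.standard_normal(K + 1)        # h[0..K]; the filter is h[n] = h[-n]

# Direct route: convolve one period of the half-sample symmetric extension.
xe = np.concatenate([x, x[::-1]])        # period 2N, symmetric extension
h = np.concatenate([half[:0:-1], half])  # full symmetric kernel, length 2K+1
y_direct = np.array([
    sum(h[m + K] * xe[(n - m) % (2 * N)] for m in range(-K, K + 1))
    for n in range(N)
])

# Transform route: multiplication in the DCT-II domain by the filter's real
# frequency response H(w_k), with w_k = pi * k / N.
w = np.pi * np.arange(N) / N
H = half[0] + 2.0 * np.cos(np.outer(np.arange(1, K + 1), w)).T @ half[1:]
y_dct = idct(dct(x, type=2, norm="ortho") * H, type=2, norm="ortho")

assert np.allclose(y_direct, y_dct)      # nonexpansive: N samples in, N out
```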

Journal ArticleDOI
TL;DR: In this article, a modified version of Beelen's algorithm is used to find a column reduced polynomial matrix unimodularly equivalent to a given polynomial matrix.

Proceedings ArticleDOI
18 Oct 1993
TL;DR: In this article, a method for detecting bioacoustic transients is presented for detecting bowhead whale (Balaena mysticetus) calls recorded in a noisy Arctic environment, where the desired transient-an animal call-is modeled as a sequence of frequency sweeps.
Abstract: A method is presented for detecting bioacoustic transients. The desired transient, an animal call, is modeled as a sequence of frequency sweeps. Sweeps are detected by convolving a spectrogram of the signal with a kernel designed for the call of interest; convolution output is high when the call of interest is present and low at other times. The method is tested on a set of bowhead whale (Balaena mysticetus) calls recorded in a noisy Arctic environment. The method detects bowhead calls well, performing better than a matched filter and a hidden Markov model on the task. Strengths and weaknesses of the method are discussed.
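
A rough illustration of the detection scheme with stand-in parameters (the paper designs its kernel for real bowhead calls): a spectrogram is correlated with a zero-mean kernel whose positive diagonal matches a rising frequency track, so the output peaks where an upsweep is present. Correlation is used here, i.e., convolution with the flipped kernel.

```python
import numpy as np
from scipy.signal import spectrogram, correlate2d

fs = 2000.0
t = np.arange(int(2.0 * fs)) / fs
rng = np.random.default_rng(0)
sig = rng.standard_normal(t.size)                     # noisy background
# A 0.4 s upsweep from 100 to 400 Hz stands in for the call of interest.
call = np.where((t > 0.8) & (t < 1.2),
                np.sin(2 * np.pi * (100.0 * (t - 0.8) + 375.0 * (t - 0.8) ** 2)),
                0.0)
sig += 3.0 * call

f, tt, S = spectrogram(sig, fs=fs, nperseg=128, noverlap=96)

# Zero-mean kernel: positive weights along a rising diagonal in the
# (frequency, time) plane, negative elsewhere, so flat noise cancels.
kernel = np.full((9, 9), -1.0 / 72.0)
kernel[np.arange(9), np.arange(9)] = 1.0 / 9.0
score = correlate2d(S, kernel, mode="same")
print("detection peak near t =", tt[np.argmax(score.max(axis=0))], "s")
```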

Proceedings ArticleDOI
03 May 1993
TL;DR: A compact, discrete, capacitive, 2D Gaussian convolution network is presented by solving the heat equation in implicit finite difference form and is especially adapted to real-time analog VLSI vision chips because of its simplicity and compactness.
Abstract: A compact, discrete, capacitive, 2D Gaussian convolution network is presented by solving the heat equation in implicit finite difference form. A scalable 2D Gaussian convolution can be performed by controlling the iteration number. This capacitive network is especially adapted to real-time analog VLSI vision chips because of its simplicity and compactness.
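
The network solves the heat equation, so its effect can be sketched in software: each implicit finite-difference step solves a tridiagonal system, and iterating approximates Gaussian convolution whose variance grows with the iteration number, as the abstract describes. This 1-D numpy/scipy analogue (Neumann boundaries, arbitrary step size) stands in for the capacitive hardware; a 2-D version can apply it along rows, then columns.

```python
import numpy as np
from scipy.linalg import solve_banded

def implicit_heat_smoothing(u, lam=0.5, iters=10):
    """Iterate the implicit heat step (I - lam * D2) u_next = u.
    The result approaches Gaussian smoothing of variance ~ 2*lam*iters."""
    n = u.size
    ab = np.zeros((3, n))
    ab[0, 1:] = -lam                  # superdiagonal
    ab[1, :] = 1.0 + 2.0 * lam        # main diagonal
    ab[2, :-1] = -lam                 # subdiagonal
    ab[1, 0] = ab[1, -1] = 1.0 + lam  # reflecting (Neumann) boundaries
    for _ in range(iters):
        u = solve_banded((1, 1), ab, u)
    return u

x = np.zeros(101); x[50] = 1.0        # impulse input
y = implicit_heat_smoothing(x, lam=0.5, iters=20)
# Mass is conserved (~1) and the variance is ~ 2 * 0.5 * 20 = 20:
print(np.sum(y), np.sum((np.arange(101) - 50) ** 2 * y))
```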

Journal ArticleDOI
TL;DR: The principles that make it possible to recover Cartesian grids in the two different geometries are illustrated and some preliminary results are reported.
Abstract: SPARK, an acronym for ‘SPAtial Reconstruction Kernel’, is the nucleus of a software library being developed for the three-dimensional (3-D) reconstruction of objects observed by the electron microscope. A unifying concept is used: the Fourier transform, known in several central sections, is resampled to obtain a 3-D Cartesian grid, which is inverted by a fast Fourier transform. This technique is used for both single-axis tilting (of 2-D periodic layers or of isolated objects) and for the random conical-tilt technique. The principles that make it possible to recover Cartesian grids in the two different geometries are illustrated and some preliminary results are reported. SPARK resamples the Cartesian grids with the use of a fast and efficient algorithm of Shannon interpolation developed by the authors. Compared to back-projection techniques the method shows a considerable improvement in execution time with no sacrifice in accuracy; it therefore allows the effects of a variety of parameters in a given reconstruction to be scrutinized in a reasonable time. Some new possibilities and future extensions of the library are briefly outlined.

Journal ArticleDOI
TL;DR: A theorem for calculating the local histograms of a gray-scale input image by means of convolution of input-image binary slices with a binary kernel is presented and proved.
Abstract: A theorem for calculating the local histograms of a gray-scale input image by means of convolution of input-image binary slices with a binary kernel is presented and proved. The calculation of the local histograms of a gray-scale image for all resolution cells and its arbitrary neighborhoods is optically implemented in a shadow-casting correlator. The choice of different rank-order values from the local histograms can lead to a wide spectrum of nonlinear filtration algorithms.
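
The theorem translates directly into code: convolving each binary gray-level slice of the image with the binary kernel yields the local histogram at every pixel, from which any rank-order filter follows. A numpy sketch with an arbitrary 3 × 3 window (border pixels see truncated windows under the zero padding used here):

```python
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import median_filter

def local_histograms(img, kernel, levels=256):
    """Local histogram at every pixel: convolve each binary gray-level
    slice of the image with the binary kernel, as in the theorem."""
    hists = np.empty((levels,) + img.shape)
    for g in range(levels):
        hists[g] = convolve2d((img == g).astype(float), kernel, mode="same")
    return hists   # hists[g, i, j] = count of level g in the window at (i, j)

img = np.random.default_rng(0).integers(0, 16, (32, 32))
kernel = np.ones((3, 3))
h = local_histograms(img, kernel, levels=16)

# Rank-order filtering drops out directly: the median is the first gray
# level at which the cumulative local histogram reaches half the count.
cum = np.cumsum(h, axis=0)
median = np.argmax(cum >= (kernel.sum() + 1) // 2, axis=0)
ref = median_filter(img, size=3)           # library check, interior pixels
assert np.all(median[1:-1, 1:-1] == ref[1:-1, 1:-1])
```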

Journal ArticleDOI
TL;DR: This article presents a powerful new strategy, called the kernel strategy, for studying fragments in the context of questions concerned with fixed point properties, and shows how it was used to answer a number of open questions.
Abstract: Barendregt defines combinatory logic as an equational system satisfying the combinators S and K with ((Sx)y)z=(xz)(yz) and (Kx)y=x; the set consisting of S and K provides a basis for all of combinatory logic. Rather than studying all of the logic, logicians often focus on fragments of the logic, subsets whose basis is obtained by replacing S or K or both by other combinators. In this article, we present a powerful new strategy, called the kernel strategy, for studying fragments in the context of questions concerned with fixed point properties. Interest in such properties rests in part with their relation to normal forms and paradoxes. We show how the kernel strategy was used to answer a number of open questions, offering abundant evidence that the availability of the kernel strategy marks a singular advance for automated reasoning. In all of our experiments with this strategy applied by an automated reasoning program, the rate of success has been impressively high and the CPU time to obtain the desired information startlingly small. For each fragment we study, we use the kernel strategy to attempt to determine whether the strong or the weak fixed point property holds. Where A is a given fragment with basis B, the strong fixed point property holds for A if and only if there exists a combinator y such that, for all combinators x, yx=x(yx), where y is expressed purely in terms of elements of B. The weak fixed point property holds for A if and only if for all combinators x there exists a combinator y such that y=xy, where y is expressed purely in terms of the elements of B and the combinator x. Because the use of the kernel strategy is so effective in addressing questions focusing on either fixed point property, its formulation marks an important advance for combinatory logic. Perhaps of especial interest to logicians is an infinite class of infinite sets of tightly coupled fixed point combinators (presented here), whose unexpected discovery resulted directly from the application of the strategy by an automated reasoning program. We also offer various open questions for possible research and focus on an automated reasoning program and input files that may prove useful for such research.

Proceedings ArticleDOI
01 Nov 1993
TL;DR: A novel image filtering algorithm and its extension to multiscale processing along with the LIP model based Sobel operator for edge detection are discussed in detail.
Abstract: This paper presents a summary of operations of the logarithmic image processing (LIP) model and its applications. The LIP model is a mathematical framework which provides a set of generalised versions of addition, subtraction, multiplication, convolution and so on, for signal processing. First, we briefly describe the LIP model and point out its distinctive properties for image processing. Then we summarize the current applications of the LIP model. A novel image filtering algorithm and its extension to multiscale processing, along with the LIP model based Sobel operator for edge detection, are discussed in detail. Finally, we suggest further research areas for the LIP model.
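
The basic LIP operations are compact enough to state directly; the sketch below uses the standard Jourlin-Pinoli formulas with gray-tone range M and checks the isomorphism phi that maps LIP addition to ordinary addition. The paper's filtering algorithm and LIP Sobel operator are not reproduced here.

```python
import numpy as np

M = 256.0   # gray-tone range of the LIP model

def lip_add(f, g):            # generalised addition
    return f + g - f * g / M

def lip_sub(f, g):            # generalised subtraction (requires g < M)
    return M * (f - g) / (M - g)

def lip_scalar(lam, f):       # generalised scalar multiplication
    return M - M * (1.0 - f / M) ** lam

# The model is isomorphic to ordinary arithmetic via
# phi(f) = -M * ln(1 - f / M):
phi = lambda f: -M * np.log(1.0 - f / M)
f, g = 100.0, 50.0
assert np.isclose(phi(lip_add(f, g)), phi(f) + phi(g))
assert np.isclose(phi(lip_scalar(2.5, f)), 2.5 * phi(f))
```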

Journal ArticleDOI
TL;DR: This work presents an exact deblurring method for the discrete domain where linear convolution is replaced by matrix multiplication, the Gaussian kernel is replaced by a highly structured Toeplitz matrix, and the deblurring kernel is replaced by the inverse of this blur matrix.
Abstract: The problem of removing degradation and blur is common in signal and image processing. While Gaussian convolution is often the model for the blur, no exact deblurring techniques for the Gaussian kernel have been given. Previously, a Gaussian deblurring kernel in the continuous domain has been presented. We present an exact deblurring method for the discrete domain where linear convolution is replaced by matrix multiplication, the Gaussian kernel is replaced by a highly structured Toeplitz matrix, and the deblurring kernel is replaced by the inverse of this blur matrix. To bypass numerical errors, the inverse is derived analytically and a closed-form solution is presented. In particular, the matrix is decomposed into the product of triangular matrices and a diagonal matrix, where numerically ill-conditioned elements are gathered and that allows for a direct accurate handling of numerical errors. This exact inverse is effective in degradation problems characterized by low noise and high representational accuracy and has potential application in the many areas where a Gaussian point spread function is relevant.
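
A hedged numerical illustration of the setting: the paper's point is an analytic closed-form inverse obtained from a triangular-times-diagonal decomposition, whereas the sketch below simply solves against the Toeplitz blur matrix to show that exact inversion recovers the signal in the low-noise regime. Size and sigma are arbitrary; with noise present, this direct inversion amplifies it.

```python
import numpy as np
from scipy.linalg import toeplitz

n, sigma = 64, 1.0
g = np.exp(-0.5 * (np.arange(n) / sigma) ** 2)
g /= g[0] + 2.0 * g[1:].sum()       # normalize the (two-sided) kernel
B = toeplitz(g)                      # structured Toeplitz blur matrix

x = np.zeros(n); x[20:40] = 1.0      # original signal
y = B @ x                            # blurred observation (noise-free)

x_hat = np.linalg.solve(B, y)        # deblur with the exact inverse
print(np.max(np.abs(x_hat - x)))     # ~ machine precision
```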

Book ChapterDOI
01 Jan 1993
TL;DR: Owing to the class representation, the new scheme has more than a 4–8 speed-up ratio for the general applications and can be unified to one operation, reversion, in the elementary equation.
Abstract: In this paper the conjugate transformation of the hexagonal grid is described and its elementary equation is defined Two strategies are used to extend a matrix morphology into the conjugate transformation First, the conjugate classification represents 128 structuring elements of the kernel form of the hexagonal grid to a tree of six levels Each node of a given level is a class of structuring elements with a calculable index Two conjugate nodes of the same level with the same index can be distinguished by two conjugate sets of 2 * n classes respectively Second, by considering each element which has six neighbours as a state for any Boolean matrix of the hexagonal grid, it can be transformed into an index matrix relevant to a specific level of the classification From the index matrix, two sets of Boolean matrices (feature matrices) can be constructed with the same number of classes on the level Depending on simpler algebraic properties of feature matrices, dilation and erosion can be unified to one operation, reversion, in the elementary equation The reversion has a self-duality property with a space of 22*n functions in which only a total of 2n+1 functions are dilation and erosion In addition, several images generated by applying morphological operations using an implemented prototype of the conjugate transformation and their running complexities compared with a matrix morphology, are illustrated Owing to the class representation, the new scheme has more than a 4–8 speed-up ratio for the general applications