
Showing papers on "Kernel (image processing)" published in 1992


Journal ArticleDOI
TL;DR: The gamma neural model as mentioned in this paper is a neural network architecture for processing temporal patterns: only current signal values are presented to the neural net, which adapts its own internal memory to store the past.
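
A minimal sketch of the gamma memory recursion that such an architecture builds on (the tap depth K and memory parameter mu are illustrative; in the gamma model the network adapts the memory parameter itself):

```python
import numpy as np

def gamma_memory(u, K=4, mu=0.3):
    # Gamma memory: a cascade of identical leaky integrators.
    # Only the current input u[n] is presented; taps x_1..x_K store the past:
    #   x_k[n] = (1 - mu) * x_k[n-1] + mu * x_{k-1}[n-1]
    x = np.zeros((K + 1, len(u)))
    x[0] = u
    for n in range(1, len(u)):
        for k in range(1, K + 1):
            x[k, n] = (1 - mu) * x[k, n - 1] + mu * x[k - 1, n - 1]
    return x[1:]  # tap signals that would feed the network's weights
```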

240 citations


Journal ArticleDOI
19 May 1992
TL;DR: It is shown how to exploit two symmetries in edge-detection kernels to reduce storage and computational costs and to generate endstop- and junction-tuned filters simultaneously for free.
Abstract: Families of kernels that are useful in a variety of early vision algorithms may be obtained by rotating and scaling in a continuum a ‘template’ kernel. This multi-scale, multi-orientation family may be approximated by linear interpolation of a discrete finite set of appropriate ‘basis’ kernels. A scheme for generating such a basis together with the appropriate interpolation weights is described. Unlike previous schemes by Perona and by Simoncelli et al., it is guaranteed to generate the most parsimonious one. Additionally, it is shown how to exploit two symmetries in edge-detection kernels to reduce storage and computational costs and to generate endstop- and junction-tuned filters simultaneously for free.
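
A hedged numpy/scipy sketch of the basic construction: sample rotated copies of a template, then let the SVD supply a parsimonious basis and per-angle interpolation weights (the paper's scheme also covers scaling and the symmetry savings, which this sketch omits; all names are illustrative):

```python
import numpy as np
from scipy import ndimage

def deformable_basis(template, n_angles=32, rank=6):
    # Stack rotated copies of the 'template' kernel as rows of a matrix,
    # then truncate its SVD: the right singular vectors are the 'basis'
    # kernels and the scaled left singular vectors are the weights.
    thetas = np.linspace(0.0, 360.0, n_angles, endpoint=False)
    stack = np.array([ndimage.rotate(template, t, reshape=False).ravel()
                      for t in thetas])
    U, s, Vt = np.linalg.svd(stack, full_matrices=False)
    basis = Vt[:rank].reshape(rank, *template.shape)
    weights = U[:, :rank] * s[:rank]        # one row of weights per angle
    return thetas, basis, weights

# A rotated kernel is then approximated by a weighted sum of basis kernels:
#   np.tensordot(weights[i], basis, axes=1)  ~  rotate(template, thetas[i])
```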

219 citations


Journal ArticleDOI
Matt P. Wand1
TL;DR: In this paper, it was shown that the numerical evaluation and minimization of both asymptotic and exact mean integrated squared error can be set up in a matrix algebraic formulation which requires no numerical integration.
Abstract: Kernel estimators for d-dimensional data are usually parametrized by either a single smoothing parameter, or d smoothing parameters corresponding to each of the coordinate directions. A generalization of each of these parameterizations is to use a d × d matrix which allows smoothing in arbitrary directions. We demonstrate that, at this level of generality, the usual error approximations and their numerical minimization can be done quite simply using matrix algebra. The minimization formulas have the practical importance that they can be applied to data-driven selection of the smoothing parameters using a "plug-in" approach. Particular attention is paid to the special case of kernel estimation of multivariate normal mixture densities, where it is shown that the numerical evaluation and minimization of both asymptotic and exact mean integrated squared error can be set up in a matrix algebraic formulation which requires no numerical integration. This provides a flexible family of multivariate smoothing problems...
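
For concreteness, a small sketch of a Gaussian kernel estimator driven by a full d × d bandwidth matrix H, the parameterization the paper analyzes (grid and data shapes are illustrative):

```python
import numpy as np

def kde_full_bandwidth(x_grid, data, H):
    # Gaussian kernel density estimate with a full bandwidth matrix H,
    # which allows smoothing in arbitrary directions: the kernel placed
    # at each data point is the N(0, H) density.
    m, d = x_grid.shape
    n = len(data)
    Hinv = np.linalg.inv(H)
    norm = 1.0 / (n * np.sqrt((2 * np.pi) ** d * np.linalg.det(H)))
    diffs = x_grid[:, None, :] - data[None, :, :]            # (m, n, d)
    quad = np.einsum('mnd,de,mne->mn', diffs, Hinv, diffs)   # Mahalanobis terms
    return norm * np.exp(-0.5 * quad).sum(axis=1)
```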

42 citations


Book
01 Oct 1992

41 citations


Journal ArticleDOI
TL;DR: Two ways of improving Burt and Adelson's Laplacian pyramid, a technique developed for image compression, are described; a theoretical relationship between the present approach and the family of quadrature mirror filter image pyramids is derived.
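
As context, a minimal sketch of the standard Burt-Adelson pyramid that the paper improves on (one common reduce/expand variant; the paper's two improvements are not reproduced here):

```python
import numpy as np
from scipy import ndimage

def laplacian_pyramid(img, levels=4, sigma=1.0):
    # Each level stores the detail lost between an image and its blurred,
    # downsampled copy; the final entry is the low-pass residual.
    pyramid, current = [], img.astype(float)
    for _ in range(levels):
        blurred = ndimage.gaussian_filter(current, sigma)
        down = blurred[::2, ::2]
        up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)
        up = ndimage.gaussian_filter(up[:current.shape[0], :current.shape[1]], sigma)
        pyramid.append(current - up)      # band-pass detail level
        current = down
    pyramid.append(current)
    return pyramid
```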

41 citations


Patent
28 Sep 1992
TL;DR: In this article, an entropic kernel is used to recursively analyze the gray level space for candidate objects and to validate the presence of valid objects by comparing the candidate object attribute values to a defined set of valid object attribute values contained in a driver.
Abstract: The present invention relates to image analysis methods and systems for identifying objects in a background by generating a description, which may be either a histogram or a co-occurrence matrix, of the gray level space of the image, using an entropic kernel to recursively analyze the gray level space for candidate objects, and validating the presence of valid objects by comparing the candidate object attribute values to a defined set of valid object attribute values contained in a driver. The present invention includes recursive, iterative and parallel processing methods. The methods may be used in a wide variety of industrial inspection techniques, including colony counting and the identification of discrete features in carpets and of pigment elements embedded in a polymer.
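
The patent's entropic kernel is not spelled out in the abstract; as a rough illustration of entropy-driven analysis of a gray-level histogram, a classic maximum-entropy threshold selection (a Kapur-style stand-in, not the patented method) looks like this:

```python
import numpy as np

def max_entropy_threshold(hist):
    # Pick the gray level that maximizes the summed entropies of the two
    # histogram partitions it induces (background vs. candidate objects).
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, len(p) - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h = -(p0[p0 > 0] * np.log(p0[p0 > 0])).sum() \
            - (p1[p1 > 0] * np.log(p1[p1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

A recursive variant in the spirit of the claims would reapply such a selection to each resulting partition of the gray level space.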

34 citations


Journal ArticleDOI
TL;DR: For the numerical approximation of convolution integrals and integral equations, quadrature methods are considered whose weights are constructed with the help of the Laplace transform of the convolution kernel and a linear multistep method as discussed by the authors.
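
A compact sketch of convolution quadrature in its backward-Euler flavor, where the weights are the Taylor coefficients of K(δ(ζ)/h) with δ(ζ) = 1 − ζ, recovered by FFT from samples on a small circle (function and parameter names are illustrative):

```python
import numpy as np

def cq_weights(K, h, N):
    # K: Laplace transform of the convolution kernel; h: step size;
    # returns weights w[0..N] with sum_j w[j] g((n-j)h) ~ (k * g)(nh).
    L = N + 1
    rho = 1e-15 ** (1.0 / (2 * L))    # contour radius balancing the error terms
    zeta = rho * np.exp(2j * np.pi * np.arange(L) / L)
    vals = np.array([K((1.0 - z) / h) for z in zeta])
    w = np.fft.fft(vals) / L          # trapezoidal rule on the circle
    return (w * rho ** -np.arange(L)).real
```

Higher-order variants replace δ(ζ) with the generating polynomial of another A-stable linear multistep method, e.g. BDF2.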

29 citations


Patent
10 Jan 1992
TL;DR: In this article, a method for identifying wheat cultivars is described, which employs an imaging device such as a video camera and a computer to acquire and process images of grain samples and to identify the particular type or cultivar of the wheat by the images obtained thereby.
Abstract: A method for identifying wheat cultivars is provided which employs an imaging device such as a video camera and a computer to acquire and process images of grain samples and to identify the particular type or cultivar of the wheat by the images obtained thereby. The method involves the steps of acquiring the image of the kernel and germ for each grain, processing the image into digital format, storing data in the computer corresponding to the edge of the kernel and germ, determining the image characteristics of the kernel and germ, and comparing the image characteristics for the sample to a known standard. By accumulating data for a number of grains, calculating statistical information for the sample, and comparing to a known standard, the particular cultivar of the sample may be identified.
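
A toy version of the feature-and-compare step, with simple shape descriptors standing in for the patent's kernel and germ image characteristics (the actual measured attributes are not specified in the abstract):

```python
import numpy as np
from scipy import ndimage

def kernel_features(mask):
    # Area, a perimeter estimate, and elongation from second moments of a
    # binary grain-kernel mask.
    m = mask.astype(bool)
    area = m.sum()
    perimeter = (m & ~ndimage.binary_erosion(m)).sum()
    ys, xs = np.nonzero(m)
    evals = np.linalg.eigvalsh(np.cov(np.vstack([xs, ys])))
    elongation = np.sqrt(evals[1] / max(evals[0], 1e-12))
    return np.array([area, perimeter, elongation])

def classify(features, reference_means):
    # Nearest known-standard cultivar in feature space.
    names = list(reference_means)
    dists = [np.linalg.norm(features - reference_means[n]) for n in names]
    return names[int(np.argmin(dists))]
```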

28 citations


Journal ArticleDOI
TL;DR: The method presented allows faster calculation of any time-frequency distribution with a kernel that can be formulated in the time-lag plane; matrix manipulations, combined with parallel processing, improve the processing speed to allow real-time calculation of the Choi-Williams distribution.
Abstract: The method presented allows faster calculation of any time-frequency distribution with a kernel that can be formulated in the time-lag plane. Specific examples are the Wigner and Choi-Williams distributions. The Choi-Williams distribution (CWD) uses an exponential kernel in the generalized class of bilinear time-frequency distributions to achieve a reduction in the cross-term components of the distribution. Matrix manipulations provide an intuitive approach and, when combined with parallel processing, improve the processing speed to allow real-time calculations of the CWD. The use of an outer product matrix with a weighting matrix is particularly useful when evaluating different weighting parameters. For any given signal, the outer product matrix needs to be calculated just once. The various weighting matrices can be stored and used with any signal when needed. Parallel processing architectures allow implementation of the algorithm with speeds that are appropriate for real-time, running window calculations.
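
A rough numpy sketch of the outer-product formulation (normalization, windowing, and the paper's parallel implementation are omitted; this is one common discretization, not the authors' exact algorithm):

```python
import numpy as np

def choi_williams(x, sigma=1.0):
    # The instantaneous autocorrelation K[n, tau] = x[n+tau] * conj(x[n-tau])
    # is read off a single outer product; the exponential kernel then
    # suppresses cross-terms in the ambiguity (Doppler-lag) domain.
    x = np.asarray(x, dtype=complex)
    N = len(x)
    outer = np.outer(x, x.conj())               # computed once per signal
    K = np.zeros((N, N), dtype=complex)
    for tau in range(-(N // 2), N // 2):
        for n in range(N):
            if 0 <= n + tau < N and 0 <= n - tau < N:
                K[n, tau % N] = outer[n + tau, n - tau]
    A = np.fft.fft(K, axis=0)                   # time -> Doppler
    theta = 2 * np.pi * np.fft.fftfreq(N)
    tau = np.fft.fftfreq(N) * N                 # signed integer lags
    Phi = np.exp(-np.outer(theta, tau) ** 2 / sigma)   # CW exponential kernel
    smoothed = np.fft.ifft(A * Phi, axis=0)     # back to time-lag
    return np.fft.fft(smoothed, axis=1).real    # lag -> frequency
```

As the abstract notes, the weighting matrix Phi can be precomputed and stored for each sigma and reused across signals.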

27 citations


Journal ArticleDOI
25 Oct 1992
TL;DR: In this paper, two reprojection methods are studied with respect to their resolution loss and sampling errors; the residual sum of squares (RSS) of the difference between the reprojected sinogram and a reference sinogram corresponding to infinite sampling is used to assess the sampling errors.
Abstract: Two reprojection methods are studied with respect to their resolution loss and sampling errors. The methods are area-weighted convolution (AWC) and Gaussian pixel convolution (GPC). The modulation transfer function (MTF) is used to evaluate the resolution loss, and the residual sum of squares (RSS) of the difference between the reprojected sinogram and a reference sinogram corresponding to infinite sampling is used to assess the sampling errors. The resolution loss is found to be determined by the reconstruction filter, the linear interpolation in the backprojection, and the convolution kernel. Sampling errors are found to be angle-dependent, and the angular dependency is more pronounced for the GPC. To avoid significant sampling errors, the width of the convolution kernel needs to be two times larger than the pixel distance. A large sub-binning size of the projection array leads to interpolation error, which is more pronounced for AWC than for GPC. For the GPC method, with pixel size twice the pixel distance, the sampling error can be greatly reduced. The sampling error can be reduced without additional resolution loss by using a smaller pixel distance.
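
A simplified sketch of a Gaussian-pixel-convolution style reprojector (geometry and normalization are illustrative; the paper's AWC variant and sub-binning are not reproduced):

```python
import numpy as np

def gpc_reproject(img, angles, n_bins, kernel_sigma):
    # Each pixel's value is spread onto the detector with a Gaussian kernel
    # centered at the projection of the pixel center for each view angle.
    ny, nx = img.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    xs = xs - (nx - 1) / 2.0
    ys = ys - (ny - 1) / 2.0
    bins = np.arange(n_bins) - (n_bins - 1) / 2.0
    sino = np.zeros((len(angles), n_bins))
    vals = img.ravel()
    for a, th in enumerate(angles):
        t = (xs * np.cos(th) + ys * np.sin(th)).ravel()   # detector coordinate
        w = np.exp(-0.5 * ((bins[None, :] - t[:, None]) / kernel_sigma) ** 2)
        w /= w.sum(axis=1, keepdims=True) + 1e-12         # per-pixel footprint
        sino[a] = vals @ w
    return sino
```

The paper's finding that the kernel width should be about twice the pixel distance corresponds here to the choice of kernel_sigma relative to the bin spacing.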

26 citations


Proceedings ArticleDOI
TL;DR: A theoretical comparison of gradient based edge detectors and morphological edge detectors is performed and empirical evaluation of the performance indicates that the blur-min operator is superior to the gradient based operator.
Abstract: Edge detection is the most fundamental step in vision algorithms. A number of edge detectors have been discussed in the computer vision literature; examples of classic edge detectors include the Marr-Hildreth edge operator, the facet edge operator, and the Canny edge operator. Edge detection using morphological techniques is attractive because it can be efficiently implemented in near real-time machine vision systems that have special hardware support. However, little performance characterization of edge detectors has been done, and what exists consists mainly of plotting empirical curves of performance. Quantitative performance evaluation of edge detectors was first performed by Abdou and Pratt. The goal of this paper is a theoretical comparison of gradient-based edge detectors and morphological edge detectors. By assuming that an ideal edge is corrupted with additive noise, we derive theoretical expressions for the probability of misdetection (the probability of labeling a true edge pixel as a nonedge pixel in the output). Further, we derive theoretical expressions for the probability of false alarm (the probability of labeling a nonedge pixel as an output edge pixel) by assuming that the input to the operator is a region of flat graytone intensity corrupted with additive Gaussian noise of zero mean and variance σ². Even though the blurring step in the morphological operator introduces correlation in the additive noise, we make the approximation that the output samples after blurring are i.i.d. Gaussian random variables with zero mean and variance σ²/M, where M is the window size of the blurring kernel. The false alarm probabilities obtained under this approximation can be shown to be upper bounds on the false alarm probabilities computed without it. The theory indicates that the blur-min operator is clearly superior when a 3 × 3 window size is used; since we only have an upper bound for the false alarm probability, the theory alone is inadequate to confirm this superiority. Empirical evaluation of the performance indicates that the blur-min operator is superior to the gradient-based operator, and evaluation of the edge detectors on real images also indicates its superiority. Application of hysteresis linking after edge detection significantly reduces the misdetection rate, but increases the false alarm rate.
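
For reference, a small implementation of the blur-minimum morphological edge operator discussed above (box blur and structuring-element sizes are illustrative):

```python
import numpy as np
from scipy import ndimage

def blur_min_edge(img, blur_size=3, se_size=3):
    # Blur first, then take the pixelwise minimum of the two one-sided
    # morphological gradients (dilation residue and erosion residue).
    blurred = ndimage.uniform_filter(img.astype(float), blur_size)
    dil = ndimage.grey_dilation(blurred, size=se_size)
    ero = ndimage.grey_erosion(blurred, size=se_size)
    return np.minimum(dil - blurred, blurred - ero)
```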

Journal ArticleDOI
TL;DR: The calculated absorbed dose distributions were found to agree well with Monte Carlo calculated data using the EGS4 program, and a density dependent correction factor applied to the central kernel value has been derived.
Abstract: For photon fields, separate dose distribution kernels have been generated for charged particles produced in the first interaction, for single and multiple scattered photons, including bremsstrahlung and annihilation. These kernels are applied in absorbed dose calculations for radiotherapy treatment planning using a convolution technique. The vast amount of kernel data required for 3D calculations can be accurately generated out of a small subset, due to rotational symmetry. Due to the discrete sampling of both the irradiated object and the dose distribution kernels, application of the Fano theorem and O'Connor's scaling theorem is not possible without difficulties. The scaling process has been thoroughly investigated and a density dependent correction factor applied to the central kernel value has been derived. The calculated absorbed dose distributions were found to agree well with Monte Carlo calculated data using the EGS4 program.
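
In sketch form, the convolution step amounts to superposing the energy-deposition kernel at every interaction site, weighted by the local TERMA (homogeneous medium assumed; the paper's density-dependent correction to the central kernel value is not reproduced):

```python
import numpy as np
from scipy.signal import fftconvolve

def dose_from_terma(terma, kernel):
    # 3D convolution/superposition dose calculation, invariant-kernel case.
    return fftconvolve(terma, kernel, mode='same')
```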

Book ChapterDOI
19 Oct 1992
TL;DR: D-oids are the dynamic counterpart of many-sorted algebras, in the sense that they can model dynamic structures as much as algebras can model static data types.
Abstract: We present a new formal structure, called d-oid, for modelling systems of evolving objects. In our view, d-oids are the dynamic counterpart of many-sorted algebras, in the sense that they can model dynamic structures as much as algebras can model static data types. D-oids are a basis for giving syntax and semantics for kernel languages for defining methods; these languages are built over what we call method expressions, like applicative kernel languages are built over terms. Moreover some hints are given towards modelling classes and inheritance.

Journal ArticleDOI
TL;DR: In this paper, a nonlinear reconstruction algorithm is proposed to recover source depth information from scattered radiation; it does not degrade spatial resolution in the imaging plane and provides depth resolution with a standard deviation of 4 cm for point sources.
Abstract: A new algorithm for three-dimensional image reconstruction in nuclear medicine in which scattered radiation rather than multiple projected images is used for determination of the source depth within the body is proposed. Images taken from numerous energy windows are combined for the reconstruction of the source distribution in the body. In the first paper of this series Gunter et al. [IEEE Trans. Nucl. Sci. 37, 1300 (1990)] examined simple linear algorithms for recovering source depth information from scattered radiation. These linear algorithms were unsuccessful because the scattering process produces little signal in the low-energy images at high spatial frequencies. As a result, the reconstructed source distributions exhibited nodal patterns and blurring. The scattering kernel that was measured and reported in the first paper is now examined more carefully. The singular-value decomposition of the kernel matrices is used to break the reconstruction problem into distinct channels that relate energy spectra to source depth distributions. Based on this analysis, a new nonlinear reconstruction algorithm that avoids the earlier problems is proposed. The new algorithm does not degrade spatial resolution in the imaging plane and provides depth resolution with a standard deviation of 4 cm for point sources without requiring any camera motion. The algorithm also provides significant attenuation correction and, therefore, improved quantitation of the source distribution.
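
An illustrative truncated-SVD version of the channel decomposition (the paper's reconstruction is nonlinear; this linear sketch only shows how the SVD separates energy-spectrum channels from depth profiles):

```python
import numpy as np

def depth_from_spectra(K, spectra, rcond=1e-2):
    # K: (n_energy, n_depth) scattering kernel matrix;
    # spectra: (n_energy, n_pixels) multi-window image data.
    # Channels with small singular values are dropped, since they are the
    # ones that produced the nodal patterns in the linear algorithms.
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    keep = s > rcond * s[0]
    coeffs = (U[:, keep].T @ spectra) / s[keep, None]
    return Vt[keep].T @ coeffs                  # (n_depth, n_pixels)
```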

Journal ArticleDOI
TL;DR: The optimum block size in terms of the number of computations is proved by investigating the relationship between block size and computational efficiency when a linear convolution is implemented by a series of circular convolutions.
Abstract: The relationship between block size and computational efficiency is investigated when a linear convolution is implemented by a series of circular convolutions. The optimum block size in terms of the number of computations is proved. A synthetic aperture radar signal processing example is presented to illustrate the conclusions.
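
A compact overlap-save sketch of the construction being optimized, where an nfft-point circular convolution yields nfft − len(h) + 1 valid output samples per block:

```python
import numpy as np

def overlap_save(x, h, nfft):
    # Linear convolution realized as a series of circular (FFT) convolutions.
    M = len(h)
    step = nfft - M + 1
    H = np.fft.fft(h, nfft)
    x_pad = np.concatenate([np.zeros(M - 1), x, np.zeros(nfft)])
    out = []
    for start in range(0, len(x) + M - 1, step):
        seg = x_pad[start:start + nfft]
        y = np.fft.ifft(np.fft.fft(seg, nfft) * H)
        out.append(y[M - 1:])            # discard the wrapped-around samples
    return np.concatenate(out)[:len(x) + M - 1].real
```

The block size nfft trades per-block FFT cost against the number of blocks, which is exactly the balance whose optimum the paper derives.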

Journal ArticleDOI
TL;DR: An alternate approach is presented that uses a complex-valued kernel with odd symmetry to perform morphological operations; it is found to be robust in the presence of noise and spatial nonuniformities in the image.
Abstract: Morphological transformations are typically performed on binary images by convolution with a binary kernel, which is followed by a threshold. We present an alternate approach that uses a complex-valued kernel with odd symmetry to perform these morphological operations. The complex-valued kernel increases the information-processing ability of the processor with no increase in system complexity. One advantage is that the processor operates on all constant regions of a gray-level image in parallel. A scale-space representation of this processor is obtained by varying the size of the kernel continuously through a range of scales. By using redundant information in the scale representation, this system is found to be robust in the presence of noise and spatial nonuniformities in the image. An optical system to perform morphological filtering based on this system is presented.
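
The conventional scheme the paper contrasts with can be written in two lines per operation; the paper's complex-valued odd-symmetric kernel is not reproduced here:

```python
import numpy as np
from scipy import ndimage

def erode(img, se):
    # A pixel survives erosion only if every structuring-element position
    # lands on foreground (convolution with a binary kernel + threshold).
    hits = ndimage.correlate(img.astype(int), se, mode='constant')
    return hits == se.sum()

def dilate(img, se):
    # Dilation: at least one (reflected) structuring-element position hits.
    hits = ndimage.correlate(img.astype(int), se[::-1, ::-1], mode='constant')
    return hits >= 1
```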

Proceedings ArticleDOI
30 Aug 1992
TL;DR: The authors show that a smooth voting kernel that is a function of differences both in orientation and in distance from the line can give results superior to those of the standard Hough transform algorithm.
Abstract: It is advantageous to use edge orientation information from an edge detector when trying to find lines in an image using a Hough transform algorithm. Such methods have to balance two competing problems: (1) reducing interference between different line segments in the image; and (2) allowing for increased orientation uncertainties, particularly near junctions of lines. To counter both problems, the authors show that a smooth voting kernel, which is a function of differences both in orientation and in distance from the line, can give superior results. They compare these results with those obtained using the standard Hough implementation as well as an implementation with a kernel which varies smoothly with separation but remains independent of differences in orientation.
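
A plausible form of such a smooth voting kernel, Gaussian in both orientation difference and perpendicular distance (the paper's exact kernel shape and parameters are not given in the abstract):

```python
import numpy as np

def kernel_hough(points, orientations, thetas, rhos, sigma_theta, sigma_rho):
    # Each edge pixel votes into the whole (theta, rho) accumulator, weighted
    # by a Gaussian in orientation difference and in distance from the line.
    # Edge orientations are assumed given as line-normal directions.
    acc = np.zeros((len(thetas), len(rhos)))
    for (x, y), phi in zip(points, orientations):
        d_theta = np.angle(np.exp(1j * (thetas - phi)))   # wrapped difference
        w_theta = np.exp(-0.5 * (d_theta / sigma_theta) ** 2)
        r = x * np.cos(thetas) + y * np.sin(thetas)       # exact rho per theta
        w_rho = np.exp(-0.5 * ((rhos[None, :] - r[:, None]) / sigma_rho) ** 2)
        acc += w_theta[:, None] * w_rho
    return acc
```

Setting sigma_theta very large recovers the orientation-independent smooth kernel the authors compare against; shrinking it toward zero approaches the standard orientation-gated Hough vote.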

Journal ArticleDOI
TL;DR: It is shown that the SWVD half-kernel AR covariance poles produce precise frequency estimates for monocomponent signals, independent of data length and phase, just like real and analytic AR estimators, but with the advantage of excellent performance in noise.

Journal ArticleDOI
02 Jun 1992
TL;DR: In this article, a fast two-dimensional inverse discrete cosine transform (2D-IDCT) algorithm is developed on the basis of the DCT computational kernel matrices.
Abstract: The authors propose a new fast two-dimensional inverse discrete cosine transform (2D-IDCT) algorithm which is developed on the basis of the DCT computational kernel matrices. With the symmetrical properties of the kernel matrices, one can greatly reduce the number of multiplications. By carefully grouping the DCT coefficients, the computational complexity can be further reduced, for example, if the 2D-DCT coefficients are coded and transmitted according to the zig-zag scanning order. The proposed fast 2D-IDCT algorithm, which takes advantage of zero-valued DCT elements, is suitable for high-throughput HDTV receiving systems. The DCT is generally recognized as the best way to encode digital picture information, because it results in virtually the same energy compaction performance as the Karhunen-Loeve transform (KLT). The DCT has the advantage of lower computational complexity than the KLT.
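
The kernel-matrix view of the 2D-IDCT is the separable identity X = Cᵀ Y C, which the fast algorithm then restructures; a plain (unoptimized) rendering for square blocks:

```python
import numpy as np

def dct_matrix(N=8):
    # Orthonormal DCT-II kernel matrix C: row k samples cos(pi*(2n+1)*k/(2N)).
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C

def idct2(coeffs):
    # Separable 2-D inverse DCT via the kernel matrices: X = C.T @ Y @ C.
    C = dct_matrix(coeffs.shape[0])
    return C.T @ coeffs @ C
```

The symmetry and zero-coefficient savings the authors exploit would skip entire rows and columns of these products when the zig-zag-ordered coefficients are mostly zero.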

Proceedings ArticleDOI
01 Nov 1992
TL;DR: A new method for edge detection by combining the Sobel operator and the Laplacian of Gaussian (LoG) operator is presented, which yields promising results in precise and blur free edge detection, and good stability versus image noise.
Abstract: This paper presents a new method for edge detection by combining the Sobel operator and the Lapla-cian of Gaussian (LoG) operator. The underlying idea is to combine the advantages of both operatorsand eliminate their disadvantages. Different methods of combining the two operators are considered. Onemethod yields promising results in precise and blur free edge detection, and good stability versus imagenoise. A method for designing the LoG kernel is also presented. 1. INTRODUCTION Many theories and algorithms applied to high-level vision tasks assume that the pictures are already segmented, i.e., the desired features (lines, edges etc.) of the picture are already given. Hence, edge detection is an important segmentation task in Image Processing and Computer Vision.We can distinguish between first derivative operators (Gradient operators) and second derivativeoperators (Laplacian operators). Both types of operators have been studied extensively, and their advan-tages and drawbacks are well known (see References 1-3 for a good overview). The underlying idea ofthis approach to edge detection is to combine the advantages of both types of operators, and eliminate
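
One plausible combination in the spirit of the paper: keep LoG zero-crossings only where the Sobel gradient magnitude is large, suppressing noise-induced crossings (the gating scheme and all parameters here are assumptions; the paper evaluates several combinations):

```python
import numpy as np
from scipy import ndimage

def log_kernel(size, sigma):
    # Sampled Laplacian-of-Gaussian kernel, zero-sum so that flat regions
    # produce no response.
    r = (size - 1) / 2.0
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    s2 = sigma ** 2
    k = (x**2 + y**2 - 2 * s2) / s2**2 * np.exp(-(x**2 + y**2) / (2 * s2))
    return k - k.mean()

def sobel_log_edges(img, sigma=1.4, grad_thresh=30.0):
    img = img.astype(float)
    log_resp = ndimage.convolve(img, log_kernel(9, sigma))
    # Zero-crossings: sign changes between vertical or horizontal neighbors.
    zc = (np.sign(log_resp) != np.sign(np.roll(log_resp, 1, axis=0))) | \
         (np.sign(log_resp) != np.sign(np.roll(log_resp, 1, axis=1)))
    mag = np.hypot(ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0))
    return zc & (mag > grad_thresh)
```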

Proceedings ArticleDOI
23 Apr 1992
TL;DR: The YARTOS kernel supports the notion of guaranteed processing rates, where the desired processing rate of each task is made known to the kernel, and the kernel provides a guaranteed response time to each task that is sufficient for ensuring that the required processing rate is achieved.
Abstract: Real-time operating system services are required to support multimedia systems that rely heavily on the workstation processor for control of the audio and video processors and movement of audio and video data. The requirements for each service are described, together with the YARTOS kernel, an operating system kernel that provides real-time communication and computation services. The programming model supported by YARTOS is an extension of Wirth's discipline of real-time programming. In essence it is a message-passing system with a semantics of interprocess communication that specifies the real-time response that an operating system must provide to a message receiver. This allows a programmer to assert an upper bound on the time to receipt and processing of each message. The YARTOS kernel supports the notion of guaranteed processing rates. The desired processing rate of each task is made known to the kernel, and the kernel provides a guaranteed response time to each task that is sufficient for ensuring that the required processing rate is achieved.
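
YARTOS's actual guarantee is derived from its message-passing semantics and deadlines; purely as a generic illustration of rate-based admission control, a utilization test looks like this:

```python
def admit(tasks, new_task):
    # tasks: list of (cost, period) pairs already guaranteed by the kernel.
    # A toy test: admit the new task only if total utilization stays <= 1,
    # so every task can still meet its required processing rate.
    cost, period = new_task
    total = sum(c / p for c, p in tasks) + cost / period
    return total <= 1.0
```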

Journal ArticleDOI
Eric W. Hansen1
TL;DR: In this article, the authors derived a space-domain inversion formula for the incomplete Abel transform, which is equivalent to the frequency-domain inverse procedure of Dallas et al. They showed that the kernel of the inverse transform consists of the usual Abel inversion kernel plus a number of correction terms that act to complete the projections.
Abstract: An axisymmetric object is reconstructed from its transaxial line-integral projection by the inverse Abel transform. An interesting variation of the Abel inversion problem is the finite-length line-spread function introduced by Dallas et al. [J. Opt. Soc. Am. A 4, 2039 (1987)], in which the path of integration does not extend completely across the object support, resulting in incomplete projections. We refer to this operation as the incomplete Abel transform and derive a space-domain inversion formula for it. It is shown that the kernel of the inverse transform consists of the usual Abel inversion kernel plus a number of correction terms that act to complete the projections. The space-domain inverse is shown to be equivalent to Dallas’s frequency-domain inversion procedure. Finally, the space-domain inverse is demonstrated by numerical simulation.
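
For orientation, the complete-projection case that the correction terms extend: a direct discretization of f(r) = −(1/π) ∫ᵣ^∞ F′(y)/√(y² − r²) dy (crudely skipping the singular endpoint; the incomplete-transform correction terms are not reproduced):

```python
import numpy as np

def inverse_abel(F, dy):
    # F: line-integral projection sampled on y = 0, dy, 2*dy, ...
    n = len(F)
    Fp = np.gradient(F, dy)
    y = np.arange(n) * dy
    f = np.zeros(n)
    for i in range(n - 1):
        ys = y[i + 1:]
        f[i] = -np.trapz(Fp[i + 1:] / np.sqrt(ys**2 - y[i]**2), dx=dy) / np.pi
    return f
```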

Proceedings ArticleDOI
04 Oct 1992
TL;DR: Simulations involving chirp signals are presented and the flexibility offered by the McClellan transformation method in shaping the contours in the frequency domain is used to place a unit value along the kernel axes and satisfy marginal constraints.
Abstract: A McClellan transformation 2-D filter design method is used for designing a signal-dependent time-frequency (t-f) kernel. This method is amenable to t-f kernel constraints, which do not commonly arise when filtering signals in image and array processing applications. The two parts of the McClellan transformation method (the one-dimensional filter and the linear transformation) provide a mechanism for emphasizing the auto-terms and improving spectral resolution. The flexibility offered by the McClellan transformation method in shaping the contours in the frequency domain is used to place a unit value along the kernel axes and satisfy marginal constraints. Simulations involving chirp signals are presented.
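
A sketch of the McClellan transformation itself, which builds the 2-D kernel from a 1-D zero-phase prototype Σₖ a[k] cos(kω) by the spatial-domain Chebyshev recursion Pₖ = 2 t∗Pₖ₋₁ − Pₖ₋₂ (the paper's extra conditions, such as unit values along the kernel axes for the marginals, are additional design constraints not imposed here):

```python
import numpy as np
from scipy import signal

def _center_add(big, small):
    # Add a smaller odd-sized kernel into the center of a larger one.
    dy = (big.shape[0] - small.shape[0]) // 2
    dx = (big.shape[1] - small.shape[1]) // 2
    big[dy:dy + small.shape[0], dx:dx + small.shape[1]] += small
    return big

def mcclellan_2d(a, t):
    # a: Chebyshev coefficients of the 1-D prototype; t: small (e.g. 3x3)
    # transformation kernel whose frequency response replaces cos(w).
    K = len(a) - 1
    size = max(K * (t.shape[0] - 1) + 1, 1)
    out = np.zeros((size, size))
    p_prev = np.array([[1.0]])              # P_0 = delta
    p_curr = t.copy()                       # P_1 = t
    out = _center_add(out, a[0] * p_prev)
    if K >= 1:
        out = _center_add(out, a[1] * p_curr)
    for k in range(2, K + 1):
        p_next = _center_add(2.0 * signal.convolve2d(p_curr, t), -p_prev)
        out = _center_add(out, a[k] * p_next)
        p_prev, p_curr = p_curr, p_next
    return out
```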

Proceedings ArticleDOI
01 Apr 1992
TL;DR: Ξ-filters were introduced in the mid-1980s by the author and have been used in many applications, including industrial inspection, remote sensing, and medicine; some new developments in each of these areas are reviewed.
Abstract: Ξ-filters were introduced in the mid-1980s by the author and have been used in many applications, including industrial inspection, remote sensing, and medicine. This paper reviews some new developments in each of these areas. The Ξ-filter is based on high-speed LUT (lookup table) operations wherein small-kernel, nonlinear convolutions are performed on three-dimensional data. In order to execute 3D transforms by LUT, it is necessary to utilize as compact a kernel as possible. This has led to the FCC (face-centered-cubic) tessellation, where the kernel comprises only 13 binary data elements, or voxels. (This is in contradistinction to the Cartesian tessellation, where the kernel comprises 27 voxels.) Since each LUT contains at each of 8192 locations the transformed value of the voxel, the program word that defines a single Ξ-filter transform is 8192 bits in length. There are, therefore, an essentially infinite number of program words and therefore an infinite number of transforms. In order to delimit the number of transforms to a set that is both useful and manageable, program words limited to the various ranking filters have been generated. In a kernel of 13 elements, there are, of course, only 13 ranks. By iterating Ξ-filters based on these ranking transforms, many interesting operations are possible, as illustrated in this paper.
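
A toy rendering of the program-word idea: with 13 binary voxels there are 2¹³ = 8192 neighborhood codes, so one rank filter is one 8192-entry LUT (the FCC offsets below are an illustrative embedding in a Cartesian index grid, not the paper's exact addressing):

```python
import numpy as np

# Center voxel plus the 12 nearest FCC neighbors.
FCC_OFFSETS = [(0, 0, 0),
               (0, 1, 1), (0, 1, -1), (0, -1, 1), (0, -1, -1),
               (1, 0, 1), (1, 0, -1), (-1, 0, 1), (-1, 0, -1),
               (1, 1, 0), (1, -1, 0), (-1, 1, 0), (-1, -1, 0)]

def rank_program_word(rank):
    # 8192-bit program word: output 1 iff at least `rank` of the 13 voxels
    # in the neighborhood are set.
    lut = np.zeros(2 ** 13, dtype=np.uint8)
    for code in range(2 ** 13):
        lut[code] = bin(code).count('1') >= rank
    return lut

def xi_filter(volume, lut):
    # Pack the 13 neighborhood bits into a code, then one lookup per voxel.
    codes = np.zeros(volume.shape, dtype=np.int32)
    for bit, shift in enumerate(FCC_OFFSETS):
        codes |= np.roll(volume.astype(np.int32), shift, axis=(0, 1, 2)) << bit
    return lut[codes]
```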

Proceedings ArticleDOI
10 May 1992
TL;DR: An interactive object-oriented layer-based simulator is presented, intended for large-scale investigations of artificial neural networks (ANNs), and has been successfully applied to the recognition of acoustical signals.
Abstract: An interactive object-oriented layer-based simulator is presented. It is intended for large-scale investigations of artificial neural networks (ANNs), and consists of two parts. The first is a simulation kernel, whose underlying base model is a layer of units, processing inputs to outputs. The core model has no knowledge of the ANN, but serves as a base for specialized layers. The second is the interactive counterpart, where by means of several windows, the actual state of the simulation can be visualized with variable amounts of information, and controlled, interacting with the layer as a whole, or with every single variable. Every specialized layer inherits from its parent layer both the simulation kernel and its interactive counterpart, modifies existing functions, and adds new variables and functions. Individual layers are combined by multiple inheritance. The simulator has been successfully applied to the recognition of acoustical signals.
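
Purely as an illustration of the inheritance scheme described (the simulator's real class interface is not given in the abstract), a base layer acting as the simulation kernel with a specialized subclass:

```python
import math

class Layer:
    # Simulation kernel: a layer of units mapping inputs to outputs.
    # The base model knows nothing about any particular ANN.
    def __init__(self, n_units):
        self.output = [0.0] * n_units

    def step(self, inputs):
        raise NotImplementedError

class SigmoidLayer(Layer):
    # A specialized layer inherits the kernel (and, in the simulator, its
    # interactive counterpart) and overrides the processing rule.
    def __init__(self, weights):
        super().__init__(len(weights))
        self.weights = weights          # weights[i][j]: input j -> unit i

    def step(self, inputs):
        self.output = [1.0 / (1.0 + math.exp(-sum(w * u for w, u in zip(row, inputs))))
                       for row in self.weights]
        return self.output
```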

Journal ArticleDOI
TL;DR: In this article, an exact solution is given for problems of best approximation, in the uniform and integral metrics, of classes of periodic functions representable as a convolution of a kernel that does not increase oscillation with functions having a given convex-upwards majorant of the modulus of continuity.
Abstract: This paper is devoted to an exact solution of problems of best approximation, in the uniform and integral metrics, of classes of periodic functions representable as a convolution of a kernel that does not increase oscillation with functions having a given convex-upwards majorant of the modulus of continuity. The approximating sets are taken to be the trigonometric polynomials in the case of the uniform and integral metrics, and convolutions of the kernel defining the class with polynomial splines in the case of the integral metric.
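
In symbols, one reading of the class being approximated (with $K$ the fixed oscillation-non-increasing kernel and $\omega(t)$ the given concave majorant; notation is illustrative):

$$\mathcal{F}_{K,\omega} \;=\; \Bigl\{\, f = K * \varphi \;:\; \omega(\varphi; t) \le \omega(t) \ \text{for all } t \ge 0 \,\Bigr\}, \qquad (K * \varphi)(x) = \int_0^{2\pi} K(x - s)\,\varphi(s)\,ds .$$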



Journal ArticleDOI
TL;DR: The decomposition of the circulant representation of the 2-D convolution matrix offers insight into the process of 2-D convolution and has applications in image processing.
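
The computational content of that representation: a 2-D circular convolution is multiplication by a block-circulant matrix with circulant blocks, which the 2-D DFT diagonalizes, so the matrix never needs to be formed (kernel assumed already wrapped to the origin):

```python
import numpy as np

def circular_conv2(kernel, img):
    # y = F^{-1}( diag(F k) F x ): the FFT of the zero-padded kernel holds
    # the eigenvalues of the block-circulant convolution matrix.
    eig = np.fft.fft2(kernel, s=img.shape)
    return np.fft.ifft2(eig * np.fft.fft2(img)).real
```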

Proceedings ArticleDOI
04 Oct 1992
TL;DR: In this paper, the authors demonstrate how the discrete-time spectrogram is related to a convolution of discrete time Wigner distributions in spite of this noticeable incompatibility concerning periodicity.
Abstract: The spectrogram for continuous-time signals can be expressed as a convolution of two Wigner distributions in the time-frequency plane. Definitions exist for the discrete-time spectrogram and the discrete-time Wigner distribution, each with its own periodicity in the frequency domain. The authors demonstrate how the discrete-time spectrogram is related to a convolution of discrete-time Wigner distributions in spite of this noticeable incompatibility concerning periodicity. The result can be considered as the counterpart of the relation existing for continuous-time signals.
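
For reference, the continuous-time relation whose discrete-time counterpart the authors establish, with $W_x$ the Wigner distribution of the signal and $W_h$ that of the analysis window:

$$\mathrm{SP}_x^{h}(t, f) \;=\; \iint W_x(s, \xi)\, W_h(s - t,\; \xi - f)\, ds\, d\xi .$$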