
Showing papers on "Kernel (image processing)" published in 1990


Journal ArticleDOI
Abstract: The convolution approach to least-squares smoothing and differentiation is extended to remove the data truncation problem of the original Savitzky and Golay algorithm. A formalism based on the recursive properties of Gram polynomials provides a simple and direct means of calculating the convolution weights for all cases and thus enables a short but completely general routine to be written.
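A minimal numerical sketch of the idea (not the paper's Gram-polynomial recursion itself): the convolution weights for any window position, polynomial order, and derivative can equivalently be obtained from a pseudo-inverse least-squares fit, which removes the end-point truncation of the original Savitzky-Golay tables. Function and parameter names below are illustrative.

```python
import numpy as np
from math import factorial

def ls_convolution_weights(m, poly_order, eval_point=0, deriv=0):
    """Convolution weights for least-squares smoothing/differentiation.

    Fits a polynomial of the given order to a window of 2m+1 points and
    returns the weights that evaluate its `deriv`-th derivative at
    `eval_point` (0 = window centre; -m or +m give the end-point weights
    needed to avoid truncating the start and end of the data set)."""
    t = np.arange(-m, m + 1)
    A = np.vander(t, poly_order + 1, increasing=True)   # (2m+1) x (order+1) design matrix
    pinvA = np.linalg.pinv(A)                            # maps window values -> poly coefficients
    # Linear functional taking polynomial coefficients to the deriv-th
    # derivative evaluated at eval_point.
    d = np.zeros(poly_order + 1)
    for k in range(deriv, poly_order + 1):
        d[k] = factorial(k) / factorial(k - deriv) * eval_point ** (k - deriv)
    return d @ pinvA                                     # length 2m+1 weight vector

# Classic centre-point smoothing weights for a 5-point quadratic fit: [-3 12 17 12 -3]/35
print(ls_convolution_weights(m=2, poly_order=2))
# Weights for the very first point of the data set (no truncation)
print(ls_convolution_weights(m=2, poly_order=2, eval_point=-2))
```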

611 citations


Journal ArticleDOI
01 Apr 1990
TL;DR: In this article, it was shown that for noise typical of electrooptical sensors, invertible multiple operators with their inverses will always outperform any set of single or multiple operators with the inverse omitted.
Abstract: An updated review of the subject, including physically realizable examples along with explicit inverses and a computer simulation of the resulting large bandwidth, is given. A discussion of the ill-posedness of a single convolution operator clarifies the necessity of multiple operators. A precisely stated necessary and sufficient condition for invertibility is given. The performance of simultaneous convolution operators when there are sources of additive noise prior to the inverse is addressed. The main point is that for noise typical of electrooptical sensors, invertible multiple operators with their inverses will always outperform any set of single or multiple operators with the inverse omitted. A tutorial on the theory of distributions of compact support, which is used freely throughout the paper, is given in the appendix.

84 citations


Journal ArticleDOI
TL;DR: A new method for nonlinear image processing that is well suited for hybrid optical-electronic implementation and enhances straight-line features in noisy, low-contrast images is presented.
Abstract: We present a new method for nonlinear image processing that is well suited for hybrid optical–electronic implementation. An input image is convolved with a long, narrow two-dimensional kernel that is rotated, either continuously or discretely, through 360 deg. During rotation the convolution output is monitored, and the maximum and minimum values measured at each point are stored. The processed image is given by an application-dependent function of Max(x, y) and Min(x, y). Setting the output equal to [Max(x, y) − Min(x, y)] enhances straight-line features in noisy, low-contrast images. Better results can be obtained by cascading a Max and a [Max − Min] operation. Numerically calculated examples illustrate the method and compare it with linear filtering.
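A purely digital sketch of the rotating-kernel Max/Min scheme described above (the hybrid optical-electronic implementation is not reproduced; the kernel size, angle count, test image, and function names are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def rotating_kernel_max_min(image, length=15, width=1, n_angles=36):
    """Convolve with a long, narrow kernel rotated through 360 degrees and
    keep the per-pixel maximum and minimum responses."""
    base = np.zeros((length, length))
    base[length // 2 - width // 2 : length // 2 + (width + 1) // 2, :] = 1.0 / (length * width)
    max_img = np.full(image.shape, -np.inf)
    min_img = np.full(image.shape, np.inf)
    for angle in np.linspace(0.0, 360.0, n_angles, endpoint=False):
        kernel = ndimage.rotate(base, angle, reshape=False, order=1)
        response = ndimage.convolve(image.astype(float), kernel, mode='reflect')
        np.maximum(max_img, response, out=max_img)
        np.minimum(min_img, response, out=min_img)
    return max_img, min_img

# Max - Min enhances straight-line features in a noisy, low-contrast image.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (128, 128))
img[64, 20:100] += 2.0                      # a faint horizontal line
mx, mn = rotating_kernel_max_min(img)
enhanced = mx - mn
```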

65 citations


ReportDOI
01 Dec 1990
TL;DR: A class of vector-space bases is introduced for the sparse representation of discretizations of integral operators: an operator with a smooth, non-oscillatory kernel possessing a finite number of singularities in each row or column is represented in these bases, to high precision, as a sparse matrix.
Abstract: A class of vector-space bases is introduced for the sparse representation of discretizations of integral operators. An operator with a smooth, non-oscillatory kernel possessing a finite number of singularities in each row or column is represented in these bases as a sparse matrix, to high precision. A method is presented that employs these bases for the numerical solution of second-kind integral equations in time bounded by O(n log² n), where n is the number of points in the discretization. Numerical results are given which demonstrate the effectiveness of the approach, and several generalizations and applications of the method are discussed.
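A much simpler illustration of the same phenomenon (using a plain orthonormal Haar basis rather than the bases constructed in the report; the kernel, size, and threshold are illustrative): expressing a discretized log-kernel in a wavelet-like basis leaves only a small fraction of entries above a fixed relative threshold.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet matrix of size n x n (n a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                      # scaling (average) rows
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])     # detail (difference) rows
    return np.vstack([top, bottom]) / np.sqrt(2.0)

# A smooth, non-oscillatory kernel with a diagonal singularity.
n = 256
x = (np.arange(n) + 0.5) / n
K = np.log(np.abs(x[:, None] - x[None, :]) + 1e-12)

W = haar_matrix(n)
C = W @ K @ W.T                                        # kernel expressed in the wavelet basis
threshold = 1e-6 * np.abs(C).max()
nnz = np.count_nonzero(np.abs(C) > threshold)
print(f"retained {nnz} of {n * n} entries ({100.0 * nnz / n**2:.1f}%)")
```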

58 citations


Book
01 Jan 1990
TL;DR: The book covers pictures, image applications, conventional functions, statistics and probability, geometric functions, convolution and correlation, transforming image representations, restoration, and analysis.
Abstract: Topics covered: pictures; image applications; conventional functions; statistics and probability; geometric functions; convolution and correlation; transforming image representations; restoration; analysis.

56 citations


Journal ArticleDOI
TL;DR: It is demonstrated by means of simulation examples that significant reduction in the amplitude of ghost artifacts is obtained when the image is filtered by the inverse of the motion kernel.
Abstract: The effect of periodic motion of a single magnetic resonance imaging (MRI) slice in the direction of the slice selection axis is modeled as amplitude modulation of the raw data with a motion kernel along the phase encoding direction in the Fourier domain. It is shown that this motion can be detected in 1-D projections of the raw data along the frequency encoding direction which in combination with appropriate filtering leads to the recovery of the motion kernel. It is demonstrated by means of simulation examples that significant reduction in the amplitude of ghost artifacts is obtained when the image is filtered by the inverse of the motion kernel. Some issues to be investigated before the technique can be used in a clinical environment are mentioned.
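A toy numerical illustration of the model (not the paper's detection procedure: here the motion kernel is assumed known rather than recovered from 1-D projections, and all sizes and the modulation shape are made up for the example):

```python
import numpy as np

# Periodic through-plane motion modeled as an amplitude modulation m(ky) of
# the raw (k-space) data along the phase-encode axis; dividing the raw data
# by m removes the resulting ghosts.
ny, nx = 128, 128
image = np.zeros((ny, nx))
image[40:90, 50:80] = 1.0                                  # simple object

kspace = np.fft.fft2(image)
ky = np.fft.fftfreq(ny)                                    # phase-encode coordinate
motion_kernel = 1.0 + 0.4 * np.cos(2 * np.pi * 8 * ky)     # periodic modulation (illustrative)
corrupted_kspace = kspace * motion_kernel[:, None]
ghosted = np.abs(np.fft.ifft2(corrupted_kspace))           # image with ghost artifacts

# Inverse filtering with the (here, known) motion kernel suppresses the ghosts.
restored_kspace = corrupted_kspace / motion_kernel[:, None]
restored = np.abs(np.fft.ifft2(restored_kspace))
print(np.abs(restored - image).max())                      # ~0 up to numerical error
```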

33 citations


01 Jan 1990
TL;DR: In this paper, a general procedure for computing the convolution weights at all points in the spectrum, for all polynomial orders, all filter lengths, and any derivative, based on the recursive properties of Gram polynomials, is presented.
Abstract: Smoothing and differentiation of large data sets by piecewise least-squares polynomial fitting are now widely used techniques. The calculation speed is very greatly enhanced if a convolution formalism is used to perform the calculations. Previously, tables of convolution weights for the center-point least-squares evaluation of 2m + 1 points have been presented. A major drawback of the technique is that the end points of the data sets are lost (2m points for a 2m + 1 point filter). Convolution weights have also been presented in the special case of initial-point values. In this paper a simple general procedure for calculating the convolution weights at all positions, for all polynomial orders, all filter lengths, and any derivative is presented. The method, based on the recursive properties of Gram polynomials, enables the convolution technique to be extended to cover all points in the spectrum.
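For practical use, the end-point treatment this paper enables is closely related to what SciPy's Savitzky-Golay routines provide today; a brief sketch (the library calls are standard SciPy, while window length and polynomial order are arbitrary choices):

```python
import numpy as np
from scipy.signal import savgol_filter, savgol_coeffs

# savgol_filter's default edge mode fits a polynomial to the first/last window
# so no points are lost, and savgol_coeffs(pos=...) returns the weights for an
# off-centre evaluation position within the window.
x = np.linspace(0.0, 4.0 * np.pi, 200)
noisy = np.sin(x) + 0.1 * np.random.default_rng(1).normal(size=x.size)

smoothed = savgol_filter(noisy, window_length=11, polyorder=3)         # all 200 points kept
derivative = savgol_filter(noisy, 11, 3, deriv=1, delta=x[1] - x[0])   # smoothed d/dx

# Weights for evaluating the fitted polynomial at an end position of the window
# (the "initial-point" case mentioned in the abstract).
print(savgol_coeffs(11, 3, pos=0))
```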

27 citations


Book ChapterDOI
TL;DR: A specification formalism with parameterisation of an arbitrary order is presented, given a denotational-style semantics, accompanied by an inference system for proving that an object satisfies a specification.
Abstract: A specification formalism with parameterisation of an arbitrary order is presented. It is given a denotational-style semantics, accompanied by an inference system for proving that an object satisfies a specification. The inference system incorporates, but is not limited to, a clearly identified type-checking component.

21 citations


Proceedings Article
01 Oct 1990
TL;DR: Oriented non-radial basis function networks (ONRBF) are introduced as a generalization of radial basis function networks (RBF), wherein the Euclidean distance metric in the exponent of the Gaussian is replaced by a more general polynomial.
Abstract: We introduce oriented non-radial basis function networks (ONRBF) as a generalization of radial basis function networks (RBF), wherein the Euclidean distance metric in the exponent of the Gaussian is replaced by a more general polynomial. This permits the definition of more general regions, in particular hyper-ellipses with orientations. In the case of hyper-surface estimation this scheme requires a smaller number of hidden units and alleviates the "curse of dimensionality" associated with kernel-type approximators. In the case of an image, the hidden units correspond to features in the image and the parameters associated with each unit correspond to the rotation, scaling and translation properties of that particular "feature". In the context of the ONRBF scheme, this means that an image can be represented by a small number of features. Since transformations of an image by rotation, scaling and translation correspond to identical transformations of the individual features, the ONRBF scheme can be used to considerable advantage for the purposes of image recognition and analysis.
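A minimal sketch of a single oriented unit in two dimensions (the rotation/scaling parameterization, function name, and all values are illustrative; the network training procedure is not shown):

```python
import numpy as np

def onrbf_activation(x, center, rotation_deg, scales):
    """One oriented non-radial basis function unit in 2-D.

    The Euclidean distance of an ordinary RBF is replaced by a quadratic
    form whose axes are rotated and scaled, giving an oriented
    hyper-ellipse as the unit's receptive field."""
    theta = np.deg2rad(rotation_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = np.diag(1.0 / np.asarray(scales, dtype=float))
    A = S @ R.T                               # rotate into the unit's frame, then scale each axis
    u = A @ (np.asarray(x, dtype=float) - np.asarray(center, dtype=float))
    return np.exp(-np.dot(u, u))

# A unit elongated along a 30-degree axis responds strongly along that
# direction and weakly across it.
print(onrbf_activation([3.0, 1.7], center=[0, 0], rotation_deg=30, scales=[4.0, 0.5]))
print(onrbf_activation([-1.7, 3.0], center=[0, 0], rotation_deg=30, scales=[4.0, 0.5]))
```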

21 citations


Book ChapterDOI
01 Sep 1990
TL;DR: An operating system kernel for highly parallel supercomputers, implemented on an iPSC/2 Hypercube with 32 processors, is presented, and an improved programming methodology based on a combination of data and task partitioning, which leads to efficient computations on virtually fully connected highly parallel machines, is reported on.
Abstract: The paper presents an operating system kernel for highly parallel supercomputers, which was implemented on an iPSC/2 Hypercube with 32 processors. The kernel offers a process model which is well suited for most partitioning strategies of parallel algorithms. The basis for the efficiency of this object-oriented, global, and dynamic programming concept is advances in the communication network technology (virtual full connection) of some new parallel supercomputers. After presenting the functionality and the implementation of MMK (Multiprocessor Multitasking Kernel), the paper reports on an improved programming methodology based on a combination of data and task partitioning which leads to efficient computations on virtually fully connected highly parallel machines. MMK is an integral part of the TOPSYS project (TOols for Parallel SYStems), and all tools support the MMK programming model.

19 citations


Journal ArticleDOI
TL;DR: A mathematical model for the spatial computations performed by simple cells in the mammalian visual cortex is derived and it is shown how Gabor sampling arises as an approximation to this exact kernel for most cells.
Abstract: A mathematical model for the spatial computations performed by simple cells in the mammalian visual cortex is derived. The construction uses as organizing principles the experimentally observed simple cell linearity and rotational symmetry breaking, together with the constraint that simple cell inputs must effectively be ganglion cell outputs. This leads to a closed form expression for the simple cell kernel in terms of Jacobi θ-functions. Using a θ-function identity, it is also shown how Gabor sampling arises as an approximation to this exact kernel for most cells. In addition, the model provides a natural mechanism for introducing the type of nonlinearity observed in some simple cells. The cell's responses to a variety of visual stimuli are calculated using the exact kernel and compared to single cell recordings. In all cases, the model's predictions are in agreement with available experimental data.
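The Gabor approximation mentioned above is easy to write down explicitly; a minimal sketch of a 2-D Gabor receptive-field kernel (this is the approximation, not the exact θ-function kernel, and all parameter values are illustrative):

```python
import numpy as np

def gabor_kernel(size, wavelength, orientation_deg, sigma, phase=0.0):
    """2-D Gabor kernel: a Gaussian envelope times a sinusoidal carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    theta = np.deg2rad(orientation_deg)
    xr = x * np.cos(theta) + y * np.sin(theta)      # coordinate along the carrier
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
    return envelope * carrier

kernel = gabor_kernel(size=31, wavelength=8.0, orientation_deg=45, sigma=5.0)
```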

Journal ArticleDOI
TL;DR: The visual estimation of object velocity in systems of tuned bilocal detector units (simplified Hassenstein-Reichardt detectors) is investigated and the response of correlative motion analyzers to compound motion and to motion of nonrigid objects is discussed.
Abstract: The visual estimation of object velocity in systems of tuned bilocal detector units (simplified Hassenstein–Reichardt detectors) is investigated. The units contain delay filters of an arbitrary low-pass characteristic. Arrays of such detector units with identical delay filters are assumed to cover the plane of analysis. The global evaluation of the output signals of suitably arranged detector units is exemplified by the analysis of frontoparallel translations of rigid objects. The correlative method permits the estimation of the instantaneous object velocity, independently of object form. The time course of the resulting estimate is shown to be the convolution of the true velocity profile with a time-invariant kernel that depends solely on the impulse response of the delay filters and thus characterizes the analyzer system. The mathematical analysis of the processing principle is illustrated by considering idealized detector systems. The response of correlative motion analyzers to compound motion and to motion of nonrigid objects is discussed.

Proceedings ArticleDOI
16 Jun 1990
TL;DR: An algorithm is developed for defining small kernels that are conditioned on the important components of the imaging process: the nature of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and post-filter interpolation; subject to constraints on the spatial support, the algorithm produces a small, spatially constrained convolution kernel.
Abstract: An algorithm is developed for defining small kernels that are conditioned on the important components of the imaging process: the nature of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and post-filter interpolation. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that minimize the expected mean-square error of the estimate of the scene characteristic. This development is consistent with the derivation of the spatially unconstrained Wiener characteristic filter, but leads to a small, spatially constrained convolution kernel. Simulation experiments demonstrate that the algorithm is more flexible than traditional small-kernel techniques and yields more accurate estimates.
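A purely empirical stand-in for the idea (the paper derives the kernel from scene statistics, PSF, sampling and noise models; here, for illustration only, a small kernel is simply fit by least squares so that convolving the degraded image with it approximates a reference scene; the function name and sizes are made up):

```python
import numpy as np
from scipy import ndimage

def fit_small_kernel(degraded, target, size=5):
    """Least-squares estimate of a small (size x size) restoration kernel."""
    half = size // 2
    padded = np.pad(degraded, half, mode='symmetric')
    columns = []
    for dy in range(size):
        for dx in range(size):
            # Each column is a shifted copy of the degraded image (one kernel tap).
            columns.append(padded[dy:dy + degraded.shape[0],
                                  dx:dx + degraded.shape[1]].ravel())
    A = np.stack(columns, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return coeffs.reshape(size, size)[::-1, ::-1]   # flip: correlation taps -> convolution kernel

# Example: blur + noise, then fit the small restoration kernel.
rng = np.random.default_rng(2)
scene = ndimage.gaussian_filter(rng.normal(size=(96, 96)), 3.0)
degraded = ndimage.gaussian_filter(scene, 1.5) + 0.01 * rng.normal(size=scene.shape)
kernel = fit_small_kernel(degraded, scene, size=5)
restored = ndimage.convolve(degraded, kernel, mode='reflect')
```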

Journal ArticleDOI
TL;DR: An efficient parallel algorithm is presented for convolution on a mesh-connected computer with wraparound that does not require a broadcast feature for data values, and is applicable to both SIMD and MIMD meshes.
Abstract: An efficient parallel algorithm is presented for convolution on a mesh-connected computer with wraparound. The algorithm does not require a broadcast feature for data values, as assumed by previously proposed algorithms. As a result, the algorithm is applicable to both SIMD and MIMD meshes. For an N*N image and an M*M template, the previous algorithms take O(M²q) time on an N*N mesh-connected multicomputer (q is the number of bits in each entry of the convolution matrix). The algorithms presented here have complexity O(M²r), where r = max(number of bits in an image entry, number of bits in a template entry). In addition to not requiring a broadcast capability, these algorithms are faster for binary images.

Proceedings ArticleDOI
03 Apr 1990
TL;DR: It is shown that the GTFR technique offers a representation which is less sensitive to threshold settings and is thus more robust; potentially more important shorter-term advantages are illustrated and quantified as more accurate locations of bursts in the time-frequency distribution.
Abstract: A nonstationary analysis technique is applied to speech, with encouraging results. This technique makes use of a generalized time-frequency representation (GTFR) which uses a kernel that allows finite-time support while suppressing interference terms. This kernel has a cone shape in the (t, tau) plane, where t is the time of the signal and tau is an autocorrelation-like lag. The processing thus allows time and frequency resolution equivalent to the Wigner distribution but does not have significant interference terms. It is shown that the effect of this distribution on the long-term (1.5-s) display of speech is a visible enhancement of formant tracks. Potentially more important shorter-term (30-ms or less) advantages are illustrated and quantified as more accurate locations of bursts in the time-frequency distribution. It is shown that the GTFR technique offers a representation which is less sensitive to threshold settings and is thus more robust.

Journal ArticleDOI
TL;DR: In this article, the authors propose a Model Representation Kernel (MK) that can serve as a central model data base in an integrated environment for model development and simulation; the model developer may supply extra information, which is used for automatic consistency analysis to check for unintended abuse of models.

Proceedings ArticleDOI
Weiping Li
01 May 1990
TL;DR: A more efficient alternative method for FIR filtering of long coefficient sequences, using a vector transformation with convolution processors, is introduced; it reduces the total amount of hardware.
Abstract: In many signal processing applications, finite impulse response (FIR) filters with long coefficient sequences (more than a thousand coefficients) are required. To achieve high-speed operation, dedicated convolution processors have been used for FIR filtering. However, the number of coefficients that a single convolution processor chip can handle is relatively small, compared with the number of coefficients required. A straightforward way of using those convolution processors for FIR filtering of long coefficient sequences is to cascade them. A more efficient alternative method is introduced. By using a vector transformation with convolution processors, the total amount of hardware is reduced. Hardware implementation of the vector transformation is also discussed.
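For context, the straightforward partitioning that the paper improves on is easy to sketch (this is not the paper's vector-transformation method; block length, signal sizes, and the function name are arbitrary): the long coefficient sequence is split into short blocks, each handled by one fixed-size convolution stage, and the delayed partial outputs are summed.

```python
import numpy as np

def partitioned_fir(x, h, block_len=64):
    """Long FIR filtering as a sum of short convolutions.

    Each length-`block_len` segment of the coefficient sequence plays the
    role of one fixed-size convolution processor; partial outputs are
    summed after the appropriate delays."""
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(h), block_len):
        block = h[start:start + block_len]
        partial = np.convolve(x, block)           # one short convolution "chip"
        y[start:start + partial.size] += partial  # delayed by the block offset
    return y

rng = np.random.default_rng(3)
x = rng.normal(size=4096)
h = rng.normal(size=1500)                         # more than a thousand coefficients
assert np.allclose(partitioned_fir(x, h), np.convolve(x, h))
```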

Proceedings ArticleDOI
13 May 1990
TL;DR: A segmentation technique for very sparse surfaces is described; it models the surfaces with reproducing kernel-based splines, which can be shown to solve a regularized surface reconstruction problem.
Abstract: A segmentation technique for very sparse surfaces is described. It is based on minimizing the energy of the surfaces in the scene. While it could be used in almost any system as part of surface reconstruction/model recovery, the algorithm is designed to be usable when the depth information is scattered and very sparse, as is generally the case with depth generated by stereo algorithms. Results from a sequential algorithm are presented, and a working prototype that executes on the massively parallel Connection Machine is discussed. The technique presented models the surfaces with reproducing kernel-based splines, which can be shown to solve a regularized surface reconstruction problem. From the functional form of these splines the authors derive computable upper and lower bounds on the energy of a surface over a given finite region. The computation of the spline and the corresponding surface representation are quite efficient for very sparse data.
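A minimal sketch of the reconstruction step only (SciPy's thin-plate-spline RBF interpolator solves a closely related regularized problem; the paper's segmentation and energy bounds are not reproduced, and the sparse data here are synthetic):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Reproducing-kernel spline fit to very sparse, scattered depth data.
rng = np.random.default_rng(4)
points = rng.uniform(0.0, 1.0, size=(40, 2))             # sparse, scattered (x, y) locations
depths = np.sin(3 * points[:, 0]) + 0.5 * points[:, 1]   # sparse depth samples

spline = RBFInterpolator(points, depths, kernel='thin_plate_spline', smoothing=1e-3)

# Evaluate the reconstructed surface on a dense grid.
gx, gy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
surface = spline(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
```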

Proceedings ArticleDOI
03 Apr 1990
TL;DR: A VLSI architecture is introduced to achieve a single-chip real-time implementation of large-kernel convolutions, providing a way to organize the computation in order to lower the I/O bandwidth to 2 pixels per clock cycle, without increasing the internal storage.
Abstract: A VLSI architecture is introduced to achieve a single-chip real-time implementation of large-kernel convolutions. The architecture provides a way to organize the computation in order to lower the I/O bandwidth to 2 pixels per clock cycle, without increasing the internal storage. As a result, the whole silicon array can be dedicated to computation, without excessive external memory requirements, opening the way to single-chip, very-large-kernel convolutions. As an example, a 16*16 convolution or correlation architecture has been devised based on a 1.2-µm CMOS process. The same architecture can be used for data processing involving 2-D data convergence.

Proceedings ArticleDOI
05 Feb 1990
TL;DR: A new method is proposed to detect and enhance such features as object boundaries or line segments in a noisy gray-scale image by utilizing directional information at each point in the input image.
Abstract: A new method is proposed to detect and enhance such features as object boundaries or line segments in a noisy gray-scale image. This method utilizes directional information at each point in the input image. The input image is convolved with a 2-D kernel, discussed below, which is rotated through 360 degrees, either continuously or discretely in a fairly large number of steps. As the kernel rotates, the convolution output is measured and the maximum, minimum, and mean values at each point (as a function of rotation angle) are stored in a computer. Once these values are obtained, a class of image processing operations can be performed. In an optical implementation of the processing operation, it is necessary to physically rotate a mask in the optical system. However, this is much faster than effecting an equivalent kernel-rotation operation with a digital image processor.

Proceedings ArticleDOI
16 Jun 1990
TL;DR: A flexible drawing understanding system with state transition models is proposed, and the drawing processor AI-Mudams (written in C) is used as the token extractor in the embodiment discussed.
Abstract: A flexible drawing understanding system with state transition models is proposed. The drawing processor AI-Mudams (written in C) is used as the token extractor in the embodiment discussed. Given drawing images are converted efficiently to suitable geometrical primitives, such as contour vectors, core vectors, dots, loops, or, in some cases, primitives with semantics (road line, house, etc.). The understanding system kernel is implemented in Prolog, and the geometrical evaluator is also prepared in C for checking basic geometrical situations, including shape, geometrical relations, and allocations. This understanding kernel accepts the individual state transition rules corresponding to individual drawing images and recognition targets and realizes understanding in the form of bottom-up and top-down state transition. Experiments on different types of drawings reveal that the framework is flexible and effective for various kinds of drawing image.

Proceedings ArticleDOI
16 Jun 1990
TL;DR: An overview of the requirements for industrial applications of computer vision is given, and a method called multilevel input binary template matching, which appliesbinary template matching to gray-level images, is presented.
Abstract: An overview of the requirements for industrial applications of computer vision is given. A method called multilevel input binary template matching, which applies binary template matching to gray-level images, is presented. With this method, a vision system can be tailored to the difficulty of the individual computer-vision application while maintaining high speed and high reliability in all cases. A full custom VLSI specially designed for binary template matching is discussed. This dedicated VLSI chip, called IRIS, has a configurable kernel up to a size of 1024 elements for 1-D or up to 32*32 elements for 2-D, and has integrated line buffers. Some application examples from the field of industrial automation are presented.

Proceedings ArticleDOI
22 Oct 1990
TL;DR: In this paper, generalized matrix inverse (GMI) is used to estimate source activity distributions from single photon emission computed tomography (SPECT) projection measurements, and the SVDs are used to form approximate generalized matrix inverses.
Abstract: Generalized matrix inverses are used to estimate source activity distributions from single photon emission computed tomography (SPECT) projection measurements. Image reconstructions for a numerical simulation and a clinical brain study are examined. The photon flux from the source region and photon detection by the gamma camera are modeled by matrices which are computed by Monte Carlo methods. The singular value decompositions (SVDs) of these matrices give considerable insight into the SPECT image reconstruction problem and the SVDs are used to form approximate generalized matrix inverses. Tradeoffs between resolution and error in estimating source voxel intensities are discussed, and estimates of these errors provide a robust means of stabilizing the solution to the ill-posed inverse problem. In addition to its quantitative clinical applications, the generalized matrix inverse method may be a useful research tool for tasks such as evaluating collimator design and optimizing gamma camera motion.
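A minimal sketch of SVD-based stabilization (the system matrix here is a synthetic ill-conditioned stand-in for the Monte Carlo flux/detection matrices; the truncation rank, noise level, and function name are illustrative):

```python
import numpy as np

def truncated_svd_inverse(A, rank):
    """Approximate generalized (Moore-Penrose) inverse built from the leading
    singular vectors; truncating the small singular values stabilizes the
    ill-posed reconstruction at the cost of resolution."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]
    return Vt.T @ np.diag(s_inv) @ U.T

# Synthetic ill-conditioned "system matrix", noisy measurements, and a
# rank-limited reconstruction of the source intensities.
rng = np.random.default_rng(5)
U0 = np.linalg.qr(rng.normal(size=(120, 80)))[0]
V0 = np.linalg.qr(rng.normal(size=(80, 80)))[0]
A = U0 @ np.diag(np.logspace(0, -4, 80)) @ V0.T
source = rng.uniform(size=80)                         # true voxel intensities
measurements = A @ source + 1e-4 * rng.normal(size=120)
estimate = truncated_svd_inverse(A, rank=40) @ measurements
```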

Proceedings ArticleDOI
27 Nov 1990
TL;DR: In this article, two image-processing techniques for measuring the gill position of a Pacific salmon are presented, based on the isolation and enhancement of an elongated edge-like structure in the image of the head area of a fish.
Abstract: Fast and accurate measurement of gill position is important in automated fish processing, to ensure correct positioning in a cutting machine. Two image-processing techniques for accomplishing this are presented. The methods are based on the isolation and enhancement of an elongated edge-like structure in the image of the head area of a fish. In one approach, the gill position is measured by establishing the smallest rectangle that encloses this structure. In the other, the particular image is transformed into a condensed signature through gray-level averaging in the direction of the edge-like structure. Associated image generation, enhancement, and processing procedures are described. The techniques are implemented on a vision workstation containing a multitasking operating system with a real-time kernel, and are tested on Pacific salmon.

Proceedings ArticleDOI
27 Dec 1990
TL;DR: A new hybrid optical/digital method for scale- and rotation-invariant pattern recognition is presented, using a rotating-kernel min-max transformation; the normalized θ-projection exhibits an approximate scale invariance, with the recognition capability depending to a small degree on the kernel length used.
Abstract: A new hybrid optical/digital method for scale- and rotation-invariant pattern recognition is presented using a rotating-kernel min-max transformation. In this method, the input object is convolved with a long, narrow 2-D kernel. As the kernel rotates, the convolution output is monitored and the maximum (=Max) and minimum (=Min) values, along with the angle θM at which Max is found, are stored. The processed object is given by some function f[Max, Min] of the Max and Min values. From the description (f[Max, Min], θM), the θ-projection is first calculated. To obtain scale invariance, this projection is normalized by its integral. The normalized θ-projection exhibits an approximate scale invariance, the recognition capability depending to a small degree on the kernel length used. Since the kernel rotates, rotation invariance is achieved. Results of numerical experiments are presented. Some effects that variations in the kernel length have on the discrimination of objects are discussed.

Proceedings ArticleDOI
01 Feb 1990
TL;DR: A description of the image processing system used, the algorithms employed for both the sizing coat height measurements and for wrinkle detection, and results obtained for actual data are presented.
Abstract: Texture analysis has been an area of active research for the last two decades. This paper presents results on the application of two techniques for the analysis of textured images. Specifically, we look at the two problems of identifying and classifying sandpaper samples based on the textural properties created by varying the sizing coat, and detecting and classifying wrinkles within the sandpaper sample. The techniques used for identifying the samples are based on mathematical morphology and the computation of neighboring grey level dependence matrices. The features generated by applying four grey-level morphological operations to images with a 5 X 5 kernel are used to classify the different textures. The size and shape of the structuring elements, as well as the sequence of operations, can be optimized to provide the best discrimination for a particular product. Next, we show that the techniques used for the solution of sizing coat problems have a natural extension to the wrinkle detection problem. The wrinkle detection problem may be thought of as detecting a time-varying signal in a noisy background when processing in the spectral domain. The method that we use is the modified Hough transform. The parameters obtained from this transform, the length, angle, and orientation of the wrinkle, are the parameters of interest to the manufacturer. This paper presents a description of the image processing system used, the algorithms employed for both the sizing coat height measurements and for wrinkle detection, and results obtained for actual data. With over 100 sandpaper samples, the algorithms have proven to be entirely successful, and the system is currently being implemented online in the sponsor's manufacturing facility.

Patent
16 May 1990
TL;DR: An apparatus and method are described for removing background noise and high-frequency noise from an image by comparing each pixel in the image with neighboring pixels defining a variably shaped and sized kernel.
Abstract: An apparatus and method for removing background noise and high-frequency noise from an image by comparing each pixel in the image with neighboring pixels defining a variably shaped and sized kernel. The size and shape of the kernel are optimized for the particular characteristics of the data to be analyzed.

Proceedings ArticleDOI
17 Jun 1990
TL;DR: It is demonstrated that the system stabilizes in a state in which the different basic units are characterized by the group-theoretically derived filter functions.
Abstract: A study is made of properties of so-called basic units. The authors investigate an eigenvalue problem that turns up in the study of the stable states of such units. Basic units using Hebb-type learning rules converge to stable states which are eigenfunctions of an integral equation whose kernel is given by the covariance function of the input process. The authors investigate one basic unit and assume that the set of input patterns of this basic unit is regular in the sense that all patterns can be derived from a single prototype pattern by a group-theoretically defined transformation. They show that the stable states of the unit are uniquely determined by the symmetry of the input set. It is demonstrated that the system stabilizes in a state in which the different basic units are characterized by the group-theoretically derived filter functions. The authors train the system with an input set consisting of rotated edge and line patterns and show that the stable states of the system are characterized by pure line and pure edge detectors. How the system can be used in texture segmentation is described.
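A minimal illustration of the convergence statement (using the common Oja-stabilized Hebb rule as a stand-in for the paper's basic units; the covariance, learning rate, and sample count are arbitrary): the weight vector of a single linear unit converges to the leading eigenvector of the input covariance.

```python
import numpy as np

# A single unit with an Oja-type Hebbian rule converges to the leading
# eigenvector of the input covariance, i.e. an eigenfunction of the
# covariance "kernel".
rng = np.random.default_rng(6)
C = np.array([[3.0, 1.0], [1.0, 2.0]])               # input covariance
samples = rng.multivariate_normal([0.0, 0.0], C, size=20000)

w = rng.normal(size=2)
eta = 0.01
for x in samples:
    y = w @ x
    w += eta * y * (x - y * w)                       # Oja's rule (stabilized Hebb update)

leading = np.linalg.eigh(C)[1][:, -1]                # true leading eigenvector
print(np.abs(w @ leading))                           # close to 1 (up to sign)
```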

Journal ArticleDOI
TL;DR: The method is adaptive and uses local evaluation of the kernel parameters; its performance on the reconstruction of very sparse images is shown.

Proceedings ArticleDOI
E.Z. Tihanyi, J.L. Barron
16 Jun 1990
TL;DR: An automated edge detection algorithm, referred to as spatio-temporal edge focusing, is presented and it is shown that the combination of these two techniques maintains their advantages while minimizing or removing most of their disadvantages.
Abstract: An automated edge detection algorithm, referred to as spatio-temporal edge focusing, is presented. It combines spatial edge focusing and temporal edge focusing. While spatial edge focusing and temporal edge focusing each have their own advantages and disadvantages, it is shown that the combination of these two techniques maintains their advantages while minimizing or removing most of their disadvantages. The final result is an automated edge detection algorithm that produces relatively noise-free edge maps with well-localized edges.