
Showing papers on "Kernel (image processing) published in 1988"


Journal ArticleDOI
01 Oct 1988
TL;DR: In this paper, the authors evaluated the Sommerfeld integrals that appear in the rigorous analysis of radiating objects embedded in a layered medium, and examined the validity of the image theory by the method of numerical integration.
Abstract: The paper deals with the evaluation of Sommerfeld integrals that appear in the rigorous analysis of radiating objects embedded in a layered medium. The discrete images of a horizontal electric dipole above or within a multilayered medium can be obtained by numerically approximating the kernel in a Sommerfeld-type integral, using a series of exponential functions with different complex coefficients. The distance limitation of the image-theory expression is eliminated by incorporating the contribution of the surface waves. The validity of the image theory is examined by the method of numerical integration.
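The numerical core described above, fitting the spectral-domain kernel with a short series of exponentials so that each term yields a closed-form "image", can be sketched roughly as follows. This is an illustration only: the stand-in kernel and the fixed real exponents are assumptions of the demo (discrete-image methods typically fit both coefficients and complex exponents, e.g. by Prony-type schemes), not the authors' procedure.

```python
import numpy as np

kz = np.linspace(0.1, 5.0, 200)                 # sampled spectral variable
kernel = np.exp(-0.3 * kz) / (1.0 + kz)         # stand-in spectral kernel (assumed)

# Fixed trial exponents b_i (an assumption of this demo); solve the linear
# least-squares problem  kernel(kz) ~= sum_i c_i * exp(-b_i * kz),
# so each fitted term corresponds to one discrete image.
b = np.array([0.2, 0.5, 1.0, 2.0, 4.0])
A = np.exp(-np.outer(kz, b))                    # design matrix (200 x 5)
coeffs, *_ = np.linalg.lstsq(A, kernel, rcond=None)

max_err = np.max(np.abs(A @ coeffs - kernel))
print(max_err)   # small residual: a handful of image terms reproduces the kernel
```

With the exponents fitted as well (rather than fixed), far fewer terms are usually needed.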

303 citations


Journal ArticleDOI
TL;DR: This work proposes a method that transforms measured profiles to new, modified distributions so that they satisfy the theoretical symmetry condition and the resultant kernel from the deconvolution is then free of fluctuations.
Abstract: A method has been developed to extract pencil beam kernels from measured broad beam profiles. In theory, the convolution of a symmetric kernel with a step function will yield a function that is symmetric about the inflection point. Conversely, by deconvolution, the kernel may be extracted from a measured distribution. In practice, however, due to the uncertainties and errors associated with the measurements and due to the singularities produced in the fast Fourier transforms employed in the deconvolution process, the kernels thus obtained and the dose distributions calculated therefrom, often exhibit erratic fluctuations. We propose a method that transforms measured profiles to new, modified distributions so that they satisfy the theoretical symmetry condition. The resultant kernel from the deconvolution is then free of fluctuations. We applied this method to compute photon and electron dose distributions at various depths in water and electron fluence distributions in air. The agreement between measured and computed profiles is within 1% in dose or 1 mm in distance in high dose gradient regions.
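A minimal one-dimensional sketch of the idea: a symmetric kernel convolved with a step gives a profile satisfying p(x0+d) + p(x0-d) = const about the inflection point, and enforcing that condition before inversion suppresses fluctuations. The synthetic Gaussian kernel is an assumption of the demo, and for brevity the step convolution is inverted by differentiation rather than by the paper's FFT deconvolution.

```python
import numpy as np

n = 512
x = np.arange(n) - n // 2
kernel_true = np.exp(-(x / 8.0) ** 2)        # synthetic pencil-beam kernel (assumed)
kernel_true /= kernel_true.sum()

edge = (x >= 0).astype(float)                # ideal step ("broad beam" edge)
profile = np.convolve(edge, kernel_true, mode="same")

# Enforce the theoretical symmetry condition p(x0+d) + p(x0-d) = 1 about the
# inflection point (here x0 = 0), as the paper does before deconvolving.
profile_sym = 0.5 * (profile + 1.0 - profile[::-1])

# For a step input, differentiating the symmetrised profile recovers the
# kernel (the derivative of a step is a delta); the paper instead deconvolves.
kernel_rec = np.gradient(profile_sym)

# Compare away from the array boundaries, where the finite window distorts things.
max_err = np.max(np.abs(kernel_rec - kernel_true)[50:-50])
print(max_err)
```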

56 citations


Proceedings ArticleDOI
14 Nov 1988
TL;DR: A resolution-independent method for detection of imperfections in quasi-periodic textures is described, which offers a completely automated generation of a bank of suitable filters, the form and the coefficients of which are made dependent on the texture type to be inspected.
Abstract: A resolution-independent method for detection of imperfections in quasi-periodic textures is described. After image standardization, the period is estimated in the horizontal and vertical directions. This determines the size of a sparse convolution mask. Mask coefficients are determined by the well-known technique of eigenfilter extraction. The method thus offers a completely automated generation of a bank of suitable filters, the form and the coefficients of which are made dependent on the texture type to be inspected. After feature extraction in the filtered images, a Mahalanobis classifier is applied.
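The two automated steps, period estimation and eigenfilter extraction, can be sketched as below. The synthetic texture, the autocorrelation-peak period estimator, and the patch-sampling stride are assumptions of this demo, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic quasi-periodic texture (assumed stand-in for the inspected
# material); the 8-pixel period is what the method should rediscover.
h, w, period = 64, 64, 8
yy, xx = np.mgrid[0:h, 0:w]
texture = np.sin(2 * np.pi * xx / period) + np.sin(2 * np.pi * yy / period)
texture += 0.05 * rng.standard_normal((h, w))

# 1) Estimate the horizontal period from a row autocorrelation: the lag of
#    the strongest secondary peak.
row = texture[0] - texture[0].mean()
ac = np.correlate(row, row, mode="full")[w - 1:]
est_period = 1 + int(np.argmax(ac[1:]))

# 2) Eigenfilter extraction: the dominant eigenvector of the covariance of
#    period-by-period patches serves as the convolution mask.
patches = np.array([texture[i:i + est_period, j:j + est_period].ravel()
                    for i in range(0, h - est_period, 3)
                    for j in range(0, w - est_period, 3)])
cov = np.cov(patches, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
mask = eigvecs[:, -1].reshape(est_period, est_period)
print(est_period, mask.shape)
```

Filtering the image with `mask` and thresholding the response then flags regions that break the learned periodic pattern.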

45 citations



Patent
H. Keith Nishihara1
18 Mar 1988
TL;DR: In this paper, a spatial convolution filter is applied to a camera image and the sign of the filter's output is used to produce a second binary image, which is then reapplied to this binary signal blob image and peaks in the output of the second convolution are used as primitive elements for building a shape description.
Abstract: A system for shape recognition includes a spatial convolution filter which has an approximately band-pass response. The filter is applied to a camera image, and the sign of the filter's output is used to produce a binary image. At sufficiently coarse (low center spatial frequency) scales, this image tends to merge isolated shapes into single blob-shaped regions. The same convolution operator is then reapplied to this binary blob image, and peaks in the output of the second convolution are used as the primitive elements for building a shape description. Processing the image with operators at two different scales shifts the peak positions. The locations of the peaks from the coarse-scale operator, together with vectors to the fine-scale peaks, provide a description of the shape for a look-up table or other system.

35 citations


Journal ArticleDOI
TL;DR: In this article, the authors derived the asymptotic mean squared error of convolution-type kernel estimators in the random design case, where, given i.i.d. bivariate observations (Xi, Yi), i = 1,..., n, the aim is to estimate the regression function m(x) = E(Y | X = x).

34 citations


Patent
03 Nov 1988
TL;DR: A modular matrix processor as discussed by the authors is capable of configuration as a stand alone symmetric kernel convolutor or one of plural cascaded asymmetrical kernel convolutors, which includes coefficient registers and associated multipliers to multiply sequential pixels or words by an appropriate coefficient.
Abstract: A modular matrix processor which is capable of configuration as a stand-alone symmetrical kernel convolutor or as one of plural cascaded asymmetrical kernel convolutors. The module includes coefficient registers and associated multipliers to multiply sequential pixels or words by an appropriate coefficient. A summer, with appropriate input delays, sums the products and provides them to a plurality of FIFOs (first in, first out) that store the sums per row. An adder adds the contents of the summer, the FIFOs, and the cascaded inputs to provide a convolution output Pc. Plural frames may be processed using the pixel, coefficient, and cascade inputs. Plural modules may be used per row to increase the kernel size as well as the row capacity of the FIFOs.

31 citations


Journal ArticleDOI
TL;DR: Results from three cultivars grown at two test locations show that environment influences both aspect-ratio and kernel-length profiles: for a single location, variation in kernel morphology reflects genotypic differences between cultivars, while differences between locations reflect environmental influences upon the genome.

28 citations


Patent
22 Apr 1988
TL;DR: In this article, a linear combination of one or more terms, at least one term being the convolutional product of two or more primitive kernels, is used for image processing in which an image array is convolved with a kernel.
Abstract: Method and apparatus for image processing in which an image array is convolved with a kernel. The convolution is carried out with a decomposed form of the kernel that is a linear combination of one or more terms, at least one term being the convolutional product of two or more primitive kernels. Each primitive kernel has a center element, and even or odd parity with respect to reflection about the center element along each of the dimensions of the image array. One apparatus for carrying out the invention comprises a parallel processor associated with at least one dimension of the image array. The parallel processor receives parallel input data comprising a one-dimensional input portion of the image array extending along such dimension, and convolves the input data with the decomposed form of the kernel.
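The decomposition idea can be illustrated with the classic separable case: a 3x3 Sobel-like kernel expressed as the convolutional product of a smoothing column (even parity) and a differencing row (odd parity). The example below is a generic sketch of the principle, not the patent's apparatus.

```python
import numpy as np

def conv2d_valid(img, k):
    # Plain 2-D convolution ('valid' region, kernel flipped), written for
    # clarity rather than speed.
    kh, kw = k.shape
    H, W = img.shape
    kf = k[::-1, ::-1]
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kf)
    return out

col = np.array([[1.0], [2.0], [1.0]])   # primitive kernel, even parity
row = np.array([[1.0, 0.0, -1.0]])      # primitive kernel, odd parity
full = col @ row                        # 3x3 kernel = convolutional product

rng = np.random.default_rng(2)
img = rng.random((16, 16))

one_pass = conv2d_valid(img, full)                      # 9 multiplies/pixel
two_pass = conv2d_valid(conv2d_valid(img, col), row)    # 3 + 3 multiplies/pixel
print(np.allclose(one_pass, two_pass))  # True
```

The payoff is exactly the one the patent exploits: convolving with the primitive kernels in sequence costs fewer operations than convolving with the full kernel, and each 1-D pass maps naturally onto a parallel processor along one image dimension.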

27 citations


Proceedings ArticleDOI
07 Jun 1988
TL;DR: The authors present the design of a neural network performing the minimization of a quadratic distance between the analog acquired picture and a convolution of the resulting halftoned binary picture, showing that using a diffusion kernel and switched-capacitor networks results in an effective halftoning circuit.
Abstract: As part of an effort to build a smart sensor, the authors present the design of a neural network performing the minimization of a quadratic distance between the analog acquired picture and a convolution of the resulting halftoned binary picture. It is shown that using a diffusion kernel and switched-capacitor networks results in an effective halftoning circuit, well-suited to a very compact CMOS implementation. It is concluded that this design methodology can be utilized for the implementation of a large class of early or low-level vision problems, expressed as quadratic cost function minimization.

25 citations


Proceedings ArticleDOI
22 Aug 1988
TL;DR: In this paper, a method is presented for evoking a controlled, continuously variable degree of rotation and scale invariance in optical correlation; the method is suitable for off-line computation of filters, though not for real-time computation.
Abstract: A method is presented for evoking a controlled, continuously variable degree of rotation and scale invariance in optical correlation; the method is suitable for off-line computation of filters, though not for real-time computation. While a closed-form solution for the blur kernels has so far proved elusive, a digital approximation method is presented. A simulated correlation run with real, frame-grabbed imagery has demonstrated the method's desired performance. These Gaussian blur kernels can be replaced with box-car kernels or other blur kernels suitable for the given correlation task.

ReportDOI
01 Sep 1988
TL;DR: In this article, the kernel method is used to construct fixed point combinators from a given set of combinators, such that no two of the combinators are equal even in the presence of extensionality.
Abstract: In this report, we establish that the use of an automated theorem-proving program to study deep questions from mathematics and logic is indeed an excellent move. Among such problems, we focus mainly on that concerning the construction of fixed point combinators--a problem considered by logicians to be significant and difficult to solve, and often computationally intensive and arduous. To be a fixed point combinator, THETA must satisfy the equation THETAx = x(THETAx) for all combinators x. The specific questions on which we focus most heavily ask, for each chosen set of combinators, whether a fixed point combinator can be constructed from the members of that set. For answering questions of this type, we present a new, sound, and efficient method, called the kernel method, which can be applied quite easily by hand and very easily by an automated theorem-proving program. For the application of the kernel method by a theorem-proving program, we illustrate the vital role that is played by both paramodulation and demodulation--two of the powerful features frequently offered by an automated theorem-proving program for treating equality as if it is "understood." We also state a conjecture that, if proved, establishes the completeness of the kernel method. From what we can ascertain, this method--which relies on the introduced concepts of kernel and superkernel--offers the first systematic approach for searching for fixed point combinators. We successfully apply the new kernel method to various sets of combinators and, for the set consisting of the combinators B and W, construct an infinite set of fixed point combinators such that no two of the combinators are equal even in the presence of extensionality--a law that asserts that two combinators are equal if they behave the same. 18 refs.

Book ChapterDOI
01 Jan 1988
TL;DR: The basic concepts of the kernel are developed from rather general design objectives and its use and major properties are illustrated and a prototype implementation is demonstrated, which demonstrates the feasibility of adding these facilities to a given operating system without affecting existing interfaces or applications.
Abstract: The apparent complexity of distributed application development, especially in a heterogeneous environment, is the prime motivation for the network operating system kernel described in this paper. The kernel reduces this complexity by separating the distribution-related issues from the application-related ones. It provides an interface of generic objects and operations, which are able to take away from the application programmer most of the problems of distribution, access protection, resource management, and data representation. This paper develops the basic concepts of the kernel from rather general design objectives and illustrates its use and major properties. A prototype implementation, which is running on three different architectures, demonstrates the feasibility of adding these facilities to a given operating system without affecting existing interfaces or applications. The paper reports on early experience with the implementation and performance of the prototype.

Journal ArticleDOI
TL;DR: This paper examines how a large array of neurons, and their associated neural circuitry, may determine known receptive field profile types and some well-known visual phenomena including Mach bands, edge enhancement, and visual masking of one signal by another.
Abstract: In this paper we examine how a large array of neurons, and their associated neural circuitry, may determine known receptive field profile types and some well-known visual phenomena including Mach bands, edge enhancement, and visual masking of one signal by another. The neural model has a spatio-temporal structure and is described by a nonlinear integro-partial differential-difference equation with an isotropic Gabor kernel--a Gaussian-apertured cosine modulation. Several simulations are presented.
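The isotropic Gabor kernel named above, a Gaussian aperture multiplying a cosine modulation of radial distance, is easy to write down explicitly. The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def isotropic_gabor(size=21, sigma=3.0, freq=0.25):
    """Gaussian-apertured cosine modulation of radial distance.

    sigma (aperture width) and freq (modulation frequency) are assumed
    demo values, not the paper's.
    """
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r = np.sqrt(xx ** 2 + yy ** 2)
    return np.exp(-r ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * r)

g = isotropic_gabor()
print(g.shape, g[10, 10])   # centre value is exp(0) * cos(0) = 1.0
```

Convolving an image with such a kernel gives the centre-surround, band-pass behaviour that underlies Mach-band-like edge effects.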

Proceedings ArticleDOI
25 Oct 1988
TL;DR: A formalism for representation and processing of images which are not band-limited is presented and an example of a sampling scheme wherein the sampling rate decreases as a function of the distance from the center of the visual field is elaborated.
Abstract: In this paper we present a formalism for representation and processing of images which are not band-limited. Motivated by the properties of the visual system, we elaborate an example of a sampling scheme wherein the sampling rate decreases as a function of the distance from the center of the visual field. The broader class of images under consideration, obtained by the application of an appropriate projection filter, constitutes a Reproducing Kernel space which is characterized by "locally band-limited" properties. Sequential half-band filtering of such images generates a pyramidal scheme in the context of nonuniform systems and images.

Proceedings ArticleDOI
29 Jan 1988
TL;DR: Results are presented of a computer simulation of the deconvolution technique, indicating that it can remove the "ghosts" which are present in the basic shift-and-add image of a multiple star.
Abstract: Recent advances in deconvolution have allowed positive images to be deconvolved without prior knowledge of either of the images comprising the convolution. We incorporate these methods into the shift-and-add principle by exploiting the property of the basic shift-and-add image that it is a (noisy) convolution of the true image of an object with some unknown point-spread-function. This allows an estimate of the true image to be extracted from the shift-and-add image. The computational efficiency of basic shift-and-add is preserved during data gathering since extensive computation is only applied to a single image. Results are presented of a computer simulation of the technique, indicating that it can remove the "ghosts" which are present in the basic shift-and-add image of a multiple star.

Proceedings ArticleDOI
24 Apr 1988
TL;DR: The overall structure of the kernel is described, including a hierarchy of interprocess communication primitives and a generic way to interact with external devices, and it is possible to estimate the amount of time that a real-time process takes to execute since the execution times of system calls are bounded.
Abstract: Timix is a distributed real-time kernel that is being developed to support multisensor robot systems. The overall structure of the kernel is described, including a hierarchy of interprocess communication primitives and a generic way to interact with external devices. A simple multisensor robot that is being implemented using Timix is also presented. Three salient aspects of Timix are as follows. First, it is possible to estimate the amount of time that a real-time process takes to execute, since the execution times of system calls are bounded. Second, a hierarchy of communication methods, which differ in synchronization, time and space overheads, and bandwidth requirements, is provided to allow the programmer to choose the one most applicable to a particular application, depending on the real-time requirements. Finally, new devices, which are directly controlled by application processes, can be integrated into the system without changing the kernel.

Journal ArticleDOI
TL;DR: A linear convolution of two N- point sequences is computed by using N-point Fermat number transforms, so that the convolution length is doubled for a given modulo FermatNumber number.
Abstract: A linear convolution of two N-point sequences is computed by using N-point Fermat number transforms, so that the convolution length is doubled for a given modulo Fermat number. The algorithm is also applicable to other number-theoretic-transform convolution algorithms.
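For context, the basic Fermat-number-transform convolution that the paper builds on can be sketched with a naive O(N^2) transform. This shows plain FNT convolution only, not the paper's length-doubling construction; the modulus, root, and sequences are demo choices.

```python
# Modulus F_3 = 2^8 + 1 = 257, for which 2 is a primitive 16th root of
# unity (2^8 = 256 = -1 mod 257), so a 16-point transform is available.
MOD = 257
ROOT = 2
N = 16

def fnt(a, root=ROOT, mod=MOD):
    # Forward number-theoretic transform (naive O(N^2) form).
    n = len(a)
    return [sum(a[j] * pow(root, i * j, mod) for j in range(n)) % mod
            for i in range(n)]

def ifnt(A, root=ROOT, mod=MOD):
    # Inverse transform; mod is prime, so Fermat's little theorem gives
    # the inverses of n and of the root.
    n = len(A)
    inv_n = pow(n, mod - 2, mod)
    inv_root = pow(root, mod - 2, mod)
    return [(inv_n * sum(A[j] * pow(inv_root, i * j, mod) for j in range(n))) % mod
            for i in range(n)]

# Exact integer convolution: pad, transform, multiply pointwise, invert.
# (Results must stay below MOD to avoid wraparound.)
a = [1, 2, 3, 4] + [0] * (N - 4)
b = [5, 6, 7] + [0] * (N - 3)
C = [x * y % MOD for x, y in zip(fnt(a), fnt(b))]
c = ifnt(C)
print(c[:6])   # [5, 16, 34, 52, 45, 28], the linear convolution of a and b
```

Because the root is 2, a full FNT implementation needs only shifts and adds, which is the transform's main attraction for convolution hardware.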

Proceedings ArticleDOI
11 Apr 1988
TL;DR: Based on these VLSI architectures, a chip is proposed which would implement a real-time convolution of 512*512 images with kernels up to 11*11 and allow online processing of video format images.
Abstract: The authors propose VLSI architectures for real-time convolution of an image by any large kernel with symmetries. Consideration is limited to horizontal, vertical, and central symmetries in order to reduce the hardware requirements. Based on these architectures, a chip is proposed which would implement a real-time convolution of 512*512 images with kernels up to 11*11 and allow online processing of video format images.

Proceedings ArticleDOI
01 Jun 1988
TL;DR: Pipelined VLSI/WSI architectures supporting image coding transforms support flexible structures characterized by a “basic” pipeline - performing the common kernel of computation - and by transform-dependent input and output stages.
Abstract: Pipelined VLSI/WSI architectures supporting image coding transforms are defined and evaluated in the paper. The structures proposed in the paper have been derived by considering a common algorithmic kernel of the set of examined transforms. The possibility of reducing the computations to a common algorithmic version allows definition of flexible structures characterized by a “basic” pipeline - performing the common kernel of computation - and by transform-dependent input and output stages. Three different pipelines have been defined, and figures of merit, such as silicon area occupation and throughput, have been evaluated for architecture comparison.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the proximity effect in the superconductor-normal material (N) model system, and obtained the Gor'kov's integral kernel which is exact in general cases between the clean and the dirty limit.



Journal ArticleDOI
17 May 1988
TL;DR: A pipelined architecture that maps different sizes and shapes of kernels on a fixed size array of computing elements using a single pass of the input data to show that the array can be operated at its highest throughput for any kernel size.
Abstract: Existing architectures for 2-D convolution suffer from such drawbacks as inflexibility with respect to image and/or kernel sizes (systolic arrays) or data distribution and collection overhead (SIMD processor arrays). This paper introduces a pipelined architecture that maps different sizes and shapes of kernels on a fixed size array of computing elements using a single pass of the input data. It is shown that the array can be operated at its highest throughput for any kernel size. Interfacing this architecture with the host requires receiving and outputting data in a simple raster-scan fashion.

Proceedings ArticleDOI
01 Jan 1988
TL;DR: The Clouds kernel is a native-layer distributed kernel supporting the Clouds operating system that provides atomic actions to support reliable computation and is closely integrated with the virtual memory management of the system.
Abstract: The Clouds kernel is a native-layer distributed kernel supporting the Clouds operating system. Clouds provides atomic actions to support reliable computation. The data-recovery mechanism supporting atomic actions in the Clouds kernel is the responsibility of a component called the storage manager. This recovery mechanism uses a pessimistic shadowing technique and is designed to be an efficient, low-level facility. Recovery is completely separate from the synchronization support for atomic actions. Clouds recovery is closely integrated with the virtual memory management of the system, making for an effective recovery system. The algorithms that control the update of objects are straightforward enough that a fairly rigorous proof of correctness is possible.

Proceedings ArticleDOI
G. Eichmann1, A. Kostrzewski1, Berlin Ha1, Dai Hyun Kim1, Yao Li1 
18 Jul 1988
TL;DR: In this article, Fourier spectrum filtering based and local averaging using 2D lens array are used for real-time pyramid image generation, and experimental results for optical Gaussian, Laplacian and other fivadtree pyramidal image processing are shown.
Abstract: Pyramidal processing is a form of multiresolution image analysis in which a primary image is decomposed into a set of different resolution image copies. Pyramidal processing aims to extract and interpret significant features of an image appearing at different resolutions. Digital pyramidal image processing, because of the large number of convolution-type operations, is time-consuming. On the other hand, optical pyramidal processors, because of their ease in performing convolution operations, are preferable in real-time image understanding applications. Two methods of optical pyramidal image generation, one based on Fourier spectrum filtering and one on local averaging using a 2D lens array, are presented. Preliminary experimental results for optical Gaussian, Laplacian, and other quadtree pyramidal image processing are shown. Experimental results, using commercial liquid crystal TVs, for real-time pyramid image generation are presented.
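The digital counterpart of the Gaussian/Laplacian pyramids mentioned above is a repeated blur-and-decimate step; a Laplacian level is the detail lost between two Gaussian levels. The binomial filter taps and the pixel-repetition upsampling below are common assumed choices, not the paper's optics.

```python
import numpy as np

def blur_downsample(img):
    # Separable 5-tap binomial blur followed by 2x decimation: one
    # Gaussian-pyramid level (the [1,4,6,4,1]/16 taps are the standard
    # binomial choice, assumed here).
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    sm = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    sm = np.apply_along_axis(np.convolve, 0, sm, k, mode="same")
    return sm[::2, ::2]

img = np.random.default_rng(1).random((64, 64))
gaussian = [img]
while gaussian[-1].shape[0] > 8:
    gaussian.append(blur_downsample(gaussian[-1]))

# Laplacian level 0: detail between levels 0 and 1, using a naive 2x
# upsample by pixel repetition (np.kron) for the comparison.
laplacian0 = gaussian[0] - np.kron(gaussian[1], np.ones((2, 2)))
print([lvl.shape for lvl in gaussian], laplacian0.shape)
```

Each level costs a convolution over a shrinking image, which is exactly the workload the optical processors in the paper aim to remove.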

Book ChapterDOI
01 Jan 1988
TL;DR: A local spatial convolution filter for the restoration of one- and two—dimensional signals is suggested whose design is based on approximating any global linear restoration filter, which might be the Wiener, pseudoinverse, constrained least squares, or projection filter.
Abstract: A local spatial convolution filter for the restoration of one- and two-dimensional signals is suggested whose design is based on approximating any global linear restoration filter, which might be the Wiener, pseudoinverse, constrained least squares, or projection filter. The local filter provides a restoration that is as close as possible to the global restoration. It is shown by an example using a blurred standard image that the restorations are satisfactory even when the filter size is quite small. Quantitative properties of the suggested localization filter are discussed.
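The premise can be illustrated in one dimension: a global linear filter has an impulse response concentrated near its centre, so a small window of taps already captures most of it. The Lorentzian Wiener-style filter and the plain truncation used below are assumptions of the demo; the paper designs the local filter more carefully than by truncation.

```python
import numpy as np

n = 256
f = np.fft.fftfreq(n)

# An illustrative global restoration filter (Wiener-style smoothing for a
# Lorentzian signal prior); any global linear filter plays the same role.
W = 1.0 / (1.0 + (f / 0.05) ** 2)

w = np.fft.fftshift(np.real(np.fft.ifft(W)))   # global impulse response
centre = n // 2

# Localisation by truncation: keep only 15 taps around the centre.
half = 7
local = np.zeros(n)
local[centre - half:centre + half + 1] = w[centre - half:centre + half + 1]

energy_ratio = np.sum(local ** 2) / np.sum(w ** 2)
print(energy_ratio)   # most of the filter's energy lives in the central taps
```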

Proceedings ArticleDOI
11 Apr 1988
TL;DR: A VME-bus image pipeline processor for extracting vectorized contours from grey-level images in real-time is presented and a CAD-like symbolic image description is dumped into a symbol memory at pixel clock rate.
Abstract: A VME-bus image pipeline processor for extracting vectorized contours from grey-level images in real-time is presented. This 3 Giga operation per second processor uses large kernel convolvers and new non-linear neighbourhood processing algorithms to compute true 1-pixel wide and noise-free contours without thresholding even from grey-level images with quite varying edge sharpness. The local edge orientation is used as an additional cue to compute a list of vectors describing the closed and open contours in real-time and to dump a CAD-like symbolic image description into a symbol memory at pixel clock rate.

Proceedings ArticleDOI
05 Jun 1988
TL;DR: Using fundamental operators from image algebra, this paper presented simple closed-form expressions for dilation, erosion, and convolution, which appear as terms within the algebra and reveal a universal operational structure within image algebra.
Abstract: Using fundamental operators from image algebra, the authors present simple closed-form expressions for dilation, erosion, and convolution. Algebraically, these expressions appear as terms within the algebra. Moreover, the methodology for obtaining the expressions reveals a universal operational structure within image algebra, of which the three aforementioned operations are particular instances. The result is a natural parallel mechanism for computation and a representation of convolution that naturally overcomes the difficulties arising from the variability of image domains in the defining relation.
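The shared structure can be made concrete with a generic "combine each neighbourhood value with the kernel, then reduce" template: choosing (sum, multiply) gives convolution, (max, plus) grey-scale dilation, and (min, minus) grey-scale erosion. This is a 1-D illustrative sketch of the common pattern, not the paper's image-algebra formalism; the kernel is symmetric so flip conventions do not matter.

```python
def local_op(signal, kernel, combine, reduce_fn):
    # Slide the kernel over the signal ('valid' positions only), combine
    # each sample with the matching kernel entry, then reduce the window.
    n, k = len(signal), len(kernel)
    return [reduce_fn(combine(signal[i + j], kernel[j]) for j in range(k))
            for i in range(n - k + 1)]

sig = [0, 1, 3, 2, 0, 0, 4, 0]
k = [1, 2, 1]

conv = local_op(sig, k, lambda a, b: a * b, sum)   # (sum, *): linear convolution
dil  = local_op(sig, k, lambda a, b: a + b, max)   # (max, +): grey-scale dilation
ero  = local_op(sig, k, lambda a, b: a - b, min)   # (min, -): grey-scale erosion
print(conv, dil, ero)
```

The same template parallelises trivially, since every output position is an independent reduction, which is the "natural parallel mechanism" the abstract refers to.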

Journal ArticleDOI
TL;DR: The stationary-phase approximation is used to derive an approximate kernel for the propagation of a monochromatic wave specified on a curved source plane.
Abstract: We use the stationary-phase approximation to derive an approximate kernel for the propagation of a monochromatic wave specified on a curved source plane.
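For reference, the standard one-dimensional stationary-phase formula such derivations rest on (textbook form, not the paper's specific kernel): for a large parameter $k$ and a phase $\varphi$ with a single stationary point $x_0$, where $\varphi'(x_0)=0$ and $\varphi''(x_0)\neq 0$,

```latex
\int a(x)\, e^{ik\varphi(x)}\, dx \;\approx\;
a(x_0)\, e^{ik\varphi(x_0)}\,
\sqrt{\frac{2\pi}{k\,\lvert \varphi''(x_0)\rvert}}\;
e^{\, i\frac{\pi}{4}\operatorname{sgn}\varphi''(x_0)}
```

Applied to the diffraction integral over a curved source surface, the stationary point picks out the geometrical ray, which is what collapses the exact integral into an approximate propagation kernel.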