
Showing papers on "Kernel (image processing) published in 1979"


Journal ArticleDOI
TL;DR: Two solution concepts for cooperative games in characteristic-function form, the kernel and the nucleolus, are studied in their relationship to a number of other concepts, most notably the core; the unifying technical idea is to analyze the behavior of the strong ε-core as ε varies.
Abstract: Two solution concepts for cooperative games in characteristic-function form, the kernel and the nucleolus, are studied in their relationship to a number of other concepts, most notably the core. The unifying technical idea is to analyze the behavior of the strong ε-core as ε varies. One of the central results is that the portion of the prekernel that falls within the core, or any other strong ε-core, depends only on the latter's geometrical shape. The prekernel is closely related to the kernel and often coincides with it, but has a simpler definition and simpler analytic properties. A notion of “quasi-zero-monotonicity” is developed to aid in enlarging the class of games in which kernel considerations can be replaced by prekernel considerations. The nucleolus is approached through a new, geometrical definition, equivalent to Schmeidler's original definition but providing very elementary proofs of existence, unicity, and other properties. Finally, the intuitive interpretations of the two solution concepts are clarified: the kernel as a kind of multi-bilateral bargaining equilibrium without interpersonal utility comparisons, in which each pair of players bisects an interval which is either the battleground over which they can push each other aided by their best allies if they are strong or the no-man's-land that lies between them if they are weak; the nucleolus as the result of an arbitrator's desire to minimize the dissatisfaction of the most dissatisfied coalition.
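For orientation, the standard definitions behind these statements (our notation, not quoted from the paper) are as follows: for a game $(N, v)$ with payoff vector $x$, the excess of a coalition $S$ and the strong $\varepsilon$-core are

\[
  e(S, x) = v(S) - \sum_{i \in S} x_i,
  \qquad
  \mathcal{C}_\varepsilon = \Bigl\{\, x : \sum_{i \in N} x_i = v(N),\;
      e(S, x) \le \varepsilon \ \text{for all } \emptyset \ne S \subsetneq N \,\Bigr\},
\]

with $\varepsilon = 0$ recovering the usual core; the nucleolus is the imputation whose non-increasingly ordered vector of excesses is lexicographically minimal.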

494 citations


Journal ArticleDOI
01 Dec 1979
TL;DR: In this article, a previously developed general reconstruction technique is applied to the problem of reconstructing density distributions from arbitrary fan-beam data; in special cases the kernel of the general linear operator can be factored and rewritten as a function of the difference of coordinates only, and the superposition integral consequently simplifies into a convolution integral.
Abstract: In a previous paper a technique was developed for finding reconstruction algorithms for arbitrary ray-sampling schemes. The resulting algorithms use a general linear operator, the kernel of which depends on the details of the scanning geometry. Here this method is applied to the problem of reconstructing density distributions from arbitrary fan-beam data. The general fan-beam method is then specialized to a number of scanning geometries of practical importance. Included are two cases where the kernel of the general linear operator can be factored and rewritten as a function of the difference of coordinates only and the superposition integral consequently simplifies into a convolution integral. Algorithms for these special cases of the fan-beam problem have been developed previously by others. In the general case, however, Fourier transforms and convolutions do not apply, and linear space-variant operators must be used. As a demonstration, details of a fan-beam method for data obtained with uniform ray-sampling density are developed.
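In schematic form (notation ours, not the paper's), the distinction is between a general superposition integral and its shift-invariant special case:

\[
  g(t) = \int k(t, s)\, p(s)\, ds
  \quad\longrightarrow\quad
  g(t) = \int k(t - s)\, p(s)\, ds = (k * p)(t),
\]

where $p$ is the measured fan-beam data and $k$ the filtering kernel; only when $k(t, s)$ depends on $t - s$ alone can the space-variant operator be replaced by a convolution and hence by Fourier techniques.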

112 citations


Patent
22 May 1979
TL;DR: In this paper, an improved method and apparatus for digital image processing is disclosed which permits greater efficiency in implementation of digital filtering techniques, and means for implementing the method are also disclosed.
Abstract: An improved method and apparatus for digital image processing is disclosed which permits greater efficiency in the implementation of digital filtering techniques. In one implementation, specially selected small generating kernels, or masks, are sequentially convolved with a data array of pixels representative of a particular image for more efficient restoration, enhancement, or other conventional digital image processing techniques. The small generating kernels may be varied for each sequential convolution. In some implementations the output of each sequential convolution may be weighted in accordance with the filtering desired. Means for implementing the method are also disclosed.
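A minimal sketch of the general idea, not the patented implementation: an image is run through a cascade of small kernels and the stage outputs are optionally weighted and summed. The kernels and weights below are hypothetical.

import numpy as np
from scipy.signal import convolve2d

def sequential_filter(image, small_kernels, weights=None):
    """Convolve `image` with each small kernel in turn and accumulate a
    weighted sum of the intermediate results."""
    if weights is None:
        weights = [0.0] * (len(small_kernels) - 1) + [1.0]  # keep only the final stage
    result = np.zeros_like(image, dtype=float)
    current = image.astype(float)
    for k, w in zip(small_kernels, weights):
        current = convolve2d(current, k, mode="same", boundary="symm")
        result += w * current
    return result

# Example: two 3x3 box stages that together approximate a broader smoothing filter.
box = np.full((3, 3), 1 / 9.0)
image = np.random.rand(64, 64)
smoothed = sequential_filter(image, [box, box])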

62 citations


Journal ArticleDOI
TL;DR: The concepts of generalized projection and generalized Radon transform are introduced and point-spread functions are given for cases involving piecewise-uniform symmetrical source distributions and uniform detectors.
Abstract: Tomographic reconstruction has ordinarily assumed that the measurement data can be regarded as line integrals, but the finite width of the X-ray beam invalidates this assumption. The data can however be expressed in the form of integrals over a strip rather than a line. The strip integral kernel is calculated allowing for extended source and detector, as well as for nonuniform photon emission and detector sensitivity. Strip eccentricity, which occurs in practice, is also taken into account. Even if the measurement data were to cover all scanning angles, there would be imperfect reconstruction expressible as a space-variant point spread function deducible from the strip integral kernel. To deal with this it is convenient to introduce the concepts of generalized projection and generalized Radon transform. Point-spread functions are given for cases involving piecewise-uniform symmetrical source distributions and uniform detectors.
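Schematically (our notation), the line-integral model is replaced by a strip integral whose kernel encodes the finite beam width and the source and detector profiles:

\[
  p_{\mathrm{line}}(\theta, t) = \int_{L(\theta, t)} f \, d\ell
  \qquad\longrightarrow\qquad
  p_{\mathrm{strip}}(\theta, t) = \iint w_{\theta, t}(x, y)\, f(x, y)\, dx\, dy,
\]

where the weight $w_{\theta, t}$ is supported on a strip about the nominal ray and plays the role of the strip integral kernel described in the abstract.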

32 citations


Journal ArticleDOI
TL;DR: In this paper, an extreme form of the above approximations, for which multiplication by any filter element (except the central one) requires only a simple binary shift, is investigated, and a projection of M samples can be filtered in an extremely straightforward manner using only M full-precision multiplications, representing a significant advantage over convolution implementations using Fourier or number-theoretic transforms.
Abstract: The amount of computation required to convolve projection data with a filter array may be reduced by implementing the required multiplications with reduced precision or by approximating the filter with a function which is piecewise constant over intervals several times longer than the projection sampling increment. We investigate an extreme form of the above approximations, for which multiplication by any filter element (except the central one) requires only a simple binary shift. Using this approximation, a projection of M samples may be filtered in an extremely straightforward manner using only M full-precision multiplications, representing a significant advantage over convolution implementations using Fourier or number-theoretic transforms. Simulations are presented which show that in most cases only an insignificant amount of error in the reconstructed image results from the use of this form of convolution approximation.
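An illustrative sketch, not the authors' exact scheme: the off-centre filter coefficients are restricted to signed powers of two, so each off-centre tap costs a bit shift and an add, and only the central tap of each output sample needs a full-precision multiply (M multiplications for M samples). The tap table below is hypothetical.

import numpy as np

def shift_filter(projection, centre_coeff, shift_taps):
    """projection: 1-D array of non-negative integer samples.
    centre_coeff: full-precision central filter value.
    shift_taps: list of (offset, sign, shift), each tap having value sign * 2**(-shift)."""
    m = len(projection)
    out = np.zeros(m, dtype=float)
    for i in range(m):
        acc = 0
        for offset, sign, shift in shift_taps:
            j = i + offset
            if 0 <= j < m:
                acc += sign * (int(projection[j]) >> shift)  # shift replaces a multiply
        out[i] = centre_coeff * projection[i] + acc           # the single true multiply per sample
    return out

# Hypothetical example: taps of -1/4 at offsets +-1 and -1/16 at offsets +-2.
proj = np.array([10, 40, 90, 160, 90, 40, 10])
filtered = shift_filter(proj, centre_coeff=0.625,
                        shift_taps=[(-1, -1, 2), (1, -1, 2), (-2, -1, 4), (2, -1, 4)])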

11 citations


Journal ArticleDOI
TL;DR: In this article, a modified Galerkin method is used to approximate the solution of nonlinear Volterra integral equations of the second kind with smooth kernels; the method is then generalized to include such equations with singular, monotone kernels of convolution type.
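For orientation (notation ours), a nonlinear Volterra integral equation of the second kind with a convolution-type kernel has the form

\[
  u(t) = f(t) + \int_0^t k(t - s)\, g\bigl(s, u(s)\bigr)\, ds, \qquad 0 \le t \le T,
\]

and the singular, monotone case referred to above allows kernels such as $k(t) = t^{-\alpha}$ with $0 < \alpha < 1$, which are integrable but unbounded at the origin.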

10 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the Ramachandran-Lakshminarayanan convolution kernel has one zero between each non-zero value in the spatial domain.
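The claim can be read off from the form in which this kernel is usually quoted (sampling interval $\tau$, sample index $n$; standard formula, not taken from the paper itself):

\[
  q(n\tau) =
  \begin{cases}
    \dfrac{1}{4\tau^2}, & n = 0,\\[4pt]
    0, & n \ \text{even},\ n \ne 0,\\[4pt]
    -\dfrac{1}{\pi^2 n^2 \tau^2}, & n \ \text{odd},
  \end{cases}
\]

so the samples at even $n \ne 0$ all vanish, alternating with the non-zero odd-indexed samples.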

7 citations


Journal ArticleDOI
TL;DR: An algorithm is described to transform sampled CT convolution kernels into "binary" kernels having values which are even powers of 2, so that multiplication in the convolution operation can be replaced by a much faster shift operation.
Abstract: An algorithm is described to transform sampled CT convolution kernels into "binary" kernels having values which are even powers of 2. Multiplication in the convolution operation can then be replaced by a much faster shift operation. A technique for further reducing the number of shift and addition operations using new computationally fast CT kernels is also given. The binary kernels are computationally faster by about 80-90% on a conventional computer. Reconstructed CT images using both conventional kernels and their transformed binary equivalents are compared.
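A rough sketch of the idea, not the paper's algorithm: each sampled kernel value is snapped to the nearest signed power of two, so that on integer projection data every multiply in the convolution can be replaced by a bit shift.

import numpy as np

def to_binary_kernel(kernel):
    """Return (binary_kernel, exponents): each nonzero tap v is replaced by
    sign(v) * 2**round(log2(|v|))."""
    binary = np.zeros_like(kernel, dtype=float)
    nonzero = kernel != 0
    exponents = np.round(np.log2(np.abs(kernel[nonzero]))).astype(int)
    binary[nonzero] = np.sign(kernel[nonzero]) * np.power(2.0, exponents)
    return binary, exponents

# A fixed-point convolver would then apply a tap with exponent -k to an integer
# sample x as (x >> k), with the sign handled by an add or subtract.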

5 citations


Proceedings ArticleDOI
06 Jul 1979
TL;DR: In this paper, the shape of the convolution kernel is exploited for reconstruction from truncated projections and for ultra-fast approximate projection filtering, and the results of simulations indicate that these techniques would be of value in computed tomography.
Abstract: The convolution/back-projection algorithm has certain characteristics which enable it to reconstruct useful images when the projection data or data-processing operations are incomplete or when they are approximated rather crudely. In this paper we show how the shape of the convolution kernel can be exploited for reconstruction from truncated projections and for ultra-fast approximate projection filtering. The results of simulations indicate that these techniques would be of value in practical computed tomography.

4 citations


Book ChapterDOI
Paul Berner1
TL;DR: In this paper, it is shown that the kernel characterization and surjectivity result hold for the larger class of continuous convolution operators when E is an open and compact surjective limit of appropriate spaces, a setting that includes Schwartz's space of distributions.
Abstract: Publisher Summary This chapter presents a study of convolution operators and surjective limits. Convolution operators on a space of entire functions have been studied by a variety of authors, including Boland, who showed that if E is a quasi-complete nuclear and dual nuclear space, then a non-zero convolution operator on H(E), continuous for the compact open topology, satisfies Malgrange's characterization of the kernel, and if E is also a dual Frechet nuclear space, then such an operator is also surjective. The chapter studies the case in which E is an open and compact surjective limit of appropriate spaces. The results make it possible to draw Boland's conclusions for nuclear spaces E which are not dual metric and not necessarily dual nuclear. This includes the important case of Schwartz's space of distributions, and for this case and others it is shown that the kernel characterization and the surjectivity result hold for the larger class of continuous convolution operators. Thus, the chapter presents two affirmative answers to a question of Boland.

3 citations


Proceedings ArticleDOI
20 Aug 1979
TL;DR: In this paper, the equivalence of 3 × 3 linear discrete convolution filtering to spatial frequency domain multiplication is reviewed, and real-time spatial filtering is readily implemented using a multitapped CCD delay line coefficients on-chip with a CCD imager.
Abstract: The equivalence of 3 x 3 linear discrete convolution filtering to spatial frequency domain multiplication is reviewed in this paper. DPCM (Differential Pulse Code Modulation) is simply a low pass filter in the frequency domain. More complex 3 x 3 filters allow the choice of a band stop locus in the frequency domain in order to reduce unwanted spatial frequency components. This real time spatial filtering is readily implemented using a multitapped CCD delay line coefficients on-chip with a CCD imager.
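A quick illustration of the stated equivalence (generic high-pass kernel, not the CCD chip's coefficients): the frequency response of a 3x3 convolution kernel is its zero-padded 2-D DFT, and multiplying an image's spectrum by that response performs the corresponding (circular) convolution in the frequency domain.

import numpy as np

kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)

padded = np.zeros((64, 64))
padded[:3, :3] = kernel                    # kernel anchored at the origin
H = np.fft.fft2(padded)                    # spatial-frequency response

image = np.random.rand(64, 64)
filtered = np.real(np.fft.ifft2(np.fft.fft2(image) * H))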

Proceedings ArticleDOI
01 Apr 1979
TL;DR: A discrete method for the convolution of odd-order splines is presented; the resulting algorithms are similar to those of fast convolution and contain the latter as a special case, since bandlimited functions may be interpreted as infinite-order splines.
Abstract: A discrete method for the convolution of odd-order splines is presented. The resulting algorithms are similar to those of fast convolution and contain the latter as a special case, since bandlimited functions may be interpreted as infinite-order splines. Besides their good approximation properties, finite-order spline interpolations offer more flexibility, since discontinuities in the signals and their derivatives, as well as strictly finite durations, can be accommodated. In convolving a finite-duration signal with an infinite-duration one, overlap techniques can be used which do not require more computation time and storage than conventional fast convolution algorithms.
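For context, the conventional FFT-based overlap-add scheme that the overlap remark alludes to looks like the sketch below (generic version, not the spline-domain algorithm developed in the paper).

import numpy as np

def overlap_add(signal, kernel, block=256):
    """Block-wise linear convolution of a long signal with a short kernel."""
    n = block + len(kernel) - 1                 # linear-convolution length per block
    out = np.zeros(len(signal) + len(kernel) - 1)
    K = np.fft.rfft(kernel, n)
    for start in range(0, len(signal), block):
        seg = signal[start:start + block]
        y = np.fft.irfft(np.fft.rfft(seg, n) * K, n)
        out[start:start + n] += y[:min(n, len(out) - start)]
    return out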

Proceedings ArticleDOI
O.J. Tretiak1
06 Nov 1979
TL;DR: The substance of the paper consists of mathematical transformations of the above problem which lead to efficient algorithms and novel edge detection operators.
Abstract: A theory is proposed for the structure of edges in a two dimensional visual field. This theory consists of models of ideal edges, and of distortion criteria for the evaluation of the accuracy with which a visual field fits this model. In this formulation the edge is specified by a curve described by a pair of parametric equations, and the line or edge detection problem is solved by minimizing a cost function over the set of curves. The substance of the paper consists of mathematical transformations of the above problem which lead to efficient algorithms and novel edge detection operators.
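Schematically (our notation, not the paper's), the formulation is variational: the edge is a parametric curve and detection selects the curve minimizing a distortion cost,

\[
  \gamma(s) = \bigl(x(s),\, y(s)\bigr), \qquad
  \hat{\gamma} = \arg\min_{\gamma} \; J(\gamma; f),
\]

where $f$ is the observed visual field and $J$ measures how poorly $f$ fits the ideal edge model along $\gamma$.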

Journal ArticleDOI
TL;DR: In this paper, a technique is presented for constructing uniform pointwise approximate solutions to the initial-value problem, where the averaging concept is defined in terms of the kernel of the original operator and does not depend upon an averaging of the elements in its domain.
Abstract: A technique is presented for constructing uniform pointwise approximate solutions to the initial-value problem $y'' + (\lambda^2 + \varepsilon\phi(x))\,y(x) = 0$, $y(0) = y_0$, $y'(0) = y_1$ on the interval $|x| \leqq 1/|\varepsilon|$ for almost periodic functions $\phi(x)$. The system is initially expressed as a Volterra integral equation which is further developed in terms of the notion of averaged Volterra operators. The averaging concept, which leads to a convolution operator, is defined in terms of the kernel of the original operator and does not depend upon an averaging of the elements in its domain. This permits pointwise estimates to be made in the resulting equivalent integral equation. Successive application of the averaging yields a sequence of integral equations which converge to one of convolution type. Each order of truncation is solvable by transform techniques. Only the first order is rigorously analyzed for a broad class of almost periodic functions and yields estimates o...
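For reference, the standard reduction of this initial-value problem to a Volterra integral equation of the second kind (variation of parameters; not taken from the paper itself) is

\[
  y(x) = y_0 \cos(\lambda x) + \frac{y_1}{\lambda}\sin(\lambda x)
         - \frac{\varepsilon}{\lambda} \int_0^x \sin\bigl(\lambda (x - s)\bigr)\, \phi(s)\, y(s)\, ds,
\]

whose kernel $\sin(\lambda(x - s))/\lambda$ is the object the averaging construction operates on.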