
Showing papers on "Kernel (image processing) published in 1987"


Patent
28 Sep 1987
TL;DR: In this paper, a file and resource serving and locking service is provided by a user level request server on the intermediate computer to make requested files or resources available on or through the intermediate computers to the operating system.
Abstract: A computer network (FIG. 1) comprises a plurality of personal computers (PCs 10), groups of which are each logically connected to a different one of a plurality of intermediate computers (11). At least one of the intermediate computers is connected to a mainframe computer (12). File and resource serving and locking services are provided transparently to PC user programs (200). Certain user service requests ("open file" and "exit" calls) on each PC to the PC operating system means (20,22) are trapped by an operating system kernel-level patch (21), and corresponding requests are sent to a kernel-level driver (31) on the associated intermediate computer. The driver collects requests from all PCs associated with the intermediate computer and funnels them to a user-level request server (32) on the intermediate computer. The request server performs requested file and resource serving and locking services in an effort to make requested files or resources available on or through the intermediate computer to the PC's operating system. The request server calls upon a NETSVR process (33) to find requested files and resources on other intermediate computers and to transfer requested files to its intermediate computer. The request server calls upon an APISVR process (34) to obtain requested files unavailable on intermediate computers (11) from a database (13) of the mainframe computer. The request server returns notices of its success or failure to the patch through the driver. In response to the notices, the patch forwards the trapped user requests to the PC operating system to service the requests. The PC operating system views and uses the associated intermediate computer as a peripheral device to satisfy user file or resource requests.

167 citations


Journal ArticleDOI
TL;DR: This work presents a technique for computing the convolution of an image with LoG (Laplacian-of-Gaussian) masks, with the paradoxical result that the computation time decreases when σ increases.

Abstract: We present a technique for computing the convolution of an image with LoG (Laplacian-of-Gaussian) masks. It is well known that a LoG of variance σ can be decomposed as a Gaussian mask and a LoG of variance σ1 < σ. We take advantage of the specific spectral characteristics of these filters in our computation: the LoG is a bandpass filter; we can therefore fold the spectrum of the image (after low-pass filtering) without loss of information, which is equivalent to reducing the resolution. We present a complete evaluation of the parameters involved, together with a complexity analysis that leads to the paradoxical result that the computation time decreases when σ increases. We illustrate the method on two images.

132 citations


Journal ArticleDOI
TL;DR: The basic idea of the proposed scheme is to apply the 1-D systolic concept to 2-D convolution on a mesh structure to make the scheme suitable for VLSI implementation.
Abstract: In this correspondence, a parallel 2-D convolution scheme is presented. The processing structure is a mesh connected array processor consisting of the same number of simple processing elements as the number of pixels in the image. For most windows considered, the number of computation steps required is the same as that of the coefficients of a convolution window. The proposed scheme can be easily extended to convolution windows of arbitrary size and shape. The basic idea of the proposed scheme is to apply the 1-D systolic concept to 2-D convolution on a mesh structure. The computation is carried out along a path called a convolution path in a systolic manner. The efficiency of the scheme is analyzed for windows of various shapes. The ideal convolution path is a Hamiltonian path ending at the center of the window, the length of which is equal to the number of window coefficients. The simple architecture and control strategy make the proposed scheme suitable for VLSI implementation.
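The "one computation step per window coefficient" schedule can be simulated in software: each step shifts the whole image once and performs one multiply-accumulate everywhere, mimicking a synchronous mesh step. This is a sketch of the scheduling idea only, not the authors' VLSI array; `np.roll`'s wraparound stands in for whatever border handling the hardware mesh would use.

```python
import numpy as np

def conv2_stepwise(img, kernel):
    """2-D windowed sum as one accumulate step per kernel coefficient.

    Each loop iteration models one systolic time step: every cell (pixel)
    adds one coefficient times the value arriving from a fixed neighbour
    direction. Borders wrap via np.roll (illustrative choice).
    """
    kh, kw = kernel.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            # correlation-style indexing; for symmetric windows the
            # convolution/correlation distinction is moot
            shifted = np.roll(img, (kh // 2 - i, kw // 2 - j), axis=(0, 1))
            out += kernel[i, j] * shifted
    return out
```

The number of steps is exactly the number of window coefficients, matching the abstract's claim for the systolic schedule.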

80 citations


Journal ArticleDOI
TL;DR: Physically realizable, optimal generating kernels are presented and show a better command of details than the ones generated by a simple 4 × 4 averaging, or a computationally equivalent kernel.
Abstract: Construction of image pyramids is described as a two-dimensional decimation process. Frequently employed generating kernels are compared to the optimal kernel that assures minimal information loss after the resolution reduction, i.e., the one corresponding to an ideal low pass filter. Physically realizable, optimal generating kernels are presented. The amount of computation required for generation of the image pyramid can be reduced significantly by employing half-band filters as components of the optimal kernel. Image pyramids generated by the optimal kernel show a better command of details than the ones generated by a simple 4 × 4 averaging, or a computationally equivalent kernel.
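A pyramid reduce step of the kind compared in the abstract (separable low-pass generating kernel followed by 2× decimation) can be sketched as follows. The 5-tap binomial kernel here is only an illustrative stand-in, not the paper's optimal kernel.

```python
import numpy as np

def reduce_once(img, k1d):
    """One pyramid level: separable low-pass filter, then 2x decimation."""
    pad = len(k1d) // 2
    # filter rows then columns, with reflect padding (illustrative choice)
    tmp = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode="reflect"), k1d, "valid"),
        1, img)
    tmp = np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode="reflect"), k1d, "valid"),
        0, tmp)
    return tmp[::2, ::2]

binomial = np.array([1, 4, 6, 4, 1]) / 16.0   # a common generating kernel
level1 = reduce_once(np.ones((8, 8)), binomial)
```

Separability is what makes the half-band optimization in the paper pay off: each level costs two 1-D passes instead of one 2-D convolution.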

76 citations


Journal ArticleDOI
TL;DR: In this article, a technique for the evaluation of static and dynamic potentials due to source distributions defined on domains with simple shape is presented, where the domains considered are polyhedral regions and, in two-dimensional problems, plane polygons, on which uniform or linearly varying source distributions are defined.
Abstract: A technique for the evaluation of static and dynamic potentials due to source distributions defined on domains with simple shape is presented. The domains considered are polyhedral regions and, in two-dimensional problems, plane polygons, on which uniform or linearly varying source distributions are defined. It is shown how three-dimensional (two-dimensional) potential integrals are always reducible to surface (line) integrals with nonsingular kernel, by use of a nonlinear transformation for the integration variables that permits analytic integration. In the static case the integration on the boundary is performed analytically and closed form results are given. In the dynamic case the expressions of the boundary integrals are given in a form suitable for numerical integration. The use of matrix notation allows for very compact expressions readily translatable into computer programs.

64 citations


Journal ArticleDOI
TL;DR: An optimum smoothing is defined by minimizing the width of the different functions approximating the desired Dirac distribution; to be physically meaningful, the results impose inequalities on the successive derivatives of φ equivalent to those used in obtaining the classical limit of the corresponding quantum problem.

Abstract: A compromise is found between the different requirements that we would like a time-frequency distribution to fulfill, namely, positivity and closeness to the Dirac distribution for the unimodular signal s(t) = exp(iφ(t)) (the fulfillment of the marginal conditions being of less interest in signal theory). Starting from the usual Wigner-Ville distribution, we define an optimum smoothing by minimizing the width of the different functions approximating the desired Dirac distribution. The smoothing is obtained by convolution with a double Gaussian of widths σ_t and σ_ω such that σ_t σ_ω = 1/2. Two possibilities appear: in the first, we do not introduce any correlation between t and ω in the convolution kernel, and obtain a simple result. In the second, extrapolating the frequency variation, and still using a Gaussian, we obtain a better result, although the smoothing process becomes more complex. These results, to be physically meaningful, impose inequalities on the successive derivatives of φ which are equivalent to those used in obtaining the classical limit for the corresponding quantum problem.
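The constraint σ_t σ_ω = 1/2 is the minimum-uncertainty condition attained by a Gaussian. A small numerical check of that product (not the paper's smoothing procedure itself, just the time-bandwidth product of a Gaussian window) can be sketched as:

```python
import numpy as np

dt = 0.01
t = np.arange(-20, 20, dt)
s_t = 1.5
g = np.exp(-t**2 / (2 * s_t**2))      # Gaussian window of (amplitude) width s_t

# effective widths as normalized second moments of |g|^2 and |G|^2
p_t = g**2 / np.sum(g**2)
sig_t = np.sqrt(np.sum(p_t * t**2))

G = np.fft.fftshift(np.fft.fft(g))
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(t), dt))
p_w = np.abs(G)**2 / np.sum(np.abs(G)**2)
sig_w = np.sqrt(np.sum(p_w * w**2))

product = sig_t * sig_w               # should equal 1/2 for a Gaussian
```

Any non-Gaussian window gives a strictly larger product, which is why the Gaussian is the natural choice for the smoothing kernel.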

60 citations


Journal ArticleDOI
TL;DR: In this article, an image processing algorithm was developed for detecting stress cracks in corn kernels using a commercial vision system and the kernel images, when processed using the algorithm developed, produced white streaks corresponding to the stress cracks.
Abstract: An image processing algorithm was developed for detecting stress cracks in corn kernels using a commercial vision system. White light in back-lighting mode, with a black-coated background having a small aperture for the light, provided the best viewing conditions. The kernel images, when processed using the algorithm developed, produced white streaks corresponding to the stress cracks. Double stress cracks were the easiest to detect. Careful positioning of the kernel over the lighting aperture was necessary for satisfactory detection of single and multiple stress cracks.

38 citations


Journal ArticleDOI
TL;DR: Although resolution can be improved in this fashion, the step-edge position and intensity estimates thus determined may be subject to systematic biases and the higher resolution performance is accompanied by lower robustness to noise.
Abstract: Most step-edge detectors are designed to detect locally straight edge-segments which can be isolated within the operator kernel. While it can easily be demonstrated that a cross-sectional support of at least 4 pixels is required for the unambiguous detection of a step-edge, edges which cannot be isolated within windows having this width can nevertheless be resolved. This is achieved by preceding the step-edge detection process by image-intensity interpolation. Although resolution can be improved in this fashion, the step-edge position and intensity estimates thus determined may be subject to systematic biases. Also, the higher resolution performance is accompanied by lower robustness to noise.
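The idea of recovering sub-sample edge position by interpolating before (or equivalently, around) the detection step can be sketched in 1-D. The sigmoid signal and the quadratic peak interpolation below are illustrative choices, not the paper's exact method; note the abstract's caveat that such estimates can carry systematic bias.

```python
import numpy as np

# A smoothed step sampled on an integer grid; the true edge sits at
# x = 3.3, i.e. between two samples (illustrative signal)
x = np.arange(8)
s = 1.0 / (1.0 + np.exp(-4.0 * (x - 3.3)))

d = np.gradient(s)            # central-difference derivative estimate
i = int(np.argmax(d))         # coarse (whole-sample) edge location

# Quadratic interpolation of the derivative peak -> sub-sample offset
off = 0.5 * (d[i - 1] - d[i + 1]) / (d[i - 1] - 2 * d[i] + d[i + 1])
edge = i + off
```

With noise added, the same interpolation amplifies derivative errors, which is the resolution-vs-robustness trade-off the abstract describes.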

36 citations


Journal ArticleDOI
TL;DR: In this article, an integrodifferential equation of the Volterra type is considered under the action of an L²(0, T; L²(Γ))-boundary control; the controllability results obtained in [17] for the underlying reference model, associated with a trivial convolution kernel, carry over to the model under consideration without any smallness assumption concerning the memory kernel.

Abstract: An integrodifferential equation of the Volterra type is considered under the action of an L²(0, T; L²(Γ))-boundary control. By harmonic analysis arguments it is shown that the controllability results obtained in [17] for the underlying reference model associated with a trivial convolution kernel carry over to the model under consideration without any smallness assumption concerning the memory kernel.

34 citations


Proceedings ArticleDOI
01 Apr 1987
TL;DR: Two sets of 1-D FIR digital filtering architectures are proposed to reduce computational complexity and to increase throughput rate and each architecture is regular in structure with a high degree of parallelism and pipelining.
Abstract: Two sets of 1-D FIR digital filtering architectures are proposed to reduce computational complexity and to increase throughput rate. Each architecture is regular in structure with a high degree of parallelism and pipelining. Consequently, they are suitable for VLSI or multiprocessor implementation. In the paper, infinite linear convolution is first converted into finite-length linear or cyclic convolution in a polynomial ring. Certain algorithms used to reduce computational complexity in finite-length linear or cyclic convolution can then be applied to reduce the computational complexity of the polynomial convolution and give the resulting filter structure.

26 citations


Journal ArticleDOI
TL;DR: This paper adds quantitative confirmation to the assertions of previous writers that the phase of the signal received in the aperture contains the preponderance of the information useful for image formation.
Abstract: This paper adds quantitative confirmation to the assertions of previous writers that the phase of the signal received in the aperture contains the preponderance of the information useful for image formation. An easily calculable metric is established for judging the extent of the alteration of an image due to an arbitrary distortion of the signal in the aperture or of the imaging kernel in the signal processor. Given some known or measurable amplitude or phase nonlinearity in the detector at the front end of the measurement system, it is possible to judge whether such nonlinearity is tolerable or whether it must be removed. Subjective viewing tests confirm the validity of the metric. High angular resolution microwave images are used to test the theory. It is demonstrated that hard limiting the input signals to a phased array preserves much of the image integrity; amplitude-only information, on the other hand, destroys it. Only one bit of phase information sometimes suffices. Two bits of signal phase and 2-bit phase quantization of the exponential kernel in the signal processor results in an extraordinarily simple signal processor that produces surprisingly good imagery.
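The central claim (phase carries most of the image-forming information, amplitude much less) can be reproduced with a small Fourier experiment: reconstruct an image from its phase-only spectrum and from its amplitude-only spectrum, then compare each against the original. The 32×32 rectangle image and the correlation metric are illustrative choices, not the paper's microwave data or its metric.

```python
import numpy as np

rng = np.random.default_rng(0)
# A structured "image": a bright rectangle on a dark background plus noise
img = np.zeros((32, 32))
img[8:20, 10:25] = 1.0
img += 0.05 * rng.standard_normal(img.shape)

F = np.fft.fft2(img)

# Phase-only: discard amplitude, keep phase (unit-modulus spectrum)
phase_only = np.real(np.fft.ifft2(np.exp(1j * np.angle(F))))
# Amplitude-only: discard phase
amp_only = np.real(np.fft.ifft2(np.abs(F)))

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

r_phase, r_amp = ncc(phase_only, img), ncc(amp_only, img)
```

The phase-only reconstruction keeps the rectangle's outline, while the amplitude-only one collapses toward a centered blob, consistent with the paper's observation that hard limiting (which preserves phase) retains image integrity and amplitude-only information destroys it.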

Proceedings ArticleDOI
01 Apr 1987
TL;DR: This work develops the approach by examining statistical properties of simple linear estimators involving derivatives of a low-pass Gaussian kernel, and investigates the effectiveness of various combinations of the partial derivative estimates in detecting blurred steps and lines.
Abstract: Edge detection in sampled images may be viewed as a problem of numerical differentiation. In fact, most point edge operators function by estimating the local gradient or Laplacian. Adopting this view, Torre and Poggio [2] apply regularization techniques to the problem of computing derivatives, and arrive at a class of simple linear estimators involving derivatives of a low-pass Gaussian kernel. In this work, we further develop the approach by examining statistical properties of such estimators, and investigate the effectiveness of various combinations of the partial derivative estimates in detecting blurred steps and lines. We also touch briefly on the problem of sensitivity to various types of edge structures, and develop an isotropic operator with reduced sensitivity to isolated spikes.
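A minimal sketch of the regularized-differentiation viewpoint: a sampled derivative-of-Gaussian filter applied to a noisy step. The widths, radius, and test signal are illustrative assumptions, not the paper's estimators.

```python
import numpy as np

def d_gauss(sigma, radius):
    """Sampled first derivative of a Gaussian (a regularized d/dx)."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    # normalize so that convolving a unit step gives a unit-area bump
    return -x * g / (sigma**2 * g.sum())

rng = np.random.default_rng(1)
step = np.where(np.arange(64) < 32, 0.0, 1.0) + 0.05 * rng.standard_normal(64)

# derivative estimate = signal convolved with the Gaussian-derivative kernel
resp = np.convolve(step, d_gauss(2.0, 6), mode="same")
edge = int(np.argmax(np.abs(resp)))
```

The Gaussian's low-pass character is what keeps the derivative estimate stable against the added noise; a bare finite difference on the same signal would respond strongly to every noise sample.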

Journal ArticleDOI
TL;DR: The choice of parameters used in selectively enhancing the mediastinum was investigated, and the amplification constant was examined in light of its effect on both structure and noise in the image.
Abstract: Previous work has demonstrated the potential for adaptive filtration in processing digital chest images. The technique uses the histogram of the image to determine the pixels (and regions) in which edge enhancement is applied. This paper extends that work by investigating the choice of parameters used in selectively enhancing the mediastinum. The image is separated into its low- and high-frequency components by convolution with a square kernel. The effect of kernel size was studied; a 17 x 17 mm kernel was found to be sufficient to include the frequencies of interest. A serious deficiency in previous implementations of this technique is the existence of ringing artifacts at the juncture of the lung and mediastinum. These result in part from the use of a step function to specify the low-frequency image intensity above which high frequencies are amplified. By replacing this step with a smoother (cosine) function, the artifact can be removed. Finally, the amplification constant was examined in light of its effect on both structure and noise in the image.
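The step-vs-cosine transition can be sketched as a weight function on the low-frequency intensity that gates how much high-frequency content is amplified. The `lo`/`hi` thresholds and the raised-cosine form below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def enhance_weight(low, lo=0.3, hi=0.5):
    """Amplification weight as a function of low-frequency intensity.

    A hard step at `hi` causes ringing at the lung/mediastinum border;
    a raised-cosine ramp between `lo` and `hi` removes it by turning
    the enhancement on gradually. Thresholds are illustrative.
    """
    t = np.clip((low - lo) / (hi - lo), 0.0, 1.0)
    return 0.5 * (1.0 - np.cos(np.pi * t))       # ramps 0 -> 1 smoothly

# unsharp-mask style use: enhanced = img + enhance_weight(low) * gain * (img - low)
```

Below `lo` nothing is amplified, above `hi` the full gain applies, and the cosine ramp in between has continuous value and slope, which is what suppresses the ringing artifact.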

Journal ArticleDOI
TL;DR: In this paper, the Backus-Gilbert or averaging kernel inversion of linear integral equations is applied to numerical differentiation, Laplace transform inversion and to a geophysical inverse problem arising in electromagnetic sounding.
Abstract: The paper deals with the Backus-Gilbert or averaging kernel inversion of linear integral equations. The theoretical background of the method is developed: it is shown that the method leads to a sequence of linear pointwise estimates, which are asymptotically unbiased when no error is present. A numerical implementation is given. Finally, the algorithm is applied to numerical differentiation, Laplace transform inversion, and to a geophysical inverse problem arising in electromagnetic sounding.

Journal ArticleDOI
TL;DR: An implementation of the two-dimensional convolution algorithm that is suitable for image-processing applications is presented and the execution time is shown to be reduced due to an efficient data organization and by taking into account the fact that the pixel values in an image assume a fixed number of discrete levels.
Abstract: An implementation of the two-dimensional convolution algorithm that is suitable for image-processing applications is presented. The execution time of the algorithm is shown to be reduced due to an efficient data organization and by taking into account the fact that the pixel values in an image assume a fixed number of discrete levels.
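A sketch of the lookup-table idea implied by "a fixed number of discrete levels": with 8-bit pixels, each window coefficient has only 256 possible products, which can be precomputed once so the inner loop becomes table lookups and additions. The actual data organization in the paper may differ; this only illustrates the principle.

```python
import numpy as np

def conv2_lut(img_u8, kernel):
    """2-D windowed sum using per-coefficient lookup tables.

    Because pixels take only 256 discrete values, each coefficient's
    products are precomputed once (a 256-entry table per coefficient),
    replacing per-pixel multiplies with lookups.
    """
    kh, kw = kernel.shape
    luts = kernel[:, :, None] * np.arange(256)        # (kh, kw, 256) tables
    H, W = img_u8.shape
    out = np.zeros((H - kh + 1, W - kw + 1))          # "valid" output region
    for i in range(kh):
        for j in range(kw):
            win = img_u8[i:i + out.shape[0], j:j + out.shape[1]]
            out += luts[i, j][win]                    # lookup, then accumulate
    return out
```

The table build costs kh·kw·256 multiplies once, after which the per-pixel cost is pure indexing and addition.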

Proceedings ArticleDOI
K. Doshi1, Peter Varman1
01 Jun 1987
TL;DR: The design is simple and hence a good candidate for VLSI integration; its one-dimensional organization and unidirectional data flow result in good fault-tolerance for the array.
Abstract: This paper describes a modular, systolic design for two-dimensional convolution, which is a frequent and computationally intensive operation in low-level image processing. The design consists of a one-dimensional array of homogeneous cells, each with a fixed amount of storage. The paper also presents schemes by which a design with a limited number of cells can be used to implement convolutions of varying kernel sizes with optimal throughput. The design is simple and hence a good candidate for VLSI integration; its one-dimensional organization and unidirectional data flow characteristics result in good fault-tolerance for the array.

Journal ArticleDOI
TL;DR: In this article, the usual higher Hilbert-Riesz transforms are principal value convolution transforms with kernels Y_j(x)|x|^(−j−n), where Y_j is a homogeneous harmonic polynomial of degree j.

Patent
07 Apr 1987
TL;DR: In this paper, the advantages of the convolution calculation are combined with the advantages of recursive filtering in a computer tomography apparatus.
Abstract: In a computer tomography apparatus in accordance with the invention, the advantages of the convolution calculation are combined with the advantages of recursive filtering. The processing unit for the measurement data includes a convolution filter which has only 64 convolution factors and which performs the central part of a convolution calculation (the number of measurement data amounts to, for example, 512 or 1024), and a parallel-operating recursive filter which approximates the remainder of the convolution calculation, situated outside the central part, by recursive filtering of the measurement data. Using five different multiplication factors and a corresponding number of attenuation factors, ample accuracy is achieved. By varying only the 64 convolution factors, a large number of different filters can be realized, without it being necessary to change the factors for recursive filtering.

Proceedings ArticleDOI
01 Apr 1987
TL;DR: Scale-space filtering is a multiresolution filtering, defined as the convolution of a random waveform f(x) with the Gaussian kernel w(x,σ); it is confirmed that the structure line is effective for representing waveforms hierarchically.
Abstract: Scale-space filtering is a multiresolution filtering, defined as the convolution of a random waveform f(x) with the Gaussian kernel w(x,σ). The set of filtered waveforms f(x,σ), called the generalized waveform of f(x), is very useful for the structural analysis of waveforms. In this paper, we introduce an analytic line, named the structure line, on the surface of the generalized waveform f(x,σ). The structure line describes the relation between the convex and concave regions of the generalized waveform. It is defined by derivatives of the generalized waveform and has the same topological structure as a trinary tree. The properties of the structure line are discussed and some examples are shown. It is confirmed that the structure line is effective for representing waveforms hierarchically.
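The coarse-to-fine structure that makes a tree representation possible rests on the scale-space property that Gaussian smoothing only removes features as σ grows. A sketch (illustrative signal and σ values, not the paper's structure-line construction) that counts inflections of f(x,σ) at increasing scale:

```python
import numpy as np

def smooth(f, sigma, dx=1.0):
    """Convolve f with a sampled Gaussian of the given width."""
    x = np.arange(-4 * sigma, 4 * sigma + dx, dx)
    g = np.exp(-x**2 / (2 * sigma**2)); g /= g.sum()
    return np.convolve(np.pad(f, len(x) // 2, mode="edge"), g, "valid")

def n_inflections(f):
    """Count sign changes of the discrete second derivative."""
    d2 = np.diff(f, 2)
    return int(np.sum(np.sign(d2[:-1]) != np.sign(d2[1:])))

rng = np.random.default_rng(2)
f = rng.standard_normal(256)                    # a random waveform
counts = [n_inflections(smooth(f, s)) for s in (1.0, 4.0, 16.0)]
```

As σ grows, convex/concave regions merge and never split, which is exactly what lets the structure line organize them into a tree.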

Patent
19 May 1987
TL;DR: In this article, a kernel holding the required frequency characteristic performs aperture correction and interpolation simultaneously, enabling highly accurate correction of degradation due to shape distortion and the sensor aperture characteristic.
Abstract: PURPOSE: To realize a kernel with the required frequency characteristic that performs aperture correction and interpolation simultaneously, and to correct degradation due to shape distortion, etc. with high accuracy, by providing a processing means having a distortion-correction-quantity calculating part, etc. CONSTITUTION: An image converted to an electrical signal by a sensor is digitized through an A/D converter and stored in an observation image buffer 13. A processor 14 reads the image data from the buffer 13, applies a sensor-aperture-characteristic correction process and an image geometric-distortion correction process, and displays the result on a display device 15 or stores it in a correction image buffer 16. The geometric distortion of the observation image is calculated at a distortion-correction-quantity calculating part 21 in the processor 14, using a reference geometric pattern 24 and the content of the buffer 13. A kernel 23 with the required frequency characteristic, applied through a resampler 22, then performs the aperture correction and the interpolation simultaneously, so that the shape distortion and the degradation due to the sensor aperture characteristic are corrected with high accuracy.

01 Jan 1987
TL;DR: A new approach of direct sample-by-sample DCT has been adopted: using distributed arithmetic, multipliers have been replaced by adders, and the memory requirements have been significantly reduced by exploiting the symmetry and periodicity of the DCT kernel.
Abstract: The Discrete Cosine Transform (DCT) [1] is presently the best-known transform image encoder that performs closest to the theoretically optimal Karhunen-Loève transform [1]. The kernel of the DCT has both periodicity and symmetry properties. The most efficient DCT implementation to date is due to B. G. Lee [2]. His Fast Cosine Transform algorithm [2] is based on a decimation-in-time scheme resulting in butterfly structural units with some similarity to the Fast Fourier Transform (FFT). In this paper [3] a new approach of direct sample-by-sample DCT has been adopted. Using distributed arithmetic [4], multipliers have been replaced by adders, and the memory requirements have been significantly reduced by exploiting the symmetry and periodicity of the DCT kernel.
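The symmetry and periodicity that allow the memory reduction can be checked directly on the DCT-II kernel cos(π(2n+1)k/2N): extending the sample index n shows the kernel repeats with period 2N and mirrors about n = N − 1/2, so only a fraction of the table need be stored. (This verifies the kernel properties only, not the paper's distributed-arithmetic design.)

```python
import numpy as np

N = 8
n = np.arange(4 * N)[:, None]        # sample index, extended past one block
k = np.arange(N)[None, :]            # frequency index
C = np.cos(np.pi * (2 * n + 1) * k / (2 * N))   # DCT-II kernel

# periodicity: adding 2N to n adds 2*pi*k to the argument, so C repeats
periodic = np.allclose(C[:2 * N], C[2 * N:])
# symmetry: C[2N-1-n, k] == C[n, k] (mirror about n = N - 1/2)
symmetric = np.allclose(C[:N], C[2 * N - 1:N - 1:-1])
```

Together the two properties mean a table of N rows suffices to reproduce all 2N distinct kernel values per frequency, with the rest obtained by index reflection.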

01 Jan 1987
TL;DR: This paper presents an abbreviated description of the Clouds architecture and its relation to the operation of the invocation mechanism, including remote invocation, per-object access control, and location-independent invocation.
Abstract: Many distributed operating systems have been developed in recent years based on the action/object paradigm. The Clouds multicomputer system provides a fault-tolerant distributed computing environment built from passive data objects, fault-atomic transactions, processes, and a global kernel interface implemented on top of unreliable hardware. Key to the successful functioning of Clouds is its simple architecture consisting of passive, abstract data objects and the uniform operation invocation mechanism. This architecture allows plain processes or nested transactions to access user and system data in a transparent, uniform manner, whether those objects are local to the current machine or on some remote processor. The same basic interface used to make operation invocation requests on objects can be used to spawn processes and actions, and to gain access to restricted kernel services. This paper presents an abbreviated description of the Clouds architecture and its relation to the operation of the invocation mechanism, including remote invocation, per-object access control, and location-independent invocation. Some conclusions derived from the first prototype are also presented.

Journal ArticleDOI
TL;DR: This modular, hierarchical operating system will capitalize on hardware features deemed important for image and digital signal processing: a fast LAN and shared memory.
Abstract: This modular, hierarchical operating system will capitalize on hardware features deemed important for image and digital signal processing: a fast LAN and shared memory.

Journal ArticleDOI
TL;DR: In this paper, the integral equation defined on a finite interval with convolution kernel, which is the basic equation in the analytical kinetic theory of neutrals in a bounded plasma slab, is solved by reinvestigating the corresponding Riemann boundary value problem.
Abstract: The integral equation defined on a finite interval with convolution kernel, which is the basic equation in the analytical kinetic theory of neutrals in a bounded plasma slab, is solved by reinvestigating the corresponding Riemann boundary value problem. For the case that the Fourier transform of the kernel is a rational function, a general analytical solution formalism is derived and used to calculate the density of hydrogen neutrals, without numerical means, for the cases of homogeneous and inhomogeneous plasma temperatures.




Journal ArticleDOI
TL;DR: A solution is presented for the problem of the stabilizability of a system (global system) composed of two interconnected subsystems, with static, linear, state-vector feedbacks (LLSVF), provided by the so-called intercontrollability matrix D of the interconnected system and the explicit determination of its kernel U.

Journal ArticleDOI
TL;DR: The correspondence between the Wiener-type filtration approach and the classical Bayes strategy of choosing optimal decision rules to construct optimal regularized solutions of an integral equation of the convolution type with a deterministic kernel in the space of stationary Hilbert random processes is established in this paper.
Abstract: The correspondence between the Wiener-type filtration approach and the classical Bayes strategy of choosing optimal decision rules to construct optimal regularized solutions of an integral equation of the convolution type with a deterministic kernel in the space of stationary Hilbert random processes is established. A method of obtaining order-unimprovable asymptotic estimates of the optimal accuracy in the parametric classes of solutions, kernels and noise is developed. Quasi-optimal approximations on the basis of regularizing algorithms of Tikhonov's type are constructed.