
Showing papers on "Kernel (image processing)" published in 1997


Journal ArticleDOI
TL;DR: It is shown that absolute activation levels are strongly dependent on the parameters of the filter used in image construction and that significance of an activation signal can be enhanced through appropriate filter selection.
Abstract: When constructing MR images from acquired spatial frequency data, it can be beneficial to apply a low-pass filter to remove high frequency noise from the resulting images. This amounts to attenuating high spatial frequency fluctuations that can affect detected MR signal. A study is presented of spatially filtering MR data and possible ramifications on detecting regionally specific activation signal. It is shown that absolute activation levels are strongly dependent on the parameters of the filter used in image construction and that significance of an activation signal can be enhanced through appropriate filter selection. A comparison is made between spatially filtering MR image data and applying a Gaussian convolution kernel to statistical parametric maps.
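As a rough illustration of the two operations compared above, the sketch below (Python, with an assumed filter width and a toy image) applies a Gaussian low-pass window to simulated k-space data before reconstruction, and separately applies a Gaussian convolution kernel to the reconstructed image.

```python
# Hedged sketch: low-pass filtering of MR k-space data vs. Gaussian smoothing of the
# reconstructed image. Filter shape, width, and the toy image are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def lowpass_reconstruct(kspace, sigma_k=0.15):
    """Apply a Gaussian low-pass window to k-space, then reconstruct by inverse FFT."""
    ny, nx = kspace.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    window = np.exp(-(kx**2 + ky**2) / (2 * sigma_k**2))
    return np.abs(np.fft.ifft2(kspace * window))

# Toy example: a noisy "activation" image and its simulated k-space data.
rng = np.random.default_rng(0)
image = np.zeros((64, 64)); image[28:36, 28:36] = 1.0
image += 0.3 * rng.standard_normal(image.shape)
kspace = np.fft.fft2(image)

filtered_recon = lowpass_reconstruct(kspace)        # filter applied during image construction
smoothed_map   = gaussian_filter(image, sigma=2.0)  # Gaussian kernel applied to the image/map
```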

144 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered the Euler equations for the propagation of gravity waves on the surface of an ideal, incompressible, inviscid fluid, and the asymptotic decay of solitary waves to a quiescent state away from their principal elevation.

120 citations


Proceedings ArticleDOI
04 Jun 1997
TL;DR: Oriented Line Integral Convolution (OLIC), where direction as well as orientation are encoded within the resulting image, is introduced by using a sparse input texture and a ramp-like (anisotropic) convolution kernel.
Abstract: Line Integral Convolution (LIC) is a common approach for the visualisation of 2D vector fields. It is well suited for visualizing the direction of a flow field, but it gives no information about the orientation of the underlying vectors. We introduce Oriented Line Integral Convolution (OLIC), where direction as well as orientation are encoded within the resulting image. This is achieved by using a sparse input texture and a ramp-like (anisotropic) convolution kernel. This method can be used for animations, whereby the computation of so-called pixel traces speeds up the calculation process. Various OLICs illustrating simple and real world vector fields are shown.
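For intuition, here is a minimal OLIC-style sketch for the special case of a constant flow field pointing in +x: the streamlines are horizontal, so the line integral convolution reduces to a per-row 1-D convolution of a sparse texture with a ramp-like kernel. Texture density and kernel length are assumptions.

```python
# Minimal OLIC-style sketch for a constant flow field in +x. With straight horizontal
# streamlines the convolution reduces to a 1-D convolution of each row of a sparse
# texture with a ramp-like (asymmetric) kernel.
import numpy as np
from scipy.ndimage import convolve1d

rng = np.random.default_rng(1)
sparse_texture = (rng.random((128, 128)) > 0.995).astype(float)  # few bright "seed" pixels

L = 21
ramp_kernel = np.linspace(0.0, 1.0, L)   # anisotropic ramp: intensity grows along the flow
ramp_kernel /= ramp_kernel.sum()

# Each seed is smeared into a short streak whose intensity ramps up downstream,
# so the image encodes orientation, not just direction.
olic_image = convolve1d(sparse_texture, ramp_kernel, axis=1, mode='constant')
```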

112 citations


Journal ArticleDOI
TL;DR: In this paper, a numerical procedure for the time integration of spatially discretized finite element equations for viscoelastic structures governed by a constitutive equation involving fractional calculus operators is presented.
Abstract: Numerical procedures for the time integration of the spatially discretized finite element equations for viscoelastic structures governed by a constitutive equation involving fractional calculus operators are presented. To avoid difficulties concerning fractional-order initial conditions, a form of the fractional calculus model of viscoelasticity involving a convolution integral with a singular memory kernel of Mittag-Leffler type is used. The constitutive equation is generalized to three-dimensional states for isotropic materials. A simplification of the fractional derivative of the memory kernel is used, in connection with Grunwald's definition of fractional differentiation and a backward Euler rule, for the time evolution of the convolution term. A desirable feature of this process is that no actual evaluation of the memory kernel is needed. This, together with the Newmark method for time integration, enables the direct calculation of the time evolution of the nodal degrees of freedom. To illustrate the ability of the numerical procedure a few numerical examples are presented. In one example the numerically obtained solution is compared with a time series expansion of the analytical solution.
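A minimal sketch of the Grünwald approximation of a fractional derivative (the building block referred to above) is given below; the step size, order, and test function are illustrative choices, not values from the paper.

```python
# Sketch of a Grünwald-Letnikov approximation of a fractional derivative of order alpha,
# of the kind used (together with a backward Euler rule) to step the convolution term.
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov coefficients w_k = (-1)^k C(alpha, k), via the standard recurrence."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_fractional_derivative(f, alpha, dt):
    """D^alpha f at each time step, using the full sample history."""
    n = len(f)
    w = gl_weights(alpha, n)
    out = np.zeros(n)
    for j in range(n):
        out[j] = np.dot(w[:j + 1], f[j::-1]) / dt**alpha
    return out

t = np.linspace(0.0, 1.0, 200)
d_half = gl_fractional_derivative(t**2, alpha=0.5, dt=t[1] - t[0])  # D^0.5 of t^2
```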

89 citations


Journal ArticleDOI
TL;DR: A model is presented in which the scatter signal in images obtained by electronic portal imaging devices (EPIDs) is removed by a forward convolution method; the primary signal can be extracted to better than 1.5%, even when the original Scatter-to-Primary Ratio (SPR) is more than 25%.
Abstract: A model is presented in which the scatter signal in images obtained by electronic portal imaging devices (EPIDs) is removed by a forward convolution method. The convolution kernel, kt(r), is a cylindrically symmetric kernel, generated by Monte Carlo, representing the scattered signal of a pencil beam at the image plane after the photons have gone through an object of thickness, t. A set of the kernels is presented and used to extract the primary signal. The signal from primary photons in the image, P(r), is extracted by an iterative method in which the essential assumption is that the scatter signal S(r) can be described by a superposition of the signal that would be obtained with the object removed from the beam, O(r), and the kernel kt(r). The thickness, t, that is used to choose the kernel is directly related to P(r) by a simple exponential relationship; hence the thickness, t, of the object and the primary signal, P(r), are both iterated to better estimates through this procedure. The model is tested on Monte Carlo simulated data, where the extracted primary signal is compared with the "true" primary signal. Results are presented for a set of phantoms of uniform thicknesses up to 35 cm, for field areas up to 320 cm2, and for an inhomogeneous phantom containing a sphere of different density. The primary signal can be extracted to better than 1.5%, even when the original Scatter-to-Primary Ratio (SPR) is more than 25%. Finally, we have tested the model on EPID images; results for a nonuniform (breast) phantom are presented here. The breast phantom has both a curved external contour and a structure of a different density (lung). The radiological thickness of this breast phantom, as extracted using the above convolution model, was found to be within 2.8 mm (1 sd) of the true radiological thickness.
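A schematic version of the iterative primary/scatter separation might look like the following; the open-field signal O(r), the thickness-indexed kernel set, the attenuation coefficient, and the iteration count are placeholders rather than the paper's actual quantities.

```python
# Hedged sketch of the iterative primary/scatter separation described above.
# `open_field` is O(r), `kernels` maps thickness -> Monte Carlo kernel k_t, `mu` is an
# effective attenuation coefficient; all values here are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve

def extract_primary(image, open_field, kernels, mu, n_iter=10):
    """Iterate: estimate thickness t from P, pick kernel k_t, S = O * k_t, P = image - S."""
    primary = image.copy()
    for _ in range(n_iter):
        # radiological thickness from the exponential relation P = O * exp(-mu * t)
        t = -np.log(np.clip(primary / open_field, 1e-6, None)) / mu
        t_mean = float(np.mean(t))
        # choose the kernel generated for the nearest available thickness
        t_key = min(kernels.keys(), key=lambda k: abs(k - t_mean))
        scatter = fftconvolve(open_field, kernels[t_key], mode='same')
        primary = image - scatter
    return primary, t
```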

88 citations


Proceedings ArticleDOI
26 Oct 1997
TL;DR: A new approach based on partial differential equations (PDEs) to restore noisy blurred images is presented, together with the role of varying the parameters for denoising, enhancement and coupling.
Abstract: We present a new approach based on partial differential equations (PDEs) to restore noisy blurred images. After studying methods to denoise images while staying as close as possible to the input image, and methods to restore discontinuities, we propose a new scheme which combines all these schemes. A quantified numerical test on a synthetic image demonstrates the efficiency of our scheme and the role of varying the parameters for denoising, enhancement and coupling. A result on a real image is also presented.

80 citations


Journal ArticleDOI
TL;DR: The iterative removal of irregular regions in a kernel enables us to selectively simplify the topology of the image and leads to a method for segmenting some grayscale images without the need to define and tune parameters.
Abstract: We consider a cross-section topology that is defined on grayscale images. The main interest of this topology is that it keeps track of the grayscale information of an image. We define some basic notions relative to that topology. Furthermore, we indicate how to acquire a homotopic kernel and a leveling kernel. Such kernels can be seen as "ultimate" topological simplifications of an image. A kernel of a real image, though simplified, is still an intricate image from a topological point of view. We introduce the notion of an irregular region. The iterative removal of irregular regions in a kernel enables us to selectively simplify the topology of the image. Through an example, we show that this notion leads to a method for segmenting some grayscale images without the need to define and tune parameters.

77 citations


Patent
09 Jan 1997
TL;DR: In this paper, the adaptive structure of a Wiener filter is used to deconvolve three-dimensional wide-field microscope images for the purposes of improving spatial resolution and removing out-of-focus light.
Abstract: An adaptive structure of a Wiener filter is used to deconvolve three-dimensional wide-field microscope images for the purposes of improving spatial resolution and removing out-of-focus light. The filter is a three-dimensional kernel representing a finite-impulse-response (FIR) structure requiring on the order of one thousand (1000) taps or more to achieve an acceptable mean-square-error. Convergence to a solution occurs in the spatial domain and therefore does not experience many of the problems of frequency-domain solutions. Alternatively, a three-dimensional kernel representing an infinite-impulse-response (IIR) structure may be employed. An IIR structure typically requires fewer taps to achieve the same or better performance, resulting in higher resolution images with less noise and faster computations.
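As a loose illustration of adapting an FIR deconvolution filter in the spatial domain, the 1-D LMS sketch below uses toy sizes (the patent describes a 3-D kernel on the order of 1000 taps). It is not the patented structure, only the general idea of gradient-based tap adaptation toward a minimum mean-square error.

```python
# Illustrative 1-D LMS sketch of spatial-domain adaptation of an FIR deconvolution filter.
# Tap count, step size, and epoch count are toy assumptions.
import numpy as np

def lms_deconvolve(blurred, reference, n_taps=31, mu=0.01, n_epochs=20):
    """Adapt FIR taps w so that filtering `blurred` approaches the `reference` signal."""
    w = np.zeros(n_taps)
    for _ in range(n_epochs):
        for n in range(n_taps, len(blurred)):
            x = blurred[n - n_taps:n][::-1]   # most recent samples first
            e = reference[n] - np.dot(w, x)   # instantaneous error
            w += mu * e * x                   # LMS update toward minimum mean-square error
    return w
```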

70 citations


Journal ArticleDOI
TL;DR: Inverse problems for identification of the memory kernel in the linear constitutive stress-strain relation of Boltzmann type are reduced to a non-linear Volterra integral equation using Fourier's method for solving the direct problem as discussed by the authors.
Abstract: Inverse problems for identification of the memory kernel in the linear constitutive stress-strain relation of Boltzmann type are reduced to a non-linear Volterra integral equation using Fourier's method for solving the direct problem. To this equation the contraction principle in weighted norms is applied. In this way global existence of a solution to the inverse problem is proved and stability estimates for it are derived.
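For context, a common (assumed) form of such a Boltzmann-type constitutive relation with memory kernel k is:

```latex
% Assumed standard form of the Boltzmann-type stress-strain relation with memory kernel k;
% the paper's inverse problem is to identify k from measurements.
\[
  \sigma(x,t) \;=\; E\,\varepsilon(x,t) \;-\; \int_{0}^{t} k(t-s)\,\varepsilon(x,s)\,\mathrm{d}s .
\]
```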

66 citations


Journal ArticleDOI
TL;DR: In this article, a dependability analysis for semi-Markov systems with finite state space is presented, based on algebraic calculus within a convolution algebra, which does not need the semi-Markov kernel to be absolutely continuous.

59 citations


Proceedings ArticleDOI
01 Oct 1997
TL;DR: The paper presents fast rendering of oriented line integral convolution (FROLIC), which is approximately two orders of magnitude faster than OLIC, to conveniently explore and investigate analytically defined 2D vector fields.
Abstract: Oriented line integral convolution (OLIC) illustrates flow fields by convolving a sparse texture with an anisotropic convolution kernel. The kernel is aligned to the underlying flow of the vector field. OLIC shows not only the direction of the flow but also its orientation. The paper presents fast rendering of oriented line integral convolution (FROLIC), which is approximately two orders of magnitude faster than OLIC. Costly convolution operations as done in OLIC are replaced in FROLIC by approximating a streamlet through a set of disks with varying intensity. The issue of overlapping streamlets is discussed. Two efficient animation techniques for animating FROLIC images are described. FROLIC has been implemented as a Java applet. This allows researchers from various disciplines (typically with inhomogeneous hardware environments) to conveniently explore and investigate analytically defined 2D vector fields.
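A rough FROLIC-style sketch is shown below: instead of a convolution, a streamlet is approximated by a short row of disks whose intensity increases toward its head. The vector field, step size, disk radius, and intensity ramp are all assumptions.

```python
# Rough FROLIC-style sketch: approximate a streamlet by a row of disks with increasing
# intensity instead of performing a full convolution. All parameters are illustrative.
import numpy as np

def frolic_streamlet(image, x, y, field, n_disks=8, step=3.0, radius=2):
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for i in range(n_disks):
        vx, vy = field(x, y)
        norm = np.hypot(vx, vy) + 1e-12
        x, y = x + step * vx / norm, y + step * vy / norm   # advance along the flow
        intensity = (i + 1) / n_disks                       # brighter toward the head
        mask = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
        image[mask] = np.maximum(image[mask], intensity)
    return image

# Example: a circular vector field rendered from a few random seed points.
rng = np.random.default_rng(2)
img = np.zeros((128, 128))
circular = lambda x, y: (-(y - 64.0), x - 64.0)
for sx, sy in rng.uniform(10, 118, size=(40, 2)):
    frolic_streamlet(img, sx, sy, circular)
```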

PatentDOI
TL;DR: In this paper, a method and an apparatus for three-dimensional imaging of ultrasound data by reducing speckle artifact data before the acquired data from a volume of interest is projected onto an image plane is presented.
Abstract: A method and an apparatus for three-dimensional imaging of ultrasound data by reducing speckle artifact data before the acquired data from a volume of interest is projected onto an image plane. An ultrasound scanner collects B-mode or color flow mode images in a cine memory, i.e., for a multiplicity of slices. The data from a respective region of interest for each slice is sent to a master controller, such data forming a volume of interest. The master controller performs an algorithm that iteratively projects the pixel data in the volume of interest onto a plurality of rotated image planes using a ray-casting technique. Prior to projection, the master controller smooths the speckle contained in the pixel data by filtering it with a convolution filter having a nine-point kernel. Convolution filtering of image data is carried out by defining a desired area of the image, such as an area represented by an array of pixels, weighting each of the pixels in the array with a respective weighting coefficient, and then summing the weighted pixels to produce a filtered pixel value which is substituted for the central pixel in the array. The filtered pixel data forms a new data volume which is then projected onto each successive image plane.
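The nine-point convolution step described above amounts to a 3x3 weighted sum replacing the central pixel. Since the actual weighting coefficients are not given in the abstract, the sketch below uses a normalized box kernel as a stand-in.

```python
# Sketch of the nine-point (3x3) convolution smoothing: weight the pixels in each 3x3
# neighbourhood and replace the central pixel with the weighted sum. The uniform weights
# are a placeholder for the scanner's actual coefficients.
import numpy as np
from scipy.ndimage import convolve

nine_point_kernel = np.full((3, 3), 1.0 / 9.0)   # placeholder weights summing to 1

def smooth_slice(pixel_data):
    return convolve(pixel_data, nine_point_kernel, mode='nearest')
```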

Journal ArticleDOI
TL;DR: The results showed that, based on the relative dose errors at a depth of 15 cm along the central axis, the terma divergence correction may be used for fields smaller than 10 x 10 cm2 with an SSD larger than 80 cm; the dose divergence correction with an additional kernel hardening correction can reduce dose error and may be more applicable than the terma divergence correction.
Abstract: To account for clinical divergent and polychromatic photon beams, we have developed kernel tilting and kernel hardening correction methods for convolution dose calculation algorithms. The new correction methods were validated by Monte Carlo simulation. The accuracy and computation time of our kernel tilting and kernel hardening correction methods were also compared to existing approaches, including the terma divergence correction, dose divergence correction methods, and the effective mean kernel method with no kernel hardening correction. Treatment fields of 10 x 10 to 40 x 40 cm2 (field size at source to axis distance (SAD)) with source to surface distances (SSDs) of 60, 80, and 100 cm, and photon energies of 6, 10, and 18 MV have been studied. Our results showed that, based on the relative dose errors at a depth of 15 cm along the central axis, the terma divergence correction may be used for fields smaller than 10 x 10 cm2 with an SSD larger than 80 cm; the dose divergence correction with an additional kernel hardening correction can reduce dose error and may be more applicable than the terma divergence correction. For both these methods, the dose error increased linearly with the depth in the phantom; the 90% isodose lines at the depth of 15 cm were shifted by about 2%-5% of the field width due to significant underestimation of the penumbra dose. The kernel hardening effect was less prominent than the kernel tilting effect for clinical photon beams. The dose error from using a non-hardening-corrected kernel is less than 2.0% at a depth of 15 cm along the central axis, yet it increased with a smaller field size and lower photon energy. The kernel hardening correction could be more important for computing dose in fields with beam modifiers such as wedges, where beam hardening is more significant. The kernel tilting correction and kernel hardening correction increased computation time by about 3 times and 0.5-1 times, respectively. This can be justified by more accurate dose calculations for the majority of clinical treatments.

Proceedings ArticleDOI
17 Jun 1997
TL;DR: The notion of completion energy is examined and a fast method to compute the most likely completions in images is introduced and two novel analytic approximations to the curve of least energy are developed.
Abstract: The detection of smooth curves in images and their completion over gaps are two important problems in perceptual grouping. In this paper we examine the notion of completion energy and introduce a fast method to compute the most likely completions in images. Specifically we develop two novel analytic approximations to the curve of least energy. In addition, we introduce a fast numerical method to compute the curve of least energy, and show that our approximations are obtained at early stages of this numerical computation. We then use our newly developed energies to find the most likely completions in images through a generalized summation of induction fields. Since in practice edge elements are obtained by applying filters of certain widths and lengths to the image, we adjust our computation to take these parameters into account. Finally, we show that, due to the smoothness of the kernel of summation, the process of summing induction fields can be run in time that is linear in the number of different edge elements in the image, or in O(N log N) where N is the number of pixels in the image, using multigrid methods.

Proceedings ArticleDOI
26 Oct 1997
TL;DR: A morphological diffusion coefficient capable of smoothing small objects while maintaining edge locality is introduced and results are presented that demonstrate its efficacy in edge detection tasks.
Abstract: Current formulations of anisotropic diffusion are unable to prevent feature drift and smooth small regions. These deficiencies reduce the effectiveness of the diffusion operation in many image processing tasks, including segmentation, edge detection, compression, and multiscale processing. This paper introduces a morphological diffusion coefficient capable of smoothing small objects while maintaining edge locality. Results are presented that demonstrate its efficacy in edge detection tasks.
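For context only, one explicit step of standard Perona-Malik anisotropic diffusion is sketched below; the paper's contribution is to replace the gradient-based coefficient g below with a morphological one (not reproduced here), so this baseline merely shows where that coefficient enters.

```python
# Context sketch: one explicit step of standard Perona-Malik anisotropic diffusion.
# The edge-stopping coefficient g is the term the paper replaces with a morphological
# diffusion coefficient; kappa and dt are illustrative values.
import numpy as np

def diffusion_step(u, kappa=10.0, dt=0.2):
    # forward differences toward the four neighbours
    dN = np.roll(u, -1, axis=0) - u
    dS = np.roll(u,  1, axis=0) - u
    dE = np.roll(u, -1, axis=1) - u
    dW = np.roll(u,  1, axis=1) - u
    g = lambda d: np.exp(-(d / kappa) ** 2)   # gradient-based diffusion coefficient
    return u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
```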


Journal ArticleDOI
TL;DR: The paper describes varieties of blending in kernel boundary modellers emphasizing topology, algorithms and program structure rather than geometry and using examples drawn from the ACIS modeller to explain how object-oriented methods can ease the addition of new categories of blend by applications built on a kernel modeller.
Abstract: The paper describes varieties of blending in kernel boundary modellers emphasizing topology, algorithms and program structure rather than geometry and using examples drawn from the ACIS modeller. It illustrates some of the many configurations that can occur when blends spread on to faces distant from the original implicitly blended edges and vertices. From a systems standpoint, a staged evaluation of blends is shown to have advantages in the use of existing Boolean code to perform much of the work. Lastly the paper explains how object-oriented methods can ease the addition of new categories of blend by applications built on a kernel modeller.

Patent
27 Mar 1997
TL;DR: In this paper, a kernel-based method and apparatus includes a preprocessor, which operates on an input data in such a way as to provide invariance under some symmetry transformation.
Abstract: A kernel-based method and apparatus includes a preprocessor, which operates on input data in such a way as to provide invariance under some symmetry transformation.

Proceedings ArticleDOI
01 May 1997
TL;DR: The disjoint cellular structure of a modern solid modelling kernel is enhanced to include a hierarchy of geometrically overlapping features, which allows multiple versions of any feature to be modified simultaneously via the master.
Abstract: This paper provides a rationale for its abstract data definitions of geometric features. These definitions are used as the basis of a suite of functions to support feature modelling. It assumes that implementations of the proposed functions will make use of a solid modelling kernel that supports objects with a disjoint cellular structure and persistent cell identifiers. It is intended that most of the functions required for the manipulation of features can be provided by overloaded kernel functions; new functions are necessary only to provide feature specific operations. Unlike most features in the literature, the features of this paper are not design features, manufacturing features or features for any other specific application. They are geometric structures that support such applications. Thus, the disjoint cellular structure of a modern solid modelling kernel is enhanced to include a hierarchy of geometrically overlapping features. A feature at any node in the hierarchy is defined in terms of instances of multiple sub-features combined using any object construction function of the kernel. Such instancing allows multiple versions of any feature to be modified simultaneously via the master.

Journal ArticleDOI
TL;DR: In this paper, a machine-vision system was developed to identify different types of crown end shapes of corn kernels, which provided an average accuracy of approximately 87% compared to human inspection.
Abstract: A machine-vision system was developed to identify different types of crown end shapes of corn kernels. Image processing techniques were used to enhance the object and reduce noise in the acquired image. Corn kernels were classified as convex or dent based on their crown end shape. Dent corn kernels were further classified into smooth dent or non-smooth dent kernels. A one-dimensional line profile analysis was used to obtain the needed three-dimensional information. This system provided an average accuracy of approximately 87% compared to human inspection. The processing time was between 1.5 and 1.8 s/kernel.

Proceedings ArticleDOI
TL;DR: This paper uses a set of training images, and iterates the two steps of designing the error diffusion filter and evaluating the spectrum of the quantizer error, to propose an iterative method for designing an optimum error diffusion kernel.
Abstract: The quality of typical error diffused images can be improved by designing an error diffusion filter that minimizes a frequency weighted mean squared error between the continuous tone input and the halftone output. Previous approaches to this design are typically based on an assumption that the binary quantizer error (between the quantizer input and output) is a white noise source. We propose in this paper an iterative method for designing an optimum error diffusion kernel without such an assumption on the spectral characteristics of the binary quantizer error. In particular, we use a set of training images, and iterate the two steps of designing the error diffusion filter and evaluating the spectrum of the quantizer error. Experimental results are shown for error diffusion filters designed using this iterative method. Keywords: error diffusion, optimum error diffusion kernel, quantization error, frequency weighted mean squared error.
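As a baseline for comparison, the sketch below performs error diffusion with the classic Floyd-Steinberg kernel; the paper's iterative, training-image-based kernel design is not reproduced here.

```python
# Baseline sketch of error diffusion halftoning with the classic Floyd-Steinberg kernel;
# the optimized kernel described in the paper would simply replace these fixed taps.
import numpy as np

# (dy, dx, weight) taps of the Floyd-Steinberg error diffusion kernel
FS_TAPS = [(0, 1, 7/16), (1, -1, 3/16), (1, 0, 5/16), (1, 1, 1/16)]

def error_diffuse(image):
    """image: float array in [0, 1]; returns a binary halftone."""
    u = image.astype(float).copy()
    h, w = u.shape
    out = np.zeros_like(u)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if u[y, x] >= 0.5 else 0.0
            err = u[y, x] - out[y, x]              # binary quantizer error
            for dy, dx, wgt in FS_TAPS:            # diffuse error to unprocessed neighbours
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    u[yy, xx] += wgt * err
    return out
```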

Journal ArticleDOI
TL;DR: Fan-beam reconstructions demonstrate the same image quality as that of parallel-beam reconstruction as discussed by the authors, but do not produce a filtered backprojection reconstruction algorithm but instead have a formulation that is an inverse integral operator with a spatially varying kernel.
Abstract: A convolution backprojection algorithm was derived by Tretiak and Metz (1980) to reconstruct two-dimensional (2-D) transaxial slices from uniformly attenuated parallel-beam projections. Using transformation of coordinates, this algorithm can be modified to obtain a formulation useful to reconstruct uniformly attenuated fan-beam projections. Unlike that for parallel-beam projections, this formulation does not produce a filtered backprojection reconstruction algorithm but instead takes the form of an inverse integral operator with a spatially varying kernel. This algorithm thus requires more computation time than does the filtered backprojection reconstruction algorithm for the uniformly attenuated parallel-beam case. However, the fan-beam reconstructions demonstrate the same image quality as that of parallel-beam reconstructions.

Book ChapterDOI
08 Dec 1997
TL;DR: An object-based, layered framework and associated library in C for real-time radar applications that meets performance requirements by highly optimizing the kernel layer, and by performing allocations and preparations for data transfers during a set-up time.
Abstract: We have developed an object-based, layered framework and associated library in C for real-time radar applications. Object classes allow us to reuse code modules, and a layered framework enhances the portability of applications. The framework is divided into a machine-dependent kernel layer, a mathematical library layer, and an application layer. We meet performance requirements by highly optimizing the kernel layer, and by performing allocations and preparations for data transfers during a set-up time. Our initial application employs a space-time adaptive processing (STAP) algorithm and requires throughput on the order of 20 Gflop/s (sustained), with 1 s latency. We present performance results for a key portion of the STAP algorithm and discuss future work.

Proceedings ArticleDOI
19 Sep 1997
TL;DR: It is shown that, through the use of model-integrated program synthesis (MIPS), parallel real-time implementations of image processing data flows can be synthesized from high level graphical specifications, enabling the cost-effective exploitation of parallel hardware for building more flexible and powerful real-time imaging systems.
Abstract: In this paper, it is shown that, through the use of model-integrated program synthesis (MIPS), parallel real-time implementations of image processing data flows can be synthesized from high level graphical specifications. The complex details inherent to parallel and real-time software development become transparent to the programmer, enabling the cost-effective exploitation of parallel hardware for building more flexible and powerful real-time imaging systems. The model integrated real-time image processing system (MIRTIS) is presented as an example. MIRTIS employs the multigraph architecture (MGA), a framework and set of tools for building MIPS systems, to generate parallel real-time image processing software which runs under the control of a parallel run-time kernel on a network of Texas Instruments TMS320C40 DSPs (C40s). The MIRTIS models contain graphical declarations of the image processing computations to be performed, the available hardware resources, and the timing constraints of the application. The MIRTIS model interpreter performs the parallelization, scaling, and mapping of the computations to the resources automatically or determines that the timing constraints cannot be met with the available resources. MIRTIS is a clear example of how parallel real-time image processing systems can be built which are (1) cost-effectively programmable, (2) flexible, (3) scalable, and (4) built from commercial off-the-shelf (COTS) components.

Proceedings Article
Ken Shirriff
06 Jan 1997
TL;DR: This paper illustrates how an existing UNIX operating system kernel can be extended to provide distributed process support, it provides interfaces that may be useful for general access to the kernel's process activity, and it gives experience with object-oriented programming in a commercial kernel.
Abstract: The Solaris MC distributed operating system provides a single-system image across a cluster of nodes, including distributed process management. It supports remote signals, waits across nodes, remote execution, and a distributed /proc pseudo file system. Process management in Solaris MC is implemented through an object-oriented interface to the process system. This paper has three main goals: it illustrates how an existing UNIX operating system kernel can be extended to provide distributed process support, it provides interfaces that may be useful for general access to the kernel's process activity, and it gives experience with object-oriented programming in a commercial kernel.

Journal ArticleDOI
TL;DR: In this article, a method of indexing pictures by pattern recognition based on key image objects for large scale image databases is presented, where a set of key objects can be selected to ensure a good distribution of the pictures amongst the kernel clusters.

Journal ArticleDOI
TL;DR: In this paper, a regularization approach is proposed to solve the problem of camouflaged deconvolution, where the kernel in a simple convolution model is not completely specified, and cross-validation is used to determine the degree of smoothness of the solution.
Abstract: Camouflaged deconvolution arises when the kernel in a simple convolution model is not completely specified. We consider a situation in which the same fixed signal is repeatedly measured by separate convolutions with imprecisely known kernels. We develop a regularization methodology for application to these problems. The method involves simultaneous estimation of the target signal and the unknown parameters of the convolution kernels. Cross-validation is used to determine the degree of smoothness of the solution. We use simulation studies matched to the application to evaluate statistical performance. These simulations find that the convergence of the regularization estimator is largely unaffected by the lack of information about the convolution kernels. We illustrate the methodology by application to a blood curve modeling problem arising in the context of positron emission tomography (PET) studies with flurodeoxyglucose (FDG), a commonly used glucose tracer. The results show promise towards the ...

Journal ArticleDOI
TL;DR: A fully parallel focal plane charge-coupled device (CCD) array which performs image acquisition and convolution with arbitrary kernels is presented and the real-time programmable spatial convolution is generated for all pixels in parallel during the exposure.
Abstract: A fully parallel focal plane charge-coupled device (CCD) array which performs image acquisition and convolution with arbitrary kernels is presented. The real-time programmable spatial convolution is generated for all pixels in parallel during the exposure. The 2-D convolution is performed by shifting a charge pattern in two dimensions and the exposure time is varied in proportion to the weight of each kernel coefficient. The problem of negative weights can be solved by taking the difference of two convolutions each with only positive weights. The CCD was fabricated using a standard CMOS/CCD process. Convolutions have been performed with a variety of linear filters that are commonly used in machine vision. Typical rms deviations from the ideal filter characteristics are between 1-2% of the largest kernel tap value. Results and practical applications of this work are discussed.
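The negative-weight trick mentioned above can be illustrated in software: split the kernel into its positive and negative parts and take the difference of two positive-weight convolutions, each of which is realizable on the CCD as an exposure-weighted charge shift. The Laplacian kernel below is only an example.

```python
# Sketch of handling negative kernel weights as the difference of two convolutions,
# each with only positive weights. The example kernel is a standard Laplacian.
import numpy as np
from scipy.ndimage import convolve

laplacian = np.array([[ 0., -1.,  0.],
                      [-1.,  4., -1.],
                      [ 0., -1.,  0.]])

k_pos = np.clip(laplacian, 0, None)    # positive weights only
k_neg = np.clip(-laplacian, 0, None)   # magnitudes of the negative weights

def signed_convolution(image):
    return convolve(image, k_pos, mode='nearest') - convolve(image, k_neg, mode='nearest')
```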

Journal ArticleDOI
TL;DR: The proposed method consists of first estimating a histogram of the underlying point process and constructing a kernel estimate of the intensity function through a regularized backsubstitution of a discrete-time convolution with the estimated histogram.
Abstract: The estimation of the intensity function of an inhomogeneous Poisson process is considered when the observable data consists of sampled shot noise that results from passing the Poisson process through an unknown linear time-invariant system. The proposed method consists of first estimating a histogram of the underlying point process. The estimated histogram is used to construct a kernel estimate of the intensity function. An estimate of the unknown impulse response of the linear time-invariant system is constructed via a regularized backsubstitution of a discrete-time convolution with the estimated histogram.

Journal ArticleDOI
TL;DR: This work investigates the impact of the central-ray approximation upon reconstruction accuracy and computational efficiency of quantitative SPECT by modeling photon attenuation and detector resolution variation as a depth-dependent convolution.
Abstract: In order to model photon attenuation and detector resolution variation as a depth-dependent convolution for efficient reconstruction of quantitative SPECT, a central-ray approximation is necessary. This work investigates the impact of the approximation upon reconstruction accuracy and computational efficiency. A patient chest CT image was acquired and converted into an object-specific attenuation map. From a segmentation of the map, an emission thorax phantom was constructed with a cardiac insert. To generate a system-specific resolution-variant kernel, a point source was measured at several depths from the surface of a low-energy, high-resolution, parallel-hole collimator of a SPECT system. Projections of parallel-beam geometry were simulated from the phantom, the map, and the kernel on an elliptical orbit. Reconstruction was performed by the ML-EM algorithm with and without the central-ray approximation. The approximation cuts down dramatically (more than 100 fold) the computing time with a negligible loss (less than 1%) of reconstruction accuracy.
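A toy version of a depth-dependent convolution under the central-ray approximation is sketched below: each image row (interpreted as a depth from the collimator) is blurred with a Gaussian whose width grows with depth. The linear sigma(depth) model is an assumption, not the measured system kernel.

```python
# Toy sketch of depth-dependent convolution: blur each row with a kernel whose width
# grows with depth from the collimator. The linear sigma(depth) model is an assumption.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def depth_dependent_blur(activity, sigma0=0.5, slope=0.05):
    """Blur each row of `activity` with a Gaussian whose sigma increases with depth."""
    out = np.empty_like(activity, dtype=float)
    for depth, row in enumerate(activity):
        out[depth] = gaussian_filter1d(row.astype(float), sigma=sigma0 + slope * depth)
    return out
```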