
Showing papers on "Kernel (image processing) published in 2006"


Journal ArticleDOI
TL;DR: This framework of composite kernels demonstrates enhanced classification accuracy as compared to traditional approaches that take into account the spectral information only, flexibility to balance between the spatial and spectral information in the classifier, and computational efficiency.
Abstract: This letter presents a framework of composite kernel machines for enhanced classification of hyperspectral images. This novel method exploits the properties of Mercer's kernels to construct a family of composite kernels that easily combine spatial and spectral information. This framework of composite kernels demonstrates: 1) enhanced classification accuracy as compared to traditional approaches that take into account the spectral information only; 2) flexibility to balance between the spatial and spectral information in the classifier; and 3) computational efficiency. In addition, the proposed family of kernel classifiers opens a wide field for future developments in which spatial and spectral information can be easily integrated.

1,069 citations
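
As a rough illustration of how such a composite kernel can be assembled, the sketch below builds a weighted summation of a spatial and a spectral RBF kernel and feeds it to an SVM as a precomputed kernel. It assumes per-pixel spectral vectors and spatial features (for example, the mean spectrum over a neighborhood window) have already been extracted; the array names and the mu weighting are illustrative, not the authors' code.

```python
# Hedged sketch: weighted-summation composite kernel (one member of a
# family of Mercer composite kernels combining spatial and spectral cues).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def composite_kernel(X_spec, X_spat, Y_spec=None, Y_spat=None,
                     mu=0.5, gamma_spec=1.0, gamma_spat=1.0):
    """K = mu * K_spatial + (1 - mu) * K_spectral; a sum of RBF (Mercer) kernels."""
    Y_spec = X_spec if Y_spec is None else Y_spec
    Y_spat = X_spat if Y_spat is None else Y_spat
    K_spec = rbf_kernel(X_spec, Y_spec, gamma=gamma_spec)
    K_spat = rbf_kernel(X_spat, Y_spat, gamma=gamma_spat)
    return mu * K_spat + (1.0 - mu) * K_spec

# Hypothetical usage with training arrays X_spec_tr, X_spat_tr and labels y_tr:
# K_tr = composite_kernel(X_spec_tr, X_spat_tr, mu=0.4)
# clf = SVC(kernel="precomputed").fit(K_tr, y_tr)
```

The mu weight is what gives the claimed flexibility to trade spatial against spectral information.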


Journal ArticleDOI
TL;DR: A learning-based method for recovering 3D human body pose from single images and monocular image sequences, embedded in a novel regressive tracking framework, using dynamics from the previous state estimate together with a learned regression value to disambiguate the pose.
Abstract: We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We evaluate several different regression methods: ridge regression, relevance vector machine (RVM) regression, and support vector machine (SVM) regression over both linear and kernel bases. The RVMs provide much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. The loss of depth and limb labeling information often makes the recovery of 3D pose from single silhouettes ambiguous. To handle this, the method is embedded in a novel regressive tracking framework, using dynamics from the previous state estimate together with a learned regression value to disambiguate the pose. We show that the resulting system tracks long sequences stably. For realism and good generalization over a wide range of viewpoints, we train the regressors on images resynthesized from real human motion capture data. The method is demonstrated for several representations of full body pose, both quantitatively on independent but similar test data and qualitatively on real image sequences. Mean angular errors of 4–6° are obtained for a variety of walking motions.

855 citations
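
The regression step itself can be prototyped with off-the-shelf tools. The sketch below uses scikit-learn's kernel ridge regression as a stand-in for the RVM/SVM regressors studied in the paper, mapping placeholder silhouette shape descriptors to placeholder joint-angle vectors; descriptor extraction and the tracking framework are omitted.

```python
# Hedged sketch of the descriptor-to-pose regression only; KernelRidge
# stands in for the RVM/SVM regressors evaluated in the paper, and the
# arrays below are random placeholders, not real shape descriptors.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
D_train = rng.normal(size=(500, 100))     # placeholder 100-D shape-context histograms
theta_train = rng.normal(size=(500, 55))  # placeholder joint-angle vectors

reg = KernelRidge(kernel="rbf", gamma=0.01, alpha=1e-3)
reg.fit(D_train, theta_train)             # learn the descriptor -> pose mapping
theta_pred = reg.predict(D_train[:5])     # regress 3D pose for new silhouettes
```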


Proceedings Article
04 Dec 2006
TL;DR: This work addresses the problem of blind motion deblurring from a single image, caused by a few moving objects, and relies on the observation that the statistics of derivative filters in images are significantly changed by blur.
Abstract: We address the problem of blind motion deblurring from a single image, caused by a few moving objects. In such situations only part of the image may be blurred, and the scene consists of layers blurred in different degrees. Most existing blind deconvolution research concentrates on recovering a single blurring kernel for the entire image. However, in the case of different motions, the blur cannot be modeled with a single kernel, and trying to deconvolve the entire image with the same kernel will cause serious artifacts. Thus, the task of deblurring needs to involve segmentation of the image into regions with different blurs. Our approach relies on the observation that the statistics of derivative filters in images are significantly changed by blur. Assuming the blur results from a constant velocity motion, we can limit the search to one dimensional box filter blurs. This enables us to model the expected derivatives distributions as a function of the width of the blur kernel. Those distributions are surprisingly powerful in discriminating regions with different blurs. The approach produces convincing deconvolution results on real world images with rich texture.

459 citations
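
The core observation, that blur reshapes the distribution of image derivatives, can be turned into a toy width estimator. The sketch below compares the derivative histogram of a region against histograms produced by blurring a sharp reference image with 1-D box kernels of different widths and picks the best match by KL divergence; the reference image stands in for the learned derivative model used in the paper, and intensities are assumed normalized to [0, 1].

```python
# Hedged sketch: choose the 1-D box-blur width whose derivative statistics
# best explain a region (a sharp reference image plays the role of the
# natural-image derivative prior).
import numpy as np
from scipy.ndimage import uniform_filter1d

def derivative_hist(img, bins):
    dx = np.diff(img, axis=1).ravel()            # horizontal derivatives
    h, _ = np.histogram(dx, bins=bins, density=True)
    return h + 1e-12                             # avoid log(0) later

def estimate_box_width(region, sharp_ref, widths=range(1, 16)):
    bins = np.linspace(-0.5, 0.5, 65)
    target = derivative_hist(region, bins)
    best_w, best_kl = None, np.inf
    for w in widths:
        model = derivative_hist(uniform_filter1d(sharp_ref, size=w, axis=1), bins)
        kl = np.sum(target * np.log(target / model))   # KL(target || model)
        if kl < best_kl:
            best_w, best_kl = w, kl
    return best_w
```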


Book ChapterDOI
18 Sep 2006
TL;DR: In this article, a Partial Tree (PT) kernel was proposed to exploit dependency trees for syntactic parsing information in natural language learning, and the experiments with Support Vector Machines on the task of semantic role labeling and question classification showed that the kernel running time is linear in the average case and the PT kernel improved on the other tree kernels when applied to the appropriate parsing paradigm.
Abstract: In this paper, we provide a study on the use of tree kernels to encode syntactic parsing information in natural language learning. In particular, we propose a new convolution kernel, namely the Partial Tree (PT) kernel, to fully exploit dependency trees. We also propose an efficient algorithm for its computation which is furthermore sped-up by applying the selection of tree nodes with non-null kernel. The experiments with Support Vector Machines on the task of semantic role labeling and question classification show that (a) the kernel running time is linear in the average case and (b) the PT kernel improves on the other tree kernels when applied to the appropriate parsing paradigm.

448 citations


Journal Article
TL;DR: A new convolution kernel, namely the Partial Tree (PT) kernel, is proposed to fully exploit dependency trees, and an efficient algorithm for its computation is proposed which is furthermore sped-up by applying the selection of tree nodes with non-null kernel.
Abstract: In this paper, we provide a study on the use of tree kernels to encode syntactic parsing information in natural language learning. In particular, we propose a new convolution kernel, namely the Partial Tree (PT) kernel, to fully exploit dependency trees. We also propose an efficient algorithm for its computation which is furthermore sped-up by applying the selection of tree nodes with non-null kernel. The experiments with Support Vector Machines on the task of semantic role labeling and question classification show that (a) the kernel running time is linear in the average case and (b) the PT kernel improves on the other tree kernels when applied to the appropriate parsing paradigm.

434 citations


Proceedings ArticleDOI
17 Jun 2006
TL;DR: Experimental results show that the proposed algorithms for Discriminative Component Analysis and Kernel DCA are effective and promising in learning good quality distance metrics for image retrieval.
Abstract: Relevant Component Analysis (RCA) has been proposed for learning distance metrics with contextual constraints for image retrieval. However, RCA has two important disadvantages. One is the lack of exploiting negative constraints which can also be informative, and the other is its incapability of capturing complex nonlinear relationships between data instances with the contextual information. In this paper, we propose two algorithms to overcome these two disadvantages, i.e., Discriminative Component Analysis (DCA) and Kernel DCA. Compared with other complicated methods for distance metric learning, our algorithms are rather simple to understand and very easy to solve. We evaluate the performance of our algorithms on image retrieval, where experimental results show that our algorithms are effective and promising in learning good quality distance metrics for image retrieval.

330 citations
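
In the linear case the DCA idea can be sketched as a generalized eigenproblem: pull together points within a "chunklet" (points known to share a class) and push apart chunklets linked by negative constraints. The helper below is a hypothetical illustration of that construction, not the authors' implementation, and the kernelized variant is omitted.

```python
# Hedged sketch of a DCA-style linear metric: maximize scatter between
# negatively constrained chunklets relative to scatter within chunklets.
import numpy as np
from scipy.linalg import eigh

def dca_transform(chunklets, negative_pairs):
    """chunklets: list of (n_i, d) arrays; negative_pairs: list of (i, j) chunklet indices."""
    means = [c.mean(axis=0) for c in chunklets]
    d = chunklets[0].shape[1]
    Cw = sum((c - m).T @ (c - m) for c, m in zip(chunklets, means))  # within-chunklet scatter
    Cb = np.zeros((d, d))
    for i, j in negative_pairs:                                      # between constrained chunklets
        diff = (means[i] - means[j])[:, None]
        Cb += diff @ diff.T
    # Generalized eigenproblem Cb v = lambda Cw v (ridge term keeps Cw invertible).
    _, vecs = eigh(Cb, Cw + 1e-6 * np.eye(d))
    return vecs[:, ::-1]   # columns ordered by decreasing discriminative power
```

Distances are then measured after projecting the data onto the leading columns of the returned transform.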


Proceedings ArticleDOI
14 May 2006
TL;DR: A new TV-based algorithm for image deconvolution, under the assumptions of linear observations and additive white Gaussian noise is proposed, which has O(N) computational complexity, for finite support convolutional kernels.
Abstract: The total variation regularizer is well suited to piecewise smooth images. If we add the fact that these regularizers are convex, we have, perhaps, the reason for the resurgence of interest in TV-based approaches to inverse problems. This paper proposes a new TV-based algorithm for image deconvolution, under the assumptions of linear observations and additive white Gaussian noise. To compute the TV estimate, we propose a majorization-minimization approach, which consists in replacing a difficult optimization problem by a sequence of simpler ones, by relying on convexity arguments. The resulting algorithm has O(N) computational complexity, for finite support convolutional kernels. In a comparison with state-of-the-art methods, the proposed algorithm either outperforms or equals them, with similar computational complexity.

201 citations
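
The majorization step can be summarized as follows (the notation is mine and only sketches the idea). The TV-regularized deconvolution problem is

$$\hat f = \arg\min_f \; \tfrac{1}{2}\|g - Hf\|_2^2 + \lambda \sum_i \sqrt{(\Delta_i^h f)^2 + (\Delta_i^v f)^2},$$

where $H$ is the convolution operator and $\Delta^h$, $\Delta^v$ are horizontal and vertical differences. Because $\sqrt{u} \le (u + u_k)/(2\sqrt{u_k})$ for $u, u_k > 0$ (with equality at $u = u_k$), each square root can be replaced by a quadratic majorizer built around the current iterate $f_k$, so every MM iteration only has to minimize a weighted least-squares surrogate; this is what makes an O(N) cost per step attainable for finite-support convolution kernels.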


Journal ArticleDOI
TL;DR: Experimental results are reported on a real-world image collection to demonstrate that the proposed methods outperform the traditional kernel BDA (KBDA) and the support vector machine (SVM) based RF algorithms.
Abstract: In recent years, a variety of relevance feedback (RF) schemes have been developed to improve the performance of content-based image retrieval (CBIR). Given user feedback information, the key to a RF scheme is how to select a subset of image features to construct a suitable dissimilarity measure. Among various RF schemes, biased discriminant analysis (BDA) based RF is one of the most promising. It is based on the observation that all positive samples are alike, while in general each negative sample is negative in its own way. However, to use BDA, the small sample size (SSS) problem is a big challenge, as users tend to give a small number of feedback samples. To explore solutions to this issue, this paper proposes a direct kernel BDA (DKBDA), which is less sensitive to SSS. An incremental DKBDA (IDKBDA) is also developed to speed up the analysis. Experimental results are reported on a real-world image collection to demonstrate that the proposed methods outperform the traditional kernel BDA (KBDA) and the support vector machine (SVM) based RF algorithms.

188 citations


Journal ArticleDOI
TL;DR: The asmoothed images are fair representations of the input data in the sense that the residuals are consistent with pure noise, that is, they possess Poissonian variance and a near-Gaussian distribution around a mean of zero, and are spatially uncorrelated.
Abstract: An efficient algorithm for adaptive kernel smoothing (AKS) of two-dimensional imaging data has been developed and implemented using the Interactive Data Language (IDL). The functional form of the kernel can be varied (top-hat, Gaussian, etc.) to allow different weighting of the event counts registered within the smoothing region. For each individual pixel, the algorithm increases the smoothing scale until the signal-to-noise ratio (S/N) within the kernel reaches a pre-set value. Thus, noise is suppressed very efficiently, while at the same time real structure, that is, signal that is locally significant at the selected S/N level, is preserved on all scales. In particular, extended features in noise-dominated regions are visually enhanced. The asmooth algorithm differs from other AKS routines in that it allows a quantitative assessment of the goodness of the local signal estimation by producing adaptively smoothed images in which all pixel values share the same S/N above the background. We apply asmooth to both real observational data (an X-ray image of clusters of galaxies obtained with the Chandra X-ray Observatory) and to a simulated data set. We find the asmoothed images to be fair representations of the input data in the sense that the residuals are consistent with pure noise, that is, they possess Poissonian variance and a near-Gaussian distribution around a mean of zero, and are spatially uncorrelated.

179 citations
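
A bare-bones version of the adaptive smoothing loop is easy to write down. The sketch below uses a top-hat kernel and the background-free Poisson approximation S/N ≈ √(counts); the published asmooth algorithm handles backgrounds and other kernel shapes and is far more efficient, so this is purely illustrative.

```python
# Hedged sketch of adaptive kernel smoothing with a top-hat kernel:
# grow the radius at each pixel until the Poisson S/N (~sqrt of the
# enclosed counts) reaches the requested threshold.
import numpy as np

def adaptive_smooth(counts, snr_target=3.0, max_radius=50):
    ny, nx = counts.shape
    out = np.zeros((ny, nx), dtype=float)
    yy, xx = np.mgrid[0:ny, 0:nx]
    for y in range(ny):
        for x in range(nx):
            for r in range(1, max_radius + 1):
                mask = (yy - y) ** 2 + (xx - x) ** 2 <= r * r
                total = counts[mask].sum()
                if np.sqrt(total) >= snr_target or r == max_radius:
                    out[y, x] = total / mask.sum()   # mean count within the kernel
                    break
    return out
```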


Journal ArticleDOI
Ömer Civalek
TL;DR: In this article, a discrete singular convolution method for the free vibration analysis of rotating conical shells is proposed, where a regularized Shannon's delta kernel is used to illustrate the present algorithm.

136 citations


Proceedings ArticleDOI
05 Jul 2006
TL;DR: Conditions under which the FFT gives better performance than the corresponding convolution are identified and the different kernel sizes and issues of application of multiple filters on one image are assessed.
Abstract: Many contemporary visualization tools comprise some image filtering approach. Since image filtering approaches are very computationally demanding, acceleration using graphics hardware (GPU) is very desirable to preserve the interactivity of the main visualization tool itself. In this article we take a close look at the GPU implementation of two basic approaches to image filtering - Fast Fourier Transform (frequency domain) and convolution (spatial domain). We evaluate these methods in terms of the performance in real time applications and suitability for GPU implementation. Convolution yields better performance than Fast Fourier Transform (FFT) in many cases; however, this observation cannot be generalized. In this article we identify conditions under which the FFT gives better performance than the corresponding convolution and we assess the different kernel sizes and issues of application of multiple filters on one image.
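
The crossover the authors discuss can be reproduced on the CPU with SciPy as a stand-in for the GPU shaders: direct convolution wins for small kernels and FFT-based convolution wins as the kernel grows. Timings and the crossover point will of course differ from the GPU case.

```python
# CPU stand-in for the spatial-vs-frequency-domain comparison: time direct
# 2-D convolution against FFT-based convolution for growing kernel sizes.
import numpy as np
from scipy.signal import convolve2d, fftconvolve
from time import perf_counter

rng = np.random.default_rng(0)
image = rng.random((512, 512))

for k in (3, 9, 17, 33):
    kernel = np.ones((k, k)) / (k * k)
    t0 = perf_counter(); convolve2d(image, kernel, mode="same"); t_spatial = perf_counter() - t0
    t0 = perf_counter(); fftconvolve(image, kernel, mode="same"); t_fft = perf_counter() - t0
    print(f"{k:2d}x{k:<2d}  spatial {t_spatial:6.3f}s   FFT {t_fft:6.3f}s")
```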

Proceedings Article
04 Dec 2006
TL;DR: A novel pyramid embedding based on a hierarchy of non-uniformly shaped bins that takes advantage of the underlying structure of the feature space and remains accurate even for sets with high-dimensional feature vectors is introduced.
Abstract: Pyramid intersection is an efficient method for computing an approximate partial matching between two sets of feature vectors. We introduce a novel pyramid embedding based on a hierarchy of non-uniformly shaped bins that takes advantage of the underlying structure of the feature space and remains accurate even for sets with high-dimensional feature vectors. The matching similarity is computed in linear time and forms a Mercer kernel. Whereas previous matching approximation algorithms suffer from distortion factors that increase linearly with the feature dimension, we demonstrate that our approach can maintain constant accuracy even as the feature dimension increases. When used as a kernel in a discriminative classifier, our approach achieves improved object recognition results over a state-of-the-art set kernel.

Posted Content
TL;DR: This work develops a new collaborative filtering method that combines previously known users' preferences, i.e. standard CF, with product/user attributes to predict a given user's interest in a particular product.
Abstract: We develop a new collaborative filtering (CF) method that combines both previously known users' preferences, i.e. standard CF, as well as product/user attributes, i.e. classical function approximation, to predict a given user's interest in a particular product. Our method is a generalized low rank matrix completion problem, where we learn a function whose inputs are pairs of vectors -- the standard low rank matrix completion problem being a special case where the inputs to the function are the row and column indices of the matrix. We solve this generalized matrix completion problem using tensor product kernels for which we also formally generalize standard kernel properties. Benchmark experiments on movie ratings show the advantages of our generalized matrix completion method over the standard matrix completion one with no information about movies or people, as well as over standard multi-task or single task learning methods.
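
One concrete way to realize the tensor-product-kernel construction is to define the kernel between two (user, item) pairs as the product of a user kernel and an item kernel; the product of two Mercer kernels is again a Mercer kernel. The feature matrices and index pairs below are hypothetical placeholders, not the paper's data.

```python
# Hedged sketch of a tensor product kernel over (user, item) pairs.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def pair_kernel(user_feats, item_feats, pairs_a, pairs_b, gamma_u=0.1, gamma_i=0.1):
    """K[(u,i),(u',i')] = K_user(u, u') * K_item(i, i')."""
    Ku = rbf_kernel(user_feats, gamma=gamma_u)   # kernel over user attributes
    Ki = rbf_kernel(item_feats, gamma=gamma_i)   # kernel over item attributes
    ua, ia = zip(*pairs_a)
    ub, ib = zip(*pairs_b)
    return Ku[np.ix_(ua, ub)] * Ki[np.ix_(ia, ib)]

# A kernel machine (e.g., kernel ridge with kernel="precomputed") trained on
# observed ratings indexed by (user, item) pairs can then use this Gram matrix.
```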

Journal ArticleDOI
TL;DR: A new direct DCIM without any quasi-static and surface-wave extraction is introduced; instead of avoiding large variations of the spectral kernel, a novel integration path that includes more variation is used before applying the MPM.
Abstract: Sommerfeld integration is introduced to calculate the spatial-domain Green's functions (GF) for the method of moments in multilayered media. To avoid time-consuming numerical integration, the discrete complex image method (DCIM) was introduced by approximating the spectral-domain GF by a sum of exponentials. However, traditional DCIM is not accurate in the far- and/or near-field region. Quasi-static and surface-wave terms need to be extracted before the approximation and it is complicated to extract the surface-wave terms. In this paper, some features of the matrix pencil method (MPM) are clarified. A new direct DCIM without any quasi-static and surface-wave extraction is introduced. Instead of avoiding large variations of the spectral kernel, we introduce a novel path to include more variation before we apply the MPM. The spatial-domain GF obtained by the new DCIM is accurate both in the near- and far-field regions. The CPU time used to perform the new DCIM is less than 1 s for computing the fields with a horizontal source-field separation from 1.6×10⁻⁴λ to 16λ. The new DCIM can even be accurate up to 160λ provided the variation of the spectral kernel is large enough and we have accounted for a sufficient number of complex images.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: It is shown that the proposed method does not require edge detection preprocessing and can estimate a wide range of blur radii, and is compared with a state-of-the-art method.
Abstract: In this paper a novel local blur estimation method is presented. The focal blur process is usually modeled as Gaussian low-pass filtering, and the problem of blur estimation is then to identify the Gaussian blur kernel. In the proposed method, the blurred input image is first re-blurred by Gaussian blur kernels with different blur radii. Then the difference ratios between the multiple re-blurred images and the input image are used to determine the unknown blur radius. We show that the proposed method does not require edge detection pre-processing and can estimate a wide range of blur radii. Experimental results of the proposed method on both synthetic and natural images and a comparison with a state-of-the-art method are presented.
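
The re-blur principle can be demonstrated with a one-line closed form (a common textbook variant, not necessarily the exact difference-ratio formula of the paper): at a Gaussian-blurred step edge, the ratio of the peak gradient before and after one extra re-blur determines the unknown sigma.

```python
# Hedged sketch of blur estimation by re-blurring a 1-D edge profile:
# for a step edge blurred by sigma, re-blurring with sigma_r changes the
# peak gradient by R = sqrt(sigma^2 + sigma_r^2) / sigma, so
# sigma = sigma_r / sqrt(R^2 - 1).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def estimate_sigma(profile, sigma_reblur=1.0):
    g0 = np.abs(np.gradient(profile)).max()
    g1 = np.abs(np.gradient(gaussian_filter1d(profile, sigma_reblur))).max()
    R = g0 / g1
    return sigma_reblur / np.sqrt(R * R - 1.0)

# Quick check on a synthetic edge blurred with sigma = 2.
edge = gaussian_filter1d(np.r_[np.zeros(50), np.ones(50)], 2.0)
print(estimate_sigma(edge))   # approximately 2.0
```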

Journal ArticleDOI
TL;DR: In this article, a kernel-based abnormal detection method is proposed to detect oil slicks in the wavelet decomposition of a SAR image; it was applied to ENVISAT Advanced SAR images and requires neither signal stationarity nor the absence of strong backscatterers.
Abstract: Spaceborne synthetic aperture radar (SAR) is well adapted to detect ocean pollution independently from daily or weather conditions. In fact, oil slicks have a specific impact on ocean wave spectra. Initial wave spectra may be characterized by three kinds of waves, namely big, medium, and small, which correspond physically to gravity and gravity-capillary waves. The increase of viscosity due to the presence of oil damps gravity-capillary waves. This induces not only a damping of the backscattering to the sensor but also a damping of the energy of the wave spectra. Thus, local segmentation of wave spectra may be achieved by the segmentation of a multiscale decomposition of the original SAR image. In this paper, a semisupervised oil-slick detection method is proposed, using kernel-based abnormal detection in the wavelet decomposition of a SAR image. It performs accurate detection without requiring signal stationarity or the absence of strong backscatterers (such as a ship). The algorithm has been applied to ENVISAT Advanced SAR images. It yields accurate segmentation results even for small slicks, with a very limited number of false alarms.

Journal ArticleDOI
Ömer Civalek
TL;DR: A discrete singular convolution (DSC) free vibration analysis of conical panels is presented in this article, where the derivatives in both the governing equations and the boundary conditions are discretized by the method of DSC.
Abstract: A discrete singular convolution (DSC) free vibration analysis of conical panels is presented. Regularized Shannon's delta kernel (RSK) is selected as singular convolution to illustrate the present algorithm. In the proposed approach, the derivatives in both the governing equations and the boundary conditions are discretized by the method of DSC. Effects of boundary conditions, vertex and subtended angle on the frequencies of conical panel are investigated. The effect of the circumferential node number on the vibrational behaviour of the panel is also analysed. The obtained results are compared with those of other numerical methods. Numerical results indicate that the DSC is a simple and reliable method for vibration analysis of conical panels.

Journal ArticleDOI
TL;DR: In this article, the authors studied the singular limit of a memory kernel collapsing into a Dirac mass, and the convergence of solutions on finite time-intervals when enough dissipativity is present.
Abstract: We consider differential systems with memory terms, expressed by convolution integrals, which account for the past history of one or more variables. The aim of this work is to analyze the passage to the singular limit when the memory kernel collapses into a Dirac mass. In particular, we focus on the reaction-diffusion equation with memory, and we discuss the convergence of solutions on finite time-intervals. When enough dissipativity is present, we also establish convergence results of the global and the exponential attractors. Nonetheless, the techniques here devised are quite general, and suitable to be applied to a large variety of models.

Journal ArticleDOI
TL;DR: In this article, a family of positive definite kernels was proposed for the manipulation of 3D structures of molecules with kernel methods, based on the comparison of the three-point pharmacophores present in the 3D structure of molecules, a set of molecular features known to be particularly relevant for virtual screening applications.
Abstract: We introduce a family of positive definite kernels specifically optimized for the manipulation of 3D structures of molecules with kernel methods. The kernels are based on the comparison of the three-point pharmacophores present in the 3D structures of molecules, a set of molecular features known to be particularly relevant for virtual screening applications. We present a computationally demanding exact implementation of these kernels, as well as fast approximations related to the classical fingerprint-based approaches. Experimental results suggest that this new approach is competitive with state-of-the-art algorithms based on the 2D structure of molecules for the detection of inhibitors of several drug targets.

Proceedings ArticleDOI
17 Jun 2006
TL;DR: It is shown that the spatial discretisation strategy can accelerate GMS by one to two orders of magnitude while achieving essentially the same segmentation; and that the other strategies attain speedups of less than an order of magnitude.
Abstract: Gaussian mean-shift (GMS) is a clustering algorithm that has been shown to produce good image segmentations (where each pixel is represented as a feature vector with spatial and range components). GMS operates by defining a Gaussian kernel density estimate for the data and clustering together points that converge to the same mode under a fixed-point iterative scheme. However, the algorithm is slow, since its complexity is O(kN²), where N is the number of pixels and k the average number of iterations per pixel. We study four acceleration strategies for GMS based on the spatial structure of images and on the fact that GMS is an expectation-maximisation (EM) algorithm: spatial discretisation, spatial neighbourhood, sparse EM and EM-Newton algorithm. We show that the spatial discretisation strategy can accelerate GMS by one to two orders of magnitude while achieving essentially the same segmentation; and that the other strategies attain speedups of less than an order of magnitude.
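
For reference, the plain (unaccelerated) Gaussian mean-shift fixed-point update that the paper sets out to speed up looks as follows; every point is iterated toward a mode of the kernel density estimate, and points converging to the same mode form one cluster. The O(N²) distance matrix per iteration is exactly the cost the proposed strategies attack.

```python
# Plain Gaussian mean-shift (EM fixed-point) update, without any of the
# acceleration strategies studied in the paper.
import numpy as np

def gms_modes(X, bandwidth=0.2, n_iter=100, tol=1e-5):
    modes = X.copy()
    for _ in range(n_iter):
        d2 = ((modes[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # all pairwise squared distances
        W = np.exp(-0.5 * d2 / bandwidth ** 2)                   # Gaussian kernel weights
        new = (W @ X) / W.sum(axis=1, keepdims=True)             # weighted mean = fixed-point step
        if np.abs(new - modes).max() < tol:
            return new
        modes = new
    return modes
```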

Journal ArticleDOI
TL;DR: Quantitative fidelity analyses and visual experiments indicate that these new nonseparable, 2-D cubic-convolution kernels can outperform several popular interpolation methods and establish a practical foundation for adaptive interpolation based on local autocorrelation estimates.
Abstract: Cubic convolution is a popular method for image interpolation. Traditionally, the piecewise-cubic kernel has been derived in one dimension with one parameter and applied to two-dimensional (2-D) images in a separable fashion. However, images typically are statistically nonseparable, which motivates this investigation of nonseparable cubic convolution. This paper derives two new nonseparable, 2-D cubic-convolution kernels. The first kernel, with three parameters (designated 2D-3PCC), is the most general 2-D, piecewise-cubic interpolator defined on [-2,2]×[-2,2] with constraints for biaxial symmetry, diagonal (or 90° rotational) symmetry, continuity, and smoothness. The second kernel, with five parameters (designated 2D-5PCC), relaxes the constraint of diagonal symmetry, based on the observation that many images have rotationally asymmetric statistical properties. This paper also develops a closed-form solution for determining the optimal parameter values for parametric cubic-convolution kernels with respect to ensembles of scenes characterized by autocorrelation (or power spectrum). This solution establishes a practical foundation for adaptive interpolation based on local autocorrelation estimates. Quantitative fidelity analyses and visual experiments indicate that these new methods can outperform several popular interpolation methods. An analysis of the error budgets for reconstruction error associated with blurring and aliasing illustrates that the methods improve interpolation fidelity for images with aliased components. For images with little or no aliasing, the methods yield results similar to other popular methods. Both 2D-3PCC and 2D-5PCC are low-order polynomials with small spatial support and so are easy to implement and efficient to apply.
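
For context, the traditional one-parameter, one-dimensional piecewise-cubic kernel (applied separably in 2-D) that the paper generalizes to nonseparable two-dimensional forms can be written down directly; a = -0.5 is the usual choice.

```python
# The classical separable cubic-convolution (Keys) kernel and a 1-D
# interpolation helper; the paper's 2D-3PCC/2D-5PCC kernels generalize this.
import numpy as np

def cubic_kernel(s, a=-0.5):
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    near, far = s < 1, (s >= 1) & (s < 2)
    out[near] = (a + 2) * s[near] ** 3 - (a + 3) * s[near] ** 2 + 1
    out[far] = a * s[far] ** 3 - 5 * a * s[far] ** 2 + 8 * a * s[far] - 4 * a
    return out

def cc_interp(f, x):
    """Interpolate 1-D samples f at fractional position x from its 4 nearest neighbours."""
    i = int(np.floor(x))
    idx = np.clip(np.arange(i - 1, i + 3), 0, len(f) - 1)
    return float(np.dot(f[idx], cubic_kernel(x - np.arange(i - 1, i + 3))))

# cc_interp(np.arange(10.0), 4.3) -> 4.3 (the kernel reproduces linear signals exactly)
```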

Journal ArticleDOI
TL;DR: A multiscale model to represent natural images based on the scale-space representation: a model inspired by the human visual system that fulfills a number of properties allowing the local orientation of several image structures to be estimated.
Abstract: The efficient representation of local differential structure at various resolutions has been a matter of great interest for adaptive image processing and computer vision tasks. In this paper, we derive a multiscale model to represent natural images based on the scale-space representation: a model that has an inspiration in the human visual system. We first derive the one-dimensional case and then extend the results to two and three dimensions. The operators obtained for analysis and synthesis stages are derivatives of the Gaussian smoothing kernel, so that, for the two-dimensional case, we can represent them either in a rotated coordinate system or in terms of directional derivatives. The method to perform the rotation is efficient because it is implemented by means of the application of the so-called generalized binomial filters. Such a family of discrete sequences fulfills a number of properties that allows estimating the local orientation for several image structures. We also define the discrete counterpart in which the coordinate normalization of the continuous case is approximated as a subsampling of the discrete domain.
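
The directional-derivative view mentioned in the abstract is easy to illustrate: the first derivative of a Gaussian in any direction is a linear combination of the x- and y-derivative responses, so orientation-tuned filters can be synthesized from two fixed ones. A small sketch with SciPy (not the paper's generalized-binomial-filter implementation):

```python
# Steer a first-order Gaussian derivative to an arbitrary angle theta by
# combining the two axis-aligned derivative responses.
import numpy as np
from scipy.ndimage import gaussian_filter

def directional_gaussian_derivative(img, theta, sigma=2.0):
    gx = gaussian_filter(img, sigma, order=(0, 1))   # derivative along x (columns)
    gy = gaussian_filter(img, sigma, order=(1, 0))   # derivative along y (rows)
    return np.cos(theta) * gx + np.sin(theta) * gy
```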

Journal Article
TL;DR: An abstract integrodifferential equation arising from linear viscoelasticity is considered and the stability properties of the related C0-semigroup are discussed, in dependence on the form of the convolution (memory) kernel.
Abstract: An abstract integrodifferential equation arising from linear viscoelasticity is considered. The stability properties of the related C0-semigroup are discussed, in dependence on the form of the convolution (memory) kernel.

Journal ArticleDOI
TL;DR: A novel procedure is proposed based on local linear kernel smoothing, in which local neighborhoods are adapted to the local smoothness of the surface measured by the observed data, which can remove noise correctly in continuity regions of thesurface and preserve discontinuities at the same time.
Abstract: In this paper, we are interested in the problem of estimating a discontinuous surface from noisy data. A novel procedure for this problem is proposed based on local linear kernel smoothing, in which local neighborhoods are adapted to the local smoothness of the surface measured by the observed data. The procedure can therefore remove noise correctly in continuity regions of the surface and preserve discontinuities at the same time. Since an image can be regarded as a surface of the image intensity function and such a surface has discontinuities at the outlines of objects, this procedure can be applied directly to image denoising. Numerical studies show that it works well in applications, compared to some existing procedures
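
The building block of the procedure, an ordinary (non-adaptive) local linear kernel smoother, is sketched below: at every pixel a plane is fitted by kernel-weighted least squares over a window, and the fitted value at the window centre is the smoothed intensity. The paper's contribution, adapting the neighbourhood so it stays on one side of a discontinuity, is deliberately left out.

```python
# Non-adaptive local linear kernel smoothing of a noisy surface/image.
import numpy as np

def local_linear_smooth(z, half=3, bandwidth=2.0):
    ny, nx = z.shape
    dy, dx = np.mgrid[-half:half + 1, -half:half + 1]
    w = np.exp(-0.5 * (dx ** 2 + dy ** 2) / bandwidth ** 2).ravel()   # Gaussian kernel weights
    X = np.column_stack([np.ones(w.size), dx.ravel(), dy.ravel()])    # design matrix: 1, x, y
    sw = np.sqrt(w)
    zp = np.pad(z, half, mode="reflect")
    out = np.empty_like(z, dtype=float)
    for i in range(ny):
        for j in range(nx):
            zw = zp[i:i + 2 * half + 1, j:j + 2 * half + 1].ravel()
            beta, *_ = np.linalg.lstsq(X * sw[:, None], zw * sw, rcond=None)
            out[i, j] = beta[0]   # fitted plane value at the window centre
    return out
```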

Journal ArticleDOI
TL;DR: The proposed kernel locality preserving projections (KLPP) algorithm consists of two steps, kernel principal component analysis (KPCA) followed by LPP; an outline for implementing KLPP is provided.

Journal ArticleDOI
TL;DR: This work describes an auto‐calibrated method for parallel imaging with spiral trajectory where an interpolation kernel, accounting for coil sensitivity factors, is derived from experimental data and used to interpolate the reduced data set in parallel imaging to estimate the missing k‐space data.
Abstract: This work describes an auto-calibrated method for parallel imaging with spiral trajectory. The method is a k-space approach where an interpolation kernel, accounting for coil sensitivity factors, is derived from experimental data and used to interpolate the reduced data set in parallel imaging to estimate the missing k-space data. For the case of spiral imaging, this interpolation kernel is defined along radial directions so that missing spiral interleaves can be estimated directly from neighboring interleaves. This kernel is invariant along the radial direction but varies azimuthally. Therefore, the k-space is divided into angular sectors and sector-specific kernels are used. It is demonstrated experimentally that relatively few sectors are sufficient for accurate reconstruction, allowing for efficient implementation. The interpolation kernels can be derived either from a separate calibration scan or self-calibration data available with a dual-density spiral acquisition. The reconstruction method is implemented with two sampling strategies and experimentally demonstrated to be robust.

Book ChapterDOI
13 Jan 2006
TL;DR: This paper introduces the kernel constrained mutual subspace method (KCMSM) and provides a new framework for 3D object recognition by applying it to multiple view images by projecting the data onto the kernel generalized difference subspace.
Abstract: This paper introduces the kernel constrained mutual subspace method (KCMSM) and provides a new framework for 3D object recognition by applying it to multiple view images. KCMSM is a kernel method for classifying a set of patterns. An input pattern x is mapped into the high-dimensional feature space F via a nonlinear function φ, and the mapped pattern φ(x) is projected onto the kernel generalized difference subspace, which represents the difference among subspaces in the feature space F. KCMSM classifies an input set based on the canonical angles between the input subspace and a reference subspace. This subspace is generated from the mapped patterns on the kernel generalized difference subspace, using principal component analysis. This framework is similar to conventional kernel methods using canonical angles; however, the method is different in that it includes a powerful feature extraction step for the classification of the subspaces in the feature space F by projecting the data onto the kernel generalized difference subspace. The validity of our method is demonstrated by experiments in a 3D object recognition task using multiview images.

Posted Content
TL;DR: In this paper, a variable-step-size algorithm for approximating convolutions which occur in evolution equations with memory terms is presented, for which advancing N steps requires only O(N log N) operations and O(log N) active memory.
Abstract: To approximate convolutions which occur in evolution equations with memory terms, a variable-step-size algorithm is presented for which advancing N steps requires only O(N log(N)) operations and O(log(N)) active memory, in place of O(N^2) operations and O(N) memory for a direct implementation. A basic feature of the fast algorithm is the reduction, via contour integral representations, to differential equations which are solved numerically with adaptive step sizes. Rather than the kernel itself, its Laplace transform is used in the algorithm. The algorithm is illustrated on three examples: a blow-up example originating from a Schrödinger equation with concentrated nonlinearity, chemical reactions with inhibited diffusion, and viscoelasticity with a fractional order constitutive law.

Proceedings ArticleDOI
10 Apr 2006
TL;DR: In this article, a kernel particle filter was used for 3D body tracking in a video stream acquired from a single uncalibrated camera using intensity-based and color-based cues.
Abstract: This paper presents the application of a kernel particle filter for 3D body tracking in a video stream acquired from a single uncalibrated camera. Using intensity-based and color-based cues as well as an articulated 3D body model with shape represented by cylinders, a real-time body tracking in monocular cluttered image sequences has been realized. The algorithm runs at 7.5 Hz on a laptop computer and tracks the upper body of a human with two arms. First, experimental results show that the proposed approach has good tracking as well as recovering capabilities despite using a small number of particles. The approach is intended for use on a mobile robot to improve human robot interaction.

Journal ArticleDOI
TL;DR: It is shown that the proposed systolic designs for circular convolution can be used for computation of linear convolution as well and involve significantly less memory and less area-delay complexity.
Abstract: Novel one- and two-dimensional systolic structures are designed for computation of circular convolution using distributed arithmetic (DA). The proposed structures involve significantly less memory and less area-delay complexity compared with the existing DA-based structures for circular convolution. Besides, it is shown that the proposed systolic designs for circular convolution can be used for computation of linear convolution as well.
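
The relationship the TL;DR refers to is the standard one: a length-L circular convolution coincides with the linear convolution once both sequences are zero-padded to L ≥ len(x) + len(h) − 1. A quick numerical check (with the FFT rather than the paper's distributed-arithmetic systolic hardware):

```python
# Zero-padded circular convolution (via FFT) equals linear convolution.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 0.5])
L = len(x) + len(h) - 1
circ = np.fft.irfft(np.fft.rfft(x, L) * np.fft.rfft(h, L), L)
print(np.allclose(circ, np.convolve(x, h)))   # True
```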