Showing papers on "Kernel (image processing)" published in 2003


Journal ArticleDOI
TL;DR: A new approach toward target representation and localization, the central component in visual tracking of nonrigid objects, is proposed, which employs a metric derived from the Bhattacharyya coefficient as similarity measure, and uses the mean shift procedure to perform the optimization.
Abstract: A new approach toward target representation and localization, the central component in visual tracking of nonrigid objects, is proposed. The feature histogram-based target representations are regularized by spatial masking with an isotropic kernel. The masking induces spatially-smooth similarity functions suitable for gradient-based optimization, hence, the target localization problem can be formulated using the basin of attraction of the local maxima. We employ a metric derived from the Bhattacharyya coefficient as similarity measure, and use the mean shift procedure to perform the optimization. In the presented tracking examples, the new method successfully coped with camera motion, partial occlusions, clutter, and target scale variations. Integration with motion filters and data association techniques is also discussed. We describe only a few of the potential applications: exploitation of background information, Kalman tracking using motion models, and face tracking.
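
A rough Python sketch of the two ingredients named above, the Bhattacharyya coefficient as a similarity measure and the per-pixel weights that drive the mean-shift iteration; the 16-bin histograms and the weight formula shown here are standard choices for kernel-based tracking and are assumptions, not the paper's exact implementation.

import numpy as np

def bhattacharyya(p, q):
    # p, q: normalized target and candidate histograms (each sums to 1).
    return float(np.sum(np.sqrt(p * q)))

def mean_shift_weights(pixel_bins, p, q):
    # pixel_bins: histogram bin index of each pixel in the candidate region.
    # Weight w_i = sqrt(p[b_i] / q[b_i]) pulls the window towards pixels
    # whose colors are under-represented in the current candidate.
    eps = 1e-12
    return np.sqrt(p[pixel_bins] / (q[pixel_bins] + eps))

# Toy example with 16-bin histograms.
rng = np.random.default_rng(0)
p = rng.random(16); p /= p.sum()
q = rng.random(16); q /= q.sum()
print("Bhattacharyya coefficient:", bhattacharyya(p, q))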

4,996 citations


Proceedings ArticleDOI
18 Jun 2003
TL;DR: Adapts Lindeberg's theory of feature scale selection, based on local maxima of differential scale-space filters, to the problem of selecting kernel scale for mean-shift blob tracking, and shows that a difference of Gaussian (DOG) mean-shift kernel enables efficient tracking of blobs through scale space.
Abstract: The mean-shift algorithm is an efficient technique for tracking 2D blobs through an image. Although the scale of the mean-shift kernel is a crucial parameter, there is presently no clean mechanism for choosing or updating scale while tracking blobs that are changing in size. We adapt Lindeberg's (1998) theory of feature scale selection based on local maxima of differential scale-space filters to the problem of selecting kernel scale for mean-shift blob tracking. We show that a difference of Gaussian (DOG) mean-shift kernel enables efficient tracking of blobs through scale space. Using this kernel requires generalizing the mean-shift algorithm to handle images that contain negative sample weights.
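
A difference-of-Gaussians kernel of the kind mentioned above can be sketched as follows; the kernel size, sigma, and the 1.6 scale ratio are illustrative assumptions. Note that the kernel has negative lobes, which is why the paper generalizes mean shift to handle negative sample weights.

import numpy as np

def dog_kernel(size, sigma, ratio=1.6):
    # 2D difference of Gaussians: narrow Gaussian minus wide Gaussian.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    def g(s):
        return np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return g(sigma) - g(ratio * sigma)

k = dog_kernel(31, sigma=4.0)
print(k.min() < 0)  # True: the negative lobes are present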

909 citations


Journal ArticleDOI
TL;DR: This paper proposes a kernel machine-based discriminant analysis method, which deals with the nonlinearity of the face patterns' distribution and effectively solves the so-called "small sample size" (SSS) problem, which exists in most FR tasks.
Abstract: Techniques that can introduce low-dimensional feature representation with enhanced discriminatory power are of paramount importance in face recognition (FR) systems. It is well known that the distribution of face images, under a perceivable variation in viewpoint, illumination or facial expression, is highly nonlinear and complex. It is, therefore, not surprising that linear techniques, such as those based on principal component analysis (PCA) or linear discriminant analysis (LDA), cannot provide reliable and robust solutions to those FR problems with complex face variations. In this paper, we propose a kernel machine-based discriminant analysis method, which deals with the nonlinearity of the face patterns' distribution. The proposed method also effectively solves the so-called "small sample size" (SSS) problem, which exists in most FR tasks. The new algorithm has been tested, in terms of classification error rate performance, on the multiview UMIST face database. Results indicate that the proposed methodology is able to achieve excellent performance with only a very small set of features being used, and its error rate is approximately 34% and 48% of those of two other commonly used kernel FR approaches, the kernel-PCA (KPCA) and the generalized discriminant analysis (GDA), respectively.

651 citations


Journal ArticleDOI
TL;DR: Stability of the resulting initial-boundary value scheme is proved, error estimates for the considered approximation of the boundary condition are given, and the efficiency of the proposed method is illustrated on several examples.
Abstract: This paper is concerned with transparent boundary conditions (TBCs) for the time-dependent Schrödinger equation in one and two dimensions. Discrete TBCs are introduced in the numerical simulations of whole space problems in order to reduce the computational domain to a finite region. Since the discrete TBC for the Schrödinger equation includes a convolution w.r.t. time with a weakly decaying kernel, its numerical evaluation becomes very costly for large-time simulations. As a remedy we construct approximate TBCs with a kernel having the form of a finite sum-of-exponentials, which can be evaluated in a very efficient recursion. We prove stability of the resulting initial-boundary value scheme, give error estimates for the considered approximation of the boundary condition, and illustrate the efficiency of the proposed method on several examples.
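
The efficiency of the sum-of-exponentials approximation comes from replacing the full convolution sum with a short recursion. A minimal sketch, assuming a discrete kernel of the form k[n] = sum_l b_l * q_l**n (the coefficients b_l, q_l below are arbitrary illustrative values, not the paper's fitted ones):

import numpy as np

def soe_convolution(u, b, q):
    # Convolve u[0..N-1] with k[n] = sum_l b[l] * q[l]**n using the
    # per-term recursion C_l[n] = q_l * C_l[n-1] + b_l * u[n], so each
    # time step costs O(L) rather than O(n).
    C = np.zeros(len(b), dtype=complex)  # poles may be complex in general
    out = np.empty(len(u))
    for n, un in enumerate(u):
        C = q * C + b * un
        out[n] = C.sum().real
    return out

# Check against direct convolution with the explicit kernel.
rng = np.random.default_rng(1)
u = rng.standard_normal(200)
b = np.array([0.5, 0.3]); q = np.array([0.9, 0.7])
k = (b[None, :] * q[None, :] ** np.arange(200)[:, None]).sum(axis=1)
assert np.allclose(soe_convolution(u, b, q), np.convolve(u, k)[:200])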

179 citations


Journal ArticleDOI
TL;DR: A way to use special convolution kernels to efficiently implement "multiplicative" updates on graphs where the prediction is essentially a kernel computation and the update contributes a factor to each edge.
Abstract: Kernels are typically applied to linear algorithms whose weight vector is a linear combination of the feature vectors of the examples. On-line versions of these algorithms are sometimes called "additive updates" because they add a multiple of the last feature vector to the current weight vector. In this paper we have found a way to use special convolution kernels to efficiently implement "multiplicative" updates. The kernels are defined by a directed graph. Each edge contributes an input. The inputs along a path form a product feature and all such products build the feature vector associated with the inputs. We also have a set of probabilities on the edges so that the outflow from each vertex is one. We then discuss multiplicative updates on these graphs where the prediction is essentially a kernel computation and the update contributes a factor to each edge. After adding the factors to the edges, the total outflow out of each vertex is no longer one. However, some clever algorithms re-normalize the weights on the paths so that the total outflow out of each vertex is one again. Finally, we show that if the digraph is built from a regular expression, then this can be used for speeding up the kernel and re-normalization computations. We reformulate a large number of multiplicative update algorithms using path kernels and characterize the applicability of our method. The examples include efficient algorithms for learning disjunctions and a recent algorithm that predicts as well as the best pruning of a series-parallel digraph.

153 citations


Proceedings ArticleDOI
19 Oct 2003
TL;DR: A framework for summarizing digital media based on structural analysis is presented, concentrating on characterizing the repetitive structure in popular music and building summaries by combining segments that represent the clusters most frequently repeated throughout the piece.
Abstract: We present a framework for summarizing digital media based on structural analysis. Though these methods are applicable to general media, we concentrate here on characterizing the repetitive structure in popular music. In the first step, a similarity matrix is calculated from interframe spectral similarity. Segment boundaries, such as verse-chorus transitions, are found by correlating a kernel along the diagonal of the matrix. Once segmented, spectral statistics of each segment are computed. In the second step, segments are clustered, based on the pairwise similarity of their statistics, using a matrix decomposition. Finally, the audio is summarized by combining segments representing the clusters most frequently repeated throughout the piece. We present results on a small corpus showing more than 90% correct detection of verse and chorus segments.
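
A minimal sketch of the segmentation step, assuming per-frame spectral features are already available; the cosine similarity and the Gaussian-tapered checkerboard kernel are common choices for this kind of novelty detection and are assumptions rather than the paper's exact settings.

import numpy as np

def novelty_curve(features, L=32):
    # features: (n_frames, n_dims) spectral features, e.g. one vector per frame.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    S = f @ f.T                                   # inter-frame similarity matrix
    ax = np.arange(-L, L)
    taper = np.exp(-(ax**2) / (2 * (L / 2) ** 2))
    checker = np.outer(np.sign(ax + 0.5), np.sign(ax + 0.5))
    K = checker * np.outer(taper, taper)          # checkerboard kernel
    nov = np.zeros(len(S))
    for i in range(L, len(S) - L):                # correlate along the diagonal
        nov[i] = np.sum(S[i - L:i + L, i - L:i + L] * K)
    return nov                                    # peaks suggest segment boundaries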

141 citations


Proceedings ArticleDOI
TL;DR: A framework for analyzing the structure of digital media streams using spectral data to construct a similarity matrix calculated from inter-frame spectral similarity; segments are then clustered based on the self-similarity of their statistics.
Abstract: We present a framework for analyzing the structure of digital media streams. Though our methods work for video, text, and audio, we concentrate on detecting the structure of digital music files. In the first step, spectral data is used to construct a similarity matrix calculated from inter-frame spectral similarity. The digital audio can be robustly segmented by correlating a kernel along the diagonal of the similarity matrix. Once segmented, spectral statistics of each segment are computed. In the second step, segments are clustered based on the self-similarity of their statistics. This reveals the structure of the digital music in a set of segment boundaries and labels. Finally, the music is summarized by selecting clusters with repeated segments throughout the piece. The summaries can be customized for various applications based on the structure of the original music.

141 citations


Journal ArticleDOI
TL;DR: The paper proves mathematically that the combination of kernels improves watermark performance and shows that the proposed watermarking scheme is much better than previous echo-hiding schemes in terms of detection rate and imperceptibility.
Abstract: The paper presents a novel echo-hiding method for audio watermarking. The method is quite different from previous echo-hiding methods since it presents a new echo kernel which introduces a forward kernel as well as a backward kernel. The new kernel, a combination of the backward and forward kernels, can enhance considerably the watermark detection rate. Thus, it is possible to reduce echo amplitude. The paper proves mathematically that the combination of kernels improves watermark performance. Experimental results show that the proposed watermarking scheme is much better than previous echo-hiding schemes in terms of detection rate and imperceptibility.
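
A sketch of the kind of bidirectional echo kernel described above: a direct path plus a delayed (forward) echo and an advanced (backward) echo of equal amplitude; the delays and amplitude are illustrative values, not the paper's parameters.

import numpy as np

def bidirectional_echo_kernel(delay, alpha):
    # h holds delta[n] plus echoes at +/- delay around the direct path.
    h = np.zeros(2 * delay + 1)
    h[delay] = 1.0          # direct path (kernel center)
    h[2 * delay] = alpha    # forward (post-) echo
    h[0] = alpha            # backward (pre-) echo
    return h

def embed_bit(frame, bit, d0=150, d1=200, alpha=0.05):
    # Watermark one audio frame by convolving with the kernel whose delay
    # encodes the bit; detection typically looks for the cepstral peak at
    # the corresponding delay.
    h = bidirectional_echo_kernel(d0 if bit == 0 else d1, alpha)
    return np.convolve(frame, h, mode="same")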

125 citations


Patent
27 Feb 2003
TL;DR: In this article, methods and systems for processing color images, such as by separating color and spatial information into separate channels, are presented, which are useful in forming electronic devices with reduced opto-mechanical, optoelectronic and processing complexity or cost.
Abstract: An image processing method includes the steps of wavefront coding a wavefront that forms an optical image, converting the optical image to a data stream, and processing the data stream with a filter kernel to reverse effects of wavefront coding and generate a final image. For example, the filter kernel may be a reduced filter set kernel, or a color-specific kernel. Methods and systems are also disclosed for processing color images, such as by separating color and spatial information into separate channels. Methods and systems herein are, for example, useful in forming electronic devices with reduced opto-mechanical, opto-electronic and processing complexity or cost.

110 citations


Proceedings ArticleDOI
18 Jun 2003
TL;DR: A new positive definite kernel f(A, B) defined over pairs of matrices A, B is derived based on the concept of principal angles between two linear subspaces and it is shown that the principal angles can be recovered using only inner-products between pairs of column vectors of the input matrices thereby allowing the original column vectors to be mapped onto arbitrarily high-dimensional feature spaces.
Abstract: We consider the problem of learning with instances defined over a space of sets of vectors. We derive a new positive definite kernel f(A, B) defined over pairs of matrices A, B based on the concept of principal angles between two linear subspaces. We show that the principal angles can be recovered using only inner-products between pairs of column vectors of the input matrices thereby allowing the original column vectors of A, B to be mapped onto arbitrarily high-dimensional feature spaces. We apply this technique to inference over image sequences applications of face recognition and irregular motion trajectory detection.
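
A rough sketch of the construction in its plain (non-kernelized) form: orthonormalize the column spaces, take the SVD of the cross-Gram matrix to obtain the cosines of the principal angles, and combine them. Using the product of squared cosines is one common choice and is an assumption here, as is skipping the inner-product-only (kernel) formulation.

import numpy as np

def principal_angle_kernel(A, B):
    # A, B: (d, k) matrices whose columns span two linear subspaces.
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return float(np.prod(cosines ** 2))

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
print(principal_angle_kernel(A, A))  # identical subspaces -> 1.0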

108 citations


01 Jan 2003
TL;DR: TrueTime is a toolbox for simulating the timely behavior of real-time kernels executing controller tasks and makes it possible to simulate simple models of network protocols and their influence on networked control loops.
Abstract: Traditional control design using MATLAB/Simulink often disregards the temporal effects arising from the actual implementation of the controllers. Nowadays, controllers are often implemented as tasks in a real-time kernel and communicate with other nodes over a network. Consequently, the constraints of the target system, e.g., limited CPU speed and network bandwidth, must be taken into account at design time. For this purpose we have developed TrueTime, a toolbox for simulation of distributed real-time control systems. TrueTime makes it possible to simulate the timely behavior of real-time kernels executing controller tasks. TrueTime also makes it possible to simulate simple models of network protocols and their influence on networked control loops. TrueTime consists of a kernel block and a network block, both variable-step S-functions written in C++. TrueTime also provides a collection of MATLAB functions used to, e.g., do A/D and D/A conversion, send and receive network messages, set up timers, and change task attributes. The TrueTime blocks are connected with ordinary continuous Simulink blocks to form a real-time control system. The TrueTime kernel block simulates a computer with an event-driven real-time kernel, A/D and D/A converters, a network interface, and external interrupt channels. The kernel executes user-defined tasks and interrupt handlers, representing, e.g., I/O tasks, control algorithms, and communication tasks. Execution is defined by user-written code functions (C++ functions or m-files) or graphically using ordinary discrete Simulink blocks. The simulated execution time of the code may be modeled as constant, random or even data-dependent. Furthermore, the real-time scheduling policy of the kernel is arbitrary and decided by the user. The TrueTime network block is event driven and distributes messages between computer nodes according to a chosen network model. Currently five of the most common medium access control protocols are supported (CSMA/CD (Ethernet), CSMA/CA (CAN), token-ring, FDMA, and TDMA). It is also possible to specify network parameters such as transmission rate, pre- and post-processing delays, frame overhead, and loss probability. TrueTime is currently used as an experimental platform for research on flexible approaches to real-time implementation and scheduling of controller tasks. One example is feedback scheduling, where feedback is used in the real-time system to dynamically distribute resources according to the current situation in the system.

Journal ArticleDOI
TL;DR: The presented algorithms work whether Dm(x) is an analytical function or only given in numerical form; the inverse problem may imply ill-posed problems, and the use of FFT in particular may be susceptible to them.
Abstract: A Gaussian convolution kernel K is deduced as a Green's function of a Lie operator series. The deconvolution of a Gaussian kernel is developed by the inverse Green's function K^(-1). A practical application is the deconvolution of measured profiles Dm(x) of photons and protons with finite detector size to determine the profiles Dp(x) of point-detectors or Monte Carlo Bragg curves of protons. The presented algorithms work if Dm(x) is either an analytical function or only given in a numerical form. Some approximation methods of the deconvolution are compared (differential operator expansion to analytical adaptations of 2 x 2 cm2 and 4 x 4 cm2 profiles, Hermite expansions to measured 6 x 6 cm2 and 20 x 20 cm2 profiles and Bragg curves of 80/180 MeV protons, FFT to an analytical 4 x 4 cm2 profile). The inverse problem may imply ill-posed problems, and, in particular, the use of FFT may be susceptible to them.
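
A minimal numerical sketch of the deconvolution task (not the paper's operator-series method): divide by the Gaussian transfer function in Fourier space with a small Wiener-like damping term, since straightforward division amplifies noise where the transfer function is tiny. The sigma and damping constant are illustrative assumptions.

import numpy as np

def gaussian_deconvolve_fft(measured, sigma, dx=1.0, eps=1e-3):
    # measured: sampled profile Dm(x); returns an estimate of Dp(x).
    n = len(measured)
    freqs = np.fft.fftfreq(n, d=dx)
    G = np.exp(-2 * (np.pi * sigma * freqs) ** 2)       # FT of unit-area Gaussian
    D = np.fft.fft(measured)
    return np.real(np.fft.ifft(D * G / (G**2 + eps)))   # damped inverse filter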

Journal ArticleDOI
TL;DR: In this paper, a nonparametric method for estimating the conditional risk-neutral density (RND) from a cross-section of option prices is proposed, termed the positive convolution approximation (PCA).

Journal ArticleDOI
TL;DR: Two-dimensional (2D), nonseparable, piecewise cubic convolution (PCC) for image interpolation is developed with a closed-form derivation for a two-parameter, 2D PCC kernel with support [-2,2] x [-2,2] that is constrained for continuity, smoothness, symmetry, and flat-field response.
Abstract: The paper develops two-dimensional (2D), nonseparable, piecewise cubic convolution (PCC) for image interpolation. Traditionally, PCC has been implemented based on a one-dimensional (1D) derivation with a separable generalization to two dimensions. However, typical scenes and imaging systems are not separable, so the traditional approach is suboptimal. We develop a closed-form derivation for a two-parameter, 2D PCC kernel with support [-2,2] x [-2,2] that is constrained for continuity, smoothness, symmetry, and flat-field response. Our analyses, using several image models, including Markov random fields, demonstrate that the 2D PCC yields small improvements in interpolation fidelity over the traditional, separable approach. The constraints on the derivation can be relaxed to provide greater flexibility and performance.
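
For reference, the separable baseline the paper improves on is built from the standard one-dimensional piecewise cubic convolution kernel with support [-2,2]; the sketch below uses the common a = -0.5 parameter and does not reproduce the paper's two-parameter nonseparable 2D kernel.

import numpy as np

def cubic_conv_kernel(s, a=-0.5):
    # Keys-style piecewise cubic convolution kernel on [-2, 2].
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    near = s <= 1
    far = (s > 1) & (s < 2)
    out[near] = (a + 2) * s[near]**3 - (a + 3) * s[near]**2 + 1
    out[far] = a * s[far]**3 - 5 * a * s[far]**2 + 8 * a * s[far] - 4 * a
    return out

# The traditional separable 2D kernel is an outer product of 1D kernels.
ax = np.linspace(-2, 2, 9)
K2d = np.outer(cubic_conv_kernel(ax), cubic_conv_kernel(ax))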

Journal ArticleDOI
TL;DR: In this article, an empirical system kernel determined from scans of line source phantoms is incorporated into the forward model of the EM and Bayesian algorithms to achieve resolution recovery, and significant improvements in reconstruction quality can be realized by combining accurate models of the system response with Bayesian reconstruction algorithms.
Abstract: We quantitatively compare filtered backprojection (FBP), expectation-maximization (EM), and Bayesian reconstruction algorithms as applied to the IndyPET scanner, a dedicated research scanner which has been developed for small and intermediate field of view imaging applications. In contrast to previous approaches that rely on Monte Carlo simulations, a key feature of our investigation is the use of an empirical system kernel determined from scans of line source phantoms. This kernel is incorporated into the forward model of the EM and Bayesian algorithms to achieve resolution recovery. Three data sets are used: data collected on the IndyPET scanner using a bar phantom and a Hoffman three-dimensional brain phantom, and simulated data containing a hot lesion added to a uniform background. Reconstruction quality is analyzed quantitatively in terms of bias-variance measures (bar phantom) and mean square error (lesion phantom). We observe that without use of the empirical system kernel, the FBP, EM, and Bayesian algorithms give similar performance. However, with the inclusion of the empirical kernel, the iterative algorithms provide superior reconstructions compared with FBP, both in terms of visual quality and quantitative measures. Furthermore, Bayesian methods outperform EM. We conclude that significant improvements in reconstruction quality can be realized by combining accurate models of the system response with Bayesian reconstruction algorithms.
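
The role of the empirical kernel can be sketched with a plain ML-EM iteration in which the blur is folded into the system matrix A; the tiny synthetic system below only illustrates the update, not the IndyPET model.

import numpy as np

def mlem(A, y, n_iter=200):
    # Standard ML-EM update: x <- x * A^T(y / Ax) / A^T 1.
    # Resolution recovery comes from building the measured blur into A.
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0]) + 1e-12
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x + 1e-12))) / sens
    return x

rng = np.random.default_rng(0)
A = rng.random((40, 20)); x_true = rng.random(20)
x_hat = mlem(A, A @ x_true)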

Proceedings ArticleDOI
02 Nov 2003
TL;DR: A kernel-based fuzzy clustering algorithm that exploits the spatial contextual information in image data and is more robust to noise than conventional fuzzy image segmentation algorithms.
Abstract: The 'kernel method' has attracted great attention with the development of support vector machine (SVM) and has been studied in a general way. In this paper, we present a kernel-based fuzzy clustering algorithm that exploits the spatial contextual information in image data. The algorithm is realized by modifying the objective function in the conventional fuzzy c-means algorithm using a kernel-induced distance metric and a spatial penalty term that takes into account the influence of the neighboring pixels on the centre pixel. Experimental results on both synthetic and real MR images show that the proposed algorithm is more robust to noise than the conventional fuzzy image segmentation algorithms.
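
A minimal sketch of kernelized fuzzy c-means with a Gaussian kernel, leaving out the spatial penalty term that the paper adds; with this kernel the induced distance is proportional to 1 - K(x, v). The fuzzifier m, kernel width and iteration count are illustrative assumptions.

import numpy as np

def kernel_fcm(X, c=3, m=2.0, sigma=1.0, n_iter=50, seed=0):
    # X: (n, d) data (e.g. pixel intensities); Gaussian kernel
    # K(x, v) = exp(-||x - v||^2 / sigma^2).
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]        # initial cluster centres
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / sigma**2)
        dist = np.maximum(1.0 - K, 1e-12)              # kernel-induced distance
        U = dist ** (-1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)              # membership degrees
        W = (U ** m) * K
        V = (W.T @ X) / (W.sum(axis=0)[:, None] + 1e-12)  # centre update
    return U, V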

Proceedings ArticleDOI
19 Oct 2003
TL;DR: In this article, a 3D isotropic shift-invariant blur kernel, computed for positrons propagating in a homogeneous medium, is used to model positron range within a MAP reconstruction framework, and a new shift-variant blurring model for positron range that accounts for spatial inhomogeneities in the positron scatter properties of the medium is proposed.
Abstract: Positron range is one of the factors that fundamentally limits the spatial resolution of PET images. With the higher resolution of small animal imaging systems and increased interest in using higher energy positron emitters, it is important to consider range effects when designing image reconstruction methods. The positron range distribution can be measured experimentally or calculated using approximate analytic formulae or Monte Carlo simulations. We investigate the use of this distribution within a MAP image reconstruction framework. Positron range is modeled as a blurring kernel and included as part of the forward projection matrix. We describe the use of a 3D isotropic shift-invariant blur kernel, which assumes that positrons are propagating in a homogeneous medium and is computed by Monte Carlo simulation using EGS4. We also propose a new shift-variant blurring model for positron range that accounts for spatial inhomogeneities in the positron scatter properties of the medium. Monte Carlo simulations, phantom, and animal studies with the isotopes Cu-60 and Cu-64 are presented.

Journal ArticleDOI
TL;DR: From the experimental results and performance analysis, it is observed that the time performance of the DLT-based strategy for processing a very large volume of image data on a network of workstations is much better than that obtained using EQS, which verifies the feasibility of DLT in practical applications.
Abstract: In distributed computing systems, a critical concern is to efficiently partition and schedule the tasks among available processors in such a way that the overall processing time of the submitted tasks is at a minimum. On a network of workstations, using the parallel virtual machine communication library, we conducted distributed image-processing experiments following two different scheduling and partitioning strategies. In this article, following the recently evolved paradigm referred to as divisible load theory (DLT), we conducted an experimental study on the time performance to process a very large volume of image data on a network of workstations. This is the first time in the domain of DLT that such an experimental investigation has been carried out. As a case study, we use edge detection with the Sobel operator as an application to demonstrate the performance of the strategy proposed by DLT. Then, we present our program model and timing mechanism for the distributed image processing. Following our system models, we compare two different partitioning and scheduling strategies: the partitioning and scheduling strategy following divisible load-scheduling theory (PSSD) and the traditional equal-partitioning strategy (EQS). From the experimental results and performance analysis using different image sizes, kernel sizes, and numbers of workstations, we observe that the time performance using PSSD is much better than that obtained using EQS. We also demonstrate the speed-up achieved by these strategies. Furthermore, we observe that the theoretical analysis using DLT agrees with the experimental results quite well, which verifies the feasibility of DLT in practical applications.

Patent
06 Jun 2003
TL;DR: In this paper, a call to the common interface is mapped to the kernel mode implementation for kernel mode processes and to the user mode implementation for user mode processes, and the mapping may be performed at runtime or may be static.
Abstract: Methods, systems, and computer program products that, by defining a common interface, allow for a single implementation of operations common to both kernel mode and user mode processing, relative to a hardware adapter. Corresponding kernel mode and user mode implementations of the operations are provided. For a given process, a call to the common interface is mapped to the kernel mode implementation for kernel mode processes and to the user mode implementation for user mode processes. The mapping may be performed at runtime or may be static. The common operation may provide a user mode process direct access to a hardware adapter, such as for sending and receiving information, without switching to kernel mode. A kernel mode implementation for operations unique to kernel mode processing, such as specifying security parameters for the hardware adapter to enforce, or initiating and terminating communication through the hardware adapter, also may be provided.

Journal ArticleDOI
TL;DR: This work investigates new image representations/kernels derived from probabilistic models of the class of images considered and presents a new feature selection method which can be used to reduce the dimensionality of the image representation without significant losses in terms of the performance of the detection (search) system.
Abstract: The success of a multimedia information system depends heavily on the way the data is represented. Although there are "natural" ways to represent numerical data, it is not clear what is a good way to represent multimedia data, such as images, video, or sound. We investigate various image representations where the quality of the representation is judged based on how well a system for searching through an image database can perform, although the same techniques and representations can be used for other types of object detection tasks or multimedia data analysis problems. The system is based on a machine learning method used to develop object detection models from example images that can subsequently be used to detect (search for) images of a particular object in an image database. As a base classifier for the detection task, we use support vector machines (SVM), a kernel based learning method. Within the framework of kernel classifiers, we investigate new image representations/kernels derived from probabilistic models of the class of images considered and present a new feature selection method which can be used to reduce the dimensionality of the image representation without significant losses in terms of the performance of the detection (search) system.

Proceedings ArticleDOI
06 Oct 2003
TL;DR: This paper proposes to guide the event insertion by using a set of rules, amounting to an aspect that describes the control-flow contexts in which each event should be generated, and presents an implementation that has been developed to automatically perform this evolution.
Abstract: Automating software evolution requires both identifying precisely the affected program points and selecting the appropriate modification at each point. This task is particularly complicated when considering a large program, even when the modifications appear to be systematic. We illustrate this situation in the context of evolving the Linux kernel to support Bossa, an event-based framework for process-scheduler development. To support Bossa, events must be added at points scattered throughout the kernel. In each case, the choice of event depends on properties of one or a sequence of instructions. To describe precisely the choice of event, we propose to guide the event insertion by using a set of rules, amounting to an aspect that describes the control-flow contexts in which each event should be generated. In this paper, we present our approach and describe the set of rules that allows proper event insertion. These rules use temporal logic to describe sequences of instructions that require events to be inserted. We also give an overview of an implementation that we have developed to automatically perform this evolution.

Journal ArticleDOI
TL;DR: Kernel discriminant analysis, which employs the kernel technique to perform linear discriminant analysis in a high-dimensional feature space, is developed to extract the significant nonlinear features which maximise the between-class variance and minimise the within-class variance.

Journal ArticleDOI
TL;DR: This work presents an alternative approach using default reconstruction of sharp images and online filtering in the spatial domain allowing modification of the sharpness-noise tradeoff in real time, which can completely replace the variety of different reconstruction kernels.
Abstract: In computed tomography (CT), selection of a convolution kernel determines the tradeoff between image sharpness and pixel noise. For certain clinical applications it is desirable to have two or more sets of images with different settings. So far, this typically requires reconstruction of several sets of images. We present an alternative approach using default reconstruction of sharp images and online filtering in the spatial domain allowing modification of the sharpness-noise tradeoff in real time. A suitable smoothing filter function in the frequency domain is the ratio of smooth and original (sharp) kernel. Efficient implementation can be achieved by a Fourier transform of this ratio to the spatial domain. Separating the two-dimensional spatial filtering into two subsequent one-dimensional filtering stages in the x and y directions using a Gaussian approximation for the convolution kernel further reduces computational complexity. Due to efficient implementation, interactive modification of the filter settings becomes possible, which can completely replace the variety of different reconstruction kernels.
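
A minimal sketch of the online filtering idea: reconstruct once with the sharp kernel, then trade sharpness for noise in real time by applying a separable Gaussian in the spatial domain. The Gaussian approximation follows the paper; the particular sigma and truncation radius are illustrative.

import numpy as np

def separable_gaussian_smooth(image, sigma):
    # Two 1D convolutions (along x, then y) stand in for the 2D smoothing
    # implied by the ratio of smooth and sharp reconstruction kernels.
    radius = max(1, int(3 * sigma))
    ax = np.arange(-radius, radius + 1)
    g = np.exp(-ax**2 / (2 * sigma**2))
    g /= g.sum()
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, image)
    return np.apply_along_axis(lambda col: np.convolve(col, g, mode="same"), 0, tmp)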

Book ChapterDOI
01 Jan 2003
TL;DR: It is shown that under some conditions these kernels are closed under sum, product, or Kleene-closure and a general method for constructing a PDS rational kernel from an arbitrary transducer defined on some non-idempotent semirings is given.
Abstract: Kernel methods are widely used in statistical learning techniques. We recently introduced a general kernel framework based on weighted transducers or rational relations, rational kernels, to extend kernel methods to the analysis of variable-length sequences or more generally weighted automata. These kernels are efficient to compute and have been successfully used in applications such as spoken-dialog classification. Not all rational kernels are positive definite and symmetric (PDS) however, a sufficient property for guaranteeing the convergence of discriminant classification algorithms such as Support Vector Machines. We present several theoretical results related to PDS rational kernels. We show in particular that under some conditions these kernels are closed under sum, product, or Kleene-closure and give a general method for constructing a PDS rational kernel from an arbitrary transducer defined on some non-idempotent semirings. We also show that some commonly used string kernels or similarity measures such as the edit-distance, the convolution kernels of Haussler, and some string kernels used in the context of computational biology are specific instances of rational kernels. Our results include the proof that the edit-distance over a non-trivial alphabet is not negative definite, which, to the best of our knowledge, was never stated or proved before.

Proceedings ArticleDOI
24 Nov 2003
TL;DR: Kernel particle filter is presented as a variation of particle filter with improved sampling efficiency and performance in visual tracking by invoking kernel-based representation of densities and introducing mean shift as an iterative mode-seeking procedure.
Abstract: Particle filter has recently received attention in computer vision applications due to attributes such as its ability to carry multiple hypotheses and its relaxation of the linearity assumption. Its shortcoming is an increase in complexity with state dimension. We present kernel particle filter as a variation of particle filter with improved sampling efficiency and performance in visual tracking. Unlike existing methods that use stochastic or deterministic optimization procedures to find the modes in a likelihood function, we redistribute particles by invoking kernel-based representation of densities and introducing mean shift as an iterative mode-seeking procedure, in which particles move towards dominant modes while still remaining fair samples from the posterior. Experiments on face and limb tracking show that the algorithm is superior to the conventional particle filter in handling weak dynamic models and occlusions with 60% fewer particles in 3-9 dimensional spaces.

Proceedings ArticleDOI
TL;DR: This work presents a relatively simple defect correction algorithm, requiring only a small 7 by 7 kernel of raw color filter array data, that effectively corrects a wide variety of defect types and produces substantially better results in high-frequency image regions than conventional one-dimensional correction methods.
Abstract: Although the number of pixels in image sensors is increasing exponentially, production techniques have only been able to linearly reduce the probability that a pixel will be defective. The result is a rapidly increasing probability that a sensor will contain one or more defective pixels. Sensors with defects are often discarded after fabrication because they may not produce aesthetically pleasing images. To reduce the cost of image sensor production, defect correction algorithms are needed that allow the utilization of sensors with bad pixels. We present a relatively simple defect correction algorithm, requiring only a small 7 by 7 kernel of raw color filter array data that effectively corrects a wide variety of defect types. Our adaptive edge algorithm is high quality, uses few image lines, is adaptable to a variety of defect types, and independent of other on-board DSP algorithms. Results show that the algorithm produces substantially better results in high-frequency image regions compared to conventional one-dimensional correction methods.

Journal ArticleDOI
TL;DR: The Monte Carlo superposition provides a simple, accurate and efficient method for complex radiotherapy dose calculations and allows continuous sampling of photon direction to model sharp changes in fluence, such as those due to collimator tongue-and-groove.
Abstract: The convolution/superposition calculations for radiotherapy dose distributions are traditionally performed by convolving polyenergetic energy deposition kernels with TERMA (total energy released per unit mass) precomputed in each voxel of the irradiated phantom. We propose an alternative method in which the TERMA calculation is replaced by random sampling of photon energy, direction and interaction point. Then, a direction is randomly sampled from the angular distribution of the monoenergetic kernel corresponding to the photon energy. The kernel ray is propagated across the phantom, and energy is deposited in each voxel traversed. An important advantage of the explicit sampling of energy is that spectral changes with depth are automatically accounted for. No spectral or kernel hardening corrections are needed. Furthermore, the continuous sampling of photon direction allows us to model sharp changes in fluence, such as those due to collimator tongue-and-groove. The use of explicit photon direction also facilitates modelling of situations where a given voxel is traversed by photons from many directions. Extra-focal radiation, for instance, can therefore be modelled accurately. Our method also allows efficient calculation of a multi-segment/multi-beam IMRT plan by sampling of beam angles and field segments according to their relative weights. For instance, an IMRT plan consisting of seven 14 x 12 cm2 beams with a total of 300 field segments can be computed in 15 min on a single CPU, with 2% statistical fluctuations at the isocentre of the patient's CT phantom divided into 4 x 4 x 4 mm3 voxels. The calculation contains all aperture-specific effects, such as tongue and groove, leaf curvature and head scatter. This contrasts with deterministic methods in which each segment is given equal importance, and the time taken scales with the number of segments. Thus, the Monte Carlo superposition provides a simple, accurate and efficient method for complex radiotherapy dose calculations.

Journal ArticleDOI
01 Jun 2003
TL;DR: The MXX model and variations of the kernel method are combined to produce new autoassociative and heteroassociative memories which exhibit better error correction capabilities than MXX and WXX and a reduced number of spurious memories which can be easily described in terms of the fundamental memories.
Abstract: Morphological associative memories (MAMs) belong to the class of morphological neural networks. The recording scheme used in the original MAM models is similar to the correlation recording recipe. Recording is achieved by means of a maximum (MXY model) or minimum (WXY model) of outer products. Notable features of autoassociative morphological memories (AMMs) include optimal absolute storage capacity and one-step convergence. Heteroassociative morphological memories (HMMs) do not have these properties and are not very well understood. The fixed points of AMMs can be characterized exactly in terms of the original patterns. Unfortunately, AMM fixed points include a large number of spurious memories. In this paper, we combine the MXX model and variations of the kernel method to produce new autoassociative and heteroassociative memories. We also introduce a dual kernel method. A new, dual model is given by a combination of the WXX model and a variation of the dual kernel method. The new MAM models exhibit better error correction capabilities than MXX and WXX and a reduced number of spurious memories which can be easily described in terms of the fundamental memories.
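
For orientation, the standard autoassociative recording and recall rules behind the MXX memory can be sketched in max-plus/min-plus form; the kernel-method and dual-kernel variants introduced in the paper are not reproduced here.

import numpy as np

def record_Mxx(X):
    # X: (k, n) stored patterns as rows; M[i, j] = max_k (x^k_i - x^k_j).
    return (X[:, :, None] - X[:, None, :]).max(axis=0)

def recall_Mxx(M, x):
    # Min-plus product: y_i = min_j (M[i, j] + x_j).
    return (M + x[None, :]).min(axis=1)

X = np.array([[1.0, 4.0, 2.0],
              [3.0, 0.0, 5.0]])
M = record_Mxx(X)
print(np.allclose(recall_Mxx(M, X[0]), X[0]))  # stored patterns are fixed points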

Patent
Renato Keshet, Ron Maurer, Doron Shaked, Yacov Hel-Or, Danny Barash
11 Mar 2003
TL;DR: In this paper, a sensor image is processed by applying a first demosaicing kernel to produce a sharp image, then applying a second kernel to generate a smooth image, and finally using the sharp and smooth images to produce an output image.
Abstract: A sensor image is processed by applying a first demosaicing kernel to produce a sharp image; applying a second demosaicing kernel to produce a smooth image; and using the sharp and smooth images to produce an output image.

Journal ArticleDOI
TL;DR: This thesis surveys algorithms for computing linear and cyclic convolution in a uniform mathematical notation that allows automatic derivation, optimization, and implementation, and finds a window where CRT-based algorithms outperform other methods of computing convolutions.
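
Although the thesis focuses on CRT-based algorithms, the FFT baseline they are compared against can be sketched in a few lines: zero-pad so that cyclic convolution equals linear convolution.

import numpy as np

def linear_convolution_fft(x, h):
    # Pad to length len(x) + len(h) - 1 so the cyclic (FFT) convolution
    # reproduces the linear convolution exactly.
    n = len(x) + len(h) - 1
    return np.real(np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)))

x = np.array([1.0, 2.0, 3.0])
h = np.array([0.5, -1.0])
print(np.allclose(linear_convolution_fft(x, h), np.convolve(x, h)))  # True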