Showing papers by "French Institute for Research in Computer Science and Automation" published in 2006


Proceedings ArticleDOI
17 Jun 2006
TL;DR: This paper presents a method for recognizing scene categories based on approximate global geometric correspondence that exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories.
Abstract: This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting "spatial pyramid" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba’s "gist" and Lowe’s SIFT descriptors.
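
A minimal sketch of the pyramid construction described above, assuming local features have already been quantized into `vocab` visual-word labels at pixel positions `codes_xy`; all names are illustrative, not the authors' code:

```python
import numpy as np

def spatial_pyramid(codes_xy, labels, img_w, img_h, vocab=200, levels=3):
    # codes_xy: (N, 2) pixel positions of local features
    # labels:   (N,) visual-word index of each feature
    feats = []
    L = levels - 1
    for l in range(levels):
        cells = 2 ** l
        # pyramid match weights: 1/2^L for level 0, 1/2^(L-l+1) above it
        w = 1.0 / 2 ** L if l == 0 else 1.0 / 2 ** (L - l + 1)
        for i in range(cells):
            for j in range(cells):
                in_cell = ((codes_xy[:, 0] * cells // img_w == i) &
                           (codes_xy[:, 1] * cells // img_h == j))
                h, _ = np.histogram(labels[in_cell], bins=vocab,
                                    range=(0, vocab))
                feats.append(w * h)
    return np.concatenate(feats)
```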

8,736 citations


Journal ArticleDOI
30 Nov 2006
TL;DR: This paper is the first of a two-part series on visual servo control, the use of computer vision data in the servo loop to control the motion of a robot; it describes the basic techniques that are by now well established in the field.
Abstract: This paper is the first of a two-part series on the topic of visual servo control using computer vision data in the servo loop to control the motion of a robot. In this paper, we describe the basic techniques that are by now well established in the field. We first give a general overview of the formulation of the visual servo control problem. We then describe the two archetypal visual servo control schemes: image-based and position-based visual servo control. Finally, we discuss performance and stability issues that pertain to these two schemes, motivating the second article in the series, in which we consider advanced techniques.
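
As a concrete illustration of the image-based scheme, here is a minimal sketch of the classical control law v = -λ L⁺ (s - s*); the interaction matrix `L` and the feature vectors are assumed given, and the names are illustrative:

```python
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    # classical IBVS law: v = -lambda * pinv(L) * (s - s*)
    e = s - s_star                        # feature error in the image
    return -gain * np.linalg.pinv(L) @ e  # 6-DOF camera velocity twist
```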

2,026 citations


Journal ArticleDOI
17 Jun 2006
TL;DR: A large-scale evaluation of an approach that represents images as distributions of features extracted from a sparse set of keypoint locations and learns a Support Vector Machine classifier with kernels based on two effective measures for comparing distributions, the Earth Mover’s Distance and the χ2 distance.
Abstract: Recently, methods based on local image features have shown promise for texture and object recognition tasks. This paper presents a large-scale evaluation of an approach that represents images as distributions (signatures or histograms) of features extracted from a sparse set of keypoint locations and learns a Support Vector Machine classifier with kernels based on two effective measures for comparing distributions, the Earth Mover’s Distance and the χ2 distance. We first evaluate the performance of our approach with different keypoint detectors and descriptors, as well as different kernels and classifiers. We then conduct a comparative evaluation with several state-of-the-art recognition methods on 4 texture and 5 object databases. On most of these databases, our implementation exceeds the best reported results and achieves comparable performance on the rest. Finally, we investigate the influence of background correlations on recognition performance.
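
A hedged sketch of the χ2-based kernel the approach plugs into the SVM, assuming images are already represented as fixed-length histograms (the EMD variant, which works on signatures, is not shown):

```python
import numpy as np

def chi2_kernel(H1, H2, gamma=1.0):
    # exponential chi-square kernel K(x, y) = exp(-gamma * d_chi2(x, y))
    # between rows of two histogram matrices
    K = np.zeros((H1.shape[0], H2.shape[0]))
    for i, h in enumerate(H1):
        d = 0.5 * np.sum((h - H2) ** 2 / (h + H2 + 1e-12), axis=1)
        K[i] = np.exp(-gamma * d)
    return K
```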

1,863 citations


Book ChapterDOI
07 May 2006
TL;DR: A detector for standing and moving people in videos with possibly moving cameras and backgrounds is developed, testing several different motion coding schemes and showing empirically that oriented histograms of differential optical flow give the best overall performance.
Abstract: Detecting humans in films and videos is a challenging problem owing to the motion of the subjects, the camera and the background and to variations in pose, appearance, clothing, illumination and background clutter. We develop a detector for standing and moving people in videos with possibly moving cameras and backgrounds, testing several different motion coding schemes and showing empirically that oriented histograms of differential optical flow give the best overall performance. These motion-based descriptors are combined with our Histogram of Oriented Gradient appearance descriptors. The resulting detector is tested on several databases including a challenging test set taken from feature films and containing wide ranges of pose, motion and background variations, including moving cameras and backgrounds. We validate our results on two challenging test sets containing more than 4400 human examples. The combined detector reduces the false alarm rate by a factor of 10 relative to the best appearance-based detector, for example giving false alarm rates of 1 per 20,000 windows tested at 8% miss rate on our Test Set 1.
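
A loose sketch of the kind of motion coding evaluated here: histogramming the orientations of spatial derivatives of the optical flow, weighted by magnitude. The cell/block layout and the exact differential scheme of the paper are simplified assumptions:

```python
import numpy as np

def flow_orientation_histogram(flow, bins=9):
    # flow: (H, W, 2) optical flow field
    # spatial derivatives of the flow emphasize motion boundaries
    dfx = np.diff(flow[..., 0], axis=1)[:-1]    # crop to a common shape
    dfy = np.diff(flow[..., 1], axis=0)[:, :-1]
    ang = np.arctan2(dfy, dfx) % np.pi          # orientation, sign-invariant
    mag = np.hypot(dfx, dfy)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)
```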

1,812 citations


Journal ArticleDOI
TL;DR: This paper proposes to endow the tensor space with an affine-invariant Riemannian metric and demonstrates that it leads to strong theoretical properties: the cone of positive definite symmetric matrices is replaced by a regular and complete manifold without boundaries, the geodesic between two tensors and the mean of a set of tensors are uniquely defined.
Abstract: Tensors are nowadays a common source of geometric information. In this paper, we propose to endow the tensor space with an affine-invariant Riemannian metric. We demonstrate that it leads to strong theoretical properties: the cone of positive definite symmetric matrices is replaced by a regular and complete manifold without boundaries (null eigenvalues are at infinity), the geodesic between two tensors and the mean of a set of tensors are uniquely defined, etc. We have previously shown that the Riemannian metric provides a powerful framework for generalizing statistics to manifolds. In this paper, we show that it is also possible to generalize to tensor fields many important geometric data processing algorithms such as interpolation, filtering, diffusion and restoration of missing data. For instance, most interpolation and Gaussian filtering schemes can be tackled efficiently through a weighted mean computation. Linear and anisotropic diffusion schemes can be adapted to our Riemannian framework, through partial differential evolution equations, provided that the metric of the tensor space is taken into account. For that purpose, we provide intrinsic numerical schemes to compute the gradient and Laplace-Beltrami operators. Finally, to enforce the fidelity to the data (either sparsely distributed tensors or complete tensor fields) we propose least-squares criteria based on our invariant Riemannian distance which are particularly simple and efficient to solve.
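
A minimal numpy sketch of the affine-invariant distance this framework builds on, d(S1, S2) = ||log(S1^{-1/2} S2 S1^{-1/2})||_F, computed via eigendecompositions of symmetric matrices:

```python
import numpy as np

def _eig_fun(S, f):
    # apply a scalar function to the eigenvalues of a symmetric matrix
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def affine_invariant_dist(S1, S2):
    s = _eig_fun(S1, lambda w: 1.0 / np.sqrt(w))   # S1^{-1/2}
    m = s @ S2 @ s                                 # whitened tensor
    return np.linalg.norm(_eig_fun(m, np.log), 'fro')
```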

1,588 citations


Book ChapterDOI
29 May 2006
TL;DR: A new framework is presented that combines tree search with Monte-Carlo evaluation without separating a min-max phase from a Monte-Carlo phase; it provides fine-grained control of the tree growth, at the level of individual simulations, and allows efficient selectivity.
Abstract: A Monte-Carlo evaluation consists in estimating a position by averaging the outcome of several random continuations. The method can serve as an evaluation function at the leaves of a min-max tree. This paper presents a new framework to combine tree search with Monte-Carlo evaluation, that does not separate between a min-max phase and a Monte-Carlo phase. Instead of backing-up the min-max value close to the root, and the average value at some depth, a more general backup operator is defined that progressively changes from averaging to min-max as the number of simulations grows. This approach provides fine-grained control of the tree growth, at the level of individual simulations, and allows efficient selectivity. The resulting algorithm was implemented in a 9 × 9 Go-playing program, Crazy Stone, that won the 10th KGS computer-Go tournament.
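
An illustrative toy version of the idea: a backup operator whose value drifts from the average toward the max as simulations accumulate. The actual operator used in Crazy Stone is more refined; everything below is an assumption for illustration:

```python
def backed_up_value(children, n, switch=64):
    # children: list of (value, visits) pairs for the child nodes
    total = sum(v * c for v, c in children)
    visits = sum(c for _, c in children)
    mean = total / max(visits, 1)
    best = max(v for v, _ in children)
    w = min(1.0, n / switch)         # weight shifts toward max with n
    return (1 - w) * mean + w * best
```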

1,273 citations


Journal ArticleDOI
TL;DR: A new family of Riemannian metrics called Log-Euclidean is proposed, based on a novel vector space structure for tensors; Riemannian computations can be converted into Euclidean ones once tensors have been transformed into their matrix logarithms.
Abstract: Diffusion tensor imaging (DT-MRI or DTI) is an emerging imaging modality whose importance has been growing considerably. However, the processing of this type of data (i.e., symmetric positive-definite matrices), called "tensors" here, has proved difficult in recent years. Usual Euclidean operations on matrices suffer from many defects on tensors, which have led to the use of many ad hoc methods. Recently, affine-invariant Riemannian metrics have been proposed as a rigorous and general framework in which these defects are corrected. These metrics have excellent theoretical properties and provide powerful processing tools, but also lead in practice to complex and slow algorithms. To remedy this limitation, a new family of Riemannian metrics called Log-Euclidean is proposed in this article. They also have excellent theoretical properties and yield similar results in practice, but with much simpler and faster computations. This new approach is based on a novel vector space structure for tensors. In this framework, Riemannian computations can be converted into Euclidean ones once tensors have been transformed into their matrix logarithms. Theoretical aspects are presented and the Euclidean, affine-invariant, and Log-Euclidean frameworks are compared experimentally. The comparison is carried out on interpolation and regularization tasks on synthetic and clinical 3D DTI data.
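
A minimal sketch of the Log-Euclidean recipe: process matrix logarithms as if they were ordinary vectors, then map back, e.g. for the Log-Euclidean mean of a set of SPD tensors:

```python
import numpy as np

def _eig_fun(S, f):
    # apply a scalar function to the eigenvalues of a symmetric matrix
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def log_euclidean_mean(tensors):
    # average the matrix logarithms, then take the matrix exponential
    logs = [_eig_fun(S, np.log) for S in tensors]
    return _eig_fun(np.mean(logs, axis=0), np.exp)
```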

1,137 citations


Book ChapterDOI
07 May 2006
TL;DR: In this article, the authors show experimentally that for a representative selection of commonly used test databases and for moderate to large numbers of samples, random sampling gives equal or better classifiers than the sophisticated multiscale interest operators that are in common use.
Abstract: Bag-of-features representations have recently become popular for content based image classification owing to their simplicity and good performance. They evolved from texton methods in texture analysis. The basic idea is to treat images as loose collections of independent patches, sampling a representative set of patches from the image, evaluating a visual descriptor vector for each patch independently, and using the resulting distribution of samples in descriptor space as a characterization of the image. The four main implementation choices are thus how to sample patches, how to describe them, how to characterize the resulting distributions and how to classify images based on the result. We concentrate on the first issue, showing experimentally that for a representative selection of commonly used test databases and for moderate to large numbers of samples, random sampling gives equal or better classifiers than the sophisticated multiscale interest operators that are in common use. Although interest operators work well for small numbers of samples, the single most important factor governing performance is the number of patches sampled from the test image, and ultimately interest operators cannot provide enough patches to compete. We also study the influence of other factors including codebook size and creation method, histogram normalization method and minimum scale for feature extraction.
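
The sampling strategy the paper advocates is easy to sketch; a hypothetical random patch sampler (fixed scale for brevity, whereas the paper also samples over scales):

```python
import numpy as np

def sample_random_patches(img, n=1000, size=16, rng=None):
    # draw n random size x size patches; assumes img is larger than size
    rng = rng or np.random.default_rng(0)
    H, W = img.shape[:2]
    ys = rng.integers(0, H - size, n)
    xs = rng.integers(0, W - size, n)
    return [img[y:y + size, x:x + size] for y, x in zip(ys, xs)]
```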

1,099 citations


Journal ArticleDOI
TL;DR: Results indicate that this MHV (motion history volume) representation can be used to learn and recognize basic human action classes, independently of gender, body size and viewpoint.

941 citations


Journal ArticleDOI
TL;DR: A learning-based method for recovering 3D human body pose from single images and monocular image sequences, embedded in a novel regressive tracking framework, using dynamics from the previous state estimate together with a learned regression value to disambiguate the pose.
Abstract: We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We evaluate several different regression methods: ridge regression, relevance vector machine (RVM) regression, and support vector machine (SVM) regression over both linear and kernel bases. The RVMs provide much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. The loss of depth and limb labeling information often makes the recovery of 3D pose from single silhouettes ambiguous. To handle this, the method is embedded in a novel regressive tracking framework, using dynamics from the previous state estimate together with a learned regression value to disambiguate the pose. We show that the resulting system tracks long sequences stably. For realism and good generalization over a wide range of viewpoints, we train the regressors on images resynthesized from real human motion capture data. The method is demonstrated for several representations of full body pose, both quantitatively on independent but similar test data and qualitatively on real image sequences. Mean angular errors of 4-6° are obtained for a variety of walking motions.
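
A minimal stand-in for the regression step, assuming `X` holds histogram-of-shape-contexts descriptors (one silhouette per row) and `Y` the target pose vectors; plain ridge regression replaces the paper's RVM/SVM variants:

```python
import numpy as np

def fit_pose_regressor(X, Y, lam=1e-2):
    # closed-form ridge regression: W = (X^T X + lam I)^-1 X^T Y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# usage: W = fit_pose_regressor(X, Y); pose = x_new @ W
```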

855 citations


Journal ArticleDOI
TL;DR: This paper provides a new proof of the characterization of Riemannian centers of mass and an original gradient descent algorithm to efficiently compute them and develops the notions of mean value and covariance matrix of a random element, normal law, Mahalanobis distance and χ2 law.
Abstract: In medical image analysis and high level computer vision, there is an intensive use of geometric features like orientations, lines, and geometric transformations ranging from simple ones (orientations, lines, rigid body or affine transformations, etc.) to very complex ones like curves, surfaces, or general diffeomorphic transformations. The measurement of such geometric primitives is generally noisy in real applications and we need to use statistics either to reduce the uncertainty (estimation), to compare observations, or to test hypotheses. Unfortunately, even simple geometric primitives often belong to manifolds that are not vector spaces. In previous works [1, 2], we investigated invariance requirements to build some statistical tools on transformation groups and homogeneous manifolds that avoid paradoxes. In this paper, we consider finite dimensional manifolds with a Riemannian metric as the basic structure. Based on this metric, we develop the notions of mean value and covariance matrix of a random element, normal law, Mahalanobis distance and χ2 law. We provide a new proof of the characterization of Riemannian centers of mass and an original gradient descent algorithm to efficiently compute them. The notion of Normal law we propose is based on the maximization of the entropy knowing the mean and covariance of the distribution. The resulting family of pdfs spans the whole range from uniform (on compact manifolds) to the point mass distribution. Moreover, we were able to provide tractable approximations (with their limits) for small variances which show that we can effectively implement and work with these definitions.
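
A concrete sketch of the gradient descent for the Riemannian center of mass, instantiated on the unit sphere where the exp/log maps have closed forms; the paper's algorithm applies on general manifolds:

```python
import numpy as np

def sphere_log(p, q):
    # log map on the unit sphere: tangent vector at p pointing to q
    d = q - np.dot(p, q) * p
    nd = np.linalg.norm(d)
    theta = np.arccos(np.clip(np.dot(p, q), -1, 1))
    return np.zeros_like(p) if nd < 1e-12 else theta * d / nd

def sphere_exp(p, v):
    # exp map: shoot from p along tangent vector v
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

def karcher_mean(points, iters=50, step=1.0):
    # repeatedly shoot along the mean of the log maps until convergence
    m = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        g = np.mean([sphere_log(m, q) for q in points], axis=0)
        m = sphere_exp(m, step * g)
    return m
```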

Journal ArticleDOI
TL;DR: An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed and it can be implemented in a decentralized way provided some local geographic information is available to the mobiles.
Abstract: An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed. This access scheme is designed for the multihop context, where it is important to find a compromise between the spatial density of communications and the range of each transmission. More precisely, the analysis aims at optimizing the product of the number of simultaneously successful transmissions per unit of space (spatial reuse) by the average range of each transmission. The optimization is obtained via an averaging over all Poisson configurations for the location of interfering mobiles, where an exact evaluation of signal over noise ratio is possible. The main mathematical tools stem from stochastic geometry and are spatial versions of the so-called additive and max shot noise processes. The resulting medium access control (MAC) protocol exhibits some interesting properties. First, it can be implemented in a decentralized way provided some local geographic information is available to the mobiles. In addition, its transport capacity is proportional to the square root of the density of mobiles, which matches the upper bound of Gupta and Kumar. Finally, this protocol is self-adapting to the node density and it does not require prior knowledge of this density.

Proceedings ArticleDOI
11 Jan 2006
TL;DR: This paper reports on the development and formal certification of a compiler from Cminor (a C-like imperative language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness.
Abstract: This paper reports on the development and formal certification (proof of semantic preservation) of a compiler from Cminor (a C-like imperative language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a certified compiler is useful in the context of formal methods applied to the certification of critical software: the certification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.

Journal ArticleDOI
TL;DR: The paper describes JULIA, a Java implementation of the FRACTAL model, a small but efficient runtime framework, which relies on a combination of interceptors and mixins for the programming of reflective features of components.
Abstract: This paper presents FRACTAL, a hierarchical and reflective component model with sharing. Components in this model can be endowed with arbitrary reflective capabilities, from plain black-box objects to components that allow a fine-grained manipulation of their internal structure. The paper describes JULIA, a Java implementation of the model, a small but efficient runtime framework, which relies on a combination of interceptors and mixins for the programming of reflective features of components. The paper presents a qualitative and quantitative evaluation of this implementation, showing that component-based programming in FRACTAL can be made very efficient. Copyright © 2006 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This survey summarizes different modeling and solution concepts of networking games, as well as a number of different applications in telecommunications that make use of, or can make use of, networking games.

Journal ArticleDOI
TL;DR: In this paper, the authors revisited the concepts of Jacobian matrix, manipulability and condition number for parallel robots as accuracy indices in view of optimal design and showed that their real significance is not always well understood.
Abstract: Although the concepts of Jacobian matrix, manipulability, and condition number have existed since the very early beginning of robotics, their real significance is not always well understood. In this paper we revisit these concepts for parallel robots as accuracy indices in view of optimal design. We first show that the usual Jacobian matrix derived from the input-output velocity equations may not be sufficient to analyze the positioning errors of the platform. We then examine the concept of manipulability and show that its classical interpretation is erroneous. We then consider various common local dexterity indices, most of which are based on the condition number of the Jacobian matrix. It is emphasized that even for a given robot in a particular pose there are a variety of condition numbers and that their values are coherent neither among themselves nor with what we may expect from an accuracy index. Global conditioning indices are then examined. Apart from being based on local accuracy indices that are themselves questionable, there is a computational problem in their calculation that is neglected most of the time. Finally, we examine what other indices may be used for optimal design and show that their calculation is most challenging.
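
For a single pose, the local index most of these measures reduce to is easy to compute; a hedged sketch, noting (as the paper stresses) that mixed translation/rotation units must be homogenized first, which is itself a debatable choice:

```python
import numpy as np

def dexterity_index(J):
    # inverse 2-norm condition number of the Jacobian J at one pose:
    # 1 means isotropic, values near 0 signal a nearby singularity.
    # J is assumed already homogenized (e.g., rotational columns scaled
    # by a characteristic length -- an assumption, not a rule).
    return 1.0 / np.linalg.cond(J, 2)
```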

Proceedings Article
04 Dec 2006
TL;DR: This work introduces Extremely Randomized Clustering Forests - ensembles of randomly created clustering trees - and shows that these provide more accurate results, much faster training and testing and good resistance to background clutter in several state-of-the-art image classification tasks.
Abstract: Some of the most effective recent methods for content-based image classification work by extracting dense or sparse local image descriptors, quantizing them according to a coding rule such as k-means vector quantization, accumulating histograms of the resulting "visual word" codes over the image, and classifying these with a conventional classifier such as an SVM. Large numbers of descriptors and large codebooks are needed for good results and this becomes slow using k-means. We introduce Extremely Randomized Clustering Forests - ensembles of randomly created clustering trees - and show that these provide more accurate results, much faster training and testing and good resistance to background clutter in several state-of-the-art image classification tasks.
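
A stripped-down sketch of one such clustering tree: fully random (dimension, threshold) splits, with leaves serving as visual-word indices. The paper's trees additionally score candidate splits using label information; that refinement is omitted here:

```python
import numpy as np

class RandomClusteringTree:
    # split descriptors on random (dimension, threshold) tests;
    # each leaf index acts as a visual word
    def __init__(self, X, depth=8, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.nodes = {}
        self._grow(X, node=0, depth=depth)

    def _grow(self, X, node, depth):
        if depth == 0 or len(X) < 2:
            self.nodes[node] = None          # leaf
            return
        dim = self.rng.integers(X.shape[1])
        thr = self.rng.uniform(X[:, dim].min(), X[:, dim].max())
        self.nodes[node] = (dim, thr)
        self._grow(X[X[:, dim] < thr], 2 * node + 1, depth - 1)
        self._grow(X[X[:, dim] >= thr], 2 * node + 2, depth - 1)

    def leaf(self, x):
        node = 0
        while self.nodes.get(node) is not None:
            dim, thr = self.nodes[node]
            node = 2 * node + 1 if x[dim] < thr else 2 * node + 2
        return node                          # visual-word index
```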

Proceedings ArticleDOI
01 Dec 2006
TL;DR: This work focuses on the problem of compensating strong perturbations of the dynamics of the robot and proposes a new linear model predictive control scheme which is an improvement of the original ZMP preview control scheme.
Abstract: A humanoid walking robot is a highly nonlinear dynamical system that relies strongly on contact forces between its feet and the ground in order to realize stable motions, but these contact forces are unfortunately severely limited. Model predictive control, also known as receding horizon control, is a general control scheme specifically designed to deal with such constrained dynamical systems, with the potential ability to react efficiently to a wide range of situations. Apart from the question of computation time, which needs to be handled carefully (these schemes can be highly computation-intensive), the question of which optimal control problems should be solved online to produce the desired walking movements is still unanswered. A key idea for answering this question can be found in the ZMP preview control scheme. After presenting this scheme from a point of view slightly different from the original one, we focus on the problem of compensating for strong perturbations of the robot's dynamics and propose a new linear model predictive control scheme that improves on the original ZMP preview control scheme.
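
A hedged sketch of the linear MPC building blocks, assuming the usual cart-table/linear-inverted-pendulum model: the center of mass is a triple integrator driven by jerk and the ZMP is a linear output; constraints are dropped so the horizon problem reduces to least squares:

```python
import numpy as np

def lmpc_jerk(x0, zref, T=0.005, h=0.8, g=9.81, alpha=1e-6):
    # x0: [com position, velocity, acceleration]; zref: ZMP reference
    N = len(zref)
    A = np.array([[1, T, T * T / 2], [0, 1, T], [0, 0, 1]])
    B = np.array([T ** 3 / 6, T * T / 2, T])
    C = np.array([1, 0, -h / g])         # ZMP = c - (h/g) * c_ddot
    Px = np.zeros((N, 3)); Pu = np.zeros((N, N))
    Ak = np.eye(3)
    for i in range(N):
        Ak = A @ Ak                      # A^(i+1)
        Px[i] = C @ Ak
        for j in range(i + 1):
            Pu[i, j] = C @ np.linalg.matrix_power(A, i - j) @ B
    # minimize ||Px x0 + Pu U - zref||^2 + alpha ||U||^2
    U = np.linalg.solve(Pu.T @ Pu + alpha * np.eye(N),
                        Pu.T @ (zref - Px @ x0))
    return U    # jerk sequence; apply U[0], then re-solve (receding horizon)
```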

Journal ArticleDOI
01 Nov 2006
TL;DR: This paper presents Grid'5000, a 5000 CPU nation-wide infrastructure for research in Grid computing, designed to provide a scientific tool for computer scientists similar to the large-scale instruments used by physicists, astronomers, and biologists.
Abstract: Large scale distributed systems such as Grids are difficult to study from theoretical models and simulators only. Most Grids deployed at large scale are production platforms that are inappropriate research tools because of their limited reconfiguration, control and monitoring capabilities. In this paper, we present Grid'5000, a 5000 CPU nation-wide infrastructure for research in Grid computing. Grid'5000 is designed to provide a scientific tool for computer scientists similar to the large-scale instruments used by physicists, astronomers, and biologists. We describe the motivations, design considerations, architecture, control, and monitoring infrastructure of this experimental platform. We present configuration examples and performance results for the reconfiguration subsystem.

Journal ArticleDOI
TL;DR: In this paper, nonlinear pose estimation is formulated by means of a virtual visual servoing approach and has been validated on several complex image sequences including outdoor environments.
Abstract: Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and little latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. In this paper, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.

Book ChapterDOI
07 May 2006
TL;DR: The results show that color descriptors remain reliable under photometric and geometrical changes, and with decreasing image quality, and for all experiments a combination of color and shape outperforms a pure shape-based approach.
Abstract: Although color is commonly experienced as an indispensable quality in describing the world around us, state-of-the art local feature-based representations are mostly based on shape description, and ignore color information. The description of color is hampered by the large amount of variations which causes the measured color values to vary significantly. In this paper we aim to extend the description of local features with color information. To accomplish a wide applicability of the color descriptor, it should be robust to: (1) photometric changes commonly encountered in the real world, and (2) varying image quality, from high-quality images to snapshot photo quality and compressed internet images. Based on these requirements we derive a set of color descriptors. The proposed descriptors are compared by extensive testing on multiple application areas, namely matching, retrieval and classification, and on a wide variety of image qualities. The results show that color descriptors remain reliable under photometric and geometrical changes, and with decreasing image quality. For all experiments a combination of color and shape outperforms a pure shape-based approach.

Journal ArticleDOI
TL;DR: A novel adaptive and patch-based approach is proposed for image denoising and representation based on a pointwise selection of small image patches of fixed size in the variable neighborhood of each pixel to associate with each pixel the weighted sum of data points within an adaptive neighborhood.
Abstract: A novel adaptive and patch-based approach is proposed for image denoising and representation. The method is based on a pointwise selection of small image patches of fixed size in the variable neighborhood of each pixel. Our contribution is to associate with each pixel the weighted sum of data points within an adaptive neighborhood, in a manner that balances the accuracy of approximation and the stochastic error at each spatial position. This method is general and can be applied under the assumption that there exist repetitive patterns in a local neighborhood of a point. By introducing spatial adaptivity, we extend the work described earlier by Buades et al., which can be considered an extension of bilateral filtering to image patches. Finally, we propose a nearly parameter-free algorithm for image denoising. The method is applied to both artificially corrupted (white Gaussian noise) and real images, and its performance is very close to, and in some cases even surpasses, that of the already published denoising methods.
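
A minimal sketch of the patch-based weighting that underlies the method, for one interior pixel; the paper's actual contribution, the pointwise adaptation of the neighborhood size, is omitted:

```python
import numpy as np

def nl_means_pixel(img, y, x, patch=3, window=10, h=0.1):
    # denoise one interior pixel: weighted mean of neighbors whose
    # surrounding patches resemble the patch around (y, x)
    r = patch // 2
    P = img[y - r:y + r + 1, x - r:x + r + 1]
    num = den = 0.0
    for j in range(y - window, y + window + 1):
        for i in range(x - window, x + window + 1):
            if j - r < 0 or i - r < 0:
                continue                  # neighbor patch off the image
            Q = img[j - r:j + r + 1, i - r:i + r + 1]
            if Q.shape != P.shape:
                continue                  # clipped at the far border
            w = np.exp(-np.sum((P - Q) ** 2) / (h * h))
            num += w * img[j, i]
            den += w
    return num / den
```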

Journal ArticleDOI
TL;DR: A novel representation for three-dimensional objects in terms of local affine-invariant descriptors of their images and the spatial relationships between the corresponding surface patches is introduced, allowing the acquisition of true 3D affine and Euclidean models from multiple unregistered images, as well as their recognition in photographs taken from arbitrary viewpoints.
Abstract: This article introduces a novel representation for three-dimensional (3D) objects in terms of local affine-invariant descriptors of their images and the spatial relationships between the corresponding surface patches. Geometric constraints associated with different views of the same patches under affine projection are combined with a normalized representation of their appearance to guide matching and reconstruction, allowing the acquisition of true 3D affine and Euclidean models from multiple unregistered images, as well as their recognition in photographs taken from arbitrary viewpoints. The proposed approach does not require a separate segmentation stage, and it is applicable to highly cluttered scenes. Modeling and recognition results are presented.

Book ChapterDOI
01 Oct 2006
TL;DR: This article focuses on the computation of statistics of invertible geometrical deformations (i.e., diffeomorphisms), based on the generalization to this type of data of the notion of principal logarithm, which is a simple 3D vector field and well-defined for diffeomorphisms close enough to the identity.
Abstract: In this article, we focus on the computation of statistics of invertible geometrical deformations (i.e., diffeomorphisms), based on the generalization to this type of data of the notion of principal logarithm. Remarkably, this logarithm is a simple 3D vector field, and is well-defined for diffeomorphisms close enough to the identity. This makes it possible to perform vectorial statistics on diffeomorphisms, while preserving the invertibility constraint, contrary to Euclidean statistics on displacement fields. We also present here two efficient algorithms to compute logarithms of diffeomorphisms and exponentials of vector fields, whose accuracy is studied on synthetic data. Finally, we apply these tools to compute the mean of a set of diffeomorphisms, in the context of a registration experiment between an atlas and a database of 9 T1 MR images of the human brain.
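
A 2-D toy sketch of the exponential of a stationary velocity field by scaling and squaring, a standard way to implement it (the paper works in 3-D): scale the field down by 2^n, add the identity, and self-compose n times:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(phi, psi):
    # (phi o psi)(x): sample the map phi at the coordinates given by psi
    return np.stack([map_coordinates(phi[k], psi, order=1, mode='nearest')
                     for k in range(2)])

def exp_field(v, n=6):
    # v: (2, H, W) stationary velocity field; returns the coordinate
    # map of exp(v) after n squaring (self-composition) steps
    grid = np.mgrid[0:v.shape[1], 0:v.shape[2]].astype(float)
    phi = grid + v / 2.0 ** n            # near-identity initial map
    for _ in range(n):
        phi = compose(phi, phi)
    return phi
```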

Proceedings ArticleDOI
14 Jun 2006
TL;DR: This paper studies a specific type of hierarchical function basis, defined by the eigenfunctions of the Laplace-Beltrami operator, explains in practice how to compute an approximation of the eigenfunctions of a differential operator, and shows possible applications in geometry processing.
Abstract: One of the challenges in geometry processing is to automatically reconstruct a higher-level representation from raw geometric data. For instance, computing a parameterization of an object helps attaching information to it and converting between various representations. More generally, this family of problems may be thought of in terms of constructing structured function bases attached to surfaces. In this paper, we study a specific type of hierarchical function bases, defined by the eigenfunctions of the Laplace-Beltrami operator. When applied to a sphere, this function basis corresponds to the classical spherical harmonics. On more general objects, this defines a function basis well adapted to the geometry and the topology of the object. Based on physical analogies (vibration modes), we first give an intuitive view before explaining the underlying theory. We then explain in practice how to compute an approximation of the eigenfunctions of a differential operator, and show possible applications in geometry processing.
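
A hedged sketch that substitutes the combinatorial graph Laplacian of the mesh for the cotangent-weighted Laplace-Beltrami discretization; the smallest eigenvectors play the role of the hierarchical function basis:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def laplacian_basis(n_verts, edges, k=10):
    # edges: list of (i, j) vertex index pairs from the mesh
    i, j = np.asarray(edges).T
    W = sp.coo_matrix((np.ones(len(edges)), (i, j)),
                      shape=(n_verts, n_verts))
    W = W + W.T                              # symmetrize adjacency
    L = sp.diags(np.ravel(W.sum(axis=1))) - W
    # shift-invert around a small negative sigma finds the smallest modes
    vals, vecs = eigsh(L.tocsc(), k=k, sigma=-1e-6)
    return vals, vecs                        # columns: basis functions
```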

Journal ArticleDOI
TL;DR: This work proposes to combine the Richardson-Lucy algorithm with a regularization constraint based on Total Variation, which suppresses unstable oscillations while preserving object edges, and shows that this constraint improves the deconvolution results as compared with the unregularized Richardson-Lucy algorithm, both visually and quantitatively.
Abstract: Confocal laser scanning microscopy is a powerful and popular technique for 3D imaging of biological specimens. Although confocal microscopy images are much sharper than standard epifluorescence ones, they are still degraded by residual out-of-focus light and by Poisson noise due to photon-limited detection. Several deconvolution methods have been proposed to reduce these degradations, including the Richardson-Lucy iterative algorithm, which computes a maximum-likelihood estimate adapted to Poisson statistics. As this algorithm tends to amplify noise, regularization constraints based on some prior knowledge on the data have to be applied to stabilize the solution. Here, we propose to combine the Richardson-Lucy algorithm with a regularization constraint based on Total Variation, which suppresses unstable oscillations while preserving object edges. We show on simulated and real images that this constraint improves the deconvolution results as compared with the unregularized Richardson-Lucy algorithm, both visually and quantitatively.
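
A compact sketch of the regularized iteration, following the usual form of TV-regularized Richardson-Lucy updates; step sizes, boundary handling, and the exact discretization are assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def rl_tv(y, psf, lam=0.002, iters=50, eps=1e-8):
    # y: observed (blurred, Poisson-noisy) image as a float array
    psf_m = psf[::-1, ::-1]              # mirrored PSF acts as the adjoint
    u = np.full_like(y, y.mean())
    for _ in range(iters):
        ratio = y / (convolve(u, psf) + eps)
        rl = convolve(ratio, psf_m)      # standard RL multiplicative factor
        gy, gx = np.gradient(u)
        norm = np.sqrt(gx ** 2 + gy ** 2) + eps
        div = np.gradient(gy / norm)[0] + np.gradient(gx / norm)[1]
        u = u * rl / (1.0 - lam * div)   # TV term damps the update
    return u
```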

Posted Content
TL;DR: In this article, the authors use results from real experiments to advocate that the replacement of the rarest first and choke algorithms cannot be justified in the context of peer-to-peer file replication in the Internet.
Abstract: The performance of peer-to-peer file replication comes from its piece and peer selection strategies. Two such strategies have been introduced by the BitTorrent protocol: the rarest first and choke algorithms. Whereas it is commonly admitted that BitTorrent performs well, recent studies have proposed the replacement of the rarest first and choke algorithms in order to improve efficiency and fairness. In this paper, we use results from real experiments to advocate that the replacement of the rarest first and choke algorithms cannot be justified in the context of peer-to-peer file replication in the Internet. We instrumented a BitTorrent client and ran experiments on real torrents with different characteristics. Our experimental evaluation is peer oriented, instead of tracker oriented, which allows us to get detailed information on all exchanged messages and protocol events. We go beyond the mere observation of the good efficiency of both algorithms. We show that the rarest first algorithm guarantees close to ideal diversity of the pieces among peers. In particular, on our experiments, replacing the rarest first algorithm with source or network coding solutions cannot be justified. We also show that the choke algorithm in its latest version fosters reciprocation and is robust to free riders. In particular, the choke algorithm is fair and its replacement with a bit level tit-for-tat solution is not appropriate. Finally, we identify new areas of improvements for efficient peer-to-peer file replication protocols.
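
The rarest first policy itself is simple to state; a hypothetical sketch of the piece-selection step, with `have` the local piece set and `peers_have` the piece sets advertised by connected peers:

```python
import random
from collections import Counter

def rarest_first(have, peers_have):
    # among pieces we miss, pick one with the fewest copies in the peer
    # set, breaking ties at random as BitTorrent does
    counts = Counter(p for pieces in peers_have for p in pieces)
    wanted = [p for p in counts if p not in have]
    if not wanted:
        return None
    rarest = min(counts[p] for p in wanted)
    return random.choice([p for p in wanted if counts[p] == rarest])
```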

Journal ArticleDOI
TL;DR: A new globally smooth parameterization method for triangulated surfaces of arbitrary topology that can construct both quasiconformal and quasi-isometric parameterizations, and is particularly suitable for surface fitting and remeshing.
Abstract: We present a new globally smooth parameterization method for triangulated surfaces of arbitrary topology. Given two orthogonal piecewise linear vector fields defined over the input mesh (typically the estimated principal curvature directions), our method computes two piecewise linear periodic functions, aligned with the input vector fields, by minimizing an objective function. The bivariate function they define is a smooth parameterization almost everywhere on the surface, except in the vicinity of singular vertices, edges, and triangles, where the derivatives of the parameterization vanish. We extract a quadrilateral chart layout from the parameterization function and propose an automatic procedure to detect the singularities, and fix them by splitting and reparameterizing the containing charts. Our method can construct both quasiconformal (angle preserving) and quasi-isometric (angle and area preserving) parameterizations. The more restrictive class of quasi-isometric parameterizations is constructed at the expense of introducing more singularities. The constructed parameterizations can be used for a variety of geometry processing applications. Since we can align the parameterization with the principal curvature directions, our result is particularly suitable for surface fitting and remeshing.

Book ChapterDOI
14 Jun 2006
TL;DR: It is considered in this paper that a DSL (Domain Specific Language) may be defined by a set of models and the notion of metamodel is used to define the source DSL, the target DSL and the transformation DSL itself.
Abstract: We consider in this paper that a DSL (Domain Specific Language) may be defined by a set of models. A typical DSL is the ATLAS Transformation Language (ATL). An ATL program transforms a source model (conforming to a source metamodel) into a target model (conforming to a target metamodel). Being itself a model, the transformation program conforms to the ATL metamodel. The notion of metamodel is thus used to define the source DSL, the target DSL and the transformation DSL itself. As a consequence we can see that agility to define metamodels and precision of these definitions is of paramount importance in any model engineering activity. In order to fullfill the goals of agility and precision in the definition of our metamodels, we have been using a notation called KM3 (Kernel MetaMetaModel). KM3 may itself be considered as a DSL for describing metamodels. This paper presents the rationale for using KM3, some examples of its use and a precise definition of the language.