
Showing papers by "French Institute for Research in Computer Science and Automation published in 2000"


Journal ArticleDOI
TL;DR: A review of progress in the field over the last 20 years, with a discussion of some of the challenges that remain for the years to come.
Abstract: The analysis of medical images has been woven into the fabric of the pattern analysis and machine intelligence (PAMI) community since the earliest days of these Transactions. Initially, the efforts in this area were seen as applying pattern analysis and computer vision techniques to another interesting dataset. However, over the last two to three decades, the unique nature of the problems presented within this area of study have led to the development of a new discipline in its own right. Examples of these include: the types of image information that are acquired, the fully three-dimensional image data, the nonrigid nature of object motion and deformation, and the statistical variation of both the underlying normal and abnormal ground truth. In this paper, we look at progress in the field over the last 20 years and suggest some of the challenges that remain for the years to come.

4,249 citations


Journal ArticleDOI
TL;DR: Two evaluation criteria for interest point detectors, repeatability rate and information content, are introduced, and different detectors are compared using these two criteria.
Abstract: Many different low-level feature detectors exist and it is widely agreed that the evaluation of detectors is important. In this paper we introduce two evaluation criteria for interest points' repeatability rate and information content. Repeatability rate evaluates the geometric stability under different transformations. Information content measures the distinctiveness of features. Different interest point detectors are compared using these two criteria. We determine which detector gives the best results and show that it satisfies the criteria well.
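The repeatability-rate criterion lends itself to a compact sketch: map the points detected in the first image through the known homography relating the two views, and count how many land near a point detected in the second image. The point sets and tolerance below are illustrative stand-ins, not the paper's experimental setup.

```python
import numpy as np

def repeatability(pts1, pts2, H, eps=1.5):
    """Fraction of interest points from image 1 that, once mapped
    through the homography H, fall within eps pixels of some point
    detected in image 2 (a hedged sketch of the repeatability rate)."""
    ones = np.ones((len(pts1), 1))
    mapped = np.hstack([pts1, ones]) @ H.T        # homogeneous transform
    mapped = mapped[:, :2] / mapped[:, 2:3]       # back to pixel coords
    d = np.linalg.norm(mapped[:, None, :] - pts2[None, :, :], axis=2)
    repeated = int((d.min(axis=1) < eps).sum())   # points with a partner
    return repeated / min(len(pts1), len(pts2))
```

With an identity homography and identical detections the rate is 1.0; displacing every detection far away drives it to 0.0.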

1,690 citations


Journal ArticleDOI
TL;DR: The Monge-Kantorovich mass transfer problem is reset in a fluid mechanics framework and numerically solved by an augmented Lagrangian method.
Abstract: Summary. The $L^2$ Monge-Kantorovich mass transfer problem [31] is reset in a fluid mechanics framework and numerically solved by an augmented Lagrangian method.

1,573 citations


Journal ArticleDOI
TL;DR: This paper focuses on the primal version of the new algorithm, an algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints, which applies sequential quadratic programming techniques to a sequence of barrier problems.
Abstract: An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second order derivatives. This framework permits primal and primal-dual steps, but the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented.

1,514 citations


Journal ArticleDOI
TL;DR: A method for assessing mixture models in a cluster analysis setting with the integrated completed likelihood, which appears to be more robust to violation of some of the mixture model assumptions and can select a number of clusters leading to a sensible partitioning of the data.
Abstract: We propose a method for assessing mixture models in a cluster analysis setting based on the integrated completed likelihood. For this purpose, the observed data are assigned to unknown clusters using a maximum a posteriori operator. Then, the integrated completed likelihood (ICL) is approximated using the Bayesian information criterion (BIC). Numerical experiments on simulated and real data show that the resulting ICL criterion performs well both for choosing a mixture model and a relevant number of clusters. In particular, ICL appears to be more robust than BIC to violation of some of the mixture model assumptions and it can select a number of clusters leading to a sensible partitioning of the data.
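The BIC-based approximation described above amounts to adding an entropy penalty on the soft cluster assignments. A hedged sketch, assuming BIC on the -2 log-likelihood scale (lower is better) and a fitted mixture's n x K responsibility matrix:

```python
import numpy as np

def icl_from_bic(bic, resp):
    """Sketch of the ICL criterion in the spirit of this paper:
    take a fitted mixture's BIC and its n x K matrix of posterior
    membership probabilities, and add an entropy penalty that is
    near zero when the clusters are well separated (near-hard
    assignments). Convention assumed: BIC on the -2*log-likelihood
    scale, so lower values are better for both BIC and ICL."""
    resp = np.clip(resp, 1e-12, 1.0)              # guard log(0)
    entropy = -np.sum(resp * np.log(resp))        # assignment uncertainty
    return bic + 2.0 * entropy
```

For well-separated clusters the responsibilities are close to 0/1, the entropy term vanishes, and ICL agrees with BIC; overlapping, ambiguous components are penalized, which is why ICL tends not to overestimate the number of clusters.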

1,418 citations


Journal ArticleDOI
TL;DR: This article proposes alternatives for Bayesian inference with permutation-invariant posteriors, including a clustering device and appropriate loss functions, and shows that exploration of the symmetric modes of mixture posteriors can be enforced using tempered transitions.
Abstract: This article deals with both exploration and interpretation problems related to posterior distributions for mixture models. The specification of mixture posterior distributions means that the presence of k! modes is known immediately. Standard Markov chain Monte Carlo (MCMC) techniques usually have difficulties with well-separated modes such as occur here; the MCMC sampler stays within a neighborhood of a local mode and fails to visit other equally important modes. We show that exploration of these modes can be imposed using tempered transitions. However, if the prior distribution does not distinguish between the different components, then the posterior mixture distribution is symmetric and standard estimators such as posterior means cannot be used. We propose alternatives for Bayesian inference for permutation invariant posteriors, including a clustering device and alternative appropriate loss functions.

640 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a technique for the dynamic estimation of bounds on unmeasured variables (or parameters) of an uncertain dynamical system, which relies on interval observers: from (possibly time varying) intervals on the uncertainty and measurements, they compute guaranteed intervals for the variables.

634 citations


Journal ArticleDOI
TL;DR: A completely automatic method to build stable average anatomical models of the human brain using a set of magnetic resonance (MR) images and results showing convergence toward the centroid of the image set used for the computation of the model are provided.

486 citations


Journal ArticleDOI
TL;DR: This article presents a technique where appearances of objects are represented by the joint statistics of such local neighborhood operators, constituting a new class of appearance-based techniques for computer vision.
Abstract: The appearance of an object is composed of local structure. This local structure can be described and characterized by a vector of local features measured by local operators such as Gaussian derivatives or Gabor filters. This article presents a technique where appearances of objects are represented by the joint statistics of such local neighborhood operators. As such, this represents a new class of appearance based techniques for computer vision. Based on joint statistics, the paper develops techniques for the identification of multiple objects at arbitrary positions and orientations in a cluttered scene. Experiments show that these techniques can identify over 100 objects in the presence of major occlusions. Most remarkably, the techniques have low complexity and therefore run in real-time.

480 citations


Proceedings ArticleDOI
01 Sep 2000
TL;DR: The λ̄μμ̃-calculus is presented, a syntax for λ-calculus + control operators exhibiting symmetries such as program/context and call-by-name/call-by-value, derived from the implicational fragment of Gentzen's sequent calculus LK.
Abstract: We present the λ̄μμ̃-calculus, a syntax for λ-calculus + control operators exhibiting symmetries such as program/context and call-by-name/call-by-value. This calculus is derived from the implicational fragment of Gentzen's sequent calculus LK, a key classical logical system in proof theory. Under the Curry-Howard correspondence between proofs and programs, we can see LK, or more precisely a formulation called LKμμ̃, as a syntax-directed system of simple types for the λ̄μμ̃-calculus. For the λ̄μμ̃-calculus, choosing a call-by-name or call-by-value discipline for reduction amounts to choosing one of the two possible symmetric orientations of a critical pair. Our analysis leads us to revisit the question of what is a natural syntax for call-by-value functional computation. We define a translation of λμ-calculus into the λ̄μμ̃-calculus and two dual translations back to λ-calculus, and we recover known CPS translations by composing these translations.

379 citations


Book ChapterDOI
11 Oct 2000
TL;DR: This approach, besides its simplicity, provides a robust and efficient way to rigidly register images in various situations, and can easily be implemented on a parallel architecture, which opens potentialities for real time applications using a large number of processors.
Abstract: In order to improve the robustness of rigid registration algorithms in various medical imaging problems, we propose in this article a general framework built on block matching strategies. This framework combines two stages in a multi-scale hierarchy. The first stage consists in finding, for each block (or subregion) of the first image, the most similar subregion in the other image, using a similarity criterion which depends on the nature of the images. The second stage consists in finding the global rigid transformation which best explains most of these local correspondences. This is done with a robust procedure which tolerates up to 50% false matches. We show that this approach, besides its simplicity, provides a robust and efficient way to rigidly register images in various situations. This includes, for instance, the alignment of 2D histological sections for the 3D reconstruction of trimmed organs and tissues, the automatic computation of the mid-sagittal plane in multimodal 3D images of the brain, and the multimodal registration of 3D CT and MR images of the brain. A quantitative evaluation of the results is provided for this last example, as well as a comparison with the classical approaches involving the minimization of a global measure of similarity based on Mutual Information or the Correlation Ratio. This shows a significant improvement in robustness, for a comparable final accuracy. Although slightly more expensive in terms of computational requirements, the proposed approach can easily be implemented on a parallel architecture, which opens up the possibility of real-time applications using a large number of processors.
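A hedged, translation-only sketch of the two-stage idea (the actual method handles general rigid transforms in a multi-scale hierarchy, with a similarity criterion chosen per modality): stage 1 matches each block by exhaustive SSD search, and stage 2 pools the local displacements with a per-axis median, which tolerates up to about half the matches being wrong.

```python
import numpy as np

def register_translation(img1, img2, block=16, search=8):
    """Translation-only sketch of the two-stage block matching scheme.
    Stage 1: for each block of img1, find the most similar position
    in img2 by exhaustive sum-of-squared-differences search.
    Stage 2: robustly pool the local displacements with a per-axis
    median, so a large fraction of false matches is tolerated."""
    h, w = img1.shape
    disps = []
    for y in range(0, h - block, block):
        for x in range(0, w - block, block):
            ref = img1[y:y+block, x:x+block]
            best, best_d = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = img2[yy:yy+block, xx:xx+block]
                    ssd = np.sum((ref - cand) ** 2)
                    if best is None or ssd < best:
                        best, best_d = ssd, (dy, dx)
            disps.append(best_d)
    return np.median(np.array(disps), axis=0)   # robust stage-2 estimate
```

Corrupting a minority of the blocks leaves the median estimate unchanged, which is the point of the robust second stage.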

Proceedings ArticleDOI
18 Mar 2000
TL;DR: In this paper, the authors considered whether a passive isometric input device, such as a Spaceball™, used together with visual feedback, could provide the operator with a pseudo-haptic feedback.
Abstract: This paper considers whether a passive isometric input device, such as a Spaceball™, used together with visual feedback, could provide the operator with a pseudo-haptic feedback. For this aim, two psychophysical experiments have been conducted. The first experiment consisted of a compliance discrimination, between two virtual springs hand-operated by means of the Spaceball™. In this experiment, the stiffness (or compliance) JND turned out to be 6%. The second experiment assessed stiffness discrimination between a virtual spring and the equivalent spring in reality. In this case, the stiffness (or compliance) JND was found to be 13.4%. These results are consistent with previous outcomes on manual discrimination of compliance. Consequently, this consistency reveals that the passive apparatus that was used can, to some extent, simulate haptic information. In addition, a final test indicated that the proprioceptive sense of the subjects was blurred by visual feedback. This gave them the illusion of using a nonisometric device.

Journal ArticleDOI
TL;DR: It is shown that the support of frequent non-key patterns can be inferred from frequent key patterns without accessing the database, and PASCAL is among the most efficient algorithms for mining frequent patterns.
Abstract: In this paper, we propose the algorithm PASCAL, which introduces a novel optimization of the well-known algorithm Apriori. This optimization is based on a new strategy called pattern counting inference that relies on the concept of key patterns. We show that the support of frequent non-key patterns can be inferred from frequent key patterns without accessing the database. Experiments comparing PASCAL to the three algorithms Apriori, Close, and Max-Miner show that PASCAL is among the most efficient algorithms for mining frequent patterns.
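The key-pattern idea can be illustrated on a toy transaction database (hypothetical data, chosen only to show the property): a pattern is a key pattern if its support is strictly smaller than that of each of its immediate subsets, and for a non-key pattern the support equals the minimum support over those subsets, so it can be inferred without another database pass.

```python
# Toy illustration of the counting-inference property behind PASCAL.
# The database below is fabricated for the example.
db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]

def supp(pattern):
    """Support of a pattern = number of transactions containing it."""
    return sum(1 for t in db if pattern <= t)

p = {"a", "b"}
subset_supports = [supp(p - {item}) for item in p]
is_key = supp(p) < min(subset_supports)
# Here supp({a,b}) = 3 while supp({a}) = 4 and supp({b}) = 3, so
# {a,b} is a non-key pattern and its support equals
# min(supp({a}), supp({b})) -- no extra database scan needed.
```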

Proceedings ArticleDOI
01 May 2000
TL;DR: An algorithm to reconstruct smooth surfaces of arbitrary topology from unorganised sample points and normals; the method uses natural neighbour interpolation, works in any dimension, and can handle non-uniform samples.
Abstract: We present an algorithm to reconstruct smooth surfaces of arbitrary topology from unorganised sample points and normals. The method uses natural neighbour interpolation, works in any dimension, and can handle non-uniform samples. The reconstructed surface is a smooth manifold passing through all the sample points. This surface is implicitly represented as the zero-set of some pseudo-distance function. It can be meshed so as to satisfy a user-defined error bound. Experimental results are presented for surfaces in R^3.

Journal ArticleDOI
TL;DR: This paper proposes an alternative to the use of Hamilton–Jacobi equations which eliminates this contradiction: in the proposed method the implicit representation always remains a distance function by construction, and the implementation no longer differs from the theory.

Journal ArticleDOI
TL;DR: A supervised classification model based on a variational approach to find an optimal partition composed of homogeneous classes with regular interfaces; the paper shows how the driving forces can be defined through the minimization of a single functional.
Abstract: We present a supervised classification model based on a variational approach. This model is designed to find an optimal partition composed of homogeneous classes with regular interfaces. The originality of the proposed approach concerns the definition of a partition through the use of level sets. Each set of regions and boundaries associated with a class is defined by a unique level set function. We use as many level sets as there are classes, and all these level sets move together thanks to forces which interact in order to obtain an optimal partition. We show how these forces can be defined through the minimization of a single functional. The coupled Partial Differential Equations (PDEs) related to the minimization of the functional are considered through a dynamical scheme. Given an initial interface set (zero level set), the different terms of the PDEs govern the motion of the interfaces such that, at convergence, we obtain an optimal partition as defined above. Each interface is guided by internal forces (regularity of the interface) and external ones (data term, no vacuum, no region overlap). Several experiments were conducted on both synthetic and real images.

Journal ArticleDOI
TL;DR: This paper presents how the 2 1/2 D visual servoing scheme, recently developed, can be used with unknown objects characterized by a set of points, based on the estimation of the camera displacement from two views, given by the current and desired images.
Abstract: Classical visual servoing techniques need strong a priori knowledge of the shape and dimensions of the observed objects. In this paper, we present how the 2 1/2 D visual servoing scheme we have recently developed can be used with unknown objects characterized by a set of points. Our scheme is based on the estimation of the camera displacement from two views, given by the current and desired images. Since vision-based robotics tasks generally need to be performed at video rate, we focus only on linear algorithms. Classical linear methods are based on the computation of the essential matrix. In this paper, we propose a different method, based on the estimation of the homography matrix related to a virtual plane attached to the object. We show that our method provides a more stable estimation when the epipolar geometry degenerates. This is particularly important in visual servoing to obtain a stable control law, especially near the convergence of the system. Finally, experimental results confirm the improvement in the stability, robustness, and behaviour of our scheme with respect to classical methods.

Book ChapterDOI
18 May 2000
TL;DR: Quilt is a query language that unifies concepts from several existing XML query languages and can combine information from diverse data sources into a query result with a new structure of its own.
Abstract: The World Wide Web promises to transform human society by making virtually all types of information instantly available everywhere. Two prerequisites for this promise to be realized are a universal markup language and a universal query language. The power and flexibility of XML make it the leading candidate for a universal markup language. XML provides a way to label information from diverse data sources including structured and semi-structured documents, relational databases, and object repositories. Several XML-based query languages have been proposed, each oriented toward a specific category of information. Quilt is a new proposal that attempts to unify concepts from several of these query languages, resulting in a new language that exploits the full versatility of XML. The name Quilt suggests both the way in which features from several languages were assembled to make a new query language, and the way in which Quilt queries can combine information from diverse data sources into a query result with a new structure of its own.

Book ChapterDOI
14 May 2000
TL;DR: A theoretical analysis of all fast correlation attacks shows that the algorithm with parity-check equations of weight 4 or 5 is usually much more efficient than correlation attacks based on convolutional codes or on turbo codes.
Abstract: This paper describes new techniques for fast correlation attacks, based on Gallager's iterative decoding algorithm using parity-check equations of weight greater than 3. These attacks can be applied to any keystream generator based on LFSRs and do not require that the feedback polynomials involved have low weight. We give a theoretical analysis of all fast correlation attacks, which shows that our algorithm with parity-check equations of weight 4 or 5 is usually much more efficient than correlation attacks based on convolutional codes or on turbo codes. Simulation results confirm the validity of this comparison. In this context, we also point out the major role played by the nonlinearity of the Boolean function used in a combination generator.
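The premise all fast correlation attacks share can be shown in a few lines (this is only the bias being exploited, not the paper's weight-4/5 iterative decoder; the LFSR taps and noise level below are illustrative): the observed keystream agrees with one LFSR's output sequence noticeably more often than half the time, and the decoder amplifies that bias.

```python
import numpy as np

def lfsr(taps, state, n):
    """Fibonacci-style LFSR sketch: output the last state bit, feed
    back the XOR of the tapped positions (taps chosen for illustration;
    here they correspond to a maximal-length 3-bit register)."""
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return np.array(out)

rng = np.random.default_rng(2)
seq = lfsr([0, 2], [1, 0, 0], 1000)                  # clean LFSR output
noise = (rng.random(1000) < 0.25).astype(int)        # flip probability 0.25
noisy = seq ^ noise                                  # "keystream" model
agreement = np.mean(seq == noisy)                    # ~0.75, i.e. biased
```

An attacker observing only `noisy` can still exploit the fact that `agreement` stays well above 1/2; iterative decoding with low-weight parity checks is what turns that statistical bias into the initial LFSR state.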

Journal ArticleDOI
01 Jun 2000
TL;DR: This work proposes an extension to XML query languages that enables keyword search at the granularity of XML elements, that helps novice users formulate queries, and also yields new optimization opportunities for the query processor.
Abstract: Due to the popularity of the XML data format, several query languages for XML have been proposed, specially devised to handle data of which the structure is unknown, loose, or absent. While these languages are rich enough to allow for querying the content and structure of an XML document, a varying or unknown structure can make formulating queries a very difficult task. We propose an extension to XML query languages that enables keyword search at the granularity of XML elements, that helps novice users formulate queries, and also yields new optimization opportunities for the query processor. We present an implementation of this extension on top of a commercial RDBMS; we then discuss implementation choices and performance results.

Journal ArticleDOI
TL;DR: The entropy inequality is proved for the Gaussian-BGK model of Boltzmann equation and new entropic kinetic models for polyatomic gases are introduced which suppress the internal energy variable in the phase space by using two distribution functions.
Abstract: In this paper we prove the entropy inequality for the Gaussian-BGK model of the Boltzmann equation. This model, also called the ellipsoidal statistical model, was introduced in order to fit realistic values of the transport coefficients (Prandtl number, second viscosity) in the Navier-Stokes approximation, which cannot be achieved by the usual relaxation towards isotropic Maxwellians introduced in standard BGK models. Moreover, we introduce new entropic kinetic models for polyatomic gases which suppress the internal energy variable in the phase space by using two distribution functions (one for the particles' mass and one for their internal energy). This reduces the cost of their numerical solution while keeping a kinetic description well adapted to nonequilibrium regions.

Journal ArticleDOI
TL;DR: Zeilberger's fast algorithm for definite hypergeometric summation is extended to non-hypergeometric holonomic sequences; the method generalizes to the differential case and to q-calculus as well.

Journal ArticleDOI
TL;DR: A new method of segmentation, called the scale causal multigrid (SCM) algorithm, has been successfully applied to real sonar images and seems to be well suited to the segmentation of very noisy images.
Abstract: This paper is concerned with hierarchical Markov random field (MRF) models and their application to sonar image segmentation. We present an original hierarchical segmentation procedure devoted to images given by a high-resolution sonar. The sonar image is segmented into two kinds of regions: shadow (corresponding to a lack of acoustic reverberation behind each object lying on the sea-bed) and sea-bottom reverberation. The proposed unsupervised scheme takes into account the variety of the laws in the distribution mixture of a sonar image, and it estimates both the parameters of noise distributions and the parameters of the Markovian prior. For the estimation step, we use an iterative technique which combines a maximum likelihood approach (for noise model parameters) with a least-squares method (for the MRF-based prior). In order to model more precisely the local and global characteristics of image content at different scales, we introduce a hierarchical model involving a pyramidal label field. It combines coarse-to-fine causal interactions with a spatial neighborhood structure. This new method of segmentation, called the scale causal multigrid (SCM) algorithm, has been successfully applied to real sonar images and seems to be well suited to the segmentation of very noisy images. The experiments reported in this paper demonstrate that the discussed method performs better than other hierarchical schemes for sonar image segmentation.

Proceedings ArticleDOI
05 Jan 2000
TL;DR: Two forms of interference are identified in Cardelli and Gordon's Mobile Ambients (MA): plain interferences, which are similar to the interferences one finds in CCS and the π-calculus; and grave interferences, which are more dangerous and may be regarded as programming errors.
Abstract: Two forms of interference are identified in Cardelli and Gordon's Mobile Ambients (MA): plain interferences, which are similar to the interferences one finds in CCS and the π-calculus; and grave interferences, which are more dangerous and may be regarded as programming errors. To control interferences, the MA movement primitives are modified. On the new calculus, the Mobile Safe Ambients (SA), a type system is defined that controls the mobility of ambients and removes all grave interferences. Other advantages of SA are: a useful algebraic theory; programs that are sometimes more robust (they require milder conditions for correctness) and/or simpler. These points are illustrated with several examples.

Book ChapterDOI
09 Sep 2000
TL;DR: The join calculus as mentioned in this paper is a language that models distributed and mobile programming, characterized by an explicit notion of locality, a strict adherence to local synchronization, and a direct embedding of the ML programming language.
Abstract: In these notes, we give an overview of the join calculus, its semantics, and its equational theory. The join calculus is a language that models distributed and mobile programming. It is characterized by an explicit notion of locality, a strict adherence to local synchronization, and a direct embedding of the ML programming language. The join calculus is used as the basis for several distributed languages and implementations, such as JoCaml and functional nets. Local synchronization means that messages always travel to a set destination, and can interact only after they reach that destination; this is required for an efficient implementation. Specifically, the join calculus uses ML's function bindings and pattern-matching on messages to program these synchronizations in a declarative manner. Formally, the language owes much to concurrency theory, which provides a strong basis for stating and proving the properties of asynchronous programs. Because of several remarkable identities, the theory of process equivalences admits simplifications when applied to the join calculus. We prove several of these identities, and argue that equivalences for the join calculus can be rationally organized into a five-tiered hierarchy, with some trade-off between expressiveness and proof techniques. We describe the mobility extensions of the core calculus, which allow the programming of agent creation and migration. We briefly present how the calculus has been extended to model distributed failures on the one hand, and cryptographic protocols on the other.

Journal ArticleDOI
TL;DR: This paper's rewriting techniques provide semantic foundations for Maude's functional sublanguage, where they have been efficiently implemented.

Journal ArticleDOI
TL;DR: This paper approaches the issue of the optimal selection of the norm, namely the $H^1$ norm, used in POD for the compressible Navier–Stokes equations by several numerical tests, and finds that low order modeling of relatively complex flow simulations provides good qualitative results compared with reference computations.
Abstract: Fluid flows are very often governed by the dynamics of a small number of coherent structures, i.e., fluid features which keep their individuality during the evolution of the flow. The purpose of this paper is to study a low order simulation of the Navier–Stokes equations on the basis of the evolution of such coherent structures. One way to extract basis functions which can be interpreted as coherent structures from flow simulations is Proper Orthogonal Decomposition (POD). Then, by means of a Galerkin projection, it is possible to find the system of ODEs which approximates the problem in the finite-dimensional space spanned by the POD basis functions. It is found that low order modeling of relatively complex flow simulations, such as laminar vortex shedding from an airfoil at incidence and turbulent vortex shedding from a square cylinder, provides good qualitative results compared with reference computations. In this respect, it is shown that the accuracy of numerical schemes based on simple Galerkin projection is insufficient and numerical stabilization is needed. To conclude, we approach the issue of the optimal selection of the norm, namely the $H^1$ norm, used in POD for the compressible Navier–Stokes equations by several numerical tests.
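A minimal sketch of the POD step on synthetic snapshot data (the flow field and its two coherent structures below are fabricated for illustration; a Galerkin ROM would then project the governing equations onto the leading modes): stack snapshots as columns, take an SVD, and read off the energy captured by each mode.

```python
import numpy as np

# Synthetic "flow" built from two coherent structures: two spatial
# profiles modulated by independent time coefficients. Columns of S
# are snapshots of the field at successive times.
t = np.linspace(0.0, 2.0 * np.pi, 200)
x = np.linspace(0.0, 1.0, 64)
S = (np.outer(np.sin(2.0 * np.pi * x), np.cos(t))
     + 0.3 * np.outer(np.sin(4.0 * np.pi * x), np.sin(2.0 * t)))

# POD via SVD: the left singular vectors are the POD modes, and the
# squared singular values give the "energy" each mode captures.
U, sv, _ = np.linalg.svd(S, full_matrices=False)
energy = sv**2 / np.sum(sv**2)
```

Since the synthetic field is exactly rank two, the first two modes capture essentially all of the energy; on real simulation data the spectrum decays more slowly, and the truncation rank (and, as the paper argues, the choice of inner product) becomes a modeling decision.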

Journal ArticleDOI
TL;DR: This study permits us to understand the limitations of the current solutions and the modifications required to let TCP cope with a heterogeneous Internet on an end-to-end basis.
Abstract: Transmission media carrying Internet traffic present a wide range of characteristics, some of which, such as transmission errors, long end-to-end delay, and bandwidth asymmetry, may cause a degradation of TCP performance. Many works have studied the performance of TCP over these media, most of which focus on a particular network type. In this work we study TCP performance independent of the type of network by considering the different possible characteristics of the connection path. We present the problems and the different proposed solutions. This study permits us to understand the limitations of the current solutions and the modifications required to let TCP cope with a heterogeneous Internet on an end-to-end basis.

Journal ArticleDOI
TL;DR: In this paper, the performance of the MITC general shell elements is evaluated in the analysis of judiciously selected test problems and the authors conclude that the elements are effective for general engineering applications.