
Showing papers by "French Institute for Research in Computer Science and Automation published in 1995"


Journal ArticleDOI
TL;DR: Describes, from a user's perspective, the common features of the different approaches, the choices that must be made, and the considerations relevant to a successful system-identification application of these techniques.

2,031 citations


Journal ArticleDOI
TL;DR: The methodology presented in this paper is applied to the gain scheduling of a missile autopilot and bypasses most difficulties associated with more classical schemes such as gain-interpolation or gain-scheduling techniques.

1,439 citations


Journal ArticleDOI
TL;DR: Extensions of H∞ synthesis techniques to allow for controller dependence on time-varying but measured parameters are discussed, and simple heuristics are proposed to compute robust time-invariant controllers.
Abstract: An important class of linear time-varying systems consists of plants where the state-space matrices are fixed functions of some time-varying physical parameters θ. Small gain techniques can be applied to such systems to derive robust time-invariant controllers. Yet, this approach is often overly conservative when the parameters θ undergo large variations during system operation. In general, higher performance can be achieved by control laws that incorporate available measurements of θ and therefore "adjust" to the current plant dynamics. This paper discusses extensions of H∞ synthesis techniques to allow for controller dependence on time-varying but measured parameters. When this dependence is linear fractional, the existence of such gain-scheduled H∞ controllers is fully characterized in terms of linear matrix inequalities. The underlying synthesis problem is therefore a convex program for which efficient optimization techniques are available. The formalism and derivation techniques developed here apply to both the continuous- and discrete-time problems. Existence conditions for robust time-invariant controllers are recovered as a special case, and extensions to gain-scheduling in the face of parametric uncertainty are discussed. In particular, simple heuristics are proposed to compute such controllers.
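The LMI characterization referenced above builds on the bounded real lemma. As a hedged illustration (this is the standard continuous-time statement, not the paper's gain-scheduled generalization): the system ẋ = Ax + Bw, z = Cx + Dw has H∞ norm below γ if and only if there exists X ≻ 0 with

```latex
\begin{bmatrix}
A^{\top} X + X A & X B & C^{\top} \\
B^{\top} X & -\gamma I & D^{\top} \\
C & D & -\gamma I
\end{bmatrix} \prec 0 .
```

Feasibility of such matrix inequalities is a convex problem, which is what makes the synthesis computationally tractable.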

1,229 citations


Journal ArticleDOI
TL;DR: Discusses optimization methods for deriving the maximum likelihood estimates, as well as the practical usefulness of these models, and presents an application to stellar data that dramatically illustrates the relevance of allowing clusters to have different volumes.

858 citations


Book
01 Jan 1995
TL;DR: This book discusses methods for solving recurrences, representations of trees and binary trees, and analyzing properties of permutations with CGFs.
Abstract: 1. Analysis of Algorithms. Why Analyze an Algorithm? Computational Complexity. Analysis of Algorithms. Average-Case Analysis. Example: Analysis of Quicksort. Asymptotic Approximations. Distributions. Probabilistic Algorithms. 2. Recurrence Relations. Basic Properties. First-Order Recurrences. Nonlinear First-Order Recurrences. Higher-Order Recurrences. Methods for Solving Recurrences. Binary Divide-and-Conquer Recurrences and Binary Numbers. General Divide-and-Conquer Recurrences. 3. Generating Functions. Ordinary Generating Functions. Exponential Generating Functions. Generating Function Solution of Recurrences. Expanding Generating Functions. Transformations with Generating Functions. Functional Equations on Generating Functions. Solving the Quicksort Median-of-Three Recurrence with OGFs. Counting with Generating Functions. The Symbolic Method. Lagrange Inversion. Probability Generating Functions. Bivariate Generating Functions. Special Functions. 4. Asymptotic Approximations. Notation for Asymptotic Approximations. Asymptotic Expansions. Manipulating Asymptotic Expansions. Asymptotic Approximations of Finite Sums. Euler-Maclaurin Summation. Bivariate Asymptotics. Laplace Method. "Normal" Examples from the Analysis of Algorithms. "Poisson" Examples from the Analysis of Algorithms. Generating Function Asymptotics. 5. Trees. Binary Trees. Trees and Forests. Properties of Trees. Tree Algorithms. Binary Search Trees. Average Path Length in Catalan Trees. Path Length in Binary Search Trees. Additive Parameters of Random Trees. Height. Summary of Average-Case Results on Properties of Trees. Representations of Trees and Binary Trees. Unordered Trees. Labelled Trees. Other Types of Trees. 6. Permutations. Basic Properties of Permutations. Algorithms on Permutations. Representations of Permutations. Enumeration Problems. Analyzing Properties of Permutations with CGFs. Inversions and Insertion Sorts. Left-to-Right Minima and Selection Sort. Cycles and In Situ Permutation. Extremal Parameters. 7. Strings and Tries. String Searching. Combinatorial Properties of Bitstrings. Regular Expressions. Finite-State Automata and Knuth-Morris-Pratt Algorithm. Context-Free Grammars. Tries. Trie Algorithms. Combinatorial Properties of Tries. Larger Alphabets. 8. Words and Maps. Hashing with Separate Chaining. Basic Properties of Words. Birthday Paradox and Coupon Collector Problem. Occupancy Restrictions and Extremal Parameters. Occupancy Distributions. Open Addressing Hashing. Maps. Integer Factorization and Maps.

752 citations


Journal ArticleDOI
TL;DR: The survey indicates that the essential points in noisy speech recognition consist of incorporating time and frequency correlations, giving more importance to high SNR portions of speech in decision making, exploiting task-specific a priori knowledge both of speech and of noise, using class-dependent processing, and including auditory models in speech processing.

712 citations


Journal ArticleDOI
TL;DR: Develops two robust estimators in a multi-resolution framework; numerical results on complex sequences validate the approach.

673 citations


Journal ArticleDOI
TL;DR: This survey presents a unified and essentially self-contained approach to the asymptotic analysis of a large class of sums that arise in combinatorial mathematics, discrete probabilistic models, and the average-case analysis of algorithms using the Mellin transform, a close relative of the integral transforms of Laplace and Fourier.

603 citations


Journal ArticleDOI
TL;DR: A number of new variants of bundle methods for nonsmooth unconstrained and constrained convex optimization, convex-concave games and variational inequalities are described.
Abstract: In this paper we describe a number of new variants of bundle methods for nonsmooth unconstrained and constrained convex optimization, convex-concave games and variational inequalities. We outline the ideas underlying these methods and present rate-of-convergence estimates.

419 citations


Journal ArticleDOI
TL;DR: Considers different approximation methods and applies the resulting approximation experience to estimation problems; wavelet and 'neuron' approximations are introduced and shown to be spatially adaptive.

416 citations


Book
26 May 1995
TL;DR: In this book, the authors develop criteria for the ergodicity of Markov chains, including the explicit construction of Lyapunov functions and random walks in two-dimensional complexes.
Abstract: Introduction and history 1. Preliminaries 2. General criteria 3. Explicit construction of Lyapunov functions 4. Ideology of induced chains 5. Random walks in two dimensional complexes 6. Stability 7. Exponential convergence and analyticity for ergodic Markov chains Bibliography.
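The "explicit construction of Lyapunov functions" in Chapter 3 is in the spirit of Foster's classical criterion. As one standard formulation (hedged: not necessarily the book's exact statement): an irreducible chain (X_n) on a countable state space is positive recurrent if there exist a function f ≥ 0, a constant ε > 0, and a finite set F such that

```latex
\mathbb{E}\big[f(X_{n+1}) - f(X_n) \mid X_n = x\big] \le -\varepsilon \quad \text{for } x \notin F,
\qquad
\mathbb{E}\big[f(X_{n+1}) \mid X_n = x\big] < \infty \quad \text{for } x \in F.
```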

Journal ArticleDOI
TL;DR: In this article, the authors present several partitioned procedures for time-integrating this coupled problem and discuss their merits in terms of accuracy, stability, heterogeneous computing, I/O transfers, subcycling and parallel processing.

Journal ArticleDOI
TL;DR: In this article, a conceptual framework is provided in which to think of the relationships between the three-dimensional structure of physical space and the geometric properties of a set of cameras that provide pictures from which measurements can be made.
Abstract: A conceptual framework is provided in which to think of the relationships between the three-dimensional structure of physical space and the geometric properties of a set of cameras that provide pictures from which measurements can be made. We usually think of physical space as being embedded in a three-dimensional Euclidean space, in which measurements of lengths and angles do make sense. It turns out that for artificial systems, such as robots, this is not a mandatory viewpoint and that it is sometimes sufficient to think of physical space as being embedded in an affine or even a projective space. The question then arises of how to relate these models to image measurements and to geometric properties of sets of cameras. It is shown that, in the case of two cameras, a stereo rig, the projective structure of the world can be recovered as soon as the epipolar geometry of the stereo rig is known, and that this geometry is summarized by a single 3 × 3 matrix, which is called the fundamental matrix. The affine structure can then be recovered if to this information is added a projective transformation between the two images that is induced by the plane at infinity. Finally, the Euclidean structure (up to a similitude) can be recovered if to these two elements is added the knowledge of two conics (one for each camera) that are the images of the absolute conic, a circle of radius √-1 in the plane at infinity. In all three cases it is shown how the three-dimensional information can be recovered directly from the images without explicit reconstruction of the scene structure. This defines a natural hierarchy of geometric structures, a set of three strata that is overlaid upon the physical world and that is shown to be recoverable by simple procedures that rely on two items, the physical space itself together with possibly, but not necessarily, some a priori information about it, and some voluntary motions of the set of cameras.
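The key fact that the epipolar geometry of a two-camera rig is summarized by a single 3 × 3 fundamental matrix can be checked numerically. The sketch below is not from the paper: the camera poses and point cloud are made up, and intrinsics are taken as the identity so that F = [t]×R; it verifies that every projected correspondence satisfies x2ᵀ F x1 = 0.

```python
import numpy as np

def skew(t):
    """3x3 cross-product matrix: skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Two canonical cameras with identity intrinsics (hypothetical rig):
# P1 = [I | 0], P2 = [R | t], R a rotation about the y-axis.
theta = 0.3
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.1])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([R, t.reshape(3, 1)])

# For this calibrated rig the fundamental matrix reduces to F = [t]_x R.
F = skew(t) @ R

# Random 3D points in front of the cameras, in homogeneous coordinates.
rng = np.random.default_rng(0)
X = np.vstack([rng.uniform(-1, 1, (3, 8)) + np.array([[0], [0], [5]]),
               np.ones((1, 8))])
x1 = P1 @ X
x2 = P2 @ X

# Epipolar constraint: x2^T F x1 should vanish for every correspondence.
residuals = np.einsum('in,ij,jn->n', x2, F, x1)
```

The residuals are zero up to floating-point error, illustrating that F alone encodes the rig's epipolar geometry.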

Journal ArticleDOI
TL;DR: A new method to compute the differential characteristics of isointensity surfaces from three-dimensional images, based on the implicit representation of the surface; this representation leads to entirely new formulas that use only the derivatives of the 3D image and avoid the problem of parametrizing the surfaces.

Proceedings ArticleDOI
01 Sep 1995
TL;DR: The algorithm consists of first performing edge extraction on a possibly distorted video sequence, then applying polygonal approximation with a large tolerance on these edges to extract candidate lines from the sequence, and finally finding the parameters of the distortion model that best transform these edges into segments.
Abstract: Most algorithms in 3D computer vision rely on the pinhole camera model because of its simplicity, whereas video optics, especially low-cost wide-angle lenses, generate a lot of nonlinear distortion which can be critical. To find the distortion parameters of a camera, we use the following fundamental property: a camera follows the pinhole model if and only if the projection of every line in space onto the camera is a line. Consequently, if we find the transformation on the video image such that every line in space is viewed in the transformed image as a line, then we know how to remove the distortion from the image. The algorithm consists of first doing edge extraction on a possibly distorted video sequence, then doing polygonal approximation with a large tolerance on these edges to extract possible lines from the sequence, and then finding the parameters of our distortion model that best transform these edges to segments. Results are presented on real video images and compared with the distortion calibration obtained by a full camera calibration method which uses a calibration grid.
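The final step, choosing distortion parameters that make the extracted edge chains as straight as possible, can be sketched as follows. This is a hypothetical minimal version (a one-parameter radial model and a plain grid search, with made-up function names), not the authors' implementation:

```python
import numpy as np

def undistort(points, k, center=(0.0, 0.0)):
    """One-parameter radial model (hypothetical): x_u = c + (x_d - c)(1 + k r^2)."""
    c = np.asarray(center)
    p = points - c
    r2 = np.sum(p ** 2, axis=1, keepdims=True)
    return c + p * (1.0 + k * r2)

def straightness_error(points):
    """Residual energy of the best-fit line through the points (total least squares)."""
    centered = points - points.mean(axis=0)
    # Smallest singular value^2 = energy orthogonal to the principal direction.
    return np.linalg.svd(centered, compute_uv=False)[-1] ** 2

def calibrate_k(edge_chains, k_grid):
    """Grid-search the coefficient whose undistortion makes every chain straightest."""
    cost = lambda k: sum(straightness_error(undistort(c, k)) for c in edge_chains)
    return min(k_grid, key=cost)
```

Given chains of edge points from the polygonal-approximation step, `calibrate_k([chain1, chain2, ...], np.linspace(-0.2, 0.2, 401))` returns the coefficient that best straightens them; lines through the distortion center stay straight under a radial model, so off-center chains carry the useful signal.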

Journal ArticleDOI
TL;DR: This technique, akin to Mellin transform asymptotics, is put in perspective and illustrated by means of several examples related to combinatorics and the analysis of algorithms like digital tries, digital search trees, quadtrees, and distributed leader election.

Proceedings ArticleDOI
20 Jun 1995
TL;DR: The geometry of multi-image perspective projection and the matching constraints that this induces on image measurements are studied; their complex algebraic interdependency is captured by quadratic structural simplicity constraints on the Grassmannian.
Abstract: The paper studies the geometry of multi-image perspective projection and the matching constraints that this induces on image measurements. The combined image projections define a 3D joint image subspace of the space of combined homogeneous image coordinates. This is a complete projective replica of the 3D world in image coordinates. Its location encodes the imaging geometry and is captured by the 4-index joint image Grassmannian tensor. Projective reconstruction in the joint image is a canonical process requiring only a simple rescaling of image coordinates. Reconstruction in world coordinates amounts to a choice of basis in the joint image. The matching constraints are multilinear tensorial equations in image coordinates that tell whether tokens in different images could be the projections of a single world token. For 2D images of 3D points there are exactly three basic types: the epipolar constraint, A. Shashua's (1995) trilinear one, and a new quadrilinear 4-image one. For images of lines, R. Hartley's (1994) trilinear constraint is the only type. The coefficients of the matching constraints are tensors built directly from the joint image Grassmannian. Their complex algebraic interdependency is captured by quadratic structural simplicity constraints on the Grassmannian.

Journal ArticleDOI
TL;DR: Three-dimensional edge detection in voxel images is used to locate points corresponding to the surfaces of 3D structures, in order to extract points or lines that may be used by registration and tracking procedures; the 3D image is treated as a hypersurface (a three-dimensional manifold) in R4 to avoid the problem of establishing links between 3D edge detection and local surface approximation.

Book ChapterDOI
10 Apr 1995
TL;DR: This paper shows how a restricted form of second-order syntax and embedded implication can be used together with induction in the Coq Proof Development system, and fully formalizes a proof of type soundness in the system.
Abstract: The terms of the simply-typed λ-calculus can be used to express the higher-order abstract syntax of objects such as logical formulas, proofs, and programs. Support for the manipulation of such objects is provided in several programming languages (e.g. λProlog, Elf). Such languages also provide embedded implication, a tool which is widely used for expressing hypothetical judgments in natural deduction. In this paper, we show how a restricted form of second-order syntax and embedded implication can be used together with induction in the Coq Proof Development system. We specify typing rules and evaluation for a simple functional language containing only function abstraction and application, and we fully formalize a proof of type soundness in the system. One difficulty we encountered is that expressing the higher-order syntax of an object-language as an inductive type in Coq generates a class of terms that contains more than just those that directly represent objects in the language. We overcome this difficulty by defining a predicate in Coq that holds only for those terms that correspond to programs. We use this predicate to express and prove the adequacy for our syntax.

Journal ArticleDOI
TL;DR: This paper presents an algorithm for computing the possible rotations of the end-effector around a fixed point, taking into account all the constraints that limit the workspace.
Abstract: An important step during the design of a parallel manipulator is the determination of its workspace. For a 6-d.o.f. parallel manipulator, workspace limitations are due to the bounded range of the linear actuators, mechanical limits on the passive joints, and link interference. The computation of the workspace of a parallel manipulator is far more complex than for a serial-link manipulator, as its translation ability depends upon the orientation of the end-effector. We present in this paper an algorithm for computing the possible rotations of the end-effector around a fixed point. This algorithm takes into account all the constraints limiting the workspace. Various examples are presented.

Journal ArticleDOI
TL;DR: To any accessible nonlinear system the authors associate a so-called infinitesimal Brunovsky form, which gives an algebraic criterion for strong accessibility as well as a generalization of Kronecker controllability indices.
Abstract: To any accessible nonlinear system we associate a so-called infinitesimal Brunovsky form. This gives an algebraic criterion for strong accessibility as well as a generalization of Kronecker controllability indices. An output function which defines a right-invertible system without zero dynamics is shown to exist if and only if the basis of the Brunovsky form can be transformed into a system of exact differential forms. This is equivalent to the system being differentially flat and hence constitutes a necessary and sufficient condition for dynamic feedback linearizability.

Journal ArticleDOI
TL;DR: A prototype system helps find video segments of interest from existing collections and create new video presentations with algebraic combinations of these segments.
Abstract: A new data model called algebraic video provides operations for composing, searching, navigating, and playing back digital video presentations. A prototype system helps find video segments of interest from existing collections and create new video presentations with algebraic combinations of these segments.

Book ChapterDOI
27 Sep 1995
TL;DR: This work describes a number of hybrid schemes that improve over distributed reference counting algorithms in order to collect cyclic garbage, as well as tracing-based techniques derived from uniprocessor tracing-based techniques.
Abstract: We present the spectrum of distributed garbage collection techniques. We first describe the reference counting-based techniques and compare them, in particular with respect to resilience to message failures. Reference counting-based techniques are acyclic, since they are unable to collect cyclic data structures. We then describe a number of hybrid schemes that improve over distributed reference counting algorithms in order to collect cyclic garbage, followed by tracing-based techniques derived from uniprocessor tracing-based techniques. Finally, we discuss the pros and cons of each technique.

Proceedings ArticleDOI
25 Jan 1995
TL;DR: This extension solves the full transparency problem (how to give syntactic signatures for higher-order functors that express exactly their propagation of type equations), and also provides better support for non-closed code fragments.
Abstract: We present a variant of the Standard ML module system where parameterized abstract types (i.e. functors returning generative types) map provably equal arguments to compatible abstract types, instead of generating distinct types at each application as in Standard ML. This extension solves the full transparency problem (how to give syntactic signatures for higher-order functors that express exactly their propagation of type equations), and also provides better support for non-closed code fragments.

Journal ArticleDOI
TL;DR: Chopping transactions in this way permits users to obtain more intertransaction concurrency while preserving correctness, and can also enhance intratransaction parallelism.
Abstract: Chopping transactions into pieces is good for performance but may lead to nonserializable executions. Many researchers have reacted to this fact by either inventing new concurrency-control mechanisms, weakening serializability, or both. We adopt a different approach. We assume a user who has access only to user-level tools, such as (1) choosing isolation degrees 1-4, (2) the ability to execute a portion of a transaction using multiversion read consistency, and (3) the ability to reorder the instructions in transaction programs; and who knows the set of transactions that may run during a certain interval (users are likely to have such knowledge for on-line or real-time transactional applications). Given this information, our algorithm finds the finest chopping of a set of transactions TranSet with the following property: if the pieces of the chopping execute serializably, then TranSet executes serializably. This permits users to obtain more concurrency while preserving correctness. Besides obtaining more intertransaction concurrency, chopping transactions in this way can enhance intratransaction parallelism. The algorithm is inexpensive, running in O(n×(e+m)) time, once conflicts are identified, using a naive implementation, where n is the number of concurrent transactions in the interval, e is the number of edges in the conflict graph among the transactions, and m is the maximum number of accesses of any transaction. This makes it feasible to add as a tuning knob to real systems.
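The safety condition behind such an algorithm is usually stated on a chopping graph: vertices are pieces, C-edges join conflicting pieces of different transactions, S-edges join sibling pieces of the same transaction, and a chopping is safe when no cycle contains both an S-edge and a C-edge (an SC-cycle). The sketch below is a hypothetical minimal checker, not the paper's O(n×(e+m)) algorithm; it relies on the observation that an SC-cycle exists exactly when two pieces of some transaction are connected by a path made of C-edges and other transactions' S-edges:

```python
from collections import defaultdict

def has_sc_cycle(pieces, conflicts):
    """pieces: dict txn -> list of piece ids (siblings form the S-edges).
    conflicts: iterable of (piece, piece) pairs across transactions (C-edges).
    Returns True when the chopping graph contains an SC-cycle."""
    owner = {p: t for t, ps in pieces.items() for p in ps}
    c_adj = defaultdict(set)
    for a, b in conflicts:
        c_adj[a].add(b)
        c_adj[b].add(a)
    for t, ps in pieces.items():
        for start in ps:
            # DFS from `start` over C-edges plus S-edges of OTHER transactions;
            # reaching another piece of t closes an SC-cycle via the S-edge
            # back to start (the path necessarily contains a C-edge).
            seen, stack = {start}, [start]
            while stack:
                x = stack.pop()
                nbrs = set(c_adj[x])
                if owner[x] != t:
                    nbrs |= set(pieces[owner[x]])  # free moves along others' S-edges
                for y in nbrs:
                    if y in seen:
                        continue
                    if owner[y] == t:
                        return True
                    seen.add(y)
                    stack.append(y)
    return False
```

For example, chopping T1 into two pieces that both conflict with another transaction creates an SC-cycle (unsafe), while a chopping in which only one piece conflicts is safe.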

Journal ArticleDOI
TL;DR: In this paper, the authors prove convergence of the discontinuous Galerkin finite element method with polynomials of arbitrary degree q ≥ 0 on general unstructured meshes for scalar conservation laws in multidimensions.
Abstract: We prove convergence of the discontinuous Galerkin finite element method with polynomials of arbitrary degree q≥0 on general unstructured meshes for scalar conservation laws in multidimensions. We also prove for systems of conservation laws that limits of discontinuous Galerkin finite element solutions satisfy the entropy inequalities of the system related to convex entropies.
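For reference, the entropy inequalities in question have the standard form (a generic statement, hedged, rather than the paper's precise setting): for a convex entropy η with entropy flux q satisfying q′ = η′ f′ for the flux f, limits u of the discrete solutions satisfy, in the sense of distributions,

```latex
\partial_t\, \eta(u) + \nabla \cdot q(u) \le 0 .
```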

Journal ArticleDOI
TL;DR: This work investigates the interplay of dynamic types with other advanced type constructions, discussing their integration into languages with explicit polymorphism (in the style of system F), implicit polymorphism, abstract data types, and subtyping.
Abstract: There are situations in programming where some dynamic typing is needed, even in the presence of advanced static type systems. We investigate the interplay of dynamic types with other advanced type constructions, discussing their integration into languages with explicit polymorphism (in the style of system F ), implicit polymorphism (in the style of ML), abstract data types, and subtyping.

Book ChapterDOI
10 Apr 1995
TL;DR: A simply typed λ-term whose computation in the λσ-calculus does not always terminate is presented.
Abstract: We present a simply typed λ-term whose computation in the λσ-calculus does not always terminate.

Book ChapterDOI
05 Dec 1995
TL;DR: This paper describes a variational approach for estimating optical flow from a sequence of images while preserving flow discontinuities; the problem is posed as the regularization and minimization of a non-quadratic functional.
Abstract: This paper describes a variational approach devised for the purpose of estimating optical flow from a sequence of images under the constraint of preserving flow discontinuities. The problem is posed as the regularization and minimization of a non-quadratic functional. The Tikhonov quadratic regularization term usually used to recover a smooth solution is replaced by a particular function of the flow gradient, specifically derived to allow the formation of flow discontinuities in the solution. Conditions that this regularizing term must fulfil to preserve discontinuities and ensure the stability of the regularization problem are also derived. To minimize this non-quadratic functional, two different methods have been investigated. The first is an iterative scheme that solves the associated nonlinear Euler-Lagrange equations. The second introduces dual variables so that the minimization problem becomes a quadratic or convex functional minimization problem. Promising experimental results on synthetic and real image sequences illustrate the capabilities of this approach.
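The kind of functional described can be sketched as follows (notation mine, a generic form rather than the paper's exact one). With image brightness I(x, t) and flow field w = (u, v), one minimizes

```latex
E(w) \;=\; \int_{\Omega} \big(\nabla I \cdot w + \partial_t I\big)^2 \, dx
\;+\; \alpha \int_{\Omega} \varphi\big(\lVert \nabla w \rVert\big)\, dx ,
```

where the Tikhonov choice φ(s) = s² smooths across motion boundaries, while an edge-preserving φ with sub-quadratic growth at infinity penalizes large flow gradients less severely and so allows discontinuities to form.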

Journal ArticleDOI
TL;DR: This paper proposes a list of such problems, after a review of the current major 3D imaging modalities and a description of the related medical needs, and presents some of the past and current work done by the EPIDAURE research group at INRIA on the following topics.