
Showing papers by "French Institute for Research in Computer Science and Automation" published in 1994


Journal ArticleDOI
TL;DR: In this paper, the continuous- and discrete-time H∞ control problems are solved via elementary manipulations on linear matrix inequalities (LMI), and two interesting new features emerge through this approach: solvability conditions valid for both regular and singular problems, and an LMI-based parametrization of all H∞-suboptimal controllers, including reduced-order controllers.
Abstract: The continuous- and discrete-time H∞ control problems are solved via elementary manipulations on linear matrix inequalities (LMI). Two interesting new features emerge through this approach: solvability conditions valid for both regular and singular problems, and an LMI-based parametrization of all H∞-suboptimal controllers, including reduced-order controllers. The solvability conditions involve Riccati inequalities rather than the usual indefinite Riccati equations. Alternatively, these conditions can be expressed as a system of three LMIs. Efficient convex optimization techniques are available to solve this system. Moreover, its solutions parametrize the set of H∞ controllers and bear important connections with the controller order and the closed-loop Lyapunov functions. Thanks to such connections, the LMI-based characterization of H∞ controllers opens new perspectives for the refinement of H∞ design. Applications to cancellation-free design and controller order reduction are discussed and illustrated by examples.
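For context, the flavor of LMI involved can be illustrated by the standard bounded-real lemma (a textbook statement, not a formula quoted from the paper): for a continuous-time system x' = Ax + Bw, z = Cx + Dw, the system is stable with H∞ norm below γ if and only if there exists P = Pᵀ ≻ 0 such that

```latex
\begin{bmatrix}
A^{\top}P + PA & PB & C^{\top} \\
B^{\top}P & -\gamma I & D^{\top} \\
C & D & -\gamma I
\end{bmatrix} \prec 0 .
```

Feasibility problems of this shape are exactly what the convex optimization techniques mentioned in the abstract solve.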

3,200 citations


Journal ArticleDOI
TL;DR: A heuristic method has been developed for registering two sets of 3-D curves obtained by using an edge-based stereo system, or two dense 3-D maps obtained by using a correlation-based stereo system; it is efficient and robust, and yields an accurate motion estimate.
Abstract: A heuristic method has been developed for registering two sets of 3-D curves obtained by using an edge-based stereo system, or two dense 3-D maps obtained by using a correlation-based stereo system. Geometric matching in general is a difficult unsolved problem in computer vision. Fortunately, in many practical applications, some a priori knowledge exists which considerably simplifies the problem. In visual navigation, for example, the motion between successive positions is usually approximately known. From this initial estimate, our algorithm computes observer motion with very good precision, which is required for environment modeling (e.g., building a Digital Elevation Map). Objects are represented by a set of 3-D points, which are considered as the samples of a surface. No constraint is imposed on the form of the objects. The proposed algorithm is based on iteratively matching points in one set to the closest points in the other. A statistical method based on the distance distribution is used to deal with outliers, occlusion, appearance and disappearance, which allows us to do subset-subset matching. A least-squares technique is used to estimate 3-D motion from the point correspondences, which reduces the average distance between points in the two sets. Both synthetic and real data have been used to test the algorithm, and the results show that it is efficient and robust, and yields an accurate motion estimate.
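The iterate-and-match idea can be sketched in two dimensions (an illustrative reduction; the paper works with 3-D point sets and adds a statistical outlier test, neither of which appears here):

```python
import math

def rigid_fit(pairs):
    # Closed-form 2-D least-squares rigid motion for matched point pairs.
    n = len(pairs)
    mx = sum(p[0] for p, _ in pairs) / n
    my = sum(p[1] for p, _ in pairs) / n
    nx = sum(q[0] for _, q in pairs) / n
    ny = sum(q[1] for _, q in pairs) / n
    # Rotation angle minimising the sum of squared residuals.
    num = sum((p[0]-mx)*(q[1]-ny) - (p[1]-my)*(q[0]-nx) for p, q in pairs)
    den = sum((p[0]-mx)*(q[0]-nx) + (p[1]-my)*(q[1]-ny) for p, q in pairs)
    th = math.atan2(num, den)
    c, s = math.cos(th), math.sin(th)
    return c, s, nx - (c*mx - s*my), ny - (s*mx + c*my)

def icp(src, dst, iters=20):
    # Iteratively match each point to its closest counterpart in the
    # other set, then apply the least-squares motion the matches imply.
    pts = list(src)
    for _ in range(iters):
        pairs = [(p, min(dst, key=lambda q: (p[0]-q[0])**2 + (p[1]-q[1])**2))
                 for p in pts]
        c, s, tx, ty = rigid_fit(pairs)
        pts = [(c*x - s*y + tx, s*x + c*y + ty) for x, y in pts]
    return pts
```

As in the paper, convergence relies on a reasonable initial estimate: closest-point matching only finds the true correspondences when the residual motion is small compared with point spacing.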

2,177 citations


Journal ArticleDOI
TL;DR: A class of numerical schemes on unstructured meshes for the numerical simulation of hyperbolic equations and systems is described and it is demonstrated that a higher order of accuracy is indeed obtained, even on very irregular meshes.

432 citations


Journal ArticleDOI
TL;DR: The GFUN package is described which contains functions for manipulating sequences, linear recurrences, or differential equations and generating functions of various types and is intended both as an elementary introduction to the subject and as a reference manual for the package.
Abstract: We describe the GFUN package which contains functions for manipulating sequences, linear recurrences, or differential equations and generating functions of various types. This article is intended both as an elementary introduction to the subject and as a reference manual for the package.
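GFUN itself is a Maple package; as a language-neutral illustration of the kind of service it provides, this hypothetical Python sketch guesses a constant-coefficient linear recurrence from the first terms of a sequence, in the spirit of its sequence-to-recurrence functions:

```python
from fractions import Fraction

def guess_recurrence(seq, order):
    # Look for constants c1..c_order with
    #   a(n) = c1*a(n-1) + ... + c_order*a(n-order)
    # by solving a small exact linear system over the rationals,
    # then checking the candidate on all remaining terms.
    rows = [[Fraction(seq[n - k]) for k in range(1, order + 1)] + [Fraction(seq[n])]
            for n in range(order, 2 * order)]
    for i in range(order):  # Gaussian elimination with pivot search
        piv = next((r for r in range(i, order) if rows[r][i] != 0), None)
        if piv is None:
            return None  # singular system: no recurrence of this order
        rows[i], rows[piv] = rows[piv], rows[i]
        piv_val = rows[i][i]
        rows[i] = [v / piv_val for v in rows[i]]
        for r in range(order):
            if r != i and rows[r][i] != 0:
                f = rows[r][i]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[i])]
    coeffs = [rows[i][-1] for i in range(order)]
    ok = all(seq[n] == sum(c * seq[n - k - 1] for k, c in enumerate(coeffs))
             for n in range(order, len(seq)))
    return coeffs if ok else None
```

For the Fibonacci numbers this recovers a(n) = a(n-1) + a(n-2); GFUN additionally handles differential equations and conversions between generating-function types, which this toy does not attempt.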

400 citations


Proceedings ArticleDOI
01 Oct 1994
TL;DR: The mechanism uses novel probing to solicit feedback information in a scalable manner and to estimate the number of receivers, and it separates the congestion signal from the congestion control algorithm so as to cope with heterogeneous networks.
Abstract: We describe a mechanism for scalable control of multicast continuous media streams. The mechanism uses a novel probing mechanism to solicit feedback information in a scalable manner and to estimate the number of receivers. In addition, it separates the congestion signal from the congestion control algorithm, so as to cope with heterogeneous networks. This mechanism has been implemented in the IVS video conference system using options within RTP to elicit information about the quality of the video delivered to the receivers. The H.261 coder of IVS then uses this information to adjust its output rate, the goal being to maximize the perceptual quality of the image received at the destinations while minimizing the bandwidth used by the video transmission. We find that our prototype control mechanism is well suited to the Internet environment. Furthermore, it prevents video sources from creating congestion in the Internet. Experiments are underway to investigate how the scalable probing mechanism can be used to facilitate multicast video distribution to a large number of participants.
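The scaling problem the probing mechanism addresses can be illustrated with a toy model (hypothetical, not the paper's actual probing protocol): if every receiver answered every poll, feedback traffic would grow linearly with group size, whereas sampled replies keep it bounded while still allowing the sender to estimate the audience:

```python
import random

def estimate_receivers(receivers, reply_prob, rng):
    # Each receiver answers a poll independently with probability
    # `reply_prob`, so the sender sees only about N*reply_prob replies
    # and estimates the group size as replies / reply_prob.
    replies = sum(1 for _ in range(receivers) if rng.random() < reply_prob)
    return replies / reply_prob
```

With 10,000 receivers and a 1% reply probability the sender processes roughly 100 messages per round yet still obtains an estimate of the right order of magnitude.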

397 citations


Book
01 Aug 1994
TL;DR: In this book, the authors present the k-epsilon method for turbulence in a language familiar to applied mathematicians, stripped bare of all the technicalities of turbulence theory.
Abstract: This book is aimed at applied mathematicians interested in the numerical simulation of turbulent flows. The book is centered around the k-epsilon model, but it also deals with other models such as subgrid-scale models, one-equation models, and Reynolds stress models. The reader is expected to have some knowledge of numerical methods for fluids and, if possible, some understanding of fluid mechanics, the partial differential equations used, and their variational formulations. This book presents the k-epsilon method for turbulence in a language familiar to applied mathematicians, stripped bare of all the technicalities of turbulence theory. The model is justified from a mathematical standpoint rather than from a physical one. The numerical algorithms are investigated and some theoretical and numerical results presented. This book should prove an invaluable tool for those studying a subject that is still controversial but very useful for industrial applications. 71 figs., 200 refs.

380 citations


Proceedings ArticleDOI
01 Jun 1994
TL;DR: A new parametric framework for analyzing recursive pointer data structures is presented which can express a new natural class of alias information not accessible to existing methods, and which on numerous examples that occur in practice is much more precise than recently published algorithms.
Abstract: Existing methods for alias analysis of recursive pointer data structures are based on two approximation techniques: k-limiting, and store-based (or, equivalently, location- or region-based) approximations, which blur the distinction between elements of recursive data structures. Although notable progress in interprocedural alias analysis has recently been accomplished, very little progress in the precision of analysis of recursive pointer data structures has been seen since the inception of these approximation techniques by Jones and Muchnick a decade ago. As a result, optimizing, verifying, and parallelizing programs with pointers has remained difficult. We present a new parametric framework for analyzing recursive pointer data structures which can express a new natural class of alias information not accessible to existing methods. The key idea is to represent alias information by pairs of symbolic access paths which are qualified by symbolic descriptions of the positions for which the alias pair holds. Based on this result, we present an algorithm for interprocedural may-alias analysis with pointers which on numerous examples that occur in practice is much more precise than recently published algorithms [CWZ90, He90, LR92, CBC93].

376 citations


Journal ArticleDOI
TL;DR: A general strategy is developed for solving the random generation problem with two closely related types of methods: for structures of size n, the boustrophedonic algorithms exhibit a worst-case behaviour of the form O(n log n); the sequential algorithms have worst case O(n^2), while offering good potential for optimizations in the average case.
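The sequential algorithms referred to are instances of the classical recursive method; a sketch for uniform random binary trees (a standard example chosen for illustration, not taken from the paper) looks like this:

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def count(n):
    # Number of binary trees with n internal nodes (Catalan numbers).
    if n == 0:
        return 1
    return sum(count(k) * count(n - 1 - k) for k in range(n))

def random_tree(n, rng):
    # Classical recursive method: pick the left-subtree size k with
    # probability proportional to count(k) * count(n-1-k), then recurse.
    if n == 0:
        return None
    r = rng.randrange(count(n))
    for k in range(n):
        w = count(k) * count(n - 1 - k)
        if r < w:
            return (random_tree(k, rng), random_tree(n - 1 - k, rng))
        r -= w

def size(t):
    return 0 if t is None else 1 + size(t[0]) + size(t[1])
```

Every tree of the target size is produced with equal probability; the boustrophedonic refinement in the paper changes the order in which the split weights are examined to improve the worst-case arithmetic cost.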

338 citations


Journal ArticleDOI
TL;DR: This work proposes a new solution to the problem of determining the shortest paths of bounded curvature joining two oriented points in the plane based on the minimum principle of Pontryagin.
Abstract: Given two oriented points in the plane, we determine and compute the shortest paths of bounded curvature joining them. This problem has been solved recently by Dubins in the no-cusp case, and by Reeds and Shepp otherwise. We propose a new solution based on the minimum principle of Pontryagin. Our approach simplifies the proofs and makes clear the global or local nature of the results.

209 citations


Journal ArticleDOI
TL;DR: A pursuit algorithm has been designed that directly tracks the region representing the projection of a moving object in the image, rather than relying on the set of trajectories of individual points or segments, which makes it possible to predict the position of the target in the next frame.
Abstract: This work investigates a new approach to the tracking of regions in an image sequence. The approach relies on two successive operations: detection and discrimination of moving targets and then pursuit of the targets. A motion-based segmentation algorithm, previously developed in the laboratory, provides the detection and discrimination stage. This paper emphasizes the pursuit stage. A pursuit algorithm has been designed that directly tracks the region representing the projection of a moving object in the image, rather than relying on the set of trajectories of individual points or segments. The region tracking is based on the dense estimation of an affine model of the motion field within each region, which makes it possible to predict the position of the target in the next frame. A multiresolution scheme provides reliable estimates of the motion parameters, even in the case of large displacements. Two interacting linear dynamic systems describe the temporal evolution of the geometry and the motion of the tracked regions. Experiments conducted on real images demonstrate that the approach is robust against occlusion and can handle large interframe displacements and complex motions.
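The affine motion model mentioned above is the standard six-parameter one; a minimal sketch of how such a model predicts where a region's points move (illustrative only, not the paper's estimation code):

```python
def affine_flow(params, x, y):
    # Six-parameter affine motion model: displacement (u, v) of image
    # point (x, y).  The parameter names a1..a6 are illustrative.
    a1, a2, a3, a4, a5, a6 = params
    return a1 + a2 * x + a3 * y, a4 + a5 * x + a6 * y

def predict_region(region, params):
    # Shift each point of a tracked region by its model displacement
    # to predict the region's position in the next frame.
    return [(x + u, y + v)
            for (x, y) in region
            for (u, v) in [affine_flow(params, x, y)]]
```

The paper estimates the six parameters densely over each region; the prediction step shown here is what lets the tracker bridge large interframe displacements.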

206 citations


Proceedings ArticleDOI
12 Jun 1994
TL;DR: Experiments indicate that the control mechanism is well suited to the Internet environment, and makes it possible to establish and maintain quality videoconferences even across congested connections in the Internet.
Abstract: Datagram networks such as the Internet do not provide guaranteed resources such as bandwidth or guaranteed performance measures such as maximum delay. One way to support packet video in these networks is to use feedback mechanisms that adapt the output rate of video coders based on the state of the network. The authors present one such mechanism. They describe the feedback information, and how it is used by the coder control algorithm. They also examine how the need to operate in a multicast environment impacts the design of the control mechanism. This mechanism has been implemented in the H.261 video coder of IVS. IVS is a videoconference system for the Internet developed at INRIA. Experiments indicate that the control mechanism is well suited to the Internet environment. In particular, it makes it possible to establish and maintain quality videoconferences even across congested connections in the Internet. Furthermore, it prevents video sources from swamping the resources of the Internet, which could lead to unacceptable service to all users of the network.
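The feedback loop can be caricatured as follows (a hypothetical rule for illustration; the actual IVS controller uses richer feedback than a single loss ratio): cut the coder's target rate multiplicatively when receivers report congestion, and probe upward additively otherwise.

```python
def adapt_rate(rate, loss_ratio, *, min_rate=10.0, max_rate=1000.0,
               loss_threshold=0.05, decrease=0.7, increase=10.0):
    # Illustrative feedback rule (not the actual IVS controller):
    # back off sharply on reported congestion, recover gently otherwise,
    # keeping the coder's target rate within its supported range.
    if loss_ratio > loss_threshold:
        rate *= decrease
    else:
        rate += increase
    return max(min_rate, min(max_rate, rate))
```

The asymmetry (sharp decrease, gentle increase) is what keeps such loops stable when many sources share a congested link; all thresholds shown are made-up values.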

Journal ArticleDOI
TL;DR: This paper deals with the Davidson method, which computes a few of the extreme eigenvalues of a symmetric matrix and the corresponding eigenvectors; a general convergence result for methods based on projection techniques is given, which also applies to the Lanczos method.
Abstract: This paper deals with the Davidson method that computes a few of the extreme eigenvalues of a symmetric matrix and corresponding eigenvectors. A general convergence result for methods based on projection techniques is given and can be applied to the Lanczos method as well. The efficiency of the preconditioner involved in the method is discussed. Finally, by means of numerical experiments, the Lanczos and Davidson methods are compared and a procedure for a dynamic restarting process is described.

Journal ArticleDOI
TL;DR: A new comprehensive approach to minimize global energy functions using a multiscale relaxation algorithm that appears to be far less sensitive to local minima than standard relaxation algorithms is investigated.
Abstract: Many image analysis and computer vision problems have been expressed as the minimization of global energy functions describing the interactions between the observed data and the image representations to be extracted in a given task. In this note, we investigate a new comprehensive approach to minimize global energy functions using a multiscale relaxation algorithm. The energy function is minimized over nested subspaces of the original space of possible solutions. These subspaces consist of solutions which are constrained at different scales. The constrained relaxation is implemented via a coarse-to-fine multiresolution algorithm that yields fast convergence towards high quality estimates when compared to standard monoresolution or multigrid relaxation schemes. It also appears to be far less sensitive to local minima than standard relaxation algorithms. The efficiency of the approach is demonstrated on a highly nonlinear combinatorial problem which consists of estimating long-range motion in an image sequence on a discrete label space. The method is compared to standard relaxation algorithms on real world and synthetic image sequences.

Book ChapterDOI
01 Jun 1994
TL;DR: It is shown how a special decomposition of general projection matrices, called canonic, enables us to build geometric descriptions for a system of cameras which are invariant with respect to a given group of transformations.
Abstract: We show how a special decomposition of general projection matrices, called canonic, enables us to build geometric descriptions for a system of cameras which are invariant with respect to a given group of transformations. These representations are minimal and capture completely the properties of each level of description considered: Euclidean (in the context of calibration, and in the context of structure from motion, which we distinguish clearly), affine, and projective, which we also relate to each other. In the last case, a new decomposition of the well-known fundamental matrix is obtained. Dependencies, which appear when three or more views are available, are studied in the context of the canonic decomposition, and new composition formulas are established, as well as the link between local (i.e. for pairs of views) representations and global (i.e. for a sequence of images) representations.

Book ChapterDOI
01 Jun 1994
TL;DR: A robust correlation based approach that eliminates outliers is developed to produce a reliable set of corresponding high curvature points that are used to estimate the so-called Fundamental Matrix which is closely related to the epipolar geometry of the uncalibrated stereo rig.
Abstract: This paper addresses the problem of accurately and automatically recovering the epipolar geometry from an uncalibrated stereo rig and its application to the image matching problem. A robust correlation based approach that eliminates outliers is developed to produce a reliable set of corresponding high curvature points. These points are used to estimate the so-called Fundamental Matrix which is closely related to the epipolar geometry of the uncalibrated stereo rig. We show that an accurate determination of this matrix is a central problem. Using a linear criterion in the estimation of this matrix is shown to yield erroneous results. Different parameterizations and non-linear criteria are then developed to take into account the specific constraints of the Fundamental Matrix, providing more accurate results. Various experimental results on real images illustrate the approach.
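For reference, the standard epipolar relations that the Fundamental Matrix F encodes (textbook facts recalled here, not quoted from the paper): for corresponding image points x and x' in homogeneous coordinates,

```latex
x'^{\top} F\, x = 0, \qquad \operatorname{rank} F = 2, \qquad l' = F x, \quad l = F^{\top} x',
```

where l and l' are the epipolar lines and the epipoles are the right and left null vectors of F. The rank-2 condition is the kind of constraint that the paper's parameterizations take into account and that a plain linear criterion ignores.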

Proceedings ArticleDOI
21 Jun 1994
TL;DR: This paper proposes a new method extending the classical correlation method to estimate accurately both the disparity and its derivatives directly from the image data, and relates those derivatives to differential properties of the surface such as orientation and curvatures.
Abstract: We are considering the problem of recovering the three-dimensional geometry of a scene from binocular stereo disparity. Once a dense disparity map has been computed from a stereo pair of images, one often needs to calculate some local differential properties of the corresponding 3-D surface such as orientation or curvatures. The usual approach is to build a 3-D reconstruction of the surface(s) from which all shape properties will then be derived without ever going back to the original images. In this paper, we depart from this paradigm and propose to use the images directly to compute the shape properties. We thus propose a new method extending the classical correlation method to estimate accurately both the disparity and its derivatives directly from the image data. We then relate those derivatives to differential properties of the surface such as orientation and curvatures.

Journal ArticleDOI
TL;DR: A review of the book Numerical Solution of Stochastic Differential Equations by P. E. Kloeden and E. Platen (Springer-Verlag, 1992).
Abstract: Numerical solution of stochastic differential equations, P. E. Kloeden and E. Platen, Springer-Verlag (1992). 632 pp. $59.00 ISBN 0-386-54062-8

Proceedings ArticleDOI
01 Feb 1994
TL;DR: A new concurrent mark-and-sweep garbage collection algorithm that supports multiprocessor environments where the registers of running processes are not readily accessible, without imposing any overhead on the elementary operations of loading a register or reading or initializing a field.
Abstract: We describe and prove the correctness of a new concurrent mark-and-sweep garbage collection algorithm. This algorithm derives from the classical on-the-fly algorithm from Dijkstra et al. [9]. A distinguishing feature of our algorithm is that it supports multiprocessor environments where the registers of running processes are not readily accessible, without imposing any overhead on the elementary operations of loading a register or reading or initializing a field. Furthermore our collector never blocks running mutator processes except possibly on requests for free memory; in particular, updating a field or creating or marking or sweeping a heap object does not involve system-dependent synchronization primitives such as locks. We also provide support for process creation and deletion, and for managing an extensible heap of variable-sized objects.
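For orientation, the two phases that give mark-and-sweep its name can be sketched sequentially (a deliberately stop-the-world toy; the paper's contribution is precisely to run these phases concurrently with the mutators, which this sketch does not attempt):

```python
def mark_and_sweep(heap, roots):
    # `heap` maps an object id to the list of object ids it references.
    marked = set()
    stack = list(roots)
    while stack:                      # mark phase: trace from the roots
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(heap[obj])
    swept = [obj for obj in heap if obj not in marked]
    for obj in swept:                 # sweep phase: reclaim the rest
        del heap[obj]
    return swept
```

The hard part the paper addresses is absent here: keeping this tracing correct while mutators concurrently update fields, without locks and without scanning process registers.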

Book ChapterDOI
25 Sep 1994
TL;DR: A λ-calculus is considered for which applicative terms no longer have the form (...((u u1) u2)... un) but the form (u [u1;...;un]), where [u1;...;un] is a list of terms.
Abstract: We consider a λ-calculus in which applicative terms no longer have the form (...((u u1) u2)... un) but the form (u [u1;...;un]), where [u1;...;un] is a list of terms. While the structure of the usual λ-calculus is isomorphic to that of natural deduction, this new structure is isomorphic to that of Gentzen-style sequent calculus. To express the basis of the isomorphism, we consider intuitionistic logic with implication as the sole connective. However, we do not consider Gentzen's calculus LJ, but a calculus LJT which restricts the notion of cut-free proofs in LJ. We also need to consider explicitly, in a simply typed version of this λ-calculus, a substitution operator and a list concatenation operator. In this way, each elementary step of cut elimination exactly matches a β-reduction, a substitution propagation step, or a concatenation computation step.
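The change of representation is mechanical; a small sketch (with a hypothetical term encoding) flattens nested binary applications into the head-plus-spine form described above:

```python
# Terms: ('var', name) for variables, ('app2', f, a) for the usual
# binary application.  The spine form pairs a head with its argument list.
def to_spine(t):
    # Flatten (...((u u1) u2)... un) into (u, [u1, ..., un]).
    if t[0] == 'app2':
        head, args = to_spine(t[1])
        return head, args + [t[2]]
    return t, []
```

The left-nested application spine becomes an explicit list, which is exactly the structure that lines up with sequent-calculus proofs in the paper.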

Proceedings ArticleDOI
29 Jun 1994
TL;DR: In this article, a polynomial-time projective algorithm for the numerical solution of linear matrix inequalities (LMI) is presented, and a complexity analysis is provided, and applications to two generic LMI problems are discussed.
Abstract: In many control problems, the design constraints have natural formulations in terms of linear matrix inequalities (LMI). When no analytical solution is available, such problems can be attacked by solving the LMIs via convex optimization techniques. This paper describes the polynomial-time projective algorithm for the numerical solution of LMIs. Simple geometrical arguments are used to clarify the strategy and convergence mechanism of the projective method. A complexity analysis is provided, and applications to two generic LMI problems are discussed.

Journal ArticleDOI
TL;DR: In this article, a condition of semistability is shown to ensure the quadratic convergence of Newton's method and the superlinear convergence of some quasi-Newton algorithms, provided the sequence defined by the algorithm exists and converges.
Abstract: This paper presents some new results in the theory of Newton-type methods for variational inequalities, and their application to nonlinear programming. A condition of semistability is shown to ensure the quadratic convergence of Newton's method and the superlinear convergence of some quasi-Newton algorithms, provided the sequence defined by the algorithm exists and converges. A partial extension of these results to nonsmooth functions is given. The second part of the paper considers some particular variational inequalities with unknowns (x, λ), generalizing optimality systems. Here only the question of superlinear convergence of {x_k} is considered. Some necessary or sufficient conditions are given. Applied to some quasi-Newton algorithms they allow us to obtain the superlinear convergence of {x_k}. Application of the previous results to nonlinear programming allows us to strengthen the known results, the main point being a characterization of the superlinear convergence of {x_k} assuming a weak second-order condition without strict complementarity.
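The quadratic convergence referred to is the familiar one; a scalar illustration (not the paper's variational-inequality setting) shows the error roughly squaring at each step:

```python
def newton(f, df, x0, iters):
    # Plain Newton iteration; near a nondegenerate root the error is
    # roughly squared at every step (quadratic convergence).
    xs = [x0]
    for _ in range(iters):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

# Example: solve x**2 - 2 = 0 starting from 1.5.
xs = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5, 5)
```

Semistability in the paper is what guarantees this behaviour in the far more general setting of variational inequalities, where f is replaced by a generalized equation.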

Journal ArticleDOI
TL;DR: It is shown here that the so-called ‘asymptotic local’ approach for change detection is of much wider applicability: model reduction can be enforced, biased identification procedures can be used, and finally one can even dispense with identification and use instead a much simpler Monte-Carlo estimation technique prior to change detection.

Journal ArticleDOI
TL;DR: An interactive deformation technique called AxDf (Axial Deformations) is presented, which allows deformations such as bending, scaling, twisting, and stretching, controlled with a 3D axis, to be specified easily.
Abstract: The paper is part of a research effort that focuses on the provision of more efficient and effective design methods for broadcast modelling systems. It presents an interactive deformation technique called AxDf (Axial Deformations). Based on the paradigm of the modelling tool, the axial-deformations technique allows deformations such as bending, scaling, twisting, and stretching, controlled with a 3D axis, to be specified easily. Moreover, AxDf can easily be combined with other existing deformation techniques.
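One of the named deformations, a twist, can be sketched for the special case of a z-axis (AxDf handles arbitrary 3D axes; this fixed-axis version is only illustrative):

```python
import math

def twist(points, angle_per_unit):
    # Twist about the z-axis: rotate each point in its own z-plane by
    # an angle proportional to its height z along the axis.
    out = []
    for x, y, z in points:
        a = angle_per_unit * z
        c, s = math.cos(a), math.sin(a)
        out.append((c * x - s * y, s * x + c * y, z))
    return out
```

Bending and stretching follow the same pattern, with the axis parameterizing how much each cross-section is displaced or scaled.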

Journal ArticleDOI
TL;DR: It is demonstrated that problem classes and regions of the phase transition previously thought to be easy can sometimes be orders of magnitude more difficult than the worst problems in problem classes and regions of the phase transition considered hard.

Journal ArticleDOI
TL;DR: This work builds upon the seminal work of Kishon et al. (1990), where curves are first smoothed using B-splines, with matching based on hashing using curvature and torsion measures, but introduces two enhancements that allow a more accurate estimation of position, curvature, torsion, and Frénet frames along the curve.
Abstract: We present a new approach to the problem of matching 3-D curves. The approach has a low algorithmic complexity in the number of models, and can operate in the presence of noise and partial occlusions. Our method builds upon the seminal work of Kishon et al. (1990), where curves are first smoothed using B-splines, with matching based on hashing using curvature and torsion measures. However, we introduce two enhancements that allow a more accurate estimation of position, curvature, torsion, and Frénet frames along the curve. We present experimental results using synthetic data and also using characteristic curves extracted from 3-D medical images. An earlier version of this article was presented at the 2nd European Conference on Computer Vision in Italy.

Journal ArticleDOI
TL;DR: It is shown that such periodicity phenomena can be analyzed rather systematically using classical tools from analytic number theory, namely the Mellin-Perron formulae, which yields naturally the Fourier series involved in the expansions of a variety of digital sums related to number representation systems.
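A classical example of such a periodic fluctuation (Delange's theorem, a standard result recalled here for illustration, not drawn from the paper) concerns the sum of binary digit sums s_2(k):

```latex
\sum_{0 \le k < n} s_2(k) \;=\; \tfrac{1}{2}\, n \log_2 n \;+\; n\, F(\log_2 n),
```

where F is a continuous, nowhere-differentiable periodic function of period 1 whose Fourier coefficients involve values of the Riemann zeta function. Fourier expansions of exactly this kind are what the Mellin-Perron formulae produce systematically.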

Journal ArticleDOI
TL;DR: In this article, the authors consider linearly elastic shells whose middle surfaces have the most general geometries and provide complete proofs of the ellipticity of the strain energies found in two commonly used two-dimensional models: Koiter's model and Naghdi's model.
Abstract: We consider linearly elastic shells whose middle surfaces have the most general geometries, and we provide complete proofs of the ellipticity of the strain energies found in two commonly used two-dimensional models: Koiter's model and Naghdi's model.

Journal ArticleDOI
TL;DR: A detailed system-theoretic analysis is presented of the stability and steady-state behavior of the fine-to-coarse Kalman filter and its Riccati equation, and of the new scale-recursive Riccati equation associated with it.
Abstract: An algorithm analogous to the Rauch-Tung-Striebel algorithm, consisting of a fine-to-coarse Kalman filter-like sweep followed by a coarse-to-fine smoothing step, was developed previously by the authors (ibid. vol.39, no.3, p.464-78 (1994)). In this paper they present a detailed system-theoretic analysis of this filter and of the new scale-recursive Riccati equation associated with it. While this analysis is similar in spirit to that for standard Kalman filters, the structure of the dyadic tree leads to several significant differences. In particular, the structure of the Kalman filter error dynamics leads to the formulation of an ML version of the filtering equation and to a corresponding smoothing algorithm based on triangularizing the Hamiltonian for the smoothing problem. In addition, the notion of stability for dynamics requires some care, as do the concepts of reachability and observability. Using these system-theoretic constructs, the stability and steady-state behavior of the fine-to-coarse Kalman filter and its Riccati equation are analysed.

Journal ArticleDOI
TL;DR: This paper presents a mesh generation method of the advancing-front type which is designed in such a way that the well-known difficulties of the classical advancing- front method are not present.
Abstract: This paper presents a mesh generation method of the advancing-front type which is designed in such a way that the well-known difficulties of the classical advancing-front method are not present. The retained solution consists of using the first steps of a Voronoi–Delaunay method to construct a background mesh which is then used to govern the algorithm. The two-dimensional case is considered in detail and possible extensions to adaption problems and three dimensions are indicated.

Book ChapterDOI
07 Nov 1994
TL;DR: An extension to the TAM model is proposed to deal efficiently with authorization schemes involving sets of privileges and can be useful to identify which privilege transfers can lead to unsafe protection states.
Abstract: In this paper, an extension to the TAM model is proposed to deal efficiently with authorization schemes involving sets of privileges. This new formalism provides a technique for analysing the safety problem for this kind of scheme and can be useful in identifying which privilege transfers can lead to unsafe protection states. Further extensions are suggested towards quantitative evaluation of operational security and intrusion detection.