
Showing papers published in 1999 by the French Institute for Research in Computer Science and Automation (INRIA)


Book ChapterDOI
21 Sep 1999
TL;DR: A survey of the theory and methods of photogrammetric bundle adjustment, developed for general robust cost functions rather than restricted to traditional nonlinear least squares.
Abstract: This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares.

3,521 citations
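
The flavor of the problem, in a toy sketch: jointly refining camera poses and 3D points by minimizing a robustified reprojection error. This uses scipy's generic trust-region least-squares solver with a Huber loss rather than the sparse Newton and gauge-handling machinery the survey covers; the geometry and all values are illustrative.

```python
# Minimal robust bundle adjustment sketch (illustrative, not the survey's
# implementation): jointly refine camera poses and 3D points by minimizing
# a robustified reprojection error.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
n_cams, n_pts, f = 2, 8, 500.0                      # toy problem size, focal length

# Ground-truth structure and camera poses (rotation vector + translation).
pts_true = rng.uniform(-1, 1, (n_pts, 3)) + [0, 0, 5]
cams_true = np.hstack([rng.normal(0, 0.05, (n_cams, 3)),   # rotation vectors
                       rng.normal(0, 0.5,  (n_cams, 3))])  # translations
cams_true[:, 5] = 0                                 # keep points in front of cameras

def project(cam, pts):
    """Pinhole projection of 3D points into one camera."""
    pc = Rotation.from_rotvec(cam[:3]).apply(pts) + cam[3:]
    return f * pc[:, :2] / pc[:, 2:3]

obs = np.vstack([project(c, pts_true) for c in cams_true])  # noiseless observations

def residuals(x):
    cams = x[:n_cams * 6].reshape(n_cams, 6)
    pts = x[n_cams * 6:].reshape(n_pts, 3)
    pred = np.vstack([project(c, pts) for c in cams])
    return (pred - obs).ravel()

# Start from perturbed parameters and refine with a robust (Huber) cost.
x0 = np.concatenate([cams_true.ravel(), pts_true.ravel()])
x0 = x0 + rng.normal(0, 0.01, x0.shape)
sol = least_squares(residuals, x0, loss='huber', f_scale=1.0)
print("final robust cost:", sol.cost)
```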


Journal ArticleDOI
01 Apr 1999
TL;DR: Experimental results with an eye-in-hand robotic system confirm the improvement in the stability and convergence domain of the 2 1/2 D visual servoing with respect to classical position-based and image-based visual servoing.
Abstract: We propose an approach to vision-based robot control, called 2 1/2 D visual servoing, which avoids the respective drawbacks of classical position-based and image-based visual servoing. Contrary to the position-based visual servoing, our scheme does not need any geometric three-dimensional model of the object. Furthermore and contrary to image-based visual servoing, our approach ensures the convergence of the control law in the whole task space. 2 1/2 D visual servoing is based on the estimation of the partial camera displacement from the current to the desired camera poses at each iteration of the control law. Visual features and data extracted from the partial displacement allow us to design a decoupled control law controlling the six camera DOFs. The robustness of our visual servoing scheme with respect to camera calibration errors is also analyzed: the necessary and sufficient conditions for local asymptotic stability are easily obtained. Then, due to the simple structure of the system, sufficient conditions for global asymptotic stability are established. Finally, experimental results with an eye-in-hand robotic system confirm the improvement in the stability and convergence domain of the 2 1/2 D visual servoing with respect to classical position-based and image-based visual servoing.

861 citations
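
For orientation, the classical image-based law that 2 1/2 D visual servoing improves upon, sketched for point features. The interaction matrix below is the standard one for a normalized image point at depth Z; the feature values are made up, and the paper's decoupled 2 1/2 D law itself is not reproduced.

```python
# Classical image-based visual servoing for point features (the baseline the
# paper improves on; a sketch, not the 2 1/2 D scheme itself).
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1/Z,    0, x/Z,      x*y, -(1 + x**2),  y],
        [   0, -1/Z, y/Z, 1 + y**2,        -x*y, -x],
    ])

def control_law(points, depths, points_star, lam=0.5):
    """Camera velocity v = -lambda * L^+ * e for a set of point features."""
    e = (points - points_star).ravel()              # feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    return -lam * np.linalg.pinv(L) @ e             # 6-DOF velocity screw

# Three features, current vs desired normalized coordinates (made-up values).
pts  = np.array([[0.10, 0.05], [-0.12, 0.08], [0.02, -0.11]])
star = np.array([[0.00, 0.00], [-0.10, 0.10], [0.00, -0.10]])
print(control_law(pts, depths=np.array([1.0, 1.2, 0.9]), points_star=star))
```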


Journal ArticleDOI
TL;DR: This work proposes a family of linear methods that yield a unique solution to 4- and 5-point pose determination for generic reference points and shows that they do not degenerate for coplanar configurations and even outperform the special linear algorithm for coplanar configurations in practice.
Abstract: The determination of camera position and orientation from known correspondences of 3D reference points and their images is known as pose estimation in computer vision and space resection in photogrammetry. It is well-known that from three corresponding points there are at most four algebraic solutions. Less appears to be known about the cases of four and five corresponding points. We propose a family of linear methods that yield a unique solution to 4- and 5-point pose determination for generic reference points. We first review the 3-point algebraic method. Then we present our two-step, 4-point and one-step, 5-point linear algorithms. The 5-point method can also be extended to handle more than five points. Finally, we demonstrate our methods on both simulated and real images. We show that they do not degenerate for coplanar configurations and even outperform the special linear algorithm for coplanar configurations in practice.

671 citations


Journal ArticleDOI
17 May 1999
TL;DR: This work presents a query language for XML, called XML-QL, which is argued to be suitable for data extraction, conversion, transformation, and integration tasks, and which can extract data from existing XML documents and construct new XML documents.
Abstract: An important application of XML is the interchange of electronic data (EDI) between multiple data sources on the Web. As XML data proliferates on the Web, applications will need to integrate and aggregate data from multiple sources and to clean and transform data to facilitate exchange. Data extraction, conversion, transformation, and integration are all well-understood database problems, and their solutions rely on a query language. We present a query language for XML, called XML-QL, which we argue is suitable for performing the above tasks. XML-QL is a declarative, 'relational complete' query language and is simple enough that it can be optimized. XML-QL can extract data from existing XML documents and construct new XML documents.

649 citations
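
XML-QL's actual WHERE/CONSTRUCT syntax is not reproduced here; as a rough analogue of the extract-and-construct pattern the abstract describes, a Python sketch with ElementTree on made-up bibliography data:

```python
# Rough Python analogue of XML-QL's extract-and-construct pattern (ElementTree
# stands in for the query language; the bibliography data is made up).
import xml.etree.ElementTree as ET

bib = ET.fromstring("""
<bib>
  <book year="1998"><title>Data on the Web</title><publisher>MK</publisher></book>
  <book year="1995"><title>Foundations of Databases</title><publisher>AW</publisher></book>
</bib>
""")

# Extract: bind titles of books published after 1996; construct: a new document.
result = ET.Element("results")
for book in bib.findall("book"):
    if int(book.get("year")) > 1996:
        ET.SubElement(result, "result").text = book.findtext("title")

print(ET.tostring(result, encoding="unicode"))
```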


Journal Article
TL;DR: The purpose of this paper is to present the results of an initial study about storing and querying XML data, focusing on the use of relational database systems and on very simplistic schemes to store and query XML data.
Abstract: XML is rapidly becoming a popular data format. It can be expected that soon large volumes of XML data will exist. XML data is either produced manually (like HTML documents today), or it is generated by a new generation of software tools for the WWW and/or electronic data interchange (EDI). The purpose of this paper is to present the results of an initial study about storing and querying XML data. As a first step, this study was focused on the use of relational database systems and on very simplistic schemes to store and query XML data. In other words, we would like to study how the simplest and most obvious approaches perform, before thinking about more sophisticated approaches.

In general, numerous different options to store and query XML data exist. In addition to a relational database, XML data can be stored in a file system, an object-oriented database (e.g., Excelon), or a special-purpose (or semi-structured) system such as Lore (Stanford), Lotus Notes, or Tamino (Software AG). It is still unclear which of these options will ultimately find wide-spread acceptance. A file system could be used with very little effort to store XML data, but it would not provide any support for querying the XML data. Object-oriented database systems would make it possible to cluster XML elements and sub-elements; this feature might be useful for certain applications, but the current generation of object-oriented database systems is not mature enough to process complex queries on large databases. It is going to take even longer before special-purpose systems are mature.

Even when using an RDBMS, there are many different ways to store XML data. One strategy is to ask the user or a system administrator to decide how XML elements are stored in relational tables. Such an approach is supported, e.g., by Oracle 8i. Another option is to infer from the DTDs of the XML documents how the XML elements should be mapped into tables; such an approach has been studied in [4]. Yet another option is to analyze the XML data and the expected query workload; such an approach has been devised, e.g., in [2].

In this work, we will only study very simple ad-hoc schemes; we think that such a study is necessary before adopting a more complex approach. The schemes that we analyze require no input from the user, they work in the absence of DTDs or if DTDs are meaningless, and they do not involve any analysis of the XML data. Due to their simplicity, the approaches we study will not show the best possible performance, but as we will see, some of them will show very good query performance in most situations. Also, there is no guarantee that any of the more sophisticated approaches known so far will perform better than our simple schemes; see [3] for some experimental results in this respect. Furthermore, the results of our study can be used as input for more sophisticated approaches.

564 citations
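
One very simplistic scheme in the spirit of those studied: shred the document into a single relational "edge" table, one row per element, and query it with plain SQL. A sketch with sqlite3; the schema and data are illustrative, not necessarily the paper's exact mapping.

```python
# Sketch of a simplistic relational storage scheme for XML: a single edge table
# (one row per element), queried with plain SQL. Schema is illustrative.
import sqlite3
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<bib><book><title>XML-QL</title><year>1999</year></book></bib>")

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE edge (id INTEGER, parent INTEGER, tag TEXT, value TEXT)")

def shred(elem, parent, counter=[0]):
    """Insert one row per element, remembering its parent node id."""
    counter[0] += 1
    node_id = counter[0]
    db.execute("INSERT INTO edge VALUES (?, ?, ?, ?)",
               (node_id, parent, elem.tag, (elem.text or "").strip()))
    for child in elem:
        shred(child, node_id)

shred(doc, parent=0)

# Find titles of books: join book elements with their title children.
rows = db.execute("""
    SELECT t.value FROM edge b JOIN edge t ON t.parent = b.id
    WHERE b.tag = 'book' AND t.tag = 'title'
""").fetchall()
print(rows)   # [('XML-QL',)]
```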


Journal ArticleDOI
01 Jun 1999
TL;DR: It is demonstrated that the Tukwila architecture extends previous innovations in adaptive execution (such as query scrambling, mid-execution re-optimization, and choose nodes), and experimental evidence is presented that the techniques result in behavior desirable for a data integration system.
Abstract: Query processing in data integration occurs over network-bound, autonomous data sources. This requires extensions to traditional optimization and execution techniques for three reasons: there is an absence of quality statistics about the data, data transfer rates are unpredictable and bursty, and slow or unavailable data sources can often be replaced by overlapping or mirrored sources. This paper presents the Tukwila data integration system, designed to support adaptivity at its core using a two-pronged approach. Interleaved planning and execution with partial optimization allows Tukwila to quickly recover from decisions based on inaccurate estimates. During execution, Tukwila uses adaptive query operators such as the double pipelined hash join, which produces answers quickly, and the dynamic collector, which robustly and efficiently computes unions across overlapping data sources. We demonstrate that the Tukwila architecture extends previous innovations in adaptive execution (such as query scrambling, mid-execution re-optimization, and choose nodes), and we present experimental evidence that our techniques result in behavior desirable for a data integration system.

477 citations
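
A sketch of the double pipelined hash join's symmetric behavior: every arriving tuple is built into its own side's hash table and immediately probed against the other side's, so matches stream out before either input finishes. Single-threaded alternation stands in here for the real operator's concurrent inputs.

```python
# Sketch of a double pipelined (symmetric) hash join.
from collections import defaultdict

def symmetric_hash_join(left, right, key_left, key_right):
    tables = (defaultdict(list), defaultdict(list))
    streams = [("L", iter(left)), ("R", iter(right))]
    while streams:                                    # alternate between inputs
        side, it = streams.pop(0)
        tup = next(it, None)
        if tup is None:
            continue                                  # this input is exhausted
        streams.append((side, it))
        own, other = (0, 1) if side == "L" else (1, 0)
        k = key_left(tup) if side == "L" else key_right(tup)
        tables[own][k].append(tup)                    # build into own table
        for match in tables[other][k]:                # probe the other table
            yield (tup, match) if side == "L" else (match, tup)

orders = [(1, "book"), (2, "cd")]
customers = [(1, "Ana"), (1, "Bo"), (2, "Cy")]
for pair in symmetric_hash_join(customers, orders,
                                key_left=lambda t: t[0], key_right=lambda t: t[0]):
    print(pair)   # pairs appear as soon as both halves have arrived
```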


Book ChapterDOI
01 Aug 1999
TL;DR: The aim of this paper is to highlight convergence and stability problems in visual servoing by considering an eye-in-hand system and a positioning task with respect to a static target which constrains the six camera degrees of freedom.
Abstract: Visual servoing, using image-based control or position-based control, generally gives satisfactory results. However, in some cases, convergence and stability problems may occur. The aim of this paper is to emphasize these problems by considering an eye-in-hand system and a positioning task with respect to a static target which constrains the six camera degrees of freedom.

468 citations


Journal ArticleDOI
TL;DR: It is shown that the solutions of any zero-dimensional system of polynomials can be expressed through a special kind of univariate representation (Rational Univariate Representation): f(T) = 0, X_1 = g_1(T)/g(T), …, X_n = g_n(T)/g(T), where (f, g, g_1, …, g_n) are univariate polynomials over K.
Abstract: This paper is devoted to the resolution of zero-dimensional systems in K[X_1, …, X_n], where K is a field of characteristic zero (or strictly positive under some conditions). We follow the definition used in MMM95 and basically due to Kronecker for solving zero-dimensional systems: a system is solved if each root is represented in such a way as to allow the performance of any arithmetical operations over the arithmetical expressions of its coordinates. We propose new definitions for solving zero-dimensional systems in this sense by introducing the Univariate Representation of their roots. We show in this way that the solutions of any zero-dimensional system of polynomials can be expressed through a special kind of univariate representation (Rational Univariate Representation): f(T) = 0, X_1 = g_1(T)/g(T), …, X_n = g_n(T)/g(T), where (f, g, g_1, …, g_n) are univariate polynomials over K. A special feature of our Rational Univariate Representation is that we don't lose the geometrical information contained in the initial system. Moreover, we propose different efficient algorithms for the computation of the Rational Univariate Representation, and we make a comparison with standard known tools.

429 citations



Proceedings ArticleDOI
21 Mar 1999
TL;DR: A simple algorithm is obtained that optimizes a subjective measure as opposed to an objective measure of quality, and incorporates the constraints of rate control and playout delay adjustment schemes, and it adapts to varying loss conditions in the network.
Abstract: Excessive packet loss rates can dramatically decrease the audio quality perceived by users of Internet telephony applications. Previous results suggest that error control schemes using forward error correction (FEC) are good candidates for decreasing the impact of packet loss on audio quality. However, the FEC scheme must be coupled to a rate control scheme. Furthermore, the amount of redundant information used at any given point in time should also depend on the characteristics of the loss process at that time (it would make no sense to send much redundant information when the channel is loss free), on the end-to-end delay constraints (destinations typically have to wait longer to decode the FEC as more FEC information is used), on the quality of the redundant information, etc. However, it is not clear given all these constraints how to choose the "best" possible redundant information. We address this issue, and illustrate the approach using an FEC scheme for packet audio standardized in the IETF. We show that the problem of finding the best redundant information can be expressed mathematically as a constrained optimization problem for which we give explicit solutions. We obtain from these solutions a simple algorithm with very interesting features, namely (i) the algorithm optimizes a subjective measure (such as the audio quality perceived at a destination) as opposed to an objective measure of quality (such as the packet loss rate at a destination), (ii) it incorporates the constraints of rate control and playout delay adjustment schemes, and (iii) it adapts to varying loss conditions in the network (estimated online with RTCP feedback). We have been using the algorithm together with a TCP-friendly rate control scheme, and we have found it to provide very good audio quality even over paths with high and varying loss rates. We present simulation and experimental results to illustrate its performance.

377 citations
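
A toy version of the choice the algorithm makes: at a fixed total rate, each candidate scheme trades primary quality for redundant copies, and the scheme maximizing expected received quality depends on the current loss estimate. Candidates, quality values, and the independent-loss assumption are all illustrative.

```python
# Toy redundancy selection: at a fixed total rate, each candidate FEC scheme
# splits the bit budget between the primary encoding and lower-quality
# redundant copies; pick the scheme maximizing expected received quality
# under the estimated loss rate. All numbers are illustrative.

def expected_quality(copy_qualities, p):
    """E[quality of the best received copy], copies lost independently w.p. p."""
    qs = sorted(copy_qualities, reverse=True)
    return sum((p ** i) * (1 - p) * q for i, q in enumerate(qs))

candidates = {                      # qualities of primary + redundant copies
    "no FEC":   [1.00],
    "1 copy":   [0.85, 0.50],       # primary degraded to make room for FEC
    "2 copies": [0.70, 0.45, 0.40],
}

def best_scheme(loss_rate):
    return max(candidates, key=lambda n: expected_quality(candidates[n], loss_rate))

print(best_scheme(0.05))   # light loss  -> "no FEC"
print(best_scheme(0.50))   # heavy loss  -> "1 copy"
```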


Journal ArticleDOI
TL;DR: This article describes GlOSS, the Glossary-of-Servers Server, in two versions: bGlOSS, which provides a Boolean query retrieval model, and vGlOSS, which provides a vector-space retrieval model; it also extensively describes the methodology for measuring the retrieval effectiveness of these systems.
Abstract: The dramatic growth of the Internet has created a new problem for users: location of the relevant sources of documents. This article presents a framework for (and experimentally analyzes a solution to) this problem, which we call the text-source discovery problem. Our approach consists of two phases. First, each text source exports its contents to a centralized service. Second, users present queries to the service, which returns an ordered list of promising text sources. This article describes GlOSS, Glossary of Servers Server, with two versions: bGlOSS, which provides a Boolean query retrieval model, and vGlOSS, which provides a vector-space retrieval model. We also present hGlOSS, which provides a decentralized version of the system. We extensively describe the methodology for measuring the retrieval effectiveness of these systems and provide experimental evidence, based on actual data, that all three systems are highly effective in determining promising text sources for a given query.
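
The two-phase idea in miniature: sources export per-term statistics, and the service ranks sources by an estimated goodness for each query. The scoring below is a crude stand-in for the bGlOSS/vGlOSS estimators; all data is made up.

```python
# Rough sketch of text-source discovery: sources export per-term document
# frequencies; the broker ranks sources by a simple estimated "goodness"
# (a simplification, not vGlOSS's exact estimator).
exports = {                         # source -> {term: document frequency}
    "medline":  {"protein": 5000, "xml": 10},
    "acm":      {"protein": 40,   "xml": 2000, "query": 3000},
    "news":     {"protein": 100,  "xml": 150,  "query": 80},
}

def goodness(source, query_terms):
    stats = exports[source]
    return sum(stats.get(t, 0) for t in query_terms)

def rank_sources(query_terms):
    return sorted(exports, key=lambda s: goodness(s, query_terms), reverse=True)

print(rank_sources(["xml", "query"]))   # ['acm', 'news', 'medline']
```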

Journal ArticleDOI
TL;DR: A general tridimensional reconstruction algorithm for range and volumetric images, based on deformable simplex meshes, which can handle surfaces without any restriction on their shape or topology.
Abstract: In this paper, we propose a general tridimensional reconstruction algorithm for range and volumetric images, based on deformable simplex meshes. Simplex meshes are the topological duals of triangulations and have the advantage of permitting smooth deformations in a simple and efficient manner. Our reconstruction algorithm can handle surfaces without any restriction on their shape or topology. The different tasks performed during the reconstruction include the segmentation of given objects in the scene, the extrapolation of missing data, and the control of smoothness, density, and geometric quality of the reconstructed meshes. The reconstruction takes place in two stages. First, the initialization stage creates a simplex mesh in the vicinity of the data model, either manually or using an automatic procedure. Then, after a few iterations, the mesh topology can be modified by creating holes or by increasing its genus. Finally, an iterative refinement algorithm decreases the distance of the mesh from the data while preserving high geometric and topological quality. Several reconstruction examples are provided with quantitative and qualitative results.

Journal ArticleDOI
TL;DR: Results obtained with MPEG-4 test sequences and additional sequences show that the accuracy of object segmentation is substantially improved in the presence of moving cast shadows.
Abstract: To prevent moving shadows being misclassified as moving objects or parts of moving objects, this paper presents an explicit method for detection of moving cast shadows on a dominating scene background. Those shadows are generated by objects moving between a light source and the background. Moving cast shadows cause a frame difference between two succeeding images of a monocular video image sequence. For shadow detection, these frame differences are detected and classified into regions covered and regions uncovered by a moving shadow. The detection and classification assume a planar background and a nonnegligible size and intensity of the light sources. A cast shadow is detected by temporal integration of the covered background regions while subtracting the uncovered background regions. The shadow detection method is integrated into an algorithm for two-dimensional (2-D) shape estimation of moving objects from the informative part of the description of the international standard ISO/MPEG-4. The extended segmentation algorithm first compensates apparent camera motion. Then, a spatially adaptive relaxation scheme estimates a change detection mask for two consecutive images. An object mask is derived from the change detection mask by elimination of changes due to background uncovered by moving objects and by elimination of changes due to background covered or uncovered by moving cast shadows. Results obtained with MPEG-4 test sequences and additional sequences show that the accuracy of object segmentation is substantially improved in the presence of moving cast shadows. Objects and shadows are detected and tracked separately.

Journal ArticleDOI
TL;DR: A survey of some methods, techniques and tools aimed at managing corporate knowledge from a corporate memory designer's perspective, analysing problems and solutions from detection of needs through construction, diffusion, use, evaluation and evolution of the corporate memory.
Abstract: This article, which is an extension of a paper presented at the specialized workshop KAW'98, is a survey of some methods, techniques and tools aimed at managing corporate knowledge from a corporate memory designer's perspective. In particular, it analyses problems and solutions related to the following steps: detection of needs of a corporate memory, construction of the corporate memory, its diffusion (especially using Internet technologies), use, evaluation and evolution.

Proceedings ArticleDOI
31 May 1999
TL;DR: It is found that RED with small buffers does not significantly improve the performance of the network; in particular, the overall throughput is smaller than with tail drop, and the difference in delay is not significant.
Abstract: In this paper we examine the benefits of random early detection (RED) by using a testbed made of two commercially available routers and up to 16 PCs to observe RED performance under a traffic load made of FTP transfers, together with HTTP traffic and non-responsive UDP flows. The main results we found were, first, that RED with small buffers does not improve significantly the performance of the network, in particular the overall throughput is smaller than with tail drop and the difference in delay is not significant. Second, parameter tuning in RED remains an inexact science, but has no big impact on the end-to-end performance. We argue that RED deployment is not straightforward, and we strongly recommend more research with realistic network settings to develop a full quantitative understanding of RED. Nevertheless, RED allows us to control the queue size with large buffers.
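
For reference, the RED drop decision under test, in sketch form: an exponentially weighted moving average of the queue size drives the drop probability between two thresholds. Threshold and weight values are illustrative, and details such as the count-based probability correction are omitted.

```python
# Sketch of the RED drop decision: an EWMA of the queue size drives the drop
# probability between two thresholds. (Count-based correction omitted.)
import random

class Red:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, weight
        self.avg = 0.0

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped."""
        self.avg = (1 - self.w) * self.avg + self.w * queue_len
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

red = Red()
drops = sum(red.on_arrival(queue_len=12) for _ in range(10_000))
print("drop fraction at a steady queue of 12:", drops / 10_000)
```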

Proceedings ArticleDOI
01 Oct 1999
TL;DR: The main originality of the escape analysis is that it determines precisely the effect of assignments, which is necessary to apply it to object-oriented languages with promising results, whereas previous work applied it to functional languages and was very imprecise on assignments.
Abstract: Escape analysis [27, 14, 5] is a static analysis that determines whether the lifetime of data exceeds its static scope. The main originality of our escape analysis is that it determines precisely the effect of assignments, which is necessary to apply it to object-oriented languages with promising results, whereas previous work [27, 14, 5] applied it to functional languages and was very imprecise on assignments. Our implementation analyses the full Java™ language. We have applied our analysis to stack allocation and synchronization elimination. We manage to stack allocate 13% to 95% of data, eliminate more than 20% of synchronizations on most programs (94% and 99% on two examples) and get up to 44% speedup (21% on average). Our detailed experimental study on large programs shows that the improvement comes more from the decrease of garbage collection and allocation times than from improvements in data locality [7], contrary to what happened for ML [5].
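
The property being analyzed, illustrated conceptually (the paper's analysis and its stack-allocation and synchronization-elimination optimizations target Java; Python is used here only to show what "escaping" means):

```python
# Escape analysis asks: does a value's lifetime exceed the scope that created it?

def centroid(points):
    acc = [0.0, 0.0]                 # 'acc' never escapes: a compiler could
    for x, y in points:              # stack-allocate it and skip any
        acc[0] += x                  # synchronization on it
        acc[1] += y
    return (acc[0] / len(points), acc[1] / len(points))

def make_accumulator():
    acc = [0.0, 0.0]                 # 'acc' escapes via the returned closure:
    def add(x, y):                   # its lifetime exceeds make_accumulator's
        acc[0] += x                  # activation, so it must live on the heap
        acc[1] += y
    return add

print(centroid([(0.0, 0.0), (2.0, 2.0)]))
add = make_accumulator()
add(1.0, 2.0)                        # 'acc' is still alive out here
```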

Book ChapterDOI
19 Sep 1999
TL;DR: This paper shows that the "Demons Algorithm" can be considered as an approximation of a second-order gradient descent on the sum of squared intensity differences criterion, and reformulates Gaussian and physical model regularizations as minimization problems.
Abstract: The “Demons Algorithm” is increasingly used for non-rigid registration of 3D medical images. However, while it is fast and usually accurate, the algorithm is based on intuitive ideas about image registration, and it is difficult to predict when it will fail and why. We show in this paper that this algorithm can be considered as an approximation of a second-order gradient descent on the sum of squared intensity differences criterion. We also reformulate Gaussian and physical model regularizations as minimization problems. Experimental results on synthetic and 3D ultrasound images show that this formalization helps identify the weak points of the algorithm and offers new research openings.
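
A 1D sketch of a demons-style iteration: the force is the intensity difference times the fixed image's gradient with the usual normalization, and the displacement field is Gaussian-smoothed each pass. Illustrative only; the paper's analysis concerns the full 3D algorithm and its regularization variants.

```python
# 1D demons-style registration sketch: intensity-difference force with
# Thirion-style normalization, followed by Gaussian regularization.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(0, 1, 200)
fixed = np.exp(-((x - 0.50) / 0.08) ** 2)      # target profile
moving = np.exp(-((x - 0.55) / 0.08) ** 2)     # same profile, shifted

disp = np.zeros_like(x)                         # displacement field (in samples)
idx = np.arange(len(x), dtype=float)
for _ in range(100):
    warped = np.interp(idx + disp, idx, moving)             # apply current field
    diff = warped - fixed
    grad = np.gradient(fixed)
    update = diff * grad / (grad ** 2 + diff ** 2 + 1e-12)  # demons force
    disp = gaussian_filter1d(disp - update, sigma=2.0)      # regularize

print("mean |warped - fixed| after registration:",
      np.abs(np.interp(idx + disp, idx, moving) - fixed).mean())
```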

Journal ArticleDOI
TL;DR: An original approach to partitioning of a video document into shots is described, which exploits image motion information, which is generally more intrinsic to the video structure itself, and other possible extensions, such as mosaicing and mobile zone detection are described.
Abstract: This paper describes an original approach to partitioning of a video document into shots. Instead of an interframe similarity measure which is directly intensity based, we exploit image motion information, which is generally more intrinsic to the video structure itself. The proposed scheme aims at detecting all types of transitions between shots using a single technique and the same parameter set, rather than a set of dedicated methods. The proposed shot change detection method is related to the computation, at each time instant, of the dominant image motion represented by a two-dimensional affine model. More precisely, we analyze the temporal evolution of the size of the support associated to the estimated dominant motion. Besides, the computation of the global motion model supplies by-products, such as qualitative camera motion description, which we describe in this paper, and other possible extensions, such as mosaicing and mobile zone detection. Results on videos of various content types are reported and validate the proposed approach.

Journal ArticleDOI
TL;DR: More than sixteen years after the beginning of a linear theory for certain discrete event systems, in which max-plus algebra and similar algebraic tools play a central role, this article summarizes some of the main achievements in an informal style based on examples.
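
The flavor of the theory in a few lines: in the max-plus semiring, "addition" is max and "multiplication" is ordinary addition, so the timing behavior of certain discrete event systems obeys linear recurrences x(k+1) = A (x) x(k). A small sketch with made-up processing times:

```python
# Max-plus arithmetic in brief: "+" is max, "x" is ordinary addition, and the
# zero element is -inf. Timed event graphs then obey linear recurrences
# x(k+1) = A (x) x(k), where x_i(k) is the k-th firing time of event i.
import numpy as np

NEG_INF = -np.inf

def maxplus_matvec(A, x):
    """y_i = max_j (A_ij + x_j), the max-plus product of A and x."""
    return np.max(A + x[None, :], axis=1)

# Two machines: each needs 3 (resp. 2) time units per part, and machine 2
# must also wait for machine 1's previous part (transfer takes 1 unit).
A = np.array([[3.0, NEG_INF],
              [4.0, 2.0]])
x = np.array([0.0, 0.0])           # first firing times
for k in range(4):
    x = maxplus_matvec(A, x)
    print(f"firing times after cycle {k + 1}: {x}")
```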

Journal ArticleDOI
TL;DR: This paper describes a systematic approach to the enumeration of 'non-crossing' geometric configurations built on vertices of a convex n-gon in the plane, relying on generating functions, symbolic methods, singularity analysis, and singularity perturbation.
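
A concrete instance of such an enumeration: triangulations of a convex polygon, one of the simplest non-crossing configurations, are counted by the Catalan numbers. A quick cross-check of the recursion against the closed form:

```python
# Triangulations of a convex n-gon are counted by Catalan(n - 2): fix one edge,
# choose the triangle containing it, and recurse on the two sub-polygons.
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def triangulations(n):
    """Number of triangulations of a convex n-gon."""
    if n <= 3:
        return 1
    return sum(triangulations(k) * triangulations(n - k + 1) for k in range(2, n))

catalan = lambda m: comb(2 * m, m) // (m + 1)
for n in range(3, 10):
    assert triangulations(n) == catalan(n - 2)
print([triangulations(n) for n in range(3, 10)])   # 1, 2, 5, 14, 42, 132, 429
```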

Journal ArticleDOI
TL;DR: A deflated version of the conjugate gradient algorithm for solving linear systems that can be useful in cases when a small number of eigenvalues of the iteration matrix are very close to the origin.
Abstract: We present a deflated version of the conjugate gradient algorithm for solving linear systems. The new algorithm can be useful in cases when a small number of eigenvalues of the iteration matrix are very close to the origin. It can also be useful when solving linear systems with multiple right-hand sides, since the eigenvalue information gathered from solving one linear system can be recycled for solving the next systems and then updated.
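
A sketch of a deflated CG iteration as commonly formulated: start with the residual orthogonal to a deflation basis W spanning the troublesome near-null eigendirections, and keep every search direction A-conjugate to W via a small k-by-k solve. The notation and test problem are illustrative, not taken from the paper.

```python
# Deflated conjugate gradient sketch: directions stay A-conjugate to a basis W
# of near-null eigendirections, which are handled directly by a small solve.
import numpy as np

def deflated_cg(A, b, W, tol=1e-10, maxiter=500):
    AW = A @ W
    WtAW = W.T @ AW                               # small k x k Galerkin matrix
    x = W @ np.linalg.solve(WtAW, W.T @ b)        # start with W^T r = 0
    r = b - A @ x
    mu = np.linalg.solve(WtAW, AW.T @ r)
    p = r - W @ mu
    rs = r @ r
    for it in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, it + 1
        mu = np.linalg.solve(WtAW, AW.T @ r)      # keep p A-conjugate to W
        p = r + (rs_new / rs) * p - W @ mu
        rs = rs_new
    return x, maxiter

# SPD test matrix with a few eigenvalues very close to the origin.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(200, 200)))
eigs = np.concatenate([[1e-6, 2e-6, 3e-6], np.linspace(1, 10, 197)])
A = Q @ np.diag(eigs) @ Q.T
b = rng.normal(size=200)

W = Q[:, :3]                                      # deflate the tiny eigenpairs
x, iters = deflated_cg(A, b, W)
print("iterations:", iters, " residual:", np.linalg.norm(b - A @ x))
```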

Proceedings ArticleDOI
21 Jun 1999
TL;DR: This implementation demonstrates an interactive application working with both ray tracing and path tracing renderers in situations where they would normally be considered too expensive, using a software-only implementation without the use of 3D graphics hardware.
Abstract: Interactive rendering requires rapid visual feedback. The render cache is a new method for achieving this when using high-quality pixel-oriented renderers such as ray tracing that are usually considered too slow for interactive use. The render cache provides visual feedback at a rate faster than the renderer can generate complete frames, at the cost of producing approximate images during camera and object motion. The method works both by caching previous results and reprojecting them to estimate the current image and by directing the renderer's sampling to more rapidly improve subsequent images. Our implementation demonstrates an interactive application working with both ray tracing and path tracing renderers in situations where they would normally be considered too expensive. Moreover we accomplish this using a software only implementation without the use of 3D graphics hardware.

Journal ArticleDOI
TL;DR: A new combined approach where a genetic algorithm is improved with knowledge about the scheduling problem, in the form of a list heuristic used in the crossover and mutation genetic operations; experiments show that the knowledge-augmented algorithm produces much better results in terms of quality of solutions, although it is slower in terms of execution time.
Abstract: In the multiprocessor scheduling problem, a given program is to be scheduled in a given multiprocessor system such that the program's execution time is minimized. This problem being very hard to solve exactly, many heuristic methods for finding a suboptimal schedule exist. We propose a new combined approach, where a genetic algorithm is improved with the introduction of some knowledge about the scheduling problem represented by the use of a list heuristic in the crossover and mutation genetic operations. This knowledge-augmented genetic approach is empirically compared with a "pure" genetic algorithm and with a "pure" list heuristic, both from the literature. Results of the experiments carried out with synthetic instances of the scheduling problem show that our knowledge-augmented algorithm produces much better results in terms of quality of solutions, although being slower in terms of execution time.
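
A simplified illustration of the hybrid idea: a genetic algorithm over task priority lists whose mutation injects scheduling knowledge by re-sorting a segment with a list heuristic (longest processing time first). Independent tasks on identical processors are assumed here; the paper's problem includes precedence constraints and its operators differ in detail.

```python
# Knowledge-augmented GA sketch for multiprocessor scheduling: chromosomes are
# task priority lists; mutation re-sorts a random segment with a list heuristic.
import random

random.seed(0)
DUR = [random.randint(1, 20) for _ in range(30)]     # task durations
M = 4                                                # processors

def makespan(order):
    """List-schedule tasks in the given priority order."""
    loads = [0] * M
    for t in order:
        loads[loads.index(min(loads))] += DUR[t]
    return max(loads)

def crossover(a, b):
    cut = random.randrange(len(a))
    head = a[:cut]
    return head + [t for t in b if t not in head]

def mutate(order):
    i, j = sorted(random.sample(range(len(order)), 2))
    seg = sorted(order[i:j], key=lambda t: -DUR[t])  # the "knowledge" step
    return order[:i] + seg + order[j:]

pop = [random.sample(range(len(DUR)), len(DUR)) for _ in range(40)]
for gen in range(100):
    pop.sort(key=makespan)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(30)]
print("best makespan:", makespan(min(pop, key=makespan)))
```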

Journal ArticleDOI
01 Jun 1999
TL;DR: A theoretical and experimental analysis of the resulting search space and a novel query optimization algorithm that is designed to perform well under the different conditions that may arise are described.
Abstract: We consider the problem of query optimization in the presence of limitations on access patterns to the data (i.e., when one must provide values for one of the attributes of a relation in order to obtain tuples). We show that in the presence of limited access patterns we must search a space of annotated query plans, where the annotations describe the inputs that must be given to the plan. We describe a theoretical and experimental analysis of the resulting search space and a novel query optimization algorithm that is designed to perform well under the different conditions that may arise. The algorithm searches the set of annotated query plans, pruning invalid and non-viable plans as early as possible in the search space, and it also uses a best-first search strategy in order to produce a first complete plan early in the search. We describe experiments to illustrate the performance of our algorithm.
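
A sketch of the core validity notion: each source declares which attributes must be bound before it can be accessed, and an annotated plan (here, simply a join order) is executable only if every required input is bound by earlier relations. Sources and bindings are made up.

```python
# Planning with limited access patterns: a join order is valid only if every
# relation's required input attributes are bound by earlier relations.
from itertools import permutations

# relation -> (attributes it must receive bound, attributes it produces)
SOURCES = {
    "BooksByAuthor":  ({"author"}, {"title", "author"}),
    "ReviewsByTitle": ({"title"},  {"title", "review"}),
    "AuthorList":     (set(),      {"author"}),
}

def valid_orders(query_bound=frozenset()):
    for order in permutations(SOURCES):
        bound = set(query_bound)
        for rel in order:
            required, produced = SOURCES[rel]
            if not required <= bound:
                break                         # can't call this source yet
            bound |= produced
        else:
            yield order

for plan in valid_orders():
    print(plan)   # only AuthorList -> BooksByAuthor -> ReviewsByTitle survives
```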

Proceedings ArticleDOI
01 Mar 1999
TL;DR: A new optimization heuristic that supports heterogeneous architectures and accurately takes into account inter-processor communications, which are usually neglected but may dramatically reduce multiprocessor performance.
Abstract: This paper presents an enhancement of our "Algorithm Architecture Adequation" (AAA) prototyping methodology which makes it possible to rapidly develop and optimize the implementation of a reactive real-time dataflow algorithm on an embedded heterogeneous multiprocessor architecture, predict its real-time behavior, and automatically generate the corresponding distributed and optimized static executive. It describes a new optimization heuristic able to support heterogeneous architectures and to accurately take into account inter-processor communications, which are usually neglected but may dramatically reduce multiprocessor performance.

Proceedings ArticleDOI
20 Sep 1999
TL;DR: Two direct quasilinear methods for camera pose and calibration from a single image of 4 or 5 known 3D points are described, along with an experimental eigendecomposition-based method that handles both planar and nonplanar cases.
Abstract: We describe two direct quasilinear methods for camera pose (absolute orientation) and calibration from a single image of 4 or 5 known 3D points. They generalize the 6 point 'Direct Linear Transform' method by incorporating partial prior camera knowledge, while still allowing some unknown calibration parameters to be recovered. Only linear algebra is required, the solution is unique in non-degenerate cases, and additional points can be included for improved stability. Both methods fail for coplanar points, but we give an experimental eigendecomposition based one that handles both planar and nonplanar cases. Our methods use recent polynomial solving technology, and we give a brief summary of this. One of our aims was to try to understand the numerical behaviour of modern polynomial solvers on some relatively simple test cases, with a view to other vision applications.
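
For reference, the classical 6-point Direct Linear Transform that these methods generalize: each 3D-2D correspondence contributes two linear equations in the twelve entries of the projection matrix, solved via SVD. Synthetic data; the paper's actual contribution, incorporating partial prior calibration, is not shown.

```python
# Classical 6-point DLT: each correspondence gives two linear equations in the
# 12 entries of the 3x4 projection matrix P, solved as an SVD null vector.
import numpy as np

def dlt(X, uv):
    """Estimate the 3x4 projection matrix from n >= 6 correspondences."""
    rows = []
    for (x, y, z), (u, v) in zip(X, uv):
        p = [x, y, z, 1]
        rows.append([*p, 0, 0, 0, 0, *(-u * np.array(p))])
        rows.append([0, 0, 0, 0, *p, *(-v * np.array(p))])
    _, _, Vt = np.linalg.svd(np.array(rows, float))
    return Vt[-1].reshape(3, 4)           # null vector = P up to scale

rng = np.random.default_rng(0)
P_true = np.hstack([np.eye(3), [[0.1], [-0.2], [4.0]]])   # simple camera
X = rng.uniform(-1, 1, (6, 3))
h = np.c_[X, np.ones(6)] @ P_true.T
uv = h[:, :2] / h[:, 2:]

P = dlt(X, uv)
print("P (rescaled):\n", P / P[-1, -1] * P_true[-1, -1])
```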

Journal ArticleDOI
TL;DR: This paper presents a modified entropy criterion for choosing the number of clusters arising from a mixture model, addressing the fact that the original criterion was not valid for deciding between one cluster and more than one.

Proceedings ArticleDOI
21 Mar 1999
TL;DR: The goal of this paper is to obtain a quantitative description of the service provided by tagging schemes; it describes and solves simple analytic models of two previously proposed schemes, namely the assured service scheme and the premium service scheme.
Abstract: Schemes based on the tagging of packets have been proposed as a low-cost way to augment the single class best effort service model of the current Internet by including some kind of service discrimination. Such schemes have a number of attractive features, however, it is not clear exactly what kind of service they would provide to applications. Yet quantifying such service is very important to understand the benefits and drawbacks of the different tagging schemes and of the mechanisms in each scheme (for example how much RED with input and output (RIO) contributes in the assured scheme), and to tackle key performance and economic issues (e.g. the difference in tariff between different service classes would presumably depend on the difference in performance between the classes). The goal in this paper is to obtain a quantitative description of the service provided by tagging schemes. Specifically, we describe and solve simple analytic models of two previously proposed schemes, namely the assured service scheme and the premium service scheme. We obtain expressions for performance measures that characterize the service provided to tagged packets, the service provided to non-tagged packets, and the fraction of tagged packets that do not get the better service they were supposed to. We use these expressions, as well as simulations and experiments from actual implementations, to illustrate the benefits and shortcomings of the schemes.

Journal ArticleDOI
TL;DR: This article proposes a theoretically justified optimization problem that makes it possible to take into account both requirements of restoration and motion segmentation, together with a suitable numerical scheme based on half-quadratic minimization whose convergence and stability are established.
Abstract: This article deals with the problem of restoring and motion-segmenting noisy image sequences with a static background. Usually, motion segmentation and image restoration are considered separately in image sequence restoration; moreover, motion segmentation is often noise sensitive. In this article, the motion segmentation and the image restoration parts are performed in a coupled way, allowing the motion segmentation part to positively influence the restoration part and vice versa. This is the key of our approach, which makes it possible to deal simultaneously with the problems of restoration and motion segmentation. To this end, we propose a theoretically justified optimization problem that takes both requirements into account. Existence and uniqueness are proved in the space of bounded variations. A suitable numerical scheme based on half-quadratic minimization is then proposed, and its convergence and stability are demonstrated. Experimental results obtained on noisy synthetic data and real images illustrate the capabilities of this original and promising approach.
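
The half-quadratic device behind the numerical scheme, in its generic multiplicative form (the paper's exact coupled functional is not reproduced):

```latex
% Generic half-quadratic rewriting of a robust regularizer \varphi:
\min_{u} \int \varphi(|\nabla u|)\,dx
  \;=\; \min_{u}\,\min_{b} \int \left( b\,|\nabla u|^{2} + \psi(b) \right) dx
```

For fixed b the problem in u is quadratic, and for fixed u the auxiliary field b has a closed form, so the scheme alternates two easy steps; this is the structure whose convergence and stability the article establishes for its particular functional.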

Journal ArticleDOI
TL;DR: Algorithms are presented to determine the constant orientation, total orientation and inclusive orientation workspaces of a Gough-type parallel robot; the results show that, for robots of similar dimensions, the joint layout has a large influence on the workspace volume.
Abstract: We consider in this paper a Gough-type parallel robot whose leg length values are constrained to lie within some fixed ranges and for which there may be mechanical limits on the motion of the passive joints. The purpose of this paper is to present algorithms to determine:
• the constant orientation workspace: all the possible locations of the center of the platform that can be reached with a fixed orientation;
• the total orientation workspace: all the possible locations of the center of the platform that can be reached with any orientation in a set defined by three ranges for the orientation angles (the dextrous workspace is an example of a total orientation workspace, the three ranges being [0, 360] degrees);
• the inclusive orientation workspace: all the possible locations of the center of the platform that can be reached with at least one orientation among a set defined by three ranges for the orientation angles (the maximal or reachable workspace is an example of an inclusive orientation workspace, the thre...
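
A sampling sketch of the constant orientation workspace test (the paper's algorithms compute workspace boundaries exactly rather than by sampling): a platform position is reachable at a fixed orientation if every leg length falls within its allowed range. All geometry is made up.

```python
# Constant orientation workspace check for a Gough platform by sampling:
# a position is reachable if all six leg lengths lie in their allowed range.
# Anchor geometry and ranges are illustrative.
import numpy as np

ang = np.linspace(0, 2 * np.pi, 7)[:6]
BASE = np.c_[2.0 * np.cos(ang), 2.0 * np.sin(ang), np.zeros(6)]   # base joints
PLAT = np.c_[1.0 * np.cos(ang), 1.0 * np.sin(ang), np.zeros(6)]   # platform joints
RHO_MIN, RHO_MAX = 2.0, 3.5                                       # leg length range

def in_workspace(center, R=np.eye(3)):
    """Fixed orientation R: check all six leg lengths against their ranges."""
    legs = np.linalg.norm(center + PLAT @ R.T - BASE, axis=1)
    return bool(np.all((RHO_MIN <= legs) & (legs <= RHO_MAX)))

# Sample the z-axis to bracket the reachable heights at the reference orientation.
for z in np.linspace(0.0, 4.0, 9):
    print(f"z = {z:.1f}: {'in' if in_workspace(np.array([0, 0, z])) else 'out'}")
```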