
Showing papers by "French Institute for Research in Computer Science and Automation published in 1981"


Journal ArticleDOI
TL;DR: The main result of the paper states that, given a complete set U of view updates, U has a translator if and only if U is translatable under constant complement.
Abstract: A database view is a portion of the data structured in a way suitable to a specific application. Updates on views must be translated into updates on the underlying database. This paper studies the translation process in the relational model. The procedure is as follows: first, a “complete” set of updates is defined such that together with every update the set contains a “return” update, that is, one that brings the view back to the original state; second, given two updates in the set, their composition is also in the set. To translate a complete set, we define a mapping called a “translator,” that associates with each view update a unique database update called a “translation.” The constraint on a translation is to take the database to a state mapping onto the updated view. The constraint on the translator is to be a morphism. We propose a method for defining translators. Together with the user-defined view, we define a “complementary” view such that the database could be computed from the view and its complement. We show that a view can have many different complements and that the choice of a complement determines an update policy. Thus, we fix a view complement and we define the translation of a given view update in such a way that the complement remains invariant (“translation under constant complement”). The main result of the paper states that, given a complete set U of view updates, U has a translator if and only if U is translatable under constant complement.
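The constant-complement idea can be sketched on a toy relational instance. Everything below (the relation, the particular view, the helper names) is invented for illustration; the paper works with general view mappings, not this specific one.

```python
# Toy database: a set of (employee, dept) rows.  The view shows the
# "sales" rows; the complement holds every row the view hides.  A view
# update is translated by uniting the new view state with the
# untouched complement, so the complement stays constant.

def view(db):
    return {(e, d) for (e, d) in db if d == "sales"}

def complement(db):
    return {(e, d) for (e, d) in db if d != "sales"}

def translate(db, new_view):
    """Translation under constant complement: rebuild the database
    from the updated view and the invariant complement."""
    return new_view | complement(db)

db = {("ann", "sales"), ("bob", "sales"), ("eve", "hr")}
v2 = view(db) - {("bob", "sales")}        # view update: delete bob
db2 = translate(db, v2)
assert view(db2) == v2                    # the view reflects the update
assert complement(db2) == complement(db)  # the complement is invariant
```

Fixing a different complement would yield a different translation of the same view update, which is exactly the degree of freedom (the "update policy") the paper studies.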

621 citations


Journal ArticleDOI
TL;DR: A projected gradient algorithm is developed to minimize the criterion and it is shown that the minimization procedure can be implemented in a highly parallel manner.
Abstract: We approach the problem of labeling a set of objects from a quantitative standpoint. We define a world model in terms of transition probabilities and propose a definition of a class of global criteria that combine both ambiguity and consistency. A projected gradient algorithm is developed to minimize the criterion. We show that the minimization procedure can be implemented in a highly parallel manner. Results are shown on several examples and comparisons are made with relaxation labeling techniques.
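As a hedged sketch of the numerical machinery only (the paper's global criterion is not reproduced here): label probability vectors live on the probability simplex, so each projected-gradient step moves along the negative gradient and then projects back onto the simplex. The projection below is the standard Euclidean simplex projection, written without library dependencies.

```python
def project_simplex(v):
    """Euclidean projection of v onto {p : p_i >= 0, sum_i p_i = 1}."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        if uj + (1.0 - css) / j > 0:   # condition holds on a prefix;
            theta = (1.0 - css) / j    # keep the last qualifying shift
    return [max(x + theta, 0.0) for x in v]

def projected_gradient_step(p, grad, step=0.1):
    """One step: gradient move, then projection back to the simplex.
    The gradient is an input here; in the paper it comes from the
    ambiguity/consistency criterion."""
    return project_simplex([pi - step * gi for pi, gi in zip(p, grad)])

q = projected_gradient_step([0.5, 0.3, 0.2], [1.0, -1.0, 0.0])
assert abs(sum(q) - 1.0) < 1e-12 and all(x >= 0 for x in q)
```

Because each object's label vector is projected independently, the per-object steps can run in parallel, which matches the highly parallel implementation the abstract mentions.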

198 citations


Journal ArticleDOI
TL;DR: A complete description of the Knuth-Bendix completion algorithm is given, showing that it defines a semidecision algorithm for the validity problem in the equational theories for which it applies, yielding a decision procedure whenever the algorithm terminates.

196 citations



Journal ArticleDOI
TL;DR: It is shown that an approximate solution can be calculated by a policy improvement algorithm involving computations relative to an 'aggregated' problem together with a family of 'decentralized' problems.

144 citations


Book ChapterDOI
01 Jan 1981
TL;DR: The paper explains the recently introduced method of conjunctive conceptual clustering in terms of dynamic clustering and shows by an example its advantages over methods of numerical taxonomy from the viewpoint of cluster interpretation.
Abstract: Clustering is described as a multistep process in which some of the steps are performed by a data analyst and some by a computer program. At present, those performed by a computer program do not produce any description of the generated clusters. The recently introduced method of conjunctive conceptual clustering overcomes this problem by requiring that each cluster has a conjunctive description built from relations on object attributes and closely “fitting” the cluster. The paper explains the above clustering method in terms of dynamic clustering and shows by an example its advantages over methods of numerical taxonomy from the viewpoint of cluster interpretation.
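A hypothetical miniature of what a conjunctive cluster description looks like. The attributes and the one-condition-per-attribute form are invented for illustration; the conceptual clustering method builds descriptions from a richer space of relations on attributes.

```python
# Each cluster gets a description read as a conjunction:
# "color in S1 AND size in S2 ...", tightly fitting the cluster.

def conjunctive_description(cluster):
    """cluster: list of dicts attribute -> value.
    Returns {attribute: set of admissible values}."""
    desc = {}
    for obj in cluster:
        for attr, val in obj.items():
            desc.setdefault(attr, set()).add(val)
    return desc

def covers(desc, obj):
    """True when obj satisfies every conjunct of the description."""
    return all(obj[a] in vals for a, vals in desc.items())

cluster = [{"color": "red", "size": "small"},
           {"color": "red", "size": "medium"}]
d = conjunctive_description(cluster)
assert covers(d, {"color": "red", "size": "small"})
assert not covers(d, {"color": "blue", "size": "small"})
```

The point of the abstract is visible even in this toy: unlike a purely numerical cluster, `d` is itself an interpretable statement ("color is red and size is small or medium") about the cluster's members.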

144 citations


Journal ArticleDOI
TL;DR: In this article, the authors formulate the assumptions under which the Rice J-integral and the critical energy release rate, which occur in brittle fracture mechanics, are equivalent.
Abstract: We formulate the assumptions under which the Rice J-integral and the critical energy release rate, which occur in brittle fracture mechanics, are equivalent.

140 citations


Journal ArticleDOI
TL;DR: A method used to compute initial likelihoods and compatibilities is described; it was derived from an earlier symbolic matching procedure but modified to provide the data needed for application of the labeling method.
Abstract: This paper discusses the application of stochastic labeling to a general symbolic image description problem. A method used to compute initial likelihoods and compatibilities is described. It was derived from an earlier symbolic matching procedure, but was modified to provide the data needed for application of the labeling method. This labeling procedure differs from simpler ones, in that it minimizes a global criterion at each iteration. This technique is compared to other matching methods, and results on two scenes are presented.

88 citations



Book ChapterDOI
01 Jan 1981

79 citations


Journal ArticleDOI
TL;DR: An algorithm is described which builds a “Peano scanning,” i.e., the reciprocal mapping, from [0, 1]^n to [0, 1], of the well-known “Peano curve,” and gives a one-dimensional image of a set of points in [0, 1]^n.
Abstract: This paper describes an algorithm which builds a “Peano scanning,” i.e., the reciprocal mapping, from [0, 1]^n to [0, 1], of the well-known “Peano curve.” This Peano scanning is applied to a set of points in [0, 1]^n and gives a one-dimensional image of it. Several applications of this technique have already been developed and are presented in this paper.
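The abstract does not give the construction, so the sketch below substitutes a simpler space-filling scan, a Morton (Z-order) code, for the true Peano scanning. It only illustrates the interface: a point of [0, 1)^2 in, a scalar of [0, 1) out, with nearby points tending to receive nearby scan values.

```python
# Stand-in for a Peano scanning: map (x, y) in [0,1)^2 to a scalar in
# [0,1) by interleaving the bits of the two coordinates.

def morton_scan(x, y, bits=16):
    xi = int(x * (1 << bits))          # quantize each coordinate
    yi = int(y * (1 << bits))
    z = 0
    for b in range(bits):              # interleave: x -> odd bits,
        z |= ((xi >> b) & 1) << (2 * b + 1)   # y -> even bits
        z |= ((yi >> b) & 1) << (2 * b)
    return z / float(1 << (2 * bits))  # rescale back into [0, 1)

k0 = morton_scan(0.10, 0.10)
k1 = morton_scan(0.12, 0.11)
k2 = morton_scan(0.90, 0.90)
assert 0.0 <= k0 < 1.0
# the two nearby points get closer scan values than the distant one
assert abs(k0 - k1) < abs(k0 - k2)
```

Sorting points by such a scan value is the kind of one-dimensional image the abstract refers to; the true Peano scanning has better locality properties than this Z-order stand-in.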

Book ChapterDOI
01 Jan 1981
TL;DR: The main basic choices which are preliminary to any clustering are presented and the dynamic clustering method which gives a solution to a family of optimization problems related to those choices is presented.
Abstract: We present first the main basic choices which are preliminary to any clustering and then the dynamic clustering method which gives a solution to a family of optimization problems related to those choices. We show then how these choices interfere in pattern recognition using three approaches: the syntactic approach, the logical approach and the numerical approach. For each approach we present a practical application.

Journal ArticleDOI
TL;DR: In order to increase storage utilization, this work introduces two schemes based on a similar idea and analyzes the performance of the second scheme, which uses an index of much smaller size.
Abstract: New file organizations based on hashing and suitable for data whose volume may vary rapidly recently appeared in the literature. In the three schemes which have been independently proposed, rehashing is avoided, storage space is dynamically adjusted to the number of records actually stored, and there are no overflow records. Two of these techniques employ an index to the data file. Retrieval is fast, but storage utilization is low.In order to increase storage utilization, we introduce two schemes based on a similar idea and analyze the performance of the second scheme. Both techniques use an index of much smaller size. In both schemes, overflow records are accepted. The price which has to be paid for the improvement in storage utilization is a slight access cost degradation.
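The two schemes themselves are not spelled out in the abstract; the sketch below is a generic dynamic-hashing toy in the same spirit, invented for illustration: a small directory, buckets that accept overflow records, and a directory doubling deferred until overflow grows, trading a little access cost for storage utilization.

```python
class DynamicHashFile:
    """Toy dynamic hashing with overflow records and a small index."""

    def __init__(self, capacity=2):
        self.capacity = capacity       # primary bucket size
        self.bits = 1                  # directory uses this many hash bits
        self.dir = {0: [], 1: []}      # small index: prefix -> bucket

    def _bucket(self, key):
        return hash(key) & ((1 << self.bits) - 1)

    def insert(self, key):
        b = self._bucket(key)
        self.dir[b].append(key)        # records past capacity = overflow
        # grow only when the overflow chain gets long, not on the
        # first overflow record: this is what raises utilization
        if len(self.dir[b]) > 2 * self.capacity:
            self._double()

    def _double(self):
        self.bits += 1
        old = self.dir
        self.dir = {i: [] for i in range(1 << self.bits)}
        for bucket in old.values():
            for key in bucket:
                self.dir[self._bucket(key)].append(key)

    def lookup(self, key):
        return key in self.dir[self._bucket(key)]

f = DynamicHashFile()
for k in range(20):
    f.insert(k)
assert all(f.lookup(k) for k in range(20))
assert not f.lookup(99)
```

The deferred split is the access-cost/utilization trade the abstract describes: a lookup may scan an overflow chain, but buckets stay fuller between splits.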

Journal ArticleDOI
TL;DR: In this article, the authors give a simple approach for obtaining recursive-in-time identification procedures for both AR and ARMA processes, investigating furthermore the connections between the so-obtained algorithms and the classical ones (least squares procedures).
Abstract: Fast recursive-in-time identification procedures for both AR and ARMA processes (e.g., Chandrasekhar, square root algorithms,... ) have been available for a few years. These algorithms realize the desired transfer functions in the classical polynomial or rational form. On the other hand, the synthesis of polynomial and rational transfer functions in lattice and ladder form has fostered great interest in network theory by virtue of its pleasant properties. This type of synthesis is strongly related to the theory of orthogonal polynomials on the unit circle. An identification procedure with the realization of the desired whitening filter in lattice form was available for AR processes. We give here a simple approach for obtaining such algorithms, investigating furthermore the connections between the so obtained algorithms and the classical ones (least squares procedure). In the same way, we obtain identification procedures with realization of the desired filter in lattice and ladder form for ARMA processes, together with the connection with the classical extended least squares procedure. The method is based upon a fairly general Levinson orthogonalization lemma in Hilbert space, involving innovation techniques. We extend the method to various other estimation problems. The algorithms we obtain are fast (even somewhat faster than the previous fast ones) and seem to be well conditioned.
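As a hedged pointer to the lattice connection (this is the classical Levinson–Durbin recursion over a given autocorrelation sequence, i.e. the "classical" side of the comparison, not the paper's fast recursive-in-time algorithms): it produces the AR coefficients together with the reflection coefficients that parametrize the lattice form of the whitening filter.

```python
def levinson(r, p):
    """Levinson-Durbin recursion.
    r: autocorrelations r[0..p].  Returns (ar_coeffs, reflection_coeffs,
    prediction_error), where ar_coeffs[0] == 1."""
    a = [1.0]
    e = r[0]
    ks = []                            # lattice (reflection) coefficients
    for m in range(1, p + 1):
        acc = sum(a[i] * r[m - i] for i in range(m))
        k = -acc / e                   # reflection coefficient of order m
        ks.append(k)
        # order update: a_m(i) = a_{m-1}(i) + k * a_{m-1}(m - i)
        a = [a[i] + k * (a[m - i] if m - i < len(a) else 0.0)
             for i in range(m)] + [k]
        e *= (1.0 - k * k)             # prediction-error update
    return a, ks, e

# AR(1)-like autocorrelations: the order-2 reflection coefficient is 0
a, ks, e = levinson([1.0, 0.5, 0.25], 2)
assert abs(ks[0] + 0.5) < 1e-12 and abs(ks[1]) < 1e-12
```

The reflection coefficients `ks` are exactly the multipliers of the lattice sections, which is the polynomial/rational-versus-lattice correspondence the abstract alludes to via orthogonal polynomials on the unit circle.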

Journal ArticleDOI
01 May 1981
TL;DR: A survey of various stochastic texture models that have been developed is presented and examples are given of the application of these models to texture synthesis and texture analysis.
Abstract: Stochastic models for the generation of two-dimensional texture fields have been used in studies of human visual texture perception. Such models also find application in image processing for the development of techniques for synthesizing and analyzing texture. This paper presents a survey of various stochastic texture models that have been developed. Results of visual texture discrimination experiments are also provided. Finally, examples are given of the application of these models to texture synthesis and texture analysis.

Journal ArticleDOI
TL;DR: The results show that inexperienced (naive) subjects have different requirements than do more experienced subjects, and it appears that computer-oriented words used as commands are better than more usual words.
Abstract: In this experiment, two groups of subjects having different levels of experience with computers were tested to compare the learning and recall of computer command definitions that differed in context and redundancy and to study the effect of context change. The results show that inexperienced (naive) subjects have different requirements than do more experienced subjects. The experienced group recalls more meanings than the inexperienced group but is more negatively influenced by the number of competing words (that is, different words used to label the same computer function) than by contextual connotation. For the inexperienced subjects, particularly, it appears that computer-oriented words used as commands are better than more usual words. Besides, inexperienced subjects are more negatively influenced by context changes than are experienced subjects.

Patent
27 Apr 1981
TL;DR: In this paper, an exploration device for the surface of a body uses a laser pencil beam which successively illuminates quasi-pinpoint regions on the surface, and at least two distinct view-taking devices are provided for forming respective images of the illuminated quasi-pinpoint region on discrete-element photosensitive receivers, each element energized by one of said images supplying an electrical indication for use by a computer to determine the position of the quasi-pinpoint region.
Abstract: An exploration device for the surface of a body uses a laser pencil beam which successively illuminates quasi-pinpoint regions on the surface of the body. At least two distinct view-taking devices are provided for forming respective images of the illuminated quasi-pinpoint region on discrete-element photosensitive receivers, each element energized by one of said images supplying an electrical indication for use by a computer to determine the position of the quasi-pinpoint region. The receivers are advantageously linear in shape, for example photodiode bars. Three view-taking stations, each equipped with a respective one of these photosensitive bars, may be used, each supplying a coordinate of the image position to the computer. In order to allow using receivers comprising faulty photosensitive elements, for each view-taking station at least two identical images are formed on photosensitive receivers whose faulty elements are not coincident.

Book ChapterDOI
01 Jan 1981
TL;DR: A technique is developed and used to derive lower bounds on the area required by a VLSI circuit by taking into account the amount of information that has to be memorized in the course of the computation.
Abstract: A technique is developed and used to derive lower bounds on the area required by a VLSI circuit by taking into account the amount of information that has to be memorized in the course of the computation. Simple arguments show, in particular, that any circuit performing operations such as cyclic shift and binary multiplication requires an area at least proportional to its output size. By extending the technique, it is also possible to obtain general tradeoffs between the area, the time, and the period (a measure of the pipeline rate) of a circuit performing operations like binary addition. The existence of VLSI designs for these operations shows that all the lower bounds are optimal up to some constant factor.

Proceedings ArticleDOI
29 Apr 1981
TL;DR: It is shown that, knowing the expected size of all projections of each relation in the database, one can compute the size of each query expressed in relational algebra and give the results for each operator of this language.
Abstract: We present a probabilistic model for evaluating the size of relations derived from given relations through relational algebra operators. We define tools to estimate the derived relations' size and we state the assumptions under which we perform such an evaluation. The particular class of database schemata in which we evaluate the derived relation size is characterized by properties such as independence between two relations having union-compatible domains or independence between distinct tuples in a relation. We show that, knowing the expected size of all projections of each relation in the database, we can compute the size of each query expressed in relational algebra and we give the results for each operator of this language (selection, projection, union, intersection, θ-join).
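The paper develops its own model; the sketch below only shows the flavor of per-operator size estimates, using textbook uniformity-and-independence formulas that are my assumptions, not necessarily the paper's exact expressions.

```python
def sel_size(n_r, n_distinct):
    """Equality selection under uniformity: |R| / V(R, A)."""
    return n_r / n_distinct

def join_size(n_r, n_s, v_r, v_s):
    """Equi-join on A under independence: |R||S| / max(V(R,A), V(S,A))."""
    return n_r * n_s / max(v_r, v_s)

def union_size(n_r, n_s, domain):
    """Union of two independent sets drawn from a common domain of the
    given size: inclusion-exclusion on the expected overlap."""
    return n_r + n_s - n_r * n_s / domain

assert sel_size(1000, 10) == 100.0
assert join_size(1000, 500, 100, 50) == 5000.0
assert union_size(10, 10, 100) == 19.0
```

Composing such per-operator estimates bottom-up over a relational-algebra expression is what lets one compute an expected size for any query from base-relation statistics, as the TL;DR states.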

Journal ArticleDOI
TL;DR: An experimental system designed as part of INRIA's Project Sirius, Delta implements a distributed executive for real-time transactional systems.
Abstract: An experimental system designed as part of INRIA's Project Sirius, Delta implements a distributed executive for real-time transactional systems.

Proceedings Article
24 Aug 1981
TL;DR: A complete solution to the problem of picking randomly oriented workpieces out of a bin is presented, including a description of the 3-D sensor and of the algorithm involved.
Abstract: A complete solution to the problem of picking randomly oriented workpieces out of a bin is presented. The geometry of the workpieces is arbitrary; the only parameters to be known are those of the grasper. The paper includes a description of the 3-D sensor and of the algorithm involved.


Journal ArticleDOI
TL;DR: In this article, the authors considered the numerical solution of finite incompressible elasticity problems for Mooney-Rivlin materials in n-dimensional space, n = 2, 3, and introduced a new class of piecewise reduced quadratic, conforming, simplicial finite elements that eliminates the above inconvenience with almost no extra implementation cost.

Journal ArticleDOI
TL;DR: In this paper, the authors consider a stochastic control problem for a random evolution, study the Bellman equation of the problem, and prove the existence of an optimal stochastic control which is Markovian.
Abstract: We consider a stochastic control problem for a random evolution. We study the Bellman equation of the problem and we prove the existence of an optimal stochastic control which is Markovian. This problem enables us to approximate the general problem of the optimal control of solutions of stochastic differential equations.


Book ChapterDOI
05 Mar 1981
TL;DR: The aim of the paper is twofold: to present the operational semantics of Algebraic Abstract Data Types (AAT) in terms of rewriting systems, and to present their programming in PROLOG.
Abstract: The aim of the paper is twofold: to present the operational semantics of Algebraic Abstract Data Types (AAT) in terms of rewriting systems, and to present their programming in PROLOG.

Journal ArticleDOI
TL;DR: In this article, the authors used the theory of homogenization to derive an equation for the mean flow of a turbulent flow with a random turbulent eddy viscosity tensor.


Journal ArticleDOI
TL;DR: This paper proposes a reference model for the architecture of integrated office systems, and the set of protocols that rule the interworking between the different entities (logical and physical), inspired by the Open System Architecture Reference Model developed in ISO.

Journal ArticleDOI
TL;DR: This work lays the basis for a mixed language system where static checking is sufficient to ensure type correctness of the mixed language program, and “Abstract” types, such as defined in CLU, are finally shown to be a particular class of “foreign” types.
Abstract: As a support for writing software, a comprehensive set of problem oriented languages appears preferable to any so-called universal language, as soon as static checking is sufficient to ensure type correctness of the mixed language program. We lay the basis for a mixed language system where this requirement is fulfilled. The general outline of the system is first sketched. Detailed consideration is then given to our basic constructs for establishing communication between languages, namely “standard” types and “foreign” types. “Abstract” types, such as defined in CLU, are finally shown to be a particular class of “foreign” types.