
Showing papers on "Completeness (order theory)" published in 2004


Journal ArticleDOI
01 Mar 2004
TL;DR: This work presents a new approach that identifies the location of a type error as a set of program points (a slice) all of which are necessary for the type error.
Abstract: Previous methods have generally identified the location of a type error as a particular program point or the program subtree rooted at that point. We present a new approach that identifies the location of a type error as a set of program points (a slice) all of which are necessary for the type error. We identify the criteria of completeness and minimality for type error slices. We discuss the advantages of complete and minimal type error slices over previous methods of presenting type errors. We present and prove the correctness of algorithms for finding complete and minimal type error slices for implicitly typed higher-order languages like Standard ML.

112 citations
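
A way to make the completeness/minimality idea concrete (this is an illustration, not the paper's algorithm): a type error can be modelled as an unsatisfiable set of type-equality constraints, and a minimal slice then corresponds to an unsatisfiable subset that cannot be shrunk further. The Python sketch below uses made-up constraints and a simple unifier without an occurs check.

def satisfiable(constraints):
    """First-order unification over types; type variables start with a tick."""
    subst = {}

    def resolve(t):
        while isinstance(t, str) and t.startswith("'") and t in subst:
            t = subst[t]
        return t

    def unify(a, b):
        a, b = resolve(a), resolve(b)
        if a == b:
            return True
        if isinstance(a, str) and a.startswith("'"):
            subst[a] = b
            return True
        if isinstance(b, str) and b.startswith("'"):
            subst[b] = a
            return True
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b) and a[0] == b[0]:
            return all(unify(x, y) for x, y in zip(a[1:], b[1:]))
        return False

    return all(unify(a, b) for a, b in constraints)

def minimal_unsat_core(constraints):
    """Greedily drop constraints whose removal keeps the set unsatisfiable:
    what remains is a 'minimal slice' in the sense that every remaining
    constraint (program point) is necessary for the error."""
    core = list(constraints)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not satisfiable(trial):
            core = trial            # constraint i is not needed for the error
        else:
            i += 1                  # constraint i is necessary: keep it
    return core

# Roughly "fun f x = if x then x + 1 else 0": x is used both as bool and as int.
constraints = [("'x", "bool"), ("'x", "int"), ("'result", "int")]
print(minimal_unsat_core(constraints))    # [("'x", 'bool'), ("'x", 'int')]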


Journal ArticleDOI
Hayoung Lee
28 Jun 2004
TL;DR: In this article, the authors studied locally spatially homogeneous solutions of the Einstein-Vlasov system with a positive cosmological constant; the global existence of solutions of this system and their causal geodesic completeness were shown.
Abstract: We study locally spatially homogeneous solutions of the Einstein-Vlasov system with a positive cosmological constant. First the global existence of solutions of this system and their causal geodesic completeness are shown. Then the asymptotic behaviour of solutions towards the future is investigated in various respects.

70 citations


Posted Content
TL;DR: An axiomatic characterization of two rules for comparing alternative sets of objects on the basis of the diversity that they offer is provided, along with some connections to the broader issue of measuring freedom of choice.
Abstract: This paper provides an axiomatic characterization of two rules for comparing alternative sets of objects on the basis of the diversity that they offer. The framework considered assumes a finite universe of objects and an a priori given ordinal quaternary relation that compares alternative pairs of objects on the basis of their ordinal dissimilarity. Very few properties of this quaternary relation are assumed (besides completeness, transitivity and a very natural form of symmetry). The two rules that we characterize are the maxi-max criterion and the lexi-max criterion. The maxi-max criterion considers that a set is more diverse than another if and only if the two objects that are the most dissimilar in the former are weakly as dissimilar as the two most dissimilar objects in the latter. The lexi-max criterion is defined as usual as the lexicographic extension of the maxi-max criterion. Some connections with the broader issue of measuring freedom of choice are also provided.

68 citations
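
An executable reading of the two criteria (illustrative only: the paper works with a purely ordinal quaternary relation, which is stood in for here by numeric dissimilarity scores, and the object names are made up):

from itertools import combinations

# Hypothetical dissimilarity scores standing in for the ordinal quaternary relation.
dissimilarity = {
    frozenset({"a", "b"}): 3, frozenset({"a", "c"}): 1, frozenset({"a", "d"}): 2,
    frozenset({"b", "c"}): 2, frozenset({"b", "d"}): 2, frozenset({"c", "d"}): 1,
}

def pair_scores(s):
    """All pairwise dissimilarities within s, from most to least dissimilar."""
    return sorted((dissimilarity[frozenset(p)] for p in combinations(s, 2)), reverse=True)

def maximax_weakly_more_diverse(s, t):
    """s offers at least as much diversity as t: compare the most dissimilar pairs."""
    return pair_scores(s)[0] >= pair_scores(t)[0]

def leximax_weakly_more_diverse(s, t):
    """Lexicographic refinement of maxi-max over the sorted pair scores."""
    return pair_scores(s) >= pair_scores(t)

s, t = {"a", "b", "c"}, {"a", "b", "d"}            # pair scores [3, 2, 1] vs [3, 2, 2]
print(maximax_weakly_more_diverse(s, t), maximax_weakly_more_diverse(t, s))   # True True
print(leximax_weakly_more_diverse(s, t), leximax_weakly_more_diverse(t, s))   # False True

The maxi-max tie between s and t is broken by lexi-max, matching its role as the lexicographic extension.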


Journal ArticleDOI
TL;DR: In this paper, the spectral and scattering theory is investigated for a generalization, to scattering metrics on two-dimensional compact manifolds with boundary, of the class of smooth potentials on R^2 which are homogeneous of degree zero near infinity.

66 citations


Proceedings Article
01 Jan 2004
TL;DR: A method originating from Formal Concept Analysis is proposed which uses empirical data to systematically generate hypothetical axioms about the domain, which are presented to an ontology engineer for decision.
Abstract: Designing ontologies and specifying axioms of the described domains is an expensive and error-prone task. Thus, we propose a method originating from Formal Concept Analysis which uses empirical data to systematically generate hypothetical axioms about the domain, which are presented to an ontology engineer for decision. In this paper, we focus on axioms that can be expressed as entailment statements in the description logic FLE. The proposed technique is incremental; therefore, in every new step we have to reuse the axiomatic information acquired so far. We present a sound and complete deduction calculus for FLE entailment statements. We give a detailed description of this multistep algorithm, including a technique called empirical attribute reduction, and demonstrate the proposed technique using an example from mathematics. We give a completeness result on the explored information and address the question of algorithm termination. Finally, we discuss possible applications of our method.

64 citations
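
A toy version of the hypothesis-generation step (not the paper's FLE-based multistep algorithm, and without the deduction calculus): from a small object-attribute table, propose the single-premise implications that hold in all observed data, to be accepted or refuted with a counterexample by an ontology engineer. The table below is hypothetical.

# Hypothetical empirical data: objects and the attributes observed for them.
context = {
    "sparrow": {"bird", "flies"},
    "penguin": {"bird"},
    "bee":     {"insect", "flies"},
}
attributes = set().union(*context.values())

def holds(premise, conclusion):
    """premise -> conclusion holds in the data if every object having all
    premise attributes also has the conclusion attribute."""
    return all(conclusion in attrs for attrs in context.values() if premise <= attrs)

hypotheses = [(m, n) for m in attributes for n in attributes
              if m != n and holds({m}, n)]
for m, n in hypotheses:
    print(f"hypothetical axiom for the engineer: everything with '{m}' also has '{n}'")
# -> with this data, only the implication insect -> flies is proposed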


Proceedings ArticleDOI
01 Jan 2004
TL;DR: A definition of completeness of a UML model is proposed, together with a set of rules to assess model completeness, and industrial case studies are reported to assess the level of completeness in practice.
Abstract: Delivering high quality software in an economic way requires advanced control over the software development process and the product in all stages of its life-cycle. The use of metrics as a means of control and improvement plays an important role in software engineering. Interviews with industrial software engineers identified incompleteness of UML designs as a potential problem for subsequent stages of development. In this paper we propose a definition of completeness of a UML model and present a set of rules to assess model completeness. We report results from industrial case studies to assess the level of completeness in practice. The number of completeness and consistency rule violations was very high. Different case studies showed large variations in conformance to and violations of rules.

57 citations
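
The paper's concrete rule set is not reproduced in this record, but a completeness rule of the kind described can be made concrete as a small check over a design model; the rule, class names and messages below are all hypothetical.

# Hypothetical rule: every message received in a sequence diagram must
# correspond to an operation declared in the receiver's class.
class_operations = {
    "Order":    {"addItem", "total"},
    "Customer": {"register"},
}
sequence_messages = [            # (receiver class, message name)
    ("Order", "addItem"),
    ("Order", "cancel"),         # not declared anywhere: an incompleteness
    ("Customer", "register"),
]

violations = [(cls, msg) for cls, msg in sequence_messages
              if msg not in class_operations.get(cls, set())]
print("completeness violations:", violations)    # [('Order', 'cancel')]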


Journal ArticleDOI
TL;DR: It is shown that the quasi-completion of Gentzen structures is a generalization of the MacNeille completion, and the finite model property is obtained for many cases by modifying the completeness proof.
Abstract: We will give here a purely algebraic proof of the cut elimination theorem for various sequent systems. Our basic idea is to introduce mathematical structures, called Gentzen structures, for a given sequent system without cut, and then to show the completeness of the sequent system without cut with respect to the class of algebras for the sequent system with cut, by using the quasi-completion of these Gentzen structures. It is shown that the quasi-completion is a generalization of the MacNeille completion. Moreover, the finite model property is obtained for many cases, by modifying our completeness proof. This is an algebraic presentation of the proof of the finite model property discussed by Lafont [12] and Okada-Terui [17].

50 citations
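
For reference, the MacNeille completion that the quasi-completion generalizes is the standard construction (textbook material, not a statement specific to this paper). For a poset \(P\) and \(X \subseteq P\), write \(X^{u}\) and \(X^{\ell}\) for the sets of upper and lower bounds of \(X\); then

\[
\mathrm{DM}(P) \;=\; \{\, X \subseteq P \;:\; (X^{u})^{\ell} = X \,\},
\]

ordered by inclusion, is a complete lattice, and \(p \mapsto (\{p\}^{u})^{\ell} = {\downarrow} p\) embeds \(P\) into it while preserving all existing meets and joins.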



Book ChapterDOI
29 Mar 2004
TL;DR: In this article, it is shown that the problem of minimally refining an abstract model checking in order to get strong preservation can be formulated as a complete domain refinement in abstract interpretation, which always admits a fixpoint solution.
Abstract: Many algorithms have been proposed to minimally refine abstract transition systems in order to get strong preservation relative to a given temporal specification language. These algorithms compute a state equivalence, namely they work on abstractions which are partitions of system states. This is restrictive because, in a generic abstract interpretation-based view, state partitions are just one particular type of abstraction, and therefore it could well happen that the refined partition constructed by the algorithm is not the optimal generic abstraction. On the other hand, it has already been noted that the well-known concept of complete abstract interpretation is related to strong preservation of abstract model checking. This paper establishes a precise correspondence between complete abstract interpretation and strongly preserving abstract model checking, by showing that the problem of minimally refining an abstract model checking in order to get strong preservation can be formulated as a complete domain refinement in abstract interpretation, which always admits a fixpoint solution. As a consequence of these results, we show that some well-known behavioural equivalences used in process algebra, like simulation and bisimulation, can be elegantly characterized in pure abstract interpretation as completeness properties.

44 citations
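
For orientation, the completeness notion referred to is the standard one from abstract interpretation, recalled here in its usual backward form (generic notation, not the paper's): given a Galois connection \((\alpha, \gamma)\) between the concrete and abstract domains, a concrete semantic function \(f\) and its abstract counterpart \(f^{\sharp}\),

\[
\text{soundness:}\ \ \alpha \circ f \;\sqsubseteq\; f^{\sharp} \circ \alpha,
\qquad
\text{completeness:}\ \ \alpha \circ f \;=\; f^{\sharp} \circ \alpha,
\]

so a complete abstraction loses no precision on \(f\); the paper's point is that minimal strong-preservation refinements can be cast as refinements of the abstract domain that achieve this equality.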


Journal ArticleDOI
TL;DR: A theory of cylindrical stochastic integration, recently developed by Mikulevicius and Rozovskii, is proposed as the mathematical background to the theory of bond markets, and it is shown that either the optimal strategy is based on a finite number of bonds or it is not necessarily a measure-valued process.
Abstract: We propose here a theory of cylindrical stochastic integration, recently developed by Mikulevicius and Rozovskii, as mathematical background to the theory of bond markets. In this theory, since there is a continuum of securities, it seems natural to define a portfolio as a measure on maturities. However, it turns out that this set of strategies is not complete, and the theory of cylindrical integration allows one to overcome this difficulty. Our approach generalizes the measure-valued strategies: this explains some known results, such as approximate completeness, but at the same time it also shows that either the optimal strategy is based on a finite number of bonds or it is not necessarily a measure-valued process.

44 citations


Journal ArticleDOI
TL;DR: The results show that reasoning about knowledge and common knowledge with infinitely many agents is no harder than when there are finitely many agents, provided that the authors can check the cardinality of certain set differences G - G', where G and G' are sets of agents.
Abstract: Complete axiomatizations and exponential-time decision procedures are provided for reasoning about knowledge and common knowledge when there are infinitely many agents. The results show that reasoning about knowledge and common knowledge with infinitely many agents is no harder than when there are finitely many agents, provided that we can check the cardinality of certain set differences G - G', where G and G' are sets of agents. Since our complexity results are independent of the cardinality of the sets G involved, they represent improvements over the previous results even when the sets of agents involved are finite. Moreover, our results make clear the extent to which issues of complexity and completeness depend on how the sets of agents involved are represented.
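
As background, the standard epistemic-logic definitions (not the paper's specific axiom system) are

\[
E_G\varphi \;=\; \bigwedge_{i \in G} K_i\varphi,
\qquad
C_G\varphi \;\leftrightarrow\; E_G(\varphi \wedge C_G\varphi),
\]

so for an infinite group \(G\) the "everyone knows" operator is an infinite conjunction; this is where the representation of the sets \(G\), and the ability to check cardinalities of differences \(G - G'\), enters both the axiomatization and the complexity bounds.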

01 Jan 2004
TL;DR: The decision problem of whether a given level of the game can be completed (and hence whether a solution to that level can be found) is formulated, and the problem is shown to be NP-hard.
Abstract: In the computer game ‘Lemmings’, the player must guide a tribe of green-haired lemming creatures to safety, and save them from an untimely demise. We formulate the decision problem which is, given a level of the game, to decide whether it is possible to complete the level (and hence to find a solution to that level). Under certain limitations, this can be decided in polynomial time, but in general the problem is shown to be NP-hard. This can hold even if there is only a single lemming to save, thus showing that it is hard to approximate the number of saved lemmings to any factor.

Journal ArticleDOI
TL;DR: In this article, the authors consider some systems of exponential functions, cosines, and sines with complex-valued coefficients and establish a necessary and sufficient condition for completeness and minimality of these systems in Lebesgue spaces.
Abstract: We consider some systems of exponential functions, cosines, and sines with complex-valued coefficients and establish a necessary and sufficient condition for completeness and minimality of these systems in Lebesgue spaces.
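
The two properties being characterized are the standard ones, recalled here for a general system (the paper's concrete criterion for the exponential, cosine and sine systems is not restated): a system \(\{f_n\}\) in \(L^p\) is

\[
\text{complete if}\ \ \overline{\operatorname{span}}\{f_n\} = L^p,
\qquad
\text{minimal if}\ \ f_k \notin \overline{\operatorname{span}}\{f_n : n \neq k\}\ \text{for every}\ k.
\]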

Journal ArticleDOI
TL;DR: In this paper, convexity results and related properties for the value functions of tandem queuing systems are derived for standard multiserver queues and for controllable queues with and without batch arrivals.
Abstract: We derive convexity results and related properties for the value functions of tandem queuing systems. The results for standard multiserver queues are new. For completeness, we also prove and generalize existing results on tandems of controllable queues. The results can be used to compare queuing systems. This is done for systems with and without batch arrivals and for systems with different numbers of on–off sources.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the relationship between asymptotic completeness in the global market and completeness of the finite submarkets, under a no-arbitrage assumption.
Abstract: We study completeness in large financial markets, namely markets containing countably many assets. We investigate the relationship between asymptotic completeness in the global market and completeness in the finite submarkets, under a no-arbitrage assumption. We also suggest a way to approximate a replicating strategy in the large market by finite-dimensional portfolios. Furthermore, we find necessary and sufficient conditions for completeness to hold in a factor model.

Book ChapterDOI
12 Jul 2004
TL;DR: In this paper, the authors formalize the symmetries of these operators as Galois connections and dualities, and study their properties in the associated semirings of operators.
Abstract: Modal Kleene algebra is Kleene algebra enriched by forward and backward box and diamond operators. We formalize the symmetries of these operators as Galois connections and dualities. We study their properties in the associated semirings of operators. Modal Kleene algebra provides a unifying semantics for various program calculi and enhances efficient cross-theory reasoning in this class, often in a very concise state-free style. This claim is supported by novel algebraic soundness and completeness proofs for Hoare logic.
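
In one common notation for modal semirings (conventions vary between papers; this sketch is illustrative, not a quotation), the backward diamond \(\langle a|p\) (the image of the test \(p\) under \(a\)) and the forward box \(|a]q\) (the states from which every \(a\)-step ends in \(q\)) form a Galois connection, through which Hoare triples get a state-free encoding:

\[
\langle a|\,p \;\le\; q \;\iff\; p \;\le\; |a]\,q,
\qquad
\{p\}\;a\;\{q\} \;\iff\; p \;\le\; |a]\,q.
\]

Encodings of this shape are what make the algebraic soundness and completeness proofs for Hoare logic so concise.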

01 Jan 2004
TL;DR: In this article, the role of set comprehension in higher-order automated theorem proving is studied and an approach to automated proof search presented here extends mating search by including connections up to extensional and equational reasoning.
Abstract: Church's simple type theory allows quantification over sets and functions. This expressive power allows a natural formalization of much of mathematics. However, searching for set instantiations has not yet been well-understood. Here we study the role of set comprehension in higher-order automated theorem proving. In particular, we introduce formulations of Church's type theory which are restricted with respect to set comprehension. We then define corresponding semantics and show soundness and completeness. Using completeness, we show that some restrictions to set comprehension are complete. That is, we can prove any theorem with restricted set comprehension that could be proven with unrestricted set comprehension. We also show some restrictions are incomplete. This methodology is used to study set comprehension both in the presence and absence of extensionality. We also describe methods for automated theorem proving in extensional type theory with restricted set instantiations. The approach to automated proof search presented here extends mating search by including connections up to extensional and equational reasoning. Search procedures based on these ideas have been implemented as part of the TPS theorem prover. The procedures have also been augmented by including the ability to solve for certain sets using set constraints. We describe the implementation and include experimental results.
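
The comprehension principle being restricted can be written, in Church's simple type theory, as the schema (standard formulation; the particular restrictions studied here are not reproduced):

\[
\exists s_{o\alpha}\,\forall x_{\alpha}\,\bigl(s\,x \leftrightarrow \varphi\bigr),
\]

with one instance for each formula \(\varphi\) of type \(o\) in which \(s\) does not occur free. In the unrestricted theory the witness is \(\lambda x_{\alpha}.\,\varphi\); the restricted formulations limit which formulas \(\varphi\) may be used as set instantiations during proof search.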

Book ChapterDOI
22 Sep 2004
TL;DR: In this article, a new unification procedure for solving term equations involving individual and sequence variables is presented; decidability of unification is proved, and completeness and almost minimality of the procedure are shown.
Abstract: Term equations involving individual and sequence variables, and individual and sequence function symbols are studied. Function symbols can have either fixed or flexible arity. A new unification procedure for solving such equations is presented. Decidability of unification is proved. Completeness and almost minimality of the procedure are shown.
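
A small illustration of the kind of equation involved (notation chosen for this example, not taken from the paper): writing \(\overline{x}, \overline{y}\) for sequence variables and letting \(f\) have flexible arity,

\[
f(x, \overline{y}) \doteq f(a, b, c)
\quad\text{has the single unifier}\quad
\{\,x \mapsto a,\ \overline{y} \mapsto (b, c)\,\},
\]

whereas \(f(\overline{x}, \overline{y}) \doteq f(a, b)\) has the three incomparable unifiers \(\{\overline{x} \mapsto (),\ \overline{y} \mapsto (a, b)\}\), \(\{\overline{x} \mapsto (a),\ \overline{y} \mapsto (b)\}\) and \(\{\overline{x} \mapsto (a, b),\ \overline{y} \mapsto ()\}\). This is why the procedure returns a set of unifiers and why (almost) minimality of that set is of interest.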

Journal ArticleDOI
TL;DR: It is shown that for a closed structure, the counting problem is #P-complete, and that the famous n-queens and toroidal n-queens counting problems are both beyond the #P class.
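
For concreteness, the n-queens counting problem asks how many placements of n mutually non-attacking queens exist on an n-by-n board. The brute-force backtracking counter below is not from the paper; it just makes explicit the problem whose counting complexity is being located relative to #P.

def count_n_queens(n):
    """Count all placements of n mutually non-attacking queens on an n x n board."""
    count = 0
    cols, diag_down, diag_up = set(), set(), set()

    def place(row):
        nonlocal count
        if row == n:
            count += 1
            return
        for col in range(n):
            if col in cols or row - col in diag_down or row + col in diag_up:
                continue
            cols.add(col); diag_down.add(row - col); diag_up.add(row + col)
            place(row + 1)
            cols.remove(col); diag_down.remove(row - col); diag_up.remove(row + col)

    place(0)
    return count

print([count_n_queens(n) for n in range(1, 9)])    # [1, 0, 0, 2, 10, 4, 40, 92]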

Journal ArticleDOI
Bo-Yong Chen
TL;DR: In this paper, it was shown that any hyperconvex manifold has a complete Bergman metric.
Abstract: We proved that any hyperconvex manifold has a complete Bergman metric.

Journal ArticleDOI
TL;DR: Among the novelties are an unusually simple axiomatization of control operators and a strengthened completeness result with a proof based on a delaying transform.
Abstract: We investigate continuations in the context of idealized call-by-value programming languages. On the semantic side, we analyze the categorical structures that arise from continuation models of call-by-value languages. On the syntactic side, we study the call-by-value continuation-passing transformation as a translation between equational theories. Among the novelties are an unusually simple axiomatization of control operators and a strengthened completeness result with a proof based on a delaying transform.
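
For orientation, the call-by-value continuation-passing transformation has the following textbook (Plotkin-style) form; this standard version is shown as background and is not claimed to be the exact variant axiomatized in the paper.

import itertools

_counter = itertools.count()

def fresh(prefix):
    """Generate a continuation/value variable name unused elsewhere."""
    return f"{prefix}{next(_counter)}"

def cps(term):
    """Call-by-value CPS translation.
    Terms: ('var', x) | ('lam', x, body) | ('app', fun, arg)."""
    kind = term[0]
    k = fresh("k")
    if kind == "var":        # [[x]]     = \k. k x
        return ("lam", k, ("app", ("var", k), term))
    if kind == "lam":        # [[\x. M]] = \k. k (\x. [[M]])
        _, x, body = term
        return ("lam", k, ("app", ("var", k), ("lam", x, cps(body))))
    if kind == "app":        # [[M N]]   = \k. [[M]] (\m. [[N]] (\n. (m n) k))
        _, fun, arg = term
        m, n = fresh("m"), fresh("n")
        return ("lam", k,
                ("app", cps(fun),
                 ("lam", m,
                  ("app", cps(arg),
                   ("lam", n,
                    ("app", ("app", ("var", m), ("var", n)), ("var", k)))))))
    raise ValueError(f"unknown term: {term}")

def show(t):
    """Render a term as a string for inspection."""
    if t[0] == "var":
        return t[1]
    if t[0] == "lam":
        return f"(\\{t[1]}. {show(t[2])})"
    return f"({show(t[1])} {show(t[2])})"

print(show(cps(("app", ("lam", "x", ("var", "x")), ("var", "y")))))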

Proceedings Article
01 Jan 2004
TL;DR: It is shown that the standard notion of Kripke completeness is the strongest one among many provably distinct algebraically motivated completeness properties, and notions of completeness with respect to algebras which are either atomic, complete, completely additive or admit residuals are investigated.
Abstract: We are going to show that the standard notion of Kripke completeness is the strongest one among many provably distinct algebraically motivated completeness properties, some of which seem to be of intrinsic interest. More specifically, we are going to investigate notions of completeness with respect to algebras which are either atomic, complete, completely additive or admit residuals (the last notion of completeness coincides with conservativity of minimal tense extensions); we will also be interested in combinations of these properties.

Journal ArticleDOI
TL;DR: In this paper, it was shown that if for some ε > 0, NP contains a DTIME(2^(n^ε))-bi-immune set, then NP contains a set that is 2-Turing complete for NP (hence 3-truth-table complete) but not 1-truth-table complete for NP.
Abstract: We prove that if for some ε > 0, NP contains a set that is DTIME(2^(n^ε))-bi-immune, then NP contains a set that is 2-Turing complete for NP (hence 3-truth-table complete) but not 1-truth-table complete for NP. Thus this hypothesis implies a strong separation of completeness notions for NP. Lutz and Mayordomo (Theor. Comput. Sci. 164 (1996) 141-163) and Ambos-Spies and Bentzien (J. Comput. Syst. Sci. 61(3) (2000) 335-361) previously obtained the same consequence using strong hypotheses involving resource-bounded measure and/or category theory. Our hypothesis is weaker and involves no assumptions about stochastic properties of NP.
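
For reference, bi-immunity is the standard notion used in this hypothesis: a set \(A\) is \(\mathrm{DTIME}(t(n))\)-bi-immune if neither \(A\) nor its complement \(\overline{A}\) contains an infinite subset decidable in time \(t(n)\); equivalently, every infinite set decidable in time \(t(n)\) intersects both \(A\) and \(\overline{A}\). Here \(t(n) = 2^{n^{\epsilon}}\).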

Proceedings ArticleDOI
13 Jun 2004
TL;DR: This work presents a computational criterion for a function f to be complete for the asymmetric case and shows a matching criterion called computational row transitivity for f to have a simple SFE (based on no additional assumptions).
Abstract: A Secure Function Evaluation (SFE) of a two-variable function f(·,·) is a protocol that allows two parties with inputs x and y to evaluate f(x,y) in a manner where neither party learns "more than is necessary". A rich body of work deals with the study of completeness for secure two-party computation. A function f is complete for SFE if a protocol for securely evaluating f allows the secure evaluation of all (efficiently computable) functions. The questions investigated are which functions are complete for SFE, which functions have SFE protocols unconditionally and whether there are functions that are neither complete nor have efficient SFE protocols. The previous study of these questions was mainly conducted from an Information Theoretic point of view and provided strong answers in the form of combinatorial properties. However, we show that there are major differences between the information theoretic and computational settings. In particular, we show functions that are considered as having SFE unconditionally by the combinatorial criteria but are actually complete in the computational setting. We initiate the fully computational study of these fundamental questions. Somewhat surprisingly, we manage to provide an almost full characterization of the complete functions in this model as well. More precisely, we present a computational criterion (called computational row non-transitivity) for a function f to be complete for the asymmetric case. Furthermore, we show a matching criterion called computational row transitivity for f to have a simple SFE (based on no additional assumptions). This criterion is close to the negation of the computational row non-transitivity and thus we essentially characterize all "nice" functions as either complete or having SFE unconditionally.

Journal ArticleDOI
01 Jul 2004
TL;DR: In this paper, it was shown that spacetimes with collisionless matter evolving from data on a compact Cauchy surface with hyperbolic symmetry are timelike and null geodesically complete in the expanding direction.
Abstract: Spacetimes with collisionless matter evolving from data on a compact Cauchy surface with hyperbolic symmetry are shown to be timelike and null geodesically complete in the expanding direction, provided the data satisfy a certain size restriction.

Journal ArticleDOI
TL;DR: In this article, it was shown that for generic sliced spacetimes global hyperbolicity is equivalent to space completeness under the assumption that the lapse, shift and spatial metric are uniformly bounded.
Abstract: We show that for generic sliced spacetimes global hyperbolicity is equivalent to space completeness under the assumption that the lapse, shift and spatial metric are uniformly bounded. This leads us to the conclusion that simple sliced spaces are timelike and null geodesically complete if and only if space is a complete Riemannian manifold.

Journal ArticleDOI
TL;DR: First, ordinary possible-worlds models are extended to infinite possible-worlds models; then an axiomatic system is proposed and proved to be complete.

Book
01 Jan 2004
TL;DR: Experimental results over MCNC benchmarks show that the bi-decomposition outperforms SIS and other BDD-based decomposition methods in terms of area and delay of the resulting circuits, with comparable CPU time.
Abstract: This paper introduces the theory of bi-decomposition of Boolean functions. This approach optimally exploits functional properties of a Boolean function in order to find an associated multilevel circuit representation having a very short delay by using simple two-input gates. The machine learning process is based on the Boolean Differential Calculus and is focused on detecting the profitable functional properties available for the Boolean function. For clear understanding, the bi-decomposition of completely specified Boolean functions is introduced first. Significantly better chances of success are given for the bi-decomposition of incompletely specified Boolean functions, discussed second. The inclusion of weak bi-decomposition allows one to prove the completeness of the suggested decomposition method. The basic task for machine learning consists of determining the decomposition type and dedicated sets of variables. Relying on this knowledge, a complete recursive design algorithm is suggested. Experimental results over MCNC benchmarks show that the bi-decomposition outperforms SIS and other BDD-based decomposition methods in terms of area and delay of the resulting circuits with comparable CPU time. By switching from the ON-set/OFF-set model of Boolean function lattices to their upper- and lower-bound model, a new view of the bi-decomposition arises. This new form of the bi-decomposition theory makes a comprehensible generalization of the bi-decomposition to multi-valued functions possible.
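
A toy instance of the simplest case, OR-bi-decomposition of a completely specified function over a disjoint variable split (this brute-force truth-table check is purely illustrative and is not the paper's method): f splits as g(x_A) OR h(x_B) exactly when the maximal candidates g*(a) = min_b f(a, b) and h*(b) = min_a f(a, b) reconstruct f.

from itertools import product

def or_bidecomposable(f, num_a, num_b):
    """Check whether f(a, b) == g(a) | h(b) for some g, h over the given
    disjoint variable split, by testing the maximal candidates."""
    A = list(product((0, 1), repeat=num_a))
    B = list(product((0, 1), repeat=num_b))
    g = {a: min(f(a, b) for b in B) for a in A}    # largest g compatible with f
    h = {b: min(f(a, b) for a in A) for b in B}    # largest h compatible with f
    ok = all(f(a, b) == (g[a] | h[b]) for a in A for b in B)
    return ok, g, h

# (x1 AND x2) OR y1 is OR-bi-decomposable by construction ...
print(or_bidecomposable(lambda a, b: (a[0] & a[1]) | b[0], 2, 1)[0])    # True
# ... whereas XOR admits no OR-bi-decomposition for this split.
print(or_bidecomposable(lambda a, b: a[0] ^ b[0], 1, 1)[0])             # False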

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the Birch sequence Y_K = { p^α q^β : p, q > 1; α, β ∈ ℕ₀; 0 ≤ β ≤ K }, giving a partial answer to a question of P. Erdős.
Abstract: We investigate the Birch sequence Y_K = { p^α q^β : p, q > 1; α, β ∈ ℕ₀; 0 ≤ β ≤ K }, giving a partial answer to a question of P. Erdős.
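
The completeness in question is completeness of integer sequences in the sense of Erdős (standard definition; the paper's partial answer is not restated here): a sequence of positive integers is complete if every sufficiently large integer can be written as a sum of distinct terms of the sequence. Birch's theorem states that for fixed coprime integers \(p, q > 1\) the unrestricted set \(\{p^{\alpha} q^{\beta} : \alpha, \beta \in \mathbb{N}_0\}\) is complete; the sequence \(Y_K\) above restricts the exponent \(\beta\) to \(0 \le \beta \le K\).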