
Showing papers in "Annals of Mathematics and Artificial Intelligence in 2005"


Journal ArticleDOI
TL;DR: A new randomized algorithm for SAT (the satisfiability problem for Boolean formulas in conjunctive normal form), inspired by two randomized algorithms having the best current worst-case upper bounds.
Abstract: In this paper we present a new randomized algorithm for SAT, i.e., the satisfiability problem for Boolean formulas in conjunctive normal form. Despite its simplicity, this algorithm performs well on many common benchmarks ranging from graph coloring problems to microprocessor verification. Our algorithm is inspired by two randomized algorithms having the best current worst-case upper bounds ([27,28] and [30,31]). We combine the main ideas of these algorithms in one algorithm. The two approaches we use are local search (which is used in many SAT algorithms, e.g., in GSAT [34] and WalkSAT [33]) and unit clause elimination (which is rarely used in local search algorithms). In this paper we do not prove any theoretical bounds. However, we present encouraging results of computational experiments comparing several implementations of our algorithm with other SAT solvers. We also prove that our algorithm is probabilistically approximately complete (PAC).
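The two ingredients named above, local search and unit-clause elimination, can be combined in many ways; the sketch below is a generic illustration of one such combination (unit propagation followed by WalkSAT-style flips) with clauses represented as lists of signed integers. It is an assumed, simplified pairing of the two ideas, not the authors' algorithm or its data structures.

```python
import random

def unit_propagate(clauses, assignment):
    """Assign variables forced by unit clauses under the current partial assignment."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                      # clause already satisfied
            free = [l for l in clause if abs(l) not in assignment]
            if len(free) == 1:                # unit clause: its literal is forced
                assignment[abs(free[0])] = free[0] > 0
                changed = True
    return assignment

def solve(clauses, variables, max_flips=100000):
    """Unit propagation on the empty assignment, then WalkSAT-style random flips."""
    assignment = unit_propagate(clauses, {})
    for v in variables:
        assignment.setdefault(v, random.choice([True, False]))
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any(assignment[abs(l)] == (l > 0) for l in c)]
        if not unsat:
            return assignment                 # satisfying assignment found
        lit = random.choice(random.choice(unsat))
        assignment[abs(lit)] = not assignment[abs(lit)]
    return None                               # give up; the method is incomplete
```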

95 citations


Journal ArticleDOI
TL;DR: An experimental analysis demonstrates that indirect mechanisms, such as ascending-price auctions, can achieve better allocative efficiency with less preference elicitation than sealed-bid (direct) auctions because they promote better decisions about preference elicitation.
Abstract: We consider auction design in a setting with costly preference elicitation. Well designed auctions can help to avoid unnecessary elicitation while determining efficient allocations. Careful design can also lead to more efficient outcomes when elicitation is too costly to permit perfect allocative efficiency. An incremental revelation principle is developed and used to motivate the role of proxied and indirect auction designs. Proxy agents, situated between bidders and an auction, can be used to maintain partial information about bidder preferences, to compute equilibrium bidding strategies based on the available information, and to elicit additional preference information as required. We derive information-theoretic elicitation policies for proxy agents under a simple model of costly elicitation across different auction designs. An experimental analysis demonstrates that indirect mechanisms, such as ascending-price auctions, can achieve better allocative efficiency with less preference elicitation than sealed-bid (direct) auctions because they promote better decisions about preference elicitation.

83 citations


Journal ArticleDOI
TL;DR: This paper draws a precise picture of the computational complexity of probabilistic reasoning under coherence, building on earlier results showing that the notions of g-coherence and g-coherent entailment can be expressed by combining notions from model-theoretic probabilistic reasoning with concepts from default reasoning.
Abstract: In previous work [V. Biazzo, A. Gilio, T. Lukasiewicz and G. Sanfilippo, Probabilistic logic under coherence, model-theoretic probabilistic logic, and default reasoning in System P, Journal of Applied Non-Classical Logics 12(2) (2002) 189–213], we have explored the relationship between probabilistic reasoning under coherence and model-theoretic probabilistic reasoning. In particular, we have shown that the notions of g-coherence and of g-coherent entailment in probabilistic reasoning under coherence can be expressed by combining notions in model-theoretic probabilistic reasoning with concepts from default reasoning. In this paper, we continue this line of research. Based on the above semantic results, we draw a precise picture of the computational complexity of probabilistic reasoning under coherence. Moreover, we introduce transformations for probabilistic reasoning under coherence, which reduce an instance of deciding g-coherence or of computing tight intervals under g-coherent entailment to a smaller problem instance, and which can be done very efficiently. Furthermore, we present new algorithms for deciding g-coherence and for computing tight intervals under g-coherent entailment, which reformulate previous algorithms using terminology from default reasoning. They are based on reductions to standard problems in model-theoretic probabilistic reasoning, which in turn can be reduced to linear optimization problems. Hence, efficient techniques for model-theoretic probabilistic reasoning can immediately be applied for probabilistic reasoning under coherence (for example, column generation techniques). We describe several such techniques, which transform problem instances in model-theoretic probabilistic reasoning into smaller problem instances. We also describe a technique for obtaining a reduced set of variables for the associated linear optimization problems in the conjunctive case, and give new characterizations of this reduced set as a set of non-decomposable variables, and using the concept of random gain.

82 citations


Journal ArticleDOI
TL;DR: This paper analyses the process and outcomes of competitive bilateral negotiation for a model based on negotiation decision functions by exploring all possible incomplete information scenarios – both symmetric and asymmetric.
Abstract: This paper analyses the process and outcomes of competitive bilateral negotiation for a model based on negotiation decision functions. Each agent has time constraints in the form of a deadline and a discounting factor. The importance of information possessed by participants is highlighted by exploring all possible incomplete information scenarios – both symmetric and asymmetric. In particular, we examine a range of negotiation scenarios in which the amount of information that agents have about their opponent's parameters is systematically varied. For each scenario, we determine the equilibrium solution and study its properties. The main results of our study are as follows. Firstly, in some scenarios agreement takes place at the earlier deadline, while in others it takes place near the beginning of negotiation. Secondly, in some scenarios the price surplus is split equally between the agents while in others the entire price surplus goes to a single agent. Thirdly, for each possible scenario, the equilibrium outcome possesses the properties of uniqueness and symmetry – although it is not always Pareto optimal. Finally, we also show the relative impacts of the opponent's parameters on the bargaining outcome.
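As a point of reference for what a "negotiation decision function" with a deadline looks like, here is a minimal sketch of a time-dependent concession function in the common polynomial (Faratin-style) form; the function shape and parameter names are illustrative assumptions, not the paper's exact model.

```python
def offer(t, deadline, initial_price, reserve_price, beta):
    """Time-dependent concession: the offer moves from the initial price toward the
    reserve price as the deadline approaches; beta < 1 concedes late (Boulware),
    beta > 1 concedes early (Conceder)."""
    fraction = min(t / deadline, 1.0) ** (1.0 / beta)
    return initial_price + fraction * (reserve_price - initial_price)

# A buyer conceding from 10 to 50 over a deadline of 20 rounds, sampled every 5 rounds.
print([round(offer(t, 20, 10.0, 50.0, 0.5), 1) for t in range(0, 21, 5)])
```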

72 citations


Journal ArticleDOI
TL;DR: In this paper, a new class of problems called para-primal problems, incomparable with the families identified by Feder and Vardi (1998), were introduced and proved to be decidable in polynomial time.
Abstract: In this paper we consider constraint satisfaction problems where the set of constraint relations is fixed. Feder and Vardi (1998) identified three families of constraint satisfaction problems containing all known polynomially solvable problems. We introduce a new class of problems called para-primal problems, incomparable with the families identified by Feder and Vardi (1998) and we prove that any constraint problem in this class is decidable in polynomial time. As an application of this result we prove a complete classification for the complexity of constraint satisfaction problems under the assumption that the basis contains all the permutation relations. In the proofs, we make an intensive use of algebraic results from clone theory about the structure of para-primal and homogeneous algebras.

56 citations


Journal ArticleDOI
TL;DR: This paper investigates Walley's concepts of epistemic irrelevance and epistemic independence for imprecise probability models, and their relation to the graphoid axioms.
Abstract: This paper investigates Walley's concepts of epistemic irrelevance and epistemic independence for imprecise probability models. We study the mathematical properties of irrelevance and independence, and their relation to the graphoid axioms. Examples are given to show that epistemic irrelevance can violate the symmetry, contraction and intersection axioms, that epistemic independence can violate contraction and intersection, and that this accords with informal notions of irrelevance and independence.
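For reference, the graphoid axioms mentioned above can be stated as follows, writing $I(X,Y\mid Z)$ for "$Y$ is irrelevant to (or independent of) $X$, given $Z$":

```latex
\begin{align*}
\text{Symmetry:}      && I(X,Y\mid Z) &\Rightarrow I(Y,X\mid Z)\\
\text{Decomposition:} && I(X,Y\cup W\mid Z) &\Rightarrow I(X,Y\mid Z)\\
\text{Weak union:}    && I(X,Y\cup W\mid Z) &\Rightarrow I(X,Y\mid Z\cup W)\\
\text{Contraction:}   && I(X,Y\mid Z)\wedge I(X,W\mid Z\cup Y) &\Rightarrow I(X,Y\cup W\mid Z)\\
\text{Intersection:}  && I(X,W\mid Z\cup Y)\wedge I(X,Y\mid Z\cup W) &\Rightarrow I(X,Y\cup W\mid Z)
\end{align*}
```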

53 citations


Journal ArticleDOI
TL;DR: This report gives the results of the SAT Competition 2002, attempts to interpret them, and offers suggestions for future competitions.
Abstract: The SAT Competition 2002 was held in March–May 2002 in conjunction with SAT 2002 (the Fifth International Symposium on the Theory and Applications of Satisfiability Testing). About 30 solvers and 2300 benchmarks took part in the competition, and the evaluation required more than 2 CPU years to complete. In this report, we give the results of the competition, try to interpret them, and give suggestions for future competitions.

48 citations


Journal ArticleDOI
TL;DR: It is shown that DPLL with the considered cut restrictions, such as allowing splitting only on the variables corresponding to the input gates, cannot polynomially simulate DPLL with unrestricted splitting.
Abstract: This paper studies the relative efficiency of variations of a tableau method for Boolean circuit satisfiability checking. The considered method is a nonclausal generalisation of the Davis–Putnam–Logemann–Loveland (DPLL) procedure to Boolean circuits. The variations are obtained by restricting the use of the cut (splitting) rule in several natural ways. It is shown that the more restricted variations cannot polynomially simulate the less restricted ones. For each pair of methods T, T′, an infinite family $\{\mathcal{C}_{n}\}$ of circuits is devised for which T has polynomial size proofs while in T′ the minimal proofs are of exponential size w.r.t. n, implying exponential separation of T and T′ w.r.t. n. The results also apply to DPLL for formulas in conjunctive normal form obtained from Boolean circuits by using Tseitin's translation. Thus DPLL with the considered cut restrictions, such as allowing splitting only on the variables corresponding to the input gates, cannot polynomially simulate DPLL with unrestricted splitting.
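As a concrete instance of the Tseitin translation referred to above, an AND gate $g = a \wedge b$ in a circuit is encoded by introducing the gate variable $g$ and the clauses

```latex
g \leftrightarrow (a \wedge b)
\;\;\rightsquigarrow\;\;
(\neg g \vee a) \wedge (\neg g \vee b) \wedge (g \vee \neg a \vee \neg b),
```

so restricting splitting to input gates corresponds, on the CNF side, to branching only on $a$ and $b$ and never on the auxiliary variable $g$.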

43 citations


Journal ArticleDOI
Gert de Cooman
TL;DR: It is shown that there is a common order-theoretic structure underlying many of the models for representing beliefs in the literature, and that the model based on classical propositional logic can be embedded in that based on the theory of coherent lower previsions.
Abstract: I show that there is a common order-theoretic structure underlying many of the models for representing beliefs in the literature. After identifying this structure, and studying it in some detail, I argue that it is useful. On the one hand, it can be used to study the relationships between several models for representing beliefs, and I show in particular that the model based on classical propositional logic can be embedded in that based on the theory of coherent lower previsions. On the other hand, it can be used to generalise the coherentist study of belief dynamics (belief expansion and revision) by using an abstract order-theoretic definition of the belief spaces where the dynamics of expansion and revision take place. Interestingly, many of the existing results for expansion and revision in the context of classical propositional logic can still be proven in this much more abstract setting, and therefore remain valid for many other belief models, such as those based on imprecise probabilities.

37 citations


Journal ArticleDOI
TL;DR: In this article, the basic operations of conditioning and marginalization are expressed in terms of variables, and it is shown that epistemic irrelevance is an asymmetric graphoid; the intersection property, which in probability theory requires the global probability distribution to be strictly positive, always holds here owing to the handling of zero probabilities in sets of gambles.
Abstract: This paper studies graphoid properties for epistemic irrelevance in sets of desirable gambles. To that end, the basic operations of conditioning and marginalization are expressed in terms of variables. Then, it is shown that epistemic irrelevance is an asymmetric graphoid. The intersection property is verified in probability theory only when the global probability distribution is strictly positive; here it always holds, due to the handling of zero probabilities in sets of gambles. An asymmetric D-separation principle is also presented, by which this type of independence relationship can be represented in directed acyclic graphs.

36 citations


Journal ArticleDOI
TL;DR: A graphical interpretation of the constraints is provided, relating the absolute qualitative labels of two quantities to their corresponding relative relation(s), and conversely, the relative order-of-magnitude relations are characterized in the absolute order-of-magnitude world.
Abstract: The aim of this paper is to analyze under which conditions Absolute Order-of-Magnitude and Relative Order-of-Magnitude models may be concordant and to determine the constraints which guarantee concordance. A graphical interpretation of the constraints is provided, relating the absolute qualitative labels of two quantities to their corresponding relative relation(s), and conversely. The relative order-of-magnitude relations are then characterized in the absolute order-of-magnitude world.

Journal ArticleDOI
TL;DR: This paper aims to evaluate data structures for backtrack search SAT solvers under a common, unbiased SAT framework, and proposes new data structures that are competitive with the most efficient data structures currently available and that may be preferable for next-generation SAT solvers.
Abstract: The implementation of efficient Propositional Satisfiability (SAT) solvers entails the utilization of highly efficient data structures, as illustrated by most of the recent state-of-the-art SAT solvers. However, it is in general hard to compare existing data structures, since different solvers are often characterized by fairly different algorithmic organizations and techniques, and by different search strategies and heuristics. This paper aims to evaluate data structures for backtrack search SAT solvers under a common, unbiased SAT framework. In addition, advantages and drawbacks of each existing data structure are identified. Finally, new data structures are proposed that are competitive with the most efficient data structures currently available, and that may be preferable for next-generation SAT solvers.

Journal ArticleDOI
TL;DR: This paper shows how to formulate GPP as a search problem and introduces a sequence of admissible heuristic functions estimating the size of the optimal partition by looking into different interactions between vertices of the graph, achieving speedups of up to several orders of magnitude.
Abstract: As search spaces become larger and as problems scale up, an efficient way to speed up the search is to use a more accurate heuristic function. A better heuristic function might be obtained by the following general idea. Many problems can be divided into a set of subproblems and subgoals that should be achieved. Interactions and conflicts between unsolved subgoals of the problem might provide useful knowledge which could be used to construct an informed heuristic function. In this paper we demonstrate this idea on the graph partitioning problem (GPP). We first show how to formulate GPP as a search problem and then introduce a sequence of admissible heuristic functions estimating the size of the optimal partition by looking into different interactions between vertices of the graph. We then optimally solve GPP with these heuristics. Experimental results show that our advanced heuristics achieve speedups of up to several orders of magnitude. Finally, we experimentally compare our approach to other state-of-the-art optimal graph partitioning solvers on a number of classes of graphs. The results obtained show that our algorithm outperforms them in many cases.

Journal ArticleDOI
Stefan Szeider
TL;DR: It is shown that recognition of var-satisfiable CNF formulas, which can be viewed as the best possible generalization of matched CNF formulas, is $\Pi_2^P$-complete, answering a question posed by Kleine Büning and Zhao.
Abstract: A CNF formula is called matched if its associated bipartite graph (whose vertices are clauses and variables) has a matching that covers all clauses. Matched CNF formulas are satisfiable and can be recognized efficiently by matching algorithms. We generalize this concept and cover clauses by collections of bicliques (complete bipartite graphs). It turns out that such generalization indeed gives rise to larger classes of satisfiable CNF formulas which we term biclique satisfiable. We show, however, that the recognition of biclique satisfiable CNF formulas is NP-complete, and remains NP-hard if the size of bicliques is bounded. A satisfiable CNF formula is called var-satisfiable if it remains satisfiable under arbitrary replacement of literals by their complements. Var-satisfiable CNF formulas can be viewed as the best possible generalization of matched CNF formulas as every matched CNF formula and every biclique satisfiable CNF formula is var-satisfiable. We show that recognition of var-satisfiable CNF formulas is $\Pi_2^P$-complete, answering a question posed by Kleine Büning and Zhao.
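The matched property itself can be checked with any maximum bipartite matching routine; the sketch below uses networkx as assumed tooling (not part of the paper) and clauses given as lists of signed integers.

```python
import networkx as nx
from networkx.algorithms import bipartite

def is_matched(clauses):
    """A CNF formula is matched iff its clause-variable incidence graph has a
    matching covering all clauses (checked here via maximum bipartite matching)."""
    G = nx.Graph()
    clause_nodes = [("c", i) for i in range(len(clauses))]
    G.add_nodes_from(clause_nodes)
    for i, clause in enumerate(clauses):
        for lit in clause:
            G.add_edge(("c", i), ("v", abs(lit)))
    matching = bipartite.maximum_matching(G, top_nodes=clause_nodes)
    return all(c in matching for c in clause_nodes)

# (x1 v x2) & (-x1 v x3): each clause can be matched to a distinct variable.
print(is_matched([[1, 2], [-1, 3]]))     # True
print(is_matched([[1], [-1], [1, -1]]))  # False: three clauses share one variable
```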

Journal ArticleDOI
TL;DR: This article studies the computational complexity of the agent design problem for tasks that are of the form "achieve this state of affairs" or "maintain this state of affairs," and considers three general formulations of these problems (in both non-deterministic and deterministic environments).
Abstract: The agent design problem is as follows: given a specification of an environment, together with a specification of a task, is it possible to construct an agent that can be guaranteed to successfully accomplish the task in the environment? In this article, we study the computational complexity of the agent design problem for tasks that are of the form "achieve this state of affairs" or "maintain this state of affairs." We consider three general formulations of these problems (in both non-deterministic and deterministic environments) that differ in the nature of what is viewed as an "acceptable" solution: in the least restrictive formulation, no limit is placed on the number of actions an agent is allowed to perform in attempting to meet the requirements of its specified task. We show that the resulting decision problems are intractable, in the sense that these are non-recursive (but recursively enumerable) for achievement tasks, and non-recursively enumerable for maintenance tasks. In the second formulation, the decision problem addresses the existence of agents that have satisfied their specified task within some given number of actions. Even in this more restrictive setting the resulting decision problems are either PSPACE-complete or NP-complete. Our final formulation requires the environment to be history independent and bounded. In these cases polynomial time algorithms exist: for deterministic environments the decision problems are NL-complete; in non-deterministic environments, P-complete.

Journal ArticleDOI
TL;DR: This work identifies classes of formulae where the selection of a minimally unsatisfiable subformula can be done efficiently by using a variant of Farkas' lemma and solving a linear programming problem.
Abstract: A minimally unsatisfiable subformula (MUS) is a subset of clauses of a given CNF formula which is unsatisfiable but becomes satisfiable as soon as any of its clauses is removed. The selection of a MUS is of great relevance in many practical applications. This especially holds when the propositional formula encoding the application is required to have a well-defined satisfiability property (either to be satisfiable or to be unsatisfiable). While selection of a MUS is a hard problem in general, we show classes of formulae where this problem can be solved efficiently. This is done by using a variant of Farkas' lemma and solving a linear programming problem. Successful results on real-world contradiction detection problems are presented.

Journal ArticleDOI
TL;DR: A high-performance implementation of an exact algorithm for MAX-2-SAT that outperforms any implementation the authors know of in the same category, and that is a feasible and effective tool for solving large instances of the Max-Cut problem in graph theory.
Abstract: We study three new techniques that will speed up the branch-and-bound algorithm for the MAX-2-SAT problem: The first technique is a group of new lower bound functions for the algorithm and we show that these functions are admissible and consistently better than other known lower bound functions. The other two techniques are based on the strongly connected components of the implication graph of a 2CNF formula: One uses the graph to simplify the formula and the other uses the graph to design a new variable ordering. The experiments show that the simplification can reduce the size of the input substantially, regardless of the clause-to-variable ratio, and that the new variable ordering performs much better when the clause-to-variable ratio is less than 2. A direct outcome of this research is a high-performance implementation of an exact algorithm for MAX-2-SAT which outperforms any implementation we know about in the same category. We also show that our implementation is a feasible and effective tool to solve large instances of the Max-Cut problem in graph theory.

Journal ArticleDOI
TL;DR: The proposed method not only provides a common platform for a systematic study and a reliable improvement of deterministic and stochastic SAT solvers alike but also supports the introduction and validation of new problem instance classes.
Abstract: A recent series of experiments with a group of state-of-the-art SAT solvers and several well-defined classes of problem instances reports statistically significant performance variability for the solvers. A systematic analysis of the observed performance data, all openly archived on the Web, reveals distributions which we classify into three broad categories: (1) readily characterized with a simple $\chi^2$-test, (2) requiring more in-depth analysis by a statistician, (3) incomplete, due to time-out limit reached by specific solvers. The first category includes two well-known distributions: normal and exponential; we use simple first-order criteria to decide the second category and label the distributions as near-normal, near-exponential and heavy-tail. We expect that good models for some if not most of these may be found with parameters that fit either generalized gamma, Weibull, or Pareto distributions. Our experiments show that most SAT solvers exhibit either normal or exponential distribution of execution time (runtime) on many equivalence classes of problem instances. This finding suggests that the basic mathematical framework for these experiments may well be the same as the one used to test the reliability or lifetime of hardware components such as lightbulbs, A/C units, etc. A batch of N replicated hardware components represents an equivalence class of N problem instances in SAT, a controlled operating environment A represents a SAT solver A, and the survival function $R_A(x)$ (where $x$ represents the lifetime) is the complement of the solvability function $S_A(x) \equiv 1 - R_A(x)$, where $x$ may represent runtime, implications, backtracks, etc. As demonstrated in the paper, a set of unrelated benchmarks or randomly generated SAT instances available today cannot measure the performance of SAT solvers reliably – there is no control on their ‘hardness’. However, equivalence class instances as defined in this paper are, in effect, replicated instances of a specific reference instance. The proposed method not only provides a common platform for a systematic study and a reliable improvement of deterministic and stochastic SAT solvers alike but also supports the introduction and validation of new problem instance classes.
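The distribution-fitting step described above can be sketched with standard tools; the snippet below uses synthetic runtimes and SciPy, both of which are assumptions for illustration rather than the paper's archived data or scripts.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
runtimes = rng.exponential(scale=12.0, size=200)   # stand-in for runtimes on one equivalence class

# Fit candidate families and compare goodness of fit with a Kolmogorov-Smirnov test.
for name, dist in [("exponential", stats.expon),
                   ("weibull", stats.weibull_min),
                   ("normal", stats.norm)]:
    params = dist.fit(runtimes)
    ks = stats.kstest(runtimes, dist.cdf, args=params)
    print(f"{name:12s} KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
```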

Journal ArticleDOI
TL;DR: The method of Sharir for detecting strongly connected components in a directed graph can be adapted to performing "lean" resolution on a set of binary clauses to find implied equivalent literals, implied unit clauses, and implied binary clauses.
Abstract: Binary-clause reasoning has been shown to reduce the size of the search space on many satisfiability problems, but has often been so expensive that run-time was higher than that of a simpler procedure that explored a larger space. The method of Sharir for detecting strongly connected components in a directed graph can be adapted to performing “lean” resolution on a set of binary clauses. Beyond simply detecting unsatisfiability, the goal is to find implied equivalent literals, implied unit clauses, and implied binary clauses.
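One straightforward way to realize this idea (a sketch using an SCC routine from networkx, not the paper's "lean" resolution procedure) is to build the implication graph of the binary clauses and read implied equivalent literals off its strongly connected components:

```python
import networkx as nx

def equivalent_literals(binary_clauses):
    """Each binary clause (a v b) yields the implications -a -> b and -b -> a.
    Literals in one strongly connected component of this graph are pairwise
    equivalent; a component containing both x and -x proves unsatisfiability."""
    G = nx.DiGraph()
    for a, b in binary_clauses:
        G.add_edge(-a, b)
        G.add_edge(-b, a)
    classes = []
    for scc in nx.strongly_connected_components(G):
        if any(-lit in scc for lit in scc):
            return False, []              # conflict among the binary clauses
        if len(scc) > 1:
            classes.append(sorted(scc))
    return True, classes

# (-x1 v x2) and (-x2 v x1) force x1 and x2 to take the same value.
print(equivalent_literals([(-1, 2), (-2, 1)]))
```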

Journal ArticleDOI
TL;DR: The set packing and set covering formulations are used to suggest novel iterative Dutch auction algorithms for combinatorial auction problems; convergence of the algorithms is proved and the solutions obtained are shown to lie within provable worst-case bounds.
Abstract: The combinatorial auction problem can be modeled as a weighted set packing problem. Similarly the reverse combinatorial auction can be modeled as a weighted set covering problem. We use the set packing and set covering formulations to suggest novel iterative Dutch auction algorithms for combinatorial auction problems. We use generalized Vickrey auctions (GVA) with reserve prices in each iteration. We prove the convergence of the algorithms and show that the solutions obtained using the algorithms lie within provable worst case bounds. We conduct numerical experiments to show that in general the solutions obtained using these algorithms are much better than the theoretical bounds.
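For reference, the weighted set packing model mentioned above can be written out: with bids $j$ offering price $w_j$ for a bundle $S_j \subseteq I$ of items, winner determination in the forward combinatorial auction is

```latex
\max \sum_{j} w_j\, x_j
\quad \text{s.t.} \quad
\sum_{j \,:\, i \in S_j} x_j \le 1 \;\; \text{for all items } i \in I,
\qquad x_j \in \{0,1\},
```

and the reverse auction replaces the packing constraints with covering constraints $\sum_{j : i \in S_j} x_j \ge 1$ and minimizes total cost.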

Journal ArticleDOI
TL;DR: This work shows the connection between classical independence of frames as Boolean subalgebras and independence of frames as elements of a locally finite Birkhoff lattice, and suggests a potential algebraic solution of the conflict problem.
Abstract: One of the major ideas of Shafer's mathematical theory of evidence is the introduction of uncertainty descriptions on different representation domains of phenomena, called families of compatible frames of discernment. Here we are going to analyze these families of frames from an algebraic point of view, study the properties of minimal refinements of collections of domains and introduce the internal operation of maximal coarsening to establish the structure of semimodular lattice. Motivated by the search for a solution of the conflict problem that arises in sensor fusion applications, we will show the connection between classical independence of frames as Boolean subalgebras and independence of frames as elements of a locally finite Birkhoff lattice. This will eventually suggest a potential algebraic solution of the conflict problem.

Journal ArticleDOI
TL;DR: It is proved that Max SAT with unrestricted weights is NP-hard for the class of graph formulas, where Min SAT can be solved in polynomial time, and that PSAT is NP-complete for ideal formulas.
Abstract: Both probabilistic satisfiability (PSAT) and the check of coherence of probability assessment (CPA) can be considered as probabilistic counterparts of the classical propositional satisfiability problem (SAT). Actually, CPA turns out to be a particular case of PSAT; in this paper, we compare the computational complexity of these two problems for some classes of instances. First, we point out the relations between these probabilistic problems and two well known optimization counterparts of SAT, namely Max SAT and Min SAT. We then prove that Max SAT with unrestricted weights is NP-hard for the class of graph formulas, where Min SAT can be solved in polynomial time. In light of the aforementioned relations, we conclude that PSAT is NP-complete for ideal formulas, where CPA can be solved in linear time.
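For readers unfamiliar with PSAT, the standard (Nilsson-style) formulation behind the connection to optimization versions of SAT is: an instance $\{(\phi_i, p_i)\}_{i=1}^m$ is satisfiable iff there is a probability distribution $\pi$ over the truth assignments $w$ with

```latex
\sum_{w \,\models\, \phi_i} \pi(w) = p_i \quad (i = 1,\dots,m),
\qquad
\sum_{w} \pi(w) = 1,
\qquad
\pi(w) \ge 0 ,
```

and, as the abstract notes, the coherence check CPA arises as a particular case of this problem.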

Journal ArticleDOI
TL;DR: This paper extends the spine order parameter of Bollobás et al. to random constraint satisfaction problems, studies the connection between the order of phase transitions in combinatorial problems and the complexity of decision algorithms for such problems, and shows that a discontinuity of the spine is associated with a $2^{\Omega(n)}$ resolution complexity.
Abstract: We study the connection between the order of phase transitions in combinatorial problems and the complexity of decision algorithms for such problems. We rigorously show that, for a class of random constraint satisfaction problems, a limited connection between the two phenomena indeed exists. Specifically, we extend the definition of the spine order parameter of Bollobás et al. [10] to random constraint satisfaction problems, rigorously showing that for such problems a discontinuity of the spine is associated with a $2^{\Omega(n)}$ resolution complexity (and thus a $2^{\Omega(n)}$ complexity of DPLL algorithms) on random instances. The two phenomena have a common underlying cause: the emergence of "large" (linear size) minimally unsatisfiable subformulas of a random formula at the satisfiability phase transition. We present several further results that add weight to the intuition that random constraint satisfaction problems with a sharp threshold and a continuous spine are "qualitatively similar to random 2-SAT". Finally, we argue that it is the spine rather than the backbone parameter whose continuity has implications for the decision complexity of combinatorial problems, and we provide experimental evidence that the two parameters can behave in a different manner.

Journal ArticleDOI
TL;DR: A dichotomy result is proved: any generalized satisfiability local search problem is either in P or PLS-complete, which contributes to a better understanding of the complexity class PLS through the identification of an appropriate tool that captures reducibility among Boolean constraint satisfaction local search problems: sensitive implementation.
Abstract: The class of generalized satisfiability problems, first introduced by Schaefer in 1978, presents a uniform way of studying the complexity of Boolean constraint satisfaction problems with respect to the nature of constraints allowed in the input. We investigate the complexity of local search for this class of problems. We prove a dichotomy result: any generalized satisfiability local search problem is either in P or PLS-complete. In the meantime, our study contributes to a better understanding of the complexity class PLS through the identification of an appropriate tool that captures reducibility among Boolean constraint satisfaction local search problems: sensitive implementation.

Journal ArticleDOI
TL;DR: In this paper, the exact 3-satisfiability (X3SAT) problem was shown to be deterministically decidable in time $O(2^{0.18674n})$.
Abstract: Let $F = C_1 \wedge \cdots \wedge C_m$ be a Boolean formula in conjunctive normal form over a set $V$ of $n$ propositional variables, s.t. each clause $C_i$ contains at most three literals $l$ over $V$. Solving the problem exact 3-satisfiability (X3SAT) for $F$ means to decide whether there is a truth assignment setting exactly one literal in each clause of $F$ to true (1). As is well known X3SAT is NP-complete [6]. By exploiting a perfect matching reduction we prove that X3SAT is deterministically decidable in time $O(2^{0.18674n})$. Thereby we improve a result in [2,3] stating X3SAT $\in O(2^{0.2072n})$ and a bound of $O(2^{0.200002n})$ for the corresponding enumeration problem #X3SAT stated in a preprint [1]. After that, by a more involved deterministic case analysis, we are able to show that X3SAT $\in O(2^{0.16254n})$.
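The "exactly one literal true per clause" condition that defines X3SAT is easy to state as a brute-force verifier; the sketch below is only a definitional illustration and is unrelated to the paper's $O(2^{0.16254n})$ algorithm.

```python
from itertools import product

def x3sat_brute_force(clauses, n):
    """Return a truth assignment setting exactly one literal true in every clause,
    or None. Variables are 1..n; clauses are lists of signed integers."""
    for bits in product([False, True], repeat=n):
        assignment = dict(zip(range(1, n + 1), bits))
        if all(sum(assignment[abs(l)] == (l > 0) for l in c) == 1 for c in clauses):
            return assignment
    return None

print(x3sat_brute_force([[1, 2, 3], [-1, 2]], 3))
```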

Journal ArticleDOI
TL;DR: In this paper, the reliability of tree approximations is improved by using the imprecise Dirichlet model, which results in posterior interval expectation for mutual information, and in a set of plausible trees consistent with the data.
Abstract: This paper is concerned with the reliable inference of optimal tree-approximations to the dependency structure of an unknown distribution generating data. The traditional approach to the problem measures the dependency strength between random variables by the index called mutual information. In this paper reliability is achieved by Walley's imprecise Dirichlet model, which generalizes Bayesian learning with Dirichlet priors. Adopting the imprecise Dirichlet model results in posterior interval expectation for mutual information, and in a set of plausible trees consistent with the data. Reliable inference about the actual tree is achieved by focusing on the substructure common to all the plausible trees. We develop an exact algorithm that infers the substructure in time $O(m^4)$, $m$ being the number of random variables. The new algorithm is applied to a set of data sampled from a known distribution. The method is shown to reliably infer edges of the actual tree even when the data are very scarce, unlike the traditional approach. Finally, we provide lower and upper credibility limits for mutual information under the imprecise Dirichlet model. These enable the previous developments to be extended to a full inferential method for trees.
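The classical, precise-probability baseline that the paper makes reliable is the Chow–Liu construction: estimate pairwise mutual information and take a maximum spanning tree. A minimal sketch of that baseline (not of the imprecise-Dirichlet extension, and using numpy/networkx as assumed tooling) could look like this:

```python
import numpy as np
import networkx as nx

def mutual_information(x, y):
    """Plug-in estimate of the mutual information between two discrete samples."""
    n = len(x)
    joint = {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    px = {a: np.count_nonzero(x == a) / n for a in set(x)}
    py = {b: np.count_nonzero(y == b) / n for b in set(y)}
    return sum((c / n) * np.log((c / n) / (px[a] * py[b])) for (a, b), c in joint.items())

def chow_liu_tree(data):
    """data: 2-D array with one column per variable; returns the maximum spanning
    tree over pairwise mutual information (the classical dependency-tree step)."""
    G = nx.Graph()
    m = data.shape[1]
    for i in range(m):
        for j in range(i + 1, m):
            G.add_edge(i, j, weight=mutual_information(data[:, i], data[:, j]))
    return nx.maximum_spanning_tree(G)

rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 500)
x1 = np.where(rng.random(500) < 0.9, x0, 1 - x0)   # strongly dependent on x0
x2 = rng.integers(0, 2, 500)                        # independent noise
print(sorted(chow_liu_tree(np.column_stack([x0, x1, x2])).edges()))
```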

Journal ArticleDOI
TL;DR: It is shown that reasoning via entailment with universal near surety is equivalent to reasoning in a particular type of argumentation system having the property that when two subsets of the rule base conflict with each other, the effectively more specific subset overrides the other.
Abstract: Rules having rare exceptions may be interpreted as assertions of high conditional probability. In other words, a rule "If X then Y" may be interpreted as meaning that Pr(Y|X) ≈ 1. A general approach to reasoning with such rules, based on second-order probability, is advocated. Within this general approach, different reasoning methods are needed, with the selection of a specific method being dependent upon what knowledge is available about the relative sizes, across rules, of upper bounds on each rule's exception probabilities Pr(¬Y|X). A method of reasoning, entailment with universal near surety, is formulated for the case when no information is available concerning the relative sizes of upper bounds on exception probabilities. Any conclusion attained under these conditions is robust in the sense that it will still be attained if information about the relative sizes of exception probability bounds becomes available. It is shown that reasoning via entailment with universal near surety is equivalent to reasoning in a particular type of argumentation system having the property that when two subsets of the rule base conflict with each other, the effectively more specific subset overrides the other. As stepping stones toward attaining this argumentation result, theorems are proved characterizing entailment with universal near surety in terms of upper envelopes of probability measures, upper envelopes of possibility measures, and directed graphs. In addition, various attributes of entailment with universal near surety, including property inheritance, are examined.

Journal ArticleDOI
TL;DR: This work describes a new single-phase approach that, under a simple cost model, can be encoded and solved as a SAT problem and uses it to address three of the ten SAT challenges posed by Selman, Kautz and McAllester in 1997.
Abstract: Mediator systems integrate distributed, heterogeneous and autonomous data sources, but their effective use requires the solution of hard query optimization problems. This is usually done in two phases: the selection of a set of data sources is similar to a set covering problem, and their ordering into a feasible and efficient query is a capability restricted join order problem. However, a two-phase approach is unlikely to find optimum queries. We describe a new single-phase approach that, under a simple cost model, can be encoded and solved as a SAT problem. Results on artificial benchmarks indicate that this is an interesting problem from the encoding and search viewpoints, and we use them to address three of the ten SAT challenges posed by Selman, Kautz and McAllester in 1997.

Journal ArticleDOI
TL;DR: A simple measure of expressiveness: the number of formulas expressible by a language, up to semantic equivalence, is studied and a dichotomy theorem on constraint languages regarding this measure is proved.
Abstract: In reasoning tasks involving logical formulas, high expressiveness is desirable, although it often leads to high computational complexity. We study a simple measure of expressiveness: the number of formulas expressible by a language, up to semantic equivalence. In the context of constraints, we prove a dichotomy theorem on constraint languages regarding this measure.

Journal ArticleDOI
TL;DR: In this paper, the authors present a model for an investor in a frictionless market that combines investors' incentives in the form of pre-existing liability structures with a derivatives pricing procedure tailored to a particular investor.
Abstract: A fundamental question that arises in derivative pricing is why investors trade in a particular derivative at a "fair" price supplied by Arbitrage Pricing Theory (APT). APT establishes a price that is fair for a disinterested investor with a particular set of beliefs about market evolution and attributes trading to differences in those beliefs entertained by the opposite sides of the transaction. We present a model for an investor in a frictionless market that combines investors' incentives in the form of pre-existing liability structures with a derivatives pricing procedure tailored to a particular investor. This model enables us to show, through a series of experiments, that investors trade even when their belief structures are identical and accurate. More generally, our study suggests that multi-agent simulation of a financial market can provide a mechanism for conducting experiments that shed light on fundamental properties of the market. As all processes in financial markets (including decision making) become automated, it becomes crucial to have a mechanism by which we can observe the patterns that emerge from a variety of possible investor behaviors. Our simulator, designed as a dealer's market, provides such a mechanism within a certain range of models.