Journal Article

Integer programming, lattices, and results in fixed dimension

01 Jan 2005 - Discrete Optimization (North-Holland Publishing Company) - Vol. 12, pp 171
TL;DR: In this article, various lattice basis reduction algorithms are used as auxiliary algorithms when solving integer feasibility and optimization problems. Three algorithms are described: binary search, a linear algorithm for a fixed number of constraints, and a randomized algorithm for a varying number of constraints.
About: This article is published in Discrete Optimization. The article was published on 2005-01-01 and is currently open access. It has received 31 citations to date. The article focuses on the topics: Integer programming & Lattice reduction.
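The binary-search idea mentioned in the TL;DR can be illustrated with a short sketch: optimize an integer objective by repeatedly asking a feasibility oracle whether a solution with objective value at least some threshold exists. The oracle `is_feasible` below is a hypothetical stand-in for the lattice-reduction-based feasibility test discussed in the article; this is a minimal sketch, not the article's algorithm.

```python
def maximize_via_feasibility(is_feasible, lo, hi):
    """Binary search for the largest integer t in [lo, hi] such that
    is_feasible(t) is True, using O(log(hi - lo)) oracle calls.

    is_feasible(t) answers: does an integer point with objective value
    at least t exist?  (Hypothetical stand-in for a lattice-based test.)
    """
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_feasible(mid):
            best = mid          # a solution of value >= mid exists
            lo = mid + 1        # try to push the bound higher
        else:
            hi = mid - 1        # no solution of value >= mid
    return best

# Toy usage: "feasible" means some x in {0,...,7} with 3*x >= t exists.
print(maximize_via_feasibility(lambda t: 3 * 7 >= t, 0, 100))  # -> 21
```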
Citations
01 Jan 2005
TL;DR: A representation of convex simple integer recourse problems as continuous simple recourse problems is presented, so that they can be solved by existing special-purpose algorithms.
Abstract: We consider the objective function of a simple recourse problem with fixed technology matrix and integer second-stage variables. Separability due to the simple recourse structure allows us to study a one-dimensional version instead. Based on an explicit formula for the objective function, we derive a complete description of the class of probability density functions such that the objective function is convex. This result is also stated in terms of random variables. Next, we present a class of convex approximations of the objective function, which are obtained by perturbing the distributions of the right-hand side parameters. We derive a uniform bound on the absolute error of the approximation. Finally, we give a representation of convex simple integer recourse problems as continuous simple recourse problems, so that they can be solved by existing special-purpose algorithms.
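The one-dimensional function mentioned in the abstract is, in the usual simple-integer-recourse notation, of the form Q(x) = q+ E[ceil(w - x)^+] + q- E[floor(w - x)^-]. The sketch below estimates such a function by sampling; the exact form, the penalty values, and the normal distribution are assumptions for illustration, not taken from the abstract.

```python
import math
import random

def simple_integer_recourse(x, q_plus, q_minus, sample, n=100_000, seed=0):
    """Monte Carlo estimate of a one-dimensional simple integer recourse
    objective Q(x) = q_plus * E[ceil(w - x)^+] + q_minus * E[floor(w - x)^-],
    where w is the random right-hand side (assumed form, for illustration).
    `sample(rng)` draws one realization of w."""
    rng = random.Random(seed)
    shortage = surplus = 0.0
    for _ in range(n):
        w = sample(rng)
        shortage += max(math.ceil(w - x), 0)   # demand exceeds x, rounded up
        surplus += max(-math.floor(w - x), 0)  # x exceeds demand, rounded up
    return q_plus * shortage / n + q_minus * surplus / n

# Example: w ~ Normal(10, 2); evaluate Q at a few candidate x values.
for x in (8.0, 10.0, 12.0):
    print(x, round(simple_integer_recourse(x, 1.5, 1.0,
                                           lambda r: r.gauss(10, 2)), 3))
```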

36 citations

01 Jan 2005
TL;DR: In this paper, the problem of synthesizing Reo coordination code from a specification of a behavior as a relation on scheduled-data streams is considered, where the specification is given as a constraint automaton that describes the desired input/output behavior at the ports of the components.
Abstract: Composition of a concurrent system out of components involves coordination of their mutual interactions. In component-based construction, this coordination becomes the responsibility of the glue-code language and its underlying run-time middleware. Reo offers an expressive glue-language for construction of coordinating component connectors out of primitive channels. In this paper we consider the problem of synthesizing Reo coordination code from a specification of a behavior as a relation on scheduled-data streams. The specification is given as a constraint automaton that describes the desired input/output behavior at the ports of the components. The main contribution of this paper is an algorithm that generates Reo code from a given constraint automaton.
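A constraint automaton, the input format the synthesis algorithm consumes, can be represented as a plain labelled transition system: states, plus transitions labelled with the set of ports that fire synchronously and a data constraint. The minimal Python sketch below is only an illustration of that shape; the field names and the FIFO example are our own, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    source: str
    ports: frozenset   # ports that fire together on this step
    guard: str         # data constraint over the firing ports, e.g. "d(B) == buffer"
    target: str

@dataclass
class ConstraintAutomaton:
    states: set
    ports: set
    initial: str
    transitions: list = field(default_factory=list)

# Illustrative 1-place buffer between ports A and B as a constraint automaton:
fifo1 = ConstraintAutomaton(
    states={"empty", "full"},
    ports={"A", "B"},
    initial="empty",
    transitions=[
        Transition("empty", frozenset({"A"}), "buffer := d(A)", "full"),
        Transition("full", frozenset({"B"}), "d(B) == buffer", "empty"),
    ],
)
print(len(fifo1.transitions), fifo1.initial)
```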

30 citations

Book Chapter
01 Jan 2009
TL;DR: In this article, the authors discuss the formulation of two optimization models for selecting a general branching disjunction, then describe methods of solution using a standard MILP solver, and report on computational experiments carried out to study the effects of branching on such disjunctions.
Abstract: Branching is an important component of the branch-and-cut algorithm for solving mixed integer linear programs. Most solvers branch by imposing a disjunction of the form “\(x_i \leq k \vee x_i \geq k + 1\)” for some integer k and some integer-constrained variable \(x_i\). A generalization of this branching scheme is to branch by imposing a more general disjunction of the form “\(\pi x \leq \pi_0 \vee \pi x \geq \pi_0 + 1\)”. In this paper, we discuss the formulation of two optimization models for selecting such a branching disjunction and then describe methods of solution using a standard MILP solver. We report on computational experiments carried out to study the effects of branching on such disjunctions.
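A minimal sketch of the branching step itself, assuming SciPy's `linprog` as the LP solver: the parent relaxation is solved once, and two children are created by appending the rows for πx ≤ π0 and πx ≥ π0 + 1. This only illustrates general-disjunction branching, not the disjunction-selection models of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def branch_on_disjunction(c, A_ub, b_ub, bounds, pi, pi0):
    """Solve the LP relaxation min c@x s.t. A_ub@x <= b_ub, then return the
    two child relaxations obtained from the general disjunction
    pi@x <= pi0  OR  pi@x >= pi0 + 1 (not just the usual x_i <= k branching)."""
    parent = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    left = (np.vstack([A_ub, pi]), np.append(b_ub, pi0))           # pi@x <= pi0
    right = (np.vstack([A_ub, -pi]), np.append(b_ub, -(pi0 + 1)))  # pi@x >= pi0+1
    children = [linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
                for A, b in (left, right)]
    return parent, children

# Toy example: min -x0 - x1 subject to 2*x0 + 2*x1 <= 3, x >= 0,
# branching on the disjunction x0 + x1 <= 1  OR  x0 + x1 >= 2.
c = np.array([-1.0, -1.0])
A = np.array([[2.0, 2.0]])
b = np.array([3.0])
parent, kids = branch_on_disjunction(c, A, b, [(0, None), (0, None)],
                                     np.array([1.0, 1.0]), 1)
print(parent.fun, [k.fun if k.success else None for k in kids])  # -1.5 [-1.0, None]
```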

30 citations

Proceedings Article
03 Jun 2013
TL;DR: This work approaches the problem of providing high-level and intelligible descriptions of the motivations of an agent, based on observations of that agent during the fulfillment of a series of complex activities (called sequential decisions in this work), and presents a methodology that allows researchers to converge towards a summary description of an agent's behaviors.
Abstract: The execution of an agent's complex activities, comprising sequences of simpler actions, sometimes leads to the clash of conflicting functions that must be optimized. These functions represent satisfaction, short-term as well as long-term objectives, costs and individual preferences. The way that these functions are weighted is usually unknown even to the decision maker. But if we were able to understand the individual motivations and compare such motivations among individuals, then we would be able to actively change the environment so as to increase satisfaction and/or improve performance. In this work, we approach the problem of providing high-level and intelligible descriptions of the motivations of an agent, based on observations of such an agent during the fulfillment of a series of complex activities (called sequential decisions in our work). A novel algorithm for the analysis of observational records is proposed. We also present a methodology that allows researchers to converge towards a summary description of an agent's behaviors, through the minimization of an error measure between the current description and the observed behaviors. This work was validated using not only a synthetic dataset representing the motivations of a passenger in a public transportation network, but also real taxi drivers' behaviors from their trips in an urban network. Our results show that our method is not only useful, but also performs much better than the previous methods, in terms of accuracy, efficiency and scalability.
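The core fitting step described here, minimizing an error measure between a candidate weighting of objectives and the observed choices, can be sketched with an off-the-shelf optimizer. Everything below (the synthetic situations, the feature columns, the hinge-style error) is an illustrative assumption, not the paper's algorithm or data.

```python
import numpy as np
from scipy.optimize import minimize

# Each situation: a few alternatives described by cost-like features
# (e.g. travel time, fare) and the index of the alternative the agent chose.
situations = [
    (np.array([[10.0, 2.0], [6.0, 5.0]]), 0),
    (np.array([[8.0, 3.0], [4.0, 9.0]]), 0),
    (np.array([[12.0, 4.0], [20.0, 1.0]]), 1),
]

def disagreement(w_raw):
    """Hinge-style error: how much better (under weights w, lower is better)
    some non-chosen alternative scores than the observed choice."""
    w = np.abs(w_raw) / (np.abs(w_raw).sum() + 1e-12)  # normalize the weights
    err = 0.0
    for feats, chosen in situations:
        scores = feats @ w
        others = np.delete(scores, chosen)
        err += max(0.0, scores[chosen] - others.min())
    return err

res = minimize(disagreement, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
w_fit = np.abs(res.x) / np.abs(res.x).sum()
print("fitted weights:", w_fit, "remaining error:", res.fun)
```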

26 citations

Journal Article
TL;DR: A semantic approach to black-box test coverage is introduced, with a framework of coverage measures that express how well a test suite covers a specification, taking the error weights into account.
Abstract: Since testing is inherently incomplete, test selection has vital importance. Coverage measures evaluate the quality of a test suite and help the tester select test cases with maximal impact at minimum cost. Existing coverage criteria for test suites are usually defined in terms of syntactic characteristics of the implementation under test or its specification. Typical black-box coverage metrics are state and transition coverage of the specification. White-box testing often considers statement, condition and path coverage. A disadvantage of this syntactic approach is that different coverage figures are assigned to systems that are behaviorally equivalent, but syntactically different. Moreover, those coverage metrics do not take into account that certain failures are more severe than others, and that more testing effort should be devoted to uncover the most important bugs, while less critical system parts can be tested less thoroughly. This paper introduces a semantic approach to black-box test coverage. Our starting point is a weighted fault model (or WFM), which augments a specification by assigning a weight to each error that may occur in an implementation. We define a framework of coverage measures that express how well a test suite covers such a specification, taking into account the error weight. Since our notions are semantic, they are insensitive to replacing a specification by one with equivalent behaviour. We present several algorithms that, given a certain minimality criterion, compute a minimal test suite with maximal coverage. These algorithms work on a syntactic representation of WFMs as fault automata. They are based on existing and novel optimization problems. Finally, we illustrate our approach by analyzing and comparing a number of test suites for a chat protocol.
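The coverage notion sketched in the abstract, where each potential fault carries a weight and a suite is judged by the weight it can uncover, admits a very small illustration. The fault/test encoding below is our own simplification of a weighted fault model, not the paper's fault-automata representation.

```python
def weighted_coverage(fault_weights, detects):
    """Relative coverage of a test suite over a weighted fault model.

    fault_weights: dict fault -> weight (severity of that potential error)
    detects:       dict test  -> set of faults that test would uncover
    Returns covered weight divided by total weight.
    """
    covered = set().union(*detects.values()) if detects else set()
    total = sum(fault_weights.values())
    return sum(w for f, w in fault_weights.items() if f in covered) / total

# Toy weighted fault model: losing a message is worse than a wrong greeting.
faults = {"drops_message": 10.0, "wrong_greeting": 1.0, "slow_reply": 2.0}
suite = {"test_send_receive": {"drops_message"},
         "test_login_banner": {"wrong_greeting"}}
print(weighted_coverage(faults, suite))   # (10 + 1) / 13 ≈ 0.846
```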

23 citations

References
Proceedings Article
03 May 1971
TL;DR: It is shown that any recognition problem solved by a polynomial time-bounded nondeterministic Turing machine can be “reduced” to the problem of determining whether a given propositional formula is a tautology.
Abstract: It is shown that any recognition problem solved by a polynomial time-bounded nondeterministic Turing machine can be “reduced” to the problem of determining whether a given propositional formula is a tautology. Here “reduced” means, roughly speaking, that the first problem can be solved deterministically in polynomial time provided an oracle is available for solving the second. From this notion of reducible, polynomial degrees of difficulty are defined, and it is shown that the problem of determining tautologyhood has the same polynomial degree as the problem of determining whether the first of two given graphs is isomorphic to a subgraph of the second. Other examples are discussed. A method of measuring the complexity of proof procedures for the predicate calculus is introduced and discussed.
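For intuition only: the tautology problem targeted by the reduction asks whether a propositional formula is true under every assignment. The brute-force checker below enumerates all 2^n assignments, which is exactly the exponential behaviour the notion of polynomial-time reducibility is meant to sidestep; it is an illustration, not anything from the paper.

```python
from itertools import product

def is_tautology(formula, variables):
    """Check whether `formula` (a function of boolean keyword arguments)
    is true under all 2^n assignments to `variables` -- brute force."""
    return all(formula(**dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# (p or not p) is a tautology; (p or q) is not.
print(is_tautology(lambda p: p or not p, ["p"]))        # True
print(is_tautology(lambda p, q: p or q, ["p", "q"]))    # False
```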

6,675 citations

01 Jan 1982
TL;DR: In this paper, a polynomial-time algorithm was proposed to factor a primitive polynomial (one whose coefficients have greatest common divisor 1) into irreducible factors in Z[X].
Abstract: In this paper we present a polynomial-time algorithm to solve the following problem: given a non-zero polynomial \(f \in \mathbb{Q}[X]\) in one variable with rational coefficients, find the decomposition of f into irreducible factors in \(\mathbb{Q}[X]\). It is well known that this is equivalent to factoring primitive polynomials \(f \in \mathbb{Z}[X]\) into irreducible factors in \(\mathbb{Z}[X]\). Here we call \(f \in \mathbb{Z}[X]\) primitive if the greatest common divisor of its coefficients (the content of f) is 1. Our algorithm performs well in practice, cf. [8]. Its running time, measured in bit operations, is \(O(n^{12} + n^{9}(\log|f|)^{3})\).
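The factorization problem this algorithm solves is available off the shelf today; a short sketch with SymPy is shown below, where `factor_list` returns the content together with the irreducible factors and their multiplicities. SymPy's internal algorithm is not necessarily the one described in the paper.

```python
from sympy import symbols, factor_list

X = symbols("X")

# Factor a polynomial with rational coefficients into irreducible factors.
f = X**4 - 1  # = (X - 1)(X + 1)(X**2 + 1) over Q
content, factors = factor_list(f)
print(content)   # content of the polynomial, here 1
print(factors)   # list of (irreducible factor, multiplicity) pairs
```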

3,248 citations

Journal Article
TL;DR: Empirical tests show that the strongest of these algorithms solves almost all subset sum problems with up to 66 random weights of arbitrary bit length within at most a few hours on a UNISYS 6000/70 or within a couple of minutes on a SPARC1+ computer.
Abstract: We report on improved practical algorithms for lattice basis reduction. We present a variant of the L3-algorithm with “deep insertions” and a practical algorithm for blockwise Korkine-Zolotarev reduction, a concept extending L3-reduction, that has been introduced by Schnorr (1987). Empirical tests show that the strongest of these algorithms solves almost all subset sum problems with up to 58 random weights of arbitrary bit length within at most a few hours on a UNISYS 6000/70 or within a couple of minutes on a SPARC 2 computer.
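The link between lattice basis reduction and subset sum rests on a standard lattice encoding: the weights go into one heavily scaled column of a basis, and any sufficiently short lattice vector with 0/1 entries spells out a selection hitting the target. The sketch below only builds that basis and checks a candidate vector; the reduction step itself would be done by an L3/BKZ implementation such as the algorithms reported here. The exact basis scaling is a common textbook variant, not necessarily the one used in the paper.

```python
import numpy as np

def subset_sum_basis(weights, target, scale=None):
    """Lattice basis (rows are basis vectors) for the subset sum instance
    (a_1..a_n, s): a short lattice vector with entries in {0, 1} in its first
    n coordinates and 0 in the last coordinate encodes a solution."""
    n = len(weights)
    N = scale if scale is not None else n * max(weights)  # weight of the sum column
    B = np.zeros((n + 1, n + 1), dtype=np.int64)
    B[:n, :n] = np.eye(n, dtype=np.int64)
    B[:n, n] = np.asarray(weights) * N
    B[n, n] = -target * N
    return B

def decodes_solution(vector, weights, target):
    """Does this lattice vector encode a subset summing to the target?"""
    x, last = vector[:-1], vector[-1]
    return last == 0 and set(x) <= {0, 1} and int(np.dot(x, weights)) == target

weights, target = [3, 5, 9, 14], 17
B = subset_sum_basis(weights, target)
# The combination of weight rows 0 and 3 plus the last row is short:
v = B[0] + B[3] + B[4]
print(v, decodes_solution(v, weights, target))   # [1 0 0 1 0] True
```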

1,390 citations

Posted Content
TL;DR: A new "normalized information distance" is proposed, based on the noncomputable notion of Kolmogorov complexity; it is demonstrated to be a metric and is called the similarity metric.
Abstract: A new class of distances appropriate for measuring similarity relations between sequences, say one type of similarity per distance, is studied. We propose a new "normalized information distance", based on the noncomputable notion of Kolmogorov complexity, and show that it is in this class and it minorizes every computable distance in the class (that is, it is universal in that it discovers all computable similarities). We demonstrate that it is a metric and call it the similarity metric. This theory forms the foundation for a new practical tool. To evidence generality and robustness we give two distinctive applications in widely divergent areas using standard compression programs like gzip and GenCompress. First, we compare whole mitochondrial genomes and infer their evolutionary history. This results in a first completely automatic computed whole mitochondrial phylogeny tree. Secondly, we fully automatically compute the language tree of 52 different languages.
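The noncomputable Kolmogorov complexity in the distance is, as the abstract notes, approximated in practice by an ordinary compressor; the resulting normalized compression distance can be sketched in a few lines. Python's zlib (gzip-style compression) is used here as the stand-in compressor; the example strings are our own.

```python
import zlib

def C(data: bytes) -> int:
    """Compressed length as a practical stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, the practical approximation of the
    normalized information distance:
    (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox leaps over the lazy cat " * 20
c = b"colorless green ideas sleep furiously tonight " * 20
print(round(ncd(a, b), 3), round(ncd(a, c), 3))   # the similar pair scores lower
```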

1,015 citations

Journal Article
TL;DR: Two ways of implementing the algorithm are considered: multitape Turing machines and logical nets (with a binary logical element as the unit step).
Abstract: An algorithm for computing the product of two N-digit binary numbers is presented. Two kinds of realization are considered: Turing machines with several tapes and logical nets (built from two-input logical elements).
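For contrast with the fast multiplication method reported here, the baseline it improves on is the O(N^2) schoolbook shift-and-add multiplication of two N-digit binary numbers; a small sketch of that baseline (explicitly not the paper's algorithm) follows.

```python
def schoolbook_multiply(a_bits: str, b_bits: str) -> str:
    """Multiply two binary numbers given as bit strings with the O(N^2)
    shift-and-add schoolbook method -- the baseline that fast multiplication
    algorithms (such as the one reported here) improve upon."""
    a = [int(d) for d in reversed(a_bits)]   # least significant bit first
    b = [int(d) for d in reversed(b_bits)]
    out = [0] * (len(a) + len(b))
    for j, bj in enumerate(b):               # one shifted partial product per bit of b
        carry = 0
        for i, ai in enumerate(a):
            s = out[i + j] + ai * bj + carry
            out[i + j] = s & 1
            carry = s >> 1
        out[j + len(a)] += carry
    digits = "".join(str(d) for d in reversed(out)).lstrip("0")
    return digits or "0"

print(schoolbook_multiply("1101", "101"))    # 13 * 5 = 65 -> "1000001"
```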

976 citations