Author

Raymond Hemmecke

Bio: Raymond Hemmecke is an academic researcher at Technische Universität München. His research focuses on Graver bases and integer programming. He has an h-index of 25 and has co-authored 90 publications receiving 2160 citations. Previous affiliations include Otto-von-Guericke University Magdeburg and the University of California.


Papers
Journal ArticleDOI
TL;DR: Describes LattE, a computer package for lattice point enumeration that contains the first implementation of A. Barvinok's algorithm, and shows that these symbolic–algebraic ideas surpass traditional branch-and-bound enumeration; in some instances LattE is the only software capable of performing the count.
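For context, the enumeration that these symbolic–algebraic ideas surpass can be made concrete. Below is a minimal brute-force sketch of lattice point counting over a bounding box; it is not Barvinok's algorithm and not LattE's interface (the function name and data layout are illustrative), just the naive baseline that counting software must beat.

```python
# Naive lattice point counting for {x : A x <= b} inside a bounding box.
# Exponential in the dimension; Barvinok's algorithm avoids exactly this.
import itertools

def count_lattice_points(A, b, box):
    """Count integer points x with A x <= b componentwise, where box
    is a list of (lo, hi) integer bounds, one pair per coordinate."""
    count = 0
    for x in itertools.product(*(range(lo, hi + 1) for lo, hi in box)):
        if all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
               for row, bi in zip(A, b)):
            count += 1
    return count

# Example: the triangle x >= 0, y >= 0, x + y <= 10 has 66 lattice points.
print(count_lattice_points([[1, 1]], [10], [(0, 10), (0, 10)]))  # -> 66
```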

299 citations

Book ChapterDOI
01 Dec 2010
TL;DR: This chapter studies a simple version of general nonlinear integer problems in which all constraints are still linear, focusing on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure.
Abstract: Research efforts of the past fifty years have led to a development of linear integer programming as a mature discipline of mathematical optimization. Such a level of maturity has not been reached when one considers nonlinear systems subject to integrality requirements for the variables. This chapter is dedicated to this topic. The primary goal is a study of a simple version of general nonlinear integer problems, where all constraints are still linear. Our focus is on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure. Numerous boundary cases of complexity emerge, which sometimes surprisingly lead even to polynomial time algorithms. We also cover recent successful approaches for more general classes of problems. Though no positive theoretical efficiency results are available, nor are they likely to ever be available, these seem to be the currently most successful and interesting approaches for solving practical problems. It is our belief that the study of algorithms motivated by theoretical considerations and those motivated by our desire to solve practical instances should and do inform one another. So it is with this viewpoint that we present the subject, and it is in this direction that we hope to spark further research.
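In generic symbols (not taken from the chapter), the simple version described above is the linearly constrained nonlinear integer program:

```latex
\min\; f(x) \quad \text{subject to} \quad Ax \le b, \qquad x \in \mathbb{Z}^n,
```

with integer data A, b and a nonlinear objective f; the boundary cases of complexity arise from the interplay between the class of f (linear, convex, polynomial, ...) and the combinatorial structure of A.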

278 citations

Book
07 Feb 2013
TL;DR: Algebraic and Geometric Ideas in the Theory of Discrete Optimization offers several research technologies not yet well known among practitioners of discrete optimization, minimizes prerequisites for learning these methods, and provides a transition from linear discrete optimization to nonlinear discrete optimization.
Abstract: This book presents recent advances in the mathematical theory of discrete optimization, particularly those supported by methods from algebraic geometry, commutative algebra, convex and discrete geometry, generating functions, and other tools normally considered outside the standard curriculum in optimization. Algebraic and Geometric Ideas in the Theory of Discrete Optimization offers several research technologies not yet well known among practitioners of discrete optimization, minimizes prerequisites for learning these methods, and provides a transition from linear discrete optimization to nonlinear discrete optimization. Audience: This book can be used as a textbook for advanced undergraduates or beginning graduate students in mathematics, computer science, or operations research or as a tutorial for mathematicians, engineers, and scientists engaged in computation who wish to delve more deeply into how and why algorithms do or do not work. Contents: Part I: Established Tools of Discrete Optimization; Chapter 1: Tools from Linear and Convex Optimization; Chapter 2: Tools from the Geometry of Numbers and Integer Optimization; Part II: Graver Basis Methods; Chapter 3: Graver Bases; Chapter 4: Graver Bases for Block-Structured Integer Programs; Part III: Generating Function Methods; Chapter 5: Introduction to Generating Functions; Chapter 6: Decompositions of Indicator Functions of Polyhedra; Chapter 7: Barvinok's Short Rational Generating Functions; Chapter 8: Global Mixed-Integer Polynomial Optimization via Summation; Chapter 9: Multicriteria Integer Linear Optimization via Integer Projection; Part IV: Gröbner Basis Methods; Chapter 10: Computations with Polynomials; Chapter 11: Gröbner Bases in Integer Programming; Part V: Nullstellensatz and Positivstellensatz Relaxations; Chapter 12: The Nullstellensatz in Discrete Optimization; Chapter 13: Positivity of Polynomials and Global Optimization; Chapter 14: Epilogue.
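Part II of the book is built on Graver bases. For orientation, the standard definition, stated here in generic notation, is:

```latex
\mathcal{G}(A) \;=\; \bigl\{\, z \in \ker(A)\cap\mathbb{Z}^n \setminus \{0\} \;:\; z \text{ is } \sqsubseteq\text{-minimal} \,\bigr\},
\qquad
x \sqsubseteq y \;\Longleftrightarrow\; x_i y_i \ge 0 \ \text{and}\ |x_i| \le |y_i| \ \text{for all } i.
```

This set is finite and serves as a test set (optimality certificate) for integer programming, the theme of Part II.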

166 citations

Journal ArticleDOI
TL;DR: Via the equivalence of linear optimization and so-called directed augmentation, together with the stabilization of certain Graver bases, integer programming problems in variable dimension are shown to be solvable in polynomial time.
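The augmentation idea is simple to sketch. The following minimal Python sketch assumes a linear objective c, explicit lower/upper bounds, and a precomputed Graver basis G of the constraint matrix (computing G is the expensive part and is not shown); all names are illustrative, and this is the plain greedy variant rather than the directed augmentation of the article.

```python
def augment(x, c, G, lower, upper):
    """Greedy Graver augmentation for min c.x over
    {x : A x = b, lower <= x <= upper, x integer}.
    Every g in G satisfies A g = 0, so x + alpha*g keeps the equations;
    only the bounds limit the integer step length alpha."""
    directions = G + [[-gi for gi in g] for g in G]  # both signs
    improved = True
    while improved:
        improved = False
        for g in directions:
            # Largest step alpha >= 0 keeping x + alpha*g within bounds.
            alphas = [(hi - xi) // gi if gi > 0 else (xi - lo) // (-gi)
                      for xi, gi, lo, hi in zip(x, g, lower, upper)
                      if gi != 0]
            alpha = min(alphas, default=0)
            # Take the step only if it is feasible and lowers c.x.
            if alpha >= 1 and sum(ci * gi for ci, gi in zip(c, g)) < 0:
                x = [xi + alpha * gi for xi, gi in zip(x, g)]
                improved = True
    return x
```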

92 citations

Journal ArticleDOI
TL;DR: Presents an n-fold integer programming algorithm that runs in time O(n^3 L), where L is the binary length of the numerical part of the input, improving on the fastest previously known algorithm, whose running time O(n^{g(A)} L) depends on the Graver complexity g(A) of the bimatrix A defining the system.
Abstract: n-Fold integer programming is a fundamental problem with a variety of natural applications in operations research and statistics. Moreover, it is universal and provides a new, variable-dimension parametrization of all of integer programming. The fastest algorithm for n-fold integer programming predating the present article runs in time O(n^{g(A)} L), with L the binary length of the numerical part of the input and g(A) the so-called Graver complexity of the bimatrix A defining the system. In this article we provide a drastic improvement and establish an algorithm which runs in time O(n^3 L), having cubic dependency on n regardless of the bimatrix A. Our algorithm works for separable convex piecewise affine objectives as well. Moreover, it can be used to define a hierarchy of approximations for any integer programming problem.
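Here the bimatrix A consists of two blocks A1 and A2, and the n-fold system is defined by the standard block structure from the n-fold integer programming literature:

```latex
A^{(n)} \;=\;
\begin{pmatrix}
A_1 & A_1 & \cdots & A_1\\
A_2 & 0   & \cdots & 0\\
0   & A_2 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0   & 0   & \cdots & A_2
\end{pmatrix},
```

that is, n "bricks" coupled only through the A1 rows; this fixed structure is what allows a running time whose exponent in n is independent of A.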

91 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
TL;DR: A review of P. Billingsley's monograph Convergence of Probability Measures (Wiley, 1968).
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4″. 117s.

5,689 citations

Book
01 Jan 2004
TL;DR: A textbook covering monomial ideals (squarefree and Borel-fixed ideals, three-dimensional staircases, cellular resolutions, Alexander duality), toric algebra (semigroup rings, toric varieties, Ehrhart polynomials), determinantal ideals, and Hilbert schemes of points.
Abstract: Contents: Monomial Ideals; Squarefree monomial ideals; Borel-fixed monomial ideals; Three-dimensional staircases; Cellular resolutions; Alexander duality; Generic monomial ideals; Toric Algebra; Semigroup rings; Multigraded polynomial rings; Syzygies of lattice ideals; Toric varieties; Irreducible and injective resolutions; Ehrhart polynomials; Local cohomology; Determinants; Plücker coordinates; Matrix Schubert varieties; Antidiagonal initial ideals; Minors in matrix products; Hilbert schemes of points.

1,476 citations

Journal ArticleDOI
TL;DR: By sacrificing modest computation resources to save communication bandwidth and reduce transmission latency, fog computing can significantly improve the performance of cloud computing.
Abstract: Mobile users typically have high demand for localized and location-based information services. Always retrieving the localized data from the remote cloud, however, tends to be inefficient, which motivates fog computing. Fog computing, also known as edge computing, extends cloud computing by deploying localized computing facilities at the premises of users; these facilities prestore cloud data and distribute it to mobile users over fast local connections. As such, fog computing introduces an intermediate fog layer between mobile users and the cloud, and complements cloud computing toward low-latency, high-rate services to mobile users. In this fundamental framework, it is important to study the interplay and cooperation between the edge (fog) and the core (cloud). In this paper, the tradeoff between power consumption and transmission delay in the fog-cloud computing system is investigated. We formulate a workload allocation problem which seeks the optimal workload allocation between fog and cloud toward minimal power consumption under a constrained service delay. The problem is then tackled using an approximate approach that decomposes the primal problem into three subproblems of corresponding subsystems, each of which can be solved separately. Finally, based on simulations and numerical results, we show that by sacrificing modest computation resources to save communication bandwidth and reduce transmission latency, fog computing can significantly improve the performance of cloud computing.
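As an illustration only (generic symbols, not the paper's own notation), a workload allocation problem of this kind can be sketched as:

```latex
\min_{x}\;\; P_{\mathrm{fog}}(x) \;+\; P_{\mathrm{cloud}}(\lambda - x)
\qquad \text{s.t.} \qquad
D_{\mathrm{fog}}(x) + D_{\mathrm{cloud}}(\lambda - x) \;\le\; D_{\max},
\qquad 0 \le x \le \lambda,
```

where x is the workload served at the fog layer, λ the total workload, P the power consumption terms, and D_max the service delay budget.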

681 citations

Journal ArticleDOI
TL;DR: Describes the emerging area of mixed-integer optimal control, which adds systems of ordinary differential equations to MINLP, and discusses a range of approaches for tackling this challenging class of problems, including piecewise linear approximations, generic strategies for obtaining convex relaxations of non-convex functions, spatial branch-and-bound methods, and a small sample of techniques that exploit particular types of non-convex structure to obtain improved convex relaxations.
Abstract: Many optimal decision problems in scientific, engineering, and public sector applications involve both discrete decisions and nonlinear system dynamics that affect the quality of the final design or plan. These decision problems lead to mixed-integer nonlinear programming (MINLP) problems that combine the combinatorial difficulty of optimizing over discrete variable sets with the challenges of handling nonlinear functions. We review models and applications of MINLP, and survey the state of the art in methods for solving this challenging class of problems. Most solution methods for MINLP apply some form of tree search. We distinguish two broad classes of methods: single-tree and multitree methods. We discuss these two classes of methods first in the case where the underlying problem functions are convex. Classical single-tree methods include nonlinear branch-and-bound and branch-and-cut methods, while classical multitree methods include outer approximation and Benders decomposition. The most efficient class of methods for convex MINLP are hybrid methods that combine the strengths of both classes of classical techniques. Non-convex MINLPs pose additional challenges, because they contain non-convex functions in the objective function or the constraints; hence even when the integer variables are relaxed to be continuous, the feasible region is generally non-convex, resulting in many local minima. We discuss a range of approaches for tackling this challenging class of problems, including piecewise linear approximations, generic strategies for obtaining convex relaxations for non-convex functions, spatial branch-and-bound methods, and a small sample of techniques that exploit particular types of non-convex structures to obtain improved convex relaxations. We finish our survey with a brief discussion of three important aspects of MINLP. First, we review heuristic techniques that can obtain good feasible solutions in situations where the search tree has grown too large or we require real-time solutions. Second, we describe an emerging area of mixed-integer optimal control that adds systems of ordinary differential equations to MINLP. Third, we survey the state of the art in software for MINLP.
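Nonlinear branch-and-bound, the classical single-tree method named above, is easy to sketch in structure. The following minimal Python sketch assumes a hypothetical solve_relaxation oracle for the continuous relaxation (returning a point and its value, or None if infeasible); it is not taken from any MINLP solver's API.

```python
# Structural sketch of nonlinear branch-and-bound for convex MINLP.
# `bounds` maps each variable index to its (lo, hi) interval;
# `solve_relaxation(bounds)` is a hypothetical oracle for the NLP
# relaxation, returning (x, value) or None if the node is infeasible.
import math

def branch_and_bound(solve_relaxation, bounds, integer_vars):
    incumbent, best_x = math.inf, None
    stack = [bounds]
    while stack:
        node = stack.pop()
        result = solve_relaxation(node)
        if result is None:
            continue                      # infeasible node: prune
        x, value = result
        if value >= incumbent:
            continue                      # bound: cannot beat incumbent
        frac = next((i for i in integer_vars
                     if abs(x[i] - round(x[i])) > 1e-6), None)
        if frac is None:
            incumbent, best_x = value, x  # integer feasible: update
            continue
        lo, hi = node[frac]
        # Branch on the fractional variable: x[frac] <= floor, >= ceil.
        left, right = dict(node), dict(node)
        left[frac] = (lo, math.floor(x[frac]))
        right[frac] = (math.ceil(x[frac]), hi)
        stack.extend([left, right])
    return best_x, incumbent
```

Because the relaxation of a convex MINLP is a convex NLP, each node's relaxation value is a valid lower bound, which is what justifies the pruning step above.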

611 citations