Author

Claude-Guy Quimper

Bio: Claude-Guy Quimper is an academic researcher at Laval University. His work focuses on constraints and constraint programming. He has an h-index of 21 and has co-authored 92 publications receiving 1,460 citations. Previous affiliations include École Polytechnique de Montréal and the University of Waterloo.


Papers
Book Chapter
25 Sep 2006
TL;DR: This paper considers how to propagate global constraints specified via grammars or automata, together with a number of extensions, so that such constraint specifications can be propagated efficiently and effectively.
Abstract: Global constraints are an important tool in the constraint toolkit. Unfortunately, whilst it is usually easy to specify when a global constraint holds, it is often difficult to build a good propagator. One promising direction is to specify global constraints via grammars or automata. For example, the Regular constraint [1] permits us to specify a wide range of global constraints by means of a DFA accepting a regular language, and to propagate this constraint specification efficiently and effectively. More precisely, the Regular constraint ensures that the values taken by a sequence of variables form a string accepted by the DFA. In this paper, we consider how to propagate such grammar constraints and a number of extensions.
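The filtering behind a Regular-style constraint can be illustrated with a small layered-reachability sketch: compute the DFA states reachable forward from the start and backward from the accepting states, then keep a value only if it labels a transition on some accepting path. This is a simplified, non-incremental illustration of the idea (domains as Python sets, the transition function as a dict), not the implementation from [1].

```python
def propagate_regular(domains, delta, start, accepting):
    """Filter domains for a Regular-style constraint.
    domains: list of sets of values; delta: dict (state, value) -> state."""
    n = len(domains)
    # forward[i]: states reachable after consuming some choice of
    # values for the first i variables
    forward = [set() for _ in range(n + 1)]
    forward[0] = {start}
    for i in range(n):
        for (q, v), q2 in delta.items():
            if q in forward[i] and v in domains[i]:
                forward[i + 1].add(q2)
    # backward[i]: states from which an accepting state is reachable
    # using some choice of values for variables i..n-1
    backward = [set() for _ in range(n + 1)]
    backward[n] = set(accepting)
    for i in range(n - 1, -1, -1):
        for (q, v), q2 in delta.items():
            if v in domains[i] and q2 in backward[i + 1]:
                backward[i].add(q)
    # keep a value iff some transition labelled with it lies on an
    # accepting path through position i
    return [{v for v in domains[i] for q in forward[i]
             if delta.get((q, v)) in backward[i + 1]}
            for i in range(n)]
```

For example, with a two-state DFA accepting strings over {0, 1} containing an even number of 1s, and domains {0, 1} and {1} for a sequence of two variables, the first domain is pruned to {1}: only "11" is accepted.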

82 citations

Proceedings Article
09 Aug 2003
TL;DR: This paper presents a fast, simple algorithm for bounds consistency propagation of the alldifferent constraint and shows that this algorithm outperforms existing bounds consistency algorithms and also outperforms, on problems with an easily identifiable property, state-of-the-art commercial implementations of propagators for stronger forms of local consistency.
Abstract: In constraint programming one models a problem by stating constraints on acceptable solutions. The constraint model is then usually solved by interleaving backtracking search and constraint propagation. Previous studies have demonstrated that designing special purpose constraint propagators for commonly occurring constraints can significantly improve the efficiency of a constraint programming approach. In this paper we present a fast, simple algorithm for bounds consistency propagation of the alldifferent constraint. The algorithm has the same worst case behavior as the previous best algorithm but is much faster in practice. Using a variety of benchmark and random problems, we show that our algorithm outperforms existing bounds consistency algorithms and also outperforms, on problems with an easily identifiable property, state-of-the-art commercial implementations of propagators for stronger forms of local consistency.
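Bounds consistency for alldifferent rests on Hall intervals: an interval of values that must be entirely consumed by the variables whose domains fit inside it, so other variables' bounds can be pushed out of it. The following naive quadratic fixpoint sketch only illustrates that semantics; the paper's algorithm achieves far better complexity.

```python
def bc_alldifferent(bounds):
    """Prune bounds (list of [lb, ub]) toward bounds consistency for
    alldifferent via Hall intervals. Returns False if infeasible."""
    changed = True
    while changed:
        changed = False
        # Hall interval endpoints always coincide with variable bounds
        endpoints = sorted({e for lb, ub in bounds for e in (lb, ub)})
        for a in endpoints:
            for b in endpoints:
                if b < a:
                    continue
                inside = [i for i, (lb, ub) in enumerate(bounds)
                          if a <= lb and ub <= b]
                if len(inside) > b - a + 1:
                    return False  # more variables than values: infeasible
                if len(inside) == b - a + 1:  # [a, b] is a Hall interval
                    for i in range(len(bounds)):
                        if i in inside:
                            continue
                        if a <= bounds[i][0] <= b:  # push lower bound out
                            bounds[i][0] = b + 1
                            changed = True
                        if a <= bounds[i][1] <= b:  # push upper bound out
                            bounds[i][1] = a - 1
                            changed = True
                        if bounds[i][0] > bounds[i][1]:
                            return False  # domain wiped out
    return True
```

For instance, with domains [1,2], [1,2], [1,3], the interval [1,2] is a Hall interval consumed by the first two variables, so the third variable's lower bound is raised to 3.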

79 citations

Book Chapter
Claude-Guy Quimper, Toby Walsh
23 Sep 2007
TL;DR: Based on an AND/OR decomposition, it is shown that the GRAMMAR constraint can be converted into clauses in conjunctive normal form without hindering propagation, and that the decomposition yields an efficient incremental propagator.
Abstract: A wide range of constraints can be specified using automata or formal languages. The GRAMMAR constraint restricts the values taken by a sequence of variables to be a string from a given context-free language. Based on an AND/OR decomposition, we show that this constraint can be converted into clauses in conjunctive normal form without hindering propagation. Using this decomposition, we can propagate the GRAMMAR constraint in O(n^3) time. The decomposition also provides an efficient incremental propagator. Down a branch of the search tree of length k, we can enforce GAC k times in the same O(n^3) time. On specialized languages, running time can be even better. For example, propagation of the decomposition requires just O(n|δ|) time for regular languages, where |δ| is the size of the transition table of the automaton recognizing the regular language. Experiments on a shift scheduling problem with a constraint solver and a state-of-the-art SAT solver show that we can solve problems using this decomposition that defeat existing constraint solvers.
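The O(n^3) bound comes from a CYK-style dynamic program over the variables' domains. A minimal feasibility check for a grammar in Chomsky normal form can be sketched as follows (this only tests whether some assignment is accepted; it is not the full AND/OR clause decomposition, and the rule encodings are illustrative):

```python
def grammar_feasible(domains, unit_rules, binary_rules, start='S'):
    """CYK over domains: can SOME choice of values form a string of the
    CNF grammar? unit_rules: set of (A, terminal) meaning A -> terminal;
    binary_rules: set of (A, B, C) meaning A -> B C."""
    n = len(domains)
    # table[i][l]: nonterminals deriving some length-l string of
    # domain values starting at position i
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i in range(n):
        for (A, t) in unit_rules:
            if t in domains[i]:
                table[i][1].add(A)
    for l in range(2, n + 1):          # substring length
        for i in range(n - l + 1):     # start position
            for k in range(1, l):      # split point
                for (A, B, C) in binary_rules:
                    if B in table[i][k] and C in table[i + k][l - k]:
                        table[i][l].add(A)
    return start in table[0][n]
```

A propagator built on this table additionally marks which table entries lie on a derivation of the start symbol and prunes the values that support no such entry.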

73 citations

Proceedings Article
03 Aug 2013
TL;DR: This work provides an algorithm that, given a negative example, focuses on a constraint of the target network in a number of queries logarithmic in the size of the example.
Abstract: We learn constraint networks by asking the user partial queries. That is, we ask the user to classify assignments to subsets of the variables as positive or negative. We provide an algorithm that, given a negative example, focuses onto a constraint of the target network in a number of queries logarithmic in the size of the example. We give information theoretic lower bounds for learning some simple classes of constraint networks and show that our generic algorithm is optimal in some cases. Finally we evaluate our algorithm on some benchmarks.
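The logarithmic focusing step can be illustrated with a monotone binary search over prefixes of the negative example: once a prefix contains the full scope of a violated constraint, every longer prefix is also rejected, so the smallest rejected prefix pinpoints the last variable of some violated scope. This is a simplification of the paper's query-directed procedure; `last_scope_var` and `ask` are hypothetical names, and `ask` stands in for the user answering a partial query.

```python
def last_scope_var(e, ask):
    """e: a negative full assignment (list of values, indexed by variable).
    ask(partial) -> True iff the partial assignment (dict var -> value)
    violates no target constraint. Returns the largest-indexed variable of
    some violated constraint using O(log n) queries."""
    lo, hi = 1, len(e)  # search smallest prefix length that is rejected
    while lo < hi:
        mid = (lo + hi) // 2
        if ask({i: e[i] for i in range(mid)}):
            lo = mid + 1   # prefix still accepted: culprit ends later
        else:
            hi = mid       # prefix rejected: culprit ends at or before mid
    return lo - 1          # last variable in the violated scope
```

For example, if the target network contains x1 != x3 and the negative example assigns equal values to variables 1 and 3, the search converges on variable 3 after about log n queries.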

69 citations

Journal Article
TL;DR: In this article, the authors study the global cardinality constraint (gcc) and propose an O(n^1.5 d) algorithm for domain consistency and an O(cn + dn) algorithm for range consistency.
Abstract: We study the global cardinality constraint (gcc) and propose an O(n^1.5 d) algorithm for domain consistency and an O(cn + dn) algorithm for range consistency, where n is the number of variables, d the number of values in the domain, and c an output-dependent variable smaller than or equal to n. We show how to prune the cardinality variables in O(n^2 d + n^2.66) steps, detect whether gcc is universal in constant time, and prove that it is NP-hard to maintain domain consistency on extended-GCC.

65 citations


Cited by
Journal Article
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Book
01 Jan 2006
TL;DR: Researchers from other fields should find in this handbook an effective way to learn about constraint programming and to possibly use some of the constraint programming concepts and techniques in their work, thus providing a means for a fruitful cross-fertilization among different research areas.
Abstract: Constraint programming is a powerful paradigm for solving combinatorial search problems that draws on a wide range of techniques from artificial intelligence, computer science, databases, programming languages, and operations research. Constraint programming is currently applied with success to many domains, such as scheduling, planning, vehicle routing, configuration, networks, and bioinformatics. The aim of this handbook is to capture the full breadth and depth of the constraint programming field and to be encyclopedic in its scope and coverage. While there are several excellent books on constraint programming, such books necessarily focus on the main notions and techniques and cannot cover also extensions, applications, and languages. The handbook gives a reasonably complete coverage of all these lines of work, based on constraint programming, so that a reader can have a rather precise idea of the whole field and its potential. Of course each line of work is dealt with in a survey-like style, where some details may be neglected in favor of coverage. However, the extensive bibliography of each chapter will help the interested readers to find suitable sources for the missing details. Each chapter of the handbook is intended to be a self-contained survey of a topic, and is written by one or more authors who are leading researchers in the area. The intended audience of the handbook is researchers, graduate students, higher-year undergraduates and practitioners who wish to learn about the state-of-the-art in constraint programming. No prior knowledge about the field is necessary to be able to read the chapters and gather useful knowledge. Researchers from other fields should find in this handbook an effective way to learn about constraint programming and to possibly use some of the constraint programming concepts and techniques in their work, thus providing a means for a fruitful cross-fertilization among different research areas. 
The handbook is organized in two parts. The first part covers the basic foundations of constraint programming, including the history, the notion of constraint propagation, basic search methods, global constraints, tractability and computational complexity, and important issues in modeling a problem as a constraint problem. The second part covers constraint languages and solvers, several useful extensions to the basic framework (such as interval constraints, structured domains, and distributed CSPs), and successful application areas for constraint programming.
- Covers the whole field of constraint programming
- Survey-style chapters
- Five chapters on applications
Table of Contents
Foreword (Ugo Montanari)
Part I: Foundations
Chapter 1. Introduction (Francesca Rossi, Peter van Beek, Toby Walsh)
Chapter 2. Constraint Satisfaction: An Emerging Paradigm (Eugene C. Freuder, Alan K. Mackworth)
Chapter 3. Constraint Propagation (Christian Bessiere)
Chapter 4. Backtracking Search Algorithms (Peter van Beek)
Chapter 5. Local Search Methods (Holger H. Hoos, Edward Tsang)
Chapter 6. Global Constraints (Willem-Jan van Hoeve, Irit Katriel)
Chapter 7. Tractable Structures for CSPs (Rina Dechter)
Chapter 8. The Complexity of Constraint Languages (David Cohen, Peter Jeavons)
Chapter 9. Soft Constraints (Pedro Meseguer, Francesca Rossi, Thomas Schiex)
Chapter 10. Symmetry in Constraint Programming (Ian P. Gent, Karen E. Petrie, Jean-Francois Puget)
Chapter 11. Modelling (Barbara M. Smith)
Part II: Extensions, Languages, and Applications
Chapter 12. Constraint Logic Programming (Kim Marriott, Peter J. Stuckey, Mark Wallace)
Chapter 13. Constraints in Procedural and Concurrent Languages (Thom Fruehwirth, Laurent Michel, Christian Schulte)
Chapter 14. Finite Domain Constraint Programming Systems (Christian Schulte, Mats Carlsson)
Chapter 15. Operations Research Methods in Constraint Programming (John Hooker)
Chapter 16. Continuous and Interval Constraints (Frederic Benhamou, Laurent Granvilliers)
Chapter 17. Constraints over Structured Domains (Carmen Gervet)
Chapter 18. Randomness and Structure (Carla Gomes, Toby Walsh)
Chapter 19. Temporal CSPs (Manolis Koubarakis)
Chapter 20. Distributed Constraint Programming (Boi Faltings)
Chapter 21. Uncertainty and Change (Kenneth N. Brown, Ian Miguel)
Chapter 22. Constraint-Based Scheduling and Planning (Philippe Baptiste, Philippe Laborie, Claude Le Pape, Wim Nuijten)
Chapter 23. Vehicle Routing (Philip Kilby, Paul Shaw)
Chapter 24. Configuration (Ulrich Junker)
Chapter 25. Constraint Applications in Networks (Helmut Simonis)
Chapter 26. Bioinformatics and Constraints (Rolf Backofen, David Gilbert)

1,527 citations

Journal Article
TL;DR: This paper presents a review of the literature on personnel scheduling problems, discusses the classification methods used in earlier review papers, and evaluates the literature across the many fields related to either the problem setting or the technical features.

706 citations

01 Jan 2007
TL;DR: A collection of papers, including "Minimum Cardinality Matrix Decomposition into Consecutive-Ones Matrices: CP and IP Approaches" and "Connections in Networks: Hardness of Feasibility Versus Optimality".
Abstract: Contents:
- Minimum Cardinality Matrix Decomposition into Consecutive-Ones Matrices: CP and IP Approaches
- Connections in Networks: Hardness of Feasibility Versus Optimality
- Modeling the Regular Constraint with Integer Programming
- Hybrid Local Search for Constrained Financial Portfolio Selection Problems
- The "Not-Too-Heavy Spanning Tree" Constraint
- Eliminating Redundant Clauses in SAT Instances
- Cost-Bounded Binary Decision Diagrams for 0-1 Programming
- YIELDS: A Yet Improved Limited Discrepancy Search for CSPs
- A Global Constraint for Total Weighted Completion Time
- Computing Tight Time Windows for RCPSPWET with the Primal-Dual Method
- Necessary Condition for Path Partitioning Constraints
- A Constraint Programming Approach to the Hospitals / Residents Problem
- Best-First AND/OR Search for 0/1 Integer Programming
- A Position-Based Propagator for the Open-Shop Problem
- Directional Interchangeability for Enhancing CSP Solving
- A Continuous Multi-resources cumulative Constraint with Positive-Negative Resource Consumption-Production
- Replenishment Planning for Stochastic Inventory Systems with Shortage Cost
- Preprocessing Expression-Based Constraint Satisfaction Problems for Stochastic Local Search
- The Deviation Constraint
- The Linear Programming Polytope of Binary Constraint Problems with Bounded Tree-Width
- On Boolean Functions Encodable as a Single Linear Pseudo-Boolean Constraint
- Solving a Stochastic Queueing Control Problem with Constraint Programming
- Constrained Clustering Via Concavity Cuts
- Bender's Cuts Guided Large Neighborhood Search for the Traveling Umpire Problem
- A Large Neighborhood Search Heuristic for Graph Coloring
- Generalizations of the Global Cardinality Constraint for Hierarchical Resources
- A Column Generation Based Destructive Lower Bound for Resource Constrained Project Scheduling Problems

497 citations