Author

Xavier Lorca

Bio: Xavier Lorca is an academic researcher from École des mines de Nantes. The author has contributed to research in topics: Constraint programming & Constraint (information theory). The author has an h-index of 11 and has co-authored 42 publications receiving 1,088 citations. Previous affiliations of Xavier Lorca include French Institute for Research in Computer Science and Automation & Centre national de la recherche scientifique.

Papers
Proceedings ArticleDOI
11 Mar 2009
TL;DR: The Entropy resource manager for homogeneous clusters is proposed; it performs dynamic consolidation based on constraint programming and takes migration overhead into account. The use of constraint programming allows Entropy to find mappings of tasks to nodes that are better than those found by heuristics based on local optimization.
Abstract: Clusters provide powerful computing environments, but in practice much of this power goes to waste, due to the static allocation of tasks to nodes, regardless of their changing computational requirements. Dynamic consolidation is an approach that migrates tasks within a cluster as their computational requirements change, both to reduce the number of nodes that need to be active and to eliminate temporary overload situations. Previous dynamic consolidation strategies have relied on task placement heuristics that use only local optimization and typically do not take migration overhead into account. However, heuristics based on only local optimization may miss the globally optimal solution, resulting in unnecessary resource usage, and the overhead for migration may nullify the benefits of consolidation. In this paper, we propose the Entropy resource manager for homogeneous clusters, which performs dynamic consolidation based on constraint programming and takes migration overhead into account. The use of constraint programming allows Entropy to find mappings of tasks to nodes that are better than those found by heuristics based on local optimizations, and that are frequently globally optimal in the number of nodes. Because migration overhead is taken into account, Entropy chooses migrations that can be implemented efficiently, incurring a low performance overhead.
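For illustration, here is a minimal sketch of the kind of consolidation model the paper describes, written against the current Choco 4 API rather than the Choco version Entropy was originally built on. The instance data, variable names, and the simple migration count are illustrative assumptions, not the paper's actual model.

import org.chocosolver.solver.Model;
import org.chocosolver.solver.Solver;
import org.chocosolver.solver.variables.BoolVar;
import org.chocosolver.solver.variables.IntVar;

public class ConsolidationSketch {
    public static void main(String[] args) {
        int[] demand = {4, 3, 3, 2, 2};          // CPU demand of each task (illustrative)
        int[] current = {0, 1, 2, 3, 3};         // node currently hosting each task
        int nNodes = 4, capacity = 8;

        Model model = new Model("consolidation");
        // host[i] = node that will run task i
        IntVar[] host = model.intVarArray("host", demand.length, 0, nNodes - 1);
        // load[j] = total demand packed on node j, bounded by its capacity
        IntVar[] load = model.intVarArray("load", nNodes, 0, capacity);
        model.binPacking(host, demand, load, 0).post();

        // used[j] <=> node j hosts at least one task
        BoolVar[] used = new BoolVar[nNodes];
        for (int j = 0; j < nNodes; j++) {
            used[j] = model.arithm(load[j], ">", 0).reify();
        }
        IntVar nbUsed = model.intVar("nbUsed", 0, nNodes);
        model.sum(used, "=", nbUsed).post();

        // moved[i] <=> task i leaves its current node (a crude proxy for migration cost)
        BoolVar[] moved = new BoolVar[demand.length];
        for (int i = 0; i < demand.length; i++) {
            moved[i] = model.arithm(host[i], "!=", current[i]).reify();
        }
        IntVar nbMoves = model.intVar("nbMoves", 0, demand.length);
        model.sum(moved, "=", nbMoves).post();

        // Minimise the number of active nodes; each solution found improves on the last.
        model.setObjective(Model.MINIMIZE, nbUsed);
        Solver solver = model.getSolver();
        while (solver.solve()) {
            System.out.println("nodes used = " + nbUsed.getValue()
                    + ", migrations = " + nbMoves.getValue());
        }
    }
}

Each improving solution printed by the loop uses fewer nodes than the previous one, mirroring the global optimisation that purely local heuristics may miss.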

546 citations

01 Jan 2008
TL;DR: Choco is a Java library for constraint satisfaction problems (CSP), constraint programming (CP) and explanation-based constraint solving (e-CP), built on an event-based propagation mechanism with backtrackable structures.
Abstract: Choco is a Java library for constraint satisfaction problems (CSP), constraint programming (CP) and explanation-based constraint solving (e-CP). It is built on an event-based propagation mechanism with backtrackable structures.
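As a point of reference, here is a minimal, self-contained example of stating and solving a small CSP. It uses the current Choco 4 API, whereas the 2008 description above refers to an earlier version of the library, so class and method names differ from the code the paper documents.

import org.chocosolver.solver.Model;
import org.chocosolver.solver.variables.IntVar;

public class ChocoDemo {
    public static void main(String[] args) {
        // A toy CSP: three variables in 1..3, pairwise different, with x < y.
        Model model = new Model("toy CSP");
        IntVar x = model.intVar("x", 1, 3);
        IntVar y = model.intVar("y", 1, 3);
        IntVar z = model.intVar("z", 1, 3);
        model.allDifferent(x, y, z).post();
        model.arithm(x, "<", y).post();
        // The solver interleaves propagation with backtracking search.
        while (model.getSolver().solve()) {
            System.out.printf("x=%d y=%d z=%d%n", x.getValue(), y.getValue(), z.getValue());
        }
    }
}

Posting a constraint registers its propagator on the variables' domain events; propagation plus backtrackable state is the mechanism the abstract refers to.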

231 citations

Book ChapterDOI
12 Sep 2011
TL;DR: This work introduces the Bin Repacking Scheduling Problem: finding a final packing and scheduling the transitions from a given initial packing, according to new resource and placement requirements, while minimizing the average transition completion time.
Abstract: A datacenter can be viewed as a dynamic bin packing system where servers host applications with varying resource requirements and varying relative placement constraints. When those needs are no longer satisfied, the system has to be reconfigured. Virtualization allows applications to be distributed into Virtual Machines (VMs) to ease their manipulation. In particular, a VM can be freely migrated without disrupting its service, temporarily consuming resources both on its origin and its destination. We introduce the Bin Repacking Scheduling Problem in this context. This problem is to find a final packing and to schedule the transitions from a given initial packing, according to new resource and placement requirements, while minimizing the average transition completion time. Our CP-based approach is implemented into Entropy, an autonomous VM manager which detects reconfiguration needs, generates and solves the CP model, then applies the computed decision. CP provides the flexibility needed to handle heterogeneous placement constraints and the ability to manage large datacenters with up to 2,000 servers and 10,000 VMs.
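A small plain-Java helper makes the scheduling difficulty concrete: during a live migration the VM is charged on both its origin and its destination, so running all migrations at once can overload destinations even when the final packing is feasible. The code below is only an illustrative aid, not part of the paper's CP model; all names and numbers are assumptions.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RepackingSketch {
    /** Given an initial and a target placement (VM index -> server index) and per-VM
     *  demands, list the VMs that must migrate and return, per server, the extra
     *  transient load it would carry if every migration ran at the same time. */
    static int[] transientExtraLoad(int[] initial, int[] target, int[] demand, int nServers) {
        int[] extra = new int[nServers];
        List<Integer> migrating = new ArrayList<>();
        for (int vm = 0; vm < initial.length; vm++) {
            if (initial[vm] != target[vm]) {
                migrating.add(vm);
                extra[target[vm]] += demand[vm];   // destination is charged while the copy runs
            }
        }
        System.out.println("VMs to migrate: " + migrating);
        return extra;
    }

    public static void main(String[] args) {
        int[] initial = {0, 0, 1, 2};
        int[] target  = {0, 1, 1, 1};
        int[] demand  = {2, 3, 4, 1};
        int[] extra = transientExtraLoad(initial, target, demand, 3);
        System.out.println("extra transient load per server: " + Arrays.toString(extra));
    }
}

Whenever the transient extra load plus a destination's retained load exceeds its capacity, the transitions must be ordered, which is the kind of scheduling decision the Bin Repacking Scheduling Problem captures.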

61 citations

Book ChapterDOI
30 May 2005
TL;DR: An arc-consistency algorithm for the tree constraint, which enforces the partitioning of a digraph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ into a set of vertex-disjoint anti-arborescences, with feasibility checking in $\mathcal{O}(|\mathcal{V}| + |\mathcal{E}|)$ time.
Abstract: This article presents an arc-consistency algorithm for the tree constraint, which enforces the partitioning of a digraph $\mathcal{G}$ = ($\mathcal{V},\mathcal{E}$) into a set of vertex-disjoint anti-arborescences. It provides a necessary and sufficient condition for checking the tree constraint in $\mathcal{O}(|\mathcal{V}| + |\mathcal{E}|)$ time, as well as a complete filtering algorithm taking $\mathcal{O}(|\mathcal{V}| \cdot |\mathcal{E}|)$ time.
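To make the constraint's semantics concrete, the checker below verifies that a full assignment of successor variables (the usual encoding, in which a root is a node whose successor is itself) forms vertex-disjoint anti-arborescences. It is only a feasibility check on a complete assignment, not the paper's filtering algorithm, and the encoding details are assumptions.

public class TreeConstraintCheck {
    /** succ[i] is the chosen successor of node i; a root satisfies succ[r] == r.
     *  Returns true iff the assignment forms vertex-disjoint anti-arborescences,
     *  i.e. following successors from any node reaches a root without closing a
     *  cycle longer than a self-loop. Runs in O(|V|) for a fixed assignment. */
    static boolean isAntiArborescenceForest(int[] succ) {
        int n = succ.length;
        int[] state = new int[n];   // 0 = unvisited, 1 = on the current walk, 2 = settled
        for (int s = 0; s < n; s++) {
            int v = s;
            while (state[v] == 0 && succ[v] != v) {    // walk towards a root or a known node
                state[v] = 1;
                v = succ[v];
            }
            if (state[v] == 1) {
                return false;                          // closed a non-trivial cycle
            }
            for (int u = s; state[u] == 1; u = succ[u]) {
                state[u] = 2;                          // settle the walked prefix
            }
            state[v] = 2;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isAntiArborescenceForest(new int[]{1, 2, 2, 2}));  // true: one tree rooted at 2
        System.out.println(isAntiArborescenceForest(new int[]{1, 0, 3, 3}));  // false: nodes 0 and 1 form a cycle
    }
}

The check is linear once the successor values are fixed; the contribution of the paper is to prune inconsistent successor values before the assignment is complete.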

58 citations

Journal ArticleDOI
TL;DR: This work describes filtering rules for this extended tree constraint and evaluates its effectiveness on three applications: the Hamiltonian path problem, the ordered disjoint paths problem, and the phylogenetic supertree problem.
Abstract: The tree constraint partitions a directed graph into node-disjoint trees. In many practical applications that involve such a partition, there exist side constraints specifying requirements on tree count, node degrees, or precedences and incomparabilities within node subsets. We present a generalisation of the tree constraint that incorporates such side constraints. The key point of our approach is to partially take into account the strong interactions between the tree partitioning problem and all the side constraints, in order to avoid thrashing during search. We describe filtering rules for this extended tree constraint and evaluate its effectiveness on three applications: the Hamiltonian path problem, the ordered disjoint paths problem, and the phylogenetic supertree problem.
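As a sketch of the first application, a Hamiltonian path can be stated with successor variables, a tree count fixed to one, and in-degree at most one for every node. The model below assumes a Choco-style tree(succ, nbTrees, offset) global constraint is available and composes it with explicit side constraints, whereas the paper's contribution is to filter these interactions jointly inside one extended constraint. The instance data is illustrative.

import org.chocosolver.solver.Model;
import org.chocosolver.solver.variables.IntVar;

public class HamiltonianPathSketch {
    public static void main(String[] args) {
        // adjacency[i] = possible successors of node i; a node may also point to
        // itself, which marks it as the root (the end of the path).
        int[][] adjacency = {{1, 2, 0}, {2, 3, 1}, {0, 3, 2}, {1, 3}};
        int n = adjacency.length;

        Model model = new Model("hamiltonian path as a tree");
        IntVar[] succ = new IntVar[n];
        for (int i = 0; i < n; i++) {
            succ[i] = model.intVar("succ_" + i, adjacency[i]);
        }
        // Exactly one anti-arborescence (assumed tree global constraint).
        model.tree(succ, model.intVar(1), 0).post();
        // Side constraint: in-degree at most 1, ignoring the root's self-loop.
        for (int j = 0; j < n; j++) {
            IntVar[] others = new IntVar[n - 1];
            for (int i = 0, k = 0; i < n; i++) {
                if (i != j) others[k++] = succ[i];
            }
            model.count(j, others, model.intVar("indeg_" + j, 0, 1)).post();
        }
        if (model.getSolver().solve()) {
            for (int i = 0; i < n; i++) {
                System.out.println(i + " -> " + succ[i].getValue());
            }
        }
    }
}

In this plain decomposition the degree and tree constraints only communicate through variable domains; the extended constraint described above propagates their interaction directly, which is what avoids thrashing.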

22 citations


Cited by
Book
01 Jan 2006
TL;DR: Researchers from other fields should find in this handbook an effective way to learn about constraint programming and to possibly use some of the constraint programming concepts and techniques in their work, thus providing a means for a fruitful cross-fertilization among different research areas.
Abstract: Constraint programming is a powerful paradigm for solving combinatorial search problems that draws on a wide range of techniques from artificial intelligence, computer science, databases, programming languages, and operations research. Constraint programming is currently applied with success to many domains, such as scheduling, planning, vehicle routing, configuration, networks, and bioinformatics. The aim of this handbook is to capture the full breadth and depth of the constraint programming field and to be encyclopedic in its scope and coverage. While there are several excellent books on constraint programming, such books necessarily focus on the main notions and techniques and cannot also cover extensions, applications, and languages. The handbook gives a reasonably complete coverage of all these lines of work, based on constraint programming, so that a reader can have a rather precise idea of the whole field and its potential. Of course each line of work is dealt with in a survey-like style, where some details may be neglected in favor of coverage. However, the extensive bibliography of each chapter will help interested readers to find suitable sources for the missing details. Each chapter of the handbook is intended to be a self-contained survey of a topic, and is written by one or more authors who are leading researchers in the area. The intended audience of the handbook is researchers, graduate students, higher-year undergraduates and practitioners who wish to learn about the state-of-the-art in constraint programming. No prior knowledge about the field is necessary to be able to read the chapters and gather useful knowledge. Researchers from other fields should find in this handbook an effective way to learn about constraint programming and to possibly use some of the constraint programming concepts and techniques in their work, thus providing a means for a fruitful cross-fertilization among different research areas. The handbook is organized in two parts. The first part covers the basic foundations of constraint programming, including the history, the notion of constraint propagation, basic search methods, global constraints, tractability and computational complexity, and important issues in modeling a problem as a constraint problem. The second part covers constraint languages and solvers, several useful extensions to the basic framework (such as interval constraints, structured domains, and distributed CSPs), and successful application areas for constraint programming.
- Covers the whole field of constraint programming
- Survey-style chapters
- Five chapters on applications
Table of Contents
Foreword (Ugo Montanari)
Part I: Foundations
Chapter 1. Introduction (Francesca Rossi, Peter van Beek, Toby Walsh)
Chapter 2. Constraint Satisfaction: An Emerging Paradigm (Eugene C. Freuder, Alan K. Mackworth)
Chapter 3. Constraint Propagation (Christian Bessiere)
Chapter 4. Backtracking Search Algorithms (Peter van Beek)
Chapter 5. Local Search Methods (Holger H. Hoos, Edward Tsang)
Chapter 6. Global Constraints (Willem-Jan van Hoeve, Irit Katriel)
Chapter 7. Tractable Structures for CSPs (Rina Dechter)
Chapter 8. The Complexity of Constraint Languages (David Cohen, Peter Jeavons)
Chapter 9. Soft Constraints (Pedro Meseguer, Francesca Rossi, Thomas Schiex)
Chapter 10. Symmetry in Constraint Programming (Ian P. Gent, Karen E. Petrie, Jean-Francois Puget)
Chapter 11. Modelling (Barbara M. Smith)
Part II: Extensions, Languages, and Applications
Chapter 12. Constraint Logic Programming (Kim Marriott, Peter J. Stuckey, Mark Wallace)
Chapter 13. Constraints in Procedural and Concurrent Languages (Thom Fruehwirth, Laurent Michel, Christian Schulte)
Chapter 14. Finite Domain Constraint Programming Systems (Christian Schulte, Mats Carlsson)
Chapter 15. Operations Research Methods in Constraint Programming (John Hooker)
Chapter 16. Continuous and Interval Constraints (Frederic Benhamou, Laurent Granvilliers)
Chapter 17. Constraints over Structured Domains (Carmen Gervet)
Chapter 18. Randomness and Structure (Carla Gomes, Toby Walsh)
Chapter 19. Temporal CSPs (Manolis Koubarakis)
Chapter 20. Distributed Constraint Programming (Boi Faltings)
Chapter 21. Uncertainty and Change (Kenneth N. Brown, Ian Miguel)
Chapter 22. Constraint-Based Scheduling and Planning (Philippe Baptiste, Philippe Laborie, Claude Le Pape, Wim Nuijten)
Chapter 23. Vehicle Routing (Philip Kilby, Paul Shaw)
Chapter 24. Configuration (Ulrich Junker)
Chapter 25. Constraint Applications in Networks (Helmut Simonis)
Chapter 26. Bioinformatics and Constraints (Rolf Backofen, David Gilbert)

1,527 citations

Proceedings ArticleDOI
26 Oct 2011
TL;DR: CloudScale is a system that automates fine-grained elastic resource scaling for multi-tenant cloud computing infrastructures that can achieve significantly higher SLO conformance than other alternatives with low resource and energy cost.
Abstract: Elastic resource scaling lets cloud systems meet application service level objectives (SLOs) with minimum resource provisioning costs. In this paper, we present CloudScale, a system that automates fine-grained elastic resource scaling for multi-tenant cloud computing infrastructures. CloudScale employs online resource demand prediction and prediction error handling to achieve adaptive resource allocation without assuming any prior knowledge about the applications running inside the cloud. CloudScale can resolve scaling conflicts between applications using migration, and integrates dynamic CPU voltage/frequency scaling to achieve energy savings with minimal effect on application SLOs. We have implemented CloudScale on top of Xen and conducted extensive experiments using a set of CPU and memory intensive applications (RUBiS, Hadoop, IBM System S). The results show that CloudScale can achieve significantly higher SLO conformance than other alternatives with low resource and energy cost. CloudScale is non-intrusive and light-weight, and imposes negligible overhead.
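The prediction-plus-error-handling idea can be illustrated with a deliberately simplified predictor: an exponentially weighted moving average forecast padded by the largest recent under-prediction. This is an illustrative stand-in only, not CloudScale's actual prediction scheme; all names and parameters below are made up.

import java.util.ArrayDeque;
import java.util.Deque;

final class PaddedPredictor {
    private final double alpha;                      // EWMA smoothing factor in (0, 1]
    private final int window;                        // how many recent errors to remember
    private final Deque<Double> underErrors = new ArrayDeque<>();
    private double ewma;
    private boolean seeded;

    PaddedPredictor(double alpha, int window) {
        this.alpha = alpha;
        this.window = window;
    }

    /** Feed the demand observed in the last interval; return the allocation for the next one. */
    double next(double observedDemand) {
        if (seeded) {
            // Record how far the previous forecast fell short of what actually happened.
            underErrors.addLast(Math.max(0.0, observedDemand - ewma));
            if (underErrors.size() > window) {
                underErrors.removeFirst();
            }
            ewma = alpha * observedDemand + (1 - alpha) * ewma;
        } else {
            ewma = observedDemand;
            seeded = true;
        }
        double padding = underErrors.stream().mapToDouble(Double::doubleValue).max().orElse(0.0);
        return ewma + padding;                       // forecast plus headroom for prediction error
    }

    public static void main(String[] args) {
        PaddedPredictor p = new PaddedPredictor(0.5, 4);
        for (double demand : new double[]{10, 12, 30, 14, 16}) {
            System.out.printf("observed %.0f -> allocate %.1f%n", demand, p.next(demand));
        }
    }
}

Padding by recent under-prediction errors trades a little extra allocation for fewer SLO violations, which is the balance the paper evaluates.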

662 citations

Journal ArticleDOI
TL;DR: Extensive numerical studies clearly show that, with the OCRP algorithm, a cloud consumer can successfully minimize the total cost of resource provisioning in cloud computing environments.
Abstract: In cloud computing, cloud providers can offer cloud consumers two provisioning plans for computing resources, namely reservation and on-demand plans. In general, the cost of computing resources provisioned under the reservation plan is cheaper than under the on-demand plan, since the cloud consumer pays the provider in advance. With the reservation plan, the consumer can reduce the total resource provisioning cost. However, the best advance reservation of resources is difficult to achieve due to the uncertainty of the consumer's future demand and of providers' resource prices. To address this problem, an optimal cloud resource provisioning (OCRP) algorithm is proposed by formulating a stochastic programming model. The OCRP algorithm can provision computing resources for use in multiple provisioning stages as well as a long-term plan, e.g., four stages in a quarterly plan and twelve stages in a yearly plan. Demand and price uncertainty is considered in OCRP. In this paper, different approaches to obtain the solution of the OCRP algorithm are considered, including deterministic equivalent formulation, sample-average approximation, and Benders decomposition. Extensive numerical studies clearly show that, with the OCRP algorithm, the cloud consumer can successfully minimize the total cost of resource provisioning in cloud computing environments.
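A single-stage toy version of the trade-off shows the structure of the optimization: the expected cost of reserving r units is the up-front reservation cost plus the expected on-demand cost of the demand that exceeds r, averaged over scenarios. The prices, demands, and probabilities below are hypothetical, and the exhaustive search over r stands in for the stochastic-programming machinery (deterministic equivalent, sample-average approximation, Benders decomposition) used in the paper.

import java.util.Arrays;

public class ReservationSketch {
    /** Expected cost of reserving r units: pay pR per reserved unit up front,
     *  plus pO per unit of demand exceeding the reservation, averaged over scenarios. */
    static double expectedCost(int r, double pR, double pO, int[] demand, double[] prob) {
        double cost = pR * r;
        for (int s = 0; s < demand.length; s++) {
            cost += prob[s] * pO * Math.max(0, demand[s] - r);
        }
        return cost;
    }

    public static void main(String[] args) {
        // Hypothetical prices: reserved capacity is cheaper per unit than on-demand.
        double pR = 1.0, pO = 2.5;
        int[] demand = {10, 25, 40};             // demand scenarios
        double[] prob = {0.3, 0.5, 0.2};         // their probabilities
        int bestR = 0;
        double best = Double.MAX_VALUE;
        for (int r = 0; r <= Arrays.stream(demand).max().getAsInt(); r++) {
            double c = expectedCost(r, pR, pO, demand, prob);
            if (c < best) { best = c; bestR = r; }
        }
        System.out.printf("reserve %d units, expected cost %.2f%n", bestR, best);
    }
}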

641 citations

Proceedings ArticleDOI
13 Apr 2010
TL;DR: Q-Clouds, a QoS-aware control framework that tunes resource allocations to mitigate performance interference effects, is developed; it uses online feedback to build a multi-input multi-output (MIMO) model that captures performance interference interactions and uses that model to perform closed-loop resource management.
Abstract: Cloud computing offers users the ability to access large pools of computational and storage resources on demand. Multiple commercial clouds already allow businesses to replace, or supplement, privately owned IT assets, alleviating them from the burden of managing and maintaining these facilities. However, there are issues that must be addressed before this vision of utility computing can be fully realized. In existing systems, customers are charged based upon the amount of resources used or reserved, but no guarantees are made regarding the application level performance or quality-of-service (QoS) that the given resources will provide. As cloud providers continue to utilize virtualization technologies in their systems, this can become problematic. In particular, the consolidation of multiple customer applications onto multicore servers introduces performance interference between collocated workloads, significantly impacting application QoS. To address this challenge, we advocate that the cloud should transparently provision additional resources as necessary to achieve the performance that customers would have realized if they were running in isolation. Accordingly, we have developed Q-Clouds, a QoS-aware control framework that tunes resource allocations to mitigate performance interference effects. Q-Clouds uses online feedback to build a multi-input multi-output (MIMO) model that captures performance interference interactions, and uses it to perform closed loop resource management. In addition, we utilize this functionality to allow applications to specify multiple levels of QoS as application Q-states. For such applications, Q-Clouds dynamically provisions underutilized resources to enable elevated QoS levels, thereby improving system efficiency. Experimental evaluations of our solution using benchmark applications illustrate the benefits: performance interference is mitigated completely when feasible, and system utilization is improved by up to 35% using Q-states.
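As a much-reduced illustration of closed-loop resource management, the controller below adjusts a single VM's CPU cap from feedback on a single performance signal. Q-Clouds itself identifies a MIMO interference model across co-located VMs and controls them jointly; the class name, gain, and cap bounds here are illustrative assumptions.

/** Simplified stand-in for closed-loop resource control: a single-VM integral
 *  controller that nudges the CPU cap until measured performance meets its target. */
final class CapController {
    private double cap;                 // current CPU cap, in percent of a core
    private final double gain;          // integral gain: cap change per unit of error
    private final double minCap, maxCap;

    CapController(double initialCap, double gain, double minCap, double maxCap) {
        this.cap = initialCap;
        this.gain = gain;
        this.minCap = minCap;
        this.maxCap = maxCap;
    }

    /** One control step: raise the cap when performance is below target, lower it when above. */
    double step(double measuredPerf, double targetPerf) {
        double error = targetPerf - measuredPerf;       // positive => under-performing
        cap = Math.min(maxCap, Math.max(minCap, cap + gain * error));
        return cap;
    }

    public static void main(String[] args) {
        CapController c = new CapController(50, 0.8, 10, 100);
        for (double perf : new double[]{0.60, 0.75, 0.90, 1.02}) {
            System.out.printf("perf %.2f -> cap %.1f%%%n", perf, c.step(perf, 1.0));
        }
    }
}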

614 citations

Journal ArticleDOI
Yongqiang Gao, Haibing Guan, Zhengwei Qi, Yang Hou, Liang Liu
TL;DR: A multi-objective ant colony system algorithm is proposed to efficiently obtain a set of non-dominated solutions (the Pareto set) that simultaneously minimize total resource wastage and power consumption.
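Since only the summary is shown here, the sketch below just illustrates the underlying notion of a Pareto set for the paper's two objectives: a candidate placement is kept only if no other candidate is at least as good on both total resource wastage and power consumption and strictly better on one. The record fields and the quadratic filter are illustrative, not the paper's algorithm.

import java.util.ArrayList;
import java.util.List;

public class ParetoSketch {
    /** A candidate placement scored on the two objectives, both to be minimised. */
    record Candidate(double wastage, double power) {
        boolean dominates(Candidate o) {
            return wastage <= o.wastage && power <= o.power
                    && (wastage < o.wastage || power < o.power);
        }
    }

    /** Keep only the non-dominated candidates (the Pareto set). */
    static List<Candidate> paretoSet(List<Candidate> all) {
        List<Candidate> front = new ArrayList<>();
        for (Candidate c : all) {
            boolean dominated = false;
            for (Candidate other : all) {
                if (other != c && other.dominates(c)) { dominated = true; break; }
            }
            if (!dominated) front.add(c);
        }
        return front;
    }

    public static void main(String[] args) {
        List<Candidate> all = List.of(
                new Candidate(5.0, 120.0),
                new Candidate(4.0, 150.0),
                new Candidate(6.0, 110.0),
                new Candidate(6.5, 125.0));   // dominated by (5.0, 120.0)
        System.out.println(paretoSet(all));
    }
}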

602 citations