Showing papers by "Barry O'Sullivan" published in 2017


Journal ArticleDOI
TL;DR: The ICON Challenge on Algorithm Selection objectively evaluated many prominent approaches from the literature, making them directly comparable for the first time; the results show that there is still room for improvement, even for the very best approaches.
Abstract: Algorithm selection is of increasing practical relevance in a variety of applications. Many approaches have been proposed in the literature, but their evaluations are often not comparable, making it hard to judge which approaches work best. The ICON Challenge on Algorithm Selection objectively evaluated many prominent approaches from the literature, making them directly comparable for the first time. The results show that there is still room for improvement, even for the very best approaches.

18 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: The notion of robustness in stable matching problems is studied; it is shown that checking whether a given stable matching is a (1, b)-supermatch can be done in polynomial time, and an empirical evaluation on large instances shows that local search outperforms the other approaches.
Abstract: We study the notion of robustness in stable matching problems. We first define robustness by introducing (a, b)-supermatches. An (a, b)-supermatch is a stable matching in which, if a pairs break up, it is possible to find another stable matching by changing the partners of those a pairs and at most b other pairs. In this context, we define the most robust stable matching as a (1, b)-supermatch where b is minimum. We show that checking whether a given stable matching is a (1, b)-supermatch can be done in polynomial time. Next, we use this procedure to design a constraint programming model, a local search approach, and a genetic algorithm to find the most robust stable matching. Our empirical evaluation on large instances shows that local search outperforms the other approaches.
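The (1, b)-supermatch definition lends itself to a direct brute-force check when the full set of stable matchings is available. The sketch below is a made-up illustration of the definition, not the paper's polynomial-time procedure; it computes the smallest b for which a given matching is a (1, b)-supermatch.

```python
# Brute-force sketch of the (1, b)-supermatch definition, assuming the
# full set of stable matchings has already been enumerated (the paper's
# polynomial-time check avoids this). Matchings map each man to a woman.

def repair_cost(M, M_alt):
    """Number of men whose partner differs between two stable matchings."""
    return sum(1 for man in M if M[man] != M_alt[man])

def min_b(M, all_stable):
    """Smallest b such that M is a (1, b)-supermatch, or None if some
    broken pair admits no alternative stable matching at all."""
    worst = 0
    for man in M:
        # Repairs: stable matchings in which this pair is not together.
        costs = [repair_cost(M, M2) - 1        # -1 excludes the broken pair
                 for M2 in all_stable if M2[man] != M[man]]
        if not costs:
            return None
        worst = max(worst, min(costs))
    return worst

# Toy instance: three stable matchings over men 1-3 and women x, y, z.
stable = [{1: 'x', 2: 'y', 3: 'z'},
          {1: 'y', 2: 'x', 3: 'z'},
          {1: 'y', 2: 'z', 3: 'x'}]
print(min_b(stable[0], stable))   # -> 2: repairing pair (3, z) costs most
```

The paper's polynomial-time procedure answers the same question without enumerating the (possibly exponential) set of stable matchings.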

16 citations


Book ChapterDOI
19 Sep 2017
TL;DR: A model of DBMS-oriented normal behavior is described that can be used to detect frequency-based anomalies in database access and can be transformed into the more traditional role-oriented profiles.
Abstract: Timely detection of an insider attack is a prevalent challenge in database security. Research on anomaly-based database intrusion detection systems has received significant attention because of its potential to detect zero-day insider attacks. Such approaches differ mainly in their construction of the normative behavior of an (insider) role/user. In this paper, a different perspective on the construction of normative behavior is presented, whereby normative behavior is captured instead from the perspective of the DBMS itself. Using techniques from Statistical Process Control, a model of DBMS-oriented normal behavior is described that can be used to detect frequency-based anomalies in database access. The approach is evaluated using a synthetic dataset, and we also demonstrate that this DBMS-oriented profile can be transformed into the more traditional role-oriented profiles.
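The paper builds its normative model with techniques from Statistical Process Control; as a rough illustration of the idea, the sketch below applies a Shewhart-style control chart to query counts per time window. The window granularity, baseline data, and 3-sigma limits are illustrative assumptions, not the paper's parameters.

```python
# Minimal control-chart sketch in the spirit of Statistical Process
# Control: learn limits from an attack-free baseline of query counts per
# time window, then flag windows that fall outside them.
import statistics

def control_limits(baseline_counts):
    """Lower and upper 3-sigma limits from an attack-free baseline."""
    mean = statistics.mean(baseline_counts)
    sigma = statistics.stdev(baseline_counts)
    return mean - 3 * sigma, mean + 3 * sigma

def flag_anomalies(counts, lcl, ucl):
    """Indices of time windows whose query count falls outside the limits."""
    return [i for i, c in enumerate(counts) if not (lcl <= c <= ucl)]

baseline = [102, 98, 105, 97, 101, 99, 103, 100]     # queries per window
lcl, ucl = control_limits(baseline)
print(flag_anomalies([101, 99, 180, 97], lcl, ucl))  # -> [2]
```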

12 citations


Book ChapterDOI
22 Mar 2017
TL;DR: This chapter discusses the challenges of coordinated energy management in data centres and presents a novel, scalable, integrated energy management system architecture for data-centre-wide optimisation.
Abstract: Data centres are part of today's critical information and communication infrastructure, and the majority of business transactions as well as much of our digital life now depend on them. At the same time, data centres are large primary energy consumers, with energy consumed by IT and server room air conditioning equipment and also by general building facilities. In many data centres, IT equipment energy and cooling energy requirements are not always coordinated, so energy consumption is not optimised. Most data centres lack an integrated energy management system that jointly optimises and controls all of their energy-consuming equipment in order to reduce energy consumption and increase the usage of local renewable energy sources. In this chapter, the authors discuss the challenges of coordinated energy management in data centres and present a novel, scalable, integrated energy management system architecture for data-centre-wide optimisation. A prototype of the system has been implemented, including joint workload and thermal management algorithms. The control algorithms are evaluated in an accurate simulation-based model of a real data centre. Results show significant energy savings potential, in some cases up to 40%, by integrating workload and thermal management.

11 citations


Journal ArticleDOI
TL;DR: The authors propose a new framework that they call the inductive constraint programming loop, which aims to bridge the gap between the areas of data mining and machine learning on one hand and constraint programming on the other.
Abstract: Constraint programming is used for a variety of real-world optimization problems, such as planning, scheduling, and resource allocation problems, all while we continuously gather vast amounts of data about these problems. Current constraint programming software doesn’t exploit such data to update schedules, resources, and plans. The authors propose a new framework that they call the inductive constraint programming loop. In this approach, data is gathered and analyzed systematically to dynamically revise and adapt constraints and optimization criteria. Inductive constraint programming aims to bridge the gap between the areas of data mining and machine learning on one hand and constraint programming on the other.
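The loop itself can be sketched as a skeleton. The following is a schematic stand-in only, since the paper defines a framework rather than one concrete implementation; every name and function body below is illustrative.

```python
# Schematic sketch of the inductive constraint programming loop: solve,
# observe, and revise. Every function body is an illustrative stand-in.

def solve(model):
    """Stand-in for a CP solver producing a plan under current constraints."""
    return {"planned_capacity": model["capacity"]}

def observe(solution):
    """Stand-in for data gathered while the plan is executed."""
    return {"observed_demand": 120}

def revise(model, data):
    """Mining/learning step: adapt a constraint to the observed data."""
    revised = dict(model)
    revised["capacity"] = max(model["capacity"], data["observed_demand"])
    return revised

model = {"capacity": 100}
for _ in range(3):               # in practice the loop runs continuously
    data = observe(solve(model))
    model = revise(model, data)
print(model)                     # -> {'capacity': 120}
```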

8 citations


Book ChapterDOI
28 Aug 2017
TL;DR: This work introduces new CP models for the many-to-many stable matching problem and uses the notion of rotation to give a novel encoding that is linear in the input size of the problem.
Abstract: We introduce new CP models for the many-to-many stable matching problem. We use the notion of rotation to give a novel encoding that is linear in the input size of the problem. We give extra filtering rules to maintain arc consistency in quadratic time. Our experimental study on hard instances of sex-equal and balanced stable matching shows the efficiency of one of our proposed models compared with the state-of-the-art constraint programming approach.

7 citations


Book ChapterDOI
16 Dec 2017
TL;DR: In this paper, the complexity of finding an (a, b)-supermatch is studied and it is shown that deciding if there exists a (1, 1)-supermatch is NP-complete.
Abstract: Robust Stable Marriage (RSM) is a variant of the classical Stable Marriage problem, where the robustness of a given stable matching is measured by the number of modifications required to repair it in case an unforeseen event occurs. We focus on the complexity of finding an (a, b)-supermatch. An (a, b)-supermatch is defined as a stable matching in which, if any a (non-fixed) men/women break up, it is possible to find another stable matching by changing the partners of those a men/women and also the partners of at most b other couples. In order to show that deciding if there exists an (a, b)-supermatch is NP-complete, we first introduce a SAT formulation that is shown to be NP-complete using Schaefer's Dichotomy Theorem. Then, we show the equivalence between the SAT formulation and finding a (1, 1)-supermatch on a specific family of instances.

6 citations


Proceedings ArticleDOI
TL;DR: The notion of robustness in stable matching problems is introduced in this article, and the most robust stable matching is defined as a (1, b)-supermatch where b is minimum.
Abstract: We study the notion of robustness in stable matching problems. We first define robustness by introducing (a, b)-supermatches. An (a, b)-supermatch is a stable matching in which, if a pairs break up, it is possible to find another stable matching by changing the partners of those a pairs and at most b other pairs. In this context, we define the most robust stable matching as a (1, b)-supermatch where b is minimum. We show that checking whether a given stable matching is a (1, b)-supermatch can be done in polynomial time. Next, we use this procedure to design a constraint programming model, a local search approach, and a genetic algorithm to find the most robust stable matching. Our empirical evaluation on large instances shows that local search outperforms the other approaches.

6 citations


Posted Content
TL;DR: In this paper, deciding whether an (a, b)-supermatch exists is shown to be NP-complete, via the equivalence between a SAT formulation and finding a (1, 1)-supermatch on a specific family of instances.
Abstract: Robust Stable Marriage (RSM) is a variant of the classical Stable Marriage problem, where the robustness of a given stable matching is measured by the number of modifications required to repair it in case an unforeseen event occurs. We focus on the complexity of finding an (a, b)-supermatch. An (a, b)-supermatch is defined as a stable matching in which, if any a (non-fixed) men/women break up, it is possible to find another stable matching by changing the partners of those a men/women and also the partners of at most b other couples. In order to show that deciding if there exists an (a, b)-supermatch is NP-complete, we first introduce a SAT formulation that is shown to be NP-complete using Schaefer's Dichotomy Theorem. Then, we show the equivalence between the SAT formulation and finding a (1, 1)-supermatch on a specific family of instances.

5 citations


Book ChapterDOI
22 Mar 2017
TL;DR: This chapter discusses the challenges of coordinated energy management in data centres and presents a novel, scalable, integrated energy management system architecture for data-centre-wide optimisation.
Abstract: Data centres are part of today's critical information and communication infrastructure, and the majority of business transactions as well as much of our digital life now depend on them. At the same time, data centres are large primary energy consumers, with energy consumed by IT and server room air conditioning equipment and also by general building facilities. In many data centres, IT equipment energy and cooling energy requirements are not always coordinated, so energy consumption is not optimised. Most data centres lack an integrated energy management system that jointly optimises and controls all of their energy-consuming equipment in order to reduce energy consumption and increase the usage of local renewable energy sources. In this chapter, the authors discuss the challenges of coordinated energy management in data centres and present a novel, scalable, integrated energy management system architecture for data-centre-wide optimisation. A prototype of the system has been implemented, including joint workload and thermal management algorithms. The control algorithms are evaluated in an accurate simulation-based model of a real data centre. Results show significant energy savings potential, in some cases up to 40%, by integrating workload and thermal management.

4 citations


Proceedings ArticleDOI
01 Nov 2017
TL;DR: This paper proposes a method to formalize the acquisition of local preferences, formalizes the characteristics and size of the complete assignments required to acquire all local weights, and presents a heuristic algorithm that searches for such assignments.
Abstract: Many real-life problems can be formulated as Boolean satisfiability (SAT). In addition, in many of these problems there are some hard clauses that must be satisfied, but also other soft clauses that can remain unsatisfied at some cost. These problems are referred to as Weighted Partial Maximum Satisfiability (WPMS). For solving them, the challenge is to find a solution that minimizes the total sum of the costs of the unsatisfied clauses. Configuration problems, which involve customizing products according to a user's specific requirements, are real-life examples. In the literature there exist many efficient techniques for finding solutions having minimum total cost. However, less attention has been paid to the fact that in many real-life problems the associated weights for soft clauses can be unknown. An example of such a situation is when users cannot provide local preferences but instead express global preferences over complete assignments. In these cases, the acquisition of preferences can be the key to finding the best solution. In this paper, we propose a method to formalize the acquisition of local preferences. The process involves solving the associated system of linear equations for a set of complete assignments and their costs. Furthermore, we formalize the characteristics and size of the complete assignments required to acquire all local weights. We present a heuristic algorithm that searches for such assignments and performs promisingly on many benchmarks from the literature.
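To make the linear-system view concrete, the sketch below recovers unknown soft-clause weights from the observed global costs of a few complete assignments, using the standard WPMS cost definition (the cost of an assignment is the sum of the weights of the soft clauses it violates). The clauses, assignments, and costs are all invented for illustration.

```python
# Sketch of acquiring local weights by solving the linear system
# cost(assignment) = sum of weights of the soft clauses it violates.
import numpy as np

soft_clauses = [(1, 2), (-1, 3), (-2, -3)]   # clauses as literal tuples

def violates(assignment, clause):
    """True if no literal in the clause is satisfied by the assignment."""
    return not any(assignment[abs(l)] == (l > 0) for l in clause)

# Complete assignments (variable -> bool) with observed global costs.
assignments = [{1: False, 2: False, 3: False},
               {1: True,  2: True,  3: True},
               {1: True,  2: False, 3: False}]
costs = np.array([5.0, 3.0, 2.0])            # user-reported total costs

# Violation matrix: one row per assignment, one column per soft clause.
A = np.array([[violates(a, c) for c in soft_clauses] for a in assignments],
             dtype=float)
weights, *_ = np.linalg.lstsq(A, costs, rcond=None)
print(weights)                                # -> [5. 2. 3.]
```

With enough suitably chosen assignments the system determines every weight uniquely; characterising how many such assignments are needed, and of what form, is exactly what the paper formalizes.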

Book ChapterDOI
05 Jun 2017
TL;DR: This paper develops a new model of the GDDC as a DCOP where each DC operator is represented by an agent and introduces a novel semi-asynchronous distributed algorithm for solving such DCOPs.
Abstract: The geographically distributed data centres problem (GDDC) is a naturally distributed resource allocation problem. The problem involves allocating a set of virtual machines (VMs) amongst the data centres (DCs) in each time period of an operating horizon. The goal is to optimize the allocation of workload across a set of DCs such that the energy cost is minimized, while respecting limitations on data centre capacities, migrations of VMs, etc. In this paper, we propose a distributed optimization method for GDDC using the distributed constraint optimization (DCOP) framework. First, we develop a new model of the GDDC as a DCOP where each DC operator is represented by an agent. Second, since traditional DCOP approaches are unsuited to these types of large-scale problems with multiple variables per agent and global constraints, we introduce a novel semi-asynchronous distributed algorithm for solving such DCOPs. Preliminary results illustrate the benefits of the new method.
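As a toy illustration of the problem shape only, not the paper's semi-asynchronous DCOP algorithm, the sketch below places VMs greedily on the cheapest data centre with spare capacity; all names, prices, and capacities are invented.

```python
# Centralized greedy stand-in for the GDDC allocation problem: assign
# each VM to the cheapest data centre that still has spare capacity.
dcs = {"dublin": {"price": 0.12, "capacity": 3},
       "oslo":   {"price": 0.09, "capacity": 2}}

def allocate(num_vms, dcs):
    """Return, per VM, the data centre it is placed on."""
    load = {name: 0 for name in dcs}
    plan = []
    for _ in range(num_vms):
        open_dcs = [n for n in dcs if load[n] < dcs[n]["capacity"]]
        best = min(open_dcs, key=lambda n: dcs[n]["price"])
        load[best] += 1
        plan.append(best)
    return plan

print(allocate(4, dcs))   # -> ['oslo', 'oslo', 'dublin', 'dublin']
```

In the paper's DCOP model this decision is not centralized: each DC operator is an agent with its own variables, and the agents coordinate to respect the shared capacity and migration constraints.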

Proceedings ArticleDOI
24 Nov 2017
TL;DR: Initial results indicate that the proposed model of normative behavior is effective in detecting insider attacks that conform to the demonstrated mimicry attack.
Abstract: One of the challenges in database security is the timely detection of an insider attack. This becomes more challenging in the case of sophisticated or expert insiders. Behavioral-based techniques have shown promising results in detecting insider attacks. Most behavioral-based techniques consider a query in isolation in order to model an insider's normative behavior, thus only detecting malicious behavior that is limited to a single query. A recently proposed approach considers sequences of queries to model an insider's normative behavior by using n-grams that capture short-term correlations in an application [1]. However, behavioral-based approaches, including the n-gram approach, are vulnerable to mimicry attacks, whereby a sophisticated inside attacker can craft a sequence of statements to mimic normal behavior as a set of legitimate transactions. Thus, a mechanism to detect this type of mimicry attack is desirable. In this paper, we first demonstrate an example mimicry attack on an n-gram based approach and then propose a behavioral-based technique that facilitates its detection. The proposed technique complements existing behavioral-based approaches, including the n-gram approach, and it can be deployed independently. Experiments are presented whereby a query-analytics model is used to construct normative behavior from the query logs of a synthetic banking application system. Initial results indicate that the proposed model is effective in detecting insider attacks that conform to the demonstrated mimicry attack.
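For context, the n-gram baseline that the mimicry attack targets can be sketched in a few lines; the query labels and log below are illustrative, not from the paper's banking application.

```python
# Minimal sketch of an n-gram baseline for database intrusion detection:
# n-grams of statement types seen in a normal log form the model, and a
# session is flagged if it contains an n-gram never seen in training.
def ngrams(seq, n=2):
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

normal_log = ["SELECT balance", "SELECT history", "UPDATE balance",
              "SELECT balance", "INSERT audit"]
model = ngrams(normal_log)

def is_anomalous(session, model, n=2):
    """Flag any session containing an n-gram unseen in normal behavior."""
    return bool(ngrams(session, n) - model)

print(is_anomalous(["SELECT balance", "SELECT history"], model))  # False
print(is_anomalous(["UPDATE balance", "UPDATE balance"], model))  # True
```

A mimicry attack defeats this check by chaining only n-grams that already appear in the model, which is exactly the blind spot the proposed technique targets.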

Proceedings ArticleDOI
03 Apr 2017
TL;DR: This paper explores both the effect of instance ordering on configuration performance and which candidate selection policy is most effective in each case.
Abstract: Algorithm configuration has been repeatedly shown to have a large impact on the performance of solvers. Real-time algorithm configurators, such as ReACTR, are able to tune a solver online without incurring costly offline training. To do this, ReACTR adopts a one-pass methodology where each instance in a stream of instances to be solved is considered only as it arrives. As such, the order in which instances are visited can affect the quality of the tuned parameters and, as a result, the solving time. ReACTR uses a selection procedure to choose multiple configurations to run from a set of possible candidates. These configurations are then run in parallel on each instance, and a winner emerges: the configuration that solves the problem quickest or achieves the best objective value. Information about this winner is then fed back into a ranking system, which is used as part of the selection procedure, and the cycle continues for all future instances. This paper explores both the effect of the instance ordering on configuration performance and which candidate selection policy is most effective in each case.
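The select-race-update cycle described above can be caricatured as follows. This is a deliberately simplified stand-in for ReACTR's actual ranking and candidate-generation machinery, with invented configuration names and simulated solve times.

```python
# Simplified select-race-update cycle: keep a score per configuration,
# race the top-ranked ones on each arriving instance, reward the winner.
import random

scores = {"cfg_a": 0, "cfg_b": 0, "cfg_c": 0, "cfg_d": 0}

def solve_time(cfg, instance):
    """Stand-in for actually running the solver with this configuration."""
    random.seed(hash((cfg, instance)))
    return random.uniform(1, 10)

def process(instance, k=2):
    ranked = sorted(scores, key=scores.get, reverse=True)
    racers = ranked[:k]                       # top-k by current rank
    winner = min(racers, key=lambda c: solve_time(c, instance))
    scores[winner] += 1                       # feed the result back
    return winner

for inst in ["i1", "i2", "i3", "i4"]:
    process(inst)
print(scores)
```

Even in this caricature, the order of the instance stream shapes which configurations accumulate rank, which is the sensitivity the paper studies.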

Proceedings ArticleDOI
01 Nov 2017
TL;DR: This paper studies the NP-hard variant in which additional copies of posts can be added to the preference lists, called Popular Matching with Copies; it defines new dominance rules for this problem, presents several novel graph properties characterising the posts that should be copied with priority, and reports a comprehensive set of experiments.
Abstract: We study the problem of matching a set of applicants to a set of posts, where each applicant has an ordinal preference list, which may contain ties, ranking a subset of posts. A matching M is popular if there exists no matching M' where more applicants prefer M' to M. Several notions of optimality have been studied in the literature for the case of strictly ordered preference lists. In this paper we address the case involving ties and present novel algorithmic and complexity results for this variant. Next, we focus on the NP-hard case where additional copies of posts can be added to the preference lists, called Popular Matching with Copies. We define new dominance rules for this problem and present several novel graph properties characterising the posts that should be copied with priority. We present a comprehensive set of experiments for the popular matching problem with copies to evaluate our dominance rules as well as different branching strategies. Our experimental study emphasizes the importance of the dominance rules and characterises the key aspects of a good branching strategy.
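The popularity test at the heart of the problem is simple to state in code. The sketch below compares two matchings head-to-head on a made-up instance (smaller rank means more preferred; ties would simply map to equal ranks).

```python
# Direct sketch of the popularity comparison: M2 beats M1 if strictly
# more applicants prefer their post in M2 than prefer their post in M1.
prefs = {"ann": {"p1": 0, "p2": 1, "p3": 2},
         "bob": {"p1": 0, "p3": 1, "p2": 2},
         "cat": {"p1": 0, "p2": 1, "p3": 2}}

def prefer_count(M_from, M_to, prefs):
    """Applicants who strictly prefer their post in M_to over M_from."""
    return sum(1 for a in prefs
               if prefs[a][M_to[a]] < prefs[a][M_from[a]])

def more_popular(M2, M1, prefs):
    return prefer_count(M1, M2, prefs) > prefer_count(M2, M1, prefs)

M1 = {"ann": "p1", "bob": "p3", "cat": "p2"}
M2 = {"ann": "p2", "bob": "p3", "cat": "p1"}
print(more_popular(M2, M1, prefs))   # -> False: one gains, one loses
```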


Proceedings ArticleDOI
01 Jul 2017
TL;DR: This paper shows how the original reference network is reduced while maintaining important properties for the design of the architecture; it also elicits geotypes from the structure of the reference network and discusses the importance of this computation in the design of the optical distribution network.
Abstract: Realistic reference networks are critical for all techno-economic evaluations in the design of an optical network architecture. As this information was not publicly available for Ireland, we generated such a reference network. In this paper we elaborate on the challenges faced during its generation and present some statistics. We also show how it is used in the design of an optical architecture for the same country. The level of detail needed of the reference network varies according to the layer of the architecture being designed. We show how the original reference network is reduced while maintaining important properties for the design of the architecture. We also elicit geotypes from the structure of the reference network and discuss the importance of this computation in the design of the optical distribution network.