Author

Andrew J. Parkes

Bio: Andrew J. Parkes is an academic researcher from the University of Nottingham. The author has contributed to research in topics: Heuristics & Metaheuristics. The author has an h-index of 28 and has co-authored 108 publications receiving 2,707 citations. Previous affiliations of Andrew J. Parkes include the University of Southampton and the University of Cambridge.


Papers
Journal ArticleDOI
TL;DR: The broad aim of the competition was to create better understanding between researchers and practitioners by allowing emerging techniques to be developed and tested on real-world models of timetabling problems.
Abstract: The Second International Timetabling Competition (TTC2007) opened in August 2007. Building on the success of the first competition in 2002, this sequel aimed to further develop research activity in the area of educational timetabling. The broad aim of the competition was to create better understanding between researchers and practitioners by allowing emerging techniques to be developed and tested on real-world models of timetabling problems. To support this, a primary goal was to provide researchers with models of problems faced by practitioners through incorporating a significant number of real-world constraints. Another objective of the competition was to stimulate debate within the widening timetabling research community. The competition was divided into three tracks to reflect the important variations that exist in educational timetabling within higher education. Because these formulations incorporate an increased number of “real-world” issues, it is anticipated that the competition will now set the research agenda within the field. After finishing in January 2008, final results were made available in May 2008. Along with background to the competition, the competition tracks are described here along with a brief overview of the techniques used by the competition winners.

219 citations

Journal ArticleDOI
TL;DR: The necessary and sufficient conditions for a rigid supersymmetric theory to be finite at one loop are derived in this article, and it is shown that these conditions also imply two-loop finiteness.

195 citations
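
For context, the finiteness conditions referred to above are usually quoted in a form like the following; the normalisation is convention-dependent and this is an illustrative sketch rather than a quotation from the paper:

\[
\beta_g^{(1)} \;\propto\; g^3\Big[\sum_i T(R_i) - 3\,C_2(G)\Big] = 0,
\qquad
\gamma^{(1)}{}_i{}^{j} \;\propto\; \tfrac{1}{2}\,Y_{ipq}\,\bar{Y}^{jpq} - 2\,g^2\,C_2(R)\,\delta_i^{\,j} = 0 .
\]

That is, the one-loop gauge beta function and the one-loop anomalous dimensions of all chiral superfields must vanish; the result cited here is that satisfying these one-loop conditions also guarantees two-loop finiteness.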

Journal ArticleDOI
TL;DR: This article surveys NP-Complete puzzles in the hope of motivating further research in this fascinating area, particularly for those puzzles which have received little scientific attention to date.
Abstract: Single-player games (often called puzzles) have received considerable attention from the scientific community. Consequently, interesting insights into some puzzles, and into the approaches for solving them, have emerged. However, many puzzles have been neglected, possibly because they are unknown to many people. In this article, we survey NP-Complete puzzles in the hope of motivating further research in this fascinating area, particularly for those puzzles which have received little scientific attention to date.

159 citations

Book ChapterDOI
11 Apr 2012
TL;DR: HyFlex as discussed by the authors is a software framework for the development of cross-domain search methodologies, which features a common software interface for dealing with different combinatorial optimisation problems and provides the algorithm components that are problem specific.
Abstract: This paper presents HyFlex, a software framework for the development of cross-domain search methodologies. The framework features a common software interface for dealing with different combinatorial optimisation problems and provides the algorithm components that are problem specific. In this way, the algorithm designer does not require a detailed knowledge of the problem domains and can thus concentrate on designing adaptive general-purpose optimisation algorithms. Six hard combinatorial problems are fully implemented: maximum satisfiability, one-dimensional bin packing, permutation flow shop, personnel scheduling, traveling salesman and vehicle routing. Each domain contains a varied set of instances, including real-world industrial data and an extensive set of state-of-the-art problem-specific heuristics and search operators. HyFlex represents a valuable new benchmark of heuristic search generality, with which adaptive cross-domain algorithms can be easily developed and reliably compared. This article serves both as a tutorial and as a survey of the research achievements and publications so far using HyFlex.

147 citations
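
To make the cross-domain idea concrete, the sketch below shows the same separation of concerns in miniature: a domain-independent interface that hides problem-specific details behind a few generic operations, and a simple selection hyper-heuristic written only against that interface. This is an illustrative Python sketch; the names (ProblemDomain, apply_heuristic, OneMaxDomain, and so on) are invented for the example and are not the actual HyFlex Java API.

```python
import random
from abc import ABC, abstractmethod

class ProblemDomain(ABC):
    """Illustrative domain-independent interface (not the real HyFlex API)."""

    @abstractmethod
    def initial_solution(self):
        """Return an initial candidate solution."""

    @abstractmethod
    def num_heuristics(self) -> int:
        """Number of problem-specific low-level heuristics available."""

    @abstractmethod
    def apply_heuristic(self, h: int, solution):
        """Apply low-level heuristic h to a solution, returning a new solution."""

    @abstractmethod
    def objective(self, solution) -> float:
        """Objective value to minimise."""

class OneMaxDomain(ProblemDomain):
    """Toy domain: minimise the number of zeros in a bit string."""

    def __init__(self, n=50, seed=0):
        self.n = n
        self.rng = random.Random(seed)

    def initial_solution(self):
        return [self.rng.randint(0, 1) for _ in range(self.n)]

    def num_heuristics(self):
        return 2  # two toy low-level heuristics

    def apply_heuristic(self, h, solution):
        s = solution[:]
        i = self.rng.randrange(self.n)
        if h == 0:
            s[i] ^= 1   # heuristic 0: flip one random bit
        else:
            s[i] = 1    # heuristic 1: set one random bit to 1
        return s

    def objective(self, solution):
        return solution.count(0)

def simple_hyper_heuristic(domain: ProblemDomain, iterations=1000, seed=1):
    """Random heuristic selection with improve-or-equal acceptance,
    written purely against the domain-independent interface."""
    rng = random.Random(seed)
    current = domain.initial_solution()
    best = current
    for _ in range(iterations):
        h = rng.randrange(domain.num_heuristics())
        candidate = domain.apply_heuristic(h, current)
        if domain.objective(candidate) <= domain.objective(current):
            current = candidate
            if domain.objective(current) < domain.objective(best):
                best = current
    return best

if __name__ == "__main__":
    dom = OneMaxDomain()
    best = simple_hyper_heuristic(dom)
    print("best objective:", dom.objective(best))
```

Swapping in a different ProblemDomain subclass changes the problem but leaves the hyper-heuristic untouched, which is the kind of generality HyFlex is designed to benchmark.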

Proceedings Article
01 Jul 1998
TL;DR: The theoretical properties of supermodels are investigated, showing that finding supermodels is typically of the same theoretical complexity as finding models, and a general way to modify a logical theory so that a model of the modified theory is a supermodel of the original is provided.
Abstract: When search techniques are used to solve a practical problem, the solution produced is often brittle in the sense that small execution difficulties can have an arbitrarily large effect on the viability of the solution. The AI community has responded to this difficulty by investigating the development of "robust problem solvers" that are intended to be proof against this difficulty. We argue that robustness is best cast not as a property of the problem solver, but as a property of the solution. We introduce a new class of models for a logical theory, called supermodels, that captures this idea. Supermodels guarantee that the model in question is robust, and allow us to quantify the degree to which it is so. We investigate the theoretical properties of supermodels, showing that finding supermodels is typically of the same theoretical complexity as finding models. We provide a general way to modify a logical theory so that a model of the modified theory is a supermodel of the original. Experimentally, we show that the supermodel problem exhibits phase transition behavior similar to that found in other satisfiability work.

100 citations
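
As a rough illustration of the robustness notion described above, the following Python sketch brute-forces a check that a CNF model tolerates any single variable being flipped, in the sense that satisfiability can be restored by flipping at most one other variable. This reading of the supermodel idea, and all names in the code, are our own illustration rather than the paper's exact definitions.

```python
def satisfies(clauses, assignment):
    """clauses: list of clauses, each a list of signed ints (positive literal
    means the variable must be True); assignment: dict var -> bool."""
    return all(
        any(assignment[abs(l)] == (l > 0) for l in clause)
        for clause in clauses
    )

def is_1_1_supermodel(clauses, assignment):
    """Check that `assignment` is a model and that, after any single variable
    is flipped (the 'breakage'), the formula can be re-satisfied by flipping
    at most one *other* variable (the 'repair')."""
    if not satisfies(clauses, assignment):
        return False
    variables = list(assignment)
    for broken in variables:
        damaged = dict(assignment)
        damaged[broken] = not damaged[broken]   # the breakage
        if satisfies(clauses, damaged):
            continue                            # no repair needed
        if not any(
            satisfies(clauses, {**damaged, repair: not damaged[repair]})
            for repair in variables if repair != broken
        ):
            return False                        # no single-flip repair exists
    return True

if __name__ == "__main__":
    # (x1 or x2) and (not x1 or x3)
    clauses = [[1, 2], [-1, 3]]
    model = {1: True, 2: True, 3: True}
    print(satisfies(clauses, model), is_1_1_supermodel(clauses, model))
```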


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
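
As a toy illustration of the fourth category (a mail filter learned from the messages one particular user rejects), here is a minimal naive Bayes word classifier in Python; the training data and the choice of classifier are invented for the example and are not taken from the article.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, rejected: bool). Returns per-class word counts
    and per-class message totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, rejected in messages:
        for word in text.lower().split():
            counts[rejected][word] += 1
        totals[rejected] += 1
    return counts, totals

def rejects(text, counts, totals):
    """Naive Bayes decision: does the learned filter reject this message?"""
    n = sum(totals.values())
    vocab = len(set(counts[True]) | set(counts[False]))
    scores = {}
    for label in (True, False):
        # log prior + sum of log likelihoods with add-one smoothing
        score = math.log(totals[label] / n)
        denom = sum(counts[label].values()) + vocab
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        scores[label] = score
    return scores[True] > scores[False]

if __name__ == "__main__":
    history = [
        ("win a free prize now", True),
        ("cheap prize offer click now", True),
        ("meeting agenda for monday", False),
        ("lecture notes and timetable attached", False),
    ]
    counts, totals = train(history)
    print(rejects("free prize click", counts, totals))      # likely True
    print(rejects("monday meeting notes", counts, totals))  # likely False
```

Retraining on new examples as the user keeps rejecting mail is what "maintaining the filtering rules automatically" amounts to in this toy setting.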

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

20 Jul 1986

2,037 citations

Book
01 Jan 2006
TL;DR: Researchers from other fields should find in this handbook an effective way to learn about constraint programming and to possibly use some of the constraint programming concepts and techniques in their work, thus providing a means for a fruitful cross-fertilization among different research areas.
Abstract: Constraint programming is a powerful paradigm for solving combinatorial search problems that draws on a wide range of techniques from artificial intelligence, computer science, databases, programming languages, and operations research. Constraint programming is currently applied with success to many domains, such as scheduling, planning, vehicle routing, configuration, networks, and bioinformatics. The aim of this handbook is to capture the full breadth and depth of the constraint programming field and to be encyclopedic in its scope and coverage. While there are several excellent books on constraint programming, such books necessarily focus on the main notions and techniques and cannot also cover extensions, applications, and languages. The handbook gives a reasonably complete coverage of all these lines of work, based on constraint programming, so that a reader can have a rather precise idea of the whole field and its potential. Of course each line of work is dealt with in a survey-like style, where some details may be neglected in favor of coverage. However, the extensive bibliography of each chapter will help the interested readers to find suitable sources for the missing details. Each chapter of the handbook is intended to be a self-contained survey of a topic, and is written by one or more authors who are leading researchers in the area. The intended audience of the handbook is researchers, graduate students, higher-year undergraduates and practitioners who wish to learn about the state-of-the-art in constraint programming. No prior knowledge about the field is necessary to be able to read the chapters and gather useful knowledge. Researchers from other fields should find in this handbook an effective way to learn about constraint programming and to possibly use some of the constraint programming concepts and techniques in their work, thus providing a means for a fruitful cross-fertilization among different research areas. The handbook is organized in two parts. The first part covers the basic foundations of constraint programming, including the history, the notion of constraint propagation, basic search methods, global constraints, tractability and computational complexity, and important issues in modeling a problem as a constraint problem. The second part covers constraint languages and solvers, several useful extensions to the basic framework (such as interval constraints, structured domains, and distributed CSPs), and successful application areas for constraint programming.

- Covers the whole field of constraint programming
- Survey-style chapters
- Five chapters on applications

Table of Contents
Foreword (Ugo Montanari)
Part I: Foundations
Chapter 1. Introduction (Francesca Rossi, Peter van Beek, Toby Walsh)
Chapter 2. Constraint Satisfaction: An Emerging Paradigm (Eugene C. Freuder, Alan K. Mackworth)
Chapter 3. Constraint Propagation (Christian Bessiere)
Chapter 4. Backtracking Search Algorithms (Peter van Beek)
Chapter 5. Local Search Methods (Holger H. Hoos, Edward Tsang)
Chapter 6. Global Constraints (Willem-Jan van Hoeve, Irit Katriel)
Chapter 7. Tractable Structures for CSPs (Rina Dechter)
Chapter 8. The Complexity of Constraint Languages (David Cohen, Peter Jeavons)
Chapter 9. Soft Constraints (Pedro Meseguer, Francesca Rossi, Thomas Schiex)
Chapter 10. Symmetry in Constraint Programming (Ian P. Gent, Karen E. Petrie, Jean-Francois Puget)
Chapter 11. Modelling (Barbara M. Smith)
Part II: Extensions, Languages, and Applications
Chapter 12. Constraint Logic Programming (Kim Marriott, Peter J. Stuckey, Mark Wallace)
Chapter 13. Constraints in Procedural and Concurrent Languages (Thom Fruehwirth, Laurent Michel, Christian Schulte)
Chapter 14. Finite Domain Constraint Programming Systems (Christian Schulte, Mats Carlsson)
Chapter 15. Operations Research Methods in Constraint Programming (John Hooker)
Chapter 16. Continuous and Interval Constraints (Frederic Benhamou, Laurent Granvilliers)
Chapter 17. Constraints over Structured Domains (Carmen Gervet)
Chapter 18. Randomness and Structure (Carla Gomes, Toby Walsh)
Chapter 19. Temporal CSPs (Manolis Koubarakis)
Chapter 20. Distributed Constraint Programming (Boi Faltings)
Chapter 21. Uncertainty and Change (Kenneth N. Brown, Ian Miguel)
Chapter 22. Constraint-Based Scheduling and Planning (Philippe Baptiste, Philippe Laborie, Claude Le Pape, Wim Nuijten)
Chapter 23. Vehicle Routing (Philip Kilby, Paul Shaw)
Chapter 24. Configuration (Ulrich Junker)
Chapter 25. Constraint Applications in Networks (Helmut Simonis)
Chapter 26. Bioinformatics and Constraints (Rolf Backofen, David Gilbert)

1,527 citations
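
As a small, self-contained illustration of the core Part I ideas (modelling a problem with variables, domains and constraints, then solving it with backtracking search plus constraint propagation), here is a toy Python solver. It is a generic textbook-style sketch, not code from the handbook; the tiny "timetabling" model at the bottom is invented for the example.

```python
def solve(domains, constraints, assignment=None):
    """Backtracking search with forward checking.

    domains: dict var -> set of candidate values
    constraints: list of (scope, predicate); scope is a tuple of vars and the
        predicate takes their values (None if unassigned), returning False
        only when the constraint is definitely violated.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return assignment
    # pick the unassigned variable with the smallest remaining domain (MRV)
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in sorted(domains[var]):
        assignment[var] = value
        if all(pred(*(assignment.get(v) for v in scope))
               for scope, pred in constraints):
            # forward checking: prune now-inconsistent values of other variables
            pruned, ok = {}, True
            for other in domains:
                if other in assignment:
                    continue
                bad = {val for val in domains[other]
                       if not all(pred(*({**assignment, other: val}.get(v) for v in scope))
                                  for scope, pred in constraints)}
                if bad:
                    pruned[other] = bad
                    domains[other] -= bad
                    if not domains[other]:
                        ok = False
                        break
            if ok:
                result = solve(domains, constraints, assignment)
                if result:
                    return result
            for other, bad in pruned.items():   # undo pruning on backtrack
                domains[other] |= bad
        del assignment[var]
    return None

if __name__ == "__main__":
    # Toy "timetabling" model: three lectures, two time slots,
    # adjacent lectures must be in different slots.
    def differ(x, y):
        return x is None or y is None or x != y

    domains = {"A": {1, 2}, "B": {1, 2}, "C": {1, 2}}
    constraints = [(("A", "B"), differ), (("B", "C"), differ)]
    print(solve(domains, constraints))
```

Global constraints, stronger propagation, and the other techniques surveyed in the handbook replace the naive pieces of this sketch in practical solvers.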