Author

Emma Hart

Bio: Emma Hart is an academic researcher from Edinburgh Napier University. The author has contributed to research on the topics of evolutionary algorithms and heuristics, has an h-index of 28, and has co-authored 187 publications receiving 3968 citations. Previous affiliations of Emma Hart include the University of Edinburgh and the Artificial Intelligence Applications Institute.


Papers
Book ChapterDOI
01 Jan 2003
TL;DR: This chapter introduces and surveys hyper-heuristics, an emerging methodology in search and optimisation that aims to raise the level of generality at which optimisation systems operate, leading to more general systems able to handle a wide range of problem domains.
Abstract: This chapter introduces and overviews an emerging methodology in search and optimisation. One of the key aims of these new approaches, which have been termed hyper-heuristics, is to raise the level of generality at which optimisation systems can operate. An objective is that hyper-heuristics will lead to more general systems that are able to handle a wide range of problem domains, in contrast to current meta-heuristic technology, which tends to be customised to a particular problem or a narrow class of problems. Hyper-heuristics are broadly concerned with intelligently choosing the right heuristic or algorithm in a given situation. Of course, a hyper-heuristic can be (and often is) a (meta-)heuristic, and it can operate on (meta-)heuristics. In a certain sense, a hyper-heuristic works at a higher level than the typical application of meta-heuristics to optimisation problems, i.e., a hyper-heuristic can be thought of as a (meta-)heuristic which operates on lower-level (meta-)heuristics. In this chapter we introduce the idea and give a brief history of this emerging area. In addition, we review some of the latest work published in the field.
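
The chapter is a survey rather than a single algorithm, but the core idea is easy to sketch. Below is a minimal selection hyper-heuristic in Python; the names, the epsilon-greedy scoring rule, and the acceptance criterion are all illustrative assumptions standing in for the "intelligent choice" the chapter describes, not a canonical method from the chapter.

```python
import random

def selection_hyperheuristic(initial_solution, heuristics, evaluate, iterations=1000):
    """Minimal selection hyper-heuristic sketch: keep a running score per
    low-level heuristic and choose among them epsilon-greedily."""
    solution = initial_solution
    best_cost = evaluate(solution)
    scores = {h: 0.0 for h in heuristics}
    for _ in range(iterations):
        # Explore occasionally; otherwise exploit the best-scoring heuristic.
        if random.random() < 0.1:
            h = random.choice(heuristics)
        else:
            h = max(heuristics, key=lambda x: scores[x])
        candidate = h(solution)          # each heuristic maps a solution to a new solution
        cost = evaluate(candidate)
        scores[h] += best_cost - cost    # reward improvement, penalise worsening
        if cost <= best_cost:            # accept non-worsening moves
            solution, best_cost = candidate, cost
    return solution
```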

691 citations

Journal ArticleDOI
01 Jan 2008
TL;DR: This paper suggests a set of problem features that the authors believe will allow the true potential of the immune system to be exploited in computational systems, and that define a unique niche for AIS.
Abstract: After a decade of research into the area of artificial immune systems, it is worthwhile to take a step back and reflect on the contributions that the paradigm has brought to the application areas to which it has been applied. Undeniably, there have been many success stories; however, if the field is to advance in the future and really carve out its own distinctive niche, then it is necessary to be able to illustrate that there are clear benefits to be obtained by applying this paradigm rather than others. This paper attempts to take stock of the application areas that have been tackled in the past, and to ask the difficult question 'was it worth it?'. We then attempt to suggest a set of problem features that we believe will allow the true potential of the immunological system to be exploited in computational systems, and define a unique niche for AIS.
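
The paper discusses AIS at the paradigm level; for readers unfamiliar with what such an algorithm looks like, here is a toy clonal-selection sketch (CLONALG-flavoured, a simplification for illustration rather than any specific published algorithm) of the kind of computational system the niche discussion is about.

```python
import random

def clonal_selection(evaluate, dim, pop_size=20, clones_per=5, generations=100):
    """Toy clonal selection: clone good antibodies, hypermutate the clones
    with a rate that grows for lower-affinity antibodies, keep the best."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate)                      # lower cost = higher affinity
        clones = []
        for rank, antibody in enumerate(pop[:pop_size // 2]):
            rate = 0.1 * (rank + 1)                 # weaker antibodies mutate more
            for _ in range(clones_per):
                clones.append([g + random.gauss(0, rate) for g in antibody])
        pop = sorted(pop + clones, key=evaluate)[:pop_size]
    return pop[0]

# e.g. minimising the sphere function
best = clonal_selection(lambda x: sum(g * g for g in x), dim=5)
```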

348 citations

Book ChapterDOI
27 Sep 1998
TL;DR: This work tests various diploid algorithms, with and without mechanisms for dominance change, on non-stationary problems, and concludes that some form of dominance change is essential, as a diploid encoding is not enough in itself to allow a flexible response to change.
Abstract: It is sometimes claimed that genetic algorithms using diploid representations will be more suitable for problems in which the environment changes from time to time, as the additional information stored in the double chromosome will ensure diversity, which in turn allows the system to respond more quickly and robustly to a change in the fitness function. We have tested various diploid algorithms, with and without mechanisms for dominance change, on non-stationary problems, and conclude that some form of dominance change is essential, as a diploid encoding is not enough in itself to allow flexible response to change. Moreover, a haploid method which randomly mutates chromosomes whose fitness has fallen sharply also performs well on these problems.
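
A minimal sketch of the mechanism under discussion (an illustration under simple assumptions, not the paper's exact scheme): each individual carries two chromosomes plus a dominance map deciding which allele is expressed at each locus, and the dominance map itself can mutate, which is the kind of dominance change the paper finds essential.

```python
import random

def express(chrom_a, chrom_b, dominance):
    """Express a diploid individual: at each locus the dominance map decides
    which of the two alleles appears in the phenotype."""
    return [a if d else b for a, b, d in zip(chrom_a, chrom_b, dominance)]

def mutate_dominance(dominance, rate=0.02):
    """Dominance-change mechanism: occasionally flip which chromosome
    dominates a locus, letting shielded alleles re-emerge after an
    environment change."""
    return [not d if random.random() < rate else d for d in dominance]

# A diploid individual is (chrom_a, chrom_b, dominance). The fitness function
# sees only express(...); crossover and mutation act on both chromosomes,
# while mutate_dominance() supplies the adaptive response to a changed
# fitness function.
```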

165 citations

Proceedings Article
09 Jul 2002
TL;DR: This work uses an EA, in particular the learning classifier system XCS, to learn a solution process rather than to solve individual problems, and shows that the evolved solution process can provide an optimal solution in over 78% of cases.
Abstract: Evolutionary algorithms (EAs) often appear to be a 'black box', neither offering worst-case bounds nor any guarantee of optimality when used to solve individual problems. They can also take much longer than non-evolutionary methods. We try to address these concerns by using an EA, in particular the learning classifier system XCS, to learn a solution process rather than to solve individual problems. The process chooses one of various simple non-evolutionary heuristics to apply to each state of a problem, gradually transforming the problem from its initial state to a solved state. We test this on a large set of one-dimensional bin packing problems. For some of the problems, none of the heuristics used can find an optimal answer; however, the evolved solution process can find an optimal solution in over 78% of cases.
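
A rough sketch of that pipeline, with first-fit and best-fit as examples of simple non-evolutionary packing heuristics; the `policy` argument stands in for what the paper actually evolves with XCS, and the stand-in policy in the usage example is hypothetical.

```python
def first_fit(item, bins, capacity):
    """Place the item in the first bin with room, opening a new bin if none."""
    for b in bins:
        if sum(b) + item <= capacity:
            b.append(item)
            return
    bins.append([item])

def best_fit(item, bins, capacity):
    """Place the item in the fullest bin that still has room."""
    feasible = [b for b in bins if sum(b) + item <= capacity]
    if feasible:
        max(feasible, key=sum).append(item)
    else:
        bins.append([item])

def pack(items, capacity, policy):
    """Apply a policy that picks a heuristic per problem state; in the paper
    this policy is what XCS learns."""
    bins = []
    for item in sorted(items, reverse=True):
        heuristic = policy(item, bins, capacity)   # returns first_fit or best_fit
        heuristic(item, bins, capacity)
    return bins

# trivial stand-in policy: best-fit for large items, first-fit otherwise
bins = pack([0.7, 0.5, 0.4, 0.3, 0.2, 0.6], capacity=1.0,
            policy=lambda item, bins, cap: best_fit if item > 0.5 else first_fit)
```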

150 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and methods for combining models in the context of machine learning.
Abstract: Contents: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: A coherent and comprehensive review of the vast research activity concerning epidemic processes is presented, detailing the successful theoretical approaches as well as making their limits and assumptions clear.
Abstract: Complex networks arise in a wide range of biological and sociotechnical systems. Epidemic spreading is central to our understanding of dynamical processes in complex networks, and is of interest to physicists, mathematicians, epidemiologists, and computer and social scientists. This review presents the main results and paradigmatic models in infectious disease modeling and generalized social contagion processes.
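
As a concrete example of the paradigmatic models the review covers, here is a toy discrete-time SIR epidemic on a contact network (an illustration under simple assumptions, not the review's formalism).

```python
import random

def sir_on_network(adj, beta=0.3, gamma=0.1, seed=0, steps=100):
    """Discrete-time SIR epidemic on a contact network given as an adjacency
    list: each infected node infects each susceptible neighbour with
    probability beta per step and recovers with probability gamma."""
    state = ['S'] * len(adj)
    state[seed] = 'I'
    infected_counts = []
    for _ in range(steps):
        new_state = state[:]
        for node, s in enumerate(state):
            if s == 'I':
                for nbr in adj[node]:                 # try to infect neighbours
                    if state[nbr] == 'S' and random.random() < beta:
                        new_state[nbr] = 'I'
                if random.random() < gamma:           # recover
                    new_state[node] = 'R'
        state = new_state
        infected_counts.append(state.count('I'))
    return infected_counts

# small ring network: each node touches its two neighbours
n = 50
adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
print(sir_on_network(adj))
```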

3,173 citations

Book
22 Jun 2009
TL;DR: This book provides a complete background on metaheuristics and shows readers how to design and implement efficient algorithms to solve complex optimization problems across a diverse range of applications, from networking and bioinformatics to engineering design, routing, and scheduling.
Abstract: A unified view of metaheuristics. This book provides a complete background on metaheuristics and shows readers how to design and implement efficient algorithms to solve complex optimization problems across a diverse range of applications, from networking and bioinformatics to engineering design, routing, and scheduling. It presents the main design questions for all families of metaheuristics and clearly illustrates how to implement the algorithms under a software framework to reuse both the design and code. Throughout the book, the key search components of metaheuristics are considered as a toolbox for: designing efficient metaheuristics (e.g. local search, tabu search, simulated annealing, evolutionary algorithms, particle swarm optimization, scatter search, ant colonies, bee colonies, artificial immune systems) for optimization problems; designing efficient metaheuristics for multi-objective optimization problems; designing hybrid, parallel, and distributed metaheuristics; and implementing metaheuristics on sequential and parallel machines. Using many case studies and treating design and implementation independently, this book gives readers the skills necessary to solve large-scale optimization problems quickly and efficiently. It is a valuable reference for practicing engineers and researchers from diverse areas dealing with optimization or machine learning, and for graduate students in computer science, operations research, control, engineering, business and management, and applied mathematics.
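
As one example of the single-solution metaheuristics in the book's toolbox, a textbook simulated annealing loop might look like the generic sketch below (not the book's framework code; the geometric cooling schedule is one common choice among many).

```python
import math
import random

def simulated_annealing(initial, neighbour, cost, t0=1.0, alpha=0.995, steps=10000):
    """Textbook simulated annealing: always accept improving moves, accept
    worsening moves with probability exp(-delta/T), and cool T geometrically."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        cand = neighbour(current)
        delta = cost(cand) - current_cost
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, current_cost = cand, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= alpha                      # geometric cooling schedule
    return best, best_cost
```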

2,735 citations

Book
01 Jan 2004
TL;DR: In this book, the authors use a series of puzzles to introduce modern heuristic methods for problem solving, covering traditional optimisation methods, evolutionary algorithms, constraint handling, and hybrid systems, with worked problems such as the Traveling Salesman Problem and the puzzle of who owns the zebra.
Abstract: Contents: I. What Are the Ages of My Three Sons?; 1. Why Are Some Problems Difficult to Solve?; II. How Important Is a Model?; 2. Basic Concepts; III. What Are the Prices in 7-11?; 3. Traditional Methods, Part 1; IV. What Are the Numbers?; 4. Traditional Methods, Part 2; V. What's the Color of the Bear?; 5. Escaping Local Optima; VI. How Good Is Your Intuition?; 6. An Evolutionary Approach; VII. One of These Things Is Not Like the Others; 7. Designing Evolutionary Algorithms; VIII. What Is the Shortest Way?; 8. The Traveling Salesman Problem; IX. Who Owns the Zebra?; 9. Constraint-Handling Techniques; X. Can You Tune to the Problem?; 10. Tuning the Algorithm to the Problem; XI. Can You Mate in Two Moves?; 11. Time-Varying Environments and Noise; XII. Day of the Week of January 1st; 12. Neural Networks; XIII. What Was the Length of the Rope?; 13. Fuzzy Systems; XIV. Everything Depends on Something Else; 14. Coevolutionary Systems; XV. Who's Taller?; 15. Multicriteria Decision-Making; XVI. Do You Like Simple Solutions?; 16. Hybrid Systems; 17. Summary; Appendix A: Probability and Statistics (basic concepts of probability; discrete and continuous random variables; descriptive statistics; limit theorems and inequalities; adding random variables; generating random numbers on a computer; estimation; statistical hypothesis testing; linear regression); Appendix B: Problems and Projects (trying some practical problems; reporting computational experiments with heuristic methods); References.

2,089 citations

Journal Article
TL;DR: In this article, the authors explore the effect of dimensionality on the nearest neighbor problem and show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point.
Abstract: We explore the effect of dimensionality on the nearest neighbor problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point. To provide a practical perspective, we present empirical results on both real and synthetic data sets that demonstrate that this effect can occur for as few as 10-15 dimensions. These results should not be interpreted to mean that high-dimensional indexing is never meaningful; we illustrate this point by identifying some high-dimensional workloads for which this effect does not occur. However, our results do emphasize that the methodology used almost universally in the database literature to evaluate high-dimensional indexing techniques is flawed, and should be modified. In particular, most such techniques proposed in the literature are not evaluated versus simple linear scan, and are evaluated over workloads for which nearest neighbor is not meaningful. Often, even the reported experiments, when analyzed carefully, show that linear scan would outperform the techniques being proposed on the workloads studied in high (10-15) dimensionality!
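
The concentration effect is easy to reproduce. The quick experiment below (an illustration of the phenomenon, not the paper's methodology) draws uniform random points in a unit cube and compares the farthest and nearest distances from the origin.

```python
import math
import random

def contrast(dim, n_points=1000):
    """Ratio of farthest to nearest distance from the origin for uniform
    random points; it shrinks toward 1 as dimensionality grows, which is
    the effect the paper describes."""
    dists = [math.sqrt(sum(random.random() ** 2 for _ in range(dim)))
             for _ in range(n_points)]
    return max(dists) / min(dists)

for d in (2, 10, 15, 100):
    print(d, round(contrast(d), 2))
# the ratio collapses toward 1 already around 10-15 dimensions
```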

1,992 citations