Book

Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques

TL;DR: The first edition of Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques was originally put together to offer a basic introduction to the various search and optimization techniques that students might need to use during their research, and this new edition continues this tradition.
Abstract: The first edition of Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques was originally put together to offer a basic introduction to the various search and optimization techniques that students might need to use during their research, and this new edition continues this tradition. Search Methodologies has been expanded and brought completely up to date, including new chapters covering scatter search, GRASP, and very large neighborhood search. The chapter authors are drawn from across Computer Science and Operations Research and include some of the world's leading authorities in their fields. The book provides useful guidelines for implementing the methods and frameworks described and offers valuable tutorials to students and researchers in the field.

"As I embarked on the pleasant journey of reading through the chapters of this book, I became convinced that this is one of the best sources of introductory material on the search methodologies topic to be found. The book's subtitle, Introductory Tutorials in Optimization and Decision Support Techniques, aptly describes its aim, and the editors and contributors to this volume have achieved this aim with remarkable success. The chapters in this book are exemplary in giving useful guidelines for implementing the methods and frameworks described." Fred Glover, Leeds School of Business, University of Colorado Boulder, USA

"[The book] aims to present a series of well written tutorials by the leading experts in their fields. Moreover, it does this by covering practically the whole possible range of topics in the discipline. It enables students and practitioners to study and appreciate the beauty and the power of some of the computational search techniques that are able to effectively navigate through search spaces that are sometimes inconceivably large. I am convinced that this second edition will build on the success of the first edition and that it will prove to be just as popular." Jacek Blazewicz, Institute of Computing Science, Poznan University of Technology, and Institute of Bioorganic Chemistry, Polish Academy of Sciences
Citations
Journal ArticleDOI
TL;DR: A critical discussion of the scientific literature on hyper-heuristics including their origin and intellectual roots, a detailed account of the main types of approaches, and an overview of some related areas are presented.
Abstract: Hyper-heuristics comprise a set of approaches that are motivated (at least in part) by the goal of automating the design of heuristic methods to solve hard computational search problems. An underlying strategic research challenge is to develop more generally applicable search methodologies. The term hyper-heuristic is relatively new; it was first used in 2000 to describe heuristics to choose heuristics in the context of combinatorial optimisation. However, the idea of automating the design of heuristics is not new; it can be traced back to the 1960s. The definition of hyper-heuristics has been recently extended to refer to a search method or learning mechanism for selecting or generating heuristics to solve computational search problems. Two main hyper-heuristic categories can be considered: heuristic selection and heuristic generation. The distinguishing feature of hyper-heuristics is that they operate on a search space of heuristics (or heuristic components) rather than directly on the search space of solutions to the underlying problem that is being addressed. This paper presents a critical discussion of the scientific literature on hyper-heuristics including their origin and intellectual roots, a detailed account of the main types of approaches, and an overview of some related areas. Current research trends and directions for future research are also discussed.
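The heuristic-selection branch described above can be illustrated with a small sketch. The bit-string objective, the three low-level move operators, the score-based choice function, and the accept-non-worsening rule below are illustrative assumptions, not taken from the paper; they are only meant to show a hyper-heuristic searching over a space of heuristics rather than directly over solutions.

```python
import random

# Toy problem: maximise the number of 1s in a bit string (illustrative only;
# a hyper-heuristic is problem-independent, so any objective could be used).
def fitness(solution):
    return sum(solution)

# Three hypothetical low-level heuristics that perturb a candidate solution.
def flip_one(s):
    s = s[:]
    s[random.randrange(len(s))] ^= 1
    return s

def flip_two(s):
    s = s[:]
    for i in random.sample(range(len(s)), 2):
        s[i] ^= 1
    return s

def shuffle_block(s):
    s = s[:]
    i, j = sorted(random.sample(range(len(s)), 2))
    block = s[i:j + 1]
    random.shuffle(block)
    s[i:j + 1] = block
    return s

def selection_hyper_heuristic(n_bits=50, iterations=2000):
    heuristics = [flip_one, flip_two, shuffle_block]
    scores = [1.0] * len(heuristics)          # simple reinforcement scores
    current = [random.randint(0, 1) for _ in range(n_bits)]
    best, best_f = current[:], fitness(current)

    for _ in range(iterations):
        # Heuristic selection: roulette wheel over the learned scores.
        h = random.choices(range(len(heuristics)), weights=scores)[0]
        candidate = heuristics[h](current)

        # Move acceptance: accept non-worsening moves only.
        if fitness(candidate) >= fitness(current):
            current = candidate
            scores[h] += 1.0                           # reward the chosen heuristic
        else:
            scores[h] = max(0.1, scores[h] * 0.9)      # mild penalty

        if fitness(current) > best_f:
            best, best_f = current[:], fitness(current)
    return best, best_f

if __name__ == "__main__":
    print(selection_hyper_heuristic()[1])
```

The point of the sketch is that learning and selection happen at the level of `heuristics` and `scores`; the solution representation underneath could be swapped for another problem without touching that layer.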

1,023 citations


Cites methods from "Search Methodologies: Introductory ..."

  • ...Genetic programming (Koza, 1992; Koza and Poli, 2005) is an evolutionary computation technique that evolves a population of computer programs, and is the most common methodology used in the literature to automatically generate heuristics....

    [...]


Journal ArticleDOI
TL;DR: A comprehensive review of the work done during 1968-2005 on the application of statistical and intelligent techniques to solve the bankruptcy prediction problem faced by banks and firms is presented.

978 citations

Journal ArticleDOI
TL;DR: Variable neighbourhood search (VNS) is a metaheuristic, or a framework for building heuristics, based upon systematic changes of neighbourhoods both in a descent phase, to find a local minimum, and in a perturbation phase, to emerge from the corresponding valley.
Abstract: Variable neighbourhood search (VNS) is a metaheuristic, or a framework for building heuristics, based upon systematic changes of neighbourhoods both in a descent phase, to find a local minimum, and in a perturbation phase, to emerge from the corresponding valley. It was first proposed in 1997 and has since then rapidly developed both in its methods and its applications. In the present paper, these two aspects are thoroughly reviewed and an extensive bibliography is provided. Moreover, one section is devoted to newcomers. It consists of steps for developing a heuristic for any particular problem. Those steps are common to the implementation of other metaheuristics.
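The basic VNS scheme the abstract describes (shake in the k-th neighbourhood, run a local search, recentre on improvement, otherwise enlarge the neighbourhood) can be written down compactly. The toy continuous objective, the neighbourhood radii, and the simple random-sampling local search below are illustrative assumptions rather than anything prescribed by the paper.

```python
import math
import random

# Toy multimodal objective to minimise (illustrative only).
def f(x):
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def shake(x, k):
    # k-th neighbourhood: perturb every coordinate within a radius that grows with k.
    return [xi + random.uniform(-0.2 * k, 0.2 * k) for xi in x]

def local_search(x, step=0.05, tries=50):
    # Simple improvement-based descent around x (stands in for any local search).
    best, best_f = x, f(x)
    for _ in range(tries):
        cand = [xi + random.uniform(-step, step) for xi in best]
        if f(cand) < best_f:
            best, best_f = cand, f(cand)
    return best, best_f

def basic_vns(dim=5, k_max=5, max_iter=200):
    x = [random.uniform(-5, 5) for _ in range(dim)]
    x, fx = local_search(x)
    for _ in range(max_iter):
        k = 1
        while k <= k_max:
            x1 = shake(x, k)              # perturbation phase
            x2, f2 = local_search(x1)     # descent phase
            if f2 < fx:                   # move: recentre and restart at k = 1
                x, fx, k = x2, f2, 1
            else:                         # otherwise try a larger neighbourhood
                k += 1
    return x, fx

if __name__ == "__main__":
    print(round(basic_vns()[1], 3))
```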

718 citations

Journal ArticleDOI
TL;DR: The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied and highlights gaps in the literature and avenues for further research.
Abstract: In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive automated and semiautomated solutions in situations typified by large complex problem spaces with multiple competing and conflicting objectives. This article provides a review and classification of literature on SBSE. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied and highlights gaps in the literature and avenues for further research.

711 citations

Proceedings ArticleDOI
23 May 2007
TL;DR: The paper briefly reviews widely used optimization techniques and the key ingredients required for their successful application to software engineering, providing an overview of existing results in eight software engineering application domains.
Abstract: This paper describes work on the application of optimization techniques in software engineering. These optimization techniques come from the operations research and metaheuristic computation research communities. The paper briefly reviews widely used optimization techniques and the key ingredients required for their successful application to software engineering, providing an overview of existing results in eight software engineering application domains. The paper also describes the benefits that are likely to accrue from the growing body of work in this area and provides a set of open problems, challenges and areas for future work.
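Two of the key ingredients such surveys usually emphasise are a solution representation and a fitness function. The hedged sketch below shows both for a hypothetical regression-test prioritisation task driven by a plain hill climber; the fault matrix, the simplified early-fault-detection measure, and the swap neighbourhood are invented for illustration and are not taken from the paper.

```python
import random

# Hypothetical fault matrix: COVERAGE[t][f] is True if test t reveals fault f.
COVERAGE = [
    [True,  False, False, True ],
    [False, True,  False, False],
    [False, False, True,  True ],
    [True,  True,  False, False],
]

def fitness(order):
    # Reward orderings that expose each fault as early as possible
    # (a simplified stand-in for measures such as APFD).
    n_tests, n_faults = len(order), len(COVERAGE[0])
    score = 0.0
    for fault in range(n_faults):
        for position, test in enumerate(order):
            if COVERAGE[test][fault]:
                score += (n_tests - position) / n_tests
                break
    return score

def hill_climb(iterations=500):
    order = list(range(len(COVERAGE)))   # representation: a permutation of test ids
    random.shuffle(order)
    best = fitness(order)
    for _ in range(iterations):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]       # neighbour: swap two tests
        new = fitness(order)
        if new >= best:
            best = new                                # keep the non-worsening swap
        else:
            order[i], order[j] = order[j], order[i]   # undo the worsening swap
    return order, best

if __name__ == "__main__":
    print(hill_climb())
```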

667 citations


Cites methods from "Search Methodologies: Introductory ..."

  • ...For more detail, the reader is referred to the recent survey of search methodologies edited by Burke and Kendall [17]....

    [...]

References
Journal ArticleDOI
TL;DR: The essay discusses the possibility that the immune system does not care about self and non-self, that its primary driving force is the need to detect and protect against danger, and that it does not do the job alone but receives positive and negative communications from an extended network of other bodily tissues.
Abstract: For many years immunologists have been well served by the viewpoint that the immune system's primary goal is to discriminate between self and non-self. I believe that it is time to change viewpoints and, in this essay, I discuss the possibility that the immune system does not care about self and non-self, that its primary driving force is the need to detect and protect against danger, and that it does not do the job alone, but receives positive and negative communications from an extended network of other bodily tissues.

4,825 citations


"Search Methodologies: Introductory ..." refers background in this paper

  • ...Technical overviews can be found in Matzinger (1994) and Matzinger (2001)....

    [...]

Journal ArticleDOI
TL;DR: Item-to-item collaborative filtering, as described in this paper, is a recommendation algorithm for e-commerce Web sites whose online computation scales independently of the number of customers and the number of items in the product catalog.
Abstract: Recommendation algorithms are best known for their use on e-commerce Web sites, where they use input about a customer's interests to generate a list of recommended items. Many applications use only the items that customers purchase and explicitly rate to represent their interests, but they can also use other attributes, including items viewed, demographic data, subject interests, and favorite artists. At Amazon.com, we use recommendation algorithms to personalize the online store for each customer. The store radically changes based on customer interests, showing programming titles to a software engineer and baby toys to a new mother. There are three common approaches to solving the recommendation problem: traditional collaborative filtering, cluster models, and search-based methods. Here, we compare these methods with our algorithm, which we call item-to-item collaborative filtering. Unlike traditional collaborative filtering, our algorithm's online computation scales independently of the number of customers and number of items in the product catalog. Our algorithm produces recommendations in real-time, scales to massive data sets, and generates high quality recommendations.
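A minimal sketch of the item-to-item idea follows: similarities are computed offline between items (each represented by the users who bought it), and the online step only looks up neighbours of the items a given customer already owns, which is why the cost does not grow with the number of customers. The purchase data, the set-based cosine similarity, and the scoring rule are illustrative assumptions; the production algorithm described in the paper is considerably more elaborate.

```python
import math
from collections import defaultdict

# Hypothetical purchase data: user -> set of purchased items.
PURCHASES = {
    "alice": {"book_a", "book_b", "toy_x"},
    "bob":   {"book_a", "book_c"},
    "carol": {"book_b", "toy_x", "toy_y"},
    "dave":  {"book_a", "book_b"},
}

def item_vectors(purchases):
    # Represent each item by the set of users who bought it.
    vectors = defaultdict(set)
    for user, items in purchases.items():
        for item in items:
            vectors[item].add(user)
    return vectors

def cosine(a, b):
    # Cosine similarity between two binary user vectors (sets of users).
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

def similar_items(purchases):
    # Offline step: build an item-to-item similarity table.
    vectors = item_vectors(purchases)
    table = defaultdict(list)
    for i in vectors:
        for j in vectors:
            if i != j:
                table[i].append((cosine(vectors[i], vectors[j]), j))
        table[i].sort(reverse=True)
    return table

def recommend(user, purchases, top_n=3):
    # Online step: aggregate the neighbours of everything the user already bought.
    table, owned = similar_items(purchases), purchases[user]
    scores = defaultdict(float)
    for item in owned:
        for sim, other in table[item]:
            if other not in owned:
                scores[other] += sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

if __name__ == "__main__":
    print(recommend("bob", PURCHASES))
```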

4,372 citations

Journal ArticleDOI
TL;DR: This special section includes descriptions of five recommender systems, in which people provide recommendations as inputs that the system aggregates and directs to appropriate recipients; some of the systems combine evaluations with content analysis.
Abstract: It is often necessary to make choices without sufficient personal experience of the alternatives. In everyday life, we rely on ... Recommender systems assist and augment this natural social process. In a typical recommender system people provide recommendations as inputs, which the system then aggregates and directs to appropriate recipients. In some cases the primary transformation is in the aggregation; in others the system's value lies in its ability to make good matches between the recommenders and those seeking recommendations. The developers of the first recommender system, Tapestry [1], coined the phrase "collaborative filtering" and several others have adopted it. We prefer the more general term "recommender system" for two reasons. First, recommenders may not explicitly collaborate with recipients, who may be unknown to each other. Second, recommendations may suggest particularly interesting items, in addition to indicating those that should be filtered out. This special section includes descriptions of five recommender systems. A sixth article analyzes incentives for provision of recommendations. Figure 1 places the systems in a technical design space defined by five dimensions. First, the contents of an evaluation can be anything from a single bit (recommended or not) to unstructured textual annotations. Second, recommendations may be entered explicitly, but several systems gather implicit evaluations: GroupLens monitors users' reading times; PHOAKS mines Usenet articles for mentions of URLs; and Siteseer mines personal bookmark lists. Third, recommendations may be anonymous, tagged with the source's identity, or tagged with a pseudonym. The fourth dimension, and one of the richest areas for exploration, is how to aggregate evaluations. GroupLens, PHOAKS, and Siteseer employ variants on weighted voting. Fab takes that one step further to combine evaluations with content analysis. ReferralWeb combines suggested links between people to form longer referral chains. Finally, the (perhaps aggregated) evaluations may be used in several ways: negative recommendations may be filtered out, the items may be sorted according to numeric evaluations, or evaluations may accompany items in a display. Figures 2 and 3 identify dimensions of the domain space: the kinds of items being recommended and the people among whom evaluations are shared. Consider, first, the domain of items. The sheer volume is an important variable: detailed textual reviews of restaurants or movies may be practical, but applying the same approach to thousands of daily Netnews messages would not. Ephemeral media such as netnews (most news servers throw away articles after one or two weeks) place a premium on gathering and distributing evaluations quickly, while evaluations for 19th century books can be gathered at a more leisurely pace. The last dimension describes the cost structure of choices people make about the items. Is it very costly to miss ...
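The "variants on weighted voting" mentioned for GroupLens, PHOAKS, and Siteseer can be illustrated with a small sketch in which the neighbours who rated an item vote on it, weighted by how closely they agree with the target user. The ratings, the crude agreement measure (standing in for a correlation coefficient), and the prediction rule are illustrative assumptions, not descriptions of any of those systems.

```python
# Hypothetical ratings: user -> {item: rating on a 1-5 scale}.
RATINGS = {
    "ann":  {"film_1": 5, "film_2": 3, "film_3": 4},
    "ben":  {"film_1": 4, "film_2": 2, "film_3": 5},
    "cara": {"film_1": 4, "film_2": 3},
}

def overlap_similarity(a, b):
    # Crude agreement score on co-rated items (stand-in for Pearson correlation).
    common = set(a) & set(b)
    if not common:
        return 0.0
    return sum(1 for i in common if abs(a[i] - b[i]) <= 1) / len(common)

def predict(user, item, ratings):
    # Weighted vote: neighbours who rated the item vote, weighted by similarity.
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or item not in theirs:
            continue
        w = overlap_similarity(ratings[user], theirs)
        num += w * theirs[item]
        den += w
    return num / den if den else None

if __name__ == "__main__":
    print(predict("cara", "film_3", RATINGS))
```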

3,993 citations


"Search Methodologies: Introductory ..." refers background in this paper

  • ...Commercial applications are usually called recommender systems (Resnick and Varian 1997)....

    [...]

Journal Article

2,999 citations


"Search Methodologies: Introductory ..." refers methods in this paper

  • ...It was first proposed by Jerne (1973) and formalised into a model by Farmer et al (1986)....

    [...]

  • ...Whilst there is more than one mechanism at work (see Farmer (1986), Kubi (2002) or Jerne (1973) for more details), the essential process is the matching of antigen and antibody, which leads to increased concentrations (proliferation) of more closely matched antibodies....

    [...]

Journal ArticleDOI
TL;DR: As discussed by the authors, the immune system is an organization of cells and molecules with specialized roles in defending against infection, and there are two fundamentally different types of responses to invading microbes: innate and adaptive.
Abstract: The immune system is an organization of cells and molecules with specialized roles in defending against infection. There are two fundamentally different types of responses to invading microbes. Innate (natural) responses occur to the same extent however many times the infectious agent is encountered, whereas acquired (adaptive) responses improve on repeated exposure to a given infection. The innate responses use phagocytic cells (neutrophils, monocytes, and macrophages), cells that release inflammatory mediators (basophils, mast cells, and eosinophils), and natural killer cells. The molecular components of innate responses include complement, acute-phase proteins, and cytokines such as the interferons.

1,542 citations