
Showing papers on "Heuristic published in 2007"


Journal ArticleDOI
TL;DR: A unified heuristic that is able to solve five different variants of the vehicle routing problem and has shown promising results for a large class of vehicle routing problems with backhauls, as demonstrated in Ropke and Pisinger.

1,282 citations


Journal ArticleDOI
TL;DR: This work presents a systematic method for geometric-programming-based distributed power control algorithms and shows that, in the high signal-to-interference-ratio (SIR) regime, these nonlinear and apparently difficult, nonconvex optimization problems can be transformed into convex optimization problems in the form of geometric programming.
Abstract: In wireless cellular or ad hoc networks where Quality of Service (QoS) is interference-limited, a variety of power control problems can be formulated as nonlinear optimization with a system-wide objective, e.g., maximizing the total system throughput or the worst user throughput, subject to QoS constraints from individual users, e.g., on data rate, delay, and outage probability. We show that in the high signal-to-interference-ratio (SIR) regime, these nonlinear and apparently difficult, nonconvex optimization problems can be transformed into convex optimization problems in the form of geometric programming; hence they can be very efficiently solved for global optimality even with a large number of users. In the medium to low SIR regime, some of these constrained nonlinear power control problems cannot be turned into tractable convex formulations, but a heuristic can be used to compute in most cases the optimal solution by solving a series of geometric programs through the approach of successive convex approximation. While efficient and robust algorithms have been extensively studied for centralized solutions of geometric programs, distributed algorithms have not been explored before. We present a systematic method of geometric-programming-based distributed algorithms for power control. These techniques for power control, together with their implications for admission control and pricing in wireless networks, are illustrated through several numerical examples.
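
A minimal sketch of the high-SIR geometric-programming formulation described above, written with cvxpy's GP mode; the gain matrix, noise power, and SIR target are illustrative assumptions, not data from the paper:

```python
# Power control as a geometric program (high-SIR regime), via cvxpy's GP mode.
# Gains G, noise sigma, and SIR target gamma are illustrative assumptions.
import cvxpy as cp
import numpy as np

n = 3                                   # number of transmitter/receiver pairs
G = np.array([[1.0, 0.1, 0.2],          # G[i, j]: gain from transmitter j
              [0.1, 1.0, 0.1],          # to receiver i (G[i, i] is the
              [0.3, 0.1, 1.0]])         # direct-link gain)
sigma = 0.05                            # receiver noise power
gamma = 2.0                             # common SIR target

p = cp.Variable(n, pos=True)            # transmit powers (positive for GP)

constraints = []
for i in range(n):
    # Interference at receiver i: a posynomial in the powers.
    interference = sum(G[i, j] * p[j] for j in range(n) if j != i) + sigma
    # SIR_i >= gamma  <=>  gamma * interference / (G_ii * p_i) <= 1.
    constraints.append(gamma * interference / (G[i, i] * p[i]) <= 1)

# Minimize total power subject to the SIR constraints; because this is a
# geometric program, it is solvable to global optimality even though it is
# nonconvex in the powers themselves.
problem = cp.Problem(cp.Minimize(cp.sum(p)), constraints)
problem.solve(gp=True)
print(p.value)
```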

906 citations


Journal ArticleDOI
TL;DR: An information-theoretic approach for estimating sequence conservation based on Jensen-Shannon divergence is introduced, along with a general heuristic that considers the estimated conservation of sequentially neighboring sites and improves the performance of all methods tested.
Abstract: Motivation: All residues in a protein are not equally important. Some are essential for the proper structure and function of the protein, whereas others can be readily replaced. Conservation analysis is one of the most widely used methods for predicting these functionally important residues in protein sequences. Results: We introduce an information-theoretic approach for estimating sequence conservation based on Jensen–Shannon divergence. We also develop a general heuristic that considers the estimated conservation of sequentially neighboring sites. In large-scale testing, we demonstrate that our combined approach outperforms previous conservation-based measures in identifying functionally important residues; in particular, it is significantly better than the commonly used Shannon entropy measure. We find that considering conservation at sequential neighbors improves the performance of all methods tested. Our analysis also reveals that many existing methods that attempt to incorporate the relationships between amino acids do not lead to better identification of functionally important sites. Finally, we find that while conservation is highly predictive in identifying catalytic sites and residues near bound ligands, it is much less effective in identifying residues in protein–protein interfaces. Availability: Data sets and code for all conservation measures evaluated are available at http://compbio.cs.princeton.edu/conservation/ Contact: mona@cs.princeton.edu Supplementary information: Supplementary data are available at Bioinformatics online.
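
A compact sketch of the core computation, assuming uniform weighting within the window; the paper's exact window weighting and pseudocount handling may differ:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between distributions p and q (base 2)."""
    def entropy(d):
        d = d[d > 0]
        return -np.sum(d * np.log2(d))
    m = 0.5 * (p + q)
    return entropy(m) - 0.5 * (entropy(p) + entropy(q))

def conservation_scores(column_dists, background, window=3, lam=0.5):
    """Score each alignment column by its JS divergence from a background
    amino-acid distribution, then blend in the mean score of sequentially
    neighboring columns (a simplified version of the window heuristic)."""
    base = np.array([js_divergence(c, background) for c in column_dists])
    scores = np.empty_like(base)
    for i in range(len(base)):
        lo, hi = max(0, i - window), min(len(base), i + window + 1)
        neighbors = np.concatenate([base[lo:i], base[i + 1:hi]])
        neighbor_mean = neighbors.mean() if neighbors.size else 0.0
        scores[i] = (1 - lam) * base[i] + lam * neighbor_mean
    return scores
```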

620 citations


Journal ArticleDOI
TL;DR: This survey examines the p-median problem, with the aim of providing an overview of advances in solving it using recent procedures based on metaheuristic rules.

397 citations


Journal ArticleDOI
TL;DR: It is shown that a simple additive probability model describing conflict can be mapped onto three different cognitive models: the pre-emptive conflict resolution model, the default-interventionist model, and the parallel-competitive model.
Abstract: In this paper, I show that the question of how dual process theories of reasoning and judgement account for conflict between System 1 (heuristic) and System 2 (analytic) processes needs to be explicated and addressed in future research work. I demonstrate that a simple additive probability model that describes such conflict can be mapped on to three different cognitive models. The pre-emptive conflict resolution model assumes that a decision is made at the outset as to whether a heuristic or analytic process will control the response. The parallel-competitive model assumes that each system operates in parallel to deliver a putative response, resulting sometimes in conflict that then needs to be resolved. Finally, the default-interventionist model involves the cueing of default responses by the heuristic system that may or may not be altered by subsequent intervention of the analytic system. A second, independent issue also emerges from this discussion. The superior performance of higher-ability participants…

314 citations


Book ChapterDOI
17 Sep 2007
TL;DR: The results suggest that control-flow analysis of many real process models is feasible without significant delay (less than a second) and could be used frequently during editing, which allows errors to be caught at the earliest possible time.
Abstract: We present a technique to enhance control-flow analysis of business process models. The technique considerably speeds up the analysis and improves the diagnostic information that is given to the user to fix control-flow errors. The technique consists of two parts: Firstly, the process model is decomposed into single-entry-single-exit (SESE) fragments, which are usually substantially smaller than the original process. This decomposition is done in linear time. Secondly, each fragment is analyzed in isolation using a fast heuristic that can analyze many of the fragments occurring in practice. Any remaining fragments that are not covered by the heuristic can then be analyzed using any known complete analysis technique. We used our technique in a case study with more than 340 real business processes modeled with the IBM WebSphere Business Modeler. The results suggest that control-flow analysis of many real process models is feasible without significant delay (less than a second). Therefore, control-flow analysis could be used frequently during editing, which allows errors to be caught at the earliest possible time.

306 citations


Journal ArticleDOI
TL;DR: An efficient variable neighborhood search heuristic is presented for the capacitated vehicle routing problem, which seeks least-cost routes for a fleet of identically capacitated vehicles servicing geographically scattered customers with known demands.

285 citations


Posted Content
TL;DR: In this article, a facility location model in which facilities may be subject to disruptions, causing customers to seek service from the remaining operating facilities, is analyzed; the optimal location patterns are seen to be strongly dependent on the probability of facility failure.
Abstract: In this paper we analyze a facility location model where facilities may be subject to disruptions, causing customers to seek service from the operating facilities. We generalize the classical p-Median problem on a network to explicitly include the failure probabilities, and analyze structural and algorithmic aspects of the resulting model. The optimal location patterns are seen to be strongly dependent on the probability of facility failure, with facilities becoming more centralized, or even co-located, as the failure probability grows. Several exact and heuristic solution approaches are developed, and extensive numerical computations are performed.
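
To make the role of the failure probability concrete, here is a hedged sketch of the expected service distance for a single customer when facilities fail independently; the paper's exact reliability model may differ in details such as failure correlation and assignment levels:

```python
def expected_distance(distances, q, penalty):
    """Expected service distance for one customer when every open facility
    fails independently with probability q. The customer uses the nearest
    operating facility (probability of falling back to the r-th closest is
    q**r * (1 - q)) and pays `penalty` if all facilities have failed.
    Illustrates the structure of reliability-based location models; the
    paper's exact formulation may differ."""
    expected, all_failed = 0.0, 1.0
    for dist in sorted(distances):
        expected += all_failed * (1 - q) * dist
        all_failed *= q          # probability all closer facilities failed
    return expected + all_failed * penalty

# Example: three facilities at distances 2, 5, 9; 10% failure probability.
print(expected_distance([5, 2, 9], q=0.1, penalty=100.0))
```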

259 citations


Journal ArticleDOI
TL;DR: A semi-automatic and efficient method for producing full polygonal models of range-scanned trees, which are initially represented as sparse point clouds; the full process can be completed within minutes.
Abstract: We present a semi-automatic and efficient method for producing full polygonal models of range-scanned trees, which are initially represented as sparse point clouds. First, a skeleton of the trunk and main branches of the tree is produced based on the scanned point clouds. Due to the unavoidable incompleteness of the point clouds produced by range scans of trees, steps are taken to synthesize additional branches to produce plausible support for the tree crown. Appropriate dimensions for each branch section are estimated using allometric theory. Using this information, a mesh is produced around the full skeleton. Finally, leaves are positioned, oriented and connected to nearby branches. Our process requires only minimal user interaction, and the full process including scanning and modeling can be completed within minutes.

259 citations


Journal ArticleDOI
TL;DR: The authors use statistical tools to model how the performance of heuristic rules varies as a function of environmental characteristics and highlight the trade-off between using linear models and heuristics.
Abstract: Much research has highlighted incoherent implications of judgmental heuristics, yet other findings have demonstrated high correspondence between predictions and outcomes. At the same time, judgment has been well modeled in the form of "as if" linear models. Accepting the probabilistic nature of the environment, the authors use statistical tools to model how the performance of heuristic rules varies as a function of environmental characteristics. They further characterize the human use of linear models by exploring effects of different levels of cognitive ability. They illustrate with both theoretical analyses and simulations. Results are linked to the empirical literature by a meta-analysis of lens model studies. Using the same tasks, the authors estimate the performance of both heuristics and humans where the latter are assumed to use linear models. Their results emphasize that judgmental accuracy depends on matching characteristics of rules and environments and highlight the trade-off between using linear models and heuristics. Whereas the former can be cognitively demanding, the latter are simple to implement. However, heuristics require knowledge to indicate when they should be used.

258 citations


Journal ArticleDOI
TL;DR: It is shown that the model provides an effective method to address uncertainties with little added cost in demand point coverage, and that the heuristics are able to generate good facility location solutions in an efficient manner.

Proceedings Article
22 Jul 2007
TL;DR: A novel way of constructing good patterns automatically from the specification of planning problem instances is presented, which allows a domain-independent planner to solve planning problems optimally in some very challenging domains, including a STRIPS formulation of the Sokoban puzzle.
Abstract: Heuristic search is a leading approach to domain-independent planning. For cost-optimal planning, however, existing admissible heuristics are generally too weak to effectively guide the search. Pattern database heuristics (PDBs), which are based on abstractions of the search space, are currently one of the most promising approaches to developing better admissible heuristics. The informedness of PDB heuristics depends crucially on the selection of appropriate abstractions (patterns). Although PDBs have been applied to many search problems, including planning, there are not many insights into how to select good patterns, even manually. What constitutes a good pattern depends on the problem domain, making the task even more difficult for domain-independent planning, where the process needs to be completely automatic and general. We present a novel way of constructing good patterns automatically from the specification of planning problem instances. We demonstrate that this allows a domain-independent planner to solve planning problems optimally in some very challenging domains, including a STRIPS formulation of the Sokoban puzzle.
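
As a concrete illustration of the pattern database idea (though not of the paper's automatic pattern selection), the following toy sketch builds a PDB for the 8-puzzle by breadth-first search over abstract states in which non-pattern tiles are wildcards:

```python
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)      # 0 is the blank
NEIGHBORS = {0: (1, 3), 1: (0, 2, 4), 2: (1, 5),
             3: (0, 4, 6), 4: (1, 3, 5, 7), 5: (2, 4, 8),
             6: (3, 7), 7: (4, 6, 8), 8: (5, 7)}

def abstract(state, pattern):
    """Project a state onto a pattern: keep pattern tiles and the blank,
    replace every other tile with a wildcard (-1)."""
    return tuple(t if t in pattern or t == 0 else -1 for t in state)

def build_pdb(pattern):
    """Breadth-first search from the abstract goal (puzzle moves are
    reversible). Every move costs 1, so abstract distances never exceed
    concrete distances and the lookup is an admissible heuristic."""
    start = abstract(GOAL, pattern)
    pdb, queue = {start: 0}, deque([start])
    while queue:
        s = queue.popleft()
        blank = s.index(0)
        for nb in NEIGHBORS[blank]:
            t = list(s)
            t[blank], t[nb] = t[nb], t[blank]   # slide a tile into the blank
            t = tuple(t)
            if t not in pdb:
                pdb[t] = pdb[s] + 1
                queue.append(t)
    return pdb

pdb = build_pdb(pattern={1, 2, 3})
h = lambda state: pdb[abstract(state, {1, 2, 3})]
print(h((2, 1, 3, 4, 5, 6, 7, 8, 0)))   # admissible heuristic value
```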

Proceedings Article
22 Jul 2007
TL;DR: In this paper, a local search approach for algorithm configuration is presented, which can be used for minimising run-time in decision problems or for maximising solution quality in optimisation problems, with no limitation on the number of parameters.
Abstract: The determination of appropriate values for free algorithm parameters is a challenging and tedious task in the design of effective algorithms for hard problems. Such parameters include categorical choices (e.g., neighborhood structure in local search or variable/value ordering heuristics in tree search), as well as numerical parameters (e.g., noise or restart timing). In practice, tuning of these parameters is largely carried out manually by applying rules of thumb and crude heuristics, while more principled approaches are only rarely used. In this paper, we present a local search approach for algorithm configuration and prove its convergence to the globally optimal parameter configuration. Our approach is very versatile: it can, e.g., be used for minimising run-time in decision problems or for maximising solution quality in optimisation problems. It further applies to arbitrary algorithms, including heuristic tree search and local search algorithms, with no limitation on the number of parameters. Experiments in four algorithm configuration scenarios demonstrate that our automatically determined parameter settings always outperform the algorithm defaults, sometimes by several orders of magnitude. Our approach also shows better performance and greater flexibility than the recent CALIBRA system. Our ParamILS code, along with instructions on how to use it for tuning your own algorithms, is available on-line at http://www.cs.ubc.ca/labs/beta/Projects/ParamILS.
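
A schematic sketch of iterated local search over parameter configurations, in the spirit of the approach above; this is not the actual ParamILS code (which is available at the URL in the abstract), and `evaluate` stands in for measuring the target algorithm's cost (e.g., mean runtime over benchmark instances) under a configuration:

```python
import random

def one_exchange_neighbors(config, param_space):
    """All configurations differing from `config` in exactly one parameter."""
    for param, values in param_space.items():
        for v in values:
            if v != config[param]:
                yield {**config, param: v}

def local_search(config, evaluate, param_space):
    """First-improvement local search in the one-exchange neighborhood."""
    cost, improved = evaluate(config), True
    while improved:
        improved = False
        for nb in one_exchange_neighbors(config, param_space):
            c = evaluate(nb)
            if c < cost:
                config, cost, improved = nb, c, True
                break
    return config, cost

def iterated_local_search(evaluate, param_space, iterations=50, strength=3):
    config = {p: random.choice(vs) for p, vs in param_space.items()}
    best, best_cost = local_search(config, evaluate, param_space)
    for _ in range(iterations):
        perturbed = dict(best)          # perturb a few parameters at random
        for p in random.sample(list(param_space), k=min(strength, len(param_space))):
            perturbed[p] = random.choice(param_space[p])
        cand, cand_cost = local_search(perturbed, evaluate, param_space)
        if cand_cost < best_cost:       # accept only improving configurations
            best, best_cost = cand, cand_cost
    return best, best_cost
```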

Journal ArticleDOI
TL;DR: This paper presents a genetic algorithm (GA) for solving the Dial-a-Ride problem, based on the classical cluster-first, route-second approach, where it alternates between assigning customers to vehicles using a GA and solving independent routing problems for the vehicles using a routing heuristic.
Abstract: In the Dial-a-Ride problem (DARP), customers request transportation from an operator. A request consists of a specified pickup location and destination location along with a desired departure or arrival time and capacity demand. The aim of DARP is to minimize transportation cost while satisfying customer service level constraints (Quality of Service). In this paper, we present a genetic algorithm (GA) for solving the DARP. The algorithm is based on the classical cluster-first, route-second approach, where it alternates between assigning customers to vehicles using a GA and solving independent routing problems for the vehicles using a routing heuristic. The algorithm is implemented in Java and tested on publicly available data sets. The new solution method has achieved solutions comparable with the current state-of-the-art methods.
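
A hedged skeleton of the cluster-first, route-second decoding: a chromosome assigns each customer to a vehicle, and each vehicle's customers are routed with a simple nearest-neighbor heuristic standing in for the paper's routing component (time windows and capacity checks omitted):

```python
import math

def route_cost(depot, stops):
    """Nearest-neighbor route through `stops`, returning to the depot;
    a stand-in for the paper's routing heuristic."""
    cost, pos, remaining = 0.0, depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: math.dist(pos, s))
        cost += math.dist(pos, nxt)
        pos = nxt
        remaining.remove(nxt)
    return cost + math.dist(pos, depot)

def fitness(chromosome, customers, depot, n_vehicles):
    """Decode a chromosome (customer i -> vehicle chromosome[i]) into
    independent per-vehicle routing problems and sum the route costs;
    GA crossover and mutation would operate on the assignment vector."""
    total = 0.0
    for v in range(n_vehicles):
        stops = [customers[i] for i, g in enumerate(chromosome) if g == v]
        if stops:
            total += route_cost(depot, stops)
    return total

# Example: four customers, two vehicles.
print(fitness([0, 0, 1, 1], [(1, 1), (2, 0), (5, 5), (6, 4)],
              depot=(0, 0), n_vehicles=2))
```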

Journal ArticleDOI
01 Mar 2007
TL;DR: The integrated system includes a heuristic managerial decision rule for different scenarios of predictive and corrective cost compositions and can be applied in various industries and to different kinds of equipment that possess well-defined degradation characteristics.
Abstract: This paper develops an integrated neural-network-based decision support system for predictive maintenance of rotational equipment. The integrated system is platform-independent and is aimed at minimizing expected cost per unit operational time. The proposed system consists of three components. The first component develops a vibration-based degradation database through condition monitoring of rolling element bearings. In the second component, an artificial neural network model is developed to estimate the life percentile and failure times of roller bearings. This is then used to construct a marginal distribution. The third component consists of the construction of a cost matrix and probabilistic replacement model that optimizes the expected cost per unit time. Furthermore, the integrated system consists of a heuristic managerial decision rule for different scenarios of predictive and corrective cost compositions. Finally, the proposed system can be applied in various industries and different kinds of equipment that possess well-defined degradation characteristics.
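
The replacement-policy component can be illustrated with the standard age-replacement expression for expected cost per unit operational time; the Weibull reliability below is an assumed stand-in for the failure-time distribution that the neural network would estimate:

```python
import numpy as np

def cost_per_unit_time(T, c_prev, c_fail, reliability, n_grid=1000):
    """Expected cost per unit operational time of replacing at age T:
    (c_prev * R(T) + c_fail * (1 - R(T))) / E[min(failure time, T)],
    where E[min(X, T)] is the integral of R(t) from 0 to T."""
    t = np.linspace(0.0, T, n_grid)
    expected_cycle = np.trapz(reliability(t), t)
    return (c_prev * reliability(T) + c_fail * (1 - reliability(T))) / expected_cycle

# Weibull reliability as a stand-in for the network's failure-time estimate.
R = lambda t: np.exp(-(np.asarray(t) / 100.0) ** 2.0)
ages = np.linspace(10, 200, 96)
costs = [cost_per_unit_time(T, c_prev=1.0, c_fail=10.0, reliability=R) for T in ages]
print(ages[int(np.argmin(costs))])   # cost-minimizing replacement age
```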

Journal ArticleDOI
TL;DR: It is illustrated that the GA outperforms all state-of-the-art heuristics and that the DBGA further improves the performance of the GA.
Abstract: In the last few decades, the resource-constrained project-scheduling problem has become a popular problem type in operations research. However, due to its strongly NP-hard status, the effectiveness of exact optimisation procedures is restricted to relatively small instances. In this paper, we present a new genetic algorithm (GA) for this problem that is able to provide near-optimal heuristic solutions. This GA procedure has been extended by a so-called decomposition-based genetic algorithm (DBGA) that iteratively solves subparts of the project. We present computational experiments on two data sets. The first benchmark set is used to illustrate the performance of both the GA and the DBGA. The second set is used to compare the results with current state-of-the-art heuristics and to show that the procedure is capable of producing consistently good results for challenging problem instances. We illustrate that the GA outperforms all state-of-the-art heuristics and that the DBGA further improves the performance of the GA.
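
For context, GA approaches to this problem typically decode an activity list with a schedule-generation scheme; the following sketch shows the standard serial scheme, not necessarily the paper's exact decoder:

```python
def serial_sgs(order, durations, preds, demand, capacity, horizon=10_000):
    """Serial schedule-generation scheme: scan activities in a
    precedence-feasible `order` and start each at the earliest
    precedence- and resource-feasible time. `demand[a][r]` is
    activity a's per-period use of renewable resource r."""
    usage = [[0] * len(capacity) for _ in range(horizon)]
    start, finish = {}, {}
    for a in order:
        t = max((finish[p] for p in preds[a]), default=0)  # precedence-feasible
        while True:
            window = range(t, t + durations[a])
            if all(usage[u][r] + demand[a][r] <= capacity[r]
                   for u in window for r in range(len(capacity))):
                break
            t += 1                                         # shift right, retry
        for u in range(t, t + durations[a]):
            for r in range(len(capacity)):
                usage[u][r] += demand[a][r]
        start[a], finish[a] = t, t + durations[a]
    return start, max(finish.values(), default=0)          # schedule, makespan

# Toy instance: 0 -> {1, 2} -> 3, one renewable resource of capacity 2.
durations = {0: 0, 1: 3, 2: 2, 3: 0}
preds = {0: [], 1: [0], 2: [0], 3: [1, 2]}
demand = {0: [0], 1: [2], 2: [1], 3: [0]}
print(serial_sgs([0, 1, 2, 3], durations, preds, demand, capacity=[2]))
```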

Journal ArticleDOI
TL;DR: The heuristic developed herein integrates the elements of randomizing the selection of priority rules, penalizing the worst columns when the search space is highly condensed, and defining the core problem to speed up the algorithm.

Journal ArticleDOI
TL;DR: A heuristic method is suggested that guides the selection of bankruptcy predictors based on the correlations and partial correlations among variables; it is found to perform well in a 10-fold validation analysis.

Journal ArticleDOI
01 Aug 2007
TL;DR: This paper proposes the adoption of a multivariate heuristic function that can be integrated with univariate fuzzy time-series models to form multivariate models, improving forecasting results while avoiding the complicated computations that handling multiple variables would otherwise require.
Abstract: Fuzzy time-series models have been widely applied due to their ability to handle nonlinear data directly and because no rigid assumptions for the data are needed. In addition, many such models have been shown to provide better forecasting results than their conventional counterparts. However, since most of these models require complicated matrix computations, this paper proposes the adoption of a multivariate heuristic function that can be integrated with univariate fuzzy time-series models into multivariate models. Such a multivariate heuristic function can easily be extended and integrated with various univariate models. Furthermore, the integrated model can handle multiple variables to improve forecasting results and, at the same time, avoid complicated computations due to the inclusion of multiple variables.

Journal ArticleDOI
TL;DR: This paper extends the Fischetti-Glover-Lodi approach in two main directions, namely, handling MIP problems with both binary and general-integer variables as effectively as possible, and exploiting the feasibility-pump (FP) information to drive a subsequent enumeration phase.

Journal ArticleDOI
TL;DR: This work considers the problem of minimizing maximum lateness on parallel identical batch processing machines with dynamic job arrivals, proposing a family of iterative improvement heuristics based on previous work by Potts and Uzsoy and combining them with a genetic algorithm based on the random-keys encoding of Bean.

Journal ArticleDOI
TL;DR: The experimental results show that, among the proposed algorithms, one algorithm that takes into account both the residual energy and the volume of data at each sensor node significantly outperforms the others.
Abstract: Energy-constrained sensor networks have been deployed widely for monitoring and surveillance purposes. Data gathering in such networks is often a prevalent operation. Since sensors have significant power constraints (battery life), energy-efficient methods must be employed for data gathering to prolong network lifetime. We consider an online data gathering problem in sensor networks, which is stated as follows: assume that there is a sequence of data gathering queries, which arrive one by one. To respond to each query as it arrives, the system builds a routing tree for it. Within the tree, the volume of the data transmitted by each internal node depends on not only the volume of sensed data by the node itself, but also the volume of data received from its children. The objective is to maximize the network lifetime without any knowledge of future query arrivals and generation rates. In other words, the objective is to maximize the number of data gathering queries answered until the first node in the network fails. For the problem of concern, in this paper, we first present a generic cost model of energy consumption for data gathering queries if a routing tree is used for the query evaluation. We then show the problem to be NP-complete and propose several heuristic algorithms for it. We finally conduct experiments by simulation to evaluate the performance of the proposed algorithms in terms of network lifetime delivered. The experimental results show that, among the proposed algorithms, one algorithm that takes into account both the residual energy and the volume of data at each sensor node significantly outperforms the others.
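
As one plausible instantiation of that winning heuristic, the sketch below grows a routing tree greedily, preferring parents with high residual energy relative to the data volume they already relay; the paper's actual metric may differ:

```python
def build_routing_tree(sink, nodes, links, energy, volume):
    """Greedy routing-tree construction illustrating the energy/volume
    trade-off: repeatedly attach a frontier node to the in-tree parent
    with the highest residual energy per unit of data already relayed.
    `links` maps each node to its neighbors; the score is an illustrative
    assumption, not the paper's exact metric."""
    parent = {sink: None}
    relayed = {n: volume[n] for n in nodes}     # data each node transmits
    score = lambda u: energy[u] / max(relayed[u], 1e-9)
    while len(parent) < len(nodes):
        candidates = [(score(u), u, v)
                      for u in parent for v in links[u] if v not in parent]
        if not candidates:
            break                               # remaining nodes unreachable
        _, u, v = max(candidates)               # best parent/child attachment
        parent[v] = u
        w = u                                   # v's data is relayed by every
        while w is not None:                    # ancestor on the path to sink
            relayed[w] += volume[v]
            w = parent[w]
    return parent
```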

Journal ArticleDOI
TL;DR: A simple construction heuristic is developed to expand the solution of the tree design problem by adding road segments; this provides carriers with routing choices, which usually increase risks but reduce costs.

Journal ArticleDOI
TL;DR: The proposed method for determining the predominant sense of a word automatically from raw text does not work as well for verbs and adverbs as for nouns and adjectives, but produces more accurate predominant sense information than the widely used SemCor corpus for nouns with low coverage in that corpus.
Abstract: There has been a great deal of recent research into word sense disambiguation, particularly since the inception of the Senseval evaluation exercises. Because a word often has more than one meaning, resolving word sense ambiguity could benefit applications that need some level of semantic interpretation of language input. A major problem is that the accuracy of word sense disambiguation systems is strongly dependent on the quantity of manually sense-tagged data available, and even the best systems, when tagging every word token in a document, perform little better than a simple heuristic that guesses the first, or predominant, sense of a word in all contexts. The success of this heuristic is due to the skewed nature of word sense distributions. Data for the heuristic can come from either dictionaries or a sample of sense-tagged data. However, there is a limited supply of the latter, and the sense distributions and predominant sense of a word can depend on the domain or source of a document. (The first sense of “star” for example would be different in the popular press and scientific journals). In this article, we expand on a previously proposed method for determining the predominant sense of a word automatically from raw text. We look at a number of different data sources and parameterizations of the method, using evaluation results and error analyses to identify where the method performs well and also where it does not. In particular, we find that the method does not work as well for verbs and adverbs as nouns and adjectives, but produces more accurate predominant sense information than the widely used SemCor corpus for nouns with low coverage in that corpus. We further show that the method is able to adapt successfully to domains when using domain specific corpora as input and where the input can either be hand-labeled for domain or automatically classified.
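
A heavily hedged skeleton of the prevalence-ranking idea behind this line of work: distributionally similar neighbors of a word vote for its senses, weighted by semantic similarity. Both helper functions are hypothetical placeholders, not APIs from the paper:

```python
def predominant_sense(word, senses, distributional_neighbors, sense_similarity):
    """Rank `senses` of `word` by prevalence. Each distributionally similar
    neighbor votes for the senses it is semantically closest to, with its
    vote normalized across the word's senses.
    `distributional_neighbors(word)` -> [(neighbor, dist_sim), ...] and
    `sense_similarity(sense, neighbor)` are hypothetical placeholders for
    a thesaurus acquired from raw text and a WordNet-style similarity."""
    scores = {s: 0.0 for s in senses}
    for neighbor, dist_sim in distributional_neighbors(word):
        sims = {s: sense_similarity(s, neighbor) for s in senses}
        norm = sum(sims.values())
        if norm == 0:
            continue                    # this neighbor tells us nothing
        for s in senses:
            scores[s] += dist_sim * sims[s] / norm
    return max(scores, key=scores.get), scores
```

Because the neighbors are harvested from raw text, running the same ranking over a domain-specific corpus yields domain-specific predominant senses, which is the adaptation behavior the abstract reports.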

Journal ArticleDOI
Edward Rothberg1
TL;DR: This paper describes an evolutionary approach to improving solutions to mixed integer programming (MIP) models, and proposes coarse-grained approaches to mutating and combining MIP solutions, both built within a large-neighborhood search framework.
Abstract: Evolutionary algorithms adopt a natural-selection analogy, exploiting concepts such as population, combination, mutation, and selection to explore a diverse space of possible solutions to combinatorial optimization problems while, at the same time, retaining desirable properties from known solutions. This paper describes an evolutionary approach to improving solutions to mixed integer programming (MIP) models. We propose coarse-grained approaches to mutating and combining MIP solutions, both built within a large-neighborhood search framework. These techniques are then integrated within a MIP branch-and-bound framework. The resulting solution-polishing heuristic often finds significantly better feasible solutions to very difficult MIP models than do available alternatives. In contrast to most evolutionary algorithms, our polishing heuristic is domain-independent, requiring no structural information about the underlying combinatorial problem, above and beyond the information contained in the original MIP formulation.
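
The combination step can be sketched as fixing the integer variables on which two parent solutions agree and re-optimizing the rest; `solve_subproblem` and `model.integer_variables` are hypothetical hooks, not a specific solver's API:

```python
def combine(model, parent_a, parent_b, solve_subproblem, time_limit=30):
    """Coarse-grained combination of two MIP solutions, in the spirit of
    the polishing heuristic described above: fix every integer variable on
    which the parents agree and let the solver explore the remaining (much
    smaller) neighborhood. `solve_subproblem` is a hypothetical hook for
    any MIP solver that accepts variable fixings."""
    fixings = {var: parent_a[var]
               for var in model.integer_variables
               if parent_a[var] == parent_b[var]}
    # The sub-MIP keeps the original objective and constraints; only the
    # disagreeing variables remain free, so it usually solves quickly and
    # any solution it returns is feasible for the original model.
    return solve_subproblem(model, fixings, time_limit=time_limit)
```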

Journal ArticleDOI
TL;DR: Order acceptance decisions are examined when capacity is limited, customers receive a discount for late delivery, and early delivery is neither penalized nor rewarded; a variety of fast and high-quality heuristics based on this approach are developed.

Journal ArticleDOI
TL;DR: This work addresses the two-stage assembly flowshop scheduling problem with respect to the maximum lateness criterion, where setup times are treated as separate from processing times, and proposes a self-adaptive differential evolution heuristic that performs as well as particle swarm optimization in terms of the average error.

Journal ArticleDOI
13 Feb 2007-Chaos
TL;DR: In this article, the problem of choosing all embedding parameters is viewed as one and the same problem, addressable using a single statistical test formulated directly from the reconstruction theorems, which allows for varying time delays appropriate to the data and simultaneously helps decide on the embedding dimension.
Abstract: In the analysis of complex, nonlinear time series, scientists in a variety of disciplines have relied on a time delayed embedding of their data, i.e., attractor reconstruction. The process has focused primarily on intuitive, heuristic, and empirical arguments for selection of the key embedding parameters, delay and embedding dimension. This approach has left several longstanding, but common problems unresolved in which the standard approaches produce inferior results or give no guidance at all. We view the current reconstruction process as unnecessarily broken into separate problems. We propose an alternative approach that views the problem of choosing all embedding parameters as being one and the same problem addressable using a single statistical test formulated directly from the reconstruction theorems. This allows for varying time delays appropriate to the data and simultaneously helps decide on embedding dimension. A second new statistic, undersampling, acts as a check against overly long time delays and overly large embedding dimension. Our approach is more flexible than those currently used, but is more directly connected with the mathematical requirements of embedding. In addition, the statistics developed guide the user by allowing optimization and warning when embedding parameters are chosen beyond what the data can support. We demonstrate our approach on uni- and multivariate data, data possessing multiple time scales, and chaotic data. This unified approach resolves all the main issues in attractor reconstruction.
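
For reference, the delay-embedding construction that the paper's statistics parameterize is straightforward; the fixed `dim` and `tau` below are placeholders for values the proposed test would select:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: map a scalar series x(t) to vectors
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]. Choosing `dim` and
    `tau` is precisely what the paper's unified statistical test addresses;
    here they are fixed placeholders."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this dim/tau")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Example: reconstruct a 3-D trajectory from a scalar sine observation.
t = np.linspace(0, 20 * np.pi, 5000)
points = delay_embed(np.sin(t), dim=3, tau=25)
print(points.shape)   # (4950, 3)
```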

Journal ArticleDOI
TL;DR: By applying the derived upper bound on the number of hubs, the proposed heuristic is capable of obtaining optimal solutions for all small-scale problems very efficiently, and it outperforms a genetic algorithm and a simulated annealing method in solving the USAHLP.
Abstract: The uncapacitated single allocation hub location problem (USAHLP), with the hub-and-spoke network structure, is a decision problem in regard to the number of hubs and location–allocation. In a pure hub-and-spoke network, all hubs, which act as switching points for internodal flows, are interconnected and none of the non-hubs (i.e., spokes) are directly connected. The key factors for designing a successful hub-and-spoke network are to determine the optimal number of hubs, to properly locate hubs, and to allocate the non-hubs to the hubs. In this paper two approaches to determine the upper bound for the number of hubs along with a hybrid heuristic based on the simulated annealing method, tabu list, and improvement procedures are proposed to resolve the USAHLP. Computational experiences indicate that by applying the derived upper bound for the number of hubs the proposed heuristic is capable of obtaining optimal solutions for all small-scale problems very efficiently. Computational results also demonstrate that the proposed hybrid heuristic outperforms a genetic algorithm and a simulated annealing method in solving USAHLP.

Journal ArticleDOI
TL;DR: This work considers a multi-project scheduling problem, where each project is composed of a set of activities, with precedence relations, requiring specific amounts of local and shared resources, and provides a dynamic programming formulation and heuristic algorithms for both the combinatorial auction and the bidding process.
Abstract: We consider a multi-project scheduling problem, where each project is composed of a set of activities, with precedence relations, requiring specific amounts of local and shared (among projects) resources. The aim is to complete all the project activities, satisfying precedence and resource constraints, and minimizing each project schedule length. The decision making process is supposed to be decentralized, with as many local decision makers as the projects. A multi-agent system model, and an iterative combinatorial auction mechanism for the agent coordination are proposed. We provide a dynamic programming formulation for the combinatorial auction problem, and heuristic algorithms for both the combinatorial auction and the bidding process. An experimental analysis on the whole multi-agent system model is discussed.