
Showing papers by "Rong Qu published in 2011"


Journal ArticleDOI
TL;DR: This paper presents mathematical models which cover specific aspects of the personnel scheduling literature and addresses complexity issues by identifying polynomially solvable and NP-hard special cases.

166 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed EMOSA algorithm with variable neighbourhoods is able to find high-quality non-dominated solutions for the problems tested.
Abstract: This paper presents the investigation of an evolutionary multi-objective simulated annealing (EMOSA) algorithm with variable neighbourhoods to solve multi-objective multicast routing problems in telecommunications. The hybrid algorithm aims to carry out a more flexible and adaptive exploration of the complex search space by using features of variable neighbourhood search to find more non-dominated solutions in the Pareto front. Different neighbourhood structures have been designed with regard to the set of objectives, aiming to drive the search towards optimising all objectives simultaneously. A large number of simulations have been carried out on benchmark instances and random networks with real-world features including cost, delay and link utilisation. Experimental results demonstrate that the proposed EMOSA algorithm with variable neighbourhoods is able to find high-quality non-dominated solutions for the problems tested. In particular, the neighbourhood structures that are specifically designed for each objective significantly improve the performance of the proposed algorithm compared with variants that use a single neighbourhood.
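The variable-neighbourhood idea can be illustrated with a small sketch: each objective gets its own move operator, and an annealing step picks one at random and applies the usual acceptance rule to the objective that move targets. This is a minimal illustration, with toy objectives and moves, not the paper's actual neighbourhood structures or acceptance criterion.

```python
import math
import random

# Toy objectives over a "route" of link weights (illustrative only).
def cost(route):         # objective 1: total link cost
    return sum(route)

def delay(route):        # objective 2: end-to-end delay (bottleneck link)
    return max(route)

# One neighbourhood per objective: each move perturbs the solution in a
# way aimed at improving that particular objective.
def reduce_cost_move(route):
    r = route[:]
    i = r.index(max(r))
    r[i] = max(1, r[i] - 1)   # shave the most expensive link
    return r

def reduce_delay_move(route):
    r = route[:]
    i = r.index(max(r))
    r[i] = max(1, r[i] - 2)   # cut the bottleneck link harder
    return r

NEIGHBOURHOODS = [reduce_cost_move, reduce_delay_move]
OBJECTIVES = [cost, delay]

def anneal_step(route, temperature):
    """Pick an objective-specific move at random; accept by the usual
    simulated-annealing rule on the objective that the move targets."""
    k = random.randrange(len(NEIGHBOURHOODS))
    candidate = NEIGHBOURHOODS[k](route)
    delta = OBJECTIVES[k](candidate) - OBJECTIVES[k](route)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return candidate
    return route

random.seed(0)
route = [5, 9, 3, 7]
temperature = 10.0
for _ in range(100):
    route = anneal_step(route, temperature)
    temperature *= 0.95
print(cost(route), delay(route))
```

In a full EMOSA, an archive of non-dominated solutions would be maintained alongside this step; the sketch only shows how per-objective neighbourhoods plug into the annealing loop.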

33 citations


Journal ArticleDOI
TL;DR: Two fundamentally different data mining techniques are investigated, namely artificial neural networks and binary logistic regression, which are able to find global patterns hidden in large data sets and achieve the goal of appropriately classifying the data.
Abstract: A hyper-heuristic often represents a heuristic search method that operates over a space of heuristic rules. It can be thought of as a high-level search methodology for choosing lower-level heuristics. Nearly 200 papers on hyper-heuristics have recently appeared in the literature. A common theme in this body of literature is an attempt to solve the problems at hand in the following way: at each decision point, first employ the chosen heuristic(s) to generate a solution, then calculate the objective value of the solution by taking into account all the constraints involved. However, empirical studies from our previous research have revealed that, under many circumstances, there is no need to carry out this costly two-stage determination and evaluation at all times. This is because many problems in the real world are highly constrained, with the characteristic that the regions of feasible solutions are rather scattered and small. Motivated by this observation, and with the aim of making the hyper-heuristic search more efficient and more effective, this paper investigates two fundamentally different data mining techniques, namely artificial neural networks and binary logistic regression. By learning from examples, these techniques are able to find global patterns hidden in large data sets and achieve the goal of appropriately classifying the data. With the trained classification rules or estimated parameters, the performance (i.e. the worth of acceptance or rejection) of a resulting solution during the hyper-heuristic search can be predicted without the need to undertake the computationally expensive two-stage process of determination and calculation. We evaluate our approaches on the solutions (i.e. the sequences of heuristic rules) generated by a graph-based hyper-heuristic proposed for exam timetabling problems. Time complexity analysis demonstrates that the neural network and the logistic regression method can speed up the search significantly. We believe that our work sheds light on the development of more advanced knowledge-based decision support systems.
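The core idea, replacing the costly construct-then-evaluate stage with a trained classifier's prediction, can be sketched as follows. The features, weights and threshold here are hypothetical placeholders, not the parameters learned in the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Assume a binary logistic model trained offline on examples of
# (heuristic-sequence features, accepted-or-rejected). The coefficients
# below are illustrative, not learned values.
WEIGHTS = [-0.8, 1.5]
BIAS = 0.2

def predict_acceptance(features):
    """Estimated probability that the solution built from this heuristic
    sequence would turn out to be worth accepting."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return sigmoid(z)

def expensive_two_stage_evaluation(sequence):
    # Placeholder for the costly stage: build the full solution, then
    # evaluate it against all constraints.
    ...

def hyper_heuristic_step(sequence, features, threshold=0.5):
    # Only pay for the full construction and evaluation when the
    # classifier is optimistic about the sequence.
    if predict_acceptance(features) >= threshold:
        return expensive_two_stage_evaluation(sequence)
    return None   # rejected cheaply, without constructing the solution

print(round(predict_acceptance([0.4, 0.7]), 3))
```

A neural network would slot into `predict_acceptance` the same way; the point is that a cheap forward pass filters sequences before any expensive evaluation is attempted.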

26 citations


Journal ArticleDOI
TL;DR: Investigation of the case example from a different perspective, for the supply of asphalt from a distribution centre to multiple work locations, gave a broader picture of the complexity and challenges for the improvement of road maintenance processes.
Abstract: There has been limited collaboration between researchers in human factors and operational research disciplines, particularly in relation to work in complex, distributed systems. This study aimed to investigate work at the interface between human factors and operational research in the case example of road resurfacing work. Descriptive material on the factors affecting performance in road maintenance work was collected with support from a range of human factors-based methods and was used to inform operational research analyses. Investigation of the case example from a different perspective, for the supply of asphalt from a distribution centre to multiple work locations, gave a broader picture of the complexity and challenges for the improvement of road maintenance processes. Factors affecting performance in the road maintenance context have been assessed for their potential for further investigation using an integrated human factors and operational research approach. Relative strengths of the disciplines a...

20 citations


Book ChapterDOI
27 Apr 2011
TL;DR: This paper studies the problem of minimizing the number of coding operations required while meeting the end-to-end delay constraint in network coding based multicast, and develops a population based incremental learning algorithm in which a group of best-so-far individuals is maintained and used to update the probability vector, enhancing the global search capability of the algorithm.
Abstract: In network coding based multicast, coding operations are expected to be minimized, as they not only incur additional computational cost at the corresponding nodes in the network but also increase data transmission delay. On the other hand, the delay constraint must be considered, particularly in delay-sensitive applications, e.g. video conferencing. In this paper, we study the problem of minimizing the number of coding operations required while meeting the end-to-end delay constraint in network coding based multicast. A population based incremental learning (PBIL) algorithm is developed in which a group of best-so-far individuals, rather than a single one, is maintained and used to update the probability vector, which enhances the global search capability of the algorithm. Simulation results demonstrate the effectiveness of our PBIL.
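The update rule described above, shifting the probability vector towards a group of best-so-far individuals rather than a single best one, can be sketched as follows. The fitness function and parameter values are toy placeholders, not the paper's network coding model.

```python
import random

GENES = 8      # e.g. one bit per potential coding node (illustrative)
POP = 20       # individuals sampled per generation
GROUP = 5      # size of the best group used for the update
RATE = 0.1     # learning rate

def fitness(bits):
    # Toy stand-in for the real objective: fewer active coding
    # operations is better (higher fitness).
    return -sum(bits)

def sample(prob):
    # Draw one binary individual from the probability vector.
    return [1 if random.random() < p else 0 for p in prob]

def pbil(generations=50):
    random.seed(1)
    prob = [0.5] * GENES
    for _ in range(generations):
        pop = [sample(prob) for _ in range(POP)]
        pop.sort(key=fitness, reverse=True)
        best_group = pop[:GROUP]
        for j in range(GENES):
            # The mean gene value over the best group, not a single
            # best individual, pulls the probability vector.
            target = sum(ind[j] for ind in best_group) / GROUP
            prob[j] = (1 - RATE) * prob[j] + RATE * target
    return prob

prob = pbil()
print(round(sum(prob) / len(prob), 4))
```

With this toy fitness the vector converges towards all-zero individuals; in the paper's setting the fitness would instead reflect coding operations subject to the delay constraint.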

19 citations


Journal ArticleDOI
TL;DR: A population based incremental learning algorithm is developed which is shown to outperform existing algorithms in terms of both the quality of the solution obtained and the computational time consumed on networks with various features.
Abstract: In network coding based multicast, coding operations need to be minimized as they consume computational resources and increase data processing complexity at the corresponding nodes in the network. To address this problem, we develop a population based incremental learning algorithm which is shown to outperform existing algorithms in terms of both the quality of the solution obtained and the computational time consumed on networks with various features.

17 citations


Book ChapterDOI
01 Jan 2011
TL;DR: This paper investigates the effectiveness of applying tie breakers to orderings used in graph colouring heuristics and presents the first results for the benchmark, showing that the approach is adaptive to all the problem instances that it addresses.
Abstract: Graph colouring heuristics have long been applied successfully to the exam timetabling problem. Despite the success of the heuristic ordering criteria developed in the literature, these approaches lack the ability to handle situations where ties occur. In this paper, we investigate the effectiveness of applying tie breakers to the orderings used in graph colouring heuristics. We propose an approach that constructs solutions after defining which heuristics to combine and the proportion of each heuristic to be used in the orderings. Heuristic sequences are then adapted to help guide the search towards better quality solutions. We have tested the approach on the Toronto benchmark problems and are able to obtain results within the range of the best reported in the literature. In addition, to test the generality of our approach, we introduce an exam timetabling instance generator and a new benchmark data set with a format similar to the Toronto benchmark. The instances generated vary in size and conflict density. The publication of this problem data to the research community is intended to provide researchers with a data set covering a full range of conflict densities. Furthermore, the instance generator makes it possible to create random data sets with different characteristics to test the performance of approaches which rely on problem characteristics. We present the first results for the benchmark, and the results obtained show that the approach is adaptive to all the problem instances that we address. We also encourage the use of the data set and generator to produce tailored instances and to investigate various methods on them.
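The tie-breaking idea can be illustrated with a composite sort key: order exams primarily by a graph colouring criterion (here, largest conflict degree) and break ties with a secondary criterion (here, largest enrolment). The exam data and the particular criteria are illustrative, not taken from the benchmark.

```python
# Hypothetical exam data: exam -> (conflict degree, enrolment).
# Conflict degree = number of other exams sharing at least one student.
exams = {
    "MATH1": (4, 120),
    "PHYS1": (4, 200),   # ties with MATH1 on degree
    "CHEM1": (6, 80),
    "BIOL1": (2, 300),
}

# Largest-degree ordering with a tie breaker: sort by degree descending,
# and within equal degrees by enrolment descending. Without the second
# key component, the MATH1/PHYS1 tie would be resolved arbitrarily.
order = sorted(exams, key=lambda e: (-exams[e][0], -exams[e][1]))
print(order)
```

In a constructive timetabling approach, exams would then be scheduled in this order, with the ordering recomputed (e.g. for saturation-degree style heuristics) as the timetable fills up.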

3 citations