Author

Jie Xu

Bio: Jie Xu is an academic researcher from George Mason University. The author has contributed to research on stochastic optimization and discrete optimization, has an h-index of 17, and has co-authored 74 publications receiving 1,147 citations. Previous affiliations of Jie Xu include Northwestern University and Shanghai Jiao Tong University.


Papers
Journal ArticleDOI
TL;DR: Discusses how simulation optimization can benefit from cloud computing and high-performance computing, its integration with big data analytics, and its value in addressing engineering-design challenges for complex systems.
Abstract: Recent advances in simulation optimization research and explosive growth in computing power have made it possible to optimize complex stochastic systems that are otherwise intractable. In the first part of this paper, we classify simulation optimization techniques into four categories based on how the search is conducted. We provide tutorial expositions on representative methods from each category, with a focus in recent developments, and compare the strengths and limitations of each category. In the second part of this paper, we review applications of simulation optimization in various contexts, with detailed discussions on health care, logistics, and manufacturing systems. Finally, we explore the potential of simulation optimization in the new era. Specifically, we discuss how simulation optimization can benefit from cloud computing and high-performance computing, its integration with big data analytics, and the value of simulation optimization to help address challenges in engineering design of complex systems.

161 citations

Journal ArticleDOI
TL;DR: Small-sample validity of the statistical test and ranking-and-selection procedure is proven for normally distributed data, and ISC is compared to the commercial optimization via simulation package OptQuest on five test problems that range from 2 to 20 decision variables and on the order of 10^4 to 10^20 feasible solutions.
Abstract: Industrial Strength COMPASS (ISC) is a particular implementation of a general framework for optimizing the expected value of a performance measure of a stochastic simulation with respect to integer-ordered decision variables in a finite (but typically large) feasible region defined by linear-integer constraints. The framework consists of a global-search phase, followed by a local-search phase, and ending with a “clean-up” (selection of the best) phase. Each phase provides a probability 1 convergence guarantee as the simulation effort increases without bound: Convergence to a globally optimal solution in the global-search phase; convergence to a locally optimal solution in the local-search phase; and convergence to the best of a small number of good solutions in the clean-up phase. In practice, ISC stops short of such convergence by applying an improvement-based transition rule from the global phase to the local phase; a statistical test of convergence from the local phase to the clean-up phase; and a ranking-and-selection procedure to terminate the clean-up phase. Small-sample validity of the statistical test and ranking-and-selection procedure is proven for normally distributed data. ISC is compared to the commercial optimization via simulation package OptQuest on five test problems that range from 2 to 20 decision variables and on the order of 10^4 to 10^20 feasible solutions. These test cases represent response-surface models with known properties and realistic system simulation problems.
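The three-phase structure described above (global search, then local search, then clean-up) can be sketched on a toy problem. Everything below — the noisy objective, the replication budgets, and the transition rules — is our own illustrative assumption, not the ISC implementation:

```python
import random

random.seed(0)

# Hypothetical noisy objective over integer decisions x in {0, ..., 20}:
# the true mean is (x - 13)**2; each simulation run adds Gaussian noise.
def simulate(x):
    return (x - 13) ** 2 + random.gauss(0, 1.0)

def estimate(x, reps):
    """Average `reps` independent simulation replications at solution x."""
    return sum(simulate(x) for _ in range(reps)) / reps

# Phase 1: global search -- cheaply screen scattered solutions.
candidates = random.sample(range(21), 8)
best = min(candidates, key=lambda x: estimate(x, 5))

# Phase 2: local search -- move to a better neighbor until none improves.
improved = True
while improved:
    improved = False
    for nb in (best - 1, best + 1):
        if 0 <= nb <= 20 and estimate(nb, 10) < estimate(best, 10):
            best, improved = nb, True
            break

# Phase 3: clean-up -- re-estimate the finalists with many replications
# and select the best (a crude stand-in for a ranking-and-selection step).
finalists = [x for x in (best - 1, best, best + 1) if 0 <= x <= 20]
winner = min(finalists, key=lambda x: estimate(x, 200))
print(winner)
```

The point of the sketch is the division of labor: few replications while exploring broadly, more while comparing neighbors, and the most when distinguishing a handful of finalists.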

155 citations

Journal ArticleDOI
01 Aug 2012
TL;DR: This work compares the performances of eight well-known and widely used clustering validity indices and finds that the silhouette statistic index stands out in most of the data sets that are examined.
Abstract: Swarm intelligence has emerged as a worthwhile class of clustering methods due to its convenient implementation, parallel capability, ability to avoid local minima, and other advantages. In such applications, clustering validity indices usually operate as fitness functions to evaluate the qualities of the obtained clusters. However, as the validity indices are usually data dependent and are designed to address certain types of data, the selection of different indices as the fitness functions may critically affect cluster quality. Here, we compare the performances of eight well-known and widely used clustering validity indices, namely, the Calinski-Harabasz index, the CS index, the Davies-Bouldin index, the Dunn index with two of its generalized versions, the I index, and the silhouette statistic index, on both synthetic and real data sets in the framework of differential-evolution-particle-swarm-optimization (DEPSO)-based clustering. DEPSO is a hybrid evolutionary algorithm of the stochastic optimization approach (differential evolution) and the swarm intelligence method (particle swarm optimization) that further increases the search capability and achieves higher flexibility in exploring the problem space. According to the experimental results, we find that the silhouette statistic index stands out in most of the data sets that we examined. Meanwhile, we suggest that users reach their conclusions not just based on only one index, but after considering the results of several indices to achieve reliable clustering structures.
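As a concrete reference point for the index the study singles out, here is a minimal pure-Python sketch of the silhouette statistic on made-up one-dimensional data; the data, labels, and the singleton-cluster convention are our assumptions, not the paper's:

```python
def silhouette(points, labels):
    """Mean silhouette width s(i) = (b - a) / max(a, b) over all points,
    where a is the mean intra-cluster distance and b is the mean distance
    to the nearest other cluster."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)

    scores = []
    for p, l in zip(points, labels):
        same = clusters[l]
        if len(same) == 1:
            scores.append(0.0)  # common convention for singleton clusters
            continue
        # Self-distance is 0, so dividing by (n - 1) excludes the point itself.
        a = sum(abs(p - q) for q in same) / (len(same) - 1)
        b = min(sum(abs(p - q) for q in other) / len(other)
                for k, other in clusters.items() if k != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
good = [0, 0, 0, 1, 1, 1]   # matches the two obvious clusters
bad  = [0, 1, 0, 1, 0, 1]   # deliberately scrambled labeling
print(silhouette(data, good))  # near 1: compact, well-separated clusters
print(silhouette(data, bad))   # much lower score
```

A validity index used as a fitness function works exactly this way: the swarm proposes label assignments, and the index scores them.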

123 citations

Journal ArticleDOI
TL;DR: It is argued that simulation optimization is a decision-making tool that can be applied to many scenarios to tremendous effect and provides the “smart brain” required to drastically improve the efficiency of industrial systems.
Abstract: Simulation is an established tool for predicting and evaluating the performance of complex stochastic systems that are analytically intractable. Recent research in simulation optimization and explo...

103 citations

Journal ArticleDOI
TL;DR: A new framework to perform efficient simulation optimization when simulation models with different fidelity levels are available is proposed, consisting of two novel methodologies: ordinal transformation (OT) and optimal sampling (OS).
Abstract: Simulation optimization can be used to solve many complex optimization problems in automation applications such as job scheduling and inventory control. We propose a new framework to perform efficient simulation optimization when simulation models with different fidelity levels are available. The framework consists of two novel methodologies: ordinal transformation (OT) and optimal sampling (OS). The OT methodology uses the low-fidelity simulations to transform the original solution space into an ordinal space that encapsulates useful information from the low-fidelity model. The OS methodology efficiently uses high-fidelity simulations to sample the transformed space in search of the optimal solution. Through theoretical analysis and numerical experiments, we demonstrate the promising performance of the multi-fidelity optimization with ordinal transformation and optimal sampling (MO2TOS) framework.
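The OT/OS pipeline can be illustrated with a deliberately simple sketch. The two models, the group size, and the budget split below are our own stand-ins; the paper derives an optimal allocation rather than the heuristic used here:

```python
import random

random.seed(1)

# Hypothetical models: an expensive, noisy high-fidelity objective and a
# cheap, deterministic but biased low-fidelity approximation.
def f_hi(x):
    return (x - 30) ** 2 + random.gauss(0, 5)

def f_lo(x):
    return (x - 27) ** 2  # biased: its minimum is shifted away from 30

solutions = list(range(60))

# Ordinal transformation (OT): rank every solution with the low-fidelity
# model, then partition the ranked list into ordered groups.
ranked = sorted(solutions, key=f_lo)
groups = [ranked[i:i + 10] for i in range(0, len(ranked), 10)]

# Optimal sampling (OS), heavily simplified: spend most of the
# high-fidelity budget on the most promising group and only probe the rest.
best, best_val = None, float("inf")
for rank, group in enumerate(groups):
    picks = group if rank == 0 else random.sample(group, 1)
    for x in picks:
        val = sum(f_hi(x) for _ in range(5)) / 5  # average a few replications
        if val < best_val:
            best, best_val = x, val
print(best)
```

Even though the low-fidelity model is biased, its ordinal information concentrates the expensive evaluations near the true optimum, which is the intuition the framework formalizes.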

74 citations


Cited by
Journal ArticleDOI


08 Dec 2001 - BMJ
TL;DR: A reflection on i, the square root of minus one: an odd beast at first hearing, an intruder hovering on the edge of reality whose sense of the surreal only intensified over the years.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
01 May 1975
TL;DR: Fundamentals of Queueing Theory, Fourth Edition provides a comprehensive outline of simple and more advanced queueing models, with a self-contained presentation of key concepts and formulae.
Abstract: Praise for the Third Edition: "This is one of the best books available. Its excellent organizational structure allows quick reference to specific models and its clear presentation . . . solidifies the understanding of the concepts being presented." (IIE Transactions on Operations Engineering)

Thoroughly revised and expanded to reflect the latest developments in the field, Fundamentals of Queueing Theory, Fourth Edition continues to present the basic statistical principles that are necessary to analyze the probabilistic nature of queues. Rather than presenting a narrow focus on the subject, this update illustrates the wide-reaching, fundamental concepts in queueing theory and its applications to diverse areas such as computer science, engineering, business, and operations research. This update takes a numerical approach to understanding and making probable estimations relating to queues, with a comprehensive outline of simple and more advanced queueing models. Newly featured topics of the Fourth Edition include: retrial queues; approximations for queueing networks; numerical inversion of transforms; and determining the appropriate number of servers to balance quality and cost of service. Each chapter provides a self-contained presentation of key concepts and formulae, allowing readers to work with each section independently, while a summary table at the end of the book outlines the types of queues that have been discussed and their results. In addition, two new appendices have been added, discussing transforms and generating functions as well as the fundamentals of differential and difference equations. New examples are now included along with problems that incorporate QtsPlus software, which is freely available via the book's related Web site. With its accessible style and wealth of real-world examples, Fundamentals of Queueing Theory, Fourth Edition is an ideal book for courses on queueing theory at the upper-undergraduate and graduate levels. It is also a valuable resource for researchers and practitioners who analyze congestion in the fields of telecommunications, transportation, aviation, and management science.
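As a small worked complement to the models the book covers, the standard M/M/1 steady-state formulas can be computed directly; the arrival and service rates below are illustrative values we chose, not an example from the book:

```python
# Standard M/M/1 single-server queue results: utilization rho = lam/mu,
# mean number in system L = rho/(1 - rho), mean time in system
# W = 1/(mu - lam), and mean queue length Lq = rho**2/(1 - rho).
# Little's law, L = lam * W, ties the quantities together.
lam, mu = 8.0, 10.0          # arrival and service rates (customers/hour)
rho = lam / mu               # server utilization
L = rho / (1 - rho)          # mean number in system
W = 1 / (mu - lam)           # mean time in system (hours)
Lq = rho ** 2 / (1 - rho)    # mean number waiting (excludes one in service)
print(rho, L, W, Lq)         # roughly 0.8, 4.0, 0.5, 3.2
```

Note how steeply L grows as rho approaches 1; that sensitivity of congestion to utilization underlies questions like choosing the number of servers to balance quality and cost of service.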

2,562 citations

Journal ArticleDOI
TL;DR: A state-of-the-art review offering a holistic view of big data (BD) challenges and big data analytics (BDA) methods theorized, proposed, or employed by organizations, with the objective of helping others understand this landscape and make robust investment decisions.

1,267 citations

Journal ArticleDOI
TL;DR: This review begins with the definition of clustering, considers the basic elements of the clustering process, such as distance or similarity measures and evaluation indicators, and analyzes clustering algorithms from two perspectives: traditional and modern.
Abstract: Data analysis is a common method in modern science research, spanning communication science, computer science, and biology. Clustering, as a basic component of data analysis, plays a significant role. On one hand, many tools for cluster analysis have been created, along with the growth of information and the intersection of subjects. On the other hand, each clustering algorithm has its own strengths and weaknesses, due to the complexity of information. In this review paper, we begin at the definition of clustering, take the basic elements involved in the clustering process, such as the distance or similarity measurement and evaluation indicators, into consideration, and analyze the clustering algorithms from two perspectives, the traditional ones and the modern ones. All the discussed clustering algorithms are compared in detail and comprehensively shown in Appendix Table 22.

1,234 citations

Journal ArticleDOI
TL;DR: This survey paper will help industrial users, data analysts, and researchers to better develop machine learning models by identifying the proper hyper-parameter configurations effectively and introducing several state-of-the-art optimization techniques.

739 citations