Author

Xiaoyu Chen

Bio: Xiaoyu Chen is an academic researcher from Virginia Tech. The author has contributed to research in topics: Information visualization & Visualization. The author has an h-index of 6 and has co-authored 17 publications receiving 75 citations.

Papers
Proceedings ArticleDOI
01 May 2018
TL;DR: This paper proposes a deadline-constrained predictive offloading method based on a mobile-fog-cloud (MFC) network that optimizes offloading decisions by solving a quadratically constrained integer linear program subject to latency requirements and the predicted availability of devices.
Abstract: An industrial cyber-physical system (ICPS) integrates physical processes, systems, and networks with computation resources to provide reliable and responsive computation services. A cyber-manufacturing system (CMS), which is derived from ICPS, poses significant challenges to the reliability, accuracy, and responsiveness of computation services for manufacturing decision making. In this paper, we focus on reliability and responsiveness. Due to heterogeneities in computation and communication capacities and conditions, demanding computation services may not be completed in a timely manner. To facilitate reliable and responsive computation services, we propose a deadline-constrained predictive offloading method based on a mobile-fog-cloud (MFC) network. This method optimizes the offloading decisions by solving a quadratically constrained integer linear program subject to latency requirements and the predicted availability of devices. A newly constructed hybrid cyber-additive manufacturing network is used to test the performance of the proposed predictive offloading method and the MFC network. The results show that the proposed method outperforms mobile computing, fog computing, cloud computing, and fog-cloud computing benchmarks in minimizing resource consumption while complying with latency requirements.
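
The offloading decision can be pictured with a small sketch. The snippet below is a hypothetical, brute-force stand-in for the paper's quadratically constrained integer linear program: it enumerates assignments of tasks to the mobile, fog, or cloud tier, discards assignments whose predicted completion time exceeds the deadline or that rely on unavailable devices, and keeps the cheapest feasible one. All tiers, latencies, costs, and workloads are made-up placeholders, not values from the paper.

```python
# Hypothetical sketch of deadline-constrained offloading across a
# mobile-fog-cloud (MFC) hierarchy. Values are illustrative placeholders.
from itertools import product

TIERS = ["mobile", "fog", "cloud"]

LATENCY = {"mobile": 5.0, "fog": 2.0, "cloud": 1.0}       # seconds per unit of work (assumed)
NET_DELAY = {"mobile": 0.0, "fog": 0.5, "cloud": 2.0}     # network round-trip delay, s (assumed)
COST = {"mobile": 1.0, "fog": 2.0, "cloud": 4.0}          # resource cost per unit of work (assumed)
AVAILABLE = {"mobile": True, "fog": True, "cloud": True}  # predicted device availability

def best_offloading(workloads, deadline):
    """Brute-force the task-to-tier assignment that meets the deadline at minimum cost."""
    best, best_cost = None, float("inf")
    for assignment in product(TIERS, repeat=len(workloads)):
        if not all(AVAILABLE[t] for t in assignment):
            continue
        # Each task's completion time = transfer delay + processing time.
        times = [NET_DELAY[t] + LATENCY[t] * w for t, w in zip(assignment, workloads)]
        if max(times) > deadline:
            continue  # violates the latency requirement
        cost = sum(COST[t] * w for t, w in zip(assignment, workloads))
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best, best_cost

print(best_offloading(workloads=[1.0, 3.0, 0.5], deadline=6.0))
```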

20 citations

Journal ArticleDOI
Lening Wang, Xiaoyu Chen, SungKu Kang, Xinwei Deng, Ran Jin
TL;DR: In this article, a Gaussian process-constrained general path model is proposed to approximate high-fidelity FEA simulation results based on low-fidelity results voxel-by-voxel.
Abstract: Finite element analysis (FEA) has been widely adopted to identify potential defects in additive manufacturing (AM) processes. For personalized product realization, it is necessary to validate a number of heterogeneous product and process designs before or during manufacturing by using FEA. Multi-fidelity FEA simulations can be readily implemented with different capabilities in terms of simulation accuracy. However, due to its complexity, high-fidelity FEA simulation is time-consuming and decreases the efficiency of product realization in AM, while low-fidelity FEA simulation has fast computation speed yet limited capability. Hence, our objective is to improve the capability of FEA by providing an efficient data-driven model. In this research, a Gaussian process-constrained general path model is proposed to approximate the high-fidelity FEA simulation results based on low-fidelity results voxel-by-voxel. The proposed model quantifies the heterogeneous discrepancies between low- and high-fidelity FEA simulation results by incorporating the product design information (e.g., Cartesian coordinates of deposition sequence) and process design information from inputs of FEA simulation (e.g., input heat). Therefore, it enables the validation of new product and process designs based on the simulation results with the desired capability in a timely manner. The advantages of the proposed method are illustrated by FEA simulations of the fused deposition modeling (FDM) process with two levels of fidelity (i.e., low- and high-fidelity).
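
A rough way to picture the multi-fidelity idea (not the paper's actual Gaussian process-constrained general path model) is to learn the voxel-wise discrepancy between low- and high-fidelity outputs as a function of the design inputs, then use it to correct new low-fidelity runs. The sketch below assumes scikit-learn is available and uses synthetic stand-in data in place of FEA results; the feature layout is an assumption.

```python
# Illustrative multi-fidelity correction: learn the discrepancy between
# low- and high-fidelity simulation outputs with a Gaussian process.
# Synthetic data; not the paper's exact model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Design inputs per voxel: (x, y, z coordinates, input heat) -- assumed features.
X = rng.uniform(0, 1, size=(200, 4))
low_fi = np.sin(3 * X[:, 0]) + 0.5 * X[:, 3]                      # cheap simulation (stand-in)
high_fi = low_fi + 0.3 * X[:, 1] ** 2 + rng.normal(0, 0.02, 200)  # expensive simulation (stand-in)

# Model the voxel-wise discrepancy high_fi - low_fi as a function of the design.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(X, high_fi - low_fi)

# Correct a new low-fidelity run toward the high-fidelity prediction.
X_new = rng.uniform(0, 1, size=(5, 4))
low_new = np.sin(3 * X_new[:, 0]) + 0.5 * X_new[:, 3]
high_pred = low_new + gp.predict(X_new)
print(high_pred)
```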

16 citations

Journal ArticleDOI
Xiaoyu Chen, Ran Jin
TL;DR: The results yield a regularized regression model that accurately predicts users' evaluations of task complexity, and they indicate the significance of all three types of sensing data sets for visualization evaluation.
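
Since the abstract is not shown here, the following is only a generic sketch of a regularized regression fit over several concatenated groups of sensing features. The feature groups, data, and model choice (a cross-validated lasso) are assumptions for illustration, not the paper's specification.

```python
# Sketch of a regularized regression predicting perceived task complexity
# from concatenated sensing feature groups. Data and feature groups are hypothetical.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n = 120

# Three hypothetical groups of sensing features, concatenated column-wise.
group_a = rng.normal(size=(n, 5))
group_b = rng.normal(size=(n, 4))
group_c = rng.normal(size=(n, 3))
X = np.hstack([group_a, group_b, group_c])

# Synthetic complexity ratings driven by a few features from each group.
y = 2 * X[:, 0] - 1.5 * X[:, 6] + 0.8 * X[:, 10] + rng.normal(0, 0.3, n)

# L1-regularized regression with cross-validated penalty selection.
model = LassoCV(cv=5).fit(X, y)
print("nonzero coefficients:", np.flatnonzero(model.coef_))
```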

16 citations

Journal ArticleDOI
Xiaoyu Chen, Ran Jin
TL;DR: The results indicate that the proposed recommendation method outperforms traditional matrix completion and tensor regression methods as well as a state-of-the-art personalized recommendation model.
Abstract: Industrial cyber-physical systems (ICPS) will accelerate the transformation of offline data-driven modeling into fast computation services, such as computation pipelines for prediction, monitoring, prognosis, diagnosis, and control in factories. However, it is computationally intensive to adapt computation pipelines to the heterogeneous contexts of ICPS in manufacturing. In this article, we propose to rank and select the best computation pipelines to match contexts, formulating this as a recommendation problem. The proposed method, Adaptive Computation Pipelines (AdaPipe), considers similarities of computation pipelines derived from word embeddings together with context features. Thus, without exploring all computation pipelines exhaustively in a trial-and-error manner, AdaPipe efficiently identifies top-ranked computation pipelines. We validated the proposed method with 60 bootstrapped datasets from three real manufacturing processes: thermal spray coating, printed electronics, and additive manufacturing. The results indicate that the proposed recommendation method outperforms traditional matrix completion and tensor regression methods as well as a state-of-the-art personalized recommendation model.
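As a simplified illustration of the recommendation idea (AdaPipe's embedding-based pipeline similarities and context features are richer than this), the sketch below scores each candidate pipeline for a new context by a similarity-weighted average of its observed performance on known contexts and returns the top-ranked ones. The data are synthetic placeholders.

```python
# Hypothetical sketch of ranking computation pipelines for a new context.
# Not AdaPipe itself; only the recommendation idea is illustrated.
import numpy as np

rng = np.random.default_rng(2)

n_pipelines, n_contexts, d = 8, 6, 4
perf = rng.uniform(0.5, 1.0, size=(n_pipelines, n_contexts))  # observed pipeline performance
contexts = rng.normal(size=(n_contexts, d))                    # context feature vectors

def rank_pipelines(new_context, top_k=3):
    """Score each pipeline by similarity-weighted performance on known contexts."""
    sims = contexts @ new_context
    sims /= (np.linalg.norm(contexts, axis=1) * np.linalg.norm(new_context) + 1e-12)
    weights = np.maximum(sims, 0)            # keep only positively similar contexts
    scores = perf @ weights / (weights.sum() + 1e-12)
    return np.argsort(scores)[::-1][:top_k]  # indices of the top-ranked pipelines

print(rank_pipelines(rng.normal(size=d)))
```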

15 citations

Journal ArticleDOI
Ran Jin, Xinwei Deng, Xiaoyu Chen, Liang Zhu, Jun Zhang
TL;DR: A dynamic quality-process model is proposed to characterize the varying effects of a process on product quality due to equipment degradation; it can automatically estimate the dynamic effects via a meaningful parameter regularization, leading to accurate parameter estimation and model prediction.
Abstract: In many manufacturing processes, equipment reliability plays a crucial role in product quality assurance, so it is important to account for equipment degradation in the quality-process model. In this article, we propose a dynamic quality-process model to characterize the varying effects of a process on product quality due to equipment degradation. The proposed model treats the effects of process variables on product quality as piecewise linear functions of the equipment degradation. It can automatically estimate the dynamic effects via a meaningful parameter regularization, leading to accurate parameter estimation and model prediction. The merits of the proposed method are illustrated by both simulations and a real case study of a crystal growth manufacturing process.
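The varying-coefficient idea can be pictured with a toy example: the effect of a process variable on quality is allowed to change linearly with the degradation level after an assumed change point, and the coefficients are recovered by least squares. The paper's piecewise-linear model with parameter regularization is more involved; everything below is synthetic.

```python
# Toy illustration of a quality-process model whose process-variable effect
# changes with equipment degradation. Not the paper's regularized estimator.
import numpy as np

rng = np.random.default_rng(3)
n = 300

x = rng.normal(size=n)               # process variable
d = np.linspace(0, 1, n)             # equipment degradation level (0 = new, 1 = worn)
knot = 0.5                           # assumed change point on the degradation scale

# True effect of x drifts after the knot; quality y is generated accordingly.
beta = np.where(d < knot, 1.0, 1.0 - 2.0 * (d - knot))
y = beta * x + 0.1 * rng.normal(size=n)

# Design matrix: effect = b0 + b1*d before the knot, plus an extra slope after it.
Z = np.column_stack([x, x * d, x * np.maximum(d - knot, 0)])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
print("estimated piecewise effect coefficients:", coef)
```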

12 citations


Cited by
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Proceedings Article
01 Jan 1999
Abstract: The ever-increasing power of computers and hardware rendering systems has, to date, primarily motivated the creation of visually rich and perceptually realistic virtual environment (VE) applications. Comparatively very little effort has been expended on the user interaction components of VEs. As a result, VE user interfaces are often poorly designed and are rarely evaluated with users. Although usability engineering is a newly emerging facet of VE development, user-centered design and usability evaluation in VEs as a practice still lags far behind what is needed. This paper presents a structured, iterative approach for the user-centered design and evaluation of VE user interaction. This approach consists of the iterative use of expert heuristic evaluation, followed by formative usability evaluation, followed by summative evaluation. We describe our application of this approach to a real-world VE for battlefield visualization, describe the resulting series of design iterations, and present evidence that this approach provides a cost-effective strategy for assessing and iteratively improving user interaction design in VEs. This paper is among the first to report applying an iterative, structured, user-centered design and evaluation approach to VE user interaction design.

145 citations

Book ChapterDOI
01 Jan 2004
TL;DR: The chapter reviews the most important methods for obtaining transformed signal characteristics such as principal component analysis, the discrete Fourier transform, and the discrete cosine and sine transform.
Abstract: This chapter gives an overview of the most relevant feature selection and extraction methods for biomedical image processing. Besides the traditional transformed and non-transformed signal characteristics and texture, feature extraction methods encompass structural and graph descriptors. The feature selection methods described in this chapter are the exhaustive search, branch and bound algorithm, max-min feature selection, sequential forward and backward selection, and also Fisher's linear discriminant. Feature extraction and selection in pattern recognition are based on finding mathematical methods for reducing dimensionality of pattern representation. A lower-dimensional representation based on pattern descriptors is a so-called feature. It plays a crucial role in determining the separating properties of pattern classes. The choice of features, attributes, or measurements has an important influence on: the accuracy of classification, the time needed for classification, the number of examples needed for learning, and the cost of performing classification. The chapter reviews the most important methods for obtaining transformed signal characteristics such as principal component analysis, the discrete Fourier transform, and the discrete cosine and sine transform. The basic idea employed in transformed signal characteristics is to find such transform-based features with a low redundancy and a high information density of the original input.
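
Two of the transformed-signal feature extractors mentioned above, principal component analysis and the discrete Fourier transform, can be sketched in a few lines. The signals here are synthetic, and NumPy and scikit-learn are assumed to be available; the chapter itself does not prescribe this particular code.

```python
# Small sketch of two transformed-signal feature extractors:
# principal component analysis and discrete Fourier transform magnitudes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
signals = rng.normal(size=(50, 256))              # 50 synthetic signals of length 256

# PCA: project each signal onto the first few principal directions.
pca_features = PCA(n_components=8).fit_transform(signals)

# DFT: keep the magnitudes of the lowest-frequency coefficients.
dft_features = np.abs(np.fft.rfft(signals, axis=1))[:, :8]

features = np.hstack([pca_features, dft_features])  # low-dimensional representation
print(features.shape)                                # (50, 16)
```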

97 citations

Journal ArticleDOI
TL;DR: In this paper, the authors quantify the production and use economics of an additively-manufactured versus a traditionally forged GE engine bracket for commercial aviation with equivalent performance and show that the additively manufactured part and design is cheaper than the forged one for a wide range of scenarios, including at higher volumes of 2,000 to 12,000 brackets per year.
Abstract: Additive manufacturing is increasingly of interest for commercial and military applications due to its potential to create novel geometries with increased performance. For additive manufacturing to find commercial application, it will have to be cost competitive against traditional processes such as forging. Forecasting the production costs of future products prior to large-scale investment is challenging due to the limits of traditional cost accounting’s ability to handle the systemic process implications of new technologies and cognitive biases in humans’ additive and systemic estimates. Leveraging a method uniquely suited to these challenges, we quantify the production and use economics of an additively-manufactured versus a traditionally forged GE engine bracket for commercial aviation with equivalent performance. Our results show that, despite the simplicity of the engine bracket, when taking into account part redesign for AM and the associated lifetime fuel savings of the additively-designed bracket, the additively manufactured part and design is cheaper than the forged one for a wide range of scenarios, including at higher volumes of 2,000 to 12,000 brackets per year. Opportunities to further reduce costs include cheaper material prices without compromising quality, being able to produce vertical builds with equivalent performance to horizontal builds, and increasing process control so as to enable reduced testing. Given the conservative nature of our assumptions as well as our choice of part, these results suggest there may be broader economic viability for additively manufactured parts, especially when systemic factors and use costs are incorporated.

70 citations

Journal ArticleDOI
TL;DR: A taxonomy of different optimization problems in fog computing, a categorization of the metrics used in constraints and objective functions, and a mapping study of the relevant literature are proposed.

69 citations