
Showing papers on "Performance prediction published in 2019"


Journal ArticleDOI
TL;DR: The DNN model demonstrated better performance for penetration rate estimation compared with the ANN model and it can be introduced as a newly developed model in the field of TBM performance assessment.
Abstract: Performance prediction in mechanized tunnel projects utilizing a tunnel boring machine (TBM) is a prerequisite to accurate and reliable cost estimation and project scheduling. A wide variety of artificial intelligence methods have been utilized in the prediction of the penetration rate of TBMs. This study focuses on developing a model based on deep neural networks (DNNs), an advanced version of artificial neural networks (ANNs), for predicting the TBM penetration rate based on data obtained from the Pahang–Selangor raw water transfer tunnel in Malaysia. To evaluate and document the success and reliability of the new DNN model, an ANN model based on five different data categories from the established database was developed and compared with the DNN model. Based on the coefficient of determination and root mean square error (RMSE) results, the DNN predictive model achieves a significant increase in penetration-rate prediction performance. The DNN model demonstrated better performance for penetration rate estimation than the ANN model and can be introduced as a newly developed model in the field of TBM performance assessment.
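As a rough illustration of the modeling setup described above (a sketch under assumptions, not the authors' code), the snippet below fits a multi-hidden-layer regressor to synthetic data and reports the two metrics used in the study; the feature list in the comment is a plausible guess at TBM inputs, not the paper's exact database.

```python
# Hedged sketch: a deep MLP regressor for TBM penetration rate, evaluated
# with R^2 and RMSE as in the abstract. Features and data are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 5))            # e.g., UCS, RQD, thrust, RPM, BTS (assumed)
y = X @ np.array([1.5, -0.8, 2.0, 1.0, -0.5]) + 0.1 * rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# "Deep" network: several hidden layers, versus a single-hidden-layer ANN baseline.
dnn = MLPRegressor(hidden_layer_sizes=(64, 64, 32), max_iter=2000,
                   random_state=0).fit(scaler.transform(X_train), y_train)
pred = dnn.predict(scaler.transform(X_test))
print("R2   =", r2_score(y_test, pred))
print("RMSE =", mean_squared_error(y_test, pred) ** 0.5)
```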

99 citations


Proceedings ArticleDOI
02 Jun 2019
TL;DR: NAPEL is presented, a high-level performance and energy estimation framework for NMC architectures that leverages ensemble learning to develop a model based on microarchitectural parameters and application characteristics, and is capable of making accurate predictions for previously-unseen applications.
Abstract: The cost of moving data between the memory/storage units and the compute units is a major contributor to the execution time and energy consumption of modern workloads in computing systems. A promising paradigm to alleviate this data movement bottleneck is near-memory computing (NMC), which consists of placing compute units close to the memory/storage units. There is substantial research effort that proposes NMC architectures and identifies workloads that can benefit from NMC. System architects typically use simulation techniques to evaluate the performance and energy consumption of their designs. However, simulation is extremely slow, imposing long times for design space exploration. In order to enable fast early-stage design space exploration of NMC architectures, we need high-level performance and energy models. We present NAPEL, a high-level performance and energy estimation framework for NMC architectures. NAPEL leverages ensemble learning to develop a model that is based on microarchitectural parameters and application characteristics. NAPEL training uses a statistical technique, called design of experiments, to collect representative training data efficiently. NAPEL provides early design space exploration 220× faster than a state-of-the-art NMC simulator, on average, with error rates of 8.5% and 11.6% for performance and energy estimations, respectively, compared to the NMC simulator. NAPEL is also capable of making accurate predictions for previously-unseen applications.
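A minimal sketch of the NAPEL workflow as the abstract describes it, with Latin hypercube sampling standing in for the paper's design-of-experiments technique and a toy function standing in for the slow NMC simulator; none of this is the authors' code.

```python
# Illustrative sketch: sample training configurations with a DoE-style method
# (Latin hypercube here, as an assumption), run the slow simulator on those
# few points, then fit a fast ensemble model for design space exploration.
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import RandomForestRegressor

def simulate(params):
    """Stand-in for a slow NMC simulator run (invented for illustration)."""
    cores, freq_ghz, banks = params
    return 100.0 / (cores * freq_ghz) + 5.0 / banks

# DoE: Latin hypercube over 3 microarchitectural parameters.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=40)
lo, hi = np.array([1, 1.0, 2]), np.array([16, 3.0, 32])
configs = qmc.scale(unit, lo, hi)

runtimes = np.array([simulate(c) for c in configs])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(configs, runtimes)

# Fast early-stage prediction for an unseen design point.
print(model.predict([[8, 2.5, 16]]))
```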

89 citations


Journal ArticleDOI
01 Jul 2019
TL;DR: In this paper, a plan-structured neural network is proposed to predict the latency of query operators and input relations in query execution plans, and a number of optimizations are proposed to reduce training overhead without sacrificing effectiveness.
Abstract: Query performance prediction, the task of predicting a query's latency prior to execution, is a challenging problem in database management systems. Existing approaches rely on features and performance models engineered by human experts, but often fail to capture the complex interactions between query operators and input relations, and generally do not adapt naturally to workload characteristics and patterns in query execution plans. In this paper, we argue that deep learning can be applied to the query performance prediction problem, and we introduce a novel neural network architecture for the task: a plan-structured neural network. Our architecture matches the structure of any optimizer-selected query execution plan and predicts its latency with high accuracy, while eliminating the need for human-crafted input features. A number of optimizations are also proposed to reduce training overhead without sacrificing effectiveness. We evaluate our techniques on various workloads and demonstrate that our approach can outperform the state-of-the-art in query performance prediction.
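The tree-structured evaluation can be sketched as follows (an illustration, not the paper's implementation): the paper assigns each operator type its own neural unit, while this sketch shares a single unit across all operators for brevity, and every size below is an assumption.

```python
# Hedged sketch of a plan-structured network: a node's output embedding is
# computed from its own features plus its children's embeddings, mirroring
# the tree shape of the optimizer-selected plan.
import torch
import torch.nn as nn

EMB, FEAT = 16, 8  # embedding size and per-operator feature size (assumed)

class OpUnit(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT + 2 * EMB, 32), nn.ReLU(),
                                 nn.Linear(32, EMB))

    def forward(self, feats, left, right):
        return self.net(torch.cat([feats, left, right]))

unit, head = OpUnit(), nn.Linear(EMB, 1)  # head maps root embedding -> latency

def embed(plan):
    """plan = (features, left_subplan_or_None, right_subplan_or_None)."""
    feats, l, r = plan
    zero = torch.zeros(EMB)
    return unit(feats, embed(l) if l else zero, embed(r) if r else zero)

# Toy plan: a join over two scans.
scan = (torch.randn(FEAT), None, None)
join = (torch.randn(FEAT), scan, scan)
print(head(embed(join)))  # predicted latency for the whole plan
```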

65 citations


Proceedings ArticleDOI
25 May 2019
TL;DR: This paper proposes a novel approach to model highly configurable software systems using a deep feedforward neural network (FNN) combined with a sparsity regularization technique, e.g., L1 regularization, which can predict performance values of highly configurable software systems with binary and/or numeric configuration options at much higher prediction accuracy than state-of-the-art approaches.
Abstract: Many software systems provide users with a set of configuration options, and different configurations may lead to different runtime performance of the system. As the number of configuration combinations can be exponential, it is difficult to exhaustively deploy and measure system performance under all possible configurations. Recently, several learning methods have been proposed to build a performance prediction model from performance data collected on a small sample of configurations, and then use the model to predict system performance under a new configuration. In this paper, we propose a novel approach to model highly configurable software systems using a deep feedforward neural network (FNN) combined with a sparsity regularization technique, e.g., L1 regularization. In addition, we design a practical search strategy for automatically and efficiently tuning the network hyperparameters. Our method, called DeepPerf, can predict performance values of highly configurable software systems with binary and/or numeric configuration options at much higher prediction accuracy, and with less training data, than state-of-the-art approaches. Experimental results on eleven public real-world datasets confirm the effectiveness of our approach.
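A hedged sketch of the DeepPerf-style training loop: a deep FNN over configuration options with an L1 penalty added to the loss. The network shape, penalty strength, and synthetic data are assumptions; DeepPerf tunes such hyperparameters automatically.

```python
# Hedged sketch (not the authors' code): deep feedforward network over
# binary/numeric configuration options, trained with an L1 weight penalty
# to encourage sparsity.
import torch
import torch.nn as nn

n_options = 10
net = nn.Sequential(nn.Linear(n_options, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam = 1e-3  # L1 strength; an assumption, tuned automatically in DeepPerf

X = torch.randint(0, 2, (200, n_options)).float()   # binary configurations
y = X @ torch.linspace(0.1, 1.0, n_options) + 0.05 * torch.randn(200)

for _ in range(500):
    opt.zero_grad()
    mse = nn.functional.mse_loss(net(X).squeeze(), y)
    l1 = sum(p.abs().sum() for p in net.parameters())
    (mse + lam * l1).backward()
    opt.step()
```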

63 citations


Journal ArticleDOI
TL;DR: It is found that interactions among four or more configuration options have only a minor influence on the prediction error and that ignoring them when learning a performance-influence model can save a substantial amount of computation time, while keeping the model small without considerably increasing the predictionerror.
Abstract: Modeling the performance of a highly configurable software system requires capturing the influences of its configuration options and their interactions on the system’s performance. Performance-influence models quantify these influences, thereby explaining the performance behavior of a configurable system as a whole. To be useful in practice, a performance-influence model should have a low prediction error, small model size, and reasonable computation time. Because of the inherent tradeoffs among these properties, optimizing for one property may negatively influence the others. It is unclear, though, to what extent these tradeoffs manifest themselves in practice, that is, whether a large configuration space can be described accurately only with large models and significant resource investment. By means of 10 real-world highly configurable systems from different domains, we have systematically studied the tradeoffs between the three properties. Surprisingly, we found that the tradeoffs between prediction error and model size and between prediction error and computation time are rather marginal. That is, we can learn accurate and small models in reasonable time, so that one performance-influence model can fit different use cases, such as program comprehension and performance prediction. We further investigated why the tradeoffs are marginal. We found that interactions among four or more configuration options have only a minor influence on the prediction error and that ignoring them when learning a performance-influence model can save a substantial amount of computation time, while keeping the model small without considerably increasing the prediction error. This is an important insight for new sampling and learning techniques as they can focus on specific regions of the configuration space and find a sweet spot between accuracy and effort. We further analyzed why the configuration options and their interactions have the observed influences on the systems’ performance. We were able to identify several patterns across subject systems, such as dominant configuration options and data pipelines, that explain the influences of highly influential configuration options and interactions, and give further insights into the domain of highly configurable systems.
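The modeling idea lends itself to a compact sketch (illustrative only, not the study's tooling): enumerate option interactions up to degree three, since the study found that fourth-order and higher interactions barely affect prediction error, and let a sparse linear learner keep only the influential terms.

```python
# Sketch of a performance-influence model: linear terms plus option
# interactions up to degree 3, fit with Lasso so that only influential
# terms survive; data is synthetic.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(300, 8)).astype(float)   # 8 binary options
y = 3 * X[:, 0] + 2 * X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 300)

model = make_pipeline(
    PolynomialFeatures(degree=3, interaction_only=True, include_bias=False),
    Lasso(alpha=0.01),
)
model.fit(X, y)
nonzero = np.count_nonzero(model.named_steps["lasso"].coef_)
print("model terms kept:", nonzero)   # small model, low error
```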

44 citations


Journal ArticleDOI
TL;DR: An extensive assessment of a prediction model used to evaluate Pumps-as-Turbines’ characteristic curves is presented, with specific attention to off-design operating conditions; the proposed model is able to predict the performance of the studied PaTs with errors within ±7% with respect to the Best Efficiency Point (BEP) in turbine mode.

41 citations


Journal ArticleDOI
TL;DR: This paper focuses on roadheader performance prediction using six different machine learning algorithms and combinations of various machine learning algorithms via ensemble techniques; the best success rate obtained is 90.2%, which is relatively better than contemporary research.
Abstract: Mechanical excavators are widely used in mining, tunneling and civil engineering projects because they can bring productivity to a project quickly, accurately and safely. There are several types of mechanical excavators, such as roadheaders, tunnel boring machines and impact hammers. Among these, roadheaders have advantages such as selective mining, mobility, less over-excavation, minimal ground disturbance, elimination of blast vibration, reduced ventilation requirements and lower initial investment cost. A critical issue in successful roadheader application is the ability to evaluate and predict the machine performance, termed the instantaneous (net) cutting rate. Although several methods for predicting roadheader performance exist in the literature, only a few have been developed via artificial neural network techniques. In this study, for this purpose, 333 data sets including uniaxial compressive strength and power on the cutting boom, 103 data sets including RQD, and 125 data sets including machine weight were accumulated from the literature. This paper focuses on roadheader performance prediction using six different machine learning algorithms and combinations of various machine learning algorithms via ensemble techniques. The algorithms are ZeroR, random forest (RF), Gaussian process, linear regression, logistic regression and multi-layer perceptron (MLP). As a result, MLP and RF give better results than the other algorithms; the best solution was achieved by applying the bagging technique to RF together with principal component analysis (PCA). The best success rate obtained in this study is 90.2%, which is relatively better than contemporary research.
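The best-performing combination reported above can be sketched as a scikit-learn pipeline (an illustration with placeholder data and features, not the study's exact setup):

```python
# Hedged sketch: PCA followed by bagging over random forests, the
# combination the abstract reports as best. Data and shapes are invented.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(333, 3))        # e.g., UCS, RQD, cutting-boom power (assumed)
y = 50 / (1 + X[:, 0]) + 5 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 1, 333)

model = make_pipeline(
    PCA(n_components=2),
    BaggingRegressor(estimator=RandomForestRegressor(n_estimators=50),
                     n_estimators=10, random_state=0),
)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```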

41 citations


Journal ArticleDOI
TL;DR: This paper presents PPT-GPU, a scalable and accurate simulation framework that enables GPU code developers and architects to predict the performance of applications in a fast and accurate manner on different GPU architectures.
Abstract: Performance modeling is a challenging problem due to the complexities of hardware architectures. In this paper, we present PPT-GPU, a scalable and accurate simulation framework that enables GPU code developers and architects to predict the performance of applications in a fast and accurate manner on different GPU architectures. PPT-GPU is part of the open-source Performance Prediction Toolkit (PPT) developed at Los Alamos National Laboratory. We extend the old GPU model in PPT, which predicts the runtimes of computational physics codes, to offer better prediction accuracy; to this end, we add models for the different memory hierarchies found in GPUs and latencies for different instructions. To further show the utility of PPT-GPU, we compare our model against real GPU device(s) and the widely used cycle-accurate simulator, GPGPU-Sim, using different workloads from the RODINIA and Parboil benchmarks. The results indicate that the predicted performance of PPT-GPU is within 10 percent of the real device(s). In addition, PPT-GPU is highly scalable: it is up to 450x faster than GPGPU-Sim, with more accurate results.

31 citations


Journal ArticleDOI
TL;DR: The presented methodology combines first-order mathematical models with CFD simulations, which enables extrapolation to other operating conditions in beta-type Stirling engines without requiring experimental initial and boundary conditions, obtaining an error of around −2.6%.

28 citations


Journal ArticleDOI
TL;DR: A 3D finite element model developed in the commercial finite element software ADINA provides a powerful tool for the structural design and performance prediction of the ultrasonic motor.

26 citations


Journal ArticleDOI
TL;DR: The results of a case study prove that the proposed framework can effectively integrate the system off-design performance when designing a system, and downsizing the equipment to match the probability of occurrence of the possible off- design operating conditions can lead to a medium-sized system that is much more favorable in terms of economic performance over its whole lifetime.

Journal ArticleDOI
TL;DR: In this paper, a framework for improving the accuracy of the coupled building energy simulation and computational fluid dynamics (CFD) models is proposed, consisting of an approximation technique and a stochastic optimization approach.

Journal ArticleDOI
19 Mar 2019-Water
TL;DR: In this article, a slip factor correlation was introduced based on CFD simulations; including this parameter in a 1-D performance prediction model reduces the performance prediction errors, with respect to experiments on a pump with a similar specific speed, by 5.5% at the design point compared to a no-slip model, and by 8% at part load compared with the Busemann and Stodola formulas.
Abstract: In recent years, pumps operated as turbines (PaTs) have been gaining the interest of industry and academia. For instance, PaTs can be effectively used in micro hydropower plants (MHP) and water distribution systems (WDS). Therefore, further efforts are necessary to investigate their fluid dynamic behavior. Compared to conventional turbines, a lower number of blades is employed in PaTs, lowering their capability to correctly guide the flow and hence reducing the Euler work; thus, the slip phenomenon cannot be neglected at the outlet section of the runner. In the first part of the paper, the slip phenomenon is numerically investigated on a simplified geometry, evidencing how the deficiency in guiding the flow depends on the number of blades. Then, a commercial double suction centrifugal pump, characterized by the same specific speed, is considered, evaluating the dependency of the slip on the flow rate. In the last part, a slip factor correlation is introduced based on those CFD simulations. It is shown that including this parameter in a 1-D performance prediction model reduces the performance prediction errors, with respect to experiments on a pump with a similar specific speed, by 5.5% at the design point compared to a no-slip model, and by 8% at part load compared with the Busemann and Stodola formulas.
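For reference, the Stodola formula mentioned above is the classical textbook estimate sigma = 1 − pi·sin(beta2)/Z, with beta2 the blade exit angle and Z the blade count; the small helper below evaluates it (this is the standard formula, not the paper's CFD-based correlation).

```python
# Classical Stodola slip-factor estimate: fewer blades -> more slip,
# consistent with the abstract's observation about PaT runners.
import math

def stodola_slip(beta2_deg: float, z_blades: int) -> float:
    return 1.0 - math.pi * math.sin(math.radians(beta2_deg)) / z_blades

for z in (5, 7, 9):
    print(z, round(stodola_slip(25.0, z), 3))
```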

Proceedings ArticleDOI
01 Jul 2019
TL;DR: This work investigates the cost-benefits of using supervised ML models for predicting the performance of applications on Spark, one of today's most widely used frameworks for big data analysis, and compares their approach with Ernest.
Abstract: Big data applications and analytics are employed in many sectors for a variety of goals: improving customer satisfaction, predicting market behavior or improving processes in public health. These applications consist of complex software stacks that are often run on cloud systems. Predicting execution times is important for estimating the cost of cloud services and for effectively managing the underlying resources at runtime. Machine Learning (ML), which provides black-box solutions for modeling the relationship between application performance and system configuration without requiring in-detail knowledge of the system, has become a popular way of predicting the performance of big data applications. We investigate the cost-benefits of using supervised ML models for predicting the performance of applications on Spark, one of today's most widely used frameworks for big data analysis. We compare our approach with Ernest (an ML-based technique proposed in the literature by the Spark inventors) on a range of scenarios, application workloads, and cloud system configurations. Our experiments show that Ernest can accurately estimate the performance of very regular applications, but it fails when applications exhibit more irregular patterns and/or when extrapolating to bigger data set sizes. Results show that our models match or exceed Ernest's performance, sometimes enabling us to reduce the prediction error from 126-187% to only 5-19%.
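Ernest's model, as described in its original paper, regresses runtime on a small fixed feature set using non-negative least squares; the sketch below illustrates the idea on synthetic runs (the feature set follows the published Ernest description; everything else is invented for illustration).

```python
# Ernest-style performance model: runtime ~ theta0 + theta1*(scale/machines)
# + theta2*log(machines) + theta3*machines, fit with non-negative least
# squares, then used to extrapolate to new configurations.
import numpy as np
from scipy.optimize import nnls

def features(scale, machines):
    return np.array([1.0, scale / machines, np.log(machines), machines])

# Synthetic training runs: (input scale, #machines, runtime in seconds).
runs = [(1, 2, 60.0), (1, 4, 35.0), (2, 4, 62.0), (2, 8, 40.0), (4, 8, 70.0)]
A = np.array([features(s, m) for s, m, _ in runs])
b = np.array([t for *_, t in runs])

theta, _ = nnls(A, b)
print("predicted runtime at (scale=4, machines=16):", features(4, 16) @ theta)
```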

Journal Article
TL;DR: This paper investigates an approach, based on transfer learning, for re-training neural-network models, and finds that this method significantly reduces the number of new measurements required to compute a new model after a change.

Journal ArticleDOI
07 May 2019-Energies
TL;DR: In this article, an off-grid-type small wind turbine for street lighting was designed and analyzed using a computational fluid dynamics model, and its performance was predicted using a CFD model.
Abstract: In this study, an off-grid-type small wind turbine for street lighting was designed and analyzed. Its performance was predicted using a computational fluid dynamics model. The proposed wind turbine has two blades with a radius of 0.29 m and a height of 1.30 m. Ansys Fluent, a commercial computational fluid dynamics solver, was used to predict the performance, and the k-omega SST model was used as the turbulence model. The simulation revealed a maximum power coefficient, i.e., an aerodynamic rotor efficiency, of 0.17 at a tip-speed ratio of 0.54. A wind turbine was installed at a measurement site to validate the simulation, and a performance test was used to measure the power production. To compare the CFD simulation results with the measured electrical power, the efficiencies of the generator and the controller were measured using a motor-generator testbed. Also, the control strategy of the controller was obtained from the field test and applied to the simulation results. Comparing the numerical simulation with the experiment, the maximum power-production error at the same wind speed was found to be 4.32%.
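The reported numbers can be sanity-checked with the standard definitions of the power coefficient and tip-speed ratio (air density and wind speed below are assumptions; only the rotor dimensions, Cp, and tip-speed ratio come from the abstract).

```python
# Power coefficient Cp = P / (0.5 * rho * A * V^3) and tip-speed ratio
# lambda = omega * R / V for a vertical-axis rotor with swept area A = 2*R*H.
rho = 1.225            # air density, kg/m^3 (assumed)
R, H = 0.29, 1.30      # rotor radius and height from the abstract, m
A = 2 * R * H          # swept area, m^2

V = 10.0               # wind speed, m/s (assumed)
Cp = 0.17              # maximum power coefficient from the abstract
P = Cp * 0.5 * rho * A * V**3
print(f"power at {V} m/s: {P:.1f} W")

lam = 0.54             # tip-speed ratio at maximum Cp
omega = lam * V / R    # rotor speed, rad/s
print(f"rotor speed: {omega:.1f} rad/s")
```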

Proceedings ArticleDOI
01 Oct 2019
TL;DR: The approach is based on an analytical resolution of Maxwell’s equations inside a set of subdomains, which allows detailed modelling and accurate steady-state performance prediction of AFIMs in a very short time, making possible the evaluation of numerous designs during optimization.
Abstract: This paper presents design optimization of axial-flux induction motors (AFIMs) for electric vehicles. The approach is based on an analytical resolution of Maxwell’s equations inside a set of subdomains. The analytical method is verified against 2D finite-element analysis. It allows detailed modelling and accurate steady-state performance prediction of AFIMs in a very short time (under 1 s), making it possible to evaluate numerous designs during optimization. The optimization framework simulates and predicts the performance of various AFIM designs based on driving cycles and vehicle limitations. A driving-cycle-based optimization for electric vehicles is studied by considering a given driving cycle and designing minimum-mass AFIMs capable of fulfilling it. It is observed that driving-cycle-based motor designs benefit from mass reduction and ensure the feasibility of all operating conditions.

Journal ArticleDOI
TL;DR: A method for prediction of ship performance in actual seas based on a physical model and the prediction of the engine operating point in winds and waves is described here.
Abstract: In recent years, it has become important to evaluate whether ship propulsive performance achieves the design performance not only in calm seas but also in a seaway. Various on-board monitoring systems have been developed and fitted to ships to check their performance in a seaway. The evaluation can also be fed back into new ship designs. A method for predicting ship performance in actual seas based on a physical model is described here. Prediction of steady forces in waves, wind forces, drift forces, and steering forces is described from the viewpoint of accurate practical prediction. The prediction of the engine operating point in winds and waves is also treated. Examples of these prediction methods are illustrated. Performance analysis by an on-board monitoring system using the performance prediction method discussed here is described in Part 2 of this paper.

Journal ArticleDOI
TL;DR: In this article, the authors describe modeling and performance prediction of a kW-size reciprocating piston expander adopted in micro-Organic Rankine Cycle (ORC) energy systems.

Journal ArticleDOI
TL;DR: The proposed prediction surrogate functions and the variable speed hill chart model are useful engineering tools for improving the design of pump as turbine hydropower plants and for optimising the pump running as turbine control settings to maximise the produced energy.

Proceedings ArticleDOI
05 Jul 2019
TL;DR: The Naive Bayes approach is applied in the proposed work for student performance prediction analysis, and both the proposed and existing algorithms are implemented in Python.
Abstract: Prediction analysis is the approach of predicting future possibilities from previous data. The student performance prediction technique has three phases: pre-processing, feature extraction and classification. In previous work, an SVM classifier was applied for student performance prediction. In the proposed work, the Naive Bayes approach is applied for student performance prediction analysis. Both the proposed and existing algorithms are implemented in Python. The results of the proposed model are compared with the existing model in terms of accuracy and execution time.
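A minimal sketch of the comparison the abstract describes, using synthetic data since the paper's student dataset and features are not specified here.

```python
# Hedged sketch: Naive Bayes (proposed) versus SVM (existing), compared on
# accuracy and execution time as in the abstract.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("SVM (existing)", SVC()),
                  ("Naive Bayes (proposed)", GaussianNB())]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: accuracy={acc:.3f}, time={time.perf_counter() - t0:.3f}s")
```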

Journal ArticleDOI
TL;DR: This paper proposes a performance prediction framework, called d-Simplexed, to build performance models with varied configurable parameters on Spark, and takes inspiration from the field of Computational Geometry to construct a d-dimensional mesh using Delaunay Triangulation over a selected set of features.
Abstract: Big Data processing systems (e.g., Spark) have a number of resource configuration parameters, such as memory size, CPU allocation, and the number of running nodes. Regular users and even expert administrators struggle to understand the relationship between different parameter configurations and the overall performance of the system. In this paper, we address this challenge by proposing a performance prediction framework, called d-Simplexed, to build performance models with varied configurable parameters on Spark. We take inspiration from the field of Computational Geometry to construct a d-dimensional mesh using Delaunay Triangulation over a selected set of features. From this mesh, we predict execution time for various feature configurations. To minimize the time and resources spent building a bootstrap model with a large number of configuration values, we propose an adaptive sampling technique that allows us to collect as few training points as required. Our evaluation on a cluster of computers using the WordCount, PageRank, Kmeans, and Join workloads from the HiBench benchmarking suite shows that we can achieve less than a 5% error rate in estimation accuracy by sampling less than 1% of the data.
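The geometric core of d-Simplexed can be sketched with SciPy, whose LinearNDInterpolator builds exactly such a Delaunay triangulation and interpolates linearly within each simplex (an illustration with toy data, not the authors' code).

```python
# Delaunay-based performance interpolation: build a mesh over sampled
# configuration points and predict execution time inside each simplex.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
# Sampled configurations: (memory in GB, number of nodes), with runtimes (toy).
pts = rng.uniform([1, 2], [16, 32], size=(50, 2))
runtime = 200 / pts[:, 1] + 30 / pts[:, 0] + rng.normal(0, 0.5, 50)

predict = LinearNDInterpolator(pts, runtime)   # Delaunay triangulation inside
print(predict(8.0, 16))   # predicted runtime for an unseen configuration
```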

Journal ArticleDOI
TL;DR: An accelerated testing version of the trend-renewal process model is proposed to address the long-term health performance of rechargeable batteries and has better performance in the EOP prediction.
Abstract: Rechargeable batteries are critical components for the performance of portable electronics and electric vehicles. The long-term health performance of rechargeable batteries is characterized by state of health, which can be quantified by end of performance (EOP) and remaining useful performance. Focusing on EOP prediction, this article first proposes an accelerated-testing version of the trend-renewal process model to address this prediction problem. The proposed model is also applied to a real case study. Finally, a NASA dataset is used to assess the prediction performance of the proposed model. Compared with existing prediction methods and time series models, the proposed procedure performs better in EOP prediction.

Journal ArticleDOI
TL;DR: In this article, a modified capacitance-resistance model for gas flooding systems based on gas density and average reservoir pressure is developed, and a detailed procedure is described in a synthetic reservoir model using a genetic algorithm.

Journal ArticleDOI
23 Jul 2019-Energies
TL;DR: In this article, a three-dimensional multi-physics model coupled with a multi-objective genetic algorithm is proposed to find the optimal element-number allocation, with the coefficient of performance and cooling capacity simultaneously as multi-objective functions.
Abstract: Due to their advantages of self-powered capability and compact size, combined thermoelectric devices, in which a thermoelectric cooler module is driven by a thermoelectric generator module, have become promising candidates for cooling applications in extreme conditions or environments where room is confined and the power supply is limited. When the device is designed in a two-stage configuration for a larger temperature difference, the design degree of freedom is larger than that of a single-stage counterpart. The number of elements allocated to each stage has a significant influence on device performance; however, this issue has not been well solved in previous studies. This work proposes a three-dimensional multi-physics model coupled with a multi-objective genetic algorithm to find the optimal element-number allocation, with the coefficient of performance and cooling capacity simultaneously as objective functions. This method increases the accuracy of performance prediction compared with previously reported examples studied with the thermal resistance model. The results show that the performance of the optimized device is remarkably enhanced: the cooling capacity is increased by 23.3% and the coefficient of performance by 122.0% compared with the 1# Initial Solution. The mechanism behind this enhanced performance is analyzed. The results in this paper should be beneficial for engineers and scientists seeking to design a combined thermoelectric device with optimal performance under the constraint of total element number.

Journal ArticleDOI
01 Dec 2019-Entropy
TL;DR: A hybrid convolutional recurrent neural network model based on a self-attention mechanism is presented, which can automatically learn discriminative features and capture global contextual information from personnel performance data, and is applied to a real case of personnel performance prediction.
Abstract: Personnel performance is important for the high-technology industry to maintain its core competitive advantages. Therefore, predicting personnel performance is an important research area in human resource management (HRM). In this paper, to improve prediction performance, we propose a novel framework for personnel performance prediction to help decision-makers forecast future personnel performance and recruit the most suitable talents. Firstly, a hybrid convolutional recurrent neural network (CRNN) model based on a self-attention mechanism is presented, which can automatically learn discriminative features and capture global contextual information from personnel performance data. Moreover, we treat the prediction problem as a classification task and use a k-nearest neighbor (KNN) classifier to predict personnel performance. The proposed framework is applied to a real case of personnel performance prediction. The experimental results demonstrate that the presented approach achieves significant performance improvement compared to existing methods.

Journal ArticleDOI
TL;DR: A Software Component Allocation Framework (SCAF) is proposed with the goal of acquiring a (sub-)optimal software configuration with respect to multiple NFPs, thus providing performance prediction of a software configuration in its early design phase.
Abstract: Context: Applying component-based software engineering methods to heterogeneous computing (HC) enables different software configurations to realize the same function with different non-functional properties (NFPs). Finding the best software configuration with respect to multiple NFPs is a non-trivial task. Objective: We propose a Software Component Allocation Framework (SCAF) with the goal of acquiring a (sub-)optimal software configuration with respect to multiple NFPs, thus providing performance prediction of a software configuration in its early design phase. We focus on software configuration optimization for average energy consumption and average execution time. Method: We validated SCAF through its instantiation on a real-world demonstrator and a simulation. Firstly, we verified the correctness of our model by comparing the performance predictions of six software configurations to the actual performance, obtained through extensive measurements with a confidence interval of 95%. Secondly, to demonstrate how SCAF scales up, we performed software configuration optimization on 55 generated use cases (with solution spaces ranging from 10^30 to 30^70) and benchmarked the results against the best-performing random configurations. Results: The performance of a configuration as predicted by our framework matched the configuration implemented and measured on a real-world platform. Furthermore, by applying a genetic algorithm and simulated annealing to the weight function given in SCAF, we obtain sub-optimal software configurations differing in performance from the optimal configuration by at most 7% and 13%, respectively. Conclusion: SCAF is capable of correctly describing an HC platform and reliably predicting the performance of a software configuration in the early design phase. Automated in the form of an Eclipse plugin, SCAF allows software architects to model architectural constraints and preferences, acting as a multi-criterion software architecture decision support system. We also point out several interesting research directions for further investigating and improving our approach.
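The optimization step can be sketched as simulated annealing over component-to-unit allocations under a weighted NFP objective (all cost tables and weights below are invented for illustration; SCAF's actual weight function and platform model are richer).

```python
# Hedged sketch: simulated annealing over allocations of software components
# to processing units, scoring each configuration with a weighted sum of
# energy and execution time.
import math
import random

random.seed(0)
N_COMP, N_UNITS = 6, 3
energy = [[random.uniform(1, 5) for _ in range(N_UNITS)] for _ in range(N_COMP)]
time_ = [[random.uniform(1, 5) for _ in range(N_UNITS)] for _ in range(N_COMP)]
w_e, w_t = 0.5, 0.5   # weights over the two NFPs (assumed)

def cost(alloc):
    return sum(w_e * energy[c][u] + w_t * time_[c][u]
               for c, u in enumerate(alloc))

alloc = [random.randrange(N_UNITS) for _ in range(N_COMP)]
best, temp = list(alloc), 10.0
for step in range(5000):
    cand = list(alloc)
    cand[random.randrange(N_COMP)] = random.randrange(N_UNITS)
    d = cost(cand) - cost(alloc)
    if d < 0 or random.random() < math.exp(-d / temp):
        alloc = cand
        if cost(alloc) < cost(best):
            best = list(alloc)
    temp *= 0.999   # geometric cooling schedule
print("best allocation:", best, "cost:", round(cost(best), 2))
```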

Journal ArticleDOI
15 May 2019-Energy
TL;DR: In this article, an artificial neural network (ANN) based prediction model was established after evaluating different learning rates, hidden-layer neuron numbers and training functions; combining the genetic algorithm with the ANN model, a parametric optimization and performance prediction for maximum power output was conducted.

Proceedings ArticleDOI
01 Jun 2019
TL;DR: A novel prediction approach is proposed that uses the performance spectrum for feature selection and extraction to pose machine learning problems used for performance prediction in non-isolated cases; the results show that the use of the performance spectrum enables much better predictions than baseline approaches.
Abstract: Predictive performance analysis is crucial for supporting operational processes. Prediction is challenging when cases are not isolated but influence each other by competing for resources (spaces, machines, operators). The so-called performance spectrum maps a variety of performance-related measures within and across cases over time. We propose a novel prediction approach that uses the performance spectrum for feature selection and extraction to pose machine learning problems used for performance prediction in non-isolated cases. Although the approach is general, we focus on material handling systems as a primary example. We report on a feasibility study conducted for the material handling systems of a major European airport. The results show that the use of the performance spectrum enables much better predictions than baseline approaches.

Journal ArticleDOI
TL;DR: This paper focuses on constructing the performance maps of pressure ratio and isentropic efficiency using a limited number of sample data while maintaining accuracy, and shows that, when predicting inside the data boundary, the loss-analysis-based model and the kriging model produce higher-accuracy predictions even with a small data set, while the neural network model provides better results only with a more extensive data set containing more speed lines.
Abstract: Centrifugal compressors are widely used in various engineering domains, and predicting the performance of a centrifugal compressor is an essential task for its conceptual design, optimization, and system simulation. For years, researchers have sought to accomplish this task through various methods, including interpolation, curve fitting, neural networks, and other statistics-based algorithms. However, these methods usually need a large amount of data, and obtaining data may cost considerable computing or experimental resources. This paper focuses on constructing the performance maps of pressure ratio and isentropic efficiency using a limited number of sample data while maintaining accuracy. Firstly, sample data are generated from simulation using Vista CCD. Then, corrected flow rate and corrected rotational speed are used as independent variables, and regression expressions with physical meaning for pressure ratio and isentropic efficiency are derived and simplified through thermodynamic and loss analysis of the centrifugal compressor, resulting in two loss-analysis-based models. Meanwhile, kriging models based on a second-order polynomial and neural network models are built. Results show that, when predicting inside the data boundary, the loss-analysis-based model and the kriging model produce higher-accuracy predictions even with a small data set, and the predictions are stable, while the neural network model provides better results only with a more extensive data set containing more speed lines. For prediction outside the data boundary, the loss-analysis-based model can provide relatively accurate results. Besides, a loss-analysis-based model takes less time to train and use than the other models.
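The kriging variant mentioned above can be sketched with a Gaussian process over the two corrected variables (toy map data; the paper's loss-analysis-based regression expressions are derived analytically and are not reproduced here).

```python
# Hedged sketch: Gaussian-process (kriging-style) regression of pressure
# ratio on corrected flow rate and corrected speed, from a small sample.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
# (corrected flow rate, corrected speed) samples and pressure ratios (toy).
X = rng.uniform([0.2, 0.5], [1.0, 1.0], size=(30, 2))
pr = 1.0 + 2.5 * X[:, 1]**2 - 0.8 * (X[:, 0] - X[:, 1])**2

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True).fit(X, pr)
mean, std = gp.predict([[0.6, 0.8]], return_std=True)
print(f"pressure ratio: {mean[0]:.3f} +/- {std[0]:.3f}")
```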