
Showing papers on "Performance prediction published in 2013"


Proceedings ArticleDOI
11 Nov 2013
TL;DR: A variability-aware approach to performance prediction via statistical learning is proposed that works progressively with random samples, without additional effort to detect feature interactions.
Abstract: Configurable software systems allow stakeholders to derive program variants by selecting features. Understanding the correlation between feature selections and performance is important for stakeholders to be able to derive a program variant that meets their requirements. A major challenge in practice is to accurately predict performance based on a small sample of measured variants, especially when features interact. We propose a variability-aware approach to performance prediction via statistical learning. The approach works progressively with random samples, without additional effort to detect feature interactions. Empirical results on six real-world case studies demonstrate an average of 94% prediction accuracy based on small random samples. Furthermore, we investigate why the approach works by a comparative analysis of performance distributions. Finally, we compare our approach to an existing technique and guide users to choose one or the other in practice.
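The abstract does not name the learner, but regression trees (CART) are a common choice for this kind of sampling-based configuration performance prediction. A minimal sketch of the idea, training on a small random sample of measured variants; the feature matrix, interaction, and timings below are synthetic placeholders, not from the paper:

```python
# Sketch: predict performance of unmeasured configurations from a small
# random sample, using a regression tree (one common statistical learner).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n_features = 10
# Each row is one variant: a 0/1 vector of selected features.
sample_configs = rng.integers(0, 2, size=(30, n_features))

# Hypothetical ground truth with an interaction between features 2 and 5.
def measure(c):
    return 100 + 20 * c[2] + 15 * c[5] + 30 * c[2] * c[5] + rng.normal(0, 1)

times = np.array([measure(c) for c in sample_configs])

model = DecisionTreeRegressor(min_samples_leaf=2).fit(sample_configs, times)

new_variant = rng.integers(0, 2, size=(1, n_features))
print("predicted time:", model.predict(new_variant)[0])
```

The tree can pick up the feature interaction from the samples alone, which is why no explicit interaction-detection step is needed.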

180 citations


Journal ArticleDOI
TL;DR: In this paper, an analytical model for double-sided permanent-magnet radial-flux eddy-current couplers is presented that can easily handle complex geometries.
Abstract: Analytical models are widely utilised in the study and performance prediction of electric machines by providing fast, yet accurate solutions. By combining conventional magnetic equivalent circuit techniques with Faraday's and Ampere's laws, an analytical model for double-sided permanent-magnet radial-flux eddy-current couplers is presented that can easily handle complex geometries. The proposed approach is also capable of taking three-dimensional (3D) effects into account. The characteristics and design considerations are also studied for a surface-mounted permanent-magnet structure. Moreover, 2D and 3D finite-element methods are employed to verify the results, along with a transient study of the device under two different scenarios. Finally, a sensitivity analysis is carried out to investigate the influence of the design variables on the characteristics of the coupler, which provides valuable information for current and future studies of such devices.

62 citations


Journal ArticleDOI
Lixun Zhang, Ying-bin Liang, Xing Liu, Q F Jiao, Jianhua Guo
TL;DR: Numerical simulation has become an attractive method to carry out research on structure design and aerodynamic performance prediction of straight-bladed vertical-axis wind turbines, and a predefined numerical simulation method is used to evaluate the performance of the wind turbine.
Abstract: Numerical simulation has become an attractive method to carry out research on structure design and aerodynamic performance prediction of straight-bladed vertical axis wind turbines, while the pred...

45 citations


Book ChapterDOI
27 Aug 2013
TL;DR: The survey considers several aspects of the execution of sets of tasks on multi-core platforms that have to do with the interference of the tasks on shared resources, which leads to a combinatorial explosion of the analysis complexity.
Abstract: Multi-core processors are increasingly considered as execution platforms for embedded systems because of their good performance/energy ratio. However, the interference on shared resources poses several problems. It may severely reduce the performance of tasks executed on the cores, and it increases the complexity of timing analysis and/or decreases the precision of its results. In this paper, we survey recent work on the impact of shared buses, caches, and other resources on performance and performance prediction.

44 citations


Journal ArticleDOI
TL;DR: A new model based on the geological and geotechnical site conditions is developed to predict roadheader performance using a soft computing technique that applies the concept of fuzzy logic to take into account the uncertainty and complexity derived from the interaction between rock properties and roadheader parameters.

42 citations


Proceedings ArticleDOI
17 Nov 2013
TL;DR: This paper proposes new hybrid metrics that provide high correlation with application performance and may be useful for accurate performance prediction; for three communication patterns and a production application, a very strong correlation is demonstrated between the proposed metrics and execution time.
Abstract: Task mapping on torus networks has traditionally focused on either reducing the maximum dilation or average number of hops per byte for messages in an application. These metrics make simplified assumptions about the cause of network congestion, and do not provide accurate correlation with execution time. Hence, these metrics cannot be used to reasonably predict or compare application performance for different mappings. In this paper, we attempt to model the performance of an application using communication data, such as the communication graph and network hardware counters. We use supervised learning algorithms, such as randomized decision trees, to correlate performance with prior and new metrics. We propose new hybrid metrics that provide high correlation with application performance, and may be useful for accurate performance prediction. For three different communication patterns and a production application, we demonstrate a very strong correlation between the proposed metrics and the execution time of these codes.
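A sketch of the two ingredients the abstract describes: a hops-per-byte style metric on a torus, and a randomized-tree ensemble correlating metric vectors with runtimes. The torus dimensions, messages, and training data below are synthetic, and the paper's actual hybrid metrics also fold in network hardware counters:

```python
# Sketch: hops-per-byte on a 3D torus + tree-ensemble correlation.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

DIMS = (8, 8, 8)  # torus dimensions (illustrative)

def torus_hops(a, b):
    # Shortest hop count between coordinates a and b with wraparound links.
    return sum(min(abs(x - y), d - abs(x - y)) for x, y, d in zip(a, b, DIMS))

def hops_per_byte(messages):
    # messages: iterable of (src_coord, dst_coord, nbytes)
    total_hop_bytes = sum(torus_hops(s, d) * n for s, d, n in messages)
    return total_hop_bytes / sum(n for _, _, n in messages)

msgs = [((0, 0, 0), (4, 0, 0), 1024), ((1, 2, 3), (1, 2, 5), 4096)]
print("avg hops/byte:", hops_per_byte(msgs))

# Correlate per-mapping metric vectors with observed runtimes (synthetic).
rng = np.random.default_rng(1)
X = rng.random((50, 3))   # e.g. [hops/byte, max dilation, counter-based stat]
y = 5 + 3 * X[:, 0] + rng.normal(0, 0.1, 50)
model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X, y)
print("predicted runtime:", model.predict(X[:1])[0])
```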

41 citations


01 Jul 2013
TL;DR: A hybrid method of execution-time prediction that is both profile-based and history-based is defined, achieved by combining a program structure analysis with an instance-based learning method.
Abstract: This article describes work in the domain of application execution time prediction, which is always necessary for schedulers. We define a hybrid method of time prediction that is both profile-based and history-based. This prediction is achieved by combining a program structure analysis with an instance-based learning method. We demonstrate that taking account of an application's profile improves predictions compared with classical history-based prediction methods.
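A minimal sketch of the instance-based half of the method: describe each past run by profile-derived features and predict a new run's time from its nearest historic neighbors. The feature names and timings below are placeholders, not the paper's profile analysis:

```python
# Sketch: nearest-neighbor (instance-based) runtime prediction from
# profile-derived features. All data is illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Historic runs: [input_size_mb, iteration_count] -> observed runtime (s)
X_hist = np.array([[10, 100], [20, 100], [10, 200], [40, 400]])
t_hist = np.array([12.0, 21.5, 23.8, 95.0])

predictor = KNeighborsRegressor(n_neighbors=2, weights="distance")
predictor.fit(X_hist, t_hist)

print("predicted runtime:", predictor.predict([[15, 150]])[0])
```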

38 citations


Journal ArticleDOI
TL;DR: In this article, the capacitance-resistive model (CRM) approach is used to estimate and optimize waterflooding performance in a single-layer reservoir, and a genetic algorithm is used to solve the developed CRM.

35 citations


Journal ArticleDOI
TL;DR: In this paper, a physics-based comprehensive model for predicting the performance of a miniature horizontal axis wind turbine (MHAWT) was established and an approximation of the power coefficient of the turbine rotor was made.
Abstract: Miniature wind turbines (MWTs) have recently received great attention for powering low-power devices. In this paper, a physics-based comprehensive model for predicting the performance of a miniature horizontal axis wind turbine (MHAWT) was established. The turbine rotor performance was investigated and an approximation of the power coefficient of the turbine rotor was made. By incorporating this approximation into an equivalent circuit model, proposed in accordance with the operating principles of the MHAWT, the overall system performance versus resistive load and ambient wind speed was predicted. To demonstrate the predictive modeling capability, an MHAWT system comprising commercially available off-the-shelf components was designed and its performance was experimentally tested. The results matched the model predictions well, which implies that the proposed model holds promise for estimating and optimizing the performance of MWTs.
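Models of this kind build on the standard rotor power relation P = ½ρAv³Cp. A sketch with a hypothetical power-coefficient curve; the rotor radius and Cp polynomial are invented stand-ins, since the paper derives its own approximation:

```python
# Sketch of P = 0.5 * rho * A * v^3 * Cp with a placeholder Cp curve.
import math

RHO = 1.225            # air density, kg/m^3
R = 0.04               # rotor radius of a miniature turbine, m (assumed)
A = math.pi * R ** 2   # swept area, m^2

def cp(tsr):
    # Illustrative placeholder: peaks near a tip-speed ratio of ~3.
    return max(0.0, -0.05 * (tsr - 3.0) ** 2 + 0.3)

def rotor_power(wind_speed, tsr):
    return 0.5 * RHO * A * wind_speed ** 3 * cp(tsr)

for v in (3.0, 5.0, 7.0):
    print(f"v = {v} m/s -> P = {rotor_power(v, 3.0) * 1000:.1f} mW")
```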

34 citations


Patent
Premanand Sakarda
19 Sep 2013
TL;DR: A policy manager receives operating system scheduling information, performance prediction information for at least one future quantum, and current processor utilization information, and determines a performance prediction for a future quantum and whether to cause a switch between asymmetric cores of a multicore processor based at least in part on this information.
Abstract: In one embodiment, a policy manager may receive operating system scheduling information, performance prediction information for at least one future quantum, and current processor utilization information, and determine a performance prediction for a future quantum and whether to cause a switch between asymmetric cores of a multicore processor based at least in part on this received information. Other embodiments are described and claimed.

31 citations


Proceedings ArticleDOI
TL;DR: The key idea is to create a variant simulator that can simulate the behavior of all program variants and can be used to measure the performance of individual methods, trace methods to features, and infer feature interactions based on the call graph.
Abstract: Most contemporary programs are customizable. They provide many features that give rise to millions of program variants. Determining which feature selection yields an optimal performance is challenging, because of the exponential number of variants. Predicting the performance of a variant based on previous measurements proved successful, but induces a trade-off between the measurement effort and prediction accuracy. We propose the alternative approach of family-based performance measurement, to reduce the number of measurements required for identifying feature interactions and for obtaining accurate predictions. The key idea is to create a variant simulator (by translating compile-time variability to run-time variability) that can simulate the behavior of all program variants. We use it to measure performance of individual methods, trace methods to features, and infer feature interactions based on the call graph. We evaluate our approach by means of five feature-oriented programs. On average, we achieve accuracy of 98%, with only a single measurement per customizable program. Observations show that our approach opens avenues of future research in different domains, such as feature-interaction detection and testing.
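Once per-method timings have been traced to features and interactions, the prediction step reduces to summing contributions for a given selection. A sketch of that step; the feature names, timings, and interaction term are hypothetical:

```python
# Sketch: a variant's predicted time is the sum of its selected features'
# measured contributions plus any triggered interaction terms.
feature_time = {"base": 50.0, "encryption": 12.0, "compression": 8.0}
interaction_time = {("encryption", "compression"): 5.0}

def predict(selected):
    t = sum(feature_time[f] for f in selected)
    for (a, b), extra in interaction_time.items():
        if a in selected and b in selected:
            t += extra
    return t

print(predict({"base", "encryption"}))                  # 62.0
print(predict({"base", "encryption", "compression"}))   # 75.0
```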

Proceedings ArticleDOI
01 Oct 2013
TL;DR: The results show that the proposed model can predict application behavior with 91% accuracy and can characterize unknown applications based on their performance similarities with an existing database of benchmarks to predict their likely performance bottlenecks.
Abstract: Understanding the performance bottlenecks of applications in high performance computing can lead to dramatic performance improvements. For example, a key problem in GPU programming is finding performance bottlenecks and resolving them to reach the best possible performance. These bottlenecks in GPU architectures span a variety of factors such as memory access latency, branch divergence, utilization, and the amount of available parallelism. In addition, simple profiling cannot reveal the relations between these bottlenecks. In this paper, we propose a statistical performance model that not only helps us find bottlenecks but also shows the relations between them, which is not possible with a profiler. The OpenCL programming standard can be used on a variety of platforms (e.g., CPUs and GPUs); therefore, a program written for one platform can be ported to other platforms with minimal effort. As a result, we selected the OpenCL programming standard to design our performance model for NVIDIA GPUs. We first measure the values of GPU performance counters for the selected benchmarks. Then, using the collected results and applying a regression model and principal component analysis, we develop a model that shows how different GPU parameters account for applications' performance bottlenecks. Our results show that the proposed model can predict application behavior with 91% accuracy. Moreover, the proposed model is able to characterize unknown applications based on their performance similarities with an existing database of benchmarks, to predict their likely performance bottlenecks.
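A sketch of the counters-to-PCA-to-regression pipeline the abstract describes, on synthetic data; the counter semantics in the comments are illustrative, not NVIDIA's counter names:

```python
# Sketch: reduce correlated GPU counter readings with PCA, then regress
# runtime on the principal components. All counter values are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
# Rows: kernels; columns: counters (e.g. DRAM reads, divergence, occupancy).
counters = rng.random((40, 6))
runtime = 1 + 2 * counters[:, 0] + 0.5 * counters[:, 3] + rng.normal(0, 0.05, 40)

model = make_pipeline(PCA(n_components=3), LinearRegression())
model.fit(counters, runtime)
print("R^2 on training data:", model.score(counters, runtime))

# PCA loadings show which counters move together, i.e. related bottlenecks.
print(model.named_steps["pca"].components_[0])
```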

Proceedings Article
27 May 2013
TL;DR: This work creates a pseudo-application that runs the same set of distributed components and executes the same sequence of system calls as those of the real application; applied to Apache and TPC-W, PseudoApp accurately predicts their performance with 2-8% error in throughput.
Abstract: To migrate an existing application to cloud, a user needs to estimate and compare the performance and resource consumption of the application running in different clouds, in order to select the best service provider and the right virtual machine size. However, it is prohibitively expensive to install a complex application in multiple new environments solely for the purpose of performance benchmarking. Performance modeling is more practical but the accuracy is limited by system factors that are hard to model. We propose a new technique called PseudoApp to address these challenges. Our solution creates a pseudo-application to mimic the resource consumption of a real application. A pseudo-application runs the same set of distributed components and executes the same sequence of system calls as those of the real application. By benchmarking a simple and easy-to-install PseudoApp in different cloud environments, a user can accurately obtain the performance and resource consumption of the real application. We apply PseudoApp to Apache and TPC-W and find that PseudoApp accurately predicts their performance with 2-8% error in throughput.

Journal ArticleDOI
TL;DR: A novel model-driven prediction method called Q-ImPrESS is applied on a large-scale process control system from ABB consisting of several million lines of code and the achieved performance prediction accuracy and reliability prediction sensitivity analyses are reported on.
Abstract: During software system evolution, software architects intuitively trade off the different architecture alternatives for their extra-functional properties, such as performance, maintainability, reliability, security, and usability. Researchers have proposed numerous model-driven prediction methods based on queuing networks or Petri nets, which claim to be more cost-effective and less error-prone than current practice. Practitioners are reluctant to apply these methods because of the unknown prediction accuracy and work effort. We have applied a novel model-driven prediction method called Q-ImPrESS on a large-scale process control system from ABB consisting of several million lines of code. This paper reports on the achieved performance prediction accuracy and reliability prediction sensitivity analyses as well as the effort in person hours for achieving these results.

Book ChapterDOI
25 Sep 2013
TL;DR: This paper proposes a new on-line prediction model with an incremental learning method based on the extreme learning machine (ELM), which randomly chooses hidden nodes and analytically determines the output weights of single-hidden-layer feedforward neural networks (SLFNs).
Abstract: Performance prediction of hard rock TBM is the key to successful tunnel excavations. A series of TBM performance prediction models have been developed since the 1970s. The empirical and semi-empirical models, such as the CSM and NTNU models, have their limitations, because they are unable to completely reflect the correlation between the model parameters and penetration rate (PR). Researchers have proposed data-driven models, such as neural network models, which generally suffer from overfitting. This paper proposes a new on-line prediction model with an incremental learning method based on the extreme learning machine (ELM). This algorithm randomly chooses hidden nodes and analytically determines the output weights of single-hidden-layer feedforward neural networks (SLFNs). Unlike neural network models, overfitting does not need to be a concern and iterative learning steps are not required in ELM. The database used to validate the model is collected from the Queens Water Tunnel #3, Stage 2, New York City, USA. Compared with other methods such as PLS, GP, and LSSVM, the ELM prediction model tends to provide precise predictions at an extremely fast learning speed.
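A minimal ELM sketch showing exactly the two steps the abstract highlights: random, untrained hidden nodes and output weights determined analytically by least squares. The training data is a synthetic stand-in for (rock parameters → penetration rate):

```python
# Minimal extreme learning machine: random hidden weights, output weights
# solved in closed form via the Moore-Penrose pseudo-inverse.
import numpy as np

rng = np.random.default_rng(3)
X = rng.random((100, 4))                       # input parameters (synthetic)
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.normal(size=100)

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))    # random, never trained
b = rng.normal(size=n_hidden)

def hidden(X):
    return np.tanh(X @ W + b)                  # SLFN hidden layer

# Output weights via least squares -- the analytic (non-iterative) step.
beta = np.linalg.pinv(hidden(X)) @ y

X_new = rng.random((5, 4))
print(hidden(X_new) @ beta)
```

Because only `beta` is fitted, and in closed form, training is extremely fast and there is no iterative procedure to overfit by running too long.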

Proceedings Article
01 Jan 2013
TL;DR: This paper investigates the relationship between performance prediction and knowledge estimation with a series of simulation studies and correlates the accuracy of estimating the moment of learning (mastery) with a host of error metrics calculated based on performance.
Abstract: Models of student knowledge have occupied a significant portion of the literature in the area of Educational Data Mining. In the context of Intelligent Tutoring Systems, these models are designed for the purpose of improving prediction of student knowledge and prediction of skill mastery. New models or model modifications need to be justified by marked improvement in evaluation results compared to the prior art. The standard evaluation has been to forecast student responses with an N-fold student-level cross-validation and compare the results of prediction to the prior-art model using a chosen error or accuracy metric. The hypothesis of this often-employed methodology is that improved performance prediction, given a chosen evaluation metric, translates to improved knowledge and mastery prediction. Since knowledge is latent, the estimation of knowledge cannot be validated directly. If knowledge were directly observable, would we find that models with better prediction of performance also estimate knowledge more accurately? Which evaluation metrics of performance would best correlate with improvements in knowledge estimation? In this paper we investigate the relationship between performance prediction and knowledge estimation with a series of simulation studies. The studies allow for observation of the ground-truth knowledge states of simulated students. With this information we correlate the accuracy of estimating the moment of learning (mastery) with a host of error metrics calculated based on performance.
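The abstract does not name the generative student model, but Bayesian Knowledge Tracing is the usual choice in this literature, so a sketch of such a simulation might look like the following; all parameter values are invented. The point is that the simulator makes the true moment of learning observable, so it can be compared against what a fitted model estimates:

```python
# Sketch: simulate students from a BKT-style model so the ground-truth
# moment of learning is known. Parameters are illustrative only.
import random

P_INIT, P_LEARN, P_GUESS, P_SLIP = 0.2, 0.15, 0.25, 0.1

def simulate_student(n_items, rng):
    known = rng.random() < P_INIT
    responses, moment = [], 0 if known else None
    for t in range(1, n_items + 1):
        p_correct = (1 - P_SLIP) if known else P_GUESS
        responses.append(int(rng.random() < p_correct))
        if not known and rng.random() < P_LEARN:
            known, moment = True, t
    # moment is 0 if known from the start, None if never learned
    return responses, moment

rng = random.Random(0)
resp, moment = simulate_student(20, rng)
print(resp, "true moment of learning:", moment)
```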

Journal ArticleDOI
TL;DR: A model for performance prediction is proposed and used to derive a simple yet effective feasibility parameter to be embedded in the design tool; this represents an improvement of more than 3 dB over the design tool using the nonlinear phase shift as the criterion.
Abstract: Coherent detection offers the ability to compensate for linear transmission impairments such as fiber chromatic dispersion and polarization-mode dispersion in the digital domain, thereby enabling dispersion-uncompensated optical transmission for high performance and high cost effectiveness. In dispersion-uncompensated transmission systems, the statistics of optical nonlinearity induced distortions have been proven to be essentially Gaussian-distributed, and new physical models have emerged showing profound differences with respect to legacy systems based on direct detection. From such differences stems the need to adapt the design tool to capture these new propagation properties. In that respect, we propose a model for performance prediction, which is used to derive a simple yet effective feasibility parameter to be embedded in the design tool. The feasibility parameter is experimentally validated with real time product transponders, and realistic system configurations: a precision of ±0.5 dB is achieved for 40 Gb/s, 100 Gb/s and 400 Gb/s coherent channels, which represents an improvement of more than 3 dB over the design tool using the nonlinear phase shift as the criterion.
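The Gaussian-distributed-distortion picture implies a characteristic launch-power trade-off: amplifier noise favors high power while nonlinear interference grows roughly with its cube. A sketch of that reasoning under this cubic-growth assumption (a Gaussian-noise-style simplification; the coefficients are arbitrary and not the paper's feasibility parameter):

```python
# Sketch: effective SNR vs. launch power when nonlinear distortion is
# modeled as additive Gaussian noise scaling with power cubed.
import numpy as np

p = np.linspace(0.1, 5.0, 200)   # launch power (linear units, illustrative)
ase = 0.5                        # amplifier noise power (arbitrary units)
eta = 0.05                       # nonlinear interference coefficient

snr = p / (ase + eta * p ** 3)
best = p[np.argmax(snr)]
print(f"optimal launch power ~ {best:.2f}, "
      f"peak SNR = {10 * np.log10(snr.max()):.2f} dB")
```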

Journal ArticleDOI
TL;DR: Detailed analysis of the implementation allows identification of bottlenecks in the algorithm, indicating that code optimization and improvements on GPUs could allow microsecond-scale simulation throughput on workstations and inexpensive GPU clusters, putting widely desired biologically relevant simulation time-scales within reach of a large user community.
Abstract: Performance improvements in biomolecular simulations based on molecular dynamics (MD) codes are widely desired. Unfortunately, the factors which allowed past performance improvements, particularly rising microprocessor clock frequencies, are no longer increasing. Hence, novel software and hardware solutions are being explored for accelerating the performance of widely used MD codes. In this paper, we describe our efforts on porting, optimizing, and tuning the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a popular MD framework, on heterogeneous architectures: multi-core processors with graphical processing unit (GPU) accelerators. Our implementation is based on accelerating the most computationally expensive non-bonded interaction terms on the GPUs and overlapping the computation on the CPU and GPUs. This functionality is built on top of the message passing interface, which allows multi-level parallelism to be extracted even at the workstation level with multi-core CPUs and allows extension of the implementation to GPU-enabled clusters. We hypothesize that the optimal benefit of heterogeneous architectures for applications will come from utilizing all possible resources (for example, CPU cores and GPU devices on GPU-enabled clusters). Benchmarks for a range of biomolecular system sizes are provided, and an analysis is performed on four generations of NVIDIA's GPU devices. On GPU-enabled Linux clusters, by overlapping and pipelining computation and communication, we observe up to 10-fold application acceleration in multi-core and multi-GPU environments, illustrating significant performance improvements. Detailed analysis of the implementation is presented that allows identification of bottlenecks in the algorithm, indicating that code optimization and improvements on GPUs could allow microsecond-scale simulation throughput on workstations and inexpensive GPU clusters, putting widely desired biologically relevant simulation time-scales within reach of a large user community. In order to systematically optimize simulation throughput and to enable performance prediction, we have developed a parameterized performance model that will allow developers and users to explore the performance potential of future heterogeneous systems for biological simulations.

Patent
01 May 2013
TL;DR: In this article, a method is disclosed to perform performance prediction for cloud-based databases by building on a computer a cloud database performance model using one or more training workloads; and using the learned model on the computer to predict database performance in the cloud for a new workload.
Abstract: A method is disclosed to perform performance prediction for cloud-based databases by building on a computer a cloud database performance model using one or more training workloads; and using the learned model on the computer to predict database performance in the cloud for a new workload.

Proceedings ArticleDOI
10 Jun 2013
TL;DR: A 5-node network spanning an area of about 1 km² in Singapore waters over the period of a few days was deployed to understand the network and link performance variability better, and preliminary results from this experiment are presented.
Abstract: Performance prediction of underwater acoustic network protocols is difficult due to the variability of performance of individual links in the network. Link performance is usually a complex function of several environmental and modem parameters. To understand the network and link performance variability better, we deployed a 5-node network spanning an area of about 1 km² in Singapore waters over the period of a few days. We also deployed several environmental sensors to measure water currents, sea surface motion, wind speed, rain, sound speed profile, and ambient noise. Acoustic ranging between the network nodes allowed us to accurately localize the nodes underwater. By transmitting probe signals, we were able to accurately measure acoustic propagation between nodes, and understand its impact on link performance. We present preliminary results from this experiment to show how link performance varied with location, range and environmental changes.

Journal ArticleDOI
TL;DR: In this paper, a regression-based macrotexture prediction model was developed using construction parameters, traffic volume, and daily temperature conditions, as well as laboratory and in situ test results.

Journal ArticleDOI
TL;DR: In this article, the authors focus on the open-water performance prediction of contra-rotating propellers by RANS with a sliding mesh method, considering the effect of computational time step size and turbulence model.
Abstract: Growing interest has been devoted to contra-rotating propellers (CRPs) due to their high propulsive efficiency, torque balance, low fuel consumption, low cavitation, low noise, and low hull vibration. Compared with a single-screw system, open-water performance prediction is more difficult because the forward and aft propellers interact with each other and generate a more complicated flow field around the CRP system. The current work focuses on the open-water performance prediction of contra-rotating propellers by RANS with a sliding mesh method, considering the effect of computational time step size and turbulence model. A validation study has been performed on two sets of contra-rotating propellers developed by the David W. Taylor Naval Ship R&D Center. Compared with the experimental data, RANS with the sliding mesh method and the SST k-ω turbulence model shows good precision in the open-water performance prediction of contra-rotating propellers; a small time step size improves accuracy for CRPs with the same blade number on the forward and aft propellers, while a relatively large time step size is a better choice for CRPs with different blade numbers.

Proceedings ArticleDOI
14 Nov 2013
TL;DR: A new meta-model designed for the performance modeling of network infrastructures in modern data centers is presented, which delivers predictions with errors less than 32% and correctly detects bottlenecks in the modeled network.
Abstract: In this paper, we address the problem of performance analysis in computer networks. We present a new meta-model designed for the performance modeling of network infrastructures in modern data centers. Instances of our meta-model can be automatically transformed into stochastic simulation models for performance prediction. We evaluate the approach in a case study of a road traffic monitoring system. We compare the performance prediction results against the real system and a benchmark. The presented results show that our approach, despite introducing many modeling abstractions, delivers predictions with errors less than 32% and correctly detects bottlenecks in the modeled network.

Journal ArticleDOI
TL;DR: An SVR-based prediction method is proposed to predict the positioning errors of navigation systems, with particle swarm optimization (PSO) used for SVM parameter optimization; the method saves 75% of the calculation time compared with analyses based on error models.
Abstract: Strapdown inertial navigation systems (SINS) have been widely used in many vehicles, such as commercial airplanes, Unmanned Aerial Vehicles (UAVs), and other types of aircraft. In order to evaluate the navigation errors precisely and efficiently, a prediction method based on the support vector machine (SVM) is proposed for positioning error assessment. Firstly, SINS error models that are used for error calculation are established considering several error sources with respect to the inertial units. Secondly, flight paths for simulation are designed. Thirdly, the SVR-based prediction method is proposed to predict the positioning errors of the navigation systems, and particle swarm optimization (PSO) is used to optimize the SVM parameters. Finally, 600 sets of SINS error parameters are utilized to train the SVM model, which is used for the performance prediction of new navigation systems. By comparing the predicted results with the real errors, the latitudinal prediction accuracy is 92.73%, while the longitudinal prediction accuracy is 91.64%, and PSO is effective in increasing the prediction accuracy compared with a traditional SVM with fixed parameters. The method is also demonstrated to be effective for error prediction over an entire flight process. Moreover, the prediction method can save 75% of calculation time compared with analyses based on error models.
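A sketch of the SVR-plus-PSO combination on a synthetic regression problem, assuming scikit-learn's ε-SVR as a stand-in; the swarm size, bounds, and schedule below are invented and do not reproduce the paper's PSO setup:

```python
# Sketch: SVR hyperparameters (C, gamma) tuned by a tiny particle swarm,
# with cross-validated negative MSE as the fitness. Data is synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.random((120, 3))
y = np.sin(X[:, 0] * 3) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=120)

def fitness(params):
    C, gamma = params
    return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

lo, hi = np.array([0.1, 0.01]), np.array([100.0, 10.0])  # (C, gamma) bounds
pos = lo + rng.random((10, 2)) * (hi - lo)               # 10 particles
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(15):
    r1, r2 = rng.random((2, *pos.shape))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

print("best (C, gamma):", gbest, "CV score:", pbest_val.max())
```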

Patent
Kazuo Horikawa, Norihiro Hara
20 Dec 2013
TL;DR: In this article, a performance prediction method, performance prediction system, and program for predicting the performance of a monitoring target system including processing devices are presented; measurement values are acquired from the monitoring target system at regular intervals, the value at a future time of a reference index (a portion of the measurement values) is predicted, and the probability that a target event will be generated is calculated based on a probability model.
Abstract: A performance prediction method, performance prediction system, and program for predicting the performance of a monitoring target system including processing devices. A plurality of types of measurement values are acquired from the monitoring target system at regular intervals. A value at a future time of a reference index, which is a portion of the measurement values, is predicted, and the probability, based on a probability model, that a target event will be generated is calculated, the target event being an event in which a specific measurement value, different from the reference index, lies within a specific range at the future time, with the value of the reference index regarded as a prerequisite. An operation results value of the monitoring target system is included in the measurement values, and an operation plan value of the monitoring target system, which is time-series predicted, is included in the reference index.

Journal ArticleDOI
TL;DR: A new method for map adaptation is investigated to improve steady-state off-design prediction accuracy for a generic gas turbine component; it is integrated inside TSHAFT, the gas turbine prediction code developed at the University of Padova.
Abstract: Gas turbine off-design performance prediction is strictly dependent on the accuracy of compressor and turbine map characteristics. Experimental data regarding component maps are very difficult to find in the literature, since they are undisclosed proprietary information of the engine manufacturers. To overcome this limitation, gas turbine engineers use available generic component maps and modify them to reach maximum adherence with the experimental measures. Different scaling and adaptation techniques have been employed to this aim; these methodologies are usually based upon analytic regression models which minimize the deviation from experimental data. However, since these models are built mainly for a specific compressor or turbine map, their generalization is quite difficult: regression is highly shape-dependent and therefore requires a different model for each specific component. This paper proposes a solution to the problem stated above: a new method for map adaptation is investigated to improve the steady-state off-design prediction accuracy of a generic gas turbine component. The methodology does not employ analytical regression models; its main principle relies on performing map modifications in an appropriate neighborhood of the multiple experimental points used for the adaptation. When using gas turbine simulation codes, component maps are usually stored in a data matrix and ordered in a format suitable for 2-D interpolation. A perturbation of the values contained in the matrix results in component map morphing. An optimization algorithm varies the perturbation intensity vector in order to minimize the deviation between experimental and predicted points. The adaptation method is integrated inside TSHAFT, the gas turbine prediction code developed at the University of Padova. The methodology is assessed through a case study carried out on a turbojet engine.
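One way to realize the neighborhood-limited map morphing the abstract describes: apply localized bumps to the map matrix around the experimental points and let an optimizer choose their intensities. The grid, bump shape, and measured values below are invented, and the TSHAFT integration is not reproduced:

```python
# Sketch: perturb a 2-D component map only near experimental points;
# optimize perturbation intensities to minimize deviation from measurements.
import numpy as np
from scipy.optimize import minimize

map_grid = np.outer(np.linspace(1.0, 2.0, 8), np.linspace(0.8, 1.2, 8))
exp_points = [(2, 3, 1.35), (5, 6, 1.90)]  # (row, col, measured value)

def gaussian_bump(shape, i0, j0, sigma=1.5):
    i, j = np.indices(shape)
    return np.exp(-((i - i0) ** 2 + (j - j0) ** 2) / (2 * sigma ** 2))

bumps = [gaussian_bump(map_grid.shape, i, j) for i, j, _ in exp_points]

def deviation(intensities):
    morphed = map_grid + sum(a * b for a, b in zip(intensities, bumps))
    return sum((morphed[i, j] - v) ** 2 for i, j, v in exp_points)

res = minimize(deviation, x0=np.zeros(len(exp_points)), method="Nelder-Mead")
print("perturbation intensities:", res.x, "residual:", res.fun)
```

Because each bump decays away from its experimental point, the rest of the map keeps its generic shape, which is the stated advantage over whole-map regression.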

Proceedings ArticleDOI
14 Aug 2013
TL;DR: This paper describes how to obtain relevant parameters, such as the virtualization overhead, depending on the amount and type of available monitoring data, and adapt classical queueing-theory-based modeling techniques to make them usable for different configurations of virtualized environments.
Abstract: Performance management and performance prediction of services deployed in virtualized environments is a challenging task. On the one hand, the virtualization layer makes the estimation of performance model parameters difficult and inaccurate. On the other hand, it is difficult to model the hypervisor scheduler in a representative and practically feasible manner. In this paper, we describe how to obtain relevant parameters, such as the virtualization overhead, depending on the amount and type of available monitoring data. We adapt classical queueing-theory-based modeling techniques to make them usable for different configurations of virtualized environments. We provide answers on how to include the virtualization overhead in queueing network models, and how to take the contention between different VMs into account. Finally, we evaluate our approach in representative scenarios based on the SPECjEnterprise2010 standard benchmark and XenServer 5.5, showing significant improvements in the prediction accuracy and discussing further open issues for performance prediction in virtualized environments.
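A minimal illustration of folding a virtualization overhead into a queueing model, using a single M/M/1 station as a simplified stand-in for the paper's queueing networks; the overhead factor and service demands are invented numbers:

```python
# Sketch: inflate the service demand by a virtualization overhead factor,
# then apply the M/M/1 mean response time formula R = D / (1 - lambda*D).
def response_time(arrival_rate, service_demand, virt_overhead=1.25):
    demand = service_demand * virt_overhead     # hypervisor cost folded in
    utilization = arrival_rate * demand
    if utilization >= 1.0:
        raise ValueError("unstable: utilization >= 1")
    return demand / (1.0 - utilization)

for lam in (2.0, 4.0, 6.0):
    print(f"lambda={lam}/s  native={response_time(lam, 0.1, 1.0):.3f}s"
          f"  virtualized={response_time(lam, 0.1):.3f}s")
```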

Proceedings ArticleDOI
03 Jun 2013
TL;DR: In this article, the authors present the flow analysis of centrifugal compressor stages using high-fidelity computational fluid dynamics, with particular attention to cavity flow modeling and comparison with experimental data obtained using an advanced fast-response aerodynamic pressure probe.
Abstract: During the design of modern high-efficiency, wide-operating-range centrifugal compressor stages, Computational Fluid Dynamics (CFD) plays an increasing role in performance prediction. Nevertheless, experimental data are valuable and necessary to assess the performance of the stages and to better understand the flow features in detail. A big effort is currently being made to increase the fidelity of the numerical models and the probe measurement accuracy during both the design and validation phases of centrifugal compressor stages. This study presents the flow analysis of centrifugal compressor stages using high-fidelity computational fluid dynamics, with particular attention to cavity flow modeling and comparison with experimental data obtained using an advanced fast-response aerodynamic pressure probe. Centrifugal compressor stages with different flow coefficients were used for the validation of the numerical models, with particular attention to the effects of cavity flow on the flow phenomena. The computational domain faithfully reproduced the geometry of the stages, including secondary flow cavities. The availability of a new in-house automated tool for cavity meshing allowed leakage flows to be accurately resolved with a reasonable increase in computational and user time. Time-averaged data from the CFD analysis were compared with advanced experimental data from the unsteady pressure probe, for both overall performance and detailed two-dimensional maps of the main flow quantities at design and off-design conditions. It was found that the increase in computational accuracy with complete geometry modeling, including leakage flows, was substantial, and the results of the computational model were in good agreement with the experimental data. Moreover, the combination of advanced computational and experimental techniques enabled deeper insights into the flow field features. The comparison showed that only with advanced high-fidelity CFD including leakage flow modeling did the numerical predictions meet the requirements for efficiency, head, and operating margin, otherwise not achievable with simplified models (CFD without cavities).

Patent
10 Apr 2013
TL;DR: In this paper, a numerical control machine tool machining performance prediction method based on intervals is proposed, which includes a first step of acquiring a plurality of measured values of each type of measured data, a second step of converting each measured value into an interval mode, a third step of extracting time-domain or time-frequency-domain features, a fourth step of observing the extracted features to obtain an optimized generalized hidden Markov model, a fifth step of extracting the state transition probability matrix from the optimized model as a Markov chain transition matrix, a sixth step of selecting an interval initial state probability vector to form a performance prediction model with the Markov chain transition matrix, and a seventh step of solving for the largest value in the model, namely the predicted state of the machining performance.
Abstract: The invention discloses a numerical control machine tool machining performance prediction method based on intervals. The method includes a first step of acquiring a plurality of measured values of each type of measured data, a second step of converting each measured value of each type of measured data into an interval mode, a third step of extracting time-domain or time-frequency-domain features, a fourth step of observing the extracted features to obtain an optimized generalized hidden Markov model, a fifth step of finding the state transition probability matrix in the optimized generalized hidden Markov model to serve as a Markov chain transition matrix, a sixth step of selecting an interval initial state probability vector to form a performance prediction model A(n) with the Markov chain transition matrix, and a seventh step of solving for the largest value in the model, namely the predicted state of numerical control machine tool machining performance. The method handles random uncertainty through probability and captures uncertainty caused by lack of knowledge through the intervals, so the prediction accuracy is remarkably improved and the method has quite strong prediction robustness.

Patent
18 Sep 2013
TL;DR: An extreme learning machine (ELM) is applied to prediction and modeling in the performance prediction method; the ELM feed-forward neural network is introduced into the semiconductor manufacturing system, and a prediction model is built with available production-line data.
Abstract: The invention discloses a performance prediction method applicable to dynamic scheduling for a semiconductor production line. An extreme learning machine (ELM) is applied to prediction and modeling in the method. Feeding control and scheduling rules are considered in a unified manner, short-term scheduling key performance indicators such as equipment utilization rate and movement step number are predicted on the basis of the real-time state of the system, and a foundation is provided for dynamic real-time scheduling. The ELM, a novel feed-forward neural network, is introduced into the semiconductor manufacturing system, and a prediction model is built with the aid of available production-line data. Test results show that ideal prediction results can be quickly acquired by the method; compared with traditional neural network modeling methods, it has obvious advantages in parameter selection and learning speed as well as an obvious application prospect, and it provides a new idea for online optimal control.