
Showing papers on "Performance prediction" published in 2017


Journal ArticleDOI
TL;DR: An enhanced Monte Carlo (MC) simulation methodology is developed to predict the impact of layout-dependent correlated manufacturing variations on the performance of photonic integrated circuits (PICs); statistical results from the simulations predict both common-mode and differential-mode variations of circuit performance.
Abstract: This work develops an enhanced Monte Carlo (MC) simulation methodology to predict the impacts of layout-dependent correlated manufacturing variations on the performance of photonic integrated circuits (PICs). First, to enable such performance prediction, we demonstrate a simple method with sub-nanometer accuracy to characterize photonics manufacturing variations, where the width and height of a fabricated waveguide can be extracted from the spectral response of a racetrack resonator. By measuring the spectral responses of a large number of identical resonators spread over a wafer, statistical results for the variations of waveguide width and height can be obtained. Second, we develop models for the layout-dependent enhanced MC simulation. Our models use netlist extraction to transfer physical layouts into circuit simulators. Spatially correlated physical variations across the PICs are simulated on a discrete grid and are mapped to each circuit component, so that the performance of each component can be updated according to its obtained variations; circuit simulations therefore take the correlated variations between components into account. The simulation flow and theoretical models for our layout-dependent enhanced MC simulation are detailed in this paper. As examples, several ring-resonator filter circuits are studied using the developed enhanced MC simulation, and statistical results from the simulations can predict both common-mode and differential-mode variations of the circuit performance.
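
To make the layout-aware MC idea concrete, here is a minimal numpy sketch (not the paper's code): a spatially correlated Gaussian field of waveguide width/height deviations is sampled at two hypothetical ring locations, and a first-order sensitivity model converts the deviations into resonance shifts. All constants (sensitivities, sigmas, correlation length) are illustrative assumptions.

```python
import numpy as np

# Illustrative constants (NOT from the paper): nominal ring and sensitivities.
LAMBDA0 = 1.55e-6      # nominal resonant wavelength (m)
NG = 4.2               # group index
DNEFF_DW = 2.0e-3      # effective-index shift per nm of width deviation (assumed)
DNEFF_DH = 4.0e-3      # effective-index shift per nm of height deviation (assumed)

def correlated_samples(points, sigma, corr_len, n_trials, rng):
    """Draw spatially correlated Gaussian variations at component locations."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    cov = sigma**2 * np.exp(-((d / corr_len) ** 2))   # Gaussian covariance kernel
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(points)))
    return (L @ rng.standard_normal((len(points), n_trials))).T  # (trials, comps)

rng = np.random.default_rng(0)
xy = np.array([[0.0, 0.0], [50.0, 0.0]])  # two rings, 50 um apart (hypothetical layout)
dw = correlated_samples(xy, sigma=5.0, corr_len=200.0, n_trials=20_000, rng=rng)  # nm
dh = correlated_samples(xy, sigma=1.0, corr_len=200.0, n_trials=20_000, rng=rng)  # nm

# First-order resonance shift of each ring in each MC trial.
dlam = LAMBDA0 / NG * (DNEFF_DW * dw + DNEFF_DH * dh)   # (trials, 2), metres
common = dlam.mean(axis=1)                # common-mode shift of the ring pair
differential = dlam[:, 0] - dlam[:, 1]    # differential-mode mismatch
print(f"common-mode  std: {common.std() * 1e9:.3f} nm")
print(f"differential std: {differential.std() * 1e9:.3f} nm")
```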

151 citations


Posted Content
TL;DR: Standard frequentist regression models can predict the final performance of partially trained model configurations using features based on network architectures, hyperparameters, and time-series validation performance data; an early stopping method built on these predictions obtains a speedup of up to 6x in both hyperparameter optimization and meta-modeling.
Abstract: Methods for neural network hyperparameter optimization and meta-modeling are computationally expensive due to the need to train a large number of model configurations. In this paper, we show that standard frequentist regression models can predict the final performance of partially trained model configurations using features based on network architectures, hyperparameters, and time-series validation performance data. We empirically show that our performance prediction models are much more effective than prominent Bayesian counterparts, are simpler to implement, and are faster to train. Our models can predict final performance in both visual classification and language modeling domains, are effective for predicting performance of drastically varying model architectures, and can even generalize between model classes. Using these prediction models, we also propose an early stopping method for hyperparameter optimization and meta-modeling, which obtains a speedup of up to 6x in both hyperparameter optimization and meta-modeling. Finally, we empirically show that our early stopping method can be seamlessly incorporated into both reinforcement learning-based architecture selection algorithms and bandit-based search methods. Through extensive experimentation, we empirically show our performance prediction models and early stopping algorithm are state-of-the-art in terms of prediction accuracy and speedup achieved while still identifying the optimal model configurations.
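
The gist of the frequentist predictor and early-stopping rule can be sketched with scikit-learn (a toy reconstruction, not the authors' code; the simulated learning curves, feature set, and stopping margin are assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
EPOCHS, PREFIX = 50, 10    # full training budget and observed prefix length

def simulate_run(log_lr, log_width):
    """Toy stand-in for a training run: a noisy exponential learning curve."""
    ceiling = 0.90 - 0.05 * (log_lr + 2.0) ** 2 + 0.01 * log_width
    rate = 5.0 + 2.0 * log_width
    t = np.arange(1, EPOCHS + 1)
    return ceiling * (1.0 - np.exp(-t / rate)) + rng.normal(0, 0.004, EPOCHS)

def features(hp, prefix):
    """Hyperparameters + time-series features of the partial validation curve."""
    return np.concatenate([hp, prefix, [prefix[-1] - prefix[-2], prefix.max()]])

# Train the frequentist predictor on fully trained configurations.
hps = np.column_stack([rng.uniform(-4, 0, 200), rng.uniform(0, 8, 200)])
curves = np.array([simulate_run(*hp) for hp in hps])
X = np.array([features(hp, c[:PREFIX]) for hp, c in zip(hps, curves)])
y = curves[:, -1]   # final validation accuracy
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Early stopping: abandon a new run whose predicted final score can't win.
best_so_far, margin = y.max(), 0.01
new_hp = np.array([-3.5, 1.0])
partial = simulate_run(*new_hp)[:PREFIX]   # only PREFIX epochs were trained
pred = model.predict(features(new_hp, partial).reshape(1, -1))[0]
if pred < best_so_far - margin:
    print(f"stop early: predicted final {pred:.3f} < best {best_so_far:.3f}")
```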

91 citations


Journal ArticleDOI
TL;DR: The simulation approach described in this paper combines one-dimensional modelling of a reverse electrodialysis (RED) stack with fully three-dimensional finite-volume modelling of the electrolyte channels, either planar or equipped with different spacers or profiled membranes.

72 citations


Journal ArticleDOI
TL;DR: It is shown that an accurate metric for predicting the performance of coded modulation based on nonbinary FEC is the mutual information; the employed FEC codes must be universal if threshold-based performance prediction is used.
Abstract: In this paper, we compare different metrics to predict the error rate of optical systems based on nonbinary forward error correction (FEC). It is shown that an accurate metric to predict the performance of coded modulation based on nonbinary FEC is the mutual information. The accuracy of the prediction is verified in a detailed example with multiple constellation formats and FEC overheads, in both simulations and optical transmission experiments over a recirculating loop. It is shown that the employed FEC codes must be universal if performance prediction based on thresholds is used. A tutorial introduction to the computation of the thresholds from optical transmission measurements is also given.
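
As a rough illustration of using mutual information as the metric (for a generic uniform-input AWGN channel, not the paper's optical link), I(X;Y) of a constellation can be estimated by Monte Carlo:

```python
import numpy as np

def qam16():
    pts = np.array([-3.0, -1.0, 1.0, 3.0])
    c = (pts[:, None] + 1j * pts[None, :]).ravel()
    return c / np.sqrt((np.abs(c) ** 2).mean())   # unit average symbol energy

def mutual_information(constellation, snr_db, n=100_000, seed=0):
    """Monte Carlo estimate of I(X;Y) in bits/symbol, uniform input, AWGN."""
    rng = np.random.default_rng(seed)
    M = len(constellation)
    sigma2 = 10 ** (-snr_db / 10)                 # noise variance for Es = 1
    x = rng.choice(constellation, size=n)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    y = x + noise
    # Log-likelihoods of each sample under every constellation point; the
    # Gaussian normalisation constants cancel in the ratio below.
    ll_all = -np.abs(y[:, None] - constellation[None, :]) ** 2 / sigma2
    ll_tx = -np.abs(y - x) ** 2 / sigma2
    log_py = np.logaddexp.reduce(ll_all, axis=1) - np.log(M)   # log mixture p(y)
    return np.mean(ll_tx - log_py) / np.log(2)

for snr in (6, 10, 14, 18):
    print(f"SNR {snr:2d} dB: MI ≈ {mutual_information(qam16(), snr):.3f} bit/symbol")
```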

58 citations


Proceedings ArticleDOI
17 Apr 2017
TL;DR: Empirical studies on three real-world software systems show that the proposed linear-transformation technique for enhancing the generality of performance models across hardware environments is computationally efficient and achieves high accuracy when predicting system performance across 23 different hardware platforms.
Abstract: Many software systems provide configuration options relevant to users, which are often called features. Features influence functional properties of software systems as well as non-functional ones, such as performance and memory consumption. Researchers have successfully demonstrated the correlation between feature selection and performance. However, the generality of these performance models across different hardware platforms has not yet been evaluated. We propose a technique for enhancing generality of performance models across different hardware environments using linear transformation. Empirical studies on three real-world software systems show that our approach is computationally efficient and can achieve high accuracy (less than 10% mean relative error) when predicting system performance across 23 different hardware platforms. Moreover, we investigate why the approach works by comparing performance distributions of systems and the structure of performance models across different platforms.
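
A minimal sketch of the linear-transformation idea on invented configuration data: performance is measured exhaustively on a reference host, a handful of configurations are re-measured on the target host, and a fitted linear map transfers the whole model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical configurable system: 6 binary features with linear influences.
N_FEATURES = 6
beta = rng.uniform(1, 20, N_FEATURES)
configs = rng.integers(0, 2, size=(2 ** N_FEATURES, N_FEATURES))

def measure(scale, offset, cfgs, noise=0.5):
    """Stand-in benchmark: target-host responses are ~linear in the reference."""
    base = 30.0 + cfgs @ beta
    return scale * base + offset + rng.normal(0, noise, len(cfgs))

perf_ref = measure(1.0, 0.0, configs)                  # full model on reference host
sample = rng.choice(len(configs), size=10, replace=False)
perf_tgt_sample = measure(1.8, 12.0, configs[sample])  # a few runs on the target host

# Fit the linear transformation perf_tgt ≈ a * perf_ref + b from the sample.
a, b = np.polyfit(perf_ref[sample], perf_tgt_sample, deg=1)
pred_tgt = a * perf_ref + b                            # transfer the whole model

truth_tgt = measure(1.8, 12.0, configs, noise=0.0)
mre = np.mean(np.abs(pred_tgt - truth_tgt) / truth_tgt)
print(f"perf_tgt ≈ {a:.2f}*perf_ref + {b:.2f}, mean relative error {mre:.1%}")
```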

58 citations


Journal ArticleDOI
TL;DR: In this paper, an optimized absorber geometry capable of reducing overall thermal losses is presented, increasing the final thermal efficiency by more than 12% compared with the current state of the art.

56 citations


Proceedings ArticleDOI
25 Jun 2017
TL;DR: This research proposes FiM, a performance approximation approach that models the computing performance of iterative, multi-stage applications running on a master-compute framework; FiM provides accurate predictions of parallel computation time for datasets much larger than the training datasets.
Abstract: Predicting the performance of an application running on high performance computing (HPC) platforms in a cloud environment is increasingly important because of its influence on development time and resource management. However, predicting performance with respect to parallel processes is complex for iterative, multi-stage applications. This research proposes FiM, a performance approximation approach that models the computing performance of iterative, multi-stage applications running on a master-compute framework. FiM consists of two key components that are coupled with each other: 1) a stochastic Markov model to capture non-deterministic runtime that often depends on parallel resources, e.g., the number of processes; and 2) a machine learning model that extrapolates the parameters for calibrating our Markov model when application parameters, such as the dataset, change. Our new modeling approach considers different design choices along multiple dimensions, namely (i) process-level parallelism, (ii) distribution of cores on multi-core processors in cloud computing, (iii) application-related parameters, and (iv) characteristics of datasets. The major contribution of our prediction approach is that FiM provides accurate predictions of parallel computation time for datasets much larger than the training datasets. Such predictions give data analysts useful insight into the optimal configuration of parallel resources (e.g., the number of processes and cores) and also help system designers investigate the impact of changes in application parameters on system performance.
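
The two FiM ingredients can be caricatured in a short script (the stage graph, iteration probability, and all timings are invented for illustration): an absorbing Markov chain yields expected stage visits, and a per-stage regression extrapolates holding times to larger datasets:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def expected_runtime(p_iterate, holding_s):
    """Expected time to absorption of a stage-level Markov chain with
    transient states (map, reduce, barrier); 'done' is absorbing."""
    P = np.array([
        [0.0,       1.0, 0.0],   # map -> reduce
        [0.0,       0.0, 1.0],   # reduce -> barrier
        [p_iterate, 0.0, 0.0],   # barrier -> map (iterate) else absorb (done)
    ])
    N = np.linalg.inv(np.eye(3) - P)   # fundamental matrix
    visits = N[0]                      # expected visits to each stage from 'map'
    return float(visits @ holding_s)

# ML calibration: per-stage holding times measured at small dataset sizes
# (invented numbers), extrapolated linearly in dataset size.
sizes = np.array([[1.0], [2.0], [4.0]])            # GB
measured = np.array([[3.0, 5.1, 0.4],              # seconds per stage at 1 GB
                     [5.9, 10.2, 0.4],
                     [11.8, 20.3, 0.5]])
stage_models = [LinearRegression().fit(sizes, measured[:, s]) for s in range(3)]

big = np.array([[32.0]])
holding = np.array([m.predict(big)[0] for m in stage_models])
print(f"predicted runtime at 32 GB: {expected_runtime(0.8, holding):.0f} s")
```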

52 citations


Proceedings ArticleDOI
26 Jun 2017
TL;DR: A predictive model for the output performance of supercomputer file systems under production load is developed for Titan and its Lustre-based multi-stage write path, using feature transformations to capture non-linear relationships.
Abstract: In this paper, we develop a predictive model useful for output performance prediction of supercomputer file systems under production load. Our target environment is Titan---the 3rd fastest supercomputer in the world---and its Lustre-based multi-stage write path. We observe from Titan that although output performance is highly variable at small time scales, the mean performance is stable and consistent over typical application run times. Moreover, we find that output performance is non-linearly related to its correlated parameters due to interference and saturation on individual stages on the path. These observations enable us to build a predictive model of expected write times of output patterns and I/O configurations, using feature transformations to capture non-linear relationships. We identify the candidate features based on the structure of the Lustre/Titan write path, and use feature transformation functions to produce a model space with 135,000 candidate models. By searching for the minimal mean square error in this space we identify a good model and show that it is effective.
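
The model-space search can be sketched as follows, with three features and four transformation functions (64 candidate models instead of the paper's 135,000) on invented I/O data:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Invented I/O logs: write size (GB), concurrent writers, stripe count -> time (s).
n = 800
size = rng.uniform(1, 512, n)
writers = rng.integers(1, 64, n).astype(float)
stripes = rng.integers(1, 16, n).astype(float)
time_s = 2 + 0.8 * size / np.sqrt(stripes) * (1 + 0.02 * writers) + rng.normal(0, 2, n)

raw = {"size": size, "writers": writers, "stripes": stripes}
transforms = {"id": lambda v: v, "log": np.log, "sqrt": np.sqrt, "inv": lambda v: 1.0 / v}

def fit_mse(F, y):
    """Least-squares linear fit on transformed features; return training MSE."""
    X = np.column_stack([np.ones(len(y)), F])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((X @ coef - y) ** 2)

best_mse, best_combo = np.inf, None
for combo in itertools.product(transforms, repeat=len(raw)):  # one transform per feature
    F = np.column_stack([transforms[t](raw[f]) for f, t in zip(raw, combo)])
    mse = fit_mse(F, time_s)
    if mse < best_mse:
        best_mse, best_combo = mse, combo
print(f"best transforms: {dict(zip(raw, best_combo))}, MSE = {best_mse:.2f}")
```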

52 citations


Journal ArticleDOI
TL;DR: In this article, the authors present the integrated business IT impact simulation (IntBIIS) approach to adequately reflect the mutual impact between business process and enterprise information system (IS) in simulation.
Abstract: Business process (BP) designs and enterprise information system (IS) designs are often not well aligned. Missing alignment may result in performance problems at run-time, such as long process execution times or overloaded IS resources. The complex interrelations between BPs and ISs have so far not been adequately understood or considered in development. Simulation is a promising approach to predicting the performance of both BP and IS designs. Based on prediction results, design alternatives can be compared and verified against requirements. Thus, BP and IS designs can be aligned to improve performance. In current simulation approaches, BP simulation and IS simulation are not adequately integrated. This results in limited prediction accuracy due to neglected interrelations between the BP and the IS in simulation. In this paper, we present the novel approach Integrated Business IT Impact Simulation (IntBIIS) to adequately reflect the mutual impact between BPs and ISs in simulation. Three types of mutual impact between BPs and ISs in terms of performance are specified. We discuss several solution alternatives to predict the impact of a BP on the performance of ISs and vice versa. It is argued that an integrated simulation of BPs and ISs is best suited to reflect their interrelations. We propose novel concepts for continuous modeling and integrated simulation. IntBIIS is implemented by extending the Palladio tool chain with BP simulation concepts. In a real-life case study with a BP and IS from practice, we validate the feasibility of IntBIIS and discuss the practicability of the corresponding tool support.

50 citations


Journal ArticleDOI
TL;DR: This paper improves on the roofline model following a quantitative approach and presents a completely automated GPU performance prediction technique that utilizes micro-benchmarking and profiling in a “black box” fashion, as no inspection of source/binary code is required.

48 citations


Journal ArticleDOI
03 Apr 2017 - Energies
TL;DR: In this paper, the authors consider the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter and compare with the practices often followed for wave tank testing.
Abstract: Empirically based modeling is an essential aspect of design for a wave energy converter. Empirically based models are used in structural, mechanical and control design processes, as well as for performance prediction. Both the design of experiments and methods used in system identification have a strong impact on the quality of the resulting model. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages, including an increased signal-to-noise ratio, reduced experimental time and higher frequency resolution. The experimental wave tank data is used to produce multiple models using different formulations to represent the dynamics of the wave energy converter. These models are validated and their performance is compared against one another. While most models of wave energy converters use a formulation with surface elevation as an input, this study shows that a model using a hull pressure measurement to incorporate the wave excitation phenomenon has better accuracy.
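
A bare-bones version of such system identification (not the study's exact procedure) is an ARX model fitted by least squares, here on a synthetic second-order oscillator driven by a band-limited input standing in for a hull pressure signal:

```python
import numpy as np

rng = np.random.default_rng(10)
FS = 32.0                                   # sampling rate (Hz), illustrative
t = np.arange(0, 300, 1 / FS)
dt = 1 / FS

# Stand-in excitation (e.g., hull pressure) and response: a 2nd-order resonant
# device driven by band-limited random "waves" serves as ground truth.
u = np.convolve(rng.standard_normal(len(t)), np.ones(16) / 16, mode="same")
wn, zeta = 2 * np.pi * 0.8, 0.15
y = np.zeros_like(u)
for k in range(2, len(t)):                  # explicit finite-difference oscillator
    y[k] = (2 * y[k - 1] - y[k - 2]
            - dt * 2 * zeta * wn * (y[k - 1] - y[k - 2])
            - dt**2 * wn**2 * y[k - 1] + dt**2 * u[k - 1])
y += rng.normal(0, 0.001, len(y))           # measurement noise

# ARX identification: y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j], least squares.
na, nb = 2, 2
start = max(na, nb)
Phi = np.array([np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
                for k in range(start, len(y))])
theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)

pred = Phi @ theta                          # one-step-ahead prediction
resid = y[start:] - pred
fit = 1 - np.linalg.norm(resid) / np.linalg.norm(y[start:] - y[start:].mean())
print(f"ARX({na},{nb}) one-step-ahead fit: {fit:.1%}")
```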

Journal ArticleDOI
TL;DR: In this paper, a sequential optimization procedure and a performance prediction of an oscillating water column (OWC) is presented, where the effects of the power take-off (PTO) model, the geometry, and the wave conditions on the device performance are investigated.

Proceedings ArticleDOI
26 Jun 2017
TL;DR: This work engineer features for endpoint CPU load, network interface card load, and transfer characteristics in both linear and nonlinear models of transfer performance and shows that the resulting models have high explanatory power.
Abstract: Disk-to-disk wide-area file transfers involve many subsystems and tunable application parameters that pose significant challenges for bottleneck detection, system optimization, and performance prediction. Performance models can be used to address these challenges but have not proved generally usable because of a need for extensive online experiments to characterize subsystems. We show here how to overcome the need for such experiments by applying machine learning methods to historical data to estimate parameters for predictive models. Starting with log data for millions of Globus transfers involving billions of files and hundreds of petabytes, we engineer features for endpoint CPU load, network interface card load, and transfer characteristics; and we use these features in both linear and nonlinear models of transfer performance. We show that the resulting models have high explanatory power. For a representative set of 30,653 transfers over 30 heavily used source-destination pairs ("edges"), totaling 2,053 TB in 46.6 million files, we obtain median absolute percentage prediction errors (MdAPE) of 7.0% and 4.6% when using distinct linear and nonlinear models per edge, respectively; when using a single nonlinear model for all edges, we obtain an MdAPE of 7.8%. Our work broadens understanding of factors that influence file transfer rate by clarifying relationships between achieved transfer rates, transfer characteristics, and competing load. Our predictions can be used for distributed workflow scheduling and optimization, and our features can also be used for optimization and explanation.
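
The evaluation pipeline can be sketched with scikit-learn on synthetic stand-in data; the features mirror those named in the abstract, but the data-generating model and resulting MdAPE values are purely illustrative:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000

# Hypothetical per-transfer features: bytes, file count, endpoint CPU load, NIC load.
log_bytes = rng.normal(20, 2, n)
log_files = rng.normal(4, 2, n)
cpu_load = rng.uniform(0, 1, n)
nic_load = rng.uniform(0, 1, n)
# Synthetic achieved rate: large transfers amortize overheads; load hurts.
rate = (log_bytes - 0.3 * log_files) * (1 - 0.5 * cpu_load) * (1 - 0.4 * nic_load)
rate += rng.normal(0, 0.5, n)

X = np.column_stack([log_bytes, log_files, cpu_load, nic_load])

def mdape(y_true, y_pred):
    """Median absolute percentage error, the headline metric in the paper."""
    return 100 * np.median(np.abs((y_pred - y_true) / y_true))

Xtr, Xte, ytr, yte = train_test_split(X, rate, test_size=0.25, random_state=0)
for name, model in [("linear", LinearRegression()),
                    ("nonlinear", GradientBoostingRegressor(random_state=0))]:
    pred = model.fit(Xtr, ytr).predict(Xte)
    print(f"{name:9s} MdAPE: {mdape(yte, pred):.1f}%")
```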

Journal ArticleDOI
TL;DR: A novel space vector approach is proposed that exploits the three-phase nature of the monitored signals together with proper lowpass and differentiation filters to derive the filter design criteria for phasor, frequency, and ROCOF computation.
Abstract: Phasor measurement units (PMUs) are expected to be the basis of modern power network monitoring systems. They are conceived to allow measuring the phasor, frequency, and rate of change of frequency (ROCOF) of electrical signals in a synchronized way and with unprecedented accuracy. PMUs are intended to apply to three-phase systems and to track signal parameter evolution during network dynamics. For these reasons, the design of the algorithms and, in particular, of the filters that allow rejecting the disturbances while preserving the passband signal content is a paramount concern. In this paper, a novel space vector approach is proposed. It exploits the three-phase nature of the monitored signals together with proper lowpass and differentiation filters. Analytical formulas for performance prediction under almost all the test conditions prescribed by the synchrophasor standard C37.118.1 for PMUs are introduced. The given expressions are extremely accurate, thus allowing one to derive the filter design criteria for phasor, frequency, and ROCOF computation, so that the requirements in terms of estimation errors can be easily translated into filter specifications. The implications of the proposed approach in practical PMU design are illustrated by means of two simple design examples matching P and M compliance classes, respectively, for all the test cases of the standard. The reported performance proves the validity of the proposal.
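
The space-vector estimator can be illustrated with a toy implementation; the FIR lowpass below is a generic design chosen for illustration, not the paper's P- or M-class filters:

```python
import numpy as np
from scipy.signal import firwin

rng = np.random.default_rng(6)
FS, F0 = 10_000, 50.0                       # sample rate (Hz), nominal frequency (Hz)
t = np.arange(0, 2.0, 1 / FS)
f_true = F0 + 0.5 * t                       # frequency ramp: true ROCOF = 0.5 Hz/s
theta = 2 * np.pi * np.cumsum(f_true) / FS

def phase_sig(shift, scale=1.0):            # slight unbalance, 5th harmonic, noise
    return (scale * np.cos(theta + shift) + 0.02 * np.cos(5 * (theta + shift))
            + rng.normal(0, 0.005, len(t)))
va, vb, vc = phase_sig(0.0), phase_sig(-2 * np.pi / 3, 0.95), phase_sig(2 * np.pi / 3)

# Space vector: a balanced positive-sequence set becomes one complex tone at +f;
# unbalance and the 5th harmonic land far from baseband after mixing, where the
# lowpass filter rejects them.
a = np.exp(2j * np.pi / 3)
sv = (2 / 3) * (va + a * vb + a * a * vc)
bb = sv * np.exp(-2j * np.pi * F0 * t)      # mix down to baseband

h = firwin(401, 25.0, fs=FS)                # generic symmetric (linear-phase) lowpass
env = np.convolve(bb, h, mode="same")       # centred filtering: no group delay

inst_phase = np.unwrap(np.angle(env))
freq = F0 + np.gradient(inst_phase, 1 / FS) / (2 * np.pi)
rocof = np.gradient(freq, 1 / FS)

mid = slice(len(t) // 4, 3 * len(t) // 4)   # avoid filter edge transients
print(f"phasor magnitude ≈ {np.abs(env)[mid].mean():.3f} pu")
print(f"mean frequency error: {np.mean(freq[mid] - f_true[mid]) * 1e3:+.2f} mHz")
print(f"mean ROCOF: {rocof[mid].mean():.3f} Hz/s (true 0.5)")
```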

Journal ArticleDOI
Zheng Huang, Jiajun Peng, Huijuan Lian, Jie Guo, Weidong Qiu
TL;DR: This paper proposes using an RNN with long short-term memory (LSTM) units for server load and performance prediction, and provides a new way to reproduce user request sequences for generating detailed simulated workloads.
Abstract: Recurrent neural networks (RNNs) have been widely applied to many sequential tagging tasks such as natural language processing (NLP) and time series analysis, and it has been shown that RNNs work well in those areas. In this paper, we propose using an RNN with long short-term memory (LSTM) units for server load and performance prediction. Classical methods for performance prediction focus on building a relation between performance and the time domain, which requires many unrealistic hypotheses. Our model is built on events (user requests), which are the root cause of server performance. We predict the performance of the servers using the RNN-LSTM by analyzing data-center server logs that contain users' access sequences. Previous work on workload prediction could not generate detailed simulated workloads, which are useful for testing the working condition of servers. Our method provides a new way to reproduce user request sequences using the RNN-LSTM to solve this problem. Experimental results show that our models perform well at generating load and predicting performance on a dataset logged from an online service. We ran experiments with the nginx web server and the mysql database server, and our methods can be easily applied to other servers in a data center.
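
A minimal PyTorch rendering of the idea, as a toy next-step load predictor trained on synthetic periodic traffic (not the authors' model or data):

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
rng = np.random.default_rng(7)

# Toy server load: periodic request volume (e.g., daily pattern) plus noise.
t = np.arange(2000)
load = 50 + 30 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 3, len(t))
load = (load - load.mean()) / load.std()            # normalise

WINDOW = 48
X = np.stack([load[i:i + WINDOW] for i in range(len(load) - WINDOW)])
y = load[WINDOW:]
X = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (N, WINDOW, 1)
y = torch.tensor(y, dtype=torch.float32)

class LoadLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)   # predict the next step

model = LoadLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
split = int(0.8 * len(X))
for _ in range(30):                                   # full-batch training, toy scale
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X[:split]), y[:split])
    loss.backward()
    opt.step()

with torch.no_grad():
    test_mse = nn.functional.mse_loss(model(X[split:]), y[split:]).item()
print(f"held-out one-step MSE (normalised units): {test_mse:.4f}")
```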

Proceedings ArticleDOI
Haggai Roitman, Shai Erera, Bar Weiner
01 Oct 2017
TL;DR: A robust standard deviation estimator for post-retrieval query performance prediction is derived from a novel bootstrap sampling approach inspired by user search behavior, resulting in enhanced query performance prediction.
Abstract: We derive a robust standard deviation estimator for post-retrieval query performance prediction. To this end, we propose a novel bootstrap sampling approach which is inspired by user search behavior. Using an evaluation with several TREC benchmarks and a comparison with several different types of baselines, we demonstrate that, overall, our estimator results in an enhanced query performance prediction.
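
One possible reading of the estimator, as a sketch: resample the top-ranked retrieval scores with rank-biased weights (an assumption standing in for the user-behavior-inspired sampling) and average the per-sample standard deviations:

```python
import numpy as np

def bootstrap_std(scores, k=100, n_boot=200, seed=0):
    """Std-based post-retrieval predictor, robustified by bootstrap resampling.

    Rank-biased resampling weights stand in for the user-behaviour-inspired
    sampling (an assumption of this sketch, not the paper's exact scheme).
    """
    rng = np.random.default_rng(seed)
    top = np.sort(scores)[::-1][:k]
    weights = 1.0 / np.arange(1, len(top) + 1)   # favour early ranks
    weights /= weights.sum()
    stds = [rng.choice(top, size=len(top), replace=True, p=weights).std()
            for _ in range(n_boot)]
    return float(np.mean(stds))

rng = np.random.default_rng(8)
# Two hypothetical queries: a well-separated score list vs a flat, ambiguous one.
easy = np.concatenate([rng.normal(12, 1.0, 30), rng.normal(5, 1.0, 970)])
hard = rng.normal(7, 0.8, 1000)
print(f"predictor(easy) = {bootstrap_std(easy):.3f}")   # larger -> better query
print(f"predictor(hard) = {bootstrap_std(hard):.3f}")
```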

Journal ArticleDOI
TL;DR: An analytical model based on solving Maxwell's equations in the machine layers is presented for a linear resolver (LR); good correlation with finite element results confirms the superiority of the proposed method over the FEM owing to its much lower computational time.
Abstract: In this study, an analytical model based on solving Maxwell's equations in the machine layers is presented for a linear resolver (LR). Anisotropy, field harmonics, slot effects, the number of slots per pole per phase, and the effect of tooth skewing are considered in the model. The proposed method is a design-oriented technique that can be used for performance prediction and design optimisation of the LR owing to its acceptable accuracy and fast computation time. Two- and three-dimensional time-stepping finite element methods (FEM) are employed to validate the results of the proposed model. Good correlation between the results obtained by the proposed method and the FEM confirms the superiority of the proposed method over the FEM owing to its much lower computational time. Finally, a prototype of the proposed sensor is built and tested. The results of the experimental tests verify the accuracy of the simulations.

Book ChapterDOI
28 Aug 2017
TL;DR: A grey-box approach estimates application execution time on a Spark cluster for larger data sizes using measurements on low-volume data in a small cluster; the proposed approaches predict within a 20% error bound for Wordcount, Terasort, K-means, and a few TPC-H SQL workloads.
Abstract: The wide availability of open source big data processing frameworks, such as Spark, has increased migration of existing applications and deployment of new applications to these cost-effective platforms. One of the challenges is assuring the performance of an application as data size grows in the production system. We have addressed this problem for the Spark platform using a performance prediction model in the development environment. We propose a grey-box approach to estimate an application's execution time on a Spark cluster for larger data sizes using measurements on low-volume data in a small cluster. The proposed model may also be used iteratively to estimate the cluster size needed for the desired application performance in the production environment. We discuss both machine learning and analytic techniques to build the model. The model is also flexible to different configurations of the Spark cluster. This flexibility enables the use of the prediction model with optimization techniques to obtain tuned values of Spark parameters for optimal performance of the deployed application on the Spark cluster. Our key innovations in building the Spark performance prediction model are support for different configurations of the Spark platform, and a simulator to estimate Spark stage execution time that includes task execution variability due to HDFS, data skew, and cluster node heterogeneity. We show that our proposed approaches are able to predict within a 20% error bound for Wordcount, Terasort, K-means, and a few TPC-H SQL workloads.
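
A grey-box extrapolation in this spirit might look like the following sketch, where per-stage times measured on small runs are regressed against scheduling waves and extrapolated; TASKS_PER_GB and all timings are invented:

```python
import math
import numpy as np

# Small-cluster calibration runs in a dev environment (invented numbers):
# (input size GB, executors, measured per-stage times in seconds).
calib = [
    (1, 4, [6.1, 12.3]),
    (2, 4, [11.8, 24.5]),
    (4, 4, [23.6, 49.0]),
    (4, 8, [12.1, 25.2]),
]
TASKS_PER_GB = 8   # tasks scale with input partitions/HDFS blocks (assumption)
N_STAGES = 2

def waves(gb, execs):
    """Scheduling waves: how many rounds of tasks the executors must run."""
    return math.ceil(gb * TASKS_PER_GB / execs)

def fit_stage(stage):
    """Fit stage_time ≈ overhead + wave_time * waves by least squares."""
    x = np.array([waves(gb, ex) for gb, ex, _ in calib])
    y = np.array([s[stage] for _, _, s in calib])
    wave_time, overhead = np.polyfit(x, y, deg=1)
    return wave_time, overhead

def predict(gb, execs):
    """Extrapolate total job time to a larger dataset / cluster size."""
    total = 0.0
    for s in range(N_STAGES):
        wave_time, overhead = fit_stage(s)
        total += overhead + wave_time * waves(gb, execs)
    return total

print(f"predicted job time for 64 GB on 16 executors: {predict(64, 16):.0f} s")
```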

Journal ArticleDOI
TL;DR: In this paper, a GA-BP neural network model was used to predict the complex nonlinear relationship between the input variables and thermal performance of parabolic trough solar collector (PTC) systems.
Abstract: The aim of this paper is to optimize the thermal performance (system output energy, thermal efficiency, and heat loss of the cavity absorber) of parabolic trough solar collector (PTC) systems, based on a genetic algorithm-back propagation (GA-BP) neural network model. PTC systems involve a number of ill-defined problems, fuzzy or incomplete information, and complex thermal behavior. Therefore, a thermal performance prediction of the PTC systems based on the GA-BP neural network model was developed. Subsequently, performance metrics were adopted to comprehensively understand the algorithm and evaluate the prediction accuracy. Results revealed that the GA-BP neural network model can be successfully used to predict the complex nonlinear relationship between the input variables and thermal performance of the PTC systems. The cosine effect has a great influence on the thermal performance; thereby the geometrical structure of the PTC systems w...

Proceedings Article
01 Jan 2017
TL;DR: This paper sketches the design of a learning-based service for IaaS-deployed data management applications that uses reinforcement learning to learn, over time, low-cost policies for provisioning virtual machines and dispatching queries across them.
Abstract: The onset of cloud computing has brought about computing power that can be provisioned and released on-demand. This capability has drastically increased the complexity of workload and resource management for database applications. Existing solutions rely on query latency prediction models, which are notoriously inaccurate in cloud environments. We argue for a substantial shift away from query performance prediction models and towards machine learning techniques that directly model the monetary cost of using cloud resources and processing query workloads on them. Towards this end, we sketch the design of a learning-based service for IaaS-deployed data management applications that uses reinforcement learning to learn, over time, low-cost policies for provisioning virtual machines and dispatching queries across them. Our service can effectively handle dynamic workloads and changes in resource availability, leading to applications that are continuously adaptable, cost effective, and performance aware. In this paper, we discuss several challenges involved in building such a service, and we present results from a proof-of-concept implementation of our approach.
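
A proof-of-concept of the cost-driven reinforcement learning idea can be written as tabular Q-learning over a toy provisioning environment; the state space, prices, and workload model are all assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(9)
MAX_VMS, MAX_Q = 8, 20
VM_COST, SLA_PENALTY = 1.0, 0.5        # $/VM-step and $/queued-query-step (assumed)
ACTIONS = (-1, 0, +1)                  # release / hold / provision one VM

def step(queue, vms, action):
    """Toy environment: bursty arrivals; each VM serves 3 queries per step."""
    vms = int(np.clip(vms + action, 1, MAX_VMS))
    arrivals = rng.poisson(6 if rng.random() < 0.3 else 2)
    queue = int(np.clip(queue + arrivals - 3 * vms, 0, MAX_Q))
    reward = -(VM_COST * vms + SLA_PENALTY * queue)   # direct monetary objective
    return queue, vms, reward

Q = np.zeros((MAX_Q + 1, MAX_VMS + 1, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1
queue, vms = 0, 2
for _ in range(200_000):               # online tabular Q-learning
    a = rng.integers(3) if rng.random() < eps else int(Q[queue, vms].argmax())
    nq, nv, r = step(queue, vms, ACTIONS[a])
    Q[queue, vms, a] += alpha * (r + gamma * Q[nq, nv].max() - Q[queue, vms, a])
    queue, vms = nq, nv

for q in (0, 5, 15):                   # inspect the learned provisioning policy
    print(f"queue={q:2d}, vms=2 -> action {ACTIONS[int(Q[q, 2].argmax())]:+d}")
```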

Journal ArticleDOI
TL;DR: In this article, the authors utilized risk and sensitivity analysis and applied artificial neural networks (ANNs) to predict the energy performance of buildings in terms of primary energy consumption and CO2 emissions represented in the Building Energy Rating (BER) scale.
Abstract: The energy in buildings is influenced by numerous factors characterized by non-linear multi-interrelationships. Consequently, the prediction of the energy performance of a building, in the presence of these factors, becomes a complex task. The work presented in this paper utilizes risk and sensitivity analysis and applies artificial neural networks (ANNs) to predict the energy performance of buildings in terms of primary energy consumption and CO2 emissions represented in the Building Energy Rating (BER) scale. Training, validation, and testing of the utilized ANN were implemented using simulation data generated from a stochastic analysis on the ‘Dwellings Energy Assessment Procedure’ (DEAP) energy model. Four alternative ANN models with varying levels of detail and accuracy are devised for fast and efficient energy performance prediction. Two fine-detailed models, one with 68 energy-related input factors and one with 34 energy-related input factors, offer quick and multi-factored estimations of the energy performance of buildings with 80 and 85% accuracy, respectively. Two low-detailed models, one with 16 and one with 8 energy-related input factors, offer less computationally intensive yet sufficiently accurate predictions with 92 and 94% accuracy, respectively.

Journal ArticleDOI
TL;DR: This study proposes statistical models based on multivariate regression analyses that are more accurate and practical than previous ones for estimating the performance of hard-rock TBMs.
Abstract: The risk of excavation operations due to high capital costs can be reduced by correct estimation of machine performance. Many models have been proposed to study this issue, but considering the nature of the problem, it is rather difficult to estimate tunnel boring machine (TBM) performance with simple linear prediction models. The purpose of the present study is to construct linear and non-linear multivariate prediction models to estimate TBM performance as a function of rock mass properties in granitic and mica-gneiss rocks. For this purpose, rock properties and machine data were obtained from a historical TBM tunneling project in Norway, and a database was then established to develop performance prediction models utilizing linear and non-linear multiple regression methods. This study proposes statistical models based on multivariate regression analyses that are more accurate and practical than previous ones for estimating the performance of hard-rock TBMs.

Journal ArticleDOI
TL;DR: It is proven that the proposed methods and the system have the capability to progressively update and refine gas turbine performance models with improved accuracy, which is crucial for model-based gas path diagnostics and prognostics.
Abstract: One of the key challenges of the gas turbine community is to empower condition-based maintenance with simulation, diagnostic, and prognostic tools that improve the reliability and availability of the engines. Within this context, inverse adaptive modelling methods have attracted much attention for their capability to tune engine models to match experimental test data and/or simulation data. In this study, an integrated performance adaptation system for estimating the steady-state off-design performance of gas turbines is presented. In the system, a novel method for compressor map generation and a genetic algorithm-based method for engine off-design performance adaptation are introduced. The methods are integrated into the PYTHIA gas turbine simulation software, developed at Cranfield University, and tested with experimental data of an aeroderivative gas turbine. The results demonstrate the promising capabilities of the proposed system for accurate prediction of gas turbine performance. This is achieved by matching a set of multiple off-design operating points simultaneously. It is proven that the proposed methods and the system have the capability to progressively update and refine gas turbine performance models with improved accuracy, which is crucial for model-based gas path diagnostics and prognostics.

Journal ArticleDOI
TL;DR: The experimental results have demonstrated that, unlike PSP in e-Learning systems, the regression-based approach should give better performance than the recommender system-based approach.
Abstract: This paper presents a study on Predicting Student Performance (PSP) in academic systems. In order to solve the task, we have proposed and investigated different strategies. Specifically, we consider this task as a regression problem and as a rating prediction problem in recommender systems. To improve the performance of the former, we proposed the use of additional features based on course-related skills. Moreover, to effectively utilize the outputs of these two strategies, we also proposed a combination of the two methods to enhance the prediction performance. We evaluated the proposed methods on a dataset built from the mark data of students in information technology at Vietnam National University, Hanoi (VNU). The experimental results demonstrated that, unlike PSP in e-Learning systems, the regression-based approach should give better performance than the recommender system-based approach. The integration of the proposed features also helps to enhance the performance of the regression-based systems. Overall, the proposed hybrid method achieved the best RMSE score of 1.668. These promising results are expected to provide students with early feedback about their (predicted) performance in future courses, thereby saving time for students and their tutors in determining which courses are appropriate for students' abilities.
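
The hybrid idea, blending a recommender-style baseline with a feature-based regressor, can be sketched on synthetic marks; the bias model and the two regression features below are simplifications, not the paper's course-skill features:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(11)
n_students, n_courses = 300, 40

# Synthetic 10-point marks: student ability + course difficulty + noise.
ability = rng.normal(7, 1, n_students)
difficulty = rng.normal(0, 1, n_courses)
marks = np.clip(ability[:, None] - difficulty[None, :]
                + rng.normal(0, 1, (n_students, n_courses)), 0, 10)
observed = rng.random((n_students, n_courses)) < 0.6   # ~60% of grades known

def biases(M, mask):
    """Recommender-style baseline: global mean + student bias + course bias."""
    mu = M[mask].mean()
    s_bias = np.array([(M[i, mask[i]] - mu).mean() if mask[i].any() else 0.0
                       for i in range(len(M))])
    c_bias = np.array([(M[mask[:, j], j] - mu - s_bias[mask[:, j]]).mean()
                       if mask[:, j].any() else 0.0 for j in range(M.shape[1])])
    return mu, s_bias, c_bias

mu, s_bias, c_bias = biases(marks, observed)

# Regression on simple features: the student's and the course's average so far
# (stand-ins for the paper's course-related skill features).
rows, cols = np.where(observed)
feats = np.column_stack([mu + s_bias[rows], mu + c_bias[cols]])
reg = Ridge().fit(feats, marks[rows, cols])

# Hybrid prediction for unobserved (student, course) pairs: average both methods.
ur, uc = np.where(~observed)
rec_pred = mu + s_bias[ur] + c_bias[uc]
reg_pred = reg.predict(np.column_stack([mu + s_bias[ur], mu + c_bias[uc]]))
hybrid = 0.5 * (rec_pred + reg_pred)
rmse = np.sqrt(np.mean((hybrid - marks[ur, uc]) ** 2))
print(f"hybrid RMSE on held-out grades: {rmse:.3f}")
```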

Proceedings ArticleDOI
19 Mar 2017
TL;DR: Gaussian process regression is numerically and experimentally investigated to predict the bit error rate of a 24 × 28 GBd QPSK WDM system.
Abstract: Gaussian process regression is numerically and experimentally investigated to predict the bit error rate of a 24 × 28 GBd QPSK WDM system. The proposed method produces accurate predictions from multi-dimensional and sparse measurement data.
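
With scikit-learn, Gaussian process regression over sparse, multi-dimensional measurements looks like this; the synthetic BER surface and kernel settings are invented for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(12)

# Sparse "measurements": (launch power dBm, distance km) -> log10 BER.
# The synthetic surface mimics the usual power/nonlinearity trade-off.
def log10_ber(power_dbm, dist_km):
    snr = 20 - 0.01 * dist_km - 0.5 * (power_dbm - 1.0) ** 2 / 4
    return -0.3 * snr + 1.0   # crude monotone map from SNR to log-BER

X = np.column_stack([rng.uniform(-4, 6, 40), rng.uniform(200, 2000, 40)])
y = log10_ber(X[:, 0], X[:, 1]) + rng.normal(0, 0.05, len(X))

gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=[2.0, 500.0]) + WhiteKernel(1e-2),
    normalize_y=True,
).fit(X, y)

# Predict (with uncertainty) for a configuration that was never measured.
query = np.array([[1.5, 1200.0]])
mean, std = gpr.predict(query, return_std=True)
print(f"predicted log10(BER) = {mean[0]:.2f} ± {std[0]:.2f}")
```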

Journal ArticleDOI
28 Feb 2017
TL;DR: The aim was to develop a model that can draw conclusions about students' academic success, which is helpful for identifying low-performing students at the beginning of the learning process.
Abstract: High prediction accuracy for student performance is helpful for identifying low-performing students at the beginning of the learning process, and machine learning is used to attain this objective. Machine learning techniques are used to discover models or patterns in data, and they are helpful in decision-making. The ability to predict the performance of students is crucial in our present education system. We applied machine learning concepts in this study. The dataset used in our study was taken from the Wolkite University registrar's office for the College of Computing and Informatics, covering 2004 to 2007 E.C. for each department. In this study, we collected students' transcript data, which included their final GPA and their grades in all courses. After pre-processing the data, we applied the machine learning methods: neural networks, Naive Bayes, and Support Vector Machines (SMO). Finally, we built a model for each method, evaluated its performance, and compared the results of the models. Using machine learning, the aim was to develop a model that can draw conclusions about students' academic success. Keywords: Classification, Machine Learning, Higher Education, Prediction, Student Success


Journal ArticleDOI
TL;DR: In this paper, the effect of the number of input variables on both the accuracy and the reliability of the artificial neural network (ANN) method for predicting the performance parameters of a solar energy system was investigated.
Abstract: In recent years, there has been a strong growth in solar power generation industries. The need for highly efficient and optimised solar thermal energy systems, stand-alone or grid connected photovoltaic systems, has substantially increased. This requires the development of efficient and reliable performance prediction capabilities of solar heat and power production over the day. This contribution investigates the effect of the number of input variables on both the accuracy and the reliability of the artificial neural network (ANN) method for predicting the performance parameters of a solar energy system. This paper describes the ANN models and the optimisation process in detail for predicting performance. Comparison with experimental data from a solar energy system tested in Ottawa, Canada during two years under different weather conditions demonstrates the good prediction accuracy attainable with each of the models using reduced input variables. However, it is likely true that the degree of model accuracy would gradually decrease with reduced inputs. Overall, the results of this study demonstrate that the ANN technique is an effective approach for predicting the performance of highly non-linear energy systems. The suitability of the modelling approach using ANNs as a practical engineering tool in renewable energy system performance analysis and prediction is clearly demonstrated.

Journal ArticleDOI
TL;DR: It is shown how the discretization of a wave equation can be studied theoretically to understand the performance limitations of the method on modern computer architectures, and a first-principles analysis of operational intensity for key time-stepping finite-difference algorithms is presented.

Journal ArticleDOI
TL;DR: This paper identifies performance interference factors and designs synthetic micro-benchmarks to obtain a VM's contention sensitivity and intensity features, which are correlated with VM performance degradation, and builds a VM performance prediction model using machine learning techniques to quantify precise levels of performance degradation.