
Showing papers on "Performance prediction published in 2003"


Journal ArticleDOI
22 Apr 2003
TL;DR: Experimental evidence shows that by integrating these performance tools for local- and wide-area management, considerable improvements can be made to task scheduling, resource utilisation and load balancing on heterogeneous distributed computing systems.
Abstract: A performance prediction framework is described in which predictive data generated by the PACE toolkit is stored and published through a Globus MDS-based performance information service. Distributing this data allows additional performance-based middleware tools to be built; the paper describes two such tools, a local-level scheduler and a system for wide-area task management. Experimental evidence shows that by integrating these performance tools for local- and wide-area management, considerable improvements can be made to task scheduling, resource utilisation and load balancing on heterogeneous distributed computing systems.

89 citations


Dissertation
01 Dec 2003
TL;DR: In this article, a new extended empirical relation for the loss coefficient of fills is proposed where the viscous and form drag effects are accounted for as well as the buoyancy, momentum and fill height effects.
Abstract: The thermal performance prediction of wet-cooling towers is critically analyzed and refined. Natural draft counterflow towers and mechanical draft counterflow and crossflow towers are considered. The Merkel, Poppe and e-NTU heat and mass transfer methods of analysis are derived from first principles, as these methods form the cornerstone of wet-cooling tower performance evaluation. The critical differences between these methods, when applied to fill performance analyses and cooling tower performance evaluations, are highlighted. The reasons for these differences are discussed with the aid of psychrometric charts. A new extended empirical relation for the loss coefficient of fills is proposed where the viscous and form drag effects are accounted for as well as the buoyancy, momentum and fill height effects. The empirical equation for the transfer characteristic of fills is extended to include the effects of fill height and the inlet water temperature. Empirical equations to predict the temperature inversion profile, height of the temperature inversion and the height from which air is drawn into the cooling tower are developed. The influence of temperature and humidity inversions on the performance of wet-cooling towers is subsequently investigated. A comprehensive analytical computer program is developed to predict and optimize the performance of wet-cooling towers. Computer programs are also developed to generate cooling tower performance curves, analyze fill performance test data and plot psychrometric charts. Keywords: wet-cooling tower, Merkel, Poppe, e-NTU, natural draft, mechanical draft, fill, temperature inversion.
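In its simplest form, the e-NTU method named above reduces to the standard counterflow effectiveness relation. A minimal sketch follows (the NTU and capacity-rate-ratio values are purely illustrative, and the saturated-air enthalpy linearisation a real cooling-tower analysis requires is omitted):

```python
# Standard counterflow e-NTU relation; illustrative inputs only.
import math

def effectiveness_counterflow(ntu, c_ratio):
    """Effectiveness of a counterflow exchanger for capacity-rate ratio c_ratio."""
    if abs(c_ratio - 1.0) < 1e-9:
        return ntu / (1.0 + ntu)            # limiting case c_ratio == 1
    x = math.exp(-ntu * (1.0 - c_ratio))
    return (1.0 - x) / (1.0 - c_ratio * x)

# Assumed values for a fill test; real analyses derive these from fill data.
print(effectiveness_counterflow(ntu=1.5, c_ratio=0.7))   # ~0.65
```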

88 citations


Proceedings ArticleDOI
22 Apr 2003
TL;DR: A novel performance model for non-dedicated network computing is put forward and a performance prediction and task scheduling system called Grid Harvest Service (GHS) is designed and implemented.
Abstract: With the emergence of the Grid computing environment, performance measurement, analysis and prediction of non-dedicated distributed systems have become increasingly important. In this study, we put forward a novel performance model for non-dedicated network computing. Based on this model, a performance prediction and task scheduling system called Grid Harvest Service (GHS) has been designed and implemented. GHS consists of a performance measurement component, a prediction component and a scheduling component. Different scheduling algorithms are proposed for different situations. Experimental results show that the GHS system provides a satisfactory solution for performance prediction and scheduling of large applications and that GHS has real potential.

75 citations


Proceedings ArticleDOI
01 Jan 2003
TL;DR: Results from a parallel version of the Neighbourhood Approximation algorithm on a Linux cluster are shown, demonstrating the advantages of perfect parallelism and how it is possible to sample from the posterior probability distribution both to assess the accuracy of the approximate misfit surface and to generate automatic history match models.
Abstract: This paper will describe a strategy for rapid quantification of uncertainty in reservoir performance prediction. The strategy is based on a combination of streamline and conventional finite difference simulators. Our uncertainty framework uses the Neighbourhood Approximation algorithm to generate an ensemble of history match models, and has been described previously. A speedup in generating the misfit surface is essential since effective quantification of uncertainty can require thousands of reservoir model runs. Our speedup strategy for quantifying uncertainty in performance prediction involves using an approximate streamline simulator to rapidly explore the parameter space to identify good history matching regions, and to generate an approximate misfit surface. We then switch to a conventional, finite difference simulator, and selectively explore the identified parameter space regions. This paper will show results from a parallel version of the Neighbourhood Approximation algorithm on a Linux cluster, demonstrating the advantages of perfect parallelism. We show how it is possible to sample from the posterior probability distribution both to assess the accuracy of the approximate misfit surface and to generate automatic history match models.
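As a rough sketch of the two-stage search described above (the simulator functions are illustrative stand-ins, not the paper's streamline or finite-difference codes):

```python
# Hypothetical two-stage misfit evaluation: a cheap proxy simulator screens
# the parameter space; only the most promising models are re-run with the
# expensive full-physics simulator.
import random

def fast_streamline_misfit(model):       # stand-in for the approximate simulator
    return sum((p - 0.5) ** 2 for p in model)

def finite_difference_misfit(model):     # stand-in for the expensive simulator
    return fast_streamline_misfit(model) + random.gauss(0.0, 0.01)

random.seed(0)
# Stage 1: screen many candidate reservoir models with the approximate misfit.
candidates = [[random.random() for _ in range(4)] for _ in range(1000)]
screened = sorted(candidates, key=fast_streamline_misfit)

# Stage 2: selectively explore the good history-matching region.
refined = [(m, finite_difference_misfit(m)) for m in screened[:20]]
best_model, best_misfit = min(refined, key=lambda pair: pair[1])
print(best_misfit)
```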

67 citations


Journal ArticleDOI
TL;DR: This work presents an analytic performance modeling approach aimed at minimizing prediction cost while providing a prediction accuracy that is sufficient to enable major code and data mapping decisions, based on a performance simulation language called PAMELA.
Abstract: Performance prediction is an important engineering tool that provides valuable feedback on design choices in program synthesis and machine architecture development. We present an analytic performance modeling approach aimed at minimizing prediction cost while providing a prediction accuracy that is sufficient to enable major code and data mapping decisions. Our approach is based on a performance simulation language called PAMELA. Apart from simulation, PAMELA features a symbolic analysis technique that enables PAMELA models to be compiled into symbolic performance models that trade prediction accuracy for the lowest possible solution cost. We demonstrate our approach through a large number of theoretical and practical modeling case studies, including six parallel programs and two distributed-memory machines. The average prediction error of our approach is less than 10 percent, while the average worst-case error is limited to 50 percent. This accuracy is shown to be sufficient to correctly select the best coding or partitioning strategy. For programs expressed in a high-level, structured programming model, such as data-parallel programs, symbolic performance modeling can be entirely automated. We report on experiments with a PAMELA model generator built within a data-parallel compiler for distributed-memory machines. Our results show that, with negligible program annotation, symbolic performance models are automatically compiled in seconds, while their solution cost is on the order of milliseconds.
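To give a feel for what a compiled symbolic performance model buys (this is an illustration with SymPy, not PAMELA's own notation): the model below is a closed-form expression in problem size N and processor count P, so evaluating a mapping decision costs a substitution rather than a simulation run.

```python
# Symbolic cost model for a data-parallel loop with a reduction; all symbols
# and machine constants are illustrative assumptions.
import sympy as sp

N, P, t_calc, t_msg = sp.symbols('N P t_calc t_msg', positive=True)

# Perfectly partitioned computation plus a log2(P) reduction tree.
T = (N / P) * t_calc + sp.log(P, 2) * t_msg

# Evaluate the model for one candidate mapping (assumed machine constants).
print(sp.N(T.subs({N: 10**6, P: 64, t_calc: 1e-8, t_msg: 1e-4})))
```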

65 citations


Patent
27 Jun 2003
TL;DR: In this patent, a method is proposed for detecting a fault in a material processing system using a process performance prediction model; the system includes sensors coupled to the tool to measure tool data and a controller coupled to the sensors to receive the tool data, where the controller is configured to predict the process performance of the tool from that data.
Abstract: A material processing system including a process tool and a process performance prediction system. The performance prediction system includes sensors coupled to the tool to measure tool data and a controller coupled to the sensors to receive the tool data, where the controller is configured to predict the process performance of the tool using the tool data. A method for detecting a fault in a material processing system using a process performance prediction model is also provided. The method includes preparing the tool, initiating a process in the tool, and recording tool data to form a tool data matrix. The method also includes performing a matrix multiplication of the tool data matrix and a correlation matrix to form predicted process performance data, where the correlation matrix embodies the performance prediction model; comparing the predicted data with target data; and determining a fault condition of the processing system from the comparing step.
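A minimal numeric sketch of the prediction-and-comparison steps in the claim (array shapes, the noise level and the fault threshold are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))     # tool data matrix: 50 runs x 8 sensor channels
C = rng.normal(size=(8, 3))      # correlation matrix embodying the prediction model
target = X @ C + rng.normal(scale=0.1, size=(50, 3))   # target process data

predicted = X @ C                                      # matrix multiplication step
fault = np.abs(predicted - target).max(axis=1) > 0.3   # assumed fault threshold
print(int(fault.sum()), "of 50 runs flagged as faulty")
```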

64 citations


01 Jan 2003
TL;DR: In this paper, the authors explore a method to side-step these problems by combining sampling, modeling, and simulation, and show that if we take a sample of the input traffic, and feed it into a suitably scaled version of the system, we can extrapolate from the performance of the scaled system to that of the original.
Abstract: In networks and in Web server farms, it is useful to collect performance measurements, to monitor the state of the system, and to perform simulations. However, the sheer volume of traffic in large high-speed network systems makes it hard to monitor their performance or to simulate them efficiently. And the heterogeneity of the Internet means it is time-consuming and difficult to devise the traffic models and analytic tools that would allow us to work with summary statistics. We explore a method to side-step these problems by combining sampling, modeling and simulation. Our hypothesis is this: if we take a sample of the input traffic and feed it into a suitably scaled version of the system, we can extrapolate from the performance of the scaled system to that of the original. Our main finding is that when we scale an IP network that is shared by TCP-like, UDP and Web flows, and controlled by a variety of active queue management schemes, performance measures such as queueing delay and drop probability are left virtually unchanged. We show this in theory and in simulations. This makes it possible to capture the performance of large networks quite faithfully using smaller scale replicas.
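A toy rendition of the scale-down recipe follows; it shows only the mechanics (sample a fraction p of the flows and scale link capacity by p). Note that the paper's invariance result relies on closed-loop TCP-like traffic and AQM, which this open-loop sketch does not model, so the two delay statistics need not match here.

```python
import random

def simulate_fifo(arrivals, capacity):
    """Fluid FIFO queue: per-slot queueing delays (in slots)."""
    backlog, delays = 0.0, []
    for work in arrivals:                     # work arriving in each time slot
        backlog = max(0.0, backlog + work - capacity)
        delays.append(backlog / capacity)
    return delays

random.seed(1)
flows = [[random.expovariate(1.0) for _ in range(5000)] for _ in range(100)]
p = 0.1
sample = random.sample(flows, int(p * len(flows)))     # sample of the input traffic

full   = simulate_fifo([sum(f[t] for f in flows)  for t in range(5000)], 110.0)
scaled = simulate_fifo([sum(f[t] for f in sample) for t in range(5000)], p * 110.0)
print(sum(full) / len(full), sum(scaled) / len(scaled))
```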

34 citations


01 Jan 2003
TL;DR: This paper describes an approach based on performance submodels for each component, and a system assembly model to describe the binding together of library components and new components into a product.
Abstract: Component Based Software Engineering (CBSE) exploits the re-usability of configurable components to generate software products more quickly and with higher quality. CBSE offers potential advantages for performance engineering. If most of a new system consists of existing software components, it should be possible to predict properties like performance more easily than if all of the software were new. The performance-sensitive properties of the components can be extracted and stored in a library, and used to build a predictive model for the performance of a proposed product. This paper describes an approach based on performance submodels for each component, and a system assembly model to describe the binding together of library components and new components into a product. In this work a component can be arbitrarily complex, including a subsystem of concurrent processes. The description pays particular attention to identifying the information that must be provided with the components and with the bindings, and to providing for parameterization to describe different configurations and workloads.
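A hedged sketch of the assembly idea (the component names, the stored properties and the single-queue treatment are illustrative assumptions; the paper's submodels are richer):

```python
# Library of performance-sensitive properties extracted per component.
library = {
    "parser":   {"cpu_ms": 2.0},
    "database": {"cpu_ms": 5.0},
}

assembly = ["parser", "database"]   # system assembly model: binding into a product

def predict_response_ms(assembly, workload_tps):
    demand = sum(library[c]["cpu_ms"] for c in assembly) / 1000.0  # s per request
    utilisation = workload_tps * demand
    assert utilisation < 1.0, "model valid only below saturation"
    return 1000.0 * demand / (1.0 - utilisation)  # M/M/1 response-time formula

print(predict_response_ms(assembly, workload_tps=100))  # ~23.3 ms
```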

30 citations


Proceedings ArticleDOI
09 Jul 2003
TL;DR: If the authors take a sample of the input traffic and feed it into a suitably scaled version of the system, they can extrapolate from the performance of the scaled system to that of the original; this makes it possible to capture the performance of large networks quite faithfully using smaller scale replicas.
Abstract: In networks and in Web server farms, it is useful to collect performance measurements, to monitor the state of the system, and to perform simulations. However, the sheer volume of traffic in large high-speed network systems makes it hard to monitor their performance or to simulate them efficiently. And the heterogeneity of the Internet means it is time-consuming and difficult to devise the traffic models and analytic tools that would allow us to work with summary statistics. We explore a method to side-step these problems by combining sampling, modeling and simulation. Our hypothesis is this: if we take a sample of the input traffic and feed it into a suitably scaled version of the system, we can extrapolate from the performance of the scaled system to that of the original. Our main finding is that when we scale an IP network that is shared by TCP-like, UDP and Web flows, and controlled by a variety of active queue management schemes, performance measures such as queueing delay and drop probability are left virtually unchanged. We show this in theory and in simulations. This makes it possible to capture the performance of large networks quite faithfully using smaller scale replicas.

25 citations


Journal ArticleDOI
TL;DR: In this letter virtual sensing is investigated for tonal noise (both on and off resonance) in a long narrow duct and the results highlight the significant errors encountered as a result of using broadband transfer functions in lightly damped enclosures.
Abstract: Virtual error sensing is a novel active noise control technique, which is designed to produce a zone of attenuation remote from the physical error sensors. In this letter virtual sensing is investigated for tonal noise (both on and off resonance) in a long narrow duct. The performance of the virtual error sensors using real-time control is compared to the performance determined from an analytical model and the performance determined through the postprocessing of experimental data. Two examples of control using postprocessed experimental transfer function data are presented; the first relied on transfer functions measured using broadband noise and the second relied on transfer functions measured at discrete frequencies. The results highlight the significant errors encountered as a result of using broadband transfer functions in lightly damped enclosures.

22 citations


Journal ArticleDOI
TL;DR: In this article, a neural network-based approach was proposed to predict losses and permeability in wound toroidal cores of varying geometries over an induction range of 0.2–1.8 T (50 Hz).

Book Chapter
01 Jan 2003
TL;DR: In this chapter, the authors describe how data mining can be used to enhance the analysis of results obtained from a simulation exercise; they identify clustering as a particularly useful analysis technique and illustrate its potential in enhancing the analysis of building simulation performance predictions.
Abstract: Simulation exercises covering long periods (e.g., annual simulations) can produce large quantities of data. The result data set is often used primarily to determine key performance parameters such as the frequency binning of internal temperatures. Efforts to obtain an understanding of the reasons behind the predicted building performance are often only carried out to a limited extent, and simulation is therefore not used to its full potential. This paper describes how data mining can be used to enhance the analysis of results obtained from a simulation exercise. It identifies clustering as a particularly useful analysis technique and illustrates its potential in enhancing the analysis of building simulation performance predictions.
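As an illustration of the clustering idea (synthetic data standing in for simulated hourly zone temperatures; scikit-learn's KMeans is one possible choice, not the chapter's prescribed tool):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
hours = np.arange(24)
# Synthetic annual result set: 365 daily profiles of hourly zone temperature.
days = 20 + 5 * np.sin((hours - 14) / 24 * 2 * np.pi) + rng.normal(0, 1, (365, 24))

# Group the days into a few characteristic "day types".
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(days)
for label in range(4):
    members = days[km.labels_ == label]
    print(f"day type {label}: {len(members)} days, "
          f"mean daily peak {members.max(axis=1).mean():.1f} degC")
```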

Patent
24 Sep 2003
TL;DR: This patent addresses the problem of presenting an optimum system configuration that satisfies the required performance under the current operation pattern, by monitoring the constantly fluctuating operation pattern and performing performance prediction on the basis of it.
Abstract: PROBLEM TO BE SOLVED: To present an optimum system configuration that satisfies the required performance under the current operation pattern, by monitoring the constantly fluctuating operation pattern and performing performance prediction on the basis of it. SOLUTION: The operation pattern is extracted by monitoring the system's access log. The throughput and response time under that operation pattern are calculated by a performance prediction simulation part. When the predicted performance does not satisfy the system's performance requirements, a target for configuration change is selected according to a decision policy, and the calculation is repeated by the performance prediction simulation. The optimum configuration that satisfies the required performance is presented on an output device as the proposed configuration. COPYRIGHT: (C)2005,JPO&NCIPI

Proceedings ArticleDOI
22 Apr 2003
TL;DR: An executable analytical performance model of a high performance multithreaded architecture that supports percolation is developed; the results reveal that percolation and multithreading can complement each other.
Abstract: Percolation has recently been proposed as a key component of an advanced program execution model for future generation high-end machines featuring adaptive data/code transformation and movement for effective latency tolerance. An early evaluation of the performance effect of percolation is very important in the design space exploration of future generations of supercomputers. In this paper, we develop an executable analytical performance model of a high performance multithreaded architecture that supports percolation. A novel feature of our approach is modeling interactions between software (program) and hardware (architecture) components. We solve the analytical model using a queuing simulation tool enriched with synchronization. The proposed approach is effective and facilitates obtaining performance trends quickly. Our results indicate that percolation brings in significant performance gains (by a factor of 2.7 to 11). Further, our results reveal that percolation and multithreading can complement each other.

Proceedings ArticleDOI
01 Jan 2003
TL;DR: In this paper, a simplified mathematical model of a twin shaft gas turbine suitable for use in dynamic studies of both electric power generation plants and variable speed mechanical drive applications is presented, with relatively little loss in dynamic accuracy but considerable advantages in terms of computational time.
Abstract: This paper provides a simplified mathematical model of a twin-shaft gas turbine suitable for use in dynamic studies of both electric power generation plants and variable speed mechanical drive applications. The main purpose was to define a simulation block diagram, constituted by algebraic equations and simplified transfer functions, which can easily be derived from the gas plant design data using the equations and nomographs presented in the paper. The 3 to 30 MW power range of twin-shaft gas turbines is covered. The set-up parameters and details applicable to the model are listed in the paper for the various machine sizes and model series. The dynamic model has been developed by simplifying a more detailed one, also presented here, with relatively little loss in dynamic accuracy but considerable advantages in terms of computational time. In the proposed test case, the results of both models have been compared by simulating the transient response of a twin-shaft gas turbine powering a water-jet propulsor for high-speed ships. The accurate performance prediction capability of both models is verified, for a wide range of operating conditions, by comparison with test results from actual field installations. Copyright © 2003 by ASME
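For a feel of the kind of building block such a simulation diagram contains (the first-order form and the time constant are assumptions for illustration, not the paper's identified values):

```python
# One simplified transfer-function block: a first-order fuel-system lag
# 1/(tau*s + 1) driving normalised shaft power from a step demand.
import numpy as np
from scipy import signal

tau = 0.4                                        # assumed time constant, s
fuel_to_power = signal.TransferFunction([1.0], [tau, 1.0])

t = np.linspace(0.0, 3.0, 300)
t_out, power = signal.step(fuel_to_power, T=t)   # normalised step response
print(f"time to 95% of final power: {t_out[np.argmax(power >= 0.95)]:.2f} s")
```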


Book ChapterDOI
TL;DR: A simulation-based technique for the performance prediction of message-passing applications on cluster systems, using a detailed simulation model to predict the performance of fully-developed or skeletal code off-line without resorting to the real hardware.
Abstract: This paper describes a simulation-based technique for the performance prediction of message-passing applications on cluster systems. Given data measuring the performance of a target cluster in the form of standard benchmark results, along with the details of the chosen computing configuration (e.g., the number of nodes), it is possible to build and to validate automatically a detailed simulation model. This makes it possible to predict the performance of fully-developed or skeletal code off-line, i.e., without resorting to the real hardware. The reasonable accuracy obtained makes this approach particularly useful for preliminary performance testing of parallel code on non-available hardware. After a description of the approach and of the construction and validation of the simulation model, the paper presents a case study.

Patent
22 Jan 2003
TL;DR: In this patent, the performance and architecture parameters of an existing system are converted into the performance the system would have if every access hit the L1 cache, and from this figure the performance of a new system is predicted.
Abstract: PROBLEM TO BE SOLVED: To predict the performance of a new system precisely without the man-hours required for simulation or the like. SOLUTION: The performance 101 and the architecture parameters of an existing system are fed into a conversion processing 102 to obtain the performance 103 of the system in an "L1 cache complete hit" state, i.e. as if every access hit the L1 cache. The performance 103 of this state and the architecture parameters of the new system are then fed into a conversion processing 104 for the new system to obtain the predicted performance 105 of the new system. It is thus possible to predict the performance of the new system precisely without developing a simulator, which would require many man-hours. COPYRIGHT: (C)2004,JPO&NCIPI
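A minimal sketch of the two conversion steps, using a textbook CPI model as a stand-in for the patent's conversion processing (all parameter values are assumptions):

```python
def l1_complete_hit_cpi(measured_cpi, miss_rate, miss_penalty):
    # Step 1: back out the existing machine's L1-miss contribution.
    return measured_cpi - miss_rate * miss_penalty

def predict_new_cpi(hit_cpi, new_miss_rate, new_miss_penalty):
    # Step 2: apply the new machine's architecture parameters.
    return hit_cpi + new_miss_rate * new_miss_penalty

hit_cpi = l1_complete_hit_cpi(measured_cpi=1.8, miss_rate=0.05, miss_penalty=12)
print(predict_new_cpi(hit_cpi, new_miss_rate=0.02, new_miss_penalty=20))  # 1.6
```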


Dissertation
01 May 2003
TL;DR: In this thesis, a performance prediction and characterisation environment (PACE) is extended to characterise the performance-critical elements of both an application and its target execution environment and to deduce from this model a predicted behaviour of the application prior to its execution.
Abstract: This research builds on an application performance prediction and characterisation environment (known as PACE), whose aim is to characterise the performance-critical elements of both an application and its target execution environment, and to deduce from this model a predicted behaviour of the application prior to its execution. Underlying the research presented in this thesis are a number of themes: the tasks involved in the performance characterisation of applications and how this might be semi-automated; the level of abstraction at which these characterisations are performed in order to maintain a sufficient predictive accuracy; the automated refinement of these characterisations from runtime performance data; and the extension of both the target programming languages and the class of application at which these techniques are aimed. In this thesis a number of novel extensions to PACE are described. These include: a new transaction-based performance characterisation language that provides a flexible framework for describing broader classes of application; a performance monitoring framework (based on an extension to the Open Group's Application Response Measurement (ARM) standard) for the runtime monitoring of an application's data-dependent components and the automated refinement of performance models; and an adaptation of this performance characterisation to the prediction of Java applications. These contributions are demonstrated through their application to a number of scientific kernels. This thesis also documents how these predictive results can be used in a real-time distributed runtime management environment, and how these techniques can be applied to non-scientific codes, in particular to an IBM request-driven distributed web services demonstrator.

Book ChapterDOI
02 Jun 2003
TL;DR: A performance modeling and prediction approach that relies on the synergistic collaboration of compiler analysis, compiler-generated instrumentation (to observe relevant run-time input values) and multimodel performance modeling, allowing for higher performance modeling accuracy and, more importantly, more precise identification of the causes of performance problems.
Abstract: Performance understanding and prediction are extremely important goals for guiding the application of program optimizations or in helping programmers focus their efforts when tuning their applications. In this paper we survey current approaches to performance understanding and modeling for high-performance scientific applications. We also describe a performance modeling and prediction approach that relies on the synergistic collaboration of compiler analysis, compiler-generated instrumentation (to observe relevant run-time input values) and multimodel performance modeling. A compiler analyzes the source code to derive a discrete set of parameterizable performance models. The models use run-time data to define the values of their parameters. This approach, we believe, will allow for higher performance modeling accuracy and, more importantly, more precise identification of the causes of performance problems.
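A stand-in illustration of a compiler-derived parameterizable model whose coefficients are fixed by observed run-time values (the model form and the measurements below are assumptions):

```python
# "Compiler-emitted" model for a loop nest: T(n) = c0 + c1*n + c2*n*log(n);
# instrumentation supplies observed (n, time) pairs to fit the coefficients.
import numpy as np

n = np.array([1e4, 5e4, 1e5, 5e5, 1e6])
t = np.array([0.012, 0.071, 0.15, 0.88, 1.9])     # assumed measured times, s

A = np.column_stack([np.ones_like(n), n, n * np.log(n)])
coeffs, *_ = np.linalg.lstsq(A, t, rcond=None)    # least-squares parameter fit

predict = lambda m: coeffs @ np.array([1.0, m, m * np.log(m)])
print(f"predicted time for n=2e6: {predict(2e6):.2f} s")
```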

Journal ArticleDOI
TL;DR: An analytical approach to predict the execution time distribution of data-dependent parallel programs that feature N-ary and binary speculative parallel compositions is introduced, based on the use of statistical moments, which allows program execution time distribution to be approximated at O(1) solution complexity.
Abstract: Speculative parallelism refers to searching in parallel for a solution, such as finding a pattern in a database, where finding the first solution terminates the whole parallel process. Different performance prediction methods are required as compared to traditional parallelism. In this paper we introduce an analytical approach to predict the execution time distribution of data-dependent parallel programs that feature N-ary and binary speculative parallel compositions. The method is based on the use of statistical moments, which allows program execution time distribution to be approximated at O(1) solution complexity. Measurement results for synthetic distributions indicate an accuracy that lies in the percent range, while for empirical distributions on internet search engines the prediction accuracy is acceptable, provided sufficient workload unimodality.
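The key structural fact the method exploits is that a speculative composition completes when its first branch completes, i.e. its time is the minimum of the branch times. The Monte Carlo check below is only a reference sketch (the branch distribution is an assumption); the paper's contribution is the O(1) moment-based approximation of this distribution.

```python
import random, statistics

def speculative_time(branch_samplers):
    # First solution terminates the whole composition.
    return min(sample() for sample in branch_samplers)

random.seed(0)
branches = [lambda: random.lognormvariate(0.0, 0.5) for _ in range(8)]  # assumed
samples = [speculative_time(branches) for _ in range(100_000)]
print("mean:", statistics.mean(samples), "stdev:", statistics.stdev(samples))
```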

01 Jan 2003
TL;DR: The APPEAR method aims at the performance estimation of newly developed or adapted parts of software product families during the architecting phase and is currently being extended to address performance prediction for component compositions.
Abstract: The Analysis and Prediction of Performance for Evolving Architectures (APPEAR) method aims at the performance estimation of newly developed or adapted parts of software product families during the architecting phase. Early performance prediction allows the feasibility of systems to be checked before their implementation, and thus saves the money and effort of developing potentially infeasible products. In contrast to existing methods, it combines both structural and statistical techniques, allowing one to choose which part of the application is structurally modeled and which part is statistically approximated. The statistical approach is employed to model those parts of a system that remain unchanged for a long time during the evolution. The structural approach is used to model the parts of the system that evolve rapidly and are thus not yet implemented. Here too, statistical modeling helps to abstract from the internal details of components and thus to reduce modeling complexity. Often, a simulation model can be built that provides fast feedback on changes to the relevant parts. The method was checked using case studies in the Consumer Electronics and Medical Imaging System domains. The initial results are encouraging for the case of single components. The APPEAR method is currently being extended to address performance prediction for component compositions.

Journal ArticleDOI
TL;DR: In this article, a performance prediction method for side-channel-type regenerative pumps is developed, based on consideration of the centrifugal force field in the side channel and in the impeller vane grooves.
Abstract: The periphery pump (or regenerative pump) is widely applied as an automotive fuel pump due to its low specific speed (high head at small flow rate) and stable performance curves. In this study, a performance prediction method for side-channel-type periphery pumps has been developed. The prediction of the circulatory flow rate is based on consideration of the centrifugal force field in the side channel and in the impeller vane grooves. For the determination of the performance curve (head versus flow rate), momentum exchange theory is used. The effects of various geometric parameters and loss coefficients used in the performance prediction method on the head and efficiency are discussed, and the results are compared with experimental data.

Book ChapterDOI
26 Aug 2003
TL;DR: An analytical approach to predict the execution time distribution of data-dependent parallel programs that feature N-ary and binary speculative parallel compositions is introduced, based on the use of statistical moments, which allows program execution time distribution to be approximated at O(1) solution complexity.
Abstract: Speculative parallelism refers to searching in parallel for a solution, such as finding a pattern in a database, where finding the first solution terminates the whole parallel process. Different performance prediction methods are required as compared to traditional parallelism. In this paper we introduce an analytical approach to predict the execution time distribution of data-dependent parallel programs that feature N-ary and binary speculative parallel compositions. The method is based on the use of statistical moments, which allows program execution time distribution to be approximated at O(1) solution complexity. Measurement results for synthetic distributions indicate an accuracy that lies in the percent range, while for empirical distributions on internet search engines the prediction accuracy is acceptable, provided sufficient workload unimodality.

Proceedings ArticleDOI
18 Sep 2003
TL;DR: The principal conclusion for model-based performance models (predictive models derived from fundamental physics- and statistics-based considerations) is that analytical progress can be made for the performance of ATR system components, but that performance prediction for an entire ATR system under realistic conditions will likely require the combined use of Monte Carlo simulations, analytical development, and careful comparison to MOPs from real experiments.
Abstract: Performance of automatic target recognition (ATR) systems depends on numerous factors including the mission description, operating conditions, sensor modality, and ATR algorithm itself. Performance prediction models sensitive to these factors could be applied to ATR algorithm design, mission planning, sensor resource management, and data collection design for algorithm verification. Ideally, such a model would return measures of performance (MOPs) such as probability of detection (Pd), correct classification (Pc), and false alarm (Pfa), all as a function of the relevant predictor variables. Here we discuss the challenges of model-based and data-based approaches to performance prediction, concentrating especially on the synthetic aperture radar (SAR) modality. Our principal conclusion for model-based performance models (predictive models derived from fundamental physics- and statistics-based considerations) is that analytical progress can be made for the performance of ATR system components, but that performance prediction for an entire ATR system under realistic conditions will likely require the combined use of Monte Carlo simulations, analytical development, and careful comparison to MOPs from real experiments. The latter are valuable for their high fidelity, but have a limited range of applicability. Our principal conclusion for data-based performance models (those that fit empirically derived MOPs) is that they offer a potentially important means of extending the utility of empirical results. However, great care must be taken in their construction due to the necessarily sparse sampling of operating conditions, the high dimensionality of the input space, and the diverse character of the predictor variables. The applicability of such models for extrapolation is also an open question.

Proceedings ArticleDOI
13 Oct 2003
TL;DR: In this paper, a method for deriving time-series of attenuation on fixed satellite or terrestrial links is described, which uses meteorological model forecast data and radar data and hence permits time coincident derivations for multiple sites that include the spatial correlation properties inherent in weather systems.
Abstract: In this paper, a method for deriving time-series of attenuation on fixed satellite or terrestrial links is described. The method uses meteorological model forecast data and radar data, and hence permits time-coincident derivations for multiple sites that include the spatial correlation properties inherent in weather systems. The success of the technique depends upon the insertion of short-interval temporal properties (varying typically over 1 second to 15 minutes) that are statistically independent between stations. The technique has immediate application to simulation of the performance of multimedia satellite-to-earth links and longer-term significance for operational real-time performance prediction.
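A toy rendition of the layering idea: a slow, spatially shared component (standing in for the forecast/radar-derived attenuation) plus statistically independent short-interval fluctuations per station. The AR(1) fast process is an assumption; the paper does not specify the process.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(3600)                                   # one hour at 1 s samples
trend = 2.0 + 1.5 * np.sin(2 * np.pi * t / 3600)      # shared slow component, dB

def fast_component(rho=0.99, sigma=0.3):
    """Independent per-station scintillation-like AR(1) series."""
    x = np.zeros(len(t))
    for i in range(1, len(t)):
        x[i] = rho * x[i - 1] + sigma * np.sqrt(1 - rho**2) * rng.normal()
    return x

site_a = trend + fast_component()
site_b = trend + fast_component()   # shared trend induces inter-site correlation
print(np.corrcoef(site_a, site_b)[0, 1])
```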

Proceedings ArticleDOI
N. Caouras, M. Freda, F. Monfet, V.S. Aldea, O. Naeem, Tho Le-Ngoc, Benoit Champagne
04 May 2003
TL;DR: A highly flexible simulation platform catering to the easy and rapid evaluation of existing and future digital subscriber line (DSL) deployments as well as DSL modem performance prediction using practical modem implementations in a complex multi-segment environment is presented.
Abstract: This paper presents a highly flexible simulation platform catering to the easy and rapid evaluation of existing and future digital subscriber line (DSL) deployments as well as DSL modem performance prediction using practical modem implementations in a complex multi-segment environment. The paper outlines the methodology employed to architect and develop the core software, followed by a description of the performance prediction hooks for a variety of current and future DSL modem technologies. The graphical user interface (GUI) abstracting the core software for the user is described in terms of the various configuration options and the quick and easy graphical design of typical and complex deployment scenarios. The proposed simulator's calculations, notably theoretical SNR margin, maximum theoretical capacity and reach, plus performance evaluation using user-designed modem models, are also outlined. To support the accuracy of the new simulator, results for some example scenarios are presented and compared against other available simulators.
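For flavour, the kind of theoretical capacity figure such a simulator reports can be sketched with the standard DMT gap approximation (the platform's exact formulas are not given in the abstract; the gap, margin, coding gain and per-tone SNRs below are assumptions):

```python
import math

SYMBOL_RATE = 4000.0           # DMT symbols per second
GAMMA_DB = 9.8 + 6.0 - 3.0     # Shannon gap + target margin - coding gain (assumed)

def capacity_bps(snr_db_per_tone):
    gamma = 10 ** (GAMMA_DB / 10)
    bits = sum(math.log2(1 + 10 ** (snr / 10) / gamma) for snr in snr_db_per_tone)
    return SYMBOL_RATE * bits    # bits per symbol times symbol rate

# Assumed per-tone SNR profile for a short loop (dB, 256 downstream tones).
print(capacity_bps([45 - 0.15 * i for i in range(256)]) / 1e6, "Mbit/s")
```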

Proceedings ArticleDOI
20 Mar 2003
TL;DR: In this article, the performance of sensors operating within the marine boundary layer is severely influenced by the actual atmospheric conditions and the sea surface, and the implications for a multispectral (IR/mmW) approach are discussed.
Abstract: The performance of sensors operating within the marine boundary layer is severely influenced by the actual atmospheric conditions and the sea surface. Propagation models are in existence which cope with the varying environment and allow a performance prediction for sensors in different bands of the electromagnetic spectrum. Model calculations give evidence for complementary performance of sensors operating in the IR region and at millimeter waves (35/94 GHz). To validate existing radar propagation models such as TERPEM and to compare IR and mm-wave propagation over sea under various atmospheric conditions, joint experiments were conducted over transmission ranges well beyond the horizon, assisted by a careful characterization of the environment. This paper describes the experimental approach and gives representative results for measurement and simulation. The implications on performance, especially for a multispectral (IR/mmW) approach, are discussed.

Journal ArticleDOI
TL;DR: It is shown that the k-processor prediction method delivers accurate performance predictions and the validity of the analysis on several robot motion planning problems is demonstrated.
Abstract: In this paper we discuss methods for predicting the performance of any formulation of randomized parallel search, and propose a new performance prediction method that is based on obtaining an accurate estimate of the k-processor run-time distribution. We show that the k-processor prediction method delivers accurate performance predictions and demonstrate the validity of our analysis on several robot motion planning problems.
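The underlying observation can be stated compactly: if F(t) is the run-time CDF of one independent randomized search, the first of k processors to succeed finishes with CDF F_k(t) = 1 - (1 - F(t))^k. The sketch below estimates F empirically and predicts the k-processor mean run time (the single-processor samples are synthetic stand-ins for measured runs):

```python
import random

def predict_k_processor_mean(samples, k, trials=20_000):
    # Draw k independent single-processor times from the empirical
    # distribution and keep the minimum: the first solution wins.
    return sum(min(random.choices(samples, k=k)) for _ in range(trials)) / trials

random.seed(0)
one_proc = [random.expovariate(1.0) for _ in range(10_000)]  # stand-in measurements
for k in (1, 2, 4, 8):
    print(k, predict_k_processor_mean(one_proc, k))          # ~1/k for exp(1)
```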