
Showing papers on "Benchmark (computing) published in 1980"


Journal ArticleDOI
TL;DR: Further discussion of the use of least-squares techniques, with reference to the LINPACK Users' Guide of G. W. Stewart and colleagues.
Abstract: We provide further discussion of the use of least-squares techniques, with reference to the LINPACK Users' Guide of G. W. Stewart and colleagues (Society for Industrial and Applied Mathematics).
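The LINPACK benchmark mentioned in this entry times the solution of a dense linear system. A minimal sketch of that idea (assumed setup and problem size for illustration only, not the official HPL benchmark) using the conventional 2/3·n³ + 2·n² flop count for an LU-based solve:

```python
# LINPACK-style timing sketch: solve Ax = b, report GFLOP/s and a
# scaled residual as a sanity check. Sizes and seed are illustrative.
import time
import numpy as np

def linpack_like(n=1000, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    x_true = rng.standard_normal(n)
    b = A @ x_true
    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)              # LU factorization + triangular solves
    elapsed = time.perf_counter() - t0
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2  # conventional LINPACK flop count
    resid = np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x))
    return flops / elapsed / 1e9, resid

gflops, resid = linpack_like(500)
print(f"{gflops:.2f} GFLOP/s, scaled residual {resid:.2e}")
```

The residual check matters as much as the rate: a benchmark run that returns a wrong answer quickly is meaningless.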

671 citations


Journal ArticleDOI
TL;DR: In this paper, the authors analyze benchmark error in the context of the current widespread practice of using the capital asset pricing model (CAPM) to measure risk and show that performance evaluations based on the CAPM are prone to systematic errors of various kinds.
Abstract: In portfolio performance evaluation, one compares the return obtained on a managed portfolio to the return expected on an unmanaged portfolio having the same risk. The benchmark is the expected return on the unmanaged portfolio. It should accurately reflect the risk associated with the managed portfolio during the evaluation period. However, since it is always difficult to measure the risk associated with a managed portfolio, there is always potential for error in the benchmark. The purpose of this paper is to analyze benchmark error, and I do so in the context of the current widespread practice of using the capital asset pricing model (CAPM) to measure risk. As we shall see, performance evaluations based on the CAPM are prone to systematic errors of various kinds. Error in performance measurement can be ascribed to two sources. The first is random variation: the actual return is in part a function of unforeseeable events that cause parameter misestimation, events that tend to cancel each other's effects over repeated measurements. A second source of error is in the ex ante CAPM benchmark, an error that cannot be eliminated by repeated evaluations. Thus, ex ante benchmark errors are much more important than errors due to random causes; they make particular managers appear to "outperform" expectations when they fortuitously choose portfolios with negative errors in the benchmark, while managers unfortunate enough to choose portfolios with positive benchmark error will appear to do relatively poorly. We must remember that true portfolio management ability is not accurately indicated if the measured performance reflects the benchmark's error rather than the manager's skill.
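The CAPM benchmark the abstract describes is a simple formula, and benchmark error can be shown in a few lines. A hedged sketch (all numbers are invented for illustration, not taken from the paper):

```python
# CAPM benchmark: expected return on an unmanaged portfolio of the same
# beta. Measured performance ("alpha") = actual return minus benchmark.
def capm_benchmark(rf, beta, market_return):
    """Expected return under CAPM: rf + beta * (E[Rm] - rf)."""
    return rf + beta * (market_return - rf)

def alpha(portfolio_return, rf, beta, market_return):
    """Measured performance relative to the CAPM benchmark."""
    return portfolio_return - capm_benchmark(rf, beta, market_return)

# If the portfolio's true beta is 1.2 but risk is mis-measured as 1.0,
# the benchmark is set too low and the manager appears to "outperform"
# even though the true alpha is approximately zero:
rf, rm, actual = 0.03, 0.10, 0.114
print(alpha(actual, rf, beta=1.2, market_return=rm))  # ~0.0 (true alpha)
print(alpha(actual, rf, beta=1.0, market_return=rm))  # ~0.014 (spurious)
```

The second call illustrates the paper's point: a negative error in the benchmark (here, an understated beta) manufactures apparent skill that repeated evaluation cannot wash out.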

45 citations


Journal ArticleDOI
TL;DR: Using an extensive data base of sensitivities and evaluated covariances, this paper incorporates 11 fast-reactor benchmark experiments and 2 neutron-field benchmark experiments into the adjustment algorithm.
Abstract: Using an extensive data base of sensitivities and evaluated covariances, this work incorporates 11 fast-reactor benchmark experiments and 2 neutron-field benchmark experiments into the adjustment o...

22 citations



01 Sep 1980
TL;DR: The problem of identifying the maximum-size embedded pure network within the coefficient matrix of such models is shown to be among the class of NP-hard problems; therefore, the polynomially-bounded algorithms presented here do not guarantee network sets of maximum size.
Abstract: The solution of contemporary large-scale linear, integer, and mixed-integer programming problems is often facilitated by the exploitation of intrinsic special structure in the model. This paper deals with the problem of identifying embedded pure network rows within the coefficient matrix of such models and presents two heuristic algorithms for identifying such structure. The problem of identifying the maximum-size embedded pure network is shown to be among the class of NP-hard problems; therefore, the polynomially-bounded algorithms presented here do not guarantee network sets of maximum size. However, upper bounds on the size of the maximum network set are developed and used to evaluate the algorithms. Finally, the algorithms were tested with a number of large-scale, real-world models and the results of these benchmark runs are presented.
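To make the structure being sought concrete: a row set is a pure network if, within those rows, every column has entries in {0, +1, -1} with at most one +1 and at most one -1 (a node-arc incidence submatrix). A toy greedy heuristic in that spirit (my own sketch, not one of the paper's two algorithms, and with no optimality guarantee, consistent with the NP-hardness result):

```python
# Greedily grow a set of rows that keeps the pure-network property:
# per column, at most one +1 and one -1 among the selected rows.
def greedy_network_rows(A):
    rows = []
    pos, neg = {}, {}  # column -> count of +1 / -1 among selected rows
    for i, row in enumerate(A):
        ok = all(v in (0, 1, -1) for v in row)
        if ok:
            for j, v in enumerate(row):
                if v == 1 and pos.get(j, 0) >= 1:
                    ok = False
                    break
                if v == -1 and neg.get(j, 0) >= 1:
                    ok = False
                    break
        if ok:
            rows.append(i)
            for j, v in enumerate(row):
                if v == 1:
                    pos[j] = pos.get(j, 0) + 1
                elif v == -1:
                    neg[j] = neg.get(j, 0) + 1
    return rows

A = [
    [1, -1, 0, 0],
    [0, 1, -1, 0],
    [1, 0, 0, 2],   # rejected: entry outside {0, +1, -1}
    [-1, 0, 0, 1],
]
print(greedy_network_rows(A))  # [0, 1, 3]
```

Because acceptance depends on row order, a greedy pass can miss a larger network set, which is exactly why the paper pairs its heuristics with upper bounds for evaluation.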

7 citations


DOI
01 Jul 1980
TL;DR: The paper presents the important features of the MU5 instruction set and introduces the notation used in ISPS through the essential features of the ISPS description of MU5, and some new MU5 performance figures are given.
Abstract: The Instruction Set Processor (ISP) notation was originally developed as a means of describing formally the behavioural aspects of computer systems. ISPS is a computer language based on this notation, and for which a compiler and simulator have been produced. An ISPS description of the MU5 computer has been written, verified, and used in a series of evaluation experiments conducted at Carnegie-Mellon University (CMU), Pittsburgh, from Manchester University, England, using the ARPA network. The paper presents the important features of the MU5 instruction set and introduces the notation used in ISPS through the essential features of the ISPS description of MU5. Results of benchmark programs run on the ISPS simulation model of MU5 are related to actual results obtained by hardware monitoring of the MU5 processor, and some new MU5 performance figures are given. Results are also presented for the CMU Computer Family Architecture (CFA) project test programs, and some comment is included on the validity of this type of architectural evaluation.

7 citations


Journal ArticleDOI
Reeves, Bruner
TL;DR: A general scheme is described for generating efficient programs to implement arbitrary functions on bit-serial-arithmetic processors, based on logic design methodology and involves designing a logic network to realize a desired function.
Abstract: Parallel processors with bit-serial processing elements (PE's) usually implement arithmetic functions by a sequence of word-level arithmetic operations; however, basic operations must be specified at the bit level. In this correspondence the possibility of more efficiently implementing a function with a specially tailored sequence of bit-serial operations is considered. A general scheme is described for generating efficient programs to implement arbitrary functions on bit-serial-arithmetic processors. This scheme is based on logic design methodology and involves designing a logic network to realize a desired function. The parallel processor is then used to efficiently simulate a set of these networks. Heuristic design algorithms are used to generate the logic networks; several algorithms are described and compared with some benchmark functions. Several efficient PE designs are described and analyzed.
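The bit-serial idea above can be illustrated with a toy model (an assumed sketch, not the paper's scheme): treat each bit position of a Python integer as one 1-bit PE, so a word-level addition becomes a sequence of bitwise logic steps that directly mirror a full-adder network.

```python
# Simulate an array of 1-bit PEs: a_planes[k] is a bitmask holding bit k
# of every PE's operand, so each loop step is one bit-serial cycle.
def bit_serial_add(a_planes, b_planes):
    carry = 0
    out = []
    for a, b in zip(a_planes, b_planes):     # LSB-first bit planes
        out.append(a ^ b ^ carry)            # full-adder sum network
        carry = (a & b) | (carry & (a ^ b))  # full-adder carry network
    out.append(carry)                        # final carry plane
    return out

def to_planes(values, nbits):
    return [sum(((v >> k) & 1) << i for i, v in enumerate(values))
            for k in range(nbits)]

def from_planes(planes, npe):
    return [sum(((p >> i) & 1) << k for k, p in enumerate(planes))
            for i in range(npe)]

a, b = [3, 5, 7, 0], [1, 2, 6, 9]
planes = bit_serial_add(to_planes(a, 4), to_planes(b, 4))
print(from_planes(planes, 4))  # [4, 7, 13, 9] -- elementwise sums
```

Every PE performs the same logic operation each cycle, which is why tailoring the logic network (rather than composing word-level operations) can shave cycles off the whole array at once.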

6 citations


Journal ArticleDOI
01 Jul 1980
TL;DR: Results are presented of measurements of static and dynamic code usage for a number of benchmark programs run on the ISPS simulation models of these systems, and comparisons between the three architectures are made.
Abstract: In name-space architectures, the mapping of names onto fast registers is a hardware, rather than a software, function. The MU5 computer is an example of such an architecture, having a single-address instruction format with some stacking facilities, and this paper introduces proposed two-store-address and three-store-address architectures developed from MU5 concepts. ISPS descriptions of all three architectures have been written, verified and used in a series of experiments conducted at Carnegie-Mellon University, Pittsburgh, from Manchester University, England, using the ARPA Network. Results are presented of measurements of static and dynamic code usage for a number of benchmark programs run on the ISPS simulation models of these systems, and comparisons between the three architectures are made on the basis of these results.

5 citations


Journal ArticleDOI
TL;DR: This paper investigates various aspects of the computer performance measurement and evaluation procedure within the context of the computer evaluation, comparison, and selection process based on the Logic Scoring of Preference method.
Abstract: Computer evaluation, comparison, and selection is essentially a decision process. The decision making is based on a number of worth indicators, including various computer performance indicators. The performance indicators are obtained through the computer performance measurement procedure. Consequently, in this environment the measurement procedure should be completely conditioned by the decision process. This paper investigates various aspects of the computer performance measurement and evaluation procedure within the context of the computer evaluation, comparison, and selection process based on the Logic Scoring of Preference method. A set of elementary criteria for performance evaluation is proposed and the corresponding set of performance indicators is defined. The necessary performance measurements are based on a standardized set of synthetic benchmark programs and include three separate measurements: monoprogramming performance measurement, multiprogramming performance measurement, and multiprogramming efficiency measurement. Using the proposed elementary criteria, the measured performance indicators can be transformed into elementary preferences and then aggregated with other nonperformance elementary preferences obtained through the evaluation process. The applicability of presented elementary criteria is illustrated by numerical examples.
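The Logic Scoring of Preference aggregation step can be sketched with a weighted power mean, whose exponent tunes the operator between conjunction-like and disjunction-like behavior. The weights, preferences, and exponent below are invented for illustration, not the paper's actual criteria:

```python
# LSP-style aggregation: elementary preferences in [0, 1] combined by a
# weighted power mean. r -> -inf acts like AND, r = 1 is the arithmetic
# mean, r -> +inf acts like OR; r < 1 penalizes weak criteria.
def lsp_aggregate(prefs, weights, r):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    if r == 0:  # geometric-mean limit of the power mean
        prod = 1.0
        for e, w in zip(prefs, weights):
            prod *= e ** w
        return prod
    return sum(w * e ** r for e, w in zip(prefs, weights)) ** (1.0 / r)

# Hypothetical elementary preferences for the three measurements named in
# the abstract: monoprogramming, multiprogramming, and efficiency.
prefs = [0.9, 0.6, 0.8]
weights = [0.5, 0.3, 0.2]
print(lsp_aggregate(prefs, weights, r=-0.7))  # mildly conjunctive score
```

Choosing r below 1 encodes the requirement that a system weak on any one criterion should score poorly overall, which a plain weighted average would mask.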

3 citations



Journal ArticleDOI
01 Sep 1980
TL;DR: This paper compares the precision of several commonly used methods in a single test case, namely cycle time, instruction mix analysis and benchmarking.
Abstract: A number of tools exist for computer selection evaluation. The operational cost of applying these vary considerably as does the precision of the performance prediction. This paper compares the precision of several commonly used methods in a single test case, namely cycle time, instruction mix analysis and benchmarking.
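Of the three methods compared, instruction mix analysis is easy to make concrete: the predicted average instruction time is the mix-weighted sum of per-class instruction times. A hedged sketch with invented mix fractions and timings (not figures from the paper):

```python
# Instruction-mix prediction: estimated MIPS = 1 / (mix-weighted average
# instruction time in microseconds). All numbers are illustrative.
def mix_mips(mix, times_us):
    """mix: class -> fraction of executed instructions (sums to 1);
    times_us: class -> instruction time in microseconds."""
    avg_us = sum(frac * times_us[cls] for cls, frac in mix.items())
    return 1.0 / avg_us  # instructions per microsecond == MIPS

mix = {"load/store": 0.30, "fixed-point": 0.45, "float": 0.10, "branch": 0.15}
machine_a = {"load/store": 1.2, "fixed-point": 0.8, "float": 4.0, "branch": 1.0}
print(round(mix_mips(mix, machine_a), 3))  # ~0.787 MIPS for this mix
```

The method is cheap to apply but, as the paper's comparison suggests, its precision depends entirely on how well the assumed mix matches the real workload, which is what benchmarking measures directly.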

Journal ArticleDOI
30 Nov 1980
TL;DR: This paper reports on the functional evaluation of the three 16-bit microprocessors, namely the Intel 8086, the Zilog Z8000, and the Motorola MC68000, employed in several CRT applications, both monochrome and color.
Abstract: This paper reports on the functional evaluation of the three 16-bit microprocessors, namely the Intel 8086, the Zilog Z8000, and the Motorola MC68000. These microprocessors were employed in several CRT applications, both monochrome and color. Execution time benchmark tests were made, mechanization problems compared and instruction/architectural characteristics highlighted. Conclusions and recommendations are made applicable to terminals and similar Sperry Univac products.


01 Dec 1980
TL;DR: This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project, which considers a configuration with 27 distinct geologic layers, including 10 clay layers - 4 of which are to be modeled as possible slip planes.
Abstract: This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project. The first benchmark problem consisted of heated and unheated drifts at a depth of 790 m, whereas this problem considers a shallower level (650 m) more typical of the repository horizon. But more important, the first problem considered a homogeneous salt configuration, whereas this problem considers a configuration with 27 distinct geologic layers, including 10 clay layers - 4 of which are to be modeled as possible slip planes. The inclusion of layering introduces complications in structural and thermal calculations that were not present in the first benchmark problem. These additional complications will be handled differently by the various codes used to compute drift closure rates. This second benchmark problem will assess these codes by evaluating the treatment of these complications.

Journal ArticleDOI
28 May 1980
TL;DR: Examines the methodology, the problems encountered, and the solutions used in calibrating a benchmark model for a specific application: a large interactive information-processing application procured on loosely-coupled (no shared memory) multicomputer systems.
Abstract: A methodology for benchmarking dedicated, interactive systems has been developed at The MITRE Corporation. This methodology uses a synthetic program model of the application which runs on the proposed hardware/operating system configurations and is driven by a statistically derived load. System performance is measured by analyzing the synthetic transaction response times. The methodology yields assurances to a buyer that the benchmarked system has at least an a priori defined amount of computer power available for applications-oriented software. This paper examines the methodology and the problems that were encountered and solutions which have been used in calibrating a benchmark model for a specific application. The benchmark was designed to model a large interactive information processing application on a procurement requiring loosely-coupled (no shared memory) multicomputer systems. The model consists of a set of interacting synthetic program cells, each composed of several abstractly defined components. The model is maintained in a very high level language that is automatically translated into a standard High Order Language (typically FORTRAN or COBOL) for delivery to the competing vendors. These delivered model cells contain automatically generated size and time filler code that “calibrate” the cells to consume the appropriate CPU time and memory space as defined by the abstract size units after accounting for each vendor's hardware and proposed system design.