
Showing papers on "Benchmark (computing)" published in 1982


Journal ArticleDOI
TL;DR: The Monte Carlo analysis of the Winfrith Iron Benchmark required about 2 h of computing time on an IBM 3033; the discrete ordinates analysis required an order of magnitude more than this.
Abstract: The Monte Carlo analysis of the Winfrith Iron Benchmark required about 2 h of computing time on an IBM 3033; the discrete ordinates analysis required an order of magnitude more. The Winfrith Iron Benchmark had relatively simple geometry; for more complicated geometries the difference in computing times is expected to become even more significant. For integral measurements taken in complex situations such as a power reactor, the Monte Carlo adjustment procedure presented in this work may be the only one that is feasible. Other advantages of this method are that the removal of bias factors gives less opportunity for subjective judgment to influence the adjustment procedure, and that the use of point nuclear data makes it clear that the adjustments are not simply compensating for group-averaging errors. This work enables adjustment procedures to be applied more efficiently, more generally, and more reliably.
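
An illustrative sketch only (not the paper's code): a toy one-group Monte Carlo estimate of neutron transmission through an iron slab, showing the kind of integral quantity such a benchmark analysis compares against measurement. All constants are assumed for the example; a real analysis tracks direction and energy and uses point (continuous-energy) nuclear data as the paper describes.

```python
# Toy 1-D, one-group Monte Carlo transmission estimate; all values invented.
import random

SIGMA_T = 1.1      # assumed total macroscopic cross-section, 1/cm
P_ABSORB = 0.3     # assumed absorption probability per collision
THICKNESS = 5.0    # slab thickness, cm

def transmission(n_histories=100_000):
    leaked = 0
    for _ in range(n_histories):
        x = 0.0
        while True:
            x += random.expovariate(SIGMA_T)   # distance to next collision
            if x >= THICKNESS:
                leaked += 1                    # neutron escapes the slab
                break
            if random.random() < P_ABSORB:
                break                          # absorbed in the iron
            # scattering is forward-only in this toy model
    return leaked / n_histories

print(f"transmission fraction ~ {transmission():.4f}")
```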

35 citations


Journal ArticleDOI
TL;DR: In this paper, an experimental bilateral bargaining mechanism is developed for use on a computer and then studied under two types of time constraint; by shortening the time of each contract negotiation period and replicating these periods over one long experimental session, a multiperiod version of the bilateral mechanism is examined.
Abstract: An experimental bilateral bargaining mechanism is developed for use on a computer and then studied under two types of time constraint. First, a single buyer and single seller are allowed to negotiate a market contract during one long experimental session. The relative Pareto optimality results of these negotiations are comparable to the benchmark Siegel and Fouraker bargaining experiments. Then, by shortening the time of each contract negotiation period and replicating these periods over one long experimental session, a multiperiod version of the bilateral mechanism is examined. Using this repetitive mechanism, the occurrence of an impasse as a bargaining tool becomes important. The multiple contract Pareto optimality results and multiple impasse occurrences are then compared to the original single contract results.

26 citations


Proceedings ArticleDOI
17 Mar 1982
TL;DR: This paper presents further results in the development of a discrete event simulation computer based on a network of microprocessors, using two processors for the event set and the third for state statistics accumulation.
Abstract: This paper presents further results in the development of a discrete event simulation computer based on a network of microprocessors. The network is being designed by identifying simulation tasks which may be performed in parallel with other computation required by the simulation, and then assigning those subtasks to attached processing elements in the network. The tasks of priority queue processing and state accounting are considered in this paper. A simulation computer with three attached processors has been designed, using two processors for the event set and the third for state statistics accumulation. In a simulation model of this system, a forty to fifty percent reduction in the execution time of a benchmark simulation program is easily achieved. (The benchmark program itself uses an adaptive scheduling algorithm.) Further observations and suggestions for future research are presented.
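
A minimal sketch of the event-set mechanism the paper parallelizes: at its core it is a priority queue ordered by event time. The generic single-processor event loop below (Python's heapq standing in for the attached event-set processors) is illustrative, not the authors' design.

```python
# Generic discrete-event simulation loop with a priority-queue event set.
import heapq

class Simulator:
    def __init__(self):
        self.clock = 0.0
        self.events = []   # heap of (event_time, seq, action)
        self.seq = 0       # tie-breaker so simultaneous events never compare actions

    def schedule(self, delay, action):
        heapq.heappush(self.events, (self.clock + delay, self.seq, action))
        self.seq += 1

    def run(self, until):
        while self.events:
            t, _, action = heapq.heappop(self.events)
            if t > until:
                break
            self.clock = t
            action(self)   # the action may schedule further events

def arrival(sim):
    # state-statistics accumulation (the third processor's job) would hook in here
    sim.schedule(5.0, arrival)   # next arrival five time units later

sim = Simulator()
sim.schedule(0.0, arrival)
sim.run(until=20.0)
print(f"simulation ended at t = {sim.clock}")
```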

15 citations


Journal ArticleDOI
Nobuhide Suda
TL;DR: The test is aimed at a computational rather than a physical benchmark; clues for building a reliable database of reactor noise are obtained, and proceeding to a physical benchmark test is encouraged.

5 citations


Proceedings ArticleDOI
30 Aug 1982
TL;DR: This paper describes the process of benchmarking the diverse research environment that constitutes the workload of VAX/VMS at the University Analysis and Control Center at North Carolina State University; an analytic model was introduced and used in conjunction with benchmark data and hardware characteristics to derive performance measures for the system.
Abstract: This paper describes the process of benchmarking the diverse research environment that constitutes the workload of VAX/VMS at the University Analysis and Control Center at North Carolina State University. The benchmarking process began with a study of the system load and performance characteristics over the six-month period from January to June of 1981. Statistics were compiled on the number of active users, CPU usage by individual accounts, and peak load periods. Individual users were interviewed to determine the nature and major computing characteristics of the research they were conducting on the VAX. Information from all sources was compiled to produce a benchmark that closely paralleled actual system activity. An analytic model was introduced and used in conjunction with the benchmark data and hardware characteristics to derive performance measures for the system. Comparisons with measured system performance were conducted to demonstrate the accuracy of the model. The model was then employed to predict performance as the system workload was increased, to suggest improvements for the system, and to examine the effects of those improvements.
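
The abstract does not name the analytic model, so the hedged sketch below substitutes a common choice for interactive systems of that era: an M/M/1 queueing approximation that predicts response time as load grows. The service rate would come from the benchmark measurements; the numbers here are invented.

```python
# M/M/1 response-time approximation; all rates are hypothetical placeholders.
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue; valid only below saturation."""
    if arrival_rate >= service_rate:
        return float("inf")          # saturated: queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

service_rate = 20.0                  # jobs/s (hypothetical, from benchmark runs)
for load in (5.0, 10.0, 15.0, 19.0):
    t = mm1_response_time(load, service_rate)
    print(f"load {load:4.1f} jobs/s -> predicted mean response {t:.2f} s")
```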

5 citations


Journal ArticleDOI
TL;DR: A ‘synthetic’ program was constructed to load the different parts of the system and embodied in a job script with edit, compile, and run phases; the exercise contributed significantly to obtaining a satisfactory system, and the results show a reasonable correlation with subsequent system usage.
Abstract: The paper describes how a benchmark was constructed to assist in the selection of a general-purpose interactive computing system. Because no existing user programs were available, a ‘synthetic’ program was constructed to load the different parts of the system, and the program was embodied in a job script with edit, compile, and run phases. The program and job parameters were successfully derived from the operational requirements. The problems of running the benchmark and analysing the results are also described. The results of the exercise contributed significantly to obtaining a satisfactory system and show a reasonable correlation with subsequent system usage.
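
The synthetic program itself is not reproduced in the abstract; a rough sketch of the general idea, with arbitrary phase sizes standing in for parameters derived from operational requirements, might look like this:

```python
# Illustrative synthetic workload: mix CPU and I/O phases and time the job.
import os
import tempfile
import time

def cpu_phase(iterations=1_000_000):
    total = 0
    for i in range(iterations):      # arithmetic kernel standing in for "compute"
        total += i * i
    return total

def io_phase(blocks=100, block_size=4096):
    with tempfile.NamedTemporaryFile() as f:
        for _ in range(blocks):      # sequential writes standing in for file I/O
            f.write(os.urandom(block_size))
        f.flush()

start = time.perf_counter()
cpu_phase()
io_phase()
print(f"synthetic job took {time.perf_counter() - start:.3f} s")
```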

4 citations


Journal ArticleDOI
TL;DR: The comparative performance characteristics of COBOL programs on small versus large computer systems are investigated.
Abstract: The comparative performance characteristics of COBOL programs on small versus large computer systems are investigated. The vehicle is a set of synthetic benchmark COBOL programs, each measuring a particular aspect of COBOL computation, data manipulation, or input/output; each measurement is made on both a large-scale computer (IBM 370/158) and a minicomputer (Texas Instruments TI980). Results of a number of such experiments are presented, and comparisons are made between the results obtained from the two systems.
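
A sketch of the comparison methodology with hypothetical timings (the paper's actual measurements are not given in the abstract): per-aspect run-time ratios are how results from the two machines can be compared.

```python
# Hypothetical per-aspect timings, illustrative only.
timings = {
    # aspect: (IBM 370/158 seconds, TI980 seconds) -- invented values
    "computation":       (1.2, 9.5),
    "data manipulation": (0.8, 4.1),
    "input/output":      (2.5, 6.3),
}
for aspect, (large, small) in timings.items():
    print(f"{aspect:17s}: minicomputer slower by {small / large:4.1f}x")
```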

3 citations


Posted Content
TL;DR: The ultimate test of an econometric forecasting model is how well it performs relative to the optimal naive benchmark, defined as the mechanical extrapolation with the minimum mean square error (MSE).
Abstract: According to Mincer and Zarnowitz (1969), the ultimate test of an econometric forecasting model is how well it performs relative to the optimal naive benchmark, defined as the mechanical extrapolation with the minimum mean square error (MSE). The justification for this criterion is clear: since the optimal naive benchmark is available to all forecasters, the accuracy of any forecasting model depends on the additional information it contains relative to this extrapolative model.
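
A small worked example of the criterion, using made-up series. The common no-change rule stands in for the optimal naive benchmark, which Mincer and Zarnowitz define as the minimum-MSE mechanical extrapolation.

```python
# Relative-MSE comparison against a naive no-change forecast; data invented.
def mse(errors):
    return sum(e * e for e in errors) / len(errors)

actual = [100, 103, 101, 106, 110]
model  = [101, 102, 103, 105, 108]   # hypothetical model forecasts

# score from t = 1 on, so the naive rule y_hat(t) = y(t-1) always has a prior value
e_model = [a - f for a, f in zip(actual[1:], model[1:])]
e_naive = [a - p for a, p in zip(actual[1:], actual[:-1])]

ratio = mse(e_model) / mse(e_naive)
print(f"relative MSE (model/naive) = {ratio:.2f}  (below 1 beats the naive rule)")
```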

2 citations


Journal ArticleDOI
TL;DR: Digital computers ranging from microcomputers to large, general-purpose mainframes were tested with a benchmark program that measured the time required by a large system simulation, indicating that all but the smallest computers can handle the intended application.
Abstract: Digital computers ranging from microcomputers to large, general-purpose mainframes were tested with a benchmark program that measured the time required by a large continuous system simulation. The relative accuracy of the computed solutions is also reported. The benchmark was expressed in several programming languages as well. The results indicate that all but the smallest computers can handle the intended application. However, factors outside the scope of the benchmark must also be considered when evaluating a computer system for a particular application.
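
A sketch of what such a benchmark measures: a fixed continuous-system integration timed for speed and checked for accuracy. The kernel below (forward Euler on dy/dt = -y, whose exact solution is known) is illustrative, not the paper's program.

```python
# Time a fixed ODE integration and report wall-clock time and solution error.
import math
import time

def euler_benchmark(steps=1_000_000, t_end=10.0):
    h, y = t_end / steps, 1.0
    start = time.perf_counter()
    for _ in range(steps):
        y += h * (-y)                # one explicit Euler step of dy/dt = -y
    elapsed = time.perf_counter() - start
    error = abs(y - math.exp(-t_end))   # exact solution is exp(-t_end)
    return elapsed, error

elapsed, error = euler_benchmark()
print(f"run time {elapsed:.3f} s, absolute error {error:.2e}")
```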

1 citation


Journal ArticleDOI
TL;DR: The AMPX-KENO computer code package is commonly used to evaluate criticality in high-density spent fuel storage rack designs; consequently, it is important to know the reliability that can be placed in it.
Abstract: The AMPX-KENO computer code package is commonly used to evaluate criticality in high-density spent fuel storage rack designs. Consequently, it is important to know the reliability that can be placed...


Journal ArticleDOI
TL;DR: Criteria for the development of a benchmark test program are described, especially for the important simulation of terminal sessions, and are detailed in an example.