
Showing papers on "Benchmark (computing)" published in 1986


Journal ArticleDOI
TL;DR: In this article, the robustness of the Security Market Line relationship when the market proxy employed is not mean-variance efficient is theoretically evaluated. The analysis focuses on the behavior of the "benchmark errors," the deviations of assets and portfolios from the Security Market Line; the results indicate that the magnitudes of the errors exhibit continuous but not uniformly continuous behavior.
Abstract: This paper theoretically evaluates the robustness of the Security Market Line relationship when the market proxy employed is not mean-variance efficient. The analysis focuses on the behavior of the "benchmark errors," the deviations of assets and portfolios from the Security Market Line. First, we characterize how the location of an asset in mean-variance space determines its benchmark error. Then the continuity properties of the benchmark errors are studied. The results indicate that the magnitudes of the errors exhibit continuous but not uniformly continuous behaviors. The relative rankings based on deviations from the Security Market Line, however, exhibit some severe discontinuities. In fact, these can be exactly reversed for two proxies arbitrarily close in mean-variance space.
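For orientation only (standard CAPM notation; the paper's own definitions may differ in detail), the Security Market Line relative to a market proxy p and the benchmark error of an asset i can be written as:

```latex
% Standard notation for the SML under a market proxy p; not quoted from the paper.
\[
  \text{SML:}\quad \mathbb{E}[R_i] \;=\; R_f + \beta_{ip}\bigl(\mathbb{E}[R_p] - R_f\bigr),
  \qquad
  \beta_{ip} \;=\; \frac{\operatorname{Cov}(R_i, R_p)}{\operatorname{Var}(R_p)}
\]
\[
  \text{benchmark error:}\quad
  e_i(p) \;=\; \mathbb{E}[R_i] \;-\; \Bigl(R_f + \beta_{ip}\bigl(\mathbb{E}[R_p] - R_f\bigr)\Bigr)
\]
```

The errors vanish for every asset exactly when the proxy p is mean-variance efficient, which is why the paper studies their behavior for inefficient proxies.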

66 citations


Journal ArticleDOI
01 May 1986
TL;DR: From the standpoint that the performance of a single processor of a data flow computer must be comparable to that of a von Neumann computer, the two computers are compared and an improvement of the SIGMA-1 instruction set is proposed.
Abstract: A processing element and a structure element of the data flow computer SIGMA-1 for scientific computations are now operational. The elements are evaluated on several benchmark programs. For efficient execution of loop constructs, the sticky token mechanism, which holds loop invariants, is evaluated and exhibits a remarkable effect. From the standpoint that the performance of a single processor of a data flow computer must be comparable to that of a von Neumann computer, a comparison of the two computers is discussed and an improvement of the SIGMA-1 instruction set is proposed.
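As a rough illustration of the idea (not SIGMA-1's actual matching hardware; the class and names below are invented), a dataflow node normally consumes one token per input each time it fires, whereas a sticky token marks a loop invariant that is matched on every iteration without being consumed or re-sent:

```python
# Toy illustration of a dataflow firing rule with "sticky" tokens.
# Not SIGMA-1's implementation; names and structure are invented for clarity.

class DataflowNode:
    def __init__(self, op, num_inputs):
        self.op = op
        self.inputs = [None] * num_inputs      # tokens waiting on each input port
        self.sticky = [False] * num_inputs     # sticky ports keep their token after firing

    def set_sticky(self, port, value):
        """Install a loop-invariant token that is matched but never consumed."""
        self.inputs[port] = value
        self.sticky[port] = True

    def send(self, port, value):
        """Deliver a token; fire when every input port holds a token."""
        self.inputs[port] = value
        if all(v is not None for v in self.inputs):
            result = self.op(*self.inputs)
            # consume only the non-sticky tokens
            self.inputs = [v if s else None for v, s in zip(self.inputs, self.sticky)]
            return result
        return None

# Loop body: multiply each element of a stream by a loop-invariant scale factor.
node = DataflowNode(lambda x, scale: x * scale, num_inputs=2)
node.set_sticky(1, 10)                          # invariant sent once, reused every iteration
print([node.send(0, x) for x in range(5)])      # [0, 10, 20, 30, 40]
```

Without the sticky token, the invariant would have to be recirculated onto its input arc on every loop iteration.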

63 citations


Journal ArticleDOI
15 Jun 1986
TL;DR: A prototype of a temporal database management system that supports the temporal query language TQuel, a superset of Quel, and handles four types of databases (static, rollback, historical, and temporal) was built by extending Ingres.
Abstract: A prototype of a temporal database management system was built by extending Ingres. It supports the temporal query language TQuel, a superset of Quel, and handles four types of databases: static, rollback, historical, and temporal. A benchmark set of queries was run to study the performance of the prototype on the four types of databases. We analyze the results of the benchmark and identify the major factors that have the greatest impact on the performance of the system. We also discuss several mechanisms to address the performance bottlenecks we encountered.
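For context, the usual taxonomy behind these four types (stated here in our own terms, not the prototype's internals) distinguishes which time dimensions are recorded: static databases keep neither, rollback databases keep transaction time, historical databases keep valid time, and temporal databases keep both. A minimal sketch of a bitemporal record and the two kinds of time-slice query:

```python
# Minimal, hypothetical sketch of a bitemporal record; not the TQuel prototype.
from dataclasses import dataclass

INF = float("inf")

@dataclass
class Fact:
    value: str
    valid_from: float   # when the fact was true in the modeled world
    valid_to: float
    tx_start: float     # when the fact was recorded in the database
    tx_end: float

table = [
    Fact("salary=30k", valid_from=1980, valid_to=1984, tx_start=1980, tx_end=INF),
    Fact("salary=35k", valid_from=1984, valid_to=INF,  tx_start=1985, tx_end=INF),
]

def rollback_query(table, as_of_tx):
    """Rollback databases slice on transaction time: what did the database say then?"""
    return [f for f in table if f.tx_start <= as_of_tx < f.tx_end]

def historical_query(table, when):
    """Historical databases slice on valid time: what was true in the world then?"""
    return [f for f in table if f.valid_from <= when < f.valid_to]

print(rollback_query(table, 1984))    # only the first fact had been recorded by 1984
print(historical_query(table, 1986))  # the 35k salary was the one valid in 1986
```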

58 citations


Journal ArticleDOI
K. B. Gaylin
01 Apr 1986
TL;DR: Users of a windowing system were studied for the purpose of creating an empirically based windowing benchmark; the data revealed a relatively high use of a small number of commands, primarily concerned with moving between windows.
Abstract: Users of a windowing system were studied for the purpose of creating an empirically based windowing benchmark. Each subject filled out a paper questionnaire that sampled subjective opinions of windowing commands and was observed for approximately 22 minutes while performing typical daily activities on the computer. Subjects were also asked to demonstrate a typical log-on procedure and were personally interviewed. Windowing command frequencies and screen layout characteristics were collected and analyzed. The data revealed a relatively high use of a small number of commands that were primarily concerned with moving between windows. This study enabled the creation of a more accurate windowing benchmark task.

44 citations


01 Nov 1986
TL;DR: To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes.
Abstract: It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.
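One simple way to express the balance in question (our notation; the report's actual metric may differ) is as the ratio of a node's computation rate to its communication rate, with a balanced machine keeping the two comparable:

```latex
% Illustrative definition only; the report's metric may differ.
\[
  \text{balance} \;=\; \frac{r_{\text{comp}}}{r_{\text{comm}}}
  \;=\; \frac{\text{operations per second on one node (no communication)}}
             {\text{operands per second delivered between neighboring nodes}}
\]
```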

38 citations


Journal ArticleDOI
01 May 1986
TL;DR: A DFM prototype machine is implemented and DFM performance is evaluated in a simulation on the register transfer level using several benchmark programs, showing the DFM single processor system to be about five times faster than conventional machines which use the same device technology.
Abstract: The architecture of a data flow machine, called DFM, is developed for parallel list processing. The DFM can maximally exploit parallelism inherent in list processing, due to its ultra-multi-processing mechanism, packet communication-based parallel and pipeline execution mechanism, and lenient cons mechanism. A practical DFM implementation is described. A DFM prototype machine is implemented and DFM performance is evaluated in a simulation at the register transfer level using several benchmark programs. The DFM single processor system is shown to be about five times faster than conventional machines which use the same device technology, while a multi-processor DFM system is shown to achieve a linear speed-up ratio of 0.6 to 0.9.
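For reference, a "linear speed-up ratio" is conventionally read as parallel efficiency; under that reading (not spelled out in the abstract), the quoted figure means:

```latex
% Conventional definition of parallel efficiency; our reading of the quoted ratio.
\[
  E(p) \;=\; \frac{T_1}{p\,T_p} \;\in\; [0.6,\; 0.9],
\]
```

where $T_1$ is the single-processor execution time and $T_p$ the time on $p$ processors.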

36 citations


Book ChapterDOI
14 Jul 1986
TL;DR: The design and implementation of a relatively portable Prolog compiler achieving 12K LIPS on the standard benchmark is described and uses decompilation to implement retract, clause, and listing.
Abstract: The design and implementation of a relatively portable Prolog compiler achieving 12K LIPS on the standard benchmark is described. The compiler is incremental and uses decompilation to implement retract, clause, and listing, as well as support the needs of its four-port debugger. The system supports modules, garbage collection, database pointers, and a full range of built-ins.
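The "standard benchmark" behind LIPS ratings of this era is usually naive reverse (nrev) of a 30-element list, which performs 496 logical inferences; assuming that is the benchmark meant here, the figure corresponds to roughly 41 ms per nrev call:

```latex
% Assuming the benchmark is naive reverse (nrev) on a 30-element list (496 inferences).
\[
  \text{LIPS} \;=\; \frac{\text{logical inferences}}{\text{execution time}},
  \qquad
  t_{\text{nrev}(30)} \;\approx\; \frac{496}{12{,}000\ \text{LIPS}} \;\approx\; 41\ \text{ms}.
\]
```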

28 citations


Proceedings ArticleDOI
01 Jul 1986
TL;DR: This paper describes a retargetable instruction reorganizer that performs targeting and evaluation order determination by applying a well-known algorithm for optimal code generation for expressions to object code, thereby avoiding a phase ordering problem often encountered by higher-level optimizers.
Abstract: Extant peephole optimizers can perform many optimizations that are handled by higher-level optimizers. This paper describes a retargetable instruction reorganizer that performs targeting and evaluation order determination by applying a well known algorithm for optimal code generation for expressions to object code. By applying the algorithm to object code after code generation and optimization, a phase ordering problem often encountered by higher-level optimizers is avoided. It minimizes the number of registers and temporaries required to compile expressions by rearranging machine instructions. For some machines, this can result in smaller and faster programs. By generalizing its operation, the reorganizer can also be used to reorder instructions to avoid delays in pipelined machines. For one pipelined machine, it has provided a 5 to 10 percent improvement in the execution speed of benchmark programs.
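The "well known algorithm for optimal code generation for expressions" is presumably Sethi-Ullman numbering; a compact sketch of that labeling and ordering rule (illustrative Python, not the reorganizer's own code) is shown below.

```python
# Sethi-Ullman numbering: label(n) = registers needed to evaluate node n without
# spilling; then emit the register-hungrier subtree first.
# Illustrative sketch only; not taken from the paper's reorganizer.

class Leaf:
    def __init__(self, name): self.name = name

class Op:
    def __init__(self, op, left, right): self.op, self.left, self.right = op, left, right

def label(n, is_left=True):
    if isinstance(n, Leaf):
        return 1 if is_left else 0   # a right-operand leaf can be used directly from memory
    l = label(n.left, True)
    r = label(n.right, False)
    return max(l, r) if l != r else l + 1

def order(n, out):
    """Append an evaluation order that uses the minimum number of registers."""
    if isinstance(n, Leaf):
        out.append(n.name)
        return
    l, r = label(n.left, True), label(n.right, False)
    first, second = (n.left, n.right) if l >= r else (n.right, n.left)
    order(first, out)                # evaluate the register-hungrier operand first
    order(second, out)
    out.append(n.op)

# (a + b) * (c - d): two registers suffice.
tree = Op("*", Op("+", Leaf("a"), Leaf("b")), Op("-", Leaf("c"), Leaf("d")))
print(label(tree))                   # 2
seq = []; order(tree, seq)
print(seq)                           # ['a', 'b', '+', 'c', 'd', '-', '*']
```

The reorganizer described here applies the same idea to already-generated machine instructions rather than to a source-level expression tree.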

25 citations


Proceedings ArticleDOI
05 Feb 1986
TL;DR: Since MBDS is designed specifically for performance gains, capacity growth, and hardware upgrade, it is benchmarked to verify whether its design and implementation can indeed relate the gains and growth directly to the multiplicity of backends in terms of the response-time reduction and invariance.
Abstract: Traditional database systems have long been plagued by performance problems when there is either an increase in the mainframe usage or in the database applications. Solutions to these problems have been sought, first, by offloading the database system from the mainframe computer to a single, dedicated backend computer. The backend computer has its own disk storage, is used to perform all of the database operations, and interacts with the mainframe. However, database systems with this software single-backend approach still encounter the performance problems when either the backend usage or database applications increase. The software multiple-backend approach to database management and hardware upgrade is therefore proposed to overcome the performance-gains and capacity-growth problems of either traditional mainframe-based database systems or conventional software single-backend database systems. In this paper we specify the design requirements and issues of software multi-backend database systems. We show how these requirements and issues affect the design and implementation of a multi-backend database system known as MBDS. Since MBDS is designed specifically for performance gains, capacity growth, and hardware upgrade, we benchmark MBDS in order to verify whether its design and implementation can indeed relate the gains and growth directly to the multiplicity of backends in terms of the response-time reduction and invariance.
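The two goals named at the end, response-time reduction and invariance, can be stated informally as follows (our paraphrase, not the paper's formulas), where T(k, D) is the response time with k backends on a database of size D:

```latex
% Informal statement of the two goals; not taken verbatim from the paper.
\[
  \underbrace{T(k, D) \;\approx\; \frac{T(1, D)}{k}}_{\text{response-time reduction: fixed database }D}
  \qquad\qquad
  \underbrace{T(k, kD) \;\approx\; T(1, D)}_{\text{response-time invariance: database grows with }k}
\]
```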

21 citations


Journal ArticleDOI
TL;DR: The simulated annealing method (SAM), as discussed by the authors, is a new and powerful approach to solving integer optimization problems. Applied to the facility layout problem (QAP), it generally matches or improves upon the best known solutions for classical benchmark problems.
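A compact sketch of how simulated annealing is typically applied to the facility-layout (QAP) formulation follows; it is illustrative only, with arbitrary parameters, and is not the authors' SAM implementation: swap two facilities, accept uphill moves with probability exp(-delta/T), and cool the temperature geometrically.

```python
# Illustrative simulated annealing for the quadratic assignment problem (QAP).
# Not the paper's SAM implementation; cooling schedule and parameters are arbitrary.
import math, random

def qap_cost(perm, flow, dist):
    """Cost of assigning facility i to location perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]] for i in range(n) for j in range(n))

def anneal(flow, dist, T=100.0, cooling=0.95, sweeps=200):
    n = len(flow)
    perm = list(range(n))
    random.shuffle(perm)
    cost = qap_cost(perm, flow, dist)
    best, best_cost = perm[:], cost
    while T > 1e-3:
        for _ in range(sweeps):
            i, j = random.sample(range(n), 2)           # propose: swap two facilities
            perm[i], perm[j] = perm[j], perm[i]
            new_cost = qap_cost(perm, flow, dist)
            delta = new_cost - cost
            if delta <= 0 or random.random() < math.exp(-delta / T):
                cost = new_cost                          # accept the (possibly uphill) move
                if cost < best_cost:
                    best, best_cost = perm[:], cost
            else:
                perm[i], perm[j] = perm[j], perm[i]      # reject: undo the swap
        T *= cooling                                     # geometric cooling schedule
    return best, best_cost

# Tiny example: 4 facilities, 4 locations.
flow = [[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]]
dist = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
print(anneal(flow, dist))
```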

13 citations


01 Jan 1986
TL;DR: Presents the results of a comprehensive benchmark of the relational Main Memory Database System (MMDBS) that is the foundation of the interactive office system Office-By-Example (OBE), and identifies issues that must be considered in the design and implementation of MMDBSs.
Abstract: In this paper we present the results of a comprehensive benchmark of the relational Main Memory Database System (MMDBS) that is the foundation of the interactive office system Office-By-Example (OBE). Based on this case study, we identify issues that must be considered in the design and implementation of MMDBSs. We determine relevant performance metrics and describe techniques for benchmarking MMDBSs.

01 Jan 1986
TL;DR: In intensive work over a four-week period in the summer of 1986, seven problems, inspired by various capabilities in computer vision and proposed as benchmarks for a DARPA workshop on parallel architectures, were studied and implemented on the Butterfly.
Abstract: In intensive work over a four-week period in the summer of 1986, seven problems were studied and implemented on the Butterfly. The problems were inspired by various capabilities in computer vision and were proposed as benchmarks for a DARPA workshop on parallel architectures. They were: convolution and zero-crossing detection for edges, edge tracking, connected component labeling, Hough transform, three computational geometry problems (convex hull, Voronoi diagram, and minimum spanning tree), three-dimensional visibility calculations, subgraph isomorphism, and minimum cost path calculation. BPRs 10, 11, and 14 are detailed reports on three of the problems. BPR 13 contains the conclusions of the study and writeups of the work not covered in other BPRs. This work was supported in part by the Defense Advanced Research Projects Agency and the U.S. Army Topographic Labs under grant number DACA76-85-C-0001 and in part by the National Science Foundation under grant number DCR-8320136.
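To give a flavor of the first benchmark problem, convolution and zero-crossing detection for edges, here is a small serial sketch (NumPy assumed); the Butterfly implementations were parallel and are not reproduced here.

```python
# Serial sketch of convolution + zero-crossing edge detection; illustrative only.
import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

def zero_crossings(response):
    """Mark pixels where the filter response changes sign against a neighbor."""
    zc = np.zeros(response.shape, dtype=bool)
    zc[:-1, :] |= (response[:-1, :] * response[1:, :]) < 0
    zc[:, :-1] |= (response[:, :-1] * response[:, 1:]) < 0
    return zc

laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

# Synthetic image with a vertical step edge at column 8.
image = np.zeros((16, 16)); image[:, 8:] = 1.0
edges = zero_crossings(convolve2d(image, laplacian))
print(edges[8, 6:10])   # the zero crossing appears at the step boundary
```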

Journal ArticleDOI
Y. Singh, G. M. King, J. W. Anderson
TL;DR: The IBM 3090 system represents the highest level of system performance offered by IBM to date; to realize the full performance potential of this system, it is essential to maintain a balance among its various components.
Abstract: The IBM 3090 system represents the highest level of system performance offered by IBM to date. To realize the full performance potential of this system, it is essential to maintain a balance among its various components. The major components of the system are the processor(s), storage, I/O, and the software that manages the system resources. Their performance attributes are discussed and their effect on system performance illustrated by laboratory benchmark measurements for the MVS and VM operating systems.

Journal ArticleDOI
TL;DR: The benchmarking of four 32-bit microprocessors revealed some specific reasons for the differences in performance that were observed.
Abstract: The benchmarking of four 32-bit microprocessors (the MC68020, the NS32032, the AT&T 32100, and the Intel 80386) revealed some specific reasons for the differences in performance that were observed.

Book
02 Jan 1986
TL;DR: In this article, the authors examined four performance enhancements to a database management system: dynamic compilation, microcoded routines, a special-purpose file system, and a specialpurpose operating system.
Abstract: In this paper we examine four performance enhancements to a database management system: dynamic compilation, microcoded routines, a special-purpose file system, and a special-purpose operating system. All were examined in the context of the INGRES database management system. Benchmark timings that are included suggest the attractiveness of dynamic compilation and a special-purpose file system. Microcode and a special-purpose operating system are analyzed and appear to be of more limited utility in the INGRES context.

Proceedings ArticleDOI
01 Apr 1986
TL;DR: Describes a massively parallel hardware/software architecture applicable to accelerating a wide class of large vocabulary speech recognition algorithms, along with the general principles of applicability supporting this claim.
Abstract: The investigation of multiprocessor architectures and parallel algorithms for speech recognition is important. Large vocabulary speech recognition is a computationally intensive problem, which can require orders of magnitude acceleration over uniprocessors to achieve real-time performance. Also, there is still much algorithm development work to be done, which requires a programmable computer rather than a hardware implementation. This paper describes a massively parallel hardware/software architecture that is applicable to accelerating a wide class of large vocabulary speech recognition algorithms. The general principles of applicability supporting this claim will be described. Timing and sizing results obtained by applying these principles to Rabiner's level-building DTW algorithm for connected-word recognition will be given. Finally, a benchmark algorithm is described that demonstrates the programmability and performance of the architecture.
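Level building extends dynamic time warping (DTW) to connected-word recognition; a minimal sketch of the plain DTW kernel (not the level-building extension, and not the parallel mapping described in the paper) is:

```python
# Minimal dynamic time warping (DTW) between two feature sequences.
# Plain DTW only; the level-building extension and the parallel mapping are omitted.
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # local path constraint: match, insertion, or deletion
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]

template = [1, 2, 3, 4, 3, 2]          # stored reference pattern
utterance = [1, 1, 2, 3, 4, 4, 3, 2]   # time-warped version of the template
print(dtw(template, utterance))        # small distance despite the different lengths
```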

Journal ArticleDOI
D. Gannon, J. Panetta
01 Oct 1986
TL;DR: A mapping of the algorithms to a configurable highly parallel (CHiP) computer being designed at the University of Washington is described and the way in which parallelism can be used to speed up execution is discussed.
Abstract: The SIMPLE program is a commonly used benchmark for testing new architectures designed for high speed scientific computation. As the name implies, the code is a simple example of a Lagrangian hydrodynamics application. In this paper we describe the SIMPLE benchmark in detail and discuss the way in which parallelism can be used to speed up execution. The focus of the work is a mapping of the algorithms to a configurable highly parallel (CHiP) computer being designed at the University of Washington.

ReportDOI
01 Jan 1986
TL;DR: The Convex C-1 computer, marketed as a mini-supercomputer executing a UNIX operating system, was benchmarked; after the machine architecture was changed to expand the cache bypass for vector memory accesses, the benchmarks were rerun to include the significant architecture modification.
Abstract: In July 1985, we benchmarked the Convex C-1 computer at the Convex plant in Richardson, Texas. The machine is marketed as a mini-supercomputer executing a UNIX operating system. The architecture includes vector functional units, 16-million 64-bit words of physical memory and 64 kbytes of set-associative cache between main memory and the CPU. The standard one-processor Los Alamos benchmarks were executed and timed in both single-precision (32-bit) and double-precision (64-bit) floating-point mode. Subsequent to the July benchmark, the machine architecture was changed to expand the cache bypass for vector memory accesses. The benchmarks were redone in October 1985 to include the significant architecture modification. The results in this paper are from the latest benchmark.


Journal ArticleDOI
01 Sep 1986
TL;DR: In this paper, the authors describe application of a methodology to evaluate the operational suitability of an air traffic control console design, which combines specified requirements, operations concepts, and the expertise of a qualified user group.
Abstract: This paper describes the application of a methodology to evaluate the operational suitability of an air traffic control console design. This approach combines specified requirements, operations concepts, and the expertise of a qualified user group to assess the suitability of proposed designs. In addition to these elements, basic human factors engineering principles are used to provide the evaluations in a “real world” (non-laboratory) setting. Conclusions highlight the utility of this methodology in providing a link from system requirements to a suitability evaluation and in allowing the comparison of multiple designs to a single benchmark specification.

Journal ArticleDOI
TL;DR: Three commonly used benchmark programs, the Whetstone, the Dhrystone, and the Sieve, were run on an IBM PC, an IBM PC/AT, a VAX 11/785, and a VAX 8600, showing large differences in the performance predicted by the different benchmarks.
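Of the three, the Sieve is the simplest to reproduce: the classic BYTE version runs the Sieve of Eratosthenes over an 8191-element flag array ten times. A sketch under that assumption (published variants differ slightly in detail):

```python
# Classic Sieve benchmark: 10 iterations over flags[0..8190] (BYTE magazine version).
# Sketch for illustration; published variants differ slightly in counting details.
import time

SIZE = 8190

def sieve():
    flags = [True] * (SIZE + 1)
    count = 0
    for i in range(SIZE + 1):
        if flags[i]:
            prime = i + i + 3                    # flag index i represents the odd number 2i+3
            for k in range(i + prime, SIZE + 1, prime):
                flags[k] = False
            count += 1
    return count

start = time.perf_counter()
for _ in range(10):
    primes = sieve()
elapsed = time.perf_counter() - start
print(f"{primes} primes, 10 iterations in {elapsed:.3f} s")
```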

Proceedings Article
01 Jul 1986

Journal ArticleDOI
TL;DR: A library may require that benchmark tests be conducted after a vendor has been selected, but before a computer is installed in the library, if the vendor does not have installed systems comparable to the library's requirements, or if other similar systems have not previously met theLibrary's performance objectives.
Abstract: Acceptance and benchmark tests will provide important assurances that an automation system will meet defined performance requirements. Three acceptance tests should be required by a library and performed by the vendor: 1) a system reliability acceptance test, 2) one or more functional performance acceptance tests, and 3) a full load response time acceptance test. Additionally, a library may require that benchmark tests be conducted after a vendor has been selected, but before a computer is installed in the library, if the vendor does not have installed systems comparable to the library's requirements, or if other similar systems have not previously met the library's performance objectives.

Book ChapterDOI
01 Jan 1986
TL;DR: The KEDDC software package for computer-aided control system design, developed at Ruhr-University, Bochum, F. R. G. is presently applied by the Communications Research Centre (CRC), Ottawa, Canada to the development of control techniques for future large flexible spacecraft in a joint industry, university and government research and development project.
Abstract: The KEDDC software package for computer-aided control system design, developed at Ruhr-University Bochum, F.R.G., is presently applied by the Communications Research Centre (CRC), Ottawa, Canada, to the development of control techniques for future large flexible spacecraft in a joint industry, university, and government research and development project. The Operational Mobile Communications Satellite (OMSAT), proposed for the mid-1990s and featuring a 44 m offset-fed antenna, is used as the target application. A reduced version of a 146th-order high-fidelity dynamics model of the satellite serves as a benchmark system. This application is very demanding and forms an important performance benchmark for any software system, because the high order of structural models requires the use of highly stable and efficient numerical algorithms. KEDDC has been tested on this problem for more than one year and has performed flawlessly, improving quite considerably the design efficiency as well as the scope of system analysis.


Proceedings ArticleDOI
01 Dec 1986
TL;DR: Several categories of benchmarks were performed on microcomputer PROLOG interpreters and compilers; the tests proved useful in the authors' lab for determining optimal environments for research in several categories of non-numerical applications.
Abstract: In this paper we present the results of several different categories of benchmarks performed on microcomputer PROLOG interpreters and compilers. These tests were found useful in our lab in determining optimal environments for research in several different categories of non-numerical applications. The PROLOG clause sets, as well as the benchmark results, are provided.

Journal ArticleDOI
TL;DR: The Los Alamos benchmark set has been executed on two Japanese supercomputers, Fujitsu's VP-200 and Hitachi's S810/20, as well as on the American-made CRAY X-MP/24, and results can be analyzed in terms of scalar speed, raw vector speed, and, finally, overall performance.
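A standard way to relate those three quantities (conventional notation, not taken from the report) is to split each code into a vectorizable fraction f executed at vector rate V and a remainder executed at scalar rate S:

```latex
% Amdahl-style decomposition; illustrative notation, not the report's analysis.
\[
  \text{overall rate} \;=\; \left( \frac{f}{V} + \frac{1-f}{S} \right)^{-1}
\]
```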

Journal ArticleDOI
TL;DR: The 4096-processor DAP performs far better than predicted by most methods of parallel-performance estimation, mainly due to the properties of bit-serial arithmetic, dynamic reconfiguration by software to maintain system utilization, and high performance on logical operations.

Journal ArticleDOI
15 Jun 1986
TL;DR: It appears as though both the TP1 and the Wisconsin benchmark have the potential of becoming de facto standard benchmarks, in their respective areas, to be used in a variety of ways.
Abstract: In the past few years we have seen in the literature a number of proposals for benchmarks to be used in measuring the performance of database management and transaction processing systems. The TP1 benchmark [Anon et al 1985] and the Wisconsin benchmark [Bitton et al 1983], [Boral and DeWitt 1984], and [Bitton and Turbyfill 1985] have been used to benchmark several systems. Other benchmarks have also been proposed.

The TP1 benchmark actually consists of three different benchmarks: Debit-Credit, Scan, and Sort. It is oriented towards transaction processing systems. Each of the benchmarks consists of a single transaction type and operates on a large database of around 10 GBytes. The database consists of artificial data but is modeled on data maintained by a large bank.

The Debit-Credit benchmark consists of a transaction that reads and updates a small number (about 4) of random records. It imposes stringent response time and throughput requirements on the system. The Scan benchmark consists of a COBOL program that exercises the system by executing 1,000 scan transactions, each of which accesses and updates 1,000 records in a sequentially organized file. Finally, the Sort benchmark sorts 1M records of 100 bytes each.

Each of the benchmarks stresses different aspects of the system and requires different amounts of CPU, communication, and I/O cycles. In addition to the diversity of system resource requirements, the benchmark methodology described in [Anon et al 1985] also requires that the cost of the system be calculated. Thus, the final measure one obtains from running the TP1 benchmark is $K/TPS.

Whereas TP1 is oriented towards transaction processing, the Wisconsin benchmark was conceived for the purpose of measuring the performance of relational database systems. It consists of two parts: a single-user benchmark, in which a suite of approximately 30 different queries is used to obtain response time measures in standalone mode (described in [Bitton et al 1983]), and a multi-user benchmark, in which several queries of varying complexity are used to determine the response time and throughput behavior under a variety of conditions (one version of the multi-user benchmark is described in [Boral and DeWitt 1984] and a second version in [Bitton and Turbyfill 1985]).

The test database consists of a number of relations of varying sizes. The relations are generated according to statistical distributions and do not model any real-world data. Users of the benchmark can modify the database generator routines to adapt the database characteristics so that they are more representative of their application.

It appears as though both the TP1 and the Wisconsin benchmark have the potential of becoming de facto standard benchmarks, in their respective areas, to be used in a variety of ways. For example, a vendor could use the benchmarks to stress-test a system under development. Another use for a vendor is in establishing a particular rating for a system (equivalent to MIPS, Whetstones, etc., for mainframes). Finally, a user can use a benchmark to compare several systems before purchasing one.

The purpose of this panel is to discuss the use of benchmarking for measuring the performance of transaction processing systems and database management systems in general, and the use of the TP1 and Wisconsin benchmarks in particular. The panelists have been chosen so that we have a representation of experts in the particular benchmarks (Gawlick and DeWitt), a benchmark “consumer” (Hawthorn), and a “performance expert”, someone who understands benchmarking as a science/art (Brice).

The panelists will address the following issues, as well as others raised by the audience: What are the strengths and weaknesses of the TP1 and Wisconsin benchmarks? Is benchmarking a good technique for measuring the performance of data management and transaction processing systems? What can these benchmarks tell us about a system, and what can they not tell us about it?
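For concreteness, the roughly four records touched by a Debit-Credit transaction are, in the published description, an account, its teller, its branch, and an appended history record. A schematic sketch with no real storage engine, logging, or concurrency control (table sizes and names here are placeholders):

```python
# Schematic Debit-Credit (TP1-style) transaction; outline only, not a real implementation.
import random, time

accounts = {i: 0.0 for i in range(100_000)}
tellers  = {i: 0.0 for i in range(100)}
branches = {i: 0.0 for i in range(10)}
history  = []

def debit_credit(account_id, teller_id, branch_id, delta):
    """One transaction: update three balances and append a history record."""
    accounts[account_id] += delta
    tellers[teller_id]   += delta
    branches[branch_id]  += delta
    history.append((time.time(), account_id, teller_id, branch_id, delta))
    return accounts[account_id]

# Drive a small batch and report a (toy) transactions-per-second figure.
start = time.perf_counter()
N = 10_000
for _ in range(N):
    debit_credit(random.randrange(100_000), random.randrange(100),
                 random.randrange(10), random.choice((-100.0, 100.0)))
tps = N / (time.perf_counter() - start)
print(f"{tps:,.0f} toy TPS (no I/O, no concurrency control)")
```

A full TP1 measurement would of course include the I/O, terminals, and concurrency control the methodology requires, plus the system cost calculation that yields the $K/TPS figure.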

Journal ArticleDOI
TL;DR: This paper evaluates the sizing of power system security functions and the timing of the power flow problem for a 64-bit low-cost array processor and a VAX 11/780, showing that the IEEE 118-bus system requires 98% less host CPU time when an attached 6410 array processor is used.