Topic: Benchmark (computing)

About: Benchmark (computing) is a research topic. Over its lifetime, 19,650 publications have been published within this topic, receiving 419,117 citations. The topic is also known as software benchmark and computer benchmark.


Papers
Proceedings ArticleDOI
08 May 1989
TL;DR: A set of 31 digital sequential circuits described at the gate level that extends the size and complexity of the ISCAS'85 set of combinational circuits and can serve as a benchmark suite for researchers interested in sequential test generation, scan-based test generation, and mixed sequential/scan-based test generation using partial scan techniques.
Abstract: A set of 31 digital sequential circuits described at the gate level is presented. These circuits extend the size and complexity of the ISCAS'85 set of combinational circuits and can serve as benchmarks for researchers interested in sequential test generation, scan-based test generation, and mixed sequential/scan-based test generation using partial scan techniques. Although all the benchmark circuits are sequential, synchronous, and use only D-type flip-flops, additional interior faults and asynchronous behavior can be introduced by substituting for some or all of the flip-flops their appropriate functional models. The standard functional model of the D flip-flop provides a reference point that is independent of the faults particular to the flip-flop implementation. A testability profile of the benchmarks in the full-scan-mode configuration is discussed.
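The flip-flop substitution described above is easy to picture in code. The following is a minimal Python sketch, written for this summary rather than taken from the paper, of the standard functional model of a D-type flip-flop plus a two-stage shift register built from it; the class and the tiny circuit are illustrative assumptions, since the actual benchmarks are distributed as gate-level netlists.

    class DFlipFlop:
        """Standard functional model of a D-type flip-flop: Q presents the
        stored state, and the clock edge captures the current D input."""
        def __init__(self, q0=0):
            self.q = q0

        def tick(self, d):
            q_out, self.q = self.q, d & 1   # emit old state, latch new D
            return q_out

    # A hypothetical two-stage shift register: each clock tick moves the
    # input one flip-flop down the chain, so the output is delayed by two.
    ff1, ff2 = DFlipFlop(), DFlipFlop()
    for bit in [1, 0, 1, 1]:
        print(ff2.tick(ff1.tick(bit)))      # prints 0, 0, 1, 0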

1,972 citations

Journal ArticleDOI
John L. Henning
TL;DR: On August 24, 2006, the Standard Performance Evaluation Corporation (SPEC) announced CPU2006, which replaces CPU2000; the SPEC CPU benchmarks are widely used in both industry and academia.
Abstract: On August 24, 2006, the Standard Performance Evaluation Corporation (SPEC) announced CPU2006 [2], which replaces CPU2000. The SPEC CPU benchmarks are widely used in both industry and academia [3].

1,864 citations

Proceedings ArticleDOI
07 Jun 2004
TL;DR: It is concluded that no single descriptor is best for all classifications, and thus the main contribution of this paper is to provide a framework to determine the conditions under which each descriptor performs best.
Abstract: In recent years, many shape representations and geometric algorithms have been proposed for matching 3D shapes. Usually, each algorithm is tested on a different (small) database of 3D models, and thus no direct comparison is available for competing methods. We describe the Princeton Shape Benchmark (PSB), a publicly available database of polygonal models collected from the World Wide Web and a suite of tools for comparing shape matching and classification algorithms. One feature of the benchmark is that it provides multiple semantic labels for each 3D model. For instance, it includes one classification of the 3D models based on function, another that considers function and form, and others based on how the object was constructed (e.g., man-made versus natural objects). We find that experiments with these classifications can expose different properties of shape-based retrieval algorithms. For example, out of 12 shape descriptors tested, extended Gaussian images by B. Horn (1984) performed best for distinguishing man-made from natural objects, while they performed among the worst for distinguishing specific object types. Based on experiments with several different shape descriptors, we conclude that no single descriptor is best for all classifications, and thus the main contribution of this paper is to provide a framework to determine the conditions under which each descriptor performs best.
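As a concrete illustration of how such a framework can score a descriptor under one of the benchmark's labelings, here is a small Python sketch of nearest-neighbor classification accuracy, one common retrieval statistic. The array names and the commented comparison loop are assumptions for illustration, not the PSB tool interface.

    import numpy as np

    def nearest_neighbor_accuracy(descriptors, labels):
        # descriptors: (N, D) array, one feature vector per 3D model
        # labels: (N,) class ids under one classification of the models
        dist = np.linalg.norm(descriptors[:, None] - descriptors[None, :], axis=2)
        np.fill_diagonal(dist, np.inf)    # a model must not match itself
        nn = dist.argmin(axis=1)          # index of each model's nearest neighbor
        return float((labels[nn] == labels).mean())

    # Hypothetical comparison: every descriptor against every labeling.
    # for name, feats in descriptor_sets.items():
    #     for cls_name, y in classifications.items():
    #         print(name, cls_name, nearest_neighbor_accuracy(feats, y))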

1,561 citations

Journal ArticleDOI
TL;DR: Algorithmic techniques are presented that substantially improve the running time of the loopy belief propagation approach and reduce the complexity of the inference algorithm to be linear rather than quadratic in the number of possible labels for each pixel, which is important for problems such as image restoration that have a large label set.
Abstract: Markov random field models provide a robust and unified framework for early vision problems such as stereo and image restoration. Inference algorithms based on graph cuts and belief propagation have been found to yield accurate results, but despite recent advances are often too slow for practical use. In this paper we present some algorithmic techniques that substantially improve the running time of the loopy belief propagation approach. One of the techniques reduces the complexity of the inference algorithm to be linear rather than quadratic in the number of possible labels for each pixel, which is important for problems such as image restoration that have a large label set. Another technique speeds up and reduces the memory requirements of belief propagation on grid graphs. A third technique is a multi-grid method that makes it possible to obtain good results with a small fixed number of message passing iterations, independent of the size of the input images. Taken together these techniques speed up the standard algorithm by several orders of magnitude. In practice we obtain results that are as accurate as those of other global methods (e.g., using the Middlebury stereo benchmark) while being nearly as fast as purely local methods.
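The linear-time message computation mentioned in the abstract can be sketched for a truncated linear smoothness cost V(p, q) = min(s*|p - q|, d). Instead of an O(L^2) double loop over label pairs, a forward/backward pass followed by truncation yields the same minimum in O(L). This Python sketch shows the general distance-transform technique under those assumptions; it is a reconstruction for illustration, not the paper's code.

    import numpy as np

    def message_truncated_linear(h, s, d):
        """m[q] = min_p (h[p] + min(s*|p - q|, d)) for all labels q, in O(L).

        h: length-L array of data cost plus incoming messages per label.
        s: slope of the linear smoothness term; d: truncation constant.
        """
        m = h.astype(float).copy()
        for q in range(1, len(m)):            # forward pass: costs from the left
            m[q] = min(m[q], m[q - 1] + s)
        for q in range(len(m) - 2, -1, -1):   # backward pass: costs from the right
            m[q] = min(m[q], m[q + 1] + s)
        return np.minimum(m, h.min() + d)     # truncation caps every message

    # Example: agrees with the brute-force O(L^2) minimization.
    print(message_truncated_linear(np.array([3.0, 0.0, 4.0, 1.0]), s=1.0, d=2.0))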

1,560 citations

Dissertation
01 Jan 2002
TL;DR: This thesis presents a theoretical model that can be used to describe the long-term behaviour of the Particle Swarm Optimiser. Empirical results are presented to support the theoretical properties predicted by the various models, using synthetic benchmark functions to investigate specific properties.
Abstract: Many scientific, engineering and economic problems involve the optimisation of a set of parameters. These problems include examples like minimising the losses in a power grid by finding the optimal configuration of the components, or training a neural network to recognise images of people's faces. Numerous optimisation algorithms have been proposed to solve these problems, with varying degrees of success. The Particle Swarm Optimiser (PSO) is a relatively new technique that has been empirically shown to perform well on many of these optimisation problems. This thesis presents a theoretical model that can be used to describe the long-term behaviour of the algorithm. An enhanced version of the Particle Swarm Optimiser is constructed and shown to have guaranteed convergence on local minima. This algorithm is extended further, resulting in an algorithm with guaranteed convergence on global minima. A model for constructing cooperative PSO algorithms is developed, resulting in the introduction of two new PSO-based algorithms. Empirical results are presented to support the theoretical properties predicted by the various models, using synthetic benchmark functions to investigate specific properties. The various PSO-based algorithms are then applied to the task of training neural networks, corroborating the results obtained on the synthetic benchmark functions.
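For orientation, a canonical particle swarm optimiser is only a few lines; the Python sketch below minimises the sphere function, a standard synthetic benchmark. It shows the basic velocity/position update, not the thesis's guaranteed-convergence or cooperative variants, and the inertia and acceleration constants are common published defaults, assumed here for illustration.

    import numpy as np

    def pso(f, dim, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49,
            bounds=(-5.0, 5.0)):
        rng = np.random.default_rng(0)
        x = rng.uniform(*bounds, (n_particles, dim))   # particle positions
        v = np.zeros_like(x)                           # particle velocities
        pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()         # best position found so far
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            # inertia + pull toward personal best + pull toward global best
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            fx = np.array([f(p) for p in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    # Sphere function: global minimum 0 at the origin.
    best_x, best_val = pso(lambda z: float(np.sum(z * z)), dim=10)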

1,498 citations


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations, 87% related
Scheduling (computing): 78.6K papers, 1.3M citations, 85% related
Object detection: 46.1K papers, 1.3M citations, 84% related
Robustness (computer science): 94.7K papers, 1.6M citations, 83% related
Wireless sensor network: 142K papers, 2.4M citations, 83% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    44
2021    2,013
2020    1,803
2019    1,490
2018    1,322
2017    1,154