Institution

Hewlett-Packard

Company · Palo Alto, California, United States
About: Hewlett-Packard is a company based in Palo Alto, California, United States. It is known for research contributions in the topics Signal & Layer (electronics). The organization has 34,663 authors who have published 59,808 publications receiving 1,467,218 citations. The organization is also known as Hewlett Packard and Hewlett-Packard Company.


Papers
Journal Article
TL;DR: A calibrated, high-quality disk drive model is demonstrated in which the overall error factor is 14 times smaller than that of a simple first-order model, which enables an informed trade-off between effort and accuracy.
Abstract: Although disk storage densities are improving impressively (60% to 130% compounded annually), performance improvements have been occurring at only about 7% to 10% compounded annually over the last decade. As a result, disk system performance is fast becoming a dominant factor in overall system behavior. Naturally, researchers want to improve overall I/O performance, of which a large component is the performance of the disk drive itself. This research often involves using analytical or simulation models to compare alternative approaches, and the quality of these models determines the quality of the conclusions; indeed, the wrong modeling assumptions can lead to erroneous conclusions. Nevertheless, little work has been done to develop or describe accurate disk drive models. This may explain the commonplace use of simple, relatively inaccurate models. We believe there is much room for improvement. This article demonstrates and describes a calibrated, high-quality disk drive model in which the overall error factor is 14 times smaller than that of a simple first-order model. We describe the various disk drive performance components separately, then show how their inclusion improves the simulation model. This enables an informed trade-off between effort and accuracy. In addition, we provide detailed characteristics for two disk drives, as well as a brief description of a simulation environment that uses the disk drive model.
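
The performance components such a model separates (controller overhead, a distance-dependent seek curve, rotational latency, and media transfer time) can be sketched in a few lines. The sketch below is illustrative only: the parameter values and the square-root/linear seek curve are hypothetical stand-ins, not the calibrated figures from the paper.

```python
import math
import random

# Hypothetical drive parameters: illustrative stand-ins, not calibrated values.
CYLINDERS   = 2000
RPM         = 5400
REV_MS      = 60000.0 / RPM   # one full rotation, in ms
SECTOR_MS   = 0.05            # media transfer time per sector, ms
OVERHEAD_MS = 0.7             # controller/command overhead, ms
AVG_SEEK_MS = 9.5             # quoted "average seek" used by the simple model

def seek_first_order(dist):
    """Simple first-order model: every non-zero seek costs the average seek time."""
    return 0.0 if dist == 0 else AVG_SEEK_MS

def seek_detailed(dist):
    """Detailed model: square-root curve for short (acceleration-limited) seeks,
    linear curve for long (coast-speed) seeks."""
    if dist == 0:
        return 0.0
    return 1.5 + 0.3 * math.sqrt(dist) if dist < 400 else 6.0 + 0.008 * dist

def simulate(seek_model, requests):
    """Mean service time over a request stream of (cylinder, sectors) pairs."""
    pos, total = 0, 0.0
    for cyl, sectors in requests:
        total += (OVERHEAD_MS + seek_model(abs(cyl - pos))
                  + random.uniform(0.0, REV_MS)   # rotational latency
                  + sectors * SECTOR_MS)          # media transfer
        pos = cyl
    return total / len(requests)

random.seed(0)
reqs = [(random.randrange(CYLINDERS), 8) for _ in range(100_000)]
print(f"first-order mean service time: {simulate(seek_first_order, reqs):.2f} ms")
print(f"detailed    mean service time: {simulate(seek_detailed, reqs):.2f} ms")
```

Separating the components this way is what allows the trade-off the authors describe: each term can be replaced by a cruder or finer model independently.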

938 citations

Journal Article
01 May 2000
TL;DR: The design and implementation of Dynamo, a software dynamic optimization system that is capable of transparently improving the performance of a native instruction stream as it executes on the processor, are described and evaluated.
Abstract: We describe the design and implementation of Dynamo, a software dynamic optimization system that is capable of transparently improving the performance of a native instruction stream as it executes on the processor. The input native instruction stream to Dynamo can be dynamically generated (by a JIT for example), or it can come from the execution of a statically compiled native binary. This paper evaluates the Dynamo system in the latter, more challenging situation, in order to emphasize the limits, rather than the potential, of the system. Our experiments demonstrate that even statically optimized native binaries can be accelerated by Dynamo, and often by a significant degree. For example, the average performance of -O optimized SpecInt95 benchmark binaries created by the HP product C compiler is improved to a level comparable to their -O4 optimized version running without Dynamo. Dynamo achieves this by focusing its efforts on optimization opportunities that tend to manifest only at runtime, and hence opportunities that might be difficult for a static compiler to exploit. Dynamo's operation is transparent in the sense that it does not depend on any user annotations or binary instrumentation, and does not require multiple runs, or any special compiler, operating system or hardware support. The Dynamo prototype presented here is a realistic implementation running on an HP PA-8000 workstation under the HPUX 10.20 operating system.
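
The core mechanism, counting backward-branch targets during interpretation and promoting hot targets into a cache of optimized traces ("fragments"), can be shown with a toy sketch. This is not HP's implementation: Dynamo operates on native PA-RISC instruction streams, whereas the sketch below interprets a made-up three-block loop, and the hot threshold is an arbitrary choice.

```python
HOT_THRESHOLD = 50   # arbitrary illustrative value

state = {"i": 0, "acc": 0}

def program(pc):
    """Interpret one basic block of a toy program; return the successor block id."""
    if pc == 0:                        # block 0: loop header
        return 1
    if pc == 1:                        # block 1: loop body
        state["acc"] += state["i"]
        return 2
    if pc == 2:                        # block 2: loop test, backward branch while i < 10000
        state["i"] += 1
        return 0 if state["i"] < 10000 else 3
    return -1                          # block 3: program exit

def run():
    counters, fragment_cache = {}, {}
    pc, interpreted, trace_runs = 0, 0, 0
    while pc != -1:
        if pc in fragment_cache:
            pc = fragment_cache[pc]()  # fast path: run the cached trace as a unit
            trace_runs += 1
            continue
        prev, pc = pc, program(pc)     # slow path: interpret one block
        interpreted += 1
        if pc <= prev:                 # backward branch: candidate loop header
            counters[pc] = counters.get(pc, 0) + 1
            if counters[pc] == HOT_THRESHOLD:
                def trace(start=pc):
                    # Stand-in for an optimized fragment: run blocks from the
                    # hot header until control returns to it or leaves the loop.
                    p = start
                    while True:
                        p = program(p)
                        if p == start or p > 2 or p == -1:
                            return p
                fragment_cache[pc] = trace
    print(f"interpreted blocks: {interpreted}, fragment executions: {trace_runs}")

run()
```

After warm-up, nearly all execution happens on the fast path, which is where a real system applies its runtime-only optimizations.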

935 citations

Proceedings Article
13 Aug 2004
TL;DR: Fairplay is introduced, a full-fledged system that implements generic secure function evaluation (SFE) and provides a test-bed for ideas and enhancements concerning SFE, whether by replacing parts of it or by integrating with it.
Abstract: Advances in modern cryptography coupled with rapid growth in processing and communication speeds make secure two-party computation a realistic paradigm. Yet, thus far, interest in this paradigm has remained mostly theoretical. This paper introduces Fairplay [28], a full-fledged system that implements generic secure function evaluation (SFE). Fairplay comprises a high level procedural definition language called SFDL tailored to the SFE paradigm; a compiler of SFDL into a one-pass Boolean circuit presented in a language called SHDL; and Bob/Alice programs that evaluate the SHDL circuit in the manner suggested by Yao in [39]. This system enables us to present the first evaluation of an overall SFE in real settings, as well as to examine its components and identify potential bottlenecks. It provides a test-bed for ideas and enhancements concerning SFE, whether by replacing parts of it or by integrating with it. We exemplify its utility by examining several alternative implementations of oblivious transfer within the system, and reporting on their effect on overall performance.
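
The evaluation step Fairplay implements follows Yao's garbled-circuit construction. The sketch below garbles a single AND gate to show the idea; it is illustrative only, using SHA-256 as a stand-in cipher and omitting table shuffling, point-and-permute, and the oblivious transfer that would deliver the evaluator's input key in a real protocol.

```python
import os
import hashlib

def H(ka, kb):
    """Derive a 32-byte pad from two wire keys (stand-in for a real cipher)."""
    return hashlib.sha256(ka + kb).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    """Garbler side: two random keys per wire, index 0 encodes bit 0, index 1 bit 1."""
    A = [os.urandom(16), os.urandom(16)]   # input wire a
    B = [os.urandom(16), os.urandom(16)]   # input wire b
    C = [os.urandom(16), os.urandom(16)]   # output wire
    table = []
    for a in (0, 1):
        for b in (0, 1):
            plaintext = C[a & b] + b"\x00" * 16   # output key plus a validity tag
            table.append(xor(H(A[a], B[b]), plaintext))
    # (A real garbler would shuffle the table or use point-and-permute so the
    # row position leaks nothing about the inputs.)
    return A, B, C, table

def evaluate(ka, kb, table):
    """Evaluator side: holds exactly one key per input wire, learns one output key."""
    for row in table:
        plain = xor(H(ka, kb), row)
        if plain.endswith(b"\x00" * 16):          # tag checks: this is the right row
            return plain[:16]
    raise ValueError("no row decrypted")

A, B, C, table = garble_and_gate()
out = evaluate(A[1], B[1], table)                 # evaluate with inputs a=1, b=1
print("AND(1,1) =", 1 if out == C[1] else 0)
```

Fairplay's SFDL compiler produces whole circuits of such gates in SHDL form; the Bob/Alice programs then exchange wire keys and evaluate gate by gate in this manner.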

911 citations

Journal Article
TL;DR: In this article, the authors present a method for predicting the long-term popularity of online content from early measurements of user access, modeling the accrual of views and votes on two content-sharing portals, YouTube and Digg.
Abstract: We present a method for accurately predicting the long-term popularity of online content from early measurements of users' access. Using two content-sharing portals, YouTube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of YouTube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while YouTube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors.
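
The method rests on a log-linear relationship between early and late popularity, so a single constant fitted on training items extrapolates an early count to a long-term one. The sketch below illustrates that idea on synthetic data; the lognormal noise model and every parameter value are assumptions for the demo, not the paper's measurements.

```python
import math
import random

random.seed(1)

# Synthetic training items: late popularity is early popularity scaled by a
# noisy multiplicative factor (stand-in for real Digg votes or YouTube views).
train = []
for _ in range(200):
    early = random.randint(10, 500)
    late = early * random.lognormvariate(math.log(20), 0.4)
    train.append((early, late))

# Fit ln N(t_late) = ln N(t_early) + beta, i.e. a log-linear model with slope 1.
beta = sum(math.log(late) - math.log(early) for early, late in train) / len(train)

def predict(early_count):
    """Extrapolate an early measurement to a long-term popularity estimate."""
    return math.exp(math.log(early_count) + beta)

early = 120
print(f"early count {early} -> predicted long-term count {predict(early):.0f}")
```

Because errors are multiplicative in this model, predictions degrade most for items whose attention does not decay on the typical time scale, matching the paper's observation about evergreen content.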

910 citations

Journal Article
01 Dec 1998
TL;DR: In this article, the authors describe httperf, a tool for measuring web server performance that provides a flexible facility for generating various HTTP workloads and for measuring server performance.
Abstract: This paper describes httperf, a tool for measuring web server performance. It provides a flexible facility for generating various HTTP workloads and for measuring server performance. The focus of httperf is not on implementing one particular benchmark but on providing a robust, high-performance tool that facilitates the construction of both micro- and macro-level benchmarks. The three distinguishing characteristics of httperf are its robustness, which includes the ability to generate and sustain server overload, its support for the HTTP/1.1 protocol, and its extensibility to new workload generators and performance measurements. In addition to reporting on the design and implementation of httperf, this paper also discusses some of the experiences and insights gained while realizing this tool.
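
httperf's ability to sustain server overload comes from open-loop load generation: requests are issued on a fixed schedule regardless of how quickly the server replies. The sketch below illustrates that idea in Python against a placeholder URL; it is a conceptual stand-in, not httperf itself, which is a C program driven by command-line options such as --rate and --num-conns.

```python
import threading
import time
import urllib.request

TARGET   = "http://localhost:8080/index.html"  # placeholder target server
RATE     = 50                                  # requests per second
DURATION = 10                                  # seconds of load

ok = errors = 0
lock = threading.Lock()

def fire():
    """Issue one request and record the outcome."""
    global ok, errors
    try:
        with urllib.request.urlopen(TARGET, timeout=5) as resp:
            resp.read()
        with lock:
            ok += 1
    except Exception:
        with lock:
            errors += 1

start = time.monotonic()
threads = []
for i in range(RATE * DURATION):
    # Sleep until this request's scheduled send time; the schedule depends only
    # on the wall clock, never on server responses, so overload is sustained.
    time.sleep(max(0.0, start + i / RATE - time.monotonic()))
    t = threading.Thread(target=fire)
    t.start()
    threads.append(t)
for t in threads:
    t.join()

elapsed = time.monotonic() - start
print(f"sent {RATE * DURATION} requests in {elapsed:.1f}s: {ok} ok, {errors} errors")
```

A closed-loop generator, by contrast, waits for each reply before sending the next request, so it can never push a server past saturation; decoupling send times from replies is what makes overload measurement possible.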

909 citations


Authors


Name                  H-index   Papers   Citations
Andrew White          149       1,494    113,874
Stephen R. Forrest    148       1,041    111,816
Rafi Ahmed            146       633      93,190
Leonidas J. Guibas    124       691      79,200
Chenming Hu           119       1,296    57,264
Robert E. Tarjan      114       400      67,305
Hong-Jiang Zhang      112       461      49,068
Ching-Ping Wong       106       1,128    42,835
Guillermo Sapiro      104       667      70,128
James R. Heath        103       425      58,548
Arun Majumdar         102       459      52,464
Luca Benini           101       1,453    47,862
R. Stanley Williams   100       605      46,448
David M. Blei         98        378      111,547
Wei-Ying Ma           97        464      40,914
Network Information
Related Institutions (5)
IBM: 253.9K papers, 7.4M citations (94% related)
Samsung: 163.6K papers, 2M citations (90% related)
Carnegie Mellon University: 104.3K papers, 5.9M citations (90% related)
Microsoft: 86.9K papers, 4.1M citations (90% related)
Bell Labs: 59.8K papers, 3.1M citations (89% related)

Performance Metrics

Number of papers from the institution in previous years:
Year    Papers
2023    1
2022    23
2021    240
2020    1,028
2019    1,269
2018    964