Institution
Hewlett-Packard
Company · Palo Alto, California, United States
About: Hewlett-Packard is a company based in Palo Alto, California, United States. It is known for research contributions in the topics Signal and Layer (electronics). The organization has 34663 authors who have published 59808 publications receiving 1467218 citations. The organization is also known as Hewlett Packard and Hewlett-Packard Company.
[Chart: papers published on a yearly basis]
Papers
11 May 2009 · TL;DR: Describes the hurdles in network power instrumentation and presents a power measurement study of a variety of networking gear, such as hubs, edge switches, core switches, routers, and wireless access points, in both stand-alone mode and a production data center.
Abstract: Energy efficiency is becoming increasingly important in the operation of networking infrastructure, especially in enterprise and data center networks. Researchers have proposed several strategies for energy management of networking devices. However, we need a comprehensive characterization of power consumption by a variety of switches and routers to accurately quantify the savings from the various power savings schemes. In this paper, we first describe the hurdles in network power instrumentation and present a power measurement study of a variety of networking gear such as hubs, edge switches, core switches, routers and wireless access points in both stand-alone mode and a production data center. We build and describe a benchmarking suite that will allow users to measure and compare the power consumed for a large set of common configurations at any switch or router of their choice. We also propose a network energy proportionality index, which is an easily measurable metric, to compare power consumption behaviors of multiple devices.
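The proposed network energy proportionality index can be illustrated with a small helper. The paper's exact formula is not reproduced above, so the formulation below is an assumption: a common way to express energy proportionality is EPI = (P_full − P_idle) / P_full × 100, where 100 means power scales fully with load and 0 means the device draws as much power idle as at full load.

```python
def energy_proportionality_index(p_full, p_idle):
    """Illustrative energy proportionality index in [0, 100].

    0   -> device draws the same power idle as at full load
    100 -> idle power is zero, i.e. consumption is fully
           proportional to offered load.
    p_full, p_idle: measured power in watts.
    """
    if p_full <= 0:
        raise ValueError("full-load power must be positive")
    return (p_full - p_idle) / p_full * 100.0


# Example: a switch drawing 150 W idle and 200 W at full load
# is only 25% energy proportional under this formulation.
print(energy_proportionality_index(200.0, 150.0))
```

A device whose idle draw is close to its full-load draw (typical of older switches) scores near 0, which is what makes such an index easy to compare across devices.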
409 citations
TL;DR: TernGrad uses ternary gradients to accelerate distributed deep learning in data parallelism, reducing the communication cost of synchronizing gradients and parameters; layer-wise ternarizing and gradient clipping improve its convergence.
Abstract: High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {-1,0,1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet does not incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available.
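The core idea of quantizing each gradient component to three levels can be sketched as follows. This is a minimal illustration of stochastic ternarization, not the paper's implementation: each component becomes ±s (with s the largest magnitude in the vector) with probability proportional to its magnitude, and 0 otherwise, which keeps the quantized gradient an unbiased estimator of the original.

```python
import random


def ternarize(grad, rng):
    """Stochastically quantize a gradient vector to {-s, 0, +s}.

    s = max_i |g_i|. Component g_i becomes s * sign(g_i) with
    probability |g_i| / s, else 0, so E[t_i] = g_i (unbiased).
    """
    s = max(abs(g) for g in grad)
    if s == 0.0:
        return list(grad)
    out = []
    for g in grad:
        if rng.random() < abs(g) / s:
            out.append(s if g > 0 else -s)
        else:
            out.append(0.0)
    return out


rng = random.Random(42)
print(ternarize([0.5, -1.0, 0.0], rng))
```

Because only three levels are transmitted, each component can be encoded in two bits plus one shared scale s, which is where the communication savings come from.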
408 citations
TL;DR: Many investigators work with the Hodgkin-Huxley model of membrane behavior, or extensions thereof, in which action potentials arise as solutions of simultaneous non-linear differential equations that must be solved numerically on a digital computer.
Abstract: Many investigators work with the Hodgkin-Huxley model of membrane behavior or extensions thereof. In these models action potentials are found as solutions of simultaneous non-linear differential equations which must be solved using numerical techniques on a digital computer. Recent membrane models showing pacemaker activity, such as that of McAllister, Noble, and Tsien, involve solutions covering long periods of time, up to five seconds, and many ionic currents. Those added requirements make it desirable to have an efficient algorithm to minimize computer costs, and a systematic and simple solution method to keep the program writing and debugging to manageable levels.
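The kind of computation described above can be sketched with a forward-Euler step for a single Hodgkin-Huxley gating variable at a clamped voltage. The rate functions below are the textbook HH sodium-activation (m) rates; this is a generic illustration of the numerical problem, not the specific algorithm proposed in the paper.

```python
import math


def alpha_m(v):
    # HH sodium-activation opening rate (1/ms), v in mV
    return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))


def beta_m(v):
    # HH sodium-activation closing rate (1/ms), v in mV
    return 4.0 * math.exp(-(v + 65.0) / 18.0)


def integrate_gate(v, m0=0.0, dt=0.01, t_end=10.0):
    """Forward-Euler integration of dm/dt = alpha*(1-m) - beta*m
    at a clamped membrane potential v (mV), over t_end ms."""
    m = m0
    a, b = alpha_m(v), beta_m(v)
    for _ in range(int(t_end / dt)):
        m += dt * (a * (1.0 - m) - b * m)
    return m


# At a clamped voltage the gate relaxes to m_inf = alpha / (alpha + beta).
print(integrate_gate(0.0))
```

With a full model, several such gates plus the membrane-potential equation must be stepped together for every time step, which is why solutions spanning seconds of simulated time made algorithmic efficiency a real concern.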
407 citations
Argonne National Laboratory, Intel, University of Texas at Austin, University of Illinois at Urbana–Champaign, Purdue University, Lawrence Livermore National Laboratory, IBM, University of Chicago, Los Alamos National Laboratory, Information Sciences Institute, Oak Ridge National Laboratory, Booz Allen Hamilton, Science Applications International Corporation, Pacific Northwest National Laboratory, Advanced Micro Devices, Stanford University, Hewlett-Packard, Sandia National Laboratories
TL;DR: A report produced by the workshop 'Addressing failures in exascale computing', held in Park City, Utah, 4–11 August 2012, which summarizes and builds on the workshop's discussions of resilience.
Abstract: We present here a report produced by a workshop on 'Addressing failures in exascale computing' held in Park City, Utah, 4-11 August 2012. The charter of this workshop was to establish a common taxonomy about resilience across all the levels in a computing system, discuss existing knowledge on resilience across the various hardware and software layers of an exascale system, and build on those results, examining potential solutions from both a hardware and software perspective and focusing on a combined approach.
The workshop brought together participants with expertise in applications, system software, and hardware; they came from industry, government, and academia, and their interests ranged from theory to implementation. The combination allowed broad and comprehensive discussions and led to this document, which summarizes and builds on those discussions.
406 citations
TL;DR: The authors examine how fragmentation of trading affects market quality in U.S. markets and find that fragmentation generally reduces transaction costs and increases execution speeds.
Abstract: Equity markets world-wide have seen a proliferation of trading venues and the consequent fragmentation of order flow. In this paper, we examine how fragmentation of trading is affecting the quality of trading in U.S. markets. We propose using newly-available TRF (trade reporting facilities) volumes to proxy for fragmentation levels in individual stocks, and we use a matched sample to compare execution quality and efficiency of stocks with more and less fragmented trading. We find that market fragmentation generally reduces transactions costs and increases execution speeds. Fragmentation does increase short-term volatility, but prices are more efficient in that they are closer to being a random walk. Our results that fragmentation does not appear to harm market quality have important implications for regulatory policy.
406 citations
Authors
Showing all 34676 results
Name | H-index | Papers | Citations |
---|---|---|---|
Andrew White | 149 | 1494 | 113874 |
Stephen R. Forrest | 148 | 1041 | 111816 |
Rafi Ahmed | 146 | 633 | 93190 |
Leonidas J. Guibas | 124 | 691 | 79200 |
Chenming Hu | 119 | 1296 | 57264 |
Robert E. Tarjan | 114 | 400 | 67305 |
Hong-Jiang Zhang | 112 | 461 | 49068 |
Ching-Ping Wong | 106 | 1128 | 42835 |
Guillermo Sapiro | 104 | 667 | 70128 |
James R. Heath | 103 | 425 | 58548 |
Arun Majumdar | 102 | 459 | 52464 |
Luca Benini | 101 | 1453 | 47862 |
R. Stanley Williams | 100 | 605 | 46448 |
David M. Blei | 98 | 378 | 111547 |
Wei-Ying Ma | 97 | 464 | 40914 |