Institution
IBM
Company · Armonk, New York, United States
About: IBM is a company headquartered in Armonk, New York, United States. It is known for research contributions in the topics Layer (electronics) and Cache. The organization has 134567 authors who have published 253905 publications receiving 7458795 citations. The organization is also known as International Business Machines Corporation and Big Blue.
Papers published on a yearly basis
Papers
TL;DR: A family of explicit Runge-Kutta formulas containing imbedded formulas of all orders 1 through 4 is derived; it is efficient for problems with smooth solutions as well as problems with rapidly varying solutions.
Abstract: Explicit Runge-Kutta methods (RKMs) are among the most popular classes of formulas for the approximate numerical integration of nonstiff, initial value problems. However, high-order Runge-Kutta methods require more function evaluations per integration step than, for example, Adams methods used in PECE mode, and so, with RKMs, it is especially important to avoid rejected steps. Steps are often rejected when certain derivatives of the solutions are very large for part of the region of integration. This corresponds, for example, to regions where the solution has a sharp front or, in the limit, some derivative of the solution is discontinuous. In these circumstances the assumption that the local truncation error is changing slowly is invalid, and so any step-choosing algorithm is likely to produce an unacceptable step. In this paper we derive a family of explicit Runge-Kutta formulas. Each formula is very efficient for problems with smooth solutions as well as problems having rapidly varying solutions. Each member of this family consists of a fifth-order formula that contains imbedded formulas of all orders 1 through 4. By computing solutions at several different orders, it is possible to detect sharp fronts or discontinuities before all the function evaluations defining the full Runge-Kutta step have been computed. We can then either accept a lower order solution or abort the step, depending on which course of action seems appropriate. The efficiency of the new algorithm is demonstrated on the DETEST test set as well as on some difficult test problems with sharp fronts or discontinuities.
673 citations
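The imbedded-formula idea in the abstract above — reusing the same stage evaluations to obtain solutions at more than one order, which yields a free local error estimate — can be illustrated with a much smaller pair than the paper's fifth-order family. The following is a minimal sketch using a Heun/Euler (order 2 with imbedded order 1) pair; the accept/reject loop and tolerance are illustrative, not the paper's algorithm:

```python
import math

def embedded_heun_euler_step(f, t, y, h):
    """One step of an embedded Runge-Kutta pair: Heun (order 2) with an
    imbedded Euler (order 1) solution built from the same stages."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_low = y + h * k1                    # order-1 (Euler) solution
    y_high = y + 0.5 * h * (k1 + k2)      # order-2 (Heun) solution
    err = abs(y_high - y_low)             # free local error estimate
    return y_high, err

def integrate(f, t0, y0, t_end, h, tol=1e-4):
    """Crude accept/reject step loop driven by the imbedded estimate."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_new, err = embedded_heun_euler_step(f, t, y, h)
        if err <= tol:                    # accept the step
            t, y = t + h, y_new
        else:                             # reject: halve the step size
            h *= 0.5
    return y

# y' = -y, y(0) = 1 over [0, 1]; exact answer is e^{-1}.
y_final = integrate(lambda t, y: -y, 0.0, 1.0, 1.0, 0.1)
print(abs(y_final - math.exp(-1.0)) < 1e-2)   # → True
```

Because the lower-order solution costs no extra function evaluations, the error estimate is available before committing to the step — the same property the paper exploits, at higher order, to detect sharp fronts early.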
20 Apr 2002
TL;DR: Results show that cost-benefit tradeoffs are a key consideration in the adoption of UCD methods and that there is a major discrepancy between the commonly cited measures and the actually applied ones.
Abstract: This paper reports the results of a recent survey of user-centered design (UCD) practitioners. The survey involved over a hundred respondents who were CHI'2000 attendees or current UPA members. The paper identifies the most widely used methods and processes, the key factors that predict success, and the critical tradeoffs practitioners must make in applying UCD methods and processes. Results show that cost-benefit tradeoffs are a key consideration in the adoption of UCD methods. Measures of UCD effectiveness are lacking and rarely applied. There is also a major discrepancy between the commonly cited measures and those actually applied. These results have implications for the introduction, deployment, and execution of UCD projects.
672 citations
TL;DR: The proposed transmission code translates each source byte into a constrained 10-bit binary sequence which has excellent performance parameters near the theoretical limits for 8B/10B codes.
Abstract: This paper describes a byte-oriented binary transmission code and its implementation. This code is particularly well suited for high-speed local area networks and similar data links, where the information format consists of packets, variable in length, from about a dozen up to several hundred 8-bit bytes. The proposed transmission code translates each source byte into a constrained 10-bit binary sequence which has excellent performance parameters near the theoretical limits for 8B/10B codes. The maximum run length is 5 and the maximum digital sum variation is 6. A single error in the encoded bits can, at most, generate an error burst of length 5 in the decoded domain. A very simple implementation of the code has been accomplished by partitioning the coder into 5B/6B and 3B/4B subordinate coders.
672 citations
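The two performance parameters quoted in the abstract — maximum run length 5 and maximum digital sum variation 6 — are properties of the encoded bit stream that are easy to check programmatically. Below is a sketch of both checks, applied to an arbitrary example bit string (not an actual 8B/10B codeword):

```python
def max_run_length(bits: str) -> int:
    """Longest run of identical bits (0s or 1s) in the string."""
    longest = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def digital_sum_variation(bits: str) -> int:
    """Range of the running digital sum, counting a 1 as +1 and a 0
    as -1, starting from 0.  8B/10B bounds this range by 6."""
    s = lo = hi = 0
    for b in bits:
        s += 1 if b == "1" else -1
        lo, hi = min(lo, s), max(hi, s)
    return hi - lo

example = "1100011010"   # arbitrary 10-bit word, NOT a real codeword
print(max_run_length(example), digital_sum_variation(example))   # → 3 3
```

Bounding the run length guarantees frequent transitions for clock recovery, and bounding the digital sum variation keeps the line signal DC-balanced — the two constraints the 5B/6B and 3B/4B subordinate coders are designed to enforce.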
29 Sep 2006
TL;DR: This paper designs and implements a new Robust Rate Adaptation Algorithm (RRAA), which uses short-term loss ratio to opportunistically guide its rate change decisions, and an adaptive RTS filter to prevent collision losses from triggering rate decrease.
Abstract: Rate adaptation is a mechanism unspecified by the 802.11 standards, yet critical to system performance because it exploits the multi-rate capability at the physical layer. In this paper, we conduct a systematic and experimental study on rate adaptation over 802.11 wireless networks. Our main contributions are two-fold. First, we critique five design guidelines adopted by most existing algorithms. Our study reveals that these seemingly correct guidelines can be misleading in practice and thus incur significant performance penalties in certain scenarios. The fundamental challenge is that rate adaptation must accurately estimate the channel condition despite the presence of various dynamics caused by fading, mobility, and hidden terminals. Second, we design and implement a new Robust Rate Adaptation Algorithm (RRAA) that addresses the above challenge. RRAA uses short-term loss ratio to opportunistically guide its rate change decisions, and an adaptive RTS filter to prevent collision losses from triggering rate decrease. Our extensive experiments have shown that RRAA outperforms three well-known rate adaptation solutions (ARF, AARF, and SampleRate) in all tested scenarios, with throughput improvement up to 143%.
670 citations
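The short-term loss-ratio mechanism described above can be sketched as a sliding-window rate controller. In the sketch below, the rate set is 802.11a/g-style, and the window size and loss thresholds are illustrative assumptions, not the values derived in the paper (which also adds the adaptive RTS filter omitted here):

```python
from collections import deque

RATES = [6, 12, 24, 36, 48, 54]   # Mbps, an 802.11a/g-style rate set

class LossRatioRateAdapter:
    """Sketch of loss-ratio-driven rate adaptation: track frame delivery
    over a sliding window and change rate when the short-term loss ratio
    crosses a threshold.  Thresholds and window size are assumptions."""

    def __init__(self, window=40, hi_loss=0.4, lo_loss=0.1):
        self.idx = len(RATES) - 1          # start optimistic at 54 Mbps
        self.window = deque(maxlen=window)
        self.hi_loss, self.lo_loss = hi_loss, lo_loss

    @property
    def rate(self):
        return RATES[self.idx]

    def report(self, delivered: bool):
        """Record one frame's fate and adapt once the window is full."""
        self.window.append(delivered)
        if len(self.window) < self.window.maxlen:
            return                          # not enough samples yet
        loss = 1 - sum(self.window) / len(self.window)
        if loss > self.hi_loss and self.idx > 0:
            self.idx -= 1                   # too lossy: step down
            self.window.clear()
        elif loss < self.lo_loss and self.idx < len(RATES) - 1:
            self.idx += 1                   # very clean: probe upward
            self.window.clear()

adapter = LossRatioRateAdapter()
for _ in range(80):                         # sustained losses at high rate
    adapter.report(False)
print(adapter.rate)                         # → 36 (stepped down twice)
```

Keying decisions to a short-term window, rather than long-running averages, is what lets this style of controller react to fading and mobility; distinguishing collision losses from channel losses is the part that requires the paper's RTS filter.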
TL;DR: In this article, the authors examined the spatial patterns in trends of four monthly variables: average temperature, precipitation, streamflow, and average of the daily temperature range for the continental United States for the period 1948-88.
Abstract: Spatial patterns in trends of four monthly variables: average temperature, precipitation, streamflow, and average of the daily temperature range were examined for the continental United States for the period 1948–88. The data used are a subset of the Historical Climatology Network (1036 stations) and a stream gage network of 1009 stations. Trend significance was determined using the nonparametric seasonal Kendall's test on a monthly and annual basis, and a robust slope estimator was used for determination of trend magnitudes. A bivariate test was used for evaluation of relative changes in the variables, specifically, streamflow relative to precipitation, streamflow relative to temperature, and precipitation relative to temperature. Strong trends were found in all of the variables at many more stations than would be expected due to chance. There is a strong spatial and seasonal structure in the trend results. For instance, although annual temperature increases were found at many stations, mostly i...
670 citations
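The "robust slope estimator" paired with Kendall-type trend tests in studies like the one above is conventionally the Theil–Sen estimator: the median of all pairwise slopes, which a single outlier cannot drag far. A minimal sketch (the paper's exact estimator may differ):

```python
from itertools import combinations
from statistics import median

def theil_sen_slope(t, y):
    """Robust trend magnitude: the median of all pairwise slopes
    (y_j - y_i) / (t_j - t_i) over the sample."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i, j in combinations(range(len(t)), 2)
              if t[j] != t[i]]
    return median(slopes)

# A steadily warming series with one gross outlier (hypothetical data):
years = list(range(1948, 1958))
temps = [10.0, 10.1, 10.2, 10.3, 25.0, 10.5, 10.6, 10.7, 10.8, 10.9]
print(round(theil_sen_slope(years, temps), 2))   # → 0.1 degrees/yr
```

An ordinary least-squares fit to the same data would be pulled well above 0.1 by the single 25.0 reading; the median of pairwise slopes ignores it, which is why this estimator suits noisy hydroclimatic records.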
Authors
Showing all 134658 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Zhong Lin Wang | 245 | 2529 | 259003 |
| Anil K. Jain | 183 | 1016 | 192151 |
| Hyun-Chul Kim | 176 | 4076 | 183227 |
| Rodney S. Ruoff | 164 | 666 | 194902 |
| Tobin J. Marks | 159 | 1621 | 111604 |
| Jean M. J. Fréchet | 154 | 726 | 90295 |
| Albert-László Barabási | 152 | 438 | 200119 |
| György Buzsáki | 150 | 446 | 96433 |
| Stanislas Dehaene | 149 | 456 | 86539 |
| Philip S. Yu | 148 | 1914 | 107374 |
| James M. Tour | 143 | 859 | 91364 |
| Thomas P. Russell | 141 | 1012 | 80055 |
| Naomi J. Halas | 140 | 435 | 82040 |
| Steven G. Louie | 137 | 777 | 88794 |
| Daphne Koller | 135 | 367 | 71073 |