Papers published on a yearly basis
Papers
TL;DR: A new kind of adaptive equalizer that operates in the spatial-frequency domain, uses either least mean square (LMS) or recursive least squares (RLS) adaptive processing, and requires only ~50 complex operations per detected bit, which is close to achievable with state-of-the-art digital signal processing technology.
Abstract: We introduce a new kind of adaptive equalizer that operates in the spatial-frequency domain and uses either least mean square (LMS) or recursive least squares (RLS) adaptive processing. We simulate the equalizer's performance in an 8-Mb/s quaternary phase-shift keying (QPSK) link over a frequency-selective Rayleigh fading multipath channel with ~3 μs RMS delay spread, corresponding to 60 symbols of dispersion. With the RLS algorithm and two diversity branches, our results show rapid convergence and channel tracking for a range of mobile speeds (up to ~100 mi/h). With a mobile speed of 40 mi/h, for example, the equalizer achieves an average bit error rate (BER) of 10⁻⁴ at a signal-to-noise ratio (SNR) of 15 dB, falling short of optimum linear receiver performance by about 4 dB. Moreover, it requires only ~50 complex operations per detected bit, i.e., ~400 M operations per second, which is close to achievable with state-of-the-art digital signal processing technology. An equivalent time-domain equalizer, if it converged at all, would require orders-of-magnitude more processing.
231 citations
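The complex LMS update at the heart of the equalizer described above can be sketched in a few lines. This is an illustrative baseband sketch only, not the paper's spatial-frequency-domain implementation: the tap count `n_taps`, step size `mu`, and the toy two-tap channel in the usage example are assumptions.

```python
import numpy as np

def lms_equalize(x, d, mu=0.05, n_taps=8):
    """Complex LMS equalizer sketch: adapt an FIR filter w so that
    y[n] = w^H u[n] tracks the training symbols d[n]."""
    w = np.zeros(n_taps, dtype=complex)
    y = np.zeros(len(d), dtype=complex)
    for n in range(n_taps - 1, len(d)):
        u = x[n - n_taps + 1:n + 1][::-1]  # tap-delay line, newest sample first
        y[n] = np.vdot(w, u)               # equalizer output w^H u
        e = d[n] - y[n]                    # error against the training symbol
        w += mu * np.conj(e) * u           # LMS update: w <- w + mu * e* * u
    return w, y
```

Driving it with QPSK symbols through a mild two-tap channel shows the error shrinking as the taps converge; the RLS variant replaces the scalar step `mu` with a recursively updated inverse correlation matrix, trading more arithmetic per symbol for faster convergence.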
01 Jan 1997
TL;DR: PARADISE (PARAdigm for DIalogue System Evaluation) is a general framework for evaluating spoken dialogue agents that decouples task requirements from an agent's dialogue behaviors, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity.
Abstract: This paper presents PARADISE (PARAdigm for DIalogue System Evaluation), a general framework for evaluating spoken dialogue agents. The framework decouples task requirements from an agent's dialogue behaviors, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity.
231 citations
11 Jul 2003
TL;DR: This paper presents the results from the ACL-SIGHAN-sponsored First International Chinese Word Segmentation Bakeoff held in 2003 and reported in conjunction with the Second SIGHAN Workshop on Chinese Language Processing, Sapporo, Japan.
Abstract: This paper presents the results from the ACL-SIGHAN-sponsored First International Chinese Word Segmentation Bakeoff held in 2003 and reported in conjunction with the Second SIGHAN Workshop on Chinese Language Processing, Sapporo, Japan. We give the motivation for having an international segmentation contest (given that there have been two within-China contests to date) and we report on the results of this first international contest, analyze these results, and make some recommendations for the future.
230 citations
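Bakeoff-style evaluation scores a system's output against a gold segmentation at the word level. A minimal word precision/recall/F sketch, comparing words as character spans, is shown below; it illustrates the usual metric, not the bakeoff's official scoring script.

```python
def to_spans(words):
    """Convert a word list to a set of (start, end) character spans."""
    spans, pos = set(), 0
    for w in words:
        spans.add((pos, pos + len(w)))
        pos += len(w)
    return spans

def word_prf(gold, pred):
    """Word-level precision, recall, and F1 via matching character spans."""
    g, p = to_spans(gold), to_spans(pred)
    correct = len(g & p)                   # words with identical boundaries
    prec = correct / len(p)
    rec = correct / len(g)
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

For example, scoring the prediction ["北京", "大", "学生"] against the gold ["北京", "大学", "生"] matches only one word span, giving precision, recall, and F1 of 1/3 each.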
30 Nov 2010
TL;DR: This work presents an approach for intelligent content placement that scales to large library sizes and employs a Lagrangian relaxation-based decomposition technique combined with integer rounding to overcome the challenges of scale.
Abstract: IPTV service providers offering Video-on-Demand currently use servers at each metropolitan office to store all the videos in their library. With the rapid increase in library sizes, it will soon become infeasible to replicate the entire library at each office. We present an approach for intelligent content placement that scales to large library sizes (e.g., 100Ks of videos). We formulate the problem as a mixed integer program (MIP) that takes into account constraints such as disk space, link bandwidth, and content popularity. To overcome the challenges of scale, we employ a Lagrangian relaxation-based decomposition technique combined with integer rounding. Our technique finds a near-optimal solution (e.g., within 1-2%) with orders of magnitude speedup relative to solving even the LP relaxation via standard software. We also present simple strategies to address practical issues such as popularity estimation, content updates, short-term popularity fluctuation, and frequency of placement updates. Using traces from an operational system, we show that our approach significantly outperforms simpler placement strategies. For instance, our MIP-based solution can serve all requests using only half the link bandwidth used by LRU or LFU cache replacement policies. We also investigate the trade-off between disk space and network bandwidth.
230 citations
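The popularity-aware placement the abstract contrasts with LRU/LFU caching can be sketched as a simple greedy heuristic: fill each office's disk with the videos offering the most popularity per unit of storage. This is an illustrative knapsack-style sketch under assumed inputs, not the paper's MIP formulation (which also handles link bandwidth and replication across offices).

```python
def place_greedy(videos, disk_capacity):
    """Greedy content placement sketch: choose videos by popularity
    per unit size until disk capacity is exhausted.

    videos: iterable of (video_id, size, popularity) tuples (assumed shape).
    """
    chosen, used = [], 0
    # Rank by popularity density, the classic knapsack heuristic.
    for vid, size, pop in sorted(videos, key=lambda v: v[2] / v[1], reverse=True):
        if used + size <= disk_capacity:
            chosen.append(vid)
            used += size
    return chosen
```

Unlike LRU or LFU, which react to the request stream one miss at a time, a placement computed from popularity estimates can be pushed to offices ahead of demand; the paper's Lagrangian decomposition makes the exact version of this optimization tractable at the scale of hundreds of thousands of videos.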
TL;DR: It is argued that filers may engage in premature filing: to clear their workspace, they archive information that later turns out to be of low value; given the effort involved in organizing data, they are also loath to discard filed information, even when its value is uncertain.
Abstract: We explored general issues concerning personal information management by investigating the characteristics of office workers' paper-based information in an industrial research environment. We examined the reasons people collect paper, the types of data they collect, problems encountered in handling paper, and strategies used for processing it. We tested three specific hypotheses in the course of an office move. The greater availability of public digital data, along with changes in people's jobs or interests, should have led to wholesale discarding of paper data while preparing for the move. Instead, we found workers kept large, highly valued paper archives. We also expected that the major part of people's personal archives would be unique documents. However, only 49% of people's archives were unique documents, the remainder being copies of publicly available data and unread information, and we explore reasons for this. We examined the effects of paper-processing strategies on archive structure. We discovered different paper-processing strategies (filing and piling) that were relatively independent of job type. We predicted that filers' attempts to evaluate and categorize incoming documents would produce smaller archives that were accessed frequently. Contrary to our predictions, filers amassed more information, and accessed it less frequently, than pilers. We argue that filers may engage in premature filing: to clear their workspace, they archive information that later turns out to be of low value. Given the effort involved in organizing data, they are also loath to discard filed information, even when its value is uncertain. We discuss the implications of this research for digital personal information management.
230 citations
Authors
Showing all 1881 results
Name | H-index | Papers | Citations |
---|---|---|---|
Yoshua Bengio | 202 | 1033 | 420313 |
Scott Shenker | 150 | 454 | 118017 |
Paul Shala Henry | 137 | 318 | 35971 |
Peter Stone | 130 | 1229 | 79713 |
Yann LeCun | 121 | 369 | 171211 |
Louis E. Brus | 113 | 347 | 63052 |
Jennifer Rexford | 102 | 394 | 45277 |
Andreas F. Molisch | 96 | 777 | 47530 |
Vern Paxson | 93 | 267 | 48382 |
Lorrie Faith Cranor | 92 | 326 | 28728 |
Ward Whitt | 89 | 424 | 29938 |
Lawrence R. Rabiner | 88 | 378 | 70445 |
Thomas E. Graedel | 86 | 348 | 27860 |
William W. Cohen | 85 | 384 | 31495 |
Michael K. Reiter | 84 | 380 | 30267 |