scispace - formally typeset
Institution

Qualcomm

Company
Farnborough, United Kingdom
About: Qualcomm is a company based in Farnborough, United Kingdom. It is known for research contributions in the topics Wireless and Signal. The organization has 19408 authors who have published 38405 publications receiving 804693 citations. The organization is also known as Qualcomm Incorporated and Qualcomm, Inc.


Papers
Patent
11 Mar 1997
TL;DR: In this paper, a radio link manager (22) provides a common threshold for determining the proper power level of the reverse link signal at each base station (14, 16, 18, 20).
Abstract: Method and apparatus for providing centralized power control in a communication system, in which each base station (14, 16, 18, 20) in the system operates to control both the forward link and the reverse link power. A radio link manager (22) provides a common threshold for determining the proper power level of the reverse link signal at each base station (14, 16, 18, 20). The radio link manager (22) also provides a ratio of the forward link signal strength to a pilot signal strength to control forward link power control. The radio link manager (22) provides the threshold and ratio uniformly to all base stations (14, 16, 18, 20) to provide a uniform operating point for all base stations (14, 16, 18, 20) in the system, thus increasing capacity. The same centralized power control is easily expanded to provide a mechanism for intersystem soft handoff.

139 citations
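The centralized power-control idea in the patent above can be sketched in a few lines: a radio link manager hands every base station the same reverse-link threshold, and each base station issues up/down commands against it. All names and values below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: one common threshold, distributed by the radio link
# manager, drives the reverse-link power-control decision at every base
# station, giving all of them the same operating point.

COMMON_THRESHOLD_DB = -7.0  # supplied uniformly by the radio link manager

def reverse_link_command(measured_snr_db: float) -> str:
    """Up/down power-control decision against the shared threshold."""
    return "down" if measured_snr_db > COMMON_THRESHOLD_DB else "up"

# illustrative per-base-station reverse-link SNR measurements (dB)
base_stations = {"bs14": -6.2, "bs16": -8.1, "bs18": -7.0, "bs20": -5.5}
commands = {bs: reverse_link_command(snr) for bs, snr in base_stations.items()}
```

Because the threshold is identical everywhere, a mobile in soft handoff receives consistent commands from all base stations serving it, which is what makes the scheme extend naturally to intersystem handoff.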

Patent
14 Feb 1994
TL;DR: In this paper, a method and system are presented, for use in a communication system in which data is transmitted in data frames of a predetermined time duration, for positioning the data within the data frames for transmission.
Abstract: A method and system, for use in a communication system in which data is transmitted in data frames of a predetermined time duration, for the positioning of the data within the data frames for transmission. A computation circuit computes, according to a deterministic code, a pseudorandom position for the data within each data frame. A positioning circuit positions the data within each data frame in the computed position.

139 citations
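The key property in the patent above is that the position is pseudorandom yet deterministic, so transmitter and receiver can compute the same slot independently. A minimal sketch, with an assumed frame size and an assumed mixing function (neither is from the patent):

```python
# Illustrative sketch: derive the data's starting slot inside each
# fixed-duration frame from a deterministic code over the frame index.

FRAME_SLOTS = 16      # slots per frame (assumed)
DATA_SLOTS = 4        # contiguous slots the data occupies (assumed)

def pseudorandom_position(frame_index: int, seed: int = 0x5EED) -> int:
    """Deterministic integer mix of the frame index -> starting slot."""
    x = (frame_index * 2654435761 + seed) & 0xFFFFFFFF  # Knuth-style mix
    return x % (FRAME_SLOTS - DATA_SLOTS + 1)

def place_in_frame(data, frame_index):
    """Position the data at the computed pseudorandom slot."""
    pos = pseudorandom_position(frame_index)
    frame = [None] * FRAME_SLOTS
    frame[pos:pos + len(data)] = data
    return frame, pos

frame, pos = place_in_frame(["d0", "d1", "d2", "d3"], frame_index=7)
```

Since both ends run the same computation on the same frame index, no side channel is needed to signal where the data sits.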

Patent
Eric C. Rosen, Mark Maggenti
14 May 2002
TL;DR: In this paper, a method and apparatus for reducing dormant-wakeup latency in a group communication network (100) provides for a significant reduction in the actual total dormant-wakeup time and the PTT latency perceived by the talker through caching the network-initiated wakeup triggers destined for target listeners, and delivering a wakeup trigger to a target mobile station (104, 106, 108) as soon as the target mobile station has re-established its traffic channel.
Abstract: A method and apparatus for reducing dormant-wakeup latency in a group communication network (100) provides for a significant reduction in the actual total dormant-wakeup time and the PTT latency perceived by the talker through caching the network-initiated wakeup triggers destined for target listeners, and delivering a wakeup trigger to a target mobile station (104, 106, 108) as soon as the target mobile station has re-established its traffic channel.

139 citations
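The caching idea in the patent above reduces to a simple pattern: hold each network-initiated wakeup trigger for a dormant target, and hand it over the instant that target's traffic channel comes back up. A hypothetical sketch (identifiers and trigger format are our own, not the patent's):

```python
# Illustrative sketch of trigger caching: instead of blocking the talker
# until every dormant listener wakes, the network caches each wakeup
# trigger and delivers it as soon as the target re-establishes its channel.

pending = {}  # target station id -> cached wakeup trigger (assumed shape)

def on_wakeup_trigger(target: int, trigger: str) -> None:
    """Cache a network-initiated trigger for a dormant target."""
    pending[target] = trigger

def on_traffic_channel_up(target: int):
    """Deliver the cached trigger immediately, if one is waiting."""
    return pending.pop(target, None)

on_wakeup_trigger(104, "ptt-call")
delivered = on_traffic_channel_up(104)   # delivered without extra round trips
```

The latency win comes from overlapping the trigger's network transit with the target's channel re-establishment rather than serializing the two.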

Journal ArticleDOI
TL;DR: A method is presented to design the encoder filter to minimize the reconstruction error; the design reduces the prediction gain in the filter, leaving redundancy in the signal for robustness, which illuminates the basic tradeoff between compression and robustness.
Abstract: Predictive quantization is a simple and effective method for encoding slowly-varying signals that is widely used in speech and audio coding. It has been known qualitatively that leaving correlation in the encoded samples can lead to improved estimation at the decoder when encoded samples are subject to erasure. However, performance estimation in this case has required Monte Carlo simulation. Provided here is a novel method for efficiently computing the mean-squared error performance of a predictive quantization system with erasures via a convex optimization with linear matrix inequality constraints. The method is based on jump linear system modeling and applies to any autoregressive moving average (ARMA) signal source and any erasure channel described by an aperiodic and irreducible Markov chain. In addition to this quantification for a given encoder filter, a method is presented to design the encoder filter to minimize the reconstruction error. Optimization of the encoder filter is a nonconvex problem, but we are able to parameterize with a single scalar a set of encoder filters that yield low MSE. The design method reduces the prediction gain in the filter, leaving the redundancy in the signal for robustness. This illuminates the basic tradeoff between compression and robustness.

139 citations
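The compression/robustness tradeoff discussed in the article above can be illustrated with a first-order DPCM (predictive quantization) loop: a smaller prediction coefficient leaves more correlation in the signal, which helps the decoder conceal erased samples by prediction alone. This is a minimal sketch of the general technique, not the paper's jump-linear-system analysis; all parameter values are illustrative.

```python
import random

def dpcm_encode(signal, a, step):
    """First-order predictive quantizer; returns quantized residuals."""
    pred, residuals = 0.0, []
    for s in signal:
        q = round((s - a * pred) / step) * step   # quantize the residual
        residuals.append(q)
        pred = a * pred + q                       # track the decoder's state
    return residuals

def dpcm_decode(residuals, a, erased=frozenset()):
    """Decode; on an erased sample, fall back to the prediction alone."""
    pred, out = 0.0, []
    for i, q in enumerate(residuals):
        s = a * pred + (0.0 if i in erased else q)
        out.append(s)
        pred = s
    return out

random.seed(1)
x, prev = [], 0.0
for _ in range(300):                              # AR(1) source, coefficient 0.9
    prev = 0.9 * prev + random.gauss(0.0, 0.3)
    x.append(prev)

enc = dpcm_encode(x, a=0.9, step=0.1)
rec = dpcm_decode(enc, a=0.9)                     # erasure-free reconstruction
```

With no erasures the per-sample error is bounded by half the quantizer step; with erasures, choosing `a` below the source correlation trades some prediction gain for graceful concealment, which is the tradeoff the paper quantifies.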

Proceedings ArticleDOI
24 Feb 2014
TL;DR: This work is the first to explore GPU Memory Management Units (MMUs) consisting of Translation Lookaside Buffers (TLBs) and page table walkers (PTWs) for address translation in unified heterogeneous systems and shows that a little TLB-awareness can make other GPU performance enhancements feasible in the face of cache-parallel address translation.
Abstract: The proliferation of heterogeneous compute platforms, of which CPU/GPU is a prevalent example, necessitates a manageable programming model to ensure widespread adoption. A key component of this is a shared unified address space between the heterogeneous units to obtain the programmability benefits of virtual memory. To this end, we are the first to explore GPU Memory Management Units (MMUs) consisting of Translation Lookaside Buffers (TLBs) and page table walkers (PTWs) for address translation in unified heterogeneous systems. We show the performance challenges posed by GPU warp schedulers on TLBs accessed in parallel with L1 caches, which provide many well-known programmability benefits. In response, we propose modest TLB and PTW augmentations that recover most of the performance lost by introducing L1 parallel TLB access. We also show that a little TLB-awareness can make other GPU performance enhancements (e.g., cache-conscious warp scheduling and dynamic warp formation on branch divergence) feasible in the face of cache-parallel address translation, bringing overheads in the range deemed acceptable for CPUs (10-15% of runtime). We presume this initial design leaves room for improvement but anticipate that our bigger insight, that a little TLB-awareness goes a long way in GPUs, will spur further work in this fruitful area.

139 citations
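The core pressure the paper above identifies can be seen in a toy model (our own illustration, not the paper's simulator): one GPU warp of 32 lanes can touch many distinct pages in a single cycle, so a small TLB accessed in parallel with the L1 sees bursts of misses that a CPU-style access stream would not. The TLB size and access patterns below are assumed for illustration.

```python
from collections import OrderedDict

PAGE_SIZE = 4096
TLB_ENTRIES = 8  # deliberately small, fully associative, LRU (assumed)

class Tlb:
    """Tiny fully-associative LRU TLB model that counts hits and misses."""
    def __init__(self, entries=TLB_ENTRIES):
        self.entries, self.map = entries, OrderedDict()
        self.hits = self.misses = 0

    def lookup(self, vaddr):
        vpn = vaddr // PAGE_SIZE
        if vpn in self.map:
            self.map.move_to_end(vpn)             # refresh LRU position
            self.hits += 1
        else:
            self.misses += 1
            self.map[vpn] = True                  # fill after the walk
            if len(self.map) > self.entries:
                self.map.popitem(last=False)      # evict the LRU entry

tlb = Tlb()
# a divergent warp: 32 lanes striding one page apart -> 32 distinct pages
for lane in range(32):
    tlb.lookup(lane * PAGE_SIZE)
# a coalesced warp: all 32 lanes fall within a single page
for lane in range(32):
    tlb.lookup(0x10_0000 + lane * 4)
```

The divergent warp misses on every lane while the coalesced warp misses only once, which is why the paper's TLB-aware variants of warp scheduling and warp formation pay off.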


Authors

Showing all 19413 results

Name                      H-index  Papers  Citations
Jian Yang                 142      1818    111166
Xiaodong Wang             135      1573    117552
Jeffrey G. Andrews        110      562     63334
Martin Vetterli           105      761     57825
Vinod Menon               101      269     60241
Michael I. Miller         92       599     34915
David Tse                 92       438     67248
Kannan Ramchandran        91       592     34845
Michael Luby              89       282     34894
Max Welling               89       441     64602
R. Srikant                84       432     26439
Jiaya Jia                 80       294     33545
Hai Li                    79       570     33848
Simon Haykin              77       454     62085
Christopher W. Bielawski  76       334     32512
Network Information
Related Institutions (5)
Intel
68.8K papers, 1.6M citations

92% related

Motorola
38.2K papers, 968.7K citations

89% related

Samsung
163.6K papers, 2M citations

88% related

NEC
57.6K papers, 835.9K citations

87% related

Texas Instruments
39.2K papers, 751.8K citations

86% related

Performance Metrics
No. of papers from the Institution in previous years
Year  Papers
2022  9
2021  1,188
2020  2,266
2019  2,224
2018  2,124
2017  1,477