Institution

Qualcomm

Company · Farnborough, United Kingdom
About: Qualcomm is a company based in Farnborough, United Kingdom. It is known for research contributions in the topics of Wireless & Signal. The organization has 19408 authors who have published 38405 publications receiving 804693 citations. The organization is also known as Qualcomm Incorporated and Qualcomm, Inc.


Papers
Patent
01 Apr 2005
TL;DR: In this article, a closed-loop power control method is proposed for a mobile communication system, in which a mobile station provides information on the quality of the signal received from the base station, and the base station responds by adjusting the power allocated to that user in a shared base station signal.
Abstract: A method and apparatus for controlling transmission power levels in a mobile communication system. The method provides for closed-loop power control. A mobile station provides information on the quality of the signal received from the base station, and the base station responds by adjusting the power allocated to that user in a shared base station signal. The transmission power is adjusted initially by a large increment and then in progressively smaller steps. The mobile station also reports its relative velocity to the base station, and the base station adjusts its transmission power in accordance with this velocity information.
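The control loop described in the abstract (large initial step, progressively smaller corrections, velocity-scaled adjustments) can be sketched in a few lines. The following is a minimal illustration, not the patented implementation; the step size, decay factor, and velocity scaling are assumed values.

```python
# Minimal sketch of the closed-loop power control described above:
# apply a large initial increment, shrink the step on each iteration,
# and scale adjustments by the mobile's reported velocity.
# All numeric values are illustrative, not from the patent.

def adjust_power_db(power_db, quality_ok, step_db,
                    decay=0.5, min_step_db=0.25, velocity_factor=1.0):
    """One control iteration: raise power if the mobile reports poor
    quality, lower it otherwise, then shrink the next step."""
    delta = step_db * velocity_factor
    power_db += -delta if quality_ok else delta
    next_step = max(step_db * decay, min_step_db)
    return power_db, next_step

power, step = 0.0, 4.0  # start with a large (assumed) 4 dB step
for quality_ok in [False, False, True, False, True, True]:
    power, step = adjust_power_db(power, quality_ok, step)
    print(f"power={power:+.2f} dB, next step={step:.2f} dB")
```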

195 citations

Patent
Keith W. Saints, Tao Chen
13 Nov 1997
TL;DR: In this article, the authors proposed a method and apparatus for improved quality or power control that accounts for the delays inherent in a closed-loop communication system: the mobile station (12) adjusts the forward link quality or power level thresholds against which it compares incoming frames to reflect the signal level it anticipates receiving once pending power-control messages take effect.
Abstract: The present invention provides a method and apparatus for providing improved quality or power control by recognizing the delays inherent in a closed-loop communication system. The mobile station (12) or receiver properly adjusts its forward link quality or power level thresholds or measurements with which it compares incoming frames or portions thereof to reflect the level it anticipates receiving (after the delay). For example, the mobile station (12) can recognize that at a given measurement time, two outstanding messages have not been executed by the transmitter (where each message indicates a corresponding increase of 1 dB). As a result, the mobile station (12) can adjust its measurement threshold down by 2 dB to more closely correspond to future power adjustments. If the currently received frame or portion thereof is still below the readjusted threshold, then the mobile station (12) sends out a new message to request a further increase in the forward link channel.
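The 2 dB example in the abstract suggests a simple rule: subtract the total in-flight power increase from the comparison threshold before deciding whether to request more. A minimal sketch, using hypothetical function names and the 1 dB step from the abstract's example:

```python
# Minimal sketch of the delay compensation described above: the mobile
# tracks power-up commands the base station has not yet executed and
# lowers its comparison threshold by that amount, so it does not
# re-request power that is already "in flight".

def effective_threshold_db(threshold_db, outstanding_cmds, step_db=1.0):
    """E.g. two outstanding 1 dB commands lower the threshold by 2 dB."""
    return threshold_db - outstanding_cmds * step_db

def should_request_increase(measured_db, threshold_db, outstanding_cmds):
    return measured_db < effective_threshold_db(threshold_db, outstanding_cmds)

# Example: target quality is -10 dB and two 1 dB commands are pending.
print(should_request_increase(-11.5, -10.0, 2))  # False: -11.5 >= -12
print(should_request_increase(-12.5, -10.0, 2))  # True:  -12.5 <  -12
```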

195 citations

Journal ArticleDOI
TL;DR: Single-ended and differential phased array front-ends developed for Ka-band applications using a 0.12 μm SiGe BiCMOS process are competitive with GaAs and InP designs, and are building blocks for low-cost millimeter-wave phased array front-ends based on silicon technology.
Abstract: Single-ended and differential phased array front-ends are developed for Ka-band applications using a 0.12 μm SiGe BiCMOS process. The phase shifters are based on CMOS switched delay networks and have 22.5° phase resolution and <4° rms phase error at 35 GHz, and can handle +10 dBm of RF power (P1dB) with a third-order intermodulation intercept point (IIP3) of +21 dBm. For the single-ended design, a SiGe low noise amplifier is placed before the CMOS phase shifter, and the LNA/phase shifter results in 11 ± 1.5 dB gain and <3.4 dB of noise figure (NF), for a total power consumption of only 11 mW. For the differential front-end, a variable gain LNA is also developed and shows 9-20 dB gain and <1° rms phase imbalance between the eight different gain states. The differential variable gain LNA/phase shifter consumes 33 mW, and results in 10 ± 1.3 dB gain and 3.8 dB of NF. The gain variation is reduced to 9.1 ± 0.45 dB with the variable gain function applied. The single-ended and differential front-ends occupy a small chip area, with sizes of 350 × 800 μm² and 350 × 950 μm², respectively, excluding pads. These chips are competitive with GaAs and InP designs, and are building blocks for low-cost millimeter-wave phased array front-ends based on silicon technology.
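For context, a 22.5° phase resolution corresponds to 360/22.5 = 16 states, i.e. a 4-bit phase shifter. The sketch below uses standard uniform-linear-array theory (not code or data from the paper) to show how the ideal element phases for a given steering angle get quantized to that resolution.

```python
import numpy as np

# Standard phased-array arithmetic (not from the paper): the ideal
# progressive phase across a half-wavelength-spaced linear array,
# quantized to the 22.5-degree resolution reported above.

def element_phases_deg(n_elements, steer_deg, spacing_wl=0.5):
    """Ideal per-element phase for steering a uniform linear array."""
    k = 360.0 * spacing_wl  # degrees of phase per element per sin(theta)
    return (k * np.arange(n_elements) * np.sin(np.radians(steer_deg))) % 360

def quantize_deg(phases, resolution_deg=22.5):
    return (np.round(phases / resolution_deg) * resolution_deg) % 360

ideal = element_phases_deg(8, steer_deg=20)
print("ideal:    ", np.round(ideal, 1))
print("quantized:", quantize_deg(ideal))
```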

194 citations

Proceedings Article
01 Jan 2017
TL;DR: The 16-bit Flexpoint data format, as discussed by the authors, is designed as a complete replacement for the 32-bit floating point format in training and inference, supporting modern deep network topologies without modification.
Abstract: Deep neural networks are commonly developed and trained in 32-bit floating point format. Significant gains in performance and energy efficiency could be realized by training and inference in numerical formats optimized for deep learning. Despite advances in limited-precision inference in recent years, training of neural networks in low bit-width remains a challenging problem. Here we present the Flexpoint data format, aiming at a complete replacement of the 32-bit floating point format for training and inference, designed to support modern deep network topologies without modification. Flexpoint tensors have a shared exponent that is dynamically adjusted to minimize overflows and maximize available dynamic range. We validate Flexpoint by training AlexNet, a deep residual network, and a generative adversarial network, using a simulator implemented with the neon deep learning framework. We demonstrate that 16-bit Flexpoint closely matches 32-bit floating point in training all three models, without any need for tuning of model hyperparameters. Our results suggest Flexpoint is a promising numerical format for future hardware for training and inference.
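The shared-exponent idea can be illustrated in a few lines of numpy. This is a simplified sketch with hypothetical function names: it picks the exponent per tensor from the current maximum, whereas Flexpoint as described adjusts the exponent dynamically during training to avoid overflows.

```python
import numpy as np

# Simplified sketch of a shared-exponent format: one exponent per
# tensor, chosen so the largest magnitude fits the 16-bit mantissa.
# (Flexpoint proper adjusts the exponent dynamically during training;
# here it is recomputed from the current tensor for illustration.)

def shared_exp_quantize(x, mantissa_bits=16):
    int_max = 2 ** (mantissa_bits - 1) - 1
    max_abs = np.max(np.abs(x))
    if max_abs == 0:
        return np.zeros(x.shape, np.int32), 0
    exp = int(np.ceil(np.log2(max_abs / int_max)))   # shared exponent
    mantissa = np.round(x / 2.0 ** exp).astype(np.int32)
    return mantissa, exp

def shared_exp_dequantize(mantissa, exp):
    return mantissa.astype(np.float32) * 2.0 ** exp

x = np.random.randn(4, 4).astype(np.float32)
m, e = shared_exp_quantize(x)
print("shared exponent:", e)
print("max abs error:  ", np.max(np.abs(x - shared_exp_dequantize(m, e))))
```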

194 citations

Journal ArticleDOI
TL;DR: A novel Deep Adversarial Metric Learning approach, termed DAML, for cross-modal retrieval, which introduces a modality classifier that predicts the modality of a transformed feature, ensuring that transformed features from different modalities are statistically indistinguishable.
Abstract: Cross-modal retrieval has become a highlighted research topic, providing a flexible retrieval experience across multimedia data such as image, video, text, and audio. The core of existing cross-modal retrieval approaches is to narrow the gap between different modalities by finding a maximally correlated embedding space. Recently, researchers have leveraged Deep Neural Networks (DNNs) to learn nonlinear transformations for each modality, obtaining transformed features in a common subspace where cross-modal matching can be performed. However, the statistical characteristics of the original features for each modality are not explicitly preserved in the learned subspace. Inspired by recent advances in adversarial learning, we propose a novel Deep Adversarial Metric Learning approach, termed DAML, for cross-modal retrieval. DAML nonlinearly maps labeled data pairs of different modalities into a shared latent feature subspace, under which the intra-class variation is minimized, the inter-class variation is maximized, and the difference of each data pair captured from two modalities of the same class is minimized. In addition to maximizing the correlations between modalities, we add a regularization by introducing adversarial learning. In particular, we introduce a modality classifier to predict the modality of a transformed feature, which ensures that the transformed features are statistically indistinguishable. Experiments on three popular multimodal datasets show that DAML achieves superior performance compared to several state-of-the-art cross-modal retrieval methods.
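The adversarial regularizer is the distinctive piece. Below is a minimal PyTorch sketch of the min-max setup described in the abstract, with assumed layer sizes and names; it is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of the adversarial regularizer described above (assumed
# dimensions, not the authors' code): a modality classifier tries to
# tell image embeddings from text embeddings, while the two embedding
# branches are trained to fool it.

img_net = nn.Linear(2048, 128)   # image branch (hypothetical input dim)
txt_net = nn.Linear(300, 128)    # text branch (hypothetical input dim)
modality_clf = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                             nn.Linear(64, 1))  # predicts "is image?"

z_img = img_net(torch.randn(32, 2048))
z_txt = txt_net(torch.randn(32, 300))
labels = torch.cat([torch.ones(32), torch.zeros(32)])

# Classifier step: learn to separate modalities (embeddings detached
# so only the classifier receives gradients here).
logits_d = torch.cat([modality_clf(z_img.detach()),
                      modality_clf(z_txt.detach())]).squeeze(1)
d_loss = F.binary_cross_entropy_with_logits(logits_d, labels)

# Embedding step: fool the classifier (flipped labels), pushing the
# transformed features to be statistically indistinguishable.
logits_g = torch.cat([modality_clf(z_img), modality_clf(z_txt)]).squeeze(1)
g_loss = F.binary_cross_entropy_with_logits(logits_g, 1 - labels)
```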

194 citations


Authors

Showing all 19413 results

Name                       H-index   Papers   Citations
Jian Yang                  142       1818     111166
Xiaodong Wang              135       1573     117552
Jeffrey G. Andrews         110       562      63334
Martin Vetterli            105       761      57825
Vinod Menon                101       269      60241
Michael I. Miller          92        599      34915
David Tse                  92        438      67248
Kannan Ramchandran         91        592      34845
Michael Luby               89        282      34894
Max Welling                89        441      64602
R. Srikant                 84        432      26439
Jiaya Jia                  80        294      33545
Hai Li                     79        570      33848
Simon Haykin               77        454      62085
Christopher W. Bielawski   76        334      32512
Network Information
Related Institutions (5)
Intel
68.8K papers, 1.6M citations

92% related

Motorola
38.2K papers, 968.7K citations

89% related

Samsung
163.6K papers, 2M citations

88% related

NEC
57.6K papers, 835.9K citations

87% related

Texas Instruments
39.2K papers, 751.8K citations

86% related

Performance Metrics
No. of papers from the Institution in previous years
Year   Papers
2022        9
2021    1,188
2020    2,266
2019    2,224
2018    2,124
2017    1,477