
Showing papers by "Qualcomm published in 2017"


Posted Content
TL;DR: The prediction difference analysis method for visualizing the response of a deep neural network to a specific input highlights areas in a given input image that provide evidence for or against a certain class.
Abstract: This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input. When classifying images, the method highlights areas in a given input image that provide evidence for or against a certain class. It overcomes several shortcomings of previous methods and provides great additional insight into the decision-making process of classifiers. Making neural network decisions interpretable through visualization is important both to improve models and to accelerate the adoption of black-box classifiers in application areas such as medicine. We illustrate the method in experiments on natural images (ImageNet data), as well as medical images (MRI brain scans).
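The core idea can be sketched with a toy stand-in for the network: replace one input feature at a time with a baseline value (simulating it being unknown) and record how the target-class probability drops. This is a minimal occlusion-style sketch under assumed names and weights, not the authors' implementation.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def toy_classifier(x):
    # Hypothetical stand-in for a deep network: two linear score functions.
    w = [[1.0, 2.0, -1.0, 0.5], [-1.0, 0.5, 1.0, -0.5]]
    return softmax([sum(wi * xi for wi, xi in zip(row, x)) for row in w])

def prediction_difference(x, target, baseline=0.0):
    """Relevance of each feature: drop in target-class probability when
    the feature is replaced by a baseline value (a crude proxy for the
    'unknown feature' marginalization in prediction difference analysis)."""
    p_full = toy_classifier(x)[target]
    relevance = []
    for i in range(len(x)):
        x_masked = list(x)
        x_masked[i] = baseline
        p_masked = toy_classifier(x_masked)[target]
        relevance.append(p_full - p_masked)  # > 0: evidence for the class
    return relevance

rel = prediction_difference([1.0, 0.5, 0.2, 0.0], target=0)
```

Applying such per-feature differences over sliding image patches yields evidence maps of the kind the abstract describes.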

536 citations


Journal ArticleDOI
TL;DR: In this article, recent distributed denial-of-service attacks are shown to demonstrate the high vulnerability of Internet of Things (IoT) systems and devices; addressing this challenge will require scalable security solutions optimized for the IoT ecosystem.
Abstract: Recent distributed denial-of-service attacks demonstrate the high vulnerability of Internet of Things (IoT) systems and devices. Addressing this challenge will require scalable security solutions optimized for the IoT ecosystem.

470 citations


Posted Content
TL;DR: In this article, a version of soft weight-sharing (Nowlan & Hinton, 1992) was shown to achieve competitive compression rates, performing both quantization and pruning in one simple (re-)training procedure.
Abstract: The success of deep learning in numerous application domains created the desire to run and train them on mobile devices. This, however, conflicts with their computationally, memory- and energy-intensive nature, leading to a growing interest in compression. Recent work by Han et al. (2015a) proposes a pipeline that involves retraining, pruning and quantization of neural network weights, obtaining state-of-the-art compression rates. In this paper, we show that competitive compression rates can be achieved by using a version of soft weight-sharing (Nowlan & Hinton, 1992). Our method achieves both quantization and pruning in one simple (re-)training procedure. This point of view also exposes the relation between compression and the minimum description length (MDL) principle.
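The retraining itself is not reproduced here, but the final step of soft weight-sharing — collapsing each trained weight onto the mixture component that best explains it — can be sketched as follows. The mixture means, mixing proportions, and variance are illustrative assumptions, not values from the paper.

```python
import math

def gaussian(x, mu, sigma):
    # Density of N(mu, sigma^2) at x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def quantize_weights(weights, means, pis, sigma=0.05):
    """Map each weight to the mean of the mixture component with the
    highest responsibility pi_j * N(w | mu_j, sigma).  A component pinned
    at 0 implements pruning; the other components implement quantization."""
    out = []
    for w in weights:
        resp = [p * gaussian(w, mu, sigma) for p, mu in zip(pis, means)]
        j = max(range(len(means)), key=lambda k: resp[k])
        out.append(means[j])
    return out

w = [0.01, -0.02, 0.24, -0.26, 0.51]
q = quantize_weights(w, means=[0.0, 0.25, -0.25, 0.5], pis=[0.7, 0.1, 0.1, 0.1])
```

After this step the network stores only the cluster index per weight plus a short codebook of means, which is where the compression comes from.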

346 citations


Journal ArticleDOI
01 Apr 2017
TL;DR: The use of smartphone-grade hardware and the small scale provides an inexpensive and practical solution for autonomous flight in indoor environments; extensive experimental results show aggressive flights through and around obstacles with large angular excursions and accelerations.
Abstract: We address the state estimation, control, and planning for aggressive flight with a 150 cm diameter, 250 g quadrotor equipped only with a single camera and an inertial measurement unit (IMU). The use of smartphone grade hardware and the small scale provides an inexpensive and practical solution for autonomous flight in indoor environments. The key contributions of this paper are: 1) robust state estimation and control using only a monocular camera and an IMU at speeds of 4.5 m/s, accelerations of over 1.5 g, roll and pitch angles of up to 90°, and angular rate of up to 800°/s without requiring any structure in the environment; 2) planning of dynamically feasible three-dimensional trajectories for slalom paths and flights through narrow windows; and 3) extensive experimental results showing aggressive flights through and around obstacles with large rotation angular excursions and accelerations.

275 citations


Journal ArticleDOI
TL;DR: A fast level set model-based method for intensity inhomogeneity correction and a spectral properties-based color correction method are proposed to overcome obstacles in measuring wound-region changes from images.
Abstract: Wound area changes over multiple weeks are highly predictive of the wound healing process. A big data eHealth system would be very helpful in evaluating these changes. We usually analyze images of the wound bed for diagnosing injury. Unfortunately, accurate measurements of wound region changes from images are difficult. Many factors affect the quality of images, such as intensity inhomogeneity and color distortion. To this end, we propose a fast level set model-based method for intensity inhomogeneity correction and a spectral properties-based color correction method to overcome these obstacles. State-of-the-art level set methods can segment objects well. However, such methods are time-consuming and inefficient. In contrast to conventional approaches, the proposed model integrates a new signed energy force function that can detect contours at weak or blurred edges efficiently. It ensures the smoothness of the level set function and reduces the computational complexity of re-initialization. To increase the speed of the algorithm further, we also include an additive operator-splitting algorithm in our fast level set model. In addition, we consider using a camera, lighting, and spectral properties to recover the actual color. Numerical synthetic and real-world images demonstrate the advantages of the proposed method over state-of-the-art methods. Experimental results also show that the proposed model is at least twice as fast as widely used methods.

216 citations


Proceedings ArticleDOI
Se Rim Park, Jinwon Lee
20 Aug 2017
TL;DR: In this article, the authors proposed using fully convolutional neural networks, which have far fewer parameters than fully connected networks, to remove babble noise without creating artifacts in human speech.
Abstract: In hearing aids, the presence of babble noise greatly degrades the intelligibility of human speech. However, removing the babble without creating artifacts in human speech is a challenging task in a low SNR environment. Here, we sought to solve the problem by finding a `mapping' between noisy speech spectra and clean speech spectra via supervised learning. Specifically, we propose using fully convolutional neural networks, which have far fewer parameters than fully connected networks. The proposed network, Redundant Convolutional Encoder Decoder (R-CED), demonstrates that a convolutional network can be 12 times smaller than a recurrent network and yet achieve better performance, which shows its applicability for an embedded system: the hearing aids.
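The parameter argument is easy to make concrete: a 1-D convolution's weight count depends only on channel counts and kernel size, while a fully connected layer scales with the full feature dimension. The layer sizes below are illustrative assumptions, not the actual R-CED architecture.

```python
def conv1d_params(in_channels, out_channels, kernel_size):
    # Convolution weights are shared across positions, so the count is
    # independent of the spectrum length (weights + biases).
    return in_channels * out_channels * kernel_size + out_channels

def dense_params(in_dim, out_dim):
    # A fully connected layer wires every input to every output.
    return in_dim * out_dim + out_dim

# Hypothetical sizes: 129 frequency bins per frame, 12 channels, kernel 13.
conv_cost = conv1d_params(12, 12, 13)
dense_cost = dense_params(129, 129)
```

Even at these modest sizes the dense layer costs roughly an order of magnitude more parameters per layer, which is the memory advantage the abstract exploits for embedded deployment.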

214 citations


Proceedings Article
01 Jan 2017
TL;DR: The 16-bit Flexpoint data format is presented as a complete replacement for the 32-bit floating point format in training and inference, designed to support modern deep network topologies without modifications.
Abstract: Deep neural networks are commonly developed and trained in 32-bit floating point format. Significant gains in performance and energy efficiency could be realized by training and inference in numerical formats optimized for deep learning. Despite advances in limited precision inference in recent years, training of neural networks in low bit-width remains a challenging problem. Here we present the Flexpoint data format, aiming at a complete replacement of 32-bit floating point format training and inference, designed to support modern deep network topologies without modifications. Flexpoint tensors have a shared exponent that is dynamically adjusted to minimize overflows and maximize available dynamic range. We validate Flexpoint by training AlexNet, a deep residual network and a generative adversarial network, using a simulator implemented with the neon deep learning framework. We demonstrate that 16-bit Flexpoint closely matches 32-bit floating point in training all three models, without any need for tuning of model hyperparameters. Our results suggest Flexpoint as a promising numerical format for future hardware for training and inference.
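As a rough sketch of the shared-exponent idea (not the paper's actual Flexpoint implementation or its dynamic exponent-management algorithm), a tensor can be encoded as integer mantissas plus a single exponent chosen from the largest magnitude:

```python
import math

def to_flexpoint(values, mantissa_bits=16):
    """Encode a list of floats as integer mantissas sharing one exponent,
    chosen so the largest magnitude nearly fills the mantissa range."""
    max_mag = max(abs(v) for v in values)
    if max_mag == 0.0:
        return [0] * len(values), 0
    limit = 2 ** (mantissa_bits - 1) - 1      # e.g. 32767 for 16 bits
    _, e = math.frexp(max_mag)                # max_mag = f * 2**e, f in [0.5, 1)
    exp = e - (mantissa_bits - 1)             # shared exponent for the tensor
    mantissas = [max(-limit, min(limit, round(v * 2.0 ** -exp))) for v in values]
    return mantissas, exp

def from_flexpoint(mantissas, exp):
    """Decode back to floats: value = mantissa * 2**exp."""
    return [m * 2.0 ** exp for m in mantissas]
```

Overflow handling here is a simple clamp; the format described in the abstract instead adjusts the shared exponent dynamically during training to minimize overflows while preserving dynamic range.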

194 citations


Posted Content
TL;DR: In this paper, an energy harvesting sensor continuously monitors a system and sends time-stamped status updates to a destination, where the destination keeps track of the system status through the received updates.
Abstract: In this paper, we consider a scenario where an energy harvesting sensor continuously monitors a system and sends time-stamped status updates to a destination. The destination keeps track of the system status through the received updates. We use the metric Age of Information (AoI), the time that has elapsed since the last received update was generated, to measure the "freshness" of the status information available at the destination. We assume energy arrives randomly at the sensor according to a Poisson process, and each status update consumes one unit of energy. Our objective is to design optimal online status update policies to minimize the long-term average AoI, subject to the energy causality constraint at the sensor. We consider three scenarios, i.e., the battery size is infinite, finite, and one unit only, respectively. For the infinite battery scenario, we adopt a best-effort uniform status update policy and show that it minimizes the long-term average AoI. For the finite battery scenario, we adopt an energy-aware adaptive status update policy, and prove that it is asymptotically optimal when the battery size goes to infinity. For the last scenario where the battery size is one, we first show that within a broadly defined class of online policies, the optimal policy should have a renewal structure, i.e., the status update epochs form a renewal process, and the length of each renewal interval depends on the first energy arrival over that interval only. We then focus on a renewal interval, and prove that if the AoI in the system is below a threshold when the first energy arrives, the sensor should store the energy and hold status update until the AoI reaches the threshold, otherwise, it updates the status immediately. We analytically characterize the long-term average AoI under such a threshold-based policy, and explicitly identify the optimal threshold.
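For the unit-battery case, the threshold policy is straightforward to simulate. The sketch below is an illustrative event-driven simulation, not the paper's analysis; it uses the memorylessness of the Poisson process to skip energy arrivals lost while the battery is full. With unit rate, the greedy policy (threshold 0) yields an average AoI near 1, and a threshold near 0.9 does measurably better.

```python
import random

def average_aoi(threshold, rate=1.0, horizon=200000.0, seed=7):
    """Simulate the unit-battery threshold policy: energy arrives as a
    rate-`rate` Poisson process; an arrival is used immediately if the
    AoI already exceeds `threshold`, otherwise the unit of energy is held
    and the update is sent exactly when the AoI reaches the threshold
    (arrivals during the hold are lost, since the battery holds one unit)."""
    rng = random.Random(seed)
    t, last_update, area = 0.0, 0.0, 0.0
    while t < horizon:
        t += rng.expovariate(rate)                 # next usable energy arrival
        if t - last_update >= threshold:
            update_time = t                        # AoI past threshold: send now
        else:
            update_time = last_update + threshold  # hold, send at the threshold
        age = update_time - last_update            # AoI grows to `age`, then resets
        area += 0.5 * age * age                    # area under the AoI sawtooth
        last_update = update_time
        t = max(t, update_time)
    return area / t
```

For rate-1 arrivals the update interval is max(X, B) with X exponential, so the average AoI is (B² + 2(B+1)e^(−B)) / (2(B + e^(−B))), which is 1 at B = 0 and about 0.90 near B = 0.9 — the simulation should reproduce both.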

180 citations


Journal ArticleDOI
TL;DR: In this article, a portable radar system for short-range localization, inverse synthetic aperture radar imaging, and vital sign tracking is presented, which incorporates frequency-modulated continuous-wave (FMCW) and interferometry (Doppler) modes.
Abstract: This paper presents a portable radar system for short-range localization, inverse synthetic aperture radar imaging, and vital sign tracking. The proposed sensor incorporates frequency-modulated continuous-wave (FMCW) and interferometry (Doppler) modes, which enable this radar system to obtain both absolute range information and tiny vital signs (i.e., respiration and heartbeat) of human targets. These two different operation modes can be switched through an on-board microcontroller. To simplify the system, the proposed radar utilizes the audio card of a laptop to sample the baseband signal. The FMCW mode of the radar uses operational-amplifier-based circuits to generate an analog sawtooth signal and a reference pulse sequence (RPS). The RPS is locked to the sawtooth signal to obtain coherence for the radar system. For the interferometry mode, a low-intermediate-frequency modulation method is implemented to avoid the slow vital signs from being distorted by the high-pass filter of the audio card. Several experiments were carried out to reveal the capability and distinct operational features of the proposed portable hybrid radar. The experiments also showed that the system can easily detect glass, which is usually difficult to identify for optical-based sensors. In addition, 2-D scanning in a complex environment revealed that the proposed radar was able to differentiate human targets from other objects. Moreover, ISAR images were used to isolate moving human targets from surrounding clutter. Finally, the proposed radar also demonstrated its ability to accurately measure vital signs when a human subject sits still.
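In the FMCW mode, absolute range follows from the measured beat frequency via R = c·f_b·T/(2·B), where T is the sweep time and B the sweep bandwidth. A one-line sketch; the parameter values below are illustrative, not the specifications of the paper's radar.

```python
def fmcw_range(beat_freq_hz, sweep_bandwidth_hz, sweep_time_s, c=3.0e8):
    """Target range from the beat frequency of a sawtooth FMCW sweep:
    R = c * f_b * T / (2 * B)."""
    return c * beat_freq_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

# Illustrative numbers: a 150 MHz sweep over 1 ms with a 1 kHz beat
# frequency corresponds to a 1 m target range.
r = fmcw_range(1.0e3, 150.0e6, 1.0e-3)
```

The interferometry mode works differently: it extracts sub-wavelength phase variations (respiration, heartbeat) rather than absolute range, which is why the system needs both modes.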

176 citations


Patent
12 Jan 2017
TL;DR: In this paper, a system and method of locating a position of a wireless device in range of one or more base stations is presented, where three signals are received that each contain a unique identifier for a base station.
Abstract: A system and method of locating a position of a wireless device in range of one or more base stations. Three signals are received that each contain a unique identifier for a base station. An estimate of the distance between the wireless device and each base station is performed. Previously determined locations for each base station are referenced. At least one of the three base stations is capable of communication to remote locations and unavailable to the wireless device for communication to remote locations.
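With three base-station positions and three distance estimates, the device position can be recovered by classic 2-D trilateration: subtracting the first circle equation from the other two leaves a linear system. This is a generic sketch of that geometric step, not the patent's claimed method.

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) from three known station positions and ranges by
    linearizing the circle equations (subtract the first from the others)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1 ** 2 - r2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1 ** 2 - r3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("base stations are collinear")
    # Cramer's rule on the 2x2 linear system.
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

Note the patent's point that a station need only broadcast its identifier to be usable for ranging — it does not have to be available to the device for two-way communication.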

150 citations


Proceedings Article
28 Nov 2017
TL;DR: In this article, the authors introduce a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster, where the similarity is category-agnostic and can be learned from data in the source domain using a similarity network.
Abstract: This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster. The key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning. We begin by reducing categorical information to pairwise constraints, which only considers whether two instances belong to the same class or not. This similarity is category-agnostic and can be learned from data in the source domain using a similarity network. We then present two novel approaches for performing transfer learning using this similarity function. First, for unsupervised domain adaptation, we design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs. Second, for cross-task learning (i.e., unsupervised clustering with unseen categories), we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network. Since the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches. Using this method, we first show state of the art results for the challenging cross-task problem, applied on Omniglot and ImageNet. Our results show that we can reconstruct semantic clusters with high accuracy. We then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets. Our approach doesn't explicitly deal with domain discrepancy. If we combine with a domain adaptation loss, it shows further improvement.
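The first step — reducing categorical information to pairwise constraints — can be sketched directly (a minimal illustration of the data transformation, not the authors' learned similarity network):

```python
def labels_to_pairwise(labels):
    """Reduce categorical labels to category-agnostic pairwise constraints:
    for each pair (i, j), 1 if the two instances share a class, else 0.
    The class identities themselves are discarded, which is what lets the
    similarity transfer to target tasks with unseen categories."""
    n = len(labels)
    return {(i, j): int(labels[i] == labels[j])
            for i in range(n) for j in range(i + 1, n)}
```

A similarity network trained on such pairs never sees class names, so it can score pairs from a new domain or task and drive the clustering network described in the abstract.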

Proceedings Article
15 Feb 2017
TL;DR: In this paper, a prediction difference analysis method for visualizing the response of a deep neural network to a specific input is presented and illustrated on both natural images (ImageNet) and medical images (MRI brain scans).
Abstract: This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input. When classifying images, the method highlights areas in a given input image that provide evidence for or against a certain class. It overcomes several shortcomings of previous methods and provides great additional insight into the decision-making process of classifiers. Making neural network decisions interpretable through visualization is important both to improve models and to accelerate the adoption of black-box classifiers in application areas such as medicine. We illustrate the method in experiments on natural images (ImageNet data), as well as medical images (MRI brain scans).

Journal ArticleDOI
Roberto Avanzi
TL;DR: It is argued that QARMA provides sufficient security margins within the constraints determined by the mentioned applications, while still achieving best-in-class latency. A technique to extend the length of the tweak, for instance by using a universal hash function, is also proposed and can be used to strengthen the security of QARMA.
Abstract: This paper introduces QARMA, a new family of lightweight tweakable block ciphers targeted at applications such as memory encryption, the generation of very short tags for hardware-assisted prevention of software exploitation, and the construction of keyed hash functions. QARMA is inspired by reflection ciphers such as PRINCE, to which it adds a tweaking input, and MANTIS. However, QARMA differs from previous reflector constructions in that it is a three-round Even-Mansour scheme instead of a FX-construction, and its middle permutation is non-involutory and keyed. We introduce and analyse a family of Almost MDS matrices defined over a ring with zero divisors that allows us to encode rotations in its operation while maintaining the minimal latency associated to {0, 1}-matrices. The purpose of all these design choices is to harden the cipher against various classes of attacks. We also describe new S-Box search heuristics aimed at minimising the critical path. QARMA exists in 64- and 128-bit block sizes, where block and tweak size are equal, and keys are twice as long as the blocks. We argue that QARMA provides sufficient security margins within the constraints determined by the mentioned applications, while still achieving best-in-class latency. Implementation results on a state-of-the-art manufacturing process are reported. Finally, we propose a technique to extend the length of the tweak by using, for instance, a universal hash function, which can also be used to strengthen the security of QARMA.

Proceedings ArticleDOI
Li Chih-Ping, Jiang Jing, Wanshi Chen, Tingfang Ji, John Edward Smee
12 Jun 2017
TL;DR: Theoretical queueing analysis and system-level simulations are provided to support these systems design choices, many of which have been considered as work items in the 3GPP Release 15 standards, which will be the first release for 5G NR.
Abstract: 5G New Radio (NR) is envisioned to support three broad categories of services: evolved mobile broadband (eMBB), ultra-reliable and low-latency communications (URLLC), and massive machine-type communications (mMTC). The URLLC services refer to future applications that require secure data communications from one end to another with ultra-high reliability and deadline-based low latency requirements. This type of quality-of-service is vastly different from that of traditional mobile broadband applications. In this paper, we discuss the systems design principles to enable the URLLC services in 5G. Theoretical queueing analysis and system-level simulations are provided to support these systems design choices, many of which have been considered as work items in the 3GPP Release 15 standards, which will be the first release for 5G NR.

Journal ArticleDOI
TL;DR: This work shows that symmetry alone implies near-optimal performance in any sequence of linear codes where the blocklengths are strictly increasing, the code rates converge, and the permutation group of each code is doubly transitive.
Abstract: We introduce a new approach to proving that a sequence of deterministic linear codes achieves capacity on an erasure channel under maximum a posteriori decoding. Rather than relying on the precise structure of the codes, our method exploits code symmetry. In particular, the technique applies to any sequence of linear codes where the blocklengths are strictly increasing, the code rates converge, and the permutation group of each code is doubly transitive. In other words, we show that symmetry alone implies near-optimal performance. An important consequence of this result is that a sequence of Reed–Muller codes with increasing blocklength and converging rate achieves capacity. This possibility has been suggested previously in the literature but it has only been proven for cases where the limiting code rate is 0 or 1. Moreover, these results extend naturally to all affine-invariant codes and, thus, to extended primitive narrow-sense BCH codes. This also resolves, in the affirmative, the existence question for capacity-achieving sequences of binary cyclic codes. The primary tools used in the proof are the sharp threshold property for symmetric monotone Boolean functions and the area theorem for extrinsic information transfer functions.

Journal ArticleDOI
TL;DR: The theoretical and experimental studies show that the proposed algorithm calculates the exact optimal L1-PCs with high frequency and achieves higher value in the L1-PC optimization metric than any known alternative algorithm of comparable computational cost.
Abstract: It was shown recently that the K L1-norm principal components (L1-PCs) of a real-valued data matrix X ∈ R^(D×N) (N data samples of D dimensions) can be exactly calculated with cost O(2^(NK)) or, when advantageous, O(N^(dK−K+1)), where d = rank(X) and K < d. In applications where X is large (e.g., "big" data of large N and/or "heavy" data of large d), these costs are prohibitive. In this paper, we present a novel suboptimal algorithm for the calculation of the K L1-PCs of X with cost O(ND·min{N, D} + N^2·K^2·(K^2 + d)), which is comparable to that of standard L2-norm PC analysis. Our theoretical and experimental studies show that the proposed algorithm calculates the exact optimal L1-PCs with high frequency and achieves higher value in the L1-PC optimization metric than any known alternative algorithm of comparable computational cost. The superiority of the calculated L1-PCs over standard L2-PCs (singular vectors) in characterizing potentially faulty data/measurements is demonstrated with experiments in data dimensionality reduction and disease diagnosis from genomic data.
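For K = 1, the exact computation that the abstract describes as prohibitive reduces to a search over sign vectors: the first L1-PC is X·b*/‖X·b*‖ where b* maximizes ‖X·b‖₂ over b ∈ {−1, +1}^N. Below is a brute-force sketch of that exact O(2^N) baseline for tiny N — not the paper's suboptimal algorithm.

```python
import itertools
import math

def l1_pc_exact(X):
    """Exact first L1-norm principal component of a D x N matrix X
    (given as a list of D rows) via exhaustive search over binary sign
    vectors b in {-1, +1}^N.  Cost O(2^N): only usable for small N."""
    D, N = len(X), len(X[0])
    best, best_b = -1.0, None
    for b in itertools.product((-1.0, 1.0), repeat=N):
        v = [sum(X[i][j] * b[j] for j in range(N)) for i in range(D)]
        norm = math.sqrt(sum(x * x for x in v))
        if norm > best:
            best, best_b = norm, b
    # Optimal unit-norm L1-PC: X b* / ||X b*||.
    return [sum(X[i][j] * best_b[j] for j in range(N)) / best for i in range(D)]
```

The suboptimal algorithm in the paper avoids this exponential sweep while, per the abstract, still recovering the exact optimum with high frequency.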

Proceedings Article
13 Feb 2017
TL;DR: This paper shows that competitive compression rates can be achieved by using a version of “soft weight-sharing” (Nowlan & Hinton, 1992) and achieves both quantization and pruning in one simple (re-)training procedure, exposing the relation between compression and the minimum description length (MDL) principle.
Abstract: The success of deep learning in numerous application domains created the desire to run and train them on mobile devices. This, however, conflicts with their computationally, memory- and energy-intensive nature, leading to a growing interest in compression. Recent work by Han et al. (2015a) proposes a pipeline that involves retraining, pruning and quantization of neural network weights, obtaining state-of-the-art compression rates. In this paper, we show that competitive compression rates can be achieved by using a version of “soft weight-sharing” (Nowlan & Hinton, 1992). Our method achieves both quantization and pruning in one simple (re-)training procedure. This point of view also exposes the relation between compression and the minimum description length (MDL) principle.

Journal ArticleDOI
Benqing Guo, Jun Chen, Lei Li, Haiyan Jin, Guoning Yang
TL;DR: A complementary noise-canceling CMOS low-noise amplifier (LNA) with enhanced linearity is proposed; an active shunt feedback input stage offers input matching, while extended input-matching bandwidth is acquired by a π-type matching network.
Abstract: A complementary noise-canceling CMOS low-noise amplifier (LNA) with enhanced linearity is proposed. An active shunt feedback input stage offers input matching, while extended input matching bandwidth is acquired by a π-type matching network. The intrinsic noise cancellation mechanism maintains acceptable noise figure (NF) with reduced power consumption due to the current reuse principle. Multiple complementary nMOS and pMOS configurations commonly restrain nonlinear components in individual stage of the LNA. Complementary multigated transistor architecture is further employed to nullify the third-order distortion of noise-canceling stage and compensate the second-order nonlinearity of that. High third-order input intercept point (IIP3) is thus obtained, while the second-order input intercept point (IIP2) is guaranteed by differential operation. Implemented in a 0.18-μm CMOS process, the experimental results show that the proposed LNA provides a maximum gain of 17.5 dB and an input 1-dB compression point (IP1 dB) of −3 dBm. An NF of 2.9–3.5 dB and an IIP3 of 10.6–14.3 dBm are obtained from 0.1 to 2 GHz, respectively. The circuit core only draws 9.7 mA from a 2.2 V supply.

Journal ArticleDOI
TL;DR: In this paper, the reliability of sintered Ag die attach for Si and SiC die has been studied on both thick film substrates for lower current power applications and direct bond copper (DBC) substrate for higher current power application.
Abstract: Low-temperature Ag sintering provides a lead-free die attachment method that is compatible with high-temperature (300 °C) power electronics applications. The reliability of sintered Ag die attach for Si and SiC die has been studied on both thick film substrates for lower current power applications and direct bond copper (DBC) substrates for higher current power applications. Pressureless and low-pressure sintering were evaluated. Sintering with low pressure yielded lower porosity (15–17%) versus pressureless sintering (∼30%). Reliability was evaluated with thermal aging (300 °C) and thermal cycling (–55 °C to + 300 °C) tests. Reliable Ag sintered die attach was achieved with assemblies having Ag-bearing surface finishes on both the die and the substrate. In contrast, the shear strength after 300 °C aging was greatly reduced when Au metallization was used either on the die or on substrate surface. In some cases, low-pressure sintering delayed the failure of the sintered Ag die attach to Au surfaces when aged at 300 °C compared to the pressureless sintering. The reliability with Pd-containing substrate metallizations was intermediate between Ag and Au metallizations. The thermal cycle reliability on DBC substrates was limited by failure at the Cu-to-alumina interface over the wide temperature range, while on the thick film substrates high adhesion was maintained after 1000 thermal cycles.

Patent
Gang Zhong, Feng Ge, Li Shen
03 Jan 2017
TL;DR: In this article, a device is configured to generate a primitive visibility stream that indicates whether respective primitives of a set of primitives are visible when rendered and, based on the primitive visibility stream, a draw call visibility stream that indicates whether respective draw calls include instructions for rendering visible primitives.
Abstract: This disclosure describes a device configured to generate a primitive visibility stream that indicates whether respective primitives of a set of primitives are visible when rendered and to generate, based on the primitive visibility stream, a draw call visibility stream that indicates whether respective draw calls for rendering the set of primitives include instructions for rendering visible primitives of the set of primitives. Based on the draw call visibility stream indicating that a respective draw call does not include instructions for rendering visible primitives, the device is further configured to drop the respective draw call. Based on the draw call visibility stream indicating that the respective draw call includes instructions for rendering visible primitives, the device is further configured to execute the respective draw call.
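The visibility-stream logic can be sketched with a toy encoding in which each draw call is a (start, count) range over the primitive stream — an illustration of the idea, not the actual GPU binning pass.

```python
def build_draw_call_visibility(primitive_visibility, draw_calls):
    """primitive_visibility: one boolean per primitive; draw_calls: list
    of (start, count) ranges into that stream.  A draw call is marked
    visible iff it covers at least one visible primitive."""
    return [any(primitive_visibility[start:start + count])
            for start, count in draw_calls]

def cull_draw_calls(draw_calls, dc_visibility):
    # Drop draw calls whose primitives are all invisible; the rest execute.
    return [dc for dc, visible in zip(draw_calls, dc_visibility) if visible]
```

Culling at draw-call granularity saves the command-processing overhead of submitting work whose every primitive would be rejected anyway.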

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A novel Unsupervised Cross-modal retrieval method based on Adversarial Learning, namely UCAL is proposed, which adds an additional regularization by introducing adversarial learning and introduces a modality classifier to predict the modality of a transformed feature.
Abstract: The core of existing cross-modal retrieval approaches is to close the gap between different modalities either by finding a maximally correlated subspace or by jointly learning a set of dictionaries. However, the statistical characteristics of the transformed features were never considered. Inspired by recent advances in adversarial learning and domain adaptation, we propose a novel Unsupervised Cross-modal retrieval method based on Adversarial Learning, namely UCAL. In addition to maximizing the correlations between modalities, we add an additional regularization by introducing adversarial learning. In particular, we introduce a modality classifier to predict the modality of a transformed feature. This can be viewed as a regularization on the statistical aspect of the feature transforms, which ensures that the transformed features are also statistically indistinguishable. Experiments on popular multimodal datasets show that UCAL achieves competitive performance compared to state of the art supervised cross-modal retrieval methods.

Patent
30 Mar 2017
TL;DR: A first apparatus communicating with a user equipment (UE) through a first active beam may determine that beam tracking is to be performed, including identifying a new beam for communication between the UE and the apparatus, and may perform beam tracking with the UE based on that determination.
Abstract: A first apparatus may communicate with a user equipment through a first active beam. The first apparatus may determine that beam tracking is to be performed with the UE, including identifying a new beam for communication between the UE and the apparatus. The first apparatus may perform beam tracking with the UE based on the determination that beam tracking is to be performed. The first apparatus may communicate with the UE through a second active beam based on the beam tracking.

Journal ArticleDOI
TL;DR: In this article, the authors focus on the uplink and show that even in the case of a finite number of base station antennas, LSFD yields a very large performance gain.
Abstract: A massive MIMO system entails a large number (tens or hundreds) of base station antennas serving a much smaller number of terminals. These systems demonstrate large gains in spectral and energy efficiency compared with the conventional MIMO technology. As the number of antennas grows, the performance of a massive MIMO system gets limited by the interference caused by pilot contamination. Ashikhmin and Marzetta proposed (under the name of Pilot Contamination Precoding) large scale fading precoding (LSFP) and large scale fading decoding (LSFD) based on limited cooperation between base stations. They showed that zero-forcing LSFP and LSFD eliminate pilot contamination entirely and lead to an infinite throughput as the number of antennas grows. In this paper, we focus on the uplink and show that even in the case of a finite number of base station antennas, LSFD yields a very large performance gain. In particular, one of our algorithms gives a more than 140-fold increase in the 5% outage data transmission rate! We show that the performance can be improved further by optimizing the transmission powers of the users. Finally, we present decentralized LSFD that requires limited cooperation only between neighboring cells.

Journal ArticleDOI
TL;DR: Small battery-powered transmitter and receiver devices are implemented to measure path loss under realistic assumptions and a hybrid electrostatic finite element method simulation model is presented that validates measurements and enables rapid and accurate characterization of future capacitive HBC systems.
Abstract: Objective: The purpose of this contribution is to estimate the path loss of capacitive human body communication (HBC) systems under practical conditions. Methods: Most prior work utilizes large grounded instruments to perform path loss measurements, resulting in overly optimistic path loss estimates for wearable HBC devices. In this paper, small battery-powered transmitter and receiver devices are implemented to measure path loss under realistic assumptions. A hybrid electrostatic finite element method simulation model is presented that validates measurements and enables rapid and accurate characterization of future capacitive HBC systems. Results: Measurements from form-factor-accurate prototypes reveal path loss results between 31.7 and 42.2 dB from 20 to 150 MHz. Simulation results matched measurements within 2.5 dB. Comparison measurements using a large grounded benchtop vector network analyzer (VNA) and a large battery-powered spectrum analyzer (SA) underestimate path loss by up to 33.6 and 8.2 dB, respectively. Measurements utilizing a VNA with baluns, or large battery-powered SAs with baluns, still underestimate path loss by up to 24.3 and 6.7 dB, respectively. Conclusion: Measurements of path loss in capacitive HBC systems strongly depend on instrumentation configurations. It is thus imperative to simulate or measure path loss in capacitive HBC systems utilizing realistic geometries and grounding configurations. Significance: HBC has a great potential for many emerging wearable devices and applications; accurate path loss estimation will improve system-level design leading to viable products.
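Since the reported path loss is simply the dB difference between transmitted and received power, the arithmetic can be sketched as below; the power levels are illustrative, chosen only to land inside the 31.7-42.2 dB range the paper reports.

```python
def path_loss_db(p_tx_dbm, p_rx_dbm):
    """Path loss in dB between transmitted and received power in dBm."""
    return p_tx_dbm - p_rx_dbm

# Illustrative: a -10 dBm HBC transmit level received at -51.7 dBm gives
# 41.7 dB of path loss, within the 31.7-42.2 dB range reported above.
loss = path_loss_db(-10.0, -51.7)
```

A grounded benchtop instrument that underestimates this loss by, say, 30 dB would report a link more than three orders of magnitude stronger than the wearable device actually sees, which is the paper's central caution.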

Patent
Muhammad Nazmul Islam, Bilal Sadiq, Tao Luo, Sundar Subramanian, Junyi Li
14 Apr 2017
TL;DR: In this article, a UE may receive a beam modification command that indicates a set of transmit beam indexes corresponding to the transmit beams of a base station, and each transmit beam index of the set of beam indexes may indicate at least a transmit direction for transmitting a transmit beam by the base station.
Abstract: A UE may receive a beam modification command that indicates a set of transmit beam indexes corresponding to a set of transmit beams of a base station, and each transmit beam index of the set of transmit beam indexes may indicate at least a transmit direction for transmitting a transmit beam by the base station. The UE may determine a set of receive beam indexes corresponding to receive beams of the UE based on the set of transmit beam indexes, and each receive beam index of the set of receive beam indexes may indicate at least a receive direction for receiving a receive beam by the UE. The UE may receive, from the base station, a signal through at least one receive beam corresponding to at least one receive beam index included in the set of receive beam indexes.
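The mapping step the abstract describes can be sketched as a lookup from base-station transmit-beam indexes to UE receive-beam indexes; the table values below are hypothetical, standing in for whatever correspondence the UE learned during an earlier beam sweep.

```python
# Hypothetical lookup from base-station transmit-beam index to the UE
# receive-beam index that performed best during a prior beam sweep.
tx_to_rx = {0: 3, 1: 3, 2: 5, 3: 7}

def receive_beam_set(tx_beam_indexes):
    """Derive the UE's receive-beam set from the transmit-beam indexes
    carried in a beam modification command."""
    return sorted({tx_to_rx[i] for i in tx_beam_indexes})

# Command indicating transmit beams 0-2 maps to receive beams {3, 5}
rx_set = receive_beam_set([0, 1, 2])
```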

Proceedings ArticleDOI
01 Feb 2017
TL;DR: CMOS PAs with wideband linearity/PAE can enable economical UE/AP devices to deliver 5G data-rates and sufficient element counts in the envisaged 5G phased-array modules can overcome path loss despite low Pout per PA, e.g., by combining RFICs in an AP.
Abstract: To meet rising demand, broadband-cellular-data providers are racing to deploy fifth generation (5G) mm-wave technology, e.g., rollout of some 28GHz-band services is intended in 2017 in the USA with ∼5/1Gb/s downlink/uplink targets. Even with 64-QAM signaling, this translates to an RF bandwidth (RFBW) as large as ∼800MHz. With ∼100m cells and a dense network of 5G access points (APs), potential manufacturing volumes make low-cost CMOS technology attractive for both user equipment (UE) and AP devices. However, the poor Pout and linearity of CMOS power amplifiers (PAs) are a bottleneck, as ∼10dB back-off is typical for meeting error-vector-magnitude (EVM) specifications. This limits communication range and PA power added efficiency (PAE), with wider RFBWs accentuating these issues further. On the other hand, sufficient element counts in the envisaged 5G phased-array modules can overcome path loss despite low Pout per PA, e.g., by combining RFICs in an AP. CMOS PAs with wideband linearity/PAE can therefore enable economical UE/AP devices to deliver 5G data-rates.
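The "element counts overcome low Pout" argument is dB arithmetic: an n-element array with one PA per element gains 10·log10(n) from summed radiated power and another 10·log10(n) from coherent combining. The element count and per-PA power below are illustrative, not figures from the paper.

```python
import math

def array_eirp_dbm(p_out_dbm_per_pa, n_elements):
    """EIRP of an n-element phased array with one PA per element:
    10*log10(n) from the summed radiated power plus another 10*log10(n)
    of coherent array gain."""
    return p_out_dbm_per_pa + 20 * math.log10(n_elements)

# Illustrative: 64 elements at a backed-off 4 dBm per CMOS PA
eirp = array_eirp_dbm(4.0, 64)  # about 40 dBm EIRP
```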

Journal ArticleDOI
TL;DR: In this paper, the authors present a comprehensive validation of high endurance of deeply scaled perpendicular magnetic tunnel junctions (pMTJs) in light of various potential spin-transfer torque magnetoresistive random-access memory (STT-MRAM) use cases.
Abstract: Magnetic tunnel junctions integrated for spin-transfer torque magnetoresistive random-access memory are by far the only known solid-state memory element that can realize a combination of fast read/write speed and high endurance. This paper presents a comprehensive validation of high endurance of deeply scaled perpendicular magnetic tunnel junctions (pMTJs) in light of various potential spin-transfer torque magnetoresistive random-access memory (STT-MRAM) use cases. A statistical study is conducted on the time-dependent dielectric breakdown (TDDB) properties and the dependence of the pMTJ lifetime on voltage, polarity, pulsewidth, duty cycle, and temperature. The experimental results coupled with TDDB models project more than 10^15 write cycles. Furthermore, this work reports system-level workload characterizations to understand the practical endurance requirements for realistic memory applications. The results suggest that the cycling endurance of STT-MRAM is “practically unlimited,” which exceeds the requirements of various memory use cases, including high-performance applications such as CPU level-2 and level-3 caches.

Patent
Feng Zou, Jianle Chen, Marta Karczewicz, Xiang Li, Hsiao-Chiang Chuang, Wei-Jung Chien
04 May 2017
TL;DR: In this paper, predictors for the motion vectors (MVs) of the affine motion model of the current block of video data are derived from the MVs of the affine motion model of a neighboring block, and the current block's MVs are recovered from those predictors and decoded differences.
Abstract: An example method includes obtaining, for a current block of video data, values of motion vectors (MVs) of an affine motion model of a neighboring block of video data; deriving, from the values of the MVs of the affine motion model of the neighboring block, values of predictors for MVs of an affine motion model of the current block; decoding, from a video bitstream, a representation of differences between the values of the MVs of the affine motion model for the current block and the values of the predictors; determining the values of the MVs of the affine motion model for the current block from the values of the predictors and the decoded differences; determining, based on the determined values of the MVs of the affine motion model for the current block, a predictor block of video data; and reconstructing the current block based on the predictor block.
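The derivation step can be sketched with the standard 4-parameter affine model (control-point MVs at a block's top-left and top-right corners): the neighbor's affine field is simply evaluated at the current block's control-point positions, and decoding adds the signaled differences. The block geometry, MV values, and differences below are illustrative, not taken from the patent.

```python
def affine_mv(v0, v1, w, x, y):
    """Evaluate a 4-parameter affine motion field, defined by control-point
    MVs v0 (top-left) and v1 (top-right) of a block of width w, at position
    (x, y) relative to the block's top-left corner."""
    ax = (v1[0] - v0[0]) / w
    ay = (v1[1] - v0[1]) / w
    return (ax * x - ay * y + v0[0], ay * x + ax * y + v0[1])

# Neighboring block: width 16, with illustrative control-point MVs
v0n, v1n = (4.0, 2.0), (8.0, 2.0)

# Predictors for the current block's control points, assumed to sit at
# (0, 16) and (16, 16) in the neighbor's coordinates (block directly below)
pred0 = affine_mv(v0n, v1n, 16, 0, 16)
pred1 = affine_mv(v0n, v1n, 16, 16, 16)

# Decoding then adds the differences parsed from the bitstream, e.g.:
diff0, diff1 = (0.5, -0.25), (0.0, 0.25)
mv0 = (pred0[0] + diff0[0], pred0[1] + diff0[1])
mv1 = (pred1[0] + diff1[0], pred1[1] + diff1[1])
```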

Journal ArticleDOI
TL;DR: This paper establishes the relevance of directional precoding structures as a low-complexity and robust solution to meet the data rate demands of single-user MIMO systems relative to the more complex and less robust eigen-mode-based precoding structures, and proposes a simple class of directional schedulers that offers a low-complexity and yet approximately fair separation plane in the user space.
Abstract: Given the cost and complexity associated with the deployment of a large number of radio frequency (RF) chains in millimeter wave (mmW) multi-input multi-output (MIMO) systems, this paper addresses the question of network efficiency considerations for hybrid precoding. We first establish the relevance of directional precoding structures as a low-complexity and robust solution to meet the data rate demands of single-user MIMO systems relative to the more complex and less robust eigen-mode-based precoding structures. Key to the relevance of the directional precoding structures is the sparsity of the mmW channel coupled with higher antenna dimensionality affordable due to smaller wavelengths. We then leverage the directional structure of the channel to propose a simple class of directional schedulers that offers a low-complexity and yet approximately fair separation plane in the user space. We then compare the performance of single-user precoding schemes with multi-user precoding schemes and show that from network efficiency considerations, it would be more worthwhile to expend the RF chain resource on multi-user transmissions.
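Why channel sparsity makes directional precoding nearly lossless can be sketched with a uniform linear array: for a single-path channel, the best beam in a sufficiently fine steering codebook captures almost all of the channel energy. The array size, codebook resolution, and path angle below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def steering_vector(n, theta, d=0.5):
    """Unit-norm array response of an n-element uniform linear array with
    spacing d (in wavelengths) toward angle theta (radians)."""
    k = np.arange(n)
    return np.exp(2j * np.pi * d * k * np.sin(theta)) / np.sqrt(n)

# A directional RF precoder picks the codebook beam best aligned with the
# (sparse, here single-path) mmW channel.
n_ant = 32
codebook = [steering_vector(n_ant, t)
            for t in np.linspace(-np.pi / 2, np.pi / 2, 128)]
h = steering_vector(n_ant, 0.3)  # illustrative single-path channel
f_best = max(codebook, key=lambda f: abs(np.vdot(f, h)))
gain = abs(np.vdot(f_best, h))   # close to 1 with a fine enough codebook
```

Eigen-mode precoding would recover the last fraction of this gain, but at the cost of full channel estimation per RF chain, which is the complexity/robustness trade the paper examines.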

Patent
03 Apr 2017
TL;DR: In this article, a beam adjustment request is sent to the base station to indicate an index associated with the selected beam, which is used to determine a beam index of a beam in the first set of beams based on the request and at least one resource.
Abstract: One apparatus may be configured to detect a set of beams from a base station. The apparatus may be further configured to select a beam of the set of beams. The apparatus may be further configured to determine at least one resource based on the selected beam. The apparatus may be further configured to transmit, on the at least one determined resource, a beam adjustment request to the base station. The request may indicate an index associated with the selected beam. Another apparatus may be configured to transmit a first set of beams. The other apparatus may be further configured to receive a beam adjustment request on at least one resource. The other apparatus may be further configured to determine a beam index of a beam in the first set of beams based on the request and the at least one resource.
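The last step, recovering a beam index from the resource the request arrived on, presupposes some agreed resource-to-beam convention; the offset scheme below is a hypothetical stand-in for whatever mapping the patent's apparatus uses.

```python
# Hypothetical convention: the uplink resource granted for a beam
# adjustment request encodes the selected beam index as an offset, so the
# base station can recover the index from the resource the request used.
BASE_RESOURCE = 100

def resource_for_beam(beam_index):
    """UE side: resource on which to send the beam adjustment request."""
    return BASE_RESOURCE + beam_index

def beam_from_resource(resource):
    """Base-station side: beam index implied by the request's resource."""
    return resource - BASE_RESOURCE

recovered = beam_from_resource(resource_for_beam(5))
```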