
Showing papers by "Dinesh Rajan published in 2017"


Journal ArticleDOI
TL;DR: The low decoding complexity and fine input granularity make it feasible to efficiently implement the proposed capacity-approaching PC-LDPC convolutional code and the associated trellis-based QC-MAP decoder in next-generation ultra-high data rate mobile systems.
Abstract: In this paper, we develop a new capacity-approaching code, namely, the parallel-concatenated (PC) Low-Density Parity-Check (LDPC) convolutional code, which is based on the parallel concatenation of trellis-based quasi-cyclic LDPC (TQC-LDPC) convolutional codes. The proposed PC-LDPC convolutional code can be derived from any QC-LDPC block code by introducing a trellis-based convolutional dependency into the code. The capacity-approaching PC-LDPC convolutional codes are encoded with a parallel-concatenated trellis-based QC recursive systematic convolutional encoder (QC-RSC encoder), which is also proposed in this paper. The proposed PC-LDPC convolutional code and the associated encoder retain a fine input granularity on the order of the lifting factor of the underlying block code. We also describe the corresponding trellis-based QC maximum a posteriori probability (QC-MAP) decoder that efficiently decodes the PC-LDPC convolutional code. Performance and hardware implementation results show that, for a given bit-error rate (BER), signal-to-noise ratio, and data rate, the PC-LDPC convolutional codes with the QC-MAP decoder have half the complexity of conventional QC-LDPC block codes and LDPC convolutional codes. Moreover, for a given BER, complexity, and data rate, the PC-LDPC convolutional code with the QC-MAP decoder outperforms conventional QC-LDPC block codes by more than 0.5 dB and approaches the Shannon capacity limit with a gap smaller than 1.25 dB. This low decoding complexity and fine granularity make it feasible to efficiently implement the proposed capacity-approaching PC-LDPC convolutional code and the associated trellis-based QC-MAP decoder in next-generation ultra-high data rate mobile systems.
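The scheme above builds on parallel concatenation of recursive systematic convolutional (RSC) encoders. The sketch below is a generic, turbo-style illustration of that structure only: two identical rate-1/2 RSC encoders with assumed (7,5) octal generators and a random interleaver, producing one systematic and two parity streams. It is not the paper's QC-RSC encoder (which is derived from a QC-LDPC base matrix) or its QC-MAP decoder.

```python
# Minimal sketch of a rate-1/3 parallel-concatenated (turbo-style) encoder built
# from two identical recursive systematic convolutional (RSC) encoders and an
# interleaver. Generator polynomials (7,5)_8 and the random interleaver are
# assumptions for this example, NOT the paper's QC-RSC construction.
import random

def rsc_encode(bits, fb=0b111, ff=0b101, memory=2):
    """Recursive systematic convolutional encoding.

    Returns (systematic, parity); fb/ff are the feedback and feedforward
    generator polynomials (here (7,5) in octal, memory 2)."""
    state = 0
    parity = []
    for b in bits:
        fb_taps = state & (fb >> 1)              # feedback taps on the delay elements
        d = b ^ (bin(fb_taps).count("1") & 1)    # input to the shift register
        reg = (d << memory) | state              # register contents including d
        parity.append(bin(reg & ff).count("1") & 1)  # feedforward combination
        state = (reg >> 1) & ((1 << memory) - 1)     # shift the register
    return list(bits), parity

def pc_encode(bits, seed=0):
    """Parallel concatenation: systematic bits plus parity from two RSC encoders,
    the second encoder fed with an interleaved copy of the input."""
    pi = list(range(len(bits)))
    random.Random(seed).shuffle(pi)              # interleaver (assumed random here)
    sys_bits, p1 = rsc_encode(bits)
    _, p2 = rsc_encode([bits[i] for i in pi])
    return sys_bits, p1, p2                      # overall rate ~1/3

if __name__ == "__main__":
    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    s, p1, p2 = pc_encode(msg)
    print("systematic:", s)
    print("parity 1:  ", p1)
    print("parity 2:  ", p2)
```

A full implementation along the lines of the paper would replace the random interleaver with the quasi-cyclic structure of the underlying block code and terminate the component trellises.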

11 citations


Proceedings ArticleDOI
TL;DR: Experimental results indicate that one CCD camera and the presented method adequately capture displacements with accuracy and precision similar to those of strain gages.
Abstract: Structural impairments can be detected by monitoring and analyzing dynamic structural responses. In this paper, accurate dynamic deformations of multiple vibrating structures are estimated using one off-the-shelf CCD (charge-coupled device) camera together with a novel target configuration and computation algorithm. In this method, a CCD camera, which can be mounted on either an adjacent vibrating structure or a stationary support, is used to track global movements of a vibrating structure. The method utilizes subpixel feature extraction, a calibration procedure based on Zhang's method, and a novel target configuration that eliminates the need for a stationary support for the recording camera. Two scaled models of a traffic signal structure were designed and constructed to implement the proposed method in laboratory conditions. The camera is mounted on one of the structures while facing targets on the other structure. The image data from the camera are processed to extract the structural response both of the structure in the field of view (FOV) of the camera and of the structure on which the camera is mounted. Data from strain gages mounted on these structures provide measurement benchmarks for the dynamic structural responses. Experimental results indicate that one CCD camera and the presented method adequately capture displacements with accuracy and precision similar to those of strain gages. In practice, a camera-based method may be less expensive and more easily installed.
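For readers unfamiliar with the processing chain, the sketch below shows a generic OpenCV pipeline for the two ingredients named in the abstract: Zhang's calibration from checkerboard images and subpixel corner tracking of a planar target between frames. The board size, file paths, and pixel-to-millimeter scaling are placeholders, and the paper's dual-structure target configuration (which removes the need for a stationary camera mount) is not reproduced here.

```python
# Generic camera-based displacement sketch: Zhang's calibration + subpixel target
# tracking with OpenCV. Paths, board geometry, and the planar scale are assumptions.
import glob
import cv2
import numpy as np

BOARD = (9, 6)        # inner corners of the checkerboard target (assumed)
SQUARE_MM = 25.0      # checkerboard square size in mm (assumed)
CRIT = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)

# --- Zhang's calibration from a set of checkerboard images -------------------
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts = [], []
for fname in glob.glob("calib/*.png"):                  # placeholder path
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, BOARD)
    if not ok:
        continue
    img_pts.append(cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), CRIT))
    obj_pts.append(objp)

_, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)

# --- Track the target between two frames and report its displacement ---------
def target_corners(frame):
    """Locate the target's corners in the undistorted frame, with subpixel refinement."""
    und = cv2.undistort(frame, K, dist)
    ok, corners = cv2.findChessboardCorners(und, BOARD)
    assert ok, "target not found"
    return cv2.cornerSubPix(und, corners, (11, 11), (-1, -1), CRIT)

f0 = cv2.imread("frames/frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder
f1 = cv2.imread("frames/frame_001.png", cv2.IMREAD_GRAYSCALE)   # placeholder
c0 = target_corners(f0).reshape(-1, 2)
c1 = target_corners(f1).reshape(-1, 2)
d_px = (c1 - c0).mean(axis=0)                          # mean image-plane shift
mm_per_px = SQUARE_MM / np.linalg.norm(c0[1] - c0[0])  # crude in-plane scale from one square
print("mean displacement (mm):", d_px * mm_per_px)
```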

10 citations


Proceedings ArticleDOI
21 Nov 2017
TL;DR: This work investigates the accuracy of characterizing large-scale fading using crowdsourced data in the presence of phone measurement shortcomings such as sample averaging, imprecise quantization, and non-uniform sampling; analysis of the quality of smartphone measurements in LTE indicates that the inferred radio propagation models are comparable with models obtained using advanced equipment.
Abstract: Conducting in-field performance analysis for wireless carrier coverage and capacity evaluation is extremely costly in terms of equipment, manpower, and time. Hence, a growing number of opportunities exist for crowdsourcing via smart applications, firmware, and cellular standards. These facilities offer carriers feedback about user-perceived wireless channel quality. Crowdsourcing provides the ability to rapidly collect feedback with dense levels of penetration using client smartphones. However, mobile phones often fail to capture the fidelity and high sampling rate of the more advanced equipment (e.g., a channel scanner) used in drive testing for analysis of propagation characteristics. In this work, we study the impact of various effects induced by user equipment (UE) when sampling signal quality. These shortcomings include averaging over multiple samples, imprecise quantization, and non-uniform and/or less frequent channel sampling. We specifically investigate the accuracy of characterizing large-scale fading using crowdsourced data in the presence of the aforementioned phone measurement shortcomings. To do so, we conduct extensive in-field experiments across heterogeneous devices and environments to empirically quantify the channel characteristics perceived by phone measurements. Analysis of the quality of smartphone measurements in LTE indicates that the inferred radio propagation models are comparable with models obtained using advanced equipment.
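Large-scale fading is conventionally characterized with a log-distance path loss model, so a minimal sketch of that fit, and of how one UE shortcoming from the abstract (coarse quantization) perturbs it, is shown below. The synthetic data, reference distance, and quantization step are assumptions for illustration, not the paper's measurements or processing chain.

```python
# Minimal sketch: fit PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma to signal samples,
# the standard way large-scale fading is characterized. Synthetic data only.
import numpy as np

def fit_log_distance(d_m, pl_db, d0=1.0):
    """Least-squares fit of the log-distance model.

    Returns (PL(d0), path loss exponent n, shadowing std sigma in dB)."""
    x = 10.0 * np.log10(np.asarray(d_m) / d0)
    A = np.column_stack([np.ones_like(x), x])
    (pl0, n), *_ = np.linalg.lstsq(A, np.asarray(pl_db), rcond=None)
    sigma = np.std(pl_db - (pl0 + n * x))        # lognormal shadowing spread
    return pl0, n, sigma

if __name__ == "__main__":
    # Synthetic "crowdsourced" samples: assumed true n = 3.2, sigma = 6 dB
    rng = np.random.default_rng(0)
    d = rng.uniform(50, 2000, size=500)          # UE-to-cell distances in meters
    pl = 40 + 10 * 3.2 * np.log10(d) + rng.normal(0, 6, size=d.size)
    pl_q = np.round(pl)                          # emulate coarse 1 dB UE quantization
    print("fine-grained fit :", fit_log_distance(d, pl))
    print("quantized fit    :", fit_log_distance(d, pl_q))
```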

5 citations


Patent
06 Jan 2017
TL;DR: This patent presents an apparatus and method for determining cell coverage in a region with reduced in-field propagation measurements by: obtaining geographical features of the region; predicting the number of measurements required to accurately characterize its path loss; determining the path loss prediction accuracy of wardriving and crowdsourcing by oversampling a suburban and a downtown region with cell measurements that comprise signal strength and Global Positioning System (GPS) coordinates; and using statistical learning to build a relationship between these geographical features and the required number of measurements.
Abstract: The present invention includes an apparatus and method for determining cell coverage in a region with reduced in-field propagation measurements comprising: obtaining geographical features of the region; predicting the number of measurements required to accurately characterize its path loss; determining the path loss prediction accuracy of wardriving and crowdsourcing by oversampling a suburban and a downtown region from cell measurements that comprise signal strength and global positioning system coordinates; and using statistical learning to build a relationship between these geographical features and the measurements required, thereby reducing the number of measurements needed to determine path loss accuracy.
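As a rough illustration of the pre-processing implied by the claim, the sketch below converts raw (signal strength, GPS) samples into (distance-to-cell, path loss) pairs using a haversine distance and an assumed transmit EIRP; the cell coordinates and power value are hypothetical and not taken from the patent.

```python
# Hypothetical pre-processing sketch: turn (lat, lon, signal strength) samples into
# (distance-to-cell, path loss) pairs. Cell location and EIRP are placeholders.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

CELL = (32.84, -96.78)   # hypothetical cell-site coordinates
EIRP_DBM = 46.0          # hypothetical transmit EIRP

def to_distance_pathloss(samples):
    """samples: iterable of (lat, lon, rx_dbm) -> list of (distance_m, path_loss_db)."""
    out = []
    for lat, lon, rx in samples:
        d = haversine_m(lat, lon, CELL[0], CELL[1])
        out.append((d, EIRP_DBM - rx))   # path loss = transmit power - received power
    return out

print(to_distance_pathloss([(32.845, -96.77, -95.0), (32.86, -96.75, -110.0)]))
```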

4 citations


Proceedings ArticleDOI
21 Nov 2017
TL;DR: This work uses geographical features of a region to reduce in-field propagation experimentation by predicting the number of measurements required to accurately characterize its path loss, and uses statistical learning to build a relationship between these geographical features and the measurements required.
Abstract: Ensuring cellular coverage is an important and costly concern for carriers due to the expense of in-field experimentation (i.e., drive testing). With the ubiquity of smartphones, apps, and social media, there has been an explosion of crowdsourcing to understand a vast array of trends and topics at minimal cost to the organization. While cellular carriers might seek to replace the expensive act of drive testing with nearly cost-free crowdsourcing, questions remain as to: (i) the accuracy of crowdsourcing, considering the lack of user control, (ii) the detection of when drive testing might still be required, and (iii) the quantification of how many additional in-field measurements to perform for a certain accuracy level. In this work, we use geographical features of a region to reduce in-field propagation experimentation by predicting the number of measurements required to accurately characterize its path loss. In particular, we study the path loss prediction accuracy of drive testing and crowdsourcing by taking millions of measurements in a suburban and a downtown region. We then use statistical learning to build a relationship between these geographical features and the measurements required. In doing so, we find that the number of measurements collected to achieve a certain path loss accuracy over the entire region can be reduced by up to 58% in a high-density drive testing scenario.
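The "statistical learning" step can be pictured as a regression from per-region geographical features to the number of measurements needed for a target path-loss accuracy. The sketch below uses a random-forest regressor on synthetic data with hypothetical features (building density, mean building height, road density, foliage fraction); the paper's actual feature set and learning method are not specified in this abstract.

```python
# Hypothetical sketch: regress the number of measurements needed for a target
# path-loss accuracy on per-region geographical features. Synthetic data only;
# feature names and the random-forest model are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_regions = 200

# Hypothetical per-region features.
X = np.column_stack([
    rng.uniform(0, 1, n_regions),     # building density
    rng.uniform(3, 60, n_regions),    # mean building height (m)
    rng.uniform(0, 1, n_regions),     # road density
    rng.uniform(0, 1, n_regions),     # foliage fraction
])
# Synthetic target: measurements required to hit a fixed path-loss RMSE.
y = 200 + 1500 * X[:, 0] + 10 * X[:, 1] + 300 * X[:, 3] + rng.normal(0, 50, n_regions)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (measurements):", mean_absolute_error(y_te, model.predict(X_te)))
print("feature importances:", model.feature_importances_.round(2))
```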