Author

Yong Liao

Bio: Yong Liao is an academic researcher at Chongqing University. His work focuses on topics including codebooks and channel state information. He has an h-index of 2 and has co-authored 5 publications that have received 21 citations.

Papers
Journal ArticleDOI
TL;DR: A deep learning (DL)-based MIMO-OFDM channel estimation algorithm that adapts to the characteristics of fast time-varying channels in high-mobility scenarios by training the learning network offline.
Abstract: Channel estimation is very challenging for multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems in high-mobility environments with non-stationary channel characteristics. To handle this problem, we propose a deep learning (DL)-based MIMO-OFDM channel estimation algorithm. By training the learning network offline, the channel state information (CSI) generated from the training samples can be effectively exploited to adapt to the characteristics of fast time-varying channels in high-mobility scenarios. Simulation results show that the proposed DL-based algorithm is more robust than conventional algorithms in high-mobility MIMO-OFDM scenarios.
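The offline-training idea in the abstract can be sketched with a simple linear module standing in for the deep network: train an estimator on simulated (noisy observation, true channel) pairs offline, then apply it to fresh observations online. All parameters and the fading model below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline phase: simulate training pairs (noisy pilot observations -> true
# channels) for a correlated-fading model across N subcarriers.
N, M = 8, 5000                       # subcarriers per block, training samples
C = 0.9 ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
Lc = np.linalg.cholesky(C)           # imposes frequency correlation

def channels(k):
    return Lc @ (rng.standard_normal((N, k)) + 1j * rng.standard_normal((N, k))) / np.sqrt(2)

def noisy(H, sigma=0.3):
    return H + sigma * (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape)) / np.sqrt(2)

H_train = channels(M)
Y_train = noisy(H_train)             # plays the role of raw pilot estimates

# Offline training: ridge-regularized least squares for W minimizing
# ||H_train - W @ Y_train||^2, i.e. a learned linear MMSE-like estimator.
A = Y_train @ Y_train.conj().T + 1e-3 * np.eye(N)
W = (H_train @ Y_train.conj().T) @ np.linalg.inv(A)

# Online phase: apply the trained estimator to fresh observations.
H_test = channels(1000)
Y_test = noisy(H_test)
H_hat = W @ Y_test

mse_learned = np.mean(np.abs(H_hat - H_test) ** 2)
mse_raw = np.mean(np.abs(Y_test - H_test) ** 2)
print(mse_learned, mse_raw)          # the learned estimator reduces the MSE
```

The same offline/online split carries over when the linear module is replaced by a deep network trained on channel realizations of the target mobility scenario.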

55 citations

Journal ArticleDOI
TL;DR: An efficient low-complexity receiver is proposed for PUCCH formats 2a/2b; it is also practical for other PUCCH formats in which all RSs are known a priori.
Abstract: In 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) systems, the physical uplink control channel (PUCCH) incorporates reference signals (RSs) that allow the eNodeB to perform channel estimation. However, the RSs in PUCCH formats 2a/2b depend on the 10th symbol transmitted by the UE. As a result, the receiver cannot know all of the transmitted RSs a priori and therefore cannot perform channel estimation aided directly by all of the RSs. In this work, an efficient low-complexity receiver is proposed for PUCCH formats 2a/2b. Moreover, it is also practical for other PUCCH formats in which all RSs are known a priori. As shown in our simulation results, remarkable performance gains are obtained for all PUCCH formats.

11 citations

Journal ArticleDOI
TL;DR: A small number of the rotation factors p of a ZC sequence can be calculated off-line and stored in memory; all remaining rotation factors are then obtained by table look-up and a few simple computations, so the improved ZC-sequence DFT is achieved with less computational effort at a small cost in memory.
Abstract: In the LTE system, the Zadoff-Chu (ZC) sequence is used for the generation and detection of the physical random access channel (PRACH) preamble sequence. The key step of PRACH baseband signal generation is to perform a discrete Fourier transform (DFT) of the ZC sequence, which is characterized by the root index u and a rotation factor p. The traditional algorithm computes the DFT formula on-line, which has high computational complexity and makes it hard to meet the real-time requirements of the LTE system. In this paper, we mainly improve the calculation of the rotation factor p in the DFT formula. Based on a theoretical analysis of the DFT property of the ZC sequence, the symmetry property and recurrence relations of its rotation factor p are derived. Hence, a small number of the rotation factors p of a ZC sequence can be calculated off-line and stored in memory, and all of the other rotation factors can then be obtained by table look-up and a few simple computations. As a result, the improved ZC-sequence DFT is achieved with less computational effort at a small cost in memory.
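The basic look-up-table idea can be sketched as follows. This shows only the precomputed rotation-factor table (the paper's symmetry and recurrence refinements further shrink the table); the root index is an illustrative choice.

```python
import numpy as np

# Zadoff-Chu sequence with root index u and length N_ZC, the LTE PRACH
# preamble length for formats 0-3 (u = 25 is an illustrative choice).
N_ZC, u = 839, 25
n = np.arange(N_ZC)
x_u = np.exp(-1j * np.pi * u * n * (n + 1) / N_ZC)

# Traditional on-line approach: evaluate the DFT directly.
X_ref = np.fft.fft(x_u)

# Table-based variant: the rotation factors exp(-2j*pi*p/N_ZC) take only
# N_ZC distinct values, so they can be computed off-line once and indexed
# as table[(n*k) mod N_ZC] while the DFT sum is accumulated.
table = np.exp(-2j * np.pi * np.arange(N_ZC) / N_ZC)
X_table = np.array([np.dot(x_u, table[(k * n) % N_ZC]) for k in range(N_ZC)])

assert np.allclose(X_ref, X_table)   # both approaches agree
```

The trade-off is exactly the one the abstract describes: one table of N_ZC complex values in memory replaces repeated on-line evaluation of the complex exponential.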
Journal ArticleDOI
TL;DR: This paper introduces a new PMI feedback scheme for wideband sources based on the coherent signal subspace method, which estimates the PMI from an approximately coherent combination of the subcarriers obtained through a focusing matrix.
Abstract: Double codebook structures, which can track both long-term and short-term channel state information, have been proposed and applied in LTE-Advanced. In traditional incoherent estimation, precoding matrix indicator (PMI) estimation is performed on each individual subcarrier, and some form of averaging procedure then combines the results of the individual narrow-band processing. As a result, a large deviation in the estimate for any subcarrier leads to a larger deviation in the final result, so incoherent PMI estimation performs poorly at low signal-to-noise ratio. This paper introduces a new PMI feedback scheme for wideband sources based on the coherent signal subspace method. The new technique estimates the PMI from an approximately coherent combination of the subcarriers obtained through a focusing matrix. Simulation results show that the coherent estimation scheme achieves better throughput than traditional incoherent estimation.
Journal ArticleDOI
TL;DR: Simulation results show that the proposed schemes can effectively improve the performance of double codebook estimation with outdated CSI.
Abstract: Because the channel is time-variant, outdated channel state information (CSI) largely reduces the performance of a double codebook beamforming system. This paper studies the two codebooks independently and proposes an adaptive feedback-period adjustment strategy for the long-term codebook and a prediction estimation algorithm for the short-term codebook. The former dynamically adjusts the feedback period to improve feedback timeliness and the utilization of uplink feedback resources. The latter adopts an improved Kalman prediction algorithm to predict the CSI. Finally, simulation results show that the proposed schemes can effectively improve the performance of double codebook estimation with outdated CSI.
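A minimal sketch of Kalman-based CSI prediction, assuming a first-order Gauss-Markov fading model with made-up parameters (the paper's improved algorithm and channel model are not reproduced here): a scalar Kalman filter predicts the next channel coefficient instead of feeding back the outdated observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative first-order Gauss-Markov fading model:
# h[t] = a*h[t-1] + w[t], observed as z[t] = h[t] + v[t].
a, q, r, T = 0.98, 1 - 0.98 ** 2, 0.05, 2000
h = np.zeros(T, dtype=complex)
for t in range(1, T):
    h[t] = a * h[t - 1] + np.sqrt(q / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
z = h + np.sqrt(r / 2) * (rng.standard_normal(T) + 1j * rng.standard_normal(T))

# Scalar Kalman filter used as a one-step CSI predictor.
h_hat, P = 0j, 1.0
pred_err, outdated_err = [], []
for t in range(1, T):
    h_pred, P_pred = a * h_hat, a * a * P + q       # predict h[t] from the past
    pred_err.append(abs(h[t] - h_pred) ** 2)
    outdated_err.append(abs(h[t] - z[t - 1]) ** 2)  # feeding back outdated CSI
    K = P_pred / (P_pred + r)                       # update with new observation
    h_hat = h_pred + K * (z[t] - h_pred)
    P = (1 - K) * P_pred

print(np.mean(pred_err), np.mean(outdated_err))
```

Under this model the predicted CSI tracks the channel more closely than the raw outdated observation, which is the effect the short-term codebook prediction exploits.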

Cited by
Journal ArticleDOI
TL;DR: Resource allocation and link adaptation in Long-Term Evolution (LTE) and LTE Advanced are discussed with focus on the location and formatting of the pertinent reference and control signals, as well as the decisions they enable.
Abstract: Resource allocation and link adaptation in Long-Term Evolution (LTE) and LTE Advanced are discussed with focus on the location and formatting of the pertinent reference and control signals, as well as the decisions they enable. In particular, after reviewing the units for resource allocation and the time-frequency resource grid, the enabled resource-allocation modes and their purposes are reviewed. A detailed description of the way the resource allocations are encoded under these different modes is also given. Similarly, the various methods of link adaptation, including power control and rate control, both through the use of adaptive modulation and coding and hybrid automatic repeat request, are reviewed. The control signaling encoding for link adaptation is provided in detail, as is the encoding of channel state feedback for the purposes of link adaptation and resource allocation.

129 citations

Journal ArticleDOI
TL;DR: In this article, the authors investigated the mean square error (MSE) performance of machine learning-based channel estimation in orthogonal frequency division multiplexing (OFDM) systems and derived a clear analytical relation between the size of the training data and performance.
Abstract: Recently, machine learning-based channel estimation has attracted much attention. The performance of machine learning-based estimation has been validated by simulation experiments. However, little attention has been paid to the theoretical performance analysis. In this paper, we investigate the mean square error (MSE) performance of machine learning-based estimation. Hypothesis testing is employed to analyze its MSE upper bound. Furthermore, we build a statistical model for hypothesis testing, which holds when the linear learning module with a low input dimension is used in machine learning-based channel estimation, and derive a clear analytical relation between the size of the training data and performance. Then, we simulate the machine learning-based channel estimation in orthogonal frequency division multiplexing (OFDM) systems to verify our analysis results. Finally, the design considerations for the situation where only limited training data is available are discussed. In this situation, our analysis results can be applied to assess the performance and support the design of machine learning-based channel estimation.

17 citations

Journal ArticleDOI
TL;DR: This article develops important insights derived from the physical radio frequency (RF) channel properties and presents a comprehensive overview on the application of DL for accurately estimating channel state information (CSI) with low overhead.
Abstract: A new wave of wireless services, including virtual reality, autonomous driving and Internet of Things, is driving the design of new generations of wireless systems to deliver ultra-high data rates, massive numbers of connections and ultra-low latency. Massive multiple-input multiple-output (MIMO) is one of the critical underlying technologies that allow future wireless networks to meet these service needs. This article discusses the application of deep learning (DL) for massive MIMO channel estimation in wireless networks by integrating the underlying characteristics of channels in future high-speed cellular deployment. We develop important insights derived from the physical radio frequency (RF) channel properties and present a comprehensive overview on the application of DL for accurately estimating channel state information (CSI) with low overhead. We provide examples of successful DL application in CSI estimation for massive MIMO wireless systems and highlight several promising directions for future research.

16 citations

Journal ArticleDOI
TL;DR: In this paper, a machine learning-based channel estimation for orthogonal frequency division multiplexing (OFDM) systems is proposed, in which the training of the estimator is performed online.
Abstract: In this paper, we devise a highly efficient machine learning-based channel estimation for orthogonal frequency division multiplexing (OFDM) systems, in which the training of the estimator is performed online. A simple learning module is employed for the proposed learning-based estimator. The training process is thus much faster and the required training data is reduced significantly. Besides, a training data construction approach utilizing least square (LS) estimation results is proposed so that the training data can be collected during the data transmission. The feasibility of this novel construction approach is verified by theoretical analysis and simulations. Based on this construction approach, two alternative training data generation schemes are proposed. One scheme transmits additional block pilot symbols to create training data, while the other scheme adopts a decision-directed method and does not require extra pilot overhead. Simulation results show the robustness of the proposed channel estimation method. Furthermore, the proposed method shows better adaptation to practical imperfections compared with the conventional minimum mean-square error (MMSE) channel estimation. It outperforms the existing machine learning-based channel estimation techniques under varying channel conditions.
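The least square (LS) estimates that the paper collects as training data can be illustrated in one line per subcarrier: with known pilot symbols, the LS channel estimate simply divides the received pilot by the transmitted one. The pilot pattern and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 64                                     # OFDM subcarriers (illustrative)
pilots = rng.choice([1.0, -1.0], N)        # known BPSK pilot symbols
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
y = h * pilots + noise                     # received pilots (frequency domain)

# LS estimate per subcarrier: divide out the known pilot symbol.
h_ls = y / pilots

mse = np.mean(np.abs(h_ls - h) ** 2)
print(mse)   # close to the noise variance of 0.01
```

These noisy but cheap estimates are what make online training data collection possible during data transmission: the LS outputs serve as inputs while refined estimates serve as targets.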

13 citations

Journal ArticleDOI
TL;DR: This article focuses its attention on four promising physical layer concepts foreseen to dominate next-generation communications, namely massive multiple-input multiple-output systems, sophisticated multi-carrier waveform designs, reconfigurable intelligent surface-empowered communications, and physical layer security.
Abstract: Deep learning (DL) has proven its unprecedented success in diverse fields such as computer vision, natural language processing, and speech recognition by its strong representation ability and ease of computation. As we move forward to a thoroughly intelligent society with 6G wireless networks, new applications and use cases have been emerging with stringent requirements for next-generation wireless communications. Therefore, recent studies have focused on the potential of DL approaches in satisfying these rigorous needs and overcoming the deficiencies of existing model-based techniques. The main objective of this article is to unveil the state-of-the-art advancements in the field of DL-based physical layer methods to pave the way for fascinating applications of 6G. In particular, we have focused our attention on four promising physical layer concepts foreseen to dominate next-generation communications, namely massive multiple-input multiple-output systems, sophisticated multi-carrier waveform designs, reconfigurable intelligent surface-empowered communications, and physical layer security. We examine up-to-date developments in DL-based techniques, provide comparisons with state-of-the-art methods, and introduce a comprehensive guide for future directions. We also present an overview of the underlying concepts of DL, along with the theoretical background of well-known DL techniques. Furthermore, this article provides programming examples for a number of DL techniques and the implementation of a DL-based multiple-input multiple-output system by sharing user-friendly code snippets, which might be useful for interested readers.

11 citations