Author

Yo-Seb Jeon

Other affiliations: Intel, Princeton University, Samsung
Bio: Yo-Seb Jeon is an academic researcher from Pohang University of Science and Technology. The author has contributed to research in topics including MIMO and computer science, has an h-index of 12, and has co-authored 45 publications receiving 495 citations. Previous affiliations of Yo-Seb Jeon include Intel and Princeton University.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: In this paper, a low-complexity near-maximum-likelihood-detection (near-MLD) algorithm called one-bit sphere decoding is proposed for an uplink massive MIMO system with one-bit analog-to-digital converters.
Abstract: This paper presents a low-complexity near-maximum-likelihood-detection (near-MLD) algorithm called one-bit sphere decoding for an uplink massive multiple-input multiple-output system with one-bit analog-to-digital converters. The idea of the proposed algorithm is to estimate the transmitted symbol vector sent by uplink users (a codeword vector) by searching over a sphere, which contains a collection of codeword vectors close to the received signal vector at the base station in terms of a weighted Hamming distance. To reduce the computational complexity for the construction of the sphere, the proposed algorithm divides the received signal vector into multiple subvectors each with a reduced dimension. Then, it generates multiple spheres in parallel, where each sphere is centered at the subvector and contains a list of subcodeword vectors. The detection performance of the proposed algorithm is also analyzed by characterizing the probability that the proposed algorithm performs worse than the MLD. The analysis shows how the dimension of each sphere and the size of the subcodeword list are related to the performance-complexity tradeoff achieved by the proposed algorithm. Simulation results demonstrate that the proposed algorithm achieves near-MLD performance, while reducing the computational complexity compared to the existing MLD method.
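As a rough illustration of the weighted-Hamming-distance criterion at the core of the algorithm, the Python sketch below performs an exhaustive search over candidate symbol vectors rather than the paper's sphere/subvector decomposition. The QPSK constellation, channel model, quantizer, and uniform weights are assumptions made for this toy example, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): detection of a symbol
# vector from one-bit quantized observations using a weighted Hamming
# distance. The channel H, QPSK constellation, quantizer, and uniform
# weights are placeholder assumptions for a toy example.
import itertools
import numpy as np

def one_bit_quantize(y):
    """One-bit ADC: keep only the sign of the real and imaginary parts."""
    return np.sign(y.real) + 1j * np.sign(y.imag)

def weighted_hamming_distance(r, q, w):
    """Weighted count of sign mismatches between two one-bit vectors."""
    mismatch_re = (np.sign(r.real) != np.sign(q.real)).astype(float)
    mismatch_im = (np.sign(r.imag) != np.sign(q.imag)).astype(float)
    return np.sum(w * (mismatch_re + mismatch_im))

def detect_exhaustive(r, H, constellation, w):
    """Exhaustive search over all codeword vectors; one-bit sphere decoding
    accelerates this by searching small spheres around sub-vectors instead."""
    num_users = H.shape[1]
    best, best_dist = None, np.inf
    for cand in itertools.product(constellation, repeat=num_users):
        x = np.array(cand)
        q = one_bit_quantize(H @ x)        # noiseless quantized hypothesis
        d = weighted_hamming_distance(r, q, w)
        if d < best_dist:
            best, best_dist = x, d
    return best

# Toy usage: 2 users, 8 receive antennas, QPSK symbols.
rng = np.random.default_rng(0)
H = (rng.standard_normal((8, 2)) + 1j * rng.standard_normal((8, 2))) / np.sqrt(2)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x_true = qpsk[rng.integers(0, 4, size=2)]
noise = 0.1 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
r = one_bit_quantize(H @ x_true + noise)
w = np.ones(8)                             # placeholder: uniform weights
print("detected:", detect_exhaustive(r, H, qpsk, w), "sent:", x_true)
```

In the actual one-bit sphere decoding algorithm, this exhaustive search is replaced by per-subvector spheres that keep only a short list of sub-codeword candidates, which is where the complexity saving comes from.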

109 citations

Journal ArticleDOI
TL;DR: Simulations demonstrate the detection error reduction of the proposed framework compared to conventional detection techniques that are based on channel estimation.
Abstract: This paper considers a multiple-input multiple-output system with low-resolution analog-to-digital converters (ADCs). In this system, we propose a novel communication framework that is inspired by supervised learning. The key idea of the proposed framework is to learn the nonlinear input–output system, formed by the concatenation of a wireless channel and a quantization function used at the ADCs for data detection. In this framework, a conventional channel estimation process is replaced by a system learning process, in which the conditional probability mass functions (PMFs) of the nonlinear system are empirically learned by sending the repetitions of all possible data signals as pilot signals. Then, the subsequent data detection process is performed based on the empirical conditional PMFs obtained during the system learning. To reduce both the training overhead and the detection complexity, we also develop a supervised-learning-aided successive-interference-cancellation method. In this method, a data signal vector is divided into two subvectors with reduced dimensions. Then, these two subvectors are successively detected based on the conditional PMFs that are learned using artificial noise signals and an estimated channel. For the case of 1-bit ADCs, we derive an analytical expression for the vector error rate of the proposed framework under perfect channel knowledge at the receiver. Simulations demonstrate the detection error reduction of the proposed framework compared to conventional detection techniques that are based on channel estimation.
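A hedged sketch of the system-learning idea follows: for every candidate symbol vector, the toy Python below estimates per-antenna probabilities of a positive one-bit output from repeated pilot transmissions, then detects by empirical likelihood. The paper learns full conditional PMFs and adds a successive-interference-cancellation stage; the per-bit independence approximation, the real-valued channel model, and all function names here are simplifying assumptions.

```python
# Minimal sketch (assumptions, not the paper's exact method): learn per-antenna
# Bernoulli parameters of the one-bit output conditioned on each candidate
# symbol vector from repeated pilots, then detect by empirical likelihood.
import itertools
import numpy as np

rng = np.random.default_rng(1)

def quantize(y):
    return np.sign(y)            # one-bit ADC on each real observation

def channel(H, x, noise_std):
    return H @ x + noise_std * rng.standard_normal(H.shape[0])

# Real-valued toy model: 2 users with BPSK symbols, 16 receive dimensions.
num_users, num_rx, noise_std, reps = 2, 16, 0.3, 50
H = rng.standard_normal((num_rx, num_users))
candidates = [np.array(c) for c in itertools.product([-1.0, 1.0], repeat=num_users)]

# Training: for every candidate vector, send `reps` pilot repetitions and
# record the empirical probability that each output bit equals +1.
p_plus = np.zeros((len(candidates), num_rx))
for k, x in enumerate(candidates):
    outputs = np.stack([quantize(channel(H, x, noise_std)) for _ in range(reps)])
    p_plus[k] = np.clip((outputs > 0).mean(axis=0), 1e-3, 1 - 1e-3)

def detect(r):
    """Pick the candidate maximizing the empirical log-likelihood of r."""
    bits = (r > 0).astype(float)
    loglik = bits @ np.log(p_plus.T) + (1 - bits) @ np.log(1 - p_plus.T)
    return candidates[int(np.argmax(loglik))]

x_true = candidates[rng.integers(len(candidates))]
r = quantize(channel(H, x_true, noise_std))
print("detected:", detect(r), "sent:", x_true)
```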

93 citations

Journal ArticleDOI
TL;DR: A compressive sensing approach is presented for federated learning over massive multiple-input multiple-output communication systems in which a central server equipped with a massive antenna array communicates with the wireless devices.
Abstract: Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices, each with its own local training data set. In this paper, we present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems in which the central server equipped with a massive antenna array communicates with the wireless devices. One major challenge in system design is to accurately reconstruct, at the central server, the local gradient vectors that are computed and sent by the wireless devices. To overcome this challenge, we first establish a transmission strategy to construct sparse transmitted signals from the local gradient vectors at the devices. We then propose a compressive sensing algorithm enabling the server to iteratively find the linear minimum-mean-square-error (LMMSE) estimate of the transmitted signal by exploiting its sparsity. We also derive an analytical threshold for the residual error at each iteration, to design the stopping criterion of the proposed algorithm. We show that for a sparse transmitted signal, the proposed algorithm requires lower computational complexity than the conventional LMMSE approach. Simulation results demonstrate that the presented approach outperforms conventional linear beamforming approaches and reduces the performance gap between federated learning and centralized learning with perfect reconstruction.
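To make the transmit/reconstruct pipeline concrete, the sketch below shows top-k sparsification of a local gradient followed by a one-shot LMMSE recovery from random linear measurements. This is only a stand-in under assumed parameters: the paper's algorithm refines the LMMSE estimate iteratively by exploiting the sparsity of the transmitted signal and uses the massive MIMO uplink as the measurement process.

```python
# Minimal sketch (my assumptions, not the paper's algorithm): top-k sparsify a
# local gradient at the device, compress it with a random measurement matrix,
# and recover a one-shot LMMSE estimate at the server.
import numpy as np

rng = np.random.default_rng(2)

def sparsify_top_k(g, k):
    """Keep the k largest-magnitude gradient entries, zero the rest."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

def lmmse_recover(y, A, prior_var, noise_var):
    """One-shot LMMSE estimate of x from y = A x + n."""
    m = A.shape[0]
    cov_y = prior_var * (A @ A.T) + noise_var * np.eye(m)
    return prior_var * A.T @ np.linalg.solve(cov_y, y)

n, m, k = 1000, 200, 50                 # gradient size, measurements, sparsity (assumed)
g_local = rng.standard_normal(n)        # stand-in for a device's local gradient
x = sparsify_top_k(g_local, k)          # sparse transmitted signal
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x + 0.01 * rng.standard_normal(m)   # noisy linear measurements at the server
x_hat = lmmse_recover(y, A, prior_var=k / n, noise_var=1e-4)
print("reconstruction NMSE:", np.sum((x - x_hat) ** 2) / np.sum(x ** 2))
```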

53 citations

Proceedings ArticleDOI
01 May 2017
TL;DR: In this article, the authors propose a novel detection framework that performs data symbol detection without explicitly knowing channel state information at a receiver. They also provide an analytical expression for the symbol-vector-error probability of MIMO systems with one-bit ADCs.
Abstract: This paper considers a multiple-input-multiple-output (MIMO) system with low-resolution analog-to-digital converters (ADCs). In this system, we propose a novel detection framework that performs data symbol detection without explicitly knowing channel state information at a receiver. The underlying idea of the proposed framework is to exploit supervised learning. Specifically, during channel training, the proposed approach sends a sequence of data symbols as pilots so that the receiver learns a nonlinear function that is determined by both a channel matrix and a quantization function of the ADCs. During data transmission, the receiver uses the learned nonlinear function to detect which data symbols were transmitted. In this context, we propose two blind detection methods to determine the nonlinear function from the training-data set. We also provide an analytical expression for the symbol-vector-error probability of the MIMO systems with one-bit ADCs when employing the proposed framework. Simulations demonstrate the performance improvement of the proposed framework compared to existing detection techniques.

53 citations

Posted Content
TL;DR: In this paper, a low-complexity near-maximum-likelihood-detection (near-MLD) algorithm called one-bit-sphere-decoding was proposed for an uplink massive MIMO system with one-bit analog-to-digital converters.
Abstract: This paper presents a low-complexity near-maximum-likelihood-detection (near-MLD) algorithm called one-bit-sphere-decoding for an uplink massive multiple-input multiple-output (MIMO) system with one-bit analog-to-digital converters (ADCs). The idea of the proposed algorithm is to estimate the transmitted symbol vector sent by uplink users (a codeword vector) by searching over a sphere, which contains a collection of codeword vectors close to the received signal vector at the base station in terms of a weighted Hamming distance. To reduce the computational complexity for the construction of the sphere, the proposed algorithm divides the received signal vector into multiple sub-vectors each with reduced dimension. Then, it generates multiple spheres in parallel, where each sphere is centered at the sub-vector and contains a list of sub-codeword vectors. The detection performance of the proposed algorithm is also analyzed by characterizing the probability that the proposed algorithm performs worse than the MLD. The analysis shows how the dimension of each sphere and the size of the sub-codeword list are related to the performance-complexity tradeoff achieved by the proposed algorithm. Simulation results demonstrate that the proposed algorithm achieves near-MLD performance, while reducing the computational complexity compared to the existing MLD method.

49 citations


Cited by
Journal ArticleDOI
TL;DR: Convergence of Probability Measures by P. Billingsley is a classic monograph on the weak convergence of probability measures and a standard reference on the topic.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4". 117s.

5,689 citations

Journal ArticleDOI
TL;DR: In this article, an end-to-end reconstruction task was proposed to jointly optimize transmitter and receiver components in a single process, which can be extended to networks of multiple transmitters and receivers.
Abstract: We present and discuss several novel applications of deep learning for the physical layer. By interpreting a communications system as an autoencoder, we develop a fundamental new way to think about communications system design as an end-to-end reconstruction task that seeks to jointly optimize transmitter and receiver components in a single process. We show how this idea can be extended to networks of multiple transmitters and receivers and present the concept of radio transformer networks as a means to incorporate expert domain knowledge in the machine learning model. Lastly, we demonstrate the application of convolutional neural networks on raw IQ samples for modulation classification which achieves competitive accuracy with respect to traditional schemes relying on expert features. This paper is concluded with a discussion of open challenges and areas for future investigation.
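The autoencoder interpretation can be illustrated with a small end-to-end toy in PyTorch (a sketch under assumed layer sizes, training SNR, and noise scaling, not the authors' code): a dense encoder maps each message to n channel uses under a power constraint, additive Gaussian noise stands in for the channel, and a dense decoder is trained jointly with cross-entropy.

```python
# Toy autoencoder link (a sketch under assumed sizes/SNR, not the authors' code):
# encoder -> power normalization -> AWGN channel -> decoder, trained end to end.
import torch
import torch.nn as nn

M, n, snr_db = 16, 7, 7.0                  # messages, channel uses, training SNR (assumed)
noise_std = 10 ** (-snr_db / 20)           # simplified noise scale for unit signal power

encoder = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, n))
decoder = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, M))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def transmit(msgs):
    """Map message indices to n channel uses with an average power constraint."""
    x = encoder(nn.functional.one_hot(msgs, M).float())
    return n ** 0.5 * x / x.norm(dim=1, keepdim=True)

for step in range(1000):
    msgs = torch.randint(0, M, (256,))
    y = transmit(msgs) + noise_std * torch.randn(256, n)   # AWGN channel
    loss = loss_fn(decoder(y), msgs)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    msgs = torch.randint(0, M, (10000,))
    y = transmit(msgs) + noise_std * torch.randn(10000, n)
    bler = (decoder(y).argmax(dim=1) != msgs).float().mean().item()
    print("block error rate at training SNR:", bler)
```

The design point highlighted by the paper is that the whole chain is differentiable, so transmitter and receiver are optimized jointly for the end-to-end metric instead of being designed block by block.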

1,879 citations

Journal ArticleDOI
TL;DR: This paper builds, trains, and runs a complete communications system solely composed of NNs using unsynchronized off-the-shelf software-defined radios and open-source deep learning software libraries, and proposes a two-step learning procedure based on the idea of transfer learning that circumvents the challenges of training such a system over actual channels.
Abstract: End-to-end learning of communications systems is a fascinating novel concept that has so far only been validated by simulations for block-based transmissions. It allows learning of transmitter and receiver implementations as deep neural networks (NNs) that are optimized for an arbitrary differentiable end-to-end performance metric, e.g., block error rate (BLER). In this paper, we demonstrate that over-the-air transmissions are possible: We build, train, and run a complete communications system solely composed of NNs using unsynchronized off-the-shelf software-defined radios and open-source deep learning software libraries. We extend the existing ideas toward continuous data transmission, which eases their current restriction to short block lengths but also entails the issue of receiver synchronization. We overcome this problem by introducing a frame synchronization module based on another NN. A comparison of the BLER performance of the “learned” system with that of a practical baseline shows competitive performance close to 1 dB, even without extensive hyperparameter tuning. We identify several practical challenges of training such a system over actual channels, in particular, the missing channel gradient, and propose a two-step learning procedure based on the idea of transfer learning that circumvents this issue.

757 citations

Journal ArticleDOI
TL;DR: A comprehensive survey of the applications of DL algorithms for different network layers is performed, including physical layer modulation/coding, data link layer access control/resource allocation, and routing layer path search and traffic balancing.
Abstract: As a promising machine learning tool to handle the accurate pattern recognition from complex raw data, deep learning (DL) is becoming a powerful method to add intelligence to wireless networks with large-scale topology and complex radio conditions. DL uses many neural network layers to achieve a brain-like acute feature extraction from high-dimensional raw data. It can be used to find the network dynamics (such as hotspots, interference distribution, congestion points, traffic bottlenecks, spectrum availability, etc.) based on the analysis of a large number of network parameters (such as delay, loss rate, link signal-to-noise ratio, etc.). Therefore, DL can analyze extremely complex wireless networks with many nodes and dynamic link quality. This paper performs a comprehensive survey of the applications of DL algorithms for different network layers, including physical layer modulation/coding, data link layer access control/resource allocation, and routing layer path search and traffic balancing. The use of DL to enhance other network functions, such as network security, sensing data compression, etc., is also discussed. Moreover, the challenging unsolved research issues in this field are discussed in detail, which represent the future research trends of DL-based wireless networks. This paper can help the readers to deeply understand the state-of-the-art of the DL-based wireless network designs, and select interesting unsolved issues to pursue in their research.

580 citations

Journal ArticleDOI
TL;DR: In this paper, the authors explain how the first chapter of the massive MIMO research saga has come to an end, while the story has just begun, and outline five new research directions related to massive antenna arrays.

556 citations