scispace - formally typeset
Author

Ke He

Bio: Ke He is an academic researcher from Guangzhou University. The author has contributed to research in the topics of computer science and tree data structures. The author has an h-index of 4 and has co-authored 6 publications receiving 80 citations.

Papers
Journal ArticleDOI
TL;DR: This paper proposes the joint use of a maximum likelihood detector (MLD) and a deep convolutional neural network (DCNN), where MLD is used to produce an initial detection result and DCNN improves the detection by exploiting the local correlation to suppress the interference.
Abstract: In this paper, we investigate the classical detection problem for vehicle networks with multiple antennas, by considering practical communication scenarios where the interfering signals are correlated over time or frequency. In such cases, the conventional detector requires estimating the joint distribution of the interfering signals, which imposes a huge computational burden. To tackle this issue, we propose the joint use of a maximum likelihood detector (MLD) and a deep convolutional neural network (DCNN), where the MLD produces an initial detection result and the DCNN improves the detection by exploiting the local correlation to suppress the interference. Furthermore, the DCNN is enhanced by devising the loss function based on the cross-entropy of the detection, which helps to suppress the interfering signals while forcing the residual interference to approach a Gaussian distribution. Simulation results are presented to verify the effectiveness of the proposed detector compared to the conventional one. The trained model and source code for this work are available at https://github.com/skypitcher/project_dcnnmld .
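The first stage of the proposed two-stage pipeline, the maximum likelihood detector, can be sketched in a few lines. The following is a minimal brute-force MLD over a BPSK constellation (an illustrative sketch only; the DCNN refinement stage and the paper's actual model are not reproduced here):

```python
import numpy as np
from itertools import product

def ml_detect(y, H, constellation=(-1.0, 1.0)):
    """Brute-force maximum likelihood detection:
    pick the symbol vector x minimizing ||y - H x||^2."""
    n_tx = H.shape[1]
    best_x, best_cost = None, np.inf
    for cand in product(constellation, repeat=n_tx):
        x = np.array(cand)
        cost = np.sum((y - H @ x) ** 2)
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x

# Noiseless sanity check: the transmitted vector should be recovered.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))          # 4 receive, 3 transmit antennas
x_true = np.array([1.0, -1.0, 1.0])
y = H @ x_true
print(ml_detect(y, H))  # → [ 1. -1.  1.]
```

The exhaustive search is exponential in the number of transmit antennas, which is exactly the complexity issue the paper's DCNN stage is designed to offset.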

56 citations

Journal ArticleDOI
TL;DR: This paper aims to devise a generalized maximum likelihood (ML) estimator to robustly detect signals with unknown noise statistics in multiple-input multiple-output (MIMO) systems by proposing a novel ML detection framework driven by an unsupervised learning approach.
Abstract: This paper aims to devise a generalized maximum likelihood (ML) estimator to robustly detect signals with unknown noise statistics in multiple-input multiple-output (MIMO) systems. In practice, there is little or even no statistical knowledge of the system noise, which in many cases is non-Gaussian, impulsive, and not analyzable. Existing detection methods have mainly focused on specific noise models and are not robust when the noise statistics are unknown. To tackle this issue, we propose a novel ML detection framework to effectively recover the desired signal. Our framework is a fully probabilistic one that can efficiently approximate the unknown noise distribution through a normalizing flow. Importantly, this framework is driven by an unsupervised learning approach, where only the noise samples are required. To reduce the computational complexity, we further present a low-complexity version of the framework, which utilizes an initial estimation to reduce the search space. Simulation results show that our framework outperforms other existing algorithms in terms of bit error rate (BER) in non-analytical noise environments, while it can reach the ML performance bound in analytical noise environments.
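The core of the framework is to replace the Gaussian log-likelihood in the ML metric with a learned noise density. As an illustration only, the sketch below swaps the normalizing flow for a fixed Laplacian density, to show how the detector changes once the noise log-likelihood is no longer the squared Euclidean norm (function names and parameters here are hypothetical, not the paper's):

```python
import numpy as np
from itertools import product

def laplace_loglik(r, scale=1.0):
    """Log-likelihood of a residual r under i.i.d. Laplacian noise.
    In the paper's framework this role is played by a normalizing
    flow trained on noise samples; a fixed density stands in here."""
    return -np.sum(np.abs(r)) / scale

def generalized_ml_detect(y, H, loglik, constellation=(-1.0, 1.0)):
    """Pick the symbol vector maximizing the noise log-likelihood
    of the residual y - H x (generalized ML detection)."""
    n_tx = H.shape[1]
    best_x, best_ll = None, -np.inf
    for cand in product(constellation, repeat=n_tx):
        x = np.array(cand)
        ll = loglik(y - H @ x)
        if ll > best_ll:
            best_x, best_ll = x, ll
    return best_x

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 2))
x_true = np.array([-1.0, 1.0])
y = H @ x_true + 0.05 * rng.laplace(size=4)  # mildly impulsive noise
print(generalized_ml_detect(y, H, laplace_loglik))
```

Substituting a different `loglik` changes the detector without touching the search, which is the modularity the unsupervised flow-based approach exploits.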

53 citations

Journal ArticleDOI
TL;DR: This work proposes a memory-efficient pruning strategy by leveraging the combinatorial nature of the GSM signal structure and proposes an efficient memory-bounded maximum likelihood (ML) search (EM-MLS) algorithm that can achieve the optimal bit error rate (BER) performance, while its memory size can be bounded.
Abstract: We investigate the optimal signal detection problem in large-scale multiple-input multiple-output (MIMO) systems with the generalized spatial modulation (GSM) scheme, which can be formulated as a closest lattice point search (CLPS). To identify invalid signals, an efficient pruning strategy is needed while searching on the GSM decision tree. However, the existing algorithms have exponential complexity, making them infeasible in large-scale GSM-MIMO systems. In order to tackle this problem, we propose a memory-efficient pruning strategy by leveraging the combinatorial nature of the GSM signal structure. As a result, the required memory size is quadratic in the number of transmit antennas. We further propose an efficient memory-bounded maximum likelihood (ML) search (EM-MLS) algorithm by jointly employing the proposed pruning strategy and the memory-bounded best-first algorithm. Theoretical and simulation results show that our proposed algorithm can achieve the optimal bit error rate (BER) performance while its memory size remains bounded. Moreover, the expected time complexity decreases exponentially with increasing signal-to-noise ratio (SNR) as well as the system’s excess degrees of freedom, and it often converges to quadratic time under practical scenarios.
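The memory-bounded best-first search at the heart of EM-MLS can be sketched as a priority-queue traversal of the detection tree. The sketch below is a generic best-first closest-lattice-point search with a capped frontier, under assumed simplifications (plain BPSK tree, no GSM-specific pruning; the frontier cap stands in for the paper's memory bound):

```python
import heapq
import numpy as np

def best_first_clps(y, H, constellation=(-1.0, 1.0), max_heap=64):
    """Best-first closest-lattice-point search with a bounded frontier.
    H is QR-factored so the cost accumulates one symbol per tree level;
    when the frontier exceeds max_heap entries, the worst are dropped
    (exactness holds whenever the bound is never actually hit)."""
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    heap = [(0.0, n, ())]  # (accumulated cost, level, partial symbols)
    while heap:
        cost, level, partial = heapq.heappop(heap)
        if level == 0:
            # Costs only grow, so the first completed path is the best.
            return np.array(partial)
        i = level - 1
        for s in constellation:
            x_part = (s,) + partial          # symbols for levels i..n-1
            r = z[i] - R[i, i:] @ np.array(x_part)
            heapq.heappush(heap, (cost + r * r, i, x_part))
        if len(heap) > max_heap:             # enforce the memory bound
            heap = heapq.nsmallest(max_heap, heap)
            heapq.heapify(heap)
    return None

rng = np.random.default_rng(2)
H = rng.standard_normal((5, 3))
x_true = np.array([1.0, 1.0, -1.0])
y = H @ x_true + 0.01 * rng.standard_normal(5)
print(best_first_clps(y, H))
```

The behavior the abstract describes, with runtime shrinking as SNR grows, shows up here too: at high SNR the correct path dominates the frontier early and few nodes are ever expanded.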

27 citations

Posted Content
TL;DR: In this article, a multi-exit-based federated edge learning (ME-FEEL) framework is proposed, where the deep model can be divided into several sub-models with different depths, each outputting predictions from the exit of the corresponding sub-model.
Abstract: In this paper, we investigate how to deploy computational intelligence and deep learning (DL) in edge-enabled industrial IoT networks. In this system, the IoT devices can collaboratively train a shared model without compromising data privacy. However, due to limited resources in industrial IoT networks, including computational power, bandwidth, and channel state, it is challenging for many devices to accomplish local training and upload weights to the edge server in time. To address this issue, we propose a novel multi-exit-based federated edge learning (ME-FEEL) framework, where the deep model can be divided into several sub-models with different depths, each outputting predictions from the exit of the corresponding sub-model. In this way, devices with insufficient computational power can choose the earlier exits and avoid training the complete model, which reduces computational latency and enables as many devices as possible to participate in aggregation within a latency threshold. Moreover, we propose a greedy exit-selection and bandwidth-allocation algorithm to maximize the total number of exits in each communication round. Simulation experiments are conducted on the classical Fashion-MNIST dataset under a non-independent and identically distributed (non-IID) setting, and the results show that the proposed strategy outperforms conventional FL. In particular, the proposed ME-FEEL can achieve an accuracy gain of up to 32.7% in industrial IoT networks with severely limited resources.
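The greedy exit-selection idea can be sketched as follows: each device picks the deepest exit whose estimated training latency still fits the round deadline, so as many exits as possible are trained under the budget. All device speeds and exit costs below are made-up illustrative numbers, not values from the paper:

```python
def select_exits(device_speed, exit_cost, deadline):
    """For each device, greedily choose the deepest exit whose
    training time (cost / speed) still meets the round deadline.
    Returns a list of chosen exit indices (-1 = cannot participate)."""
    choices = []
    for speed in device_speed:
        chosen = -1
        for k, cost in enumerate(exit_cost):  # exits ordered shallow -> deep
            if cost / speed <= deadline:
                chosen = k                     # deepest feasible so far
        choices.append(chosen)
    return choices

# Hypothetical setup: 4 devices of varying speed, 3 exits of rising cost.
speeds = [1.0, 0.5, 2.0, 0.2]
costs = [1.0, 2.0, 4.0]
print(select_exits(speeds, costs, deadline=3.0))  # → [1, 0, 2, -1]
```

Even the slowest participating device contributes gradients for the shallow sub-model, which is how the framework keeps stragglers inside the aggregation round.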

27 citations

Journal ArticleDOI
TL;DR: In this paper, a multi-tier cache-aided relaying network is studied, where the destination is randomly located in the network and requests files from the source with the help of a cache-aided base station (BS) and relays.
Abstract: This paper studies a multi-tier cache-aided relaying network, where the destination $D$ is randomly located in the network and requests files from the source $S$ with the help of a cache-aided base station (BS) and $N$ relays. In this system, the multi-tier architecture imposes a significant impact on collaborative caching and file delivery, which poses a major challenge to the system performance evaluation and optimization. To address this problem, we first evaluate the system performance by deriving an analytical outage probability expression, fully taking into account the random location of the destination and the different file delivery modes related to the file caching status. We then perform an asymptotic analysis on the system outage probability at high signal-to-noise ratio (SNR), to reveal some important and meaningful insights into the network. We further optimize the caching strategies among the relays and the BS to improve the network outage probability. Simulations are performed to show the effectiveness of the derived analytical and asymptotic outage probabilities for the proposed caching strategy. In particular, the proposed caching is superior to conventional caching strategies such as the most popular content (MPC) and equal probability caching (EPC) strategies.
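The two baseline strategies mentioned above are easy to quantify: under a Zipf popularity profile, most-popular-content (MPC) caching fills the cache with the top-ranked files, while equal-probability caching (EPC) spreads cache slots uniformly over all files. A small sketch of their cache-hit probabilities (the Zipf exponent and sizes are illustrative, not taken from the paper):

```python
import numpy as np

def zipf_popularity(n_files, alpha=0.8):
    """Zipf request probabilities over n_files ranked files."""
    p = 1.0 / np.arange(1, n_files + 1) ** alpha
    return p / p.sum()

def hit_prob_mpc(pop, cache_size):
    """MPC: cache the cache_size most popular files."""
    return pop[:cache_size].sum()

def hit_prob_epc(pop, cache_size):
    """EPC: every file is cached with equal probability,
    so the hit probability is simply cache_size / n_files."""
    return cache_size / len(pop)

pop = zipf_popularity(100)
print(hit_prob_mpc(pop, 10), hit_prob_epc(pop, 10))
```

With a skewed popularity profile MPC beats EPC by a wide margin, which is why both serve as the standard baselines that an optimized caching placement must outperform.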

20 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: This textbook covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and sequential data, concluding with a discussion of combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2016
Digital Design and Computer Architecture

246 citations

Journal ArticleDOI
TL;DR: This article first optimizes the bandwidth allocation by presenting three schemes for the second-hop wireless relaying, then optimizes the computation offloading based on the discrete particle swarm optimization algorithm, and presents three relay selection criteria that take into account the tradeoff between system performance and implementation complexity.
Abstract: In this article, we investigate a communication and computation problem for industrial Internet of Things (IoT) networks, where $K$ relays can help accomplish the computation tasks with the assistance of $M$ computational access points. In industrial IoT networks, latency and energy consumption are two important metrics of interest for measuring the system performance. To enhance the system performance, a three-hierarchical optimization framework is proposed to reduce the latency and energy consumption, which involves bandwidth allocation, offloading, and relay selection. Specifically, we first optimize the bandwidth allocation by presenting three schemes for the second-hop wireless relaying. We then optimize the computation offloading based on the discrete particle swarm optimization algorithm. We further present three relay selection criteria that take into account the tradeoff between system performance and implementation complexity. Simulation results are finally presented to show the effectiveness of the proposed three-hierarchical optimization framework.
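The relay-selection step described above boils down to scoring each relay on a latency/energy tradeoff. One simple instance of such a criterion is a weighted argmin (the cost model, weights, and numbers below are hypothetical stand-ins, not the article's three criteria):

```python
def select_relay(latency, energy, w=0.5):
    """Pick the relay minimizing a weighted latency/energy cost.
    latency, energy: per-relay lists; w trades latency vs energy."""
    costs = [w * t + (1 - w) * e for t, e in zip(latency, energy)]
    return min(range(len(costs)), key=costs.__getitem__)

# Hypothetical relays: per-relay latency (ms) and energy (mJ).
lat = [12.0, 9.0, 15.0]
eng = [3.0, 8.0, 2.0]
print(select_relay(lat, eng, w=0.5))  # → 0
```

Sweeping `w` from 0 to 1 moves the choice from the most energy-efficient relay to the lowest-latency one, which is the performance/complexity tradeoff the three criteria are designed to navigate.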

122 citations

Journal ArticleDOI
TL;DR: In this paper, the authors investigated mobile edge computing (MEC) networks for the intelligent Internet of Things (IoT), where multiple users have computational tasks assisted by multiple computational access points (CAPs). By offloading some tasks to the CAPs, the system performance can be improved through reduced latency and energy consumption.

81 citations

Journal ArticleDOI
Yinghao Guo, Zichao Zhao, He Ke, Shiwei Lai, Junjuan Xia, Lisheng Fan
TL;DR: In this paper, the authors proposed a federated learning approach for MEC-aided industrial Internet of Things (IIoT) networks, where the task offloading ratio, bandwidth allocation ratio, and transmit power were optimized using a deep reinforcement learning (DRL) algorithm.

79 citations