
Showing papers by Zheng Chang published in 2022


Journal ArticleDOI
TL;DR: In this paper, a distributed deep learning-based computation offloading and resource allocation (DDL-CORA) algorithm was proposed for SD-MEC IoT, in which multiple parallel deep neural networks (DNNs) are invoked to generate the optimal offloading decision and resource scheduling.
Abstract: In this paper, a software-defined mobile edge computing (SD-MEC) system in the Internet of Things (IoT) is investigated, in which multiple IoT devices offload their computation tasks to an appropriate edge server to support emerging IoT applications with strict computation-intensive and latency-critical requirements. In the considered SD-MEC networks, a joint computation offloading and power allocation problem is formulated to minimize a utility defined as the weighted sum of delay and power consumption in a distributed dense IoT. The optimization problem is a mixed-integer non-linear program and is difficult to solve with general optimization tools due to its nonconvexity and complexity. We propose a distributed deep learning-based computation offloading and resource allocation (DDL-CORA) algorithm for SD-MEC IoT, in which multiple parallel deep neural networks (DNNs) are invoked to generate the optimal offloading decision and resource scheduling. Additionally, we design a shared replay memory mechanism to effectively store newly generated offloading decisions, which are then used to train and improve the DNNs. The simulation results show that the proposed DDL-CORA algorithm reduces the system utility by 7.72% on average compared with a reference deep Q-network (DQN) algorithm and by 31.9% compared with a reference branch-and-bound (BnB) algorithm, while keeping a good tradeoff between complexity and utility performance.
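
As an illustration of this pattern, the sketch below implements a DROO-style loop in PyTorch: several small DNNs each propose a binary offloading decision, the lowest-utility proposal is stored in a shared replay memory, and the memory is replayed to train all networks. The network sizes, hyperparameters, and the `utility` placeholder are hypothetical stand-ins, not taken from the paper.

```python
# Minimal sketch of parallel DNNs with a shared replay memory for
# offloading decisions. Sizes and the utility function are illustrative.
import random
import torch
import torch.nn as nn

N_DEVICES, N_DNNS, MEM_SIZE, BATCH = 10, 4, 1024, 64

def make_dnn():
    return nn.Sequential(
        nn.Linear(N_DEVICES, 64), nn.ReLU(),
        nn.Linear(64, N_DEVICES))  # one logit per device: offload or not

dnns = [make_dnn() for _ in range(N_DNNS)]
opts = [torch.optim.Adam(d.parameters(), lr=1e-3) for d in dnns]
memory = []  # shared replay memory of (channel_state, best_decision)

def utility(h, x):
    # Placeholder weighted delay-plus-power cost; lower is better.
    return float((x / h + 0.5 * x).sum())

def step(h):
    # Each DNN proposes a quantized decision; keep the lowest-cost one.
    candidates = [(torch.sigmoid(d(h)) > 0.5).float() for d in dnns]
    best = min(candidates, key=lambda x: utility(h, x))
    memory.append((h, best))
    if len(memory) > MEM_SIZE:
        memory.pop(0)
    # Replay: train every DNN on a batch sampled from the shared memory.
    if len(memory) >= BATCH:
        hs, xs = map(torch.stack, zip(*random.sample(memory, BATCH)))
        for d, o in zip(dnns, opts):
            loss = nn.functional.binary_cross_entropy_with_logits(d(hs), xs)
            o.zero_grad(); loss.backward(); o.step()
    return best

for _ in range(200):
    step(torch.rand(N_DEVICES) + 0.1)  # random channel gains per slot
```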

15 citations


Journal ArticleDOI
TL;DR: The qualitative and quantitative evaluation of experimental results demonstrates that the proposed methods not only show the superiority of class balancing through the confusion matrix and class-wise metrics, but also achieve better N1-stage and overall classification accuracy compared with other state-of-the-art approaches.
Abstract: For real-world automatic sleep-stage classification tasks, various existing deep learning-based models are biased toward the majority classes. Because of the unique sleep structure, most current polysomnography (PSG) datasets suffer from an inherent class imbalance problem (CIP), in which the number of samples per sleep stage is severely unequal. In this study, we first define the class imbalance factor (CIF) to quantitatively describe the level of CIP. Afterward, we propose two balancing methods that address the problem from the dataset quantity and from the relationship between the class distribution and the applied model, respectively. The first is to employ data augmentation (DA) with a generative adversarial network (GAN) model and different intensities of Gaussian white noise (GWN) to balance samples; among these, GWN addition is specifically tailored to deep learning-based models, as it can work on raw electroencephalogram (EEG) data while preserving its properties. In addition, we balance the relationship between the imbalanced classes and the biased network model with the help of the class distribution and neuroscience principles. We further propose an effective deep convolutional neural network (CNN) model utilizing bidirectional long short-term memory (Bi-LSTM) with single-channel EEG as the baseline. It is used to evaluate the efficiency of the two balancing approaches on three imbalanced PSG datasets (CCSHS, Sleep-EDF, and Sleep-EDF-V1). The qualitative and quantitative evaluation of the experimental results demonstrates that the proposed methods not only show the superiority of class balancing through the confusion matrix and class-wise metrics, but also achieve better N1-stage and overall classification accuracy compared with other state-of-the-art approaches.
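
A minimal sketch of the GWN balancing idea is shown below: a minority sleep stage (e.g., N1) is oversampled by adding Gaussian white noise at a chosen signal-to-noise ratio. The SNR values and epoch shape are illustrative assumptions, not the intensities used in the paper.

```python
# Oversample a minority class of raw EEG epochs with Gaussian white noise.
import numpy as np

def add_gwn(epoch, snr_db):
    """Return a noisy copy of a raw EEG epoch at the given SNR (dB)."""
    sig_power = np.mean(epoch ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return epoch + np.random.normal(0.0, np.sqrt(noise_power), epoch.shape)

def balance_class(epochs, target_count, snrs=(5, 10, 15)):
    """Augment minority-class epochs until target_count is reached."""
    out = list(epochs)
    while len(out) < target_count:
        base = epochs[np.random.randint(len(epochs))]
        out.append(add_gwn(base, np.random.choice(snrs)))
    return np.stack(out)

# Example: grow 100 N1 epochs (30 s at 100 Hz) to match a 500-epoch class.
n1 = np.random.randn(100, 3000)
balanced = balance_class(n1, 500)
```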

9 citations


Journal ArticleDOI
TL;DR: In this article, machine learning-based trajectory design and resource allocation schemes for a multi-UAV communications system are presented; with the objective of maximizing the system utility over all served users, a joint user association, power allocation, and trajectory design problem is formulated.
Abstract: The future mobile communication system is expected to provide ubiquitous connectivity and unprecedented services to billions of devices. The unmanned aerial vehicle (UAV), prominent for its flexibility and low cost, emerges as a significant network entity for realizing such ambitious targets. In this work, novel machine learning-based trajectory design and resource allocation schemes are presented for a multi-UAV communications system. In the considered system, the UAVs act as aerial base stations (BSs) and provide ubiquitous coverage. In particular, with the objective of maximizing the system utility over all served users, a joint user association, power allocation, and trajectory design problem is formulated. To address the high dimensionality of the state space, we first propose a machine learning-based strategic resource allocation algorithm that combines reinforcement learning and deep learning to design the optimal policy for all the UAVs. We then present a multi-agent deep reinforcement learning scheme for distributed implementation without a priori knowledge of the network dynamics. Extensive simulation studies are conducted to illustrate the advantages of the proposed schemes.
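
As a toy illustration of the reinforcement-learning ingredient, the sketch below runs tabular Q-learning for a single UAV choosing a trajectory on a grid. The grid size, reward shape, and hyperparameters are invented for the example and are far simpler than the paper's joint formulation.

```python
# Tabular Q-learning: a UAV learns to move toward a user cluster.
import numpy as np

GRID, ACTIONS = 5, 4  # 5x5 area; actions: up, down, left, right
Q = np.zeros((GRID * GRID, ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def reward(r, c):
    # Placeholder utility: higher near a user cluster at cell (4, 4).
    return -abs(r - 4) - abs(c - 4)

r = c = 0
for _ in range(5000):
    s = r * GRID + c
    # epsilon-greedy action selection
    a = np.random.randint(ACTIONS) if np.random.rand() < eps else int(Q[s].argmax())
    dr, dc = moves[a]
    r2 = min(max(r + dr, 0), GRID - 1)  # clip to the service area
    c2 = min(max(c + dc, 0), GRID - 1)
    s2 = r2 * GRID + c2
    # standard Q-learning temporal-difference update
    Q[s, a] += alpha * (reward(r2, c2) + gamma * Q[s2].max() - Q[s, a])
    r, c = r2, c2
```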

5 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a novel transmission and resource allocation strategy for a scenario where multiple wireless-powered vehicle area networks (VANs) co-exist with high density.
Abstract: The wireless-powered communication paradigm brings self-sustainability to on-vehicle sensors by harvesting energy from radiated radio frequency (RF) signals. This paper proposes a novel transmission and resource allocation strategy for a scenario where multiple wireless-powered vehicle area networks (VANs) co-exist with high density. The considered multi-VAN system consists of a remote master access point (MAP), multiple on-vehicle hybrid access points (HAPs), and sensors. Unlike previous works, we consider that the sensors can recycle the radiated RF energy from all the HAPs while the HAPs communicate with the MAP, so dedicated signals for energy harvesting (EH) are unnecessary. The proposed strategy achieves simultaneous wireless information and power transfer (SWIPT) without requiring a complex receiver architecture. The extra EH and interference caused by the dense distribution of VANs, which are rarely explored, are fully considered. To maximize the sum throughput of all the sensors while guaranteeing the transmission from the HAPs to the MAP, we jointly optimize the time allocation, system energy consumption, power allocation, and receive beamforming. Due to the non-convexity of the formulated problem, we address the sub-problems separately through the Rayleigh quotient, Frobenius norm minimization, and convex optimization. An efficient iterative algorithm is then developed to obtain sub-optimal solutions. The simulation results and discussions illustrate the proposed scheme's effectiveness and advantages.
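
The Rayleigh-quotient step can be illustrated compactly: the receive beamformer that maximizes a generalized Rayleigh quotient is the dominant generalized eigenvector of the signal and interference-plus-noise covariance pair. The matrices below are random stand-ins, not the paper's system model.

```python
# Max-SINR receive beamforming via a generalized eigenvalue problem.
import numpy as np
from scipy.linalg import eigh

M = 4  # receive antennas at the MAP
h = (np.random.randn(M, 1) + 1j * np.random.randn(M, 1)) / np.sqrt(2)
g = (np.random.randn(M, 1) + 1j * np.random.randn(M, 1)) / np.sqrt(2)
A = h @ h.conj().T                 # desired-signal covariance
B = 0.1 * np.eye(M) + g @ g.conj().T  # noise + one interferer covariance

# eigh solves A w = lambda B w; the eigenvector of the largest
# eigenvalue maximizes w^H A w / w^H B w (the Rayleigh quotient).
vals, vecs = eigh(A, B)
w = vecs[:, -1]
w /= np.linalg.norm(w)
sinr = np.real(w.conj() @ A @ w) / np.real(w.conj() @ B @ w)
print(f"achieved SINR: {sinr:.2f}")
```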

2 citations


Journal ArticleDOI
TL;DR: An edge computing-based blockchain framework is considered, where multiple edge service providers (ESPs) can provide computational resources to the devices for mining, and the effectiveness of the proposed incentive mechanism in forming the blockchain with the assistance of edge computing is demonstrated.
Abstract: Due to its distributed characteristics, the development and deployment of the blockchain framework can provide feasible solutions for a wide range of Internet of Things (IoT) applications. Since IoT devices are usually resource-limited, securing computational resources and the participation of the devices is the driving force for realizing blockchain at the network edge. In this article, an edge computing-based blockchain framework is considered, where multiple edge service providers (ESPs) can provide computational resources to the devices for mining. We mainly focus on the trading between the devices and the ESPs in the computational resource market, where the ESPs act as sellers and the devices act as buyers. Accordingly, a sequential game model is formulated, and by exploring the sequential Nash equilibrium (SE), the existence of optimal selling and buying strategies can be proved. Then, a deep Q-network-based algorithm with a modified experience replay update method is applied to find the optimal strategies. Through theoretical analysis and simulations, we demonstrate the effectiveness of the proposed incentive mechanism in forming the blockchain with the assistance of edge computing.
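
A toy sketch of the leader-follower structure of such a market, with invented utility forms: the ESP (seller) posts a unit price anticipating the device's (buyer's) best response, and iterating this logic yields the sequential-equilibrium pair. The constants and utility functions are illustrative only.

```python
# Leader-follower pricing toy: seller anticipates buyer's best response.
import numpy as np

a, cost = 10.0, 1.0  # buyer's valuation slope; seller's unit cost

def buyer_demand(price):
    # Buyer maximizes a*log(1+q) - price*q  =>  q* = a/price - 1.
    return max(a / price - 1.0, 0.0)

def seller_profit(price):
    return (price - cost) * buyer_demand(price)

# Seller searches its price grid, plugging in the buyer's best response.
prices = np.linspace(1.01, 10.0, 500)
p_star = prices[np.argmax([seller_profit(p) for p in prices])]
q_star = buyer_demand(p_star)
print(f"equilibrium price {p_star:.2f}, demand {q_star:.2f}")
```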

2 citations


Proceedings ArticleDOI
01 Jun 2022
TL;DR: This paper determines the sum squared position error bound (SPEB) as the localization accuracy metric for the presented localization-communication system and proposes an iterative algorithm to obtain a suboptimal solution by utilizing the Lagrange duality as well as penalty-based optimization methods.
Abstract: Joint localization and communication systems have drawn significant attention due to their high resource utilization. In this paper, we consider a reconfigurable intelligent surface (RIS)-aided simultaneous localization and communication system. We first derive the sum squared position error bound (SPEB) as the localization accuracy metric for the presented localization-communication system. Then, a joint RIS discrete phase shift design and subcarrier assignment problem is formulated to minimize the SPEB while guaranteeing each user's achievable data rate requirement. For the resulting non-convex mixed-integer problem, we propose an iterative algorithm that obtains a suboptimal solution by utilizing Lagrange duality as well as penalty-based optimization methods. Simulation results are provided to validate the performance of the proposed algorithm.
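
One common way to handle discrete phase shifts, sketched below under an invented cascaded-channel model, is to solve for continuous phases and then project each onto a b-bit quantized set; such a projection is the rounding step inside a penalty-based alternation.

```python
# Continuous RIS phase design followed by projection onto a b-bit set.
import numpy as np

N, bits = 32, 2                          # RIS elements, phase resolution
levels = 2 * np.pi * np.arange(2 ** bits) / (2 ** bits)

# Continuous stage: phases that co-phase a cascaded channel g*h.
g = np.random.randn(N) + 1j * np.random.randn(N)
h = np.random.randn(N) + 1j * np.random.randn(N)
theta_cont = -np.angle(g * h)

def project(theta):
    # Snap each phase to the nearest discrete level (wrapped distance).
    d = np.angle(np.exp(1j * (theta[:, None] - levels[None, :])))
    return levels[np.argmin(np.abs(d), axis=1)]

theta_disc = project(theta_cont)
gain_cont = np.abs(np.sum(g * h * np.exp(1j * theta_cont)))
gain_disc = np.abs(np.sum(g * h * np.exp(1j * theta_disc)))
print(f"channel gain: continuous {gain_cont:.2f}, {bits}-bit {gain_disc:.2f}")
```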

2 citations


Journal ArticleDOI
TL;DR: For a wireless-powered MEC system, the average age of information (AoI), a crucial performance metric that measures the freshness of information, is studied.
Abstract: Mobile edge computing (MEC) has been recognized as a promising technique for providing enhanced computation services to low-power wireless devices at the network edge. How to evaluate the timeliness of task and data delivery is critical for the development of MEC applications. Considering a wireless-powered MEC system, in this letter we study the average age of information (AoI), a crucial performance metric that measures the freshness of information. Specifically, in the considered system, a sensor node harvests energy from an energy transmitter and transmits computation tasks to the MEC server. The charging time of the sensor node's capacitor, as well as the waiting time and service time at the MEC server, are taken into account when calculating the average AoI. The closed-form expression of the average AoI is obtained accordingly and evaluated through numerical simulations.
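
The average AoI of such a charge-then-serve timeline can be estimated by simulating the sawtooth age curve: each update becomes available after the capacitor charges, then experiences first-come-first-served waiting and service at the server. The exponential distributions and rates below are illustrative assumptions, not the paper's closed-form model.

```python
# Monte Carlo estimate of average AoI for a charge-then-serve timeline.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
charge = rng.exponential(1.0, N)    # capacitor charging times
service = rng.exponential(0.5, N)   # MEC service times

G = np.empty(N)                     # generation instants
D = np.empty(N)                     # delivery instants
t_free = 0.0                        # when the MEC server becomes free
t = 0.0
for i in range(N):
    G[i] = t + charge[i]                   # update ready after charging
    D[i] = max(G[i], t_free) + service[i]  # FCFS waiting + service
    t_free = D[i]
    t = G[i]                               # recharging starts right away

# Between deliveries the age rises linearly from D[i-1]-G[i-1]; the
# average AoI is the area under this sawtooth divided by elapsed time.
gaps = np.diff(D)
area = gaps * (D[:-1] - G[:-1]) + gaps ** 2 / 2
print(f"average AoI ~ {area.sum() / (D[-1] - D[0]):.3f}")
```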

2 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a reputation-based incentive mechanism for vehicular networks, where EVs act as the blockchain nodes, and the reputations of blockchain nodes are calculated according to their historical behavior and interactions.
Abstract: The sharing of high-quality traffic information plays a crucial role in enhancing the driving experience and safety for vehicular networks, especially with the development of electric vehicles (EVs). Crowdsourcing-based real-time navigation of charging piles is characterized by low delay and high accuracy. However, due to the lack of an effective incentive mechanism and the resource-consuming bottleneck of sharing real-time road conditions, methods to recruit or motivate more EVs to gather high-quality information have attracted considerable interest. In this paper, we first introduce a blockchain platform, in which EVs act as the blockchain nodes, together with a reputation-based incentive mechanism for vehicular networks. The reputations of blockchain nodes are calculated according to their historical behavior and interactions. Further, we design and implement algorithms for updating honest-behavior-based reputations as well as for screening low-reputation miners, to optimize the profits of miners and address spatial crowdsourcing tasks for sharing information on road conditions. The experimental results show that the proposed reputation-based incentive method can improve the reputation and profits of vehicle users and ensure data timeliness and reliability.
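
A minimal sketch of an honest-behavior-based reputation update with low-reputation screening follows; the update rule, decay factor, and threshold are illustrative placeholders for the paper's algorithms.

```python
# Reputation tracking and miner screening for blockchain nodes (EVs).
from dataclasses import dataclass

@dataclass
class Node:
    reputation: float = 0.5  # neutral starting reputation
    honest: int = 0
    dishonest: int = 0

def update(node: Node, was_honest: bool, decay: float = 0.9) -> None:
    """Blend past reputation with the node's observed honesty ratio."""
    if was_honest:
        node.honest += 1
    else:
        node.dishonest += 1
    behaviour = node.honest / (node.honest + node.dishonest)
    node.reputation = decay * node.reputation + (1 - decay) * behaviour

def eligible_miners(nodes, threshold: float = 0.4):
    """Screen out low-reputation nodes before miner selection."""
    return [n for n in nodes if n.reputation >= threshold]

# Example: one honest and one repeatedly dishonest node.
good, bad = Node(), Node()
for _ in range(20):
    update(good, True)
    update(bad, False)
print(len(eligible_miners([good, bad])))  # -> 1
```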

1 citation


Journal ArticleDOI
TL;DR: In this paper , a 1D-CNN-based deep learning method of one-dimensional convolutional neural networks (1DCNN) combined with channel increment strategy was proposed for the effective seizure prediction.
Abstract: The application of intracranial electroencephalogram (iEEG) to predict seizures remains challenging. Although channel selection has been utilized in seizure prediction and detection studies, most of them focus on combinations with conventional machine learning methods; channel selection combined with deep learning methods therefore deserves further analysis in the field of seizure prediction. Given this, in this work, a novel iEEG-based deep learning method using one-dimensional convolutional neural networks (1D-CNN) combined with a channel increment strategy is proposed for effective seizure prediction. First, we used 4-s sliding windows without overlap to segment the iEEG signals. Then, 4-s iEEG segments with an increasing number of channels (the channel increment strategy, from one channel to all channels) were sequentially fed into the constructed 1D-CNN model. Next, a patient-specific model was trained for classification. Finally, according to the classification results in the different channel cases, the channel case with the best classification rate was selected for each patient. Our method was tested on the Freiburg iEEG database, and system performance was evaluated at two levels (segment- and event-based). Two model training strategies (Strategy-1 and Strategy-2) based on K-fold cross-validation (K-CV) are discussed in our work. (1) For Strategy-1, a basic K-CV, a sensitivity of 90.18%, specificity of 94.81%, and accuracy of 94.42% were achieved at the segment-based level. At the event-based level, an event-based sensitivity of 100% and a false prediction rate (FPR) of 0.12/h were attained. (2) Strategy-2 differs from Strategy-1 in that a trained-model selection step is added during model training. We obtained a sensitivity, specificity, and accuracy of 86.23%, 96.00%, and 95.13%, respectively, at the segment-based level, and achieved an event-based sensitivity of 98.65% with an FPR of 0.08/h. Our method also showed better performance in seizure prediction than many previous studies and a random predictor using the same database. This may have reference value for the future clinical application of seizure prediction.
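
The channel increment strategy can be sketched as training one small 1D-CNN per channel count and keeping the best-performing case. The PyTorch architecture, sampling rate, and channel count below are placeholders, not the exact model of the paper.

```python
# One 1D-CNN per channel count; the best channel case is kept per patient.
import torch
import torch.nn as nn

def make_cnn(in_channels: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
        nn.MaxPool1d(4),
        nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(32, 2))  # preictal vs. interictal

# 4-s iEEG windows at an assumed 256 Hz; try 1, 2, ..., C channels and
# pick the case with the best validation score (scoring loop omitted).
fs, win = 256, 4
x_all = torch.randn(8, 6, fs * win)        # batch of 6-channel segments
for c in range(1, x_all.shape[1] + 1):
    model = make_cnn(c)
    logits = model(x_all[:, :c, :])        # feed the first c channels only
    print(c, logits.shape)                 # -> (8, 2) for every channel case
```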

1 citation


Journal ArticleDOI
TL;DR: In this paper, the authors formulated a joint data sensing and processing optimization problem to ensure the freshness of the status updates and reduce the energy consumption of IoT devices, and proposed a multi-variable iterative system cost minimization algorithm to optimize the system overhead.
Abstract: IoT devices are increasingly utilized to detect state transitions in the surrounding environment and then transmit the status updates to the base station for future system operations. To satisfy the stringent timeliness requirement of the status updates for accurate system control, the age of information (AoI) is introduced to quantify the freshness of the sensory data. Due to the limited computing resources of the devices, a status update can be offloaded to the mobile edge computing (MEC) server for execution to ensure information freshness. Since status updates generated by insufficient sensing operations may be invalid and cause additional processing time, the data sensing and processing operations need to be considered jointly. In this work, we formulate a joint data sensing and processing optimization problem to ensure the freshness of the status updates and reduce the energy consumption of the IoT devices. The formulated NP-hard problem is then decomposed into sampling, sensing, and computation offloading optimization sub-problems. Afterwards, we propose a multi-variable iterative system cost minimization algorithm to optimize the system overhead. Simulation results show the efficiency of our method in decreasing the system cost, as well as the dominance of sensing and processing under different scenarios.
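
The multi-variable iterative structure is essentially block-coordinate descent: fix all variables but one, optimize that block, and cycle until the cost stops improving. The surrogate cost below (a stand-in for a weighted AoI-plus-energy objective) and its bounds are invented for illustration.

```python
# Block-coordinate descent over sampling rate, sensing time, offloading.
import numpy as np
from scipy.optimize import minimize_scalar

def cost(sample_rate, sense_time, offload_frac):
    # Hypothetical cost, convex in each variable with the others fixed.
    aoi = 1.0 / sample_rate + sense_time + 2.0 * (1 - offload_frac)
    energy = 0.5 * sample_rate ** 2 + 1.0 / sense_time + offload_frac ** 2
    return aoi + energy

x = [1.0, 1.0, 0.5]  # initial sampling rate, sensing time, offload fraction
bounds = [(0.1, 5.0), (0.1, 5.0), (0.0, 1.0)]
for _ in range(20):
    for i in range(3):
        def f(v, i=i):
            y = list(x)
            y[i] = v            # vary one block, hold the others fixed
            return cost(*y)
        x[i] = minimize_scalar(f, bounds=bounds[i], method="bounded").x
print("cost:", round(cost(*x), 4), "vars:", [round(v, 3) for v in x])
```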

1 citation


Journal ArticleDOI
01 Aug 2022-Sensors
TL;DR: This work focuses on optimizing the offloading decision, the full/half-duplex energy harvesting mode, and the energy harvesting (EH) time allocation with the objective of minimizing the energy consumption of the MDs.
Abstract: In this paper, we investigate a resource allocation and computation offloading problem in a heterogeneous mobile edge computing (MEC) system. In the considered system, a wireless power transfer (WPT) base station (BS) with an MEC server is able to deliver wireless energy to the mobile devices (MDs), and the MDs can utilize the harvested energy for local computing or for task offloading to the WPT BS or to a macro BS (MBS) with a stronger computing server. In particular, we consider that the WPT BS can utilize a full- or half-duplex wireless energy transmission mode to empower the MDs. This work focuses on optimizing the offloading decision, the full/half-duplex energy harvesting mode, and the energy harvesting (EH) time allocation with the objective of minimizing the energy consumption of the MDs. As the formulated problem has a non-convex mixed-integer programming structure, we use the quadratically constrained quadratic program (QCQP) and semi-definite relaxation (SDR) methods to solve it. The simulation results demonstrate the effectiveness of the proposed scheme.
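
The SDR step can be sketched as follows: a QCQP in a vector x is lifted to a matrix variable X = xxᵀ, the rank-one constraint is dropped, and the resulting semidefinite program is solved, after which a rank-one solution is recovered. The problem data below are random placeholders, solved here with CVXPY.

```python
# Semidefinite relaxation of a toy QCQP, followed by rank-one recovery.
import numpy as np
import cvxpy as cp

n = 4
A0 = np.random.randn(n, n); A0 = A0 @ A0.T      # objective matrix (PSD)
A1 = np.random.randn(n, n); A1 = A1 @ A1.T      # constraint matrix (PSD)

# Lifted problem: min tr(A0 X) s.t. tr(A1 X) >= 1, X PSD (rank dropped).
X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(A0 @ X)),
                  [cp.trace(A1 @ X) >= 1])
prob.solve()

# Rank-one recovery: dominant eigenvector scaled by its eigenvalue.
w, V = np.linalg.eigh(X.value)
x = np.sqrt(w[-1]) * V[:, -1]
print("relaxed optimum:", round(prob.value, 4))
```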

Proceedings ArticleDOI
04 Dec 2022
TL;DR: In this article, a short-packet secure UAV-aided data collection and transmission scheme was proposed to guarantee the freshness and security of the transmission from the sensors to the base station (BS).
Abstract: In this paper, we propose a short-packet secure UAV-aided data collection and transmission scheme to guarantee the freshness and security of the transmission from the sensors to the base station (BS). First, for the data collection phase, the trajectory, the flight duration, and the user scheduling are jointly optimized with the objective of maximizing the energy efficiency (EE). To solve the non-convex EE maximization problem, we adopt the first-order Taylor expansion to convert it into two convex subproblems, which are then solved via successive convex approximation. Furthermore, we consider maximum-rate transmission in the UAV data transmission phase to achieve the maximum secrecy rate. The transmit power and the blocklength of the UAV-to-BS transmission are jointly optimized subject to constraints on the eavesdropping rate and outage probability. Simulation results are provided to validate the effectiveness of the proposed scheme.
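
The successive convex approximation idea can be shown on a one-dimensional toy problem: the concave "hard" term is replaced by its first-order Taylor expansion at the current iterate, giving a concave surrogate that lower-bounds the objective and is re-solved until the iterates converge. The objective below is a stand-in, not the paper's EE formulation.

```python
# SCA on a toy non-concave maximization: f(p) = 2*log(1+p) - sqrt(p).
import numpy as np
from scipy.optimize import minimize_scalar

def f(p):
    return 2 * np.log(1 + p) - np.sqrt(p)

def surrogate(p, pk):
    # Linearize sqrt at pk: the tangent of a concave term upper-bounds
    # it, so the surrogate lower-bounds f and touches it at p = pk.
    lin = np.sqrt(pk) + (p - pk) / (2 * np.sqrt(pk))
    return -(2 * np.log(1 + p) - lin)   # negated for minimize_scalar

pk = 5.0
for _ in range(50):
    p_new = minimize_scalar(lambda p: surrogate(p, pk),
                            bounds=(0.01, 20.0), method="bounded").x
    if abs(p_new - pk) < 1e-6:
        break
    pk = p_new
print(f"SCA converged to p ~ {pk:.3f}, objective {f(pk):.3f}")
```

Each surrogate maximization can only increase the true objective, since the surrogate matches f at the current point and never exceeds it elsewhere, which is what makes the iteration monotone.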