
Showing papers in "IEEE Transactions on Vehicular Technology" in 2019


Journal ArticleDOI
TL;DR: Numerical results show that the proposed phase shift design achieves the maximum ergodic spectral efficiency, and that a 2-bit quantizer is sufficient to ensure a spectral efficiency degradation of no more than 1 bit/s/Hz.
Abstract: Large intelligent surface (LIS)-assisted wireless communications have drawn attention worldwide. With the use of a low-cost LIS on building walls, signals can be reflected by the LIS and sent out along desired directions by controlling its phase shifts, thereby providing supplementary links for wireless communication systems. In this paper, we evaluate the performance of an LIS-assisted large-scale antenna system by formulating a tight upper bound on the ergodic spectral efficiency, and investigate the effect of the phase shifts on the ergodic spectral efficiency in different propagation scenarios. In particular, we propose an optimal phase shift design based on the upper bound of the ergodic spectral efficiency and statistical channel state information. Furthermore, we derive the requirement on the quantization bits of the LIS to guarantee an acceptable spectral efficiency degradation. Numerical results show that the proposed phase shift design achieves the maximum ergodic spectral efficiency, and that a 2-bit quantizer is sufficient to ensure a spectral efficiency degradation of no more than 1 bit/s/Hz.
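
The quantization result lends itself to a quick illustration. Below is a minimal Monte Carlo sketch, assuming a toy model with i.i.d. Rayleigh cascaded channels, co-phasing at the LIS, and unit transmit SNR (none of these values come from the paper), showing how little ergodic rate is lost once phases are rounded to 2 bits:

```python
import numpy as np

rng = np.random.default_rng(0)
N, snr, trials = 64, 1.0, 2000          # LIS elements, transmit SNR, Monte Carlo runs

def ergodic_rate(bits=None):
    rates = []
    for _ in range(trials):
        h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # BS -> LIS
        g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # LIS -> user
        phase = -np.angle(h * g)          # co-phasing: optimal for this toy model
        if bits is not None:              # round each phase to the nearest quantizer level
            step = 2 * np.pi / 2 ** bits
            phase = np.round(phase / step) * step
        combined = np.sum(h * g * np.exp(1j * phase))
        rates.append(np.log2(1 + snr * np.abs(combined) ** 2))
    return np.mean(rates)

print("continuous phases:", ergodic_rate())
print("1-bit quantizer  :", ergodic_rate(bits=1))
print("2-bit quantizer  :", ergodic_rate(bits=2))   # gap to continuous shrinks fast
```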

717 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed novel heuristic algorithm performs closely to the optimal solution and that it significantly improves the users’ offloading utility over traditional approaches.
Abstract: Mobile-edge computing (MEC) is an emerging paradigm that provides a capillary distribution of cloud computing capabilities to the edge of the wireless access network, enabling rich services and applications in close proximity to the end users. In this paper, an MEC-enabled multi-cell wireless network is considered where each base station (BS) is equipped with an MEC server that assists mobile users in executing computation-intensive tasks via task offloading. The problem of joint task offloading and resource allocation is studied in order to maximize the users' task offloading gains, measured by a weighted sum of reductions in task completion time and energy consumption. The considered problem is formulated as a mixed integer nonlinear program (MINLP) that involves jointly optimizing the task offloading decision, uplink transmission power of mobile users, and computing resource allocation at the MEC servers. Due to the combinatorial nature of this problem, solving for the optimal solution is difficult and impractical for a large-scale network. To overcome this drawback, we propose to decompose the original problem into a resource allocation (RA) problem with fixed task offloading decision and a task offloading (TO) problem that optimizes the optimal-value function corresponding to the RA problem. We address the RA problem using convex and quasi-convex optimization techniques, and propose a novel heuristic algorithm for the TO problem that achieves a suboptimal solution in polynomial time. Simulation results show that our algorithm performs closely to the optimal solution and that it significantly improves the users' offloading utility over traditional approaches.
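
The offloading gain that the MINLP maximizes is a weighted sum of time and energy reductions; a minimal sketch of that metric, with hypothetical task parameters and equal weights, reads:

```python
def offloading_gain(t_local, e_local, t_off, e_off, w_t=0.5, w_e=0.5):
    """Weighted sum of relative time and energy savings from offloading."""
    return w_t * (t_local - t_off) / t_local + w_e * (e_local - e_off) / e_local

# Hypothetical task: offloading halves latency and cuts energy by 70%.
print(offloading_gain(t_local=2.0, e_local=1.0, t_off=1.0, e_off=0.3))  # -> 0.6
```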

705 citations


Journal ArticleDOI
TL;DR: The simulation results show that the proposed algorithm can effectively improve the system utility and computation time, especially for the scenario where the MEC servers fail to meet demands due to insufficient computation resources.
Abstract: Computation offloading services provide required computing resources for vehicles with computation-intensive tasks. Past computation offloading research mainly focused on mobile edge computing (MEC) or cloud computing, separately. This paper presents a collaborative approach based on MEC and cloud computing that provides computation offloading services for vehicles in vehicular networks. A cloud-MEC collaborative computation offloading problem is formulated by jointly optimizing the computation offloading decision and computation resource allocation. Since the problem is non-convex and NP-hard, we propose a collaborative computation offloading and resource allocation optimization (CCORAO) scheme, and design a distributed computation offloading and resource allocation algorithm for the CCORAO scheme that achieves the optimal solution. The simulation results show that the proposed algorithm can effectively improve the system utility and computation time, especially for the scenario where the MEC servers fail to meet demands due to insufficient computation resources.

543 citations


Journal ArticleDOI
TL;DR: A deep learning-based method combining two convolutional neural networks trained on different datasets achieves higher-accuracy AMR, demonstrating the ability to classify QAM signals even in scenarios with a low signal-to-noise ratio.
Abstract: Automatic modulation recognition (AMR) is an essential and challenging topic in the development of cognitive radio (CR), and it is a cornerstone of CR adaptive modulation and demodulation capabilities to sense and learn environments and make corresponding adjustments. AMR is essentially a classification problem, and deep learning achieves outstanding performance in various classification tasks. This paper therefore proposes a deep learning-based method, combining two convolutional neural networks (CNNs) trained on different datasets, to achieve higher-accuracy AMR. A CNN is trained on samples composed of in-phase and quadrature component signals, also known as I/Q samples, to distinguish modulation modes that are relatively easy to identify. We adopt dropout instead of pooling operations to achieve higher recognition accuracy. A CNN based on constellation diagrams is also designed to recognize modulation modes that are difficult to distinguish in the former CNN, such as 16 quadrature amplitude modulation (QAM) and 64 QAM, demonstrating the ability to classify QAM signals even in scenarios with a low signal-to-noise ratio.
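
As a rough illustration of the "dropout instead of pooling" idea on I/Q inputs, here is a minimal PyTorch sketch; the layer sizes, class count, and sequence length are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class IQCNN(nn.Module):
    def __init__(self, num_classes=11, seq_len=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=(2, 3), padding=(0, 1)),  # mixes I and Q rows
            nn.ReLU(),
            nn.Dropout(0.5),                 # dropout in place of a pooling stage
            nn.Conv2d(64, 16, kernel_size=(1, 3), padding=(0, 1)),
            nn.ReLU(),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(16 * seq_len, num_classes)

    def forward(self, x):                    # x: (batch, 1, 2, seq_len)
        z = self.features(x)
        return self.classifier(z.flatten(1))

logits = IQCNN()(torch.randn(4, 1, 2, 128))  # 4 random I/Q frames
print(logits.shape)                          # torch.Size([4, 11])
```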

489 citations


Journal ArticleDOI
TL;DR: In this article, a decentralized resource allocation mechanism for vehicle-to-vehicle (V2V) communications based on deep reinforcement learning is proposed, which can be applied to both unicast and broadcast scenarios.
Abstract: In this paper, we develop a novel decentralized resource allocation mechanism for vehicle-to-vehicle (V2V) communications based on deep reinforcement learning, which can be applied to both unicast and broadcast scenarios. According to the decentralized resource allocation mechanism, an autonomous “agent,” a V2V link or a vehicle, makes its decisions to find the optimal sub-band and power level for transmission without requiring or having to wait for global information. Since the proposed method is decentralized, it incurs only limited transmission overhead. From the simulation results, each agent can effectively learn to satisfy the stringent latency constraints on V2V links while minimizing the interference to vehicle-to-infrastructure communications.

438 citations


Journal ArticleDOI
TL;DR: A two-stage soft security enhancement solution is proposed: miner selection and block verification. Candidates' reputation is evaluated using both past interactions and recommended opinions from other vehicles; candidates with high reputation are selected as active and standby miners, and standby miners further verify newly generated blocks to prevent internal collusion among the active miners.
Abstract: In the Internet of Vehicles (IoV), data sharing among vehicles is critical for improving driving safety and enhancing vehicular services. To ensure security and traceability of data sharing, existing studies utilize an efficient delegated proof-of-stake consensus scheme as a hard security solution to establish blockchain-enabled IoV (BIoV). However, as the miners are selected from miner candidates by stake-based voting, defending against voting collusion between the candidates and compromised high-stake vehicles becomes challenging. To address the challenge, in this paper we propose a two-stage soft security enhancement solution: 1) miner selection and 2) block verification. In the first stage, we design a reputation-based voting scheme to ensure secure miner selection. This scheme evaluates candidates' reputation using both past interactions and recommended opinions from other vehicles. The candidates with high reputation are selected to be active miners and standby miners. In the second stage, to prevent internal collusion among active miners, a newly generated block is further verified and audited by standby miners. To incentivize the participation of the standby miners in block verification, we adopt contract theory to model the interactions between active miners and standby miners, where block verification security and delay are taken into consideration. Numerical results based on a real-world dataset confirm the security and efficiency of our schemes for data sharing in BIoV.
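
A minimal sketch of the first-stage reputation aggregation and miner selection, with an assumed 0.7/0.3 weighting between direct interactions and recommended opinions and made-up candidate scores:

```python
def reputation(direct, recommended, w_direct=0.7):
    """Weighted aggregation of a candidate's own record and peers' opinions."""
    return w_direct * direct + (1 - w_direct) * sum(recommended) / len(recommended)

candidates = {
    "v1": reputation(0.9, [0.8, 0.95]),
    "v2": reputation(0.4, [0.6, 0.5]),
    "v3": reputation(0.85, [0.9, 0.7]),
    "v4": reputation(0.7, [0.75, 0.8]),
}
ranked = sorted(candidates, key=candidates.get, reverse=True)
active, standby = ranked[:2], ranked[2:]     # standby miners audit new blocks
print(active, standby)
```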

434 citations


Journal ArticleDOI
TL;DR: A reinforcement learning (RL) based offloading scheme is proposed for an IoT device with EH to select the edge device and the offloading rate according to the current battery level, the previous radio transmission rate to each edge device, and the predicted amount of harvested energy.
Abstract: Internet of Things (IoT) devices can apply mobile edge computing (MEC) and energy harvesting (EH) to provide high-level experiences for computation-intensive applications and concurrently to prolong the lifetime of the battery. In this paper, we propose a reinforcement learning (RL) based offloading scheme for an IoT device with EH to select the edge device and the offloading rate according to the current battery level, the previous radio transmission rate to each edge device, and the predicted amount of harvested energy. This scheme enables the IoT device to optimize the offloading policy without knowledge of the MEC model, the energy consumption model, and the computation latency model. We also present a deep RL-based offloading scheme to further accelerate the learning speed. Performance bounds in terms of energy consumption, computation latency, and utility are provided for three typical offloading scenarios and verified via simulations for an IoT device that uses wireless power transfer for energy harvesting. Simulation results show that the proposed RL-based offloading scheme reduces the energy consumption, computation latency, and task drop rate, and thus increases the utility of the IoT device in dynamic MEC, compared with the benchmark offloading schemes.
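
In tabular form, the RL scheme boils down to a Q-learning update over discretized (battery, rate, harvest) states and (edge device, offloading rate) actions. The sketch below uses placeholder dynamics and a toy cost; the state/action sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states = 3 * 4 * 3          # battery x link-rate x predicted-harvest levels (assumed)
n_actions = 2 * 5             # edge devices x offloading-rate levels (assumed)
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):      # placeholder environment, not the paper's models
    reward = -0.1 * (action % 5) - rng.random()   # toy energy+latency cost
    return rng.integers(n_states), reward

s = 0
for _ in range(5000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])  # Bellman update
    s = s_next
print("greedy action in state 0:", int(Q[0].argmax()))
```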

409 citations


Journal ArticleDOI
TL;DR: This work investigates the collaboration between cloud computing and edge computing, where the tasks of mobile devices can be partially processed at the edge node and at the cloud server, and obtains a closed-form computation resource allocation strategy by leveraging convex optimization theory.
Abstract: By performing data processing at the network edge, mobile edge computing can effectively overcome the deficiencies of network congestion and long latency in cloud computing systems. To improve edge cloud efficiency with limited communication and computation capacities, we investigate the collaboration between cloud computing and edge computing, where the tasks of mobile devices can be partially processed at the edge node and at the cloud server. First, a joint communication and computation resource allocation problem is formulated to minimize the weighted-sum latency of all mobile devices. Then, the closed-form optimal task splitting strategy is derived as a function of the normalized backhaul communication capacity and the normalized cloud computation capacity. Some interesting and useful insights for the optimal task splitting strategy are also highlighted by analyzing four special scenarios. Based on this, we further transform the original joint communication and computation resource allocation problem into an equivalent convex optimization problem and obtain the closed-form computation resource allocation strategy by leveraging the convex optimization theory. Moreover, a necessary condition is also developed to judge whether a task should be processed at the corresponding edge node only, without offloading to the cloud server. Finally, simulation results confirm our theoretical analysis and demonstrate that the proposed collaborative cloud and edge computing scheme can evidently achieve a better delay performance than the conventional schemes.
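
The flavor of the optimal task splitting can be reproduced numerically: with the edge and cloud working in parallel, the latency-minimizing split equalizes the two branches. The capacity values below are illustrative, not the paper's closed form:

```python
from scipy.optimize import minimize_scalar

L_bits = 1e6          # task size (bits)
f_edge = 2e9          # edge CPU cycles/s; 1000 cycles/bit assumed
c_back = 5e5          # backhaul rate to the cloud, bits/s
f_cloud = 10e9        # cloud CPU cycles/s

def latency(x):        # x = fraction processed at the edge, remainder sent to cloud
    t_edge = x * L_bits * 1000 / f_edge
    t_cloud = (1 - x) * L_bits / c_back + (1 - x) * L_bits * 1000 / f_cloud
    return max(t_edge, t_cloud)             # edge and cloud branches run in parallel

res = minimize_scalar(latency, bounds=(0, 1), method="bounded")
print(f"optimal edge fraction {res.x:.3f}, latency {res.fun:.3f}s")
```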

395 citations


Journal ArticleDOI
TL;DR: This paper explores a vehicle edge computing network architecture in which vehicles can act as mobile edge servers to provide computation services for nearby UEs, and proposes a vehicle-assisted offloading scheme for UEs that accounts for the delay of the computation task.
Abstract: Mobile Edge Computing (MEC) is a promising technology to extend diverse services to the edge of Internet of Things (IoT) systems. However, static edge server deployment may cause a “service hole” in IoT networks in which the locations and service requests of the User Equipments (UEs) may be dynamically changing. In this paper, we first explore a vehicle edge computing network architecture in which vehicles can act as mobile edge servers to provide computation services for nearby UEs. Then, we propose a vehicle-assisted offloading scheme for UEs while considering the delay of the computation task. Accordingly, an optimization problem is formulated to maximize the long-term utility of the vehicle edge computing network. Considering the stochastic vehicle traffic, dynamic computation requests, and time-varying communication conditions, the problem is further formulated as a semi-Markov process, and two reinforcement learning methods, a $Q$-learning based method and a deep reinforcement learning (DRL) method, are proposed to obtain the optimal policies of computation offloading and resource allocation. Finally, we analyze the effectiveness of the proposed scheme in the vehicular edge computing network through numerical results.

338 citations


Journal ArticleDOI
TL;DR: A summary of global charging standards and electric vehicle (EV) related trends is presented, demonstrating momentum toward OBCs with higher power ratings.
Abstract: This paper provides a comprehensive review and analysis of the state of the art and future trends for high-power conductive on-board chargers (OBCs) for electric vehicles. To provide a global context, a summary of global charging standards and electric vehicle (EV) related trends is presented, which demonstrates momentum toward OBCs with higher power ratings. High-power OBCs are either unidirectional or bidirectional, and they have either an integrated or non-integrated system architecture. Non-integrated high-power OBCs are studied from both industry and academia, and the former are used to illustrate the current state of the art. The latter are classified on the basis of the converter design approach, studied for their principle of operation, and compared over power density, weight, efficiency, and other metrics. In addition to non-integrated OBCs, recent advancements in propulsion-machine integrated OBC solutions are also presented. Other integrated OBC techniques, such as system integration with the EV's auxiliary power module and wireless charging systems, are also discussed. Finally, future charging strategies and functionalities in charging infrastructures are addressed, and global OBC trends are summarized.

301 citations


Journal ArticleDOI
TL;DR: The OTFS input–output relation has a simple sparse structure that enables low-complexity detection algorithms; however, reducing out-of-band power may introduce nonuniform channel gains for the transmitted symbols, thus impairing the overall error performance.
Abstract: In this paper, we model $M\times N$ orthogonal time frequency space modulation (OTFS) over a $P$-path doubly dispersive channel with delays less than $\tau_{\max}$ and Doppler shifts in the range $(\nu_{\min}, \nu_{\max})$. We first derive, in a simple matrix form, the input–output relation in the delay-Doppler domain for practical (e.g., rectangular) pulse-shaping waveforms, and then generalize it to arbitrary waveforms. This relation extends the original OTFS input–output approach, which assumes ideal pulse-shaping waveforms that are bi-orthogonal in both time and frequency. We show that the OTFS input–output relation has a simple sparse structure that enables the use of low-complexity detection algorithms. Different from previous work, only a single cyclic prefix is added at the end of the OTFS frame, significantly reducing the overhead without incurring any penalty from the loss of bi-orthogonality of the pulse-shaping waveforms. Finally, we compare the OTFS performance with different pulse-shaping waveforms, and show that the reduction of out-of-band power may introduce nonuniform channel gains for the transmitted symbols, thus impairing the overall error performance.
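
A small numpy sketch (following the matrix formulation the abstract describes, with illustrative sizes and integer Doppler taps) makes the sparsity visible: the delay-Doppler channel matrix has one nonzero per path in each row:

```python
import numpy as np

M, N = 8, 8                                  # delay and Doppler bins
MN = M * N
paths = [(1.0, 1, 2), (0.7, 3, -1)]          # (gain, delay tap l, Doppler tap k)

Pi = np.roll(np.eye(MN), 1, axis=0)          # cyclic shift: one-sample delay
z = np.exp(2j * np.pi / MN)
H_time = sum(g * np.diag(z ** (k * np.arange(MN))) @
             np.linalg.matrix_power(Pi, l) for g, l, k in paths)

F_N = np.fft.fft(np.eye(N)) / np.sqrt(N)     # DFT across the Doppler axis
A = np.kron(F_N, np.eye(M))                  # rectangular-pulse OTFS transform
H_dd = A @ H_time @ A.conj().T               # delay-Doppler input-output matrix

print("nonzeros per row:", np.count_nonzero(np.abs(H_dd) > 1e-9, axis=1)[:4])
```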

Journal ArticleDOI
TL;DR: A deep reinforcement learning model is proposed to control the traffic light cycle, incorporating multiple optimization elements to improve performance, such as a dueling network, target network, double Q-learning network, and prioritized experience replay.
Abstract: Existing inefficient traffic light cycle control causes numerous problems, such as long delays and wasted energy. To improve efficiency, taking real-time traffic information as input and dynamically adjusting the traffic light duration accordingly is a must. Existing works either split the traffic signal into equal durations or leverage only limited traffic information. In this paper, we study how to decide the traffic signal duration based on the data collected from different sensors. We propose a deep reinforcement learning model to control the traffic light cycle. In the model, we quantify the complex traffic scenario as states by collecting traffic data and dividing the whole intersection into small grids. The duration changes of a traffic light are the actions, which are modeled as a high-dimensional Markov decision process. The reward is the cumulative waiting time difference between two cycles. To solve the model, a convolutional neural network is employed to map states to rewards. The proposed model incorporates multiple optimization elements to improve performance, such as a dueling network, target network, double Q-learning network, and prioritized experience replay. We evaluate our model via simulation using the Simulation of Urban MObility (SUMO) simulator. Simulation results show the efficiency of our model in controlling traffic lights.
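
Of the listed ingredients, the dueling network is the easiest to show in isolation: Q-values are decomposed into a state value plus a mean-centred advantage. A PyTorch sketch with illustrative dimensions (the target network, double Q-learning, and prioritized replay are omitted):

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    def __init__(self, feat_dim=128, n_actions=9):
        super().__init__()
        self.value = nn.Linear(feat_dim, 1)               # V(s)
        self.advantage = nn.Linear(feat_dim, n_actions)   # A(s, a)

    def forward(self, features):
        v, a = self.value(features), self.advantage(features)
        return v + a - a.mean(dim=1, keepdim=True)        # Q(s,a) = V + (A - mean A)

q = DuelingHead()(torch.randn(2, 128))
print(q.shape)  # torch.Size([2, 9])
```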

Journal ArticleDOI
TL;DR: This paper proposes an efficient incentive mechanism based on contract theoretical modeling to minimize the network delay from a contract-matching integration perspective, and demonstrates that significant performance improvement can be achieved by the proposed scheme.
Abstract: Vehicular fog computing (VFC) has emerged as a promising solution to relieve the overload on the base station and reduce the processing delay during the peak time. The computation tasks can be offloaded from the base station to vehicular fog nodes by leveraging the under-utilized computation resources of nearby vehicles. However, the wide-area deployment of VFC still confronts several critical challenges, such as the lack of efficient incentive and task assignment mechanisms. In this paper, we address the above challenges and provide a solution to minimize the network delay from a contract-matching integration perspective. First, we propose an efficient incentive mechanism based on contract theoretical modeling. The contract is tailored for the unique characteristic of each vehicle type to maximize the expected utility of the base station. Next, we transform the task assignment problem into a two-sided matching problem between vehicles and user equipment. The formulated problem is solved by a pricing-based stable matching algorithm, which iteratively carries out the “propose” and “price-rising” procedures to derive a stable matching based on the dynamically updated preference lists. Finally, numerical results demonstrate that significant performance improvement can be achieved by the proposed scheme.
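
The "propose" and "price-rising" procedures can be sketched as a small auction-style loop; the delays, prices, and eviction rule below are illustrative stand-ins for the paper's utilities:

```python
import numpy as np

delay = np.array([[1.0, 2.0, 3.0],    # delay[u][v]: UE u served by vehicle v
                  [1.2, 2.5, 2.0],
                  [0.9, 1.8, 2.2]])
price = np.zeros(3)
match = {}                             # vehicle -> UE
eps = 0.1                              # price-rising step

unmatched = [0, 1, 2]
while unmatched:
    u = unmatched.pop(0)
    v = int(np.argmin(delay[u] + price))          # "propose" to the cheapest vehicle
    if v in match:                                # occupied: keep the better proposer
        u_old = match[v]
        keep, drop = (u, u_old) if delay[u][v] < delay[u_old][v] else (u_old, u)
        match[v] = keep
        unmatched.append(drop)
        price[v] += eps                           # "price-rising" demotes this vehicle
    else:
        match[v] = u
print({f"vehicle {v}": f"UE {u}" for v, u in match.items()})
```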

Journal ArticleDOI
TL;DR: In this paper, a multi-agent Q-learning-based placement algorithm is proposed for determining the optimal positions of the UAVs in each time slot based on the movement of users.
Abstract: A novel framework is proposed for the trajectory design of multiple unmanned aerial vehicles (UAVs) based on the prediction of users' mobility information. The problem of joint trajectory design and power control is formulated for maximizing the instantaneous sum transmit rate while satisfying the rate requirements of users. In an effort to solve this pertinent problem, a three-step approach is proposed, which is based on machine learning techniques to obtain both the position information of users and the trajectory design of UAVs. First, a multi-agent Q-learning-based placement algorithm is proposed for determining the optimal positions of the UAVs based on the initial locations of the users. Second, in an effort to determine the mobility information of users based on a real dataset, their position data are collected from Twitter to describe anonymized user trajectories in the physical world. In the meantime, an echo state network (ESN) based prediction algorithm is proposed for predicting the future positions of users based on the real dataset. Third, a multi-agent Q-learning-based algorithm is conceived for predicting the position of UAVs in each time slot based on the movement of users. In this algorithm, multiple UAVs act as agents to find optimal actions by interacting with their environment and learning from their mistakes. Additionally, we prove that the proposed multi-agent Q-learning-based trajectory design and power control algorithm can converge under mild conditions. Numerical results demonstrate that as the size of the reservoir increases, the proposed ESN approach improves the prediction accuracy. Finally, we demonstrate that throughput gains of about $17\%$ are achieved.
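
The ESN predictor is simple enough to sketch in numpy: a fixed random reservoir with only a ridge-regression readout trained. The toy task below (next sample of a sine wave) stands in for the Twitter mobility data, and all sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, rho = 200, 0.9
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.standard_normal((n_res, n_res))
W *= rho / max(abs(np.linalg.eigvals(W)))     # scale spectral radius below 1

u = np.sin(0.1 * np.arange(600))[:, None]     # toy "trajectory" input sequence
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t in range(len(u) - 1):
    x = np.tanh(W_in @ u[t] + W @ x)          # reservoir update
    states[t] = x

X, y = states[100:-1], u[101:, 0]             # discard washout; predict next step
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)  # ridge readout
print("last prediction vs truth:", X[-1] @ W_out, y[-1])
```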

Journal ArticleDOI
TL;DR: It is demonstrated that the proposed channel estimation in OTFS significantly outperforms OFDM with known channel information, and extensions of the proposed schemes to multiple-input multiple-output (MIMO) and multi-user uplink/downlink are presented.
Abstract: Orthogonal time frequency space (OTFS) modulation was shown to provide significant error performance advantages over orthogonal frequency division multiplexing (OFDM) in delay–Doppler channels. In order to detect OTFS modulated data, the channel impulse response needs to be known at the receiver. In this paper, we propose embedded pilot-aided channel estimation schemes for OTFS. In each OTFS frame, we arrange pilot, guard, and data symbols in the delay–Doppler plane to suitably avoid interference between pilot and data symbols at the receiver. We develop such symbol arrangements for OTFS over multipath channels with integer and fractional Doppler shifts, respectively. At the receiver, channel estimation is performed based on a threshold method and the estimated channel information is used for data detection via a message passing algorithm. Thanks to our specific embedded symbol arrangements, both channel estimation and data detection are performed within the same OTFS frame with minimum overhead. We compare through simulations the error performance of OTFS using the proposed channel estimation and OTFS with ideally known channel information and observe only a marginal performance loss. We also demonstrate that the proposed channel estimation in OTFS significantly outperforms OFDM with known channel information. Finally, we present extensions of the proposed schemes to multiple-input multiple-output (MIMO) and multi-user uplink/downlink.
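
The threshold method can be illustrated on a toy delay-Doppler grid: a single high-power pilot with zero guard symbols around it, a channel modeled as a few shifted and scaled copies of the grid, and taps declared wherever the received magnitude exceeds a threshold. Everything below is a simplifying assumption (integer Doppler, trivial shift-and-scale channel):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 16, 16                                  # delay x Doppler bins
grid = np.zeros((M, N), dtype=complex)
lp, kp = M // 2, N // 2
grid[lp, kp] = 10.0                            # high-power pilot; guards stay zero

paths = [(0.9, 2, 1), (0.5, 5, -2)]            # (gain, delay shift, Doppler shift)
rx = sum(g * np.roll(grid, (l, k), axis=(0, 1)) for g, l, k in paths)
rx += 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

thresh = 0.3                                   # set well above the per-bin noise level
taps = [(rx[l, k] / grid[lp, kp], (l - lp) % M, (k - kp) % N)
        for l in range(M) for k in range(N) if abs(rx[l, k]) > thresh]
for g_hat, dl, dk in taps:
    print(f"tap: |gain|~{abs(g_hat):.2f}, delay {dl}, Doppler {(dk + N//2) % N - N//2}")
```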

Journal ArticleDOI
TL;DR: An effective health indicator for lithium-ion battery state of health and a moving-window-based method for predicting battery remaining useful life are developed; capacity estimation results show errors within 1.5%.
Abstract: This paper develops an effective health indicator of lithium-ion battery state of health and a moving-window-based method to predict battery remaining useful life. The health indicator was extracted from the partial charge voltage curve of cells. Battery remaining useful life was predicted using a linear aging model constructed from the capacity data within a moving window, combined with Monte Carlo simulation to generate prediction uncertainties. Both the developed capacity estimation and remaining useful life prediction methods were implemented on a real battery management system used in electric vehicles. Experimental data for cells tested at different current rates, including 1 C and 2 C, and different temperatures, including 25 °C and 40 °C, were collected and used. The implementation results show that the capacity estimation errors were within 1.5%. During the last 20% of battery lifetime, the root-mean-square errors of remaining useful life predictions were within 20 cycles, and the 95% confidence intervals mainly cover about 20 cycles.
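
A sketch of the moving-window idea with synthetic fade data: fit a linear aging model to the last window of capacity points, then Monte Carlo the extrapolation to an assumed 80% end-of-life threshold to obtain an RUL distribution:

```python
import numpy as np

rng = np.random.default_rng(3)
cycles = np.arange(300)
capacity = 1.0 - 0.0005 * cycles + 0.002 * rng.standard_normal(300)  # toy fade data
eol, window = 0.8, 50                          # assumed EOL threshold, window length

x, y = cycles[-window:], capacity[-window:]
slope, intercept = np.polyfit(x, y, 1)         # linear aging model on the window
sigma = np.std(y - (slope * x + intercept))

rul_samples = []
for _ in range(1000):                          # Monte Carlo prediction uncertainty
    s = slope + sigma / window * rng.standard_normal()  # crude slope-uncertainty proxy
    rul_samples.append((eol - intercept) / s - cycles[-1])
lo, hi = np.percentile(rul_samples, [2.5, 97.5])
print(f"RUL ~ {np.mean(rul_samples):.0f} cycles (95% CI {lo:.0f}-{hi:.0f})")
```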

Journal ArticleDOI
TL;DR: A driver activity recognition system based on deep convolutional neural networks (CNN) is designed to identify whether the driver is distracted; the binary detection rate of 91.4% accuracy shows the advantages of the proposed deep learning approach.
Abstract: Driver decisions and behaviors are essential factors that affect driving safety. To understand driver behaviors, a driver activity recognition system is designed based on deep convolutional neural networks (CNN) in this paper. Specifically, seven common driving activities are identified: normal driving, right mirror checking, rear mirror checking, left mirror checking, using an in-vehicle radio device, texting, and answering the mobile phone. Among these activities, the first four are regarded as normal driving tasks, while the remaining three are classified into the distraction group. The experimental images are collected using a low-cost camera, and ten drivers are involved in the naturalistic data collection. The raw images are segmented using a Gaussian mixture model to extract the driver's body from the background before training the behavior recognition CNN model. To reduce the training cost, a transfer learning method is applied to fine-tune the pre-trained CNN models. Three different pre-trained CNN models, namely AlexNet, GoogLeNet, and ResNet50, are adopted and evaluated. The detection results for the seven tasks achieved an average accuracy of 81.6% using AlexNet, and 78.6% and 74.9% using GoogLeNet and ResNet50, respectively. The CNN models are then trained for the binary classification task of identifying whether the driver is being distracted. The binary detection rate achieved 91.4% accuracy, which shows the advantage of the proposed deep learning approach. Finally, real-world applications are analyzed and discussed.
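
The transfer-learning step, sketched with the modern torchvision weights API (an assumption of this sketch, not necessarily what the authors used): load a pretrained backbone, freeze its features, and retrain only a new classification head:

```python
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes=7):       # 7 activities, or 2 for distraction
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    for p in model.parameters():               # freeze the pretrained features
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model

model = build_finetune_model()                 # then train model.fc on segmented frames
```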

Journal ArticleDOI
TL;DR: This paper considers a cognitive vehicular network that uses the TVWS band, formulates a dual-side optimization problem to minimize the cost of VTs and that of the MEC server at the same time, and designs an algorithm called DDORV to tackle the joint optimization problem.
Abstract: The proliferation of smart vehicular terminals (VTs) and their resource-hungry applications imposes serious challenges on the processing capabilities of VTs and the delivery of vehicular services. Mobile Edge Computing (MEC) offers a promising paradigm to solve this problem by offloading VT applications to proximal MEC servers, while TV white space (TVWS) bands can be used to supplement the bandwidth for computation offloading. In this paper, we consider a cognitive vehicular network that uses the TVWS band, and formulate a dual-side optimization problem to minimize the cost of VTs and that of the MEC server at the same time. Specifically, the dual-side cost minimization is achieved by jointly optimizing the offloading decision and local CPU frequency on the VT side, and the radio resource allocation and server provisioning on the server side, while guaranteeing network stability. Based on Lyapunov optimization, we design an algorithm called DDORV to tackle the joint optimization problem, where only current system states, such as channel states and traffic arrivals, are needed. The closed-form solution to the VT-side problem is obtained easily by derivation and a comparison of two values. For MEC server-side optimization, we first obtain server provisioning independently, and then devise an iterative algorithm based on continuous relaxation and Lagrangian dual decomposition for joint radio resource and power allocation. Simulation results demonstrate that DDORV converges quickly, can balance the cost-delay tradeoff flexibly, and obtains larger cost-reduction gains than existing schemes.
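
The Lyapunov drift-plus-penalty structure behind such online designs can be sketched in a few lines: each slot, pick the control that minimizes V·cost plus the backlog-weighted service deficit, using only the current state. The cost model and parameters below are toy assumptions, not DDORV itself:

```python
import numpy as np

rng = np.random.default_rng(5)
V, Q = 20.0, 0.0                      # cost/delay tradeoff weight, queue backlog
rates = np.array([0.0, 1.0, 2.0])     # candidate service (offloading) rates

for t in range(1000):
    arrival = rng.uniform(0, 2)       # task bits arriving this slot
    cost = rates ** 2                 # e.g., energy grows quadratically in rate
    choice = rates[np.argmin(V * cost + Q * (arrival - rates))]  # drift-plus-penalty
    Q = max(Q + arrival - choice, 0.0)                           # queue update
print(f"final backlog {Q:.2f}")       # larger V trades queue delay for lower cost
```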

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed deep reinforcement learning (DRL) method can enable UAVs to autonomously perform navigation in a virtual large-scale complex environment and can be generalized to more complex, larger-scale, and three-dimensional environments.
Abstract: In this paper, we propose a deep reinforcement learning (DRL)-based method that allows unmanned aerial vehicles (UAVs) to execute navigation tasks in large-scale complex environments. This technique is important for many applications such as goods delivery and remote surveillance. The problem is formulated as a partially observable Markov decision process (POMDP) and solved by a novel online DRL algorithm designed based on two strictly proved policy gradient theorems within the actor-critic framework. In contrast to conventional simultaneous-localization-and-mapping-based or sensing-and-avoidance-based approaches, our method directly maps UAVs' raw sensory measurements into control signals for navigation. Experimental results demonstrate that our method can enable UAVs to autonomously perform navigation in a virtual large-scale complex environment and can be generalized to more complex, larger-scale, and three-dimensional environments. Moreover, the proposed online DRL algorithm addressing POMDPs outperforms the state of the art.

Journal ArticleDOI
Xiaoyu Qiu, Luobin Liu, Wuhui Chen, Zicong Hong, Zibin Zheng
TL;DR: This paper formulates the online offloading problem as a Markov decision process by considering both blockchain mining tasks and data processing tasks, and introduces an adaptive genetic algorithm into the exploration of deep reinforcement learning to effectively avoid useless exploration and speed up convergence without reducing performance.
Abstract: Offloading computation-intensive tasks (e.g., blockchain consensus processes and data processing tasks) to the edge/cloud is a promising solution for blockchain-empowered mobile edge computing. However, the traditional offloading approaches (e.g., auction-based and game-theory approaches) fail to adjust the policy according to the changing environment and cannot achieve long-term performance. Moreover, the existing deep reinforcement learning-based offloading approaches suffer from the slow convergence caused by high-dimensional action space. In this paper, we propose a new model-free deep reinforcement learning-based online computation offloading approach for blockchain-empowered mobile edge computing in which both mining tasks and data processing tasks are considered. First, we formulate the online offloading problem as a Markov decision process by considering both the blockchain mining tasks and data processing tasks. Then, to maximize long-term offloading performance, we leverage deep reinforcement learning to accommodate highly dynamic environments and address the computational complexity. Furthermore, we introduce an adaptive genetic algorithm into the exploration of deep reinforcement learning to effectively avoid useless exploration and speed up the convergence without reducing performance. Finally, our experimental results demonstrate that our algorithm can converge quickly and outperform three benchmark policies.

Journal ArticleDOI
TL;DR: Training, validation, and testing are conducted for two commercial Li-ion batteries with Li(NiCoMn)1/3O2 cathodes and graphite anodes, indicating that the algorithm can estimate the battery SOH with less than 2% error for 80% of all cases, and less than 3% error for 95% of all cases.
Abstract: The online estimation of battery state-of-health (SOH) is an increasingly significant issue for the intelligent energy management of autonomous electric vehicles. Machine-learning based approaches are promising for online SOH estimation. This paper proposes a machine-learning based algorithm for the online SOH estimation of Li-ion batteries. A predictive diagnosis model used in the algorithm is established based on support vector machines (SVMs). The support vectors, which reflect the intrinsic characteristics of the Li-ion battery, are determined from the charging data of fresh cells. Furthermore, the coefficients of the SVMs for cells at different SOH are identified once the support vectors are determined. The algorithm functions by comparing partial charging curves with the stored SVMs. A similarity factor is defined after the comparison to quantify the SOH of the data under evaluation. The operation of the algorithm only requires partial charging curves, e.g., 15-minute charging curves, making fast on-board diagnosis of battery SOH a reality. The partial charging curves can be intercepted from a wide range of voltage sections, thereby relieving the constraint that the driver is unlikely to charge the battery pack from a predefined state of charge. Training, validation, and testing are conducted for two commercial Li-ion batteries with Li(NiCoMn)1/3O2 cathodes and graphite anodes, indicating that the algorithm can estimate the battery SOH with less than 2% error for 80% of all cases, and less than 3% error for 95% of all cases.
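
A hedged sketch of the compare-against-stored-models idea: fit an SVR per known SOH level on synthetic partial charging curves, then score a measured curve against each model with an inverse-RMS similarity factor. The curve shapes and hyperparameters are assumptions, not the paper's model:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
t = np.linspace(0, 15, 60)[:, None]            # a 15-minute partial charging window

def charge_curve(soh):                         # toy voltage-vs-time shape per SOH level
    return 3.6 + 0.4 * (1 - np.exp(-t[:, 0] / (5 * soh))) + 0.002 * rng.standard_normal(60)

models = {soh: SVR(C=10.0, epsilon=1e-3).fit(t, charge_curve(soh))
          for soh in (1.0, 0.9, 0.8)}          # stored models at known SOH levels

measured = charge_curve(0.9)                   # cell under evaluation
def similarity(model):                         # inverse RMS deviation as the factor
    return 1.0 / np.sqrt(np.mean((model.predict(t) - measured) ** 2))

best = max(models, key=lambda soh: similarity(models[soh]))
print("estimated SOH level:", best)
```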

Journal ArticleDOI
TL;DR: This paper proposes a less environment-dependent and a priori knowledge-independent NLOS identification and mitigation method for ranging that is able to determine the specific NLOS channel; an equality constrained Taylor series robust least squares technique is also proposed to suppress residual NLOS range errors by introducing robustness into the Taylor series least squares method.
Abstract: Non-line-of-sight (NLOS) propagation of radio signals can significantly degrade the performance of ultra-wideband localization systems indoors; it is hence crucial to mitigate the NLOS effect to enhance positioning accuracy. Existing NLOS mitigation algorithms improve localization accuracy either by compensating range errors through NLOS identification and mitigation methods for ranging, or by using dedicated localization techniques. However, they are only applicable to specific scenarios due to special assumptions or the need for a priori knowledge, such as thresholds and distribution functions. Another disadvantage is that they neither have the capability to evaluate the magnitude of the NLOS effect nor take account of the residual NLOS range errors during location estimation. To remedy these problems, this paper proposes a less environment-dependent and a priori knowledge-independent NLOS identification and mitigation method for ranging that is able to determine the specific NLOS channel. Based on the identified channel information, a rule is developed to select appropriate NLOS ranges for location estimation. Meanwhile, an equality constrained Taylor series robust least squares (ECTSRLS) technique is proposed to suppress residual NLOS range errors by introducing robustness into the Taylor series least squares method. All of these constitute our FCE-ECTSRLS NLOS mitigation algorithm. The performance of the proposed algorithm is compared with four existing NLOS mitigation algorithms in both static and mobile localization experiments in a harsh indoor environment. Experimental results demonstrate that the proposed FCE-ECTSRLS algorithm significantly outperforms the other four algorithms.
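
The robust least squares ingredient can be sketched (not the paper's exact ECTSRLS) as Huber-weighted Gauss-Newton positioning, which down-weights a range carrying residual NLOS bias. The anchor layout and the biased third range are illustrative:

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
ranges[2] += 2.0                                  # residual NLOS bias on one range

def huber_w(r, k=0.5):                            # Huber weights: robust to outliers
    return np.where(np.abs(r) <= k, 1.0, k / np.maximum(np.abs(r), 1e-12))

p = np.array([5.0, 5.0])                          # initial position guess
for _ in range(10):                               # iteratively reweighted Gauss-Newton
    d = np.linalg.norm(anchors - p, axis=1)
    J = (p - anchors) / d[:, None]                # Jacobian of range w.r.t. position
    r = ranges - d                                # range residuals
    W = np.diag(huber_w(r))
    p = p + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
print("estimate:", p, "true:", true_pos)
```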

Journal ArticleDOI
TL;DR: In this paper, a framework is proposed for quality-of-experience driven deployment and dynamic movement of multiple UAVs to maximize the sum mean opinion score of ground users, a problem which is proved to be NP-hard.
Abstract: A novel framework is proposed for quality-of-experience driven deployment and dynamic movement of multiple unmanned aerial vehicles (UAVs). The problem of joint non-convex three-dimensional (3-D) deployment and dynamic movement of the UAVs is formulated for maximizing the sum mean opinion score of ground users, which is proved to be NP-hard. With the aim of solving this pertinent problem, a three-step approach is proposed for attaining 3-D deployment and dynamic movement of multiple UAVs. First, a genetic algorithm based K-means (GAK-means) algorithm is utilized to obtain the cell partition of the users. Second, a Q-learning based deployment algorithm is proposed, in which each UAV acts as an agent, making its own decision to attain a 3-D position by learning through trial and error. In contrast to conventional genetic algorithm based learning algorithms, the proposed algorithm is capable of training the direction selection strategy offline. Third, a Q-learning based movement algorithm is proposed for the scenario in which the users are roaming. The proposed algorithm is capable of converging to an optimal state. Numerical results reveal that the proposed algorithms show a fast convergence rate after a small number of iterations. Additionally, the proposed Q-learning based deployment algorithm outperforms K-means algorithms and Iterative-GAKmean algorithms with low complexity.

Journal ArticleDOI
TL;DR: In this paper, a deep-learning-enabled mmWave massive MIMO framework for effective hybrid precoding is proposed, in which each selection of the precoders for obtaining the optimized decoder is regarded as a mapping relation in the deep neural network (DNN).
Abstract: Millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) has been regarded as an emerging solution for the next generation of communications, in which hybrid analog and digital precoding is an important method for reducing the hardware complexity and energy consumption associated with mixed-signal components. However, the fundamental limitations of existing hybrid precoding schemes are their high computational complexity and failure to fully exploit spatial information. To overcome these limitations, this paper proposes a deep-learning-enabled mmWave massive MIMO framework for effective hybrid precoding, in which each selection of the precoders for obtaining the optimized decoder is regarded as a mapping relation in a deep neural network (DNN). Specifically, the hybrid precoder is selected through training based on the DNN to optimize the precoding process of the mmWave massive MIMO system. Additionally, we present extensive simulation results to validate the performance of the proposed scheme. The results show that the DNN-based approach is capable of minimizing the bit error ratio and enhancing the spectrum efficiency of the mmWave massive MIMO system, achieving better performance in hybrid precoding than conventional schemes while substantially reducing the required computational complexity.
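
The core mapping idea, sketched in PyTorch under the assumption of a classification-style formulation: a DNN learns to map channel features to an index in an analog-beam codebook, with exhaustive-search labels as supervision. Layer sizes and the codebook size are illustrative:

```python
import torch
import torch.nn as nn

n_ant, codebook = 64, 16
net = nn.Sequential(                       # channel features -> codebook logits
    nn.Linear(2 * n_ant, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, codebook),
)
h = torch.randn(32, 2 * n_ant)             # stacked real/imag parts of 32 channels
best_idx = torch.randint(codebook, (32,))  # e.g., labels from exhaustive search
loss = nn.CrossEntropyLoss()(net(h), best_idx)
loss.backward()                            # one standard supervised training step
print(float(loss))
```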

Journal ArticleDOI
TL;DR: In this paper, the authors provide a comprehensive overview of vehicular communications from the network layer perspective, identify the challenges confronted by current vehicular networks, and present corresponding research opportunities.
Abstract: Vehicular communications, referring to information exchange among vehicles, infrastructure, etc., have attracted a lot of attention recently due to their great potential to support intelligent transportation, various safety applications, and on-road infotainment. In this paper, we provide a comprehensive overview of recent research on enabling efficient and reliable vehicular communications from the network layer perspective. First, we introduce general applications and unique characteristics of vehicular communication networks and the corresponding classifications. Based on different driving patterns, we categorize vehicular networks into manual driving vehicular networks and automated driving vehicular networks, and then discuss the available communication techniques, network structures, routing protocols, and handoff strategies applied in these vehicular networks. Finally, we identify the challenges confronted by current vehicular networks and present the corresponding research opportunities.

Journal ArticleDOI
TL;DR: The results show that the average secrecy rate of the proposed scheme provides about 20% and 150% performance gains over a scheme that jointly optimizes trajectory and transmit power without the UAV-J, and a scheme that optimizes transmit power with a fixed trajectory, respectively, at flight period $T=150$ s.
Abstract: This paper studies the physical layer security of an unmanned aerial vehicle (UAV) network, where a UAV base station (UAV-B) transmits confidential information to multiple information receivers (IRs) with the aid of a UAV jammer (UAV-J) in the presence of multiple eavesdroppers. We formulate an optimization problem to jointly design the trajectories and transmit power of UAV-B and UAV-J in order to maximize the minimum average secrecy rate over all IRs. The optimization problem is non-convex and the optimization variables are coupled, which renders it mathematically intractable. As such, we decompose the optimization problem into two subproblems and then solve it by employing an alternating iterative algorithm and the successive convex approximation technique. Our results show that the average secrecy rate of the proposed scheme provides about 20% and 150% performance gains over the scheme that jointly optimizes trajectory and transmit power without UAV-J and the scheme that optimizes transmit power with a fixed trajectory, respectively, at flight period $T=150$ s.
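
The objective being maximized is built from per-IR secrecy rates; a worked sketch with hypothetical SNRs shows why jamming the eavesdroppers helps:

```python
import numpy as np

def secrecy_rate(snr_ir, snr_eves):
    """Secrecy rate: IR rate minus the strongest eavesdropper rate, floored at 0."""
    return max(0.0, np.log2(1 + snr_ir) - np.log2(1 + max(snr_eves)))

# With UAV-J jamming, the eavesdroppers' effective SNR drops and secrecy improves.
print(secrecy_rate(snr_ir=15.0, snr_eves=[8.0, 5.0]))   # without jamming: ~0.83
print(secrecy_rate(snr_ir=15.0, snr_eves=[2.0, 1.0]))   # UAV-J suppresses eves: ~2.42
```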

Journal ArticleDOI
TL;DR: The proposed analysis shows that the main technical challenges are related to the PHY/MAC procedures, in particular random access, timing advance, and hybrid automatic repeat request; depending on the considered service and architecture, different solutions are proposed.
Abstract: Satellite communication systems are a promising solution to extend and complement terrestrial networks in unserved or under-served areas, as reflected by recent commercial and standardization endeavors. In particular, 3GPP recently initiated a study item for new radio, i.e., 5G, non-terrestrial networks aimed at deploying satellite systems either as a stand-alone solution or as an integration to terrestrial networks in mobile broadband and machine-type communication scenarios. However, typical satellite channel impairments, such as large path losses, delays, and Doppler shifts, pose severe challenges to the realization of a satellite-based NR network. In this paper, based on the architecture options currently being discussed in the standardization fora, we discuss and assess the impact of the satellite channel characteristics on the physical and medium access control layers, both in terms of transmitted waveforms and procedures, for enhanced mobile broadband and narrowband-Internet of Things applications. The proposed analysis shows that the main technical challenges are related to the PHY/MAC procedures, in particular random access, timing advance, and hybrid automatic repeat request; depending on the considered service and architecture, different solutions are proposed.

Journal ArticleDOI
TL;DR: This correspondence considers non-orthogonal multiple access (NOMA) assisted mobile edge computing (MEC), where the power and time allocation is jointly optimized to reduce the energy consumption of computation offloading.
Abstract: This correspondence considers non-orthogonal multiple access (NOMA) assisted mobile edge computing (MEC), where the power and time allocation is jointly optimized to reduce the energy consumption of computation offloading. Closed-form expressions for the optimal power and time allocation solutions are obtained and used to establish the conditions for determining whether conventional orthogonal multiple access (OMA), pure NOMA, or hybrid NOMA should be used for MEC offloading.
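
A toy numeric contrast of OMA and pure-NOMA offloading energy using the standard inverted Shannon relation (superposition interference is ignored here, and all values are illustrative; the paper's closed-form hybrid allocation is not reproduced):

```python
# Energy to deliver N bits in time t over a unit-gain channel:
# rate R = B*log2(1 + p*h), so p = (2^(N/(B*t)) - 1)/h and E = p*t.
B, h = 1.0, 1.0                     # normalized bandwidth and channel gain
N_bits = 2.0                        # bits to offload (normalized)

def energy(t):
    return t * (2 ** (N_bits / (B * t)) - 1) / h

print("OMA  (t=1):", energy(1.0))   # dedicated slot only -> 3.0
print("NOMA (t=2):", energy(2.0))   # also transmits during another user's slot -> 2.0
```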

Journal ArticleDOI
TL;DR: In this paper, the authors present analytical models for the average packet delivery ratio (PDR) as a function of the distance between transmitter and receiver, and for the four different types of transmission errors that can be encountered in C-V2X or LTE-V Mode 4.
Abstract: The C-V2X or LTE-V standard has been designed to support vehicle to everything (V2X) communications. The standard is an evolution of LTE, and it has been published by the 3GPP in Release 14. This new standard introduces the C-V2X or LTE-V Mode 4 that is specifically designed for V2V communications using the PC5 sidelink interface without any cellular infrastructure support. In Mode 4, vehicles autonomously select and manage their radio resources. Mode 4 is highly relevant since V2V safety applications cannot depend on the availability of infrastructure-based cellular coverage. This paper presents the first analytical models of the communication performance of C-V2X or LTE-V Mode 4. In particular, the paper presents analytical models for the average packet delivery ratio (PDR) as a function of the distance between transmitter and receiver, and for the four different types of transmission errors that can be encountered in C-V2X Mode 4. The models are validated for a wide range of transmission parameters and traffic densities. To this aim, this study compares the results obtained with the analytical models to those obtained with a C-V2X Mode 4 simulator implemented over Veins.
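
The models share a common structure: the PDR at distance d is the product of the probabilities of avoiding each of the four error types. The sketch below uses made-up error curves purely to show that structure, not the paper's fitted expressions:

```python
import numpy as np

def pdr(d, tx_range=500.0):
    """Toy PDR(d) as a product over the four C-V2X Mode 4 error types."""
    p_hd = 0.02                                     # half-duplex error (distance-free)
    p_sen = 1 / (1 + np.exp(-(d - tx_range) / 50))  # received power below sensing level
    p_pro = 0.5 * (d / tx_range) ** 2               # propagation (insufficient SNR) error
    p_col = 0.05 + 0.0001 * d                       # resource-collision error
    return (1 - p_hd) * (1 - p_sen) * (1 - min(p_pro, 1)) * (1 - min(p_col, 1))

for d in (100, 300, 500):                           # PDR decays with Tx-Rx distance
    print(d, round(pdr(d), 3))
```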

Journal ArticleDOI
TL;DR: This work conceives an energy-efficient computation offloading technique for UAV-MEC systems with an emphasis on physical-layer security, formulating a number of energy-efficiency problems that are transformed into convex problems and solved for both active and passive eavesdroppers.
Abstract: Characterized by their ease of deployment and bird's-eye view, unmanned aerial vehicles (UAVs) may be widely deployed in both surveillance and traffic management. However, their moderate computational capability and short battery life restrict local data processing at the UAV side. Fortunately, this impediment may be mitigated by employing the mobile-edge computing (MEC) paradigm to offload demanding computational tasks from the UAV over a wireless transmission link. However, the offloaded information may become compromised by eavesdroppers. To address this issue, we conceive an energy-efficient computation offloading technique for UAV-MEC systems, with an emphasis on physical-layer security. We formulate a number of energy-efficiency problems for secure UAV-MEC systems, which are then transformed into convex problems. Their optimal solutions are found for both active and passive eavesdroppers. Furthermore, the conditions of zero, partial, and full offloading are analyzed from a physical perspective. The numerical results highlight the specific conditions for activating the three abovementioned offloading options and quantify the performance of our proposed offloading strategy in various scenarios.