
Showing papers in "IEEE Transactions on Vehicular Technology in 2018"


Journal ArticleDOI
TL;DR: The developed method is able to predict the battery's RUL independently of offline training data, and when some offline data is available, the RUL can be predicted earlier than with traditional methods.
Abstract: Remaining useful life (RUL) prediction of lithium-ion batteries can assess the battery reliability to determine the advent of failure and mitigate battery risk. The existing RUL prediction techniques for lithium-ion batteries are inefficient at learning the long-term dependencies among the capacity degradations. This paper investigates deep-learning-enabled battery RUL prediction. The long short-term memory (LSTM) recurrent neural network (RNN) is employed to learn the long-term dependencies among the degraded capacities of lithium-ion batteries. The LSTM RNN is adaptively optimized using the resilient mean square back-propagation method, and a dropout technique is used to address the overfitting problem. The developed LSTM RNN is able to capture the underlying long-term dependencies among the degraded capacities and construct an explicitly capacity-oriented RUL predictor, whose long-term learning performance is contrasted with the support vector machine model, the particle filter model, and the simple RNN model. Monte Carlo simulation is combined with the LSTM RNN to generate a probabilistic RUL prediction. Experimental data from multiple lithium-ion cells at two different temperatures are used for model construction, verification, and comparison. The developed method is able to predict the battery's RUL independently of offline training data, and when some offline data is available, the RUL can be predicted earlier than with traditional methods.
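The heart of such a predictor is the LSTM cell's gated state update, which is what lets the network retain long-term degradation trends. Below is a minimal numpy sketch of a single LSTM cell stepping through a synthetic capacity-fade sequence; the weights are random and the capacity values are invented for illustration, not trained or taken from the paper.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: four gates computed from input x and previous state (h, c)."""
    z = W @ x + U @ h + b                      # stacked pre-activations for the 4 gates
    n = h.size
    i = 1 / (1 + np.exp(-z[:n]))               # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))            # forget gate (carries long-term memory)
    o = 1 / (1 + np.exp(-z[2*n:3*n]))          # output gate
    g = np.tanh(z[3*n:])                       # candidate cell update
    c_new = f * c + i * g                      # cell (long-term) state update
    h_new = o * np.tanh(c_new)                 # hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
hidden, inp = 8, 1
W = rng.standard_normal((4 * hidden, inp)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)

# Feed a synthetic degrading-capacity sequence (Ah) through the cell.
capacity = np.linspace(1.85, 1.40, 50)
h, c = np.zeros(hidden), np.zeros(hidden)
for q in capacity:
    h, c = lstm_step(np.array([q]), h, c, W, U, b)
print(h.shape)  # (8,)
```

A trained model would learn W, U, and b by back-propagation (the paper uses resilient back-propagation with dropout); the forward pass alone shows how the forget gate f carries capacity history across steps.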

613 citations


Journal ArticleDOI
TL;DR: Simulation results corroborate that the proposed deep-learning-based scheme achieves better performance in terms of DOA estimation and channel estimation than conventional methods, and its robustness is verified by extensive simulations in various cases.
Abstract: The recent concept of massive multiple-input multiple-output (MIMO) can significantly improve the capacity of the communication network, and it has been regarded as a promising technology for next-generation wireless communications. However, the fundamental challenge of existing massive MIMO systems is that high computational complexity and complicated spatial structures make it difficult to exploit the characteristics of the channel and the sparsity of these multi-antenna systems. To address this problem, in this paper, we focus on channel estimation and direction-of-arrival (DOA) estimation, and a novel framework that integrates massive MIMO into deep learning is proposed. To realize end-to-end performance, a deep neural network (DNN) is employed to conduct offline and online learning procedures, which is effective for learning the statistics of the wireless channel and the spatial structures in the angle domain. Concretely, the DNN is first trained with simulated data under different channel conditions with the aid of offline learning, and then the corresponding output data can be obtained based on the current input data during the online learning process. In order to realize super-resolution channel estimation and DOA estimation, two algorithms based on deep learning are developed, in which the DOA can be estimated directly in the angle domain without additional complexity. Furthermore, simulation results corroborate that the proposed deep-learning-based scheme achieves better performance in terms of DOA estimation and channel estimation than conventional methods, and its robustness is verified by extensive simulations in various cases.

577 citations


Journal ArticleDOI
TL;DR: This paper proposes an integrated framework that can enable dynamic orchestration of networking, caching, and computing resources to improve the performance of next generation vehicular networks and formulate the resource allocation strategy in this framework as a joint optimization problem.
Abstract: The development of connected vehicles is heavily influenced by information and communications technologies, which have fueled a plethora of innovations in various areas, including networking, caching, and computing. Nevertheless, these important enabling technologies have traditionally been studied separately in existing work on vehicular networks. In this paper, we propose an integrated framework that can enable dynamic orchestration of networking, caching, and computing resources to improve the performance of next generation vehicular networks. We formulate the resource allocation strategy in this framework as a joint optimization problem, where the gains of not only networking but also caching and computing are taken into consideration in the proposed framework. The complexity of the system is very high when we jointly consider these three technologies. Therefore, we propose a novel deep reinforcement learning approach in this paper. Simulation results with different system parameters are presented to show the effectiveness of the proposed scheme.

469 citations


Journal ArticleDOI
TL;DR: The proposed IEEE 802.11ad-based radar meets the minimum accuracy/resolution requirement of range and velocity estimates for LRR applications and exploits the preamble of a single-carrier physical layer frame, which consists of Golay complementary sequences with good correlation properties that make it suitable for radar.
Abstract: Millimeter-wave (mmWave) radar is widely used in vehicles for applications such as adaptive cruise control and collision avoidance. In this paper, we propose an IEEE 802.11ad-based radar for long-range radar (LRR) applications at the 60 GHz unlicensed band. We exploit the preamble of a single-carrier physical layer frame, which consists of Golay complementary sequences with good correlation properties that make it suitable for radar. This system enables a joint waveform for automotive radar and a potential mmWave vehicular communication system based on the mmWave consumer wireless local area network standard, allowing hardware reuse. To formulate an integrated framework of vehicle-to-vehicle communication and LRR, we make typical assumptions for LRR applications, incorporating the full duplex radar operation. This new feature is motivated by the recent development of systems with sufficient isolation and self-interference cancellation. We develop single- and multi-frame radar receiver algorithms for target detection as well as range and velocity estimation for both single- and multi-target scenarios. Our proposed radar processing algorithms leverage channel estimation and time–frequency synchronization techniques used in a conventional IEEE 802.11ad receiver with minimal modifications. Analysis and simulations show that in a single-target scenario, a gigabits-per-second data rate is achieved simultaneously with cm-level range accuracy and cm/s-level velocity accuracy. The target vehicle is detected with a high probability (above 99.99%) at a low false alarm rate of 10^-6 for an equivalent isotropically radiated power of 40 dBm up to a vehicle separation distance of about 200 m. The proposed IEEE 802.11ad-based radar meets the minimum accuracy/resolution requirement of range and velocity estimates for LRR applications.
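The correlation property mentioned above is concrete and easy to check: a Golay complementary pair has aperiodic autocorrelations that sum to a delta function, which is why the 802.11ad preamble sequences make good radar waveforms. A small numpy sketch of the standard recursive length-doubling construction (the ±1 pair below is illustrative, not the exact 802.11ad Ga/Gb sequences):

```python
import numpy as np

def golay_pair(m):
    """Recursively build a Golay complementary pair of length 2**m."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def acorr(x):
    """Aperiodic autocorrelation for non-negative lags."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) for k in range(n)])

a, b = golay_pair(5)            # length-32 pair (802.11ad uses lengths 32/64/128)
s = acorr(a) + acorr(b)         # complementary property
print(s[:4])                    # [64.  0.  0.  0.]
```

For a length-N pair the summed autocorrelation is 2N at zero lag and exactly zero at every other lag, so matched filtering the echo with both sequences yields a sidelobe-free range profile.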

469 citations


Journal ArticleDOI
TL;DR: A novel and effective deep learning (DL)-aided NOMA system, in which several NOMA users with random deployment are served by one base station, and a long short-term memory (LSTM) network based on DL is incorporated into a typical NOMA system, enabling the proposed scheme to detect the channel characteristics automatically.
Abstract: Nonorthogonal multiple access (NOMA) has been considered as an essential multiple access technique for enhancing system capacity and spectral efficiency in future communication scenarios. However, the existing NOMA systems have a fundamental limit: high computational complexity and a sharply changing wireless channel make exploiting the characteristics of the channel and deriving the ideal allocation methods very difficult tasks. To break this fundamental limit, in this paper, we propose a novel and effective deep learning (DL)-aided NOMA system, in which several NOMA users with random deployment are served by one base station. Since DL is advantageous in that it allows training the input signals and detecting sharply changing channel conditions, we exploit it to address wireless NOMA channels in an end-to-end manner. Specifically, it is employed in the proposed NOMA system to learn a completely unknown channel environment. A long short-term memory (LSTM) network based on DL is incorporated into a typical NOMA system, enabling the proposed scheme to detect the channel characteristics automatically. In the proposed strategy, the LSTM is first trained by simulated data under different channel conditions via offline learning, and then the corresponding output data can be obtained based on the current input data used during the online learning process. In general, we build, train and test the proposed cooperative framework to realize automatic encoding, decoding and channel detection in an additive white Gaussian noise channel. Furthermore, we regard one conventional user activity and data detection scheme as an unknown nonlinear mapping operation and use LSTM to approximate it to evaluate the data detection capacity of DL based on NOMA. Simulation results demonstrate that the proposed scheme is robust and efficient compared with conventional approaches. In addition, the accuracy of the LSTM-aided NOMA scheme is studied by introducing the well-known tenfold cross-validation procedure.

418 citations


Journal ArticleDOI
TL;DR: Results show that the maximum steady-state errors of SOC and SOH estimation remain within 1% in the presence of initial deviation, noise, and disturbance, and the resilience of the co-estimation scheme against battery aging is verified through experimentation.
Abstract: Lithium-ion batteries have emerged as the state-of-the-art energy storage for portable electronics, electrified vehicles, and smart grids. An enabling Battery Management System holds the key for efficient and reliable system operation, in which State-of-Charge (SOC) estimation and State-of-Health (SOH) monitoring are of particular importance. In this paper, an SOC and SOH co-estimation scheme is proposed based on the fractional-order calculus. First, a fractional-order equivalent circuit model is established and parameterized using a Hybrid Genetic Algorithm/Particle Swarm Optimization method. This model is capable of predicting the voltage response with a root-mean-squared error less than 10 mV under various driving-cycle-based tests. Comparative studies show that it improves the modeling accuracy appreciably over its second- and third-order counterparts. Then, a dual fractional-order extended Kalman filter is put forward to realize simultaneous SOC and SOH estimation. Extensive experimental results show that the maximum steady-state errors of SOC and SOH estimation remain within 1% in the presence of initial deviation, noise, and disturbance. The resilience of the co-estimation scheme against battery aging is also verified through experimentation.
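As a much-simplified illustration of the filtering idea (not the paper's dual fractional-order design), the sketch below runs a plain extended Kalman filter on a coulomb-counting SOC state with an assumed linear OCV curve and ohmic resistance; all parameter values are invented for the example. Starting from a deliberately wrong initial SOC, the voltage-based correction pulls the estimate back toward the truth.

```python
import numpy as np

Q_AH = 2.0          # assumed cell capacity [Ah]
DT = 1.0            # sample time [s]
R0 = 0.05           # assumed ohmic resistance [ohm]

def ocv(soc):       # assumed linear open-circuit-voltage curve [V]
    return 3.0 + 1.2 * soc

rng = np.random.default_rng(1)
true_soc, est_soc, P = 0.9, 0.6, 1.0      # start the filter with a wrong SOC
Qn, Rn = 1e-7, 1e-3                       # process / measurement noise covariances
I = 1.0                                   # constant 1 A discharge

for _ in range(3600):
    # truth evolution and noisy terminal-voltage measurement
    true_soc -= I * DT / (Q_AH * 3600)
    v_meas = ocv(true_soc) - I * R0 + rng.normal(0, 0.01)
    # EKF predict (coulomb counting) and update (voltage correction)
    est_soc -= I * DT / (Q_AH * 3600)
    P += Qn
    H = 1.2                               # d(ocv)/d(soc) for the assumed curve
    K = P * H / (H * P * H + Rn)
    est_soc += K * (v_meas - (ocv(est_soc) - I * R0))
    P *= (1 - K * H)

print(abs(est_soc - true_soc))            # typically well below the 1% error level
```

The paper's co-estimation additionally tracks the slowly varying capacity and resistance (the SOH states) with a second filter and replaces the integer-order circuit dynamics with fractional-order ones.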

356 citations


Journal ArticleDOI
TL;DR: In this article, a UAV-based mobile cloud computing system is studied in which a moving UAV is endowed with computing capabilities to offer computation offloading opportunities to MUs with limited local processing capabilities.
Abstract: Unmanned aerial vehicles (UAVs) have been recently considered as means to provide enhanced coverage or relaying services to mobile users (MUs) in wireless systems with limited or no infrastructure. In this paper, a UAV-based mobile cloud computing system is studied in which a moving UAV is endowed with computing capabilities to offer computation offloading opportunities to MUs with limited local processing capabilities. The system aims at minimizing the total mobile energy consumption while satisfying quality of service requirements of the offloaded mobile application. Offloading is enabled by uplink and downlink communications between the mobile devices and the UAV, which take place by means of frequency division duplex via orthogonal or nonorthogonal multiple access schemes. The problem of jointly optimizing the bit allocation for uplink and downlink communications as well as for computing at the UAV, along with the cloudlet's trajectory under latency and UAV's energy budget constraints is formulated and addressed by leveraging successive convex approximation strategies. Numerical results demonstrate the significant energy savings that can be accrued by means of the proposed joint optimization of bit allocation and cloudlet's trajectory as compared to local mobile execution as well as to partial optimization approaches that design only the bit allocation or the cloudlet's trajectory.

352 citations


Journal ArticleDOI
Ping Shen1, Minggao Ouyang1, Languang Lu1, Jianqiu Li1, Xuning Feng1 
TL;DR: This paper proposes a co-estimation scheme of state of charge, state of health (SOH), and state of function (SOF) for lithium-ion batteries in electric vehicles that is validated in a real battery management system with good real-time performance and convincing estimation accuracy.
Abstract: This paper proposes a co-estimation scheme of state of charge (SOC), state of health (SOH), and state of function (SOF) for lithium-ion batteries in electric vehicles. The co-estimation denotes that the SOC, SOH, and SOF are estimated simultaneously in real-time application. The model-based SOC estimation is fulfilled by the extended Kalman filter. The battery parameters related to the battery SOH and SOF are identified online using the recursive least square algorithm with a forgetting factor. The capacity and the maximum available output power are then estimated based on the identified parameters. The online update of the capacity and correlated parameters helps improve the accuracy of the state estimation with only a limited increase in the computation load, by making good use of the correlations among the states. The co-estimation scheme is validated in a real battery management system with good real-time performance and convincing estimation accuracy.
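The online identification step relies on recursive least squares (RLS) with a forgetting factor, which discounts old data so that slowly drifting parameters (as a battery ages) can be tracked. A minimal numpy sketch on synthetic data (the regressors, noise level, and "true" parameters are illustrative, not the paper's battery model):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([0.8, -0.3])     # synthetic "true" model parameters
theta = np.zeros(2)                    # RLS estimate
P = np.eye(2) * 1000.0                 # covariance; large = low initial confidence
lam = 0.98                             # forgetting factor (< 1 discounts old data)

for _ in range(500):
    phi = rng.standard_normal(2)                   # regressor vector
    y = phi @ theta_true + rng.normal(0, 0.01)     # noisy measurement
    K = P @ phi / (lam + phi @ P @ phi)            # gain
    theta = theta + K * (y - phi @ theta)          # parameter update
    P = (P - np.outer(K, phi @ P)) / lam           # covariance update

print(np.round(theta, 2))    # close to [ 0.8 -0.3]
```

In the paper the regressor would contain measured currents and voltages of the equivalent circuit model, and the identified parameters feed the capacity (SOH) and maximum-available-power (SOF) estimates.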

335 citations


Journal ArticleDOI
TL;DR: This correspondence studies a UAV-enabled data collection system, and considers two practical UAV trajectories, namely circular flight and straight flight, to find the optimal GT transmit power and UAV trajectory that achieve different Pareto optimal tradeoffs between them.
Abstract: Unmanned aerial vehicles (UAVs) have a great potential for improving the performance of wireless communication systems due to their high mobility. In this correspondence, we study a UAV-enabled data collection system, where a UAV is dispatched to collect a given amount of data from a ground terminal (GT) at a fixed location. Intuitively, if the UAV flies closer to the GT, the uplink transmission energy of the GT required to send the target data can be further reduced. However, such UAV movement may consume more propulsion energy of the UAV, which needs to be properly controlled to save its limited on-board energy. As a result, the transmission energy reduction of the GT is generally at the cost of higher propulsion energy consumption of the UAV, which leads to a new fundamental energy tradeoff in ground-to-UAV wireless communication. To characterize this tradeoff, we consider two practical UAV trajectories, namely circular flight and straight flight. In each case, we first derive the energy consumption expressions of the UAV and GT and then find the optimal GT transmit power and UAV trajectory that achieve different Pareto optimal tradeoffs between them. Numerical results are provided to corroborate our study.

323 citations


Journal ArticleDOI
TL;DR: This paper develops a deep reinforcement learning approach with a multi-timescale framework to tackle the grand challenges of vehicular networks and proposes mobility-aware reward estimation for the large-timescale model to mitigate the complexity due to the large action space.
Abstract: This paper studies the joint communication, caching, and computing design problem for achieving the operational excellence and the cost efficiency of vehicular networks. Moreover, the resource allocation policy is designed by considering the vehicle's mobility and the hard service deadline constraint. These critical challenges have often been either neglected or addressed inadequately in the existing work on vehicular networks because of their high complexity. We develop a deep reinforcement learning approach with a multi-timescale framework to tackle these grand challenges in this paper. Furthermore, we propose mobility-aware reward estimation for the large-timescale model to mitigate the complexity due to the large action space. Numerical results are presented to illustrate the theoretical findings developed in the paper and to quantify the performance gains attained.

318 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of cloud-MEC collaborative computation offloading is studied, and two schemes are proposed as the solutions, i.e., an approximation collaborative computation offloading scheme and a game-theoretic collaborative computation offloading scheme.
Abstract: By offloading the computation tasks of the mobile devices (MDs) to the edge server, mobile-edge computing (MEC) provides a new paradigm to meet the increasing computation demands from mobile applications. However, existing mobile-edge computation offloading (MECO) research only took the resource allocation between the MDs and the MEC servers into consideration, and ignored the huge computation resources in the centralized cloud computing center. Moreover, current MEC hosted networks mostly adopt the networking technology integrating cellular and backbone networks, which have the shortcomings of single access mode, high congestion, high latency, and high energy consumption. Toward this end, we introduce hybrid fiber–wireless (FiWi) networks to provide supports for the coexistence of centralized cloud and multiaccess edge computing, and present an architecture by adopting the FiWi access networks. The problem of cloud-MEC collaborative computation offloading is studied, and two schemes are proposed as our solutions, i.e., an approximation collaborative computation offloading scheme, and a game-theoretic collaborative computation offloading scheme. Numerical results corroborate that our solutions not only achieve better offloading performance than the available MECO schemes but also scale well with the increasing number of computation tasks.

Journal ArticleDOI
TL;DR: In this correspondence, the sum rate of UAV-served edge users is maximized subject to the rate requirements for all the users, by optimizing the UAV trajectory in each flying cycle to offload traffic for BSs.
Abstract: In future mobile networks, it is difficult for static base stations (BSs) to support the rapidly increasing data services, especially for cell-edge users. Unmanned aerial vehicle (UAV) is a promising method that can assist BSs to offload the data traffic, due to its high mobility and flexibility. In this correspondence, we focus on the UAV trajectory at the edges of three adjacent cells to offload traffic for BSs. In the proposed scheme, the sum rate of UAV-served edge users is maximized subject to the rate requirements for all the users, by optimizing the UAV trajectory in each flying cycle. The optimization is a mixed-integer nonconvex problem, which is difficult to solve. Thus, it is transformed into two convex problems, and an iterative algorithm is proposed to solve it by optimizing the UAV trajectory and edge user scheduling alternately. Simulation results are presented to show the effectiveness of the proposed scheme.

Journal ArticleDOI
TL;DR: This paper proposes a novel two-tier computation offloading framework in heterogeneous networks, and formulates joint computation off loading and user association problem for multi-task mobile edge computing system to minimize overall energy consumption.
Abstract: Computation intensive and delay-sensitive applications impose severe requirements on mobile devices of providing required computation capacity and ensuring latency. Mobile edge computing (MEC) is a promising technology that can alleviate computation limitation of mobile users and prolong their lifetime through computation offloading. However, computation offloading in an MEC environment faces severe issues due to dense deployment of MEC servers. Moreover, a mobile user has multiple mutually dependent tasks, which make offloading policy design even more challenging. To address the above-mentioned problems, in this paper we first propose a novel two-tier computation offloading framework in heterogeneous networks. Then, we formulate the joint computation offloading and user association problem for a multi-task mobile edge computing system to minimize the overall energy consumption. To solve the optimization problem, we develop an efficient computation offloading algorithm by jointly optimizing user association and computation offloading, where computation resource allocation and transmission power allocation are also considered. Numerical results illustrate fast convergence of the proposed algorithm, and demonstrate the superior performance of our proposed algorithm compared to state-of-the-art solutions.

Journal ArticleDOI
TL;DR: An end-to-end convolutional neural network (CNN) based AMC (CNN-AMC) is proposed, which automatically extracts features from the long symbol-rate observation sequence along with the estimated signal-to-noise ratio (SNR), and can outperform the feature-based method and obtain a closer approximation to the optimal ML-AMC.
Abstract: Automatic modulation classification (AMC), which plays critical roles in both civilian and military applications, is investigated in this paper through a deep learning approach. Conventional AMCs can be categorized into maximum likelihood (ML) based (ML-AMC) and feature-based AMC. However, the practical deployment of ML-AMCs is difficult due to its high computational complexity, and the manually extracted features require expert knowledge. Therefore, an end-to-end convolutional neural network (CNN) based AMC (CNN-AMC) is proposed, which automatically extracts features from the long symbol-rate observation sequence along with the estimated signal-to-noise ratio (SNR). With CNN-AMC, a unit classifier is adopted to accommodate the varying input dimensions. The direct training of CNN-AMC is challenging with the complicated model and complex tasks, so a novel two-step training is proposed, and transfer learning is also introduced to improve the efficiency of retraining. Different digital modulation schemes have been considered in distinct scenarios, and the simulation results show that the CNN-AMC can outperform the feature-based method and obtain a closer approximation to the optimal ML-AMC. Besides, CNN-AMCs exhibit a certain robustness to estimation errors in the carrier phase offset and SNR. With parallel computation, the deep-learning-based approach is about 40 to 1700 times faster than the ML-AMC in terms of inference speed.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed relay strategy can efficiently reduce the bit error rate of the OBU message and thus increase the utility of the VANET compared with a Q-learning-based scheme.
Abstract: Frequency hopping-based antijamming techniques are not always applicable in vehicular ad hoc networks (VANETs) due to the high mobility of onboard units (OBUs) and the large-scale network topology. In this paper, we use unmanned aerial vehicles (UAVs) to relay the message of an OBU and improve the communication performance of VANETs against smart jammers that observe the ongoing OBU and UAV communication status and even induce the UAV to use a specific relay strategy and then attack it accordingly. More specifically, the UAV relays the OBU message to another roadside unit (RSU) with a better radio transmission condition if the serving RSU is heavily jammed or subject to interference. The interactions between a UAV and a smart jammer are formulated as an antijamming UAV relay game, in which the UAV decides whether or not to relay the OBU message to another RSU, and the jammer observes the UAV and the VANET strategy and chooses the jamming power accordingly. The Nash equilibria of the UAV relay game are derived to reveal how the optimal UAV relay strategy depends on the transmit cost and the UAV channel model. A hotbooting policy hill climbing-based UAV relay strategy is proposed to help the VANET resist jamming in the dynamic game without being aware of the VANET model and the jamming model. Simulation results show that the proposed relay strategy can efficiently reduce the bit error rate of the OBU message and thus increase the utility of the VANET compared with a Q-learning-based scheme.

Journal ArticleDOI
TL;DR: An iterative suboptimal algorithm is proposed to solve a UAV-ground communication system with multiple potential eavesdroppers on the ground by jointly designing the robust trajectory and transmit power of the UAV over a given flight duration by exploiting the mobility of UAV via its trajectory design.
Abstract: Unmanned aerial vehicles (UAVs) are anticipated to be widely deployed in future wireless communications, due to their advantages of high mobility and easy deployment. However, the broadcast nature of air-to-ground line-of-sight wireless channels brings a new challenge to the information security of UAV-ground communication. This paper tackles such a challenge in the physical layer by exploiting the mobility of UAV via its trajectory design. We consider a UAV-ground communication system with multiple potential eavesdroppers on the ground, where the information on the locations of the eavesdroppers is imperfect. We formulate an optimization problem, which maximizes the average worst case secrecy rate of the system by jointly designing the robust trajectory and transmit power of the UAV over a given flight duration. The nonconvexity of the optimization problem and the imperfect location information of the eavesdroppers make the problem difficult to solve optimally. We propose an iterative suboptimal algorithm to solve this problem efficiently by applying the block coordinate descent method, the S-procedure, and the successive convex optimization method. Simulation results show that the proposed algorithm can improve the average worst case secrecy rate significantly, as compared to two other benchmark algorithms without robust design.

Journal ArticleDOI
TL;DR: Results show that the answers to channel performance metrics, such as spectrum efficiency, coverage, hardware/signal processing requirements, etc., are extremely sensitive to the choice of channel models.
Abstract: Fifth-generation (5G) wireless networks are expected to operate at both microwave and millimeter-wave (mmWave) frequency bands, including frequencies in the range of 24 to 86 GHz. Radio propagation models are used to help engineers design, deploy, and compare candidate wireless technologies, and have a profound impact on the decisions of almost every aspect of wireless communications. This paper provides a comprehensive overview of the channel models that will likely be used in the design of 5G radio systems. We start with a discussion on the framework of channel models, which consists of classical models of path loss versus distance, large-scale, and small-scale fading models, and multiple-input multiple-output channel models. Then, key differences between mmWave and microwave channel models are presented, and two popular mmWave channel models are discussed: the 3rd Generation Partnership Project model, which is adopted by the International Telecommunication Union, and the NYUSIM model, which was developed from several years of field measurements in New York City. Examples on how to apply the channel models are then given for several diverse applications demonstrating the wide impact of the models and their parameter values, where the performance comparisons of the channel models are done with promising hybrid beamforming approaches, including leveraging coordinated multipoint transmission. These results show that the answers to channel performance metrics, such as spectrum efficiency, coverage, hardware/signal processing requirements, etc., are extremely sensitive to the choice of channel models.
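As a concrete instance of the "path loss versus distance" framework discussed above, the close-in (CI) free-space reference model used in NYUSIM-style mmWave work needs only a carrier frequency and a path-loss exponent. The sketch below is a generic illustration; the 28 GHz / exponent-2.0 numbers are assumed example values, not results from the paper.

```python
import math

def ci_path_loss_db(f_ghz, d_m, n, d0_m=1.0):
    """Close-in free-space reference path loss model:
    PL(d) = FSPL(d0) + 10*n*log10(d/d0), with the free-space loss at d0 = 1 m."""
    fspl_d0 = 20 * math.log10(f_ghz) + 32.4   # free-space loss at 1 m, f in GHz
    return fspl_d0 + 10 * n * math.log10(d_m / d0_m)

# Example: 28 GHz carrier, path-loss exponent 2.0 (LOS-like), 100 m link.
pl = ci_path_loss_db(28.0, 100.0, 2.0)
print(round(pl, 1))   # ≈ 101.3 dB
```

Changing the exponent n from about 2 (LOS) to 3 or more (NLOS) swings the predicted loss by tens of dB at 100 m, which is exactly why the surveyed performance metrics are so sensitive to the choice of channel model.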

Journal ArticleDOI
TL;DR: By exploiting non-orthogonal multiple access (NOMA) for improving the efficiency of multi-access radio transmission, this paper studies the NOMA-enabled multi-access MEC and proposes efficient algorithms to find the optimal offloading solution.
Abstract: Multi-access mobile edge computing (MEC), which enables mobile users (MUs) to offload their computation-workloads to the computation-servers located at the edge of cellular networks via multi-access radio access, has been considered as a promising technique to address the explosively growing computation-intensive applications in mobile Internet services. In this paper, by exploiting non-orthogonal multiple access (NOMA) for improving the efficiency of multi-access radio transmission, we study the NOMA-enabled multi-access MEC. We aim at minimizing the overall delay of the MUs for finishing their computation requirements, by jointly optimizing the MUs’ offloaded workloads and the NOMA transmission-time. Despite the non-convexity of the formulated joint optimization problem, we propose efficient algorithms to find the optimal offloading solution. For the single-MU case, we exploit the layered structure of the problem and propose an efficient layered algorithm to find the MU's optimal offloading solution that minimizes its overall delay. For the multi-MU case, we propose a distributed algorithm (in which the MUs individually optimize their respective offloaded workloads) to determine the optimal offloading solution for minimizing the sum of all MUs’ overall delay. Extensive numerical results have been provided to validate the effectiveness of our proposed algorithms and the performance advantage of our NOMA-enabled multi-access MEC in comparison with conventional orthogonal multiple access enabled multi-access MEC.
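The gain NOMA offers over orthogonal access comes from superposing users on the same resource and decoding them successively. A tiny sketch of the uplink rate computation with successive interference cancellation (SIC); the powers, gains, and noise level are made-up illustrative values:

```python
import math

# Uplink NOMA with SIC: the base station decodes the stronger user first,
# treating the weaker user's signal as interference, then decodes the
# weaker user cleanly after subtracting the first signal.
p1, p2 = 1.0, 1.0        # transmit powers [W]
h1, h2 = 0.9, 0.2        # channel gains (user 1 is the stronger user)
n0 = 0.1                 # noise power [W]

r1 = math.log2(1 + p1 * h1 / (p2 * h2 + n0))   # decoded first, sees interference
r2 = math.log2(1 + p2 * h2 / n0)               # decoded after SIC removes user 1
sum_rate = r1 + r2
# With ideal SIC the sum rate equals the multiple-access capacity bound:
bound = math.log2(1 + (p1 * h1 + p2 * h2) / n0)
print(round(sum_rate, 3), round(bound, 3))     # 3.585 3.585
```

That the two rates sum to the multiple-access bound is what lets NOMA pack more simultaneous workload-offloading transmissions into the same time than orthogonal schemes.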

Journal ArticleDOI
TL;DR: This correspondence investigates the physical layer security for cooperative nonorthogonal multiple access (NOMA) systems, where both amplify-and-forward (AF) and decode-and-forward (DF) protocols are considered.
Abstract: In this correspondence, we investigate the physical layer security for cooperative nonorthogonal multiple access (NOMA) systems, where both amplify-and-forward (AF) and decode-and-forward (DF) protocols are considered. More specifically, some analytical expressions are derived for secrecy outage probability (SOP) and strictly positive secrecy capacity. Results show that AF and DF almost achieve the same secrecy performance. Moreover, asymptotic results demonstrate that the SOP tends to a constant at high signal-to-noise ratio. Finally, our results show that the secrecy performance of considered NOMA systems is independent of the channel conditions between the relay and the poor user.
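The secrecy outage probability (SOP) analyzed above has a simple operational meaning: it is the probability that the instantaneous secrecy capacity drops below a target secrecy rate. A Monte Carlo sketch under assumed Rayleigh fading and illustrative average SNRs (these numbers are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
snr_main, snr_eve = 10.0, 1.0        # assumed average SNRs (linear scale)
rs = 1.0                             # target secrecy rate [bit/s/Hz]

g_m = rng.exponential(snr_main, N)   # main-channel instantaneous SNR (Rayleigh fading)
g_e = rng.exponential(snr_eve, N)    # eavesdropper instantaneous SNR
cs = np.maximum(np.log2(1 + g_m) - np.log2(1 + g_e), 0.0)  # secrecy capacity
sop = np.mean(cs < rs)               # fraction of fading states in secrecy outage
print(round(sop, 3))
```

For Rayleigh/Rayleigh links this agrees with the standard closed form 1 - (g_m_avg/(g_m_avg + 2^Rs * g_e_avg)) * exp(-(2^Rs - 1)/g_m_avg), about 0.25 for these assumed values.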

Journal ArticleDOI
TL;DR: This paper formulates an energy optimization problem of offloading, which aims at minimizing the overall energy consumption at all system entities and takes into account the constraints from both computation capabilities and the service delay requirement, and develops an artificial fish swarm algorithm based scheme.
Abstract: Mobile edge computing has been proposed in recent years to offload computation tasks from user equipments (UEs) to the network edge to break hardware limitations and resource constraints at UEs. Although there have been some existing works on computation offloading in 5G, most of them fail to take into account the unique properties of 5G in their scheme design. In this paper, we consider a small-cell network architecture for task offloading. In order to achieve energy efficiency, we model the energy consumption of offloading from both the task computation and communication aspects. In addition, transmission scheduling is carried out over both the fronthaul and backhaul links. We first formulate an energy optimization problem of offloading, which aims at minimizing the overall energy consumption at all system entities and takes into account the constraints from both computation capabilities and the service delay requirement. We then develop an artificial fish swarm algorithm based scheme to solve the energy optimization problem. Moreover, the global convergence property of our scheme is formally proven. Finally, various simulation results demonstrate the efficiency of our scheme.
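The artificial fish swarm algorithm named above is a population-based metaheuristic. Below is a generic, heavily simplified sketch of its "prey" behaviour on a toy objective, not the paper's energy-optimization scheme; all names and parameter values are illustrative assumptions.

```python
import random

def afsa_minimize(f, dim=2, n_fish=20, visual=1.0, step=0.3,
                  try_number=5, iters=100, seed=0):
    """Minimal artificial-fish-swarm sketch: each fish performs a 'prey'
    move (a random probe within its visual range, accepted only if it
    improves f), and the best position found so far is kept on a
    bulletin board, as in the standard AFSA formulation."""
    rng = random.Random(seed)
    fish = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_fish)]
    best = min(fish, key=f)[:]
    for _ in range(iters):
        for x in fish:
            for _ in range(try_number):          # prey behaviour
                probe = [xi + rng.uniform(-visual, visual) for xi in x]
                if f(probe) < f(x):
                    # move a fraction `step` toward the better probe
                    x[:] = [xi + step * (pi - xi) for xi, pi in zip(x, probe)]
                    break
            if f(x) < f(best):
                best = x[:]                      # update bulletin board
    return best, f(best)
```

On a convex toy objective such as a sum of squares, the swarm quickly concentrates near the minimum; the paper's scheme additionally proves global convergence for its specific energy objective.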

Journal ArticleDOI
TL;DR: In this article, the optimal time allocation for maximizing the system spectral efficiency of a TDMA-based WPCN (T-WPCN) and a non-orthogonal multiple access (NOMA)-based WPCN (N-WPCN) was derived for the uplink of WPCN-based IoT networks with a massive number of devices.
Abstract: Wireless powered communication networks (WPCNs), where multiple energy-limited devices first harvest energy in the downlink and then transmit information in the uplink, have been envisioned as a promising solution for the future Internet-of-Things (IoT). Meanwhile, nonorthogonal multiple access (NOMA) has been proposed to improve the system spectral efficiency (SE) of the fifth-generation (5G) networks by allowing concurrent transmissions of multiple users in the same spectrum. As such, NOMA has been recently considered for the uplink of WPCN-based IoT networks with a massive number of devices. However, simultaneous transmissions in NOMA may also incur more transmit energy consumption as well as circuit energy consumption in practice, which is critical for energy constrained IoT devices. As a result, compared to orthogonal multiple access schemes such as time-division multiple access (TDMA), whether the SE can be improved and/or the total energy consumption can be reduced with NOMA in such a scenario still remains unknown. To answer this question, we first derive the optimal time allocations for maximizing the SE of a TDMA-based WPCN (T-WPCN) and a NOMA-based WPCN (N-WPCN), respectively. Subsequently, we analyze the total energy consumption as well as the maximum SE achieved by these two networks. Surprisingly, it is found that N-WPCN not only consumes more energy, but also is less spectral efficient than T-WPCN. Simulation results verify our theoretical findings and unveil the fundamental performance bottleneck, i.e., the “worst user bottleneck problem”, in multiuser NOMA systems.
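The harvest-then-transmit time-allocation trade-off in a T-WPCN can be sketched numerically: devices harvest for a fraction tau of the block and then transmit in equal TDMA slots. The toy model below uses a plain grid search, not the paper's closed-form optimum, and assumes reciprocal channels and illustrative values for the energy-conversion efficiency, downlink power, and noise.

```python
import math

def twpcn_sum_rate(tau, gains, eta=0.6, p_dl=1.0, noise=1e-3):
    """Sum spectral efficiency when devices harvest for a fraction tau of
    the block and split the remaining 1 - tau equally for TDMA uplink
    transmission (reciprocal channel gains assumed)."""
    t_ul = (1 - tau) / len(gains)
    rate = 0.0
    for h in gains:
        energy = eta * p_dl * tau * h          # harvested energy
        snr = energy * h / (t_ul * noise)      # uplink SNR over slot t_ul
        rate += t_ul * math.log2(1 + snr)
    return rate

def best_tau(gains, grid=1000):
    """Grid search over the harvesting fraction tau in (0, 1)."""
    taus = [(k + 1) / (grid + 1) for k in range(grid)]
    return max(taus, key=lambda t: twpcn_sum_rate(t, gains))
```

The optimal tau is interior: too little harvesting starves the uplink of energy, while too much leaves no time to transmit.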

Journal ArticleDOI
TL;DR: A novel mobile edge computing (MEC) enabled wireless blockchain framework where the computation-intensive mining tasks can be offloaded to nearby edge computing nodes and the cryptographic hashes of blocks can be cached in the MEC server.
Abstract: Blockchain technology has been applied in a variety of fields due to its capability of establishing trust in a decentralized fashion. However, the application of blockchain in wireless mobile networks is hindered by a major challenge brought by the proof-of-work puzzle during the mining process, which sets a high demand for the computational capability and storage availability in mobile devices. To address this problem, we propose a novel mobile edge computing (MEC) enabled wireless blockchain framework where the computation-intensive mining tasks can be offloaded to nearby edge computing nodes and the cryptographic hashes of blocks can be cached in the MEC server. Particularly, two offloading modes are considered, i.e., offloaded to the nearby access point or a group of nearby users. First, we conduct the performance analysis of each mode with stochastic geometry methods. Then, the joint offloading decision and caching strategy is formulated as an optimization problem. Furthermore, an alternating direction method of multipliers based algorithm is utilized to solve the problem in a distributed manner. Finally, simulation results demonstrate the effectiveness of our proposed scheme.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed reinforcement learning-based power control scheme for the downlink NOMA transmission can significantly increase the sum data rates of users, and thus, the utilities compared with the standard Q-learning-based strategy.
Abstract: Nonorthogonal multiple access (NOMA) systems are vulnerable to jamming attacks, especially smart jammers who apply programmable and smart radio devices such as software-defined radios to flexibly control their jamming strategy according to the ongoing NOMA transmission and radio environment. In this paper, the power allocation of a base station in a NOMA system equipped with multiple antennas contending with a smart jammer is formulated as a zero-sum game, in which the base station as the leader first chooses the transmit power on multiple antennas, while a jammer as the follower selects the jamming power to interrupt the transmission of the users. A Stackelberg equilibrium of the antijamming NOMA transmission game is derived and conditions assuring its existence are provided to disclose the impact of multiple antennas and radio channel states. A reinforcement learning-based power control scheme is proposed for the downlink NOMA transmission without being aware of the jamming and radio channel parameters. The Dyna architecture that formulates a learned world model from the real antijamming transmission experience and the hotbooting technique that exploits experiences in similar scenarios to initialize the quality values are used to accelerate the learning speed of the Q-learning-based power allocation, and thus, improve the communication efficiency of the NOMA transmission in the presence of smart jammers. Simulation results show that the proposed scheme can significantly increase the sum data rates of users, and thus, the utilities compared with the standard Q-learning-based strategy.
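The Q-learning component can be illustrated with a plain tabular sketch, without the Dyna world model or hotbooting used in the paper, assuming a randomly acting jammer and a rate-minus-power-cost reward; all power levels and constants are illustrative.

```python
import math
import random

def q_learning_power(levels=(0.2, 0.5, 1.0), jam_levels=(0.0, 0.5, 1.0),
                     episodes=3000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning sketch for anti-jamming power control: the state
    is the jammer's last observed power, the action is the BS transmit
    power, and the reward is a SINR-based rate minus a power cost."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in jam_levels for a in levels}
    state = 0.0
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            action = rng.choice(levels)
        else:
            action = max(levels, key=lambda a: q[(state, a)])
        jam = rng.choice(jam_levels)           # stand-in for the smart jammer
        reward = math.log2(1 + action / (jam + 0.1)) - 0.2 * action
        nxt_best = max(q[(jam, a)] for a in levels)
        q[(state, action)] += alpha * (reward + gamma * nxt_best
                                       - q[(state, action)])
        state = jam
    return q
```

The paper accelerates exactly this kind of update loop with Dyna-style model-based planning and hotbooted initial Q-values.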

Journal ArticleDOI
TL;DR: A convolutional neural network model is used to detect, recognize, and abstract the information in the input road scene, which is captured by the on-board sensors, and a decision-making system calculates the specific commands to control the vehicles based on the abstractions.
Abstract: The autonomous vehicle, as an emerging and rapidly growing field, has received extensive attention for its futuristic driving experiences. Although the fast developing depth sensors and machine learning methods have given a huge boost to self-driving research, existing autonomous driving vehicles do meet with several avoidable accidents during their road testings. The major cause is the misunderstanding between self-driving systems and human drivers. To solve this problem, we propose a humanlike driving system in this paper to give autonomous vehicles the ability to make decisions like a human. In our method, a convolutional neural network model is used to detect, recognize, and abstract the information in the input road scene, which is captured by the on-board sensors. And then a decision-making system calculates the specific commands to control the vehicles based on the abstractions. The biggest advantage of our work is that we implement a decision-making system which can well adapt to real-life road conditions, in which a massive number of human drivers exist. In addition, we build our perception system with only the depth information, rather than the unstable RGB data. The experimental results give a good demonstration of the efficiency and robustness of the proposed method.

Journal ArticleDOI
TL;DR: This paper investigates unmanned aerial vehicle (UAV) enabled secure communication systems where a mobile UAV wishes to send confidential messages to multiple ground users by jointly optimizing the trajectory and the transmit power of the UAVs as well as the user scheduling.
Abstract: This paper investigates unmanned aerial vehicle (UAV) enabled secure communication systems where a mobile UAV wishes to send confidential messages to multiple ground users. To improve the security performance, a cooperative UAV is additionally considered that transmits the jamming signal. In this system, we maximize the minimum secrecy rate among the ground users by jointly optimizing the trajectory and the transmit power of the UAVs as well as the user scheduling. To efficiently solve this nonconvex problem, we adopt block successive upper bound minimization techniques, which address a sequence of approximated convex problems for each block of variables. Numerical results verify that the proposed algorithm outperforms baseline methods.

Journal ArticleDOI
TL;DR: A detailed investigation of channel correlation functions finds that a small number of clusters leads to high channel correlation, and that the degree of intracluster nonisotropic scattering has a major impact on correlation only if the cluster number is small.
Abstract: A geometric multiple-input multiple-output (MIMO) channel model is proposed for millimeter-wave (mmWave) mobile-to-mobile (M2M) applications based on the two-ring reference model, where cluster-based nonisotropic scattering at both ends of the radio link is considered. The proposed model employs a few clusters of scatterers located on two rings centered on the transmitter and receiver, and intracluster azimuth spread of scatterers is further characterized according to mmWave channel characteristics. From the model, the time–frequency correlation function, power delay profile (PDP), and the Doppler power spectrum are derived. By adjusting the cluster number, cluster center position, and the degree of intracluster nonisotropic scattering, the model is adaptable to a variety of mmWave M2M scenarios. Model validation is further conducted by comparing the simulated PDPs with mmWave outdoor measurements. Based on a detailed investigation of channel correlation functions, it is found that a small number of clusters leads to high channel correlation, and the degree of intracluster nonisotropic scattering has major impact on correlation only if cluster number is small. In addition, the Doppler power spectrum is similar to the U-shaped spectrum, and several factors (e.g., small cluster number, high intracluster azimuth spread, and large antenna spacing) introduce significant fluctuations in the Doppler power spectrum. Finally, the model is implemented with directional antennas for safety related M2M scenarios, which leads to high channel correlation compared with using omnidirectional antennas. These observations and conclusions can be considered as a guidance for the mmWave M2M MIMO system design.
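The temporal correlation of such a cluster-based ring model can be approximated by Monte Carlo averaging over scatterer angles. The sketch below is illustrative, not the paper's closed-form derivation, and reproduces the qualitative finding that fewer clusters yield higher channel correlation; all parameter values are assumptions.

```python
import cmath
import math
import random

def temporal_correlation(tau, f_d, cluster_centers, angle_spread,
                         n_scatterers=200, seed=0):
    """Monte Carlo temporal autocorrelation |rho(tau)| for a cluster-based
    ring model: scatterer angles of arrival are drawn uniformly within
    +/- angle_spread around each cluster centre, and
    rho(tau) = E[exp(j * 2*pi * f_d * tau * cos(a))]."""
    rng = random.Random(seed)
    samples = []
    for centre in cluster_centers:
        for _ in range(n_scatterers):
            a = centre + rng.uniform(-angle_spread, angle_spread)
            samples.append(cmath.exp(1j * 2 * math.pi * f_d * tau * math.cos(a)))
    return abs(sum(samples) / len(samples))
```

With a single narrow cluster the phases stay nearly aligned and the correlation remains close to one, whereas many clusters spread around the ring cause partial phase cancellation and a faster decay.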

Journal ArticleDOI
TL;DR: This paper proposes a data sanitization strategy that does not greatly reduce the benefits brought by social network data, while sensitive latent information can still be protected, and is the first work that preserves both data benefits and social structure simultaneously and combats against powerful adversaries.
Abstract: Social network data can help with obtaining valuable insight into social behaviors and revealing the underlying benefits. New big data technologies are emerging to make it easier to discover meaningful social information from market analysis to counterterrorism. Unfortunately, both diverse social datasets and big data technologies raise stringent privacy concerns. Adversaries can launch inference attacks to predict sensitive latent information that social users are unwilling to publish. Therefore, there is a tradeoff between data benefits and privacy concerns. In this paper, we investigate how to optimize the tradeoff between latent-data privacy and customized data utility. We propose a data sanitization strategy that does not greatly reduce the benefits brought by social network data, while sensitive latent information can still be protected. Even considering powerful adversaries with optimal inference attacks, the proposed data sanitization strategy can still preserve both data benefits and social structure, while guaranteeing optimal latent-data privacy. To the best of our knowledge, this is the first work that preserves both data benefits and social structure simultaneously and combats against powerful adversaries.

Journal ArticleDOI
Qi Li1, Tianhong Wang1, Chaohua Dai1, Weirong Chen1, Lei Ma1 
TL;DR: The results obtained from RT-LAB platform testify that the proposed power management strategy is able to manage and coordinate multiple power sources based on their natural characteristics, maintain dc bus voltage stabilization, improve the efficiency of overall tramway, and alleviate the stress on the hybrid power system.
Abstract: In order to coordinate multiple power sources appropriately, and avoid the transients and rapid changes of power demand, a power management strategy based on an adaptive droop control, which is combined with a multimode strategy and an equivalent consumption minimization strategy, is proposed for a large-scale and high-power hybrid tramway. According to the hybrid system model of tramway developed with commercially available devices, the proposed power management strategy is evaluated with a real driving cycle of tramway. The results obtained from RT-LAB platform testify that the proposed strategy is able to manage and coordinate multiple power sources based on their natural characteristics, maintain dc bus voltage stabilization, improve the efficiency of overall tramway, and alleviate the stress on the hybrid power system.
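Droop control's steady-state current sharing can be sketched in a few lines. The function below solves the basic (non-adaptive) droop equations V_i = V0_i - k_i * I_i for sources tied to a common dc bus; it is a generic illustration, not the paper's adaptive droop with multimode and equivalent-consumption-minimization logic, and all values are illustrative.

```python
def droop_share(v_refs, droop_gains, i_load):
    """Steady-state current sharing under droop control V_i = V0_i - k_i*I_i,
    with all sources connected to a common dc bus supplying i_load.
    Solving sum_i (V0_i - V_bus)/k_i = i_load gives the bus voltage."""
    g = [1.0 / k for k in droop_gains]           # droop conductances
    v_bus = (sum(v0 * gi for v0, gi in zip(v_refs, g)) - i_load) / sum(g)
    currents = [(v0 - v_bus) / k for v0, k in zip(v_refs, droop_gains)]
    return v_bus, currents
```

A source given a smaller droop gain takes a proportionally larger share of the load, which is how a strategy like the paper's can steer slow sources (e.g., fuel cells) and fast ones (e.g., batteries) to their natural roles.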

Journal ArticleDOI
TL;DR: An IoV-aided local traffic information collection architecture, a sink node selection scheme for the information influx, and an optimal traffic information transmission model are proposed; simulation results show the efficiency and feasibility of the proposed models.
Abstract: In view of the emergence and rapid development of the Internet of Vehicles (IoV) and cloud computing, intelligent transport systems are beneficial in terms of enhancing the quality and interactivity of urban transportation services, reducing costs and resource wastage, and improving the traffic management capability. Efficient traffic management relies on the accurate and prompt acquisition as well as diffusion of traffic information. To achieve this, research is mostly focused on optimizing the mobility models and communication performance. However, considering the escalating scale of IoV networks, the interconnection of heterogeneous smart vehicles plays a critical role in enhancing the efficiency of traffic information collection and diffusion. In this paper, we commence by establishing a weighted and undirected graph model for IoV sensing networks and verify its time-invariant complex characteristics relying on a real-world taxi GPS dataset. Moreover, we propose an IoV-aided local traffic information collection architecture, a sink node selection scheme for the information influx, and an optimal traffic information transmission model. Our simulation results and theoretical analysis show the efficiency and feasibility of our proposed models.
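One simple stand-in for sink node selection on a weighted, undirected IoV graph is to pick the node with the highest strength (weighted degree). The paper's actual scheme may use richer centrality measures, so the snippet below is only an illustrative sketch with hypothetical node names.

```python
def select_sink(edges):
    """Pick the sink node with the highest weighted degree (node strength)
    in an undirected, weighted contact graph given as (u, v, w) triples."""
    strength = {}
    for u, v, w in edges:
        strength[u] = strength.get(u, 0.0) + w
        strength[v] = strength.get(v, 0.0) + w
    return max(strength, key=strength.get)
```

The intuition is that a highly connected vehicle or roadside unit can aggregate local traffic reports with the fewest relay hops.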

Journal ArticleDOI
TL;DR: This work investigates a joint radio and computational resource allocation problem to optimize the system performance and improve user satisfaction, and proposes a matching game framework, in particular the student project allocation (SPA) game, to provide a distributed solution for the formulated joint resource allocation problem.
Abstract: The current cloud-based Internet-of-Things (IoT) model has revealed great potential in offering storage and computing services to the IoT users. Fog computing, as an emerging paradigm to complement the cloud computing platform, has been proposed to extend the IoT role to the edge of the network. With fog computing, service providers can exchange the control signals with the users for specific task requirements and offload users’ delay-sensitive tasks directly to the widely distributed fog nodes at the network edge, thus improving user experience. So far, most existing works have focused on either the radio or the computational resource allocation in fog computing. In this work, we investigate a joint radio and computational resource allocation problem to optimize the system performance and improve user satisfaction. Important factors, such as service delay, link quality, and mandatory benefit, are taken into consideration. Instead of conventional centralized optimization, we propose to use a matching game framework, in particular the student project allocation (SPA) game, to provide a distributed solution for the formulated joint resource allocation problem. The efficient SPA-(S,P) algorithm is implemented to find a stable result for the SPA problem. In addition, the instability caused by the external effect, i.e., the interdependence among matching players, is removed by the proposed user-oriented cooperation (UOC) strategy. The system performance is further improved by adopting the UOC strategy.
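An SPA-style matching can be sketched with a deferred-acceptance loop: users propose to fog nodes in their preference order, and each node holds its best-ranked users up to capacity and rejects the rest. This is a simplified stand-in for the SPA-(S,P) algorithm, without lecturer preferences or the UOC refinement; all identifiers are illustrative.

```python
def spa_match(user_prefs, node_rank, capacity):
    """Deferred-acceptance sketch of a student-project-allocation style
    game: users (students) propose to fog nodes (projects) in preference
    order; a node keeps its highest-ranked proposers up to its capacity
    and rejects the worst-ranked surplus user."""
    held = {n: [] for n in node_rank}           # node -> users currently held
    nxt = {u: 0 for u in user_prefs}            # next preference index to try
    free = list(user_prefs)
    while free:
        u = free.pop()
        if nxt[u] >= len(user_prefs[u]):
            continue                            # u has exhausted its list
        n = user_prefs[u][nxt[u]]
        nxt[u] += 1
        held[n].append(u)
        held[n].sort(key=lambda x: node_rank[n].index(x))  # best-ranked first
        if len(held[n]) > capacity[n]:
            free.append(held[n].pop())          # reject the worst-ranked user
    return {n: sorted(us) for n, us in held.items()}
```

The loop terminates because each rejection advances some user's preference pointer, and the held sets at termination form a stable assignment for these simplified preferences.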