
Showing papers presented at "International Conference on Mobile Networks and Management in 2020"


Book Chapter
10 Nov 2020
TL;DR: In this article, a real private Ethereum blockchain network was constructed from a laptop and a Raspberry Pi 3B+ for latency measurement, and the experimental results reveal the correlation between latency and hop count, as well as the relation among latencies under different workloads.
Abstract: There has recently been an increasing number of blockchain applications in different realms. Among the popular blockchain technologies, Ethereum is an emerging platform featuring smart contracts, with public Ethereum associated with the Ether currency. In addition, private Ethereum has been gaining interest due to its applicability to the Internet of Things. An Ethereum blockchain network consists of distributed records that are kept immutable and transparent through replication among network nodes. Ethereum manages information in blocks that are submitted to the chain as transactions. This paper aims to characterize latency performance in the private Ethereum blockchain network. Initially, we clarify two perspectives on latency according to the lifecycle of transactions (transaction-oriented and block-oriented latency). We then construct a real private blockchain network with a laptop and a Raspberry Pi 3B+ for the latency measurement. We write and deploy a smart contract to read and write data to the blockchain and measure the latencies in baseline and realistic scenarios. The experimental results reveal the correlation between latency and hop count, as well as the relation among latencies under different workloads. Moreover, the blockchain network takes an average of 63.92 ms (excluding mining time) to bring one transaction into effect over one hop.
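A transaction-oriented latency measurement of this kind can be reproduced with web3.py against any private node. The sketch below is a minimal illustration under assumed conditions (a hypothetical Geth endpoint with unlocked accounts and a plain value transfer instead of the paper's smart contract), not the authors' actual test harness:

```python
# Minimal sketch of transaction-oriented latency measurement on a private
# Ethereum network. The endpoint, accounts, and workload are illustrative
# assumptions, not the paper's setup.
import time
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://192.168.1.10:8545"))  # hypothetical node
sender, receiver = w3.eth.accounts[0], w3.eth.accounts[1]

latencies = []
for _ in range(100):  # one fixed workload level
    t0 = time.perf_counter()
    tx_hash = w3.eth.send_transaction({"from": sender, "to": receiver, "value": 1})
    w3.eth.wait_for_transaction_receipt(tx_hash)  # transaction in effect
    latencies.append((time.perf_counter() - t0) * 1000.0)  # ms

print(f"mean latency: {sum(latencies) / len(latencies):.2f} ms")
```

Note that this end-to-end timing includes mining, whereas the paper's 63.92 ms figure excludes it.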

6 citations


Book Chapter
10 Nov 2020
TL;DR: Wang et al. as mentioned in this paper proposed a novel bottleneck feature extraction (BFE) method based on the deep neural network (DNN) model for facial emotion recognition, which used the Haar cascade classifier with a randomly generated mask to extract the face and remove the background from the image.
Abstract: Deep learning is one of the most effective and efficient methods for facial emotion recognition, but it still encounters stability and feasibility problems for faces of different races. To address this issue, we proposed a novel bottleneck feature extraction (BFE) method based on a deep neural network (DNN) model for facial emotion recognition. First, we used the Haar cascade classifier with a randomly generated mask to extract the face and remove the background from the image. Second, we removed the last output layer of the VGG16 transfer learning model, using it only for bottleneck feature extraction. Third, we designed a DNN model with five dense layers for feature training and used the well-known Cohn-Kanade dataset for model training. Finally, we compared the proposed model with the K-nearest neighbor and logistic regression models on the same dataset. The experimental results showed that our model was more stable and achieved higher accuracy and a higher F-measure (up to 98.59%) than the other methods.
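As a rough illustration of the pipeline the abstract outlines (face extraction, frozen VGG16 bottleneck features, a five-dense-layer head), here is a Keras sketch; the layer widths, input size, and seven-class output are assumptions rather than the paper's exact configuration:

```python
# Sketch of the BFE pipeline: Haar-cascade face extraction, VGG16 (top
# removed) as a frozen bottleneck feature extractor, and a small dense head.
import cv2
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras import layers, models

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(img_bgr, size=(224, 224)):
    """Crop the first detected face and resize it for VGG16."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(img_bgr[y:y + h, x:x + w], size)

# Frozen VGG16 without its classification top: bottleneck features only.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))
backbone.trainable = False

def bottleneck_features(faces_bgr):
    batch = preprocess_input(np.array(faces_bgr, dtype=np.float32))
    return backbone.predict(batch)  # (N, 512) bottleneck feature vectors

# DNN head with five dense layers, as the abstract describes (sizes assumed).
head = models.Sequential([
    layers.Input(shape=(512,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(7, activation="softmax"),  # 7 basic emotions (assumed)
])
head.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])
```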

5 citations


Book Chapter
10 Nov 2020
TL;DR: In this paper, a distributed power and spectrum allocation scheme for Device-to-Device communication on unlicensed bands (D2D-U) enabled networks is proposed, where an online-trained neural network (NN) is first utilized to determine the price to use the unlicensed channels according to the channel state and traffic loads.
Abstract: In this paper, a distributed power and spectrum allocation scheme is proposed for Device-to-Device communication on unlicensed bands (D2D-U) enabled networks. To make full use of the spectrum resources on the unlicensed bands while guaranteeing fairness among D2D-U links and harmonious coexistence with WiFi networks, an online-trained neural network (NN) is first utilized on each D2D-U pair to determine the price to use the unlicensed channels according to the channel state and traffic loads. Then, a non-convex optimization problem is formulated and solved on each D2D-U link to determine the optimal spectrum and power allocation that maximizes its transmission data rate. Numerical simulation results verify the performance of the proposed method, which enables each D2D-U link to maximize its own data rate individually under the constraint of fair coexistence with other D2D-U devices and WiFi networks.
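The per-link step can be illustrated with a toy utility maximization: given a price for unlicensed airtime (produced by the paper's online-trained NN, which is not reproduced here), each link picks the transmit power maximizing rate minus cost. All channel values below are assumptions:

```python
# Toy sketch of the per-link decision: choose the transmit power that
# maximizes Shannon rate minus the price paid for unlicensed airtime.
import numpy as np

def best_power(gain, noise, price, p_max, bandwidth=20e6, steps=1000):
    """Grid search over transmit power for one D2D-U link (toy model)."""
    powers = np.linspace(0.0, p_max, steps)
    rates = bandwidth * np.log2(1.0 + powers * gain / noise)  # bit/s
    utility = rates - price * powers   # rate minus cost of channel usage
    i = int(np.argmax(utility))
    return powers[i], rates[i]

p, r = best_power(gain=1e-7, noise=1e-10, price=5e7, p_max=0.1)
print(f"power {p * 1e3:.1f} mW -> rate {r / 1e6:.1f} Mbit/s")
```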

5 citations


Book Chapter
10 Nov 2020
TL;DR: In this article, a dual-mode Gaussian distribution is used to fit the color distribution of the ground and then a prior model is built where pixels near the class boundary are more likely to be classified as the foreground.
Abstract: In this paper we describe an airport ground movement surveillance network. Airport ground videos are captured by multiple cameras and then transmitted to the airport control center over an optical fiber network. On high-performance servers in the control center, various intelligent applications process the video data, visualize the processing results, and provide them to the air traffic controllers as a reference for airport management. Moving object detection is the foundation of many video-based intelligent applications in airport surveillance. We propose detecting moving objects on the airport ground using the prior knowledge that the cement airport surface has a gray-white color distribution. Based on this fact, we first use a dual-mode Gaussian distribution to fit the color distribution of the ground. Next, based on the fitted distribution, we build a prior model in which pixels near the class boundary are more likely to be classified as foreground. Finally, the prior model is used to detect moving targets within a Bayesian classification framework. Experiments are conducted on the AGVS benchmark, and the results demonstrate the effectiveness of the proposed moving object detection algorithm.
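A minimal version of the ground-color prior can be sketched by fitting a two-component Gaussian model to gray-level pixels and flagging low-likelihood pixels as foreground; the threshold and the synthetic images below are assumptions, and the paper's boundary-sensitive prior and full Bayesian framework are only approximated:

```python
# Sketch of the ground-color prior: fit a two-mode Gaussian model to the
# gray-white cement ground, then flag unlikely pixels as foreground.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_ground_model(background_gray):
    """background_gray: HxW uint8 image of (mostly) empty airport ground."""
    samples = background_gray.reshape(-1, 1).astype(np.float64)
    return GaussianMixture(n_components=2, random_state=0).fit(samples)

def detect_foreground(gm, frame_gray, log_thresh=-6.0):
    """Pixels with low likelihood under the ground model -> foreground."""
    pix = frame_gray.reshape(-1, 1).astype(np.float64)
    loglik = gm.score_samples(pix)               # per-pixel log-likelihood
    return (loglik < log_thresh).reshape(frame_gray.shape)

rng = np.random.default_rng(0)
bg = rng.normal(180, 8, (120, 160)).clip(0, 255).astype(np.uint8)  # gray-white
frame = bg.copy()
frame[40:70, 60:100] = 60                        # dark "moving object"
gm = fit_ground_model(bg)
print(detect_foreground(gm, frame).sum(), "foreground pixels")
```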

3 citations


Book Chapter
10 Nov 2020
TL;DR: In this paper, the authors proposed a lightweight gait recognition system, named B-Net, that reconstructs the original data into a frequency energy graph; a Balloon mechanism based on the concept of channel information integration is designed to reduce storage cost and training time.
Abstract: The challenges in current WiFi-based gait recognition models, such as limited classification ability, high storage cost, long training time and restricted deployment on hardware platforms, motivate us to propose a lightweight gait recognition system named B-Net. By reconstructing the original data into a frequency energy graph, B-Net extracts the spatial features of different carriers. Moreover, a Balloon mechanism based on the concept of channel information integration is designed to reduce the storage cost and training time. The key benefit of the Balloon mechanism is that it compresses the model scale and alleviates gradient vanishing to some extent. Experimental results show that B-Net has fewer parameters and a shorter training time, and achieves higher accuracy and better robustness than previous gait recognition models.
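The reconstruction step, turning a raw CSI stream into a frequency energy graph, can be sketched with a short-time Fourier transform; the sampling rate, window length, and synthetic signal below are assumptions:

```python
# Sketch of B-Net's input reconstruction: CSI amplitude -> time-frequency
# energy map ("frequency energy graph") via the STFT.
import numpy as np
from scipy.signal import stft

fs = 1000                                 # CSI sampling rate (assumed), Hz
t = np.arange(0, 4, 1 / fs)
# Synthetic stand-in for one subcarrier's amplitude during walking.
csi_amp = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)

f, seg_t, Z = stft(csi_amp, fs=fs, nperseg=256, noverlap=192)
energy_graph = np.abs(Z) ** 2             # |STFT|^2 energy per bin

# One such map per carrier; stacking the maps over subcarriers gives the
# multi-channel input from which spatial features are extracted.
print(energy_graph.shape)                 # (freq bins, time frames)
```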

2 citations


Book Chapter
10 Nov 2020
TL;DR: In this article, a model called pseudo-double hidden layer feed forward neural network is proposed to approximatively predict the practical demand of bike-sharing, which is very difficult to make the prediction absolutely accurate due to the stochasticity and nonlinearity in the bike sharing system.
Abstract: Accurate demand prediction for bike-sharing is a prerequisite to reducing scheduling costs and improving user satisfaction. However, it is very difficult to make the prediction absolutely accurate due to the stochasticity and nonlinearity in the bike-sharing system. In this paper, a model called the pseudo-double hidden layer feedforward neural network is proposed to approximately predict the practical demand of bike-sharing. In this neural network, an algorithm called improved particle swarm optimization in extreme learning machine is proposed to define its learning rule. By fully mining the massive operational data of the “Shedd Aquarium” bike-sharing station in Chicago (USA), the demand of this station is predicted by the model proposed in this paper.
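The extreme learning machine at the core of such a model admits a compact sketch: random hidden weights and closed-form output weights via the pseudo-inverse. The paper's improved particle swarm optimization, which would search over the hidden weights, is omitted here, and the toy demand series is an assumption:

```python
# Sketch of the extreme learning machine (ELM) core: random hidden layer,
# least-squares output weights. The paper's improved PSO would tune (W, b).
import numpy as np

rng = np.random.default_rng(42)

def elm_fit(X, y, hidden=64):
    W = rng.normal(scale=0.5, size=(X.shape[1], hidden))  # random input weights
    b = rng.normal(scale=0.5, size=hidden)                # random biases
    H = np.tanh(X @ W + b)                                # hidden activations
    beta = np.linalg.pinv(H) @ y                          # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy demand series: predict the next hour from the previous 24 hours.
demand = 50 + 20 * np.sin(np.arange(500) * 2 * np.pi / 24) + rng.normal(0, 3, 500)
X = np.stack([demand[i:i + 24] for i in range(450)])
X = (X - demand.mean()) / demand.std()        # standardize the inputs
y = demand[24:474]
W, b, beta = elm_fit(X[:400], y[:400])
pred = elm_predict(X[400:], W, b, beta)
print(f"test MAE: {np.abs(pred - y[400:]).mean():.2f} rentals")
```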

1 citation


Book Chapter
10 Nov 2020
TL;DR: In this article, a cooperative resource allocation scheme in heterogeneous smart grid networks is proposed to minimize the average delay of the smart grid service, where the effect of computing and channel resources on the initial latency of the electric services is studied.
Abstract: Recently, in heterogeneous smart grids, mobile network traffic and various power interconnection services have been growing rapidly. Caching services at the edge of the smart grid network is a common optimization method to reduce the heavy network traffic. In this paper, we propose a cooperative resource allocation scheme for heterogeneous smart grid networks. Firstly, the effect of computing and channel resources on the initial latency of the electric services is studied. Secondly, a model of the topology and resource distribution in the network is established, with the goal of minimizing the overall network delay. Thirdly, an algorithm combining Kuhn-Munkres (KM) matching and a genetic algorithm is proposed to solve the formulated problem. Finally, simulation results show that the proposed algorithm reduces the average delay of the smart grid service.
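The KM-matching component corresponds to a minimum-cost assignment of services to edge nodes, which the Hungarian (Kuhn-Munkres) method solves directly; the sketch below uses a random delay matrix as a stand-in and omits the genetic algorithm layer:

```python
# Sketch of the KM-matching step: assign electric services to edge nodes so
# the summed delay is minimized. The delay matrix is a random stand-in.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
delay = rng.uniform(5, 50, size=(6, 6))    # delay[i, j]: service i on node j (ms)

rows, cols = linear_sum_assignment(delay)  # minimum-cost perfect matching
for service, node in zip(rows, cols):
    print(f"service {service} -> edge node {node} ({delay[service, node]:.1f} ms)")
print(f"average delay: {delay[rows, cols].mean():.1f} ms")
```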

1 citation


Book Chapter
10 Nov 2020
TL;DR: In this paper, the authors propose a virtual edge (VE) scheme in which a node can offload its tasks to a VE consisting of multiple vehicles in the vicinity; the relative vehicle velocity and computational capability are considered in the virtual edge selection.
Abstract: Edge computing can reduce service latency through task offloading. Since computational resources at the edge of the network are scarce, selecting a node with rich computational capability is key to obtaining a high-quality service. In this paper, we propose a virtual edge scheme in which a node can offload its tasks to a virtual edge node consisting of multiple vehicles in the vicinity. The relative vehicle velocity and computational capability are considered in the virtual edge selection. We compare our proposed scheme with several baseline schemes and show its superiority.
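A minimal sketch of the selection idea: score nearby vehicles by computational capability penalized by relative velocity (a fast-diverging vehicle makes a less stable virtual edge member) and keep the top scorers. The weights and values are assumptions, not the paper's actual rule:

```python
# Sketch of virtual-edge member selection: capability minus a velocity
# penalty, then pick the top-scoring vehicles. All values are assumed.
vehicles = [  # (id, CPU capability in GHz, relative velocity in m/s)
    ("v1", 2.0, 1.0), ("v2", 3.5, 8.0), ("v3", 1.5, 0.5), ("v4", 3.0, 2.0),
]

def score(cpu_ghz, rel_velocity, w_cpu=1.0, w_vel=0.2):
    return w_cpu * cpu_ghz - w_vel * abs(rel_velocity)

virtual_edge = sorted(vehicles, key=lambda v: score(v[1], v[2]), reverse=True)[:2]
print("virtual edge members:", [v[0] for v in virtual_edge])
```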

1 citation


Book Chapter
10 Nov 2020
TL;DR: In this paper, an uplink scheduling transmission method for sampling data with optimized throughput according to the requirements of system delay and reliability is proposed, and the simulation results show that under the condition of satisfying the delay requirement, the proposed framework can optimally allocate the wireless communication resource and maximize the throughput of the uplink transmission system.
Abstract: The smart grid is an energy network that integrates advanced power equipment, communication technology and control technology. It can simultaneously transmit two-way power and data among all components of the grid. Existing smart grid communication technologies include power line carrier (PLC) communication, industrial Ethernet, passive optical networks and wireless communication, each of which has its own advantages. Due to the complex application scenarios, massive numbers of sampling points and high transmission reliability requirements, a single communication method cannot fully meet the communication requirements of the smart grid, and heterogeneous communication modes are required. In addition, with the development of cellular technology, long term evolution (LTE)-based standards have been identified as a promising technology that can meet the strict requirements of various operations in the smart grid. In this paper, we analyze the advantages and disadvantages of PLC and LTE communication, and design a network framework for heterogeneous PLC and LTE uplink communication in the smart grid. We then propose an uplink scheduling transmission method for sampling data that optimizes throughput according to the system's delay and reliability requirements, and prove the stability and solvability of the scheduling system theoretically. Finally, simulation results show that, under the delay requirement, our proposed framework can optimally allocate the wireless communication resources and maximize the throughput of the uplink transmission system.
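The scheduling idea can be illustrated with a toy two-step allocation: first give every sampling node enough LTE resource blocks to meet its delay bound, then spend the spare blocks on the highest-rate nodes to maximize total throughput. All rates and bounds below are assumptions:

```python
# Toy sketch of delay-constrained uplink scheduling: satisfy each node's
# delay bound first, then allocate spare blocks greedily by rate.
nodes = [  # (id, rate per resource block in kbit/s, min blocks to meet delay)
    ("meter-1", 150, 2), ("meter-2", 300, 1), ("pmu-1", 500, 3), ("pmu-2", 250, 2),
]
total_blocks = 12

alloc = {n[0]: n[2] for n in nodes}            # 1) satisfy every delay bound
spare = total_blocks - sum(alloc.values())
for nid, rate, _ in sorted(nodes, key=lambda n: -n[1]):  # 2) greedy on rate
    take = min(spare, 2)                       # cap the extra blocks per node
    alloc[nid] += take
    spare -= take
    if spare == 0:
        break

throughput = sum(rate * alloc[nid] for nid, rate, _ in nodes)
print(alloc, f"-> total throughput {throughput} kbit/s")
```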

Book Chapter
10 Nov 2020
TL;DR: In this article, novel research on multi-level computation offloading is performed, taking into account the heterogeneity of computation tasks and the computation resource backup pool, while simultaneously introducing opportunistic networks into multi-access networks.
Abstract: Mobile Edge Computing (MEC) is regarded as a promising technology that migrates cloud computing platforms with computing and storage capabilities to the edge of the wireless access network, enabling rich applications and services in close proximity to mobile users (MUs). Computation offloading has been studied extensively in the literature. Different from existing work, this paper presents novel research on multi-level computation offloading, taking into account the heterogeneity of computation tasks and the computation resource backup pool, while simultaneously introducing opportunistic networks into multi-access networks. Firstly, we describe the computation offloading model. Then, we formulate the multi-level computation offloading problem as a Stackelberg game and demonstrate the existence of a Nash equilibrium of the game. To solve the above problem, we design a globally optimal algorithm based on game theory. Finally, the performance of the proposed algorithm is verified by comparison with other algorithms. Simulation results corroborate that the algorithm not only decreases energy consumption but is also stable.
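The Stackelberg structure can be sketched as a toy leader-follower loop: the edge server (leader) posts a unit price for computation, each user (follower) best-responds with an offloading amount, and the leader searches prices for maximum revenue. The utility shapes and constants are assumptions, not the paper's model:

```python
# Toy Stackelberg sketch: leader prices computation, followers best-respond.
import numpy as np

def follower_offload(price, value, d_max=10.0):
    """Follower maximizes value*log(1+d) - price*d over offloaded amount d.
    The first-order condition gives d* = value/price - 1 (clipped)."""
    return float(np.clip(value / price - 1.0, 0.0, d_max))

values = [2.0, 4.0, 6.0]                    # per-user valuation of offloading
best = max(
    ((p, p * sum(follower_offload(p, v) for v in values))
     for p in np.linspace(0.1, 5.0, 100)),
    key=lambda pr: pr[1],
)
print(f"leader price {best[0]:.2f} -> revenue {best[1]:.2f}")
```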

Book Chapter
10 Nov 2020
TL;DR: In this article, a joint handover and transmission strategy for users is investigated to minimize the service delay, which considers the overlapped coverage of SBSs and the limited capacity of backhaul link.
Abstract: In smart grid systems, heterogeneous networks are considered a promising solution to address the rapid growth of mobile traffic. Considering the different user preferences, how to efficiently utilize the limited resources of small base stations (SBSs) becomes a challenge. In this paper, we investigate a joint handover and transmission strategy for users. We formulate the handover and transmission problem to minimize the service delay, taking into account the overlapped coverage of SBSs and the limited capacity of the backhaul link. To solve this NP-hard problem, we design a heuristic algorithm with two phases. In the content caching phase, contents are cached at SBSs according to a greedy algorithm. In the content delivery phase, a transmission strategy is designed to meet user demands for videos with different quality levels. Simulation results show that our proposed algorithm reduces video delivery delay and saves backhaul traffic compared with other algorithms.
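The greedy step of the content caching phase can be sketched as caching by popularity-per-size until the SBS capacity is exhausted; the catalog and capacity below are illustrative assumptions:

```python
# Sketch of greedy caching at an SBS: highest popularity density first.
contents = [  # (id, popularity in requests/hour, size in MB)
    ("vid-a", 120, 300), ("vid-b", 90, 100), ("vid-c", 60, 400), ("vid-d", 45, 50),
]
capacity_mb = 500

cached, used = [], 0
for cid, pop, size in sorted(contents, key=lambda c: c[1] / c[2], reverse=True):
    if used + size <= capacity_mb:        # greedy by popularity per MB
        cached.append(cid)
        used += size
print("cached at SBS:", cached, f"({used}/{capacity_mb} MB)")
```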

Book Chapter
10 Nov 2020
TL;DR: In this article, the authors proposed a novel design framework for a UAV-enabled video relay system with the aim of minimizing the energy consumption of the UAV, subject to the quality of experience (QoE) requirement of each ground user (GU).
Abstract: With the explosive growth of mobile video services, unmanned aerial vehicles (UAVs) can be flexibly deployed as relay nodes to offload cellular traffic or provide video services in emergency scenarios without infrastructure. This paper proposes a novel design framework for a UAV-enabled video relay system with the aim of minimizing the energy consumption of the UAV, subject to the quality of experience (QoE) requirement of each ground user (GU). A dynamic resource allocation strategy is employed to model the UAV's power and bandwidth allocation, and the optimization problem is formulated as a non-convex problem in which the transmit power and bandwidth allocation of the UAV are optimized jointly with the UAV trajectory. To tackle this non-convex problem, the original problem is decoupled into two sub-problems: bandwidth and transmit power allocation optimization, and UAV trajectory optimization. We propose an efficient iterative algorithm that obtains a Karush-Kuhn-Tucker (KKT) solution by solving the two sub-problems with successive convex approximation and alternating optimization techniques. Extensive simulations are conducted to evaluate the performance, and the results demonstrate that with the proposed joint design, the UAV's energy consumption is significantly reduced, by up to \(30\%\), while the QoE requirement of the GUs is well satisfied.
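The alternating-optimization skeleton is easy to illustrate on a toy stand-in: fix the trajectory and optimize the resource variable, then fix the resource variable and adjust the trajectory, until convergence. The one-dimensional "trajectory", channel model, and bounds below are assumptions, and the SCA machinery is replaced by bounded scalar searches:

```python
# Skeleton of the alternating optimization: resource block, then trajectory
# block, repeated. Toy single-GU model with a fixed rate target.
import numpy as np
from scipy.optimize import minimize_scalar

gu_pos, rate_req, n0 = 0.0, 1.0, 1e-3        # GU location, rate target, noise

def power_needed(x, bw):
    """Transmit power meeting the rate target at position x (toy channel)."""
    gain = 1.0 / (1.0 + (x - gu_pos) ** 2)    # distance-based path loss
    return (2 ** (rate_req / bw) - 1) * n0 * bw / gain

x, bw = 5.0, 1.0                              # initial trajectory point, bandwidth
for _ in range(20):                           # alternating optimization loop
    # Block 1: optimize bandwidth with the trajectory fixed.
    bw = minimize_scalar(lambda b: power_needed(x, b),
                         bounds=(0.1, 10.0), method="bounded").x
    # Block 2: optimize the trajectory point with the bandwidth fixed.
    x = minimize_scalar(lambda pos: power_needed(pos, bw),
                        bounds=(-10.0, 10.0), method="bounded").x
print(f"x* = {x:.2f}, bw* = {bw:.2f}, power = {power_needed(x, bw):.4f}")
```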

Book Chapter
10 Nov 2020
TL;DR: In this article, a novel displacement estimation method that combines the block-matching method and the phase-zero search method was proposed to increase the robustness of the phase-zero search under large displacement conditions.
Abstract: Traditional medicine requires doctors and patients to engage in face-to-face palpation, which is a great challenge in underdeveloped areas, especially rural areas. Telemedicine provides an opportunity for patients to connect with doctors who may be thousands of miles away via mobile devices or the Internet. In this setting, elastography is a crucial medical imaging modality that maps the elastic properties of soft tissue, which can then be sent to doctors remotely. Ultrasound elastography has become a research focus because it can accurately measure soft tissue lesions. Displacement estimation is a key step in ultrasound elastography. The phase-zero search method is a popular displacement estimation method that is accurate and fast. However, the method is ineffective when the displacement is more than a quarter wavelength. The block-matching method can address this shortcoming because it is suitable for large displacements, although it is less accurate. Notably, the quality-guided block-matching method has exhibited good robustness under complex mutational conditions. In this paper, we propose a novel displacement estimation method that combines the block-matching method and the phase-zero search method. The block-matching method provides prior knowledge to increase the robustness of the phase-zero search under large displacement conditions. The experimental results show that our method exhibits stronger robustness, more accurate results, and faster calculation speed.
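The combination the abstract describes can be sketched in a few lines: a coarse integer lag from block matching (cross-correlation), then a sub-sample correction from the residual phase of the analytic signals. The signal parameters below are assumptions:

```python
# Sketch of block matching + phase-zero refinement on synthetic RF data.
import numpy as np
from scipy.signal import hilbert

fs, f0 = 40e6, 5e6                  # sampling rate and center frequency (assumed)
t = np.arange(2048) / fs
rf_pre = np.sin(2 * np.pi * f0 * t) * np.exp(-((t - 25e-6) ** 2) / 2e-11)
true_shift = 7                      # samples; > 1/4 wavelength (fs/f0/4 = 2)
rf_post = np.roll(rf_pre, true_shift)

# 1) Block matching: integer lag that maximizes the cross-correlation.
block = rf_pre[900:1100]
corr = np.correlate(rf_post[850:1150], block, mode="valid")
coarse = int(np.argmax(corr)) - 50  # lag relative to zero displacement

# 2) Phase-zero refinement around the coarse lag via analytic signals.
a_pre = hilbert(rf_pre)[900:1100]
a_post = hilbert(rf_post)[900 + coarse:1100 + coarse]
phase = np.angle(np.vdot(a_pre, a_post))   # residual phase (~0 when aligned)
fine = -phase * fs / (2 * np.pi * f0)      # a delay d rotates phase by -2*pi*f0*d/fs

print(f"estimated shift: {coarse + fine:.3f} samples (true: {true_shift})")
```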

Book Chapter
10 Nov 2020
TL;DR: In this paper, the authors proposed a protocol for tag-information sampling in RFID systems with a small communication cost and proved that the communication cost of the protocol stays within a factor of 2 of the lower bound.
Abstract: Given a population S of N tags in an RFID system, the tag-information sampling problem is to randomly choose K distinct tags from S to form a subset T, and then inform each tag in T of a unique integer from \(\{1,2,..., K\}\). This is a fundamental problem in many real-time analysis applications in RFID systems because it enables rapidly selecting a random subset T and collecting the tag information from T. However, existing protocols for this problem are far from satisfactory due to their high communication costs. In this paper, our objective is to solve this problem with a small communication cost. We first obtain a lower bound on the communication cost, denoted by \(C_\mathrm{{lb}}\), for this problem. Then we design a protocol, denoted by \(P_{\text {s}}\), to solve this problem, and prove that the communication cost of \(P_{\text {s}}\) stays within a factor of 2 of \(C_\mathrm{{lb}}\). Extensive simulations verify the advantages of \(P_{\text {s}}\) compared with other protocols.
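For reference, the problem's semantics (not the authors' over-the-air protocol or its factor-2 analysis) can be stated in a few lines of code:

```python
# Sketch of the tag-information sampling semantics only: pick K distinct
# tags uniformly at random and assign each a unique index in 1..K.
import random

def sample_tags(tag_ids, k, seed=0):
    rng = random.Random(seed)
    chosen = rng.sample(tag_ids, k)          # K distinct tags from S
    return {tag: idx + 1 for idx, tag in enumerate(chosen)}  # tag -> 1..K

population = [f"tag-{i:04d}" for i in range(1000)]   # N = 1000
print(sample_tags(population, k=5))
```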

Book Chapter
Fuchao Wang, Pengsong Duan, Yangjie Cao, Jinsheng Kong, Hao Li
10 Nov 2020
TL;DR: In this paper, a human activity perception recognition model based on deep learning is proposed to solve the problems of difficulty in extracting perceptual features of Wi-Fi signals and low recognition accuracy in traditional Machine Learning methods.
Abstract: In recent years, with the prominent population aging problem, the health conditions of aged solitaries have been gaining more and more attention. Among the techniques allowing real-time health monitoring, activity perception has become an important and promising field in both academia and industry. In this paper, a human activity perception and recognition model based on deep learning, named MSHNet (Multi-Stream Hybrid Network), is proposed to address the difficulty of extracting perceptual features from Wi-Fi signals and the low recognition accuracy of traditional machine learning methods. MSHNet adopts passive wireless sensing technology: it uses commercial off-the-shelf Wi-Fi devices to collect Channel State Information (CSI) from the underlying physical equipment and automatically extracts human activity features characterized by amplitude in the CSI. MSHNet then aggregates the data streams of the same receiving antenna, exploiting the wireless transceiving characteristics of Multiple Input Multiple Output (MIMO), and trains on each aggregated data stream separately. Finally, a voting mechanism is adopted to select the best training result. The experimental results demonstrate that MSHNet reaches the state of the art on the public dataset, and on the datasets we collected in four environments the average recognition accuracy reaches 97.41%, which satisfies the daily activity monitoring needs of the elderly, especially those living alone.
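The per-antenna training and voting scheme can be sketched with one classifier per receive antenna and a majority vote over their predictions; the synthetic CSI features and the simple classifier below are stand-ins for MSHNet's deep streams:

```python
# Sketch of per-antenna stream training plus majority voting.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_antennas, n_features, n_classes = 300, 3, 90, 4
y = rng.integers(0, n_classes, n_samples)
# Per-antenna amplitude features; class-dependent means make them learnable.
X = [rng.normal(loc=y[:, None], scale=2.0, size=(n_samples, n_features))
     for _ in range(n_antennas)]

models = [RandomForestClassifier(n_estimators=50, random_state=a)
          .fit(X[a][:200], y[:200]) for a in range(n_antennas)]

preds = np.stack([m.predict(X[a][200:]) for a, m in enumerate(models)])
votes = np.apply_along_axis(
    lambda c: np.bincount(c, minlength=n_classes).argmax(), 0, preds)
print(f"voted accuracy: {(votes == y[200:]).mean():.2f}")
```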

Book Chapter
10 Nov 2020
TL;DR: In this paper, a new space time coding scheme is proposed to improve the system spectral efficiency (SE) while achieving full diversity gain, the ST coded symbol transmission sequence (denoted as ST coding pattern in time domain) is exploited to improve transmission rate.
Abstract: In this letter, a new space time (ST) coding scheme is proposed to improve the system spectral efficiency (SE). To improve the SE while achieving full diversity gain, the ST coded symbol transmission sequence (denoted as the ST coding pattern in the time domain) is exploited to improve the transmission rate. Since the orthogonal construction of the ST codes is preserved, the simple decoding scheme (a linear maximum likelihood detector) is still applicable. Based on the analysis and Monte Carlo simulations, we demonstrate that the data rate increases by 25\(\%\) and that its bit error rate (BER) is close to that of the conventional ST codes when 16-QAM or 16-PSK is used in the system.
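For context, the conventional orthogonal ST baseline (the Alamouti code for two transmit antennas) on which such schemes build can be encoded as follows; the paper's rate-increasing transmission pattern itself is not reproduced:

```python
# Sketch of the conventional Alamouti (2-Tx orthogonal ST) encoder: symbol
# pairs map to 2x2 orthogonal blocks, preserving simple linear ML decoding.
import numpy as np

def alamouti_encode(symbols):
    """symbols: 1-D complex array of even length -> (2, T) antenna streams."""
    s1, s2 = symbols[0::2], symbols[1::2]
    ant1 = np.empty(symbols.size, dtype=complex)
    ant2 = np.empty(symbols.size, dtype=complex)
    ant1[0::2], ant1[1::2] = s1, -np.conj(s2)   # antenna 1: s1, -s2*
    ant2[0::2], ant2[1::2] = s2, np.conj(s1)    # antenna 2: s2,  s1*
    return np.stack([ant1, ant2])

# 16-QAM-like symbols (assumed constellation, unit average energy).
syms = np.array([1 + 1j, -3 + 1j, 3 - 3j, -1 - 1j]) / np.sqrt(10)
print(alamouti_encode(syms))
```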

Book Chapter
10 Nov 2020
TL;DR: In this paper, a cache enhanced offload strategy and a collaborative scheduling algorithm are proposed to optimize the total delay of all tasks of the inspection robot in the eIoT. The model includes offloading, calculating and backhaul for uncached task and downloading of cached content.
Abstract: With the continuous development and improvement of 5G networks, many emerging technology architectures have been introduced to support 5G service requirements. As one of them, mobile edge computing (MEC) can meet the exponentially increasing computing requirements, and with its advantages of being more efficient, smarter, and more flexible, it is well suited to smart grid scenarios. However, most existing research on the electric Internet of Things (eIoT) studies computation offloading and content caching separately, ignoring the reusability of some computing results. This paper considers the content caching capabilities of the MEC system itself and aims to design a cache-enhanced MEC eIoT. The model includes offloading, computation, and backhaul for uncached tasks, and downloading of cached content. In addition, the problems of task diversity and inspection robot mobility are fully analyzed. Subsequently, we study the impact of caching capability on computing power to obtain the best MEC server parameters. Based on the above, this paper proposes a cache-enhanced offloading strategy and a collaborative scheduling algorithm to optimize the total delay of all tasks of the inspection robot in the eIoT. Simulation results show that the strategy can effectively reduce the computation offloading latency.
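The delay model the abstract names (offloading, computation, and backhaul for uncached tasks versus a plain download on a cache hit) can be written out directly; all rates, sizes, and cycle counts below are assumptions:

```python
# Sketch of the cache-enhanced MEC delay model: an uncached task pays
# offload + compute + backhaul, a cached result only pays the download.
def task_delay(cached, in_bits=2e6, out_bits=5e5, cycles=1e9,
               uplink_bps=50e6, cpu_hz=10e9, downlink_bps=100e6):
    if cached:                          # result already in the MEC cache
        return out_bits / downlink_bps
    return (in_bits / uplink_bps        # offload the input to the server
            + cycles / cpu_hz           # compute on the edge server
            + out_bits / downlink_bps)  # send the result back

print(f"uncached: {task_delay(False) * 1e3:.1f} ms, "
      f"cached: {task_delay(True) * 1e3:.1f} ms")
```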

Book Chapter
10 Nov 2020
TL;DR: In this paper, a graph-based terminal ranking scheme is designed, where all terminals are used as the vertexes in the graph to form a complete weighted graph, and edge weights represent the degree of dissimilarity between terminals.
Abstract: In future mobile communication systems, inter-cell interference becomes a serious problem due to the dense deployment of cells and terminals. Traditional interference coordination schemes take a long time to optimize in ultra-dense networks. Meanwhile, due to the increase in factors affecting communication, and in order to better meet the communication needs of each terminal, an interference coordination scheme needs to fully consider multiple characteristic parameters of each terminal, which further increases the scheme's computational time. Therefore, all the data should be compressed through sparsification of the parameters before optimization. There are many terminal parameters, and the essence of parameter sparsification is to rank the terminals. In this paper, a graph-based terminal ranking scheme is designed. First, each terminal is represented by its multiple parameters. Then, all terminals are used as the vertices of a complete weighted graph, where edge weights represent the degree of dissimilarity between terminals. A ranking of the terminals is obtained by finding a minimum Hamiltonian path in the graph. Finally, the ranking of all parameter sequences is obtained according to the terminal ranking, which improves the sparsity of all parameter sequences. Simulation results show that the proposed scheme accomplishes sparsification of the parameter sequences effectively, especially as the number of sequences increases. In addition, compared with the optimal coordination of the traditional scheme, this scheme improves the fairness of the system while ensuring high system capacity, and dramatically reduces the computational time of interference coordination.
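Since finding a minimum Hamiltonian path is NP-hard in general, a simple nearest-neighbor heuristic conveys the ranking step; the Euclidean dissimilarity below is an assumed stand-in for the paper's measure:

```python
# Sketch of graph-based terminal ranking: complete graph weighted by pairwise
# dissimilarity, approximate minimum Hamiltonian path by nearest neighbor.
import numpy as np

rng = np.random.default_rng(3)
params = rng.random((8, 5))                  # 8 terminals x 5 parameters
diss = np.linalg.norm(params[:, None] - params[None, :], axis=2)

def nearest_neighbor_path(d, start=0):
    n, order, seen = d.shape[0], [start], {start}
    while len(order) < n:
        cur = order[-1]
        nxt = min((j for j in range(n) if j not in seen), key=lambda j: d[cur, j])
        order.append(nxt)
        seen.add(nxt)
    return order

print("terminal ranking:", nearest_neighbor_path(diss))
```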

Book Chapter
10 Nov 2020
TL;DR: Wang et al. as discussed by the authors proposed a novel reputation mechanism to encourage responders to provide positive navigation services, and the optimization problem of the system was established to maximize the total utility of the system.
Abstract: At present, crowdsourcing-based indoor navigation systems have attracted extensive attention from both industry and academia. The crowdsourcing-based indoor navigation system effectively addresses the deficiencies (e.g., high cost, low accuracy, etc.) of traditional navigation methods. Unfortunately, a system that relies on crowdsourced data is vulnerable to collusion attacks, which may threaten the security of the system. In this paper, a novel crowdsourcing-based secure indoor navigation system is proposed. Specifically, we first propose a novel reputation mechanism. Then, we employ an offensive-defensive game to model the interactions between the fog service platform and the responders. Next, the optimization problem of the system is established to maximize the total utility of the system. Finally, the simulation results demonstrate that the proposed system can effectively encourage responders to provide positive navigation services.
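A generic reputation update of the kind the abstract alludes to can be sketched with a beta-reputation rule; this form and its constants are assumptions, not the paper's actual mechanism:

```python
# Sketch of a generic beta-reputation update: responders gain reputation for
# reports verified as honest and lose it for reports flagged as colluding.
def update_reputation(alpha, beta, honest):
    """alpha counts honest reports, beta dishonest ones."""
    return (alpha + 1, beta) if honest else (alpha, beta + 1)

def reputation_score(alpha, beta):
    return alpha / (alpha + beta)            # expected honesty in [0, 1]

a, b = 1, 1                                  # uninformative prior
for honest in [True, True, False, True]:     # verified report outcomes
    a, b = update_reputation(a, b, honest)
print(f"responder reputation: {reputation_score(a, b):.2f}")
```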