
Showing papers in "Mobile Networks and Applications in 2020"


Journal ArticleDOI
TL;DR: A smartphone inertial accelerometer-based architecture for HAR is designed, and a real-time human activity classification method based on a convolutional neural network (CNN) is proposed, using the CNN for local feature extraction and evaluated on the UCI and Pamap2 datasets.
Abstract: With the widespread application of mobile edge computing (MEC), MEC is serving as a bridge to narrow the gaps between medical staff and patients. Relatedly, MEC is also moving toward supervising in ...

316 citations


Journal ArticleDOI
TL;DR: A new matrix factorization (MF) model with deep features learning, which integrates a convolutional neural network (CNN), named Joint CNN-MF (JCM), which is capable of using the learned deep latent features of neighbors to infer the features of a user or a service.
Abstract: Along with the popularity of intelligent services and mobile services, service recommendation has become a key task, especially recommendation based on quality-of-service (QoS) in the edge computing environment. Most existing service recommendation methods have serious defects and cannot be directly adopted in the edge computing environment. For example, most existing methods cannot learn deep features of users or services, but in the edge computing environment there are a variety of devices with different configurations and functions, and it is necessary to learn the deep features behind those complex devices. In order to fully utilize hidden features, this paper proposes a new matrix factorization (MF) model with deep feature learning, which integrates a convolutional neural network (CNN). The proposed model is named Joint CNN-MF (JCM). JCM is capable of using the learned deep latent features of neighbors to infer the features of a user or a service. Meanwhile, to improve the accuracy of neighbor selection, the proposed model contains a novel similarity computation method. The CNN learns the neighbors' features, forms a feature matrix, and infers the features of the target user or target service. We conducted experiments on a real-world service dataset under a range of data densities, to reflect the complex invocation cases in the edge computing environment. The experimental results verify that, compared to counterpart methods, our method consistently achieves higher QoS prediction accuracy.
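
As a rough illustration of the JCM idea, the sketch below wires a small CNN over a neighbor feature matrix into an MF-style inner product; all layer sizes, the neighbor count k, and the latent dimension d are illustrative assumptions, not values from the paper.

```python
# Minimal JCM-style sketch: a CNN infers user latent features from a
# neighbor feature matrix, combined with service features as in MF.
import tensorflow as tf
from tensorflow.keras import layers

k, d = 10, 16                                     # assumed: k neighbors, d latent dims
neighbors = layers.Input(shape=(k, d, 1))         # neighbor feature matrix
x = layers.Conv2D(8, (3, 3), activation="relu")(neighbors)
x = layers.Flatten()(x)
user_latent = layers.Dense(d)(x)                  # inferred user features

service = layers.Input(shape=(d,))                # service latent features
qos = layers.Dot(axes=1)([user_latent, service])  # MF-style inner product
model = tf.keras.Model([neighbors, service], qos)
model.compile(optimizer="adam", loss="mse")
```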

176 citations


Journal ArticleDOI
TL;DR: This paper proposes to mine the periodic trends of users' consuming behavior from historical records by KNN (K-nearest neighbor) and SVR (support vector regression) based time series prediction, and predicts the next time when a user will re-purchase an item, so that it can recommend the items users have purchased before at the proper time.
Abstract: Recently, more and more mobile apps are employed in the marketing field thanks to technical advances. Mobile marketing apps have become a prevalent way for enterprise marketing. Therefore, providing personalized and accurate recommendation in mobile marketing has become an important and urgent problem, given the large number of items and the limited capability of mobile devices. Recommendation has been investigated widely; however, most existing approaches fail to consider the stability or change of users' behaviors over time. In this paper, we first propose to mine the periodic trends of users' consuming behavior from historical records by KNN (K-nearest neighbor) and SVR (support vector regression) based time series prediction, and to predict the next time when a user will re-purchase an item, so that we can recommend the items users have purchased before at the proper time. Second, we aim to find the regularity of users' purchasing behavior during different life stages and recommend new items that are needed and proper for their current life stage. To solve this, we first mine the mapping model from items to users' life stages. Based on this model, a user's current life stage can be estimated from their recent behaviors. Finally, users are recommended new items that are proper for their estimated life stage. Experimental results show that mining users' consuming behaviors with temporal evolution clearly improves the effectiveness of recommendation.
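
A minimal sketch of the SVR-based re-purchase time prediction described above, assuming inter-purchase intervals in days and an illustrative sliding-window length:

```python
# Predict a user's next re-purchase gap from past inter-purchase intervals.
import numpy as np
from sklearn.svm import SVR

intervals = np.array([30, 28, 31, 29, 30, 27, 32], dtype=float)  # stand-in data (days)
w = 3  # assumed sliding-window length
X = np.array([intervals[i:i + w] for i in range(len(intervals) - w)])
y = intervals[w:]

model = SVR(kernel="rbf").fit(X, y)
next_gap = model.predict(intervals[-w:].reshape(1, -1))[0]
print(f"recommend the item again in ~{next_gap:.0f} days")
```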

104 citations


Journal ArticleDOI
TL;DR: This paper analyzes UAV safety from three aspects, including sensors, communications and multi-UAVs, noting that ad-hoc networking is the mainstream networking method but still carries many potential dangers.
Abstract: The rapid development of the Unmanned Aerial Vehicle (UAV) brings much convenience to our life. However, security and privacy problems caused by UAVs are gradually being exposed. This paper analyzes UAV safety from three aspects, including sensors, communications and multi-UAVs. A UAV relies on different sensors to locate itself and calculate its flight attitude, which means spoofing and attacks on sensors are fatal. On the one hand, wrong information from sensors will lead UAVs to make wrong judgments. On the other hand, damage to sensors can cause UAVs to fail to obtain information and, in severe cases, cause UAVs to crash. Information exchange between a UAV and the Ground Control Station (GCS) relies on communication links, and an unsafe link is susceptible to attacks. Multi-UAV applications rely on a stable network among UAVs. Ad-hoc networking is the mainstream networking method, but it still carries many potential dangers. Besides, another possibility of privacy disclosure caused by aerial photos is also mentioned. These photos often contain private information such as location and shooting time, which is likely to be leaked when the photographer shares photos on social applications. Finally, we summarize the paper and discuss future research directions.

104 citations


Journal ArticleDOI
TL;DR: A sparsity alleviation recommendation approach is presented in this paper that achieves a better product recommendation performance and a hybrid collaborative formula that incorporates product attribute information to generate better recommendation results.
Abstract: The goal of a recommender system is to return related items that users may be interested in. However, recommendation methods suffer from a sparsity problem that affects the generation of recommendation results and, thus, the user experience. Considering different user performance-related information in recommender systems, recommendation models face new sparsity challenges. Specifically, the sparsity problem in our previously proposed Product Attribute Model is due to the subjectivity of product reviews: when users comment on items, they do not cover all aspects of the product. As a result, the user preference information acquired by the model is incomplete after data preprocessing. To solve this problem, a sparsity alleviation recommendation approach is presented in this paper that achieves better product recommendation performance. The new sparsity alleviation algorithm for the recommendation model is designed to solve the sparsity problem by addressing the zero values. Based on the Multiplication Convergence Rule and Constraint Condition, the algorithm replaces zero values through equations. The sparsity problem of the Product Attribute Model can thus be alleviated in view of the accuracy of matrix factorization. We also propose a hybrid collaborative formula that incorporates product attribute information to generate better recommendation results. Experimental results on a sparse dataset from Amazon demonstrate the effectiveness and applicability of our proposed recommendation approach, which outperforms a number of competitive baselines in experiments both with and without sparsity.
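
The paper's zero-replacement equations are not reproduced here; the sketch below only approximates the idea by imputing zero entries before factorization, with an illustrative matrix:

```python
# Approximation of the sparsity-alleviation idea: fill zero entries of the
# user-attribute matrix (here with column means of observed values), then
# factorize. The paper's actual equations derive from its Multiplication
# Convergence Rule, which this sketch does not implement.
import numpy as np
from sklearn.decomposition import NMF

R = np.array([[5., 0., 3.],
              [4., 2., 0.],
              [0., 1., 4.]])
col_mean = R.sum(0) / (R != 0).sum(0)      # mean of observed values per column
R_filled = np.where(R == 0, col_mean, R)   # alleviate sparsity

nmf = NMF(n_components=2, init="nndsvda", max_iter=500)
W = nmf.fit_transform(R_filled)
H = nmf.components_
print(np.round(W @ H, 2))                  # reconstructed preference matrix
```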

91 citations


Journal ArticleDOI
TL;DR: This paper proposes a low-cost, stable, high-precision apple leaf disease identification method using the MobileNet model, and compares its efficiency and precision with well-known CNN models, i.e., ResNet152 and InceptionV3.
Abstract: Alternaria leaf blotch and rust are two common types of apple leaf diseases that severely affect apple yield. Timely and effective detection of apple leaf diseases is crucial for ensuring the healthy development of the apple industry. In general, these diseases are inspected by experienced experts one by one, which is a time-consuming task with unstable precision. Therefore, in this paper, we propose a low-cost, stable, high-precision apple leaf disease identification method built on the MobileNet model. Firstly, compared with general deep learning models, MobileNet is low-cost because it can be easily deployed on mobile devices. Secondly, instead of relying on experienced experts, anyone can perform apple leaf disease inspection stably with the help of our algorithm. Thirdly, the precision of MobileNet is nearly the same as that of existing, more complicated deep learning models. Finally, in order to demonstrate the effectiveness of the proposed method, several experiments were carried out for apple leaf disease identification, comparing efficiency and precision with well-known CNN models, i.e., ResNet152 and InceptionV3. The apple disease datasets (covering the classes Alternaria leaf blotch and rust leaf) were collected by agriculture experts in Shaanxi Province, China.
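
A minimal transfer-learning sketch in the spirit of the proposed method; the 3-class setup (healthy / Alternaria blotch / rust) and input size are assumptions, not taken from the paper:

```python
# Fine-tune a pretrained MobileNet with a small classification head.
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep pretrained features, train only the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # assumed 3 leaf classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```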

79 citations


Journal ArticleDOI
TL;DR: An improved strategy to detect three types of skin cancers in early stages is suggested, and the results show that DE-ANN performs best among the compared traditional classifiers in terms of detection accuracy.
Abstract: As per recent developments in medical science, skin cancer is considered one of the most common diseases of the human body. Although the presence of melanoma is viewed as a form of cancer, it is challenging to predict. If melanoma or other skin diseases are identified in the early stages, prognosis can then be successfully achieved to cure them. For this, medical imaging science plays an essential role in detecting such types of skin lesions quickly and accurately. Our approach aims to improve skin cancer detection accuracy in medical imaging and, further, can be automated using electronic devices such as mobile phones. In this paper, an improved strategy to detect three types of skin cancers in early stages is suggested. The input is a skin lesion image, which the proposed method classifies as cancerous or non-cancerous skin. Image segmentation is implemented using fuzzy C-means clustering to separate homogeneous image regions. Preprocessing is done using different filters to enhance the image attributes, while features are extracted using the RGB color-space, Local Binary Pattern (LBP) and GLCM methods together. Further, for classification, an artificial neural network (ANN) is trained using the differential evolution (DE) algorithm. Various features are accurately estimated to achieve better results on the skin cancer image datasets HAM10000 and PH2. DE-ANN performs best among the compared traditional classifiers in terms of detection accuracy, as discussed in the results section of this paper. The simulated results show that the proposed technique effectively detects skin cancer and produces an accuracy of 97.4%, which is highly accurate compared to other traditional approaches in the same domain.
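
A hedged sketch of the DE-ANN training step: scipy's differential evolution searches the weights of a tiny one-hidden-layer network. The LBP/GLCM feature vectors are stand-ins, and all sizes are illustrative:

```python
# Train a tiny ANN's weights with differential evolution instead of backprop.
import numpy as np
from scipy.optimize import differential_evolution

X = np.random.rand(40, 6)                   # stand-in for LBP/GLCM feature vectors
y = (X[:, 0] + X[:, 1] > 1).astype(float)   # stand-in labels

def forward(w, X):
    W1, b1, W2 = w[:12].reshape(6, 2), w[12:14], w[14:16]
    h = np.tanh(X @ W1 + b1)                # one hidden layer of 2 units
    return 1 / (1 + np.exp(-(h @ W2)))      # sigmoid output

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

res = differential_evolution(loss, bounds=[(-3, 3)] * 16, maxiter=50, seed=0)
print("training MSE:", res.fun)
```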

75 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed BehavDT context-aware model is more effective when compared with the traditional machine learning approaches, in predicting user diverse behaviors considering multi-dimensional contexts.
Abstract: This paper formulates the problem of building a context-aware predictive model based on users' diverse behavioral activities with smartphones. In the area of machine learning and data science, a tree-like model such as the decision tree is considered one of the most popular classification techniques, which can be used to build a data-driven predictive model. The traditional decision tree model typically creates a number of leaf nodes as decision nodes that represent context-specific rigid decisions, and consequently may cause an overfitting problem in behavior modeling. However, in many practical scenarios within the context-aware environment, generalized outcomes could play an important role in effectively capturing user behavior. In this paper, we propose a behavioral decision tree, the "BehavDT" context-aware model, which takes into account user behavior-oriented generalization according to the individual's preference level. The BehavDT model outputs not only generalized decisions but also context-specific decisions in relevant exceptional cases. The effectiveness of our BehavDT model is studied by conducting experiments on real smartphone datasets of individual users. Our experimental results show that the proposed BehavDT context-aware model is more effective than traditional machine learning approaches in predicting users' diverse behaviors considering multi-dimensional contexts.
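
A rough analogy to BehavDT's behavior-oriented generalization, approximated here with a standard decision tree whose min_samples_leaf knob plays the role of the preference level; the contexts and labels are illustrative, not the paper's algorithm:

```python
# More generalized vs. more context-specific trees on smartphone contexts.
from sklearn.tree import DecisionTreeClassifier

# multi-dimensional contexts: [hour_of_day, is_weekend, location_id]
X = [[9, 0, 1], [9, 1, 1], [20, 0, 2], [21, 1, 2], [13, 0, 3]]
y = ["reject", "answer", "answer", "answer", "reject"]  # phone-call behavior

generalized = DecisionTreeClassifier(min_samples_leaf=2).fit(X, y)
specific = DecisionTreeClassifier(min_samples_leaf=1).fit(X, y)
print(generalized.predict([[10, 1, 1]]), specific.predict([[10, 1, 1]]))
```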

75 citations


Journal ArticleDOI
TL;DR: The experimental results show that the DSEGA algorithm achieves the shortest response time across the services, data components and edge servers.
Abstract: In the information age, the amount of data is huge and growing exponentially. In addition, most application services are interdependent with data, so they can only be executed when driven by that data. In fact, such data-intensive service deployment requires good coordination among different edge servers. It is not easy to handle such issues while data transmission and load balancing conditions change constantly between edge servers and data-intensive services. Accordingly, this paper proposes a Data-intensive Service Edge deployment scheme based on a Genetic Algorithm (DSEGA). Firstly, a data-intensive edge service composition and an edge server model are generated based on a graph theory algorithm; then five algorithms, Genetic Algorithm (GA), Simulated Annealing (SA), Ant Colony Optimization (ACO), Optimized Ant Colony Optimization (ACO_v) and Hill Climbing, are respectively used to obtain an optimal deployment scheme, so that the response time of the data-intensive edge service deployment reaches a minimum under storage constraints and load balancing conditions. The experimental results show that the DSEGA algorithm achieves the shortest response time across the services, data components and edge servers.
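
A minimal genetic-algorithm sketch for service-to-server deployment; the response-time model, the load-balance penalty, and the GA settings are illustrative simplifications, not the paper's DSEGA:

```python
# Simplified GA: evolve assignments of services to edge servers so that
# total response time is minimized under a crude load-balance constraint.
import numpy as np

rng = np.random.default_rng(0)
n_services, n_servers = 8, 3
rt = rng.uniform(1, 10, (n_services, n_servers))  # stand-in response times

def fitness(plan):                        # plan[i] = server for service i
    load = np.bincount(plan, minlength=n_servers)
    penalty = 100 * max(0, load.max() - 4)         # load-balance penalty
    return rt[np.arange(n_services), plan].sum() + penalty

pop = rng.integers(0, n_servers, (30, n_services))
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]               # selection
    children = parents[rng.integers(0, 10, 30)].copy()   # reproduction
    mask = rng.random(children.shape) < 0.1              # mutation
    children[mask] = rng.integers(0, n_servers, mask.sum())
    pop = children
print("best response time:", min(fitness(p) for p in pop))
```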

74 citations


Journal ArticleDOI
TL;DR: This paper detects faces using the Viola-Jones algorithm followed by tracking through the Kanade-Lucas-Tomasi algorithm, recognizes facial expressions using the proposed light-weight convolutional neural network (CNN), and predicts human behaviors using an occurrence matrix acquired from facial recognition and expressions.
Abstract: Human behavior analysis from big multimedia data has become a trending research area with applications to various domains such as surveillance, medical, sports, and entertainment. Facial expression analysis is one of the most prominent clues to determine the behavior of an individual; however, it is very challenging due to variations in face poses, illumination, and different facial tones. In this paper, we analyze human behavior using facial expressions by considering some famous TV-series videos. Firstly, we detect faces using the Viola-Jones algorithm, followed by tracking through the Kanade-Lucas-Tomasi (KLT) algorithm. Secondly, we use histogram of oriented gradients (HOG) features with a support vector machine (SVM) classifier for facial recognition. Next, we recognize facial expressions using the proposed light-weight convolutional neural network (CNN). We utilize data augmentation techniques to overcome the issue of faces appearing from different views and lighting conditions in video data. Finally, we predict human behaviors using an occurrence matrix acquired from facial recognition and expressions. The subjective and objective experimental evaluations prove better performance for both facial expression recognition and human behavior understanding.
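
A sketch of the detection front-end only, using OpenCV's Haar cascade for Viola-Jones face detection (the KLT tracking, HOG+SVM recognition, and CNN expression stages would follow); the input frame is an assumed sample file:

```python
# Viola-Jones face detection with OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("frame.png")          # assumed sample video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:               # draw detected face boxes
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```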

65 citations


Journal ArticleDOI
TL;DR: The use of HD computing to classify electroencephalography (EEG) error-related potentials for noninvasive brain–computer interfaces is described and achieves on average 5% higher single-trial classification accuracy compared to a conventional machine learning method on this task.
Abstract: The mathematical properties of high-dimensional (HD) spaces show remarkable agreement with behaviors controlled by the brain. Computing with HD vectors, referred to as "hypervectors," is a brain-inspired alternative to computing with numbers. HD computing is characterized by generality, scalability, robustness, and fast learning, making it a prime candidate for application domains such as brain–computer interfaces. We describe the use of HD computing to classify electroencephalography (EEG) error-related potentials for noninvasive brain–computer interfaces. Our algorithm naturally encodes neural activity recorded from 64 EEG electrodes into a single temporal–spatial hypervector without requiring any electrode selection process. This hypervector represents the event of interest, can be analyzed to identify the most discriminative electrodes, and is used for recognition of the subject's intentions. Using the full set of training trials, HD computing achieves on average 5% higher single-trial classification accuracy than a conventional machine learning method on this task (74.5% vs. 69.5%) and offers further advantages: (1) Our algorithm learns fast: using only 34% of training trials it achieves an average accuracy of 70.5%, surpassing the conventional method. (2) The conventional method requires prior domain-expert knowledge, or a separate process, to carefully select a subset of electrodes for a subsequent preprocessor and classifier, whereas our algorithm blindly uses all 64 electrodes, tolerates noise in the data, and produces a hypervector that is intrinsically clustered in HD space; in addition, most preprocessing of the electrode signal can be eliminated while maintaining an average accuracy of 71.7%.
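
A self-contained sketch of the hypervector encoding idea: each electrode gets a random bipolar hypervector, quantized samples are bound to electrode vectors and bundled into one temporal-spatial hypervector; the dimension and quantization are illustrative, not the paper's exact encoder:

```python
# HD-computing primitives: binding (elementwise product), bundling
# (sum + sign), and cosine similarity for classification.
import numpy as np

rng = np.random.default_rng(0)
D, n_electrodes = 10_000, 64
item = rng.choice([-1, 1], size=(n_electrodes, D))   # electrode hypervectors

def encode(sample_signs):
    # bind each electrode's quantized sample (+1/-1) to its hypervector,
    # then bundle into a single hypervector
    bound = item * sample_signs[:, None]
    return np.sign(bound.sum(axis=0))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

proto_err = encode(rng.choice([-1, 1], n_electrodes))  # stand-in class prototype
query = encode(rng.choice([-1, 1], n_electrodes))
print("similarity to 'error' prototype:", cosine(query, proto_err))
```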

Journal ArticleDOI
TL;DR: The results of the systematic review reveal that energy consumption is the most fundamental issue in WSN; however, it has not received due attention from researchers and practitioners, even though addressing it can contribute to improving energy efficiency.
Abstract: In wireless sensor networks (WSN), routing is quite a challenging area of research, where packets are forwarded through multiple nodes to the base station. The packets sent over the network should be shared in an energy-efficient manner, also considering the residual battery power to enhance the network lifetime. Existing energy-efficient routing solutions and surveys have been presented, but a Systematic Literature Review (SLR) is still needed to identify the valid problems. This paper performs an SLR of energy-efficient routing, starting with 172 papers; 50 papers are shortlisted after filtration based on quality evaluation and selection criteria ensuring relevance to energy efficiency. We first present literature that includes schemes for threshold-sensitive, adaptive periodic threshold-sensitive, power-efficient, hybrid energy-efficient distribution and low-energy adaptive mechanisms. The results of the systematic review reveal that energy consumption is the most fundamental issue in WSN; however, it has not received due attention from researchers and practitioners, even though addressing it can contribute to improving energy efficiency. The review also elaborates the weaknesses of existing approaches that make them inappropriate for energy-efficient routing in WSN.

Journal ArticleDOI
TL;DR: A system is designed and developed that combines an infrastructure made of sensors to collect real-time data in a University Campus with a rich web-based application to interact with spatio-temporal data, available on a public interactive touch monitor.
Abstract: Interconnected computational devices in the Internet of Things (IoT) context make it possible to collect real-time data about a specific environment. The IoT paradigm can be exploited together with data visualization techniques to put into effect intelligent environments, where pervasive technologies enable people to experience and interact with the generated data. In this paper, we present a case study where these emerging areas and related technologies have been explored to benefit communities, making their members actively involved as central players of such an intelligent environment. To give practical effect to our approach, we designed and developed a system, named Smart Campus, composed of: i) an infrastructure made of sensors to collect real-time data in a University Campus, and ii) a rich web-based application to interact with spatio-temporal data, available on a public interactive touch monitor. To validate the system and grasp insights, we involved 135 students through a survey, and we extracted meaningful data from the interactive sessions with the public display. Results show that this Campus community understood the potential of the system and students are willing to actively contribute to it, pushing us to further investigate future scenarios where students can participate with ideas, visualizations/services to integrate into the web-based system, as well as sensors to plug into the infrastructure.

Journal ArticleDOI
TL;DR: This paper proposes the design and development of a digital twin for a case study of a pharmaceutical company based on simulators, solvers and data analytic tools that allow these functions to be connected in an integral interface for the company.
Abstract: Digital twin technology consists of creating virtual replicas of objects or processes that simulate the behavior of their real counterparts. The objective is to analyze their behavior in certain cases in order to improve their effectiveness. Applied to products, machines and even complete business ecosystems, the digital twin model can reveal information from the past, optimize the present and even predict the future performance of the different areas analyzed. In the context of supply chains, digital twins are changing the way companies do business, providing a range of options to facilitate collaborative environments and data-based decision making, and making business processes more robust. This paper proposes the design and development of a digital twin for a case study of a pharmaceutical company. The technology used is based on simulators, solvers and data analytics tools that allow these functions to be connected in an integral interface for the company.

Journal ArticleDOI
TL;DR: Iktishaf, a big data tool developed over Apache Spark for traffic-related event detection from Twitter data in Saudi Arabia, uses three machine learning (ML) algorithms to build multiple classifiers to detect eight event types.
Abstract: Road transportation is the backbone of modern economies, despite annually costing millions of human deaths and injuries and trillions of dollars. Twitter is a powerful information source for transportation, but major challenges in big data management and Twitter analytics need addressing. We propose Iktishaf, a big data tool developed over Apache Spark for traffic-related event detection from Twitter data in Saudi Arabia. It uses three machine learning (ML) algorithms to build multiple classifiers to detect eight event types. The classifiers are validated using widely used criteria and against external sources. The Iktishaf Stemmer improves text preprocessing, event detection and the feature space. Using 2.5 million tweets, we detect events without prior knowledge, including the KSA national day, a fire in Riyadh, rains in Makkah and Taif, and the inauguration of the Al-Haramain train. We are not aware of any work, apart from ours, that uses big data technologies for event detection of road traffic events from tweets in Arabic. Iktishaf provides hybrid human-ML methods and is a prime example of bringing together AI theory, big data processing, and human cognition applied to a practical problem.
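
A hedged sketch of the classification stage only: a TF-IDF plus linear classifier over tweet text. Iktishaf itself runs on Apache Spark with its own Arabic stemmer; the English texts and labels below are illustrative stand-ins:

```python
# Tweet event-type classification with a simple text pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["heavy traffic on king fahd road", "fire near the market",
          "accident at the exit 10 bridge", "lovely weather today"]
labels = ["traffic", "fire", "accident", "none"]  # stand-in event types

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(tweets, labels)
print(clf.predict(["car crash reported on the highway"]))
```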

Journal ArticleDOI
TL;DR: This paper proposes an end-to-end infrared small target detection model (called CDAE) based on a denoising autoencoder network and a convolutional neural network, which treats small targets as "noise" in infrared images and transforms small target detection tasks into denoising problems.
Abstract: Infrared small target detection is a crucial technology for infrared early-warning tasks, infrared imaging guidance, and large field-of-view target monitoring. In this paper, we propose an end-to-end infrared small target detection model (called CDAE) based on a denoising autoencoder network and a convolutional neural network, which treats small targets as "noise" in infrared images and transforms small target detection tasks into denoising problems. In addition, we use the perceptual loss to address the loss of background texture features in the encoding process, and propose the structural loss to compensate for the defect of the perceptual loss where small targets appear. We compare ten methods on six sequences and one single-frame dataset. Experimental results show that our method obtains the highest SCRG value on four sequences and the highest BSF value on six sequences. The ROC curves show that our method achieves the best results on all test sets.
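
A minimal convolutional denoising-autoencoder sketch in the spirit of CDAE, where the network learns to map an image containing small targets to its clean background so the residual highlights the targets; the architecture and input size are illustrative:

```python
# Convolutional denoising autoencoder: input image with targets,
# training target is the clean background image.
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(128, 128, 1))
x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D(2)(x)
out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

cdae = tf.keras.Model(inp, out)
cdae.compile(optimizer="adam", loss="mse")
# after training, |input - cdae(input)| highlights the small targets
```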

Journal ArticleDOI
TL;DR: This paper designs an image processing module for a mobile device based on the characteristics of a CNN, and proposes a lightweight network structure for optical character recognition (OCR) on specific data sets.
Abstract: Deep learning (DL) is a hot topic in current pattern recognition and machine learning. DL has unprecedented potential to solve many complex machine learning problems and is clearly attractive in the framework of mobile devices. The availability of powerful pattern recognition tools creates tremendous opportunities for next-generation smart applications. A convolutional neural network (CNN) enables data-driven learning and extraction of highly representative, hierarchical image features from appropriate training data. However, for some data sets, the CNN classification method needs adjustments in its structure and parameters. Mobile computing places certain requirements on the running time and network weights of the neural network. In this paper, we first design an image processing module for a mobile device based on the characteristics of a CNN. Then, we describe how to use the mobile device to collect data, process the data, and construct the data set. Finally, considering the computing environment and data characteristics of mobile devices, we propose a lightweight network structure for optical character recognition (OCR) on specific data sets. The proposed CNN-based method has been validated by comparison with the results of existing optical character recognition methods.
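
A sketch of what a lightweight character-recognition CNN sized for mobile use might look like; the 32x32 glyph crops and 36 output classes (0-9 plus A-Z) are assumptions, not the paper's network:

```python
# Small CNN with a parameter count suited to mobile deployment.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(36, activation="softmax"),   # assumed 36 character classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(f"{model.count_params():,} parameters")  # small enough for mobile
```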

Journal ArticleDOI
TL;DR: This paper deals with the interconnected Smart City and Safe City concepts, presents the systems common to both, and describes how smartness can improve safety within the specific Smart Safety and Smart Healthcare systems.
Abstract: With current environmental and social challenges in mind, we can say that Smart Cities are becoming a necessity for modern society. Many cities in the world are already transforming to become more efficient, greener and also safer. According to our studies, the field of safety receives less focus in development strategies than other fields of smart development. However, to create a real Smart City, it is necessary to understand the city as a complex environment, with its smart and safe concepts as interconnected parts. This paper deals with the interconnected Smart City and Safe City concepts and presents the systems that are common to both concepts. Systems are divided into layers, and the features needed in every system are further summarized. A number of studied and compared works dealing with the respective systems from various points of view, our own experiences, and communication with executives responsible for the smart development of the city of Tampere resulted in the definitions presented here, describing both concepts and each of the system layers. The big picture is visualised by a structure for the concepts and their systems, where the reader can see their relationships. An overview of how smartness can improve safety is presented within the description of the specific Smart Safety and Smart Healthcare systems.

Journal ArticleDOI
TL;DR: A balanced service offloading method, abbreviated BSOM, is proposed to improve the resource utilization and load balance for all the ENs while protecting the privacy information and satisfying the time requirement.
Abstract: Nowadays, due to advances in mobile and wireless communication, mobile devices are widely used in our daily life. Meanwhile, diverse applications have been developed on mobile devices to satisfy the various requirements of mobile users. Correspondingly, a large number of services are produced by the mobile devices. Since mobile devices have limitations on battery capacity, physical size, etc., they can hardly complete all the services. To relieve this problem, driven by edge computing, the central units (CUs) in fifth-generation wireless systems (5G) can be enhanced into edge nodes (ENs) for processing. However, during the transmission of edge services, privacy leakage may occur, and the overall performance of the networks needs to be taken into consideration. In this paper, an optimization problem is defined to improve the resource utilization and load balance for all the ENs while protecting the privacy information and satisfying the time requirement. Then, a balanced service offloading method, abbreviated BSOM, is proposed. Finally, abundant experiments and evaluations are conducted to validate that our proposed method is both effective and feasible.

Journal ArticleDOI
TL;DR: A Secure and Robust Healthcare-based Blockchain (SRHB) approach with Attribute-based Encryption is proposed to transmit healthcare data securely and offer a better solution for these issues.
Abstract: Electronic Health Records (EHR) are shared to improve the quality and decrease the cost of healthcare. This is challenging because of technical complexities and privacy compatibility issues. Existing systems popularly use cloud-based healthcare; however, such systems suffer from content privacy and secure data transmission issues during the gathering and analysis of personal health records in cloud environments. The patient's records are shared with patients, healthcare organizations, and insurance agents in a cloud environment. To offer a better solution for these issues, a Secure and Robust Healthcare-based Blockchain (SRHB) approach is proposed with Attribute-based Encryption to transmit the healthcare data securely. The proposed technique collects data from the patient using wearable devices in a centralized healthcare system. It observes the patient's health condition, such as sleep, heartbeat, and walking distance. The obtained patient data are uploaded and stored on a cloud storage server. The doctor reviews the patient's clinical tests, genetic information, and observation report to prescribe medicine and precautions for a speedy recovery. Meanwhile, an insurance agent also evaluates the patient's clinical tests, genetic information, and observation report to release the insurance amount for medical treatments. The blockchain concept is implemented to maintain the privacy of individual patient records: each record creates a separate block in a chain, and any change to a block is added as a new entry. Based on the experimental evaluation, SRHB reduces the Average Delay (AD) by 2.85 seconds and the System Execution Time (SET) by 1.69 seconds, and improves the Success Rate (SR) by 28% compared to conventional techniques.
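
A toy sketch of the record-chaining idea only: each change to a patient record becomes a new block whose hash covers the previous block. Real deployments add consensus, signatures, and the Attribute-based Encryption the paper uses:

```python
# Minimal hash-linked chain of patient-record blocks.
import hashlib, json, time

def make_block(record, prev_hash):
    block = {"time": time.time(), "record": record, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block({"patient": "p1", "heartbeat": 72}, "0" * 64)]
chain.append(make_block({"patient": "p1", "heartbeat": 75},  # new entry
                        chain[-1]["hash"]))
print(chain[-1]["prev"] == chain[0]["hash"])  # True: blocks are linked
```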

Journal ArticleDOI
TL;DR: The user's perception of privacy is influenced by knowledge of the data used by installed applications, and the analysis shows that applications access much more data than they need.
Abstract: Our smartphones are full of applications and data that analytically organize, facilitate and describe our lives. We install applications for the most varied reasons, to inform us, to have fun and for work, but, unfortunately, we often install them without reading the terms and conditions of use. The result is that our privacy is increasingly at risk. Considering this scenario, in this paper, we analyze the user's perception of privacy while using smartphone applications. In particular, we formulate two different hypotheses: 1) the perception of privacy is influenced by knowledge of the data used by the installed applications; 2) applications access much more data than they need. The study is based on two questionnaires (within-subject experiments with 200 volunteers) and on the lists of installed apps (30 volunteers). Results show a widespread abuse of data related to location, personal contacts, camera, Wi-Fi network list, running apps list, and vibration. An in-depth analysis shows that some features are more relevant to certain groups of users (e.g., adults are mainly worried about contacts and Wi-Fi connection lists; iOS users are sensitive to smartphone vibration; female participants are worried about possible misuse of the smartphone camera).

Journal ArticleDOI
TL;DR: This paper develops a method for underwater real-time recognition and tracking of multiple objects based on "You Only Look Once" (YOLO), which provides a very fast and accurate tracker.
Abstract: Deep-sea organism automatic tracking has rarely been studied because of a lack of training data. However, it is extremely important for underwater robots to recognize and predict the behavior of organisms. In this paper, we develop a method for underwater real-time recognition and tracking of multiple objects based on "You Only Look Once" (YOLO), which provides us with a very fast and accurate tracker. First, we remove the haze caused by the turbidity of the water from a captured image. After that, we apply YOLO to recognize and track marine organisms, including shrimp, squid, crab and shark. The experiments demonstrate that our developed system shows satisfactory performance.
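
A hedged sketch of the two-stage pipeline, not the authors' implementation: dehazing is approximated here by simple CLAHE contrast enhancement, and a modern YOLO implementation from the ultralytics package stands in for the paper's YOLO detector:

```python
# Enhance an underwater frame, then run YOLO detection on it.
import cv2
from ultralytics import YOLO

img = cv2.imread("underwater_frame.png")          # assumed sample frame
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
l = cv2.createCLAHE(clipLimit=2.0).apply(l)       # crude haze/contrast fix
img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

model = YOLO("yolov8n.pt")                        # pretrained weights
results = model(img)                              # detections for the frame
print(results[0].boxes.xyxy)                      # bounding boxes
```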

Journal ArticleDOI
TL;DR: The physical layer security performance of the mobile vehicular networks over N-Nakagami fading channels is investigated and exact closed-form expressions for the probability of strictly positive secrecy capacity, secrecy outage probability, and average secrecy capacity are derived.
Abstract: Vehicular communication is an emergent technology with a promising future, which can promote the development of mobile vehicular networks. Due to the broadcast nature of wireless channels, vehicular user mobility, and the diversity of vehicular network structures, the physical layer security of mobile vehicular networks is a major concern. In this paper, the physical layer security performance of mobile vehicular networks over N-Nakagami fading channels is investigated. Exact closed-form expressions for the probability of strictly positive secrecy capacity (SPSC), secrecy outage probability (SOP), and average secrecy capacity (ASC) are derived. Monte-Carlo simulation is used to verify the secrecy performance under different conditions. We further investigate the relationship between secrecy performance and the system parameters.
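
A Monte-Carlo sketch of the secrecy metrics: for Nakagami-m fading the instantaneous SNR is gamma-distributed, and the secrecy capacity is the positive gap between the main and eavesdropper channel rates. The fading parameter m, the average SNRs, and the target rate Rs are illustrative, and a single Nakagami-m link stands in for the paper's N-Nakagami cascade:

```python
# Estimate SPSC and SOP by simulation over Nakagami-m fading.
import numpy as np

rng = np.random.default_rng(0)
n, m, snr_m, snr_e, Rs = 1_000_000, 2.0, 10.0, 3.0, 0.5

g_m = rng.gamma(shape=m, scale=snr_m / m, size=n)  # main-link SNR samples
g_e = rng.gamma(shape=m, scale=snr_e / m, size=n)  # eavesdropper SNR samples
Cs = np.maximum(np.log2(1 + g_m) - np.log2(1 + g_e), 0)  # secrecy capacity

print("SPSC ~", np.mean(Cs > 0))      # prob. of strictly positive capacity
print("SOP  ~", np.mean(Cs < Rs))     # secrecy outage probability
```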

Journal ArticleDOI
TL;DR: This work proposes a mobile edge computing enabled federated learning framework, called FedMEC, which integrates the model partition technique and differential privacy simultaneously, and applies a differentially private data perturbation method to prevent privacy leakage from the local model parameters.
Abstract: Federated learning is a recently proposed paradigm that presents significant advantages in privacy-preserving machine learning services. It enables deep learning applications on mobile devices, where a deep neural network (DNN) is trained in a decentralized manner among thousands of edge clients. However, directly applying the federated learning algorithm to the mobile edge computing environment incurs unacceptable computation costs on mobile edge devices. Moreover, during the training process, frequent model parameter exchanges between participants and the central server increase the possibility of leaking the users' sensitive training data. Aiming at reducing the heavy computation cost of DNN training on edge devices while providing strong privacy guarantees, we propose a mobile edge computing enabled federated learning framework, called FedMEC, which integrates the model partition technique and differential privacy simultaneously. In FedMEC, the most complex computations can be outsourced to the edge servers by splitting a DNN model into two parts. Furthermore, we apply a differentially private data perturbation method to prevent privacy leakage from the local model parameters, in which the updates from an edge device to the edge server are perturbed by Laplace noise. To validate the proposed FedMEC, we conduct a series of experiments on an image classification task under federated learning settings. The results demonstrate the effectiveness and practicality of our FedMEC scheme.
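
A sketch of the Laplace perturbation step in the spirit of FedMEC; the sensitivity bound, epsilon, and the update vector are illustrative assumptions, not values from the paper:

```python
# Laplace mechanism: noise scale b = sensitivity / epsilon.
import numpy as np

def laplace_perturb(update, sensitivity=1.0, epsilon=0.5, rng=None):
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return update + rng.laplace(0.0, scale, size=update.shape)

local_update = np.random.default_rng(0).normal(size=100)  # stand-in update
# clipping bounds the L1 sensitivity before adding noise (assumed bound)
noisy_update = laplace_perturb(np.clip(local_update, -1, 1))
```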

Journal ArticleDOI
TL;DR: This article investigates and analyzes the key technologies of IEEE 802.11be including multi-band operation, multi-AP coordination, enhanced link reliability, and latency & jitter guarantee and gives a brief overview on the standardization process.
Abstract: IEEE 802.11ax for the Wireless Local Area Network (WLAN), one of the most important wireless networks, will be released in 2020. In recent years, ultra-high definition video services and real-time applications have attracted increasing attention. Therefore, the task group for the next generation WLAN (beyond IEEE 802.11ax), IEEE 802.11be (TGbe), was formally established in 2019, which regards achieving extremely high throughput (EHT) as its core technical objective. This article investigates and analyzes the key technologies of IEEE 802.11be, and further provides our perspectives and insights on them. Specifically, this article gives a brief overview of IEEE 802.11be, including the target scenario and technical objective, an overview of the key technologies, and the standardization process. After that, we further investigate, analyze and provide perspectives on the key technologies of IEEE 802.11be, including multi-band operation, multi-AP coordination, enhanced link reliability, and latency & jitter guarantees. To the best of our knowledge, this is the first work to investigate, analyze and provide insights on IEEE 802.11be.

Journal ArticleDOI
TL;DR: The lesson the authors have learnt is that a human-in-the-loop approach may significantly help to clean and re-organize noisy datasets for an empowered ML design experience.
Abstract: Supervised Machine Learning (ML) requires that smart algorithms scrutinize a very large number of labeled samples before they can make correct predictions. Yet even this is not always sufficient. In our experience, in fact, a neural network trained with a huge database comprised of over fifteen million water meter readings had essentially failed to predict when a meter would malfunction/need disassembly based on a history of water consumption measurements. As a second step, we developed a methodology, based on the enforcement of specialized data semantics, that allowed us to extract only those samples for training that were not contaminated by data impurities. With this methodology, we re-trained the neural network up to a prediction accuracy of over 80%. Yet, we simultaneously realized that the new training dataset was significantly different from the initial one in statistical terms, and much smaller as well. We had reached a sort of paradox: we had alleviated the initial problem with a better interpretable model, but we had changed the replicated form of the initial data. To reconcile that paradox, we further enhanced our data semantics with the contribution of field experts. This finally led to the extrapolation of a training dataset truly representative of regular/defective water meters and able to describe the underlying statistical phenomenon, while still providing an excellent prediction accuracy for the resulting classifier. At the end of this path, the lesson we have learnt is that a human-in-the-loop approach may significantly help to clean and re-organize noisy datasets for an empowered ML design experience.

Journal ArticleDOI
TL;DR: This paper proposes a framework, AdDroid, for analyzing and detecting malicious behaviour in Android applications based on various combinations of artefacts called Rules; it has exceptionally low computational complexity, making it possible to analyze applications in real time.
Abstract: Recent years have witnessed huge growth in Android malware development. Colossal reliance on Android applications for day-to-day work and their massive development call for an automated mechanism to distinguish malicious applications from benign ones. A significant amount of research has been devoted to analyzing and mitigating this growing problem; however, attackers are using ever more complicated techniques to evade detection. This paper proposes a framework, AdDroid, for analyzing and detecting malicious behaviour in Android applications based on various combinations of artefacts called Rules. The artefacts represent actions of an Android application, such as connecting to the Internet, uploading a file to a remote server, or installing another package on the device. AdDroid employs an ensemble-based machine learning technique where Adaboost is combined with traditional classifiers in order to train a model, founded on static analysis of Android applications, that is capable of recognizing malicious applications. Feature selection and extraction techniques are used to obtain the most distinguishing Rules. The proposed model is created using a dataset comprising 1420 Android applications, with 910 malicious and 510 benign applications. Our proposed system achieved an accuracy of 99.11% with a 98.61% True Positive (TP) and 99.33% True Negative (TN) rate. The high TP and TN rates reflect the efficacy on both the major and minor classes. The proposed solution also has exceptionally low computational complexity, making it possible to analyze applications in real time.
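
A sketch of the ensemble step: AdaBoost over decision stumps on binary static-analysis features. The random feature vectors below merely stand in for AdDroid's Rules; real features would come from manifest and API analysis:

```python
# AdaBoost classifier over binary APK features (stand-in data).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1420, 25))   # 25 hypothetical Rules per app
y = rng.integers(0, 2, size=1420)         # 1 = malicious, 0 = benign

clf = AdaBoostClassifier(n_estimators=100).fit(X[:1000], y[:1000])
print("held-out accuracy:", clf.score(X[1000:], y[1000:]))
```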

Journal ArticleDOI
TL;DR: A novel marginal resource allocation decision support model to assist cloud providers in managing cloud SLAs before their execution, covering all possible scenarios, including whether a consumer is new or not, and whether the consumer requests the same or different marginal resources.
Abstract: One of the significant challenges for cloud providers is how to manage resources wisely and how to form a viable service level agreement (SLA) with consumers to avoid any violation or penalties. Some consumers make an agreement for a fixed amount of resources, these being the required resources needed to execute their business. Consumers may need additional resources on top of these fixed resources, known as marginal resources, which are only consumed and paid for in case of an increase in business demand. In such contracts, both parties agree on a pricing model in which a consumer pays upfront only for the fixed resources and pays for the marginal resources when they are used. Marginal resource allocation is a challenge for service providers, particularly small to medium-sized ones, as it can affect the usage of their resources and consequently their profits. This paper proposes a novel marginal resource allocation decision support model to assist cloud providers in managing cloud SLAs before their execution, covering all possible scenarios, including whether a consumer is new or not, and whether the consumer requests the same or different marginal resources. The model relies on the capabilities of the user-based collaborative filtering method with an enhanced top-k nearest neighbor algorithm and a fuzzy logic system to make a decision. The proposed framework assists cloud providers in managing their resources in an optimal way and avoiding violations or penalties. Finally, the performance of the proposed model is shown through a cloud scenario which demonstrates that our proposed approach can assist cloud providers in managing their resources wisely to avoid violations.
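
A sketch of the user-based collaborative-filtering core: cosine similarity selects the top-k most similar consumers, whose histories estimate the marginal-resource demand. The fuzzy-logic decision layer is omitted, and the usage matrix and k are illustrative:

```python
# Top-k user-based collaborative filtering over resource-usage histories.
import numpy as np

usage = np.array([[4., 0., 2.],      # per-consumer marginal-resource usage
                  [5., 1., 0.],
                  [1., 5., 4.]])
target = np.array([4., 0., 1.])      # new consumer's known usage

sims = usage @ target / (np.linalg.norm(usage, axis=1)
                         * np.linalg.norm(target))
top_k = np.argsort(sims)[::-1][:2]              # k = 2 nearest neighbors
estimate = usage[top_k].mean(axis=0)            # predicted marginal demand
print(top_k, np.round(estimate, 2))
```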

Journal ArticleDOI
TL;DR: This paper proposed an efficient and secure identity-based encryption scheme under the RSA assumption providing equality test, and proved the security of the scheme for one-way secure against chosen-identity and chosen-ciphertext attacks (OW-ID-CCA) by means of the random oracle model.
Abstract: Wireless body area network (WBAN) constitutes a widely implemented technique for remote acquisition and monitoring of patient health-related information via the use of embodied sensors. Given that security and privacy protection, including, but not limited to, user authentication, integrity, and confidentiality, are both key challenges and a matter of deep concern when it comes to the deployment of emerging technologies in healthcare applications, state-of-the-art measures and solutions are needed to fully address security and privacy concerns in an effective and sensible manner by considering all the benefits and limitations of remote healthcare systems. In this paper, we propose an efficient and secure identity-based encryption scheme under the RSA assumption providing an equality test. We then prove the security of our scheme, namely one-way security against chosen-identity and chosen-ciphertext attacks (OW-ID-CCA), in the random oracle model. The performance evaluation results indicate that our scheme outperforms other security schemes in terms of providing relatively low computational cost and stable compatibility with WBAN applications.

Journal ArticleDOI
TL;DR: The proposed DeepML system is a deep long short-term memory (LSTM) based system for indoor localization using magnetic and light sensors on smartphones, which first builds bimodal images by data preprocessing, and then trains a deep LSTM network in the offline phase.
Abstract: With the increasing demand for location-based services, indoor localization has attracted great interest. In this paper, we present DeepML, a deep long short-term memory (LSTM) based system for indoor localization using magnetic and light sensors on smartphones. We experimentally verify the feasibility of using bimodal data from magnetic and light sensors for indoor localization in closed environments where there is no ambient light. We then design the DeepML system, which first builds bimodal images by data preprocessing, and then trains a deep LSTM network in the offline phase. Newly received magnetic field and light data are then exploited to estimate the location of the mobile device using a probabilistic method. Extensive experiments verify the effectiveness of the proposed DeepML system.
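
A hedged sketch of a DeepML-style network: an LSTM over short sequences of magnetic and light readings, classifying among location grid cells. The sequence length, feature count, and cell count are assumptions, not values from the paper:

```python
# LSTM over bimodal sensor sequences with a probabilistic location output.
import tensorflow as tf
from tensorflow.keras import layers

n_steps, n_feats, n_cells = 20, 4, 50   # 3-axis magnetometer + light sensor
model = tf.keras.Sequential([
    layers.Input(shape=(n_steps, n_feats)),
    layers.LSTM(64),
    layers.Dense(n_cells, activation="softmax"),  # per-cell probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```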