
Showing papers in "Wireless Communications and Mobile Computing in 2020"


Journal ArticleDOI
TL;DR: This survey comprehensively covers the location privacy and trust management models of the VANETs and discusses the security and privacy issues in the VCC, which fills the gap of existing surveys.
Abstract: Vehicular networks are becoming a prominent research field in the intelligent transportation system (ITS) due to their nature and characteristics of providing high-level road safety and optimized traffic management. Vehicles are equipped with heavy communication equipment, which requires a high power supply, an on-board computing device, and data storage devices. Many wireless communication technologies are deployed to maintain and enhance the traffic management system. The ITS is capable of providing services to the traffic authorities and precautionary measures to the drivers and passengers. Several methods have been proposed for discussing the security and privacy issues of vehicular ad hoc networks (VANETs) and vehicular cloud computing (VCC). They have received a great deal of attention from researchers around the world since they are new technologies that can improve road safety and enhance traffic flow by utilizing the vehicles' resources and communication systems. Firstly, the VANETs are presented, including a basic overview, characteristics, threats, and attacks. The location privacy methodologies are elaborated, which can protect the confidential information of the vehicle, such as location details and driver information. Secondly, the trust management models in the VANETs are comprehensively discussed, followed by a comparison of the cryptography and trust models in terms of different kinds of attacks. Then, the simulation tools and applications of the VANETs are discussed, and the evolution from the VANETs to VCC in the vehicular network is presented. Thirdly, the VCC is discussed in terms of its architecture and its security and privacy issues. Finally, several research challenges on the VANETs and VCC are presented. In sum, this survey comprehensively covers the location privacy and trust management models of the VANETs and discusses the security and privacy issues in the VCC, which fills the gap of existing surveys. It also indicates the research challenges in the VANETs and VCC.

86 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel method that robustly and efficiently achieves waste management by predicting the probability of the waste level in trash bins using machine learning and graph theory, and saves time by finding the best route for waste collection.
Abstract: Along with the development of the Internet of Things (IoT), waste management has emerged as a serious issue. Waste management is a daily task in urban areas, one that requires a large amount of labour resources and affects natural, budgetary, efficiency, and social aspects. Many approaches have been proposed to optimize waste management, such as the nearest neighbour search, ant colony optimization, genetic algorithms, and particle swarm optimization. However, the results are still too vague and cannot be applied in real systems, such as universities or cities. Recently, there has been a trend of combining optimal waste management strategies with low-cost IoT architectures. In this paper, we propose a novel method that robustly and efficiently achieves waste management by predicting the probability of the waste level in trash bins. By using machine learning and graph theory, the system can optimize waste collection along the shortest path. This article presents an investigation case implemented at the campus of Ton Duc Thang University (Vietnam) to evaluate the performance and practicability of the system's implementation. We examine data transfer on the LoRa module and demonstrate the advantages of the proposed system, which is implemented through a simple circuit designed for low cost, ease of use, and replaceability. Our system saves time by finding the best route for waste collection.
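The shortest-path routing step described above can be sketched with Dijkstra's algorithm over a weighted graph of bin locations. The campus graph below is hypothetical (node names and edge weights are invented for illustration); the paper's actual graph model and route optimizer are not specified here.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source to all reachable nodes.

    graph: dict mapping node -> list of (neighbor, edge_weight) pairs.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical campus graph: nodes are bin locations, weights are path lengths (m)
campus = {
    "depot": [("A", 40), ("B", 25)],
    "A": [("depot", 40), ("B", 10), ("C", 50)],
    "B": [("depot", 25), ("A", 10), ("C", 30)],
    "C": [("A", 50), ("B", 30)],
}
print(dijkstra(campus, "depot"))  # {'depot': 0, 'A': 35, 'B': 25, 'C': 55}
```

A full collection route would repeatedly apply such shortest-path queries between the bins predicted to be full, which is one common way to realize "the best route" the abstract mentions.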

65 citations


Journal ArticleDOI
TL;DR: The state-of-the-art trust management schemes for WSN are investigated in depth, and their advantages and disadvantages in defending against internal attacks are systematically compared and analyzed.
Abstract: As a key component of information sensing and aggregation for big data, cloud computing, and the Internet of Things (IoT), information security in the wireless sensor network (WSN) is critical. Due to the constrained resources of sensor nodes, WSNs are becoming a vulnerable target for many security attacks. Compared to external attacks, internal attacks are more difficult to defend against. The former can be countered by using encryption and authentication schemes; however, these are ineffective against the latter, which can obtain all keys of the network. Studies have shown that trust management technology is one of the effective approaches for detecting and defending against internal attacks. Hence, it is necessary to investigate and review attack and defense with trust management. In this paper, the state-of-the-art trust management schemes for WSN are investigated in depth. Moreover, their advantages and disadvantages in defending against internal attacks are systematically compared and analyzed. Future directions of trust management are further provided. Finally, conclusions and prospects are given.

61 citations


Journal ArticleDOI
TL;DR: The objective of this paper is to show how a deep learning architecture such as a convolutional neural network (CNN) can perform real-time malaria detection effectively and accurately from input images and reduce manual labor with a mobile application.
Abstract: Malaria is a contagious disease that affects millions of lives every year. Traditional diagnosis of malaria in the laboratory requires an experienced person and careful inspection to discriminate between healthy and infected red blood cells (RBCs). It is also very time-consuming and may produce inaccurate reports due to human error. Cognitive computing and deep learning algorithms simulate human intelligence to support better human decisions in applications like sentiment analysis, speech recognition, face detection, and disease detection and prediction. Due to the advancement of cognitive computing and machine learning techniques, they are now widely used to detect and predict early disease symptoms in the healthcare field. With early prediction results, healthcare professionals can make better decisions for patient diagnosis and treatment. Machine learning algorithms also help humans process huge and complex medical datasets and analyze them into clinical insights. This paper leverages deep learning algorithms to detect malaria, a deadly disease, and to build an effective mobile healthcare system for patients. The objective is to show how a deep learning architecture such as a convolutional neural network (CNN) can perform real-time malaria detection effectively and accurately from input images and reduce manual labor with a mobile application. To this end, we evaluate the performance of a custom CNN model using a cyclical stochastic gradient descent (SGD) optimizer with an automatic learning rate finder and obtain an accuracy of 97.30% in classifying healthy and infected cell images with a high degree of precision and sensitivity. This outcome will bring microscopy-based diagnosis of malaria to a mobile application, addressing both the reliability of treatment and the lack of medical expertise.
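The "cyclical SGD" mentioned above usually refers to a cyclical learning rate schedule in the style of Smith's triangular policy, where the learning rate ramps linearly between a base and a maximum value. The paper's exact schedule and bounds are not given here; the sketch below uses illustrative values.

```python
def triangular_clr(step, base_lr, max_lr, step_size):
    """Triangular cyclical learning rate (Smith, 2017, assumed variant).

    The rate ramps linearly from base_lr to max_lr over step_size steps,
    then back down, repeating every 2 * step_size steps.
    """
    cycle = step // (2 * step_size)
    x = abs(step / step_size - 2 * cycle - 1)  # position in cycle, in [0, 1]
    return base_lr + (max_lr - base_lr) * (1 - x)

# Illustrative bounds (an automatic LR finder would pick these from a range test)
for s in (0, 50, 100, 150, 200):
    print(s, triangular_clr(s, base_lr=1e-4, max_lr=1e-2, step_size=100))
```

At step 0 the rate sits at the base value, peaks at the maximum at step 100, and returns to the base at step 200, which is the behaviour a cyclical SGD optimizer feeds into each weight update.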

54 citations


Journal ArticleDOI
TL;DR: A vision-based system applying a CNN to recognize Arabic hand sign-based letters and translate them into Arabic speech is proposed in this paper; it achieves 90% accuracy in recognizing Arabic hand signs and gestures, which establishes it as a highly dependable system.
Abstract: Sign language encompasses the movement of the arms and hands as a means of communication for people with hearing disabilities. An automated sign recognition system requires two main courses of action: the detection of particular features and the categorization of particular input data. In the past, many approaches for classifying and detecting sign languages have been put forward to improve system performance. However, recent progress in the computer vision field has geared us towards further exploration of hand sign/gesture recognition with the aid of deep neural networks. The Arabic sign language has witnessed unprecedented research activity to recognize hand signs and gestures using deep learning models. A vision-based system applying a CNN to recognize Arabic hand sign-based letters and translate them into Arabic speech is proposed in this paper. The proposed system automatically detects hand sign letters and speaks the result in Arabic using a deep learning model. The system achieves 90% accuracy in recognizing Arabic hand sign-based letters, which establishes it as a highly dependable system. The accuracy can be further improved by using more advanced hand-gesture recognition devices such as the Leap Motion or Xbox Kinect. After the Arabic hand sign-based letters are recognized, the outcome is fed into a text-to-speech engine, which produces Arabic audio as output.

53 citations


Journal ArticleDOI
TL;DR: The HPCA model can markedly reduce the interference of redundant information and effectively separate the salient object from the background, and it achieves higher detection accuracy than other methods.
Abstract: Aiming at the problems of intensive background noise, low accuracy, and high computational complexity in current salient object detection methods, a visual saliency detection algorithm based on Hierarchical Principal Component Analysis (HPCA) is proposed in this paper. Firstly, the original RGB image is converted to a grayscale image, and the grayscale image is divided into eight layers by a bit-plane stratification technique. Each image layer contains salient object information matching that layer's image features. Secondly, taking the color structure of the original image as the reference image, the grayscale image is reassigned by a grayscale-to-color conversion method, so that the layered image not only reflects the original structural features but also effectively preserves the color features of the original image. Thirdly, Principal Component Analysis (PCA) is performed on the layered image to obtain the structural difference characteristics and color difference characteristics of each image layer in the principal component direction. Fourthly, the two features are integrated to obtain a saliency map with high robustness; to further refine the results, known priors on image organization are incorporated, which place the subject of the photograph near the center of the image. Finally, an entropy calculation is used to determine the optimal image from the layered saliency maps; the optimal map has the least background information and the most prominent salient objects. The object detection results of the proposed model are closer to the ground truth and show advantages in performance metrics including precision rate (PRE), recall rate (REC), and F-measure (FME). The HPCA model can markedly reduce the interference of redundant information and effectively separate the salient object from the background. At the same time, it achieves higher detection accuracy than other methods.
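The eight-layer decomposition described in the first step is, on a common reading, bit-plane slicing of the 8-bit grayscale image: layer k keeps only bit k of every pixel. The sketch below shows that decomposition on a toy 2x2 image (the image values are invented; the paper's subsequent recoloring and PCA stages are not reproduced here).

```python
def bit_planes(gray):
    """Split an 8-bit grayscale image (list of rows of ints in 0..255)
    into 8 binary layers. Layer k holds bit k of each pixel
    (k = 7 is the most significant plane, carrying the coarsest structure)."""
    return [[[(px >> k) & 1 for px in row] for row in gray] for k in range(8)]

# Toy 2x2 image for illustration
img = [[200, 13],
       [255, 0]]
planes = bit_planes(img)
print(planes[7])  # most significant plane: [[1, 0], [1, 0]]
print(planes[0])  # least significant plane: [[0, 1], [1, 0]]
```

The higher planes tend to carry object structure while the lower ones carry fine detail and noise, which is what makes a per-layer saliency analysis plausible.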

51 citations


Journal ArticleDOI
TL;DR: AICRL is trained over the large MS COCO 2014 dataset to maximize the likelihood of the target description sentence given the training images and evaluated with various metrics; the experimental results indicate that AICRL is effective in generating captions for images.
Abstract: Automatically captioning images with proper descriptions has become an interesting and challenging problem. In this paper, we present a joint model, AICRL, which conducts automatic image captioning based on ResNet50 and LSTM with soft attention. AICRL consists of one encoder and one decoder. The encoder adopts ResNet50, a convolutional neural network, which creates an extensive representation of the given image by embedding it into a fixed-length vector. The decoder is designed with LSTM, a recurrent neural network, and a soft attention mechanism to selectively focus attention over certain parts of the image when predicting the next element of the sentence. We have trained AICRL over the large MS COCO 2014 dataset to maximize the likelihood of the target description sentence given the training images and evaluated it with various metrics such as BLEU, METEOR, and CIDEr. Our experimental results indicate that AICRL is effective in generating captions for images.
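The soft attention step in the decoder amounts to taking a softmax over per-region alignment scores and using the resulting weights to blend the region features into one context vector. The sketch below shows that computation with invented scores and two-dimensional toy features; AICRL's actual scoring function and feature dimensions are not specified here.

```python
import math

def soft_attention(scores, features):
    """Soft attention: weight per-region features by the softmax of their
    alignment scores and return (weights, attended context vector).

    scores:   list of floats, one alignment score per image region.
    features: list of equal-length feature vectors, one per region.
    """
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for numerical stability
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(features[0])
    context = [sum(w * f[j] for w, f in zip(weights, features)) for j in range(dim)]
    return weights, context

# Two toy regions with equal scores attend equally
w, ctx = soft_attention([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
print(w, ctx)  # [0.5, 0.5] [0.5, 0.5]
```

In a decoder like AICRL's, the scores would come from the LSTM hidden state at each step, so the blend shifts toward different image regions as the sentence unfolds.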

48 citations


Journal ArticleDOI
TL;DR: A tutorial and survey of VLC covering design, development, and evaluation techniques as well as current challenges and their envisioned solutions, together with the potential of incorporating VLC techniques into current and upcoming technologies, including sixth-generation (6G) networks and intelligent reflective surfaces (IRSs), among others.
Abstract: With the advancement of solid-state devices for lighting, illumination is on the verge of being completely restructured. This revolution comes with numerous advantages and viable opportunities that can transform the world of wireless communications for the better. Solid-state LEDs are rapidly replacing contemporary incandescent and fluorescent lamps. In addition to their high energy efficiency, LEDs are desirable for their low heat generation, long lifespan, and capability to switch on and off at an extremely high rate. The ability to switch between different levels of luminous intensity at such a rate has enabled the inception of a new communication technology referred to as visible light communication (VLC). With this technology, LED lamps are additionally used for data transmission. This paper provides a tutorial and a survey of VLC in terms of design, development, and evaluation techniques as well as current challenges and their envisioned solutions. The focus of this paper is mainly directed towards an indoor setup. An overview of VLC, the theory of illumination, system receivers, system architecture, and ongoing developments is provided. We further provide some baseline simulation results to give a technical background on the performance of VLC systems. Moreover, we discuss the potential of incorporating VLC techniques into current and upcoming technologies such as fifth-generation (5G) and beyond-fifth-generation (B5G) wireless communication trends, including sixth-generation (6G) networks and intelligent reflective surfaces (IRSs), among others.

47 citations


Journal ArticleDOI
TL;DR: An anomaly detection method with a composite autoencoder model that learns the normal pattern is proposed; it performs prediction and reconstruction on input data simultaneously, which overcomes the shortcoming of using either alone.
Abstract: As the Industrial Internet of Things (IIoT) develops rapidly, cloud computing and fog computing have become effective measures to solve problems such as limited computing resources and increased network latency. Industrial Control Systems (ICS) play a key role in the development of the IIoT, and their security affects the whole IIoT. ICS spans many domains, such as water supply systems and electric utilities, which are closely related to people's lives. In recent years, ICS has been connected to the Internet and exposed to cyberspace instead of remaining isolated from the outside world, and the risk of being attacked has increased as a result. To protect these assets, intrusion detection systems (IDS) have drawn much attention. As one kind of intrusion detection, anomaly detection provides the ability to detect unknown attacks, in contrast to signature-based techniques, which are another kind of IDS. In this paper, an anomaly detection method with a composite autoencoder model that learns the normal pattern is proposed. Unlike common autoencoder neural networks that predict or reconstruct data separately, our model performs prediction and reconstruction on input data at the same time, which overcomes the shortcoming of using either alone. With the error obtained by the model, a change ratio is put forward to locate the most suspicious devices that may be under attack. Finally, we verify the performance of our method by conducting experiments on the SWaT dataset. The results show that the proposed method exhibits improved performance, with 88.5% recall and 87.0% F1-score.
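The change-ratio idea described above can be sketched as a simple relative-error comparison: for each device, compare its current composite error (the model's prediction plus reconstruction error) against a baseline learned during normal operation. The exact formula and the device names below are assumptions for illustration; the paper's definition may differ.

```python
def change_ratio(errors, baseline):
    """Per-device change ratio: relative deviation of the current composite
    model error from the normal-period baseline error.

    errors, baseline: dicts mapping device name -> composite error (float > 0).
    """
    return {dev: (errors[dev] - baseline[dev]) / baseline[dev] for dev in errors}

def most_suspicious(errors, baseline, top=1):
    """Return the `top` devices with the largest change ratios."""
    ratios = change_ratio(errors, baseline)
    return sorted(ratios, key=ratios.get, reverse=True)[:top]

# Hypothetical sensors: pump error tripled versus its baseline, valve unchanged
now = {"pump-1": 0.9, "valve-3": 0.2}
normal = {"pump-1": 0.3, "valve-3": 0.2}
print(most_suspicious(now, normal))  # ['pump-1']
```

Ranking by the ratio rather than the raw error keeps devices with naturally high baseline error from dominating the alarm list.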

46 citations


Journal ArticleDOI
TL;DR: Temporal Convolution Neural Network (TCNN) as discussed by the authors is a deep learning framework for intrusion detection systems in IoT, which combines CNN with causal convolution and uses the synthetic minority oversampling technique-nominal continuous (SMOTE-NC) to handle the unbalanced dataset.
Abstract: In the era of the Internet of Things (IoT), connected objects produce an enormous amount of data traffic that feeds big data analytics, which could be used to discover unseen patterns and identify anomalous traffic. In this paper, we identify five key design principles that should be considered when developing a deep learning-based intrusion detection system (IDS) for the IoT. Based on these principles, we design and implement Temporal Convolution Neural Network (TCNN), a deep learning framework for intrusion detection systems in IoT, which combines Convolution Neural Network (CNN) with causal convolution. TCNN is combined with the Synthetic Minority Oversampling Technique-Nominal Continuous (SMOTE-NC) to handle the unbalanced dataset. It is also combined with efficient feature engineering techniques, which consist of feature space reduction and feature transformation. TCNN is evaluated on the Bot-IoT dataset and compared with two common machine learning algorithms, i.e., Logistic Regression (LR) and Random Forest (RF), and two deep learning techniques, i.e., LSTM and CNN. Experimental results show that TCNN achieves a good trade-off between effectiveness and efficiency. It outperforms the state-of-the-art deep learning IDSs tested on the Bot-IoT dataset, records an accuracy of 99.9986% for multiclass traffic detection, and shows very close performance to CNN with respect to training time.
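The class-imbalance handling above can be illustrated with a much simpler stand-in: random duplication of minority-class samples until the classes are balanced. Note this is not SMOTE-NC, which interpolates new synthetic samples and treats nominal features specially; the sketch only shows the shape of the rebalancing step.

```python
import random

def random_oversample(samples, labels, seed=0):
    """Balance a dataset by randomly duplicating minority-class samples.

    (A simpler stand-in for SMOTE-NC, which instead synthesizes new samples
    by interpolating between minority-class neighbours.)
    """
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_s, out_y = [], []
    for y, group in by_class.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        for s in group + extra:
            out_s.append(s)
            out_y.append(y)
    return out_s, out_y

# Toy 4:1 imbalanced dataset becomes 4:4
xs, ys = random_oversample([1, 2, 3, 4, 5], [0, 0, 0, 0, 1])
print(ys.count(0), ys.count(1))  # 4 4
```

Without some such rebalancing, a classifier trained on Bot-IoT-style data can score high accuracy while ignoring the rare attack classes entirely.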

42 citations


Journal ArticleDOI
TL;DR: A framework for real network traffic collection and labeling in a scalable way is proposed, and three of the most representative deep learning models are introduced, designed, and evaluated: an SDAE, a 1D CNN, and a bidirectional LSTM network.
Abstract: The proliferation of mobile devices over recent years has led to a dramatic increase in mobile traffic. Demand for accurate mobile app identification is growing, as it is an essential step to improve a multitude of network services: accounting, security monitoring, traffic forecasting, and quality of service. However, traditional traffic classification techniques do not work well for mobile traffic. Besides, multiple machine learning solutions developed in this field are severely restricted by their handcrafted features as well as unreliable datasets. In this paper, we propose a framework for real network traffic collection and labeling in a scalable way. A dedicated Android traffic capture tool is developed to build datasets with perfect ground truth. Using our established dataset, we conduct an empirical exploration of deep learning methods for the task of mobile app identification, which can automate the feature engineering process in an end-to-end fashion. We introduce three of the most representative deep learning models and design and evaluate dedicated classifiers: an SDAE, a 1D CNN, and a bidirectional LSTM network. In comparison with two other baseline solutions, our CNN and RNN models with raw traffic inputs are capable of achieving state-of-the-art results regardless of TLS encryption. Specifically, the 1D CNN classifier obtains the best performance, with an accuracy of 91.8% and a macroaverage F-measure of 90.1%. To further understand the trained model, sample-specific interpretations are performed, showing how it can automatically learn important and advanced features from the uppermost bytes of an app's raw flows.
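Feeding "raw traffic inputs" into a 1D CNN typically means converting the leading bytes of a flow into a fixed-length numeric vector: truncate long payloads, zero-pad short ones, and scale bytes into [0, 1]. The input length below is an assumption for illustration; the paper's actual preprocessing parameters are not given here.

```python
def to_fixed_input(payload, length=784):
    """Turn raw flow bytes into a fixed-length vector of floats in [0, 1].

    Truncates payloads longer than `length` and zero-pads shorter ones,
    then scales each byte by 255 so the vector suits a neural network input.
    """
    data = payload[:length] + b"\x00" * max(0, length - len(payload))
    return [b / 255.0 for b in data]

# A 2-byte toy payload padded out to a 4-element input vector
print(to_fixed_input(b"\xff\x80", length=4))  # [1.0, ~0.5, 0.0, 0.0]
```

Because only the uppermost bytes survive truncation, the model learns from exactly the flow prefix the abstract says carries the discriminative features.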

Journal ArticleDOI
TL;DR: The simulation results show that the proposed resource allocation algorithm can significantly improve the spectral efficiency of the system and URLLC reliability, compared with the adaptive particle swarm optimization (APSO), the equal power allocation (EPA), and the equal subcarrier allocation (ESA) algorithm.
Abstract: This paper investigates network slicing in the virtualized wireless network. We consider a downlink orthogonal frequency division multiple access system in which the physical resources of base stations are virtualized and divided into enhanced mobile broadband (eMBB) and ultrareliable low latency communication (URLLC) slices. We use network slicing technology to address network spectral efficiency and URLLC reliability. A mixed-integer programming problem is formulated by maximizing the spectral efficiency of the system under the constraints of users' requirements for the two slices, i.e., the requirement of the eMBB slice and the requirement of the URLLC slice with a high probability for each user. By transforming and relaxing integer variables, the original problem is approximated as a convex optimization problem. Then, we combine the objective function and the constraint conditions through dual variables to form an augmented Lagrangian function, whose optimal solution is an upper bound of the original problem. In addition, we propose a resource allocation algorithm that allocates the network slices by applying the Powell–Hestenes–Rockafellar method and the branch and bound method, obtaining the optimal solution. The simulation results show that the proposed resource allocation algorithm can significantly improve the spectral efficiency of the system and URLLC reliability compared with the adaptive particle swarm optimization (APSO), equal power allocation (EPA), and equal subcarrier allocation (ESA) algorithms. Furthermore, we analyze the spectral efficiency of the proposed algorithm as the users' requirements of the two slices change and obtain better spectral efficiency performance.
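The spectral-efficiency objective being maximized can be grounded in the standard Shannon expression log2(1 + SNR) per subcarrier, averaged over the bandwidth allocated to each slice. The allocations below are invented toy numbers; the paper's full mixed-integer formulation and its URLLC probability constraints are not reproduced here.

```python
import math

def spectral_efficiency(snr_linear):
    """Per-subcarrier spectral efficiency in bit/s/Hz (Shannon capacity)."""
    return math.log2(1 + snr_linear)

def system_spectral_efficiency(allocations):
    """Bandwidth-weighted average spectral efficiency of the system.

    allocations: list of (bandwidth_share, linear_snr) pairs, e.g. one entry
    per subcarrier group assigned to an eMBB or URLLC slice.
    """
    total_bw = sum(bw for bw, _ in allocations)
    return sum(bw * spectral_efficiency(snr) for bw, snr in allocations) / total_bw

# Toy allocation: two equal-bandwidth groups at SNR 1 (0 dB) and SNR 3 (~4.8 dB)
print(system_spectral_efficiency([(1.0, 1.0), (1.0, 3.0)]))  # 1.5
```

A resource allocator like the one proposed would search over which subcarriers and power levels go to each slice so as to maximize this quantity while every user's slice requirement stays satisfied.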

Journal ArticleDOI
TL;DR: A relay selection joint consecutive packet routing (RS-CPR) scheme is proposed to reduce channel competition conflicts and energy consumption, increase network throughput, and then reduce end-to-end delay in data transmission for WuR-enabled WSNs.
Abstract: Reducing energy consumption, increasing network throughput, and reducing delay are the pivotal issues for wake-up radio- (WuR-) enabled wireless sensor networks (WSNs). In this paper, a relay selection joint consecutive packet routing (RS-CPR) scheme is proposed to reduce channel competition conflicts and energy consumption, increase network throughput, and thereby reduce end-to-end delay in data transmission for WuR-enabled WSNs. The main innovations of the RS-CPR scheme are as follows: (1) Relay selection: when selecting a relay node for routing, the sender selects the node with the highest evaluation weight from its forwarding node set (FNS). The weight of a node is computed from the distance from the node to the sink, the number of packets in its queue, and the residual energy of the node. (2) A node sends consecutive packets once it accesses the channel successfully, and it gives up the channel after sending all packets. Nodes that fail the competition sleep during the consecutive packet transmission of the winner to reduce collisions and energy consumption. (3) Every node sets two thresholds: a packet queue length threshold and a maximum packet waiting time threshold. When the corresponding value at the node exceeds a threshold, the node begins to contend for the channel. Besides, to make full use of energy and reduce delay, the thresholds of nodes far from the sink are small, while those of nodes close to the sink are large. In this way, nodes in the RS-CPR scheme select those with much residual energy, a large number of queued packets, and a short distance to the sink as relay nodes. As a result, the probability that a node with no packets to transmit becomes a relay is very small, and the probability that a node with many data packets in its queue becomes a relay is large. In this strategy, only a few nodes along a route need to contend for the channel to send packets, thereby reducing channel contention conflicts. Since a relay node has a large number of data packets, it can send many packets continuously after a successful competition, which also reduces the overhead of channel competition and improves network throughput. In summary, the RS-CPR scheme combines the selection of relay nodes with a consecutive packet routing strategy, which greatly improves the performance of the network. As shown by our theoretical analysis and experimental results, compared with the receiver-initiated consecutive packet transmission WuR (RI-CPT-WuR) scheme and the RI-WuR protocol, the RS-CPR scheme reduces end-to-end delay by 45.92% and 65.99%, respectively, and reduces channel collisions by 51.92% and 76.41%. Besides, it reduces energy consumption by 61.24% and 70.40%. At the same time, the RS-CPR scheme improves network throughput by 47.37% and 75.02%.
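The relay-selection weighting described in innovation (1) can be sketched as a normalized weighted sum over the three factors: distance to the sink, queue length, and residual energy. The weights and normalization bounds below are invented for illustration; the abstract does not give the actual weighting formula.

```python
def relay_weight(dist_to_sink, queue_len, residual_energy,
                 w_dist=0.4, w_queue=0.3, w_energy=0.3,
                 max_dist=100.0, max_queue=50, max_energy=1.0):
    """Illustrative evaluation weight for a candidate relay node.

    Higher is better: nodes closer to the sink, with more queued packets,
    and with more residual energy score higher. Weights are assumed values.
    """
    return (w_dist * (1 - dist_to_sink / max_dist)
            + w_queue * (queue_len / max_queue)
            + w_energy * (residual_energy / max_energy))

def select_relay(candidates):
    """Pick the highest-weight node from a forwarding node set (FNS).

    candidates: dict node -> (dist_to_sink, queue_len, residual_energy).
    """
    return max(candidates, key=lambda n: relay_weight(*candidates[n]))

# Two hypothetical candidates: n2 is closer, busier, and better charged
fns = {"n1": (80.0, 5, 0.2), "n2": (20.0, 40, 0.9)}
print(select_relay(fns))  # n2
```

Favoring busy, energy-rich nodes near the sink concentrates channel contention on a few relays that can then exploit the consecutive-packet transmission of innovation (2).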

Journal ArticleDOI
TL;DR: This research is based on cloud computing technology, breaks the framework of the traditional sports model, establishes a personalized sports teaching system according to the basic theory of physical education, and designs and discusses the future college sports model.
Abstract: The emergence of cloud computing, changes in education methods, and the requirements of lifelong education pose great challenges to the traditional teaching platform. With the rapid development of network technology and computer technology, the speed of updating knowledge is accelerating day by day, and the way of education is gradually changing. In the face of the informationization of education, physical education teaching methods and means are still stuck in traditional verbal instruction and demonstration, which clearly cannot meet the needs of the development of physical education and health curricula. In terms of the overall development of sports, school sports is the cornerstone of a country's sports development. This research is based on cloud computing technology: it breaks the framework of the traditional sports model, establishes a personalized sports teaching system according to the basic theory of physical education, and designs and discusses the future college sports model. The construction and application of digital teaching resources for physical education courses in colleges and universities can help solve problems such as the shortage of teachers and the conflict between learning and training. Building these digital teaching resources on cloud computing can save costs and improve resource utilization efficiency.

Journal ArticleDOI
TL;DR: This paper examines the principles behind energy-efficient wireless communication network design, and presents a broad taxonomy that tracks the areas of impact of these techniques in the network and discusses the trends in renewable energy supply systems for future networks.
Abstract: The projected rise in wireless communication traffic has necessitated the advancement of energy-efficient (EE) techniques for the design of wireless communication systems, given the high operating costs of conventional wireless cellular networks, and the scarcity of energy resources in low-power applications. The objective of this paper is to examine the paradigm shifts in EE approaches in recent times by reviewing traditional approaches to EE, analyzing recent trends, and identifying future challenges and opportunities. Considering the current energy concerns, nodes in emerging wireless networks range from limited-energy nodes (LENs) to high-energy nodes (HENs) with entirely different constraints in either case. In view of these extremes, this paper examines the principles behind energy-efficient wireless communication network design. We then present a broad taxonomy that tracks the areas of impact of these techniques in the network. We specifically discuss the preponderance of prediction-based energy-efficient techniques and their limits, and then discuss the trends in renewable energy supply systems for future networks. Finally, we recommend more context-specific energy-efficient research efforts and cross-vendor collaborations to push the frontiers of energy efficiency in the design of wireless communication networks.

Journal ArticleDOI
TL;DR: An efficient ECG denoising approach based on empirical mode decomposition (EMD), sample entropy, and improved threshold function that can better remove the noise of ECG signals and provide better diagnosis service for the computer-based automatic medical system is proposed.
Abstract: The electrocardiogram (ECG) signal can easily be affected by various types of noise while being recorded, which decreases the accuracy of subsequent diagnosis. Therefore, the efficient denoising of ECG signals has become an important research topic. In this paper, we propose an efficient ECG denoising approach based on empirical mode decomposition (EMD), sample entropy, and an improved threshold function. This method can better remove the noise of ECG signals and provide a better diagnosis service for computer-based automatic medical systems. The proposed work includes three stages of analysis: (1) EMD is used to decompose the signal into finite intrinsic mode functions (IMFs), and the orders of the IMFs to be denoised are determined according to the sample entropy of each IMF; (2) the new threshold function is adopted to denoise these IMFs; and (3) the signal is reconstructed and smoothed. The proposed method overcomes the shortcoming of discarding the first-order IMF directly in traditional EMD denoising and proposes a new threshold denoising function that improves on the traditional soft and hard threshold functions. We further conduct simulation experiments on ECG signals from the MIT-BIH database, in which three types of noise are simulated: white Gaussian noise, electromyogram (EMG) noise, and power line interference. The experimental results show that the proposed method is robust to a variety of noise types. Moreover, we analyze the effectiveness of the proposed method under different input SNRs with reference to the SNR improvement and the mean square error (MSE), and compare the proposed denoising algorithm with previous ECG signal denoising techniques. The results demonstrate that the proposed method achieves a higher SNR improvement and a lower MSE. Qualitative and quantitative studies demonstrate that the proposed algorithm is a good ECG signal denoising method.
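The traditional soft and hard threshold functions that the paper's improved function builds on can be written in a few lines: hard thresholding zeroes coefficients below the threshold and keeps the rest unchanged, while soft thresholding additionally shrinks the survivors toward zero. The paper's improved function itself is not specified in the abstract, so only the two classical baselines are sketched.

```python
def hard_threshold(x, t):
    """Hard thresholding: zero out values with magnitude <= t, keep the rest."""
    return x if abs(x) > t else 0.0

def soft_threshold(x, t):
    """Soft thresholding: zero small values, shrink the rest toward zero by t.
    Unlike hard thresholding, the output is continuous in x."""
    if abs(x) <= t:
        return 0.0
    return (abs(x) - t) * (1 if x > 0 else -1)

# Applied pointwise to a noisy IMF, the two behave differently at the cut
print(hard_threshold(2.0, 0.5), soft_threshold(2.0, 0.5))  # 2.0 1.5
```

In the denoising pipeline described above, one of these functions (or the improved variant) is applied to each sample of the selected low-order IMFs before the signal is recomposed.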

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a blockchain-based trust mechanism for distributed IoT devices, where trustrank is quantified by normative trust and risk measures, and a new storage structure is designed for the domain administration manager to identify and delete the malicious evaluations of the devices.
Abstract: The development of the Internet of Things (IoT) and Mobile Edge Computing (MEC) has led to close cooperation between electronic devices, which requires strong reliability and trustworthiness of the devices involved in the communication. However, current trust mechanisms have the following issues: (1) they rely heavily on a trusted third party, which may incur severe security issues if it is corrupted, and (2) malicious evaluations of the involved devices may bias the trustrank of the devices. By introducing the concepts of risk management and blockchain into the trust mechanism, we propose a blockchain-based trust mechanism for distributed IoT devices in this paper. In the proposed trust mechanism, trustrank is quantified by normative trust and risk measures, and a new storage structure is designed for the domain administration manager to identify and delete malicious evaluations of the devices. Evidence shows that the proposed trust mechanism can ensure data sharing and integrity, in addition to its resistance against malicious attacks on the IoT devices.
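The integrity property a blockchain lends to stored trust evaluations can be illustrated with a minimal hash chain: each block commits to its evaluations and to the previous block's hash, so tampering with any recorded evaluation breaks verification. This is a generic sketch, not the paper's storage structure; the device names and trust values are invented.

```python
import hashlib
import json

def make_block(evaluations, prev_hash):
    """Append-only block of (device, trust score) evaluations,
    chained to its predecessor by a SHA-256 hash of the block body."""
    body = json.dumps({"evals": evaluations, "prev": prev_hash}, sort_keys=True)
    return {"evals": evaluations, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(chain):
    """Recompute every block hash and check each prev-hash link."""
    for i, blk in enumerate(chain):
        body = json.dumps({"evals": blk["evals"], "prev": blk["prev"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != blk["hash"]:
            return False  # block body was tampered with
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False  # chain link is broken
    return True

genesis = make_block([["dev1", 0.9]], "0" * 64)
block2 = make_block([["dev2", 0.4]], genesis["hash"])
print(verify_chain([genesis, block2]))  # True
```

Inflating dev2's score after the fact (e.g. setting it to 1.0) changes the block body, so `verify_chain` returns False, which is the tamper evidence the trust mechanism relies on.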

Journal ArticleDOI
TL;DR: A smart home monitoring framework secured with SHA-256-based authentication is proposed, in which each interaction of a device with a WebServer is authenticated via an encrypted username, password, and token, supporting automated burglar alarms, guest attendance monitoring, and light switches.
Abstract: Smart homes are an element of developing smart cities. In recent years, countries around the world have spared no effort in promoting smart cities. Smart homes are an interesting technological advancement that can make people’s lives much more convenient. The development of smart homes involves multiple technological aspects, which include big data, mobile networks, cloud computing, Internet of Things, and even artificial intelligence. Digital information is the main component of signal control and flow in a smart home, while information security is another important aspect. In the event of equipment failure, the task of safeguarding the system’s information is of the utmost importance. Since smart homes are automatically controlled, the problem of mobile network security must be taken seriously. To address these issues, this paper focuses on information security, big data, mobile networks, cloud computing, and the Internet of Things. Security efficiency can be enhanced by using a Secure Hash Algorithm 256 (SHA-256), which is an authentication mechanism that, with the help of the user, can authenticate each interaction of a given device with a WebServer by using an encrypted username, password, and token. This framework could be used for an automated burglar alarm system, guest attendance monitoring, and light switches, all of which are easily integrated with any smart city base. In this way, IoT solutions can allow real-time monitoring and connection with central systems for automated burglar alarms. The monitoring framework is developed on the strength of the web application to obtain real-time display, storage, and warning functions for local or remote monitoring control. The monitoring system is stable and reliable when applying SHA-256.
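The SHA-256 token check described above can be sketched with Python's standard library. The salt-and-concatenation message format below is an assumption for illustration, not the paper's exact scheme.

```python
import hashlib
import hmac
import secrets

def make_token(username, password, salt):
    # Derive an authentication token by hashing username, password, and salt.
    payload = f"{username}:{password}:{salt}".encode()
    return hashlib.sha256(payload).hexdigest()

def verify(username, password, salt, token):
    # Constant-time comparison guards against timing attacks on the token check.
    return hmac.compare_digest(make_token(username, password, salt), token)

# Example: the WebServer stores (salt, token) and re-derives the token per request.
salt = secrets.token_hex(16)
stored = make_token("device42", "s3cret", salt)
```

In practice a password-hashing KDF (e.g., PBKDF2) would be preferred over a single SHA-256 pass, but the flow of deriving and comparing tokens is the same.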

Journal ArticleDOI
TL;DR: This paper discusses the limitations of a recently introduced IoT-based authentication scheme for cloud computing and presents an enhanced three-factor authentication scheme using chaotic maps based on Chebyshev chaotic-based Diffie–Hellman key exchange.
Abstract: With the development of Internet of Things (IoT) technologies, Internet-enabled devices have been widely used in our daily lives. As a new service paradigm, cloud computing aims at solving the resource-constrained problem of Internet-enabled devices. It is playing an increasingly important role in resource sharing. Due to the complexity and openness of wireless networks, the authentication protocol is crucial for secure communication and user privacy protection. In this paper, we discuss the limitations of a recently introduced IoT-based authentication scheme for cloud computing. Furthermore, we present an enhanced three-factor authentication scheme using chaotic maps. The session key is established based on Chebyshev chaotic-based Diffie–Hellman key exchange. In addition, the session key involves a long-term secret. It ensures that our scheme is secure against all the possible session key exposure attacks. Besides, our scheme can effectively update user password locally. Burrows–Abadi–Needham logic proof confirms that our scheme provides mutual authentication and session key agreement. The formal analysis under random oracle model proves the semantic security of our scheme. The informal analysis shows that our scheme is immune to diverse attacks and has desired features such as three-factor secrecy. Finally, the performance comparisons demonstrate that our scheme provides optimal security features with an acceptable computation and communication overheads.
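The Chebyshev chaotic-based Diffie–Hellman exchange mentioned above rests on the semigroup property of Chebyshev polynomials, T_a(T_b(x)) = T_ab(x), evaluated modulo a prime. A toy sketch with insecure parameter sizes, chosen only for illustration:

```python
def cheb(n, x, p):
    # Chebyshev polynomial T_n(x) mod p via the recurrence
    # T_n = 2x*T_{n-1} - T_{n-2}, with T_0 = 1 and T_1 = x.
    t0, t1 = 1, x % p
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, (2 * x * t1 - t0) % p
    return t1

# Chebyshev Diffie-Hellman: both parties end up with T_{ab}(x) mod p.
p, x = 251, 7          # toy public parameters (far too small to be secure)
a, b = 6, 11           # private values of the two parties
A = cheb(a, x, p)      # sent by one party
B = cheb(b, x, p)      # sent by the other
shared_alice = cheb(a, B, p)
shared_bob = cheb(b, A, p)
```

The scheme in the paper additionally mixes a long-term secret into the session key so that exposure of the ephemeral values alone does not reveal past keys.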

Journal ArticleDOI
TL;DR: Two localization algorithms, the linear Kalman Filter SLAM and the Extended Kalman Filter SLAM, are proposed and shown by simulation to be efficient and viable.
Abstract: For more than two decades, the issue of simultaneous localization and mapping (SLAM) has gained attention from researchers and remains an influential topic in robotics. Various algorithms for mobile robot SLAM have been investigated; among them, probability-based SLAM algorithms are often used in unknown environments. In this paper, the authors propose two main localization algorithms. The first is the linear Kalman Filter (KF) SLAM, which consists of five phases: (a) motionless robot with absolute measurement, (b) moving vehicle with absolute measurement, (c) motionless robot with relative measurement, (d) moving vehicle with relative measurement, and (e) moving vehicle with relative measurement while the robot location is not detected. The second localization algorithm is SLAM with the Extended Kalman Filter (EKF). Finally, the proposed SLAM algorithms are tested by simulation and shown to be efficient and viable. The simulation results show that the presented SLAM approaches can accurately locate the landmarks and the mobile robot.
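The linear KF core underlying phases (a)-(e) is a predict-update cycle over a linear-Gaussian model. A minimal sketch follows; the matrices and the static-robot example are placeholders, not the paper's robot model.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    # One predict-update cycle of a linear Kalman filter.
    # x: state estimate, P: covariance, z: measurement,
    # F/Q: motion model and its noise, H/R: measurement model and its noise.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Analogue of phase (a): motionless robot with absolute position measurements.
F = np.eye(1)
H = np.eye(1)
Q = np.zeros((1, 1))     # no process noise: the robot does not move
R = np.eye(1)            # unit measurement noise
x, P = np.zeros((1, 1)), 100.0 * np.eye(1)
for _ in range(10):
    x, P = kalman_step(x, P, np.array([[5.0]]), F, Q, H, R)
```

With repeated absolute measurements of the same position, the estimate converges to the measured value and the covariance shrinks, which is the behavior the abstract's simulations verify for the landmark and robot positions.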

Journal ArticleDOI
TL;DR: These devices will monitor the movement of targeted patients at home or out of their homes and, based on their behavior and movement, the treatment will be provided to Alzheimer's patients.
Abstract: In the last decade, the Internet of Things (IoT) has become a new technology that aims to facilitate life and help people in all aspects of their lives. This technology is used for smart homes, smart grid stations, smart agriculture, health systems, transport services, smart cities, etc. A number of sensors and IoT devices, along with applications, are used for monitoring the health condition of patients. These devices will monitor the movement of targeted patients at home or out of their homes. Based on their behavior and movement, treatment will be provided to Alzheimer's patients. The data will be collected from multiple sensors installed at patients' homes and from smartwatches checking their blood pressure level and temperature, which is especially important for these types of patients in the current Corona Virus Disease (COVID-19) pandemic. On the other hand, due to the diminishing mobility of people around the world, increasing environmental pollution, and the stress caused by modern machine life, various brain and neurological diseases, including Alzheimer's and Parkinson's, are widespread among people all over the world. Different types of communication protocols, such as Message Queue Telemetry Transport (MQTT) and WebSocket (with authentication and autoclosing of connections), have been used for the sensors and the smartwatch. A secure backend admin panel is used for tracing the location of doctors, patients, and ambulances. These protocols are implemented with security to protect the privacy of patients. © 2020 Rozita Jamili Oskouei et al.

Journal ArticleDOI
TL;DR: Different types of wireless sensor nodes were employed to monitor water quality in real time at the Weija intake in the Greater Accra Region of Ghana, and the results showed a significant effect on plant and aquatic life.
Abstract: Water quality monitoring (WQM) systems seek to ensure high data precision, data accuracy, timely reporting, easy accessibility of data, and completeness. Conventional monitoring systems are inadequate for detecting contaminants/pollutants in real time and cannot meet the stringent precision requirements of WQM systems. In this work, we employed different types of wireless sensor nodes to monitor the water quality in real time. Our approach used an energy-efficient data transmission schedule and harvested energy using solar panels to prolong node lifetime. The study took place at the Weija intake in the Greater Accra Region of Ghana. The Weija dam intake serves as a significant water source to the Weija treatment plant, which supplies treated water to the people of Greater Accra and parts of the Central Region of Ghana. Smart water sensors and smart water ion sensor devices from Libelium were deployed at the intake to measure physical and chemical parameters. The sensed data obtained at the central repository revealed a pH value of 7. Conductivity levels rose from 196 µS/cm to 225 µS/cm. Calcium levels rose to about 3.5 mg/L and dropped to about 0.16 mg/L. The temperature of the river was mainly around 35°C to 36°C. We observed fluoride levels between 1.24 mg/L and 1.9 mg/L. The dissolved oxygen (DO) content rose from negative readings to reach 8 mg/L. These results showed a significant effect on plant and aquatic life.
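A monitoring backend like this typically flags parameters that leave an acceptable band. A minimal sketch follows; the ranges are illustrative assumptions for the sketch, not regulatory limits for the Weija intake.

```python
# Illustrative acceptable ranges per parameter (assumed for this sketch).
LIMITS = {
    "ph": (6.5, 8.5),
    "conductivity_uS_cm": (0.0, 1500.0),
    "dissolved_oxygen_mg_L": (5.0, 14.0),
    "temperature_C": (0.0, 40.0),
}

def check_sample(sample):
    # Return the names of parameters that fall outside their acceptable range.
    alerts = []
    for key, (lo, hi) in LIMITS.items():
        value = sample.get(key)
        if value is not None and not (lo <= value <= hi):
            alerts.append(key)
    return alerts

# The values reported in the abstract all fall inside the assumed bands:
reading = {"ph": 7.0, "conductivity_uS_cm": 225.0,
           "dissolved_oxygen_mg_L": 8.0, "temperature_C": 36.0}
```

In a deployed system such a check would run at the central repository and trigger the timely-reporting path the abstract emphasizes.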

Journal ArticleDOI
TL;DR: The results show that the virtual teaching platform enables the interconnection of users, machines, and things at any time and place; realizes information self-verification, transmission, deep unsupervised learning, and management; and gives students a more realistic visual experience with high security.
Abstract: To extend the application of blockchain technology to the field of online English education, this paper develops a virtual platform for English teaching and learning for landscape design majors, mainly composed of a presentation layer, a business layer, and a data layer; the performance of the proposed algorithm is analyzed and compared with other existing algorithms. In the platform, communication between the presentation layer and the data layer is completed through the service layer, and data in the data layer is transferred to the presentation layer. The user first establishes a connection with the server in the presentation layer. Using the transmitted data information, the server assigns an identifier to the user and establishes a role model. Users can download the teaching courseware through the server and simulate a real learning scene by controlling the interaction of XAML files. The results show that the virtual teaching platform enables the interconnection of users (students and teachers), machines, and things at any time and place; realizes information self-verification, transmission, deep unsupervised learning, and management; and gives students a more realistic visual experience with high security.

Journal ArticleDOI
TL;DR: A novel anchor-based flower detection method, combined with an attention mechanism, is proposed to detect flowers in a smart garden in AIoT more accurately and quickly.
Abstract: In this paper, a novel anchor-based flower detection method is proposed, combined with an attention mechanism, to detect flowers in a smart garden in AIoT more accurately and quickly. While many researchers have paid much attention to flower classification in existing studies, the issue of flower detection has been largely overlooked. The problem we have outlined deals largely with the study of a new design and application of flower detection. Firstly, a new end-to-end anchor-based flower detection method is inserted into the architecture of the network to make it more precise and fast, and a loss function and attention mechanism are introduced into our model to suppress unimportant features. Secondly, our flower detection algorithms can be integrated into mobile devices. A series of investigations reveals that our flower detection method performs well: its detection accuracy is similar to that of the state-of-the-art, while its detection speed is faster. It makes a major contribution to flower detection in computer vision.

Journal ArticleDOI
TL;DR: The dynamic contract incentive approach is studied to attract UAVs to participate in traffic offloading effectively and a sequence optimization algorithm is investigated to acquire the maximum expected utility of the base station.
Abstract: Traffic offloading is considered to be a promising technology in the Unmanned Aerial Vehicles- (UAVs-) assisted cellular networks. Due to their selfishness property, UAVs may be reluctant to take part in traffic offloading without any incentive. Moreover, considering the dynamic position of UAVs and the dynamic condition of the transmission channel, it is challenging to design a long-term effective incentive mechanism for multi-UAV networks. In this work, the dynamic contract incentive approach is studied to attract UAVs to participate in traffic offloading effectively. The two-stage contract incentive method is introduced under the information symmetric scenario and the information asymmetric scenario. Considering the sufficient conditions and necessary conditions in the contract design, a sequence optimization algorithm is investigated to acquire the maximum expected utility of the base station. The simulation experiment shows that the designed two-stage dynamic contract improves the performance of traffic offloading effectively.

Journal ArticleDOI
TL;DR: The use of big data technologies, such as genetic algorithms, to mine massive travel data and establish a comprehensive tourism information service platform for governments, enterprises, tourists, and scientific research institutions is explored.
Abstract: The development of global tourism has put forward new requirements for the construction of smart tourism. More and more travel-related data have reached the level of terabytes or even petabytes, which has brought great difficulties to tourism management. This article explores the use of big data technologies, such as genetic algorithms, to mine massive travel data and establish a comprehensive tourism information service platform for governments, enterprises, tourists, and scientific research institutions. The overall design of an industrial information service platform based on big tourism data is proposed, covering the platform's overall function, data sources, data standards, and application scope. The traceability and tamper resistance of blockchain technology can also help passengers retain and verify their identity information. From the perspective of the design goals of the system, the time spent on repeated authentication for air ticket booking, accommodation reservation, and bill verification will be greatly reduced, and this improved efficiency helps establish a "trust ecology." The article also presents the architecture design, distributed architecture, and intelligent service design of the platform, as well as the key implementation technologies of the service platform: the route recommendation algorithm and big data mining of tourism information.

Journal ArticleDOI
TL;DR: Simulation results using NS2 are analyzed regarding packet delivery ratio, packet error rate, communication overhead, and end-to-end delay; USPF outperforms HHVBF and GEDAR, supporting its applicability.
Abstract: The selection of the optimal relay node remains a stern challenge for underwater routing. Due to the harsh underwater environment, the acoustic channel faces inevitable losses that impair the transmission cycle. No single protocol can cover all routing issues; therefore, designing an underwater routing protocol demands comprehensive coverage that cannot be accomplished without meticulous research. An angle-based technique is adopted to improve data packet delivery as well as extend the network lifespan. From source to destination, one complete cycle comprises three phases. In the first phase, the eligibility of a data packet to belong to the same transmission zone is judged by the Forwarder Hop Angle (FHA) and the Counterpart Hop Angle (CHA): if the FHA value is equal to or greater than the CHA, the generated packet belongs to the same transmission zone; otherwise, the packet comes from another sector. The second phase picks the best relay node by computing a three-state link quality with prefix values using the Additive-Rise and Additive-Fall method. Finally, the third phase addresses excessive overhead: a packet holding time is used to reduce the probability of packet loss. Simulation results using NS2 have been analyzed regarding packet delivery ratio, packet error rate, communication overhead, and end-to-end delay. Compared with HHVBF and GEDAR, the proposed USPF protocol outperforms both, supporting its applicability.
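The first-phase zone test (a packet belongs to the same transmission zone iff FHA >= CHA) could be sketched as below; the angle geometry is a simplifying assumption, since the abstract does not define how the two angles are measured.

```python
import math

def hop_angle(node, reference):
    # Angle (degrees) of a node's position relative to a reference point
    # (hypothetical 2D geometry standing in for the protocol's angle definition).
    dx, dy = node[0] - reference[0], node[1] - reference[1]
    return math.degrees(math.atan2(dy, dx))

def same_transmission_zone(fha, cha):
    # Phase-1 rule from the abstract: FHA >= CHA means the packet belongs
    # to the forwarder's transmission zone; otherwise it is from another sector.
    return fha >= cha
```

Packets failing this test would be handed to the third phase's holding-time logic rather than forwarded immediately.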

Journal ArticleDOI
TL;DR: A modified focal loss function is put forward as a replacement for the cross-entropy function to better treat the imbalance problem of the training data, and experiments show that the presented method obtains better segmentation results.
Abstract: In recent years, the convolutional neural network (CNN) has made remarkable achievements in semantic segmentation, and semantic segmentation methods have desirable application prospects. Current methods mostly use an encoder-decoder architecture to generate pixel-by-pixel segmentation predictions, where the encoder extracts feature maps and the decoder recovers feature map resolution. An improved semantic segmentation method on the basis of the encoder-decoder architecture is proposed. We obtain better segmentation accuracy on several hard classes and reduce the computational complexity significantly by modifying the backbone and applying some refining techniques. In comparison with the traditional architecture, our architecture does not need an additional decoding layer and further reuses the encoder weights, thus reducing the total number of parameters needed for processing. In this paper, a modified focal loss function is also put forward as a replacement for the cross-entropy function, to achieve a better treatment of the imbalance problem in the training data. In addition, more context information is added to the decoder module to improve the segmentation results. Experiments show that the presented method obtains better segmentation results, and the framework achieves good performance on many datasets. As an integral part of a smart city, multimedia information plays an important role, and semantic segmentation is an important basic technology for building a smart city.
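The paper's modified focal loss is not given in the abstract; the standard binary focal loss it starts from replaces cross-entropy by down-weighting well-classified pixels, and can be sketched as:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    # Binary focal loss: the (1 - p_t)^gamma factor shrinks the loss of easy
    # examples so training focuses on hard, underrepresented classes.
    # p: predicted probability of the positive class, y: 0/1 label.
    p = np.clip(p, 1e-7, 1.0 - 1e-7)
    p_t = np.where(y == 1, p, 1.0 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

With gamma = 0 this reduces to alpha-weighted cross-entropy; larger gamma discounts easy pixels more aggressively, which is how focal-style losses address the class-imbalance problem the abstract targets.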

Journal ArticleDOI
TL;DR: A deep-learning model based on neural networks, entitled Capsules TCN Network, is proposed to predict traffic flow in local areas of the city, unlocking the power of knowledge from urban computing and achieving better results in experimental verification.
Abstract: Predicting urban traffic is of great importance to smart city systems and public security; however, it is a very challenging task because of several dynamic and complex factors, such as patterns of urban geographical location, weather, seasons, and holidays. To tackle these challenges, inspired by deep-learning methods that unlock the power of knowledge from urban computing, we propose a neural-network-based deep-learning model, entitled Capsules TCN Network, to predict the traffic flow in local areas of the city at once. Capsules TCN Network employs a Capsule Network and a Temporal Convolutional Network as the basic units to learn the spatial dependence, time dependence, and external factors of traffic flow prediction. Specifically, we consider particular scenarios that require accurate traffic flow prediction (e.g., smart transportation, business circle analysis, and traffic flow assessment) and propose a GAN-based superresolution reconstruction model. Extensive experiments were conducted on real-world datasets to demonstrate the superiority of Capsules TCN Network over several state-of-the-art methods. Compared with the classic HA, ARIMA, RNN, and LSTM methods, respectively, the method proposed in this paper achieved better results in the experimental verification.
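Among the baselines compared against, HA (historical average) simply predicts the mean flow observed at the same time slot in the past; a minimal sketch:

```python
from collections import defaultdict

def historical_average(history):
    # history: iterable of (time_slot, flow) observations from past days.
    # Returns the HA prediction per time slot: the mean of past flows there.
    totals = defaultdict(lambda: [0.0, 0])
    for slot, flow in history:
        totals[slot][0] += flow
        totals[slot][1] += 1
    return {slot: s / n for slot, (s, n) in totals.items()}
```

HA ignores weather, holidays, and spatial dependence entirely, which is exactly the gap models like Capsules TCN Network aim to close.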

Journal ArticleDOI
TL;DR: This paper combines the advantages of blockchain and edge computing and constructs the key technology solutions of edge computing based on blockchain that achieves the security protection and integrity check of cloud data and introduces the Paillier cryptosystem which supports additive homomorphism.
Abstract: With its decentralization, reliable database, security, and quasi anonymity, blockchain provides a new solution for data storage and sharing as well as privacy protection. This paper combines the advantages of blockchain and edge computing and constructs the key technology solutions of edge computing based on blockchain. On one hand, it achieves the security protection and integrity checking of cloud data; on the other hand, it also realizes more extensive secure multiparty computation. In order to assure the operating efficiency of blockchain and alleviate the computational burden of the client, it also introduces the Paillier cryptosystem, which supports additive homomorphism. The task execution side encrypts all data, while the edge node can process the ciphertext of the data received, then acquire and return the ciphertext of the final result to the client. The simulation experiment proves that the proposed algorithm is effective and feasible.
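The additive homomorphism of the Paillier cryptosystem, which lets the edge node combine ciphertexts without seeing the plaintexts, can be sketched with toy parameters (insecure key sizes, for illustration only):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p, q):
    # Toy Paillier keys from two small primes; g = n + 1 is the standard simple choice.
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)   # requires Python 3.8+
    return (n, g), (lam, mu)

def encrypt(pub, m, r):
    # E(m) = g^m * r^n mod n^2, with r coprime to n.
    n, g = pub
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    # D(c) = L(c^lam mod n^2) * mu mod n, where L(u) = (u - 1) / n.
    n, _ = pub
    lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

pub, priv = keygen(47, 59)
c1, c2 = encrypt(pub, 42, 17), encrypt(pub, 100, 23)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c_sum = (c1 * c2) % (pub[0] ** 2)
```

This is the property that lets the edge node aggregate encrypted client data and return only the encrypted result, as the abstract describes.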