Showing papers in "Concurrency and Computation: Practice and Experience in 2021"


Journal ArticleDOI
TL;DR: The empirical results indicate that the proposed deep learning architecture outperforms conventional deep learning methods for sentiment analysis of product reviews obtained from Twitter.
Abstract: Sentiment analysis is one of the major tasks of natural language processing, in which attitudes, thoughts, opinions, or judgments toward a particular subject are extracted. The Web is a rich but unstructured source of information containing many text documents with opinions and reviews. Recognizing sentiment can be helpful for individual decision makers, business organizations, and governments. In this article, we present a deep learning-based approach to sentiment analysis on product reviews obtained from Twitter. The presented architecture combines TF-IDF weighted GloVe word embeddings with a CNN-LSTM architecture. The CNN-LSTM architecture consists of five layers: a weighted embedding layer, a convolution layer (where 1-gram, 2-gram, and 3-gram convolutions are employed), a max-pooling layer, an LSTM layer, and a dense layer. In the empirical analysis, the predictive performance of different word embedding schemes (i.e., word2vec, fastText, GloVe, LDA2vec, and doc2vec) with several weighting functions (i.e., inverse document frequency, TF-IDF, and smoothed inverse document frequency) has been evaluated in conjunction with conventional deep neural network architectures. The empirical results indicate that the proposed deep learning architecture outperforms the conventional deep learning methods.
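
A minimal Keras sketch of the five-layer CNN-LSTM described above; the vocabulary size, sequence length, filter counts, and the use of a trainable embedding (in place of a TF-IDF-scaled GloVe matrix loaded into the layer) are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the described CNN-LSTM, assuming a TensorFlow/Keras stack.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, EMB_DIM, SEQ_LEN = 20_000, 100, 120  # assumed values

inp = layers.Input(shape=(SEQ_LEN,))
# Weighted embedding layer: TF-IDF-scaled GloVe vectors would normally be loaded
# via weights=[emb_matrix]; left as a plain trainable embedding for brevity.
emb = layers.Embedding(VOCAB, EMB_DIM)(inp)

# Convolution layer with 1-gram, 2-gram, and 3-gram kernels applied in parallel.
convs = [layers.Conv1D(128, k, activation="relu", padding="same")(emb) for k in (1, 2, 3)]
merged = layers.Concatenate()(convs)

pooled = layers.MaxPooling1D(pool_size=2)(merged)    # max-pooling layer
lstm = layers.LSTM(64)(pooled)                       # LSTM layer
out = layers.Dense(1, activation="sigmoid")(lstm)    # dense layer (binary sentiment)

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```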

197 citations


Journal ArticleDOI
TL;DR: Experimental results show that the improved detection algorithm achieves better detection speed and accuracy on the helmet detection task.
Abstract: In industrial production and construction, safety accidents caused by unsafe worker behavior occur frequently. In a complex construction site scene, improper operations by personnel can introduce considerable safety risks into the entire production process. Using deep learning algorithms to replace manual monitoring of site safety regulations is a powerful way to maintain safe production. First, the improved YOLO v3 algorithm outputs the predicted anchor box of the target object; pixel feature statistics are then computed on the anchor box and multiplied by weight coefficients to output, for each predicted anchor box area, the confidence that a helmet is worn correctly, and an empirical threshold determines whether workers meet the standards for wearing helmets. Experimental results show that the deep learning-based helmet wearing detection algorithm in this paper increases the feature map scale, optimizes the prior dimension algorithm for the specific helmet dataset, improves the loss function, and then combines image-processing pixel feature statistics to accurately detect whether the helmet is worn to standard. The final result is an mAP of 93.1% and an FPS of 55 f/s. In the helmet recognition task, compared with the original YOLO v3 algorithm, mAP is increased by 3.5% and FPS by 3 f/s, showing that the improved detection algorithm achieves better detection speed and accuracy on the helmet detection task.
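
A hedged sketch of the post-detection step described above: detector confidence is combined with a pixel statistic computed inside the predicted box, then thresholded. The color/brightness statistic, the weight coefficients, and the threshold are all illustrative stand-ins for the paper's helmet-specific features.

```python
# Toy pixel-statistics scoring over a YOLO-style predicted box.
import numpy as np

def helmet_confidence(frame, box, det_score, w_pixel=0.6, w_det=0.4, thresh=0.5):
    """Combine detector score with a pixel-statistics score for one predicted box."""
    x1, y1, x2, y2 = box
    roi = frame[y1:y2, x1:x2]                  # pixels inside the predicted anchor box
    # Toy statistic: fraction of bright pixels, standing in for the paper's
    # helmet-specific pixel feature statistics.
    pixel_score = np.mean(roi > 128)
    conf = w_pixel * pixel_score + w_det * det_score   # weight coefficients
    return conf, conf >= thresh                        # empirical threshold decision
```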

98 citations


Journal ArticleDOI
TL;DR: Experimental and analytical results verify the effectiveness, high level of security, and substantial post-decryption image quality of the proposed encryption scheme for images on the cloud, which is based on a single-round dictionary and chaotic sequences.
Abstract: With the increasing popularity of multimedia technology and the prevalence of various smart electronic devices, severe security problems have recently arisen in information and communication systems. Image maintenance can be treated as a typical example of cloud storage outsourcing, as images require much more storage space than text documents. A highly efficient image encryption method is crucial to preserve the privacy of sensitive and critical images in cloud-edge communications. Compressive sensing (CS) is capable of reconstructing compressible or sparse signals using fewer measurements than traditional methods when the support of the signals is unobtainable in advance. Encryption methods based on CS decrease resource demands during signal acquisition and protect image data against theft of confidential information by malicious users. Moreover, CS-based encryption can achieve the encryption and the compression of an image simultaneously. Recently, a considerable amount of work using CS theory has been reported. However, most existing methods produce recovered images of unsatisfactory quality and employ a massive measurement matrix as the encryption key. This paper proposes an encryption scheme for images on the cloud based on a single-round dictionary and chaotic sequences. Both the experimental and analytical results verify the proposed scheme's effectiveness, high level of security, and substantial image quality improvement after decryption.
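
A minimal sketch of the general CS-encryption idea referenced above: a measurement matrix is regenerated from a chaotic (logistic-map) key instead of being stored or transmitted, so sensing compresses and encrypts at once. The map parameters, sizes, and key handling are assumptions; this is not the paper's single-round-dictionary scheme.

```python
import numpy as np

def logistic_sequence(x0, n, r=3.99):
    """Chaotic logistic-map sequence; (x0, r) act as the secret key."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def cs_encrypt(signal, m, key=0.3571):
    n = signal.size
    # Measurement matrix derived from the chaotic sequence, not shared explicitly.
    phi = logistic_sequence(key, m * n).reshape(m, n) - 0.5
    return phi @ signal    # m < n measurements = compression + encryption in one step

x = np.random.randn(256)   # stand-in for a sparse image block
y = cs_encrypt(x, m=128)   # ciphertext/measurements
```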

84 citations


Journal ArticleDOI
TL;DR: A fog-based detection system (FDS) built on a behavior-level trust evaluation mechanism is designed; experiments show that FDS has clear advantages in detecting hidden data attacks.
Abstract: With the popularity of Sensor-Cloud, its security issues are receiving more attention from industry and academia. In particular, the Sensor-Cloud underlying network is very vulnerable to internal attacks due to its limitations in computing, storage, and analysis. Most existing trust evaluation mechanisms are proposed to detect internal attacks at the behavior level. However, there are some special internal attacks at the data level, such as hidden data attacks, which appear normal at the behavior level but generate malicious data that lead users to make wrong decisions. To detect this type of attack, we design a fog-based detection system (FDS) built on a behavior-level trust evaluation mechanism. In this paper, three types of scenes (redundant data, parameter curve characteristics, and data validation) are defined, and three detection schemes are given. Experiments are conducted which show that FDS has clear advantages in detecting hidden data attacks.
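
A toy sketch of the "redundant data" detection scene described above: readings from redundant sensors covering the same region are cross-checked against a robust consensus, and a node deviating beyond a tolerance is flagged. The grouping, consensus statistic, and tolerance are assumptions for illustration.

```python
import numpy as np

def flag_hidden_data(readings, tol=3.0):
    """readings: dict node_id -> value from sensors covering the same region."""
    vals = np.array(list(readings.values()))
    median = np.median(vals)                     # robust consensus of redundant data
    return {nid: abs(v - median) > tol for nid, v in readings.items()}

print(flag_hidden_data({"s1": 21.2, "s2": 20.8, "s3": 35.0}))  # s3 flagged
```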

69 citations


Journal ArticleDOI
TL;DR: The CNN model effectively overcomes the limitations of traditional machine learning in sEMG gesture recognition and uses 1-dim convolution kernels to extract deep abstract features that improve recognition.
Abstract: For surface electromyography (sEMG) gesture recognition, traditional machine learning models are sensitive to the sEMG feature extraction method, which makes it difficult to distinguish the subtle differences between similar gestures. Using the NinaPro DB1 dataset as the research object, sEMG feature images and a Convolutional Neural Network (CNN) are combined to recognize 52 gesture movements. The CNN model effectively overcomes the limitations of traditional machine learning in sEMG gesture recognition and uses 1-dim convolution kernels to extract deep abstract features that improve recognition. Finally, simulation experiments show that, compared with CNNs trained on raw-sEMG images, CNNs trained on single sEMG-feature images, and traditional machine learning on sEMG, the CNN trained on multi-sEMG-feature images achieves the highest accuracy, reaching 82.54%.
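
A minimal Keras sketch of a CNN over sEMG inputs using 1-dim convolution kernels; the input shape (NinaPro DB1 records 10 electrode channels), window length, and layer sizes are illustrative assumptions rather than the paper's exact feature-image pipeline.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(100, 10)),            # 100 time steps x 10 sEMG channels
    layers.Conv1D(32, 3, activation="relu"),  # 1-dim kernels extract deep features
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(52, activation="softmax"),   # 52 NinaPro DB1 gesture classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```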

69 citations


Journal ArticleDOI
TL;DR: A multiscale fast correlation filtering tracking algorithm based on a feature fusion model is proposed that exhibits better robustness and improved real-time performance in sophisticated scenarios, including scale variation, deformation, fast motion, and occlusion.
Abstract: In scenes of high visual complexity, identification of a moving object can be affected by scale changes and occlusion during tracking, reducing tracking accuracy. To address this problem, a multiscale fast correlation filtering tracking algorithm based on a feature fusion model is proposed in the present work, with the aim of reducing the poor tracking caused by occlusion and scale changes in complex scenes. The object's grayscale (GRAY) feature, histogram of oriented gradient (HOG) feature, and color name (CN) feature are dimension-reduced and fused to form a feature matrix. A hierarchical principal component analysis (HPCA) algorithm is used to extract visually salient features and reconstruct the feature matrix under real-time conditions; the correlation filter is trained on this matrix, the number of dimensions is effectively reduced, and the fused feature matrix is used to train the multiscale fast correlation filter, so that the object's position and scale can be accurately predicted. The proposed algorithm is then compared with five popular correlation filtering tracking algorithms. Experimental results demonstrate that its average tracking speed reaches a practical frame rate and that it achieves promising object tracking results on the OTB benchmark datasets. Its tracking accuracy is superior to that of the other five correlation filtering tracking algorithms in scenes featuring object occlusion and changes in scale, and it exhibits better robustness and improved real-time performance in sophisticated scenarios including scale variation, deformation, fast motion, and occlusion.
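
A hedged sketch of the feature-fusion step: per-pixel GRAY, HOG, and CN descriptors are stacked into one matrix and reduced, with ordinary PCA standing in for the paper's hierarchical PCA (HPCA). Feature extractors, patch size, and dimensions are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def fuse_features(gray_feat, hog_feat, cn_feat, n_components=16):
    # Each argument: (H*W, d_i) per-pixel descriptors over the search patch.
    fused = np.hstack([gray_feat, hog_feat, cn_feat])   # feature fusion matrix
    return PCA(n_components=n_components).fit_transform(fused)

gray = np.random.rand(900, 1)     # 30x30 patch, 1-dim grayscale
hog = np.random.rand(900, 31)     # 31-dim HOG cells (stand-in values)
cn = np.random.rand(900, 10)      # 10-dim color names
z = fuse_features(gray, hog, cn)  # reduced matrix used to train the correlation filter
```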

67 citations


Journal ArticleDOI
TL;DR: A blockchain-oriented platform to guarantee the origin and provenance of food items in a Smart Tourism Region context is presented, together with a real case study applied to local products from Sardinia, Italy, at the end of the article.
Abstract: This article proposes a blockchain-oriented platform to guarantee the origin and provenance of food items in a Smart Tourism Region context. Local food and beverage can, in fact, be a good way to attract tourists and to promote the area, provided that their provenance is clearly certified. We designed and developed a blockchain-based system to manage an agri-food supply chain for tracking food items. By using smart contracts, the platform guarantees transparency, efficiency, and trustworthiness. Our system is particularly suitable for managing the cold chain, since it interfaces with IoT network devices providing detailed monitored food data such as storage temperature, environment humidity, and GPS data. All involved actors can share data and information in a more efficient, transparent, and tamper-proof way than with traditional systems. The final consumer can transparently access the entire agri-food chain of the purchased product and verify provenance by retrieving all detailed information registered in the blockchain public ledger. The proposed system has been designed according to the ABCDE method, a recently conceived agile development process for designing general blockchain systems with higher software quality by means of software engineering practices. A real case study applied to local products from Sardinia, Italy, is presented at the end of the article.

66 citations


Journal ArticleDOI
TL;DR: This paper improves RGB-D information processing by considering both the independent and the related features of multimodal data, constructing a weight-adaptive algorithm to fuse the different features.
Abstract: With the continuous development of sensor technology, the acquisition cost of RGB-D images keeps falling, and gesture recognition based on depth images and Red-Green-Blue (RGB) images has gradually become a research direction in the field of pattern recognition. However, most current processing methods for RGB-D gesture images are relatively simple, ignoring the relationship and mutual influence between the two modalities and failing to make full use of the correlations between them. In view of these problems, this paper improves RGB-D information processing by considering both the independent and the related features of multimodal data, constructing a weight-adaptive algorithm to fuse the different features. Simulation experiments show that the proposed method outperforms traditional RGB-D gesture image processing methods and achieves a higher gesture recognition rate. Compared with more advanced current gesture recognition methods, the proposed method also achieves higher recognition accuracy, which verifies its feasibility and robustness.
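
A minimal Keras sketch of weight-adaptive fusion of RGB and depth features: scalar gates predicted from the joint features weight each modality before merging. The gating scheme, shapes, and class count are illustrative assumptions standing in for the paper's adaptive weighting of independent and related features.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

rgb_in = layers.Input(shape=(64, 64, 3))
depth_in = layers.Input(shape=(64, 64, 1))

f_rgb = layers.Conv2D(32, 3, activation="relu", padding="same")(rgb_in)
f_depth = layers.Conv2D(32, 3, activation="relu", padding="same")(depth_in)

# Adaptive weights predicted from the concatenated (related) features themselves.
joint = layers.Concatenate()([f_rgb, f_depth])
gates = layers.Dense(2, activation="softmax")(layers.GlobalAveragePooling2D()(joint))
w_rgb = layers.Reshape((1, 1, 1))(gates[:, 0:1])
w_depth = layers.Reshape((1, 1, 1))(gates[:, 1:2])

fused = layers.Add()([f_rgb * w_rgb, f_depth * w_depth])   # weighted modality fusion
out = layers.Dense(10, activation="softmax")(layers.GlobalAveragePooling2D()(fused))
model = Model([rgb_in, depth_in], out)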

62 citations


Journal ArticleDOI
TL;DR: An accurate, real-time robot grasp detection method based on convolutional neural networks is proposed that can quickly calculate the optimal gripping point and posture for irregular objects with arbitrary poses and different shapes.
Abstract: Robot grasping technology is a hot spot in robotics research. In relatively fixed industrial scenarios, using robots to perform grasping tasks is efficient and durable. However, in an unstructured environment, items are diverse, placement postures are random, and multiple objects are stacked and occlude each other, which makes it difficult for the robot to recognize the grasping target and complicates the grasp itself. Therefore, we propose an accurate, real-time robot grasp detection method based on convolutional neural networks. A cascaded two-stage convolutional neural network model with coarse-to-fine position and attitude estimation was established. The R-FCN model extracts and screens candidate frames for the picking position and provides rough angle estimation; to address the insufficient pose-detection accuracy of previous methods, an Angle-Net model is proposed to finely estimate the picking angle. Tests on the Cornell dataset and online robot experiments show that the method can quickly calculate the optimal gripping point and posture for irregular objects with arbitrary poses and different shapes. Both the accuracy and the real-time performance of the detection are improved compared with previous methods.
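
A hedged sketch of the coarse-to-fine idea: a coarse stage picks an angle bin, and a fine stage regresses the residual within that bin. The two stage functions are stubs with placeholder outputs; R-FCN and Angle-Net themselves are not reproduced here, and the bin width is an assumed discretization.

```python
import numpy as np

BIN_WIDTH = 30.0  # degrees; 6 coarse bins over 0-180 (assumed)

def coarse_stage(patch):
    """Stub for R-FCN-style screening: returns the most likely angle bin index."""
    return int(np.argmax(np.random.rand(6)))        # placeholder bin scores

def fine_stage(patch, bin_idx):
    """Stub for Angle-Net: regresses the residual angle inside the coarse bin."""
    residual = float(np.random.rand()) * BIN_WIDTH  # placeholder regression
    return bin_idx * BIN_WIDTH + residual

patch = np.zeros((64, 64, 3))     # candidate grasp-region crop
angle = fine_stage(patch, coarse_stage(patch))
print(f"predicted grasp angle: {angle:.1f} deg")
```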

61 citations



Journal ArticleDOI
TL;DR: The main purpose of the study was to measure and analyze the cultural intelligence of university students; the content and results are expected to be an important foundation for the future development of education contents in universities.
Abstract: Academic discussions on cultural intelligence (CQ) are now paying attention to its potential utilization from various angles. The field of study has expanded beyond business administration into psychology, education, tourism, communication, and the arts. This reflects the widespread study, driven by deepening globalization, of global communication competence in multicultural situations. In this paper, we seek a way to utilize the cultural intelligence model proposed by David Livermore, with the aim of developing education contents that improve the cultural intelligence of university students. The main purpose of the study was to measure and analyze the cultural intelligence of university students. To that end, the level of cultural intelligence of Korean university freshmen was measured and analyzed. The individual levels of the four areas constituting cultural intelligence were identified, and differences between male and female students were examined. At the same time, differences in cultural intelligence were analyzed according to the duration of multicultural contact and experience with foreign language lectures taught by foreigners. Finally, we analyzed the correlations among the four areas that comprise cultural intelligence; as a result, the content and results of this study are expected to be an important foundation for the future development of education contents in universities.

Journal ArticleDOI
TL;DR: This article proposes four deep and reinforcement learning‐based scheduling approaches to automate the process of scheduling large‐scale workloads onto cloud computing resources, while reducing both the resource consumption and task waiting time.
Abstract: Cloud computing is undeniably becoming the main computing and storage platform for today's major workloads. From Internet of things and Industry 4.0 workloads to big data analytics and decision-making jobs, cloud systems daily receive a massive number of tasks that need to be simultaneously and efficiently mapped onto the cloud resources. Therefore, deriving an appropriate task scheduling mechanism that can minimize both task execution delay and cloud resource utilization is of prime importance. Recently, the concept of cloud automation has emerged to reduce manual intervention and improve resource management in large-scale cloud computing workloads. In this article, we capitalize on this concept and propose four deep and reinforcement learning-based scheduling approaches to automate the process of scheduling large-scale workloads onto cloud computing resources, while reducing both the resource consumption and the task waiting time. These approaches are: reinforcement learning (RL), deep Q networks, recurrent neural network long short-term memory (RNN-LSTM), and deep reinforcement learning combined with LSTM (DRL-LSTM). Experiments conducted using real-world datasets from Google Cloud Platform revealed that DRL-LSTM outperforms the other three approaches. The experiments also showed that DRL-LSTM minimizes the CPU usage cost by up to 67% compared with the shortest job first (SJF), and by up to 35% compared with both the round robin (RR) and improved particle swarm optimization (PSO) approaches. Moreover, our DRL-LSTM solution decreases the RAM memory usage cost by up to 72% compared with the SJF, up to 65% compared with the RR, and up to 31.25% compared with the improved PSO.
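
A minimal tabular Q-learning sketch for task-to-VM scheduling, standing in for the simplest of the four approaches (plain RL); the state/action encoding, reward (negative waiting time), workload, and learning rates are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

N_VMS, EPISODES, ALPHA, GAMMA, EPS = 4, 500, 0.1, 0.9, 0.1
Q = np.zeros((N_VMS + 1, N_VMS))   # state: index of the currently busiest VM

rng = np.random.default_rng(0)
for _ in range(EPISODES):
    loads = np.zeros(N_VMS)
    for task_len in rng.integers(1, 10, size=20):   # a 20-task synthetic workload
        state = int(np.argmax(loads)) if loads.any() else N_VMS
        action = rng.integers(N_VMS) if rng.random() < EPS else int(np.argmax(Q[state]))
        wait = loads[action]          # task waits behind the queue on that VM
        loads[action] += task_len
        next_state = int(np.argmax(loads))
        # Negative waiting time as reward: Q-learning drives waiting time down.
        Q[state, action] += ALPHA * (-wait + GAMMA * np.max(Q[next_state]) - Q[state, action])
```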

Journal ArticleDOI
TL;DR: In a TensorFlow environment, based on the improved network model, self-occlusion and object-occlusion gestures are trained on color maps, depth maps, and fused color-depth maps, respectively.
Abstract: Gesture recognition has always been a research hotspot in the field of human-computer interaction. Its purpose is to realize natural interaction with the machine by recognizing the semantics expressed by gestures. In gesture recognition, occlusion is an inevitable problem: some or even all of the gesture features can be lost due to occlusion, resulting in wrong recognition or even unrecognizability. Therefore, it is of great significance to study gesture recognition under occlusion. The single shot multibox detector (SSD) algorithm is analyzed and candidate front-end networks are compared; MobileNets is selected as the front-end network, and the MobileNets-SSD network is improved. In a TensorFlow environment, based on the improved network model, self-occlusion and object-occlusion gestures are trained on color maps, depth maps, and fused color-depth maps, respectively, yielding recognition models for self-occlusion and object-occlusion gestures in each of the three input types. The learning rate, loss function, and average accuracy of the resulting models are then compared and analyzed for occlusion gesture recognition.

Journal ArticleDOI
TL;DR: An improved SSD model based on deep feature fusion is proposed that improves the detection accuracy and detection rate of targets, with an even more pronounced effect for relatively small-scale targets.
Abstract: In the single shot multibox detector (SSD), feature layers at different depths are used independently as inputs to the classification network, so the same object is easily detected multiple times. This article proposes an improved SSD model based on deep feature fusion. In the SSD algorithm, deep feature fusion between the target detection layer and its adjacent feature layers is used, including convolution kernels and pooling kernels of different sizes, down-sampling of low-level features, and up-sampling of high-level features by deconvolution. The network is further improved by combining the target frame recommendation strategy in the SSD algorithm with a frame regression algorithm. The experimental results show that the improved SSD algorithm increases the detection accuracy and detection rate of targets, and the effect is even more pronounced for relatively small-scale targets.
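
A minimal Keras sketch of the fusion described above: the low-level map is down-sampled, the high-level map is up-sampled by deconvolution, and both are merged with the target detection layer. The spatial sizes and channel counts are illustrative, not SSD's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def fuse_detection_layer(low, det, high):
    """low/det/high: adjacent feature maps at 2x, 1x, 0.5x the det resolution."""
    low_ds = layers.MaxPooling2D(2)(low)                                  # down-sample low-level
    high_us = layers.Conv2DTranspose(high.shape[-1], 2, strides=2)(high)  # deconv up-sample
    return layers.Concatenate()([low_ds, det, high_us])                   # fused classifier input

low = layers.Input(shape=(40, 40, 256))
det = layers.Input(shape=(20, 20, 512))
high = layers.Input(shape=(10, 10, 512))
fused = fuse_detection_layer(low, det, high)   # (20, 20, 1280) fused feature map
```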

Journal ArticleDOI
TL;DR: The proposed FFA outperforms the previous version of the FFA, with minimal modification, in convergence, CPU time, and complexity, and minimizes most standard functions and engineering constraints to the optimum value within the minimum time and the fewest function evaluations.
Abstract: Solving constrained engineering optimization problems is a highly significant issue, and many different approaches have been proposed in this regard. In this article, a modified farmland fertility algorithm (FFA) is proposed. This algorithm improves new solutions by benefiting from neighborhoods produced by the new method. In the proposed algorithm, the FFA phases are modified in the update functions, and some of its variables are replaced with new ones. In the first phase of the algorithm, a mutation with a particular rule is also used to improve solutions. Results on the CEC2019 standard functions were examined to determine the impact of the new parameters. Experiments were performed on 26 standard functions and two constrained engineering optimization problems. For a fairer comparison, nonparametric analyses such as the Friedman test and pairwise tests were used to evaluate and compare the algorithms. These experiments showed that the proposed algorithm is capable of high-speed convergence and can minimize most standard functions and engineering constraints to the optimum value within the minimum time and the fewest function evaluations. The proposed FFA outperformed the previous version of the FFA, with minimal modification, in convergence, CPU time, and complexity.

Journal ArticleDOI
TL;DR: A survey of deep learning models in medical image analysis for computer-aided diagnosis in modern medicine, finding that deep learning is the most promising model for diagnosing spleen and stomach diseases in smart Chinese medicine.
Abstract: Cloud computing is significantly contributing to the development of smart Chinese medicine. The diagnosis and treatment of spleen and stomach diseases has been arousing great interest in smart Chinese medicine with cloud computing, since many people suffer from these diseases. Currently, spleen and stomach diseases present some new characteristics amid dramatic changes in the natural climate, social environment, and human living habits. Recently, deep learning, together with cloud computing techniques, has been successfully used in medical image analysis, and it is therefore the most promising model for diagnosing spleen and stomach disease in smart Chinese medicine. In this paper, we present a survey of deep learning models in medical image analysis for computer-aided diagnosis in modern medicine. Afterwards, we summarize the syndrome types of spleen and stomach diseases and analyze the causes and pathogenesis of each syndrome. Finally, we discuss the open challenges and research directions of deep learning models applicable to the computer-aided diagnosis of spleen and stomach diseases, which is expected to contribute to the development of smart Chinese medicine with cloud computing.

Journal ArticleDOI
TL;DR: The article opens a discussion of, and highlights, the problems of storing and using data obtained from user-chatbot communication, and proposes some standards to protect the user.
Abstract: Chatbots are artificial communication systems that are becoming increasingly popular, and not all of their security questions have been clearly resolved. People use chatbots for assistance in shopping, bank communication, meal delivery, healthcare, cars, and many other activities. However, their use brings additional security risks and creates serious security challenges that have to be handled. Understanding the underlying problems requires defining the crucial, security-related steps in the techniques used to design chatbots. Many factors increase security threats and vulnerabilities; all of them are comprehensively studied, and security practices to reduce security weaknesses are presented. Modern chatbots are no longer rule-based models; they employ modern natural language processing and machine learning techniques. Such techniques learn from conversations, which can contain personal information. The paper discusses the circumstances under which such data can be used and how chatbots treat them. Many chatbots operate on social/messaging platforms, each with its own terms and conditions regarding data. The paper aims to present a comprehensive study of security aspects in communication with chatbots. The article opens a discussion of, and highlights, the problems of storing and using data obtained from user-chatbot communication, and proposes some standards to protect the user.

Journal ArticleDOI
TL;DR: A new hybrid intrusion detection model (GA-Fuzzy) is proposed for handling the large-volume NSL-KDD dataset, detecting attacks effectively and reducing the misclassification alarm rate.
Abstract: As the usage of computer resources has become a very important part of day-to-day life, security threats have also increased. Hence, an Intrusion Detection System (IDS) is used to detect and protect computer resources from security threats generated by malicious attackers. Existing techniques such as encryption, authentication, and access control do not support the analysis of large volumes of data and are efficient only against a limited number of attacks. Attackers exploit weaknesses in the security level of an information system and can easily violate the security properties of the computer system (confidentiality, integrity, and availability). Handling threats to computer resources thus remains a challenging issue. A Distributed Denial of Service (DDoS) attack is an important attack that sends numerous requests to a destination server from multiple compromised systems, rendering the information system unable to process requests and hence unresponsive to the attacker as well as to normal end users, which results in a large number of false alarms and lower detection accuracy rates. We propose a new hybrid intrusion detection model (GA-Fuzzy) for handling the large-volume NSL-KDD dataset, detecting attacks effectively and reducing the misclassification alarm rate. Here, a genetic algorithm (GA) is used to create new patterns (new features and records) for training the fuzzy classifier effectively. We use Principal Component Analysis (PCA) as a feature selection method that eliminates irrelevant and redundant data from the NSL-KDD dataset, improving efficiency and attaining 99.96% detection accuracy with a 0.04% false alarm rate.
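
A hedged sketch of the preprocessing-plus-classification pipeline: PCA removes redundant feature directions before a classifier is trained. A decision tree stands in for the GA-tuned fuzzy classifier, which is not reproduced here, and the data below is a random stand-in with NSL-KDD's 41-feature shape.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Stand-in data with NSL-KDD's shape: 41 numeric features, binary attack label.
X, y = np.random.rand(1000, 41), np.random.randint(0, 2, 1000)

pipeline = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),          # drop irrelevant/redundant feature directions
    DecisionTreeClassifier(),      # stand-in for the GA-trained fuzzy classifier
)
pipeline.fit(X, y)
print("training accuracy:", pipeline.score(X, y))
```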

Journal ArticleDOI
TL;DR: A novel SLA management framework is proposed that facilitates the specification and enforcement of dynamic SLAs, which describe how, and under which conditions, the offered service level can change over time; the framework increases flexibility in resource management and trust in the offered cloud services.
Abstract: The current cloud market is dominated by a few providers, which offer cloud services in a take‐it‐or‐leave‐it manner. However, the dynamism and uncertainty of cloud environments may require the change over time of both application requirements and service capabilities. The current service‐level agreement (SLA) management solutions cannot easily guarantee a trustworthy, distributed SLA adaptation due to the centralized authority of the cloud provider who could also misbehave to pursue individual goals. To address the above issues, we propose a novel SLA management framework, which facilitates the specification and enforcement of dynamic SLAs that enable one to describe how, and under which conditions, the offered service level can change over time. The proposed framework relies on a two‐level blockchain architecture. At the first level, the smart SLA is transformed into a smart contract that dynamically guides service provisioning. At the second level, a permissioned blockchain is built through a federation of monitoring entities to generate objective measurements for the smart SLA/contract assessment. The scalability of this permissioned blockchain is also thoroughly evaluated. The proposed framework enables creating open distributed clouds, which offer manageable and dynamic services, and facilitates cost reduction for cloud consumers, while it increases flexibility in resource management and trust in the offered cloud services.

Journal ArticleDOI
TL;DR: Different machine learning algorithms, namely, support vector machine, naive Bayes, linear regression, artificial neural network, decision tree, random forest, the fuzzy classifier, K‐nearest neighbor, adaptive boosting, gradient boosting, and tree ensemble have been implemented for botnet attack detection.
Abstract: With the arrival of the Internet of Things (IoT), many devices such as sensors can nowadays communicate with each other and share data easily. However, the IoT paradigm is prone to security concerns, as many attackers try to attack the network and make it vulnerable. Various models have been designed to address these security issues, but there still exist many emerging variants of botnet attacks, such as Mirai, Persirai, and Bashlite, that exploit security breaches. This research article investigates cyber security in the context of B-IDS, DDoS, and malware attacks. For this purpose, different machine learning algorithms, namely, support vector machine, naive Bayes, linear regression, artificial neural network, decision tree, random forest, the fuzzy classifier, K-nearest neighbor, adaptive boosting, gradient boosting, and tree ensemble, have been implemented for botnet attack detection. For performance measurement, these algorithms have been tested on nine sensor devices over the N-BaIoT datasets to measure the security and accuracy of the intrusion detection system. The results show that the tree-based algorithms achieved more than 99% accuracy, which is considerably higher than the other tested methods on the same sensor devices.
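
A minimal sketch of the evaluation protocol: several scikit-learn classifiers are trained on the same data and compared by accuracy. The random arrays below are a stand-in with N-BaIoT's 115-feature shape; a real run would load the per-device N-BaIoT CSV files instead.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = np.random.rand(2000, 115), np.random.randint(0, 2, 2000)  # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
    "random_forest": RandomForestClassifier(),
    "gradient_boosting": GradientBoostingClassifier(),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))   # accuracy per algorithm
```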

Journal ArticleDOI
TL;DR: The results show that the fringe pattern based on photoelasticity can be used for 3D reconstruction as a structured light pattern.
Abstract: A three-dimensional (3D) reconstruction method for structured light based on photoelastic fringes is proposed in this research. The photoelastic fringes are produced both by simulation and by a polycarbonate disk under diametric compression load. Six fringe patterns are projected onto an object using the six-step phase-shifting technique, and the isochromatic phase image is calculated from them. After phase unwrapping, the isochromatic phase image can be used for 3D reconstruction. To verify the effectiveness of this method, two experimental devices were built, using a projector and a photoelastic instrument, respectively. The results show that the fringe pattern based on photoelasticity can be used for 3D reconstruction as a structured light pattern. Compared with the simulation results, the fringes produced by load are more blurred; to obtain a better reconstruction, a large load should be applied to produce dense fringes.
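
For concreteness, a worked sketch of the standard N-step phase-shifting formula used to recover the wrapped phase from six equally shifted fringe images; the synthetic fringes and the particular sign convention are assumptions (conventions vary between setups), not the paper's exact processing chain.

```python
import numpy as np

def wrapped_phase(images):
    """images: list of N fringe images with equally spaced phase shifts 2*pi*n/N."""
    n = len(images)
    deltas = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(d) for img, d in zip(images, deltas))
    den = sum(img * np.cos(d) for img, d in zip(images, deltas))
    # For I_n = A + B*cos(phi + delta_n): sum I*sin = -(N/2)*B*sin(phi),
    # sum I*cos = (N/2)*B*cos(phi), hence:
    return np.arctan2(-num, den)   # wrapped phase; unwrapping follows before 3D use

# Six synthetic fringe frames with a linearly varying phase.
frames = [np.cos(0.1 * np.arange(100)[None, :] + 2 * np.pi * k / 6) for k in range(6)]
phi = wrapped_phase(frames)
```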

Journal ArticleDOI
TL;DR: A conceptual framework to aid software architects, developers, and decision makers in adopting the right blockchain technology, allowing technologies to be assessed more objectively and the one that best fits developers' needs to be selected, ultimately cutting costs, reducing time to market, and accelerating return on investment.
Abstract: Blockchain is a decentralized transaction and data management solution, the technological leap behind the success of Bitcoin and other cryptocurrencies. As the variety of existing blockchains and distributed ledgers continues to increase, adopters should focus on selecting the solution that best fits their needs and the requirements of their decentralized applications, rather than developing yet another blockchain from scratch. In this article we present a conceptual framework to aid software architects, developers, and decision makers in adopting the right blockchain technology. The framework exposes the interrelation between technological decisions and architectural features, capturing knowledge from existing academic literature, industrial products, technical forums/blogs, and experts' feedback. We empirically show the applicability of our framework by dissecting the platforms behind Bitcoin and other top-10 cryptocurrencies, aided by a focus group with researchers and industry practitioners. We then leverage the framework, together with key notions of the architectural tradeoff analysis method, to analyze four real-world blockchain case studies from industry and academia. Results show that applying our framework leads to a deeper understanding of the architectural tradeoffs, allowing technologies to be assessed more objectively and the one that best fits developers' needs to be selected, ultimately cutting costs, reducing time to market, and accelerating return on investment.

Journal ArticleDOI
TL;DR: This paper discusses a lightweight encryption-based secure digital watermarking technique for medical applications that uses redundant discrete wavelet transform and singular value decomposition along with the nonsubsampled contourlet transform (NSCT) to improve robustness and imperceptibility.
Abstract: This paper discusses a lightweight encryption-based secure digital watermarking technique for medical applications. The technique uses the redundant discrete wavelet transform (RDWT) and singular value decomposition (SVD) along with the nonsubsampled contourlet transform (NSCT) to improve robustness and imperceptibility. The security of the proposed technique is further improved by incorporating a lightweight (low-complexity) cryptographic mechanism applied after embedding multiple watermarks. The proposed scheme first partitions the host image into subcomponents and calculates their entropy values. NSCT is applied to the subcomponent with the maximum entropy value, followed by RDWT decomposition; finally, SVD is applied to obtain the singular vectors of the RDWT-decomposed components. The watermark images are processed using the same procedure. The method uses singular values to hide watermarks in the host image. The experimental results show that the combined technique makes our approach more robust and imperceptible when evaluated with various wavelet filters and with 10 different types of medical and five different types of nonmedical cover images. Furthermore, the strength of the cryptographic mechanism is tested using standard performance measures, confirming its security. Moreover, the results show that our method improves robustness in comparison with the previously reported techniques under consideration.
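
A minimal sketch of the SVD embedding step described above: the watermark's singular values are added to the host component's singular values with strength alpha. The RDWT/NSCT decomposition and the lightweight encryption stage are omitted, and alpha is an illustrative assumption.

```python
import numpy as np

def svd_embed(host_band, watermark, alpha=0.05):
    """host_band: a decomposed subband; watermark: same-sized watermark image."""
    U, S, Vt = np.linalg.svd(host_band, full_matrices=False)
    _, Sw, _ = np.linalg.svd(watermark, full_matrices=False)
    S_marked = S + alpha * Sw                    # hide watermark in singular values
    return U @ np.diag(S_marked) @ Vt

band = np.random.rand(128, 128)    # stand-in for an RDWT/NSCT subband
mark = np.random.rand(128, 128)    # stand-in watermark
marked = svd_embed(band, mark)
```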

Journal ArticleDOI
TL;DR: A target localization algorithm under the YOLOv4 framework is designed and applied during SLAM global mapping to search for and locate targets using rich RGB-D image information, establishing a local dense point cloud map of the target object.
Abstract: Target localization in unknown environments is one of the development directions of mobile robots. Simultaneous localization and mapping (SLAM) can be used to build maps in unknown environments, but the resulting maps suffer from poor readability and interactivity. In this article, target detection and SLAM are combined to search for and locate targets using rich RGB-D image information. The target's determined position in the global map facilitates subsequent manipulation of the target by mobile robots. By establishing a local dense point cloud map of the target object, the current state of the target is displayed directly, the readability of the map is improved, and the drawbacks of a hard-to-interpret global sparse map and the slow construction of a global dense map are avoided. A target localization algorithm under the YOLOv4 framework is designed and applied in the SLAM global mapping process. Our work helps obtain the positions of objects in three-dimensional space. The experimental results show that the time cost of this method in dense mapping is reduced by 50%-70%, and the number of point clouds is reduced by 60%-70%.
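
A hedged sketch of building the local dense point cloud: RGB-D pixels inside a detection box are back-projected with the pinhole camera model. The intrinsics and box coordinates are illustrative; in the paper the box comes from the YOLOv4-based detector.

```python
import numpy as np

def box_to_point_cloud(depth, box, fx, fy, cx, cy):
    """depth: HxW depth map in meters; box: (x1, y1, x2, y2) from the detector."""
    x1, y1, x2, y2 = box
    us, vs = np.meshgrid(np.arange(x1, x2), np.arange(y1, y2))
    zs = depth[y1:y2, x1:x2]
    xs = (us - cx) * zs / fx                  # pinhole back-projection
    ys = (vs - cy) * zs / fy
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                 # drop invalid (zero-depth) pixels

cloud = box_to_point_cloud(np.random.rand(480, 640), (100, 80, 220, 200),
                           fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```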

Journal ArticleDOI
Mengting Han, Xuan Zhang, Xin Yuan, Jiahao Jiang, Wei Yun, Chen Gao
TL;DR: This survey conducts a comprehensive and systematic analysis of semantic similarity, proposing three categories of methods (corpus-based, knowledge-based, and deep learning (DL)-based) and evaluating state-of-the-art DL methods on four common datasets, showing that DL-based methods better address challenges of short-text similarity such as sparsity and complexity.
Abstract: Short-text similarity plays an important role in natural language processing (NLP) and has been applied in many fields. Due to the lack of sufficient context in short texts, their similarity is difficult to measure. Using semantic similarity to calculate textual similarity has attracted the attention of academia and industry and achieved better results. In this survey, we conduct a comprehensive and systematic analysis of semantic similarity. We first propose three categories of semantic similarity methods: corpus-based, knowledge-based, and deep learning (DL)-based, and analyze the pros and cons of representative and novel algorithms in each category. Our analysis also covers the applications of these similarity measures in other areas of NLP. We then evaluate state-of-the-art DL methods on four common datasets, showing that DL-based methods can better address the challenges of short-text similarity, such as sparsity and complexity. In particular, the bidirectional encoder representations from transformers (BERT) model can fully exploit the scarce information and semantics of short texts, obtaining higher accuracy and F1 scores. We finally put forward some future directions.

Journal ArticleDOI
TL;DR: The aim of this article is to wholly and systematically review big data handling approaches in smart cities, analyzing research efforts published between 2013 and February 2021 and categorizing the techniques based on their algorithms and architectures.
Abstract: Recently, the notion of a smart city, which includes smart well-being, smart transit, and smart society, has attracted much attention due to its impact on people's quality of living. Data in smart cities are characterized by variety, velocity, volume, value, and veracity, the well-known characteristics of big data. The fast-paced expansion of IoT devices and sensors in smart cities generates a huge volume of data that can help decision makers and managers in city management. The aim of this article is to wholly and systematically review big data handling approaches in smart cities: we analyze research efforts published between 2013 and February 2021 and categorize the techniques based on their algorithms and architectures. Further, the main ideas, evaluation techniques, tools, evaluation metrics, algorithm types, advantages, and disadvantages are explored. Additionally, essential evaluation factors are introduced, among which scalability and availability (16%), time (15%), and accuracy (11%) receive the most attention. Finally, some of the challenges, open issues, and future trends that are valuable for further research on big data handling approaches in smart cities are suggested.

Journal ArticleDOI
TL;DR: The article establishes an angle prediction model based on a genetic algorithm that optimizes an extreme learning machine (ELM) to continuously and quantitatively identify and predict the wrist angle under different loads, realizing precise continuous quantitative prediction of the wrist angle.
Abstract: In sEMG (surface electromyography) pattern recognition, most research focuses on static pattern recognition of different limbs, ignoring the importance of changing load intensity and joint angle movement information. Traditional static qualitative pattern recognition cannot adjust to motion amplitude and load intensity, so studying the continuous prediction of the wrist angle under different load intensities is of great significance. Based on the correlation between the surface EMG signal and the joint angle signal, the article uses a neural network to continuously and quantitatively identify and predict the wrist angle under different loads. The sEMG signals in this article were collected with the approval of the Ethics Committee and the participants' informed consent. Since qualitative pattern recognition cannot adjust to the wrist movement range and different load training intensities, the article establishes an angle prediction model based on a genetic algorithm that optimizes an extreme learning machine (ELM). In addition, the article analyzes the influence of different loads on the continuous prediction accuracy of the wrist angle, realizing precise continuous quantitative prediction of the wrist angle. Experimental analysis shows that the wrist joint angle predicted by the genetic-algorithm-optimized ELM is close to the actual angle, with an average error of about 5.96 degrees.
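
A minimal numpy sketch of an extreme learning machine (ELM) regressor: random hidden-layer weights and a closed-form output layer via the pseudoinverse. The GA that tunes the hidden parameters in the paper is omitted, and the data shapes are illustrative stand-ins for sEMG features and wrist angles.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # Hidden weights are random and never trained (the GA would tune these).
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(X)) @ y   # closed-form output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

X = np.random.rand(500, 8)   # stand-in sEMG features -> wrist angle
y = np.random.rand(500)
model = ELM().fit(X, y)
print(model.predict(X[:3]))
```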

Journal ArticleDOI
TL;DR: This paper presents a blockchain-based architecture for current electronic health record (EHR) systems; built on top of existing databases maintained by health providers, it implements a blockchain solution to ensure the integrity of data records and improve system interoperability by tracking all events that happen to the data in the databases.
Abstract: This paper presents a blockchain‐based architecture for our current electronic health record (EHR) systems. Being built on top of existing databases maintained by health providers, the architecture implements a blockchain solution to ensure the integrity of data records and improve interoperability of the systems through tracking all events that happen to the data in the databases. In this proposed architecture, we also introduce a new incentive mechanism for the creation of new blocks on the blockchain. The architecture is independent of any specific blockchain platforms and open to further extensions; hence, it potentially fits in with other electronic record systems that require protection against data misuse.

Journal ArticleDOI
TL;DR: The KL estimator is developed for the inverse Gaussian regression model, and its performance is compared with some existing estimators through theoretical comparison, a simulation study, and a real-life application.
Abstract: Multicollinearity has an undesirable effect on the efficiency of the maximum likelihood estimator (MLE) in both Gaussian and non-Gaussian regression models. The ridge and Liu estimators have been developed as alternatives to the MLE, and both possess smaller mean squared error (MSE) than the MLE. Recently, Kibria and Lukman developed the KL estimator, which was found to outperform the ridge and Liu estimators in the linear regression model. With this expectation, we develop the KL estimator for the inverse Gaussian regression model. We compare the proposed estimator's performance with some existing estimators through theoretical comparison, a simulation study, and a real-life application. Under the smaller-MSE criterion, the proposed estimator, with one of its shrinkage parameters, performs best.
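
For reference, the estimators under comparison in their usual linear-model forms, as commonly stated in the ridge/Liu/KL literature; this is a sketch in standard notation rather than the paper's exact GLM derivation, where the cross-products would be replaced by the IRLS-weighted quantities of the inverse Gaussian model.

```latex
% Standard linear-model forms; for the inverse Gaussian regression model,
% X'X and X'y are replaced by the weighted quantities X'\hat{W}X and X'\hat{W}\hat{z}.
\begin{align*}
\hat{\beta}_{\mathrm{MLE}}      &= (X'X)^{-1}X'y, \\
\hat{\beta}_{\mathrm{Ridge}}(k) &= (X'X + kI)^{-1}X'y, \\
\hat{\beta}_{\mathrm{Liu}}(d)   &= (X'X + I)^{-1}(X'X + dI)\,\hat{\beta}_{\mathrm{MLE}}, \\
\hat{\beta}_{\mathrm{KL}}(k)    &= (X'X + kI)^{-1}(X'X - kI)\,\hat{\beta}_{\mathrm{MLE}}.
\end{align*}
```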

Journal ArticleDOI
TL;DR: The competitive results show that the MGAN model outperforms state-of-the-art methods, producing lower noise and better visual quality, and demonstrates its superiority in image structure restoration.
Abstract: Recently, most deep convolutional neural networks used for image super-resolution have achieved impressive performance on ideal datasets. However, these methods often fail in real-world super-resolution, producing blurred and structurally deformed results. In this paper, a multiscale generative adversarial network (MGAN) is proposed to alleviate these issues. The model's multiscale loss function effectively reduces the solution space and obtains the best features for reconstructing the image. A degradation framework based on kernel estimation and noise injection is applied to obtain LR images that share the same domain as real-world pictures. Moreover, a gradient branch is introduced to provide additional structural priors for SR processing. Simultaneously, to obtain better visual effects, LPIPS is used for the perceptual loss instead of Visual Geometry Group (VGG) features. The competitive results show that our MGAN model outperforms state-of-the-art methods, producing lower noise and better visual quality, and demonstrates its superiority in image structure restoration.
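
A minimal sketch of a multiscale reconstruction loss of the kind described: L1 differences between the SR output and the ground truth accumulated over several down-sampled scales. The scale weights are illustrative, and the adversarial, gradient-branch, and LPIPS terms used by MGAN are omitted.

```python
import tensorflow as tf

def multiscale_l1(sr, hr, scales=(1, 2, 4), weights=(1.0, 0.5, 0.25)):
    loss = 0.0
    for s, w in zip(scales, weights):
        size = (hr.shape[1] // s, hr.shape[2] // s)
        loss += w * tf.reduce_mean(tf.abs(tf.image.resize(sr, size)
                                          - tf.image.resize(hr, size)))
    return loss

sr = tf.random.uniform((1, 128, 128, 3))   # generator output (stand-in)
hr = tf.random.uniform((1, 128, 128, 3))   # ground-truth high-resolution image
print(float(multiscale_l1(sr, hr)))
```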