
Showing papers in "Applied Sciences in 2022"


Journal ArticleDOI
TL;DR: This work first provides a survey of the key technologies proposed in the literature for implementing IoT frameworks, and then a review of the main smart city approaches and frameworks, based on a classification into eight domains that extends the traditional six-domain classification typically adopted in most related works.
Abstract: In recent years, smart cities have developed significantly and greatly expanded their potential. In fact, novel advancements in the Internet of Things (IoT) have paved the way for new possibilities, representing a set of key enabling technologies for smart cities and allowing the production and automation of innovative services and advanced applications for the different city stakeholders. This paper presents a review of the research literature on IoT-enabled smart cities, with the aim of highlighting the main trends and open challenges of adopting IoT technologies for the development of sustainable and efficient smart cities. This work first provides a survey of the key technologies proposed in the literature for implementing IoT frameworks, and then a review of the main smart city approaches and frameworks, based on a classification into eight domains that extends the traditional six-domain classification typically adopted in most related works.

88 citations


Journal ArticleDOI
TL;DR: In this paper, the authors identify the underlying enablers through which these capabilities affect the transition to a CEBM that integrates sustainability, and highlight the interplay of CEBM, innovation success factors, and obstacles at a micro-level.
Abstract: The integration of sustainability in the circular economy is an emerging paradigm that can offer a long-term vision for achieving environmental and social sustainability targets in line with the United Nations' Sustainable Development Goals. Developing scalable and sustainable impacts in circular economy business models (CEBMs) has many challenges. While many advanced technology manufacturing firms start as small enterprises, remarkably little is known about how material reuse firms in sociotechnical systems transition towards circular business models. Research into CEBMs integrating sustainability and environmental conservation is still in its early stages. There has been increased interest in sustainability and circular economy research, but current research is fragmented. The innovation surrounding CEBMs eludes some firms, with relatively limited evidence of the transitional perspective necessary to integrate aspects of sustainability. This lack of evidence is especially applicable to the context of circular economy practices in small and medium enterprises in the United States regarding capabilities, operational obstacles, and elements of success in designing circular business models. Based on a qualitative, interview-based inductive study of a material reuse firm, our research develops a conceptual model of the critical success factors and obstacles that are part of implementing circular economy practices. Firms must first manage strategic enablers and monitor tactical enablers to achieve sustainability goals. In this study, we identify the underlying enablers through which these capabilities affect the transition to a CEBM that integrates sustainability. The framework emerging from our findings highlights the interplay of CEBM, innovation success factors, and obstacles at a micro-level. The investigation of a material reuse firm serves as the foundation for a framework for how managers can transform a company and revise its business model to transition towards a more innovative circular economy.

85 citations


Journal ArticleDOI
TL;DR: This paper considers how deep learning methods use meta-learning to learn and generalize from a small sample size in image classification, and designs a multi-scale relational network (MSRN) to address the associated problems.
Abstract: Learning from a single sample or a few samples is called few-shot learning. This learning method addresses deep learning's dependence on large samples. Deep learning achieves few-shot learning through meta-learning: "how to learn by using previous experience". Therefore, this paper considers how deep learning methods can use meta-learning to learn and generalize from a small sample size in image classification. The main contents are as follows. Practicing learning across a wide range of tasks enables deep learning methods to use previous empirical knowledge. However, this approach is limited by the quality of feature extraction and by the choice of the measurement method between the support set and the target set. Therefore, this paper designs a multi-scale relational network (MSRN) aimed at these problems. The experimental results show that the simple design of the MSRN can achieve higher performance. Furthermore, it improves accuracy on the datasets with fewer samples and alleviates overfitting. However, to ensure that a uniform measurement applies to all tasks, few-shot classification based on metric learning must ensure that the task sets are homologously distributed.
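The MSRN itself is not reproduced in this listing, but the metric-learning idea the abstract relies on can be sketched. The toy example below is a hypothetical minimal sketch, not the paper's method: a query is classified by comparing its embedding to per-class prototypes computed from a small support set (the embeddings, episode size, and Euclidean metric are all illustrative assumptions).

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    # Mean embedding per class, computed from the support set
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Assign each query to the nearest prototype (Euclidean metric)
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Toy 2-way, 2-shot episode with hand-made 2-D "embeddings"
support = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, 2)
queries = np.array([[0.05, 0.1], [0.95, 0.9]])
print(classify(queries, protos))  # -> [0 1]
```

A relation network replaces the fixed distance with a small learned comparison module, but the episode structure stays the same.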

84 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a system that incorporates virtual reality and metaverse methods into the classroom to compensate for the shortcomings of the existing remote models of practical education; they developed an aircraft maintenance simulation and conducted an experiment comparing their system to a video training method.
Abstract: Due to the COVID-19 pandemic, there has been a shift from in-person to remote education, with most students taking classes via video meetings. This change inhibits active class participation by students. In particular, video education has limitations in replacing practical classes, which require both theoretical and empirical knowledge. In this study, we propose a system that incorporates virtual reality and metaverse methods into the classroom to compensate for the shortcomings of the existing remote models of practical education. Based on the proposed system, we developed an aircraft maintenance simulation and conducted an experiment comparing our system to a video training method. To measure educational effectiveness, knowledge acquisition and retention tests were conducted, and presence was investigated via survey responses. The results of the experiment show that the group using the proposed system scored higher than the video training group on both knowledge tests. As the responses to the presence questionnaire confirmed a sense of spatial presence felt by the participants, the usability of the proposed system was judged to be appropriate.

71 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a systematic literature review (SLR) on recent developments in explainable artificial intelligence methods and evaluation metrics across different application domains and tasks, covering 137 articles published in recent years and identified through prominent bibliographic databases.
Abstract: Artificial intelligence (AI) and machine learning (ML) have recently been radically improved and are now being employed in almost every application domain to develop automated or semi-automated systems. To facilitate greater human acceptability of these systems, explainable artificial intelligence (XAI) has experienced significant growth over the last couple of years with the development of highly accurate models but with a paucity of explainability and interpretability. The literature shows evidence from numerous studies on the philosophy and methodologies of XAI. Nonetheless, there is an evident scarcity of secondary studies in connection with the application domains and tasks, let alone review studies following prescribed guidelines, that can enable researchers’ understanding of the current trends in XAI, which could lead to future research for domain- and application-specific method development. Therefore, this paper presents a systematic literature review (SLR) on the recent developments of XAI methods and evaluation metrics concerning different application domains and tasks. This study considers 137 articles published in recent years and identified through the prominent bibliographic databases. This systematic synthesis of research articles resulted in several analytical findings: XAI methods are mostly developed for safety-critical domains worldwide, deep learning and ensemble models are being exploited more than other types of AI/ML models, visual explanations are more acceptable to end-users and robust evaluation metrics are being developed to assess the quality of explanations. Research studies have been performed on the addition of explanations to widely used AI/ML models for expert users. However, more attention is required to generate explanations for general users from sensitive domains such as finance and the judicial system.

70 citations


Journal ArticleDOI
TL;DR: A novel particle swarm optimization (PSO) algorithm for dynamic adjustment of the FO PIλDµ controller parameters, which yields a small overshoot, a short adjustment time, precise control, and strong anti-disturbance performance.
Abstract: In this paper, a new fractional-order (FO) PIλDµ controller is designed with the desired gain and phase margin for the automatic rudder of underactuated surface vessels (USVs). The integral order λ and the differential order μ are introduced in the controller, and these two additional adjustable factors give the FO PIλDµ controller better accuracy and robustness. Simulations are carried out for comparison with a ship's digital PID autopilot. The results show that the FO PIλDµ controller has the advantages of a small overshoot, a short adjustment time, and precise control. Due to the uncertainty of the model parameters of USVs and the two extra parameters, it is difficult to compute the parameters of an FO PIλDµ controller. This paper therefore proposes an improved particle swarm optimization (IPSO) algorithm for dynamic adjustment of the FO PIλDµ controller parameters. By dynamically changing the learning factors, the particles search carefully in their own neighborhoods at the early stage of the algorithm, preventing them from missing the global optimum and converging on a local optimum, while at the later stage of evolution the particles converge on the global optimal solution quickly and accurately, speeding up convergence. Finally, comparative experiments of four different controllers under different sailing conditions are carried out, and the results show that the FO PIλDµ controller based on the IPSO algorithm has the advantages of a small overshoot, a short adjustment time, precise control, and strong anti-disturbance control.
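One common way to realize the dynamic learning-factor idea described above is to shift weight linearly from the cognitive term (own-neighborhood search early on) to the social term (fast convergence late). The sketch below is an illustrative assumption, not the paper's IPSO: the schedule constants, inertia decay, and sphere objective are all placeholders.

```python
import numpy as np

def ipso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        frac = t / iters
        c1 = 2.5 - 2.0 * frac   # cognitive factor: dominant early (exploration)
        c2 = 0.5 + 2.0 * frac   # social factor: dominant late (convergence)
        w = 0.9 - 0.5 * frac    # linearly decaying inertia weight
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Toy objective: sphere function with optimum at [1, 1, 1]
best, fbest = ipso(lambda p: float(np.sum((p - 1.0) ** 2)), dim=3)
print(fbest)  # close to 0
```

In the paper's setting the objective would be a simulated closed-loop cost (overshoot, settling time) of the FO PIλDµ controller rather than this toy function.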

67 citations


Journal ArticleDOI
TL;DR: This study combines a new weighted kernel with SKELM and proposes a semi-supervised extreme learning machine algorithm based on the weighted kernel, SELMWK, which has good classification performance and solves the semi-supervised gas classification task on same-domain data well on the dataset used.
Abstract: At present, machine olfaction has shown its important role and advantages in many scenarios. The development of machine olfaction is inseparable from the support of corresponding data and algorithms. However, the process of olfactory data collection is relatively cumbersome, and labeled data are even more difficult to collect. In many scenarios, training a well-performing classifier from a small amount of labeled data is not feasible with supervised learning algorithms alone; semi-supervised learning algorithms cope better with a small amount of labeled data and a large amount of unlabeled data. This study combines a new weighted kernel with SKELM and proposes a semi-supervised extreme learning machine algorithm based on the weighted kernel, SELMWK. The experimental results show that the proposed SELMWK algorithm has good classification performance and can solve the semi-supervised gas classification task on same-domain data well on the dataset used.
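The weighted, semi-supervised SELMWK variant is not spelled out here, but the kernel extreme learning machine it builds on has a compact closed form: with an RBF kernel K and one-hot targets T, the output weights are β = (I/C + K)⁻¹T. The sketch below shows only that plain supervised backbone under assumed hyperparameters (C, gamma), not the paper's algorithm.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    # RBF (Gaussian) kernel matrix between row-vector sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Plain kernel ELM: beta = (I/C + K)^-1 T (supervised, unweighted)."""
    def __init__(self, C=10.0, gamma=1.0):
        self.C, self.gamma = C, gamma
    def fit(self, X, y):
        self.X = X
        T = np.eye(int(y.max()) + 1)[y]          # one-hot targets
        K = rbf(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self
    def predict(self, Xq):
        return (rbf(Xq, self.X, self.gamma) @ self.beta).argmax(1)

# Two well-separated 2-D clusters as a stand-in for gas sensor features
X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [2.9, 3.1]])
y = np.array([0, 0, 1, 1])
model = KELM().fit(X, y)
print(model.predict(np.array([[0.1, 0.0], [3.0, 2.9]])))  # -> [0 1]
```

SELMWK's contribution, per the abstract, lies in the weighting of the kernel and in exploiting unlabeled samples on top of this ridge-style solution.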

63 citations


Journal ArticleDOI
TL;DR: This paper provides a comprehensive review of the use of UAVs for agricultural tasks and highlights the importance of simultaneous localization and mapping (SLAM) for a UAV solution in the greenhouse.
Abstract: The increasing world population makes it necessary to fight challenges such as climate change and to realize production efficiently and quickly. However, minimizing cost, maximizing income, protecting against environmental pollution, and saving water and energy are all factors that should be taken into account in this process. The use of information and communication technologies (ICTs) in agriculture to meet all of these criteria serves the purpose of precision agriculture. As unmanned aerial vehicles (UAVs) can easily obtain real-time data, they have great potential to address and optimize solutions to the problems faced by agriculture. Despite some limitations, such as battery life, payload, weather conditions, etc., UAVs will be used frequently in agriculture in the future because of the valuable data that they obtain and their efficient applications. According to the known literature, UAVs have been carrying out tasks such as spraying, monitoring, yield estimation, weed detection, etc. In recent years, articles related to agricultural UAVs have been published in journals with high impact factors. Most precision agriculture applications with UAVs occur in outdoor environments where GPS access is available, which provides more reliable control of the UAV in both manual and autonomous flights. On the other hand, there are almost no UAV-based applications in greenhouses, where all-season crop production is possible. This paper emphasizes this deficiency, provides a comprehensive review of the use of UAVs for agricultural tasks, and highlights the importance of simultaneous localization and mapping (SLAM) for a UAV solution in the greenhouse.

63 citations


Journal ArticleDOI
TL;DR: In this experiment, the Laplacian of Gaussian (LoG) second-order differential operator is introduced as a new similarity measure that adds edge and internal detail information, addressing the limited information and small convergence region of the normalized cross-correlation algorithm.
Abstract: Image-guided surgery (IGS) can reduce the risk of tissue damage and improve the accuracy and targeting of lesions by enlarging the surgical field of view. Three-dimensional (3D) medical images can provide spatial location information to determine the location of lesions and plan the operation process. For real-time tracking and adjustment of the spatial position of surgical instruments, two-dimensional (2D) images provide real-time intraoperative information. In this experiment, a gray-level-based 2D/3D medical image registration algorithm is studied, and registration based on normalized cross-correlation is realized. The Laplacian of Gaussian (LoG) second-order differential operator is introduced as a new similarity measure, adding edge and internal detail information to address the limited information and small convergence region of the normalized cross-correlation algorithm. A multiresolution strategy improves registration accuracy and efficiency, addressing the low efficiency of the normalized cross-correlation algorithm.
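The two ingredients named in the abstract, normalized cross-correlation and a second-order differential operator, can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: a plain 3×3 Laplacian stands in for the LoG operator, and the "images" are synthetic noise.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two same-sized images: 1 = identical
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def laplacian(img):
    # 3x3 Laplacian as a cheap stand-in for the LoG response,
    # emphasizing edges before the similarity is computed
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (img[i:i+3, j:j+3] * k).sum()
    return out

rng = np.random.default_rng(1)
fixed = rng.random((32, 32))
moving = fixed + 0.01 * rng.random((32, 32))   # nearly aligned image
other = rng.random((32, 32))                   # unrelated image
print(ncc(laplacian(fixed), laplacian(moving)))  # near 1
print(ncc(laplacian(fixed), laplacian(other)))   # near 0
```

In 2D/3D registration, the "moving" image would be a digitally reconstructed radiograph rendered from the 3D volume at a candidate pose, and the similarity would drive a pose optimizer.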

62 citations


DOI
TL;DR: A deep fusion matching network is designed in this paper, which mainly includes a coding layer, a matching layer, a dependency convolution layer, an information aggregation layer, and an inference prediction layer; the matching layer improves on a deep matching network, and the performance of the model is verified on several datasets.
Abstract: As a vital technology for natural language understanding, sentence representation reasoning technology mainly focuses on sentence representation methods and reasoning models. Although performance has improved, some problems remain, such as incomplete sentence semantic expression, a lack of depth in reasoning models, and a lack of interpretability of the reasoning process. Given the reasoning model's lack of reasoning depth and interpretability, a deep fusion matching network is designed in this paper, which mainly includes a coding layer, a matching layer, a dependency convolution layer, an information aggregation layer, and an inference prediction layer. Based on a deep matching network, the matching layer is improved: a heuristic matching algorithm replaces the bidirectional long short-term memory (BiLSTM) network to simplify interactive fusion, which improves the reasoning depth and reduces the complexity of the model. The dependency convolution layer uses a tree-structured convolution network to extract sentence structure information along the sentence's dependency tree, which improves the interpretability of the reasoning process. Finally, the performance of the model is verified on several datasets. The results show that the reasoning effect of the model is better than that of shallow reasoning models, and the accuracy on the SNLI test set reaches 89.0%. At the same time, the semantic correlation analysis results show that the dependency convolution layer is beneficial in improving the interpretability of the reasoning process.

60 citations


Journal ArticleDOI
TL;DR: The fungus Aspergillus terreus-mediated synthesis of bi-metallic Ag-Cu NPs was optimized using response surface methodology (RSM) to reach the maximum yield of NPs, and the DPPH and hydrogen peroxide scavenging activities of the NPs were high, reaching 90% scavenging.
Abstract: Bi-metallic nanoparticles (NPs) have appeared to be more efficient as antimicrobials than mono-metallic NPs. The fungus Aspergillus terreus-mediated synthesis of bi-metallic Ag-Cu NPs was optimized using response surface methodology (RSM) to reach the maximum yield of NPs. The optimal conditions were validated using ANOVA. The optimal conditions were 1.5 mM total metal (Ag + Cu) concentration, 1.25 mg fungal biomass, 350 W microwave power, and 15 min reaction time. The structure and shape of the synthesized NPs (mostly 20–30 nm) were characterized using several analytical tools. The biological activities of the synthesized NPs were assessed by studying their antioxidant, antibacterial, and cytotoxic activity in different NP concentrations. A dose-dependent response was observed in each test. Bi-metallic Ag-Cu NPs inhibited three clinically relevant human pathogens: Klebsiella pneumoniae, Enterobacter cloacae, and Pseudomonas aeruginosa. Escherichia coli, Enterococcus faecalis, and Staphylococcus aureus were inhibited less. The DPPH and hydrogen peroxide scavenging activities of the NPs were high, reaching 90% scavenging. Ag-Cu NPs could be studied as antimicrobials in different applications. The optimization procedure using statistical analyses was successful in improving the yield of nanoparticles.

Journal ArticleDOI
TL;DR: In this paper, the authors examine the relevant literature to identify and understand the mechanisms behind the discrepancy between traditional extractions and subcritical water extraction, and also discuss the overestimation of total phenolic content by the Folin–Ciocâlteu assay.
Abstract: Background: Polyphenols are a set of bioactive compounds commonly found in plants. These compounds are of great interest, as they have shown high antioxidant power and are correlated to many health benefits. Hence, traditional methods of extraction such as solvent extraction, Soxhlet extraction and novel extraction technologies such as ultrasound-assisted extraction and subcritical water extraction (SWE) have been investigated for the extraction of polyphenols. Scope and Approach: Generally, for traditional extractions, the total phenolic content (TPC) is highest at an extraction temperature of 60–80 °C. For this reason, polyphenols are regularly regarded as heat-labile compounds. However, in many studies that investigated the optimal temperature for subcritical water extraction (SWE), temperatures as high as 100–200 °C have been reported. These SWE extractions showed extremely high yields and antioxidant capacities at these temperatures. This paper aimed to examine the relevant literature to identify and understand the mechanisms behind this discrepancy. Results: Thermal degradation is the most common explanation for the degradation of polyphenols. This may be the case for specific or sub-groups of phenolic acids. The different extraction temperatures may have also impacted the types of polyphenols extracted. At high extraction temperatures, the formation of new compounds known as Maillard reaction products may also influence the extracted polyphenols. The selection of source material for extraction, i.e., the plant matrix, and the effect of extraction conditions, i.e., oxidation and light exposure, are also discussed. The overestimation of total phenolic content by the Folin–Ciocâlteu assay is also discussed. There is also a lack of consensus on TPC's correlation with antioxidant activity.

Journal ArticleDOI
TL;DR: A two-dimensional weighted spatial histogram of gradient directions is used to extract statistical features, overcome the algorithm’s limitations, and expand the applicable scenarios under the premise of ensuring accuracy.
Abstract: The key to image-guided surgery (IGS) technology is finding the transformation relationship between preoperative 3D images and intraoperative 2D images, namely, 2D/3D image registration. A feature-based 2D/3D medical image registration algorithm is investigated in this study. We use a two-dimensional weighted spatial histogram of gradient directions to extract statistical features, overcoming the algorithm's limitations and expanding the applicable scenarios while preserving accuracy. The proposed algorithm was tested on CT and synthetic X-ray images and compared with existing algorithms. The results show that the proposed algorithm can improve accuracy and efficiency and reduce sensitivity to the initial value.

DOI
TL;DR: A domain transformation semi-supervised weighted kernel extreme learning machine (DTSWKELM) algorithm, which transforms the data across domains and uses SWKELM classification to convert the semi-supervised classification problem of different-domain data into a semi-supervised classification problem of same-domain data.
Abstract: This research mainly studies semi-supervised learning algorithms for different-domain data in machine olfaction, also known as sensor drift compensation algorithms. For this kind of problem, it is usually difficult to obtain good recognition results by directly applying a semi-supervised learning algorithm. For this reason, we propose a domain transformation semi-supervised weighted kernel extreme learning machine (DTSWKELM) algorithm, which transforms the data across domains and uses SWKELM classification, converting the semi-supervised classification problem of different-domain data into a semi-supervised classification problem of same-domain data.

Journal ArticleDOI
TL;DR: This study proposes an improved version of the original YOLOv5 model, applies the collected data to each model to calculate the key indicators, and draws a conclusion on the best object detection model under various conditions.
Abstract: With the recent development of drone technology, object detection technology is emerging, and these technologies can also be applied to illegal immigration, industrial and natural disasters, and missing people and objects. In this paper, we explore ways to increase object detection performance in these situations. Photography was conducted in environments where objects are difficult to detect. The experimental data were based on photographs taken under various environmental conditions, such as changes in the altitude of the drone, the absence of light, and other shooting conditions. All the data used in the experiment were taken with an F11 4K PRO drone and the VisDrone dataset. In this study, we propose an improved version of the original YOLOv5 model. We applied the obtained data to each model, the original YOLOv5 model and the improved YOLOv5_Ours model, to calculate the key indicators. The main indicators are precision, recall, F1 score, and mAP (0.5); the YOLOv5_Ours values of mAP (0.5) and loss were improved compared with the original YOLOv5 model. Finally, a conclusion was drawn based on the data comparing the original YOLOv5 model and the improved YOLOv5_Ours model. As a result of the analysis, we were able to arrive at a conclusion on the best model for object detection under various conditions.
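The key indicators named in the abstract (precision, recall, F1) follow directly from detection counts once predictions are matched to ground truth. The helper below shows only that arithmetic; the counts are illustrative, not from the paper.

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from true positives, false positives
    and false negatives (matched at some IoU threshold upstream)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 90 correct detections, 10 spurious boxes, 30 missed objects
p, r, f1 = detection_metrics(90, 10, 30)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.9 0.75 0.82
```

mAP (0.5) extends this by averaging precision over recall levels per class at an IoU threshold of 0.5, then averaging across classes.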

Peer ReviewDOI
TL;DR: What Deep Residual Networks are, how they achieve their excellent results, and why their successful implementation in practice represents a significant advance over existing techniques are explained.
Abstract: Deep Residual Networks have recently been shown to significantly improve the performance of neural networks trained on ImageNet, with results beating all previous methods on this dataset by large margins in the image classification task. However, the meaning of these impressive numbers and their implications for future research are not fully understood yet. In this survey, we will try to explain what Deep Residual Networks are, how they achieve their excellent results, and why their successful implementation in practice represents a significant advance over existing techniques. We also discuss some open questions related to residual learning as well as possible applications of Deep Residual Networks beyond ImageNet. Finally, we discuss some issues that still need to be resolved before deep residual learning can be applied on more complex problems.

Journal ArticleDOI
TL;DR: The study developed an effective approach to detect brain tumors using MRI to aid in making quick, efficient, and precise decisions and implemented a convolutional neural network model framework to train the model for this challenge.
Abstract: A brain tumor is a distorted tissue wherein cells replicate rapidly and indefinitely, with no control over tumor growth. Deep learning has been argued to have the potential to overcome the challenges associated with detecting and intervening in brain tumors. It is well established that the segmentation method can be used to remove abnormal tumor regions from the brain, as this is one of the advanced technological classification and detection tools. In the case of brain tumors, early disease detection can be achieved effectively using reliable advanced A.I. and neural network classification algorithms. This study aimed to critically analyze the proposed literature solutions, use the Visual Geometry Group (VGG 16) network for discovering brain tumors, implement a convolutional neural network (CNN) model framework, and set parameters to train the model for this challenge. VGG is used as one of the highest-performing CNN models because of its simplicity. Furthermore, the study developed an effective approach to detect brain tumors using MRI to aid in making quick, efficient, and precise decisions. Faster CNN used the VGG 16 architecture as a primary network to generate convolutional feature maps, which were then classified to yield tumor region suggestions. Prediction accuracy was used to assess performance. Our suggested methodology was evaluated on a brain tumor diagnosis dataset comprising 253 MRI brain images, 155 of which show tumors. Our approach could identify brain tumors in MR images. On the testing data, the algorithm outperformed the current conventional approaches for detecting brain tumors (Precision = 96%, 98.15%, 98.41% and F1-score = 91.78%, 92.6% and 91.29%, respectively) and achieved excellent accuracies of 96% (CNN), 98.5% (VGG 16), and 98.14% (Ensemble Model). The study also presents future recommendations regarding the proposed research work.

Journal ArticleDOI
TL;DR: A novel feature selection and extraction approach for anomaly-based IDS is proposed and compared with other state-of-the-art studies, proving superior and competent with a very high classification accuracy of 99.98%.
Abstract: The Internet of Things (IoT) ecosystem has experienced significant growth in data traffic and consequently high dimensionality. Intrusion Detection Systems (IDSs) are essential self-protective tools against various cyber-attacks. However, IoT IDS systems face significant challenges due to functional and physical diversity. These IoT characteristics make exploiting all features and attributes for IDS self-protection difficult and unrealistic. This paper proposes and implements a novel feature selection and extraction approach for anomaly-based IDS. The approach begins with two entropy-based methods (i.e., information gain (IG) and gain ratio (GR)) to select and extract relevant features in various ratios. Then, mathematical set theory (union and intersection) is used to extract the best features. The model framework is trained and tested on the IoT intrusion dataset 2020 (IoTID20) and the NSL-KDD dataset using four machine learning algorithms: Bagging, Multilayer Perceptron, J48, and IBk. Our approach resulted in 11 and 28 relevant features (out of 86) using the intersection and union, respectively, on IoTID20, and resulted in 15 and 25 relevant features (out of 41) using the intersection and union, respectively, on NSL-KDD. We have further compared our approach with other state-of-the-art studies. The comparison reveals that our model is superior and competent, scoring a very high 99.98% classification accuracy.
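The pipeline sketched in the abstract, rank features by information gain and gain ratio, then combine the two selections via set intersection or union, can be illustrated on a toy dataset. The thresholds and data below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def info_gain(x, y):
    # IG(y; x) = H(y) - sum_v P(x=v) * H(y | x=v), for a discrete feature x
    vals, counts = np.unique(x, return_counts=True)
    cond = sum(c / len(x) * entropy(y[x == v]) for v, c in zip(vals, counts))
    return entropy(y) - cond

def gain_ratio(x, y):
    # GR normalizes IG by the feature's own entropy, penalizing
    # many-valued features that look informative by chance
    hx = entropy(x)
    return info_gain(x, y) / hx if hx > 0 else 0.0

# Toy data: feature 0 predicts the label perfectly, feature 1 is pure noise
X = np.array([[0, 1], [0, 0], [1, 1], [1, 0]])
y = np.array([0, 0, 1, 1])
sel_ig = {j for j in range(X.shape[1]) if info_gain(X[:, j], y) > 0.5}
sel_gr = {j for j in range(X.shape[1]) if gain_ratio(X[:, j], y) > 0.5}
print(sel_ig & sel_gr, sel_ig | sel_gr)  # intersection and union of selections
```

On real IDS data, continuous features would be discretized first, and the intersection yields the smaller, stricter feature set (11 on IoTID20) while the union yields the larger one (28).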

Journal ArticleDOI
TL;DR: A Two-Stage Industrial Defect Detection Framework based on Improved-YOLOv5 and Optimized-Inception-ResnetV2 is proposed, which completes positioning and classification tasks through two specific models; the superiority and adaptability of the two-stage framework are verified.
Abstract: Aiming to address the currently low accuracy of domestic industrial defect detection, this paper proposes a Two-Stage Industrial Defect Detection Framework based on Improved-YOLOv5 and Optimized-Inception-ResnetV2, which completes positioning and classification tasks through two specific models. In order to make the first-stage recognition more effective at locating inconspicuous small defects with high similarity on the steel surface, we improve YOLOv5 in the backbone network, the feature scales of the feature fusion layer, and the multiscale detection layer. In order to enable the second-stage recognition to better extract defect features and achieve accurate classification, we embed the convolutional block attention module (CBAM) into the Inception-ResnetV2 model, then optimize the network architecture and loss function of this model. Based on the Pascal Visual Object Classes 2007 (VOC2007) dataset, the public dataset NEU-DET, and the optimized dataset Enriched-NEU-DET, we conducted multiple sets of comparative experiments on Improved-YOLOv5 and Inception-ResnetV2. The testing results show that the improvement is obvious. In order to verify the superiority and adaptability of the two-stage framework, we first test on the Enriched-NEU-DET dataset, and then use an AUBO-i5 robot, an Intel RealSense D435 camera, and other industrial steel equipment to build actual industrial scenes. In these experiments, the two-stage framework achieves its best performance of 83.3% mean average precision (mAP) on the Enriched-NEU-DET dataset, and 91.0% in our built industrial defect environment.

Journal ArticleDOI
TL;DR: In this article , a better and in-depth understanding of the function and interactions of plants and associated microorganisms directly in the matrix of interest, especially in the presence of persistent contamination, could provide new opportunities for phytoremediation.
Abstract: Phytoremediation is a cost-effective and sustainable technology used to clean up pollutants from soils and waters through the use of plant species. Indeed, plants are naturally capable of absorbing metals and degrading organic molecules. However, in several cases, the presence of contaminants causes plant suffering and limited growth. In such situations, thanks to the production of specific root exudates, plants can engage the most suitable bacteria able to support their growth according to the particular environmental stress. These plant growth-promoting rhizobacteria (PGPR) may facilitate plant growth and development with several beneficial effects, even more evident when plants are grown in critical environmental conditions, such as the presence of toxic contaminants. For instance, PGPR may alleviate metal phytotoxicity by altering metal bioavailability in soil and increasing metal translocation within the plant. Since many of the PGPR are also hydrocarbon oxidizers, they are also able to support and enhance plant biodegradation activity. In addition, PGPR in agriculture can provide excellent support in countering the devastating effects of abiotic stresses, such as excessive salinity and drought, replacing expensive inorganic fertilizers that harm the environment. A better and in-depth understanding of the function and interactions of plants and associated microorganisms directly in the matrix of interest, especially in the presence of persistent contamination, could provide new opportunities for phytoremediation.

Journal ArticleDOI
TL;DR: A hybrid deep residual model for transitional activity recognition utilizing signal data from wearable sensors enhances the ResNet model with hybrid Squeeze-and-Excitation residual blocks combining a Bidirectional Gated Recurrent Unit (BiGRU) to extract deep spatio-temporal features hierarchically, and to distinguish transitional activities efficiently.
Abstract: Numerous learning-based techniques for effective human behavior identification have emerged in recent years. These techniques focus only on fundamental human activities, excluding transitional activities due to their infrequent occurrence and short period. Nevertheless, postural transitions play a critical role in implementing a system for recognizing human activity and cannot be ignored. This study aims to present a hybrid deep residual model for transitional activity recognition utilizing signal data from wearable sensors. The developed model enhances the ResNet model with hybrid Squeeze-and-Excitation (SE) residual blocks combined with a Bidirectional Gated Recurrent Unit (BiGRU) to extract deep spatio-temporal features hierarchically, and to distinguish transitional activities efficiently. To evaluate recognition performance, the experiments are conducted on two public benchmark datasets (HAPT and MobiAct v2.0). The proposed hybrid approach achieved classification accuracies of 98.03% and 98.92% for the HAPT and MobiAct v2.0 datasets, respectively. Moreover, the outcomes show that the proposed method is superior to the state-of-the-art methods in terms of overall accuracy. To analyze the improvement, we have investigated the effects of combining SE modules and BiGRUs into the deep residual network. The findings indicate that the SE module is efficient in improving transitional activity recognition.
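The channel-recalibration step performed by a Squeeze-and-Excitation block can be sketched in plain NumPy. This is a simplified single-block illustration (an assumed reduction ratio and random weights on a channels-by-time feature map), not the authors' trained model:

```python
import numpy as np

def squeeze_excite(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-Excitation on a (channels, time) feature map."""
    z = x.mean(axis=1)                       # squeeze: global average pool per channel
    s = np.maximum(w1 @ z, 0.0)              # excitation: bottleneck FC + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))   # sigmoid gate in (0, 1) per channel
    return x * gate[:, None]                 # rescale each channel by its gate

rng = np.random.default_rng(0)
C, T, r = 4, 10, 2                           # channels, time steps, reduction ratio
x = rng.standard_normal((C, T))
w1 = rng.standard_normal((C // r, C))        # squeeze weights (illustrative)
w2 = rng.standard_normal((C, C // r))        # excitation weights (illustrative)
y = squeeze_excite(x, w1, w2)
```

Because the gate is a sigmoid, each channel is attenuated rather than amplified, which is how SE lets the network emphasize informative channels relative to the rest.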

Journal ArticleDOI
TL;DR: In this article , a dual-energy gamma source and two sodium iodide detectors were used with the help of artificial intelligence to determine the flow pattern and volume percentage in a two-phase flow by considering the thickness of the scale in the tested pipeline.
Abstract: One of the factors that significantly affects the efficiency of oil and gas industry equipment is the scale formed in the pipelines. In this innovative, non-invasive system, a dual-energy gamma source and two sodium iodide detectors were used, with the help of artificial intelligence, to determine the flow pattern and volume percentage in a two-phase flow while accounting for the thickness of the scale in the tested pipeline. In the proposed structure, the dual-energy gamma source, consisting of barium-133 and cesium-137 isotopes, emitted photons; one detector recorded the transmitted photons and a second detector recorded the scattered photons. After simulating the mentioned structure using Monte Carlo N-Particle (MCNP) code, temporal features, namely the 4th-order moment, kurtosis, and skewness, were extracted from the recorded data of both the transmission detector (TD) and the scattering detector (SD). These features were used as inputs of the multilayer perceptron (MLP) neural network. Two neural networks were trained that were able to determine volume percentages with high accuracy and to classify all flow regimes correctly.
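The three statistical features named above (4th-order moment, kurtosis, skewness) can be computed as below; the simulated pulse-count signal is an invented stand-in for the detectors' recorded data, not MCNP output:

```python
import numpy as np

def time_features(signal: np.ndarray) -> dict:
    """Statistical time-domain features usable as neural-network inputs."""
    mu = signal.mean()
    sigma = signal.std()
    m4 = np.mean((signal - mu) ** 4)               # 4th central moment
    skew = np.mean((signal - mu) ** 3) / sigma**3  # skewness (asymmetry)
    kurt = m4 / sigma**4                           # non-excess kurtosis (3 for Gaussian)
    return {"moment4": m4, "skewness": skew, "kurtosis": kurt}

rng = np.random.default_rng(1)
pulse_counts = rng.normal(loc=100.0, scale=5.0, size=10_000)  # stand-in detector signal
feats = time_features(pulse_counts)
```

Such low-dimensional summaries of the count distribution are what allow a small MLP to discriminate flow regimes without ingesting the raw detector spectra.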

Journal ArticleDOI
TL;DR: A novel 14-layered deep convolutional neural network (14-DCNN) to detect plant leaf diseases using leaf images was proposed and the overall performance of the proposed DCNN model was better than the existing transfer learning approaches.
Abstract: In this research, we proposed a novel 14-layered deep convolutional neural network (14-DCNN) to detect plant leaf diseases using leaf images. A new dataset was created using various open datasets. Data augmentation techniques were used to balance the individual class sizes of the dataset. Three image augmentation techniques were used: basic image manipulation (BIM), deep convolutional generative adversarial network (DCGAN) and neural style transfer (NST). The dataset consists of 147,500 images of 58 different healthy and diseased plant leaf classes and one no-leaf class. The proposed DCNN model was trained in the multi-graphics processing units (MGPUs) environment for 1000 epochs. Random search with a coarse-to-fine search strategy was used to select the most suitable hyperparameter values to improve the training performance of the proposed DCNN model. On the 8850 test images, the proposed DCNN model achieved 99.9655% overall classification accuracy, 99.7999% weighted average precision, 99.7966% weighted average recall, and 99.7968% weighted average F1 score. Additionally, the overall performance of the proposed DCNN model was better than the existing transfer learning approaches.
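Of the three augmentation techniques, basic image manipulation (BIM) is the simplest to illustrate. The sketch below applies flips and rotations to a toy array; the exact set of BIM operations used in the paper is not specified here, so this selection is an assumption:

```python
import numpy as np

def augment(img: np.ndarray) -> list:
    """Basic image manipulation (BIM): return the original plus simple variants."""
    return [
        img,
        np.fliplr(img),      # horizontal flip
        np.flipud(img),      # vertical flip
        np.rot90(img, k=1),  # 90-degree rotation
        np.rot90(img, k=2),  # 180-degree rotation
    ]

leaf = np.arange(16).reshape(4, 4)  # stand-in for a leaf image
variants = augment(leaf)
```

Each source image yields five training samples here, which is the mechanism by which underrepresented disease classes can be rebalanced before training.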

Journal ArticleDOI
TL;DR: The proposed transferable texture CNN-based method for classifying screening mammograms has outperformed prior methods and demonstrates that automatic deep learning algorithms can be easily trained to achieve high accuracy in diverse mammography images, and can offer great potential to improve clinical tools to minimize false positive and false negative screening mammography results.
Abstract: Breast cancer is a major research area in the medical image analysis field; it is a dangerous disease and a major cause of death among women. Early and accurate diagnosis of breast cancer based on digital mammograms can enhance disease detection accuracy. Medical imagery must be detected, segmented, and classified for computer-aided diagnosis (CAD) systems to help radiologists accurately diagnose breast lesions. Therefore, an accurate breast cancer detection and classification approach is proposed for the screening of mammograms. In this paper, we present a deep learning system that can identify breast cancer in mammogram screening images using an "end-to-end" training strategy that efficiently uses mammography images for computer-aided breast cancer recognition in the early stages. First, the proposed approach implements the modified contrast enhancement method in order to refine the detail of edges from the source mammogram images. Next, the transferable texture convolutional neural network (TTCNN) is presented to enhance the performance of classification, and the energy layer is integrated in this work to extract the texture features from the convolutional layer. The proposed approach consists of only three layers of convolution and one energy layer, rather than the pooling layer. In the third stage, we analyzed the performance of TTCNN based on deep features of convolutional neural network models (InceptionResNet-V2, Inception-V3, VGG-16, VGG-19, GoogLeNet, ResNet-18, ResNet-50, and ResNet-101). The deep features are extracted by determining the best layers which enhance the classification accuracy. In the fourth stage, by using the convolutional sparse image decomposition approach, all the extracted feature vectors are fused and, finally, the best features are selected by using the entropy controlled firefly method. The proposed approach was evaluated on the DDSM, INbreast, and MIAS datasets and attained an average accuracy of 97.49%.
Our proposed transferable texture CNN-based method for classifying screening mammograms has outperformed prior methods. These findings demonstrate that automatic deep learning algorithms can be easily trained to achieve high accuracy in diverse mammography images, and can offer great potential to improve clinical tools to minimize false positive and false negative screening mammography results.
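The fusion-then-selection step of the fourth stage can be illustrated in simplified form: concatenating per-model feature vectors and keeping the features with the highest entropy scores. Note this uses a plain top-k entropy ranking as a stand-in, not the paper's entropy-controlled firefly optimizer, and the feature matrices are random placeholders:

```python
import numpy as np

def fuse_and_select(feature_vectors: list, k: int) -> np.ndarray:
    """Concatenate per-model features, keep the k with most histogram entropy."""
    fused = np.concatenate(feature_vectors, axis=1)  # (samples, total_features)
    scores = []
    for j in range(fused.shape[1]):
        hist, _ = np.histogram(fused[:, j], bins=8)
        p = hist[hist > 0] / hist.sum()
        scores.append(-(p * np.log2(p)).sum())       # Shannon entropy per feature
    top = np.argsort(scores)[::-1][:k]               # indices of most informative features
    return fused[:, np.sort(top)]

rng = np.random.default_rng(2)
deep_a = rng.standard_normal((50, 6))  # stand-in for one backbone's deep features
deep_b = rng.standard_normal((50, 4))  # stand-in for another backbone's features
selected = fuse_and_select([deep_a, deep_b], k=5)
```

The point of the selection stage is dimensionality reduction: the classifier downstream sees only the retained columns, not the full fused vector.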

Journal ArticleDOI
TL;DR: A significant decrease is reported for both postbiotic and chlorhexidine for all peri-implant mucositis indices studied, and greater improvements for BS, GBI and MMC inflammatory indices of the postbiotics gel compared to chlor hexidine suggest the importance of further studies to investigate the relevance of the product alone.
Abstract: Peri-implant mucositis is a pathological condition characterized by an inflammatory process in the peri-implant soft tissues. Progression to peri-implantitis takes place in the case of peri-implant bone resorption. Recently, an aid for non-surgical treatment by mechanical debridement (SRP) has been identified in probiotics. As there are no recent studies regarding their use for peri-implant mucositis, the aim of this study was to test a new postbiotic gel for this clinical condition. A split-mouth randomized clinical trial was performed. Twenty patients undergoing SRP were randomly assigned to two treatments based on the following oral gels: chlorhexidine-based Curasept Periodontal Gel (Group 1) and postbiotic-based Biorepair Parodontgel Intensive (Group 2). At baseline (T0) and after three (T1) and six (T2) months, the following peri-implant mucositis indices were recorded: Probing Pocket Depth (PPD), Plaque Index (PI), Gingival Bleeding Index (GBI), Bleeding Score (BS), Marginal Mucosal Condition (MMC). A significant decrease is reported for both the postbiotic and chlorhexidine gels for all peri-implant mucositis indices studied. In contrast, no significant variation was present in intergroup comparisons. Greater improvements in the BS, GBI and MMC inflammatory indices for the postbiotic gel compared to chlorhexidine suggest the importance of further studies to investigate the relevance of the product alone.

Journal ArticleDOI
TL;DR: An overview of the PSO algorithm is presented, the basic concepts and parameters of PSO are explained, and various advances in relation to PSO, including its modifications, extensions, hybridization, theoretical analysis, are included.
Abstract: Particle swarm optimization (PSO) is one of the most famous swarm-based optimization techniques inspired by nature. Due to its flexibility and easy implementation, this nature-inspired technique has seen an enormous increase in popularity, and PSO has gained prompt attention from researchers in every field. Since its origin in 1995, researchers have improved the original PSO in various ways. They have derived new versions of it, published theoretical studies on various parameters of PSO, proposed many variants of the algorithm, and made numerous other advances. In the present paper, an overview of the PSO algorithm is presented. On the one hand, the basic concepts and parameters of PSO are explained; on the other hand, various advances in relation to PSO, including its modifications, extensions, hybridizations, and theoretical analyses, are covered.
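The canonical global-best PSO that the surveyed variants build on can be sketched as follows. This is the standard inertia-weight formulation; the parameter values, bounds, and the sphere test function are illustrative choices, not taken from the survey:

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimize f over [lo, hi]^dim with global-best particle swarm optimization."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)  # classic convex benchmark
best, best_val = pso(sphere, dim=3)
```

The inertia weight w and acceleration coefficients c1, c2 are exactly the parameters whose tuning and adaptation drive many of the variants surveyed in the paper.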

Journal ArticleDOI
TL;DR: In this paper , the authors express the significance of smart industrial robot control in manufacturing towards future factories by listing the needs, requirements, and introducing the envisioned concept of smart robots, and explore current trends that are based on different learning strategies and methods.
Abstract: Industrial robots and associated control methods are continuously developing. With the recent progress in the field of artificial intelligence, new perspectives in industrial robot control strategies have emerged, and prospects towards cognitive robots have arisen. AI-based robotic systems are strongly becoming one of the main areas of focus, as flexibility and deep understanding of complex manufacturing processes are becoming the key advantage to raise competitiveness. This review first expresses the significance of smart industrial robot control in manufacturing towards future factories by listing the needs and requirements and introducing the envisioned concept of smart industrial robots. Secondly, current trends that are based on different learning strategies and methods are explored. Current computer vision, deep reinforcement learning, and imitation learning-based robot control approaches and their possible applications in manufacturing are investigated. Gaps, challenges, limitations, and open issues are identified along the way.

Journal ArticleDOI
TL;DR: A real-time monitoring hybrid deep learning-based model to detect and predict Type 2 diabetes mellitus using the publicly available PIMA Indian diabetes database and it is demonstrated that CNN-Bi-LSTM surpasses the other deep learning methods in terms of accuracy, sensitivity, and specificity.
Abstract: Diabetes is a long-term illness caused by the inefficient use of insulin generated by the pancreas. If diabetes is detected at an early stage, patients can live healthier lives. Unlike previously used analytical approaches, deep learning does not need feature extraction. In order to support this viewpoint, we developed a real-time monitoring hybrid deep learning-based model to detect and predict Type 2 diabetes mellitus using the publicly available PIMA Indian diabetes database. This study contributes in four ways. First, we perform a comparative study of different deep learning models. Based on the experimental findings, we then propose merging two models into a CNN-Bi-LSTM hybrid to detect (and predict) Type 2 diabetes. These findings demonstrate that CNN-Bi-LSTM surpasses the other deep learning methods in terms of accuracy (98%), sensitivity (97%), and specificity (98%), and it is 1.1% more accurate than existing state-of-the-art algorithms. Hence, our proposed model helps clinicians obtain complete information about their patients using real-time monitoring and can check real-time statistics about their vitals.
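The three reported metrics all derive from the binary confusion matrix. A minimal computation, with made-up labels rather than PIMA predictions, is:

```python
def sens_spec_acc(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives), specificity (recall on negatives)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # diabetics caught
        "specificity": tn / (tn + fp) if tn + fp else 0.0,  # healthy correctly cleared
    }

# Toy labels: 1 = diabetic, 0 = healthy.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
m = sens_spec_acc(y_true, y_pred)
```

Reporting sensitivity and specificity separately matters for screening tasks like this one, since a model can reach high accuracy while still missing many positive cases on an imbalanced dataset.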

Journal ArticleDOI
TL;DR: In this article , the results of a multi-criteria decision-making study when using powder-mixed electrical discharge machining (PMEDM) of cylindrically shaped parts in 90CrSi tool steel were presented.
Abstract: Multi-criteria decision making (MCDM) is used to determine the best alternative among various options. It is of great importance, as it strongly affects the efficiency of activities in everyday life, management, business, and engineering. This paper presents the results of a multi-criteria decision-making study when using powder-mixed electrical discharge machining (PMEDM) of cylindrically shaped parts in 90CrSi tool steel. In this study, powder concentration, pulse duration, pulse off time, pulse current, and host voltage were selected as the input process parameters. Moreover, the Taguchi method was used for the experimental design. To simultaneously ensure minimum surface roughness (RS) and maximum material-removal speed (MRS) and to implement multi-criteria decision making, the MARCOS (Measurement of Alternatives and Ranking according to Compromise Solution), TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution), and MAIRCA (Multi-Attributive Ideal-Real Comparative Analysis) methods were applied. Additionally, the weights for the criteria were calculated using the MEREC (Method based on the Removal Effects of Criteria) method. From the results, the best alternative for the multi-criteria problem with PMEDM cylindrically shaped parts was proposed.
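Of the applied MCDM methods, TOPSIS is the most widely known; a compact sketch follows. The roughness-versus-removal-speed trade-off uses invented numbers and equal weights purely for illustration, not the paper's experimental data or MEREC weights:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    matrix: (alternatives, criteria); benefit[j] is True if larger is better."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.sqrt((m ** 2).sum(axis=0))   # vector normalization per criterion
    v = norm * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # best value per criterion
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))   # worst value per criterion
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)             # closeness coefficient in [0, 1]

# Toy PMEDM-style alternatives: column 0 = surface roughness (cost, lower better),
# column 1 = material-removal speed (benefit, higher better).
alts = [[0.8, 12.0], [1.2, 20.0], [0.6, 8.0]]
cc = topsis(alts, weights=[0.5, 0.5], benefit=[False, True])
best = int(np.argmax(cc))
```

The closeness coefficient turns the two conflicting criteria into a single ranking score, which is how a "best alternative" can be proposed despite the roughness/speed conflict.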

Journal ArticleDOI
TL;DR: This survey presents 5G mobility management in ultra-dense small cell networks using reinforcement learning techniques and discusses how machine learning algorithms can help in different HO scenarios.
Abstract: The fifth generation (5G) wireless technology emerged through tremendous efforts to specify, design, deploy, and standardize the upcoming wireless network generation. Artificial intelligence (AI) and machine learning (ML) techniques are well suited to supporting the latest 5G technologies, which are expected to deliver high data rates to upcoming use cases and services such as massive machine type communications (mMTC), enhanced mobile broadband (eMBB), and ultra-reliable low latency communications (uRLLC). These services will deliver gigabits per second of data within a latency of a few milliseconds in the Internet of Things paradigm. This survey presents 5G mobility management in ultra-dense small cell networks using reinforcement learning techniques. First, we discuss existing surveys, and then we focus on handover (HO) management in the ultra-dense small cell (UDSC) scenario. Following this, the study also discusses how machine learning algorithms can help in different HO scenarios. Finally, future directions and challenges for 5G UDSC networks are concisely addressed.
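How reinforcement learning can drive a handover decision can be sketched with tabular Q-learning on a toy two-state problem. The states, actions, rewards (including a ping-pong handover penalty), and dynamics below are all invented for illustration and are far simpler than a real UDSC environment:

```python
import random

def q_learning_handover(steps=2000, alpha=0.3, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: states are coarse serving-cell signal levels,
    actions are 'stay' on the serving cell or 'handover' to a better one."""
    rng = random.Random(seed)
    states, actions = ["good", "weak"], ["stay", "handover"]
    q = {(s, a): 0.0 for s in states for a in actions}
    reward = {("good", "stay"): 1.0, ("good", "handover"): -0.5,  # ping-pong penalty
              ("weak", "stay"): -1.0, ("weak", "handover"): 0.5}
    s = "good"
    for _ in range(steps):
        # epsilon-greedy action selection
        a = rng.choice(actions) if rng.random() < eps else \
            max(actions, key=lambda x: q[(s, x)])
        r = reward[(s, a)]
        # toy dynamics: handover restores good signal; staying drifts randomly
        s2 = "good" if a == "handover" else rng.choice(states)
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in actions)
                              - q[(s, a)])
        s = s2
    return q

q = q_learning_handover()
```

After training, the learned policy hands over when the signal is weak and stays put when it is good, which is precisely the stay/handover trade-off (including ping-pong avoidance) that RL-based HO management in UDSC networks aims to learn at scale.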