
Showing papers in "Mathematical Problems in Engineering in 2021"


Journal ArticleDOI
TL;DR: The results demonstrate an effective way to select appropriate dataset ratios and the best ML model for accurately predicting soil shear strength, which would be helpful in the design and engineering phases of construction projects.
Abstract: The main objective of this study is to evaluate and compare the performance of different machine learning (ML) algorithms, namely, Artificial Neural Network (ANN), Extreme Learning Machine (ELM), and Boosting Trees (Boosted) algorithms, considering the influence of various training-to-testing ratios in predicting soil shear strength, one of the most critical geotechnical engineering properties in civil engineering design and construction. For this aim, a database of 538 soil samples collected from the Long Phu 1 power plant project, Vietnam, was utilized to generate the datasets for the modeling process. Different ratios (i.e., 10/90, 20/80, 30/70, 40/60, 50/50, 60/40, 70/30, 80/20, and 90/10) were used to divide the datasets into training and testing sets for the performance assessment of the models. Popular statistical indicators, such as Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Correlation Coefficient (R), were employed to evaluate the predictive capability of the models under the different training and testing ratios. In addition, Monte Carlo simulation was carried out to evaluate the performance of the proposed models, taking into account the random sampling effect. The results showed that although all three ML models performed well, the ANN was the most accurate and statistically stable model after 1000 Monte Carlo simulations (Mean R = 0.9348), compared with the Boosted (Mean R = 0.9192) and ELM (Mean R = 0.8703) models. Investigation of model performance showed that the predictive capability of the ML models was greatly affected by the training/testing ratio, with the 70/30 ratio yielding the best performance. Concisely, the results demonstrate an effective way to select appropriate dataset ratios and the best ML model to predict soil shear strength accurately, which would be helpful in the design and engineering phases of construction projects.
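The split-ratio comparison described in the abstract above can be sketched as follows. This is an illustrative Monte Carlo protocol on synthetic data, with a simple least-squares line standing in for the ANN/ELM/Boosted models; the dataset, features, and model are all assumptions, not the authors' actual data.

```python
import math
import random

random.seed(42)

# Synthetic stand-in for the 538-sample soil database: one predictor x and a
# target y = 2x + noise. Purely illustrative -- not the authors' data or models.
data = [(x, 2 * x + random.gauss(0, 0.2))
        for x in (random.random() for _ in range(538))]

def pearson_r(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return cov / math.sqrt(sum((a - mu) ** 2 for a in u)
                           * sum((b - mv) ** 2 for b in v))

def run_once(train_frac):
    """One random split at the given training fraction: fit a least-squares
    line on the training part (standing in for ANN/ELM/Boosted) and return
    the correlation coefficient R on the testing part."""
    sample = data[:]
    random.shuffle(sample)
    k = max(2, int(train_frac * len(sample)))
    train, test = sample[:k], sample[k:]
    mx = sum(x for x, _ in train) / len(train)
    my = sum(y for _, y in train) / len(train)
    slope = (sum((x - mx) * (y - my) for x, y in train)
             / sum((x - mx) ** 2 for x, _ in train))
    intercept = my - slope * mx
    return pearson_r([slope * x + intercept for x, _ in test],
                     [y for _, y in test])

# Monte Carlo over repeated random splits: one mean R per training/testing ratio
for ratio in (0.1, 0.3, 0.5, 0.7, 0.9):
    mean_r = sum(run_once(ratio) for _ in range(100)) / 100
    print(f"train/test {int(ratio*100)}/{int(100 - ratio*100)}: mean R = {mean_r:.3f}")
```

Averaging R over repeated random splits, as in the paper's 1000-run Monte Carlo, separates the effect of the ratio itself from the luck of any single split.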

171 citations


Journal ArticleDOI
TL;DR: An integrated model is proposed that combines a feature extraction engine and a classification engine operating on raw text datasets from a social media engine; simulations show that the ANN-DRL achieves higher classification performance than conventional machine learning classifiers.
Abstract: In the modern era, cyberbullying (CB) is an intentional and aggressive action by an individual or a group against a victim via electronic media. The consequences of CB are increasing alarmingly, affecting victims either physically or psychologically. This calls for automated detection tools, but research on such tools has been limited by poor datasets or by the elimination of important features during CB detection. In this paper, an integrated model is proposed that combines a feature extraction engine and a classification engine operating on raw text datasets from a social media engine. The feature extraction engine takes psychological features, user comments, and context into consideration for CB detection. The classification engine, based on an artificial neural network (ANN), classifies the results and is coupled with an evaluation system that either rewards or penalizes the classified output. The evaluation is carried out using Deep Reinforcement Learning (DRL), which improves the classification performance. A simulation is carried out to validate the efficacy of the ANN-DRL model against various metrics, including accuracy, precision, recall, and F-measure. The simulation results show that the ANN-DRL achieves higher classification performance than conventional machine learning classifiers.
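The reward-or-penalize evaluation described in the abstract can be illustrated with a minimal sketch. The +1/-1 scheme and the function name are assumptions for illustration; the paper's actual DRL update is not reproduced here.

```python
def evaluate_with_rewards(preds, labels):
    """Reward/penalty scheme of the kind the abstract describes: +1 for each
    correct classification, -1 for each incorrect one. The running total could
    then drive a reinforcement-learning update (illustrative sketch only)."""
    return sum(1 if p == y else -1 for p, y in zip(preds, labels))

# 3 correct predictions and 1 wrong one -> reward total 3 - 1 = 2
print(evaluate_with_rewards([1, 0, 1, 1], [1, 0, 0, 1]))
```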

102 citations


Journal ArticleDOI
TL;DR: A new method of structural modeling is utilized to generate the structured derivation relationship, thus completing the natural language knowledge extraction process of the object-oriented knowledge system.
Abstract: With recent technological advances, clustering is being used in various domains, including natural language recognition. This article contributes to the clustering of natural language and fulfills the requirements for dynamic updating of a knowledge system. It proposes a method of dynamic knowledge extraction based on sentence clustering recognition using a neural network-based framework. The conversion process from natural language papers to an object-oriented knowledge system is studied, considering the related problems of sentence vectorization. The article studies the attributes of sentence vectorization using basic definitions, a judgment theorem, and postprocessing elements. The sentence clustering recognition method uses the concept of prereliability as a measure of the credibility of sentence recognition results. An ART2 neural network simulation program is written in MATLAB, and the effect of the neural network on sentence recognition is analyzed. A postreliability evaluation index assesses the credibility of the model construction, and the implementation steps for the conjunctive-rule sentence pattern are introduced. A new method of structural modeling is used to generate the structured derivation relationship, completing the natural language knowledge extraction process of the object-oriented knowledge system. An application example from mechanical CAD demonstrates the implementation and confirms the effectiveness of the proposed method.
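The sentence-vectorization and clustering idea above can be sketched with cosine similarity over toy bag-of-words vectors. The vocabulary, vectors, and threshold are invented for illustration; this is only loosely analogous to ART2's vigilance test, not the paper's method.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sentence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy bag-of-words sentence vectors over the vocabulary ["gear", "shaft", "torque"]
s1, s2, s3 = [2, 1, 0], [1, 1, 0], [0, 0, 3]

# Vigilance-style cutoff: two sentences join the same cluster only when their
# similarity exceeds the threshold (loosely analogous to ART2's vigilance test).
vigilance = 0.9
print(cosine(s1, s2) > vigilance, cosine(s1, s3) > vigilance)
```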

101 citations


Journal ArticleDOI
TL;DR: In this paper, a generalized fractional derivative (GFD) definition is proposed for a differentiable function expanded by a Taylor series, and GFD is applied for some functions to investigate that the GFD coincides with the results from Caputo and Riemann–Liouville fractional derivatives.
Abstract: A generalized fractional derivative (GFD) definition is proposed in this work. For a differentiable function expanded by a Taylor series, an explicit expression for the GFD is derived. The GFD is applied to some functions to verify that it coincides with the results of the Caputo and Riemann–Liouville fractional derivatives. The solutions of the Riccati fractional differential equation are obtained via the GFD. A comparison with the Bernstein polynomial method, the enhanced homotopy perturbation method, and the conformable derivative is also discussed. Our results show that the proposed definition gives much better accuracy than the well-known conformable derivative. Therefore, the GFD has advantages over other related definitions. This work provides a simple new tool for obtaining analytical solutions of many problems in fractional calculus.
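The Caputo/Riemann–Liouville results that the GFD is checked against reduce, for power functions, to a standard closed form. The sketch below shows that classical formula only (the GFD definition itself is not given in this abstract).

```python
import math

def caputo_power(n, alpha, x):
    """Caputo/Riemann-Liouville fractional derivative of x**n for n > 0 and
    alpha in (0, 1), via the standard power-function formula:
        D^alpha x^n = Gamma(n + 1) / Gamma(n - alpha + 1) * x**(n - alpha).
    This is the classical result such comparisons rest on, not the GFD itself."""
    return math.gamma(n + 1) / math.gamma(n - alpha + 1) * x ** (n - alpha)

# Half derivative of x at x = 1 equals 2 / sqrt(pi)
print(caputo_power(1, 0.5, 1.0), 2 / math.sqrt(math.pi))
```

As a sanity check, letting alpha approach 1 recovers the classical derivative (e.g., for x squared at x = 1, the value tends to 2).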

70 citations


Journal ArticleDOI
TL;DR: A framework is designed by using Convolutional Neural Networks to diagnose COVID-19 patients using chest X-ray images and shows that the proposed COVID-19 identification model obtains remarkably better results and may be utilized for real-time testing of patients.
Abstract: COVID-19 is a new disease, caused by the novel coronavirus SARS-CoV-2, that was first described in humans in 2019. Coronaviruses cause a range of illnesses in patients, varying from the common cold to advanced respiratory syndromes such as Severe Acute Respiratory Syndrome (SARS-CoV) and Middle East Respiratory Syndrome (MERS-CoV). The SARS-CoV-2 outbreak has resulted in a global pandemic, and its transmission is increasing at a rapid rate. Diagnostic testing and approaches provide a valuable tool for doctors and support them in the screening process. Automatic COVID-19 identification in chest X-ray images can be used to test for COVID-19 infection rapidly. Therefore, in this paper, a framework is designed using Convolutional Neural Networks (CNN) to diagnose COVID-19 patients from chest X-ray images. A pretrained GoogLeNet is utilized for transfer learning (i.e., by replacing some sets of final CNN layers). 20-fold cross-validation is used to mitigate overfitting. Finally, a multiobjective genetic algorithm is used to tune the hyperparameters of the proposed COVID-19 identification model. Extensive experiments show that the proposed model obtains remarkably better results and may be utilized for real-time testing of patients.
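The 20-fold cross-validation mentioned above partitions the data so every sample is validated exactly once. A minimal index-splitting sketch (illustrative; a real pipeline would also shuffle and stratify by class):

```python
def kfold_indices(n, k=20):
    """Split range(n) into k contiguous folds and yield (train, validation)
    index lists -- the partitioning behind k-fold cross-validation."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, val
        start += size

folds = list(kfold_indices(100, k=20))
print(len(folds), len(folds[0][1]))  # 20 folds, 5 validation samples each
```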

66 citations


Journal ArticleDOI
TL;DR: A smart decision engine has been developed that uses classification and regression trees for decision-making, together with a recommendation engine powered by a deep learning network to suggest the escalation of a farmer from a lower to a higher category, namely, from small to medium to large.
Abstract: Recently, many companies have substituted human labor with robotics. Farmers hold different perspectives on the incorporation of technology into farming techniques: some are willing to accept the technology, some are hesitant and bemused about adopting modern technology, and others are uncertain and worried about the potential of technology to cause havoc and decrease yields. The third group is the most prevalent in the developed world, owing to a lack of know-how, unclear utility, and, most significantly, the expense involved. A dedicated Smart Tillage platform is established to solve the above issues. A smart decision engine has been developed that uses classification and regression trees for decision-making. The decision is based entirely on different input factors, such as type of crop, time/month of harvest, type of plant required for the crop, type of harvest, and authorised rental budget. Sitting on top of this is a recommendation engine powered by a deep learning network that suggests the escalation of a farmer from a lower to a higher category, namely, from small to medium to large. A metaheuristic is a computing technique that helps solve a problem without the exhaustive application of a procedure. Recommendations are cost-effective and suitable for escalating updates depending on the use of appropriate amendments, practices, and services. We carried out a study of 562 agriculturists. Unable to afford modern equipment, growers are burdened by debt. We examine whether they would be able to rent and exchange appliances. Farmers would also be able to use an e-marketplace to develop their activities.

62 citations


Journal ArticleDOI
TL;DR: A new metaheuristic named the dingo optimizer (DOX), motivated by the behavior of the dingo, is presented; it performed significantly better than other nature-inspired algorithms.
Abstract: Optimization is the first thing that comes to mind whenever researchers consider engineering problems. This paper presents a new metaheuristic named the dingo optimizer (DOX), motivated by the behavior of the dingo (Canis familiaris dingo). The overall concept is to develop a method modeling the collaborative and social behavior of dingoes. The algorithm is based on the hunting behavior of dingoes, which includes exploration, encircling, and exploitation. All of these prey-hunting steps are modeled mathematically and implemented in a simulator to test the performance of the proposed algorithm. Comparative analyses are drawn between the proposed approach, the grey wolf optimizer (GWO), and particle swarm optimization (PSO) on well-known test functions. The results reveal that the dingo optimizer performs significantly better than the other nature-inspired algorithms.

62 citations


Journal ArticleDOI
TL;DR: The goal of this study is to develop the notion of the Maclaurin symmetric mean (MSM) operator, as it aggregates information under uncertain environments and considers the relationship of the input arguments, which makes it unique.
Abstract: To evaluate objects under uncertainty, many fuzzy frameworks have been designed and investigated so far. Among them, the frame of picture fuzzy set (PFS) is of considerable significance which can describe the four possible aspects of expert’s opinion using a degree of membership (DM), degree of nonmembership (DNM), degree of abstinence (DA), and degree of refusal (DR) in a certain range. Aggregation of information is always challenging especially when the input arguments are interrelated. To deal with such cases, the goal of this study is to develop the notion of the Maclaurin symmetric mean (MSM) operator as it aggregates information under uncertain environments and considers the relationship of the input arguments, which make it unique. In this paper, we studied the theory of MSM operators in the layout of PFSs and discussed their applications in the selection of the most suitable enterprise resource management (ERP) scheme for engineering purposes. We developed picture fuzzy MSM (PFMSM) operators and investigated their validity. We developed the multiattribute decision-making (MADM) algorithm based on the PFMSM operators to examine the performance of the ERP systems using picture fuzzy information. A numerical example to evaluate the performance of ERP systems is studied, and the effects of the associated parameters are discussed. The proposed aggregated results using PFMSM operators are found to be reliable as it takes into account the interrelationship of the input information, unlike traditional aggregation operators. A comparative study of the proposed PFMSM operators is also studied.
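The aggregation idea behind the MSM operator can be shown in its classical crisp form. The sketch below computes the standard Maclaurin symmetric mean of real numbers; the paper's picture fuzzy extension (PFMSM) operates on picture fuzzy numbers instead and is not reproduced here.

```python
from itertools import combinations
from math import comb, prod

def msm(values, k):
    """Classical (crisp) Maclaurin symmetric mean of order k:
    MSM_k = ((1 / C(n, k)) * sum over all k-subsets of their product) ** (1 / k).
    k > 1 makes the aggregate sensitive to interrelationships among inputs."""
    n = len(values)
    total = sum(prod(sub) for sub in combinations(values, k))
    return (total / comb(n, k)) ** (1 / k)

ratings = [0.2, 0.5, 0.8, 0.9]  # hypothetical attribute values in [0, 1]
print(round(msm(ratings, 1), 10))  # k = 1 recovers the arithmetic mean: 0.6
```

By Maclaurin's inequality, MSM_k is nonincreasing in k, interpolating between the arithmetic mean (k = 1) and the geometric mean (k = n).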

61 citations


Journal ArticleDOI
TL;DR: Prior knowledge and the utilization of several image processing methods and machine learning techniques are helpful for detecting these cotton leaf diseases appropriately.
Abstract: Cotton is a natural fiber and a commercial crop grown in monoculture on 2.5% of total agricultural land. Cotton is a drought-resistant crop that provides a reliable income to farmers, even in areas threatened by climate change. Cotton crops are affected by bacterial, fungal, viral, and other parasitic diseases that vary with climatic conditions, resulting in low crop productivity. The part most prone to disease is the leaf; leaf disease damages the plant and sometimes the whole crop, and most diseases occur only on the leaf parts of the cotton plant. The primary purpose of disease detection has always been to identify the diseases affecting the plant in the early stages using traditional techniques, for better production. Prior knowledge and the utilization of several image processing methods and machine learning techniques are helpful for detecting these cotton leaf diseases appropriately.

54 citations


Journal ArticleDOI
TL;DR: In this article, a linear minimum-variance unbiased estimation criterion was proposed to estimate the state and unknown input when the system is affected by unknown input, but the recursive three-step filter cannot be applied when the unknown input distribution matrix is not of full column rank.
Abstract: The classical recursive three-step filter can be used to estimate the state and the unknown input when a system is affected by an unknown input, but it cannot be applied when the unknown input distribution matrix is not of full column rank. To solve this problem, this paper proposes two novel filters based on the linear minimum-variance unbiased estimation criterion. First, for the case where the unknown input distribution matrix in the output equation is not of full column rank, a novel recursive three-step filter with direct feedthrough is proposed. Then, a novel recursive three-step filter is developed for the case where the unknown input distribution matrix in the system equation is not of full column rank. Finally, the specific recursive steps of the corresponding filters are summarized. The simulation results show that the proposed filters can effectively estimate the system state and the unknown input.
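The recursive three-step filters above extend the classical Kalman predict/correct recursion with an unknown-input estimation step. A minimal scalar Kalman filter shows the base recursion only (the parameters and random-walk model are illustrative assumptions, not the paper's system):

```python
def kalman_1d(zs, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter with a random-walk state model:
    predict the error covariance, compute the gain, correct with the
    measurement. The paper's three-step filters add an unknown-input
    estimation step to this recursion."""
    x, p, estimates = x0, p0, []
    for z in zs:
        p = p + q                  # predict: process noise inflates covariance
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # correct with the measurement z
        p = (1 - k) * p
        estimates.append(x)
    return estimates

est = kalman_1d([1.0] * 50)
print(est[-1])  # converges toward the constant measurement 1.0
```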

54 citations


Journal ArticleDOI
TL;DR: In this article, the primary motion artifact is detected from a single-channel EEG signal using support vector machine (SVM) and preceded with further artifacts suppression using canonical correlation analysis (CCA) filtering approach.
Abstract: Electroencephalogram (EEG) signals are big data that are frequently corrupted by motion artifacts. Since the diagnosis and analysis of human neural diseases need a robust neurological signal, the eradication of EEG artifacts is a vital step. In this research paper, the primary motion artifact is detected from a single-channel EEG signal using a support vector machine (SVM), followed by further artifact suppression. Signal feature abstraction and detection are done through the ensemble empirical mode decomposition (EEMD) algorithm. Moreover, a canonical correlation analysis (CCA) filtering approach is applied for motion artifact removal. Leftover motion artifact variability is then removed by applying a wavelet transform (WT) algorithm. Finally, the results are optimized using the Harris hawks optimization (HHO) algorithm. The assessment results confirm that the recommended algorithm is superior to the algorithms currently in use.

Journal ArticleDOI
TL;DR: A novel bio-inspired algorithm, namely, Dingo Optimization Algorithm (DOA), is proposed for solving optimization problems that mimics the social behavior of the Australian dingo dog and is tested against five popular evolutionary algorithms.
Abstract: A novel bio-inspired algorithm, namely, the Dingo Optimization Algorithm (DOA), is proposed for solving optimization problems. The DOA mimics the social behavior of the Australian dingo dog. The algorithm is inspired by the hunting strategies of dingoes, which are attacking by persecution, grouping tactics, and scavenging behavior. In order to increase the overall efficiency and performance of this method, three search strategies associated with four rules were formulated in the DOA. These strategies and rules provide a fine balance between intensification (exploitation) and diversification (exploration) over the search space. The proposed method is verified using several benchmark problems commonly used in the optimization field and classical engineering design problems; optimal tuning of a Proportional-Integral-Derivative (PID) controller is also presented. Furthermore, the DOA's performance is tested against five popular evolutionary algorithms. The results show that the DOA is highly competitive with other metaheuristics, beating them on the majority of the test functions.
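The encircling-and-exploitation balance described above is common to this family of swarm metaheuristics. The sketch below is a generic encircling update with a shrinking exploration coefficient, not the DOA's exact equations; the sphere benchmark and all parameters are assumptions.

```python
import random

def sphere(x):
    """Benchmark objective with global minimum 0 at the origin."""
    return sum(v * v for v in x)

def dingo_like_optimize(f, dim=5, pack=20, iters=200, lo=-10.0, hi=10.0, seed=1):
    """Generic pack-based search in the style of such metaheuristics:
    each agent encircles the current best solution ("prey") with a step
    whose amplitude shrinks over iterations (exploration -> exploitation)."""
    rng = random.Random(seed)
    positions = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pack)]
    best = min(positions, key=f)
    for t in range(iters):
        a = 2.0 * (1 - t / iters)  # shrinks 2 -> 0 over the run
        for i, pos in enumerate(positions):
            candidate = []
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                A, C = a * (2 * r1 - 1), 2 * r2
                step = abs(C * best[d] - pos[d])          # distance to the prey
                candidate.append(min(hi, max(lo, best[d] - A * step)))
            if f(candidate) < f(pos):                      # greedy acceptance
                positions[i] = candidate
        best = min(positions + [best], key=f)
    return best, f(best)

_, value = dingo_like_optimize(sphere)
print(value)  # typically a small value near 0
```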

Journal ArticleDOI
TL;DR: In this paper, a novel ultrawide band planar antenna with band notched characteristics is presented, which is achieved by loading a pair of metamaterial inspired rectangular split ring resonator (SRR) near the feed line and by etching the SRR slots on a radiating patch.
Abstract: A novel compact-size ultrawide band planar antenna with band-notched characteristics is presented. The band rejection characteristic is achieved by loading a pair of metamaterial-inspired rectangular split ring resonators (SRR) near the feed line and by etching SRR slots on the radiating patch. The simulated and measured results reveal that the proposed antenna exhibits an impedance bandwidth over the ultrawide band (UWB) frequency range from 3.1 to 14 GHz with a voltage standing wave ratio less than 2, except for the notched bands at 3.29 to 3.7 GHz (WiMAX band), 3.7 to 4.10 GHz (C-band), 5.1 to 5.9 GHz (WLAN band), and 7.06 to 7.76 GHz (downlink X-band satellite communication). The proposed antenna, fabricated on low-cost FR-4 substrate, has a compact size of 24 × 20 × 1.6 mm³. The simulation results are compared with the measured results and demonstrate good agreement, with stable gain over the passbands. The proposed antenna also exhibits a dipole-like radiation pattern in the E-plane and an omnidirectional pattern in the H-plane. These results lead to the conclusion that the presented antenna is a suitable candidate for UWB applications with the desired band-notch characteristics.
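The VSWR < 2 matching criterion used above relates directly to the reflection coefficient. A small conversion sketch (the -10 dB and -20 dB inputs are example values, not the antenna's measured data):

```python
def vswr_from_s11(s11_db):
    """Voltage standing wave ratio from a reflection coefficient |S11| given
    in dB (s11_db <= 0). VSWR < 2 corresponds roughly to |S11| < -9.5 dB."""
    gamma = 10 ** (s11_db / 20)          # linear reflection magnitude
    return (1 + gamma) / (1 - gamma)

print(round(vswr_from_s11(-10), 2))  # 1.92: just inside the VSWR < 2 criterion
print(round(vswr_from_s11(-20), 2))  # 1.22: a well-matched band
```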

Journal ArticleDOI
TL;DR: This paper identifies the applications of machine learning (ML) in SCM as one of the most well-known artificial intelligence (AI) techniques.
Abstract: In today’s complex and ever-changing world, concerns about the lack of enough data have been replaced by concerns about too much data for supply chain management (SCM). The volume of data generated from all parts of the supply chain has changed the nature of SCM analysis. By increasing the volume of data, the efficiency and effectiveness of the traditional methods have decreased. Limitations of these methods in analyzing and interpreting a large amount of data have led scholars to generate some methods that have high capability to analyze and interpret big data. Therefore, the main purpose of this paper is to identify the applications of machine learning (ML) in SCM as one of the most well-known artificial intelligence (AI) techniques. By developing a conceptual framework, this paper identifies the contributions of ML techniques in selecting and segmenting suppliers, predicting supply chain risks, and estimating demand and sales, production, inventory management, transportation and distribution, sustainable development (SD), and circular economy (CE). Finally, the implications of the study on the main limitations and challenges are discussed, and then managerial insights and future research directions are given.

Journal ArticleDOI
TL;DR: The intelligent recommendation system based on association rules can recommend products more in line with user needs and interests and promote higher click-through rate and purchase rate, but user satisfaction can be further improved.
Abstract: With the advent of the era of big data, data mining has become one of the key technologies in research and business. In order to improve the efficiency of data mining, this paper studies data mining based on an intelligent recommendation system. Firstly, the intelligent recommendation system is modeled mathematically on the basis of association rules. After analyzing the requirements of the system, Java 2 Platform, Enterprise Edition (J2EE) technology is used to divide the system architecture into a presentation layer, a business logic layer, and a data layer. The recommendation module is divided into three substages: data representation, model learning, and recommendation engine. Then, a fuzzy clustering algorithm is used to optimize the system. After the system is built, its performance is evaluated using accuracy, coverage, and response time as indexes. Finally, the system is put into trial operation on an e-commerce platform. The click-through rate and purchase conversion rate of recommended products before and after the trial are compared, and a questionnaire is randomly distributed to the platform users to analyze user satisfaction. The experimental data show that the MAE of this system is the lowest, maintained at about 0.73, and its accuracy is the highest; before the recommendation threshold exceeds 0.5, the average coverage rate of this system is the highest, at 0.75; on the Q1–Q5 subsets, the shortest response time of the system is 0.2 s. Before and after the operation of the system, the average click-through rate increased by 11.04%, and the average purchase rate increased by 9.35%. Among the 1216 surveyed users, 43% gave a satisfaction rating of 4, and 9% gave a rating of 1. This shows that the system's algorithm converges quickly and that it can recommend products more in line with user needs and interests, promoting higher click-through and purchase rates, although user satisfaction can be further improved.
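The association rules underlying such a recommender rest on support and confidence. A tiny illustrative miner for single-item rules (the transactions, thresholds, and function name are invented for the example):

```python
from itertools import permutations

def association_rules(transactions, min_support=0.5, min_conf=0.7):
    """Tiny support/confidence miner for single-item rules A -> B, the kind of
    rule base an association-rule recommender is built on (illustrative only).
    support(A -> B) = P(A and B); confidence(A -> B) = P(B | A)."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n
    rules = []
    for a, b in permutations(items, 2):
        sup = support({a, b})
        if sup >= min_support:
            conf = sup / support({a})   # P(B | A)
            if conf >= min_conf:
                rules.append((a, b, sup, conf))
    return rules

txns = [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"bread"}, {"milk", "bread"}]
for a, b, sup, conf in association_rules(txns):
    print(f"{a} -> {b}  support={sup:.2f} confidence={conf:.2f}")
```

In a deployed system, the consequents of high-confidence rules matching a user's basket become the recommended products.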

Journal ArticleDOI
TL;DR: Some of the strategies of DSM including peak shaving and load scheduling are highlighted and the implementation of numerous optimization techniques on DSM is reviewed.
Abstract: The concept of smart grid was introduced a decade ago. Demand side management (DSM) is one of the crucial aspects of smart grid that provides users with the opportunity to optimize their load usage pattern to fill the gap between energy supply and demand and reduce the peak to average ratio (PAR), thus resulting in energy and economic efficiency ultimately. The application of DSM programs is lucrative for both utility and consumers. Utilities can implement DSM programs to improve the system power quality, power reliability, system efficiency, and energy efficiency, while consumers can experience energy savings, reduction in peak demand, and improvement of system load profile, and they can also maximize usage of renewable energy resources (RERs). In this paper, some of the strategies of DSM including peak shaving and load scheduling are highlighted. Furthermore, the implementation of numerous optimization techniques on DSM is reviewed.
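The peak-shaving and PAR ideas in the abstract above can be made concrete with a toy load-shifting sketch. The hourly profile, limit, and greedy valley-filling rule are illustrative assumptions, not an actual utility DSM program.

```python
def par(load):
    """Peak-to-average ratio of an hourly load profile."""
    return max(load) / (sum(load) / len(load))

def shave(load, limit):
    """Naive peak shaving: clip every hour above `limit` and shift the excess
    energy into the lowest-load hours, preserving total consumption."""
    shifted = [min(v, limit) for v in load]
    excess = sum(v - limit for v in load if v > limit)
    while excess > 1e-9:
        i = min(range(len(shifted)), key=lambda j: shifted[j])  # deepest valley
        add = min(excess, limit - shifted[i])
        shifted[i] += add
        excess -= add
    return shifted

day = [2, 2, 3, 5, 9, 10, 9, 5, 3, 2]   # hourly demand with an evening peak
flat = shave(day, limit=6)
print(round(par(day), 2), round(par(flat), 2))  # PAR drops from 2.0 to 1.2
```

Total energy is unchanged; only its timing shifts, which is exactly the gap-filling between supply and demand the abstract describes.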

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the impact of trade openness, renewable energy consumption, and foreign direct investment in carbon emission in the world developing and developed countries by employing static, dynamic and long run estimators.
Abstract: Studies of environmental degradation and its association with different factors have received considerable attention recently, but with assorted outcomes that have guided the ongoing debate in environmental studies. Energy from renewable sources is considered beneficial for environmental quality, yet its uptake is still below the anticipated level, especially in developing economies. Openness to trade is important for enhancing economic growth, but it has been observed to worsen environmental quality where policies are weak, especially in developing countries. Accordingly, the present research investigates the effects of trade openness, renewable energy consumption, and foreign direct investment on carbon emissions in developing and developed countries by employing static, dynamic, and long-run estimators. Trade openness is found to have a decreasing effect on carbon emissions in developed countries while degrading environmental quality in developing countries, whereas renewable energy consumption enhances environmental quality in both samples. The impact of tourism on carbon emissions varies across samples, and FDI increases emissions in developed countries while having a negative effect on carbon emissions in developing countries. The long-run estimators also evidence the existence of a long-run association among the variables. The outcomes have considerable policy implications for trade openness policy formulation to improve environmental quality, especially in developing countries, and the study offers further suggestions regarding tourism and promoting renewable energy sources in place of conventional energy to enhance environmental quality.

Journal ArticleDOI
TL;DR: This work develops a computer-based fully automated system to identify basic armaments, particularly handguns and rifles, and implements YOLO V3 “You Only Look Once” object detection model by training it on the authors' customized dataset.
Abstract: Every year, a large part of the world's population is affected by gun-related violence. In this work, we develop a computer-based, fully automated system to identify basic armaments, particularly handguns and rifles. Recent work in the field of deep learning and transfer learning has demonstrated significant progress in the areas of object detection and recognition. We have implemented the YOLO V3 ("You Only Look Once") object detection model by training it on our customized dataset. The training results confirm that YOLO V3 outperforms YOLO V2 and a traditional convolutional neural network (CNN). Additionally, intensive GPUs or high computation resources were not required in our approach, as we used transfer learning for training our model. Applying this model in our surveillance system, we can attempt to save human lives and reduce the rate of manslaughter or mass killings. Additionally, our proposed system can also be implemented in high-end surveillance and security robots to detect a weapon or unsafe assets to avoid any kind of assault or risk to human life.
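Detector comparisons like the YOLO V3 vs. YOLO V2 one above are scored with box overlap. A minimal intersection-over-union sketch (the example boxes are invented):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2),
    the standard overlap metric used when evaluating detectors such as YOLO."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

A predicted box typically counts as a correct detection when its IoU with a ground-truth box exceeds a threshold such as 0.5.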

Journal ArticleDOI
TL;DR: It is concluded that the accuracy of the fit between the predicted data and the actual values is as high as 98%, enabling traditional machine learning algorithms to meet the requirements of the big data era.
Abstract: With the rapid development of the Internet and of big data analysis technology, data mining has played a positive role in advancing both industry and academia. Classification is an important problem in data mining. This paper explores the background and theory of support vector machines (SVM) in data mining classification algorithms and analyzes and summarizes the research status of various improved SVM methods. According to the scale and characteristics of the data, different solution spaces are selected, and the solution of the dual problem is transformed into the classification surface of the original space to improve algorithm speed. Research process: incorporating fuzzy membership into multiple kernel learning, it is found that the time complexity of the primal problem is determined by the feature dimension, while the time complexity of the dual problem is determined by the sample count; since dimension and count together constitute the scale of the data, the solution space can be chosen according to the scale and features of the data. The algorithm speed can be improved by transforming the solution of the dual problem into the classification surface of the original space. Conclusion: by improving the computation rate of traditional machine learning algorithms, the accuracy of the fit between predicted and actual values reaches 98%, enabling traditional machine learning algorithms to meet the requirements of the big data era and to be widely used in big data contexts.
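The primal-vs-dual complexity contrast above can be made concrete: the dual SVM problem is posed over the Gram matrix, whose size is set by the sample count n, while the primal's size follows the feature dimension d. A minimal linear-kernel sketch with invented data:

```python
def gram_matrix(X):
    """Linear-kernel Gram matrix K[i][j] = <x_i, x_j>. The dual SVM problem
    optimizes over this n x n matrix, so its scale is set by the sample
    count n; the primal problem's scale is set by the feature dimension d."""
    return [[sum(a * b for a, b in zip(xi, xj)) for xj in X] for xi in X]

X = [[1.0, 2.0], [0.0, 1.0], [3.0, 1.0]]   # n = 3 samples, d = 2 features
K = gram_matrix(X)
print(len(K), len(K[0]))  # 3 3 -> the dual's size follows n, not d
```

This is why choosing between the primal and dual formulations according to whether n or d dominates, as the abstract describes, affects the algorithm's speed.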

Journal ArticleDOI
TL;DR: It is argued “quantum image classification and recognition” would be the most significant opportunity to exhibit the real quantum superiority and dwell on the challenges for this opportunity in the era of NISQ (Noisy Intermediate-Scale Quantum).
Abstract: Quantum image processing (QIP) is a research branch of quantum information and quantum computing. It studies how to take advantage of quantum mechanics’ properties to represent images in a quantum computer and then, based on that image format, implement various image operations. Due to the quantum parallel computing derived from quantum state superposition and entanglement, QIP has natural advantages over classical image processing. But some related works misuse the notion of quantum superiority and mislead the research of QIP, which leads to a big controversy. In this paper, after describing this field’s research status, we list and analyze the doubts about QIP and argue “quantum image classification and recognition” would be the most significant opportunity to exhibit the real quantum superiority. We present the reasons for this judgment and dwell on the challenges for this opportunity in the era of NISQ (Noisy Intermediate-Scale Quantum).

Journal ArticleDOI
TL;DR: This study represents the first investigation of real applications of soft separation axioms and proves that pt-soft αTi-spaces are additive and topological properties and are preserved under finite product of soft spaces.
Abstract: In this work, we introduce new types of soft separation axioms called pt-soft α-regular and pt-soft αTi-spaces using partial belong and total nonbelong relations between ordinary points and soft α-open sets. These soft separation axioms enable us to initiate new families of soft spaces and then obtain new interesting properties. We provide several examples to elucidate the relationships between them as well as their relationships with related classes of soft spaces. Also, we determine the conditions under which they are equivalent and link them with their counterparts on topological spaces. Furthermore, we prove that pt-soft αTi-spaces are additive and topological properties and demonstrate that pt-soft αTi-spaces are preserved under finite product of soft spaces. Finally, we discuss an application of optimal choices using the idea of pt-soft αTi-spaces on the content of soft weak structure. We provide an algorithm for this application with an example showing how it is carried out. In fact, this study represents the first investigation of real applications of soft separation axioms.

Journal ArticleDOI
TL;DR: In this paper, a new form of two-stage robust optimization is suggested, where facility locations and activation of BCT for VSCND is the first stage of decisions; finally, flow transshipment between components is determined in the next stage.
Abstract: Nowadays, the use of Blockchain Technology (BCT) is growing rapidly in many countries. Applying BCT to Supply Chain Network Design (SCND) is essential and is increasingly considered by the designers and managers of supply chains. This research addresses Viable Supply Chain Network Design (VSCND) with BCT. A new form of two-stage robust optimization is suggested: facility location and activation of BCT constitute the first stage of decisions, and flow transshipment between components is determined in the second stage. GAMS-CPLEX is used to solve the model. The results show that running BCT decreases costs by 0.99%, and there is an economic justification for using BCT when demand is high. Fix-and-optimize and Lagrangian relaxation (LR) heuristics generate lower and upper bounds to solve large-scale instances in minimal time; the gap obtained by fix-and-optimize is better than that of the LR algorithm. Finally, this research suggests equipping VSCND with BCT, which makes the network more resilient against demand fluctuation, more sustainable, and more agile.

Journal ArticleDOI
TL;DR: In this paper, the authors provided an analysis of chaotic information transmission from the COVID-19 pandemic to global equity markets in a novel denoised frequency domain entropy framework.
Abstract: This study provides an analysis of chaotic information transmission from the COVID-19 pandemic to global equity markets in a novel denoised frequency domain entropy framework. The current length of the pandemic data offers the opportunity to examine its role in the asymmetric behaviour patterns of investors according to time horizons and the diversification potentials available to them. We employ the total daily global confirmed cases of COVID-19 and 27 equity indices from December 31, 2019, to April 18, 2021. Our results corroborate the idea that diversification potentials are stronger in the short to medium term. The Global Index (higher risk) and Canada and New Zealand (lower risk) remain at both ends to pair some other equities to offer diversification prospects because of the transmission of information from COVID-19 to the selected equity markets. In addition, we provide the source of these diversification prospects as information flow rather than transmission of shocks, which is common in the literature. Furthermore, our results suggest detailed levels of risk (lower vis-a-vis higher) in the situation where they have been stripped of the noise in the market. The findings allow both investors and policymakers to make informed decisions based on the time horizons since the pandemic communicates different chaotic information with the lapse of time. This is imperative to avoid the negative consequences of the increasing infection rate on global stock markets.
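As a hint of the machinery involved, the basic building block of such information-flow measures is the entropy of a binned series. The sketch below computes plain histogram-based Shannon entropy only; the wavelet denoising, frequency-domain decomposition, and transfer-entropy estimation used in the study are omitted, and the bin count is an arbitrary choice.

```python
import math
from collections import Counter

def shannon_entropy(series, bins=4):
    """Histogram-based Shannon entropy (in bits) of a numeric series: the
    elementary quantity behind transfer-entropy style information-flow
    measures between, e.g., case counts and equity returns."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0  # guard against a constant series
    # Assign each observation to one of `bins` equal-width histogram bins.
    labels = [min(int((x - lo) / width), bins - 1) for x in series]
    n = len(series)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())
```

A constant series carries zero entropy, while a series spread uniformly over the bins attains the maximum of log2(bins) bits.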

Journal ArticleDOI
TL;DR: This research aims to fine-tune ResNet50 by using network surgery and creation of network head along with the fine-tuning of hyperparameters to classify the images in a more effective and efficient manner as compared to the state-of-the-art research.
Abstract: Image classification has gained a lot of attention due to its application in different computer vision tasks such as remote sensing, scene analysis, surveillance, object detection, and image retrieval. The primary goal of image classification is to assign class labels to images according to the image contents. The applications of image classification and image analysis in remote sensing are important as they are used in various applied domains such as military and civil fields. Earlier approaches for remote sensing images and scene analysis are based on low-level feature representations such as color- and texture-based features. Vector of Locally Aggregated Descriptors (VLAD) and orderless Bag-of-Features (BoF) representations are examples of mid-level approaches for remote sensing image classification. Recent trends for remote sensing and scene classification are focused on the use of Convolutional Neural Networks (CNNs). Keeping in view the success of CNN models, in this research, we aim to fine-tune ResNet50 by using network surgery and creation of a network head along with the fine-tuning of hyperparameters. The learning rate is tuned by using a piecewise (step) decay scheduler. For the optimizer, Stochastic Gradient Descent with Momentum (SGDM) is used together with weight and bias learning-rate factors. Experiments and analysis are conducted on five different datasets, that is, UC Merced Land Use Dataset (UCM), RSSCN (the remote sensing scene classification image dataset), SIRI-WHU, Corel-1K, and Corel-1.5K. The analysis and competitive results exemplify that our proposed image classification model can classify images more effectively and efficiently than the state-of-the-art research.
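A piecewise scheduler of the kind mentioned above drops the learning rate by a fixed factor at regular epoch intervals, and SGDM adds a momentum term to the plain gradient step. The following minimal sketch assumes illustrative values (drop every 10 epochs by a factor of 10, momentum 0.9) rather than the paper's actual settings.

```python
def piecewise_lr(initial_lr, epoch, drop_period=10, drop_factor=0.1):
    """Piecewise (step) schedule: multiply the learning rate by
    drop_factor once every drop_period epochs."""
    return initial_lr * (drop_factor ** (epoch // drop_period))

def sgdm_step(w, grad, velocity, lr, momentum=0.9):
    """One SGD-with-Momentum update: the velocity accumulates a decaying
    history of past gradients and is then applied to the weights."""
    velocity = [momentum * v - lr * g for v, g in zip(velocity, grad)]
    w = [wi + vi for wi, vi in zip(w, velocity)]
    return w, velocity
```

With these values, the rate starts at 0.01 for epochs 0-9, drops to 0.001 for epochs 10-19, and so on; per-layer weight and bias learning-rate factors would simply scale `lr` before the `sgdm_step` call.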

Journal ArticleDOI
TL;DR: The Suricata IDS/IPS is deployed along with an NN model for the detection of malicious traffic in the targeted network, combining metaheuristic-based feature selection with fuzzy-logic-based anomaly detection.
Abstract: With the expansion of communication in today’s world and the possibility of creating interactions between people through communication networks regardless of the distance dimension, the issue of creating security for the data and information exchanged has received much attention from researchers. Various methods have been proposed for this purpose; one of the most important is intrusion detection systems, which quickly detect intrusions into the network and inform the manager or responsible people so that an operational response can reduce the amount of damage caused by these intruders. The main challenge of the proposed intrusion detection systems is the number of erroneous warning messages generated and the low percentage of accurate detection of intrusions. In this research, the Suricata IDS/IPS is deployed along with a neural network (NN) model for the detection of malicious traffic in the targeted network. A metaheuristic is applied for feature selection, a neural network for classification, and fuzzy logic for anomaly-based detection. The latest stable version of Kali Linux 2020.3 is used as an attacking system for web applications and different types of operating systems. The proposed method has achieved 96.111% accuracy for detecting network intrusion.
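Anomaly scoring with fuzzy logic, as mentioned above, typically maps raw traffic features through membership functions and combines them with fuzzy rules. The sketch below uses triangular memberships and a single Mamdani-style AND rule; the feature names and numeric ranges are invented for illustration and do not come from the paper.

```python
def triangular(x, a, b, c):
    """Triangular membership function: rises from a to a peak at b,
    then falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def anomaly_score(packet_rate, failed_logins):
    """Mamdani-style AND (min) of two memberships: traffic is judged
    anomalous to the degree that BOTH indicators sit in their 'high'
    regions. The ranges below are illustrative placeholders."""
    high_rate = triangular(packet_rate, 500.0, 1500.0, 2500.0)
    high_fail = triangular(failed_logins, 5.0, 15.0, 25.0)
    return min(high_rate, high_fail)
```

Graded scores like these, rather than hard thresholds, are what let a fuzzy stage suppress the flood of borderline false-positive alerts the abstract identifies as the main challenge.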

Journal ArticleDOI
TL;DR: A discrete mathematical model is proposed to design the evaluation index and evaluation system of urban environmental and economic coordination that has high accuracy and effectively improves the reliability and evaluation time.
Abstract: The urban ecological environment is the material basis and condition for human beings to engage in social and economic activities and the supporting system for the formation and sustainable development of cities. With the acceleration of urbanization and industrialization, urban living environments and economic development have become the focus of people’s attention. This makes it necessary to study how to improve the quality of the urban living environment and promote the harmonious coexistence of population, natural environment, and social economy. Traditional methods rely on multiple regression models to evaluate urban environmental and economic harmony, but they do not consider the weight of each index, resulting in poor accuracy of the evaluation results. This paper proposes a discrete mathematical model to design the evaluation index and evaluation system of urban environmental and economic coordination. It calculates the weight of each index; the carrying capacity of the urban environment, the value of each environmental factor, and the comprehensive value of the environment are then determined. Static and dynamic evaluations are used to assess the coordination of the urban environmental economy. The experimental results show that the designed evaluation method has high accuracy and effectively improves reliability while reducing evaluation time.
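The index-weighting step described above can be sketched as a min-max normalisation of the raw indicators followed by a weighted linear aggregation into one comprehensive score. The weights and values below are placeholders; the paper derives its own weights for each indicator.

```python
def min_max_normalise(values):
    """Scale raw indicator values to the [0, 1] range so that indicators
    measured in different units become comparable."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # guard against identical values
    return [(v - lo) / span for v in values]

def composite_index(values, weights):
    """Weighted linear aggregation of normalised indicators into a single
    coordination score; the weights are assumed to sum to one."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(v * w for v, w in zip(values, weights))
```

Unequal weights are exactly what the multiple-regression baseline criticised in the abstract lacks: a heavily weighted indicator (say, environmental carrying capacity) then dominates the comprehensive value as intended.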

Journal ArticleDOI
TL;DR: Experimental results on the Epinions dataset show that the proposed matrix factorization recommendation algorithm achieves a significant improvement over both the traditional algorithm and matrix factorization recommendation algorithms that integrate a single social relationship.
Abstract: With the widespread use of social networks, social recommendation algorithms that add social relationships between users to recommender systems have been widely applied. Existing social recommendation algorithms introduce only one type of social relationship to the recommendation system, but in reality, there are often multiple social relationships among users. In this paper, a new matrix factorization recommendation algorithm combined with multiple social relationships is proposed. Experimental results on the Epinions dataset show that the proposed algorithm achieves a significant improvement over both the traditional matrix factorization algorithm and matrix factorization recommendation algorithms that integrate a single social relationship.
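One common way to fold several social relationships into matrix factorization is to add one regulariser per relationship type that pulls a user's latent factors toward those of trusted neighbours. The SGD update below is a generic sketch of that idea, not the paper's exact objective; the hyperparameters and the toy trust structure are illustrative.

```python
def mf_sgd_step(U, V, S_list, r, u, i, lr=0.01, reg=0.02, beta=0.01):
    """One SGD update of matrix factorization with multiple social
    regularisers: S_list holds, for each relationship type, a dict mapping
    a user to the neighbours (and trust weights) that pull on its factors."""
    pred = sum(pu * qi for pu, qi in zip(U[u], V[i]))
    err = r - pred
    # Social pull: distance between user u's factors and each neighbour's,
    # summed over every relationship type.
    social = [0.0] * len(U[u])
    for S in S_list:
        for v, w_uv in S.get(u, {}).items():
            for k in range(len(U[u])):
                social[k] += w_uv * (U[u][k] - U[v][k])
    for k in range(len(U[u])):
        uk = U[u][k]
        U[u][k] += lr * (err * V[i][k] - reg * uk - beta * social[k])
        V[i][k] += lr * (err * uk - reg * V[i][k])
    return err

# Toy run: one observed rating r = 4 for user 0 on item 0, while user 1
# trusts user 0 under a single (illustrative) relationship type.
U = [[0.1, 0.1], [0.1, 0.1]]
V = [[0.1, 0.1]]
S_list = [{1: {0: 1.0}}]
for _ in range(500):
    mf_sgd_step(U, V, S_list, 4.0, 0, 0)
```

Extending `S_list` with further dicts (friendship, co-rating, group membership, and so on) is all that is needed to move from a single social relationship to several.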

Journal ArticleDOI
TL;DR: In this article, the authors used three different machine learning methods, namely, logistic regression (LR), artificial neural network (ANN), and a stacking ensemble of the two, to classify seven different date fruit types, i.e., Barhee, Deglet Nour, Sukkary, Rotab Mozafati, Ruthana, Safawi, and Sagai.
Abstract: A great number of fruits are grown around the world, each of which has various types. The factors that determine the type of fruit are external appearance features such as color, length, diameter, and shape; the external appearance of the fruits is thus a major determinant of the fruit type. Determining the variety of fruits from their external appearance may require expertise, which is time-consuming and takes great effort. The aim of this study is to classify the types of date fruit, namely, Barhee, Deglet Nour, Sukkary, Rotab Mozafati, Ruthana, Safawi, and Sagai, by using three different machine learning methods. For this purpose, 898 images of seven different date fruit types were obtained via a computer vision system (CVS). Through image processing techniques, a total of 34 features, including morphological features, shape, and color, were extracted from these images. First, models were developed by using the logistic regression (LR) and artificial neural network (ANN) methods, which are among the machine learning methods. Performance results achieved with these methods are 91.0% and 92.2%, respectively. Then, with the stacking model created by combining these models, the performance result was increased to 92.8%. It has been concluded that machine learning methods can be applied successfully for the classification of date fruit types.
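A stacking model of the kind described combines the base models' predicted probabilities as input features for a meta-learner. The sketch below trains a tiny logistic-regression meta-learner by gradient descent on the outputs of two hypothetical base models for a binary task; the probability values are made up for illustration, and the paper's seven-class setting would use one probability column per class.

```python
import math

def train_meta(P1, P2, y, lr=0.5, epochs=500):
    """Logistic-regression meta-learner over two base models'
    positive-class probabilities: a minimal stacking layer."""
    w = [0.0, 0.0, 0.0]  # weights for p1 and p2, plus a bias
    for _ in range(epochs):
        for p1, p2, t in zip(P1, P2, y):
            z = w[0] * p1 + w[1] * p2 + w[2]
            pred = 1.0 / (1.0 + math.exp(-z))
            g = pred - t  # gradient of the log loss w.r.t. z
            w[0] -= lr * g * p1
            w[1] -= lr * g * p2
            w[2] -= lr * g
    return w

def meta_predict(w, p1, p2):
    return 1 if w[0] * p1 + w[1] * p2 + w[2] >= 0 else 0

# Hypothetical base-model outputs: P1 is reliable, P2 nearly uninformative.
y  = [1, 0, 1, 0, 1, 0]
P1 = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3]
P2 = [0.6, 0.6, 0.4, 0.5, 0.5, 0.4]
w = train_meta(P1, P2, y)
```

The meta-learner learns to lean on whichever base model is more informative, which is how a stacked ensemble can edge past its best individual member, as in the 92.2% to 92.8% gain reported above.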

Journal ArticleDOI
TL;DR: The experimental results have proved that the established AdaBoost-ISSA-S4VM classification model has good performance on labeled and unlabeled lung CT images.
Abstract: The Adaptive Boosting (AdaBoost) classifier is a widely used ensemble learning framework, and it can achieve good classification results on general datasets. However, it is challenging to apply the AdaBoost classifier directly to pulmonary nodule detection in labeled and unlabeled lung CT images, since the ensemble learning method still has some drawbacks. Therefore, to solve the labeled and unlabeled data classification problem, a semi-supervised AdaBoost classifier using an improved sparrow search algorithm (AdaBoost-ISSA-S4VM) was established. Firstly, the AdaBoost classifier is used to construct a strong semi-supervised classifier from several weak S4VM classifiers (AdaBoost-S4VM). Next, in order to solve the accuracy problem of AdaBoost-S4VM, the sparrow search algorithm (SSA) is introduced into the AdaBoost classifier and S4VM. Then, a sine cosine algorithm and a new labor cooperation structure are adopted to improve the global search ability and the convergence performance of the sparrow search algorithm, respectively. Furthermore, based on the improved sparrow search algorithm and the adaptive boosting classifier, the AdaBoost-S4VM classifier is improved. Finally, the improved AdaBoost-ISSA-S4VM classification model was developed for actual pulmonary nodule detection based on the publicly available LIDC-IDRI database. The experimental results prove that the established AdaBoost-ISSA-S4VM classification model performs well on labeled and unlabeled lung CT images.
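The boosting backbone of the proposed classifier follows the standard AdaBoost loop: fit a weak learner on weighted data, compute its vote weight from its weighted error, then upweight misclassified samples. The sketch below uses one-dimensional decision stumps as weak learners in place of the paper's S4VM components, purely for illustration.

```python
import math

def stump_predict(x, thresh, polarity):
    """Weak learner: threshold on a single feature."""
    return polarity if x >= thresh else -polarity

def adaboost(xs, ys, rounds=3):
    """Minimal AdaBoost over 1-D decision stumps (standing in for the
    S4VM weak learners of the paper)."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Pick the stump with the lowest weighted error.
        best = None
        for thresh in xs:
            for polarity in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(x, thresh, polarity) != y)
                if best is None or err < best[0]:
                    best = (err, thresh, polarity)
        err, thresh, polarity = best
        err = max(err, 1e-10)  # avoid division by zero for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)  # the stump's vote weight
        ensemble.append((alpha, thresh, polarity))
        # Reweight: boost the weight of misclassified samples.
        w = [wi * math.exp(-alpha * y * stump_predict(x, thresh, polarity))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def ensemble_predict(ensemble, x):
    score = sum(a * stump_predict(x, t, p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

# Toy 1-D data separable at x = 4.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [-1, -1, -1, 1, 1, 1]
ensemble = adaboost(xs, ys)
```

In the paper's setting, the stump search would be replaced by training an S4VM on the weighted (and partly unlabeled) data, with the improved sparrow search algorithm tuning that weak learner's parameters.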

Journal ArticleDOI
TL;DR: In this article, the authors examined the factors influencing the intention of small and medium enterprises (SMEs) in India to adopt blockchain technology in their supply chains and proposed an integrated technology adoption framework consisting of the Technology Acceptance Model (TAM), Diffusion of Innovation (DOI), and Technology-Organization Environment (TOE).
Abstract: In recent times, organizations are increasingly adopting blockchain technology in their supply chains due to various advantages such as cost optimization, effective and verified record-keeping, transparency, and route tracking. This paper aims to examine the factors influencing the intention of small and medium enterprises (SMEs) in India to adopt blockchain technology in their supply chains. A questionnaire-based survey was used to collect data from 216 SMEs in the northern states of India. The study has considered an integrated technology adoption framework consisting of the Technology Acceptance Model (TAM), Diffusion of Innovation (DOI), and Technology-Organization-Environment (TOE). Using this integrated TAM-TOE-DOI framework, the study has proposed eleven hypotheses related to factors of blockchain technology adoption. Confirmatory factor analysis (CFA) and structural equation modeling (SEM) have been used to test the hypotheses. The results show that relative advantage, technology compatibility, technology readiness, top management support, perceived usefulness, and vendor support have a positive influence on the intention of Indian SMEs to adopt blockchain technology in their supply chains. The complexity of technology and cost concerns act as inhibitors to the technology adoption by SMEs. Furthermore, the three factors, namely, security concerns, perceived ease of use, and regulatory support, do not influence the intention to adopt the technology. The study contributes to filling a significant gap in the academic literature since only a few studies have endeavored to ascertain the technology adoption factors by supply chains of SMEs in a developing country like India. The study has also proposed a novel integrated technology adoption framework that can be employed by future studies. The findings are expected to enable SMEs to understand important factors to be considered for adopting blockchain technology in their supply chains. 
Furthermore, the study may benefit the blockchain technology developers and suppliers as they can offer customized solutions based on the findings.