
Showing papers in "Journal of Intelligent and Fuzzy Systems in 2023"


Journal ArticleDOI
TL;DR: In this article, the authors proposed a secure lightweight bioacoustics-based user authentication scheme using a fuzzy embedder for Internet of Medical Things applications. The proposed scheme adopts the Chinese remainder technique to generate a group secret key that protects the network from attacks by former sensor nodes.
Abstract: The Internet of Medical Things (IoMT) is a network of medical devices, hardware infrastructure, and software that allows healthcare information technology to be communicated over the web. IoMT sensors communicate medical data to a server for quick diagnosis. As this data includes private and confidential user information, security is the primary objective. Existing IoT authentication schemes use either two factors (username, password) or multiple factors (username, password, biometric) to authenticate a user. Typically, a structural biometric trait such as the face, iris, palm print, or fingerprint is used as the additional factor. Because these biometrics can be fabricated, structural-biometric authentication schemes fail to guarantee privacy, security, authenticity, and integrity. Biodynamic bioacoustic signals have gained attention in the era of human-computer interaction for authenticating users, as they are unique to each individual. We therefore use frequency-domain bioacoustics as the biometric input. Thus, this work proposes a Secure Lightweight Bioacoustics-based User Authentication Scheme using a fuzzy embedder for Internet of Medical Things applications. Because IoT sensors tend to join and leave the network dynamically, the proposed scheme adopts the Chinese remainder theorem to generate a group secret key that protects the network from attacks by former sensor nodes. The proposed scheme's security is validated using the formal verification tool AVISPA (Automated Validation of Internet Security Protocols and Applications). The system's performance is measured by comparing the proposed scheme to existing systems in terms of security features and computation and communication costs, demonstrating that the proposed system outperforms existing systems.
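The Chinese-remainder step the abstract describes can be illustrated with a small sketch. All numbers and the key-hiding construction below are toy assumptions, not the paper's protocol: the gateway broadcasts a single value X from which each current member, holding a private pairwise-coprime modulus and a shared secret, recovers the group key; a revoked node's modulus is simply left out of the next broadcast.

```python
from math import prod

def crt(residues, moduli):
    # Solve x ≡ r_i (mod m_i) for pairwise-coprime moduli m_i.
    M = prod(moduli)
    x = sum(r * (M // m) * pow(M // m, -1, m)  # pow(a, -1, m): modular inverse
            for r, m in zip(residues, moduli))
    return x % M

# Hypothetical setup: sensor i holds a private modulus m_i and a secret k_i.
# The gateway broadcasts X with X mod m_i == K XOR k_i, so only members recover K.
K = 10                       # group key (toy value)
moduli = [101, 103, 107]     # pairwise coprime, one per current member
secrets = [5, 9, 12]
X = crt([K ^ k for k in secrets], moduli)

# Each current member recovers the group key from the single broadcast value.
recovered = [(X % m) ^ k for m, k in zip(moduli, secrets)]
```

Rekeying after a node leaves amounts to recomputing X over the remaining moduli, which is why the technique suits networks with dynamic membership.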

8 citations


Journal ArticleDOI
TL;DR: In this article, a Pythagorean fuzzy sets-based VIKOR and TOPSIS multi-criteria decision-making model (PFSVT-MCDM) is proposed for counteracting the impacts of resource depletion attacks and improving Quality of Service (QoS) in the network.
Abstract: In Wireless Sensor Networks (WSNs), resource depletion attacks that compromise the routing-protocol layer exert a major influence over the network. These attacks drastically drain the battery power of the sensor nodes and cause persistent network disruption. Several protocols have been established for handling the impact of Denial of Service (DoS) attacks, but most handle it imperfectly; in particular, thwarting resource depletion attacks, a specific class of DoS attack, has proved a herculean task. At this juncture, multi-criteria decision making (MCDM) is identified as the ideal candidate for evaluating the impact each energy-depleted, compromised sensor node introduces into the network's cooperation process. In this paper, a Pythagorean Fuzzy Sets-based VIKOR and TOPSIS multi-criteria decision-making model (PFSVT-MCDM) is proposed for counteracting the impacts of resource depletion attacks and improving Quality of Service (QoS) in the network. PFSVT-MCDM uses the merits of Pythagorean fuzzy set information to handle the uncertainty and vagueness of information exchanged in the network during data routing. It utilizes VIKOR and TOPSIS to explore the trust of each sensor node across the dimensions that aid in detecting resource depletion attacks. The experimental results of PFSVT-MCDM confirmed better throughput by 21.29%, enhanced packet delivery fraction by 22.38%, minimized energy consumption by 18.92%, and reduced end-to-end delay by 21.84%, compared to the resource depletion attack thwarting strategies used for evaluation.
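The TOPSIS half of the trust-ranking step can be sketched in crisp (non-fuzzy) form. The node scores, criteria, and weights below are invented for illustration; the paper's Pythagorean-fuzzy variant replaces the crisp scores with membership/non-membership pairs.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution (crisp TOPSIS)."""
    m, n = len(matrix), len(weights)
    # Vector-normalize each criterion column, then apply the criterion weight.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal best/worst per criterion (direction depends on benefit vs. cost).
    best = [max(V[i][j] for i in range(m)) if benefit[j]
            else min(V[i][j] for i in range(m)) for j in range(n)]
    worst = [min(V[i][j] for i in range(m)) if benefit[j]
             else max(V[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_best, d_worst = math.dist(V[i], best), math.dist(V[i], worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical trust evaluation: 3 nodes scored on (residual energy,
# forwarding rate, delay); delay is a cost criterion.
nodes = [[0.9, 0.8, 0.2], [0.5, 0.6, 0.6], [0.7, 0.9, 0.3]]
scores = topsis(nodes, weights=[0.4, 0.4, 0.2], benefit=[True, True, False])
```

A node whose closeness score falls below a threshold would be flagged as a resource-depletion suspect; here the energy-drained second node ranks last.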

4 citations


Journal ArticleDOI
TL;DR: In this paper, the Multi Response Performance Index (MRPI) was used to measure the performance of the metal removal rate (MRR) and electrode wear rate (EWR) of a Si3N4-TiN composite.
Abstract: Electrical discharge machining (EDM) is a process for shaping hard materials and forming deep contoured holes by thermal erosion in any electrically conductive material. The goal of this work is to study the working parameters of EDM when machining silicon nitride-titanium nitride with a copper electrode: the effects of spark-on time (Son), current (Ip), spark-off time (Soff), spark gap, and dielectric pressure on the metal removal rate (MRR) and electrode wear rate (EWR) were analyzed. Subsequently, using Taguchi analysis with mean-effect, interaction, and contour plots, the performance characteristics are examined in relation to the process factors. Fuzzy logic and regression analysis are utilized to combine the various responses into a single characteristic index known as the Multi Response Performance Index (MRPI); the experimental and predicted values were in good agreement. For the multiple performance aspects, such as material removal rate and electrode wear rate, the optimal process parameter combination was established using fuzzy logic analysis. The key process factors, which included spark-off time and current, were found using an ANOVA based on a fuzzy algorithm. The topography of the machined surface, a cross-sectional view of the conductive Si3N4-TiN composite, and the surface characteristics of the machined electrode were examined by SEM analysis, identifying the best and worst hole surfaces. Sensitivity analysis was utilized to determine how much the input values, such as Ip, Son, and Soff, need to change in order to reach the desired optimal result. In the complexity analysis, each constraint of the machine, composite, and process is addressed.
Future research might examine various electrodes to assess geometrical tolerances, including angularity, parallelism, total run-out, flatness, straightness, concentricity, and line profile, employing other optimization methodologies to achieve the best outcome. The findings of the confirmatory experiment indicate that the spark erosion technique can be successfully strengthened.
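The Taguchi analysis described above rests on signal-to-noise ratios computed per parameter setting. A minimal sketch of the two standard response types used here, larger-the-better for MRR and smaller-the-better for EWR (the replicate measurements are invented toy values):

```python
import math

def sn_larger_better(values):
    # Taguchi S/N ratio for "larger is better" responses (e.g. MRR).
    return -10 * math.log10(sum(1 / v ** 2 for v in values) / len(values))

def sn_smaller_better(values):
    # Taguchi S/N ratio for "smaller is better" responses (e.g. EWR).
    return -10 * math.log10(sum(v ** 2 for v in values) / len(values))

# Hypothetical replicate measurements at one parameter setting.
mrr = [12.1, 11.8, 12.4]   # mm^3/min
ewr = [0.21, 0.19, 0.22]   # mm^3/min
sn_mrr = sn_larger_better(mrr)
sn_ewr = sn_smaller_better(ewr)
```

An MRPI-style composite would then combine the normalized S/N ratios (here via fuzzy rules) into one index, and the setting with the highest index is taken as optimal.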

3 citations


Journal ArticleDOI
TL;DR: In this paper, the CoM-polynomials for the molecular graphs of linear and multiple Anthracene are computed, from which eleven degree-based topological coindices are derived.
Abstract: Topological indices and coindices are numerical invariants that relate to quantitative structure property/activity connections. They were introduced to capture data about chemical graphs with respect to adjacent and non-adjacent pairs of vertex degrees, respectively. These indices equip researchers with a great deal of information about the properties and structure of a chemical compound. In this article, CoM-polynomials for the molecular graphs of linear and multiple Anthracene are computed, from which eleven degree-based topological coindices are derived. Finally, numerical and graphical comparisons of the coindices for both forms of anthracene are drawn, and conclusions are summarized based on the results obtained.
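The index/coindex distinction can be made concrete with a toy computation. The sketch below uses the first Zagreb index and coindex (not the eleven CoM-derived coindices of the paper) on a benzene-like 6-cycle:

```python
from itertools import combinations

def zagreb_index_and_coindex(vertices, edges):
    # First Zagreb index: sum of d(v)^2 over vertices (adjacent-pair information).
    # First Zagreb coindex: sum of d(u)+d(v) over NON-adjacent vertex pairs.
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    edge_set = {frozenset(e) for e in edges}
    m1 = sum(d * d for d in deg.values())
    m1_co = sum(deg[u] + deg[v]
                for u, v in combinations(vertices, 2)
                if frozenset((u, v)) not in edge_set)
    return m1, m1_co

# Benzene ring C6 as a toy molecular graph (every vertex has degree 2).
verts = range(6)
edges = [(i, (i + 1) % 6) for i in range(6)]
m1, m1_co = zagreb_index_and_coindex(verts, edges)
```

For a graph with n vertices and m edges the two are linked by the known identity M̄1 = 2m(n-1) - M1, which the toy values satisfy (24 and 36 for C6).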

2 citations


Journal ArticleDOI
TL;DR: Wang et al. proposed a fault detection and diagnosis method based on multi-block probabilistic kernel partial least squares (MBPKPLS) for large-scale industrial data and fault-tracing problems.
Abstract: In view of the large scale and high dimensionality of industrial data and the fault-tracing problem, a fault detection and diagnosis method based on multi-block probabilistic kernel partial least squares (MBPKPLS) is proposed. First, the process variables are divided into several blocks in a decentralized manner to address the large-scale, high-dimensional problem. The probabilistic characteristics of, and relationship between, the process variables and the quality variables of each block are analyzed using latent variables, and a PKPLS model of each block is established separately. Second, the MBPKPLS model is applied to process monitoring: statistics for each block are established in a high-dimensional space, and the monitoring indicators in each block are used to detect faults. Third, building on fault detection, the multi-block concept is further used to locate the cause of a fault, thereby solving the fault-tracing problem. Finally, a numerical example and the penicillin fermentation process (PFP) are used to test the effectiveness of the MBPKPLS method. The results demonstrate that the proposed method is suitable for processing large-scale, high-dimensional data with strong nonlinear characteristics, and that the MBPKPLS process monitoring method is effective for improving the performance of fault detection and diagnosis.

2 citations


Journal ArticleDOI
TL;DR: In this article, a single ML model based on the Gradient Boosting algorithm was proposed to predict the unconfined compressive strength (Qu) of stabilized soil in order to evaluate the effectiveness of soft-soil improvement.
Abstract: The unconfined compressive strength (Qu) is one of the most important design criteria of stabilized soil for evaluating the effectiveness of soft-soil improvement. The unconfined compressive strength of stabilized soil is strongly affected by numerous factors such as the soil properties and the binder content. A Machine Learning (ML) approach can take these factors into account to predict the unconfined compressive strength (Qu) with high performance and reliability. The aim of this paper is to select a single ML model to design Qu of stabilized soil containing chemical stabilizer agents such as lime, cement, and bitumen. To build the single ML model, a database is created from a literature investigation. The database contains 200 data samples, 12 input variables (liquid limit, plastic limit, plasticity index, linear shrinkage, clay content, sand content, gravel content, optimum water content, density of stabilized soil, lime content, cement content, bitumen content), and the output variable Qu. The performance and reliability of the ML model are evaluated by the popular validation technique Monte Carlo simulation, aided by three criterion metrics: the coefficient of determination R2, Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The ML model based on the Gradient Boosting algorithm is selected as the highest-performance, highest-reliability ML model for designing Qu of stabilized soil. The feature effects on the unconfined compressive strength Qu of stabilized soil are explained by permutation importance, two-dimensional Partial Dependence Plots (PDP 2D), and SHapley Additive exPlanations (SHAP) local values. The single ML model proposed in this investigation is useful for professional engineers, along with the maximal dry density-linear shrinkage mapping created by PDP 2D.
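Gradient boosting itself can be sketched from scratch: each round fits a weak learner to the current residuals and adds a damped copy of it to the ensemble. The binder-content/strength numbers below are invented toy data, not the paper's database, and a production model would use a library implementation rather than this sketch.

```python
def fit_stump(x, y):
    # Best single split minimizing squared error (a depth-1 regression tree).
    best = None
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((yi - (lm if xi <= t else rm)) ** 2 for xi, yi in zip(x, y))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda v: lm if v <= t else rm

def gradient_boost(x, y, n_rounds=200, lr=0.1):
    # Each round fits a stump to the residuals (negative gradient of squared loss).
    base = sum(y) / len(y)
    pred = [base] * len(x)
    stumps = []
    for _ in range(n_rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, resid)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda v: base + lr * sum(s(v) for s in stumps)

# Toy stand-in: cement content (%) vs. strength Qu (kPa); NOT the paper's data.
cement = [2, 4, 6, 8, 10, 12]
qu = [120, 180, 260, 330, 420, 480]
model = gradient_boost(cement, qu)
```

The learning rate trades rounds for robustness, which is the same shrinkage knob a library boosting model exposes.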

2 citations


Journal ArticleDOI
TL;DR: A comprehensive history of entity relation extraction can be found in this paper, which presents relation extraction methods based on Machine Learning, methods based on Deep Learning, and methods for open domains.
Abstract: In today's big data era, there are a large number of unstructured information resources on the web. Natural language processing researchers have been working hard to figure out how to extract useful information from them. Entity Relation Extraction is a crucial step in Information Extraction and provides technical support for Knowledge Graphs, Intelligent Q&A systems, and Intelligent Retrieval. In this paper, we present a comprehensive history of entity relation extraction and introduce the relation extraction methods based on Machine Learning, the relation extraction methods based on Deep Learning, and the relation extraction methods for open domains. We then summarize the characteristics and representative results of each type of method and introduce the common datasets and evaluation systems for entity relation extraction. Finally, we summarize current entity relation extraction methods and look forward to future technologies.

2 citations



Journal ArticleDOI
TL;DR: In this article, a telerobotic-based stroke rehabilitation optimization and recommendation technique and framework is proposed and evaluated. The proposed framework provides decision support by recommending activities and task flow; these recommendations are independent and highly feasible for the evaluation scenario.
Abstract: Technological development in biomedical procedures has yielded a better understanding of how to evaluate and handle critical scenarios and diseases. A sustainable model design is required after medical procedures to maintain the consistency of treatment. In this article, a telerobotic-based stroke rehabilitation optimization and recommendation technique and framework is proposed and evaluated. Selecting optimal features for training deep neural networks can reduce training time and also improve the performance of the model. To achieve this, we used the Whale Optimization Algorithm (WOA), due to its higher convergence accuracy, better stability, stronger global search ability, and faster convergence speed, to streamline the dependency matrix of each attribute associated with post-stroke rehabilitation. Deep neural networks handle the selection of training and validation datasets. The proposed framework provides decision support by recommending activities and task flow; these recommendations are independent and highly feasible for the evaluation scenario. The proposed model achieved a precision of 99.6%, recall of 99.5%, F1-score of 99.7%, and accuracy of 99.9%, outperforming the other considered optimization algorithms such as the antlion and gravitational search algorithms. The proposed technique provides an efficient recommendation model compared to trivial SVM-based models and techniques.

2 citations


Journal ArticleDOI
TL;DR: In this paper, a fine-tuned convolutional neural network (CNN) model using an improved EfficientNetB5 was proposed for rapid detection of COVID-19 from thorax chest X-ray (CXR) images.
Abstract: COVID-19 is an epidemic causing an enormous death toll. The mutational changes of this RNA virus create diagnostic complexities. RT-PCR and rapid tests are used for diagnosis, but unfortunately these methods are ineffective in diagnosing all strains of COVID-19. There is an utmost need to develop a diagnostic procedure for timely identification. In the proposed work, we present a lightweight deep learning algorithm for a rapid COVID-19 detection system based on thorax chest X-ray (CXR) images. This research aims to develop a fine-tuned convolutional neural network (CNN) model using an improved EfficientNetB5. The design is based on compound scaling and trained with the best possible feature extraction algorithm. The lightweight design of the proposed work can easily be deployed on limited computational resources and will be helpful for the rapid triaging of victims. Two-fold cross-validation further improves the performance. The proposed algorithm is trained, validated, and tested, with internal and external validation, on a self-collected and compiled real-time CXR dataset. The training dataset is relatively extensive compared to existing ones. The performance of the proposed technique is measured, validated, and compared with other state-of-the-art pre-trained models. The proposed methodology gives remarkable accuracy (99.5%) and recall (99.5%) for bi-classification. External validation on two different test datasets also gives exceptional predictions. The visual depiction of predictions is represented by Grad-CAM maps, presenting the extracted features of the predicted results.

2 citations


Journal ArticleDOI
TL;DR: In this article, the authors extended the geometric Heronian mean (GHM) operator to fuzzy number intuitionistic fuzzy numbers (FNIFNs) to propose the fuzzy number intuitionistic fuzzy GHM (FNIFGHM) operator. Multiple-attribute decision-making (MADM) methods are then built on the FNIFGHM operator, and a numerical example of sustainable education value evaluation based on the integration of regional culture into international students' ideological education is used to prove the built methods' credibility.
Abstract: For thousands of years, the Chinese people have accumulated and inherited profound cultural traditions. The uniqueness of this cultural tradition lies in its amazing creative wisdom and power. The ideological and political education of integrating Chinese regional culture into international students' education refers to the educative influence of excellent regional culture that runs through the entire international education management system, curriculum system, and extracurricular practice system to achieve "all-round, full-process, full-staff" education goals. The sustainable education value evaluation based on the integration of regional culture into international students' ideological education is a classical multiple-attribute decision-making (MADM) issue. In this paper, we extend the geometric Heronian mean (GHM) operator to fuzzy number intuitionistic fuzzy numbers (FNIFNs) to propose the fuzzy number intuitionistic fuzzy GHM (FNIFGHM) operator. Then, multiple-attribute decision-making (MADM) methods are built on the FNIFGHM operator. Finally, a numerical example for sustainable education value evaluation based on the integration of regional culture into international students' ideological education, together with some comparative analysis, is used to prove the built methods' credibility and reliability.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a Radial Basis Function Neural Network (RBFNN) to model the hardness features of high-performance concrete (HPC) mixtures.
Abstract: The compressive strength and slump of concrete are highly nonlinear functions of its components. The importance of predicting these properties for developing construction technologies is widely recognized, both to decrease the cost of expensive experiments and to enhance measurement accuracy. This study aims to develop a Radial Basis Function Neural Network (RBFNN) to model the hardness features of High-Performance Concrete (HPC) mixtures, with the prediction process optimized by two metaheuristic approaches: Henry gas solubility optimization (HGSO) and the Multiverse Optimizer (MVO). The training phase of the models RBHG and RBMV was performed on a dataset of 181 HPC mixtures containing fly ash and superplasticizer. Regarding the results of the hybrid models, MVO produced a higher correlation between the predicted and observed compressive strength and slump values than HGSO in the R2 index. The RMSE of RBMV (3.7 mm) was 43.2 percent lower than that of RBHG (5.3 mm) in appraising the slump of HPC samples, while for compressive strength the RMSE was 3.66 MPa and 5 MPa for RBMV and RBHG, respectively. Moreover, in appraising slump flow rates, the R2 correlation rate in the training phase was computed at 96.86% for RBHG and 98.25% for RBMV, with a 33.30% difference. Generally, both hybrid models succeeded in the assigned tasks of modeling the hardness properties of HPC samples.

Journal ArticleDOI
TL;DR: In this paper, an extended probabilistic simplified neutrosophic number GRA (PSNN-GRA) method is established for the talent training quality evaluation of segmented education.
Abstract: The "3 + 2" segmented training between higher vocational colleges and applied undergraduate courses has opened up a rising channel of vocational education from junior college level to undergraduate level and promoted the organic connection between higher vocational colleges and universities of applied sciences. It is one of the important ways to establish a modern vocational education system. Exploring the monitoring mechanism of talent training quality is an important measure to ensure the achievement of the segmented training goal, and it is a necessary condition for successfully training high-quality, skilled applied talents. The talent training quality evaluation of segmented education is viewed as a multiple-attribute decision-making (MADM) issue. In this paper, an extended probabilistic simplified neutrosophic number GRA (PSNN-GRA) method is established for the talent training quality evaluation of segmented education. The PSNN-GRA method, integrated with the CRITIC method in the probabilistic simplified neutrosophic sets (PSNSs) setting, is applied to rank the optional alternatives, and a numerical example for talent training quality evaluation of segmented education is used to prove the newly proposed method's practicability, along with a comparison with other methods. The results show that the approach is uncomplicated, valid, and simple to compute.

Journal ArticleDOI
TL;DR: In this article, a new Penguin Search Optimization Algorithm with Multi-agent Reinforcement Learning for Disease Prediction and Recommendation (PSOAMRL-DPR) model is proposed to identify the presence of disease and recommend treatment to the patient.
Abstract: Multi-agent reinforcement learning (MARL) is a widely researched approach for decentralized control in complex large-scale autonomous systems. Its typical features make an RL system an appropriate candidate for developing powerful solutions in a variety of healthcare fields, where diagnostic and treatment decisions are commonly prolonged, sequential processes. This study develops a new Penguin Search Optimization Algorithm with Multi-agent Reinforcement Learning for Disease Prediction and Recommendation (PSOAMRL-DPR) model. This research aimed to use the unique PSOAMRL-DPR algorithm to forecast diseases based on data collected from networks and the cloud by mobile agents. The major intention of the proposed PSOAMRL-DPR algorithm is to identify the presence of disease and recommend treatment to the patient. The model manages an agent container with different mobile agents and fetches data from different locations in the network as well as the cloud. For disease detection and prediction, the PSOAMRL-DPR technique exploits the deep Q-network (DQN) technique, and the PSOA technique is used to tune the hyperparameters of the DQN. The experimental result analysis of the PSOAMRL-DPR technique is validated on a heart disease dataset. The simulation values demonstrate that the PSOAMRL-DPR technique outperforms the other existing methods.

Journal ArticleDOI
TL;DR: Wang et al. proposed a dynamic multi-graph convolution recurrent neural network (DMGCRNN), which models the dynamic correlations of road networks over time based on various information about the road network.
Abstract: Traffic speed prediction is a crucial task of the intelligent traffic system. However, due to the highly nonlinear temporal patterns and non-static spatial dependence of traffic data, timely and accurate traffic forecasting remains a challenge. Existing methods usually use a static adjacency matrix to model spatial dependence, ignoring the spatially dynamic characteristics of the road network; meanwhile, the dynamic influence of different time steps on the prediction target is also ignored. Thus, we propose a dynamic multi-graph convolution recurrent neural network (DMGCRNN), which models the dynamic correlations of road networks over time based on various information about the road network. Dynamic correlation is an essential factor for accurate traffic prediction, because it reflects changes in traffic conditions in real time. In this model, we design a dynamic graph construction method that utilizes the local temporal and spatial characteristics of each road segment to construct dynamic graphs. Then, a dynamic multi-graph convolution fusion module is proposed, which considers the dynamic characteristics of spatial correlations and global information to model the dynamic trend of spatial dependence. Moreover, by combining global context information, temporal attention is applied to capture the dynamic temporal dependence among different time steps. The experimental results on two real-world traffic datasets demonstrate that our method outperforms the state-of-the-art baselines.

Journal ArticleDOI
TL;DR: In this article, the second-order fuzzy homogeneous differential equation is transformed into a more special, simplest form under the condition that the solution of the boundary value problem of the equation exists and is unique.
Abstract: In this paper, the second-order fuzzy homogeneous differential equation is transformed into a more special, simplest form under the condition that the solution of the boundary value problem of the equation exists and is unique. Then the eigenvalues of the boundary value problem of the second-order simplest fuzzy homogeneous differential equation are studied, and theorems guaranteeing that the eigenvalues exist are proposed and illustrated with examples. Finally, it is proved that when the second-order fuzzy coefficient p̃(t) in the second-order fuzzy homogeneous differential equation is a fuzzy number, the solution set of its corresponding second-order granular homogeneous differential equation becomes larger; that is, the solution set of fuzzy differential equations with real-number coefficients is a subset of the solution set with fuzzy-number coefficients.

Journal ArticleDOI
TL;DR: In this article, a new entropy measure for Pythagorean fuzzy sets is proposed via the Sugeno integral, which uses fuzzy measures to model the interaction between criteria, and a similarity measure based on entropies is presented.
Abstract: As an extension of the concepts of fuzzy set and intuitionistic fuzzy set, the concept of Pythagorean fuzzy set better models some real-life problems. Distance, entropy, and similarity measures between Pythagorean fuzzy sets play important roles in decision making. In this paper, we give a new entropy measure for Pythagorean fuzzy sets via the Sugeno integral, which uses fuzzy measures to model the interaction between criteria. Moreover, we provide a theoretical approach to constructing a similarity measure based on entropies. Combining this theoretical approach with the proposed entropy, we define a distance measure that considers the interaction between criteria. Using the proposed distance measure, we provide an extended Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) for multi-criteria decision making and apply the proposed technique to a real-life problem from the literature. Finally, a comparative analysis is conducted to compare the results of this paper with those of previous studies.
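The Sugeno integral at the core of the proposed entropy can be computed discretely: rank the criterion scores, and take the maximum over ranks of min(score, measure of the top-ranked set). The two-criterion fuzzy measure below is a hypothetical non-additive measure invented for illustration, not one from the paper.

```python
def sugeno_integral(values, measure):
    """Discrete Sugeno integral of `values` w.r.t. fuzzy measure `measure`.
    `values`: {criterion: score in [0,1]}; `measure`: dict mapping frozensets
    of criteria to their measure (monotone, with g(all criteria) = 1)."""
    ranked = sorted(values, key=values.get, reverse=True)
    result, top = 0.0, set()
    for c in ranked:                      # walk down the ranked scores
        top.add(c)
        result = max(result, min(values[c], measure[frozenset(top)]))
    return result

# Toy non-additive measure over criteria {a, b}: the pair is worth 1.0 even
# though g({a}) + g({b}) < 1, encoding a positive interaction (assumed values).
g = {frozenset({"a"}): 0.4, frozenset({"b"}): 0.5, frozenset({"a", "b"}): 1.0}
scores = {"a": 0.8, "b": 0.3}
agg = sugeno_integral(scores, g)
```

Because the measure is non-additive, the result (0.4 here) differs from the weighted average 0.4·0.8 + 0.5·0.3 that an additive model would produce; this is exactly the criterion interaction the paper's entropy is designed to capture.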

Journal ArticleDOI
TL;DR: Zhang et al. proposed the uncertain relative-error least squares estimation model for linear regression, which can not only solve the fitted regression equation for imprecise observation data but also fully consider the variation of the error with the given data, making the regression equation more reasonable and reliable.
Abstract: Uncertain least squares estimation is one of the important methods for dealing with imprecise data; it can fully consider the influence of the given data on the regression equation and minimize the absolute error. In fact, some scientific studies or observational data are often evaluated in terms of relative error, which to some extent allows the error of the forecast value to vary with the size of the observed value. Based on least squares estimation and uncertainty theory, this paper proposes the uncertain relative-error least squares estimation model for linear regression. Uncertain relative-error least squares estimation minimizes the relative error, which can not only solve the fitted regression equation for imprecise observation data but also fully consider the variation of the error with the given data, making the regression equation more reasonable and reliable. Two numerical examples verify the feasibility of uncertain relative-error least squares estimation and compare it with the existing method. The data analysis shows that uncertain relative-error least squares estimation has a good fitting effect.
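For a crisp linear model y ≈ a + bx, minimizing the summed squared relative error Σ((a + bxᵢ - yᵢ)/yᵢ)² reduces to ordinary weighted least squares with weights 1/yᵢ². A sketch with invented data (the paper's uncertain version replaces the crisp yᵢ with uncertain variables):

```python
def relative_error_ls(x, y):
    # Minimize sum(((a + b*x_i - y_i) / y_i)^2): weighted LS with w_i = 1/y_i^2,
    # solved via the 2x2 normal equations.
    w = [1 / yi ** 2 for yi in y]
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = S * Sxx - Sx * Sx
    a = (Sy * Sxx - Sx * Sxy) / det
    b = (S * Sxy - Sx * Sy) / det
    return a, b

# Invented observations, roughly y = 2x with noise.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.3]
a, b = relative_error_ls(x, y)
```

The 1/yᵢ² weighting is what lets the tolerated error grow with the size of the observed value, which is the behavior the abstract motivates.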

Journal ArticleDOI
TL;DR: In this paper, a multi-modal classifier is introduced that makes use of the output from dual deep neural networks: a GRU for text analysis and a Faster R-CNN for image analysis.
Abstract: The internet and social networks produce an increasing amount of data, and a recommendation system is a serious necessity because exploring a huge collection is time-consuming and difficult. In this study, a multi-modal classifier is introduced that makes use of the output from dual deep neural networks: a GRU for text analysis and a Faster R-CNN for image analysis. These two networks reduce overall complexity with minimal computational time while retaining accuracy. More precisely, the GRU network is utilized to process movie reviews and the Faster R-CNN is used to recognize each frame of the movie trailers. The Gated Recurrent Unit (GRU) is a well-known variety of RNN that computes sequential data across recurrent structures. Faster R-CNN is an enhanced version of Fast R-CNN that combines rectangular region proposals with features extracted by ResNet-101. Initially, the movie trailer is manually split into frames, which are pre-processed using a fuzzy elliptical filter for image analysis, while the movie reviews are tokenized for text analysis. The pre-processed text is taken as input for the GRU to classify movies as offensive or non-offensive, and the pre-processed images are taken as input for the Faster R-CNN to classify movies as violent or non-violent based on the features extracted from the trailer. Afterwards, the four classified outputs are given as input to a fuzzy decision-making unit that recommends the best movies, based on the Mamdani fuzzy inference system with Gaussian membership functions. The performance of the dual deep neural networks was evaluated using parameters including specificity, precision, recall, accuracy, and F1 score. The proposed GRU yields an accuracy of 97.73% for reviews and the Faster R-CNN yields an accuracy of 98.42% for movie trailers.
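The final fuzzy decision step can be sketched as a tiny two-rule Mamdani system with Gaussian membership functions. The rule structure, membership centers, and widths below are invented for illustration; the paper's unit consumes the four classifier outputs rather than two raw scores.

```python
import math

def gauss(x, mean, sigma):
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def recommend(offensive, violent):
    """Toy Mamdani sketch: two inputs in [0,1] -> recommendation score in [0,1].
    Rule 1: IF content is clean THEN recommendation is high.
    Rule 2: IF content is offensive OR violent THEN recommendation is low."""
    clean = min(gauss(offensive, 0.0, 0.3), gauss(violent, 0.0, 0.3))
    bad = max(gauss(offensive, 1.0, 0.3), gauss(violent, 1.0, 0.3))
    universe = [i / 100 for i in range(101)]
    # Min implication per rule, max aggregation, centroid defuzzification.
    agg = [max(min(clean, gauss(z, 1.0, 0.2)),   # "high" output set
               min(bad, gauss(z, 0.0, 0.2)))     # "low" output set
           for z in universe]
    return sum(z * m for z, m in zip(universe, agg)) / sum(agg)

clean_movie = recommend(offensive=0.1, violent=0.1)
bad_movie = recommend(offensive=0.9, violent=0.8)
```

Centroid defuzzification is what turns the clipped output sets into a single score usable for ranking recommendations.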

Journal ArticleDOI
TL;DR: In this paper, the authors employ uncertain statistics, including uncertain time series analysis, uncertain regression analysis, and uncertain differential equations, to model the birth rate in China, and explain why uncertain statistics is used instead of probability statistics by analyzing the characteristics of the residual plot.
Abstract: Uncertain statistics is a set of mathematical techniques to collect, analyze, and interpret data based on uncertainty theory; probability statistics is another set of mathematical techniques, based on probability theory. In practice, whether to use uncertain statistics or probability statistics to model some quantity depends on whether the distribution function of that quantity is close enough to the actual frequency. If it is close enough, then probability statistics may be used; otherwise, uncertain statistics is recommended. To illustrate this, the paper employs uncertain statistics, including uncertain time series analysis, uncertain regression analysis, and uncertain differential equations, to model the birth rate in China, and explains why uncertain statistics is used instead of probability statistics by analyzing the characteristics of the residual plot. In addition, an uncertain hypothesis test is used to determine whether the estimated uncertain statistical models are appropriate.

Journal ArticleDOI
TL;DR: In this paper, the Girvan-Newman and Louvain community detection algorithms were used to establish communities and validate the interactions between them, and positive results were obtained when checking the interactions of two sets of drugs for disease treatments: diabetes and anxiety; diabetes and antibiotics.
Abstract: This paper presents the development and application of graph neural networks to verify drug interactions in drug-protein networks. For this, the DrugBank databases were used to create four complex interaction networks: target proteins, transport proteins, carrier proteins, and enzymes. The Louvain and Girvan-Newman community detection algorithms were used to establish communities and validate the interactions between them. Positive results were obtained when checking the interactions of two sets of drugs for disease treatments: diabetes and anxiety, and diabetes and antibiotics. The Girvan-Newman algorithm found 371 interactions, and the Louvain algorithm found 58.
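The community-detection step can be reproduced with off-the-shelf networkx routines. The toy drug-protein graph below uses illustrative drug and protein names, not entries from the paper's DrugBank networks (requires networkx >= 2.8 for louvain_communities):

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman, louvain_communities

# Toy drug-protein interaction network: a diabetes group and an anxiety
# group, loosely linked through a shared metabolizing enzyme
G = nx.Graph()
G.add_edges_from([
    ("metformin", "DPP4"), ("metformin", "SLC22A1"), ("sitagliptin", "DPP4"),
    ("diazepam", "GABRA1"), ("alprazolam", "GABRA1"), ("diazepam", "CYP3A4"),
    ("sitagliptin", "CYP3A4"),  # bridge between the two groups
])

# Girvan-Newman: iteratively removes the highest-betweenness edges;
# next() yields the first split into two communities
gn_communities = [sorted(c) for c in next(girvan_newman(G))]

# Louvain: greedy modularity optimization
lv_communities = louvain_communities(G, seed=42)

print(gn_communities)
print([sorted(c) for c in lv_communities])
```

On the real networks, cross-community edges between drug nodes are then inspected as candidate interactions.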

Journal ArticleDOI
TL;DR: In this paper, the authors introduced the properties of interval quadripartitioned single valued neutrosophic graphs and highlighted potential applications of the usual apricot plant that thrives in extremely cold climates and is appropriate for higher production.
Abstract: The interval-valued quadripartitioned neutrosophic set is obtained by partitioning the indeterminacy function of the interval-valued neutrosophic set into contradiction and ignorance parts. This article introduces the properties of interval-valued quadripartitioned single-valued neutrosophic graphs. Properties such as complementary, self-complementary, strong, and complete interval-valued quadripartitioned neutrosophic graphs are investigated. The proposed notion is illustrated by the problem of locating a climate conducive to apricot cultivation in Ladakh: the model indicates which location should be chosen for apricot farming. Using the proposed concepts, we highlight potential applications for the common apricot plant, which thrives in extremely cold climates and is appropriate for higher production. The adopted approach is well suited to such problems from an application viewpoint.

Journal ArticleDOI
TL;DR: In this paper, a method based on the Mamdani Fuzzy Inference System was developed for predicting peak particle velocity (PPV), the crucial parameter of blast-induced ground vibration, which can result in serious losses such as destroyed buildings.
Abstract: Blast-induced ground vibration is highly likely to result in serious losses such as destroyed buildings. The crucial parameter of this vibration is peak particle velocity (PPV). Many equations have been developed to predict PPV; however, poor performance has been reported in multiple studies. This paper develops a method for predicting PPV based on a Mamdani Fuzzy Inference System. Firstly, Minimum Redundancy Maximum Relevance was employed to identify the blasting design parameters that contribute significantly to blast-induced PPV. Secondly, the K-means method was applied to determine the value ranges of the selected parameters. The selected parameters and corresponding value ranges were then fed into the Mamdani Fuzzy Inference System to obtain the predicted PPV. In total, 280 samples were collected from a blasting site; 260 of them were used to train the proposed method and 20 were reserved for testing. The proposed method was compared with the empirical USBM equation, multiple linear regression analysis, and a pure Mamdani Fuzzy Inference System in terms of the difference between predicted and measured PPV, coefficient of correlation, root-mean-square error, and mean absolute error. The results showed that the proposed method has the best performance in PPV prediction.
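The second step, determining value ranges of a blasting parameter with K-means, can be sketched with a plain 1-D implementation. The charge-per-delay values below are hypothetical, and the quantile initialization is my own choice for determinism, not a detail from the paper.

```python
import numpy as np

def kmeans_1d(values, k, iters=50):
    """Plain 1-D k-means (quantile-initialized) that splits one blasting
    parameter into k value ranges prior to fuzzification."""
    values = np.asarray(values, dtype=float)
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    centers = np.sort(centers)
    # Boundaries halfway between adjacent centers define the value ranges
    bounds = (centers[:-1] + centers[1:]) / 2.0
    return centers, bounds

# Hypothetical charge-per-delay values (kg) from blast records
charge = [10., 12., 11., 30., 32., 29., 55., 58., 54.]
centers, bounds = kmeans_1d(charge, k=3)
print(centers)  # three representative levels: low / medium / high
print(bounds)   # breakpoints usable as fuzzy partition boundaries
```

The resulting boundaries are exactly the kind of input the paper's membership functions would be built over.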

Journal ArticleDOI
TL;DR: Wang et al. proposed high-quality development of the marine economy, combining the improved entropy value method, the fuzzy analytic hierarchy process (FAHP), and data envelopment analysis (DEA) to establish a quadratic relative evaluation model.
Abstract: The ocean plays a crucial role in human society’s survival and development. While China’s marine economy has grown rapidly in recent years, it has also led to serious problems inhibiting ecosystem sustainability. This paper addresses high-quality development of the marine economy and combines the improved entropy value method, the fuzzy analytic hierarchy process (FAHP), and data envelopment analysis (DEA) to establish a quadratic relative evaluation model. A two-layer comprehensive index framework with 19 indicators is built to measure various aspects of the marine economy, including innovation, coordination, green development, openness, and sharing. Empirical analysis of 11 coastal provinces in China, using data mainly collected from the Chinese Statistical Yearbook, reveals significant spatial patchiness in the high-quality development level of the marine economy. This discrepancy is largely due to differences in geographical location, resources, and government policies. The study analyzes four benchmark provinces of high-quality development and summarizes their experiences. The paper concludes by providing suggestions and implications to support government decision-making.
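The entropy value method at the core of the index weighting can be sketched as follows. This is the classic formulation on a tiny made-up matrix; the paper's "improved" variant and its 19-indicator provincial data are not reproduced here.

```python
import numpy as np

def entropy_weights(X):
    """Classic entropy weight method: indicators with more dispersion
    across provinces receive larger weights.
    X: rows = provinces, cols = indicators (already positively oriented)."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                       # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)  # treat 0*log(0) as 0
    e = -(P * logs).sum(axis=0) / np.log(n)     # entropy per indicator
    d = 1.0 - e                                 # degree of divergence
    return d / d.sum()                          # weights summing to 1

# Three hypothetical provinces, three indicators: the middle indicator
# varies most across provinces, the last not at all
X = np.array([[0.9, 0.2, 0.5],
              [0.8, 0.9, 0.5],
              [0.7, 0.1, 0.5]])
w = entropy_weights(X)
print(w)  # middle indicator gets the largest weight, constant one near zero
```

An indicator that is identical across all provinces carries no discriminating information, so its weight collapses toward zero, which is exactly the behavior the method is chosen for.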

Journal ArticleDOI
TL;DR: Opt2Ada is a straightforward method for low-light image enhancement, consisting of pixel-level operations: an optimized illuminance channel decomposition, an adaptive illumination enhancement, and an adaptive global scaling.
Abstract: This paper proposes that the task of single-image low-light enhancement can be accomplished by a straightforward method named Opt2Ada. It consists of a series of pixel-level operations, including an optimized illuminance channel decomposition, an adaptive illumination enhancement, and an adaptive global scaling. Opt2Ada is a traditional method: it does not rely on architecture engineering, hyper-parameter tuning, or a specific training dataset. Its parameters are generic, and it has better generalization capability than existing data-driven methods. For evaluation, full-reference, non-reference, and semantic metrics are all calculated. Extensive experiments on real-world low-light images demonstrate the superiority of Opt2Ada over recent traditional and deep learning algorithms. Due to its flexibility and effectiveness, Opt2Ada can be deployed as a pre-processing subroutine for high-level computer vision applications.
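The three pixel-level stages can be illustrated with a deliberately simplified pipeline. This is my own rough approximation (max-RGB illuminance, an adaptive gamma, per-pixel rescaling), not the paper's actual Opt2Ada operations.

```python
import numpy as np

def enhance_lowlight(img):
    """Simplified three-stage low-light enhancement:
    1) decompose an illuminance channel (max over RGB),
    2) boost it with a gamma that adapts to overall darkness,
    3) rescale the image globally by the per-pixel gain."""
    img = img.astype(np.float64) / 255.0
    illum = img.max(axis=2)                     # illuminance channel
    gamma = max(1.0 - illum.mean(), 0.1)        # darker image -> stronger boost
    boosted = np.power(illum, gamma)            # adaptive illumination enhancement
    scale = boosted / np.maximum(illum, 1e-6)   # per-pixel gain
    out = np.clip(img * scale[..., None], 0.0, 1.0)
    return (out * 255.0).astype(np.uint8)

dark = np.full((4, 4, 3), 30, dtype=np.uint8)   # uniformly dark test patch
print(enhance_lowlight(dark).mean(), dark.mean())
```

Because gamma < 1 on dark inputs, the illuminance map is lifted and the whole image brightens while relative channel ratios (and hence hue) are preserved.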

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new ranking approach for intuitionistic fuzzy sets based on geometrical representation, which satisfies the properties of weak admissibility, membership degree robustness, nonmembership degree robustness, and determinism.
Abstract: Ranking intuitionistic fuzzy numbers is an important issue in the practical application of intuitionistic fuzzy sets. Many scholars rank intuitionistic fuzzy numbers by defining different measures. These measures do not comprehensively consider the fuzzy semantics expressed by the membership degree, nonmembership degree, and hesitancy degree. As a result, the ranking results are often counterintuitive, exhibiting indifference problems, non-robustness problems, and so on. In this paper, based on a geometrical representation, a novel measure for intuitionistic fuzzy numbers, called the ideal measure, is defined, and a new ranking approach is proposed. It is proved that the ideal measure satisfies the properties of weak admissibility, membership degree robustness, nonmembership degree robustness, and determinism. A numerical example illustrates the effectiveness and feasibility of the method. Finally, using the presented approach, the optimal alternative can be acquired in multi-attribute decision-making problems. Comparison analysis shows that the ideal measure is simpler and more effective than other existing methods.
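For context, the kind of baseline the ideal measure improves upon can be shown in a few lines: the classic score function s = mu - nu with the accuracy h = mu + nu as tie-breaker. This is NOT the paper's ideal measure, just the standard measure whose indifference problems motivate it.

```python
def rank_ifns(ifns):
    """Rank intuitionistic fuzzy numbers (mu, nu) by the classic score
    s = mu - nu, breaking ties with the accuracy h = mu + nu.
    Larger score (then larger accuracy) ranks first."""
    return sorted(ifns, key=lambda a: (a[0] - a[1], a[0] + a[1]), reverse=True)

# (0.5, 0.25) and (0.75, 0.5) share the same score 0.25, so only the
# accuracy separates them; (0.5, 0.5) has score 0 and ranks last
ifns = [(0.5, 0.25), (0.75, 0.5), (0.5, 0.5)]
print(rank_ifns(ifns))
```

The tie between the first two numbers is exactly the indifference situation the abstract mentions: the score alone cannot distinguish them, and measures that handle such cases more faithfully are the paper's contribution.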

Journal ArticleDOI
TL;DR: In this article , the authors presented a novel approach to detect and predict the epileptic signal in the recorded electroencephalogram (EEG) using LSTM Fully connected neural network.
Abstract: Epilepsy is the most common neurological disorder, affecting over 65 million people across the world. Recent research has shown great interest in predicting and diagnosing epilepsy well ahead of time. Continuous monitoring of electroencephalogram (EEG) signals for seizure detection is a tedious and time-consuming process and therefore requires a qualified and trained clinical specialist. This paper presents a novel approach to detect and predict epileptic signals in recorded EEG. A nonlinear technique is required to examine EEG signals due to their random nature. Therefore, we provide an alternative method that extracts various entropy measures, such as sample entropy, spectral entropy, permutation entropy, and Shannon entropy, as statistical features from the EEG signal. Based on these extracted features, an LSTM fully connected neural network is used to classify the EEG signal as focal or non-focal. The proposed method gives new insight into EEG signals by providing sensitivity as an added measure, using deep learning along with accuracy and precision.
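One of the listed features, permutation entropy, is simple enough to sketch directly: it is the Shannon entropy of the ordinal patterns found in sliding windows of the signal. The order and delay values below are common defaults, assumed rather than taken from the paper.

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Permutation entropy: Shannon entropy of ordinal patterns of
    length `order`, normalized to [0, 1]. Irregular signals approach 1,
    monotone signals give 0."""
    patterns = Counter(
        tuple(np.argsort(x[i:i + order * delay:delay]))
        for i in range(len(x) - (order - 1) * delay)
    )
    total = sum(patterns.values())
    pe = -sum((c / total) * math.log2(c / total) for c in patterns.values())
    return pe / math.log2(math.factorial(order))

rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)       # irregular signal -> close to 1
ramp = np.arange(1000, dtype=float)     # monotone signal -> exactly 0
print(permutation_entropy(noise))
print(permutation_entropy(ramp))
```

Each entropy measure is computed per EEG segment, and the resulting feature vectors are what the LSTM fully connected network classifies.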

Journal ArticleDOI
TL;DR: In this paper, a fuzzy analytic hierarchy process (FAHP) point-factored inference system is proposed to detect cardiovascular disease, with attributes selected from clinical data, medical practitioners, and a literature review.
Abstract: The World Health Organization (WHO) reported that cardiovascular disease is the leading cause of death worldwide, particularly in developing countries. While diagnosing cardiovascular disease, medical practitioners may have differences of opinion and face challenges when information is inadequate and the problem is uncertain. Therefore, to resolve ambiguity and vagueness in diagnosing the disease, a sound decision-making model is required to assist medical practitioners in detecting the disease at an early stage. Thus, this study designs a fuzzy analytic hierarchy process (FAHP) point-factored inference system to detect cardiovascular disease. The attributes are selected and classified into sub-attributes and a point-factor scale using clinical data, medical practitioners, and a literature review. Fuzzy AHP is used to calculate the attribute weights, the rule strings are generated using the Mamdani fuzzy inference system, and the strength of each set of fuzzy rules is calculated by multiplying the attribute weights with the point-factor scale. The string weights determine the output ranges of cardiovascular disease. Moreover, the results are validated using sensitivity analysis, and a comparative analysis is performed with AHP techniques. The results, elucidated by a case study, show that the proposed method outperforms other methods.
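The attribute-weighting step can be illustrated with the crisp geometric-mean AHP calculation that fuzzy AHP extends. The pairwise-comparison matrix below (chest pain vs. blood pressure vs. cholesterol) is entirely hypothetical, and the geometric-mean method is a common stand-in, not necessarily the variant the paper uses.

```python
import numpy as np

def ahp_weights(pairwise):
    """Attribute weights from a reciprocal pairwise-comparison matrix
    via the geometric-mean method: the geometric mean of each row,
    normalized to sum to 1."""
    A = np.asarray(pairwise, dtype=float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[1])
    return gm / gm.sum()

# Hypothetical judgments on Saaty's 1-9 scale:
# chest pain is moderately more important than blood pressure (3)
# and strongly more important than cholesterol (5), etc.
A = [[1.0,     3.0, 5.0],
     [1.0 / 3, 1.0, 2.0],
     [1.0 / 5, 1.0 / 2, 1.0]]
w = ahp_weights(A)
print(w)  # chest pain receives the dominant weight
```

In the paper, these weights are then multiplied by the point-factor scale of each fired fuzzy rule to obtain the string weights.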

Journal ArticleDOI
TL;DR: In this paper, a simple approach is defined to link the input ingredients of concrete with the resulting compressive strength at a high accuracy rate and overcome the existing nonlinearity in the modeling process.
Abstract: The difficulties in determining the compressive strength of concrete are inherent due to the various nonlinearities rooted in the mix designs. These difficulties rise dramatically with the modern mix designs of high-performance concrete. The present study defines a simple approach to link the input ingredients of concrete with the resulting compressive strength at a high accuracy rate and overcome the existing nonlinearity. For this purpose, a radial basis function network is used to carry out the modeling process. The optimal results were obtained by determining the optimal structure of the radial basis function neural networks. This task was handled by two precise optimization algorithms, namely Henry’s gas solubility algorithm and the particle swarm optimization algorithm. Both models achieved their best performance in the training section. Considering the root-mean-square error values, the best value stood at 2.5629 for the radial basis neural network optimized by Henry’s gas solubility algorithm, whereas the same value for the radial basis neural network optimized by particle swarm optimization was 2.6583. Although both hybrid models provided acceptable results, the radial basis neural network optimized by Henry’s gas solubility algorithm showed higher accuracy in predicting high-performance concrete compressive strength.
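The underlying model can be sketched as a minimal Gaussian radial basis function network: one center per training point and output weights fitted by least squares. The toy data and fixed sigma are assumptions; in the paper the network structure is instead tuned by Henry's gas solubility and particle swarm optimizers.

```python
import numpy as np

def rbf_predict(X_train, y_train, X_new, sigma=0.5):
    """Minimal Gaussian RBF network: centers at the training points,
    output weights solved by least squares."""
    def design(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    Phi = design(X_train, X_train)
    w = np.linalg.lstsq(Phi, y_train, rcond=None)[0]
    return design(X_new, X_train) @ w

# Toy 1-D "mix parameter" -> "strength" relation (purely illustrative)
X = np.linspace(-1.0, 1.0, 7).reshape(-1, 1)
y = X[:, 0] ** 2
pred = rbf_predict(X, y, X)
print(np.max(np.abs(pred - y)))  # near-zero training error (interpolation)
```

What the metaheuristics optimize in the paper is essentially the structure around this core: the number of centers, their positions, and the spread sigma.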

Journal ArticleDOI
TL;DR: The generalized neighborhood system-based rough set is an important extension of the original rough set, which enables decision makers to choose decisions based on personal preferences and is applied to rule extraction from incomplete information systems.
Abstract: The generalized neighborhood system-based rough set is an important extension of Pawlak’s rough set. Rough sets based on generalized neighborhood systems include two basic models: optimistic and pessimistic rough sets. In this paper, we study pessimistic rough sets further. First, to regain some properties of Pawlak’s rough sets that are lost in pessimistic rough sets, we introduce the mediate, transitive, and positive (negative) alliance conditions for generalized neighborhood systems. Second, some approximation operators generated by special generalized neighborhood systems are characterized, including serial, reflexive, symmetric, mediate, transitive, and negative alliance generalized neighborhood systems and their combinations (e.g., reflexive and transitive). Third, we discuss the topologies generated by the upper and lower approximation operators of pessimistic rough sets. Finally, combining practical examples, we apply pessimistic rough sets to rule extraction from incomplete information systems. In particular, we prove that different decision rules can be obtained when different neighborhood systems are chosen, which enables decision makers to choose decisions based on personal preferences.
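The basic approximation operators behind all of these models can be sketched directly. The sketch below uses a single neighborhood per object (the Pawlak-style special case); the paper's pessimistic model quantifies over a whole family of neighborhoods per object, and the universe and decision class here are made up for illustration.

```python
def approximations(universe, neighborhood, target):
    """Neighborhood-based rough approximations: an object belongs to the
    lower approximation iff its neighborhood lies entirely inside the
    target set, and to the upper approximation iff its neighborhood
    meets the target set."""
    lower = {x for x in universe if neighborhood[x] <= target}
    upper = {x for x in universe if neighborhood[x] & target}
    return lower, upper

U = {1, 2, 3, 4, 5}
# Toy neighborhood map (e.g. derived from an incomplete information table)
N = {1: {1, 2}, 2: {1, 2}, 3: {3}, 4: {4, 5}, 5: {4, 5}}
X = {1, 2, 3, 4}  # decision class to approximate
low, up = approximations(U, N, X)
print(low)  # {1, 2, 3}: certainly in X
print(up)   # {1, 2, 3, 4, 5}: possibly in X
```

Certain rules are extracted from the lower approximation and possible rules from the upper one; changing the neighborhood map N changes both sets, which is how different neighborhood systems yield different decision rules.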