
Showing papers in "Computational Intelligence and Neuroscience in 2023"


Journal ArticleDOI
TL;DR: Wang et al. proposed an intelligent DR classification model of fundus images that can detect all five stages of diabetic retinopathy: no DR, mild, moderate, severe, and proliferative.
Abstract: Diabetic retinopathy (DR) is a common retinal vascular disease that can cause severe visual impairment. Using fundus images for intelligent diagnosis of DR is of great clinical significance. In this paper, an intelligent DR classification model of fundus images is proposed. The method can detect all five stages of DR: no DR, mild, moderate, severe, and proliferative. The model is composed of two key modules: the feature extraction block (FEB), used to extract features from fundus images, and the grading prediction block (GPB), used to classify the five stages of DR. The transformer in the FEB has fine-grained attention that attends closely to retinal hemorrhage and exudate areas. The residual attention in the GPB effectively captures the different spatial regions occupied by different classes of objects. Comprehensive experiments on the DDR dataset demonstrate the superiority of the method, which achieves competitive performance compared with the benchmark method.

8 citations


Journal ArticleDOI
TL;DR: In this paper, a personalized path recommendation strategy that tracks and learns users' path preferences is proposed: the road network is weighted according to the user preference weight vector, and the optimal path is obtained using the Tabu search algorithm.
Abstract: With the increasing frequency of autonomous driving, more and more attention is being paid to personalized path planning. However, users' path selection preferences change with internal and external factors. Therefore, this paper proposes a personalized path recommendation strategy that can track and learn a user's path preferences. First, we collect the system's data, relate it to the user preference factors, and obtain the user's initial preference weight vector with the bisecting K-means algorithm. The system then determines whether user preferences have changed based on a set threshold; when they have, the current preference weight vector is obtained by redefining the preference factors or invoking difference perception. Finally, the road network is weighted separately according to the user preference weight vector, and the optimal path is obtained using the Tabu search algorithm. Simulation results for two scenarios show that the proposed strategy meets the requirements of autonomous driving even when user preferences change.
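The clustering step described above can be sketched in Python. This is a minimal bisecting K-means in NumPy; the two preference factors (time weight vs. distance weight) and the toy trip records are illustrative assumptions, not the paper's data:

```python
import numpy as np

def kmeans2(X, iters=50, seed=0):
    """Plain 2-means (Lloyd's algorithm): returns labels and two centroids."""
    rng = np.random.default_rng(seed)
    c = X[rng.choice(len(X), 2, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - c[None], axis=2)  # point-centroid distances
        lab = d.argmin(axis=1)
        for k in (0, 1):
            if (lab == k).any():
                c[k] = X[lab == k].mean(axis=0)
    return lab, c

def bisecting_kmeans(X, n_clusters=3):
    """Bisecting K-means: repeatedly 2-split the cluster with the largest SSE."""
    clusters = [np.arange(len(X))]
    while len(clusters) < n_clusters:
        sse = [((X[idx] - X[idx].mean(axis=0)) ** 2).sum() for idx in clusters]
        idx = clusters.pop(int(np.argmax(sse)))
        lab, _ = kmeans2(X[idx])
        clusters += [idx[lab == 0], idx[lab == 1]]
    return clusters

# Hypothetical trip records: columns = (time weight, distance weight)
trips = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8],
                  [0.1, 0.9], [0.5, 0.5], [0.55, 0.45]])
groups = bisecting_kmeans(trips, n_clusters=3)
# Initial preference weight vector: centroid of the user's dominant cluster
pref = trips[max(groups, key=len)].mean(axis=0)
```

The centroid of the largest cluster then serves as the user's initial preference weight vector, which the later stages would update as preferences drift.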

7 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a computational framework for diagnosing breast cancer that uses a ResNet-50 convolutional neural network to classify mammogram images, achieving a classification accuracy of 93% and surpassing other models trained on the same dataset.
Abstract: Medical image analysis places a significant focus on breast cancer, which poses a significant threat to women's health and contributes to many fatalities. An early and precise diagnosis of breast cancer through digital mammograms can significantly improve the accuracy of disease detection. Computer-aided diagnosis (CAD) systems must analyze the medical imagery and perform detection, segmentation, and classification processes to assist radiologists with accurately detecting breast lesions. However, detecting early-stage cancer in mammograms is difficult. The deep convolutional neural network has demonstrated exceptional results and is considered a highly effective tool in the field. This study proposes a computational framework for diagnosing breast cancer using a ResNet-50 convolutional neural network to classify mammogram images. To train and classify the INbreast dataset into benign or malignant categories, the framework utilizes transfer learning from the ResNet-50 CNN pretrained on ImageNet. The results revealed that the proposed framework achieved an outstanding classification accuracy of 93%, surpassing other models trained on the same dataset. This approach facilitates early diagnosis and classification of malignant and benign breast cancer, potentially saving lives and resources. These outcomes highlight that deep convolutional neural network algorithms can be trained to achieve highly accurate results on various mammograms, along with the capacity to enhance medical tools by reducing the error rate in screening mammograms.

5 citations


Journal ArticleDOI
TL;DR: A critical review of deep learning methods for left ventricle segmentation from frequently used imaging modalities, including magnetic resonance imaging, ultrasound, and computed tomography, is presented in this article.
Abstract: Cardiac diseases are one of the key causes of death around the globe. The number of heart patients increased considerably during the pandemic. Therefore, it is crucial to assess and analyze medical and cardiac images. Deep learning architectures, specifically convolutional neural networks, have become the primary choice for the assessment of cardiac medical images. The left ventricle is a vital part of the cardiovascular system, where its boundary and size play a significant role in the evaluation of cardiac function. Owing to automatic segmentation and promising results, left ventricle segmentation using deep learning has attracted a lot of attention. This article presents a critical review of deep learning methods for left ventricle segmentation from frequently used imaging modalities, including magnetic resonance imaging, ultrasound, and computed tomography. The study also details the network architectures, software, and hardware used for training, along with the publicly available cardiac image datasets and self-prepared datasets incorporated. A summary of the evaluation metrics and the results reported by different researchers is also presented. Finally, all this information is summarized to help readers understand the motivation and methodology of the various deep learning models and to explore potential solutions to future challenges in LV segmentation.

3 citations


Journal ArticleDOI
TL;DR: In this article, a machine learning framework based on automated hyperparameter optimization is proposed to rank potential nonclinical markers for autism in the Q-chat scores of individuals across different age groups.
Abstract: Autism spectrum disorder is the most widely used umbrella term for a myriad of neurodegenerative/developmental conditions typified by inappropriate social behavior, lack of communication/comprehension skills, and restricted mental and emotional maturity. An intriguing aspect of this disorder is that it can be detected only by close monitoring of developmental milestones after childbirth. Moreover, the exact causes of this neurodevelopmental condition are still unknown. Besides, autism is prevalent across individuals irrespective of ethnicity, genetic/familial history, and economic/educational background. Although research suggests that autism is genetic in nature and early detection of this disorder can greatly enhance the independent lifestyle and societal adaptability of affected individuals, there is still a great dearth of information to support these statements with proven facts and figures. This research work places emphasis on the application of automated machine learning incorporated with feature ranking techniques to generate significant feature signatures for the early detection of autism. Publicly available datasets based on the Q-chat scores of individuals across diverse age groups (toddlers, children, adolescents, and adults) have been employed in this study. A machine learning framework based on automated hyperparameter optimization is proposed in this work to rank the potential nonclinical markers for autism. Moreover, this study aimed at ranking the AutoML models based on the Matthews correlation coefficient and balanced accuracy, via which nonclinical markers were identified from these datasets. In addition, the feature signatures and their significance in distinguishing between classes are reported for the first time in autism detection. The proposed framework yielded ∼90% MCC and ∼95% balanced accuracy across all four age groups of autism datasets.
Deep learning approaches have yielded a maximum of 92.7% accuracy on the same datasets but are limited in their ability to extract significant markers, have not reported on MCC for unbalanced data, and cannot adapt automatically to new data entries. However, AutoML approaches are more flexible, easier to implement, and provide automated optimization, thereby yielding the highest accuracy with minimal user intervention.
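The two ranking criteria used above, Matthews correlation coefficient (MCC) and balanced accuracy, can both be computed from the binary confusion matrix; a small NumPy sketch with toy unbalanced labels:

```python
import numpy as np

def confusion(y_true, y_pred):
    """Binary confusion-matrix counts: (tp, tn, fp, fn)."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return tp, tn, fp, fn

def mcc(y_true, y_pred):
    """Matthews correlation coefficient; 0 by convention when undefined."""
    tp, tn, fp, fn = confusion(y_true, y_pred)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def balanced_accuracy(y_true, y_pred):
    """Mean of the per-class recalls (sensitivity and specificity)."""
    tp, tn, fp, fn = confusion(y_true, y_pred)
    sens = tp / (tp + fn)   # recall on the positive class
    spec = tn / (tn + fp)   # recall on the negative class
    return (sens + spec) / 2

# Toy unbalanced labels: 8 negatives, 2 positives
y_true = np.array([0] * 8 + [1] * 2)
y_pred = np.array([0] * 7 + [1] + [1, 0])
```

On unbalanced data both metrics, unlike plain accuracy, penalize a classifier that simply predicts the majority class.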

3 citations


Journal ArticleDOI
TL;DR: In this article, a detailed analysis of the security of IoT networks based on quality-of-service metrics is performed for deploying intrusion detection systems, carrying out experiments on secured communication and measuring the network's performance against existing security metrics.
Abstract: The Internet of Things (IoT) is a distributed system made up of connections of smart objects (things) that continuously sense events in their sensing domain and transmit the data via the Internet. IoT is considered the next revolution of the Internet, since it has provided vast improvements in the day-to-day activities of humans, including the provision of efficient healthcare services and the development of smart cities and intelligent transport systems. In the IoT environment, with suitable security mechanisms applied through efficient security management techniques, intrusion detection systems provide a wall of defence against attacks on the Internet and on Internet-connected devices by effectively monitoring Internet traffic. The intrusion detection system (IDS) is therefore a solution proposed by researchers to monitor and secure IoT communication. In this work, a meticulous analysis of the security of IoT networks based on quality-of-service metrics is performed for deploying intrusion detection systems, carrying out experiments on secured communication and measuring the network's performance against existing security metrics. Finally, we propose a new and effective IDS using a deep learning-based classification approach, namely fuzzy CNN, for improving the security of communication. The major advantages of this system include an upsurge in detection accuracy, more efficient detection of denial-of-service (DoS) attacks, and a reduction in false positive rates.

2 citations


Journal ArticleDOI
TL;DR: A detailed survey of applications of federated learning for healthcare informatics is presented in this article, where the authors focus on the fundamentals of FL and the major motivations behind FL for healthcare applications.
Abstract: Healthcare is predominantly regarded as a crucial consideration in promoting the general physical and mental health and well-being of people around the world. The amount of data generated by healthcare systems is enormous, making it challenging to manage. Many machine learning (ML) approaches have been implemented to develop dependable and robust solutions to handle the data. However, ML cannot fully utilize data due to privacy concerns, which primarily arise with medical data. Due to a lack of precise clinical data, the application of ML for such data is challenging and may not yield the desired results. Federated learning (FL), a recent development in ML in which the computation is offloaded to the source of data, appears to be a promising solution to this problem. In this study, we present a detailed survey of applications of FL for healthcare informatics. We initiate a discussion on the need for FL in the healthcare domain, followed by a review of recent review papers. We focus on the fundamentals of FL and the major motivations behind FL for healthcare applications. We then present the applications of FL along with the recent state of the art in several verticals of healthcare. Lessons learned, open issues, and challenges that are yet to be solved are then highlighted, followed by future directions for prospective researchers in this domain.

2 citations


Journal ArticleDOI
TL;DR: Zhang et al. used recall as the primary evaluation index, together with the precision, accuracy, and F1-score indicators, to evaluate and compare the prediction performance of each model.
Abstract: Breast cancer is the most common and deadly type of cancer in the world. Based on machine learning algorithms such as XGBoost, random forest, logistic regression, and K-nearest neighbor, this paper establishes different models to classify and predict breast cancer, so as to provide a reference for the early diagnosis of breast cancer. Recall indicates the probability of detecting malignant cancer cells in medical diagnosis, which is of great significance for the classification of breast cancer, so this article takes recall as the primary evaluation index and also considers the precision, accuracy, and F1-score indicators to evaluate and compare the prediction performance of each model. In order to eliminate the influence of differing feature scales on the model, the data are standardized. In order to find the optimal subset and improve the accuracy of the model, 15 features were screened out as model inputs through the Pearson correlation test. The K-nearest neighbor model uses cross-validation to select the optimal k value, with recall as the evaluation index. For the problem of positive and negative sample imbalance, stratified sampling is used to extract the training and test sets proportionally from each category. The experimental results show that under different dataset divisions (8:2 and 7:3), the prediction performance of the same model changes. Comparative analysis shows that the XGBoost model established in this paper (with an 8:2 train/test split) performs best, with recall, precision, accuracy, and F1-score of 1.00, 0.960, 0.974, and 0.980, respectively.
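The preprocessing and model-selection pipeline described above (Pearson screening down to 15 features, standardization, a stratified 8:2 split, and cross-validated choice of k by recall) might look like this in scikit-learn; the library's built-in Wisconsin breast cancer dataset stands in for the paper's data, so the numbers will differ from those reported:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import recall_score

X, y = load_breast_cancer(return_X_y=True)   # stand-in for the paper's data

# Pearson screening: keep the 15 features most correlated with the label
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
X = X[:, np.argsort(-np.abs(r))[:15]]

# Stratified 8:2 split, sampling proportionally from each class
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Standardize to remove the influence of differing feature scales
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Pick k by cross-validated recall, then evaluate on the held-out split
best_k = max(range(1, 16, 2), key=lambda k: cross_val_score(
    KNeighborsClassifier(k), X_tr, y_tr, cv=5, scoring="recall").mean())
knn = KNeighborsClassifier(best_k).fit(X_tr, y_tr)
test_recall = recall_score(y_te, knn.predict(X_te))
```

Fitting the scaler on the training split only, as above, avoids leaking test-set statistics into the model.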

2 citations


Journal ArticleDOI
TL;DR: In this article, a novel extended network based on the dendritic structure is proposed, enabling it to solve multiclass classification problems, and, for the first time, an efficient error-back-propagation learning algorithm is derived.
Abstract: Deep learning (DL) has achieved breakthrough successes in various tasks, owing to its layer-by-layer information processing and sufficient model complexity. However, DL suffers from the issues of both redundant model complexity and low interpretability, mainly because of its oversimplified basic McCulloch-Pitts neuron unit. A widely recognized, biologically plausible dendritic neuron model (DNM) has demonstrated its effectiveness in alleviating the aforementioned issues, but it can only solve binary classification tasks, which significantly limits its applicability. In this study, a novel extended network based on the dendritic structure is proposed, enabling it to solve multiclass classification problems. Also, for the first time, an efficient error-back-propagation learning algorithm is derived. Extensive experiments demonstrate the effectiveness and superiority of the proposed method in comparison with nine other state-of-the-art classifiers on ten datasets, including a real-world quality-of-web-service application. The experimental results suggest that the proposed learning algorithm is competent and reliable in terms of classification performance and stability and has a notable advantage on small-scale, imbalanced data. Additionally, aspects of the network structure constrained by scale are examined.

2 citations


Journal ArticleDOI
TL;DR: In this paper, an umbrella review of brain-computer interfacing (BCI) research is presented, revealing a shift away from themes focused on medical advancement and system development toward applications including education, marketing, gaming, safety, and security.
Abstract: This umbrella review is motivated by the need to understand the shift in research themes on brain-computer interfacing (BCI); it determined that a shift has occurred away from themes focusing on medical advancement and system development toward applications including education, marketing, gaming, safety, and security. The background of this review examined aspects of BCI categorisation, neuroimaging methods, brain control signal classification, applications, and ethics. The specific area of BCI software and hardware development was not examined. A search using One Search was undertaken, and 92 BCI reviews were selected for inclusion. Publication demographics indicate the average number of authors on the reviews considered was 4.2 ± 1.8. The results also indicate a rapid increase in the number of BCI reviews from 2003, with only three reviews before that period: two in 1972 and one in 1996. While BCI authors were predominantly Euro-American in early reviews, authorship later became more global, dominated by China by 2020–2022. The review revealed six disciplines associated with BCI systems, grouped into two domains: the first centred on health, comprising life sciences and biomedicine (n = 42), neurosciences and neurology (n = 35), and rehabilitation (n = 20); the second centred on functionality, comprising computer science (n = 20), engineering (n = 28), and technology (n = 38). There was a thematic shift from understanding brain function and modes of interfacing BCI systems to more applied research; the novel research areas identified surround artificial intelligence, including machine learning, pre-processing, and deep learning. As BCI systems become more invasive in the lives of "normal" individuals, it is expected that there will be a refocus and thematic shift towards increased research into ethical issues and the need for legal oversight in BCI applications.

2 citations


Journal ArticleDOI
TL;DR: In this paper, a cost-sensitive ensemble learning method using a support vector machine as the base learner of AdaBoost is proposed for classifying imbalanced data; the weighting strategy increases the sample weights of the misclassified minority while decreasing those of the misclassified majority until their distributions are even in each round.
Abstract: Classification of imbalanced data is a challenging task that has captured considerable interest in numerous scientific fields by virtue of the great practical value of minority accuracy. Some methods for improving generalization performance have been developed to address this classification situation. Here, we propose a cost-sensitive ensemble learning method using a support vector machine as the base learner of AdaBoost for classifying imbalanced data. Considering that existing methods do not precisely control the classification accuracy of the minority class, we developed a novel way to rebalance the weights of AdaBoost, which influence the base learner's training. This weighting strategy increases the sample weights of the misclassified minority while decreasing those of the misclassified majority until their distributions are even in each round. Furthermore, we included P-mean as one of the assessment markers and discussed why it is necessary. Experiments were conducted to compare the proposed model with 10 comparison models on 18 datasets in terms of six different metrics. A statistical study of the comprehensive experimental findings verifies the efficacy and usability of the proposed model.
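The abstract does not spell out the exact rebalancing rule, so the sketch below shows a generic cost-sensitive AdaBoost reweighting round in NumPy, with an assumed extra cost factor applied to misclassified minority samples:

```python
import numpy as np

def cost_sensitive_reweight(w, y, y_pred, alpha, minority=1, cost=2.0):
    """One AdaBoost-style reweighting round with an extra cost factor on
    misclassified minority samples (illustrative, not the paper's exact rule)."""
    miss = y != y_pred
    upd = np.where(miss, np.exp(alpha), np.exp(-alpha))       # usual AdaBoost update
    upd = upd * np.where(miss & (y == minority), cost, 1.0)   # push minority harder
    w = w * upd
    return w / w.sum()                                        # renormalize to a distribution

w = np.full(6, 1 / 6)                     # uniform initial sample weights
y      = np.array([0, 0, 0, 0, 1, 1])    # class 1 is the minority
y_pred = np.array([0, 0, 0, 1, 0, 1])    # one error in each class
w = cost_sensitive_reweight(w, y, y_pred, alpha=0.5)
```

After the round, the misclassified minority sample carries more weight than the misclassified majority sample, so the next base SVM is pulled harder toward the minority class.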

Journal ArticleDOI
TL;DR: Zhang et al. proposed an improved YOLOX detection algorithm (BGD-YOLOX), which combines the Ghost model with a modulated deformable convolution to improve the detection of small objects.
Abstract: Object detection is one of the most critical areas in computer vision, and it plays an essential role in a variety of practical scenarios. However, small object detection has always been a key and difficult problem in the field. Therefore, considering the balance between the effectiveness and efficiency of a small object detection algorithm, this study proposes an improved YOLOX detection algorithm (BGD-YOLOX) to improve the detection of small objects. We present the BigGhost module, which combines the Ghost model with a modulated deformable convolution to optimize YOLOX for greater accuracy. At the same time, it reduces inference time by reducing the number of parameters and the amount of computation. The experimental results show that BGD-YOLOX has higher average accuracy in small object detection, with mAP0.5 up to 88.3% and mAP0.95 up to 56.7%, surpassing the most advanced object detection algorithms such as EfficientDet, CenterNet, and YOLOv4.

Journal ArticleDOI
TL;DR: In this article, the authors combine LSTM with differential grey wolf optimization (LSTM-DGWO) in a deep learning model for sentiment analysis of app reviews; the proposed model outperformed conventional methods with a greater accuracy of 98.89%.
Abstract: Sentiment analysis furnishes consumer concerns regarding products, enabling product enhancement and development. Existing sentiment analysis using machine learning techniques is computationally intensive and less reliable. Deep learning approaches to sentiment analysis, such as long short-term memory (LSTM), have matured considerably, and the selection of optimal hyperparameters is a significant issue. This study combines the LSTM with differential grey wolf optimization in the LSTM-DGWO deep learning model. The app review dataset is processed using the bidirectional encoder representations from transformers (BERT) framework for efficient word embeddings. Then, review features are extracted by the genetic algorithm (GA), and the optimal review feature set is selected using the firefly algorithm (FA). Finally, the LSTM-DGWO model categorizes app reviews, with the DGWO algorithm optimizing the hyperparameters of the LSTM model. The proposed model outperformed conventional methods with a greater accuracy of 98.89%. The findings demonstrate that sentiment analysis can be practically applied to understand customers' perceptions and enhance products from a business perspective.

Journal ArticleDOI
TL;DR: Wang et al. put forward an artificial intelligence-based corporate financial risk prevention (FRP) model, which includes four stages: data preprocessing, feature selection, feature classification, and parameter adjustment.
Abstract: Artificial intelligence (AI) proves decisive in today's rapidly developing society and is a motive force for the evolution of financial technology. As a subdivision of artificial intelligence research, machine learning (ML) algorithms are extensively used in all aspects of the daily operation and development of the supply chain. The data mining, deductive reasoning, and other capabilities of machine learning algorithms can effectively help enterprise decision-makers make more scientific and reasonable decisions using existing financial index data. At present, globalization uncertainties such as COVID-19 are intensifying, and supply chain enterprises face bankruptcy risk. In operation, practical tools are needed to identify and promptly respond to threats in the supply chain, predict the probability of business failure, and take scientific and feasible measures to prevent a financial crisis in good time. Artificial intelligence decision-making technology can help traditional supply chains transform into intelligent supply chains, realize smart management, and promote supply chain transformation and upgrading. By applying machine learning algorithms, the supply chain can not only identify potential risks in time and adopt scientific and feasible measures to deal with a crisis but also strengthen the connection and cooperation between different enterprises, using the advantage of advanced technology to improve overall operational efficiency. On account of this, the paper puts forward an artificial intelligence-based corporate financial risk prevention (FRP) model, which includes four stages: data preprocessing, feature selection, feature classification, and parameter adjustment.
Firstly, relevant financial index data are collected, and the quality of the selected data is raised through preprocessing. Secondly, the chaotic grasshopper optimization algorithm (CGOA), which builds a mathematical model simulating the behavior of grasshoppers in nature, performs feature selection and optimization on the collected datasets. Then, the support vector machine (SVM) classifies the quantitative data with reduced features. Empirical risk is calculated using the hinge loss function, and a regularization term is added to optimize the risk structure. Finally, the slime mould algorithm (SMA) optimizes the process to improve the efficiency of the SVM, making the algorithm more accurate and effective. In this study, Python is used to simulate the corporate financial risk prevention model. The experimental results show that the CGOA-SVM-SMA algorithm proposed in this paper achieves good results: its prediction and decision-making capabilities are better than those of other comparative models, and it can effectively help supply chain enterprises prevent financial risks.
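The empirical-risk step above, hinge loss plus a regularization term, is the primal linear-SVM objective. A minimal NumPy sketch with subgradient descent follows; the CGOA feature selection and SMA tuning are not reproduced, and the tiny dataset is an illustrative stand-in for the quantitative financial-index data:

```python
import numpy as np

def svm_objective(w, b, X, y, lam=0.1):
    """Regularized empirical risk: mean hinge loss + L2 penalty (y in {-1, +1})."""
    margins = y * (X @ w + b)
    return np.maximum(0.0, 1.0 - margins).mean() + lam * np.dot(w, w)

def svm_subgradient_step(w, b, X, y, lam=0.1, lr=0.1):
    """One subgradient descent step on the objective above."""
    margins = y * (X @ w + b)
    active = margins < 1                                  # samples inside the margin
    gw = 2 * lam * w - (y[active][:, None] * X[active]).sum(axis=0) / len(X)
    gb = -y[active].sum() / len(X)
    return w - lr * gw, b - lr * gb

# Tiny separable stand-in for the quantitative financial-index data
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -1.0], [-1.0, -3.0]])
y = np.array([1, 1, -1, -1])
w, b = np.zeros(2), 0.0
before = svm_objective(w, b, X, y)
for _ in range(200):
    w, b = svm_subgradient_step(w, b, X, y)
after = svm_objective(w, b, X, y)
```

The L2 term plays exactly the role the abstract describes: it trades a little empirical (hinge) risk for a smaller-norm, better-generalizing separator.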

Journal ArticleDOI
TL;DR: An improved Adam optimization algorithm combining adaptive coefficients and composite gradients based on randomized block coordinate descent is proposed to address issues of the Adam algorithm such as slow convergence, the tendency to miss the global optimal solution, and ineffectiveness in processing high-dimensional vectors.
Abstract: An improved Adam optimization algorithm combining adaptive coefficients and composite gradients based on randomized block coordinate descent is proposed to address issues of the Adam algorithm such as slow convergence, the tendency to miss the global optimal solution, and ineffectiveness in processing high-dimensional vectors. First, the adaptive coefficient is used to adjust the gradient deviation value and correct the search direction. Then, the predicted gradient is introduced, and the current gradient and the first-order momentum are combined into a composite gradient to improve the global optimization ability. Finally, the randomized block coordinate method is used to determine the gradient update mode, which reduces the computational overhead. Simulation experiments on two standard classification datasets show that the convergence speed and accuracy of the proposed algorithm are higher than those of six gradient descent methods, and the CPU and memory utilization are significantly reduced. In addition, based on logging data, BP neural networks optimized by each of the six algorithms are used to predict reservoir porosity. Results show that the proposed method has lower system overhead, higher accuracy, and stronger stability, and the absolute error for more than 86% of the data is within 0.1%, which further verifies its effectiveness.
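For reference, the baseline Adam update that the improved algorithm modifies (before the adaptive coefficients, composite gradients, and randomized block coordinates are added) looks like this in NumPy, shown minimizing a one-dimensional quadratic:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One standard Adam update with bias-corrected moment estimates."""
    m = b1 * m + (1 - b1) * grad            # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias corrections for the zero init
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = (x - 3)^2 starting from x = 0
theta, m, v = np.array([0.0]), np.zeros(1), np.zeros(1)
for t in range(1, 501):
    grad = 2 * (theta - 3.0)
    theta, m, v = adam_step(theta, grad, m, v, t)
```

The paper's variant replaces `grad` with a composite of the current and predicted gradients and updates only a randomly chosen block of coordinates per step.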

Journal ArticleDOI
TL;DR: In this paper, the authors developed a control system using an Elman neural network (ENN) and nonsingular terminal sliding mode control (NTSMC) to improve the automatic landing capability of carrier-based aircraft based on direct lift control (DLC) when subjected to carrier air-wake disturbance and actuator failure.
Abstract: The purpose of this paper is to develop a control system using the Elman neural network (ENN) and nonsingular terminal sliding mode control (NTSMC) to improve the automatic landing capability of carrier-based aircraft based on direct lift control (DLC) when subjected to carrier air-wake disturbance and actuator failure. First, the carrier-based aircraft landing model is derived. Then, the NTSMC is proposed to ensure the system's robustness and achieve accurate trajectory tracking in finite time. Owing to its nonsingular design, NTSMC effectively improves the steady-state response of the control system. In addition, the ENN is derived using an adaptive learning algorithm to approximate the actuator faults and system uncertainties. To further ensure accurate tracking of the ideal glide path by the carrier-based aircraft, the NTSMC system using an ENN estimator is proposed. Finally, this method is tested by adding different types of actuator failures. The simulation results show that the designed longitudinal fault-tolerant carrier landing system has strong robustness and fault-tolerant ability and improves the accuracy of carrier-based aircraft landing trajectory tracking.

Journal ArticleDOI
TL;DR: In this article, a hybrid-learning method is introduced for the first time to detect and assess arousal incidents; it is used to identify sleep stages and arousal disorders from EEG-ECG signals.
Abstract: The vast majority of sleep disturbances are caused by various types of sleep arousal. To diagnose sleep disorders and prevent health problems such as cardiovascular disease and cognitive impairment, sleep arousals must be accurately detected. Consequently, sleep specialists must spend considerable time and effort analyzing polysomnography (PSG) recordings to determine the level of arousal during sleep. The development of an automated sleep arousal detection system based on PSG would considerably benefit clinicians. We quantify the EEG-ECG using Lyapunov exponents, fractals, and wavelet transforms to identify sleep stages and arousal disorders. In this paper, an efficient hybrid-learning method is introduced for the first time to detect and assess arousal incidents. A modified drone squadron optimization (mDSO) algorithm is used to optimize a support vector machine (SVM) with a radial basis function (RBF) kernel. The EEG-ECG signals are preprocessed samples from the SHHS sleep dataset and the PhysioBank Challenge 2018. In comparison with other traditional methods for identifying sleep disorders, our physiological-signal correlation approach performs much better. Based on the proposed model, the average error rate was less than 2% and 7%, respectively, for the two-class and four-class problems. Additionally, the five sleep stages are classified correctly 92.3% of the time. In clinical trials of sleep disorders, the hybrid-learning model based on EEG-ECG signal correlation features is effective in detecting arousals.

Journal ArticleDOI
TL;DR: In this paper, an intelligent deep learning framework, IGPred-HDnet, was developed for the discrimination of immunoglobulin proteins (IGPs) and non-IGPs based on feature extraction from graphical and statistical features (FEGS), amphiphilic pseudo-amino acid composition (Amp-PseAAC), and dipeptide composition (DPC).
Abstract: Motivation. Immunoglobulin proteins (IGPs), also called antibodies, are glycoproteins that act as B-cell receptors against external or internal antigens such as viruses and bacteria. IGPs play a significant role in diverse cellular processes ranging from adhesion to cell recognition. In-silico IGP identification is faster and more cost-effective than wet-lab technological methods. Methods. In this study, we developed an intelligent theoretical deep learning framework, "IGPred-HDnet," for the discrimination of IGPs and non-IGPs. Three types of promising descriptors, feature extraction based on graphical and statistical features (FEGS), amphiphilic pseudo-amino acid composition (Amp-PseAAC), and dipeptide composition (DPC), are used to extract graphical, physicochemical, and sequential features. Next, the extracted attributes are evaluated through machine learning classifiers, i.e., decision tree (DT), support vector machine (SVM), k-nearest neighbour (KNN), and a hierarchical deep network (HDnet). The proposed predictor IGPred-HDnet was trained and tested using 10-fold cross-validation and an independent test. Results and Conclusion. The success rates of IGPred-HDnet in terms of accuracy (ACC) and Matthews correlation coefficient (MCC) on the training and independent datasets (Dtrain, Dtest) are ACC = 98.00% and 99.10%, and MCC = 0.958 and 0.980, respectively. The empirical outcomes demonstrate that the IGPred-HDnet model, using the novel FEGS feature and the HDnet algorithm, achieved superior predictions on both datasets compared with other existing computational models. We hope this research will provide great insights into the large-scale identification of IGPs and help pharmaceutical companies in new drug design.
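Of the three descriptors, dipeptide composition (DPC) is the simplest: the frequency of each of the 400 ordered amino-acid pairs in the sequence. A sketch in plain Python, with a toy sequence standing in for a real IGP:

```python
from itertools import product

AA = "ACDEFGHIKLMNPQRSTVWY"                                # 20 standard amino acids
DIPEPTIDES = ["".join(p) for p in product(AA, repeat=2)]   # all 400 ordered pairs

def dpc(seq):
    """Dipeptide composition: 400-dimensional frequency vector of adjacent
    amino-acid pairs, normalized by the number of pairs in the sequence."""
    total = len(seq) - 1
    counts = {d: 0 for d in DIPEPTIDES}
    for i in range(total):
        counts[seq[i:i + 2]] += 1
    return [counts[d] / total for d in DIPEPTIDES]

vec = dpc("ACDEFAC")   # toy sequence standing in for a real IGP sequence
```

The resulting fixed-length vector is what a downstream classifier such as the SVM or HDnet would consume, regardless of the original sequence length.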

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed an improved sparrow search algorithm (ISSA), which uses Chebyshev chaotic map and elite opposition-based learning strategy to initialize the population and improve the quality of the initial population.
Abstract: The sparrow search algorithm (SSA) is a novel swarm intelligence optimization algorithm with a fast convergence speed and strong global search ability. However, SSA also has shortcomings: the quality of the initial population is unstable, the search easily falls into local optima, and population diversity decreases over the course of iterations. To solve these problems, this paper proposes an improved sparrow search algorithm (ISSA). ISSA uses a Chebyshev chaotic map and an elite opposition-based learning strategy to initialize the population and improve its quality. In the producer location update, a dynamic weight factor and a Levy flight strategy are introduced to avoid falling into local optima. A mutation strategy is applied to the scrounger location update, performing mutation on individuals to increase population diversity. To verify the feasibility and effectiveness of ISSA, it is tested on 23 benchmark functions. The results show that, compared with seven other algorithms, ISSA has higher convergence accuracy, faster convergence speed, and stronger stability. Finally, ISSA is used to optimize the sound field of high-intensity focused ultrasound (HIFU). The results show that ISSA can effectively improve the focusing performance and reduce the influence of sound field sidelobes, which is of great benefit for HIFU treatment.
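The chaotic-initialization idea can be sketched concretely. The snippet below is an assumed simplification, not the paper's implementation: a Chebyshev map (x_{k+1} = cos(a·arccos(x_k))) generates candidates, elite opposition-based learning mirrors them inside the bounds, and the best half is kept; the `sphere` objective and all parameter values are illustrative.

```python
import math, random

def chebyshev_init(pop_size, dim, lb, ub, fitness, order=4, seed=1):
    """Chaotic + opposition-based initialization sketch: Chebyshev-map
    candidates in [lb, ub], plus their opposites lb + ub - x, keeping
    the pop_size individuals with the lowest fitness."""
    rng = random.Random(seed)
    pop = []
    for _ in range(pop_size):
        x = rng.uniform(-1, 1)
        ind = []
        for _ in range(dim):
            x = math.cos(order * math.acos(max(-1.0, min(1.0, x))))
            ind.append(lb + (x + 1) / 2 * (ub - lb))  # map [-1, 1] -> [lb, ub]
        pop.append(ind)
    opposition = [[lb + ub - v for v in ind] for ind in pop]
    merged = sorted(pop + opposition, key=fitness)  # minimization
    return merged[:pop_size]

sphere = lambda ind: sum(v * v for v in ind)  # toy benchmark objective
init_pop = chebyshev_init(10, 3, -5.0, 5.0, sphere)
```

Compared with plain uniform sampling, the chaotic sequence spreads candidates more evenly and the opposition step gives each candidate a second chance on the far side of the search space.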

Journal ArticleDOI
TL;DR: Li et al. as discussed by the authors proposed local self-ensembling learning with consistency regularization, forcing the model to concentrate more on features rather than annotations, especially in regions with high uncertainty measured by the pixelwise interclass variance.
Abstract: In medical image analysis, collecting multiple annotations from different clinical raters is a typical practice to mitigate possible diagnostic errors. For such multirater label learning problems, in addition to majority voting, it is common practice to use soft labels, full-probability distributions obtained by averaging the raters' annotations, as ground truth to train the model, which benefits from the uncertainty contained in soft labels. However, the potential information contained in soft labels is rarely studied, and it may be the key to improving the performance of medical image segmentation with multirater annotations. In this work, we aim to improve soft label methods by leveraging interpretable information from multiple raters. Considering that mis-segmentation occurs in areas with weak annotation supervision and high image difficulty, we propose to reduce the reliance on local uncertain soft labels and increase the focus on image features. Therefore, we introduce local self-ensembling learning with consistency regularization, forcing the model to concentrate more on features rather than annotations, especially in regions with high uncertainty measured by the pixelwise interclass variance. Furthermore, we utilize a label smoothing technique to flatten each rater's annotation, alleviating overconfidence at structural edges in annotations. Without introducing additional parameters, our method improves the accuracy of the soft label baseline by 4.2% and 2.7% on a synthetic dataset and a fundus dataset, respectively. In addition, quantitative comparisons show that our method consistently outperforms existing multirater strategies as well as state-of-the-art methods. This work provides a simple yet effective solution for the widespread multirater label segmentation problems in clinical diagnosis.
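The two label operations mentioned above, smoothing each rater's annotation and averaging raters into a soft label, have standard forms that can be sketched per pixel. This is the textbook label-smoothing formula y' = (1 − ε)·y + ε/K, not the paper's exact pipeline; the three toy rater annotations are made up.

```python
def smooth_label(one_hot, eps=0.1):
    """Flatten a rater's one-hot pixel annotation toward uniform:
    y' = (1 - eps) * y + eps / K (standard label smoothing)."""
    k = len(one_hot)
    return [(1 - eps) * v + eps / k for v in one_hot]

def average_raters(labels):
    """Soft label for a pixel: the mean of the (smoothed) rater labels."""
    n = len(labels)
    return [sum(col) / n for col in zip(*labels)]

# Three raters label one pixel for a 2-class problem; one disagrees.
raters = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
soft = average_raters([smooth_label(r, eps=0.1) for r in raters])
```

The resulting distribution (about 0.65 / 0.35 here) keeps the majority vote but no longer claims certainty, which is what lets the consistency term shift trust toward image features in high-variance regions.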

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a data preprocessing method based on the angle and relative distance feature enhancement and a ball-motion pose recognition model based on LSTM-Attention.
Abstract: As living standards rise, people pay increasing attention to body management and physical exercise. Volleyball is a sport loved by many people, and recognizing and detecting volleyball postures can provide theoretical guidance and suggestions for players; applied to competitions, it can also help judges make fair and reasonable decisions. At present, pose recognition in ball sports is challenging in terms of action complexity and available research data, while also having important application value. Therefore, this article studies human volleyball pose recognition, building on existing studies of human pose recognition based on joint-point sequences and long short-term memory (LSTM). It proposes a data preprocessing method based on angle and relative-distance feature enhancement and a ball-motion pose recognition model based on LSTM-Attention. The experimental results show that the proposed preprocessing further improves recognition accuracy; for example, applying a coordinate system transformation to the joint-point coordinates improves the recognition accuracy of the five ball-motion poses by at least 0.01. In addition, the LSTM-Attention recognition model is not only sound in structural design but also competitive in gesture recognition performance.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a privacy-preserving recognition network for medical images (called MPVCNet), which uses visual cryptography (VC) to transmit images by sharing and leverages the transfer learning technology to abate the side effect resulting from the use of visual cryptography.
Abstract: The development of the mobile Internet and the popularization of intelligent sensor devices greatly facilitate the generation and transmission of massive multimedia data, including medical images and pathological models, over open networks. The popularity of artificial intelligence (AI) technologies has greatly improved the efficiency of medical image recognition and diagnosis, but it also poses new challenges to the security and privacy of medical data, and leaks of privacy-sensitive medical images occur repeatedly. Existing privacy protection methods based on cryptography or watermarking often burden image transmission. In this paper, we propose a privacy-preserving recognition network for medical images (called MPVCNet) to solve these problems. MPVCNet uses visual cryptography (VC) to transmit images as shares. Benefiting from the secret-sharing property of VC, MPVCNet can transmit images over open channels, which both protects privacy and mitigates performance loss. To address the problem that VC shares are easy to forge, we combine trusted execution environments (TEE) and blind watermarking technologies to embed verification information into the shares. We further leverage transfer learning to abate the side effects of using visual cryptography. The experimental results show that our approach maintains the trustworthiness and recognition performance of the recognition networks while protecting the privacy of medical images.
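The secret-sharing property that MPVCNet relies on can be illustrated with the simplest computational variant of visual cryptography, a (2, 2) XOR scheme: one share is pure random noise, the other is the image XORed with that noise, so either share alone is statistically independent of the image. This is a minimal sketch of the general idea, not the paper's VC construction.

```python
import os

def make_shares(image_bytes):
    """(2, 2) XOR-based secret sharing: share1 is random noise and
    share2 = image XOR share1, so each share alone reveals nothing."""
    share1 = os.urandom(len(image_bytes))
    share2 = bytes(a ^ b for a, b in zip(image_bytes, share1))
    return share1, share2

def reconstruct(share1, share2):
    """XOR the shares back together to recover the original bytes."""
    return bytes(a ^ b for a, b in zip(share1, share2))

img = bytes(range(16))  # toy "image"
s1, s2 = make_shares(img)
```

Because each share looks like noise, it can travel over an open channel; the paper's additional TEE-backed watermark then guards against a forged share being substituted in transit.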

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the relationship between distance, similarity, entropy, and inclusion measures for FFSs and establish a systematic transformation of these information measures (distance measure, similarity measure, entropy measure, and inclusion measure).
Abstract: Fermatean fuzzy sets (FFSs) have piqued the interest of researchers in a wide range of domains. The FFS framework provides a larger preference domain for modeling ambiguous information through degrees of membership and nonmembership. Furthermore, FFSs prevail over intuitionistic fuzzy sets and Pythagorean fuzzy sets owing to their broader space, adjustable parameter, flexible structure, and influential design. Information measures, a significant part of the literature, are crucial and beneficial tools widely applied in decision-making, data mining, medical diagnosis, and pattern recognition. This paper expands the literature on FFSs by proposing several innovative FFS-based information measures, namely, a distance measure, a similarity measure, an entropy measure, and an inclusion measure. We investigate the relationships among distance, similarity, entropy, and inclusion measures for FFSs. Another achievement of this research is to establish a systematic transformation among these information measures for FFSs. To accomplish this aim, new formulae for the information measures of FFSs are presented. To demonstrate the validity of the measures, we employ them in pattern recognition, building materials, and medical diagnosis. Additionally, a comparison between traditional and novel similarity measures is described in terms of counter-intuitive cases. The findings demonstrate that the innovative information measures do not include any absurd cases.
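To make the setting concrete: an FFS assigns each element a pair (μ, ν) with μ³ + ν³ ≤ 1 (the cubic constraint is what enlarges the domain relative to intuitionistic and Pythagorean sets). The snippet below sketches one simple normalized distance of the usual cubic form, d(A, B) = (1/2n) Σ (|μ_A³ − μ_B³| + |ν_A³ − ν_B³|), with similarity as its complement; this is an assumed standard form, not necessarily one of the paper's new formulae, and the example sets are made up.

```python
def ff_distance(A, B):
    """Normalized Fermatean fuzzy distance between equal-length lists of
    (mu, nu) pairs, using cubes per the Fermatean constraint."""
    n = len(A)
    total = 0.0
    for (ma, na), (mb, nb) in zip(A, B):
        total += abs(ma**3 - mb**3) + abs(na**3 - nb**3)
    return total / (2 * n)

def ff_similarity(A, B):
    """Similarity as the complement of the distance (one of the standard
    distance-to-similarity transformations)."""
    return 1.0 - ff_distance(A, B)

A = [(0.9, 0.3), (0.5, 0.6)]  # two-element FFSs; mu^3 + nu^3 <= 1 holds
B = [(0.8, 0.4), (0.5, 0.6)]
```

A pattern-recognition use is then a nearest-pattern rule: classify a sample to the known pattern with the highest `ff_similarity`.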

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors presented a multimodal emotion recognition framework based on cascaded multichannel and hierarchical fusion (CMC-HF), where visual, speech, and text signals are simultaneously utilized as multimodalin inputs.
Abstract: Humans express their emotions in a variety of ways, which inspires research on multimodal fusion-based emotion recognition that utilizes different modalities to achieve information complementation. However, extracting deep emotional features from different modalities and fusing them remains a challenging task. It is essential to exploit the advantages of different extraction and fusion approaches to capture the emotional information contained within and across modalities. In this paper, we present a novel multimodal emotion recognition framework called multimodal emotion recognition based on cascaded multichannel and hierarchical fusion (CMC-HF), where visual, speech, and text signals are simultaneously utilized as multimodal inputs. First, three cascaded channels based on deep learning technology perform feature extraction for the three modalities separately, enhancing deeper information extraction within each modality and improving recognition performance. Second, an improved hierarchical fusion module is introduced to promote intermodality interactions of the three modalities and further improve recognition and classification accuracy. Finally, to validate the effectiveness of the designed CMC-HF model, experiments are conducted on two benchmark datasets, IEMOCAP and CMU-MOSI. The results show an increase of almost 2%~3.2% in four-class accuracy on the IEMOCAP dataset and an improvement of 0.9%~2.5% in average class accuracy on the CMU-MOSI dataset compared to existing state-of-the-art methods. The ablation results indicate that the cascaded feature extraction method and the hierarchical fusion method make significant contributions to multimodal emotion recognition, suggesting that the model captures deeper intermodality and intramodality interactions. Hence, the proposed model has better overall performance, higher recognition efficiency, and better robustness.
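The shape of a hierarchical fusion module can be sketched schematically: fuse modality pairs first, then fuse the pairwise results into one joint representation. The snippet below is an assumed simplification with a toy fusion operator standing in for the paper's learned fusion layers; the feature vectors are made up.

```python
def fuse(x, y):
    """Toy fusion operator: elementwise average plus a product term,
    standing in for a learned fusion layer."""
    return [(a + b) / 2 + a * b for a, b in zip(x, y)]

def hierarchical_fusion(visual, speech, text):
    """Two-level fusion sketch: pairwise intermodality fusion first,
    then fusion of the pairwise results into a joint representation."""
    vs = fuse(visual, speech)
    vt = fuse(visual, text)
    st = fuse(speech, text)
    return fuse(fuse(vs, vt), st)

joint = hierarchical_fusion([0.2, 0.4], [0.1, 0.3], [0.5, 0.0])
```

The point of the hierarchy is that every modality pair gets an explicit interaction stage before the final combination, rather than a single flat concatenation.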

Journal ArticleDOI
TL;DR: In this paper, deep neural networks, in particular convolutional neural networks (CNNs) combined with saliency maps, are trained on power modulation spectrogram inputs to find optimal patches in a data-driven manner.
Abstract: Biomarkers based on resting-state electroencephalography (EEG) signals have emerged as a promising tool in the study of Alzheimer's disease (AD). Recently, a state-of-the-art biomarker was found based on visual inspection of power modulation spectrograms where three “patches” or regions from the modulation spectrogram were proposed and used for AD diagnostics. Here, we propose the use of deep neural networks, in particular convolutional neural networks (CNNs) combined with saliency maps, trained on power modulation spectrogram inputs to find optimal patches in a data-driven manner. Experiments are conducted on EEG data collected from fifty-four participants, including 20 healthy controls, 19 patients with mild AD, and 15 moderate-to-severe AD patients. Five classification tasks are explored, including the three-class problem, early-stage detection (control vs. mild-AD), and severity level detection (mild vs. moderate-to-severe). Experimental results show the proposed biomarkers outperform the state-of-the-art benchmark across all five tasks, as well as finding complementary modulation spectrogram regions not previously seen via visual inspection. Lastly, experiments are conducted on the proposed biomarkers to test their sensitivity to age, as this is a known confound in AD characterization. Across all five tasks, none of the proposed biomarkers showed a significant relationship with age, thus further highlighting their usefulness for automated AD diagnostics.
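The idea of locating informative spectrogram regions can be illustrated with occlusion saliency, a model-agnostic cousin of the CNN saliency maps used in the paper: mask each patch of the input and record how much the classifier score drops. This sketch is not the paper's method (which uses gradients through a trained CNN); the tiny spectrogram and "classifier" below are toys.

```python
def occlusion_saliency(spec, score_fn, patch=2, baseline=0.0):
    """Saliency sketch: zero out each patch of a 2-D spectrogram and
    record the score drop, so high-drop regions are informative."""
    rows, cols = len(spec), len(spec[0])
    base = score_fn(spec)
    sal = [[0.0] * cols for _ in range(rows)]
    for r in range(0, rows, patch):
        for c in range(0, cols, patch):
            masked = [row[:] for row in spec]
            for i in range(r, min(r + patch, rows)):
                for j in range(c, min(c + patch, cols)):
                    masked[i][j] = baseline
            drop = base - score_fn(masked)
            for i in range(r, min(r + patch, rows)):
                for j in range(c, min(c + patch, cols)):
                    sal[i][j] = drop
    return sal

# Toy "classifier": total energy in the top-left 2x2 region only.
score = lambda s: s[0][0] + s[0][1] + s[1][0] + s[1][1]
spec = [[1.0] * 4 for _ in range(4)]
sal = occlusion_saliency(spec, score)
```

Only masking the top-left patch hurts this toy score, so the saliency map singles out exactly that region, which is the behaviour used above to discover modulation-spectrogram patches in a data-driven way.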

Journal ArticleDOI
TL;DR: In this article, an enhanced multi-objective differential evolution algorithm is proposed to find a representative set of solutions with good proximity and distributivity, together with a dual strategy that adjusts the usage of different creation operators to maintain evolutionary pressure toward the true Pareto front (PF).
Abstract: In recent years, the optimization of multi-objective service composition in distributed systems has become an important issue. Existing work uses a small set of Pareto-optimal solutions to represent the Pareto front (PF). However, it does not support the complex mapping of Pareto-optimal solutions to the quality-of-service (QoS) objective space and thus has limitations in providing a representative set of solutions. We propose an enhanced multi-objective differential evolution algorithm to seek a representative set of solutions with good proximity and distributivity. Specifically, we propose a dual strategy that adjusts the usage of different creation operators to maintain evolutionary pressure toward the true PF. We then propose a reference-vector neighbor search to perform a fine-grained search. The proposed approach has been tested on a real-world dataset, where it locates a representative set of solutions with good proximity and distributivity.
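A dual-operator differential evolution step can be sketched as follows. This is an assumed simplification of the paper's dual strategy: an exploratory "rand/1" mutation while diversity matters, switching to an exploitative "best/1" mutation to press toward the front, followed by standard binomial crossover. The population and parameters are illustrative.

```python
import random

def de_offspring(pop, i, f=0.5, cr=0.9, explore=True, rng=random):
    """One DE offspring for individual i using either the rand/1
    (exploration) or best/1 (exploitation; pop[0] assumed best) base,
    then binomial crossover against the parent."""
    idx = [j for j in range(len(pop)) if j != i]
    a, b, c = rng.sample(idx, 3)
    base = pop[a] if explore else pop[0]
    mutant = [base[d] + f * (pop[b][d] - pop[c][d])
              for d in range(len(pop[i]))]
    jrand = rng.randrange(len(pop[i]))  # force at least one mutant gene
    return [mutant[d] if (rng.random() < cr or d == jrand) else pop[i][d]
            for d in range(len(pop[i]))]

rng = random.Random(0)
pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(6)]
child = de_offspring(pop, 2, rng=rng)
```

The dual strategy then amounts to choosing `explore` per generation (or per individual) based on how the population's progress toward the PF is evolving.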

Journal ArticleDOI
TL;DR: Li et al. as discussed by the authors proposed adding a matrix-induced regularization to localized SimpleMKKM (LI-SimpleMKKM-MR) to enhance the complementarity between base kernels.
Abstract: Multikernel clustering achieves clustering of linearly inseparable data by applying a kernel method to samples in multiple views. A localized SimpleMKKM (LI-SimpleMKKM) algorithm has recently been proposed to perform min-max optimization in multikernel clustering where each instance is only required to be aligned with a certain proportion of the relatively close samples. The method has improved the reliability of clustering by focusing on the more closely paired samples and dropping the more distant ones. Although LI-SimpleMKKM achieves remarkable success in a wide range of applications, the method keeps the sum of the kernel weights unchanged. Thus, it restricts kernel weights and does not consider the correlation between the kernel matrices, especially between paired instances. To overcome such limitations, we propose adding a matrix-induced regularization to localized SimpleMKKM (LI-SimpleMKKM-MR). Our approach addresses the kernel weight restrictions with the regularization term and enhances the complementarity between base kernels. Thus, it does not limit kernel weights and fully considers the correlation between paired instances. Extensive experiments on several publicly available multikernel datasets show that our method performs better than its counterparts.
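The matrix-induced regularization term has a standard form in multiple kernel clustering: λ Σ_{p,q} w_p w_q tr(K_pᵀ K_q), which penalizes putting large weights on highly correlated (redundant) kernels and thereby encourages complementarity. The sketch below computes this standard term on toy 2×2 kernels; it illustrates the penalty, not the paper's full min-max solver.

```python
def frob_inner(K1, K2):
    """Frobenius inner product tr(K1^T K2) of two kernel matrices."""
    return sum(a * b
               for row1, row2 in zip(K1, K2)
               for a, b in zip(row1, row2))

def matrix_induced_reg(weights, kernels, lam=1.0):
    """Matrix-induced regularization: lam * sum_pq w_p w_q tr(K_p^T K_q).
    Redundant kernel pairs have a large inner product, so spreading
    weight onto them is penalized."""
    m = len(kernels)
    M = [[frob_inner(kernels[p], kernels[q]) for q in range(m)]
         for p in range(m)]
    return lam * sum(weights[p] * weights[q] * M[p][q]
                     for p in range(m) for q in range(m))

K1 = [[1.0, 0.0], [0.0, 1.0]]  # identity kernel
K2 = [[1.0, 0.0], [0.0, 1.0]]  # a redundant copy of K1
reg = matrix_induced_reg([0.5, 0.5], [K1, K2])
```

In the full objective this term is added to the localized alignment loss, so the optimizer trades clustering fit against kernel redundancy when choosing the weights w.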

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors analyzed the relationship between social capital and the performance of farmers' cooperatives and explored the internal mechanism through which social capital affects performance, finding that the impact of cognitive social capital (CSC) on Cooperatives' economic benefits was insignificant.
Abstract: The purpose of this study is to understand the relationship between social capital and the performance of Farmers' Cooperatives (Cooperatives) and to explore the internal mechanism through which social capital affects their performance. This work selects two dimensions, cognitive social capital (CSC) and structural social capital (SSC), as indexes to measure the social capital of Cooperatives. An analytical framework is proposed: "Social capital-Dynamic capabilities-Organizational performance." First, according to the characteristics of Cooperatives, the most appropriate index values are determined and the original data are preprocessed. Statistical Product and Service Solutions (SPSS) and Analysis of Moment Structure (AMOS) 25.0 software are used for factor analysis. A financial performance evaluation model of Cooperatives based on a backpropagation neural network (BPNN) is constructed. Then, based on survey data from 212 Cooperatives in Liaoning Province, a structural equation model (SEM) is used to test the interaction paths in "Social capital-Dynamic capabilities-Organizational performance." The results show that SSC's standardized regression coefficients (SRCs) on Cooperatives' economic benefits and member satisfaction are 0.208 and 0.095, respectively, significant at the 1% level. The case analysis concludes that the larger the structural network embedded in Cooperatives, the more conducive it is to obtaining extensive resources; Cooperatives can thus absorb advanced experience and compensate for the lack of internal resources and experience. The SRC of CSC on Cooperatives' economic benefits is 0.336 with a P value of 0.204, indicating an insignificant impact of CSC on Cooperatives' economic benefits.
This work considers environmental variability, uses dynamic capacity as an independent variable, opens the “black box” between social capital and the performance of Cooperatives, and reveals the intermediate path between the two.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a framework of encrypted traffic anomaly detection based on parallel automatic feature extraction, called deep encrypted traffic detection (DETD), which uses a parallel small-scale multilayer stack autoencoder to extract local traffic features from encrypted traffic and then adopts an L1 regularization-based feature selection algorithm to select the most representative feature set for the final encrypted traffic intrusion detection task.
Abstract: With an increasing number of network attacks using encrypted communication, anomaly detection for encrypted traffic is of great importance to ensure reliable network operation. However, existing feature extraction methods for encrypted traffic anomaly detection struggle to extract effective features, resulting in low efficiency. In this paper, we propose a framework for encrypted traffic anomaly detection based on parallel automatic feature extraction, called deep encrypted traffic detection (DETD). The proposed DETD uses parallel small-scale multilayer stacked autoencoders to extract local traffic features from encrypted traffic and then adopts an L1 regularization-based feature selection algorithm to select the most representative feature set for the final anomaly detection task. The experimental results show that DETD has promising robustness in feature extraction: its feature extraction efficiency is 66% higher than that of a conventional stacked autoencoder, and its anomaly detection performance reaches 99.998%, outperforming the deep full-range framework and other neural network anomaly detection algorithms.
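The L1-based selection step can be sketched with the standard proximal operator of the L1 norm: soft-threshold the learned feature weights and keep only the indices that stay nonzero. This is an assumed simplification of DETD's selector, and the weight vector below is made up.

```python
def soft_threshold(w, lam):
    """Proximal operator of the L1 norm: shrink each weight toward zero
    by lam, zeroing those with magnitude below lam."""
    return [max(abs(v) - lam, 0.0) * (1 if v > 0 else -1) for v in w]

def select_features(weights, lam=0.1):
    """L1-style feature selection sketch: soft-threshold the feature
    weights and return the indices that survive."""
    shrunk = soft_threshold(weights, lam)
    return [i for i, v in enumerate(shrunk) if v != 0.0]

# Toy autoencoder-derived feature weights; only two are clearly useful.
kept = select_features([0.8, -0.05, 0.02, -0.6, 0.09], lam=0.1)
```

Only the surviving feature indices are passed on to the downstream detector, which is how the L1 penalty trims redundant autoencoder features.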

Journal ArticleDOI
TL;DR: In this paper, the target classification information and semantic location information are obtained through the fusion of the target detection model and the deep semantic segmentation model, which provides strong robustness and enhances the scene adaptability of feature extraction as well as the accuracy of target position detection.
Abstract: Object detection and recognition is a very important topic with significant research value. This research develops an optimised moving-target identification model based on a convolutional neural network (CNN) to address the issues of insufficient positioning information and low target detection accuracy. In this article, the target classification information and semantic location information are obtained by fusing the target detection model and the deep semantic segmentation model. The classification and position branches of the target detection model are fed by simultaneously fusing image features carrying various kinds of information in a pyramid of multiscale image features, so that the matched fused features allow the detection model to detect targets of various sizes and shapes. According to the experimental findings, this method's accuracy is 0.941, which is 0.189 higher than that of the LSTM-NMS algorithm. Through the transfer of CNN features and the learning of context information, this technique is highly robust and enhances the scene adaptability of feature extraction as well as the accuracy of moving-target position detection.
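The multiscale pyramid fusion described above typically follows a top-down pattern: upsample a coarse, semantically strong feature map and add it to the next finer map. The sketch below shows one such step with nearest-neighbour upsampling on toy 2-D maps; it is a schematic of the general pyramid idea, not this paper's exact architecture.

```python
def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a 2-D feature map."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in (0, 1)]  # duplicate columns
        out.append(wide)
        out.append(wide[:])                      # duplicate rows
    return out

def topdown_fuse(coarse, fine):
    """One pyramid fusion step: upsample the coarse map to the finer
    resolution and add it elementwise to the finer map."""
    up = upsample2x(coarse)
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(up, fine)]

coarse = [[1.0]]                       # 1x1 coarse, semantically strong map
fine = [[0.5, 0.5], [0.5, 0.5]]        # 2x2 finer, spatially precise map
fused = topdown_fuse(coarse, fine)
```

Repeating this step down the pyramid gives every scale both fine spatial detail and coarse semantics, which is what lets one detector head handle targets of varied sizes and shapes.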