
Showing papers in "Advances in intelligent systems and computing in 2023"


Journal ArticleDOI
TL;DR: In this article, text preprocessing and a Naive Bayes classifier were used to classify Twitter sentiments, and the proposed model achieved better results on performance parameters such as precision, recall and accuracy.
Abstract: “Computational” sentiment analysis can determine whether a sentiment is favorable, negative, or neutral. Another term for this approach is “opinion mining,” or obtaining a speaker’s sentiments. Businesses use it to develop strategies, learn what customers think about products or brands, how people react to campaigns or new product releases, and why they do not buy certain products. It is used in politics to keep track of political ideas and to check for contradictions between government claims and actions. It can even be used to predict election results! It is also used to track and analyze social phenomena, such as recognizing dangerous circumstances and evaluating blogging mood. In this paper, we tackle the problem of sentiment categorization using the Twitter dataset. To analyze sentiment, preprocessing and Naive Bayes classifier approaches are utilized. As a result, we applied a text preprocessing and classification strategy and improved our classification accuracy score on the Kaggle public leaderboard. The aim of this paper is to classify Twitter sentiments using a machine learning algorithm based on the Naïve Bayes classifier. The proposed model indicated better accuracy and precision based on performance parameters such as precision, recall and accuracy.
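
As a rough illustration of the preprocessing-plus-Naive-Bayes pipeline described above (this is a generic scikit-learn sketch, not the authors' code; the cleaning rules and toy tweets are assumptions):

# Minimal sketch of a preprocess + Naive Bayes sentiment pipeline (illustrative only).
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

def clean_tweet(text):
    """Simple preprocessing: lowercase, strip URLs and mentions, keep letters only."""
    text = text.lower()
    text = re.sub(r"http\S+|@\w+", " ", text)
    return re.sub(r"[^a-z\s]", " ", text)

# Toy data standing in for a labelled Twitter corpus (assumption).
tweets = ["I love this phone, great battery!", "Worst service ever, never again", "It arrived on time"]
labels = ["positive", "negative", "neutral"]

model = Pipeline([
    ("vectorizer", CountVectorizer(preprocessor=clean_tweet, stop_words="english")),
    ("classifier", MultinomialNB()),
])
model.fit(tweets, labels)
print(model.predict(["worst battery ever"]))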

2 citations


Journal ArticleDOI
TL;DR: In this article, diseases of cotton plants are identified using a fuzzy rough C-means (FRCM) clustering algorithm for segmentation and a CNN for classification, since any pathogenic attack must be identified immediately and proper measures taken, as it affects both the quality and quantity of the yield.
Abstract: Cotton, rightly called the ‘silver fiber’, is the most profitable crop in horticulture. Any pathogenic attack on such plants should be identified immediately and proper measures taken, as it affects both the quality and quantity of the yield. While there are many algorithms and techniques available to detect such diseases, this paper uses the fuzzy rough C-means (FRCM) clustering algorithm for segmentation and a convolutional neural network (CNN) for classification. The diseases identified are bacterial blight, anthracnose, Cercospora leaf spot and Alternaria of the cotton plant, using a dataset containing 400 self-captured, on-field images, with an accuracy of 99%.
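
A minimal sketch of the classification half of such a pipeline is shown below, assuming pre-segmented leaf patches; the layer sizes, input resolution and Keras framework are assumptions, not the paper's network:

# Illustrative CNN classifier for leaf-disease categories (a sketch, not the paper's model).
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 4  # bacterial blight, anthracnose, Cercospora leaf spot, Alternaria

model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),          # segmented leaf patches, e.g. from FRCM clustering
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=20, validation_split=0.2)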

1 citations


Journal ArticleDOI
TL;DR: In this paper , an edge detection method is proposed by combining wavelet transform and Sobel operator through Xilinx system generator (XSG) in order to reduce the computational time.
Abstract: The development of and demand for medicinal plant leaf products are growing rapidly. In the past few years, several software implementations of image edge detection algorithms have been described in the literature, but few attempts have been made to describe hardware implementations of edge detection algorithms. Hence, in this paper, an edge detection method is proposed by combining the wavelet transform and the Sobel operator through the Xilinx System Generator (XSG) in order to reduce the computational time. Hardware implementation of this proposed model is done on a Spartan-3E Field Programmable Gate Array (FPGA) kit. The performance is analyzed in terms of the PSNR between the XSG and FPGA results.
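
A software analogue of the wavelet-plus-Sobel idea (the paper's contribution is the hardware XSG/FPGA implementation; the file name and wavelet choice below are placeholders) might look like this:

# Software sketch of a wavelet + Sobel edge detector (illustrative, not the hardware design).
import cv2
import numpy as np
import pywt

img = cv2.imread("leaf.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)   # placeholder file name

# One-level 2-D discrete wavelet transform; keep the approximation band to suppress noise.
approx, (horiz, vert, diag) = pywt.dwt2(img, "haar")

# Sobel gradients on the low-frequency approximation, then gradient magnitude.
gx = cv2.Sobel(approx, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(approx, cv2.CV_32F, 0, 1, ksize=3)
edges = cv2.magnitude(gx, gy)

cv2.imwrite("edges.png", cv2.normalize(edges, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8))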

1 citations


Journal ArticleDOI
TL;DR: In this paper, infrastructure as code (IaC) is presented as a set of methodologies that uses code to install packages, set up virtual machines and networks, and configure environments.
Abstract: In the present-day tech stack, cloud computing is evolving as a successful and popular field of technology in which new businesses achieve success by deploying their functionalities, products, data, and services on the cloud instead of on-premises systems, without depending on any physical component. Infrastructure as code (IaC) is a set of methodologies that uses code to install packages, set up virtual machines and networks, and configure environments. A successful IaC implementation and its adoption by developers require a broad set of skills and knowledge. It is the DevOps practice of provisioning an application's infrastructure and managing it through machine-readable configuration files instead of manual hardware configuration.
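
A toy sketch of the underlying idea, with made-up resource names and no real cloud API (real IaC tools such as Terraform or Ansible reconcile declared state against provider APIs):

# Toy illustration of infrastructure-as-code: infrastructure is declared as data,
# and a tool plans the actions needed to make actual state match the declaration.
desired_state = {
    "vm-web-01": {"type": "virtual_machine", "cpus": 2, "memory_gb": 4, "packages": ["nginx"]},
    "net-public": {"type": "network", "cidr": "10.0.0.0/24"},
}

actual_state = {
    "vm-web-01": {"type": "virtual_machine", "cpus": 1, "memory_gb": 4, "packages": ["nginx"]},
}

def plan(desired, actual):
    """Compute which resources must be created or updated to match the declaration."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    return actions

for action, name, spec in plan(desired_state, actual_state):
    print(f"{action}: {name} -> {spec}")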

1 citations


Journal ArticleDOI
TL;DR: In this article, a modified extreme learning machine (MELM) and a wavelet neural network (WNN) are proposed to learn the chosen characteristics, producing trained ML models.
Abstract: Alopecia Areata is defined as an autoimmune disorder that causes hair loss. Millions of people around the world are affected by this disorder. Machine learning (ML) techniques have demonstrated potential in different fields of dermatology. We propose the classification of Alopecia Areata (AA) types from human scalp hair images. First, scalp hair images from both healthy subjects and different AA types are acquired and pre-processed to improve the global contrast of those images. Then, color, texture, and shape characteristics are extracted from the pre-processed images, and an artificial algae algorithm (AAA) is applied to choose the most relevant characteristics. Further, a modified extreme learning machine (MELM) and a wavelet neural network (WNN) are proposed to learn the chosen characteristics, producing trained ML models. These trained models are used to classify new images into the various classes of AA. Finally, the experimental outcomes reveal that AAA-WNN and AAA-MELM on the scalp hair image corpus achieve 90.37% and 92.64% accuracy, respectively, compared to classical ML algorithms for AA classification and diagnosis.
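
For orientation, a plain (unmodified) extreme learning machine can be sketched in a few lines of NumPy; the feature matrix, class count and hidden-layer size below are placeholders, and the paper's MELM and WNN are not reproduced:

# Sketch of a standard extreme learning machine (ELM) classifier in NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24))            # e.g. colour/texture/shape features per image (placeholder)
y = rng.integers(0, 4, size=200)          # e.g. healthy + three alopecia areata classes (placeholder)
Y = np.eye(4)[y]                          # one-hot targets

n_hidden = 100
W = rng.normal(size=(X.shape[1], n_hidden))   # random, untrained input weights
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                        # hidden-layer activations
beta = np.linalg.pinv(H) @ Y                  # output weights by least squares

pred = np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
print("training accuracy:", (pred == y).mean())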

1 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate the application of spectroscopy, made viable by recent advancements in technology, to early plant disease and stress detection, first identifying suitable frequencies of the electromagnetic spectrum and machine learning algorithms for the task.
Abstract: This paper reports on recent successful work aimed at preventing crop loss and failure before visible symptoms are present. Food security is critical, especially after the COVID-19 pandemic. Detecting early-stage plant stresses in agriculture is essential to minimizing crop damage and maximizing yield. Identification of both the stress type and cause is a non-trivial multitask classification problem. However, the application of spectroscopy to early plant disease and stress detection has become viable with recent advancements in technology. Suitable frequencies of the electromagnetic spectrum and machine learning algorithms were thus first investigated. This guided data collection in two sessions by capturing standard visible images in contrast with images from multiple spectra (VIS-IR). These images covered six plant species that were carefully monitored from healthy to dehydrated stages. Promising results were achieved using VIS-IR compared to standard visible images on three deep learning architectures. Statistically significant accuracy improvements were shown for VIS-IR for early dehydration detection, where ResNet-44 modelling of VIS-IR input yielded 92.5% accuracy compared to 77.5% on visible input across general plant species. Moreover, ResNet-44 achieved good species separation.
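
One common way to feed multi-spectral (VIS-IR) input to an off-the-shelf CNN is to widen its first convolution; the sketch below uses torchvision's resnet18 and an assumed band count, not the paper's ResNet-44:

# Sketch of adapting a ResNet to multi-spectral input by replacing the first convolution.
import torch
import torch.nn as nn
from torchvision import models

NUM_BANDS = 5      # e.g. RGB + two infrared bands (placeholder)
NUM_CLASSES = 2    # healthy vs dehydrated

model = models.resnet18(weights=None)
model.conv1 = nn.Conv2d(NUM_BANDS, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

dummy = torch.randn(4, NUM_BANDS, 224, 224)   # a batch of multi-spectral image cubes
print(model(dummy).shape)                     # torch.Size([4, 2])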

1 citations


Journal ArticleDOI
TL;DR: In this paper, a self-designed and customized tool is proposed as best suited to consistently assess students' learning gains in the Software Engineering Virtual Lab (SE VLab) at the Indian Institute of Technology Kharagpur, India.
Abstract: The most measured learning outcomes, according to the extant literature, are students’ grades. However, such grades may not be the most accurate reflection of pupils’ learning efficacy. Due to the inherent unpredictability of the grading process, grades may not be an effective instrument for measuring learning and for assessing students’ laboratory performance. In this situation, a self-designed and customized tool might be best suited to consistently help assess students’ learning gains. With support from the Ministry of Human Resource Development (MHRD), Government of India, a Software Engineering Virtual Lab (SE VLab) ( http://vlabs.iitkgp.ernet.in/se/ ) was built for engineering students at the Indian Institute of Technology Kharagpur, India. This lab introduces students to a variety of essential concepts in software engineering. Existing literature states that the evaluation procedure in a standard traditional laboratory setting is fundamentally flawed, subjective, and vulnerable to injustice, with the possibility of prejudice. The researchers in this investigation have explored a novel approach for measuring learning gains and educational outcomes. Here, a tool is developed to assess the efficacy of virtual learning, and the findings are statistically confirmed. Using the developed tool, the researchers also tested the SE VLab in several pedagogical scenarios and in diverse technical setups. In terms of learning gains, the results suggest that the SE VLab is more efficient than identical traditional SE laboratories.
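
One widely used summary statistic for learning gain is Hake's normalized gain computed from pre- and post-test scores; the scores below are invented, and the SE VLab tool's own metric may differ:

# Hake's normalized gain, a common way to quantify learning gain from pre/post test scores.
def normalized_gain(pre, post, max_score=100.0):
    """g = (post - pre) / (max - pre); 1.0 means all available gain was realised."""
    return (post - pre) / (max_score - pre)

print(normalized_gain(pre=40, post=70))   # 0.5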

1 citations


Journal ArticleDOI
TL;DR: In this article , a new dataset is built, containing 20,000 images pertaining to the original QR code and noisy QR codes, and three well-known machine learning algorithms (logistic regression (LG), support vector machine (SVM), and convolutional neural network (CNN)) are exploited to segregate noisy images among original QR Code images.
Abstract: The resurrection of the quick-response (QR) code has been made possible by the expansion of mobile network coverage combined with a rise in smartphone online content over the years. QR codes have become much more accessible through the integration of a code reader in smart devices, thus removing several unpleasant procedures and providing faster access to crucial information. However, noise in printed images is unavoidable owing to printer processes and restricted printing technology, and it may decrease the quality of a QR code image during digital image collection and transmission, which may eventually cause failures while scanning and extracting the actual information. As a result, this study proposes an intelligent image classification strategy to correctly identify noisy and original QR code types. For this, a new dataset is built, containing 20,000 images pertaining to original QR codes and noisy QR codes. Later, the study exploited three well-known machine learning algorithms (logistic regression (LG), support vector machine (SVM), and convolutional neural network (CNN)) to segregate noisy images from original QR code images. The experimental results show that SVM outperformed the others by attaining an overall accuracy of 97.5%, precision of 97.50%, recall of 97.5%, and F1-score of 97.5%, while LG is a close competitor, achieving 97.25% accuracy, 97.31% precision, 97.22% recall, and 97.25% F1-score.
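
A minimal sketch of the SVM branch of such a comparison, with random arrays standing in for the flattened QR-code images (the 20,000-image dataset itself is not reproduced here):

# Sketch of binary original-vs-noisy classification of flattened QR-code images with an SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.random((400, 32 * 32))        # placeholder for flattened grayscale QR images
y = rng.integers(0, 2, size=400)      # 0 = original, 1 = noisy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["original", "noisy"]))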

1 citations



BookDOI
TL;DR: In this book, the authors present articles from international-level performing artists in Indian classical music, dance, drama and folk music.
Abstract: This book presents articles from the international level performing artists in Indian classical music, dance, drama and folk music.

Journal ArticleDOI
TL;DR: In this paper, the thermal performance of an artificially roughened solar air heater (SAH) with the recently proposed baffle roughness geometry has been analyzed through a computational fluid dynamics (CFD) simulation.
Abstract: The thermal performance of an artificially roughened solar air heater (SAH) with the recently proposed baffle roughness geometry has been analyzed through a computational fluid dynamics (CFD) simulation. Taguchi's experimental design method has been used to create the experimental designs. For this analysis, the CFD software ANSYS FLUENT has been used to evaluate the effect of the air inlet temperature (298.15–313.15 K), air inlet velocity (1.35–5.41 m/s), and heat flux (400–1000 W/m2) on the performance of the SAH, and the outcomes are compared with a conventional duct. Assuming perfect insulation on the other three sides of the SAH, the absorber plate is heated by the applied heat flux. The maximum values of the Nusselt number (Nu) and friction factor (f) are found to be 123.59 and 0.13 for the roughened SAH and 45.39 and 0.016 for the conventional SAH, respectively. In comparison with a smooth duct, the friction factor and Nu are increased by 8.13 and 2.72 times, respectively. The maximum performance of the solar air heater is found at (Ti) = 298.15 K, heat flux = 1000 W/m2, and (Vi) = 5.41 m/s.
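
From the reported enhancement ratios one can compute a standard thermo-hydraulic performance parameter; this derived figure is ours, not quoted in the paper:

# Thermo-hydraulic performance parameter (Webb-style figure of merit) from the abstract's ratios.
nu_ratio = 2.72   # Nu(roughened) / Nu(smooth)
f_ratio = 8.13    # f(roughened) / f(smooth)

thpp = nu_ratio / f_ratio ** (1 / 3)
print(f"THPP = {thpp:.2f}")   # ~1.35 > 1, so the heat-transfer gain outweighs the friction penalty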

Journal ArticleDOI
TL;DR: In this paper, the authors used recurrent neural network (RNN) and gated recurrent unit (GRU)-based DL models for feature extraction and training, and the proposed ensemble model gives higher performance compared to the individual ML models.
Abstract: Breast cancer (BC) is one of the deadliest cancers for women across the world. Early detection of breast cancer is very important for saving lives. Though it is one of the most critical diseases, no permanent cure has been developed so far. Artificial intelligence (AI) and machine learning (ML) have been playing a vital role in the effective and quick detection of this disease and in increasing survival rates. Deep learning (DL) technologies are helping with the analysis of the most important features affecting the prediction and detection of serious breast cancer disease. This research paper focuses on solving the problem of BC detection using the Wisconsin Diagnosis Breast Cancer (WDBC) data set by applying different ML models after training and validation. Additionally, various performance metrics have been calculated and studied. Various ensemble models have also been developed for improved detection of BC. Recurrent neural network (RNN) and gated recurrent unit (GRU)-based DL models are used for feature extraction and training. The classification models used in this work are random forest (RF), support vector machine (SVM), k-nearest neighbor (KNN), and logistic regression (LR). After classification, majority voting and stacking ensemble models have been applied for better performance. After exhaustive simulation and analysis, the performance measures of the ensemble model in terms of accuracy, precision, recall, F1-score and area under the curve (AUC) are 97.3%, 0.97, 0.971, 0.97 and 0.974, respectively. The proposed ensemble model gives higher performance compared to the individual ML models.
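
A compact sketch of the voting and stacking ensembles over RF, SVM, KNN and LR on scikit-learn's built-in copy of the WDBC data (default hyper-parameters; the RNN/GRU feature-extraction stage is not reproduced):

# Voting and stacking ensembles on the Wisconsin Diagnostic Breast Cancer data (sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

base = [
    ("rf", RandomForestClassifier(random_state=42)),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=42))),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))),
]

voter = VotingClassifier(estimators=base, voting="soft").fit(X_tr, y_tr)
stacker = StackingClassifier(estimators=base, final_estimator=LogisticRegression(max_iter=5000)).fit(X_tr, y_tr)
print("voting  :", voter.score(X_te, y_te))
print("stacking:", stacker.score(X_te, y_te))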

Journal ArticleDOI
TL;DR: In this article , a computational model was proposed for checking a poetic piece for its musicality from the perspective of Hindustani classical music (HCM) using Haiku-bandish (HB) composition.
Abstract: Musicality is the potential of a poem to become a song. The ability to select a musical poem from a collection of poems is an essential skill for a musician, one that is assumed to be acquired through years of practice dealing with lyrics. Answering the question of what makes a poem musical calls for research that, to the best of our knowledge, has not been addressed in the literature. The computational model discussed in this paper checks a poetic piece for its musicality from the perspective of Hindustani Classical Music (HCM). The length of lyrics selected for an HCM performance has reduced over the centuries. The trend may continue, and we may see tiny poems in place of today’s 2 to 4 full lines. The case considered here is a novel type of composition, Haiku-bandish (HB), with the potential of setting classical music to the succinct Japanese classic poetry form known as Haiku. We found that a few of the assumptions about what makes a poem musical are not statistically verifiable. The sequence information about the long and short notes at the beginning and at the end is a better choice of feature than the bag-of-symbols model that considers information about alliteration, word length and occurrences of long and short notes. The testing set performance demonstrates that accuracy and F-score are both 85%. A unique byproduct of this research is a set of 16 HB in Marathi, the language of the Maharashtra state of India, to which Raaga music has been set by following the time-conventions of HCM.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a new crowd anomaly detection approach with three major stages, namely segmentation, feature extraction, and classification, where segmentation is performed using fuzzy C-means (FCM) clustering.
Abstract: Enhanced approaches to detecting anomalies in crowd videos could help in crowd surveillance. In vision-based surveillance, this is an attractive research topic. Previous researchers have used temporal and spatial information obtained from videos to detect different types of abnormal crowd behavior. This article proposes a new crowd anomaly detection approach with three major stages, namely segmentation, feature extraction, and classification. Initially, the input frame is passed to the segmentation stage, where segmentation is performed using Fuzzy C-means (FCM) clustering. The feature extraction stage extracts Histogram of Gradient (HoG) and Center Symmetric Local Binary Pattern (CSLBP) based features. Further, an optimized Deep Neural Network (DNN) is utilized for the detection of anomalies. To enhance the performance of the proposed work, the DNN weights are optimally tuned via the developed OBL-added Shark Smell Optimization (OBLSSO) model. Finally, the suggested system's outcomes are compared to those of past methods using a variety of metrics.
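
The hand-crafted feature stage can be sketched as follows; note that plain uniform LBP stands in for CSLBP here, and the OBLSSO-tuned DNN is not reproduced:

# Sketch of HoG + LBP-histogram features per frame (illustrative stand-in for the paper's CSLBP).
import numpy as np
from skimage.feature import hog, local_binary_pattern

frame = (np.random.rand(128, 128) * 255).astype(np.uint8)   # placeholder for a segmented grayscale frame

hog_vec = hog(frame, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
lbp = local_binary_pattern(frame, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)   # 10 uniform-LBP bins

features = np.concatenate([hog_vec, lbp_hist])        # input vector for the anomaly classifier
print(features.shape)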

Journal ArticleDOI
TL;DR: In this paper , a non-recursive DFS search-based approach is proposed to solve the n-queen's problem to save system memory, which yields a noteworthy result in terms of time and space.
Abstract: The N-Queens problem is the problem of placing N chess queens on an N×N chessboard such that none of them attack each other. A chess queen can move horizontally, vertically, and diagonally, so the neighbors of a queen have to be placed in such a way that there is no clash in these three directions. Scientists accept that the branching factor increases in a nearly linear fashion. Using artificial intelligence search patterns like Breadth First Search (BFS), Depth First Search (DFS), and backtracking algorithms, many academics have studied the problem and found a number of techniques to compute possible solutions to the N-Queens problem. The solutions using a blind approach, that is, uninformed searches like BFS and DFS, use recursion, and backtracking likewise uses recursion for this problem. All these recursive algorithms use the system stack, which is limited, so even for a relatively small value of N they can exhaust memory quickly, though this depends on the machine. This paper deals with the above problem and proposes a non-recursive DFS-based approach that saves system memory. In this work, Depth First Search (DFS) is used as a blind approach, or uninformed search. This experimental study yields a noteworthy result in terms of time and space.
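
A minimal non-recursive (explicit-stack) DFS solver in the spirit of the approach described above, as a sketch rather than the paper's code:

# Non-recursive depth-first search for N-Queens using an explicit stack instead of the call stack.
def solve_n_queens(n):
    solutions = []
    # Each stack entry is a partial placement: queens[i] = column of the queen in row i.
    stack = [[]]
    while stack:
        queens = stack.pop()
        row = len(queens)
        if row == n:
            solutions.append(queens)
            continue
        for col in range(n):
            # A placement is safe if no earlier queen shares the column or a diagonal.
            if all(col != c and abs(col - c) != row - r for r, c in enumerate(queens)):
                stack.append(queens + [col])
    return solutions

print(len(solve_n_queens(8)))   # 92 solutions for the standard 8x8 board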

Journal ArticleDOI
TL;DR: In this paper , the authors investigated how exercise affects glucose and insulin levels through the glucose-insulin system and concluded that insulin sensitivity improves in the patients, which is beneficial in lowering glucose levels in diabetic patients.
Abstract: Diabetes is a rapidly increasing epidemic that threatens to overwhelm global healthcare systems, wipe out some indigenous peoples, and devastate economies worldwide, particularly in developing countries. Insulin shortage or lack of insulin action is a crucial factor in diabetes. Because physical activity is expected to become a more critical aspect of diabetes care, we have attempted to investigate how exercise affects glucose and insulin levels through the glucose-insulin system. Simulation of the model was used to compare the required amount of insulin in diabetic individuals after exercise. We conclude that insulin sensitivity improves in these patients, which is beneficial in lowering glucose levels in diabetic patients.
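
For context, a Bergman-style minimal model of the glucose-insulin system can be simulated as below; the parameter values, insulin input and the crude exercise term are illustrative assumptions, not the model analysed in the paper:

# Minimal glucose-insulin sketch where exercise is crudely modelled as a boost in insulin action.
import numpy as np
from scipy.integrate import solve_ivp

Gb, Ib = 90.0, 10.0            # basal glucose (mg/dL) and insulin (uU/mL), placeholders
p1, p2, p3 = 0.03, 0.02, 1.0e-5

def insulin(t):                # simple elevated-insulin input after a meal (placeholder)
    return Ib + 40.0 * np.exp(-0.05 * t)

def exercise_factor(t):        # insulin action boosted 50% between t = 60 and 120 minutes
    return 1.5 if 60.0 <= t <= 120.0 else 1.0

def minimal_model(t, state):
    G, X = state
    dG = -p1 * (G - Gb) - X * G
    dX = -p2 * X + exercise_factor(t) * p3 * (insulin(t) - Ib)
    return [dG, dX]

sol = solve_ivp(minimal_model, (0.0, 240.0), [180.0, 0.0], dense_output=True)
print("glucose at t = 240 min:", sol.y[0, -1])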

Journal ArticleDOI
TL;DR: In this paper, an approach for recommender systems using musical preference as a metric is proposed, with experiments conducted using unsupervised machine learning (ML) techniques to study whether musical preference can act as a psychological parameter for like-mindedness, which in turn can be used to improve existing recommender systems.
Abstract: In this digital era, the virtual presence of people has been increasing exponentially. Furthermore, due to the COVID-19 pandemic and lockdowns, almost everything done in day-to-day life has gone virtual. A major portion of screen time is spent on online shopping and streaming content, including listening to music, and on social media. With a steep rise in the number of active users, the content available on the Internet has also been widely increasing. So, finding the right content for the right users is a challenging task, which is exactly what recommender systems solve. Therefore, this paper proposes an ingenious approach for recommender systems using musical preference as a metric. The experiments are conducted using unsupervised machine learning (ML) techniques to study whether musical preference can act as a psychological parameter for like-mindedness, which in turn can be used to improve existing recommender systems. Keywords: Recommendation system, Collaborative filtering, Content-based filtering, Like-mindedness, Clustering, Silhouette score, Principal component analysis, Correlations.
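
A sketch of such an unsupervised pipeline (standardize, PCA, k-means, silhouette score); the preference features here are random placeholders for the study's actual data:

# Cluster musical-preference features and score the grouping with the silhouette coefficient.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                 # e.g. per-user genre/tempo/energy preferences (placeholder)

X_std = StandardScaler().fit_transform(X)
X_2d = PCA(n_components=2).fit_transform(X_std)

for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_2d)
    print(k, "clusters -> silhouette =", round(silhouette_score(X_2d, labels), 3))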

Journal ArticleDOI
TL;DR: In this article, the authors use XGBoost and deep random forest approaches to improve recognition accuracy while utilizing less memory than traditional approaches.
Abstract: Image classification is the prime task of any machine learning system that operates with images, classifying the given input into a particular set by appropriate methods. In this work, automatic classification of numbers is carried out, mainly concentrating on the identification of house numbers in order to classify houses. Applications include tax payments, etc. Hence, in order to classify such data, XGBoost is used. The motivation behind the use of XGBoost is the optimization it provides; it also supports cross-validation, which helps avoid overfitting the data. This is a supervised learning task. The model is built upon a series of labelled data points and is subsequently tested on several unlabelled data points, and it is refined iteratively to improve the classification. The main problem with the recognition of house numbers is the predominant noise. Hence, this work concentrates on preprocessing of the data, which is done twice to eliminate noise. Further, XGBoost and deep random forest approaches are utilized to improve recognition accuracy while utilizing less memory. Results show that the deep random forest algorithm outperforms other traditional approaches.
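
A minimal XGBoost sketch on scikit-learn's small 8x8 digits set, which stands in for the house-number images; the paper's two-stage denoising preprocessing is not reproduced:

# Gradient-boosted digit classification with XGBoost (digits dataset as a stand-in).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1, objective="multi:softprob")
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))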

Journal ArticleDOI
TL;DR: In this article, a monochromatic atmospheric scattering model is used to form the proposed absorption light scattering model, which helps achieve strong results in terms of structure, detail and local feature similarity.
Abstract: Poor visibility affects not only consumer photography but also computer vision applications. Low-light image enhancement brings out the image details and the information perceived in images for human viewers and for the understanding of imagery. Using the pixel values existing in the digital image, the details of low-light images are boosted under low illumination. The primary purpose of enhancing a low-light image is to increase its visibility, and dealing with noise is a significant challenge while doing so. Previous studies have offered a variety of approaches for enhancing low-light images, for which low contrast, low brightness, severe noise and dark colors are the main issues. In this work, a monochromatic atmospheric scattering model is used to form the proposed absorption light scattering model, and the concealed details and hidden features are reproduced with sufficient and homogeneous illumination. This strategy helps achieve strong results in terms of structure, detail and local feature similarity. Mathematical formulas derived from the monochromatic ASM and the absorption light scattering model are used to produce a brighter image. When the absorption light scattering method is compared with other techniques, it produces better results in terms of contrast, brightness, sharpness, etc.
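
For reference, the standard scattering-model recovery I = J·t + A·(1−t) applied to an inverted low-light image gives a simple dehazing-style baseline; this sketch is not the absorption light scattering model proposed above, and the file name and constants are placeholders:

# Dehazing-style low-light enhancement baseline built on the atmospheric scattering model.
import numpy as np
import cv2

img = cv2.imread("low_light.jpg").astype(np.float32) / 255.0   # placeholder file name
inv = 1.0 - img                                                # inverted low-light image looks hazy

A = float(inv.max())                                           # crude atmospheric light estimate
dark = inv.min(axis=2)                                         # per-pixel minimum over channels
t = np.clip(1.0 - 0.95 * dark / A, 0.1, 1.0)[..., None]        # transmission map, kept away from zero

J_inv = (inv - A) / t + A                                      # recover the "haze-free" inverse
enhanced = np.clip(1.0 - J_inv, 0.0, 1.0)                      # invert back to a brighter image
cv2.imwrite("enhanced.jpg", (enhanced * 255).astype(np.uint8))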


Journal ArticleDOI
TL;DR: In this article, an object tracking algorithm implemented through an unmanned aerial vehicle (UAV) is presented; the system generates the trajectory taking into account the image obtained by the front camera of the drone, and the object to be detected, a moving red box, is detected using color segmentation techniques.
Abstract: The paper presents an object tracking algorithm implemented through an unmanned aerial vehicle (UAV). The system generates the trajectory taking into account the image obtained by the front camera of the drone; the object to be detected is a moving red box, and it is detected using color segmentation techniques. The red box is continuously tracked by centering it in the image frame. Open-source computer vision libraries (OpenCV) are used to process the images obtained from the drone. The software was verified through simulations with Gazebo and Rviz on the Robot Operating System (ROS) and compared with the real drone.
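
A minimal OpenCV colour-segmentation tracker in the same spirit (webcam index, HSV thresholds and the centring step are placeholders for the drone setup):

# Track a red object by HSV colour segmentation and report its offset from the frame centre.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                       # stand-in for the drone's front camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges.
    lower = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    upper = cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255]))
    mask = lower | upper
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cx_err = (x + w / 2) - frame.shape[1] / 2   # horizontal offset used to re-centre the target
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()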

Journal ArticleDOI
TL;DR: In this article, an experimental evaluation of the effect of using feature selection techniques in the domain of SMS and Email spam detection is conducted; the results show that the choice of feature selection technique has a profound effect on the performance of the spam detection model, as seen in the results generated using different evaluation measures, some of which have not previously been used in both domains.
Abstract: Spam exists in several domains, including SMS and Email, which are usually targeted by spammers to steal personal information, money, data, etc. Several models exist for SMS and Email spam detection, of which supervised learning-based models are the most efficient. However, a comprehensive study of spam detection that considers multiple domains simultaneously is missing. In this paper, an experimental evaluation of the effect of using feature selection techniques in the domain of SMS and Email spam detection is conducted. Parameters such as ROC and train/test time, along with common parameters, are used to evaluate the performance of the spam detection models. The experimental results show that the choice of feature selection technique has a profound effect on the performance of the spam detection model, as seen in the results generated using different evaluation measures, some of which have not previously been used in both domains.
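
A sketch of inserting a feature-selection step between TF-IDF features and a classifier; the toy messages and the choice of chi-squared selection are assumptions for illustration:

# TF-IDF -> chi-squared feature selection -> Naive Bayes spam classifier (illustrative pipeline).
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB

messages = ["WIN a free prize now!!!", "Meeting moved to 3pm", "Cheap loans, reply YES", "See you at lunch"]
labels = [1, 0, 1, 0]                      # 1 = spam, 0 = ham

spam_model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("select", SelectKBest(chi2, k=5)),    # keep only the k most class-indicative terms
    ("clf", MultinomialNB()),
])
spam_model.fit(messages, labels)
print(spam_model.predict(["free prize waiting, reply now"]))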

Journal ArticleDOI
TL;DR: In this paper , the authors proposed an Internet of Things (IoT) based health care system, which can continuously monitor the patient's heartbeat, temperature, and other fundamental data, as well as assess the patient status and preserve the patient information on a server.
Abstract: The health department plays a crucial role in the current pandemic situation. Today, health care is of paramount importance in every country, and in this situation the Internet of Things, based on current technology, plays a big role in health care. The Internet of Things (IoT) is a new Internet revolution and a rising field of research, particularly in health care. With the increase in wearable sensors and smartphones and the evolution of a new and advanced generation of communication, i.e., 5G technology, patients can be diagnosed swiftly, which aids in the prevention of disease transmission and the accurate identification of health concerns even when the doctor is a long distance away. Here, we can continuously monitor the patient’s heartbeat, temperature, and other fundamental data, as well as assess the patient’s status and preserve the patient’s information on a server using remote correspondence (wireless communication) based on the Wi-Fi module.
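
A toy sketch of the sensor-to-server loop described above; the read_* functions and the server URL are hypothetical, not a real device API:

# Periodically read vitals from (simulated) sensors and push them to a monitoring server.
import time
import requests

SERVER_URL = "http://example.com/api/vitals"      # hypothetical monitoring endpoint

def read_heart_rate():
    return 72        # would come from a pulse sensor on a real device

def read_temperature():
    return 36.8      # would come from a body-temperature sensor

while True:
    payload = {"patient_id": "P001", "heart_rate": read_heart_rate(), "temperature_c": read_temperature()}
    try:
        requests.post(SERVER_URL, json=payload, timeout=5)
    except requests.RequestException as exc:
        print("upload failed:", exc)
    time.sleep(60)    # report once a minute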

Journal ArticleDOI
TL;DR: In this article, a recurrent neural network (RNN) is used to classify bot and human entries according to features extracted from the dataset based on term frequency, on which the classification technique is applied.
Abstract: Fake identity is a critical problem on social media nowadays. Fake news is rapidly spread by fake identities and bots, which creates a trustworthiness issue on social media. Identifying profiles and accounts using soft computing algorithms is necessary to improve the trustworthiness of social media. The Recurrent Neural Network (RNN) categorizes each profile based on training and testing modules. This work focuses on classifying bot and human entries according to their extracted features using machine learning. Once the training phase is completed, features are extracted from the dataset based on term frequency, on which the classification technique is applied. The proposed work is very effective in detecting malicious accounts from an imbalanced social media dataset. The system provides maximum accuracy for the classification of fake and real identities on the social media dataset, achieving good accuracy with an RNN long short-term memory (LSTM) network. The system improves the classification accuracy as the number of folds in cross-validation increases. In the experimental analysis, we tested on real-time social media datasets and achieved around 96% accuracy, 100% precision, 99% recall, and a 96% F1 score on the real-time dataset.
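
A generic Keras LSTM sketch for bot-versus-human text classification; the vocabulary size, sequence length and random inputs are placeholders, not the paper's exact features:

# Embedding + LSTM binary classifier over integer-encoded token sequences (illustrative only).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB, MAXLEN = 5000, 50
X = np.random.randint(1, VOCAB, size=(256, MAXLEN))   # placeholder integer-encoded profile/tweet tokens
y = np.random.randint(0, 2, size=256)                 # 1 = bot, 0 = human

model = keras.Sequential([
    layers.Embedding(VOCAB, 64),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))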

Journal ArticleDOI
TL;DR: In this article, the spectral fluctuations of the correlation matrices of breast cancer, colon cancer and lymphoma gene expression data are shown to adhere to the laws of the Gaussian orthogonal ensemble (GOE) as predicted by Random Matrix Theory (RMT).
Abstract: Biological data such as gene co-expression networks contain many spurious, wrong or false correlations. Various studies have shown that the Random Matrix Theory (RMT) approach is a very effective method for processing such data. This paper shows that the spectral fluctuations of the correlation matrices of breast cancer, colon cancer and lymphoma gene expression data adhere to the laws of the Gaussian orthogonal ensemble (GOE) as predicted by RMT, with the density of the eigenvalues following Wigner’s semicircle and the spacing of the eigenvalues following the Wigner-surmise distribution, thereby indicating strong as well as weak or noisy correlations. Furthermore, after applying the RMT algorithm, the weak, noisy or spurious correlations are eliminated and only strongly correlated elements are retained, and the network transitions to a system of highly correlated genes as described by the Poisson statistics of RMT, which can be used for further analysis.
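
A quick numerical illustration of the GOE spacing statistics mentioned above, using a synthetic GOE matrix rather than the gene-expression correlation matrices; spectral unfolding is skipped, so the agreement with the Wigner surmise is only approximate:

# Compare nearest-neighbour eigenvalue spacings of a GOE-like matrix with the Wigner surmise
# P(s) = (pi/2) * s * exp(-pi * s^2 / 4).
import numpy as np

rng = np.random.default_rng(0)
N = 2000
A = rng.normal(size=(N, N))
goe = (A + A.T) / np.sqrt(2 * N)                  # random real symmetric (GOE-like) matrix

eigs = np.sort(np.linalg.eigvalsh(goe))
bulk = eigs[N // 4: 3 * N // 4]                   # stay in the bulk of the semicircle
spacings = np.diff(bulk)
s = spacings / spacings.mean()                    # normalise the mean spacing to 1

hist, edges = np.histogram(s, bins=20, range=(0, 3), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
wigner = (np.pi / 2) * centres * np.exp(-np.pi * centres ** 2 / 4)
for c, h, w in zip(centres, hist, wigner):
    print(f"s={c:.2f}  empirical={h:.2f}  Wigner={w:.2f}")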

Journal ArticleDOI
TL;DR: In this paper, a log-linear model based on assumptions of general repair is developed and incorporated into the Kijima virtual age model to account for the impact of repair on the severity of failure.
Abstract: Several approaches have been created for modelling the failure mechanisms of repairable devices. The models are divided into three categories based on various hypotheses regarding the repairs, i.e. minimal, general and perfect repair. In this article, our aim is to develop a log-linear model based upon the assumptions of general repair. The Kijima virtual age model incorporates this model to account for the impact of repair on the severity of failure.
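
For reference, the Kijima Type I virtual-age recursion combined with a log-linear intensity can be sketched as below; the parameter values and failure times are invented for illustration:

# Kijima Type I virtual age v_i = v_{i-1} + q * x_i with a log-linear intensity exp(b0 + b1 * v).
import math

q = 0.4                      # repair effectiveness: 0 = as good as new, 1 = as bad as old
b0, b1 = -4.0, 0.02          # log-linear intensity parameters (placeholders)
inter_failure_times = [120.0, 95.0, 80.0, 60.0]   # hours between successive failures (placeholders)

v = 0.0
for i, x in enumerate(inter_failure_times, start=1):
    v = v + q * x                            # virtual age after the i-th (general) repair
    intensity = math.exp(b0 + b1 * v)        # failure intensity at the current virtual age
    print(f"after failure {i}: virtual age = {v:.1f} h, intensity = {intensity:.4f} /h")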

Journal ArticleDOI
TL;DR: In this paper , the authors proposed a transfer learning model that can classify an ultrasound of the breast as either normal, benign, or malignant with high accuracy using a modified ResNet50 model trained initially on ImageNet dataset and further on the breast cancer ultrasound dataset.
Abstract: Breast cancer is the second most dangerous disease for women after lung cancer. As in most diseases, an early detection and the corresponding treatment of breast cancer increase the survival rate of patients. An automated system for detection of breast cancer is required as manual techniques are time consuming and expensive. In this study, we have proposed a novel transfer learning model that can classify an ultrasound of the breast as either normal, benign, or malignant with high accuracy. The proposed method uses a modified ResNet50 model trained initially on ImageNet dataset and further on the breast cancer ultrasound dataset (BUSI). We have added custom layers at the head of our model which are able to extract features from ultrasound images. Using the model described in this paper, we have achieved 97.8% accuracy in detecting breast cancer, a recall of 97.68%, precision of 99.21% and 98.44% F1-score. This deep learning model can be implemented as a component of an existing medical diagnosis system or deployed as a stand-alone system. Using our model for breast cancer diagnosis can result in decreased diagnosis time compared to traditional means and hence ensure that patients receive an early treatment for their illness.
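
A sketch of the transfer-learning setup: an ImageNet-pretrained ResNet50 with a new three-class head for normal, benign and malignant images (the custom head layers here are assumptions, not necessarily the paper's):

# ImageNet-pretrained ResNet50 with a frozen backbone and a trainable 3-class head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False

model.fc = nn.Sequential(             # trainable custom head
    nn.Linear(model.fc.in_features, 256),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(256, 3),                # normal / benign / malignant
)

dummy = torch.randn(2, 3, 224, 224)   # a batch of preprocessed ultrasound images
print(model(dummy).shape)             # torch.Size([2, 3])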

Journal ArticleDOI
TL;DR: In this article, the authors employ natural language processing to detect abusive texts among 6848 Bangla textual assertions acquired from various social media platforms; among a variety of machine learning algorithms, the multinomial Naïve Bayes classifier excels with an accuracy of 94.01%.
Abstract: Social media is a widespread and convenient means for people to communicate with one another and express their feelings, moods, and insights. Since the use of the Web for communication is growing, so too are instances of online abuse and cyberbullying. The adverse effects of online harassment and provoking behavior are becoming increasingly unsettling daily. People who often engage in abusive conduct, such as cyberbullying, must be brought to heel to keep the use of social media safe. Natural language processing (NLP) is an approach frequently used in text classification to classify statements into predetermined categories. In this study, we employed natural language processing to detect abusive texts among 6848 Bangla textual assertions acquired from various social media platforms. Among a variety of machine learning algorithms, the multinomial Naïve Bayes classifier excels with an accuracy of 94.01%. Among the deep learning algorithms, CNN outperformed Conv-LSTM, obtaining an accuracy of 89.42% compared to Conv-LSTM's 87.30%. We also compared the algorithms' accuracy with respect to the corresponding feature extraction technique.

Journal ArticleDOI
TL;DR: In this article , the authors discuss the most critical cybersecurity issues in healthcare and address security and privacy challenges, and show significant cybersecurity threats during the COVID-19 pandemic and some suggested solutions to reduce the damages.
Abstract: “Good health is a crown on the well person's head that only the ill person can see” is a famous piece of wisdom that everyone knows; surely there is nothing more important than health. With the increasing use of technology to make our lives easier, technology has come to play an essential role in healthcare, from storing patients' information to employing the Internet of Things to using the network and everything in between; for this reason, the healthcare sector has been increasingly targeted by cyberattacks. This paper will discuss the most critical cybersecurity issues in healthcare and address security and privacy challenges. Finally, we will show significant cybersecurity threats during the COVID-19 pandemic and some suggested solutions to reduce the damage.

Journal ArticleDOI
TL;DR: In this article, the current status of computational music research in Indian music is discussed through a brief review of the research themes taken up by the authors in this book, and a possible future agenda for computational Indian musicology is indicated.
Abstract: This article discusses the current status of computational music research in Indian music by giving a brief review of the research themes taken up by the authors in this book. It discusses the motivation behind such research and the various computational approaches they have taken to address diverse issues. The author feels that this is the right time to develop a taxonomy for computational musicology suitable for Indian music. At the end, the article discusses the possible challenges for computational researchers and indicates a possible future agenda for computational Indian musicology.