scispace - formally typeset
Author

Md. Robiul Islam

Bio: Md. Robiul Islam is an academic researcher from Rajshahi University of Engineering & Technology. The author has contributed to research in topics: Computer science & Turbine. The author has an h-index of 3 and has co-authored 13 publications receiving 26 citations.

Papers
Journal ArticleDOI
01 Nov 2020
TL;DR: In this article, the authors highlight the present status of urban wind farm technology and its commercial and environmental aspects, and conclude that further investigation of wind mapping and suitable turbine design is essential to make urban wind farms a reliable and feasible option for decentralized power generation.
Abstract: Wind energy is a promising option in the power generation sector owing to pollution-free power production and the worldwide sufficiency of wind resources. Installing wind turbines in all possible locations can help mitigate the rising energy demand. Built-up areas possess high potential for wind energy, including the rooftops of high-rise buildings, railway tracks, the regions between or around multistoried buildings, and city roads. Harnessing wind energy in these areas is quite challenging, since the wind there is highly variable and turbulent owing to the greater roughness of urban surfaces. This review paper endeavors to highlight the present status of urban wind farm technology and its commercial and environmental aspects. Observations and upcoming research trends are presented based on up-to-the-minute information. It is concluded that further investigation of wind mapping and suitable turbine design is essential to make urban wind farms a reliable and feasible option for decentralized power generation.

58 citations

Journal ArticleDOI
TL;DR: Graph Neural Networks (GNNs), as described in this paper, provide a generalized way to handle non-Euclidean data by exploiting the relationships among graph data, where a graph can be visualized as an unordered aggregation of nodes and edges.
Abstract: This review provides a comprehensive overview of state-of-the-art graph-based networks from a deep learning perspective. Graph networks provide a generalized way to exploit data in non-Euclidean spaces. A graph can be visualized as an aggregation of nodes and edges without any order. Data-driven architectures tend to follow a fixed neural network that tries to find patterns in feature space. These strategies have been applied successfully to many applications involving Euclidean-space data. Since graph data in a non-Euclidean space do not follow any kind of order, such solutions cannot be applied directly to exploit the node relationships. Graph Neural Networks (GNNs) solve this problem by exploiting the relationships among graph data. Recent developments in computational hardware and optimization make it possible for graph networks to learn complex graph relationships. Graph networks are therefore being actively used to solve many problems, including protein interface prediction, classification, and learning representations of fingerprints. To encapsulate the importance of graph models, in this paper we formulate a systematic categorization of GNN models according to their applications, from theory to real-life problems, provide directions for the future scope of graph-model applications, and highlight the limitations of existing graph networks.

46 citations
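The neighbourhood-aggregation idea summarized above can be sketched in a few lines of NumPy. This is an illustrative GCN-style layer on a toy four-node graph, not the survey's own formulation; the graph, features, and weights here are made-up assumptions.

```python
import numpy as np

# Toy undirected graph: 4 nodes, edges (0-1), (1-2), (2-3)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Add self-loops and symmetrically normalize: A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(4)
d = A_tilde.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))   # node features (random stand-ins)
W = rng.normal(size=(3, 2))   # layer weights (learnable in practice, random here)

# One layer: aggregate neighbour features, transform, apply ReLU
H_next = np.maximum(A_hat @ H @ W, 0.0)
print(H_next.shape)  # (4, 2)
```

Because the graph has no node order, the same layer applies unchanged if the nodes are relabeled, which is the point the abstract makes about non-Euclidean data.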

Journal ArticleDOI
TL;DR: In this paper, Contrast Limited Adaptive Histogram Equalization (CLAHE) was applied to CT images as a preprocessing step to enhance image quality, and a novel Convolutional Neural Network (CNN) model was developed to extract 100 prominent features from a total of 2482 CT scan images.
Abstract: The novel Coronavirus disease (COVID-19), recently the most infectious disease, has had a devastating effect on public health in more than 200 countries in the world. Since the detection of COVID-19 using reverse transcription-polymerase chain reaction (RT-PCR) is time-consuming and error-prone, Computed Tomography (CT) images offer an alternative means of detection. In this paper, Contrast Limited Adaptive Histogram Equalization (CLAHE) was applied to CT images as a preprocessing step for enhancing the quality of the images. After that, we developed a novel Convolutional Neural Network (CNN) model that extracted 100 prominent features from a total of 2482 CT scan images. These extracted features were then fed to various machine learning algorithms: Gaussian Naive Bayes (GNB), Support Vector Machine (SVM), Decision Tree (DT), Logistic Regression (LR), and Random Forest (RF). Finally, we proposed an ensemble model for COVID-19 CT image classification. We also showed various performance comparisons with state-of-the-art methods. Our proposed model outperforms the state-of-the-art models and achieved an accuracy, precision, and recall of 99.73%, 99.46%, and 100%, respectively.

41 citations
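The classification stage described above (CNN features fed to five classical learners, then combined in an ensemble) can be sketched with scikit-learn. Synthetic features stand in for the 100 CNN-extracted features, and soft voting stands in for the paper's ensemble, whose exact combination rule the abstract does not specify.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the 100 CNN-extracted features per CT image
X, y = make_classification(n_samples=500, n_features=100, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The five base learners named in the abstract, combined by soft voting
ensemble = VotingClassifier([
    ("gnb", GaussianNB()),
    ("svm", SVC(probability=True, random_state=0)),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(random_state=0)),
], voting="soft")

ensemble.fit(X_tr, y_tr)
print(round(ensemble.score(X_te, y_te), 2))
```

With real CLAHE-enhanced CT features the individual learners would be tuned and compared before ensembling, as the paper does.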

Journal ArticleDOI
TL;DR: In this article, an automatic pneumonia detection system has been proposed by applying the extreme learning machine (ELM) to the Kaggle CXR (Pneumonia) images.
Abstract: In this era of COVID-19, proper diagnosis and treatment of pneumonia are very important. Chest X-Ray (CXR) image analysis plays a vital role in the reliable diagnosis of pneumonia, and an experienced radiologist is required for it. However, even for an experienced radiologist, accurate diagnosis is quite challenging and time-consuming due to the fuzziness of CXR images, and identification can be erroneous because human judgement is involved. Hence, an authentic and automated system can play an important role here. In this era of cutting-edge technology, deep learning (DL) is widely used in every sector. There are several existing methods to diagnose pneumonia, but they have accuracy problems. In this study, an automatic pneumonia detection system has been proposed by applying the extreme learning machine (ELM) to the Kaggle CXR (Pneumonia) images. Three models have been studied: classification using the extreme learning machine (ELM); ELM with hybrid convolutional neural network-principal component analysis (CNN-PCA) based feature extraction; and CNN-PCA-ELM with CXR images contrast-enhanced by contrast limited adaptive histogram equalization (CLAHE). Among these three proposed methods, the final model provides the most optimistic result, achieving a recall score of 98% and an accuracy of 98.32% for multiclass pneumonia classification. For binary classification, it achieves 100% recall and 99.83% accuracy. The proposed method also outperforms the existing methods. The outcomes have been compared using several benchmarks, including accuracy, precision, and recall.

27 citations
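The extreme learning machine at the core of the pipeline above is simple to sketch from first principles: the hidden-layer weights are random and fixed, and only the output weights are solved in closed form by least squares, which is why training is fast. The toy circle-classification data below is an illustrative assumption, not the CXR task.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Fit an ELM: random fixed hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases (never trained)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta = np.linalg.pinv(H) @ y                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy binary problem: classify points by whether they fall outside the unit circle
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(float)

W, b, beta = elm_fit(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == y)
print(round(acc, 2))
```

In the paper's strongest variant, the inputs to this classifier would be CNN-PCA features of CLAHE-enhanced CXR images rather than raw coordinates.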

Journal ArticleDOI
TL;DR: In this paper, the extreme learning machine (ELM) approach was exploited to address diabetic retinopathy (DR), a medical condition in which impairment occurs to the retina caused by diabetes.
Abstract: This paper exploits the extreme learning machine (ELM) approach to address diabetic retinopathy (DR), a medical condition in which diabetes causes impairment of the retina. DR, a leading cause of blindness worldwide, involves swelling and leakage in the retinal vessels due to excessive blood sugar. An early-stage diagnosis is therefore beneficial to prevent diabetes patients from losing their sight. This study introduces a novel method to detect DR for binary and multiclass classification based on the APTOS-2019 blindness detection and Messidor-2 datasets. First, the DR images are pre-processed using Ben Graham's approach. After that, contrast limited adaptive histogram equalization (CLAHE) is used to obtain contrast-enhanced images with lower noise and more distinguishing features. Then a novel hybrid convolutional neural network-singular value decomposition (CNN-SVD) model is developed to reduce the input features for the classifiers. Finally, the proposed method uses an ELM algorithm as the classifier, which minimizes the training time cost. The experiments focus on accuracy, precision, recall, and F1-score and demonstrate the feasibility of adopting the proposed scheme for DR diagnosis. The method outperforms existing techniques and shows optimistic accuracy and recall of 99.73% and 100%, respectively, for the binary class. For five-stage DR classification, the proposed model achieves an accuracy of 98.09% and 96.26% on the APTOS-2019 and Messidor-2 datasets, respectively, outperforming the existing state-of-the-art models.

21 citations
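The SVD-based feature-reduction step in the hybrid CNN-SVD model above can be illustrated in NumPy: project the feature matrix onto its top-k right singular vectors so the classifier sees far fewer inputs. The random stand-in feature matrix and the choice k = 10 are assumptions for this sketch; the paper's actual CNN feature dimensions are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 64))   # 100 images x 64 CNN features (stand-ins)

# Truncated-SVD projection: keep the k strongest directions of variation
k = 10
U, S, Vt = np.linalg.svd(features, full_matrices=False)
reduced = features @ Vt[:k].T

# Fraction of total squared singular value "energy" the top k directions retain
energy = float((S[:k] ** 2).sum() / (S ** 2).sum())

print(reduced.shape)  # (100, 10)
print(round(energy, 2))
```

The reduced matrix is what a fast classifier such as an ELM would then be trained on, cutting both training time and redundancy in the features.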


Cited by
Dissertation
01 Jan 2011
TL;DR: In this paper, a study of rotor blade aerodynamic performances of wind turbine has been presented in which the aerodynamic effects changed by blade surface distribution as well as grid solution along the airfoil.
Abstract: This thesis presents a study of the aerodynamic performance of wind turbine rotor blades, focusing on aerodynamic effects produced by the blade surface distribution as well as by the grid resolution along the airfoil. The details of the numerical calculations in Fluent are described to help predict blade performance accurately, for comparison and discussion against available data. The direct surface curvature distribution blade design method for two-dimensional airfoil sections of wind turbine rotors has been discussed, with attention to the Euler equation, the velocity diagram, and the factors that affect wind turbine performance. The method was applied to design a blade geometry close to an existing wind turbine blade, the Eppler 387, in order to argue that a blade surface drawn by the direct surface curvature distribution design method contributes to aerodynamic efficiency. The Fluent calculation for the NACA63-215V showed that the aerodynamic characteristics agreed well with the available experimental data at lower angles of attack, although there were discontinuities in the surface curvature distributions between 0.7 and 0.8 in x/c. The discontinuities were so small that blade performance was not affected. The Eppler 387 blade design was performed to reduce drag force, and the discontinuities of the surface distribution matched the curve of the pressure coefficients. It was found in the curvature distribution that the leading-edge pressure side was difficult to connect to the Bezier curve, and that the trailing-edge circle could never be made tangent to the lines of the trailing-edge pressure and suction sides due to programming difficulties.

311 citations

Book ChapterDOI
01 Jan 2022
TL;DR: Explainable artificial intelligence (xAI) is an established field with a vibrant community that has developed a variety of very successful approaches to explain and interpret the predictions of complex machine learning models such as deep neural networks.
Abstract: Abstract Explainable Artificial Intelligence (xAI) is an established field with a vibrant community that has developed a variety of very successful approaches to explain and interpret predictions of complex machine learning models such as deep neural networks. In this article, we briefly introduce a few selected methods and discuss them in a short, clear and concise way. The goal of this article is to give beginners, especially application engineers and data scientists, a quick overview of the state of the art in this current topic. The following 17 methods are covered in this chapter: LIME, Anchors, GraphLIME, LRP, DTD, PDA, TCAV, XGNN, SHAP, ASV, Break-Down, Shapley Flow, Textual Explanations of Visual Models, Integrated Gradients, Causal Models, Meaningful Perturbations, and X-NeSyL.

41 citations

Journal ArticleDOI
TL;DR: In this paper, Contrast Limited Adaptive Histogram Equalization (CLAHE) was applied to CT images as a preprocessing step to enhance image quality, and a novel Convolutional Neural Network (CNN) model was developed to extract 100 prominent features from a total of 2482 CT scan images.
Abstract: The novel Coronavirus disease (COVID-19), recently the most infectious disease, has had a devastating effect on public health in more than 200 countries in the world. Since the detection of COVID-19 using reverse transcription-polymerase chain reaction (RT-PCR) is time-consuming and error-prone, Computed Tomography (CT) images offer an alternative means of detection. In this paper, Contrast Limited Adaptive Histogram Equalization (CLAHE) was applied to CT images as a preprocessing step for enhancing the quality of the images. After that, we developed a novel Convolutional Neural Network (CNN) model that extracted 100 prominent features from a total of 2482 CT scan images. These extracted features were then fed to various machine learning algorithms: Gaussian Naive Bayes (GNB), Support Vector Machine (SVM), Decision Tree (DT), Logistic Regression (LR), and Random Forest (RF). Finally, we proposed an ensemble model for COVID-19 CT image classification. We also showed various performance comparisons with state-of-the-art methods. Our proposed model outperforms the state-of-the-art models and achieved an accuracy, precision, and recall of 99.73%, 99.46%, and 100%, respectively.

41 citations

Journal ArticleDOI
TL;DR: In this article, a comprehensive list of available diabetic retinopathy (DR) datasets is reported, and a total of 114 published articles which conformed to the scope of the review is summarized.
Abstract: Diabetic Retinopathy (DR) is a leading cause of vision loss in the world. In the past few years, artificial intelligence (AI) based approaches have been used to detect and grade DR. Early detection enables appropriate treatment and thus prevents vision loss. For this purpose, both fundus and optical coherence tomography (OCT) images are used to image the retina. Deep learning (DL) and machine learning (ML) based approaches then make it possible to extract features from the images, detect the presence of DR, grade its severity, and segment associated lesions. This review covers the literature on AI approaches to DR, such as ML and DL for classification and segmentation, published in the open literature within six years (2016-2021). In addition, a comprehensive list of available DR datasets is reported. This list was constructed using both the PICO (P-Patient, I-Intervention, C-Control, O-Outcome) and Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2009 search strategies. We summarize a total of 114 published articles that conformed to the scope of the review. In addition, a list of 43 major datasets is presented.

33 citations

Journal ArticleDOI
TL;DR: A comprehensive roadmap to build trustworthy GNNs from the view of the various computing technologies involved is proposed, including robustness, explainability, privacy, fairness, accountability, and environmental well-being.
Abstract: Graph neural networks (GNNs) have emerged as a series of competent graph learning methods for diverse real-world scenarios, ranging from daily applications like recommendation systems and question answering to cutting-edge technologies such as drug discovery in life sciences and n-body simulation in astrophysics. However, task performance is not the only requirement for GNNs. Performance-oriented GNNs have exhibited potential adverse effects like vulnerability to adversarial attacks, unexplainable discrimination against disadvantaged groups, or excessive resource consumption in edge computing environments. To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness. To this end, we propose a comprehensive roadmap to build trustworthy GNNs from the view of the various computing technologies involved. In this survey, we introduce basic concepts and comprehensively summarise existing efforts for trustworthy GNNs from six aspects, including robustness, explainability, privacy, fairness, accountability, and environmental well-being. Additionally, we highlight the intricate cross-aspect relations between the above six aspects of trustworthy GNNs. Finally, we present a thorough overview of trending directions for facilitating the research and industrialisation of trustworthy GNNs.

29 citations