Author

Jarallah AlGhamdi

Bio: Jarallah AlGhamdi is an academic researcher from King Fahd University of Petroleum and Minerals. His research topics include software metrics and software engineering. He has an h-index of 11 and has co-authored 24 publications receiving 453 citations. Previous affiliations of Jarallah AlGhamdi include Prince Sultan University and Arizona State University.

Papers
Journal ArticleDOI
TL;DR: An adaptive fuzzy logic framework for software effort prediction is presented that tolerates imprecision, explains prediction rationale through rules, incorporates expert knowledge, offers transparency in the prediction system, and can adapt to new environments as new data become available.
Abstract: Algorithmic effort prediction models are limited by their inability to cope with the uncertainties and imprecision present in software projects early in the development life cycle. In this paper, we present an adaptive fuzzy logic framework for software effort prediction. The training and adaptation algorithms implemented in the framework tolerate imprecision, explain prediction rationale through rules, incorporate expert knowledge, offer transparency in the prediction system, and can adapt to new environments as new data become available. Our validation experiment was carried out on artificial datasets as well as the COCOMO public database. We also present an experimental validation of the training procedure employed in the framework.
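The rule-based prediction described in the abstract can be illustrated with a minimal fuzzy-inference sketch. The membership functions, linguistic terms, and rule outputs below are assumptions chosen for illustration, not the paper's calibrated values:

```python
# Minimal sketch of fuzzy-rule-based effort prediction.
# Membership functions and per-rule effort levels are illustrative
# assumptions, not taken from the paper.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def predict_effort(kloc):
    """Predict effort (person-months) from project size in KLOC."""
    # Fuzzify size into three linguistic terms.
    memberships = {
        "small":  tri(kloc, -10, 0, 50),
        "medium": tri(kloc, 20, 60, 100),
        "large":  tri(kloc, 70, 150, 230),
    }
    # Each rule maps a size term to a crisp effort level; the rule base
    # explains the rationale behind a prediction ("size is large -> ...").
    rule_outputs = {"small": 20.0, "medium": 120.0, "large": 400.0}
    # Weighted-average defuzzification over the fired rules.
    num = sum(memberships[t] * rule_outputs[t] for t in memberships)
    den = sum(memberships.values())
    return num / den if den else 0.0
```

Adaptation, as the abstract describes it, would then amount to adjusting the membership-function breakpoints and rule outputs as new project data arrive, while keeping the rules readable.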

113 citations

Journal ArticleDOI
TL;DR: The accuracy and efficiency of the proposed automatic scheme were demonstrated experimentally, surpassing other state-of-the-art schemes; it can be considered a low-cost solution for malaria parasitemia quantification in mass examinations.
Abstract: Malaria parasitemia is the quantitative measurement of parasites in the blood to grade the degree of infection. Light microscopy is the most well-known method used to examine blood for parasitemia quantification. Visual quantification of malaria parasitemia is laborious, time-consuming, and subjective. Although automating the process is a good solution, the available techniques are unable to handle cases such as anemia and hemoglobinopathies, where RBCs deviate from normal morphology. The main aim of this research is to examine microscopic images of stained thin blood smears using a variety of computer vision techniques, grading malaria parasitemia independently of RBC morphology. The proposed methodology is based on an inductive approach: color segmentation of malaria parasites through an adaptive Gaussian mixture model (GMM) algorithm. RBC quantification accuracy is improved by splitting occluded RBCs using the distance transform and local maxima. Further, infected and non-infected RBCs are classified to properly grade parasitemia. Training and evaluation were carried out on an image dataset against ground-truth data, determining the degree of infection with a sensitivity of 98% and a specificity of 97%. The accuracy and efficiency of the proposed automatic scheme were demonstrated experimentally, surpassing other state-of-the-art schemes. In addition, this research addressed the process independently of RBC morphology. It can therefore be considered a low-cost solution for malaria parasitemia quantification in mass examinations.
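The distance-transform-plus-local-maxima step for splitting touching RBCs can be sketched on a tiny binary grid. This is a pure-Python toy (a BFS distance transform and a strict-local-maxima seed count); the paper's pipeline operates on real microscope images after GMM color segmentation:

```python
# Illustrative sketch: split touching cells by finding one distance-map
# peak per cell. Pure Python on a toy binary grid, not the paper's code.
from collections import deque

def distance_transform(grid):
    """BFS distance from each foreground cell (1) to the nearest background cell (0)."""
    h, w = len(grid), len(grid[0])
    dist = [[0] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 0:
                q.append((y, x))       # multi-source BFS from all background cells
            else:
                dist[y][x] = -1        # mark foreground as unvisited
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] == -1:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def count_cells(grid):
    """Count strict local maxima of the distance map: one seed per (round) cell,
    even when two cells touch and form a single connected blob."""
    dist = distance_transform(grid)
    h, w = len(grid), len(grid[0])
    seeds = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] and all(
                dist[y][x] > dist[y + dy][x + dx]
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= y + dy < h and 0 <= x + dx < w
            ):
                seeds += 1
    return seeds
```

Two diamond-shaped "cells" joined by a one-pixel bridge form a single connected component, yet each contributes its own distance-map peak, so they are counted separately.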

68 citations

Journal ArticleDOI
TL;DR: A new method to detect and segment nuclei and determine whether they are malignant is proposed; it shows high performance and accuracy in comparison to the techniques reported in the literature.
Abstract: Segmentation of objects from a noisy and complex image is still a challenging task that needs to be addressed. This article proposes a new method to detect and segment nuclei and determine whether they are malignant: the region of interest is determined, noise is removed, the image is enhanced, candidate detection is employed on the centroid transform to evaluate the centroid of each object, and the level set (LS) method is applied to segment the nuclei. The proposed method consists of three main stages: preprocessing, seed detection, and segmentation. The preprocessing stage prepares the image to ensure it meets the segmentation requirements. Seed detection finds the seed points to be used in the segmentation stage, which segments the nuclei using the LS method. In this research work, 58 H&E breast cancer images from the UCSB Bio-Segmentation Benchmark dataset are evaluated. The proposed method achieves high performance and accuracy in comparison to the techniques reported in the literature, and the experimental results agree with the ground-truth images.
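The seed-then-segment structure of the pipeline can be illustrated with a much simpler stand-in for the level-set stage: seeded region growing from a detected seed point. This is an assumption-laden toy (the paper uses level-set evolution, not region growing, and real preprocessing):

```python
# Toy sketch of the "seed detection -> segmentation" structure.
# Region growing here stands in for the paper's level-set (LS) evolution;
# the image, seed, and tolerance are illustrative assumptions.
from collections import deque

def region_grow(image, seed, tol=0.2):
    """Grow a segment from `seed`, adding 4-neighbors whose intensity is
    within `tol` of the seed intensity. Returns a binary mask."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    mask = [[0] * w for _ in range(h)]
    mask[sy][sx] = 1
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny][nx]
                    and abs(image[ny][nx] - base) <= tol):
                mask[ny][nx] = 1
                q.append((ny, nx))
    return mask
```

A dark 3x3 "nucleus" on a bright background is recovered exactly from a single interior seed; a level-set front would evolve toward the same boundary, but can also handle weak edges and smoothness constraints that this toy ignores.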

57 citations

Proceedings ArticleDOI
21 Mar 2005
TL;DR: This paper gives an overview of OOMeter, a software metric tool that accepts Java and C# source code as well as UML models in XMI format.
Abstract: This paper gives an overview of OOMeter, a software metric tool that accepts Java and C# source code as well as UML models in XMI format.

45 citations

Journal ArticleDOI
TL;DR: In this paper, a hybrid neural model (MLP and RBF) is proposed to enhance the accuracy of weather forecasting in Saudi Arabia; the main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed, and average cloudiness.
Abstract: Making deductions and predictions about climate has been a challenge throughout human history. Accurate meteorological forecasts help foresee and handle problems well in time. Different strategies using various machine learning techniques have been investigated in reported forecasting systems. The current research treats weather forecasting as a major challenge for machine data mining and inference. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model ensures precise forecasting given the specialized nature of climate-prediction frameworks. The study concentrates on data representing Saudi Arabian weather. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed, and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. The correlation coefficient, RMSE, and scatter index are the standard yardsticks adopted for measuring forecast accuracy. Individually, the MLP's forecasting results are better than the RBF's; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than either individual network. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
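One simple way an MLP and an RBF network can be hybridized is to run both forward passes and combine their outputs, as in this minimal sketch. The averaging combination, layer sizes, and weights below are assumptions for illustration; the abstract does not specify the paper's exact combination scheme:

```python
# Minimal forward-pass sketch of combining an MLP and an RBF network.
# The averaging hybrid and all parameters are illustrative assumptions.
import math

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP with tanh hidden units and linear outputs."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

def rbf_forward(x, centers, widths, weights):
    """RBF network: Gaussian activations around centers, linear output layer."""
    acts = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * s * s))
            for c, s in zip(centers, widths)]
    return [sum(w * a for w, a in zip(row, acts)) for row in weights]

def hybrid_forward(x, mlp_params, rbf_params):
    """Average the two networks' outputs (two output neurons, e.g. rainy/dry)."""
    m = mlp_forward(x, *mlp_params)
    r = rbf_forward(x, *rbf_params)
    return [(mi + ri) / 2 for mi, ri in zip(m, r)]
```

In practice each subnetwork would be trained on the weather features first, and the combined score per output neuron (rainy vs. dry) would decide the forecast.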

44 citations


Cited by
Journal ArticleDOI
TL;DR: Object-oriented and process metrics have been reported to be more successful in finding faults than traditional size and complexity metrics, and process metrics seem to be better at predicting post-release faults than any static code metrics.
Abstract: Context: Software metrics may be used in fault prediction models to improve software quality by predicting fault location. Objective: This paper aims to identify software metrics and to assess their applicability in software fault prediction. We investigated the influence of context on metrics' selection and performance. Method: This systematic literature review includes 106 papers published between 1991 and 2011. The selected papers are classified according to metrics and context properties. Results: Object-oriented metrics (49%) were used nearly twice as often as traditional source code metrics (27%) or process metrics (24%). Chidamber and Kemerer's (CK) object-oriented metrics were most frequently used. According to the selected studies, there are significant differences in the fault prediction performance of the metrics used. Object-oriented and process metrics have been reported to be more successful in finding faults than traditional size and complexity metrics. Process metrics seem to be better at predicting post-release faults than any static code metrics. Conclusion: More studies should be performed on large industrial software systems to find metrics more relevant for industry and to answer the question of which metrics should be used in a given context.

437 citations

Journal ArticleDOI
TL;DR: The different approaches published in the literature are organized according to the techniques used for imaging, image preprocessing, parasite detection and cell segmentation, feature computation, and automatic cell classification for microscopic malaria diagnosis.

326 citations

01 Jan 1981
TL;DR: In this article, the authors provide an overview of economic analysis techniques and their applicability to software engineering and management, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.
Abstract: This paper summarizes the current state of the art and recent trends in software engineering economics. It provides an overview of economic analysis techniques and their applicability to software engineering and management. It surveys the field of software cost estimation, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.
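The algorithmic cost models surveyed here include basic COCOMO, whose effort equation is compact enough to state directly. The sketch below uses the published basic-model coefficients for organic-mode projects (a = 2.4, b = 1.05); treating a single mode with fixed coefficients is a simplification of the full model:

```python
# Basic COCOMO effort equation, organic mode: effort = a * KLOC^b,
# with the published basic-model coefficients a = 2.4, b = 1.05.
# Intermediate/detailed COCOMO adds cost drivers not modeled here.
def cocomo_basic_effort(kloc, a=2.4, b=1.05):
    """Estimated development effort in person-months for an organic-mode project."""
    return a * kloc ** b
```

Because b > 1, effort grows slightly faster than linearly with size, reflecting the diseconomies of scale that algorithmic models try to capture.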

283 citations

Proceedings ArticleDOI
20 Jul 2008
TL;DR: It is shown that existing software metric tools interpret and implement the definitions of object-oriented software metrics differently; this yields tool-dependent metrics results and even affects the results of analyses based on them.
Abstract: This paper shows that existing software metric tools interpret and implement the definitions of object-oriented software metrics differently. This yields tool-dependent metrics results and even has implications for the results of analyses based on these metrics. In short, the metrics-based assessment of a software system, and the measures taken to improve its design, differ considerably from tool to tool. To support our case, we conducted an experiment with a number of commercial and free metrics tools. We calculated metrics values using the same set of standard metrics for three software systems of different sizes. Measurements show that, for the same software system and metrics, the metrics values are tool-dependent. We also defined a (simple) software quality model for "maintainability" based on the selected metrics. It defines a ranking of the classes that are most critical with respect to maintainability. Measurements show that even the ranking of classes in a software system is metrics-tool dependent.
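A concrete example of such a definitional divergence is Chidamber and Kemerer's WMC (weighted methods per class): some tools weight every method as 1 (method count), others weight each method by its cyclomatic complexity. The toy class model below is illustrative, not taken from the paper's experiment:

```python
# Sketch: two common interpretations of the WMC metric disagree on the
# same class. The class model and complexity numbers are illustrative.

class_model = {
    "name": "Invoice",
    # (method name, cyclomatic complexity)
    "methods": [("total", 3), ("add_line", 2), ("validate", 5)],
}

def wmc_unit_weight(cls):
    """Tool A's reading: WMC = number of methods (each method weighted 1)."""
    return len(cls["methods"])

def wmc_cyclomatic(cls):
    """Tool B's reading: WMC = sum of the methods' cyclomatic complexities."""
    return sum(cc for _, cc in cls["methods"])
```

Since any quality model or class ranking built on WMC inherits whichever reading the tool implements, the two "tools" above would already rank complexity-heavy classes differently.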

224 citations