Journal ArticleDOI

Software reusability model for procedure based domain-specific software components

TL;DR
A two-tier approach is presented that studies the structural attributes of a component as well as its usability or relevancy to a particular domain; the reusability value it determines is found to be close to the manual analysis previously performed by programmers or repository managers.
Abstract
Automatic reusability appraisal helps in evaluating the quality of developed or developing reusable software components and in identifying reusable components from existing legacy systems, which can save the cost of developing software from scratch. However, the issue of how to identify reusable components from existing systems has remained relatively unexplored. In this paper, we present a two-tier approach that studies the structural attributes of a component as well as its usability or relevancy to a particular domain. We evaluate the Probabilistic Latent Semantic Analysis (PLSA) approach, LSA's Singular Value Decomposition (SVD) technique, LSA's Semi-Discrete Matrix Decomposition (SDD) technique and the Naive Bayes approach to determine the Domain Relevancy of software components. The approach exploits the fact that feature-vector codes can be seen as documents containing terms (the identifiers present in the components), so text-modeling methods that capture co-occurrence information in low-dimensional spaces can be applied. In this work, structural attributes of software components are explored using software metrics, and the quality of the software is inferred by a Neuro-Fuzzy (NF) inference engine that takes the metric values as input. The influence of different factors on reusability is studied, and the condition for the optimum reusability index is derived using Taguchi analysis. The NF system is optimized by selecting its initial rule base through a modified ID3 decision-tree algorithm combined with the results of the Taguchi analysis. The calculated reusability value enables good-quality code to be identified automatically, and it is found to be close to the manual analysis previously performed by programmers or repository managers. The system developed can therefore be used to enhance the productivity and quality of software development.
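As an illustrative sketch of the LSA/SVD step described in the abstract: each component is treated as a document of identifier terms, a truncated SVD projects components into a low-rank latent space, and domain relevancy is scored by cosine similarity against a domain query. All identifiers, counts, and the "stack" domain query below are invented for illustration, not taken from the paper's dataset.

```python
import numpy as np

# Hypothetical term-document matrix: rows are identifier terms extracted
# from software components, columns are components c1..c3.
terms = ["sort", "swap", "node", "push", "pop"]
A = np.array([
    [3, 0, 0],
    [2, 0, 1],
    [0, 4, 0],
    [0, 1, 3],
    [0, 1, 2],
], dtype=float)

# LSA: truncated SVD projects components into a low-rank "semantic" space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                      # keep the top-k latent dimensions
comp_vecs = (np.diag(s[:k]) @ Vt[:k]).T    # one k-dim vector per component

# Domain relevancy as cosine similarity to a domain query, here the
# identifier profile of a hypothetical "stack" domain.
q = np.array([0, 0, 1, 2, 2], dtype=float)  # term counts of the query
q_k = U[:, :k].T @ q                        # fold the query into LSA space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(q_k, v) for v in comp_vecs]
print(scores)   # higher score = component more relevant to the domain
```

Note the fold-in uses the standard identity that a document column d = A e_j maps to S v_j in the latent space, so U_k^T q is on the same scale as the component vectors.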


Citations
Journal Article

Prediction of Reusability of Object Oriented Software Systems using Clustering Approach

TL;DR: A tuned CK metric suite (WMC, DIT, NOC, CBO and LCOM) is used to obtain the structural analysis of OO-based software components, and the developed reusability model has produced high-precision results as desired.
Journal Article

Modeling of Reusability of Object Oriented Software System

TL;DR: In this research work, structural attributes of software components are explored using software metrics and quality of the software is inferred by different Neural Network based approaches, taking the metric values as input, and the calculated reusability value enables to identify a good quality code automatically.

A Comparative Analysis of Conjugate Gradient Algorithms & PSO Based Neural Network Approaches for Reusability Evaluation of Procedure Based Software Systems

TL;DR: The Particle Swarm Optimization technique, along with four variants of Conjugate Gradient algorithms, is empirically explored to train a feed-forward neural network on a reusability dataset, and the performance of the trained neural networks is tested to evaluate the reusability level of procedure-based software systems.
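A minimal sketch of the PSO-trained feed-forward network idea from this citation, not the paper's actual setup: a plain global-best PSO searches the flattened weights of a tiny 2-4-1 network on an invented XOR-like stand-in for the reusability dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the reusability dataset: 2 metric inputs -> 1 target.
X = rng.uniform(-1, 1, (40, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # invented XOR-like target

def forward(w, X):
    # 2-4-1 feed-forward net; w is a flat parameter vector of length 17
    W1 = w[:8].reshape(2, 4); b1 = w[8:12]
    W2 = w[12:16].reshape(4, 1); b2 = w[16]
    h = np.tanh(X @ W1 + b1)
    z = h @ W2 + b2
    return (1 / (1 + np.exp(-z))).ravel()

def mse(w):
    return float(np.mean((forward(w, X) - y) ** 2))

# Plain global-best PSO over the 17 network weights.
n_particles, dim, iters = 30, 17, 200
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(round(mse(gbest), 3))   # training error of the best particle
```

The global best is monotonically non-increasing, which is what makes PSO usable as a gradient-free alternative to the conjugate-gradient training the citation compares against.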
Proceedings ArticleDOI

A survey on Software Reusability

TL;DR: In this paper, different models or measures that are discussed in literature for software reusability prediction or evaluation of software components are summarized.
References
Journal ArticleDOI

Indexing by Latent Semantic Analysis

TL;DR: A new method for automatic indexing and retrieval that takes advantage of implicit higher-order structure in the association of terms with documents ("semantic structure") to improve the detection of relevant documents on the basis of terms found in queries.
Journal ArticleDOI

A Complexity Measure

TL;DR: Several properties of the graph-theoretic complexity are proved which show, for example, that complexity is independent of physical size and complexity depends only on the decision structure of a program.
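The decision-structure property stated in this reference can be sketched concretely: for a single structured routine, McCabe's cyclomatic complexity reduces to V(G) = (number of binary decision points) + 1, independent of the routine's physical size. Counting Python AST branch nodes below is an illustrative stand-in for the paper's procedure-based components.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """V(G) = decisions + 1 for one structured routine."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While, ast.For, ast.IfExp)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):   # each extra and/or adds a branch
            decisions += len(node.values) - 1
    return decisions + 1

src = """
def classify(x):
    if x < 0 and x != -1:
        return "neg"
    for i in range(3):
        if i == x:
            return "small"
    return "other"
"""
print(cyclomatic_complexity(src))  # 1 + if + and + for + inner if = 5
```

Adding straight-line statements to `classify` leaves the value unchanged, which is exactly the size-independence the TL;DR describes.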
Journal ArticleDOI

Very Simple Classification Rules Perform Well on Most Commonly Used Datasets

TL;DR: On most datasets studied, the best of very simple rules that classify examples on the basis of a single attribute is as accurate as the rules induced by the majority of machine learning systems.
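A minimal sketch of the single-attribute ("1R"-style) rule this reference describes, on an invented toy dataset: for each attribute, build a one-level rule that predicts the majority class per attribute value, then keep the attribute whose rule makes the fewest training errors.

```python
from collections import Counter

def one_r(rows, n_attrs):
    # rows: list of (attributes tuple, class label)
    best = None
    for a in range(n_attrs):
        # majority class per value of attribute a
        by_value = {}
        for attrs, label in rows:
            by_value.setdefault(attrs[a], Counter())[label] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        errors = sum(label != rule[attrs[a]] for attrs, label in rows)
        if best is None or errors < best[2]:
            best = (a, rule, errors)
    return best  # (attribute index, value->class rule, training errors)

# Invented weather-style data: (outlook, windy) -> play
data = [(("sunny", "no"), "yes"), (("sunny", "yes"), "no"),
        (("rain", "no"), "yes"), (("rain", "yes"), "no"),
        (("overcast", "yes"), "no"), (("overcast", "no"), "yes")]
attr, rule, errs = one_r(data, 2)
print(attr, rule, errs)   # "windy" alone classifies this toy set perfectly
```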
Journal ArticleDOI

Using linear algebra for intelligent information retrieval

TL;DR: Textual materials are typically retrieved from scientific databases by a lexical match between words in users' requests and those in, or assigned to, documents in the database.