Author

K. Ozaki

Bio: K. Ozaki is an academic researcher from Fujitsu. The author has contributed to research in topics: Approximation error & Goodness of fit. The author has an h-index of 1 and has co-authored 1 publication, receiving 171 citations.

Papers
Journal Article (DOI)
TL;DR: A new consistent and robust method called the least squares of inverted balanced relative errors (LIRS) is proposed, and its superiority to the ordinary least-squares method is demonstrated using five actual data sets.

178 citations
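The paper's exact LIRS formulation is not reproduced here, but the sketch below illustrates the general idea of fitting by least squares over relative errors rather than raw residuals. It assumes the common definition of balanced relative error, BRE = |y − ŷ| / min(y, ŷ), which may differ in detail from the paper's "inverted balanced relative error"; the data set and all names are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical project data: size (e.g., KLOC) vs. actual effort (person-months).
size = np.array([10.0, 25.0, 40.0, 60.0, 90.0])
effort = np.array([32.0, 70.0, 130.0, 180.0, 310.0])

def balanced_relative_errors(params, x, y):
    # Balanced relative error: |y - yhat| / min(y, yhat).
    # NOTE: this definition is an assumption for illustration; the paper's
    # exact error formulation may differ.
    a, b = params
    yhat = np.maximum(a * x + b, 1e-9)  # keep predictions positive
    return np.abs(y - yhat) / np.minimum(y, yhat)

def lirs_like_objective(params, x, y):
    # Least squares over the relative errors rather than the raw residuals.
    return np.sum(balanced_relative_errors(params, x, y) ** 2)

def ols_objective(params, x, y):
    a, b = params
    return np.sum((y - (a * x + b)) ** 2)

x0 = np.array([3.0, 0.0])
# Nelder-Mead avoids gradient trouble with the non-smooth |.| and min(.).
fit_bre = minimize(lirs_like_objective, x0, args=(size, effort), method="Nelder-Mead")
fit_ols = minimize(ols_objective, x0, args=(size, effort), method="Nelder-Mead")
print("relative-error fit (a, b):", fit_bre.x)
print("OLS fit            (a, b):", fit_ols.x)
```

Minimizing relative rather than absolute residuals down-weights the largest projects, which is why such criteria are argued to be more robust for effort data spanning orders of magnitude.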


Cited by
Journal Article (DOI)
TL;DR: A systematic review of previous work identifies 304 software cost estimation papers in 76 journals and classifies the papers according to research topic, estimation approach, research approach, study context and data set to provide a basis for the improvement of software-estimation research.
Abstract: This paper aims to provide a basis for the improvement of software-estimation research through a systematic review of previous work. The review identifies 304 software cost estimation papers in 76 journals and classifies the papers according to research topic, estimation approach, research approach, study context and data set. A Web-based library of these cost estimation papers is provided to ease the identification of relevant estimation research results. The review results combined with other knowledge provide support for recommendations for future software cost estimation research, including: 1) increase the breadth of the search for relevant studies, 2) search manually for relevant papers within a carefully selected set of journals when completeness is essential, 3) conduct more studies on estimation methods commonly used by the software industry, and 4) increase the awareness of how properties of the data sets impact the results when evaluating estimation methods.

835 citations

Journal Article (DOI)
TL;DR: A simulation study demonstrating that MMRE does not always select the best model is performed, casting some doubt on the conclusions of any study of competing software prediction models that uses MMRE as the basis of model comparison.
Abstract: The mean magnitude of relative error, MMRE, is probably the most widely used evaluation criterion for assessing the performance of competing software prediction models. One purpose of MMRE is to help us select the best model. In this paper, we have performed a simulation study demonstrating that MMRE does not always select the best model. Our findings cast some doubt on the conclusions of any study of competing software prediction models that uses MMRE as the basis of model comparison. We therefore recommend not using MMRE to evaluate and compare prediction models. At present, we do not have a universal replacement for MMRE; in the meantime, we recommend combining theoretical justification of the proposed models with the other metrics proposed in this paper.

482 citations
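MMRE is the mean of the magnitude of relative error, MRE_i = |y_i − ŷ_i| / y_i. The toy example below (invented numbers, not taken from the paper) illustrates the weakness the study probes: because the denominator is the actual value, MRE is bounded at 1 for underestimates but unbounded for overestimates, so MMRE can rank a grossly underestimating model above a more balanced one.

```python
import numpy as np

def mmre(y, yhat):
    # Mean magnitude of relative error: mean(|y - yhat| / y).
    return float(np.mean(np.abs(np.asarray(y) - np.asarray(yhat)) / np.asarray(y)))

actual = np.array([100.0, 100.0, 100.0, 100.0])

# Model A: multiplicatively symmetric errors (half or double the truth).
pred_a = np.array([50.0, 200.0, 50.0, 200.0])
# Model B: always predicts far too low.
pred_b = np.full(4, 30.0)

print("MMRE, model A:", mmre(actual, pred_a))  # 0.75
print("MMRE, model B:", mmre(actual, pred_b))  # 0.70 -> ranked "better" despite
                                               # grossly underestimating every project
```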

01 Jan 1981
TL;DR: In this article, the authors provide an overview of economic analysis techniques and their applicability to software engineering and management, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.
Abstract: This paper summarizes the current state of the art and recent trends in software engineering economics. It provides an overview of economic analysis techniques and their applicability to software engineering and management. It surveys the field of software cost estimation, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation.

283 citations

Journal Article (DOI)
TL;DR: It is suggested that it is the research procedure itself that is unreliable, and this lack of reliability may strongly contribute to the lack of convergence in empirical studies on software prediction models.
Abstract: Empirical studies on software prediction models do not converge on the question "which prediction model is best?", and the reason for this lack of convergence is poorly understood. In this simulation study, we have examined a frequently used research procedure comprising three main ingredients: a single data sample, an accuracy indicator, and cross validation. Such studies typically compare a machine learning model with a regression model, and our simulation does the same. The results suggest that it is the research procedure itself that is unreliable. This lack of reliability may strongly contribute to the lack of convergence. Our findings thus cast some doubt on the conclusions of any study of competing software prediction models that used this research procedure as a basis of model comparison. We therefore need to develop more reliable research procedures before we can have confidence in the conclusions of comparative studies of software prediction models.

275 citations
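To make the criticized procedure concrete, here is a hypothetical sketch of the "single data sample + accuracy indicator + cross validation" loop, repeated over fresh samples from the same simulated population. It uses scikit-learn model names; the data-generating process, noise levels, and all parameters are invented and are not the paper's simulation design.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
wins = {"regression": 0, "machine learning": 0}

# Repeat the procedure (one sample + one indicator + cross validation)
# on fresh samples drawn from the same simulated population.
for trial in range(30):
    x = rng.uniform(1, 100, size=(40, 1))                  # project size
    y = (3.0 * x[:, 0] + 25.0 * np.sin(x[:, 0] / 10.0)
         + rng.normal(0, 20.0, size=40))                   # mildly nonlinear effort
    score_reg = cross_val_score(LinearRegression(), x, y,
                                scoring="neg_mean_absolute_error", cv=5).mean()
    score_ml = cross_val_score(DecisionTreeRegressor(max_depth=4, random_state=0),
                               x, y, scoring="neg_mean_absolute_error", cv=5).mean()
    wins["regression" if score_reg > score_ml else "machine learning"] += 1

# If the procedure were reliable, one model would win consistently;
# a split tally shows how much the conclusion depends on the sample drawn.
print(wins)
```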