Showing papers in "Journal of Systems and Software in 1981"
••
TL;DR: A method to perform automatic clustering is described and a metric to quantify the complexity of the resulting partition is developed.
124 citations
••
TL;DR: A set of criteria that has guided the development of a metric system for measuring the quality of a large-scale software product is stated; the system uses the flow of information within the system as an index of system interconnectivity.
110 citations
••
TL;DR: In this article, the authors investigate the possibility of providing some useful measures to aid in the evaluation of software designs, which should allow some degree of predictability in estimating the quality of a coded software product based upon its design and should allow identification and correction of deficient designs prior to the coding phase, thus providing lower software development costs.
90 citations
••
TL;DR: This paper examines a set of basic relationships among various software development variables, such as size, effort, project duration, staff size, and productivity, for 15 Software Engineering Laboratory projects developed for NASA/Goddard Space Flight Center by Computer Sciences Corporation.
65 citations
••
TL;DR: The claims that software science could provide an empirical basis for the rationalization of all forms of algorithm description are shown to be invalid from a formal point of view.
36 citations
••
TL;DR: A statistical analysis of structured programming and programmer performance was conducted, with productivity measured as lines of code per man-month, supporting the following productivity hypothesis: increasing the complexity of programming projects tends to lower productivity.
32 citations
••
TL;DR: An empirical study designed to compare two objective metrics, McCabe's cyclomatic number v(G) and Halstead's effort measure E, with a classic size measure, lines of code and shows a fourth metric based on a model of programming to be better than the previously known metrics for some experimental data.
31 citations
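The cyclomatic number v(G) compared in this entry is McCabe's well-known graph measure, v(G) = E − N + 2P for a control-flow graph with E edges, N nodes, and P connected components. A minimal sketch of that formula (the `cyclomatic_number` helper and the toy CFG are illustrative, not taken from the paper):

```python
# Hypothetical sketch of McCabe's cyclomatic number v(G) = E - N + 2P,
# computed from a control-flow graph given as an adjacency list.

def cyclomatic_number(cfg, connected_components=1):
    """v(G) = edges - nodes + 2 * P for a CFG {node: [successors]}."""
    nodes = len(cfg)
    edges = sum(len(succs) for succs in cfg.values())
    return edges - nodes + 2 * connected_components

# Toy CFG for `if c: A else: B` followed by a single exit node:
# entry -> A, entry -> B, A -> exit, B -> exit
cfg = {"entry": ["A", "B"], "A": ["exit"], "B": ["exit"], "exit": []}
print(cyclomatic_number(cfg))  # one decision point -> v(G) = 2
```

A straight-line program with no branches yields v(G) = 1, which is why v(G) grows with decision structure rather than raw size, the contrast with lines of code that the study examines.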
••
TL;DR: Several metrics for the quality assessment of a software system design are discussed, based on the entropy function of communication information theory, which can compute the excess entropy and thereby rank different design alternatives.
28 citations
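The metrics in this entry build on the entropy function of communication (Shannon) information theory, H = −Σ p·log₂ p. A minimal sketch of that underlying quantity only; the example distributions are illustrative, and the paper's exact excess-entropy formulation is not reproduced here:

```python
import math

# Illustrative sketch: Shannon entropy H = -sum(p * log2(p)), the basic
# quantity behind entropy-based design metrics. The distributions below
# are hypothetical inter-module connection counts, not data from the paper.

def entropy(counts):
    """Entropy in bits of the distribution implied by raw counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

balanced = [2, 2, 2, 2]   # 8 connections spread evenly over 4 interfaces
skewed = [5, 1, 1, 1]     # most connections routed through one interface
print(entropy(balanced))  # 2.0 bits
print(entropy(skewed))    # about 1.55 bits, lower for the skewed design
```

Because entropy gives each candidate partition a single number, alternative designs can be ranked against one another, which is the use the paper describes.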
••
TL;DR: The conclusion drawn is that the Parr curve can be made to fit the data better than the other curves, but because of the noise in the data, it is difficult to confirm the shape of the manpower distribution from the data alone and therefore difficult to validate any particular model.
25 citations
••
TL;DR: Programming data involving 278 commercial-type programs were collected from 23 medium- to large-scale organizations in order to explore the relationships among variables measuring program type, the testing interface, programming technique, programmer experience, and productivity.
19 citations
••
TL;DR: This paper surveys techniques for the expression of software requirements and specifications and concludes that each form of description is represented in the literature by a distinct type of description language.
••
TL;DR: The current status of an instrumentation and analysis package to measure user performance in an interactive system is described and a prototype measurement system is considered to evaluate a screen editor and to develop models of user behavior.
••
TL;DR: The model shows that productivity and project duration vary enormously as a function of project management factors, even when project complexity and programming staff competence are held constant.
••
TL;DR: In an exploratory study, subjects without prior experience with DAISTS were encouraged by the system to develop effective sets of test cases for their implementations, and an analysis of the errors remaining in the implementations provided valuable hints about additional useful testing metrics.
••
TL;DR: This paper shows how project audits can be used to uncover project strengths and weaknesses; the issue of product sales versus disciplined project management was faced in all three audits.
••
TL;DR: A method for measuring the contribution of an arbitrary program development technique to program correctness and cost is described; it has been employed at the General Electric Corporate Research and Development Center to evaluate alternative program testing techniques.
••
TL;DR: The incorporation of the module interconnection language into design altered the traditional model of system building and led to the formalization of new models of program design, development, and evaluation.
••
TL;DR: It is shown how Ada can be used as an SDL as well as a system implementation language; designers are encouraged to use recent theory to develop better structures for their systems, and Ada's subsequent use to implement the systems preserves those structures in the product.
••
TL;DR: A program design discipline that has successfully produced well-modularized programs is described: it applies, in a uniform way, the concepts of data and procedural abstraction in a top-down decomposition during the initial programming-in-the-large phase of construction.
••
TL;DR: MIL-STD-1679 addresses the minimum set of contract requirements for the total software development process and consolidates in one document many of the currently accepted software engineering practices and procedures.
••
TL;DR: Four alternative techniques are presented for solving the problem of allocating data among various reports, and reports among different users, formulated as a nonlinear binary programming model.
••
TL;DR: This work proposes a new table-driven implementation to achieve clarity, portability, and modifiability with optimizations yielding performance superior to that of the alternative code-based system in terms of storage, and comparable in execution time.
••
TL;DR: A software reliability model is considered that is easy to implement, use, and interpret, and works extremely well in the latter stages of testing.
••
TL;DR: The definition of specific terms related to interoperability, some of the interoperability problem areas, a walk-through of the current standards-derivation methodology being used in the JINTACCS Program, and consideration of the standards validation process are discussed.
••
TL;DR: This paper suggests some evaluation criteria that are probably too difficult to carry out and that may always remain subjective, but that are so important that they should be kept in mind as a balance to the hard data that can be obtained, and that should be studied further despite the difficulty of doing so.
••
TL;DR: This paper reviews the current status of both research and commercial testing systems, and addresses the features necessary for a commercial test system, including test case specification, test data generation, testbed generation, program instrumentation, automatic test execution and validation, as well as dynamic analysis of control and data flow.
••
TL;DR: Two systems, Affirm and HDM, were compared for their application to operating system security analysis, and it was found that the example could be specified satisfactorily and recognizably on both systems with a comparable amount of effort.
••
TL;DR: The superset of documents which will be required under the new MIL-STD-SDS is described and the relationship of each document to the acquisition cycle, particularly the design reviews, and to each other is emphasized.
••
TL;DR: Up to 80% of software costs are incurred after the software has been put into service, and it is in the program maintenance phase of the software life cycle that large savings will be achieved through the use of Ada.