
Showing papers in "WIT Transactions on Information and Communication Technologies in 1970"


Journal ArticleDOI
TL;DR: Test sets covering all branches of a library of five procedures that solve the triangle problem have been produced automatically using genetic algorithms, with the tests derived from both the structure of the software and its formal specification in Z.
Abstract: Test sets which cover all branches of a library of five procedures that solve the triangle problem have been produced automatically using genetic algorithms. The tests are derived from both the structure of the software and its formal specification in Z. In a wider context, more complex procedures such as a binary search and a generic quicksort have also been tested automatically from the structure of the software. The value of genetic algorithms lies in their ability to handle input data which may have a complex structure, and to execute branches whose predicate may be a complicated and unknown function of the input data. A disadvantage of genetic algorithms may be the computational effort required to reach a solution.
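
The abstract does not include the search itself, but the idea can be sketched in a few lines: a genetic algorithm evolves candidate inputs, and a branch-distance style fitness rewards inputs that move execution closer to an as-yet-uncovered branch. The sketch below (with an invented triangle-classification routine and a fitness defined only for the "equilateral" branch) is a minimal illustration of the technique, not the authors' tool.

```python
import random

def classify_triangle(a, b, c):
    # The classic triangle problem: classify three side lengths.
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def branch_distance(ind, target="equilateral"):
    # Smaller is better: how far the inputs are from taking the target branch.
    a, b, c = ind
    if target == "equilateral":
        return abs(a - b) + abs(b - c)
    raise ValueError("unsupported target branch")

def evolve(pop_size=50, generations=100, lo=1, hi=200):
    pop = [[random.randint(lo, hi) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=branch_distance)
        if branch_distance(pop[0]) == 0:
            return pop[0]                      # target branch covered
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randint(1, 2)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            if random.random() < 0.3:          # mutation
                child[random.randrange(3)] = random.randint(lo, hi)
            children.append(child)
        pop = survivors + children
    return min(pop, key=branch_distance)

best = evolve()
print(best, classify_triangle(*best))
```

A full branch-coverage tool would repeat this search once per uncovered branch, with a distance function instrumented into each predicate.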

57 citations


Journal ArticleDOI
TL;DR: Testing object-oriented software can sometimes benefit from object-oriented technology, for instance by capitalizing on the fact that a superclass has already been tested and by decreasing the effort needed to test derived classes, which reduces the cost of testing in comparison with a flat class structure.

47 citations


Journal ArticleDOI
TL;DR: In this article, a real-time pupil detection system using two light sources and the image difference method is proposed. Because the brightness of the pupil is influenced by the pupil area and the power of the infrared light irradiating the eye, a method that exploits this characteristic is also proposed to stabilize the pupil brightness in the difference images.
Abstract: For developing a human-computer interface applying eye-gaze, we have already proposed a noncontact, unconstrained video-based pupil detection technique using two light sources and the image difference method. The detected pupil position in the difference image is utilized together with the glint (corneal reflection of an infrared light source) position for eye-gaze position determination. In this paper, the hardware for real-time image differentiation was developed. This image differentiator made real-time pupil detection possible when combined with the previously developed pupil detector, including its noise reducer. For stable detection of the glint and the pupil, it was clarified that the pupil brightness is influenced by the pupil area and the power of the infrared light irradiating the eye. A method which utilizes this characteristic was proposed for stabilizing the pupil brightness in the difference images. This method made pupil and glint center detection more stable.
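
As a rough illustration of the image difference method described above, the sketch below subtracts a (synthetic) dark-pupil frame from a bright-pupil frame, thresholds the result and takes the centroid of the remaining blob as the pupil centre. The frames, threshold and image size are invented; the authors' hardware differentiator and noise reducer are not modelled.

```python
import numpy as np

def pupil_center(bright_frame, dark_frame, threshold=50):
    """Estimate the pupil centre from a bright-pupil / dark-pupil image pair.

    The bright-pupil frame is taken with on-axis infrared light, the dark-pupil
    frame with off-axis light; their difference leaves (mostly) the pupil region.
    """
    diff = bright_frame.astype(np.int16) - dark_frame.astype(np.int16)
    mask = diff > threshold                      # keep strongly brightened pixels
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()                  # centroid of the pupil blob

# Synthetic 120x160 frames: a bright disc centred at (x=80, y=60) in the bright-pupil image.
h, w = 120, 160
yy, xx = np.mgrid[0:h, 0:w]
dark = np.full((h, w), 20, dtype=np.uint8)
bright = dark.copy()
bright[(xx - 80) ** 2 + (yy - 60) ** 2 < 15 ** 2] = 180

print(pupil_center(bright, dark))   # approximately (80.0, 60.0)
```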

36 citations



Journal ArticleDOI
TL;DR: The premise is that Design Expert Systems can be built using many small, cooperating, limited function expert systems, called Single Function Agents (SiFAs), and the approach is to start with elementary components and add structure and functionality as necessary.
Abstract: Our premise is that Design Expert Systems can be built using many small, cooperating, limited-function expert systems, called Single Function Agents (SiFAs). We expect to be able to investigate and discover specific design-related primitive problem-solving and interaction patterns. The approach should also lead to a deeper understanding of the types of knowledge involved. In contrast to other multi-agent approaches, which are based on powerful agents with relatively unconstrained functionality and knowledge, we try to start with elementary components and add structure and functionality as necessary.
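
A toy sketch of the single-function-agent idea, assuming invented agent and parameter names: each agent performs exactly one function (a selector proposes a candidate value, each critic judges it against one constraint), and a simple loop mediates their interaction. It is meant only to make the SiFA notion concrete, not to reproduce the authors' framework.

```python
class SelectorAgent:
    """Single function: propose one candidate value for a design parameter."""
    def __init__(self, candidates):
        self.candidates = list(candidates)

    def select(self):
        return self.candidates.pop(0) if self.candidates else None

class CriticAgent:
    """Single function: criticise a proposed value against one constraint."""
    def __init__(self, predicate, reason):
        self.predicate, self.reason = predicate, reason

    def criticise(self, value):
        return None if self.predicate(value) else self.reason

def negotiate(selector, critics):
    # Minimal interaction pattern: propose, collect criticisms, retry.
    while True:
        value = selector.select()
        if value is None:
            return None                      # no acceptable design found
        complaints = [c.criticise(value) for c in critics]
        if not any(complaints):
            return value

# Hypothetical example: choose a beam thickness (mm) acceptable to two critics.
selector = SelectorAgent([2, 4, 8, 12])
critics = [CriticAgent(lambda t: t >= 5, "too thin for the load"),
           CriticAgent(lambda t: t <= 10, "too heavy for the frame")]
print(negotiate(selector, critics))          # -> 8
```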

28 citations



Journal ArticleDOI
TL;DR: This paper investigates a machine learning method for the automatic generation of fault trees for incipient faults, using features based on the FFT of time-response simulations; the algorithm is derived from the ID3 algorithm for the induction of decision trees.
Abstract: Fault tree analysis is widely used in industry for fault diagnosis. The diagnosis of incipient or 'soft' faults is considerably more difficult than that of 'hard' faults, which is the situation normally considered. A detailed fault tree model reflecting signal variations over a wide range is required for diagnosing such soft faults. This paper describes the investigation of a machine learning method for the automatic generation of fault trees for incipient faults. Features based on the FFT (Fast Fourier Transform) of the time response simulations are used to provide a training set of examples comprising records of fault types, severity and feature lists. The algorithm presented, called IFT, is derived from the ID3 algorithm for the induction of decision trees. A significant aspect of this approach is that it does not require any detailed knowledge or analysis of the application system. All that is needed is a 'black-box' model of the system, i.e. knowledge of which faults arise from measurable quantities taking on particular values. The proposed procedure is illustrated using detailed simulation results for a servomechanism typically found in machine tool applications, and the results to date indicate the feasibility of the approach.
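
The pipeline suggested by the abstract can be approximated with standard tools: take the FFT of each simulated time response, use the low-frequency magnitudes as features, and induce a decision tree with an entropy criterion. In the hedged sketch below, a toy signal generator and scikit-learn's DecisionTreeClassifier stand in for the servomechanism simulation and the IFT algorithm; fault names, frequencies and noise levels are invented, and fault severity is ignored.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fft_features(signal, n_bins=16):
    # Magnitudes of the lowest-frequency FFT bins as a compact feature vector.
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum[1:n_bins + 1]

def simulate_response(fault, n=256, rng=None):
    # Toy stand-in for the servomechanism simulation: each fault class
    # injects a different dominant frequency into the time response.
    if rng is None:
        rng = np.random.default_rng()
    t = np.linspace(0, 1, n, endpoint=False)
    freq = {"healthy": 2, "worn_bearing": 7, "loose_coupling": 13}[fault]
    return np.sin(2 * np.pi * freq * t) + 0.2 * rng.standard_normal(n)

rng = np.random.default_rng(0)
X, y = [], []
for label in ("healthy", "worn_bearing", "loose_coupling"):
    for _ in range(30):
        X.append(fft_features(simulate_response(label, rng=rng)))
        y.append(label)

# The entropy criterion gives an ID3-flavoured tree; the paper's IFT algorithm
# builds a fault tree and handles severity, which this sketch does not.
tree = DecisionTreeClassifier(criterion="entropy").fit(X, y)
print(tree.predict([fft_features(simulate_response("worn_bearing", rng=rng))]))
```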

19 citations


Journal ArticleDOI
TL;DR: This work presents a novel methodology for pattern recognition that uses genetic learning to obtain an optimized classification system; it is applied to a real problem in which three nuclear accidents that may occur in a nuclear power plant must be distinguished.
Abstract: This work presents a novel methodology for pattern recognition that uses genetic learning to obtain an optimized classification system. Each class is represented by several time series in a database. The idea is to find clusters in the set of training patterns of each class so that their centroids can distinguish the classes with a minimum of misclassifications. Due to the high level of difficulty of this optimization problem and the poor prior knowledge about the pattern domain, a model based on a genetic algorithm is proposed to extract this knowledge, searching for the minimum number of subclasses that leads to maximum correctness in the classification. The goal of this model is to find how many clusters to consider and which they are. To validate the methodology, reference problems for which the best solution is well known are proposed. Extending the scope of the application, the methodology is applied to a real problem in which three nuclear accidents that may occur in a nuclear power plant must be distinguished. The misclassification rate was 5% in a total of 180 trials. To corroborate the results, an artificial neural network was designed and trained to solve the same problem. The results and comparisons are shown and commented on.
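
The classification scheme underlying the method can be illustrated as follows: represent each class by several cluster centroids and assign a pattern to the class of its nearest centroid. In the sketch below, a plain sweep over the number of clusters per class (using k-means) stands in for the paper's genetic search, and random 2-D points stand in for the time series; all data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_centroids(X_train, y_train, k_per_class):
    # Represent each class by the centroids of k clusters of its training patterns.
    centroids, labels = [], []
    for cls in np.unique(y_train):
        pts = X_train[y_train == cls]
        km = KMeans(n_clusters=k_per_class, n_init=10, random_state=0).fit(pts)
        centroids.extend(km.cluster_centers_)
        labels.extend([cls] * k_per_class)
    return np.array(centroids), np.array(labels)

def nearest_centroid_error(X, y, centroids, centroid_labels):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    predictions = centroid_labels[d.argmin(axis=1)]
    return (predictions != y).mean()

# Synthetic stand-in for the time-series patterns: three 2-D classes, one of
# which is bimodal, so a single centroid per class misclassifies part of it.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 0.5, (40, 2)),
               rng.normal([4, 0], 0.5, (40, 2)),
               np.vstack([rng.normal([0, 1.5], 0.5, (20, 2)),
                          rng.normal([4, 1.5], 0.5, (20, 2))])])
y = np.array([0] * 40 + [1] * 40 + [2] * 40)

# Exhaustive sweep standing in for the paper's genetic search over cluster counts.
for k in (1, 2, 3):
    c, cl = build_centroids(X, y, k)
    print(k, "clusters/class -> training error", nearest_centroid_error(X, y, c, cl))
```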

18 citations


Journal ArticleDOI
TL;DR: The paper presents an efficient algorithm restricted to acyclic binary DCSPs and new constraints which do not modify the constraint graph, and then its extension to the cyclic case with any new constraint.
Abstract: Constraint Satisfaction Problems (CSPs) have been shown to be a useful way of formulating problems such as design, scene labelling and temporal reasoning. As many problems using constraints need a dynamic environment, the static framework of CSPs extends into DCSPs (Dynamic Constraint Satisfaction Problems). Up to now, most papers about DCSPs have dealt with the problem of the existence of a solution and with filtering techniques. The problem of maintaining a solution after the DCSP has evolved has mainly been approached through re-execution or by delaying the computation of the solution. This paper first presents the CSP framework and its dynamic extension, the DCSP, and then delimits the study of the solution maintenance problem: given an instance of a binary DCSP, a solution to it and a new constraint which disables that solution, we compute (if possible) a new solution as "close" as possible to the previous one, with several criteria of closeness. The paper presents an efficient algorithm restricted to acyclic binary DCSPs and new constraints which do not modify the constraint graph, and then its extension to the cyclic case with any new constraint.
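
The maintenance problem as stated can be made concrete with a brute-force sketch: given a binary CSP, its current solution and a new constraint that invalidates it, enumerate assignments and keep a consistent one that changes as few variables as possible (one simple criterion of closeness). The variables, domains and constraints below are invented, and exhaustive search replaces the paper's algorithm.

```python
from itertools import product

def consistent(assignment, constraints):
    # constraints: list of ((x, y), predicate) entries over pairs of variables.
    return all(pred(assignment[x], assignment[y]) for (x, y), pred in constraints)

def closest_solution(domains, constraints, old_solution):
    """Return a consistent assignment changing as few variables as possible."""
    variables = list(domains)
    best, best_changes = None, None
    for values in product(*(domains[v] for v in variables)):
        candidate = dict(zip(variables, values))
        if not consistent(candidate, constraints):
            continue
        changes = sum(candidate[v] != old_solution[v] for v in variables)
        if best is None or changes < best_changes:
            best, best_changes = candidate, changes
    return best, best_changes

domains = {"x": [1, 2, 3, 4], "y": [1, 2, 3, 4], "z": [1, 2, 3, 4]}
constraints = [(("x", "y"), lambda a, b: a < b),
               (("y", "z"), lambda a, b: a < b)]
old = {"x": 1, "y": 2, "z": 3}               # a solution of the original DCSP

# A new constraint arrives and invalidates the old solution.
constraints.append((("x", "y"), lambda a, b: a + b >= 4))
print(closest_solution(domains, constraints, old))   # the closest repair keeps x, changes y and z
```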

18 citations


Journal ArticleDOI
TL;DR: A discussion of how database technology can be integrated with data mining techniques is presented, and several advantages of addressing data-consuming activities through a tight integration of a parallel database server and data mining techniques are pointed out.
Abstract: Data mining on large databases has been a major concern in the research community, due to the difficulty of analyzing huge volumes of data using only traditional OLAP tools. This sort of process demands substantial computational power, memory and disk I/O, which can only be provided by parallel computers. We present a discussion of how database technology can be integrated with data mining techniques. Finally, we also point out several advantages of addressing data-consuming activities through a tight integration of a parallel database server and data mining techniques.

17 citations


Journal ArticleDOI
TL;DR: The paper concludes by describing a convenient technique for projecting patterns onto various surfaces such as spheres, ellipsoids and hyperbolic paraboloids, together with a function for evolving configurations based on polyhedra that constitutes the kernel of the problem-handling strategy for visualisation and generation of data for geodesic domes.
Abstract: Remarkable advances in computer aided design have made it possible to have effective techniques for the organisation of graphics as well as data generation. In particular, the fields of architecture and structural engineering may greatly benefit from the concepts of formex algebra, which is a mathematical system providing a convenient basis for generating and modifying configurations. Formian is the programming language of formex algebra, which handles problems of data generation and computer graphics with ease and elegance. The concepts of formex algebra have found applications in many disciplines; this paper describes some of the basic concepts of formex algebra in relation to a variety of space structures. Some composite transformations are illustrated that allow the creation and manipulation of a number of families of surfaces used for representing membranes, pneumatic structures and lattice shells. The paper concludes by describing a convenient technique for projecting patterns onto various surfaces such as spheres, ellipsoids and hyperbolic paraboloids. Also, a function is described for evolving configurations based on polyhedra. This function constitutes the kernel of the problem-handling strategy for visualisation and generation of data for geodesic domes.
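
The pattern-projection idea can be illustrated without Formian: generate a flat grid of nodes and push each node along its position vector onto a sphere (a central projection). The sketch below uses invented grid and radius parameters and stands in for the formex functions described; projection onto ellipsoids or hyperbolic paraboloids would follow the same pattern with a different mapping.

```python
import numpy as np

def planar_grid(n, size):
    # An (n+1) x (n+1) grid of nodes on the plane z = size, centred on the z-axis.
    s = np.linspace(-size / 2, size / 2, n + 1)
    x, y = np.meshgrid(s, s)
    z = np.full_like(x, size)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def project_to_sphere(points, radius):
    # Central projection: push each node along its position vector onto the sphere.
    lengths = np.linalg.norm(points, axis=1, keepdims=True)
    return radius * points / lengths

nodes = planar_grid(n=8, size=10.0)
dome = project_to_sphere(nodes, radius=10.0)
print(dome.shape, np.allclose(np.linalg.norm(dome, axis=1), 10.0))
```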

Journal ArticleDOI
TL;DR: This paper describes how the company keeps track of and measures the process of conducting technical reviews of software projects, and reports that a tool to support these activities is a necessity.
Abstract: What is not measured is not controlled. What is not tracked is not done. If both adages are true, it is obvious that we need to measure and track the process of conducting technical reviews of software projects. In this paper I intend to describe how we track and measure this process in our company. We have found that a tool to support these activities is a necessity.

Journal ArticleDOI
TL;DR: The intention of this paper is to elaborate on the system attributes comprising software quality, examine the (often mutually conflicting) requirements of the stakeholders in a software product, and suggest the means whereby the conflict may be resolved and the criteria by which software quality is determined may be measured.
Abstract: "Quality is essentially defined as 'fitness for purpose'". The above statement is found in that form, or in a similar paraphrase, in most writings on quality assurance procedures. It begs the questions that we all know what 'fitness' means and that there is a single 'purpose' being targeted. "Quality is binary. A product either has it or it does not. There is no such thing as degrees of quality." Another quotation concerning quality, including software quality. We feel that we should be able to endorse the sentiment yet we all know that we must reconcile it with the notion that we only get what we pay for and that, to some extent, quality lies in the eye of the beholder. The practice of Quality Function Deployment (QFD) is concerned with the identification of the various interests of the stakeholders in the development of a product and of the criteria which they would use to define quality. Once these criteria have been agreed upon by the client and the developer, quality shifts from the subjective to the binary. It is the intention of this paper to : • elaborate on the system attributes comprising software quality, • examine the (often mutually conflicting) requirements of the stakeholders in a software product, and • suggest the means whereby the conflict may be resolved and the criteria by which software quality is determined may be measured. Transactions on Information and Communications Technologies vol 8, © 1994 WIT Press, www.witpress.com, ISSN 1743-3517

Journal ArticleDOI
TL;DR: In this article, the problem of finding the largest itemset in a given collection of transactions is studied, i.e., the itemset that occurs most frequently in the transactions.
Abstract: The largest itemset in a given collection of transactions D is the itemset that occurs most frequently in D. This paper studies the problem of finding the N largest itemsets, whose solution can be used to generate an appropriate number of interesting itemsets for mining association rules. We present an efficient algorithm for finding the N largest itemsets. The algorithm is implemented and compared with the naive solution using the Apriori approach. We present experimental results as well as theoretical analysis showing that our algorithm has a much better performance than the naive solution. We also analyze the cost of our algorithm and observe that it has polynomial time complexity in most cases of practical applications.
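
For contrast with the efficient algorithm the paper proposes, the naive approach is easy to state: count the support of every itemset up to a small size and keep the N most frequent. The sketch below implements that brute-force baseline (the itemset size cap and the example transactions are invented); the paper's contribution is precisely avoiding this exhaustive counting.

```python
from itertools import combinations
from collections import Counter
import heapq

def n_largest_itemsets(transactions, n, max_size=3):
    """Brute-force baseline: count every itemset up to max_size items and keep
    the n most frequent. Exponential in max_size, hence only a reference point."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for size in range(1, min(max_size, len(items)) + 1):
            counts.update(combinations(items, size))
    # n itemsets with the highest support (ties broken arbitrarily)
    return heapq.nlargest(n, counts.items(), key=lambda kv: kv[1])

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"bread", "butter"},
    {"milk", "butter"},
]
for itemset, support in n_largest_itemsets(transactions, n=4):
    print(itemset, support)
```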

Journal ArticleDOI
TL;DR: An information theoretic measure is developed which is used as a criterion for selecting the rules generated from databases, and its boundary is used to prune the hypothesis search space to reduce the complexity of rule generation.
Abstract: Systems for inducing classification rules from databases are valuable tools for assisting in the task of knowledge acquisition for expert systems. In this paper, we introduce an approach for extracting knowledge from databases in the form of inductive rules. We develop an information theoretic measure which is used as a criterion for selecting the rules generated from databases. To reduce the complexity of rule generation, the boundary of the information measure is analyzed and used to prune the hypothesis search space. The system is implemented and tested on some well-known machine learning databases.
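
The paper's exact measure and pruning bound are not reproduced in the abstract; one plausible information-theoretic criterion of the same flavour is the information gain obtained by splitting the data on whether a rule's antecedent fires. The sketch below computes that quantity for a hypothetical rule on a toy dataset; the attribute names and labels are invented.

```python
from math import log2
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def rule_information_gain(records, labels, antecedent):
    """Information gain from splitting the data on whether the rule antecedent fires.

    `antecedent` is a predicate over a record, e.g. lambda r: r["outlook"] == "sunny".
    """
    covered = [y for r, y in zip(records, labels) if antecedent(r)]
    rest = [y for r, y in zip(records, labels) if not antecedent(r)]
    if not covered or not rest:
        return 0.0                      # the rule covers everything or nothing
    n = len(labels)
    split_entropy = (len(covered) / n) * entropy(covered) + (len(rest) / n) * entropy(rest)
    return entropy(labels) - split_entropy

records = [{"outlook": "sunny", "windy": False}, {"outlook": "sunny", "windy": True},
           {"outlook": "rain", "windy": False}, {"outlook": "rain", "windy": True}]
labels = ["no", "no", "yes", "no"]
print(rule_information_gain(records, labels, lambda r: r["outlook"] == "sunny"))
```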

Journal ArticleDOI
TL;DR: It is argued that rule interestingness measures should be extended to take into account the additional rule-quality factors of disjunct size, imbalance of the class distribution, attribute interestingness, misclassification costs and the asymmetry of classification rules.
Abstract: This paper studies several criteria for evaluating rule interestingness. It first reviews some rule interestingness principles with respect to the widely-used criteria of coverage, completeness and confidence factor of a rule. It then considers several additional factors (or criteria) influencing rule interestingness that have been somewhat neglected in the literature on rule interestingness. As a result, this paper argues that rule interestingness measures should be extended to take into account the additional rule-quality factors of disjunct size, imbalance of the class distribution, attribute interestingness, misclassification costs and the asymmetry of classification rules. The paper also presents a case study on how a popular rule interestingness measure can be extended to take into account the proposed additional rule-quality factors.
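
As an illustration of how a confidence-style measure might be extended with some of the additional factors discussed, the toy score below multiplies a rule's confidence by a rarity factor (favouring rules that predict a minority class) and a misclassification-cost factor. The weighting scheme is invented for illustration and is not the paper's measure.

```python
def rule_interestingness(covered_total, covered_correct, class_prior,
                         misclassification_cost=1.0):
    """Toy interestingness score for a classification rule.

    confidence    -- the usual precision of the rule on the examples it covers
    rarity factor -- rules predicting a rare class are boosted (1 - class prior)
    cost factor   -- rules for classes whose errors are expensive are boosted
    The multiplicative combination is illustrative only.
    """
    confidence = covered_correct / covered_total
    rarity = 1.0 - class_prior
    return confidence * rarity * misclassification_cost

# A rule predicting a rare, costly-to-miss class outranks an equally confident
# rule predicting the majority class.
print(rule_interestingness(20, 18, class_prior=0.05, misclassification_cost=5.0))
print(rule_interestingness(200, 180, class_prior=0.80, misclassification_cost=1.0))
```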

Journal ArticleDOI
TL;DR: It is shown that a very high percentage of software developers have poor knowledge of software process issues and no visible software process, and their best action is to seek to install appropriate elementary management practices thereby achieving a state where software process improvement may become applicable to a visible process, should provable benefits be forthcoming.
Abstract: The purpose of this paper is to provide a balanced view of software process improvement and its relationship to software quality. The paper looks at the historical background to software development and explores the importance of correct management practices, including their sufficiency to lead to repeatable high levels of software quality. It also discusses leading initiatives in complementary software process engineering and offers an assessment of the value and applicability of current approaches to software process improvement. In particular, the discipline of software process assessment is evaluated. The importance of utilising software process models and related measurement is also briefly discussed. It is shown that the software process improvement approaches considered are not yet mature sciences; in particular, there are significant unresolved research issues and possible inadequacies associated with software process assessment. In conclusion, although software process improvement is not yet proved to be the route to software quality, it remains a possible route. The question of the readiness of software producers, in terms of knowledge and the current capability of their software production practices, to make effective use of process improvement techniques is also addressed. It is shown that a very high percentage of software developers have poor knowledge of software process issues and no visible software process. This paper concludes that their best action is to seek to install appropriate elementary management practices, thereby achieving a state where software process improvement may become applicable to a visible process, should provable benefits be forthcoming.

Journal ArticleDOI
TL;DR: This paper reports the essentials of a software product evaluation methodology, called CDSEM (Checklist Driven Software Evaluation Methodology), designed by the Software Quality Laboratory of Tecnopolis CSATA Novus Ortus, and focuses on the use of software evaluation metrics in the framework of this methodology.
Abstract: This paper reports the essentials of a software product evaluation methodology, called CDSEM (Checklist Driven Software Evaluation Methodology), designed by the Software Quality Laboratory of Tecnopolis CSATA Novus Ortus. Our intention is to focus on the use of software evaluation metrics in the framework of our methodology. We consider a software product as composed of different parts: software system, product documentation, user documentation, support services and distribution media. Each component needs a specific set of metrics and tools for the evaluation process. After each component has been evaluated, the methodology provides a unified assessment process. The methodology proposes evaluation models in accordance with the standard ISO 9126 (Information technology - Software product evaluation - Quality characteristics and guidelines for their use), taking into account the emerging new parts of the standard. The six characteristics defined in ISO 9126 (functionality, reliability, usability, maintainability, portability, efficiency) are decomposed, for every component, into sublayers of abstraction down to the identification of measurable items (metrics). Moreover, the methodology identifies, for each metric, tools and procedures for the evaluation (code measures, inspection, etc.). A tool has been developed on a PC platform (CDSET, Checklist Driven Software Evaluation Tool) to manage the methodology's information base, results and reports.

Journal ArticleDOI
TL;DR: SQPC (semi-quantitative physics compiler), an implemented approach to modelling and simulation that can predict the behavior of incompletely specified systems, such as those that arise in the water control domain, is described.
Abstract: Incomplete information is present in many engineering domains, hindering traditional and non-traditional simulation techniques. This paper describes SQPC (semi-quantitative physics compiler), an implemented approach to modelling and simulation that can predict the behavior of incompletely specified systems, such as those that arise in the water control domain. SQPC is the first system that unifies compositional modeling techniques with semi-quantitative representations. We describe SQPC's foundations, QSIM and QPC, and how it extends them. We demonstrate SQPC using an example from the water supply domain.

Journal ArticleDOI
TL;DR: Unless multiple perspectives are recognized, quality management will continue to be ineffectual in complex situations.
Abstract: A central concern of quality is customer satisfaction, and any effective quality management system must incorporate a procedure for assessing the level of customer satisfaction achieved. A collapsed-down view of customer satisfaction made through a single perspective lacks the richness to address situations characterized by complexity and messiness. An objective approach to assessing customer satisfaction should be supplemented (not replaced) by perspectives that reflect softer aspects of quality. Multiple perspectives represent different knowledge interests and cannot be reduced to a common denominator; judgement must be exercised to decide the relative weighting to be given to each of the perspectives. A customer satisfaction framework has been developed based upon multiple perspectives: product, use, and service. Unless multiple perspectives are recognized, quality management will continue to be ineffectual in complex situations.

Journal ArticleDOI
TL;DR: The paper describes some applications of expert systems (ES) for flexible manufacturing systems (FMS) and discusses the interfacing problems of the software modules.
Abstract: The paper describes some applications of expert systems (ES) for flexible manufacturing systems (FMS). A complex FMS simulation tool is shown with different connection possibilities to expert systems, to achieve good evaluation performance and to make steps toward automatic control. The interfacing problems of the software modules are also discussed. In the current stage of the research, rapid prototype programs are under test, and the future goal of the CIM Research Laboratory (CIMLab) is to solve industrial problems with these tools and software.

Journal ArticleDOI
TL;DR: This paper combines a Multi-Agent system with simulated annealing for flow shop scheduling, and stresses the efficiency and the optimality of a distributed implementation of this tool compared to a classical one.
Abstract: The flow shop scheduling problem consists in finding, according to a certain number of criteria, the best possible allocation of n jobs on m resources, such that the operations of every job are processed on all resources in a unique order. Because of its highly combinatorial aspect, this scheduling problem has been widely studied in the literature by exact and, mostly, heuristic methods. The approach we adopt here to deal with this problem combines a Multi-Agent system with a stochastic combinatorial optimization tool, simulated annealing. This paper stresses the efficiency and the optimality of a distributed implementation of this tool compared to a classical one.
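
The simulated annealing component can be sketched directly: compute the makespan of a job permutation with the usual flow-shop recurrence, then anneal over swap moves. The sketch below (single-process, with invented job counts, processing times and cooling schedule) shows the classical, non-distributed version that the paper takes as its baseline.

```python
import math
import random

def makespan(jobs_order, processing_times):
    """Completion time of the last job on the last machine for a permutation
    flow shop, where processing_times[j][m] is the time of job j on machine m."""
    n_machines = len(processing_times[0])
    completion = [0.0] * n_machines
    for job in jobs_order:
        for m in range(n_machines):
            ready = completion[m - 1] if m > 0 else 0.0
            completion[m] = max(completion[m], ready) + processing_times[job][m]
    return completion[-1]

def simulated_annealing(processing_times, temp=100.0, cooling=0.995, steps=5000):
    n_jobs = len(processing_times)
    order = list(range(n_jobs))
    random.shuffle(order)
    best = current = makespan(order, processing_times)
    best_order = order[:]
    for _ in range(steps):
        i, j = random.sample(range(n_jobs), 2)
        order[i], order[j] = order[j], order[i]          # swap neighbourhood
        candidate = makespan(order, processing_times)
        if candidate <= current or random.random() < math.exp((current - candidate) / temp):
            current = candidate
            if current < best:
                best, best_order = current, order[:]
        else:
            order[i], order[j] = order[j], order[i]      # reject: undo the swap
        temp *= cooling
    return best_order, best

random.seed(0)
times = [[random.randint(1, 20) for _ in range(4)] for _ in range(8)]  # 8 jobs, 4 machines
print(simulated_annealing(times))
```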

Journal ArticleDOI
TL;DR: This paper describes a new approach for planning and monitoring of construction projects based on visualisation of situations and/or solutions generated by a computerised system during the planning process.
Abstract: This paper describes a new approach to the planning and monitoring of construction projects. This approach is the result of two lines of research, which investigated the ways of integrating Computer Graphics and Artificial Intelligence tools within construction processes and the benefits of doing so. The approach is based on visualisation of situations and/or solutions generated by a computerised system during the planning process. The implementation of the approach is outlined. Examples of visualisation are provided and discussed. Further development is proposed and conclusions are drawn.

Journal ArticleDOI
TL;DR: Fuzzy logic, self-adjusting neural networks and dynamic interaction among the input parameters of a system (instead of using net values) are among the new techniques.
Abstract: In this work, classification methods are examined from the viewpoint of Artificial Intelligence. Special reference is made to a pre-existing method of classifying rock masses (Bieniawski's classification method), and two typical attempts to use Artificial Intelligence tools are described: a) transfer of the methodology's procedure into an expert system shell, and b) training of a neural network with sets of input-result pairs in order to reproduce the outward behaviour of the methodology. As an extension, machine learning is proposed as a tool for the derivation of new classification methods tailored to specific systems. Fuzzy logic, self-adjusting neural networks and dynamic interaction among the input parameters of a system (instead of using net values) are among the new techniques. Key words: Classification, Clustering, Artificial Intelligence, Expert Systems, Neural Networks, Fuzzy Logic.

Journal ArticleDOI
TL;DR: In this article, the authors present a methodology for the synthesis of path generating mechanisms using genetic algorithms, inspired by the principles of natural evolution and "Survival of the fittest".
Abstract: This paper presents a methodology for the synthesis of path generating mechanisms using Genetic Algorithms (GAs). GAs are a novel search and optimisation technique inspired by the principles of natural evolution and 'survival of the fittest'. The problem used to illustrate the use of GAs in this way is the synthesis of a four bar mechanism to provide a desired output path.

Journal ArticleDOI
TL;DR: In this article, the authors used social science approaches, specifically Ethnography and Discourse Analysis, to investigate the factors influencing the evolution and adoption of software quality management systems, which can bring to light the working practices and implicit perceptions used by software developers which directly influence their application of the concept of quality to the process of writing software.
Abstract: This paper arises from a project, called SoFEA, which is using social science approaches, specifically Ethnography and Discourse Analysis, to investigate the factors influencing the evolution and adoption of software quality management systems. Defining quality in any situation is a complex social process. This process not only involves certain quantifiable measures but also the interplay of social and cultural definitions of quality, within a specific context. Lack of quality management can have effects that range from economic disaster to fatality. Ethnographic and Discourse Analysis research can bring to light the working practices and implicit perceptions used by software developers which directly influence their application of the concept of quality to the process of writing software. Our research is framed as an ethnographic study, intended to allow description of the wider social context/s informing notions of quality. Specific techniques of Discourse Analysis are used to examine the texts, and the spoken and electronic interactions involved. It is the interconnections between the different texts, interactions and social relations that define or delimit such concepts as 'software quality'. This paper explores some of the important aspects of Ethnography and Discourse Analysis and how these can be used to research the processes by which software quality is defined.

Journal ArticleDOI
TL;DR: The general design of a system to control urban street traffic signals is presented, based on cooperating, learning, real-time, distributed expert systems; it has attained a 36% improvement in traffic flow under non-saturated conditions.
Abstract: We present the general design of a system to control urban street traffic signals. It is based on cooperating, learning, real-time, distributed expert systems. We also describe the operation of a running prototype program which, while using several simplifying assumptions, has proven the technical feasibility of the approach. It has also attained a 36% improvement in traffic flow under non-saturated conditions. Current developments include the design of a general-purpose system that can be customized to most street configurations. Finally, we draw conclusions concerning the distributed planning and problem-solving methodology.

Journal ArticleDOI
TL;DR: The software process is discussed and the Capability Maturity Model (CMM) is described, showing how the CMM can be used as a basis to measure the maturity of an organisation's software process and to plan its improvement.
Abstract: The work of W. Edwards Deming has convinced industry that it must first measure quality and then emphasise process to improve quality. In response to Deming's arguments, and in light of the perception that the software industry is unable to produce quality products on schedule and within budget, more software development organisations are now emphasising process measurement, monitoring, and assessment. This paper describes what is meant by the software process and discusses an approach for its measurement and improvement.
INTRODUCTION Unreliable software makes big news, from emergency services disasters to social security payment blunders. Improved software quality is essential to ensure reliable products and services and to gain customer satisfaction. While software development has existed for more than four decades, we have so far failed to make it an industry and an engineering discipline rather than a craft. Developing reliable and usable software that is delivered on time and within budget still represents a difficult endeavour for many organisations. As the role of software becomes increasingly critical for businesses as well as for human lives, the problems caused by software products that are late, over budget, or that do not work become magnified. If lives are lost or people are inconvenienced due to incapable computer software, the news media is there to make big stories. Organisations are realising that their fundamental problem is the immaturity of their software process. Robert Lai (1993) proposes that process improvement is the second maturity wave of the software industry. He states that "the first wave of software was developed using the waterfall model in the 1970's. Today we are in the midst of a second wave, a maturity movement, as we attempt to formally define the development process and the best ways to continuously improve it" (Lai 1993). Taking the lead in this area has been Watts Humphrey (Humphrey 1989, 1990, 1991) and the Software Engineering Institute (SEI) at Carnegie Mellon University. The SEI started in 1986 to develop a process maturity framework to help organisations appraise the maturity of their software process, and to provide guidance for organisations to improve their software process capability. A brief description of the framework was released in September 1987, including a maturity questionnaire. The SEI evolved the model and questionnaire into the 5-Level Capability Maturity Model (CMM) in 1991. In February 1993, it released CMM version 1.1 (Paulk et al 1993). This paper discusses the software process and describes the Capability Maturity Model (CMM). It also shows how the CMM can be used as a basis to measure the maturity of the software process of an organisation and to plan its improvement.
SOFTWARE QUALITY AND PROCESS IMPROVEMENT In his book "Quality is Free", Phil Crosby states that: "Quality is free. It is not a gift, but it is free. What costs money are the unquality things - all the actions that involve not doing jobs right the first time" (Crosby 1980). If such a statement is true for many disciplines, it is particularly true for software. The evidence is abundant in the number of software products which exceeded their budget, were produced late, failed to satisfy the user requirements, and are full of bugs. The demand for improved software quality is increasing to ensure reliable products and services. The benefits of improved quality come in the form of a reduction in failure costs. For software projects, failure costs include (DTI 1992):
• costs of correcting defects, both before and after delivery
• overruns against time and budget
• unnecessarily high maintenance costs
• indirect costs which users incur due to poor quality software.
The link between process maturity and software quality is expressed in the premise that "the quality of the software system is governed by the quality of the process used to develop and maintain it". One stumbling block to improving software quality seems to be that not enough attention is paid to the overall development process itself. While software professionals typically devote their time to developing, testing or documenting software products, no one has prime responsibility for improving the software process. Experience has shown that if no one is working on the software process, orderly improvement is unlikely. The process certainly won't improve itself; rather, most likely, it will deteriorate over time. Continuous improvement can occur only if a process infrastructure is in place. Watts Humphrey argues in an early article published in Datamation, April 1989, that: "Without work on the process, there will be little or no progress in improving software".
THE SOFTWARE PROCESS Adopting a process view of software development represents a revolutionary change in perspective. A process orientation to software development involves elements of structure, focus, measurement, ownership, skills, and supporting technology. In this section we investigate what is meant by the software process.
What Is The Software Process? According to Webster's dictionary, a process is "a system of operations in producing something .. a series of actions, changes, or functions that achieve an end result". Chambers Concise Dictionary defines a process as "a series of actions or events .. a sequence of operations or changes undergone". The IEEE defines a process as "a sequence of steps performed for a given purpose". In the general business context, a process is defined as "a structured, measured set of activities designed to produce a specified output for a particular customer or market" (Davenport 1993). These definitions put a strong emphasis on "HOW" work is done, in contrast to a product focus's emphasis on "WHAT". Accordingly, a process can be considered as "a specific ordering of work activities across time and place, with a beginning, an end, and clearly identified inputs and outputs: a structure for action". In this paper we adopt the following definitions, quoted in (Humphrey 1990, Paulk et al 1993), which are intended to encompass software throughout its life, covering new development, enhancement, and repair. The software process is "the set of activities, methods, and practices used in the production and evolution of software". The software engineering process is "the total set of software engineering activities needed to transform a user's requirements into software".
People and the Software Process Software development is still a people-intensive activity. Talented people are the most important element in any software organisation. Even if you get the best people available, if they do not follow a common process - if everyone wrote in different programming languages, used different conventions, or didn't co-ordinate their design and code changes with their peers - the results will be chaos. Successful software organisations have learned that even the best professionals need a structured and disciplined environment in which to do cooperative work. Software organisations that do not establish such disciplines condemn their people to endless hours of repetitively solving technically trivial problems. The obvious fact is that attracting the best people is vital, but it is also essential to support them with an effectively managed software process.
Technology and the Software Process Another myth is the widespread belief that some technologically advanced tool or method will provide a magic answer to the software crisis. This is not only wrong, it is dangerous. Organisations which jumped on the bandwagon of CASE tools and ended up with failure and wasted time and effort have learned their lesson the hard way. Just ask yourself before introducing technology: what do I want to automate? In the absence of a defined, practised, and managed process, introducing automation can lead to increased chaos. There are several factors which limit the effective use of software technology: an ill-defined process, inconsistent process implementation, and poor process management. Software technology cannot be fully effective until these problems have been properly addressed.
The Need for a Defined Process If no effort is made to define and enhance the software development process across the whole organisation, each software development project latches on to its own tools, methods, and practices, with little guidance available on how to use them. This ad hoc approach will not be sufficient to tackle the task of developing complex software systems. The goal of software process management is to enable organisations to produce software that meets cost, schedule, and quality objectives. The principles are the same as those that underpin statistical process control - principles that have been successfully used in controlling scientific experiments and in high-volume manufacturing operations. Statistical concepts have been found to be just as applicable to software development as they are to the production of manufactured goods such as motor cars. Applying such concepts will only be possible if there is a defined and managed process.
Software Process Models

Journal ArticleDOI
TL;DR: This paper surveys the main approaches to categorize and evaluate data mining techniques and shows that no single technique provides the best performance for all types of tasks, and that a multi-strategy approach is needed to deal with real complex problems.
Abstract: A fundamental issue in the application of data mining algorithms to real-life problems is to know ahead of time the usability of an algorithm for the class of problems being considered. In other words, we would like to know, before starting the KDD process for a particular problem P whose features place it in a type Cj of problems or tasks, how well a specific data mining algorithm Aj would perform in solving P. In this paper, we survey the main approaches to categorizing and evaluating data mining techniques. This will help to clarify the relationship that can exist between a particular data mining algorithm and the type of tasks or problems for which it is best suited. Perhaps the most important conclusion we draw is that no single technique provides the best performance for all types of tasks, and that a multi-strategy approach is needed to deal with real complex problems. Categorizing data mining techniques will guide the user, prior to the start of the KDD process or during the data mining phase, in the selection of the best subset of techniques to resolve a problem or data mining task.

Journal ArticleDOI
TL;DR: This paper describes how the concept of FMEA, Failure Modes and Effects Analysis, can be utilized to improve the reliability of the software production process, resulting in higher product quality as well as higher productivity.
Abstract: This paper describes how the concept of FMEA, Failure Modes and Effects Analysis, can be utilized to improve the reliability of the software production process, resulting in higher product quality as well as higher productivity. This concept has already been implemented by ISARDATA, a small software company in Germany specialised in the field of software test and validation, in several software development projects. The paper begins with an introduction to the general principles of FMEA known from applications in various manufacturing industries. The introduction is followed by a brief description of the necessary adaptations of the FMEA method for application in a software production process. The next section describes the essentials of planning FMEA as an integral part of software lifecycle management. Since FMEA is primarily the output of teamwork, this section defines practical guidelines for constituting the FMEA team, consisting of software developers, testers and quality planners, and for conducting the meetings, including definition of the FMEA objectives of the project. The following section of the paper describes the main FMEA tasks to be performed by the team. These are the identification of: a) the structure of the software product in terms of its subsystems, functions, external and internal interfaces and interdependencies; b) the possible failure modes of the product and their causes; c) the effects of the failures, including calculation of gravity factors; d) possible measures to prevent and/or correct the failures; e) test plans to detect such failures during the software development phases; f) metrics for the evaluation of the FMEA results. The next section of the paper describes how this process can be supported by software tools. The final section sums up the conclusions.
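
The "gravity factors" mentioned in item c) are commonly handled in manufacturing FMEA through a risk priority number (RPN = severity × occurrence × detection); whether the authors use exactly this metric is not stated, so the sketch below should be read as the conventional calculation applied to invented software failure modes.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int      # 1 (negligible) .. 10 (catastrophic)
    occurrence: int    # 1 (rare)       .. 10 (frequent)
    detection: int     # 1 (certain to be caught) .. 10 (undetectable before release)

    @property
    def rpn(self):
        # Classic risk priority number used to rank failure modes.
        return self.severity * self.occurrence * self.detection

failure_modes = [
    FailureMode("wrong rounding in invoice totals", severity=7, occurrence=4, detection=3),
    FailureMode("crash on malformed input file", severity=8, occurrence=2, detection=6),
    FailureMode("misleading error message", severity=3, occurrence=6, detection=2),
]

# Highest-RPN failure modes get test effort and preventive measures first.
for fm in sorted(failure_modes, key=lambda f: f.rpn, reverse=True):
    print(f"{fm.rpn:4d}  {fm.description}")
```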