
Showing papers in "IEEE Transactions on Software Engineering in 1989"


Journal ArticleDOI
TL;DR: An extensive case study is presented and analyzed: the attempt to introduce new information systems to a large industrial corporation in an emerging nation. The analysis shows that Theory W and its subsidiary principles do an effective job both in explaining why the project encountered problems and in prescribing ways in which the problems could have been avoided.
Abstract: A software project management theory is presented called Theory W: make everyone a winner. The authors explain the key steps and guidelines underlying the Theory W statement and its two subsidiary principles: plan the flight and fly the plan; and, identify and manage your risks. Theory W's fundamental principle holds that software project managers will be fully successful if and only if they make winners of all the other participants in the software process: superiors, subordinates, customers, users, maintainers, etc. Theory W characterizes a manager's primary role as a negotiator between his various constituencies, and a packager of project solutions with win conditions for all parties. Beyond this, the manager is also a goal-setter, a monitor of progress towards goals, and an activist in seeking out day-to-day win-lose or lose-lose project conflicts confronting the project, and changing them into win-win situations. Several examples illustrate the application of Theory W. An extensive case study is presented and analyzed: the attempt to introduce new information systems to a large industrial corporation in an emerging nation. The analysis shows that Theory W and its subsidiary principles do an effective job both in explaining why the project encountered problems, and in prescribing ways in which the problems could have been avoided.

497 citations


Journal ArticleDOI
TL;DR: It is shown that optimal scheduling without a priori knowledge is impossible in the multiprocessor case even if there is no restriction on preemption owing to precedence or mutual exclusion constraints.
Abstract: The problems of hard-real-time task scheduling in a multiprocessor environment are discussed in terms of a scheduling game representation of the problem. It is shown that optimal scheduling without a priori knowledge is impossible in the multiprocessor case even if there is no restriction on preemption owing to precedence or mutual exclusion constraints. Sufficient conditions that permit a set of tasks to be optimally scheduled at run time are derived.

480 citations


Journal ArticleDOI
TL;DR: This paper reports results from some investigations into the problem of making optimum use of the remaining processor idle time in scheduling periodic tasks, and provides an efficient algorithm for determining the maximum total idle time available between any two instants.
Abstract: Task scheduling is an important issue in the design of a real-time computer system because tasks have execution deadlines that must be met; otherwise the system fails, with severe consequences upon the environment. In this paper, we study the problem of scheduling periodic time-critical tasks on a monoprocessor system. A periodic time-critical task consists of an infinite number of requests, each of which has a prescribed deadline. Tasks are assumed to meet their timing requirements when scheduled by the Earliest Deadline algorithm, and preemptions are allowed. We report results from some investigations into the problem of making optimum use of the remaining processor idle time in scheduling periodic tasks either as soon as possible or as late as possible. The major results consist of the statement and proof of properties relating to the location and duration of idle time intervals, and enable us to provide an efficient algorithm for determining the maximum total idle time available between any two instants. We describe how these results can be applied, first to the decision problem that arises when a sporadic time-critical task occurs and requires to be run at an unpredictable time, and second to the scheduling problem that arises in a fault-tolerant system using the deadline mechanism, for which each task implements primary and alternate algorithms. Index Terms: Deadline mechanism, idle time, preemptive scheduling.

447 citations
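
As a rough illustration of the setting (a sketch under my own assumptions, not the paper's algorithm, which works from proved properties of idle intervals rather than simulation), the idle time available to a sporadic request can be measured by simulating preemptive Earliest Deadline scheduling of the periodic tasks:

# Illustrative sketch only: simulate preemptive Earliest Deadline scheduling
# of periodic tasks (deadline = end of period) over [0, horizon) and total
# the idle time a sporadic task could claim. Task = (period, wcet).
import heapq

def edf_idle_time(tasks, horizon):
    releases = sorted((k * p, p, c) for p, c in tasks
                      for k in range(int(horizon // p) + 1))
    ready, idle, now, i = [], 0.0, 0.0, 0
    while now < horizon:
        while i < len(releases) and releases[i][0] <= now:
            r, p, c = releases[i]
            heapq.heappush(ready, [r + p, c])   # [absolute deadline, work left]
            i += 1
        nxt = releases[i][0] if i < len(releases) else horizon
        if not ready:                           # idle until the next release
            idle += min(nxt, horizon) - now
            now = min(nxt, horizon)
            continue
        job = ready[0]                          # earliest deadline first
        run = min(job[1], nxt - now, horizon - now)
        job[1] -= run
        now += run
        if job[1] <= 1e-12:
            heapq.heappop(ready)
    return idle

print(edf_idle_time([(5, 2), (10, 3)], 20))     # 20 - (4*2 + 2*3) = 6.0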


Journal ArticleDOI
TL;DR: The authors present a common foundation for integrating pairs of entity sets, pairs of relationship sets, and an entity set with a relationship set based on the basic principle of integrating attributes.
Abstract: The authors present a common foundation for integrating pairs of entity sets, pairs of relationship sets, and an entity set with a relationship set. This common foundation is based on the basic principle of integrating attributes. Any pair of objects whose identifying attributes can be integrated can themselves be integrated. Several definitions of attribute equivalence are presented. These definitions can be used to specify the exact nature of the relationship between a pair of attributes. Based on these definitions, several strategies for attribute integration are presented and evaluated.

445 citations


Journal ArticleDOI
TL;DR: The author shows that unless P=NP, there can be no polynomial-time epsilon-approximate algorithm for the module allocation problem, nor can there exist a local search algorithm that requires polynomial time per iteration and yields an optimum assignment.
Abstract: The author studies the complexity of the problem of allocating modules to processes in a distributed system to minimize total communication and execution costs. He shows that unless P=NP, there can be no polynomial-time epsilon-approximate algorithm for the problem, nor can there exist a local search algorithm that requires polynomial time per iteration and yields an optimum assignment. Both results hold even if the communication graph is planar and bipartite. On the positive side, it is shown that if the communication graph is a partial k-tree or an almost-tree with parameter k, the module allocation problem can be solved in polynomial time.

435 citations


Journal ArticleDOI
TL;DR: This study shows that a test sequence produced by the T-method has poor fault detection capability, whereas test sequences produced by the U-, D-, and W-methods have comparable fault coverage, superior to that of the T-method, on several classes of randomly generated machines used in this study.
Abstract: The authors present a detailed study of four formal methods (T-, U-, D-, and W-methods) for generating test sequences for protocols. Applications of these methods to the NBS Class 4 Transport Protocol are discussed. An estimation of fault coverage of four protocol-test-sequence generation techniques using Monte Carlo simulation is also presented. The ability of a test sequence to decide whether a protocol implementation conforms to its specification relies heavily on the range of faults that it can capture. Conformance is defined at two levels, namely, weak and strong conformance. This study shows that a test sequence produced by the T-method has poor fault detection capability, whereas test sequences produced by the U-, D-, and W-methods have comparable fault coverage, superior to that of the T-method, on several classes of randomly generated machines used in this study. Also, some problems with a straightforward application of the four protocol-test-sequence generation methods to real-world communication protocols are pointed out.

402 citations
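
For intuition, here is a hedged sketch of the weakest of the four techniques, the T-method (the naive random-walk tour construction below is mine, not the paper's): a transition tour exercises every transition of the specification FSM and compares outputs, which catches output faults but can miss transfer faults, consistent with the poor coverage reported.

# Illustrative T-method sketch (my toy, not the paper's code): build a
# transition tour of a strongly connected deterministic FSM and check an
# implementation's outputs along it.
import random

def transition_tour(fsm, start, rng=random.Random(0)):
    """fsm: {(state, inp): (next_state, out)} -> list of inputs covering
    every transition (random walk; assumes the FSM is strongly connected)."""
    uncovered, state, tour = set(fsm), start, []
    while uncovered:
        inp = rng.choice([a for (s, a) in fsm if s == state])
        tour.append(inp)
        uncovered.discard((state, inp))
        state = fsm[(state, inp)][0]
    return tour

def outputs(fsm, start, inputs):
    state, out = start, []
    for a in inputs:
        state, o = fsm[(state, a)]
        out.append(o)
    return out

spec = {(0, 'a'): (1, 'x'), (0, 'b'): (0, 'y'),
        (1, 'a'): (0, 'y'), (1, 'b'): (1, 'x')}
impl = dict(spec)
impl[(1, 'b')] = (1, 'y')                         # seeded output fault
tour = transition_tour(spec, 0)
print(outputs(spec, 0, tour) == outputs(impl, 0, tour))  # False: fault caught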


Journal ArticleDOI
TL;DR: A methodology for specifying and providing assertions about time in higher-level-language programs is described, and examples of timing bounds and assertions that are proved include deadlines, timing invariants for periodic processes, and the specification of time-based events such as those needed for the recognition of single and double clicks from a mouse button.
Abstract: A methodology for specifying and providing assertions about time in higher-level-language programs is described. The approach develops three ideas: the distinction between, and treatment of, both real time and computer time; the use of upper and lower bounds on the execution times of program elements; and a simple extension of Hoare logic to include the effects of the passage of real time. Schemas and examples of timing bounds and assertions are presented for a variety of statement types and programs, such as conventional sequential programs including loops, time-related statements such as delay, concurrent programs with synchronization, and software in the presence of interrupts. Examples of assertions that are proved include deadlines, timing invariants for periodic processes, and the specification of time-based events such as those needed for the recognition of single and double clicks from a mouse button.

392 citations
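
The flavor of such a logic can be sketched in one schema (the notation here is mine, not necessarily the authors'): real time appears as a distinguished variable rt, and each program element S carries lower and upper execution-time bounds l_S and u_S:

\[
\{\, P \wedge rt = t_0 \,\}\; S \;\{\, Q \wedge t_0 + l_S \le rt \le t_0 + u_S \,\}
\]

A deadline D for S is then discharged by proving u_S <= D, and bounds compose additively for sequencing: for S_1;S_2, take l = l_{S_1} + l_{S_2} and u = u_{S_1} + u_{S_2}.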


Journal ArticleDOI
TL;DR: The Conic environment provides a language-based approach to the building of distributed systems which combines the simplicity and safety of a language approach with the flexibility and accessibility of an operating systems approach.
Abstract: The Conic environment provides a language-based approach to the building of distributed systems which combines the simplicity and safety of a language approach with the flexibility and accessibility of an operating systems approach. It provides a comprehensive set of tools for program compilation, configuration, debugging, and execution in a distributed environment. A separate configuration language is used to specify the configuration of software components into logical nodes. This provides a concise configuration description and facilitates the reuse of program components in different configurations. Applications are constructed as sets of one or more interconnected logical nodes. Arbitrary, incremental change is supported by dynamic configuration. In addition, the system provides user-transparent datatype transformation between heterogeneous processors. Applications may be run on a mixed set of interconnected computers running the Unix operating system and on base target machines with no resident operating system. The basic principles adopted in the construction of the Conic environment are outlined and the configuration and run-time facilities provided are described.

342 citations


Journal ArticleDOI
TL;DR: The authors discuss secure broadcasting, effected by means of a secure lock, on broadcast channels, such as satellite, radio, etc, implemented by using the Chinese Remainder theorem (CRT).
Abstract: The authors discuss secure broadcasting, effected by means of a secure lock, on broadcast channels, such as satellite, radio, etc. This lock is implemented by using the Chinese Remainder theorem (CRT). The secure lock offers the following advantages: only one copy of the ciphertext is sent; the deciphering operation is efficient; and the number of secret keys held by each user is minimized. Protocols for secure broadcasting using the secure lock, based on the public-key cryptosystem as well as the private-key cryptosystem, are presented.

284 citations
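
The CRT construction at the heart of the lock is compact enough to show (a minimal sketch of the combining step only; the paper's protocols wrap public- or private-key encipherment around it, and all names below are mine): each user i holds a secret pairwise-coprime modulus N_i and a session key K_i, and the broadcaster sends one lock X with X mod N_i == K_i for every intended receiver, so a single ciphertext serves the whole group.

# Minimal CRT "secure lock" sketch (combining step only).
from math import gcd
from functools import reduce

def crt_lock(keys, moduli):
    """Combine per-user keys into one lock X with X % n_i == k_i."""
    assert all(gcd(a, b) == 1 for i, a in enumerate(moduli)
               for b in moduli[i + 1:])          # moduli pairwise coprime
    N = reduce(lambda x, y: x * y, moduli)
    x = 0
    for k, n in zip(keys, moduli):
        m = N // n
        x += k * m * pow(m, -1, n)               # m * m^{-1} == 1 (mod n)
    return x % N

keys, moduli = [7, 11, 19], [101, 103, 107]
lock = crt_lock(keys, moduli)
for k, n in zip(keys, moduli):
    assert lock % n == k                         # each user recovers its key
print(lock)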


Journal ArticleDOI
TL;DR: The authors formalize the notion of methodological diversity by considering the sequence of decision outcomes that constitute a methodology and show that diversity of decision implies likely diversity of behavior for the different versions developed under such forced diversity.
Abstract: Work by D.E. Eckhardt and L.D. Lee (1985) shows that independently developed program versions fail dependently. The authors show that there is a precise duality between input choice and program choice in this model and consider a generalization in which different versions can be developed using diverse methodologies. The use of diverse methodologies is shown to decrease the probability of the simultaneous failure of several versions. Indeed, it is theoretically possible to obtain versions which exhibit better than independent failure behavior. The authors formalize the notion of methodological diversity by considering the sequence of decision outcomes that constitute a methodology. They show that diversity of decision implies likely diversity of behavior for the different versions developed under such forced diversity. For certain one-out-of-n systems the authors obtain an optimal method for allocating diversity between versions. For two-out-of-three systems there seem to be no simple optimality results which do not depend on constraints which cannot be verified in practice.

261 citations
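
A compact way to state the underlying model (standard Eckhardt-Lee notation; the duality and allocation results in the paper go beyond this): let theta(x) be the probability that a randomly chosen development produces a version failing on input x, and let X be a random input. Then

\[
\Pr[\text{all } n \text{ versions fail}] \;=\; E\big[\theta(X)^n\big] \;\ge\; \big(E[\theta(X)]\big)^n
\]

by Jensen's inequality, so like-developed versions fail together more often than independence would suggest. With two methodologies A and B the pairwise term becomes E[theta_A(X) theta_B(X)] = E[theta_A(X)] E[theta_B(X)] + Cov(theta_A(X), theta_B(X)), which falls below the independent product exactly when the two methodologies' difficulty functions are negatively correlated; that is the sense in which forced diversity can beat independence.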


Journal ArticleDOI
TL;DR: Petri nets in which random delays are associated with atomic transitions are defined in a comprehensive framework that contains most of the models already proposed in the literature and includes an execution policy based on the choice of the next transition to fire independently of the associated delay.
Abstract: Petri nets in which random delays are associated with atomic transitions are defined in a comprehensive framework that contains most of the models already proposed in the literature. To include generally distributed firing times into the model one must specify the way in which the next transition to fire is chosen, and how the model keeps track of its past history; this set of specifications is called an execution policy. A discussion is presented of the impact that different execution policies have on the semantics of the model, as well as the characteristics of the stochastic process associated with each of these policies. When the execution policy is completely specified by the transition with the minimum delay (race policy) and the firing distributions are of the phase type, an algorithm is provided that automatically converts the stochastic process into a continuous time homogeneous Markov chain. An execution policy based on the choice of the next transition to fire independently of the associated delay (preselection policy) is introduced, and its semantics is discussed together with possible implementation strategies.
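
A toy rendering of the race policy (my simplification, one step only; with phase-type or exponential delays the resulting process is the Markov chain the paper constructs): among the transitions enabled in a marking, sample each one's firing delay from its own distribution and fire the minimum.

# Race-policy single step for a stochastic Petri net (illustrative sketch).
import random

def race_step(marking, transitions, rng=random.Random(1)):
    """transitions: list of (name, inputs, outputs, delay_sampler),
    where inputs/outputs map place -> token count."""
    enabled = [t for t in transitions
               if all(marking.get(p, 0) >= n for p, n in t[1].items())]
    if not enabled:
        return marking, None, 0.0
    delay, (name, ins, outs, _) = min(((t[3](rng), t) for t in enabled),
                                      key=lambda x: x[0])   # the "race"
    new = dict(marking)
    for p, n in ins.items():
        new[p] -= n
    for p, n in outs.items():
        new[p] = new.get(p, 0) + n
    return new, name, delay

ts = [("t1", {"p0": 1}, {"p1": 1}, lambda r: r.expovariate(2.0)),
      ("t2", {"p0": 1}, {"p2": 1}, lambda r: r.expovariate(1.0))]
print(race_step({"p0": 1}, ts))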

Journal ArticleDOI
TL;DR: The authors discuss the infeasible-path problem as well as other issues that must be considered in order to evaluate these criteria more meaningfully and to formulate a more effective path-selection criterion.
Abstract: The authors report on the results of their evaluation of path-selection criteria based on data-flow relationships. They show how these criteria relate to each other, thereby demonstrating some of their strengths and weaknesses. A subsumption hierarchy showing their relationship is presented. It is shown that one of the major weaknesses of all the criteria is that they are based solely on syntactic information and do not consider semantic issues such as infeasible paths. The authors discuss the infeasible-path problem as well as other issues that must be considered in order to evaluate these criteria more meaningfully and to formulate a more effective path-selection criterion.

Journal ArticleDOI
TL;DR: Comparison with other clock synchronization algorithms shows that TEMPO, in an environment with no Byzantine faults, can achieve better synchronization at a lower cost.
Abstract: The authors discuss the upper and lower bounds on the accuracy of the time synchronization achieved by the algorithm implemented in TEMPO, the distributed service that synchronizes the clocks of the University of California, Berkeley, UNIX 4.3BSD systems. The accuracy is shown to be a function of the network transmission latency; it depends linearly upon the drift rate of the clocks and the interval between synchronizations. TEMPO keeps the clocks of the VAX computers in a local area network synchronized with an accuracy comparable to the resolution of single-machine clocks. Comparison with other clock synchronization algorithms shows that TEMPO, in an environment with no Byzantine faults, can achieve better synchronization at a lower cost.
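
A toy sketch of the master's averaging round (simplified from what the paper describes: real TEMPO estimates each offset from round-trip timestamps, and the max_dev threshold below is a stand-in of mine for its outlier handling):

# Berkeley-style averaging step: the master measures each slave's clock
# offset, discards wild clocks, and sends every machine a correction toward
# the agreed network time. Illustrative only.
from statistics import median

def corrections(offsets, max_dev=30.0):
    """offsets: {host: host_clock - master_clock} in ms; master's own is 0."""
    all_offs = list(offsets.values()) + [0.0]
    med = median(all_offs)
    good = [o for o in all_offs if abs(o - med) <= max_dev]  # drop outliers
    net = sum(good) / len(good)            # agreed network-time offset
    return {h: net - o for h, o in offsets.items()}, net     # net: master's own

offs = {"vax1": 12.0, "vax2": -8.0, "vax3": 250.0}           # vax3 misbehaves
print(corrections(offs))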

Journal ArticleDOI
TL;DR: A comprehensive system-dynamics model of the software-development process is used to test the degree of interchangeability of men and months on the particular software project and produces some interesting insights into the policies for managing the human resource.
Abstract: The author focuses on the dynamics of software project staffing throughout the software-development lifecycle. The research vehicle is a comprehensive system-dynamics model of the software-development process. A detailed discussion of the model's structure as well as its behavior is provided. The results of a case study in which the model is used to simulate the staffing practices of an actual software project are then presented. The experiment produces some interesting insights into the policies (both explicit and implicit) for managing the human resource, and their impact on project behavior. The decision-support capability of the model to answer what-if questions is also demonstrated. In particular, the model is used to test the degree of interchangeability of men and months on the particular software project.

Journal ArticleDOI
TL;DR: Algorithmic translation of the Ada programs into Petri nets which preserve control-flow and message-flow properties is described, and algorithms are given to analyze the nets to obtain information about static deadlocks that can occur in the original programs.
Abstract: A method is presented for detecting deadlocks in Ada tasking programs using structural and dynamic analysis of Petri nets. Algorithmic translation of the Ada programs into Petri nets which preserve control-flow and message-flow properties is described. Properties of these Petri nets are discussed, and algorithms are given to analyze the nets to obtain information about static deadlocks that can occur in the original programs. Petri net invariants are used by the algorithms to reduce the time and space complexities associated with dynamic Petri net analysis (i.e., reachability graph generation).
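
For orientation, plain reachability analysis looks like the sketch below (mine; the paper's contribution is precisely to use net invariants to prune this search rather than enumerate the whole graph): dead markings are those enabling no transition, and in a net translated from tasking code they include both normal terminations and true deadlocks.

# Exhaustive dead-marking search in an ordinary Petri net (illustrative).
def enabled(m, net):
    return [t for t, (ins, _) in net.items() if all(m[p] >= 1 for p in ins)]

def fire(m, t, net):
    ins, outs = net[t]
    m = list(m)
    for p in ins:
        m[p] -= 1
    for p in outs:
        m[p] += 1
    return tuple(m)

def dead_markings(m0, net):
    seen, stack, dead = {m0}, [m0], []
    while stack:
        m = stack.pop()
        ts = enabled(m, net)
        if not ts:
            dead.append(m)
        for t in ts:
            m2 = fire(m, t, net)
            if m2 not in seen:
                seen.add(m2)
                stack.append(m2)
    return dead

# Two tasks acquire rendezvous tokens p2 and p3 in opposite orders; of the
# two dead markings found, one is normal termination and the other is the
# classic circular wait.
net = {"a1": ([0, 2], [4]), "a2": ([4, 3], [5, 2, 3]),
       "b1": ([1, 3], [6]), "b2": ([6, 2], [7, 2, 3])}
print(dead_markings((1, 1, 1, 1, 0, 0, 0, 0), net))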

Journal ArticleDOI
TL;DR: An efficient digital search algorithm that is based on an internal array structure called a double array, which combines the fast access of a matrix form with the compactness of a list form, is presented.
Abstract: An efficient digital search algorithm that is based on an internal array structure called a double array, which combines the fast access of a matrix form with the compactness of a list form, is presented. Each arc of a digital search tree, called a DS-tree, can be computed from the double array in O(1) time; that is to say, the worst-case time complexity for retrieving a key becomes O(k) for the length k of that key. The double array is modified to make the size compact while maintaining fast access, and algorithms for retrieval, insertion, and deletion are presented. If the size of the double array is n+cm, where n is the number of nodes of the DS-tree, m is the number of input symbols, and c is a constant particular to each double array, then it is theoretically proved that the worst-case times of deletion and insertion are proportional to cm and cm^2, respectively, and are independent of n. Experimental results of building the double array incrementally for various sets of keys show that c has an extremely small value, ranging from 0.17 to 1.13.
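
The retrieval step is small enough to show in full (a hand-built toy of mine, without the paper's construction, compaction, or terminal flags): an arc from node s on symbol c leads to t = base[s] + code(c) and is valid exactly when check[t] == s, which is what makes each arc O(1).

# Double-array trie lookup (illustrative sketch).
def walk(base, check, codes, key, root=1):
    """Follow key through the double array; None if any arc is missing."""
    s = root
    for ch in key:
        if ch not in codes:
            return None
        t = base[s] + codes[ch]
        if t >= len(check) or check[t] != s:
            return None
        s = t
    return s          # node id reached; a separate flag would mark stored keys

# Hand-built arrays for the keys "ab" and "ac" (codes a=1, b=2, c=3):
codes = {"a": 1, "b": 2, "c": 3}
base  = [0, 1, 2, 0, 0, 0]    # node 1 = root, node 2 = state after "a"
check = [0, 0, 1, 0, 2, 2]    # root --a--> 2 (1+1), 2 --b--> 4, 2 --c--> 5
print(walk(base, check, codes, "ab"), walk(base, check, codes, "b"))  # 4 None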

Journal ArticleDOI
TL;DR: The prediction scheme uses the knowledge of the program's resource usage in its last execution together with its state-transition model to predict the resource usage in its next execution, and the results show that the predicted values correlate strongly with the actual values.
Abstract: A statistical approach is developed for predicting the CPU time, the file I/O, and the memory requirements of a program at the beginning of its life, given the identity of the program. Initially, statistical clustering is used to identify high-density regions of process resource usage. The identified regions form the states for building a state-transition model to characterize the resource usage of each program in its past executions. The prediction scheme uses the knowledge of the program's resource usage in its last execution together with its state-transition model to predict the resource usage in its next execution. The prediction scheme is shown to work using process resource-usage data collected from a VAX 11/780 running 4.3 BSD Unix. The results show that the predicted values correlate strongly with the actual values; the coefficient of correlation between the predicted and actual values for CPU time is 0.84. The errors in prediction are mostly small and are heavily skewed toward small values.
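
A bare-bones rendering of the prediction step (my toy: a single resource dimension, with fixed cluster centroids standing in for the paper's statistical clustering): map each past run to its nearest usage state, count state transitions, and predict the next run as the centroid of the most likely successor of the last state.

# State-transition resource prediction (illustrative sketch).
from collections import Counter, defaultdict

def fit(history, centroids):
    """history: per-run CPU seconds -> (transition counts, state mapper)."""
    nearest = lambda v: min(range(len(centroids)),
                            key=lambda i: abs(v - centroids[i]))
    states = [nearest(v) for v in history]
    trans = defaultdict(Counter)
    for a, b in zip(states, states[1:]):
        trans[a][b] += 1
    return trans, nearest

centroids = [0.1, 1.0, 10.0]                  # cluster centers (CPU seconds)
runs = [0.2, 0.1, 1.2, 0.9, 1.1, 9.0, 1.0, 0.8]
trans, nearest = fit(runs, centroids)
last = nearest(runs[-1])
pred = centroids[trans[last].most_common(1)[0][0]]   # most likely next state
print(pred)                                          # 1.0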

Journal ArticleDOI
TL;DR: The research includes the design of a wide-spectrum language specifically tailored to the needs of transformational programming, the construction of a transformation system to support the methodology, and the study of transformation rules and other methodological issues.
Abstract: Formal program construction by transformations is a method of software development in which a program is derived from a formal problem specification by manageable, controlled transformation steps which guarantee that the final product meets the initial specification. This methodology has been investigated in the Munich project CIP (computer-aided intuition-guided programming). The research includes the design of a wide-spectrum language specifically tailored to the needs of transformational programming, the construction of a transformation system to support the methodology, and the study of transformation rules and other methodological issues. Particular emphasis has been laid on developing a sound theoretical basis for the overall approach.

Journal ArticleDOI
M.M. Theimer, K.A. Lantz
TL;DR: The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system and focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design.
Abstract: The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics whereas a decentralized architecture is simpler to implement.

Journal ArticleDOI
TL;DR: A comparison of the models revealed that the experimental group improved significantly in programming speed as a result of using the two-person inspection, and it appeared as though this method was more effective at improving the performance of the slower programmers.
Abstract: This paper reviews current research and investigates the effect of a two-person inspection method on programmer productivity. This method is similar to the current larger team method in stressing fault detection, but does not use a moderator. The experiment used a Pretest-Posttest Control Group design. An experimental and a control group of novices each completed two programming assignments. The amount of time taken to complete each program (Time1, Time2) was recorded for each subject. The subjects of the experimental group did either a design inspection, a code inspection, or both during the development of the second program. An analysis of variance was performed and the relationship between Time1 and Time2 was modeled for both groups. A comparison of the models revealed that the experimental group improved significantly in programming speed as a result of using the two-person inspection. It also appeared as though this method was more effective at improving the performance of the slower programmers. This two-person method could have its application in those environments where access to larger team resources is not available. If further research establishes consistency with this method, then it might be useful as a transition to the larger team method.

Journal ArticleDOI
TL;DR: The results are reported of an experimental study of software metrics for a fairly large software system used in a real-time application, including the mutual relationship between various software metrics and, more importantly, the relationship between metrics and the development effort.
Abstract: The results are reported of an experimental study of software metrics for a fairly large software system used in a real-time application. A number of issues are examined, including the mutual relationship between various software metrics and, more importantly, the relationship between metrics and the development effort. Some interesting connections are reported between metrics and the software development effort.

Journal ArticleDOI
TL;DR: The author presents a simple solution for the committee coordination problem, which encompasses the synchronization and exclusion problems associated with implementing multiway rendezvous, and shows how it can be implemented to develop a family of algorithms.
Abstract: The author presents a simple solution for the committee coordination problem, which encompasses the synchronization and exclusion problems associated with implementing multiway rendezvous, and shows how it can be implemented to develop a family of algorithms. The algorithms use message counts to solve the synchronization problem, and they solve the exclusion problem by using a circulating token or by using auxiliary resources as in the solutions for the dining or drinking philosophers' problems. Results of a simulation study of the performance of the algorithms are presented. The experiments measured the response time and message complexity of each algorithm as a function of variations in the model parameters, including network topology and level of conflict in the system. The results show that the response time for the algorithms proposed is significantly better than for existing algorithms, whereas the message complexity is considerably worse.

Journal ArticleDOI
TL;DR: The kernel is proved to implement on this shared computer a fixed number of conceptually distributed communicating processes and provides the following verified services: process scheduling, error handling, message passing, and an interface to asynchronous devices.
Abstract: The author reviews Kit, a small multitasking operating system kernel written in the machine language of a uniprocessor von Neumann computer. The kernel is proved to implement on this shared computer a fixed number of conceptually distributed communicating processes. In addition to implementing processes, the kernel provides the following verified services: process scheduling, error handling, message passing, and an interface to asynchronous devices. As a by-product of the correctness proof, security-related results such as the protection of the kernel from tasks and the inability of tasks to enter supervisor mode are proved. The problem is stated in the Boyer-Moore logic, and the proof is mechanically checked with the Boyer-Moore theorem prover.

Journal ArticleDOI
TL;DR: Models based on the hyper-geometric distribution for estimating the number of residual software faults are proposed and appear quite effective, particularly when the growth curve of the cumulative number of detected faults bends sharply.
Abstract: Models based on the hyper-geometric distribution for estimating the number of residual software faults are proposed. The application of the basic model shows that its fit to real data is good. Two ways of improving the model, using a segmentation technique and composite estimation, respectively, are shown. The segmentation technique appears quite effective, particularly when the growth curve of the cumulative number of detected faults bends sharply. The applications of these models to real data are demonstrated.
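
One common form of such a model (notation mine; the paper's variants add segmentation and composite estimation on top of the basic fit): if N is the initial fault content, C_{i-1} the faults already found before test instance i, and w_i the number of faults "sensed" by test i, then the newly detected count X_i is hyper-geometric,

\[
\Pr[X_i = x] \;=\; \frac{\binom{N - C_{i-1}}{x}\,\binom{C_{i-1}}{w_i - x}}{\binom{N}{w_i}},
\qquad
E[X_i] \;=\; w_i\,\frac{N - C_{i-1}}{N},
\]

and N is estimated by fitting the implied growth of the cumulative count C_i to the observed fault-detection curve.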

Journal ArticleDOI
TL;DR: A model called DesignNet for describing and monitoring the software development process is presented, together with definitions for basic properties of a successful project, namely connectedness, plan complete, plan consistent, and well-executed.
Abstract: A model called DesignNet for describing and monitoring the software development process is presented. This model utilizes the AND/OR graph and Petri net notation to provide the description of a project work breakdown structure and the specification of relationships among different project information types (activity, product, resource, and status report information). Tokens are objects with specific properties. Token propagation through structural links allows aggregate information to be collected automatically at different levels of detail. The transition firing is a nonvolatile process and creates new token instances with time-dependent information. The typed places, together with the connections among them, define the static construct of a project. Whenever transitions are fired, the project execution history is recorded by the token instances created. Using the model, we have provided definitions for basic properties of a successful project, namely connectedness, plan complete, plan consistent, and well-executed. We have given algorithms for computing these functions and shown that the computing time is linear in the size of the project. This assures that any system based on DesignNet should be able to compute these functions efficiently. Finally, we have shown how the waterfall life cycle model maps onto a DesignNet and the implications for project planning, cost estimation, project network construction, reinitiation of activities, and traceability across the life cycle. Other life cycle models can be treated equally.

Journal ArticleDOI
TL;DR: The authors develop a model and define performance measures for a replicated data system that makes use of a quorum-consensus algorithm to maintain consistency and derive optimal read and write quorums which maximize the proportion of successful transactions.
Abstract: The authors develop a model and define performance measures for a replicated data system that makes use of a quorum-consensus algorithm to maintain consistency. They consider two measures: the proportion of successfully completed transactions in systems where a transaction aborts if data is not available, and the mean response time in systems where a transaction waits until data becomes available. Based on the model, the authors show that for some quorum assignment there is an optimal degree of replication beyond which performance degrades. There exist other quorum assignments which have no optimal degree of replication. The authors also derive optimal read and write quorums which maximize the proportion of successful transactions.
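
The abort-model measure is easy to compute under a simple independence assumption (mine, not necessarily the paper's failure model): with n replicas each up with probability p, a read needs r replicas and a write needs w, subject to the consistency constraints r + w > n and 2w > n.

# Proportion of successful transactions under quorum consensus (sketch).
from math import comb

def at_least(k, n, p):
    """P(at least k of n independently available replicas are up)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def success(n, r, w, p, read_frac=0.7):
    assert r + w > n and 2 * w > n      # quorum intersection constraints
    return read_frac * at_least(r, n, p) + (1 - read_frac) * at_least(w, n, p)

# Search all legal assignments for n = 5, p = 0.9, 70% reads:
best = max((success(5, r, w, 0.9), r, w)
           for r in range(1, 6) for w in range(1, 6)
           if r + w > 5 and 2 * w > 5)
print(best)   # (0.99144, 3, 3): majority quorums win for this workload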

Journal ArticleDOI
TL;DR: The authors show that for both union and intersection problems, some changes can be incrementally incorporated immediately into the data-flow sets while others are handled by a two-phase approach.
Abstract: A technique is presented for incrementally updating solutions to both union and intersection data-flow problems in response to program edits and transformations. For generality, the technique is based on the iterative approach to computing data-flow information. The authors show that for both union and intersection problems, some changes can be incrementally incorporated immediately into the data-flow sets while others are handled by a two-phase approach. The first phase updates the data-flow sets to overestimate the effect of the program change, enabling the second phase to incrementally update the affected data-flow sets to reflect the actual program change. An important problem that is addressed is the computation of the data-flow changes that need to be propagated throughout a program, based on different local code changes. The technique is compared to other approaches to incremental data-flow analysis.
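
For context, the exhaustive baseline the incremental technique avoids rerunning is the standard iterative worklist solver; the sketch below (mine) solves a union problem, reaching definitions, from scratch, which is exactly the cost the paper's two-phase update sidesteps after an edit.

# Iterative worklist solver for reaching definitions (union problem).
def solve(cfg, gen, kill):
    """cfg: {node: [successors]} -> OUT sets."""
    preds = {n: [] for n in cfg}
    for n, succs in cfg.items():
        for s in succs:
            preds[s].append(n)
    OUT = {n: set() for n in cfg}
    work = list(cfg)
    while work:
        n = work.pop()
        in_n = set().union(*(OUT[p] for p in preds[n])) if preds[n] else set()
        out_n = gen[n] | (in_n - kill[n])
        if out_n != OUT[n]:
            OUT[n] = out_n
            work.extend(cfg[n])     # successors must be revisited
    return OUT

cfg  = {"B1": ["B2"], "B2": ["B2", "B3"], "B3": []}
gen  = {"B1": {"d1"}, "B2": {"d2"}, "B3": set()}
kill = {"B1": {"d2"}, "B2": {"d1"}, "B3": set()}
print(solve(cfg, gen, kill))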

Journal ArticleDOI
TL;DR: An incremental approach to construction is proposed, with the virtue of offering considerable opportunity for mechanized support, to facilitate comprehension and maintenance of specifications, as well as their initial construction.
Abstract: An incremental approach to construction is proposed, with the virtue of offering considerable opportunity for mechanized support. Following this approach one builds a specification through a series of elaborations that incrementally adjust a simple initial specification. Elaborations perform both refinements, adding further detail, and adaptations, retracting oversimplifications and tailoring approximations to the specifics of the task. It is anticipated that the vast majority of elaborations can be concisely described to a mechanism that will then perform them automatically. When elaborations are independent, they can be applied in parallel, leading to diverging specifications that must later be recombined. The approach is intended to facilitate comprehension and maintenance of specifications, as well as their initial construction.

Journal ArticleDOI
TL;DR: A proof procedure and answer extraction in a high-level Petri net model of logic programs is discussed, and it is proved that the goal transition is potentially firable if and only if there exists a nonnegative T-invariant which includes the goal transition in its support.
Abstract: A proof procedure and answer extraction in a high-level Petri net model of logic programs is discussed. The logic programs are restricted to the Horn clause subset of first-order predicate logic and finite problems. The logic program is modeled by a high-level Petri net, and the execution of the logic program or the answer extraction process in predicate calculus corresponds to a firing sequence which fires the goal transition in the net. For the class of the programs described above, the goal transition is potentially firable if and only if there exists a nonnegative T-invariant which includes the goal transition in its support. This is the main result proved. Three examples are given to illustrate the above results in detail.
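
In standard Petri-net notation (mine, summarizing the stated result): with incidence matrix C (places by transitions), a T-invariant is a vector x satisfying

\[
C\,x \;=\; 0, \qquad x \ge 0,\; x \ne 0,
\]

and the support of x is the set of transitions t with x_t > 0. The theorem then says the goal transition t_g is potentially firable if and only if some nonnegative T-invariant has t_g in its support, which reduces answer extraction to linear algebra over the nonnegative integers on C instead of a search of the firing-sequence space.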

Journal ArticleDOI
TL;DR: A modified, priority-based probe algorithm for deadlock detection and resolution in distributed database systems is presented, and testing through simulation indicates that it is error-free.
Abstract: A modified, priority-based probe algorithm for deadlock detection and resolution in distributed database systems is presented. Various examples are used to show that the original priority-based algorithm, presented by M.K. Sinha and N. Natarajan (1985), either fails to detect deadlocks or reports deadlocks that do not exist in many situations. A modified algorithm that eliminates these problems is proposed. The algorithm has been tested through simulation and appears to be error-free. The performance of the modified algorithm is briefly discussed.
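
The flavor of edge-chasing detection (a generic sketch of mine, not Sinha and Natarajan's priority scheme or the authors' corrected version): a blocked transaction launches a probe along wait-for edges; a probe returning to its initiator proves a cycle, and the lowest-priority member of the cycle is chosen as the victim to abort.

# Probe-style deadlock detection in a wait-for graph (illustrative sketch).
def detect(wait_for, initiator, priority):
    """wait_for: {txn: txn it waits on}. Returns the victim, or None."""
    path, t = [initiator], wait_for.get(initiator)
    while t is not None:
        if t == initiator:                        # probe came back: deadlock
            return min(path, key=lambda x: priority[x])
        if t in path:                             # cycle not through initiator
            return None
        path.append(t)
        t = wait_for.get(t)
    return None

wf = {"T1": "T2", "T2": "T3", "T3": "T1"}
print(detect(wf, "T1", {"T1": 3, "T2": 1, "T3": 2}))   # T2: lowest priority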