Author

Richard W. Conway

Bio: Richard W. Conway is an academic researcher from Cornell University. He has contributed to research topics including PL/C and asynchronous communication, has an h-index of 11, and has co-authored 29 publications receiving 3,353 citations.

Papers
Book
01 Jan 1967
TL;DR: Theory of Scheduling (with W. L. Maxwell and L. W. Miller), a foundational text on sequencing and scheduling problems.

2,356 citations

Journal ArticleDOI
TL;DR: Investigates the behavior of serial production lines buffered in this way, explores the distribution and quantity of work-in-process (WIP) inventory that accumulates, and yields design guidelines that should be useful in industrial practice.
Abstract: In serial production systems, storage may be provided between processes to avoid interference due to lack of synchronization. This paper investigates the behavior of lines buffered in this way and explores the distribution and quantity of work-in-process (WIP) inventory that accumulates. We study simple, generic production systems to gain insight into the behavior of more complex systems. The authors are surprised by the sometimes counterintuitive results, but are joined in this surprise by both academics and practitioners with whom the study has been discussed. Results are presented for: identical workstations with and without buffers; balanced lines in which variability of processing times differs between stations; unbalanced lines; and lines with unreliable workstations. In general, buffers between workstations increase system capacity, but with sharply diminishing returns. Both the position and the capacity of the buffers are important. These results are preliminary, to be confirmed and extended by further study; indeed, a primary purpose of this paper is to stimulate such study. However, even these preliminary results yield design guidelines that should be useful in industrial practice.

369 citations
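A minimal simulation in the spirit of the paper's generic buffered line can illustrate the reported effect. This is a sketch, not the paper's model: two stations in series with a finite buffer, exponential processing times, and parameters chosen purely for illustration.

```python
# Sketch (assumed parameters, not from the paper): two stations in series,
# a finite buffer between them, exponential processing times.
import random

def line_throughput(n, buffer_size, seed=1):
    """Simulate n jobs through station 1 -> buffer -> station 2."""
    rng = random.Random(seed)
    t1 = [rng.expovariate(1.0) for _ in range(n)]
    t2 = [rng.expovariate(1.0) for _ in range(n)]
    start2 = []      # time each job enters station 2
    done2 = 0.0      # time station 2 finishes its current job
    free1 = 0.0      # time station 1 becomes available
    for j in range(n):
        c1 = free1 + t1[j]       # job j finishes processing at station 1
        s2 = max(c1, done2)      # it enters station 2 once both are ready
        start2.append(s2)
        done2 = s2 + t2[j]
        # Station 1 stays blocked until a buffer slot frees, i.e. until job
        # j - buffer_size has entered station 2 (with buffer_size = 0 this
        # collapses to "until job j itself enters station 2").
        k = j - buffer_size
        free1 = max(c1, start2[k]) if k >= 0 else c1
    return n / done2

tp0 = line_throughput(5000, 0)
tp3 = line_throughput(5000, 3)
tp50 = line_throughput(5000, 50)
```

On a common random sample, throughput rises with buffer size, and most of the gain from a very large buffer is already captured by a small one, consistent with the "sharply diminishing returns" the abstract describes.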

Journal ArticleDOI
TL;DR: Describes characteristics common to many system simulations, discusses problems involved in the construction of a digital simulator, and examines problems that arise in the use of such a simulator.
Abstract: Our objective is to discuss the technique of digital system simulation. This procedure has already achieved a considerable stature in industrial and research organizations and promises to attain even greater importance in the future. Yet, with few exceptions [51], the published literature in the area consists of introductory expositions [10] or of descriptions of the solution of a particular problem [7] where the technique itself is correctly relegated to secondary position. The only publication devoted entirely to the topic [12] is a compendium of papers of the latter type with little attempt to summarize or generalize. Simulations are, of course, as varied as the systems which they represent, but they do have certain common characteristics and problems. An identification of these problems would at least allow the investigator to anticipate them and plan accordingly. This paper consists of three sections. The first is an attempt to describe characteristics common to many system simulations; the second is a discussion of some problems involved in the construction of a digital simulator; and the third concerns problems that arise in the use of such a simulator.

120 citations
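The "construction of a digital simulator" the abstract generalizes about typically reduces to a simulation clock plus a time-ordered event list whose handlers schedule further events. A minimal sketch (the single-server queue is an illustrative choice, not taken from the paper):

```python
# Minimal discrete-event simulator skeleton: a heap-ordered event list and
# handlers that advance the clock and schedule future events.
import heapq
import random

def simulate(n_arrivals, seed=7):
    rng = random.Random(seed)
    events = []                   # heap of (time, seq, kind); seq breaks ties
    seq = 0

    def schedule(t, kind):
        nonlocal seq
        heapq.heappush(events, (t, seq, kind))
        seq += 1

    schedule(rng.expovariate(1.0), "arrival")
    queue_len, busy, arrivals, served = 0, False, 0, 0
    while events:
        clock, _, kind = heapq.heappop(events)
        if kind == "arrival":
            arrivals += 1
            if arrivals < n_arrivals:                      # next arrival
                schedule(clock + rng.expovariate(1.0), "arrival")
            if busy:
                queue_len += 1                             # wait in queue
            else:
                busy = True                                # begin service
                schedule(clock + rng.expovariate(2.0), "departure")
        else:                                              # departure
            served += 1
            if queue_len > 0:
                queue_len -= 1                             # serve next in line
                schedule(clock + rng.expovariate(2.0), "departure")
            else:
                busy = False
    return arrivals, served

a, s = simulate(1000)
```

Every generated arrival is eventually served, so the two counters agree when the event list empties.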

Journal ArticleDOI
TL;DR: The security of an information system may be represented by a matrix whose elements are decision rules and whose row and column indices are users and data items respectively; the model is used to explain the security features of several existing systems.
Abstract: The security of an information system may be represented by a model matrix whose elements are decision rules and whose row and column indices are users and data items respectively. A set of four functions is used to access this matrix at translation and execution time. Distinguishing between data dependent and data independent decision rules enables one to perform much of the checking of security only once at translation time rather than repeatedly at execution time. The model is used to explain security features of several existing systems, and serves as a framework for a proposal for general security system implementation within today's languages and operating systems.

116 citations
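The matrix model above lends itself to a short sketch. All names here are illustrative, not from the paper: data-independent rules are stored as plain booleans (resolvable once, at "translation time"), while data-dependent rules are predicates on the value, deferred to execution time.

```python
# Sketch of the access-matrix model: rows are users, columns are data items,
# each cell holds a decision rule (a bool, or a predicate on the data value).
class AccessMatrix:
    def __init__(self):
        self.rules = {}   # (user, item) -> bool | callable(value) -> bool

    def grant(self, user, item, rule):
        self.rules[(user, item)] = rule

    def check_static(self, user, item):
        """Translation-time check: data-independent rules are resolved
        outright; data-dependent rules pass, deferring to run time."""
        rule = self.rules.get((user, item), False)
        return True if callable(rule) else rule

    def check_dynamic(self, user, item, value):
        """Execution-time check for data-dependent rules."""
        rule = self.rules.get((user, item), False)
        return rule(value) if callable(rule) else rule

m = AccessMatrix()
m.grant("clerk", "salary", lambda v: v < 50_000)   # data-dependent rule
m.grant("auditor", "salary", True)                 # data-independent rule
```

The split mirrors the abstract's point: the auditor's access is settled once at translation time, while the clerk's must be re-checked against each value at execution time.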

Journal ArticleDOI
TL;DR: The investigation involved the comparison of dispatching at random with dispatching in order of increasing processing time under different conditions of shop size, flow pattern, and level of work-in-process inventory.
Abstract: The significance of the dispatching function in production planning and control is discussed, and applicable results in sequencing and queuing theory are reviewed. Experimental results for a network of queues representing a small job shop are presented. The investigation involved the comparison of dispatching at random with dispatching in order of increasing processing time under different conditions of shop size, flow pattern, and level of work-in-process inventory. Also considered is the effect of imperfect a priori knowledge of processing times upon the shortest-operation discipline. Several modifications of the shortest-operation discipline were also tested: one in which the discipline is 'truncated' and another in which it is periodically alternated with a first-come-first-served discipline.

98 citations
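The paper's core comparison can be sketched in its simplest setting: one machine, all jobs available at time zero. There, shortest-processing-time (SPT) order provably minimizes mean flow time, while a random order generally does not. The instance below is illustrative, not from the paper.

```python
# SPT vs. random dispatching on a single machine, all jobs ready at time 0.
import random

def mean_flow_time(times):
    clock, total = 0.0, 0.0
    for t in times:
        clock += t        # job completes at the running sum of times
        total += clock    # with zero release dates, flow time = completion time
    return total / len(times)

rng = random.Random(42)
jobs = [rng.uniform(1, 10) for _ in range(50)]
random_order = jobs[:]
rng.shuffle(random_order)
spt_order = sorted(jobs)   # shortest processing time first
```

On any fixed job set, the SPT sequence's mean flow time is a lower bound over all orders, which is why SPT is the natural benchmark against random dispatching.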


Cited by
Journal ArticleDOI
16 May 2000
TL;DR: This work considers the concrete case of building a decision-tree classifier from training data in which the values of individual records have been perturbed and proposes a novel reconstruction procedure to accurately estimate the distribution of original data values.
Abstract: A fruitful direction for future data mining research will be the development of techniques that incorporate privacy concerns. Specifically, we address the following question. Since the primary task in data mining is the development of models about aggregated data, can we develop accurate models without access to precise information in individual data records? We consider the concrete case of building a decision-tree classifier from training data in which the values of individual records have been perturbed. The resulting data records look very different from the original records and the distribution of data values is also very different from the original distribution. While it is not possible to accurately estimate original values in individual data records, we propose a novel reconstruction procedure to accurately estimate the distribution of original data values. By using these reconstructed distributions, we are able to build classifiers whose accuracy is comparable to the accuracy of classifiers built with the original data.

3,173 citations
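The paper's premise, though not its reconstruction algorithm, can be shown in a few lines: add zero-mean noise to each record so that individual values become unreliable while an aggregate such as the mean remains recoverable. The data and noise scale below are assumptions for illustration.

```python
# Randomization sketch: individual records are perturbed beyond recovery,
# yet the aggregate (here, the mean) survives because the noise is zero-mean.
import random

rng = random.Random(0)
original = [rng.uniform(20, 60) for _ in range(20_000)]   # e.g. ages
perturbed = [x + rng.gauss(0, 15) for x in original]      # released data

per_record_err = sum(abs(o - p) for o, p in zip(original, perturbed)) / len(original)
mean_err = abs(sum(original) / len(original) - sum(perturbed) / len(perturbed))
```

The per-record error is on the order of the noise scale, while the error in the mean shrinks with the sample size; the paper goes further, reconstructing the whole distribution, not just its mean.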

Journal ArticleDOI
TL;DR: The results are strong in that they hold whether the problem size is measured by the number of tasks, the number of bits required to express the task lengths, or the sum of the task lengths.
Abstract: NP-complete problems form an extensive equivalence class of combinatorial problems for which no nonenumerative algorithms are known. Our first result shows that determining a shortest-length schedule in an m-machine flowshop is NP-complete for m ≥ 3. For m = 2, there is an efficient algorithm for finding such schedules. The second result shows that determining a minimum mean-flow-time schedule in an m-machine flowshop is NP-complete for every m ≥ 2. Finally we show that the shortest-length schedule problem for an m-machine jobshop is NP-complete for every m ≥ 2. Our results are strong in that they hold whether the problem size is measured by number of tasks, number of bits required to express the task lengths, or by the sum of the task lengths.

2,351 citations
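The "efficient algorithm" for m = 2 that the abstract alludes to is Johnson's rule: jobs with machine-1 time no greater than machine-2 time go first in increasing order of machine-1 time; the rest go last in decreasing order of machine-2 time. A sketch, with a brute-force check on a small random instance:

```python
# Johnson's rule for the two-machine flowshop, verified by enumeration.
import itertools
import random

def makespan(seq, a, b):
    """Makespan of sequence seq with times a (machine 1) and b (machine 2)."""
    c1 = c2 = 0.0
    for j in seq:
        c1 += a[j]
        c2 = max(c2, c1) + b[j]
    return c2

def johnson(a, b):
    front = sorted((j for j in range(len(a)) if a[j] <= b[j]), key=lambda j: a[j])
    back = sorted((j for j in range(len(a)) if a[j] > b[j]), key=lambda j: -b[j])
    return front + back

rng = random.Random(3)
a = [rng.randint(1, 9) for _ in range(7)]
b = [rng.randint(1, 9) for _ in range(7)]
best = min(makespan(p, a, b) for p in itertools.permutations(range(7)))
```

On this instance the rule's sequence attains the enumerated optimum, as Johnson's theorem guarantees; for m >= 3 the paper shows no such shortcut exists unless P = NP.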

Journal ArticleDOI
TL;DR: A simple algorithm is presented in this paper, which produces very good sequences in comparison with existing heuristics, and performs especially well on large flow-shop problems in both the static and dynamic sequencing environments.
Abstract: In a general flow-shop situation, where all the jobs must pass through all the machines in the same order, certain heuristic algorithms propose that the jobs with higher total process time should be given higher priority than the jobs with less total process time. Based on this premise, a simple algorithm is presented in this paper, which produces very good sequences in comparison with existing heuristics. The results of the proposed algorithm have been compared with the results from 15 other algorithms in an independent study by Park [13], who shows that the proposed algorithm performs especially well on large flow-shop problems in both the static and dynamic sequencing environments.

2,255 citations
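The heuristic described is consistent with the NEH insertion procedure: rank jobs by decreasing total processing time, then build the sequence by inserting each job at the position that minimizes the partial makespan. A sketch on an assumed small instance, with a brute-force optimum for comparison:

```python
# NEH-style insertion heuristic for the permutation flowshop.
import itertools
import random

def makespan(seq, p):
    """p[j][i] = processing time of job j on machine i."""
    m = len(p[0])
    c = [0.0] * m
    for j in seq:
        c[0] += p[j][0]
        for i in range(1, m):
            c[i] = max(c[i], c[i - 1]) + p[j][i]
    return c[-1]

def neh(p):
    # Priority: jobs with higher total process time first, per the paper.
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for j in order[1:]:
        # Try every insertion position; keep the one with least makespan.
        candidates = [seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p))
    return seq

rng = random.Random(5)
p = [[rng.randint(1, 9) for _ in range(4)] for _ in range(8)]  # 8 jobs, 4 machines
seq = neh(p)
opt = min(makespan(s, p) for s in itertools.permutations(range(8)))
```

The heuristic evaluates only O(n^2) partial sequences yet, as Park's comparison suggests, typically lands close to the enumerated optimum.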

Journal ArticleDOI
01 Sep 1975
TL;DR: In this article, the authors explore the mechanics of protecting computer-stored information from unauthorized use or modification, focusing on those architectural structures, whether hardware or software, that are necessary to support information protection.
Abstract: This tutorial paper explores the mechanics of protecting computer-stored information from unauthorized use or modification. It concentrates on those architectural structures, whether hardware or software, that are necessary to support information protection. The paper develops in three main sections. Section I describes desired functions, design principles, and examples of elementary protection and authentication mechanisms. Any reader familiar with computers should find the first section to be reasonably accessible. Section II requires some familiarity with descriptor-based computer architecture. It examines in depth the principles of modern protection architectures and the relation between capability systems and access control list systems, and ends with a brief analysis of protected subsystems and protected objects. The reader who is dismayed by either the prerequisites or the level of detail in the second section may wish to skip to Section III, which reviews the state of the art and current research projects and provides suggestions for further reading.

2,063 citations

Journal ArticleDOI
TL;DR: An approximation method for solving the minimum makespan problem of job shop scheduling that sequences the machines one by one, successively, each time taking the machine identified as a bottleneck among those not yet sequenced.
Abstract: We describe an approximation method for solving the minimum makespan problem of job shop scheduling. It sequences the machines one by one, successively, taking each time the machine identified as a bottleneck among the machines not yet sequenced. Every time after a new machine is sequenced, all previously established sequences are locally reoptimized. Both the bottleneck identification and the local reoptimization procedures are based on repeatedly solving certain one-machine scheduling problems. Besides this straight version of the Shifting Bottleneck Procedure, we have also implemented a version that applies the procedure to the nodes of a partial search tree. Computational testing shows that our approach yields consistently better results than other procedures discussed in the literature. A high point of our computational testing occurred when the enumerative version of the Shifting Bottleneck Procedure found in a little over five minutes an optimal schedule to a notorious ten machines/ten jobs problem on which many algorithms have been run for hours without finding an optimal solution.

1,579 citations