
Showing papers on "Software published in 1970"


Journal ArticleDOI
TL;DR: The features provided by STAGE2 are summarized, and the implementation techniques which have made it possible to have STAGE2 running on a new machine with less than one man-week of effort are discussed.
Abstract: STAGE2 is the second level of a bootstrap sequence which is easily implemented on any computer. It is a flexible, powerful macro processor designed specifically as a tool for constructing machine-independent software. In this paper the features provided by STAGE2 are summarized, and the implementation techniques which have made it possible to have STAGE2 running on a new machine with less than one man-week of effort are discussed. The approach has been successful on over 15 machines of widely varying characteristics.
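The core idea of such a macro processor, expanding parameterized templates into target-machine text, can be sketched as follows. This is a minimal Python illustration; the pattern syntax and the MOV macros are invented for the example and are not STAGE2's actual notation:

```python
# A toy template-driven macro expander, loosely in the spirit of a
# STAGE2-style processor: each macro is a pattern with a parameter slot
# ("$") that expands into target-machine text. (Illustrative only.)

def expand(line, macros):
    """Expand the first matching macro; return the line unchanged otherwise."""
    for pattern, template in macros:
        parts = pattern.split("$")          # "$" marks the parameter slot
        if line.startswith(parts[0]) and line.endswith(parts[-1]):
            # Extract the argument between the fixed prefix and suffix.
            arg = line[len(parts[0]):len(line) - len(parts[-1])]
            return template.replace("$1", arg)
    return line

# Hypothetical macros mapping an abstract operation to machine code.
macros = [
    ("LOAD $", "    MOV R0, $1"),
    ("STORE $", "    MOV $1, R0"),
]
```

Porting the processor to a new machine then amounts to rewriting only the macro bodies, which is what keeps the bootstrap effort below one man-week.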

59 citations


Journal ArticleDOI
TL;DR: Test sets covering all branches of a library of five procedures that solve the triangle problem have been produced automatically using genetic algorithms; the tests are derived from both the structure of the software and its formal specification in Z.
Abstract: Test sets which cover all branches of a library of five procedures which solve the triangle problem have been produced automatically using genetic algorithms. The tests are derived from both the structure of the software and its formal specification in Z. In a wider context, more complex procedures such as a binary search and a generic quicksort have also been tested automatically from the structure of the software. The value of genetic algorithms lies in their ability to handle input data which may be of a complex data structure, and to execute branches whose predicate may be a complicated and unknown function of the input data. A disadvantage of genetic algorithms may be the computational effort required to reach a solution.
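The approach can be sketched roughly as follows: a small genetic search over integer triples for the triangle procedure, with selection, single-point crossover, and mutation, and a fitness function that rewards near-equal sides so the rare isosceles and equilateral branches get exercised. The parameters and the fitness heuristic are illustrative assumptions, not taken from the paper:

```python
import random

def triangle(a, b, c):
    """The classic triangle-classification procedure under test."""
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def ga_cover_branches(pop_size=30, generations=300, seed=1):
    """Evolve (a, b, c) triples until every branch outcome is observed."""
    rng = random.Random(seed)
    targets = {"not a triangle", "equilateral", "isosceles", "scalene"}
    pop = [[rng.randint(1, 50) for _ in range(3)] for _ in range(pop_size)]
    covered = set()

    def fitness(ind):
        a, b, c = ind
        return -(abs(a - b) + abs(b - c))   # higher = nearer the rare branches

    for _ in range(generations):
        covered |= {triangle(*ind) for ind in pop}
        if covered == targets:
            break
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # keep the fitter half
        children = []
        while len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randint(1, 2)         # single-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.5:          # mutate one gene slightly
                i = rng.randrange(3)
                child[i] = max(1, child[i] + rng.randint(-2, 2))
            children.append(child)
        pop = children
    return covered
```

The same machinery generalizes to complex input structures because fitness only needs to observe which branch an execution takes, not invert the branch predicate analytically.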

57 citations


DOI
01 Jan 1970
TL;DR: In this article, an original software package for displacement field measurement is presented, which uses the digital picture correlation principle; the accuracy of the measurement can reach 1/60th of a pixel, and it is possible to measure strains between 5×10⁻⁵ and 0.8.
Abstract: Despite the power of current computational codes, whether finite-element or analytical, their reliability and validity must be justified by experimental tests. An original software package for displacement field measurement is presented in this paper. The software uses the digital picture correlation principle. With this technique, the accuracy of the measurement can reach 1/60th of a pixel, and it is possible to measure strains between 5×10⁻⁵ and 0.8. Initially, the software was developed for measuring strains on sheet specimens in metal forming, but its field of application has proved to be much wider: biomechanics, geotechnics, metal characterization, and control tests. The software can now be used for any application that requires knowledge of the displacement and strain fields over a plane surface. Of course, the experimental conditions must be arranged so that no deterioration of the random speckle pattern deposited on the part occurs: loss, alteration, or even a small modification of the speckle will compromise the measurement. Finally, the purpose of this paper is, first, to present clearly the mathematical principle of displacement field determination by a correlation method. Then, various experimental tests using the correlation method are presented; the power of the technique is shown by their variety. This non-contact measurement method can be another tool for validating, in a short time, a numerical or analytical result.
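The correlation principle at integer-pixel resolution can be sketched as follows: slide a reference subset over the deformed image and keep the shift with the smallest sum of squared grey-level differences. The subpixel interpolation that yields the quoted 1/60-pixel accuracy is omitted, and the function name and patch layout are illustrative:

```python
def best_shift(ref, cur, max_shift=2):
    """Find the integer displacement of a small grey-level subset by
    minimising the sum of squared differences between the reference
    patch `ref` and the current image `cur`. `cur` must be padded by
    `max_shift` pixels on every side of the patch."""
    h, w = len(ref), len(ref[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = sum(
                (ref[y][x] - cur[y + dy + max_shift][x + dx + max_shift]) ** 2
                for y in range(h) for x in range(w)
            )
            if err < best_err:
                best, best_err = (dx, dy), err
    return best
```

Repeating this for a grid of subsets gives the displacement field; strains follow by differentiating that field.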

57 citations


DOI
01 Jan 1970
TL;DR: The RAINBOW software is designed to test the homogeneity of hydrologic records and to execute a frequency analysis of rainfall and evaporation data, especially suitable for predicting the probability of occurrence of either low or high rainfall amounts.
Abstract: RAINBOW is a software package developed by the Institute for Land and Water Management of the K.U.Leuven. The programme is designed to test the homogeneity of hydrologic records and to execute a frequency analysis of rainfall and evaporation data. The programme is especially suitable for predicting the probability of occurrence of either low or high rainfall amounts, both of which are important variables in the design and management of irrigation systems, drainage networks, and reservoirs. The RAINBOW software is a menu-driven programme and runs on an IBM-compatible personal computer. The software is freely available from the authors upon request.
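The frequency-analysis step can be illustrated with a simple exceedance table using the Weibull plotting position p = rank/(n+1), a standard choice in hydrology; the rainfall figures and the function name are invented for the example and are not from RAINBOW itself:

```python
def exceedance_table(values):
    """Rank annual rainfall totals and estimate the probability of
    exceedance with the Weibull plotting position p = rank / (n + 1)."""
    ranked = sorted(values, reverse=True)
    n = len(ranked)
    return [(rank, x, rank / (n + 1)) for rank, x in enumerate(ranked, start=1)]

# Five hypothetical annual rainfall depths in mm.
table = exceedance_table([610, 450, 780, 520, 690])
for rank, depth, p in table:
    print(f"rank {rank}: {depth} mm, P(exceed) = {p:.2f}")
```

The low-probability tail (dependable rainfall) drives irrigation design, while the high tail drives drainage and reservoir design.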

30 citations



Journal ArticleDOI
TL;DR: The authors argue that the software needs of extension are more demanding than the corresponding needs for research and that this explains the slow adoption of computer technology in the extension field.
Abstract: The authors describe their recent experience in designing computer programs for use in farm management extension. They argue that the software (that is, computer programs) needs of extension are more demanding than the corresponding needs for research and that this explains the slow adoption of computer technology in the extension field. Although the development of appropriate computer programs can involve a major initial investment, this is nevertheless the only way to get the variable (and average) cost of computer advice to farmers down to an economic level.

25 citations


Book ChapterDOI
Saul Amarel
01 Jan 1970
TL;DR: Today’s computers -- their hardware and software -- provide a sufficient technological basis for building problem solving systems of considerable scope; present software offers extensive linguistic and executive capabilities for flexible and convenient man-machine communication and for effective control of the information processes that are carried out by computer hardware.
Abstract: Today’s computers -- their hardware and software -- provide us with a sufficient technological basis for building problem solving systems of considerable scope. Present hardware puts at our disposal “primary” memory capacities of over 10⁷ bits at better than micro-second access times, “secondary” memory capacities of up to 10¹⁰ bits at sub-second access times, and logical circuitry that can operate at a rate of nanoseconds under the control of stored programs. Present software offers extensive linguistic and executive capabilities for flexible and convenient man-machine communication and for effective control of the information processes that are carried out by computer hardware.

23 citations


01 Jan 1970
TL;DR: This paper expresses software transformations in terms of assertions (preconditions, postconditions and invariants) on top of the formalism of graph rewriting to tackle scalability issues in a straightforward way.
Abstract: This paper explores the use of software transformations as a formal foundation for software evolution. More precisely, we express software transformations in terms of assertions (preconditions, postconditions and invariants) on top of the formalism of graph rewriting. This allows us to tackle scalability issues in a straightforward way. Useful applications include: detecting syntactic merge conflicts, removing redundancy in a transformation sequence, factoring out common subsequences, etc.
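The assertion-guarded view of a transformation can be sketched as follows; the graph encoding and the inlining rewrite are illustrative choices, not the paper's formalism:

```python
def apply_transformation(graph, pre, rewrite, post):
    """Apply a graph rewrite guarded by assertions, in the spirit of the
    paper's preconditions and postconditions (names are illustrative)."""
    assert pre(graph), "precondition violated"
    new_graph = rewrite(graph)
    assert post(new_graph), "postcondition violated"
    return new_graph

# Example: inline node 'b' by redirecting its incoming edges to its target.
graph = {"a": {"b"}, "b": {"c"}, "c": set()}

def pre(g):       # 'b' must have exactly one outgoing edge
    return len(g["b"]) == 1

def rewrite(g):
    (target,) = g["b"]
    return {n: {target if m == "b" else m for m in succs}
            for n, succs in g.items() if n != "b"}

def post(g):      # 'b' is gone and 'a' now reaches 'c' directly
    return "b" not in g and "c" in g["a"]

result = apply_transformation(graph, pre, rewrite, post)
```

Checking assertions rather than whole graphs is what makes the approach scale: a merge conflict, for instance, shows up as one transformation's postcondition violating another's precondition.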

19 citations


Proceedings ArticleDOI
05 May 1970
TL;DR: It makes no sense to discuss software for privacy-preserving or secure time-shared computing without considering the hardware on which it is to run.
Abstract: It makes no sense to discuss software for privacy-preserving or secure time-shared computing without considering the hardware on which it is to run. Software access controls rely upon certain pieces of hardware. If these can go dead or be deliberately disabled without warning, then all that remains is false security.

19 citations


Journal ArticleDOI
01 Jan 1970
TL;DR: RTL, a real-time language developed cooperatively with industrial suppliers and users specifically for industrial control, is described with emphasis on those features peculiar to applications such as dedicated direct digital control, combined direct and supervisory control, operator interfaces, and interaction with plant management computer systems.
Abstract: Industrial computer control systems require the economical production of efficient software including executive systems, maintenance programs, and both special and general purpose application programs for direct digital control. Moreover, the hardware configuration varies considerably from the single dedicated control computer to a general purpose multicomputer system. RTL, a real-time language developed cooperatively with industrial suppliers and users specifically for industrial control, is described with emphasis on those features peculiar to applications such as dedicated direct digital control, combined direct and supervisory control, operator interfaces, and interaction with plant management computer systems. The use of RTL for the production of special purpose executive systems and general purpose application programs for direct control, startup, etc., is emphasized. Details of the language, discussed with examples, include its file structure for communication of data bases between independent programs, and a variety of data types including character codes, strings, labels, lists, peripheral variables, and data structures. Peripheral variables are variables in the language associated with hardware features of the central processor and its input-output devices such as registers, interrupts, error indicators, and addresses, all of which may be referenced in the language. Regular and peripheral variable data structures--combinations of different types of variables--are included and ease considerably the burden of real-time programming. The organization and performance of the existing compiler for RTL is explained.

19 citations


Journal ArticleDOI
TL;DR: A new digital simulation control system is described which is easy to use, has wide versatility, and has the ability to generate solutions and to fit them to experimental data or other theoretical curves with a minimum of computer memory.
Abstract: The fitting of mathematical models to physiological systems can be tedious and difficult, whether one uses analog or digital computer methods. Both methods have their pros and cons depending on the available hardware and software and on the type of modeling. In recent years many digital simulation languages have been written combining analog-like and digital features to facilitate modeling, but, for a variety of reasons, none of these was suitable for our applications. We, therefore, designed a new digital simulation control system, SIMCON, which is described in this paper. The primary objectives were to provide: (1) maximum man-machine interaction at run-time, including visual displays, digital control, and both continuous analog and digital parameter adjustment; (2) the ability to generate solutions and to fit them to experimental data or other theoretical curves with a minimum of computer memory; (3) the option to use a mathematically oriented language, FORTRAN, and block operators with variable input...

Journal ArticleDOI
TL;DR: The basic ideas underlying use of Kalman Filters for data mixing in aided inertial systems are developed in a tutorial manner, including the form of the computational algorithm employed as well as the notation in which it is described.
Abstract: The basic ideas underlying use of Kalman Filters for data mixing in aided inertial systems are developed in a tutorial manner, including the form of the computational algorithm employed as well as the notation in which it is described. Commonality of data-mixing software among many different applications is stressed, and factors underlying success or failure in a practical sense are reviewed.
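The data-mixing step reduces, in the scalar case, to the standard Kalman measurement update, which can be sketched as follows. This is a one-dimensional illustration; real aided-inertial filters are multivariate and include a propagation step:

```python
def kalman_mix(estimate, variance, measurement, meas_var):
    """One scalar Kalman update: blend an inertial position estimate with
    an aiding measurement (e.g. a radio fix), weighted by their variances."""
    gain = variance / (variance + meas_var)          # Kalman gain
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1 - gain) * variance             # uncertainty shrinks
    return new_estimate, new_variance

# Inertial estimate 100.0 (var 4.0) mixed with an aid reading 104.0 (var 4.0):
est, var = kalman_mix(100.0, 4.0, 104.0, 4.0)
```

With equal variances the update lands halfway between the two sources, and the resulting variance is half that of either source, which is the commonality across applications that the paper stresses.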

01 Jan 1970
TL;DR: Hardware trends, operational requirements, and on-board computing costs are studied, as well as the difficulty of full software certification and the Apollo software certification experience; most jobs will not have to strain the hardware.
Abstract: Contents: Hardware trends and operational requirements; Most jobs will not have to strain the hardware; On-board computing: software costs; On-board computing: total system costs; Real-time image processing; Image processing considerations; Information processing: STS innovations; STS: hardware implications; STS: software implications; Difficulty of full software certification; Apollo software certification; and Relative importance of software testing.


Journal ArticleDOI
01 Jan 1970
TL;DR: The development of a nonlinear dynamic model of an industrial process that includes additive stochastic terms is summarized, in which the industrial plant was connected to a process control computer some 130 miles away by a regular telephone channel.
Abstract: The development of a nonlinear dynamic model of an industrial process is summarized. The model includes additive stochastic terms and not all the state variables were accessible. Nonlinear state estimation was approximated by a linearized Kalman filter and the control algorithm was from dynamic programming. The development of software and hardware for a remote on-line computer control experiment is then described, in which the industrial plant was connected to a process control computer some 130 miles away by a regular telephone channel.

Journal ArticleDOI
TL;DR: The general approach, hardware, and computer program structure used in a speech recognition system coupled to a time-shared computer facility, which can be adaptively trained to handle a small but arbitrary vocabulary by any individual, and uses context to segment large vocabularies and achieve lower error rates and reduce computation.
Abstract: This paper describes the general approach, hardware, and computer program structure used in a speech recognition system coupled to a time-shared computer facility. The tasks that the system performs are split between the hardware, which is used for the complex analog processing tasks, and the software, which can more readily perform the pattern recognition and data file management chores. It is linked to a time-shared computer system via a voice-grade telephone line. An RCA 752 video terminal or a Model 35 teletype is used for entering control messages and to receive the output from the recognition process. The resulting system can be adaptively trained to handle a small but arbitrary vocabulary by any individual. The system operates in real time, with a speed of response less than a few seconds. The level of accuracy is comparable to that which has been obtained for similar medium-scale systems. The system is characterized by: 1) a novel configuration, with division of tasks between hardware and software; 2) nonprivileged and real-time operation in a time-shared environment; 3) a low data rate: 110 baud and voice-grade telephone lines can be used; 4) use of context to segment large vocabularies, achieve lower error rates, and reduce computation. These innovations point to the first steps toward a practical and economical speech recognition system.

Journal ArticleDOI
TL;DR: How the concept of FMEA, Failure Modes and Effects Analysis, can be utilized to improve the reliability of the software production process resulting in higher product quality as well as in higher productivity is described.
Abstract: This paper describes how the concept of FMEA, Failure Modes and Effects Analysis, can be utilized to improve the reliability of the software production process, resulting in higher product quality as well as higher productivity. This concept has already been implemented by ISARDATA, a small software company in Germany specialised in the field of software test and validation, in several software development projects. The paper begins with an introduction to the general principles of FMEA known from applications in various manufacturing industries. The introduction is followed by a brief description of the necessary adaptations of the FMEA method for application in a software production process. The next section describes the essentials of planning FMEA as an integral part of software lifecycle management. Since FMEA is primarily the output of teamwork, this section defines practical guidelines for constituting the FMEA team, consisting of software developers, testers and quality planners, and for conducting the meetings, including definition of the FMEA objectives of the project. The following section of the paper describes the main FMEA tasks to be performed by the team. These are the identification of: a) the structure of the software product in terms of its subsystems, functions, external and internal interfaces and interdependencies; b) the possible failure modes of the product and their causes; c) the effects of the failures, including calculation of gravity factors; d) possible measures to prevent and/or correct the failures; e) test plans to detect such failures during the software development phases; f) metrics for the evaluation of the FMEA results. The next section of the paper describes how this process can be supported by software tools. The final section sums up the conclusions. (Transactions on Information and Communications Technologies, vol. 11, © 1995 WIT Press, www.witpress.com, ISSN 1743-3517.)
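A common way to prioritize the identified failure modes in manufacturing FMEA is the risk priority number (severity × occurrence × detection); the paper's "gravity factors" play a similar role. A sketch with invented example entries:

```python
def rank_failure_modes(modes):
    """Rank failure modes by risk priority number (RPN =
    severity * occurrence * detection), highest risk first."""
    return sorted(
        modes,
        key=lambda m: m["severity"] * m["occurrence"] * m["detection"],
        reverse=True,
    )

# Hypothetical software failure modes scored by an FMEA team (1..10 scales).
modes = [
    {"name": "null-pointer dereference", "severity": 9, "occurrence": 3, "detection": 4},
    {"name": "off-by-one in loop",       "severity": 5, "occurrence": 6, "detection": 2},
    {"name": "misleading log message",   "severity": 2, "occurrence": 7, "detection": 5},
]
ranked = rank_failure_modes(modes)
```

The ranking then drives steps d) and e) above: preventive measures and test plans are budgeted for the highest-RPN modes first.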

Journal ArticleDOI
01 Jan 1970
TL;DR: A computer system is described for real-time data acquisition and servicing of 40 asynchronous inertial guidance system test stations and the use of random access blocks on mass storage discs to greatly augment primary memory without seriously degrading total accessing time.
Abstract: A computer system is described for real-time data acquisition and servicing of 40 asynchronous inertial guidance system test stations. Some data are received automatically from the small guidance system computers at a maximum rate of eight words per second. Other data are input manually at each station via a mode selector and several 16-position thumbwheel switches. The data are received, partially edited and stored all in real time, and retrieved and analyzed with the highest "time-availability" priority at the time of completion of the guidance system test. The analysis results determine further testing or repair actions for each guidance system. The central computer is a SEL 840-MP, a general-purpose 24-bit, 32K, 1.75-µs cycle-time machine with basic real-time monitor software. The special purpose system is implemented as a software/hardware interface with the real-time monitor and the test station hardware. A key factor for the real-time data processing is the use of random access blocks on mass storage discs to greatly augment primary memory without seriously degrading total accessing time. This also frees "background" core for off-line programs running in a low-priority interruptable mode and for the analysis programs which do not operate in the real-time mode. A disc allocation and cataloging scheme is presented along with a hardware and software description.


Journal ArticleDOI
TL;DR: A system has been developed at the Weizmann Institute of Science to provide on-line data acquisition and analysis from four independent tritium and ¹⁴C experiments, based on a small general purpose digital computer and a special interface.

Proceedings ArticleDOI
17 Nov 1970
TL;DR: A console environment and high-level language, with a supporting operating system designed for the application expert, are described in this paper.
Abstract: The usual software support for graphic consoles does not provide system services designed for the console user who is a production-oriented application expert. This paper describes a console environment and high-level language with a supporting operating system designed for the application expert.

Journal ArticleDOI
TL;DR: In this paper, the authors describe a general purpose error compensation system that is located and runs inside the machine controller, designed to run on a modern open architecture Computer Numerical Controller, namely an Osai UK series 10 controller.
Abstract: Compensation is a cost effective method of correcting for machine tool systematic errors. Although controller manufacturers are providing increasingly sophisticated compensation, they do not cover all geometric sources of error for the variety of 3- and 5-axis configurations that exist today. A small number of comprehensive PC-based compensation systems exist, requiring hardware modifications and interface electronics to interact with the position control loop, which can be expensive and difficult to implement. This paper describes a general purpose error compensation system that is located and runs inside the machine controller. The system has been designed to run on a modern open architecture Computer Numerical Controller, namely an Osai UK series 10 controller. One of the facilities within this controller is a DOS Real Time Interface, which allows a user to develop custom applications that can run and communicate with various parts of the NC system in real time. Compensation software, based on a 5-axis geometric error compensation system produced at the University of Huddersfield, has been developed to run in the DOS real-time environment. The system has been applied to three machine tools and the volumetric accuracy significantly improved. Requiring only a simple software installation, the system is also shown to be inexpensive, simple and fast to implement.
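The principle of software error compensation can be sketched as a lookup in a measured error map, interpolated between calibration points and subtracted from the commanded position. The table format and numbers are illustrative, not from the Huddersfield system:

```python
from bisect import bisect_right

def compensate(command, error_map):
    """Correct a commanded axis position (mm) using a measured error map
    {position: error}, linearly interpolating between calibration points
    and clamping outside the calibrated range."""
    positions = sorted(error_map)
    if command <= positions[0]:
        err = error_map[positions[0]]
    elif command >= positions[-1]:
        err = error_map[positions[-1]]
    else:
        i = bisect_right(positions, command)
        x0, x1 = positions[i - 1], positions[i]
        e0, e1 = error_map[x0], error_map[x1]
        err = e0 + (e1 - e0) * (command - x0) / (x1 - x0)
    return command - err   # drive the axis to the compensated target

# Hypothetical positioning errors (mm) measured at 0, 100 and 200 mm.
emap = {0.0: 0.000, 100.0: 0.010, 200.0: 0.030}
```

Running such a lookup inside the controller, rather than in an external PC, is what removes the need for extra hardware in the position control loop.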

Journal ArticleDOI
TL;DR: The digital computer is a common instrument in many laboratories; it is no longer necessary to describe its operating characteristics in detail, but some of the more pressing issues to be understood and resolved are optimal trade-offs between hardware and software.
Abstract: The digital computer is a common instrument in many laboratories; it is no longer necessary to describe its operating characteristics in detail. However, it is not yet generally used in an entirely satisfactory manner, nor are its full capabilities understood and exploited in laboratory automation. Some of the more pressing issues to be understood and resolved are optimal trade-offs between hardware and software, dedicated computer automation versus automation via time-shared systems, single-processor time-shared systems versus multiprocessor hierarchical time-shared systems, and the development of a real-time high-level control language.


Journal ArticleDOI
P.B. Denes, M.V. Mathews
01 Apr 1970
TL;DR: An introduction to laboratory computer operating systems is provided as a starting point for someone intending to create his own system, showing how a specific system may be developed by modifying and incorporating features of existing systems and by modifying standard programs supplied by computer manufacturers.
Abstract: A laboratory computer is a small- or medium-sized general-purpose machine which has been specially selected to do a particular job. Important economies as well as great computing power are achievable by matching not only the kind of machine, but also the particular set of components to the tasks at hand. For this reason, laboratory computers have become very popular. In addition to selecting the hardware, advantages can be achieved by tailoring the operating system and other general software to the situation. Enough operating systems for such computers have now been written to show how a specific system may be developed by modifying and incorporating features of existing systems and by modifying standard programs supplied by computer manufacturers. In this way, special systems can be implemented using a reasonable amount of time and effort. This paper provides an introduction to laboratory computer operating systems to serve as a starting point for someone intending to create his own system. Specific systems for a LINC computer, and for the Honeywell DDP-224 and DDP-516 computers are described in some detail. These systems illustrate many features that have been found useful. The documentation needed to use the computer, the size of the user group, and some of the history of development are mentioned. Similar systems can be created to focus new machines on new tasks. Only current state-of-the-art programming is needed. The benefits of such specialization far outweigh the effort and costs.

Journal ArticleDOI
TL;DR: A preliminary description of the software for a computer-display system is given with special emphasis on the man-machine interaction, intended for a wide variety of biomedical applications.
Abstract: A preliminary description of the software for a computer-display system is given with special emphasis on the man-machine interaction. This system is intended for a wide variety of biomedical applications. As an example, the methods are applied to the karyotyping of chromosomes. The system is separated into four programming tasks: picture transformations, file maintenance, picture structuring, and display management. Picture structuring is considered as the vehicle for man-machine communication. A prototype data format for pictures, called a picture-form, is developed. Structure operators are defined which manipulate picture-forms to produce new picture-forms. Many of the ideas are taken from the symbolic mathematical laboratory at MIT conceived by Marvin Minsky.

Journal ArticleDOI
TL;DR: The problem of how to generate reference data sets and corresponding reference results for testing software for computing Gaussian and Chebyshev associated features is examined, including data sets to help expose the deficiencies of inadequate software.
Abstract: While the verification of the performance of coordinate measuring machines (CMMs) is self-evidently important, the point coordinates they provide are usually the input to form and tolerance assessment software that calculates associated geometric features, such as a best-fit cylinder to the data. For the measurement result to be reliable, it is also necessary to ensure that the calculations performed by this software are fit for purpose. The forthcoming standard ISO 10360-6:2001 [1] specifies the procedure by which Gaussian (least-squares) form assessment software should be tested. Many tolerance assessment problems relate to Chebyshev (minimum-zone) fitting criteria in which the maximum error is minimized. These criteria lead to nonlinearly constrained optimization problems that are difficult to solve reliably if appropriate algorithms are not employed. Many existing and proposed algorithms fail on seemingly simple data sets. In this paper we examine the problem of how to generate reference data sets and corresponding reference results for testing software for computing Gaussian and Chebyshev associated features. We indicate how to generate data sets for circles and cylinders, including data sets to help expose the deficiencies of inadequate software.
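A reference data set with a result known by construction can be generated as in the following sketch: points on a circle of known centre and radius, optionally with a two-lobe form deviation of known amplitude. The function and its parameters are illustrative, not the ISO 10360-6 procedure:

```python
import math

def circle_reference_data(cx, cy, r, n=12, form_error=0.0):
    """Generate a reference data set for circle-fitting software: n points
    on a circle of known centre (cx, cy) and radius r, with an optional
    two-lobe out-of-roundness of known amplitude, so the correct fit is
    known exactly by construction."""
    pts = []
    for k in range(n):
        t = 2 * math.pi * k / n
        rk = r + form_error * math.cos(2 * t)   # known form deviation
        pts.append((cx + rk * math.cos(t), cy + rk * math.sin(t)))
    return pts

# Perfect circle: any sound Gaussian or Chebyshev fitter must recover
# centre (10, -5) and radius 25 exactly.
pts = circle_reference_data(10.0, -5.0, 25.0, n=8)
```

Feeding such sets to candidate software and comparing against the constructed parameters is precisely the kind of test that exposes fitters which fail on "seemingly simple" data.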

Journal ArticleDOI
01 Jan 1970
TL;DR: AI is a multidisciplinary activity that involves specialists from several fields, and the authors argue that the aim of science, including AI science, is solving problems.
Abstract: AI is a multidisciplinary activity that involves specialists from several fields, and we can say that the aim of science, including AI science, is solving problems. AI and computer science have been creating a new way of doing science, which we can call in silico science. Both top-down and bottom-up models are useful for e-scientific research; there is no real controversy between them. Besides, the extended mind model of human cognition involves human-machine interactions. Huge amounts of data require new ways to make and organize scientific practices: supercomputers, grids, distributed computing, specific software and middleware and, basically, more efficient and visual ways to interact with information. This is one of the key points in understanding contemporary relationships between humans and machines: the usability of scientific data.

DOI
09 Dec 1970
TL;DR: Hardware models were based on close tolerances and exacting operations, while software jobstream models were more generalized; the objective was to simulate a typical jobstream in a typical multiprogramming environment on specific hardware.
Abstract: A rotational position-sensing feature designed for direct-access devices reduces the time a hardware channel is busy searching for disk records. This is accomplished by electronically dividing the track of a disk into equally spaced timing sectors, using new hardware commands to access desired sectors. When an I/O record located some distance away in rotation is requested, the channel disconnects to allow concurrent servicing of requests from other devices. Through a combination of hardware and software, multiprogramming at a channel/disk level is achieved, thus improving channel performance. In this study, the overall effectiveness of the feature was evaluated in a multiprogramming environment. Simulation at high and low levels simultaneously required a unique approach in the models. Hardware models were based on close tolerances and exacting operations, while software jobstream models were more generalized. The objective was to simulate a typical jobstream in a typical multiprogramming environment on specific hardware.
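The benefit of rotational position sensing can be roughly modelled as follows: without RPS the channel stays connected for the whole rotational search (half a revolution on average), while with RPS it reconnects only during a short sector window. The model and numbers are illustrative, not from the study:

```python
def channel_busy_fraction(sectors, reconnect_window=1):
    """Rough model of rotational position sensing (RPS). Without RPS the
    channel is busy for the mean rotational search (half a revolution);
    with RPS it is busy only for `reconnect_window` of the track's
    `sectors`. Returns (busy_without, busy_with) as fractions of one
    revolution. Illustrative numbers only."""
    busy_without = 0.5                       # mean rotational latency
    busy_with = reconnect_window / sectors   # only the target window
    return busy_without, busy_with

# A hypothetical track divided into 128 timing sectors:
without, with_rps = channel_busy_fraction(sectors=128)
```

The freed channel time is what allows concurrent servicing of other devices, i.e. multiprogramming at the channel/disk level.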