
Showing papers on "Computation published in 1985"


Journal ArticleDOI
Joseph W. H. Liu1
TL;DR: Experimental results indicate that the modified version of the minimum-degree algorithm retains the fill-reducing property of (and is often better than) the original ordering algorithm and yet requires less computer time.
Abstract: The most widely used ordering scheme to reduce fills and operations in sparse matrix computation is the minimum-degree algorithm. The notion of multiple elimination is introduced here as a modification to the conventional scheme. The motivation is discussed using the k-by-k grid model problem. Experimental results indicate that the modified version retains the fill-reducing property of (and is often better than) the original ordering algorithm and yet requires less computer time. The reduction in ordering time is problem dependent, and for some problems the modified algorithm can run a few times faster than existing implementations of the minimum-degree algorithm. The use of external degree in the algorithm is also introduced.
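The conventional scheme that the paper modifies can be sketched compactly. Below is a minimal, illustrative implementation of plain minimum-degree ordering on a graph (the adjacency structure of a symmetric sparse matrix) — not Liu's multiple-elimination or external-degree variants; the star graph is just a toy example:

```python
def minimum_degree_order(adj):
    """adj: dict node -> set of neighbours. Returns an elimination order."""
    adj = {v: set(ns) for v, ns in adj.items()}      # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: (len(adj[u]), u))  # lowest degree, ties by label
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        for u in nbrs:            # fill: neighbours of v become a clique
            for w in nbrs:
                if u != w:
                    adj[u].add(w)
        order.append(v)
    return order

# star graph: the low-degree leaves are eliminated before the hub becomes cheap
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(minimum_degree_order(star))   # [1, 2, 3, 0, 4]
```

Multiple elimination would instead eliminate, in each pass, a whole independent set of minimum-degree nodes (in the star example, all four leaves at once), amortizing the degree updates that dominate the ordering time.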

348 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examine the physical limits of the process of computing and find that the limits are based solely on fundamental physical principles, not on whatever technology we may currently be using.
Abstract: What constraints govern the physical process of computing? Is a minimum amount of energy required, for example, per logic step? There seems to be no minimum, but some other questions are open. A computation, whether it is performed by electronic machinery, on an abacus or in a biological system such as the brain, is a physical process. It is subject to the same questions that apply to other physical processes: How much energy must be expended to perform a particular computation? How long must it take? How large must the computing device be? In other words, what are the physical limits of the process of computation? So far it has been easier to ask these questions than to answer them. To the extent that we have found limits, they are terribly far away from the real limits of modern technology. We cannot profess, therefore, to be guiding the technologist or the engineer. What we are doing is really more fundamental. We are looking for general laws that must govern all information processing, no matter how it is accomplished. Any limits we find must be based solely on fundamental physical principles, not on whatever technology we may currently be using. There are precedents for this kind of fundamental examination. In the 1940's Claude E. Shannon of the Bell Telephone Laboratories found that there are limits on the amount of information that can be transmitted through a noisy channel; these limits apply no matter how the message is encoded into a signal. Shannon's work represents the birth of modern information science. Earlier, in the mid- and late 19th century, physicists attempting to determine the fundamental limits on the efficiency of steam engines had created the science of thermodynamics. In about 1960 one of us (Landauer) and John at IBM began attempting to apply the same type of analysis to the process of computing. Since the mid-1970's a growing number of other workers at other institutions have entered this field.
In our analysis of the physical limits of computation we use the term "information" in the technical sense of information theory. In this sense information is destroyed whenever two previously distinct situations become indistinguishable. In physical systems without friction, information can never be destroyed; whenever information is destroyed, some amount of energy must be dissipated (converted into heat). As an example, imagine two easily distinguishable physical situations, such as a …
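The dissipation described here has a quantitative floor, now known as Landauer's principle: erasing one bit dissipates at least kT ln 2 of energy. A quick back-of-envelope check (the 300 K figure is just an illustrative room temperature, not a value from the article):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant in J/K (exact in the 2019 SI)

def landauer_limit(temperature_kelvin):
    """Minimum energy in joules dissipated when one bit is erased."""
    return K_B * temperature_kelvin * math.log(2)

print(landauer_limit(300.0))   # about 2.87e-21 J at room temperature
```

The number is some eight orders of magnitude below the switching energies of 1985-era logic, which is exactly the gap between fundamental limits and technology that the authors describe.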

344 citations


Journal ArticleDOI
TL;DR: An almost uniform triangulation of the two-sphere, derived from the icosahedron, is presented, and a procedure for discretization of a partial differential equation using this triangular grid is described.
Abstract: We present an almost uniform triangulation of the two-sphere, derived from the icosahedron, and describe a procedure for discretization of a partial differential equation using this triangular grid. The accuracy of our procedure is described by a strong theoretical estimate, and verified by large-scale numerical experiments. We also describe a data structure for this spherical discretization that allows fast computation on either a vector computer or an asynchronous parallel computer.
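The construction of such a grid can be sketched in a few lines: start from the 12 icosahedron vertices, split every triangular face into four at its edge midpoints, and project the new vertices back onto the unit sphere. This is a generic illustration of the refinement idea, not the paper's data structure or discretization:

```python
import itertools
import math

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def icosahedron():
    """12 unit vertices and the 20 triangular faces of the icosahedron."""
    phi = (1 + math.sqrt(5)) / 2
    verts = []
    for x, y in [(1.0, phi), (-1.0, phi), (1.0, -phi), (-1.0, -phi)]:
        verts += [(x, y, 0.0), (0.0, x, y), (y, 0.0, x)]
    verts = [unit(v) for v in verts]
    # faces = vertex triples whose pairwise distances all equal the edge length
    edge = min(math.dist(a, b) for a, b in itertools.combinations(verts, 2))
    faces = [t for t in itertools.combinations(range(12), 3)
             if all(abs(math.dist(verts[i], verts[j]) - edge) < 1e-9
                    for i, j in itertools.combinations(t, 2))]
    return verts, faces

def subdivide(verts, faces):
    """Split each triangle into 4; project midpoints onto the sphere."""
    verts = list(verts)
    cache = {}
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            cache[key] = len(verts)
            verts.append(unit(tuple((verts[i][k] + verts[j][k]) / 2
                                    for k in range(3))))
        return cache[key]
    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, new_faces

verts, faces = icosahedron()          # 12 vertices, 20 faces
verts, faces = subdivide(verts, faces)
print(len(verts), len(faces))         # 42 80
```

Each refinement multiplies the face count by four while the triangles stay nearly uniform in size and shape, which is the "almost uniform" property the abstract refers to.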

268 citations


Journal ArticleDOI
TL;DR: Land's retinex theory of lightness computation explains how for a “Mondrian World” image, consisting of a number of patches each of uniform reflectance, the reflectances can be computed from an image of that object.
Abstract: Land's retinex theory of lightness computation explains how for a “Mondrian World” image, consisting of a number of patches each of uniform reflectance, the reflectances can be computed from an image of that object. Horn has shown that the computation can be realised as a parallel process performed by successive layers of cooperating computational cells, arranged on hexagonal grids. However, the layers will, in practice, be arrays of finite extent and it is shown to be critical that cells on array boundaries behave correctly. The computation is first analysed in continuous terms, expressed as the solution of a differential equation with certain boundary conditions, and proved to be optimal in a certain sense. The finite element method is used to derive a discrete algorithm.

189 citations


Journal ArticleDOI
01 Jun 1985
TL;DR: In this article, a parallel-processing scheme for robot-arm control computation on any number of parallel processors is described, which employs two multiprocessor scheduling algorithms called, respectively, depth first/implicit heuristic search (DF/IHS) and critical path/most immediate successors first (CP/MISF).
Abstract: A parallel-processing scheme is described for robot-arm control computation on any number of parallel processors. The scheme employs two multiprocessor scheduling algorithms called, respectively, depth first/implicit heuristic search (DF/IHS) and critical path/most immediate successors first (CP/MISF); these were recently developed by the authors. The scheme is applied to the parallel processing of dynamic control computation for the Stanford manipulator. In particular, the proposed algorithms are applied to the computation of the Newton-Euler equations of motion for the Stanford manipulator and implemented on a multimicroprocessor system. The test results were successful: using six processor pairs in parallel, a processing time of 5.37 ms was attained. It is also shown that the proposed parallel-processing scheme is applicable to an arbitrary number of processors.
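The CP/MISF priority rule can be sketched as list scheduling: rank each task by its level (the length of the longest path to the DAG's exit), break ties by the number of immediate successors, and hand ready tasks to the first free processor. The task graph and times below are illustrative, not the paper's Newton-Euler task graph:

```python
def cp_misf_schedule(duration, succ, n_proc):
    """List scheduling: level (critical path to exit) first, then most
    immediate successors. Returns the makespan."""
    level = {}
    def lvl(t):
        if t not in level:
            level[t] = duration[t] + max((lvl(s) for s in succ[t]), default=0)
        return level[t]
    preds = {t: [] for t in duration}
    for t, ss in succ.items():
        for s in ss:
            preds[s].append(t)
    remaining = {t: len(preds[t]) for t in duration}
    ready = [t for t in duration if remaining[t] == 0]
    free_at = [0.0] * n_proc          # time each processor becomes free
    finish = {}
    while ready:
        ready.sort(key=lambda t: (-lvl(t), -len(succ[t])))   # CP, then MISF
        t = ready.pop(0)
        p = min(range(n_proc), key=lambda i: free_at[i])
        start = max([free_at[p]] + [finish[q] for q in preds[t]])
        finish[t] = start + duration[t]
        free_at[p] = finish[t]
        for s in succ[t]:
            remaining[s] -= 1
            if remaining[s] == 0:
                ready.append(s)
    return max(finish.values())

# diamond task graph: a feeds b and c, which feed d; unit times, 2 processors
dur = {"a": 1.0, "b": 1.0, "c": 1.0, "d": 1.0}
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(cp_misf_schedule(dur, succ, 2))   # 3.0
```

The critical-path priority is what keeps the long chains of the inverse-dynamics computation from stalling the other processors near the end of the schedule.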

177 citations


Book
11 Jul 1985

132 citations


DOI
01 Feb 1985
TL;DR: The applicability of two relatively new linear FM matched filtering algorithms to the processing of synthetic-aperture radar (SAR) data is examined and compared to the fast-convolution algorithm.
Abstract: The applicability of two relatively new linear FM matched filtering algorithms to the processing of synthetic-aperture radar (SAR) data is examined and compared to the fast-convolution algorithm. The algorithms, called basic spectral analysis and the step transform, use the properties of the linear FM signal to achieve some significant performance improvements. The algorithms are evaluated on the basis of their ability to deal with problems peculiar to the SAR application, such as multilooking, range-cell migration, and variations in the FM rate of the input signal. Computation rates are also derived as a function of resolution and target return signal aperture. It is shown that no one algorithm is optimal for all cases. The basic-spectral-analysis algorithm has the lowest computation rate at low resolutions, but has an output data rate which varies with the FM rate and cannot correct for nonlinear data shifts called range curvature. The step transform has the most efficient computation rate at high resolutions. It also has a constant output data rate and can correct for range curvature. The fast-convolution algorithm has a lower computation rate than the step transform at low resolutions and can meet all of the SAR requirements mentioned. All of the algorithms are able to perform multilook processing.

128 citations



Journal ArticleDOI
Tsutomu Mita1
TL;DR: The author proposes design methods for linear optimal regulators and optimal servosystems in which the delay arising from the computation time of control laws can be properly accounted for.
Abstract: Since microprocessors are easily obtained, multivariable control theory can be applied to many practical control problems, for example control of robots. However, when the time constant of the plant is short and the dynamic order of the plant is high, the time delay due to the computation time of the control law cannot be neglected. In this paper, the author proposes design methods for linear optimal regulators and linear optimal servosystems in which the delay arising from the computation time of the processors is properly accounted for. From the theoretical point of view, the results are interesting since all the control laws derived in this paper are obtained using only conventional results of optimal regulator theory.
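The standard device for a one-step computation delay can be sketched in the scalar case: if the input computed at step k is only applied at step k+1 (x_{k+1} = a·x_k + b·u_{k-1}), augment the state with the pending input and design a regulator for the augmented system. The numbers and the deadbeat gain below are an illustration, not the paper's design:

```python
def simulate_delayed(a, b, gain, x0, steps):
    """Closed loop with a one-step input delay: x_{k+1} = a*x_k + b*u_{k-1},
    with the control law acting on the augmented state (x_k, u_{k-1})."""
    x, u_prev = x0, 0.0
    for _ in range(steps):
        u = -(gain[0] * x + gain[1] * u_prev)
        x = a * x + b * u_prev      # only the delayed input reaches the plant
        u_prev = u
    return x

# augmented system: z = (x, u_prev), A = [[a, b], [0, 0]], B = [[0], [1]].
# the gain (a*a/b, a) places both closed-loop poles at zero (deadbeat),
# so the state reaches zero in two steps despite the delay.
a, b = 1.2, 1.0
print(simulate_delayed(a, b, gain=(a * a / b, a), x0=1.0, steps=5))
```

Because the augmented system is again a standard linear plant, any optimal regulator method applies to it directly — which is the sense in which the paper needs only conventional regulator theory.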

109 citations


Journal ArticleDOI
TL;DR: The purpose of the model is to describe the nature of students' computational skills and to demonstrate the extent to which students' computation performance is procedurally based.
Abstract: A model that describes the construction and execution of decimal computation procedures is presented. Our hypothesis is that students compute by relying solely on syntax-based rules; semantic knowledge has no effect on performance. To test the claim, a model is developed in which computation procedures are viewed as chains of component symbol manipulation rules. The model assumes that students acquire through instruction the individual rules that achieve subgoals in the computation process. The task for the procedural system is to select rules that satisfy each subgoal in sequence. The model specifies the rules of the system and identifies the syntactic features of the task that affect the selection of individual rules at each decision point. It then predicts the relative difficulty of decimal computation items and predicts the procedural flaw that will occur most frequently on each item. Written test and interview data are presented to test the predictions. Concluding comments discuss the nature of students' computation procedures, compare the model with other models of computation performance, and outline how the model might inform instruction. In this article, we present a model of how students compute with decimal numbers. The model consists of symbol manipulation rules that we believe are precisely the rules students acquire, store, and execute to compute with decimals. The purpose of the model is to describe the nature of students' computational skills and to demonstrate the extent to which students' computation performance is procedurally based. Our hypothesis is that by the time students reach upper elementary school their behavior on many mathematical tasks can be described in syntactic rather than semantic terms. Sufficient evidence has accumulated over the past 10 years to suggest that students' behavior on mathematical tasks changes in important ways as they

79 citations


Journal ArticleDOI
Toru Toyabe1, Hiroo Masuda1, Y. Aoki1, H. Shukuri1, Takaaki Hagiwara1 
TL;DR: A practical three-dimensional device simulator CADDETH (Computer Aided Device DEsign in THree dimensions) has been developed and full avalanche breakdown of MOSFET's can be readily simulated with good convergence and good agreement with experimental results.
Abstract: A practical three-dimensional device simulator CADDETH (Computer Aided Device DEsign in THree dimensions) has been developed. Matrix solution methods appropriate to three-dimensional analyses have been devised. A vectorization ratio of 97 percent has been attained through efficient use of the S-810 supercomputer with vectorized coding, resulting in a computation speed 16 times greater than can be obtained with the S-810 in scalar mode computation. Full avalanche breakdown of MOSFET's can be readily simulated with good convergence and good agreement with experimental results.
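The quoted figures are consistent with Amdahl's law. A quick check (the per-operation vector speedup of about 30 is inferred here for illustration; the abstract does not state it):

```python
def amdahl_speedup(vector_fraction, vector_speedup):
    """Overall speedup when only vector_fraction of the work is sped up."""
    return 1.0 / ((1.0 - vector_fraction) + vector_fraction / vector_speedup)

# with a hardware vector speedup near 30x, a 97%-vectorized code
# runs about 16x faster overall, matching the numbers in the abstract
print(round(amdahl_speedup(0.97, 30.0), 1))
```

The same formula shows why the 3 percent scalar residue, not the vector units, bounds the achievable speedup: even an infinitely fast vector pipeline would cap the gain at about 33x.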

Journal ArticleDOI
TL;DR: A general algorithm for low-order multifunctional observer design with arbitrary eigenvalues, which can generate a functional observer with different orders that are no larger than, but usually much less than, m(v - 1), where m is the number of functionals and v is the observability index of (A, C).
Abstract: This paper presents a general algorithm for low-order multifunctional observer design with arbitrary eigenvalues. The feature of this algorithm is that it can generate a functional observer with different orders which are no larger than, but usually much less than, m(v - 1), where m is the number of functionals and v is the observability index of (A, C). Since the order needed for the observer varies with the functionals besides other system parameters, this design approach should be practical. The resulting observer system matrix is in its Jordan form. The key step of this algorithm is the generation of the basis for the transformation matrix which relates the system and observer states. The computation of this algorithm is quite reliable. It is based on the block observable lower Hessenberg form of (A, C), and all its initial and major computation involves only orthogonal operations.

Journal ArticleDOI
TL;DR: In this article, an experimental mathematics facility, containing both special-purpose dedicated machines and general-purpose mainframes, may someday provide the ideal context for complex nonlinear problems, which can be explored mathematically.
Abstract: Computers have expanded the range of nonlinear phenomena that can be explored mathematically. An “experimental mathematics facility,” containing both special-purpose dedicated machines and general-purpose mainframes, may someday provide the ideal context for complex nonlinear problems.

01 Jun 1985
TL;DR: The broader conclusion is reached that well-designed data structures and support routines allow the use of more conceptual or non-numerical portions of mathematics in the computational process, thereby extending greatly the potential scope of the uses of computers in scientific problem solving.
Abstract: Decompositions of the plane into disjoint components separated by curves occur frequently. We describe a package of subroutines which provides facilities for defining, building, and modifying such decompositions and for efficiently solving various point and area location problems. Beyond the point that the specification of this package may be useful to others, we reach the broader conclusion that well-designed data structures and support routines allow the use of more conceptual or non-numerical portions of mathematics in the computational process, thereby extending greatly the potential scope of the use of computers in scientific problem solving. Ideas from conceptual mathematics, symbolic computation, and computer science can be utilized within the framework of scientific computing and have an important role to play in that area.
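The area-location facilities such a package provides rest on geometric primitives like the point-in-region test. A minimal sketch of the classic ray-casting test for a polygonal component — an illustration of the problem class, not the package's actual routine:

```python
def point_in_polygon(pt, poly):
    """Ray casting: does the horizontal ray from pt cross the polygon
    boundary an odd number of times? poly is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):              # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(point_in_polygon((1, 1), square), point_in_polygon((3, 1), square))
```

A package of the kind described would wrap such primitives in data structures (edge lists, component trees) so that repeated queries avoid re-testing every boundary curve.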

Journal ArticleDOI
TL;DR: In this paper, a method for the numerical computation of invariant circles of maps is presented, along with appropriate techniques for its implementation, which involves solution of a functional equation by discretization and Newton iteration.


Journal ArticleDOI
TL;DR: A textured model which assembles local groups of buses into a multi-leaf structure which is ideally suited for parallel processing and should prove to be a valuable tool for on line computations in the course of reactive power control and management.
Abstract: The texture of the power system, which governs the interplay of reactive power and voltage, is emulated by a textured model which assembles local groups of buses into a multi-leaf structure. Groups on the same leaf of the model are not coupled with each other; groups on different leaves overlap partially and are thus coupled. The paths of computational information are organized in an efficient manner and inefficient computation and information paths are eliminated, yet the computation converges to the exact solution, not an approximate one. The resulting model is ideally suited for parallel processing, especially since there is no sequential component in the computation, no computation overhead, and (if the size of the groups and their numbers per leaf are uniform) no waiting time. Computation time savings of as much as 100:1 (i.e., a hundredfold saving) were observed in experiments on steepest-descent algorithms with systems of around 100 buses. Computation times also compare favorably with existing speed-up techniques such as block pivoting. Computation times for common algorithms (like matrix manipulations, Newton-Raphson, linear and nonlinear programming) increase with the system size at a fast nonlinear rate. The computation times remain essentially constant for the textured model in parallel processing. Thus very large computation time savings are implied on larger systems. Consequently this new model should prove to be a valuable tool for on-line computations in the course of reactive power control and management.

Journal ArticleDOI
TL;DR: A new algorithm is presented which is practicable for greater dimensions and requires less computation time.
Abstract: Up to now there has been an algorithm for the calculation of Minkowski-reduced lattice bases up to dimension n = 6 or at most n = 7. A new algorithm is presented which is practicable for greater dimensions and requires less computation time.
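In dimension 2 a Minkowski-reduced basis is produced by the classical Lagrange-Gauss procedure, which conveys the flavor of the reduction problem; the sketch below illustrates that base case only, not the paper's higher-dimensional algorithm:

```python
def gauss_reduce(b1, b2):
    """Lagrange-Gauss reduction of a 2D integer lattice basis: repeatedly
    subtract the rounded projection of the longer vector onto the shorter."""
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]
    while True:
        if dot(b1, b1) > dot(b2, b2):
            b1, b2 = b2, b1                       # keep b1 the shorter vector
        mu = round(dot(b1, b2) / dot(b1, b1))     # nearest-integer projection
        b2 = (b2[0] - mu * b1[0], b2[1] - mu * b1[1])
        if dot(b2, b2) >= dot(b1, b1):
            return b1, b2

print(gauss_reduce((1, 0), (4, 1)))   # ((1, 0), (0, 1))
```

The output basis consists of a shortest lattice vector and a shortest vector independent of it — exactly the property that becomes expensive to guarantee as the dimension grows.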

Journal ArticleDOI
TL;DR: An algorithm for automatic computation of the time step based on the current period is proposed, together with a set of parameters for characterizing the dynamic response of a system; these include a 'current frequency', a 'current period', and a 'dynamic stiffness parameter'.
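The idea can be sketched in scalar form: estimate a "current frequency" from a Rayleigh-quotient-like ratio of effective stiffness to effective mass, take the "current period" T = 2π/ω, and set the next step to a fixed fraction of it. The scalar reduction and the 1/20 fraction are illustrative assumptions, not the paper's formulas:

```python
import math

def next_time_step(k_eff, m_eff, fraction=1.0 / 20.0):
    """k_eff, m_eff: effective stiffness and mass scalars for the current
    response; returns a time step as a fraction of the current period."""
    omega = math.sqrt(k_eff / m_eff)   # 'current frequency', rad/s
    period = 2.0 * math.pi / omega     # 'current period'
    return fraction * period

print(next_time_step(k_eff=400.0, m_eff=1.0))   # omega = 20 rad/s -> dt ~ 0.0157
```

As the response stiffens or softens during the integration, the estimated period, and hence the step, adapts automatically.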


BookDOI
01 Jan 1985
TL;DR: In this article, the authors present a list of past, present, and future applications of computer algebra in chemistry, including the following: 1. MACSYMA: Capabilities and Applications to Problems in Engineering and the Sciences.
Abstract: 1. MACSYMA: Capabilities and Applications to Problems in Engineering and the Sciences.- 2. Modern Symbolic Mathematical Computation Systems.- 3. Using VAXIMA to Write FORTRAN Code.- 4. Applications of Symbolic Mathematics to Mathematics.- 5. Past, Present, and Future Applications of Computer Algebra in Chemistry.- 6. Symbolic Computation in Chemical Education.- 7. A Lisp System for Chemical Groups: Wigner-Eckart Coefficients for Arbitrary Permutation Groups.- 8. Polymer Modeling Applications of Symbolic Computation.- 9. Stability Analysis and Optimal Control of a Photochemical Heat Engine.- 10. Fourier Transform Algorithms for Spectral Analysis Derived with MACSYMA.- 11. Computer Algebra as a Tool for Solving Optimal Control Problems.- 12. Application of MACSYMA to Kinematics and Mechanical Systems.- 13. Stability Analysis of a Robotic Mechanism Using Computer Algebra.- 14. Derivation of the Hopf Bifurcation Formula Using Lindstedt's Perturbation Method and MACSYMA.- 15. Normal Form and Center Manifold Calculations on MACSYMA.- 16. Symbolic Computation of the Stokes Wave.- 17. Simplifying Large Algebraic Expressions by Computer.- 18. A Proposal for the Solution of Quantum Field Theory Problems Using a Finite-Element Approximation.- 19. Exact Solutions for Superlattices and How to Recognize Them with Computer Algebra.- 20. Computer Generation of Symbolic Generalized Inverses and Applications to Physics and Data Analysis.

BookDOI
01 Jan 1985
TL;DR: A New Look at Spatially Competitive Facility Location Models and Investigating the Use of the Core as a Solution Concept in Spatial Price Equilibrium Games.
Abstract: A New Look at Spatially Competitive Facility Location Models.- A Spatial Nash Equilibrium Model.- Investigating the Use of the Core as a Solution Concept in Spatial Price Equilibrium Games.- Computational Aspects of the International Coal Trade Model.- Demand Homotopies for Computing Nonlinear and Multi-Commodity Spatial Equilibria.- A Dual Conjugate Gradient Method for the Single-Commodity Spatial Price Equilibrium Problem.- General Spatial Price Equilibria: Sensitivity Analysis for Variational Inequality and Nonlinear Complementarity Formulations.- An Application of Quadratic Programming to the Deregulation of Natural Gas.- Evaluation of Electric Power Deregulation Using Network Models of Oligopolistic Spatial Markets.- Multiple Objective Analysis for a Spatial Market System: A Case Study of U.S.Agricultural Policy.

01 Aug 1985
TL;DR: First, grey-scale image smoothing proves to be better than boundary smoothing for creating representations at multiple scales of resolution, because it is more robust and it allows qualitative changes in representations between scales.
Abstract: This thesis describes a new representation for two-dimensional round regions called Local Rotational Symmetries. Local Rotational Symmetries are intended as a companion to Brady's Smoothed Local Symmetry representation for elongated shapes. An algorithm for computing Local Rotational Symmetry representations at multiple scales of resolution has been implemented and results of this implementation are presented. These results suggest that Local Rotational Symmetries provide a more robustly computable and perceptually accurate description of round regions than previously proposed representations. In the course of developing this representation, it has been necessary to modify the way both Smoothed Local Symmetries and Local Rotational Symmetries are computed. First, grey-scale image smoothing proves to be better than boundary smoothing for creating representations at multiple scales of resolution, because it is more robust and it allows qualitative changes in representations between scales. Secondly, it is proposed that shape representations at different scales of resolution be explicitly related, so that information can be passed between scales and computation at each scale can be kept local. Such a model for multi-scale computation is desirable both to allow efficient computation and to accurately model human perception. Additional keywords: Image understanding; Computer vision; Artificial intelligence; Shape representation; Computer graphics.

Journal ArticleDOI
TL;DR: Standard programming languages are inadequate for the kind of symbolic mathematical computations that theoretical physicists need to perform and higher mathematics systems like SMP address this problem.
Abstract: Standard programming languages are inadequate for the kind of symbolic mathematical computations that theoretical physicists need to perform. Higher mathematics systems like SMP address this problem.

Journal ArticleDOI
TL;DR: A structure is proposed from which it is possible to efficiently reconstruct the state of the data it represented at any time and applications of this data structure to a number of important problems in geometric computation are given.
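The abstract describes what is now called a partially persistent structure. A minimal "fat node" sketch of the idea — each update is stamped with a version number and lookups binary-search the history — shown here on a hypothetical key-value map, far simpler than the paper's construction:

```python
import bisect

class PersistentMap:
    """Partial persistence via "fat nodes": every key keeps its full
    (version, value) history, so any past state can be read back."""
    def __init__(self):
        self.history = {}    # key -> list of (version, value), version-sorted
        self.version = 0

    def set(self, key, value):
        self.version += 1
        self.history.setdefault(key, []).append((self.version, value))
        return self.version              # handle for this moment in time

    def get(self, key, version=None):
        if version is None:
            version = self.version       # default: the latest state
        entries = self.history.get(key, [])
        versions = [v for v, _ in entries]
        i = bisect.bisect_right(versions, version) - 1
        if i < 0:
            raise KeyError(key)
        return entries[i][1]

m = PersistentMap()
v1 = m.set("x", 1)
m.set("x", 2)
print(m.get("x", v1), m.get("x"))   # 1 2
```

Applied to a balanced search tree rather than a flat map, the same stamping idea yields the geometric applications the TL;DR mentions, such as answering point-location queries "as of" any sweep position.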

Journal ArticleDOI
TL;DR: A modification of that decomposition method is presented which results in speeding up the Minkowski operations for a broader class of structuring elements and it is shown that, after a certain number of steps, just the 'extreme points' of the structuring element are important.

Journal ArticleDOI
TL;DR: In this article, an implicit-explicit partitioning of the structural displacement, fluid velocity and fluid pressure is used to reduce the semi-bandwidth of the discretized equations.

Journal ArticleDOI
TL;DR: Tests indicate that the method reduces running time over standard methods in scalar form, and that “vectorization” produces an order-of-magnitude decrease in execution time.


Journal ArticleDOI
TL;DR: An algorithm for the computation of a Hopf bifurcation point based on a direct method: an augmented time-independent system is solved and the band structure of the Jacobian matrix is exploited.