
Showing papers on "Computation published in 2000"


Journal ArticleDOI
TL;DR: In this paper, the authors focus on the role of symbol processors in business performance and economic growth, arguing that most problems are not numerical problems and that the everyday activities of most managers, professionals, and information workers involve other types of computation.
Abstract: How do computers contribute to business performance and economic growth? Even today, most people who are asked to identify the strengths of computers tend to think of computational tasks like rapidly multiplying large numbers. Computers have excelled at computation since the Mark I (1939), the first modern computer, and the ENIAC (1943), the first electronic computer without moving parts. During World War II, the U.S. government generously funded research into tools for calculating the trajectories of artillery shells. The result was the development of some of the first digital computers with remarkable capabilities for calculation—the dawn of the computer age. However, computers are not fundamentally number crunchers. They are symbol processors. The same basic technologies can be used to store, retrieve, organize, transmit, and algorithmically transform any type of information that can be digitized—numbers, text, video, music, speech, programs, and engineering drawings, to name a few. This is fortunate because most problems are not numerical problems. Ballistics, code breaking, parts of accounting, and bits and pieces of other tasks involve lots of calculation. But the everyday activities of most managers, professionals, and information workers involve other types of computation.

2,937 citations


Journal ArticleDOI
31 Aug 2000-Nature
TL;DR: The physical limits of computation as determined by the speed of light c, the quantum scale ℏ and the gravitational constant G are explored.
Abstract: Computers are physical systems: the laws of physics dictate what they can and cannot do. In particular, the speed with which a physical device can process information is limited by its energy and the amount of information that it can process is limited by the number of degrees of freedom it possesses. Here I explore the physical limits of computation as determined by the speed of light c, the quantum scale ℏ and the gravitational constant G. As an example, I put quantitative bounds to the computational power of an 'ultimate laptop' with a mass of one kilogram confined to a volume of one litre.
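For orientation, the headline figure in this abstract follows from the Margolus-Levitin bound of roughly 2E/(πℏ) operations per second with E = mc². The short sketch below, using standard physical constants and making no claim about the paper's derivation details, recovers the order of magnitude quoted for the one-kilogram "ultimate laptop".

```python
# Back-of-the-envelope check of the "ultimate laptop" operation rate.
# The Margolus-Levitin theorem limits a system of average energy E to at
# most 2E/(pi*hbar) elementary operations per second; here E = m*c^2.

import math

c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J*s
m = 1.0              # mass of the "ultimate laptop", kg

E = m * c ** 2                            # rest energy, J
ops_per_second = 2 * E / (math.pi * hbar)

print(f"Energy: {E:.3e} J")
print(f"Max operations per second: {ops_per_second:.3e}")
# roughly 5.4e50 operations per second, the figure quoted in the paper
```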

1,020 citations


Patent
25 Sep 2000
TL;DR: In this article, a personal digital assistant (PDA) stores data from physiological monitors (12) so that the data can be used in various software applications, such as medical applications.
Abstract: A personal digital assistant (PDA) (10) stores data from physiological monitors (12) so that the data can be used in various software applications. In different embodiments the physiological monitor (12) can include data storage or have memory modules that are accepted by accessory slots (48 and 50).

899 citations


Journal ArticleDOI
TL;DR: An effective approach based on the concept underlying the technique for order preference by similarity to ideal solution (TOPSIS) is developed to rank competing companies by their overall performance on multiple financial ratios, ensuring that the evaluation result is not affected by the inter-dependency of criteria or by the inconsistency of subjective weights.
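As background for the TL;DR above, the sketch below implements only the standard TOPSIS ranking steps (vector normalization, weighting, ideal and anti-ideal solutions, relative closeness); the paper's modifications for criterion inter-dependency and inconsistent subjective weights are not reproduced, and the company data and weights are invented for illustration.

```python
# A minimal sketch of standard TOPSIS ranking (not the paper's modified variant).

import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) on criteria (columns of X)."""
    X = np.asarray(X, dtype=float)
    R = X / np.linalg.norm(X, axis=0)           # vector-normalize each criterion
    V = R * weights                              # apply criterion weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus  = np.linalg.norm(V - ideal, axis=1)  # distance to the ideal solution
    d_minus = np.linalg.norm(V - anti, axis=1)   # distance to the anti-ideal
    return d_minus / (d_plus + d_minus)          # closeness: larger is better

# Four companies rated on three financial ratios (higher is better for all).
scores = topsis([[0.20, 1.8, 12.0],
                 [0.15, 2.3,  9.5],
                 [0.25, 1.2, 14.0],
                 [0.18, 2.0, 11.0]],
                weights=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([True, True, True]))
print(np.argsort(-scores))   # company indices from best to worst
```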

857 citations


Proceedings ArticleDOI
08 Oct 2000
TL;DR: In this paper, the authors proposed several concepts of integrators for sinusoidal signals, including parallel and series associations of the basic PI units using the stationary frame generalized integrators, for current control of active power filters.
Abstract: The paper proposes several concepts of integrators for sinusoidal signals. Parallel and series associations of the basic PI units using the stationary frame generalized integrators are used for current control of active power filters. Zero steady-state error for the current harmonics of interest is realized, with reduced computation, under unbalanced utility or load conditions. Design of the PI constants, digital realization of the generalized integrators, and compensation of the computation delay are studied. Extensive test results from a 10 kW active power filter prototype are demonstrated.

838 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a body of knowledge that forms the beginnings of an expandable continuum framework for the description of mixing and segregation of granular materials, focusing primarily on noncohesive particles, possibly differing in size, density, shape, etc.
Abstract: Granular materials segregate. Small differences in either size or density lead to flow-induced segregation, a complex phenomenon without parallel in fluids. Modeling of mixing and segregation processes requires the confluence of several tools, including continuum and discrete descriptions (particle dynamics, Monte Carlo simulations, cellular automata computations) and, often, considerable geometrical insight. None of these viewpoints, however, is wholly satisfactory by itself. Moreover, continuum and discrete descriptions of granular flows are regime dependent, and this fact may require adopting different subviewpoints. This review organizes a body of knowledge that forms—albeit imperfectly—the beginnings of an expandable continuum framework for the description of mixing and segregation of granular materials. We focus primarily on noncohesive particles, possibly differing in size, density, shape, etc. We present segregation mechanisms and models for size and density segregation and introduce ch...

599 citations


Journal ArticleDOI
19 May 2000-Science
TL;DR: Hairpin formation by single-stranded DNA molecules was exploited in a DNA-based computation in order to explore the feasibility of autonomous molecular computing, and the satisfiability of a given Boolean formula was examined autonomously.
Abstract: Hairpin formation by single-stranded DNA molecules was exploited in a DNA-based computation in order to explore the feasibility of autonomous molecular computing. An instance of the satisfiability problem, a famous hard combinatorial problem, was solved by using molecular biology techniques. The satisfiability of a given Boolean formula was examined autonomously, on the basis of hairpin formation by the molecules that represent the formula. This computation algorithm can test several clauses in the given formula simultaneously, which could reduce the number of laboratory steps required for computation.

356 citations


Book ChapterDOI
11 Oct 2000
TL;DR: This approach, besides its simplicity, provides a robust and efficient way to rigidly register images in various situations, and can easily be implemented on a parallel architecture, which opens potentialities for real time applications using a large number of processors.
Abstract: In order to improve the robustness of rigid registration algorithms in various medical imaging problems, we propose in this article a general framework built on block matching strategies. This framework combines two stages in a multi-scale hierarchy. The first stage consists in finding, for each block (or subregion) of the first image, the most similar subregion in the other image, using a similarity criterion which depends on the nature of the images. The second stage consists in finding the global rigid transformation which best explains most of these local correspondences. This is done with a robust procedure which allows up to 50% of false matches. We show that this approach, besides its simplicity, provides a robust and efficient way to rigidly register images in various situations. This includes, for instance, the alignment of 2D histological sections for the 3D reconstructions of trimmed organs and tissues, the automatic computation of the mid-sagittal plane in multimodal 3D images of the brain, and the multimodal registration of 3D CT and MR images of the brain. A quantitative evaluation of the results is provided for this last example, as well as a comparison with the classical approaches involving the minimization of a global measure of similarity based on Mutual Information or the Correlation Ratio. This shows a significant improvement of the robustness, for a comparable final accuracy. Although slightly more expensive in terms of computational requirements, the proposed approach can easily be implemented on a parallel architecture, which opens potentialities for real time applications using a large number of processors.

344 citations


Posted Content
TL;DR: In this article, it was shown that the topological modular functor from Witten-Chern-Simons theory is universal for quantum computation, in the sense that a quantum circuit computation can be efficiently approximated by an intertwining action of a braid on the functor's state space.
Abstract: We show that the topological modular functor from Witten-Chern-Simons theory is universal for quantum computation, in the sense that a quantum circuit computation can be efficiently approximated by an intertwining action of a braid on the functor's state space. A computational model based on Chern-Simons theory at a fifth root of unity is defined and shown to be polynomially equivalent to the quantum circuit model. The chief technical advance, the density of the irreducible sectors of the Jones representation, has topological implications which will be considered elsewhere.

332 citations


Journal ArticleDOI
TL;DR: A technique for the computation of the Green's tensor in three-dimensional stratified media composed of an arbitrary number of layers with different permittivities and permeabilities is presented.
Abstract: We present a technique for the computation of the Green's tensor in three-dimensional stratified media composed of an arbitrary number of layers with different permittivities and permeabilities (including metals with a complex permittivity). The practical implementation of this technique is discussed in detail. In particular, we show how to efficiently handle the singularities occurring in Sommerfeld integrals, by deforming the integration path in the complex plane. Examples assess the accuracy of this approach and illustrate the physical properties of the Green's tensor, which represents the field radiated by three orthogonal dipoles embedded in the multilayered medium.

270 citations


Book ChapterDOI
01 Jan 2000
TL;DR: A new general algorithm for computing distance transforms of digital images is presented, which can be used for the computation of the exact Euclidean, Manhattan, and chessboard distance transforms.
Abstract: A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the computation per row (column) is independent of the computation of other rows (columns), the algorithm can be easily parallelized on shared memory computers. The algorithm can be used for the computation of the exact Euclidean, Manhattan (L1 norm), and chessboard (L∞ norm) distance transforms.
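To make the two-phase, two-scan structure concrete, here is a minimal sketch for the exact Manhattan (L1) case under the assumptions stated in the comments; the Euclidean and chessboard variants described in the abstract require different row-phase recurrences and are omitted.

```python
# Two-phase, column-then-row sketch of the exact Manhattan (L1) distance transform.

import numpy as np

def manhattan_distance_transform(feature):
    """feature: 2D boolean array, True where a feature (object) pixel lies.
    Returns, for every pixel, the L1 distance to the nearest feature pixel."""
    rows, cols = feature.shape
    INF = rows + cols                        # larger than any possible distance

    # Phase 1: per column, vertical distance to the nearest feature pixel.
    g = np.full((rows, cols), INF, dtype=int)
    for x in range(cols):
        for y in range(rows):                # forward (top-to-bottom) scan
            if feature[y, x]:
                g[y, x] = 0
            elif y > 0:
                g[y, x] = min(INF, g[y - 1, x] + 1)
        for y in range(rows - 2, -1, -1):    # backward (bottom-to-top) scan
            g[y, x] = min(g[y, x], g[y + 1, x] + 1)

    # Phase 2: per row, combine horizontally; for L1 this is again two scans.
    dt = g.copy()
    for y in range(rows):
        for x in range(1, cols):             # forward (left-to-right) scan
            dt[y, x] = min(dt[y, x], dt[y, x - 1] + 1)
        for x in range(cols - 2, -1, -1):    # backward (right-to-left) scan
            dt[y, x] = min(dt[y, x], dt[y, x + 1] + 1)
    return dt
```

Because each column (and later each row) is processed independently, the two loops over x and y parallelize directly, which is the property the abstract highlights for shared-memory machines.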

Book ChapterDOI
15 Jul 2000
TL;DR: Regular model checking as discussed by the authors is a framework for verification of infinite-state systems with queues, stacks, integers, or a parameterized linear topology, where states are represented by strings over a finite alphabet and the transition relation by a regular length-preserving relation on strings.
Abstract: We present regular model checking, a framework for algorithmic verification of infinite-state systems with, e.g., queues, stacks, integers, or a parameterized linear topology. States are represented by strings over a finite alphabet and the transition relation by a regular length-preserving relation on strings. Major problems in the verification of parameterized and infinite-state systems are to compute the set of states that are reachable from some set of initial states, and to compute the transitive closure of the transition relation. We present two complementary techniques for these problems. One is a direct automata-theoretic construction, and the other is based on widening. Both techniques are incomplete in general, but we give sufficient conditions under which they work. We also present a method for verifying ω-regular properties of parameterized systems, by computation of the transitive closure of a transition relation.

Journal ArticleDOI
TL;DR: The standard method seems to be the most efficient, followed by the new method and the differential version of the standard method (in that order), as far as the CPU time for the computation of the Lyapunov spectra is concerned.

Journal ArticleDOI
TL;DR: A novel fast block-matching algorithm named normalized partial distortion search is proposed, which reduces computations by using a halfway-stop technique in the calculation of the block distortion measure and normalizes the accumulated partial distortion and the current minimum distortion before comparison.
Abstract: Many fast block-matching algorithms reduce computations by limiting the number of checking points. They can achieve high computation reduction, but often result in relatively higher matching error compared with the full-search algorithm. A novel fast block-matching algorithm named normalized partial distortion search is proposed. The proposed algorithm reduces computations by using a halfway-stop technique in the calculation of the block distortion measure. In order to increase the probability of early rejection of non-possible candidate motion vectors, the proposed algorithm normalizes the accumulated partial distortion and the current minimum distortion before comparison. Experimental results show that the proposed algorithm can maintain its mean square error performance very close to that of the full-search algorithm while achieving an average computation reduction of 12-13 times with respect to the full-search algorithm.
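The halfway-stop idea can be illustrated with a short sketch; the grouping of pixels into partial distortions and the exact normalization used in the paper may differ from the row-by-row version assumed here, and the helper names are hypothetical.

```python
# Block matching with a normalized halfway-stop test (illustrative only).

import numpy as np

def sad_with_early_stop(block, candidate, d_min):
    """Accumulate SAD row by row; reject early once the normalized partial
    distortion already exceeds the normalized current minimum."""
    n_rows = block.shape[0]
    partial = 0
    for r in range(n_rows):
        partial += np.abs(block[r].astype(int) - candidate[r].astype(int)).sum()
        # Compare partial * n_rows against d_min * (r + 1), i.e. both normalized.
        if partial * n_rows >= d_min * (r + 1):
            return None                       # cannot beat the current best
    return partial

def full_search(block, ref, center, search_range=7):
    """Exhaustive search around `center` in the reference frame `ref`."""
    bh, bw = block.shape
    cy, cx = center
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y <= ref.shape[0] - bh and 0 <= x <= ref.shape[1] - bw:
                d = sad_with_early_stop(block, ref[y:y + bh, x:x + bw], best)
                if d is not None and d < best:
                    best, best_mv = d, (dy, dx)
    return best_mv, best
```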

Journal ArticleDOI
TL;DR: Time-dependent solutions of the sharp-interface model of dendritic solidification in two dimensions are computed by using a level set method and steady-state results are in agreement with solvability theory.
Abstract: We compute time-dependent solutions of the sharp-interface model of dendritic solidification in two dimensions by using a level set method. The steady-state results are in agreement with solvability theory. Solutions obtained from the level set algorithm are compared with dendritic growth simulations performed using a phase-field model and the two methods are found to give equivalent results. Furthermore, we perform simulations with unequal diffusivities in the solid and liquid phases and find reasonable agreement with the available theory.

Book ChapterDOI
01 Jan 2000
TL;DR: An approach based on the Pontryagin maximum principle of optimal control theory is elaborated for linear systems, and it may prove useful for more general continuous-variable systems.
Abstract: Reach set computation is a basic component of many verification and control synthesis procedures. Effective computation schemes are available for discrete systems described by finite state machines and continuous-variable systems described by linear differential inequalities. This paper suggests an approach based on the Pontryagin maximum principle of optimal control theory. The approach is elaborated for linear systems, and it may prove useful for more general continuous-variable systems.

Journal ArticleDOI
01 Apr 2000
TL;DR: The developed computation of structure-varying kinematic chains will provide a general algorithm for the computation of motion and control of humanoid robots and computer graphics human figures.
Abstract: This paper discusses the dynamics computation of structure-varying kinematic chains, which are mechanical link systems whose structure may change from an open kinematic chain to a closed one and vice versa. The proposed algorithm can handle and compute the dynamics and motions of any rigid link system in a seamless manner without switching among algorithms. The computation is developed on the foundation of the dynamics computation algorithms established in robotics, which, due to their explicit use of generalized coordinates, are more efficient than those used in general-purpose motion analysis software. Although structure-varying kinematic chains are commonly found in computing human and animal motions, the computation of their dynamics has not been discussed in the literature. The developed computation will provide a general algorithm for the computation of motion and control of humanoid robots and computer graphics human figures.

Journal ArticleDOI
TL;DR: An extended lattice Boltzmann (BGK) model is presented for the simulation of low Mach number flows with significant density changes; with a boundary-fitting formulation and local grid refinement, the scheme enables accurate and efficient computations of low Mach number reactive flows in complex geometry on the simplest Cartesian grids.

Journal ArticleDOI
TL;DR: Two approaches to the implementation of the conjugate gradient algorithm for filtering are presented, in which several modifications to the original CG method are proposed; it is shown that in finite word-length computation and close to steady state, the algorithms' behavior is similar to that of the steepest descent algorithm.
Abstract: The paper presents and analyzes two approaches to the implementation of the conjugate gradient (CG) algorithm for filtering, in which several modifications to the original CG method are proposed. The convergence rates and misadjustments for the two approaches are compared. An analysis in the z-domain is used in order to find the asymptotic performance, and stability bounds are established. The behavior of the algorithms in finite word-length computation is described, and dynamic range considerations are discussed. It is shown that in finite word-length computation and close to steady state, the algorithms' behavior is similar to that of the steepest descent algorithm, where the stalling phenomenon is observed. Using 16-bit fixed-point number representation, our simulations show that the algorithms are numerically stable.
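For reference, the sketch below shows only the textbook conjugate gradient recursion applied to the Wiener normal equations R w = p; the two modified implementations analyzed in the paper, and their finite word-length behavior, are not reproduced.

```python
# Textbook conjugate gradient applied to the normal equations R w = p.

import numpy as np

def conjugate_gradient(R, p, n_iter=None, tol=1e-10):
    """Solve R w = p for a symmetric positive-definite correlation matrix R."""
    n = len(p)
    w = np.zeros(n)
    r = p - R @ w                       # residual (negative gradient)
    d = r.copy()                        # initial search direction
    for _ in range(n_iter or n):
        Rd = R @ d
        alpha = (r @ r) / (d @ Rd)      # optimal step size along d
        w = w + alpha * d
        r_new = r - alpha * Rd
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # conjugacy-preserving update
        d = r_new + beta * d
        r = r_new
    return w
```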

Journal ArticleDOI
01 Jan 2000
TL;DR: This paper describes lessons learned over the past several years about teaching the design of modern embedded computing systems and believes that next-generation courses in embedded computing should move away from the discussion of components and toward the Discussion of analysis and design of systems.
Abstract: This paper describes lessons we have learned over the past several years about teaching the design of modern embedded computing systems. An embedded computing system uses microprocessors to implement parts of the functionality of non-general-purpose computers. Early microprocessor-based design courses, based on simple microprocessors, emphasized input and output (I/O). Modern high-performance embedded processors are capable of a great deal of computation in addition to I/O tasks. Taking advantage of this capability requires a knowledge of fundamental concepts in the analysis and design of concurrent computing systems. We believe that next-generation courses in embedded computing should move away from the discussion of components and toward the discussion of analysis and design of systems.

22 Sep 2000
TL;DR: In this paper, the pseudorange error models for the GPS LAAS MASPS were presented and validated using the ICAO GNSS SARPs and Signal-In-Space error models.
Abstract: This paper provides details underlying the pseudorange accuracy models originally developed for the GPS LAAS MASPS. During the development of the MASPS, WG-4 of RTCA SC-159 realized that it needed standardized error models for LAAS availability assessments. To meet this need, various Ground and Airborne Accuracy Designators were defined based upon performance that could be achieved using currently available GPS receiver technology. In addition Signal-In-Space error models were developed for such effects as tropospheric and ionospheric temporal and spatial decorrelation. Pseudorange errors are modeled as a function of predictable parameters such as satellite elevation angle. When combined with assumptions about the differential correction methodology, airborne positioning algorithm, and integrity limit computations, these pseudorange error models permit a determination of the availability of a desired level of service from the LAAS. Refinements of the models have also been made since their original publication in the LAAS MASPS, notably in the development and validation of the ICAO GNSS SARPs.

Journal ArticleDOI
TL;DR: A method for the computation of the equivalent bandwidth of an aggregate of heterogeneous self-similar sources, as well as the time scales of interest for queueing systems fed by a fractal Brownian motion process are presented.
Abstract: This article presents a method for the computation of the equivalent bandwidth of an aggregate of heterogeneous self-similar sources, as well as the time scales of interest for queueing systems fed by a fractal Brownian motion (fBm) process. Moreover, the fractal leaky bucket, a novel policing mechanism capable of accurately monitoring self-similar sources, is introduced.

Proceedings Article
01 Jan 2000
TL;DR: An efficient and deterministic algorithm for computing the one-dimensional dilation and erosion (max and min) sliding window filters and gives an efficient algorithm for its computation.
Abstract: We propose an efficient and deterministic algorithm for computing the one-dimensional dilation and erosion (max and min) sliding window filters. For a p-element sliding window, our algorithm computes the 1D filter using 1.5 + o(1) comparisons per sample point. Our algorithm constitutes a deterministic improvement over the best previously known such algorithm, independently developed by van Herk (1992) and by Gil and Werman (1993) (the HGW algorithm). Also, the results presented in this paper constitute an improvement over the Gevorkian et al. (1997) (GAA) variant of the HGW algorithm. The improvement over the GAA variant is also in the computation model. The GAA algorithm makes the assumption that the input is independently and identically distributed (the i.i.d. assumption), whereas our main result is deterministic. We also deal with the problem of computing the dilation and erosion filters simultaneously, as required, e.g., for computing the unbiased morphological edge. In the case of i.i.d. inputs, we show that this simultaneous computation can be done more efficiently than separately computing each. We then turn to the opening filter, defined as the application of the min filter to the max filter, and give an efficient algorithm for its computation. Specifically, this algorithm is only slightly slower than the computation of just the max filter. The improved algorithms are readily generalized to two dimensions (for a rectangular window), as well as to any higher finite dimension (for a hyperbox window), with the number of comparisons per window remaining constant. For the sake of concreteness, we also make a few comments on implementation considerations in a contemporary programming language.
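As a point of comparison, here is a sketch of the earlier van Herk / Gil-Werman style max filter (roughly 3 comparisons per sample) that the paper improves upon; the 1.5 + o(1) comparison scheme and the joint max/min computation are not reproduced.

```python
# van Herk / Gil-Werman style 1D sliding max filter (the HGW baseline).

import numpy as np

def sliding_max(f, p):
    """Return out[j] = max(f[j], ..., f[j+p-1]) for every full window."""
    f = np.asarray(f)
    n = len(f)
    g = np.empty(n, dtype=f.dtype)   # prefix maxima within segments of length p
    h = np.empty(n, dtype=f.dtype)   # suffix maxima within segments of length p
    for i in range(n):
        g[i] = f[i] if i % p == 0 else max(g[i - 1], f[i])
    for i in range(n - 1, -1, -1):
        h[i] = f[i] if (i % p == p - 1 or i == n - 1) else max(h[i + 1], f[i])
    # Each window spans at most two segments, so its max is max(h[start], g[end]).
    return np.array([max(h[j], g[j + p - 1]) for j in range(n - p + 1)])

print(sliding_max([4, 2, 7, 1, 3, 6, 5], p=3))   # [7 7 7 6 6]
```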

01 Jan 2000
TL;DR: This work proposes new algorithms that combine BDDs and SAT in order to exploit their complementary benefits, and to offer a mechanism for trading off space vs. time.
Abstract: Image computation finds wide application in VLSI CAD, such as state reachability analysis in formal verification and synthesis, combinational verification, combinational and sequential test. Existing BDD-based symbolic algorithms for image computation are limited by memory resources in practice, while SAT-based algorithms that can obtain the image by enumerating satisfying assignments to a CNF representation of the Boolean relation are potentially limited by time resources. We propose new algorithms that combine BDDs and SAT in order to exploit their complementary benefits, and to offer a mechanism for trading off space vs. time. In particular, (1) our integrated algorithm uses BDDs to represent the input and image sets, and a CNF formula to represent the Boolean relation, (2) a fundamental enhancement called BDD Bounding is used whereby the SAT solver uses the BDDs for the input set and the dynamically changing image set to prune the search space of all solutions, (3) BDDs are used to compute all solutions below intermediate points in the SAT decision tree, (4) a fine-grained variable quantification schedule is used for each BDD subproblem, based on the CNF representation of the Boolean relation. These enhancements coupled with more engineering heuristics lead to an overall algorithm that can potentially handle larger problems. This is supported by our preliminary results on exact reachability analysis of ISCAS benchmark circuits.

Journal ArticleDOI
TL;DR: The accuracy of NeuroFlux appears to be comparable to the accuracy of the ECMWF operational scheme, with a negligible impact on the simulations, while its computing time is seven times faster.
Abstract: The definition of an approach for radiative-transfer modelling that would enable computation times suitable for climate studies and a satisfactory accuracy has proved to be a challenge for modellers. A fast radiative-transfer model is tested at ECMWF: NeuroFlux. It is based on an artificial neural-network technique used in conjunction with a classical cloud approximation (the multilayer grey-body model). The accuracy of the method is assessed through code-by-code comparisons, climate simulations and ten-day forecasts with the ECMWF model. The accuracy of NeuroFlux appears to be comparable to the accuracy of the ECMWF operational scheme, with a negligible impact on the simulations, while its computing time is seven times faster. Key words: artificial neural networks; general-circulation models; long-wave radiative transfer.

Journal ArticleDOI
TL;DR: A modular Buchberger–Möller algorithm is developed, and a variant for the computation of ideals of projective points, which uses a direct approach and a new stopping criterion, is described.


Book ChapterDOI
TL;DR: A new algorithm for the fast computation of N-point correlation functions in large astronomical data sets is presented, based on kd-trees that are decorated with cached sufficient statistics, thus allowing for orders-of-magnitude speed-ups over the naive non-tree-based implementation of correlation functions.
Abstract: We present here a new algorithm for the fast computation of N-point correlation functions in large astronomical data sets. The algorithm is based on kd-trees which are decorated with cached sufficient statistics, thus allowing for orders of magnitude speed-ups over the naive non-tree-based implementation of correlation functions. We further discuss the use of controlled approximations within the computation which allows for further acceleration. In summary, our algorithm now makes it possible to compute exact, all-pairs measurements of the 2-, 3- and 4-point correlation functions for cosmological data sets like the Sloan Digital Sky Survey (SDSS; York et al. 2000) and the next generation of Cosmic Microwave Background experiments (see Szapudi et al. 2000).
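For scale, the sketch below shows the naive all-pairs 2-point counting that the kd-tree algorithm accelerates; the bin edges and toy data are illustrative only, and nothing here reflects the cached-sufficient-statistics machinery of the paper.

```python
# Naive O(N^2) 2-point pair counting: the baseline a kd-tree approach speeds up.

import numpy as np

def naive_pair_counts(points, bin_edges):
    """Count point pairs whose separation falls into each distance bin."""
    n = len(points)
    counts = np.zeros(len(bin_edges) - 1, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):                 # every unordered pair once
            d = np.linalg.norm(points[i] - points[j])
            k = np.searchsorted(bin_edges, d) - 1
            if 0 <= k < len(counts):
                counts[k] += 1
    return counts

rng = np.random.default_rng(0)
pts = rng.random((200, 3))                         # 200 points in a unit cube
print(naive_pair_counts(pts, np.linspace(0.0, 0.5, 6)))
```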

Journal ArticleDOI
TL;DR: This paper analyzes the use of parallel computing in model reduction methods based on balanced truncation of large-scale dense systems and uses a sign function-based solver for computing full-rank factors of the Gramians.
Abstract: Model reduction is an area of fundamental importance in many modeling and control applications. In this paper we analyze the use of parallel computing in model reduction methods based on balanced truncation of large-scale dense systems. The methods require the computation of the Gramians of a linear time-invariant system. Using a sign function-based solver for computing full-rank factors of the Gramians yields some favorable computational aspects in the subsequent computation of the reduced-order model, particularly for non-minimal systems. As sign function-based computations only require efficient implementations of basic linear algebra operations readily available, e.g., in the BLAS, LAPACK, and ScaLAPACK, good performance of the resulting algorithms on parallel computers is to be expected. Our experimental results on a PC cluster show the performance and scalability of the parallel implementation.
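To illustrate the sign function connection, the sketch below applies the Newton iteration for the matrix sign function to a small controllability Lyapunov equation; the paper computes full-rank Gramian factors with parallel ScaLAPACK-based kernels, which this serial toy version does not attempt.

```python
# Newton sign-function iteration for the Lyapunov equation A*Wc + Wc*A^T + B*B^T = 0.

import numpy as np

def sign_function_gramian(A, B, tol=1e-10, max_iter=100):
    """Compute the controllability Gramian of a stable pair (A, B)."""
    Ak = A.copy()
    Qk = B @ B.T
    for _ in range(max_iter):
        Ainv = np.linalg.inv(Ak)
        A_next = 0.5 * (Ak + Ainv)                # Newton step for sign(A)
        Qk = 0.5 * (Qk + Ainv @ Qk @ Ainv.T)      # coupled update of the RHS block
        if np.linalg.norm(A_next - Ak, 1) < tol * np.linalg.norm(Ak, 1):
            Ak = A_next
            break
        Ak = A_next
    return 0.5 * Qk          # Ak -> -I and Qk -> 2*Wc at convergence

# Example with a stable 2x2 system (values chosen only for illustration).
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
Wc = sign_function_gramian(A, B)
print(np.allclose(A @ Wc + Wc @ A.T + B @ B.T, 0, atol=1e-8))   # True
```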

01 Aug 2000
TL;DR: A new algorithm called Titanic for computing concept lattices is presented, based on data mining techniques for computing frequent itemsets, and compared with B. Ganter's Next-Closure algorithm.
Abstract: We present a new algorithm called Titanic for computing concept lattices. It is based on data mining techniques for computing frequent itemsets. The algorithm is experimentally evaluated and compared with B. Ganter's Next-Closure algorithm.