About: Scientific Programming is an open-access academic journal (ISSN 1058-9244) that publishes mainly in the areas of cloud computing and compilers. Over its lifetime, 3,750 publications have been published, receiving 27,507 citations.
Topics: Cloud computing, Compiler, Fortran, Deep learning, Big data
Papers published on a yearly basis
01 Jan 1990-Scientific Programming
TL;DR: The authors present a general theory of crime centered on low self-control, applying it to criminal events and individual propensities (age, gender, and race), its social consequences, culture, and white-collar crime.
Abstract: Preface Part I. Crime: 1. Classical theory and the idea of crime 2. The nature of crime Part II. Criminality: 3. Biological positivism 4. Psychological, economic, and sociological positivism 5. The nature of criminality: low self-control Part III. Applications of the Theory: 6. Criminal events and individual propensities: age, gender, and race 7. The social consequences of low self-control 8. Culture and crime 9. White-collar crime 10. Organization and crime Part IV. Research and Policy: 11. Research design and measurement 12. Implications for public policy Index.
01 Jul 2005-Scientific Programming
TL;DR: The results of improving application performance through workflow restructuring which clusters multiple tasks in a workflow into single entities are presented.
Abstract: This paper describes the Pegasus framework that can be used to map complex scientific workflows onto distributed resources. Pegasus enables users to represent the workflows at an abstract level without needing to worry about the particulars of the target execution systems. The paper describes general issues in mapping applications and the functionality of Pegasus. We present the results of improving application performance through workflow restructuring which clusters multiple tasks in a workflow into single entities. A real-life astronomy application is used as the basis for the study.
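The workflow restructuring described above can be illustrated with a small sketch. This is not the Pegasus API; it is a hypothetical Python fragment showing the underlying idea of clustering multiple tasks in a workflow DAG into single entities, so fewer units are handed to the target execution system:

```python
# Illustrative sketch (not the Pegasus API): cluster tasks at the same
# dependency depth of a workflow DAG into larger entities, reducing
# per-task scheduling overhead on the execution system.
from collections import defaultdict

def cluster_by_level(tasks, deps, cluster_size=2):
    """Group tasks of equal dependency depth into clusters.

    tasks: iterable of task names
    deps:  dict mapping a task to the set of tasks it depends on
    """
    level = {}

    def depth(t):
        # Depth = longest chain of dependencies above t (roots are 0).
        if t not in level:
            parents = deps.get(t, set())
            level[t] = 1 + max((depth(p) for p in parents), default=-1)
        return level[t]

    for t in tasks:
        depth(t)

    by_level = defaultdict(list)
    for t in sorted(tasks):
        by_level[level[t]].append(t)

    clusters = []
    for lvl in sorted(by_level):
        group = by_level[lvl]
        for i in range(0, len(group), cluster_size):
            clusters.append(group[i:i + cluster_size])
    return clusters

# A toy diamond workflow: b and c depend on a; d depends on b and c.
deps = {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
print(cluster_by_level(["a", "b", "c", "d"], deps))
# → [['a'], ['b', 'c'], ['d']]
```

Grouping by level is only one possible clustering criterion; the point is that each cluster becomes a single schedulable entity in place of many small tasks.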
01 Oct 2005-Scientific Programming
TL;DR: The design -- including the separation into two phases, the form of the programming language, and the properties of the aggregators -- exploits the parallelism inherent in having data and computation distributed across many machines.
Abstract: Very large data sets often have a flat but regular structure and span multiple disks and machines. Examples include telephone call records, network logs, and web document repositories. These large data sets are not amenable to study using traditional database techniques, if only because they can be too large to fit in a single relational database. On the other hand, many of the analyses done on them can be expressed using simple, easily distributed computations: filtering, aggregation, extraction of statistics, and so on. We present a system for automating such analyses. A filtering phase, in which a query is expressed using a new procedural programming language, emits data to an aggregation phase. Both phases are distributed over hundreds or even thousands of computers. The results are then collated and saved to a file. The design -- including the separation into two phases, the form of the programming language, and the properties of the aggregators -- exploits the parallelism inherent in having data and computation distributed across many machines.
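The two-phase design can be sketched in a few lines. The following is a hypothetical Python analogue (not the actual query language described in the paper, and the log records are invented): a filtering phase emits values per record, and an aggregation phase combines them with an order-independent operation, which is what makes distribution across many machines possible:

```python
# Sketch of the two-phase filter/aggregate design (hypothetical data,
# not the paper's actual programming language).
from collections import Counter

def filter_phase(records):
    """Per-record filtering: emits (key, value) pairs independently,
    so this phase can run in parallel over partitions of the data."""
    for rec in records:
        if rec["status"] == 200:          # keep successful requests only
            yield rec["url"], 1

def aggregate_phase(emitted):
    """Commutative, associative aggregation: order-independent, so
    partial results from many machines can be merged safely."""
    counts = Counter()
    for key, value in emitted:
        counts[key] += value
    return counts

logs = [
    {"url": "/a", "status": 200},
    {"url": "/a", "status": 200},
    {"url": "/b", "status": 404},
]
print(aggregate_phase(filter_phase(logs)))
# → Counter({'/a': 2})
```

Because the aggregator is commutative and associative, each machine can aggregate its own partition and the partial counts can be merged afterwards, exactly the parallelism the separation into two phases exploits.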
06 May 1996-Scientific Programming
TL;DR: A method for anomaly detection is introduced in which "normal" is defined by short-range correlations in a process's system calls; experiments suggest this definition is stable during normal behavior for standard UNIX programs.
Abstract: A method for anomaly detection is introduced in which ``normal'' is defined by short-range correlations in a process' system calls. Initial experiments suggest that the definition is stable during normal behavior for standard UNIX programs. Further, it is able to detect several common intrusions involving sendmail and lpr. This work is part of a research program aimed at building computer security systems that incorporate the mechanisms and algorithms used by natural immune systems.
01 Jan 1992-Scientific Programming
TL;DR: Experimental results show that ADIFOR can handle real-life codes and that ADIFOR-generated codes are competitive with divided-difference approximations of derivatives, and studies suggest that the source transformation approach to automatic differentiation may improve the time to compute derivatives by orders of magnitude.
Abstract: The numerical methods employed in the solution of many scientific computing problems require the computation of derivatives of a function f: $R^N \rightarrow R^m$. Both the accuracy and the computational requirements of the derivative computation are usually of critical importance for the robustness and speed of the numerical solution. Automatic Differentiation of FORtran (ADIFOR) is a source transformation tool that accepts Fortran 77 code for the computation of a function and writes portable Fortran 77 code for the computation of the derivatives. In contrast to previous approaches, ADIFOR views automatic differentiation as a source transformation problem. ADIFOR employs the data analysis capabilities of the ParaScope Parallel Programming Environment, which enable us to handle arbitrary Fortran 77 codes and to exploit the computational context in the computation of derivatives. Experimental results show that ADIFOR can handle real-life codes and that ADIFOR-generated codes are competitive with divided-difference approximations of derivatives. In addition, studies suggest that the source transformation approach to automatic differentiation may improve the time to compute derivatives by orders of magnitude.
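The contrast the paper draws between automatic differentiation and divided differences can be seen in a small sketch. ADIFOR itself transforms Fortran 77 source; the following is only the underlying idea in Python, with a hypothetical `Dual` class: forward-mode AD propagates exact derivative values alongside function values, while divided differences merely approximate them:

```python
# Sketch of the idea behind automatic differentiation (ADIFOR itself
# transforms Fortran 77 source; this Dual class is illustrative only).
import math

class Dual:
    """A number carrying a value and its derivative (val + dot * eps)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule applied exactly, not approximately.
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

    def sin(self):
        # Chain rule for sin.
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def f(x):
    # f(x) = sin(x^2); written to accept either a float or a Dual.
    return (x * x).sin() if isinstance(x, Dual) else math.sin(x * x)

x0 = 1.5
exact = f(Dual(x0, 1.0)).dot               # AD: d/dx sin(x^2) = 2x cos(x^2)
h = 1e-6
approx = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central divided difference
print(abs(exact - 2 * x0 * math.cos(x0 ** 2)))  # AD matches the analytic value
```

The AD result agrees with the analytic derivative to machine precision, while the divided-difference result carries truncation and cancellation error; a source transformation tool produces code computing the former without hand-written derivative routines.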
Related Journals (5)
Concurrency and Computation: Practice and Experience
5.5K papers, 88.1K citations
Journal of Parallel and Distributed Computing
4K papers, 108.5K citations
Future Generation Computer Systems
6.1K papers, 186.4K citations
4.4K papers, 401.5K citations
IEEE Transactions on Parallel and Distributed Systems
4.8K papers, 213.1K citations