
Dataflow programming

About: Dataflow programming is a research topic. Over its lifetime, 372 publications have been published within this topic, receiving 7,095 citations.


Papers
Journal ArticleDOI
01 May 1995
TL;DR: Dataflow process networks are shown to be a special case of Kahn process networks, a model of computation where a number of concurrent processes communicate through unidirectional FIFO channels, where writes to the channel are nonblocking, and reads are blocking.
Abstract: We review a model of computation used in industrial practice in signal processing software environments, and experimentally in other contexts. We give this model the name "dataflow process networks," and study its formal properties as well as its utility as a basis for programming language design. Variants of this model are used in commercial visual programming systems such as SPW from the Alta Group of Cadence (formerly Comdisco Systems), COSSAP from Synopsys (formerly Cadis), the DSP Station from Mentor Graphics, and Hypersignal from Hyperception. They are also used in research software such as Khoros from the University of New Mexico and Ptolemy from the University of California at Berkeley, among many others. Dataflow process networks are shown to be a special case of Kahn process networks, a model of computation where a number of concurrent processes communicate through unidirectional FIFO channels, where writes to the channel are nonblocking, and reads are blocking. In dataflow process networks, each process consists of repeated "firings" of a dataflow "actor." An actor defines an (often functional) quantum of computation. By dividing processes into actor firings, the considerable overhead of context switching incurred in most implementations of Kahn process networks is avoided. We relate dataflow process networks to other dataflow models, including those used in dataflow machines, such as static dataflow and the tagged-token model. We also relate dataflow process networks to functional languages such as Haskell, and show that modern language concepts such as higher-order functions and polymorphism can be used effectively in dataflow process networks. A number of programming examples using a visual syntax are given.

976 citations
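The Kahn process network model summarized above (concurrent processes joined by unidirectional FIFO channels, with nonblocking writes and blocking reads) can be sketched concretely. This is a minimal sketch, assuming Python threads and unbounded `queue.Queue` objects stand in for processes and channels; the actor names (`source`, `scale`, `collect`) and the `None` end-of-stream marker are illustrative assumptions, not taken from the paper.

```python
import queue
import threading

def source(out_ch, items):
    # Writes are nonblocking: the FIFO channel is unbounded.
    for x in items:
        out_ch.put(x)
    out_ch.put(None)  # end-of-stream marker (an illustrative convention)

def scale(in_ch, out_ch, factor):
    # Each loop iteration is one "firing" of the dataflow actor.
    while True:
        x = in_ch.get()  # reads block until a token arrives
        if x is None:
            out_ch.put(None)
            break
        out_ch.put(x * factor)

def collect(in_ch, results):
    while True:
        x = in_ch.get()
        if x is None:
            break
        results.append(x)

a, b = queue.Queue(), queue.Queue()  # unbounded, so put() never blocks
results = []
threads = [
    threading.Thread(target=source, args=(a, [1, 2, 3])),
    threading.Thread(target=scale, args=(a, b, 10)),
    threading.Thread(target=collect, args=(b, results)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [10, 20, 30]
```

Note the cost the paper targets: here every actor firing pays for a thread context switch, which is exactly the overhead that scheduling repeated firings within a single process avoids.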

Journal ArticleDOI
TL;DR: How dataflow programming evolved toward a hybrid von Neumann dataflow formulation, and adopted a more coarse-grained approach is discussed.
Abstract: Many developments have taken place within dataflow programming languages in the past decade. In particular, there has been a great deal of activity and advancement in the field of dataflow visual programming languages. The motivation for this article is to review the content of these recent developments and how they came about. It is supported by an initial review of dataflow programming in the 1970s and 1980s that led to current topics of research. It then discusses how dataflow programming evolved toward a hybrid von Neumann dataflow formulation, and adopted a more coarse-grained approach. Recent trends toward dataflow visual programming languages are then discussed with reference to key graphical dataflow languages and their development environments. Finally, the article details four key open topics in dataflow programming languages.

455 citations

Journal ArticleDOI
01 Aug 2009
TL;DR: Pig is a high-level dataflow system that aims at a sweet spot between SQL and Map-Reduce, and performance comparisons between Pig execution and raw Map-Reduce execution are reported.
Abstract: Increasingly, organizations capture, transform and analyze enormous data sets. Prominent examples include internet companies and e-science. The Map-Reduce scalable dataflow paradigm has become popular for these applications. Its simple, explicit dataflow programming model is favored by some over the traditional high-level declarative approach: SQL. On the other hand, the extreme simplicity of Map-Reduce leads to much low-level hacking to deal with the many-step, branching dataflows that arise in practice. Moreover, users must repeatedly code standard operations such as join by hand. These practices waste time, introduce bugs, harm readability, and impede optimizations. Pig is a high-level dataflow system that aims at a sweet spot between SQL and Map-Reduce. Pig offers SQL-style high-level data manipulation constructs, which can be assembled in an explicit dataflow and interleaved with custom Map- and Reduce-style functions or executables. Pig programs are compiled into sequences of Map-Reduce jobs, and executed in the Hadoop Map-Reduce environment. Both Pig and Hadoop are open-source projects administered by the Apache Software Foundation. This paper describes the challenges we faced in developing Pig, and reports performance comparisons between Pig execution and raw Map-Reduce execution.

452 citations
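The abstract's complaint that raw Map-Reduce forces users to hand-code standard operations such as grouping and aggregation can be made concrete. A minimal sketch in plain Python, assuming in-memory lists stand in for distributed data; the records and the filter threshold are invented for illustration. Pig would express the same dataflow in a few high-level operators (FILTER, GROUP, FOREACH ... GENERATE) and compile them down to jobs of this shape.

```python
from collections import defaultdict

# Hypothetical (category, pagerank) records.
records = [("news", 90), ("news", 30), ("sports", 40), ("sports", 10)]

# Map step: filter and emit (key, value) pairs by hand.
mapped = [(cat, rank) for cat, rank in records if rank > 20]

# Shuffle step: group values by key, also by hand.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce step: aggregate each group (here, the average rank).
result = {key: sum(vals) / len(vals) for key, vals in groups.items()}
print(result)  # {'news': 60.0, 'sports': 40.0}
```

Each of the three steps is boilerplate a Pig operator would replace, which is the "sweet spot" the paper claims between declarative SQL and hand-written Map-Reduce.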

Proceedings ArticleDOI
08 Dec 1995
TL;DR: This paper presents the design, implementation and application of SCIRun, a scientific programming environment that allows the interactive construction, debugging and steering of large scale scientific computations, and identifies ways to avoid the excessive memory use inherent in standard dataflow implementations.
Abstract: We present the design, implementation and application of SCIRun, a scientific programming environment that allows the interactive construction, debugging and steering of large scale scientific computations. Using this "computational workbench," a scientist can design and modify simulations interactively via a dataflow programming model. SCIRun enables scientists to design and modify models and automatically change parameters and boundary conditions as well as the mesh discretization level needed for an accurate numerical solution. As opposed to the typical "off-line" simulation mode, in which the scientist manually sets input parameters, computes results, visualizes the results via a separate visualization package, then starts again at the beginning, SCIRun "closes the loop" and allows interactive steering of the design and computation phases of the simulation. To make the dataflow programming paradigm applicable to large scientific problems, we have identified ways to avoid the excessive memory use inherent in standard dataflow implementations, and have implemented fine-grained dataflow in order to further promote computational efficiency. In this paper, we describe applications of the SCIRun system to several problems in computational medicine. In addition, we have included an interactive demo program in the form of an application of the SCIRun system to a small electrostatic field problem.

320 citations

Book
01 Jan 1985
TL;DR: Lucid is a functional language, but one which supports variables, i.e. values which change with time, so programmers can use iteration (repetition) as well as recursion.
Abstract: Lucid is a functional language, but one which supports variables, i.e. values which change with time. Lucid programmers can therefore use iteration (repetition) as well as recursion. The statements of a Lucid program are equations which can be thought of as defining a network of processors and communication lines. Lucid's dataflow approach to programming has more in common with that of the UNIX (TM) shell, with its filters and pipelines, than with the imperative, one-step-at-a-time approach of a conventional language like C or Pascal.

306 citations
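Lucid's central idea, that a variable denotes a stream of values changing with time, defined by equations rather than assignments, can be sketched with lazy streams. A minimal sketch, assuming Python generators model Lucid streams; `fby` ("followed by") is a real Lucid operator, while the helper names and the thunk-based encoding are illustrative choices, not Lucid itself.

```python
import itertools

def fby(first, rest_fn):
    # Lucid's "followed by": the stream starting with `first`,
    # continuing with the rest. `rest_fn` is a thunk so that a
    # recursive stream equation stays lazy.
    yield first
    yield from rest_fn()

def nats():
    # Lucid equation:  n = 0 fby n + 1
    return fby(0, lambda: (n + 1 for n in nats()))

def running_sum(xs):
    # Iteration expressed as a stream of partial sums,
    # rather than as updates to a mutable variable.
    total = 0
    for x in xs:
        total += x
        yield total

print(list(itertools.islice(nats(), 5)))               # [0, 1, 2, 3, 4]
print(list(itertools.islice(running_sum(nats()), 5)))  # [0, 1, 3, 6, 10]
```

The equational reading matches the abstract's point: each definition can be seen as a node in a network of processors connected by communication lines, much like a UNIX pipeline.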


Network Information
Related Topics (5)
- Scalability: 50.9K papers, 931.6K citations (82% related)
- Cache: 59.1K papers, 976.6K citations (81% related)
- Virtual machine: 43.9K papers, 718.3K citations (79% related)
- Server: 79.5K papers, 1.4M citations (78% related)
- Graph (abstract data type): 69.9K papers, 1.2M citations (76% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2021  16
2020  13
2019  20
2018  23
2017  28
2016  26