
Showing papers on "Data flow diagram published in 1989"


Proceedings ArticleDOI
Ron Cytron1, Jeanne Ferrante1, Barry K. Rosen1, Mark N. Wegman1, F. K. Zadeck2 
03 Jan 1989
TL;DR: This paper presents strong evidence that static single assignment form and the control dependence graph can be of practical use in optimization, and presents a new algorithm that efficiently computes these data structures for arbitrary control flow graphs.
Abstract: In optimizing compilers, data structure choices directly influence the power and efficiency of practical program optimization. A poor choice of data structure can inhibit optimization or slow compilation to the point where advanced optimization features become undesirable. Recently, static single assignment form and the control dependence graph have been proposed to represent data flow and control flow properties of programs. Each of these previously unrelated techniques lends efficiency and power to a useful class of program optimizations. Although both of these structures are attractive, the difficulty of their construction and their potential size have discouraged their use. We present a new algorithm that efficiently computes these data structures for arbitrary control flow graphs. We also give analytical and experimental evidence that they are usually linear in the size of the original program. This paper thus presents strong evidence that these structures can be of practical use in optimization.
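A minimal sketch of the central construction in this paper: place phi-functions for a variable at the iterated dominance frontier of its definition sites. This uses the simple iterative formulations, not the paper's efficient algorithm, and the example CFG and block names below are invented for illustration.

```python
# Phi-placement via dominance frontiers (sketch of the Cytron et al. idea).

def dominators(cfg, entry):
    """Iteratively compute the dominator set of every block."""
    dom = {n: set(cfg) for n in cfg}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in cfg:
            if n == entry:
                continue
            preds = [p for p in cfg if n in cfg[p]]
            new = {n} | set.intersection(*(dom[p] for p in preds))
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

def idoms(dom, entry):
    """Immediate dominator: the strict dominator dominated by all the others."""
    idom = {}
    for n, ds in dom.items():
        if n == entry:
            continue
        strict = ds - {n}
        idom[n] = next(d for d in strict if strict - {d} <= dom[d])
    return idom

def dominance_frontier(cfg, idom):
    """DF(n): blocks y where n dominates a predecessor of y but not y itself."""
    df = {n: set() for n in cfg}
    for y in cfg:
        preds = [p for p in cfg if y in cfg[p]]
        if len(preds) < 2:
            continue  # only join points appear in frontiers
        for p in preds:
            runner = p
            while runner != idom[y]:    # walk up the dominator tree
                df[runner].add(y)
                runner = idom[runner]
    return df

def phi_blocks(df, def_sites):
    """Iterated dominance frontier: where phi-functions for a variable go."""
    work, placed = list(def_sites), set()
    while work:
        n = work.pop()
        for y in df[n]:
            if y not in placed:
                placed.add(y)
                work.append(y)          # a phi is itself a new definition
    return placed
```

For a diamond CFG where the variable is assigned in both branch blocks, the single phi lands at the join block, as expected.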

493 citations


Book
03 Jan 1989
TL;DR: In this paper, the authors develop the theory necessary to statically schedule large grain data flow (LGDF) programs on single or multiple processors, and present a class of static (compile time) scheduling algorithms.
Abstract: Large grain data flow (LGDF) programming is natural and convenient for describing digital signal processing (DSP) systems, but its runtime overhead is costly in real time or cost-sensitive applications. In some situations, designers are not willing to squander computing resources for the sake of programmer convenience. This is particularly true when the target machine is a programmable DSP chip. However, the runtime overhead inherent in most LGDF implementations is not required for most signal processing systems because such systems are mostly synchronous (in the DSP sense). Synchronous data flow (SDF) differs from traditional data flow in that the amount of data produced and consumed by a data flow node is specified a priori for each input and output. This is equivalent to specifying the relative sample rates in a signal processing system. This means that the scheduling of SDF nodes need not be done at runtime, but can be done at compile time (statically), so the runtime overhead evaporates. The sample rates can all be different, which is not true of most current data-driven digital signal processing programming methodologies. Synchronous data flow is closely related to computation graphs, a special case of Petri nets. This self-contained paper develops the theory necessary to statically schedule SDF programs on single or multiple processors. A class of static (compile time) scheduling algorithms is proven valid, and specific algorithms are given for scheduling SDF systems onto single or multiple processors.
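The key prerequisite for static SDF scheduling is the repetition vector: the smallest positive integer firing counts q that balance token production and consumption on every arc. A sketch of that computation, assuming a connected, rate-consistent graph (the three-node chain is an invented example, not from the paper):

```python
# Solve the SDF balance equations q[src]*produced == q[dst]*consumed.
from fractions import Fraction
from math import lcm

def repetition_vector(edges):
    """edges: (src, dst, produced, consumed) token rates per firing.
    Returns the smallest positive integer firing counts per node."""
    q = {edges[0][0]: Fraction(1)}      # pin one node's rate, propagate
    changed = True
    while changed:
        changed = False
        for src, dst, p, c in edges:
            if src in q and dst not in q:
                q[dst], changed = q[src] * p / c, True
            elif dst in q and src not in q:
                q[src], changed = q[dst] * c / p, True
    for src, dst, p, c in edges:        # inconsistent rates -> no schedule
        assert q[src] * p == q[dst] * c, "sample-rate inconsistency"
    scale = lcm(*(f.denominator for f in q.values()))
    return {n: int(f * scale) for n, f in q.items()}
```

With A producing 2 tokens per firing consumed 3-at-a-time by B, and B feeding C at rates 1:2, the minimal periodic schedule fires A three times, B twice, and C once.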

444 citations


Journal ArticleDOI
TL;DR: Simulated-annealing-based algorithms are presented which provide excellent solutions to the entire allocation process, namely register, arithmetic unit, and interconnect allocation, while effectively exploring the existing tradeoffs in the design space.
Abstract: Novel algorithms for the simultaneous cost/resource-constrained allocation of registers, arithmetic units, and interconnect in a data path have been developed. The entire allocation process can be formulated as a two-dimensional placement problem of microinstructions in space and time. This formulation readily lends itself to the use of a variety of heuristics for solving the allocation problem. The authors present simulated-annealing-based algorithms which provide excellent solutions to this formulation of the allocation problem. These algorithms operate under a variety of user-specifiable constraints on hardware resources and costs. They also incorporate conditional resource sharing and simultaneously address all aspects of the allocation problem, namely register, arithmetic unit, and interconnect allocation, while effectively exploring the existing tradeoffs in the design space.

250 citations


Dissertation
01 Aug 1989
TL;DR: A new channel scheduling method called the Virtual Clock mechanism is developed to regulate packet flows in the Flow Network, and provides firewalls among flows, as in a TDM system, but with the statistical multiplexing advantages of packet switching.
Abstract: This dissertation presents a new architecture, the Flow Network, for packet switching network protocols. The Flow Network can provide users high-quality, guaranteed service in terms of average latency and throughput. Rather than an end-point control with a stateless network model, the Flow Network design emphasizes regulation of packet traffic by the network. Rather than window flow control, the Flow Network controls the average transmission rate of individual users. Rather than relying on feedback control, the Flow Network requires users to reserve resources. An abstract entity, a flow, is defined to represent users' data transmission requests. A flow is associated with a specific set of service requirements, which allows applications to express their requirements in a quantitative manner. This specification enables the network to check whether adequate resources are available before accepting new transmission requests. It also serves as a contract between the network and the user: it is used as a measure that the network service should meet, as well as a constraint that the user's transmission behavior must adhere to. A new channel scheduling method called the Virtual Clock mechanism is developed to regulate packet flows in the Flow Network. The VirtualClock mechanism monitors the average transmission rate of each statistical data flow and provides firewalls among flows, as in a TDM system, but with the statistical multiplexing advantages of packet switching. Simulation is used as a design aid and a verification tool throughout this research.
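A minimal sketch of the VirtualClock stamping rule described above: each flow reserves an average rate, an arriving packet is stamped max(real time, flow's virtual clock) + size/rate, and packets leave in increasing stamp order. The flow names, rates, and class shape are invented for illustration, not taken from the dissertation.

```python
# Toy VirtualClock scheduler: per-flow clocks isolate flows from each other.
import heapq

class VirtualClock:
    def __init__(self):
        self.vclock = {}   # per-flow virtual clock
        self.queue = []    # (stamp, seq, flow) min-heap
        self.seq = 0       # tie-breaker for equal stamps

    def arrive(self, flow, rate, size, now):
        """Stamp an arriving packet with the flow's advanced virtual clock."""
        stamp = max(self.vclock.get(flow, 0.0), now) + size / rate
        self.vclock[flow] = stamp
        heapq.heappush(self.queue, (stamp, self.seq, flow))
        self.seq += 1

    def send(self):
        """Transmit the queued packet with the smallest stamp."""
        _, _, flow = heapq.heappop(self.queue)
        return flow
```

A flow that reserved twice the rate advances its clock half as fast per packet, so its packets are interleaved ahead of a slower flow's backlog, which is the firewall behavior the abstract describes.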

179 citations


Patent
06 Jul 1989
TL;DR: In this paper, a data analysis program enables interactive computer controlled data analysis, including displaying a trend chart depicting a sequence of data points, each data point representing at least a portion of the measurement data collected and stored while running a selected process.
Abstract: A method of controlling a process using a programmed digital computer with a set of process control programs. An operator control program allows the user to select and run a specified process and to collect measurement data while the selected process is run. A data analysis program enables interactive computer controlled data analysis, including displaying a trend chart depicting a sequence of data points, each data point representing at least a portion of the measurement data collected and stored while running a selected process. A selectably positionable pointer is displayed on the trend chart for pointing at an individual data point so that the user can select and perform a predefined task on the measurement data stored in the data structure corresponding to the data point being pointed at by said selectably positionable pointer.

172 citations


Patent
12 Jul 1989
TL;DR: In this paper, a method for programming a computer to execute a procedure, based on a graphical interface which utilizes data flow diagrams to represent the procedure, is presented. The method stores a plurality of executable functions, scheduling functions, and data types.
Abstract: A method for programming a computer to execute a procedure, is based on a graphical interface which utilizes data flow diagrams to represent the procedure. The method stores a plurality of executable functions, scheduling functions, and data types. A data flow diagram is assembled in response to the user input utilizing icons which correspond to the respective executable functions, scheduling functions, and data types which are interconnected by arcs on the screen. A panel, representative of an instrument front panel having input and output formats is likewise assembled for the data flow diagram. An executable program is generated in response to the data flow diagram and the panel utilizing the executable functions, scheduling functions, and data types stored in the memory. Furthermore, the executable functions may include user defined functions that have been generated using the method for programming. In this manner, a hierarchy of procedures is implemented, each represented by a data flow diagram.
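The execution model behind this style of graphical programming can be sketched as a tiny data flow interpreter: a node fires once every one of its input ports holds a token, and its output propagates along the wires. The node functions and wiring below are an invented example, not the patent's design.

```python
# Toy data-flow-diagram interpreter: fire nodes whose inputs are complete.
from collections import deque

def run_dataflow(funcs, wires):
    """funcs: node -> (arity, fn); wires: (src, dst, input_port)."""
    inputs = {n: {} for n in funcs}          # tokens waiting at each node
    fanout = {n: [] for n in funcs}
    for src, dst, port in wires:
        fanout[src].append((dst, port))
    results = {}
    ready = deque(n for n, (arity, _) in funcs.items() if arity == 0)
    while ready:
        n = ready.popleft()
        arity, fn = funcs[n]
        results[n] = fn(*(inputs[n][p] for p in range(arity)))
        for dst, port in fanout[n]:          # propagate the output token
            inputs[dst][port] = results[n]
            if len(inputs[dst]) == funcs[dst][0]:
                ready.append(dst)            # all inputs present: fire
    return results
```

Constant nodes (arity 0) seed the computation, mirroring how controls on a diagram's front panel feed the wired functions.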

163 citations


Patent
28 Mar 1989
TL;DR: In this article, a display terminal for conversation is used to create a module structure diagram (schemata expressive of the connectional relations among respective program modules) and a processing flow diagram (a kind of processing flow chart), an internal data definition diagram (schemata for specifying the formats etc. of data for use in processes).
Abstract: According to the present invention, using a display terminal for conversation, a module structure diagram (schemata expressive of the connectional relations among respective program modules) is created, and a processing flow diagram (a kind of processing flow chart), an internal data definition diagram (schemata for specifying the formats etc. of data for use in processes) and an interface data definition diagram (schemata for specifying the formats etc. of arguments, common data between the modules, etc.) are created for each module, the created contents being stored in a memory. Further, the schematic information items of the module structure diagram, processing flow diagram, internal data definition diagram and interface data definition diagram are read out from the memory for each module and have stereotyped sentences and symbols added thereto, to generate the individual sentences of a source program. These sentences are edited according to the rules of a language, to complete the source program. If necessary, the various diagrams are printed out and utilized as program specifications.

137 citations


Proceedings ArticleDOI
TL;DR: The utility of data flow testing is extended to include the testing of data dependencies that exist across procedure boundaries, and a technique to guide the selection and execution of test cases that takes into account the various associations of names with definitions and uses across procedures is presented.
Abstract: As current trends in programming encourage a high degree of modularity, the number of procedure calls and returns executed in a module continues to grow. This increase in procedures mandates the efficient testing of the interactions among procedures. In this paper, we extend the utility of data flow testing to include the testing of data dependencies that exist across procedure boundaries. An interprocedural data flow analysis algorithm is first presented that enables the efficient computation of information detailing the locations of definitions and uses needed by an interprocedural data flow tester. To utilize this information, a technique to guide the selection and execution of test cases that takes into account the various associations of names with definitions and uses across procedures is also presented. The resulting interprocedural data flow tester handles global variables, reference parameters, and recursive procedure calls, and is compatible with the current intraprocedural data flow testing techniques. The testing tool has been implemented on a Sun 3/50 Workstation.

120 citations


Patent
29 Aug 1989
TL;DR: In this paper, a flowgraph system controls and tracks computer programs and data sets for a computer-aided design (CAD) task and provides the user with an indication of data flow and the progress of the CAD task.
Abstract: A flowgraph system controls and tracks computer programs and data sets for a computer-aided design (CAD) task. The programs in the CAD task and their respective data set requirements are visually displayed as a flowgraph with which the user interacts to select input data sets and initiate program executions. The flowgraph provides the user with an indication of data flow and the progress of the CAD task.

98 citations


Proceedings ArticleDOI
01 Dec 1989
TL;DR: The main algorithm idea is to factor the data flow solution on strongly connected components of the flow graph into local and external parts, solving for the local parts by iteration and propagating these effects on the condensation of the flow graph to obtain the entire data flow solution.
Abstract: Our exhaustive and incremental hybrid data flow analysis algorithms, based on iteration and elimination techniques, are designed for incremental update of a wide variety of monotone data flow problems in response to source program changes. Unlike previous incremental iterative methods, this incremental algorithm efficiently computes precise and correct solutions. We give theoretical results on the imprecision of restarting iteration for incremental update by fixed point iteration which provided motivation for our algorithm design. Described intuitively, the main algorithm idea is to factor the data flow solution on strongly connected components of the flow graph into local and external parts, solving for the local parts by iteration and propagating these effects on the condensation of the flow graph to obtain the entire data flow solution. The incremental hybrid algorithm re-performs those algorithm steps affected by the program changes.
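The exhaustive half of the hybrid idea can be sketched concretely: find the strongly connected components of the flow graph, visit the (acyclic) condensation in topological order, and iterate only within each component. The gen/kill sets and four-block reaching-definitions example are invented for illustration; the incremental-update machinery is not shown.

```python
# Factored data flow solve: iterate per SCC, propagate across the condensation.

def sccs(graph):
    """Tarjan's algorithm; components come out in reverse topological order."""
    index, low, stack, on, comps = {}, {}, [], set(), []
    count = [0]
    def dfs(v):
        index[v] = low[v] = count[0]; count[0] += 1
        stack.append(v); on.add(v)
        for w in graph[v]:
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = []
            while True:
                w = stack.pop(); on.discard(w); comp.append(w)
                if w == v:
                    break
            comps.append(comp)
    for v in graph:
        if v not in index:
            dfs(v)
    return comps

def reaching_definitions(cfg, gen, kill):
    """Forward gen/kill solve, component by component in topological order."""
    preds = {n: [] for n in cfg}
    for p in cfg:
        for s in cfg[p]:
            preds[s].append(p)
    out = {n: set() for n in cfg}
    for comp in reversed(sccs(cfg)):    # topological order of components
        changed = True
        while changed:                  # iterate only inside this component
            changed = False
            for n in comp:
                incoming = set().union(*(out[p] for p in preds[n]))
                new = gen[n] | (incoming - kill[n])
                if new != out[n]:
                    out[n], changed = new, True
    return out
```

Acyclic components converge in one pass, so repeated iteration is confined to actual loops in the flow graph, which is the source of the method's efficiency.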

91 citations


Patent
18 Jul 1989
TL;DR: In this paper, a data flow computer and method of computing is described, which utilizes a data driven processor node architecture, including a plurality of First-In-First-Out (FIFO) registers, data flow memories, and a processor.
Abstract: A data flow computer and method of computing is disclosed which utilizes a data driven processor node architecture. The apparatus in a preferred embodiment includes a plurality of First-In-First-Out (FIFO) registers, a plurality of related data flow memories, and a processor. The processor makes the necessary calculations and includes a control unit to generate signals to enable the appropriate FIFO register receiving the result. In a particular embodiment, there are three FIFO registers per node: an input FIFO register to receive input information from an outside source and provide it to the data flow memories; an output FIFO register to provide output information from the processor to an outside recipient; and an internal FIFO register to provide information from the processor back to the data flow memories. The data flow memories are comprised of four commonly addressed memories. A parameter memory holds the A and B parameters used in the calculations; an opcode memory holds the instruction; a target memory holds the output address; and a tag memory contains status bits for each parameter. One status bit indicates whether the corresponding parameter is in the parameter memory, and one status bit indicates whether the stored information in the corresponding data parameter is to be reused. The tag memory outputs a 'fire' signal (signal R VALID) when all of the necessary information has been stored in the data flow memories, and thus when the instruction is ready to be fired to the processor.

01 Jan 1989
TL;DR: This dissertation shows that applicative programming is a powerful approach to programming parallel computers as long as compilers support at least the optimizations of the SISAL compiler.
Abstract: The importance of parallel processing in the computational community is increasing. The difficulties of programming parallel processors, however, have thwarted their exploitation. Two approaches are receiving attention as possible solutions: supercompilers for extant languages, and new languages. In the latter area, researchers have produced several applicative languages for parallel processing. In the applicative model, only data dependencies constrain evaluation order, so many operations can execute simultaneously if hardware is available. Unfortunately, preserving applicative semantics has required implementations to copy data when deriving one value from another, and in the presence of large arrays, copy costs can become prohibitively expensive. In addition to copying, applicative programs suffer from the same inefficiencies as their imperative counterparts. This dissertation discusses several compilation techniques for high performance parallel applicative computing, with emphasis on update-in-place. All the algorithms take data flow graphs as input and produce improved data flow graphs as output. We have implemented them for SISAL, an applicative language for parallel numerical computation, with encouraging results. Most programs, including those manipulating two-dimensional arrays, run in-place after optimization. Further, they achieve execution times competitive with FORTRAN, C, and Pascal on one processor, and good parallel efficiency when more than one processor contributes to execution. This dissertation shows that applicative programming is a powerful approach to programming parallel computers as long as compilers support at least the optimizations of our SISAL compiler.

Journal ArticleDOI
Yuan F. Zheng1
01 Oct 1989
TL;DR: The author proposes a systematic approach to integrating multiple sensors into a robotic system using the concept of logical sensors: logical sensors are treated as object modules, and interobject communication becomes an effective method for the data flow required by the integration.
Abstract: The author proposes a systematic approach to integrating multiple sensors into a robotic system. It is shown that the robot motion control mechanism has a hierarchical structure consisting of multiple layers. The integration of multiple sensors should not disturb the structure, but it should enhance the intelligence of each activity. Therefore, multiple sensors can be hierarchically integrated into an existing system. To make the integration feasible, the author adopts the concept of logical sensors and treats logical sensors as object modules. By using object-oriented programming, integration becomes a modular procedure and interobject communication becomes an effective method of data flow required by the integration. The author also proposes an objective method for evaluating the performance of integration. The benefit of the integration is measured by how the intelligence of the robotic system is enhanced. The cost of the integration is measured by a cost function and a loss function. The former is related to the sensor time; the latter is affected by sensor uncertainty.

Journal ArticleDOI
TL;DR: A proposal for formalizing data flow diagrams through extended Petri nets is described, illustrating the usefulness of the approach by describing how it can be used to analyse the consistency of requirements specifications.
Abstract: In this paper, we describe a proposal for formalizing data flow diagrams through extended Petri nets. We illustrate the usefulness of the approach by describing how it can be used to analyse the consistency of requirements specifications.

Patent
01 Jun 1989
TL;DR: In this paper, a tag data renewing unit in a data flow-computer is proposed to reduce the order relation for tokens with respect to first-in/first-out, which must be kept at respective points in a conventional data flow computer.
Abstract: By providing a tag data renewing unit in a data flow computer, the "delay" function, which is necessary for a digital filter, etc., can be realized, and it becomes unnecessary to keep the first-in/first-out order relation for tokens, which must be kept at respective points in a conventional data flow computer; thereby the architecture of a compiler can be simplified and at the same time the execution time can be shortened.

Proceedings ArticleDOI
03 Jan 1989
TL;DR: The Object-Oriented Software Development Method (OOSD) includes object- oriented requirements analysis, as well as object-oriented design, which focuses on the objects of a problem throughout development.
Abstract: The Object-Oriented Software Development Method (OOSD) includes object-oriented requirements analysis, as well as object-oriented design. OOSD is a practical method of developing a software system which focuses on the objects of a problem throughout development. OOSD's focus on objects early in the development, with attention to generating a useful model, creates a picture of the system that is modifiable, reusable, reliable, and understandable — the last perhaps most important because the picture of a software system created by a development method must be an effective device for communication among developers, customers, management, and quality-assurance personnel.Most object-oriented methods competing for the attention of the software developer actually apply traditional Structured Analysis (function-based), or variations of Structured Analysis, to requirements activity, and work through a transition process to an object-oriented design [1,2,7,10,11]. In these methods the developer begins with functionally-based requirements analysis, and only reaches an object-oriented design by the intermediary step of converting a traditional, functionally-decomposed data flow diagram (DFD) to an object-oriented DFD (or equivalent). In this conversion process, objects are identified through a set of heuristics which group “transformations” in the DFD generated during requirements analysis. These methods carry a number of interesting but unfortunate burdens. Lower-level objects, which directly relate to real-world objects, are easily identified, but higher-level objects are generally more arbitrary, so that developers do not consistently identify a hierarchy of objects which achieves significant improvement in software engineering goals (e.g., reliability, maintainability, reusability). The heuristics for identifying objects usually relate the DFD transforms to the object that controls execution of an operation, rather than the object which “owns” the operation. 
These methods generally ignore the need to convert behavior descriptions of the DFD transforms into behavior descriptions of the objects. Finally, the use of Structured Analysis in an otherwise object-oriented approach complicates the tracing of requirements by forcing the developer to look first to DFD transforms and their behavior descriptions, and then to the objects.

Patent
30 Jun 1989
TL;DR: In this article, an apparatus for debugging a data-flow program simulates functions corresponding to a plurality of structural portions of a processing apparatus in accordance with a data flow program stored in a program file.
Abstract: An apparatus for debugging a data flow program simulates functions corresponding to a plurality of structural portions of a processing apparatus in accordance with a data flow program stored in a program file. The states of execution corresponding to the respective lines of the source program for the executed functional portions are stored in a PS file, and the processes in the respective functional portions, together with the states of execution employed during execution of the source program, are stored in a data packet file. A debug information file (22) stores debug information indicating the correspondence between the respective lines of the source program and the respective nodes of the data flow program. A display presents debug information in association with the respective lines of the source program in accordance with the execution of the respective nodes of the data flow program.

Journal ArticleDOI
TL;DR: The authors discuss interprocessor communication in synchronous multiprocessor DSP (digital signal processing) chips, the types of systems that are synthesized by the Cathedral II silicon compiler, and a model for the data flow between two processors that leads to the definition of 'once in, once out' communication.
Abstract: The authors discuss interprocessor communication in synchronous multiprocessor DSP (digital signal processing) chips, the types of systems that are synthesized by the Cathedral II silicon compiler. A model for the data flow between two processors is presented. A number of architectural possibilities are discussed. Key concepts are a double-buffered memory cell and an extended method of pointer addressing. This method leads to the definition of 'once in, once out' communication, as opposed to conventional FIFO (first in, first out) buffering. The minimization of the buffer size by skewing the operation of the processors is worked out for specific important types of communication. The proposed techniques have been implemented in a synthesis tool which is part of Cathedral II. The practical significance of the work is illustrated with several examples.

Patent
23 Jan 1989
TL;DR: In this article, a data flow type information processor includes a program storing portion, a data pair producing portion and a processing portion, and a function for synchronizing with all of loop variables.
Abstract: A data flow type information processor includes a program storing portion, a data pair producing portion and a processing portion. In the data flow type information processor, in executing a data flow program having a loop structure, a function for synchronizing with all of the loop variables, that is, a function for assuring that the values of all of the loop variables are determined in the loop execution stage under consideration, is applied to a group of instruction information for determining a loop termination.

Journal ArticleDOI
01 Nov 1989
TL;DR: A general design of an integrated total quality information system involving the Quality Function Deployment process is proposed, and a data flow diagram is used to illustrate the structure of the information system.
Abstract: A general design of an integrated total quality information system involving the Quality Function Deployment process is proposed in this paper. A data flow diagram is used to illustrate the structure of the information system. Within it, the Quality Function Deployment process is especially discussed in detail.

01 Jan 1989
TL;DR: This work examines first data flow frameworks and then incremental algorithms in detail, and determines precise conditions responsible for the inefficiency of existing incremental iterative data flow algorithms, and proposes a class of algorithms, which are called hybrid algorithms, based on the strong component decomposition of the flow graph and a corresponding factorization of the lattice.
Abstract: Data flow is a means of formulating questions about the flow of data in some computer program. Data flow frameworks are algebraic structures encoding data flow problems, in which those problems can be solved. Solutions to framework instances are used for optimization, for debugging and testing, for verification and for parallelization. Solving data flow problems, particularly for large systems, can take a very long time. Rather than recompute the entire solution after each small change, one would like to update data flow information in a fast, incremental way. We give the results of a thorough study of this problem. We examine first data flow frameworks and then incremental algorithms in detail, and determine precise conditions responsible for the inefficiency of existing incremental iterative data flow algorithms. We then propose a class of algorithms, which we call hybrid algorithms, based on the strong component decomposition of the flow graph and a corresponding factorization of the lattice, which with reasonable assumptions on the underlying problem and class of edits is incrementally efficient. We characterize the factorable problems, and show that they include most but not all of the standard data flow problems. We consider whether a class of hybrid algorithms could be based on other flow graph decompositions. Finally, we show that hybrid algorithms lead naturally to parallel algorithms for computing data flow information.

Proceedings Article
01 Dec 1989
TL;DR: It is shown that recursions or loops in the programs lead to an inherent lower bound on the achievable iteration period, referred to as the iteration bound, and that unfolding any program by an optimum unfolding factor transforms any arbitrary program to an equivalent perfect-rate program, which can then be scheduled rate optimally.
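The iteration bound named in this summary is the maximum, over all cycles of the data flow graph, of the cycle's total computation time divided by its number of delays (registers). A brute-force sketch, adequate for tiny graphs; the two-node example and its node times are invented, and the paper's unfolding and rate-optimal scheduling are not shown.

```python
# Iteration bound T_inf = max over cycles of (cycle compute time / cycle delays).

def iteration_bound(node_time, edges):
    """node_time: node -> computation time; edges: (src, dst, delays)."""
    adj = {}
    for u, v, d in edges:
        adj.setdefault(u, []).append((v, d))
    best = 0.0
    def dfs(start, node, visited, time, delays):
        nonlocal best
        for v, d in adj.get(node, []):
            if v == start and delays + d > 0:   # closed a simple cycle
                best = max(best, time / (delays + d))
            elif v != start and v not in visited:
                dfs(start, v, visited | {v}, time + node_time[v], delays + d)
    for n in node_time:                          # try each node as cycle root
        dfs(n, n, {n}, node_time[n], 0)
    return best
```

A zero-delay cycle is skipped by the guard; in a real graph it would mean deadlock rather than a finite bound. No schedule, however unfolded, can achieve an iteration period below this value.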

01 Jan 1989
TL;DR: This thesis presents the results of a rigorous investigation of the significance of program dependences for software testing, debugging, and maintenance, and clarifies the concept of control dependence, which is described somewhat vaguely in the literature.
Abstract: Program dependences are syntactic relationships between program statements, which are used in several areas of computer science to obtain "approximate" information about semantic relationships between statements. There are two basic types of program dependences: control dependences, which are features of a program's control structure, and data flow dependences, which are features of the placement of variable definitions and uses in a program. Typically, proposed uses of program dependences have been justified only informally, if at all. Since program dependences are used for such critical purposes as software testing, debugging, and maintenance, code optimization and parallelization, and computer security, this lack of rigor is unacceptable. This thesis presents the results of a rigorous investigation of the significance of program dependences for software testing, debugging, and maintenance. A concept called "semantic dependence" is defined, which is a necessary condition for certain kinds of faults and modifications in a program statement to affect the execution behavior of other statements. Semantic dependence is defined by giving a semantics to a graph-theoretic program representation used to find program dependences. It is shown that a certain generalization of control and data flow dependence, called "weak syntactic dependence", is a necessary condition for semantic dependence. It is shown that a commonly used generalization of control and data flow dependence, which we call "strong syntactic dependence", is not a necessary condition for semantic dependence, although it is a necessary condition for a restricted form, called "finitely demonstrated semantic dependence". It is shown that neither weak syntactic, strong syntactic, nor data flow dependence is a sufficient condition for semantic dependence. Together, these results allow a better evaluation of the soundness of some proposed uses of program dependences in testing, debugging, and maintenance.
The results support some uses, controvert others, and suggest new ones. Finally, this thesis clarifies the concept of control dependence, which is described somewhat vaguely in the literature, via the results described above and via several graph theoretic characterizations of control dependence.

Journal ArticleDOI
J. Barlow1, B. Franek1, M. Jonker1, T. Nguyen1, P.V. Vyrre1, A. Vascotto1, P. Vande Vyvre 
TL;DR: The MODEL software is a set of modules for online applications, running principally on VAX-family computers, that provides data flow, human interface, process control, and error-reporting facilities.
Abstract: The MODEL software is a set of modules for online applications, running principally on VAX-family computers. It provides data flow, human interface, process control, and error-reporting facilities. Recently, facilities have been developed to tackle the complex problem of controlling the various activities that constitute the data-acquisition system of a large physics experiment. The approach adopted is based on a state manager. The physicist describes the experiment in terms of objects, i.e. logical subsystems, for each of which a number of states are defined. Commands can be sent to these objects, causing them to perform actions and to change state. The complete description of all objects, states, and actions, in a simple language, is used to generate a state manager for the experiment, which runs as a VMS process. The concepts embodied in the state manager, its generation, its interactions with other processes, and the system configuration it supports are examined.

Journal ArticleDOI
Guang R. Gao1
TL;DR: It is shown that the optimal balancing for acyclic connected data flow graphs generated from a data flow language can be formulated into certain linear programming problems which have efficient algorithmic solutions.

Patent
23 Jan 1989
TL;DR: A data packet circulates through the program storing portion, the data pair producing portion, and the operation processing portion in that order, so that operation processing based on the data flow program stored in the program storing portion progresses.
Abstract: A data packet circulates through the program storing portion, the data pair producing portion, and the operation processing portion in that order, so that operation processing based on the data flow program stored in the program storing portion progresses. Priority information is applied in advance to the data flow program stored in the program storing portion. If a hash collision occurs in the data pair producing portion, that portion determines the order of data pair production in accordance with the priority information, so that data pair production is performed first for the data packet having the higher priority.
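The pairing step can be sketched with a priority queue standing in for the patent's hardware (the packet format and tie-breaking here are our own assumptions, not the patented circuitry): packets carry a destination tag and a priority, a packet waits until its partner with the same tag arrives, and when packets compete the higher-priority one is serviced first:

```python
import heapq

def pair_packets(packets):
    """packets: iterable of (priority, tag, value) with a lower number
    meaning higher priority. Returns (tag, first, second) operand pairs
    in the order pairing fires."""
    heap = list(packets)
    heapq.heapify(heap)  # service higher-priority packets first
    waiting = {}         # tag -> value of the operand waiting for its partner
    pairs = []
    while heap:
        prio, tag, value = heapq.heappop(heap)
        if tag in waiting:
            pairs.append((tag, waiting.pop(tag), value))  # pair produced
        else:
            waiting[tag] = value                          # wait for partner
    return pairs

pairs = pair_packets([(2, "a", 1), (1, "b", 10), (2, "a", 2), (1, "b", 20)])
```

Both operands of tag `"b"` carry priority 1, so that pair is produced before the priority-2 pair for tag `"a"` — the behavior the abstract describes for resolving contention.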

Proceedings ArticleDOI
27 Nov 1989
TL;DR: Real-time Mentat, a programming environment designed to simplify the task of programming real-time applications in distributed and parallel environments, is described; it provides an easy-to-use mechanism to exploit parallelism, language constructs for the expression and enforcement of timing constraints, and run-time support for scheduling and executing real-time programs.
Abstract: Real-time Mentat, a programming environment designed to simplify the task of programming real-time applications in distributed and parallel environments, is described. It is based on the same data-driven computation model and object-oriented programming paradigm as Mentat. It provides an easy-to-use mechanism to exploit parallelism, language constructs for the expression and enforcement of timing constraints, and run-time support for scheduling and executing real-time programs. The Real-time Mentat programming language is an extended C++. The extensions are added to facilitate automatic detection of data flow and generation of data flow graphs, to express the timing constraints of individual granules of computation, and to provide scheduling directives for the runtime system. A high-level view of the Real-time Mentat system architecture and programming language constructs is provided.
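The idea of attaching a timing constraint to an individual granule of computation can be sketched with a decorator. This is our own illustration, not Real-time Mentat's C++ syntax: the granule runs, and the wrapper reports whether it finished within its deadline so a runtime could react:

```python
import time

def deadline(ms):
    """Attach a timing constraint (in milliseconds) to a granule of
    computation; the wrapper returns (result, met_deadline)."""
    def wrap(fn):
        def run(*args, **kwargs):
            start = time.monotonic()
            result = fn(*args, **kwargs)
            elapsed_ms = (time.monotonic() - start) * 1000.0
            return result, elapsed_ms <= ms
        return run
    return wrap

@deadline(ms=50.0)
def granule(x):
    # Stand-in for one granule of computation in a data flow graph.
    return x * x

value, on_time = granule(7)
```

A real system would enforce constraints via the scheduler rather than merely observing them after the fact, but the wrapper shows where the per-granule constraint attaches.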

Patent
09 Oct 1989
TL;DR: In this paper, the authors present a system design tool and a method for use in system design, by which the behaviour of a complex interactive system can be simulated, where the system is represented as a number of interconnected "processes" and each process is provided with an algorithm to model its function and its interconnections (1,2) The processes may be arranged in a layered hierarchical structure (4,6,9) and each processes can have functions such as supplying data from a sensor, relaying data or processing data The algorithms are combined into an overall model of the
Abstract: A system design tool and a method for use in system design, by which the behaviour of a complex interactive system can be simulated. The system is represented as a number of interconnected "processes" (e.g. 7, 8, 14), and each process is provided with an algorithm to model its function and its interconnections (1, 2). The processes may be arranged in a layered hierarchical structure (4, 6, 9), and each process can have functions such as supplying data from a sensor, relaying data or processing data. The algorithms are combined into an overall model of the system, and the system model is supplied with data on initial conditions of the system. The system model (for example, a computer program) is then executed using the data on initial conditions so as to perform a simulation of the system, with additional data being supplied during execution to simulate changes in external conditions affecting the system. The progress of the simulation can be monitored on a display which shows a logic diagram representing at least part of the system, with symbols representing individual processes which change colour, or change in some other way, to show whether the respective process is inactive, transmitting data, processing data or the like. This allows potential bottlenecks or other problem areas in the system to be identified. The system design tool and method can be applied to a wide range of interactive systems, such as transport systems, telecommunications networks, chemical plant and so forth.
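The core of such a simulation — processes as nodes that transform messages and forward them along links, with per-process activity counters exposing potential bottlenecks — can be sketched as follows (the process names and transform function are invented for illustration):

```python
from collections import Counter, deque

def simulate(links, transform, initial):
    """links: process -> list of downstream processes;
    transform: (process, msg) -> new msg, or None if the msg is consumed;
    initial: list of (process, msg) seed events.
    Returns a Counter of how many messages each process handled."""
    queue = deque(initial)
    activity = Counter()
    while queue:
        proc, msg = queue.popleft()
        activity[proc] += 1          # the display would light this process up
        out = transform(proc, msg)
        if out is not None:
            for nxt in links.get(proc, []):
                queue.append((nxt, out))
    return activity

# Two sensors feed one relay, which feeds one processor.
links = {"sensor_a": ["relay"], "sensor_b": ["relay"], "relay": ["processor"]}
activity = simulate(
    links,
    lambda proc, msg: None if proc == "processor" else msg + 1,
    [("sensor_a", 0), ("sensor_b", 0)],
)
```

The relay handles twice as many messages as either sensor, which is the kind of imbalance the patent's display is meant to make visible.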

Journal ArticleDOI
TL;DR: The interconnectivity metric integrates the structural as well as the textual aspects of a program in such a way that the organization of a program can be seen graphically.

Patent
26 Jan 1989
TL;DR: A data flow type information processor includes a program storing portion, a paired data detecting portion, an operation processing portion, an internal data buffer, and an external data memory; internal processing through the data buffer and processing from the external data memory to the exterior are kept on separate paths.
Abstract: A data flow type information processor includes a program storing portion, a paired data detecting portion, an operation processing portion, an internal data buffer, and an external data memory. A data packet processed in the program storing portion, the paired data detecting portion and the operation processing portion is transferred to the internal data buffer. On the other hand, a data packet outputted from the external data memory is transferred to another information processor through a merge portion, a branch portion, another merge portion and another branch portion. Thus, internal processing through the internal data buffer and processing from the external data memory to the exterior are not merged.
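The routing property described — internal packets recirculate through the internal data buffer while packets from the external data memory travel outward, and the two streams are never merged — can be sketched with two separate queues (the packet format and names here are our own, not the patent's structure):

```python
from collections import deque

def route(packets):
    """packets: iterable of (source, payload), where source is
    'internal' (from the processing ring) or 'external' (from the
    external data memory). The two streams stay separate."""
    internal_buffer = deque()  # feeds back into the processing ring
    external_out = deque()     # toward another information processor
    for source, payload in packets:
        if source == "internal":
            internal_buffer.append(payload)
        else:
            external_out.append(payload)
    return internal_buffer, external_out

buf, out = route([("internal", "p1"), ("external", "p2"), ("internal", "p3")])
```

Keeping the streams in distinct queues means external traffic can never delay, or be interleaved with, the internal pipeline — the non-merging behavior the abstract claims.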