scispace - formally typeset
Topic

Pipeline (computing)

About: Pipeline (computing) is a research topic. Over the lifetime, 26,760 publications have been published within this topic, receiving 204,305 citations. The topic is also known as: data pipeline & computational pipeline.


Papers
Patent
17 Jan 1984
TL;DR: In this paper, a digital data processing system is described, including a number of input/output units (12) that communicate with a memory (11) over an input/output bus (30) and through an input/output interface (31).
Abstract: A digital data processing system including a number of input/output units (12) that communicate with a memory (11) over an input/output bus (30) and through an input/output interface (31). The input/output interface (31) pipelines transfers between the input/output units (12) and the memory (11). In the event of an error in the input/output interface's pipeline buffer, it transmits information to the input/output unit (12) that initiated the transfer, to enable it to re-initiate the transfer.

58 citations
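The retry protocol in the patent above can be illustrated with a small simulation. This is a sketch only, under the assumption that the interface signals a pipeline-buffer error back to the initiating unit, which then re-initiates the transfer; the names `transfer_with_retry` and `flaky_send` are hypothetical, not from the patent.

```python
def transfer_with_retry(words, send, max_retries=3):
    """Sketch of the error-recovery behavior: `send` attempts to move
    one word through the pipelined interface and returns True on
    success or False on a pipeline-buffer error, in which case the
    initiating unit re-initiates that transfer."""
    delivered = []
    for w in words:
        for _ in range(1 + max_retries):
            if send(w):
                delivered.append(w)
                break
        else:
            raise IOError(f"transfer of {w!r} failed after retries")
    return delivered

# A flaky interface: the first attempt at each word errors, the retry succeeds.
attempts = {}
def flaky_send(w):
    attempts[w] = attempts.get(w, 0) + 1
    return attempts[w] > 1

assert transfer_with_retry(["w0", "w1"], flaky_send) == ["w0", "w1"]
```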

Journal ArticleDOI
TL;DR: An integrated processor that performs addition and subtraction of 30-bit numbers in the logarithmic number system (LNS) offers 5-MOPS performance in 3-µm CMOS technology, and is implemented in a two-chip set comprising 170K transistors.
Abstract: The authors describe an integrated processor that performs addition and subtraction of 30-bit numbers in the logarithmic number system (LNS). This processor offers 5-MOPS performance in 3-µm CMOS technology, and is implemented in a two-chip set comprising 170K transistors. Two techniques are used to achieve this precision in a moderate circuit area. Linear approximation of the LNS arithmetic functions using logarithmic arithmetic is shown to be simple due to the particular functions involved. A segmented approach to linear approximation minimizes the amount of table space required. Subsequent nonlinear compression of each lookup table leads to a further reduction in table size. The result is a factor-of-285 reduction in table size compared to previous techniques. The circuit area of the implementation is minimized by optimizing the table parameters, using a computer program that accurately models ROM area. The implementation is highly pipelined, and produces one result per clock cycle using a ten-stage pipeline.

58 citations
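LNS addition, the operation the chip above implements, reduces to one evaluation of the addition function s(d) = log2(1 + 2^d), which the paper approximates with segmented, compressed lookup tables. A minimal sketch of the underlying identity, with `math.log2` standing in for the hardware tables (the name `lns_add` is illustrative, not from the paper):

```python
import math

def lns_add(a, b):
    """Add two positive numbers held in LNS form (a = log2 x, b = log2 y).

    Uses the identity log2(x + y) = max(a, b) + s(min - max), where
    s(d) = log2(1 + 2**d); d <= 0, so s maps into [0, 1] and is easy
    to tabulate.  The paper replaces this call with segmented linear
    approximation plus nonlinear table compression.
    """
    hi, lo = max(a, b), min(a, b)
    return hi + math.log2(1.0 + 2.0 ** (lo - hi))

# Sanity check against ordinary arithmetic: 3 + 5 = 8
r = lns_add(math.log2(3.0), math.log2(5.0))
assert abs(2.0 ** r - 8.0) < 1e-9
```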

Proceedings ArticleDOI
01 Sep 1991
TL;DR: The Dynamic Instruction Stream Computer is a novel computer architecture which addresses many of the problems present in real-time systems by dynamically partitioning the processor throughput between multiple instruction streams based upon requirement demands.
Abstract: The Dynamic Instruction Stream Computer is a novel computer architecture which addresses many of the problems present in real-time systems. The DISC operates by allowing multiple instruction streams (ISs), representing different processes, to run concurrently by instruction interleaving on the pipeline. Also, the throughput of the DISC can be partitioned in any way between the multiple ISs. Conventional architectures are more concerned with overall performance and throughput than with real-time response. In other words, they optimize the system for the functions that are more heavily used, without regard to responsiveness to individual requests. Applications abound where a high degree of responsiveness is required without too much sacrifice of overall efficiency. This is particularly true in real-time control applications, where it is important to optimize the critical loops and respond promptly to interrupts. DISC addresses this problem by dynamically partitioning the processor throughput between multiple instruction streams based upon requirement demands. In this way, different tasks and interrupt priorities can be assigned to guarantee their deadlines.

58 citations
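The throughput partitioning the DISC abstract describes can be pictured as a weighted round-robin issue schedule: each stream receives a fixed number of pipeline issue slots per round. A minimal sketch, assuming integer per-round weights; the function name `interleave` and the instruction labels are hypothetical, not from the paper.

```python
def interleave(streams, weights):
    """Issue instructions one per cycle from several instruction
    streams, giving stream i `weights[i]` issue slots per round.
    With weights [2, 1], stream 0 gets two of every three cycles."""
    queues = [list(s) for s in streams]
    schedule = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    schedule.append(q.pop(0))
    return schedule

trace = interleave([["A1", "A2", "A3", "A4"], ["B1", "B2"]], [2, 1])
assert trace == ["A1", "A2", "B1", "A3", "A4", "B2"]
```

A real DISC would adjust the weights at run time as deadlines approach; the fixed weights here are the simplest case.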

Proceedings ArticleDOI
01 Jan 2006
TL;DR: This paper shows that the pseudo-LRU (PLRU) cache replacement policy can cause unbounded effects on the WCET; PLRU is used in the PowerPC PPC755, which is widely used in embedded systems, and in some x86 models.
Abstract: Domino effects have been shown to hinder a tight prediction of worst-case execution times (WCET) on real-time hardware. First investigated by Lundqvist and Stenström, domino effects caused by pipeline stalls were shown to exist in the PowerPC by Schneider. This paper extends the list of causes of domino effects by showing that the pseudo-LRU (PLRU) cache replacement policy can cause unbounded effects on the WCET. PLRU is used in the PowerPC PPC755, which is widely used in embedded systems, and in some x86 models.

58 citations
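Tree-PLRU, the policy discussed above, approximates LRU with one bit per internal node of a binary tree over the ways: each bit points toward the side to evict, and an access flips the bits on its path to point away from the accessed way. A minimal 4-way sketch (the class name `TreePLRU4` and bit layout are illustrative, assuming the standard tree-PLRU scheme rather than anything specific to the PPC755):

```python
class TreePLRU4:
    """4-way tree-PLRU: b[0] is the root (0 = evict in the left pair,
    1 = evict in the right pair); b[1] picks within ways 0/1 and
    b[2] within ways 2/3."""

    def __init__(self):
        self.b = [0, 0, 0]

    def touch(self, way):
        # Point every bit on the path away from the accessed way.
        if way < 2:
            self.b[0] = 1          # accessed left pair -> evict on the right
            self.b[1] = 1 - way    # within the left pair, away from `way`
        else:
            self.b[0] = 0
            self.b[2] = 1 - (way - 2)

    def victim(self):
        # Follow the bits from the root to a leaf (a way index).
        return self.b[1] if self.b[0] == 0 else 2 + self.b[2]

plru = TreePLRU4()
for way in (0, 2, 1):
    plru.touch(way)
assert plru.victim() == 3   # the only untouched way
```

Because only the bits on the accessed path change, an access can leave stale information elsewhere in the tree, which is what makes PLRU's worst-case behavior hard to bound.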

Patent
13 Oct 1982
TL;DR: In this article, the authors present a method for prefetching instructions for a pipelined central processor unit for a general purpose digital data processing system, where a table is maintained for predicting the target addresses of transfer and indirect instructions based on past history of the execution of those instructions.
Abstract: Method and apparatus for prefetching instructions for a pipelined central processor unit of a general-purpose digital data processing system. A table is maintained for predicting the target addresses of transfer and indirect instructions based on the past history of the execution of those instructions. The prefetch mechanism forms instruction addresses and fetches instructions in parallel with the execution of previously fetched instructions by the central execution pipeline unit of the central processor unit. As instructions are prefetched, the transfer and indirect prediction (TIP) table is checked to determine the past history of those instructions. If no transfers or indirects are found, the prefetch proceeds sequentially. If transfer or indirect instructions are found, the prefetch uses information in the TIP table to begin fetching the target instruction(s). Target addresses are predicted so that, in the usual case, instructions following a transfer can be executed at a rate of one instruction per pipeline cycle regardless of the pipeline depth or the frequency of transfers. Instructions are fetched two words at a time so that the instruction fetch unit can stay ahead of the central execution pipeline. An instruction stack buffers double words of instructions fetched by the instruction fetch unit while they await execution by the central execution pipeline unit. The TIP table is updated based upon the actual execution of instructions by the central execution pipeline unit, and the correctness of its predictions is checked during execution of every instruction.

58 citations
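The TIP table behaves like what later literature calls a branch target buffer: it is indexed by the address of a transfer or indirect instruction and returns the last observed target, so prefetch can redirect before the instruction is decoded. A minimal sketch with a dictionary standing in for the hardware table; the names `TIPTable`, `predict`, and `update` are hypothetical, not from the patent.

```python
class TIPTable:
    """Sketch of a transfer/indirect prediction table.

    predict(pc) returns the predicted target of the instruction at
    `pc`, or None if no history exists (prefetch then proceeds
    sequentially).  update(pc, target) records the target actually
    taken during execution, as the patent's execution-time check does.
    """

    def __init__(self):
        self.table = {}

    def predict(self, pc):
        return self.table.get(pc)

    def update(self, pc, actual_target):
        self.table[pc] = actual_target

tip = TIPTable()
assert tip.predict(0x40) is None   # no history: prefetch sequentially
tip.update(0x40, 0x80)             # execution resolves the transfer
assert tip.predict(0x40) == 0x80   # next pass: prefetch the target directly
```

The hardware table would be finite and tag-indexed rather than an unbounded dictionary, but the predict/verify/update cycle is the same.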


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (86% related)
Scalability: 50.9K papers, 931.6K citations (85% related)
Server: 79.5K papers, 1.4M citations (82% related)
Electronic circuit: 114.2K papers, 971.5K citations (82% related)
CMOS: 81.3K papers, 1.1M citations (81% related)
Performance Metrics
No. of papers in the topic in previous years
Year    Papers
2022    18
2021    1,066
2020    1,556
2019    1,793
2018    1,754
2017    1,548