Topic

Pipeline (computing)

About: Pipeline (computing) is a research topic. Over the lifetime, 26,760 publications have been published within this topic, receiving 204,305 citations. The topic is also known as: data pipeline & computational pipeline.


Papers
Patent
02 Sep 1988
TL;DR: A system and technique for providing early decoding of complex instructions in a pipelined processor uses a programmed logic array to decode instruction segments and loads both the instruction bits and the associated predecoded bits into a FIFO buffer, accumulating a plurality of such entries.
Abstract: A system and technique for providing early decoding of complex instructions in a pipelined processor uses a programmed logic array to decode instruction segments and loads both the instruction bits and the associated predecoded bits into a FIFO buffer to accumulate a plurality of such entries. Meanwhile, an operand execute pipeline retrieves such entries from the FIFO buffer as needed, using the predecoded instruction bits to rapidly decode and execute the instructions at rates determined by the instructions themselves. Delays due to cache misses are substantially or entirely masked, as the instructions and associated predecoded bits are loaded into the FIFO buffer more rapidly than they are retrieved from it, except during cache misses. A method is described for increasing the effective speed of executing a three-operand construct. Another method is disclosed for increasing the effective speed of executing a loop containing a branch instruction by scanning the predecoded bits to establish a link between successive instructions.
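The mechanism described in the abstract can be pictured with a small simulation. The Python sketch below is an illustration only, not the patent's implementation: the predecode function, FIFO depth, and miss timings are invented. It shows a fill stage pushing (instruction, predecoded bits) pairs into a FIFO faster on average than an execute stage drains them, so stalls on the fill side (modeled cache misses) are largely hidden.

```python
from collections import deque

def predecode(instr):
    """Hypothetical PLA-style predecode: classify the opcode up front."""
    opcode = instr >> 12
    return {"is_branch": opcode == 0xB, "three_operand": opcode == 0xA}

def run(instructions, fifo_depth=8, exec_cycles=2, miss_every=5, miss_penalty=4):
    fifo = deque()
    fetched = executed = 0
    fill_stall = 0   # cycles the fill side is blocked by a simulated cache miss
    busy = 0         # cycles remaining on the instruction currently executing
    cycles = 0
    while executed < len(instructions):
        cycles += 1
        # Fill side: one predecoded entry per cycle unless stalled or full.
        if fill_stall:
            fill_stall -= 1
        elif fetched < len(instructions) and len(fifo) < fifo_depth:
            instr = instructions[fetched]
            fifo.append((instr, predecode(instr)))
            fetched += 1
            if fetched % miss_every == 0:   # pretend this fetch missed the cache
                fill_stall = miss_penalty
        # Drain side: execution rate is set by the instructions themselves.
        if busy:
            busy -= 1
        elif fifo:
            _instr, bits = fifo.popleft()   # predecoded bits avoid a slow re-decode
            executed += 1
            busy = exec_cycles - 1
    return cycles

print(run(list(range(0xA000, 0xA020))))   # FIFO occupancy hides most miss penalties
```

With these (made-up) parameters the fill side averages fewer cycles per entry than the execute side consumes, so the buffer stays ahead except immediately after a miss.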

81 citations

Proceedings ArticleDOI
22 Feb 2006
TL;DR: It is found that the processor design that is fastest overall is often also the fastest design for an individual application, and that a processor customized to support only that subset of the ISA for a specific application can on average offer 25% savings in both area and energy.
Abstract: A key advantage of soft processors (processors built on an FPGA programmable fabric) over hard processors is that they can be customized to suit an application program's specific software. This notion has been exploited in the past principally through the use of application-specific instructions. While commercial soft processors are now widely deployed, they are available in only a few microarchitectural variations. In this work we explore the advantage of tuning the processor's microarchitecture to specific software applications, and show that there are significant advantages in doing so. Using an infrastructure for automatically generating soft processors that span the area/speed design space (while remaining competitive with Altera's Nios II variations), we explore the impact of tuning several aspects of microarchitecture including: (i) hardware vs software multiplication support; (ii) shifter implementation; and (iii) pipeline depth, organization, and forwarding. We find that the processor design that is fastest overall (on average across our embedded benchmark applications) is often also the fastest design for an individual application. However, in terms of area efficiency (i.e., performance-per-area), we demonstrate that a tuned microarchitecture can offer up to 30% improvement for three of the benchmarks and on average 11.4% improvement over the fastest-on-average design. We also show that our benchmark applications use only 50% of the available instructions on average, and that a processor customized to support only that subset of the ISA for a specific application can on average offer 25% savings in both area and energy. Finally, when both techniques for customization are combined we obtain an average improvement in performance-per-area of 25%.
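The selection criterion the paper applies can be illustrated with a short Python sketch. The candidate configurations, area figures, and throughput numbers below are invented for illustration, not the paper's measurements; the point is only the metric itself: performance-per-area, compared between the fastest-on-average design and a per-application tuned choice.

```python
# Toy comparison of soft-processor variants by performance-per-area.
# All names and numbers are hypothetical placeholders.

candidates = {
    # name: (area in logic elements, {application: throughput in MIPS})
    "shallow_pipe_sw_mul":   (1200, {"crc": 35, "fft": 22, "sha": 30}),
    "deep_pipe_hw_mul":      (2100, {"crc": 48, "fft": 55, "sha": 44}),
    "mid_pipe_barrel_shift": (1600, {"crc": 46, "fft": 40, "sha": 47}),
}

def perf_per_area(area, mips):
    return mips / area

# "Fastest on average" design across the whole benchmark set.
fastest_overall = max(
    candidates,
    key=lambda n: sum(candidates[n][1].values()) / len(candidates[n][1]),
)

# Per-application tuning: pick the design with the best performance-per-area per app.
for app in ("crc", "fft", "sha"):
    best = max(candidates, key=lambda n: perf_per_area(candidates[n][0], candidates[n][1][app]))
    base = perf_per_area(candidates[fastest_overall][0], candidates[fastest_overall][1][app])
    tuned = perf_per_area(candidates[best][0], candidates[best][1][app])
    print(f"{app}: tuned={best}, gain over fastest-overall = {(tuned / base - 1) * 100:.1f}%")
```

The paper's reported numbers (11.4% average, up to 30% for some benchmarks) come from real generated processors; the sketch only shows the shape of the comparison.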

81 citations

Journal ArticleDOI
26 May 2017 - Energy
TL;DR: In this article, the authors analyzed H2 compression and pipeline transportation processes, together with safety issues related to water electrolysis and H2 production, for different values of the hydrogen mass flow rate: 0.2, 0.5, 1.0, 2.0, and 2.8 kg/s.
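For a rough sense of what flow rates of this magnitude imply, the sketch below estimates single-stage isentropic compression power from the textbook ideal-gas relation. It is not the paper's model; the inlet temperature, pressure ratio, and compressor efficiency are assumed values chosen only for illustration.

```python
# Back-of-the-envelope hydrogen compression power (assumed conditions, not the
# paper's case study): w = k/(k-1) * R * T1 * ((p2/p1)^((k-1)/k) - 1), P = m_dot * w / eta.

R_H2 = 4124.0           # J/(kg*K), specific gas constant of hydrogen
k = 1.41                # heat-capacity ratio of H2 near ambient temperature
T1 = 298.15             # K, assumed inlet temperature
p1, p2 = 2.0e6, 8.0e6   # Pa, assumed suction/discharge pressures
eta = 0.75              # assumed isentropic efficiency

def compression_power_mw(m_dot):
    w_ideal = k / (k - 1) * R_H2 * T1 * ((p2 / p1) ** ((k - 1) / k) - 1)  # J/kg
    return m_dot * w_ideal / eta / 1e6                                    # MW

for m_dot in (0.2, 0.5, 1.0, 2.0, 2.8):   # kg/s, example flow rates in the range studied
    print(f"{m_dot:>4.1f} kg/s -> ~{compression_power_mw(m_dot):.1f} MW")
```

Even with these assumed conditions, compressor power scales linearly with mass flow rate, which is why the study sweeps the flow rate as its main parameter.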

81 citations

Journal ArticleDOI
TL;DR: A parallel imaging pipeline using transmission electron microscopes that scales this technology, implements 24/7 continuous autonomous imaging, and enables the acquisition of petascale datasets is built.
Abstract: Electron microscopy (EM) is widely used for studying cellular structure and network connectivity in the brain. We have built a parallel imaging pipeline using transmission electron microscopes that scales this technology, implements 24/7 continuous autonomous imaging, and enables the acquisition of petascale datasets. The suitability of this architecture for large-scale imaging was demonstrated by acquiring a volume of more than 1 mm³ of mouse neocortex, spanning four different visual areas at synaptic resolution, in less than 6 months. Over 26,500 ultrathin tissue sections from the same block were imaged, yielding a dataset of more than 2 petabytes. The combined burst acquisition rate of the pipeline is 3 Gpixel per second and the net rate is 600 Mpixel per second with six microscopes running in parallel. This work demonstrates the feasibility of acquiring EM datasets at the scale of cortical microcircuits in multiple brain regions and species. Electron microscopy (EM) is the gold standard for biological ultrastructure but acquisition speed is slow, making it unsuitable for large volumes. Here the authors present a parallel imaging pipeline for continuous autonomous imaging with six transmission EMs to image 1 mm³ of mouse cortex in less than 6 months.
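The quoted rates can be sanity-checked with simple arithmetic. The sketch below assumes, hypothetically, about one stored byte per pixel, and computes the pure imaging time implied by the net rate; everything else in the sub-6-month wall-clock figure is overhead outside this estimate.

```python
# Rough consistency check on the abstract's throughput figures
# (assumes ~1 byte per stored pixel, which is an illustration, not a reported value).

PETA = 1e15
dataset_bytes = 2 * PETA       # "more than 2 petabytes"
net_rate_pixels = 600e6        # 600 Mpixel/s net, six microscopes in parallel
burst_rate_pixels = 3e9        # 3 Gpixel/s combined burst rate

seconds_at_net = dataset_bytes / net_rate_pixels
print(f"Pure imaging time at the net rate: {seconds_at_net / 86400:.0f} days")
print(f"Per-microscope net rate: {net_rate_pixels / 6 / 1e6:.0f} Mpixel/s")
print(f"Burst-to-net ratio: {burst_rate_pixels / net_rate_pixels:.0f}x")
```

Under that assumption the net rate accounts for roughly 40 days of imaging, consistent with a total wall-clock time under 6 months once non-imaging overheads are included.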

81 citations

Patent
20 Jul 1988
TL;DR: The exception handling hardware is minimized because instructions which cause exceptions are never re-executed, and exception handling microcode executes in-line with the normal microcode flow as discussed by the authors.
Abstract: Pipelined CPUs achieve high performance by fine-tuning the pipe stages to execute typical instruction sequences. Atypical instruction sequences result in pipeline exceptions. The disclosed method provides graceful exception handling and recovery in a micropipelined memory interface. The use of a memory reference restart command latch allows an implementation that requires no additional logic for conditional writing of states pending exception checking. The exception handling hardware is minimized because instructions which cause exceptions are never re-executed, and exception handling microcode executes in-line with the normal microcode flow.
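The in-line handling idea can be caricatured in a few lines of Python. The sketch below is not the patent's hardware: it only models a latched memory-reference command that is restarted by handler code spliced into the micro-op stream, so the faulting instruction itself is never re-executed.

```python
# Simplified software model (hypothetical, for illustration) of in-line exception
# handling with a memory-reference restart latch.

class MemoryInterface:
    def __init__(self, memory):
        self.memory = memory
        self.restart_latch = None   # last memory-reference command, kept for restart

    def load(self, addr):
        self.restart_latch = ("load", addr)
        if addr not in self.memory:          # e.g. a translation miss / page fault
            raise KeyError(addr)
        return self.memory[addr]

def run_microcode(microcode, mem_if):
    """Execute micro-ops in order; splice handler work in line on an exception."""
    stream = list(microcode)
    results = []
    i = 0
    while i < len(stream):
        op = stream[i]
        try:
            if op[0] == "load":
                results.append(mem_if.load(op[1]))
            elif op[0] == "alu":
                results.append(op[1](results[-1]))
        except KeyError as fault:
            # In-line handler: fix up state, then restart only the latched memory
            # reference; the faulting instruction is never re-executed from scratch.
            mem_if.memory[fault.args[0]] = 0           # e.g. demand-fill the location
            stream.insert(i + 1, mem_if.restart_latch)
        i += 1
    return results

mem_if = MemoryInterface({0x10: 7})
print(run_microcode([("load", 0x10), ("load", 0x20), ("alu", lambda x: x + 1)], mem_if))
```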

80 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations, 86% related
Scalability: 50.9K papers, 931.6K citations, 85% related
Server: 79.5K papers, 1.4M citations, 82% related
Electronic circuit: 114.2K papers, 971.5K citations, 82% related
CMOS: 81.3K papers, 1.1M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2022    18
2021    1,066
2020    1,556
2019    1,793
2018    1,754
2017    1,548