Topic

Pipeline (computing)

About: Pipeline (computing) is a research topic. Over its lifetime, 26,760 publications have been published on this topic, receiving 204,305 citations. The topic is also known as data pipeline and computational pipeline.


Papers
Patent
31 Oct 2003
TL;DR: This patent proposes a peer-vector machine with a hardwired pipeline accelerator coupled to the processor, in which a buffer and data-transfer objects facilitate the transfer of data between the application and the accelerator.
Abstract: A computing machine includes a first buffer and a processor coupled to the buffer. The processor executes an application, a first data-transfer object, and a second data-transfer object, publishes data under the control of the application, loads the published data into the buffer under the control of the first data-transfer object, and retrieves the published data from the buffer under the control of the second data-transfer object. Alternatively, the processor retrieves data and loads the retrieved data into the buffer under the control of the first data-transfer object, unloads the data from the buffer under the control of the second data-transfer object, and processes the unloaded data under the control of the application. Where the computing machine is a peer-vector machine that includes a hardwired pipeline accelerator coupled to the processor, the buffer and data-transfer objects facilitate the transfer of data between the application and the accelerator.

62 citations
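
To make the buffered transfer pattern concrete, here is a minimal Python sketch of the idea, assuming an application-side object that loads published data into a shared buffer and an accelerator-side object that unloads it. All class and variable names are illustrative, not taken from the patent.

```python
# Minimal sketch of the buffered transfer pattern: the application publishes
# data, one data-transfer object loads it into a shared buffer, and a second
# object unloads it for the consumer (a stand-in for the hardwired pipeline
# accelerator). Names are illustrative assumptions.
import queue

class DataTransferObject:
    """Moves items in one direction across a shared buffer."""
    def __init__(self, buffer: queue.Queue):
        self.buffer = buffer

    def load(self, item):      # application-side object: fill the buffer
        self.buffer.put(item)

    def unload(self):          # accelerator-side object: drain the buffer
        return self.buffer.get()

buffer = queue.Queue(maxsize=8)    # the buffer coupled to the processor
producer_dto = DataTransferObject(buffer)
consumer_dto = DataTransferObject(buffer)

for value in range(4):             # the application publishes data...
    producer_dto.load(value)

while not buffer.empty():          # ...and the accelerator side retrieves it
    print("accelerator received:", consumer_dto.unload())
```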

Journal ArticleDOI
TL;DR: This paper presents the realtime image subtraction pipeline of the intermediate Palomar Transient Factory, which uses high-performance computing, efficient databases, and machine-learning algorithms to reliably deliver transient candidates within ten minutes of images being taken.
Abstract: A fast-turnaround pipeline for realtime data reduction plays an essential role in discovering and permitting follow-up observations to young supernovae and fast-evolving transients in modern time-domain surveys. In this paper, we present the realtime image subtraction pipeline in the intermediate Palomar Transient Factory. By using high-performance computing, efficient databases, and machine-learning algorithms, this pipeline manages to reliably deliver transient candidates within 10 minutes of images being taken. Our experience in using high-performance computing resources to process big data in astronomy serves as a trailblazer to dealing with data from large-scale time-domain facilities in the near future.

62 citations
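
The core operation of such a pipeline, image subtraction, can be illustrated with a toy example. The sketch below assumes a pre-aligned reference and science image and flags pixels that brightened beyond five sigma; the real iPTF pipeline also performs alignment, PSF matching, database lookups, and machine-learning vetting, none of which is modeled here.

```python
# Toy illustration of difference imaging: subtract a reference image from a
# new science image and flag pixels that brightened beyond a threshold as
# transient candidates.
import numpy as np

rng = np.random.default_rng(seed=0)
reference = rng.normal(loc=100.0, scale=3.0, size=(64, 64))  # stacked reference
science = reference + rng.normal(scale=3.0, size=(64, 64))   # new exposure

science[40, 25] += 50.0            # inject a fake transient

difference = science - reference   # the image subtraction step
noise = difference.std()
candidates = np.argwhere(difference > 5 * noise)  # 5-sigma detections

for y, x in candidates:
    print(f"candidate at pixel ({y}, {x}), flux excess {difference[y, x]:.1f}")
```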

Journal ArticleDOI
TL;DR: This work demonstrates a DNA storage system that relies on massively parallel light-directed synthesis, which is considerably cheaper than conventional solid-phase synthesis; however, this technology has a high sequence error rate when optimized for speed.
Abstract: Due to its longevity and enormous information density, DNA is an attractive medium for archival storage. The current hamstring of DNA data storage systems—both in cost and speed—is synthesis. The key idea for breaking this bottleneck pursued in this work is to move beyond the low-error and expensive synthesis employed almost exclusively in today’s systems, towards cheaper, potentially faster, but high-error synthesis technologies. Here, we demonstrate a DNA storage system that relies on massively parallel light-directed synthesis, which is considerably cheaper than conventional solid-phase synthesis. However, this technology has a high sequence error rate when optimized for speed. We demonstrate that even in this high-error regime, reliable storage of information is possible, by developing a pipeline of algorithms for encoding and reconstruction of the information. In our experiments, we store a file containing sheet music of Mozart, and show perfect data recovery from low synthesis fidelity DNA.

62 citations
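
Why redundancy makes high-error synthesis workable can be seen with a toy encode/decode sketch. The repetition-plus-majority-vote scheme below is purely illustrative and far simpler than the encoding and reconstruction algorithms the paper actually develops; all function names are assumptions.

```python
# Toy sketch: map bits onto DNA bases, store redundant copies, and recover
# the data by majority vote even when some strands carry synthesis errors.
from collections import Counter

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(bits: str, copies: int = 5) -> list[str]:
    """Encode a bit string as `copies` redundant DNA strands."""
    strand = "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))
    return [strand] * copies

def decode(strands: list[str]) -> str:
    """Majority-vote each position across the (possibly corrupted) strands."""
    consensus = ""
    for position in zip(*strands):
        consensus += Counter(position).most_common(1)[0][0]
    return "".join(BASE_TO_BITS[base] for base in consensus)

strands = encode("0110110001")
strands[0] = "T" + strands[0][1:]                 # simulate a synthesis error
strands[3] = strands[3][:2] + "A" + strands[3][3:]  # and another
print(decode(strands))   # -> "0110110001", recovered despite the errors
```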

Patent
08 Mar 1994
TL;DR: A pipelined memory (20) described in this patent includes output registers (34) and output enable registers (48) that are used to electrically switch between the asynchronous operating mode and the synchronous operating mode.
Abstract: A pipelined memory (20) has a synchronous operating mode and an asynchronous operating mode. The memory (20) includes output registers (34) and output enable registers (48) which are used to electrically switch between the asynchronous operating mode and the synchronous operating mode. In addition, in the synchronous operating mode, the depth of pipelining can be changed between a three stage pipeline and a two stage pipeline. By changing the depth of pipelining, the memory (20) can operate using a greater range of clock frequencies. In addition, the operating frequency can be changed to facilitate testing and debugging of the memory (20).

62 citations
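
As a rough behavioral sketch (not the patented circuit), the following model shows how a switchable pipeline depth trades read latency for clock-frequency headroom: a value requested at the input emerges a number of cycles later equal to the configured depth. The class and method names are assumptions for illustration.

```python
# Behavioral model of a memory whose read path can be configured as a
# two-stage or three-stage pipeline. Deeper pipelining adds a cycle of
# latency but shortens each stage, permitting higher clock frequencies.
class PipelinedMemory:
    def __init__(self, data, depth=3):
        assert depth in (2, 3), "depth is switchable between 2 and 3 stages"
        self.data = data
        self.depth = depth
        self.stages = [None] * depth   # pipeline registers, one per stage

    def clock(self, address=None):
        """Advance one clock cycle; return the value leaving the pipeline."""
        out = self.stages[-1]
        new_input = self.data[address] if address is not None else None
        self.stages = [new_input] + self.stages[:-1]   # shift by one stage
        return out

mem = PipelinedMemory(data=[10, 20, 30, 40], depth=3)
outputs = [mem.clock(addr) for addr in (0, 1, 2, 3)]
outputs += [mem.clock() for _ in range(3)]      # drain the pipeline
print(outputs)   # each value appears three cycles after its address
```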

Patent
27 Oct 1993
TL;DR: A branch prediction cache stores the target instruction and the next sequential instruction, and is tagged by the address of the branch instruction, as in the prior art; in addition, it stores the lengths of the first and second instructions and the address of the second instruction, allowing the target and next sequential instructions to be directly aligned and presented to the parallel decoding circuits without waiting for a calculation of their lengths and starting addresses.
Abstract: A method and apparatus for eliminating delay in a parallel processing pipeline. In a parallel processing pipeline system, circuitry is provided to determine the lengths of two instructions and align them in parallel. Parallel decoding circuitry is provided for decoding and executing the two instructions. A branch prediction cache stores the target instruction and the next sequential instruction, and is tagged by the address of the branch instruction, as in the prior art. In addition, however, the branch prediction cache also stores the lengths of the first and second instructions and the address of the second instruction. This additional data allows the target and next sequential instructions to be directly aligned and presented to the parallel decoding circuits without waiting for a calculation of their lengths and starting addresses.

62 citations
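
A hypothetical model of such an extended cache entry might look as follows: keyed by the branch instruction's address, it carries the two cached instructions along with their pre-computed lengths and the second instruction's address, so the decoders need no length calculation on the critical path. All field names here are illustrative, not taken from the patent.

```python
# Illustrative model of the extended branch prediction cache entry described
# in the abstract; field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class BranchCacheEntry:
    target_instruction: bytes   # instruction at the predicted branch target
    next_instruction: bytes     # instruction sequentially after the target
    first_length: int           # pre-computed length of the first instruction
    second_length: int          # pre-computed length of the second instruction
    second_address: int         # pre-computed address of the second instruction

branch_cache: dict[int, BranchCacheEntry] = {}

# On a predicted-taken branch at 0x1000, both instructions and their
# alignment data come straight from the cache:
branch_cache[0x1000] = BranchCacheEntry(
    target_instruction=b"\x89\xd8",    # e.g. a two-byte instruction
    next_instruction=b"\x83\xc0\x01",  # e.g. a three-byte instruction
    first_length=2,
    second_length=3,
    second_address=0x2002,
)

entry = branch_cache.get(0x1000)
if entry is not None:
    # Both pre-aligned instructions go to the parallel decoders at once,
    # with no length calculation on the critical path.
    decode_pair = (entry.target_instruction, entry.next_instruction)
    print(decode_pair, hex(entry.second_address))
```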


Network Information
Related Topics (5)
- Cache: 59.1K papers, 976.6K citations (86% related)
- Scalability: 50.9K papers, 931.6K citations (85% related)
- Server: 79.5K papers, 1.4M citations (82% related)
- Electronic circuit: 114.2K papers, 971.5K citations (82% related)
- CMOS: 81.3K papers, 1.1M citations (81% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    18
2021    1,066
2020    1,556
2019    1,793
2018    1,754
2017    1,548