Topic

Pipeline (computing)

About: Pipeline (computing) is a research topic. Over the lifetime, 26760 publications have been published within this topic receiving 204305 citations. The topic is also known as: data pipeline & computational pipeline.


Papers
Journal ArticleDOI
TL;DR: It is found that when transporting bio-oil by pipeline over a distance of 400 km, minimum pipeline capacities of 1150 and 2000 m³/day are required to compete economically with liquid tank trucks and super B-train tank trailers, respectively.
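The break-even logic behind such a result can be illustrated with a minimal cost-comparison sketch, assuming a fixed-plus-variable cost model for the pipeline and a purely per-volume cost for trucking; every cost figure below is an illustrative placeholder, not a value from the study.

```python
# Hypothetical break-even comparison between pipeline and truck transport of bio-oil.
# All cost parameters are illustrative placeholders, not figures from the cited study.

def pipeline_cost_per_m3(capacity_m3_per_day, distance_km,
                         fixed_cost_per_km_day=2.0, variable_cost_per_m3_km=0.002):
    """Unit cost falls with capacity because fixed (capital) costs are shared."""
    fixed = fixed_cost_per_km_day * distance_km / capacity_m3_per_day
    variable = variable_cost_per_m3_km * distance_km
    return fixed + variable

def truck_cost_per_m3(distance_km, cost_per_m3_km=0.012):
    """Truck transport scales roughly linearly with distance, independent of volume."""
    return cost_per_m3_km * distance_km

if __name__ == "__main__":
    distance = 400  # km, as in the study
    truck = truck_cost_per_m3(distance)
    # Find the smallest capacity at which the pipeline undercuts trucking.
    for capacity in range(100, 5001, 50):
        if pipeline_cost_per_m3(capacity, distance) <= truck:
            print(f"Break-even capacity ~ {capacity} m3/day "
                  f"(pipeline {pipeline_cost_per_m3(capacity, distance):.3f} "
                  f"vs truck {truck:.3f} $/m3)")
            break
```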

57 citations

Journal ArticleDOI
TL;DR: In this article, a chipless RFID-based sensor for real-time pipeline integrity monitoring is presented; it monitors coating lift-off from the pipeline, which is the initial step in external corrosion of a metal pipe.
Abstract: This work presents a chipless RFID-based sensor for real-time pipeline integrity monitoring. The sensor monitors coating lift-off from the pipeline, which is the initial step in external corrosion of a metal pipe. The sensor has a readout coil and an LC resonator on the passive tag with an interdigitated capacitor. The resonant frequency of the sensor shows a strong dependence on the gap between the coating and the metal pipe. The tag is built on a flexible substrate so that it can wrap around the pipe and represent the pipe coating. The sensor is conformal, battery-free, and low-cost, which makes it suitable for pipeline monitoring in harsh environments. The resonator was tuned to 105 MHz with a Q factor of ∼115. The sensor demonstrates a maximum resonant frequency change of 11.7% when 2 standard cubic centimeters per minute (SCCM) of air lifts the coating, and 7.46% when 4 mL of water ingresses between the coating and the pipe. The sensor has the advantages of low cost, simplicity, and long lifetime, with the potential to predict failure in pipeline systems before it occurs.
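The frequency shift described above follows from the LC tank relation f = 1/(2π√(LC)): lift-off reduces the tag's effective capacitance, so the resonance moves up. Below is a minimal sketch, assuming a toy capacitance-versus-gap model; the 100 nH inductance, 23 pF base capacitance, and 200 µm decay constant are illustrative choices made only so the zero-gap resonance lands near the reported 105 MHz, not parameters from the paper.

```python
import math

def resonant_frequency(L_henry, C_farad):
    """Resonant frequency of an ideal LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

def effective_capacitance(C0_farad, gap_m, k=200e-6):
    """Toy model: capacitance drops as the coating lifts off the metal pipe.
    C0 is the zero-gap capacitance; k sets how quickly the coupling weakens."""
    return C0_farad / (1.0 + gap_m / k)

if __name__ == "__main__":
    # Illustrative L and C0 chosen so the zero-gap resonance sits near 105 MHz.
    L = 100e-9    # 100 nH inductance (placeholder)
    C0 = 23e-12   # ~23 pF gives ~105 MHz with 100 nH
    f0 = resonant_frequency(L, C0)
    for gap_um in (0, 10, 25, 50):
        C = effective_capacitance(C0, gap_um * 1e-6)
        f = resonant_frequency(L, C)
        shift = 100.0 * (f - f0) / f0
        print(f"gap {gap_um:3d} um -> f = {f/1e6:7.2f} MHz ({shift:+.2f}%)")
```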

57 citations

Journal ArticleDOI
12 Jun 2005
TL;DR: A novel program transformation technique to exploit the parallel and pipelined computing power of modern network processors is presented; results show that the method provides impressive speedup for the commonly used NPF IPv4 forwarding and IP forwarding benchmarks.
Abstract: Modern network processors employ parallel processing engines (PEs) to keep up with explosive Internet packet processing demands. Most network processors further allow processing engines to be organized in a pipelined fashion to enable higher processing throughput and flexibility. In this paper, we present a novel program transformation technique to exploit the parallel and pipelined computing power of modern network processors. Our proposed method automatically partitions a sequential packet processing application into coordinated pipelined parallel subtasks which can be naturally mapped to contemporary high-performance network processors. Our transformation technique ensures that packet processing tasks are balanced among pipeline stages and that data transmission between pipeline stages is minimized. We have implemented the proposed transformation method in an auto-partitioning C compiler product for Intel Network Processors. Experimental results show that our method provides impressive speedup for the commonly used NPF IPv4 forwarding and IP forwarding benchmarks. For a 9-stage pipeline, our auto-partitioning C compiler obtained more than 4X speedup for the IPv4 forwarding PPS and the IP forwarding PPS (for both IPv4 and IPv6 traffic).
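The stage-balancing goal can be illustrated with the classic linear-partition problem: split a chain of task costs into k contiguous pipeline stages so that the bottleneck stage is as cheap as possible. This is only a generic sketch of that idea, not the compiler's actual partitioning algorithm, and the task costs are invented for illustration.

```python
# Balance a sequential task chain across k pipeline stages by minimizing the
# maximum stage cost (the pipeline's bottleneck). Generic linear-partition sketch.

def min_bottleneck_partition(costs, stages):
    """Binary-search the smallest bottleneck cost achievable with `stages` stages."""
    def feasible(limit):
        used, current = 1, 0
        for c in costs:
            if c > limit:
                return False
            if current + c > limit:
                used += 1          # start a new stage
                current = c
            else:
                current += c
        return used <= stages

    lo, hi = max(costs), sum(costs)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

if __name__ == "__main__":
    # Per-task processing costs (e.g. cycles) for a packet-processing chain (invented).
    task_costs = [40, 15, 25, 60, 10, 30, 55, 20, 35]
    for k in (3, 5, 9):
        print(f"{k} stages -> bottleneck cost {min_bottleneck_partition(task_costs, k)}")
```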

57 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a flexible multiprocessor platform for high-throughput turbo decoding using configurable application-specific instruction-set processors (ASIPs) combined with an efficient memory and communication interconnect scheme.
Abstract: Emerging digital communication applications and the underlying architectures encounter drastically increasing performance and flexibility requirements. In this paper, we present a novel flexible multiprocessor platform for high-throughput turbo decoding. The proposed platform enables exploiting all parallelism levels of turbo decoding applications to fulfill performance requirements. To fulfill flexibility requirements, the platform is structured around configurable application-specific instruction-set processors (ASIPs) combined with an efficient memory and communication interconnect scheme. The designed ASIP has a single-instruction, multiple-data (SIMD) architecture with a specialized and extensible instruction set and six-stage pipeline control. The attached memories and communication interfaces enable its integration in multiprocessor architectures. These multiprocessor architectures benefit from the recent shuffled decoding technique introduced in the turbo-decoding field to achieve higher throughput. The major characteristics of the proposed platform are its flexibility and scalability, which make it reusable for all simple and double binary turbo codes of existing and emerging standards. Results obtained for double binary WiMAX turbo codes demonstrate around 250 Mb/s throughput using a 16-ASIP multiprocessor architecture.
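A rough, first-order way to see how throughput scales with the number of ASIPs: decoded bits per second grow with clock rate and parallelism and shrink with the number of iterations (two half-iterations each). The clock rate, iteration count, cycles per bit, and efficiency factor below are placeholders, not the platform's measured parameters.

```python
# First-order throughput model for a parallel turbo decoder: the frame is split into
# sub-blocks handled by separate processors, so throughput scales with clock rate and
# parallelism but is divided by the iterative decoding effort.
# All numbers are illustrative placeholders, not the platform's measured parameters.

def turbo_throughput_mbps(clock_mhz, num_processors, iterations,
                          cycles_per_bit_per_half_iter=1.0, efficiency=0.8):
    """Estimated decoded throughput in Mb/s.
    Each iteration has two half-iterations (the two constituent decoders);
    `efficiency` absorbs memory-conflict and pipeline-fill overheads."""
    cycles_per_bit = 2 * iterations * cycles_per_bit_per_half_iter
    return clock_mhz * num_processors * efficiency / cycles_per_bit

if __name__ == "__main__":
    for n in (1, 4, 8, 16):
        print(f"{n:2d} ASIPs -> ~{turbo_throughput_mbps(300, n, 8):.0f} Mb/s")
```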

57 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations, 86% related
Scalability: 50.9K papers, 931.6K citations, 85% related
Server: 79.5K papers, 1.4M citations, 82% related
Electronic circuit: 114.2K papers, 971.5K citations, 82% related
CMOS: 81.3K papers, 1.1M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2022    18
2021    1,066
2020    1,556
2019    1,793
2018    1,754
2017    1,548