Topic
Pipeline (computing)
About: Pipeline (computing) is a research topic. Over its lifetime, 26,760 publications have been published within this topic, receiving 204,305 citations. The topic is also known as: data pipeline & computational pipeline.
Papers published on a yearly basis
Papers
TL;DR: This work presents a hardware-efficient, high-speed parallel pipelined architecture that increases throughput for the AES algorithm, achieving 29.77 Gbps in encryption, whereas the highest throughput previously reported in the literature is 21.54 Gbps.
79 citations
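The throughput figure above follows directly from the standard pipelined-AES relationship: a fully pipelined design accepts one 128-bit block per clock cycle, so throughput is block size times clock frequency. A minimal sketch of that arithmetic, assuming full pipelining (the clock frequency below is inferred from the reported figure, not taken from the paper):

```python
# Hedged sketch: relating pipelined AES throughput to clock frequency.
# Assumes one 128-bit block completes per cycle in a fully pipelined design.
BLOCK_BITS = 128

def aes_throughput_gbps(clock_mhz: float) -> float:
    """Throughput in Gbps for a design finishing one 128-bit block per cycle."""
    return BLOCK_BITS * clock_mhz * 1e6 / 1e9

# A clock of roughly 232.6 MHz corresponds to the reported 29.77 Gbps.
print(round(aes_throughput_gbps(232.6), 2))
```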
27 May 2005
TL;DR: In this paper, branch prediction for multiple branch-type instructions is performed concurrently for a high-bandwidth pipeline, and the branch predictions are then supplied for further processing of the corresponding branch-type instructions.
Abstract: Concurrent branch prediction for multiple branch-type instructions satisfies the demands of high-performance environments by providing the instruction flow for a high-bandwidth pipeline. Branch predictions are generated concurrently for multiple branch-type instructions, and the concurrently generated predictions are then supplied for further processing of the corresponding branch-type instructions.
78 citations
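The idea of predicting every branch in a fetch group at once can be sketched with a standard textbook predictor. The 2-bit saturating-counter scheme below is illustrative only, not the specific mechanism of this patent; the class and method names are assumptions for the sketch:

```python
# Hedged sketch: concurrent branch prediction for all branch-type
# instructions in one fetch group, using 2-bit saturating counters.
class TwoBitPredictor:
    def __init__(self, table_size: int = 1024):
        # States 0-1 predict not-taken, 2-3 predict taken; start weakly taken.
        self.table = [2] * table_size

    def _index(self, pc: int) -> int:
        return pc % len(self.table)

    def predict_group(self, pcs: list[int]) -> list[bool]:
        """Predict every branch-type instruction in a fetch group at once."""
        return [self.table[self._index(pc)] >= 2 for pc in pcs]

    def update(self, pc: int, taken: bool) -> None:
        i = self._index(pc)
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

bp = TwoBitPredictor()
# Three branches in one fetch group, predicted in the same cycle.
print(bp.predict_group([0x400, 0x404, 0x408]))  # [True, True, True]
```

The key point is that `predict_group` reads the prediction table for all branches in the group before any of them resolves, which is what keeps a high-bandwidth pipeline fed.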
TL;DR: A novel, fully automatic, and robust 2D/3D global registration pipeline consisting of several stages that simultaneously register the input image set onto the corresponding 3D object, capable of handling objects of any shape, both small and large.
Abstract: The photorealistic acquisition of 3D objects often requires color information from digital photography to be mapped on the acquired geometry, in order to obtain a textured 3D model. This paper presents a novel fully automatic 2D/3D global registration pipeline consisting of several stages that simultaneously register the input image set on the corresponding 3D object. The first stage exploits Structure From Motion (SFM) on the image set in order to generate a sparse point cloud. During the second stage, this point cloud is aligned to the 3D object using an extension of the 4 Point Congruent Set (4PCS) algorithm for the alignment of range maps. The extension accounts for models with different scales and unknown regions of overlap. In the last processing stage a global refinement algorithm based on mutual information optimizes the color projection of the aligned photos on the 3D object, in order to obtain high quality textures. The proposed registration pipeline is general, capable of dealing with small and big objects of any shape, and robust. We present results from six real cases, evaluating the quality of the final colors mapped onto the 3D object. A comparison with a ground truth dataset is also presented.
78 citations
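The three stages the abstract describes (Structure From Motion, scale-aware 4PCS alignment, mutual-information color refinement) compose into a straightforward pipeline. A minimal structural sketch, with placeholder stage bodies; all function names and return values here are illustrative assumptions, not the authors' API:

```python
# Hedged sketch of the pipeline's three-stage structure. Stage internals
# are stubs; only the data flow between stages is being illustrated.
def structure_from_motion(images):
    # Stage 1: SFM on the image set yields a sparse point cloud + camera poses.
    return {"point_cloud": [], "cameras": [f"cam_{i}" for i, _ in enumerate(images)]}

def align_4pcs_with_scale(point_cloud, model_3d):
    # Stage 2: extended 4PCS aligns the cloud to the 3D object,
    # also solving for scale and unknown regions of overlap.
    return {"rotation": "R", "translation": "t", "scale": 1.0}

def refine_by_mutual_information(cameras, alignment, model_3d):
    # Stage 3: global refinement of the color projection onto the geometry.
    return {"textured_model": model_3d, "cameras": cameras}

def register(images, model_3d):
    sfm = structure_from_motion(images)
    alignment = align_4pcs_with_scale(sfm["point_cloud"], model_3d)
    return refine_by_mutual_information(sfm["cameras"], alignment, model_3d)

result = register(["img1.jpg", "img2.jpg"], "statue.ply")
print(result["textured_model"])
```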
TL;DR: The results show that, after optimization, both the total oil transportation cost and the total exergy loss of the pipeline transportation system improve markedly over pre-optimization levels, indicating more effective utilization of energy.
78 citations
IBM
TL;DR: This paper shows that cycles per instruction (CPI) is a simple dot product of event frequencies and event penalties, and that it is far more intuitive than its more popular cousin, instructions per cycle (IPC).
Abstract: To understand processor performance, it is essential to use metrics that are intuitive, and it is essential to be familiar with a few aspects of a simple scalar pipeline before attempting to understand more complex structures. This paper shows that cycles per instruction (CPI) is a simple dot product of event frequencies and event penalties, and that it is far more intuitive than its more popular cousin, instructions per cycle (IPC). CPI is separable into three components that account for the inherent work, the pipeline, and the memory hierarchy, respectively. Each of these components is a fixed upper limit, or “hard bound,” for the superscalar equivalent components. In the last decade, the memory-hierarchy component has become the most dominant of the three components, and in the next decade, queueing at the memory data bus will become a very significant part of this. In a reaction to this trend, an evolution in bus protocols will ensue. This paper provides a general sketch of those protocols. An underlying theme in this paper is that power constraints have been a driving force in computer architecture since the first computers were built fifty years ago. In CMOS technology, power constraints will shape future microarchitecture in a positive and surprising way. Specifically, a resurgence of the RISC approach is expected in high-performance design which will cause the client and server microarchitectures to converge.
78 citations
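The paper's central identity (CPI as a dot product of per-instruction event frequencies and per-event cycle penalties) is easy to make concrete. A minimal sketch, with illustrative event names and numbers that are assumptions, not values from the paper:

```python
# Hedged sketch: CPI as a dot product of event frequencies (events per
# instruction) and event penalties (cycles per event). Illustrative numbers.
def cpi(frequencies: dict[str, float], penalties: dict[str, float]) -> float:
    return sum(frequencies[e] * penalties[e] for e in frequencies)

events = {
    "base":        1.00,  # inherent work: one cycle per instruction
    "branch_miss": 0.02,  # mispredictions per instruction (pipeline component)
    "l1d_miss":    0.05,  # L1 data-cache misses per instruction (memory hierarchy)
    "l2_miss":     0.01,  # misses that go to memory, per instruction
}
penalty = {"base": 1.0, "branch_miss": 12.0, "l1d_miss": 10.0, "l2_miss": 150.0}

total_cpi = cpi(events, penalty)
print(round(total_cpi, 2))  # 1.0 + 0.24 + 0.5 + 1.5 = 3.24
```

Note how the three components the paper separates out (inherent work, pipeline, memory hierarchy) each appear as additive terms, and how the memory-hierarchy terms dominate the total here, consistent with the trend the paper describes.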