
Showing papers on "Pipeline (computing) published in 2010"


01 Jan 2010
TL;DR: In this article, the authors present advances in long-range distributed sensing and in novel sensing cable designs for distributed temperature and strain sensing, including leakage detection on brine and gas pipelines, strain monitoring on gas pipelines and combined strain and temperature monitoring on composite flow lines, and composite coiled tubing pipes.
Abstract: Distributed fiber optic sensing presents unique features that have no match in conventional sensing techniques. The ability to measure temperatures and strain at thousands of points along a single fiber is particularly interesting for the monitoring of elongated structures such as pipelines, flow lines, oil wells, and coiled tubing. Sensing systems based on Brillouin and Raman scattering are used, for example, to detect pipeline leakages, to verify pipeline operational parameters and to prevent failure of pipelines installed in landslide areas, to optimize oil production from wells, and to detect hot spots in high-power cables. Recent developments in distributed fiber sensing technology allow the monitoring of 60 km of pipeline from a single instrument and of up to 300 km with the use of optical amplifiers. New application opportunities have demonstrated that the design and production of sensing cables are a critical element for the success of any distributed sensing instrumentation project. Although some telecommunication cables can be effectively used for sensing ordinary temperatures, monitoring high and low temperatures or distributed strain presents unique challenges that require specific cable designs. This contribution presents advances in long-range distributed sensing and in novel sensing cable designs for distributed temperature and strain sensing. This paper also reports a number of significant field application examples of this technology, including leakage detection on brine and gas pipelines, strain monitoring on gas pipelines and combined strain and temperature monitoring on composite flow lines, and composite coiled tubing pipes.

147 citations


Journal ArticleDOI
TL;DR: In this paper, a nonisothermal gas flow model was solved to simulate the slow and fast fluid transients, such as those typically found in high-pressure gas transmission pipelines, and the results of the simulation were used to understand the effect of different pipeline thermal models on the flow rate, pressure and temperature in the pipeline.

129 citations


Journal ArticleDOI
TL;DR: The author maintains that Wayne Parrott is not simply a public-sector plant biologist and should not have been introduced as such, a conclusion in fact confirmed by Nature Biotechnology, and argues that documenting such conflicts of interest should be the responsibility of Nature Biotechnology, not of a concerned reader.
Abstract: …ways to influence policy independently of formal lobbying, including those outlined by Jacobson [4], as well as the 'sound science' approach promoted by Newt Gingrich and the Bush administration [5]. Finally, with respect to the ban of ILSI from WHO activities, I did not claim that they were banned from all WHO activities. Because of space limitations, I cited a text that was heavily referenced regarding the details of the WHO incident. Additional references include the Associated Press [6] and CSPI [3]. My conclusion that Wayne Parrott is not simply a public sector plant biologist and should not have been introduced as such remains the same and was in fact confirmed by Nature Biotechnology [7]. However, it should be the responsibility of Nature Biotechnology to document these conflicts of interest, not a concerned reader, such as myself. A similar conflict with industry-funded plant …

COMPETING INTERESTS STATEMENT: The author declares competing financial interests; details accompany the full-text HTML version of the paper at http://www.nature.com/naturebiotechnology/.

References:
1. Schubert, D. Nat. Biotechnol. 27, 802–803; author reply 803 (2009).
2. Beachy, R. et al. Nat. Biotechnol. 20, 1195–1196; author reply 1197 (2002).
3. http://www.cspinet.org/integrity/
4. Jacobson, M.F. Lifting the veil of secrecy: corporate support for health and environmental professional associations, charities, and industry front groups. CSPI and its Integrity in Science Project (8 September 2003).
5. Schubert, D. Bush's "sound science": turning a deaf ear to reality. The San Diego Union-Tribune (9 July 2004).
6. Heilprin, J. WHO to rely less on US research. Associated Press Online (27 January 2006).
7. Anonymous. Nature Biotechnology replies. Nat. Biotechnol. 27, 803 (2009).
8. Sharpe, V.A. & Gurian-Sherman, D. Competing interests. Nat. Biotechnol. 21, 1131 (2003).

91 citations


Patent
06 Jun 2010
TL;DR: In this paper, a method for processing images for a first camera and a second camera of a mobile device using a shared pipeline is described, where the first set of images captured by the first camera of the mobile device are processed using a first configuration of the shared pipeline.
Abstract: Some embodiments provide a method of processing images for a first camera and a second camera of a mobile device using a shared pipeline. The method receives a first set of images captured by the first camera of the mobile device. The method processes the first set of images using a first configuration of the shared pipeline. The method also receives a second set of images captured by the second camera of the mobile device, and processes the second set of images using a second configuration of the shared pipeline different from the first configuration.
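The two-configuration idea above can be sketched in a few lines of Python. All names (`PipelineConfig`, `SharedPipeline`) and the toy "stages" are illustrative, not from the patent:

```python
# Minimal sketch, assuming a pipeline whose stages are parameterized per camera.
from dataclasses import dataclass

@dataclass
class PipelineConfig:
    scale: float        # resolution-scaling stage parameter (illustrative)
    denoise: bool       # whether the denoise stage is enabled (illustrative)

class SharedPipeline:
    def __init__(self):
        self.config = None

    def configure(self, config: PipelineConfig):
        self.config = config

    def process(self, images):
        # Each "image" is just a number here; real stages operate on pixels.
        out = [img * self.config.scale for img in images]
        if self.config.denoise:
            out = [round(img, 1) for img in out]
        return out

pipeline = SharedPipeline()

# First camera's images run through configuration A...
pipeline.configure(PipelineConfig(scale=0.5, denoise=True))
front = pipeline.process([10, 20])

# ...then the same hardware pipeline is reconfigured for the second camera.
pipeline.configure(PipelineConfig(scale=2.0, denoise=False))
back = pipeline.process([10, 20])
```

The point of the sketch is that one pipeline instance serves both cameras, with only its configuration swapped between image sets.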

88 citations


Patent
17 Dec 2010
TL;DR: In this article, a searchable repository of recorded and synthesized depth clips and associated ground truth tracking data is provided for testing a target recognition, analysis, and tracking system and a report generator outputs an analysis of the tracking data relative to the ground truth in the at least subset.
Abstract: Technology for testing a target recognition, analysis, and tracking system is provided. A searchable repository of recorded and synthesized depth clips and associated ground truth tracking data is provided. Data in the repository is used by one or more processing devices each including at least one instance of a target recognition, analysis, and tracking pipeline to analyze performance of the tracking pipeline. An analysis engine provides at least a subset of the searchable set responsive to a request to test the pipeline and receives tracking data output from the pipeline on the at least subset of the searchable set. A report generator outputs an analysis of the tracking data relative to the ground truth in the at least subset to provide an output of the error relative to the ground truth.
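The report generator's comparison of tracking output against ground truth boils down to an error metric over the clip. A minimal sketch, using mean Euclidean distance as an illustrative (not the patent's) metric:

```python
# Sketch: score a tracking pipeline's output against ground-truth positions.
import math

def tracking_error(tracked, ground_truth):
    """Mean Euclidean distance between tracked and ground-truth positions."""
    dists = [math.dist(t, g) for t, g in zip(tracked, ground_truth)]
    return sum(dists) / len(dists)

# Illustrative 2-D positions for two tracked points in one frame.
ground_truth = [(0.0, 0.0), (1.0, 1.0)]
tracked      = [(0.0, 3.0), (5.0, 4.0)]
err = tracking_error(tracked, ground_truth)
```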

87 citations


Journal ArticleDOI
TL;DR: In this paper, a method is introduced which idealises a deformed pipeline crosssection in order to estimate the local and global components of the total displacement from experimental measurements of the final cross-section.

77 citations


Journal ArticleDOI
TL;DR: The proposed MAC showed properties superior to the standard design in many respects and roughly twice the performance of previous research at a similar clock frequency.
Abstract: In this paper, we proposed a new architecture of multiplier-and-accumulator (MAC) for high-speed arithmetic. By combining multiplication with accumulation and devising a hybrid type of carry save adder (CSA), the performance was improved. Since the accumulator that has the largest delay in MAC was merged into CSA, the overall performance was elevated. The proposed CSA tree uses 1's-complement-based radix-2 modified Booth's algorithm (MBA) and has the modified array for the sign extension in order to increase the bit density of the operands. The CSA propagates the carries to the least significant bits of the partial products and generates the least significant bits in advance to decrease the number of the input bits of the final adder. Also, the proposed MAC accumulates the intermediate results in the type of sum and carry bits instead of the output of the final adder, which made it possible to optimize the pipeline scheme to improve the performance. The proposed architecture was synthesized with 250, 180, and 130 nm, and 90 nm standard CMOS libraries. Based on the theoretical and experimental estimation, we analyzed the results such as the amount of hardware resources, delay, and pipelining scheme. We used Sakurai's alpha power law for the delay modeling. The proposed MAC showed properties superior to the standard design in many respects and roughly twice the performance of previous research at a similar clock frequency. We expect that the proposed MAC can be adapted to various fields requiring high performance, such as signal processing.
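The key trick, accumulating in redundant sum/carry form and resolving carries only once at the end, can be modeled behaviourally. This is a dataflow sketch only, not the paper's gate-level Booth/CSA design:

```python
# Behavioural model: the accumulator is kept as separate sum and carry words,
# and only one final carry-propagate addition resolves them.

def csa(a, b, c):
    """One carry-save adder step: three operands in, (sum, carry) out.
    Per bit, sum is the XOR and carry is the majority, shifted left."""
    s = a ^ b ^ c
    carry = ((a & b) | (a & c) | (b & c)) << 1
    return s, carry

def mac(pairs):
    acc_s, acc_c = 0, 0
    for x, y in pairs:
        # Fold each new product into the redundant (sum, carry) accumulator;
        # no carry chain is traversed inside the loop.
        acc_s, acc_c = csa(acc_s, acc_c, x * y)
    # A single carry-propagate addition at the very end resolves the result.
    return acc_s + acc_c

result = mac([(3, 4), (5, 6), (7, 8)])  # 12 + 30 + 56 = 98
```

The invariant is that `acc_s + acc_c` always equals the running sum, which is why the expensive carry propagation can be deferred to the final adder.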

75 citations


Proceedings ArticleDOI
03 May 2010
TL;DR: A new design template and design flow for the implementation of data-driven asynchronous circuits that relies on the use of edge-triggered flip-flops as the only storage elements, not only for the datapaths, but also for the control circuits; latches and C-elements are not required.
Abstract: This paper presents a new design template and design flow for the implementation of data-driven asynchronous circuits. It relies on the use of edge-triggered flip-flops as the only storage elements, not only for the datapaths, but also for the control circuits; latches and C-elements that are common in many asynchronous circuit design styles are not required. The design template uses a two-phase handshake protocol for inter-component communication. In a pipeline structure, these circuits operate near the speed of Mousetrap circuits, but the required design flow is simpler. The implementation style, which we refer to as Click elements, has been chosen to resemble synchronous circuits as much as possible. This allows for the use of conventional optimization and timing tools in the design flow and for a cheaper design-for-test implementation. The Click templates are well suited for a data-flow-driven compilation flow, which avoids much of the control overhead of traditional syntax-directed compilation. The two-phase circuits show a significant improvement in performance and energy efficiency compared to four-phase single-rail circuits.

74 citations


Journal ArticleDOI
TL;DR: Results show that simultaneous batch injections lead to a better use of the pipeline transport capacity and a substantial reduction on the overall time needed to meet depot demands.

66 citations


20 Jun 2010
TL;DR: DARSIM is a parallel, highly configurable, cycle-level network-on-chip simulator based on an ingress-queued wormhole router architecture that allows a variety of routing and virtual channel allocation algorithms out of the box, ranging from simple DOR routing to complex Valiant, ROMM, or PROM schemes, BSOR, and adaptive routing.
Abstract: We present DARSIM, a parallel, highly configurable, cycle-level network-on-chip simulator based on an ingress-queued wormhole router architecture. The parallel simulation engine offers cycle-accurate as well as periodic synchronization, permitting tradeoffs between perfect accuracy and high speed with very good accuracy. When run on four separate physical cores, speedups can exceed a factor of 3.5, while when eight threads are mapped to the same cores via hyperthreading, simulation speeds up as much as five-fold. Most hardware parameters are configurable, including geometry, bandwidth, crossbar dimensions, and pipeline depths. A highly parametrized table-based design allows a variety of routing and virtual channel allocation algorithms out of the box, ranging from simple DOR routing to complex Valiant, ROMM, or PROM schemes, BSOR, and adaptive routing. DARSIM can run in network-only mode using traces or directly emulate a MIPS-based multicore. Sources are freely available under the open-source MIT license.
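The simplest of the routing schemes listed, dimension-ordered routing (DOR), is easy to sketch. The function below is illustrative and is not DARSIM's actual API:

```python
# XY dimension-ordered routing on a 2-D mesh: route fully in X first, then Y.
# Deterministic and deadlock-free on a mesh, which is why it is the baseline.

def dor_next_hop(cur, dst):
    x, y = cur
    dx, dy = dst
    if x != dx:
        return (x + (1 if dx > x else -1), y)
    if y != dy:
        return (x, y + (1 if dy > y else -1))
    return cur  # already at the destination

# Walk a packet from (0, 0) to (2, 1).
hop = (0, 0)
path = [hop]
while hop != (2, 1):
    hop = dor_next_hop(hop, (2, 1))
    path.append(hop)
```

Schemes like Valiant or ROMM differ mainly in first routing to a randomly chosen intermediate node before applying a deterministic phase like this one.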

63 citations


Journal ArticleDOI
TL;DR: The modularity of the pipeline allows functionality for new scan protocols to be added, such as an extended field of view, or new physical signals such as phase-contrast or dark-field imaging etc.
Abstract: With synchrotron-radiation-based tomographic microscopy, three-dimensional structures down to the micrometer level can be visualized. Tomographic data sets typically consist of 1000 to 1500 projections of 1024 x 1024 to 2048 x 2048 pixels and are acquired in 5-15 min. A processing pipeline has been developed to handle this large amount of data efficiently and to reconstruct the tomographic volume within a few minutes after the end of a scan. Just a few seconds after the raw data have been acquired, a selection of reconstructed slices is accessible through a web interface for preview and to fine tune the reconstruction parameters. The same interface allows initiation and control of the reconstruction process on the computer cluster. By integrating all programs and tools, required for tomographic reconstruction into the pipeline, the necessary user interaction is reduced to a minimum. The modularity of the pipeline allows functionality for new scan protocols to be added, such as an extended field of view, or new physical signals such as phase-contrast or dark-field imaging etc.

Proceedings ArticleDOI
11 Sep 2010
TL;DR: Feedback-Directed Pipelining (FDP) is proposed, a software framework that chooses the core-to-stage allocation at run-time and first maximizes the performance of the workload and then saves power by reducing the number of active cores, without impacting performance.
Abstract: Extracting high performance from Chip Multiprocessors requires that the application be parallelized. A common software technique to parallelize loops is pipeline parallelism in which the programmer/compiler splits each loop iteration into stages and each stage runs on a certain number of cores. It is important to choose the number of cores for each stage carefully because the core-to-stage allocation determines performance and power consumption. Finding the best core-to-stage allocation for an application is challenging because the number of possible allocations is large, and the best allocation depends on the input set and machine configuration. This paper proposes Feedback-Directed Pipelining (FDP), a software framework that chooses the core-to-stage allocation at run-time. FDP first maximizes the performance of the workload and then saves power by reducing the number of active cores, without impacting performance. Our evaluation on a real SMP system with two Core2Quad processors (8 cores) shows that FDP provides an average speedup of 4.2x which is significantly higher than the 2.3x speedup obtained with a practical profile-based allocation. We also show that FDP is robust to changes in machine configuration and input set.
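The feedback-directed idea can be sketched as a greedy loop: measure per-stage time, then repeatedly give the next spare core to the current bottleneck stage. The stage times and the 1/cores scaling model below are illustrative; FDP measures real stage timings at run-time:

```python
# Greedy core-to-stage allocation sketch under an assumed linear speedup model.

def allocate_cores(stage_time, n_cores):
    """stage_time[i]: time one core needs per iteration for stage i."""
    alloc = [1] * len(stage_time)          # every stage needs at least one core
    for _ in range(n_cores - len(stage_time)):
        # Effective stage time shrinks with the cores assigned to it;
        # always feed the current bottleneck (the pipeline's rate limiter).
        bottleneck = max(range(len(stage_time)),
                         key=lambda i: stage_time[i] / alloc[i])
        alloc[bottleneck] += 1
    return alloc

# Three stages on 8 cores: the slow middle stage attracts most of the cores.
alloc = allocate_cores([2.0, 8.0, 2.0], 8)
```

Pipeline throughput is set by the slowest stage, so each greedy step attacks exactly the term that bounds overall performance.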

Journal ArticleDOI
01 Sep 2010-Energy
TL;DR: This paper analyzes the properties of the wavelet transform and its potential application in detecting the leak location in gas pipelines, and proposes an entity-part method, shown to be valid in simulation experiments, together with Romberg and dichotomy searching methods for the computational analysis of the leaking point.

Journal ArticleDOI
TL;DR: It is found that when transporting bio-oil by pipeline to a distance of 400 km, minimum pipeline capacities of 1150 and 2000 m³/day are required to compete economically with liquid tank trucks and super B-train tank trailers, respectively.

Journal ArticleDOI
TL;DR: This paper designs an embedded face detection system for handheld digital cameras or camera phones that can achieve about 75-80% detection rate for group portraits and proposes a hardware pipeline design for Haar-like feature calculation and a system design exploiting several levels of parallelism.

Journal ArticleDOI
TL;DR: In this article, a semi-Markov model of the system operation processes is proposed and its selected characteristics are determined, and the mean values of the pipeline system operation process unconditional sojourn times in particular operation states are found and applied to determining this process transient probabilities in these states.

Patent
15 Apr 2010
TL;DR: In this paper, a multi-mode Advanced Encryption Standard (MM-AES) module for a storage controller is adapted to perform interleaved processing of multiple data streams, i.e., concurrently encrypt and/or decrypt string-data blocks from multiple data stream using, for each data stream, a corresponding cipher mode that is any one of a plurality of AES cipher modes.
Abstract: In one embodiment, a multi-mode Advanced Encryption Standard (MM-AES) module for a storage controller is adapted to perform interleaved processing of multiple data streams, i.e., concurrently encrypt and/or decrypt string-data blocks from multiple data streams using, for each data stream, a corresponding cipher mode that is any one of a plurality of AES cipher modes. The MM-AES module receives a string-data block with (a) a corresponding key identifier that identifies the corresponding module-cached key and (b) a corresponding control command that indicates to the MM-AES module what AES-mode-related processing steps to perform on the data block. The MM-AES module generates, updates, and caches masks to preserve inter-block information and allow the interleaved processing. The MM-AES module uses an unrolled and pipelined architecture where each processed data block moves through its processing pipeline in step with correspondingly moving key, auxiliary data, and instructions in parallel pipelines.
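The interleaving mechanism, per-stream cached state that lets blocks from different streams arrive in any order, can be sketched without real AES. A toy XOR "cipher" stands in for the block cipher below, and all stream/key names are invented for illustration:

```python
# Sketch of interleaved multi-stream processing: each stream carries its own
# key and chained mask (the cached inter-block state of CBC-like modes).

class StreamContext:
    def __init__(self, key, iv):
        self.key = key
        self.mask = iv          # cached inter-block state

def process_block(ctx, block):
    # Chain the block with the cached mask, then "encrypt" with the key.
    # (A toy XOR stands in for an AES round here.)
    out = (block ^ ctx.mask) ^ ctx.key
    ctx.mask = out              # update the cached mask for the next block
    return out

streams = {"A": StreamContext(key=0x5A, iv=0x01),
           "B": StreamContext(key=0x3C, iv=0x02)}

# Blocks from the two streams arrive interleaved; each uses its own context,
# so neither stream's chaining is disturbed by the other's blocks.
arrivals = [("A", 0x10), ("B", 0x20), ("A", 0x11), ("B", 0x21)]
ciphertext = [process_block(streams[sid], blk) for sid, blk in arrivals]
```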

Proceedings ArticleDOI
TL;DR: This paper describes the data reduction pipeline of the GPI science instrument, which reduces an ensemble of high-contrast spectroscopic or polarimetric raw science images and calibration data into a final dataset ready for scientific analysis.
Abstract: The Gemini Planet Imager (GPI) high-contrast adaptive optics system, which is currently under construction for Gemini South, has an IFS as its science instrument. This paper describes the data reduction pipeline of the GPI science instrument. Written in IDL, with a modular architecture, this pipeline reduces an ensemble of high-contrast spectroscopic or polarimetric raw science images and calibration data into a final dataset ready for scientific analysis. It includes speckle suppression techniques such as angular and spectral differential imaging that are necessary to achieve the extreme contrast performance for which the instrument is designed. This paper also presents simulated raw GPI IFS data developed to test the pipeline.

Proceedings ArticleDOI
01 Dec 2010
TL;DR: This paper presents an FPGA-based architecture targeting 100 Gbps packet classification based on HyperSplit, a memory-efficient tree search algorithm, and presents an efficient pipeline architecture for mapping the HyperSplit tree.
Abstract: Multi-dimensional packet classification is a key task in network applications, such as firewalls, intrusion prevention and traffic management systems. With the rapid growth of network bandwidth, wire-speed multi-dimensional packet classification has become a major challenge for next-generation network processing devices. In this paper, we present an FPGA-based architecture targeting 100 Gbps packet classification. Our solution is based on HyperSplit, a memory-efficient tree search algorithm. First, we present an efficient pipeline architecture for mapping the HyperSplit tree. Special logic is designed to support two packets being processed every clock cycle. Second, a node-merging algorithm is proposed to reduce the number of pipeline stages without significantly increasing the memory requirement. Third, a leaf-pushing algorithm is designed to control the memory usage and to support on-the-fly rule updates. The implementation results show that our architecture can achieve more than 100 Gbps throughput for 64-byte minimum Ethernet packets. With a single Virtex-6 chip, our approach can handle over 50K rules. Compared with state-of-the-art multi-core network processor based solutions, our FPGA design offers at least a 10x improvement in throughput performance.
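A HyperSplit-style lookup walks a binary tree in which each internal node splits one packet dimension at a threshold and each leaf holds a rule. The sketch below is a simplified two-dimensional illustration of that search, not the paper's implementation (each tree level would map to one pipeline stage in hardware):

```python
# Split-tree packet classification sketch.
# Internal node: (dimension, threshold, left_subtree, right_subtree).
# Leaf: a rule identifier.

def classify(node, packet):
    """Walk the split tree until a leaf (rule id) is reached."""
    while isinstance(node, tuple):
        dim, threshold, left, right = node
        node = left if packet[dim] <= threshold else right
    return node

tree = (0, 100,
        (1, 50, "rule-A", "rule-B"),
        "rule-C")

r1 = classify(tree, (80, 40))    # dim0 <= 100, dim1 <= 50
r2 = classify(tree, (200, 10))   # dim0 > 100
```

Because every packet takes exactly one root-to-leaf path, the tree levels pipeline naturally: stage i holds level i's nodes, and a new packet can enter every cycle.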

Proceedings ArticleDOI
19 Jun 2010
TL;DR: Data Marshaling (DM) is proposed, a new technique to eliminate cache misses to inter-segment data and adds only 96 bytes/core of storage overhead, and significantly improves the performance of two promising Staged Execution models, Accelerated Critical Sections and producer-consumer pipeline parallelism, on both homogeneous and heterogeneous multi-core systems.
Abstract: Previous research has shown that Staged Execution (SE), i.e., dividing a program into segments and executing each segment at the core that has the data and/or functionality to best run that segment, can improve performance and save power. However, SE's benefit is limited because most segments access inter-segment data, i.e., data generated by the previous segment. When consecutive segments run on different cores, accesses to inter-segment data incur cache misses, thereby reducing performance. This paper proposes Data Marshaling (DM), a new technique to eliminate cache misses to inter-segment data. DM uses profiling to identify instructions that generate inter-segment data, and adds only 96 bytes/core of storage overhead. We show that DM significantly improves the performance of two promising Staged Execution models, Accelerated Critical Sections and producer-consumer pipeline parallelism, on both homogeneous and heterogeneous multi-core systems. In both models, DM can achieve almost all of the potential of ideally eliminating cache misses to inter-segment data. DM's performance benefit increases with the number of cores.
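The profiling step, finding instructions whose written data is consumed by the following segment, can be sketched over a toy memory trace. The trace format and addresses here are invented for illustration; DM does this in hardware/profiling infrastructure, not in Python:

```python
# Sketch: flag "generator" writes whose value is read by the next segment.

def find_generators(trace):
    """trace: list of (segment, op, addr) with op in {"w", "r"}.
    Returns the indices of writes read by the immediately following segment."""
    writes = {}        # addr -> (segment, instruction index) of the last write
    generators = set()
    for idx, (seg, op, addr) in enumerate(trace):
        if op == "w":
            writes[addr] = (seg, idx)
        elif op == "r" and addr in writes:
            wseg, widx = writes[addr]
            if seg == wseg + 1:        # inter-segment data: marshal this write
                generators.add(widx)
    return generators

trace = [(0, "w", 0x100), (0, "w", 0x104), (0, "r", 0x100),
         (1, "r", 0x104), (1, "w", 0x108), (2, "r", 0x108)]
gens = find_generators(trace)
```

Writes consumed within their own segment (like the one to 0x100) are deliberately excluded; only cross-segment producers need marshaling.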

Journal ArticleDOI
TL;DR: In this article, both analytical and numerical solutions of the problem have been developed, initially for slides acting normal to the pipeline but later extended to general conditions with the slide impacting the pipeline at some angle.
Abstract: Pipelines are frequently subjected to active loading from slide events both on land and in the offshore environment. Whether the pipeline is initially buried or lying close to the surface, and whether it crosses the unstable region or lies in the path of debris originating from further away, the main principles are unchanged. The pipeline will be subjected to active loading over some defined length, related to the width of the slide, and as it deforms will be restrained by transverse and longitudinal resistance in adjacent passive zones. Ultimately the pipeline may come to a stable deformed shape where the continued active loading from the slide is equilibrated by the membrane tension in the pipeline in addition to the passive resistance. This problem has been explored by various writers and these principles are well established. However, to date no attempt has been made to develop a standard set of parametric solutions, which is the purpose of the current paper. Both analytical and numerical solutions of the problem have been developed, initially for slides acting normal to the pipeline but later extended to general conditions with the slide impacting the pipeline at some angle. It is shown that analytical solutions based on certain idealizations maintain their accuracy over a wide parameter range, and the net effect of the slide in terms of stresses induced in the pipe wall and maximum displacement of the pipeline may be captured in appropriate dimensionless groups. Design charts are presented for slide widths of up to 10,000 times the pipeline diameter for a practical range of other parameters such as the ratios of passive normal and frictional resistance to the active loading. Although the solutions are limited by some of the idealizations, they should provide a useful starting point in design, providing a framework for a more detailed numerical analysis for the particular governing conditions.

Patent
07 Jan 2010
TL;DR: In this paper, the most significant bits (MSBs) of data representing zero can be input to the MAC block and sent directly to the add-subtract-accumulate unit.
Abstract: A multiplier-accumulator (MAC) block can be programmed to operate in one or more modes. When the MAC block implements at least one multiply-and-accumulate operation, the accumulator value can be zeroed without introducing clock latency or initialized in one clock cycle. To zero the accumulator value, the most significant bits (MSBs) of data representing zero can be input to the MAC block and sent directly to the add-subtract-accumulate unit. Alternatively, dedicated configuration bits can be set to clear the contents of a pipeline register for input to the add-subtract-accumulate unit.

Journal ArticleDOI
TL;DR: Two continuous-time input pipeline ADC architectures are introduced and the switched-capacitor sampling function is moved to the second stage input which greatly eases the sampling distortion requirements and obviates the need for an explicit front-end sample-and-hold function.
Abstract: Two continuous-time input pipeline ADC architectures are introduced. The continuous-time input approach overcomes many of the challenges associated with a pure switched-capacitor architecture. The resistive input load of the two new architectures provides a benign interface to external drive circuitry. The switched-capacitor sampling function is moved to the second stage input which greatly eases the sampling distortion requirements and obviates the need for an explicit front-end sample-and-hold function. The second ADC presented additionally provides inherent anti-alias filtering, allowing the possibility of eliminating costly anti-alias filters. This second architecture also eases the jitter requirements of the ADC clock when compared to switched capacitor pipeline ADCs. Measured results obtained from two proof of concept test chips fabricated in a 0.18 μm CMOS process validate the effectiveness of the proposed techniques.

Proceedings ArticleDOI
31 Aug 2010
TL;DR: This study compares three pipelined adder architectures: the classical pipelined ripple-carry adder, a variation that reduces register count, and an FPGA-specific implementation of the carry-select adder capable of providing lower-latency additions at a comparable price.
Abstract: Integer addition is a universal building block, and applications such as quad-precision floating-point or elliptic curve cryptography now demand precisions well beyond 64 bits. This study explores the trade-offs between size, latency and frequency for pipelined large-precision adders on FPGA. It compares three pipelined adder architectures: the classical pipelined ripple-carry adder, a variation that reduces register count, and an FPGA-specific implementation of the carry-select adder capable of providing lower latency additions at a comparable price. For each of these architectures, resource estimation models are defined, and used in an adder generator that selects the best architecture considering the target FPGA, the target operating frequency, and the addition bit width.
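The carry-select principle is easy to model in software: each chunk is summed twice (once assuming carry-in 0, once assuming carry-in 1) and the real carry arriving from the previous chunk selects between the two precomputed results. The 16-bit chunk width below is illustrative:

```python
# Behavioural sketch of a carry-select adder over fixed-width chunks.

CHUNK = 16
MASK = (1 << CHUNK) - 1

def carry_select_add(a, b, n_chunks):
    result, carry = 0, 0
    for i in range(n_chunks):
        ca = (a >> (i * CHUNK)) & MASK
        cb = (b >> (i * CHUNK)) & MASK
        # Precompute both candidate sums, as the two chunk adders would
        # in parallel hardware.
        s0 = ca + cb        # assuming carry-in 0
        s1 = ca + cb + 1    # assuming carry-in 1
        s = s1 if carry else s0
        result |= (s & MASK) << (i * CHUNK)
        carry = s >> CHUNK  # the selected carry feeds the next chunk
    return result

x, y = 0xFFFF_FFFF_1234, 0x0000_0001_0001
total = carry_select_add(x, y, 4)
```

In hardware the win is latency: the critical path is one chunk addition plus a chain of multiplexers, instead of a full-width carry ripple.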

Patent
12 Oct 2010
TL;DR: In this article, an apparatus is provided for x-ray inspection of a pipeline girth weld, consisting of a directional X-ray source 5 which is insertable into a pipeline section and is rotatable within the pipeline.
Abstract: An apparatus is provided for x-ray inspection of a pipeline girth weld. This comprises a directional x-ray source 5 which is insertable into a pipeline section and is rotatable within the pipeline. Means are provided to align the directional x-ray source with an external x-ray detector such that both may be rotated through 360 degrees substantially coaxially with the pipeline section. Means for sampling the data detected by the x-ray detector are provided so that it may be further analysed.

Proceedings ArticleDOI
06 Mar 2010
TL;DR: The principle of the AES algorithm and its detailed description and implementation on FPGA are introduced; compared with a pipelined structure, the design uses fewer hardware resources, is more cost-effective, and offers high security and reliability.
Abstract: This paper introduces the principle of the AES algorithm and its detailed description and implementation on FPGA. This system aims at a reduced hardware structure. Compared with the pipelined structure, it uses fewer hardware resources and is more cost-effective. The system also offers high security and reliability. This AES system can be widely used in terminal equipment.

Journal ArticleDOI
TL;DR: A scheme is proposed for the design of a high-speed pipeline VLSI architecture for the computation of the 1-D discrete wavelet transform (DWT), reducing the number and period of clock cycles with little or no overhead on the hardware resources by maximizing the inter- and intrastage parallelisms of the pipeline.
Abstract: In this paper, a scheme for the design of a high-speed pipeline VLSI architecture for the computation of the 1-D discrete wavelet transform (DWT) is proposed. The main focus of the scheme is on reducing the number and period of clock cycles for the DWT computation with little or no overhead on the hardware resources by maximizing the inter- and intrastage parallelisms of the pipeline. The interstage parallelism is enhanced by optimally mapping the computational load associated with the various DWT decomposition levels to the stages of the pipeline and by synchronizing their operations. The intrastage parallelism is enhanced by decomposing the filtering operation equally into two subtasks that can be performed independently in parallel and by optimally organizing the bitwise operations for performing each subtask so that the delay of the critical data path from a partial-product bit to a bit of the output sample for the filtering operation is minimized. It is shown that an architecture designed based on the proposed scheme requires a smaller number of clock cycles compared to that of the architectures employing comparable hardware resources. In fact, the requirement on the hardware resources of the architecture designed by using the proposed scheme also gets improved due to a smaller number of registers that need to be employed. Based on the proposed scheme, a specific example of designing an architecture for the DWT computation is considered. In order to assess the feasibility and the efficiency of the proposed scheme, the architecture thus designed is simulated and implemented on a field-programmable gate-array board. It is seen that the simulation and implementation results conform to the stated goals of the proposed scheme, thus making the scheme a viable approach for designing a practical and realizable architecture for real-time DWT computation.
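The per-level structure that the paper maps onto pipeline stages is easiest to see in a minimal multi-level 1-D DWT. The unnormalised Haar filters below are a stand-in for whatever filter bank an implementation actually uses:

```python
# Minimal multi-level 1-D DWT (unnormalised Haar): each level halves the
# signal into approximation and detail coefficients, and the approximation
# feeds the next level -- the interstage dataflow the pipeline exploits.

def haar_level(signal):
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def dwt(signal, levels):
    details = []
    for _ in range(levels):
        signal, d = haar_level(signal)   # each level feeds the next stage
        details.append(d)
    return signal, details

approx, details = dwt([4, 2, 6, 6, 5, 3, 2, 0], 2)
```

Note that level k processes only 1/2^k of the original samples, which is exactly the load imbalance the paper's interstage mapping has to even out across pipeline stages.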

Journal ArticleDOI
18 Mar 2010
TL;DR: The SIMD processor combines 4-bit processing elements (PEs) with SRAM on a small area and thus enables at the same time a high performance, high power efficiency, and high area efficiency and expands the application of real-time image processing technology to a variety of electronic devices.
Abstract: This paper describes a high performance scalable massively parallel single-instruction multiple-data (SIMD) processor and power/area efficient real-time image processing. The SIMD processor combines 4-bit processing elements (PEs) with SRAM on a small area and thus enables at the same time a high performance of 191 GOPS, a high power efficiency of 310 GOPS/W, and a high area efficiency of 31.6 GOPS/mm². The applied pipeline architecture is optimized to reduce the number of controller overhead cycles so that the SIMD parallel processing unit can be utilized during up to 99% of the operating time of typical application programs. The processor can also be optimized for low-cost, low-power, and high-performance multimedia system-on-a-chip (SoC) solutions. A combination of custom and automated implementation techniques enables scalability in the number of PEs. The processor has two operating modes, a normal frequency (NF) mode for higher power efficiency and a double frequency (DF) mode for higher performance. The combination of high area efficiency, high power efficiency, high performance, and the flexibility of the SIMD processor described in this paper expands the application of real-time image processing technology to a variety of electronic devices.

Journal ArticleDOI
TL;DR: Analysis of the performance of different filter orders and different address lengths of partial tables indicates that the choice of four-input partial tables yields the most area-time-power-efficient realization of FIR filters compared with existing LUT-less DA-based implementations, in both high-speed and medium-speed designs.

Patent
03 Feb 2010
TL;DR: In this article, a method and system for calculating pipeline integrity business risk score for a pipeline network is provided, which includes a step of first calculating a structural risk score, an operational risk score and a commercial risk score.
Abstract: A method and system for calculating pipeline integrity business risk score for a pipeline network is provided. The method includes a step of first calculating a structural risk score, an operational risk score and a commercial risk score for each pipeline segment in a pipeline network. The method further includes calculating pipeline integrity business risk score for each pipeline segment. The structural risk score, operational risk score, commercial risk score and pipeline integrity business risk score for each pipeline segment is rolled-up to calculate the respective risk scores of a pipeline network. The rolled-up risk scores are calculated by computing weight factors for each pipeline segment, relative risk scores weight of each pipeline segment and relative risk scores contribution of each pipeline segment. The system of the invention comprises executable files, dynamic linked libraries and risk score computing modules configured to display the risk scores using a dashboard.