
Showing papers on "Pipeline (computing)" published in 2018


Posted Content
TL;DR: PointPillars utilizes PointNets to learn a representation of point clouds organized in vertical columns (pillars), which can be used with any standard 2D convolutional detection architecture.
Abstract: Object detection in point clouds is an important aspect of many robotics applications such as autonomous driving. In this paper we consider the problem of encoding a point cloud into a format appropriate for a downstream detection pipeline. Recent literature suggests two types of encoders; fixed encoders tend to be fast but sacrifice accuracy, while encoders that are learned from data are more accurate, but slower. In this work we propose PointPillars, a novel encoder which utilizes PointNets to learn a representation of point clouds organized in vertical columns (pillars). While the encoded features can be used with any standard 2D convolutional detection architecture, we further propose a lean downstream network. Extensive experimentation shows that PointPillars outperforms previous encoders with respect to both speed and accuracy by a large margin. Despite only using lidar, our full detection pipeline significantly outperforms the state of the art, even among fusion methods, with respect to both the 3D and bird's eye view KITTI benchmarks. This detection performance is achieved while running at 62 Hz: a 2-4 fold runtime improvement. A faster version of our method matches the state of the art at 105 Hz. These benchmarks suggest that PointPillars is an appropriate encoding for object detection in point clouds.

1,311 citations
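A minimal NumPy sketch may help make the encoding concrete: points are binned into vertical columns (pillars) on the x-y grid, a stand-in per-point feature transform (a random linear layer in place of the learned PointNet) is max-pooled within each pillar, and the pooled features are scattered into a 2D pseudo-image that a 2D detection backbone could consume. The grid extents, feature width, and weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform([0, -20, -2, 0], [40, 20, 1, 1], size=(5000, 4))  # x, y, z, intensity

x0, y0, cell = 0.0, -20.0, 0.25          # assumed grid origin and pillar size (m)
W, H, C = 160, 160, 16                   # pseudo-image width/height, feature channels

ix = np.clip(((points[:, 0] - x0) / cell).astype(int), 0, W - 1)
iy = np.clip(((points[:, 1] - y0) / cell).astype(int), 0, H - 1)

# Decorate each point with its offset from the pillar center, then apply a
# stand-in linear layer + ReLU in place of the learned PointNet.
cx, cy = x0 + (ix + 0.5) * cell, y0 + (iy + 0.5) * cell
feats = np.column_stack([points, points[:, 0] - cx, points[:, 1] - cy])  # (N, 6)
point_feats = np.maximum(feats @ rng.normal(size=(6, C)), 0.0)           # (N, C)

# Max-pool per pillar and scatter into the (C, H, W) pseudo-image.
pseudo_image = np.zeros((C, H, W))
flat = iy * W + ix
for c in range(C):
    np.maximum.at(pseudo_image[c].ravel(), flat, point_feats[:, c])
print(pseudo_image.shape, int((pseudo_image.max(axis=0) > 0).sum()), "occupied cells")
```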


Posted Content
TL;DR: GPipe is introduced, a pipeline parallelism library that allows scaling any network that can be expressed as a sequence of layers by pipelining different sub-sequences of layers on separate accelerators, resulting in almost linear speedup when a model is partitioned across multiple accelerators.
Abstract: Scaling up deep neural network capacity has been known as an effective approach to improving model quality for several different machine learning tasks. In many cases, increasing model capacity beyond the memory limit of a single accelerator has required developing special algorithms or infrastructure. These solutions are often architecture-specific and do not transfer to other tasks. To address the need for efficient and task-independent model parallelism, we introduce GPipe, a pipeline parallelism library that allows scaling any network that can be expressed as a sequence of layers. By pipelining different sub-sequences of layers on separate accelerators, GPipe provides the flexibility of scaling a variety of different networks to gigantic sizes efficiently. Moreover, GPipe utilizes a novel batch-splitting pipelining algorithm, resulting in almost linear speedup when a model is partitioned across multiple accelerators. We demonstrate the advantages of GPipe by training large-scale neural networks on two different tasks with distinct network architectures: (i) Image Classification: We train a 557-million-parameter AmoebaNet model and attain a top-1 accuracy of 84.4% on ImageNet-2012, (ii) Multilingual Neural Machine Translation: We train a single 6-billion-parameter, 128-layer Transformer model on a corpus spanning over 100 languages and achieve better quality than all bilingual models.

688 citations
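A back-of-the-envelope model of the batch-splitting schedule described above: a mini-batch is split into M micro-batches that flow through K pipeline stages. Ignoring communication and assuming equal per-stage time, the pipelined pass takes (M + K - 1) stage-times instead of M * K, so utilization approaches a K-fold speedup as M grows. The numbers are illustrative, not measurements from the paper.

```python
def pipeline_speedup(num_stages: int, num_microbatches: int) -> float:
    sequential = num_stages * num_microbatches        # one accelerator busy at a time
    pipelined = num_stages + num_microbatches - 1     # steady-state overlap
    return sequential / pipelined

for m in (1, 4, 16, 64):
    print(f"K=4 stages, M={m:3d} micro-batches -> speedup {pipeline_speedup(4, m):.2f}x")
```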


Book ChapterDOI
08 Sep 2018
TL;DR: Empirical comparison with the previous state-of-the-art crowd counting methods shows that this method achieves the lowest mean absolute error on three challenging crowd counting benchmarks: Shanghaitech, WorldExpo’10, and UCF datasets.
Abstract: In this work, we tackle the problem of crowd counting in images. We present a Convolutional Neural Network (CNN) based density estimation approach to solve this problem. Predicting a high resolution density map in one go is a challenging task. Hence, we present a two branch CNN architecture for generating high resolution density maps, where the first branch generates a low resolution density map, and the second branch incorporates the low resolution prediction and feature maps from the first branch to generate a high resolution density map. We also propose a multi-stage extension of our approach where each stage in the pipeline utilizes the predictions from all the previous stages. Empirical comparison with the previous state-of-the-art crowd counting methods shows that our method achieves the lowest mean absolute error on three challenging crowd counting benchmarks: Shanghaitech, WorldExpo’10, and UCF datasets.

299 citations
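A minimal PyTorch sketch (not the authors' architecture) of the two-branch idea above: branch one predicts a low-resolution density map, and branch two consumes the input image together with the upsampled branch-one feature maps and prediction to produce the high-resolution map. Layer widths and kernel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchDensityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.feat = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))           # 1/2-resolution features
        self.low_head = nn.Conv2d(16, 1, 1)                  # low-res density map
        self.high = nn.Sequential(nn.Conv2d(3 + 16 + 1, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 1, 1))       # high-res density map

    def forward(self, x):
        f = self.feat(x)
        low = self.low_head(f)                               # (B, 1, H/2, W/2)
        up = lambda t: F.interpolate(t, size=x.shape[-2:], mode="bilinear",
                                     align_corners=False)
        high = self.high(torch.cat([x, up(f), up(low)], dim=1))
        return low, high

low, high = TwoBranchDensityNet()(torch.randn(1, 3, 64, 64))
print(low.shape, high.shape)   # (1, 1, 32, 32) and (1, 1, 64, 64)
```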


Posted ContentDOI
14 Jun 2018-bioRxiv
TL;DR: Warp, a software for real-time evaluation, correction, and processing of cryo-EM data during their acquisition, includes a deep learning-based particle picking algorithm that rivals human accuracy to make the pre-processing pipeline truly automated.
Abstract: The acquisition of cryo-electron microscopy (cryo-EM) data from biological specimens is currently largely uncoupled from subsequent data evaluation, correction and processing. Therefore, the acquisition strategy is difficult to optimize during data collection, often leading to suboptimal microscope usage and disappointing results. Here we provide Warp, a software for real-time evaluation, correction, and processing of cryo-EM data during their acquisition. Warp evaluates and monitors key parameters for each recorded micrograph or tomographic tilt series in real time. Warp also rapidly corrects micrographs for global and local motion, and estimates the local defocus with the use of novel algorithms. The software further includes a deep learning-based particle picking algorithm that rivals human accuracy to make the pre-processing pipeline truly automated. The output from Warp can be directly fed into established tools for particle classification and 3D image reconstruction. In a benchmarking study we show that Warp automatically processed a published cryo-EM data set for influenza virus hemagglutinin, leading to an improvement of the nominal resolution from 3.9 Å to 3.2 Å. Warp is easy to install, computationally inexpensive, and has an intuitive and streamlined user interface.

227 citations


Journal ArticleDOI
TL;DR: This paper proposes and examines a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks and ResNets, and shows that a low-capacity FCN model can serve as a pre-processor to normalize medical input data.

184 citations


Posted Content
TL;DR: Experiments with five different DNNs on two different clusters show that PipeDream is up to 5x faster in time-to-accuracy compared to data-parallel training.
Abstract: PipeDream is a Deep Neural Network (DNN) training system for GPUs that parallelizes computation by pipelining execution across multiple machines. Its pipeline parallel computing model avoids the slowdowns faced by data-parallel training when large models and/or limited network bandwidth induce high communication-to-computation ratios. PipeDream reduces communication by up to 95% for large DNNs relative to data-parallel training, and allows perfect overlap of communication and computation. PipeDream keeps all available GPUs productive by systematically partitioning DNN layers among them to balance work and minimize communication, versions model parameters for backward pass correctness, and schedules the forward and backward passes of different inputs in round-robin fashion to optimize "time to target accuracy". Experiments with five different DNNs on two different clusters show that PipeDream is up to 5x faster in time-to-accuracy compared to data-parallel training.

146 citations
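A toy scheduling sketch of the round-robin forward/backward interleaving and weight versioning described above: each stage stashes the parameter version used for a minibatch's forward pass so that its backward pass sees the same weights, even though newer versions are installed in between. This is an illustrative model, not PipeDream's implementation.

```python
class Stage:
    def __init__(self):
        self.version = 0                 # current parameter version
        self.in_flight = {}              # minibatch id -> version used in forward

    def forward(self, mb: int):
        self.in_flight[mb] = self.version
        print(f"fwd mb{mb} with weights v{self.version}")

    def backward(self, mb: int):
        v = self.in_flight.pop(mb)       # stashed version keeps backward consistent
        print(f"bwd mb{mb} with weights v{v}")
        self.version += 1                # apply the update -> new version

stage = Stage()
for mb in (0, 1):                        # warm-up: two forwards in flight
    stage.forward(mb)
for mb in range(2, 6):                   # steady state: one forward, one backward
    stage.forward(mb)
    stage.backward(mb - 2)
```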


Proceedings ArticleDOI
15 Oct 2018
TL;DR: In this article, the authors investigate a special type of branch predictor that is responsible for predicting return addresses and propose two new attack variants using RSBs that give attackers similar capabilities as the documented Spectre attacks.
Abstract: Speculative execution is an optimization technique that has been part of CPUs for over a decade. It predicts the outcome and target of branch instructions to avoid stalling the execution pipeline. However, until recently, the security implications of speculative code execution have not been studied. In this paper, we investigate a special type of branch predictor that is responsible for predicting return addresses. To the best of our knowledge, we are the first to study return address predictors and their consequences for the security of modern software. In our work, we show how return stack buffers (RSBs), the core unit of return address predictors, can be used to trigger misspeculations. Based on this knowledge, we propose two new attack variants using RSBs that give attackers similar capabilities as the documented Spectre attacks. We show how local attackers can gain arbitrary speculative code execution across processes, e.g., to leak passwords another user enters on a shared system. Our evaluation showed that the recent Spectre countermeasures deployed in operating systems can also cover such RSB-based cross-process attacks. Yet we then demonstrate that attackers can trigger misspeculation in JIT environments in order to leak arbitrary memory content of browser processes. Reading outside the sandboxed memory region with JIT-compiled code is still possible with 80% accuracy on average.

143 citations
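A toy model (pure Python, no actual speculation) of the return-stack-buffer behavior exploited above: calls push return addresses onto a small circular buffer, returns pop the predicted target. Once call depth exceeds the buffer size, predictions on the way back up diverge from the architectural return addresses, which is the kind of misprediction the attacks trigger deliberately. The buffer size is an illustrative assumption.

```python
RSB_SIZE = 4                             # real RSBs typically hold 16+ entries

rsb, top = [None] * RSB_SIZE, 0
arch_stack = []                          # the architecturally correct return stack

def call(ret_addr):
    global top
    arch_stack.append(ret_addr)
    rsb[top % RSB_SIZE] = ret_addr       # circular: deep call chains overwrite entries
    top += 1

def ret():
    global top
    top -= 1
    predicted = rsb[top % RSB_SIZE]
    actual = arch_stack.pop()
    flag = "MISPREDICT" if predicted != actual else "ok"
    print(f"ret -> predicted {predicted}, actual {actual}  [{flag}]")

for addr in range(6):                    # call depth 6 exceeds RSB_SIZE
    call(addr)
for _ in range(6):
    ret()                                # last returns mispredict after the wrap
```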


Journal ArticleDOI
TL;DR: This work evaluates the accuracy and precision of the Diffusion parameter EStImation with Gibbs and NoisE Removal (DESIGNER) pipeline, developed to identify and minimize common sources of methodological variability including: thermal noise, Gibbs ringing artifacts, Rician bias, EPI and eddy current induced spatial distortions, and motion‐related artifacts.

112 citations


Proceedings ArticleDOI
21 May 2018
TL;DR: In this paper, a flexible pipeline is proposed to adopt any 2D detection network and fuse it with a 3D point cloud to generate 3D information with minimal changes to the detection networks.
Abstract: Autonomous driving requires 3D perception of vehicles and other objects in the environment. Most current methods support only 2D vehicle detection. This paper proposes a flexible pipeline to adopt any 2D detection network and fuse it with a 3D point cloud to generate 3D information with minimal changes to the 2D detection networks. To identify the 3D box, an effective model fitting algorithm is developed based on generalised car models and score maps. A two-stage convolutional neural network (CNN) is proposed to refine the detected 3D box. This pipeline is tested on the KITTI dataset using two different 2D detection networks. The 3D detection results based on these two networks are similar, demonstrating the flexibility of the proposed pipeline. The results rank second among the 3D detection algorithms, indicating its competence in 3D detection.

111 citations
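A minimal NumPy sketch of the first step of such a fusion pipeline: lidar points are projected into the image with a camera projection matrix, and the points falling inside a 2D detection box are kept as the frustum of candidates for 3D model fitting. The projection matrix and box below are made up for illustration; on KITTI they would come from the calibration files and the 2D detector.

```python
import numpy as np

P = np.array([[700.0, 0.0, 320.0, 0.0],          # assumed pinhole projection matrix
              [0.0, 700.0, 240.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

rng = np.random.default_rng(1)
pts = rng.uniform([-10, -2, 5], [10, 2, 40], size=(2000, 3))   # camera coords, z forward

hom = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
uvw = hom @ P.T
uv = uvw[:, :2] / uvw[:, 2:3]                    # pixel coordinates

box = (250, 180, 420, 330)                       # 2D detection: u_min, v_min, u_max, v_max
inside = ((uv[:, 0] >= box[0]) & (uv[:, 0] <= box[2]) &
          (uv[:, 1] >= box[1]) & (uv[:, 1] <= box[3]) & (uvw[:, 2] > 0))
frustum = pts[inside]                            # candidates for 3D box fitting
print(f"{inside.sum()} of {len(pts)} points fall inside the 2D box")
```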


Journal ArticleDOI
TL;DR: In this paper, a systematic method is developed for supply reliability assessment of natural gas pipeline networks, which integrates stochastic processes, graph theory and thermal-hydraulic simulation to account for uncertainty and complexity.

108 citations


Journal ArticleDOI
Hongyuan Fang, Bin Li, Fuming Wang, Yuke Wang, Can Cui
TL;DR: In this article, the authors investigated the mechanical behavior of a drainage pipeline under traffic load before and after trenchless repair with polymer grouting and cement grouting, using the three-dimensional finite element method (FEM).

Posted Content
TL;DR: A robust algorithm for 2-manifold generation of various kinds of ShapeNet models that can be applied efficiently to all ShapeNet models with the guarantee of correct 2-manifold topology.
Abstract: In this paper, we describe a robust algorithm for 2-manifold generation of various kinds of ShapeNet models. The input of our pipeline is a triangle mesh, with a set of vertices and triangular faces. The output of our pipeline is a 2-manifold with vertices roughly uniformly distributed on the geometry surface. Our algorithm uses an octree to represent the original mesh, and constructs the surface by isosurface extraction. Finally, we project the vertices to the original mesh to achieve high precision. As a result, our method can be applied efficiently to all ShapeNet models with the guarantee of correct 2-manifold topology.
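A small sketch of the isosurface-extraction step at the heart of such a pipeline, using scikit-image's marching cubes on a voxelized shape (an analytic sphere standing in for the octree representation of a ShapeNet mesh). Marching cubes yields a closed triangle mesh from the volume; the paper's octree construction, uniform vertex redistribution, and re-projection steps are not reproduced here.

```python
import numpy as np
from skimage.measure import marching_cubes

n = 64
ax = np.linspace(-1, 1, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
volume = np.sqrt(x**2 + y**2 + z**2)          # distance-like scalar field

# Extract the isosurface at radius 0.8; verts are in voxel coordinates.
verts, faces, normals, values = marching_cubes(volume, level=0.8)
print(f"{len(verts)} vertices, {len(faces)} triangular faces")
```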

Journal ArticleDOI
TL;DR: A recurrent neural network (RNN) accelerator design with a resistive random-access memory (ReRAM)-based processing-in-memory (PIM) architecture is presented, distinguished from prior ReRAM-based convolutional neural network accelerators.
Abstract: We present a recurrent neural network (RNN) accelerator design with resistive random-access memory (ReRAM)-based processing-in-memory (PIM) architecture. Distinguished from prior ReRAM-based convolutional neural network accelerators, we redesign the system to make it suitable for RNN acceleration. We measure the system throughput and energy efficiency with the detailed circuit and device characterization. Reprogrammability is enabled with our design, and an RNN-friendly pipeline is employed to increase the system throughput. We observe that on average the proposed system achieves a 79× improvement in computing efficiency compared with a graphics processing unit baseline. Our simulation also indicates that to maintain high accuracy and computing efficiency, the read noise standard deviation should be less than 0.2, the device resistance should be at least 1 MΩ, and the device write latency should be minimized.
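A small NumPy experiment in the spirit of the noise analysis above: weights stored as ReRAM conductances are perturbed by multiplicative Gaussian read noise, and the relative error of the analog matrix-vector product is measured as the noise standard deviation grows. The lognormal perturbation of each conductance is a common simplification and an assumption here, not the paper's exact device model.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 128))          # weight matrix mapped to a crossbar
x = rng.normal(size=128)
clean = W @ x

for sigma in (0.05, 0.1, 0.2, 0.4):
    errs = []
    for _ in range(20):
        noisy_W = W * np.exp(rng.normal(0.0, sigma, W.shape))  # one read-noise draw
        errs.append(np.linalg.norm(noisy_W @ x - clean) / np.linalg.norm(clean))
    print(f"sigma={sigma:.2f}: mean relative error {np.mean(errs):.3f}")
```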

Proceedings ArticleDOI
Ximing Qiao, Xiong Cao, Huanrui Yang, Linghao Song, Hai Li
24 Jun 2018
TL;DR: AtomLayer is proposed, a universal ReRAM-based accelerator supporting both efficient CNN training and inference, which can achieve higher power efficiency than ISAAC in inference and PipeLayer in training while reducing the footprint by 15×.
Abstract: Although ReRAM-based convolutional neural network (CNN) accelerators have been widely studied, state-of-the-art solutions suffer from either incapability of training (e.g., ISAAC [1]) or inefficiency of inference (e.g., PipeLayer [2]) due to the pipeline design. In this work, we propose AtomLayer, a universal ReRAM-based accelerator to support both efficient CNN training and inference. AtomLayer uses atomic layer computation, which processes only one network layer at a time, to eliminate pipeline-related issues such as long latency, pipeline bubbles and large on-chip buffer overhead. For further optimization, we use a unique filter mapping and a data reuse system to minimize the cost of layer switching and DRAM access. Our experimental results show that AtomLayer can achieve higher power efficiency than ISAAC in inference (1.1×) and PipeLayer in training (1.6×), respectively, while reducing the footprint by 15×.
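A rough buffer-footprint model of why atomic layer computation helps with the on-chip buffer overhead mentioned above: a layer pipeline keeps the inter-layer activations of every stage in flight at once, while processing one layer at a time only ever needs the input and output of the current layer. The layer sizes are illustrative, not from the paper.

```python
# Illustrative activation sizes (KB) for a small CNN's layer outputs.
acts_kb = [392, 392, 196, 196, 98, 98, 49]

# Layer-pipelined execution: all inter-layer activations live on chip at once.
pipeline_buffer = sum(acts_kb)

# Atomic layer computation: only the current layer's input and output are kept.
atomic_buffer = max(a + b for a, b in zip(acts_kb, acts_kb[1:]))

print(f"pipeline buffer ~{pipeline_buffer} KB, atomic ~{atomic_buffer} KB")
```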

Journal ArticleDOI
28 Aug 2018-Sensors
TL;DR: It is demonstrated that the approach is potentially capable of detection and localization of gas pipeline leaks with leak rates down to 0.1% of the pipeline flow volume and might be of interest for monitoring of short- and medium-length gas pipelines.
Abstract: In the presented work, the potential of fiber-optic distributed acoustic sensing (DAS) for detection of small gas pipeline leaks (<1%) is investigated. Helical wrapping of the sensing fiber directly around the pipeline is used to increase the system sensitivity for detection of weak leak-induced vibrations. DAS measurements are supplemented with reference accelerometer data to facilitate analysis and interpretation of recorded vibration signals. The results reveal that a DAS system using the direct fiber application approach is capable of detecting pipeline natural vibrations excited by the broadband noise generated by the leaking medium. In the performed experiment, pipeline vibration modes with acceleration magnitudes down to a single μg were detected. A simple leak detection approach based on spectral integration of time-averaged DAS signals in the frequency domain was proposed. Potential benefits and limitations of the presented monitoring approach were discussed with respect to its practical applicability. We demonstrated that the approach is potentially capable of detection and localization of gas pipeline leaks with leak rates down to 0.1% of the pipeline flow volume and might be of interest for monitoring of short- and medium-length gas pipelines.
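A minimal sketch of the spectral-integration indicator described above: the time-averaged spectrum of each sensing channel is integrated over a frequency band, and a channel whose band power exceeds a baseline threshold flags a possible leak. The sampling rate, band, synthetic leak amplitude, and threshold are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch, butter, lfilter

fs = 1000.0                                   # Hz, assumed DAS channel sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

quiet = rng.normal(0.0, 1.0, t.size)          # background channel
b, a = butter(4, [100.0, 200.0], btype="bandpass", fs=fs)
leak = quiet + 1.5 * lfilter(b, a, rng.normal(0.0, 1.0, t.size))
# ^ stand-in for broadband leak-induced vibration concentrated in 100-200 Hz

def band_power(x, lo=100.0, hi=200.0):
    f, pxx = welch(x, fs=fs, nperseg=1024)    # time-averaged spectrum
    sel = (f >= lo) & (f <= hi)
    return float(np.sum(pxx[sel]) * (f[1] - f[0]))  # integrate over the band

baseline = band_power(quiet)
print(f"quiet: {baseline:.4f}, leak channel: {band_power(leak):.4f}")
print("leak flagged:", band_power(leak) > 2.0 * baseline)
```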

Journal ArticleDOI
TL;DR: A new architecture for an FPGA-based CNN accelerator is proposed that maps each layer to its own on-chip unit, all working concurrently as a pipeline, which can achieve maximum resource utilization as well as optimal computational efficiency.
Abstract: Recently, field-programmable gate arrays (FPGAs) have been widely used in the implementations of hardware accelerators for convolutional neural networks (CNNs). However, most of these existing accelerators are designed in the same idea as their ASIC counterparts, in which all operations from different layers are mapped to the same hardware units and work in a multiplexed way. This manner does not take full advantage of the reconfigurability and customizability of FPGAs, resulting in a certain degree of computational efficiency degradation. In this paper, we propose a new architecture for an FPGA-based CNN accelerator that maps all the layers to their own on-chip units, working concurrently as a pipeline. A comprehensive mapping and optimization methodology based on a roofline-model-oriented optimization model is proposed, which can achieve maximum resource utilization as well as optimal computational efficiency. Besides, to ease the programming burden, we propose a design framework which provides a one-stop function for developers to generate the accelerator with our optimization methodology. We evaluate our proposal by implementing different modern CNN models on Xilinx Zynq-7020 and Virtex-7 690t FPGA platforms. Experimental results show that our implementations can achieve a peak performance of 910.2 GOPS on Virtex-7 690t, and 36.36 GOP/s/W energy efficiency on Zynq-7020, which are superior to the previous approaches.
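A worked example of the roofline model used above for design-space exploration: attainable throughput for a layer is the lesser of the platform's peak compute and its memory bandwidth times the layer's operational intensity. The platform numbers below are illustrative, not the Zynq or Virtex figures.

```python
def roofline(peak_gops: float, bandwidth_gbs: float, ops_per_byte: float) -> float:
    return min(peak_gops, bandwidth_gbs * ops_per_byte)

peak, bw = 200.0, 4.0                    # GOP/s and GB/s (assumed platform)
for intensity in (5.0, 25.0, 50.0, 100.0):
    attainable = roofline(peak, bw, intensity)
    bound = "memory-bound" if attainable < peak else "compute-bound"
    print(f"{intensity:6.1f} op/byte -> {attainable:6.1f} GOP/s ({bound})")
```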

Journal ArticleDOI
TL;DR: The system architecture described includes specific modules to deal with the fact that continuous online monitoring needs to be carried out, while addressing the need to limit false alarms to reasonable rates.
Abstract: This paper presents an online augmented surveillance system that aims at real-time monitoring of activities along a pipeline. The system is deployed in a fully realistic scenario and exposed to real activities carried out in unknown places at unknown times within a given test time interval (the so-called blind field tests). We describe the system architecture that includes specific modules to deal with the fact that continuous online monitoring needs to be carried out, while addressing the need of limiting false alarms to reasonable rates. To the best of our knowledge, this is the first published work in which a pipeline integrity threat detection system is deployed in a realistic scenario (using a fiber optic along an active gas pipeline) and is thoroughly and objectively evaluated under realistic blind conditions. The system integrates two operation modes: the machine+activity identification mode identifies the machine that carries out a certain activity along the pipeline, and the threat detection mode directly identifies if the activity along the pipeline is a threat or not. The blind field tests are carried out in two different pipeline sections: the first section corresponds to the case in which the sensor is close to the sensed area, while the second one places the sensed area about 35 km from the sensor. Results of the machine+activity identification mode showed an average machine+activity classification rate of 46.6%. For the threat detection mode, eight out of ten threats were correctly detected, with only one false alarm appearing in a 55.5-h sensed period.

Journal ArticleDOI
01 Nov 2018-Energy
TL;DR: Accounting for both gas supply capacity and market demand uncertainties, the calculations of these two items are integrated into a single Monte Carlo simulation to assess the gas supply reliability of natural gas transmission pipeline systems.
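A toy Monte Carlo sketch of the idea in the TL;DR above: supply capacity and market demand are sampled jointly in one simulation, and gas supply reliability is estimated as the fraction of samples in which supply covers demand. The distributions and outage probability are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
capacity = rng.normal(100.0, 8.0, N)                    # deliverable volume (arb. units)
capacity *= rng.choice([1.0, 0.7], N, p=[0.95, 0.05])   # random component outages
demand = rng.normal(90.0, 10.0, N)                      # market demand uncertainty

reliability = np.mean(capacity >= demand)               # P(supply covers demand)
print(f"estimated gas supply reliability: {reliability:.3f}")
```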

Proceedings ArticleDOI
02 Jun 2018
TL;DR: An algorithm-architecture co-designed system, Euphrates, that simultaneously improves the energy efficiency and performance of continuous vision tasks and co-optimizes different SoC IP blocks in the vision pipeline collectively.
Abstract: Continuous computer vision (CV) tasks increasingly rely on convolutional neural networks (CNN). However, CNNs have massive compute demands that far exceed the performance and energy constraints of mobile devices. In this paper, we propose and develop an algorithm-architecture co-designed system, Euphrates, that simultaneously improves the energy-efficiency and performance of continuous vision tasks. Our key observation is that changes in pixel data between consecutive frames represent visual motion. We first propose an algorithm that leverages this motion information to relax the number of expensive CNN inferences required by continuous vision applications. We co-design a mobile System-on-a-Chip (SoC) architecture to maximize the efficiency of the new algorithm. The key to our architectural augmentation is to co-optimize different SoC IP blocks in the vision pipeline collectively. Specifically, we propose to expose the motion data that is naturally generated by the Image Signal Processor (ISP) early in the vision pipeline to the CNN engine. Measurement and synthesis results show that Euphrates achieves up to 66% SoC-level energy savings (4x for the vision computations), with only 1% accuracy loss.
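A schematic loop (plain Python with stub functions) of the idea above: run the expensive CNN only every N-th frame and, in between, update the previous result with motion information that the ISP already computes. The stub `isp_motion_vector` and the simple box translation are illustrative assumptions standing in for the SoC's motion data and extrapolation logic.

```python
import random

def cnn_detect(frame):                   # stand-in for the expensive CNN inference
    return {"box": [100, 80, 180, 160]}

def isp_motion_vector(prev, cur):        # stand-in for ISP-provided motion data
    return (random.randint(-3, 3), random.randint(-2, 2))

KEYFRAME_EVERY = 4
result, prev = None, None
for i, frame in enumerate(range(16)):    # frames stubbed as integers
    if i % KEYFRAME_EVERY == 0:
        result = cnn_detect(frame)       # full inference on keyframes only
    else:
        dx, dy = isp_motion_vector(prev, frame)
        x0, y0, x1, y1 = result["box"]   # extrapolate: translate the last box
        result = {"box": [x0 + dx, y0 + dy, x1 + dx, y1 + dy]}
    prev = frame
    print(i, "cnn" if i % KEYFRAME_EVERY == 0 else "extrapolated", result["box"])
```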

Journal ArticleDOI
TL;DR: The gstlal calibration pipeline is also used in high latency to recalibrate the data, which is necessary due mainly to online dropouts in the calibrated data and identified improvements to the calibration models or filters.
Abstract: Advanced LIGO's raw detector output needs to be calibrated to compute dimensionless strain h(t). Calibrated strain data is produced in the time domain using both a low-latency, online procedure and a high-latency, offline procedure. The low-latency h(t) data stream is produced in two stages, the first of which is performed on the same computers that operate the detector's feedback control system. This stage, referred to as the front-end calibration, uses infinite impulse response (IIR) filtering and performs all operations at a 16 384 Hz digital sampling rate. Due to several limitations, this procedure currently introduces certain systematic errors in the calibrated strain data, motivating the second stage of the low-latency procedure, known as the low-latency gstlal calibration pipeline. The gstlal calibration pipeline uses finite impulse response (FIR) filtering to apply corrections to the output of the front-end calibration. It applies time-dependent correction factors to the sensing and actuation components of the calibrated strain to reduce systematic errors. The gstlal calibration pipeline is also used in high latency to recalibrate the data, which is necessary due mainly to online dropouts in the calibrated data and identified improvements to the calibration models or filters.
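A minimal sketch of the FIR-correction idea described above: the output of an upstream front-end calibration is convolved with an FIR filter that compensates an assumed response error, and a slowly varying correction factor scales the result. The filter design (a simple gain-flattening FIR built with SciPy) and all data here are illustrative assumptions, not the pipeline's actual filters or models.

```python
import numpy as np
from scipy.signal import firwin2, lfilter

fs = 16384.0                              # Hz, the front-end sampling rate
t = np.arange(0, 1, 1 / fs)
h_front = np.sin(2 * np.pi * 35.0 * t)    # stand-in for front-end h(t) output

# Assume the front-end response droops above ~1 kHz; design an FIR applying
# the inverse (a gentle high-frequency boost).
freqs = [0.0, 1000.0, 4000.0, fs / 2]
gains = [1.0, 1.0, 1.25, 1.25]
fir = firwin2(255, freqs, gains, fs=fs)

kappa = 1.0 + 0.01 * np.sin(2 * np.pi * 0.1 * t)   # slow time-dependent factor
h_corrected = kappa * lfilter(fir, 1.0, h_front)
print(h_corrected.shape)
```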

Posted Content
TL;DR: In this article, a shape-constrained bi-ventricular segmentation pipeline for short-axis CMR volumetric images is proposed, which employs a fully convolutional network (FCN) that learns segmentation and landmark localisation simultaneously.
Abstract: Deep learning approaches have achieved state-of-the-art performance in cardiac magnetic resonance (CMR) image segmentation. However, most approaches have focused on learning image intensity features for segmentation, whereas the incorporation of anatomical shape priors has received less attention. In this paper, we combine a multi-task deep learning approach with atlas propagation to develop a shape-constrained bi-ventricular segmentation pipeline for short-axis CMR volumetric images. The pipeline first employs a fully convolutional network (FCN) that learns segmentation and landmark localisation tasks simultaneously. The architecture of the proposed FCN uses a 2.5D representation, thus combining the computational advantage of 2D FCNs and the capability of addressing 3D spatial consistency without compromising segmentation accuracy. Moreover, the refinement step is designed to explicitly enforce a shape constraint and improve segmentation quality. This step is effective for overcoming image artefacts (e.g. due to different breath-hold positions and large slice thickness), which preclude the creation of anatomically meaningful 3D cardiac shapes. The proposed pipeline is fully automated, due to the network's ability to infer landmarks, which are then used downstream in the pipeline to initialise atlas propagation. We validate the pipeline on 1831 healthy subjects and 649 subjects with pulmonary hypertension. Extensive numerical experiments on the two datasets demonstrate that our proposed method is robust and capable of producing accurate, high-resolution and anatomically smooth bi-ventricular 3D models, despite the artefacts in input CMR volumes.
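A tiny NumPy sketch of the 2.5D representation mentioned above: each short-axis slice is fed to a 2D network together with its immediate neighbours stacked as channels, so in-plane computation stays 2D while the network still sees through-plane context. The volume size and number of neighbouring slices are illustrative assumptions.

```python
import numpy as np

volume = np.random.rand(12, 192, 192)            # (slices, H, W) CMR stack
k = 1                                            # one neighbour on each side

def slice_to_2p5d(vol, idx, k=1):
    ids = np.clip(np.arange(idx - k, idx + k + 1), 0, len(vol) - 1)
    return vol[ids]                              # (2k+1, H, W) channel stack

batch = np.stack([slice_to_2p5d(volume, i, k) for i in range(len(volume))])
print(batch.shape)                               # (12, 3, 192, 192) -> 2D FCN input
```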

Proceedings ArticleDOI
05 Nov 2018
TL;DR: The Tile-Grained Pipeline Architecture (TGPA) is proposed, a heterogeneous design which supports pipelined execution of multiple tiles within a single input image on multiple heterogeneous accelerators.
Abstract: FPGAs are more and more widely used as reconfigurable hardware accelerators for applications leveraging convolutional neural networks (CNNs) in recent years. Previous designs normally adopt a uniform accelerator architecture that processes all layers of a given CNN model one after another. This homogeneous design methodology usually has a dynamic resource underutilization issue due to the tensor shape diversity of different layers. As a result, designs equipped with heterogeneous accelerators specific for different layers were proposed to resolve this issue. However, existing heterogeneous designs sacrifice latency for throughput by concurrent execution of multiple input images on different accelerators. In this paper, we propose an architecture named Tile-Grained Pipeline Architecture (TGPA) for low latency CNN inference. TGPA adopts a heterogeneous design which supports pipelined execution of multiple tiles within a single input image on multiple heterogeneous accelerators. The accelerators are partitioned onto different FPGA dies to guarantee high frequency. A partition strategy is designed to maximize on-chip resource utilization. Experiment results show that TGPA designs for different CNN models achieve up to 40% performance improvement over homogeneous designs, and 3X latency reduction over state-of-the-art designs.

Journal ArticleDOI
TL;DR: In this article, the effect of foundation pit excavation on a buried pipeline was investigated: a three-dimensional model of the pipeline and foundation pit was established, and the variation patterns of the pipeline's deformation under the excavation were examined.

Proceedings ArticleDOI
09 Mar 2018
TL;DR: The OpenVCT Framework is presented, consisting of graphical software to design a sequence of processing steps for the VCT pipeline; management software that coordinates the pipeline execution and manipulates and retrieves phantoms and images using a relational database; and a server that executes the individual steps of the virtual patient accrual process using GPU-optimized software.
Abstract: Virtual clinical trials (VCTs) have a critical role in preclinical testing of imaging systems. A VCT pipeline has been developed to model the human body anatomy, image acquisition systems, display and processing, and image analysis and interpretation. VCTs require the execution of multiple computer simulations in a reasonable time. This study presents the OpenVCT Framework, consisting of graphical software to design a sequence of processing steps for the VCT pipeline; management software that coordinates the pipeline execution and manipulates and retrieves phantoms and images using a relational database; and a server that executes the individual steps of the virtual patient accrual process using GPU-optimized software. The framework is modular and supports various data types, algorithms, and modalities. The framework can be used to conduct massive simulations, and several hundred imaging studies can be simulated per day on a single workstation. On average, we can simulate a Tomo Combo (DM + DBT) study using anthropomorphic breast phantoms in less than 9 minutes (voxel size = 100 μm³ and volume = 700 mL). Tomo Combo images from an entire virtual population can be simulated in less than a week. We can accelerate system performance using phantoms with large voxels. The VCT pipeline can also be accelerated by using multiple GPUs (e.g., using SLI mode or GPU clusters).

Journal ArticleDOI
04 Jul 2018
TL;DR: A pipeline for reliable plant segmentation at any crop growth stage is presented, as well as a novel algorithm for robust crop row detection that adapts the Hough transform for line detection to detect a pattern of parallel equidistant lines.
Abstract: In sustainable farming, robotic solutions are in rising demand. Specifically robots for precision agriculture open up possibilities for new applications. Such applications typically require a high accuracy of the underlying navigation system. A cornerstone for reliable navigation is the robust detection of crop rows. However, detecting crops from vision or laser data is particularly challenging when the plants are either tiny or so large that individual plants cannot be distinguished easily. In this letter, we present a pipeline for reliable plant segmentation in any crop growth stage, as well as a novel algorithm for robust crop row detection that adapts the Hough transform for line detection to detect a pattern of parallel equidistant lines. Our algorithm is able to jointly estimate the angle, lateral offset and crop row spacing and is particularly suited for tiny plants. In extensive experiments using various real-world data sets from different kinds and sizes of crops we show that our algorithm provides reliable and accurate results.
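A compact NumPy sketch of the adapted Hough idea described above: candidate (angle, lateral offset, row spacing) triples are scored by how close each plant position's signed distance to the line family comes to a multiple of the spacing, and the best-scoring triple is returned. The grid resolutions and the synthetic field are illustrative assumptions, not the paper's algorithm details.

```python
import numpy as np

rng = np.random.default_rng(0)
true_theta, true_spacing, true_offset = np.deg2rad(10.0), 0.75, 0.2
n = np.array([np.cos(true_theta), np.sin(true_theta)])   # row-normal direction
pts = np.concatenate([
    np.outer(true_offset + k * true_spacing, n)          # a point on row k
    + np.outer(rng.uniform(-3, 3, 40), [-n[1], n[0]])    # spread along the row
    + rng.normal(0, 0.03, (40, 2))                       # plant position noise
    for k in range(5)])

def score(theta, offset, spacing, tol=0.08):
    d = pts @ np.array([np.cos(theta), np.sin(theta)]) - offset
    resid = np.abs(d - np.round(d / spacing) * spacing)  # distance to nearest row
    return int((resid < tol).sum())

# Exhaustive vote over the (angle, offset, spacing) accumulator grid.
best = max(((score(th, off, sp), th, off, sp)
            for th in np.deg2rad(np.arange(0, 30, 1.0))
            for off in np.arange(0.0, 0.75, 0.05)
            for sp in np.arange(0.5, 1.01, 0.05)))
print(f"score={best[0]}, angle={np.degrees(best[1]):.0f} deg, "
      f"offset={best[2]:.2f} m, spacing={best[3]:.2f} m")
```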

Journal ArticleDOI
TL;DR: Two reliability indicators, gas supply satisfaction (Sa) and gas supply supportability (Su), are proposed to quantify the gas supply capacity, and their feasibility is confirmed with two case studies including a hypothetical and a real transmission pipeline system.

Journal ArticleDOI
TL;DR: This paper designs and optimizes a processor, called Patmos, for low WCET bounds rather than for high average-case performance, offering a way out of this dilemma: a processor designed for real-time systems.
Abstract: Current processors provide high average-case performance, as they are optimized for general purpose computing. However, those optimizations often lead to a high worst-case execution time (WCET). WCET analysis tools model the architectural features that increase average-case performance. To keep analysis complexity manageable, those models need to abstract from implementation details. This abstraction further increases the WCET bound. This paper presents a way out of this dilemma: a processor designed for real-time systems. We design and optimize a processor, called Patmos, for low WCET bounds rather than for high average-case performance. Patmos is a dual-issue, statically scheduled RISC processor. A method cache serves as the cache for the instructions and a split cache organization simplifies the WCET analysis of the data cache. To fill the dual-issue pipeline with enough useful instructions, Patmos relies on a customized compiler. The compiler also plays a central role in optimizing the application for the WCET instead of average-case performance.

Journal ArticleDOI
Yang Xiaoyue, Chunhua Yang, Tao Peng, Zhiwen Chen, Liu Bo, Weihua Gui
TL;DR: This paper presents a multiprocessor hardware-in-the-loop (HIL) fault injection strategy for real-time simulation of faults in a traction control system (TCS), and the HIL experimental results validate the effectiveness and applicability of the proposed strategy.
Abstract: This paper presents a multiprocessor hardware-in-the-loop (HIL) fault injection strategy for real-time simulation of faults in a traction control system (TCS). TCS models are solved for implementation on a CPU and a field-programmable gate array. A timing optimization method based on static timing analysis is proposed to deal with timing errors induced by real-time fault injection. A multiprocessor-based HIL platform is constructed, and the pipeline of the fault injection unit is designed for the proposed platform. The HIL experimental results validate the effectiveness and applicability of the proposed strategy.

Journal ArticleDOI
TL;DR: The influences of the analytical approximations and assumptions originating from the method development process, and the impacts of different uncertainty factors in practical application systems, on the accuracy and applicability of the TFR method are investigated.
Abstract: The transient frequency response (TFR) based pipe leak detection method has been developed and applied to water pipeline systems with different connection complexities such as branched and looped pipe networks. Previous development and preliminary applications have demonstrated the advantages of high efficiency and non-intrusion for this TFR method. Despite successful validations through extensive numerical applications in the literature, this type of method has not yet been examined systematically for its inherent characteristics and application accuracy under different system and flow conditions. This paper investigates the influences of the analytical approximations and assumptions originating from the method development process and the impacts of different uncertainty factors in practical application systems on the accuracy and applicability of the TFR method. The influence factors considered for the analysis include system properties, derivation approximations and data measurement, and the pipeline systems used for the investigation include simple branched and looped multi-pipe networks. The methods of analytical analysis and numerical simulation are adopted for the investigation. The accuracy and sensitivity of the TFR method are evaluated for different factors and system conditions in this study. The results and findings are useful for understanding the validity range and sensitivity of the TFR-based method, so as to better apply this efficient and non-intrusive method in practical pipeline systems.