Journal ArticleDOI

An FPGA-Based High-Speed Error Resilient Data Aggregation and Control for High Energy Physics Experiment

TL;DR: A novel orthogonal concatenated code and cyclic redundancy check have been used to mitigate the effects of data corruption in the user data and a novel memory management algorithm is proposed that helps to process the data at the back-end computing nodes removing the added path delays.
Abstract: Due to the dramatic increase in data volume in modern high energy physics (HEP) experiments, a robust high-speed data acquisition (DAQ) system is needed to gather the data generated during different nuclear interactions. As the DAQ operates in a harsh radiation environment, there is a fair chance of data corruption due to various energetic particles such as alpha, beta, or neutron radiation. Hence, a major challenge in the development of a DAQ for HEP experiments is to establish an error-resilient communication system between front-end sensors or detectors and back-end data processing computing nodes. Here, we have implemented the DAQ using a field-programmable gate array (FPGA) due to some of its inherent advantages over the application-specific integrated circuit. A novel orthogonal concatenated code and a cyclic redundancy check (CRC) have been used to mitigate the effects of data corruption in the user data. Scrubbing with a 32-bit CRC has been used against errors in the configuration memory of the FPGA. Data from the front-end sensors reach the back-end processing nodes through multiple stages that may add an uncertain amount of delay to the different data packets. We have also proposed a novel memory management algorithm that helps to process the data at the back-end computing nodes, removing the added path delays. To the best of our knowledge, the proposed FPGA-based DAQ utilizing an optical link with channel coding and efficient memory management modules can be considered the first of its kind. Performance estimation of the implemented DAQ system is done based on resource utilization, bit error rate, efficiency, and robustness to radiation.
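The abstract describes a memory management algorithm that compensates for the uncertain path delays packets accumulate between front-end and back-end. The paper's actual algorithm is not given here; the following is a minimal sketch of the general idea, assuming (hypothetically) that each packet carries a monotonically increasing sequence number that the back-end can use to restore the original order:

```python
import heapq

class ReorderBuffer:
    """Release packets in sequence order despite out-of-order arrival.

    Assumes each packet carries a monotonically increasing sequence
    number (a hypothetical field; the paper's packet format may differ).
    """

    def __init__(self, first_seq=0):
        self.next_seq = first_seq
        self.heap = []  # (seq, payload) pairs waiting for their turn

    def push(self, seq, payload):
        """Buffer one arriving packet; return all packets now releasable in order."""
        heapq.heappush(self.heap, (seq, payload))
        released = []
        # Drain the heap as long as the smallest buffered sequence
        # number is exactly the one expected next.
        while self.heap and self.heap[0][0] == self.next_seq:
            _, p = heapq.heappop(self.heap)
            released.append(p)
            self.next_seq += 1
        return released
```

For example, if packets 2, 0, 1 arrive in that order, the buffer releases nothing, then packet 0, then packets 1 and 2 together, so downstream processing always sees an in-order stream.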
Citations
Journal ArticleDOI
TL;DR: The proposed CRC algorithm can reduce the error rate of the system by detecting and controlling the errors, and the validity of CRC algorithm is verified by experiments.
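The entry above, like the main paper, relies on CRC-based error detection. As a minimal illustration of the mechanism (not the cited paper's implementation), a 32-bit CRC can be appended to each payload and rechecked on receipt using Python's standard `zlib.crc32`:

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append the CRC-32 of the payload (zlib.crc32) as a 4-byte trailer."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes):
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, received = frame[:-4], int.from_bytes(frame[-4:], "big")
    ok = zlib.crc32(payload) == received
    return ok, payload
```

Any single corrupted byte changes the recomputed CRC, so `check_frame` flags the frame and the receiver can discard or re-request it.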

5 citations

Journal Article
TL;DR: The main advancement of the work is the use of modified BCH(15, 11) code that leads to high error correction capabilities for burst errors and user friendly packet length.
Abstract: This paper presents the design of a compact protocol for fixed-latency, high-speed, reliable serial transmission between simple field-programmable gate array (FPGA) devices. The implementation aims to delineate word boundaries, provide randomness to the electromagnetic interference (EMI) generated by the electrical transitions, allow for clock recovery, and maintain direct current (DC) balance. An orthogonal concatenated coding scheme is used for correcting transmission errors, using a modified Bose–Chaudhuri–Hocquenghem (BCH) code capable of correcting all single-bit errors and most of the double-adjacent errors. As a result, all burst errors up to 31 bits long, and some of the longer group errors, are corrected within a 256-bit packet. The efficiency of the proposed solution equals 46.48%, as 119 out of 256 bits are fully available to the user. The design has been implemented and tested on a Xilinx Kintex UltraScale+ KCU116 Evaluation Kit at a data rate of 28.2 Gbps. A sample latency analysis has also been performed so that users can easily carry out calculations for different transmission speeds. The main advancement of the work is the use of a modified BCH(15, 11) code that leads to high error correction capabilities for burst errors and a user-friendly packet length.
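The single-error-correcting behavior of a (15, 11) code can be sketched with the classic Hamming(15, 11) code, which is the t = 1 BCH code of those parameters. This is not the cited paper's modified code, only an illustration of how an 11-bit word gains four parity bits and how a single flipped bit is located and corrected:

```python
# Non-power-of-two positions 1..15 hold the 11 data bits;
# positions 1, 2, 4, 8 hold parity bits.
DATA_POSITIONS = [p for p in range(1, 16) if p & (p - 1) != 0]

def hamming1511_encode(data_bits):
    """Encode 11 data bits into a 15-bit Hamming codeword."""
    assert len(data_bits) == 11
    code = [0] * 16  # index 0 unused; positions 1..15
    for pos, bit in zip(DATA_POSITIONS, data_bits):
        code[pos] = bit
    for par in (1, 2, 4, 8):
        # Parity bit `par` covers every position whose index has bit `par` set.
        parity = 0
        for p in range(1, 16):
            if p != par and (p & par):
                parity ^= code[p]
        code[par] = parity
    return code[1:]

def hamming1511_decode(code15):
    """Correct up to one bit error and return (data_bits, syndrome)."""
    code = [0] + list(code15)
    # The syndrome is the XOR of the indices of all set bits; for a
    # single error it equals the position of the flipped bit.
    syndrome = 0
    for p in range(1, 16):
        if code[p]:
            syndrome ^= p
    if syndrome:
        code[syndrome] ^= 1  # flip the erroneous bit back
    return [code[p] for p in DATA_POSITIONS], syndrome
```

Flipping any one of the 15 transmitted bits yields a nonzero syndrome that points directly at the corrupted position, which is what makes the code attractive for hardware: correction is a handful of XOR gates.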

1 citation


Cites methods from "An FPGA-Based High-Speed Error Resilient Data Aggregation and Control for High Energy Physics Experiment"

  • Authors in [5] propose a high-speed error-resilient communication protocol intended to be used in HEP experiments.


Journal ArticleDOI
TL;DR: In this article, a detailed literature survey on state-of-the-art machine learning methods for NPP equipment condition assessment is presented, including major failure modes, data sources, maintenance strategies, and the relationship between equipment lifetime, assessment technology, and maintenance strategy.
Abstract: The condition assessment of equipment in nuclear power plants (NPPs) could provide essential information for operation and maintenance decisions, which would have a significant impact on improving the safety and economy of NPPs. To date, substantial work has been conducted on condition assessment based on machine learning for NPP equipment. To provide a comprehensive overview for researchers interested in developing machine learning methods for NPP equipment condition assessment, this critical review presents a detailed literature survey on state-of-the-art research and identifies challenges for future study. Valuable information is presented, including major failure modes, data sources, maintenance strategies, and the relationship between equipment lifetime, assessment technology, and maintenance strategy. Following the typical lifetime of NPP equipment for condition assessment, current works in this domain are categorized into anomaly detection, remaining useful life prediction, and fault detection and diagnosis. The techniques and methodologies adopted in the literature are summarized from each aspect. In particular, an in-depth survey of NPP equipment condition assessment based on deep learning methods is presented. In addition, we elaborate on current issues, challenges, and future research directions for the condition assessment of equipment in NPPs. These directions, we believe, will pave the way for equipment condition assessment.
References
15 Aug 1989
TL;DR: The performance (bit-error rate vs. signal-to-noise ratio) of two different interleaving systems, block interleaving and the newer helical interleaving, is compared.
Abstract: The performance (bit-error rate vs. signal-to-noise ratio) of two different interleaving systems, block interleaving and the newer helical interleaving, is compared. Both systems are studied with and without error forecasting. Without error forecasting, the two systems have identical performance. When error forecasting is used with shallow interleaving, helical interleaving gains, but by less than 0.05 dB, over block interleaving. For higher interleaving depths, the systems have almost indistinguishable performance.
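The block interleaving compared in this reference underlies the burst-error tolerance claimed in the papers above: by writing codewords row-wise and transmitting column-wise, a burst of channel errors is scattered so that each codeword sees at most one error. A minimal sketch of a block interleaver (illustrative only; depths and packet sizes are arbitrary here):

```python
def interleave(bits, rows, cols):
    """Write `rows` codewords of `cols` bits row-wise, read out column-wise."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation: write column-wise, read out row-wise."""
    assert len(bits) == rows * cols
    out = [0] * (rows * cols)
    for i, b in enumerate(bits):
        c, r = divmod(i, rows)  # i-th transmitted bit came from row r, col c
        out[r * cols + c] = b
    return out
```

Because consecutive transmitted bits belong to different rows, any burst of length at most `rows` lands in `rows` distinct codewords after deinterleaving, so a single-error-correcting code per row suffices to repair it.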

2 citations