Author

Charles Bardel

Bio: Charles Bardel is an academic researcher from Michigan State University. The author has contributed to research in topics including the Monte Carlo method and random access. The author has an h-index of 2 and has co-authored 7 publications receiving 9 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, the feasibility of using X-ray tomography to detect flaws on the surface of pellets contained in Zirconium alloy (Zircaloy) tubes from a very limited number of projections was investigated.
Abstract: Fuel rods in nuclear power plants consist of uranium dioxide pellets enclosed in Zirconium alloy (Zircaloy) tubes. It is vitally important for the pellet surface to remain free from pits, cracks and chipping defects after it is loaded into the tubes, to prevent local hot spots during reactor operation. The inspection of the fuel rod presents several challenges. In this paper, we investigate the feasibility of using X-ray tomography to detect flaws on the surface of pellets contained within the tubes using a very limited number of projections.
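
The limited-view setting the abstract describes can be illustrated with a small, self-contained sketch (not the authors' code): reconstruct a pellet-like disc with a chipped edge from only a handful of projection angles via filtered back projection, assuming scikit-image's radon/iradon functions are available. The phantom, angle count, and error measure are all illustrative assumptions.

import numpy as np
from skimage.transform import radon, iradon

# Hypothetical 2-D phantom: a disc (the pellet cross-section) with a
# small chip on its edge. Sizes are arbitrary.
size = 128
y, x = np.mgrid[:size, :size]
r = np.hypot(x - size / 2, y - size / 2)
phantom = (r < 40).astype(float)
phantom[(r > 36) & (r < 40) & (x > size / 2 + 30)] = 0.0  # chipped edge

# Only a handful of projection angles, as in the limited-view setting.
theta = np.linspace(0.0, 180.0, 8, endpoint=False)
sinogram = radon(phantom, theta=theta)

# Filtered back projection; with so few views, streak artifacts are
# expected and any flaw-detection step must tolerate them.
recon = iradon(sinogram, theta=theta, filter_name="ramp")
err = np.abs(recon[r < 40] - phantom[r < 40]).mean()
print(f"mean in-disc reconstruction error: {err:.3f}")

With so few views, artifacts dominate the reconstruction, which is why surface-flaw detection from limited projections is a feasibility question rather than a routine reconstruction task.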

6 citations

01 Jan 2010
TL;DR: In this article, a method for measuring in vivo ankle torques developed by athletes was presented: generalized regression neural networks were trained to predict maximum ankle torque and rate of ankle torque from insole pressures, and prediction accuracy was found to depend on the number of subjects used for training as well as on the method of pressure sensor grouping.
Abstract: Major ankle sprains in sports are thought to be due to high levels of ankle torsion. The purpose of this study was to develop a method for measuring in vivo ankle torques developed by athletes. Motion capture, force plate, and insole pressure measurements were used to develop generalized regression neural networks to predict maximum ankle torque and rate of ankle torque based on insole pressures. It was found that network prediction accuracy depended on the number of subjects used for training, as well as the method of pressure sensor grouping. Further work will be performed to determine optimal subject and pressure sensor groupings.
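
A generalized regression neural network is essentially Nadaraya-Watson kernel regression, so the prediction step the abstract relies on can be sketched in a few lines. The data below are random stand-ins, not the study's motion capture or insole measurements, and the bandwidth sigma is an assumed parameter.

import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    """GRNN prediction: Gaussian-weighted average of training targets."""
    preds = []
    for q in X_query:
        d2 = np.sum((X_train - q) ** 2, axis=1)   # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))      # RBF kernel weights
        preds.append(w @ y_train / (w.sum() + 1e-12))
    return np.array(preds)

# Toy stand-in: 50 trials x 8 grouped pressure sensors -> peak torque (N*m).
rng = np.random.default_rng(0)
X = rng.random((50, 8))
y = X @ rng.random(8) * 30.0    # hypothetical ground-truth relationship
print(grnn_predict(X, y, X[:3]))

Because a GRNN effectively memorizes its training trials, its accuracy naturally depends on how many subjects are represented and on how the sensors are grouped into input features, consistent with the abstract's finding.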

1 citation

Proceedings ArticleDOI
16 Jun 2013
TL;DR: This paper examines whether the particle sort algorithm of [3] retains enough entropy in the particle list, gained from particle cell crossings, by varying the size of the contiguous particle block and measuring the thermal equilibration time.
Abstract: Summary form only given. Monte Carlo particle collision calculations can be very computationally expensive for particle-in-cell codes. In the case of background-fluid collision calculations, where each particle's calculation is totally independent of the others, the work can be set up as highly parallel; porting to GPU platforms has shown a two-order-of-magnitude runtime decrease compared to single-processor performance. One approach is simply to apply a function to every particle that computes the particle energy, takes a square root to obtain the speed, and either interpolates tabulated cross sections or evaluates a curve fit for each process [1]; then, based on this probability of collision, the collisional dynamics code might be executed. For collision probabilities ≪ 1 this is inefficient for finding particles to collide, and load-imbalanced for the collisional dynamics on the vector (single-instruction multiple-data, SIMD) capabilities available on the GPU [2]. The alternative approach is the null collision method [1], in which particles are selected for collision at random using the total collision probability, which is independent of particle energy and position. However, the sparse random access of particles in the particle array that the null collision method requires has drawbacks on the GPU due to the SIMD architecture. GPU threads grouped together in hardware are called warps, and each warp can issue only one computational or memory instruction at a time; however, when two memory accesses fall within 128 bytes of each other [2], they can be 'coalesced' into one instruction. Using the data structure and algorithm presented in [3] for efficient particle-to-grid charge accumulation on the GPU, which ensures that all particles contained within a cell are contiguous in memory, this paper examines the effect of selecting particles for collision that are contiguous in that same list. This setup would capitalize on the null collision method not needing to calculate the energy of each particle, and would optimize memory bandwidth through the GPU. The key point under investigation is whether the particle sort algorithm of [3] retains enough entropy in the particle list, gained from particle cell crossings; this will be examined by varying the size of the contiguous particle block and measuring the thermal equilibration time.
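
A rough sketch of the selection step under investigation, under stated assumptions: a toy total-collision-frequency curve stands in for real cross sections, and a contiguous block of the cell-sorted particle list is drawn instead of scattered random indices, mirroring the coalesced-access idea. This is not the paper's code.

import numpy as np

rng = np.random.default_rng(1)
N = 100_000
velocities = rng.normal(0.0, 1.0, size=(N, 3))   # stands in for a cell-sorted list

def nu_total(speed):
    """Toy total collision frequency vs. speed (stand-in for real data)."""
    return 0.8 * speed / (1.0 + speed)

nu_max = 0.8            # upper bound of nu_total over all speeds
dt = 0.1
n_cand = int(N * (1.0 - np.exp(-nu_max * dt)))   # candidates this step

# Contiguous block at a random offset instead of scattered random
# indices (wrap-around neglected for brevity).
start = rng.integers(0, N - n_cand)
block = velocities[start:start + n_cand]

# Real collisions are accepted with probability nu(v)/nu_max; the rest
# are null collisions, so per-particle energies are needed only here.
speed = np.linalg.norm(block, axis=1)
real = rng.random(n_cand) < nu_total(speed) / nu_max
print(f"{real.sum()} real collisions out of {n_cand} candidates")

Whether such a block behaves like a random sample is exactly the entropy question the paper poses: it hinges on how thoroughly particle cell crossings shuffle the sorted list between steps.
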
Proceedings ArticleDOI
21 Jun 2011
TL;DR: In this article, a direct, non-iterative approach to model-based inversion using time reversal is presented, built on a two-dimensional finite difference time domain model that simulates the propagation of forward and time-reversed wave fields; it avoids the repeated executions of a three-dimensional forward model in each iteration that make conventional iterative inversion computationally demanding.
Abstract: Inverse problem solutions in NDE can be broadly classified into model-based and system-based approaches. In the model-based approach, an accurate forward model is used in an iterative framework to find a defect shape that minimizes the error between the measured signal and a simulated signal. However, this approach requires repeated executions of a three-dimensional forward model in each iteration, making it computationally demanding. This paper presents a direct approach to inversion using principles of time reversal. The feasibility of the approach is demonstrated via application to microwave NDE data. A two-dimensional finite difference time domain model for simulating the propagation of forward and time-reversed wave fields is first developed. The key advantage of the approach is that it provides a model-based inversion method that is not iterative. Simulation and experimental results validating the approach are presented.
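
The time-reversal principle behind this direct inversion can be shown with a minimal scalar-wave FDTD toy (an assumption-laden stand-in for the paper's microwave model): because the leapfrog update is time-reversible, swapping the two stored time levels and stepping again walks the field back onto its source.

import numpy as np

n, steps = 100, 120
c, dt, dx = 1.0, 0.5, 1.0            # CFL number c*dt/dx = 0.5 (stable)

def step(u, u_prev):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    return 2.0 * u - u_prev + (c * dt / dx) ** 2 * lap

# Initial field: a Gaussian bump standing in for a scatterer/defect.
y, x = np.mgrid[:n, :n]
u_prev = np.exp(-((x - 30) ** 2 + (y - 70) ** 2) / 10.0)
u = u_prev.copy()                    # zero initial velocity

for _ in range(steps):               # forward propagation
    u, u_prev = step(u, u_prev), u

u, u_prev = u_prev, u                # time reversal: swap time levels
for _ in range(steps):               # back propagation
    u, u_prev = step(u, u_prev), u

print("field refocuses at:", np.unravel_index(np.abs(u).argmax(), u.shape))
# -> (70, 30), the original bump location

In the actual inversion setting, it is the measured scattered field at the receivers that is time-reversed and back-propagated so that it refocuses on the defect, yielding a defect estimate without iteration.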

Cited by
Journal ArticleDOI
TL;DR: In this article, a review of the imaging of nuclear fuel using ionising radiation is presented; the reviewed work spans the past four decades and shows how the technology has developed over that time.

33 citations

Dissertation
01 Apr 2015
TL;DR: A prototype for inline, non-destructive inspection of uranium dioxide (UO2) fuel pellets at production speeds of two pellets per second is presented in this article, where the system tests the surface of each cylindrical ceramic pellet using three different methods: laser 2D profile, laser surface roughness, and machine vision camera.
Abstract: A prototype for inline, non-destructive inspection of uranium dioxide (UO2) fuel pellets at production speeds of two pellets per second is presented. The system tests the surface of each cylindrical ceramic pellet using three different methods: a laser 2D profiler, a laser surface roughness sensor, and a machine vision camera. The arrangement of these sensors allows complete cylindrical and end-surface inspection of every pellet, with the surfaces judged against manufacturing visual inspection criteria. Sensor selection and the inspection arrangement were developed in past work; the present advancements are in the areas of system refinement and automation. Internal non-destructive testing techniques for the dense ceramic pellets are explored, but ultimately efforts are placed on the completion and testing of the inspection prototype. A simple yet effective TRIZ-based pellet handling system using gravity feed is designed and integrated, along with real-time control software developed in LabVIEW. The machine vision algorithm and illumination setup are adapted to overcome challenges identified with actual UO2 pellets. Testing is performed to optimize the system's false positive and defect detection rates: the more common defect types are tested and statistical false detection rates are calculated. With relatively low error rates and successful detection of all sample defects, the automated system is validated for inspection of UO2 pellets.
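
A heavily simplified stand-in for the machine-vision stage of such a system: flag dark blobs on an unwrapped surface image whose area exceeds a threshold. The synthetic image, intensity threshold, and area criterion are all assumptions, not the prototype's calibrated values.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
surface = 200.0 + 5.0 * rng.standard_normal((120, 400))  # unwrapped gray image
surface[40:47, 100:112] -= 90.0                           # synthetic chip

dark = surface < 160.0                 # assumed intensity threshold
labels, n = ndimage.label(dark)
areas = ndimage.sum(dark, labels, index=list(range(1, n + 1)))

MIN_DEFECT_AREA = 25                   # px; assumed acceptance criterion
defects = [i + 1 for i, a in enumerate(areas) if a >= MIN_DEFECT_AREA]
verdict = "REJECTED" if defects else "accepted"
print(f"pellet {verdict}: {len(defects)} defect blob(s) above area threshold")

Tuning the intensity and area thresholds against known-good and known-defective pellets is what trades off the false positive rate against defect detection, as in the testing described above.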

4 citations

Journal ArticleDOI
13 Dec 2020 - Scanning
TL;DR: The proposed amoeba filtering is customized for postprocessing of CT images acquired at a reduced X-ray dose, and significantly improves the image quality visually and statistically, providing better contrast and image smoothing without compromising on edge details.
Abstract: Computed tomography (CT) is one of the most common and beneficial medical imaging schemes, but the associated high radiation dose, injurious to the patient, is always a concern. Therefore, postprocessing-based enhancement of a CT image reconstructed from a reduced dose is an active research area. Amoeba- (or spatially variant kernel-) based filtering, which adapts its shape according to the image contents, is a strong candidate scheme for postprocessing of CT images. In the reported research work, amoeba filtering is customized for postprocessing of CT images acquired at a reduced X-ray dose. The proposed scheme modifies both the pilot image formation and the amoeba shaping mechanism of the conventional amoeba implementation: it uses a Wiener filter-based pilot image, and region-based segmentation is used for amoeba shaping instead of the conventional amoeba distance-based approach. The merits of the proposed scheme include being more suitable for CT images because of the region-based and symmetric nature of human body anatomy, image smoothing without compromising edge details, and being adaptive in nature and more robust to noise. The performance of the proposed amoeba scheme is compared to the traditional amoeba kernel in the image denoising application for CT images using filtered back projection (FBP) on sparse-view projections. The scheme is supported by computer simulations using fan-beam projections of clinically reconstructed and simulated head CT phantoms, and is tested using multiple image quality metrics in the presence of additive projection noise. The scheme implementation significantly improves the image quality visually and statistically, providing better contrast and image smoothing without compromising edge details. Promising results indicate the efficacy of the proposed scheme.
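
A simplified sketch of the two ingredients the scheme modifies, under stated assumptions: a Wiener-filtered pilot image (via scipy.signal.wiener), and an intensity-adaptive, amoeba-like neighborhood built by thresholding pilot similarity as a crude stand-in for the paper's region-based segmentation. The window radius and threshold tau are illustrative, not the paper's parameters.

import numpy as np
from scipy.signal import wiener

def amoeba_like_filter(img, radius=3, tau=20.0):
    pilot = wiener(img, mysize=5)          # Wiener-filtered pilot image
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
            patch = img[i0:i1, j0:j1]
            ppatch = pilot[i0:i1, j0:j1]
            # Amoeba-like support: average only neighbors whose pilot
            # value is close to the center pixel's.
            mask = np.abs(ppatch - pilot[i, j]) < tau
            out[i, j] = patch[mask].mean()
    return out

rng = np.random.default_rng(3)
clean = 100.0 * (np.arange(64)[:, None] > 32) + np.zeros((64, 64))
noisy = clean + 10.0 * rng.standard_normal((64, 64))
print("residual std before/after:",
      round(float((noisy - clean).std()), 2),
      round(float((amoeba_like_filter(noisy) - clean).std()), 2))

Averaging only within the pilot-similar support is what preserves the step edge in this toy, the same intuition behind the paper's region-based amoeba shaping.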

2 citations

Journal ArticleDOI
TL;DR: A novel algorithm for efficient CT reconstruction from under-sampled projections is presented, which leads to radiation dose reduction with quality image reconstruction and is visually and statistically better than classical CT reconstruction techniques, as evaluated using various image quality metrics.

1 citation