
Showing papers in "Measurement Science and Technology in 2016"


Journal ArticleDOI
TL;DR: In this article, the use of X-ray computed tomography (XCT) is examined, identifying the requirement for volumetric dimensional measurements in industrial verification of additively manufactured (AM) parts.
Abstract: In this review, the use of X-ray computed tomography (XCT) is examined, identifying the requirement for volumetric dimensional measurements in the industrial verification of additively manufactured (AM) parts. The XCT technology and AM processes are summarised, and their historical use is documented. The use of XCT and AM as tools for medical reverse engineering is discussed, and the transition of XCT from a tool used solely for imaging to a vital metrological instrument is documented. The current state of the combined technologies is then examined in detail, separated into porosity measurements and general dimensional measurements. In the conclusions of this review, the limitation that resolution places on improving porosity measurements and the lack of research regarding the measurement of surface texture are identified as the primary barriers to the ongoing adoption of XCT in AM. The limitations of both AM and XCT regarding slow speeds and high costs, when compared to other manufacturing and measurement techniques, are also noted as general barriers to the continued adoption of XCT and AM.

330 citations


Journal ArticleDOI
TL;DR: In this article, the authors discussed the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field and derived the expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses.
Abstract: This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5-10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.

317 citations
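As a rough illustration of the statistical-uncertainty result above (not the authors' code), the sketch below estimates the random uncertainty of a time-averaged velocity from a correlated PIV record; the function name, the synthetic record and the assumed ratio of total to independent samples (samples_per_independent) are all placeholders.

import numpy as np

def uncertainty_of_mean(u, samples_per_independent=1.0):
    # Standard uncertainty of the mean of a correlated record: sigma / sqrt(N_eff),
    # where N_eff is the effective number of independent samples.
    n_eff = len(u) / samples_per_independent
    return np.std(u, ddof=1) / np.sqrt(n_eff)

rng = np.random.default_rng(0)
u = 5.0 + 0.4 * rng.standard_normal(2000)          # synthetic velocity record (m/s)
print(uncertainty_of_mean(u, samples_per_independent=4.0))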


Journal ArticleDOI
TL;DR: In this paper, a capacitive sensor printed on a flexible textile substrate with a carbon black (CB)/silicone rubber (SR) composite dielectric was demonstrated to achieve the wearable comfort of electronic skin.
Abstract: To achieve the wearable comfort of electronic skin (e-skin), a capacitive sensor printed on a flexible textile substrate with a carbon black (CB)/silicone rubber (SR) composite dielectric was demonstrated in this paper. An organo-silicone conductive silver adhesive serves as the flexible electrode/shielding layer. The structure design, sensing mechanism and the influence of the conductive filler content and temperature variations on the sensor performance were investigated. The proposed device can effectively enhance the flexibility and comfort of wearing, as the sensing element achieves a sensitivity of 0.02536%/kPa, a hysteresis error of 5.6%, and a dynamic response time of ~89 ms over the range of 0–700 kPa. The drift induced by temperature variations has been calibrated using the proposed temperature compensation model. Measurements of the time–space distribution of plantar pressure and a manipulator soft-grasping experiment were carried out with the introduced device, and the experimental results indicate that the capacitive flexible textile tactile sensor has good stability and tactile perception capacity. This study provides a good candidate for wearable artificial skin.

123 citations


Journal ArticleDOI
TL;DR: A novel method based on the optimal variational mode decomposition (OVMD) and 1.5-dimension envelope spectrum is proposed for detecting the compound faults of rotating machinery and can separate the characteristic signatures of compound faults.
Abstract: Owing to their diversity and complexity, compound faults make the fault diagnosis of rotating machinery under non-stationary operation a challenging task. In this paper, a novel method based on the optimal variational mode decomposition (OVMD) and the 1.5-dimension envelope spectrum is proposed for detecting the compound faults of rotating machinery. In this method, compound fault signals are first decomposed using OVMD with optimal decomposition parameters, and several intrinsic mode components are obtained. Then, an adaptive selection method based on the weight factor (WF) is presented to choose two intrinsic mode components that contain the principal fault characteristic information. Finally, the 1.5-dimension envelope spectrum of the selected intrinsic mode components is utilized to extract the compound fault characteristic information of the vibration signals. The performance of the proposed method is demonstrated using a simulation signal and experimental vibration signals collected from a rolling bearing and a gearbox with compound faults. The analysis results suggest that the proposed method is not only capable of detecting compound faults of a bearing and a gearbox, but can also separate the characteristic signatures of the compound faults. The research offers a new means for the compound fault diagnosis of rotating machinery.

104 citations
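For readers unfamiliar with the envelope-spectrum step that such methods build on, here is a minimal Hilbert-envelope sketch in Python; it assumes the OVMD decomposition and weight-factor selection have already produced a mode of interest, and it computes an ordinary envelope spectrum rather than the paper's 1.5-dimension envelope spectrum.

import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(mode, fs):
    # Amplitude spectrum of the Hilbert envelope of one (e.g. OVMD-selected) mode.
    env = np.abs(hilbert(mode))
    spec = np.abs(np.fft.rfft(env - env.mean())) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec

# Synthetic mode: a 400 Hz resonance modulated at a 35 Hz "fault" rate, plus noise.
fs = 2000
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 400 * t) * (1 + np.cos(2 * np.pi * 35 * t)) + 0.2 * np.random.randn(len(t))
f, a = envelope_spectrum(x, fs)
print(f[np.argmax(a)])      # peak expected near the 35 Hz modulation frequency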


Journal ArticleDOI
TL;DR: In this paper, a hybrid fault diagnosis approach is developed for denoising and non-stationary feature extraction, which combines variational mode decomposition (VMD) with majorization–minimization based total variation denoising (TV-MM) to remove stochastic noise in the raw signal and to enhance the corresponding characteristics.
Abstract: Feature extraction plays an essential role in bearing fault detection. However, the measured vibration signals are complex and non-stationary in nature, and the impulsive signatures of a rolling bearing are usually immersed in stochastic noise. Hence, a novel hybrid fault diagnosis approach is developed for denoising and non-stationary feature extraction in this work, which combines variational mode decomposition (VMD) with majorization–minimization based total variation denoising (TV-MM). The TV-MM approach is utilized to remove stochastic noise in the raw signal and to enhance the corresponding characteristics. Since the regularization parameter is very important in TV-MM, a weighted kurtosis index is also proposed in this work to determine an appropriate value for use in TV-MM. The performance of the proposed hybrid approach is assessed through the analysis of simulated and practical bearing vibration signals. The results demonstrate that the proposed approach has a superior capability to detect rolling bearing faults from vibration signals.

96 citations
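The following is a minimal sketch, not the authors' implementation, of majorization–minimization total variation denoising for a 1D signal together with a crude kurtosis-guided choice of the regularization parameter; the VMD stage and the paper's weighted kurtosis index are omitted, the dense linear solve is only viable for short signals, and the signal values and candidate-lambda list are invented.

import numpy as np
from scipy.stats import kurtosis

def tv_denoise_mm(y, lam, n_iter=30):
    # MM iterations for min_x 0.5*||y - x||^2 + lam*||D x||_1 (D = first difference).
    n = len(y)
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n first-difference operator
    DDT = D @ D.T
    x = y.copy()
    for _ in range(n_iter):
        w = np.abs(D @ x) + 1e-8              # |D x_k|, small floor avoids division issues
        A = np.diag(w) / lam + DDT
        x = y - D.T @ np.linalg.solve(A, D @ y)
    return x

def pick_lambda_by_kurtosis(y, lams):
    # Simple stand-in for a kurtosis-guided parameter choice: keep the lambda whose
    # denoised output is most impulsive (highest kurtosis).
    results = [(kurtosis(tv_denoise_mm(y, lam)), lam) for lam in lams]
    return max(results)[1]

rng = np.random.default_rng(1)
clean = np.zeros(400); clean[::80] = 3.0      # sparse impulses
noisy = clean + 0.5 * rng.standard_normal(400)
best = pick_lambda_by_kurtosis(noisy, [0.1, 0.5, 1.0, 2.0])
print(best, kurtosis(tv_denoise_mm(noisy, best)))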



Journal ArticleDOI
TL;DR: This approach demonstrates that a clamp-on ultrasound sensor can achieve flow regime classification in a way that would be practical in industry, and it is considerably more promising than other techniques as it uses a non-invasive and non-radioactive sensor.
Abstract: The identification of the flow pattern is a key issue in multiphase flow, which is encountered in the petrochemical industry. It is difficult to identify gas–liquid flow regimes objectively in gas–liquid two-phase flow. This paper presents the feasibility of a clamp-on instrument for objective flow regime classification of two-phase flow using an ultrasonic Doppler sensor and an artificial neural network, which records and processes the ultrasonic signals reflected from the two-phase flow. Experimental data are obtained on a horizontal test rig with a total pipe length of 21 m and 5.08 cm internal diameter carrying air-water two-phase flow under slug, elongated bubble, stratified-wavy and stratified flow regimes. Multilayer perceptron neural networks (MLPNNs) are used to develop the classification model. The classifier requires input features that are representative of the signals. Ultrasound signal features are extracted by applying both power spectral density (PSD) and discrete wavelet transform (DWT) methods to the flow signals. A 1-of-C coding scheme was adopted to classify the extracted features into one of four flow regime categories. To improve the performance of the flow regime classifier, a second-level neural network was incorporated, using the outputs of the first-level networks as input features. The combination of the two network models provided a combined neural network model which achieved a higher accuracy than the single neural network models. Classification accuracies are evaluated for both the PSD and DWT features. The success rates of the two models are: (1) using the PSD features, the classifier misclassified 3 of the 24 test datasets and scored 87.5% accuracy; (2) with the DWT features, the network misclassified only one data point and was able to classify the flow patterns with up to 95.8% accuracy. This approach demonstrates that a clamp-on ultrasound sensor can achieve flow regime classification in a way that would be practical in industry. It is considerably more promising than other techniques as it uses a non-invasive and non-radioactive sensor.

62 citations
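A minimal sketch of the PSD-feature/MLP classification idea, using scipy's Welch estimator and scikit-learn's MLPClassifier; the surrogate "recordings", band count and network size are invented, and the paper's DWT features and second-level network are not reproduced.

import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def psd_features(signal, fs, n_bands=16):
    # Welch PSD averaged into a fixed number of bands, on a log scale.
    f, pxx = welch(signal, fs=fs, nperseg=256)
    bands = np.array_split(pxx, n_bands)
    return np.log10([b.mean() + 1e-12 for b in bands])

# Placeholder data: random surrogates standing in for Doppler recordings of the
# four regimes (slug, elongated bubble, stratified-wavy, stratified).
rng = np.random.default_rng(0)
fs, n_per_class = 1000, 30
X, y = [], []
for label in range(4):
    for _ in range(n_per_class):
        sig = rng.standard_normal(2048) * (1 + label) + np.sin(2 * np.pi * (20 + 40 * label) * np.arange(2048) / fs)
        X.append(psd_features(sig, fs)); y.append(label)
X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))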


Journal ArticleDOI
TL;DR: In this paper, a parametric study of the factors contributing to peak-locking, a known bias error source in particle image velocimetry (PIV), is conducted using synthetic data that are processed with a state-of-the-art PIV algorithm.
Abstract: A parametric study of the factors contributing to peak-locking, a known bias error source in particle image velocimetry (PIV), is conducted using synthetic data that are processed with a state-of-the-art PIV algorithm. The investigated parameters include: particle image diameter, image interpolation techniques, the effect of asymmetric versus symmetric window deformation, number of passes and the interrogation window size. Some of these parameters are found to have a profound effect on the magnitude of the peak-locking error. The effects for specific PIV cameras are also studied experimentally using a precision turntable to generate a known rotating velocity field. Image time series recorded using this experiment show a linear range of pixel and sub-pixel shifts ranging from 0 to ±4 pixels. Deviations in the constant vorticity field (ω_z) reveal how peak-locking can be affected systematically both by varying parameters of the detection system such as the focal distance and f-number, and also by varying the settings of the PIV analysis. A new a priori technique for reducing the bias errors associated with peak-locking in PIV is introduced using an optical diffuser to avoid undersampled particle images during the recording of the raw images. This technique is evaluated against other a priori approaches using experimental data and is shown to perform favorably. Finally, a new a posteriori anti peak-locking filter (APLF) is developed and investigated, which shows promising results for both synthetic data and real measurements for very small particle image sizes.

61 citations
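As a side note on how peak-locking is commonly diagnosed (a standard check, not necessarily the authors' procedure), the sketch below inspects the histogram of the fractional part of the measured displacements: a flat histogram means little locking, while pile-up near integer pixel values indicates the bias. All values are synthetic.

import numpy as np

def peak_locking_degree(displacements, n_bins=20):
    # Fractional parts of the measured particle-image displacements (pixels).
    frac = np.mod(displacements, 1.0)
    hist, _ = np.histogram(frac, bins=n_bins, range=(0.0, 1.0), density=True)
    # Deviation from the uniform density (= 1) as a crude peak-locking indicator.
    return np.max(np.abs(hist - 1.0))

rng = np.random.default_rng(2)
unlocked = rng.uniform(-4, 4, 50000)                              # ideal, no bias
locked = np.round(unlocked) + 0.1 * rng.standard_normal(50000)    # pulled towards integers
print(peak_locking_degree(unlocked), peak_locking_degree(locked))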


Journal ArticleDOI
TL;DR: In this article, the authors describe the quantum metrology triangle and quantum electrical effects, i.e. the Josephson, quantum Hall, and single-electron tunneling effects.
Abstract: The electric current, voltage, and resistance standards are the most important standards related to electricity and magnetism. Of these three standards, only the ampere, which is the unit of electric current, is an International System of Units (SI) base unit. However, even with modern technology, relatively large uncertainty exists regarding the generation and measurement of current. As a result of various innovative techniques based on nanotechnology and novel materials, new types of junctions for quantum current generation and single-electron current sources have recently been proposed. These newly developed methods are also being used to investigate the consistency of the three quantum electrical effects, i.e. the Josephson, quantum Hall, and single-electron tunneling effects, which are also known as 'the quantum metrology triangle'. This article describes recent research and related developments regarding current standards and quantum-metrology-triangle experiments.

60 citations
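A small numerical illustration (not from the article) of the closure that the quantum metrology triangle tests, using the conventional relations V = n f h / (2e) for the Josephson effect, R = h / (i e^2) for the quantum Hall effect and I = e f for a single-electron pump; with Shapiro step n = 2, plateau index i = 1 and a common drive frequency, Ohm's law V = R I should be recovered exactly.

from scipy.constants import e, h

f = 1e9            # drive frequency shared by the Josephson array and the SET pump (Hz)
n, i = 2, 1        # Shapiro step number and quantum Hall plateau index

V_josephson = n * f * h / (2 * e)   # voltage from the ac Josephson effect
R_hall = h / (i * e**2)             # quantized Hall resistance
I_set = e * f                       # current from a single-electron pump

print(V_josephson, R_hall * I_set)  # the two values agree if the triangle closes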


Journal ArticleDOI
TL;DR: Testing results show that the treatment of biases significantly improves solution convergence in the float ambiguity PPP mode, and leads to ambiguity-fixed PPP within a few minutes with a small improvement in solution precision.
Abstract: Various types of biases in Global Navigation Satellite System (GNSS) data preclude integer ambiguity fixing and degrade solution accuracy when not being corrected during precise point positioning (PPP). In this contribution, these biases are first reviewed, including satellite and receiver hardware biases, differential code biases, differential phase biases, initial fractional phase biases, inter-system receiver time biases, and system time scale offset. PPP models that take account of these biases are presented for two cases using ionosphere-free observations. The first case is when using primary signals that are used to generate precise orbits and clock corrections. The second case applies when using additional signals to the primary ones. In both cases, measurements from single and multiple constellations are addressed. It is suggested that the satellite-related code biases be handled as calibrated quantities that are obtained from multi-GNSS experiment products and the fractional phase cycle biases obtained from a network to allow for integer ambiguity fixing. Some receiver-related biases are removed using between-satellite single differencing, whereas other receiver biases such as inter-system biases are lumped with differential code and phase biases and need to be estimated. The testing results show that the treatment of biases significantly improves solution convergence in the float ambiguity PPP mode, and leads to ambiguity-fixed PPP within a few minutes with a small improvement in solution precision.

60 citations
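A minimal sketch of the dual-frequency ionosphere-free code combination that underlies such PPP models (first-order ionospheric delay scales as 1/f^2 and cancels); the GPS L1/L2 frequencies are standard values, while the range and delay numbers are invented and none of the bias terms discussed in the paper are modelled.

import numpy as np

F1, F2 = 1575.42e6, 1227.60e6     # GPS L1 and L2 carrier frequencies (Hz)

def ionosphere_free(p1, p2):
    # P_IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2): removes first-order ionospheric delay.
    a1 = F1**2 / (F1**2 - F2**2)
    a2 = -F2**2 / (F1**2 - F2**2)
    return a1 * p1 + a2 * p2

rho, iono_l1 = 22_345_678.9, 4.2                 # hypothetical geometric range (m) and L1 delay (m)
p1 = rho + iono_l1                               # pseudorange on L1
p2 = rho + iono_l1 * (F1 / F2)**2                # the delay scales as f1^2/f2^2 on L2
print(ionosphere_free(p1, p2) - rho)             # ~0: ionospheric term removed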


Journal ArticleDOI
TL;DR: In this paper, the powder bed density was measured during the powder-bed fusion (PBF) process and an expanded measurement uncertainty, U_PBD (k = 2), was determined as 0.004 g cm−3.
Abstract: Many factors influence the performance of additive manufacturing (AM) processes, resulting in a high degree of variation in process outcomes. Therefore, quantifying these factors and their correlations to process outcomes are important challenges to overcome to enable widespread adoption of emerging AM technologies. In the powder bed fusion AM process, the density of the powder layers in the powder bed is a key influencing factor. This paper introduces a method to determine the powder bed density (PBD) during the powder bed fusion (PBF) process. A complete uncertainty analysis associated with the measurement method is also described. The resulting expanded measurement uncertainty, U_PBD (k = 2), was determined as 0.004 g cm−3. It was shown that this expanded measurement uncertainty is about three orders of magnitude smaller than the typical powder bed density. This method enables establishing correlations between changes in PBD and the direction of motion of the powder recoating arm.
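For readers unfamiliar with the k = 2 notation, the sketch below shows how an expanded uncertainty is conventionally formed from uncorrelated standard-uncertainty components (combined in quadrature, then multiplied by the coverage factor); the component values are placeholders, not the paper's actual uncertainty budget.

import numpy as np

def expanded_uncertainty(components, k=2.0):
    # Combine uncorrelated standard uncertainties in quadrature, then apply the coverage factor.
    u_c = np.sqrt(np.sum(np.square(components)))
    return k * u_c

# Placeholder standard uncertainties (g/cm^3), e.g. mass, layer volume, repeatability.
u_components = [0.0012, 0.0009, 0.0011]
print(expanded_uncertainty(u_components))   # on the order of the 0.004 g/cm^3 quoted above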

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new deconvolution method, named sparse maximum harmonics-to-noise-ratio deconvolution (SMHD), that employs a novel index, the harmonics-to-noise ratio (HNR), as the objective function for iteratively choosing the optimum filter coefficients that maximize the HNR.
Abstract: De-noising and enhancement of the weak fault signature from the noisy signal are crucial for fault diagnosis, as the features are often very weak and masked by the background noise. Deconvolution methods have a significant advantage in counteracting the influence of the transmission path and enhancing the fault impulses. However, the performance of traditional deconvolution methods is greatly affected by some limitations, which restrict their application range. Therefore, this paper proposes a new deconvolution method, named sparse maximum harmonics-to-noise-ratio deconvolution (SMHD), that employs a novel index, the harmonics-to-noise ratio (HNR), as the objective function for iteratively choosing the optimum filter coefficients that maximize the HNR. SMHD is designed to enhance latent periodic impulse faults in heavily noisy signals by calculating the HNR to estimate the period. A sparse factor is utilized to further suppress the noise and improve the signal-to-noise ratio of the filtered signal in every iteration step. In addition, the updating process of the sparse threshold value and the period guarantees the robustness of SMHD. On this basis, the new method not only overcomes the limitations associated with traditional deconvolution methods such as minimum entropy deconvolution (MED) and maximum correlated kurtosis deconvolution (MCKD), but also gives results that are better on visual inspection, even if the fault period is not provided in advance. Moreover, the efficiency of the proposed method is verified by simulations and bearing data from different test rigs. The results show that the proposed method is effective in the detection of various bearing faults compared with the original MED and MCKD.

Journal ArticleDOI
TL;DR: In this article, a method to extract turbulent statistics from 3D PIV measurements via ensemble averaging is presented. It is a 3D extension of ensemble particle tracking velocimetry methods, which consist of summing distributions of velocity vectors calculated on low-image-density samples and then extracting the statistical moments from the velocity vectors within sub-volumes, with the size of the sub-volume depending on the desired number of particles and on the available number of snapshots.
Abstract: A method to extract turbulent statistics from three-dimensional (3D) PIV measurements via ensemble averaging is presented. The proposed technique is a 3D extension of ensemble particle tracking velocimetry methods, which consist of summing distributions of velocity vectors calculated on low-image-density samples and then extracting the statistical moments from the velocity vectors within sub-volumes, with the size of the sub-volume depending on the desired number of particles and on the available number of snapshots. The extension to 3D measurements poses the additional difficulty of sparse velocity vector distributions, thus requiring a large number of snapshots to achieve high resolution measurements with a sufficient degree of accuracy. At present, this hinders the achievement of single-voxel measurements unless millions of samples are available. Consequently, one has to give up spatial resolution and accept sub-volumes that are still relatively large compared to the voxel. This leads to the further problem of a possible residual mean velocity gradient within the sub-volumes, which significantly contaminates the computation of second order moments. In this work, we propose a method to reduce the residual gradient effect, allowing high resolution to be reached even with relatively large interrogation spots, therefore still retrieving a large number of particles on which turbulent statistics can be calculated. The method consists of applying a polynomial fit to the velocity distributions within each sub-volume to mimic the residual mean velocity gradient.
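A minimal sketch of the gradient-correction idea for one velocity component in one sub-volume: a low-order polynomial fit of velocity against position absorbs the residual mean gradient before the second-order moment is computed. The binning of scattered vectors into sub-volumes is assumed already done, and the linear (first-order) model, variable names and synthetic data are illustrative only.

import numpy as np

def subvolume_stats(xyz, u):
    # xyz: (N, 3) particle positions inside the sub-volume; u: (N,) one velocity component.
    x, y, z = (xyz - xyz.mean(axis=0)).T
    A = np.column_stack([np.ones_like(x), x, y, z])      # linear fit absorbs the mean gradient
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    fluct = u - A @ coef                                  # gradient-corrected fluctuations
    return coef[0], np.mean(fluct**2)                     # sub-volume mean and variance

rng = np.random.default_rng(3)
pos = rng.uniform(-1, 1, size=(5000, 3))
vel = 2.0 + 0.8 * pos[:, 0] + 0.1 * rng.standard_normal(5000)   # mean gradient + "turbulence"
print(subvolume_stats(pos, vel))    # variance close to 0.01, not inflated by the gradient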

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new transient feature extraction approach using the optimized Morlet wavelet transform, kurtosis index and soft-thresholding, which has lower computation complexity compared to the existing wavelet parameter optimization algorithm.
Abstract: Aimed at solving the key problem in weak transient detection, the present study proposes a new transient feature extraction approach using the optimized Morlet wavelet transform, kurtosis index and soft-thresholding. Firstly, a fast optimization algorithm based on the Shannon entropy is developed to obtain the optimized Morlet wavelet parameter. Compared to the existing Morlet wavelet parameter optimization algorithm, this algorithm has lower computation complexity. After performing the optimized Morlet wavelet transform on the analyzed signal, the kurtosis index is used to select the characteristic scales and obtain the corresponding wavelet coefficients. From the time-frequency distribution of the periodic impulsive signal, it is found that the transient signal can be reconstructed by the wavelet coefficients at several characteristic scales, rather than the wavelet coefficients at just one characteristic scale, so as to improve the accuracy of transient detection. Due to the noise influence on the characteristic wavelet coefficients, the adaptive soft-thresholding method is applied to denoise these coefficients. With the denoised wavelet coefficients, the transient signal can be reconstructed. The proposed method was applied to the analysis of two simulated signals, and the diagnosis of a rolling bearing fault and a gearbox fault. The superiority of the method over the fast kurtogram method was verified by the results of simulation analysis and real experiments. It is concluded that the proposed method is extremely suitable for extracting the periodic impulsive feature from strong background noise.
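A rough PyWavelets-based sketch of the scale-selection and soft-thresholding steps described above; the Shannon-entropy optimization of the Morlet parameter is not reproduced (PyWavelets' fixed 'morl' wavelet is used instead), the threshold level and number of retained scales are arbitrary, and summing the denoised scale coefficients is only a crude stand-in for the paper's reconstruction.

import numpy as np
import pywt
from scipy.stats import kurtosis

def extract_transients(x, fs, scales=np.arange(1, 64), n_keep=3, thr=0.2):
    coeffs, _ = pywt.cwt(x, scales, 'morl', sampling_period=1.0 / fs)
    k = kurtosis(coeffs, axis=1)                    # impulsiveness of each scale
    keep = np.argsort(k)[-n_keep:]                  # characteristic (most impulsive) scales
    den = pywt.threshold(coeffs[keep], thr * np.abs(coeffs[keep]).max(), mode='soft')
    return den.sum(axis=0)                          # rough estimate of the transient part

fs = 2000
t = np.arange(0, 1.0, 1.0 / fs)
impulses = np.sin(2 * np.pi * 300 * t) * (np.mod(t, 0.1) < 0.01)   # periodic bursts
x = impulses + 0.5 * np.random.randn(len(t))
print(np.round(extract_transients(x, fs)[:5], 3))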

Journal ArticleDOI
TL;DR: In this article, a generalized method to estimate a 2D distribution of temperature and wavelength-dependent emissivity in a sooty flame with spectroscopic radiation intensities is proposed.
Abstract: A generalized method to estimate the two-dimensional (2D) distribution of temperature and wavelength-dependent emissivity in a sooty flame from spectroscopic radiation intensities is proposed in this paper. The method adopts a Newton-type iterative method to solve for the unknown coefficients in the polynomial relationship between the emissivity and the wavelength, as well as the unknown temperature. Polynomial functions of increasing order are examined, and the final results are determined once the result converges. A numerical simulation of a fictitious flame with wavelength-dependent absorption coefficients shows good performance, with relative errors of less than 0.5% in the average temperature. Moreover, a hyper-spectral imaging device is introduced to measure an ethylene/air laminar diffusion flame with the proposed method. The proper order of the polynomial function is selected as 2, because each one-order increase in the polynomial function brings a temperature variation of less than 20 K. For the ethylene laminar diffusion flame with 194 ml min−1 C2H4 and 284 L min−1 air studied in this paper, the 2D distribution of the average temperature estimated along the line of sight is similar to, but smoother than, that of the local temperature given in the references, and the 2D distribution of emissivity shows a cumulative effect of the absorption coefficient along the line of sight. It also shows that the emissivity of the flame decreases as the wavelength increases. The emissivity at a wavelength of 400 nm is about 2.5 times that at 1000 nm for a typical line of sight in the flame, with the same trend as the wavelength dependence of the soot absorption coefficient.

Journal ArticleDOI
TL;DR: A novel method based on kurtogram and frequency domain correlated kurtosis is proposed, which demonstrates the effectiveness and robustness of the method in fault diagnosis of rolling element bearings.
Abstract: Envelope analysis is one of the most useful methods in the localized fault diagnosis of rolling element bearings. However, there is a challenge in selecting the optimal resonance band. In this paper, a novel method based on the kurtogram and frequency domain correlated kurtosis is proposed. To obtain the correct relationship between the node and the frequency band in the wavelet packet transform, a vital process named frequency ordering is conducted to solve the frequency folding problem due to down sampling. The correlated kurtosis of the envelope spectrum, instead of the correlated kurtosis of the envelope signal or the kurtosis of the envelope spectrum, is utilized to generate the kurtogram, in which the maximum value indicates the optimal band for envelope analysis. Several cases of experimental bearing fault signals are used to evaluate the immunity of the proposed method to strong noise interference. The improved performance has also been compared with two previously developed methods. The results demonstrate the effectiveness and robustness of the method in the fault diagnosis of rolling element bearings.
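A small sketch of the "correlated kurtosis of the envelope spectrum" idea; the correlated kurtosis definition follows the one commonly used for MCKD, the wavelet-packet band selection and frequency-ordering steps are not reproduced, and the shift T is set here by hand from the known 50 Hz modulation spacing of the synthetic signal.

import numpy as np
from scipy.signal import hilbert

def correlated_kurtosis(y, T, M=1):
    # CK_M(T) = sum_n (prod_{m=0..M} y[n - m*T])^2 / (sum_n y[n]^2)^(M+1)
    prod = y[M * T:].copy()
    for m in range(1, M + 1):
        prod = prod * y[M * T - m * T: len(y) - m * T]
    return np.sum(prod**2) / np.sum(y**2) ** (M + 1)

def envelope_spectrum(x):
    env = np.abs(hilbert(x))
    return np.abs(np.fft.rfft(env - env.mean()))

fs = 5000
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 1200 * t) * (1 + np.cos(2 * np.pi * 50 * t)) + 0.3 * np.random.randn(len(t))
es = envelope_spectrum(x)
T = 100            # 50 Hz spacing in spectrum bins (bin width = fs / len(x) = 0.5 Hz)
print(correlated_kurtosis(es, T))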

Journal ArticleDOI
TL;DR: In this paper, an optical fiber sensor capable of simultaneously measuring strain, temperature and refractive index is presented. But the sensor is based on the combination of two fiber Bragg gratings written in a standard single-mode fiber, one in an untapered region and another in a tapered region, spliced to a no-core fiber.
Abstract: We report the development of an optical fiber sensor capable of simultaneously measuring strain, temperature and refractive index. The sensor is based on the combination of two fiber Bragg gratings written in a standard single-mode fiber, one in an untapered region and another in a tapered region, spliced to a no-core fiber. The possibility of simultaneously measuring three parameters relies on the different sensitivity responses of each part of the sensor. The results have shown the possibility of measuring the three parameters simultaneously with resolutions of 3.77 με, 1.36 °C and 5 × 10−4, respectively, for strain, temperature and refractive index. In addition to the multiparameter capability, the simple production and combination of all the parts involved in this optical-fiber-based sensor is an attractive feature for several sensing applications.
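Simultaneous multi-parameter fiber sensors of this kind are usually interrogated through a linear sensitivity-matrix relation between the measured responses and the measurands; the sketch below only illustrates that inversion, and the matrix entries are invented placeholders, not the calibration coefficients reported in the paper.

import numpy as np

# Illustrative (made-up) sensitivity matrix K: rows are the three measured responses
# (untapered FBG shift, tapered FBG shift, no-core-fiber response), columns are the
# measurands (strain, temperature, refractive index), in arbitrary consistent units.
K = np.array([
    [1.2e-3, 1.0e-2, 0.0],
    [0.8e-3, 9.5e-3, 0.3],
    [1.0e-4, 1.2e-2, 5.0],
])

def measurands_from_responses(delta):
    # Solve K @ [d_strain, d_temperature, d_refractive_index] = measured response vector.
    return np.linalg.solve(K, delta)

true = np.array([100.0, 2.0, 0.001])        # 100 microstrain, 2 degC, 1e-3 RI change
print(measurands_from_responses(K @ true))  # recovers the three measurands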

Journal ArticleDOI
TL;DR: In this paper, a speckle-based calibration method is developed to calibrate the stereo-DIC system when the system is applied for deformation measurement of large engineering components.
Abstract: The development of stereo-digital image correlation (stereo-DIC) enables the application of the vision-based technique that uses digital cameras to the deformation measurement of materials and structures. Compared with traditional contact measurements, the stereo-DIC technique allows for non-contact measurement, has a non-intrusive characteristic, and can obtain full-field deformation information. In this paper, a speckle-based calibration method is developed to calibrate the stereo-DIC system when the system is applied for deformation measurement of large engineering components. By combining speckle analysis with the classical relative orientation algorithm, the relative rotation and translation between cameras can be calibrated based on analysis of experimental speckle images. For validation, the strain fields of a four-point bending beam and an axially loaded concrete column were determined by the proposed calibration method and stereovision measurement. As a practical application, the proposed calibration method was applied for strain measurement of a ductile iron cylindrical vessel in a drop test. The measured results verify that the proposed calibration method is effective for deformation measurement of large engineering components.

Journal ArticleDOI
TL;DR: This research focuses on the development of a new method to accurately identify the geometric errors of 5-axis CNC machines, especially the errors due to rotary axes, using the magnetic double ball bar.
Abstract: Five-axis CNC machine tools are widely used in manufacturing of parts with free-form surfaces. Geometric errors of machine tools have significant effects on the quality of manufactured parts. This research focuses on development of a new method to accurately identify geometric errors of 5-axis CNC machines, especially the errors due to rotary axes, using the magnetic double ball bar. A theoretical model for identification of geometric errors is provided. In this model, both position-independent errors and position-dependent errors are considered as the error sources. This model is simplified by identification and removal of the correlated and insignificant error sources of the machine. Insignificant error sources are identified using the sensitivity analysis technique. Simulation results reveal that the simplified error identification model can result in more accurate estimations of the error parameters. Experiments on a 5-axis CNC machine tool also demonstrate significant reduction in the volumetric error after error compensation.

Journal ArticleDOI
TL;DR: A comprehensive review of the non-intrusive measurement techniques and the current state of knowledge and experience in the characterization and monitoring of gas-solid fluidized beds is presented in this article.
Abstract: Gas-solid fluidization is a well-established technique to suspend or transport particles and has been applied in a variety of industrial processes. Nevertheless, our knowledge of fluidization hydrodynamics is still limited for the design, scale-up and operation optimization of fluidized bed reactors. It is therefore essential to characterize the two-phase flow behaviours in gas-solid fluidized beds and monitor the fluidization processes for control and optimization. A range of non-intrusive techniques have been developed or proposed for measuring the fluidization dynamic parameters and monitoring the flow status without disturbing or distorting the flow fields. This paper presents a comprehensive review of the non-intrusive measurement techniques and the current state of knowledge and experience in the characterization and monitoring of gas-solid fluidized beds. These techniques are classified into six main categories according to their sensing principles: electrostatic, acoustic emission and vibration, visualization, particle tracking, laser Doppler and phase Doppler anemometry, and pressure fluctuation methods. Trends and future developments in this field are also discussed.

Journal ArticleDOI
TL;DR: A number of adjustments are suggested that could improve resolution, making the technology viable for a broader range of in-line quality inspection applications, including cast and additively manufactured parts.
Abstract: X-ray computed tomography (CT) offers significant potential as a metrological tool, given the wealth of internal and external data that can be captured, much of which is inaccessible to conventional optical and tactile coordinate measurement machines (CMMs). Typical lab-based CT can take upwards of 30 min to produce a 3D model of an object, making it unsuitable for volume production inspection applications. Recently a new generation of real time tomography (RTT) x-ray CT has been developed for airport baggage inspection, utilising novel electronically switched x-ray sources instead of a rotating gantry. This enables bags to be scanned in a few seconds and 3D volume images to be produced in almost real time for qualitative assessment to identify potential threats. Such systems are able to scan objects as large as 600 mm in diameter at 500 mm s−1. The current voxel size of such a system is approximately 1 mm, much larger than that of lab-based CT, but the significantly faster scan times make it an attractive prospect to explore. This paper examines the potential of such systems for real time metrological inspection of additively manufactured parts. The measurement accuracy of the Rapiscan RTT110, an RTT airport baggage scanner, is evaluated by comparison to measurements from a metrologically confirmed CMM and those achieved by conventional lab-CT. It was found to produce an average absolute error of 0.18 mm, which may already be adequate for some applications in the manufacturing line. While this error is, as expected, greater than that of lab-based CT, a number of adjustments are suggested that could improve resolution, making the technology viable for a broader range of in-line quality inspection applications, including cast and additively manufactured parts.

Journal ArticleDOI
TL;DR: This work analytically quantify the error bound in the pressure field, and is able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data.
Abstract: Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
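A toy one-dimensional finite-difference demonstration (much simpler than the paper's analysis) of the propagation the abstract refers to: perturbing the Poisson right-hand side, which is what noisy PIV-derived velocity data effectively do, perturbs the reconstructed pressure by an amount that depends on the operator, the domain and the boundary conditions. Grid size, source term and noise level are arbitrary.

import numpy as np

def solve_poisson_1d(f, h, p_left=0.0, p_right=0.0):
    # Solve p'' = f on interior nodes with Dirichlet ends, standard 3-point stencil.
    n = len(f)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f.astype(float)
    rhs[0] -= p_left / h**2
    rhs[-1] -= p_right / h**2
    return np.linalg.solve(A, rhs)

n, L = 200, 1.0
h = L / (n + 1)
x = np.linspace(h, L - h, n)
f_true = np.sin(2 * np.pi * x)                               # "exact" source term
noise = 0.05 * np.random.default_rng(4).standard_normal(n)   # error inherited from the PIV data
p_true = solve_poisson_1d(f_true, h)
p_noisy = solve_poisson_1d(f_true + noise, h)
print(np.linalg.norm(p_noisy - p_true) / np.linalg.norm(noise))  # error amplification factor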

Journal ArticleDOI
TL;DR: A review of the application of the contact and non-contact mode of ultrasonic measurement focusing on safety and quality control areas is presented.
Abstract: Monitoring of the food manufacturing process is vital since it determines the safety and quality level of foods, which directly affect consumers' health. Companies which produce high quality products will gain trust from consumers, and this helps the companies to make profits. The use of efficient and appropriate sensors for the monitoring process can also reduce cost. Food assessment based on ultrasonic sensors has attracted the attention of the food industry due to its excellent capabilities in several applications. The utilization of low or high frequencies for the ultrasonic transducer has provided enormous benefits for analysing, modifying and guaranteeing the quality of food. The contact and non-contact ultrasonic measurement modes have also contributed significantly to food processing. This paper presents a review of the application of the contact and non-contact modes of ultrasonic measurement, focusing on safety and quality control areas.

Journal ArticleDOI
TL;DR: In this paper, the performance of four uncertainty estimation methods, primary peak ratio (PPR), mutual information (MI), image matching (IM), and correlation statistics (CS), was evaluated across two separate experiments and three software packages.
Abstract: Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have been recently introduced, generating interest about their applicability and utility. The present study compares and contrasts current methods across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods: primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high- and low-resolution measurements and a laser Doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that, qualitatively, each method responded to spatially varying error (i.e. higher error regions resulted in higher uncertainty predictions in that region). However, the PPR and MI methods demonstrated a reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from approximately 65%–77% for the PPR and MI methods, 40%–50% for IM and near 50% for CS. These observations illustrate some of the strengths and weaknesses of the methods considered herein and identify future directions for development and improvement.
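A minimal sketch of how a standard-uncertainty coverage like the 68% figures above can be computed once a reference error is available: count the fraction of vectors whose actual error lies within the quoted one-sigma uncertainty. The error and uncertainty arrays below are synthetic placeholders for the PIV-minus-reference errors and the estimated uncertainty fields.

import numpy as np

def standard_coverage(error, uncertainty):
    # Fraction of vectors whose actual error lies inside +/- the estimated standard uncertainty.
    return np.mean(np.abs(error) <= np.abs(uncertainty))

rng = np.random.default_rng(5)
true_sigma = 0.1                                      # px, actual error level
error = true_sigma * rng.standard_normal(100000)      # PIV minus reference
u_good = np.full_like(error, true_sigma)              # well-calibrated uncertainty estimate
u_low = np.full_like(error, 0.6 * true_sigma)         # under-predicting method
print(standard_coverage(error, u_good), standard_coverage(error, u_low))   # ~0.68 vs ~0.45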

Journal ArticleDOI
TL;DR: GUM2DFT as discussed by the authors is an open-source software tool that utilizes closed formulas for the efficient propagation of uncertainties for the application of the DFT, inverse DFT and input estimation in the frequency domain.
Abstract: The Fourier transform and its counterpart for discrete time signals, the discrete Fourier transform (DFT), are common tools in measurement science and application. Although almost every scientific software package offers ready-to-use implementations of the DFT, the propagation of uncertainties in line with the guide to the expression of uncertainty in measurement (GUM) is typically neglected. This is of particular importance in dynamic metrology, when input estimation is carried out by deconvolution in the frequency domain. To this end, we present the new open-source software tool GUM2DFT, which utilizes closed formulas for the efficient propagation of uncertainties for the application of the DFT, inverse DFT and input estimation in the frequency domain. It handles different frequency domain representations, accounts for autocorrelation and takes advantage of the symmetry inherent in the DFT result for real-valued time domain signals. All tools are presented in terms of examples which form part of the software package. GUM2DFT will foster GUM-compliant evaluation of uncertainty in a DFT-based analysis and enable metrologists to include uncertainty evaluations in their routine work.
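A brute-force sketch of the propagation principle such a tool relies on, not the GUM2DFT implementation or its API: the DFT is a linear operation y = F x, so a time-domain covariance propagates as U_y = F U_x F^H. The explicit DFT matrix keeps the example short but is only sensible for small N.

import numpy as np

def dft_with_uncertainty(x, Ux):
    # Propagate a time-domain covariance Ux through the DFT y = F x (a linear model),
    # GUM-style: Uy = F Ux F^H.
    n = len(x)
    F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
    return F @ x, F @ Ux @ F.conj().T

n = 64
x = np.sin(2 * np.pi * 5 * np.arange(n) / n)
Ux = 0.01**2 * np.eye(n)                      # white measurement noise, sigma = 0.01
y, Uy = dft_with_uncertainty(x, Ux)
print(np.allclose(y, np.fft.fft(x)), np.sqrt(Uy[5, 5].real))   # check vs FFT; std of bin 5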

Journal ArticleDOI
TL;DR: In this paper, the authors investigated field monitoring of a 1108 m suspension bridge during an assessment load test, using integrated distributed fiber-optic sensors (DFOSs) in addition to the conventional Brillouin time domain analysis system using the differential pulsewidth pair (DPP) technique was adopted.
Abstract: This paper investigated field monitoring of a 1108 m suspension bridge during an assessment load test, using integrated distributed fibre-optic sensors (DFOSs). In addition to the conventional Brillouin time domain analysis system, a high spatial resolution Brillouin system using the differential pulse-width pair (DPP) technique was adopted. Temperature compensation was achieved using a Raman distributed temperature sensing system. This is the first full scale field application of DFOSs using the Brillouin time domain analysis technique on a thousand-meter-scale suspension bridge. Measured strain distributions along the whole length of the bridge were presented. The interaction between the main cables and the steel box girder was highlighted. The Brillouin fibre-optic monitoring systems exhibited great facility for long distance distributed strain monitoring, with up to 0.05 m spatial resolution and a 0.01 m/point sampling interval. The performance of the Brillouin system using the DPP technique was discussed. The measured data were also employed for assessing the bridge design and the structural condition. The results show that the symmetrical design assumptions were consistent with the actual bridge, and that the strain values along the whole bridge were within the safety range. This trial field study serves as an example, demonstrating the feasibility of highly dense strain and temperature measurement for large scale civil infrastructure using integrated DFOSs.

Journal ArticleDOI
TL;DR: In this review, various dimensional metrological methods using the optical comb are introduced, describing their basic principles and applications in scientific as well as industrial areas.
Abstract: In the field of dimensional metrology, significant technical challenges have been encountered with regard to large-scale object assembly, satellite positioning, control of the long-distance precision stage, and inspections of large steps or deep holes on semiconductor devices and multi-layered display panels. The key elements required are high speeds, a long dynamic measurable range, and good precision of measurements, and conventional methods can scarcely meet such requirements simultaneously. Promisingly, the advent of the optical comb has opened up numerous possibilities to break through practical limits by exploiting several of its unique features. These include inter-mode interference, a wide spectral bandwidth with a long coherence length and well-defined longitudinal modes. In this review, various dimensional metrological methods using the optical comb are introduced, describing their basic principles and applications in scientific as well as industrial areas.

Journal ArticleDOI
Enlai Zhang1, Liang Hou1, Chao Shen1, Yingliang Shi1, Yaxiang Zhang1 
TL;DR: It is shown that the PSO-BPNN method can achieve convergence more quickly and improve the prediction accuracy of sound quality, which can further lay a foundation for the control of the sound quality inside vehicles.
Abstract: To better solve the complex non-linear problem between subjective sound quality evaluation results and objective psychoacoustic parameters, a method for the prediction of sound quality is put forward using a back propagation neural network (BPNN) based on particle swarm optimization (PSO), in which the initial weights and thresholds of the BP network neurons are optimized through PSO. In order to verify the effectiveness and accuracy of this approach, the noise signals of B-class vehicles from idle speed to 120 km h−1, measured by an artificial head, are taken as the target. In addition, this paper describes a subjective evaluation experiment on sound quality annoyance inside the vehicles using a grade evaluation method, by which the annoyance of each sample is obtained. With the use of Artemis software, the main objective psychoacoustic parameters of each noise sample are calculated. These parameters include loudness, sharpness, roughness, fluctuation, tonality, articulation index (AI) and A-weighted sound pressure level. Furthermore, three evaluation models with the same artificial neural network (ANN) structure are built: the standard BPNN model, the genetic algorithm back-propagation neural network (GA-BPNN) model and the particle swarm optimization back-propagation neural network (PSO-BPNN) model. After network training and evaluation prediction with the three models based on the experimental data, the results show that the PSO-BPNN method can achieve convergence more quickly and improve the prediction accuracy of sound quality, which can further lay a foundation for the control of sound quality inside vehicles.

Journal ArticleDOI
TL;DR: In this paper, a theoretical analysis of the strain transfer of a three-layered general model has been carried out by introducing Goodman's hypothesis to describe the interfacial shear stress relationship.
Abstract: Asphalt pavement is vulnerable to random damage, such as cracking and rutting, which can be proactively identified by distributed optical fiber sensing technology. However, due to the material nature of optical fibers, a bare fiber is apt to be damaged during the construction process of pavements. Thus, a protective layer is needed for this application. Unfortunately, part of the strain of the host material is absorbed by the protective layer when transferring the strain to the sensing fiber. To account for the strain transfer error, in this paper a theoretical analysis of the strain transfer of a three-layered general model has been carried out by introducing Goodman's hypothesis to describe the interfacial shear stress relationship. The model considers the viscoelastic behavior of the host material and the protective layer. The effects of a crack in the host material and of the sensing length on the strain transfer relationship are discussed. To validate the effectiveness of the strain transfer analysis, a flexible asphalt-mastic packaged distributed optical fiber sensor was designed and tested in a laboratory environment to monitor the distributed strain and the appearance of cracks in an asphalt concrete beam at two different temperatures. The experimental results indicated that the developed strain transfer formula can significantly reduce the strain transfer error, and that the asphalt-mastic packaged optical fiber sensor can successfully monitor the distributed strain and identify local cracks.

Journal ArticleDOI
TL;DR: In this paper, a spin exchange relaxation-free (SERF) magnetometer with a small angle between the pump and probe beams facilitating a multi-channel design with a flat pancake cell is presented.
Abstract: We constructed a spin-exchange relaxation-free (SERF) magnetometer with a small angle between the pump and probe beams facilitating a multi-channel design with a flat pancake cell. This configuration provides almost complete overlap of the beams in the cell, and prevents the pump beam from entering the probe detection channel. By coupling the lasers in multi-mode fibers, without an optical isolator or field modulation, we demonstrate a sensitivity of 10 fT/√Hz for frequencies between 10 Hz and 100 Hz. In addition to the experimental study of sensitivity, we present a theoretical analysis of SERF magnetometer response to magnetic fields for small-angle and parallel-beam configurations, and show that at optimal DC offset fields the magnetometer response is comparable to that in the orthogonal-beam configuration. Based on the analysis, we also derive fundamental and probe-limited sensitivities for the arbitrary non-orthogonal geometry. The expected practical and fundamental sensitivities are of the same order as those in the orthogonal geometry. As a result, we anticipate that our design will be useful for magnetoencephalography (MEG) and magnetocardiography (MCG) applications.