
Showing papers in "Measurement Science and Technology in 2018"




Journal ArticleDOI
TL;DR: An intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed; it is constructed by stacking recurrent hidden layers to automatically extract features from the input spectrum sequences.
Abstract: Traditional intelligent fault diagnosis methods for rolling bearings depend heavily on manual feature extraction and feature selection. To address this, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. Firstly, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Secondly, the DRNN is constructed by stacking recurrent hidden layers to automatically extract features from the input spectrum sequences. Thirdly, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified with experimental rolling bearing data, and the results confirm that it is more effective than traditional intelligent fault diagnosis methods.
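The pipeline described above (spectrum-sequence input, stacked recurrent layers, adaptive learning rate) can be sketched in miniature. This is an illustrative toy, not the authors' architecture: a single scalar recurrent unit and a simple step-decay schedule standing in for the paper's adaptive rate; all names and constants below are invented for the example.

```python
import math

def rnn_forward(spectrum_seq, Wx, Wh, b):
    """One recurrent hidden unit: h_t = tanh(Wx*x_t + Wh*h_{t-1} + b).
    spectrum_seq is a list of (toy, scalar) spectrum values."""
    h = 0.0
    states = []
    for x in spectrum_seq:
        h = math.tanh(Wx * x + Wh * h + b)
        states.append(h)  # hidden states act as automatically extracted features
    return states

def adaptive_lr(lr0, epoch, decay=0.5, every=10):
    """Step-decay schedule: multiply the rate by `decay` every `every` epochs."""
    return lr0 * decay ** (epoch // every)
```

Stacking such layers, with each layer's state sequence fed to the next, gives the "deep" part of a DRNN.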

107 citations


Journal ArticleDOI
TL;DR: An integrated multi-sensor fusion-based deep feature learning (IMSFDFL) approach is developed to identify the fault severity in rotating machinery processes and can identify fault severity more effectively than the traditional approaches.
Abstract: The diagnosis of complicated fault severity problems in rotating machinery systems is an important issue that affects the productivity and quality of manufacturing processes and industrial applications. However, it usually suffers from several deficiencies. (1) A considerable degree of prior knowledge and expertise is required not only to extract and select specific features from raw sensor signals, but also to choose a suitable fusion method for sensor information. (2) Traditional artificial neural networks with shallow architectures are usually adopted, and they have a limited ability to learn complex and variable operating conditions. In multi-sensor-based diagnosis applications in particular, massive high-dimensional and high-volume raw sensor signals need to be processed. In this paper, an integrated multi-sensor fusion-based deep feature learning (IMSFDFL) approach is developed to identify the fault severity in rotating machinery processes. First, traditional statistics and energy spectrum features are extracted from multiple sensors with multiple channels and combined. Then, a fused feature vector is constructed from all of the acquisition channels. Further, deep feature learning with stacked auto-encoders is used to obtain the deep features. Finally, the traditional softmax model is applied to identify the fault severity. The effectiveness of the proposed IMSFDFL approach is primarily verified by a one-stage gearbox experimental platform that uses several accelerometers under different operating conditions. This approach can identify fault severity more effectively than the traditional approaches.
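The fusion step, per-channel statistical features concatenated into one fused vector, can be sketched as follows. This is a minimal illustration with an invented feature choice (mean, RMS, kurtosis), not the paper's exact feature set:

```python
import math

def channel_features(signal):
    """Toy stand-ins for the paper's statistical features: mean, RMS, kurtosis."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    var = sum((x - mean) ** 2 for x in signal) / n
    kurt = (sum((x - mean) ** 4 for x in signal) / n) / (var ** 2) if var else 0.0
    return [mean, rms, kurt]

def fuse(channels):
    """Concatenate per-channel features into one fused feature vector."""
    fused = []
    for ch in channels:
        fused.extend(channel_features(ch))
    return fused
```

The fused vector would then be the input to the stacked auto-encoders for deep feature learning.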

86 citations




Journal ArticleDOI
TL;DR: In this article, the second order moment of the correlation (MC) was used to estimate the PIV uncertainty from the shape of the cross-correlation plane, and the predicted uncertainties showed good sensitivity to the error sources and agreement with the expected RMS error.
Abstract: We present a new uncertainty estimation method for particle image velocimetry (PIV), that uses the correlation plane as a model for the probability density function (PDF) of displacements and calculates the second order moment of the correlation (MC). The cross-correlation between particle image patterns is the summation of all particle matches convolved with the apparent particle image diameter. MC uses this property to estimate the PIV uncertainty from the shape of the cross-correlation plane. In this new approach, the generalized cross-correlation (GCC) plane corresponding to a PIV measurement is obtained by removing the particle image diameter contribution. The GCC primary peak represents a discretization of the displacement PDF, from which the standard uncertainty is obtained by convolving the GCC plane with a Gaussian function. Then a Gaussian least-squares-fit is applied to the peak region, accounting for the stretching and rotation of the peak, due to the local velocity gradients and the effect of the convolved Gaussian. The MC method was tested with simulated image sets and the predicted uncertainties show good sensitivity to the error sources and agreement with the expected RMS error. Subsequently, the method was demonstrated in three PIV challenge cases and two experimental datasets and was compared with the published image matching (IM) and correlation statistics (CS) techniques. Results show that the MC method has a better response to spatial variation in RMS error and the predicted uncertainty is in good agreement with the expected standard uncertainty. The uncertainty prediction was also explored as a function of PIV interrogation window size. Overall, the MC method performance establishes itself as a valid uncertainty estimation tool for planar PIV.
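The core idea, treating the correlation peak as a discretized displacement PDF and taking its second-order moment as the uncertainty, can be illustrated in one dimension. This is a hedged toy that ignores the GCC deconvolution and Gaussian least-squares fitting steps of the actual MC method:

```python
import math

def peak_second_moment(plane, dx=1.0):
    """Treat a (1-D, toy) correlation peak as a displacement PDF and return
    its mean and the standard uncertainty = sqrt(second central moment)."""
    total = sum(plane)
    xs = [i * dx for i in range(len(plane))]
    mean = sum(x * c for x, c in zip(xs, plane)) / total
    var = sum(c * (x - mean) ** 2 for x, c in zip(xs, plane)) / total
    return mean, math.sqrt(var)
```

A broader peak (e.g. from velocity gradients or noise) directly yields a larger standard uncertainty, which is the sensitivity the MC method exploits.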

70 citations



Journal ArticleDOI
TL;DR: The x-ray tomography data bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.
Abstract: There is a widening gap between the fast advancement of computational methods for tomographic reconstruction and their successful implementation in production software at various synchrotron facilities. This is due in part to the lack of readily available instrument datasets and phantoms representative of real materials for validation and comparison of new numerical methods. Recent advancements in detector technology have made sub-second and multi-energy tomographic data collection possible (Gibbs et al 2015 Sci. Rep. 5 11824), but have also increased the demand to develop new reconstruction methods able to handle in situ (Pelt and Batenburg 2013 IEEE Trans. Image Process. 22 5238-51) and dynamic systems (Mohan et al 2015 IEEE Trans. Comput. Imaging 1 96-111) that can be quickly incorporated in beamline production software (Gursoy et al 2014 J. Synchrotron Radiat. 21 1188-93). The x-ray tomography data bank, tomoBank, provides a repository of experimental and simulated datasets with the aim to foster collaboration among computational scientists, beamline scientists, and experimentalists and to accelerate the development and implementation of tomographic reconstruction methods for synchrotron facility production software by providing easy access to challenging datasets and their descriptors.

63 citations


Journal ArticleDOI
TL;DR: In this paper, an algorithmic method is proposed to isolate relevant topographic formations and to quantify their dimensional and geometric properties, using areal topography data acquired by state-of-the-art topography measurement instrumentation.
Abstract: The use of state-of-the-art areal topography measurement instrumentation allows for a high level of detail in the acquisition of topographic information at micrometric scales. The three-dimensional geometric models of surface topography obtained from measured data create new opportunities for the investigation of manufacturing processes through characterisation of the surfaces of manufactured parts. Conventional methods for quantitative assessment of topography usually only involve the computation of texture parameters: summary indicators of topography-related characteristics that are computed over the investigated area. However, further useful information may be obtained through characterisation of signature topographic formations, as more direct indicators of manufacturing process behaviour and performance. In this work, laser powder bed fusion of metals is considered. An original algorithmic method is proposed to isolate relevant topographic formations and to quantify their dimensional and geometric properties, using areal topography data acquired by such instrumentation.

58 citations


Journal ArticleDOI
TL;DR: In this paper, different measurement techniques are discussed for the electromagnetic performance and non-destructive evaluation (NDE) of R&S structures for distinct control, guidance, surveillance and communication applications for airborne platforms.
Abstract: In the past few years, great effort has been devoted to the fabrication of highly efficient, broadband radome and stealth (R&S) structures for distinct control, guidance, surveillance and communication applications for airborne platforms. The evaluation of non-planar aircraft R&S structures in terms of their electromagnetic performance and structural damage is still a very challenging task. In this article, distinct measurement techniques are discussed for the electromagnetic performance and non-destructive evaluation (NDE) of R&S structures. This paper gives an overview of microwave measurement techniques, based on the transmission line method and free-space measurement, for the electromagnetic performance evaluation of R&S structures. In addition, various conventional as well as advanced methods, such as millimetre and terahertz wave based imaging techniques with great potential for NDE of load-bearing R&S structures, are also discussed in detail. A glimpse of in situ NDE techniques with the corresponding experimental setups for R&S structures is also presented. The basic concepts, measurement ranges and their instrumentation, and the measurement methods for different R&S structures, along with some miscellaneous topics, are discussed in detail. Some of the challenges and issues pertaining to the measurement of curved R&S structures are also presented. This study also lists various mathematical models and analytical techniques for the electromagnetic performance evaluation and NDE of R&S structures. The research directions described in this study may be of interest to the scientific community in the aerospace sector.

Journal ArticleDOI
TL;DR: A fault diagnosis framework called the adaptive overlapping CNN (AOCNN) is constructed to deal with one-dimensional (1D) raw vibration signals directly and reveals its superiority when compared with other state-of-the-art methods.
Abstract: Intelligent fault diagnosis methods are promising in dealing with mechanical big data owing to their efficiency in extracting representative features. However, there is always an undesirable shift variant property embedded in raw vibration signals, which hinders the direct use of raw signals in fault diagnosis networks. A convolutional neural network (CNN) is a widely used and efficient method to extract features in various fields for its excellent sparse connectivity, equivalent representation and weight sharing properties. However, a plain CNN is time-consuming and suffers from a marginal problem. Heuristically, we construct a fault diagnosis framework called the adaptive overlapping CNN (AOCNN) to deal with one-dimensional (1D) raw vibration signals directly. The shift variant problem is dealt with by the adaptive convolutional layer and the root-mean-square (RMS) pooling layer, and the marginal problem embedded in the CNN is relieved by employing the overlapping layer. Meanwhile, the AOCNN is also characterized by adopting different convolutional strides and diverse activation functions in feature extraction network training and usage. Furthermore, sparse filtering is embedded into the AOCNN, and experiments on a bearing dataset and a gearbox dataset are conducted separately to verify the validity of the proposed method. When compared with other state-of-the-art methods, this method reveals its superiority.
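The RMS pooling layer with overlapping windows can be sketched as follows; a stride smaller than the window length makes adjacent windows overlap. This is an illustrative reading of the abstract, not the authors' code:

```python
import math

def rms_pool(signal, window, stride):
    """Overlapping RMS pooling over a 1D signal: each output is the RMS of
    one window; stride < window makes adjacent windows overlap."""
    out = []
    for start in range(0, len(signal) - window + 1, stride):
        seg = signal[start:start + window]
        out.append(math.sqrt(sum(x * x for x in seg) / window))
    return out
```

RMS pooling suits vibration signals because the RMS of a window is invariant to small shifts of the waveform within it, which is one way to read the paper's handling of the shift variant problem.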

Journal ArticleDOI
TL;DR: A proposed lane detection system is developed from the previous work where the estimation of the dense vanishing point is further improved using the disparity information, and a novel lane position validation approach which computes the energy of each possible solution and selects all satisfying lane positions for visualisation is proposed.
Abstract: The detection of multiple curved lane markings on a non-flat road surface is still a challenging task for vehicular systems. To make an improvement, depth information can be used to enhance the robustness of lane detection systems. In this paper, the proposed lane detection system is developed from our previous work, in which the estimation of the dense vanishing point is further improved using disparity information. However, outliers in the least squares fitting severely affect the accuracy when estimating the vanishing point. Therefore, in this paper we use random sample consensus to update the parameters of the road model iteratively until the percentage of inliers exceeds our pre-set threshold. This significantly helps the system to cope with suddenly changing conditions. Furthermore, we propose a novel lane position validation approach which computes the energy of each possible solution and selects all satisfying lane positions for visualisation. The proposed system is implemented on a heterogeneous system which consists of an Intel Core i7-4720HQ CPU and an NVIDIA GTX 970M GPU. A processing speed of 143 fps has been achieved, which is over 38 times faster than our previous work. Moreover, in order to evaluate the detection precision, we tested 2495 frames including 5361 lanes. It is shown that the overall successful detection rate is increased from 98.7% to 99.5%.
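The RANSAC step described above, resampling model parameters until the inlier percentage exceeds a preset threshold, looks roughly like this for a toy 2D line model (the paper fits a road model from disparity data instead; all names and thresholds here are invented):

```python
import random

def ransac_line(points, thresh=0.1, min_inlier_frac=0.8, max_iter=200, seed=0):
    """Minimal RANSAC: sample two points, fit y = a*x + b, keep the best
    model, and stop once the inlier fraction exceeds the preset threshold."""
    rng = random.Random(seed)
    best = None
    for _ in range(max_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair cannot parameterize y = a*x + b
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < thresh]
        if best is None or len(inliers) > len(best[2]):
            best = (a, b, inliers)
        if len(inliers) / len(points) >= min_inlier_frac:
            break
    return best
```

The early-exit on inlier fraction mirrors the paper's iteration "until the percentage of inliers exceeds our pre-set threshold".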



Journal ArticleDOI
TL;DR: In this paper, the authors describe the working principles of the coaxial volumetric velocimeter (CVV) for wind tunnel measurements, which is derived from the concept of tomographic PIV in combination with recent developments of Lagrangian particle tracking.
Abstract: This study describes the working principles of the coaxial volumetric velocimeter (CVV) for wind tunnel measurements. The measurement system is derived from the concept of tomographic PIV in combination with recent developments in Lagrangian particle tracking. The main characteristics of the CVV are its small tomographic aperture and the coaxial arrangement of the illumination and imaging directions. The system consists of a multi-camera arrangement subtending a solid angle of only a few degrees and a long focal depth. Contrary to established PIV practice, laser illumination is provided along the same direction as that of the camera views, reducing the optical access requirements to a single viewing direction. The laser light is expanded to illuminate the full field of view of the cameras. Such illumination and imaging conditions along a deep measurement volume dictate the use of tracer particles with a large scattering area. In the present work, helium-filled soap bubbles are used. The fundamental principles of the CVV in terms of dynamic velocity and spatial range are discussed. The maximum particle image density is shown to limit the tracer seeding concentration and the instantaneous spatial resolution. Time-averaged flow fields can be obtained at high spatial resolution by ensemble averaging. The use of the CVV for time-averaged measurements is demonstrated in two wind tunnel experiments. After comparing the CVV measurements with the potential flow in front of a sphere, the near-surface flow around a complex wind tunnel model of a cyclist is measured. The measurements yield the volumetric time-averaged velocity and vorticity fields. The measured streamlines in proximity to the surface give an indication of the skin-friction line pattern, which is of use in the interpretation of the surface flow topology.

Journal ArticleDOI
TL;DR: A novel MSWT-based multisensor signal denoising algorithm that utilizes the comprehensive instantaneous frequency estimation by chirp rate estimation to achieve a highly concentrated time–frequency representation so that the signal resolution can be significantly improved.
Abstract: Since it is difficult to obtain the accurate running status of mechanical equipment with only one sensor, multisensor measurement technology has attracted extensive attention. In the field of mechanical fault diagnosis and condition assessment based on vibration signal analysis, multisensor signal denoising has emerged as an important tool to improve the reliability of the measurement result. A reassignment technique termed the synchrosqueezing wavelet transform (SWT) has obvious superiority in slow time-varying signal representation and denoising for fault diagnosis applications. The SWT uses the time–frequency reassignment scheme, which can provide signal properties in 2D domains (time and frequency). However, when the measured signal contains strong noise components and fast varying instantaneous frequency, the performance of SWT-based analysis still depends on the accuracy of instantaneous frequency estimation. In this paper, a matching synchrosqueezing wavelet transform (MSWT) is investigated as a potential candidate to replace the conventional synchrosqueezing transform for the applications of denoising and fault feature extraction. The improved technology utilizes the comprehensive instantaneous frequency estimation by chirp rate estimation to achieve a highly concentrated time–frequency representation so that the signal resolution can be significantly improved. To exploit inter-channel dependencies, the multisensor denoising strategy is performed by using a modulated multivariate oscillation model to partition the time–frequency domain; then, the common characteristics of the multivariate data can be effectively identified. Furthermore, a modified universal threshold is utilized to remove noise components, while the signal components of interest can be retained. Thus, a novel MSWT-based multisensor signal denoising algorithm is proposed in this paper. 
The validity of this method is verified by numerical simulation and by experiments including a rolling bearing system and a gear system. The results show that the proposed multisensor matching synchrosqueezing wavelet transform (MMSWT) is superior to existing methods.
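The classical universal threshold that the paper's "modified universal threshold" builds on is easy to state: lambda = sigma * sqrt(2 ln N), followed by soft shrinkage of the coefficients. A minimal sketch of that baseline (the paper's modification is not reproduced here):

```python
import math

def soft_threshold(coeffs, sigma):
    """Universal threshold lambda = sigma*sqrt(2 ln N) with soft shrinkage:
    coefficients below lambda are zeroed (treated as noise), the rest are
    shrunk toward zero by lambda while keeping their sign."""
    n = len(coeffs)
    lam = sigma * math.sqrt(2.0 * math.log(n))
    return [math.copysign(max(abs(c) - lam, 0.0), c) for c in coeffs]
```

In the paper's scheme the thresholding is applied to time-frequency coefficients after the MSWT has concentrated the signal components, so the retained coefficients correspond to the oscillations of interest.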

Journal ArticleDOI
TL;DR: In this paper, a framework for reconstructing volumes with multiple plenoptic cameras is presented, covering volumetric calibration and four reconstruction algorithms: integral refocusing, filtered refocusing, multiplicative refocusing, and MART.
Abstract: Plenoptic particle image velocimetry was recently introduced as a viable three-dimensional, three-component velocimetry technique based on light field cameras. One of the main benefits of this technique is its single camera configuration, allowing the technique to be applied in facilities with limited optical access. The main drawback of this configuration is decreased accuracy in the out-of-plane dimension. This work presents a solution with the addition of a second plenoptic camera in a stereo-like configuration. A framework for reconstructing volumes with multiple plenoptic cameras is presented, covering volumetric calibration and four reconstruction algorithms: integral refocusing, filtered refocusing, multiplicative refocusing, and MART. It is shown that the addition of a second camera improves the reconstruction quality and removes the 'cigar'-like elongation associated with the single camera system. In addition, it is found that adding a third camera provides minimal improvement. Further metrics of the reconstruction quality are quantified in terms of reconstruction algorithm, particle density, number of cameras, camera separation angle, voxel size, and the effect of common image noise sources. In addition, a synthetic Gaussian ring vortex is used to compare the accuracy of the single and two camera configurations. It was determined that the addition of a second camera reduces the RMSE velocity error from 1.0 to 0.1 voxels in depth and from 0.2 to 0.1 voxels in the lateral spatial directions. Finally, the technique is applied experimentally on a ring vortex and comparisons are drawn from the four presented reconstruction algorithms. It was found that MART and multiplicative refocusing produced the cleanest vortex structure and had the least shot-to-shot variability. Filtered refocusing is able to produce the desired structure, albeit with more noise and variability, while integral refocusing struggled to produce a coherent vortex ring.

Journal ArticleDOI
TL;DR: In this paper, an accurate calibration of the JET neutron diagnostics with a 14 MeV neutron generator was performed in the first half of 2017 in order to provide a reliable measurement of the fusion power during the next JET deuterium-tritium (DT) campaign.
Abstract: An accurate calibration of the JET neutron diagnostics with a 14 MeV neutron generator was performed in the first half of 2017 in order to provide a reliable measurement of the fusion power during the next JET deuterium-tritium (DT) campaign. In order to meet the target accuracy, the chosen neutron generator was fully characterized at the Neutron Metrology Laboratory of the National Physical Laboratory (NPL), Teddington, United Kingdom. The present paper describes the measurements of the neutron energy spectra obtained using a high-resolution single-crystal diamond detector (SCD). The measurements, together with a new neutron source routine developed ad hoc for the MCNP code, allowed the complex features of the neutron energy spectra resulting from the mixed D/T beam ions interacting with the T/D target nuclei to be resolved for the first time. From the spectral analysis, a quantitative estimation of the beam ion composition has been made. The unprecedented intrinsic energy resolution (<1% full width at half maximum (FWHM) at 14 MeV) of diamond detectors opens up new prospects for diagnosing DT plasmas, such as, for instance, the possibility to study non-classical slowing down of the beam ions by neutron spectroscopy on ITER.

Journal ArticleDOI
TL;DR: In this article, a tilting-mAFM was developed to expand the capabilities of 3D nanometrology, particularly for high-resolution topography measurements at the surfaces of vertical sidewalls and for traceable measurements of nanodevice linewidth.
Abstract: A metrological atomic force microscope with a tip-tilting mechanism (tilting-mAFM) has been developed to expand the capabilities of 3D nanometrology, particularly for high-resolution topography measurements at the surfaces of vertical sidewalls and for traceable measurements of nanodevice linewidth. In the tilting-mAFM, the probe tip is tilted from vertical to 16° at maximum such that the probe tip can touch and trace the vertical sidewall of a nanometer-scale structure; the probe of a conventional atomic force microscope cannot reach the vertical surface because of its finite cone angle. Probe displacement is monitored in three axes by using high-resolution laser interferometry, which is traceable to the SI unit of length. A central-symmetric 3D scanner with a parallel spring structure allows probe scanning with extremely low interaxial crosstalk. A unique technique for scanning vertical sidewalls was also developed and applied. The experimental results indicated high repeatability in the scanned profiles and sidewall angle measurements. Moreover, the 3D measurement of a line pattern was demonstrated, and the data from both sidewalls were successfully stitched together with subnanometer accuracy. Finally, the critical dimension of the line pattern was obtained.


Journal ArticleDOI
TL;DR: A set of complementary and automated algorithms for feature extraction and selection to be used with smart sensors and are capable of extracting signal characteristics from signal shape, time domain, time-frequency domain, frequency domain and signal distribution are suggested.
Abstract: Smart sensors with internal signal processing and machine learning capabilities are a current trend in sensor development. This paper suggests a set of complementary and automated algorithms for feature extraction and selection to be used with smart sensors. The suggested methods for feature extraction can be applied on smart sensors and are capable of extracting signal characteristics from signal shape, the time domain, the time-frequency domain, the frequency domain and the signal distribution. Feature selection is subsequently capable of selecting the most important features for linear and nonlinear fault classification. The paper also highlights the potential of smart sensors in combination with the suggested algorithms to provide both data and further functionality, from self-monitoring to condition monitoring, in industrial applications. The first example application is condition monitoring of a complex hydraulic machine, where smart signal processing allows classification and quantification of four different fault scenarios. Additionally, redundancies in the system were used for self-monitoring and allowed simulated sensor faults to be detected before they became critical for fault classification. The second example application is remaining lifetime prediction of electromechanical cylinders, which shows applicability to big data and the transparency of the solution by providing detailed information about sensor significance.
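A filter-style selector of the kind described, ranking features by a linear relevance score and keeping the top few, can be sketched as follows. The score used here, absolute Pearson correlation with the class label, is an assumption for illustration rather than the paper's exact criterion:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def select_features(feature_matrix, labels, k):
    """Rank feature columns by |correlation with the label| and keep the
    indices of the top k (a linear, filter-style selection)."""
    scores = [(abs(pearson(col, labels)), i)
              for i, col in enumerate(zip(*feature_matrix))]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]
```

A linear score like this matches the "linear fault classification" case; nonlinear relevance would need a different criterion.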



Journal ArticleDOI
TL;DR: In this article, the redefinition of the kelvin and its implications are discussed, and the consequences of the redefinitions for traceability and the practice of thermometry are discussed.
Abstract: In this paper the redefinition of the kelvin and its implications are discussed. The following topics will be covered; the redefinition of the international system of units (with emphasis on the proposed redefined kelvin and its wording), a summary of international efforts to determine low uncertainty values of the Boltzmann constant for the final CODATA Task Group on Fundamental Constants adjustment of its value (final values of which must have been accepted for publication by 1 July 2017), how the introduction of the redefined kelvin will be regulated through the mise en pratique of the definition of the kelvin and how primary thermometry is been performed and coordinated on an international level to secure the redefinition in the long term. The paper will end with a discussion of the consequences of the redefinition, for traceability and the practice of thermometry.
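As one concrete example of what a fixed Boltzmann constant enables: Johnson (thermal) noise thermometry, one of the primary methods coordinated internationally, inverts Nyquist's relation <V^2> = 4*k_B*T*R*Δf to obtain temperature directly from an electrical noise measurement. The code below is a simple illustration of that relation, not drawn from the paper:

```python
K_B = 1.380649e-23  # J/K, the exact fixed value in the redefined SI

def johnson_noise_temperature(v_squared, resistance, bandwidth):
    """Primary thermometry via Nyquist's relation <V^2> = 4*k_B*T*R*df,
    solved for the thermodynamic temperature T."""
    return v_squared / (4.0 * K_B * resistance * bandwidth)
```

With k_B fixed by definition, such a measurement is traceable without reference to any temperature fixed point.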


Journal ArticleDOI
TL;DR: In this paper, an automated damage detection strategy that works through placing high value resistors into the previously developed resistor mesh model using a sequential Monte Carlo method is introduced, which is used to mimic the internal condition of damaged cementitious specimens.
Abstract: Various nondestructive evaluation techniques are currently used to automatically detect and monitor cracks in concrete infrastructure. However, these methods often lack scalability and cost-effectiveness over large geometries. A solution is the use of self-sensing carbon-doped cementitious materials. These self-sensing materials are capable of providing a measurable change in electrical output that can be related to their damage state. Previous work by the authors showed that a resistor mesh model could be used to track damage in structural components fabricated from electrically conductive concrete, where damage was located through the identification of high-resistance-value resistors in the resistor mesh model. In this work, an automated damage detection strategy is introduced that places high-value resistors into the previously developed resistor mesh model using a sequential Monte Carlo method. Here, high-value resistors are used to mimic the internal condition of damaged cementitious specimens. The proposed automated damage detection method is experimentally validated using a mm3 reinforced cement paste plate doped with multi-walled carbon nanotubes exposed to 100 identical impact tests. Results demonstrate that the proposed Monte Carlo method is capable of detecting and localizing the most prominent damage in a structure, demonstrating that automated damage detection in smart-concrete structures is a promising strategy for real-time structural health monitoring of civil infrastructure.
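A Monte Carlo damage localizer of the kind described can be sketched as a toy particle filter over candidate damage cells: particles propose where the high-value resistor sits, and weights score how well that placement reproduces the measured resistance pattern. The mesh, the measurement model and all names below are invented for illustration, not the authors' implementation:

```python
import random

def mc_locate_damage(measured, model, n_cells, n_particles=500, seed=1):
    """Toy Monte Carlo localization: particles are candidate damage cells;
    model(cell) -> predicted resistance vector for damage at that cell.
    Weight = inverse squared mismatch with the measured vector; resample
    in proportion to weight and report the most frequent cell."""
    rng = random.Random(seed)
    particles = [rng.randrange(n_cells) for _ in range(n_particles)]
    weights = []
    for c in particles:
        pred = model(c)
        err = sum((p - m) ** 2 for p, m in zip(pred, measured))
        weights.append(1.0 / (err + 1e-9))
    resampled = rng.choices(particles, weights=weights, k=n_particles)
    return max(set(resampled), key=resampled.count)
```

Running this repeatedly as new measurements arrive (re-weighting the surviving particles each time) is the "sequential" part of a sequential Monte Carlo scheme.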


Journal ArticleDOI
TL;DR: In this article, the Planck-Balance (PB) is proposed, which allows the calibration of weights in a continuous range from 1 mg to 1 kg using a fixed value of the Planck constant, h. In contrast to many science-oriented developments, the PB is focused on robust, daily use.
Abstract: A balance is proposed which allows the calibration of weights in a continuous range from 1 mg to 1 kg using a fixed value of the Planck constant, h. This so-called Planck-Balance (PB) uses the physical approach of Kibble balances, which allows mass to be derived from the Planck constant. Using the PB, calibrated mass standards are no longer required during weighing, because all measurements are traceable via the electrical quantities to the Planck constant, and to the metre and the second. This enables a new approach to balance design after the expected redefinition of the SI units by the end of 2018. In contrast to many science-oriented developments, the PB is focused on robust, daily use. Therefore, two balances will be developed, PB2 and PB1, which will allow relative measurement uncertainties comparable to the accuracies of class E2 and E1 weights, respectively, as specified in OIML R 111-1. The balances will be developed in a cooperation between the Physikalisch-Technische Bundesanstalt (PTB) and the Technische Universität Ilmenau in a project funded by the German Federal Ministry of Education and Research.
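The Kibble-balance principle the PB relies on reduces to two modes: velocity mode gives U = B*l*v (induced voltage while the coil moves at speed v) and weighing mode gives m*g = B*l*I (the coil current balancing the weight). Eliminating the geometric factor B*l yields m = U*I / (g*v), with U and I traceable to h via the Josephson and quantum Hall effects. A one-line numerical sketch (the values below are invented round numbers, not PB data):

```python
def kibble_mass(U, I, g, v):
    """Kibble-balance mass: velocity mode U = B*l*v, weighing mode m*g = B*l*I;
    eliminating B*l gives m = U*I / (g*v)."""
    return U * I / (g * v)
```

Since U and I are realized against quantum electrical standards whose values follow from the fixed h, no calibrated mass standard enters the chain.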