Showing papers in "Measurement Science and Technology in 2021"


Journal ArticleDOI
TL;DR: The authors present an updated review of the literature on in-situ sensing, measurement and monitoring for metal PBF processes, classifying methods and comparing the performance they enable, and summarising both the types and sizes of defects that are practically detectable while the part is being produced and the research areas where additional technological advances are currently needed.
Abstract: The possibility of using a variety of sensor signals acquired during metal powder bed fusion processes, to support part and process qualification and to enable early detection of anomalies and defects, has attracted steadily increasing interest. Research in this field has grown significantly in the last few years, with several advances and new solutions beyond the first seminal works. Moreover, industrial powder bed fusion (PBF) systems are increasingly equipped with sensors and toolkits for data collection, visualisation and, in some cases, embedded in-process analysis. Many new methods have been proposed and defect detection capabilities have been demonstrated. Nevertheless, several challenges and open issues still need to be tackled to bridge the gap between methods proposed in the literature and actual industrial implementation. This paper presents an updated review of the literature on in-situ sensing, measurement and monitoring for metal PBF processes, with a classification of methods and a comparison of the performance they enable. The study summarises the types and sizes of defects that are practically detectable while the part is being produced and the research areas where additional technological advances are currently needed.

69 citations


Journal ArticleDOI
TL;DR: The development of compact e-nose design and calculation over the last few decades is reviewed, possible future trends are discussed, and particular attention is paid to the development of on-chip calculation and wireless computing.
Abstract: An electronic nose (e-nose) is a measuring instrument that mimics human olfaction and outputs ‘fingerprint’ information of mixed gases or odors. Generally speaking, an e-nose is mainly composed of two parts: a gas sensing system (gas sensor arrays, gas transmission paths) and an information processing system (microprocessor and related hardware, pattern recognition algorithms). It has been more than 30 years since the e-nose concept was introduced in the 1980s. Since then, e-noses have evolved from large, expensive, power-hungry instruments to portable, low-cost devices with low power consumption. This paper reviews the development of compact e-nose design and calculation over the last few decades, and discusses possible future trends. Regarding the compact e-nose design, which is related to its size and weight, this paper mainly summarizes the development of sensor array design, hardware circuit design, gas path (i.e. the path through which the mixed gases to be measured flow inside the e-nose system) and sampling design, as well as portable design. For the compact e-nose calculation, which is directly related to its rapidity of detection, this review focuses on the development of on-chip calculation and wireless computing. The future trends of compact e-noses include integration with the internet of things, wearable e-noses, and mobile e-nose systems.

47 citations


Journal ArticleDOI
TL;DR: The slow convergence of the genetic algorithm during optimization is addressed by implementing a Levy flight mutated genetic algorithm (LFMGA) to find the optimal parameters (regularization parameter and kernel function) of a support vector machine (SVM).
Abstract: Fluctuations in the head, discharge, and contaminants in the flow can damage parts of the Pelton wheel. An artificial intelligence technique has been investigated for the automatic detection of bucket faults in the Pelton wheel. Features sensitive to defect conditions are extracted from the raw vibration signal and its variational mode decomposition (VMD). The slow convergence of the genetic algorithm during optimization is addressed by implementing a Levy flight mutated genetic algorithm (LFMGA) to find the optimal parameters (regularization parameter and kernel function) of a support vector machine (SVM). The efficacy of the proposed LFMGA is tested against different optimization benchmark functions; the small standard deviation of the results indicates that the algorithm is stable. Using the optimized parameters, the SVM model is trained with 10-fold cross-validation to prepare a classification model. After training, the SVM model is tested for fitness evaluation. The overall recognition rate of the SVM model for identifying defects is 98.84%, with a training time of 27.06 s per iteration. The healthy condition is also compared separately with splitter wear, added-mass defect, and missing-bucket conditions using the VMD–SVM model, yielding recognition rates of 99.17%, 98.33%, and 98.12%, respectively.
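Below is a minimal, hypothetical sketch of how a Levy-flight mutation can be embedded in a genetic algorithm that tunes the SVM regularization parameter C and RBF kernel parameter gamma, with 10-fold cross-validated accuracy as the fitness. The population size, search bounds, Mantegna step and truncation selection are illustrative assumptions, not the authors' exact LFMGA.

```python
# Illustrative LFMGA-style tuner for (C, gamma); all settings are assumptions.
import numpy as np
from math import gamma as G, pi, sin
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def levy(beta=1.5, size=2):
    # Mantegna's algorithm for heavy-tailed Levy-flight mutation steps
    sigma = (G(1 + beta) * sin(pi * beta / 2) /
             (G((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def fitness(ind, X, y):
    C, gam = 10.0 ** ind                        # individuals live in log10 space
    return cross_val_score(SVC(C=C, gamma=gam), X, y, cv=10).mean()

def lfmga(X, y, pop=20, gens=30, lo=-3.0, hi=3.0):
    P = rng.uniform(lo, hi, (pop, 2))
    for _ in range(gens):
        f = np.array([fitness(p, X, y) for p in P])
        elite = P[np.argsort(f)[-pop // 2:]]               # keep the best half
        kids = elite + 0.1 * levy(size=(len(elite), 2))    # Levy-flight mutation
        P = np.clip(np.vstack([elite, kids]), lo, hi)
    f = np.array([fitness(p, X, y) for p in P])
    return 10.0 ** P[f.argmax()]                           # best (C, gamma)
```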

38 citations


Journal ArticleDOI
TL;DR: The development and use of an ISO standardised framework for calibrating surface topography measuring instruments are reviewed, together with uncertainty estimation based on a fixed set of metrological characteristics.
Abstract: In this paper, we will review the development and use of an ISO standardised framework to allow calibration of surface topography measuring instruments. We will draw on previous work to present the state of the art in the field in terms of employed methods for calibration and uncertainty estimation based on a fixed set of metrological characteristics. The resulting standards will define the metrological characteristics and present default methods and material measures for their determination; the paper will summarise this work and point out areas where work remains to be done. An example uncertainty estimation is given for an optical topography measuring instrument, where the effect of topography fidelity is considered.
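As a rough illustration of how a characteristics-based uncertainty budget combines (a sketch under a GUM-style independence assumption, not a prescription of the standard; the subscripts loosely follow ISO 25178-600 metrological characteristics such as measurement noise, flatness deviation, amplification, linearity and topography fidelity):

```latex
u_c = \sqrt{u_{\mathrm{noise}}^2 + u_{\mathrm{flat}}^2 + u_{\mathrm{amp}}^2
          + u_{\mathrm{lin}}^2 + u_{\mathrm{fid}}^2}
```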

35 citations


Journal ArticleDOI
TL;DR: This work aims to remove the dependence on historical operating data by using a simulation-driven MLA based on multifactorial analysis of fault indicators associated with a digital twin (DT); the updated DT provides reliable diagnostics with adaptive degradation analysis, making the simulated data suitable for constructing a machine learning predictive model.
Abstract: Machine learning algorithms (MLAs) are increasingly being used as effective techniques for processing vibration signals obtained from complex industrial machinery. Previous applications of automatic fault detection algorithms in the diagnosis of rotating machines were mainly based on historical operating data sets, limiting diagnostic reliability to devices with an extended operating history. Moreover, physically collected data are often restricted by the conditions of acquisition and the specific elements for which they were recorded. A digital twin (DT) provides a powerful tool able to generate a huge amount of training data for MLAs. However, the DT model must be accurate enough to substitute for experiments. This work aims to remove the dependence on historical operating data by using a simulation-driven MLA based on multifactorial analysis of fault indicators associated with a DT. To this end, a numerical model of a rotor-ball bearing system is developed. The model is updated according to a parameter update scheme based on a comparison between the relevant features of the experimentally measured signals and the signals simulated by the model. These features are chosen as the input parameters of the MLA classifier. The results show that, after updating, the developed DT provides a reliable diagnostic with adaptive degradation analysis, which makes the simulated data suitable for the construction of a machine learning predictive model. Two common MLAs, a multi-kernel support vector machine and the k-nearest neighbours algorithm, were trained using the simulated data and later validated against experimental datasets.
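A minimal sketch of the simulate-then-validate workflow the abstract describes: fit classifiers on DT-simulated features and score them on experimental ones. The arrays are random stand-ins, and the single-kernel SVC is only a proxy for the paper's multi-kernel SVM.

```python
# Hypothetical sketch: train on DT-simulated features, validate on measured ones.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_sim, y_sim = rng.normal(size=(200, 8)), rng.integers(0, 3, 200)  # from the DT
X_exp, y_exp = rng.normal(size=(50, 8)), rng.integers(0, 3, 50)    # from the rig

for name, clf in {"SVM (single-kernel proxy)": SVC(kernel="rbf", C=10.0),
                  "kNN": KNeighborsClassifier(n_neighbors=5)}.items():
    clf.fit(X_sim, y_sim)                  # simulated training data
    print(name, clf.score(X_exp, y_exp))   # validation against experiments
```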

31 citations



Journal ArticleDOI
TL;DR: A robust learning-based method for analyzing particle shadow images with a single convolutional neural network: a two-channel-output U-net model generates a binary particle image and a particle centroid image.
Abstract: Conventional image processing for particle shadow images is usually time-consuming and suffers from degraded image segmentation when dealing with images of complex-shaped and clustered particles against varying backgrounds. In this paper, we introduce a robust learning-based method using a single convolutional neural network (CNN) for analyzing particle shadow images. Our approach employs a two-channel-output U-net model to generate a binary particle image and a particle centroid image. The binary particle image is subsequently segmented through a marker-controlled watershed approach, with the particle centroid image as the marker image. The assessment of this method on both synthetic and experimental bubble images has shown better performance than the state-of-the-art non-machine-learning method. The proposed machine learning shadow image processing approach provides a promising tool for real-time particle image analysis.
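A brief sketch of the marker-controlled watershed stage that follows the network (the scikit-image/SciPy calls are real; the two boolean arrays stand in for the U-net's output channels):

```python
# Post-CNN segmentation: centroid markers steer a watershed over the binary mask.
import numpy as np
from scipy import ndimage as ndi
from skimage.measure import label
from skimage.segmentation import watershed

binary = np.zeros((64, 64), bool); binary[20:40, 20:44] = True     # stand-in mask
centroids = np.zeros_like(binary); centroids[30, 28] = centroids[30, 38] = True

markers = label(centroids)                        # one integer id per centroid
distance = ndi.distance_transform_edt(binary)     # basin depth inside particles
particles = watershed(-distance, markers, mask=binary)  # splits touching bubbles
```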

27 citations


Journal ArticleDOI
TL;DR: A latent optimized stable generative adversarial network is developed to adaptively augment small-sample data without prior knowledge, and penalty terms based on a distance metric between distributions constrain the optimization objective of the model.
Abstract: Despite the great achievements of data-driven intelligent diagnosis methods for rotating machinery, they still suffer from the problem of scarce labeled data. Therefore, this paper focuses on developing a data augmentation method of few-shot learning for fault diagnosis under small sample size conditions. Firstly, we developed the latent optimized stable generative adversarial network to adaptively augment the small sample size data without prior knowledge. Furthermore, penalty terms based on a distance metric between distributions are adopted to constrain the optimization objective of the model, and self-attention and spectral normalization are applied in the model to stabilize the training process. Then, supervised classifier training is conducted based on the augmented sample set. Comparative analysis of the frequency spectrum verified the authenticity and reliability of the generated samples. Finally, the performance of the proposed method is validated with a comparative study on three cases of rolling bearing fault diagnosis experiments. The average accuracy reaches 99.71%, 99.7%, and 96.27% in 10-shot fault diagnosis.
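One of the two stabilisers named above is easy to show in isolation: PyTorch's spectral_norm utility wraps a layer so its weight matrix is rescaled by its largest singular value at each forward pass. The layer sizes and overall generator shape here are assumptions, not the paper's architecture.

```python
# Sketch: spectral normalization on generator layers to stabilise GAN training.
import torch.nn as nn
from torch.nn.utils import spectral_norm

latent_dim, signal_len = 64, 1024                       # assumed sizes
generator = nn.Sequential(
    spectral_norm(nn.Linear(latent_dim, 256)), nn.LeakyReLU(0.2),
    spectral_norm(nn.Linear(256, 512)), nn.LeakyReLU(0.2),
    spectral_norm(nn.Linear(512, signal_len)), nn.Tanh(),
)
```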

26 citations



Journal ArticleDOI
TL;DR: A promising solution that can automatically detect and localize an impact that may occur during flight is presented; acoustic emission (AE) is employed as the impact monitoring approach, leading to better event localization performance.
Abstract: Aircraft structures are exposed to impact damage caused by debris and hail during their service life. One of the design concerns in composite structures is the resistance of layered surfaces to damage from impacts with various foreign objects. Therefore, impact localization and damage quantification should be studied to address flight safety and to reduce the costs associated with regularly scheduled visual inspection. Since the structural components of an aircraft are large scale, visual inspection and monitoring are challenging and subject to human error. This paper presents a promising solution that can automatically detect and localize an impact that may occur during flight. To achieve this goal, acoustic emission (AE) is employed as the impact monitoring approach. Random forest and deep learning were adopted for training the source location models. An AE dataset was collected by conducting an impact experiment on a full-size thermoplastic aircraft elevator in a laboratory environment. A dataset of AE parametric features and a dataset of AE waveforms were assigned to a random forest classifier and a deep learning network, respectively, to investigate their applicability to impact source localization. The results were compared with the source localization approach of previous research based on a conventional artificial neural network. The analysis shows that the random forest and deep learning models lead to better event localization performance. In addition, the random forest model can provide the importance of each feature: by deleting the least important features, the storage required for the input and the computing time of the random forest are greatly reduced, while an acceptable localization performance is still obtained.
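The feature-pruning idea in the last sentence maps directly onto scikit-learn's feature_importances_; the arrays below are random stand-ins for the AE parametric features and impact-zone labels, and the top-6 cut-off is an arbitrary assumption.

```python
# Sketch: rank AE features by random forest importance, keep only the best few.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))                    # stand-in AE parametric features
y = rng.integers(0, 6, 300)                       # stand-in impact-zone labels

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
keep = np.argsort(rf.feature_importances_)[-6:]   # retain the top-6 features
rf_small = RandomForestClassifier(n_estimators=200, random_state=0)
rf_small.fit(X[:, keep], y)                       # smaller input, faster model
```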

26 citations


Journal ArticleDOI
TL;DR: A stochastic model based on raw GNSS observation characteristics from Android smartphones is constructed and the feasibility of smartphone-based ambiguity fixing in the short-baseline real-time kinematic (RTK) case is verified.
Abstract: The release of raw global navigation satellite system (GNSS) observations by Google Android makes high-precision positioning possible with low-cost smart devices. This study contributes to this research trend by constructing a stochastic model based on raw GNSS observation characteristics from Android smartphones and verifying the feasibility of smartphone-based ambiguity fixing in the short-baseline real-time kinematic (RTK) case. This study uses the raw observation standard deviations (ROSTDs) delivered by the Android application programming interface (API) as a stochastic model and takes advantage of the multipath index from the API to rule out unusable observations. In addition, the ambiguity integer property is investigated by analyzing the residuals of double-differenced carrier-phase observations associated with one smartphone and one geodetic-grade receiver. We note that the carrier-phase observations collected by the tested smartphones do not inherently have the integer property, but for the Huawei P30 and Xiaomi 8 devices the integer property can be successfully recovered by means of detrending. With ROSTD-dependent weighting, we first perform single-point positioning (SPP) and real-time differential (RTD) positioning using pseudorange observations delivered by the Huawei P30 and Xiaomi 8 devices. The results show that the stochastic model is applicable to the Xiaomi 8. The three-dimensional root-mean-square (3D-RMS) errors of the two smartphones are 1.28 m and 1.96 m for SPP, and 0.79 m and 1.64 m for RTD, respectively. We next test the RTK positioning performance on a short baseline of 882 m using carrier-phase observations with recovered integer ambiguities. For the Huawei P30, the positioning errors achieved were 7.8, 2.4 and 1.1 mm for the east, north, and up (ENU) components at the time of first fix, while for the Xiaomi 8 they were 4.3, 4.2 and 4.2 mm.
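In its simplest form, ROSTD-dependent weighting reduces to an inverse-variance weight matrix in the least-squares solution. The Android uncertainty fields named in the comment are real GnssMeasurement members; the Gauss-Newton wiring below is our illustration, not the paper's exact estimator.

```python
# Sketch: inverse-variance weighting from per-observation ROSTDs, e.g. Android's
# GnssMeasurement.getReceivedSvTimeUncertaintyNanos (code) or
# getAccumulatedDeltaRangeUncertaintyMeters (carrier phase), converted to metres.
import numpy as np

def wls_update(A, residuals, rostd_m):
    """One Gauss-Newton step of weighted least squares.
    A: (n, 4) design matrix; residuals: (n,) observed-minus-computed ranges;
    rostd_m: (n,) per-observation standard deviations in metres."""
    W = np.diag(1.0 / rostd_m ** 2)                 # ROSTD-dependent weights
    N = A.T @ W @ A                                 # normal matrix
    return np.linalg.solve(N, A.T @ W @ residuals)  # dx, dy, dz, d(clock)
```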


Journal ArticleDOI
TL;DR: These efforts, including recent work to develop documentary standards for TLS performance evaluation, are reviewed, and the role of these test procedures in establishing the metrological traceability of TLS measurements is discussed.
Abstract: Terrestrial laser scanners (TLSs) are increasingly used in several applications such as reverse engineering, digital reconstruction of historical monuments, geodesy and surveying, deformation monitoring of structures, forensic crime scene preservation, manufacturing and assembly of engineering components, and architectural, engineering, and construction (AEC) applications. The tolerances required in these tasks range from a few tens of millimeters (for example, in historical monument digitization) to a few tens of micrometers (for example, in high precision manufacturing and assembly). With numerous TLS instrument manufacturers, each offering multiple models of TLSs with idiosyncratic specifications, it is a considerable challenge for users to compare instruments or evaluate their performance to determine if they meet specifications. As a result, considerable efforts have been made by research groups across the world to model TLS error sources and to develop specialized performance evaluation test procedures. In this paper, we review these efforts, including recent work to develop documentary standards for TLS performance evaluation, and discuss the role of these test procedures in establishing the metrological traceability of TLS measurements.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed improved ResNet model has higher classification accuracy than the classical CNNs LeNet-5, AlexNet and ResNet, and a faster calculation speed than classical deep neural networks (DNNs).
Abstract: Mechanical intelligent fault diagnosis algorithms based on deep learning have achieved considerable success in recent years. However, degradation of diagnosis accuracy and operation speed has become even more pronounced under harsh working conditions and with increasing network depth. An improved ResNet is proposed in this paper to address these issues. The advantages of the proposed network are as follows. Firstly, a multi-scale feature fusion block (MSFFB) is designed to extract multi-scale fault feature information. Secondly, an improved residual block (RB) based on the depthwise separable convolution (DSC) is used to improve the operation speed and alleviate the computational burden of the network. The effectiveness of the proposed network is validated by discriminating diverse health states in a gearbox under normal and noisy environments. Experimental results show that the proposed network model has higher classification accuracy than the classical CNNs LeNet-5, AlexNet and ResNet, and a faster calculation speed than classical deep neural networks (DNNs). Furthermore, a visual study of different stages in the network model is conducted to illustrate the operation of the proposed model.
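The depthwise separable convolution at the heart of the improved residual block factorises a standard convolution into a per-channel spatial filter plus a 1x1 channel mixer, which is where the speed-up comes from. A minimal sketch follows (the 1-D layout for vibration signals and the kernel size are our assumptions):

```python
# Sketch: depthwise separable convolution, the cheap building block of the RB.
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # one spatial filter per channel (groups=in_ch), then 1x1 channel mixing
        self.depthwise = nn.Conv1d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```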

Journal ArticleDOI
TL;DR: According to the results, the evaluation and use of UAV data without GCPs falls within an adequate accuracy range for various mapping purposes.
Abstract: Inexpensive and small unmanned aerial vehicles (UAVs) provide high-accuracy positional data and enable users to collect high-resolution aerial images. The analysis of images captured by UAVs in a specific reference system is traditionally accomplished by georeferencing with high-accuracy ground control points (GCPs). This study tests and compares the positional accuracy of benchmarks and point clouds produced on three consecutive days with different flight combinations at 75 and 100 m flight altitudes, using network-based continuously operating reference stations and differential-based real-time kinematic georeferencing systems, without GCPs. Root-mean-squared errors of 1–3 cm in the horizontal and 4–6 cm in the vertical were obtained; thus, the proposed system achieves an acceptable level of positional accuracy. According to these results, the evaluation and use of UAV data without GCPs falls within an adequate range for various mapping purposes.



Journal ArticleDOI
TL;DR: In this paper, temperature-induced changes in Cr3+-doped Mg2SiO4 emission are tested for use in luminescence thermometry from cryogenic to physiologically relevant temperatures (10-350 K).
Abstract: Cr3+-doped Mg2SiO4 orthorhombic nanoparticles are synthesized by a combustion method. The 3d3 electron configuration of the Cr3+ ion results in the deep-red emission from optical transitions between d–d orbitals. Two overlapping emissions from the Cr3+ spin-forbidden 2Eg→ 4A2g and the spin-allowed 4T2g→ 4A2g electronic transitions are influenced by the strong crystal field in Mg2SiO4 and, thus, are suitable for ratiometric luminescence thermometry. The temperature-induced changes in Cr3+-doped Mg2SiO4 emission are tested for use in luminescence thermometry from cryogenic to physiologically relevant temperatures (10–350 K) by three approaches: (a) temperature-induced changes of emission intensity; (b) temperature-dependent luminescence lifetime; and (c) temperature-induced changes of emission band position. The second approach offers applicable thermometry at cryogenic temperatures, starting from temperatures as low as 50 K, while all three approaches offer applicable thermometry at physiologically relevant temperatures with relative sensitivities of 0.7% K−1 for emission intensity, 0.8% K−1 for lifetime and 0.85% K−1 for band position at 310 K.
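The relative sensitivities quoted above are conventionally defined in luminescence thermometry as the normalised derivative of the thermometric parameter Q (emission intensity, lifetime, or band position) with respect to temperature:

```latex
S_r = \frac{1}{Q}\left|\frac{\partial Q}{\partial T}\right| \times 100\%\ \mathrm{K}^{-1}
```

Under this definition, the quoted 0.85% K−1 at 310 K means the band position readout changes by about 0.85% of its value per kelvin.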



Journal ArticleDOI
TL;DR: An enhanced few-shot Wasserstein auto-encoder (fs-WAE) is proposed for data augmentation, in which squeeze-and-excitation blocks adaptively calibrate channel-wise feature responses, strengthening the representational power of the encoder.
Abstract: Despite the advance of intelligent fault diagnosis for rolling bearings, data-driven methods in industry still suffer from difficulties in data acquisition and from class imbalance. We propose an enhanced few-shot Wasserstein auto-encoder (fs-WAE) to reverse the negative effect of imbalance. Firstly, an enhanced WAE is proposed for data augmentation, in which squeeze-and-excitation blocks are applied to calibrate channel-wise feature responses adaptively, strengthening the representational power of the encoder. Secondly, a meta-learning strategy called Reptile is utilized to further enhance the mapping ability of the WAE from a prior distribution to vibration signals when only a small dataset is available. Finally, a gradient penalty is introduced as a regularization term to provide a flexible optimization function. The proposed method is applied to pattern recognition on experimental and engineering datasets. Comparative results demonstrate the utility and superiority of fs-WAE over other models in terms of efficiency and resilience to the degree of imbalance.
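A compact sketch of the squeeze-and-excitation block applied in the encoder: global average pooling "squeezes" each channel to a scalar, a small bottleneck MLP "excites" per-channel gates, and the input is rescaled. The 1-D layout and reduction ratio r=8 are our assumptions.

```python
# Sketch: squeeze-and-excitation block for channel-wise feature recalibration.
import torch.nn as nn

class SEBlock1d(nn.Module):
    def __init__(self, channels, r=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)               # squeeze: (B, C, 1)
        self.fc = nn.Sequential(                          # excitation bottleneck
            nn.Linear(channels, channels // r), nn.ReLU(),
            nn.Linear(channels // r, channels), nn.Sigmoid(),
        )

    def forward(self, x):                                 # x: (B, C, L)
        w = self.fc(self.pool(x).squeeze(-1)).unsqueeze(-1)  # per-channel gates
        return x * w                                      # recalibrated features
```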

Journal ArticleDOI
TL;DR: The experimental results suggest that the proposed ATAE network can significantly boost diagnostic performance in the absence of target vibration signal labels, compared with state-of-the-art diagnosis methods.
Abstract: Under variable working conditions, a problem arises, which is that it is difficult to obtain enough labeled data; to address this problem, an adaptive transfer autoencoder (ATAE) is established to diagnose faults in rotating machinery. First, a data adaptation module, which calculates the maximum mean discrepancy for the network hidden-layer data in reproducing kernel Hilbert space, is introduced to the autoencoder network, thus making the classification model operate under variable working conditions. Variation particle-swarm optimization is then invoked to optimize the data adaptation parameters. Finally, the k-nearest neighbors algorithm, as the classification layer of the network, identifies the state of health of the rotating machinery. The capabilities of the intelligent fault-diagnosis network are verified using vibration signals from a bearing test rig and a gearbox test rig. The experimental results suggest that, compared with state-of-the-art diagnosis methods, the proposed ATAE network can significantly boost diagnostic performance in the absence of target vibration signal labels.
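The data adaptation term is the maximum mean discrepancy between source- and target-condition hidden activations, evaluated in a reproducing kernel Hilbert space. A minimal sketch with a Gaussian kernel (the bandwidth and the biased estimator are our assumptions):

```python
# Sketch: biased MMD^2 between two batches of hidden-layer features.
import torch

def mmd2(x, y, sigma=1.0):
    def k(a, b):                                   # Gaussian RKHS kernel
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```

Minimising such a term over the autoencoder's hidden layers pulls the two working conditions toward a shared feature distribution, which is what lets the classifier transfer without target labels.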



Journal ArticleDOI
TL;DR: A novel method for point cloud segmentation based on Euclidean clustering and multi-plane extraction using random sample consensus is proposed, achieving superior performance over existing approaches.
Abstract: In this paper, a novel method of point cloud segmentation based on Euclidean clustering and multi-plane extraction is proposed. To cope with overhanging objects, such as tree branches, a hybrid elevation map assisted by Euclidean clustering is designed. By clustering the 3D points falling into each grid cell, obstacles above a free space are detected and the corresponding traversable regions below are identified. Furthermore, the time consumption of the segmentation is reduced by using a multi-resolution grid method. In addition, the multi-plane extraction method based on RANSAC is well adapted to non-flat terrain. In simulation, a variety of virtual environments are built on the Gazebo platform to demonstrate the performance of the proposed algorithm, and it is also evaluated in field environments. The results show that the proposed method achieves superior accuracy and efficiency of point cloud segmentation over existing approaches.
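The two-stage idea maps naturally onto Open3D's built-ins, sketched below: iterative RANSAC plane fits pull out the dominant (potentially traversable) surfaces, and the leftover points are grouped by a density-based clustering that stands in for Euclidean clustering. The Open3D calls are real; the thresholds and input file are assumptions, and the paper's hybrid elevation map and multi-resolution grids are not reproduced here.

```python
# Sketch: multi-plane RANSAC extraction, then clustering of residual points.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.pcd")        # hypothetical input cloud
rest, planes = pcd, []
for _ in range(3):                               # extract up to 3 dominant planes
    model, idx = rest.segment_plane(distance_threshold=0.05,
                                    ransac_n=3, num_iterations=1000)
    planes.append(rest.select_by_index(idx))     # plane inliers (e.g. ground)
    rest = rest.select_by_index(idx, invert=True)

labels = rest.cluster_dbscan(eps=0.3, min_points=10)  # group remaining obstacles
```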

Journal ArticleDOI
TL;DR: The results show that the proposed model has outstanding generalizability and higher prediction performance, and that a well-designed structure can remedy the absence of complicated feature engineering.
Abstract: A reliable data-driven tool condition monitoring (TCM) system is increasingly promising for cutting down machine downtime and economic losses. However, traditional methods cannot handle machining big data because of low model generalization ability and labor-intensive hand-crafted feature extraction. In this paper, a novel deep learning model, named multi-frequency-band deep convolutional neural network (MFB-DCNN), is proposed to handle machining big data and monitor tool condition. Firstly, the sample set is enlarged and a three-level wavelet packet decomposition is applied to obtain wavelet coefficients in different frequency bands. Then, a multi-frequency-band feature extraction structure based on a deep CNN is introduced to extract sensitive features from these coefficients. The extracted features are fed into fully connected layers to predict tool wear conditions. Milling experiments are then conducted for signal acquisition and model construction, and a series of hyperparameter selection experiments is designed to optimize the proposed MFB-DCNN model. Finally, the prediction performance of typical models is evaluated and compared with that of the proposed model. The results show that the proposed model has outstanding generalization ability and higher prediction performance, and that the well-designed structure can remedy the absence of complicated feature engineering.
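The front end of this pipeline is easy to make concrete with PyWavelets: a three-level wavelet packet decomposition yields 2^3 = 8 frequency sub-bands whose coefficients become the CNN's multi-band input. The wavelet family and the signal below are stand-in assumptions.

```python
# Sketch: 3-level wavelet packet decomposition into 8 frequency-band inputs.
import numpy as np
import pywt

signal = np.random.randn(1024)                   # stand-in cutting-force signal
wp = pywt.WaveletPacket(signal, wavelet="db4", maxlevel=3)
bands = [node.data for node in wp.get_level(3, order="freq")]  # 8 sub-bands
X = np.stack(bands)                              # (8, n) array fed to the CNN
```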


Journal ArticleDOI
TL;DR: In this article, a novel optical fiber surface plasmon resonance (SPR) magnetic field sensor is proposed and experimentally demonstrated, which is fabricated by splicing a section of photonic crystal fiber between two multimode fibers.
Abstract: In this paper, a novel optical fiber surface plasmon resonance (SPR) magnetic field sensor is proposed and experimentally demonstrated. The structure is fabricated by splicing a section of photonic crystal fiber between two multimode fibers. After the structure is coated with 10 nm Cr and 50 nm Au, its high refractive index (RI) sensitivity, from 1973.72 nm RIU−1 to 3223.32 nm RIU−1 over the range 1.3326–1.3680, verifies SPR excitation; this sensitivity is higher than that of a single-mode-fiber-based structure with the same coating. In addition, the microscopic mechanism of the tuning of the magnetic fluid RI by the ambient magnetic field is simulated by the molecular dynamics method. To measure the external magnetic field, the sensing region of the SPR sensor is fully inserted into a capillary tube, which is filled with magnetic fluid and sealed with UV glue. A maximum sensitivity of 4.42 nm mT−1 is achieved experimentally in the range of 0–24 mT. Owing to its high sensitivity, simple fabrication and compact size, the proposed sensor has attractive application prospects in environmental monitoring, power transmission and biomedical applications.
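The quoted magnetic field sensitivity corresponds to the usual wavelength-interrogation definition, the shift of the resonance wavelength per unit applied field:

```latex
S_B = \frac{\Delta\lambda_{\mathrm{res}}}{\Delta B} \approx 4.42\ \mathrm{nm\,mT^{-1}} \quad (0\text{--}24\ \mathrm{mT})
```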