Showing papers in "Computer-aided Civil and Infrastructure Engineering in 2022"


Journal ArticleDOI
TL;DR: A novel deep learning‐based model, You Only Look Once network v4 enhanced by EfficientNet and depthwise separable convolution (DSC; YOLOv4‐ED), is proposed; EfficientNet serves as the backbone to improve the identification accuracy of indistinguishable defect targets under complex tunnel backgrounds and lighting conditions.
Abstract: Aiming to solve the challenges of low detection accuracy, poor anti‐interference ability, and slow detection speed in traditional tunnel lining defect detection methods, a novel deep learning‐based model, named You Only Look Once network v4 enhanced by EfficientNet and depthwise separable convolution (DSC; YOLOv4‐ED), is proposed. In the YOLOv4‐ED, EfficientNet is used as the backbone to improve the identification accuracy of indistinguishable defect targets under complex tunnel background and light conditions. Furthermore, the DSC block is introduced to reduce the storage space of the model and thereby enhance the detection efficiency. The experimental results indicate that the mean average precision, F1 score, model size, and FPS of YOLOv4‐ED are 81.84%, 81.99%, 49.3 MB, and 43.5 f/s, respectively, which are superior to those of the comparison models in both detection accuracy and efficiency. Based on the robust and cost‐effective YOLOv4‐ED, a tunnel lining defect detection platform (TLDDP) with the capacity for automated inspection of various lining defects (i.e., water leakage, cracks, exposed rebar) is built. The established TLDDP can realize high‐precision, automatic detection of multiple tunnel lining defects under the different lighting and complex background conditions of an in‐service tunnel.
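
To make the efficiency argument concrete, the following is a minimal PyTorch sketch of a depthwise separable convolution block of the kind introduced into YOLOv4‐ED; the channel sizes are illustrative, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) spatial
    convolution followed by a 1x1 (pointwise) convolution, replacing a
    standard KxK convolution with far fewer parameters."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Parameter comparison against a standard 3x3 convolution:
std = nn.Conv2d(128, 256, 3, padding=1, bias=False)
dsc = DepthwiseSeparableConv(128, 256)
print(sum(p.numel() for p in std.parameters()))  # 294912
print(sum(p.numel() for p in dsc.parameters()))  # 34432
```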

55 citations


Journal ArticleDOI
TL;DR: An unsupervised, online structural health monitoring framework that is robust to the sensor configuration, that is, the number and placement of sensors; it leverages generative adversarial networks (GANs) and remains reliable even under training‐loss patterns with overfitted discriminators.
Abstract: This study proposes an unsupervised, online structural health monitoring framework robust to the sensor configuration, that is, the number and placement of sensors. The proposed methodology leverages generative adversarial networks (GANs). The GAN's discriminator network is the novelty detector, while its generator provides additional data to tune the detection threshold. GAN models are trained with the fast Fourier transform of structural accelerations as input, avoiding the need for any structure‐specific feature extraction. Dense, convolutional (convolutional neural network), and long short‐term memory (LSTM) units are evaluated as discriminators under different GAN training loss patterns, that is, the differences between discriminator and generator training losses. Results show the LSTM‐based discriminators and the suggested threshold tuning technique to be robust even under loss patterns with overfitted discriminators, a probable outcome of limited training sets. The framework is evaluated on two benchmark datasets. With only 100 s of training data, it achieved 95% novelty detection accuracy, distinguishing between different damage classes and identifying their resurgence under varying sensor configurations. Finally, a majority‐vote ensemble of discriminator–generator pairs at different training epochs is introduced to reduce false alarms and improve novelty detection accuracy and stability.
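
The detection logic described above can be sketched as follows; this is an illustrative reconstruction, not the authors' code: `fft_features`, `tune_threshold`, the `latent_dim` attribute, and the 0.95 quantile rule are assumptions standing in for the paper's exact threshold-tuning procedure.

```python
import numpy as np
import torch

def fft_features(accel_window):
    """Magnitude of the FFT of a raw acceleration window (the paper uses
    FFTs directly, avoiding structure-specific feature engineering)."""
    spec = np.abs(np.fft.rfft(accel_window))
    return torch.tensor(spec / (spec.max() + 1e-12), dtype=torch.float32)

def tune_threshold(discriminator, generator, n_fake=1000, quantile=0.95):
    """Hypothetical threshold tuning: score generator samples with the
    discriminator and place the decision boundary at a quantile of those
    scores (the paper's exact rule may differ)."""
    with torch.no_grad():
        z = torch.randn(n_fake, generator.latent_dim)  # assumed attribute
        fake_scores = discriminator(generator(z)).squeeze()
    return torch.quantile(fake_scores, quantile).item()

def is_novel(discriminator, accel_window, threshold):
    """Low discriminator score -> sample looks unlike healthy training
    data -> flag as novelty (possible damage)."""
    with torch.no_grad():
        score = discriminator(fft_features(accel_window).unsqueeze(0)).item()
    return score < threshold
```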

21 citations


Journal ArticleDOI
TL;DR: This paper presents an innovative exploring and exploiting ant colony optimization (E&E‐ACO) algorithm with appropriate point sampling and vertical curve fitting strategies, and an intuitive feasible region identification approach, for solving the vertical alignment optimization problem.
Abstract: Vertical alignment optimization is about developing a minimum‐cost curvilinear vertical profile of constrained grade sections and appropriate non‐overlapping vertical curves passing through fixed control points with elevation constraints. Variations in the ground profile and discreteness in unit cutting and filling costs make it a non‐convex, noisy, constrained optimization problem with many local minima. Further, the gradient‐related constraints and vertical curvature are non‐linear. This paper presents an innovative exploring and exploiting ant colony optimization (E&E‐ACO) algorithm with appropriate point sampling and vertical curve fitting strategies and an intuitive feasible region identification approach for solving the vertical alignment optimization problem. The E&E‐ACO algorithm extensively explores the feasible search space to generate a set of potential solutions and effectively exploits the space around the potential solutions for developing the optimized vertical alignment. The efficacy of the proposed method is demonstrated using two case studies. In one case study, the optimized solution by the proposed method had a marginally better objective function value and required about one‐third of the computational time of the mesh adaptive direct search method. The optimized alignment satisfied the elevation constraints of fixed control points and imitated the manually designed real‐world vertical alignment. The linearly varying exploration and exploitation parameters had a better convergence rate than the other tested variations. Further, at the end of 1000 iterations, the proposed method yielded a result about six times better than that of the traditional ACO algorithm.

19 citations


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a multiscale feature fusion network with attention mechanisms named Tiny-Crack-Net, which achieved a Dice similarity coefficient of 87.96% on an open source data set, which was at least 5.84% higher than those of the six cutting-edge networks.
Abstract: Convolutional neural networks (CNNs) have gained growing interest in recent years for their advantages in detecting cracks on concrete bridge components. Class imbalance is a fundamental problem in crack segmentation, resulting in unsatisfactory segmentation for tiny cracks. Besides, limited by the local receptive field, CNNs often cannot integrate local features with global dependencies, thus significantly affecting the detection accuracy of tiny cracks across the entire image. To solve those problems in segmenting tiny cracks, a multiscale feature fusion network with attention mechanisms named “Tiny‐Crack‐Net” (TCN) is proposed. The modified residual network was used to capture the local features of tiny cracks. The dual attention module was then incorporated into the architecture to better separate the tiny cracks from the background. Also, a multiscale fusion operation was implemented to preserve the edge details of tiny cracks. Finally, a joint learning loss of the cross‐entropy and similarity was proposed to alleviate the poor convergence induced by the severe class imbalance of the pixels representing tiny cracks. The capability of the network in segmenting tiny cracks was remarkably enhanced by the aforementioned arrangements, and the “Tiny‐Crack‐Net” achieved a Dice similarity coefficient of 87.96% on an open‐source data set, which was at least 5.84% higher than those of the six cutting‐edge networks. The effectiveness and robustness of the “Tiny‐Crack‐Net” were validated with field test results, which showed that the intersection over union (IOU) for cracks with a width of 0.05 mm or wider reaches 91.44%.
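
The paper's joint learning loss combines cross‐entropy with a similarity term to counter class imbalance; a common realization of that idea pairs binary cross‐entropy with a Dice (overlap) loss, sketched below under that assumption (`alpha` and `eps` are illustrative, not the paper's values).

```python
import torch
import torch.nn.functional as F

def joint_crack_loss(logits, target, alpha=0.5, eps=1.0):
    """Joint loss = weighted sum of binary cross-entropy and Dice loss.
    Dice directly rewards overlap with the (tiny) crack class, so it is
    far less sensitive to the background/crack pixel imbalance."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    return alpha * bce + (1 - alpha) * dice

# Usage: logits and target are (N, 1, H, W); target is 1 on crack pixels.
loss = joint_crack_loss(torch.randn(2, 1, 64, 64),
                        torch.zeros(2, 1, 64, 64))
```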

18 citations


Journal ArticleDOI
TL;DR: In this article, an improved impervious solid boundary condition for the coupled smoothed particle hydrodynamics and discrete element method (SPH‐DEM) is proposed, which prevents fluid particles from penetrating the solid boundary under earthquake action.
Abstract: In this article, an improved impervious solid boundary condition for the coupled method of smoothed particle hydrodynamics and the discrete element method (SPH‐DEM) is proposed, which prevents fluid particles from penetrating the solid boundary under earthquake action. An improved transmitting boundary condition for the SPH‐DEM is also designed to suppress the reflection of seismic waves at the boundary. Meanwhile, the effective stress method is incorporated into the SPH‐DEM for simulating seabed liquefaction. On this basis, a new computational framework for the SPH‐DEM is put forward. Dynamic triaxial tests of seabed soil samples indicate that the proposed computational framework can well reproduce the seismic liquefaction process of seabed soil. Moreover, the framework is used to numerically reproduce the failure mechanisms of a breakwater built on a liquefiable seabed under combined tsunami–earthquake action, and a centrifuge test is carried out in parallel. The numerical results are consistent with the centrifuge test results, demonstrating the effectiveness of the proposed computational framework.

17 citations


Journal ArticleDOI
TL;DR: It is concluded that CVOA‐LSTM outperforms the benchmarks and is a viable new tool for forecasting hydropower dam deformations.
Abstract: The safe operation and management of hydropower dams play a critical role in socio‐economic development and in ensuring people's safety in many countries; therefore, modeling and forecasting hydropower dam deformations with high accuracy is crucial. This research proposes and validates a new model based on deep learning long short‐term memory (LSTM) and the coronavirus optimization algorithm (CVOA), named CVOA‐LSTM, for forecasting the deformations of a hydropower dam. The study focuses on the second‐largest hydropower dam of Vietnam, located in Hoa Binh province. Herein, the LSTM is used to establish the deformation model, whereas the CVOA is utilized to optimize three parameters of the LSTM: the number of hidden layers, the learning rate, and the dropout rate. The efficacy of the proposed CVOA‐LSTM model is assessed by comparing its forecasting performance with state‐of‐the‐art benchmarks: sequential minimal optimization for support vector regression, Gaussian process, the M5' model tree, the multilayer perceptron neural network, the reduced error pruning tree, random tree, random forest, and the radial basis function neural network. The result shows that the proposed CVOA‐LSTM model has high forecasting capability (R2 = 0.874, root mean square error = 0.34, mean absolute error = 0.23) and outperforms the benchmarks. We conclude that CVOA‐LSTM is a new tool that can be considered for forecasting hydropower dam deformations.
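
A minimal sketch of the tuning loop: an LSTM regressor whose depth, learning rate, and dropout are the three optimized hyperparameters, with plain random search standing in for the CVOA metaheuristic (the data, model sizes, and search ranges are illustrative, not the paper's).

```python
import random
import torch
import torch.nn as nn

class DeformationLSTM(nn.Module):
    """LSTM regressor; the three tuned hyperparameters mirror the paper:
    number of hidden layers, learning rate, and dropout rate."""
    def __init__(self, n_layers, dropout, input_dim=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden, num_layers=n_layers,
                            dropout=dropout if n_layers > 1 else 0.0,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # one-step-ahead deformation

# Synthetic stand-in for the dam monitoring series.
x = torch.randn(256, 30, 1)
y = x.mean(dim=1)                         # toy target

def rmse_after_training(cfg, epochs=30):
    """Train one candidate configuration, return validation RMSE."""
    model = DeformationLSTM(cfg["layers"], cfg["dropout"])
    opt = torch.optim.Adam(model.parameters(), lr=cfg["lr"])
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x[:200]), y[:200])
        loss.backward()
        opt.step()
    with torch.no_grad():
        return nn.functional.mse_loss(model(x[200:]), y[200:]).sqrt().item()

# Random search shown as a simple stand-in for the CVOA metaheuristic:
best = min(({"layers": random.randint(1, 3),
             "lr": 10 ** random.uniform(-4, -2),
             "dropout": random.uniform(0.0, 0.5)} for _ in range(10)),
           key=rmse_after_training)
print(best)
```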

16 citations


Journal ArticleDOI
TL;DR: In this paper, a deep learning‐based computational framework named ROSERS (Real‐time On‐Site Estimation of Response Spectra) is proposed to estimate the acceleration response spectrum (Sa(T)) of the expected on‐site ground‐motion waveforms using early, non‐damage‐causing P‐waves and site characteristics.
Abstract: Various earthquake early warning (EEW) methodologies have been proposed globally for speedily estimating information (i.e., location, magnitude, ground‐shaking intensities, and/or potential consequences) about ongoing seismic events for real‐time/near real‐time earthquake risk management. Conventional EEW algorithms have often been based on the inferred physics of a fault rupture combined with simplified empirical models to estimate the source parameters and intensity measures of interest. Given the recent boost in computational resources, data‐driven methods/models are now widely accepted as effective alternatives for EEW. This study introduces a highly accurate deep‐learning‐based computational framework named ROSERS (i.e., Real‐time On‐Site Estimation of Response Spectra) to estimate the acceleration response spectrum (Sa(T)) of the expected on‐site ground‐motion waveforms using early, non‐damage‐causing P‐waves and site characteristics. The framework is trained using a carefully selected extensive database of recorded ground motions. Due to the well‐known correlation of Sa(T) with structures' seismic response and resulting damage/losses, rapid and accurate knowledge of expected on‐site Sa(T) values is highly beneficial to various end‐users making well‐informed real‐time and near‐real‐time decisions. The framework is thoroughly assessed and investigated through multiple statistical tests on three historical earthquake events. These analyses demonstrate that the overall framework has excellent prediction power and, on average, an accuracy above 85% for hazard‐consistent early‐warning trigger classification.
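
For readers unfamiliar with the target quantity, Sa(T) is the peak response of a damped single‐degree‐of‐freedom oscillator swept over periods T; a self‐contained numerical sketch follows (Newmark average‐acceleration integration, a synthetic record, and 5% damping are assumptions for illustration).

```python
import numpy as np

def response_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration response spectrum Sa(T): for each period,
    integrate a damped SDOF oscillator (unit mass) under ground
    acceleration `ag` with Newmark average acceleration (gamma=1/2,
    beta=1/4) and take w^2 * max|u|."""
    gamma, beta = 0.5, 0.25
    sa = np.zeros_like(periods, dtype=float)
    for j, T in enumerate(periods):
        w = 2 * np.pi / T
        c, k = 2 * zeta * w, w ** 2
        u = v = 0.0
        a = -ag[0]                         # initial relative acceleration
        denom = 1 + gamma * dt * c + beta * dt ** 2 * k
        peak = 0.0
        for p in -ag[1:]:                  # forcing = -ag(t)
            rhs = p - c * (v + (1 - gamma) * dt * a) \
                    - k * (u + dt * v + (0.5 - beta) * dt ** 2 * a)
            a_new = rhs / denom
            u += dt * v + dt ** 2 * ((0.5 - beta) * a + beta * a_new)
            v += dt * ((1 - gamma) * a + gamma * a_new)
            a = a_new
            peak = max(peak, abs(k * u))   # pseudo-acceleration w^2*|u|
        sa[j] = peak
    return sa

# Example: Sa(T) of a synthetic 20 s record sampled at 100 Hz.
dt = 0.01
t = np.arange(0, 20, dt)
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 2 * t) * np.exp(-0.2 * t)
periods = np.linspace(0.05, 4.0, 80)
print(response_spectrum(ag, dt, periods).max())
```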

15 citations


Journal ArticleDOI
TL;DR: Li et al. as mentioned in this paper proposed a novel neural network for the detection and evaluation of road cracks at pixel level, which combines the advantage of encoding-decoding network and attention mechanism and can thus extract the crack pixels more accurately and efficiently.
Abstract: As the most common road distress, cracks have a substantial influence on the integrity of pavement structures. Accurate identification of crack existence and quantification of crack geometry are thus critical for the decision‐making of maintenance measures. This paper proposes a novel neural network for the detection and evaluation of road cracks at the pixel level, which combines the advantages of an encoding–decoding network and an attention mechanism and can thus extract crack pixels more accurately and efficiently. The proposed network achieves an excellent detection performance with IOU = 92.85%, precision = 96.90%, recall = 95.36%, and F1 = 95.53%. Compared with other advanced networks, the accuracy of the proposed method is substantially enhanced. The quantitative estimation of key geometrical features of cracks, including length, width, and area, is successfully realized with the development of a prototype of an intelligent mobile system. Compared with the ground truth, the maximum crack width shows the lowest relative error rate, ranging from −31.75% to 28.57%.
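
One common post‐processing recipe for the reported geometry measures, sketched under the assumption of a binary crack mask: area from the pixel count, length from the skeleton, and width from the distance transform along the skeleton (the paper's exact procedure may differ).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def crack_geometry(mask, mm_per_pixel=1.0):
    """Estimate crack area, length, and width from a binary segmentation
    mask. Length is approximated by the skeleton pixel count and width by
    twice the distance-to-background sampled along the skeleton."""
    mask = mask.astype(bool)
    area = mask.sum() * mm_per_pixel ** 2
    skel = skeletonize(mask)
    length = skel.sum() * mm_per_pixel
    dist = distance_transform_edt(mask)           # distance to background
    widths = 2.0 * dist[skel] * mm_per_pixel      # local width samples
    return {"area_mm2": float(area),
            "length_mm": float(length),
            "max_width_mm": float(widths.max()) if widths.size else 0.0}

# Toy example: a 3-pixel-wide horizontal crack.
m = np.zeros((50, 200), dtype=np.uint8)
m[24:27, 10:190] = 1
print(crack_geometry(m, mm_per_pixel=0.1))
```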

15 citations


Journal ArticleDOI
TL;DR: The detection performance of the CNN model can be significantly improved by the proposed night‐to‐day image conversion method, which consists of three main steps.
Abstract: Deep learning provides an efficient automated method for pavement condition surveys, but the datasets used for such models usually consist of images taken in good lighting conditions. If images are taken at night, these models cannot work effectively. This paper proposes a method for normalizing pavement images taken at night, which consists of three main steps. First, an image feature point detection and matching method is used to process images taken during the day and at night, yielding paired day–night images of the same pavement. Second, with the help of an image‐to‐image translation model, those paired images are used for training, and the best model for converting night images into day images is selected. Third, a convolutional neural network (CNN) based on VGGNet is constructed and trained on pavement images taken during the day. After that, six types of images are tested separately, namely, images taken during the day and at night, and night images converted by the proposed method and by traditional methods. As evaluated by various evaluation indices and visualization methods, the detection performance of the CNN model can be significantly improved by using the proposed night‐to‐day image conversion method.
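
Step one of the pipeline can be sketched with standard tooling; ORB features with brute‐force Hamming matching are an assumption here, since the abstract does not name a specific detector.

```python
import cv2

def match_day_night(day_path, night_path, max_matches=50):
    """Detect and match feature points between a day image and a night
    image of the same pavement section (ORB + Hamming brute-force here;
    the paper does not mandate a specific detector)."""
    day = cv2.imread(day_path, cv2.IMREAD_GRAYSCALE)
    night = cv2.imread(night_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(day, None)
    kp2, des2 = orb.detectAndCompute(night, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches[:max_matches]

# The matched point pairs can then be used to register and crop the two
# images into aligned day-night training pairs for the image-to-image
# translation model of step two.
```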

14 citations


Journal ArticleDOI
TL;DR: In this paper , a deep reinforcement learning (DRL) based distributed longitudinal control strategy for connected and automated vehicles (CAVs) under communication failure to stabilize traffic oscillations is proposed.
Abstract: This paper proposes a deep reinforcement learning (DRL)‐based distributed longitudinal control strategy for connected and automated vehicles (CAVs) under communication failure to stabilize traffic oscillations. Specifically, signal‐to‐interference‐plus‐noise‐ratio‐based vehicle‐to‐vehicle communication is incorporated into the DRL training environment to reproduce realistic communication and time–space varying information flow topologies (IFTs). A dynamic information fusion mechanism is designed to smooth the high‐jerk control signal caused by the dynamic IFTs. On this basis, each CAV, controlled by a DRL‐based agent, receives real‐time state information from downstream CAVs and takes longitudinal actions to achieve equilibrium consensus in the multi‐agent system. Simulated experiments are conducted to tune the communication adjustment mechanism and further validate the control performance, oscillation dampening performance, and generalization capability of the proposed algorithm.

14 citations


Journal ArticleDOI
TL;DR: The results show that this approach effectively improves model accuracy under multifidelity data and has the potential to become an alternative for solving prediction problems in structural engineering.
Abstract: The data‐driven approach based on plenty of high‐fidelity data, such as experimental data, has become prevalent in the prediction of structural behavior. However, high‐fidelity data are sometimes hard to obtain and available only in small amounts. Meanwhile, low‐fidelity data, such as simulation results, are available in large amounts, but their accuracy is relatively poor, making them unsuitable for establishing models on their own. Thus, a multifidelity approach based on machine learning (ML) algorithms is presented, which can enhance the prediction model's performance under multifidelity data. First, the basic theory and application procedure of this approach are introduced. Then, a case study on predicting the shear capacity of reinforced concrete deep beams was carried out to validate the method's feasibility. The influence of different ML algorithms, low‐fidelity data sources, and high‐fidelity data ratios was thoroughly investigated. The results showed that this approach effectively improves model accuracy under multifidelity data and has the potential to become an alternative for solving prediction problems in structural engineering.
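
One common ML realization of the multifidelity idea is to fit a model on the abundant low‐fidelity data and feed its prediction to a second model trained on the scarce high‐fidelity data, so the latter only has to learn the discrepancy; a synthetic‐data sketch of that scheme follows (not necessarily the paper's exact formulation).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Abundant low-fidelity data (e.g., simulations) and scarce high-fidelity
# data (e.g., tests) over the same input space; synthetic stand-ins here.
X_lo = rng.uniform(0, 1, (2000, 5))
y_lo = X_lo.sum(axis=1) + 0.3 * np.sin(6 * X_lo[:, 0])   # biased model
X_hi = rng.uniform(0, 1, (60, 5))
y_hi = X_hi.sum(axis=1) + 0.5 * np.sin(6 * X_hi[:, 0])   # "truth"

# Step 1: learn the cheap low-fidelity trend.
lo_model = GradientBoostingRegressor().fit(X_lo, y_lo)

# Step 2: train the high-fidelity model on [inputs, low-fidelity
# prediction]; it only has to learn the (simpler) discrepancy.
X_hi_aug = np.column_stack([X_hi, lo_model.predict(X_hi)])
hi_model = GradientBoostingRegressor().fit(X_hi_aug, y_hi)

def predict(X):
    return hi_model.predict(np.column_stack([X, lo_model.predict(X)]))

print(predict(rng.uniform(0, 1, (3, 5))))
```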

Journal ArticleDOI
TL;DR: In this paper , a You Only Look Once v4 (YOLOv4) network was used to detect multicategory damage (fine crack, wide crack, concrete spalling, exposed rebar and buckled rebar).
Abstract: Earthquake damage investigation is critical to post‐earthquake structural recovery and reconstruction. In this study, a method of assessing the component failure mode and damage level was established based on object detection and recognition. A quantitative structural damage level assessment method was developed based on the type and extent of damage to the components. A You Only Look Once v4 (YOLOv4) network was used to detect multicategory damage (fine crack, wide crack, concrete spalling, exposed rebar, and buckled rebar). Depthwise separable convolution was introduced into YOLOv4 to decrease the computation cost without reducing accuracy. Finally, the damage detection method and assessment method were integrated within a graphical user interface (GUI) to facilitate post‐earthquake reinforced concrete (RC) structural damage assessment. Test results obtained through the GUI indicate that the improved object detection network achieves accurate detection results and that the preliminary safety assessment method can judge the damage level and failure mode. The present study shows high potential for estimating the seismic damage states of RC structures.

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a hierarchical framework for improving ride comfort by integrating speed planning and suspension control in a vehicle‐to‐everything environment. Based on safe, comfortable, and efficient speed planning via dynamic programming, a deep reinforcement learning‐based suspension control is proposed to adapt to changing pavement conditions.
Abstract: Ride comfort plays an important role in determining the public acceptance of autonomous vehicles (AVs). Many factors, such as road profile, driving speed, and the suspension system, influence the ride comfort of AVs. This study proposes a hierarchical framework for improving ride comfort by integrating speed planning and suspension control in a vehicle‐to‐everything environment. Based on safe, comfortable, and efficient speed planning via dynamic programming, a deep reinforcement learning‐based suspension control is proposed to adapt to the changing pavement conditions. Specifically, a deep deterministic policy gradient with external knowledge (EK‐DDPG) algorithm is designed for the efficient self‐adaptation of suspension control strategies. The external knowledge of action selection and value estimation from other AVs is combined into the loss functions of the DDPG algorithm. In numerical experiments, real‐world pavements detected in 11 districts of Shanghai, China, are applied to verify the proposed method. Experimental results demonstrate that the EK‐DDPG‐based suspension control improves ride comfort on untrained rough pavements by 27.95% and 3.32%, compared to a model predictive control (MPC) baseline and a DDPG baseline, respectively. Meanwhile, the EK‐DDPG‐based suspension control improves computational efficiency by 22.97% compared to the MPC baseline and performs at the same level as the DDPG baseline. This study provides a generalized and computationally efficient approach for improving the ride comfort of AVs.

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed a deep learning model named ShuttleNet to detect multiple distresses and surface design features on complex asphalt pavements simultaneously, including pavement cracks, potholes, sealed cracks, patches, markings, expansion joints, and the pavement background.
Abstract: Simultaneous pixel‐level detection of multiple distresses and surface design features on complex asphalt pavements is a critical challenge in intelligent pavement survey. This paper proposes a deep‐learning model named ShuttleNet to provide an efficient solution for this challenge by implementing robust semantic segmentation on asphalt pavements. The proposed ShuttleNet aims at repeating the encoding–decoding round freely or even endlessly such that the contexts at different resolution levels can be learned and integrated many times for enhanced latent representations. Additionally, a new and efficient connection method called memory connection is also proposed in the paper and deployed in the ShuttleNet model to provide shortcut connections between successive encoding–decoding rounds. The proposed memory connection can partially or entirely carry the decoded information at different resolution levels into the next encoding–decoding round. Pairing 3D pavement images with 2D pavement images, the proposed ShuttleNet model is applied to detect multiple distresses and surface design features on asphalt pavements simultaneously, including pavement cracks, potholes, sealed cracks, patches, markings, expansion joints, and the pavement background. Experimental results demonstrate that the mean F‐measure and mean intersection‐over‐union attained by the recommended architectural variation of the proposed ShuttleNet model on 1500 testing image pairs are 92.54% and 0.8657 respectively. According to the performance comparisons using both private and public datasets, the proposed ShuttleNet model can yield a noticeably higher detection accuracy, compared with four state‐of‐the‐art models for semantic segmentation.

Journal ArticleDOI
TL;DR: In this paper , an encoder-decoder-based network with a particular interconnection of layers between the encoder and decoder parts was proposed for automatic recognition of cracks.
Abstract: The automatic recognition of cracks is an essential requirement for the cost‐efficient maintenance of concrete structures, such as bridges, buildings, and roads. It should allow the localization and the determination of the crack type and the evaluation of the crack severity by providing information on the shape, orientation, and crack area and width. The first step in this direction is the automatized segmentation of cracks. This paper provides a concrete crack data set (370 images) and proposes two solutions that achieve the best results on two different crack data sets. Our first solution concerns the segmentation architecture. We provide an encoder–decoder‐based network with a particular interconnection of layers between the encoder and decoder parts that outperforms several other methods. In addition, this network is enhanced by squeeze‐and‐excitation blocks equipped with a modified sigmoid activation function. We introduce a stretch coefficient into the sigmoid function and declare it a trainable parameter, allowing more differentiated calibration of the feature map during network training. Our second solution concerns kernel initialization by transfer learning (TL). We propose the Copy‐Edit‐Paste Transfer Learning (CEP TL). By copying, geometric editing, and pasting crack masks onto new concrete background images, we generate thousands of semisynthetic images used to pretrain the network. This CEP TL method increases model performance with significant differences. For data set A (ours), we achieve F1‐scores 76.06 ± 0.06% without CEP TL and 92.32 ± 0.82% with CEP TL. For data set B (DeepCrack data set), we achieve F1‐scores 88.56 ± 0.01% without CEP TL and 90.59 ± 0.80% with CEP TL.
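
The modified gating described above can be sketched directly: a squeeze‐and‐excitation block whose sigmoid carries a trainable stretch coefficient (a minimal sketch of the idea; layer sizes are illustrative, not the paper's).

```python
import torch
import torch.nn as nn

class StretchedSigmoidSE(nn.Module):
    """Squeeze-and-excitation block whose gating sigmoid has a trainable
    stretch coefficient s: gate = 1 / (1 + exp(-s * z)). Learning s lets
    the network calibrate how sharply channels are re-weighted."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.stretch = nn.Parameter(torch.ones(1))   # trainable s

    def forward(self, x):
        b, c, _, _ = x.shape
        z = self.fc(self.pool(x).view(b, c))
        gate = torch.sigmoid(self.stretch * z)       # excitation
        return x * gate.view(b, c, 1, 1)

x = torch.randn(2, 64, 32, 32)
print(StretchedSigmoidSE(64)(x).shape)               # (2, 64, 32, 32)
```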

Journal ArticleDOI
Shi-lin Dong, Sen Han, Chi Wu, Ouming Xu, Haiyu Kong 
TL;DR: It is concluded that the proposed CNN can reconstruct the macrotexture from monocular RGB images precisely, and the reconstructed macrotexture can be further used for pavement macrotexture evaluation.
Abstract: Pavement macrotexture is one of the major factors affecting pavement functions, and it is meaningful to reconstruct the pavement macrotexture rapidly and accurately for pavement life cycle performance and quality evaluation. To reconstruct pavement macrotexture from monocular image, a novel method was developed based on a deep convolutional neural network (CNN). First, the red‐green‐blue (RGB) images and depth maps (RGB‐D) of pavement texture were acquired by smartphone and laser texture scanner, respectively, from various asphalt mixture slab specimens fabricated in the laboratory, and the pavement texture RGB‐D dataset was established from scratch. Then, an encoder–decoder CNN architecture was proposed based on residual network‐101, and different training strategies were discussed for model optimization. Finally, the precision of the CNN and the three‐dimensional characteristics of the reconstructed macrotexture were analyzed. The results show that the established RGB‐D dataset can be used for training directly, and the established CNN architecture is plausible and effective. The mean texture depth and f8mac of the reconstructed macrotexture both correlate with the benchmarks significantly, and the correlation coefficients are 0.88 and 0.96, respectively. It could be concluded that the proposed CNN can reconstruct the macrotexture from monocular RGB images precisely, and the reconstructed macrotexture could be further used for pavement macrotexture evaluation.

Journal ArticleDOI
Yafei Liu, Yan Zhou, Shuai Su, Jing Xun, Tao Tang 
TL;DR: In this article , a tube model predictive control (MPC) framework is constructed to handle safety constraints and regulate bounded small disturbances in a virtual coupling train set (VCTS).
Abstract: Virtual coupling (VC) brings unprecedented opportunities for train operation systems by controlling multiple trains as a virtually coupled train set (VCTS) via train automation and communication. To deal with communication delays and small disturbances in a VCTS, this paper develops a tube‐based control approach for the VCTS, focusing on optimizing control performance while guaranteeing individual and string stability. Specifically, a tube model predictive control (MPC) framework is constructed to handle safety constraints and regulate bounded small disturbances. Then, individual stability and string stability are ensured by designing the constraint sets for inputs and by proper coefficient tuning within the stable region. Finally, simulation‐based experiments verify that the proposed approach achieves better robustness and higher efficiency: it regulates train states within a tube, and the VCTS remains asymptotically stable and string stable in the face of disturbances and delays.

Journal ArticleDOI
TL;DR: In this article , an unsupervised learning, novelty detection framework for detecting and localizing damage in large-scale structures is proposed, which relies on a 5D, time-dependent grid environment and a novel spatiotemporal composite autoencoder network.
Abstract: The demand for resilient and smart structures has been rapidly increasing in recent decades. With the occurrence of the big data revolution, research on data‐driven structural health monitoring (SHM) has gained traction in the civil engineering community. Unsupervised learning, in particular, can be directly employed solely using field‐acquired data. However, the majority of unsupervised learning SHM research focuses on detecting damage in simple structures or components and possibly low‐resolution damage localization. In this study, an unsupervised learning, novelty detection framework for detecting and localizing damage in large‐scale structures is proposed. The framework relies on a 5D, time‐dependent grid environment and a novel spatiotemporal composite autoencoder network. This network is a hybrid of autoencoder convolutional neural networks and long short‐term memory networks. A 10‐story, 10‐bay numerical structure is used to evaluate the proposed framework's damage diagnosis capabilities. The framework successfully diagnosed the structure's health state with average accuracies of 93% and 85% for damage detection and localization, respectively.

Journal ArticleDOI
TL;DR: In this article , a low-discrepancy point sampling-based modified ant colony optimization (SMACO) algorithm for obtaining horizontal alignments with optimized HSR-specific cost and impact, including noise and vibration impacts, is proposed.
Abstract: High‐speed railway (HSR) alignment development is a complex and tedious problem due to an infinite number of possible solutions, the existence of non‐linear costs and impacts, and complex location and geometric design constraints. In this study, a low‐discrepancy point sampling‐based modified ant colony optimization (SMACO) algorithm for obtaining horizontal alignments with optimized HSR‐specific cost and impact, including noise and vibration impacts, is proposed. The low‐discrepancy sampling approach is used to identify the potential horizontal points of intersection (HPIs), from which appropriate intermediate HPIs are selected to develop feasible alignments using the SMACO algorithm. It effectively avoids restricted land parcels and satisfies HSR‐related geometric design requirements. A real‐world case study demonstrated that the HSR alignment obtained using the proposed method was marginally better than the path planner method‐based alignment and the constructed alignment. The sensitivity analysis highlighted the impact of two key parameters, that is, the right‐of‐way widths and the noise and vibration screening distances, on the HSR alignment development. This study advances alignment development automation, particularly HSR horizontal alignment for design speeds over 180 km/h. It facilitates extensive search space exploration independent of infeasible regions, identifies and selects HPIs without being constrained to prespecified locations and a user‐defined number, and proposes a suitably modified ACO algorithm for HSR alignment development.

Journal ArticleDOI
TL;DR: In this article, a wireless SmartVision system (WSVS) is proposed that uses edge computing to estimate bridge displacements with both target‐free and target‐based approaches; the estimated displacements can be sent directly to the end user.
Abstract: The deflection of railroad bridges under in‐service loads is an important indicator of the structure's health. Over the past decade, an increasing number of studies have demonstrated the efficacy of using vision‐based approaches for displacement tracking of civil infrastructure. These studies have relied primarily on external processing of manually recorded videos of a structure's motion to estimate displacements. To date, vision‐based techniques applied to long‐term structural health monitoring have yet to be proven effective as an alternative to the traditional displacement measurement methods, such as linear variable differential transformers. This paper proposes a wireless SmartVision system (WSVS) that uses edge computing to directly output bridge displacements that can be sent to the end user. The system estimates displacements using both target‐free and target‐based approaches. A synchronized sensing framework is developed for multipoint displacement estimation using several wireless vision‐based nodes for full‐scale displacement‐based modal analysis of structures. Pose estimation using an AprilTag, a fiducial marker, is employed with a modified algorithm for improved displacement tracking of targets installed on a bridge, yielding subpixel accuracy. The robustness of the results in field conditions is enhanced by linking a tracking quality factor to each timestamp to handle vision‐related uncertainties. To meet the need for precise error metrics evaluation, an inexpensive cyber‐physical setup using a synthetic testing environment is also developed in this study. Following laboratory validation, field tests on a cable‐stayed pedestrian bridge were performed to demonstrate the efficacy of the proposed WSVS.

Journal ArticleDOI
TL;DR: In this paper , a new deep learning network is proposed that integrates multi-level and multi-scale features to classify the materials on contaminated surfaces requiring disinfection, and the infection risk of contaminated surfaces is computed to choose the appropriate disinfection modes and parameters.
Abstract: Existing disinfection robots are not intelligent enough to adapt their actions to object surface materials for precise and effective disinfection. To address this problem, a new framework is developed to enable the robot to recognize various object surface materials and to adapt its disinfection methods to be compatible with the recognized materials. Specifically, a new deep learning network is proposed that integrates multi‐level and multi‐scale features to classify the materials on contaminated surfaces requiring disinfection. The infection risk of contaminated surfaces is computed to choose the appropriate disinfection modes and parameters. The developed material recognition method demonstrates state‐of‐the‐art performance, achieving accuracies of 92.24% and 91.84% on the Materials in Context Database validation and test datasets, respectively. The proposed method was also tested and evaluated in the context of healthcare facilities, where the material classification achieved an accuracy of 89.09%, and the adaptive robotic disinfection was successfully implemented.

Journal ArticleDOI
TL;DR: A deep learning‐based model called sparse‐sensing and superpixel‐based segmentation (SSSeg) is proposed for accurate and efficient crack segmentation; it achieves a good balance between recognition correctness and completeness and outperforms other models in both accuracy and efficiency.
Abstract: Efficient image‐recognition algorithms to classify the pixels accurately are required for the computer‐vision‐based inspection of concrete defects. This study proposes a deep learning‐based model called sparse‐sensing and superpixel‐based segmentation (SSSeg) for accurate and efficient crack segmentation. The model employed a sparse‐sensing‐based encoder and a superpixel‐based decoder and was compared with six state‐of‐the‐art models. An input pipeline of 1231 diverse crack images was specially designed to train and evaluate the models. The results indicated that the SSSeg achieved a good balance between the recognition correctness and completeness and outperformed other models in both accuracy and efficiency. The SSSeg also exhibited good resistance to the interference of surface roughness, dirty stains, and moisture. The increased depth and receptive field of sparse‐sensing units guaranteed the representability; meanwhile, structured sparse characteristics protected the network from overfitting. The lightweight superpixel‐based decoder omitted skip connections, which greatly reduced the computation and memory footprint and enlarged the input size in the inference.

Journal ArticleDOI
TL;DR: In this paper , a bundle registration algorithm is devised to align a batch of aerial photographs with a building information model (BIM), which enables the retrieval of material semantics in BIM to determine the regions of interest for defect detection.
Abstract: Concrete defect information is of vital importance to building maintenance. Increasingly, computer vision has been explored for automated concrete defect detection. However, existing studies suffer from the challenging issue of false positives. In addition, 3D reconstruction of the defects to pinpoint their positions and geometries has not been sufficiently explored. To address these limitations, this study proposes a novel computational approach for detecting and reconstructing concrete defects from geotagged aerial images. A bundle registration algorithm is devised to align a batch of aerial photographs with a building information model (BIM). The registration enables the retrieval of material semantics in BIM to determine the regions of interest for defect detection. It helps rectify the camera poses of the aerial images, enabling precise defect reconstruction. Experiments demonstrate the effectiveness of the approach, which significantly reduced the false discovery rate from 70.8% to 56.8%, resulting in an intersection over union 6.4% higher than that of the traditional method. The geometry of the defects was successfully reconstructed in 3D world space. This study opens a new avenue to advance the field of defect detection by exploiting the rich information from BIM. The approach can be deployed at scale, supporting urban renovation, numerical simulation, and other smart applications.

Journal ArticleDOI
TL;DR: A population‐level analysis is proposed to address data sparsity when building predictive models for engineering infrastructure; two case studies demonstrate its wide applicability in practical infrastructure monitoring, since the approach is naturally adapted between interpretable fleet models of different in situ examples.
Abstract: A population‐level analysis is proposed to address data sparsity when building predictive models for engineering infrastructure. Utilizing an interpretable hierarchical Bayesian approach and operational fleet data, domain expertise is naturally encoded (and appropriately shared) between different subgroups, representing (1) use‐type, (2) component, or (3) operating condition. Specifically, domain expertise is exploited to constrain the model via assumptions (and prior distributions), allowing the methodology to automatically share information between similar assets, improving the survival analysis of a truck fleet (15% and 13% increases in predictive log‐likelihood of hazard) and power prediction in a wind farm (up to 82% reduction in the standard deviation of maximum output prediction). In each asset management example, a set of correlated functions is learned over the fleet in a combined inference to form a population model. Parameter estimation is improved when subfleets are allowed to share correlated information at different levels in the hierarchy; the (averaged) reduction in standard deviation for interpretable parameters in the survival analysis is 70%, alongside 32% in the wind farm power models. In turn, groups with incomplete data automatically borrow statistical strength from those that are data‐rich. The statistical correlations enable knowledge transfer via Bayesian transfer learning, and the correlations can be inspected to inform which assets share information for which effect (i.e., parameter). Successes in both case studies demonstrate the wide applicability in practical infrastructure monitoring, since the approach is naturally adapted between interpretable fleet models of different in situ examples.
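
A minimal PyMC sketch of the partial‐pooling mechanism the abstract describes, with toy data and an illustrative group structure (not the paper's survival or power models): subgroup parameters share a population prior, so data‐poor groups are shrunk toward the fleet‐level estimate.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)

# Toy fleet: 6 subgroups (e.g., use-types) with very unequal data volumes.
group_sizes = [50, 40, 30, 20, 5, 3]
group_idx = np.repeat(np.arange(6), group_sizes)
true_theta = rng.normal(1.0, 0.3, 6)
y = rng.normal(true_theta[group_idx], 0.5)

with pm.Model() as fleet_model:
    # Population-level (shared) hyperpriors encode domain expertise.
    mu = pm.Normal("mu", 1.0, 1.0)
    tau = pm.HalfNormal("tau", 1.0)
    # Group-level parameters are partially pooled toward the population.
    theta = pm.Normal("theta", mu, tau, shape=6)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("y", theta[group_idx], sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# Data-poor groups (sizes 5 and 3) get estimates shrunk toward mu,
# borrowing statistical strength from the data-rich groups.
print(idata.posterior["theta"].mean(("chain", "draw")).values)
```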

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a vision‐based method for automatic vehicle height measurement using deep learning and view geometry, where vehicle instances are first segmented from traffic surveillance video frames by exploiting the mask region‐based convolutional neural network (Mask R‐CNN).
Abstract: Overheight vehicle collisions continuously pose a serious threat to transportation infrastructure and public safety. This study proposed a vision‐based method for automatic vehicle height measurement using deep learning and view geometry. In this method, vehicle instances are first segmented from traffic surveillance video frames by exploiting mask region‐based convolutional neural network (Mask R‐CNN). Then, 3D bounding box on each vehicle instance is constructed using the obtained vehicle silhouette and three orthogonal vanishing points in the surveilled traffic scene. By doing so, the vertical edges of the constructed 3D bounding box are directly associated with the vehicle image height. Last, the vehicle's physical height is computed by referencing an object with a known height in the traffic scene using single view metrology. A field experiment was performed to evaluate the performance of the proposed method, leading to the mean and maximum errors of 3.6 and 6.6, 5.8 and 12.9, 4.4 and 8.1, and 9.2 and 18.5 cm for cars, buses, vans, and trucks, respectively. The experiment also demonstrated the ability of the method to overcome vehicle occlusion, shadow, and irregular appearance interferences in height estimation suffered by existing image‐based methods. The results signified the potential of the proposed method for overheight vehicle detection and collision warning in real traffic settings.
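
The first stage of the method can be sketched with a pretrained torchvision Mask R‐CNN (the authors presumably trained their own model; the COCO class ids and score threshold below are assumptions for illustration).

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.io import read_image

# COCO class ids for vehicle categories in the pretrained model.
VEHICLES = {3: "car", 6: "bus", 8: "truck"}

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def vehicle_masks(image_path, score_thr=0.7):
    """Return binary masks for detected vehicles; these silhouettes feed
    the 3D-bounding-box construction from the scene's three orthogonal
    vanishing points in the subsequent steps."""
    img = read_image(image_path).float() / 255.0
    with torch.no_grad():
        out = model([img])[0]
    keep = [(int(l), m) for l, s, m in
            zip(out["labels"], out["scores"], out["masks"])
            if s > score_thr and int(l) in VEHICLES]
    return [(VEHICLES[l], (m[0] > 0.5)) for l, m in keep]
```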

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed an arterial signal control stochastic simulation-based optimization model with traffic safety and efficiency as biobjectives and solved it by a biobjective surrogate-based promising area search (BOSPAS) method.
Abstract: This paper proposes an arterial signal control stochastic simulation‐based optimization model with traffic safety and efficiency as biobjectives and solves it by a biobjective surrogate‐based promising area search (BOSPAS) method. In this model, traffic safety and efficiency are indexed by the average potential collision energy (APCE) and the vehicular throughput of the arterial road, respectively, and the arterial signal control plan is designed in a ring‐and‐barrier structure. In the BOSPAS method, each solution is evaluated by traffic simulation only once, and the stochastic evaluation noises of the sampled solutions are smoothed by a shrinking ball method to approximate their biobjective expectations. Based on the solutions and their estimated biobjective values, two surrogate models of the biobjectives are constructed and optimized to obtain the nondominated solutions, which guide the establishment of the joint promising area for sampling in the next iteration. In the numerical experiments, BOSPAS is first shown to outperform three counterparts (i.e., nondominated sorting genetic algorithm II [NSGA‐II], biobjective efficient global optimization [BOEGO], and biobjective promising area search [BOPAS]) on a stochastic FON test function in terms of convergence and diversity. It is then applied to optimize the signal control plan of an arterial road with six four‐leg signalized intersections in Changsha, China. The numerical results show that the nondominated solutions by BOSPAS perform better than those by NSGA‐II, BOEGO, and BOPAS under limited simulation budgets, and also mostly outperform three solutions by Synchro, MAXBAND, and MULTIBAND, especially in reducing APCE. In contrast with the field‐implemented signal plan, the optimized ones by BOSPAS improve the APCE and the vehicular throughput of the arterial road by at most 50.2% and 24.8%, respectively. Moreover, considering the vehicular throughput of the arterial road as one objective may have a negative effect on the vehicular throughput of the overall road network. In conclusion, BOSPAS is promising for addressing biobjective optimization problems characterized by costly evaluation, high dimensions, and stochastic noises.

Journal ArticleDOI
TL;DR: This paper introduces the development of a graph convolutional neural network‐integrated deep reinforcement learning (GCN‐DRL) model to support optimal repair decisions to improve WDN resilience after earthquakes.
Abstract: Water distribution networks (WDNs) are critical infrastructure for communities. The dramatic expansion of WDNs associated with urbanization makes them more vulnerable to high‐consequence hazards such as earthquakes, which requires strategies to ensure their resilience. The resilience of a WDN is related to its ability to recover its service after disastrous events. Sound decisions on the repair sequence play a crucial role in ensuring a resilient WDN recovery. This paper introduces the development of a graph convolutional neural network‐integrated deep reinforcement learning (GCN‐DRL) model to support optimal repair decisions to improve WDN resilience after earthquakes. A WDN resilience evaluation framework is first developed, which integrates the dynamic evolution of WDN performance indicators during the post‐earthquake recovery process. The WDN performance indicator considers the relative importance of the service nodes and the extent to which post‐earthquake water needs are satisfied. In this GCN‐DRL model framework, the GCN encodes the information of the WDN. The topology and performance of service nodes (i.e., the degree to which water needs are satisfied) are inputs to the GCN; the outputs of the GCN are the reward values (Q‐values) corresponding to each repair action, which are fed into the DRL process to select the optimal repair sequence from a large action space to achieve the highest system resilience. The GCN‐DRL model is demonstrated on a testbed WDN subjected to three earthquake damage scenarios. The performance of the repair decisions by the GCN‐DRL model is compared with those by four conventional decision methods. The results show that the recovery sequence by the GCN‐DRL model achieved the highest system resilience index values and the fastest recovery of system performance. Besides, by using transfer learning based on a pre‐trained model, the GCN‐DRL model achieved high computational efficiency in determining the optimal repair sequences under new damage scenarios. This novel GCN‐DRL model features robustness and universality to support optimal repair decisions and ensure resilient WDN recovery from earthquake damage.
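
A minimal sketch of the GCN‐to‐Q‐value mapping described above, in plain PyTorch with illustrative feature and network sizes (the paper's exact architecture and DRL training loop are not reproduced): node features pass through normalized‐adjacency graph convolutions, and a head emits one Q‐value per candidate repair.

```python
import torch
import torch.nn as nn

class GCNQNet(nn.Module):
    """Two-layer graph convolution, H' = ReLU(A_hat @ H @ W), followed by
    a linear head that outputs one Q-value per node (repair action).
    A_hat is the symmetrically normalized adjacency with self-loops."""
    def __init__(self, in_feats, hidden=32):
        super().__init__()
        self.w1 = nn.Linear(in_feats, hidden, bias=False)
        self.w2 = nn.Linear(hidden, hidden, bias=False)
        self.q = nn.Linear(hidden, 1)

    @staticmethod
    def normalize(adj):
        a = adj + torch.eye(adj.shape[0])
        d = a.sum(1).pow(-0.5)
        return d.unsqueeze(1) * a * d.unsqueeze(0)

    def forward(self, x, adj):             # x: (nodes, feats)
        a_hat = self.normalize(adj)
        h = torch.relu(a_hat @ self.w1(x))
        h = torch.relu(a_hat @ self.w2(h))
        return self.q(h).squeeze(-1)       # Q-value per candidate repair

# Greedy action selection over a random toy network (illustrative sizes):
adj = (torch.rand(12, 12) < 0.25).float().triu(1)
adj = adj + adj.T
x = torch.rand(12, 4)        # e.g., demand satisfaction, damage state
q = GCNQNet(4)(x, adj)
print(int(q.argmax()))       # next component to repair
```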

Journal ArticleDOI
TL;DR: In this article , a multi-view convolutional neural network (MV-CNN) architecture is proposed, which combines the information from different views of a damaged building, resulting in 3D aggregation of the 2D damage features from each view.
Abstract: This study aims to facilitate a more reliable automated postdisaster assessment of damaged buildings based on the use of multiple‐view imagery. Toward this, a Multi‐View Convolutional Neural Network (MV‐CNN) architecture is proposed, which combines the information from different views of a damaged building, resulting in 3‐D aggregation of the 2‐D damage features from each view. This spatial 3‐D context damage information results in more accurate and reliable damage quantification in the affected buildings. For validation, the presented model is trained and tested on a real‐world visual dataset of expert‐labeled buildings following Hurricane Harvey. The developed model demonstrates an accuracy of 65% in predicting the exact damage states of buildings and around 81% considering ±1 class deviation from the ground truth, based on a five‐level damage scale. Value‐of‐information (VOI) analysis reveals that the hybrid models, which consider at least one aerial and one ground view, perform better.
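
A generic MV‐CNN sketch of the view‐aggregation idea: a shared encoder processes each view, and the per‐view features are pooled before classification into the five damage states (the backbone, pooling choice, and sizes here are assumptions, not the paper's architecture).

```python
import torch
import torch.nn as nn

class MultiViewDamageNet(nn.Module):
    """Shared CNN encoder applied to each view of a building; per-view
    features are pooled (here: max over views) into one descriptor that a
    classifier maps to the five damage states."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, views):              # (batch, n_views, 3, H, W)
        b, v, c, h, w = views.shape
        feats = self.encoder(views.view(b * v, c, h, w)).view(b, v, -1)
        fused = feats.max(dim=1).values    # aggregate across views
        return self.classifier(fused)

logits = MultiViewDamageNet()(torch.randn(2, 4, 3, 128, 128))
print(logits.shape)                        # (2, 5)
```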

Journal ArticleDOI
TL;DR: In this paper , a shake table protocol for seismic assessment and qualification of acceleration-sensitive nonstructural elements (NEs) is developed, considering the most recent advances in the field and the specific expertise of the research team.
Abstract: A shake table protocol for seismic assessment and qualification of acceleration‐sensitive nonstructural elements (NEs) is developed. The paper critically reviews existing protocols and highlights their criticalities, pointing out the need for the development of novel assessment and qualification approaches and protocols. The protocol is developed in light of these criticalities, considering the most recent advances in the field and the specific expertise of the research team. The most significant and contributing parts of the developed protocol consist of the definition of novel required response spectra and the generation of signals for seismic performance evaluation tests. The reliability and robustness of the protocol are evidenced in the paper considering real floor motions as a reference, also proving the superiority of the developed protocol with respect to the reference alternatives. The defined approach and procedures are generally applicable and easily extendable to different case studies, as the process is highly versatile and modifiable. The implementation of the developed approach and protocol in the literature and in practice will significantly enhance the seismic assessment and qualification of acceleration‐sensitive NEs. This will possibly have a strong impact on public safety and the economy.

Journal ArticleDOI
TL;DR: In this paper , a robust subpixel refinement technique, self-adaptive edge points matching (SEPM), is proposed to obtain accurate subpixel-level displacements under drastic illumination change conditions.
Abstract: Applying a subpixel refinement technique in vision‐based displacement sensing can significantly improve the measurement accuracy. However, digital image signals from the camera are highly sensitive to drastically varying lighting conditions in field measurements of structural displacement, causing pixels expressing a tracking target to have nonuniform grayscale intensity changes across different recorded video frames. Traditional feature points‐based subpixel refinement techniques are neither robust nor accurate enough in this case, presenting challenges for accurate measurement. This paper proposes a robust subpixel refinement technique—self‐adaptive edge points matching (SEPM)—to obtain accurate subpixel‐level displacements under drastic illumination change conditions. Different from traditional feature points‐based methods, the gradient and shape information of the target edge contour are used in the SEPM calculation. Three tests under different illumination conditions were conducted to evaluate the performance of the SEPM. The results show that the SEPM is capable of producing accurate subpixel‐level displacement data with less than 1/16‐pixel root‐mean‐square error.
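
SEPM itself matches whole edge‐point sets using gradient and shape information; the subpixel localization step such methods build on can be illustrated with the classic three‐point parabolic fit of the gradient magnitude, sketched below on a 1‐D profile (an illustrative building block, not the authors' algorithm).

```python
import numpy as np

def subpixel_edge_1d(profile):
    """Locate an edge along a 1-D intensity profile with subpixel accuracy:
    take the pixel-level gradient peak and fit a parabola through the three
    neighbouring gradient magnitudes; the parabola vertex gives the
    subpixel offset."""
    g = np.abs(np.gradient(profile.astype(float)))
    i = int(np.argmax(g[1:-1])) + 1            # pixel-level peak
    y0, y1, y2 = g[i - 1], g[i], g[i + 1]
    denom = y0 - 2 * y1 + y2
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return i + offset                          # subpixel edge position

# Example: a blurred step edge whose true position is 20.3 px.
x = np.arange(60)
profile = 1.0 / (1.0 + np.exp(-(x - 20.3) / 1.2))
print(subpixel_edge_1d(profile))               # ~20.3
```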