Journal Article

Robust Bayesian Compressive Sensing for Signals in Structural Health Monitoring

TL;DR: In this article, a Bayesian compressive sensing (BCS) method that uses sparse Bayesian learning to reconstruct signals from a compressive sensor is investigated; it can achieve essentially lossless compression at quite high compression ratios.
Abstract: In structural health monitoring (SHM) systems for civil structures, massive amounts of data are often generated that need data compression techniques to reduce the cost of signal transfer and storage, while offering a simple sensing system. Compressive sensing (CS) is a novel data acquisition method whereby the compression is done in a sensor simultaneously with the sampling. If the original sensed signal is sufficiently sparse in terms of some orthogonal basis (e.g., a sufficient number of wavelet coefficients are zero or negligibly small), the decompression can be done essentially perfectly up to some critical compression ratio; otherwise there is a trade-off between the reconstruction error and how much compression occurs. In this article, a Bayesian compressive sensing (BCS) method is investigated that uses sparse Bayesian learning to reconstruct signals from a compressive sensor. By explicitly quantifying the uncertainty in the reconstructed signal from compressed data, the BCS technique exhibits an obvious benefit over existing regularized norm-minimization CS methods that provide a single signal estimate. However, current BCS algorithms suffer from a robustness problem: sometimes the reconstruction errors are very large when the number of measurements K is much smaller than the number of signal degrees of freedom N needed to capture the signal accurately in a directly sampled form. In this article, we present improvements to the BCS reconstruction method to enhance its robustness so that even higher compression ratios N/K can be used, and we examine the trade-off between efficiently compressing data and accurately decompressing it. Synthetic data and actual acceleration data collected from a bridge SHM system are used as examples. Compared with the state-of-the-art BCS reconstruction algorithms, the improved BCS algorithm demonstrates superior performance. With the same acceptable error rate based on a specified threshold of reconstruction error, the proposed BCS algorithm works with relatively large compression ratios, and it can achieve lossless compression at quite high compression ratios. Furthermore, the error bars for the signal reconstruction are also quantified effectively.
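
The sparse-Bayesian-learning reconstruction described above can be illustrated with a toy example. The sketch below is not the paper's robust BCS algorithm; it uses scikit-learn's ARDRegression (a standard sparse Bayesian learning estimator) to recover an assumed synthetic sparse signal of length N = 256 from K = 64 random measurements.

```python
# Minimal compressive-sensing reconstruction sketch (NOT the paper's robust BCS
# algorithm): a synthetic signal that is sparse in the identity basis is measured
# with a random Gaussian matrix and recovered with sparse Bayesian learning.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
N, K, n_nonzero = 256, 64, 8              # signal length, measurements, sparsity (assumed)

# Synthetic sparse signal x and random measurement matrix Phi (K x N)
x = np.zeros(N)
x[rng.choice(N, n_nonzero, replace=False)] = rng.normal(size=n_nonzero)
Phi = rng.normal(size=(K, N)) / np.sqrt(K)
y = Phi @ x + 0.001 * rng.normal(size=K)  # compressed, slightly noisy measurements

# Sparse Bayesian learning (ARD prior) gives a posterior mean and covariance,
# so the reconstruction comes with uncertainty estimates, as in BCS.
sbl = ARDRegression(fit_intercept=False)
sbl.fit(Phi, y)
x_hat = sbl.coef_                          # posterior mean of the signal coefficients
posterior_cov = sbl.sigma_                 # posterior covariance (uncertainty quantification)

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"compression ratio N/K = {N/K:.1f}, relative reconstruction error = {rel_err:.3e}")
```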
Citations
Journal Article
TL;DR: A novel damage detection approach to automatically extract features from low-level sensor data through deep learning using a deep convolutional neural network, leading to excellent localization accuracy on both noise-free and noisy data sets.
Abstract: Structural damage detection is still a challenging problem owing to the difficulty of extracting damage-sensitive and noise-robust features from the structural response. This article presents a novel damage detection approach to automatically extract features from low-level sensor data through deep learning. A deep convolutional neural network is designed to learn features and identify damage locations, leading to excellent localization accuracy on both noise-free and noisy data sets, in contrast to another detector using wavelet packet component energy as the input feature. Visualization of the features learned by hidden layers in the network is implemented to gain physical insight into how the network works. It is found that the learned features evolve with depth from rough filters to the concept of vibration modes, implying that the good performance results from the network's ability to learn essential characteristics behind the data.
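
As a loose illustration of the kind of 1D convolutional network described above (not the authors' architecture), the sketch below maps raw acceleration windows to hypothetical damage-location classes; the layer sizes, window length, and number of classes are assumptions.

```python
# Hypothetical 1D CNN for damage localization from raw acceleration windows.
# Layer sizes and data shapes are illustrative assumptions, not the architecture
# from the cited paper.
import torch
import torch.nn as nn

class DamageLocCNN(nn.Module):
    def __init__(self, n_channels=1, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=64, stride=2, padding=31), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=16, padding=8), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(8),           # fixed-length feature map regardless of window
        )
        self.classifier = nn.Linear(32 * 8, n_classes)

    def forward(self, x):                      # x: (batch, channels, window)
        z = self.features(x)
        return self.classifier(z.flatten(1))   # raw logits, one per candidate damage location

model = DamageLocCNN()
dummy = torch.randn(4, 1, 1024)                # 4 windows of simulated sensor data
print(model(dummy).shape)                      # -> torch.Size([4, 10])
```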

398 citations

Journal Article
TL;DR: A novel and comprehensive model for estimating the price of new housing in any given city at the design phase or beginning of the construction is presented through ingenious integration of a deep belief restricted Boltzmann machine and a unique nonmating genetic algorithm.
Abstract: Predicting the price of housing is of paramount importance for near-term economic forecasting of any nation. This paper presents a novel and comprehensive model for estimating the price of new housing in any given city at the design phase or the beginning of construction through ingenious integration of a deep belief restricted Boltzmann machine and a unique nonmating genetic algorithm. The model can be used by construction companies to gauge the sales market before they start a new construction and decide whether or not to build. An effective data structure is presented that takes into account a large number of economic variables/indices. The model incorporates time-dependent and seasonal variations of the variables. Clever stratagems have been developed to overcome the curse of dimensionality and make the solution of the problem tractable on standard workstations. A case study is presented to demonstrate the effectiveness and accuracy of the model.
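
As a very rough sketch of one building block named above, restricted Boltzmann machine feature learning feeding a downstream price regressor, the snippet below uses scikit-learn's BernoulliRBM on min-max-scaled synthetic features; it is not the paper's deep belief network or its genetic-algorithm component, and all variable names and sizes are assumptions.

```python
# Rough sketch: RBM-based feature learning feeding a price regressor. This is
# NOT the cited deep belief network / genetic-algorithm model, only an
# illustration of the RBM building block on assumed synthetic data.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))        # 500 city/time samples, 20 economic indices (synthetic)
y = 3.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)   # synthetic price signal

model = make_pipeline(
    MinMaxScaler(),                   # RBM expects inputs scaled to [0, 1]
    BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0),
    Ridge(alpha=1.0),                 # simple regressor on the learned features
)
model.fit(X, y)
print("in-sample R^2:", round(model.score(X, y), 3))
```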

189 citations

Journal Article
TL;DR: In this article, a new methodology is presented for detecting, locating, and quantifying the damage severity in a smart high-rise building structure; it consists of three steps, the first of which uses the synchrosqueezed wavelet transform to eliminate noise in the signals.
Abstract: A new methodology is presented for (a) detecting, (b) locating, and (c) quantifying the damage severity in a smart high-rise building structure. The methodology consists of three steps: In step 1, the synchrosqueezed wavelet transform is used to eliminate the noise in the signals. In step 2, a nonlinear dynamics measure based on chaos theory, the fractality dimension (FD), is employed to extract features to be used for damage detection. In step 3, a new structural damage index, based on the estimated FD values, is proposed as a measure of the condition of the structure. Further, the damage location is obtained using the changes in the estimated FD values. Three different FD algorithms for computing the fractality of time series signals are investigated: Katz's FD, Higuchi's FD, and the box dimension. The usefulness and effectiveness of the proposed methodology are validated using the sensed data obtained experimentally from a 1:20 scaled model of a 38-storey concrete building structure.
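
For reference, one of the named measures, Katz's FD, can be computed for a 1-D time series in a few lines using the standard Katz formula FD = log10(n) / (log10(n) + log10(d/L)); the sketch below is a generic implementation, not code from the cited paper.

```python
# Katz's fractal dimension: FD = log10(n) / (log10(n) + log10(d/L)), where L is
# the total curve length, d the maximum distance from the first sample, and n
# the number of steps. Generic illustration only; not from the cited paper.
import numpy as np

def katz_fd(x: np.ndarray) -> float:
    x = np.asarray(x, dtype=float)
    n = len(x) - 1                                    # number of steps
    L = np.hypot(1.0, np.diff(x)).sum()               # total curve length (unit sample spacing)
    d = np.max(np.hypot(np.arange(1, len(x)), x[1:] - x[0]))   # max distance from first point
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.5 * rng.normal(size=t.size)
print("FD(clean) =", round(katz_fd(clean), 3), " FD(noisy) =", round(katz_fd(noisy), 3))
```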

136 citations

Journal Article
TL;DR: A novel Bayesian real-time system identification algorithm using response measurements is proposed for dynamical systems; it is applicable to simultaneous model class selection and parametric identification in a real-time manner.
Abstract: In this article, a novel Bayesian real-time system identification algorithm using response measurements is proposed for dynamical systems. In contrast to most existing structural identification methods, which focus solely on parametric identification, the proposed algorithm also emphasizes model class selection. By embedding the novel model class selection component into the extended Kalman filter, the proposed algorithm is applicable to simultaneous model class selection and parametric identification in a real-time manner. Furthermore, parametric identification using the proposed algorithm is based on multiple model classes. Examples are presented with application to damage detection for degrading structures using noisy dynamic response measurements.
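
A generic extended Kalman filter predict/update step, the recursive building block into which the cited algorithm embeds its model class selection, looks roughly as below; the toy dynamics, Jacobians, noise covariances, and dimensions are all assumptions, and the model-class-selection layer itself is not shown.

```python
# Generic extended Kalman filter step. The cited algorithm augments this
# recursion with online model class selection (not shown). All models,
# Jacobians, and noise levels below are illustrative assumptions.
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One EKF predict/update: state x, covariance P, measurement z."""
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1-DOF oscillator with a slightly nonlinear stiffness term (assumed model)
dt = 0.01
f = lambda x: np.array([x[0] + dt * x[1], x[1] - dt * (10.0 * x[0] + 0.1 * x[0] ** 3)])
F_jac = lambda x: np.array([[1.0, dt], [-dt * (10.0 + 0.3 * x[0] ** 2), 1.0]])
h = lambda x: np.array([x[0]])               # only displacement is measured
H_jac = lambda x: np.array([[1.0, 0.0]])

x, P = np.array([0.1, 0.0]), np.eye(2)
Q, R = 1e-6 * np.eye(2), 1e-4 * np.eye(1)
x, P = ekf_step(x, P, z=np.array([0.12]), f=f, F_jac=F_jac, h=h, H_jac=H_jac, Q=Q, R=R)
print("updated state:", x)
```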

111 citations

Journal Article
TL;DR: In this article, the authors analyzed the sensitivity and robustness of two Artificial Intelligence (AI) techniques, namely Gaussian Process Regression (GPR) with five different kernels (Matern32, Matern52, Exponential, Squared Exponential and Rational Quadratic) and an Artificial Neural Network (ANN) using a Monte Carlo simulation for prediction of high-performance concrete (HPC) compressive strength.
Abstract: This study aims to analyze the sensitivity and robustness of two Artificial Intelligence (AI) techniques, namely Gaussian Process Regression (GPR) with five different kernels (Matern32, Matern52, Exponential, Squared Exponential, and Rational Quadratic) and an Artificial Neural Network (ANN) using a Monte Carlo simulation for prediction of High-Performance Concrete (HPC) compressive strength. For this purpose, 1030 samples were collected, including eight input parameters (contents of cement, blast furnace slag, fly ash, water, superplasticizer, coarse aggregates, fine aggregates, and concrete age) and an output parameter (the compressive strength) to generate the training and testing datasets. The proposed AI models were validated using several standard criteria, namely coefficient of determination (R²), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). To analyze the sensitivity and robustness of the models, Monte Carlo simulations were performed with 500 runs. The results showed that the GPR using the Matern32 kernel function outperforms the others. In addition, the sensitivity analysis showed that the content of cement and the testing age of the HPC were the most sensitive and important factors for the prediction of HPC compressive strength. In short, this study might help in selecting suitable AI models and appropriate input parameters for accurate and quick estimation of the HPC compressive strength.
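
A minimal version of the best-performing configuration reported above, Gaussian process regression with a Matern 3/2 kernel, can be set up with scikit-learn as sketched below; the synthetic data stands in for the 1030-sample HPC dataset and is an assumption.

```python
# GPR with a Matern 3/2 (Matern32) kernel, the configuration the study reports
# as best. The data here is a synthetic stand-in, not the 1030-sample HPC set.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 8))                       # 8 mix/age inputs (synthetic)
y = 40 * X[:, 0] + 15 * np.log1p(X[:, 7]) + rng.normal(scale=2.0, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

kernel = 1.0 * Matern(length_scale=np.ones(8), nu=1.5) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0)
gpr.fit(X_tr, y_tr)

y_hat = gpr.predict(X_te)
print("R2  :", round(r2_score(y_te, y_hat), 3))
print("RMSE:", round(mean_squared_error(y_te, y_hat) ** 0.5, 3))
print("MAE :", round(mean_absolute_error(y_te, y_hat), 3))
```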

105 citations

References
Journal Article
TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem.
Abstract: This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ) δ(t − τ), obeying |T| ≤ C_M · (log N)^(−1) · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^(−M)) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.
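
The ℓ1 recovery principle in this reference can be tried directly with a convex solver; the sketch below uses CVXPY and, for simplicity, real-valued random measurements rather than the partial Fourier samples analyzed in the paper, with assumed sizes N = 128, K = 40, and 6 spikes.

```python
# Exact recovery of a sparse spike train by l1 minimization, the principle this
# reference establishes. For simplicity the measurements are real random
# projections rather than the partial Fourier samples analyzed in the paper.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, K, spikes = 128, 40, 6
f = np.zeros(N)
f[rng.choice(N, spikes, replace=False)] = rng.normal(size=spikes)

A = rng.normal(size=(K, N)) / np.sqrt(K)     # K incomplete linear measurements
y = A @ f

x = cp.Variable(N)
problem = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y])
problem.solve()

print("max reconstruction error:", np.max(np.abs(x.value - f)))
```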

14,587 citations

Journal Article
E. T. Jaynes
TL;DR: In this article, the authors consider statistical mechanics as a form of statistical inference rather than as a physical theory, and show that the usual computational rules, starting with the determination of the partition function, are an immediate consequence of the maximum-entropy principle.
Abstract: Information theory provides a constructive criterion for setting up probability distributions on the basis of partial knowledge, and leads to a type of statistical inference which is called the maximum-entropy estimate. It is the least biased estimate possible on the given information; i.e., it is maximally noncommittal with regard to missing information. If one considers statistical mechanics as a form of statistical inference rather than as a physical theory, it is found that the usual computational rules, starting with the determination of the partition function, are an immediate consequence of the maximum-entropy principle. In the resulting "subjective statistical mechanics," the usual rules are thus justified independently of any physical argument, and in particular independently of experimental verification; whether or not the results agree with experiment, they still represent the best estimates that could have been made on the basis of the information available. It is concluded that statistical mechanics need not be regarded as a physical theory dependent for its validity on the truth of additional assumptions not contained in the laws of mechanics (such as ergodicity, metric transitivity, equal a priori probabilities, etc.). Furthermore, it is possible to maintain a sharp distinction between its physical and statistical aspects. The former consists only of the correct enumeration of the states of a system and their properties; the latter is a straightforward example of statistical inference.
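
As a small numerical illustration of the maximum-entropy principle described above, one can recover the Boltzmann form p_i ∝ exp(−λ E_i) for discrete energy levels subject to a prescribed mean energy; the energy levels and target mean below are arbitrary assumptions.

```python
# Maximum-entropy distribution over discrete energy levels subject to a fixed
# mean energy: the solution has the Boltzmann form p_i ~ exp(-lam * E_i).
# The energy levels and target mean are arbitrary illustrative choices.
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # energy levels (assumed)
E_target = 1.2                              # prescribed mean energy (assumed)

def mean_energy(lam):
    p = np.exp(-lam * E)
    p /= p.sum()                            # normalize by the partition function
    return p @ E

# Solve <E>(lam) = E_target for the Lagrange multiplier lam
lam = brentq(lambda l: mean_energy(l) - E_target, -10.0, 10.0)
p = np.exp(-lam * E)
p /= p.sum()
print("lambda =", round(lam, 4), " p =", np.round(p, 4), " <E> =", round(p @ E, 4))
```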

12,099 citations

Journal Article
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
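
The basis-pursuit idea, a smallest-l1 decomposition over an overcomplete dictionary, can be sketched with scikit-learn's SparseCoder; the small dictionary of cosine atoms plus spikes below is chosen only for illustration, and the lasso_lars solver gives the l1-penalized variant (basis pursuit denoising) rather than exact BP.

```python
# Basis-pursuit-style decomposition: find a small-l1 coefficient vector over an
# overcomplete dictionary (cosine atoms + spikes). SparseCoder with lasso_lars
# solves the l1-penalized variant; the dictionary choice is illustrative only.
import numpy as np
from sklearn.decomposition import SparseCoder

n = 128
t = np.arange(n)
cosines = np.array([np.cos(np.pi * k * (t + 0.5) / n) for k in range(n)])   # DCT-like atoms
spikes = np.eye(n)                                                          # impulse atoms
D = np.vstack([cosines, spikes])
D = D / np.linalg.norm(D, axis=1, keepdims=True)        # unit-norm rows (atoms)

# Signal built from two cosine atoms and one spike, i.e. sparse in the dictionary
signal = 2.0 * D[5] - 1.5 * D[20] + 1.0 * D[n + 64]

coder = SparseCoder(dictionary=D, transform_algorithm='lasso_lars', transform_alpha=1e-4)
coefs = coder.transform(signal.reshape(1, -1))[0]
print("nonzero coefficients:", np.flatnonzero(np.abs(coefs) > 1e-3))
```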

9,950 citations

Journal Article
TL;DR: This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition.
Abstract: Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.

9,686 citations

Journal Article
TL;DR: It is demonstrated theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
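
Orthogonal matching pursuit, as described above, is available directly in scikit-learn; the toy recovery below uses assumed sizes of m = 8 nonzeros in dimension d = 256 with K = 80 random measurements.

```python
# Greedy recovery with orthogonal matching pursuit (OMP): a signal with m
# nonzero entries in dimension d is recovered from K random linear measurements.
# Sizes are assumed for illustration.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
d, m, K = 256, 8, 80
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.normal(size=m)

A = rng.normal(size=(K, d)) / np.sqrt(K)
y = A @ x

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=m, fit_intercept=False)
omp.fit(A, y)
print("support recovered:", set(np.flatnonzero(omp.coef_)) == set(np.flatnonzero(x)))
```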

8,604 citations