
Showing papers in "Computer-aided Civil and Infrastructure Engineering in 2014"


Journal ArticleDOI
TL;DR: A method based on Gabor filters detects longitudinal and transverse cracks, and an AdaBoost algorithm selects and combines the classifiers, improving the results provided by a single classifier.
Abstract: Pavement management systems (PMS) require detailed information on the current state of the roads in order to take appropriate actions to optimize expenditures on maintenance and rehabilitation, and the presence of cracks is a crucial aspect to be considered. A solution based on an instrumented vehicle equipped with an imaging system, two inertial profilers, a Differential Global Positioning System (DGPS), and a webcam is presented. Information about the state of the road is acquired at normal road speed, and the acquired data have been used to train and test the method. A method based on Gabor filters is used to detect longitudinal and transverse cracks. The methodologies used to create the Gabor filter banks and the use of the filtered images as descriptors for subsequent classifiers are discussed in detail, and three different methodologies for setting the threshold of the classifiers are evaluated. Finally, an AdaBoost algorithm is used for selecting and combining the classifiers, which improves the results provided by a single classifier. Suitable results have been obtained in comparison with other reference works.
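As a hedged illustration of the filter-bank idea (not the article's actual implementation — the kernel size, wavelength, and orientations below are invented for this sketch), a real-valued Gabor kernel can be built in a few lines of NumPy. One orientation responds to vertical line features and the other to horizontal ones, which is how longitudinal and transverse cracks would be separated once the image orientation convention is fixed:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Build a real-valued Gabor kernel of shape (size, size).

    theta is the orientation in radians; wavelength sets the spatial
    frequency of the sinusoidal carrier, sigma the Gaussian envelope.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the filter orientation.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

# Two-orientation bank: the theta=0 kernel oscillates along the x (column)
# axis and responds to vertical line features; theta=pi/2 to horizontal ones.
bank = {theta: gabor_kernel(size=21, wavelength=8.0, theta=theta, sigma=4.0)
        for theta in (0.0, np.pi / 2)}

def filter_response(image, kernel):
    """Naive 'same'-size 2D correlation; a real detector would use FFTs."""
    from numpy.lib.stride_tricks import sliding_window_view
    pad = kernel.shape[0] // 2
    padded = np.pad(image, pad, mode='reflect')
    windows = sliding_window_view(padded, kernel.shape)
    return np.einsum('ijkl,kl->ij', windows, kernel)
```

In a pipeline like the article's, the filtered images would then feed one classifier per orientation, with AdaBoost selecting and combining them.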

224 citations


Journal ArticleDOI
TL;DR: A family of surrogate‐based optimization approaches is adopted to approximate the response surface for the transportation simulation input–output mapping and to search for the optimal toll charges in a transportation network, so that the computational effort can be significantly reduced for the expensive‐to‐evaluate optimization problem.
Abstract: Applying the optimized pricing scheme in the real world can be an encouraging policy option to enhance the performance of the transportation system in the study region. A family of surrogate-based optimization approaches is adopted in this article to approximate the response surface for the transportation simulation input-output mapping. These approaches search for the optimal toll charges in a transportation network so that the computational effort can be significantly reduced for the expensive-to-evaluate optimization problem. Meanwhile, this family of approaches addresses the random noise inherent in simulations. Both one-stage and two-stage surrogate models are tested and compared, and a suboptimal exploration strategy and a global exploration strategy are incorporated and validated. Dynamic Urban Systems in Transportation (DynusT), a simulation-based dynamic traffic assignment model, is utilized to evaluate the system performance in response to different link-additive toll schemes implemented on a highway in a real road transportation network. The simulation results show that implementing the optimal toll predicted by the surrogate model can benefit society in multiple ways by minimizing travel time. Travelers gain from the 2.5% reduction (0.45 minutes) of the average travel time, and the total reduction in the time cost during the extended peak hours would be around $65,000 for all of the 570,000 network users. The article discusses how the government benefits from the 20% increase of toll revenue compared to the current situation.
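The core surrogate loop can be sketched as follows. The quadratic response surface and the toy `simulate_network` function are assumptions of this sketch standing in for the expensive DynusT runs; they are not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_network(toll):
    """Stand-in for an expensive simulation run: average travel time (min).

    The noiseless optimum sits at toll = 2.5; the additive noise mimics
    simulation stochasticity (all numbers invented for illustration).
    """
    return 18.0 + 0.8 * (toll - 2.5) ** 2 + rng.normal(0.0, 0.05)

# Stage 1: a small space-filling design over the candidate toll range.
design = np.linspace(0.0, 5.0, 9)
responses = np.array([simulate_network(t) for t in design])

# Stage 2: fit a quadratic response surface — a one-dimensional special
# case of the polynomial surrogates used in simulation optimization.
a, b, c = np.polyfit(design, responses, deg=2)

# Minimize the surrogate analytically: vertex of the fitted parabola.
toll_star = -b / (2.0 * a)
```

The expensive simulator is called only 9 times; all further search happens on the cheap fitted surface, which is the point of the surrogate approach.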

108 citations


Journal ArticleDOI
TL;DR: In this article, a Bayesian compressive sensing (BCS) method is investigated that uses sparse Bayesian learning to reconstruct signals from a compressive sensor, which can achieve perfect lossless compression performance with quite high compression ratios.
Abstract: In structural health monitoring (SHM) systems for civil structures, massive amounts of data are often generated that need data compression techniques to reduce the cost of signal transfer and storage, meanwhile offering a simple sensing system. Compressive sensing (CS) is a novel data acquisition method whereby the compression is done in a sensor simultaneously with the sampling. If the original sensed signal is sufficiently sparse in terms of some orthogonal basis (e.g., a sufficient number of wavelet coefficients are zero or negligibly small), the decompression can be done essentially perfectly up to some critical compression ratio; otherwise there is a trade-off between the reconstruction error and how much compression occurs. In this article, a Bayesian compressive sensing (BCS) method is investigated that uses sparse Bayesian learning to reconstruct signals from a compressive sensor. By explicitly quantifying the uncertainty in the reconstructed signal from compressed data, the BCS technique exhibits an obvious benefit over existing regularized norm-minimization CS methods that provide a single signal estimate. However, current BCS algorithms suffer from a robustness problem: sometimes the reconstruction errors are very large when the number of measurements K is much less than the number of signal degrees of freedom N needed to capture the signal accurately in a directly sampled form. In this article, we present improvements to the BCS reconstruction method to enhance its robustness so that even higher compression ratios N/K can be used, and we examine the trade-off between efficiently compressing data and accurately decompressing it. Synthetic data and actual acceleration data collected from a bridge SHM system are used as examples. Compared with the state-of-the-art BCS reconstruction algorithms, the improved BCS algorithm demonstrates superior performance. With the same acceptable error rate based on a specified threshold of reconstruction error, the proposed BCS algorithm works with relatively large compression ratios, and it can achieve perfect lossless compression performance with quite high compression ratios. Furthermore, the error bars for the signal reconstruction are also quantified effectively.
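As background, a minimal (non-Bayesian) compressive-sensing reconstruction via Orthogonal Matching Pursuit shows the compression/decompression mechanics that BCS extends with uncertainty quantification; all dimensions and amplitudes below are illustrative:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: greedy sparse recovery of x from y = Phi x."""
    n = Phi.shape[1]
    support = []
    residual = y.copy()
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(idx)
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, K, S = 64, 32, 4                 # signal length, measurements, sparsity
x = np.zeros(N)
x[rng.choice(N, size=S, replace=False)] = [1.5, -2.0, 1.0, 0.8]
Phi = rng.normal(0.0, 1.0 / np.sqrt(K), (K, N))   # random sensing matrix
y = Phi @ x                          # compressed measurements, ratio N/K = 2
x_hat = omp(Phi, y, S)
```

BCS replaces this single point estimate with a posterior over x, which is what yields the reconstruction error bars the abstract describes.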

95 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed method takes advantage of the respective merits of 2D images and 3D laser scanning data and therefore improves the pavement crack detection accuracy and reduces recognition error rate compared to 2D image intensity‐based methods.
Abstract: One of the main distresses that occur in the road surface is a result of pavement cracking. This article proposes a new pavement crack detection method that combines two-dimensional (2D) gray-scale images and three-dimensional (3D) laser scanning data based on Dempster-Shafer (D-S) theory. The 2D gray-scale image and 3D laser scanning data are modeled as a mass function in evidence theory in this model and the 2D and 3D detection results for pavement cracks are fused at the decision-making level. The proposed method takes advantage of the respective merits of 2D images and 3D laser scanning data and therefore improves the pavement crack detection accuracy and reduces recognition error rate compared to 2D image intensity-based methods. This article discusses how objective and accurate detection or evaluation for these cracks is an important task in the pavement maintenance and management for state highway departments of transportation.
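The decision-level fusion can be illustrated with Dempster's rule of combination over a two-hypothesis frame; the mass values below are invented for the sketch, not taken from the article:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame of discernment.

    Masses are dicts mapping frozenset hypotheses to belief mass; mass
    assigned to disjoint focal elements is conflict, renormalized away.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    scale = 1.0 - conflict
    return {h: v / scale for h, v in combined.items()}

CRACK, CLEAR = frozenset({'crack'}), frozenset({'clear'})
BOTH = CRACK | CLEAR  # ignorance: mass committed to neither hypothesis

# Illustrative masses: the 2D intensity channel weakly favors "crack",
# the 3D depth channel favors it strongly.
m_2d = {CRACK: 0.55, CLEAR: 0.25, BOTH: 0.20}
m_3d = {CRACK: 0.80, CLEAR: 0.10, BOTH: 0.10}
fused = dempster_combine(m_2d, m_3d)
```

Because the two channels agree, the fused belief in "crack" exceeds either input mass, which is the behavior that lets 2D and 3D evidence reinforce each other at the decision level.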

90 citations


Journal ArticleDOI
TL;DR: Experimental results clearly show that the produced network flow patterns replicate the anticipated combined mode–route choice results, the higher the distance limit or the gasoline price is, the more travelers choose battery electric vehicles (BEVs) when both BEVs and GVs are available to them; and the quadratic approximation algorithm exhibits linear convergence and can reach higher solution precision in shorter time.
Abstract: This article addresses a new network equilibrium problem with mode and route choices for the emerging need of modeling regional transportation networks that accommodate both gasoline and electric vehicles. The two transportation modes (or vehicle types) distinguish from each other in terms of driving distance limit and travel cost composition. In view of the advantages (e.g., low fuel expenses and vehicle emissions) and disadvantages (e.g., limited driving range and long charging time) pertaining to driving electric vehicles, it is anticipated that a large number of households/motorists may prefer to own both gasoline and electric vehicles (although, of course, many households/motorists still only own gasoline vehicles (GVs) and some households may choose to own electric vehicles only) in the transition period from the petroleum era to the electricity era. The purpose of this article is to offer a traffic equilibrium modeling tool for networks that serve households/motorists who can choose between gasoline and electric vehicles. Specifically, we present a convex optimization model for characterizing such mixed equilibrium traffic networks with both gasoline and electric vehicles, which are expected to exist for a long period in the future. Two competing solution algorithms, a linear approximation algorithm of the Jacobi type and a quadratic approximation algorithm taking the form of the Gauss–Seidel decomposition, are implemented and evaluated. Experimental results clearly show that, from the model behavior perspective, the produced network flow patterns replicate the anticipated combined mode–route choice results, that is, the higher the distance limit or the gasoline price is, the more travelers choose battery electric vehicles (BEVs) when both BEVs and GVs are available to them; and, from the solution efficiency perspective, the quadratic approximation algorithm exhibits linear convergence and can reach higher solution precision in shorter time.
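The qualitative finding — more travelers choose BEVs as the distance limit or gasoline price rises — can be illustrated with a binary logit mode split; the utility coefficients and cost ratio are assumptions of this sketch, not the article's convex optimization model:

```python
import math

def bev_share(trip_km, gas_price, bev_range_km, beta=0.15):
    """Binary logit share of BEVs among households owning both vehicle types.

    Utilities are illustrative: GV disutility grows with the gasoline
    price, and a BEV is infeasible beyond its driving range.
    """
    if trip_km > bev_range_km:
        return 0.0  # distance limit binds: BEV cannot make this trip
    u_gv = -beta * gas_price * trip_km   # fuel-cost disutility
    u_bev = -beta * 0.3 * trip_km        # electricity assumed ~30% of fuel cost
    e_gv, e_bev = math.exp(u_gv), math.exp(u_bev)
    return e_bev / (e_gv + e_bev)
```

Raising `gas_price` or `bev_range_km` only increases the BEV share, mirroring the mode-choice behavior the experiments report.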

87 citations


Journal ArticleDOI
TL;DR: This article investigates the first use of two neural network approaches to automate the analysis of data collected from real‐world concrete structures: Echo State Networks (ESNs) and Extreme Learning Machines (ELMs), whose fast and efficient training procedures allow networks to be trained and evaluated in less time than traditional neural network approaches.
Abstract: This article discusses how detecting defects within reinforced concrete is vital to the safety and durability of infrastructure. A non-invasive technique, ElectroMagnetic Anomaly Detection (EMAD), is used in this article to provide insight into the electromagnetic properties of reinforcing steel, for which data analysis is currently performed visually. The first use of two neural network approaches to automate the analysis of this data is investigated in this article: Echo State Networks (ESNs) and Extreme Learning Machines (ELMs), whose fast and efficient training procedures allow networks to be trained and evaluated in less time than traditional neural network approaches. Data collected from real-world concrete structures are analyzed using these two approaches as well as a simple threshold measure and a standard recurrent neural network. Two ESN architectures provided the best performance for a mesh-reinforced concrete structure, while the ELM approach offers a large improvement in performance for a single tendon-reinforced structure.
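The speed advantage of ELMs comes from training only the output layer by least squares. A minimal sketch on a toy regression task (not the EMAD data — the target function and sizes here are invented) looks like this:

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_train(X, y, hidden=50):
    """Extreme Learning Machine: random hidden layer, least-squares readout.

    Only the output weights are trained (one lstsq call), which is what
    makes ELM training fast compared with backpropagation.
    """
    W = rng.normal(0, 1, (X.shape[1], hidden))   # fixed random input weights
    b = rng.normal(0, 1, hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                       # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for a sensor signal: learn y = sin(3x) on [-1, 1].
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
W, b, beta = elm_train(X, y, hidden=50)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

An ESN works the same way at readout time but feeds the inputs through a fixed recurrent reservoir instead of a feedforward random layer.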

86 citations


Journal ArticleDOI
TL;DR: The article demonstrates the enhancement in comprehension of the structural behavior obtained by means of a 2-day testing campaign conducted on a complex building of the Engineering Faculty, Edifice A, which was heavily damaged in the 2009 L'Aquila earthquake.
Abstract: Knowledge of the dynamic behavior of complex buildings subjected to near-fault earthquakes may be enriched by valuable information obtained through rapid onsite dynamic testing to aid in the design of appropriate retrofitting interventions. Through a case study, the article demonstrates the enhancement in comprehension of the structural behavior obtained by means of a 2-day testing campaign conducted on a complex building of the Engineering Faculty, Edifice A, which was heavily damaged in the 2009 L'Aquila earthquake. The onsite testing was carried out with a network of 13 accelerometers opportunely located to identify the dynamic characteristics of the structure by means of ambient noise-induced vibration. Enhanced Frequency Domain Decomposition (EFDD) and Stochastic Subspace Identification (SSI) output-only procedures were both used to identify the main modal parameters of two substructures of the building. The modal characteristics were used to update a finite element model representing small amplitude vibrations of the damaged structures. The direct comparison of the identified modal features with finite element models, in which damaged member locations are determined by onsite visual observations, has permitted identification of a model representative of the structural behavior of the building in the immediate postearthquake conditions.
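EFDD builds on the singular values of the cross-spectral density matrix across sensors; the single-channel special case reduces to peak-picking an averaged power spectral density. This self-contained sketch demonstrates that reduction on a simulated 2 Hz mode (all parameters invented, not the Edifice A data):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0                                   # sampling rate (Hz)
f_n, zeta = 2.0, 0.02                        # modal frequency, damping ratio
w_n = 2 * np.pi * f_n
rho = np.exp(-zeta * w_n / fs)               # discrete pole magnitude
theta = w_n * np.sqrt(1 - zeta**2) / fs      # discrete pole angle
a1, a2 = 2 * rho * np.cos(theta), -rho**2

# Ambient response: AR(2) oscillator driven by white-noise loading.
n = 2**16
load = rng.normal(0, 1, n)
x = np.zeros(n)
for k in range(2, n):
    x[k] = a1 * x[k-1] + a2 * x[k-2] + load[k]

# Welch-style averaged periodogram, then pick the spectral peak.
seg = 4096
windows = x[:n // seg * seg].reshape(-1, seg) * np.hanning(seg)
psd = (np.abs(np.fft.rfft(windows, axis=1)) ** 2).mean(axis=0)
freqs = np.fft.rfftfreq(seg, 1 / fs)
f_id = freqs[np.argmax(psd)]
```

With multiple accelerometers, EFDD replaces the scalar PSD with an SVD of the cross-PSD matrix at each frequency line, which is what also yields mode shapes.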

84 citations


Journal ArticleDOI
TL;DR: This article proposes to model an urban system by means of different hybrid social–physical complex networks, obtained by enriching the urban street network with additional information about the social and physical constituents of a city, and introduces a class of efficiency measures inspired by the definition of global efficiency given in complex network theory.
Abstract: One of the most important tasks of urban and hazard planning is to mitigate the damages and minimize the costs of the recovery process after catastrophic events. In this context, the capability of urban systems and communities to recover from disasters is referred to as resilience. Although the problem of resilience quantification has received a lot of attention, a mathematical definition of the resilience of an urban community that takes into account the social aspects of an urban environment has not yet been identified. In this article, we provide and test a methodology for the assessment of urban resilience to catastrophic events, which aims at bridging the gap between the engineering and the ecosystem approaches to resilience. We propose to model an urban system by means of different hybrid social–physical complex networks, obtained by enriching the urban street network with additional information about the social and physical constituents of a city, namely citizens, residential buildings, and services. Then, we introduce a class of efficiency measures on these hybrid networks, inspired by the definition of global efficiency given in complex network theory, and we show that these measures can be effectively used to quantify the resilience of an urban system by comparing their respective values before and after a catastrophic event and during the reconstruction process. As a case study, we consider simulated earthquakes in the city of Acerra, Italy, and we use these efficiency measures to compare the ability of different reconstruction strategies to restore the original performance of the urban system.
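The global efficiency measure from complex network theory, which the article's hybrid-network measures build on, averages inverse shortest-path lengths over all node pairs. A BFS-based sketch on a toy 4-node network (invented for illustration) shows how losing a link lowers efficiency:

```python
from collections import deque

def global_efficiency(adj):
    """Latora-Marchiori global efficiency of an unweighted graph.

    E = (1 / (N(N-1))) * sum over ordered pairs (i, j) of 1/d(i, j),
    with 1/d = 0 for disconnected pairs; adj maps node -> neighbours.
    """
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for src in nodes:
        # BFS shortest-path lengths from src.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    return total / (n * (n - 1))

# A 4-node path graph before and after "losing" the middle edge,
# as a stand-in for pre- and post-earthquake street networks.
intact = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
damaged = {0: [1], 1: [0], 2: [3], 3: [2]}
```

Comparing the measure before and after damage, and along a reconstruction sequence, is exactly how the article tracks recovery; the hybrid networks additionally weight paths by the citizens and services they connect.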

84 citations


Journal ArticleDOI
TL;DR: An integer program is proposed for the stopping pattern optimization problem and results show that the proposed IGA can solve real‐world problems that are beyond the reach of commonly used optimization packages.
Abstract: The stopping pattern optimization problem of a passenger railroad company determines the stopping strategy of a train. This pattern takes multiple train classes, station types, and customer origin-destination (OD) demand into consideration in order to maximize the profit made by a rail company. This article proposes an integer program for this problem and provides a systematic approach to determining the optimal train stopping pattern for a rail company. The article discusses how commonly used commercial optimization packages cannot solve this complex problem efficiently. The stopping pattern is traditionally decided by rule of thumb, an approach that leaves much room for improvement. Therefore, the authors develop two genetic algorithms: a binary-coded genetic algorithm (BGA) and an integer-coded genetic algorithm (IGA). In many past studies, the chromosome was coded using the binary alphabet, as in the BGA, whose encoding and genetic operators are straightforward and relatively simple to implement. However, the article shows that it is difficult for the BGA to converge to feasible solutions for the stopping pattern optimization problem; for this reason, new encoding mechanisms and genetic operators are proposed for the IGA. The numerical results show that the proposed IGA can solve real-world problems that are beyond the reach of commonly used optimization packages.

84 citations


Journal ArticleDOI
TL;DR: A simulation model to find out optimum evacuation routes, during a tsunami using Ant Colony Optimization (ACO) algorithms, is proposed, which showed that, in case of an emergency, conventional evacuation routes showed longer escape times compared to those produced by the model developed in this research.
Abstract: Natural disasters such as earthquakes and tsunamis have promoted the creation of effective evacuation strategies to prevent the loss of human lives. A simulation model using Ant Colony Optimization (ACO) algorithms is proposed in this article to find optimum evacuation routes during a tsunami. ACO algorithms are discrete optimization algorithms inspired by the ability of ants to establish the shortest path from their nest to a food source, and vice versa. Two drills were used to validate the model. These drills were conducted in the coastal town of Penco, Chile, a town that was affected by an 8.8 Mw earthquake and tsunami in February 2010. The first drill was held with minimal information, leaving the population to act randomly and intuitively, and the second was carried out with information provided by the model, inducing people to use the optimized routes generated by the ACO algorithm. The results showed that, in case of an emergency, conventional evacuation routes lead to longer escape times compared to those produced by the model developed in this research.
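A toy version of the ACO mechanics — pheromone-weighted random walks, evaporation, and reinforcement of the iteration-best route — can be sketched as follows; the graph and parameter values are invented, not the Penco street network:

```python
import random

def aco_shortest_path(graph, source, target, n_ants=20, n_iter=40,
                      rho=0.5, q=1.0, seed=3):
    """Toy Ant Colony Optimization for a shortest path on a weighted digraph.

    graph[u] is a dict {v: edge_length}. Ants choose edges with probability
    proportional to pheromone tau; pheromone evaporates at rate rho and the
    best path of each iteration is reinforced with q / path_length.
    """
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_len = None, float('inf')
    for _ in range(n_iter):
        paths = []
        for _ in range(n_ants):
            node, path, length, visited = source, [source], 0.0, {source}
            while node != target:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:
                    path = None          # dead end: discard this ant
                    break
                weights = [tau[(node, v)] for v in choices]
                node = rng.choices(choices, weights)[0]
                length += graph[path[-1]][node]
                path.append(node)
                visited.add(node)
            if path is not None:
                paths.append((length, path))
        if not paths:
            continue
        length, path = min(paths)        # iteration-best route
        for edge in tau:                 # evaporation on every edge
            tau[edge] *= (1 - rho)
        for u, v in zip(path, path[1:]): # deposit along the best route
            tau[(u, v)] += q / length
        if length < best_len:
            best_len, best_path = length, path
    return best_path, best_len

# Tiny evacuation graph: node 'D' plays the safe zone.
evac_net = {'A': {'B': 2, 'C': 5}, 'B': {'C': 1, 'D': 4},
            'C': {'D': 1}, 'D': {}}
route, dist = aco_shortest_path(evac_net, 'A', 'D')
```

The real model works on the town's street network with evacuation-time edge weights, but the reinforcement loop is the same idea.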

79 citations


Journal ArticleDOI
TL;DR: The results show that there were tradeoffs between emissions, noise, and travel time costs, and that the enhanced CRO outperformed the Genetic Algorithm on more than half of the testing scenarios and achieved comparable performance on certain other scenarios.
Abstract: Nowadays, the decision makers in the transportation industry are being urged to incorporate environmental costs into road network design decision making because road traffic affects the environment and health. The design of a road network should not only be cost-effective but also environmentally sustainable. This article proposes a new network design problem (NDP) that takes both vehicle emissions and noise into account. This proposed environmentally sustainable NDP is formulated as a discrete bilevel program. The lower-level problem is formulated as user-equilibrium assignment. The upper-level problem determines the optimal road capacity expansion to minimize the total costs of emissions, noise, and travel time with the considerations of budgetary and capacity improvement constraints. The proposed problem is solved by an enhanced version of a new meta-heuristic named Chemical Reaction Optimization (CRO), and its parameters are tuned by our proposed tuning procedure. Two benchmark road networks with different demand levels are used to evaluate the performance of the enhanced CRO and illustrate the properties of the problem. The results show that there were tradeoffs between emissions, noise, and travel time costs, and that the enhanced CRO outperformed the Genetic Algorithm (GA) on more than half of the testing scenarios and achieved comparable performance on certain other scenarios.

Journal ArticleDOI
TL;DR: Current state‐of‐the‐art visualization technologies are mainly fully virtual, while AR has the potential to enhance those visualizations by observing proposed designs directly within the real environment.
Abstract: Augmented Reality (AR) is a rapidly developing field with numerous potential applications. For example, building developers, public authorities, and other construction industry stakeholders need to visually assess potential new developments with regard to aesthetics, health and safety, and other criteria. Current state-of-the-art visualization technologies are mainly fully virtual, while AR has the potential to enhance those visualizations by observing proposed designs directly within the real environment. A novel AR system is presented that is most appropriate for urban applications. It is based on monocular vision, is markerless, and does not rely on beacon-based localization technologies (like GPS) or inertial sensors. Additionally, the system automatically calculates occlusions of the built environment on the augmenting virtual objects. Three datasets from real environments presenting different levels of complexity (geometrical complexity, textures, occlusions) are used to demonstrate the performance of the proposed system. Videos augmented with our system are shown to provide realistic and valuable visualizations of proposed changes of the urban environment.

Journal ArticleDOI
TL;DR: A mathematical model and effective solution algorithm for the railroad job-clustering problem is developed in this paper, where a mixed-integer mathematical programming model uses a vehicle routing problem (VRP) with side constraints and proposes a set of integrated heuristic algorithms to solve the problem.
Abstract: A mathematical model and an effective solution algorithm for the railroad job-clustering problem are developed in this article. The mixed-integer mathematical programming model is formulated as a vehicle routing problem (VRP) with side constraints, and a set of integrated heuristic algorithms is proposed to solve the problem. Various side constraints, such as mutual exclusion constraints and rounding constraints, further increase the difficulty of solving the problem. This clustering is an important part of railroad track maintenance planning because it focuses on clustering track maintenance jobs into projects, so that the projects can be scheduled and assigned to the production teams. The model and algorithms proposed in the article are shown to be effective, and a Class-I railroad has adopted them to help with its practical operations for several years. The article discusses how real-world instances of the job-clustering problem are usually of very large scale, involving thousands of jobs per year.

Journal ArticleDOI
TL;DR: Numerical results show the computational efficiency of the Interval Monte Carlo approach and its superiority to alternative search approaches such as optimization and genetic algorithms, and show that the Interval Monte Carlo approach provides a guaranteed and sharp enclosure of the system solution.
Abstract: In this work structural reliability assessment is presented for structures with uncertain loads and material properties. Uncertain variables are modeled as fuzzy random variables, and Interval Monte Carlo Simulation along with the interval finite element method is used to evaluate failure probability. Interval Monte Carlo is compared with existing search algorithms used in the reliability assessment of fuzzy random structural systems for both efficiency and accuracy. The genetic algorithm, as one of the well-developed approaches, is selected for comparison. Fuzzy randomness is used as a model for handling both aleatory and epistemic uncertainties. Fuzzy quantities are calculated using the α-cut approach. In the case of Interval Monte Carlo, bounds on response quantities are obtained for each α-cut using only one run of the interval finite element method, whereas the genetic approach requires performing Monte Carlo Simulation for each of the considered possible combinations within the search domain (α-cut) and running the finite element model for each of the Monte Carlo realizations. In the presented examples both load and material uncertainties are considered. Numerical results show the computational efficiency of the Interval Monte Carlo approach and its superiority to the alternative search approaches.
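The interval finite element idea rests on interval arithmetic: each α-cut of a fuzzy variable is an interval, and response bounds are propagated through the model in a single run. A minimal sketch for a one-element axial bar (all numbers illustrative) is:

```python
class Interval:
    """Minimal interval arithmetic for bounding a structural response."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product bounds: extremes among the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0, "divisor must exclude zero"
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

# Axial bar elongation u = P * L / (E * A), with the load P and modulus E
# uncertain; an alpha-cut of a fuzzy variable is exactly such an interval.
P = Interval(90e3, 110e3)       # N
E = Interval(190e9, 210e9)      # Pa
L = Interval(2.0, 2.0)          # m (crisp)
A = Interval(1e-3, 1e-3)        # m^2 (crisp)
u = P * L / (E * A)
```

Repeating this for each α-cut yields the fuzzy response, whereas the genetic search approach must rerun a crisp model many times per α-cut.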

Journal ArticleDOI
TL;DR: A methodology for calibrating URT assignment models using AFC data within a genetic algorithm‐based framework with nonparametric statistical techniques; results show that the proposed approach finds more reasonable solutions than traditional approaches for the calibrated parameters.
Abstract: Developments in the application of automatic data collection (ADC) such as automated fare collection (AFC) systems have made the collection of detailed passenger trip data in an urban rail transit (URT) network possible. AFC systems using smart card technology have become the main method for collecting URT fares in many cities around the world. The transaction data obtained through these AFC systems contain a large amount of archived information including how passengers use the URT system. The information obtained from AFC systems can be used in calibrating assignment models for precise passenger flow calculation. This paper presents a methodology for calibrating URT assignment models using AFC data. The study provides an approach that calibrates models disaggregately based on AFC data, which avoids some disadvantages of traditional manual data collection approaches and can be incorporated into an automatic calibration procedure to obtain accurate results easily.

Journal ArticleDOI
TL;DR: Results and a sensitivity analysis are presented to demonstrate the performance of the antithetic method‐based particle swarm optimization method, which was proved to be very effective and efficient compared to the actual data from the project and other metaheuristic algorithms.
Abstract: The results of a sensitivity analysis for the Jinping-I Hydropower Project are presented in this article in order to demonstrate the performance of the optimization method. This model is proven to be very effective and efficient compared to the actual data from the project and other metaheuristic algorithms. This article developed an antithetic method-based particle swarm optimization to solve a queuing network problem with fuzzy data for concrete transportation systems. The concrete transportation system at the Jinping-I Hydropower Project was considered to be the prototype, and it was extended to a generalized queuing network problem. A multiple objective decision-making model was established in this article that takes into account the constraints and fuzzy data. In order to deal with the fuzzy variables in the model, a fuzzy expected value operator, which used an optimistic–pessimistic index, was introduced that reflects the decision maker's attitude. The decision maker allocated a limited number of vehicles and unloading equipment in multiple stages to the different queuing network transportation paths to improve construction efficiency by minimizing both operational costs and construction time. Instead of using a traditional updating method, an antithetic particle-updating mechanism was designed to automatically control the particle-updating in the feasible solution space. The particular nature of this model required the development of an antithetic method-based particle swarm optimization algorithm.
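One common form of a fuzzy expected value operator with an optimistic-pessimistic index, for a triangular fuzzy number, is sketched below; the article's exact operator may differ in detail, so treat the formula as an assumption of this sketch:

```python
def fuzzy_expected_value(a, b, c, lam):
    """Expected value of a triangular fuzzy number (a, b, c), a <= b <= c,
    under an optimistic-pessimistic index lam in [0, 1].

    lam = 0 gives the pessimistic expectation (a + b) / 2, lam = 1 the
    optimistic one (b + c) / 2; intermediate lam interpolates linearly.
    One common form of the operator, not necessarily the article's.
    """
    pessimistic = (a + b) / 2.0
    optimistic = (b + c) / 2.0
    return (1.0 - lam) * pessimistic + lam * optimistic

# Illustrative use: haul time of a concrete truck on one route, in minutes,
# "about 22, at least 18, at most 30", judged by a neutral decision maker.
travel_time = fuzzy_expected_value(18.0, 22.0, 30.0, lam=0.5)
```

The index `lam` is exactly the knob that encodes the decision maker's attitude: a cautious planner uses a small `lam`, an optimistic one a large `lam`, and the crisp value then feeds the queuing-network objective.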

Journal ArticleDOI
TL;DR: The modified Bouc–Wen model, based on nonlinear differential equations, is employed both as the reference model providing comprehensive training data for the neural network and as a benchmark for comparison in reproducing the damper's hysteretic nonlinear behavior.
Abstract: Semi-active control of the dynamic response of civil structures with magneto-rheological (MR) fluid dampers has emerged as a novel revolutionary technology in recent years for designing “smart structures.” A small-scale MR damper model with the valve mode mechanism has been examined in this research using a dynamic recurrent neural network modeling approach to reproduce its hysteretic nonlinear behavior. The modified Bouc–Wen model, based on nonlinear differential equations, has been employed both as the reference model to provide comprehensive training data for the neural network and for comparison purposes. A novel frequency- and amplitude-varying displacement input signal (modulated chirp signal) associated with a random supply voltage has been introduced for persistent excitation of the damper in such a way as to cover almost all of its operating conditions. Finally, a series of validation tests was conducted on the proposed model, which confirmed the model's performance in terms of accuracy and capability for realization.
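A minimal sketch of generating Bouc-Wen reference data — the kind of hysteretic signal a neural network would be trained to reproduce — with illustrative parameters that are not fitted to any real damper:

```python
import numpy as np

def bouc_wen_force(t, x, params):
    """Integrate the hysteretic variable z for a displacement history x(t)
    with explicit Euler on the Bouc-Wen evolution equation

        dz/dt = A*dx/dt - beta*|dx/dt|*|z|^(n-1)*z - gamma*(dx/dt)*|z|^n

    and return z plus a simple force = alpha*z + k*x. Parameter values
    are illustrative only.
    """
    A, beta, gamma, n, alpha, k = params
    dt = t[1] - t[0]
    dx = np.gradient(x, dt)
    z = np.zeros_like(x)
    for i in range(1, len(t)):
        dz = (A * dx[i-1]
              - beta * abs(dx[i-1]) * abs(z[i-1])**(n-1) * z[i-1]
              - gamma * dx[i-1] * abs(z[i-1])**n)
        z[i] = z[i-1] + dz * dt
    force = alpha * z + k * x        # hysteretic + elastic contributions
    return z, force

t = np.linspace(0, 4, 4001)
x = 0.01 * np.sin(2 * np.pi * 1.0 * t)    # 1 Hz, 10 mm displacement cycle
z, force = bouc_wen_force(t, x, params=(1.0, 400.0, 400.0, 2, 5000.0, 2000.0))
```

The modified Bouc-Wen model used for MR dampers adds voltage-dependent parameters and an internal degree of freedom, but the hysteresis is generated by the same evolution equation; a chirp displacement with varying voltage, as in the article, sweeps this model across its operating range to build a training set.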

Journal ArticleDOI
TL;DR: In terms of accuracy in determining the instantaneous modal parameters of a structure from noisy responses, the proposed approach is superior to the typical basis function expansion and regression approach.
Abstract: This work presents an efficient approach using a time-varying autoregressive with exogenous input (TVARX) model and a substructure technique to identify the instantaneous modal parameters of a linear time-varying structure and its substructures. The identified instantaneous natural frequencies can be used to identify earthquake damage to a building, including the specific floors that are damaged. An appropriate TVARX model of the dynamic responses of a structure or substructure is established using a basis function expansion and regression approach combined with the continuous wavelet transform. The effectiveness of the proposed approach is validated using numerically simulated earthquake responses of a five-storey shear building with time-varying stiffness and damping coefficients. In terms of accuracy in determining the instantaneous modal parameters of a structure from noisy responses, the proposed approach is superior to the typical basis function expansion and regression approach. The proposed method is further applied to process the dynamic responses of an eight-storey steel frame in shaking table tests to identify its instantaneous modal parameters and to locate damage.
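The regression at the heart of the approach is easiest to see in the time-invariant special case: fit AR coefficients by least squares and read the modal frequency off the pole angle. TVARX replaces the constant coefficients with basis-function expansions in time; all values below are illustrative:

```python
import numpy as np

fs = 50.0                                   # sampling rate (Hz)
f_true, zeta = 3.0, 0.02                    # natural frequency, damping
w_n = 2 * np.pi * f_true
w_d = w_n * np.sqrt(1 - zeta**2)            # damped angular frequency
rho = np.exp(-zeta * w_n / fs)              # discrete pole magnitude
theta = w_d / fs                            # discrete pole angle
a1, a2 = 2 * rho * np.cos(theta), -rho**2   # exact AR(2) coefficients

# Generate a free-decay record from the AR(2) recursion.
n = 500
x = np.zeros(n)
x[1] = 1.0
for k in range(2, n):
    x[k] = a1 * x[k-1] + a2 * x[k-2]

# Identify the AR(2) coefficients by least squares; TVARX fits the same
# regression but lets the coefficients vary with time.
H = np.column_stack([x[1:-1], x[:-2]])
a1_hat, a2_hat = np.linalg.lstsq(H, x[2:], rcond=None)[0]

# Recover the damped natural frequency from the estimated pole angle.
theta_hat = np.arccos(a1_hat / (2 * np.sqrt(-a2_hat)))
f_damped = theta_hat * fs / (2 * np.pi)
```

A drop in the identified frequency over time is the damage signature the article exploits, and the substructure technique localizes it floor by floor.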

Journal ArticleDOI
TL;DR: The ρ values between component-level measures are relatively high across all models, indicating that simpler ones (M1 and M2) are appropriate for vulnerability assessment and retrofit prioritization and the complex flow-based models (M3 to M5) are suitable if actual performance of the systems is desired.
Abstract: Electric power networks are spatially distributed systems, subject to different magnitude and recurrence of earthquakes, that play a fundamental role in the well-being and safety of communities. Therefore, identification of critical components is of paramount importance in retrofit prioritization. This article presents a comparison of five seismic performance assessment models (M1 to M5) of increasing complexity. The first two models (M1 and M2) approach the problem from a connectivity perspective, whereas the last three (M3 to M5) consider also power flow analysis. To illustrate the utility of the five models, the well-known IEEE-118 test case, assumed to be located in the central United States, is considered. Performances of the five models are compared using both system-level and component-level measures. Spearman rank correlation ρ is computed between results of each model. Highest ρ values, at both system- and component-level, are obtained, as expected, between M1 and M2, and within models M3 to M5. The ρ values between component-level measures are relatively high across all models, indicating that simpler ones (M1 and M2) are appropriate for vulnerability assessment and retrofit prioritization. The complex flow-based models (M3 to M5) are suitable if actual performance of the systems is desired, as it is the case when the power network is considered within a larger set of interconnected infrastructural systems.
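The model comparison relies on the Spearman rank correlation of component importance measures; a small self-contained implementation (scores invented for illustration, not IEEE-118 results):

```python
def spearman_rho(x, y):
    """Spearman rank correlation (no-ties case) via the classic formula
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)) on the rank differences d_i.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical component importance scores from two models (say M1 vs M3):
m1_scores = [0.91, 0.45, 0.78, 0.12, 0.66]
m3_scores = [0.85, 0.50, 0.70, 0.20, 0.60]
rho = spearman_rho(m1_scores, m3_scores)
```

Because ρ depends only on rankings, two models can disagree on absolute performance numbers yet still agree on which components to retrofit first — exactly the article's argument for using the simpler connectivity models in prioritization.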

Journal ArticleDOI
TL;DR: The effectiveness of features from a statistics based local damage detection algorithm called Influenced Coefficient Based Damage Detection Algorithm (IDDA) is expanded for a more complex structural system.
Abstract: Many current damage detection techniques rely on the skill and experience of a trained inspector and also require a priori knowledge about the structure's properties. This study presents the adaptation of several change point analysis techniques and evaluates their performance in civil engineering damage detection. The literature contains various statistical approaches developed to detect changes in observations across applications, including structural damage detection. However, despite their importance in damage detection, control charts and statistical frameworks are not properly utilized in this area. Moreover, most existing change point analysis techniques were originally developed for applications in the stock market or in industrial engineering processes; utilizing them in structural damage detection requires adjustment and verification. Therefore, in this article several change point detection methods are evaluated and adjusted for a damage detection scheme. The effectiveness of features from a statistics-based local damage detection algorithm, the Influenced Coefficient Based Damage Detection Algorithm (IDDA), is expanded for a more complex structural system. The statistics used in this study include the univariate Cumulative Sum, Exponentially Weighted Moving Average (EWMA), and Mean Square Error (MSE), and the multivariate Mahalanobis distance and Fisher Criterion. They are used to build control charts that detect and localize damage by correlating the locations of a sensor network with the damage features. A modified MSE statistic, called the ModMSE statistic, is introduced to remove the sensitivity of the MSE statistic to the variance of a data set. The effectiveness of each statistic is analyzed.
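To make the control-chart idea concrete, here is a minimal EWMA chart in the standard SPC formulation (not the article's exact setup): a baseline mean and standard deviation are estimated from an assumed-healthy record, and samples whose EWMA statistic leaves the ±L·σ limits are flagged. The damage feature, shift size, and chart parameters are all illustrative.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, n_baseline=50):
    """EWMA control chart: flag samples whose EWMA statistic leaves the
    +/- L-sigma control limits (standard SPC form, illustrative only)."""
    mu, sigma = x[:n_baseline].mean(), x[:n_baseline].std()
    z = mu
    flags = np.zeros(len(x), dtype=bool)
    for i, xi in enumerate(x):
        z = lam * xi + (1 - lam) * z
        # exact time-varying standard deviation of the EWMA statistic
        s = sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (i + 1))))
        flags[i] = abs(z - mu) > L * s
    return flags

rng = np.random.default_rng(0)
feature = rng.normal(0.0, 1.0, 200)   # damage feature in the healthy state
feature[120:] += 2.5                  # simulated damage shifts the mean
alarm = ewma_chart(feature)
```

Running one such chart per sensor, and noting which sensors alarm, is the correlation-with-location step the abstract describes for localizing damage.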

Journal ArticleDOI
TL;DR: A time varying wavelet-based pole assignment (WPA) method to control seismic vibrations in multi-degree of freedom (MDOF) structural systems and it is observed that the WPA has advantages in some design problems.
Abstract: This article presents a time-varying wavelet-based pole assignment (WPA) method to control seismic vibrations in multi-degree-of-freedom (MDOF) structural systems. The discrete wavelet transform is used to determine the energy content over the frequency band of the response in real time. The frequency content is used in the Big Bang–Big Crunch algorithm to adaptively update the optimum values of the closed-loop poles of the structural system. To calculate the optimum gain matrix, a robust pole placement algorithm was used. The gain matrix is computed online from the response characteristics in real time rather than being fixed a priori (offline). The WPA is tested on a 10-story structural system subjected to several historical ground motions. It is observed that the WPA has advantages in some design problems. Numerical examples illustrate that the proposed approach reduces the displacement response of the structure in real time more than a conventional linear quadratic regulator (LQR) controller.
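The core operation, computing a state-feedback gain that places the closed-loop poles at chosen locations, can be sketched for a single-input system with Ackermann's formula. This is only a textbook pole-placement sketch on a hypothetical SDOF structure, not the article's robust algorithm or its wavelet-driven pole selection.

```python
import numpy as np

def ackermann_gain(A, B, poles):
    """State-feedback gain K placing the eigenvalues of (A - B K) at the
    desired poles (Ackermann's formula, single-input case)."""
    n = A.shape[0]
    # controllability matrix [B, AB, ..., A^(n-1) B]
    C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    # desired characteristic polynomial evaluated at A
    coeffs = np.poly(poles)  # leading coefficient first
    pA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
    e_last = np.zeros((1, n))
    e_last[0, -1] = 1.0
    # conjugate pole pairs yield a real gain
    return (e_last @ np.linalg.inv(C) @ pA).real

# hypothetical SDOF structure: m x'' + c x' + k x = u, state z = [x, x']
m, c, k = 1.0, 0.1, 4.0
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])
target = np.array([-2.0 + 2.0j, -2.0 - 2.0j])  # much heavier damping
K = ackermann_gain(A, B, target)
closed = np.linalg.eigvals(A - B @ K)
```

In the WPA scheme, the target pole locations themselves would be updated online from the wavelet energy content of the measured response; here they are simply fixed.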

Journal ArticleDOI
TL;DR: Investigations indicate that the proposed fuzzy‐AHP models characterize the fuzziness of tunnel health well and will be useful for clarifying the tunnel health evaluation uncertainties to both designers and administrators.
Abstract: Fuzzy analytic hierarchy process (fuzzy-AHP) synthetic evaluation models were applied to address the uncertainties in tunnel health evaluation. These uncertainties arise from a lack of specific information, missing data, misleading or conflicting information due to the complex nature of geo-materials, and even the ambiguity in the concept of tunnel health itself. The fuzzy-AHP synthetic evaluation model merges different types of data from multiple sensors to map them into health rating scores of shield tunnels. A piecewise distribution was chosen for the membership functions, and an exponential scale was introduced for a better characterization of the scales for weight sets. A series of fuzzy operators were defined to yield the fuzzy synthetic evaluation indexes (FSEIs) for monitoring factors, and the fuzzy-AHP evaluation procedure applied to the models was demonstrated. To verify the feasibility and efficiency of the models and the procedure, a case study on the Nanjing Yangtze River Tunnel was presented. The calculated FSEIs were compared with the rating scales to determine the corresponding action strategies. The fuzzy-AHP health evaluations for monitoring factors, segments, rings, and the whole tunnel were implemented in succession using the models and following the procedure. Segments in poor health can then be identified for administrative maintenance or repair. The investigations indicate that the proposed fuzzy-AHP models characterize the fuzziness of tunnel health and will be useful for clarifying tunnel health evaluation uncertainties to both designers and administrators. These evaluations will enhance the knowledge of designers and aid them in optimizing the design of similar tunnels.

Journal ArticleDOI
TL;DR: Agent Swarm Optimization (ASO) as mentioned in this paper is a novel paradigm that exploits swarm intelligence and borrows some ideas from multiagent-based systems aimed at supporting decision-making processes by solving multiobjective optimization problems.
Abstract: Optimal design of water distribution systems (WDSs), including the sizing of components, quality control, reliability, renewal, and rehabilitation strategies, etc., is a complex problem in water engineering that requires robust methods of optimization. Classical methods of optimization are not well suited for analyzing highly dimensional, multimodal, nonlinear problems, especially given inaccurate, noisy, discrete, and complex data. Agent Swarm Optimization (ASO) is a novel paradigm that exploits swarm intelligence and borrows some ideas from multiagent-based systems. It is aimed at supporting decision-making processes by solving multiobjective optimization problems. ASO offers robustness through a framework where various population-based algorithms coexist. The ASO framework is described and used to solve the optimal design of WDS. The approach allows engineers to work in parallel with the computational algorithms to force the recruitment of new searching elements, thus contributing to the solution process with expert-based proposals. © 2014 Computer-Aided Civil and Infrastructure Engineering.
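For flavor, here is a plain particle swarm optimizer applied to a toy two-pipe sizing problem with a head-loss penalty. This is only one of the population-based algorithms that could live inside an ASO framework; ASO itself coordinates several such algorithms and accepts expert-injected candidate solutions, none of which is modeled here. The cost function, exponents, and bounds are invented.

```python
import numpy as np

def pso(f, bounds, n=30, iters=200, seed=1):
    """Plain particle swarm optimization (illustrative single-algorithm sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# toy "pipe sizing": cost grows with diameter; penalty if head loss is too large
def cost(d):
    head_loss = 10.0 / d[0] ** 4.87 + 6.0 / d[1] ** 4.87  # Hazen-Williams-like
    penalty = 1e3 * max(0.0, head_loss - 5.0) ** 2
    return 120.0 * d[0] ** 1.5 + 95.0 * d[1] ** 1.5 + penalty

best_d, best_cost = pso(cost, (np.array([0.3, 0.3]), np.array([2.0, 2.0])))
```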

Journal ArticleDOI
TL;DR: A generalized, selective household activity routing problem (G‐SHARP) is presented as an extension of the HAPP model to include both destination and schedule choice for the purpose of testing reoptimization, and a new class of evolutionary algorithms designed for re Optimization, dubbed a Genetic Algorithm with Mitochondrial Eve (GAME).
Abstract: Household travel behavior is a complex modeling challenge because of the difficulty in handling the daily routing and scheduling choices that individuals make with respect to activity and time use decisions. Activity-based travel scenario analysis and network design using a household activity pattern problem (HAPP) can face significant computational cost and inefficiency. Reoptimization makes use of an optimal solution of a prior problem instance to find a new solution faster and more accurately. Although the method is generally NP-hard as well, its approximation bound has been shown in the literature to be tighter than that of a full optimization for several traveling salesman problem variations. To date, however, there have not been any computational studies conducted with the method for scenario analysis with generalized vehicle routing problems, nor have any metaheuristics been designed with reoptimization in mind. A generalized, selective household activity routing problem (G-SHARP) is presented as an extension of the HAPP model to include both destination and schedule choice for the purpose of testing reoptimization. The article proposes two reoptimization algorithms: (1) a simple swap heuristic, and (2) a new class of evolutionary algorithms designed for reoptimization, called a Genetic Algorithm with Mitochondrial Eve (GAME). The two algorithms are tested against a standard genetic algorithm in a computational experiment involving 100 zones that include 400 potential activities (resulting in a total of 802 nodes per single-traveler household). Five hundred households are synthesized and computationally tested with two base scenarios: one in which an office land use in one zone is dezoned, and another in which a freeway is added to the physical network. GAME and the capability of G-SHARP demonstrate the effectiveness of reoptimization to capture reallocations.
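The reoptimization idea, warm-starting from the previous scenario's optimum after a small network change, can be demonstrated on a tiny symmetric routing instance with a 2-opt local search (in the spirit of the article's simple swap heuristic, not GAME itself). The instance and the "scenario change" below are invented; the instance is small enough to verify against brute force.

```python
import itertools
import numpy as np

def tour_len(perm, D):
    return sum(D[perm[i], perm[(i + 1) % len(perm)]] for i in range(len(perm)))

def two_opt(perm, D):
    """Reoptimization by local search: start from a prior solution and apply
    improving 2-opt segment reversals until none remain."""
    perm = list(perm)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(perm) - 1):
            for j in range(i + 1, len(perm)):
                cand = perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]
                if tour_len(cand, D) < tour_len(perm, D) - 1e-12:
                    perm, improved = cand, True
    return perm

rng = np.random.default_rng(3)
pts = rng.random((7, 2))                    # 7 activity locations
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
base = min(itertools.permutations(range(7)), key=lambda p: tour_len(p, D))
pts[4] += 0.15                              # the "scenario change": one activity moves
D2 = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
reopt = two_opt(base, D2)                   # warm-started from the old optimum
exact = min(itertools.permutations(range(7)), key=lambda p: tour_len(p, D2))
```

The warm start matters because a small scenario change usually perturbs only part of the optimal routing, so local search from the prior optimum converges in far fewer moves than search from scratch.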

Journal ArticleDOI
TL;DR: The consistency of stochastic traffic models from the points of view of probability and statistics and also from a dimensional analysis perspective are presented and some proposed models in the literature are analyzed.
Abstract: Consideration of the existing relations among the different random variables involved in traffic problems is crucial in developing a consistent probability model. The consistency of stochastic traffic models is examined in this paper from the points of view of probability and statistics and also from a dimensional analysis perspective. The authors analyze and discuss the conditions for a model to be consistent from two different points of view: probabilistic and physical (dimensional analysis). The probabilistic viewpoint leads to the concept of stability in general and reproductivity in particular because, for example, origin-destination (OD) and link flows are the sum of route flows, and route travel times are the sum of link travel times. This implies stability with respect to sums (reproductivity). Normal models are justified because, as the number of summands increases, the averages approach the normal distribution. Similarly, stability with respect to minimum or maximum operations arises in practice. From the dimensional analysis point of view, some models are shown not to be convenient. In particular, it is shown that some families of distributions are valid only for dimensionless variables. These problems are discussed, and some models proposed in the literature are analyzed from these two points of view. When analytical consistency cannot be achieved, a possible alternative is Monte Carlo simulation, which permits satisfying the compatibility conditions easily.
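The reproductivity requirement can be illustrated numerically. If route flows on an OD pair are modeled as gamma variables with a common scale, their sum (the OD flow) is again gamma with the shapes added, so the model family is closed under the summation the network structure imposes. The route-flow parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# three route flows on one OD pair; the OD flow is their sum, so a consistent
# model family must be stable under addition (reproductivity)
shapes = np.array([5.0, 3.0, 2.0])
scale = 20.0
route_flows = rng.gamma(shape=shapes, scale=scale, size=(100_000, 3))
od_flow = route_flows.sum(axis=1)

# gamma with a common scale is reproductive: shapes add, the scale is kept,
# so the OD flow should match a Gamma(10, 20) in mean and variance
mean_theory = shapes.sum() * scale          # 200
var_theory = shapes.sum() * scale**2        # 4000
```

A Monte Carlo check like this is exactly the fallback the abstract suggests when a closed-form consistency argument is out of reach.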

Journal ArticleDOI
TL;DR: It is shown that REMPS easily extends beyond the application presented and may be considered an effective and versatile standalone segmentation technique that is designed to detect a broad range of damage forms on the surface of civil infrastructure.
Abstract: Imaging-based damage detection techniques are increasingly being utilized alongside traditional visual inspection methods to provide owners/operators of infrastructure with an efficient source of quantitative information for ensuring their continued safe and economic operation. However, there exists scope for significant development of improved damage detection algorithms that can characterize features of interest in challenging scenes with credibility. This article presents a new regionally enhanced multiphase segmentation (REMPS) technique that is designed to detect a broad range of damage forms on the surface of civil infrastructure. The technique is successfully applied to a corroding infrastructure component in a harbour facility. REMPS integrates spatial and pixel relationships to identify, classify, and quantify the area of damaged regions to a high degree of accuracy. The image of interest is preprocessed through a contrast enhancement and color reduction scheme. Features in the image are then identified using a Sobel edge detector, followed by subsequent classification using a clustering-based filtering technique. Finally, support vector machines are used to classify pixels which are locally supplemented onto damaged regions to improve their size and shape characteristics. The performance of REMPS in different color spaces is investigated for best detection on the basis of receiver operating characteristics curves. The superiority of REMPS over existing segmentation approaches is demonstrated, in particular when considering high dynamic range imagery. It is shown that REMPS easily extends beyond the application presented and may be considered an effective and versatile standalone segmentation technique.
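One stage of a REMPS-style pipeline, the Sobel edge detector, is easy to sketch with plain numpy. The synthetic "damage patch" image and the crude threshold are illustrative; the preprocessing, clustering, and support-vector-machine stages of REMPS are omitted entirely.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude with the 3x3 Sobel kernels (feature-identification
    stage only; classification stages are not reproduced here)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    p = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(3):
        for j in range(3):
            win = p[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

# synthetic "damage patch": dark square on a bright, uniform background
img = np.full((32, 32), 200.0)
img[10:22, 10:22] = 60.0
mag = sobel_magnitude(img)
mask = mag > 0.5 * mag.max()   # crude threshold picks out the patch boundary
```

In REMPS the edge map only seeds candidate regions; clustering-based filtering and per-pixel SVM classification then grow and refine the damaged-region estimate.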

Journal ArticleDOI
TL;DR: The work proposes an efficient wavelet-based approach to determine the modal parameters of a structure from its ambient vibration responses that integrates the time series autoregressive (AR) model with the stationary wavelet packet transform.
Abstract: Ambient vibration tests are conducted widely to estimate the modal parameters of a structure. The work proposes an efficient wavelet-based approach to determine the modal parameters of a structure from its ambient vibration responses. The proposed approach integrates the time series autoregressive (AR) model with the stationary wavelet packet transform. In addition to providing a richer decomposition and allowing for an improved time–frequency localization of signals over that of the discrete wavelet transform, the stationary wavelet packet transform also has significantly higher computational efficiency than the wavelet packet transform in terms of decomposing time-shifted signals because the former has a time-invariance property. The correlation matrices needed in determining the coefficient matrices in an AR model are established in subspaces expanded by stationary wavelet packets. The formulation for estimating the correlation matrices is shown for the first time. Because different subspaces contain signals with different frequency subbands, the fine filtering property enhances the ability of the proposed approach to identify not only the modes with strong modal interference, but also many modes from the responses of very few measured degrees of freedom. The proposed approach is validated by processing the numerically simulated responses of a seven-floor shear building, which has closely spaced modes, with considering the effects of noise and incomplete measurements. Furthermore, the present approach is employed to process the velocity responses of an eight-storey steel frame subjected to white noise input in a shaking table test and ambient vibration responses of a cable-stayed bridge.
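The AR-model step can be illustrated in its simplest scalar form: fit an ordinary AR(2) model to a damped free-vibration record and recover the natural frequency and damping ratio from the discrete poles. This is only the time-series backbone of the method; the stationary wavelet packet decomposition that makes the full approach work for closely spaced modes is not reproduced. The system parameters are illustrative.

```python
import numpy as np

# hypothetical SDOF free response: fn = 1.8 Hz, damping ratio 2%
dt, fn, zeta = 0.01, 1.8, 0.02
t = np.arange(0, 30, dt)
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)
y = np.exp(-zeta * wn * t) * np.cos(wd * t)

# least-squares AR(2): y[n] = a1 y[n-1] + a2 y[n-2]
Y = np.column_stack([y[1:-1], y[:-2]])
a1, a2 = np.linalg.lstsq(Y, y[2:], rcond=None)[0]

# continuous-time pole from a discrete AR root, then modal parameters
lam = np.log(np.roots([1.0, -a1, -a2])[0]) / dt
fn_hat = abs(lam) / (2 * np.pi)
zeta_hat = -lam.real / abs(lam)
```

In the article, the analogous correlation matrices are assembled subband by subband in the stationary wavelet packet subspaces, which is what lets few sensors resolve many modes.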

Journal ArticleDOI
TL;DR: To perform a realistic reliability analysis of a complex cable-stayed steel footbridge subject to natural hazard and corrosion, this article addresses a rational process of modeling and simulation based on identification, model updating, and validation.
Abstract: To perform a realistic reliability analysis of a complex cable-stayed steel footbridge subject to natural hazard and corrosion, this article addresses a rational process of modeling and simulation based on identification, model updating, and validation. In particular, the object of this study is the Ponte del Mare footbridge located in Pescara, Italy; this bridge was selected as being a complex twin-deck curved footbridge that is prone to corrosion by the aggressive marine environment. With the modeling and simulation objectives in mind, a preliminary finite element (FE) model was realized using the ANSYS software. However, uncertainties in FE modeling and changes during construction suggested the use of experimental system identification. The sensor placement was supported by the preliminary FE model of the footbridge, and, to discriminate close modes of the footbridge and locate identification sensor layouts, Auto Modal Assurance Criterion (AutoMAC) values and stabilization diagram techniques were adopted. Modal characteristics of the footbridge were extracted from signals produced by ambient vibration via the stochastic subspace identification (SSI) algorithm, while similar quantities were identified from free-decay signals produced by impulse excitation using the ERA algorithm. All these procedures were implemented in the Structural Dynamic Identification Toolbox (SDIT) code developed in a MATLAB environment. The discrepancies between analytical and experimental frequencies led to a first update of the FE model based on Powell's dog-leg method, which relies on a trust-region approach. As a result, the identified FE model was capable of reproducing the response of the footbridge subject to realistic gravity and wind load conditions. Finally, the FE model was further updated in the modal domain, by changing both the stationary aerodynamic coefficients and the flutter derivatives of the deck sections to take into account the effects of the curved deck layout.
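The Modal Assurance Criterion used above (in its AutoMAC variant) is a standard, compact formula comparing two mode shape vectors; a minimal sketch with hypothetical mode shapes at four sensor locations:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode shape vectors:
    1 for perfectly correlated shapes, near 0 for unrelated ones."""
    num = abs(phi_a.conj() @ phi_b) ** 2
    den = (phi_a.conj() @ phi_a).real * (phi_b.conj() @ phi_b).real
    return float(num / den)

# hypothetical mode shapes sampled at four sensor locations
phi_fe = np.array([0.25, 0.55, 0.85, 1.00])   # FE-model prediction
phi_id = np.array([0.27, 0.52, 0.88, 0.98])   # identified from vibration data
phi_other = np.array([0.6, 1.0, 0.2, -0.9])   # a different, poorly matched mode
```

AutoMAC evaluates this same quantity between the modes of a single set; off-diagonal values near 1 warn that the chosen sensor layout cannot distinguish two close modes, which is how the article used it to place sensors.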

Journal ArticleDOI
TL;DR: A model framework to integrate human behavior analysis and traffic simulation in evacuation modeling is presented and it is concluded that without considering family gathering in the evacuation scenarios the management strategies may actually impede rather than help the evacuation process.
Abstract: This article presents a model framework to integrate human behavior analysis and traffic simulation in evacuation modeling. During an evacuation, household members tend to evacuate as a unit. However, most engineering-based evacuation models treat evacuees as independent and separate entities, and overlook the interactions among household members during an evacuation (i.e., gathering children/spouses or uniting with other family members at home). The omission of these behaviors leads to imprecise modeling of evacuation situations. Transportation mode choice in a no-notice evacuation has also seldom been investigated. The authors present a framework that incorporates both household-gathering behavior and emergency mode choice into an evacuation model to examine the effects of these two issues on evacuation efficiency and network performance. The framework was tested in the Chicago metropolitan region for two hypothetical incidents with evacuation radii of 5 and 25 miles. Evacuation models that omit gathering behavior give dangerously optimistic evacuation times and network congestion levels compared to models that include family interactions, and the optimism is significant for a large-scale evacuation: the number of evacuees who can reach safe zones within a given time threshold differs by nearly 50% between the gathering and no-gathering models. Gathering behavior can also have distinct effects on network performance in inner versus outer areas, with the break point possibly located where severe bottlenecks occur. In this study, average travel speed increases on the overall network within 15 miles of the incident location (where downtown Chicago is located), but decreases outside the 15-mile radius. The paper concludes that, without considering family gathering in the evacuation scenarios, management strategies may actually impede rather than help the evacuation process.

Journal ArticleDOI
TL;DR: This article puts forward derivation of an improved macroscopic model for multianticipative driving behavior using a modified gas‐kinetic approach and the basic (microscopic) generalized force model, which has been claimed to fit well with real traffic data, is chosen for the derivation.
Abstract: Multianticipative driving behavior, where a vehicle reacts to many vehicles in front, has been extensively studied and modeled using a car-following (i.e., microscopic) approach. A lot of effort has been undertaken to model such multianticipative driving behavior using a macroscopic approach, which is useful for real-time prediction and control applications due to its fast computational demand. However, these macroscopic models have increasingly failed with an increased number of anticipations. To this end, this article puts forward the derivation of an improved macroscopic model for multianticipative driving behavior using a modified gas-kinetic approach. First, the basic (microscopic) generalized force model, which has been claimed to fit well with real traffic data, is chosen for the derivation. Second, the derivation method relaxes the condition that deceleration happens instantaneously. Theoretical analysis and numerical simulations of the model are carried out to show the improved performance of the derived model over the existing (multianticipative) macroscopic models.
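The microscopic starting point can be illustrated with a two-vehicle relaxation-type car-following sketch in the spirit of optimal-velocity/generalized-force models: a follower relaxes toward a gap-dependent desired speed while the leader briefly brakes. The desired-speed function, parameters, and scenario are invented for illustration and are not those of the article; the model here reacts to a single leader, whereas the article's whole point is the multianticipative (many-leader) extension and its macroscopic, gas-kinetic counterpart.

```python
import numpy as np

def simulate(T=60.0, dt=0.05):
    """Two-vehicle car-following sketch (single anticipation, illustrative)."""
    v_max, tau, d0, length = 25.0, 2.0, 2.0, 5.0

    def v_opt(gap):
        # desired speed rises smoothly with the available gap
        return v_max * np.tanh(max(gap - d0, 0.0) / 30.0)

    x = np.array([40.0, 0.0])     # leader, follower positions (m)
    v = np.array([20.0, 20.0])    # both start near equilibrium at 20 m/s
    for k in range(int(T / dt)):
        t = k * dt
        a_lead = -1.0 if 5.0 < t < 10.0 else 0.0   # leader brakes 20 -> 15 m/s
        gap = x[0] - x[1] - length
        a_follow = (v_opt(gap) - v[1]) / tau        # relax toward desired speed
        v = np.maximum(v + np.array([a_lead, a_follow]) * dt, 0.0)
        x = x + v * dt
    return x, v

x, v = simulate()
gap = x[0] - x[1] - 5.0   # final bumper-to-bumper gap
```

After the disturbance the follower settles at the leader's new speed with a correspondingly smaller equilibrium gap; the gas-kinetic derivation in the article aggregates such interaction terms, over several leaders and with non-instantaneous deceleration, into a macroscopic flow model.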