
Showing papers in "Computer-aided Civil and Infrastructure Engineering in 2013"


Journal ArticleDOI
TL;DR: Experimental results show that the system enables continuous or regular interval monitoring for in‐service highway bridges.
Abstract: An integrated structural health monitoring (SHM) system for highway bridges is presented in this article. The system described is based on a customized wireless sensor network platform with a flexible design that provides a variety of sensors that are typical to SHM. These sensors include accelerometers, strain gauges, and temperature sensors with ultra-low power consumption. An S-Mote node, an acceleration sensor board, and a strain sensor board are developed to satisfy the requirements of bridge structural monitoring. The article discusses how communication software components are integrated within the TinyOS operating system to provide a flexible software platform, whereas the data processing software performs analysis of acceleration, dynamic displacement, and dynamic strain data. The prototype system comprises a nearly linear multi-hop topology and is deployed on an in-service highway bridge. Data acquired from the system are used to examine network performance and to help evaluate the state of the bridge. Experimental results presented in the article show that the system enables continuous or regular interval monitoring for in-service highway bridges.

161 citations


Journal ArticleDOI
TL;DR: Optimizing highway alignment is a complex problem that requires a versatile set of cost functions and an efficient search method to achieve the best design.
Abstract: The optimization of highway alignment requires a versatile set of cost functions and an efficient search method for achieving the best design. Because of numerous highway design considerations, this issue is classified as a constrained problem. The article describes how highway alignment optimization is a complex problem because of the infinite number of possible solutions for the problem and the continuous search space. In this study, a customized particle swarm optimization algorithm was used to search for a near-optimal highway alignment, which is composed of several tangents connected by circular (for horizontal design) and parabolic (for vertical design) curves. The selected highway alignment should meet the constraints of highway design while minimizing total cost as the objective function. The model uses geographical information system (GIS) maps as an efficient and fast way to calculate right-of-way costs, earthwork costs, and any other spatial information and constraints that should be implemented in the design process. The efficiency of the algorithm was verified through a case study using an artificial map as the study region. Finally, the authors applied the algorithm to a real-world example and the results were compared with the alignment found by traditional methods.
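For illustration only, a minimal particle swarm sketch in Python of the kind of search loop this abstract describes. The cost function is an invented placeholder standing in for the paper's GIS-derived right-of-way and earthwork costs, and the bounds and coefficients are assumptions, not the authors' values.

import numpy as np

def alignment_cost(x):
    # Placeholder for the paper's GIS-based total cost; a simple
    # quadratic stands in for right-of-way plus earthwork costs.
    return np.sum((x - 3.0) ** 2)

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = np.random.uniform(-10, 10, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(cost, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(2, n_particles, dim)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.apply_along_axis(cost, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# e.g., six decision variables such as intersection-point coordinates
best_x, best_f = pso(alignment_cost, dim=6)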

118 citations


Journal ArticleDOI
TL;DR: The results presented in this paper indicate that the proposed method is more effective at reducing the displacement response of the structure in real time than conventional LQR controllers.
Abstract: With the development of construction techniques, it is possible to build large-span bridges, pipelines, dams, and other essential structures in seismically active regions or in active faults. A new method to find the optimal control forces for an active tuned mass damper is presented in this paper. Three algorithms are combined: discrete wavelet transform (DWT), particle swarm optimization (PSO), and linear quadratic regulator (LQR). DWT is used to obtain the local energy distribution of the ground motion over the frequency bands. PSO is used to determine the gain matrices through the online update of the weighting matrices used in the LQR controller while eliminating trial and error. The method is tested on a 10-story structure subject to several historical pulse-like near-fault ground motions. The results presented in this paper indicate that the proposed method is more effective at reducing the displacement response of the structure in real time than conventional LQR controllers.
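A minimal sketch of the classical LQR step at the core of this approach, for a toy single-degree-of-freedom structure. The mass, damping, and stiffness values are invented; in the paper, the weighting matrices Q and R would be updated online by PSO from the wavelet-band energy of the excitation rather than fixed as below.

import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    # Classical LQR: solve the algebraic Riccati equation and
    # return the state-feedback gain K (control law u = -K x).
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy mass-spring-damper in state-space form (all values assumed).
m, c, k = 1.0, 0.1, 10.0
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])
Q = np.diag([100.0, 1.0])   # weighting matrices: tuned per wavelet
R = np.array([[0.01]])      # band by PSO in the paper's method
print(lqr_gain(A, B, Q, R))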

116 citations


Journal ArticleDOI
TL;DR: This fully automated approach rivals processing times of other techniques with the distinct advantage of extracting more boundary points, especially in less dense data sets, which may enable its more rapid exploitation of aerial laser scanning data and ultimately preclude needing a priori knowledge.
Abstract: Traditional documentation capabilities of laser scanning technology can be further exploited for urban modeling through the transformation of resulting point clouds into solid models compatible for computational analysis. This article introduces such a technique through the combination of an angle criterion and voxelization. As part of that, a k-nearest neighbor (kNN) searching algorithm is implemented using a predefined number of kNN points combined with a maximum radius of the neighborhood, something not previously implemented. From this sample, points are categorized as boundary or interior points based on an angle criterion. Facade features are determined based on underlying vertical and horizontal grid voxels of the feature boundaries by a grid clustering technique. The complete building model involving all full voxels is generated by employing the Flying Voxel method to relabel voxels that are inside openings or outside the facade as empty voxels. Experimental results on three different buildings, using four distinct sampling densities, showed successful detection of all openings, reconstruction of all building facades, and automatic filling of all improper holes. The maximum nodal displacement divergence was 1.6% compared to manually generated meshes from measured drawings. This fully automated approach rivals processing times of other techniques with the distinct advantage of extracting more boundary points, especially in less dense data sets (<175 points/m²), which may enable more rapid exploitation of aerial laser scanning data and ultimately preclude the need for a priori knowledge.
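A 2-D sketch, under stated simplifications, of the angle criterion with a radius-capped kNN search that the abstract describes (the paper works on 3-D facade points projected per facade; thresholds and parameter values here are assumptions).

import numpy as np
from scipy.spatial import cKDTree

def boundary_points(pts, k=8, max_radius=0.5, gap_deg=90.0):
    # A point is flagged as a boundary point if the largest angular
    # gap between directions to its neighbors exceeds a threshold.
    tree = cKDTree(pts)
    flags = np.zeros(len(pts), dtype=bool)
    for i, p in enumerate(pts):
        d, idx = tree.query(p, k=k + 1)          # +1: the point itself
        idx = idx[(d > 0) & (d <= max_radius)]   # radius-capped kNN
        if len(idx) < 3:
            flags[i] = True                      # too sparse: boundary
            continue
        ang = np.sort(np.arctan2(*(pts[idx] - p).T[::-1]))
        gaps = np.diff(np.append(ang, ang[0] + 2 * np.pi))
        flags[i] = np.degrees(gaps.max()) > gap_deg
    return flags

# e.g., pts = an (N, 2) array of facade points; interior points see
# neighbors all around, while points at openings show a wide gap.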

111 citations


Journal ArticleDOI
TL;DR: The response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input–output system of large-scale structures for model updating in this article and it is demonstrated that the proposed approach is valid for model updating of large and complicated structures such as long-span cable-stayed bridges.
Abstract: The response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input–output system of large-scale structures for model updating in this article. As a methodology study, the complicated implicit relationships between the design parameters and response characteristics of cable-stayed bridges are employed in the construction of an RS. The key issues for application of the proposed method are discussed, such as selecting the optimal shape parameters of RBFs, generating samples by using design of experiments, and evaluating the RS model. The RS methods based on RBFs of Gaussian, inverse quadratic, multiquadric, and inverse multiquadric are investigated. Meanwhile, the commonly used RS method based on polynomial function is also performed for comparison. The approximation accuracy of the RS methods is evaluated by multiple correlation coefficients and root mean squared errors. The antinoise ability of the proposed RS methods is also discussed. Results demonstrate that RS methods based on RBFs have high approximation accuracy and exhibit better performance than the RS method based on polynomial function. The proposed method is illustrated by model updating on a cable-stayed bridge model. Simulation study shows that the updated results have high accuracy, and the model updating based on experimental data can achieve reasonable physical explanations. It is demonstrated that the proposed approach is valid for model updating of large and complicated structures such as long-span cable-stayed bridges.
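A minimal sketch of an RBF response surface with the multiquadric basis, one of the four basis functions the paper compares. The sample data are synthetic; in the paper, X would be design-parameter samples from a design of experiments and y a response such as a modal frequency from the FE model, with the shape parameter chosen optimally rather than fixed.

import numpy as np

def rbf_surrogate(X, y, shape=1.0):
    def phi(r):
        return np.sqrt(r ** 2 + shape ** 2)     # multiquadric basis
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(phi(r), y)              # interpolation weights
    def predict(x_new):
        return phi(np.linalg.norm(x_new - X, axis=-1)) @ w
    return predict

X = np.random.rand(30, 4)        # synthetic DOE samples
y = np.sin(X.sum(axis=1))        # synthetic structural response
rs = rbf_surrogate(X, y)
print(rs(X[0]), y[0])            # interpolant reproduces training points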

105 citations


Journal ArticleDOI
TL;DR: An ANN ensemble technique is employed to analyze the effects of various input settings on the ANN prediction performances and a variant of the state-space neural network (SSNN), namely the time-delayed state-space neural network (TDSSNN), is proposed and compared against other popular ANN models.
Abstract: This article discusses how the artificial neural network (ANN) is an advanced approach to freeway travel time prediction. Various studies using different inputs have not reached consensus on the effects of input selection. In addition, very little discussion has been devoted to the temporal–spatial aspect of the ANN travel time prediction process. In this study, the authors employ an ANN ensemble technique to analyze the effects of various input settings on ANN prediction performance. Volume, occupancy, and speed are used as inputs to predict travel times. The predictions are then compared against the travel times collected from the toll collection system in Houston, Texas. The results show that speed or occupancy measured at the segment of interest may be used as the sole input to produce acceptable predictions, but all three variables together tend to yield the best prediction results. The inclusion of inputs from both upstream and downstream segments is statistically better than using only the inputs from the current segment. It also appears that the magnitude of the prevailing segment travel time can be used as a guideline to set up temporal input delays for better prediction accuracy. The evaluation of spatiotemporal input interactions reveals that past information on downstream and current segments is useful in improving prediction accuracy, whereas past inputs from the upstream location do not provide as much constructive information. Finally, a variant of the state-space neural network (SSNN), namely the time-delayed state-space neural network (TDSSNN), is proposed and compared against other popular ANN models. The comparison shows that the TDSSNN outperforms the other networks and remains very comparable with the SSNN. Future research is needed to analyze the TDSSNN's ability in corridor prediction settings.
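A minimal sketch, not the authors' ensemble or TDSSNN, of training a feedforward ANN on the kind of inputs the abstract names. The arrangement of nine inputs (volume, occupancy, speed at upstream, current, and downstream segments) mirrors the description; all data below are synthetic.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 9))              # volume/occupancy/speed x 3 segments
y = 5.0 * 60 / (20 + 80 * X[:, 5])    # synthetic "travel time" target

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print(model.score(X[400:], y[400:]))  # R^2 on held-out samples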

98 citations


Journal ArticleDOI
TL;DR: This article provides a comprehensive framework for conceptualizing, categorizing, and quantifying system performance measures in the presence of uncertain events, component failure, or other disruptions/disasters with the potential to reduce system capacity/performance.
Abstract: This article provides a comprehensive framework for conceptualizing, categorizing, and quantifying system performance measures in the presence of uncertain events, component failure, or other disruptions/disasters with the potential to reduce system capacity/performance. The framework clarifies the interrelationships between notions of coping capacity, preparedness, robustness, flexibility, recovery capacity, and resilience, previously espoused as independent measures, and provides a single mathematical decision problem for quantifying these measures congruously and maximizing their values. Required solution methodologies are presented for use in evaluating system performance in terms of these measures and resulting solutions can be exploited to determine an optimal allocation of limited resources to preparedness and response options. A numerical transportation-related example is provided to illustrate its application. Results of this application offer insights into these various performance measures, their relationships, and the relative importance of preparedness and response actions.

96 citations


Journal ArticleDOI
TL;DR: Additional steps and modifications to existing algorithms are presented to advance the performance of data processing on laser scan range data sets for future application in structural engineering applications such as robust determination of damage location and finite element modeling.
Abstract: This research investigates the use of high-resolution three-dimensional terrestrial laser scanners as tools to capture geometric range data of complex scenes for structural engineering applications. Laser scanning technology is continuously improving, with commonly available scanners now able to capture over 1,000,000 points per second with an accuracy of ∼0.1 mm. This research focuses on developing the foundation for the use of laser scanning in structural engineering applications, including structural health monitoring, collapse assessment, and post-hazard response assessment. One of the keys to this work is to establish a process for extracting important information from raw laser-scanned data sets such as the location, orientation, and size of objects in a scene, and the location of damaged regions on a structure. A methodology for processing range data to identify objects in the scene is presented. Previous work in this area has created an initial foundation of basic data processing steps. Existing algorithms, including sharp feature detection and segmentation, are implemented and extended in this work. Additional steps to remove extraneous and outlying points are added. Object detection based on a predefined library is developed, allowing generic description of objects. The algorithms are demonstrated on synthetic scenes as well as validated on range data collected from an experimental test specimen and a collapsed bridge. The accuracy of the object detection is presented, demonstrating the applicability of the methodology. These additional steps and modifications to existing algorithms are presented to advance the performance of data processing on laser scan range data sets for future application in structural engineering applications such as robust determination of damage location and finite element modeling.

91 citations


Journal ArticleDOI
TL;DR: This article presents a semi-automatic, enhanced texture segmentation approach to detect and classify surface damage on infrastructure elements and successfully applies it to a range of images of surface damage.
Abstract: To make visual data a part of quantitative assessment for infrastructure maintenance management, it is important to develop computer-aided methods that demonstrate efficient performance in the presence of variability in damage forms, lighting conditions, viewing angles, and image resolutions, taking into account the luminous and chromatic complexities of visual data. This article presents a semi-automatic, enhanced texture segmentation approach to detect and classify surface damage on infrastructure elements and successfully applies it to a range of images of surface damage. The approach involves statistical analysis of spatially neighboring pixels in various color spaces by defining a feature vector that includes measures related to pixel intensity values over a specified color range and statistics derived from the Grey Level Co-occurrence Matrix calculated on a quantized grey-level scale. Parameter-optimized non-linear Support Vector Machines are used to classify the feature vector. A Custom-Weighted Iterative model and a 4-Dimensional Input Space model are introduced. Receiver Operating Characteristics are employed to assess and enhance the detection efficiency under various damage conditions.
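A sketch, under stated simplifications, of the GLCM-feature-plus-SVM pipeline the abstract outlines. The paper's full feature vector also includes color-space intensity statistics, and its SVM hyperparameters are optimized; the patches, feature subset, and parameters below are invented for illustration.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(patch, levels=32):
    # GLCM statistics on a patch quantized to a coarse grey-level scale.
    q = (patch.astype(float) / 256 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Synthetic stand-ins: "damaged" patches are noisy, "intact" are smooth.
rng = np.random.default_rng(0)
damaged = [rng.integers(0, 256, (32, 32)) for _ in range(20)]
intact = [np.full((32, 32), 128) + rng.integers(-5, 5, (32, 32))
          for _ in range(20)]
X = np.array([texture_features(p) for p in damaged + intact])
y = np.array([1] * 20 + [0] * 20)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)  # params assumed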

88 citations


Journal ArticleDOI
TL;DR: This research focused on systems that would minimize the effects of earthquake based on realistic structural responses considering plastic hinge occurrence in structural elements and three-directional displacement in all structural nodes, thus enhancing building safety during earthquake excitations.
Abstract: Numerous recent studies have assessed the stability and safety of structures furnished with different types of structural control systems, such as viscous dampers. A challenging issue in this field is the optimization of structural control systems to protect structures against severe earthquake excitation. As the safety of a structure depends on many factors, including the failure of structural members and movement of each structural node in any direction, the optimization technique must consider many parameters simultaneously. However, the available literature on optimizing earthquake energy dissipation systems shows that most researchers have considered optimization processes using just one or a few parameters applicable only to simple SDOF or MDOF systems. This article reports on the development of a multiobjective optimization procedure for structural passive control systems based on a genetic algorithm; this research focused on systems that would minimize the effects of earthquakes based on realistic structural responses considering plastic hinge occurrence in structural elements and three-directional displacement in all structural nodes. The model was applied to an example three-dimensional reinforced concrete framed building and its structural seismic responses were investigated. The results showed that the optimized control system effectively reduced the seismic response of structures, thus enhancing building safety during earthquake excitations.

82 citations


Journal ArticleDOI
TL;DR: A dynamic Bayesian network (DBN) model for probabilistic assessment of tunnel construction performance is introduced and facilitates the quantification of uncertainties in the construction process and of the risk from extraordinary events that cause severe delays and damages.
Abstract: This article introduces a dynamic Bayesian network (DBN) model for probabilistic assessment of tunnel construction performance. The model facilitates the quantification of uncertainties in the construction process and of the risk from extraordinary events that cause severe delays and damages. Stochastic dependencies resulting from the influence of human factors and other external factors are addressed in the model. An efficient algorithm for evaluating the DBN model is presented, which is a modification of the so-called Frontier algorithm. The proposed model and algorithm are applied to an illustrative case study, the excavation of a road tunnel by means of the New Austrian Tunneling Method.

Journal ArticleDOI
TL;DR: This article describes how user costs of different maintenance actions need to be assessed in road maintenance as well as the maintenance costs and develops a multiobjective Markov-based model to minimize both maintenance cost and user cost subject to a number of constraints.
Abstract: This article describes how user costs of different maintenance actions need to be assessed in road maintenance as well as the maintenance costs. Vehicle operating costs (VOC) and travel delay cost are two major components of the user costs that are associated with road maintenance actions. The general calculation models of these two user cost components are simplified in this article. The article also develops a multiobjective Markov-based model to minimize both maintenance cost and user cost subject to a number of constraints including the average annual budget limit and the performance requirement. The road deterioration process is modeled as a discrete-time Markov process, and the states of road performance are defined in terms of the road roughness. The state transition probabilities are estimated considering the effects of deterioration and maintenance actions. An example is provided that illustrates the use of the proposed road maintenance optimization model. The results show that the optimal road maintenance plan obtained from the model is practical to implement and is cost-effective compared with the periodical road maintenance plan. The results presented in the article also indicate that the maintenance cost and the user cost are competitive: when maintenance works are carried out more frequently, the life-cycle maintenance cost increases while the life-cycle user cost decreases, because the VOC contributes the largest share of the user cost and changes in a contrary trend to the maintenance cost over time.
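A minimal sketch of the discrete-time Markov deterioration process the abstract describes, with invented transition probabilities over five roughness states (0 best, 4 worst). In the paper's model, a maintenance action would replace rows of the matrix, pushing probability mass back toward better states, and the action sequence is optimized against cost.

import numpy as np

P_do_nothing = np.array([    # assumed transition probabilities
    [0.80, 0.20, 0.00, 0.00, 0.00],
    [0.00, 0.75, 0.25, 0.00, 0.00],
    [0.00, 0.00, 0.70, 0.30, 0.00],
    [0.00, 0.00, 0.00, 0.65, 0.35],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0, 0, 0, 0])   # new road, best roughness state
for year in range(10):
    state = state @ P_do_nothing
print(state)   # distribution over roughness states after 10 years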

Journal ArticleDOI
TL;DR: The control concept is significantly effective in reducing maximum responses in translational and rotational directions and obtaining a steady-state response under near-fault effects.
Abstract: In this study, torsionally irregular single-story and multistory structures under the effect of near-fault ground motion excitation were controlled by active tendons. Near-fault ground motions contain two impulsive characteristics: the directivity effect perpendicular to the fault and the fling step parallel to the fault. The structural models were simulated under bidirectional earthquake records superimposed with impulsive motions to examine the response of active control under near-fault effects. Also, the structures were analyzed only under the effect of bidirectional impulsive pulses. The control signals were obtained by Proportional–Integral–Derivative (PID) type controllers, and the parameters of the controllers were obtained by using a numerical algorithm based on time domain analyses. The time delay effect was also considered for the active control system. Different cases of orientation of active tendons were examined, and the results for the single-story structure were compared with another control strategy using frequency domain responses in the optimization process. In conclusion, the control concept is significantly effective in reducing maximum responses in translational and rotational directions and obtaining a steady-state response.
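A minimal discrete PID controller sketch of the type used for the active tendons; the gains and time step below are invented placeholders, whereas the paper tunes them with a numerical time-domain algorithm and also accounts for time delay.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def control(self, error):
        # Standard discrete PID: proportional + integral + derivative.
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)   # illustrative gains
u = pid.control(error=0.02)   # e.g., drift error -> tendon force command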

Journal ArticleDOI
TL;DR: The results presented in the article indicate that alignment, collision type, and downstream geometry may be considered as redundant when modeling incident duration, and how joint consideration of severe congestion and secondary incident occurrence may improve the generalization power of the prediction models.
Abstract: This article describes an approach for predicting incident durations that are susceptible to severe congestion and the occurrence of secondary incidents. A fuzzy entropy feature selection methodology is applied first in order to determine redundant factors and rank factor importance with respect to their contribution on the predictability of incident duration. Neural network models for incident duration prediction with single and competing uncertainties are then developed. The results presented in the article indicate that alignment, collision type, and downstream geometry may be considered as redundant when modeling incident duration. The article discusses how rainfall intensity is a highly contributing feature, while lane volume, number of blocked lanes, as well as number of vehicles involved in the incident are among the top ranking factors for determining the extent of duration. The last section of the article shows how joint consideration of severe congestion and secondary incident occurrence may improve the generalization power of the prediction models.

Journal ArticleDOI
TL;DR: This article presents an integrated mathematical model for biofuel supply chain design where the near-optimum number and location of biorefinery facilities, thenear-optimal routing of biomass and biofuel shipments, and possible highway/railroad capacity expansion are determined.
Abstract: This article discusses that as the biofuel industry continues to expand, the construction of new biorefinery facilities induces a huge amount of biomass feedstock shipment from supply points to the refineries and biofuel shipment to the consumption locations. This increases traffic demand in the transportation network and contributes to additional congestion, especially in the neighborhood of the refineries. It is beneficial to form public-private partnerships to simultaneously consider transportation network expansion and biofuel supply chain design to mitigate congestion. This article presents an integrated mathematical model for biofuel supply chain design where the near-optimum number and location of biorefinery facilities, the near-optimal routing of biomass and biofuel shipments, and possible highway/railroad capacity expansion are determined. The objective of this article is to minimize the total cost for biorefinery construction, transportation infrastructure expansion, and transportation delay under congestion. A genetic algorithm framework (with embedded Lagrangian relaxation and traffic assignment algorithms) is developed to solve the optimization model, and an empirical case study for the state of Illinois is conducted with realistic biofuel production data. The computational results show that the proposed solution approach is able to solve the problem efficiently. Various managerial insights are also drawn. Although this article focuses on the booming biofuel industry, the model and solution techniques are suitable for a number of application contexts that simultaneously involve network traffic equilibrium, infrastructure expansion, and facility location choices (which determine the origin/destination of multi-commodity flow).

Journal ArticleDOI
TL;DR: A methodology to validate solids according to the international standards is presented, which is hierarchical and permits us to validate the primitives of all dimensionalities and to understand and study the topological relationships between the different parts of a solid.
Abstract: The international standards for geographic information provide unambiguous definitions of geometric primitives, with the aim of fostering exchange and interoperability in the geographical information system (GIS) community. In two dimensions, the standards are well-accepted and there are algorithms (and implementations of these) to validate primitives, i.e., given a polygon, they ensure that it respects the standardised definition (and if it does not, a reason is given to the user). However, while there exists an equivalent definition in three dimensions (for solids), it is ignored by most researchers and by software vendors. Several different definitions are indeed used, and none is compliant with the standards: e.g., solids are often defined as 2-manifold objects only, while in fact they can be non-manifold objects. Exchanging and converting datasets from one format/platform to another is thus highly problematic. I present in this paper a methodology to validate solids according to the international standards. It is hierarchical and permits us to validate the primitives of all dimensionalities. To understand and study the topological relationships between the different parts of a solid (the shells), the concept of Nef polyhedron is used. The methodology has been implemented in a prototype, and I report on the main engineering decisions that were made and on its use for the validation of real-world three-dimensional datasets.

Journal ArticleDOI
TL;DR: The presented algorithm can perform an accurate structural identification via model updating, with a viscous damping matrix that captures the variation of the modal damping ratios with natural frequencies as opposed to other conventional proportional damping matrix formulations.
Abstract: A Frequency Response Functions (FRFs)-based two-step algorithm to identify stiffness, mass, and viscous damping matrices is developed in this work. The proposed technique uses the difference between the experimentally recorded FRF and their analytical counterparts by minimizing the resultant error function at selected frequency points. In the first step, only mass and stiffness matrices are updated while keeping the uncalibrated viscous damping matrix constant. In the second step, the damping matrix is updated via changes on the selected unknown modal damping ratios. By using a stacking procedure of the presented error function that combines multiple data sets, adverse effects of noise on the estimated modal damping ratios are decreased by averaging the FRF amplitudes at resonant peaks. The application of this methodology is presented utilizing experimentally obtained data. The presented algorithm can perform an accurate structural identification via model updating, with a viscous damping matrix that captures the variation of the modal damping ratios with natural frequencies as opposed to other conventional proportional damping matrix formulations.

Journal ArticleDOI
TL;DR: An advanced multistage identification methodology is proposed for the successful simulation of this novel material based on the results of the extensive experimental campaign and is successful in yielding a calibrated model that can more accurately capture the experimentally observed behavior of this three-dimensional full-scale test case.
Abstract: This article describes a structural system identification approach for the characterization of a novel retrofitting textile, the “Composite Seismic Wallpaper.” This polymeric textile was developed within the EU co-funded project Polytect as a full coverage method for increasing the seismic resistance of masonry structures. Recently, the wallpaper has been full-scale tested on a two-storey building at the Eucentre (Pavia) as part of the Seismic Engineering Research Infrastructures for European Synergies (SERIES) program. In this article, an advanced multistage identification methodology is proposed for the successful simulation of this novel material based on the results of the extensive experimental campaign. The identification is essentially formulated as an inverse problem that combines a Genetic Algorithm (GA) as the optimizer and a finite element (FE) model as the physical model of the structure. The aim is material characterization and modeling of the dynamic response of the structure; an issue which is nontrivial due to the intrinsic complexities associated with both masonry and polymers. The process outlined herein is successful in yielding a calibrated model that can more accurately capture the experimentally observed behavior of this three-dimensional full-scale test case.

Journal ArticleDOI
TL;DR: A dynamic approach to specify flow pattern variations is proposed mainly concentrating on the incorporation of neural network theory to provide real‐time mapping for traffic density simultaneously in conjunction with a macroscopic traffic flow model.
Abstract: This article provides an analysis of the dynamics of traffic flow, ranging from intersection flows to network-wide flow propagation, which requires accurate information on time-varying local traffic flows. To effectively determine the flow performance measures, and consequently the congestion indicators, of segmented road pieces, the ability to process such data in real time is essential. In this article, a dynamic approach to specify flow pattern variations is proposed, mainly concentrating on the incorporation of neural network theory to provide real-time mapping for traffic density in conjunction with a macroscopic traffic flow model. To deal with the noise and the wide scatter of raw flow measures, filtering is applied prior to the modeling processes. Filtered data are dynamically and simultaneously input to processes of neural density mapping and traffic flow modeling. The classification of flow patterns over the fundamental diagram, which is dynamically plotted with the outputs of the flow modeling subprocess, is obtained by considering the density measure as a pattern indicator. The densities are mapped by a selected neural approximation method for each simulation time step, considering explicitly the flow conservation principle. Simultaneously, mapped densities are matched over the fundamental diagram to specify the current corresponding flow pattern. The approach is promising in capturing sudden changes in flow patterns and can be utilized within a series of intelligent management strategies, including nonrecurrent congestion effect detection and control.

Journal ArticleDOI
TL;DR: A survival analysis model is developed to predict the overall structural state of a sewer network based on camera inspection data from a sample of pipes in the system to overcome the censored nature of data available for the calibration of sewer deterioration models.
Abstract: The structural state of sewer systems is often quantified using condition classes. The classes are based on the severity of structural defects observed on individual pipes within the system. This paper develops a survival analysis model to predict the overall structural state of a sewer network based on camera inspection data from a sample of pipes in the system. The convolution product was used to define the survival functions for cumulative staying times in each condition class. An original calibration procedure for the sewer deterioration model was developed to overcome the censored nature of the data available for calibration. The exponential and Weibull functions were used to represent the distribution of waiting times in each deterioration state. Cross-validation tests showed that the Weibull function led to greater uncertainty than the exponential function for the simulated proportion of pipes in a deteriorated state. Cross-validation with various calibration sample sizes also showed that the model's results are robust to smaller samples. The model's potential for predicting the overall state of deterioration of a sewer network when only a small proportion of the pipes have been inspected is confirmed.
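A minimal sketch of the two waiting-time distributions the paper compares, expressed as survival functions S(t), the probability that a pipe stays in its condition class beyond time t. The scale and shape parameters below are hypothetical; the paper calibrates them from censored CCTV inspection data.

import numpy as np

def weibull_survival(t, scale, shape):
    # S(t) = exp(-(t/scale)^shape); shape = 1 recovers the exponential.
    return np.exp(-(t / scale) ** shape)

t = np.arange(0, 101)                                # years in class
S_exp = weibull_survival(t, scale=35.0, shape=1.0)   # exponential case
S_wei = weibull_survival(t, scale=35.0, shape=2.2)   # Weibull case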

Journal ArticleDOI
TL;DR: An algebraic method that adapts the standard observability problem to deal with structural system identification is proposed and the results obtained show, for the very first time, how observability techniques can be efficiently used for the identification of structural systems.
Abstract: This article deals with the problem of applying observability techniques to structural system identification, understanding as such the problem of identifying which subset of characteristics of the structure, such as Young's modulus, area, inertia, and/or products of them (flexural or axial stiffnesses), can be uniquely defined when an adequate subset of deflections, forces, and/or moments in the nodes is provided. Compared with other standard observability problems, two issues arise here. First, nonlinear unknown variables (products or quotients of elemental variables) appear, and second, the mechanical and geometrical properties of the structure are "coupled" with the deflections and/or rotations at the nodes. To solve these problems, an algebraic method that adapts the standard observability problem to deal with structural system identification is proposed in this article. The results obtained show, for the very first time, how observability techniques can be efficiently used for the identification of structural systems. Some examples are given to illustrate the proposed methodology and to demonstrate its power.

Journal ArticleDOI
TL;DR: The results show that using count, speed, and occupancy together as input produces the best TSKFNN predictions, which outperforms other commonly used models and is a promising tool for reliable travel time prediction on a freeway corridor.
Abstract: This article presents a Takagi–Sugeno–Kang Fuzzy Neural Network (TSKFNN) approach to predict freeway corridor travel time with an online computing algorithm. TSKFNN, a combination of a Takagi–Sugeno–Kang (TSK) type fuzzy logic system and a neural network, produces strong prediction performance because of its high accuracy and quick convergence. Real world data collected from US-290 in Houston, Texas are used to train and validate the network. The prediction performance of the TSKFNN is investigated with different combinations of traffic count, occupancy, and speed as input options. The comparison between online TSKFNN, offline TSKFNN, the back propagation neural network (BPNN) and the time series model (ARIMA) is made to evaluate the performance of TSKFNN. The results show that using count, speed, and occupancy together as input produces the best TSKFNN predictions. The online TSKFNN outperforms other commonly used models and is a promising tool for reliable travel time prediction on a freeway corridor.
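An honest simplification: a zero-order TSK fuzzy inference sketch with Gaussian rule firing strengths and a weighted average of rule consequents. The paper's TSKFNN learns the rule centers, widths, and consequents with a neural network and operates online; the three rules and all values below are synthetic.

import numpy as np

def tsk_predict(x, centers, sigmas, coeffs):
    # Firing strength of each rule, then normalized weighted average.
    w = np.exp(-np.sum(((x - centers) / sigmas) ** 2, axis=1))
    return np.dot(w, coeffs) / w.sum()

# 3 rules over normalized inputs (count, occupancy, speed)
centers = np.array([[0.2, 0.1, 0.9], [0.5, 0.4, 0.5], [0.8, 0.7, 0.2]])
sigmas = np.full((3, 3), 0.3)
coeffs = np.array([5.0, 9.0, 16.0])   # rule travel times (minutes)
print(tsk_predict(np.array([0.6, 0.5, 0.4]), centers, sigmas, coeffs))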

Journal ArticleDOI
TL;DR: The objective of this research is to develop a genetic algorithm-based resource leveling model for LOB schedules that does not impact productivity negatively and provides a smoother resource utilization histogram.
Abstract: Resource leveling involves minimizing resource fluctuations without changing the completion time of a project. A smooth distribution of resources minimizes logistical problems and results in cost savings. Line-of-balance (LOB) is a resource-based scheduling system that is used in projects that exhibit repetitive characteristics; it performs resource allocation as a matter of course but does not deal with resource leveling. In the past, researchers experienced declines in productivity whenever they leveled resources in different linear scheduling models by adjusting activities’ production rates. The objective of this research is to develop a genetic algorithm-based resource leveling model for LOB schedules that does not impact productivity negatively. This model is based on the “natural rhythm” principle, according to which a crew of optimum size will be able to complete an activity in the most productive way. The “natural rhythm” principle allows shifting the start time of an activity at different units by adjusting the number of crews without changing the duration of the activity in any one unit and without violating the precedence relationships between activities. An LOB schedule is established for a pipeline project and is used to illustrate the proposed resource leveling model. It was observed that the model provides a smoother resource utilization histogram. Performing resource leveling in LOB scheduling without sacrificing productivity is the major contribution of the proposed model.

Journal ArticleDOI
TL;DR: The Markov Chain Monte Carlo approach with a Delayed Rejection Adaptive Metropolis algorithm is investigated to perform the Bayesian framework for FE updating under uncertainty, which makes the FE model updating robust to uncertainty.
Abstract: Uncertainty involved in the experimental data prohibits the wide application of the finite element (FE) model updating technique in engineering practice. In this article, the Markov Chain Monte Carlo approach with a Delayed Rejection Adaptive Metropolis algorithm is investigated to perform the Bayesian framework for FE updating under uncertainty. A major advantage of this algorithm is that it adopts global and local adaptive strategies, which makes the FE model updating robust to uncertainty. Another merit of the studied method is that it not only quantitatively predicts structural responses, but also calculates their statistical parameters such as the confidence interval. Impact test data of a grid structure are investigated to demonstrate the effectiveness of the presented FE model updating technique, in which the uncertainty parameters include the vertical and longitudinal spring stiffness that simulate the boundary conditions, the end-fixity factor for modeling semi-rigid connections, and the elastic modulus for simulating the uncertainty associated with material property.
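A minimal plain random-walk Metropolis sampler for context, not the paper's DRAM algorithm, which adds delayed rejection and adaptive proposal covariance on top of this loop. The log-posterior shown in the comment is a hypothetical example; in the paper it would compare FE-predicted and measured responses.

import numpy as np

def metropolis(log_post, theta0, n_samples=5000, step=0.1):
    rng = np.random.default_rng(1)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_samples):
        cand = theta + step * rng.standard_normal(theta.shape)
        lp_cand = log_post(cand)
        if np.log(rng.random()) < lp_cand - lp:   # accept/reject
            theta, lp = cand, lp_cand
        chain.append(theta.copy())
    return np.array(chain)

# Hypothetical posterior over updating parameters theta, e.g.
# log_post = lambda th: -0.5 * np.sum(((f_meas - fe_freqs(th)) / sig) ** 2)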

Journal ArticleDOI
TL;DR: It was found that the annular TLD is effective when the amplitude of excitation is small and the response of TLD in terms of nonlinear free surface sloshing and the energy dissipated by the system was discussed.
Abstract: In this study, the performance of annular liquid tanks as a tuned liquid damper (TLD) in mitigating the vibration of wind turbines was investigated using a numerical model. A proposed hybrid wind tower model composed of a concrete shaft and a steel mast with a height of 150 m was simulated using a single-degree-of-freedom system. The structural domain including the tank wall and a rigid mass was modeled using the finite element method, while the fluid domain was simulated by the finite volume method using CFX software. A parametric study was carried out to investigate the behavior of the annular TLD under harmonic loads for different mass and frequency ratios as well as displacement amplitudes. The damping characteristics of the annular TLD model were derived by comparing the numerical results with an equivalent linear model. In addition, the effectiveness of the annular TLD was estimated by comparing the numerically calculated damping ratios with those corresponding to the optimum damping ratio values derived for a particular mass ratio based on the concept of the tuned mass damper. It was found that the annular TLD is effective when the amplitude of excitation is small. Moreover, the response of the TLD in terms of nonlinear free surface sloshing and the energy dissipated by the system was discussed. Finally, the effectiveness of the annular TLD in reducing the structural response of wind turbine towers under random vibrations was evaluated and discussed.

Journal ArticleDOI
TL;DR: This study demonstrates that utilization of existing freeway infrastructure can be optimized through the proposed algorithm, and the system‐wide optimal control performance can be achieved to quickly mitigate freeway congestion, prevent traffic from overflowing to local streets, and maximize overall traffic throughputs.
Abstract: This article describes a coordinated ramp metering algorithm for systematically mitigating freeway congestion. A preemptive hierarchical control scheme with a three-priority-layer structure is employed in this algorithm. Ramp metering is formulated as a multiobjective optimization problem to enhance system performance. The optimization objectives include promptly tackling freeway congestion, sufficiently utilizing on-ramp storage capacities, preventing on-ramp vehicles from overflowing to local streets, balancing on-ramp vehicle equity, and maximizing traffic throughputs for the entire system. Instead of relying heavily on accurate estimates of freeway traffic flow evolvement, this new approach models ramp meter control as a linear program and uses real-time traffic sensor measurements for minimizing the indeterminate impacts from the mainstream flow capacities. VISSIM-based simulation experiments are performed to examine its practicality and effectiveness using geometric and traffic demand data from one real-world freeway segment. The simulation test results show that the proposed ramp metering approach performed well in optimizing overall freeway system operations under various traffic conditions. The system-wide optimal control performance can be achieved to quickly mitigate freeway congestion, prevent traffic from overflowing to local streets, and maximize overall traffic throughputs. The proposed ramp metering approach can dynamically assemble relevant ramp meters to work together and effectively coordinate the individual meter rates to leverage their response strengths for minimizing the time to clear the congestion. This study demonstrates that utilization of existing freeway infrastructure can be optimized through the proposed algorithm.
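A toy sketch of formulating one metering interval as a linear program, in the spirit of the abstract's LP-based control; the three-ramp layout, demands, and spare capacities are entirely synthetic, and the paper's actual program also encodes queue, equity, and priority objectives.

import numpy as np
from scipy.optimize import linprog

# Choose metering rates r_i (veh/h) for 3 ramps to maximize served
# demand without exceeding spare mainline capacity downstream of
# each ramp (cumulative constraint: r_1 + ... + r_i <= spare_i).
demand = np.array([900.0, 700.0, 800.0])     # ramp demands (assumed)
spare = np.array([600.0, 1100.0, 1500.0])    # spare capacities (assumed)

A_ub = np.tril(np.ones((3, 3)))              # cumulative-sum constraints
res = linprog(c=-np.ones(3), A_ub=A_ub, b_ub=spare,
              bounds=list(zip(np.zeros(3), demand)))
print(res.x)        # metering rates; -res.fun is total served ramp flow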

Journal ArticleDOI
TL;DR: The probabilistic approach presented in this article provides a more realistic evaluation of the corrosion process and can be implemented in a variety of decision making algorithms required for the maintenance and repair of deteriorating reinforced concrete components.
Abstract: A stochastic computational framework is presented in this article that investigates the chloride-induced corrosion of reinforced concrete superstructures. Three-dimensional finite-element models are developed to determine the extent of chloride penetration into the superstructure components. One of the unique capabilities of this framework is to simultaneously consider all the major factors that affect the corrosion process. Furthermore, the developed framework integrates various sources of uncertainty into the performance predictions. This will be achieved by modifying the element properties and boundary conditions of the finite-element models at each time step. For a reliable durability assessment of deteriorating structural components, the proposed framework incorporates the temporal and spatial uncertainties of the influential parameters into the finite-element analysis. Considering that most of the parameters involved in the corrosion process follow non-normal distributions, a series of non-Gaussian stochastic fields are generated following a computationally efficient procedure. The results calculated from extensive stochastic simulations are expressed in terms of the likelihood and extent of corrosion initiation. The probabilistic approach presented in this article provides a more realistic evaluation of the corrosion process and can be implemented in a variety of decision-making algorithms required for the maintenance and repair of deteriorating reinforced concrete components.

Journal ArticleDOI
TL;DR: An implicit route enumeration algorithm for macroscopic static stochastic network loading on real-size networks based on a double-step generalization of Dial's STOCH algorithm is proposed and tested.
Abstract: Medium to large urban networks are normally characterized by a high number of origin-destination pairs that are connected by a large number of strongly overlapping routes. This article introduces an adaptation of the Network Generalized Extreme Value (GEV) model for modelling joint choices, named Joint Network GEV (JNG), and its application to the route choice context, named Link-Based JNG (LB-JNG), which assumes the choice of a route as the joint choice of all links belonging to that route. The LB-JNG model aims at reproducing the effects of routes overlapping with a theoretically robust framework (since it belongs to the Network GEV, to date the most flexible closed-form model in reproducing covariances), allowing at the same time for easy application to real networks through a closed-form probability statement, a proper definition of its parameters and the availability of an implicit route enumeration algorithm for network loading. An overview of the theoretical properties of the JNG model is presented in this article. The LB-JNG adaptation to route choice is described, and the capability to reproduce the effects of routes overlapping is investigated using some test networks. The article compares the performances of the proposed model with those of other route choice models available in the literature. Finally, the article proposes and tests an implicit route enumeration algorithm for macroscopic static stochastic network loading on real-size networks based on a double-step generalization of Dial's STOCH algorithm.

Journal ArticleDOI
TL;DR: The applicability of structural health monitoring to generate more reliable fragility curves is demonstrated, useful not only for bridges that are unique, which are usually the first to be instrumented, but for every instrumented bridge as well.
Abstract: This article describes how fragility curves are used to represent the vulnerability of a bridge in seismic regions of highway transportation networks. Because these networks have hundreds or thousands of bridges, it is impossible to study each individual bridge, so bridges with similar properties are grouped together and are represented by the same fragility curve. However, this approach may be inadequate at times for different reasons because bridges with similar geometrical and material properties could have different ages and could deteriorate at different rates. Moreover, certain bridges are unique, such as a cable-stayed bridge or a suspension bridge. Fragility curves are calculated based not only on the geometry and material properties, but also on vibration data recorded by a structural health monitoring system. The fragility curves are used to track changes of the structural parameters of a bridge throughout its service life. Based on vibration data, the fragility curves are updated to reflect a change in structural parameters. Fragility curves based on vibration data, whenever these are available, represent the vulnerability of a bridge with greater accuracy than fragility curves based only on the geometry and material properties. This article demonstrates the applicability of structural health monitoring to generate more reliable fragility curves. This is useful not only for bridges that are unique, which are usually the first to be instrumented, but for every instrumented bridge as well.
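A minimal sketch of the standard lognormal fragility-curve form, the probability of exceeding a damage state given an intensity measure. The median and dispersion values below are invented; in the spirit of the abstract, monitoring data would re-estimate these parameters as the bridge's structural parameters change.

import numpy as np
from scipy.stats import norm

def fragility(im, median, beta):
    # P(damage state exceeded | intensity measure = im),
    # lognormal with median capacity and dispersion beta.
    return norm.cdf(np.log(im / median) / beta)

pga = np.linspace(0.05, 2.0, 40)
p_initial = fragility(pga, median=0.9, beta=0.6)   # design-based (assumed)
p_updated = fragility(pga, median=0.7, beta=0.5)   # after SHM update (assumed)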

Journal ArticleDOI
TL;DR: The first stage in the development of the proposed system was the elicitation of knowledge from written sources and from experts through literature reviews and interviews, respectively; the acquired knowledge was analyzed, classified, and represented as rules, which were subsequently coded as software.
Abstract: This article discusses how highway engineers face complicated problems that are influenced by various conditions during the construction of flexible highway pavements. Identifying these problems and recommending effective solutions demand considerable engineering expertise, which is difficult to obtain at all construction sites. The development of an expert system can effectively help engineers control and analyze such problems. In addition, an expert system can effectively achieve the storage and distribution of expertise among pavement engineers. The first stage in the development of the proposed system was the elicitation of knowledge from written sources and from experts through literature reviews and interviews, respectively. The acquired knowledge was analyzed and classified, then represented in a form containing rules, and the rules were subsequently coded as software. The article describes the development and evaluation of the resulting system, the Expert System for the Control of Construction Problems in Flexible Highway Pavements.
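A minimal forward-chaining sketch of a rule-based diagnosis in the spirit of the expert system described; the rules, fact keys, and advice strings are invented placeholders, not the system's actual knowledge base.

# Each rule pairs a condition over the site facts with advice.
RULES = [
    (lambda f: f["distress"] == "rutting" and f["asphalt_temp_high"],
     "Check mix stiffness and rolling temperature."),
    (lambda f: f["distress"] == "cracking" and f["base_support"] == "poor",
     "Investigate subgrade/base compaction before overlay."),
]

def diagnose(facts):
    # Fire every rule whose condition holds for the given facts.
    return [advice for cond, advice in RULES if cond(facts)]

print(diagnose({"distress": "rutting", "asphalt_temp_high": True,
                "base_support": "good"}))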