
Showing papers in "Computer-aided Civil and Infrastructure Engineering in 2007"


Journal ArticleDOI
TL;DR: The proposed TLS method allows measurement of the entire bridge's deformed shape, and thus offers a realistic solution for monitoring structures at both the structure and member level; it can also be used to automatically create a 3-D finite element model of a structural member or the entire structure at any point in time.
Abstract: This paper presents a new approach for the health monitoring of structures using terrestrial laser scanning (TLS). 3-D coordinates of a target structure acquired using TLS can have maximum errors of about 10 mm, which is insufficient for structural health monitoring. A displacement measurement model that improves the measurement accuracy is therefore offered. The model is tested experimentally on a simply supported steel beam, with measurements made using three different techniques: 1) linear variable displacement transducers (LVDTs), 2) electric strain gages, and 3) a long gage fiber optic sensor. The maximum deflections estimated by the TLS model are less than 1 mm and within 1.6% of those measured directly by LVDT. Although GPS methods allow measurement of displacements only at the GPS receiver antenna location, the proposed TLS method allows measurement of the entire bridge's deformed shape, and thus offers a realistic solution for monitoring structures at both the structure and member level. Furthermore, it can be used to automatically create a 3-D finite element model of a structural member or the entire structure at any point in time. Through periodic measurement of the deformations of a structure or structural member and inverse structural analyses with the measured 3-D displacements, a structure's health can be monitored continuously.

533 citations


Journal ArticleDOI
TL;DR: This research presents a case study of the automatic fare collection system of the Chicago Transit Authority (CTA) rail system and develops a method for inferring rail passenger trip origin‐destination matrices from an origin‐only AFC system to replace expensive passenger OD surveys.
Abstract: Automatic data collection (ADC) systems are becoming increasingly common in transit systems throughout the world. Although these ADC systems are often designed to support specific, fairly narrow functions, the resulting data can have wide-ranging applications well beyond their design purpose. This paper illustrates both the potential of ADC systems to provide transit agencies with rich new data sources at low marginal cost and the critical gap between what ADC systems directly offer and what transit agencies need in practice. Closing this gap requires data processing and analysis methods supported by technologies such as database management systems and geographic information systems. This work presents a case study of the automatic fare collection (AFC) system of the Chicago Transit Authority (CTA) rail system and develops a method for inferring rail passenger trip origin-destination (OD) matrices from an origin-only AFC system to replace expensive passenger OD surveys. A software tool is created to facilitate the method implementation. Results of the application in CTA are given.
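The core of OD inference from an origin-only AFC system is trip chaining. A minimal sketch of that idea follows; the station codes are hypothetical, and the paper's CTA method handles transfers and unlinked trips with more care than this simplification:

```python
from collections import Counter

def infer_od_pairs(tap_ins):
    """Infer destinations for origin-only fare records by trip chaining:
    each trip's destination is taken to be the origin of the rider's next
    tap-in, and the day's last trip is assumed to close the chain back to
    the first origin. A common simplification, not the paper's exact rules."""
    od = Counter()
    for stations in tap_ins.values():         # one tap-in sequence per rider
        if len(stations) < 2:
            continue                          # a single tap yields no chain
        for i, origin in enumerate(stations):
            destination = stations[(i + 1) % len(stations)]
            od[(origin, destination)] += 1
    return od

# Hypothetical tap-in sequences, not actual CTA data.
taps = {"rider1": ["A", "B"], "rider2": ["A", "C", "B"]}
matrix = infer_od_pairs(taps)
```

Summing the resulting counter over all riders yields the OD matrix that would otherwise come from a passenger survey.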

267 citations


Journal ArticleDOI
TL;DR: Two types of wavelet Kalman filter models based on Daubechies 4 and Haar mother wavelets are investigated; the test results show that both proposed wavelet Kalman filter models outperform the direct Kalman filter model in terms of mean absolute percentage error and root mean square error.
Abstract: This article investigates the application of the Kalman filter with discrete wavelet analysis in short-term traffic volume forecasting. Short-term traffic volume data are often corrupted by local noises, which may significantly affect the prediction accuracy of short-term traffic volumes. Discrete wavelet decomposition is used to divide the original data into approximation and detail components so that the Kalman filter model can be applied to the denoised data and the prediction accuracy can be improved. Two types of wavelet Kalman filter models, based on the Daubechies 4 and Haar mother wavelets, are investigated. Traffic volume data collected from four different locations are used for comparison in this study. The test results show that both proposed wavelet Kalman filter models outperform the direct Kalman filter model in terms of mean absolute percentage error and root mean square error.
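The wavelet-denoise-then-filter pipeline can be sketched with the Haar wavelet (the simpler of the paper's two mother wavelets) and a scalar random-walk Kalman filter. The threshold value and the traffic counts are illustrative, not the paper's:

```python
def haar_decompose(x):
    """One-level Haar transform: pairwise averages (approximation) and
    pairwise half-differences (detail)."""
    a = [(x[2*i] + x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_denoise(x, thresh):
    """Zero out small detail coefficients, then invert the transform."""
    a, d = haar_decompose(x)
    d = [di if abs(di) > thresh else 0.0 for di in d]
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]     # exact inverse of the transform above
    return out

def kalman_smooth(z, q=0.01, r=1.0):
    """Scalar random-walk Kalman filter over a measurement sequence."""
    x, p, out = z[0], 1.0, []
    for zk in z:
        p += q                        # predict: state variance grows
        k = p / (p + r)               # Kalman gain
        x += k * (zk - x)             # update with measurement zk
        p *= 1 - k
        out.append(x)
    return out

volumes = [120, 124, 118, 180, 122, 119, 121, 123]   # made-up 5-min counts
smoothed = kalman_smooth(haar_denoise(volumes, thresh=5.0))
```

With the detail threshold at zero the pipeline reproduces the input exactly, which is a convenient sanity check on the transform pair.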

212 citations


Journal ArticleDOI
TL;DR: A formulation of the robust network design problem (RNDP) is proposed, and a methodology based on a genetic algorithm (GA) is developed to solve the RNDP, generating globally near-optimal network design solutions based on the planner's input for robustness.
Abstract: This article addresses a traffic network design problem (NDP) under demand uncertainty. The origin–destination trip matrices are taken as random variables with known probability distributions. Instead of finding optimal network design solutions for a given future scenario, we are concerned with solutions that are in some sense “good” for a variety of demand realizations. We introduce a definition of robustness accounting for the planner's required degree of robustness. We propose a formulation of the robust network design problem (RNDP) and develop a methodology based on a genetic algorithm (GA) to solve the RNDP. The proposed model generates globally near-optimal network design solutions based on the planner's input for robustness. The study makes two important contributions to the network design literature. First, robust network design solutions are significantly different from deterministic NDP solutions, and ignoring demand uncertainty could underestimate the network-wide impacts. Second, the performance of the model and solution algorithm is systematically evaluated on different test networks and budget levels to explore the efficacy of this approach. The results highlight the importance of accounting for robustness in transportation planning, and the proposed approach is capable of producing high-quality solutions.
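The shape of a GA for a robust NDP can be sketched on a toy instance. Everything numeric here is hypothetical (costs, budget, the stand-in travel-cost model, the quantile used to encode robustness); only the structure — sample demand scenarios, score designs by a robust statistic, evolve bitstrings under a budget constraint — mirrors the approach:

```python
import random

random.seed(0)

# Toy instance: 5 candidate link improvements, build (1) or not (0).
COSTS = [3, 2, 4, 1, 2]        # construction cost of each improvement
BUDGET = 6
# Demand uncertainty: sampled multipliers standing in for random OD matrices.
SCENARIOS = [[random.uniform(0.8, 1.2) for _ in range(5)] for _ in range(20)]

def travel_cost(design, demand):
    """Stand-in cost model: an unimproved link costs twice as much to use."""
    return sum(d * (1.0 if built else 2.0) for built, d in zip(design, demand))

def robust_fitness(design, alpha=0.9):
    """Score a design by a high quantile of travel cost across scenarios,
    one way to encode the planner's required degree of robustness."""
    if sum(c for b, c in zip(design, COSTS) if b) > BUDGET:
        return float("inf")                       # over budget: infeasible
    costs = sorted(travel_cost(design, s) for s in SCENARIOS)
    return costs[int(alpha * (len(costs) - 1))]

def ga(pop_size=30, gens=40, n=5):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=robust_fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < 0.1:
                child[random.randrange(n)] ^= 1   # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=robust_fitness)

best = ga()
```

The do-nothing design is always feasible, so a useful smoke test is that the evolved design is within budget and strictly beats it on the robust score.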

199 citations


Journal ArticleDOI
TL;DR: The GA-optimized signal timings outperform those of a method that does not consider rerouting; the improvement is more significant on a more congested network, whereas under relatively mild congestion it is less clear.
Abstract: It is well known that coordinated, area-wide traffic signal control provides great potential for improvements in delays, safety, and environmental measures. However, an aspect of this problem that is commonly neglected in practice is the potentially confounding effect of drivers re-routing in response to changes in travel times on competing routes, brought about by the changes to the signal timings. This article considers the problem of optimizing signal green and cycle timings over an urban network, in such a way that the optimization anticipates the impact on traffic routing patterns. This is achieved by including a network equilibrium model as a constraint to the optimization. A Genetic Algorithm (GA) is devised for solving the resulting problem, using total travel time across the network as an illustrative fitness function, and with a widely used traffic simulation-assignment model providing the equilibrium flows. The procedure is applied to a case study of the city of Chester in the UK, and the performance of the algorithms is analyzed with respect to the parameters of the GA method. The results show better performance of the signal timings optimized by the GA method compared to a method that does not consider rerouting. This improvement is more significant on a more congested network, whereas under relatively mild congestion the improvement is not very clear.

173 citations


Journal ArticleDOI
TL;DR: The results show that the optimized bus network has significantly reduced transfers and travel time, and reveal that the proposed CPACA is effective and efficient compared to some existing ant algorithms.
Abstract: This paper presents an optimization model for a bus network design based on the coarse-grain parallel ant colony algorithm (CPACA). It aims to maximize the number of direct travelers/unit length; that is, direct traveler density, subject to route length and nonlinear rate constraints (ratio of the length of a route to the shortest road distance between origin and destination). CPACA is a new optimal algorithm that 1) develops a new strategy to update the increased pheromone, called Ant-Weight, by which the path-searching activities of ants are adjusted based on the objective function, and 2) uses parallelization strategies of an ant colony algorithm (ACA) to improve the calculation time and the quality of the optimization. Data collected in Dalian City, China, is used to test the model and the algorithm. Results show that the optimized bus network has significantly reduced transfers and travel time. The data also reveals that the proposed CPACA is effective and efficient compared to some existing ant algorithms.
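The paper's objective-weighted pheromone update can be sketched in a few lines. This captures only the general idea of an "Ant-Weight"-style increment — deposit scaled by the ant's objective value and spread over the route's edges — and the paper's exact formula and parallelization are not reproduced:

```python
def update_pheromone(pheromone, ant_routes, objective, rho=0.1, q=1.0):
    """Evaporate, then deposit pheromone in proportion to each ant's
    objective value (e.g., direct-traveler density), spread over the
    edges of its route. A sketch of objective-weighted updating, not
    the paper's exact increment formula."""
    for edge in pheromone:
        pheromone[edge] *= 1 - rho                    # evaporation
    for route in ant_routes:
        score = objective(route)
        for edge in zip(route, route[1:]):
            pheromone[edge] = pheromone.get(edge, 0.0) + q * score / len(route)
    return pheromone

# Hypothetical bus-stop graph and a single ant's route.
tau = update_pheromone({("A", "B"): 1.0, ("B", "C"): 1.0},
                       [["A", "B", "C"]],
                       objective=lambda route: 3.0)
```

Because the deposit is tied to the objective, ants that find high-density routes reinforce those edges more strongly, which is how the search is steered toward good bus networks.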

146 citations


Journal ArticleDOI
TL;DR: A modular neural predictor consisting of temporal genetically optimized structures of multilayer perceptrons (MLP) that are fed with volume data from sequential locations to improve the accuracy of short‐term forecasts is proposed.
Abstract: Current interest in short-term traffic volume forecasting focuses on incorporating temporal and spatial volume characteristics in the forecasting process. This paper addresses the problem of integrating and optimizing predictive information from multiple locations of an urban signalized arterial roadway and proposes a modular neural predictor consisting of temporal genetically optimized structures of multilayer perceptrons that are fed with volume data from sequential locations to improve the accuracy of short-term forecasts. Results show that the proposed methodology provides more accurate forecasts compared to the conventional statistical methodologies applied, as well as to the static forms of neural networks.

145 citations


Journal ArticleDOI
TL;DR: This article focuses on a nonlinear static method of developing fragility curves for a typical type of concrete bridge in California, which makes use of the capacity spectrum method (CSM) for identification of spectral displacement.
Abstract: The impact of an earthquake event on the performance of a highway transportation network depends on the extent of damage sustained by its individual components, particularly bridges. Seismic damageability of bridges expressed in the form of fragility curves can easily be incorporated into the scheme of risk analysis of a highway network under the seismic hazard. In this context, this article focuses on a nonlinear static method of developing fragility curves for a typical type of concrete bridge in California. The method makes use of the capacity spectrum method (CSM) for identification of spectral displacement, which is converted to rotations at bridge column ends. To check the reliability of this current analytical procedure, developed fragility curves are compared with those obtained by nonlinear time history analysis. Results indicate that analytically developed fragility curves obtained from nonlinear static and time history analyses are consistent.
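Fragility curves of the kind developed here are usually expressed in a two-parameter lognormal form. A minimal evaluation of that form follows; the median demand and dispersion are illustrative values, not the paper's fitted parameters:

```python
import math

def fragility(sd, median, beta):
    """Probability that a damage state is exceeded given spectral
    displacement sd, using the lognormal form typical of bridge
    fragility curves (median demand `median`, log-standard deviation
    `beta` -- both illustrative here)."""
    z = math.log(sd / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# At the median demand the exceedance probability is 50% by construction.
p_at_median = fragility(5.0, median=5.0, beta=0.6)
```

Plotting `fragility` against sd traces the familiar S-shaped curve that feeds directly into highway-network risk analysis.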

87 citations


Journal ArticleDOI
TL;DR: The research results presented demonstrate the benefit of implementing MOGA optimization as an integral part of a reliability-based optimization procedure for three-dimensional trusses.
Abstract: A hybrid methodology for performing reliability-based structural optimization of three-dimensional trusses is presented. This hybrid methodology links the search and optimization capabilities of multi-objective genetic algorithms (MOGA) with structural performance information provided by finite element reliability analysis. To highlight the strengths of the proposed methodology, a practical example is presented that concerns optimizing the topology, geometry, and member sizes of electrical transmission towers. The weight and reliability index of a tower are defined as the two objectives used by MOGA to perform Pareto ranking of tower designs. The truss deformation and the member stresses are compared to threshold values to assess the reliability of each tower under wind loading. Importance sampling is used for the reliability analysis. Both the wind pressure and the wind direction are considered as random variables in the analysis. The research results presented demonstrate the benefit of implementing MOGA optimization as an integral part of a reliability-based optimization procedure for three-dimensional trusses.

73 citations


Journal ArticleDOI
TL;DR: A methodology to forecast project progress and final time-to-completion is developed and an adaptive Bayesian updating method is used to assess the unknown model parameters based on recorded data and pertinent prior information.
Abstract: A methodology to forecast project progress and final time-to-completion is developed. An adaptive Bayesian updating method is used to assess the unknown model parameters based on recorded data and pertinent prior information. Recorded data can include equality, upper bound, and lower bound data. The proposed approach properly accounts for all the prevailing uncertainties, including model errors arising from an inaccurate model form or missing variables, measurement errors, statistical uncertainty, and volitional uncertainty. As an illustration of the proposed approach, the project progress and final time-to-completion of an example project are forecasted. For this illustration, construction of civilian nuclear power plants in the United States is considered. This application considers two cases: (1) no information is available prior to observing the actual progress data of a specified plant; and (2) the construction progress of eight other nuclear power plants is available. The example shows that an informative prior is important to make accurate predictions when only a few records are available. This is also the time when forecasts are most valuable to the project manager. Having or not having prior information does not have any practical effect on the forecast when progress on a significant portion of the project has been recorded.
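The role of an informative prior with few records can be illustrated with the simplest conjugate case. This sketch updates a normally distributed progress-rate parameter from equality observations only; the paper's treatment of bound data and the other uncertainty sources would require a numerical scheme and is omitted:

```python
def update_normal(prior_mean, prior_var, data, noise_var):
    """Conjugate normal-normal Bayesian update of an uncertain parameter
    (e.g., a progress rate) given equality observations with known
    measurement noise variance."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + sum(data) / noise_var)
    return post_mean, post_var

# With a single record, an informative prior pulls the estimate strongly;
# the numbers are hypothetical.
mean, var = update_normal(prior_mean=1.0, prior_var=0.25, data=[1.4], noise_var=1.0)
```

As more records accumulate, the data term dominates and the prior becomes irrelevant — the behavior the example project demonstrates.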

59 citations


Journal ArticleDOI
TL;DR: This article builds identification models for the displacement and cracks of a concrete arch dam with the trained wavelet network and shows that the proposed models are reasonable and that the denoising effect on the signal is remarkable.
Abstract: Dam behavior is conventionally evaluated with identification models of deformation, seepage, stress, and crack opening. The identification model needs to be described with a complicated, nonlinear function. Wavelet networks based on wavelet frames are used here to establish identification models of dam behavior for the first time. First, time-frequency analysis of the training data is performed to determine the original structure of the wavelet network. Next, a new method is proposed for iterative elimination of redundant neurons according to the dependency between the network output and the nodes in the hidden layer; rough sets theory is used to calculate the dependency. Lastly, identification models for the displacement and cracks of a concrete arch dam are built with the trained wavelet network. The models represent the connection between loads and the behavior of the dam. The numerical example shows that the proposed models are reasonable and that the denoising effect on the signal is remarkable.

Journal ArticleDOI
TL;DR: The potential of the harmonic wavelet transform as a detection tool for global structural damage is explored in conjunction with the concept of monitoring the mean instantaneous frequency of records of critical structural responses.
Abstract: The harmonic wavelet transform is employed to analyze various kinds of nonstationary signals common in aseismic design. The effectiveness of the harmonic wavelets for capturing the temporal evolution of the frequency content of strong ground motions is demonstrated. In this regard, a detailed study of important earthquake accelerograms is undertaken and smooth joint time-frequency spectra are provided for two near-field and two far-field records; inherent in this analysis is the concept of the mean instantaneous frequency. Furthermore, as a paradigm of usefulness for aseismic structural purposes, a similar analysis is conducted for the response of a 20-story steel frame benchmark building considering one of the four accelerograms scaled by appropriate factors as the excitation to simulate undamaged and severely damaged conditions for the structure. The resulting joint time-frequency representation of the response time histories captures the influence of nonlinearity on the variation of the effective natural frequencies of a structural system during the evolution of a seismic event. In this context, the potential of the harmonic wavelet transform as a detection tool for global structural damage is explored in conjunction with the concept of monitoring the mean instantaneous frequency of records of critical structural responses.
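The damage indicator here is a drop in the mean instantaneous frequency of a response record over time. A crude stand-in for the harmonic-wavelet estimate is a zero-crossing count per window, sketched below on a synthetic test tone (the signal and rates are illustrative):

```python
import math

def mean_instantaneous_frequency(signal, fs, win):
    """Track the mean frequency of a signal over time from zero-crossing
    counts in consecutive windows -- a crude stand-in for the
    harmonic-wavelet-based mean instantaneous frequency of the paper.
    A sustained drop in the track would indicate softening/damage."""
    track = []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        crossings = sum(1 for a, b in zip(seg, seg[1:]) if a * b < 0)
        track.append(crossings * fs / (2.0 * len(seg)))
    return track

# A 5 Hz test tone sampled at 100 Hz; each 1 s window should read roughly
# 5 Hz (window-edge crossings are lost, so the estimate sits slightly low).
fs = 100
tone = [math.sin(2 * math.pi * 5 * t / fs + 0.3) for t in range(2 * fs)]
freqs = mean_instantaneous_frequency(tone, fs, win=fs)
```

For a nonlinear structure shaken by a scaled accelerogram, the same track would show the effective natural frequency migrating downward as damage accumulates.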

Journal ArticleDOI
TL;DR: By introducing fuzzy ensemble learning, the structural response is reduced more than when the response is controlled by individual fuzzy active control systems.
Abstract: Recently, numerous studies of structural control systems of civil structures and infrastructure have been carried out. To develop structural control systems, it is necessary to consider their special features such as complexity, uncertainty, and size. To consider these features, fuzzy theory has been applied to structural control systems. This study proposes an integrated fuzzy active control system based on fuzzy ensemble learning. It combines several fuzzy active control systems and improves structural vibrations caused by earthquakes. The proposed method includes two fuzzy active control systems, a fuzzy ensemble system, and a gating network. In this study, two fuzzy active control systems are constructed by applying particle-swarm optimization. The fuzzy ensemble system assigns a performance grade to each fuzzy active control system according to control effects from input patterns. The gating network determines the final control force based on the weight of their performance grade. By introducing fuzzy ensemble learning, the structural response is reduced more than when the response is controlled by individual fuzzy active control systems.

Journal ArticleDOI
TL;DR: The use of forecasting models is extended beyond their traditional role as a guideline for monitoring and control of progress and is regarded as tools for driving the project in the direction of corporate goals.
Abstract: The excessive level of construction business failures and their association with financial difficulties has placed financial management in the forefront of many business imperatives. This has highlighted the importance of cash flow forecasting and management that has given rise to the development of several forecasting models. The traditional approach to the use of project financial models has been largely a project-oriented perspective. However, the dominating role of “project economics” in shaping “corporate economics” tends to place the corporate strategy at the mercy of the projects. This article approaches the concept of cash flow forecasting and management from a fresh perspective. Here, the use of forecasting models is extended beyond their traditional role as a guideline for monitoring and control of progress. They are regarded as tools for driving the project in the direction of corporate goals. The work is based on the premise that the main parties could negotiate the terms and attempt to complement their priorities. As part of this approach, a model is proposed for forecasting and management of project cash flow. The mathematical component of the model integrates three modules: an exponential and two fourth-degree polynomials. The model generates a forecast by potentially combining the outcome of data analysis with the experience and knowledge of the forecaster/organization. In light of corporate objectives, the generated forecast is then manipulated and replaced by a range of favorable but realistic cash flow profiles. Finally, through a negotiation with other parties, a compromised favorable cash flow is achieved. This article will describe the novel way the model is used as a decision support tool. Although the structure of the model and its mathematical components are described in detail, the data processing and analysis parts are briefly described and referenced accordingly. 
The viability of the model and the approach are demonstrated by means of a scenario.

Journal ArticleDOI
TL;DR: A wavelet-based damage identification technique was found to be simple, efficient, and independent of damage models and wavelet basis functions, once certain conditions regarding the modeshape and the wavelet bases are satisfied.
Abstract: Structural damage detection and calibration in beams by wavelet analysis involve some key factors such as the damage model, the choice of the wavelet function, the effects of windowing, and the effects of masking due to the presence of noise during measurement. In this research, a numerical study was performed to address these issues for single and multispan beams with an open crack. The first natural modeshapes of single and multispan beams with an open crack have been simulated taking into account damage models of different levels of complexity and analyzed for different crack depth ratios and crack positions. Gaussian white noise has been synthetically introduced to the simulated modeshape and the effects of varying signal-to-noise ratio are studied. A wavelet-based damage identification technique was found to be simple, efficient, and independent of damage models and wavelet basis functions, once certain conditions regarding the modeshape and the wavelet bases are satisfied. The wavelet-based damage calibration is found to be dependent on a number of factors including damage models and the basis function used in the analysis. A curvature-based calibration is more sensitive than a modeshape-based calibration to the extent of damage.
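The localization idea — a crack produces a local irregularity in the modeshape that a spatial transform turns into a spike — can be shown with the simplest such transform, a discrete curvature. The paper uses wavelet coefficients instead, and the piecewise-linear "modeshape" below is a deliberately crude open-crack surrogate:

```python
def damage_index(modeshape):
    """Absolute central second difference (a discrete curvature) of a
    measured modeshape; a localized spike flags the damage position.
    The paper wavelet-transforms the modeshape -- curvature is the
    simplest stand-in for that idea."""
    return [abs(modeshape[i - 1] - 2 * modeshape[i] + modeshape[i + 1])
            for i in range(1, len(modeshape) - 1)]

# Simulated modeshape: piecewise linear with a slope discontinuity at
# node 5 (real modeshapes are smooth curves with a subtler kink).
shape = [float(x) if x <= 5 else 5 + (x - 5) * 0.6 for x in range(11)]
index = damage_index(shape)
crack_node = index.index(max(index)) + 1   # offset: index[0] is node 1
```

Adding noise to `shape` and rerunning shows the masking effect the paper studies: the spike survives only while the signal-to-noise ratio is high enough.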

Journal ArticleDOI
TL;DR: This article deals with the application of low-cost Close Range Digital Photogrammetry to obtain an accurate three-dimensional reconstruction of irregular-geometry structures in building construction, with special attention to the evaluation of old structures.
Abstract: This article deals with the application of low-cost Close Range Digital Photogrammetry to obtain an accurate three-dimensional (3D) reconstruction of irregular-geometry structures in the field of building construction, with special attention to the evaluation of old structures. Photogrammetry can be used as a non-destructive tool to give precise 3D information about the size and shape of some elements of a structure, quickly and with no risk to the surveyors. The geometric data achieved can be used by engineers and architects to obtain the section properties, and also to estimate the influence of geometric variations in the distribution of stress. This allows us to compare areas subject to higher stress, and could be of special interest regarding some historic and cultural heritage constructions—such as the timber roof structure included as a case study in this work—from two points of view: on the one hand, Photogrammetry makes it possible to obtain precise 3D models of highly irregular elements, such as old timber purlins and trusses in ancient constructions; on the other, Photogrammetry is a non-contact method that minimizes the measurement time and allows us to obtain the section properties, which can be used together with some material testing characterization to evaluate the structural safety of the construction.

Journal ArticleDOI
TL;DR: This article presents the ambiguity of ground penetrating radar interpretation and common interpretation methods based on response amplitude and travel time.
Abstract: Ground penetrating radar (GPR) has become a viable technology for non-destructive condition assessment of reinforced concrete structures. Interpretation of the radar signal is typically performed through preliminary filtering techniques and by viewing numerous signals in the form of a scan. Although anomalies can be evident in the scanned image, their quantification and interpretation remain ambiguous. This article presents this ambiguity and common methods of interpretation based on response amplitude and travel time. An integrated medium is developed and used as a forward modeling tool to generate a realistic radar reflection of a reinforced concrete bridge deck with defects. A healthy deck reflection is obtained from a separate model and is combined with an inverse solution to quantifiably estimate unknown subsurface properties, such as layer thickness and the dielectric constants of subsurface materials, evident in the realistic radar trace. The forward modeling tool and the associated model-based assessment provide an objective computational alternative to the interpretation of scanned images.

Journal ArticleDOI
TL;DR: This article provides an overview of a multicriteria decision support methodology for annual rehabilitation programs of water networks, formulated for the purpose of comparing and ranking rehabilitation projects.
Abstract: This article provides an overview of a multicriteria decision support methodology for annual rehabilitation programs of water networks. A first set of criteria is formulated for the purpose of comparing and ranking rehabilitation projects. Each proposed criterion is a measure of a particular impact of the condition of a pipe. The ELECTRE TRI method is implemented for defining rehabilitation priorities. Two reference profiles are used to define the limits of three categories associated with three increasing priority levels. With these two reference profiles, applying the ELECTRE TRI method to an asset stock (a set of pipes that are candidates for rehabilitation) means assigning each pipe to one of six possible priority groups. A second set of criteria, based on the concept of efficiency, is proposed for comparing alternative rehabilitation programs (subsets of the asset stock).
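The profile-based sorting at the heart of the methodology can be sketched with a stripped-down, ELECTRE TRI-style pessimistic assignment. This is a simplification under stated assumptions: criteria are oriented so larger values mean a worse pipe condition, only a weighted concordance with a cutting level is used (no indifference/preference/veto thresholds), and two profiles yield three priority categories rather than the paper's six groups:

```python
def concordance(pipe, profile, weights):
    """Weighted share of criteria on which the pipe scores at least as
    badly (i.e., as deserving of rehabilitation) as the profile."""
    total = sum(weights)
    return sum(w for p, r, w in zip(pipe, profile, weights) if p >= r) / total

def assign_priority(pipe, profiles, weights, cut=0.6):
    """Simplified pessimistic assignment: walk the reference profiles from
    highest to lowest and stop at the first one the pipe outranks."""
    for level, profile in enumerate(reversed(profiles)):
        if concordance(pipe, profile, weights) >= cut:
            return len(profiles) - level      # higher number = higher priority
    return 0                                  # lowest priority

# Hypothetical criteria scores and weights, not the paper's data.
profiles = [[3, 3, 3], [6, 6, 6]]   # limits between three priority levels
weights = [0.5, 0.3, 0.2]
priority = assign_priority([7, 7, 2], profiles, weights)
```

Applying `assign_priority` across an asset stock produces the ranked candidate list that the second, efficiency-based set of criteria then shapes into a program.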

Journal ArticleDOI
TL;DR: The Ant Colony Optimization (ACO) algorithm is employed for size optimization of scissor-link foldable structures, using a special 3-node beam known as a uniplet.
Abstract: In this article, the Ant Colony Optimization (ACO) algorithm is employed for size optimization of scissor-link foldable structures. The advantage of using ACO lies in the fact that discrete design spaces can be optimized without added complexity. The algorithm selects the optimum cross-sections from the available sections list. Elastic behavior is assumed in the formulation of the problem. In addition to strength constraints, displacement constraints are considered in the design. The displacement method is used for analysis, employing a special 3-node beam known as a uniplet. Two design examples are presented to demonstrate the performance of the algorithm.

Journal ArticleDOI
TL;DR: This paper investigates the effectiveness of different mathematical methods in describing the 3-D surface texture of Portland cement concrete (PCC) pavements using the Hessian model, the Fast Fourier transform, the wavelet analysis, and the power spectral density.
Abstract: This paper investigates the effectiveness of different mathematical methods in describing the 3-D surface texture of Portland cement concrete (PCC) pavements. Ten PCC field cores of varying surface textures were included in the analysis. X-ray Computed Tomography (CT) was used to scan the upper portion of these cores, resulting in a stack of 2-D grayscale images. Image processing techniques were utilized to isolate the void pixels from the solid pixels and reconstruct the 3-D surface topography. The resulting 3-D surfaces were reduced to 2-D "map of heights" images, whereby the grayscale intensity of each pixel represented the vertical location of the surface at that point with respect to the lowest point on the surface. The "map of heights" images were analyzed using four mathematical methods: the Hessian model, the fast Fourier transform, wavelet analysis, and the power spectral density. Results obtained were compared to the mean profile depth computed in accordance with ASTM E1845.
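The benchmark metric, mean profile depth, has a simple core computation. The sketch below follows the general ASTM E1845 recipe — average the two half-segment peaks and subtract the mean level, then average over segments — but omits the standard's slope-suppression and filtering steps and its physical segment lengths, so treat it as a rough illustration only:

```python
def mean_profile_depth(profile, seg_len):
    """Mean profile depth in the spirit of ASTM E1845: within each segment,
    average the two half-segment peaks and subtract the segment mean.
    Slope suppression and filtering from the standard are omitted."""
    depths = []
    for s in range(0, len(profile) - seg_len + 1, seg_len):
        seg = profile[s:s + seg_len]
        half = seg_len // 2
        peak = (max(seg[:half]) + max(seg[half:])) / 2.0
        depths.append(peak - sum(seg) / len(seg))
    return sum(depths) / len(depths)

# A toy sawtooth profile extracted from a "map of heights" row (units: mm).
mpd = mean_profile_depth([0.0, 2.0, 0.0, 2.0, 0.0, 2.0, 0.0, 2.0], seg_len=4)
```

A perfectly flat profile gives zero depth, which makes a convenient sanity check.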

Journal ArticleDOI
TL;DR: A nested clustering technique is introduced and applied to the analysis of freeway operating condition, using detector traffic data aggregated to 5-minute increments, with the number of clusters determined by the Bayesian Information Criterion.
Abstract: This article introduces a nested clustering technique and its application to the analysis of freeway operating condition. A clustering model is developed using the traffic data (flow, speed, occupancy) collected by the detectors and aggregated to 5-minute increments. An optimum fit of the statistical characteristics of the data set is provided by the model based on the Bayesian Information Criterion and the ratio of changes in dispersion measurement. This technique is flexible in determining the number of clusters based on the statistical characteristics of the data. Tests on multiple sites with varying operating conditions have attested to its effectiveness as a data mining tool for the analysis of freeway operating condition.
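Choosing the number of clusters from the data's dispersion can be sketched in one dimension, where optimal clusters are contiguous in sorted order and can be found exactly for small samples. The stopping rule below is the dispersion-ratio idea (grow k until the fractional drop in within-cluster dispersion stalls); the BIC term of the paper's criterion and the multivariate flow/speed/occupancy data are left out of this sketch:

```python
import itertools

def best_partition_rss(data, k):
    """Optimal k-cluster within-cluster sum of squares in 1-D: clusters of
    an optimal 1-D partition are contiguous in sorted order, so try every
    set of k-1 split points (fine for small samples)."""
    xs = sorted(data)
    def rss(seg):
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)
    best = float("inf")
    for cuts in itertools.combinations(range(1, len(xs)), k - 1):
        bounds = [0, *cuts, len(xs)]
        best = min(best, sum(rss(xs[a:b]) for a, b in zip(bounds, bounds[1:])))
    return best

def pick_k(data, max_k=4, min_drop=0.5):
    """Grow k until the fractional dispersion drop stalls -- the
    dispersion-ratio part of the paper's criterion, BIC omitted."""
    prev = best_partition_rss(data, 1)
    for k in range(2, max_k + 1):
        cur = best_partition_rss(data, k)
        if prev == 0 or (prev - cur) / prev < min_drop:
            return k - 1
        prev = cur
    return max_k

flows = [98, 100, 101, 103, 398, 400, 401, 403]   # two traffic regimes
k = pick_k(flows)
```

On this toy sample the dispersion collapses going from one cluster to two and barely improves after that, so the rule settles on two regimes.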

Journal ArticleDOI
TL;DR: This work presents a novel bid price determination procedure that is built by integrating a simulation-based cost model and a multi-criteria evaluation model to reflect bidder preferences regarding decision criteria.
Abstract: Several criteria affect bidding decisions. Current bidding models determine a markup based on a fixed project construction cost. This work presents a novel bid price determination procedure that is built by integrating a simulation-based cost model and a multi-criteria evaluation model. The cost model is used to consider cost uncertainties and generate a bid price cumulative distribution, whereas the multi-criteria evaluation model applies pairwise comparisons and fuzzy integrals to reflect bidder preferences regarding decision criteria. The relationship between the two models is based on a practical phenomenon in that a bidder has a high probability of winning when criteria evaluations favor his bid, and, consequently, the bidder would bid a low price, and vice versa. The merits of the proposed procedure are demonstrated by its application to two construction projects in Taiwan.

Journal ArticleDOI
TL;DR: A fuzzy-logic-based model is proposed for determining the minimum bid markup with assessments of chance of winning and loss risk, and shows that the model is sensitive enough to differentiate a decision maker's position on bidding and suggest bid-cutting limits consistently, thereby remedying some shortcomings of existing models.
Abstract: Many construction markets exhibit severe price competition in which contractors have to cut their bids to compete, giving priority to winning enough contracts to sustain normal operation; it is common to see a winning bid close to the expected project cost. Cutting bids not only gives up profit but also increases the risk of making a loss, yet the behavior of contractors under intense competition is difficult to explain with existing models. A fuzzy-logic-based model is proposed for determining the minimum bid markup with assessments of the chance of winning and the loss risk. The model incorporates the position of a decision maker in the fuzzy rules according to his/her attitude toward risk and degree of need for the job. Two illustrative examples, one hypothetical and one real, are provided, in which differences in priorities are simulated by four sets of fuzzy rules for a comparison of the effects. The results show that the model is sensitive enough to differentiate a decision maker's position on bidding and to suggest bid-cutting limits consistently, thereby remedying some shortcomings of existing models.
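A toy version of such a fuzzy-logic markup model can be sketched with triangular membership functions and Mamdani-style min inference. The membership breakpoints, the four rules, and the output centroids below are illustrative assumptions, not the rule sets used in the article.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def min_markup(win_chance, need):
    """Toy Mamdani-style rules for a minimum markup (%).
    Memberships, rules, and output centroids are illustrative only."""
    low_win  = tri(win_chance, -0.5, 0.0, 0.6)
    high_win = tri(win_chance,  0.4, 1.0, 1.5)
    low_need  = tri(need, -0.5, 0.0, 0.6)
    high_need = tri(need,  0.4, 1.0, 1.5)
    # (rule strength, markup centroid): a bidder who badly needs the job
    # cuts toward ~1%, a comfortable bidder holds out for ~8%
    rules = [(min(low_win,  high_need), 1.0),   # must win: cut deep
             (min(high_win, high_need), 3.0),
             (min(low_win,  low_need),  8.0),   # can afford to walk away
             (min(high_win, low_need),  6.0)]
    num = sum(w * m for w, m in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0           # centroid defuzzification

print(round(min_markup(0.3, 0.9), 2), round(min_markup(0.7, 0.1), 2))
```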

Journal ArticleDOI
TL;DR: A 3-step neural-network-based strategy identifies structural member stiffness and damping parameters directly from free-vibration-induced strain measurements; numerical simulations indicate that average relative errors of the identified structural properties were less than 5% and relatively insensitive to measurement noise.
Abstract: The increasing use of advanced sensing technologies such as optic fiber Bragg grating and embedded piezoelectric sensors necessitates the development of strain-based identification methodologies. In this study, a 3-step neural-network-based strategy, called direct soft parametric identification (DSPI), is presented to identify structural member stiffness and damping parameters directly from free-vibration-induced strain measurements. The rationality of the strain-based DSPI method is explained, and the theoretical basis for the construction of a strain-based emulator neural network (SENN) and a parametric evaluation neural network (PENN) is described according to the discrete-time solution of the state-space equation of structural free vibration. The accuracy, robustness, and efficacy of the proposed strategy are examined using a truss structure with a known mass distribution. Numerical simulations indicate that average relative errors of the identified structural properties were less than 5% and relatively insensitive to measurement noise.
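The underlying idea, identifying stiffness and damping from the discrete-time solution of free vibration, can be illustrated on a single-degree-of-freedom oscillator. The sketch below substitutes the classical logarithmic-decrement method for the paper's SENN/PENN networks, so it shows the identification target rather than the neural-network strategy itself.

```python
import math

def free_decay(m, k, c, x0, dt, n):
    """Discrete-time free-vibration displacement of an SDOF oscillator
    (exact solution of the state-space equation sampled at steps of dt)."""
    wn = math.sqrt(k / m)
    zeta = c / (2.0 * math.sqrt(k * m))
    wd = wn * math.sqrt(1.0 - zeta ** 2)
    return [x0 * math.exp(-zeta * wn * i * dt) * math.cos(wd * i * dt)
            for i in range(n)]

def identify(signal, dt):
    """Recover (k/m, c/m) from successive peaks via the logarithmic
    decrement -- a classical stand-in for the paper's neural-network step."""
    peaks = [(i, x) for i, x in enumerate(signal[1:-1], 1)
             if x > signal[i - 1] and x > signal[i + 1]]
    (i1, p1), (i2, p2) = peaks[0], peaks[1]
    Td = (i2 - i1) * dt                        # damped period
    delta = math.log(p1 / p2)                  # log decrement
    zeta = delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)
    wn = 2 * math.pi / (Td * math.sqrt(1 - zeta ** 2))
    return wn ** 2, 2 * zeta * wn              # k/m and c/m estimates

sig = free_decay(m=1.0, k=400.0, c=2.0, x0=1.0, dt=0.005, n=1000)
k_over_m, c_over_m = identify(sig, 0.005)
print(round(k_over_m, 1), round(c_over_m, 2))  # close to 400 and 2.0
```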

Journal ArticleDOI
TL;DR: An Index of Technical and Economic Performance (ITEp) is defined as a combined measure of performance (including social costs) and technical costs, providing an objective standard tool for managers to compare alternatives in the optimization of Inspection, Maintenance, and Rehabilitation strategies on sewer systems.
Abstract: Managers of sewer systems are faced with the ageing of their infrastructure. Even when they are conscious of the need for maintenance to keep the system in good condition, they lack efficient methods and tools to help them take appropriate decisions: no really satisfactory and efficient tool exists for optimizing Inspection, Maintenance, or Rehabilitation (IMR) strategies on such systems. Sewer managers and researchers have been involved for many years in the French National Research Project for Renewal of Non Man Entry Sewer Systems (RERAU, Rehabilitation des Reseaux d'Assainissement Urbains) to improve their knowledge of these systems and of management policies. During the RERAU project, a specific action was dedicated to the modeling of asset ageing and maintenance. Special attention was paid to the description of defects and dysfunctions and to the evaluation and modeling of performance, accounting for its various dimensions (from the point of view of the manager, of the user, of the environment…). After defining an Index of Technical Performance (ITp), we introduce the Index of Technical and Economic Performance (ITEp), a combined measure of performance (including social costs) and technical costs. This index provides an objective standard tool for managers to compare different alternatives; it is used in the article to compare some simple IMR strategies. It sets the basis of a new method for non-man-entry sewer system management, enabling analysis of the profitability of investment in terms of both technical and economic performance.

Journal ArticleDOI
TL;DR: A computer-aided comprehensive strategy for the rapid visual inspection of buildings and the optimal prioritization of strengthening and remedial actions that are necessary prior to, and after, a major earthquake event, respectively is presented.
Abstract: The aim of this article is to present a computer-aided comprehensive strategy for the rapid visual inspection of buildings and the optimal prioritization of the strengthening and remedial actions that are necessary prior to, and after, a major earthquake event, respectively. Based on the visual screening procedures used in the United States and on past experience in the seismic assessment of buildings in Greece and Turkey (the two countries with the highest seismic risk in Europe), a building inventory is first compiled; then a vulnerability ranking procedure specifically tailored to the prevailing construction practice in Southeast Europe is implemented in a multi-functional, georeferenced computer tool that accommodates the management, evaluation, processing, and archiving of the data stock gathered during the pre- and post-earthquake assessment process, and the visualization of its spatial distribution. The methodology proposed and the computer system developed are then applied to the city of Duzce, Turkey, a city strongly damaged during the devastating 1999 earthquake.

Journal ArticleDOI
TL;DR: The agreement between the present results suggests that the aeroelastic behavior of bridge decks with sharp edges is not sensitive to the Reynolds number or turbulence modeling, and that flutter derivatives can be evaluated through system identification.
Abstract: The Sanchaji Bridge, with a main span of 328 m, located in Changsha City across the Xiangjiang River, is one of the longest self-anchored suspension bridges completed in China. This article presents the results from a combined wind tunnel and CFD (computational fluid dynamics) study on identification of flutter derivatives of the bridge deck. Based on the Covariance Block Hankel Matrix (CBHM) algorithm in the time domain, sectional model wind tunnel tests are conducted in smooth flow to recover modal parameters and to further identify flutter derivatives from free-decay vibration records. On the other hand, based on the ALE (Arbitrary Lagrangian Eulerian) description and a second-order projection algorithm, the CFD study uses the FVM (Finite Volume Method) on staggered grids and a forced vibration of the bridge deck to evaluate the flow field around the deck. With the obtained aerodynamic forces acting on the bridge deck, flutter derivatives can be evaluated through system identification. Finally, both of the suggested methods are applied to identification of the flutter derivatives of the bridge deck of the Sanchaji Bridge. The results of the present methods show the same trends as the Theodorsen analytical solutions, while the results from the CFD study compare well with those from the wind tunnel test. This agreement suggests that the aeroelastic behavior of bridge decks with sharp edges is not sensitive to the Reynolds number or turbulence modeling.

Journal ArticleDOI
TL;DR: This article presents a formal method for defining a schema subset using set theory and introduces the concept of a base set, which works as a building block of a subset with identified rules.
Abstract: A neutral product model facilitates data exchange and integration. Particularly in the Architecture, Engineering and Construction product domains, neutral product models have been developed to support integration across segments of the product lifecycle and thus support a range of applications. Because of this, only subsets of the neutral data structures are utilized in any one exchange. Subsets are also called conformance classes (ISO-STEP, CIS/2) or views (IFC), and include subtle conceptual and intentional differences. In current practice, the subset definition method ranges from just selecting entity data types, to defining a sub schema (data model), or to describing the purpose of the subset in natural language. This diversity in defining a subset comes from the complexity of reference relationships and subtyping capabilities of current product data models defined in the EXPRESS product modeling language. This article presents a formal method for defining a schema subset using set theory. It introduces the concept of a base set. A base set works as a building block of a subset with identified rules. It also identifies subset generation rules and a classification of rules according to schema versus instance, and semantic versus syntactic characteristics of subsets. The presented capabilities are intended to facilitate the generation of views within a product model schema supporting specific exchanges.
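One building block of such a subset definition, closing a selected set of entity types over reference and supertype relationships so the subset stays well-formed, can be sketched as a graph traversal. The rule set and the IFC-like entity names below are simplified, hypothetical stand-ins for the paper's base-set rules.

```python
def subset_closure(selected, references, supertypes):
    """Expand a selected set of entity types into a well-formed subset:
    every referenced type and every supertype must also be included.
    This is a simplified stand-in for the paper's base-set rules."""
    closed = set()
    stack = list(selected)
    while stack:
        entity = stack.pop()
        if entity in closed:
            continue
        closed.add(entity)
        stack.extend(references.get(entity, []))  # referenced entity types
        stack.extend(supertypes.get(entity, []))  # supertype chain
    return closed

# Hypothetical EXPRESS-like schema fragment with IFC-flavored names.
references = {"IfcBeam": ["IfcMaterial"], "IfcMaterial": []}
supertypes = {"IfcBeam": ["IfcBuildingElement"],
              "IfcBuildingElement": ["IfcElement"]}
print(sorted(subset_closure({"IfcBeam"}, references, supertypes)))
```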

Journal ArticleDOI
TL;DR: There exists an optimal number of maintenances for cracking and delamination that returns the minimum total cost for the structure in its whole service life, which can help structural engineers, operators, and asset managers develop a cost-effective management scheme for corrosion-affected concrete structures.
Abstract: Corrosion of reinforcing steel in concrete is the dominant cause of premature failure of reinforced concrete structures located in chloride-laden environments. It is also observed that some severely deteriorated concrete structures survive for many years without maintenance. This raises the question of why and how to maintain corrosion-affected concrete structures, in particular in a climate of increasing scarcity of resources. The present article attempts to formulate a maintenance strategy based on risk-cost optimization of a structure during its whole service life. A time-dependent reliability method is employed to determine the probability of exceeding a limit state at each phase of the service life. To facilitate practical application of the formulated maintenance strategy, an algorithm is developed and programmed in a user-friendly manner with a worked example. A merit of the proposed maintenance strategy is that the models used in risk assessment for corrosion-affected concrete structures are related to some of the design criteria used by practitioners. It is found that there exists an optimal number of maintenances for cracking and delamination that returns the minimum total cost for the structure in its whole service life. The maintenance strategy presented in the article can help structural engineers, operators, and asset managers develop a cost-effective management scheme for corrosion-affected concrete structures.
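The existence of an optimal number of maintenances can be illustrated with a toy life-cycle cost model: discounted maintenance costs grow with the number of interventions while the discounted expected failure cost shrinks. The exponential risk decay, cost figures, and discount rate below are illustrative assumptions, not the paper's time-dependent reliability model.

```python
def total_cost(m, horizon=50, maint_cost=15.0, failure_cost=500.0, rate=0.04):
    """Life-cycle cost for m evenly spaced maintenances over the service
    life. The exponential decay of failure risk with maintenance count is
    an illustrative assumption."""
    times = [horizon * (i + 1) / (m + 1) for i in range(m)]
    maintenance = sum(maint_cost / (1 + rate) ** t for t in times)
    p_fail = 0.5 * 0.6 ** m              # more maintenance -> lower risk
    risk = failure_cost * p_fail / (1 + rate) ** horizon
    return maintenance + risk

costs = {m: total_cost(m) for m in range(0, 11)}
best_m = min(costs, key=costs.get)       # interior minimum of the trade-off
print(best_m, round(costs[best_m], 1))
```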

Journal ArticleDOI
TL;DR: The main aim of this paper is to derive a PSD distribution that accounts for the variations in the contributing random variables, and the obtained distribution is used to estimate the reliability index of the current PSD standards at a design speed of 50 mph.
Abstract: Passing sight distance (PSD) is provided to ensure the safety of passing maneuvers on 2-lane, 2-way roads. Many random variables determine the minimum length required for a safe passing maneuver. Current PSD design practices replace these random variables by single-value means in the calculation process, disregarding their inherent variations, which results in a single-value PSD design criterion. The main aim of this paper is to derive a PSD distribution that accounts for the variations in the contributing random variables. Two models are devised: a Monte Carlo simulation model used to obtain the PSD distribution, and a closed-form analytical estimation model used for verification. The Monte Carlo simulation model uses random sampling to select values of the contributing parameters from their corresponding distributions in each run, whereas the analytical model accounts for each parameter's variation by using its mean and standard deviation in a closed-form estimation. Both models use the same PSD formulation; their means and standard deviations are compared for verification. Analysis is conducted for a design speed of 50 mph, and a PSD distribution is developed accordingly. Results of the two models differ by less than 2%. The obtained distribution is used to estimate the reliability index of the current PSD standards at a design speed of 50 mph.
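A Monte Carlo PSD model of the kind described can be sketched from the classical four-component formulation (d1 through d4). The parameter distributions below are illustrative guesses around typical 50-mph values, not the distributions used in the paper.

```python
import random
import statistics

def psd_sample(rng):
    """One Monte Carlo draw of passing sight distance (ft) from the
    classical four-component model (d1..d4). Distributions are
    illustrative; the paper's 50-mph inputs are not reproduced here."""
    v  = rng.gauss(50.0, 3.0)    # passing-vehicle speed, mph
    m  = rng.gauss(10.0, 1.5)    # speed difference vs. passed vehicle, mph
    a  = rng.gauss(1.43, 0.2)    # average acceleration, mph/s
    t1 = rng.gauss(4.0, 0.5)     # initial-maneuver time, s
    t2 = rng.gauss(10.4, 1.0)    # time spent in the left lane, s
    d3 = rng.uniform(110, 300)   # clearance to the opposing vehicle, ft
    d1 = 1.47 * t1 * (v - m + a * t1 / 2.0)  # initial maneuver distance
    d2 = 1.47 * v * t2                       # left-lane occupation distance
    d4 = 2.0 * d2 / 3.0          # opposing vehicle covers 2/3 of d2
    return d1 + d2 + d3 + d4

rng = random.Random(42)
samples = [psd_sample(rng) for _ in range(20000)]
print(round(statistics.mean(samples)), round(statistics.stdev(samples)))
```

From such samples one can read off any percentile of the PSD distribution rather than a single design value.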