
Showing papers in "Journal of Computing in Civil Engineering in 2004"


Journal ArticleDOI
TL;DR: The issue of data division and its impact on ANN model performance is investigated for a case study of predicting the settlement of shallow foundations on granular soils and it is apparent that the SOM and fuzzy clustering methods are suitable approaches for data division.
Abstract: In recent years, artificial neural networks (ANNs) have been applied to many geotechnical engineering problems with some degree of success. In the majority of these applications, data division is carried out on an arbitrary basis. However, the way the data are divided can have a significant effect on model performance. In this paper, the issue of data division and its impact on ANN model performance is investigated for a case study of predicting the settlement of shallow foundations on granular soils. Four data division methods are investigated: (1) random data division; (2) data division to ensure statistical consistency of the subsets needed for ANN model development; (3) data division using self-organizing maps (SOMs); and (4) a new data division method using fuzzy clustering. The results indicate that the statistical properties of the data in the training, testing, and validation sets need to be taken into account to ensure that optimal model performance is achieved. It is also apparent from the results that the SOM and fuzzy clustering methods are suitable approaches for data division.

282 citations
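
A minimal sketch of the cluster-then-split idea behind methods (3) and (4), using k-means from scikit-learn as a simple stand-in for the SOM and fuzzy clustering of the paper; the function name, cluster count, and split fractions are illustrative. Sampling each cluster proportionally keeps the statistical properties of the training, testing, and validation subsets consistent:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_divide(X, n_clusters=5, fracs=(0.6, 0.2, 0.2), seed=0):
    """Divide samples into train/test/validation, stratified by cluster."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X)
    train, test, val = [], [], []
    for c in range(n_clusters):
        idx = rng.permutation(np.flatnonzero(labels == c))
        n_tr = int(fracs[0] * len(idx))
        n_te = int(fracs[1] * len(idx))
        train.extend(idx[:n_tr])                # training rows of cluster c
        test.extend(idx[n_tr:n_tr + n_te])      # testing rows
        val.extend(idx[n_tr + n_te:])           # validation rows
    return np.array(train), np.array(test), np.array(val)
```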


Journal ArticleDOI
TL;DR: The SSD modeling approach offers a single modeling framework for developing conceptually different models and provides the much-needed capability to model feedback based complex dynamic processes in time and space while giving insight into the interactions among different components of the system.
Abstract: A new approach called spatial system dynamics (SSD) is presented to model feedback based dynamic processes in time and space. This approach is grounded in control theory for distributed parameter systems. System dynamics and geographic information system (GIS) are coupled to develop this modeling approach. The SSD modeling approach offers a single modeling framework for developing conceptually different models. It also provides the much-needed capability to model feedback based complex dynamic processes in time and space while giving insight into the interactions among different components of the system. The proposed approach is superior to existing techniques for dynamic modeling such as cellular automata and GIS and addresses most of the limitations present in these approaches. The SSD approach can be used to model a variety of physical and natural processes where the main interest is the space-time interaction, e.g., environmental/water resources processes, natural resources management, climate change, and disaster management. The applicability of the proposed approach is demonstrated with an application to flood management in the Red River basin in Manitoba, Canada.

184 citations
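
A minimal sketch of the SSD idea under strong assumptions: each grid cell carries a system-dynamics stock (say, stored flood water) updated by a local feedback, plus an exchange flow with its four neighbors supplying the spatial coupling. The rate constants and the point-source scenario are illustrative, not taken from the paper:

```python
import numpy as np

def ssd_step(stock, inflow, k_release=0.1, k_exchange=0.05, dt=1.0):
    release = k_release * stock          # local feedback: outflow grows with stock
    padded = np.pad(stock, 1, mode="edge")
    neighbor_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                    padded[1:-1, :-2] + padded[1:-1, 2:])
    exchange = k_exchange * (neighbor_sum - 4.0 * stock)   # flow from neighbors
    return stock + dt * (inflow - release + exchange)

stock = np.zeros((50, 50))               # one stock value per GIS grid cell
for t in range(100):
    inflow = np.zeros_like(stock)
    inflow[25, 25] = 1.0                 # hypothetical point source
    stock = ssd_step(stock, inflow)
```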


Journal ArticleDOI
TL;DR: In this paper, the surface roughness of three silica sands was studied using optical interferometry and statistically characterized by a set of parameters; two new indices for particle roundness and sphericity are also introduced, and the friction and dilatancy angles are found to increase with surface roughness.
Abstract: This paper presents detailed microscopic analyses of the surface roughness, roundness, and sphericity of sands. The surface roughness of three silica sands was studied using the optical interferometry approach. It was statistically characterized by a set of parameters. Optical interferometry yielded very accurate measurements of surface roughness of the three sands. It was found that, as the surface roughness increases, the friction and dilatancy angles of the sand increase. In addition, two new indices for particle roundness and sphericity are introduced, compared with the Powers classification, and used to classify the investigated sands.

142 citations
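
The paper's specific parameter set is not reproduced here, but a hedged sketch of the standard statistical roughness parameters computed from an interferometer height map z(x, y) shows the kind of characterization involved:

```python
import numpy as np

def roughness_params(z):
    """Standard roughness statistics for a 2D height map (illustrative set)."""
    dev = z - z.mean()                  # deviations from the mean plane
    Ra = np.mean(np.abs(dev))           # average roughness
    Rq = np.sqrt(np.mean(dev**2))       # root-mean-square roughness
    Rsk = np.mean(dev**3) / Rq**3       # skewness of the height distribution
    return Ra, Rq, Rsk
```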


Journal ArticleDOI
TL;DR: In this paper, the authors used x-ray tomography imaging to reconstruct a 3D digital representation of individual particles in a granular system, represented by the mass center coordinates and the morphology representation of each particle.
Abstract: This paper presents the use of x-ray tomography imaging to reconstruct a three-dimensional (3D) digital representation of individual particles in a granular system. The granular system is represented by the mass center coordinates and the morphology representation of each particle. An automated procedure using pattern recognition to identify related particle cross sections in adjacent serial images was developed. Procedures to calculate quantities needed for subsequent simulation of particle behavior, including the volume and the moment of inertia of each particle, are also presented. The developments described in the paper enable modeling and simulation of the behavior, and experimental observations of the particle kinematics, of real microstructures of granular materials in a true 3D platform.

139 citations
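
As a rough analogue of the slice-matching procedure, the sketch below reconstructs particles from a stacked binary tomogram with 3D connected-component labeling and extracts the mass centers and volumes named above; this is a simplification of the paper's pattern-recognition matching, and the voxel size is a placeholder:

```python
import numpy as np
from scipy import ndimage

def particles_from_stack(binary_stack, voxel_volume=1.0):
    """binary_stack: 3D 0/1 array of stacked tomography slices."""
    labels, n = ndimage.label(binary_stack)            # 3D connectivity
    index = range(1, n + 1)
    centers = ndimage.center_of_mass(binary_stack, labels, index)
    voxel_counts = ndimage.sum(binary_stack, labels, index)
    return [{"center": c, "volume": v * voxel_volume}
            for c, v in zip(centers, voxel_counts)]
```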


Journal ArticleDOI
TL;DR: In this article, a multiscale wavelet representation is used to capture the texture and to differentiate "true" texture from "false" texture caused by variations of natural color on a particle surface.
Abstract: This paper presents image analysis techniques by which to characterize the texture, angularity, and form of aggregate particles used in highway construction and geotechnical applications. For texture analysis, wavelet decomposition in gray scale images of particles is performed. The results demonstrate that multiscale wavelet representation is a powerful tool by which to capture the texture and to differentiate "true" texture from "false" texture caused by variations of natural color on a particle surface. Angularity and form analyses of particles are done using binary images. A gradient-based method is employed to describe angularity. This method is shown to differentiate between particles with different angularity characteristics. Form analysis of the particles includes computing the shape factor and sphericity index, which are based on measurements of the shortest, intermediate, and longest axis of the particle. Particle thickness is measured using the autofocus feature of a microscope. The width and length are calculated by an eigenvalue decomposition method of two-dimensional particle projections. Details of interactive software developed to compute the different aggregate shape factors are discussed. The results indicate that these calculated values of the particle dimensions match very closely the values measured manually using a digital caliper.

114 citations
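
A minimal sketch of the texture part, assuming PyWavelets and a 2D float gray-scale image; the wavelet choice and level count are illustrative. The per-level detail energies form the multiscale texture signature the abstract refers to:

```python
import numpy as np
import pywt

def texture_energies(gray_image, wavelet="db4", levels=4):
    """Detail energy per decomposition level (coarsest to finest)."""
    coeffs = pywt.wavedec2(gray_image, wavelet, level=levels)
    # coeffs[0] is the approximation; each later entry is a tuple of
    # (horizontal, vertical, diagonal) detail arrays for one level.
    return [sum(float(np.sum(d**2)) for d in details)
            for details in coeffs[1:]]
```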


Journal ArticleDOI
TL;DR: Two parallel genetic algorithm (PGA) models are proposed for the transit route network design (TRND) problem for urban bus operation, and it is observed that the global PVM model performed better than the MPI model.
Abstract: A transit route network design (TRND) problem for urban bus operation involves the determination of a set of transit routes and the associated frequencies that achieve the desired objective. This can be formulated as an optimization problem of minimizing the total system cost, which is the sum of the operating cost and the generalized travel cost. A review of previous approaches to solve this problem reveals the deficiency of conventional optimization techniques and the suitability of genetic algorithm (GA) based models to handle such combinatorial optimization problems. Since GAs are computationally intensive optimization techniques, their application to large and complex problems is limited. The computational performance of a GA model can be improved by exploiting its inherent parallel nature. Accordingly, two parallel genetic algorithm (PGA) models are proposed in this study. The first is a global parallel virtual machine (PVM) parallel GA model where the fitness evaluation is done concurrently in a parallel processing environment using PVM libraries. The second is a global message passing interface (MPI) parallel GA model where an MPI environment substitutes for the PVM libraries. An existing GA model for TRND for a large city is used as a case study. These models are tested for computation time, speedup, and efficiency. From the study, it is observed that the global PVM model performed better than the other model.

112 citations
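
A skeleton of the global parallel-GA pattern using mpi4py, in which the master rank keeps the population and the fitness evaluations are scattered across ranks each generation; evaluate_route_network is a placeholder for the TRND cost function, and the selection step is deliberately crude:

```python
import numpy as np
from mpi4py import MPI

def evaluate_route_network(individual):
    return float(np.sum(individual ** 2))        # placeholder fitness

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
pop_size, n_genes = 64, 20
pop = np.random.rand(pop_size, n_genes) if rank == 0 else None

for gen in range(50):
    chunk = comm.scatter(np.array_split(pop, size) if rank == 0 else None)
    fit_chunk = np.array([evaluate_route_network(ind) for ind in chunk])
    gathered = comm.gather(fit_chunk, root=0)
    if rank == 0:
        fitness = np.concatenate(gathered)
        keep = np.argsort(fitness)[: pop_size // 2]   # crude selection
        pop = np.vstack([pop[keep],                   # survivors, plus random
                         np.random.rand(pop_size // 2, n_genes)])  # replacements
        # (crossover and mutation operators would go here)
```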


Journal ArticleDOI
TL;DR: In this paper, adaptive cross correlation (ACC), an advanced cross-correlation algorithm, is used to study two-dimensional spatial soil deformations nonintrusively.
Abstract: Digital image correlation (DIC) is used in this paper to study two-dimensional spatial soil deformations nonintrusively. Adaptive cross correlation (ACC), an advanced cross-correlation algorithm, is applied for this purpose.

96 citations
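
A minimal sketch of the basic correlation step that DIC builds on: locate a reference-image subset inside a search window of the deformed image via normalized cross correlation (ACC adapts the subset and search sizes; that refinement is not reproduced here). Indices are assumed to stay inside the images:

```python
import numpy as np

def subset_displacement(ref, cur, y, x, half=15, search=10):
    tmpl = ref[y - half:y + half + 1, x - half:x + half + 1]
    tn = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-12)
    best, disp = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[y + dy - half:y + dy + half + 1,
                      x + dx - half:x + dx + half + 1]
            wn = (win - win.mean()) / (win.std() + 1e-12)
            score = float(np.mean(tn * wn))      # normalized correlation
            if score > best:
                best, disp = score, (dy, dx)
    return disp, best                # pixel displacement and peak score
```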


Journal ArticleDOI
TL;DR: In this article, a vision-based alternative for measuring the shape of aggregate particles is presented; although it requires increased capital investment, the system will result in objective, cost-effective, and timely testing of aggregate shape.
Abstract: Aggregates need to pass numerous tests to ensure the performance of asphalt and concrete structures and pavements. Some of these tests are fairly onerous, requiring manual, labor-intensive, cost-ineffective measurements that do not provide significant statistical validity, and are prone to errors through ignorance, negligence, or even in some cases through deliberate misrepresentation. This paper presents a vision-based alternative to measure the shape of aggregate particles. The system, although requiring increased capital investment, will result in objective, cost-effective, and timely testing of aggregate shape. The system uses dual, synchronized, double-speed progressive scan cameras to image the aggregate piece from two directions. A dual image acquisition card simultaneously digitizes both images and does real-time thresholding to create a binary image, which is ported to the host computer. A software trigger determines the presence of an aggregate piece in the image, and the boundaries of the piece are delineated by a perimeter-walking routine. Measurements of aspect ratio and minimum curve radius are made on the perimeter array, and are compared to flat and elongated tests, coarse aggregate angularity (uncompacted voids), compacted voids, and fractured face counts.

96 citations
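
A hedged sketch of the two perimeter measurements named above, computed from one binary silhouette with scikit-image: aspect ratio from the fitted ellipse axes, and minimum curve radius from circumcircles of consecutive boundary points. The point spacing and the use of regionprops are choices made here, not the paper's:

```python
import numpy as np
from skimage import measure

def shape_measures(binary):
    region = max(measure.regionprops(measure.label(binary)),
                 key=lambda r: r.area)
    aspect = region.major_axis_length / region.minor_axis_length
    contour = measure.find_contours(binary.astype(float), 0.5)[0]
    radii = []
    for i in range(0, len(contour) - 10, 5):
        p1, p2, p3 = contour[i], contour[i + 5], contour[i + 10]
        a = np.linalg.norm(p2 - p3)
        b = np.linalg.norm(p1 - p3)
        c = np.linalg.norm(p1 - p2)
        area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1]) -
                    (p2[1] - p1[1]) * (p3[0] - p1[0]))
        if area2 > 1e-9:
            radii.append(a * b * c / (2.0 * area2))   # circumradius
    return aspect, min(radii)
```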


Journal ArticleDOI
TL;DR: The problem of waste load allocation (WLA) for water quality management of a river system is addressed with a simulation-optimization approach and the model developed provides the best compromise solutions to the pollution control agency responsible for maintaining the water quality and the dischargers disposing pollutants into the river system.
Abstract: The problem of waste load allocation (WLA) for water quality management of a river system is addressed with a simulation-optimization approach. The WLA model developed in the study provides the best compromise solutions to the pollution control agency (PCA) responsible for maintaining the water quality and the dischargers disposing pollutants into the river system. A previously developed fuzzy waste load allocation model (FWLAM) is extended to incorporate QUAL2E, a water quality simulation model developed by the U.S. Environmental Protection Agency for modeling the pollutant transport in a river. The imprecision associated with establishing water quality standards and the aspirations of the PCA and dischargers are quantified using fuzzy goals with appropriate membership functions. The membership functions of the fuzzy goals represent the variation of the goal satisfaction in the system. A genetic algorithm (GA) is used as an optimization tool to find optimal fraction removal levels for the dischargers and the corresponding satisfaction level. Because a GA is an unconstrained optimization tool, it is extended to handle constraints by complementing it with homomorphous mapping (HM), a constraint handling method for evolutionary algorithms. The GA directs the decision vector in an encoded form to HM. HM, after a few interactions with QUAL2E, redirects the decoded solution back to the GA. The GA assigns a fitness value to the feasible solution vector and applies operators to refine the solution. This interaction among the GA, HM, and QUAL2E continues until a prespecified criterion for global optimality is met. Application of the model is illustrated with a case study of the Tunga-Bhadra River in South India.

67 citations
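
A skeleton of the GA-simulation loop, with two loud simplifications: a one-line stand-in for QUAL2E, and a penalty term in place of homomorphous mapping (HM maps the search cube onto the feasible region rather than penalizing infeasibility). All numbers are placeholders:

```python
import numpy as np

def simulate_quality(removals):                # stand-in for QUAL2E
    return 6.0 + 2.0 * removals.mean()         # e.g., dissolved oxygen, mg/L

def fitness(removals, do_standard=7.0):
    satisfaction = 1.0 - removals.mean()       # dischargers prefer low removals
    shortfall = max(0.0, do_standard - simulate_quality(removals))
    return satisfaction - 10.0 * shortfall     # penalized here, unlike HM

rng = np.random.default_rng(0)
pop = rng.random((40, 5))                      # fraction removal per discharger
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]    # keep the fittest half
    children = np.clip(parents + rng.normal(0, 0.05, parents.shape), 0, 1)
    pop = np.vstack([parents, children])
```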


Journal ArticleDOI
TL;DR: The slope stability problem is used to demonstrate that the present method can efficiently generate mathematical models for predicting the behavior of complex engineering systems.
Abstract: Based on genetic algorithm and genetic programming, a new evolutionary algorithm is developed to evolve mathematical models for predicting the behavior of complex systems. The input variables of the models are the property parameters of the systems, which include the geometry, the deformation, the strength parameters, etc. On the other hand, the output variables are the system responses, such as displacement, stress, factor of safety, etc. To improve the efficiency of the evolution process, a two-step approach is adopted; the two steps are the structure evolution and parameter optimization steps. In the structure evolution step, a family of model structures is generated by genetic programming. Each model structure is a polynomial function of the input variables. An interpreter is then used to construct the mathematical expression for the model through simplification, regularization, and rationalization. Furthermore, necessary internal model parameters are added to the model structures automatically. For each model structure, a genetic algorithm is then used to search for the best values of the internal model parameters in the parameter optimization step. The two steps are repeated until the best model is evolved. The slope stability problem is used to demonstrate that the present method can efficiently generate mathematical models for predicting the behavior of complex engineering systems.

66 citations
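
A toy version of the two-step scheme: candidate polynomial structures are enumerated exhaustively here instead of evolved by genetic programming, and the internal parameters of each structure are fitted by linear least squares in place of the paper's genetic algorithm. Both substitutions are deliberate simplifications:

```python
import itertools
import numpy as np

def evolve_model(X, y, max_terms=3, max_power=2):
    """Search polynomial structures over the columns of X; fit each; keep best."""
    terms = [(j, p) for j in range(X.shape[1])
             for p in range(1, max_power + 1)]
    best = (np.inf, None, None)
    for combo in itertools.combinations(terms, max_terms):   # structure step
        A = np.column_stack([X[:, j] ** p for j, p in combo] +
                            [np.ones(len(X))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)         # parameter step
        err = float(np.sum((A @ coef - y) ** 2))
        if err < best[0]:
            best = (err, combo, coef)
    return best        # (error, structure, fitted internal parameters)
```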


Journal ArticleDOI
TL;DR: In this article, the shape analysis of fine and coarse aggregate particles is investigated using Fourier transform of digital images, and the geometric signature of each shape is extracted by measuring the distance between its centroid and the boundary at constant increment of angles, and Fourier transforms are used to evaluate its spectral information.
Abstract: The shape of aggregates has an important influence on the behavior of civil engineering materials. Digital imaging techniques provide unique opportunities for describing these features in an automated fashion. Shape analysis of fine and coarse aggregate particles is investigated in this study using Fourier transform of digital images. The geometric signature of each shape is extracted by measuring the distance between its centroid and the boundary at constant increment of angles, and Fourier transforms are used to evaluate its spectral information. The number of highest amplitude harmonics required for accurate profile regeneration is evaluated. Shape of a given aggregate particle is reconstructed using inverse Fourier transforms considering a limited number of significant harmonics. A parameter that can quantify the error between regenerated and original profiles is proposed. Using this value, two shape parameters are defined to describe the overall shape and the ruggedness of a particle. A procedure of quantitatively describing the roundness/angularity of aggregate shape is presented and extended to three dimensions using orthogonal views.
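
A minimal sketch of the signature-and-reconstruction procedure the abstract describes: sample the centroid-to-boundary distance at equal angle increments, take its Fourier transform, and regenerate the outline from a limited number of leading harmonics. Array shapes and harmonic counts are illustrative:

```python
import numpy as np

def radial_signature(boundary_xy, n_angles=128):
    """Centroid-to-boundary distance sampled at constant angle increments."""
    c = boundary_xy.mean(axis=0)
    d = boundary_xy - c
    theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    order = np.argsort(theta)
    grid = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    return np.interp(grid, theta[order],
                     np.hypot(d[order, 0], d[order, 1]), period=2 * np.pi)

def reconstruct(r, n_harmonics=8):
    """Regenerate the profile keeping only the leading harmonics."""
    spec = np.fft.rfft(r)
    spec[n_harmonics + 1:] = 0.0
    return np.fft.irfft(spec, n=len(r))
```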

Journal ArticleDOI
TL;DR: In this paper, a computational platform for predicting the lifetime system reliability profiles for different structure types located in an existing network is presented, and numerical examples of three existing bridges (i.e., a steel, a prestressed concrete and a hybrid steel-concrete bridge) are briefly presented to demonstrate the capabilities of the proposed computational platform.
Abstract: This paper presents a computational platform for predicting the lifetime system reliability profiles for different structure types located in an existing network. The computational platform has the capability to incorporate time-variant live load and resistance models. Following a review of the theoretical basis, the overall architecture of the computational platform is described. Finally, numerical examples of three existing bridges (i.e., a steel, a prestressed concrete, and a hybrid steel-concrete bridge) located in a network are briefly presented to demonstrate the capabilities of the proposed computational platform.
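
The platform's time-variant load and resistance models are not reproduced here, but a minimal sketch shows what a lifetime reliability profile computes: with normally distributed resistance R(t) and load effect S, the reliability index is beta(t) = (mu_R(t) - mu_S) / sqrt(sigma_R^2 + sigma_S^2). The degradation law and all numbers below are illustrative:

```python
import numpy as np

def beta_profile(years, mu_R0=2000.0, decay=0.005,
                 cov_R=0.10, mu_S=1000.0, cov_S=0.20):
    mu_R = mu_R0 * np.exp(-decay * years)         # degrading mean resistance
    sigma = np.hypot(cov_R * mu_R, cov_S * mu_S)  # combined standard deviation
    return (mu_R - mu_S) / sigma

t = np.arange(0, 76)          # e.g., a 75-year service life
beta = beta_profile(t)        # reliability index profile over time
```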

Journal ArticleDOI
TL;DR: Multiple or unusual change orders often cause productivity losses through a ripple effect, or cumulative impact of changes, and this has been recognized by many courts and administrative boards.
Abstract: Multiple or unusual change orders often cause productivity losses through a ripple effect or cumulative impact of changes. Many courts and administrative boards recognize that there is a cumulative impact of changes.

Journal ArticleDOI
TL;DR: An automated quality assessment technique is proposed for rapidly detecting excessive size variations during the production of stone aggregates and could potentially help to determine, in an accurate and fast (real-time) manner, when adjustments or repairs to the production equipment are needed.
Abstract: An automated quality assessment technique is proposed for rapidly detecting excessive size variations during the production of stone aggregates. The system uses a laser profiler to scan collections of aggregate particles and obtain three-dimensional data points on the particle surfaces. For computational efficiency, the resulting data are converted into digital images. Wavelet transforms are then applied to the images to extract features indicative of the material gradation. These wavelet-based features are used as inputs to an artificial neural network, which is trained to classify the aggregate sample. Taken together, these components form a neural network-based classification system that can determine whether or not an aggregate product is in compliance with a given specification. Verification tests show that this approach could potentially help to determine, in an accurate and fast (real-time) manner, when adjustments or repairs to the production equipment are needed.

Journal ArticleDOI
TL;DR: In this paper, a method for determination of average grain size from images of fairly uniform particle size soil masses is presented, which utilizes two-dimensional wavelet decomposition of gray scale images.
Abstract: A method for determination of average grain size from images of fairly uniform particle size soil masses is presented. The procedure utilizes two-dimensional wavelet decomposition of gray scale images. Earlier attempts to quantify grain sizes based on the statistics of co-occurrence matrices suffered from dependence on the illumination intensity and soil color. By normalizing the energy distribution from wavelet decomposition the effects of these previously problematic factors have been eliminated. A general relationship between the center of area beneath the normalized energy distribution and the perceived particle size in pixels per diameter (PPD) is established. A sample problem demonstrates that the proposed wavelet decomposition method provides accurate grain sizes for a wide range of magnification levels as long as the resulting PPD is between approximately 1 and 50.
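
A sketch of the normalized-energy computation, assuming PyWavelets; the calibration that maps the energy centroid to pixels per diameter is not reproduced, so that final relation is left as a placeholder:

```python
import numpy as np
import pywt

def energy_centroid(gray_image, wavelet="db4", levels=5):
    coeffs = pywt.wavedec2(gray_image, wavelet, level=levels)
    e = np.array([sum(float(np.sum(d**2)) for d in details)
                  for details in coeffs[1:]])   # detail energy per level
    e = e / e.sum()                             # normalized energy distribution
    level_axis = np.arange(1, len(e) + 1)
    # A calibrated relation (not shown) would map this centroid to PPD.
    return float(np.sum(level_axis * e))        # center of area over levels
```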

Journal ArticleDOI
TL;DR: In this article, an image-based particle tracking method was used to define the strain distribution in various geosynthetics during wide-width tensile testing, which used a block-based matching algorithm functioning under LABVIEW.
Abstract: Determining the deformation response of geosynthetics under load is important in developing an in-depth understanding of the engineering behavior of these materials. Current strain determination methods employed as part of tensile tests mostly assume that the strain is uniform throughout the specimen and, hence, are incapable of determining local strains. Geosynthetics have occasionally been instrumented with strain gauges and extensometers; however, these direct contact methods have limitations in fully defining strain distributions in a test specimen. Recent technological advancements in image analysis offer great potential for a more accurate and noncontact method of determining strains. An image-based particle tracking method was used to define the strain distribution in various geosynthetics during wide-width tensile testing. The method used a block-based matching algorithm functioning under LABVIEW. The measured gross strain values were compared to those determined from strain gauges and extensometers. The strain values determined by these methods were comparable to the image-based ones, and the absolute value of the difference was less than 10% for the geosynthetics tested. Furthermore, the image-based analysis was effective in also determining the local strains.

Journal ArticleDOI
TL;DR: The use of ANN models to predict the forecast errors of physically based models can significantly improve the predictions and therefore reduce the associated uncertainty.
Abstract: This paper presents an approach for handling uncertainties arising mainly from ignored or misrepresented processes in physically based models. The approach is based on the application of a parallel artificial neural network (ANN) model that uses state variables, input and output data, and previous model errors at specific time steps to predict the errors of a physically based model. Concepts from information theory are used to discover the relationships between the variables and the model errors, which also serves as a mechanism to detect the predictability of the errors. The resulting information is used to select the best related input data for the error prediction model. The error prediction model is then trained and applied to improve the forecasts made by the physically based model. This approach was applied to a routing model of a 70 km reach of the River Wye, United Kingdom. The results demonstrate that errors from the physically based model show a consistent trend governed by some dynamics of their own, which can be modeled with learning algorithms. Errors were forecasted at different lead times. In all cases the forecasts made by the combined application of both models were more accurate than those made by the physically based model alone. From this it was concluded that, along with proper information analysis techniques, the use of ANN models to predict the forecast errors of physically based models can significantly improve the predictions and therefore reduce the associated uncertainty.
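
A hedged sketch of the complementary error model: a small network is trained on lagged model errors (and, here, one lagged input series) to forecast the next error, which is then added to the physical model's forecast. Lag counts, layer sizes, and variable names are illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_error_model(model_errors, inputs, lags=3):
    """model_errors, inputs: 1D arrays aligned in time."""
    X, y = [], []
    for t in range(lags, len(model_errors)):
        X.append(np.r_[model_errors[t - lags:t], inputs[t - lags:t]])
        y.append(model_errors[t])
    return MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                        random_state=0).fit(np.array(X), np.array(y))

# Corrected forecast = physical-model forecast + predicted error.
```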

Journal ArticleDOI
TL;DR: This paper describes construction site-based project management tasks and demonstrates how navigational models can facilitate efficient data access and data collection processes by customizing the presented information for a given task and environment of a user.
Abstract: Integrated product and process models have started gaining acceptance in the construction industry and it is conceivable that in the near future project data will be contained in these models. A project model can contain large data sets making it harder for users to navigate. This challenge is even more difficult in cases where mobile computing is used on site for accessing and collecting data needed for a construction management task. Navigational models are constructs that provide construction personnel, who are using mobile computing applications on construction sites, with information and data collection support relevant to their tasks and environments. Navigational models provide a flexible and dynamic way of grouping and structuring entities of product and process models such that those entities that need to be related for one task are linked directly to minimize the navigation through a given model. Moreover, navigational models provide a way to structure the data contained in product and process models in hierarchies facilitating interaction with entities at multiple levels of detail. In this paper, we describe construction site-based project management tasks and demonstrate how navigational models can facilitate efficient data access and data collection processes by customizing the presented information for a given task and environment of a user.

Journal ArticleDOI
TL;DR: It is concluded that the robust search capability of the shuffled complex evolution technique is well suited for solving the combinatorial problems in network level infrastructure preservation works programming.
Abstract: Practical optimization of infrastructure preservation works programming has always posed a computational challenge due to the complexity and scale of the problem. Critical to the process is formulations in which the identity of individual projects is preserved. This requirement leads to exponential growth of solution space, often resulting in an unmanageable process using traditional analytical optimization techniques. In this paper, we propose an evolutionary-based multiyear optimization procedure for solving network level infrastructure works programming problems using a relatively new concept known as the shuffled complex evolution algorithm. A case study problem is analyzed to illustrate the robustness of the technique. The findings show convergence characteristics of the solution and demonstrate that the algorithm is very efficient and consistent in simultaneous consideration of the trade-off among various infrastructure preservation strategies. It is concluded that the robust search capability of the shuffled complex evolution technique is well suited for solving the combinatorial problems in network level infrastructure preservation works programming.

Journal ArticleDOI
TL;DR: The neurofuzzy framework enables integration of the objective and subjective factors found in the underlying decision-making process, and serves as the stepping stone for the generation of the multidimensional probability distribution function that governs competitive bidding.
Abstract: The paper presents a methodology for arriving at optimum bid markups in static competitive bidding environments by use of neurofuzzy systems and integrated multidimensional risk analysis algorithms. The neurofuzzy framework enables integration of the objective and subjective factors found in the underlying decision-making process, and serves as the stepping stone for the generation of the multidimensional probability distribution function that governs competitive bidding. Subsequent bid optimization is achieved by employing a multidimensional risk analysis algorithm.

Journal ArticleDOI
TL;DR: In this paper, an informatization index for the construction industry (IICI) is developed based on the specifics of the industry and a survey using IICI was conducted among general contractors in Korea, and the results are analyzed in terms of the measure of assessment, IS phases, construction business functions, and size of the firm.
Abstract: It is generally recognized that information systems (IS) in the construction industry have not been sufficiently used in the era of information. However, so far no serious comprehensive effort has been made to measure the degree of informatization at the industry level. In order to address this problem, in this paper we propose an informatization assessment methodology for the construction industry. An informatization index for the construction industry (IICI) is developed based on the specifics of the construction industry. A survey using the IICI was conducted among general contractors in Korea, and the results are analyzed in terms of the measure of assessment, IS phases, construction business functions, and size of the firm. It is found that the proposed methodology can provide meaningful indicators that can be used in quantitative comparative assessment from many different perspectives. Details and implications of the case study are briefly presented.

Journal ArticleDOI
TL;DR: Improved search and rapid convergence are obtained by considering the lattice tower as a set of small objects and combining these objects into a system, using genetic algorithms together with an object-oriented approach.
Abstract: A new approach is presented for the optimization of steel lattice towers by combining genetic algorithms and an object-oriented approach. The purpose of this approach is to eliminate the difficulties in the handling of large size problems such as lattice towers. Improved search and rapid convergence are obtained by considering the lattice tower as a set of small objects and combining these objects into a system. This is possible with serial cantilever structures such as lattice towers. A tower consists of panel objects, which can be classified as separate objects, as they possess an independent property as well as inherent properties. This can considerably reduce the design space of the problem and enhance the result. An optimization approach for the steel lattice tower problem using objects and genetic algorithms is presented here. The paper also describes the algorithm with practical design considerations used for this approach. To demonstrate the approach, a typical tower configuration with practical constraints has been considered for discrete optimization with the new approach and compared with the results of a normal approach in which the full tower is considered.

Journal ArticleDOI
TL;DR: In this article, the authors establish a qualitative checklist of the expected costs and benefits for precast construction, propose hypotheses for estimating them, and present data to support an initial assessment of the magnitude of the short-term factors.
Abstract: Sophisticated three-dimensional parametric modeling software for design and detailing of precast/prestressed concrete construction is currently under development. The technology holds the potential to reduce costs, shorten lead times, and avoid errors in production and erection. To date, however, no rational assessment has been made of the costs and benefits of adoption. No standard methodology exists for the assessment of the benefits of information technology in the construction industry. This paper establishes a qualitative checklist of the expected costs and benefits for precast construction, proposes hypotheses for estimating them, and presents data to support initial assessment of the magnitude of the short-term factors. It also establishes a benchmark of engineering costs for North American precast companies. The benchmark is intended for ongoing assessment of the planned integration of the technology in the member companies of the North American Precast Concrete Software Consortium.

Journal ArticleDOI
TL;DR: The results of two surveys conducted by the American Society of Civil Engineers' Task Committee on Computing Education of the Technical Council on Computing and Information Technology to assess the current computing component of the curriculum in civil engineering are presented.
Abstract: This paper presents the results of two surveys conducted by the American Society of Civil Engineers' Task Committee on Computing Education of the Technical Council on Computing and Information Technology to assess the current computing component of the curriculum in civil engineering. Previous surveys completed in 1989 and 1995 have addressed the question of what should be taught to civil engineering students regarding computing. The surveys reported in this paper are a follow-up study to the two earlier surveys. Key findings of the study include: (1) the relative importance of the top four skills (spreadsheets, word processors, computer-aided design, electronic communication) has remained unchanged; (2) programming competence is ranked very low by practitioners; (3) the importance and use of geographic information system and specialized engineering software have increased over the past decade; (4) the importance and use of expert systems have significantly decreased over the past decade; and (5) the importance and use of equation solvers and databases have declined over the past decade.

Journal ArticleDOI
TL;DR: In this article, the authors presented a methodology of generating quick seismic response estimations of a prestressed concrete (PC) bridge using artificial neural networks (ANNs), which may be incorporated in a seismic early warning system for the bridge.
Abstract: Seismic early warning has been very important and has become feasible in Taiwan. Perhaps because of the lack of quick and reliable estimations of the induced structural response, however, the triggering criteria of almost all of the existing earthquake protection or early warning systems in the world are merely based on the collected or estimated data of the ground motion, without any information regarding the structural response. This paper presents a methodology of generating quick seismic response estimations of a prestressed concrete (PC) bridge using artificial neural networks (ANNs), which may be incorporated in a seismic early warning system for the bridge. In the methodology ANNs were applied to model the critical structural response of a PC bridge subjected to earthquake excitation of various magnitudes along various directions. The objective was to implement a well-trained network that is capable of providing a quick prediction for the critical response of the target bridge. The well-known multilayer perceptron (MLP) networks with the back-propagation algorithm were employed. A simple augmented form of the MLP that can be quantitatively determined was proposed. These networks were trained and tested based on the analytical data obtained from the nonlinear dynamic finite fiber element analyses of the target PC bridge. The augmented MLPs were found to be much more efficient than the MLPs in modeling the critical bending moments of the piers and girder of the PC bridge.

Journal ArticleDOI
TL;DR: In this paper, a semiactive control strategy that combines a neurocontroller with a smart damper is proposed to reduce the seismic responses of structures; it is fail-safe in that the bounded-input, bounded-output stability of the controlled structure is guaranteed.
Abstract: A new semiactive control strategy that combines a neurocontrol system with a smart damper is proposed to reduce seismic responses of structures. In the proposed semiactive control system, the improved neurocontroller, which was developed by employing a training algorithm based on a cost function and a sensitivity evaluation algorithm to replace an emulator neural network, produces the desired active control force, and then a bang-bang-type controller clips the control forces that cannot be achieved by a smart damper (e.g., a variable orifice damper, controllable fluid damper, etc.). Therefore, the proposed semiactive control strategy is fail-safe in that the bounded-input, bounded-output stability of the controlled structure is guaranteed. Numerical simulation results show that the proposed semiactive control system that employs a neural network-based control algorithm is quite effective in reducing seismic responses.
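
The clipping logic is the heart of the fail-safe claim: a damper can only exert force opposing its own piston velocity, so commands outside that set are clipped. A minimal sketch, with the bang-bang rule reduced to pass-through-or-zero and the force limit as a placeholder:

```python
def clipped_damper_force(f_desired, velocity, f_max):
    """Clip a desired active force to what a semiactive damper can deliver."""
    if f_desired * velocity < 0.0:                 # dissipative: achievable
        return max(-f_max, min(f_max, f_desired))
    return 0.0                                     # non-dissipative: command zero
```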

Journal ArticleDOI
TL;DR: In this article, a neural network model is used to simulate the inelastic deflections of the consistent procedure (CP) from the results of the approximate procedure (AP) for a class of reinforced concrete (RC) frames.
Abstract: A recently developed accurate procedure, termed the consistent procedure (CP), for evaluation of creep and shrinkage behavior in reinforced concrete (RC) frames is elaborate and requires large computational effort. An approximate procedure (AP) that has been available and widely used is simple and requires much less computational effort but can be erroneous. The feasibility of using a neural network model to simulate the inelastic deflections of CP from the results of AP for a class of RC frames is investigated. This model would enable rapid estimation of inelastic deflections of CP and would be useful at the planning stage. For this purpose, a ratio η of inelastic deflections of CP to corresponding deflections of AP, designated as the inelastic deflection ratio, is defined as the output parameter. The sensitivity of η to the probable structural parameters in the practical range of values is studied and the governing input parameters identified. The training is carried out for a practical range of the governing structural parameters. The trained network is validated for a number of example buildings.

Journal ArticleDOI
TL;DR: The writers' method provides automated assistance to domain experts in setting up constraints for data behavior; these constraints proved useful for automatic detection of semantic anomalies, i.e., unexpected behavior, in the Mn/ROAD monitoring data.
Abstract: Monitoring data from event-based monitoring systems are becoming more and more prevalent in civil engineering. An example is truck weigh-in-motion (WIM) data. These data are used in the transportation domain for various analyses, such as analyzing the effects of commercial truck traffic on pavement materials and designs. It is important that such analyses use good quality data or at least account appropriately for any deficiencies in the quality of data they are using. Low quality data may exist due to problems in the sensing hardware, in its calibration, or in the software processing the raw sensor data. The vast quantities of data collected make it infeasible for a human to examine all the data. The writers propose a data mining approach for automatically detecting semantic anomalies, i.e., unexpected behavior in monitoring data. The writers' method provides automated assistance to domain experts in setting up constraints for data behavior. The effectiveness of this method is shown by reporting its successful application to data from an actual WIM system, using the experimental data collected by the Minnesota Department of Transportation at its Minnesota Road Research Project (Mn/ROAD) facilities. The constraints the expert set up by applying this method were useful for automatic anomaly detection over the Mn/ROAD data, i.e., they detected anomalies the expert cared about, e.g., unlikely vehicles and erroneously classified vehicles, and the misclassification rate was reasonable for a human to handle (usually less than 3%). Moreover, the expert gained insights about system behavior, such as realizing that a system-wide change had occurred. The constraints detected, for example, periods in which the WIM system reported that roughly 20% of the vehicles classified as three-axle single-unit trucks had only one axle.
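
A sketch of one constraint of the kind quoted above, checked per time period with pandas; the column names, monthly grouping, and 5% threshold are hypothetical:

```python
import pandas as pd

def flag_axle_anomalies(wim: pd.DataFrame, threshold=0.05):
    """Flag months where 'three-axle' records too often have a wrong axle count."""
    trucks = wim[wim["vehicle_class"] == "3-axle-single-unit"]
    wrong = trucks["axle_count"] != 3
    rate = wrong.groupby(trucks["timestamp"].dt.to_period("M")).mean()
    return rate[rate > threshold]        # periods violating the constraint
```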

Journal ArticleDOI
TL;DR: In this article, the authors present a framework to manage the life-cycle cost of smart infrastructure systems, which includes a core model for evaluating the life cycle cost of civil infrastructure systems equipped with smart materials (fiber-reinforced concrete, sensor-embedded materials, etc.).
Abstract: Smart infrastructure systems life-cycle costing has not received much attention from researchers, despite its considerable potential and proven success. This paper presents a framework to manage the life-cycle cost of these systems. The framework includes a core model for evaluating the life-cycle cost of civil infrastructure systems equipped with smart materials (fiber-reinforced concrete, sensor-embedded materials, etc.) or intelligent devices (smart valves, smart signals, etc.). The model identifies the basic cost elements that should be considered when evaluating life-cycle costs. In addition, the model identifies design and managerial factors that influence the values of these costs.
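
A minimal sketch of the core computation such a model rests on: discounting each cost element (initial, maintenance, operation, salvage) to present value. The cost categories follow the framework only loosely, and the figures are placeholders:

```python
def life_cycle_cost(cash_flows, rate=0.04):
    """cash_flows: iterable of (year, amount); returns present value."""
    return sum(amount / (1.0 + rate) ** year for year, amount in cash_flows)

lcc = life_cycle_cost([(0, 1_200_000),   # initial cost, incl. smart devices
                       (10, 80_000),     # maintenance
                       (20, 80_000),     # maintenance
                       (30, -100_000)])  # salvage value
```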

Journal ArticleDOI
TL;DR: It is shown that functional networks are more efficient and powerful and require much less computer time than conventional neural networks such as the back-propagation network.
Abstract: In this paper, functional networks (FN), proposed by Castillo as an alternative to neural networks, are discussed. Unlike neural networks, the functions are learned instead of weights. In general, topology is selected based on data, domain knowledge (properties of the function such as associativity, commutativity, and invariance), or a combination of the two. The objective of this paper is to show the application of some functional network architectures to model and predict the behavior of structural systems which are otherwise modeled in terms of differential or difference equations or in terms of neural networks. In this paper, four examples in structural engineering and one example in mathematics are discussed. The results obtained by functional networks are compared with those obtained by neural networks for the first four examples, and it is shown that functional networks are more efficient and powerful and require much less computer time than conventional neural networks such as the back-propagation network.
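
A toy illustration of what "learning functions instead of weights" means: for a separable model y = f(x1) + g(x2), expand f and g in polynomial bases and fit all basis coefficients at once by linear least squares. The separable form and polynomial bases are assumptions of this sketch, not the paper's architectures:

```python
import numpy as np

def fit_functional_network(x1, x2, y, degree=4):
    # Bases: f(x1) = sum a_k x1^k (k = 0..d); g(x2) = sum b_k x2^k (k = 1..d).
    # The constant term is kept only in f to avoid redundancy.
    A = np.column_stack([x1 ** k for k in range(degree + 1)] +
                        [x2 ** k for k in range(1, degree + 1)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    a, b = coef[:degree + 1], coef[degree + 1:]

    def predict(u, v):
        return (sum(a[k] * u ** k for k in range(degree + 1)) +
                sum(b[k - 1] * v ** k for k in range(1, degree + 1)))
    return predict
```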