Journal ArticleDOI

Performance-Driven Measurement System Design for Structural Identification

01 Jul 2013-Journal of Computing in Civil Engineering (American Society of Civil Engineers)-Vol. 27, Iss: 4, pp 427-436
TL;DR: In this article, the authors describe a methodology that explicitly indicates when instrumentation can hinder the ability to interpret data and use two performance indices to optimize measurement system designs, i.e., monitoring costs and expected identification performance.
Abstract: Much progress has been achieved in the research field of structural identification, which is attributable to a better understanding of uncertainties, improvement in sensor technologies, and cost reductions. However, data interpretation remains a bottleneck. Too often, too much data are acquired, which hinders interpretation. In this paper, the writers describe a methodology that explicitly indicates when instrumentation can hinder the ability to interpret data. The approach includes uncertainties and dependencies that may affect model predictions. The writers use two performance indices to optimize measurement system designs, i.e., monitoring costs and expected identification performance. A case study shows that the approach is able to justify a reduction in monitoring costs of 50% compared with an initial measurement configuration.

Summary (3 min read)

INTRODUCTION

  • Identifying and understanding the behaviour of civil structures based on measurement data is increasingly used for avoiding replacement and strengthening interventions (Catbas et al. 2012).
  • Indeed, in many practical situations, the cost of making sense of data often exceeds by many times the initial cost of sensors.
  • Furthermore, in the case of structural identification, measurement uncertainties are often not the dominant source of uncertainty (Goulet et al. 2010).
  • This paper proposes a computer-aided measurement-system design methodology that includes systematic bias and epistemic uncertainties.

PREDICTING THE PERFORMANCE OF MEASUREMENT SYSTEMS AT FALSIFYING MODELS

  • Starting with the principle that observations can best support the falsification of hypotheses, the error-domain model-falsification approach (Goulet et al. 2010; Goulet and Smith 2011b) uses measurements to falsify model instances.
  • Model instances are falsified if at any location i, the difference between predicted (gi(θ)) and measured (yi) values lies outside the interval defined by threshold bounds (Ti,Low, Ti,High) that are based on modeling and measuring uncertainties.
  • The projection of threshold bounds including a probability φ′, determined for each combined uncertainty pdf, defines a square region that includes a probability content φ.
  • These dependencies are described by correlation coefficients.
  • The methodology produces a cumulative distribution function (cdf) that describes the certainty of obtaining any number of candidate models if measurements are taken on the structure.

MEASUREMENT-SYSTEM DESIGN

  • The expected identifiability described in the previous section is used as a performance metric to optimize the efficiency of measurement systems.
  • The methodology also uses measurement-system cost as a second objective for optimizing measurement systems.
  • Equation 2 presents the number of possible sensor configurations.
  • In order to obtain optimized solutions efficiently, advanced search algorithms are necessary.

Methodology

  • The methodology used to design efficient measurement systems is based on a Greedy algorithm.
  • At each iteration, it identifies the measurement that can be removed from an initial configuration containing N measurements while minimizing the expected number of candidate models; a minimal code sketch follows this list.
  • The results from measurement-system optimization are returned in a two-objective graph as presented in Figure 6 and in a table containing the details of each measurement-system configuration.
  • Beyond an optimal number of measurements, additional measurements may decrease the efficiency of the identification by increasing the expected number of candidate models (i.e., reducing the number of falsified models).
  • Threshold corrections ensure that the reliability of the identification meets the target φ when multiple measurements are used simultaneously to falsify model instances.
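The following is a minimal sketch of this greedy backward-elimination loop, in Python. The function name expected_candidate_models and the data structures are illustrative assumptions, not the paper's implementation; any scoring routine that estimates the expected number of candidate models for a sensor configuration could be plugged in.

```python
# Hedged sketch of greedy sensor removal (backward elimination).
# `expected_candidate_models` stands in for the paper's
# expected-identifiability evaluation (an assumption, not its API).

def greedy_sensor_removal(initial_config, expected_candidate_models):
    """At each iteration, remove the sensor whose removal minimizes
    the expected number of candidate models."""
    config = list(initial_config)
    history = [(tuple(config), expected_candidate_models(config))]
    while len(config) > 1:
        # Score every single-sensor removal from the current set.
        scored = [
            (expected_candidate_models(config[:i] + config[i + 1:]), i)
            for i in range(len(config))
        ]
        best_score, best_index = min(scored)
        del config[best_index]
        history.append((tuple(config), best_score))
    return history  # one (configuration, score) pair per iteration

# Toy scoring function, for illustration only: pretends that a
# 4-sensor configuration minimizes the expected candidate-model count.
toy_score = lambda cfg: abs(len(cfg) - 4) + 1
for cfg, score in greedy_sensor_removal(
        ["s1", "s2", "s3", "s4", "s5", "s6"], toy_score):
    print(len(cfg), "sensors -> expected candidate models:", score)
```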

Complexity

  • For static monitoring, if one load case is possible, the Greedy algorithm performs the measurement-system optimization in fewer than nm²/2 iterations, where nm is the maximal number of measurements.
  • Figure 8 compares the number of iterations required with the number of sensor combinations possible for one load-case.
  • It shows that the Greedy algorithm complexity (O(nm²)) leads to a number of sensor combinations to test that is significantly smaller than the number of possible combinations (O(2^nm)); a numeric comparison is sketched below.
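A quick arithmetic check of this gap, assuming one score evaluation per candidate removal so that a full greedy run costs at most nm(nm−1)/2 < nm²/2 evaluations (the exact tally depends on how iterations are counted), against the 2^nm − 1 non-empty sensor subsets an exhaustive search would face:

```python
# Greedy evaluations (O(nm^2)) vs. exhaustive subset count (O(2^nm)).
for nm in (5, 10, 15, 20):
    greedy_evals = nm * (nm - 1) // 2  # one plausible counting, < nm**2 / 2
    exhaustive = 2**nm - 1             # every non-empty sensor subset
    print(f"nm={nm:2d}: greedy <= {greedy_evals:4d}   "
          f"exhaustive = {exhaustive:,}")
```

For nm = 20 this is 190 evaluations against 1,048,575 subsets, which is why exhaustive enumeration quickly becomes impractical.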

CASE-STUDY

  • The measurement-system design methodology presented in the previous section is used to optimize the monitoring system for investigating the behavior of a full-scale structure using static load-tests.
  • The Langensand Bridge is located in Lucerne, Switzerland.
  • The primary parameters to identify are the concrete Young’s modulus for the slab poured during construction phases one and two, the asphalt Young’s modulus for phases one and two, and the stiffness of the horizontal restriction that could take place at the longitudinally free bearing devices.
  • The initial measurement system to be optimized is composed of ten displacement, four rotation and five strain sensors.
  • Each test-truck weighs 35 tons and each test load-case takes two hours.

Modeling and measurement uncertainties

  • Several random and epistemic uncertainty sources affect the interpretation of data.
  • These numbers are based on uncertainties defined by Goulet and Smith (2012) for a first study performed on the structure during its construction phase.
  • This distribution is made of several orders of uniform distribution, each representing the uncertainty associated with the bound position.
  • The correlation between predictions originating from secondary-parameter uncertainties is implicitly provided when uncertainties are propagated through the finite-element template model.
  • Therefore, specific threshold bounds are computed for each sensor configuration.

Measurement-system design results

  • Measurement-system optimization is performed according to two criteria: load-test costs and the expected number of candidate models.
  • This quantitatively shows a principle, intuitively known by engineers, that too much measurement data may hinder interpretation.
  • The sensor and load-case configurations associated with each dot in Figure 13 are reported in Table 3.
  • The best measurement system found uses 4 sensors with 3 load-cases and would result in almost 80% of model instances being falsified.
  • This measurement-system configuration is halfway between the cheapest and most expensive measurement systems.

DISCUSSION

  • Results presented here indicate that over-instrumenting a structure is possible.
  • The methodology presented in this paper can be used with optimization methodologies other than the Greedy algorithm.
  • As already noted by Goulet and Smith (2011c), the global trend of over-instrumentation is independent of the optimization technique.
  • Furthermore, for this case, a sensitivity analysis has shown that the effect of single sensor removal is dominant over the effect of the interaction caused by multiple sensor removal.
  • Therefore, stochastic search algorithms are not expected to provide better optimization results.

CONCLUSIONS

  • Computer-aided measurement-system design supports cost minimization while maximizing the expected efficiency of identifying the behaviour of structures.
  • The criteria used to falsify models (threshold bounds) are dependent upon the number of measurements used.
  • If too many measurements are used, data-interpretation can be hindered by over-instrumentation.
  • The measurement-system design methodology can be used to determine good tradeoffs with respect to interpretation goals and available resources.
  • Further work is under way to establish the usefulness of greedy sensor removal with respect to stochastic search methods for a range of cases.




Title: Performance-driven measurement system design for structural identification
Authors: James-A. Goulet and Ian F.C. Smith
Date: 2013
Type: Journal article
Citation: Goulet, J.-A. & Smith, I. F. C. (2013). Performance-driven measurement system design for structural identification. Journal of Computing in Civil Engineering, 27(4), p. 427-436. doi:10.1061/(asce)cp.1943-5487.0000250
Open Access document in PolyPublie
PolyPublie URL: https://publications.polymtl.ca/2888/
Version: Accepted version / Refereed
Terms of Use: All rights reserved
Document issued by the official publisher
Journal Title: Journal of Computing in Civil Engineering
Publisher: ASCE
Official URL: https://doi.org/10.1061/(asce)cp.1943-5487.0000250
Legal notice: "Authors may post the final draft of their work on open, unrestricted Internet sites or deposit it in an institutional repository when the draft contains a link to the bibliographic record of the published version in the ASCE Library or Civil Engineering Database"
This file has been downloaded from PolyPublie, the institutional repository of Polytechnique Montréal: http://publications.polymtl.ca

PERFORMANCE-DRIVEN MEASUREMENT-SYSTEM
DESIGN FOR STRUCTURAL IDENTIFICATION
James-A. Goulet, A.M. ASCE,¹ and Ian F. C. Smith, F. ASCE²

¹ Ph.D. Student, IMAC, École Polytechnique Fédérale de Lausanne (EPFL), School of Architecture, Civil and Environmental Engineering ENAC, Lausanne, Switzerland (corresponding author). E-mail: James.A.Goulet@gmail.com
² Professor, IMAC, École Polytechnique Fédérale de Lausanne (EPFL), School of Architecture, Civil and Environmental Engineering ENAC, Lausanne, Switzerland. E-mail: Ian.Smith@epfl.ch
Abstract
Much progress has been achieved in the field of structural identification due to a better understanding of uncertainties, improvement in sensor technologies and cost reductions. However, data interpretation remains a bottleneck. Too often, too much data is acquired, thus hindering interpretation. In this paper, a methodology is described that explicitly indicates when instrumentation can decrease the ability to interpret data. The approach includes uncertainties along with dependencies that may affect model predictions. Two performance indices are used to optimize measurement system designs: monitoring costs and expected identification performance. A case-study shows that the approach is able to justify a reduction in monitoring costs of 50% compared with an initial measurement configuration.
Keywords: Computer-aided design, measurement system, sensor placement, uncertainties, dependencies, expected identifiability, system identification, monitoring
INTRODUCTION
Identifying and understanding the behaviour of civil structures based on measurement data is increasingly used for avoiding replacement and strengthening interventions (Catbas et al. 2012). Much progress has been achieved in the field of structural identification due to a better understanding of uncertainties as well as improvements in sensing technologies and data-acquisition systems (Fraser et al. 2010). However, data interpretation remains a bottleneck. Indeed, in many practical situations, the cost of making sense of data often exceeds by many times the initial cost of sensors. Brownjohn (2007) noted that currently, there is a tendency toward over-instrumentation of monitored structures. This challenge often becomes critical when monitoring over long periods. There is a need to determine which measurements are useful to determine the behaviour of a system. Intuitively, engineers measure structures where the largest response is expected. This ensures that the ratio between the measured value
and the measurement error is the largest. However, these locations may not be those that support structural identification in the best way. Furthermore, in the case of structural identification, measurement uncertainties are often not the dominant source of uncertainty (Goulet et al. 2010).
Robert-Nicoud et al. (2005) proposed a multi-model approach combined with an entropy-based sensor-placement methodology. The application of the methodology by Kripakaran and Smith (2009) to a bridge case-study showed that a saturation of the amount of useful information can be expected. At some point, adding sensors did not reduce the number of models that were able to explain measured behaviour. In the field of structural engineering, the concept of entropy-based sensor placement was also explored by other researchers (Yuen et al. 2001; Papadimitriou 2004; Papadimitriou et al. 2005; Papadimitriou 2005). These applications also showed that information saturation increased as sensors were added. Many other researchers (Cherng 2003; Kammer 2005; Kang et al. 2008; Liu et al. 2008; Liu and Danczyk 2009) have made proposals that involve maximizing the amount of information contained in dynamic monitoring signals. Sensor placement is an active field of research in domains such as water distribution networks (Krause and Guestrin 2009) and traffic monitoring (Liu and Danczyk 2009). Also, a few researchers have used utility-based metrics to design measurement systems. For instance, Pozzi and Der Kiureghian (2011) have proposed economic criteria instead of sensor information accuracy to plan monitoring interventions. These authors observed that the “value of a piece of information depends on its ability to guide our decisions”. This supports the idea that measurement systems should be designed according to measurement goals. Currently there is a lack of systematic methodologies to design measurement systems for a range of measurement goals.
An additional limitation of existing approaches is that most do not explicitly account for systematic bias introduced by epistemic uncertainties. Civil-structure models are prone to epistemic errors that arise from unavoidable simplifications and omissions. Epistemic errors often introduce varying systematic bias over multiple prediction locations and prediction quantities. These effects should be explicitly incorporated in the process of designing measurement systems. Recently, Papadimitriou and Lombaert (2012) have explored the influence of error dependencies on measurement-system design. Also, Goulet and Smith (2011b) proposed an identification methodology that includes systematic bias and epistemic uncertainty dependencies. The methodology was used for predicting the usefulness of monitoring for identifying the parameters characterizing the behavior of a bridge structure (Goulet and Smith 2012). However, the potential of this methodology for optimizing measurement systems was not explored.
This paper proposes a computer-aided measurement-system design methodology that includes systematic bias and epistemic uncertainties. The objective functions used by the methodology include the capacity to falsify models and measurement-system cost. The first section summarizes the error-domain model-falsification methodology
and the expected identifiability performance metric. The following section describes
the measurement-system design methodology and a case-study is presented in the last
section.
PREDICTING THE PERFORMANCE OF MEASUREMENT SYSTEMS AT
FALSIFYING MODELS
Starting with the principle that observations can best support the falsification of hypotheses, the error-domain model-falsification approach (Goulet et al. 2010; Goulet and Smith 2011b) uses measurements to falsify model instances. A model instance is defined as a set of np parameters θ = [θ1, θ2, . . . , θnp] characterizing the behaviour of a structural model. These parameters are usually associated with boundary conditions, material properties and the geometry of a structure. Model instances are falsified if at any location i, the difference between predicted (gi(θ)) and measured (yi) values lies outside the interval defined by threshold bounds (Ti,Low, Ti,High) that are based on modeling and measuring uncertainties. Models are falsified if the condition presented in Equation 1 is not satisfied. In this equation, nm is the number of measurements.

∀ i = 1, . . . , nm :  Ti,Low ≤ gi(θ) − yi ≤ Ti,High    (1)
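As a minimal illustration of Equation 1, the check below keeps a model instance only if every residual gi(θ) − yi stays within its threshold interval; all numbers are invented for the example and the vectorized form is an assumption of this sketch, not the paper's code.

```python
import numpy as np

def is_candidate(predictions, measurements, t_low, t_high):
    """Equation 1: a model instance survives only if, at every
    comparison point i, T_i,Low <= g_i(theta) - y_i <= T_i,High."""
    residuals = np.asarray(predictions) - np.asarray(measurements)
    return bool(np.all((residuals >= t_low) & (residuals <= t_high)))

# Invented values: predictions and measurements at three locations.
g_theta = [1.20, 0.85, 2.10]
y = [1.00, 0.90, 2.00]
t_low, t_high = -0.3, 0.3
print(is_candidate(g_theta, y, t_low, t_high))  # True: not falsified
```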
The combined uncertainty distribution and its threshold bounds are presented in Figure 1. This distribution is obtained by evaluating modelling and measuring uncertainty sources separately and then combining them together. Details regarding numerical uncertainty combination techniques are presented in ISO guidelines (2008). Threshold bounds define the smallest intervals that include a probability content φ ∈ ]0, 1], defined by users. Details regarding computation of threshold bounds are presented in Goulet and Smith (2012).
Figure 1. Threshold bounds based on modeling and measuring uncertainties
When nm measurements are used, it is conservative to define threshold bounds to include a probability content φ′ = φ^(1/nm) for the combined uncertainty associated with each comparison point. A comparison point is a location where predicted and measured values are compared. Each time a measurement is added, threshold bounds are
adjusted (widened) to include the additional possibility of wrongly falsifying a correct model instance. This is illustrated in Figure 2, where the combined uncertainties for two comparison points are presented in a multivariate probability density function (pdf). The projection of threshold bounds including a probability φ′, determined for each combined uncertainty pdf, defines a square region that includes a probability content φ. Thus, if for any comparison point, the difference between predicted and measured values falls outside this region, the model instance is discarded with a probability 1 − φ of performing a wrong diagnosis (discarding a correct model). Šidák (1967) noted that this procedure leads to conservative threshold bounds regardless of the dependency between the random variables that are used to represent uncertainties.
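A sketch of this correction, assuming threshold bounds are taken as the shortest interval containing a probability content φ′ = φ^(1/nm) of a Monte Carlo sample of the combined uncertainty; the sample distribution is invented and the shortest-interval routine is a generic stand-in for the computation detailed in Goulet and Smith (2012).

```python
import numpy as np

def sidak_level(phi, n_m):
    """Per-comparison-point probability content: phi' = phi**(1/n_m)."""
    return phi ** (1.0 / n_m)

def shortest_interval(samples, phi_prime):
    """Shortest interval containing a fraction phi_prime of samples."""
    s = np.sort(samples)
    k = int(np.ceil(phi_prime * len(s)))
    widths = s[k - 1:] - s[: len(s) - k + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

rng = np.random.default_rng(0)
combined = rng.normal(0.05, 0.1, 100_000)  # invented combined uncertainty
phi = 0.95
for n_m in (1, 5, 10):
    lo, hi = shortest_interval(combined, sidak_level(phi, n_m))
    print(f"nm={n_m:2d}: phi'={sidak_level(phi, n_m):.4f}  "
          f"T_Low={lo:+.3f}  T_High={hi:+.3f}")  # bounds widen with nm
```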
Figure 2. Threshold bounds based on model and measurement uncertainties
Prior to measuring a structure, simulated measurements are generated by subtracting error samples taken in the combined uncertainty distribution from the predictions given by a model instance. Dependencies between prediction uncertainties are included in the process of simulating measurements. These dependencies are described by correlation coefficients. Since little information is available for quantifying these coefficients, a qualitative reasoning formulation is used to describe them. In this formulation, uncertainty correlations are described qualitatively by the labels “low”, “moderate” and “high”, and by labels “positive” or “negative”. These labels correspond to a probability distribution for the correlation value as presented in Figure 3. Each probability density function defines the frequency of the uncertainty correlation values used during the generation of simulated measurements.
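A hedged sketch of this simulation step: a correlation coefficient is sampled from a distribution standing in for one qualitative label, an equicorrelated error vector is drawn, and the errors are subtracted from a model instance's predictions. The triangular pdf and all numbers are placeholders; the paper's label pdfs (Figure 3) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_measurements(predictions, sigma, n_sim=1000):
    """Simulated measurements = predictions - correlated error samples."""
    g = np.asarray(predictions)
    n = len(g)
    sims = np.empty((n_sim, n))
    for j in range(n_sim):
        # Correlation value for, e.g., a "moderate positive" label
        # (placeholder triangular pdf, not the paper's distribution).
        rho = rng.triangular(0.2, 0.5, 0.8)
        cov = sigma**2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
        errors = rng.multivariate_normal(np.zeros(n), cov)
        sims[j] = g - errors
    return sims

sims = simulate_measurements([1.20, 0.85, 2.10], sigma=0.1)
print(sims.shape, sims.mean(axis=0).round(3))  # centred near predictions
```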
Before measuring, an infinite number of parameter sets θ might be acceptable explanations of the structure behavior. The space of possible solutions is represented by a finite number of model instances. These instances are organized in an np-parameter grid that is used to explore the space of possible solutions. Such a grid is named the initial model instance set. An example is presented in Figure 4 for two parameters.
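Such a grid is straightforward to build as a Cartesian product over parameter ranges, as in this sketch; the two parameters and their ranges are invented for illustration.

```python
import itertools
import numpy as np

# Invented ranges: a Young's modulus [GPa] and a bearing stiffness [kN/mm].
E_values = np.linspace(25.0, 45.0, 9)
k_values = np.linspace(0.0, 200.0, 11)

# Initial model instance set: every (E, k) combination on the grid.
initial_set = list(itertools.product(E_values, k_values))
print(len(initial_set), "model instances")  # 9 x 11 = 99
```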
For a measurement and load-case configuration, the goal is to predict probabilistically the expected number of candidate models that should remain in the set if real measurements were taken on the structure.
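Tying the sketches above together, the expected number of candidate models can be estimated by Monte Carlo: for each set of simulated measurements, count the model instances that survive the Equation 1 check, then average the counts. As before, the arrays and bounds are invented and the routine is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def expected_candidates(instance_predictions, simulated_measurements,
                        t_low, t_high):
    """Mean, over simulated measurement sets, of the number of model
    instances surviving the Equation 1 falsification check."""
    counts = [
        sum(bool(np.all((g - y_sim >= t_low) & (g - y_sim <= t_high)))
            for g in instance_predictions)
        for y_sim in simulated_measurements
    ]
    return float(np.mean(counts))

rng = np.random.default_rng(2)
G = rng.normal(1.0, 0.2, size=(99, 3))   # predictions: 99 instances, 3 sensors
Y = rng.normal(1.0, 0.1, size=(200, 3))  # 200 simulated measurement sets
print(expected_candidates(G, Y, -0.25, 0.25))
```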

Citations
Journal ArticleDOI
TL;DR: How error-domain model falsification reveals properties of a structure when uncertainty dependencies are unknown and how incorrect assumptions regarding model-class adequacy are detected is presented.

117 citations

Journal ArticleDOI
TL;DR: A leak-detection and a sensor placement methodology are proposed based on leak-scenario falsification that indicates that when monitoring the flow velocity for 14 pipes over the entire network (295 pipes) leaks are circumscribed within a few potential locations.

109 citations


Cites background from "Performance-Driven Measurement Syst..."

  • ...Keywords: System identification, leak detection, sensor placement, data interpretation, water distribution, uncertainty, error-domain model falsification...


Journal ArticleDOI
TL;DR: It is indicated that Bayesian model class selection may lead to over-confidence in certain model classes, resulting in biased extrapolation, in terms of parameter-identification robustness and extrapolation accuracy.

61 citations


Cites methods from "Performance-Driven Measurement Syst..."

  • ...Model falsification has also been applied to sensor configuration [19, 33, 38]....


Journal ArticleDOI
TL;DR: A new iterative framework for structural identification of complex aging structures based on model falsification and knowledge-based reasoning is proposed, suitable for ill-defined tasks such as structural identification where information is obtained gradually through data interpretation and in-situ inspection.

44 citations


Cites background or methods from "Performance-Driven Measurement Syst..."

  • ...As shown by Goulet and Smith [44], more measurements does not mean higher performance of structural identification....


  • ...Although several authors in various fields have pointed out the importance of providing an adequate description of modeling uncertainties associated with the model class [4, 19–22], proposals for robust alternatives to existing approaches are lacking. Goulet and Smith [3] proposed an approach that is robust when knowledge of the joint PDF of modeling and measurement errors is incomplete....


  • ...Model falsification task: error-domain model falsification. Proposed by Goulet and Smith [3], the error-domain model falsification approach aims to obtain possible values for θ = [θ1, . . . , θnθ]ᵀ, describing a vector of nθ parameter values of a physics-based model using information provided by measurements....


  • ...Although Goulet and Smith [3] have observed that EDMF can identify when initial assumptions related to the model class are erroneous by falsifying all model instances, taking advantage of this characteristic for exploring possible model classes of complex structures has not been studied. Choi and Beven [24] have also observed that model falsification could serve to point out model deficiencies in the search for a better model class....


  • ...This work was funded by the Swiss National Science Foundation under Contract no. 200020-155972. References: [1] S. Atamturktur, Z. Liu, S. Cogan, H. Juang, Calibration of imprecise and inaccurate numerical models considering fidelity and robustness: a multi-objective optimization-based approach, Structural and Multidisciplinary Optimization (2014) 1–13. [2] J. Beck, Bayesian system identification based on probability logic, Structural Control and Health Monitoring 17 (7) (2010) 825–847. [3] J.-A. Goulet, I. Smith, Structural identification with systematic errors and unknown uncertainty dependencies, Computers & Structures 128 (2013) 251–258. [4] F. Çatbaş, T. Kijewski-Correa, A. Aktan, Structural identification of constructed systems, Reston (VA): American Society of Civil Engineers. [5] J. Beck, L. Katafygiotis, Updating models and their uncertainties....


01 Jan 1999
TL;DR: In this paper, Signal Subspace Correlation (SSC) has been used for sensor placement in a blocked Hankel matrix using signal subspace correlation techniques developed earlier by the author.
Abstract: Placing vibration sensors at appropriate locations plays an important role in experimental modal analysis, in order to ultimately describe the system under consideration. To this end, maximizing the determinant of the Fisher Information Matrix has been shown to provide optimal solutions. Some methods have been proposed in the literature, such as maximizing the determinant of the diagonal elements of the mode shape correlation matrix, ranking the sensor contributions by Hankel Singular Values (HSVs), and using perturbation theory to achieve minimum estimate variance, etc. The objectives of this work were to systematically analyze existing methods and to propose methods that improve their performance or accelerate the searching process. The approach used in this paper is based on the analytical formulation of SVD for a candidate blocked Hankel matrix using Signal Subspace Correlation (SSC) techniques developed earlier by the author. The SSC accounts for factors that contribute to the estimated results, such as mode shapes, damping ratios, sampling rate, and matrix size (or number of data used). With the aid of SSC, it will be shown in this article that using information of mode shapes and singular values are equivalent under certain conditions. The results of this work not only are consistent with those of existing methods, but also demonstrate a more general viewpoint on the optimization problem. Consequently, the insight of the sensor placement problem is clearly unfolded. A modified method that treats targeted modes equally is then proposed. Finally, a hybrid method that possesses advantages of existing methods is also proposed and believed to be efficient and effective.

43 citations

References
Journal ArticleDOI
13 May 1983-Science
TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
Abstract: There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods.

41,772 citations

Journal ArticleDOI
TL;DR: This paper suggests a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties, and modifies the definition of dominance in order to solve constrained multi-objective problems efficiently.
Abstract: Multi-objective evolutionary algorithms (MOEAs) that use non-dominated sorting and sharing have been criticized mainly for: (1) their O(MN³) computational complexity (where M is the number of objectives and N is the population size); (2) their non-elitism approach; and (3) the need to specify a sharing parameter. In this paper, we suggest a non-dominated sorting-based MOEA, called NSGA-II (Non-dominated Sorting Genetic Algorithm II), which alleviates all of the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best N solutions (with respect to fitness and spread). Simulation results on difficult test problems show that NSGA-II is able, for most problems, to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto evolutionary algorithm - two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multi-objective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective, seven-constraint nonlinear problem, are compared with another constrained multi-objective optimizer, and the much better performance of NSGA-II is observed.

37,111 citations


"Performance-Driven Measurement Syst..." refers methods in this paper

  • ...Several stochastic global search algorithms are available in the literature (Deb et al. 2002; Raphael and Smith 2003; Kirkpatrick et al. 1983; Kennedy and Eberhart 1995; Cormen 2001) with several applications in the field of civil engineering (Harp et al. 2009; Dimou and Koumousis 2009; Domer et…...

Proceedings ArticleDOI
06 Aug 2002
TL;DR: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced, and the evolution of several paradigms is outlined, and an implementation of one of the paradigm is discussed.
Abstract: A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described.

35,104 citations

Book
01 Jan 1990
TL;DR: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures and presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers.
Abstract: From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.

21,651 citations


Frequently Asked Questions (2)
Q1. What are the contributions mentioned in the paper "Performance-driven measurement-system design for structural identification"?

In this paper, a methodology is described that explicitly indicates when instrumentation can decrease the ability to interpret data. A case-study shows that the approach is able to justify a reduction in monitoring costs of 50% compared with an initial measurement configuration.

Further work is under way to establish the usefulness of greedy sensor removal with respect to stochastic search methods for a range of cases.