
Showing papers in "Water Resources Research in 2005"


Journal ArticleDOI
TL;DR: Particle filters are introduced as a sequential Bayesian filtering method that represents the full probability distribution of predictive uncertainties, and their applicability to approximating the posterior distribution of model parameters is investigated.
Abstract: [1] Two elementary issues in contemporary Earth system science and engineering are (1) the specification of model parameter values which characterize a system and (2) the estimation of state variables which express the system dynamics. This paper explores a novel sequential hydrologic data assimilation approach for estimating model parameters and state variables using particle filters (PFs). PFs have their origin in Bayesian estimation. Methods for batch calibration, despite major recent advances, appear to lack the flexibility required to treat uncertainties in the current system as new information is received. Methods based on sequential Bayesian estimation seem better able to take advantage of the temporal organization and structure of information, so that better compliance of the model output with observations can be achieved. Such methods provide platforms for improved uncertainty assessment and estimation of hydrologic model components, by providing more complete and accurate representations of the forecast and analysis probability distributions. This paper introduces particle filtering as a sequential Bayesian filtering method whose features represent the full probability distribution of predictive uncertainties. Particle filters have, so far, generally been used to recursively estimate the posterior distribution of the model state; this paper investigates their applicability to the approximation of the posterior distribution of parameters. The capability and usefulness of particle filters for adaptive inference of the joint posterior distribution of the parameters and state variables are illustrated via two case studies using a parsimonious conceptual hydrologic model.
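The sequential estimation loop described above can be sketched in a few lines of Python. This is a minimal bootstrap (sampling-importance-resampling) filter for a toy linear reservoir, not the authors' implementation: the reservoir model, inflow, noise levels, and true parameter value are all invented for illustration, and the unknown parameter is tracked jointly with the state by appending it to each particle.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, weights, y, obs_std=0.2):
    """One sequential importance resampling (SIR) step on joint
    [storage, recession parameter] particles for a toy linear store."""
    S, k = particles[:, 0], particles[:, 1]
    # Propagate: linear store with constant inflow plus small process noise;
    # the parameter is jittered slightly so the ensemble does not collapse.
    S = k * S + 2.0 + rng.normal(0.0, 0.1, len(S))
    k = np.clip(k + rng.normal(0.0, 0.002, len(k)), 0.0, 0.999)
    particles = np.column_stack([S, k])
    # Reweight by the Gaussian likelihood of the new observation
    w = weights * np.exp(-0.5 * ((y - S) / obs_std) ** 2)
    w /= w.sum()
    # Resample when the effective sample size drops below half the ensemble
    if 1.0 / np.sum(w ** 2) < 0.5 * len(w):
        idx = rng.choice(len(w), size=len(w), p=w)
        particles, w = particles[idx], np.full(len(w), 1.0 / len(w))
    return particles, w

# Synthetic truth: recession parameter k = 0.8, noisy storage observations
S_true, ys = 10.0, []
for _ in range(50):
    S_true = 0.8 * S_true + 2.0
    ys.append(S_true + rng.normal(0.0, 0.2))

N = 2000
particles = np.column_stack([np.full(N, 10.0), rng.uniform(0.5, 0.99, N)])
weights = np.full(N, 1.0 / N)
for y in ys:
    particles, weights = sir_step(particles, weights, y)

k_posterior = float(np.sum(weights * particles[:, 1]))
```

Resampling only when the effective sample size collapses is a common way to limit the particle degeneracy that sequential reweighting would otherwise cause.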

669 citations


Journal ArticleDOI
TL;DR: In this article, the authors determine the dominant physical controls on catchment-scale water residence time and specifically test the hypothesis that residence time is related to the size of the basin. Residence times were estimated by simple convolution models that described the transfer of precipitation isotopic composition to the stream network.
Abstract: …62.4 km²) that represent diverse geologic and geomorphic conditions in the western Cascade Mountains of Oregon. Our primary objective was to determine the dominant physical controls on catchment-scale water residence time and specifically test the hypothesis that residence time is related to the size of the basin. Residence times were estimated by simple convolution models that described the transfer of precipitation isotopic composition to the stream network. We found that base flow mean residence times for exponential distributions ranged from 0.8 to 3.3 years. Mean residence time showed no correlation to basin area (r² < 0.01) but instead was correlated (r² = 0.91) to catchment terrain indices representing the flow path distance and flow path gradient to the stream network. These results illustrate that landscape organization (i.e., topography) rather than basin area controls catchment-scale transport. Results from this study may provide a framework for describing scale-invariant transport across climatic and geologic conditions, whereby the internal form and structure of the basin defines the first-order control on base flow residence time.
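The convolution approach can be illustrated with a short sketch, assuming a synthetic seasonal δ¹⁸O input and an exponential transit time distribution; the input signal, the 2-year "true" residence time, and the grid search are invented for this example, not taken from the paper's data.

```python
import numpy as np

def exp_ttd(tau, T):
    """Exponential transit time distribution with mean residence time T (days)."""
    return np.exp(-tau / T) / T

def stream_signal(c_in, T, dt=1.0):
    """Stream composition as the causal convolution of the (mean-removed)
    input signal with the residence time distribution."""
    tau = np.arange(len(c_in)) * dt
    g = exp_ttd(tau, T) * dt
    return np.convolve(c_in, g)[: len(c_in)]

t = np.arange(3650.0)                                   # 10 years of days
c_in = -10.0 + 3.0 * np.sin(2.0 * np.pi * t / 365.25)   # seasonal delta-18O input
mean = c_in.mean()
c_obs = stream_signal(c_in - mean, T=730.0) + mean      # synthetic "observed" stream

# Recover T by grid search over the last five years (spin-up discarded)
Ts = np.arange(100.0, 1500.0, 10.0)
errors = [np.sqrt(np.mean((stream_signal(c_in - mean, T)[1825:] + mean
                           - c_obs[1825:]) ** 2)) for T in Ts]
T_best = float(Ts[int(np.argmin(errors))])
```

The longer the residence time, the more the seasonal amplitude of the input is damped in the stream, which is what the fit exploits.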

634 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a simultaneous optimization and data assimilation (SODA) method, which improves the treatment of uncertainty in hydrologic modeling by treating the uncertainty in the input-output relationship as being primarily attributable to uncertainty in model parameters.
Abstract: [1] Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must therefore be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. In practice, however, because of errors in the model structure and the input (forcing) and output data, this has proven to be difficult, leading to considerable uncertainty in the model predictions. This paper surveys the limitations of current model calibration methodologies, which treat the uncertainty in the input-output relationship as being primarily attributable to uncertainty in the parameters, and presents a simultaneous optimization and data assimilation (SODA) method, which improves the treatment of uncertainty in hydrologic modeling. The usefulness and applicability of SODA is demonstrated by means of a pilot study using data from the Leaf River watershed in Mississippi and a simple hydrologic model with typical conceptual components.
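The state-updating half of SODA is an ensemble Kalman filter nested inside an outer parameter search; only the inner state update is sketched here, with a made-up two-variable state, a single observation, and a linear observation operator.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(states, y, obs_std, H):
    """Stochastic ensemble Kalman filter analysis step.

    states: (N, d) ensemble; y: (m,) observation; H: (m, d) linear
    observation operator.
    """
    N = len(states)
    A = states - states.mean(axis=0)          # state anomalies
    Hx = states @ H.T                         # predicted observations (N, m)
    HA = Hx - Hx.mean(axis=0)
    P_ht = A.T @ HA / (N - 1)                 # state-observation cross-covariance
    S = HA.T @ HA / (N - 1) + obs_std ** 2 * np.eye(len(y))
    K = P_ht @ np.linalg.inv(S)               # Kalman gain (d, m)
    y_pert = y + rng.normal(0.0, obs_std, size=(N, len(y)))   # perturbed obs
    return states + (y_pert - Hx) @ K.T

prior = rng.normal(0.0, 1.0, size=(500, 2))   # prior ensemble, 2 state variables
H = np.array([[1.0, 0.0]])                    # observe the first state only
posterior = enkf_update(prior, np.array([2.0]), obs_std=0.1, H=H)
```

The update pulls the observed component of each ensemble member toward the measurement and shrinks its spread, which is exactly the recursive state correction the batch-calibration methods criticized above cannot perform.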

540 citations


Journal ArticleDOI
TL;DR: In this article, a fiber bundle approach was applied to tensile strength data collected from 12 riparian species, and the root reinforcement estimates were compared against direct shear tests with root-permeated and non-root-permeated samples.
Abstract: [1] Recent research has suggested that the roots of riparian vegetation dramatically increase the geomechanical stability (i.e., factor of safety) of stream banks. Past research has used a perpendicular root reinforcement model that assumes that all of the tensile strength of the roots is mobilized instantaneously at the moment of bank failure. In reality, as a soil-root matrix shears, the roots contained within the soil have different tensile strengths and thus break progressively, with an associated redistribution of stress as each root breaks. This mode of progressive failure is well described by fiber bundle models in material science. In this paper, we apply a fiber bundle approach to tensile strength data collected from 12 riparian species and compare the root reinforcement estimates against direct shear tests with root-permeated and non-root-permeated samples. The results were then input to a stream bank stability model to assess the impact of the differences between the root models on stream bank factor of safety values. The new fiber bundle model, RipRoot, provided more accurate estimates of root reinforcement through its inclusion of progressive root breaking during mass failure of a stream bank. In cases where bank driving forces were great enough to break all of the roots, the perpendicular root model overestimated root reinforcement by up to 50%, with overestimation increasing an order of magnitude in model runs where stream bank driving forces did not exceed root strength. For the highest bank modeled (3 m) the difference in factor of safety values between runs with the two models varied from 0.13 to 2.39 depending on the riparian species considered. Thus recent work has almost certainly overestimated the effect of vegetation roots on mass stability of stream banks.
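A minimal global-load-sharing fiber bundle calculation makes the contrast with the perpendicular model concrete. The per-root strengths below are hypothetical, and equal load sharing is a simplification of RipRoot, which apportions load among roots of different diameters.

```python
import numpy as np

def fbm_peak_load(strengths):
    """Peak load a root bundle carries under equal load sharing with
    progressive breaking: with strengths sorted ascending, just before
    fiber i fails, (n - i) intact fibers each carry s[i]."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = len(s)
    return float(np.max(s * (n - np.arange(n))))

def perpendicular_peak_load(strengths):
    """Perpendicular (Wu-type) model: every root is assumed to mobilize
    its full tensile strength simultaneously."""
    return float(np.sum(strengths))

roots = [5.0, 8.0, 12.0, 20.0]        # hypothetical per-root breaking loads (N)
fbm = fbm_peak_load(roots)            # progressive breaking: 24.0
wu = perpendicular_peak_load(roots)   # simultaneous mobilization: 45.0
overestimate = wu / fbm - 1.0         # perpendicular model ~88% too high here
```

Because weak roots snap and shed their load onto the survivors, the bundle's capacity is far less than the simple sum of strengths, which is the mechanism behind the overestimation reported above.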

535 citations


Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the unit costs of desalinated water for five main processes and use regressions to analyze the main factors influencing the costs of these processes.
Abstract: [1] Many regions of the world are facing formidable freshwater scarcity. Although there is substantial scope for economizing on the consumption of water without affecting its service level, the main response to water scarcity has been to increase the supply. To a large extent, this is done by transporting water from places where it is abundant to places where it is scarce. At a smaller scale and without a lot of public and political attention, people have started to tap into the nearly limitless resource of desalinated water. This study looks at the development of desalination and its costs over time. The unit costs of desalinated water for five main processes are evaluated, followed by regressions to analyze the main factors influencing the costs. The unit costs for all processes have fallen considerably over the years. This study suggests that a cost of $1/m3 for seawater desalination and $0.6/m3 for brackish water would be feasible today. The costs will continue to decline in the future as technology progresses. In addition, a literature review on the costs of water transport is conducted in order to estimate the total cost of desalination and the transport of desalinated water to selected water-stressed cities. Transport costs range from a few cents per cubic meter to over a dollar. A 100 m vertical lift is about as costly as a 100 km horizontal transport ($0.05–0.06/m3). Transport makes desalinated water prohibitively expensive in highlands and continental interiors but not elsewhere.
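The paper's transport rule of thumb lends itself to a back-of-envelope calculator; the delivery scenario below (200 km inland, 500 m lift) is hypothetical.

```python
def delivered_cost(production_usd_m3, distance_km, lift_m,
                   horiz_usd_per_100km=0.055, lift_usd_per_100m=0.055):
    """Indicative delivered cost of desalinated water (USD per cubic meter),
    using the paper's rule of thumb that a 100 m vertical lift costs about
    as much as 100 km of horizontal transport ($0.05-0.06/m3)."""
    return (production_usd_m3
            + distance_km / 100.0 * horiz_usd_per_100km
            + lift_m / 100.0 * lift_usd_per_100m)

# Seawater desalination at the coast ($1/m3) delivered 200 km inland, 500 m up
cost = delivered_cost(1.0, distance_km=200.0, lift_m=500.0)
```

For this scenario transport adds roughly $0.39/m3, still modest next to the production cost; multiplying the lift by ten, as for a high continental interior, is what makes delivery prohibitive.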

473 citations



Journal ArticleDOI
TL;DR: In this paper, the authors analyzed how variables describing flood impact, precaution, and preparedness as well as characteristics of the affected buildings and households vary between the lower and upper damage quartiles of all affected households.
Abstract: [1] In the aftermath of a severe flood event in August 2002 in Germany, 1697 computer-aided telephone interviews were undertaken in flood-affected private households. Besides the damage to buildings and contents a variety of factors that might influence flood damage were queried. It is analyzed here how variables describing flood impact, precaution, and preparedness as well as characteristics of the affected buildings and households vary between the lower and upper damage quartiles of all affected households. The analysis is supplemented by principal component analyses. The investigation reveals that flood impact variables, particularly water level, flood duration, and contamination are the most influential factors for building and for content damage. This group of variables is followed by items quantifying the size and the value of the affected building/flat. In comparison to these factors, temporal and permanent resistance influences damage only to a small fraction, although in individual cases, precaution can significantly reduce flood damage.

313 citations


Journal ArticleDOI
TL;DR: In this paper, a new approach to regionalization of conceptual rainfall-runoff models is presented on the basis of ensemble modeling and model averaging, which represents an improvement on the established procedure of regressing parameter values against numeric catchment descriptors.
Abstract: A new approach to regionalization of conceptual rainfall-runoff models is presented on the basis of ensemble modeling and model averaging. It is argued that in principle, this approach represents an improvement on the established procedure of regressing parameter values against numeric catchment descriptors. Using daily data from 127 catchments in the United Kingdom, alternative schemes for defining prior and posterior likelihoods of candidate models are tested in terms of accuracy of ungauged catchment predictions. A probability distributed model structure is used, and alternative parameter sets are identified using data from each of a number of gauged catchments. Using the models of the 10 gauged catchments most similar to the ungauged catchment provides generally the best results and performs significantly better than the regression method, especially for predicting low flows. The ensemble of candidate models provides an indication of uncertainty in ungauged catchment predictions, although this is not a robust estimate of possible flow ranges, and frequently fails to encompass flow peaks. Options for developing the new method to resolve these problems are discussed.
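The similarity-based ensemble idea reduces to a few lines: standardize catchment descriptors, find the nearest gauged donors, and average their simulated series. The descriptors, donor pool, and flow series below are toy values; a real application would use descriptors such as area, climate, and soil indices.

```python
import numpy as np

def ungauged_prediction(ungauged_desc, gauged_desc, gauged_sims, k=10):
    """Average the simulated flow series of the k gauged catchments most
    similar to the ungauged catchment in standardized descriptor space."""
    mu, sd = gauged_desc.mean(axis=0), gauged_desc.std(axis=0)
    dist = np.linalg.norm((gauged_desc - mu) / sd - (ungauged_desc - mu) / sd,
                          axis=1)
    nearest = np.argsort(dist)[:k]
    return gauged_sims[nearest].mean(axis=0), nearest

# Toy example: 20 donor catchments with a single descriptor; the simulated
# "flow" of each donor is just proportional to its descriptor value.
desc = np.arange(20.0).reshape(-1, 1)
sims = np.arange(20.0).reshape(-1, 1) * np.ones((1, 5))
pred, donors = ungauged_prediction(np.array([0.0]), desc, sims, k=3)
```

The spread across the k donor simulations (not shown) is what provides the uncertainty indication discussed above.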

305 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used the concept of readily available specific yield as the basis for estimating the specific yield value appropriate for use with the White method and defined guidelines for estimating readily available specific yield based on sediment texture.
Abstract: [1] Groundwater consumption by phreatophytes is a difficult-to-measure but important component of the water budget in many arid and semiarid environments. Over the past 70 years the consumptive use of groundwater by phreatophytes has been estimated using a method that analyzes diurnal trends in hydrographs from wells that are screened across the water table (White, 1932). The reliability of estimates obtained with this approach has never been rigorously evaluated using saturated-unsaturated flow simulation. We present such an evaluation for common flow geometries and a range of hydraulic properties. Results indicate that the major source of error in the White method is the uncertainty in the estimate of specific yield. Evapotranspirative consumption of groundwater will often be significantly overpredicted with the White method if the effects of drainage time and the depth to the water table on specific yield are ignored. We utilize the concept of readily available specific yield as the basis for estimation of the specific yield value appropriate for use with the White method. Guidelines are defined for estimating readily available specific yield based on sediment texture. Use of these guidelines with the White method should enable the evapotranspirative consumption of groundwater to be more accurately quantified.
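The White method itself is a one-line water balance on the diurnal hydrograph; the record below (2 mm/hr pre-dawn recovery, 30 mm net daily decline) is invented for illustration.

```python
def white_groundwater_et(sy, recovery_m_per_hr, net_decline_m):
    """White (1932): daily groundwater evapotranspiration from a diurnal
    water table record, ET = Sy * (24*r + s), where r is the pre-dawn
    recovery rate (taken as the all-day groundwater inflow rate) and s is
    the net decline of the water table over the day (negative for a rise).
    Per the paper, Sy should be the *readily available* specific yield."""
    return sy * (24.0 * recovery_m_per_hr + net_decline_m)

# Hypothetical hydrograph: 2 mm/hr pre-dawn recovery, 30 mm net daily decline
et = white_groundwater_et(sy=0.1, recovery_m_per_hr=0.002, net_decline_m=0.03)
```

Since ET scales directly with Sy, the overprediction discussed above follows immediately from using a drained-porosity Sy that is too large for the actual drainage time and water table depth.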

294 citations


Journal ArticleDOI
TL;DR: In this article, the spectral IP response of samples taken from a UK sandstone aquifer and compared measured parameters with physical and hydraulic properties was examined and it was shown that the mean relaxation time, τ, is a more appropriate measure of IP response for these sediments.
Abstract: There is growing interest in the use of geophysical methods for hydrological model parameterization. Empirical induced polarization (IP)–hydraulic conductivity (K) relationships have been developed, but these are only applicable to sediments in which the IP response shows limited variation with electrical current frequency. Here we examine the spectral IP response of samples taken from a UK sandstone aquifer and compare measured parameters with physical and hydraulic properties. We demonstrate the limited value of existing IP-K models due to the inherent IP frequency dependence of these samples. Our results show how the mean relaxation time, τ, is a more appropriate measure of IP response for these sediments. A significant inverse correlation between the surface area to pore volume ratio and τ is observed, suggesting that τ is a measure of a characteristic hydraulic length scale. This is supported by a measured strong positive correlation between log τ and log K. Our measurements also reveal evidence of a relationship between τ and a dominant pore throat size, which leads to postulations about the parallelism between the spectral IP behavior and unsaturated hydraulic characteristics. Additional experiments show how the relaxation time is affected by degree of fluid saturation, indicating that saturation levels must be accounted for if our empirical relationships are applied to vadose zone studies. Our results show clear evidence of the potential value of frequency-based IP measurements for parameterization of groundwater flow models.

267 citations


Journal ArticleDOI
TL;DR: In this article, a scaling relationship of the water content at the dry end of a soil water characteristic (SWC) curve to the soil specific surface area (SA) was proposed.
Abstract: [1] Individual contributions of capillarity and adsorptive surface forces to the matric potential are seldom differentiated in determination of soil water characteristic (SWC) curves. Typically, capillary forces dominate at the wet end, whereas adsorptive surface forces dominate at the dry end of a SWC where water is held as thin liquid films. The amount of adsorbed soil water is intimately linked to soil specific surface area (SA) and plays an important role in various biological and transport processes in arid environments. Dominated by van der Waals adsorptive forces, surface-water interactions give rise to a nearly universal scaling relationship for SWC curves at low water contents. We demonstrate that scaling measured water content at the dry end by soil specific surface area yields remarkable similarity across a range of soil textures and is in good agreement with theoretical predictions based on van der Waals interactions. These scaling relationships are important for accurate description of SWC curves in dry soils and may provide rapid and reliable estimates of soil specific surface area from SWC measurements for matric potentials below ‒10 MPa conveniently measured with the chilled-mirror dew point technique. Surface area estimates acquired by fitting the scaling relationship to measured SWC data were in good agreement with SA data measured by standard methods. Preliminary results suggest that the proposed method could provide reliable SA estimates for natural soils with hydratable surface areas smaller than 200 m2/g.
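The van der Waals scaling can be sketched as an adsorbed film thickness multiplied by surface area, so that dividing the dry-end water content by SA collapses different textures onto one curve. The Hamaker constant and bulk density below are generic assumptions, not the paper's fitted values.

```python
import math

def film_thickness_m(psi_pa, hamaker=-6.0e-20):
    """Adsorbed water film thickness from van der Waals forces,
    h = (A / (6*pi*psi))**(1/3), with psi the matric potential in Pa
    (negative) and A a (hypothetical) Hamaker constant in J."""
    return (hamaker / (6.0 * math.pi * psi_pa)) ** (1.0 / 3.0)

def dry_end_theta(sa_m2_g, bulk_density_g_cm3, psi_pa):
    """Dry-end volumetric water content: theta ~ SA * rho_b * h(psi)."""
    return (sa_m2_g * 1000.0) * (bulk_density_g_cm3 * 1000.0) \
        * film_thickness_m(psi_pa)

theta_100 = dry_end_theta(100.0, 1.3, -1.0e7)   # SA = 100 m2/g at -10 MPa
theta_200 = dry_end_theta(200.0, 1.3, -1.0e7)   # doubling SA doubles theta
```

The strict proportionality between theta and SA in this picture is what lets a measured dry-end SWC be inverted for specific surface area.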

Journal ArticleDOI
TL;DR: In this paper, the authors show that vertical back diffusion from the aquitard combined with horizontal advection and vertical transverse dispersion account for the TCE distribution in the aquifer.
Abstract: [1] At an industrial site on a sand aquifer overlying a clayey silt aquitard in Connecticut, a zone of trichloroethylene dense nonaqueous phase liquid (DNAPL) at the aquifer bottom was isolated in late 1994 by installation of a steel sheet piling enclosure. In response to this DNAPL isolation, three aquifer monitoring wells located approximately 330 m downgradient exhibited strong TCE declines over the next 2–3 years, from trichloroethylene (TCE) concentrations between 5000 and 30,000 μg/L to values leveling off between 200 and 2000 μg/L. TCE concentrations from analysis of vertical cores from the aquitard below the plume and also from depth-discrete multilevel systems in the aquifer sampled in 2000 were represented in a numerical model. This shows that vertical back diffusion from the aquitard combined with horizontal advection and vertical transverse dispersion account for the TCE distribution in the aquifer and that the aquifer TCE will remain much above the MCL for centuries.
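The inverse-square-root character of back diffusion, which underlies the centuries-long tailing, follows from the 1-D analytic solution for a uniformly loaded semi-infinite layer whose boundary concentration drops to zero. The concentration, porosity, and effective diffusion coefficient below are placeholders, not the site's values.

```python
import math

def back_diffusion_flux(c0, porosity, d_eff, t_seconds):
    """Mass flux per unit area out of a semi-infinite aquitard uniformly
    loaded to concentration c0 before the source was isolated at t = 0:
    J(t) = n * c0 * sqrt(D_eff / (pi * t))  (1-D diffusion, no sorption)."""
    return porosity * c0 * math.sqrt(d_eff / (math.pi * t_seconds))

YEAR = 3.156e7  # seconds
j_1yr = back_diffusion_flux(10_000.0, 0.4, 1.0e-10, 1.0 * YEAR)
j_4yr = back_diffusion_flux(10_000.0, 0.4, 1.0e-10, 4.0 * YEAR)
# Inverse-square-root tailing: quadrupling the elapsed time halves the flux,
# which is why aquifer concentrations can stay above the MCL for centuries.
```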

Journal ArticleDOI
TL;DR: In this article, the authors used cross-well electrical resistivity tomography (ERT) to monitor the migration of a tracer in a two-well pumping-injection experiment conducted at the Massachusetts Military Reservation in Cape Cod, Massachusetts.
Abstract: [1] Cross-well electrical resistivity tomography (ERT) was used to monitor the migration of a saline tracer in a two-well pumping-injection experiment conducted at the Massachusetts Military Reservation in Cape Cod, Massachusetts. After injecting 2200 mg/L of sodium chloride for 9 hours, ERT data sets were collected from four wells every 6 hours for 20 days. More than 180,000 resistance measurements were collected during the tracer test. Each ERT data set was inverted to produce a sequence of 3-D snapshot maps that track the plume. In addition to the ERT experiment a pumping test and an infiltration test were conducted to estimate horizontal and vertical hydraulic conductivity values. Using modified moment analysis of the electrical conductivity tomograms, the mass, center of mass, and spatial variance of the imaged tracer plume were estimated. Although the tomograms provide valuable insights into field-scale tracer migration behavior and aquifer heterogeneity, standard tomographic inversion and application of Archie's law to convert electrical conductivities to solute concentration results in underestimation of tracer mass. Such underestimation is attributed to (1) reduced measurement sensitivity to electrical conductivity values with distance from the electrodes and (2) spatial smoothing (regularization) from tomographic inversion. The center of mass estimated from the ERT inversions coincided with that given by migration of the tracer plume using 3-D advective-dispersion simulation. The 3-D plumes seen using ERT exhibit greater apparent dispersion than the simulated plumes and greater temporal spreading than observed in field data of concentration breakthrough at the pumping well.
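Spatial moment analysis of a concentration image reduces to weighted sums over grid cells; a tiny synthetic example follows (a real application would convert conductivity via Archie's law, multiply by porosity, and apply the paper's sensitivity corrections).

```python
import numpy as np

def plume_moments(conc, coords, cell_volume):
    """Zeroth, first, and second spatial moments of an imaged plume:
    total mass, center of mass, and variance about the center.

    conc: (n,) cell concentrations; coords: (n, 3) cell-center coordinates.
    """
    mass = conc.sum() * cell_volume
    com = (coords * conc[:, None]).sum(axis=0) * cell_volume / mass
    var = ((coords - com) ** 2 * conc[:, None]).sum(axis=0) * cell_volume / mass
    return mass, com, var

# Tiny three-cell example along x
coords = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
conc = np.array([1.0, 2.0, 1.0])
mass, com, var = plume_moments(conc, coords, cell_volume=1.0)
```

The mass underestimation reported above shows up in the zeroth moment; the smoothing from regularization inflates the second moment, i.e., the apparent dispersion.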

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the effect of seasonal distributions of water and energy, and their interactions with the soil moisture store, on mean annual water balance in Australia at catchment scales using a stochastic model of soil moisture balance with seasonally varying forcing.
Abstract: [1] An important factor controlling catchment-scale water balance is the seasonal variation of climate. The aim of this study is to investigate the effect of the seasonal distributions of water and energy, and their interactions with the soil moisture store, on mean annual water balance in Australia at catchment scales using a stochastic model of soil moisture balance with seasonally varying forcing. The rainfall regime at 262 catchments around Australia was modeled as a Poisson process with the mean storm arrival rate and the mean storm depth varying throughout the year as cosine curves with annual periods. The soil moisture dynamics were represented by use of a single, finite water store having infinite infiltration capacity, and the potential evapotranspiration rate was modeled as an annual cosine curve. The mean annual water budget was calculated numerically using a Monte Carlo simulation. The model predicted that for a given level of climatic aridity the ratio of mean annual evapotranspiration to rainfall was larger where the potential evapotranspiration and rainfall were in phase, that is, in summer-dominant rainfall catchments, than where they were out of phase. The observed mean annual evapotranspiration ratios showed the opposite pattern. As a result, estimates of mean annual evapotranspiration from the model compared poorly with observational data. Because the inclusion of seasonally varying forcing alone was not sufficient to explain variability in the mean annual water balance, other catchment properties may play a role. Further analysis showed that the water balance was highly sensitive to the catchment-scale soil moisture capacity. Calibrations of this parameter indicated that infiltration-excess runoff might be an important process, especially for the summer-dominant rainfall catchments; most similar studies have shown that modeling of infiltration-excess runoff is not required at the mean annual timescale.

Journal ArticleDOI
TL;DR: In this article, the authors developed an efficient sequential successive linear estimator (SSLE) for interpreting data from transient hydraulic tomography to estimate three-dimensional hydraulic conductivity and specific storage fields of aquifers.
Abstract: [1] Hydraulic tomography is a cost-effective technique for characterizing the heterogeneity of hydraulic parameters in the subsurface. During hydraulic tomography surveys a large number of hydraulic heads (i.e., aquifer responses) are collected from a series of pumping or injection tests in an aquifer. These responses are then used to interpret the spatial distribution of hydraulic parameters of the aquifer using inverse modeling. In this study, we developed an efficient sequential successive linear estimator (SSLE) for interpreting data from transient hydraulic tomography to estimate three-dimensional hydraulic conductivity and specific storage fields of aquifers. We first explored this estimator for transient hydraulic tomography in a hypothetical one-dimensional aquifer. Results show that during a pumping test, transient heads are highly correlated with specific storage at early time but with hydraulic conductivity at late time. Therefore reliable estimates of both hydraulic conductivity and specific storage must exploit the head data at both early and late times. Our study also shows that the transient heads are highly correlated over time, implying only infrequent head measurements are needed during the estimation. Applying this sampling strategy to a well-posed problem, we show that our SSLE can produce accurate estimates of both hydraulic conductivity and specific storage fields. The benefit of hydraulic tomography for ill-posed problems is then demonstrated. Finally, to affirm the robustness of our SSLE approach, we apply the SSLE approach to a hypothetical three-dimensional heterogeneous aquifer.

Journal ArticleDOI
TL;DR: In this article, the authors explored the potential of the neurofuzzy computing paradigm to model the rainfall-runoff process for forecasting the river flow of Kolar basin in India.
Abstract: [1] This study explores the potential of the neurofuzzy computing paradigm to model the rainfall-runoff process for forecasting the river flow of Kolar basin in India. The neurofuzzy computing technique is a combination of a fuzzy computing approach and an artificial neural network technique. Parameter optimization in the model was performed by a combination of backpropagation and least squares error methods. Performance of the neurofuzzy model was comprehensively evaluated with that of independent fuzzy and neural network models developed for the same basin. The values of three performance evaluation criteria, namely, the coefficient of efficiency, the root-mean-square error, and the coefficient of correlation, were found to be very good and consistent for flows forecasted 1 hour in advance by the neurofuzzy model. The value of the relative error in peak flow prediction was within reasonable limits for the neurofuzzy model. The neurofuzzy model forecasted 47.95% of the total number of flow values 1 hour in advance with less than 1% relative error, while for the neural network and fuzzy models the corresponding values were 36.96 and 18.89%, respectively. The forecasts by the neurofuzzy model at higher lead times (up to 6 hours) are found to be better than those from the neural network model or the fuzzy model, implying that the neurofuzzy model seems to be well suited to exploit the information to model the nonlinear dynamics of the rainfall-runoff process.
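The three evaluation criteria used above are standard and easy to reproduce; the observed and forecast flows below are made-up numbers, not the Kolar basin data.

```python
import numpy as np

def forecast_scores(obs, sim):
    """The paper's three criteria: coefficient of efficiency (Nash-Sutcliffe),
    root-mean-square error, and correlation coefficient."""
    obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
    ce = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    rmse = float(np.sqrt(np.mean((obs - sim) ** 2)))
    r = float(np.corrcoef(obs, sim)[0, 1])
    return float(ce), rmse, r

obs = [10.0, 30.0, 55.0, 40.0, 20.0]   # hypothetical hourly flows (m3/s)
sim = [12.0, 28.0, 50.0, 42.0, 21.0]   # hypothetical 1-hour-ahead forecasts
ce, rmse, r = forecast_scores(obs, sim)
```

The coefficient of efficiency penalizes squared errors relative to the variance of the observations, so it punishes missed peaks much harder than the correlation coefficient does.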

Journal ArticleDOI
TL;DR: In this article, a multievent time series approach is presented for inferring groundwater recharge from long-term water table and precipitation records, incorporating variable specific yield based upon the soil moisture retention curve, proper accounting for the Lisse effect on the water table, and incorporation of aquifer drainage so that recharge can be detected even if the water tables do not rise.
Abstract: [1] The water table fluctuation method for determining recharge from precipitation and water table measurements was originally developed on an event basis. Here a new multievent time series approach is presented for inferring groundwater recharge from long-term water table and precipitation records. Additional new features are the incorporation of a variable specific yield based upon the soil moisture retention curve, proper accounting for the Lisse effect on the water table, and the incorporation of aquifer drainage so that recharge can be detected even if the water table does not rise. A methodology for filtering noise and non-rainfall-related water table fluctuations is also presented. The model has been applied to 2 years of field data collected in the Tomago sand beds near Newcastle, Australia. It is shown that gross recharge estimates are very sensitive to time step size and specific yield. Properly accounting for the Lisse effect is also important to determining recharge.
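A stripped-down version of the water table fluctuation calculation with a drainage correction might look like the sketch below; a constant specific yield is used here, whereas the paper makes Sy variable via the retention curve and also filters noise and the Lisse effect. All numbers are invented.

```python
def wtf_recharge(levels_m, dt_hours, sy, drainage_m_per_hr):
    """Water table fluctuation recharge with a drainage correction: recharge
    over each step = Sy * (observed rise + the recession that would have
    occurred anyway), so recharge registers even when the level falls more
    slowly than drainage alone would predict."""
    total = 0.0
    for h0, h1 in zip(levels_m, levels_m[1:]):
        rise = (h1 - h0) + drainage_m_per_hr * dt_hours
        if rise > 0.0:
            total += sy * rise
    return total

# Daily levels: a rise after rain, then a decline slower than pure recession
recharge = wtf_recharge([10.00, 10.05, 10.03], dt_hours=24.0, sy=0.1,
                        drainage_m_per_hr=0.001)
```

Note the second interval still contributes recharge even though the level fell, which is the behavior the abstract highlights; the result's direct proportionality to Sy and its dependence on dt_hours also mirror the sensitivities reported above.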

Journal ArticleDOI
TL;DR: In this paper, the authors examined variations in the reference shear stress for bed load transport (τr) using coupled measurements of flow and bed load transport in 45 gravel-bed streams and rivers.
Abstract: [1] The present study examines variations in the reference shear stress for bed load transport (τr) using coupled measurements of flow and bed load transport in 45 gravel-bed streams and rivers. The study streams encompass a wide range in bank-full discharge (1–2600 m3/s), average channel gradient (0.0003–0.05), and median surface grain size (0.027–0.21 m). A bed load transport relation was formed for each site by plotting individual values of the dimensionless transport rate W* versus the reach-average dimensionless shear stress τ*. The reference dimensionless shear stress τ*r was then estimated by selecting the value of τ* corresponding to a reference transport rate of W* = 0.002. The results indicate that the discharge corresponding to τ*r averages 67% of the bank-full discharge, with the variation independent of reach-scale morphologic and sediment properties. However, values of τ*r increase systematically with average channel gradient, ranging from 0.025–0.035 at sites with slopes of 0.001–0.006 to values greater than 0.10 at sites with slopes greater than 0.02. A corresponding relation for the bank-full dimensionless shear stress τ*bf, formulated with data from 159 sites in North America and England, mirrors the relation between τ*r and channel gradient, suggesting that the bank-full channel geometry of gravel- and cobble-bedded streams is adjusted to a relatively constant excess shear stress, τ*bf − τ*r, across a wide range of slopes.
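Both the Shields stress and the reference-stress estimate are short computations; the depth, slope, grain size, and the two transport-relation points bracketing W* = 0.002 below are hypothetical.

```python
import math
import numpy as np

def shields_stress(depth_m, slope, d50_m, rho_s=2650.0, rho=1000.0, g=9.81):
    """Reach-average dimensionless (Shields) shear stress from the
    depth-slope product: tau* = rho*g*h*S / ((rho_s - rho)*g*D50)."""
    return (rho * g * depth_m * slope) / ((rho_s - rho) * g * d50_m)

def reference_shields_stress(tau_star, w_star, w_ref=0.002):
    """tau*_r: the tau* at which the transport relation crosses the
    reference rate W* = 0.002, interpolated in log-log space."""
    return float(np.exp(np.interp(math.log(w_ref),
                                  np.log(w_star), np.log(tau_star))))

ts = shields_stress(depth_m=1.0, slope=0.01, d50_m=0.05)
# Two (hypothetical) points on a site's W*-tau* relation bracketing W* = 0.002
tr = reference_shields_stress(np.array([0.03, 0.06]),
                              np.array([0.0005, 0.008]))
```

Interpolating in log-log space matches the power-law form of typical transport relations, so the crossing point is found on a straight line between the bracketing measurements.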

Journal ArticleDOI
TL;DR: In this article, a method for the joint use of time-lapse ground-penetrating radar (GPR) travel times and hydrological data to estimate field-scale soil hydraulic parameters is described.
Abstract: [1] A method is described for the joint use of time-lapse ground-penetrating radar (GPR) travel times and hydrological data to estimate field-scale soil hydraulic parameters. We build upon previous work to take advantage of a wide range of cross-borehole GPR data acquisition configurations and to accommodate uncertainty in the petrophysical function, which relates soil porosity and water saturation to the effective dielectric constant. We first test the inversion methodology using synthetic examples of water injection in the vadose zone. Realistic errors in the petrophysical function result in substantial errors in soil hydraulic parameter estimates, but such errors are minimized through simultaneous estimation of petrophysical parameters. In some cases the use of a simplified GPR simulator causes systematic errors in calculated travel times; simultaneous estimation of a single correction parameter sufficiently reduces the impact of these errors. We also apply the method to the U.S. Department of Energy (DOE) Hanford site in Washington, where time-lapse GPR and neutron probe (NP) data sets were collected during an infiltration experiment. We find that inclusion of GPR data in the inversion procedure allows for improved predictions of water content, compared to predictions made using NP data alone. These examples demonstrate that the complementary information contained in geophysical and hydrological data can be successfully extracted in a joint inversion approach. Moreover, since the generation of tomograms is not required, the amount of GPR data required for analyses is relatively low, and difficulties inherent to tomography methods are alleviated. Finally, the approach provides a means to capture the properties and system state of heterogeneous soil, both of which are crucial for assessing and predicting subsurface flow and contaminant transport.
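One common choice of petrophysical function linking water content to GPR travel time is the CRIM mixing model; this is a generic example (the paper treats the petrophysical parameters as uncertain and estimates them jointly), and the porosity and dielectric constants used below are assumptions.

```python
import math

def crim_dielectric(porosity, theta, kappa_solid=5.0,
                    kappa_water=81.0, kappa_air=1.0):
    """CRIM mixing model for the bulk dielectric constant: the square root
    of kappa is volume-averaged over solids, air, and water (theta is
    volumetric water content; kappa_solid = 5 is a generic assumption)."""
    root = ((1.0 - porosity) * math.sqrt(kappa_solid)
            + (porosity - theta) * math.sqrt(kappa_air)
            + theta * math.sqrt(kappa_water))
    return root ** 2

def gpr_travel_time(path_m, kappa):
    """One-way cross-borehole travel time: t = L * sqrt(kappa) / c."""
    return path_m * math.sqrt(kappa) / 2.998e8

t_dry = gpr_travel_time(5.0, crim_dielectric(0.35, theta=0.05))
t_wet = gpr_travel_time(5.0, crim_dielectric(0.35, theta=0.25))
# Wetting the soil raises kappa and measurably delays the arrival
```

Because water's dielectric constant (~81) dwarfs those of air and minerals, travel time is a sensitive proxy for water content, which is what the joint inversion exploits.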

Journal ArticleDOI
TL;DR: In this paper, a methodology is presented that characterizes the spatial correlation of model residuals for a variable mean model, incorporates the spatial correlation into the optimization of the deterministic trend, and produces smooth estimate maps that may extrapolate above and below measured values.
Abstract: [1] Snow depth surveys were conducted at maximum accumulation from 1997 through 2003 in the 2.3 km2 Green Lakes Valley watershed in Colorado. We model snow depth as a random function that can be decomposed into a deterministic trend and a stochastic residual. Three snow depth trends were considered, differing in how they model the effect of terrain parameters on snow depth. The terrain parameters considered were elevation, slope, potential radiation, an index of wind sheltering, and an index of wind drifting. When nonlinear interactions between the terrain parameters were included and a multiyear data set was analyzed, all five terrain parameters were found to be statistically significant in predicting snow depth, yet only potential radiation and the index of wind sheltering were found to be statistically significant for all individual years. Of the five terrain parameters considered, the index of wind sheltering was found to have the greatest effect on predicted snow depth. The methodology presented in this paper allows for the characterization of the spatial correlation of model residuals for a variable mean model, incorporates the spatial correlation into the optimization of the deterministic trend, and produces smooth estimate maps that may extrapolate above and below measured values.
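The trend-plus-residual decomposition can be sketched as follows (a toy illustration, not the paper's data or fitted model): snow depth is regressed on terrain parameters by least squares, and the residual field is what would then be modeled geostatistically. The covariates, coefficients, and noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical standardized terrain parameters: elevation, slope,
# potential radiation, wind-sheltering index, wind-drifting index.
X = rng.normal(size=(n, 5))
beta_true = np.array([0.2, 0.1, -0.6, 1.0, 0.3])   # sheltering dominates (assumed)
depth = 2.5 + X @ beta_true + rng.normal(scale=0.4, size=n)

# Deterministic trend by ordinary least squares; the residuals would next be
# given a variogram model and kriged, as the paper's methodology does.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, depth, rcond=None)
trend = A @ coef
residual = depth - trend
```

The residuals have (numerically) zero mean and less variance than the raw depths, which is what makes the subsequent residual kriging worthwhile.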

Journal ArticleDOI
TL;DR: In this paper, a hybrid approach to the regularized inversion of highly parameterized environmental models is described, based on constructing a highlyparameterized base model, calculating base parameter sensitivities, and decomposing the base parameter normal matrix into eigenvectors representing principal orthogonal directions in parameter space.
Abstract: [1] A hybrid approach to the regularized inversion of highly parameterized environmental models is described. The method is based on constructing a highly parameterized base model, calculating base parameter sensitivities, and decomposing the base parameter normal matrix into eigenvectors representing principal orthogonal directions in parameter space. The decomposition is used to construct super parameters. Super parameters are factors by which principal eigenvectors of the base parameter normal matrix are multiplied in order to minimize a composite least squares objective function. These eigenvectors define orthogonal axes of a parameter subspace for which information is available from the calibration data. The coordinates of the solution are sought within this subspace. Super parameters are estimated using a regularized nonlinear Gauss-Marquardt-Levenberg scheme. Though super parameters are estimated, Tikhonov regularization constraints are imposed on base parameters. Tikhonov regularization mitigates overfitting and promotes the estimation of reasonable base parameters. Use of a large number of base parameters enables the inversion process to be receptive to the information content of the calibration data, including aspects pertaining to small-scale parameter variations. Because the number of super parameters sustainable by the calibration data may be far less than the number of base parameters used to define the original problem, the computational burden for solution of the inverse problem is reduced. The hybrid methodology is described and applied to a simple synthetic groundwater flow model. It is then applied to a real-world groundwater flow and contaminant transport model. The approach and programs described are applicable to a range of modeling disciplines. Copyright 2005 by the American Geophysical Union.
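The core linear-algebra idea can be sketched in a few lines (a simplified illustration, not the authors' implementation): the principal right singular vectors of the Jacobian are the eigenvectors of the normal matrix JᵀJ, and a truncated set of them defines the super-parameter subspace in which estimation proceeds. Dimensions and the truncation level below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_base = 50, 120                  # many base parameters, fewer observations
J = rng.normal(size=(n_obs, n_base))     # stand-in sensitivity (Jacobian) matrix

# SVD of J gives the eigenvectors of the normal matrix J^T J = V S^2 V^T;
# the leading right singular vectors span the directions informed by the data.
U, s, Vt = np.linalg.svd(J, full_matrices=False)
k = 10                                   # number of super parameters retained (assumed)
V_k = Vt[:k].T                           # (n_base, k) orthonormal basis

# Base parameters are reconstructed from the k estimated super parameters alpha,
# so the nonlinear estimation works in a k-dimensional space instead of n_base.
alpha = rng.normal(size=k)
base_params = V_k @ alpha
```

Because k can be far smaller than the number of base parameters, each Gauss-Marquardt-Levenberg iteration needs only k model-run-based sensitivities, which is where the computational saving comes from.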

Journal ArticleDOI
TL;DR: In this paper, it is shown that parameter lumping is a form of implicit regularization, and that ignoring the implied first term of the predictive error variance equation can potentially lead to underestimation of the predicted error variance.
Abstract: [1] An equation is derived through which the variance of predictive error of a calibrated model can be calculated. This equation has two terms. The first term represents the contribution to predictive error variance that results from an inability of the calibration process to capture all of the parameterization detail necessary for the making of an accurate prediction. If a model is “uncalibrated,” with parameter values being supplied solely through “outside information,” this is the only term required. The second term represents the contribution to predictive error variance arising from measurement noise. In an overdetermined system, such as that which may be obtained through “parameter lumping” (e.g., through the introduction of a spatial zonation scheme), this is the only term required. It is shown, however, that parameter lumping is a form of “implicit regularization” and that ignoring the implied first term of the predictive error variance equation can potentially lead to underestimation of predictive error variance. A model's role as a predictor of environmental behavior can be enhanced if it is calibrated in such a way as to reduce the variance of those predictions which it is required to make. It is shown that in some circumstances this can be accomplished through “overfitting” against historical field data. It can also be accomplished by giving greater weight to those measurements which carry the greatest information content with respect to a required prediction. This suggests that a departure may be necessary from the custom of using a single “calibrated model” for the making of many different predictions. Instead, model calibration may need to be repeated many times so that in each case the calibration process is optimized for the making of a specific model prediction.
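The second (measurement noise) term of such a predictive error variance equation can be sketched for a toy overdetermined linear system; the matrices, the prediction sensitivity vector, and the noise level are illustrative assumptions, not the paper's equations. The first (structural/regularization) term is exactly what this sketch omits, which is the underestimation the abstract warns about.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_par = 30, 3                    # overdetermined ("lumped") system
X = rng.normal(size=(n_obs, n_par))     # sensitivities of observations to parameters
y = rng.normal(size=n_par)              # sensitivity of the prediction to parameters
sigma2 = 0.25                           # assumed measurement-noise variance

# Noise term of predictive error variance: propagate the least squares
# parameter covariance through the prediction sensitivities. For a lumped
# overdetermined problem this is the only term usually computed.
cov_params = sigma2 * np.linalg.inv(X.T @ X)
var_noise_term = float(y @ cov_params @ y)
```

A full accounting in the spirit of the paper would add a first term for parameterization detail the calibration cannot capture; reporting `var_noise_term` alone can understate predictive uncertainty.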

Journal ArticleDOI
TL;DR: In this article, a multiobjective optimization problem under uncertainty is defined, where the two objectives are (1) minimize the total WDS design cost and (2) maximize WDS robustness.
Abstract: [1] The water distribution system (WDS) design problem is defined here as a multiobjective optimization problem under uncertainty. The two objectives are (1) minimize the total WDS design cost and (2) maximize WDS robustness. The WDS robustness is defined as the probability of simultaneously satisfying minimum pressure head constraints at all nodes in the network. Decision variables are the alternative design options for each pipe in the network. The sources of uncertainty are future water consumption and pipe roughness coefficients. Uncertain variables are modeled using probability density functions (PDFs) assigned in the problem formulation phase. The corresponding PDFs of the analyzed nodal heads are calculated using the Latin hypercube sampling technique. The optimal design problem is solved using the newly developed RNSGAII method based on the nondominated sorting genetic algorithm II (NSGAII). In RNSGAII a small number of samples are used for each fitness evaluation, leading to significant computational savings when compared to the full sampling approach. Chromosome fitness is defined here in the same way as in the NSGAII optimization methodology. The new methodology is tested on several cases, all based on the New York tunnels reinforcement problem. The results obtained demonstrate that the new methodology is capable of identifying robust Pareto optimal solutions despite significantly reduced computational effort.
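The sampling side of the robustness objective can be sketched as follows (a toy stand-in, not the paper's RNSGAII code or a real hydraulic solver): Latin hypercube samples of the uncertain demand and roughness are pushed through a head model, and robustness is the fraction of samples satisfying the minimum pressure head constraint. The head model and all numbers are invented for illustration.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """Stratified uniform samples on [0, 1)^n_vars, one sample per stratum."""
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_vars))) / n_samples
    for j in range(n_vars):
        rng.shuffle(u[:, j])            # decorrelate the columns
    return u

rng = np.random.default_rng(3)
u = latin_hypercube(100, 2, rng)
# Hypothetical uncertain inputs: nodal demand (L/s) and pipe roughness coefficient.
demand = 40.0 + 20.0 * u[:, 0]
roughness = 100.0 + 40.0 * u[:, 1]

# Stand-in head response (a real application would call a network solver):
head = 60.0 - 0.4 * demand + 0.1 * roughness
robustness = float(np.mean(head >= 55.0))   # P(min pressure head satisfied)
```

In the RNSGAII scheme described in the abstract, an estimate like `robustness` (from a small number of samples per fitness evaluation) becomes the second objective alongside design cost.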

Journal ArticleDOI
TL;DR: In this paper, the authors reveal major inconsistencies in the age-old and yet efficient Soil Conservation Service Curve Number (SCS-CN) procedure, based on an analysis of the continuous soil moisture accounting procedure implied by the SCS-CN equation.
Abstract: [1] This paper unveils major inconsistencies in the age-old and yet efficient Soil Conservation Service Curve Number (SCS-CN) procedure. Our findings are based on an analysis of the continuous soil moisture accounting procedure implied by the SCS-CN equation. It is shown that several flaws plague the original SCS-CN procedure, the most important one being a confusion between intrinsic parameter and initial condition. A change of parameterization and a more complete assessment of the initial condition lead to a renewed SCS-CN procedure, while keeping the acknowledged efficiency of the original method.
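For reference, the original SCS-CN procedure that the paper re-examines computes event runoff from rainfall via the potential maximum retention S and the initial abstraction Ia; the conventional Ia = 0.2 S coupling is part of the parameter-versus-initial-condition conflation the authors criticize. The sketch below is the textbook method, not the paper's renewed procedure.

```python
def scs_runoff(P_mm, CN, ia_ratio=0.2):
    """Classic SCS-CN direct runoff (mm) from event rainfall P (mm).

    S is the potential maximum retention (mm, metric form of the CN relation);
    Ia = ia_ratio * S is the initial abstraction, conventionally 0.2 * S.
    """
    S = 25400.0 / CN - 254.0
    Ia = ia_ratio * S
    if P_mm <= Ia:
        return 0.0                      # all rainfall abstracted, no runoff
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

q = scs_runoff(50.0, 80)                # e.g., 50 mm storm on CN = 80 soil
```

For CN = 80, S = 63.5 mm and Ia = 12.7 mm, so a 50 mm storm yields roughly 13.8 mm of direct runoff; raising CN raises runoff, and storms smaller than Ia yield none.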

Journal ArticleDOI
TL;DR: The MFE formulation provides an accurate approximation of the fracture-fracture flux across three and higher intersecting fracture branches, as well as a direct and accurate approximation of the velocity field, which is crucial for the convective terms in the flow equations.
Abstract: [1] A discrete fracture model for the single-phase flow of compressible, multicomponent fluids in homogeneous, heterogeneous, and fractured media is presented. In the numerical model we combine the mixed finite element (MFE) and the discontinuous Galerkin (DG) methods. We use the cross-flow equilibrium concept to approximate the matrix-fracture mass transfer. The discrete fracture model is numerically superior to the single-porosity model and overcomes limitations of the dual-porosity models, including the use of a shape factor. The MFE method provides a direct and accurate approximation for the velocity field, which is crucial for the convective terms in the flow equations. The DG method, associated with a slope limiter, is used to approximate the species balance equations and can capture sharp moving fronts. The calculation of the fracture-fracture flux across three and higher intersecting fracture branches is a challenge. In this work, we provide an accurate approximation of these fluxes by using the MFE formulation. Numerical examples in unfractured and fractured media illustrate the efficiency and robustness of the proposed numerical model.
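The flux-continuity requirement at a fracture intersection can be illustrated with a much simpler stand-in than the authors' MFE formulation: given branch transmissibilities and branch pressures (all values hypothetical), the intersection pressure is the one for which the branch fluxes into the node sum to zero.

```python
# Hedged sketch of mass balance at a fracture intersection (NOT the MFE
# scheme of the paper): fluxes q_i = T_i * (p_i - p0) into the node must
# sum to zero, which fixes the intersection pressure p0.
def intersection_fluxes(T, p):
    p0 = sum(t * pi for t, pi in zip(T, p)) / sum(T)   # transmissibility-weighted mean
    q = [t * (pi - p0) for t, pi in zip(T, p)]
    return p0, q

T = [2.0, 1.0, 0.5]        # three intersecting fracture branches (assumed values)
p = [10.0, 8.0, 6.0]       # branch pressures
p0, q = intersection_fluxes(T, p)
```

The MFE formulation in the paper achieves the same local conservation property, but with a consistent velocity approximation rather than this scalar shortcut.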

Journal ArticleDOI
TL;DR: In this paper, the authors present a procedure to solve groundwater reactive transport in the case of homogeneous and classical heterogeneous equilibrium reactions induced by mixing different waters, which can be used to test numerical codes by setting benchmark problems but also to derive closed-form analytical solutions whenever steps 2 and 3 are simple.
Abstract: [1] Modeling transport of reactive solutes is a challenging problem, necessary for understanding the fate of pollutants and geochemical processes occurring in aquifers, rivers, estuaries, and oceans. Geochemical processes involving multiple reactive species are generally analyzed using advanced numerical codes. The resulting complexity has inhibited the development of analytical solutions for multicomponent heterogeneous reactions such as precipitation/dissolution. We present a procedure to solve groundwater reactive transport in the case of homogeneous and classical heterogeneous equilibrium reactions induced by mixing different waters. The methodology consists of four steps: (1) defining conservative components to decouple the solution of chemical equilibrium equations from species mass balances, (2) solving the transport equations for the conservative components, (3) performing speciation calculations to obtain concentrations of aqueous species, and (4) substituting the latter into the transport equations to evaluate reaction rates. We then obtain the space-time distribution of concentrations and reaction rates. The key result is that when the equilibrium constant does not vary in space or time, the reaction rate is proportional to the rate of mixing, ∇u^T D ∇u, where u is the vector of conservative component concentrations and D is the dispersion tensor. The methodology can be used to test numerical codes by setting benchmark problems but also to derive closed-form analytical solutions whenever steps 2 and 3 are simple, as illustrated by the application to a binary system. This application clearly elucidates that in a three-dimensional problem both chemical and transport parameters are equally important in controlling the process.
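The key result can be visualized with a one-dimensional sketch (illustrative values, not the paper's binary-system solution): for a single conservative component u with scalar dispersion D, the mixing rate ∇uᵀ D ∇u reduces to D (du/dx)², which peaks at the mixing front where the two waters blend.

```python
import numpy as np

# Conservative component with a smooth mixing front at x = 0.5
# (illustrative profile, e.g. two end-member waters meeting).
x = np.linspace(0.0, 1.0, 101)
u = 0.5 * (1.0 + np.tanh((x - 0.5) / 0.1))
D = 1e-3                                  # assumed scalar dispersion coefficient

# 1-D specialization of the mixing rate grad(u)^T D grad(u):
grad_u = np.gradient(u, x)
mixing_rate = D * grad_u ** 2
```

Since equilibrium reaction rates are proportional to this quantity, reactions localize where mixing is strongest, which is the physical content of the abstract's key result.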

Journal ArticleDOI
TL;DR: In this article, the problem of simulating sequences of daily rainfall at a network of sites in such a way as to reproduce a variety of properties realistically over a range of spatial scales is considered.
Abstract: [1] We consider the problem of simulating sequences of daily rainfall at a network of sites in such a way as to reproduce a variety of properties realistically over a range of spatial scales. The properties of interest will vary between applications but typically will include some measures of “extreme” rainfall in addition to means, variances, proportions of wet days, and autocorrelation structure. Our approach is to fit a generalized linear model (GLM) to rain gauge data and, with appropriate incorporation of intersite dependence structure, to use the GLM to generate simulated sequences. We illustrate the methodology using a data set from southern England and show that the GLM is able to reproduce many properties at spatial scales ranging from a single site to 2000 km2 (the limit of the available data).
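The two-part GLM structure behind such rainfall generators can be sketched as follows (the covariates and coefficients are invented, not values fitted to the southern England data): a logistic model decides wet versus dry, and a gamma model with a log link draws the wet-day amount.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_day(covariates, beta_occ, beta_amt, shape=0.8):
    """One day of rainfall from a logistic-occurrence / gamma-amounts GLM."""
    eta = beta_occ @ covariates
    p_wet = 1.0 / (1.0 + np.exp(-eta))        # logistic link for occurrence
    if rng.random() >= p_wet:
        return 0.0                            # dry day
    mean_amt = np.exp(beta_amt @ covariates)  # log link for the gamma mean
    return rng.gamma(shape, mean_amt / shape) # gamma with given mean and shape

covs = np.array([1.0, 0.3])                   # intercept + a seasonal covariate (assumed)
series = np.array([simulate_day(covs, np.array([-0.5, 1.0]),
                                np.array([1.2, 0.5])) for _ in range(365)])
wet_fraction = float(np.mean(series > 0.0))
```

Multisite simulation as in the paper additionally requires intersite dependence (e.g., correlated uniforms driving the occurrence and amount draws), which this single-site sketch omits.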

Journal ArticleDOI
TL;DR: In this article, the authors assess how GLUE performs in detecting uncertainty in the simulation of long series of synthetic river flows and compare the GLUE prediction limits with a large sample of data that is to be simulated in the presence of known sources of uncertainty.
Abstract: [1] Several methods have been recently proposed for quantifying the uncertainty of hydrological models. These techniques are based upon different hypotheses, are diverse in nature, and produce outputs that can significantly differ in some cases. One of the favored methods for uncertainty assessment in rainfall-runoff modeling is the generalized likelihood uncertainty estimation (GLUE). However, some fundamental questions related to its application remain unresolved. One such question is that GLUE relies on some explicit and implicit assumptions, and it is not fully clear how these may affect the uncertainty estimation when referring to large samples of data. The purpose of this study is to address this issue by assessing how GLUE performs in detecting uncertainty in the simulation of long series of synthetic river flows. The study aims to (1) discuss the hypotheses underlying GLUE and derive indications about their effects on the uncertainty estimation, and (2) compare the GLUE prediction limits with a large sample of data that is to be simulated in the presence of known sources of uncertainty. The analysis shows that the prediction limits provided by GLUE do not necessarily include a percentage close to their confidence level of the observed data. In fact, in all the experiments, GLUE underestimates the total uncertainty of the simulation provided by the hydrological model.
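The GLUE workflow the study scrutinizes can be sketched with a toy one-parameter model (everything here is illustrative: the "observations", the model, and the threshold): Monte Carlo parameter sets are scored with an informal likelihood (Nash-Sutcliffe efficiency), non-behavioral sets are rejected, and prediction limits are read off the behavioral simulations.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 6.0, 100)
obs = np.sin(t) + 2.0                     # stand-in "observed" flow series

def model(a):                             # toy one-parameter rainfall-runoff stand-in
    return a * (np.sin(t) + 2.0)

# GLUE sketch: sample, score with Nash-Sutcliffe efficiency, keep behavioral
# sets, and form prediction limits. GLUE proper uses likelihood-weighted
# quantiles; unweighted quantiles are used here for brevity.
params = rng.uniform(0.5, 1.5, size=2000)
sims = np.array([model(a) for a in params])
nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
behavioral = sims[nse > 0.0]              # informal acceptance threshold
lower = np.quantile(behavioral, 0.05, axis=0)
upper = np.quantile(behavioral, 0.95, axis=0)
coverage = float(np.mean((obs >= lower) & (obs <= upper)))
```

The study's point is precisely that `coverage` computed this way need not match the nominal confidence level when the limits are compared against long series with known error sources.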

Journal ArticleDOI
TL;DR: In this paper, the impacts of Eucalyptus camaldulensis in the Pampas grasslands of Argentina were explored for two years using a novel combination of sap flow, groundwater data, soil moisture measurements, and modeling.
Abstract: [1] The impacts of a 40 ha stand of Eucalyptus camaldulensis in the Pampas grasslands of Argentina were explored for 2 years using a novel combination of sap flow, groundwater data, soil moisture measurements, and modeling. Sap flow measurements showed transpiration rates of 2–3.7 mm d−1, lowering groundwater levels by more than 0.5 m with respect to the surrounding grassland. This hydraulic gradient induced flow from the grassland areas into the plantation and resulted in a rising of the plantation water table at night. Groundwater use estimated from diurnal water table fluctuations correlated well with sap flow (p < 0.001, r2 = 0.78). Differences between daily sap flow and the estimates of groundwater use were proportional to changes in surface soil moisture content (p < 0.001, r2 = 0.75). E. camaldulensis therefore used both groundwater and vadose zone moisture sources, depending on soil water availability. Model results suggest that groundwater sources represented ∼67% of total annual water use.
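Groundwater use estimated from diurnal water table fluctuations typically follows the classic White (1932) formulation, sketched below with illustrative numbers (not the study's measurements): daily uptake is the specific yield times the sum of 24 hours of nighttime recovery and the net daily water table change.

```python
# White (1932) method for groundwater consumption from diurnal water table
# fluctuations: ET_g = Sy * (24 * r + delta_s), where Sy is specific yield,
# r the nighttime recovery rate of the water table (m/h), and delta_s the
# net decline in water table elevation over 24 h (m). Values are illustrative.
def white_groundwater_use(specific_yield, recovery_rate_m_per_h, net_change_m):
    return specific_yield * (24.0 * recovery_rate_m_per_h + net_change_m)

# e.g., Sy = 0.05, nighttime recovery of 2 mm/h, net daily decline of 10 mm
et_g_m = white_groundwater_use(0.05, 0.002, 0.010)
et_g_mm = et_g_m * 1000.0                 # about 2.9 mm per day
```

An estimate of this form, compared day by day against sap flow, is the kind of relationship behind the reported correlation (r2 = 0.78) between groundwater use and transpiration.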

Journal ArticleDOI
TL;DR: In this paper, the authors investigate households' demand for improved water services, coping costs and willingness to pay (WTP), from a survey of 1500 randomly sampled households in Kathmandu, Nepal.
Abstract: This paper investigates two complementary pieces of data on households' demand for improved water services, coping costs and willingness to pay (WTP), from a survey of 1500 randomly sampled households in Kathmandu, Nepal. We evaluate how coping costs and WTP vary across types of water users and income. We find that households in Kathmandu Valley engage in five main types of coping behaviors: collecting, pumping, treating, storing, and purchasing. These activities impose coping costs on an average household of as much as 3 U.S. dollars per month or about 1% of current incomes, representing hidden but real costs of poor infrastructure service. We find that these coping costs are almost twice as much as the current monthly bills paid to the water utility but are significantly lower than estimates of WTP for improved services. We find that coping costs are statistically correlated with WTP and several household characteristics.