
Showing papers in "Nonlinear Processes in Geophysics in 2011"


Journal ArticleDOI
TL;DR: In this article, the authors compare the linear interpolation technique with approaches designed for irregular sampling, such as the Lomb-Scargle Fourier transformation and kernel-based methods, for analyzing the correlation functions and persistence of irregularly sampled time series.
Abstract: Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique with different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as the Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques. All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. In the analysis of highly irregular time series, we find a 40 % lower RMSE for the lag-1 autocorrelation function (ACF) with the Gaussian kernel method than with the linear interpolation scheme. For the cross-correlation function (CCF) the RMSE is then lower by 60 %. The Lomb-Scargle technique gave results comparable to the kernel methods in the univariate case, but poorer results in the bivariate case. In particular, the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods. We illustrate the performance of interpolation vs. the Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross-correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques, and it is suitable for large-scale application to paleo-data.

241 citations
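The kernel-based estimator favoured here replaces interpolation with a weighting of all observation pairs by how close their time separation lies to the target lag. A minimal NumPy sketch of a Gaussian-kernel lag correlation (an illustration of the general idea, not the authors' implementation; the bandwidth h is a free parameter):

```python
import numpy as np

def gaussian_kernel_corr(t, x, lag, h):
    """Kernel-weighted correlation at time lag `lag` for an irregularly
    sampled series x(t); h is the Gaussian kernel bandwidth (units of t)."""
    x = (x - x.mean()) / x.std()
    dt = t[:, None] - t[None, :]                 # all pairwise time separations
    w = np.exp(-0.5 * ((dt - lag) / h) ** 2)     # weight pairs near the target lag
    np.fill_diagonal(w, 0.0)                     # drop zero-separation self-pairs
    return np.sum(w * np.outer(x, x)) / np.sum(w)

# toy irregular series: periodic signal plus noise at random sample times
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 200.0, 300))
x = np.sin(2 * np.pi * t / 30.0) + 0.3 * rng.standard_normal(t.size)
print(gaussian_kernel_corr(t, x, lag=5.0, h=1.0))
```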


Journal ArticleDOI
TL;DR: Two important results refer to the complementarity of spectral analysis of a time series in terms of the continuous and discrete parts of its power spectrum, and to the need for coupled modeling of natural and socio-economic systems.
Abstract: We review work on extreme events, their causes and consequences, by a group of European and American researchers involved in a three-year project on these topics. The review covers theoretical aspects of time series analysis and of extreme value theory, as well as of the deterministic modeling of extreme events, via continuous and discrete dynamic models. The applications include climatic, seismic and socio-economic events, along with their prediction. Two important results refer to (i) the complementarity of spectral analysis of a time series in terms of the continuous and the discrete part of its power spectrum; and (ii) the need for coupled modeling of natural and socio-economic systems. Both these results have implications for the study and prediction of natural hazards and their human impacts.

193 citations


Journal ArticleDOI
TL;DR: Using direct numerical simulations of decaying incompressible two-dimensional magnetohydrodynamics (MHD), the authors found that complex reconnection processes occur locally in fully developed turbulence (Servidio et al., 2009, 2010a).
Abstract: In this work, recent advances on the study of reconnection in turbulence are reviewed. Using direct numerical simulations of decaying incompressible two-dimensional magnetohydrodynamics (MHD), it was found that complex reconnection processes occur locally in fully developed turbulence (Servidio et al., 2009, 2010a). In this complex scenario, reconnection is spontaneous but locally driven by the fields, with the boundary conditions provided by the turbulence. Matching classical turbulence analysis with a generalized Sweet-Parker theory, the statistical features of these multiple reconnection events have been identified. A discussion of the accuracy of our algorithms is provided, highlighting the necessity of adequate spatial resolution. Applications to the study of solar wind discontinuities are reviewed, comparing simulations to spacecraft observations. New results are shown, studying the time evolution of these local reconnection events. A preliminary comparison between MHD and Hall MHD is reported. Our new approach to the study of reconnection as an element of turbulence has broad applications to space plasmas, shedding new light on the study of magnetic reconnection in nature.

138 citations


Journal ArticleDOI
TL;DR: In this paper, the authors apply the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, to analyze the climatic response to general forcings.
Abstract: The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value, as it is a spatially extended one-dimensional model and presents the basic ingredients, such as dissipation, advection and the presence of an external forcing, of the actual atmosphere. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions as well as the integral constraints can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow us to define these properties as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions from the outputs of the simulations up to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology to study general climate change problems on virtually any time scale, by resorting to only well-selected simulations and by taking full advantage of ensemble methods. The specific case of the globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.

108 citations
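A quick way to get a feel for the sensitivity question posed here is a brute-force finite-difference experiment on Lorenz 96: perturb the forcing F and measure the change in the long-time mean energy. This sketch is only the naive counterpart of the rigorous Ruelle response computation; run lengths and parameters are illustrative:

```python
import numpy as np

def l96_rhs(x, F):
    """Lorenz 96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def mean_energy(F, n=40, dt=0.01, steps=60_000, spinup=10_000, seed=0):
    """Long-time mean energy per site, <0.5 x_i^2>, from an RK4 integration."""
    x = F + 0.01 * np.random.default_rng(seed).standard_normal(n)
    e, m = 0.0, 0
    for k in range(steps):
        k1 = l96_rhs(x, F)
        k2 = l96_rhs(x + 0.5 * dt * k1, F)
        k3 = l96_rhs(x + 0.5 * dt * k2, F)
        k4 = l96_rhs(x + dt * k3, F)
        x = x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        if k >= spinup:
            e += 0.5 * np.dot(x, x) / n
            m += 1
    return e / m

# centered finite-difference estimate of the sensitivity d<E>/dF around F = 8
dF = 0.5
print((mean_energy(8.0 + dF) - mean_energy(8.0 - dF)) / (2 * dF))
```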


Journal ArticleDOI
TL;DR: Accounting for bias due to dynamical memory in an association/dependence measure markedly changes the network topology and makes it possible to observe previously hidden phenomena in climate network evolution.
Abstract: The bias due to dynamical memory (serial correlations) in an association/dependence measure (absolute cross-correlation) is demonstrated in model data and identified in time series of meteorological variables used for the construction of climate networks. Accounting for such bias when inferring the links of the climate network markedly changes the network topology and makes it possible to observe previously hidden phenomena in climate network evolution.

103 citations
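The bias in question arises because serial correlation reduces the number of effectively independent samples behind a cross-correlation estimate. A common first-order (Bartlett-type) correction, shown here as a generic illustration rather than the exact procedure of the paper:

```python
import numpy as np

def lag1(x):
    """Lag-1 autocorrelation."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def neff(x, y):
    """Effective sample size for the cross-correlation of two AR(1)-like
    series (first-order Bartlett-type correction)."""
    r1, r2 = lag1(x), lag1(y)
    return len(x) * (1 - r1 * r2) / (1 + r1 * r2)

rng = np.random.default_rng(1)
n, a = 2000, 0.9
x, y = np.zeros(n), np.zeros(n)
for k in range(1, n):                       # two independent AR(1) processes
    x[k] = a * x[k - 1] + rng.standard_normal()
    y[k] = a * y[k - 1] + rng.standard_normal()
r = np.corrcoef(x, y)[0, 1]
print(f"r = {r:.3f}, naive n = {n}, effective n ≈ {neff(x, y):.0f}")
```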


Journal ArticleDOI
TL;DR: In this paper, a prior probability density function conditional on the forecast ensemble is derived using Bayesian principles, which generates a new class of ensemble Kalman filters, called the finite-size ensemble Kalman filter (EnKF-N).
Abstract: The main intrinsic source of error in the ensemble Kalman filter (EnKF) is sampling error. External sources of error, such as model error or deviations from Gaussianity, depend on the dynamical properties of the model. Sampling errors can lead to instability of the filter which, as a consequence, often requires inflation and localization. The goal of this article is to derive an ensemble Kalman filter which is less sensitive to sampling errors. A prior probability density function conditional on the forecast ensemble is derived using Bayesian principles. Even though this prior is built upon the assumption that the ensemble is Gaussian-distributed, it is different from the Gaussian probability density function defined by the empirical mean and the empirical error covariance matrix of the ensemble, which is implicitly used in traditional EnKFs. This new prior generates a new class of ensemble Kalman filters, called the finite-size ensemble Kalman filter (EnKF-N). One deterministic variant, the finite-size ensemble transform Kalman filter (ETKF-N), is derived. It is tested on the Lorenz '63 and Lorenz '95 models. In this context, ETKF-N is shown to be stable without inflation for ensemble sizes greater than the model unstable subspace dimension, at the same numerical cost as the ensemble transform Kalman filter (ETKF). One variant of ETKF-N seems to systematically outperform the ETKF with optimally tuned inflation. However, it is shown that ETKF-N does not account for all sampling errors and, like any EnKF, necessitates localization whenever the ensemble size is too small. In order to explore the need for inflation in this small-ensemble-size regime, a local version of the new class of filters is defined (LETKF-N) and tested on the Lorenz '95 toy model. Whatever the size of the ensemble, the filter is stable. Its performance without inflation is slightly inferior to that of LETKF with optimally tuned inflation for small intervals between updates, and superior to LETKF with optimally tuned inflation for large time intervals between updates.

93 citations
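For reference, the baseline the paper improves on is the standard ETKF analysis step. A compact NumPy sketch of that textbook update (the Hunt et al. 2007 formulation, not the EnKF-N/ETKF-N prior derived in the paper):

```python
import numpy as np

def etkf_update(X, y, H, R):
    """One ETKF analysis step. X: (n, N) forecast ensemble; y: (p,)
    observations; H: (p, n) observation operator; R: (p, p) obs error cov."""
    n, N = X.shape
    xb = X.mean(axis=1)
    Xp = X - xb[:, None]                           # state perturbations
    Y = H @ X
    yb = Y.mean(axis=1)
    Yp = Y - yb[:, None]                           # obs-space perturbations
    C = Yp.T @ np.linalg.solve(R, Yp)              # (N, N)
    evals, evecs = np.linalg.eigh((N - 1) * np.eye(N) + C)
    Pa = evecs @ np.diag(1.0 / evals) @ evecs.T            # ensemble-space analysis cov
    Wa = evecs @ np.diag(np.sqrt((N - 1) / evals)) @ evecs.T  # symmetric sqrt transform
    wa = Pa @ Yp.T @ np.linalg.solve(R, y - yb)            # mean-update weights
    return xb[:, None] + Xp @ (wa[:, None] + Wa)

# toy example: 3 state variables, 10 members, 2 observed components
rng = np.random.default_rng(2)
X = rng.standard_normal((3, 10))
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
Xa = etkf_update(X, np.array([0.5, -0.2]), H, 0.1 * np.eye(2))
print(Xa.mean(axis=1))
```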


Journal ArticleDOI
TL;DR: In this article, the complexity of fluid particle trajectories provides the basis for a new method, referred to as the Complexity Method (CM), for estimation of Lagrangian coherent structures in aperiodic flows that are measured over finite time intervals.
Abstract: It is argued that the complexity of fluid particle trajectories provides the basis for a new method, referred to as the Complexity Method (CM), for estimation of Lagrangian coherent structures in aperiodic flows that are measured over finite time intervals. The basic principles of the CM are explained, and the CM is tested in a variety of examples, both idealized and realistic, and in different reference frames. Two measures of complexity are explored in detail: the correlation dimension of a trajectory, and a new measure – the ergodicity defect. Both measures yield structures that strongly resemble Lagrangian coherent structures in all of the examples considered. Since the CM uses properties of individual trajectories, and not separation rates between closely spaced trajectories, it may have advantages for the analysis of ocean float and drifter data sets in which trajectories are typically widely and non-uniformly spaced.

86 citations
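Of the two complexity measures, the correlation dimension of a single trajectory is the easier to sketch: it is the scaling exponent of the correlation integral C(r). A minimal Grassberger-Procaccia style estimate (illustrative only; real drifter work needs careful choice of the scaling range):

```python
import numpy as np

def correlation_dimension(traj, r_min, r_max, n_r=12):
    """Grassberger-Procaccia estimate: slope of log C(r) vs. log r, where
    C(r) is the fraction of trajectory-point pairs closer than r."""
    d = np.sqrt(((traj[:, None, :] - traj[None, :, :]) ** 2).sum(-1))
    dists = d[np.triu_indices(len(traj), k=1)]   # unique pairs only
    rs = np.logspace(np.log10(r_min), np.log10(r_max), n_r)
    C = np.array([(dists < r).mean() for r in rs])
    slope, _ = np.polyfit(np.log(rs), np.log(C), 1)
    return slope

# sanity check: points scattered on a circle should give a dimension near 1
rng = np.random.default_rng(3)
theta = rng.uniform(0.0, 2 * np.pi, 1000)
traj = np.column_stack([np.cos(theta), np.sin(theta)])
print(correlation_dimension(traj, 0.02, 0.5))
```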


Journal ArticleDOI
TL;DR: In this article, the Dubreil-Jacotin-Long (DJL) equation is used to compute internal solitary waves with two scales, or double-humped waves.
Abstract: Internal solitary waves are widely observed in both the oceans and large lakes. They can be described by a variety of mathematical theories, covering the full spectrum from first-order asymptotic theory (i.e. Korteweg-de Vries, or KdV, theory), through higher-order extensions of weakly nonlinear-weakly nonhydrostatic theory, to fully nonlinear-weakly nonhydrostatic theories, and finally exact theory based on the Dubreil-Jacotin-Long (DJL) equation, which is formally equivalent to the full set of Euler equations. We discuss how spectral and pseudospectral methods allow for the computation of novel phenomena in both approximate and exact theories. In particular, we construct markedly different density profiles for which the coefficients in the KdV theory are very nearly identical. These two density profiles yield qualitatively different behaviour both for exact, or fully nonlinear, waves computed using the DJL equation and in dynamic simulations of the time-dependent Euler equations. For the exact DJL theory, we compute solitary waves with two scales, or so-called double-humped waves.

76 citations


Journal ArticleDOI
TL;DR: It is demonstrated that recurrence network analysis is able to detect relevant regime shifts in synthetic data as well as in problematic geoscientific time series, suggesting its application as a general exploratory tool of time series analysis complementing existing methods.
Abstract: The analysis of palaeoclimate time series is usually affected by severe methodological problems, resulting primarily from non-equidistant sampling and uncertain age models. As an alternative to existing methods of time series analysis, in this paper we argue that the statistical properties of recurrence networks – a recently developed approach – are promising candidates for characterising the system's nonlinear dynamics and quantifying structural changes in its reconstructed phase space as time evolves. To a first-order approximation, the results of recurrence network analysis are invariant to changes in the age model and are not directly affected by non-equidistant sampling of the data. Specifically, we investigate the behaviour of recurrence network measures for both paradigmatic model systems with non-stationary parameters and four marine records of long-term palaeoclimate variations. We show that the obtained results are qualitatively robust under changes of the relevant parameters of our method, including detrending, the size of the running window used for analysis, and the embedding delay. We demonstrate that recurrence network analysis is able to detect relevant regime shifts in synthetic data as well as in problematic geoscientific time series. This suggests its application as a general exploratory tool of time series analysis complementing existing methods.

75 citations
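A recurrence network is simply the recurrence matrix of an embedded time series read as a graph adjacency matrix, whose topology (e.g. transitivity) is then tracked in a running window. A bare-bones NumPy sketch of the construction and of one such measure (the parameters dim, tau, eps are illustrative, not values from the paper):

```python
import numpy as np

def recurrence_network(x, dim, tau, eps):
    """Adjacency matrix of an unweighted, undirected recurrence network
    built from a scalar series via time-delay embedding."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    d = np.sqrt(((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1))
    A = (d < eps).astype(float)
    np.fill_diagonal(A, 0.0)                 # no self-loops
    return A

def transitivity(A):
    """Network transitivity T = 3 * triangles / connected triples,
    computed as trace(A^3) / sum_i k_i (k_i - 1)."""
    k = A.sum(1)
    num = np.trace(A @ A @ A)                # = 6 * number of triangles
    den = (k * (k - 1)).sum()                # = 2 * number of connected triples
    return num / den if den else 0.0

rng = np.random.default_rng(4)
x = np.sin(0.2 * np.arange(800)) + 0.1 * rng.standard_normal(800)
A = recurrence_network(x, dim=3, tau=5, eps=0.3)
print("transitivity:", transitivity(A))
```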


Journal ArticleDOI
TL;DR: In this paper, the authors revisited experimental studies performed by Ekman on dead-water (Ekman, 1904) using modern techniques in order to present new insights into this peculiar phenomenon, and extended its description to more general situations such as a three-layer fluid or a linearly stratified fluid in the presence of a pycnocline.
Abstract: We revisit experimental studies performed by Ekman on dead-water (Ekman, 1904) using modern techniques in order to present new insights into this peculiar phenomenon. We extend its description to more general situations such as a three-layer fluid or a linearly stratified fluid in the presence of a pycnocline, showing the robustness of the dead-water phenomenon. We observe large-amplitude nonlinear internal waves which are coupled to the boat dynamics, and we emphasize that the modeling of the wave-induced drag requires more analysis, taking into account nonlinear effects. Dedicated to Fridtjof Nansen, born 150 yr ago (10 October 1861).

52 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that when the Extended Kalman Filter is applied to a chaotic system, the rank of the error covariance matrices, after a sufficiently large number of iterations, reduces to N+ + N0, where N+ and N0 are the numbers of positive and null Lyapunov exponents, respectively.
Abstract: When the Extended Kalman Filter is applied to a chaotic system, the rank of the error covariance matrices, after a sufficiently large number of iterations, reduces to N+ + N0, where N+ and N0 are the numbers of positive and null Lyapunov exponents, respectively. This is due to the collapse of the solution of the full Extended Kalman Filter onto the unstable and neutral tangent subspace. The solution is therefore the same as the solution obtained by confining the assimilation to the space spanned by the Lyapunov vectors with non-negative Lyapunov exponents. Theoretical arguments and numerical verification are provided to show that the asymptotic state and covariance estimates of the full EKF and of its reduced form, with assimilation in the unstable and neutral subspace (EKF-AUS), are the same. The consequences of these findings for applications of Kalman-type filters to chaotic models are discussed.

Journal ArticleDOI
TL;DR: In this paper, the heterogeneity of carbonate rocks pore spaces based on the image analysis of scanning electron microscopy (SEM) data acquired at various magnifications was analyzed. And the results showed that magnification has an impact on multifractal dimensions, revealing the limit of applicability of multifractal descriptions for these natural structures.
Abstract: . Pore spaces heterogeneity in carbonates rocks has long been identified as an important factor impacting reservoir productivity. In this paper, we study the heterogeneity of carbonate rocks pore spaces based on the image analysis of scanning electron microscopy (SEM) data acquired at various magnifications. Sixty images of twelve carbonate samples from a reservoir in the Middle East were analyzed. First, pore spaces were extracted from SEM images using a segmentation technique based on watershed algorithm. Pores geometries revealed a multifractal behavior at various magnifications from 800x to 12 000x. In addition, the singularity spectrum provided quantitative values that describe the degree of heterogeneity in the carbonates samples. Moreover, for the majority of the analyzed samples, we found low variations (around 5%) in the multifractal dimensions for magnifications between 1700x and 12 000x. Finally, these results demonstrate that multifractal analysis could be an appropriate tool for characterizing quantitatively the heterogeneity of carbonate pore spaces geometries. However, our findings show that magnification has an impact on multifractal dimensions, revealing the limit of applicability of multifractal descriptions for these natural structures.
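The multifractal (Renyi) dimensions behind such a singularity spectrum can be estimated from a segmented binary image by box counting. A generic sketch of that estimator (not the authors' pipeline; box sizes and the toy image are placeholders):

```python
import numpy as np

def generalized_dimension(img, q, sizes=(2, 4, 8, 16, 32)):
    """Box-counting estimate of the Renyi dimension D_q of a binary image:
    D_q = 1/(q-1) * d log(sum_i p_i^q) / d log(box size), for q != 1."""
    logs_s, logs_mq = [], []
    total = img.sum()
    for s in sizes:
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
        p = boxes[boxes > 0] / total          # box occupation probabilities
        logs_s.append(np.log(s))
        logs_mq.append(np.log((p ** q).sum()))
    slope, _ = np.polyfit(logs_s, logs_mq, 1)
    return slope / (q - 1)

rng = np.random.default_rng(5)
img = (rng.random((256, 256)) < 0.2).astype(float)   # toy "pore" mask
for q in (0.5, 2.0, 3.0):                            # uniform mask gives D_q ≈ 2
    print(q, generalized_dimension(img, q))
```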

Journal ArticleDOI
TL;DR: This approach shows encouraging results but will need further refinement before becoming a viable supplement to dynamical regional climate modelling of temperature and rainfall, particularly for the full European domain.
Abstract: An Artificial Neural Network (ANN) approach is used to downscale ECHAM5 GCM temperature (T) and rainfall (R) fields to RegCM3 regional model scale over Europe. The main inputs to the neural network were the ECHAM5 fields and topography, and RegCM3 topography. An ANN trained for the period 1960–1980 was able to recreate the RegCM3 1981–2000 mean T and R fields with reasonable accuracy. The ANN showed an improvement over a simple lapse-rate correction method for T, although the ANN R field did not capture all the fine-scale detail of the RCM field. An ANN trained over a smaller area of Southern Europe was able to capture this detail with more precision. The ANN was unable to accurately recreate the RCM climate change (CC) signal between 1981–2000 and 2081–2100, and it is suggested that this is because the relationship between the GCM fields, RCM fields and topography is not constant with time and changing climate. An ANN trained with three ten-year "time-slices" was able to better reproduce the RCM CC signal, particularly for the full European domain.
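The downscaling setup is a pointwise regression from coarse GCM fields plus topography to the RCM field. A schematic with synthetic stand-in arrays (the lapse-rate "truth", network size and sample counts are all invented for illustration), using scikit-learn rather than the authors' ANN code:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-ins: coarse GCM temperature, GCM and RCM topography
# (the real predictors/targets are ECHAM5 and RegCM3 fields).
rng = np.random.default_rng(6)
n = 5000
gcm_T = rng.normal(10.0, 5.0, n)
gcm_topo = rng.uniform(0.0, 2000.0, n)
rcm_topo = gcm_topo + rng.normal(0.0, 300.0, n)
# toy "truth": lapse-rate cooling with fine-scale topography plus noise
rcm_T = gcm_T - 0.0065 * (rcm_topo - gcm_topo) + rng.normal(0.0, 0.5, n)

X = np.column_stack([gcm_T, gcm_topo, rcm_topo])
ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
ann.fit(X[:4000], rcm_T[:4000])            # "training period"
pred = ann.predict(X[4000:])               # "validation period"
print("RMSE:", np.sqrt(np.mean((pred - rcm_T[4000:]) ** 2)))
```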

Journal ArticleDOI
TL;DR: In this article, a reference model time sequence is first produced and used to generate synthetic data, restricted here to the large-scale component of the magnetic field and its rate of change at the outer boundary.
Abstract: Over the past decades, direct three-dimensional numerical modelling has been successfully used to reproduce the main features of the geodynamo. Here we report on efforts to solve the associated inverse problem, aiming at inferring the underlying properties of the system from the sole knowledge of surface observations and the first-principle dynamical equations describing the convective dynamo. To this end we rely on twin experiments. A reference model time sequence is first produced and used to generate synthetic data, restricted here to the large-scale component of the magnetic field and its rate of change at the outer boundary. Starting from a different initial condition, a second sequence is next run and attempts are made to recover the internal magnetic, velocity and buoyancy anomaly fields from the sparse surficial data. In order to reduce the vast underdetermination of this problem, we use stochastic inversion, a linear estimation method determining the most likely internal state compatible with the observations and some prior knowledge, and we also implement a sequential evolution algorithm in order to invert time-dependent surface observations. The prior is the multivariate statistics of the numerical model, which are directly computed from a large number of snapshots stored during a preliminary direct run. The statistics display strong correlation between different harmonic degrees of the surface observations and internal fields, provided they share the same harmonic order, a natural consequence of the linear coupling of the governing dynamical equations and of the leading influence of the Coriolis force. Synthetic experiments performed with a weakly nonlinear model yield an excellent quantitative retrieval of the internal structure. In contrast, the use of a strongly nonlinear (and more realistic) model results in less accurate static estimations, which in turn fail to constrain the unobserved small scales in the time integration of the evolution scheme. Evaluating the quality of forecasts of the system evolution against the reference solution, we show that our scheme can improve predictions based on linear extrapolations on forecast horizons shorter than the system e-folding time. Still, in the perspective of forthcoming data assimilation activities, our study underlines the need for advanced estimation techniques able to cope with the moderate to strong nonlinearities present in the geodynamo.

Journal ArticleDOI
TL;DR: In this paper, the authors link an economic model, in which town-manager agents choose economically optimal beach-nourishment intervals according to past observations of their immediate shoreline, to a simplified coastal-dynamics model that includes alongshore sediment transport and background erosion (e.g. from sea-level rise).
Abstract: Developed coastal areas often exhibit a strong systemic coupling between shoreline dynamics and economic dynamics. "Beach nourishment", a common erosion-control practice, involves mechanically depositing sediment from outside the local littoral system onto an actively eroding shoreline to alter shoreline morphology. Natural sediment-transport processes quickly rework the newly engineered beach, causing further changes to the shoreline that in turn affect subsequent beach-nourishment decisions. To the limited extent that this landscape/economic coupling has been considered, evidence suggests that towns tend to employ spatially myopic economic strategies under which individual towns make isolated decisions that do not account for their neighbors. What happens when an optimization strategy that explicitly ignores spatial interactions is incorporated into a physical model that is spatially dynamic? The long-term attractor that develops for the coupled system (the state and behavior to which the system evolves over time) is unclear. We link an economic model, in which town-manager agents choose economically optimal beach-nourishment intervals according to past observations of their immediate shoreline, to a simplified coastal-dynamics model that includes alongshore sediment transport and background erosion (e.g. from sea-level rise). Simulations suggest that feedbacks between these human and natural coastal processes can generate emergent behaviors. When alongshore sediment transport and spatially myopic nourishment decisions are coupled, increases in the rate of sea-level rise can destabilize economically optimal nourishment practices into a regime characterized by the emergence of chaotic shoreline evolution.

Journal ArticleDOI
TL;DR: In this paper, the Sagdeev pseudopotential method was used to analyze the effect of superthermal hot electrons having a kappa distribution on the minimum values of the spectral index and the Mach number for which electron-acoustic solitons can exist.
Abstract: Arbitrary-amplitude electron-acoustic solitons are studied in an unmagnetized plasma having cold electrons and ions, superthermal hot electrons and an electron beam. Using the Sagdeev pseudopotential method, a theoretical analysis is carried out by assuming superthermal hot electrons having a kappa distribution. The results show that the inclusion of an electron beam alters the minimum values of the spectral index, κ, of the superthermal electron distribution and of the Mach number for which electron-acoustic solitons can exist, and also changes their width and electric field amplitude. For auroral region parameters, the maximum electric field amplitudes and soliton widths are found in the ranges ~(30–524) mV m−1 and ~(329–729) m, respectively, for a fixed Mach number M = 1.1 and for electron beam speeds of (660–1990) km s−1.

Journal ArticleDOI
TL;DR: In this article, a new technique is presented based on Bayesian neural network (BNN) theory using a Hybrid Monte Carlo (HMC)/Markov Chain Monte Carlo (MCMC) simulation scheme to invert one- and two-dimensional Direct Current (DC) vertical electrical sounding data acquired around the Koyna region in India.
Abstract: The Koyna region is well known for its triggered seismic activity, which has continued since the hazardous earthquake of M=6.3 that occurred around the Koyna reservoir on 10 December 1967. Understanding the shallow distribution of the resistivity pattern in such a seismically critical area is vital for mapping faults, fractures and lineaments. However, deducing the true resistivity distribution from apparent resistivity data lacks precision due to the intrinsic non-linearity in the data structures. Here we present a new technique based on Bayesian neural network (BNN) theory using a Hybrid Monte Carlo (HMC)/Markov Chain Monte Carlo (MCMC) simulation scheme. The new method is applied to invert one- and two-dimensional Direct Current (DC) vertical electrical sounding (VES) data acquired around the Koyna region in India. Prior to applying the method to actual resistivity data, it was tested on synthetic signals. In this approach the objective/cost function is optimized following the HMC/MCMC sampling-based algorithm, and each trajectory is updated by approximating the Hamiltonian differential equations through a leapfrog discretization scheme. The stability of the new inversion technique was tested in the presence of correlated red noise, and the uncertainty of the result was estimated using the BNN code. The estimated true resistivity distribution was compared with the results of singular value decomposition (SVD)-based conventional resistivity inversion. Results based on the HMC-based Bayesian neural network are in good agreement with the existing model results; in some cases they also provide more detailed and precise results, which appear to be justified by local geological and structural details. The new BNN approach based on HMC is faster and proves to be a promising inversion scheme for interpreting complex and non-linear resistivity problems. The HMC-based BNN results are quite useful for the interpretation of fractures and lineaments in seismically active regions.
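The sampling engine described here, HMC with a leapfrog integrator plus a Metropolis accept/reject step, is compact enough to sketch in full. A generic version for an arbitrary log-posterior (not the authors' BNN code; step size and trajectory length are illustrative):

```python
import numpy as np

def hmc_sample(logp, grad_logp, x0, n_samples, step=0.1, n_leap=20, seed=0):
    """Minimal Hybrid/Hamiltonian Monte Carlo: each trajectory integrates
    Hamilton's equations with a leapfrog scheme, then a Metropolis test."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    out = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)                 # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * step * grad_logp(x_new)           # leapfrog: half kick
        for _ in range(n_leap - 1):
            x_new += step * p_new                        # drift
            p_new += step * grad_logp(x_new)             # full kick
        x_new += step * p_new
        p_new += 0.5 * step * grad_logp(x_new)           # final half kick
        dH = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
        if np.log(rng.random()) < dH:                    # Metropolis accept
            x = x_new
        out.append(x.copy())
    return np.array(out)

# toy target: 2-D standard Gaussian posterior
logp = lambda x: -0.5 * x @ x
grad = lambda x: -x
samples = hmc_sample(logp, grad, np.zeros(2), 2000)
print(samples.mean(0), samples.std(0))                   # ≈ [0, 0] and ≈ [1, 1]
```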

Journal ArticleDOI
TL;DR: In this article, a simplified dune field model that includes the spatial evolution of individual dunes as well as their interaction through sand exchange and binary collisions is presented, and the simulated fields have the same dune size distribution as in real dune fields but fail to reproduce their homogeneity along the wind direction.
Abstract: Barchans are isolated mobile dunes often organized in large dune fields. Dune fields seem to present a characteristic dune size and spacing, which suggests a cooperative behavior based on dune interaction. In Duran et al. (2009), we proposed that the redistribution of sand by collisions between dunes is a key element for the stability and size selection of barchan dune fields. That approach was based on a mean-field model ignoring the spatial distribution of dune fields. Here, we present a simplified dune field model that includes the spatial evolution of individual dunes as well as their interaction through sand exchange and binary collisions. As a result, the dune field evolves towards a steady state that depends on the boundary conditions. Comparing our results with measurements of Moroccan dune fields, we find that the simulated fields have the same dune size distribution as the real fields, but fail to reproduce their homogeneity along the wind direction.

Journal ArticleDOI
TL;DR: In this article, the effects of deterministic chaotic driving in a low-order chaotic global atmospheric circulation model were investigated, and the extreme value statistics of group maxima were found to follow a Weibull distribution.
Abstract: In a low-order chaotic global atmospheric circulation model, the effects of deterministic chaotic driving are investigated. As a result of the driving, peak-over-threshold type extreme events, e.g. cyclonic activity in the model, become more extreme, with an increased frequency of recurrence. When the characteristic time of the driving is comparable to that of the undriven system, a resonance effect with amplified variance shows up. For very fast driving we find a reduced enhancement of variance, which is also the case with white noise driving. Snapshot attractors and their natural measures are determined as a function of time, and a resonance effect is also identified there. The extreme value statistics of group maxima are found to follow a Weibull distribution.

Journal ArticleDOI
Xiaohua Yang, Xiaoling Zhang, Xiaoyi Hu, Zhifeng Yang, J. Q. Li
TL;DR: Compared with other nonlinear assessment methods, the advantage of NOSPAM is that it can not only rationally determine the index weights, but also measure the uncertain information quantity in the WRRA.
Abstract: There is much uncertain information in the water resource renewability assessment (WRRA) that is very difficult to quantify. The index weights are the key parameters in the assessment model. To assess water resource renewability rationally, a novel nonlinear optimization set pair analysis model (NOSPAM) is proposed, in which a nonlinear optimization model based on a gray-encoded hybrid accelerating genetic algorithm is given to determine the weights by optimizing subjective and objective information, and an improved set pair analysis model based on the connection degree is established to deal with certain-uncertain information. In addition, a new formula is established for determining the certain-uncertain information quantity in NOSPAM. NOSPAM is used to assess the water resource renewability of the nine administrative divisions in the Yellow River Basin. Results show that NOSPAM can deal with uncertain information as well as subjective and objective information. Compared with other nonlinear assessment methods (such as the gray associative analysis method and the fuzzy assessment method), the advantage of NOSPAM is that it can not only rationally determine the index weights, but also measure the uncertain information quantity in the WRRA. The NOSPAM model is thus an extension of nonlinear assessment models.

Journal ArticleDOI
TL;DR: In this paper, the statistical and dynamical properties of bias correction and linear post-processing are investigated when the system under interest is affected by model errors and is experiencing parameter modifications, mimicking the potential impact of climate change.
Abstract: The statistical and dynamical properties of bias correction and linear post-processing are investigated when the system under interest is affected by model errors and is experiencing parameter modifications, mimicking the potential impact of climate change. The analysis is first performed for simple typical scalar systems, an Ornstein-Uhlenbeck process (O-U) and a limit point bifurcation. It reveals system-specific (linear or non-linear) dependences of biases and post-processing corrections as a function of parameter modifications. A more realistic system is then investigated, a low-order model of moist general circulation, incorporating several processes of high relevance in climate dynamics (radiative effects, cloud feedbacks), but still sufficiently simple to allow for an extensive exploration of its dynamics. In this context, bias and post-processing corrections also display complicated variations when the system experiences temperature climate changes of up to a few degrees. This precludes a straightforward application of these corrections from one system state to another (as usually adopted for climate projections), and further increases the uncertainty in evaluating the amplitudes of climate changes.

Journal ArticleDOI
TL;DR: In this paper, a weakly dispersive range (WDR) of kinetic Alfven turbulence is identified and investigated for the first time in the context of the MHD/kinetic turbulence transition.
Abstract: A weakly dispersive range (WDR) of kinetic Alfven turbulence is identified and investigated for the first time in the context of the MHD/kinetic turbulence transition. We find perpendicular wavenumber spectra ∝ k⊥−3 and ∝ k⊥−4 formed in the WDR by strong and weak turbulence of kinetic Alfven waves (KAWs), respectively. These steep WDR spectra connect shallower spectra in the MHD and strongly dispersive KAW ranges, which results in a specific double-kink (2-k) pattern often seen in observed turbulent spectra. The first kink occurs where MHD turbulence transforms into weakly dispersive KAW turbulence; the second one is between the weakly and strongly dispersive KAW ranges. Our analysis suggests that partial turbulence dissipation due to amplitude-dependent non-adiabatic ion heating may occur in the vicinity of the first spectral kink. The threshold-like nature of this process results in a conditional selective dissipation that affects only the largest over-threshold amplitudes and that decreases the intermittency in the range below the first spectral kink. Several recent counter-intuitive observational findings can be explained by the coupling between such a selective dissipation and the nonlinear interaction among weakly dispersive KAWs.

Journal ArticleDOI
TL;DR: In this article, the authors present a data assimilation method for earthquake forecasts generated by a point-process model of seismicity and test the method on a synthetic and pedagogical example of a renewal process observed in noise, which is relevant for the seismic gap hypothesis.
Abstract: Data assimilation is routinely employed in meteorology, engineering and computer sciences to optimally combine noisy observations with prior model information for obtaining better estimates of a state, and thus better forecasts, than achieved by ignoring data uncertainties. Earthquake forecasting, too, suffers from measurement errors and partial model information and may thus gain significantly from data assimilation. We present perhaps the first fully implementable data assimilation method for earthquake forecasts generated by a point-process model of seismicity. We test the method on a synthetic and pedagogical example of a renewal process observed in noise, which is relevant for the seismic gap hypothesis, models of characteristic earthquakes and recurrence statistics of large quakes inferred from paleoseismic data records. To address the non-Gaussian statistics of earthquakes, we use sequential Monte Carlo methods, a set of flexible simulation-based methods for recursively estimating arbitrary posterior distributions. We perform extensive numerical simulations to demonstrate the feasibility and benefits of forecasting earthquakes based on data assimilation.
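A sequential Monte Carlo (particle) filter for a renewal process observed in noise can be written in a few lines: propagate each particle with a sampled inter-event time, weight by the observation likelihood, resample. A toy sketch under assumed lognormal recurrence and Gaussian observation noise (all parameters invented for illustration; systematic resampling and adaptive steps would be used in practice):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hidden renewal process: lognormal inter-event times (a common choice for
# characteristic-earthquake recurrence); catalogue times carry Gaussian noise.
n_ev, n_part, obs_sd = 30, 5000, 10.0
dt_true = rng.lognormal(mean=np.log(100.0), sigma=0.3, size=n_ev)
t_true = np.cumsum(dt_true)
t_obs = t_true + rng.normal(0.0, obs_sd, n_ev)

particles = np.zeros(n_part)          # each particle = latest true event time
for k in range(n_ev):
    # forecast step: propagate with a sampled inter-event time
    particles = particles + rng.lognormal(np.log(100.0), 0.3, n_part)
    # analysis step: weight by the likelihood of the noisy observed time
    w = np.exp(-0.5 * ((t_obs[k] - particles) / obs_sd) ** 2)
    w /= w.sum()
    # resample (multinomial, for brevity)
    particles = particles[rng.choice(n_part, n_part, p=w)]

print("posterior mean of latest event time:", particles.mean(),
      "truth:", t_true[-1])
```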

Journal ArticleDOI
TL;DR: In this paper, a multifractal study is done on seismicity data of the North Western Himalaya region which mainly involve seismogenic region of 1905 Kangra great earthquake in the North-Western Himalaya Region.
Abstract: Seismicity has power law in space, time and mag- nitude distributions and same is expressed by the fractal di- mension D, Omori's exponent p and b-value. The spatio- temporal patterns of epicenters have heterogeneous charac- teristics. As the crust gets self-organised into critical state, the spatio-temporal clustering of epicenters emerges to het- erogeneous nature of seismicity. To understand the hetero- geneous characteristics of seismicity in a region, multifractal studies hold promise to characterise the dynamics of region. Multifractal study is done on seismicity data of the North- Western Himalaya region which mainly involve seismogenic region of 1905 Kangra great earthquake in the North-Western Himalaya region. The seismicity data obtained from USGS catalogue for time period 1973-2009 has been analysed for the region which includes the October 2005 Muzafrabad- Kashmir earthquake (Mw = 7.6). Significant changes have been observed in generalised dimension Dq , Dq spectra and b-value. The significant temporal changes in generalised dimension Dq , b-value and Dq q spectra prior to occur- rence of Muzaffrabad-Kashmir earthquake relates to distri- bution of epicenters in the region. The decrease in genera- lised dimension and b-value observed in our study show the relationship with the clustering of seismicity as is expected in self-organised criticality behaviour of earthquake occur- rences. Such study may become important in understanding the preparation zone of large and great size earthquake in various tectonic regions.
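Of the scaling exponents tracked here, the b-value has the simplest estimator: the Aki/Utsu maximum-likelihood formula. A short sketch with a synthetic Gutenberg-Richter catalogue (the generalized-dimension analysis of the paper is not reproduced):

```python
import numpy as np

def b_value(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes binned to dm and
    complete above mc: b = log10(e) / (mean(M) - (mc - dm/2))."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - 0.5 * dm))

# synthetic Gutenberg-Richter catalogue with true b = 1, binned to 0.1
rng = np.random.default_rng(8)
b_true, mc, dm = 1.0, 3.0, 0.1
raw = (mc - 0.5 * dm) + rng.exponential(np.log10(np.e) / b_true, 5000)
mags = np.round(raw, 1)
print("b ≈", b_value(mags, mc))
```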

Journal ArticleDOI
TL;DR: The concept of self-organized criticality (SOC) was introduced by Bak et al. (1987, 1988) to characterize the behavior of dissipative systems that contain a large number of elements interacting over a short range.
Abstract: The space environment is forever changing on all spatial and temporal scales. Energy releases are observed in numerous dynamic phenomena (e.g. solar flares, coronal mass ejections, solar energetic particle events) where measurements provide signatures of the dynamics. Parameters (e.g. peak count rate, total energy released, etc.) describing these phenomena are found to have frequency-size distributions that follow power-law behavior. Natural phenomena on Earth, such as earthquakes and landslides, display similar power-law behavior. This suggests an underlying universality in nature and poses the question of whether the distribution of energy is the same for all these phenomena. Frequency distributions provide constraints for models that aim to simulate the physics and statistics observed in the individual phenomena. The concept of self-organized criticality (SOC), also known as the "avalanche concept", was introduced by Bak et al. (1987, 1988) to characterize the behavior of dissipative systems that contain a large number of elements interacting over a short range. Such systems evolve to a critical state in which a minor event starts a chain reaction that can affect any number of elements in the system. It is found that frequency distributions of the output parameters from the chain reaction, taken over a period of time, can be represented by power laws. During the last decades SOC has been debated from all angles. New SOC models, as well as non-SOC models, have been proposed to explain the observed power-law behavior. Furthermore, since Bak's pioneering work in 1987, people have searched for signatures of SOC everywhere. This paper reviews how SOC behavior has become one way of interpreting the power-law behavior observed in naturally occurring phenomena from the Sun down to the Earth.
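The frequency-size slopes that anchor such SOC arguments are usually quantified by a maximum-likelihood power-law fit. A standard continuous-data estimator (Clauset et al. style; a generic illustration, not a method from this review):

```python
import numpy as np

def powerlaw_alpha(x, xmin):
    """Continuous MLE of the exponent alpha for P(x) ~ x^-alpha above xmin:
    alpha = 1 + n / sum(ln(x_i / xmin))."""
    x = x[x >= xmin]
    return 1.0 + len(x) / np.log(x / xmin).sum()

# sample "event sizes" from P(x) ~ x^-1.8 via inverse-transform sampling
rng = np.random.default_rng(9)
alpha_true = 1.8
x = (1.0 - rng.random(10000)) ** (-1.0 / (alpha_true - 1.0))
print("alpha ≈", powerlaw_alpha(x, xmin=1.0))
```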

Journal ArticleDOI
TL;DR: In this article, a method for perturbing WGs for future decades and to assess its effectiveness was proposed, which was applied to the WG implemented within the UKCP09 package and assessed using data from a Regional Climate Model (RCM) simulation which provides a significant change between a control run period and a distant future.
Abstract: The purpose of this paper is to provide a method for perturbing Weather Generators (WGs) for future decades and to assess its effectiveness. Here the procedure is applied to the WG implemented within the UKCP09 package and assessed using data from a Regional Climate Model (RCM) simulation which provides a significant "climate change" between a control run period and a distant future. The WG is normally calibrated on observed data. For this study, data from an RCM control period (1961–1990) was used, then perturbed using the procedure. Because only monthly differences between the RCM control and scenario periods are used to perturb the WG, the direct daily RCM scenario may be considered as unseen data to assess how well the perturbation procedure reproduces the direct RCM simulations for the future.
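The perturbation rests on monthly change factors between the RCM control and scenario periods, applied to the WG calibration data. A schematic with synthetic numbers (additive factors for temperature, multiplicative for rainfall; all values invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(10)
ctl_T = rng.normal(8.0, 6.0, (30, 12))       # 30 control years x 12 months
scn_T = ctl_T + rng.normal(3.0, 0.5, 12)     # scenario runs ~3 K warmer
ctl_R = rng.gamma(2.0, 30.0, (30, 12))
scn_R = ctl_R * rng.normal(0.9, 0.05, 12)    # scenario runs ~10 % drier

dT = scn_T.mean(0) - ctl_T.mean(0)           # additive monthly change factors
fR = scn_R.mean(0) / ctl_R.mean(0)           # multiplicative monthly factors

# apply the factors to the data the WG would normally be calibrated on
obs_T, obs_R = rng.normal(9.0, 6.0, 12), rng.gamma(2.0, 28.0, 12)
wg_T_future = obs_T + dT
wg_R_future = obs_R * fR
print(np.round(dT, 2), np.round(fR, 2))
```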

Journal ArticleDOI
TL;DR: In this article, the authors compare the consequences of two assumptions on the physical nature of the AMO index signal and show that the widely used approach based on red noise statistics cannot fully reproduce the empirical correlation properties of the record.
Abstract: In this work we critically compare the consequences of two assumptions on the physical nature of the AMO index signal. First, we show that the widely used approach based on red noise statistics cannot fully reproduce the empirical correlation properties of the record. Second, we consider a process of long range power-law correlations and demonstrate its better fit to the AMO signal. We show that in the latter case, the multidecadal oscillatory mode of the smoothed AMO index with an assigned period length of 50–70 years can be a simple statistical artifact, a consequence of limited record length. In this respect, a better term to describe the observed fluctuations of a smooth power-law spectrum is Atlantic Multidecadal Variability (AMV).
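Distinguishing red noise (short-range, AR(1)-like) from long-range power-law correlations is commonly done with estimators such as detrended fluctuation analysis, whose exponent stays near 0.5 for uncorrelated noise and exceeds 0.5 for long-range persistence. A standard DFA-1 sketch (a generic illustration, not the authors' exact methodology):

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis (linear detrending): returns the
    exponent alpha, the slope of log F(s) vs. log s."""
    y = np.cumsum(x - x.mean())                 # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        msr = []
        for seg in segs:                        # detrend each window linearly
            c = np.polyfit(t, seg, 1)
            msr.append(np.mean((seg - np.polyval(c, t)) ** 2))
        F.append(np.sqrt(np.mean(msr)))
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

rng = np.random.default_rng(11)
white = rng.standard_normal(4096)
print("white noise alpha ≈", dfa(white, [8, 16, 32, 64, 128, 256]))
```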

Journal ArticleDOI
TL;DR: In this paper, various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations, in order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity.
Abstract: Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast, and multicollinearity. The regression schemes under consideration include the ordinary least-squares (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-squares method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best-member OLS with noise). At long lead times, the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred.

Journal ArticleDOI
TL;DR: In this paper, two-dimensional (2D) particle-in-cell (PIC) simulations are performed to investigate the evolution of the electron current sheet (ECS) in guide field reconnection.
Abstract: Two-dimensional (2-D) particle-in-cell (PIC) simulations are performed to investigate the evolution of the electron current sheet (ECS) in guide field reconnection. The ECS is formed by electrons accelerated by the inductive electric field in the vicinity of the X line, and is then extended along the x direction due to the imbalance between the electric field force and the Ampère force. The tearing instability sets in when the ECS becomes sufficiently long and thin, and several seed islands are formed in the ECS. These tiny islands may coalesce and form a larger secondary island in the center of the diffusion region.

Journal ArticleDOI
TL;DR: In this paper, a series of experiments conducted at the UCLA large plasma device (LAPD) where a suprathermal electron beam was injected parallel to a static magnetic field was presented.
Abstract: Solitary electrostatic pulses have been observed in numerous regions of the magnetosphere, such as the vicinity of reconnection current sheets, shocks or auroral current systems, and are often thought to be generated by energetic electron beams. We present results of a series of experiments conducted at the UCLA large plasma device (LAPD), where a suprathermal electron beam was injected parallel to a static magnetic field. Micro-probes with tips smaller than a Debye length enabled the detection of solitary pulses with positive electric potential and half-widths of 4–25 Debye lengths (λDe), over a set of experiments with various beam energies, plasma densities and magnetic field strengths. The shape, scales and amplitudes of the structures are similar to those observed in space, and are consistent with electron holes. The dependence of these properties on the experimental parameters is shown. The velocities of the solitary structures (1–3 background electron thermal velocities) are found to be much lower than the beam velocities, suggesting an excitation mechanism driven by parallel currents associated with the electron beam.