
Showing papers on "Probability density function published in 2011"


Book
12 Feb 2011
TL;DR: In this paper, the random phase approximation (RPA) is generalised into a ``Random Phase and Amplitude'' approach, which treats both the phases and the amplitudes of the wave Fourier modes as random quantities in Wave Turbulence (WT) systems.
Abstract: In this paper we review recent developments in the statistical theory of weakly nonlinear dispersive waves, the subject known as Wave Turbulence (WT). We revise WT theory using a generalisation of the random phase approximation (RPA). This generalisation takes into account that not only the phases but also the amplitudes of the wave Fourier modes are random quantities, and it is called the ``Random Phase and Amplitude'' approach. This approach allows one to systematically derive the kinetic equation for the energy spectrum from the Peierls-Brout-Prigogine (PBP) equation for the multi-mode probability density function (PDF). The PBP equation was originally derived for three-wave systems, and in the present paper we derive a similar equation for the four-wave case. The equation for the multi-mode PDF will be used to validate the statistical assumptions about the phase and amplitude randomness used for WT closures. Further, the multi-mode PDF contains detailed statistical information beyond spectra, and it finally allows one to study non-Gaussianity and intermittency in WT, as will be described in the present paper. In particular, we will show that intermittency of stochastic nonlinear waves is related to a flux of probability in the space of wave amplitudes.

433 citations


Book ChapterDOI
01 Jan 2011
TL;DR: In this paper, some aspects of the estimation of the density function of a univariate probability distribution are discussed, and the asymptotic mean square error of a particular class of estimates is evaluated.
Abstract: This note discusses some aspects of the estimation of the density function of a univariate probability distribution. All estimates of the density function satisfying relatively mild conditions are shown to be biased. The asymptotic mean square error of a particular class of estimates is evaluated.
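For reference, the standard kernel-type estimator in this class, together with the textbook asymptotic mean square error that makes the bias-variance trade-off explicit (a sketch consistent with the note's claim, not its exact notation), is:

```latex
\hat{f}_n(x) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - X_i}{h}\right),
\qquad
\mathrm{MSE}\{\hat{f}_n(x)\} \approx
\frac{h^4}{4}\,\mu_2(K)^2\, f''(x)^2 \;+\; \frac{f(x)}{nh}\int K(u)^2\,du,
```

where h is the bandwidth and \mu_2(K) = \int u^2 K(u)\,du; the first term is the squared bias and the second the variance, so no choice of h removes the bias entirely.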

304 citations


Journal ArticleDOI
Tian Pau Chang
TL;DR: In this article, the mixture Gamma-Weibull function (GW) and the mixture truncated Normal function (NN) were proposed for the first time to estimate wind energy potential; the results showed that the proposed GW pdf fits best according to the Kolmogorov-Smirnov test, while the NN pdf performs worst.
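As a hedged illustration of this kind of goodness-of-fit comparison (using SciPy's standard Weibull in place of the paper's mixture pdfs, whose exact forms are not reproduced here; the data are mock):

```python
# Sketch: fit a candidate wind-speed pdf and score it with the
# Kolmogorov-Smirnov test; the candidate with the smallest KS statistic
# would be judged the best fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wind = stats.weibull_min.rvs(2.0, scale=7.0, size=1000, random_state=rng)  # mock speeds

shape, loc, scale = stats.weibull_min.fit(wind, floc=0.0)   # fit candidate pdf
ks_stat, p_value = stats.kstest(wind, "weibull_min", args=(shape, loc, scale))
print(f"KS statistic: {ks_stat:.4f}, p-value: {p_value:.3f}")
```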

273 citations


Journal ArticleDOI
TL;DR: In this paper, a method for the determination of stationary crystal nucleation rates in solutions is developed; it makes use of the stochastic nature of nucleation, which is reflected in the variation of the induction time over many measurements at constant supersaturation.
Abstract: A novel method for the determination of stationary crystal nucleation rates in solutions has been developed. This method makes use of the stochastic nature of nucleation, which is reflected in the variation of the induction time in many measurements at a constant supersaturation. A probability distribution function was derived which describes, under the condition of constant supersaturation, the probability of detecting crystals as a function of time, stationary nucleation rate, sample volume, and a time needed to grow the formed nuclei to a detectable size. Cumulative probability distributions of the induction time at constant supersaturation were experimentally determined using at least 80 induction times per supersaturation in 1 mL stirred solutions. The nucleation rate was determined by the best fit of the derived equation to the experimentally obtained distribution. This method was successfully applied to measure the nucleation rates at different supersaturations of two model compounds, m-aminobenzoi...
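A minimal sketch of the fitting step, assuming the derived distribution takes the commonly cited form P(t) = 1 - exp(-J V (t - tg)) with nucleation rate J, sample volume V, and growth time tg; the data file and starting values below are hypothetical:

```python
# Fit J and tg to an empirical cumulative distribution of induction times.
import numpy as np
from scipy.optimize import curve_fit

V = 1e-6  # sample volume in m^3 (1 mL)

def p_detect(t, J, tg):
    # Probability of having detected crystals by time t (assumed form).
    return 1.0 - np.exp(-J * V * np.clip(t - tg, 0.0, None))

t_induction = np.sort(np.loadtxt("induction_times.txt"))   # >= 80 values (hypothetical file)
p_empirical = (np.arange(t_induction.size) + 1.0) / t_induction.size

(J, tg), _ = curve_fit(p_detect, t_induction, p_empirical, p0=[1e7, 10.0])
print(f"Estimated nucleation rate J = {J:.3e} m^-3 s^-1, growth time tg = {tg:.1f} s")
```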

273 citations


Journal ArticleDOI
TL;DR: In this article, the response of an inductive power generator with a bistable symmetric potential to stationary random environmental excitations is investigated; it is shown that the expected value of the generator's output power is independent of the potential shape.

238 citations


Journal ArticleDOI
TL;DR: A mixture gamma (MG) distribution for the signal-to-noise ratio (SNR) of wireless channels is proposed, which is not only a more accurate model for composite fading but also a versatile approximation for any fading SNR.
Abstract: Composite fading (i.e., multipath fading and shadowing together) has increasingly been analyzed by means of the K channel and related models. Nevertheless, these models do have computational and analytical difficulties. Motivated by this context, we propose a mixture gamma (MG) distribution for the signal-to-noise ratio (SNR) of wireless channels. Not only is it a more accurate model for composite fading, but is also a versatile approximation for any fading SNR. As this distribution consists of N (≥ 1) component gamma distributions, we show how its parameters can be determined by using probability density function (PDF) or moment generating function (MGF) matching. We demonstrate the accuracy of the MG model by computing the mean square error (MSE) or the Kullback-Leibler (KL) divergence or by comparing the moments. With this model, performance metrics such as the average channel capacity, the outage probability, the symbol error rate (SER), and the detection capability of an energy detector are readily derived.
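A minimal sketch of evaluating such an MG pdf as a weighted sum of gamma densities (the weights and parameters below are illustrative, not the result of PDF or MGF matching for a specific channel):

```python
# MG pdf: convex combination of N gamma densities.
import numpy as np
from scipy import stats

weights = np.array([0.5, 0.3, 0.2])           # mixture weights, sum to 1
shapes  = np.array([1.2, 2.5, 4.0])           # illustrative gamma shapes
scales  = np.array([0.8, 1.5, 2.2])           # illustrative gamma scales

def mg_pdf(x):
    comps = [w * stats.gamma.pdf(x, a, scale=s)
             for w, a, s in zip(weights, shapes, scales)]
    return np.sum(comps, axis=0)

x = np.linspace(0.01, 10, 500)
print("integrates to ~1:", np.trapz(mg_pdf(x), x))
```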

232 citations


Book ChapterDOI
01 Jan 2011
TL;DR: In this paper, the authors consider density estimates of the usual type generated by a weight function and obtain limit theorems for the maximum of the normalized deviation of the estimate from its expected value, and for quadratic norms of the same quantity.
Abstract: We consider density estimates of the usual type generated by a weight function. Limit theorems are obtained for the maximum of the normalized deviation of the estimate from its expected value, and for quadratic norms of the same quantity. Using these results we study the behavior of tests of goodness-of-fit and confidence regions based on these statistics. In particular, we obtain a procedure which uniformly improves the chi-square goodness-of-fit test when the number of observations and cells is large and yet remains insensitive to the estimation of nuisance parameters. A new limit theorem for the maximum absolute value of a type of nonstationary Gaussian process is also proved.

204 citations


Journal ArticleDOI
TL;DR: In this article, a procedure is established for calculating the load flow probability density function in an electrical power network that includes wind power generation; the pdf of the power injected by a wind turbine is obtained by utilizing a quadratic approximation of its power curve.
Abstract: In this paper, a procedure is established for calculating the load flow probability density function in an electrical power network, taking into account the presence of wind power generation. The probability density function of the power injected in the network by a wind turbine is first obtained by utilizing a quadratic approximation of its power curve. With this model, the DC power flow of a network is calculated, considering the probabilistic nature of the power injected or consumed by the generators and the loads.
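A hedged Monte Carlo sketch of the wind-power ingredient: Weibull wind speeds pushed through a quadratic power-curve approximation (all coefficients and speeds below are illustrative, not the paper's values):

```python
# Empirical pdf of injected wind power under a quadratic power curve
# P(v) = a + b*v^2 between cut-in and rated speed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
v = stats.weibull_min.rvs(2.0, scale=8.0, size=100_000, random_state=rng)

v_ci, v_r, v_co, P_r = 3.0, 12.0, 25.0, 2.0       # cut-in, rated, cut-out, MW
a = -P_r * v_ci**2 / (v_r**2 - v_ci**2)
b = P_r / (v_r**2 - v_ci**2)                      # chosen so P(v_ci)=0, P(v_r)=P_r

P = np.where((v >= v_ci) & (v < v_r), a + b * v**2, 0.0)  # quadratic region, else 0
P = np.where((v >= v_r) & (v < v_co), P_r, P)             # rated region; 0 past cut-out

hist, edges = np.histogram(P, bins=50, density=True)      # empirical injected-power pdf
print("probability mass at P = 0:", np.mean(P == 0.0))
```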

163 citations


Journal ArticleDOI
TL;DR: In this paper, a nonparametric kernel density estimation method for wind speed probability distribution is proposed, which is more accurate and has better adaptability than any conventional parametric distribution.

161 citations


Journal ArticleDOI
TL;DR: This work numerically studies rogue waves in dissipative systems, taking as an example a unidirectional fiber laser in a nonstationary regime of operation, and finds that the probability of producing extreme pulses in this setup is higher than in any other system considered so far.
Abstract: We study numerically rogue waves in dissipative systems, taking as an example a unidirectional fiber laser in a nonstationary regime of operation. The choice of specific set of parameters allows the laser to generate a chaotic sequence of pulses with a random distribution of peak amplitudes. The probability density function for the intensity maxima has an elevated tail at higher intensities. We have found that the probability of producing extreme pulses in this setup is higher than in any other system considered so far.

158 citations


Journal ArticleDOI
TL;DR: This paper presents a Sequential Approximate Optimization (SAO) procedure that uses a Radial Basis Function (RBF) network and proposes a sampling strategy with which the approximate global minimum can be found with a small number of function evaluations.
Abstract: This paper presents a Sequential Approximate Optimization (SAO) procedure that uses the Radial Basis Function (RBF) network. If the objective and constraints are not known explicitly but can be evaluated through a computationally intensive numerical simulation, the response surface, which is often called meta-modeling, is an attractive method for finding an approximate global minimum with a small number of function evaluations. An RBF network is used to construct the response surface. The Gaussian function is employed as the basis function in this paper. In order to obtain the response surface with good approximation, the width of this Gaussian function should be adjusted. Therefore, we first examine the width. Through this examination, some sufficient conditions are introduced. Then, a simple method to determine the width of the Gaussian function is proposed. In addition, a new technique called the adaptive scaling technique is also proposed. The sufficient conditions for the width are satisfied by introducing this scaling technique. Second, the SAO algorithm is developed. The optimum of the response surface is taken as a new sampling point for local approximation. In addition, it is necessary to add new sampling points in the sparse region for global approximation. Thus, an important issue for SAO is to determine the sparse region among the sampling points. To achieve this, a new function called the density function is constructed using the RBF network. The global minimum of the density function is taken as the new sampling point. Through the sampling strategy proposed in this paper, the approximate global minimum can be found with a small number of function evaluations. Through numerical examples, the validities of the width and sampling strategy are examined in this paper.
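A minimal sketch of the response-surface ingredient: a Gaussian RBF network fitted by solving a regularized linear system. The width r is fixed by hand here; the paper's width rule, adaptive scaling technique, and density function are not reproduced:

```python
# Gaussian RBF surrogate of an expensive objective from sampled points.
import numpy as np

def rbf_fit(X, y, r=0.5, lam=1e-8):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / r**2)                        # Gaussian basis matrix
    return np.linalg.solve(Phi + lam * np.eye(len(X)), y)

def rbf_predict(X_train, w, X_new, r=0.5):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / r**2) @ w

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(30, 2))                # sampling points
y = (X ** 2).sum(1)                                 # stand-in for a costly simulation
w = rbf_fit(X, y)
print(rbf_predict(X, w, np.array([[0.0, 0.0]])))    # surrogate value near the optimum
```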

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the wind speed characteristics recorded in the urban area of Palermo, in the south of Italy, by a monitoring network composed of four weather stations.

Journal ArticleDOI
TL;DR: The implementation of Rényi divergence via the sequential Monte Carlo method is presented and the performance of the proposed reward function is demonstrated by a numerical example, where a moving range-only sensor is controlled to estimate the number and the states of several moving objects using the PHD filter.
Abstract: The context is sensor control for multi-object Bayes filtering in the framework of partially observed Markov decision processes (POMDPs). The current information state is represented by the multi-object probability density function (pdf), while the reward function associated with each sensor control (action) is the information gain measured by the alpha or Rényi divergence. Assuming that both the predicted and updated state can be represented by independent identically distributed (IID) cluster random finite sets (RFSs) or, as a special case, the Poisson RFSs, this work derives the analytic expressions of the corresponding Rényi divergence based information gains. The implementation of the Rényi divergence via the sequential Monte Carlo method is presented. The performance of the proposed reward function is demonstrated by a numerical example, where a moving range-only sensor is controlled to estimate the number and the states of several moving objects using the PHD filter.
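For reference, the Rényi (alpha) divergence used as the information-gain reward is defined as

```latex
D_{\alpha}(p_1 \,\|\, p_0) \;=\; \frac{1}{\alpha-1}\,
\log \int p_1(x)^{\alpha}\, p_0(x)^{1-\alpha}\,dx,
\qquad \alpha > 0,\ \alpha \neq 1,
```

which recovers the Kullback-Leibler divergence in the limit α → 1; here p_0 plays the role of the predicted and p_1 the updated multi-object density.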

Journal ArticleDOI
TL;DR: The approach is extended to non-parametric PDFs, wherein the entire distribution can be discretized at a finite number of points and the probability density values at these points can be inferred using the principle of maximum likelihood, thus avoiding the assumption of any particular distribution.

Journal ArticleDOI
TL;DR: In this article, it is shown that the probability density function (PDF) of the ΓΓ sum can be efficiently approximated either by the PDF of a single ΓΓ distribution, or by a finite weighted sum of PDFs of ΓΓ distributions.
Abstract: The Gamma-Gamma (ΓΓ) distribution has recently attracted the interest of the research community due to its involvement in various communication systems. In the context of RF wireless communications, the ΓΓ distribution accurately models the power statistics in composite shadowing/fading channels as well as in cascaded multipath fading channels, while in optical wireless (OW) systems, it describes the fluctuations of the irradiance of optical signals distorted by atmospheric turbulence. Although the ΓΓ channel model offers analytical tractability in the analysis of single input single output (SISO) wireless systems, difficulties arise when studying multiple input multiple output (MIMO) systems, where the distribution of the sum of independent ΓΓ variates is required. In this paper, we present a novel and simple closed-form approximation for the distribution of the sum of independent, but not necessarily identically distributed, ΓΓ variates. It is shown that the probability density function (PDF) of the ΓΓ sum can be efficiently approximated either by the PDF of a single ΓΓ distribution, or by a finite weighted sum of PDFs of ΓΓ distributions. To reveal the importance of the proposed approximation, the performance of RF wireless systems in the presence of composite fading, as well as MIMO OW systems impaired by atmospheric turbulence, is investigated. Numerical results and simulations illustrate the accuracy of the proposed approach.
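A hedged numerical sketch: a ΓΓ variate can be generated as the product of two independent unit-mean gamma variates, so the empirical pdf of the sum that the paper approximates in closed form can be obtained by simulation (parameters illustrative):

```python
# Empirical pdf of a sum of i.i.d. Gamma-Gamma variates by Monte Carlo.
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, L = 4.0, 2.0, 4        # GG shaping parameters; L summands

def gg_rvs(n):
    # Product of two independent unit-mean gamma variates.
    return rng.gamma(alpha, 1.0 / alpha, n) * rng.gamma(beta, 1.0 / beta, n)

s = sum(gg_rvs(200_000) for _ in range(L))     # sum of L i.i.d. GG variates
pdf, edges = np.histogram(s, bins=100, density=True)
print("sample mean ~", s.mean(), "(exact:", L, ")")
```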

Journal ArticleDOI
TL;DR: The performance of two-way amplify-and-forward (AF) relaying networks over independently but not necessarily identically distributed Nakagami-m fading channels, with integer and integer plus one-half values of fading parameter m, is studied and closed-form expressions for the cumulative distribution function, probability density function, and moment generating function of the end-to-end signal- to-noise ratio (SNR) are presented.
Abstract: The performance of two-way amplify-and-forward (AF) relaying networks over independently but not necessarily identically distributed (i.n.i.d.) Nakagami-m fading channels, with integer and integer plus one-half values of fading parameter m, is studied. Closed-form expressions for the cumulative distribution function (CDF), probability density function (PDF), and moment generating function (MGF) of the end-to-end signal-to-noise ratio (SNR) are presented. Utilizing these results, we analyze the performance of two-way AF relaying system in terms of outage probability, average symbol error rate (SER), and average sum-rate. Simulations are performed to verify the correctness of our theoretical analysis.

Journal ArticleDOI
TL;DR: The results demonstrate that the image quality of the presented method is comparable to that of an established model-based strategy when the model parameter is optimized, and superior to that of the same strategy with non-optimized model parameters.
Abstract: Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions like polynomials of different order to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out and yields superior results to non-optimized model parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
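A minimal sketch of the core idea under stated assumptions: normalize the power spectrum of a reference image into a probability density over k-space and draw the random undersampling mask from it, with no parameter tuning (the reference image here is a random stand-in):

```python
# Variable-density k-space sampling mask derived from a reference spectrum.
import numpy as np

ref = np.random.default_rng(5).standard_normal((128, 128))   # stand-in reference image
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(ref))) ** 2
pdf = spectrum / spectrum.sum()                              # sampling density

accel = 4                                                    # acceleration factor
n_samples = ref.size // accel
idx = np.random.default_rng(6).choice(ref.size, size=n_samples,
                                      replace=False, p=pdf.ravel())
mask = np.zeros(ref.size, dtype=bool)
mask[idx] = True
mask = mask.reshape(ref.shape)                               # k-space sampling pattern
print("sampled fraction:", mask.mean())
```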

Journal ArticleDOI
TL;DR: In this article, the authors compare different aggregation techniques in order to estimate, as accurately and flexibly as possible, the probability distribution function of thermal and electrical variables of thermostatically controlled loads.
Abstract: Flexible models for aggregated residential loads are needed to analyze the impact of demand response policies and programs on the minimum comfort setting required by end-users. This impact has to be directly deduced from the probability profiles of thermal and electrical performance variables. The purpose of this paper is to compare different aggregation techniques in order to estimate, as accurately and flexibly as possible, the probability distribution function of thermal and electrical variables of thermostatically controlled loads. Two different approaches are considered: on the one hand, intensive numerical simulations (a Monte Carlo process) combined with either the Euler-Maruyama discrete approximation method or smoothing techniques; and, on the other hand, a numerical resolution of the Fokker-Planck partial differential equations. In all cases, a stochastic differential equation system (based on perturbed physical models) is used to model the individual load behavior. This individual system was previously developed and validated by the authors.
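A hedged sketch of the Monte Carlo branch, using Euler-Maruyama on a deliberately simplified first-order thermal SDE for a population of loads (no thermostat switching; all coefficients illustrative):

```python
# Euler-Maruyama simulation of a load population, then a histogram as the
# estimated pdf of the thermal state.
import numpy as np

rng = np.random.default_rng(7)
n_loads, dt, steps = 10_000, 1.0, 3600        # population size, 1 s step, 1 h
a, T_set, sigma = 1.0 / 600.0, 20.0, 0.05     # relaxation rate, setpoint, noise level

T = rng.normal(20.0, 1.0, n_loads)            # initial temperatures
for _ in range(steps):
    drift = -a * (T - T_set)                  # relaxation toward the setpoint
    T += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_loads)

pdf, edges = np.histogram(T, bins=60, density=True)  # aggregated temperature pdf
print("mean temperature ~", T.mean())
```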

Journal ArticleDOI
TL;DR: The Monte Carlo importance sampling (MCIS) technique is used to find an approximate global solution to the source localization problem with time-difference-of-arrival (TDOA) measurements in sensor networks; a Gaussian distribution is constructed and its probability density function is chosen as the importance function.
Abstract: We consider the source localization problem using time-difference-of-arrival (TDOA) measurements in sensor networks. The maximum likelihood (ML) estimation of the source location can be cast as a nonlinear/nonconvex optimization problem, and its global solution is hardly obtained. In this paper, we resort to the Monte Carlo importance sampling (MCIS) technique to find an approximate global solution to this problem. To obtain an efficient importance function that is used in the technique, we construct a Gaussian distribution and choose its probability density function (pdf) as the importance function. In this process, an initial estimate of the source location is required. We reformulate the problem as a nonlinear robust least squares (LS) problem, and relax it as a second-order cone programming (SOCP), the solution of which is used as the initial estimate. Simulation results show that the proposed method can achieve the Cramer-Rao bound (CRB) accuracy and outperforms several existing methods.
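A minimal sketch of the importance-sampling step, with a placeholder likelihood and an initial estimate taken as given (the paper obtains the initial estimate from an SOCP relaxation of a robust least-squares problem; everything named below is hypothetical):

```python
# Importance sampling with a Gaussian importance function centered at an
# initial location estimate; the weighted sample mean approximates the ML fix.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

def log_likelihood(x):
    # Placeholder TDOA log-likelihood peaked at (3, 4); not the paper's model.
    return -0.5 * np.sum((x - np.array([3.0, 4.0])) ** 2, axis=1) / 0.1

x0, cov = np.array([2.5, 4.5]), np.eye(2)         # initial estimate and spread
samples = rng.multivariate_normal(x0, cov, size=20_000)
log_w = log_likelihood(samples) - stats.multivariate_normal.logpdf(samples, x0, cov)
w = np.exp(log_w - log_w.max())
w /= w.sum()                                      # normalized importance weights
x_hat = w @ samples                               # importance-sampling estimate
print("estimated source location:", x_hat)
```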

Journal ArticleDOI
TL;DR: In this article, the beta Fréchet (BF) density function and the density functions of its order statistics are expressed as linear combinations of Fréchet density functions, and explicit expressions for the moments and the moments of the order statistics are derived.
Abstract: Nadarajah and Gupta (2004) introduced the beta Fréchet (BF) distribution, which is a generalization of the exponentiated Fréchet (EF) and Fréchet distributions, and obtained the probability density and cumulative distribution functions. However, they did not investigate the moments and the order statistics. In this article, the BF density function and the density function of the order statistics are expressed as linear combinations of Fréchet density functions. This makes it possible to obtain some mathematical properties of the BF distribution in terms of the corresponding properties of the Fréchet distribution. We derive explicit expansions for the ordinary moments and L-moments and obtain the order statistics and their moments. We also discuss maximum likelihood estimation and calculate the information matrix, which was not given in the literature and is determined numerically. The usefulness of the BF distribution is illustrated through two applications to real data sets.
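For reference, the BF density follows the standard beta-G construction with a Fréchet baseline (standard form; the article's notation may differ):

```latex
f_{\mathrm{BF}}(x) = \frac{g(x)}{B(a,b)}\, G(x)^{\,a-1}\,\bigl[1 - G(x)\bigr]^{\,b-1},
\qquad
G(x) = e^{-(\sigma/x)^{\lambda}},\quad
g(x) = \lambda\,\sigma^{\lambda}\, x^{-(\lambda+1)}\, e^{-(\sigma/x)^{\lambda}},\quad x > 0,
```

where a, b > 0 are the beta shape parameters and B(a,b) is the beta function; a = b = 1 recovers the Fréchet distribution itself.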

Journal ArticleDOI
TL;DR: A hierarchical geographical model is proposed to mimic the real traffic system; a random walker on this model generates a power-law-like travel displacement distribution with tunable exponent and displays scaling behavior in the probability density of having traveled a certain distance at a certain time.
Abstract: Uncovering the mechanism leading to the scaling law in human trajectories is of fundamental importance in understanding many spatiotemporal phenomena. We propose a hierarchical geographical model to mimic the real traffic system, upon which a random walker will generate a power-law-like travel displacement distribution with tunable exponent, and display a scaling behavior in the probability density of having traveled a certain distance at a certain time. The simulation results, analytical results, and empirical observations reported in D. Brockmann et al. [Nature (London) 439, 462 (2006)] agree very well with each other.

Journal ArticleDOI
TL;DR: A minimally subjective approach for uncertainty quantification in data-sparse situations is proposed, based on a new, purely data-driven version of polynomial chaos expansion (PCE). It achieves a significant computational speed-up compared with Monte Carlo as well as high accuracy even for small expansion orders, and it is shown how this novel approach helps overcome subjectivity.
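A hedged sketch of the data-driven ingredient: constructing polynomials orthonormal with respect to the empirical measure of the data from its raw sample moments alone, with no assumed parametric distribution (a simplified Gram-Schmidt version, not necessarily the paper's exact algorithm):

```python
# Build an orthonormal polynomial basis directly from sample moments.
import numpy as np

rng = np.random.default_rng(9)
data = rng.lognormal(0.0, 0.5, size=5000)          # skewed data stand-in
order = 3
m = np.array([np.mean(data ** k) for k in range(2 * order + 1)])  # raw moments

def inner(p, q):
    # <p, q> under the empirical measure, via moments: sum p_i q_j m_{i+j}.
    return sum(pi * qj * m[i + j]
               for i, pi in enumerate(p) for j, qj in enumerate(q))

basis = []
for k in range(order + 1):
    p = np.zeros(order + 1)
    p[k] = 1.0                                     # start from monomial x^k
    for q in basis:                                # Gram-Schmidt orthogonalization
        p = p - inner(p, q) * q
    basis.append(p / np.sqrt(inner(p, p)))

# Orthonormality check: the Gram matrix should be ~ identity.
print(np.round([[inner(p, q) for q in basis] for p in basis], 3))
```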

Journal ArticleDOI
TL;DR: This paper derives the probability density of the received power for mobile networks with random mobility models, considering the power received at an access point from a particular mobile node under the Random Direction and Random Waypoint models.
Abstract: The probability density of the received power is well analyzed for wireless networks with static nodes. However, most present-day networks are mobile, and not much exploration has been done on the statistical analysis of the received power for mobile networks, in particular for networks with random moving patterns. In this paper, we derive the probability density of the received power for mobile networks with random mobility models. We consider the power received at an access point from a particular mobile node. Two mobility models are considered: the Random Direction (RD) model and the Random Waypoint (RWP) model. The wireless channel is assumed to have small-scale Rayleigh fading and a path-loss exponent of 4. Deployments of nodes in 3D, 2D, and 1D are considered. Our findings show that the probability densities of the received power for the RD mobility model for all three deployment topologies are weighted confluent hypergeometric functions. In the case of the RWP mobility model, the received-power probability densities for all three deployment topologies are linear combinations of confluent hypergeometric functions. The analytical results are validated through NS2 simulations, and a reasonably good match is found between analytical and simulation results.
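A hedged Monte Carlo sketch of the 2D setting (node positions uniform in an annulus around the access point, Rayleigh small-scale fading, path-loss exponent 4) that approximates the received-power pdf by simulation rather than via the paper's hypergeometric expressions:

```python
# Empirical received-power pdf for a 2D deployment with Rayleigh fading.
import numpy as np

rng = np.random.default_rng(10)
n = 200_000
r = np.sqrt(rng.uniform(0.04, 1.0, n))      # area-uniform radius in [0.2, 1.0]
h2 = rng.exponential(1.0, n)                # |h|^2 for Rayleigh fading
p_rx = h2 * r ** (-4.0)                     # received power, unit transmit power

pdf, edges = np.histogram(p_rx, bins=np.logspace(-2, 4, 80), density=True)
print("median received power:", np.median(p_rx))
```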

Journal ArticleDOI
TL;DR: In this paper, a novel method is proposed to address the electricity procurement problem of large consumers using the concept of information gap decision theory, which can be used as a tool for assessing the risk levels, considering whether a large consumer is risk-taking or risk-averse regarding its midterm procurement strategies.
Abstract: In a competitive electricity market, consumers seek strategies to procure their energy needs from different resources (pool, bilateral contracts, and their own generation facilities) at minimum cost while controlling the risk. In this paper, a novel method is proposed to address the electricity procurement problem of large consumers using the concept of information gap decision theory. The method can be used as a tool for assessing the risk levels, considering whether a large consumer is risk-taking or risk-averse regarding its midterm procurement strategies. In the proposed method, procurement decisions are evaluated by means of two criteria. The first criterion is the robustness of the decision against experiencing high procurement costs, and the second one is the opportunity of taking advantage of low procurement costs. The pool price is considered an uncertain variable, and it is assumed that the large consumer has an estimate of the prices. In this study, two models of uncertainty are addressed for the pool price based on the concept of weighted mean squared error using a variance-covariance matrix, and the expected procurement cost is modeled using a joint normal probability distribution function. A case study illustrates the working of the proposed technique.

Journal ArticleDOI
Bruce G. Elmegreen
TL;DR: In this paper, it is argued that density probability distribution functions (PDFs) for turbulent self-gravitating clouds should be convolutions of the local log-normal PDF, which depends on the local average density ρave and Mach number, with the PDFs of ρave and the Mach number themselves, which depend on the overall cloud structure.
Abstract: Density probability distribution functions (PDFs) for turbulent self-gravitating clouds should be convolutions of the local log-normal PDF, which depends on the local average density ρave and Mach number, and the PDFs for ρave and the Mach number, which depend on the overall cloud structure. When self-gravity drives a cloud to increased central density, the total PDF develops an extended tail. If there is a critical density or column density for star formation, then the fraction of the local mass exceeding this threshold becomes higher near the cloud center. These elements of cloud structure should be in place before significant star formation begins. Then the efficiency is high so that bound clusters form rapidly, and the stellar initial mass function (IMF) has an imprint in the gas before destructive radiation from young stars can erase it. The IMF could arise from a power-law distribution of mass for cloud structure. These structures should form stars down to the thermal Jeans mass MJ at each density in excess of a threshold. The high-density tail of the PDF, combined with additional fragmentation in each star-forming core, extends the IMF into the brown dwarf regime. The core fragmentation process is distinct from the cloud structuring process and introduces an independent core fragmentation mass function (CFMF). The CFMF would show up primarily below the IMF peak.
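For reference, the local log-normal density PDF referred to here is usually written, in terms of the logarithmic density contrast s = ln(ρ/ρave), as

```latex
p(s)\,ds = \frac{1}{\sqrt{2\pi\sigma_s^{2}}}
\exp\!\left[-\frac{(s - s_0)^{2}}{2\sigma_s^{2}}\right] ds,
\qquad
\sigma_s^{2} \simeq \ln\!\left(1 + b^{2}\mathcal{M}^{2}\right),
```

with mean s_0 = -σ_s²/2 required by mass conservation, Mach number 𝓜, and a forcing-dependent parameter b of order unity; this is the standard turbulence result, not a formula specific to this paper.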

Journal ArticleDOI
TL;DR: The effect of different types of fluctuation on the motion of self-propelled particles in two spatial dimensions is studied and analytical expressions for the speed and velocity probability density for a generic model of active Brownian particles are derived.
Abstract: We study the effect of different types of fluctuation on the motion of self-propelled particles in two spatial dimensions. We distinguish between passive and active fluctuations. Passive fluctuations (e.g., thermal fluctuations) are independent of the orientation of the particle. In contrast, active ones point parallel or perpendicular to the time dependent orientation of the particle. We derive analytical expressions for the speed and velocity probability density for a generic model of active Brownian particles, which yields an increased probability of low speeds in the presence of active fluctuations in comparison to the case of purely passive fluctuations. As a consequence, we predict sharply peaked Cartesian velocity probability densities at the origin. Finally, we show that such a behavior may also occur in non-Gaussian active fluctuations and discuss briefly correlations of the fluctuating stochastic forces.

Proceedings ArticleDOI
03 Oct 2011
TL;DR: The authors propose the Cauchy-Schwarz (CS) pdf divergence measure (D_CS), which, unlike the Kullback-Leibler divergence (D_KL), admits an analytic, closed-form expression for mixtures of Gaussians (MoG); this makes fast and efficient calculations possible, which is greatly desired in real-world applications where the dimensionality of the data/features is very high.
Abstract: This paper presents an efficient approach to calculate the difference between two probability density functions (pdfs), each of which is a mixture of Gaussians (MoG). Unlike the Kullback-Leibler divergence (D_KL), the Cauchy-Schwarz (CS) pdf divergence measure (D_CS) proposed by the authors gives an analytic, closed-form expression for MoG. This property of D_CS makes fast and efficient calculations possible, which is greatly desired in real-world applications where the dimensionality of the data/features is very high. We show that D_CS follows similar trends to D_KL, but can be computed much faster, especially when the dimensionality is high. Moreover, the proposed method is shown to significantly outperform D_KL in classifying real-world 2D and 3D objects, and in static hand posture recognition based on distances alone.
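A minimal sketch of the closed-form computation, relying on the Gaussian product identity ∫N(x; m1, S1) N(x; m2, S2) dx = N(m1; m2, S1 + S2); the mixture parameters below are illustrative:

```python
# Closed-form Cauchy-Schwarz divergence between two mixtures of Gaussians:
#   D_CS(p, q) = -log( ∫pq / sqrt(∫p² ∫q²) ).
import numpy as np
from scipy.stats import multivariate_normal as mvn

def cross_term(w1, mu1, S1, w2, mu2, S2):
    # ∫ p(x) q(x) dx for two MoGs, via the Gaussian product identity.
    return sum(a * b * mvn.pdf(m, mean=n, cov=P + Q)
               for a, m, P in zip(w1, mu1, S1)
               for b, n, Q in zip(w2, mu2, S2))

def d_cs(w1, mu1, S1, w2, mu2, S2):
    pq = cross_term(w1, mu1, S1, w2, mu2, S2)
    pp = cross_term(w1, mu1, S1, w1, mu1, S1)
    qq = cross_term(w2, mu2, S2, w2, mu2, S2)
    return -np.log(pq / np.sqrt(pp * qq))

w = [0.5, 0.5]
mu_p, S_p = [np.zeros(2), np.ones(2)], [np.eye(2)] * 2
mu_q, S_q = [np.array([0.5, 0.0]), np.array([1.0, 1.5])], [np.eye(2)] * 2
print("D_CS =", d_cs(w, mu_p, S_p, w, mu_q, S_q))
```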

Journal ArticleDOI
TL;DR: In this paper, the probability density function (pdf) of the sum of independent, identically distributed gamma-gamma (G-G) random variables is approximated by the pdf of an α-μ distribution.
Abstract: In this letter, we propose a simple accurate closed-form approximation to the probability density function (pdf) of the sum of independent, identically distributed gamma-gamma (G-G) random variables. It is shown that the pdf of the G-G sum can be efficiently approximated by the pdf of an α-μ distribution. Based on this approach, simple precise approximations for important performance metrics of multiple-input multiple-output (MIMO) free-space optical systems operating over G-G fading are presented. The accuracy of the proposed method is substantiated by various numerically evaluated and computer simulation results.
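For reference, the α-μ density used here as the approximating form is, in Yacoub's standard notation (which may differ from the letter's),

```latex
f_R(r) = \frac{\alpha\,\mu^{\mu}\, r^{\alpha\mu-1}}{\hat{r}^{\alpha\mu}\,\Gamma(\mu)}
\exp\!\left(-\mu\,\frac{r^{\alpha}}{\hat{r}^{\alpha}}\right),
\qquad r > 0,
```

where \hat{r} = (E[R^{\alpha}])^{1/\alpha}; fitting α, μ, and r̂ (for example by moment matching) then yields the closed-form approximation to the G-G sum pdf.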

Journal ArticleDOI
TL;DR: In this article, a variational inequality formulation of the general multi-class percentile UE traffic assignment problem is given and solved using a route-based algorithm, which makes use of the diagonal elements in the Jacobian of the percentile route travel time, approximated through recursive convolution.
Abstract: Travelers often reserve a buffer time for trips sensitive to arrival time in order to hedge against the uncertainties in a transportation system. To model the effects of such behavior, travelers are assumed to choose routes to minimize the percentile travel time, i.e., the travel time budget that ensures their preferred probability of on-time arrival; in doing so, they drive the system to a percentile user equilibrium (UE), which can be viewed as an extension of the classic Wardrop equilibrium. The stochasticity in the supply of transportation is incorporated by modeling the service flow rate of each road segment as a random variable. Such stochasticity is flow-dependent in the sense that the probability density functions of these random variables, from which the distributions of link travel time are constructed, are specified endogenously with flow-dependent parameters. The percentile route travel time, obtained here by directly convolving the link travel time distributions, is not available in closed form in general and has to be numerically evaluated. To reveal their structural properties, percentile UE solutions are examined in special cases and verified with numerical results. For the general multi-class percentile UE traffic assignment problem, a variational inequality formulation is given and solved using a route-based algorithm. The algorithm makes use of the diagonal elements in the Jacobian of percentile route travel time, which is approximated through recursive convolution. Preliminary numerical experiments indicate that the algorithm is able to achieve highly precise equilibrium solutions.
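A minimal sketch of the convolution step: discretized link travel-time pdfs are convolved recursively into a route pdf, from which the percentile travel-time budget is read off (the truncated-normal link pdfs below are stand-ins for the paper's endogenous distributions):

```python
# Route travel-time pdf by recursive convolution of link pdfs, then the
# 95th-percentile travel-time budget.
import numpy as np

dt = 0.1                                        # minutes per bin

def discretized_pdf(mean, std, n=400):
    # Truncated-normal stand-in for a link travel-time pdf on a time grid.
    t = np.arange(n) * dt
    p = np.exp(-0.5 * ((t - mean) / std) ** 2)
    return p / p.sum()

route_pmf = np.array([1.0])
for mean, std in [(5.0, 1.0), (8.0, 2.0), (3.0, 0.5)]:   # three links on the route
    route_pmf = np.convolve(route_pmf, discretized_pdf(mean, std))

cdf = np.cumsum(route_pmf)
budget_95 = np.searchsorted(cdf, 0.95) * dt     # percentile travel time
print(f"95th-percentile route travel time: {budget_95:.1f} min")
```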

Journal ArticleDOI
TL;DR: A novel probability density function model of the dual-pol SAR data is developed that combines finite mixture modeling for marginal probability density function estimation with copulas for multivariate distribution modeling, and a novel dictionary-based copula-selection method is proposed.
Abstract: In this paper, a novel supervised classification approach is proposed for high-resolution dual-polarization (dual-pol) amplitude satellite synthetic aperture radar (SAR) images. A novel probability density function (pdf) model of the dual-pol SAR data is developed that combines finite mixture modeling for marginal probability density function estimation and copulas for multivariate distribution modeling. The finite mixture modeling is performed via a recently proposed SAR-specific dictionary-based stochastic expectation maximization approach to SAR amplitude pdf estimation. For modeling the joint distribution of dual-pol data, the statistical concept of copulas is employed, and a novel dictionary-based copula-selection method is proposed. In order to take into account the contextual information, the developed joint pdf model is combined with a Markov random field approach for Bayesian image classification. The accuracy of the developed dual-pol supervised classification approach is validated and compared with benchmark approaches on two high-resolution dual-pol TerraSAR-X scenes, acquired during an epidemiological study. A corresponding single-channel version of the classification algorithm is also developed and validated on a single-polarization COSMO-SkyMed scene.
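A minimal sketch of the copula ingredient, with a Gaussian copula standing in for the paper's dictionary of candidate copulas: a joint pdf is assembled from two arbitrary marginals via Sklar's theorem (marginals and dependence parameter below are illustrative):

```python
# Joint pdf from two marginals and a Gaussian copula density.
import numpy as np
from scipy import stats

rho = 0.6                                        # copula dependence parameter

def gaussian_copula_density(u, v):
    # c(u, v) for the bivariate Gaussian copula with correlation rho.
    z1, z2 = stats.norm.ppf(u), stats.norm.ppf(v)
    return (1.0 / np.sqrt(1.0 - rho**2)) * np.exp(
        -(rho**2 * (z1**2 + z2**2) - 2.0 * rho * z1 * z2) / (2.0 * (1.0 - rho**2)))

def joint_pdf(x, y, marg_x, marg_y):
    # Sklar's theorem: f(x, y) = c(F_x(x), F_y(y)) * f_x(x) * f_y(y).
    return (gaussian_copula_density(marg_x.cdf(x), marg_y.cdf(y))
            * marg_x.pdf(x) * marg_y.pdf(y))

mx = stats.gamma(a=2.0, scale=1.0)               # stand-in amplitude marginals
my = stats.gamma(a=3.0, scale=0.5)
print(joint_pdf(1.5, 1.2, mx, my))
```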