
Showing papers on "Probability distribution published in 2021"


Posted Content
TL;DR: In this article, the authors present a method for directly computing the marginal likelihood of every DAG model given a random sample with no missing observations, and they apply this methodology to Gaussian DAG models which consist of a recursive set of linear regression models.
Abstract: We develop simple methods for constructing parameter priors for model choice among Directed Acyclic Graphical (DAG) models. In particular, we introduce several assumptions that permit the construction of parameter priors for a large number of DAG models from a small set of assessments. We then present a method for directly computing the marginal likelihood of every DAG model given a random sample with no missing observations. We apply this methodology to Gaussian DAG models which consist of a recursive set of linear regression models. We show that the only parameter prior for complete Gaussian DAG models that satisfies our assumptions is the normal-Wishart distribution. Our analysis is based on the following new characterization of the Wishart distribution: let $W$ be an $n \times n$, $n \ge 3$, positive-definite symmetric matrix of random variables and $f(W)$ be a pdf of $W$. Then, $f(W)$ is a Wishart distribution if and only if $W_{11} - W_{12} W_{22}^{-1} W'_{12}$ is independent of $\{W_{12},W_{22}\}$ for every block partitioning $W_{11},W_{12}, W'_{12}, W_{22}$ of $W$. Similar characterizations of the normal and normal-Wishart distributions are provided as well.
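
As an illustrative aside (not part of the paper), the block property underlying this characterization is easy to probe numerically: for a Wishart-distributed matrix, the Schur complement $W_{11} - W_{12} W_{22}^{-1} W'_{12}$ should be empirically uncorrelated with entries of $\{W_{12}, W_{22}\}$. The sketch below uses an illustrative size, degrees of freedom, and block split, and a correlation check as a (weak) proxy for independence.

```python
# Minimal sketch (assumed parameters): sample Wishart matrices and check that
# an entry of the Schur complement W11 - W12 W22^{-1} W12' is uncorrelated
# with an entry of W22, consistent with the independence characterization.
import numpy as np
from scipy.stats import wishart

np.random.seed(0)
n, df = 4, 10                       # matrix size (n >= 3) and degrees of freedom (illustrative)
Ws = wishart.rvs(df=df, scale=np.eye(n), size=5000)

def schur_blocks(W, k=2):
    """Split W into blocks (W11 is k-by-k); return the Schur complement and W22."""
    W11, W12, W22 = W[:k, :k], W[:k, k:], W[k:, k:]
    return W11 - W12 @ np.linalg.solve(W22, W12.T), W22

s11 = np.empty(len(Ws))
w22 = np.empty(len(Ws))
for i, W in enumerate(Ws):
    S, W22 = schur_blocks(W)
    s11[i], w22[i] = S[0, 0], W22[0, 0]

# A near-zero sample correlation is consistent with (but does not prove) independence.
print("corr(S[0,0], W22[0,0]) =", np.corrcoef(s11, w22)[0, 1])
```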

133 citations


Journal ArticleDOI
01 Jan 2021
TL;DR: In this article, a hybrid quantum-classical variational quantum generator is proposed to model continuous classical probability distributions; samples are generated by measuring the expectation values of a set of operators chosen at the beginning of the calculation, and the generator is trained via its interaction with a discriminator model that compares the generated samples with those coming from the real data distribution.
Abstract: We propose a hybrid quantum-classical approach to model continuous classical probability distributions using a variational quantum circuit. The architecture of the variational circuit consists of two parts: a quantum circuit employed to encode a classical random variable into a quantum state, called the quantum encoder, and a variational circuit whose parameters are optimized to mimic a target probability distribution. Samples are generated by measuring the expectation values of a set of operators chosen at the beginning of the calculation. Our quantum generator can be complemented with a classical function, such as a neural network, as part of the classical post-processing. We demonstrate the application of the quantum variational generator using a generative adversarial learning approach, where the quantum generator is trained via its interaction with a discriminator model that compares the generated samples with those coming from the real data distribution. We show that our quantum generator is able to learn target probability distributions using either a classical neural network or a variational quantum circuit as the discriminator. Our implementation takes advantage of automatic differentiation tools to perform the optimization of the variational circuits employed. The framework presented here for the design and implementation of variational quantum generators can serve as a blueprint for designing hybrid quantum-classical architectures for other machine learning tasks on near-term quantum devices.
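
As an illustrative aside (not from the paper), the generator half of such a hybrid setup could be sketched in PennyLane as below: a quantum encoder loads a classical noise variable, a variational block follows, and expectation values of chosen observables serve as the generated sample. The device, gate layout, observables, and weight shapes are assumptions for illustration; the adversarial training against a discriminator described in the abstract is omitted.

```python
# Hedged sketch of a variational quantum generator (generator only, no training).
import numpy as np
import pennylane as qml

n_qubits = 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def generator(z, weights):
    # Quantum encoder: load the classical noise variable z into rotation angles.
    for w in range(n_qubits):
        qml.RY(z * (w + 1), wires=w)
    # Variational block whose parameters would be trained against a discriminator.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # A sample is the vector of expectation values of the chosen observables.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weights = np.random.uniform(0, 2 * np.pi, size=(2, n_qubits, 3))  # illustrative init
print(generator(0.3, weights))      # one generated sample, before classical post-processing
```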

109 citations


Journal ArticleDOI
Weijun Xie1
TL;DR: In this article, a distributionally robust chance constrained program (DRCCP) with Wasserstein ambiguity set is studied, where the uncertain constraints should be satisfied with a probability at least a given threshold for all the probability distributions of the uncertain parameters within a chosen Wasserstein distance from an empirical distribution.
Abstract: This paper studies a distributionally robust chance constrained program (DRCCP) with Wasserstein ambiguity set, where the uncertain constraints should be satisfied with a probability at least a given threshold for all the probability distributions of the uncertain parameters within a chosen Wasserstein distance from an empirical distribution. In this work, we investigate equivalent reformulations and approximations of such problems. We first show that a DRCCP can be reformulated as a conditional value-at-risk constrained optimization problem, and thus admits tight inner and outer approximations. We also show that a DRCCP of bounded feasible region is mixed integer representable by introducing big-M coefficients and additional binary variables. For a DRCCP with pure binary decision variables, by exploring the submodular structure, we show that it admits a big-M free formulation, which can be solved by a branch and cut algorithm. Finally, we present a numerical study to illustrate the effectiveness of the proposed formulations.
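
As an illustrative aside, the CVaR reformulation mentioned above has a standard sample-based convex form that is easy to write down in CVXPY. The sketch below solves a generic CVaR-constrained allocation problem under an empirical sample; it does not implement the paper's Wasserstein ambiguity set or the big-M mixed-integer formulation, and the data, risk level, and loss model are illustrative.

```python
# Hedged sketch: a chance constraint P(loss(x, xi) <= 0) >= 1 - eps is
# (inner-)approximated by the convex CVaR constraint
#   t + (1/(eps*N)) * sum_i max(loss_i(x) - t, 0) <= 0,
# here with a linear loss a_i' x - b over an empirical sample {a_i}.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
N, d, eps = 200, 5, 0.05
A = rng.normal(size=(N, d))          # sampled uncertain coefficients a_i (illustrative)
b = 1.0                              # constraint right-hand side
c = -rng.random(d)                   # objective weights (minimizing c @ x rewards larger x)

x = cp.Variable(d, nonneg=True)
t = cp.Variable()
losses = A @ x - b                   # loss_i(x) = a_i' x - b, required <= 0 with high probability
cvar = t + cp.sum(cp.pos(losses - t)) / (eps * N)

prob = cp.Problem(cp.Minimize(c @ x), [cvar <= 0, cp.sum(x) <= 1])
prob.solve()
print("objective:", prob.value)
print("x:", np.round(x.value, 4))
```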

108 citations


Journal ArticleDOI
TL;DR: A confidence interval based distributionally robust real-time economic dispatch (CI-DRED) approach is proposed, which considers the risk related to accommodating wind power and can strike a balance between the operational costs and risk even when the wind power probability distribution cannot be precisely estimated.
Abstract: This article proposes a confidence interval based distributionally robust real-time economic dispatch (CI-DRED) approach, which considers the risk related to accommodating wind power. In this article, only the wind power curtailment and load shedding due to wind power disturbances are evaluated in the operational risk. The proposed approach can strike a balance between the operational costs and risk even when the wind power probability distribution cannot be precisely estimated. A novel ambiguity set is developed based on the imprecise probability theory, which can be constructed based on the point-wise or family-wise confidence intervals. The worst pair of distributions in the established ambiguity set is then identified, and the original CI-DRED problem is transformed into a determined nonlinear dispatch problem accordingly. By using the sequential convex optimization method and piecewise linear approximation method, the nonlinear dispatch model is reformulated as a mixed integer linear programming problem, for which off-the-shelf solvers are available. A fast inactive constraint filtration method is also applied to further relieve the computational burden. Numerical results on the IEEE 118-bus system and a real 445-bus system verify the effectiveness and efficiency of the proposed approach.

89 citations


Journal ArticleDOI
TL;DR: In this paper, the effect of quantum diffusion on the dynamics of the inflaton during a period of ultra-slow-roll inflation was considered, and the full probability distribution function for the primordial density field was constructed by deriving the characteristic function.
Abstract: We consider the effect of quantum diffusion on the dynamics of the inflaton during a period of ultra-slow-roll inflation. We extend the stochastic-$\delta\mathcal{N}$ formalism to the ultra-slow-roll regime and show how this system can be solved analytically in both the classical-drift and quantum-diffusion dominated limits. By deriving the characteristic function, we are able to construct the full probability distribution function for the primordial density field. In the diffusion-dominated limit, we recover an exponential tail for the probability distribution, as found previously in slow-roll inflation. To complement these analytical techniques, we present numerical results found both by very large numbers of simulations of the Langevin equations, and through a new, more efficient approach based on iterative Volterra integrals. We illustrate these techniques with two examples of potentials that exhibit an ultra-slow-roll phase leading to the possible production of primordial black holes.
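
As a rough, illustrative companion to the Langevin simulations mentioned above, a stochastic-δN style computation can be sketched as follows: integrate a toy Langevin equation for the field over e-folds and record the number of e-folds N needed to reach the end-of-inflation surface over many noisy trajectories. The drift, diffusion amplitude, and field values below are illustrative toy numbers, not the paper's potentials.

```python
# Hedged toy sketch of a stochastic-delta-N style first-passage computation.
import numpy as np

rng = np.random.default_rng(42)
phi0, phi_end = 1.0, 0.0     # start and end-of-inflation field values (toy units)
drift = -0.05                # classical drift dphi/dN (toy)
noise = 0.05                 # quantum diffusion amplitude, ~H/(2*pi) (toy)
dN = 0.01
n_traj = 2000

def efolds_to_end(rng):
    """Integrate the Langevin equation until phi crosses phi_end; return elapsed e-folds."""
    phi, N = phi0, 0.0
    while phi > phi_end:
        phi += drift * dN + noise * np.sqrt(dN) * rng.normal()
        N += dN
    return N

samples = np.array([efolds_to_end(rng) for _ in range(n_traj)])
print("mean number of e-folds:", samples.mean())
# In the diffusion-dominated regime the distribution of N develops the heavy
# (exponential-like) tail referred to in the abstract; a crude tail estimate:
print("P(N > mean + 3*std):", np.mean(samples > samples.mean() + 3 * samples.std()))
```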

72 citations


Journal ArticleDOI
TL;DR: Li et al. proposed a weighted network method based on the ordered visibility graph, named OVGWP, which considers not only the belief value itself but also the cardinality of the basic probability assignment.

67 citations


Journal ArticleDOI
TL;DR: An enhanced training algorithm for anomaly detection in unlabelled sequential data such as time series is developed, and a probability criterion based on the classical central limit theorem is introduced that allows evaluating the likelihood that a data point is drawn from U.

62 citations


Journal ArticleDOI
TL;DR: A Z-numbers soft likelihood function (ZSLF) decision model is proposed; application examples show the rationality and effectiveness of the method and the advantages of the ZSLF decision model.
Abstract: Due to the complexity of the real world, effectively accounting for the ambiguity and reliability of information is a challenge that an expert system must address to make correct decisions. The Z-number offers a good starting point because it describes both the probability of a random variable and a possibility measure. Recently, Yager presented a soft likelihood function that effectively combines probabilistic evidence to deal with conflicting information. This article generalizes Yager's soft likelihood function to Z-numbers and proposes a Z-numbers soft likelihood function (ZSLF) decision model. Application examples show the rationality and effectiveness of the method, and the comparison and discussion further show the advantages of the ZSLF decision model.

61 citations


Journal ArticleDOI
TL;DR: Based on stochastic process theory, the general stochastic process, the Markov process, and the normal process are used in this paper to model the risk-accident process; the results provide a useful reference for the prediction and management of construction accidents.
Abstract: Many factors lead to construction safety accidents, and the patterns that emerge under their influence are statistical and random in nature. To reveal these random patterns and to study probabilistic prediction methods for construction safety accidents, this paper draws on stochastic process theory and uses the general stochastic process, the Markov process, and the normal process, respectively, to model the risk-accident process. First, in the general-stochastic-process-based analysis, the probability of an accident within a period of time is calculated. Then, the Markov property of the construction safety risk evolution process is illustrated, and an analytical expression for the probability density function of the first-passage time of the Markov risk-accident process is derived to calculate the construction safety probability. In the normal-process-based analysis, construction safety probability formulas are derived for a stationary normal risk process and for a non-stationary normal risk process with zero mean. Finally, the number of accidents that may occur on a construction site within a period is studied macroscopically based on a Poisson process, and the probability distributions of the time interval between adjacent accidents and of the time of the nth accident are calculated. The results provide a useful reference for the prediction and management of construction accidents.
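
As an illustrative aside, the Poisson-process part of this analysis has simple closed forms: inter-accident times are exponential and the time of the nth accident is gamma (Erlang) distributed. The rate and time horizon in the sketch below are illustrative numbers, not values from the paper.

```python
# Hedged sketch of the Poisson-process quantities quoted in the abstract:
# with accident rate lam, the count in [0, T] is Poisson(lam*T), the time
# between adjacent accidents is Exponential(lam), and the time of the n-th
# accident is Gamma(n, scale=1/lam).
import numpy as np
from scipy.stats import expon, gamma, poisson

lam = 0.2          # accidents per month (assumed)
T = 12.0           # observation window in months (assumed)
n = 3

print("P(no accident in T)            :", poisson.pmf(0, lam * T))
print("P(gap between accidents > 6)   :", expon.sf(6, scale=1 / lam))
print("E[time of 3rd accident]        :", gamma.mean(a=n, scale=1 / lam))
print("P(3rd accident occurs before T):", gamma.cdf(T, a=n, scale=1 / lam))
```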

59 citations


Journal ArticleDOI
TL;DR: In this article, the problem of estimating the posterior is framed as a ratio between the data generating distribution and the marginal distribution, which can be solved by logistic regression, and including regularising penalty terms enables automatic selection of the summary statistics relevant to the inference task.
Abstract: We consider the problem of parametric statistical inference when likelihood computations are prohibitively expensive but sampling from the model is possible. Several so-called likelihood-free methods have been developed to perform inference in the absence of a likelihood function. The popular synthetic likelihood approach infers the parameters by modelling summary statistics of the data by a Gaussian probability distribution. In another popular approach called approximate Bayesian computation, the inference is performed by identifying parameter values for which the summary statistics of the simulated data are close to those of the observed data. Synthetic likelihood is easier to use as no measure of “closeness” is required but the Gaussianity assumption is often limiting. Moreover, both approaches require judiciously chosen summary statistics. We here present an alternative inference approach that is as easy to use as synthetic likelihood but not as restricted in its assumptions, and that, in a natural way, enables automatic selection of relevant summary statistic from a large set of candidates. The basic idea is to frame the problem of estimating the posterior as a problem of estimating the ratio between the data generating distribution and the marginal distribution. This problem can be solved by logistic regression, and including regularising penalty terms enables automatic selection of the summary statistics relevant to the inference task. We illustrate the general theory on canonical examples and employ it to perform inference for challenging stochastic nonlinear dynamical systems and high-dimensional summary statistics.
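
As an illustrative aside, the core idea above, estimating the ratio between the data-generating distribution and the marginal distribution by logistic regression on summary statistics, can be sketched in a few lines. The toy simulator, the choice of summaries, the penalty, and the grid evaluation below are illustrative stand-ins, not the authors' implementation.

```python
# Hedged sketch of posterior estimation via density-ratio / logistic regression:
# label summaries simulated at a fixed theta as class 1 and summaries simulated
# from the prior-predictive (marginal) as class 0; with balanced classes the
# fitted log-odds estimate log p(s|theta) - log p(s), i.e. the log posterior up
# to the log prior and a constant.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    """Toy simulator; returns summary statistics (mean, log-variance)."""
    x = rng.normal(theta, 1.0, size=n)
    return np.array([x.mean(), np.log(x.var())])

s_obs = simulate(theta=1.5)                          # pretend observed summaries

def log_ratio(theta, m=500):
    s_theta = np.array([simulate(theta) for _ in range(m)])        # class 1
    theta_prior = rng.uniform(-3, 3, size=m)
    s_marg = np.array([simulate(t) for t in theta_prior])          # class 0
    X = np.vstack([s_theta, s_marg])
    y = np.r_[np.ones(m), np.zeros(m)]
    # The L2 penalty stands in for the regularising terms used for summary selection.
    clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)
    return clf.decision_function(s_obs.reshape(1, -1))[0]          # estimated log-odds

grid = np.linspace(-3, 3, 25)
log_post = np.array([log_ratio(t) for t in grid])    # + log prior (uniform here)
print("approximate posterior mode:", grid[np.argmax(log_post)])
```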

57 citations


Journal ArticleDOI
Fuyuan Xiao1
TL;DR: In this paper, a generalized model of the traditional negation method is proposed to represent knowledge involving uncertain information, and an entropy measure, called $\mathcal{X}$ entropy, is proposed for the complex-valued distribution.
Abstract: In real applications of artificial and intelligent decision-making systems, how to represent knowledge involving uncertain information is still an open issue. The negation method has great significance for addressing this issue from another perspective. However, it has the limitation that it can be used only for the negation of a probability distribution. In this article, therefore, we propose a generalized model of the traditional one, so that it has a more powerful capability to represent knowledge and measure uncertainty. In particular, we first define a vector representation of the complex-valued distribution. Then, an entropy measure is proposed for the complex-valued distribution, called $\mathcal{X}$ entropy. In this context, a transformation function to acquire the negation of the complex-valued distribution is exploited on the basis of the newly defined $\mathcal{X}$ entropy. Afterward, the properties of this negation function, as well as some special cases, are analyzed and investigated. Finally, we study the negation function from the viewpoint of $\mathcal{X}$ entropy. It is verified that the proposed negation method for the complex-valued distribution is a scheme with maximal entropy.
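
As an illustrative aside, the real-valued special case that this work generalizes is the standard negation of a probability distribution, neg(p)_i = (1 - p_i)/(n - 1). The sketch below shows only that classical negation together with Shannon entropy (not the complex-valued $\mathcal{X}$ entropy); repeated negation drives the distribution toward the maximum-entropy (uniform) one, mirroring the maximal-entropy property discussed in the abstract.

```python
# Hedged sketch of the classical (real-valued) negation and its entropy behaviour.
import numpy as np

def negate(p):
    """Standard negation of a probability distribution: (1 - p_i) / (n - 1)."""
    p = np.asarray(p, dtype=float)
    return (1.0 - p) / (len(p) - 1)

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

p = np.array([0.7, 0.2, 0.1])        # illustrative distribution
for step in range(4):
    print(step, np.round(p, 4), "entropy =", round(shannon_entropy(p), 4))
    p = negate(p)                    # entropy increases toward the uniform maximum
```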

Journal ArticleDOI
TL;DR: An overview of this exciting new line of research is provided, including brief introductions to RandNLA and DPPs, as well as applications of DPPs to classical linear algebra tasks such as least squares regression, low-rank approximation, and the Nystrom method.
Abstract: Randomized Numerical Linear Algebra (RandNLA) uses randomness to develop improved algorithms for matrix problems that arise in scientific computing, data science, machine learning, etc. Determinantal Point Processes (DPPs), a seemingly unrelated topic in pure and applied mathematics, is a class of stochastic point processes with probability distribution characterized by sub-determinants of a kernel matrix. Recent work has uncovered deep and fruitful connections between DPPs and RandNLA which lead to new guarantees and improved algorithms that are of interest to both areas. We provide an overview of this exciting new line of research, including brief introductions to RandNLA and DPPs, as well as applications of DPPs to classical linear algebra tasks such as least squares regression, low-rank approximation and the Nystrom method. For example, random sampling with a DPP leads to new kinds of unbiased estimators for least squares, enabling more refined statistical and inferential understanding of these algorithms; a DPP is, in some sense, an optimal randomized algorithm for the Nystrom method; and a RandNLA technique called leverage score sampling can be derived as the marginal distribution of a DPP. We also discuss recent algorithmic developments, illustrating that, while not quite as efficient as standard RandNLA techniques, DPP-based algorithms are only moderately more expensive.
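
As an illustrative aside, the "leverage score sampling is the marginal distribution of a DPP" connection can be grounded with a plain RandNLA-style sketch: compute row leverage scores of the design matrix and subsample rows proportionally for an approximate least-squares solve. This is ordinary leverage score sampling (not an exact DPP sampler), and the matrix sizes and data are illustrative.

```python
# Hedged sketch of leverage score sampling for least squares.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 10, 200              # rows, columns, sketch size (illustrative)
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Leverage scores: squared row norms of an orthonormal basis of range(A); they sum to d.
Q, _ = np.linalg.qr(A)
lev = np.sum(Q**2, axis=1)
probs = lev / lev.sum()

# Sample k rows with probability proportional to leverage and reweight (importance sampling).
idx = rng.choice(n, size=k, replace=True, p=probs)
w = 1.0 / np.sqrt(k * probs[idx])
x_sketch, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
print("relative error of sketched solution:",
      np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full))
```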

Journal ArticleDOI
TL;DR: This paper introduces an approach to simplify state preparation, together with a circuit optimization technique, both of which can help reduce the circuit complexity for QAE state preparation significantly.
Abstract: Quantum amplitude estimation (QAE) can achieve a quadratic speedup for applications classically solved by Monte Carlo simulation. A key requirement to realize this advantage is efficient state preparation. If state preparation is too expensive, it can diminish the quantum advantage. Preparing arbitrary quantum states has exponential complexity with respect to the number of qubits, and thus, is not applicable. Currently known efficient techniques require problems based on log-concave probability distributions, involve learning an unknown distribution from empirical data, or fully rely on quantum arithmetic. In this paper, we introduce an approach to simplify state preparation, together with a circuit optimization technique, both of which can help reduce the circuit complexity for QAE state preparation significantly. We demonstrate the introduced techniques for a numerical integration example on real quantum hardware, as well as for option pricing under the Heston model, i.e., based on a stochastic volatility process, using simulation.

Journal ArticleDOI
TL;DR: This article characterizes an explicit form of the optimal control policy and the worst-case distribution policy for linear-quadratic problems with a Wasserstein penalty, and shows that the contraction property of the associated Bellman operators extends a single-stage out-of-sample performance guarantee to the corresponding multistage guarantee without any degradation in the confidence level.
Abstract: Standard stochastic control methods assume that the probability distribution of uncertain variables is available. Unfortunately, in practice, obtaining accurate distribution information is a challenging task. To resolve this issue, in this article we investigate the problem of designing a control policy that is robust against errors in the empirical distribution obtained from data. This problem can be formulated as a two-player zero-sum dynamic game problem, where the action space of the adversarial player is a Wasserstein ball centered at the empirical distribution. A dynamic programming solution is provided exploiting the reformulation techniques for Wasserstein distributionally robust optimization. We show that the contraction property of associated Bellman operators extends a single-stage out-of-sample performance guarantee, obtained using a measure concentration inequality, to the corresponding multistage guarantee without any degradation in the confidence level. Furthermore, we characterize an explicit form of the optimal control policy and the worst-case distribution policy for linear-quadratic problems with Wasserstein penalty.
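
For orientation only: the nominal (non-robust) finite-horizon linear-quadratic problem referenced above is solved by the standard backward Riccati recursion sketched below. The Wasserstein-penalized, distributionally robust correction derived in the paper is not reproduced here, and the system matrices and horizon are illustrative.

```python
# Hedged sketch: classical finite-horizon LQR via the backward Riccati recursion,
# shown only as the nominal baseline that the distributionally robust policy modifies.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative double-integrator-like dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                             # state cost
R = np.array([[0.1]])                     # input cost
T = 50                                    # horizon

P = Q.copy()
gains = []
for _ in range(T):
    # u_t = -K_t x_t with K_t = (R + B' P B)^{-1} B' P A
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()                           # K_0, ..., K_{T-1}
print("first-stage gain K_0:", gains[0])
```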

Journal ArticleDOI
TL;DR: This article proposes an adversarial training approach to detect out-of-distribution samples in an end-to-end trainable deep model that can successfully learn the target class underlying distribution and outperforms other approaches.
Abstract: One-class classification (OCC) poses as an essential component in many machine learning and computer vision applications, including novelty, anomaly, and outlier detection systems. With a known definition for a target or normal set of data, one-class classifiers can determine if any given new sample spans within the distribution of the target class. Solving this task in a general setting is particularly challenging, due to the high diversity of samples from the target class and the absence of any supervising signal over the novelty (nontarget) concept, which makes designing end-to-end models unattainable. In this article, we propose an adversarial training approach to detect out-of-distribution samples in an end-to-end trainable deep model. To this end, we jointly train two deep neural networks, $\mathcal{R}$ and $\mathcal{D}$. The latter plays as the discriminator while the former, during training, helps $\mathcal{D}$ characterize a probability distribution for the target class by creating adversarial examples and, during testing, collaborates with it to detect novelties. Using our OCC, we first test outlier detection on two image data sets, Modified National Institute of Standards and Technology (MNIST) and Caltech-256. Then, several experiments for video anomaly detection are performed on University of Minnesota (UMN) and University of California, San Diego (UCSD) data sets. Our proposed method can successfully learn the target class underlying distribution and outperforms other approaches.
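
As an illustrative aside, the joint $\mathcal{R}$/$\mathcal{D}$ training described above follows a GAN-style loop. A compact sketch on toy feature vectors (not images, and not the authors' architecture or hyperparameters) is shown below: R is a small denoising reconstructor, D a binary discriminator, and at test time the score D(R(x)) is used to flag novelties.

```python
# Hedged sketch of an R + D adversarial one-class training loop on toy vectors.
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 16
R = nn.Sequential(nn.Linear(dim, 8), nn.ReLU(), nn.Linear(8, dim))      # reconstructor
D = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 1))      # discriminator
opt_r = torch.optim.Adam(R.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def target_batch(n=64):                     # toy "normal" class samples
    return torch.randn(n, dim) * 0.5 + 1.0

for step in range(2000):
    x = target_batch()
    x_noisy = x + 0.2 * torch.randn_like(x)

    # D step: real target samples (label 1) vs. R's reconstructions (label 0).
    d_loss = bce(D(x), torch.ones(x.size(0), 1)) + \
             bce(D(R(x_noisy).detach()), torch.zeros(x.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # R step: fool D while staying close to the clean input.
    rec = R(x_noisy)
    r_loss = bce(D(rec), torch.ones(x.size(0), 1)) + nn.functional.mse_loss(rec, x)
    opt_r.zero_grad(); r_loss.backward(); opt_r.step()

# Test: D(R(x)) tends to score in-class samples higher than out-of-distribution ones.
with torch.no_grad():
    inlier = torch.sigmoid(D(R(target_batch(256)))).mean()
    outlier = torch.sigmoid(D(R(torch.randn(256, dim) * 2 - 3))).mean()
print("mean score, inliers:", float(inlier), " outliers:", float(outlier))
```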

Journal ArticleDOI
TL;DR: The aim is to design a recursive filter such that, in the simultaneous presence of the stochastic noises, the channel fading, and the data coding–decoding mechanism, an upper bound on the filtering error variance is obtained and then minimized at each time step.
Abstract: In this article, the recursive filtering problem is studied for a class of discrete-time nonlinear stochastic systems subject to fading measurements. In order to facilitate the data transmission in a resource-constrained communication network, the multiple description coding scheme is adopted to encode the fading measurements into two descriptions of identical importance. Two independent Bernoulli distributed random variables are introduced to govern the occurrences of the packet dropouts in the two channels from the encoders to the decoders. The channel fading phenomenon is characterized by the $M$th-order Rice fading model whose coefficients are mutually independent random variables obeying certain probability distributions. The purpose of the problem addressed is to design a recursive filter such that, in the simultaneous presence of the stochastic noises, the channel fading, and the data coding–decoding mechanism, an upper bound on the filtering error variance is obtained and then minimized at each time step. By virtue of the Riccati difference equation technique and the stochastic analysis approach, the explicit form of the desired filter parameters is derived by solving a sequence of coupled algebraic Riccati-like difference equations. Finally, a simulation experiment is provided to show the applicability of the developed filtering scheme.
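
As an illustrative aside, a much simplified scalar analogue of this setting, a Kalman-type recursive filter whose measurement update is gated by a Bernoulli arrival variable (one channel, no fading model or coding/decoding), can be sketched as below. All model parameters are illustrative.

```python
# Hedged sketch: scalar Kalman-style recursive filter with Bernoulli packet dropouts.
import numpy as np

rng = np.random.default_rng(0)
a, c = 0.95, 1.0          # state and measurement coefficients (illustrative)
q, r = 0.1, 0.5           # process and measurement noise variances
p_arrival = 0.8           # Bernoulli probability that a measurement gets through

x, x_hat, P = 0.0, 0.0, 1.0
errs = []
for k in range(500):
    # True system and (possibly lost) measurement.
    x = a * x + np.sqrt(q) * rng.normal()
    y = c * x + np.sqrt(r) * rng.normal()
    gamma = rng.random() < p_arrival       # packet received?

    # Prediction step.
    x_hat = a * x_hat
    P = a * P * a + q

    # Measurement update only when the packet arrives.
    if gamma:
        K = P * c / (c * P * c + r)
        x_hat = x_hat + K * (y - c * x_hat)
        P = (1 - K * c) * P

    errs.append((x - x_hat) ** 2)

print("empirical MSE:", np.mean(errs), " final error-variance bound P:", P)
```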

Journal ArticleDOI
TL;DR: A distributionally robust model for three-phase unbalanced DNR is proposed to obtain the optimal configuration under the worst-case PD of DG outputs and loads within the ambiguity set; the result inherits the advantages of both stochastic optimization and robust optimization.
Abstract: Distributed generator (DG) volatility has a great impact on system operation, which should be considered beforehand due to the slow time scale of distribution network reconfiguration (DNR). However, it is difficult to derive accurate probability distributions (PDs) for DG outputs and loads analytically. To remove the assumptions on accurate PD knowledge, a deep neural network is first devised to learn the reference joint PD from historical data in an adaptive way. The reference PD along with the forecast errors are enveloped by a distributional ambiguity set using Kullback-Leibler divergence. Then a distributionally robust model for three-phase unbalanced DNR is proposed to obtain the optimal configuration under the worst-case PD of DG outputs and loads within the ambiguity set. The result inherits the advantages of stochastic optimization and robust optimization. Finally, a modified column-and-constraint generation method with efficient scenario decomposition is investigated to solve this model. Numerical tests are carried out using an IEEE unbalanced benchmark and a practical-scale system in Shandong, China. Comparison with the deterministic, stochastic and robust DNR methods validates the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: The classic decoupling strategy for UBMDO, the framework of sequential design and uncertainty evaluation (SDUE), is introduced into UBMDO-RIV to reduce the computational burden and to tackle the challenges of mixed uncertainties in engineering systems.

Journal ArticleDOI
TL;DR: It is shown that the standard deviation of the prediction error is the single most accurate measure of outcomes, and the most recent statistical methods to determine p-values for Type 1 errors are provided.
Abstract: Purpose: To provide a reference for study design comparing intraocular lens (IOL) power calculation formulas, to show that the standard deviation (SD) of the prediction error (PE) is the single most accurate measure of outcomes, and to provide the most recent statistical methods to determine P values for type I errors. Setting: Baylor College of Medicine, Houston, Texas, and University of Southern California, Los Angeles, California, USA. Design: Retrospective consecutive case series. Methods: Two datasets comprising 5200 and 13,301 single eyes were used. The SDs of the PEs for 11 IOL power calculation formulas were calculated for each dataset. The probability density functions of signed and absolute PE were determined. Results: None of the probability distributions for any formula in either dataset was normal (Gaussian). All the original signed PE distributions were not normal, but symmetric and leptokurtotic (heavy tailed), with higher peaks than a normal distribution. The absolute distributions were asymmetric and skewed to the right. The heteroscedastic method was much better at controlling the probability of a type I error than older methods. Conclusions: (1) The criteria for patient and data inclusion were outlined; (2) the appropriate sample size was recommended; (3) the requirement that the formulas be optimized to bring the mean error to zero was reinforced; (4) why the SD is the single best parameter to characterize the performance of an IOL power calculation formula was demonstrated; and (5) the heteroscedastic statistical method was shown to be the preferred method of analysis.
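
As an illustrative aside, the central recommendation, zeroing the mean PE per formula and then comparing SDs, is straightforward to reproduce on one's own data. The synthetic, heavy-tailed prediction errors below are illustrative, and the heteroscedastic significance test used in the paper is not reproduced here; a normality check stands in to show why it is needed.

```python
# Hedged sketch: zero the mean prediction error (PE) for each formula, report
# the SD as the summary statistic, and check normality (the leptokurtic PEs
# fail it, motivating the heteroscedastic analysis in the paper).
import numpy as np
from scipy import stats

# Heavy-tailed synthetic PEs (in diopters) for two hypothetical formulas.
pe_formula_a = stats.t.rvs(df=4, scale=0.40, size=2000, random_state=1)
pe_formula_b = stats.t.rvs(df=4, scale=0.45, size=2000, random_state=2) + 0.05

for name, pe in [("A", pe_formula_a), ("B", pe_formula_b)]:
    pe_zeroed = pe - pe.mean()             # "optimize": bring the mean error to zero
    sd = pe_zeroed.std(ddof=1)             # the single summary statistic recommended
    k2, p = stats.normaltest(pe_zeroed)    # heavy tails -> normality rejected
    print(f"formula {name}: SD = {sd:.3f} D, normality p-value = {p:.1e}")
```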

Journal ArticleDOI
TL;DR: This work focuses on characterizing the violation probability of peak AoI and AoI in IoT systems, where a sensor delivers updates to a monitor under M/M/1 and M/D/1 queues with first-come–first-served policy.
Abstract: The Internet of Things (IoT) has emerged as one of the key features of the next-generation wireless networks, where timely delivery of status update packets is essential for many real-time IoT applications. Age of Information (AoI) is a new metric to measure the freshness of updates. Reducing the violation probability that the AoI of status updates exceeds a given age constraint is of great significance for guaranteeing information freshness in IoT systems. By modeling the IoT networks as M/M/1 and M/D/1 queuing systems, this work focuses on characterizing the violation probability of peak AoI and AoI in IoT systems, where a sensor delivers updates to a monitor under M/M/1 and M/D/1 queues with a first-come–first-served policy. From a time-domain perspective, we explore the correlation between interdeparture time and system time, from which the closed-form expressions of the peak AoI distribution and the violation probability for any AoI constraint are derived. The obtained results yield accurate characterizations of the probability distribution functions of peak AoI and AoI. Consequently, accurate characterizations of the average AoI and the variance of AoI are obtained. Then, for peak AoI and AoI, the optimal generation rate of the status update that induces the minimal violation probability is also found. The numerical results show that the optimal update rate can significantly reduce the AoI violation probability for a wide range of AoI constraints. The theoretical findings and predictions are verified by numerical simulation results and provide guidance for the design of IoT networks.
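
As an illustrative aside, the peak-AoI violation probability analysed above can be cross-checked by a direct FCFS M/M/1 simulation: generate Poisson arrivals and exponential services, compute departure times, and take the peak AoI before each delivery as the time elapsed since the previous delivered update was generated. The rates and age constraint below are illustrative values.

```python
# Hedged sketch: simulate an FCFS M/M/1 status-update queue and estimate the
# peak-AoI violation probability P(peak AoI > age_max).
import numpy as np

rng = np.random.default_rng(0)
lam, mu = 0.5, 1.0          # update generation rate and service rate (illustrative)
age_max = 6.0               # AoI constraint (illustrative)
n = 200_000

arrivals = np.cumsum(rng.exponential(1 / lam, n))   # generation instants
service = rng.exponential(1 / mu, n)

departures = np.empty(n)
departures[0] = arrivals[0] + service[0]
for i in range(1, n):                               # FCFS departure recursion
    departures[i] = max(departures[i - 1], arrivals[i]) + service[i]

# Peak AoI just before update i is delivered: time since the previous delivered
# update was generated.
peak_aoi = departures[1:] - arrivals[:-1]
print("mean peak AoI:", peak_aoi.mean())
print("violation probability P(peak AoI > %.1f): %.4f"
      % (age_max, np.mean(peak_aoi > age_max)))
```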

Journal ArticleDOI
TL;DR: A methodology is developed for measuring the degree of unpredictability in dynamical systems with memory, i.e., systems with responses dependent on a history of past states; the model is generic and can be employed in a variety of settings.
Abstract: The aim of this article is to develop a methodology for measuring the degree of unpredictability in dynamical systems with memory, i.e., systems with responses dependent on a history of past states. The proposed model is generic, and can be employed in a variety of settings, although its applicability here is examined in the particular context of an industrial environment: gas turbine engines. The given approach consists in approximating the probability distribution of the outputs of a system with a deep recurrent neural network; such networks are capable of exploiting the memory in the system for enhanced forecasting capability. Once the probability distribution is retrieved, the entropy or missing information about the underlying process is computed, which is interpreted as the uncertainty with respect to the system's behavior. Hence, the model identifies how far the system dynamics are from its typical response, in order to evaluate the system reliability and to predict system faults and/or normal accidents. The validity of the model is verified with sensor data recorded from commissioning gas turbines, belonging to normal and faulty conditions.

Journal ArticleDOI
TL;DR: In this article, the authors investigate how to employ ML regression approaches to estimate the distribution of the received generalized signal-to-noise ratio (GSNR) of unestablished lightpaths, and assess the performance of three regression approaches by leveraging synthetic data obtained by means of two different data generation tools.
Abstract: Estimating the quality of transmission (QoT) of a candidate lightpath prior to its establishment is of pivotal importance for effective decision making in resource allocation for optical networks. Several recent studies investigated machine learning (ML) methods to accurately predict whether the configuration of a prospective lightpath satisfies a given threshold on a QoT metric such as the generalized signal-to-noise ratio (GSNR) or the bit error rate. Given a set of features, the GSNR for a given lightpath configuration may still exhibit variations, as it depends on several other factors not captured by the features considered. It follows that the GSNR associated with a lightpath configuration can be modeled as a random variable and thus be characterized by a probability distribution function. However, most of the existing approaches attempt to directly answer the question “is a given lightpath configuration (e.g., with a given modulation format) feasible on a certain path?” but do not consider the additional benefit that estimating the entire statistical distribution of the metric under observation can provide. Hence, in this paper, we investigate how to employ ML regression approaches to estimate the distribution of the received GSNR of unestablished lightpaths. In particular, we discuss and assess the performance of three regression approaches by leveraging synthetic data obtained by means of two different data generation tools. We evaluate the performance of the three proposed approaches on a realistic network topology in terms of root mean squared error and R2 score and compare them against a baseline approach that simply predicts the GSNR mean value. Moreover, we provide a cost analysis by attributing penalties to incorrect deployment decisions and emphasize the benefits of leveraging the proposed estimation approaches from the point of view of a network operator, which is allowed to make more informed decisions about lightpath deployment with respect to state-of-the-art QoT classification techniques.
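
As an illustrative aside, one simple way to move from point prediction to a distribution estimate, in the spirit of the regression approaches discussed above, is quantile regression. The sketch below fits several quantiles of a synthetic GSNR-like target with scikit-learn; the feature set, data-generation tools, and models actually compared in the paper are not reproduced here.

```python
# Hedged sketch: estimate the conditional distribution of a GSNR-like target by
# fitting several quantile regressors instead of a single mean predictor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 4000
X = rng.uniform(size=(n, 4))                       # toy lightpath features (illustrative)
# Heteroscedastic synthetic "GSNR" in dB: mean and spread both depend on features.
y = 20 - 6 * X[:, 0] + rng.normal(scale=0.5 + 1.5 * X[:, 1], size=n)

quantiles = [0.05, 0.5, 0.95]
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                        n_estimators=200).fit(X, y)
          for q in quantiles}

x_new = np.array([[0.2, 0.8, 0.5, 0.5]])           # a candidate lightpath configuration
pred = {q: m.predict(x_new)[0] for q, m in models.items()}
print("median GSNR estimate:", round(pred[0.5], 2),
      " 90% interval:", (round(pred[0.05], 2), round(pred[0.95], 2)))
```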

Journal ArticleDOI
01 Mar 2021
TL;DR: This article introduces a generalization of persistence diagrams, namely Radon measures supported on the upper half plane, and explores topological properties of this new space, which will also hold for the closed subspace of persistence diagrams.
Abstract: Despite the obvious similarities between the metrics used in topological data analysis and those of optimal transport, an optimal-transport based formalism to study persistence diagrams and similar topological descriptors has yet to come. In this article, by considering the space of persistence diagrams as a space of discrete measures, and by observing that its metrics can be expressed as optimal partial transport problems, we introduce a generalization of persistence diagrams, namely Radon measures supported on the upper half plane. Such measures naturally appear in topological data analysis when considering continuous representations of persistence diagrams (e.g. persistence surfaces) but also as limits for laws of large numbers on persistence diagrams or as expectations of probability distributions on the space of persistence diagrams. We explore topological properties of this new space, which will also hold for the closed subspace of persistence diagrams. New results include a characterization of convergence with respect to Wasserstein metrics, a geometric description of barycenters (Frechet means) for any distribution of diagrams, and an exhaustive description of continuous linear representations of persistence diagrams. We also showcase the strength of this framework to study random persistence diagrams by providing several statistical results made meaningful thanks to this new formalism.

Journal ArticleDOI
TL;DR: In this article, the formation probability of primordial black holes generated during the collapse at horizon re-entry of large fluctuations produced during inflation, such as those ascribed to a period of ultra-slow-roll, is calculated.

Journal ArticleDOI
TL;DR: The main constituents of the approach include approximating a complex, isotropic Gaussian probability distribution by a finite-size Gauss-Hermite constellation, applying entropic continuity bounds, and leveraging previous security proofs for Gaussian-modulation protocols.
Abstract: We consider discrete-modulation protocols for continuous-variable quantum key distribution (CV-QKD) that employ a modulation constellation consisting of a finite number of coherent states and that use a homodyne- or a heterodyne-detection receiver. We establish a security proof for collective attacks in the asymptotic regime, and we provide a formula for an achievable secret-key rate. Previous works established security proofs for discrete-modulation CV-QKD protocols that use two or three coherent states. The main constituents of our approach include approximating a complex, isotropic Gaussian probability distribution by a finite-size Gauss-Hermite constellation, applying entropic continuity bounds, and leveraging previous security proofs for Gaussian-modulation protocols. As an application of our method, we calculate secret-key rates achievable over a lossy thermal bosonic channel. We show that the rates for discrete-modulation protocols approach the rates achieved by a Gaussian-modulation protocol as the constellation size is increased. For pure-loss channels, our results indicate that in the high-loss regime and for sufficiently large constellation size, the achievable key rates scale optimally, i.e., proportional to the channel's transmissivity.
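
As an illustrative aside, the key approximation step above, replacing Gaussian modulation by a finite Gauss-Hermite constellation, can be checked numerically: the constellation points and weights reproduce the low-order moments of the Gaussian. The sketch below treats a single real quadrature; the complex, isotropic case would use a product of two such constellations. Constellation sizes are illustrative.

```python
# Hedged sketch: approximate a zero-mean Gaussian (variance sigma^2) by a finite
# Gauss-Hermite constellation.  With physicists' Hermite nodes x_k and weights w_k,
# the points are sqrt(2)*sigma*x_k with probabilities w_k / sqrt(pi).
import numpy as np

sigma = 1.0
for m in (3, 5, 9):
    nodes, weights = np.polynomial.hermite.hermgauss(m)
    points = np.sqrt(2) * sigma * nodes
    probs = weights / np.sqrt(np.pi)          # probabilities sum to 1
    second = np.sum(probs * points**2)        # should equal sigma^2
    fourth = np.sum(probs * points**4)        # should equal 3*sigma^4 (exact once 2m-1 >= 4)
    print(f"m={m}: sum p = {probs.sum():.6f}, E[x^2] = {second:.4f}, E[x^4] = {fourth:.4f}")
```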

Journal ArticleDOI
TL;DR: In this paper, the authors presented a comprehensive steady-state analysis of threshold-ALOHA, a distributed age-aware modification of slotted ALOHA proposed in recent literature, and analyzed the time-average expected AoI attained by this policy, and explored its scaling with network size.
Abstract: We present a comprehensive steady-state analysis of threshold-ALOHA , a distributed age-aware modification of slotted ALOHA proposed in recent literature. In threshold-ALOHA, each terminal suspends its transmissions until the Age of Information (AoI) of the status update flow it is sending reaches a certain threshold $\Gamma $ . Once the age exceeds $\Gamma $ , the terminal attempts transmission with constant probability $\tau $ in each slot, as in standard slotted ALOHA. We analyze the time-average expected AoI attained by this policy, and explore its scaling with network size, $n$ . We derive the probability distribution of the number of active users at steady state, and show that as network size increases the policy converges to one that runs slotted ALOHA with fewer sources: on average about one fifth of the users is active at any time. We obtain an expression for steady-state expected AoI and use this to optimize the parameters $\Gamma $ and $\tau $ , resolving the conjectures in previous literature by confirming that the optimal age threshold and transmission probability are $2.2n$ and $4.69/n$ , respectively. We find that the optimal AoI scales with the network size as $1.4169n$ , which is almost half the minimum AoI achievable with slotted ALOHA, while the loss from the maximum throughput of $e^{-1}$ remains below 1%. We compare the performance of this rudimentary algorithm to that of the SAT policy [2] that dynamically adapts its transmission probabilities.
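
As an illustrative aside, the policy is simple enough to simulate directly, which gives a sanity check on the analytical scaling: each of n terminals stays silent until its AoI exceeds Γ and then transmits with probability τ per slot, and a slot succeeds only when exactly one terminal transmits. The parameter values below follow the Γ ≈ 2.2n, τ ≈ 4.69/n rule quoted above; the rest of the simulation (reset convention, horizon) is an illustrative choice.

```python
# Hedged sketch: slot-level simulation of threshold-ALOHA with generate-at-will updates.
import numpy as np

rng = np.random.default_rng(0)
n = 50
Gamma, tau = 2.2 * n, 4.69 / n      # parameter choices suggested by the analysis above
slots = 200_000

age = np.zeros(n)
age_sum = 0.0
for _ in range(slots):
    age += 1.0
    active = age > Gamma                        # only "old enough" terminals contend
    tx = active & (rng.random(n) < tau)
    if tx.sum() == 1:                           # success iff exactly one transmitter
        age[np.argmax(tx)] = 0.0                # fresh update delivered, AoI resets
    age_sum += age.mean()

print("time-average AoI per terminal:", age_sum / slots)
# Should be in the same ballpark as the 1.4169*n scaling quoted in the abstract.
print("1.4169 * n                   :", 1.4169 * n)
```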

Journal ArticleDOI
TL;DR: In this paper, the authors derive formulas for the superquantile and buffered probability of exceedance (bPOE) for a variety of common univariate probability distributions, apply these formulas to parametric density estimation, and propose the method of superquantiles (MOS).
Abstract: Conditional value-at-risk (CVaR) and value-at-risk, also called the superquantile and quantile, are frequently used to characterize the tails of probability distributions and are popular measures of risk in applications where the distribution represents the magnitude of a potential loss. Buffered probability of exceedance (bPOE) is a recently introduced characterization of the tail which is the inverse of CVaR, much like the CDF is the inverse of the quantile. These quantities can prove very useful as the basis for a variety of risk-averse parametric engineering approaches. Their use, however, is often made difficult by the lack of well-known closed-form equations for calculating these quantities for commonly used probability distributions. In this paper, we derive formulas for the superquantile and bPOE for a variety of common univariate probability distributions. Besides providing a useful collection within a single reference, we use these formulas to incorporate the superquantile and bPOE into parametric procedures. In particular, we consider two: portfolio optimization and density estimation. First, when portfolio returns are assumed to follow particular distribution families, we show that finding the optimal portfolio via minimization of bPOE has advantages over superquantile minimization. We show that, given a fixed threshold, a single portfolio is the minimal bPOE portfolio for an entire class of distributions simultaneously. Second, we apply our formulas to parametric density estimation and propose the method of superquantiles (MOS), a simple variation of the method of moments where moments are replaced by superquantiles at different confidence levels. With the freedom to select various combinations of confidence levels, MOS allows the user to focus the fitting procedure on different portions of the distribution, such as the tail when fitting heavy-tailed asymmetric data.
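
As an illustrative aside, for the normal distribution the closed form is standard and easy to verify against Monte Carlo: the superquantile is mu + sigma * phi(z_alpha)/(1 - alpha), and bPOE at a threshold is obtained by inverting that relation. The threshold and sample size below are illustrative.

```python
# Hedged sketch: closed-form superquantile (CVaR) of a normal distribution,
#   CVaR_alpha = mu + sigma * pdf(z_alpha) / (1 - alpha),  z_alpha = Phi^{-1}(alpha),
# verified by Monte Carlo, plus bPOE at a threshold obtained as the inverse relation.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

mu, sigma, alpha = 0.0, 1.0, 0.9

def cvar_normal(alpha, mu=mu, sigma=sigma):
    z = norm.ppf(alpha)
    return mu + sigma * norm.pdf(z) / (1.0 - alpha)

# Monte Carlo check of the closed form: mean of the worst (1 - alpha) fraction.
rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, 2_000_000)
q = np.quantile(x, alpha)
print("closed-form CVaR:", cvar_normal(alpha), " Monte Carlo:", x[x >= q].mean())

# bPOE at threshold t: the value 1 - alpha* such that CVaR_{alpha*} = t.
t = 2.0
alpha_star = brentq(lambda a: cvar_normal(a) - t, 1e-9, 1 - 1e-9)
print("bPOE at threshold", t, ":", 1.0 - alpha_star)
```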

Journal ArticleDOI
Yachao Zhang1, Yan Liu1, Shengwen Shu1, Feng Zheng1, Zhanghao Huang1 
01 Feb 2021-Energy
TL;DR: A data-driven robust optimization (DDRO) model is proposed for the energy-coupled system; the results demonstrate the effectiveness and superiority of the proposed model for solving the coordination scheduling problem of the MECS.

Proceedings ArticleDOI
10 Mar 2021
TL;DR: Zhang et al. utilized an adversarial regressor to maximize the disparity on the target domain and trained a feature generator to minimize this disparity; however, due to the high dimension of the output space, this regressor fails to detect samples that deviate from the support of the source.
Abstract: Domain adaptation (DA) aims at transferring knowledge from a labeled source domain to an unlabeled target domain. Though many DA theories and algorithms have been proposed, most of them are tailored into classification settings and may fail in regression tasks, especially in the practical keypoint detection task. To tackle this difficult but significant task, we present a method of regressive domain adaptation (RegDA) for unsupervised keypoint detection. Inspired by the latest theoretical work, we first utilize an adversarial regressor to maximize the disparity on the target domain and train a feature generator to minimize this disparity. However, due to the high dimension of the output space, this regressor fails to detect samples that deviate from the support of the source. To overcome this problem, we propose two important ideas. First, based on our observation that the probability density of the output space is sparse, we introduce a spatial probability distribution to describe this sparsity and then use it to guide the learning of the adversarial regressor. Second, to alleviate the optimization difficulty in the high-dimensional space, we innovatively convert the minimax game in the adversarial training to the minimization of two opposite goals. Extensive experiments show that our method brings large improvement by 8% to 11% in terms of PCK on different datasets.