
Showing papers on "Equivalence (measure theory)" published in 2020


Journal ArticleDOI
TL;DR: Four examples from the gerontology literature illustrate different ways to specify alternative models that can be used to reject the presence of a meaningful or predicted effect in hypothesis tests.
Abstract: Researchers often conclude an effect is absent when a null-hypothesis significance test yields a nonsignificant p value. However, it is neither logically nor statistically correct to conclude an effect is absent when a hypothesis test is not significant. We present two methods to evaluate the presence or absence of effects: Equivalence testing (based on frequentist statistics) and Bayes factors (based on Bayesian statistics). In four examples from the gerontology literature, we illustrate different ways to specify alternative models that can be used to reject the presence of a meaningful or predicted effect in hypothesis tests. We provide detailed explanations of how to calculate, report, and interpret Bayes factors and equivalence tests. We also discuss how to design informative studies that can provide support for a null model or for the absence of a meaningful effect. The conceptual differences between Bayes factors and equivalence tests are discussed, and we also note when and why they might lead to similar or different inferences in practice. It is important that researchers are able to falsify predictions or can quantify the support for predicted null effects. Bayes factors and equivalence tests provide useful statistical tools to improve inferences about null effects.
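
To make the frequentist half of this comparison concrete, below is a minimal Python sketch of a one-sample TOST (two one-sided tests) equivalence test. The equivalence bounds of ±0.5 and all names are hypothetical illustrations; the paper describes the procedure rather than code.

```python
# Minimal one-sample TOST sketch; illustrative only, not the authors' code.
# Equivalence bounds of +/- 0.5 are a hypothetical choice.
import numpy as np
from scipy import stats

def tost_one_sample(x, low=-0.5, high=0.5):
    """Test whether the mean of x lies inside the interval (low, high)."""
    n = len(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    m = np.mean(x)
    # One one-sided t-test against each bound; df = n - 1.
    t_low = (m - low) / se            # H0: mean <= low
    t_high = (m - high) / se          # H0: mean >= high
    p_low = 1 - stats.t.cdf(t_low, df=n - 1)
    p_high = stats.t.cdf(t_high, df=n - 1)
    # Equivalence is declared only when both one-sided tests reject,
    # so the TOST p value is the larger of the two.
    return max(p_low, p_high)

rng = np.random.default_rng(0)
print(tost_one_sample(rng.normal(0.0, 1.0, size=200)))  # small p: mean within bounds
```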

175 citations


Journal ArticleDOI
TL;DR: It is shown that the PDHG algorithm can be viewed as a special case of the Douglas–Rachford splitting algorithm for minimizing the sum of two convex functions.
Abstract: The primal-dual hybrid gradient (PDHG) algorithm proposed by Esser, Zhang, and Chan, and by Pock, Cremers, Bischof, and Chambolle is known to include as a special case the Douglas–Rachford splitting algorithm for minimizing the sum of two convex functions. We show that, conversely, the PDHG algorithm can be viewed as a special case of the Douglas–Rachford splitting algorithm.
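
For readers less familiar with the two algorithms, one common form of the PDHG iteration for minimizing f(x) + g(Kx) is shown below; the notation is ours, not quoted from the paper. For K = I with στ = 1, the iteration is known to reduce to Douglas–Rachford splitting, and the paper establishes the converse inclusion.

```latex
% One common form of the PDHG iteration for \min_x f(x) + g(Kx),
% with step sizes \tau, \sigma > 0 (notation assumed, not from the abstract):
\begin{aligned}
x^{k+1} &= \operatorname{prox}_{\tau f}\bigl(x^{k} - \tau K^{\top} y^{k}\bigr),\\
y^{k+1} &= \operatorname{prox}_{\sigma g^{*}}\bigl(y^{k} + \sigma K (2x^{k+1} - x^{k})\bigr).
\end{aligned}
```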

61 citations


Journal ArticleDOI
TL;DR: It is shown that classic dummy variable event study designs can be generalized to models that account for multiple events of different sign and intensity of the treatment, which are particularly interesting for research in labor economics and public finance.
Abstract: We discuss important properties and pitfalls of panel-data event study designs. We derive three main results. First, binning of effect window endpoints is a practical necessity and key for identification of dynamic treatment effects. Second, event study designs with binned endpoints and distributed-lag models are numerically identical leading to the same parameter estimates after correct reparametrization. Third, classic dummy variable event study designs can be generalized to models that account for multiple events of different sign and intensity of the treatment, which are particularly interesting for research in labor economics and public finance. We show the practical relevance of our methodological points in a replication study.
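
A hedged sketch of such a design in Python/statsmodels follows; the effect window [-2, 3], the omitted reference period k = -1, and all variable names are hypothetical choices for illustration, not taken from the paper.

```python
# Hedged sketch of an event-study regression with binned endpoints.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def event_dummies(rel_time, low=-2, high=3):
    """Lead/lag dummies with endpoints binned: <= low and >= high."""
    d = {}
    for k in range(low, high + 1):
        if k == low:
            d[f"lead_le_{low}"] = (rel_time <= low).astype(float)
        elif k == high:
            d[f"lag_ge_{high}"] = (rel_time >= high).astype(float)
        elif k != -1:  # k = -1 is the omitted reference period
            d[f"rel_{k}"] = (rel_time == k).astype(float)
    return pd.DataFrame(d)

# rel_time: periods since the event for each observation (toy data).
rng = np.random.default_rng(1)
rel_time = rng.integers(-6, 8, size=500)
X = sm.add_constant(event_dummies(rel_time))
y = 0.8 * (rel_time >= 0) + rng.normal(size=500)
print(sm.OLS(y, X).fit().params)
```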

60 citations


Journal ArticleDOI
TL;DR: This work explores the equivalence among flat filters, dirty derivative-based proportional integral derivative (PID) controllers, active disturbance rejection control, and integral reconstructor-based sliding mode control, in the context of SISO second-order, perturbed, pure integration systems.
Abstract: We explore the equivalence among flat filters, dirty derivative-based proportional integral derivative (PID) controllers, active disturbance rejection control, and integral reconstructor-based sliding mode control, in the context of SISO second-order, perturbed, pure integration systems. This is the prevailing paradigmatic class of differentially flat systems or feedback linearizable systems. The equivalence is valid beyond the second-order pure integration systems. However, PID controllers of such plants do not make much sense without imposing assumptions that will move the considerations out of the pure integration systems case. The equivalence among the rest of the controllers is valid for any finite order pure integration system.
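
For concreteness, a hedged sketch of the setting (symbols ours, not quoted from the paper): the plant is the perturbed double integrator, and the PID controller replaces the improper differentiator s with a proper, "dirty" (filtered) approximation.

```latex
% Symbols assumed for illustration: plant and dirty-derivative PID controller.
\ddot{y} = u + \xi,
\qquad
C(s) = k_p + \frac{k_i}{s} + k_d\,\frac{s}{\tau s + 1},
\qquad
u = -C(s)\,(y - y^{*}),
% where s/(\tau s + 1) is the filtered ("dirty") approximation of the
% differentiator s, and \xi is the lumped perturbation.
```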

50 citations


Journal ArticleDOI
TL;DR: This study investigated a documented translation method that includes the careful specification of descriptions of item intents, and demonstrated how documented data from the TIP contributes evidence to a validity argument for construct equivalence between translated and source language PROMs.
Abstract: Cross-cultural research with patient-reported outcome measures (PROMs) assumes that the PROM in the target language will measure the same construct in the same way as the PROM in the source language. Yet translation methods are rarely used to qualitatively maximise construct equivalence or to describe the intent of each item to support common understanding within translation teams. This study aimed to systematically investigate the utility of the Translation Integrity Procedure (TIP), in particular the use of item intent descriptions, to maximise construct equivalence during the translation process, and to demonstrate how documented data from the TIP contribute evidence to a validity argument for construct equivalence between translated and source language PROMs. Secondary analysis was conducted on data routinely collected in TIP Management Grids for nine translations of the Health Literacy Questionnaire (HLQ) that took place between August 2014 and August 2015: Arabic, Czech, French (Canada), French (France), Hindi, Indonesian, Slovak, Somali and Spanish (Argentina). Two researchers first independently coded the data deductively against nine common types of translation errors; a 10th code identified during this round was added in a second round of coding. Coded data were compared for discrepancies and, where needed, checked with a third researcher for final code allocation. Across the nine translations, 259 changes were made to provisional forward translations and were coded into the 10 types of errors. The most frequently coded errors were Complex word or phrase (n = 99), Semantic (n = 54) and Grammar (n = 27); the least frequently coded were Cultural errors (n = 7) and Printed errors (n = 5). To advance PROM validation practice, this study investigated a documented translation method that includes the careful specification of item intent descriptions. The assumption that a translated PROM has construct equivalence across linguistic contexts can be incorrect due to errors in translation. Of particular concern was translators' use of complex, high-level words, which, if undetected, could cause flawed interpretation of data from people with low literacy. Item intent descriptions can help translations maximise construct equivalence, and documented translation data can contribute evidence to justify score interpretation and use of translated PROMs in new linguistic contexts.

45 citations


Journal ArticleDOI
TL;DR: The proposed interval-valued similarity measures numerically improve (according to the most widely used measures in the literature) on the results obtained with interval-valued similarity measures that do not consider the width of the intervals.

43 citations


Posted Content
TL;DR: It is argued that the limited representational resources of model-based RL agents are better used to build models that are directly useful for value-based planning, and the principle of value equivalence underlies a number of recent empirical successes in RL.
Abstract: Learning models of the environment from data is often viewed as an essential component to building intelligent reinforcement learning (RL) agents. The common practice is to separate the learning of the model from its use, by constructing a model of the environment's dynamics that correctly predicts the observed state transitions. In this paper we argue that the limited representational resources of model-based RL agents are better used to build models that are directly useful for value-based planning. As our main contribution, we introduce the principle of value equivalence: two models are value equivalent with respect to a set of functions and policies if they yield the same Bellman updates. We propose a formulation of the model learning problem based on the value equivalence principle and analyze how the set of feasible solutions is impacted by the choice of policies and functions. Specifically, we show that, as we augment the set of policies and functions considered, the class of value equivalent models shrinks, until eventually collapsing to a single point corresponding to a model that perfectly describes the environment. In many problems, directly modelling state-to-state transitions may be both difficult and unnecessary. By leveraging the value-equivalence principle one may find simpler models without compromising performance, saving computation and memory. We illustrate the benefits of value-equivalent model learning with experiments comparing it against more traditional counterparts like maximum likelihood estimation. More generally, we argue that the principle of value equivalence underlies a number of recent empirical successes in RL, such as Value Iteration Networks, the Predictron, Value Prediction Networks, TreeQN, and MuZero, and provides a first theoretical underpinning of those results.
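
Written out in standard tabular-MDP notation (ours; the abstract states the definition in words), the value-equivalence condition reads:

```latex
% Notation assumed (tabular MDP with discount \gamma): the Bellman operator
% induced by model m under policy \pi is
(T^{m}_{\pi} v)(s) \;=\; r^{m}(s, \pi(s)) \;+\; \gamma \sum_{s'} P^{m}\!\bigl(s' \mid s, \pi(s)\bigr)\, v(s').
% Two models m_1, m_2 are value equivalent with respect to a set of policies
% \Pi and a set of functions \mathcal{V} iff they yield the same Bellman updates:
T^{m_1}_{\pi} v \;=\; T^{m_2}_{\pi} v
\qquad \text{for all } \pi \in \Pi,\ v \in \mathcal{V}.
```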

41 citations


Journal ArticleDOI
TL;DR: In this article, a new framework for the definition of behavioural quotients is proposed; the resulting quotients can capture precision and recall measures between a collection of recorded executions and a system specification, and their application is demonstrated with a prototypical implementation.
Abstract: The behavioural comparison of systems is an important concern of software engineering research. For example, the areas of specification discovery and specification mining are concerned with measuring the consistency between a collection of execution traces and a program specification. This problem is also tackled in process mining with the help of measures that describe the quality of a process specification automatically discovered from execution logs. Though various measures have been proposed, it was recently demonstrated that they neither fulfil essential properties, such as monotonicity, nor can they handle infinite behaviour. In this article, we address this research problem by introducing a new framework for the definition of behavioural quotients. We prove that corresponding quotients guarantee desired properties that existing measures have failed to support. We demonstrate the application of the quotients for capturing precision and recall measures between a collection of recorded executions and a system specification. We use a prototypical implementation of these measures to contrast their monotonic assessment with measures that have been defined in prior research.

38 citations


Posted Content
TL;DR: This paper proposes a generalization of the depthwise separable convolution framework for graph convolutional networks, which allows the total number of trainable parameters to be decreased while keeping the capacity of the model.
Abstract: This paper aims at revisiting Graph Convolutional Neural Networks by bridging the gap between spectral and spatial design of graph convolutions. We theoretically demonstrate some equivalence of the graph convolution process regardless of whether it is designed in the spatial or the spectral domain. The obtained general framework allows us to conduct a spectral analysis of the most popular ConvGNNs, explaining their performance and showing their limits. Moreover, the proposed framework is used to design new convolutions in the spectral domain with a custom frequency profile while applying them in the spatial domain. We also propose a generalization of the depthwise separable convolution framework for graph convolutional networks, which allows the total number of trainable parameters to be decreased while keeping the capacity of the model. To the best of our knowledge, such a framework has never been used in the GNNs literature. Our proposals are evaluated on both transductive and inductive graph learning problems. The obtained results show the relevance of the proposed method and provide some of the first experimental evidence of transferability of spectral filter coefficients from one graph to another.
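
As a rough illustration of what a depthwise separable graph convolution can look like, here is a hedged PyTorch sketch: the depthwise step applies per-feature scalar weights on each convolution support, and a pointwise 1x1 linear layer mixes channels. Layer shapes and names are our assumptions, not the paper's reference implementation.

```python
# Hedged sketch of a depthwise separable graph convolution.
import torch
import torch.nn as nn

class DepthwiseSeparableGraphConv(nn.Module):
    """Depthwise step: per-feature scaling on each convolution support.
    Pointwise step: 1x1 linear layer that mixes feature channels."""
    def __init__(self, in_dim, out_dim, n_supports):
        super().__init__()
        # One scalar weight per (support, input feature): the "depthwise" part.
        self.depth = nn.Parameter(torch.randn(n_supports, in_dim))
        # Channel mixing shared across nodes: the "pointwise" part.
        self.point = nn.Linear(in_dim, out_dim)

    def forward(self, supports, x):
        # supports: list of (N, N) graph operators; x: (N, in_dim) node features.
        h = sum(s @ (x * self.depth[i]) for i, s in enumerate(supports))
        return torch.relu(self.point(h))

# Toy usage: the identity and a row-normalized adjacency as two supports.
N, F = 5, 8
A = torch.rand(N, N)
layer = DepthwiseSeparableGraphConv(F, 16, n_supports=2)
out = layer([torch.eye(N), A / A.sum(1, keepdim=True)], torch.randn(N, F))
print(out.shape)  # torch.Size([5, 16])
```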

37 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the universality of the five-factor model of mindfulness and the measurement equivalence of the Five-Facet Mindfulness Questionnaire (FFMQ).
Abstract: The goal of the current study was to investigate the universality of the five-factor model of mindfulness and the measurement equivalence of the Five-Facet Mindfulness Questionnaire (FFMQ). The study used FFMQ data from published and unpublished research conducted in 16 countries (total N = 8541). Using CFA, different models proposed in the literature were fitted. To test the cross-cultural equivalence of the best fitting model, a multi-group confirmatory factor analysis was used. Further, the equivalence of individual facets of the FFMQ and potential sources of non-equivalence were explored. The best fitting models in most samples were a five-facet model with a higher-order mindfulness factor and uncorrelated positive and negative item-wording factors, and a five-facet model with correlated facets and uncorrelated positive and negative item-wording factors. These models showed structural equivalence but did not show metric equivalence (equivalent factor loadings) across cultures. Given this lack of equivalent factor loadings, neither correlations nor mean patterns can be compared across cultures. A similar pattern was observed when testing the equivalence of the individual facets: all individual facets failed even tests of metric equivalence. A sample-size-weighted exploratory factor analysis across cultures indicated that a six-factor solution, with acting with awareness split into two factors, might provide the best fit across cultures. Finally, both the five- and six-factor solutions showed substantially better fit in more individualistic and less tight cultures. Overall, the FFMQ has conceptual and measurement problems in a cross-cultural context, raising questions about the validity of the current conceptualization of mindfulness across cultures. The substantially better fit of the FFMQ in individualistic cultures indicates that further data from non-Western cultures are needed to develop a universal conceptualization and measurement of mindfulness.

36 citations


Journal ArticleDOI
TL;DR: Under some generalized convexity assumptions, an equivalence is established between an optimal solution of (OCP) and a saddle-point associated with the Lagrange functional corresponding to the modified multidimensional optimal control problem.


Journal ArticleDOI
TL;DR: This paper rigorously proves the equivalence of these approximate GKP codes with an explicit correspondence of the parameters and proposes a standard form of the approximate code states in the position representation, which enables the authors to derive closed-form expressions for the Wigner functions, the inner products, and the average photon numbers in terms of the theta functions.
Abstract: The Gottesman-Kitaev-Preskill (GKP) quantum error-correcting code attracts much attention in continuous variable (CV) quantum computation and CV quantum communication due to the simplicity of its error-correcting routines and its high tolerance against Gaussian errors. Since the GKP code state should be regarded as a limit of physically meaningful approximate ones, various approximations have been developed to date, but explicit relations among them are still unclear. In this paper, we rigorously prove the equivalence of these approximate GKP codes with an explicit correspondence of the parameters. We also propose a standard form of the approximate code states in the position representation, which enables us to derive closed-form expressions for the Wigner function, inner products, and the average photon number in terms of the theta functions. Our results serve as fundamental tools for further analyses of fault-tolerant quantum computation and channel coding using approximate GKP codes.
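
For context, one standard approximate GKP code state in the position representation is a comb of Gaussian peaks under a Gaussian envelope. The form below uses common conventions (ħ = 1, logical-0 peaks at even multiples of √π) and is our illustration, not the paper's proposed standard form.

```latex
% Conventions assumed; an illustration, not the paper's standard form:
\tilde{\psi}_{0}(x) \;\propto\; \sum_{n \in \mathbb{Z}}
e^{-\frac{1}{2}\kappa^{2}\left(2n\sqrt{\pi}\right)^{2}}\,
e^{-\frac{\left(x - 2n\sqrt{\pi}\right)^{2}}{2\Delta^{2}}},
% i.e., Gaussian peaks of width \Delta under a Gaussian envelope of width
% 1/\kappa; the ideal GKP state is recovered in the limit \Delta, \kappa \to 0.
```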

Journal ArticleDOI
TL;DR: In this article, the authors extend the logic of instrumental aggressiveness to ask whom teenagers target when they bully, harass, and torment their schoolmates in order to achieve popularity and other goals.
Abstract: Some teenagers are willing to bully, harass, and torment their schoolmates in order to achieve popularity and other goals. But whom do they bully? Here, we extend the logic of instrumental aggressiveness...


Journal ArticleDOI
TL;DR: In this article, it was shown that 𝒵-stable, simple, separable, nuclear, nonunital C∗-algebras have nuclear dimension at most 1, which completes the equivalence between finite nuclear dimension and 𝒵-stability for simple, separable, nuclear, nonelementary C∗-algebras.
Abstract: We prove that 𝒵-stable, simple, separable, nuclear, nonunital C∗-algebras have nuclear dimension at most 1. This completes the equivalence between finite nuclear dimension and 𝒵-stability for simple, separable, nuclear, nonelementary C∗-algebras.

Journal ArticleDOI
TL;DR: In this article, it was shown that the families of APN trinomials (constructed by Budaghyan and Carlet in 2008) and multinomials (constructed by Bracken et al. in 2008) are pairwise different up to CCZ-equivalence.

Posted Content
TL;DR: In this article, a thorough analysis of the Lagrangian, Eulerian and Kantorovich formulations of multi-agent optimal control problems, as well as of their relaxations, is provided.
Abstract: This paper is devoted to the study of multi-agent deterministic optimal control problems. We initially provide a thorough analysis of the Lagrangian, Eulerian and Kantorovich formulations of the problems, as well as of their relaxations. Then we exhibit some equivalence results among the various representations and compare the respective value functions. To do it, we combine techniques and ideas from optimal transportation, control theory, Young measures and evolution equations in Banach spaces. We further exploit the connections among Lagrangian and Eulerian descriptions to derive consistency results as the number of particles/agents tends to infinity. To that purpose we prove an empirical version of the Superposition Principle and obtain suitable Gamma-convergence results for the controlled systems.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the concepts of the density equivalence coefficient (DEC) and the density modification coefficient (DMC) for mixed-species stands, and derive mean values of these coefficients from long-term experiments using different mixtures of European beech.
Abstract: A wealth of recent research has improved our understanding of the structure, growth and yield of mixed-species stands. However, appropriate quantitative concepts for their silvicultural regulation remain scarce. Due to species-specific stand densities, growing area requirements and potential over-density, density and mixing regulation in mixed stands is much more intricate than in monospecific stands. Here, we introduce two species-specific coefficients: the density equivalence coefficient (DEC), for density equivalence, and the density modification coefficient (DMC), for density modification in mixed-species stands. DEC is suitable for converting the stand density and growing area requirement of one species into those of another species. DMC estimates the modification of maximum stand density by tree species mixing, using the maximum stand density of one of the species as reference. First, we introduce the theoretical concept behind these coefficients. Second, we derive mean values of these coefficients based on long-term experiments using different mixtures of European beech. Third, we apply DEC and DMC for flexible regulation of stand density and mixing proportion. Thus, silvicultural regulation of monospecific stands and mixed-species stands forms a continuum, where monospecific stands represent an extreme case of mixed-species stands. Lastly, we discuss the advantages and limitations of these concepts. Future directions comprise the inclusion of additional species, their integration in guidelines and simulation models, and their establishment for the quantitative regulation of experimental plots and the practical implementation in forest stands.
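
A hedged reading of how the two coefficients act is sketched below; the formulas are our illustration of the abstract's verbal description, not quoted from the paper.

```latex
% Formulas assumed for illustration:
N_{2,\mathrm{equiv}} \;=\; \mathrm{DEC}_{1 \to 2} \cdot N_{1},
\qquad
N_{\max,\mathrm{mixed}} \;=\; \mathrm{DMC} \cdot N_{\max,\mathrm{ref}},
% where N_1 is the stand density of species 1 and N_{\max,\mathrm{ref}} is the
% maximum stand density of the reference species.
```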

Proceedings ArticleDOI
04 Nov 2020
TL;DR: In this article, it was shown that every concept class with finite Littlestone dimension can be learned by an approximate differentially private algorithm, which yields an equivalence between online learnability and private PAC learnability.
Abstract: We prove that every concept class with finite Littlestone dimension can be learned by an (approximate) differentially-private algorithm. This answers an open question of Alon et al. (STOC 2019), who proved the converse statement (this question was also asked by Neel et al. (FOCS 2019)). Together these two results yield an equivalence between online learnability and private PAC learnability. We introduce a new notion of algorithmic stability called "global stability" which is essential to our proof and may be of independent interest. We also discuss an application of our results to boosting the privacy and accuracy parameters of differentially-private learners.

Journal ArticleDOI
TL;DR: In this article, the authors show that the recently formulated causal and stable first-order hydrodynamics has the same dynamics as Israel-Stewart theory for boost-invariant, Bjorken expanding systems with an ideal gas equation of state and a regulating sector determined by a constant relaxation time.

Journal ArticleDOI
TL;DR: The results provide considerable evidence that the FAI-T can be used as a screening tool for the identification of TMD in Turkish-speaking populations.
Abstract: To determine the reliability and diagnostic accuracy of the Turkish version of the Fonseca anamnestic index (FAI-T). The cultural equivalence of the FAI was established according to the Internation...

Journal ArticleDOI
TL;DR: The curse of dimensionality is generalized to linear and nonlinear inverse problems, outlining the main differences between them, and it is shown that nonlinearities allow for a reduction in the size of the nonlinear equivalence region, which could be embedded in a linear hyperquadric with a smaller condition number than the corresponding linearized equivalence region.

Proceedings Article
01 Jan 2020
TL;DR: Surprisingly, in some environments PER can be replaced entirely by this new loss function without impact on empirical performance, and this relationship suggests a new branch of improvements to PER obtained by correcting its uniformly sampled loss-function equivalent.
Abstract: Prioritized Experience Replay (PER) is a deep reinforcement learning technique in which agents learn from transitions sampled with non-uniform probability proportionate to their temporal-difference error. We show that any loss function evaluated with non-uniformly sampled data can be transformed into another, uniformly sampled loss function with the same expected gradient. Surprisingly, we find that in some environments PER can be replaced entirely by this new loss function without impact on empirical performance. Furthermore, this relationship suggests a new branch of improvements to PER obtained by correcting its uniformly sampled loss-function equivalent. We demonstrate the effectiveness of our proposed modifications to PER and the equivalent loss function in several MuJoCo and Atari environments.
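
The central claim, that a non-uniformly sampled loss has a uniformly sampled counterpart with the same expected gradient, can be checked numerically. The following toy numpy sketch (our illustration, not the paper's code) verifies the identity E_{i~p}[g_i] = E_{i~U}[N p_i g_i] for squared-loss gradients.

```python
# Numerical check: a loss under non-uniform sampling has the same expected
# gradient as an importance-weighted loss under uniform sampling.
import numpy as np

rng = np.random.default_rng(0)
N = 4
delta = rng.normal(size=N)                # per-transition TD errors (toy)
p = np.abs(delta) / np.abs(delta).sum()   # PER-style sampling probabilities

grad = 2 * delta                          # gradient of squared loss, per sample

# Expected gradient under prioritized (non-uniform) sampling ...
g_prioritized = (p * grad).sum()
# ... equals the uniform expectation of the importance-weighted gradient.
g_uniform = np.mean(N * p * grad)

print(np.isclose(g_prioritized, g_uniform))  # True
```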

Journal ArticleDOI
TL;DR: Forced-choice measures are gaining popularity as an alternative assessment format to single-statement (SS) measures as discussed by the authors, however, a fundamental question remains to be answered: Do FC and SS instru...
Abstract: Forced-choice (FC) measures are gaining popularity as an alternative assessment format to single-statement (SS) measures. However, a fundamental question remains to be answered: Do FC and SS instru...

Posted Content
TL;DR: In this article, the connection between maximum $h$-scattered subspaces and maximum rank distance (MRD) codes, previously known in the extremal cases $h=1$ and $h=r-1$, is extended to any $h$, unifying all the previously known connections.
Abstract: After a seminal paper by Shekeey (2016), a connection between maximum $h$-scattered $\mathbb{F}_q$-subspaces of $V(r,q^n)$ and maximum rank distance (MRD) codes has been established in the extremal cases $h=1$ and $h=r-1$. In this paper, we propose a connection for any $h\in\{1,\ldots,r-1\}$, extending and unifying all the previously known ones. As a consequence, we obtain examples of non-square MRD codes which are not equivalent to generalized Gabidulin or twisted Gabidulin codes. Up to equivalence, we classify MRD codes having the same parameters as the ones in our connection. Also, we determine the weight distribution of codes related to the geometric counterpart of maximum $h$-scattered subspaces.

Journal ArticleDOI
TL;DR: This work proposes a novel linear discriminant analysis approach, based on an efficient nuclear norm penalized regression that encourages a low-rank structure, for the classification of high-dimensional matrix-valued data that commonly arise from imaging studies.
Abstract: We propose a novel linear discriminant analysis (LDA) approach for the classification of high-dimensional matrix-valued data that commonly arises from imaging studies. Motivated by the equivalence ...
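
The abstract is truncated here, but the TL;DR names a nuclear norm penalty as the device that encourages low-rank structure. Below is an illustrative numpy sketch of the proximal step behind such penalties (singular value soft-thresholding); this is an assumption-laden toy, not the authors' estimator.

```python
# Illustrative proximal operator of the nuclear norm; not the paper's method.
import numpy as np

def prox_nuclear(B, lam):
    """prox_{lam * ||.||_*}(B): soft-threshold the singular values of B."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# A noisy rank-1 coefficient matrix is pushed toward low rank.
rng = np.random.default_rng(0)
B = np.outer(rng.normal(size=6), rng.normal(size=4)) + 0.1 * rng.normal(size=(6, 4))
print(np.linalg.matrix_rank(prox_nuclear(B, lam=0.5)))  # typically 1
```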

Journal ArticleDOI
07 May 2020
TL;DR: The higher-order properties of relational acceleration and gravity are put forward, which follow directly from the theory and may inspire future researchers to evaluate the seemingly self-organizing nature of human cognition.
Abstract: We propose relational density theory, as an integration of stimulus equivalence and behavioral momentum theory, to predict the nonlinearity of equivalence responding of verbal humans. Consistent with Newtonian classical mechanics, the theory posits that equivalence networks will demonstrate the higher order properties of density, volume, and mass. That is, networks containing more relations (volume) that are stronger (density) will be more resistant to change (i.e., contain greater mass; mass = volume * density). Data from several equivalence experiments that are not easily interpreted through existing accounts are described in terms of the theory, generating predictable results in most cases. In addition, we put forward the higher-order properties of relational acceleration and gravity, which follow directly from the theory and may inspire future researchers to evaluate the seemingly self-organizing nature of human cognition. Finally, we conclude by describing avenues for real-world translation, considering past research interpreted through relational density theory, and call for basic experimental research to validate and extend core theoretical assumptions.

Journal ArticleDOI
TL;DR: In this paper, optimality conditions for a class of PDE&PDI-constrained variational control problems are investigated, and an efficient condition is derived for a local optimal solution of the considered problem to be its global optimal solution.
Abstract: In this paper, optimality conditions are investigated for a class of PDE&PDI-constrained variational control problems. An efficient condition is derived for a local optimal solution of the considered PDE&PDI-constrained variational control problem to be its global optimal solution. The theoretical development is supported by a suitable example of a nonconvex optimization problem.

Proceedings ArticleDOI
08 Nov 2020
TL;DR: This paper proposes an approach, named ARDiff, for improving the scalability of symbolic-execution-based equivalence checking techniques when comparing syntactically-similar versions of a program, e.g., for verifying the correctness of code upgrades and refactoring.
Abstract: Equivalence checking techniques help establish whether two versions of a program exhibit the same behavior. The majority of popular techniques for formally proving/refuting equivalence relies on symbolic execution – a static analysis approach that reasons about program behaviors in terms of symbolic input variables. Yet, symbolic execution is difficult to scale in practice due to complex programming constructs, such as loops and non-linear arithmetic. This paper proposes an approach, named ARDiff, for improving the scalability of symbolic-execution-based equivalence checking techniques when comparing syntactically-similar versions of a program, e.g., for verifying the correctness of code upgrades and refactoring. Our approach relies on a set of novel heuristics to determine which parts of the versions’ common code can be effectively pruned during the analysis, reducing the analysis complexity without sacrificing its effectiveness. Furthermore, we devise a new equivalence checking benchmark, extending existing benchmarks with a set of real-life methods containing complex math functions and loops. We evaluate the effectiveness and efficiency of ARDiff on this benchmark and show that it outperforms existing method-level equivalence checking techniques by solving 86% of all equivalent and 55% of non-equivalent cases, compared with 47% to 69% for equivalent and 38% to 52% for non-equivalent cases in related work.
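
As a toy illustration of the symbolic reasoning the abstract describes, the following hedged Python sketch uses the z3 SMT solver to prove two tiny method bodies equivalent. ARDiff's actual pipeline (pruning common code, handling loops and non-linear arithmetic) is far more involved; the method bodies here are hypothetical.

```python
# Minimal symbolic equivalence check with z3; an illustrative toy only.
from z3 import Int, Solver, unsat

x = Int("x")
v1 = x * 2 + x           # version 1 of a method body
v2 = 3 * x               # refactored version 2

s = Solver()
s.add(v1 != v2)          # search for an input where the versions disagree
print("equivalent" if s.check() == unsat else "not equivalent")  # equivalent
```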