
Showing papers on "Generalization" published in 2014


Posted Content
01 Dec 2014-viXra
TL;DR: A new approach for multi-attribute group decision-making problems is proposed by extending the technique for order preference by similarity to ideal solution (TOPSIS) to the single-valued neutrosophic environment.
Abstract: A single-valued neutrosophic set is a special case of a neutrosophic set. It has been proposed as a generalization of crisp sets, fuzzy sets, and intuitionistic fuzzy sets in order to deal with incomplete information. In this paper, a new approach for multi-attribute group decision-making problems is proposed by extending TOPSIS to the single-valued neutrosophic environment.
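For context, here is the standard definition behind this line of work (a minimal sketch from the neutrosophic-sets literature, not quoted from the paper): a single-valued neutrosophic set assigns each element independent truth, indeterminacy, and falsity degrees.

```latex
% Single-valued neutrosophic set A over a universe X (standard definition):
A = \{\, \langle x,\; T_A(x),\; I_A(x),\; F_A(x) \rangle : x \in X \,\},
\qquad T_A(x),\, I_A(x),\, F_A(x) \in [0,1],
\qquad 0 \le T_A(x) + I_A(x) + F_A(x) \le 3 .
```

Because the three components are not forced to sum to 1, fuzzy sets (where $I_A = 0$ and $F_A = 1 - T_A$) and intuitionistic fuzzy sets (where $T_A + I_A + F_A = 1$) arise as special cases.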

342 citations


Journal ArticleDOI
TL;DR: This article argues that case studies have merits over quantitative methods in terms of theoretical generalization, identifying disconfirming cases, and providing useful information for assessing the empirical generalizability of results.
Abstract: The case study as a key research method has often been criticized for generating results that are less generalizable than those of large-sample, quantitative methods. This paper clearly defines generalization and distinguishes it from other related concepts. Drawing on the literature, the author shows that case study results may be less generalizable than those of quantitative methods only in the case of within-population generalization. The author argues that case studies have merits over quantitative methods in terms of theoretical generalization, identifying disconfirming cases and providing useful information for assessing the empirical generalizability of results.

299 citations


Posted Content
01 Nov 2014-viXra
TL;DR: The operations for INSs are defined and a comparison approach is put forward based on related research on interval-valued intuitionistic fuzzy sets (IVIFSs), and two interval neutrosophic number aggregation operators are developed.
Abstract: As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world.

261 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a new generalization of the Banach contraction principle in the setting of Branciari metric spaces.
Abstract: We present a new generalization of the Banach contraction principle in the setting of Branciari metric spaces.
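For reference, a minimal sketch of the setting (standard definitions, not quoted from the paper): a Branciari metric space replaces the triangle inequality with a rectangular (quadrilateral) inequality over two intermediate points, and the Banach principle concerns contractions in the usual sense.

```latex
% Branciari (rectangular) inequality: for all x \ne y and all distinct
% u, v \in X \setminus \{x, y\}:
d(x,y) \;\le\; d(x,u) + d(u,v) + d(v,y).
% Classical Banach contraction principle: if T : X \to X satisfies
d(Tx, Ty) \;\le\; k\, d(x,y), \qquad k \in [0,1),
% on a complete metric space, then T has a unique fixed point.
```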

251 citations


Journal ArticleDOI
TL;DR: A generalization of Hilfer derivatives, in which Riemann–Liouville integrals are replaced by more general Prabhakar integrals, is presented, with applications to classical equations of mathematical physics such as the heat and free-electron-laser equations.
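For context, the standard definitions involved (from the fractional-calculus literature, not quoted from the paper): the Prabhakar integral replaces the Riemann–Liouville power-law kernel with one built from the three-parameter Mittag-Leffler function.

```latex
% Three-parameter (Prabhakar) Mittag-Leffler function:
E_{\alpha,\beta}^{\gamma}(z) \;=\; \sum_{k=0}^{\infty}
  \frac{(\gamma)_k}{\Gamma(\alpha k + \beta)}\,\frac{z^k}{k!},
% Prabhakar integral with kernel parameter \omega:
\bigl( \mathbf{E}^{\gamma}_{\alpha,\beta,\omega;\,a^+} f \bigr)(t)
 \;=\; \int_a^t (t-s)^{\beta-1}\,
   E_{\alpha,\beta}^{\gamma}\!\bigl(\omega (t-s)^{\alpha}\bigr)\, f(s)\, ds .
```

For $\gamma = 0$ the Mittag-Leffler kernel collapses to $1/\Gamma(\beta)$ and the operator becomes the Riemann–Liouville integral of order $\beta$.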

196 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied how the combinatorial behavior of a category C affects the algebraic behavior of representations of C, giving combinatorial criteria for such representations to admit Gröbner bases, to be noetherian, and to have rational Hilbert series.
Abstract: Given a category C of a combinatorial nature, we study the following fundamental question: how does the combinatorial behavior of C affect the algebraic behavior of representations of C? We prove two general results. The first gives a combinatorial criterion for representations of C to admit a theory of Gröbner bases. From this, we obtain a criterion for noetherianity of representations. The second gives a combinatorial criterion for a general "rationality" result for Hilbert series of representations of C. This criterion connects to the theory of formal languages, and makes essential use of results on the generating functions of languages, such as the transfer-matrix method and the Chomsky–Schützenberger theorem. Our work is motivated by recent work in the literature on representations of various specific categories. Our general criteria recover many of the results on these categories that had been proved by ad hoc means, and often yield cleaner proofs and stronger statements. For example: we give a new, more robust, proof that FI-modules (originally introduced by Church–Ellenberg–Farb), and a family of natural generalizations, are noetherian; we give an easy proof of a generalization of the Lannes–Schwartz artinian conjecture from the study of generic representation theory of finite fields; we significantly improve the theory of $\Delta$-modules, introduced by Snowden in connection to syzygies of Segre embeddings; and we establish fundamental properties of twisted commutative algebras in positive characteristic.

188 citations


Journal ArticleDOI
TL;DR: This paper presents the ETM algorithm which allows the user to seamlessly steer the discovery process based on preferences with respect to the four quality dimensions, and shows that all dimensions are important for process discovery.
Abstract: Process discovery algorithms typically aim at discovering process models from event logs that best describe the recorded behavior. Often, the quality of a process discovery algorithm is measured by quantifying to what extent the resulting model can reproduce the behavior in the log, i.e. replay fitness. At the same time, there are other measures that compare a model with recorded behavior in terms of the precision of the model and the extent to which the model generalizes the behavior in the log. Furthermore, many measures exist to express the complexity of a model irrespective of the log. In this paper, we first discuss several quality dimensions related to process discovery. We further show that existing process discovery algorithms typically consider at most two out of the four main quality dimensions: replay fitness, precision, generalization and simplicity. Moreover, existing approaches cannot steer the discovery process based on user-defined weights for the four quality dimensions. This paper presents the ETM algorithm which allows the user to seamlessly steer the discovery process based on preferences with respect to the four quality dimensions. We show that all dimensions are important for process discovery. However, it only makes sense to consider precision, generalization and simplicity if the replay fitness is acceptable.
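A minimal sketch of the kind of user-steered weighting the ETM approach enables; the scorer values and weights below are hypothetical stand-ins, not the ETM implementation.

```python
def weighted_quality(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-dimension quality scores (each in [0, 1]) into one
    steerable objective via a user-supplied weighted average."""
    total = sum(weights.values())
    return sum(weights[d] * scores[d] for d in weights) / total

# Hypothetical example: weight replay fitness heavily, since the paper argues
# precision, generalization, and simplicity only matter once fitness is acceptable.
scores = {"fitness": 0.95, "precision": 0.70, "generalization": 0.80, "simplicity": 0.60}
weights = {"fitness": 10.0, "precision": 1.0, "generalization": 0.1, "simplicity": 1.0}
print(round(weighted_quality(scores, weights), 3))
```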

156 citations


Journal ArticleDOI
TL;DR: The authors discuss limitations of different forms of generalization across the spectrum of quantitative and qualitative research and argue for considering population heterogeneity and future uses of knowledge claims when judging the appropriateness of generalizations.
Abstract: Context: Generalization is a critical concept in all research designed to generate knowledge that applies to all elements of a unit (population) while studying only a subset of these elements (sample). Commonly applied criteria for generalizing focus on experimental design or representativeness of samples of the population of units. The criteria tend to neglect population diversity and targeted uses of knowledge generated from the generalization. Objectives: This article has two connected purposes: (a) to articulate the structure and discuss limitations of different forms of generalizations across the spectrum of quantitative and qualitative research and (b) to argue for considering population heterogeneity and future uses of knowledge claims when judging the appropriateness of generalizations. Research Design: In the first part of the paper, we present two forms of generalization that rely on statistical analysis of between-group variation: analytic and probabilistic generalization. We then describe a third form of generalization: essentialist generalization. Essentialist generalization moves from the particular to the general in small sample studies. We discuss limitations of each kind of generalization. In the second part of the paper, we propose two additional criteria when evaluating the validity of evidence based on generalizations from education research: population heterogeneity and future use of knowledge claims. Conclusions/Recommendations: The proposed criticisms of research generalizations have implications for how research is conducted and how research findings are summarized. The main limitation in analytic generalization is that it does not provide evidence of a causal link for subgroups or individuals. In addition to making explicit the uses that the knowledge claims may be targeting, there is a need for some changes in how research is conducted. This includes a need for demonstrating the mechanisms of causality; descriptions of intervention outcomes as positive, negative, or neutral; and latent class analysis accompanied by discriminant analysis. The main criticism of probabilistic generalization is that it may not apply to subgroups and may have limited value for guiding policy and practice. This highlights a need for defining grouping variables by intended uses of knowledge claims. With respect to essentialist generalization, there are currently too few qualitative studies attempting to identify invariants that hold across the range of relevant situations. There is a need to study the ways in which a kind of phenomenon is produced, which would allow researchers to understand the various ways in which a phenomenon manifests itself.

131 citations


Journal ArticleDOI
TL;DR: An extension of Darbo's fixed point theorem associated with measures of noncompactness is given, and some results on the existence of coupled fixed points for a class of condensing operators in Banach spaces are presented.
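For reference, the classical statement being extended (standard textbook form, not quoted from the paper):

```latex
% Darbo's fixed point theorem: let C be a nonempty, bounded, closed, convex
% subset of a Banach space, let \mu be a measure of noncompactness, and let
% T : C \to C be continuous with, for some fixed k \in [0,1),
\mu\bigl(T(X)\bigr) \;\le\; k\,\mu(X) \qquad \text{for every } X \subseteq C .
% Then T has at least one fixed point in C.
```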

125 citations


Posted Content
TL;DR: It is proved that the generalization capability of ELM with Gaussian kernel is essentially worse than that of FNN with Gaussian kernel, and it is found that the well-developed coefficient regularization technique can essentially improve the generalization capability.
Abstract: An extreme learning machine (ELM) can be regarded as a two-stage feed-forward neural network (FNN) learning system which randomly assigns the connections with and within hidden neurons in the first stage and tunes the connections with output neurons in the second stage. Therefore, ELM training is essentially a linear learning problem, which significantly reduces the computational burden. Numerous applications show that such a computational burden reduction does not degrade the generalization capability. It has, however, remained open whether this is true in theory. The aim of our work is to study the theoretical feasibility of ELM by analyzing its pros and cons. In the previous part of this work, we pointed out that, via appropriate selection of the activation function, ELM does not degrade the generalization capability in the expectation sense. In this paper, we take the study in a different direction and show that the randomness of ELM also leads to certain negative consequences. On one hand, we find that the randomness causes an additional uncertainty problem for ELM, both in approximation and in learning. On the other hand, we theoretically justify that there also exists an activation function such that the corresponding ELM degrades the generalization capability. In particular, we prove that the generalization capability of ELM with Gaussian kernel is essentially worse than that of FNN with Gaussian kernel. To facilitate the use of ELM, we also provide a remedy for such degradation. We find that the well-developed coefficient regularization technique can essentially improve the generalization capability. The obtained results reveal the essential characteristics of ELM and give theoretical guidance concerning how to use ELM.
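A minimal runnable sketch of the two-stage ELM scheme described above, including the coefficient-regularization (ridge) remedy the authors point to; the architecture, activation, and parameter values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def elm_train(X, y, n_hidden=200, reg=1e-2, seed=0):
    """Stage 1: draw random input-to-hidden weights (never tuned).
    Stage 2: fit hidden-to-output weights by ridge-regularized least
    squares, i.e. the coefficient regularization discussed above."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # random feature map; only beta below is learned
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: regress a noisy sine wave.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
W, b, beta = elm_train(X, y)
print(float(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))  # train MSE
```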

106 citations



Posted Content
TL;DR: In this paper, the authors established a parametric extension of the $h$-principle for overtwisted contact structures on manifolds of all dimensions, which is the direct generalization of the 3-dimensional result from \cite{Eli89}.
Abstract: We establish a parametric extension $h$-principle for overtwisted contact structures on manifolds of all dimensions, which is the direct generalization of the $3$-dimensional result from \cite{Eli89}. It implies, in particular, that any closed manifold admits a contact structure in any given homotopy class of almost contact structures.

Journal ArticleDOI
01 Aug 2014
TL;DR: N-PolyVector fields are introduced, a generalization of N-RoSy fields for which the vectors are neither necessarily orthogonal nor rotationally symmetric, offering an intuitive tool to generate planar quadrilateral meshes.
Abstract: We introduce N-PolyVector fields, a generalization of N-RoSy fields for which the vectors are neither necessarily orthogonal nor rotationally symmetric. We formally define a novel representation for N-PolyVectors as the root sets of complex polynomials and analyze their topological and geometric properties. A smooth N-PolyVector field can be efficiently generated by solving a sparse linear system without integer variables. We exploit the flexibility of N-PolyVector fields to design conjugate vector fields, offering an intuitive tool to generate planar quadrilateral meshes.
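A tiny sketch of the representation idea: the N vectors at a point, written as complex numbers, are stored as the root set of a degree-N polynomial, and smoothness is imposed on the (ordering-invariant) coefficients. `numpy.poly`/`numpy.roots` stand in here for the paper's sparse linear solve.

```python
import numpy as np

# Three sample tangent vectors at one point, encoded as complex numbers.
vectors = np.array([1.0 + 0.5j, -0.3 + 1.0j, 0.8 - 0.9j])

coeffs = np.poly(vectors)     # coefficients of prod_i (z - u_i)
recovered = np.roots(coeffs)  # recover the unordered vector set

print(np.sort_complex(recovered))  # matches `vectors` up to ordering
```

An N-RoSy field is the special case where the polynomial has the form $z^N - c$, so the roots are forced to be rotationally symmetric.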

Reference EntryDOI
22 May 2014
Abstract: This chapter explores case study as a major approach to research and evaluation. After first noting various contexts in which case studies are commonly used, the chapter focuses on case study research directly. Strengths and potential problematic issues are outlined, followed by key phases of the process. The chapter emphasizes how important it is to design the case, to collect and interpret data in ways that highlight the qualitative, to have an ethical practice that values multiple perspectives and political interests, and to report creatively to facilitate use in policymaking and practice. Finally, the chapter explores how to generalize from the single case. Concluding issues center on the need to think more imaginatively about design and the range of methods and forms of reporting required to persuade audiences to value qualitative ways of knowing in case study research.

Journal ArticleDOI
01 Jan 2014-Filomat
TL;DR: In this paper, Hermite-Hadamard type inequalities for h-preinvex functions are established under certain conditions; the results can be viewed as generalizations of several previously known results.
Abstract: The objective of this paper is to obtain some Hermite-Hadamard type inequalities for h-preinvex functions. First, a new kind of generalized h-convex function, termed the h-preinvex function, is introduced by relaxing the concept of h-convexity introduced by Varosanec. Hermite-Hadamard type inequalities for h-preinvex functions are then established under certain conditions. Our results can be viewed as generalizations of several previously known results.
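For reference, the classical inequality being generalized (standard statement for a convex function, not quoted from the paper):

```latex
% Hermite–Hadamard inequality for convex f : [a,b] \to \mathbb{R}:
f\!\left( \frac{a+b}{2} \right)
\;\le\; \frac{1}{b-a} \int_a^b f(x)\, dx
\;\le\; \frac{f(a) + f(b)}{2} .
```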

Journal ArticleDOI
TL;DR: This paper proposes incremental approaches for updating approximations dynamically in set-valued ordered decision systems under attribute generalization, which involve several modifications to relevant matrices without having to retrain from the start on all accumulated training data.

Journal ArticleDOI
TL;DR: In this article, a logic of generalization based on thinly rationalistic social mechanisms is proposed for drawing general inferences on the basis of single-case and small-n studies.
Abstract: Drawing general inferences on the basis of single-case and small-n studies is often seen as problematic. This article suggests a logic of generalization based on thinly rationalistic social mechanisms.

Journal ArticleDOI
TL;DR: In this article, a general integral identity for twice-differentiable functions is derived, and the author establishes new estimates on Hermite-Hadamard type and Simpson type inequalities for s-convex functions via the Riemann–Liouville fractional integral.
Abstract: In this paper, a general integral identity for twice-differentiable functions is derived. Using this identity, the author establishes some new estimates on Hermite-Hadamard type and Simpson type inequalities for s-convex functions via the Riemann–Liouville fractional integral.
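For context, the two standard ingredients named above (textbook definitions, not quoted from the paper):

```latex
% s-convexity (in the second sense), for fixed s \in (0,1]:
f\bigl(t x + (1-t) y\bigr) \;\le\; t^{s} f(x) + (1-t)^{s} f(y),
\qquad t \in [0,1];
% Riemann–Liouville fractional integral of order \alpha > 0:
\bigl(J_{a^+}^{\alpha} f\bigr)(x) \;=\;
\frac{1}{\Gamma(\alpha)} \int_a^x (x-t)^{\alpha-1} f(t)\, dt,
\qquad x > a .
```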

Journal ArticleDOI
TL;DR: In this article, the authors extended the concept of interval-valued intuitionistic fuzzy soft relations to interval-valued neutrosophic soft relations (IVNSS relations for short), which can be viewed as a generalization of soft relations, fuzzy soft relations, intuitionistic fuzzy soft relations, interval-valued intuitionistic fuzzy soft relations, and neutrosophic soft relations.
Abstract: Mukherjee [34] introduced the concept of interval-valued intuitionistic fuzzy soft relations. In this paper we extend this concept to interval-valued neutrosophic soft relations (IVNSS relations for short), which can be viewed as a generalization of soft relations, fuzzy soft relations, intuitionistic fuzzy soft relations, interval-valued intuitionistic fuzzy soft relations, and neutrosophic soft relations. Basic operations are presented, and various properties of IVNSS relations, such as reflexivity, symmetry, and transitivity, are studied.

Journal ArticleDOI
TL;DR: In this paper, the authors introduced a natural generalization of the well-known, interesting, and useful Fox H-function to a generalized function of several variables, namely, the I-function of several variables.
Abstract: The aim of this paper is to introduce a natural generalization of the well-known, interesting, and useful Fox H-function to a generalized function of several variables, namely, the I-function of several variables. In the one-variable case we recover the I-function introduced and studied by Arjun Rathie (1997), and in the two-variable case the I-function of two variables introduced very recently by ShanthaKumari et al. (2012). Convergence conditions, elementary properties, and special cases have also been given. The results presented in this paper generalize results for the H-function of several variables available in the literature.
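For context, the one-variable Fox H-function that these constructions generalize (standard Mellin–Barnes form; the multivariable I-functions replace the single contour integral with iterated ones):

```latex
H_{p,q}^{m,n}\!\left[ z \,\middle|\,
  \begin{matrix} (a_1,A_1),\dots,(a_p,A_p) \\ (b_1,B_1),\dots,(b_q,B_q) \end{matrix}
\right]
= \frac{1}{2\pi i} \int_{L}
  \frac{\prod_{j=1}^{m} \Gamma(b_j + B_j s)\,\prod_{j=1}^{n} \Gamma(1 - a_j - A_j s)}
       {\prod_{j=m+1}^{q} \Gamma(1 - b_j - B_j s)\,\prod_{j=n+1}^{p} \Gamma(a_j + A_j s)}
  \, z^{-s}\, ds .
```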

Journal ArticleDOI
TL;DR: Empirical research methods for scaling up new requirements engineering (RE) technology validation to practice are discussed, and four kinds of methods for empirical RE technology validation are given: expert opinion, single-case mechanism experiments, technical action research, and statistical difference-making experiments.

Journal ArticleDOI
TL;DR: In this paper, a 3D renormalization group decoding algorithm for topological codes with Abelian anyonic excitations was proposed, which achieves a fault-tolerant storage threshold of ∼1.9(4)% for Kitaev's toric code subject to a 3D bit-flip channel.
Abstract: We present a three-dimensional generalization of a renormalization group decoding algorithm for topological codes with Abelian anyonic excitations that we introduced for two dimensions in [7, 8]. We also provide a complete detailed description of the structure of the algorithm, which should be sufficient for anyone interested in implementing it. This 3D implementation extends our previous 2D algorithm by incorporating a failure probability of the syndrome measurements, i.e., it enables fault-tolerant decoding. We report a fault-tolerant storage threshold of ∼1.9(4)% for Kitaev's toric code subject to a 3D bit-flip channel (i.e. including imperfect syndrome measurements). This number is to be compared with the 2.9% value obtained via perfect matching [6]. The 3D generalization inherits many properties of the 2D algorithm, including a complexity linear in the space-time volume of the memory, which can be parallelized to logarithmic time.
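A minimal sketch of the setting the decoder operates in: plaquette syndromes of X (bit-flip) errors on an L×L toric code, here with perfect measurements; the cited algorithm additionally handles noisy syndromes via the third (time) dimension. The edge-indexing conventions below are one common choice, not taken from the paper.

```python
import numpy as np

def plaquette_syndrome(h_err, v_err):
    """Bit-flip syndrome on an L x L toric code with perfect measurements.

    Qubits sit on edges: h_err[i, j] flags an X error on the horizontal edge
    right of vertex (i, j); v_err[i, j] on the vertical edge below it.
    Plaquette (i, j) is violated iff an odd number of its four boundary
    edges carry an error (periodic boundaries)."""
    L = h_err.shape[0]
    syn = np.zeros((L, L), dtype=np.uint8)
    for i in range(L):
        for j in range(L):
            syn[i, j] = (h_err[i, j] ^ h_err[(i + 1) % L, j]
                         ^ v_err[i, j] ^ v_err[i, (j + 1) % L])
    return syn

# Toy usage: a single bit-flip lights up exactly its two adjacent plaquettes.
L = 4
h_err = np.zeros((L, L), dtype=np.uint8)
v_err = np.zeros((L, L), dtype=np.uint8)
h_err[1, 2] = 1
print(plaquette_syndrome(h_err, v_err).sum())  # -> 2
```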

Journal ArticleDOI
TL;DR: In this paper, the p-metric space is extended to an M-metric space, and generalized contractions for obtaining fixed points and common fixed points of mappings are presented.
Abstract: In this paper, we extend the p-metric space to an M-metric space, and we show that the definition we give is a real generalization of the p-metric by presenting some examples. In the sequel, we prove some of the main fixed point and common fixed point theorems for mappings satisfying generalized contractions.
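For reference, the partial-metric (p-metric) axioms being generalized (standard definitions; the paper's M-metric axioms are not reproduced here):

```latex
% Partial metric p : X \times X \to [0,\infty):
\begin{aligned}
&\text{(p1)}\quad x = y \iff p(x,x) = p(x,y) = p(y,y), \\
&\text{(p2)}\quad p(x,x) \le p(x,y), \\
&\text{(p3)}\quad p(x,y) = p(y,x), \\
&\text{(p4)}\quad p(x,y) \le p(x,z) + p(z,y) - p(z,z).
\end{aligned}
% Unlike for a metric, the self-distance p(x,x) need not be zero.
```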

Posted Content
01 May 2014-viXra
TL;DR: The Aleph function of two variables, defined by the author, is a generalization of the I-function of two variables due to Sharma et al.; its integral representation and applications are discussed.
Abstract: In this paper, the author defines the Aleph function of two variables, which is a generalization of the I-function of two variables due to Sharma et al. (9). In this regard, the integral representation and applications of the new function are discussed. Similar results obtained by other authors follow as special cases of our findings.

Book ChapterDOI
02 Sep 2014
TL;DR: This work proposes a generalization of the bisimilarity pseudometric which makes it possible to deal with a wider class of properties, such as those used in security and privacy, via a family of metrics parametrized on a notion of distance which depends on the property the authors want to verify.
Abstract: The bisimilarity pseudometric based on the Kantorovich lifting is one of the most popular metrics for probabilistic processes proposed in the literature. However, its application in verification is limited to linear properties. We propose a generalization of this metric which makes it possible to deal with a wider class of properties, such as those used in security and privacy. More precisely, we propose a family of metrics, parametrized on a notion of distance which depends on the property we want to verify. Furthermore, we show that the members of this family still characterize bisimilarity in terms of their kernel, and we provide a bound on the corresponding metrics on traces. Finally, we study the case of a metric corresponding to differential privacy. We show that in this case it is possible to have a dual form, easier to compute, and we prove that the typical constructs of process algebra are non-expansive with respect to this metric, thus paving the way to a modular approach to verification.
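A small sketch of the underlying Kantorovich (optimal-transport) distance on finite distributions, computed as a linear program; this is the lifting being generalized, not the paper's parametrized family.

```python
import numpy as np
from scipy.optimize import linprog

def kantorovich(mu, nu, d):
    """Kantorovich lifting of ground distance d to distributions mu, nu.

    Solves min_w sum_ij d[i,j] * w[i,j] over couplings w whose row sums
    equal mu and whose column sums equal nu."""
    n, m = len(mu), len(nu)
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0  # row-sum (mu marginal) constraints
    for j in range(m):
        A_eq[n + j, j::m] = 1.0           # column-sum (nu marginal) constraints
    res = linprog(d.reshape(-1), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None), method="highs")
    return res.fun

# Toy usage: with 0/1 ground distance the lifting recovers total variation.
mu = np.array([0.5, 0.5, 0.0])
nu = np.array([0.25, 0.25, 0.5])
d = 1.0 - np.eye(3)
print(kantorovich(mu, nu, d))  # -> 0.5
```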

Journal ArticleDOI
TL;DR: This article presents research that implements a fully automated workflow to generalize a 1:50k map from 1:10k data, the first time that a complete topographic map has been generalized without any human interaction.
Abstract: This article presents research that implements a fully automated workflow to generalize a 1:50k map from 1:10k data. This is the first time that a complete topographic map has been generalized without any human interaction. More noteworthy is that the resulting map is good enough to replace the existing map. Specifications for the automated process were established as part of this research. Replication of the existing map was not the aim, because feasibility of automated generalization is better when compliance with traditional generalization rules is loosened and alternate approaches are acceptable. Indeed, users valued the currency and relevancy of geographical information more than complying with all existing cartographic guidelines. The development of the workflow thus started with the creation of a test map with automated generalization operations. The reason for the test map was to show what is technologically possible and to refine the results based on iterative users' evaluation. The generalization…

Journal ArticleDOI
TL;DR: It is shown that the symmetric function generalization of the chromatic polynomial, or equivalently the U-polynomial, distinguishes among a large class of caterpillar trees (proper caterpillars), thus improving previous results by Martin, Morin and Wagner.
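For reference, Stanley's symmetric-function generalization referred to here (standard definition):

```latex
% Chromatic symmetric function of a graph G = (V, E):
X_G(x_1, x_2, \dots) \;=\; \sum_{\kappa} \; \prod_{v \in V} x_{\kappa(v)},
% summed over all proper colorings \kappa : V \to \{1, 2, \dots\};
% setting x_1 = \dots = x_k = 1 and all other variables to 0 recovers
% the chromatic polynomial value \chi_G(k).
```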

Journal ArticleDOI
TL;DR: The generalization analysis is established for ELM-based ranking (ELMRank) in terms of the covering numbers of the hypothesis space, and empirical results show the competitive performance of ELMRank over state-of-the-art ranking methods.

Journal Article
TL;DR: In this paper, a novel conformance checking method is proposed to measure how well a process model performs in terms of precision and generalization with respect to the actual executions of a process as recorded in an event log; the algorithms are implemented in ProM plugins and a Petri net conformance checking tool.
Abstract: Process mining encompasses the research area which is concerned with knowledge discovery from event logs. One common process mining task focuses on conformance checking, comparing discovered or designed process models with actual real-life behavior as captured in event logs in order to assess the “goodness” of the process model. This paper introduces a novel conformance checking method to measure how well a process model performs in terms of precision and generalization with respect to the actual executions of a process as recorded in an event log. Our approach differs from related work in the sense that we apply the concept of so-called weighted artificial negative events towards conformance checking, leading to more robust results, especially when dealing with less complete event logs that only contain a subset of all possible process execution behavior. In addition, our technique offers a novel way to estimate a process model’s ability to generalize. Existing literature has focused mainly on the fitness (recall) and precision (appropriateness) of process models, whereas generalization has been much more difficult to estimate. The described algorithms are implemented in a number of ProM plugins, and a Petri net conformance checking tool was developed to inspect process model conformance in a visual manner.

Book ChapterDOI
08 Oct 2014
TL;DR: The first generalization bounds for time series prediction with a non-stationary mixing stochastic process are presented, and it is proved that fast learning rates can be achieved by extending existing local Rademacher complexity analysis to the non-i.i.d. setting.
Abstract: This paper presents the first generalization bounds for time series prediction with a non-stationary mixing stochastic process. We prove Rademacher complexity learning bounds for both average-path generalization with non-stationary β-mixing processes and path-dependent generalization with non-stationary ϕ-mixing processes. Our guarantees are expressed in terms of β- or ϕ-mixing coefficients and a natural measure of discrepancy between training and target distributions. They admit as special cases previous Rademacher complexity bounds for non-i.i.d. stationary distributions, for independent but not identically distributed random variables, or for the i.i.d. case. We show that, using a new sub-sample selection technique we introduce, our bounds can be tightened under the natural assumption of convergent stochastic processes. We also prove that fast learning rates can be achieved by extending existing local Rademacher complexity analysis to the non-i.i.d. setting.
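For context, one standard formulation of the (stationary) β-mixing coefficient that the paper's non-stationary guarantees generalize:

```latex
% \beta-mixing coefficient at gap a, for a process (X_t):
\beta(a) \;=\; \sup_{t}\,
\mathbb{E}\!\left[\, \sup_{A \in \sigma(X_{t+a}^{\infty})}
  \bigl|\, \Pr\bigl(A \mid \sigma(X_{-\infty}^{t})\bigr) - \Pr(A) \,\bigr| \right].
% The process is \beta-mixing if \beta(a) \to 0 as a \to \infty, i.e.
% dependence between past and future decays as the gap grows.
```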