
Showing papers on "Generalization published in 2001"


Journal ArticleDOI
TL;DR: Here Shepard's theory is recast in a more general Bayesian framework and it is shown how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure.
Abstract: Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models.
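The Bayesian recipe summarized above can be illustrated with a minimal sketch. It assumes a toy one-dimensional hypothesis space of integer intervals ("consequential regions"), a uniform prior, and the size-principle likelihood; the stimuli and ranges are hypothetical, chosen only to show how the generalization gradient falls off with distance from the observed examples.

```python
# Minimal Bayesian generalization sketch: hypotheses are integer intervals on
# [lo, hi], the prior is uniform, and the likelihood uses the size principle
# p(X | h) = |h|^(-n) when all observed stimuli X lie in h. The interval
# hypothesis space and the example stimuli are illustrative assumptions.

def generalization(observed, candidates, lo=0, hi=60):
    hyps = [(a, b) for a in range(lo, hi + 1) for b in range(a, hi + 1)]
    weights = {}
    for (a, b) in hyps:
        if all(a <= x <= b for x in observed):
            weights[(a, b)] = (b - a + 1) ** (-len(observed))  # size principle
    z = sum(weights.values())
    return {y: sum(w for (a, b), w in weights.items() if a <= y <= b) / z
            for y in candidates}

# Generalization probability decays with distance from the observed stimuli.
print(generalization(observed=[28, 30, 32], candidates=[33, 40, 50]))
```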

681 citations


Proceedings Article
03 Jan 2001
TL;DR: This paper draws on ideas from the Exponential family, Generalized linear models, and Bregman distances to give a generalization of PCA to loss functions that it is argued are better suited to other data types.
Abstract: Principal component analysis (PCA) is a commonly applied technique for dimensionality reduction. PCA implicitly minimizes a squared loss function, which may be inappropriate for data that is not real-valued, such as binary-valued data. This paper draws on ideas from the Exponential family, Generalized linear models, and Bregman distances, to give a generalization of PCA to loss functions that we argue are better suited to other data types. We describe algorithms for minimizing the loss functions, and give examples on simulated data.
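As a concrete illustration, the sketch below works out the Bernoulli (logistic) special case: binary data are approximated by low-rank natural parameters and the Bernoulli log-loss is minimized by plain alternating gradient descent. The random data and the simple update rule are illustrative assumptions; the paper describes its own minimization algorithms.

```python
import numpy as np

# Logistic (Bernoulli) special case of exponential-family PCA: approximate binary
# data X by low-rank natural parameters Theta = U @ V.T and minimize the Bernoulli
# log-loss by gradient descent. A simplified sketch, not the paper's algorithm.

def logistic_pca(X, rank=2, lr=0.1, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = 0.01 * rng.standard_normal((n, rank))
    V = 0.01 * rng.standard_normal((d, rank))
    for _ in range(steps):
        P = 1.0 / (1.0 + np.exp(-(U @ V.T)))   # mean parameters (probabilities)
        G = P - X                              # gradient of the log-loss w.r.t. Theta
        U, V = U - lr * (G @ V) / n, V - lr * (G.T @ U) / n
    return U, V

X = (np.random.default_rng(1).random((50, 20)) > 0.5).astype(float)
U, V = logistic_pca(X)    # low-dimensional representation of the binary data
```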

506 citations


Journal ArticleDOI
TL;DR: In this paper, a cellular version of dynamical mean field theory is proposed, which gives a natural generalization of its original single-site construction and is formulated in different sets of variables.
Abstract: We propose a cellular version of dynamical mean field theory which gives a natural generalization of its original single-site construction and is formulated in different sets of variables. We incorporate a possible nonorthogonality of the tight-binding basis set and prove that the resulting equations lead to manifestly causal self-energies.

448 citations


Proceedings ArticleDOI
01 Aug 2001
TL;DR: This paper presents a set of cartographic generalization techniques specifically designed to improve the usability of route maps, based both on cognitive psychology research studying how route maps are used and on an analysis of the generalizations commonly found in hand-drawn route maps.
Abstract: Route maps, which depict a path from one location to another, have emerged as one of the most popular applications on the Web. Current computer-generated route maps, however, are often very difficult to use. In this paper we present a set of cartographic generalization techniques specifically designed to improve the usability of route maps. Our generalization techniques are based both on cognitive psychology research studying how route maps are used and on an analysis of the generalizations commonly found in hand-drawn route maps. We describe algorithmic implementations of these generalization techniques within LineDrive, a real-time system for automatically designing and rendering route maps. Feedback from over 2200 users indicates that almost all believe LineDrive maps are preferable to using standard computer-generated route maps alone.

373 citations


Journal ArticleDOI
TL;DR: In this paper, the pointwise ergodic theorem is proved for general locally compact amenable groups along Folner sequences that obey mild restrictions; such sequences exist for all amenable groups.
Abstract: In this paper we prove the pointwise ergodic theorem for general locally compact amenable groups along Folner sequences that obey some restrictions. These restrictions are mild enough so that such sequences exist for all amenable groups. We also prove a generalization of the Shannon-McMillan-Breiman theorem to all discrete amenable groups.
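For orientation, the averages whose almost-everywhere convergence the theorem asserts take the following form in the discrete case; the notation is a standard rendering, assumed here rather than quoted from the paper.

```latex
% Ergodic averages along a Folner sequence (F_n) of a discrete amenable group G
% acting measure-preservingly on (X, mu), for f in L^1(mu):
A_n f(x) \;=\; \frac{1}{|F_n|} \sum_{g \in F_n} f(g \cdot x)
\;\longrightarrow\; \bar{f}(x) \quad \text{for a.e. } x \text{ as } n \to \infty,
```

where the limit is the conditional expectation of f with respect to the invariant sigma-algebra (the space mean in the ergodic case).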

347 citations


Journal ArticleDOI
TL;DR: In this paper, a Bayesian procedure to estimate the three-parameter normal ogive model and a generalization of the procedure to a model with multidimensional ability parameters are presented.
Abstract: A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization of the procedure to a model with multidimensional ability parameters are presented. The procedure is a generalization of a procedure by Albert (1992) for estimating the two-parameter normal ogive model. The procedure supports analyzing data from multiple populations and incomplete designs. It is shown that restrictions can be imposed on the factor matrix for testing specific hypotheses about the ability structure. The technique is illustrated using simulated and real data.
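For reference, the three-parameter normal ogive model that the procedure estimates has the standard form below, with item discrimination a_i, difficulty b_i, guessing parameter c_i and ability θ_j; the notation is a standard rendering rather than a quotation from the paper, and the multidimensional generalization replaces a_i θ_j by an inner product of item loadings with an ability vector.

```latex
P(Y_{ij} = 1 \mid \theta_j) \;=\; c_i + (1 - c_i)\,\Phi\!\left(a_i \theta_j - b_i\right),
\qquad \Phi(\cdot) \text{ the standard normal cdf.}
```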

332 citations


Proceedings Article
03 Jan 2001
TL;DR: A new PAC-style bound on generalization error is given which justifies both the use of confidences — partial rules and partial labeling of the unlabeled data — and the use of an agreement-based objective function as suggested by Collins and Singer.
Abstract: The rule-based bootstrapping introduced by Yarowsky, and its co-training variant by Blum and Mitchell, have met with considerable empirical success. Earlier work on the theory of co-training has been only loosely related to empirically useful co-training algorithms. Here we give a new PAC-style bound on generalization error which justifies both the use of confidences — partial rules and partial labeling of the unlabeled data — and the use of an agreement-based objective function as suggested by Collins and Singer. Our bounds apply to the multiclass case, i.e., where instances are to be assigned one of k labels for k ≥ 2.

315 citations


Journal ArticleDOI
TL;DR: It is shown that there are simple methods for estimating and modeling the covariance or variogram components of the product-sum model using data from realizations of spatial-temporal random fields.
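For context, the product-sum model referred to in the TL;DR is commonly written as follows, with a valid spatial covariance C_s, a valid temporal covariance C_t and nonnegative weighting coefficients; this is the standard form from the product-sum literature and is assumed here rather than quoted from the paper.

```latex
C_{st}(h, u) \;=\; k_1\, C_s(h)\, C_t(u) \;+\; k_2\, C_s(h) \;+\; k_3\, C_t(u),
\qquad k_1 > 0,\; k_2 \ge 0,\; k_3 \ge 0.
```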

175 citations


Journal ArticleDOI
TL;DR: Several kinds of Riesz-like properties for pseudoeffect algebras are defined for the purpose of a structure theory, and it is shown how they are interrelated.
Abstract: As a noncommutative generalization of effect algebras, we introduce pseudoeffect algebras and list some of their basic properties. For the purpose of a structure theory, we further define several kinds of Riesz-like properties for pseudoeffect algebras and show how they are interrelated.

156 citations


Journal ArticleDOI
TL;DR: The extended Euler deconvolution algorithm as mentioned in this paper is a generalization and unification of 2-D Euler deconvolution and Werner deconvolution, and its 3-D extension can be realized using generalized Hilbert transforms.
Abstract: The extended Euler deconvolution algorithm is shown to be a generalization and unification of 2-D Euler deconvolution and Werner deconvolution. After recasting the extended Euler algorithm in a way that suggests a natural generalization to three dimensions, we show that the 3-D extension can be realized using generalized Hilbert transforms. The resulting algorithm is both a generalization of extended Euler deconvolution to three dimensions and a 3-D extension of Werner deconvolution. At a practical level, the new algorithm helps stabilize the Euler algorithm by providing at each point three equations rather than one. We illustrate the algorithm by explicit calculation for the potential of a vertical magnetic dipole.
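As background, conventional Euler deconvolution rests on Euler's homogeneity equation below, relating the observed field T to the source position (x_0, y_0, z_0), the structural index N and a regional background B; the extended algorithm supplies additional equations of this kind at each point, which is what stabilizes the solution. The notation is the standard one and is not quoted from the paper.

```latex
(x - x_0)\,\frac{\partial T}{\partial x} \;+\; (y - y_0)\,\frac{\partial T}{\partial y}
\;+\; (z - z_0)\,\frac{\partial T}{\partial z} \;=\; N\,(B - T).
```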

149 citations


Journal ArticleDOI
TL;DR: In this paper, linear relational embedding is introduced as a means of learning a distributed representation of concepts from data consisting of binary relations between these concepts: concepts are represented as vectors, relations as matrices, and the application of a relation to a concept as a matrix-vector multiplication that produces an approximation to the related concept, with the representation learned by maximizing an appropriate discriminative goodness function using gradient ascent.
Abstract: We introduce linear relational embedding as a means of learning a distributed representation of concepts from data consisting of binary relations between these concepts. The key idea is to represent concepts as vectors, binary relations as matrices, and the operation of applying a relation to a concept as a matrix-vector multiplication that produces an approximation to the related concept. A representation for concepts and relations is learned by maximizing an appropriate discriminative goodness function using gradient ascent. On a task involving family relationships, learning is fast and leads to good generalization.
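The recipe in the abstract (concepts as vectors, relations as matrices, relation application as a matrix-vector product, learning by gradient ascent on a discriminative goodness) can be sketched compactly. The toy triples, the softmax-by-negative-squared-distance goodness and the numerical-gradient ascent below are illustrative assumptions, not the paper's exact objective or training procedure.

```python
import numpy as np

# Toy linear relational embedding: R[r] @ C[a] should land near C[b] for each
# observed triple (a, r, b). Goodness = log-softmax of the target concept under
# negative squared distances; trained by numerical-gradient ascent for brevity.

rng = np.random.default_rng(0)
n_concepts, n_relations, dim = 6, 2, 3
triples = [(0, 0, 1), (1, 0, 2), (3, 1, 4), (4, 1, 5)]   # hypothetical (a, r, b) facts

C = 0.1 * rng.standard_normal((n_concepts, dim))          # concept vectors
R = 0.1 * rng.standard_normal((n_relations, dim, dim))    # relation matrices

def log_goodness():
    total = 0.0
    for a, r, b in triples:
        pred = R[r] @ C[a]
        d2 = ((C - pred) ** 2).sum(axis=1)
        total += -d2[b] - np.log(np.exp(-d2).sum())
    return total

def numgrad(f, x, eps=1e-5):            # finite-difference gradient, for brevity
    g = np.zeros_like(x)
    for i in range(x.size):
        x.flat[i] += eps; fp = f()
        x.flat[i] -= 2 * eps; fm = f()
        x.flat[i] += eps
        g.flat[i] = (fp - fm) / (2 * eps)
    return g

for _ in range(300):                     # gradient ascent on the goodness
    for P in (C, R):
        P += 0.2 * numgrad(log_goodness, P)
```

After training, R[r] @ C[a] should lie closer to C[b] than to the other concept vectors for the stored triples, which is the qualitative behavior the abstract reports on the family-relations task.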

Journal ArticleDOI
TL;DR: Novel algorithms to learn the amplitudes of nonlinear activations in layered networks, without any assumption on their analytical form, are introduced, and it is shown that the algorithms speed up convergence and modify the search path in the weight space, possibly reaching deeper minima that may also improve generalization.
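A minimal sketch of the idea in the TL;DR: a single neuron whose activation amp * tanh(x) carries a trainable amplitude that is updated by gradient descent together with the weight. The synthetic regression task and the plain squared-error objective are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

# One neuron with a learnable activation amplitude: pred = amp * tanh(w * x).
# Both amp and w are trained by gradient descent on the mean squared error.

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
y = 3.0 * np.tanh(1.5 * x)              # synthetic target with amplitude 3

w, amp, lr = 0.5, 1.0, 0.05
for _ in range(2000):
    t = np.tanh(w * x)
    err = amp * t - y
    g_amp = np.mean(err * t)                        # d loss / d amp
    g_w = np.mean(err * amp * (1 - t ** 2) * x)     # d loss / d w
    amp -= lr * g_amp
    w -= lr * g_w

print(round(float(amp), 2), round(float(w), 2))     # should approach 3 and 1.5
```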

Journal ArticleDOI
TL;DR: In this article, a new class of generalized convex set-valued functions, termed nearly-subconvexlike functions, is introduced; a Lagrangian multiplier theorem is established and two scalarization theorems are obtained for vector optimization.
Abstract: A new class of generalized convex set-valued functions, termed nearly-subconvexlike functions, is introduced. This class is a generalization of cone-subconvexlike maps, nearly-convexlike set-valued functions, and preinvex set-valued functions. Properties for the nearly-subconvexlike functions are derived and a theorem of the alternative is proved. A Lagrangian multiplier theorem is established and two scalarization theorems are obtained for vector optimization.

DOI
16 Oct 2001
TL;DR: An evolutionary algorithm whose population converges with probability one to the set of minimal elements within a finite number of iterations is presented.
Abstract: The search for minimal elements in partially ordered sets is a generalization of the task of finding Pareto-optimal elements in multi-criteria optimization problems. Since there are usually many minimal elements within a partially ordered set, a population-based evolutionary search is, as a matter of principle, capable of finding several minimal elements in a single run, and is therefore steadily gaining popularity. Here, we present an evolutionary algorithm whose population converges with probability one to the set of minimal elements within a finite number of iterations.
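The sketch below gives a toy rendering of the population-based idea: candidates are integer vectors ordered componentwise (the Pareto special case mentioned above), the population is randomly mutated, and an archive of the currently non-dominated points is maintained. The ground set, mutation operator and archive rule are illustrative assumptions; they are not the paper's algorithm or its convergence guarantee.

```python
import random

# Toy search for minimal elements of a poset: feasible points are integer vectors
# with coordinate sum >= 12 (so the minimal elements are exactly the vectors with
# sum == 12), ordered componentwise. An archive keeps the mutually non-dominated
# points seen so far. All modelling choices here are illustrative assumptions.

def leq(a, b):                            # the partial order: componentwise <=
    return all(x <= y for x, y in zip(a, b))

def feasible(a):
    return sum(a) >= 12 and all(0 <= v <= 9 for v in a)

def mutate(a):
    b = list(a)
    i = random.randrange(len(b))
    b[i] += random.choice((-1, 1))
    b = tuple(b)
    return b if feasible(b) else a        # reject infeasible moves

random.seed(0)
pop = []
while len(pop) < 20:
    cand = tuple(random.randrange(10) for _ in range(3))
    if feasible(cand):
        pop.append(cand)

archive = []
for _ in range(500):
    pop = [mutate(p) for p in pop]
    for p in pop:
        if p not in archive and not any(leq(a, p) and a != p for a in archive):
            archive = [a for a in archive if not (leq(p, a) and a != p)]
            archive.append(p)

print(sorted(archive))       # entries approach the sum-12 minimal elements
```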

Journal ArticleDOI
TL;DR: The concept of bag complement is suitably redefined, some theorems involving bag operations are established, and many existing and new results are derived based upon this new definition.
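Since the TL;DR does not spell out the redefined complement, the sketch below only illustrates basic bag (multiset) operations, with the complement taken relative to a fixed universal bag U and clipped at zero; this particular definition is an assumption for illustration, not necessarily the one proposed in the paper.

```python
from collections import Counter

# Basic bag (multiset) operations. The complement is taken relative to a fixed
# universal bag U (an illustrative assumption, not necessarily the paper's definition).

def bag_union(a, b):
    return Counter({k: max(a[k], b[k]) for k in a.keys() | b.keys()})

def bag_intersection(a, b):
    return Counter({k: min(a[k], b[k]) for k in a.keys() & b.keys()})

def bag_complement(a, U):
    return Counter({k: U[k] - a[k] for k in U if U[k] - a[k] > 0})

U = Counter({'x': 3, 'y': 2, 'z': 2})     # hypothetical universal bag
A = Counter({'x': 1, 'y': 2})
print(bag_complement(A, U))               # Counter({'x': 2, 'z': 2})
```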

John G. Stell1
01 Jan 2001
TL;DR: A formal approach to multi-resolution in spatial data handling is presented, focusing on the new concept of a stratified map space, and is used to provide a formal foundation for generalization and vague regions.
Abstract: Precision is a key component of spatial data quality and in this era of globally distributed spatial data it is essential to be able to integrate multiple distributed data sets with heterogeneous levels of precision. Imprecision arises through limitations on semantic and geometric resolution of data representations. Generalization, and in particular model-oriented generalization, is an important process in this context, because it enables translation between different levels of precision. This paper provides a formal approach to multi-resolution in spatial data handling. It begins by motivating the work and pointing to some of the background research, and then introduces the basic concepts underlying the approach, focusing on the new concept of a stratified map space. The approach is quite general, and to show its application, the paper uses it to provide a formal foundation for generalization and vague regions.

Journal ArticleDOI
TL;DR: A generalization of Zubov's theorem on representing the domain of attraction via the solution of a suitable partial differential equation is presented for the case of perturbed systems with a singular fixed point, and it is shown that maximal robust Lyapunov functions can be characterized as viscosity solutions.
Abstract: A generalization of Zubov's theorem on representing the domain of attraction via the solution of a suitable partial differential equation is presented for the case of perturbed systems with a singular fixed point. For the construction it is necessary to consider solutions in the viscosity sense. As a consequence, maximal robust Lyapunov functions can be characterized as viscosity solutions.
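As background, one common statement of the classical Zubov equation being generalized is given below, for a system x' = f(x) with an asymptotically stable equilibrium at the origin and a suitable positive weight h; the domain of attraction is then characterized as the set where V < 1. The perturbed, viscosity-solution version studied in the paper modifies this PDE, so the form below is background only, not the paper's equation.

```latex
\nabla V(x) \cdot f(x) \;=\; -\,h(x)\,\bigl(1 - V(x)\bigr), \qquad V(0) = 0,
\qquad \mathcal{D} \;=\; \{\, x : V(x) < 1 \,\}.
```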

Journal ArticleDOI
TL;DR: In this paper, a generalization of Golod's theorem on the behaviour of G_K-dimension with respect to a suitable module K under factorization by ideals of a special kind is obtained and a new form of the Avramov-Foxby conjecture on the transitivity of G-dimension is suggested.
Abstract: For finite modules over a local ring the general problem is considered of finding an extension of the class of modules of finite projective dimension preserving various properties. In the first section the concept of a suitable complex is introduced, which is a generalization of both a dualizing complex and a suitable module. Several properties of the dimension of modules with respect to such complexes are established. In particular, a generalization of Golod's theorem on the behaviour of G_K-dimension with respect to a suitable module K under factorization by ideals of a special kind is obtained and a new form of the Avramov-Foxby conjecture on the transitivity of G-dimension is suggested. In the second section a class of modules containing modules of finite CI-dimension is considered, which has some additional properties. A dimension constructed in the third section characterizes the Cohen-Macaulay rings in precisely the same way as the class of modules of finite projective dimension characterizes regular rings and the class of modules of finite CI-dimension characterizes complete intersections.

Book ChapterDOI
01 Jan 2001
Abstract: The starting point of this work was the classification of p-divisible groups over a discrete valuation ring of characteristic 0 with perfect residue field of characteristic p > 3 obtained by C. Breuil in his note [B]. We will show that such a classification holds under quite general circumstances. We prove this by showing that the category used by Breuil to classify p-divisible groups is equivalent to the category of Dieudonne displays, which we defined in [Z-DD]. Breuil obtains his result by a very useful classification of finite flat group schemes over a discrete valuation ring as above. We have no generalization of such a classification.

Posted Content
TL;DR: In this article, an approach for estimating the generalization performance of a support vector machine (SVM) for text classification is proposed and analyzed, which can be used not only to estimate the error rate, but also to estimate recall, precision and F1.
Abstract: This paper proposes and analyzes an efficient and effective approach for estimating the generalization performance of a support vector machine (SVM) for text classification. Without any computation-intensive resampling, the new estimators are computationally much more efficient than cross-validation or bootstrapping. They can be computed at essentially no extra cost immediately after training a single SVM. Moreover, the estimators developed here address the special performance measures needed for evaluating text classifiers. They can be used not only to estimate the error rate, but also to estimate recall, precision, and F1. A theoretical analysis and experiments show that the new method can effectively estimate the performance of SVM text classifiers in an efficient way.
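A rough sketch of what such a resampling-free estimate can look like is given below: it counts, from a single trained SVM's dual coefficients and slack variables, the training examples flagged by a xi-alpha-style rule. The specific rule used here (flag example i when 2 * alpha_i * R^2 + xi_i >= 1, with R^2 bounding the kernel diagonal) and the toy numbers are assumptions for illustration and may differ from the paper's exact estimators.

```python
import numpy as np

# xi-alpha-style error estimate from a single trained SVM: no resampling, just the
# dual coefficients alpha, slacks xi, and a bound R2 on the kernel diagonal.
# The thresholding rule below is an assumption for illustration.

def xi_alpha_error_estimate(alpha, xi, R2):
    flagged = np.sum(2.0 * alpha * R2 + xi >= 1.0)
    return flagged / len(alpha)

alpha = np.array([0.0, 0.3, 1.0, 0.0, 0.7])   # hypothetical dual coefficients
xi = np.array([0.0, 0.0, 1.2, 0.0, 0.1])      # hypothetical slack variables
print(xi_alpha_error_estimate(alpha, xi, R2=1.0))   # fraction of flagged examples
```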

Posted Content
TL;DR: In this paper, the authors generalize the results of Birkhoff and Mather on the existence of orbits wandering in regions of instability of twist maps to higher dimensions, and propose a generalization to higher dimension twist maps.
Abstract: We generalize to higher dimension results of Birkhoff and Mather on the existence of orbits wandering in regions of instability of twist maps. This generalization is strongly inspired by the one already proposed by Mather. However, its advantage is that it really contains most of the results of Birkhoff and Mather on twist maps.

Book ChapterDOI
16 Jul 2001
TL;DR: In this paper, the problem of predicting a sequence when the information about the previous elements (feedback) is only partial and possibly dependent on the predicted values is investigated; this setting can be seen as a generalization of the classical multi-armed bandit problem and accommodates a natural bandwidth allocation problem as a special case.
Abstract: We investigate the problem of predicting a sequence when the information about the previous elements (feedback) is only partial and possibly dependent on the predicted values. This setting can be seen as a generalization of the classical multi-armed bandit problem and accommodates as a special case a natural bandwidth allocation problem. According to the approach adopted by many authors, we give up any statistical assumption on the sequence to be predicted. We evaluate the performance against the best constant predictor (regret), as is common in iterated game analysis.
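The multi-armed bandit is cited above as the special case; the sketch below is an Exp3-style forecaster for that special case only (exponential weights with importance-weighted loss estimates), not the paper's more general partial-feedback algorithm. The loss sequence and parameter values are illustrative assumptions.

```python
import numpy as np

# Exp3-style bandit sketch: only the chosen arm's loss is observed, so losses are
# importance-weighted before the exponential update. Parameters are illustrative.

rng = np.random.default_rng(0)
K, T, eta, gamma = 3, 2000, 0.05, 0.05
losses = rng.random((T, K)) * np.array([0.2, 0.5, 0.8])   # arm 0 is best on average

w = np.ones(K)
for t in range(T):
    p = (1 - gamma) * w / w.sum() + gamma / K   # mix in uniform exploration
    arm = rng.choice(K, p=p)
    est = np.zeros(K)
    est[arm] = losses[t, arm] / p[arm]          # unbiased estimate of the loss vector
    w *= np.exp(-eta * est)
    w /= w.max()                                # keep the weights numerically stable

print(np.round(p, 3))   # probability mass should concentrate on arm 0
```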

Proceedings ArticleDOI
15 Jul 2001
TL;DR: Preliminary experimentation implies that EA and EAM are to be viewed as good alternatives to FA and FAM for data clustering and classification tasks respectively.
Abstract: We introduce ellipsoid-ART (EA) and ellipsoid-ARTMAP (EAM) as a generalization of hypersphere ART (HA) and hypersphere-ARTMAP (HAM) respectively. As was the case with HA/HAM, these novel architectures are based on ideas rooted in fuzzy-ART (FA) and fuzzy-ARTMAP (FAM). While FA/FAM aggregate input data using hyper-rectangles, EA/EAM utilize hyper-ellipsoids for the same purpose. Due to their learning rules, EA and EAM share virtually all properties and characteristics of their FA/FAM counterparts. Preliminary experimentation implies that EA and EAM are to be viewed as good alternatives to FA and FAM for data clustering and classification tasks respectively.

01 Jan 2001
TL;DR: In this paper, the notion of α-open sets in topological spaces is applied to present and study contra-α-continuity as a new generalization of contra-continuity (Dontchev, 1996).
Abstract: In this paper, we apply the notion of α-open sets in topological spaces to present and study contra-α-continuity as a new generalization of contra-continuity (Dontchev, 1996).

Journal ArticleDOI
TL;DR: In this article, a generalization of the McWeeny transform with nth-order convergence, n > 2, is proposed, which can be applied in calculational schemes where the number of basis functions exceeds the number of occupied orbitals in the density matrix.
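As a reminder of the starting point, the classical McWeeny purification transform being generalized maps a trial one-particle density matrix toward idempotency as below; the paper's nth-order variant (n > 2) is not reproduced here.

```latex
P_{k+1} \;=\; 3P_k^{2} - 2P_k^{3},
\qquad P_k \to P \text{ with } P^{2} = P \text{ (quadratic convergence).}
```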

Journal ArticleDOI
S. Laporta1
TL;DR: In this paper, a new method of calculation of master integrals based on the solution of systems of difference equations in one variable is described, and the generalization to arbitrary diagrams is described.
Abstract: In this paper we describe a new method of calculation of master integrals based on the solution of systems of difference equations in one variable. An explicit example is given, and the generalization to arbitrary diagrams is described. As an example of application of the method, we have calculated the values of master integrals for single-scale massive three-loop vacuum diagrams, three-loop self-energy diagrams, two-loop vertex diagrams and two-loop box diagrams.

Book ChapterDOI
07 Sep 2001
TL;DR: A successful project in automated map generalization undertaken at the National Atlas of Canada made extensive use of the implicit perceptual information present in road and river networks as a means of analysing and understanding their basic structure.
Abstract: A successful project in automated map generalization undertaken at the National Atlas of Canada made extensive use of the implicit perceptual information present in road and river networks as a means of analysing and understanding their basic structure. Using the perceptual grouping principle of 'good continuation', a network is decomposed into chains of network arcs, termed 'strokes'. The network strokes are then automatically ranked according to derived measures. Deleting strokes from the network following this ranking sequence provides a simple but very effective means of generalizing (attenuating) the network. This technique has practical advantages over previous methods. It has been employed in road network generalization, and applied in the selection of hydrologic data for a map covering Canada's northern territories. The method may find further application in the interpretation of other forms of documents, such as diagrams or handwriting.
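A compact sketch of the selection step described above: strokes (chains of network arcs) are ranked by a derived measure and dropped from the bottom of the ranking to attenuate the network. Stroke construction by 'good continuation' is omitted, and the length-based ranking and toy data are illustrative assumptions rather than the project's actual measures.

```python
# Rank strokes by total arc length and keep only the top-ranked ones; deleting the
# rest generalizes (attenuates) the network. Data and measure are hypothetical.

strokes = {                     # stroke id -> lengths of its constituent arcs
    "A": [4.0, 3.5, 5.1],
    "B": [1.2],
    "C": [2.0, 2.2],
    "D": [0.5, 0.4],
}

def generalize(strokes, keep):
    ranked = sorted(strokes, key=lambda s: sum(strokes[s]), reverse=True)
    return ranked[:keep]

print(generalize(strokes, keep=2))   # ['A', 'C']
```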

01 Jan 2001
TL;DR: In this article, a semi-tensor product of matrices is proposed and used to formulate Morgan's problem for control systems as a numerically solvable problem.
Abstract: This paper proposes a new matrix product, namely, the semi-tensor product. It is a generalization of the conventional matrix product. Meanwhile, it is also closely related to the Kronecker (tensor) product of matrices. The purpose of introducing this product is twofold: (i) treat multi-dimensional data; (ii) treat nonlinear problems in a linear way. Then computer and numerical methods can be easily used for solving nonlinear problems. Properties and formulas are deduced. As an application, Morgan's problem for control systems is formulated as a numerically solvable problem.
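A small sketch of the (left) semi-tensor product described above: for A of size m x n and B of size p x q, take t = lcm(n, p) and multiply (A kron I_{t/n}) by (B kron I_{t/p}); when n = p this reduces to the ordinary matrix product, matching the claim that the construction generalizes it. The example matrices are arbitrary.

```python
import numpy as np
from math import lcm

# Left semi-tensor product: stp(A, B) = (A kron I_{t/n}) @ (B kron I_{t/p}),
# where t = lcm(n, p). It coincides with A @ B when the dimensions already match.

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

A = np.array([[1.0, 2.0]])            # 1 x 2
B = np.array([[1.0], [3.0]])          # 2 x 1
print(stp(A, B))                      # same as A @ B: [[7.]]

C = np.array([[1.0, 2.0, 3.0, 4.0]])  # 1 x 4: incompatible with B for @, fine for stp
print(stp(C, B))                      # [[10. 14.]]
```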

Journal ArticleDOI
TL;DR: The well-known Fisher's equation, which combines diffusion with a logistic nonlinearity, is generalized to include memory effects, and traveling wave solutions of the equation are found.
Abstract: Memory effects in transport require, for their incorporation into reaction-diffusion investigations, a generalization of traditional equations. The well-known Fisher's equation, which combines diffusion with a logistic nonlinearity, is generalized to include memory effects, and traveling wave solutions of the equation are found. Comparison is made with alternative generalization procedures.
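To fix ideas, the classical Fisher equation and the general shape of a memory (delay-kernel) generalization of the kind described are shown below; the specific memory kernel and the traveling-wave analysis of the paper are not reproduced, so the second equation is schematic.

```latex
\frac{\partial u}{\partial t} \;=\; D\,\frac{\partial^2 u}{\partial x^2} + k\,u(1-u),
\qquad\quad
\frac{\partial u}{\partial t} \;=\; \int_0^{t} \phi(t-s)\, D\,\frac{\partial^2 u}{\partial x^2}(x,s)\,\mathrm{d}s \;+\; k\,u(1-u).
```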

Journal ArticleDOI
TL;DR: In this paper, a new generalization of Hardy-Hilbert's integral inequality with a best constant factor involving the β function is proposed, and its more extended form is considered.
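For completeness, the classical Hardy-Hilbert inequality being generalized states that for p > 1, 1/p + 1/q = 1 and nonnegative sequences with convergent p-th and q-th power sums,

```latex
\sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{a_m b_n}{m+n}
\;<\; \frac{\pi}{\sin(\pi/p)}
\left(\sum_{m=1}^{\infty} a_m^{\,p}\right)^{1/p}
\left(\sum_{n=1}^{\infty} b_n^{\,q}\right)^{1/q},
```

where the constant π/sin(π/p) = B(1/p, 1/q) is best possible, which is where the β function mentioned in the TL;DR appears.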