
Showing papers on "Matching (statistics) published in 1993"


Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane which, unlike Wigner and Cohen class distributions, does not include interference terms. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).

9,380 citations
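The greedy selection loop at the heart of matching pursuit is simple to sketch. The snippet below is an illustrative toy (the three-atom dictionary, the signal, and the iteration count are invented for the example), not the Gabor-dictionary implementation of the paper:

```python
import math

def matching_pursuit(signal, dictionary, n_iter=3):
    """Greedily expand `signal` over unit-norm `dictionary` atoms.

    Each pass selects the atom with the largest inner product with the
    current residual, records its coefficient, and subtracts its
    contribution -- the core loop of the matching pursuit algorithm.
    """
    residual = list(signal)
    expansion = []
    for _ in range(n_iter):
        # Pick the atom best correlated with the residual.
        scores = [sum(r * a for r, a in zip(residual, atom))
                  for atom in dictionary]
        k = max(range(len(dictionary)), key=lambda i: abs(scores[i]))
        coeff = scores[k]
        expansion.append((k, coeff))
        residual = [r - coeff * a for r, a in zip(residual, dictionary[k])]
    return expansion, residual

# Tiny redundant dictionary of unit-norm atoms in R^2 (invented data).
s = 1 / math.sqrt(2)
atoms = [[1.0, 0.0], [0.0, 1.0], [s, s]]
coeffs, res = matching_pursuit([3.0, 3.0], atoms, n_iter=1)
```

Because the dictionary is redundant, the diagonal atom captures the whole signal in one pass here, which is exactly the adaptivity the redundancy buys.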


Journal ArticleDOI
TL;DR: A classification scheme for multimodal image matching is considered and may be used for any modality; not only for projection images and tomographic images, but also for other signal modalities that provide spatial insight into function or anatomy.
Abstract: A classification scheme for multimodal image matching is considered. The scope of the classification is restricted to methods that register data after acquisition. The classification scheme may be used for any modality; not only for (2-D) projection images and (3-D) tomographic images, but also for other signal modalities that provide spatial insight into function or anatomy, e.g., EEG (electroencephalography) or MEG (magnetoencephalography), and for the real physical patient. The available literature on image matching is discussed and classified.

844 citations


Journal ArticleDOI
TL;DR: A comparison and evaluation of recent proposals for multivariate matched sampling in observational studies finds that optimal matching is sometimes noticeably better than greedy matching in the sense of producing closely matched pairs, sometimes only marginally better, but it is no better than greedy matching in the sense of producing balanced matched samples.
Abstract: A comparison and evaluation is made of recent proposals for multivariate matched sampling in observational studies, where the following three questions are answered: (1) Algorithms: In current statistical practice, matched samples are formed using “nearest available” matching, a greedy algorithm. Greedy matching does not minimize the total distance within matched pairs, though good algorithms exist for optimal matching that do minimize the total distance. How much better is optimal matching than greedy matching? We find that optimal matching is sometimes noticeably better than greedy matching in the sense of producing closely matched pairs, sometimes only marginally better, but it is no better than greedy matching in the sense of producing balanced matched samples. (2) Structures: In common practice, treated units are matched to one control, called pair matching or 1–1 matching, or treated units are matched to two controls, called 1–2 matching, and so on. It is known, however, that the optimal st...

635 citations
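The contrast the paper studies, nearest-available (greedy) matching versus optimal matching, shows up even on a toy one-dimensional covariate. The brute-force optimal matcher below stands in for the efficient network-flow algorithms the paper actually discusses; all data values are invented:

```python
from itertools import permutations

def greedy_match(treated, controls):
    """Nearest-available matching: each treated unit takes the closest
    unused control in turn (the greedy algorithm common in practice)."""
    free = list(range(len(controls)))
    pairs, total = [], 0.0
    for t in treated:
        j = min(free, key=lambda c: abs(t - controls[c]))
        free.remove(j)
        pairs.append(j)
        total += abs(t - controls[j])
    return pairs, total

def optimal_match(treated, controls):
    """Minimize the total within-pair distance by brute force
    (fine for toy sizes; real work uses network-flow solvers)."""
    best = min(permutations(range(len(controls)), len(treated)),
               key=lambda p: sum(abs(t - controls[c])
                                 for t, c in zip(treated, p)))
    return list(best), sum(abs(t - controls[c])
                           for t, c in zip(treated, best))

treated = [1.0, 2.0]
controls = [1.9, 0.0]
g_pairs, g_total = greedy_match(treated, controls)   # greedy total: 2.9
o_pairs, o_total = optimal_match(treated, controls)  # optimal total: 1.1
```

The first treated unit greedily grabs the control at 1.9, forcing a bad second pair; the optimal assignment accepts a slightly worse first pair to make the total distance much smaller.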


Journal ArticleDOI
TL;DR: In this paper, a review of statistical methods for detecting measurement bias in psychological and educational tests is presented, which employs a conceptual framework that distinguishes methods of detecting measurement bias based on either observed or unobserved conditional invariance models.
Abstract: Statistical methods developed over the last decade for detecting measurement bias in psychological and educational tests are reviewed. Earlier methods for assessing measurement bias generally have been replaced by more sophisticated statistical techniques, such as the Mantel-Haenszel procedure, the standardization approach, logistic regression models, and item response theory approaches. The review employs a conceptual framework that distinguishes methods of detecting measurement bias based on either observed or unobserved conditional invariance models. Although progress has been made in the development of statistical methods for detecting measurement bias, issues related to the choice of matching variable, the nonuniform nature of measurement bias, the suitability of current approaches for new and emerging performance assessment methods, and insights into the causes of measurement bias remain elusive. Clearly, psychometric solutions to the problems of measurement bias will further understanding of th...

487 citations
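The Mantel-Haenszel procedure mentioned above pools 2x2 tables across levels of the matching variable (typically total test score). A minimal sketch of the common odds-ratio estimate, with invented counts; an estimate near 1.0 suggests no DIF on the studied item:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across matched score strata.

    Each stratum is a 2x2 table (a, b, c, d): reference-group
    correct/incorrect, focal-group correct/incorrect at one level of
    the matching variable.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical tables at three total-score levels (invented data).
tables = [(20, 10, 18, 12), (30, 10, 28, 12), (40, 5, 39, 6)]
or_hat = mantel_haenszel_or(tables)
```

Pooling within strata is what makes the comparison conditional on ability, so group differences in overall proficiency do not masquerade as item bias.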


Journal ArticleDOI
TL;DR: The authors use range profiles as the feature vectors for data representation, and they establish a decision rule based on the matching scores to identify aerospace objects, and the results demonstrated can be used for comparison with other identification methods.
Abstract: The authors use range profiles as the feature vectors for data representation, and they establish a decision rule based on the matching scores to identify aerospace objects. Reasons for choosing range profiles as the feature vectors are explained, and criteria for determining aspect increments for building the database are proposed. Typical experimental examples of the matching scores and recognition rates are provided and discussed. The results demonstrated can be used for comparison with other identification methods.

274 citations


Book
01 Jan 1993
TL;DR: The book covers: the benefits of IT; when evaluation is needed; how non-IT investments are typically evaluated; how IT investments are evaluated in practice; different approaches to the problem (formal techniques, and experimental and ad hoc methods); why evaluation methods must reflect the project; factors affecting project evaluation; and matching the project with an evaluation method.
Abstract: The benefits of IT; when evaluation is needed; how non-IT investments are typically evaluated; how IT investments are evaluated in practice; different approaches to the problem - formal techniques; different approaches to the problem - experimental and ad hoc methods; evaluation methods must reflect the project; factors affecting project evaluation; matching the project with an evaluation method.

201 citations


Journal ArticleDOI
TL;DR: Self-concept confusion should mitigate against the use of a decision-making strategy that involves using the self to guide choice behavior (i.e., prototype matching; P. M. Niedenthal, N. Cantor, & J. F. Kihlstrom, 1985).
Abstract: Recent research has demonstrated that individuals with low self-esteem lack self-clarity; they have less certain and less stable self-concepts than do those with high self-esteem (A. H. Baumgardner, 1990; J. Campbell, 1990). Self-concept confusion should mitigate against the use of a decision-making strategy that involves using the self to guide choice behavior (i.e., prototype matching, P. M. Niedenthal, N. Cantor, & J. F. Kihlstrom, 1985). Two correlational studies demonstrated that people with high self-esteem, but not low self-esteem, made use of prototype matching in forming preferences. In a 3rd study, the self-concept was made more clear or made more confused. Clarity was associated with the use of prototype matching regardless of level of self-esteem. Self-concept confusion was associated with a failure to use the strategy regardless of level of self-esteem.

164 citations


Journal ArticleDOI
TL;DR: Probability matching, using an array of identifiers, achieves much higher levels of correct matching than is generally achievable by exact character by character comparisons.
Abstract: OBJECTIVES--To report on the development of computer assisted methods for linking medical records and record abstracts. DESIGN--The methods include file blocking, to put records in an order which makes searching efficient; matching, which is the process of comparing records to determine whether they do or do not relate to the same person; linkage, which is the process of assembling correctly matched records into a time sequenced composite record for the individual; and validation checks and corrections, in which any inconsistencies between different records for the same person are identified and corrected. SETTING--The dataset comprising the Oxford record linkage study which includes hospital inpatient records and vital records. RESULTS AND CONCLUSIONS--Probability matching, using an array of identifiers, achieves much higher levels of correct matching than is generally achievable by exact character by character comparisons. The increasing use of information technology to store data about health and health care means that there is increasing scope to link records for research and for patient care. Sophisticated methods to achieve this on a large scale are now available.

162 citations
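Probability matching of the kind described here is usually scored in the Fellegi-Sunter style: each identifier contributes a log-likelihood-ratio weight depending on whether it agrees between the two records. A minimal sketch with invented m- and u-probabilities (not the Oxford study's actual parameters):

```python
import math

def match_weight(rec_a, rec_b, m, u):
    """Sum log2(m/u) for agreeing identifiers and log2((1-m)/(1-u))
    for disagreeing ones -- the classic probabilistic linkage score.

    m[field]: P(field agrees | records are a true match)
    u[field]: P(field agrees | records are not a match)
    """
    w = 0.0
    for field in m:
        if rec_a[field] == rec_b[field]:
            w += math.log2(m[field] / u[field])
        else:
            w += math.log2((1 - m[field]) / (1 - u[field]))
    return w

# Illustrative, uncalibrated probabilities and invented records.
m = {"surname": 0.95, "birth_year": 0.98, "sex": 0.99}
u = {"surname": 0.01, "birth_year": 0.02, "sex": 0.50}
a = {"surname": "smith", "birth_year": 1950, "sex": "F"}
b = {"surname": "smith", "birth_year": 1950, "sex": "F"}
c = {"surname": "smyth", "birth_year": 1950, "sex": "F"}
w_match = match_weight(a, b, m, u)
w_near = match_weight(a, c, m, u)
```

A spelling variant like "smyth" loses the surname weight but keeps the others, which is why an array of identifiers out-performs exact character-by-character comparison: one discrepant field no longer sinks a true match.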


01 Jan 1993
TL;DR: This paper focuses on how to deal with record linkage errors when engaged in regression analysis, and updates the work of Neter, Maynes, and Ramanathan (1965); adjustment procedures are outlined and some successful simulations are described.

Abstract: This paper focuses on how to deal with record linkage errors when engaged in regression analysis. Recent work by Rubin and Belin (1991) and by Winkler and Thibaudeau (1991) provides the theory, computational algorithms, and software necessary for estimating matching probabilities. These advances allow us to update the work of Neter, Maynes, and Ramanathan (1965). Adjustment procedures are outlined and some successful simulations are described. Our results are preliminary and intended largely to stimulate further work.

118 citations


Proceedings ArticleDOI
22 Jun 1993
TL;DR: This paper describes a method for finding structural matching between parallel sentences of two languages (such as Japanese and English), and structural matching is performed by making use of a similarity measure of word pairs in the two languages.
Abstract: This paper describes a method for finding structural matching between parallel sentences of two languages (such as Japanese and English). Parallel sentences are analyzed based on unification grammars, and structural matching is performed by making use of a similarity measure of word pairs in the two languages. Syntactic ambiguities are resolved simultaneously in the matching process. The results serve as a useful source for extracting linguistic and lexical knowledge.

117 citations


Journal ArticleDOI
01 Dec 1993
TL;DR: In this paper, the authors present a method for using signature information easily derived from the component to achieve software reuse, which is only effective if it is easier to locate (and appropriately modify) a reusable component than to write it from scratch.
Abstract: Software reuse is only effective if it is easier to locate (and appropriately modify) a reusable component than to write it from scratch. We present signature matching as a method for achieving this goal by using signature information easily derived from the component. We consider two kinds of software components, functions and modules, and hence two kinds of matching, function matching and module matching. The signature of a function is simply its type; the signature of a module is a multiset of user-defined types and a multiset of function signatures. For both functions and modules, we consider not just exact match, but also various flavors of relaxed match. We briefly describe an experimental facility written in Standard ML for performing signature matching over a library of ML functions.
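The exact and relaxed function matching described above can be sketched with signatures encoded as (argument-types, result-type) tuples. This toy Python model only stands in for the paper's Standard ML types, and the library entries and the chosen relaxation are invented for illustration:

```python
def exact_match(query, sig):
    """A function signature is (arg_types, result_type); an exact match
    requires identical types in identical order."""
    return query == sig

def relaxed_match(query, sig):
    """One relaxed flavor: allow the argument types to be reordered
    while keeping the result type, so a user need not remember the
    exact argument order of a library function."""
    q_args, q_res = query
    s_args, s_res = sig
    return q_res == s_res and sorted(q_args) == sorted(s_args)

# Invented mini-library of signatures.
library = {
    "concat": (("string", "string"), "string"),
    "nth":    (("list", "int"), "any"),
}
query = (("int", "list"), "any")
hits = [name for name, sig in library.items() if relaxed_match(query, sig)]
```

The query fails every exact match but the relaxed pass still retrieves `nth`, which is the point of relaxed matching: more candidates surface at the cost of occasional false positives the user must filter.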

Journal ArticleDOI
TL;DR: It is shown here that if the number of communities is small, the matched design will probably have less power than the unmatched design, which outweighs the benefits of matching on any but the strongest correlates of changes in behaviour.
Abstract: Currently, there is considerable interest in studies that use the community as the experimental unit. Health promotion programmes are one example. Because such activities are expensive, the number of experimental units (communities) is usually very small. Investigators often match communities on demographic variables in order to improve the power of their studies. Matching is known to improve power in certain circumstances. However, we show here that if the number of communities is small, the matched design will probably have less power than the unmatched design. This is due primarily to the loss of degrees of freedom in the matched design, which outweighs the benefits of matching on any but the strongest correlates of changes in behaviour. In the community intervention situation, even small differences in sample size between the matched and unmatched analyses can have expensive consequences.



Journal ArticleDOI
TL;DR: In this article, a Monte Carlo study examined strategies for forming the matching variable for the Mantel-Haenszel (MH) differential item functioning (DIF) procedure; thin matching on total test score was compared to forms of thick matching, pooling levels of matching variable.
Abstract: This Monte Carlo study examined strategies for forming the matching variable for the Mantel-Haenszel (MH) differential item functioning (DIF) procedure; thin matching on total test score was compared to forms of thick matching, which pool levels of the matching variable. Data were generated using a three-parameter logistic (3PL) item response theory (IRT) model with a common guessing parameter. Number of subjects and test length were manipulated, as were the difficulty, discrimination, and presence/absence of DIF in the studied item. Outcome measures were the transformed log-odds statistic Δ-MH, its standard error, and the MH chi-square statistic. For short tests (5 or 10 items), thin matching yielded very poor results, with a tendency to falsely identify items as possessing DIF against the reference group. The best methods of thick matching yielded outcome measure values closer to the expected value for non-DIF items, as well as a larger value than thin matching when the studied item possessed DIF. Intermediate length tests yielded similar results for thin matching and the best methods of thick matching. The method of thick matching that performed best depended on the measure used to detect DIF. Both difficulty and discrimination of the studied item were...

Book ChapterDOI
TL;DR: The chapter highlights the areas of modeling consumer purchase heuristics, modeling consumers' mental processes, matching models to market segments, and modeling choice for truly new or non-comparable alternatives as fruitful areas that deserve concerted attention in the future.
Abstract: Publisher Summary This chapter gives quantitative modelers and management scientists unfamiliar with marketing an appreciation of the way in which models of consumer behavior are developed and used. The chapter is also designed to provide a reference and teaching resource for marketing specialists. The future of consumer behavior modeling is bright; newer models are richer, more flexible, and more closely attuned to modern data sources. Yet many phenomena are poorly modeled at the moment. The chapter highlights the areas of modeling consumer purchase heuristics (and information-processing biases), modeling consumers' mental processes, matching models to market segments, and modeling choice for truly new or non-comparable alternatives as fruitful areas that deserve concerted attention in the future.

Journal ArticleDOI
TL;DR: A theoretical framework is presented in which windowed Fourier phase (WFP) is introduced as the primary matching primitive, and the use of phase as matching primitive is supported by some existing psychophysical and neurophysiological studies.
Abstract: A theoretical framework is presented in which windowed Fourier phase (WFP) is introduced as the primary matching primitive. Zero-crossings and peaks correspond to special values of the phase. The WFP is quasi-linear and dense; and its spatial period and slope are controlled by the scale. This framework has the following important characteristics: 1) matching primitives are available almost everywhere to convey dense disparity information in every channel, either coarse or fine; 2) the false-target problem is significantly mitigated; 3) the matching is easier, uniform, and can be performed by a network suitable for parallel computer architecture; 4) the matching is fast since very few iterations are needed. In fact, the WFP is so informative that the original signal can be uniquely determined up to a multiplicative constant by the WFP in any channel. The use of phase as matching primitive is also supported by some existing psychophysical and neurophysiological studies. An implementation of the proposed theory has shown good results from synthesized and natural images.

Journal ArticleDOI
TL;DR: The use of MS-like renormalization schemes in QCD requires an implementation of nontrivial matching conditions across quark thresholds, a fact often overlooked in the literature. It is shown explicitly that the prediction for αs(MZ), obtained by running the strong coupling constant from the Mτ scale, does not substantially depend on the exact value of the matching point chosen in crossing the b-quark threshold when the appropriate matching conditions are taken into account.

Proceedings ArticleDOI
15 Jun 1993
TL;DR: The authors' multilevel method emphasizes a fine-to-coarse process in which local support is accumulated, which is concise, efficient and above all, gives good results for complex scenes.
Abstract: A computational framework is introduced for matching a pair of stereo images which, in contrast to existing algorithms, features a self-contained local matching module cascaded with a global matching module. Local matching outputs a 3-D grey-scale image in which each and every point has an intensity measuring the goodness of a possible match. Global matching reduces to surface detection in this image. To detect the surface, it is first enhanced, employing a hyperpyramid data structure. Unlike traditional multiresolution approaches, which are based on the coarse-to-fine continuation method, the authors' multilevel method emphasizes a fine-to-coarse process in which local support is accumulated. The algorithm is concise, efficient and, above all, gives good results for complex scenes.

Proceedings ArticleDOI
15 Jun 1993
TL;DR: The problem of matching perspective views of coplanar structures composed of line segments is considered and both model-to-image and image- to-image correspondence matching are given a consistent treatment.
Abstract: The problem of matching perspective views of coplanar structures composed of line segments is considered. Both model-to-image and image-to-image correspondence matching are given a consistent treatment. These matching scenarios generally require discovery of an eight-parameter projective mapping. When the horizon line of the object plane can be found in the image, which is accomplished in this case by using vanishing point analysis, these problems reduce to a simpler six-parameter affine matching problem. When the intrinsic lens parameters of the camera are known, the problem further reduces to four-parameter affine similarity matching.

01 Jan 1993
TL;DR: This hybrid algorithm combines the closed-form weak-perspective pose and iterative 3D pose algorithms to efficiently solve matching problems involving perspective, and permits a mobile robot to successfully update its estimated pose relative to these landmarks.
Abstract: Recognizing an object by its shape is a fundamental problem in computer vision, and typically involves finding a discrete correspondence between object model and image features as well as the pose--position and orientation--of the camera relative to the object. This thesis presents new algorithms for finding the optimal correspondence and pose of a rigid 3D object. They utilize new techniques for evaluating geometric matches and for searching the combinatorial space of possible matches. An efficient closed-form technique for computing pose under weak-perspective (four-parameter 2D affine) is presented, and an iterative non-linear 3D pose algorithm is used to support matching under full 3D perspective. A match error ranks matches by summing a fit error, which measures the quality of the spatial fit between corresponding line segments forming an object model and line segments extracted from an image, and an omission error, which penalizes matches that leave portions of the model omitted or unmatched. Inclusion of omission is crucial to success when matching to corrupted and partial image data. New optimal matching algorithms use a form of combinatorial optimization called local search, which relies on iterative improvement and random sampling to probabilistically find globally optimal matches. A novel variant has been developed: subset-convergent local search, which finds optimal matches with high probability on problems known to be difficult for other techniques. Specifically, it does well on a test suite of highly fragmented and cluttered data, symmetric object models, and multiple model instances. The problem search space grows exponentially in the number of potentially paired features n, yet empirical performance suggests computation is bounded by n^2. Using the 3D pose algorithm during matching, local search solves problems involving significant amounts of 3D perspective. No previous work on geometric matching has generalized in this way.
Our hybrid algorithm combines the closed-form weak-perspective pose and iterative 3D pose algorithms to efficiently solve matching problems involving perspective. For robot navigation, this algorithm recognizes 3D landmarks, and thereby permits a mobile robot to successfully update its estimated pose relative to these landmarks.

Proceedings ArticleDOI
TL;DR: The types of knowledge used during requirements acquisition are identified and a tool to aid in this process, the ReqColl (requirements collector), is introduced, which uses conceptual graphs to represent domain concepts and attempts to recognize new concepts through the use of a matching facility.
Abstract: The types of knowledge used during requirements acquisition are identified and a tool to aid in this process, the ReqColl (requirements collector), is introduced. The tool uses conceptual graphs to represent domain concepts, and attempts to recognize new concepts through the use of a matching facility. The overall approach to requirements capture is described and the approach to matching is illustrated informally. The detailed procedure for matching conceptual graphs is given. ReqColl is compared to similar work elsewhere, and some future research directions are indicated.


Proceedings ArticleDOI
07 Nov 1993
TL;DR: The analysis shows the importance of considering output and input correspondence errors as part of a complete correction procedure in the presence of previously studied design errors.
Abstract: We consider the problem of diagnosing and correcting two classes of design errors, called output correspondence and input correspondence errors. Under these errors, the order of the outputs or inputs of the implementation are changed, such that an incorrect matching between specification and implementation outputs or inputs is obtained. These errors were not included in earlier methods proposed to diagnose and correct design errors. We present diagnosis and correction procedures for these errors, and discuss the performance of these procedures in the presence of previously studied design errors. The analysis shows the importance of considering output and input correspondence errors as part of a complete correction procedure.


Proceedings ArticleDOI
14 Sep 1993
TL;DR: This paper presents a review of block matching based motion estimation algorithms, classified into three categories, namely fast algorithms, layered structure algorithms and inter-block motion field prediction algorithms, compared with respect to estimation accuracy and computational complexity.
Abstract: Block matching motion estimation is a key component in video compression. Typical applications include HDTV, multimedia communications, video conferencing, etc. In this paper, we present a review of block matching based motion estimation algorithms. These algorithms are classified into three categories, namely fast algorithms, layered structure algorithms and inter-block motion field prediction algorithms. They are compared with respect to estimation accuracy and computational complexity. In addition, various matching criteria are also reviewed.
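The exhaustive (full) search that the surveyed fast algorithms try to avoid can be sketched directly: slide the current block over every candidate displacement in a search window of the reference frame and keep the minimum-cost candidate under a matching criterion such as the sum of absolute differences. The frames, block position, and window size below are invented toys:

```python
def sad(block_a, block_b):
    """Sum of absolute differences, the most common matching criterion."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(cur, ref, bx, by, bsize, srange):
    """Exhaustive block matching: test every displacement within
    +/- srange and return (cost, dx, dy) with minimum SAD."""
    block = [row[bx:bx + bsize] for row in cur[by:by + bsize]]
    best = None
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + bsize > len(ref) or x + bsize > len(ref[0]):
                continue  # candidate falls outside the reference frame
            cand = [row[x:x + bsize] for row in ref[y:y + bsize]]
            cost = sad(block, cand)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best

# A 2x2 bright patch shifted one pixel right between frames (toy data).
ref = [[0] * 6 for _ in range(6)]
ref[2][2] = ref[2][3] = ref[3][2] = ref[3][3] = 9
cur = [[0] * 6 for _ in range(6)]
cur[2][3] = cur[2][4] = cur[3][3] = cur[3][4] = 9
mv = full_search(cur, ref, bx=3, by=2, bsize=2, srange=1)
```

Full search evaluates (2*srange+1)^2 candidates per block; the fast algorithms in the survey reduce this count, trading a little accuracy for much lower complexity.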