
Showing papers on "Matching (statistics)" published in 2008


Journal ArticleDOI
TL;DR: Propensity score matching (PSM) has become a popular approach to estimating causal treatment effects; each implementation step involves many decisions, and this paper discusses these implementation issues and gives guidance to researchers who want to use PSM for evaluation purposes.
Abstract: Propensity score matching (PSM) has become a popular approach to estimate causal treatment effects. It is widely applied when evaluating labour market policies, but empirical examples can be found in very diverse fields of study. Once the researcher has decided to use PSM, he is confronted with a lot of questions regarding its implementation. To begin with, a first decision has to be made concerning the estimation of the propensity score. Following that one has to decide which matching algorithm to choose and determine the region of common support. Subsequently, the matching quality has to be assessed and treatment effects and their standard errors have to be estimated. Furthermore, questions like 'what to do if there is choice-based sampling?' or 'when to measure effects?' can be important in empirical studies. Finally, one might also want to test the sensitivity of estimated treatment effects with respect to unobserved heterogeneity or failure of the common support condition. Each implementation step involves a lot of decisions and different approaches can be thought of. The aim of this paper is to discuss these implementation issues and give some guidance to researchers who want to use PSM for evaluation purposes.
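The implementation steps the abstract enumerates (estimate the score, impose common support, match, estimate the treatment effect) can be made concrete with a toy sketch. This is a hypothetical, minimal illustration, not the paper's procedure: a single covariate, a logistic fit by plain gradient descent, 1:1 nearest-neighbour matching with replacement, and simple min/max common-support trimming; all function names are invented.

```python
import math

def _sigmoid(z):
    z = max(-35.0, min(35.0, z))   # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def fit_propensity(x, d, lr=0.1, iters=2000):
    """Propensity scores from a one-covariate logistic regression of
    treatment d on covariate x, fitted by gradient descent."""
    a = b = 0.0
    n = len(x)
    for _ in range(iters):
        ga = gb = 0.0
        for xi, di in zip(x, d):
            err = _sigmoid(a + b * xi) - di
            ga += err / n
            gb += err * xi / n
        a -= lr * ga
        b -= lr * gb
    return [_sigmoid(a + b * xi) for xi in x]

def att_nn_match(x, d, y):
    """ATT via 1:1 nearest-neighbour matching on the propensity score,
    with replacement, restricted to the region of common support."""
    ps = fit_propensity(x, d)
    treated = [i for i in range(len(x)) if d[i] == 1]
    controls = [i for i in range(len(x)) if d[i] == 0]
    lo = max(min(ps[i] for i in treated), min(ps[i] for i in controls))
    hi = min(max(ps[i] for i in treated), max(ps[i] for i in controls))
    gaps = []
    for i in treated:
        if not (lo <= ps[i] <= hi):   # drop treated units off the common support
            continue
        j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
        gaps.append(y[i] - y[j])
    return sum(gaps) / len(gaps)
```

On data with a constant treatment effect and exact covariate overlap, the matched ATT recovers the effect; the later steps the paper discusses (balance checks, standard errors, sensitivity analysis) are omitted here.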

5,510 citations


Journal ArticleDOI
TL;DR: This paper describes the Semi-Global Matching (SGM) stereo method, which uses a pixelwise, Mutual Information based matching cost for compensating radiometric differences of input images and demonstrates a tolerance against a wide range of radiometric transformations.
Abstract: This paper describes the semiglobal matching (SGM) stereo method. It uses a pixelwise, mutual information (MI)-based matching cost for compensating radiometric differences of input images. Pixelwise matching is supported by a smoothness constraint that is usually expressed as a global cost function. SGM performs a fast approximation by pathwise optimizations from all directions. The discussion also addresses occlusion detection, subpixel refinement, and multibaseline matching. Additionally, postprocessing steps for removing outliers, recovering from specific problems of structured environments, and the interpolation of gaps are presented. Finally, strategies for processing almost arbitrarily large images and fusion of disparity images using orthographic projection are proposed. A comparison on standard stereo images shows that SGM is among the currently top-ranked algorithms and is the best if subpixel accuracy is considered. The complexity is linear in the number of pixels and the disparity range, which results in a runtime of just 1-2 seconds on typical test images. An in-depth evaluation of the MI-based matching cost demonstrates a tolerance against a wide range of radiometric transformations. Finally, examples of reconstructions from huge aerial frame and pushbroom images demonstrate that the presented ideas work well on practical problems.
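The pathwise approximation at SGM's core can be illustrated for a single left-to-right path along one scanline. This is a sketch under simplified assumptions (arbitrary toy penalties P1 and P2, a plain cost array instead of the MI-based cost); the full method aggregates many such paths from all directions.

```python
def aggregate_path(cost, P1=1.0, P2=4.0):
    """One SGM path (left-to-right) over a single scanline.
    cost[x][d] is the pixelwise matching cost at pixel x, disparity d;
    returns the path-aggregated costs L[x][d]."""
    W, D = len(cost), len(cost[0])
    L = [row[:] for row in cost]
    for x in range(1, W):
        prev_min = min(L[x - 1])
        for d in range(D):
            best = min(
                L[x - 1][d],                                   # same disparity
                (L[x - 1][d - 1] + P1) if d > 0 else float('inf'),
                (L[x - 1][d + 1] + P1) if d + 1 < D else float('inf'),
                prev_min + P2,                                 # large disparity jump
            )
            L[x][d] = cost[x][d] + best - prev_min             # subtract to keep L bounded
    return L
```

The smoothness penalties let an ambiguous pixel (equal raw cost at every disparity) inherit the disparity of its neighbours, which is exactly the effect the global cost function is meant to achieve.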

3,302 citations


Posted Content
TL;DR: Matching is an R package which provides functions for multivariate and propensity score matching and for finding optimal covariate balance based on a genetic search algorithm.
Abstract: Matching is an R package which provides functions for multivariate and propensity score matching and for finding optimal covariate balance based on a genetic search algorithm. A variety of univariate and multivariate metrics to determine if balance actually has been obtained are provided. The underlying matching algorithm is written in C++, makes extensive use of system BLAS and scales efficiently with dataset size. The genetic algorithm which finds optimal balance is parallelized and can make use of multiple CPUs or a cluster of computers. A large number of options are provided which control exactly how the matching is conducted and how balance is evaluated.

1,293 citations


Book ChapterDOI
20 Oct 2008
TL;DR: This paper shows that the dramatically different approach of using priors dynamically to guide a feature-by-feature matching search can achieve global matching with far fewer image processing operations and lower overall computational cost.
Abstract: In the matching tasks which form an integral part of all types of tracking and geometrical vision, there are invariably priors available on the absolute and/or relative image locations of features of interest. Usually, these priors are used post-hoc in the process of resolving feature matches and obtaining final scene estimates, via 'first get candidate matches, then resolve' consensus algorithms such as RANSAC. In this paper we show that the dramatically different approach of using priors dynamically to guide a feature-by-feature matching search can achieve global matching with far fewer image processing operations and lower overall computational cost. Essentially, we put image processing into the loop of the search for global consensus. In particular, our approach is able to cope with significant image ambiguity thanks to a dynamic mixture of Gaussians treatment. In our fully Bayesian algorithm, the choice of the most efficient search action at each step is guided intuitively and rigorously by expected Shannon information gain. We demonstrate the algorithm in feature matching as part of a sequential SLAM system for 3D camera tracking. Robust, real-time matching can be achieved even in the previously unmanageable case of jerky, rapid motion necessitating weak motion modelling and large search regions.

1,111 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that the standard bootstrap is not valid for matching estimators, even in the simple case with a single continuous covariate where the estimator is root-N consistent and asymptotically normally distributed with zero asymptotic bias.
Abstract: Matching estimators are widely used in empirical economics for the evaluation of programs or treatments. Researchers using matching methods often apply the bootstrap to calculate the standard errors. However, no formal justification has been provided for the use of the bootstrap in this setting. In this article, we show that the standard bootstrap is, in general, not valid for matching estimators, even in the simple case with a single continuous covariate where the estimator is root-N consistent and asymptotically normally distributed with zero asymptotic bias. Valid inferential methods in this setting are the analytic asymptotic variance estimator of Abadie and Imbens (2006a) as well as certain modifications of the standard bootstrap, like the subsampling methods in Politis and Romano (1994).
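One of the valid alternatives the abstract names, subsampling in the style of Politis and Romano, is easy to sketch: recompute the estimator on without-replacement subsamples of size m much smaller than n, then rescale the spread by sqrt(m/n). This is an illustrative sketch of the general recipe, not the authors' exact procedure; the function and parameter names are invented.

```python
import random

def subsample_se(data, estimator, m, reps=200, seed=0):
    """Subsampling standard error for an estimator computed on `data`:
    draw `reps` without-replacement subsamples of size m << n, recompute
    the estimator on each, and rescale the subsample spread by sqrt(m/n)."""
    rng = random.Random(seed)
    n = len(data)
    stats = [estimator(rng.sample(data, m)) for _ in range(reps)]
    mean = sum(stats) / reps
    var_m = sum((s - mean) ** 2 for s in stats) / (reps - 1)
    return (m * var_m / n) ** 0.5
```

For a simple estimator like the sample mean, the subsampling standard error lands near the analytic sd/sqrt(n); the point of the paper is that for matching estimators this approach remains valid where the standard bootstrap does not.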

964 citations


Journal ArticleDOI
TL;DR: This article proposed a new calibration strategy of the standard search model that uses data on the cost of vacancy creation and cyclicality of wages to identify the two key parameters - the value of nonmarket activity and the bargaining weights.
Abstract: Recently, a number of authors have argued that the standard search model cannot generate the observed business-cycle-frequency fluctuations in unemployment and job vacancies, given shocks of a plausible magnitude. We propose a new calibration strategy of the standard model that uses data on the cost of vacancy creation and cyclicality of wages to identify the two key parameters - the value of nonmarket activity and the bargaining weights. Our calibration implies that the model is consistent with the data.

751 citations


Journal ArticleDOI
Miriam Bruhn1, David McKenzie1
TL;DR: In this article, the authors present new evidence on the randomization methods used in existing randomized experiments and carry out simulations to provide guidance for researchers. They find that many researchers are not controlling for the method of randomization in their analysis, which leads to tests with incorrect size and can result in lower power than if a pure random draw had been used.
Abstract: Randomized experiments are increasingly used in development economics, with researchers now facing the question of not just whether to randomize, but how to do so. Pure random assignment guarantees that the treatment and control groups will have identical characteristics on average, but in any particular random allocation, the two groups will differ along some dimensions. Methods used to pursue greater balance include stratification, pair-wise matching, and re-randomization. This paper presents new evidence on the randomization methods used in existing randomized experiments, and carries out simulations in order to provide guidance for researchers. Three main results emerge. First, many researchers are not controlling for the method of randomization in their analysis. The authors show this leads to tests with incorrect size, and can result in lower power than if a pure random draw was used. Second, they find that in samples of 300 or more, the different randomization methods perform similarly in terms of achieving balance on many future outcomes of interest. However, for very persistent outcome variables and in smaller sample sizes, pair-wise matching and stratification perform best. Third, the analysis suggests that on balance the re-randomization methods common in practice are less desirable than other methods, such as matching.
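Pair-wise matched randomization, one of the balance-pursuing methods the abstract lists, can be sketched in a few lines: sort units on a baseline covariate, form adjacent pairs, and flip a coin within each pair. A hypothetical sketch (assumes an even number of units and a single matching covariate), not the paper's simulation code.

```python
import random

def pairwise_matched_assignment(units, covariate, seed=0):
    """Pair-wise matched randomization: sort units on a baseline covariate,
    pair adjacent units, and randomize treatment within each pair.
    Returns {unit: 0 or 1}. Assumes len(units) is even."""
    rng = random.Random(seed)
    order = sorted(units, key=covariate)
    assign = {}
    for i in range(0, len(order), 2):
        a, b = order[i], order[i + 1]
        if rng.random() < 0.5:
            assign[a], assign[b] = 1, 0
        else:
            assign[a], assign[b] = 0, 1
    return assign
```

By construction every pair contributes exactly one treated and one control unit, which is the balance property the paper compares against stratification and re-randomization; its point about analysis is that the pair structure must then be controlled for (e.g. pair fixed effects).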

712 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compare causal estimates from a randomized experiment with those from an observational study sharing the same treatment group, with or without random assignment, and when statistical adjustments for selection are made in the group from which random assignment is absent.
Abstract: This paper analyzes 12 recent within-study comparisons contrasting causal estimates from a randomized experiment with those from an observational study sharing the same treatment group. The aim is to test whether different causal estimates result when a counterfactual group is formed, either with or without random assignment, and when statistical adjustments for selection are made in the group from which random assignment is absent. We identify three studies comparing experiments and regression-discontinuity (RD) studies. They produce quite comparable causal estimates at points around the RD cutoff. We identify three other studies where the quasi-experiment involves careful intact group matching on the pretest. Despite the logical possibility of hidden bias in this instance, all three cases also reproduce their experimental estimates, especially if the match is geographically local. We then identify two studies where the treatment and nonrandomized comparison groups manifestly differ at pretest but where the selection process into treatment is completely or very plausibly known. Here too, experimental results are recreated. Two of the remaining studies result in corresponding experimental and nonexperimental results under some circumstances but not others, while two others produce different experimental and nonexperimental estimates, though in each case the observational study was poorly designed and analyzed. Such evidence is more promising than what was achieved in past within-study comparisons, most involving job training. Reasons for this difference are discussed. © 2008 by the Association for Public Policy Analysis and Management.

593 citations


Journal ArticleDOI
TL;DR: In this paper, the Gale-Shapley algorithm was used to predict stable matches among the users of a major online dating service, and the estimated mate preferences, in conjunction with the algorithm, can explain the matching patterns in offline, real-world marriages.
Abstract: This paper studies the economics of match formation using a novel data set obtained from a major online dating service. Using detailed information on the users' attributes and interactions, we estimate a model of mate preferences. Based on the estimated preferences, we use the Gale-Shapley algorithm to predict the stable matches among the users of the dating site. Comparing the predicted and observed matching patterns, we find that the Gale-Shapley model explains the matches achieved by the online dating market well, and that the matches are approximately efficient (within the class of stable matches). We then explore whether the estimated mate preferences, in conjunction with the Gale-Shapley algorithm, can explain the matching patterns in offline, real-world marriages. We find that we can predict assortative mating patterns that are similar to those observed in marriages. As the Gale-Shapley algorithm predicts match outcomes in the absence of search costs, we conclude that mate preferences, not search frictions, explain much of the strong degree of sorting along various attributes in marriages.
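The Gale-Shapley deferred-acceptance algorithm the paper applies can be sketched directly; this is the textbook men-proposing version, not the authors' estimation code, and the dictionary-based interface is an assumption of this sketch.

```python
def gale_shapley(men_prefs, women_prefs):
    """Men-proposing deferred acceptance. Preference lists rank partners,
    best first; returns a stable matching as {man: woman}."""
    # Precompute each woman's ranking of the men for O(1) comparisons.
    rank = {w: {m: r for r, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    free = list(men_prefs)           # men with no current partner
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}                     # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        cur = engaged.get(w)
        if cur is None:
            engaged[w] = m           # w was free: accept provisionally
        elif rank[w][m] < rank[w][cur]:
            engaged[w] = m           # w prefers the new proposer
            free.append(cur)
        else:
            free.append(m)           # w rejects m; he proposes again later
    return {m: w for w, m in engaged.items()}
```

The output is stable: no man and woman both prefer each other to their assigned partners. Because the procedure involves no search costs, comparing its predictions with observed matches is what lets the authors separate preferences from frictions.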

492 citations


Journal ArticleDOI
TL;DR: In this article, a simulation-based sensitivity analysis was proposed to evaluate the effect of temporary work agency (TWA) assignments on the probability of finding a stable job in the USA and Europe.
Abstract: The diffusion of temporary work agency (TWA) jobs has led to a harsh policy debate and ambiguous empirical evidence. Results for the USA, based on quasi-experimental evidence, suggest that a TWA assignment decreases the probability of finding a stable job, while results for Europe, based on the conditional independence assumption (CIA), typically reach opposite conclusions. Using data for two Italian regions, we rely on a matching estimator to show that TWA assignments can be an effective springboard to permanent employment. We also propose a simulation-based sensitivity analysis, which highlights that only for one of these two regions are our results robust to specific failures of the CIA. We conclude that European studies based on the CIA should not be automatically discarded, but should be put under the scrutiny of a sensitivity analysis like the one we propose. Copyright © 2008 John Wiley & Sons, Ltd.

460 citations


Proceedings ArticleDOI
09 Jun 2008
TL;DR: This paper presents a formal evaluation model that offers precise semantics for this new class of queries and a query evaluation framework permitting optimizations in a principled way and further analyzes the runtime complexity of query evaluation using this model and develops a suite of techniques that improve runtime efficiency by exploiting sharing in storage and processing.
Abstract: Pattern matching over event streams is increasingly being employed in many areas including financial services, RFID-based inventory management, click stream analysis, and electronic health systems. While regular expression matching is well studied, pattern matching over streams presents two new challenges: languages for pattern matching over streams are significantly richer than languages for regular expression matching, and efficient evaluation of these pattern queries over streams requires new algorithms and optimizations, since the conventional wisdom for stream query processing (i.e., using selection-join-aggregation) is inadequate. In this paper, we present a formal evaluation model that offers precise semantics for this new class of queries and a query evaluation framework permitting optimizations in a principled way. We further analyze the runtime complexity of query evaluation using this model and develop a suite of techniques that improve runtime efficiency by exploiting sharing in storage and processing. Our experimental results provide insights into the various factors affecting runtime performance and demonstrate the significant performance gains of our sharing techniques.
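To see why sequence patterns over streams are richer than regular expression matching, consider the simplest case: a SEQ pattern that must find every in-order combination of event types, not just one match. The sketch below is a deliberately naive enumeration under assumed semantics; real engines compile patterns into automata and share storage and processing, which is precisely what the paper's techniques optimize.

```python
def seq_matches(stream, pattern):
    """Naive evaluation of a SEQ(...) pattern over an event stream:
    return every tuple of indices whose event types match `pattern`
    in temporal order. stream items are (event_type, payload) pairs."""
    results = []

    def extend(start, p, acc):
        if p == len(pattern):        # all pattern components matched
            results.append(tuple(acc))
            return
        for i in range(start, len(stream)):
            if stream[i][0] == pattern[p]:
                extend(i + 1, p + 1, acc + [i])

    extend(0, 0, [])
    return results
```

Even this two-component example yields multiple overlapping matches per stream, illustrating why selection-join-aggregation plans and plain regex engines are a poor fit.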

Book ChapterDOI
09 Nov 2008
TL;DR: The basics of ontology matching are provided with the help of examples and general trends of the field are presented, thereby aiming to direct research into the critical path and to facilitate progress in the field.
Abstract: This paper aims at analyzing the key trends and challenges of the ontology matching field. The main motivation behind this work is the fact that, despite the many component matching solutions that have been developed so far, there is no integrated solution that is a clear success, that is robust enough to be the basis for future development, and that is usable by non-expert users. In this paper we first provide the basics of ontology matching with the help of examples. Then, we present general trends of the field and discuss ten challenges for ontology matching, thereby aiming to direct research into the critical path and to facilitate progress of the field.

Journal ArticleDOI
TL;DR: The Cooperative Congressional Election Study (CCES) was the largest study of Congressional elections ever fielded in the US; it was a joint venture of 38 universities and over 100 political scientists.
Abstract: In 2006 Polimetrix, Inc. of Palo Alto, CA, fielded the Cooperative Congressional Election Study, the largest study of Congressional elections ever fielded in the US. The project was a joint venture of 38 universities and over 100 political scientists. In this paper, we detail the design and execution of the project, with special attention to the method by which the sample was generated. We show that the estimates from the Common Content of CCES outperform conventional estimates based on RDD phone surveys. We also argue that opt-in panels, internet surveys, and cooperative ventures like CCES provide cost-effective alternatives for social scientists under certain conditions. These types of surveys can provide reductions in RMSE over conventional methods when sample matching is used to ameliorate the biases that come with sampling from an opt-in panel.

Journal ArticleDOI
TL;DR: The authors used propensity-score matching to test the conventional claim that higher education causes political participation and found that after taking into account pre-adult experiences and influences in place during the senior year of high school, the effects of higher education per se on participation disappear.
Abstract: The consensus in the empirical literature on political participation is that education positively correlates with political participation. Theoretical explanations posit that education confers participation-enhancing benefits that in and of themselves cause political activity. As most of the variation in educational attainment arises between high school completion and decisions to enter postsecondary institutions, we focus our inquiry on estimating the effect of higher education on political participation. Our primary purpose is to test the conventional claim that higher education causes political participation. We utilize propensity-score matching to address the nonrandom assignment process that characterizes the acquisition of higher education. After the propensity-score matching process takes into account preadult experiences and influences in place during the senior year of high school, the effects of higher education per se on participation disappear. Our results thus call for a reconsideration of ho...

Journal ArticleDOI
TL;DR: This article presents a nonparametric alternative to the Blinder-Oaxaca decomposition that does not require the estimation of earnings equations and divides the gap into four additive elements: two are analogous to the elements of the BO decomposition, while the other two account for differences in the supports.
Abstract: This paper presents a methodology that uses matching comparisons to explain gender wage differences. The approach emphasizes gender differences in the supports of the distributions of observable characteristics and provides insights into the distribution of unexplained gender pay differences. This nonparametric alternative to the Blinder-Oaxaca (BO) decomposition does not require the estimation of earnings equations and divides the gap into four additive elements. Two of these are analogous to the elements of the BO decomposition (but computed only over the common support of the distributions of characteristics), while the other two account for differences in the supports.
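Schematically, the four-part split described above can be written as follows (the notation here is assumed for illustration, not taken from the paper):

```latex
\Delta = \Delta_M + \Delta_F + \Delta_X + \Delta_0
```

where \Delta_M and \Delta_F capture the parts of the gap attributable to men and women outside the common support, \Delta_X the differences in the distribution of observable characteristics over the common support, and \Delta_0 the unexplained component; \Delta_X and \Delta_0 are the two terms analogous to the Blinder-Oaxaca elements.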

Journal ArticleDOI
TL;DR: A novel minutiae-based fingerprint matching algorithm that ranks 1st on DB3, the most difficult database in FVC2002, and on average ranks 2nd across all 4 databases.

Journal ArticleDOI
TL;DR: This paper proposes new methodology for the detection and matching of salient points over several views of an object; each salient point is modelled by a Hidden Markov Model, trained in an unsupervised way using contextual 3D neighborhood information, thus providing a robust and invariant point signature.
Abstract: This paper proposes a new methodology for the detection and matching of salient points over several views of an object. The process is composed of three main phases. In the first step, detection is carried out by adopting a new perceptually-inspired 3D saliency measure. Such a measure allows the detection of a few sparse salient points that characterize distinctive portions of the surface. In the second step, a statistical learning approach is considered to describe salient points across different views. Each salient point is modelled by a Hidden Markov Model (HMM), which is trained in an unsupervised way by using contextual 3D neighborhood information, thus providing a robust and invariant point signature. Finally, in the third step, matching among points of different views is performed by evaluating a pairwise similarity measure among HMMs. An extensive and comparative experimental session has been carried out, considering real objects acquired by a 3D scanner from different points of view, where objects come from standard 3D databases. Results are promising, as the detection of salient points is reliable, and the matching is robust and accurate.

Posted Content
TL;DR: A theoretical analysis of sample selection bias correction based on the novel concept of distributional stability which generalizes the existing concept of point-based stability and can be used to analyze other importance weighting techniques and their effect on accuracy when using a distributionally stable algorithm.
Abstract: This paper presents a theoretical analysis of sample selection bias correction. The sample bias correction technique commonly used in machine learning consists of reweighting the cost of an error on each training point of a biased sample to more closely reflect the unbiased distribution. This relies on weights derived by various estimation techniques based on finite samples. We analyze the effect of an error in that estimation on the accuracy of the hypothesis returned by the learning algorithm for two estimation techniques: a cluster-based estimation technique and kernel mean matching. We also report the results of sample bias correction experiments with several data sets using these techniques. Our analysis is based on the novel concept of distributional stability which generalizes the existing concept of point-based stability. Much of our work and proof techniques can be used to analyze other importance weighting techniques and their effect on accuracy when using a distributionally stable algorithm.
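Of the two estimation techniques the paper analyzes, the cluster-based one is the simpler to sketch: weight each biased training point by the ratio of the cluster's frequency in an unbiased sample to its frequency in the biased sample. This is an illustrative sketch of that idea with invented names, not the paper's formulation (and kernel mean matching, the other technique, is omitted).

```python
from collections import Counter

def cluster_weights(biased, unbiased, cluster):
    """Cluster-based importance weights: w(c) = Pr_unbiased(c) / Pr_biased(c),
    estimated from finite samples. `cluster` maps a point to its cluster id;
    returns one weight per point of the biased sample."""
    pb = Counter(cluster(x) for x in biased)     # cluster counts, biased sample
    pu = Counter(cluster(x) for x in unbiased)   # cluster counts, unbiased sample
    nb, nu = len(biased), len(unbiased)
    return [(pu[cluster(x)] / nu) / (pb[cluster(x)] / nb) for x in biased]
```

Reweighting the biased sample by these factors recovers unbiased expectations when the cluster structure captures the selection mechanism; the paper's distributional-stability analysis quantifies how errors in these estimated weights propagate to the learned hypothesis.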

Journal ArticleDOI
TL;DR: Full matching has been shown to be particularly effective at reducing bias due to observed confounding variables and is illustrated using data from the Woodlawn Study, examining the relationship between adolescent marijuana use and adult outcomes.
Abstract: Matching methods such as nearest neighbor propensity score matching are increasingly popular techniques for controlling confounding in nonexperimental studies. However, simple k:1 matching methods, which select k well-matched comparison individuals for each treated individual, are sometimes criticized for being overly restrictive and discarding data (the unmatched comparison individuals). The authors illustrate the use of a more flexible method called full matching. Full matching makes use of all individuals in the data by forming a series of matched sets in which each set has either 1 treated individual and multiple comparison individuals or 1 comparison individual and multiple treated individuals. Full matching has been shown to be particularly effective at reducing bias due to observed confounding variables. The authors illustrate this approach using data from the Woodlawn Study, examining the relationship between adolescent marijuana use and adult outcomes.

Journal ArticleDOI
TL;DR: In this article, the authors present a case study of their experience using PSM, under seemingly ideal circumstances, for the evaluation of the State Partnership Initiative employment promotion program, and they find that PSM produced incorrect impact estimates when compared with a randomized design.
Abstract: Over the past 25 years, evaluators of social programs have searched for nonexperimental methods that can substitute effectively for experimental ones. Recently, the spotlight has focused on one method, propensity score matching (PSM), as the suggested approach for evaluating employment and education programs. We present a case study of our experience using PSM, under seemingly ideal circumstances, for the evaluation of the State Partnership Initiative employment promotion program. Despite ideal conditions and the passing of statistical tests suggesting that the matching procedure had worked, we find that PSM produced incorrect impact estimates when compared with a randomized design. Based on this experience, we caution practitioners about the risks of implementing PSM-based designs.

Journal ArticleDOI
TL;DR: In this article, the authors developed formulae that show that the estimated correlation is biased downwards if there is true positive assortative matching and when any conditioning covariates are uncorrelated with the firm and worker fixed effects.
Abstract: Summary. In the empirical literature on assortative matching using linked employer–employee data, unobserved worker quality appears to be negatively correlated with unobserved firm quality. We show that this can be caused by standard estimation error. We develop formulae that show that the estimated correlation is biased downwards if there is true positive assortative matching and when any conditioning covariates are uncorrelated with the firm and worker fixed effects. We show that this bias is bigger the fewer movers there are in the data, which is ‘limited mobility bias’. This result applies to any two-way (or higher) error components model that is estimated by fixed effects methods. We apply these bias corrections to a large German linked employer–employee data set. We find that, although the biases can be considerable, they are not sufficiently large to remove the negative correlation entirely.

Book ChapterDOI
12 Oct 2008
TL;DR: In this article, the authors present a theoretical analysis of sample selection bias correction using the concept of distributional stability, which generalizes the existing concept of point-based stability and can be used to analyze other importance weighting techniques and their effect on accuracy when using a distributionally stable algorithm.
Abstract: This paper presents a theoretical analysis of sample selection bias correction. The sample bias correction technique commonly used in machine learning consists of reweighting the cost of an error on each training point of a biased sample to more closely reflect the unbiased distribution. This relies on weights derived by various estimation techniques based on finite samples. We analyze the effect of an error in that estimation on the accuracy of the hypothesis returned by the learning algorithm for two estimation techniques: a cluster-based estimation technique and kernel mean matching. We also report the results of sample bias correction experiments with several data sets using these techniques. Our analysis is based on the novel concept of distributional stability, which generalizes the existing concept of point-based stability. Much of our work and proof techniques can be used to analyze other importance weighting techniques and their effect on accuracy when using a distributionally stable algorithm.

Journal ArticleDOI
TL;DR: In this article, the authors considered the response of an 8-storey reinforced concrete structure to accelerograms selected, linearly scaled or spectrally matched using five different techniques.
Abstract: Selecting, scaling and matching accelerograms are critically important to engineering design and assessment, enabling structural response to be determined with greater confidence and through fewer analyses than if unscaled accelerograms are employed. This paper considers the response of an 8-storey multiple-degree-of-freedom reinforced concrete structure to accelerograms selected, linearly scaled or spectrally matched using five different techniques. The first method consists of selecting real records on the basis of seismological characteristics, while the remaining methods make an initial selection on the basis of magnitude and spectral shape before (1) scaling to the target spectral acceleration at the initial period; (2) scaling to the target spectrum over a range of periods; (3) using wavelet adjustments to match the target spectrum and (4) using wavelet adjustments to match multiple target spectra for multiple damping ratios. The analyses indicate that the number of records required to obtain a stable estimate of the response decreases drastically as one moves through these methods. The exact number varies among damage measures and is related to the predictability of the damage measure. For measures such as peak roof and inter-storey drift, member end rotation and the Park and Ang damage index, as few as one or two records are required to estimate the response to within ±5% (for a 64% confidence level) if matching to multiple damping ratios is conducted. Bias checks are made using predictive equations of the expected response derived from the results of 1656 nonlinear time-domain analyses of the structure under the action of unscaled accelerograms. Copyright © 2008 John Wiley & Sons, Ltd.
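The first two scaling methods compared in the paper reduce to one-line formulas: a single scale factor anchored at the initial period, or a least-squares factor over a period range. The sketch below assumes spectra represented as period-to-Sa dictionaries; names and data are illustrative, not from the paper.

```python
def scale_to_period(record_sa, target_sa, T1):
    """Method (1): scale factor that makes the record's spectral
    acceleration equal the target at the structure's initial period T1."""
    return target_sa[T1] / record_sa[T1]

def scale_over_range(record_sa, target_sa, periods):
    """Method (2): least-squares scale factor over a range of periods,
    minimising sum over T of (a * Sa_rec(T) - Sa_tgt(T))^2."""
    num = sum(record_sa[T] * target_sa[T] for T in periods)
    den = sum(record_sa[T] ** 2 for T in periods)
    return num / den
```

Methods (3) and (4), the wavelet-based spectral matching variants that the paper finds need the fewest records for a stable response estimate, modify the accelerogram itself and cannot be reduced to a single factor.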

Journal ArticleDOI
TL;DR: In this article, the authors generalize the theory of matching in two-sided markets and give sufficient conditions for the existence of stable networks and present an algorithm for finding two of them.
Abstract: This paper studies matching in vertical networks, generalizing the theory of matching in two-sided markets. It gives sufficient conditions for the existence of stable networks and presents an algorithm for finding two of them. One is the best stable network for the agents on the "upstream" end of an industry. The other is best for the agents on the "downstream" end. The paper describes several properties of the set of stable networks and discusses applications of the theory to the design of matching markets with more than two types of agents and to the empirical analysis of supply chains. (JEL C78, D85, L14)

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the estimation of treatment effects when the timing of treatment is the outcome of a stochastic process and suggest easy-to-use nonparametric survival function matching estimators that can be used to estimate the time profile of the treatment.
Abstract: We discuss estimation of treatment effects when the timing of treatment is the outcome of a stochastic process. We show that the duration framework in discrete time provides a fertile ground for effect evaluations. We suggest easy-to-use nonparametric survival function matching estimators that can be used to estimate the time profile of the treatment. We study the small-sample properties of the proposed estimators and apply one of them to evaluate the effects of an employment subsidy program. We find that the longer-run program effects are positive. The estimated time profile suggests locking-in effects while participating in the program and a significant upward jump in the employment hazard on program completion.

Book ChapterDOI
TL;DR: In this paper, the employment effects of job-creation schemes (JCS) on the participating individuals in Germany are evaluated and the overall results are rather discouraging, since employment effects are negative or insignificant for most of the analysed groups.
Abstract: In this chapter, we evaluate the employment effects of job-creation schemes (JCS) on the participating individuals in Germany. JCS are a major element of active labour market policy in Germany and are targeted at long-term unemployed and other hard-to-place individuals. Access to very informative administrative data of the Federal Employment Agency justifies the application of a matching estimator and allows us to account for individual (group-specific) and regional effect heterogeneity. We extend previous studies for Germany in four directions. First, we are able to evaluate the effects on regular (unsubsidised) employment. Second, we observe the outcomes of participants and non-participants for nearly three years after the programme starts and can therefore analyse medium-term effects. Third, we test the sensitivity of the results with respect to various decisions that have to be made during implementation of the matching estimator. Finally, we check whether a possible occurrence of a specific form of ‘unobserved heterogeneity’ distorts our interpretation. The overall results are rather discouraging, since the employment effects are negative or insignificant for most of the analysed groups. One exception is long-term unemployed individuals, who benefit from participation at the end of our observation period. Hence, one policy implication is to target the programmes more closely at this problem group.

Patent
James S. Rosen1
30 Jul 2008
TL;DR: In this paper, location-related data and other profile characteristics are used for promotion and for matching of businesses, venues and other entities with user specified criteria, such as location and gender.
Abstract: Systems and methods for profile matching and promotion. Location-related data and other profile characteristics are used for promotion and for matching of businesses, venues and other entities with user specified criteria.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the impact of stone bunds on the value of crop production per hectare in low and high rainfall areas of the Ethiopian highlands using cross-sectional data from more than 900 households, with multiple plots per household.

Journal ArticleDOI
TL;DR: A new approach for image matching and the related software package is developed and used in 3D tree modelling, showing results from analogue and digital aerial images and high‐resolution satellite images (IKONOS).
Abstract: Image matching is a key procedure in the process of generation of Digital Surface Models (DSM). We have developed a new approach for image matching and the related software package. This technique has proved its good performance in many applications. Here, we demonstrate its use in 3D tree modelling. After a brief description of our image matching technique, we show results from analogue and digital aerial images and high-resolution satellite images (IKONOS). In some cases, comparisons with manual measurements and/or airborne laser data have been performed. The evaluation of the results, qualitative and quantitative, indicate the very good performance of our matcher. Depending on the data acquisition parameters, the photogrammetric DSM can be denser than a DSM generated by laser, and its accuracy may be better than that from laser, as in these investigations. The tree canopy is well modelled, without smoothing of small details and avoiding the canopy penetration occurring with laser. Depending on the image scale, not only dense forest areas but also individual trees can be modelled.
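Correlation-based matching of the kind underlying DSM generation can be illustrated with a one-dimensional normalized cross-correlation search along a scanline. This is a didactic sketch, not the authors' matcher, and the intensity values are invented:

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_match(template, scanline):
    """Slide the template along a scanline and return the offset with the
    highest NCC score -- the core of correlation-based image matching."""
    scores = [ncc(template, scanline[i:i + len(template)])
              for i in range(len(scanline) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

template = [10, 50, 20]                 # patch from the left image
scanline = [5, 5, 11, 52, 21, 5, 5]     # epipolar line in the right image
offset = best_match(template, scanline)
```

The offset of the best match is the disparity, which triangulation converts into a surface height; production matchers such as the one described above add multi-image geometry, pyramids and robust constraints on top of this core.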

Journal ArticleDOI
TL;DR: In this article, the effect of micro-finance on expenditure per capita, supply of labour, and level of school enrolment for boys and girls was investigated using the latest developments from the evaluation literature, namely the technique of matching.
Abstract: Using the latest developments from the evaluation literature, namely the technique of matching, this paper shows a positive, but lower than previously thought, effect of microfinance on expenditure per capita, supply of labour, and level of school enrolment for boys and girls. For instance, participants spend 3 per cent more on average than non-participants in control villages. This paper also takes into account repayment delays to calculate the cost of credit provision. It shows how a better investigation at the individual level of the benefits brought and the cost borne could help microfinance institutions to better select their customers. JEL Classification: C14, D10, G21, I38, O12, O16
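The matching technique invoked here (and surveyed in the propensity score matching paper above) can be sketched at its simplest as one-to-one nearest-neighbour matching on a precomputed propensity score. The scores would normally come from a logit or probit model of participation; all names and numbers below are illustrative:

```python
def att_nearest_neighbour(treated, controls):
    """Average treatment effect on the treated (ATT) via 1-nearest-
    neighbour matching on the propensity score. Each element is a
    (propensity_score, outcome) pair; controls are matched with
    replacement."""
    diffs = []
    for score, outcome in treated:
        _, y_match = min(controls, key=lambda c: abs(c[0] - score))
        diffs.append(outcome - y_match)
    return sum(diffs) / len(diffs)

# Illustrative (score, expenditure per capita) pairs:
treated = [(0.8, 105.0), (0.6, 98.0)]
controls = [(0.75, 100.0), (0.55, 96.0), (0.2, 90.0)]
att = att_nearest_neighbour(treated, controls)
```

Real applications add a common-support restriction, balance checks and proper standard errors, as the implementation survey at the top of this page discusses.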