Institution

Institute for the Study of Labor

Nonprofit, Bonn, Germany
About: The Institute for the Study of Labor is a nonprofit organization based in Bonn, Germany. It is known for its research contributions in the topics of Wage and Unemployment. The organization has 2039 authors who have published 13475 publications receiving 439376 citations.
Topics: Wage, Unemployment, Earnings, Population, Productivity


Papers
Report
TL;DR: A conceptual and empirical overview of this evolution can be found in this paper, where the authors sketch the historical thinking about machine displacement of human labor and then consider the contemporary incarnation of this displacement, labor market polarization, meaning the simultaneous growth of high-education, high-wage and low-education, low-wage jobs.
Abstract: In 1966, the philosopher Michael Polanyi observed, "We can know more than we can tell... The skill of a driver cannot be replaced by a thorough schooling in the theory of the motorcar; the knowledge I have of my own body differs altogether from the knowledge of its physiology." Polanyi's observation largely predates the computer era, but the paradox he identified--that our tacit knowledge of how the world works often exceeds our explicit understanding--foretells much of the history of computerization over the past five decades. This paper offers a conceptual and empirical overview of this evolution. I begin by sketching the historical thinking about machine displacement of human labor, and then consider the contemporary incarnation of this displacement--labor market polarization, meaning the simultaneous growth of high-education, high-wage and low-education, low-wage jobs--a manifestation of Polanyi's paradox. I discuss both the explanatory power of the polarization phenomenon and some key puzzles that confront it. I then reflect on how recent advances in artificial intelligence and robotics should shape our thinking about the likely trajectory of occupational change and employment growth. A key observation of the paper is that journalists and expert commentators overstate the extent of machine substitution for human labor and ignore the strong complementarities. The challenges to substituting machines for workers in tasks requiring adaptability, common sense, and creativity remain immense. Contemporary computer science seeks to overcome Polanyi's paradox by building machines that learn from human examples, thus inferring the rules that we tacitly apply but do not explicitly understand. Institutional subscribers to the NBER working paper series, and residents of developing countries may download this paper without additional charge at www.nber.org.

189 citations

Posted Content
TL;DR: This paper examined the bias associated with alternative estimation procedures for estimating the marginal effects of covariates on time use and found that the estimated marginal effects from Tobit are biased and that the extent of the bias varies with the fraction of zero-value observations.
Abstract: Time-use surveys collect very detailed information about individuals' activities over a short period of time, typically one day. As a result, a large fraction of observations have values of zero for the time spent in many activities, even for individuals who do the activity on a regular basis. For example, it is safe to assume that all parents do at least some childcare, but a relatively large fraction report no time spent in childcare on their diary day. Because of the large number of zeros, Tobit would seem to be the natural approach. However, it is important to recognize that the zeros in time-use data arise from a mismatch between the reference period of the data (the diary day) and the period of interest, which is typically much longer. Thus it is not clear that Tobit is appropriate. In this study, I examine the bias associated with alternative estimation procedures for estimating the marginal effects of covariates on time use. I begin by adapting the infrequency of purchase model, which is typically used to analyze expenditures, to time-diary data and showing that OLS estimates are unbiased. Next, using simulated data, I examine the bias associated with three procedures that are commonly used to analyze time-diary data (Tobit, the Cragg (1971) two-part model, and OLS) under a number of alternative assumptions about the data-generating process. I find that the estimated marginal effects from Tobit are biased and that the extent of the bias varies with the fraction of zero-value observations. The two-part model performs significantly better, but generates biased estimates in certain circumstances. Only OLS generates unbiased estimates in all of the simulations considered here.

188 citations
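The infrequency-of-purchase logic above--diary-day zeros are mean-preserving noise around long-run time use, so OLS still recovers the marginal effect--can be illustrated with a toy simulation. All parameter values below are invented for illustration and are not taken from the paper:

```python
import random

random.seed(0)

# Toy infrequency-of-purchase setup: each person spends a + b*x minutes per
# day in an activity on average, but only does it on a fraction p of diary
# days; conditional on doing it they spend (a + b*x)/p minutes, so the
# diary-day mean still equals the long-run mean even though many diary
# days record a zero.
a, b, p = 30.0, 10.0, 0.4   # assumed intercept, slope, participation rate
n = 200_000

xs, ys = [], []
for _ in range(n):
    x = random.random()                      # covariate of interest
    long_run = a + b * x                     # long-run minutes per day
    diary = long_run / p if random.random() < p else 0.0
    xs.append(x)
    ys.append(diary)

# OLS slope on the zero-heavy diary data: cov(x, y) / var(x)
mx = sum(xs) / n
my = sum(ys) / n
cov = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / n
var = sum((xi - mx) ** 2 for xi in xs) / n
slope = cov / var

print(round(slope, 1))  # close to the true marginal effect b = 10
```

Roughly 60% of the simulated diary days are zeros, yet the OLS slope is centered on b; a censored-data estimator that treats those zeros as corner solutions would misstate the marginal effect on long-run time use.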

Posted Content
TL;DR: The authors evaluated the impact of KIPP Academy Lynn, a KIPP school in Lynn, Massachusetts that typifies the KIPP approach, and found that the average reading gains were driven almost completely by SPED and LEP students, whose reading scores rose by roughly 0.35 standard deviations for each year spent at KIPP Lynn.
Abstract: The nation's largest charter management organization is the Knowledge is Power Program (KIPP). KIPP schools are emblematic of the No Excuses approach to public education, a highly standardized and widely replicated charter model that features a long school day, an extended school year, selective teacher hiring, strict behavior norms, and a focus on traditional reading and math skills. No Excuses charter schools are sometimes said to focus on relatively motivated high achievers at the expense of students who are most difficult to teach, including limited English proficiency (LEP) and special education (SPED) students, as well as students with low baseline achievement levels. We use applicant lotteries to evaluate the impact of KIPP Academy Lynn, a KIPP school in Lynn, Massachusetts that typifies the KIPP approach. Our analysis focuses on special needs students that may be underserved. The results show average achievement gains of 0.36 standard deviations in math and 0.12 standard deviations in reading for each year spent at KIPP Lynn, with the largest gains coming from the LEP, SPED, and low-achievement groups. The average reading gains are driven almost completely by SPED and LEP students, whose reading scores rise by roughly 0.35 standard deviations for each year spent at KIPP Lynn.

188 citations
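The lottery-based design can be sketched as a standard offer/take-up comparison. Apart from the 0.36 sd math effect quoted above, every number below is invented; attendance is collapsed to a binary attend/not-attend rather than years attended, and this is the generic intent-to-treat and Wald/IV logic rather than the paper's actual estimator:

```python
import random

random.seed(1)

# Hypothetical lottery experiment: winning the lottery raises the chance of
# attending, and attending shifts test scores by `effect` standard deviations.
effect = 0.36
n = 100_000
win, attend, score = [], [], []
for _ in range(n):
    z = random.random() < 0.5                        # lottery offer
    d = random.random() < (0.85 if z else 0.05)      # attends the school
    y = random.gauss(0.0, 1.0) + (effect if d else 0.0)
    win.append(z); attend.append(d); score.append(y)

def mean(vals):
    return sum(vals) / len(vals)

y1 = mean([y for y, z in zip(score, win) if z])      # winners' mean score
y0 = mean([y for y, z in zip(score, win) if not z])  # losers' mean score
d1 = mean([d for d, z in zip(attend, win) if z])     # winners' attendance rate
d0 = mean([d for d, z in zip(attend, win) if not z]) # losers' attendance rate

itt = y1 - y0              # intent-to-treat effect of winning the lottery
wald = itt / (d1 - d0)     # implied effect of attending (Wald / IV estimate)
print(round(wald, 2))
```

Because the lottery is random, comparing winners to losers is unconfounded even though attendance itself is not; dividing the ITT by the attendance gap rescales it into a per-attender effect.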

Posted Content
TL;DR: In this paper, Chang and Winters use a simple strategic pricing game in segmented markets to measure the effects of MERCOSUR on the pricing of nonmember exports to the regional trading bloc.
Abstract: Price data on exports to Brazil from countries excluded from MERCOSUR show that preferential trading agreements hurt nonmember countries by compelling them to reduce their prices to meet competition from suppliers within the regional trading bloc. The welfare effects of preferential trading agreements are most directly linked to changes in trade prices - that is, the terms of trade. Chang and Winters use a simple strategic pricing game in segmented markets to measure the effects of MERCOSUR on the pricing of nonmember exports to the regional trading bloc. Working with detailed data on unit values and tariffs, they find that the creation of MERCOSUR is associated with significant declines in the prices of nonmembers' exports to the bloc. These can be explained largely by tariff preferences offered to a country's partners. Focusing on the Brazilian market (by far the largest in MERCOSUR), they show that nonmembers' export prices to Brazil respond to both most-favored-nation and preferential tariffs. Preferential tariffs induce reductions in nonmember export prices. This paper - a product of Trade, Development Research Group - is part of a larger effort in the group to understand the effects of regional integration. The authors may be contacted at wchang@worldbank.org or l.a.winters@sussex.ac.uk.

188 citations
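The pricing mechanism--a tariff preference for a member country depresses the producer price a nonmember can charge--can be reproduced in a toy Bertrand game with differentiated products. All functional forms and parameter values below are made up; this is a sketch of the mechanism, not Chang and Winters' empirical model:

```python
# Toy Bertrand game with differentiated products and tariffs. The consumer
# price is the producer price plus the tariff; demand for firm i is
# q_i = 1 - b*(p_i + t_i) + s*(p_j + t_j), and each firm maximizes
# (p_i - c) * q_i, giving the best response used below.

def equilibrium(t_member, t_nonmember, b=1.0, s=0.5, c=0.2):
    """Iterate best responses p_i = (1 + b*c - b*t_i + s*(p_j + t_j)) / (2b)
    to the Nash equilibrium (a contraction since s / (2b) < 1)."""
    p_m = p_n = c
    for _ in range(200):
        p_m = (1 + b * c - b * t_member + s * (p_n + t_nonmember)) / (2 * b)
        p_n = (1 + b * c - b * t_nonmember + s * (p_m + t_member)) / (2 * b)
    return p_m, p_n

# The nonmember faces the MFN tariff in both scenarios; the member's tariff
# drops from the MFN rate to zero once the preferential agreement is in place.
_, p_n_before = equilibrium(t_member=0.2, t_nonmember=0.2)
_, p_n_after = equilibrium(t_member=0.0, t_nonmember=0.2)

print(p_n_after < p_n_before)  # True: the preference depresses nonmember prices
```

The preference lowers the member's tariff-inclusive consumer price, which steals demand from the nonmember and pushes its best-response producer price down, matching the price declines the paper documents.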

Posted Content
TL;DR: In this article, the authors develop two nonparametric tests for the null hypothesis that the treatment has a zero average effect for any subpopulation defined by covariates, and they derive tests that are straightforward to implement.
Abstract: A large part of the recent literature on program evaluation has focused on estimation of the average effect of the treatment under assumptions of unconfoundedness or ignorability following the seminal work by Rubin (1974) and Rosenbaum and Rubin (1983). In many cases however, researchers are interested in the effects of programs beyond estimates of the overall average or the average for the subpopulation of treated individuals. It may be of substantive interest to investigate whether there is any subpopulation for which a program or treatment has a nonzero average effect, or whether there is heterogeneity in the effect of the treatment. The hypothesis that the average effect of the treatment is zero for all subpopulations is also important for researchers interested in assessing assumptions concerning the selection mechanism. In this paper we develop two nonparametric tests. The first test is for the null hypothesis that the treatment has a zero average effect for any subpopulation defined by covariates. The second test is for the null hypothesis that the average effect conditional on the covariates is identical for all subpopulations, in other words, that there is no heterogeneity in average treatment effects by covariates. Sacrificing some generality by focusing on these two specific null hypotheses, we derive tests that are straightforward to implement.

188 citations
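A crude two-subgroup version of the second null hypothesis (no heterogeneity in average treatment effects by covariates) can be checked with a simple z-test on simulated data. The paper's actual tests are nonparametric and handle general covariates; everything below, including all parameter values, is only a sketch of the idea:

```python
import math
import random

random.seed(2)

# Simulated randomized experiment where the treatment effect exists only in
# the x = 1 subgroup, so the no-heterogeneity null is false by construction.
n = 40_000
rows = []
for _ in range(n):
    x = random.random() < 0.5            # binary covariate
    w = random.random() < 0.5            # randomized treatment
    tau = 1.0 if x else 0.0              # effect only in the x = 1 subgroup
    y = random.gauss(0.0, 1.0) + (tau if w else 0.0)
    rows.append((x, w, y))

def cell(x, w):
    """Mean, variance, and count of outcomes in one (covariate, treatment) cell."""
    ys = [y for xi, wi, y in rows if xi == x and wi == w]
    m = sum(ys) / len(ys)
    v = sum((y - m) ** 2 for y in ys) / len(ys)
    return m, v, len(ys)

def subgroup_effect(x):
    """Difference in means between treated and control, and its variance."""
    (m1, v1, n1), (m0, v0, n0) = cell(x, True), cell(x, False)
    return m1 - m0, v1 / n1 + v0 / n0

tau1, var1 = subgroup_effect(True)
tau0, var0 = subgroup_effect(False)
z = (tau1 - tau0) / math.sqrt(var1 + var0)   # z-statistic for equal effects
print(abs(z) > 1.96)   # True here: the heterogeneity is detected
```

With many or continuous covariates this cell-by-cell comparison breaks down, which is exactly the situation the paper's nonparametric tests are built for.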


Authors

Showing all 2136 results

Name | H-index | Papers | Citations
Michael Marmot | 193 | 1147 | 170338
James J. Heckman | 175 | 766 | 156816
Anders Björklund | 165 | 769 | 84268
Jean Tirole | 134 | 439 | 103279
Ernst Fehr | 131 | 486 | 108454
Matthew Jones | 125 | 1161 | 96909
Alan B. Krueger | 117 | 402 | 75442
Eric A. Hanushek | 109 | 449 | 59705
David Card | 107 | 433 | 55797
M. Hashem Pesaran | 102 | 361 | 88826
Richard B. Freeman | 100 | 860 | 46932
Richard Blundell | 93 | 487 | 61730
John Haltiwanger | 91 | 393 | 38803
John A. List | 91 | 583 | 36962
Joshua D. Angrist | 89 | 304 | 59505
Network Information
Related Institutions (5)

Center for Economic and Policy Research: 4.4K papers, 272K citations (88% related)
Stockholm School of Economics: 4.8K papers, 285.5K citations (86% related)
European Central Bank: 4.7K papers, 231.8K citations (85% related)
National Bureau of Economic Research: 34.1K papers, 2.8M citations (85% related)
Federal Reserve System: 10.3K papers, 511.9K citations (85% related)

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 32
2022 | 83
2021 | 146
2020 | 259
2019 | 191
2018 | 229