
Showing papers in "Social Science Research Network in 2018"


Journal ArticleDOI
TL;DR: Conservation of resources (COR) theory has become one of the most widely cited theories in organizational psychology and organizational behavior and has been adopted across the many areas of the stress spectrum, from burnout to traumatic stress.
Abstract: Over the past 30 years, conservation of resources (COR) theory has become one of the most widely cited theories in organizational psychology and organizational behavior. COR theory has been adopted across the many areas of the stress spectrum, from burnout to traumatic stress. Further attesting to the theory's centrality, COR theory is largely the basis for the more work-specific leading theory of organizational stress, namely the job demands-resources model. One of the major advantages of COR theory is its ability to make a wide range of specific hypotheses that are much broader than those offered by theories that focus on a single central resource, such as control, or that speak about resources in general. In this article, we will revisit the principles and corollaries of COR theory that inform those more specific hypotheses and will review research in organizational behavior that has relied on the theory.

1,852 citations


Book ChapterDOI
TL;DR: In this paper, the authors propose a theory of asset pricing based on heterogeneous agents who continually adapt their expectations to the market that these expectations aggregatively create, and explore the implications of this theory computationally using the Santa Fe artificial stock market.
Abstract: This chapter proposes a theory of asset pricing based on heterogeneous agents who continually adapt their expectations to the market that these expectations aggregatively create. It explores the implications of this theory computationally using the Santa Fe artificial stock market. Computer experiments with this endogenous-expectations market explain one of the more striking puzzles in finance: that market traders often believe in such concepts as technical trading, "market psychology," and bandwagon effects, while academic theorists believe in market efficiency and a lack of speculative opportunities. Academic theorists and market traders tend to view financial markets in strikingly different ways. Standard (efficient-market) financial theory assumes identical investors who share rational expectations of an asset's future price, and who instantaneously and rationally discount all market information into this price. While a few academics would be willing to assert that the market has a personality or experiences moods, the standard economic view has in recent years begun to change.

929 citations
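
A toy rendition of the feedback loop described above may help: heterogeneous agents hold competing forecast rules, the price those forecasts jointly produce determines which rules look accurate, and poorly performing rules are replaced. The Python sketch below is only a schematic illustration under simplified assumptions (an AR(1) dividend, linear forecast rules, and a crude price-aggregation and rule-replacement step); it is not the Santa Fe artificial stock market itself.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_steps = 50, 500
r = 0.05                                 # risk-free rate
d_bar, rho, sigma_d = 10.0, 0.95, 0.5    # AR(1) dividend process

# Each agent's forecast rule: E[p + d] = a * (p + d) + b, with random initial parameters.
a = rng.uniform(0.7, 1.2, n_agents)
b = rng.uniform(-2.0, 2.0, n_agents)
fitness = np.zeros(n_agents)             # running forecast accuracy per agent

p, d = d_bar / r, d_bar                  # start at the homogeneous rational-expectations benchmark
prices = []
for t in range(n_steps):
    d = d_bar + rho * (d - d_bar) + rng.normal(0, sigma_d)   # new dividend
    forecasts = a * (p + d) + b                               # heterogeneous forecasts
    # The market "clears" at the discounted average forecast (a crude aggregation rule).
    new_p = forecasts.mean() / (1 + r)
    # Update each rule's fitness from its squared forecast error ...
    errors = (forecasts - (new_p + d)) ** 2
    fitness = 0.95 * fitness - errors
    # ... and let the worst rules imitate (with mutation) the current best rule.
    worst = np.argsort(fitness)[: n_agents // 20]
    best = np.argmax(fitness)
    a[worst] = a[best] + rng.normal(0, 0.02, worst.size)
    b[worst] = b[best] + rng.normal(0, 0.02, worst.size)
    p = new_p
    prices.append(p)

print(f"mean simulated price {np.mean(prices):.2f} vs. homogeneous benchmark {d_bar / r:.2f}")
```

Even in this stripped-down form, the price series is generated by the agents' own evolving expectations rather than by a single rational-expectations condition, which is the mechanism the chapter explores.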


Journal ArticleDOI
TL;DR: This paper found that cognitive reflection test performance is negatively correlated with perceived accuracy of fake news, and positively correlated with the ability to distinguish fake news from real news, even for headlines that align with individuals' political ideology.
Abstract: Why do people believe blatantly inaccurate news headlines (“fake news”)? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3,446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news – even for headlines that align with individuals’ political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant’s ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one’s political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se – a finding that opens potential avenues for fighting fake news.

635 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a monthly indicator of geopolitical risk based on a tally of newspaper articles covering geopolitical tensions, and examine its evolution and effects since 1985, concluding that high geopolitical risk leads to a decline in real activity, lower stock returns, and movements in capital flows away from emerging economies and towards advanced economies.
Abstract: We present a monthly indicator of geopolitical risk based on a tally of newspaper articles covering geopolitical tensions, and examine its evolution and effects since 1985. The geopolitical risk (GPR) index spikes around the Gulf War, after 9/11, during the 2003 Iraq invasion, during the 2014 Russia-Ukraine crisis, and after the Paris terrorist attacks. High geopolitical risk leads to a decline in real activity, lower stock returns, and movements in capital flows away from emerging economies and towards advanced economies. When we decompose the index into threats and acts components, the adverse effects of geopolitical risk are mostly driven by the threat of adverse geopolitical events. Extending our index back to 1900, geopolitical risk rose dramatically during World War I and World War II, was elevated in the early 1980s, and has drifted upward since the beginning of the 21st century.

532 citations
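
The construction sketched in the abstract is, at its core, a counting exercise: each month, tally the share of newspaper articles that mention geopolitical-tension terms and rescale the series. The snippet below illustrates that shape on a hypothetical corpus with an illustrative term list; the authors' actual newspaper sample, search dictionary, and normalization are not reproduced here.

```python
from collections import defaultdict

# Hypothetical corpus of (YYYY-MM, article text) pairs; a real application would use a newspaper archive.
articles = [
    ("2001-09", "Markets fell as fears of war and terrorism spread after the attacks."),
    ("2001-09", "The central bank left rates unchanged at its September meeting."),
    ("2003-03", "Troops massed on the border as the threat of military conflict grew."),
    ("2003-03", "Quarterly earnings beat expectations across the retail sector."),
]

# Illustrative term list standing in for the paper's geopolitical-tension dictionary.
TERMS = ("war", "terrorism", "military conflict", "geopolitical")

def gpr_index(articles, terms=TERMS, base=100.0):
    """Share of articles per month mentioning any term, rescaled so the sample mean equals `base`."""
    hits, totals = defaultdict(int), defaultdict(int)
    for month, text in articles:
        totals[month] += 1
        if any(t in text.lower() for t in terms):
            hits[month] += 1
    shares = {m: hits[m] / totals[m] for m in totals}
    mean_share = sum(shares.values()) / len(shares)
    return {m: base * s / mean_share for m, s in sorted(shares.items())}

print(gpr_index(articles))
```

Splitting the same tally across two separate term lists is, in spirit, how a threats-versus-acts decomposition of such an index would be built.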


Journal ArticleDOI
TL;DR: The authors provide an overview of the current state of the literature on the relationship between social media; political polarization; and political "disinformation", a term used to encompass a wide range of types of information about politics found online.
Abstract: The following report is intended to provide an overview of the current state of the literature on the relationship between social media; political polarization; and political “disinformation,” a term used to encompass a wide range of types of information about politics found online, including “fake news,” rumors, deliberately factually incorrect information, inadvertently factually incorrect information, politically slanted information, and “hyperpartisan” news. The review of the literature is provided in six separate sections, each of which can be read individually but that cumulatively are intended to provide an overview of what is known — and unknown — about the relationship between social media, political polarization, and disinformation. The report concludes by identifying key gaps in our understanding of these phenomena and the data that are needed to address them.

494 citations


Journal ArticleDOI
TL;DR: An overview of emerging trends and challenges in the field of intelligent and autonomous, or self-driving, vehicles is provided.
Abstract: In this review, we provide an overview of emerging trends and challenges in the field of intelligent and autonomous, or self-driving, vehicles. Recent advances in the field of perception, planning,...

493 citations


Journal ArticleDOI
TL;DR: Topological data analysis (TDA), as mentioned in this paper, can broadly be described as a collection of data analysis methods that find structure in data, such as clustering, manifold estimation, nonlinear dimension reduction, mode estimation, ridge estimation, and persistent homology.
Abstract: Topological data analysis (TDA) can broadly be described as a collection of data analysis methods that find structure in data. These methods include clustering, manifold estimation, nonlinear dimension reduction, mode estimation, ridge estimation and persistent homology. This paper reviews some of these methods.

353 citations
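
Several of the methods listed in the abstract are available off the shelf: clustering, manifold estimation, and nonlinear dimension reduction can be illustrated with scikit-learn alone, while persistent homology needs a dedicated package and is omitted here. The sketch below is an illustrative pipeline on synthetic data, not code from the paper.

```python
from sklearn.datasets import make_circles
from sklearn.cluster import DBSCAN
from sklearn.manifold import Isomap

# Synthetic data with nontrivial topology: two concentric circles (two loops).
X, _ = make_circles(n_samples=400, factor=0.4, noise=0.03, random_state=0)

# Density-based clustering recovers the two connected components.
labels = DBSCAN(eps=0.15, min_samples=5).fit_predict(X)
print("clusters found:", len(set(labels) - {-1}))

# Nonlinear dimension reduction (manifold learning) to one coordinate per point.
embedding = Isomap(n_neighbors=10, n_components=1).fit_transform(X)
print("embedding shape:", embedding.shape)
```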


Journal ArticleDOI
TL;DR: The World Uncertainty Index (WUI), as mentioned in this paper, measures the frequency of the word "uncertainty" in the quarterly Economist Intelligence Unit country reports from 1996 to 2016.
Abstract: We construct a new index of uncertainty — the World Uncertainty Index (WUI) — for 143 individual countries on a quarterly basis from 1996 onwards. This is defined using the frequency of the word “uncertainty” in the quarterly Economist Intelligence Unit country reports. Globally, the Index spikes near the 9/11 attack, SARS outbreak, Gulf War II, Euro debt crisis, El Nino, European border crisis, UK Brexit vote and the 2016 US election. Uncertainty spikes tend to be more synchronized within advanced economies and between economies with tighter trade and financial linkages. The level of uncertainty is significantly higher in developing countries and is positively associated with economic policy uncertainty and stock market volatility, and negatively with GDP growth. In a panel vector autoregressive setting, we find that innovations in the WUI foreshadow significant declines in output.

309 citations
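
At the level of mechanics, the WUI is a word count: occurrences of "uncertainty" (and its variants) in each country's quarterly EIU report, scaled by the length of the report. The sketch below uses hypothetical report text and an illustrative scaling of counts per 1,000 words; the authors' exact scaling and term handling may differ.

```python
import re

# Hypothetical (country, quarter, report text) records standing in for EIU country reports.
reports = [
    ("GBR", "2016Q3", "Uncertainty surrounding the Brexit vote weighs on investment. "
                      "Political uncertainty is expected to persist into 2017."),
    ("USA", "2016Q4", "The outlook is stable, although election-related uncertainty lingers."),
]

def wui(text, per_words=1000):
    """Occurrences of 'uncertain*' per `per_words` words of report text (illustrative scaling)."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    hits = sum(1 for w in words if w.startswith("uncertain"))
    return per_words * hits / len(words)

for country, quarter, text in reports:
    print(country, quarter, round(wui(text), 1))
```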


Journal ArticleDOI
TL;DR: The aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it.
Abstract: Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection. Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors. While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well. Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions.

300 citations


ReportDOI
TL;DR: In this paper, the authors estimate that close to 40% of multinational profits are shifted to low-tax countries each year by combining new macroeconomic statistics on the activities of multinational companies with the national accounts of tax havens and the world's other countries.
Abstract: By combining new macroeconomic statistics on the activities of multinational companies with the national accounts of tax havens and the world's other countries, we estimate that close to 40% of multinational profits are shifted to low-tax countries each year. Profit shifting is highest among U.S. multinationals; the tax revenue losses are largest for the European Union and developing countries. We show theoretically and empirically that in the current international tax system, tax authorities of high-tax countries do not have incentives to combat profit shifting to tax havens. They instead focus their enforcement effort on relocating profits booked in other high-tax places - in effect stealing revenue from each other. This policy failure can explain the persistence of profit shifting to low-tax countries despite the sizeable costs involved for high-tax countries. We provide a new cross-country database of GDP, corporate profits, trade balances, and factor shares corrected for profit shifting, showing that the global rise of the corporate capital share is significantly under-estimated.

277 citations


Journal ArticleDOI
TL;DR: In this paper, the authors find that approximately one-quarter of bitcoin users are involved in illegal activity and that around $76 billion of illegal activity per year involves bitcoin, which is close to the scale of the US and European markets for illegal drugs.
Abstract: Cryptocurrencies are among the largest unregulated markets in the world. We find that approximately one-quarter of bitcoin users are involved in illegal activity. We estimate that around $76 billion of illegal activity per year involves bitcoin (46% of bitcoin transactions), which is close to the scale of the US and European markets for illegal drugs. The illegal share of bitcoin activity declines with mainstream interest in bitcoin and with the emergence of more opaque cryptocurrencies. The techniques developed in this paper have applications in cryptocurrency surveillance. Our findings suggest that cryptocurrencies are transforming the black markets by enabling “black e-commerce.”

Journal ArticleDOI
TL;DR: This review presents the state of the art in distributional semantics, focusing on its assets and limits as a model of meaning and as a method for semantic analysis.
Abstract: Distributional semantics is a usage-based model of meaning, based on the assumption that the statistical distribution of linguistic items in context plays a key role in characterizing their semantic behavior. Distributional models build semantic representations by extracting co-occurrences from corpora and have become a mainstream research paradigm in computational linguistics. In this review, I present the state of the art in distributional semantics, focusing on its assets and limits as a model of meaning and as a method for semantic analysis.
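
The core machinery described above, counting co-occurrences in a corpus and comparing the resulting vectors, fits in a few lines. The sketch below builds a raw window-based co-occurrence matrix over a tiny toy corpus and compares words by cosine similarity; real distributional models add association weighting (e.g., PPMI), dimensionality reduction, or neural estimation, which are omitted here.

```python
import numpy as np

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
]

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
window = 2

# Symmetric window-based co-occurrence counts.
cooc = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                cooc[idx[w], idx[sent[j]]] += 1

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print("cat ~ dog:", round(cosine(cooc[idx["cat"]], cooc[idx["dog"]]), 2))
print("cat ~ rug:", round(cosine(cooc[idx["cat"]], cooc[idx["rug"]]), 2))
```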

Journal ArticleDOI
TL;DR: This work provides a post-hoc method for identifying fraudulent respondents (an original R package and an associated online application) and an a priori method (JavaScript and PHP code in Qualtrics) for blocking fraudulent respondents from participating.
Abstract: Amazon’s Mechanical Turk (MTurk) is widely used to collect affordable and high-quality survey responses. However, researchers recently noticed a substantial decline in data quality, sending shockwaves throughout the social sciences. The problem seems to stem from the use of Virtual Private Servers (VPSs) by respondents outside the U.S. to fool MTurk’s filtering system, but we know relatively little about the cause and consequence of this form of fraud. Analyzing 38 studies conducted on MTurk, we demonstrate that this problem is not new - we find a similar spike in VPS use in 2015. Utilizing two new studies, we show that data from these respondents is of substantially worse quality. Next, we provide two solutions for this problem using an API for an IP traceback application (IP Hub). We provide both a post-hoc method for identifying fraudulent respondents using an original R package (“rIP”) and an associated online application, and an a priori method using JavaScript and PHP code in Qualtrics to block fraudulent respondents from participating. We demonstrate the effectiveness of the screening procedure in a third study. Overall, our results suggest that fraudulent respondents pose a serious threat to data quality but can be easily identified and screened out.
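
Operationally, the post-hoc screen amounts to looking up each respondent's IP address against a VPS/proxy blocklist and setting flagged rows aside. The authors implement this as the rIP R package backed by the IP Hub API; the Python sketch below only mirrors the shape of that workflow with a hypothetical lookup_ip stand-in, because the real IP Hub request format and response fields are not reproduced here.

```python
def lookup_ip(ip_address):
    """Hypothetical stand-in for a VPS/proxy lookup service such as IP Hub.

    A real implementation would call the service's API with an API key and parse
    its response; here we return a hard-coded verdict purely for illustration.
    """
    known_vps = {"203.0.113.7"}                    # documentation-range example IP
    return {"ip": ip_address, "is_vps": ip_address in known_vps}

def screen_respondents(rows):
    """Split survey rows into (kept, flagged) based on the lookup verdict."""
    kept, flagged = [], []
    for row in rows:
        verdict = lookup_ip(row["ip"])
        (flagged if verdict["is_vps"] else kept).append(row)
    return kept, flagged

rows = [
    {"worker_id": "A1", "ip": "198.51.100.23", "attention_check": "pass"},
    {"worker_id": "A2", "ip": "203.0.113.7", "attention_check": "fail"},
]
kept, flagged = screen_respondents(rows)
print(len(kept), "kept;", len(flagged), "flagged as likely VPS")
```

The a priori variant runs the same check inside the survey flow (the paper's JavaScript/PHP Qualtrics code) before a respondent is allowed to participate.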

Journal ArticleDOI
TL;DR: The authors performed a comparative analysis of machine learning methods for the canonical problem of empirical asset pricing: measuring asset risk premia, and demonstrated large economic gains to investors using machine learning forecasts, in some cases doubling the performance of leading regression-based strategies from the literature.
Abstract: We perform a comparative analysis of machine learning methods for the canonical problem of empirical asset pricing: measuring asset risk premia. We demonstrate large economic gains to investors using machine learning forecasts, in some cases doubling the performance of leading regression-based strategies from the literature. We identify the best performing methods (trees and neural networks) and trace their predictive gains to allowance of nonlinear predictor interactions that are missed by other methods. All methods agree on the same set of dominant predictive signals which includes variations on momentum, liquidity, and volatility. Improved risk premium measurement through machine learning simplifies the investigation into economic mechanisms of asset pricing and highlights the value of machine learning in financial innovation.
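
The comparison the abstract describes, flexible machine learning forecasts of returns from firm characteristics measured against a linear benchmark, can be sketched with standard tools. The block below uses synthetic data whose "true" premium contains a nonlinear interaction, the kind of structure the paper credits trees and neural networks with capturing; it is a stylized illustration, not the paper's data, predictor set, or validation design.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 5000
# Synthetic firm characteristics standing in for momentum, liquidity, and volatility proxies.
X = rng.normal(size=(n, 3))
# "True" risk premium with a nonlinear interaction that a linear model misses.
y = 0.02 * X[:, 0] - 0.01 * X[:, 1] + 0.03 * X[:, 0] * X[:, 2] + rng.normal(0, 0.05, n)

train, test = slice(0, 4000), slice(4000, None)

linear = LinearRegression().fit(X[train], y[train])
boosted = GradientBoostingRegressor(random_state=0).fit(X[train], y[train])

print("linear  out-of-sample R^2:", round(r2_score(y[test], linear.predict(X[test])), 3))
print("boosted out-of-sample R^2:", round(r2_score(y[test], boosted.predict(X[test])), 3))
```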

Posted Content
TL;DR: There may be no 'best' approach to explaining algorithmic decisions, and reflection on their automated nature both implicates and mitigates justice dimensions.
Abstract: Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles; under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.

Journal ArticleDOI
TL;DR: In this paper, the authors build a simple equilibrium model of credit provision in which to evaluate the impacts of statistical technology on the fairness of outcomes across categories such as race and gender, and apply it to detailed data on US mortgages and applications.
Abstract: Recent innovations in statistical technology, including in evaluating creditworthiness, have sparked concerns about impacts on the fairness of outcomes across categories such as race and gender. We build a simple equilibrium model of credit provision in which to evaluate such impacts. We find that as statistical technology changes, the effects on disparity depend on a combination of the changes in the functional form used to evaluate creditworthiness using underlying borrower characteristics and the cross-category distribution of these characteristics. Employing detailed data on US mortgages and applications, we predict default using a number of popular machine learning techniques, and embed these techniques in our equilibrium model to analyze both extensive margin (exclusion) and intensive margin (rates) impacts on disparity. We propose a basic measure of cross-category disparity, and find that the machine learning models perform worse on this measure than logit models, especially on the intensive margin. We discuss the implications of our findings for mortgage policy.
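
A small simulation conveys the exercise: fit a logit and a more flexible model to predict default from borrower characteristics whose distributions differ across two groups, then compare approval rates at a common cutoff. Everything below is illustrative (simulated data, an arbitrary cutoff, and a deliberately crude disparity measure based on approval-rate gaps); it does not reproduce the paper's equilibrium model or its proposed measure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 20000
group = rng.integers(0, 2, n)                        # two borrower categories
income = rng.normal(3.0 - 0.3 * group, 1.0, n)       # characteristic distributions differ by group
ltv = rng.normal(0.8, 0.1, n)
# Default probability depends nonlinearly on characteristics, not on group membership directly.
p_default = 1 / (1 + np.exp(2.5 + 1.2 * income - 4.0 * ltv - 1.5 * ltv * (income < 2)))
default = rng.binomial(1, p_default)

X = np.column_stack([income, ltv])
logit = LogisticRegression().fit(X, default)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, default)

def approval_disparity(model, cutoff=0.10):
    """Difference in approval rates (predicted default below cutoff) between the two groups."""
    approve = model.predict_proba(X)[:, 1] < cutoff
    return approve[group == 0].mean() - approve[group == 1].mean()

print("logit  approval-rate gap:", round(approval_disparity(logit), 3))
print("forest approval-rate gap:", round(approval_disparity(forest), 3))
```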

ReportDOI
TL;DR: In this article, the authors study green bonds, which are bonds whose proceeds are used for environmentally sensitive purposes, and find that green municipal bonds are issued at a premium to otherwise similar ordinary bonds.
Abstract: We study green bonds, which are bonds whose proceeds are used for environmentally sensitive purposes. After an overview of the U.S. corporate and municipal green bonds markets, we study pricing and ownership patterns using a simple framework that incorporates assets with nonpecuniary utility. As predicted, we find that green municipal bonds are issued at a premium to otherwise similar ordinary bonds. We also confirm that green bonds, particularly small or essentially riskless ones, are more closely held than ordinary bonds. These pricing and ownership effects are strongest for bonds that are externally certified as green.

Journal ArticleDOI
TL;DR: The technology behind creating artificial touch sensations and the relevant aspects of human touch are reviewed, and the need to consider the neuroscience and perception behind the human sense of touch in the design and control of haptic devices is addressed.
Abstract: This article reviews the technology behind creating artificial touch sensations and the relevant aspects of human touch. We focus on the design and control of haptic devices and discuss the best practices for generating distinct and effective touch sensations. Artificial haptic sensations can present information to users, help them complete a task, augment or replace the other senses, and add immersiveness and realism to virtual interactions. We examine these applications in the context of different haptic feedback modalities and the forms that haptic devices can take. We discuss the prior work, limitations, and design considerations of each feedback modality and individual haptic technology. We also address the need to consider the neuroscience and perception behind the human sense of touch in the design and control of haptic devices.

Posted Content
TL;DR: In this article, the authors derived an expression for the general difference-in-differences estimator and showed that it is a weighted average of all possible two-group/two-period estimators in the data.
Abstract: The canonical difference-in-differences (DD) model contains two time periods, “pre” and “post”, and two groups, “treatment” and “control”. Most DD applications, however, exploit variation across groups of units that receive treatment at different times. This paper derives an expression for this general DD estimator, and shows that it is a weighted average of all possible two-group/two-period DD estimators in the data. This result provides detailed guidance about how to use regression DD in practice. I define the DD estimand and show how it averages treatment effect heterogeneity and that it is biased when effects change over time. I propose a new balance test derived from a unified definition of common trends. I show how to decompose the difference between two specifications, and I apply it to models that drop untreated units, weight, disaggregate time fixed effects, control for unit-specific time trends, or exploit a third difference.
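
The building block of the decomposition is the canonical two-group/two-period estimator: the change in the treated group's mean outcome minus the change in the control group's. The sketch below computes one such 2x2 DD on a toy panel; the paper's contribution, the weights that combine all possible 2x2 comparisons into the general timing estimator, is not reproduced here.

```python
import numpy as np

def dd_2x2(y, group, post):
    """Two-group/two-period DD: (treated post - treated pre) - (control post - control pre)."""
    y, group, post = map(np.asarray, (y, group, post))
    mean = lambda g, p: y[(group == g) & (post == p)].mean()
    return (mean(1, 1) - mean(1, 0)) - (mean(0, 1) - mean(0, 0))

# Toy panel: the control group trends from 10 to 12, the treated group from 11 to 15.
y     = [10, 12, 11, 15, 10, 12, 11, 15]
group = [0,  0,  1,  1,  0,  0,  1,  1]   # 1 = treated
post  = [0,  1,  0,  1,  0,  1,  0,  1]   # 1 = post-treatment period

print("2x2 DD estimate:", dd_2x2(y, group, post))   # (15 - 11) - (12 - 10) = 2
```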

Journal ArticleDOI
TL;DR: In this paper, the authors address the three basic principles of person-environment fit theory: (a) the person and the environment together predict human behavior better than each of them does separately; (b) outcomes are most optimal when personal attributes (e.g., needs, values) and environmental attributes are compatible, irrespective of whether these attributes are rated as low, medium, or high.
Abstract: This review addresses the three basic principles of person–environment fit theory: (a) The person and the environment together predict human behavior better than each of them does separately; (b) outcomes are most optimal when personal attributes (e.g., needs, values) and environmental attributes (e.g., supplies, values) are compatible, irrespective of whether these attributes are rated as low, medium, or high; and (c) the direction of misfit between the person and the environment does not matter. My review of person–job and person–organization fit research that used polynomial regression to establish fit effects provides mixed support for the explanatory power of fit. Individuals report most optimal outcomes when there is fit on attributes they rate as highest, and they report lowest outcomes when the environment offers less than they need or desire. Linking these findings to individuals' abilities and opportunities to adapt, I reconsider fit theory and outline options for future research and practice.
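
The fit studies referenced here rely on polynomial regression: the outcome is regressed on the person rating (P), the environment rating (E), and the second-order terms P², P×E, and E², and fit effects are read off the estimated response surface. The sketch below runs that specification with statsmodels on simulated data in which satisfaction genuinely peaks where P and E match; the variable names and the simulated surface are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
P = rng.normal(size=n)                 # person attribute (e.g., desired autonomy)
E = rng.normal(size=n)                 # environment attribute (e.g., supplied autonomy)
# Simulated outcome: highest when P and E match (a congruence effect), plus noise.
satisfaction = 5 - 0.8 * (P - E) ** 2 + rng.normal(0, 0.5, n)

# Second-order polynomial specification: P, E, P^2, P*E, E^2.
X = sm.add_constant(np.column_stack([P, E, P**2, P * E, E**2]))
model = sm.OLS(satisfaction, X).fit()
print(model.params.round(2))           # expect roughly [5, 0, 0, -0.8, 1.6, -0.8]
```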

Journal ArticleDOI
TL;DR: The authors synthesize the recent generation of immigration-crime research focused on macrosocial units using a two-pronged approach that combines the qualitative method of narrative review with the quantitative strategy of systematic meta-analysis.
Abstract: Are immigration and crime related? This review addresses this question in order to build a deeper understanding of the immigration-crime relationship. We synthesize the recent generation (1994 to 2014) of immigration-crime research focused on macrosocial (i.e., geospatial) units using a two-pronged approach that combines the qualitative method of narrative review with the quantitative strategy of systematic meta-analysis. After briefly reviewing contradictory theoretical arguments that scholars have invoked in efforts to explain the immigration-crime relationship, we present findings from our analysis, which (a) determined the average effect of immigration on crime rates across the body of literature and (b) assessed how variations in key aspects of research design have impacted results obtained in prior studies. Findings indicate that, overall, the immigration-crime association is negative—but very weak. At the same time, there is significant variation in findings across studies. Study design features, i...
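
The meta-analytic prong of the approach pools study-level effect sizes with inverse-variance weights (a random-effects version adds a between-study variance component). The sketch below computes a fixed-effect pooled estimate on made-up effect sizes; the numbers are not the review's data, and the review's actual estimator and moderator analyses are richer than this.

```python
import numpy as np

# Hypothetical study-level effects of immigration on crime and their variances;
# negative values mean more immigration is associated with less crime.
effects   = np.array([-0.04, 0.01, -0.08, -0.02, 0.00])
variances = np.array([0.0004, 0.0009, 0.0016, 0.0004, 0.0025])

weights = 1.0 / variances                         # fixed-effect inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.3f} (SE {se:.3f})")
```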

Posted Content
TL;DR: The Global Wetland Outlook as discussed by the authors provides a current overview of wetlands: their extent, trends, drivers of change and the responses needed to reverse the historical decline in wetland area and quality.
Abstract: Conservation and wise use of wetlands are vital for human livelihoods. The wide range of ecosystem services wetlands provide means that they lie at the heart of sustainable development. Yet policy and decision-makers often underestimate the value of their benefits to nature and humankind. Understanding these values and what is happening to wetlands is critical to ensuring their conservation and wise use. The Global Wetland Outlook, the flagship publication of the Ramsar Convention, provides a current overview of wetlands: their extent, trends, drivers of change and the responses needed to reverse the historical decline in wetland area and quality.

Posted Content
TL;DR: This Article argues that a new data protection right, the "right to reasonable inferences", is needed to help close the accountability gap currently posed by “high risk inferences,” meaning inferences drawn from Big Data analytics that damage privacy or reputation, or have low verifiability in the sense of being predictive or opinion-based while being used in important decisions.
Abstract: Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice (ECJ). This Article shows that individuals are granted little control and oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively ‘economy class’ personal data in the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Art 13-15), rectify (Art 16), delete (Art 17), object to (Art 21), or port (Art 20) personal data are significantly curtailed for inferences. The GDPR also provides insufficient protection against sensitive inferences (Art 9) or remedies to challenge inferences or important decisions based on them (Art 22(3)). This situation is not accidental. In standing jurisprudence the ECJ has consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing, and to rectify, block, or erase it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent. Current policy proposals addressing privacy protection (the ePrivacy Regulation and the EU Digital Content Directive) and Europe’s new Copyright Directive and Trade Secrets Directive also fail to close the GDPR’s accountability gaps concerning inferences. This Article argues that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high risk inferences’ , meaning inferences drawn from Big Data analytics that damage privacy or reputation, or have low verifiability in the sense of being predictive or opinion-based while being used in important decisions. This right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data form a normatively acceptable basis from which to draw inferences; (2) why these inferences are relevant and normatively acceptable for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged.

Journal ArticleDOI
TL;DR: In this paper, the authors review the conceptualization and operationalization of job insecurity (JI) and empirical studies of the antecedents, consequences, and moderators of JI effects, as well as the various theoretical perspectives used to explain the relationship of JI to various outcomes.
Abstract: This article updates our understanding of the field of job insecurity (JI) by incorporating studies across the globe since 2003, analyzes what we know, and offers ideas on how to move forward. We begin by reviewing the conceptualization and operationalization of job insecurity. We then review empirical studies of the antecedents, consequences, and moderators of JI effects, as well as the various theoretical perspectives used to explain the relationship of JI to various outcomes. Our analyses also consider JI research in different regions of the world, highlighting the cross-cultural differences. We conclude by identifying areas in need of future research. We propose that JI is and will continue to be a predominant employment issue, such that research into it will only increase in importance and relevance. In particular, we call for in-depth research that carefully considers the rapid changes in the workplace today and in the future.

Journal ArticleDOI
TL;DR: This review surveys the development of distributed computational models for optimization over time-varying networks, focusing on a simple direct primal (sub)gradient method, but also provides an overview of other distributed methods for optimization in networks.
Abstract: Advances in wired and wireless technology have necessitated the development of theory, models, and tools to cope with the new challenges posed by large-scale control and optimization problems over ...
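
The "simple direct primal (sub)gradient method" the review centers on has each node average its iterate with its neighbors' iterates using a doubly stochastic weight matrix and then take a step along the gradient of its own local objective. The sketch below runs that update on a toy quadratic problem over a fixed four-node ring; the weight matrix, step size, and objectives are illustrative choices, and with a constant step size the iterates settle near (not exactly at) the global optimum.

```python
import numpy as np

# Ring of 4 nodes; W is doubly stochastic (every row and column sums to 1).
W = np.array([
    [0.5,  0.25, 0.0,  0.25],
    [0.25, 0.5,  0.25, 0.0 ],
    [0.0,  0.25, 0.5,  0.25],
    [0.25, 0.0,  0.25, 0.5 ],
])

# Local objectives f_i(x) = 0.5 * (x - a_i)^2; the global minimizer is mean(a).
a = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda x: x - a                 # elementwise local gradients

x = np.zeros(4)                        # one scalar iterate per node
alpha = 0.1                            # constant step size
for t in range(200):
    x = W @ x - alpha * grad(x)        # consensus mixing + local gradient step

print("node iterates:", x.round(3), " global optimum:", a.mean())
```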

Journal ArticleDOI
TL;DR: In this article, the authors present the first empirical study on the announcement returns and real effects of green bond issuance by firms in 28 countries during 2007-2017, and find that stock prices positively respond to green-bond issuance.
Abstract: The green bond market has been growing rapidly worldwide since its debut in 2007. We present the first empirical study on the announcement returns and real effects of green bond issuance by firms in 28 countries during 2007-2017. After compiling a comprehensive international green bond dataset, we document that stock prices positively respond to green bond issuance. However, we do not find a significant premium for green bonds, suggesting that the positive stock returns are not driven by the lower cost of debt. Nevertheless, we show that institutional ownership, especially from domestic institutions, increases after the firm issues green bonds. Moreover, stock liquidity significantly improves upon the issuance of green bonds. Overall, our findings suggest that the firm’s issuance of green bonds is beneficial to its existing shareholders.

Journal ArticleDOI
TL;DR: As discussed by the authors, the relationship between p-values and minimum Bayes factors also depends on the sample size and on the dimension of the parameter of interest; the review considers two-sided significance tests for a point null hypothesis in more detail.
Abstract: The p-value quantifies the discrepancy between the data and a null hypothesis of interest, usually the assumption of no difference or no effect. A Bayesian approach allows the calibration of p-values by transforming them to direct measures of the evidence against the null hypothesis, so-called Bayes factors. We review the available literature in this area and consider two-sided significance tests for a point null hypothesis in more detail. We distinguish simple from local alternative hypotheses and contrast traditional Bayes factors based on the data with Bayes factors based on p-values or test statistics. A well-known finding is that the minimum Bayes factor, the smallest possible Bayes factor within a certain class of alternative hypotheses, provides less evidence against the null hypothesis than the corresponding p-value might suggest. It is less known that the relationship between p-values and minimum Bayes factors also depends on the sample size and on the dimension of the parameter of interest. We i...
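
Two standard calibrations from this literature make the review's point concrete: for a two-sided test of a point null with statistic z, the minimum Bayes factor over simple alternatives is exp(-z²/2), and over a broad class of local alternatives a well-known lower bound is -e·p·ln(p) for p < 1/e. The sketch below evaluates both for a few p-values; the paper's fuller analysis, including the dependence on sample size and parameter dimension, is not captured by these generic bounds.

```python
import numpy as np
from scipy.stats import norm

def min_bf_simple(p):
    """Minimum Bayes factor over simple alternatives: exp(-z^2 / 2) for a two-sided z-test."""
    z = norm.isf(p / 2)                 # two-sided p-value -> |z|
    return np.exp(-z**2 / 2)

def min_bf_local(p):
    """-e * p * ln(p) lower bound (valid for p < 1/e) over a class of local alternatives."""
    p = np.asarray(p, dtype=float)
    return np.where(p < 1 / np.e, -np.e * p * np.log(p), 1.0)

for p in (0.05, 0.01, 0.005, 0.001):
    print(f"p = {p:<6} min BF (simple) = {min_bf_simple(p):.3f}   "
          f"min BF (local) = {float(min_bf_local(p)):.3f}")
```

For example, p = 0.05 corresponds to a minimum Bayes factor of roughly 0.15 against simple alternatives and roughly 0.41 under the local-alternative bound, i.e., weaker evidence against the null than the p-value alone might suggest.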

Journal ArticleDOI
TL;DR: It is shown that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.
Abstract: Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.

Posted Content
TL;DR: This paper studied the sources of racial disparities in income using anonymized longitudinal data covering nearly the entire U.S. population from 1989-2015 and found that black Americans and American Indians have much lower rates of upward mobility and higher rates of downward mobility than whites, leading to persistent disparities across generations.
Abstract: We study the sources of racial disparities in income using anonymized longitudinal data covering nearly the entire U.S. population from 1989-2015. We document three results. First, black Americans and American Indians have much lower rates of upward mobility and higher rates of downward mobility than whites, leading to persistent disparities across generations. Conditional on parent income, the black-white income gap is driven by differences in wages and employment rates between black and white men; there are no such differences between black and white women. Hispanic Americans have rates of intergenerational mobility more similar to whites than blacks, leading the Hispanic-white income gap to shrink across generations. Second, differences in parental marital status, education, and wealth explain little of the black-white income gap conditional on parent income. Third, the black-white gap persists even among boys who grow up in the same neighborhood. Controlling for parental income, black boys have lower incomes in adulthood than white boys in 99% of Census tracts. The few areas with small black-white gaps tend to be low-poverty neighborhoods with low levels of racial bias among whites and high rates of father presence among blacks. Black males who move to such neighborhoods earlier in childhood have significantly better outcomes. However, fewer than 5% of black children grow up in such areas. Our findings suggest that reducing the black-white income gap will require efforts whose impacts cross neighborhood and class lines and increase upward mobility specifically for black men.

Posted Content
TL;DR: It is argued that researchers’ functional background and adherence to a specific position in philosophy of science contribute to the confusion over which method is “right” and which one is “wrong,” and that researchers should instead focus on more fundamental aspects of modeling, measurement, and statistical analysis.
Abstract: Descriptive statistics and the application of multivariate data analysis techniques such as regression analysis and factor analysis belong to the core set of statistical instruments, and their use has generated findings that have significantly shaped the way we see the world today. The increasing reliance on and acceptance of statistical analysis, as well as the advent of powerful computer systems that allow for handling large amounts of data, paved the way for the development of more advanced next-generation analysis techniques. Structural equation modeling (SEM) is among the most useful advanced statistical analysis techniques that have emerged in the social sciences in recent decades. SEM is a class of multivariate techniques that combine aspects of factor analysis and regression, enabling the researcher to simultaneously examine relationships among measured variables and latent variables as well as between latent variables. Considering the ever-increasing importance of understanding latent phenomena such as consumer perceptions, attitudes, or intentions and their influence on organizational performance measures (e.g., stock prices), it is not surprising that SEM has become one of the most prominent statistical analysis techniques today. While there are many approaches to conducting SEM, the most widely applied method is certainly covariance-based SEM (CB-SEM). Since its introduction by Karl Joreskog in 1973, CB-SEM has received considerable interest among empirical researchers across virtually all social sciences disciplines. Recently, however, partial least squares SEM (PLS-SEM) has gained massive attention in the social sciences as an alternative means to estimate relationships among multiple latent variables, each measured by a number of manifest variables. Along with the ongoing development of both SEM techniques, research has recently witnessed an increasing debate about the relative advantages of PLS-SEM vis-a-vis other SEM methods, which resulted in the formation of two opposing camps. One group of scholars, supportive of the PLS-SEM method, has emphasized the method’s prediction-orientation and capabilities to handle complex models, small sample sizes, and formatively specified constructs. The other group has noted that PLS-SEM is not a latent variable method, producing biased and inconsistent parameter estimates, calling for the abandonment of the method. Tying in with these debates, in this manuscript, we highlight five different perspectives on comparing results from CB-SEM and PLS-SEM. These perspectives imply that the universal rejection of one method over the other is shortsighted as such a step necessarily rests on assumptions about unknown entities in a model and the parameter estimation. We argue that researchers’ functional background and adherence to a specific position in philosophy of science contribute to the confusion over which method is “right” and which one is “wrong.” Based on our descriptions, we offer five recommendations that share a common theme: The comparison of results from CB-SEM and PLS-SEM—despite considerable research interest—is misguided, capable of providing both false confidence and false concern. Instead of seeking confidence in the comparison of results from the different approaches, researchers should instead focus on more fundamental aspects of modeling, measurement, and statistical analysis.