Showing papers in "Social Science Research Network in 2018"
TL;DR: In this paper, the authors propose a theory of asset pricing based on heterogeneous agents who continually adapt their expectations to the market that these expectations aggregatively create, and explore the implications of this theory computationally using the Santa Fe artificial stock market.
Abstract: This chapter proposes a theory of asset pricing based on heterogeneous agents who continually adapt their expectations to the market that these expectations aggregatively create. It explores the implications of this theory computationally using the Santa Fe artificial stock market. Computer experiments with this endogenous-expectations market explain one of the more striking puzzles in finance: that market traders often believe in such concepts as technical trading, "market psychology," and bandwagon effects, while academic theorists believe in market efficiency and a lack of speculative opportunities. Academic theorists and market traders tend to view financial markets in strikingly different ways. Standard (efficient-market) financial theory assumes identical investors who share rational expectations of an asset's future price, and who instantaneously and rationally discount all market information into this price. While few academics would be willing to assert that the market has a personality or experiences moods, the standard economic view has in recent years begun to change.
924 citations
TL;DR: Conservation of resources (COR) theory has become one of the most widely cited theories in organizational psychology and organizational behavior and has been adopted across the many areas of the stress spectrum, from burnout to traumatic stress.
Abstract: Over the past 30 years, conservation of resources (COR) theory has become one of the most widely cited theories in organizational psychology and organizational behavior. COR theory has been adopted across the many areas of the stress spectrum, from burnout to traumatic stress. Further attesting to the theory's centrality, COR theory is largely the basis for the more work-specific leading theory of organizational stress, namely the job demands-resources model. One of the major advantages of COR theory is its ability to make a wide range of specific hypotheses that are much broader than those offered by theories that focus on a single central resource, such as control, or that speak about resources in general. In this article, we will revisit the principles and corollaries of COR theory that inform those more specific hypotheses and will review research in organizational behavior that has relied on the theory.
824 citations
TL;DR: This paper found that cognitive reflection test performance is negatively correlated with perceived accuracy of fake news, and positively correlated with the ability to distinguish fake news from real news, even for headlines that align with individuals' political ideology.
Abstract: Why do people believe blatantly inaccurate news headlines (“fake news”)? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3,446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news – even for headlines that align with individuals’ political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant’s ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one’s political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se – a finding that opens potential avenues for fighting fake news.
335 citations
TL;DR: The authors provide an overview of the current state of the literature on the relationship between social media; political polarization; and political "disinformation", a term used to encompass a wide range of types of information about politics found online.
Abstract: The following report is intended to provide an overview of the current state of the literature on the relationship between social media; political polarization; and political “disinformation,” a term used to encompass a wide range of types of information about politics found online, including “fake news,” rumors, deliberately factually incorrect information, inadvertently factually incorrect information, politically slanted information, and “hyperpartisan” news. The review of the literature is provided in six separate sections, each of which can be read individually but that cumulatively are intended to provide an overview of what is known — and unknown — about the relationship between social media, political polarization, and disinformation. The report concludes by identifying key gaps in our understanding of these phenomena and the data that are needed to address them.
289 citations
TL;DR: Topological data analysis (TDA) can broadly be described as a collection of data analysis methods that find structure in data, such as clustering, manifold estimation, nonlinear dimension reduction, mode estimation, ridge estimation, and persistent homology.
Abstract: Topological data analysis (TDA) can broadly be described as a collection of data analysis methods that find structure in data. These methods include clustering, manifold estimation, nonlinear dimension reduction, mode estimation, ridge estimation and persistent homology. This paper reviews some of these methods.
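To make the list of methods concrete, here is a minimal sketch, assuming Python with NumPy and scikit-learn installed, of two of them (clustering and nonlinear dimension reduction) applied to a synthetic point cloud; the data and parameter choices are illustrative and not taken from the paper.

```python
# Minimal sketch: two of the structure-finding methods listed above
# (clustering and nonlinear dimension reduction), run on a synthetic
# point cloud. Library choices and parameters are illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
# Synthetic data: two noisy rings in 3-D, i.e. low-dimensional structure
theta = rng.uniform(0, 2 * np.pi, size=400)
ring1 = np.c_[np.cos(theta), np.sin(theta), 0.05 * rng.normal(size=400)]
ring2 = ring1 + np.array([3.0, 0.0, 0.0])
X = np.vstack([ring1, ring2])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # clustering
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)      # nonlinear dim. reduction

print("cluster sizes:", np.bincount(labels))
print("embedded shape:", embedding.shape)
```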
270 citations
TL;DR: The authors performed a comparative analysis of machine learning methods for the canonical problem of empirical asset pricing: measuring asset risk premia, and demonstrated large economic gains to investors using machine learning forecasts, in some cases doubling the performance of leading regression-based strategies from the literature.
Abstract: We perform a comparative analysis of machine learning methods for the canonical problem of empirical asset pricing: measuring asset risk premia. We demonstrate large economic gains to investors using machine learning forecasts, in some cases doubling the performance of leading regression-based strategies from the literature. We identify the best performing methods (trees and neural networks) and trace their predictive gains to allowance of nonlinear predictor interactions that are missed by other methods. All methods agree on the same set of dominant predictive signals which includes variations on momentum, liquidity, and volatility. Improved risk premium measurement through machine learning simplifies the investigation into economic mechanisms of asset pricing and highlights the value of machine learning in financial innovation.
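As a hedged illustration of the kind of horse race described, the sketch below fits a linear benchmark and a tree ensemble on simulated data containing a nonlinear predictor interaction and compares out-of-sample R²; the variable names and data-generating process are hypothetical, not the authors' specification.

```python
# Illustrative only: compare a linear model with a tree ensemble for
# return prediction, in the spirit of the comparison described above.
# The simulated "characteristics" and returns are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n, k = 5000, 20
X = rng.normal(size=(n, k))                                # stand-ins for momentum, liquidity, volatility signals
signal = 0.3 * X[:, 0] * X[:, 1] + 0.2 * np.tanh(X[:, 2])  # nonlinear interaction a linear model misses
y = signal + rng.normal(scale=1.0, size=n)                 # noisy "excess returns"

train, test = slice(0, 4000), slice(4000, n)
ols = LinearRegression().fit(X[train], y[train])
gbr = GradientBoostingRegressor(random_state=0).fit(X[train], y[train])

print("OLS   out-of-sample R^2:", r2_score(y[test], ols.predict(X[test])))
print("Trees out-of-sample R^2:", r2_score(y[test], gbr.predict(X[test])))
```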
236 citations
TL;DR: An overview of emerging trends and challenges in the field of intelligent and autonomous, or self-driving, vehicles is provided.
Abstract: In this review, we provide an overview of emerging trends and challenges in the field of intelligent and autonomous, or self-driving, vehicles. Recent advances in the field of perception, planning,...
232 citations
TL;DR: In this paper, the authors estimate that close to 40% of multinational profits are shifted to low-tax countries each year by combining new macroeconomic statistics on the activities of multinational companies with the national accounts of tax havens and the world's other countries.
Abstract: By combining new macroeconomic statistics on the activities of multinational companies with the national accounts of tax havens and the world's other countries, we estimate that close to 40% of multinational profits are shifted to low-tax countries each year. Profit shifting is highest among U.S. multinationals; the tax revenue losses are largest for the European Union and developing countries. We show theoretically and empirically that in the current international tax system, tax authorities of high-tax countries do not have incentives to combat profit shifting to tax havens. They instead focus their enforcement effort on relocating profits booked in other high-tax places - in effect stealing revenue from each other. This policy failure can explain the persistence of profit shifting to low-tax countries despite the sizeable costs involved for high-tax countries. We provide a new cross-country database of GDP, corporate profits, trade balances, and factor shares corrected for profit shifting, showing that the global rise of the corporate capital share is significantly under-estimated.
183 citations
TL;DR: The World Uncertainty Index (WUI) as mentioned in this paper measures the frequency of the word "uncertainty" in the quarterly Economist Intelligence Unit country reports from 1996 to 2016.
Abstract: We construct a new index of uncertainty — the World Uncertainty Index (WUI) — for 143 individual countries on a quarterly basis from 1996 onwards. This is defined using the frequency of the word “uncertainty” in the quarterly Economist Intelligence Unit country reports. Globally, the Index spikes near the 9/11 attack, SARS outbreak, Gulf War II, Euro debt crisis, El Niño, European border crisis, UK Brexit vote and the 2016 US election. Uncertainty spikes tend to be more synchronized within advanced economies and between economies with tighter trade and financial linkages. The level of uncertainty is significantly higher in developing countries and is positively associated with economic policy uncertainty and stock market volatility, and negatively with GDP growth. In a panel vector autoregressive setting, we find that innovations in the WUI foreshadow significant declines in output.
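The core of the index construction described above is a word-frequency count. Below is a minimal sketch of that idea, assuming plain-text country reports stored locally; the file layout, tokenization, and per-1,000-words scaling are assumptions for illustration rather than the authors' exact procedure.

```python
# Minimal sketch of a word-frequency uncertainty index, in the spirit of
# the WUI described above. File layout, tokenization, and scaling are
# assumptions for illustration; the authors' exact procedure differs.
import re
from pathlib import Path

def uncertainty_share(report_text: str) -> float:
    """Occurrences of 'uncertain*' per 1,000 words in one country report."""
    words = re.findall(r"[A-Za-z']+", report_text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.startswith("uncertain"))
    return 1000.0 * hits / len(words)

# Hypothetical directory of quarterly reports named like "DEU_2016Q2.txt"
index = {
    path.stem: uncertainty_share(path.read_text(encoding="utf-8"))
    for path in Path("reports").glob("*.txt")
}
for report, value in sorted(index.items()):
    print(f"{report}: {value:.2f}")
```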
165 citations
TL;DR: The aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it.
Abstract: Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection. Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors. While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well. Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions.
159 citations
TL;DR: The authors synthesize the recent generation of immigration-crime research focused on macrosocial units using a two-pronged approach that combines the qualitative method of narrative review with the quantitative strategy of systematic meta-analysis.
Abstract: Are immigration and crime related? This review addresses this question in order to build a deeper understanding of the immigration-crime relationship. We synthesize the recent generation (1994 to 2014) of immigration-crime research focused on macrosocial (i.e., geospatial) units using a two-pronged approach that combines the qualitative method of narrative review with the quantitative strategy of systematic meta-analysis. After briefly reviewing contradictory theoretical arguments that scholars have invoked in efforts to explain the immigration-crime relationship, we present findings from our analysis, which (a) determined the average effect of immigration on crime rates across the body of literature and (b) assessed how variations in key aspects of research design have impacted results obtained in prior studies. Findings indicate that, overall, the immigration-crime association is negative—but very weak. At the same time, there is significant variation in findings across studies. Study design features, i...
TL;DR: The Global Wetland Outlook as discussed by the authors provides a current overview of wetlands: their extent, trends, drivers of change and the responses needed to reverse the historical decline in wetland area and quality.
Abstract: Conservation and wise use of wetlands are vital for human livelihoods. The wide range of ecosystem services wetlands provide means that they lie at the heart of sustainable development. Yet policy and decision-makers often underestimate the value of their benefits to nature and humankind. Understanding these values and what is happening to wetlands is critical to ensuring their conservation and wise use.
The Global Wetland Outlook, the flagship publication of the Ramsar Convention, provides a current overview of wetlands: their extent, trends, drivers of change and the responses needed to reverse the historical decline in wetland area and quality.
TL;DR: In this paper, the authors found that approximately one-quarter of all bitcoin users are involved in illegal activity, which is close to the scale of the US and European markets for illegal drugs.
Abstract: Cryptocurrencies are among the largest unregulated markets in the world. We find that approximately one-quarter of bitcoin users are involved in illegal activity. We estimate that around $76 billion of illegal activity per year involves bitcoin (46% of bitcoin transactions), which is close to the scale of the US and European markets for illegal drugs. The illegal share of bitcoin activity declines with mainstream interest in bitcoin and with the emergence of more opaque cryptocurrencies. The techniques developed in this paper have applications in cryptocurrency surveillance. Our findings suggest that cryptocurrencies are transforming the black markets by enabling “black e-commerce.”
TL;DR: This review presents the state of the art in distributional semantics, focusing on its assets and limits as a model of meaning and as a method for semantic analysis.
Abstract: Distributional semantics is a usage-based model of meaning, based on the assumption that the statistical distribution of linguistic items in context plays a key role in characterizing their semantic behavior. Distributional models build semantic representations by extracting co-occurrences from corpora and have become a mainstream research paradigm in computational linguistics. In this review, I present the state of the art in distributional semantics, focusing on its assets and limits as a model of meaning and as a method for semantic analysis.
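A toy sketch of the co-occurrence idea follows, using a three-sentence corpus; real distributional models use large corpora and weighting or embedding schemes, so this is only meant to show the mechanics.

```python
# Toy illustration of distributional semantics: build a word-by-word
# co-occurrence matrix from a tiny corpus and compare words by cosine
# similarity. Corpus and window size are illustrative only.
from collections import Counter
import math

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
window = 2
cooc = Counter()
for sentence in corpus:
    tokens = sentence.split()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                cooc[(w, tokens[j])] += 1

vocab = sorted({w for pair in cooc for w in pair})

def vector(word):
    return [cooc[(word, other)] for other in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

print("cat ~ dog:", round(cosine(vector("cat"), vector("dog")), 3))
print("cat ~ mat:", round(cosine(vector("cat"), vector("mat")), 3))
```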
TL;DR: In this article, the authors derived an expression for the general difference-in-differences estimator and showed that it is a weighted average of all possible two-group/two-period estimators in the data.
Abstract: The canonical difference-in-differences (DD) model contains two time periods, “pre” and “post”, and two groups, “treatment” and “control”. Most DD applications, however, exploit variation across groups of units that receive treatment at different times. This paper derives an expression for this general DD estimator, and shows that it is a weighted average of all possible two-group/two-period DD estimators in the data. This result provides detailed guidance about how to use regression DD in practice. I define the DD estimand and show how it averages treatment effect heterogeneity and that it is biased when effects change over time. I propose a new balance test derived from a unified definition of common trends. I show how to decompose the difference between two specifications, and I apply it to models that drop untreated units, weight, disaggregate time fixed effects, control for unit-specific time trends, or exploit a third difference.
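In notation, a sketch consistent with the abstract's statement (the exact form of the weights is derived in the paper):

```latex
\begin{align*}
  % Canonical two-group/two-period DD: treated group T vs. control group C,
  % comparing post- and pre-period mean outcomes:
  \hat{\beta}^{2\times 2}_{TC}
    &= \bigl(\bar{y}^{\,\mathrm{post}}_{T} - \bar{y}^{\,\mathrm{pre}}_{T}\bigr)
     - \bigl(\bar{y}^{\,\mathrm{post}}_{C} - \bar{y}^{\,\mathrm{pre}}_{C}\bigr), \\
  % The abstract's result: the general DD estimator is a weighted average of
  % all such two-group/two-period comparisons in the data, with weights that
  % sum to one and depend on group sizes and the variance of treatment:
  \hat{\beta}^{DD}
    &= \sum_{j} w_{j}\, \hat{\beta}^{2\times 2}_{j},
     \qquad \sum_{j} w_{j} = 1 .
\end{align*}
```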
TL;DR: In this paper, the authors build a simple equilibrium model of credit provision in which to evaluate the impacts of statistical technology on the fairness of outcomes across categories such as race and gender, and apply it to detailed data on US mortgages and applications.
Abstract: Recent innovations in statistical technology, including in evaluating creditworthiness, have sparked concerns about impacts on the fairness of outcomes across categories such as race and gender. We build a simple equilibrium model of credit provision in which to evaluate such impacts. We find that as statistical technology changes, the effects on disparity depend on a combination of the changes in the functional form used to evaluate creditworthiness using underlying borrower characteristics and the cross-category distribution of these characteristics. Employing detailed data on US mortgages and applications, we predict default using a number of popular machine learning techniques, and embed these techniques in our equilibrium model to analyze both extensive margin (exclusion) and intensive margin (rates) impacts on disparity. We propose a basic measure of cross-category disparity, and find that the machine learning models perform worse on this measure than logit models, especially on the intensive margin. We discuss the implications of our findings for mortgage policy.
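As a purely hypothetical illustration of the kind of comparison described, the sketch below fits a logit and a tree-ensemble default model on simulated data and compares a simple cross-group gap in predicted default rates; the group variable, the gap measure, and the data are placeholders, not the paper's disparity measure or its US mortgage data.

```python
# Hypothetical illustration: compare a logit with an ML default model on a
# simple cross-group disparity gap. The simulated data, the group variable,
# and the gap measure are placeholders, not the paper's measure or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 10000
group = rng.integers(0, 2, size=n)                   # stand-in for a protected category
x = rng.normal(size=(n, 5)) + 0.3 * group[:, None]   # borrower characteristics differ by group
p_default = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1] * x[:, 2] - 1.5)))
y = rng.binomial(1, p_default)

logit = LogisticRegression(max_iter=1000).fit(x, y)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(x, y)

def gap(model):
    """Difference in mean predicted default probability across the two groups."""
    p = model.predict_proba(x)[:, 1]
    return p[group == 1].mean() - p[group == 0].mean()

print("logit disparity gap :", round(gap(logit), 4))
print("forest disparity gap:", round(gap(forest), 4))
```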
TL;DR: The technology behind creating artificial touch sensations and the relevant aspects of human touch are reviewed and the need to consider the neuroscience and perception behind the human sense of touch in the design and control of haptic devices is addressed.
Abstract: This article reviews the technology behind creating artificial touch sensations and the relevant aspects of human touch. We focus on the design and control of haptic devices and discuss the best practices for generating distinct and effective touch sensations. Artificial haptic sensations can present information to users, help them complete a task, augment or replace the other senses, and add immersiveness and realism to virtual interactions. We examine these applications in the context of different haptic feedback modalities and the forms that haptic devices can take. We discuss the prior work, limitations, and design considerations of each feedback modality and individual haptic technology. We also address the need to consider the neuroscience and perception behind the human sense of touch in the design and control of haptic devices.
TL;DR: In this article, the authors study green bonds, which are bonds whose proceeds are used for environmentally sensitive purposes, and find that green municipal bonds are issued at a premium to otherwise similar ordinary bonds.
Abstract: We study green bonds, which are bonds whose proceeds are used for environmentally sensitive purposes. After an overview of the U.S. corporate and municipal green bonds markets, we study pricing and ownership patterns using a simple framework that incorporates assets with nonpecuniary utility. As predicted, we find that green municipal bonds are issued at a premium to otherwise similar ordinary bonds. We also confirm that green bonds, particularly small or essentially riskless ones, are more closely held than ordinary bonds. These pricing and ownership effects are strongest for bonds that are externally certified as green.
TL;DR: The relationship between p-values and minimum Bayes factors also depends on the sample size and on the dimension of the parameter of interest, as discussed by the authors, who consider two-sided significance tests for a point null hypothesis in more detail.
Abstract: The p-value quantifies the discrepancy between the data and a null hypothesis of interest, usually the assumption of no difference or no effect. A Bayesian approach allows the calibration of p-values by transforming them to direct measures of the evidence against the null hypothesis, so-called Bayes factors. We review the available literature in this area and consider two-sided significance tests for a point null hypothesis in more detail. We distinguish simple from local alternative hypotheses and contrast traditional Bayes factors based on the data with Bayes factors based on p-values or test statistics. A well-known finding is that the minimum Bayes factor, the smallest possible Bayes factor within a certain class of alternative hypotheses, provides less evidence against the null hypothesis than the corresponding p-value might suggest. It is less known that the relationship between p-values and minimum Bayes factors also depends on the sample size and on the dimension of the parameter of interest. We i...
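For reference, the standard minimum Bayes factor formulas usually cited in this literature, which may differ in detail from the paper's own calibrations, are:

```latex
\begin{align*}
  % Simple (point) alternatives, two-sided test with observed z-statistic z:
  \underline{\mathrm{BF}}_{\mathrm{simple}} &= \exp\!\bigl(-z^{2}/2\bigr), \\
  % Local alternatives (Sellke-Bayarri-Berger bound), valid for p < 1/e:
  \underline{\mathrm{BF}}_{\mathrm{local}} &\geq -e\, p \log p .
\end{align*}
% Example: p = 0.05 gives z = 1.96, so exp(-z^2/2) is about 0.15 and
% -e p log p is about 0.41, i.e. the evidence against the null is weaker
% than the conventional "1 in 20" reading of the p-value suggests.
```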
TL;DR: In this article, a robust stochastic discount factor (SDF) is proposed to summarize the joint explanatory power of a large number of cross-sectional stock return predictors.
Abstract: We construct a robust stochastic discount factor (SDF) that summarizes the joint explanatory power of a large number of cross-sectional stock return predictors. Our method achieves robust out-of-sample performance in this high-dimensional setting by imposing an economically motivated prior on SDF coefficients that shrinks the contributions of low-variance principal components of the candidate factors. While empirical asset pricing research has focused on SDFs with a small number of characteristics-based factors --- e.g., the four- or five-factor models discussed in the recent literature --- we find that such a characteristics-sparse SDF cannot adequately summarize the cross-section of expected stock returns. However, a relatively small number of principal components of the universe of potential characteristics-based factors can approximate the SDF quite well.
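The economically motivated prior described above works like ridge shrinkage applied in principal-component space. A stylized NumPy sketch of that mechanism follows; the simulated factor returns and the shrinkage constant are arbitrary, and this is not the authors' estimator or data.

```python
# Stylized sketch of shrinking SDF coefficients more heavily for
# low-variance principal components, the mechanism described above.
# Simulated factor returns and the shrinkage constant gamma are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
T, K = 600, 50
F = rng.normal(size=(T, K)) @ rng.normal(scale=0.3, size=(K, K))  # correlated "factor" returns

mu = F.mean(axis=0)                      # sample mean returns
Sigma = np.cov(F, rowvar=False)          # sample covariance
eigval, Q = np.linalg.eigh(Sigma)        # principal components of the factors

gamma = 0.5                              # shrinkage strength (arbitrary here)
mu_pc = Q.T @ mu                         # mean returns rotated into PC space
b_pc = mu_pc / (eigval + gamma)          # ridge-like coefficients: low-eigenvalue PCs are damped most
b = Q @ b_pc                             # back to factor space: SDF coefficients

print("largest-PC coefficient :", b_pc[-1].round(4))
print("smallest-PC coefficient:", b_pc[0].round(4))
```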
TL;DR: In this paper, the authors present a monthly indicator of geopolitical risk based on a tally of newspaper articles covering geopolitical tensions, and examine its evolution and effects since 1985, concluding that high geopolitical risk leads to a decline in real activity, lower stock returns, and movements in capital flows away from emerging economies and towards advanced economies.
Abstract: We present a monthly indicator of geopolitical risk based on a tally of newspaper articles covering geopolitical tensions, and examine its evolution and effects since 1985. The geopolitical risk (GPR) index spikes around the Gulf War, after 9/11, during the 2003 Iraq invasion, during the 2014 Russia-Ukraine crisis, and after the Paris terrorist attacks. High geopolitical risk leads to a decline in real activity, lower stock returns, and movements in capital flows away from emerging economies and towards advanced economies. When we decompose the index into threats and acts components, the adverse effects of geopolitical risk are mostly driven by the threat of adverse geopolitical events. Extending our index back to 1900, geopolitical risk rose dramatically during World War I and World War II, was elevated in the early 1980s, and has drifted upward since the beginning of the 21st century.
TL;DR: The results suggest that there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.
Abstract: Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles --- under repeated exposure of one style, scenario effects obscure any explanation effects. Our results suggest there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.
TL;DR: This article analyzed a dataset of 2390 completed ICOs, which raised a total of $12 billion in capital, nearly all since January 2017, and found evidence of significant ICO underpricing, with average returns of 179% from the ICO price to the first day's opening market price, over a holding period that averages just 16 days.
Abstract: We analyze a dataset of 2390 completed ICOs, which raised a total of $12 billion in capital, nearly all since January 2017. We find evidence of significant ICO underpricing, with average returns of 179% from the ICO price to the first day's opening market price, over a holding period that averages just 16 days. After trading begins, tokens continue to appreciate in price, generating average buy-and-hold abnormal returns of 48% in the first 30 trading days. We also study the determinants of ICO underpricing and relate cryptocurrency prices to Twitter activity.
TL;DR: In this article, a theoretical framework for how venture uncertainty, venture quality, and investor opportunity set interrelate is developed to evaluate the performance of initial coin offering (ICO) campaigns.
Abstract: Initial Coin Offerings (ICOs) are a new and unregulated form of crowdfunding that raises funds through a blockchain by selling venture-related tokens or coins in exchange for legal tender or cryptocurrencies. In this paper, we establish token or coin tradability as the primary ICO success measure, and we develop a theoretical framework for how venture uncertainty, venture quality, and investor opportunity set interrelate. We use the largest available dataset to date, consisting of 1,009 ICOs from 2015 to March 2018. Our data highlights that venture uncertainty (not being on Github and Telegram, shorter whitepapers, higher percentage of tokens distributed) is negatively correlated, while higher venture quality (better connected CEOs and larger team size) is positively correlated, with ICO success. Moreover, providing a hard cap in a pre-ICO can help investors measure success in the pre-sale. This is another positive signal of funding success.
TL;DR: This Article argues that a new data protection right, the "right to reasonable inferences", is needed to help close the accountability gap currently posed by “high risk inferences,” meaning inferences drawn from Big Data analytics that damage privacy or reputation, or have low verifiability in the sense of being predictive or opinion-based while being used in important decisions.
Abstract: Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice (ECJ).
This Article shows that individuals are granted little control and oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively ‘economy class’ personal data in the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Art 13-15), rectify (Art 16), delete (Art 17), object to (Art 21), or port (Art 20) personal data are significantly curtailed for inferences. The GDPR also provides insufficient protection against sensitive inferences (Art 9) or remedies to challenge inferences or important decisions based on them (Art 22(3)).
This situation is not accidental. In standing jurisprudence the ECJ has consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing, and to rectify, block, or erase it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent. Current policy proposals addressing privacy protection (the ePrivacy Regulation and the EU Digital Content Directive) and Europe’s new Copyright Directive and Trade Secrets Directive also fail to close the GDPR’s accountability gaps concerning inferences.
This Article argues that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high risk inferences’, meaning inferences drawn from Big Data analytics that damage privacy or reputation, or have low verifiability in the sense of being predictive or opinion-based while being used in important decisions. This right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data form a normatively acceptable basis from which to draw inferences; (2) why these inferences are relevant and normatively acceptable for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged.
TL;DR: It is shown that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.
Abstract: Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.
Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible.
In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.
TL;DR: In this paper, the authors review the conceptualization and operationalization of job insecurity (JI), and review empirical studies of the antecedents, consequences, and moderators of JI effects, as well as the various theoretical perspectives used to explain the relationship of JI to various outcomes.
Abstract: This article updates our understanding of the field of job insecurity (JI) by incorporating studies across the globe since 2003, analyzes what we know, and offers ideas on how to move forward. We begin by reviewing the conceptualization and operationalization of job insecurity. We then review empirical studies of the antecedents, consequences, and moderators of JI effects, as well as the various theoretical perspectives used to explain the relationship of JI to various outcomes. Our analyses also consider JI research in different regions of the world, highlighting the cross-cultural differences. We conclude by identifying areas in need of future research. We propose that JI is and will continue to be a predominant employment issue, such that research into it will only increase in importance and relevance. In particular, we call for in-depth research that carefully considers the rapid changes in the workplace today and in the future.
TL;DR: A survey of over 74,000 online news consumers in 37 countries including the US and UK was conducted by YouGov as discussed by the authors, covering the issues of trust and misinformation, new online business models, the impact of changing Facebook algorithms, and the rise of new platforms and messaging apps.
Abstract: This year's report reveals new insights about digital news consumption based on a YouGov survey of over 74,000 online news consumers in 37 countries including the US and UK.
The report focuses on the issues of trust and misinformation, new online business models, the impact of changing Facebook algorithms and the rise of new platforms and messaging apps.
Summary of some of the most important findings:
The use of social media for news has started to fall in a number of key markets — after years of continuous growth. Usage is down six percentage points in the United States, and is also down in the UK and France. Almost all of this is due to a specific decline in the discovery, posting, and sharing of news on Facebook.
At the same time, we continue to see a rise in the use of messaging apps for news as consumers look for more private (and less confrontational) spaces to communicate. WhatsApp is now used for news by around half of our sample of online users in Malaysia (54%) and Brazil (48%) and by around a third in Spain (36%) and Turkey (30%).
Across all countries, the average level of trust in the news in general remains relatively stable at 44%, with just over half (51%) agreeing that they trust the news media they themselves use most of the time. By contrast, 34% of respondents say they trust news they find via search and fewer than a quarter (23%) say they trust the news they find in social media.
Over half (54%) agree or strongly agree that they are concerned about what is real and fake on the internet. This is highest in countries like Brazil (85%), Spain (69%), and the United States (64%) where polarised political situations combine with high social media use. It is lowest in Germany (37%) and the Netherlands (30%) where recent elections were largely untroubled by concerns over fake content.
Most respondents believe that publishers (75%) and platforms (71%) have the biggest responsibility to fix problems of fake and unreliable news. This is because much of the news they complain about relates to biased or inaccurate news from the mainstream media rather than news that is completely made up or distributed by foreign powers.
There is some public appetite for government intervention to stop fake news, especially in Europe (60%) and Asia (63%). By contrast, only four in ten Americans (41%) thought that government should do more.
For the first time we have measured news literacy. Those with higher levels of news literacy tend to prefer newspapers brands over TV, and use social media for news very differently from the wider population. They are also more cautious about interventions by governments to deal with misinformation.
With Facebook looking to incorporate survey-driven brand trust scores into its algorithms, we reveal in this report the most and least trusted brands in 37 countries based on similar methodologies. We find that brands with a broadcasting background and long heritage tend to be trusted most, with popular newspapers and digital-born brands trusted least.
News apps, email newsletters, and mobile notifications continue to gain in importance. But in some countries users are starting to complain they are being bombarded with too many messages. This appears to be partly because of the growth of alerts from aggregators such as Apple News and Upday.
The average number of people paying for online news has edged up in many countries, with significant increases coming from Norway (+4 percentage points), Sweden (+6), and Finland (+4). All these countries have a small number of publishers, the majority of whom are relentlessly pursuing a variety of paywall strategies. But in more complex and fragmented markets, there are still many publishers who offer online news for free.
Last year’s significant increase in subscriptions in the United States (the so-called Trump Bump) has been maintained, while donations and donation-based memberships are emerging as a significant alternative strategy in Spain and the UK, as well as in the United States. These payments are closely linked with political belief (the left) and come disproportionately from the young.
Privacy concerns have reignited the growth in ad-blocking software. More than a quarter now block on any device (27%) — but that ranges from 42% in Greece to 13% in South Korea. Television remains a critical source of news for many — but declines in annual audience continue to raise new questions about the future role of public broadcasters and their ability to attract the next generation of viewers.
Consumers remain reluctant to view news video within publisher websites and apps. Over half of consumption happens in third-party environments like Facebook and YouTube. Americans and Europeans would like to see fewer online news videos; Asians tend to want more. Podcasts are becoming popular across the world due to better content and easier distribution. They are almost twice as popular in the United States (33%) as they are in the UK (18%). Young people are far more likely to use podcasts than listen to speech radio.
Voice-activated digital assistants like the Amazon Echo and Google Home continue to grow rapidly, opening new opportunities for news audio. Usage has more than doubled in the United States, Germany, and the UK with around half of those who have such devices using them for news and information.
TL;DR: The existence of an anticipatory trading channel through which HFTs may increase non-HFT trading costs is supported, and results are not fully explained by HFTs reacting faster to signals that non-HFTs also observe.
Abstract: This study tests the hypothesis that high-frequency traders (HFTs) identify patterns that allow them to anticipate and trade ahead of other investors’ order flow. HFTs’ aggressive purchases and sales lead those of other investors. The effect is consistently stronger for a subset of HFTs and at times when non-HFTs are hypothesized to be less focused on disguising order flow. These results are not fully explained by HFTs reacting faster to signals that non-HFTs also observe such as news, contrarian or trend-chasing behavior by non-HFTs, and trader misclassification. These findings support the existence of an anticipatory trading channel through which HFTs may increase non-HFT trading costs.
TL;DR: In this paper, the authors present an analysis of the literature concerning the impact of corporate sustainability on corporate financial performance, and find that 78% of publications report a positive relationship between corporate sustainability and financial performance.
Abstract: This paper presents an analysis of the literature concerning the impact of corporate sustainability on corporate financial performance. The relationship between corporate sustainable practices and financial performance has received growing attention in research, yet a consensus remains elusive. This paper identifies developing trends and the issues that hinder conclusive consensus on that relationship. We used content analysis to examine the literature and establish the current state of research. A total of 132 papers from top-tier journals are shortlisted. We find that 78% of publications report a positive relationship between corporate sustainability and financial performance. Variations in research methodology and measurement of variables lead to the divergent views on the relationship. Furthermore, literature is slowly replacing total sustainability with narrower corporate social responsibility (CSR), which is dominated by the social dimension of sustainability, while encompassing little to nothing of environmental and economic dimensions. Studies from developing countries remain scarce. More research is needed to facilitate convergence in the understanding of the relationship between corporate sustainable practices and financial performance.