
Showing papers by "J.P. Morgan & Co." published in 2021


Journal ArticleDOI
TL;DR: Baek et al. use the high-frequency, decentralized implementation of Stay-at-Home (SAH) orders, along with high-frequency unemployment insurance (UI) claims, to disentangle the relative effect of SAH orders from the general economic disruption wrought by the COVID-19 pandemic.
Abstract: Authors: Baek, ChaeWon; McCrory, Peter B.; Messer, Todd; Mui, Preston. Epidemiological models projected that, without effective mitigation strategies, upwards of 2 million Americans were at risk of death from the COVID-19 pandemic. Heeding the warning, in mid-March 2020, state and local officials in the United States began issuing Stay-at-Home (SAH) orders, instructing people to remain at home except to do essential tasks or to do work deemed essential. By April 4th, 2020, nearly 95% of the U.S. population was under such orders. Over the same three-week period, initial claims for unemployment spiked to unprecedented levels. In this paper, we use the high-frequency, decentralized implementation of SAH orders, along with high-frequency unemployment insurance (UI) claims, to disentangle the relative effect of SAH orders from the general economic disruption wrought by the pandemic that affected all regions similarly. We find that, all else equal, each week of Stay-at-Home exposure increased a state's weekly initial UI claims by 1.9% of its employment level relative to other states. Ignoring cross-regional spillovers, a back-of-the-envelope calculation implies that, of the 17 million UI claims made between March 14 and April 4, only 4 million were attributable to the Stay-at-Home orders. This evidence suggests that the direct effect of SAH orders accounted for a substantial, but minority, share of the overall initial rise in unemployment claims. We present a stylized currency union model to provide conditions under which this estimate represents an upper or lower bound for aggregate employment losses attributable to SAH orders.
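As a quick arithmetic check on the abstract's back-of-the-envelope claim, the sketch below applies the 1.9%-per-week estimate to hypothetical state-level figures; the paper's actual calculation uses real state employment and SAH-exposure data, which is not reproduced here.

```python
# Back-of-the-envelope sketch of the abstract's headline calculation.
# The 1.9%-per-week effect and the 17M total claims come from the abstract;
# the state-level employment and exposure figures below are hypothetical.
states = {
    "State A": (10_000_000, 3.0),  # (employment, weeks under SAH orders)
    "State B": (5_000_000, 2.0),
    "State C": (8_000_000, 1.5),
}

EFFECT_PER_WEEK = 0.019  # weekly initial UI claims rise by 1.9% of
                         # employment per week of Stay-at-Home exposure

sah_claims = sum(emp * EFFECT_PER_WEEK * weeks for emp, weeks in states.values())
total_claims = 17_000_000  # initial UI claims filed March 14 - April 4, 2020

print(f"claims attributable to SAH orders: {sah_claims:,.0f}")
print(f"share of total: {sah_claims / total_claims:.1%}")
```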

110 citations


Journal ArticleDOI
TL;DR: In this article, a two-stream fusion network is designed to serve as a regularizer for the fusion problem, which is formulated as an optimization problem whose solution can be obtained by solving a Sylvester equation.
Abstract: Hyperspectral image (HSI) super-resolution is commonly used to overcome the hardware limitations of existing hyperspectral imaging systems on spatial resolution. It fuses a low-resolution (LR) HSI and a high-resolution (HR) conventional image of the same scene to obtain an HR HSI. In this work, we propose a method that integrates a physical model and deep prior information. Specifically, a novel, yet effective two-stream fusion network is designed to serve as a regularizer for the fusion problem. This fusion problem is formulated as an optimization problem whose solution can be obtained by solving a Sylvester equation. Furthermore, the regularization parameter is simultaneously estimated to automatically adjust the contributions of the physical model and the learned prior when reconstructing the final HR HSI. Experimental results on both simulated and real data demonstrate the superiority of the proposed method over other state-of-the-art methods in both quantitative and qualitative comparisons.
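For intuition about the closed-form step mentioned above, here is a minimal sketch of a Sylvester-structured update using SciPy. The matrices are random stand-ins, not the paper's actual spectral/spatial operators or its learned regularizer.

```python
# Toy illustration of a Sylvester-structured fusion update, assuming the
# fusion normal equations can be arranged as A X + X B = C.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
bands, pixels = 8, 16

A = np.eye(bands) + 0.1 * rng.standard_normal((bands, bands))     # spectral coupling
B = np.eye(pixels) + 0.1 * rng.standard_normal((pixels, pixels))  # spatial coupling
C = rng.standard_normal((bands, pixels))  # stacked data-fidelity + prior terms

X = solve_sylvester(A, B, C)  # closed-form solve of A X + X B = C
print("residual:", np.linalg.norm(A @ X + X @ B - C))  # ~1e-13
```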

33 citations


Proceedings ArticleDOI
01 May 2021
TL;DR: CanDID, as mentioned in this paper, is a platform for the practical, user-friendly realization of decentralized identity, the idea of empowering end users with management of their own credentials; it issues credentials in a user-friendly way that draws securely and privately on data from existing, unmodified web service providers.
Abstract: We present CanDID, a platform for practical, user-friendly realization of decentralized identity, the idea of empowering end users with management of their own credentials. While decentralized identity promises to give users greater control over their private data, it burdens users with management of private keys, creating a significant risk of key loss. Existing and proposed approaches also presume the spontaneous availability of a credential-issuance ecosystem, creating a bootstrapping problem. They also omit essential functionality, like resistance to Sybil attacks and the ability to detect misbehaving or sanctioned users while preserving user privacy. CanDID addresses these challenges by issuing credentials in a user-friendly way that draws securely and privately on data from existing, unmodified web service providers. Such legacy compatibility similarly enables CanDID users to leverage their existing online accounts for recovery of lost keys. Using a decentralized committee of nodes, CanDID provides strong confidentiality for users' keys, real-world identities, and data, yet prevents users from spawning multiple identities and allows identification (and blacklisting) of sanctioned users. We present the CanDID architecture and report on experiments demonstrating its practical performance.

24 citations


Proceedings ArticleDOI
23 May 2021
TL;DR: An open-source, Ethereum-based implementation of the Anonymous Zether construction is presented, which features proofs which grow only logarithmically in the size of the “anonymity sets” used, improving upon the linear growth attained by prior efforts.
Abstract: Anonymous Zether, proposed by Bunz, Agrawal, Zamani, and Boneh (FC'20), is a private payment design whose wallets demand little bandwidth and need not remain online; this unique property makes it a compelling choice for resource-constrained devices. In this work, we describe an efficient construction of Anonymous Zether. Our protocol features proofs which grow only logarithmically in the size of the "anonymity sets" used, improving upon the linear growth attained by prior efforts. It also features competitive transaction sizes in practice (on the order of 3 kilobytes). Our central tool is a new family of extensions to Groth and Kohlweiss's one-out-of-many proofs (Eurocrypt 2015), which efficiently prove statements about many messages among a list of commitments. These extensions prove knowledge of a secret subset of a public list, and assert that the commitments in the subset satisfy certain properties (expressed as linear equations). Remarkably, our communication remains logarithmic; our computation increases only by a logarithmic multiplicative factor. This technique is likely to be of independent interest. We present an open-source, Ethereum-based implementation of our Anonymous Zether construction.

21 citations


Book ChapterDOI
16 Aug 2021
TL;DR: In this paper, the communication complexity of the Damgard and Nielsen protocol is improved by 33% and its round complexity by a factor of 2, the first significant improvements to the basic semi-honest protocol in the honest majority setting.
Abstract: In this work, we address communication, computation, and round efficiency of unconditionally secure multi-party computation for arithmetic circuits in the honest majority setting. We achieve both algorithmic and practical improvements: The best known result in the semi-honest setting has been due to Damgard and Nielsen (CRYPTO 2007). Over the last decade, their construction has played an important role in the progress of efficient secure computation. However, despite a number of follow-up works, any significant improvements to the basic semi-honest protocol have been hard to come by. We show a \(33\%\) improvement in the communication complexity of this protocol. We show how to generalize this result to the malicious setting, leading to the best known unconditional honest majority MPC with malicious security. We also focus on the round complexity of the Damgard and Nielsen protocol and improve it by a factor of 2. Our improvement relies on a novel observation about the interplay between Damgard and Nielsen multiplication and Beaver triple multiplication. An implementation of our constructions shows an execution run time improvement over the state of the art ranging from \(30\%\) to \(50\%\).
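The Beaver triple multiplication the abstract's round-complexity observation builds on is a textbook MPC primitive. Below is a minimal sketch over additive secret shares; the three-party setting and field modulus are arbitrary choices here, and this is not the paper's optimized protocol.

```python
# Textbook Beaver-triple multiplication over additive shares mod a prime.
import random

P = 2**61 - 1  # prime field modulus (2^61 - 1 is a Mersenne prime)

def share(x, n=3):
    """Split x into n additive shares mod P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh):
    """Multiply secret-shared values using a preprocessed triple (a, b, c = a*b)."""
    a, b = random.randrange(P), random.randrange(P)
    a_sh, b_sh, c_sh = share(a), share(b), share(a * b % P)
    # Each party opens its shares of x - a and y - b; d and e become public.
    d = reconstruct([(xs - s) % P for xs, s in zip(x_sh, a_sh)])
    e = reconstruct([(ys - s) % P for ys, s in zip(y_sh, b_sh)])
    # [xy] = [c] + d*[b] + e*[a] + d*e   (the public d*e is added by one party)
    z_sh = [(cs + d * bs + e * as_) % P for cs, bs, as_ in zip(c_sh, b_sh, a_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P
    return z_sh

x_sh, y_sh = share(1234), share(5678)
assert reconstruct(beaver_multiply(x_sh, y_sh)) == (1234 * 5678) % P
print("shared multiplication matches plain multiplication")
```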

20 citations


Book ChapterDOI
16 Aug 2021
TL;DR: The best known n-party unconditional multiparty computation protocols with an optimal corruption threshold communicate O(n) field elements per gate, as discussed by the authors; this has been the case even in the semi-honest setting despite over a decade of research on communication complexity.
Abstract: The best known n-party unconditional multiparty computation protocols with an optimal corruption threshold communicate O(n) field elements per gate. This has been the case even in the semi-honest setting despite over a decade of research on communication complexity in this setting. Going to the slightly sub-optimal corruption setting, the work of Damgard, Ishai, and Kroigaard (EUROCRYPT 2010) provided the first protocol for a single circuit achieving communication complexity of \(O(\log |C|)\) elements per gate. While a number of works have improved upon this result, obtaining a protocol with O(1) field elements per gate has been an open problem.

14 citations


Book ChapterDOI
17 Oct 2021
TL;DR: This work constructs the first unconditional MPC protocol secure against a malicious adversary in the honest majority setting evaluating just a single boolean circuit with amortized communication complexity of O(n) bits per gate.
Abstract: We study the communication complexity of unconditionally secure multiparty computation (MPC) protocols in the honest majority setting. Despite tremendous efforts in achieving efficient protocols for binary fields under computational assumptions, there are no efficient unconditional MPC protocols in this setting. In particular, there are no n-party protocols with constant overhead admitting communication complexity of O(n) bits per gate. Cascudo, Cramer, Xing and Yuan (CRYPTO 2018) were the first to achieve such an overhead in the amortized setting by evaluating \(O(\log n)\) copies of the same circuit in the binary field in parallel. In this work, we construct the first unconditional MPC protocol secure against a malicious adversary in the honest majority setting evaluating just a single boolean circuit with amortized communication complexity of O(n) bits per gate.

12 citations


Proceedings ArticleDOI
14 Aug 2021
TL;DR: In this article, a Model-based Counterfactual Synthesizer (MCS) framework, built on a conditional generative adversarial net (CGAN), is proposed for interpreting machine learning models.
Abstract: Counterfactuals, an emerging type of model interpretation, have recently received attention from both researchers and practitioners. Counterfactual explanations formalize the exploration of "what-if" scenarios, and are an instance of example-based reasoning using a set of hypothetical data samples. Counterfactuals essentially show how the model decision alters with input perturbations. Existing methods for generating counterfactuals are mainly algorithm-based, which makes them time-inefficient and assumes the same counterfactual universe for different queries. To address these limitations, we propose a Model-based Counterfactual Synthesizer (MCS) framework for interpreting machine learning models. We first analyze the model-based counterfactual process and construct a base synthesizer using a conditional generative adversarial net (CGAN). To better approximate the counterfactual universe for rare queries, we employ the umbrella sampling technique when training the MCS framework. We also enhance the MCS framework by incorporating the causal dependence among attributes with model inductive bias, and validate its design correctness from the causality identification perspective. Experimental results on several datasets demonstrate the effectiveness as well as efficiency of our proposed MCS framework, and verify its advantages compared with other alternatives.
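A minimal sketch of the base-synthesizer idea: a generator conditioned on the desired model outcome emits candidate counterfactual feature vectors. The dimensions and architecture below are illustrative assumptions, and the umbrella-sampling and causal-enhancement steps are omitted; this is not the authors' MCS implementation.

```python
# Conditional-GAN skeleton: the generator is conditioned on a target class
# and synthesizes candidate counterfactual feature vectors for that class.
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES, Z_DIM = 10, 2, 16

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + N_CLASSES, 64), nn.ReLU(),
            nn.Linear(64, N_FEATURES),
        )
    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES + N_CLASSES, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))

G, D = Generator(), Discriminator()
# To query counterfactuals for target class 1, sample noise and condition:
z = torch.randn(5, Z_DIM)
y = nn.functional.one_hot(torch.ones(5, dtype=torch.long), N_CLASSES).float()
candidates = G(z, y)  # 5 synthesized counterfactual feature vectors
print(candidates.shape)  # torch.Size([5, 10])
```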

12 citations


Journal ArticleDOI
TL;DR: A camera-based system is presented that recognises, from a fusion of spatial and temporal information, the non-driving activities (NDAs) that may lead to different cognitive capabilities for take-over.
Abstract: It is of great importance to monitor the driver's status to achieve an intelligent and safe take-over transition in a Level 3 automated driving vehicle. We present a camera-based system to recognise the non-driving activities (NDAs) that may lead to different cognitive capabilities for take-over, based on a fusion of spatial and temporal information. The region of interest (ROI) is automatically selected based on the extracted masks of the driver and the object/device being interacted with. Then, the RGB image of the ROI (the spatial stream) and its associated current and historical optical flow frames (the temporal stream) are fed into a two-stream convolutional neural network (CNN) for the classification of NDAs. Such an approach is able to identify not only the object/device but also the interaction mode between the object and the driver, which enables a refined NDA classification. In this paper, we evaluated the performance of classifying 10 NDAs involving two types of devices (tablet and phone) and 5 types of tasks (emailing, reading, watching videos, web-browsing and gaming) for 10 participants. Results show that the proposed system improves the average classification accuracy from 61.0%, when using a single spatial stream, to 90.5%.
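A minimal sketch of the two-stream fusion idea, assuming toy backbones and simple feature concatenation; the paper's actual architecture and fusion scheme are not specified here.

```python
# One CNN stream for the RGB ROI, one for stacked optical-flow frames,
# fused by concatenation before the NDA classifier.
import torch
import torch.nn as nn

N_CLASSES = 10  # 2 devices x 5 tasks, as in the paper's evaluation

def conv_stream(in_channels):
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class TwoStreamNDA(nn.Module):
    def __init__(self, n_flow_frames=10):
        super().__init__()
        self.spatial = conv_stream(3)                    # RGB ROI
        self.temporal = conv_stream(2 * n_flow_frames)   # stacked flow (dx, dy)
        self.classifier = nn.Linear(32 + 32, N_CLASSES)
    def forward(self, rgb, flow):
        return self.classifier(torch.cat([self.spatial(rgb), self.temporal(flow)], 1))

model = TwoStreamNDA()
rgb = torch.randn(4, 3, 112, 112)
flow = torch.randn(4, 20, 112, 112)
print(model(rgb, flow).shape)  # torch.Size([4, 10])
```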

9 citations


Journal ArticleDOI
26 Oct 2021
TL;DR: In this article, a large ensemble of climate simulations forced by observed sea surface temperatures (SSTs) is analyzed to demonstrate that seasonal variations of baroclinic wave activity (BWA) are potentially predictable.
Abstract: Midlatitude baroclinic waves drive extratropical weather and climate variations, but their predictability beyond 2 weeks has been deemed low. Here we analyze a large ensemble of climate simulations forced by observed sea surface temperatures (SSTs) and demonstrate that seasonal variations of baroclinic wave activity (BWA) are potentially predictable. This potential seasonal predictability is denoted by robust BWA responses to SST forcings. To probe regional sources of the potential predictability, a regression analysis is applied to the SST-forced large ensemble simulations. By filtering out variability internal to the atmosphere and land, this analysis identifies both well-known and unfamiliar BWA responses to SST forcings across latitudes. Finally, we confirm the model-indicated predictability by showing that an operational seasonal prediction system can leverage some of the identified SST-BWA relationships to achieve skillful predictions of BWA. Our findings help to extend long-range predictions of the statistics of extratropical weather events and their impacts.
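The "filtering out variability internal to the atmosphere" step works because averaging across ensemble members cancels member-specific noise while the SST-forced signal survives. A sketch of that regression logic on synthetic stand-in data (all numbers invented):

```python
# Regress the ensemble-mean BWA onto an SST forcing index; the ensemble
# mean suppresses internal variability, exposing the forced response.
import numpy as np

rng = np.random.default_rng(1)
n_members, n_seasons = 30, 40

sst_index = rng.standard_normal(n_seasons)              # e.g., a Nino-3.4-like index
true_response = 0.5                                     # forced BWA response (unknown in practice)
internal = rng.standard_normal((n_members, n_seasons))  # member-specific noise
bwa = true_response * sst_index + internal              # each member: forced + internal

ensemble_mean = bwa.mean(axis=0)  # internal variability averages toward zero
slope, intercept = np.polyfit(sst_index, ensemble_mean, 1)
print(f"estimated SST-BWA regression slope: {slope:.2f}")  # ~0.5
```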

Journal ArticleDOI
TL;DR: In this article, the authors compare the investment behavior of public and private firms for a representative sample of all U.S. corporations and find that while both types of firms invest similarly in physical capital, public firms out-invest private firms in R&D.
Abstract: Using tax data, we compare the investment behavior of public and private firms for a representative sample of all U.S. corporations. We find that while both types of firms invest similarly in physical capital, public firms out-invest private firms in R&D. Compared to observationally-similar private firms, public firms invest roughly 50% more in R&D relative to their asset bases. Further, public firms dedicate 7.4 percentage points more of their investments to R&D than private firms. This stronger public firm R&D investment is muted when shareholder earnings pressures are heightened, but not so much as to overcome the baseline investment advantage.

Journal ArticleDOI
TL;DR: Artificial intelligence (AI) is a science and engineering discipline that is highly relevant to financial services, given the significant amount and diversity of data generated (and consumed) as those services are delivered worldwide.
Abstract: Artificial intelligence (AI) is a science and engineering discipline that is highly relevant to financial services, given the significant amount and diversity of data generated (and consumed) as those services are delivered worldwide. Global banks process billions of international payments each day, while equity exchanges handle trillions of orders and billions of transactions. All of this activity is recorded as data, and driven by exogenous information sources such as news services and social media. To address the challenges and opportunities this creates, in mid-2018 we established at J.P. Morgan a new group dedicated to research at the intersection of AI and finance, to investigate how to develop and optimize the use of AI. In this article, we introduce and discuss the focus areas of AI Research and present a few selected projects that illustrate potential novel applications to finance.

Journal ArticleDOI
18 Apr 2021
TL;DR: This work generates document representations that capture both text and metadata in a task-agnostic manner and demonstrates through extensive evaluation that the proposed cross-model fusion solution outperforms several competitive baselines on multiple domains.
Abstract: Fine-tuning a pre-trained neural language model with a task-specific output layer has recently become the de facto approach to document classification. This technique is inadequate when labeled examples are unavailable at training time and when the metadata artifacts in a document must be exploited. We address these challenges by generating document representations that capture both text and metadata in a task-agnostic manner. Instead of traditional auto-regressive or auto-encoding based training, our novel self-supervised approach learns a soft partition of the input space when generating text embeddings, by employing a pre-learned topic model distribution as surrogate labels. Our solution also incorporates metadata explicitly rather than just using it to augment the text. The generated document embeddings exhibit compositional characteristics and are directly used by downstream classification tasks to create decision boundaries from a small number of labels, thereby eschewing complicated recognition methods. We demonstrate through extensive evaluation that our proposed cross-model fusion solution outperforms several competitive baselines on multiple domains.
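A minimal sketch of the surrogate-label idea: a pre-learned topic model provides soft targets, and a small encoder is trained to reproduce them, yielding task-agnostic embeddings. The encoder, loss, and toy corpus are assumptions for illustration; the metadata fusion is omitted.

```python
# Train text embeddings against a topic-model distribution as soft labels.
import torch
import torch.nn as nn
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["markets rallied on earnings", "new crypto protocol released",
        "central bank raises rates", "zero knowledge proofs scale up"]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
soft_labels = torch.tensor(lda.transform(counts), dtype=torch.float32)

X = torch.tensor(counts.toarray(), dtype=torch.float32)
encoder = nn.Sequential(nn.Linear(X.shape[1], 32), nn.ReLU())
head = nn.Linear(32, 2)
opt = torch.optim.Adam([*encoder.parameters(), *head.parameters()], lr=1e-2)

for _ in range(200):  # fit embeddings to reproduce the topic distribution
    opt.zero_grad()
    log_probs = torch.log_softmax(head(encoder(X)), dim=1)
    loss = nn.functional.kl_div(log_probs, soft_labels, reduction="batchmean")
    loss.backward()
    opt.step()

embeddings = encoder(X).detach()  # downstream classifiers train on these
print(embeddings.shape)
```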


Proceedings ArticleDOI
08 Nov 2021
TL;DR: In this paper, the authors consider information-theoretically secure MPC against a mixed adversary who can corrupt some parties passively, corrupt others actively, and make yet others fail-stop.
Abstract: In this work we consider information-theoretically secure MPC against a mixed adversary who can corrupt \(t_p\) parties passively, \(t_a\) parties actively, and can make \(t_f\) parties fail-stop. With perfect security, it is known that every function can be computed securely if and only if \(3t_a + 2t_p + t_f < n\); for statistical security the bound is \(2t_a + 2t_p + t_f < n\).
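The two feasibility bounds quoted above translate directly into a check; the function below is a plain transcription of the abstract's conditions.

```python
# Decide whether a (t_a, t_p, t_f) corruption mix is tolerable among n parties.
def mixed_adversary_feasible(n, t_a, t_p, t_f, security="perfect"):
    if security == "perfect":
        return 3 * t_a + 2 * t_p + t_f < n
    if security == "statistical":
        return 2 * t_a + 2 * t_p + t_f < n
    raise ValueError("security must be 'perfect' or 'statistical'")

print(mixed_adversary_feasible(10, t_a=2, t_p=1, t_f=1))  # True: 3*2 + 2*1 + 1 = 9 < 10
print(mixed_adversary_feasible(10, t_a=2, t_p=1, t_f=2,
                               security="statistical"))   # True: 2*2 + 2*1 + 2 = 8 < 10
```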

Proceedings ArticleDOI
15 Nov 2021
TL;DR: ACCO, as mentioned in this paper, is the first maliciously secure multiparty computation engine in the honest majority setting that also supports secure and efficient comparison and integer truncation, and it achieves information-theoretic security.
Abstract: We propose ACCO: the first maliciously secure multiparty computation engine in the honest majority setting which also supports secure and efficient comparison and integer truncation. Our system is also the first such engine to achieve information-theoretic security. We use ACCO to build an information-theoretic, privacy-preserving machine learning system where a set of parties collaboratively train regression models in the presence of a malicious adversary. We report an implementation of our system and compare its performance against Helen, the work of Zheng, Popa, Gonzalez and Stoica (SP'19), which provided multiparty regression models secure against malicious adversaries. Our system offers a significant speedup over Helen.

04 May 2021
TL;DR: In this paper, the authors present MAS-GAN, a multi-agent simulator calibration method that allows simulator parameters to be tuned, supporting more accurate evaluations of candidate trading algorithms; the calibration focus is on high-level parameters such as the relative proportions of the various types of agents that populate the simulation.
Abstract: We look at the problem of how the simulation of a financial market should be configured so that it most accurately emulates the behavior of a real market. In particular, we address agent-based simulations of markets that are composed of many hundreds or thousands of trading agents. A solution to this problem is important because it provides a credible test bed for evaluating potential trading algorithms (e.g., execution strategies). Simple backtesting of such algorithms suffers from a critical weakness, chiefly that the overall market is not responsive to the candidate trading algorithm. Multi-agent simulations address this weakness by simulating market impact via interaction between market participants. Calibration of such multi-agent simulators to ensure realism, however, is a challenge. In this paper, we present MAS-GAN, a multi-agent simulator calibration method that allows simulator parameters to be tuned to support more accurate evaluations of candidate trading algorithms. Our calibration focus is on high-level parameters such as the relative proportions of the various types of agents that populate the simulation. MAS-GAN is a two-step approach: first, we train a discriminator that is able to distinguish between "real" and "fake" market data as part of a GAN with self-attention, and then utilize it within an optimization framework to refine simulation parameters. The paper concludes with quantitative examples of applying MAS-GAN to improve simulator realism.
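A toy sketch of the two-step recipe: a discriminator scores realism, and a high-level simulator parameter is refined against that score. Neither the simulator nor the hand-coded discriminator stand-in below resembles the paper's actual components.

```python
# Step 2 of MAS-GAN in miniature: tune a simulator parameter against a
# realism score (here a volatility-matching stand-in for a trained GAN
# discriminator).
import numpy as np

rng = np.random.default_rng(0)

def simulator(noise_agent_fraction, n=500):
    """Toy market simulator: returns a return series whose volatility
    depends on the proportion of 'noise' agents (the parameter to calibrate)."""
    return rng.standard_normal(n) * (0.5 + noise_agent_fraction)

def discriminator_realism(returns, real_vol=1.0):
    """Stand-in for a trained discriminator: scores how close the simulated
    volatility is to the real market's (higher = more realistic)."""
    return -abs(returns.std() - real_vol)

grid = np.linspace(0.0, 1.0, 21)
best = max(grid, key=lambda p: discriminator_realism(simulator(p)))
print(f"calibrated noise-agent fraction: {best:.2f}")  # ~0.5
```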

Posted Content
TL;DR: In this paper, a Non-Parametric Sequential Allocation (NPSA) algorithm is proposed for the problem in which jobs arrive at random times with random values, and the decision-maker must decide immediately whether or not to accept each job and gain its value as a reward, under the constraint that at most $n$ jobs may be accepted over some reference time period.
Abstract: We consider a problem wherein jobs arrive at random times and assume random values. Upon each job arrival, the decision-maker must decide immediately whether or not to accept the job and gain the value on offer as a reward, with the constraint that they may only accept at most $n$ jobs over some reference time period. The decision-maker only has access to $M$ independent realisations of the job arrival process. We propose an algorithm, Non-Parametric Sequential Allocation (NPSA), for solving this problem. Moreover, we prove that the expected reward returned by the NPSA algorithm converges in probability to optimality as $M$ grows large. We demonstrate the effectiveness of the algorithm empirically on synthetic data and on public fraud-detection datasets, from which the motivation for this work is derived.
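One plausible non-parametric sketch in the spirit of NPSA: set an empirical acceptance threshold from the $M$ training realisations, then accept online while budget remains. The quantile rule below is an assumption for illustration, not the paper's exact policy.

```python
# Threshold-from-history sequential acceptance on a toy job-value process.
import numpy as np

rng = np.random.default_rng(42)
M, jobs_per_period, n_budget = 200, 50, 5

history = rng.lognormal(size=(M, jobs_per_period))  # M training realisations
# Accept roughly the top-n values per period: threshold at that quantile.
threshold = np.quantile(history, 1 - n_budget / jobs_per_period)

accepted, reward = 0, 0.0
for value in rng.lognormal(size=jobs_per_period):   # a fresh test period
    if accepted < n_budget and value >= threshold:  # immediate, irrevocable
        accepted += 1
        reward += value
print(f"accepted {accepted} jobs, total reward {reward:.2f}")
```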

Journal ArticleDOI
TL;DR: In this article, the authors present a numerically efficient approach for machine-learning a risk-neutral measure for paths of simulated spot and option prices up to a finite horizon under convex transaction costs and convex trading constraints.
Abstract: We present a numerically efficient approach for machine-learning a risk-neutral measure for paths of simulated spot and option prices up to a finite horizon under convex transaction costs and convex trading constraints. This approach can then be used to implement a stochastic implied volatility model in the following two steps: 1) train a market simulator for option prices, for example as discussed in our recent work; 2) find a risk-neutral density, specifically, in our approach, the minimal entropy martingale measure. The resulting model can be used for risk-neutral pricing, or for Deep Hedging in the case of transaction costs or trading constraints. To motivate the proposed approach, we also show that market dynamics are free from "statistical arbitrage" in the absence of transaction costs if and only if they follow a risk-neutral measure. We additionally provide a more general characterization in the presence of convex transaction costs and trading constraints. These results can be seen as an analogue of the fundamental theorem of asset pricing for statistical arbitrage under trading frictions and are of independent interest.
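In the frictionless, single-constraint toy case, the minimal entropy martingale measure over finitely many simulated paths has a known closed form: an exponential tilt of the simulator's measure. A sketch for one asset, one period, and zero rates follows; the paper's setting with transaction costs, constraints, and option prices is far more general.

```python
# Minimal-entropy reweighting of simulated paths so the terminal price is a
# martingale: p_i proportional to q_i * exp(lambda * S_T_i), with lambda set
# so that E_p[S_T] = S_0.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
n_paths, s0 = 500, 100.0
s_T = s0 * np.exp(0.05 + 0.2 * rng.standard_normal(n_paths))  # simulated terminals
q = np.full(n_paths, 1.0 / n_paths)                           # simulator measure

def tilted(lmbda):
    w = q * np.exp(lmbda * (s_T - s_T.mean()) / s_T.std())  # stabilized tilt
    return w / w.sum()

lam = brentq(lambda l: tilted(l) @ s_T - s0, -50, 50)  # enforce martingale constraint
p = tilted(lam)
print(f"E_p[S_T] = {p @ s_T:.4f} (target {s0:.1f})")
```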

Posted Content
TL;DR: In this article, the authors proposed a factor baseline which exploits independence structure encoded in a novel action-target influence network and demonstrated the performance advantages of their algorithm on large-scale bandit and traffic intersection problems, providing a novel contribution to the latter in the form of a spatial approximation.
Abstract: Policy gradient methods can solve complex tasks but often fail when the dimensionality of the action-space or objective multiplicity grow very large. This occurs, in part, because the variance on score-based gradient estimators scales quadratically. In this paper, we address this problem through a factor baseline which exploits independence structure encoded in a novel action-target influence network. Factored policy gradients (FPGs), which follow, provide a common framework for analysing key state-of-the-art algorithms, are shown to generalise traditional policy gradients, and yield a principled way of incorporating prior knowledge of a problem domain's generative processes. We provide an analysis of the proposed estimator and identify the conditions under which variance is reduced. The algorithmic aspects of FPGs are discussed, including optimal policy factorisation, as characterised by minimum biclique coverings, and the implications for the bias-variance trade-off of incorrectly specifying the network. Finally, we demonstrate the performance advantages of our algorithm on large-scale bandit and traffic intersection problems, providing a novel contribution to the latter in the form of a spatial approximation.
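A toy illustration of the variance problem and the factored fix: on a bandit with independent action factors, an estimator in which each factor sees only the reward it influences has far lower variance than one using the total reward. The diagonal influence structure here is an invented stand-in for the paper's action-target influence network.

```python
# Score-based (REINFORCE) gradients on a d-dimensional Bernoulli bandit,
# estimated with the total reward vs. a factored per-factor advantage.
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 5000
p = np.full(d, 0.5)  # policy: d independent Bernoulli action factors

def gradient_samples(mode):
    grads = []
    for _ in range(n):
        a = (rng.random(d) < p).astype(float)
        score = a - p                         # d/dtheta_i log pi(a_i)
        r = a + 0.5 * rng.standard_normal(d)  # noisy per-factor rewards
        if mode == "vanilla":
            adv = np.full(d, r.sum())         # every factor sees the total reward
        else:                                 # factored: factor i sees only the
            adv = r - p                       # reward it influences, baselined
        grads.append(adv * score)
    return np.asarray(grads)

for mode in ("vanilla", "factored"):
    g = gradient_samples(mode)
    # Both estimators are unbiased (~0.25 per coordinate); variances differ hugely.
    print(f"{mode:9s} mean ~ {g.mean():.3f}  variance ~ {g.var(axis=0).mean():.2f}")
```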

Proceedings ArticleDOI
11 Jul 2021
TL;DR: In this paper, the authors presented some works conducted with Jose Bioucas Dias for fusing high spectral resolution images and high spatial resolution images in order to build images with improved spectral and spatial resolutions.
Abstract: The first part of this paper presents some works conducted with Jose Bioucas Dias for fusing high spectral resolution images (such as hyperspectral images) and high spatial resolution images (such as panchromatic or multispectral images) in order to build images with improved spectral and spatial resolutions. These works are related to Bayesian fusion strategies exploiting prior information about the target image to be recovered, constructed by dictionary learning. Interestingly, these Bayesian image fusion methods can be adapted with limited changes to motion estimation in pairs or sequences of images. The second part of this paper explains how the work of Jose Bioucas Dias has been a source of inspiration for developing new Bayesian motion estimation methods for ultrasound images.