Institution

J.P. Morgan & Co.

About: J.P. Morgan & Co. is a financial institution based in New York, United States. It is known for its research contributions in the topics Portfolio and Implied volatility. The organization has 328 authors who have published 436 publications receiving 14,291 citations.


Papers
Posted Content
TL;DR: In this article, the authors propose new graph feature explanation methods to identify informative components and important node features, which can be used for understanding data, debugging GNN models, and examining model decisions.
Abstract: Real data collected from different applications often carry additional topological structure and connection information, and are naturally represented as weighted graphs. For the node labeling problem, Graph Neural Networks (GNNs) are a powerful tool that can mimic experts' decisions on node labeling. GNNs combine node features, connection patterns, and graph structure by using a neural network to embed node information and pass it along the edges of the graph. We want to identify the patterns in the input data that the GNN model uses to make a decision, and to examine whether the model works as we desire. However, due to the complex data representation and non-linear transformations, explaining decisions made by GNNs is challenging. In this work, we propose new graph feature explanation methods to identify the informative components and important node features. In addition, we propose a pipeline to identify the key factors used for node classification. We use four datasets (two synthetic and two real) to validate our methods. Our results demonstrate that our explanation approach can mimic data patterns used for node classification by human interpretation and can disentangle different features in the graphs. Furthermore, our explanation methods can be used for understanding data, debugging GNN models, and examining model decisions.

3 citations
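
The abstract above describes identifying important node features behind a GNN's decision. As a rough illustration of that general idea (not the authors' method), the following NumPy sketch scores each input feature of a toy one-layer graph convolution by how much zeroing it out changes a chosen node's class logit; the graph, weights, and function names are illustrative assumptions.

# Minimal perturbation-based feature-importance sketch for a toy graph convolution.
# Illustrative only: not the method proposed in the paper above.
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5 nodes on a path, adjacency with self-loops, 4 features, 2 classes.
A = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1]], dtype=float)
X = rng.normal(size=(5, 4))           # node feature matrix
W = rng.normal(size=(4, 2))           # (untrained) graph-convolution weights

def gcn_logits(A, X, W):
    # One propagation step: row-normalize the adjacency, aggregate, project.
    deg = A.sum(axis=1, keepdims=True)
    return (A / deg) @ X @ W          # shape: (num_nodes, num_classes)

def feature_importance(A, X, W, node, cls):
    # Score each input feature by the logit drop when that feature is zeroed out.
    base = gcn_logits(A, X, W)[node, cls]
    scores = np.zeros(X.shape[1])
    for f in range(X.shape[1]):
        X_masked = X.copy()
        X_masked[:, f] = 0.0          # mask feature f on every node
        scores[f] = base - gcn_logits(A, X_masked, W)[node, cls]
    return scores

print(feature_importance(A, X, W, node=2, cls=0))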

Journal ArticleDOI

2 citations

Posted Content
TL;DR: In this paper, the authors develop a statistical arbitrage trading strategy with two key elements of high-frequency trading, stop-loss and leverage, and derive the long-run return of the strategy as an elementary function of the stop-loss.
Abstract: In this paper we develop a statistical arbitrage trading strategy with two key elements of high-frequency trading: stop-loss and leverage. We consider, as in Bertram (2009), a mean-reverting process for the security price with proportional transaction costs, and we show how to introduce stop-loss and leverage into an optimal trading strategy. We focus on repeated strategies using a self-financing portfolio. For every given stop-loss level we derive analytically the optimal investment strategy, consisting of the optimal leverage and market entry/exit levels. First we show that the optimal strategy à la Bertram depends on the probabilities of reaching the entry/exit levels, on expected First-Passage-Times, and on expected First-Exit-Times from an interval. Then, when the underlying log-price follows an Ornstein-Uhlenbeck process, we deduce analytical expressions for the expected First-Exit-Times and derive the long-run return of the strategy as an elementary function of the stop-loss. Following the industry practice of pairs trading, we consider an example of a pair in the energy futures market, reporting in detail the analysis for a spread on Heating-Oil and Gas-Oil futures in a one-year sample of half-hour market prices.

2 citations
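
As an illustration of the kind of rule the abstract describes (not the paper's derived optimal strategy), the sketch below simulates an Ornstein-Uhlenbeck log-spread at roughly half-hour resolution over one year and applies fixed entry, exit, and stop-loss levels with a proportional transaction cost; every parameter value here is an assumption chosen only for the example.

# Toy simulation of a mean-reverting spread traded with entry/exit/stop-loss levels.
# Parameter values are illustrative assumptions, not taken from the paper above.
import numpy as np

rng = np.random.default_rng(1)

# Assumed OU dynamics for the log-spread: dX = theta*(mu - X) dt + sigma dW.
theta, mu, sigma = 5.0, 0.0, 0.2
dt, n_steps = 1.0 / (252 * 13), 252 * 13      # ~13 half-hour bars per day for one year

x = np.empty(n_steps)
x[0] = mu
for t in range(1, n_steps):
    x[t] = x[t - 1] + theta * (mu - x[t - 1]) * dt + sigma * np.sqrt(dt) * rng.normal()

entry, exit_level, stop = -0.05, 0.0, -0.12   # hypothetical entry/exit/stop-loss levels
cost = 0.001                                   # proportional transaction cost per leg

pnl, in_trade, entry_px = [], False, 0.0
for px in x:
    if not in_trade and px <= entry:
        in_trade, entry_px = True, px                 # open a long position in the spread
    elif in_trade and (px >= exit_level or px <= stop):
        pnl.append(px - entry_px - 2 * cost)          # close at the exit or stop-loss level
        in_trade = False

print(f"trades: {len(pnl)}, cumulative log-return: {sum(pnl):.4f}")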

Journal ArticleDOI
18 Apr 2021
TL;DR: This work generates document representations that capture both text and metadata in a task-agnostic manner and demonstrates through extensive evaluation that the proposed cross-model fusion solution outperforms several competitive baselines on multiple domains.
Abstract: Fine-tuning a pre-trained neural language model with a task-specific output layer is the de facto approach of late when dealing with document classification. This technique is inadequate when labeled examples are unavailable at training time and when the metadata artifacts in a document must be exploited. We address these challenges by generating document representations that capture both text and metadata in a task-agnostic manner. Instead of traditional autoregressive or autoencoding based training, our novel self-supervised approach learns a soft partition of the input space when generating text embeddings by employing a pre-learned topic model distribution as surrogate labels. Our solution also incorporates metadata explicitly rather than just augmenting it with text. The generated document embeddings exhibit compositional characteristics and are directly used by downstream classification tasks to create decision boundaries from a small number of labels, thereby eschewing complicated recognition methods. We demonstrate through extensive evaluation that our proposed cross-model fusion solution outperforms several competitive baselines on multiple domains.

2 citations
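
To make the idea of a pre-learned topic distribution serving as a surrogate soft label more concrete, here is a minimal NumPy sketch on synthetic data: a linear encoder is trained so that a softmax head over its embeddings matches the topic distribution. The encoder, dimensions, and the omission of metadata fusion are simplifications assumed for illustration and do not reproduce the paper's architecture.

# Toy training loop: topic-model distributions as surrogate soft labels for text embeddings.
# Synthetic data and a linear encoder; illustrative only, not the paper's system.
import numpy as np

rng = np.random.default_rng(2)

n_docs, vocab, n_topics, dim = 200, 50, 5, 16
bow = rng.poisson(1.0, size=(n_docs, vocab)).astype(float)   # toy bag-of-words counts
topic_dist = rng.dirichlet(np.ones(n_topics), size=n_docs)   # stand-in for a pre-learned topic model

W_embed = rng.normal(scale=0.1, size=(vocab, dim))     # linear document encoder (for brevity)
W_head = rng.normal(scale=0.1, size=(dim, n_topics))   # soft-partition head over topics

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.05
for step in range(500):
    emb = bow @ W_embed                              # document embeddings
    pred = softmax(emb @ W_head)                     # predicted soft partition
    grad_logits = (pred - topic_dist) / n_docs       # cross-entropy gradient w.r.t. logits
    grad_head = emb.T @ grad_logits
    grad_embed = bow.T @ (grad_logits @ W_head.T)
    W_head -= lr * grad_head
    W_embed -= lr * grad_embed

pred = softmax((bow @ W_embed) @ W_head)
loss = -(topic_dist * np.log(pred + 1e-9)).sum(axis=1).mean()
print(f"surrogate cross-entropy after training: {loss:.3f}")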


Authors

Showing all 328 results

Name                   H-index   Papers   Citations
Manuela Veloso         71        720      27543
Tucker Balch           41        181      10577
George Deodatis        36        125      5798
Mustafa Caglayan       32        144      4027
Henrique Andrade       27        81       3387
Daniel Borrajo         26        168      2619
Haibin Zhu             25        43       4945
Paolo Pasquariello     24        53       2409
Andrew M. Abrahams     21        37       1130
Alan Nicholson         19        90       1478
Samuel Assefa          19        34       2112
Joshua D. Younger      17        18       2305
Espen Gaarder Haug     17        143      1653
Jeffrey S. Saltz       16        57       852
Guy Coughlan           15        27       2729
Network Information
Related Institutions (5)
Federal Reserve System
10.3K papers, 511.9K citations

81% related

Federal Reserve Bank of New York
2.6K papers, 156.1K citations

80% related

Max M. Fisher College of Business
1.3K papers, 147.4K citations

80% related

London Business School
5.1K papers, 437.9K citations

79% related

INSEAD
4.8K papers, 369.4K citations

79% related

Performance Metrics
No. of papers from the Institution in previous years
Year   Papers
2022   1
2021   23
2020   50
2019   20
2018   8
2017   12