Institution

Carnegie Mellon University

Education · Pittsburgh, Pennsylvania, United States
About: Carnegie Mellon University is an education organization based in Pittsburgh, Pennsylvania, United States. It is known for its research contributions in the topics of Population and Robot. The organization has 36,317 authors who have published 104,359 publications receiving 5,975,734 citations. It is also known as CMU and Carnegie Mellon.


Papers
Posted Content
TL;DR: In this article, the authors use a dynamic model to predict changes in a firm's systematic risk and expected return, and show that the model simultaneously reproduces the time-series relation between the book-to-market ratio and asset returns, the cross-sectional relation between book-to-market, market value, and returns, contrarian effects at short horizons, momentum effects at longer horizons, and the inverse relation between interest rates and the market risk premium.
Abstract: As a consequence of optimal investment choices, firms' assets and growth options change in predictable ways. Using a dynamic model, we show that this imparts predictability to changes in a firm's systematic risk, and its expected return. Simulations show that the model simultaneously reproduces: (i) the time series relation between the book-to-market ratio and asset returns, (ii) the cross-sectional relation between book to market, market value and return, (iii) contrarian effects at short horizons, (iv) momentum effects at longer horizons, and (v) the inverse relation between interest rates and the market risk premium.

1,308 citations

Journal Article
TL;DR: In machine learning, the concept of interpretability is both important and slippery; this article examines what interpretability means for supervised models and whether such models can be trusted in deployment.
Abstract: Supervised machine-learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world?

1,307 citations

Journal Article
25 Apr 2018
TL;DR: An overview of core ideas in GSP and their connection to conventional digital signal processing are provided, along with a brief historical perspective to highlight how concepts recently developed build on top of prior research in other areas.
Abstract: Research in graph signal processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper, we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing, along with a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. We then summarize recent advances in developing basic GSP tools, including methods for sampling, filtering, or graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning.
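To make the transform-filter-inverse pipeline that the abstract mentions concrete, here is a minimal low-pass graph filtering sketch in Python/NumPy. The 4-node ring graph, the signal values, and the cutoff frequency of 3.0 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# A graph signal x lives on the nodes of a graph with adjacency matrix A.
# The graph Fourier transform uses the eigenvectors of the Laplacian L = D - A.

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # toy 4-node ring graph
D = np.diag(A.sum(axis=1))                  # degree matrix
L = D - A                                   # combinatorial graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)        # graph frequencies and Fourier basis

x = np.array([1.0, 5.0, 1.2, 4.8])          # a noisy signal on the 4 nodes
x_hat = eigvecs.T @ x                       # graph Fourier transform

# Ideal low-pass filter: keep only the low graph frequencies.
h = (eigvals < 3.0).astype(float)           # frequency response (cutoff assumed)
x_filtered = eigvecs @ (h * x_hat)          # filter, then inverse transform

print(x_filtered)
```

The same pattern (analysis in a graph Fourier basis, pointwise filtering, synthesis back to the node domain) underlies the sampling and filtering tools the survey reviews.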

1,306 citations

Proceedings Article
06 Jun 2016
TL;DR: The authors use attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable and achieving state-of-the-art results on the Stanford Natural Language Inference (SNLI) dataset.
Abstract: We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.
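A minimal NumPy sketch of the soft-alignment ("attend") step the abstract describes. The learned feed-forward networks of the actual model are omitted and the toy embeddings are illustrative assumptions; this only shows why the decomposition is parallelizable.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Toy word embeddings: premise a (3 words), hypothesis b (2 words), dim 4.
rng = np.random.default_rng(0)
a = rng.normal(size=(3, 4))
b = rng.normal(size=(2, 4))

# Unnormalized alignment scores between every premise/hypothesis word pair.
e = a @ b.T                       # shape (3, 2)

# Each premise word attends to a soft mixture of hypothesis words, and vice
# versa; every row is independent of the others, so the subproblems can be
# solved separately and in parallel.
beta = softmax(e, axis=1) @ b     # hypothesis content aligned to each premise word
alpha = softmax(e, axis=0).T @ a  # premise content aligned to each hypothesis word

# A "compare" step would then process the pairs (a_i, beta_i) and (b_j, alpha_j)
# separately before aggregating into a final entailment decision.
print(beta.shape, alpha.shape)    # (3, 4) (2, 4)
```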

1,306 citations

Posted Content
TL;DR: In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values and yields consistent improvement over strong baselines in image classification, object detection and person re-identification.
Abstract: In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person re-identification. Code is available at: this https URL.
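A minimal NumPy sketch of the augmentation as described in the abstract: with some probability, pick a random rectangle and overwrite its pixels with random values. The probability and the area/aspect ranges below are assumptions chosen for illustration; the authors' reference implementation is at the linked URL.

```python
import numpy as np

def random_erasing(img, p=0.5, area_range=(0.02, 0.4),
                   aspect_range=(0.3, 3.3), rng=None):
    """Erase a random rectangle of `img` (H x W x C) with random values."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() > p:                        # apply with probability p
        return img
    h, w, c = img.shape
    for _ in range(100):                        # retry until a box fits
        area = rng.uniform(*area_range) * h * w
        aspect = rng.uniform(*aspect_range)
        eh = int(round(np.sqrt(area * aspect)))
        ew = int(round(np.sqrt(area / aspect)))
        if 0 < eh < h and 0 < ew < w:
            top = rng.integers(0, h - eh)
            left = rng.integers(0, w - ew)
            out = img.copy()
            out[top:top + eh, left:left + ew] = rng.uniform(0, 255, (eh, ew, c))
            return out
    return img                                  # fall back: no box fit

# Usage on a dummy 32x32 RGB image:
img = np.zeros((32, 32, 3))
augmented = random_erasing(img, rng=np.random.default_rng(42))
```

Because the operation only rewrites pixels, it composes freely with cropping and flipping, which is why the paper reports it as complementary to those augmentations.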

1,305 citations


Authors

Showing all 36,645 results

Name                       H-index   Papers   Citations
Yi Chen                    217       4,342    293,080
Rakesh K. Jain             200       1,467    177,727
Robert C. Nichol           187       851      162,994
Michael I. Jordan          176       1,016    216,204
Jasvinder A. Singh         176       2,382    223,370
J. N. Butler               172       2,525    175,561
P. Chang                   170       2,154    151,783
Krzysztof Matyjaszewski    169       1,431    128,585
Yang Yang                  164       2,704    144,071
Geoffrey E. Hinton         157       414      409,047
Herbert A. Simon           157       745      194,597
Yongsun Kim                156       2,588    145,619
Terrence J. Sejnowski      155       845      117,382
John B. Goodenough         151       1,064    113,741
Scott Shenker              150       454      118,017
Network Information
Related Institutions (5)
Institution                                   Papers    Citations   Relatedness
Massachusetts Institute of Technology         268K      18.2M       95%
University of Maryland, College Park          155.9K    7.2M        93%
University of Illinois at Urbana–Champaign    225.1K    10.1M       93%
IBM                                           253.9K    7.4M        93%
Princeton University                          146.7K    9.1M        92%

Performance Metrics
No. of papers from the Institution in previous years
Year    Papers
2023    120
2022    499
2021    4,980
2020    5,375
2019    5,420
2018    4,972