
Jake M. Hofman

Researcher at Microsoft

Publications: 62
Citations: 8,423

Jake M. Hofman is an academic researcher at Microsoft whose work spans topics including computer science and hidden Markov models. He has an h-index of 28 and has co-authored 57 publications receiving 7,224 citations. His previous affiliations include Columbia University and Yahoo!.

Papers
Proceedings Article

Everyone's an influencer: quantifying influence on Twitter

TL;DR: It is concluded that word-of-mouth diffusion can be harnessed reliably only by targeting large numbers of potential influencers, thereby capturing average effects, and that predictions of which particular user or URL will generate a large cascade are relatively unreliable.
Proceedings Article

Who says what to whom on Twitter

TL;DR: A striking concentration of attention is found on Twitter: roughly 50% of URLs consumed are generated by just 20,000 elite users, among whom media outlets produce the most information but celebrities are the most followed.
Journal Article

Predicting consumer behavior with Web search

TL;DR: This work uses search query volume to forecast the opening weekend box-office revenue for feature films, first-month sales of video games, and the rank of songs on the Billboard Hot 100 chart, finding in all cases that search counts are highly predictive of future outcomes.
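As a rough sketch of this style of forecasting (an illustration, not the paper's exact specification), one can regress an outcome on pre-release search volume; the data and the log-log functional form below are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: search-query counts for six films in the week
# before release, and their opening-weekend revenues in $ millions.
search_volume = np.array([1200, 4500, 800, 9800, 3100, 670])
opening_revenue = np.array([3.1, 11.2, 2.0, 24.5, 8.7, 1.5])

# Regress log revenue on log search volume; a log-log linear model
# is a common choice for this kind of count data.
X = np.log(search_volume).reshape(-1, 1)
y = np.log(opening_revenue)
model = LinearRegression().fit(X, y)

# Forecast opening revenue for a new film from its search counts.
new_film = np.log([[5000]])
forecast = np.exp(model.predict(new_film))[0]
print(f"Predicted opening-weekend revenue: ${forecast:.1f}M")
```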
Journal Article

The Structural Virality of Online Diffusion

TL;DR: This work proposes a formal measure, labeled “structural virality,” that interpolates between two conceptual extremes: content that gains its popularity through a single large broadcast, and content that grows through multiple generations, with any one individual directly responsible for only a fraction of the total adoption.
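As defined in the paper, structural virality is the average shortest-path distance between all pairs of nodes in the diffusion tree: a pure broadcast (a star) keeps it low, while long person-to-person chains drive it up. A minimal sketch of the computation (networkx is my choice of tooling here, not necessarily the authors'):

```python
import networkx as nx

def structural_virality(tree: nx.Graph) -> float:
    """Average shortest-path distance over all unordered node pairs
    in a diffusion tree (the Wiener index scaled by the pair count)."""
    n = tree.number_of_nodes()
    if n < 2:
        raise ValueError("need at least two nodes")
    return nx.wiener_index(tree) / (n * (n - 1) / 2)

# A broadcast: one seed reaching nine adopters directly.
star = nx.star_graph(9)    # 10 nodes
# A viral chain: adoption passed along person to person.
chain = nx.path_graph(10)  # 10 nodes
print(structural_virality(star))   # 1.8   (low: broadcast-like)
print(structural_virality(chain))  # ~3.67 (higher: viral)
```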
Posted Content

Manipulating and Measuring Model Interpretability

TL;DR: A sequence of pre-registered experiments showed participants functionally identical models that varied only in two factors commonly thought to make machine learning models more or less interpretable: the number of features and the transparency of the model (i.e., whether the model internals are clear or a black box).