
Marzia Polito

Researcher at Amazon.com

Publications -  22
Citations -  681

Marzia Polito is an academic researcher at Amazon.com. The author has contributed to research on topics including Thread (computing) and Linear model. The author has an h-index of 10 and has co-authored 22 publications receiving 595 citations. Previous affiliations of Marzia Polito include Intel and the California Institute of Technology.

Papers
Proceedings Article

Grouping and dimensionality reduction by locally linear embedding

TL;DR: A variant of LLE is studied that can simultaneously group the data and calculate a local embedding of each group; an upper-bound estimate of the intrinsic dimension of the data set is obtained automatically.
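
A minimal sketch of the underlying idea, not the paper's algorithm: it uses scikit-learn's standard LocallyLinearEmbedding (the paper's variant also groups the data, which is omitted here) and a crude intrinsic-dimension upper bound taken from local PCA spectra on synthetic swiss-roll data, all of which are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import NearestNeighbors

X, _ = make_swiss_roll(n_samples=1000, noise=0.05)

# Embed the data with plain LLE (the paper's variant also groups the data).
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
Y = lle.fit_transform(X)
print("embedding shape:", Y.shape)

# Crude upper bound on intrinsic dimension: how many local PCA eigenvalues
# are needed to explain most of the variance in each neighborhood.
nbrs = NearestNeighbors(n_neighbors=12).fit(X)
_, idx = nbrs.kneighbors(X)
dims = []
for neighborhood in idx[:200]:               # sample a few neighborhoods
    P = X[neighborhood] - X[neighborhood].mean(axis=0)
    ev = np.linalg.svd(P, compute_uv=False) ** 2
    ratio = np.cumsum(ev) / ev.sum()
    dims.append(int(np.searchsorted(ratio, 0.95)) + 1)
print("estimated intrinsic dimension (upper bound):", int(np.median(dims)))
```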
Patent

Method for personalized named entity recognition

TL;DR: In this article, personalized named entity recognition may be accomplished by parsing input text to determine a subset of the input text, generating a plurality of queries based at least in part on that subset, submitting the queries to a plurality of reference resources, processing the responses to generate a vector, and performing classification based on the vector and a set of model parameters.
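
A toy, illustrative-only sketch of the pipeline shape described in the abstract: the function name, the stand-in reference resources, and the linear classifier below are hypothetical, not the patented implementation.

```python
from typing import Callable, Dict, List
import numpy as np


def personalized_ner(text: str,
                     resources: Dict[str, Callable[[str], float]],
                     weights: np.ndarray,
                     bias: float) -> bool:
    # 1. Parse the input text to select a candidate span (naively: capitalized tokens).
    candidate = " ".join(t for t in text.split() if t[:1].isupper())

    # 2. Build one query per reference resource and collect a score from each
    #    (e.g. a personal contact list, a device-local gazetteer, a web index).
    responses: List[float] = [lookup(candidate) for lookup in resources.values()]

    # 3. Turn the responses into a feature vector.
    features = np.array(responses)

    # 4. Classify with a simple linear model (stand-in for learned model parameters).
    return float(features @ weights + bias) > 0.0


# Toy usage with made-up resources and parameters.
resources = {
    "contacts": lambda q: 1.0 if "Alice" in q else 0.0,
    "gazetteer": lambda q: 0.3,
}
print(personalized_ner("Call Alice tomorrow", resources, np.array([2.0, 1.0]), -1.0))
```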
Proceedings ArticleDOI

Detecting phases in parallel applications on shared memory architectures

TL;DR: This paper examines how phase analysis algorithms can be adapted to parallel applications running on shared-memory processors, and how the resulting phase analysis can be used to pick simulation points that guide multithreaded simulation.
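
A hedged sketch of the general SimPoint-style workflow the abstract refers to (per-interval execution vectors, k-means clustering, one representative interval per phase); the random "basic-block vectors" and the choice of five clusters are placeholders, not the paper's data or parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_intervals, n_threads, n_blocks = 200, 4, 64

# One execution-frequency vector per interval; for a parallel application the
# per-thread vectors can be concatenated so a phase reflects all threads.
bbv = rng.random((n_intervals, n_threads * n_blocks))
bbv /= bbv.sum(axis=1, keepdims=True)          # normalize each interval

# Cluster intervals into phases.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(bbv)

# Pick one simulation point per phase: the interval closest to its centroid.
sim_points = []
for c in range(km.n_clusters):
    members = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(bbv[members] - km.cluster_centers_[c], axis=1)
    sim_points.append(int(members[np.argmin(d)]))
print("simulation points (interval indices):", sorted(sim_points))
```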
Proceedings ArticleDOI

The Fuzzy Correlation between Code and Performance Predictability

TL;DR: The results show that for most server workloads and, surprisingly, even for CPU2K benchmarks, the accuracy of predicting CPI from EIPs varies widely, and a new methodology is proposed that selects the best-suited sampling technique to accurately capture the program behavior.
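
A simplified, hedged illustration of the question the abstract raises: how well does an interval's code signature (its EIP histogram) predict that interval's CPI? The synthetic data, linear model, and cross-validated R^2 score below are stand-ins; the paper's methodology and thresholds are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_intervals, n_eips = 300, 50
eip_hist = rng.random((n_intervals, n_eips))              # per-interval EIP histograms
cpi = eip_hist @ rng.random(n_eips) + 0.5 * rng.standard_normal(n_intervals)

# Cross-validated R^2 of predicting CPI from EIPs: a high score suggests
# code-signature-based sampling will capture program behavior well, while a
# low score suggests a different sampling technique should be selected.
r2 = cross_val_score(LinearRegression(), eip_hist, cpi, cv=5, scoring="r2").mean()
print(f"mean cross-validated R^2: {r2:.2f}")
```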
Proceedings ArticleDOI

Mixed-Privacy Forgetting in Deep Networks

TL;DR: The paper shows that the influence of a subset of the training samples can be removed from the weights of a network trained on large-scale image classification tasks, and provides strong, computable bounds on the amount of information remaining after forgetting.
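
A very simplified, hedged illustration of the "forgetting as a closed-form update" idea in a linear setting: for a ridge-regression head, the influence of a subset of samples can be removed exactly by downdating the sufficient statistics. This is a toy stand-in under strong assumptions, not the paper's procedure for deep networks.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 20))
y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(500)
lam = 1e-2

def ridge_fit(X, y):
    # Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

w_all = ridge_fit(X, y)

# "Forget" the first 50 samples by subtracting their contribution from the
# sufficient statistics and re-solving.
Xf, yf = X[:50], y[:50]
A_rem = X.T @ X - Xf.T @ Xf + lam * np.eye(X.shape[1])
b_rem = X.T @ y - Xf.T @ yf
w_forget = np.linalg.solve(A_rem, b_rem)

# In this linear toy case, forgetting matches retraining on the remaining data.
w_retrain = ridge_fit(X[50:], y[50:])
print("max difference vs. retraining:", float(np.abs(w_forget - w_retrain).max()))
```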