
Suresh Venkatasubramanian

Researcher at University of Utah

Publications: 190
Citations: 13,516

Suresh Venkatasubramanian is an academic researcher at the University of Utah. He has contributed to research topics including approximation algorithms and deep learning, has an h-index of 47, and has co-authored 184 publications receiving 11,157 citations. His previous affiliations include the National University of Singapore and AT&T Labs.

Papers
Proceedings ArticleDOI

t-Closeness: Privacy Beyond k-Anonymity and l-Diversity

TL;DR: t-closeness, as proposed in this paper, requires that the distribution of a sensitive attribute in any equivalence class be close to the distribution of that attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t).
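A minimal sketch of the t-closeness check described above, using total variation distance as an illustrative choice of distance (the paper itself works with Earth Mover's Distance); the function names and toy data are hypothetical:

```python
from collections import Counter

def distribution(values):
    """Empirical distribution of a list of sensitive-attribute values."""
    counts = Counter(values)
    return {v: c / len(values) for v, c in counts.items()}

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(v, 0.0) - q.get(v, 0.0)) for v in support)

def satisfies_t_closeness(equivalence_classes, t):
    """True if every class's sensitive-attribute distribution is within
    distance t of the overall table's distribution."""
    overall = distribution([v for cls in equivalence_classes for v in cls])
    return all(total_variation(distribution(cls), overall) <= t
               for cls in equivalence_classes)

# Toy table split into two equivalence classes of sensitive values.
classes = [["flu", "flu", "cancer"], ["flu", "cancer", "cancer"]]
print(satisfies_t_closeness(classes, t=0.2))  # True: each class is within 1/6
```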
Proceedings ArticleDOI

Certifying and Removing Disparate Impact

TL;DR: This work links disparate impact to a measure of classification accuracy that, while known, has received relatively little attention, and proposes a test for disparate impact based on how well the protected class can be predicted from the other attributes.
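A minimal sketch of the predictability test described above: train a classifier to predict the protected attribute from the remaining attributes and measure how far above chance it gets. The classifier choice (scikit-learn's LogisticRegression), the balanced-accuracy metric, and the 0.6 threshold are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

def predictability_test(X, protected, threshold=0.6):
    """If the protected attribute is predictable from the remaining
    attributes well above chance, the data may encode disparate impact.
    The threshold is an illustrative assumption."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, protected, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    score = balanced_accuracy_score(y_te, clf.predict(X_te))
    return score, score > threshold

# Toy data: the first feature is correlated with the protected attribute.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=500)
X = np.column_stack([protected + rng.normal(0, 0.5, 500),
                     rng.normal(0, 1, 500)])
score, leaky = predictability_test(X, protected)
print(f"balanced accuracy {score:.2f}; potential disparate impact: {leaky}")
```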
Posted Content

Certifying and removing disparate impact

TL;DR: In this paper, the authors propose a test for disparate impact based on analyzing how much information about the protected class leaks from the other data attributes, and present empirical evidence supporting the effectiveness of their test and of their approach for masking bias while preserving relevant information in the data.
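The "masking" step can be pictured as aligning per-group feature distributions so a feature no longer reveals group membership. A rough sketch under stated assumptions (the pooled values serve as the repair target here; the paper itself maps groups to a median distribution and also defines a partial-repair parameter):

```python
import numpy as np
from scipy.stats import rankdata

def repair_feature(x, group):
    """Map each group's values for one feature onto quantiles of a common
    target distribution. Illustrative target: the pooled values."""
    x = np.asarray(x, dtype=float)
    group = np.asarray(group)
    repaired = np.empty_like(x)
    pooled = np.sort(x)  # stand-in for the paper's target distribution
    for g in np.unique(group):
        mask = group == g
        q = rankdata(x[mask]) / mask.sum()  # within-group quantiles in (0, 1]
        repaired[mask] = np.quantile(pooled, q, method="inverted_cdf")
    return repaired
```

After repair, the feature carries the same within-group ordering (so relevant information for prediction is largely preserved) but its distribution no longer differs across groups.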
Proceedings ArticleDOI

Fairness and Abstraction in Sociotechnical Systems

TL;DR: This paper outlines the mismatch between fair-ML abstractions and their sociotechnical context with five "traps" that fair-ML work can fall into even as it attempts to be more context-aware than traditional data science, and suggests ways in which technical designers can mitigate the traps by refocusing design on process rather than solutions.
Proceedings ArticleDOI

A comparative study of fairness-enhancing interventions in machine learning

TL;DR: It is found that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.
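A minimal sketch of the kind of sensitivity check this finding suggests: re-evaluate a fairness metric (here a demographic-parity gap, an assumed choice) across repeated random train/test splits and inspect its spread. All names and the metric are illustrative, not the study's benchmark code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def fairness_stability(X, y, group, n_splits=20):
    """Evaluate the fairness metric over random splits; a wide spread is
    the brittleness the study reports."""
    gaps = []
    for seed in range(n_splits):
        X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
            X, y, group, test_size=0.3, random_state=seed)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        gaps.append(demographic_parity_gap(clf.predict(X_te), g_te))
    return float(np.mean(gaps)), float(np.std(gaps))
```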