
Joymallya Chakraborty

Researcher at North Carolina State University

Publications: 20
Citations: 414

Joymallya Chakraborty is an academic researcher at North Carolina State University. The author has contributed to research in the topics of Software and Computer Science, has an h-index of 6, and has co-authored 18 publications receiving 130 citations. Previous affiliations of Joymallya Chakraborty include the University of Calcutta.

Papers
Proceedings Article

Investigating the effects of gender bias on GitHub

TL;DR: The effects of gender bias are largely invisible on the GitHub platform itself, but there are still signals that women concentrate their work in fewer places and are more restrained in communication than men.
Proceedings Article

Bias in machine learning software: why? how? what to do?

TL;DR: In this paper, the authors postulate that the root causes of bias are the prior decisions that affect what data was selected and the labels assigned to those examples. Their Fair-SMOTE algorithm removes biased labels and rebalances internal distributions so that, for each value of the sensitive attribute, examples are equally represented in the positive and negative classes.
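To make the rebalancing idea concrete, here is a minimal sketch in Python that balances a dataset so that, for each value of a sensitive attribute, positive and negative examples appear in equal numbers. The toy dataframe, column names, and simple random oversampling are illustrative assumptions; the paper's Fair-SMOTE generates synthetic examples rather than duplicating existing rows.

```python
import pandas as pd

# Toy dataset: one sensitive attribute ("sex") and a binary label (assumed names).
df = pd.DataFrame({
    "sex":   [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
    "score": [5, 7, 6, 8, 4, 6, 5, 7, 8, 3],
    "label": [1, 1, 1, 0, 1, 0, 0, 0, 1, 0],
})

def rebalance(data, sensitive="sex", label="label", seed=0):
    """Oversample every (sensitive value, label) subgroup up to the size of the
    largest subgroup, so positives and negatives are equally represented for
    each value of the sensitive attribute."""
    groups = [g for _, g in data.groupby([sensitive, label])]
    target = max(len(g) for g in groups)
    balanced = [
        g.sample(n=target, replace=True, random_state=seed) if len(g) < target else g
        for g in groups
    ]
    return pd.concat(balanced, ignore_index=True)

balanced_df = rebalance(df)
print(balanced_df.groupby(["sex", "label"]).size())  # all four subgroups now equal
```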
Proceedings Article

Fairway: A Way to Build Fair ML Software

TL;DR: This work explains how ground-truth bias in training data affects machine learning model fairness and how to find that bias in AI software. The authors propose Fairway, a method that combines pre-processing and in-processing approaches to remove ethical bias from the training data and the trained model without significantly damaging the model's predictive performance.
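As an illustration of the pre-processing side only, the sketch below trains one model per protected-attribute group and drops training rows on which the group-specific models disagree, treating those labels as potentially biased. The synthetic data, column layout, and choice of logistic regression are assumptions made for the example; this is not the authors' Fairway implementation and it omits the in-processing stage.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: column 0 is the protected attribute (0/1),
# the remaining columns are ordinary features (all synthetic).
X = np.column_stack([rng.integers(0, 2, 500), rng.normal(size=(500, 3))])
y = rng.integers(0, 2, 500)

# Train one model per protected group, using the non-protected features only.
priv, unpriv = X[:, 0] == 1, X[:, 0] == 0
m_priv = LogisticRegression().fit(X[priv][:, 1:], y[priv])
m_unpriv = LogisticRegression().fit(X[unpriv][:, 1:], y[unpriv])

# Keep only rows where the two group-specific models agree; rows where they
# disagree are treated as carrying potentially biased labels and are removed.
agree = m_priv.predict(X[:, 1:]) == m_unpriv.predict(X[:, 1:])
X_clean, y_clean = X[agree], y[agree]

final_model = LogisticRegression().fit(X_clean[:, 1:], y_clean)
print(f"kept {agree.sum()} of {len(y)} training rows")
```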