Author

Karthikeyan Natesan Ramamurthy

Bio: Karthikeyan Natesan Ramamurthy is an academic researcher from IBM. The author has contributed to research in topics including Sparse approximation & K-SVD. The author has an h-index of 24 and has co-authored 166 publications receiving 2,594 citations. Previous affiliations of Karthikeyan Natesan Ramamurthy include Arizona State University & Arizona's Public Universities.


Papers
Proceedings Article
01 Jan 2017
TL;DR: This paper proposes a convex optimization for learning a data transformation with three goals (controlling discrimination, limiting distortion in individual data samples, and preserving utility) and characterizes the impact of limited sample size on accomplishing this objective.
Abstract: Non-discrimination is a recognized objective in algorithmic decision making. In this paper, we introduce a novel probabilistic formulation of data pre-processing for reducing discrimination. We propose a convex optimization for learning a data transformation with three goals: controlling discrimination, limiting distortion in individual data samples, and preserving utility. We characterize the impact of limited sample size in accomplishing this objective. Two instances of the proposed optimization are applied to datasets, including one on real-world criminal recidivism. Results show that discrimination can be greatly reduced at a small cost in classification accuracy.
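To make the three-way trade-off concrete, the following is a minimal sketch, written with cvxpy, of a convex program in the same spirit; it is not the authors' exact formulation. It learns, for each of two protected groups, a randomized re-mapping of a binary outcome that roughly equalizes positive-outcome rates (discrimination control), bounds how often any label is flipped (distortion), and stays close to the identity mapping (utility). The group rates, thresholds, and tiny two-outcome space are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

# Empirical P(Y = 1 | D = d) for two protected groups (assumed numbers).
p_pos = np.array([0.30, 0.55])        # groups d = 0, 1
eps_discrimination = 0.02             # max allowed gap in transformed positive rates
max_distortion = 0.25                 # max probability of flipping any individual's label

# T[d] is a 2x2 row-stochastic matrix: T[d][y, y_hat] = P(Y_hat = y_hat | Y = y, D = d).
T = [cp.Variable((2, 2), nonneg=True) for _ in range(2)]

constraints, transformed_pos = [], []
for d in range(2):
    constraints += [cp.sum(T[d], axis=1) == 1]             # valid conditional distributions
    constraints += [T[d][0, 1] <= max_distortion,          # limit label flips 0 -> 1
                    T[d][1, 0] <= max_distortion]          # and 1 -> 0
    # Positive-outcome rate for group d after the transformation.
    transformed_pos.append((1 - p_pos[d]) * T[d][0, 1] + p_pos[d] * T[d][1, 1])

# Discrimination control: transformed positive rates must be close across groups.
constraints += [cp.abs(transformed_pos[0] - transformed_pos[1]) <= eps_discrimination]

# Utility: change the data as little as possible (stay near the identity mapping).
objective = cp.Minimize(sum(cp.sum_squares(T[d] - np.eye(2)) for d in range(2)))

cp.Problem(objective, constraints).solve()
for d in range(2):
    print(f"group {d} transition matrix:\n", np.round(T[d].value, 3))
```

The published formulation operates over richer feature-outcome distributions and different discrimination and distortion measures, but the overall structure, a convex objective subject to convex fairness and distortion constraints on a conditional distribution, is analogous.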

566 citations

Posted Content
TL;DR: A new open source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license to help facilitate the transition of fairness research algorithms to use in an industrial setting and to provide a common framework for fairness researchers to share and evaluate algorithms.
Abstract: Fairness is an increasingly important concern as machine learning models are used to support decision making in high-stakes applications such as mortgage lending, hiring, and prison sentencing. This paper introduces a new open source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license (this https URL). The main objectives of this toolkit are to help facilitate the transition of fairness research algorithms to use in an industrial setting and to provide a common framework for fairness researchers to share and evaluate algorithms. The package includes a comprehensive set of fairness metrics for datasets and models, explanations for these metrics, and algorithms to mitigate bias in datasets and models. It also includes an interactive Web experience (this https URL) that provides a gentle introduction to the concepts and capabilities for line-of-business users, as well as extensive documentation, usage guidance, and industry-specific tutorials to enable data scientists and practitioners to incorporate the most appropriate tool for their problem into their work products. The architecture of the package has been engineered to conform to a standard paradigm used in data science, thereby further improving usability for practitioners. Such architectural design and abstractions enable researchers and developers to extend the toolkit with their new algorithms and improvements, and to use it for performance benchmarking. A built-in testing infrastructure maintains code quality.
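As a rough illustration of the dataset-metric-algorithm workflow described above, here is a hedged sketch of typical AIF360 usage on a toy pandas DataFrame. The column names, group definitions, and the choice of the Reweighing pre-processor are assumptions made for illustration, not prescriptions from the paper.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Tiny synthetic dataset: 'sex' is the protected attribute, 'label' the binary outcome.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 0, 1, 1, 1, 1],
    "score": [0.2, 0.4, 0.6, 0.3, 0.7, 0.8, 0.5, 0.9],
    "label": [0, 0, 1, 0, 1, 1, 1, 1],
})

dataset = BinaryLabelDataset(df=df,
                             label_names=["label"],
                             protected_attribute_names=["sex"],
                             favorable_label=1.0,
                             unfavorable_label=0.0)

unprivileged, privileged = [{"sex": 0}], [{"sex": 1}]

# Fairness metric on the original data.
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("statistical parity difference (before):", metric.statistical_parity_difference())

# Pre-processing mitigation: reweigh instances to balance favorable outcomes across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                         unprivileged_groups=unprivileged,
                                         privileged_groups=privileged)
print("statistical parity difference (after): ", metric_transf.statistical_parity_difference())
```

The fit/transform pattern is the "standard paradigm used in data science" the abstract refers to; other mitigation algorithms in the toolkit plug into the same slots.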

501 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: A novel stitching method that uses a smooth stitching field over the entire target image while accounting for all the local transformation variations; it is more robust to parameter selection, and hence more automated, than state-of-the-art methods.
Abstract: The goal of image stitching is to create natural-looking mosaics free of artifacts that may occur due to relative camera motion, illumination changes, and optical aberrations. In this paper, we propose a novel stitching method, that uses a smooth stitching field over the entire target image, while accounting for all the local transformation variations. Computing the warp is fully automated and uses a combination of local homography and global similarity transformations, both of which are estimated with respect to the target. We mitigate the perspective distortion in the non-overlapping regions by linearizing the homography and slowly changing it to the global similarity. The proposed method is easily generalized to multiple images, and allows one to automatically obtain the best perspective in the panorama. It is also more robust to parameter selection, and hence more automated compared with state-of-the-art methods. The benefits of the proposed approach are demonstrated using a variety of challenging cases.
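The two ingredients the abstract describes can be sketched roughly as follows, under simplifying assumptions: a local homography and a global similarity are estimated from (here, synthetic) point correspondences with OpenCV, and a per-pixel warp fades from the homography in the overlap region towards the similarity in the non-overlapping region. Plain matrix interpolation stands in for the paper's homography linearization, and the blending schedule is illustrative.

```python
import cv2
import numpy as np

# Synthetic correspondences between target and source images (stand-ins for feature matches).
rng = np.random.default_rng(0)
pts_target = rng.uniform(0, 500, size=(50, 2)).astype(np.float32)
true_H = np.array([[1.05, 0.02, 30.0],
                   [0.01, 0.98, -10.0],
                   [1e-5, 2e-5, 1.0]])
proj = np.hstack([pts_target, np.ones((50, 1), np.float32)]) @ true_H.T
pts_source = (proj[:, :2] / proj[:, 2:]).astype(np.float32)

# Local projective alignment and global similarity, both estimated w.r.t. the target.
H, _ = cv2.findHomography(pts_target, pts_source, cv2.RANSAC)
S_affine, _ = cv2.estimateAffinePartial2D(pts_target, pts_source)   # 2x3 similarity
S = np.vstack([S_affine, [0.0, 0.0, 1.0]])

def blended_warp(x, y, overlap_x_end=250.0, ramp=250.0):
    """Warp point (x, y): use H inside the overlap, fade towards S away from it."""
    w = float(np.clip((x - overlap_x_end) / ramp, 0.0, 1.0))  # 0 -> pure H, 1 -> pure S
    M = (1.0 - w) * H + w * S                                 # crude stand-in for linearization
    p = M @ np.array([x, y, 1.0])
    return p[:2] / p[2]

print("warp inside overlap  :", blended_warp(100.0, 100.0))
print("warp far from overlap:", blended_warp(480.0, 100.0))
```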

250 citations

Proceedings ArticleDOI
03 Mar 2010
TL;DR: Results show that the performance of the algorithm is superior to a support vector machine (SVM) based approach with similar assumptions, and that significant complexity reduction is obtained by reducing the dimensionality of the data using random projections, at only a small loss in performance.
Abstract: We propose a sparse representation approach for classifying different targets in Synthetic Aperture Radar (SAR) images. Unlike the other feature based approaches, the proposed method does not require explicit pose estimation or any preprocessing. The dictionary used in this setup is the collection of the normalized training vectors itself. Computing a sparse representation for the test data using this dictionary corresponds to finding a locally linear approximation with respect to the underlying class manifold. SAR images obtained from the Moving and Stationary Target Acquisition and Recognition (MSTAR) public database were used in the classification setup. Results show that the performance of the algorithm is superior to using a support vector machines based approach with similar assumptions. Significant complexity reduction is obtained by reducing the dimensions of the data using random projections for only a small loss in performance.
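The classification scheme lends itself to a compact sketch. In the snippet below, synthetic vectors stand in for MSTAR SAR image chips, scikit-learn's orthogonal matching pursuit stands in for the sparse solver, and the sizes are illustrative; the dictionary is simply the set of (randomly projected and normalized) training vectors, and the test sample is assigned to the class whose atoms reconstruct it with the smallest residual.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(0)
n_classes, n_per_class, dim = 3, 40, 1024            # illustrative sizes

# Synthetic "SAR" data: each class clusters around its own template vector.
templates = rng.normal(size=(n_classes, dim))
X_train = np.vstack([templates[c] + 0.3 * rng.normal(size=(n_per_class, dim))
                     for c in range(n_classes)])
y_train = np.repeat(np.arange(n_classes), n_per_class)
x_test, y_test = templates[1] + 0.3 * rng.normal(size=dim), 1

# Random projection for complexity reduction, then the dictionary is just the
# normalized training vectors (no pose estimation or other preprocessing).
rp = GaussianRandomProjection(n_components=128, random_state=0)
D = rp.fit_transform(X_train)
t = rp.transform(x_test.reshape(1, -1)).ravel()
D = D / np.linalg.norm(D, axis=1, keepdims=True)

# Sparse code of the test sample over the dictionary (atoms are columns of D.T).
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)
omp.fit(D.T, t)
alpha = omp.coef_

# Assign the class whose atoms give the smallest reconstruction residual.
residuals = [np.linalg.norm(t - D[y_train == c].T @ alpha[y_train == c])
             for c in range(n_classes)]
print("predicted class:", int(np.argmin(residuals)), "| true class:", y_test)
```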

138 citations

Posted Content
22 Aug 2018
TL;DR: This article proposes a supplier's declaration of conformity (SDoC) for artificial intelligence (AI) services to help increase trust in such services; an SDoC is a transparent, standardized, but often not legally required, document used in many industries and sectors to describe the lineage of a product along with the safety and performance testing it has undergone.
Abstract: The accuracy and reliability of machine learning algorithms are an important concern for suppliers of artificial intelligence (AI) services, but considerations beyond accuracy, such as safety, security, and provenance, are also critical elements to engender consumers' trust in a service. In this paper, we propose a supplier's declaration of conformity (SDoC) for AI services to help increase trust in AI services. An SDoC is a transparent, standardized, but often not legally required, document used in many industries and sectors to describe the lineage of a product along with the safety and performance testing it has undergone. We envision an SDoC for AI services to contain purpose, performance, safety, security, and provenance information to be completed and voluntarily released by AI service providers for examination by consumers. Importantly, it conveys product-level rather than component-level functional testing. We suggest a set of declaration items tailored to AI and provide examples for two fictitious AI services.
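One way to picture an SDoC in practice is as a small, serializable record of declaration items released alongside a service. The sketch below is a hypothetical structure with illustrative field names and example answers; it is not the paper's actual list of declaration items.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class SupplierDeclarationOfConformity:
    service_name: str
    intended_purpose: str
    performance: dict = field(default_factory=dict)   # e.g. metrics on named test sets
    safety: dict = field(default_factory=dict)        # e.g. bias checks, known failure modes
    security: dict = field(default_factory=dict)      # e.g. robustness and hardening tests
    provenance: dict = field(default_factory=dict)    # e.g. training-data lineage

    def to_json(self) -> str:
        """Serialize the declaration for voluntary release alongside the service."""
        return json.dumps(asdict(self), indent=2)

sdoc = SupplierDeclarationOfConformity(
    service_name="example-image-tagger",   # fictitious service
    intended_purpose="Tag consumer photos; not intended for surveillance or medical use.",
    performance={"top-1 accuracy": "0.91 on an internal held-out set (illustrative)"},
    safety={"bias check": "error rates reported per demographic subgroup"},
    security={"robustness": "evaluated against common adversarial perturbations"},
    provenance={"training data": "licensed stock imagery, collected 2016-2018"},
)
print(sdoc.to_json())
```

Note how the record is kept at the level of the service as a whole, echoing the paper's emphasis on product-level rather than component-level testing.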

94 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: This textbook covers probability distributions and linear models for regression and classification, along with neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and combining models, in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: In this paper, the authors provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box decision support system; given a problem definition, a black box type, and a desired explanation, this survey should help the researcher find the proposals most useful for his or her own work.
Abstract: In recent years, many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, it explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help the researcher to find the proposals more useful for his own work. The proposed classification of approaches to open black box models should also be useful for putting the many research open questions in perspective.

2,805 citations

21 Jan 2018
TL;DR: In commercial API-based classifiers of gender from facial images, including IBM Watson Visual Recognition, it is shown that the highest error involves images of dark-skinned women, while the most accurate result is for light-skinned men.
Abstract: The paper “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification” by Joy Buolamwini and Timnit Gebru, that will be presented at the Conference on Fairness, Accountability, and Transparency (FAT*) in February 2018, evaluates three commercial API-based classifiers of gender from facial images, including IBM Watson Visual Recognition. The study finds these services to have recognition capabilities that are not balanced over genders and skin tones [1]. In particular, the authors show that the highest error involves images of dark-skinned women, while the most accurate result is for light-skinned men.
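At its core, the study is a disaggregated error analysis over intersectional subgroups, which can be sketched in a few lines of pandas; the toy predictions below are fabricated stand-ins, not the study's data.

```python
import pandas as pd

# 1 = the classifier's gender label was correct for that face image.
df = pd.DataFrame({
    "gender":    ["female", "female", "male", "male", "female", "male", "female", "male"],
    "skin_type": ["darker", "lighter", "darker", "lighter", "darker", "darker", "lighter", "lighter"],
    "correct":   [0, 1, 1, 1, 0, 1, 1, 1],
})

# Error rate per intersectional subgroup (1 - mean accuracy), worst groups first.
error_by_group = 1.0 - df.groupby(["gender", "skin_type"])["correct"].mean()
print(error_by_group.sort_values(ascending=False))
```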

2,528 citations

Journal ArticleDOI
Amina Adadi, Mohammed Berrada
TL;DR: This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI, and review the existing approaches regarding the topic, discuss trends surrounding its sphere, and present major research trajectories.
Abstract: At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily life, which contributes to accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency. Indeed, the black-box nature of these systems allows powerful predictions, but they cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), a research field that holds substantial promise for improving the trust and transparency of AI-based systems. It is recognized as the sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI. Through the lens of the literature, we review the existing approaches regarding the topic, discuss trends surrounding its sphere, and present major research trajectories.

2,258 citations

Journal Article
TL;DR: Qualitative research in such mobile health clinics has found that patients value the informal, familiar environment in a convenient location, with staff who “are easy to talk to,” and that the staff’s “marriage of professional and personal discourses” provides patients the space to disclose information themselves.
Abstract: New research shows that mobile health clinics improve health outcomes for hard-to-reach populations in cost-effective and culturally competent ways. A Harvard Medical School study determined that for every dollar invested in a mobile health clinic, the US healthcare system saves $30 on average. Mobile health clinics, which offer a range of services from preventive screenings to asthma treatment, leverage their mobility to treat people in the convenience of their own communities. For example, a mobile health clinic in Baltimore, MD, has documented savings of $3,500 per child seen due to reduced asthma-related hospitalizations. The estimated 2,000 mobile health clinics across the country are providing similarly cost-effective access to healthcare for a wide range of populations. Many successful mobile health clinics cite their ability to foster trusting relationships. Qualitative research in such mobile health clinics has found that patients value the informal, familiar environment in a convenient location, with staff who “are easy to talk to,” and that the staff’s “marriage of professional and personal discourses” provides patients the space to disclose information about themselves. A communications academic argued that mobile health clinics’ unique use of space is important in facilitating these relationships. Mobile health clinics park in the heart of the community in familiar spaces, like shopping centers or bus stations, which lend themselves to the local community atmosphere.

2,003 citations