Author

Sameep Mehta

Bio: Sameep Mehta is an academic researcher from IBM. The author has contributed to research in topics: Service (business) & Resource (project management). The author has an h-index of 22, co-authored 160 publications receiving 2093 citations. Previous affiliations of Sameep Mehta include Lady Hardinge Medical College & All India Institute of Medical Sciences.


Papers
Proceedings ArticleDOI
19 Oct 2011
TL;DR: An evidence based analytics approach to identify top opportunities for process automation, and provide objective assessment of benefit to enable process leaders to take informed decisions is presented.
Abstract: Process automation is a critical activity for reducing the need for human work in the production of goods and services, making processes more uniform and efficient. It is always challenging to make informed decisions on areas for automation that address time-to-value for the business. In this paper, we present an evidence-based analytics approach to identify top opportunities for process automation and provide an objective assessment of benefit, enabling process leaders to take informed decisions. This approach is composed of three major steps. The first step is to identify the top drivers of human intensity in the delivery processes by analyzing evidence gathered from activity time-motion monitoring and onsite process deep dives. The second step is to prioritize the opportunities for automation by analyzing technology choices and estimated business impact. The final step is to assess the benefit through deep analysis of additional time-motion data on operational procedures, gathered through sampling and extrapolation, and of the degree of automation that can be achieved with available technology components. A case study in Finance and Administration Process Delivery Services is used to illustrate the core idea of our analytical approach.
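The prioritization step described in the abstract can be sketched as a simple ranking. This is an illustrative assumption, not the paper's implementation: opportunities are scored by observed human effort (from time-motion monitoring) scaled by an assumed automatable fraction (from the technology-choice analysis); all names and numbers below are hypothetical.

```python
def prioritize_opportunities(opportunities):
    """Return opportunities sorted by estimated benefit, highest first."""
    def estimated_benefit(op):
        # hours/month of human work observed via time-motion monitoring,
        # scaled by the fraction assumed automatable with available technology
        return op["monthly_hours"] * op["automatable_fraction"]
    return sorted(opportunities, key=estimated_benefit, reverse=True)

# Hypothetical opportunities for a Finance & Administration delivery process
ops = [
    {"name": "invoice entry", "monthly_hours": 400, "automatable_fraction": 0.8},
    {"name": "dispute triage", "monthly_hours": 250, "automatable_fraction": 0.3},
    {"name": "report generation", "monthly_hours": 120, "automatable_fraction": 0.9},
]
ranked = prioritize_opportunities(ops)
print([o["name"] for o in ranked])
# → ['invoice entry', 'report generation', 'dispute triage']
```

A real deployment would replace the single benefit product with the paper's multi-factor assessment, but the ranking structure is the same.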
Patent
23 May 2016
TL;DR: In this paper, the authors propose a method and associated systems for automatically identifying critical resources in an organization, where an organization creates a model of the dependencies between pairs of resource types, wherein that model describes how the organization's projects and services are affected when a resource type becomes unavailable.
Abstract: A method and associated systems for automatically identifying critical resources in an organization. An organization creates a model of the dependencies between pairs of resource types, wherein that model describes how the organization's projects and services are affected when a resource type becomes unavailable. This model may include a system of directed graphs. This model may be used to automatically identify a resource type as critical if unacceptable cost is incurred by resuming projects and services rendered infeasible when the resource type is disrupted. The model may also be used to automatically identify a first resource type as critical for a second resource type when disruption of the first resource type forces the available capacity of the second resource type to fall below a threshold value.
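The threshold rule in the abstract — a first resource type is critical for a second when its disruption pushes the second's available capacity below a threshold — can be sketched directly. This is a minimal illustration of that rule, not the patent's implementation; the dependency encoding and all resource names are assumptions.

```python
def critical_for(dependencies, capacities, threshold):
    """Find (a, b) pairs where disrupting resource type a drops b's
    available capacity below threshold * capacities[b].

    dependencies[a] = {b: capacity of b contributed through a}
    """
    critical = set()
    for a, contributions in dependencies.items():
        for b, contributed in contributions.items():
            remaining = capacities[b] - contributed
            if remaining < threshold * capacities[b]:
                critical.add((a, b))
    return critical

# Hypothetical example: losing datacenter_1 leaves 4 of 10 DB admins (< 50%)
caps = {"db_admins": 10, "app_servers": 8}
deps = {"datacenter_1": {"db_admins": 6, "app_servers": 2}}
print(critical_for(deps, caps, threshold=0.5))
# → {('datacenter_1', 'db_admins')}
```

The system of directed graphs mentioned in the abstract would generalize this to transitive dependencies; the sketch covers only the direct-disruption case.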
Patent
10 Apr 2014
TL;DR: In this paper, the authors propose a method and associated systems for automatically identifying critical resources in an organization, where an organization creates a model of the dependencies between pairs of resource types, wherein that model describes how the organization's projects and services are affected when a resource type becomes unavailable.
Abstract: A method and associated systems for automatically identifying critical resources in an organization. An organization creates a model of the dependencies between pairs of resource types, wherein that model describes how the organization's projects and services are affected when a resource type becomes unavailable. This model may include a system of directed graphs. This model may be used to automatically identify a resource type as critical if unacceptable cost is incurred by resuming projects and services rendered infeasible when the resource type is disrupted. The model may also be used to automatically identify a first resource type as critical for a second resource type when disruption of the first resource type forces the available capacity of the second resource type to fall below a threshold value.
Patent
Sameep Mehta, Deepak Padmanabhan
24 Sep 2015
TL;DR: In this article, a computer-implemented method for updating annotator collections using run traces is described, which includes generating one or more alternate versions of annotators selected from a set of multiple document annotators; and outputting an instruction to modify, based on the generated log information for each annotator in the set and each alternate version, at least one document annotator from the set.
Abstract: Methods, systems, and computer program products for updating annotator collections using run traces are provided herein. A computer-implemented method includes generating one or more alternate versions of one or more document annotators selected from a set of multiple document annotators; executing, on one or more document data sets, (i) one or more document annotators from the set of multiple document annotators and (ii) the one or more alternate versions to generate log information for each document annotator in the set and each alternate version of the one or more alternate versions; and outputting an instruction to modify, based on the generated log information for each document annotator in the set and each alternate version, at least one document annotator from the set with at least one alternate version from the one or more alternate versions.
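The run-trace comparison described in the abstract can be sketched as: execute each annotator and its alternate version on the same documents, record log information, and emit a replacement suggestion when the alternate's log scores better. The annotator callables and the scoring metric below are illustrative assumptions, not the patent's method.

```python
def compare_runs(annotators, alternates, documents, score):
    """Return names of annotators whose alternate version scored
    strictly better on the generated run logs."""
    suggestions = []
    for name, annotator in annotators.items():
        base_log = [annotator(d) for d in documents]   # run trace of original
        alt_log = [alternates[name](d) for d in documents]  # trace of alternate
        if score(alt_log) > score(base_log):
            suggestions.append(name)  # instruction to modify this annotator
    return suggestions

# Toy annotators: count of annotations produced per document
docs = ["alpha beta", "gamma"]
annotators = {"token_count": lambda d: len(d.split())}
alternates = {"token_count": lambda d: len(d.split()) + (1 if d else 0)}
print(compare_runs(annotators, alternates, docs, score=sum))
# → ['token_count']
```

In practice the log information would capture richer run traces (errors, coverage, timing) rather than a single count.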
Proceedings ArticleDOI
01 Nov 2020
TL;DR: This work presents a decentralized trusted data and model platform for collaborative AI that leverages blockchain as an immutable metadata store of data and model resources and the operations performed on them, to support and enforce ownership, authenticity, integrity, lineage and auditability properties.
Abstract: Data analytics and artificial intelligence are extensively used by enterprises today and they increasingly span organization boundaries. Such collaboration between organizations today happens in an ad hoc manner, with very little visibility and systemic control on who is accessing the data, how, and for what purpose. When sharing data and AI models with other organizations, the owners desire the ability to control access, have visibility into the entire data pipeline and lineage, and ensure integrity. In this work, we present a decentralized trusted data and model platform for collaborative AI that leverages blockchain as an immutable metadata store of data and model resources and operations performed on them, to support and enforce ownership, authenticity, integrity, lineage and auditability properties. Smart contracts enforce policies specified on data, including hierarchical and composite policies that are uniquely enabled by the use of blockchain. We demonstrate that our system is lightweight and can support over 1000 transactions per second with sub-second latency, significantly lower than the time taken to execute data pipelines.
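The immutability property the platform relies on can be illustrated with a minimal hash-chained ledger: each metadata record commits to its predecessor's hash, so tampering with any past record of the lineage is detectable. This is a sketch of the general technique, not the platform's blockchain implementation; the record fields are assumptions.

```python
import hashlib
import json

def append(ledger, record):
    """Append a metadata record that commits to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append({**body, "hash": digest})

def verify(ledger):
    """Recompute every hash; any tampered record breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        body = {"record": entry["record"], "prev": entry["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"op": "register_dataset", "owner": "org_a"})
append(ledger, {"op": "train_model", "uses": "dataset_1"})
print(verify(ledger))   # → True
ledger[0]["record"]["owner"] = "org_b"
print(verify(ledger))   # → False: lineage tampering is detectable
```

A blockchain adds decentralized consensus and smart-contract policy enforcement on top of this basic tamper-evidence.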

Cited by
Journal ArticleDOI
09 Mar 2018-Science
TL;DR: A large-scale analysis of tweets reveals that false rumors spread further and faster than the truth, and false news was more novel than true news, which suggests that people were more likely to share novel information.
Abstract: We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.

4,241 citations

01 Jan 2012

3,692 citations

21 Jan 2018
TL;DR: In an evaluation of commercial API-based classifiers of gender from facial images, including IBM Watson Visual Recognition, the highest error rates are shown to involve images of dark-skinned women, while the most accurate results are for light-skinned men.
Abstract: The paper “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification” by Joy Buolamwini and Timnit Gebru, that will be presented at the Conference on Fairness, Accountability, and Transparency (FAT*) in February 2018, evaluates three commercial API-based classifiers of gender from facial images, including IBM Watson Visual Recognition. The study finds these services to have recognition capabilities that are not balanced over genders and skin tones [1]. In particular, the authors show that the highest error involves images of dark-skinned women, while the most accurate result is for light-skinned men.

2,528 citations

Posted Content
TL;DR: This survey investigated different real-world applications that have shown biases in various ways, and created a taxonomy for fairness definitions that machine learning researchers have defined to avoid the existing bias in AI systems.
Abstract: With the widespread use of AI systems and applications in our everyday lives, it is important to take fairness issues into consideration while designing and engineering these types of systems. Such systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that the decisions do not reflect discriminatory behavior toward certain groups or populations. We have recently seen work in machine learning, natural language processing, and deep learning that addresses such challenges in different subdomains. With the commercialization of these systems, researchers are becoming aware of the biases that these applications can contain and have attempted to address them. In this survey we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can affect AI applications. We then created a taxonomy for fairness definitions that machine learning researchers have defined in order to avoid the existing bias in AI systems. In addition to that, we examined different domains and subdomains in AI showing what researchers have observed with regard to unfair outcomes in the state-of-the-art methods and how they have tried to address them. There are still many future directions and solutions that can be taken to mitigate the problem of bias in AI systems. We are hoping that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.

1,571 citations