
Showing papers by "Sameep Mehta published in 2012"


Proceedings ArticleDOI
Haggai Roitman1, Jonathan Mamou1, Sameep Mehta1, Aharon Satt1, L. V. Subramaniam1 
02 Nov 2012
TL;DR: A high-level overview of a novel crowd sensing system developed at IBM for the smart cities domain is presented, along with some preliminary results using public safety as an example use case.
Abstract: In this work we discuss the challenge of harnessing the crowd for smart city sensing. Within a city's context, reports by citizen or city-visitor eyewitnesses may provide important information to city officials, in addition to data gathered by more traditional means (e.g., through the city's control center, emergency services, sensors spread across the city, etc.). We present a high-level overview of a novel crowd sensing system developed at IBM for the smart cities domain. As a proof of concept, we present some preliminary results using public safety as our example use case.

62 citations


Patent
Sreyash Kenkre1, Sameep Mehta1, Krishnasuri Narayanam1, Vinayaka Pandit1, Soujanya Soni1 
06 Feb 2012
TL;DR: In this paper, a description of a resource associated with a service of an entity can be captured, where the service can be associated with one or more resources, a constraint, and a demand.
Abstract: A description of a resource associated with a service of an entity can be captured. The service can be associated with one or more resources, a constraint, and a demand. The resource can be associated with one or more characteristics, including a utility, a limited availability, and a consumption rate. The entity can be an organization or a system. An initial allocation problem associated with the resource can be formulated as a two-phase problem. The first phase can be an optimization problem and the second phase can be a restricted allocation problem. The initial allocation problem can be associated with reconfiguring a previously established allocation of a baseline scenario. The optimization problem can be solved optimally or approximately to establish a favorable allocation. The favorable allocation can minimize the reconfiguration cost of the reconfiguring. The baseline scenario can be a normal operation of the service.
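The two-phase structure above can be sketched in a few lines: phase one greedily allocates capacity to demands by utility, and phase two evaluates the reconfiguration cost of moving away from the baseline allocation. The greedy heuristic and the absolute-difference cost are illustrative assumptions, not the patent's formulation.

```python
def phase1_optimize(capacity, demands):
    """Phase 1 (optimization): greedily allocate capacity to demands by utility.
    demands maps a service name to (requested amount, utility)."""
    allocation = {}
    remaining = capacity
    for name, (amount, utility) in sorted(demands.items(), key=lambda kv: -kv[1][1]):
        take = min(amount, remaining)
        if take > 0:
            allocation[name] = take
            remaining -= take
    return allocation

def phase2_reconfiguration_cost(allocation, baseline):
    """Phase 2 (restricted allocation): cost of moving from the baseline
    allocation to the new one, here simply the total amount shifted."""
    keys = set(allocation) | set(baseline)
    return sum(abs(allocation.get(k, 0) - baseline.get(k, 0)) for k in keys)
```

A favorable allocation would then be one that scores well in phase one while keeping the phase-two cost small.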

18 citations


Patent
Sandeep Hans1, Sameep Mehta1, Soujanya Soni1
31 Jul 2012
TL;DR: A data validation service as discussed by the authors provides a user interface to a subscriber of the service via a computer device of the subscriber, receiving, via the user interface, a data validation rule specified by the subscriber and an address of a database subject to the data validation, and generating a configuration file that includes the address of the database and a location of executable code corresponding to the validation rule.
Abstract: A data validation service includes providing a user interface to a subscriber of the service via a computer device of the subscriber, receiving, via the user interface, a data validation rule specified by the subscriber and an address of a database subject to the data validation, and generating a configuration file that includes the address of the database and an address of a location of executable code corresponding to the data validation rule. The data validation service also includes transmitting the configuration file and remote methods to the computer device over the network. The remote methods are configured to execute the data validation rule with respect to the data and compile results of the execution.
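The configuration file described above bundles the database address with the location of executable code for each subscriber-specified rule. A minimal sketch, with a JSON layout and field names that are assumptions rather than the patented format:

```python
import json

def build_validation_config(db_address, rules):
    """Generate the configuration file: the address of the database subject
    to validation, plus the location of executable code for each rule.
    rules maps a rule name to the code location."""
    return json.dumps({
        "database": db_address,
        "rules": [
            {"name": name, "code_location": location}
            for name, location in rules.items()
        ],
    }, indent=2)
```

The service would transmit this file together with the remote methods that execute the rules and compile results.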

12 citations


Patent
30 May 2012
TL;DR: In this article, the authors present a method and associated systems for automatically identifying critical resources in an organization, where an organization creates a model of the dependencies between pairs of resource instances, wherein that model describes how the organization's projects and services are affected when a resource instance becomes unavailable.
Abstract: A method and associated systems for automatically identifying critical resources in an organization. An organization creates a model of the dependencies between pairs of resource instances, wherein that model describes how the organization's projects and services are affected when a resource instance becomes unavailable. This model may be represented as a system of directed graphs. This model may be used to automatically identify a resource instance as “critical” when excessive cost is required to resume all projects and services rendered infeasible by the disruption of that resource instance. This model may also be used to automatically identify a resource instance as “critical for a resource type” when disruption of the resource instance forces the capacity of the resource type available to the entire organization to fall below a threshold value.
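The cost-based criterion above can be sketched directly: flag a resource instance as "critical" when the total cost of resuming every project it renders infeasible exceeds a threshold. The flat project-to-resources mapping and names below are illustrative; the patent models dependencies as a system of directed graphs.

```python
def critical_resources(dependencies, resume_cost, cost_threshold):
    """dependencies: project -> set of resource instances it needs.
    resume_cost: project -> cost to resume it after disruption.
    A resource is critical when resuming all projects it disrupts
    costs more than cost_threshold."""
    critical = set()
    resources = {r for needs in dependencies.values() for r in needs}
    for r in resources:
        affected = [p for p, needs in dependencies.items() if r in needs]
        if sum(resume_cost[p] for p in affected) > cost_threshold:
            critical.add(r)
    return critical
```

The capacity-based criterion ("critical for a resource type") would be an analogous check of remaining type capacity against a threshold.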

7 citations


Patent
30 Nov 2012
TL;DR: In this article, a publicly disseminated media transmission is received and the public influence of the media transmission is measured by identifying one or more media sources used to disseminate the media transmission, and obtaining one or more predetermined influence values associated with the media sources.
Abstract: Methods and arrangements for measuring and utilizing media topic influence. A publicly disseminated media transmission is received. Public influence of the media transmission is measured via: identifying one or more media sources used to disseminate the media transmission; and obtaining one or more predetermined influence values associated with the one or more media sources.
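The measurement step reduces to looking up a predetermined influence value for each identified source and combining them. Simple additive aggregation is an assumption here; the patent does not fix a combination rule.

```python
def media_influence(transmission_sources, influence_values):
    """Aggregate predetermined influence values of the media sources
    used to disseminate a transmission. Unknown sources contribute 0."""
    return sum(influence_values.get(s, 0.0) for s in transmission_sources)
```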

7 citations


Patent
Sameep Mehta1, Rakesh Pimplikar1, Karthik Visweswariah1, Lav R. Varshney1, Amit Singh1 
07 Nov 2012
TL;DR: In this paper, a method, system and computer program product are disclosed for candidate screening, which comprises identifying a multitude of dimensions of a specified job, assigning a weight to each of the dimensions, and determining whether each of a group of candidates satisfies the weights assigned to the dimensions.
Abstract: A method, system and computer program product are disclosed for candidate screening. In one embodiment, the method comprises identifying a multitude of dimensions of a specified job, assigning a weight to each of the dimensions, and determining whether each of a group of candidates satisfies the weights assigned to the dimensions. In an embodiment, a score is assigned to each candidate based on the weights assigned to the dimensions, and the number of candidates that score above a threshold is determined. In an embodiment, if the number of candidates that have a score above the threshold is less than a defined number, one or more of the weights are adjusted, and the candidates are rescored based on these adjusted weights. In embodiments of the invention, the weights are adjusted based on the number of candidates that score above the threshold, or based on the other weights.
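The score-count-adjust loop can be sketched as follows. The uniform boost factor applied to all weights is an assumption for illustration; the patent adjusts weights based on the pass count or on the other weights.

```python
def score(candidate, weights):
    """Weighted score of a candidate across job dimensions."""
    return sum(weights[d] * candidate.get(d, 0.0) for d in weights)

def screen(candidates, weights, threshold, min_count, boost=1.1, max_rounds=10):
    """Rescore with adjusted weights until at least min_count candidates
    score at or above the threshold, or the round budget is exhausted."""
    passing = []
    for _ in range(max_rounds):
        passing = [c for c in candidates if score(c, weights) >= threshold]
        if len(passing) >= min_count:
            break
        weights = {d: w * boost for d, w in weights.items()}
    return passing, weights
```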

4 citations


Proceedings ArticleDOI
24 Jun 2012
TL;DR: This work argues that offering data validation as a service would be a profitable proposition for both parties, provider as well as consumer, and proposes multiple variants of the offering to handle the consumer's privacy concerns.
Abstract: Data validation is one of the most important and, possibly, most under-valued tasks in an organization. Without clean data, an organization cannot employ sophisticated analysis and optimization tools to strive for excellence in operations, delivery, or planning. Organizations have started to realize the value of data and its impact on their efficiency. Typically, they either develop in-house solutions or purchase industry-standard solutions. In this work, we propose an alternative: data validation as a service offering. We argue that such a service would be a profitable proposition for both parties, provider as well as consumer. We present a general framework to enable such an offering. We provide details on one such implementation that we carried out to showcase the viability of the approach. We propose multiple variants of the offering to handle the privacy concerns of the consumer. Finally, we present a set of initial results comparing the different variants.

3 citations


Proceedings ArticleDOI
24 Jul 2012
TL;DR: It is shown how the data collected through surveys can be mapped to already existing enterprise data and how this can reduce the size of such surveys or eliminate them altogether.
Abstract: In this paper we present a conceptual model of employee engagement (EE). We outline various factors that influence EE. Contrary to the common practice of measuring EE by surveys and measuring EE collectively for large groups (usually corresponding to different lines of business), we propose using direct and sensed data for each individual to compute a personalized measure of EE. We show how the data collected through surveys can be mapped to already existing enterprise data and how this can reduce the size of such surveys or eliminate them altogether. The framework also allows us to capture the differences between employees based on personal factors like the number of years of experience and the business unit with which they are associated in computing EE. Since the computation is based on an employee's data, we can point to exact data dimensions that resulted in a low EE score. This enables the organization to take personalized actions for each employee to improve his/her employee engagement. Additionally, with our approach, EE measurement can be a continuous process, as opposed to an irregular periodic one where we might conduct surveys once or twice a year. Thus, we can help reduce the IT and administrative cost of conducting surveys.
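Because the EE score is computed from an individual employee's data, the dimensions that pulled the score down can be reported alongside it, which is what enables the personalized actions described above. A minimal sketch, assuming a simple weighted-average model (the paper's actual model may differ):

```python
def engagement_score(employee_data, weights):
    """Personalized EE score as a weighted average over data dimensions,
    plus the dimensions whose values fall below the overall score
    (candidates for targeted action)."""
    total = sum(weights.values())
    score = sum(weights[d] * employee_data.get(d, 0.0) for d in weights) / total
    low_dims = [d for d in weights if employee_data.get(d, 0.0) < score]
    return score, low_dims
```

Run continuously over enterprise data, this replaces the once-or-twice-a-year survey cycle with an ongoing measurement.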

2 citations


Proceedings ArticleDOI
08 Jul 2012
TL;DR: A metadata-driven, rule-based data validation system is employed, which is domain independent, distributed, scalable, and can easily accommodate changes in business requirements.
Abstract: In this paper we present a system and case study for business data validation in large organizations. Validated and consistent data provides the capability to handle outages and incidents in a more principled fashion and helps in business continuity. Typically, different business units employ separate systems to produce and store their data, and the data owners choose their own technology for database storage. Keeping the data consistent across business units in the organization is a non-trivial task, and the non-availability of consistent data can lead to sub-optimal planning during outages, causing organizations to incur huge financial costs. A traditional custom data validation system fetches the data from various data sources and flows it through a central validation system, resulting in high data transfer costs. Moreover, accommodating a change in business rules is a laborious process: such changes can lead to re-design and re-development of the system, which is a costly and time-consuming activity. In this paper, we employ a metadata-driven, rule-based data validation system, which is domain independent, distributed, scalable, and can easily accommodate changes in business requirements. We have deployed our system in real-life settings and present some of the results in this paper.
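The key property of a metadata-driven engine is that each rule is data rather than code, so a new business requirement becomes a new metadata entry instead of a system redesign. A minimal sketch; the check names and rule fields are hypothetical, not the deployed system's schema:

```python
# Reusable checks, keyed by name; rule metadata selects among them.
CHECKS = {
    "not_null": lambda v: v is not None,
    "positive": lambda v: isinstance(v, (int, float)) and v > 0,
    "max_len": lambda v, n: isinstance(v, str) and len(v) <= n,
}

def validate(record, rules):
    """Apply metadata rules to one record; return (field, check) violations."""
    violations = []
    for rule in rules:
        value = record.get(rule["field"])
        check = CHECKS[rule["check"]]
        args = rule.get("args", [])
        if not check(value, *args):
            violations.append((rule["field"], rule["check"]))
    return violations
```

Because rules are plain metadata, they can also be shipped to where the data lives and evaluated there, avoiding the data transfer cost of a central validator.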

1 citation


Book ChapterDOI
Rakesh Pimplikar1, Sameep Mehta1
12 Nov 2012
TL;DR: A system, RETRAiN, is presented to enable calibration of various components of bank operations; based on real-time data such as waiting customers, service requests, availability of service personnel, and business metrics, it provides recommendations for reconfiguring the operations.
Abstract: Customers in many developing regions (like India) use the physical bank branch as their primary and preferred banking channel, resulting in high footfall in the branch. This leads to long customer wait times and high pressure on the organization's resources, adversely impacting customer satisfaction (CSAT) as well as employee satisfaction (ESAT). A naive solution is to increase the number of service personnel to cater to the customers. However, this is an unviable alternative because it impacts the top and bottom lines of the bank. Therefore, organizations are strategically looking for intelligent systems that can help fine-tune the overall business process to maximize their business objectives while requiring little or no investment. Towards this end, we present a system, RETRAiN, that enables such calibration of various components of bank operations. Based on real-time data such as waiting customers, service requests, availability of service personnel, and business metrics, the system provides recommendations for reconfiguring the operations. The reconfiguration includes selection of the scheduling policy, the number of service personnel, and the configuration of service personnel. We present the overall system along with the analysis and optimization algorithms for generating the recommendations. To showcase the efficacy and usefulness of our system, we present results based on data collected over a period of four months from multiple branches of a leading bank in India.
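A toy reconfiguration rule in the spirit of the system described above: staff enough tellers to clear the current queue within a target wait, and switch scheduling policy when the queue is long relative to current capacity. All formulas and thresholds here are illustrative assumptions, not RETRAiN's actual analysis or optimization algorithms.

```python
import math

def recommend(waiting, service_rate_per_teller, target_wait, staffed):
    """waiting: customers in queue; service_rate_per_teller: customers
    served per teller per minute; target_wait: minutes to clear the queue;
    staffed: tellers currently on duty. Returns a reconfiguration."""
    needed = math.ceil(waiting / (service_rate_per_teller * target_wait))
    policy = "priority" if waiting > 2 * staffed * service_rate_per_teller else "fifo"
    return {"tellers": max(needed, 1), "policy": policy}
```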