
Showing papers by Sameep Mehta published in 2008


Proceedings ArticleDOI
Sameep Mehta, Anindya Neogi
07 Apr 2008
TL;DR: This paper describes a consolidation recommendation tool, called ReCon, that takes the static and dynamic costs of the given servers, the costs of VM migration, and historical resource consumption data from the existing environment, and provides an optimal dynamic plan of VM-to-physical-server mappings over time.
Abstract: Renewed focus on virtualization technologies and increased awareness about the management and power costs of running under-utilized servers have spurred interest in consolidating existing applications on a smaller number of servers in the data center. The ability to migrate virtual machines dynamically between physical servers in real time has also added a dynamic aspect to consolidation. However, there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation, especially in the dynamic setting. In this paper we describe such a consolidation recommendation tool, called ReCon. ReCon takes the static and dynamic costs of the given servers, the costs of VM migration, and the historical resource consumption data from the existing environment, and provides an optimal dynamic plan of VM to physical server mapping over time. We also present the results of applying the tool on historical data obtained from a large production environment.
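
In case a sketch helps make the planning step concrete: the following is a minimal, illustrative Python sketch of interval-by-interval consolidation that charges a per-server running cost plus a migration cost whenever a VM changes hosts. It is not ReCon's actual optimization; the first-fit-decreasing heuristic, the single-resource capacity model, and all names and numbers are assumptions made for demonstration.

```python
# Illustrative sketch only: a greedy, interval-by-interval consolidation plan.
# ReCon's actual formulation (an optimization over static/dynamic server costs
# and migration costs) is not reproduced here; names and numbers are assumed.

def plan_consolidation(demands, capacity, migration_cost, server_cost):
    """demands: {vm: [cpu demand per interval]}; returns (plan, total cost)."""
    num_intervals = len(next(iter(demands.values())))
    prev_host = {}          # VM -> server index in the previous interval
    total_cost = 0.0
    plan = []               # one {vm: server index} mapping per interval

    for t in range(num_intervals):
        # Pack the largest VMs first (first-fit-decreasing heuristic).
        vms = sorted(demands, key=lambda v: demands[v][t], reverse=True)
        loads, mapping = [], {}
        for vm in vms:
            need = demands[vm][t]
            host = next((i for i, load in enumerate(loads)
                         if load + need <= capacity), None)
            if host is None:            # open another physical server
                loads.append(0.0)
                host = len(loads) - 1
            loads[host] += need
            mapping[vm] = host
        # Cost for this interval: running servers plus any migrations.
        migrations = sum(1 for vm in vms
                         if vm in prev_host and prev_host[vm] != mapping[vm])
        total_cost += len(loads) * server_cost + migrations * migration_cost
        prev_host = mapping
        plan.append(mapping)
    return plan, total_cost


if __name__ == "__main__":
    demands = {"vm1": [0.6, 0.2], "vm2": [0.5, 0.3], "vm3": [0.3, 0.3]}
    print(plan_consolidation(demands, capacity=1.0,
                             migration_cost=0.1, server_cost=1.0))
```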

77 citations


Patent
Ramesh Baskaran, Sameep Mehta, Anindya Neogi, Vinayaka Pandit, Gyana R. Parija, Akshat Verma
03 Jul 2008
TL;DR: In this article, a plurality of application profiles are obtained for a plurality of applications, and a recommended server configuration for running the applications is generated by formulating and solving a bin packing problem, where each of the at least two different kinds of servers is treated as a bin of a different size and each application is treated as an item, with an associated size, to be packed into the bins.
Abstract: A plurality of application profiles are obtained for a plurality of applications. Each of the profiles specifies a list of resources, and requirements for each of the resources, associated with a corresponding one of the applications. Specification of a plurality of constraints associated with the applications is facilitated, as is obtaining a plurality of cost models associated with at least two different kinds of servers on which the applications are to run. A recommended server configuration is generated for running the applications, by formulating and solving a bin packing problem. Each of the at least two different kinds of servers is treated as a bin of a different size, based on its capacity, and has an acquisition cost associated therewith. Each of the applications is treated as an item, with an associated size, to be packed into the bins; the size is substantially equal to a corresponding one of the resource requirements as given by a corresponding one of the application profiles. The bin packing problem develops the recommended server configuration based on reducing a total acquisition cost while satisfying the constraints and the sizes of the applications.
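
As a rough illustration of the bin packing formulation described above, the sketch below applies a first-fit-decreasing heuristic: applications (items) are placed onto already-purchased servers (bins), and a new server of the cheapest feasible type is acquired when nothing fits. The server types, capacities, and acquisition costs are invented, and the patent does not necessarily solve the problem with this particular heuristic.

```python
# Hedged sketch of the server-sizing bin packing described in the abstract.
# Server types, capacities and acquisition costs are made up for illustration.

def recommend_servers(app_demands, server_types):
    """app_demands: {app: resource requirement};
    server_types: {type: (capacity, acquisition_cost)};
    returns a list of (server_type, [apps]) and the total acquisition cost."""
    servers = []  # each entry: [type, remaining capacity, [apps]]
    # Place the largest applications first (first-fit decreasing).
    for app in sorted(app_demands, key=app_demands.get, reverse=True):
        need = app_demands[app]
        # First fit: reuse an already-purchased server if it still has room.
        slot = next((s for s in servers if s[1] >= need), None)
        if slot is None:
            # Buy the cheapest server type that can hold this application.
            feasible = [(cost, cap, t) for t, (cap, cost) in server_types.items()
                        if cap >= need]
            cost, cap, t = min(feasible)
            slot = [t, cap, []]
            servers.append(slot)
        slot[1] -= need
        slot[2].append(app)
    total = sum(server_types[t][1] for t, _, _ in servers)
    return [(t, apps) for t, _, apps in servers], total


if __name__ == "__main__":
    apps = {"web": 4, "db": 8, "cache": 2, "batch": 6}
    types = {"small": (8, 1000), "large": (16, 1600)}
    print(recommend_servers(apps, types))
```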

53 citations


Proceedings ArticleDOI
24 Aug 2008
TL;DR: A visual-analytic tool is described for interrogating evolving interaction network data, such as those found in social, bibliometric, WWW and biological applications; it incorporates common visualization paradigms such as zooming, coarsening and filtering while naturally integrating information extracted by a previously described event-driven framework for characterizing the evolution of such networks.
Abstract: In this article we describe a visual-analytic tool for the interrogation of evolving interaction network data such as those found in social, bibliometric, WWW and biological applications. The tool we have developed incorporates common visualization paradigms such as zooming, coarsening and filtering while naturally integrating information extracted by a previously described event-driven framework for characterizing the evolution of such networks. The visual front-end provides features that are specifically useful in the analysis of interaction networks, capturing the dynamic nature of both individual entities as well as interactions among them. The tool provides the user with the option of selecting multiple views, designed to capture different aspects of the evolving graph from the perspective of a node, a community or a subset of nodes of interest. Standard visual templates and cues are used to highlight critical changes that have occurred during the evolution of the network. A key challenge we address in this work is that of scalability: handling large graphs both in terms of the efficiency of the back-end, and in terms of the efficiency of the visual layout and rendering. Two case studies based on bibliometric and Wikipedia data are presented to demonstrate the utility of the toolkit for visual knowledge discovery.
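
The event-driven framework referenced above is described in the authors' earlier work and is not reproduced here; as a hedged sketch of the kind of information such a visual front-end might consume, the snippet below labels community-level events between two network snapshots using a simple Jaccard-overlap rule. The threshold and the labelling rules are assumptions, not the framework's actual definitions.

```python
# Illustrative only: label community events (create, merge, split, continue,
# cease) between two snapshots so a visual front-end could highlight them.

def jaccard(a, b):
    return len(a & b) / len(a | b)

def label_events(old_communities, new_communities, threshold=0.3):
    """Both arguments are lists of node sets; returns a list of event strings."""
    events = []
    for j, new in enumerate(new_communities):
        # Which old communities does this new community significantly overlap?
        parents = [i for i, old in enumerate(old_communities)
                   if jaccard(old, new) >= threshold]
        if not parents:
            events.append(f"create: new community {j}")
        elif len(parents) > 1:
            events.append(f"merge: {parents} -> {j}")
    for i, old in enumerate(old_communities):
        children = [j for j, new in enumerate(new_communities)
                    if jaccard(old, new) >= threshold]
        if not children:
            events.append(f"cease: community {i} dissolved")
        elif len(children) > 1:
            events.append(f"split: {i} -> {children}")
        else:
            events.append(f"continue: {i} -> {children[0]}")
    return events


if __name__ == "__main__":
    old = [{1, 2, 3, 4}, {5, 6, 7}]
    new = [{1, 2, 3, 4, 5, 6, 7}, {8, 9}]
    print(label_events(old, new))
```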

42 citations


Proceedings ArticleDOI
30 Mar 2008
TL;DR: This work proposes and models a simple and comprehensive set of transformations that capture the evolution of a single actor and interactions among multiple actors, presents algorithms to rank each transformation, and shows how ranking helps to infer important relationships between actors and stories in a corpus.
Abstract: The natural way to model a news corpus is as a directed graph where stories are linked to one another through a variety of relationships. We formalize this notion by viewing each news story as a set of actors, and by viewing links between stories as transformations these actors go through. We propose and model a simple and comprehensive set of transformations: create, merge, split, continue, and cease. These transformations capture the evolution of a single actor and interactions among multiple actors. We present algorithms to rank each transformation and show how ranking helps us to infer important relationships between actors and stories in a corpus. We demonstrate the effectiveness of our notions by experimenting on large news corpora.
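
As an illustration only (the paper's ranking algorithms are not reproduced), the sketch below labels a directed link between two stories, each viewed as a set of actors, with one of the transformations named in the abstract. The overlap rules are assumptions chosen for clarity; cease is omitted because it would apply to a story with no outgoing link rather than to a story pair.

```python
# Hedged sketch: classify a link between two stories (sets of actor names)
# with one of the transformations named in the abstract. The rules below are
# illustrative assumptions, not the paper's actual definitions.

def classify_link(earlier_actors, later_actors):
    """Label the transformation between an earlier and a later story."""
    common = earlier_actors & later_actors
    if not common:
        # No shared actors: the later story introduces its cast from scratch.
        return "create"
    if later_actors == earlier_actors:
        return "continue"
    if later_actors < earlier_actors:
        return "split"        # the later story follows a subset of the actors
    if earlier_actors < later_actors:
        return "merge"        # the later story folds in additional actors
    return "continue"         # partial overlap: treated as continuation here


if __name__ == "__main__":
    s1 = {"actor_a", "actor_b", "actor_c"}
    s2 = {"actor_a", "actor_b"}
    print(classify_link(s1, s2))   # split
    print(classify_link(s2, s1))   # merge
```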

17 citations


Patent
Tarun Kumar, Sameep Mehta, Vinayaka Pandit, Gyana R. Parija, Anupam Saronwala, Rashmi Singh
28 Apr 2008
TL;DR: In this paper, a method and system are described for planning a workforce headcount for a given business process. The method comprises the steps of providing as inputs i) productivity ramp-ups to model the level of experience and to measure the performance of both new hires and current employees, and ii) industry/market attrition rates for employees, and performing an evaluation, using said inputs, of at least one given management objective.
Abstract: A method and system are disclosed for planning a workforce headcount for a given business process. The method comprises the steps of providing as inputs, i) productivity ramp-ups to model the level of experience and to measure the performance of both new hires and current employees, and ii) industry/market attrition rates for employees; and performing an evaluation, using said inputs, of at least one given management objective. On the basis of this evaluation, a future hiring and transition plan is provided for the given business process for a defined period of time. In the preferred embodiment of the invention, uncertainty is associated with one or more of the inputs, and the future hiring and transition plan is provided by using stochastic programming to model the uncertainty associated with at least one of said one or more of the inputs.
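
For intuition about how ramp-ups and attrition shape the evaluation, here is a deliberately simplified, deterministic Python sketch that tracks the effective capacity of hiring cohorts over time. The patent handles uncertainty in these inputs with stochastic programming, which is not reproduced; the ramp-up curve, attrition rate, and demand figures are all assumed.

```python
# Deterministic toy sketch only: the patent models input uncertainty with
# stochastic programming, which is not shown here. All numbers are assumed.

def simulate_headcount(periods, demand, hires_per_period,
                       attrition_rate, ramp_up):
    """Track the effective capacity of a workforce over time.

    ramp_up[k] is the productivity (0..1) of an employee in their k-th period
    of tenure; employees past the ramp-up are fully productive."""
    cohorts = []          # one entry per hiring cohort: [headcount, tenure]
    for t in range(periods):
        cohorts.append([hires_per_period[t], 0])           # new hires join
        capacity = 0.0
        for cohort in cohorts:
            cohort[0] *= (1.0 - attrition_rate)            # market attrition
            productivity = (ramp_up[cohort[1]]
                            if cohort[1] < len(ramp_up) else 1.0)
            capacity += cohort[0] * productivity
            cohort[1] += 1                                  # one period older
        gap = demand[t] - capacity
        print(f"period {t}: capacity={capacity:.1f}, "
              f"demand={demand[t]}, gap={gap:+.1f}")


if __name__ == "__main__":
    simulate_headcount(periods=4,
                       demand=[40, 45, 50, 55],
                       hires_per_period=[50, 10, 10, 10],
                       attrition_rate=0.05,
                       ramp_up=[0.5, 0.8])   # 50% productive at first, then 80%
```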

5 citations


Proceedings ArticleDOI
26 Oct 2008
TL;DR: A general framework to quantify changes in temporally evolving data: various factors that influence the importance of each transformation are identified and then combined using a weight vector, which encapsulates domain knowledge.
Abstract: In this paper, we present a general framework to quantify changes in temporally evolving data. We focus on changes that materialize due to evolution and interactions of features extracted from the data. The changes are captured by the following key transformations: create, merge, split, continue, and cease. First, we identify various factors which influence the importance of each transformation. These factors are then combined using a weight vector. The weight vector encapsulates domain knowledge. We evaluate our algorithm using the following datasets: DBLP, IMDB, Text and Scientific Dataset.
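
A minimal sketch of the weighting step, assuming the per-transformation factors have already been computed as numeric scores: the weight vector combines them linearly, and transformations are ranked by the combined score. The factor names and weights below are invented and are not the paper's actual definitions.

```python
# Minimal sketch: combine per-factor scores with a domain-specific weight
# vector to quantify each transformation. Factor names and weights are assumed.

FACTORS = ["size_of_entities", "overlap_with_past", "feature_stability"]

def score_transformation(factor_scores, weights):
    """Combine per-factor scores with a weight vector (linear combination)."""
    assert len(factor_scores) == len(weights) == len(FACTORS)
    return sum(s * w for s, w in zip(factor_scores, weights))

def rank_transformations(transformations, weights):
    """transformations: {name: [factor scores]}; returns names by importance."""
    return sorted(transformations,
                  key=lambda t: score_transformation(transformations[t], weights),
                  reverse=True)


if __name__ == "__main__":
    # The weight vector encodes domain knowledge, e.g. overlap matters most here.
    weights = [0.2, 0.6, 0.2]
    transformations = {
        "merge(A,B)": [0.9, 0.7, 0.4],
        "split(C)":   [0.5, 0.9, 0.8],
        "create(D)":  [0.3, 0.0, 0.0],
    }
    print(rank_transformations(transformations, weights))
```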

3 citations