scispace - formally typeset
Author

Rashmi Phalnikar

Other affiliations: College of Engineering, Pune
Bio: Rashmi Phalnikar is an academic researcher from Massachusetts Institute of Technology. The author has contributed to research in topics: Software system & Image retrieval. The author has an h-index of 5 and has co-authored 29 publications receiving 180 citations. Previous affiliations of Rashmi Phalnikar include College of Engineering, Pune.

Papers
Journal ArticleDOI
TL;DR: “Requirement analysis to Provide Instant Diagrams (RAPID)” is a desktop tool that assists requirements analysts and Software Engineering students in analyzing textual requirements, finding core concepts and their relationships, and extracting UML diagrams.
Abstract: The process of generating UML diagrams from natural language specifications is a highly challenging task. This paper proposes a method and tool to facilitate the requirements analysis process and extract UML diagrams from textual requirements using natural language processing (NLP) and domain ontology techniques. Requirements engineers analyze requirements manually to understand the scope of the system. The time spent on the analysis and the low quality of human analysis justify the need for a tool that supports a better understanding of the system. “Requirement analysis to Provide Instant Diagrams (RAPID)” is a desktop tool that assists requirements analysts and Software Engineering students in analyzing textual requirements, finding core concepts and their relationships, and extracting UML diagrams. The evaluation of the RAPID system is in progress and will be conducted through two forms of evaluation: experimental and expert evaluation.
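The abstract does not describe RAPID's extraction pipeline in detail; as a rough, hypothetical sketch of its first step (finding candidate class concepts in a requirement sentence), a naive heuristic might pick out the nouns that follow articles. Real pipelines like RAPID's use POS tagging and domain ontologies instead; this stdlib-only toy only illustrates the idea.

```python
import re

STOPWORDS = {"the", "a", "an", "shall", "be", "able", "to", "of", "and"}

def candidate_classes(requirement: str) -> list:
    """Very rough heuristic: treat non-stopword tokens that follow an
    article ('a', 'an', 'the') as candidate class/concept names."""
    tokens = re.findall(r"[A-Za-z]+", requirement.lower())
    candidates = []
    for prev, word in zip(tokens, tokens[1:]):
        if prev in {"a", "an", "the"} and word not in STOPWORDS:
            candidates.append(word.capitalize())
    # preserve first-seen order, drop duplicates
    return list(dict.fromkeys(candidates))

print(candidate_classes("The customer shall place an order through the catalog"))
# prints ['Customer', 'Order', 'Catalog']
```

Each surviving candidate would then become a class box in the generated UML diagram, with verbs between candidates suggesting associations.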

60 citations

Proceedings ArticleDOI
22 Jan 2009
TL;DR: The benefits of using the Agile process and Scrum, the iterative incremental process, in distributed software development are described, and two team structures for its implementation are proposed.
Abstract: The necessity of finding people with the right skills, sharing resources, and limiting cost has made distributed software development indispensable. In a distributed development project, teams are separated geographically but work collaboratively toward the outcome. Such offshore service providers follow traditional process models. Agile practices promote development iterations, open collaboration, and process adaptability throughout the life cycle of the project. Adopting these practices in a distributed environment can help distributed development tackle the challenges of cultural incompatibility, leadership struggles, and lack of trust. This paper describes the benefits of using the Agile process and Scrum, the iterative incremental process, in distributed software development, and proposes two team structures for its implementation.

48 citations

Proceedings ArticleDOI
01 Aug 2018
TL;DR: This work predicts heart disease from a dataset using the Naïve Bayes and KNN algorithms, and proposes disease risk prediction using structured data, which answers disease-related questions that people face in their lives.
Abstract: Data analysis plays a significant role in handling the large amounts of data in healthcare. Previous medical research focused on handling and assimilating huge amounts of hospital data rather than on prediction. Due to the enormous data growth in the biomedical and healthcare fields, accurate analysis of medical data has become valuable for earlier detection of disease and for patient care. However, accuracy decreases when the medical data is partially missing. To overcome the problem of missing medical data, we perform data cleaning and imputation to transform incomplete data into complete data. We predict heart disease from the dataset with the help of the Naive Bayes and KNN algorithms. To extend this work, we propose disease risk prediction using structured data, based on a convolutional neural network based unimodal disease risk prediction (CNN-UDRP) algorithm whose prediction accuracy reaches more than 65%. Moreover, this system answers disease-related questions that people face in their lives.
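The abstract names two of its building blocks, imputation of missing values and KNN classification, without showing them. A minimal stdlib-only sketch of both steps might look like the following; the feature columns and data values are invented for illustration and are not from the paper's dataset.

```python
import math

def impute_means(rows):
    """Replace None entries with the column mean (simple mean imputation,
    one common way to 'transform incomplete data into complete data')."""
    cols = len(rows[0])
    means = []
    for j in range(cols):
        vals = [r[j] for r in rows if r[j] is not None]
        means.append(sum(vals) / len(vals))
    return [[means[j] if r[j] is None else r[j] for j in range(cols)]
            for r in rows]

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training rows
    (Euclidean distance)."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train_x, train_y))
    top = [y for _, y in dists[:k]]
    return max(set(top), key=top.count)

# Hypothetical rows: [age, cholesterol], label 1 = heart disease present.
train_x = [[63, 233], [37, 250], [41, 204], [56, 294]]
train_y = [1, 0, 0, 1]
print(knn_predict(train_x, train_y, [60, 240], k=3))  # prints 0
```

In practice features should be scaled before computing distances, since otherwise the attribute with the largest numeric range dominates the vote.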

41 citations

Proceedings ArticleDOI
01 Dec 2012
TL;DR: An improved Apriori algorithm based on a compressed transaction database, where the database is compressed according to the consequence of interest, is proposed.
Abstract: Association rule mining is used to uncover closely related item sets in transactions for deciding business policies. The Apriori algorithm is widely adopted in association rule mining for generating closely related item sets. The traditional Apriori algorithm is space- and time-consuming since it requires repeated scanning of the whole transaction database. In this paper we propose an improved Apriori algorithm based on a compressed transaction database, where the database is compressed according to the consequence of interest.
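The paper's specific compression (by "consequence of interest") is not detailed in the abstract. As a simpler illustration of the same cost-saving idea, the sketch below compresses the database by merging duplicate transactions into counts, so each Apriori pass scans each unique transaction only once; this is an assumed stand-in, not the paper's algorithm.

```python
from collections import Counter

def compress(transactions):
    """Merge duplicate transactions into {itemset: count} so each unique
    transaction is scanned once per pass instead of once per occurrence."""
    return Counter(frozenset(t) for t in transactions)

def apriori(transactions, min_support):
    """Frequent-itemset mining over the compressed database."""
    db = compress(transactions)
    total = sum(db.values())
    # Pass 1: frequent single items.
    counts = Counter()
    for t, n in db.items():
        for item in t:
            counts[frozenset([item])] += n
    frequent = {s for s, c in counts.items() if c / total >= min_support}
    result = set(frequent)
    k = 2
    while frequent:
        # Candidate k-itemsets: unions of frequent (k-1)-itemsets.
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        counts = Counter()
        for t, n in db.items():
            for c in candidates:
                if c <= t:
                    counts[c] += n
        frequent = {s for s, c in counts.items() if c / total >= min_support}
        result |= frequent
        k += 1
    return result

db = [["milk", "bread"], ["milk", "bread"], ["milk", "eggs"], ["bread"]]
print(apriori(db, 0.5))  # {milk}, {bread}, and {milk, bread} are frequent
```

The compression pays off whenever transactions repeat, which is common in retail data; the downward-closure pruning (candidates built only from frequent subsets) is standard Apriori.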

14 citations

Proceedings ArticleDOI
01 Oct 2012
TL;DR: This paper argues that the user's NFRs should be expressed explicitly to aid in selecting a web service that best fulfils them, and presents a detailed survey of web service discovery systems that make use of QoS factors.
Abstract: As more and more web services that provide the same functionality are developed, it is a major issue to consider not only the functional requirements but also the non-functional requirements (NFRs) during the web service discovery process. The NFRs define the Quality of Service (QoS) of the system. Much work has been done on web service discovery processes that consider the functional requirements as selection criteria; however, not many service discovery strategies consider the QoS properties. On the basis of our study and survey, we advocate that the best performance is achieved by using both functional and non-functional properties. We believe that the user's NFRs should be expressed explicitly to aid in selecting a web service that best fulfils them. To support this claim we present a detailed survey of web service discovery systems that make use of QoS factors. The paper also highlights the importance of ontology for representing non-functional requirements in web service discovery. We emphasize how an ontology, if built and refined by domain experts, can be used in the web service discovery process for reuse and improved design.
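To make the claim concrete, a QoS-aware discovery step typically ranks the functionally-matching candidates by a weighted sum of their QoS attributes. The sketch below is a generic illustration of that pattern, not a system from the survey; service names, attributes, and weights are invented, and values are assumed pre-normalized to [0, 1] with higher meaning better (a "lower is better" attribute like response time would need inverting first).

```python
def rank_services(candidates, weights):
    """Rank functionally-equivalent services by a weighted sum of their
    normalized QoS attributes (the user's explicit NFR preferences)."""
    def score(service):
        return sum(w * service["qos"][attr] for attr, w in weights.items())
    return sorted(candidates, key=score, reverse=True)

# Hypothetical candidates offering the same functionality.
services = [
    {"name": "WeatherA", "qos": {"availability": 0.99, "reliability": 0.70}},
    {"name": "WeatherB", "qos": {"availability": 0.95, "reliability": 0.90}},
]
# The user's NFRs expressed explicitly as attribute weights.
ranked = rank_services(services, {"availability": 0.5, "reliability": 0.5})
print(ranked[0]["name"])  # prints WeatherB
```

An ontology's role in this scheme is to ensure that the attribute names in the request and in each service's advertisement refer to the same concepts before the scores are compared.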

13 citations


Cited by

Journal Article
TL;DR: AspectJ is a simple and practical aspect-oriented extension to Java. With just a few new constructs, AspectJ provides support for modular implementation of a range of crosscutting concerns.
Abstract: AspectJ is a simple and practical aspect-oriented extension to Java. With just a few new constructs, AspectJ provides support for modular implementation of a range of crosscutting concerns. In AspectJ's dynamic join point model, join points are well-defined points in the execution of the program; pointcuts are collections of join points; advice are special method-like constructs that can be attached to pointcuts; and aspects are modular units of crosscutting implementation, comprising pointcuts, advice, and ordinary Java member declarations. AspectJ code is compiled into standard Java bytecode. Simple extensions to existing Java development environments make it possible to browse the crosscutting structure of aspects in the same kind of way as one browses the inheritance structure of classes. Several examples show that AspectJ is powerful, and that programs written using it are easy to understand.
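AspectJ itself is a Java extension, but the idea of "advice attached at join points" can be loosely illustrated in Python with a decorator: the function call is the join point, and the wrapper plays the role of before/after advice woven around it. This is only an analogy, a hypothetical sketch of the concept rather than AspectJ's actual pointcut language.

```python
import functools

def logged(func):
    """Weave 'before' and 'after' logging advice around a function call --
    a rough Python analogue of an AspectJ tracing aspect."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"entering {func.__name__}")   # before advice
        result = func(*args, **kwargs)
        print(f"leaving {func.__name__}")    # after advice
        return result
    return wrapper

@logged
def transfer(amount):
    """Hypothetical business method; logging stays out of its body."""
    return amount

transfer(10)  # prints entering/leaving around the call, returns 10
```

The crosscutting benefit is the same as the abstract describes: the logging concern lives in one modular unit instead of being scattered through every method it touches.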

2,947 citations

Proceedings ArticleDOI
23 Aug 2010
TL;DR: The analysis revealed that in most cases agile practices were modified with respect to the context and situational requirements, indicating the need for future research on how to integrate all experiences and practices in a way to assist practitioners when setting up non-collocated agile projects.
Abstract: This paper presents the results of systematically reviewing the current research literature on the use of agile practices and lean software development in global software engineering (GSE). The primary purpose is to highlight under which circumstances they have been applied efficiently. Some common terms related to agile practices (e.g. scrum, extreme programming) were considered in formulating the search strings, along with a number of alternatives for GSE such as offshoring, outsourcing, and virtual teams. The results were limited to peer-reviewed conference papers/journal articles, published between 1999 and 2009. The synthesis was made through classifying the papers into different categories (e.g. research type, distribution). The analysis revealed that in most cases agile practices were modified with respect to the context and situational requirements. This indicates the need for future research on how to integrate all experiences and practices in a way to assist practitioners when setting up non-collocated agile projects.

186 citations

Journal ArticleDOI
TL;DR: This paper proposes two novel prediction models, which are capable of using the context information of users and services respectively, and proposes an ensemble model to combine the results of the two models.
Abstract: QoS prediction is one of the key problems in Web service recommendation and selection. The context information is a dominant factor affecting QoS, but is ignored by most existing works. In this paper, we employ the context information, from both the user side and the service side, to achieve superior QoS prediction accuracy. We propose two novel prediction models, which are capable of using the context information of users and services respectively. On the user side, we use the geographical information as the user context, and identify similar neighbors for each user based on the similarity of their context. We study the mapping relationship between the similarity value and the geographical distance. On the service side, we use the affiliation information as the service context, including the company affiliation and country affiliation. In the two models, the prediction value is learned from the QoS records of a user (or a service) and its neighbors. Also, we propose an ensemble model to combine the results of the two models. We conduct comprehensive experiments on two real-world datasets, and the experimental results demonstrate the effectiveness of our models.
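The core of such neighbor-based models is a similarity-weighted average: the unknown QoS value for a user is estimated from the values its context-similar neighbors observed for the same service. The sketch below shows only that aggregation step with invented numbers; the paper's actual models additionally learn how geographical distance maps to similarity, which is omitted here.

```python
def predict_qos(neighbor_values, neighbor_sims):
    """Predict a user's QoS value for a service as the similarity-weighted
    average of the values observed by context-similar neighbors."""
    num = sum(sim * neighbor_values[u] for u, sim in neighbor_sims.items())
    den = sum(neighbor_sims.values())
    return num / den

# Hypothetical response times (seconds) two neighbors observed for a service,
# and their context similarity to the target user.
values = {"u1": 1.2, "u2": 0.8}
sims = {"u1": 0.9, "u2": 0.3}
print(predict_qos(values, sims))  # weighted toward the more similar u1
```

Because the weights come from context similarity rather than co-rated services alone, the estimate can lean on the geographically (or organizationally) closest neighbors even when rating overlap is sparse.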

108 citations