scispace - formally typeset
Author

Shridevi Karande

Bio: Shridevi Karande is an academic researcher from Maharashtra Institute of Technology. The author has contributed to research in topics: Cloud computing & Encryption. The author has an h-index of 2 and has co-authored 4 publications receiving 16 citations.

Papers
Proceedings ArticleDOI
25 May 2016
TL;DR: This project intends to overcome these obstacles by building a user-friendly SaaS platform that stores data in Amazon S3 and uses Amazon EMR with the MapReduce paradigm and the open-source R scripting language to analyze big data within the desired time.
Abstract: With enterprises collecting feedback down to every possible detail, data repositories are being flooded with information. To extract valuable information, these data must be processed using sophisticated statistical analysis. Traditional analytical tools, existing statistical software, and data management systems find it challenging to perform deep analysis on large data libraries. Users demand a service platform that can store and handle large quantities of data while being easily accessible, fast, durable, and secure, without heavy spending on hardware, upgrades, or configuration to analyze big data. This project intends to overcome these obstacles by building a user-friendly SaaS platform: a cloud-based web application that stores data in Amazon S3. Because the system dynamically optimizes the cluster size for the desired completion time, the user does not need to estimate the number of nodes. The system uses Amazon EMR and the MapReduce paradigm with the open-source R scripting language to analyze big data within the desired time.
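The abstract above sketches an architecture (data in Amazon S3, analysis on Amazon EMR via MapReduce with R scripts) without implementation details. Purely as an illustration, the following hedged Python sketch shows how such an analysis step might be submitted to an existing EMR cluster with boto3; the cluster ID, bucket paths, and R script names are placeholders, not taken from the paper.

```python
# Hypothetical sketch: submit a Hadoop-streaming step to an existing EMR cluster
# that runs user-supplied R mapper/reducer scripts over data stored in S3.
# The cluster ID, bucket, and script names are illustrative placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

step = {
    "Name": "R-based big data analysis",
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "command-runner.jar",
        "Args": [
            "hadoop-streaming",
            "-files", "s3://example-bucket/scripts/mapper.R,s3://example-bucket/scripts/reducer.R",
            "-mapper", "mapper.R",
            "-reducer", "reducer.R",
            "-input", "s3://example-bucket/input/",
            "-output", "s3://example-bucket/output/run-001/",
        ],
    },
}

response = emr.add_job_flow_steps(JobFlowId="j-EXAMPLECLUSTER", Steps=[step])
print("Submitted step:", response["StepIds"][0])
```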

9 citations

Proceedings ArticleDOI
26 Feb 2015
TL;DR: The objective is to build a SaaS (Software-as-a-Service) analytics platform that stores and analyzes big data using open-source Apache Hadoop and open-source R software; its main benefits are user friendliness and low cost.
Abstract: Institutions in engineering, scientific research, health care, commerce, banking, and computer research have an immense need to perform complicated statistical analysis of big data. However, the limitations of widely used desktop software such as R, Excel, Minitab, and SPSS restrict a researcher's ability to deal with big data. Big data analytics tools such as IBM BigInsights, Revolution Analytics, and Tableau are commercial and heavily licensed. Moreover, to deal with big data, the client still has to invest in infrastructure and in the installation and maintenance of a Hadoop cluster to deploy these analytical tools. Apache Hadoop is an open-source distributed computing framework that uses commodity hardware. With this project, I intend to combine Apache Hadoop and R software on the cloud. The objective is to build a SaaS (Software-as-a-Service) analytics platform that stores and analyzes big data using open-source Apache Hadoop and open-source R software. The benefits of this cloud-based big data analytical service are user friendliness and low cost, as it is developed using open-source software. The system is cloud based, so each user has their own space in the cloud where they can store data. Users can browse data, files, and folders through a browser, arrange datasets, select and analyze a dataset, and store the result back to cloud storage. An enterprise with a cloud environment can save the cost of hardware, software upgrades, maintenance, and network configuration, making it more economical.
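As a rough illustration of the workflow described above (a user selects a dataset, an R analysis runs on Hadoop, and the result is written back to the user's storage), here is a hedged Python sketch of a backend helper that launches a Hadoop-streaming job with an R mapper. The jar path, HDFS layout, and function name are assumptions made for the sketch, not details from the paper.

```python
# Hypothetical sketch: how a SaaS backend might launch a Hadoop-streaming job
# that applies a user-selected R script to a dataset in HDFS and writes the
# result back to the user's storage area. Paths and names are placeholders;
# the streaming jar location varies by distribution.
import subprocess

def run_r_analysis(user_id: str, dataset: str, r_script: str) -> str:
    """Run an R mapper over the user's dataset and return the HDFS output path."""
    output_path = f"/user/{user_id}/results/{dataset}-analysis"
    cmd = [
        "hadoop", "jar", "/usr/lib/hadoop/hadoop-streaming.jar",
        "-files", r_script,                            # ship the R script to the nodes
        "-mapper", f"Rscript {r_script.split('/')[-1]}",
        "-reducer", "cat",                             # identity reducer; analysis in the mapper
        "-input", f"/user/{user_id}/data/{dataset}",
        "-output", output_path,
    ]
    subprocess.run(cmd, check=True)
    return output_path

# Example call (placeholder arguments):
# run_r_analysis("alice", "sales.csv", "/opt/scripts/summary.R")
```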

8 citations

Book ChapterDOI
01 Jan 2019
TL;DR: Cloud servers help share data among people within or outside an organization; because the number of query keywords can be numerous, retrieval over the encrypted, outsourced data must support multi-keyword search.
Abstract: Outsourcing one's data over the Internet has become much easier with a newer technology, the cloud server, which helps in sharing data among people within an organization or outside it. As security over the cloud has become a major challenge and a very important aspect, securing the stored data has become crucial; hence, encrypting the data to be outsourced has become a prominent topic recently. At the same time, fetching documents from the cloud should return faster and better results, which would eventually yield better performance. The number of keywords defined in a query can be numerous, hence the term multi-keyword.
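The chapter's specific scheme is not described in the abstract; the following hedged Python sketch only illustrates the general idea of multi-keyword search over outsourced data, using HMAC-based keyword trapdoors and a server-side index that ranks documents by the number of matched keywords. All names, documents, and the secret key are illustrative.

```python
# Hypothetical sketch of multi-keyword search over outsourced data: the client
# derives deterministic trapdoors for keywords with a secret key, the server
# stores only trapdoors and document identifiers, and queries are answered by
# counting matched trapdoors per document. This shows the general idea only,
# not the specific scheme of the chapter.
import hmac
import hashlib
from collections import defaultdict

SECRET_KEY = b"client-only-secret"  # kept by the client, never shared with the server

def trapdoor(keyword: str) -> str:
    """Deterministic keyword token; the server cannot invert it without the key."""
    return hmac.new(SECRET_KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

# --- client side: build the searchable index before outsourcing ---
def build_index(docs: dict[str, list[str]]) -> dict[str, set[str]]:
    index: dict[str, set[str]] = defaultdict(set)
    for doc_id, keywords in docs.items():
        for kw in keywords:
            index[trapdoor(kw)].add(doc_id)
    return index

# --- server side: rank documents by how many queried trapdoors they match ---
def search(index: dict[str, set[str]], trapdoors: list[str]) -> list[tuple[str, int]]:
    scores: dict[str, int] = defaultdict(int)
    for t in trapdoors:
        for doc_id in index.get(t, set()):
            scores[doc_id] += 1
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

index = build_index({"doc1": ["cloud", "security"], "doc2": ["cloud", "storage"]})
print(search(index, [trapdoor("cloud"), trapdoor("security")]))  # doc1 ranks first
```
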
Proceedings ArticleDOI
25 May 2022
TL;DR: The risks and challenges associated with cloud computing, as well as the measures that can be taken to help shield the cloud from security threats, are discussed in this paper.
Abstract: The advent of cloud computing has enabled many users and organizations to make better use of system and business resources. Cloud computing's primary benefits are its lower service costs and the absence of any requirement for users to invest in expensive computing hardware. Because of the accessibility of its resources and its flexibility for user computing operations, individuals and organizations migrate their programs, data, and resources to cloud storage services. The transition from localized to remote computing has brought with it a plethora of security risks and challenges for both users and service providers. However, cloud computing has one significant disadvantage: the storage of data is in the hands of a third party. Among the most daunting concerns in providing powerful storage and processing as on-demand services is the security threat posed by resource sharing in cloud computing. Increasing efficiency and better performance are driving governments and organizations around the world to adopt cloud computing, either from scratch or as part of existing infrastructure. The risks and challenges associated with cloud computing, as well as the measures that can be taken to help shield the cloud from security threats, are discussed in this paper.

Cited by
Proceedings ArticleDOI
01 Dec 2015
TL;DR: In this paper, the performance of MapReduce jobs is evaluated with respect to CPU execution time for varying Hadoop cluster sizes; CPU execution time decreases as the number of Data Nodes in the HDInsight cluster increases, indicating good response time, improved performance, and greater customer satisfaction.
Abstract: Recently, cloud-based Hadoop has gained a lot of interest because it offers a ready-to-use Hadoop cluster environment for processing big data, eliminating the operational challenges of on-site hardware investment, IT support, and the installation and configuration of Hadoop components such as HDFS and MapReduce. On-demand Hadoop as a service helps industries focus on business growth and is based on a pay-per-use model for big data processing with an auto-scaling Hadoop cluster feature. In this paper, various MapReduce jobs such as Pi, TeraSort, and WordCount have been implemented on a cloud-based Hadoop deployment using Microsoft Azure cloud services. The performance of these MapReduce jobs has been evaluated with respect to CPU execution time for varying Hadoop cluster sizes. The experimental results show that the CPU execution time needed to finish the jobs decreases as the number of Data Nodes in the HDInsight cluster increases, indicating good response time, improved performance, and greater customer satisfaction.
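For concreteness, here is a minimal, generic sketch of the kind of WordCount job benchmarked above, written as a Hadoop-streaming mapper and reducer in Python; it illustrates the MapReduce pattern only and does not reproduce the paper's HDInsight-specific configuration.

```python
# Generic WordCount sketch in the Hadoop-streaming style: the mapper and reducer
# share one file and are selected with a CLI argument ("map" or "reduce").
import sys

def mapper() -> None:
    # Emit "<word>\t1" for every word read from stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer() -> None:
    # Streaming input arrives sorted by key, so counts for a word are contiguous.
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t", 1)
        if word == current:
            count += int(value)
        else:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```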

18 citations

Proceedings ArticleDOI
25 May 2016
TL;DR: This project intends to overcome these obstacles by building a user-friendly SaaS platform that stores data in Amazon S3 and uses Amazon EMR with the MapReduce paradigm and the open-source R scripting language to analyze big data within the desired time.
Abstract: With enterprises collecting feedback down to every possible detail, data repositories are being flooded with information. To extract valuable information, these data must be processed using sophisticated statistical analysis. Traditional analytical tools, existing statistical software, and data management systems find it challenging to perform deep analysis on large data libraries. Users demand a service platform that can store and handle large quantities of data while being easily accessible, fast, durable, and secure, without heavy spending on hardware, upgrades, or configuration to analyze big data. This project intends to overcome these obstacles by building a user-friendly SaaS platform: a cloud-based web application that stores data in Amazon S3. Because the system dynamically optimizes the cluster size for the desired completion time, the user does not need to estimate the number of nodes. The system uses Amazon EMR and the MapReduce paradigm with the open-source R scripting language to analyze big data within the desired time.

9 citations

Book ChapterDOI
02 Jul 2018
TL;DR: A value decomposition approach that allows for operationalizing composite public values is proposed, and experimental results featuring data analytics using self-administered surveys are presented.
Abstract: Public values are desires of the general public concerning properties considered societally valuable, such as respecting the privacy of citizens or prohibiting polluting activities. “Translating” public values into functional solutions is therefore a real challenge. Even though Value-Sensitive Design (VSD) is about weaving public values into the design of (technical) systems, it remains insufficiently concrete regarding the alignment between abstract public values and technical (software) solutions. Still, VSD indirectly inspires ideas in that direction, for example the idea of considering business process variants to achieve such an alignment. Nevertheless, this all concerns “atomic” public values (encapsulating only one particular behavioral goal), whereas one often faces public values that are “composite” in the sense that they reflect a particular human attitude rather than just a desired behavioral goal. In the current paper, we propose a value decomposition approach that allows for operationalizing composite public values. We also present experimental results featuring data analytics using self-administered surveys.

8 citations

Journal ArticleDOI
TL;DR: In this paper, the authors draw on organizational learning and strategic decision-making theory to develop a conceptual framework for exploring how BDA systems and external support partners are selected.
Abstract: This study draws on organizational learning and strategic decision-making theory to develop a conceptual framework to explore how the selection measures of BDA systems and external support partners...

5 citations

Journal ArticleDOI
TL;DR: This paper discusses the concept of Big Data, its characteristics, and its challenges, with a main focus on data generated in various sectors, analytics, and the various tools used to manage data.
Abstract: Big Data is nowadays one of the apex fields of research, owing to the rapid expansion of technology. Over the past five years, storage capacity and data volumes have grown exponentially. It is envisioned that the concept of Big Data will help reduce huge chunks of data into a manageable form. In this paper, we discuss the concept of Big Data, its characteristics, and its challenges. Our main focus is on data generated in various sectors, analytics, and the various tools used to manage data.

5 citations