Proceedings ArticleDOI

Dynamic data leakage using guilty agent detection over cloud

K. Govinda1, Divya Joseph1
01 Dec 2017
TL;DR: This paper proposes an agent-based model to detect data leakage over the cloud; security plays a major role in the cloud because of its open system architecture.
Abstract: Cloud computing has become the forthcoming model of the IT industry. In contrast to the solutions of earlier periods, the cloud brings a new perspective to the problem: by reducing infrastructure cost and improving services through the purchase of services, cloud computing has made the previously impossible possible. Greater challenges therefore await when talking about cloud security. Through cloud computing techniques and knowledge, "big data" problems have been solved via online cloud services such as "pay as you go" models, and third parties can now take advantage of these cloud services, backed by SLAs, to reach greater optimization and profit. Because security plays a major role in the cloud's open system architecture, this paper proposes an agent-based model to detect data leakage over the cloud.
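The abstract does not spell out the detection algorithm itself. A common formulation of guilty-agent detection (popularized by Papadimitriou and Garcia-Molina's data leakage detection work) scores each agent by the probability that the leaked objects came from that agent rather than being guessed. The sketch below is an illustrative assumption, not the paper's code; the function name and the guess probability `p_guess` are chosen here for illustration:

```python
from typing import Dict, Set

def guilt_probability(leaked: Set[str], agent_data: Dict[str, Set[str]],
                      agent: str, p_guess: float = 0.2) -> float:
    # Estimate Pr(agent is guilty | leaked set), assuming each leaked
    # object was either guessed by the target (probability p_guess) or
    # supplied, with equal likelihood, by one of the agents holding it.
    prob_innocent = 1.0
    for obj in leaked & agent_data[agent]:
        holders = sum(1 for data in agent_data.values() if obj in data)
        # chance that this particular agent did NOT leak obj
        prob_innocent *= 1.0 - (1.0 - p_guess) / holders
    return 1.0 - prob_innocent
```

An agent holding objects that few others hold accumulates more guilt for each leaked object it shares with the leak, which is what lets the model single out the likely culprit.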
Citations
Journal ArticleDOI
TL;DR: The results show that DocGuard is highly effective not only at stopping the initial leak but also at preventing the propagation of leaked files over the Internet and through social networks.

7 citations

Book ChapterDOI
06 Nov 2022
TL;DR: This article presents an exploratory case study analyzing the algorithms and methods proposed across various domains, together with a comparative analysis of their performance.
Abstract: With the steep growth in information technology and its global reach, as well as the common citizen’s ever-increasing reliance on technology, data privacy and security have become a major source of concern for individuals all over the world. In today’s era, computing devices such as virtual servers, physical servers, databases, and many more are occupied with confidential data. This paper is an exploratory case study that analyzes the various algorithms and methods proposed across various domains and presents a comparative analysis. Keywords: Data leakage detection; Android; Networking; Cloud computing; Machine learning; Guilty agent; Privacy; Watermark; Fake object; Bigraph

1 citation

References
Journal ArticleDOI
TL;DR: A privacy-preserving data-leak detection (DLD) solution to solve the issue where a special set of sensitive data digests is used in detection, and how Internet service providers can offer their customers DLD as an add-on service with strong privacy guarantees is described.
Abstract: Statistics from security firms, research institutions and government organizations show that the number of data-leak instances has grown rapidly in recent years. Among various data-leak cases, human mistakes are one of the main causes of data loss. Solutions exist that detect inadvertent sensitive data leaks caused by human mistakes and provide alerts for organizations. A common approach is to screen content in storage and transmission for exposed sensitive information. Such an approach usually requires the detection operation to be conducted in secrecy. However, this secrecy requirement is challenging to satisfy in practice, as detection servers may be compromised or outsourced. In this paper, we present a privacy-preserving data-leak detection (DLD) solution to this issue, in which a special set of sensitive data digests is used in detection. The advantage of our method is that it enables the data owner to safely delegate the detection operation to a semi-honest provider without revealing the sensitive data to the provider. We describe how Internet service providers can offer their customers DLD as an add-on service with strong privacy guarantees. The evaluation results show that our method supports accurate detection with a very small number of false alarms under various data-leak scenarios.
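The core idea of digest-based DLD can be sketched briefly: the owner publishes only hashes of short shingles of the sensitive data, so a semi-honest provider can scan traffic for matches without ever seeing the data. This is a minimal illustration under assumed parameters (SHA-256, 8-byte shingles), not the paper's actual protocol:

```python
import hashlib

def shingle_digests(text: str, n: int = 8) -> set:
    # Digests of all n-byte shingles of the sensitive data.
    # Only these hashes -- never the plaintext -- go to the provider.
    data = text.encode()
    return {hashlib.sha256(data[i:i + n]).hexdigest()
            for i in range(len(data) - n + 1)}

def leak_score(content: str, digests: set, n: int = 8) -> float:
    # Fraction of the content's shingles whose digest matches a
    # sensitive digest; a high score flags a likely leak.
    data = content.encode()
    total = max(len(data) - n + 1, 1)
    hits = sum(hashlib.sha256(data[i:i + n]).hexdigest() in digests
               for i in range(total))
    return hits / total
```

Because the provider holds only one-way digests, a compromise of the detection server does not directly expose the sensitive data.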

92 citations

Proceedings ArticleDOI
14 Nov 2014
TL;DR: The proposed work is suggesting a model for data leakage problem, and its aim is to identify the culprit who has leaked the critical organizational data.
Abstract: In recent years, internet technologies have become the backbone of any business organization. Organizations use this facility to improve their efficiency by transferring data from one location to another. However, transferring critical organizational data carries a number of threats, as any culprit employee may make this data public. This problem is known as the data leakage problem. In the proposed work, we suggest a model for the data leakage problem whose aim is to identify the culprit who has leaked the critical organizational data.

17 citations


"Dynamic data leakage using guilty a..." refers background in this paper

  • ...This technique is not foolproof, as watermarks can be corrupted and partially destroyed. Moreover, these attacks are categorized as silent attacks, where data is leaked without any prior knowledge of it [1]....


Proceedings ArticleDOI
08 Nov 2014
TL;DR: A new digital watermarking algorithm for color images is proposed that processes the watermark to generate two shares based on visual cryptography; one share is embedded into the color image and the other is retained for copyright protection.
Abstract: In this paper, we propose a new digital watermarking algorithm for color images. We process the watermark to generate two shares based on visual cryptography; one share is embedded into the color image and the other is kept for copyright protection. The scheme is easy to implement and highly feasible. In addition, the embedding capacity and robustness of the watermarks are effectively improved.
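The two-share idea can be illustrated with a simple XOR-based secret split: neither share alone reveals the watermark, and combining both reconstructs it. Real visual cryptography uses subpixel-expansion patterns so shares can be overlaid optically; this sketch, with an illustrative fixed seed, shows only the share-splitting principle, not the paper's algorithm:

```python
import random

def make_shares(watermark_bits, seed=42):
    # Split a binary watermark into two shares: share1 is random noise,
    # share2 = watermark XOR share1. Neither share alone carries any
    # information about the mark.
    rng = random.Random(seed)
    share1 = [rng.randint(0, 1) for _ in watermark_bits]
    share2 = [b ^ s for b, s in zip(watermark_bits, share1)]
    return share1, share2

def reconstruct(share1, share2):
    # Recover the watermark exactly by XOR-ing the two shares.
    return [a ^ b for a, b in zip(share1, share2)]
```

In the scheme described above, one share would be embedded in the cover image while the other stays with the copyright holder, so ownership can be proven only when both are brought together.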

14 citations


Additional excerpts

  • ...This technique targets the most prominent section of the data, and the invisible watermark is incorporated in such a way that it cannot be separated from the data without degrading the quality of the source image [2]....


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper investigates the use of N-grams statistical analysis for data classification purposes and shows that the method is capable of correctly classifying up to 90.5% of the tested documents.
Abstract: Data confidentiality, integrity and availability are the ultimate goals of all information security mechanisms. However, most of these mechanisms do not proactively protect sensitive data; rather, they work under predefined policies and conditions to protect data in general. A few systems, such as anomaly-based intrusion detection systems (IDS), might work independently without much administrative interference, but with no dedication to the sensitivity of data. New mechanisms called data leakage prevention systems (DLP) have been developed to mitigate the risk of sensitive data leakage. Current DLPs mostly use data fingerprinting and exact and partial document matching to classify sensitive data. These approaches can have a serious limitation because they are susceptible to data misidentification. In this paper, we investigate the use of N-gram statistical analysis for data classification purposes. Our method uses N-gram frequencies to classify documents under distinct categories, using simple taxicab geometry to compute the similarity between documents and existing categories. Moreover, we examine the effect of removing the most common words and connecting phrases on the overall classification. We aim to compensate for the limitations of current data classification approaches used in the field of data leakage prevention. We show that our method is capable of correctly classifying up to 90.5% of the tested documents.
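The classification step the abstract describes, N-gram frequency profiles compared with taxicab (L1 / Manhattan) distance, can be sketched as follows. This is a minimal illustration of the technique, assuming character trigrams and length-normalised frequencies; the paper's exact preprocessing (e.g. stop-word removal) is not reproduced:

```python
from collections import Counter

def ngrams(text: str, n: int = 3) -> Counter:
    # Character n-gram frequencies, normalised so documents of
    # different lengths are comparable.
    text = text.lower()
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(counts.values()) or 1
    return Counter({g: c / total for g, c in counts.items()})

def taxicab(a: Counter, b: Counter) -> float:
    # L1 (Manhattan) distance between two frequency profiles.
    return sum(abs(a[g] - b[g]) for g in set(a) | set(b))

def classify(doc: str, categories: dict) -> str:
    # Assign the document to the category with the nearest profile.
    profile = ngrams(doc)
    return min(categories, key=lambda c: taxicab(profile, categories[c]))
```

A DLP system could build one profile per sensitive category from training documents and then flag any outgoing document whose nearest category is a sensitive one.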

10 citations


"Dynamic data leakage using guilty a..." refers result in this paper

  • ...In the end, the paper concludes with positive outcomes in support of the model [3]....


Proceedings ArticleDOI
01 Feb 2016
TL;DR: An algorithm for data leakage prevention with time stamps is developed: data that is confidential during a particular period of time can become non-confidential after its time stamp passes.
Abstract: Because of the huge usage of data, the necessity of data leakage prevention is growing day by day. A data leakage prevention system decides whether particular data (confidential or non-confidential) is permitted to be accessed or not. In data leakage prevention, the time stamp is very important for granting permission to access particular data, because data that is confidential during a particular period of time can become non-confidential after its time stamp passes. Here we develop an algorithm for data leakage prevention with time stamps.
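The time-stamp rule described above is simple to state as code: a document is blocked while its confidentiality window is open and released once the time stamp has passed. A minimal sketch, with the field name `confidential_until` assumed here for illustration:

```python
from datetime import datetime

def access_decision(doc: dict, now: datetime) -> str:
    # Time-stamped DLP rule: the document stays confidential (blocked)
    # until its time stamp passes; afterwards the same data is released.
    if now < doc["confidential_until"]:
        return "blocked"
    return "allowed"
```

The key design point is that the sensitivity label is not static: the same access request can be denied today and permitted tomorrow without any change to the policy itself.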

10 citations


"Dynamic data leakage using guilty a..." refers background in this paper

  • ...The system knows whether the document is sensitive by comparing time stamps: if the document's time stamp is greater than or equal to the current time stamp, the document is blocked [4]....
