
Showing papers on "Distributed File System published in 2006"


Proceedings ArticleDOI
06 Nov 2006
TL;DR: Performance measurements under a variety of workloads show that Ceph has excellent I/O performance and scalable metadata management, supporting more than 250,000 metadata operations per second.
Abstract: We have developed Ceph, a distributed file system that provides excellent performance, reliability, and scalability. Ceph maximizes the separation between data and metadata management by replacing allocation tables with a pseudo-random data distribution function (CRUSH) designed for heterogeneous and dynamic clusters of unreliable object storage devices (OSDs). We leverage device intelligence by distributing data replication, failure detection and recovery to semi-autonomous OSDs running a specialized local object file system. A dynamic distributed metadata cluster provides extremely efficient metadata management and seamlessly adapts to a wide range of general purpose and scientific computing file system workloads. Performance measurements under a variety of workloads show that Ceph has excellent I/O performance and scalable metadata management, supporting more than 250,000 metadata operations per second.
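Not Ceph's actual CRUSH function, but the core idea of replacing allocation tables with a computable pseudo-random mapping can be sketched with rendezvous-style hashing (the function name and OSD labels are illustrative):

```python
import hashlib

def place_object(object_id: str, osds: list, replicas: int = 3) -> list:
    """Rank OSDs by a keyed hash of (object, device), so any client can
    compute the same placement without consulting an allocation table."""
    ranked = sorted(
        osds,
        key=lambda osd: hashlib.sha256(f"{object_id}:{osd}".encode()).hexdigest(),
    )
    return ranked[:replicas]

# Every client that knows the OSD list derives the same replica set:
osds = [f"osd{i}" for i in range(8)]
print(place_object("inode42.chunk0", osds))
```

Because placement is a pure function of the object name and device list, clients need no metadata round-trip to locate data, which is what lets the metadata cluster scale independently.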

1,621 citations


Proceedings ArticleDOI
Michael Burrows1
06 Nov 2006
TL;DR: The paper describes the initial design and expected use, compares it with actual use, and explains how the design had to be modified to accommodate the differences.
Abstract: We describe our experiences with the Chubby lock service, which is intended to provide coarse-grained locking as well as reliable (though low-volume) storage for a loosely-coupled distributed system. Chubby provides an interface much like a distributed file system with advisory locks, but the design emphasis is on availability and reliability, as opposed to high performance. Many instances of the service have been used for over a year, with several of them each handling a few tens of thousands of clients concurrently. The paper describes the initial design and expected use, compares it with actual use, and explains how the design had to be modified to accommodate the differences.

905 citations


Proceedings ArticleDOI
30 Oct 2006
TL;DR: A novel secure information management architecture based on emerging attribute-based encryption (ABE) primitives is introduced and a policy system that meets the needs of complex policies is defined and illustrated and cryptographic optimizations that vastly improve enforcement efficiency are proposed.
Abstract: Attributes define, classify, or annotate the datum to which they are assigned. However, traditional attribute architectures and cryptosystems are ill-equipped to provide security in the face of diverse access requirements and environments. In this paper, we introduce a novel secure information management architecture based on emerging attribute-based encryption (ABE) primitives. A policy system that meets the needs of complex policies is defined and illustrated. Based on the needs of those policies, we propose cryptographic optimizations that vastly improve enforcement efficiency. We further explore the use of such policies in two example applications: a HIPAA compliant distributed file system and a social network. A performance analysis of our ABE system and example applications demonstrates the ability to reduce cryptographic costs by as much as 98% over previously proposed constructions. Through this, we demonstrate that our attribute system is an efficient solution for securely managing information in large, loosely-coupled, distributed systems.

463 citations


Proceedings Article
08 May 2006
TL;DR: Ventana combines the file-based storage and sharing benefits of a conventional distributed file system with the versioning, mobility, and access control features that make virtual disks so compelling.
Abstract: Virtual disks are the main form of storage in today's virtual machine environments. They offer many attractive features, including whole system versioning, isolation, and mobility, that are absent from current file systems. Unfortunately, the low-level interface of virtual disks is very coarse-grained, forcing all-or-nothing whole system rollback, and opaque, offering no practical means of sharing. These problems impose serious limitations on virtual disks' usability, security, and ease of management. To overcome these limitations, we offer Ventana, a virtualization aware file system. Ventana combines the file-based storage and sharing benefits of a conventional distributed file system with the versioning, mobility, and access control features that make virtual disks so compelling.

101 citations


Patent
Robert L. Beck1
11 Oct 2006
TL;DR: The extensible file sharing described in this article allows users, administrators, or other third party developers to expand or enhance a file sharing service or application to provide virtually any desired functionality, such as resource management, security management, management of user experience, and the like.
Abstract: A file sharing service facilitates file sharing between a client and a host over a network. An extensible architecture provides an interface by which the file sharing service can be expanded to include additional functionality. This additional functionality may include resource management, security management, management of user experience, and the like. For example, users or administrators of the host or another computing device on the network may wish to oversee the file sharing service as a whole and/or individual file sharing transactions. The extensible file sharing described herein allows users, administrators, or other third party developers to expand or enhance a file sharing service or application to provide virtually any desired functionality.

76 citations


Proceedings ArticleDOI
06 Nov 2006
TL;DR: The results show that EnsemBlue's features impose little overhead, yet they enable the integration of emerging platforms such as digital cameras, MP3 players, and DVRs.
Abstract: EnsemBlue is a distributed file system for personal multimedia that incorporates both general-purpose computers and consumer electronic devices (CEDs). EnsemBlue leverages the capabilities of a few general-purpose computers to make CEDs first class clients of the file system. It supports namespace diversity by translating between its distributed namespace and the local namespaces of CEDs. It supports extensibility through persistent queries, a robust event notification mechanism that leverages the underlying cache consistency protocols of the file system. Finally, it allows mobile clients to self-organize and share data through device ensembles. Our results show that these features impose little overhead, yet they enable the integration of emerging platforms such as digital cameras, MP3 players, and DVRs.

75 citations


Patent
02 Feb 2006
TL;DR: In this article, the authors propose a proxy (e.g., a switch) that resides in a respective network environment between one or more clients and multiple servers, and the proxy facilitates a flow of data on the first connection and the set of second connections.
Abstract: A proxy (e.g., a switch) resides in a respective network environment between one or more clients and multiple servers. One purpose of the proxy is to provide the clients a unified view of a distributed file system having respective data stored amongst multiple remote and disparate storage locations over a network. Another purpose of the proxy is to enable the clients to retrieve data stored at the multiple servers. To establish a first connection between the proxy and a respective client, the proxy communicates with an authentication agent (residing at a location other than at the client) to verify a challenge response received from the client. When establishing a set of second connections with the multiple servers, the proxy communicates with the authentication agent to generate challenge responses on behalf of the client. The proxy facilitates a flow of data on the first connection and the set of second connections.

73 citations


Patent
03 Aug 2006
TL;DR: In this paper, a method for distributed caching and download of a file is described that includes building a peer list comprising a listing of potential peer servers from among one or more networked computers.
Abstract: Distributed caching and download of a file. A method is described that includes building a peer list comprising a listing of potential peer servers from among one or more networked computers. The peer list includes no more than a predetermined number of potential peer servers. Potential peer servers in the peer list are queried for a file or a portion of a file. A message from a peer server in the peer list is received indicating that the peer server has the file or portion of the file available for download. The computer system downloads the file or portion of the file from the peer server.
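The claimed flow can be illustrated with a toy sketch (all names are hypothetical; an in-memory dict stands in for the network):

```python
import random

MAX_PEERS = 4  # the "predetermined number" of peers; 4 is an illustrative value

def build_peer_list(networked_computers, limit=MAX_PEERS):
    """Build a peer list of no more than `limit` potential peer servers."""
    candidates = list(networked_computers)
    random.shuffle(candidates)
    return candidates[:limit]

def download(file_id, peer_list, peer_contents):
    """Query each potential peer for the file; download from the first
    peer that reports having it (peer_contents stands in for the network)."""
    for peer in peer_list:
        files = peer_contents.get(peer, {})
        if file_id in files:              # query: do you have this file?
            return peer, files[file_id]   # positive reply, then download
    return None, None

peers = {f"host{i}": {} for i in range(10)}
peers["host3"]["setup.bin"] = b"\x00" * 16
peer_list = build_peer_list(peers)
```

Capping the peer list bounds the query fan-out, which is the point of the "no more than a predetermined number" limitation.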

56 citations


Journal ArticleDOI
TL;DR: The client, server, and network protocol of two distributed file systems are modified to use Speculator, which enables the Blue File System to provide the consistency of single-copy file semantics and the safety of synchronous I/O, yet still outperform current distributedfile systems with weaker consistency and safety.
Abstract: Speculator provides Linux kernel support for speculative execution. It allows multiple processes to share speculative state by tracking causal dependencies propagated through interprocess communication. It guarantees correct execution by preventing speculative processes from externalizing output, for example, sending a network message or writing to the screen, until the speculations on which that output depends have proven to be correct. Speculator improves the performance of distributed file systems by masking I/O latency and increasing I/O throughput. Rather than block during a remote operation, a file system predicts the operation's result, then uses Speculator to checkpoint the state of the calling process and speculatively continue its execution based on the predicted result. If the prediction is correct, the checkpoint is discarded; if it is incorrect, the calling process is restored to the checkpoint, and the operation is retried. We have modified the client, server, and network protocol of two distributed file systems to use Speculator. For PostMark and Andrew-style benchmarks, speculative execution results in a factor of 2 performance improvement for NFS over local area networks and an order of magnitude improvement over wide area networks. For the same benchmarks, Speculator enables the Blue File System to provide the consistency of single-copy file semantics and the safety of synchronous I/O, yet still outperform current distributed file systems with weaker consistency and safety.
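Speculator's predict/checkpoint/rollback pattern, reduced to a minimal Python sketch (a deep copy stands in for a real process checkpoint; all names are illustrative):

```python
import copy

def speculative_call(state, predict, remote_op, apply_result):
    """Predict the remote result, checkpoint, continue speculatively on
    the guess, then confirm; roll back to the checkpoint on a miss."""
    guess = predict()
    checkpoint = copy.deepcopy(state)  # stand-in for a process checkpoint
    apply_result(state, guess)         # speculatively continue execution
    actual = remote_op()               # slow remote operation completes
    if actual == guess:
        return state                   # correct: discard the checkpoint
    state = checkpoint                 # wrong: restore the checkpoint
    apply_result(state, actual)        # and retry with the real result
    return state
```

When predictions are usually right (e.g. a file's attributes rarely change between validations), the caller almost never pays the remote latency on its critical path.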

52 citations


Patent
17 Oct 2006
TL;DR: In this paper, a method and system for recording interactions of distributed users in a distributed system is described, where a plurality of distributed clients each interact with a system of interest and a shared network file is provided which is accessible by each of the distributed users.
Abstract: A method and system are provided for recording interactions of distributed users in a distributed system. A plurality of distributed clients each interact with a system of interest. A shared network file is provided which is accessible by each of the distributed users. A recorder records a client's use activity on the system of interest as a record in the shared network file. The records of multiple clients are combined in an interleaved, time ordered record.

50 citations


Journal ArticleDOI
TL;DR: A data management solution which allows fast Virtual Machine (VM) instantiation and efficient run-time execution to support VMs as execution environments in Grid computing and can bring the application-perceived overheads below 10% compared to a local-disk setup.
Abstract: This paper presents a data management solution which allows fast Virtual Machine (VM) instantiation and efficient run-time execution to support VMs as execution environments in Grid computing. It is based on novel distributed file system virtualization techniques and is unique in that: (1) it provides on-demand cross-domain access to VM state for unmodified VM monitors; (2) it enables private file system channels for VM instantiation by secure tunneling and session-key based authentication; (3) it supports user-level and write-back disk caches, per-application caching policies and middleware-driven consistency models; and (4) it leverages application-specific meta-data associated with files to expedite data transfers. The paper reports on its performance in wide-area setups using VMware-based VMs. Results show that the solution delivers performance over 30% better than native NFS and with warm caches it can bring the application-perceived overheads below 10% compared to a local-disk setup. The solution also allows a VM with 1.6 GB virtual disk and 320 MB virtual memory to be cloned within 160 seconds for the first clone and within 25 seconds for subsequent clones.

Proceedings ArticleDOI
30 Nov 2006
TL;DR: The mathematical analysis of two important performance measures for a BitTorrent (BT) like P2P file sharing system, namely, average file downloading time and file availability, is presented and a novel chunk selection algorithm is proposed to enhance the overall system file availability.
Abstract: In this paper, we present the mathematical analysis of two important performance measures for a BitTorrent (BT) like P2P file sharing system, namely, average file downloading time and file availability. For the file downloading time, we develop a model using the "stochastic differential equation" approach, which can capture the system more accurately than some previous approaches [17] and can capture various network settings and peer behaviors. We study the steady-state behavior and obtain closed-form solutions for the performance measures, which allow us to carry out sensitivity analysis on various performance measures for various system parameters. We then extend this model to consider multiclass peers wherein some peers are behind firewalls, which may impede the uploading service. We also present a mathematical model to study the file availability of a BT-like system. The model helps us gain an understanding of why the "rarest-first" chunk selection policy is used in today's BT protocol. We propose a novel chunk selection algorithm to enhance the overall system file availability. Extensive simulations are carried out to validate our analysis.
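The rarest-first policy that the availability model examines can be sketched as a simple local heuristic (a simplification, not BT's actual implementation):

```python
from collections import Counter

def rarest_first(needed_chunks, peer_chunks):
    """Return the needed chunk held by the fewest peers, so rare chunks
    get replicated before the peers holding them depart."""
    counts = Counter(
        c for chunks in peer_chunks.values() for c in chunks if c in needed_chunks
    )
    available = [c for c in needed_chunks if counts[c] > 0]
    return min(available, key=lambda c: counts[c]) if available else None
```

Downloading the scarcest chunk first raises the minimum replication count across the swarm, which is exactly the file-availability measure the paper models.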

Proceedings ArticleDOI
01 Aug 2006
TL;DR: DFS's resistance against large scale DDoS flooding attacks is analyzed; DFS offers relatively strong protection against DDoS attacks.
Abstract: We present a new scheme, Distributed Filtering Service or DFS, for protecting services against Distributed Denial of Service (DDoS) attacks. Our system is proactive and requires no changes to the Internet core, and no changes to existing ISP routers. DFS can be deployed incrementally, and benefits are obtained immediately. The key to our approach is forcing traffic destined for protected services to widely dispersed filtering points on the Internet, using IP anycast. DFS requires no unicast address nodes that can be targeted by an attacker; we are unaware of any other DDoS defensive system with this property. We also use two other techniques that have not been well used in DDoS defensive systems: key logging and the IPsec replay window. For the latter, we model attacks and give lower bounds for its effectiveness. We analyze DFS's resistance against large-scale DDoS flooding attacks; DFS offers relatively strong protection against such attacks.

Patent
28 Apr 2006
TL;DR: In this article, a document management system is configured to link an email folder in the email server to a file folder in a file system, represent information items such as documents or references to documents, in the file folder as corresponding email items in the linked email folder.
Abstract: A document management system includes at least one workstation with an email client provided with at least one dedicated email folder linked to a user account in an email server of the system. The system also includes a file system with file system storage that stores digital documents. The document management system is configured to link an email folder in the email server to a file folder in the file system, represent information items, such as documents or references to documents, in the file folder as corresponding email items in the linked email folder, and synchronize the email server and the file system so as to dynamically reflect changes in information items in the file folder in the corresponding email items in the email folder.

Proceedings Article
John R. Douceur1, Jon Howell
01 Jan 2006
TL;DR: This work introduces Byzantine Fault Isolation (BFI), a technique that enables a distributed system to operate with application-defined partial correctness when some of its constituent groups are faulty, and describes its use in Farsite, a peer-to-peer file system designed to scale to 100,000 machines.
Abstract: In a peer-to-peer system of interacting Byzantine-fault-tolerant replicated-state-machine groups, as system scale increases, so does the probability that a group will manifest a fault. If no steps are taken to prevent faults from spreading among groups, a single fault can result in total system failure. To address this problem, we introduce Byzantine Fault Isolation (BFI), a technique that enables a distributed system to operate with application-defined partial correctness when some of its constituent groups are faulty. We quantify BFI’s benefit and describe its use in Farsite, a peer-to-peer file system designed to scale to 100,000 machines.

Patent
06 Feb 2006
TL;DR: In this article, a distributed file system is extended to allow multiple servers to seamlessly host files associated with aggregated links and/or aggregated roots to create and return a concatenated result.
Abstract: Aspects of the subject matter described herein relate to distributed namespace aggregation. In aspects, a distributed file system is extended to allow multiple servers to seamlessly host files associated with aggregated links and/or aggregated roots. A request for a directory listing of an aggregated link or root may cause a server to sniff multiple other servers that host files associated with the link or root to create and return a concatenated result. Sniffing may also be used to determine which servers host the file to which the client is requesting access. Altitude may be used to determine which servers to make visible to the client and may also be used to determine which servers are in the same replica group and which are not.

Patent
31 May 2006
TL;DR: In this article, a method for notifying an application coupled to a distributed file system is described, where a command for a file is received and the file is compared with a notification table.
Abstract: A method for notifying an application coupled to a distributed file system is described. A command for a file for a distributed file system is received. The distributed file system stores portions of files across a plurality of distinct physical storage locations. The command for the file is compared with a notification table of the distributed file system of the distributed file system. At least one application communicates with the distributed file system. The notification system notifies the corresponding application associated with the command with the notification system.

Proceedings ArticleDOI
26 Jun 2006
TL;DR: An algorithm for file Consistency Maintenance through Virtual servers (CMV) is proposed for unstructured and decentralized P2P systems, and numerical results indicate that CMV is well suited for efficient file consistency maintenance in P2P systems.
Abstract: With the tremendous growth in peer-to-peer (P2P) applications, issues related to file consistency have become critical. In this paper, an algorithm for file Consistency Maintenance through Virtual servers (CMV) is proposed for unstructured and decentralized P2P systems. In CMV, consistency of each dynamic file is maintained by a virtual server (VS). A file update can only be accepted through the VS to ensure one-copy serializability. The VS of a file is a logical network composed of multiple replica peers (RPs) that have replicas of the file. Mathematical analysis is performed to determine the optimal parameter selections that achieve the minimum message overhead for maintaining file consistency. The numerical results indicate that CMV is well suited for efficient file consistency maintenance in P2P systems.

Patent
30 Nov 2006
TL;DR: In this paper, the authors propose a method for playing a file on a reserved server, where the selected file is also distributed in segments across a peer-to-peer network.
Abstract: An embodiment relates generally to a method for playing a file. The method includes initiating a playback for a selected file on a reserved server, where the selected file is also distributed in segments across a peer-to-peer network. The method also includes initiating a retrieval of the selected file from the peer-to-peer network or reserved server and ordering the retrieved segments of the selected file for playback. The method further includes switching playback of the selected file between the peer-to-peer network and the reserved server according to the real-time performance and availability of the peer-to-peer network and the reserved server.

Patent
27 Sep 2006
TL;DR: In this paper, the authors present methods, apparatus, and systems for modifying data structures in distributed file systems, using minitransactions that include a set of compare items and set of write items to atomically modify data structures.
Abstract: Embodiments include methods, apparatus, and systems for modifying data structures in distributed file systems. One method of software execution includes using minitransactions, each comprising a set of compare items and a set of write items, to atomically modify data structures in a distributed file system.
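A minitransaction's semantics can be sketched as a toy in-memory store (a single lock stands in for the distributed commit protocol; names are illustrative):

```python
import threading

class MiniTxStore:
    """Toy key-value store: a minitransaction applies its write items
    atomically only if every compare item matches the current value."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()  # stands in for the real commit protocol

    def minitransaction(self, compare_items, write_items):
        with self._lock:
            if any(self._data.get(k) != v for k, v in compare_items):
                return False               # abort: a compare item failed
            for k, v in write_items:
                self._data[k] = v          # apply all write items
            return True

    def read(self, key):
        return self._data.get(key)
```

The compare set turns each update into a conditional commit, so concurrent modifiers of the same file-system structure detect each other and retry instead of corrupting state.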

Proceedings ArticleDOI
01 Dec 2006
TL;DR: This paper describes both the location and download phases of a generic peer-to-peer file sharing application using a fluid model that allows the computation of the transfer time distribution, and it is capable of considering some advanced characteristic such as parallel downloads and on-off peer behavior.
Abstract: File transfer using peer-to-peer file sharing applications is usually divided into two steps: resource search and resource download. Depending on the file size and its popularity, either of the two phases can become the bottleneck. In this paper we describe both the location and download phases of a generic peer-to-peer file sharing application using a fluid model. The proposed model allows the computation of the transfer time distribution, and it is capable of considering advanced characteristics such as parallel downloads and on-off peer behavior. Model parameters reflect network, application, resource and user characteristics, and can be tuned to analyze a large number of different real peer-to-peer implementations.

Proceedings ArticleDOI
18 Apr 2006
TL;DR: This paper proposes a novel fault-tolerant and conflict-free distributed file system for mobile clients, which provides highly available and reliable storage for files and guarantees that file operations are executed in spite of concurrency and failures.
Abstract: The rising demand for mobile computing has created a need for an improved file system that supports mobile clients. Current file systems with support for mobility provide availability through file replicas that are cached at the client side. However, mobile clients may experience various obstacles with the local cache, such as limited network bandwidth, intermittent connections, and serious conflicts when synchronizing back to the server, to mention a few. In this paper, we propose a novel fault-tolerant and conflict-free distributed file system for mobile clients, which provides highly available and reliable storage for files and guarantees that file operations are executed in spite of concurrency and failures. The design is intended to fit mobile clients (e.g., PDAs and cell phones) that have limited storage space and cannot store all the data they need, yet require access to these data at all times. We present our mobile file system model, describe its implementation, and report on its performance evaluation using an extensive set of simulation experiments. Our results indicate clearly that our model exhibits a significant degree of automation and provides a conflict-free mobile file system.

05 Nov 2006
TL;DR: XUFS as mentioned in this paper is a wide-area distributed file system for the NSF TeraGrid that allows private distributed name spaces to be created for transparent access to personal files across over 9000 computer nodes.
Abstract: We describe our work in implementing a wide-area distributed file system for the NSF TeraGrid. The system, called XUFS, allows private distributed name spaces to be created for transparent access to personal files across over 9000 computer nodes. XUFS builds on many principles from prior distributed file systems research, but extends key design goals to support the workflow of computational science researchers. Specifically, XUFS supports file access from the desktop to the wide-area network seamlessly, survives transient disconnected operations robustly, and demonstrates comparable or better throughput than some current high performance file systems on the wide-area network.

Journal ArticleDOI
TL;DR: The sensitivity analysis of the file server selection technique shows that it performs significantly better than random selection, and the results show at least 50 percent reduction in download time when compared to the traditional file-transfer approach.
Abstract: In this paper, we propose a novel approach for reducing the download time of large files over the Internet. Our approach, known as parallelized file transport protocol (P-FTP), proposes simultaneous downloads of disjoint file portions from multiple file servers. The P-FTP server selects file servers for the requesting client on the basis of a variety of QoS parameters, such as available bandwidth and server utilization. The sensitivity analysis of our file server selection technique shows that it performs significantly better than random selection. During the file transfer, the P-FTP client monitors the file transfer flows to detect slow servers and congested links and adjusts the file distributions accordingly. P-FTP is evaluated with simulations and a real-world implementation. The results show at least a 50 percent reduction in download time when compared to the traditional file-transfer approach. Moreover, we have also carried out a simulation-based study to investigate the issues related to large-scale deployment of our approach on the Internet. Our results demonstrate that a large number of P-FTP users have no adverse effect on the performance perceived by non-P-FTP users. In addition, the file servers and network are not significantly affected by large-scale deployment of P-FTP.
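The disjoint-portion idea can be illustrated with a hypothetical range-splitting helper (not P-FTP's actual server-selection or flow-monitoring logic):

```python
def split_ranges(file_size, servers, weights=None):
    """Assign disjoint byte ranges to servers, sized in proportion to
    each server's estimated bandwidth (equal shares by default)."""
    weights = weights or [1] * len(servers)
    total = sum(weights)
    ranges, start = {}, 0
    for i, (srv, w) in enumerate(zip(servers, weights)):
        # the last server absorbs the rounding remainder
        length = file_size - start if i == len(servers) - 1 else file_size * w // total
        ranges[srv] = (start, start + length - 1)  # inclusive byte range
        start += length
    return ranges

# A faster server is assigned a proportionally larger portion:
print(split_ranges(100, ["fast", "slow"], weights=[3, 1]))
```

Because the ranges are disjoint and cover the whole file, the client can fetch them concurrently and reassemble the file by offset; rebalancing during the transfer amounts to recomputing the unfinished ranges with updated weights.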

Proceedings ArticleDOI
13 Mar 2006
TL;DR: Omero, a UI server built along these lines for the Plan B operating system, is described; it uses distributed file systems that export widgets to applications.
Abstract: It is difficult to build user interfaces that must be distributed over a set of dynamic and heterogeneous I/O devices. This difficulty increases when we want to split, merge, replicate, and relocate the UI across a set of heterogeneous devices, without application intervention. Furthermore, using generic tools, e.g. to search for UI components or to save/restore them, is usually not feasible. We follow a novel approach for building UIs that overcomes these problems: using distributed file systems that export widgets to applications. In this paper we describe Omero, a UI server built along these lines for the Plan B operating system.

Journal Article
TL;DR: New general-purpose Complex Adaptive System algorithms that solve the resource allocation problem in distributed systems are introduced; they are based on natural squirrel behaviors and provide a novel CAS metaphor.
Abstract: This paper introduces new general-purpose Complex Adaptive System (CAS) algorithms that solve the resource allocation problem in distributed systems. These CAS algorithms are based on natural squirrel behaviors and provide a novel CAS metaphor. The CAS Squirrels system is described together with its associated class architecture. A comprehensive set of experiments is carried out to corroborate our hypothesis that CAS Squirrel based algorithms provide efficient resource allocation in a distributed system. The algorithms are based on squirrel hoarding mechanisms. The scalability and reliability obtained in the experiments are maintained across a wide variety of distributed system characteristics. The research work uses the peer-to-peer distributed file system storage resource allocation problem to validate our hypothesis.

Patent
24 Aug 2006
TL;DR: In this article, a distributed file system is described which includes one or more input/output (I/O) nodes and one/more compute nodes, and the I/O nodes and compute nodes may be communicably coupled through an interconnect.
Abstract: A distributed file system is disclosed which may include one or more input/output (I/O) nodes and one or more compute nodes. The I/O nodes and the compute nodes may be communicably coupled through an interconnect. Each compute node may include applications to perform specific functions and perform I/O functions through libraries and file system call handlers. The file system call handlers may be capable of providing application programming interfaces (APIs) to facilitate communication between the plurality of I/O nodes and the applications. The file system call handlers may use a message port system to communicate with other compute nodes.

Proceedings ArticleDOI
24 Jul 2006
TL;DR: This work describes a new mobile file system, MAFS, that supports graceful degradation of file system performance as bandwidth is reduced, as well as rapid propagation of essential file updates.
Abstract: Wireless networks present unusual challenges for mobile file system clients, since they are characterised by unpredictable connectivity and widely-varying bandwidth. The traditional approach to adapting network communication to these conditions is to write back file updates asynchronously when bandwidth is low. Unfortunately, this can lead to underutilisation of bandwidth and inconsistencies between clients. We describe a new mobile file system, MAFS, that supports graceful degradation of file system performance as bandwidth is reduced, as well as rapid propagation of essential file updates. MAFS is able to achieve 10-20% improvements in execution time for real-life file system traces featuring read-write contention.

Journal Article
TL;DR: In this article, the authors proposed secure, efficient and scalable key management algorithms to support monotone access structures on large file systems, where a user who is authorized to access a file can efficiently derive the file's encryption key.
Abstract: Advances in networking technologies have triggered the storage as a service (SAS) model. The SAS model allows content providers to leverage hardware and software solutions provided by the storage service providers (SSPs), without having to develop them on their own, thereby freeing them to concentrate on their core business. The SAS model faces at least two important security issues: (i) how to maintain the confidentiality and integrity of files stored at the SSPs, and (ii) how to efficiently support flexible access control policies on the file system. The former problem is handled using a cryptographic file system, while the latter problem is largely unexplored. In this paper, we propose secure, efficient and scalable key management algorithms to support monotone access structures on large file systems. We use key derivation algorithms to ensure that a user who is authorized to access a file can efficiently derive the file's encryption key. However, it is computationally infeasible for a user to guess the encryption keys for those files that she is not authorized to access. We present concrete algorithms to efficiently and scalably support a discretionary access control (DAC) model and handle dynamic access control updates and revocations. We also present a prototype implementation of our proposal on a distributed file system. A trace-driven evaluation of our prototype shows that our algorithms meet the security requirements while incurring a low performance overhead on the file system.
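Not the paper's concrete construction, but the key-derivation direction it relies on can be illustrated with an HMAC-based sketch (the class-key granularity and file paths are hypothetical):

```python
import hashlib
import hmac

def derive_file_key(class_key: bytes, file_id: str) -> bytes:
    """Derive a per-file encryption key from an access-class key.
    Holders of class_key can derive any file key in the class; without
    it, guessing a file key would require breaking HMAC-SHA256."""
    return hmac.new(class_key, file_id.encode(), hashlib.sha256).digest()

class_key = b"\x01" * 32  # distributed only to authorized users
k1 = derive_file_key(class_key, "/medical/patient42.rec")
k2 = derive_file_key(class_key, "/medical/patient43.rec")
```

One short class key thus stands in for thousands of per-file keys, which is what makes the key management scale with the file system rather than with the file count.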

Book ChapterDOI
29 Aug 2006
TL;DR: This analysis shows that the existing range of security solutions can be employed to secure large-scale distributed file systems; however, they must be employed holistically to close the remaining gaps in FileStamp's security.
Abstract: This paper presents an analysis of the security requirements of large-scale distributed file systems. Our objective is to identify their generic as well as specific security requirements and to propose potential solutions that can be employed to address these requirements. FileStamp, a multi-writer distributed file system developed at CETIC, is considered as a case study for this analysis. The analysis shows that the existing range of security solutions can be employed to secure large-scale distributed file systems; however, they must be employed holistically to close the remaining gaps in FileStamp's security.