
Showing papers in "IEEE Transactions on Dependable and Secure Computing in 2010"


Journal ArticleDOI
TL;DR: Analysis of failure data collected at two large high-performance computing sites finds that average failure rates differ wildly across systems, ranging from 20-1000 failures per year, and that time between failures is modeled well by a Weibull distribution with decreasing hazard rate.
Abstract: Designing highly dependable systems requires a good understanding of failure characteristics. Unfortunately, little raw data on failures in large IT installations are publicly available. This paper analyzes failure data collected at two large high-performance computing sites. The first data set has been collected over the past nine years at Los Alamos National Laboratory (LANL) and has recently been made publicly available. It covers 23,000 failures recorded on more than 20 different systems at LANL, mostly large clusters of SMP and NUMA nodes. The second data set has been collected over the period of one year on one large supercomputing system comprising 20 nodes and more than 10,000 processors. We study the statistics of the data, including the root cause of failures, the mean time between failures, and the mean time to repair. We find, for example, that average failure rates differ wildly across systems, ranging from 20-1000 failures per year, and that time between failures is modeled well by a Weibull distribution with decreasing hazard rate. From one system to another, mean repair time varies from less than an hour to more than a day, and repair times are well modeled by a lognormal distribution.
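
A minimal sketch of the kind of distributional fitting the abstract describes, assuming failure interarrival times and repair times (in hours) are available as plain arrays; it uses SciPy's weibull_min and lognorm fits as one standard way to reproduce such results, not the authors' exact methodology, and the sample values are invented for illustration.

```python
# Sketch: fitting Weibull / lognormal models to failure and repair data,
# as a rough analogue of the distributional analysis described above.
# The sample values below are made up for illustration only.
import numpy as np
from scipy import stats

time_between_failures_h = np.array([3.2, 11.5, 27.0, 4.8, 60.1, 9.9, 150.3, 31.7])
repair_times_h = np.array([0.4, 2.1, 0.9, 5.5, 1.2, 26.0, 0.7, 3.3])

# Weibull fit with location fixed at 0; shape < 1 indicates a decreasing hazard rate.
shape, loc, scale = stats.weibull_min.fit(time_between_failures_h, floc=0)
print(f"Weibull shape={shape:.2f} (shape < 1 => decreasing hazard), scale={scale:.1f} h")

# Lognormal fit for repair times, again with location fixed at 0.
sigma, loc_r, scale_r = stats.lognorm.fit(repair_times_h, floc=0)
print(f"Lognormal sigma={sigma:.2f}, median={scale_r:.2f} h")
```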

575 citations


Journal ArticleDOI
TL;DR: This paper presents the design of an advanced hybrid peer-to-peer botnet, which provides robust network connectivity, individualized encryption and control traffic dispersion, limited botnet exposure by each bot, and easy monitoring and recovery by its botmaster.
Abstract: A “botnet” consists of a network of compromised computers controlled by an attacker (“botmaster”). Recently, botnets have become the root cause of many Internet attacks. To be well prepared for future attacks, it is not enough to study how to detect and defend against the botnets that have appeared in the past. More importantly, we should study advanced botnet designs that could be developed by botmasters in the near future. In this paper, we present the design of an advanced hybrid peer-to-peer botnet. Compared with current botnets, the proposed botnet is harder to shut down, monitor, and hijack. It provides robust network connectivity, individualized encryption and control traffic dispersion, limited botnet exposure by each bot, and easy monitoring and recovery by its botmaster. Finally, we suggest and analyze several possible defenses against this advanced botnet.

260 citations


Journal ArticleDOI
TL;DR: This work describes Instruction-Set Randomization (ISR), a general approach for safeguarding systems against any type of code-injection attack, and discusses three approaches (protection for Intel x86 executables, Perl scripts, and SQL queries), one from each of the above categories.
Abstract: We describe Instruction-Set Randomization (ISR), a general approach for safeguarding systems against any type of code-injection attack. We apply Kerckhoffs' principle to create OS process-specific randomized instruction sets (e.g., machine instructions) of the system executing potentially vulnerable software. An attacker who does not know the key to the randomization algorithm will inject code that is invalid for that (randomized) environment, causing a runtime exception. Our approach is applicable to machine-language programs and scripting and interpreted languages. We discuss three approaches (protection for Intel x86 executables, Perl scripts, and SQL queries), one from each of the above categories. Our goal is to demonstrate the generality and applicability of ISR as a protection mechanism. Our emulator-based prototype demonstrates the feasibility of ISR for x86 executables and should be directly usable on a suitably modified processor. We demonstrate how to mitigate the significant performance impact of emulation-based ISR by using several heuristics to limit the scope of randomized (and interpreted) execution to sections of code that may be more susceptible to exploitation. The SQL prototype consists of an SQL query-randomizing proxy that protects against SQL injection attacks with no changes to database servers, minor changes to CGI scripts, and with negligible performance overhead. Similarly, the performance penalty of a randomized Perl interpreter is minimal. Where the performance impact of our proposed approach is acceptable (i.e., in an already-emulated environment, in the presence of programmable or specialized hardware, or in interpreted languages), it can serve as a broad protection mechanism and complement other security mechanisms.
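
A toy sketch of the core ISR idea under the simplest keying assumption (a per-process XOR over code bytes); the paper's x86, Perl, and SQL prototypes are considerably more involved, so names like randomize/derandomize below are illustrative only.

```python
# Toy sketch of instruction-set randomization with a per-process XOR key.
# Real ISR (e.g., for x86) must handle variable-length instructions and key
# management; this only illustrates why unkeyed injected code breaks.
import os

KEY = os.urandom(1)[0]          # per-process randomization key (single byte here)

def randomize(code: bytes) -> bytes:
    """Applied at load time to legitimate program code."""
    return bytes(b ^ KEY for b in code)

def derandomize(code: bytes) -> bytes:
    """Applied by the (emulated) fetch stage before execution."""
    return bytes(b ^ KEY for b in code)

legit = b"\x90\x90\xc3"                  # program code, randomized at load time
injected = b"\xcc\xcc\xcc"               # attacker bytes, injected without the key

stored = randomize(legit)
assert derandomize(stored) == legit      # legitimate code executes as intended
print(derandomize(injected))             # injected code turns into garbage opcodes
```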

170 citations


Journal ArticleDOI
TL;DR: It is demonstrated that high attack detection accuracy can be achieved by using Conditional Random Fields and high efficiency by implementing the Layered Approach and the proposed system is robust and is able to handle noisy data without compromising performance.
Abstract: Intrusion detection faces a number of challenges; an intrusion detection system must reliably detect malicious activities in a network and must perform efficiently to cope with the large amount of network traffic. In this paper, we address these two issues of accuracy and efficiency using Conditional Random Fields and a Layered Approach. We demonstrate that high attack detection accuracy can be achieved by using Conditional Random Fields and high efficiency by implementing the Layered Approach. Experimental results on the benchmark KDD '99 intrusion data set show that our proposed system based on Layered Conditional Random Fields outperforms other well-known methods such as decision trees and naive Bayes. The improvement in attack detection accuracy is very high, particularly for the U2R attacks (34.8 percent improvement) and the R2L attacks (34.5 percent improvement). Statistical tests also demonstrate higher confidence in detection accuracy for our method. Finally, we show that our system is robust and is able to handle noisy data without compromising performance.
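
The abstract pairs Conditional Random Fields with a Layered Approach; the sketch below shows only the layering idea (each layer specializes in one attack class and forwards everything else), with trivial threshold rules standing in for the trained CRFs, and invented feature names.

```python
# Minimal sketch of a layered intrusion detection pipeline: each layer is
# specialized for one attack class (Probe, DoS, R2L, U2R) and only traffic
# it considers normal is forwarded to the next layer. The per-layer
# detectors here are toy threshold rules standing in for trained CRFs.
def probe_layer(conn):   return conn["num_distinct_ports"] > 100
def dos_layer(conn):     return conn["count"] > 500 and conn["srv_count"] > 500
def r2l_layer(conn):     return conn["failed_logins"] > 3
def u2r_layer(conn):     return conn["root_shell"] == 1

LAYERS = [("Probe", probe_layer), ("DoS", dos_layer),
          ("R2L", r2l_layer), ("U2R", u2r_layer)]

def classify(conn):
    for label, layer in LAYERS:
        if layer(conn):
            return label          # blocked at this layer
    return "normal"               # survived all layers

record = {"num_distinct_ports": 3, "count": 12, "srv_count": 10,
          "failed_logins": 5, "root_shell": 0}
print(classify(record))           # -> "R2L"
```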

159 citations


Journal ArticleDOI
TL;DR: A rigorous semantic interpretation of DFTs is introduced and it is shown by a number of realistic and complex systems that this methodology achieves drastic reductions in the state space.
Abstract: Fault trees (FTs) are among the most prominent formalisms for reliability analysis of technical systems. Dynamic FTs (DFTs) extend FTs with support for expressing dynamic dependencies among components. The standard analysis vehicle for DFTs is state-based, and treats the model as a continuous-time Markov chain (CTMC). This is not always possible, as we will explain, since some DFTs allow multiple interpretations. This paper introduces a rigorous semantic interpretation of DFTs. The semantics is defined in such a way that the semantics of a composite DFT arises in a transparent manner from the semantics of its components. This not only eases the understanding of how the FT building blocks interact; it is also a key to alleviating the state explosion problem. By lifting a classical aggregation strategy to our setting, we can exploit the DFT structure to build the smallest possible Markov chain representation of the system. The semantics, as well as the aggregation and analysis engine, is implemented in a tool called CORAL. We show by a number of realistic and complex systems that this methodology achieves drastic reductions in the state space.
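
A tiny numerical illustration of the state-based view mentioned above: a CTMC for a two-component parallel (AND-gate) system, solved with a matrix exponential. This is not CORAL's compositional semantics or aggregation, just the underlying CTMC analysis step, and the rates are arbitrary.

```python
# Sketch: unreliability of a two-component parallel system (top event = AND of
# both component failures) analyzed as a CTMC, the kind of state-based model
# that DFT analysis ultimately reduces to. Rates below are arbitrary.
import numpy as np
from scipy.linalg import expm

l1, l2 = 1e-3, 2e-3          # component failure rates (per hour)
# States: 0 = both up, 1 = only comp1 failed, 2 = only comp2 failed, 3 = both failed
Q = np.array([
    [-(l1 + l2), l1,   l2,   0.0],
    [0.0,        -l2,  0.0,  l2 ],
    [0.0,        0.0,  -l1,  l1 ],
    [0.0,        0.0,  0.0,  0.0],   # absorbing system-failure state
])

t = 1000.0                               # mission time in hours
p0 = np.array([1.0, 0.0, 0.0, 0.0])      # start with both components up
p_t = p0 @ expm(Q * t)                   # transient distribution at time t
print(f"System unreliability at t={t:.0f} h: {p_t[3]:.4e}")
```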

155 citations


Journal ArticleDOI
TL;DR: An unsupervised host-based intrusion detection system based on system call arguments and sequences that has a good signal-to-noise ratio, and is also able to correctly contextualize alarms, giving the user more information to understand whether a true or false positive happened.
Abstract: We describe an unsupervised host-based intrusion detection system based on system call arguments and sequences. We define a set of anomaly detection models for the individual parameters of the call. We then describe a clustering process that helps to better fit models to system call arguments and creates interrelations among different arguments of a system call. Finally, we add a behavioral Markov model in order to capture time correlations and abnormal behaviors. The whole system needs no prior knowledge input; it has a good signal-to-noise ratio, and it is also able to correctly contextualize alarms, giving the user more information to understand whether a true or false positive happened, and to detect global variations over the entire execution flow, as opposed to punctual ones over individual instances.
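
A compact sketch of just the behavioral Markov layer described above: learn transition frequencies between system calls from a training trace, then flag transitions whose probability falls below a threshold. The clustering of call arguments and the argument models are not shown, and the traces are invented.

```python
# Sketch of the behavioral Markov layer only: estimate system-call transition
# probabilities from a "clean" training trace, then flag low-probability
# transitions at detection time. Argument models and clustering are omitted.
from collections import Counter, defaultdict

def train(trace):
    counts = defaultdict(Counter)
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def anomalies(model, trace, threshold=0.05):
    flagged = []
    for a, b in zip(trace, trace[1:]):
        p = model.get(a, {}).get(b, 0.0)
        if p < threshold:
            flagged.append((a, b, p))    # transition never/rarely seen in training
    return flagged

clean = ["open", "read", "read", "close", "open", "read", "close"]
model = train(clean)
print(anomalies(model, ["open", "read", "execve", "close"]))
```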

146 citations


Journal ArticleDOI
TL;DR: This paper presents the first hierarchical Byzantine fault-tolerant replication architecture suitable for systems that span multiple wide-area sites, and presents proofs that the algorithm provides safety and liveness properties.
Abstract: This paper presents the first hierarchical Byzantine fault-tolerant replication architecture suitable for systems that span multiple wide-area sites. The architecture confines the effects of any malicious replica to its local site, reduces message complexity of wide-area communication, and allows read-only queries to be performed locally within a site for the price of additional standard hardware. We present proofs that our algorithm provides safety and liveness properties. A prototype implementation is evaluated over several network topologies and is compared with a flat Byzantine fault-tolerant approach. The experimental results show considerable improvement over flat Byzantine replication algorithms, bringing the performance of Byzantine replication closer to existing benign fault-tolerant replication techniques over wide-area networks.

106 citations


Journal ArticleDOI
TL;DR: This paper presents an overview of end-to-end encryption solutions for convergecast traffic in wireless sensor networks that support in-network processing at forwarding intermediate nodes, provides a qualitative comparison of available approaches, points out their respective strengths and weaknesses, and investigates opportunities for further research.
Abstract: We present an overview of end-to-end encryption solutions for convergecast traffic in wireless sensor networks that support in-network processing at forwarding intermediate nodes. In contrast to hop-by-hop encryption approaches, aggregator nodes can perform in-network processing directly on encrypted data. Since incoming ciphertexts need not be decrypted before aggregation, the substantial advantages are that 1) neither keys nor plaintext are available at aggregating nodes, 2) the overall energy consumption of the backbone can be reduced, 3) the system is more flexible with respect to changing routes, and 4) the overall system security increases. We provide a qualitative comparison of available approaches, point out their respective strengths and weaknesses, and investigate opportunities for further research.
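
One family of schemes such a survey covers is additively homomorphic encryption in the style of Castelluccia et al., where aggregators sum ciphertexts without decryption; the sketch below shows that idea with toy parameters and is not any specific surveyed protocol (real schemes derive the per-sensor keys from a keyed PRF and a nonce).

```python
# Sketch of additively homomorphic concealed data aggregation: each sensor
# encrypts its reading as c_i = (m_i + k_i) mod M, forwarders simply add
# ciphertexts, and only the sink (which knows all k_i) removes the keys.
import random

M = 2**16                     # modulus large enough to hold the aggregate sum

readings = [21, 19, 23, 22]                        # sensor measurements
keys = [random.randrange(M) for _ in readings]     # pairwise keys shared with sink

ciphers = [(m + k) % M for m, k in zip(readings, keys)]

# An aggregator adds ciphertexts without seeing keys or plaintexts.
agg_cipher = sum(ciphers) % M

# The sink subtracts the sum of keys to recover the aggregate reading.
agg_plain = (agg_cipher - sum(keys)) % M
assert agg_plain == sum(readings)
print("aggregate =", agg_plain)
```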

89 citations


Journal ArticleDOI
TL;DR: A novel semi-Markov process model is proposed to characterize the evolution of node behaviors and the effects of node misbehaviors on both topological survivability and network performance and is validated by simulations and numerical analysis.
Abstract: Network survivability is the ability of a network to stay connected under failures and attacks, which is a fundamental issue in the design and performance evaluation of wireless ad hoc networks. In this paper, we focus on the analysis of network survivability in the presence of node misbehaviors and failures. First, we propose a novel semi-Markov process model to characterize the evolution of node behaviors. As an immediate application of the proposed model, we investigate the problem of node isolation where the effects of denial-of-service (DoS) attacks are considered. Then, we present the derivation of network survivability and obtain the lower and upper bounds on the topological survivability for k-connected networks. We find that the network survivability degrades very quickly with the increasing likelihood of node misbehaviors, depending on the requirements of disjoint outgoing paths or network connectivity. Moreover, DoS attacks have a significant impact on the network survivability, especially in dense networks. Finally, we validate the proposed model and analytical results by simulations and numerical analysis, showing the effects of node misbehaviors on both topological survivability and network performance.
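
A small numeric sketch of the semi-Markov machinery such a model relies on: limiting state probabilities are the embedded-chain stationary probabilities weighted by mean holding times. The states, transition matrix, and sojourn times below are illustrative, not the paper's parameters.

```python
# Sketch: limiting probabilities of a semi-Markov node-behavior model.
# pi_i = nu_i * h_i / sum_j(nu_j * h_j), where nu is the stationary
# distribution of the embedded jump chain and h_i the mean sojourn time.
import numpy as np

states = ["cooperative", "selfish", "failed"]
P = np.array([[0.0, 0.7, 0.3],      # embedded (jump-chain) transition matrix
              [0.8, 0.0, 0.2],
              [1.0, 0.0, 0.0]])
h = np.array([10.0, 2.0, 5.0])      # mean sojourn time in each state (hours)

# Stationary distribution of the embedded chain: nu = nu P, sum(nu) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
nu, *_ = np.linalg.lstsq(A, b, rcond=None)

pi = nu * h / np.dot(nu, h)
print(dict(zip(states, np.round(pi, 3))))
```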

80 citations


Journal ArticleDOI
TL;DR: This paper investigates the problem of optimal allocation of sensitive data objects that are partitioned by using a secret sharing or erasure coding scheme and/or replicated in data grids, and develops two heuristic algorithms for the two subproblems.
Abstract: Secret sharing and erasure coding-based approaches have been used in distributed storage systems to ensure the confidentiality, integrity, and availability of critical information. To achieve performance goals in data accesses, these data fragmentation approaches can be combined with dynamic replication. In this paper, we consider data partitioning (both secret sharing and erasure coding) and dynamic replication in data grids, in which security and data access performance are critical issues. More specifically, we investigate the problem of optimal allocation of sensitive data objects that are partitioned by using a secret sharing or erasure coding scheme and/or replicated. The grid topology we consider consists of two layers. In the upper layer, multiple clusters form a network topology that can be represented by a general graph. The topology within each cluster is represented by a tree graph. We decompose the share replica allocation problem into two subproblems: the optimal intercluster resident set problem (OIRSP), which determines which clusters need share replicas, and the optimal intracluster share allocation problem (OISAP), which determines the number of share replicas needed in a cluster and their placements. We develop two heuristic algorithms for the two subproblems. Experimental studies show that the heuristic algorithms achieve good performance in reducing communication cost and are close to optimal solutions.
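
The allocation problem above presupposes a partitioning scheme; below is a minimal Shamir (k, n) secret-sharing sketch over a prime field as one concrete instance of the "secret sharing" side, independent of the OIRSP/OISAP allocation heuristics themselves.

```python
# Minimal Shamir (k, n) secret sharing over a prime field, as one concrete
# instance of the data-partitioning step that the allocation problem assumes.
import random

P = 2**61 - 1   # a Mersenne prime, large enough for small integer secrets

def split(secret, k, n):
    # Random polynomial of degree k-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789      # any 3 of 5 shares suffice
print(reconstruct(random.sample(shares, 3)))
```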

62 citations


Journal ArticleDOI
TL;DR: This work presents the first comprehensive evaluation of NoC susceptibility to PV effects, and proposes an array of architectural improvements in the form of a new router design-called SturdiSwitch-to increase resiliency to these effects.
Abstract: The advent of diminutive technology feature sizes has led to escalating transistor densities. Burgeoning transistor counts are casting a dark shadow on modern chip design: global interconnect delays are dominating gate delays and affecting overall system performance. Networks-on-Chip (NoC) are viewed as a viable solution to this problem because of their scalability and optimized electrical properties. However, on-chip routers are susceptible to another artifact of deep submicron technology, Process Variation (PV). PV is a consequence of manufacturing imperfections, which may lead to degraded performance and even erroneous behavior. In this work, we present the first comprehensive evaluation of NoC susceptibility to PV effects, and we propose an array of architectural improvements in the form of a new router design-called SturdiSwitch-to increase resiliency to these effects. Through extensive reengineering of critical components, SturdiSwitch provides increased immunity to PV while improving performance and increasing area and power efficiency.

Journal ArticleDOI
TL;DR: SigFree is an online signature-free out-of-the-box application-layer method for blocking code-injection buffer overflow attack messages targeting various Internet services such as web services, and is suitable for economical Internet-wide deployment with very low deployment and maintenance cost.
Abstract: We propose SigFree, an online signature-free out-of-the-box application-layer method for blocking code-injection buffer overflow attack messages targeting various Internet services such as web services. Motivated by the observation that buffer overflow attacks typically contain executables whereas legitimate client requests never contain executables in most Internet services, SigFree blocks attacks by detecting the presence of code. Unlike previous code detection algorithms, SigFree uses a new data-flow analysis technique called code abstraction that is generic, fast, and hard for exploit code to evade. SigFree is signature free, thus it can block new and unknown buffer overflow attacks; SigFree is also immunized from most attack-side code obfuscation methods. Since SigFree is a transparent deployment to the servers being protected, it is suitable for economical Internet-wide deployment with very low deployment and maintenance cost. We implemented and tested SigFree; our experimental study shows that the dependency-degree-based SigFree could block all types of code-injection attack packets (above 750) tested in our experiments with very few false positives. Moreover, SigFree causes very small extra latency to normal client requests when some requests contain exploit code.

Journal ArticleDOI
TL;DR: Game theory is used to propose a series of optimal puzzle-based strategies for handling increasingly sophisticated flooding attack scenarios and the solution concept of Nash equilibrium is used in a prescriptive way.
Abstract: In recent years, a number of puzzle-based defense mechanisms have been proposed against flooding denial-of-service (DoS) attacks in networks. Nonetheless, these mechanisms have not been designed through formal approaches and thereby some important design issues such as effectiveness and optimality have remained unresolved. This paper utilizes game theory to propose a series of optimal puzzle-based strategies for handling increasingly sophisticated flooding attack scenarios. In doing so, the solution concept of Nash equilibrium is used in a prescriptive way, where the defender takes his part in the solution as an optimum defense against rational attackers. This study culminates in a strategy for handling distributed attacks from an unknown number of sources.
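
To make the Nash-equilibrium solution concept concrete, here is a toy brute-force check on a 2x2 defender/attacker game; the payoffs are invented (and assume issuing puzzles costs the defender nothing when no attack is in progress), and are far simpler than the puzzle games analyzed in the paper.

```python
# Toy illustration of the Nash-equilibrium solution concept on a 2x2 game
# between a defender (rows: no puzzle / puzzle) and an attacker
# (columns: idle / flood). Payoffs are invented for illustration only and
# assume the puzzle defense is free for the defender when no attack occurs.
import itertools

defender_payoff = [[0, -10],    # no puzzle: fine if idle, very bad under flood
                   [0,  -3]]    # puzzle: attack largely blunted
attacker_payoff = [[0,  10],
                   [0,  -2]]    # solving puzzles makes flooding unprofitable

def pure_nash():
    eqs = []
    for d, a in itertools.product(range(2), range(2)):
        best_d = all(defender_payoff[d][a] >= defender_payoff[d2][a] for d2 in range(2))
        best_a = all(attacker_payoff[d][a] >= attacker_payoff[d][a2] for a2 in range(2))
        if best_d and best_a:
            eqs.append((("no puzzle", "puzzle")[d], ("idle", "flood")[a]))
    return eqs

print(pure_nash())   # -> [('puzzle', 'idle')]: puzzles deter a rational attacker
```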

Journal ArticleDOI
TL;DR: This paper describes a tool that extracts an annotated control flow graph from the binary and automatically verifies it against a formal malware specification, and introduces the new specification language CTPL, which balances the high expressive power needed for malware signatures with efficient model checking algorithms.
Abstract: Although recent estimates speak of 200,000 different viruses, worms, and Trojan horses, the majority of them are variants of previously existing malware. As these variants mostly differ in their binary representation rather than their functionality, they can be recognized by analyzing the program behavior, even though they are not covered by the signature databases of current antivirus tools. Proactive malware detectors mitigate this risk by detection procedures that use a single signature to detect whole classes of functionally related malware without signature updates. It is evident that the quality of proactive detection procedures depends on their ability to analyze the semantics of the binary. In this paper, we propose the use of model checking, a well-established software verification technique, for proactive malware detection. We describe a tool that extracts an annotated control flow graph from the binary and automatically verifies it against a formal malware specification. To this end, we introduce the new specification language CTPL, which balances the high expressive power needed for malware signatures with efficient model checking algorithms. Our experiments demonstrate that our technique indeed is able to recognize variants of existing malware with a low risk of false positives.
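
CTPL extends CTL with parameters; the sketch below shows only the simplest CTL ingredient, an EF (reachability) check over a toy control-flow graph via a backward fixpoint, just to make "verifying a CFG against a temporal specification" concrete. The graph and labels are invented.

```python
# Sketch: checking the CTL formula EF p ("some path reaches a state where p
# holds") over a toy control-flow graph by a backward fixpoint. CTPL adds
# parameters on top of CTL; none of that is modeled here.
cfg = {                       # node -> successors (a toy CFG)
    "entry": ["decode"],
    "decode": ["copy_self", "exit"],
    "copy_self": ["call_CreateFile"],
    "call_CreateFile": ["exit"],
    "exit": [],
}
labels = {"call_CreateFile"}  # states satisfying the atomic proposition p

def ef(cfg, sat_p):
    sat = set(sat_p)
    changed = True
    while changed:            # add any node with a successor already in sat
        changed = False
        for node, succs in cfg.items():
            if node not in sat and any(s in sat for s in succs):
                sat.add(node)
                changed = True
    return sat

print("entry" in ef(cfg, labels))   # True: the suspicious call is reachable
```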

Journal ArticleDOI
TL;DR: An unsupervised approach, called RoleMiner, for mining roles from existing user-permission assignments is presented, which is fairly robust to reasonable levels of noise and may help automate the process of role definition.
Abstract: Today, role-based access control (RBAC) has become a well-accepted paradigm for implementing access control because of its convenience and ease of administration. However, in order to realize the full benefits of the RBAC paradigm, one must first define the roles accurately. This task of defining roles and associating permissions with them, also known as role engineering, is typically accomplished either in a top-down or in a bottom-up manner. Under the top-down approach, a careful analysis of the business processes is done to first define job functions and then to specify appropriate roles from them. While this approach can help in defining roles more accurately, it is tedious and time consuming since it requires that the semantics of the business processes be well understood. Moreover, it ignores existing permissions within an organization and does not utilize them. On the other hand, under the bottom-up approach, existing permissions are used to derive roles from them. As a result, it may help automate the process of role definition. In this paper, we present an unsupervised approach, called RoleMiner, for mining roles from existing user-permission assignments. Since a role, when semantics are unavailable, is nothing but a set of permissions, the task of role mining is essentially that of clustering users having the same (or similar) permissions. However, unlike the traditional applications of data mining that ideally require identification of nonoverlapping clusters, roles will have overlapping permissions and thus permission sets that define roles should be allowed to overlap. It is this distinction from traditional clustering that makes the problem of role mining nontrivial. Our experiments with real and simulated data sets indicate that our role mining process is quite accurate and efficient. Since our role mining approach is based on subset enumeration, it is fairly robust to reasonable levels of noise.
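
A small sketch of the bottom-up intuition described above: users with identical permission sets seed initial roles, and pairwise intersections of those sets yield further candidate (possibly overlapping) roles. RoleMiner's actual subset-enumeration algorithms refine this considerably, and the user/permission data below is invented.

```python
# Sketch of the bottom-up role-mining intuition: identical permission sets
# seed initial roles; intersections give further overlapping candidates.
from itertools import combinations

user_perms = {
    "alice": {"read_hr", "write_hr", "read_payroll"},
    "bob":   {"read_hr", "write_hr"},
    "carol": {"read_hr", "read_payroll"},
    "dave":  {"read_hr", "write_hr"},
}

def candidate_roles(user_perms):
    roles = {frozenset(p) for p in user_perms.values()}      # initial roles
    for a, b in combinations(list(roles), 2):                # intersection roles
        common = a & b
        if common:
            roles.add(frozenset(common))
    return roles

for role in sorted(candidate_roles(user_perms), key=len, reverse=True):
    print(sorted(role))
```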

Journal ArticleDOI
TL;DR: It is shown that a simple greedy heuristic works with accuracy exceeding 80 percent for many failure scenarios in simulation, while delivering extremely high precision (greater than 80 percent) in operational experience used to isolate optical component and MPLS control plane failures in an ISP backbone.
Abstract: Internet backbone networks are under constant flux in order to keep up with demand and offer new features. The pace of change in technology often outstrips the pace of introduction of associated fault monitoring capabilities that are built into today's IP protocols and routers. Moreover, some of these new technologies cross networking layers, raising the potential for unanticipated interactions and service disruptions, which the individual layers' built-in monitoring capabilities may not detect. In these instances, operators typically employ higher layer monitoring techniques such as end-to-end liveness probing to detect lower or cross-layer failures, but lack tools to precisely determine where a detected failure may have occurred. In this paper, we evaluate the effectiveness of using risk modeling to translate high-level failure notifications into lower layer root causes in two specific scenarios in a tier-1 ISP. We show that a simple greedy heuristic works with accuracy exceeding 80 percent for many failure scenarios in simulation, while delivering extremely high precision (greater than 80 percent). We report our operational experience using risk modeling to isolate optical component and MPLS control plane failures in an ISP backbone.
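
The essence of a greedy localization heuristic like the one evaluated above is a set-cover step: at each round, pick the shared-risk group that explains the most still-unexplained failure observations. The sketch below shows that step on made-up risk groups and path observations, not the paper's actual risk models.

```python
# Sketch of greedy risk-model localization: given which monitored paths failed
# and which shared-risk group each path depends on, repeatedly pick the risk
# group that explains the most still-unexplained failures. Data is made up.
def greedy_localize(failed_paths, risk_groups):
    unexplained = set(failed_paths)
    hypothesis = []
    while unexplained:
        best = max(risk_groups, key=lambda g: len(risk_groups[g] & unexplained))
        covered = risk_groups[best] & unexplained
        if not covered:
            break                      # remaining failures not explainable
        hypothesis.append(best)
        unexplained -= covered
    return hypothesis

risk_groups = {                        # risk group -> paths it can take down
    "fiber_span_7":  {"p1", "p2", "p3"},
    "amplifier_12":  {"p2"},
    "mpls_lsp_44":   {"p4", "p5"},
}
print(greedy_localize({"p1", "p2", "p3", "p4"}, risk_groups))
# -> ['fiber_span_7', 'mpls_lsp_44']
```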

Journal ArticleDOI
TL;DR: An approach for conformance testing of implementations required to enforce access control policies specified using the Temporal Role-Based Access Control (TRBAC) model is proposed, which uses Timed Input-Output Automata to model the behavior specified by a TRBAC policy.
Abstract: We propose an approach for conformance testing of implementations required to enforce access control policies specified using the Temporal Role-Based Access Control (TRBAC) model. The proposed approach uses Timed Input-Output Automata (TIOA) to model the behavior specified by a TRBAC policy. The TIOA model is transformed to a deterministic se-FSA model that captures any temporal constraint by using two special events, Set and Exp. A modified W-method and an integer-programming-based approach are then used to construct a conformance test suite from the transformed model. The conformance test suite so generated provides complete fault coverage with respect to the proposed fault model for TRBAC specifications.

Journal ArticleDOI
TL;DR: A change to the memory architecture of modern processors is proposed that addresses the code injection problem at its very root by virtually splitting memory into code memory and data memory such that a processor will never be able to fetch injected code for execution.
Abstract: Code injection attacks, despite being well researched, continue to be a problem today. Modern architectural solutions such as the execute-disable bit and PaX have been useful in limiting the attacks; however, they enforce program layout restrictions and can oftentimes still be circumvented by a determined attacker. We propose a change to the memory architecture of modern processors that addresses the code injection problem at its very root by virtually splitting memory into code memory and data memory such that a processor will never be able to fetch injected code for execution. This virtual split memory system can be implemented as a software-only patch to an operating system and can be used to supplement existing schemes for improved protection. Furthermore, our system is able to accommodate a number of response modes when a code injection attack occurs. Our experiments with both benchmarks and real-world attacks show the system is effective in preventing a wide range of code injection attacks while incurring reasonable overhead.

Journal ArticleDOI
TL;DR: These simple-to-implement techniques are shown to improve the processor's reliability with relatively low performance, power, and hardware overheads, and the resulting excessive reliability can be traded back for performance by increasing clock rate and/or reducing voltage, thereby improving upon single execution approaches.
Abstract: Soft errors (or transient faults) are temporary faults that arise in a circuit due to a variety of internal noise and external sources such as cosmic particle hits. Though soft errors still occur infrequently, they are rapidly becoming a major impediment to processor reliability. This is due primarily to processor scaling characteristics. In the past, systems designed to tolerate such faults utilized costly customized solutions, entailing the use of replicated hardware components to detect and recover from microprocessor faults. As the feature size keeps shrinking and with the proliferation of multiprocessors on die in all segments of computer-based systems, the capability to detect and recover from faults is also desired for commodity hardware. For such systems, however, performance and power constitute the main drivers, so the traditional solutions prove inadequate and new approaches are required. We introduce two independent and complementary microarchitecture-level techniques: double execution and double decoding. Both exploit the typically low average resource utilization of modern processors to enhance processor reliability. Double execution protects the out-of-order part of the CPU by executing each instruction twice. Double decoding uses a second, low-performance low-power instruction decoder to detect soft errors in the decoder logic. These simple-to-implement techniques are shown to improve the processor's reliability with relatively low performance, power, and hardware overheads. Finally, the resulting "excessive" reliability can even be traded back for performance by increasing clock rate and/or reducing voltage, thereby improving upon single execution approaches.

Journal ArticleDOI
TL;DR: It is shown how instruction caches can be thermally attacked by malicious code and how simple techniques can be utilized to protect instruction caches from such thermal attacks.
Abstract: The instruction cache has been recognized as one of the least hot units in microprocessors, which leaves the instruction cache largely ignored in on-chip thermal management. Consequently, thermal sensors are not allocated near the instruction cache. However, malicious code can exploit the deficiency in this empirical design and heat up fine-grain localized hotspots in the instruction cache, which might lead to physical damage. In this paper, we show how instruction caches can be thermally attacked by malicious code and how simple techniques can be utilized to protect instruction caches from such thermal attacks.

Journal ArticleDOI
TL;DR: This paper presents a principled automated approach for designing dependable storage solutions for multiple applications in shared environments and shows that this approach consistently produces better designs for the cases it has studied.
Abstract: The costs of data loss and unavailability can be large, so businesses use many data protection techniques such as remote mirroring, snapshots, and backups to guard against failures. Choosing an appropriate combination of techniques is difficult because there are numerous approaches for protecting data and allocating resources. Storage system architects typically use ad hoc techniques, often resulting in overengineered expensive solutions or underprovisioned inadequate ones. In contrast, this paper presents a principled automated approach for designing dependable storage solutions for multiple applications in shared environments. Our contributions include search heuristics for intelligent exploration of the large design space and modeling techniques for capturing interactions between applications during recovery. Using realistic storage system requirements, we show that our design tool produces designs that cost up to two times less in initial outlays and expected data penalties than the designs produced by an emulated human design process. Additionally, we compare our design tool to a random search heuristic and a genetic algorithm metaheuristic, and show that our approach consistently produces better designs for the cases we have studied. Finally, we study the sensitivity of our design tool to several input parameters.

Journal ArticleDOI
TL;DR: A hybrid approach able to detect and correct the effects of transient faults in SoC data memories and caches is proposed, which offers the same fault-detection and -correction capabilities as a purely software-based approach, while it introduces nearly the same low memory and performance overhead of a purely hardware-based one.
Abstract: Critical applications based on Systems-on-Chip (SoCs) require suitable techniques that are able to ensure a sufficient level of reliability. Several techniques have been proposed to improve fault detection and correction capabilities of faults affecting SoCs. This paper proposes a hybrid approach able to detect and correct the effects of transient faults in SoC data memories and caches. The proposed solution combines some software modifications, which are easy to automate, with the introduction of a hardware module, which is independent of the specific application. The method is particularly suitable to fit in a typical SoC design flow and is shown to achieve a better trade-off between the achieved results and the required costs than corresponding purely hardware or software techniques. In fact, the proposed approach offers the same fault-detection and -correction capabilities as a purely software-based approach, while it introduces nearly the same low memory and performance overhead of a purely hardware-based one.

Journal ArticleDOI
TL;DR: A novel figure of merit to measure the DPA effectiveness of multibit attacks is proposed and several interesting properties of DPA attacks are derived, and suggestions to design algorithms and circuits with higher robustness against DPA are given.
Abstract: In this paper, a general model of multibit Differential Power Analysis (DPA) attacks to precharged buses is discussed, with emphasis on symmetric-key cryptographic algorithms. Analysis provides a deeper insight into the dependence of the DPA effectiveness (i.e., the vulnerability of cryptographic chips) on the parameters that define the attack, the algorithm, and the processor architecture in which the latter is implemented. To this aim, the main parameters that are of interest in practical DPA attacks are analytically derived under appropriate approximations, and a novel figure of merit to measure the DPA effectiveness of multibit attacks is proposed. This figure of merit allows for identifying conditions that maximize the effectiveness of DPA attacks, i.e., conditions under which a cryptographic chip should be tested to assess its robustness. Several interesting properties of DPA attacks are derived, and suggestions to design algorithms and circuits with higher robustness against DPA are given. The proposed model is validated in the case of DES and AES algorithms with both simulations on an MIPS32 architecture and measurements on an FPGA-based implementation of AES. The model accuracy is shown to be adequate, as the resulting error is always lower than 10 percent and typically of a few percentage points.
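
A toy multibit DPA illustration, only to make the quantities such a model reasons about (selection function, partitioning, DPA peak) concrete: simulate Hamming-weight-plus-noise "power" samples for a 4-bit S-box, and for each key guess compare the mean power of traces whose predicted S-box output is all ones against those predicted all zeros. The S-box, key, and noise level are illustrative, not the DES/AES setups analyzed in the paper.

```python
# Toy multibit DPA (difference of means) against a 4-bit S-box with simulated
# Hamming-weight-plus-noise leakage. The correct guess yields the largest peak.
import random

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # a 4-bit S-box (PRESENT's)
SECRET_KEY = 0x9
hw = lambda x: bin(x).count("1")

plaintexts = [random.randrange(16) for _ in range(4000)]
traces = [hw(SBOX[pt ^ SECRET_KEY]) + random.gauss(0, 0.5) for pt in plaintexts]

def dpa_statistic(guess):
    # Partition traces by the predicted S-box output under this key guess.
    hi = [t for pt, t in zip(plaintexts, traces) if SBOX[pt ^ guess] == 0xF]
    lo = [t for pt, t in zip(plaintexts, traces) if SBOX[pt ^ guess] == 0x0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

best = max(range(16), key=dpa_statistic)
print(f"recovered key nibble {best:#x} (secret was {SECRET_KEY:#x})")
```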

Journal ArticleDOI
TL;DR: A novel key predistribution scheme that uses deployment knowledge to divide deployment regions into overlapping clusters, each of which has its own distinct key space, which improves network resilience without compromising connectivity or communications overhead.
Abstract: We present a novel key predistribution scheme that uses deployment knowledge to divide deployment regions into overlapping clusters, each of which has its own distinct key space. Through careful construction of these clusters, network resilience is improved, without compromising connectivity or communications overhead. Experimental results show significant improvement in performance over existing schemes based on deployment knowledge.
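
A rough sketch of the cluster-based predistribution idea: each (possibly overlapping) deployment cluster owns a distinct key pool, a node draws its key ring from the pools of every cluster covering its expected position, and two neighbors can establish a link only if their rings share a key. The pool sizes, ring sizes, and geometry below are invented, not the paper's construction.

```python
# Rough sketch of deployment-knowledge key predistribution with per-cluster
# key spaces; cluster overlap is modeled simply by a node belonging to more
# than one cluster. Parameters are invented for illustration.
import random

POOL_SIZE, RING_SIZE = 100, 30

pools = {c: {f"{c}:{i}" for i in range(POOL_SIZE)} for c in ["A", "B", "C"]}

def key_ring(clusters):
    union = set().union(*(pools[c] for c in clusters))
    return set(random.sample(sorted(union), RING_SIZE))

n1 = key_ring(["A"])          # node deployed well inside cluster A
n2 = key_ring(["A", "B"])     # node in the A/B overlap region
n3 = key_ring(["C"])          # node in a distant cluster

print("n1-n2 can link:", bool(n1 & n2))   # very likely: both draw from pool A
print("n1-n3 can link:", bool(n1 & n3))   # impossible: disjoint key spaces
```

A compromised node in cluster C exposes only keys from C's pool, which is the resilience benefit the abstract refers to.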

Journal ArticleDOI
TL;DR: This paper proposes a new web referral architecture for privileged service (“WRAPS”), which allows a legitimate client to obtain a privilege URL through a simple click on a referral hyperlink, from a website trusted by the target website.
Abstract: The web is a complicated graph, with millions of websites interlinked together. In this paper, we propose to use this web sitegraph structure to mitigate flooding attacks on a website, using a new web referral architecture for privileged service (“WRAPS”). WRAPS allows a legitimate client to obtain a privilege URL through a simple click on a referral hyperlink, from a website trusted by the target website. Using that URL, the client can get privileged access to the target website in a manner that is far less vulnerable to a distributed denial-of-service (DDoS) flooding attack than normal access would be. WRAPS does not require changes to web client software and is extremely lightweight for referrer websites, which makes its deployment easy. The massive scale of the web sitegraph could deter attempts to isolate a website through blocking all referrers. We present the design of WRAPS, and the implementation of a prototype system used to evaluate our proposal. Our empirical study demonstrates that WRAPS enables legitimate clients to connect to a website smoothly in spite of a very intensive flooding attack, at the cost of small overheads on the website's ISP's edge routers. We discuss the security properties of WRAPS and a simple approach to encourage many small websites to help protect an important site during DoS attacks.

Journal ArticleDOI
TL;DR: This paper shows that the dual-quorum protocol can approach the optimal performance and availability of Read-One/Write-All-Asynchronously (ROWA-A) epidemic algorithms without suffering the weak consistency guarantees and resulting design complexity inherent in ROWa-A systems.
Abstract: This paper introduces dual-quorum replication, a novel data replication algorithm designed to support Internet edge services. Edge services allow clients to access Internet services via distributed edge servers that operate on a shared collection of underlying data. Although it is generally difficult to share data while providing high availability, good performance, and strong consistency, replication algorithms designed for specific access patterns can offer nearly ideal trade-offs among these metrics. In this paper, we focus on the key problem of sharing read/write data objects across a collection of edge servers when the references to each object (1) tend not to exhibit high concurrency across multiple nodes and (2) tend to exhibit bursts of read-dominated or write-dominated behavior. Dual-quorum replication combines volume leases and quorum-based techniques to achieve excellent availability, response time, and consistency for such workloads. In particular, through both analytical and experimental evaluations, we show that the dual-quorum protocol can (for the workloads of interest) approach the optimal performance and availability of Read-One/Write-All-Asynchronously (ROWA-A) epidemic algorithms without suffering the weak consistency guarantees and resulting design complexity inherent in ROWA-A systems.

Journal ArticleDOI
TL;DR: This work model the probabilistic behavior of a system comprising a failure detector and a monitored crash-recovery target and indicates that variation in the MTTF and MTTR of the monitored process can have a significant impact on the QoS of the failure detector.
Abstract: We model the probabilistic behavior of a system comprising a failure detector and a monitored crash-recovery target. We extend failure detectors to take account of failure recovery in the target system. This involves extending QoS measures to include the recovery detection speed and the proportion of failures detected. We also extend the technique of estimating the failure detector's parameters to achieve a required QoS so that it can be used to configure the crash-recovery failure detector. We investigate the impact of the dependability of the monitored process on the QoS of our failure detector. Our analysis indicates that variation in the MTTF and MTTR of the monitored process can have a significant impact on the QoS of our failure detector. Our analysis is supported by simulations that validate our theoretical results.

Journal ArticleDOI
TL;DR: This work identifies a range of greedy receiver misbehaviors, and quantifies their damage using both simulation and testbed experiments, and develops techniques to detect and mitigate greedy Receiver misbehavior, and demonstrates their effectiveness.
Abstract: As wireless hotspot business becomes a tremendous financial success, users of these networks have increasing motives to misbehave in order to obtain more bandwidth at the expense of other users. Such misbehaviors threaten the performance and availability of hotspot networks and have recently attracted increasing research attention. However, the existing work so far focuses on sender-side misbehavior. Motivated by the observation that many hotspot users receive more traffic than they send, we study greedy receivers in this paper. We identify a range of greedy receiver misbehaviors, and quantify their damage using both simulation and testbed experiments. Our results show that even though greedy receivers do not directly control data transmission, they can still result in very serious damage, including completely shutting off the competing traffic. To address the issues, we further develop techniques to detect and mitigate greedy receiver misbehavior, and demonstrate their effectiveness.

Journal ArticleDOI
TL;DR: EHMA as discussed by the authors is a two-tier and cluster-wise matching algorithm, which significantly reduces the amount of external memory accesses and the capacity of memory, and is very simple and therefore practical for both software and hardware implementations.
Abstract: Detection engines capable of inspecting packet payloads for application-layer network information are urgently required. The most important technology for fast payload inspection is an efficient multipattern matching algorithm, which performs exact string matching between packets and a large set of predefined patterns. This paper proposes a novel Enhanced Hierarchical Multipattern Matching Algorithm (EHMA) for packet inspection. Based on the occurrence frequency of grams, a small set of the most frequent grams is discovered and used in the EHMA. EHMA is a two-tier and cluster-wise matching algorithm, which significantly reduces the amount of external memory accesses and the capacity of memory. Using a skippable scan strategy, EHMA speeds up the scanning process. Furthermore, independent of parallel and special functions, EHMA is very simple and therefore practical for both software and hardware implementations. Simulation results reveal that EHMA significantly improves the matching performance. The speed of EHMA is about 0.89-1,161 times faster than that of current matching algorithms. Even under real-life intense attack, EHMA still performs well.

Journal ArticleDOI
TL;DR: This paper proposes a new inference control architecture, entrusting inference control to each user's platform that is equipped with trusted computing technology, which avoids the bottleneck in the traditional architecture and can potentially support a large number of users making queries.
Abstract: Inference has been a longstanding issue in database security, and inference control, aiming to curb inference, provides an extra line of defense to the confidentiality of databases by complementing access control. However, in the traditional inference control architecture, the database server is a crucial bottleneck, as it enforces highly computation-intensive auditing for all users who query the protected database. As a result, most auditing methods, though rigorously studied, are not practical for protecting large-scale real-world database systems. In this paper, we shift this paradigm by proposing a new inference control architecture, entrusting inference control to each user's platform that is equipped with trusted computing technology. The trusted computing technology is designed to attest the state of a user's platform to the database server, so as to assure the server that inference control could be enforced as prescribed. A generic protocol is proposed to formalize the interactions between the user's platform and the database server. The authentication property of the protocol is formally proven. Since inference control is enforced in a distributed manner, our solution avoids the bottleneck in the traditional architecture and thus can potentially support a large number of users making queries.