Showing papers in "Computer Networks in 2004"
••
TL;DR: This paper studies the application of sensor networks to the intrusion detection problem and the related problems of classifying and tracking targets using a dense, distributed, wireless network of multi-modal resource-poor sensors combined into loosely coherent sensor arrays that perform in situ detection, estimation, compression, and exfiltration.
985 citations
••
TL;DR: A theoretical framework is developed to model the spatial and temporal correlations in WSNs and to enable the development of efficient communication protocols that exploit these advantageous intrinsic features of the WSN paradigm.
687 citations
••
TL;DR: The goal of the paper is to place some order into the existing attack and defense mechanisms, so that a better understanding of DDoS attacks can be achieved and subsequently more efficient and effective algorithms, techniques and procedures to combat these attacks may be developed.
641 citations
••
TL;DR: The findings point out the need for continuously questioning the applicability and completeness of data sets at hand when establishing the generality of any particular Internet-specific observation and for assessing its (in)sensitivity to deficiencies in the measurements.
215 citations
••
TL;DR: The technique used to minimize communication costs combines analytical results from stochastic geometry with a distributed, randomized algorithm for generating clusters of sensors to achieve the minimum communication energy.
201 citations
••
TL;DR: The impact of adding retransmission costs on the equilibrium is investigated and it is shown how this pricing can be used to make the equilibrium throughput coincide with the optimal team throughput.
135 citations
••
TL;DR: It is observed that the contributions of multi-homing and load balancing grow faster than the routing table itself, and that load balancing has surpassed multi-homing to become the fastest-growing contributor.
131 citations
••
TL;DR: This paper investigates the practical issue of designing several simpler incentive schemes that require less information, and shows using numerical analysis that these schemes converge to a fixed proportion of the full-information optimum as the number of peers in the network becomes large.
122 citations
••
TL;DR: A probabilistic solution based on a distributed trust model is proposed that is highly resilient to dynamic membership changes and scales well; a secret dealer is introduced only in the system bootstrapping phase to complement the trust-initialization assumption.
118 citations
••
TL;DR: In this paper, the authors propose a peer-to-peer (P2P) architecture for on-demand media streaming, where peers share some of their resources with the system.
118 citations
••
TL;DR: This paper takes a broad look at the problem of enhancing TCP performance under corruption losses and provides a taxonomy of potential practical classes of mitigations that TCP end-points and intermediate network elements can cooperatively use to decrease the performance impact of corruption-based loss.
••
TL;DR: A probabilistic event-driven fault localization technique that isolates the most probable set of faults through incremental updating of a symptom-explanation hypothesis and provides a set of alternative hypotheses, each of which is a complete explanation of the set of symptoms observed thus far.
••
TL;DR: A survey of recent results on the performance of a network handling elastic data traffic under the assumption that flows are generated as a random process; the survey highlights insensitivity results that allow a relatively simple expression of performance when bandwidth sharing realizes so-called "balanced fairness".
••
TL;DR: A new IP lookup scheme with worst-case search and update time of O(log n), where n is the number of prefixes in the forwarding table, based on a new data structure, a multiway range tree, which achieves the optimal lookup time of binary search, but can also be updated in logarithmic time.
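The range-based lookup idea can be illustrated with a static sketch: each prefix maps to an address interval, the interval endpoints are sorted, and a lookup is a single binary search. The names below (`RangeLookup`, `prefix_range`) are illustrative; the paper's multiway range tree additionally supports O(log n) incremental updates, which this rebuild-from-scratch simplification does not attempt.

```python
import bisect

def prefix_range(prefix, plen, width=32):
    """Address interval [lo, hi] covered by a prefix of length plen."""
    lo = prefix << (width - plen)
    hi = lo | ((1 << (width - plen)) - 1)
    return lo, hi

class RangeLookup:
    """Longest-prefix match via binary search over prefix range endpoints
    (static sketch; the multiway range tree also gives O(log n) updates)."""

    def __init__(self, prefixes, width=32):
        # prefixes: list of (value, length) pairs
        ranges = [(p, l, *prefix_range(p, l, width)) for p, l in prefixes]
        self.bounds = sorted({b for _, _, lo, hi in ranges
                              for b in (lo, hi + 1)})
        # For each elementary interval, precompute the longest covering prefix.
        self.best = []
        for start in self.bounds:
            cover = [(l, p) for p, l, lo, hi in ranges if lo <= start <= hi]
            if cover:
                l, p = max(cover)          # longest covering prefix wins
                self.best.append((p, l))
            else:
                self.best.append(None)

    def lookup(self, addr):
        """Return (prefix, length) of the longest match, or None."""
        i = bisect.bisect_right(self.bounds, addr) - 1
        return self.best[i] if i >= 0 else None
```

With 8-bit addresses and prefixes 1/1 and 101/3, an address inside the /3 range returns the /3 entry, while one covered only by the /1 falls back to it.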
••
TL;DR: The experimental results of this study reveal that certain features extracted from HTTP requests can be used to distinguish anomalous (and, therefore, suspicious) traffic from that corresponding to correct, normal connections.
••
TL;DR: This paper analyses the stability of the VRC algorithm based on a linearized TCP model with time delay and provides a design guideline for parameter setting to make the overall system stable and confirms the validity of the analysis and the effectiveness of VRC compared to RED, PI, REM, and AVQ through extensive ns-2 simulations.
••
TL;DR: A traffic model and a parameter fitting procedure are proposed that are capable of accurately predicting the queuing behavior of IP traffic exhibiting long-range dependence; very good results were obtained, since the fitted dBMAPs closely match the autocovariance, the marginal distribution and the queuing behavior of the measured traces.
••
TL;DR: This paper addresses the scenario of a Premium service which provides bandwidth on demand and which in addition allows placing deterministic bounds on the delay, and proposes a set of admission control procedures that are part of a particular resource manager such as a Bandwidth Broker.
••
TL;DR: A peer-to-peer architecture is proposed whereby each peer is responsible for detecting whether a virus or worm is propagating uncontrollably through the network, resulting in an epidemic, and for protecting its host by automatically hardening its security measures during the epidemic.
••
TL;DR: This paper proposes a tariff-based architecture framework that flexibly integrates pricing and admission control for multi-domain DiffServ networks, and models the system as a market so that the price of a service class reflects resource availability inside the network and is regulated by the market itself.
••
TL;DR: This paper analyzes the so-called Paris Metro Pricing scheme which separates the network into different and independent subnetworks, each behaving equivalently, except that they charge their customers at different rates.
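The equilibrium logic behind Paris Metro Pricing can be sketched numerically: two identically provisioned subnetworks charge different prices, and traffic splits until price plus congestion cost is equal on both. The M/M/1 delay and the linear cost model below are illustrative assumptions for the sketch, not the paper's own analysis.

```python
def pmp_split(total_load, cap, p1, p2, k=1.0):
    """Split total_load between two identical subnetworks priced p1 < p2 so
    that price + k * delay is equal on both (a Wardrop-type equilibrium).
    delay is modeled as M/M/1 mean delay; both are illustrative choices."""
    def delay(lam):
        return 1.0 / (cap - lam)           # requires lam < cap
    # Bisect on the load sent to the cheap subnetwork, keeping both stable.
    lo = max(0.0, total_load - cap + 1e-9)
    hi = min(total_load, cap - 1e-9)
    for _ in range(100):
        lam1 = (lo + hi) / 2
        lam2 = total_load - lam1
        if p1 + k * delay(lam1) < p2 + k * delay(lam2):
            lo = lam1   # cheap subnetwork still cheaper overall: shift load in
        else:
            hi = lam1
    return lam1, lam2
```

With unit capacities, total load 1 and prices 1 and 2, the cheaper subnetwork ends up carrying about 62% of the traffic, its extra congestion exactly offsetting its price advantage.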
••
TL;DR: A method is proposed for aggregating the state of an area and extending the proportional routing approach to provide hierarchical routing across multiple areas in a large network.
••
TL;DR: This paper considers policies (independent of the system that they control) as an application domain for feature interaction techniques and gives a taxonomy for policy conflict, and introduces a generic architecture for handling policy conflict.
••
TL;DR: It is shown that a slight modification of Freenet's routing table cache replacement scheme (from LRU to a replacement scheme that enforces clustering in the key space) can significantly improve performance.
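The modification can be sketched as a replacement policy: when the routing-table cache is full, evict the entry whose key is farthest from the node's own key (in circular key-space distance) instead of the least recently used entry, so that the cache clusters around the node's location. The class, key-space size, and distance metric below are illustrative assumptions, not Freenet's actual data structures.

```python
class ClusteringCache:
    """Routing-table cache that enforces clustering in the key space by
    evicting the key farthest from the node's own key (sketch)."""
    KEYSPACE = 2 ** 16   # illustrative key-space size

    def __init__(self, node_key, capacity):
        self.node_key = node_key
        self.capacity = capacity
        self.table = {}   # key -> value (e.g., a next-hop reference)

    def distance(self, k):
        d = abs(k - self.node_key)
        return min(d, self.KEYSPACE - d)   # circular key-space distance

    def insert(self, key, value):
        if key not in self.table and len(self.table) >= self.capacity:
            # Evict the farthest entry: over time the cache specializes in
            # keys near node_key, which is what improves routing performance.
            victim = max(self.table, key=self.distance)
            del self.table[victim]
        self.table[key] = value
```

Under plain LRU the oldest entry would be evicted regardless of its key; here a distant key is dropped even if it was used recently.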
••
TL;DR: This paper proposes several novel algorithms for scheduling bursts in OBS networks, with and without wavelength conversion capability, that significantly reduce the loss rate while ensuring that the maximum delay of a burst does not exceed its prescribed limit.
••
TL;DR: A systematic approach for the automatic detection of feature interactions in embedded control systems is presented, which allows the identification of interactions within a system as well as the detection of interactions that are caused by the environment.
••
TL;DR: A protocol is proposed that attempts to maximize the quality of real-time MPEG-4 video streams while simultaneously providing basic end-to-end congestion control; results show that VTP delivers consistent quality video in moderately congested networks and shares bandwidth fairly with TCP in all but a few extreme cases.
••
TL;DR: This paper describes a scalable connection management strategy for QoS-enabled networks that maximizes profit, while reducing blocking experienced by users.
••
TL;DR: By designing congestion resolution algorithms that combine the wavelength and time domains, it is possible to significantly reduce information loss and to guarantee quality-of-service differentiation among traffic classes by means of QoS algorithms specifically designed to exploit the characteristics of optical technology.
••
TL;DR: The Markovian model precisely captures the degrees to which temporal correlations and document popularity influence web trace requests and provides guidelines for designing efficient replacement algorithms that adapt to the degree of temporal correlation present in the request sequence.
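One way to read that guideline: estimate the temporal correlation of a request trace (for example via mean reuse distance) and pick a recency-based policy when correlation is strong, falling back to a popularity-based one otherwise. The functions and threshold below are illustrative assumptions, not the paper's algorithm.

```python
def mean_reuse_distance(trace):
    """Average number of requests between successive references to the
    same document: a rough proxy for temporal correlation."""
    last, dists = {}, []
    for i, doc in enumerate(trace):
        if doc in last:
            dists.append(i - last[doc])
        last[doc] = i
    return sum(dists) / len(dists) if dists else float("inf")

def choose_policy(trace, threshold):
    """Short reuse distances mean recency matters -> LRU; otherwise fall
    back to a popularity-based policy such as LFU (threshold is a tuning
    parameter assumed for this sketch)."""
    return "LRU" if mean_reuse_distance(trace) <= threshold else "LFU"
```

A trace that alternates between two hot documents has short reuse distances and favors LRU; one whose repeats are far apart is better served by popularity-based replacement.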