
Showing papers by "Gyorgy Dan published in 2010"


Proceedings ArticleDOI
04 Nov 2010
TL;DR: This work proposes two algorithms for placing encrypted devices in the system so as to maximize their utility in terms of increased system security, and illustrates the effectiveness of these algorithms on two IEEE benchmark power networks under two attack and protection cost models.
Abstract: State estimators in power systems are currently used, for example, to detect faulty equipment and to route power flows. It is believed that state estimators will also play an increasingly important role in future smart power grids, as a tool to route power flows optimally and more dynamically. The security of the estimator therefore becomes an important issue. The estimators are currently located in control centers, and large numbers of measurements are sent to the centers over unencrypted communication channels. Here we study stealthy false-data attacks against these estimators. We define a security measure tailored to quantify how hard attacks are to perform, and describe an efficient algorithm to compute it. Since there are so many measurement devices in these systems, it is not reasonable to assume that all of them can be encrypted overnight. We therefore propose two algorithms for placing encrypted devices in the system so as to maximize their utility in terms of increased system security. We illustrate the effectiveness of our algorithms on two IEEE benchmark power networks under two attack and protection cost models.
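The stealth condition at the heart of such attacks can be illustrated with a toy DC state estimation example. This is a minimal sketch, not the paper's model: the 4x2 measurement matrix `H`, the state vector, and the noise level are all hypothetical, and the bad data detector is a plain least-squares residual test.

```python
import numpy as np

# Toy DC state estimation: z = H x + e, with a hypothetical 4x2
# measurement matrix H (4 measurements, 2 state variables).
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])
x_true = np.array([0.5, -0.2])
rng = np.random.default_rng(0)
z = H @ x_true + 0.01 * rng.standard_normal(4)  # noisy measurements

def residual_norm(z, H):
    """Least-squares state estimate, then norm of the residual z - H x_hat.
    A residual-based bad data detector alarms when this norm is large."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

# A stealthy attack lies in the column space of H: a = H c.
# It shifts the estimate by exactly c but leaves the residual unchanged.
c = np.array([0.3, -0.1])
a_stealthy = H @ c
# A naive injection generically has a component outside col(H)
# and inflates the residual, so the detector can flag it.
a_naive = np.array([0.3, 0.0, 0.0, 0.0])

r_clean = residual_norm(z, H)
r_stealthy = residual_norm(z + a_stealthy, H)
r_naive = residual_norm(z + a_naive, H)
assert abs(r_stealthy - r_clean) < 1e-9   # invisible to the detector
assert r_naive > r_clean                  # naive attack raises the residual
```

Protecting a measurement (e.g., by encryption) removes it from the attacker's reach and can force every stealthy attack vector to involve more devices, which is one way to read the security measure the paper optimizes.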

419 citations


Posted Content
TL;DR: In this article, the authors analyze the cyber security of state estimators in supervisory control and data acquisition (SCADA) systems used in energy management systems (EMS) operating the power network.
Abstract: The electrical power network is a critical infrastructure in today's society, so its safe and reliable operation is of major concern. State estimators are commonly used in power networks, for example, to detect faulty equipment and to optimally route power flows. The estimators are often located in control centers, to which large numbers of measurements are sent over unencrypted communication channels. Therefore cyber security for state estimators becomes an important issue. In this paper we analyze the cyber security of state estimators in supervisory control and data acquisition (SCADA) for energy management systems (EMS) operating the power network. Current EMS state estimation algorithms have bad data detection (BDD) schemes to detect outliers in the measurement data. Such schemes are based on high measurement redundancy. Although these methods may detect a set of basic cyber attacks, they may fail in the presence of an intelligent attacker. We explore the latter by considering scenarios where stealthy deception attacks are performed by sending false information to the control center. We begin by presenting a recent framework that characterizes the attack as an optimization problem with the objective specified through a security metric and constraints corresponding to the attack cost. The framework is used to conduct realistic experiments on a state-of-the-art SCADA EMS software for a power network example with 14 substations, 27 buses, and 40 branches. The results indicate how state estimators for power networks can be made more resilient to cyber security attacks.
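The redundancy-based bad data detection (BDD) scheme mentioned above can be sketched as a chi-square test on the weighted residual. This is a minimal illustration, not the EMS software's implementation: the 4x2 matrix `H`, the meter accuracy `sigma`, and the gross error magnitude are hypothetical, and the measurements are noiseless to keep the example deterministic.

```python
import numpy as np

# Sketch of a BDD residual test, J = ||r||^2 / sigma^2, compared against
# a chi-square threshold with m - n degrees of freedom.
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])
x_true = np.array([0.5, -0.2])
sigma = 0.01                      # assumed meter standard deviation
z_clean = H @ x_true              # noiseless for a deterministic example

def bdd_statistic(z, H, sigma):
    """Least-squares estimate, then the normalized residual statistic J."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    r = z - H @ x_hat
    return float(r @ r) / sigma**2

# Chi-square 99% quantile for m - n = 4 - 2 = 2 degrees of freedom.
THRESHOLD = 9.21

z_bad = z_clean.copy()
z_bad[0] += 0.2                   # gross error on one measurement

assert bdd_statistic(z_clean, H, sigma) < THRESHOLD   # passes
assert bdd_statistic(z_bad, H, sigma) > THRESHOLD     # outlier flagged
```

As the abstract notes, such tests rely on measurement redundancy: they catch isolated gross errors like the one above, but a coordinated injection confined to the column space of `H` leaves the statistic unchanged and slips through.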

96 citations


Proceedings Article
27 Apr 2010
TL;DR: A large-scale measurement of the most popular peer-to-peer content distribution system, BitTorrent, over eleven months shows that while short-term or small-scale measurements may suggest that the popularity of contents exhibits a power-law tail, the tail is likely exponentially decreasing, especially over long time intervals.
Abstract: The popularity of contents on the Internet is often said to follow a Zipf-like distribution. Different measurement studies showed, however, significantly different distributions depending on the measurement methodology they followed. We performed a large-scale measurement of the most popular peer-to-peer (P2P) content distribution system, BitTorrent, over eleven months. We collected data on a daily to weekly basis from 500 to 800 trackers, with information about 40 to 60 million peers that participated in the distribution of over 10 million torrents. Based on these measurements we show how fundamental characteristics of the observed distribution of content popularity change depending on the measurement methodology and the length of the observation interval. We show that while short-term or small-scale measurements can conclude that the popularity of contents exhibits a power-law tail, the tail is likely exponentially decreasing, especially over long time intervals.
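The distinction the measurement draws, a power-law versus an exponentially decreasing tail, comes down to which axes make the rank-popularity curve a straight line. A minimal sketch with synthetic data (the decay constants are arbitrary, not fitted to the BitTorrent traces):

```python
import numpy as np

# Power-law tail: log(popularity) is linear in log(rank).
# Exponential tail: log(popularity) is linear in rank itself.
# Comparing goodness of fit on the two scales is a simple tail diagnostic.
ranks = np.arange(1.0, 201.0)
zipf_like = 1e6 * ranks ** -1.5          # synthetic power-law popularity
exp_like = 1e6 * np.exp(-0.05 * ranks)   # synthetic exponential popularity

def linear_r2(x, y):
    """Squared correlation of y against x (R^2 of a straight-line fit)."""
    return float(np.corrcoef(x, y)[0, 1] ** 2)

def loglog_r2(counts):
    return linear_r2(np.log(ranks), np.log(counts))

def semilog_r2(counts):
    return linear_r2(ranks, np.log(counts))

# Each synthetic tail is best fit on its own natural scale.
assert loglog_r2(zipf_like) > semilog_r2(zipf_like)
assert semilog_r2(exp_like) > loglog_r2(exp_like)
```

Over a short observation window only the head of the distribution is sampled, where the two shapes are hard to tell apart on log-log axes; this is consistent with the abstract's point that the conclusion depends on the measurement methodology and interval length.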

85 citations


Proceedings ArticleDOI
13 Sep 2010
TL;DR: A fluid model is developed that captures the effects of the caches on the system dynamics of peer-to-peer networks, and it is shown that caches can have adverse effects on the system dynamics depending on the system parameters.
Abstract: Peer-to-peer file-sharing systems are responsible for a significant share of the traffic between Internet service providers (ISPs) in the Internet. In order to decrease their peer-to-peer related transit traffic costs, many ISPs have deployed caches for peer-to-peer traffic in recent years. We consider how the different types of peer-to-peer caches - caches already available on the market and caches expected to become available in the future - can possibly affect the amount of inter-ISP traffic. We develop a fluid model that captures the effects of the caches on the system dynamics of peer-to-peer networks, and show that caches can have adverse effects on the system dynamics depending on the system parameters. We combine the fluid model with a simple model of inter-ISP traffic and show that the impact of caches cannot be accurately assessed without considering the effects of the caches on the system dynamics. We identify scenarios when caching actually leads to increased transit traffic. Our analytical results are supported by extensive simulations and experiments with real BitTorrent clients.
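A fluid model of this kind can be sketched as a Qiu-Srikant-style pair of ODEs extended with a cache term. This is a hedged illustration, not the paper's model: the parameter values, the cache upload rate `C`, and the form of the service-rate term are all assumptions.

```python
# Euler-integrated fluid model of a torrent: x(t) leechers, y(t) seeds.
# Service capacity is peer upload mu*(eta*x + y) plus a hypothetical cache
# upload rate C; downloads are also capped by per-peer capacity c*x.
def steady_state_leechers(C, lam=10.0, mu=1.0, eta=0.5, gamma=2.0,
                          c=5.0, dt=0.01, t_end=200.0):
    x = y = 0.0
    for _ in range(int(t_end / dt)):
        rate = min(c * x, mu * (eta * x + y) + C)  # completed downloads/s
        dx = lam - rate        # arrivals minus completions
        dy = rate - gamma * y  # completions minus seed departures
        x += dt * dx
        y += dt * dy
    return x

x_no_cache = steady_state_leechers(C=0.0)
x_cache = steady_state_leechers(C=1.0)
# With the cache, downloads finish faster, so fewer leechers accumulate.
assert x_cache < x_no_cache
# Closed form for these parameters: x* = (lam - mu*lam/gamma - C)/(mu*eta).
assert abs(x_no_cache - 10.0) < 0.5
```

Because completed peers also leave sooner, a cache reduces the population of uploaders as well as downloaders; coupling such a model to an inter-ISP traffic model is what lets the paper identify parameter regimes where caching increases transit traffic.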

27 citations


Proceedings ArticleDOI
22 Feb 2010
TL;DR: This work describes a model of the channel distortion of scalable video coding and uses it in a detailed simulation setup to compare the performance of six schedulers, among them the Max-Sum and Max-Prod scheduler, which aim to maximize the sum and the product of streaming utilities, respectively.
Abstract: We consider how relatively simple extensions of popular channel-aware schedulers can be used to multicast scalable video streams in high-speed radio access networks. To support the evaluation, we first describe a model of the channel distortion of scalable video coding and validate it using eight commonly used test sequences. We use the distortion model in a detailed simulation setup to compare the performance of six schedulers, among them the Max-Sum and Max-Prod schedulers, which aim to maximize the sum and the product of streaming utilities, respectively. We investigate how the traffic load, user mobility, layering structure, and users' aversion to fluctuating distortion influence the streaming performance. Our results show that the Max-Sum scheduler performs better than the other considered schemes in almost all scenarios. With the Max-Sum scheduler, the gain of scalable video coding compared to non-scalable coding is substantial, even when users do not tolerate frequent changes in video quality.
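The difference between the two objectives is easy to see on a toy instance. This sketch uses hypothetical utilities and candidate transmissions, not the paper's scheduler internals or distortion model:

```python
from math import prod

# Each candidate transmission raises the streaming utility of a subset of
# users. Max-Sum picks the one maximizing total utility; Max-Prod (a
# proportional-fair style objective) maximizes the product instead.
def apply_gains(utilities, gains):
    return [u + gains.get(i, 0.0) for i, u in enumerate(utilities)]

def max_sum(utilities, candidates):
    return max(candidates, key=lambda g: sum(apply_gains(utilities, g)))

def max_prod(utilities, candidates):
    return max(candidates, key=lambda g: prod(apply_gains(utilities, g)))

u = [1.0, 5.0]                       # user 0 is currently far worse off
candidates = [
    {1: 3.0},                        # big gain for the well-off user
    {0: 2.0},                        # smaller gain for the worst-off user
]
assert max_sum(u, candidates) == {1: 3.0}    # total utility 9 beats 8
assert max_prod(u, candidates) == {0: 2.0}   # product 15 beats 8
```

The product objective sacrifices total utility to help the worst-off user, which is why the choice between the two matters when users differ widely in channel quality.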

22 citations


Journal ArticleDOI
TL;DR: This paper proposes an analytic framework that allows the evaluation of scheduling algorithms, and considers four solutions in which scheduling is performed at the forwarding peer, based on the knowledge of the playout buffer content at the neighbors.
Abstract: In mesh-based peer-to-peer streaming systems data is distributed among the peers according to local scheduling decisions. The local decisions affect how packets get distributed in the mesh, the probability of duplicates and consequently, the probability of timely data delivery. In this paper we propose an analytic framework that allows the evaluation of scheduling algorithms. We consider four solutions in which scheduling is performed at the forwarding peer, based on the knowledge of the playout buffer content at the neighbors. We evaluate the effectiveness of the solutions in terms of the probability that a peer can play out a packet versus the playback delay, the sensitivity of the solutions to the accuracy of the knowledge of the neighbors’ playout buffer contents, and the scalability of the solutions with respect to the size of the overlay. We also show how the model can be used to evaluate the effects of node arrivals and departures on the overlay’s performance.
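One plausible forwarding-peer policy can be sketched as follows: send the missing packet with the earliest playout deadline to a neighbor that lacks it. The buffer maps and the policy details are illustrative assumptions, not the four solutions analyzed in the paper.

```python
# Forwarding-peer scheduling sketch: given the (possibly stale) playout
# buffer maps of neighbors, pick a (neighbor, packet) pair to forward.
# Policy here: earliest-deadline packet some neighbor is still missing.
def schedule(own_buffer, neighbor_buffers):
    """own_buffer: set of packet ids held locally.
    neighbor_buffers: dict neighbor -> set of packet ids that neighbor has.
    Returns (neighbor, packet) or None. Lower packet id = earlier deadline."""
    best = None
    for neighbor, have in neighbor_buffers.items():
        missing = own_buffer - have
        if missing:
            pkt = min(missing)            # most urgent missing packet
            if best is None or pkt < best[1]:
                best = (neighbor, pkt)
    return best

own = {3, 4, 5, 6}
neighbors = {"a": {3, 4, 5, 6},           # needs nothing
             "b": {4, 5, 6},              # missing packet 3 (most urgent)
             "c": {3, 4, 6}}              # missing packet 5
assert schedule(own, neighbors) == ("b", 3)
```

When the buffer maps are stale, the chosen packet may already have arrived at the neighbor via another peer; this duplicate probability, and its effect on timely delivery, is exactly what the paper's analytic framework quantifies.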

19 citations


Journal ArticleDOI
TL;DR: An analytical model of a large class of peer-to-peer streaming architectures based on decomposition and non-linear recurrence relations is developed and it is shown how and under what conditions overlays can benefit from the use of error control solutions, prioritization and taxation schemes.

6 citations


Book ChapterDOI
11 May 2010
TL;DR: It is shown that under very general conditions there exists exactly one server capacity allocation that maximizes the social welfare under SGC, hence a simple gradient-based method can be used to find the optimal allocation.
Abstract: We address the problem of maximizing the social welfare in a peer-to-peer streaming overlay given a fixed amount of server upload capacity. We show that peers' selfish behavior leads to an equilibrium that is suboptimal in terms of social welfare, because selfish peers are interested in forming clusters and exchanging data among themselves. In order to increase the social welfare we propose a novel incentive mechanism, Server Guaranteed Cap (SGC), that uses the server capacity as an incentive for high-contributing peers to upload to low-contributing ones. We prove that SGC is individually rational and incentive compatible. We also show that under very general conditions there exists exactly one server capacity allocation that maximizes the social welfare under SGC; hence a simple gradient-based method can be used to find the optimal allocation.
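The uniqueness result is what makes a gradient method sufficient. A minimal sketch with a hypothetical strictly concave welfare function of the capacity share s allocated under the scheme (the paper's actual welfare function is different):

```python
import math

# Hypothetical strictly concave social welfare as a function of the server
# capacity s in [0, S] allocated one way (the rest, S - s, the other way).
S = 10.0

def welfare(s):
    return math.log(1.0 + s) + 2.0 * math.log(1.0 + S - s)

def welfare_grad(s):
    return 1.0 / (1.0 + s) - 2.0 / (1.0 + S - s)

# Projected gradient ascent: strict concavity plus uniqueness of the
# maximizer guarantee convergence to the single optimal allocation.
s = 0.0
for _ in range(2000):
    s += 0.5 * welfare_grad(s)
    s = min(max(s, 0.0), S)   # project back onto [0, S]

# Analytic optimum for this welfare: 1/(1+s) = 2/(11-s)  =>  s = 3.
assert abs(s - 3.0) < 0.01
```

With multiple local maxima a gradient method could stall at the wrong allocation; the "exactly one" condition in the abstract is what rules that out.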

6 citations


Book ChapterDOI
01 Jan 2010
TL;DR: This chapter discusses supervisory control and data acquisition (SCADA) systems, which are widely used to monitor and control large-scale transmission power grids.
Abstract: Introduction Supervisory control and data acquisition (SCADA) systems are widely used to monitor and control large-scale transmission power grids. Monitoring traditionally involves the measurement ...

1 citation


01 Jan 2010
TL;DR: An existing BitTorrent client is modified to implement two swarm management algorithms, Random Peer Migration (RPM) and Random Multi Tracking (RMT), which introduce peers in different swarms to each other by leveraging the Peer Exchange (PEX) protocol.
Abstract: Among peer-to-peer systems, BitTorrent has attracted significant attention in the research community because of its efficiency, scalability and robustness. BitTorrent utilizes peer contribution to distribute content by splitting the content into many pieces which can be transferred among peers. Unfortunately, BitTorrent depends on trackers to let peers interested in the same content discover each other. A tracker can be considered a single point of failure and a bottleneck in terms of system scalability. The scalability and availability of the tracker can be improved by introducing multiple trackers, an extension that allows the co-existence of multiple swarms sharing the same content. The existence of multiple swarms that are not aware of each other may, however, degrade efficiency due to piece and bandwidth unavailability in small swarms. Swarm management algorithms therefore aim to increase the swarm sizes virtually at low cost, thereby increasing piece availability and peer contribution for better performance. In this thesis we developed a framework for measuring the performance of swarm management algorithms in an experimental testbed. The testbed offers the opportunity to perform controlled experiments in different scenarios. An improved PEX protocol was also developed that takes peers' swarm membership information into account to support mixing among swarms. We modified an existing BitTorrent client to implement two swarm management algorithms, Random Peer Migration (RPM) and Random Multi Tracking (RMT), which introduce peers in different swarms to each other by leveraging the Peer Exchange (PEX) protocol. RPM achieves mixing through peers migrating between swarms. RMT allows a fraction of peers to associate with more than one tracker and mix peer information between swarms.
We evaluated the performance of the two swarm management algorithms in torrents in which all swarms are in steady state and have a publisher always available. The algorithms are estimated to improve protocol performance by around 5% in most scenarios, whereas gains of around 40% can be observed for small torrents. The algorithms are shown to improve BitTorrent performance without sacrificing the robustness and load balancing properties of the multi-tracker extension.
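The mixing effect of random peer migration can be sketched in expectation: if at each reannounce a peer moves, with probability p, to a tracker chosen uniformly among the k swarms for the content, expected swarm sizes converge geometrically toward balance. A toy deterministic recursion with illustrative numbers (not the thesis's measured torrents):

```python
# Expected-value sketch of Random Peer Migration (RPM): with probability p
# a peer re-registers with a uniformly chosen tracker for the same content,
# so the expected size of each swarm relaxes toward the average.
def rpm_expected_sizes(sizes, p, rounds):
    k = len(sizes)
    total = sum(sizes)
    for _ in range(rounds):
        sizes = [(1 - p) * n + p * total / k for n in sizes]
    return sizes

# One large and one tiny swarm for the same torrent.
sizes = rpm_expected_sizes([900.0, 100.0], p=0.1, rounds=100)
# The imbalance shrinks by a factor (1 - p) per round: 400 * 0.9**100 ≈ 0.01.
assert abs(sizes[0] - 500.0) < 1.0
assert abs(sizes[1] - 500.0) < 1.0
```

A migrating peer also carries PEX contacts learned in its old swarm, which is how a small migration probability can spread peer information between otherwise isolated swarms.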