Author

Andrzej Kamisinski

Bio: Andrzej Kamisinski is an academic researcher from AGH University of Science and Technology. The author has contributed to research in topics: Dependability & Forwarding plane. The author has an h-index of 10 and has co-authored 22 publications receiving 260 citations.

Papers
Journal ArticleDOI
TL;DR: This paper provides necessary background information, reviews the available literature, outlines the proposed solutions, and identifies some design and research problems that must be addressed.
Abstract: The introduction of network function virtualisation (NFV) represents a significant change in networking technology, which may create new opportunities in terms of cost efficiency, operations, and service provisioning. Although not explicitly stated as an objective, the dependability of the services provided using this technology should be at least as good as that of conventional solutions. Logical centralisation, off-the-shelf computing platforms, and increased system complexity represent new dependability challenges relative to the state of the art. The core function of the network, with respect to failure and service management, is orchestration. The failure or misoperation of the NFV orchestrator (NFVO) will have huge network-wide consequences. At the same time, the NFVO is vulnerable to overload and design faults. Thus, the objective of this paper is to give a tutorial on the dependability challenges of the NFVO, and to give insight into the required future research. The paper provides the necessary background information, reviews the available literature, outlines the proposed solutions, and identifies some design and research problems that must be addressed.

74 citations

Proceedings ArticleDOI
12 Oct 2015
TL;DR: This work investigates a methodology for an SDN controller to detect compromised switches through real-time analysis of the periodically collected reports, and proposes two anomaly detection algorithms to detect packet droppers and packet swappers.
Abstract: Software-Defined Networking (SDN) introduces a new communication network management paradigm and has gained much attention recently. In SDN, a network controller oversees and manages the entire network by configuring routing mechanisms for the underlying switches. The switches periodically report their status to the controller, such as port statistics and flow statistics, according to their communication protocol. However, switches may contain vulnerabilities that can be exploited by attackers. A compromised switch may not only lose its normal functionality, but it may also maliciously paralyze the network by creating congestion or packet loss. Therefore, it is important for the system to be able to detect and isolate malicious switches. In this work, we investigate a methodology for an SDN controller to detect compromised switches through real-time analysis of the periodically collected reports. Two types of malicious behavior of compromised switches are investigated: packet dropping and packet swapping. We propose two anomaly detection algorithms to detect packet droppers and packet swappers. Our simulation results show that the proposed methods can effectively detect packet droppers and swappers. To the best of our knowledge, our work is the first to address malicious switch detection using statistics reports in SDN.

52 citations
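As a loose, illustrative sketch of the detection idea summarized in the abstract above: a controller could flag a switch as a possible packet dropper when the flow counters reported downstream of it are much lower than its own. The report layout, threshold, and function names below are assumptions made for this example, not the algorithms proposed in the paper.

    # Illustrative sketch (assumed data layout, not the paper's algorithm):
    # flag a switch as a suspected packet dropper when packets it reports as
    # forwarded for a flow largely fail to show up in the next hop's counters.
    DROP_THRESHOLD = 0.05  # assumed tolerable per-flow loss ratio

    def detect_packet_droppers(flow_reports, threshold=DROP_THRESHOLD):
        """flow_reports: {flow_id: [(switch_id, packets_counted), ...]},
        with per-switch flow counters listed in path order."""
        suspects = set()
        for path_counts in flow_reports.values():
            for (sw, upstream), (_, downstream) in zip(path_counts, path_counts[1:]):
                if upstream == 0:
                    continue
                loss = (upstream - downstream) / upstream
                if loss > threshold:
                    suspects.add(sw)  # packets vanish right after this switch
        return suspects

    # Example: switch "s2" counts 990 packets for flow f1, but only 400 reach "s3".
    reports = {"f1": [("s1", 1000), ("s2", 990), ("s3", 400)]}
    print(detect_packet_droppers(reports))  # {'s2'}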

Journal ArticleDOI
TL;DR: This survey presents a systematic, tutorial-like overview of packet-based fast-recovery mechanisms in the data plane, focusing on concepts but structured around different networking technologies, from traditional link-layer and IP-based mechanisms, through BGP and MPLS, to emerging software-defined networks and programmable data planes.

Abstract: In order to meet their stringent dependability requirements, most modern packet-switched communication networks support fast-recovery mechanisms in the data plane. While reactions to failures in the data plane can be significantly faster than control-plane mechanisms, implementing fast recovery in the data plane is challenging and has recently received much attention in the literature. This survey presents a systematic, tutorial-like overview of packet-based fast-recovery mechanisms in the data plane, focusing on concepts but structured around different networking technologies, from traditional link-layer and IP-based mechanisms, through BGP and MPLS, to emerging software-defined networks and programmable data planes. We examine the evolution of fast-recovery standards and mechanisms over time, and identify and discuss the fundamental principles and algorithms underlying the different mechanisms. We then present a taxonomy of the state of the art, summarize the main lessons learned, and propose a few concrete future directions.

42 citations
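A common principle behind the mechanisms surveyed above is that alternatives are precomputed and activated locally in the data plane once a failure is detected, without waiting for the control plane. The following minimal sketch illustrates only that principle; the table layout, port names, and prefixes are invented for the example and are not taken from the survey.

    # Minimal illustration of local fast reroute: each destination carries a
    # precomputed primary and backup next hop, and forwarding falls back to
    # the backup as soon as the primary port is marked down (no controller).
    forwarding_table = {
        # destination prefix: (primary next hop, backup next hop)
        "10.0.1.0/24": ("portA", "portB"),
        "10.0.2.0/24": ("portC", "portA"),
    }
    port_up = {"portA": True, "portB": True, "portC": True}

    def next_hop(destination):
        primary, backup = forwarding_table[destination]
        if port_up.get(primary, False):
            return primary   # normal operation
        if port_up.get(backup, False):
            return backup    # local repair using the precomputed alternative
        return None          # both pre-installed options have failed

    port_up["portC"] = False           # a link failure is detected locally
    print(next_hop("10.0.2.0/24"))     # portA -- backup used immediately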

Proceedings ArticleDOI
01 Dec 2016
TL;DR: The main objective of this paper is to bring the performance of an SDN Master-Slave controller as close as possible to that offered by a single controller by introducing a simple replication scheme combined with a consistency check and a correction mechanism.

Abstract: Software-Defined Networking (SDN) is a new paradigm that promises to enhance network flexibility and innovation. However, operators need to thoroughly assess its advantages and threats before they can implement it. Robustness and fault tolerance are among the main criteria to be considered in such an assessment. The currently available SDN controllers offer different fault-tolerance mechanisms, but there are still many open issues, especially regarding the trade-off between consistency and performance in a fault-tolerant SDN platform. In this paper, we describe existing fault-tolerant SDN controller solutions and propose a mechanism to design a consistent and fault-tolerant Master-Slave SDN controller that is able to balance consistency and performance. The main objective of this paper is to bring the performance of an SDN Master-Slave controller as close as possible to that offered by a single controller. This is achieved by introducing a simple replication scheme, combined with a consistency check and a correction mechanism, that affects performance only during the few intervals when it is needed, instead of being active during the entire operation time.

36 citations
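The abstract above describes a replication scheme whose consistency check and correction mechanism act only when needed. The following is a rough sketch of that general pattern under invented state structures and names; it is not the mechanism implemented in the paper.

    # Rough sketch (invented state layout, not the paper's implementation):
    # the master replicates state to the slave asynchronously; a periodic
    # digest comparison detects divergence, and a correction (full resync)
    # runs only in that case, leaving the common case unaffected.
    import copy
    import hashlib
    import json

    def digest(state):
        """Deterministic digest of a controller's network state."""
        return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

    master_state = {"flows": {"f1": "s1->s2"}, "topology": ["s1", "s2"]}
    slave_state = copy.deepcopy(master_state)   # initial full copy

    # The master keeps serving requests; assume the replication message for
    # this update is delayed or lost, so the replicas diverge.
    master_state["flows"]["f2"] = "s2->s3"

    # Periodic consistency check: a cheap digest comparison, not a full sync.
    if digest(master_state) != digest(slave_state):
        slave_state = copy.deepcopy(master_state)   # correction mechanism

    assert digest(master_state) == digest(slave_state)
    print("replicas consistent")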

Proceedings ArticleDOI
01 Jun 2016
TL;DR: This paper presents a quantitative assessment of the properties of SDN backbone networks to determine whether they can provide availability similar to that of traditional IP backbone networks, and shows that the impact of software and hardware failures on the overall availability can be significantly reduced through proper overprovisioning of the SDN controller(s).

Abstract: Software-Defined Networking (SDN) promises to improve the programmability and flexibility of networks, but it may also bring new challenges that need to be explored. The main objective of this paper is to present a quantitative assessment of the properties of SDN backbone networks to determine whether they can provide availability similar to that of traditional IP backbone networks. To achieve this goal, we have completed the following steps: i) we formalized a two-level availability model that is able to capture the global network connectivity without neglecting the essential details; ii) we proposed Markov models for characterizing the single network elements in both SDN and traditional networks; iii) we carried out an extensive sensitivity analysis of a national and a world-wide backbone network. The results have highlighted the considerable impact of operational and management (OaM) failures on the overall availability of SDN. High OaM failure intensity may reduce the availability of SDN by as much as one order of magnitude compared to traditional networks. Moreover, the results show that the impact of software and hardware failures on the overall availability of SDN can be significantly reduced through proper overprovisioning of the SDN controller(s).

28 citations
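As a rough numerical illustration of the kind of two-level Markov availability modelling mentioned above: a two-state up/down model yields a steady-state element availability A = mu / (lambda + mu), and element availabilities can then be combined at the network level. All rates and the structure below are invented for the example and are not the models or numbers from the paper.

    # Rough illustration (invented rates, not the paper's models or results):
    # per-element availability from a two-state up/down Markov model,
    # combined in series/parallel at the network level. Overprovisioning the
    # controller (parallel replicas) masks most of its software failures.
    def element_availability(failure_rate, repair_rate):
        """Steady-state availability of a two-state up/down Markov model."""
        return repair_rate / (failure_rate + repair_rate)

    def series(*avail):       # every element must be up
        a = 1.0
        for x in avail:
            a *= x
        return a

    def parallel(*avail):     # at least one replica must be up
        unavail = 1.0
        for x in avail:
            unavail *= (1.0 - x)
        return 1.0 - unavail

    # Assumed example rates (per hour): the switch fails rarely but takes
    # hours to repair; the controller fails more often but restarts quickly.
    a_switch = element_availability(failure_rate=2 / 8760, repair_rate=1 / 4)
    a_ctrl = element_availability(failure_rate=1 / 720, repair_rate=2.0)

    single = series(a_switch, a_ctrl)
    replicated = series(a_switch, parallel(a_ctrl, a_ctrl))  # overprovisioned controller
    print(f"single controller: {single:.6f}, replicated: {replicated:.6f}")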


Cited by
Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal ArticleDOI
TL;DR: 6G with additional technical requirements beyond those of 5G will enable faster and further communications to the extent that the boundary between physical and cyber worlds disappears.
Abstract: The fifth generation (5G) wireless communication networks have been deployed worldwide since 2020, and more capabilities are in the process of being standardized, such as mass connectivity, ultra-reliability, and guaranteed low latency. However, 5G will not meet all requirements of the future in 2030 and beyond, and sixth generation (6G) wireless communication networks are expected to provide global coverage, enhanced spectral/energy/cost efficiency, better intelligence levels and security, etc. To meet these requirements, 6G networks will rely on new enabling technologies, i.e., air interface and transmission technologies and novel network architectures, such as waveform design, multiple access, channel coding schemes, multi-antenna technologies, network slicing, cell-free architecture, and cloud/fog/edge computing. Our vision of 6G is that it will have four new paradigm shifts. First, to satisfy the requirement of global coverage, 6G will not be limited to terrestrial communication networks, which will need to be complemented with non-terrestrial networks such as satellite and unmanned aerial vehicle (UAV) communication networks, thus achieving a space-air-ground-sea integrated communication network. Second, all spectra will be fully explored to further increase data rates and connection density, including the sub-6 GHz, millimeter wave (mmWave), terahertz (THz), and optical frequency bands. Third, facing the big datasets generated by the use of extremely heterogeneous networks, diverse communication scenarios, large numbers of antennas, wide bandwidths, and new service requirements, 6G networks will enable a new range of smart applications with the aid of artificial intelligence (AI) and big data technologies. Fourth, network security will have to be strengthened when developing 6G networks. This article provides a comprehensive survey of recent advances and future trends in these four aspects. Clearly, 6G with additional technical requirements beyond those of 5G will enable faster and further communications to the extent that the boundary between the physical and cyber worlds disappears.

935 citations

01 Jan 2013

801 citations

Journal ArticleDOI
TL;DR: A taxonomy of edge computing in 5G is established, which gives an overview of existing state-of-the-art solutions of edge computing in 5G on the basis of objectives, computational platforms, attributes, 5G functions, performance measures, and roles.

Abstract: 5G is the next generation cellular network that aspires to achieve substantial improvements in quality of service, such as higher throughput and lower latency. Edge computing is an emerging technology that enables the evolution to 5G by bringing cloud capabilities close to the end users (or user equipment, UEs) in order to overcome the intrinsic problems of the traditional cloud, such as high latency and the lack of security. In this paper, we establish a taxonomy of edge computing in 5G, which gives an overview of existing state-of-the-art solutions of edge computing in 5G on the basis of objectives, computational platforms, attributes, 5G functions, performance measures, and roles. We also present other important aspects, including the key requirements for its successful deployment in 5G and the applications of edge computing in 5G. Then, we explore, highlight, and categorize recent advancements in edge computing for 5G. By doing so, we reveal the salient features of different edge computing paradigms for 5G. Finally, open research issues are outlined.

214 citations

Journal ArticleDOI
TL;DR: This paper seeks to identify some of the many challenges where new and current researchers can still contribute to the advancement of SDN and further hasten its broadening adoption by network operators.
Abstract: Having gained momentum from its promise of centralized control over distributed network architectures at bargain costs, Software-Defined Networking (SDN) is an ever-increasing topic of research. SDN offers a simplified means to dynamically control multiple simple switches via a single controller program, which contrasts with current network infrastructures where network operators manage network devices individually. SDN has already realized some extraordinary use cases outside of academia at companies such as Google, AT&T, Microsoft, and many others. However, SDN still presents many research and operational challenges for government, industry, and campus networks. Because of these challenges, many SDN solutions have been developed in an ad hoc manner and are not easily adopted by other organizations. Hence, this paper seeks to identify some of the many challenges where new and current researchers can still contribute to the advancement of SDN and further hasten its broader adoption by network operators.

185 citations