Author

Andrei Bogdan Rus

Bio: Andrei Bogdan Rus is an academic researcher from the Technical University of Cluj-Napoca. The author has contributed to research on topics including the Internet and Quality of service. The author has an h-index of 6, having co-authored 28 publications receiving 138 citations.

Papers
Proceedings ArticleDOI
20 Oct 2011
TL;DR: Comparisons of predictions produced by different types of neural networks with forecasts from statistical time series models show that nonlinear prediction based on NNs is better suited for traffic prediction purposes than linear forecasting models.
Abstract: Network traffic exhibits strong correlations which make it suitable for prediction. Real-time forecasting of network traffic load accurately and in a computationally efficient manner is the key element of proactive network management and congestion control. This paper compares predictions produced by different types of neural networks (NN) with forecasts from statistical time series models (ARMA, ARAR, HW). The novelty of our approach is to predict aggregated Ethernet traffic with NNs employing multiresolution learning (MRL) which is based on wavelet decomposition. In addition, we introduce a new NN training paradigm, namely the combination of multi-task learning with MRL. The experimental results show that nonlinear prediction based on NNs is better suited for traffic prediction purposes than linear forecasting models. Moreover, MRL helps to exploit the correlation structures at lower resolutions of the traffic trace and improves the generalization capability of NNs.

43 citations
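The multiresolution learning idea above can be illustrated with a one-dimensional Haar decomposition of a traffic trace: pairwise averaging yields coarser views of the series, and the network is trained on coarse views before fine ones. This is a hypothetical sketch, not the paper's implementation (which uses wavelet decomposition with neural networks); function names and the toy data are illustrative.

```python
# Minimal sketch of multiresolution views for traffic prediction
# (illustrative assumption: Haar averaging as the wavelet approximation).

def haar_approximation(series):
    """One Haar decomposition level: pairwise averages give the coarse view."""
    return [(series[i] + series[i + 1]) / 2 for i in range(0, len(series) - 1, 2)]

def multiresolution_views(series, levels):
    """Build coarse-to-fine views of the trace for staged (MRL-style) training."""
    views = [series]
    for _ in range(levels):
        views.append(haar_approximation(views[-1]))
    return list(reversed(views))  # coarsest view first, original trace last

traffic = [4, 6, 5, 7, 10, 8, 9, 11]  # toy Ethernet byte counts per interval
for view in multiresolution_views(traffic, 2):
    print(view)
```

Training on the coarsest view first exposes the long-range correlation structure of the trace before the model sees high-frequency detail, which is the intuition behind the improved generalization reported in the abstract.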

Proceedings ArticleDOI
05 May 2010
TL;DR: The system provides enhanced distributed routing that preserves the performance of the running services despite congestion that cannot be eliminated, and is an alternative to QoS-aware routing.

Abstract: The paper proposes a preliminary design of cross-layer quality of service applied to congestion control in the future Internet. This is an alternative to QoS-aware routing whenever the infrastructure operator cannot add new resources and/or re-routing is not possible. Dedicated software running in each node collects a list of local parameters, such as the available transfer rate and the one-way delay to all neighbors. This real-time status information is then distributed to all reachable nodes enabled for in-network management. Based on the statistics regarding individual link traffic, a minimal network coding scheme, triggered by cross-layer quality of service, is temporarily activated. The system provides enhanced distributed routing that preserves the performance of the running services despite congestion that cannot be eliminated.

13 citations

Proceedings ArticleDOI
01 Sep 2013
TL;DR: This paper continues the idea of gearbox-like routing algorithm selection at runtime presented at IEEE LANMAN 2011, this time with a real implementation of the Modified Dijkstra's and Floyd-Warshall algorithms in OpenFlow.

Abstract: This paper continues the idea of gearbox-like routing algorithm selection at runtime presented at IEEE LANMAN 2011. Following the results obtained by simulation, the objective this time was a real implementation of the Modified Dijkstra's and Floyd-Warshall algorithms in OpenFlow. The testbed, under Fedora Core, consisted of four Open vSwitch 1.3 virtual switches and a Beacon 1.0.2 software controller. The individual performance evaluation of the two algorithms was based on the end-to-end available transfer rate, using RTSP video flows over UDP without transfer rate obtrusion. Congestion was produced by generating UDP background traffic with iperf.

12 citations

01 Jan 2010
TL;DR: A modified Dijkstra's algorithm that calculates the distance between multiple sources and a single destination is presented, correcting the deficiencies of the classical approach by taking into account the dynamicity of the QoS parameters at the Physical Layer and MAC Sub-layer.
Abstract: This paper presents a modified Dijkstra's algorithm that calculates the distance between multiple sources and a single destination. It corrects the deficiencies of the classical approach by taking into account the dynamicity of the QoS parameters at the Physical Layer and MAC Sub-layer. The proposed composite metric is based on the available transfer rate, one-way delay and bit error rate, all measured or calculated in real time by a Cross-Layer QoS software module. The proof-of-concept was obtained by simulations in OMNET++.

10 citations
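The composite-metric idea above can be sketched as a standard Dijkstra run over link costs combined from measured QoS parameters. Because the links are bidirectional, running Dijkstra outward from the destination yields the distances from every source to that single destination. The cost formula and its weights below are illustrative assumptions, not the paper's exact metric.

```python
import heapq

def link_cost(rate_mbps, delay_ms, ber):
    # Illustrative composite metric (weights are assumptions): penalize low
    # available transfer rate, high one-way delay, and high bit error rate.
    return 1.0 / rate_mbps + 0.01 * delay_ms + 1e6 * ber

def dijkstra_to_destination(graph, dest):
    """Distances from every node to a single destination.

    graph: {node: [(neighbor, rate_mbps, delay_ms, ber), ...]} with
    symmetric links, so Dijkstra from `dest` gives the
    multiple-sources-to-one-destination distances.
    """
    dist = {dest: 0.0}
    heap = [(0.0, dest)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, rate, delay, ber in graph.get(u, []):
            nd = d + link_cost(rate, delay, ber)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy topology: A -- B -- C, destination C.
graph = {
    "A": [("B", 100.0, 2.0, 1e-9)],
    "B": [("A", 100.0, 2.0, 1e-9), ("C", 10.0, 5.0, 1e-7)],
    "C": [("B", 10.0, 5.0, 1e-7)],
}
print(dijkstra_to_destination(graph, "C"))
```

Because the QoS parameters are measured in real time, the costs (and hence the shortest paths) change as traffic conditions change, which is what distinguishes this from the classical static-weight approach.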

Proceedings ArticleDOI
01 Sep 2016
TL;DR: The Chef tool proved to be a good candidate, together with Vagrant (for automatic creation and configuration of virtual environments) and VirtualBox (as hypervisor) for automatic deployment of both OpenStack and the underlying architecture at once.
Abstract: The legacy automatic deployments of OpenStack are mostly focused on specific packages and configurations. This paper investigates the possibility of automating the deployment of both OpenStack and the underlying architecture at once (minimum one Controller Node and one Compute Node). The Chef tool proved to be a good candidate, together with Vagrant (for automatic creation and configuration of virtual environments) and VirtualBox (as hypervisor). Compared to manual installation, the time needed to reach a fully functional solution decreased from more than 30 minutes/node (or hours for inexperienced or unlucky system admins) to 10 minutes/node (or even just 2 minutes/node if the Vagrant box has been executed before). As a major drawback, Chef does not know how to handle prompts from installed packages, which required developing a workaround.

9 citations


Cited by
Journal ArticleDOI
TL;DR: An overview of the state-of-the-art deep learning architectures and algorithms relevant to the network traffic control systems, and a new use case, i.e., deep learning based intelligent routing, which is demonstrated to be effective in contrast with the conventional routing strategy.
Abstract: Currently, network traffic control systems are mainly composed of the Internet core and wired/wireless heterogeneous backbone networks. Recently, these packet-switched systems have been experiencing explosive network traffic growth due to the rapid development of communication technologies. The existing network policies are not sophisticated enough to cope with the continually varying network conditions arising from the tremendous traffic growth. Deep learning, with the recent breakthroughs in the machine learning/intelligence area, appears to be a viable approach for network operators to configure and manage their networks in a more intelligent and autonomous fashion. While deep learning has received significant research attention in a number of other domains such as computer vision, speech recognition, robotics, and so forth, its applications in network traffic control systems are relatively recent and have garnered rather little attention. In this paper, we address this point and indicate the necessity of surveying the scattered works on deep learning applications for various network traffic control aspects. In this vein, we provide an overview of the state-of-the-art deep learning architectures and algorithms relevant to network traffic control systems. Also, we discuss the deep learning enablers for network systems. In addition, we discuss, in detail, a new use case, i.e., deep learning based intelligent routing. We demonstrate the effectiveness of the deep learning-based routing approach in contrast with the conventional routing strategy. Furthermore, we discuss a number of open research issues, which researchers may find useful in the future.

643 citations

01 Jan 2005

454 citations

Journal ArticleDOI
TL;DR: Simulation results demonstrate that the proposal outperforms the benchmark method in terms of delay, throughput, and signaling overhead, and it is demonstrated how the uniquely characterized input and output traffic patterns can enhance the route computation of the deep learning based SDRs.
Abstract: In recent years, Software Defined Routers (SDRs) (programmable routers) have emerged as a viable solution to provide a cost-effective packet processing platform with easy extensibility and programmability. Multi-core platforms significantly promote SDRs' parallel computing capacities, enabling them to adopt artificial intelligence techniques, i.e., deep learning, to manage routing paths. In this paper, we explore new opportunities in packet processing with deep learning to inexpensively shift the computing needs from rule-based route computation to deep learning based route estimation for high-throughput packet processing. Even though deep learning techniques have been extensively exploited in various computing areas, researchers have, to date, not been able to effectively utilize deep learning based route computation for high-speed core networks. We envision a supervised deep learning system to construct the routing tables and show how the proposed method can be integrated with programmable routers using both Central Processing Units (CPUs) and Graphics Processing Units (GPUs). We demonstrate how our uniquely characterized input and output traffic patterns can enhance the route computation of the deep learning based SDRs through both analysis and extensive computer simulations. In particular, the simulation results demonstrate that our proposal outperforms the benchmark method in terms of delay, throughput, and signaling overhead.

287 citations

Posted Content
TL;DR: This paper proposes an LSTM RNN framework for predicting short- and long-term Traffic Matrices (TM) in large networks and validates the framework on real-world data from the GÉANT network, showing that the LSTM models converge quickly and give state-of-the-art TM prediction performance for relatively small model sizes.

Abstract: Network Traffic Matrix (TM) prediction is defined as the problem of estimating future network traffic from previously observed network traffic data. It is widely used in network planning, resource management and network security. Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that is well-suited to learn from experience to classify, process and predict time series with time lags of unknown size. LSTMs have been shown to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we propose an LSTM RNN framework for predicting Traffic Matrices (TM) in large networks. By validating our framework on real-world data from the GÉANT network, we show that our LSTM models converge quickly and give state-of-the-art TM prediction performance for relatively small model sizes.

122 citations
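The TM prediction task described above reduces to supervised learning on sliding windows of past traffic: each input is a window of recent measurements and the target is the next value. A minimal sketch of that dataset construction (window length and the toy series are illustrative; the papers feed such windows to LSTM RNNs):

```python
def make_windows(series, window):
    """Turn a traffic time series into (input window, next value) pairs,
    i.e. the supervised dataset an LSTM predictor would be trained on."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])   # last `window` observations
        y.append(series[i + window])     # value to predict
    return X, y

traffic = [10, 12, 11, 15, 14, 16, 18]  # toy per-interval traffic volumes
X, y = make_windows(traffic, window=3)
print(X[0], "->", y[0])  # prints [10, 12, 11] -> 15
```

For a full traffic matrix, each element of the series is a flattened matrix rather than a scalar, but the windowing scheme is the same.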

Proceedings ArticleDOI
23 Apr 2018
TL;DR: NeuTM as mentioned in this paper is a LSTM RNN-based framework for predicting traffic matrix in large networks, which is well suited to learn from data and classify or predict time series with time lags of unknown size.
Abstract: This paper presents NeuTM, a framework for network Traffic Matrix (TM) prediction based on Long Short-Term Memory Recurrent Neural Networks (LSTM RNNs). TM prediction is defined as the problem of estimating the future network traffic matrix from previously observed network traffic data. It is widely used in network planning, resource management and network security. Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that is well-suited to learn from data and classify or predict time series with time lags of unknown size. LSTMs have been shown to model long-range dependencies more accurately than conventional RNNs. NeuTM is an LSTM RNN-based framework for predicting TMs in large networks. By validating our framework on real-world data from the GEANT network, we show that our model converges quickly and gives state-of-the-art TM prediction performance.

117 citations