On the Optimizing of LTE System Performance for SISO and MIMO Modes
01 Dec 2015-pp 412-416
TL;DR: Packet-loss experiments showed that an acceptable Packet Error Rate (PER) can be achieved before the SNR threshold values are reached by applying an optimization technique such as a Markov Decision Process (MDP), which can enhance the performance of the LTE system.
Abstract: Packet size and modulation scheme play an important role in optimizing the data rate of a Long Term Evolution (LTE) system. This paper evaluates the performance of the existing LTE system, in terms of throughput and packet loss, for the Orthogonal Frequency-Division Multiple Access (OFDMA) modulation scheme with different packet sizes. The simulation experiments are carried out using MATLAB and Simulink libraries. The results show and discuss the effects of packet size and adaptive modulation on LTE performance with respect to throughput and packet loss. Moreover, the signal-to-noise ratio (SNR) threshold values can be optimized to enhance the performance of the LTE system. Packet-loss experiments showed that an acceptable Packet Error Rate (PER) can be achieved before the SNR threshold values are reached by applying an optimization technique such as a Markov Decision Process (MDP).
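The abstract's closing idea, using an MDP to pick modulation schemes around SNR thresholds, can be sketched with a toy value iteration. Everything below (the SNR bins, reward numbers, and transition matrix) is a hypothetical illustration, not the paper's model:

```python
# Toy MDP sketch: states are coarse SNR bins, actions are modulation
# schemes, reward approximates goodput (bits/symbol weighted by success
# probability). All numbers are invented for illustration.
import numpy as np

ACTIONS = ["QPSK", "16-QAM", "64-QAM"]

# reward[s, a]: illustrative expected goodput per SNR bin and modulation
reward = np.array([
    [1.8, 0.5, 0.1],   # low SNR: higher-order modulations mostly fail
    [1.9, 3.2, 1.0],   # medium SNR: 16-QAM pays off
    [2.0, 3.9, 5.5],   # high SNR: 64-QAM pays off
])

# P[s, s']: chance the channel drifts between SNR bins
# (action-independent here, which keeps the example deliberately simple)
P = np.array([
    [0.7, 0.3, 0.0],
    [0.2, 0.6, 0.2],
    [0.0, 0.3, 0.7],
])

gamma = 0.9
V = np.zeros(3)
for _ in range(500):   # value iteration to a fixed point
    V = (reward + gamma * (P @ V)[:, None]).max(axis=1)

policy = [ACTIONS[a] for a in
          (reward + gamma * (P @ V)[:, None]).argmax(axis=1)]
print(policy)          # one modulation choice per SNR bin
```

Under these invented numbers the learned policy simply matches each SNR bin to the modulation that maximizes goodput there, which is the behavior an MDP-based link-adaptation scheme is meant to recover.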
Citations
TL;DR: A joint multi-channel reassignment and traffic control framework for the core backbone network in SDN-IoT is developed, along with a Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm that optimizes the objective function to achieve traffic control and channel reassignment.
Abstract: Channel reassignment reallocates already-assigned channel resources so that they are used more efficiently. Channel reassignment in Software-Defined Networking (SDN) based Internet of Things (SDN-IoT) is a promising paradigm for improving the communication performance of the network, since it allows software-defined routers (SDRs), with the help of the SDN controller, to appropriately schedule traffic loads across the corresponding channels in one link. However, existing channel reassignment works have many limitations. In this paper, we develop a joint multi-channel reassignment and traffic control framework for the core backbone network in SDN-IoT. Compared to classic performance metrics, we design a more comprehensive objective function that maximizes throughput and minimizes packet loss rate and time delay by scheduling the appropriate traffic loads to the corresponding channels in one link. We develop a Multi-Agent Deep Deterministic Policy Gradient (MADDPG)-based traffic control and multi-channel reassignment (TCCA-MADDPG) algorithm to optimize the objective function and achieve traffic control and channel reassignment. To tackle the dynamics and complexity of the core backbone network, we use the traffic prediction result as part of the channel state information. To make better use of the time continuity of the channel state, we add an LSTM layer to the neural network in the experiment to capture the timing information of the channel. Simulation results show that the proposed algorithm converges faster and outperforms existing methods.
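As a rough illustration of the kind of composite objective described above (maximize throughput while minimizing packet loss rate and delay), the following weighted-sum sketch scores two candidate channel assignments. The weights and the candidate numbers are invented assumptions; the paper's actual objective and its MADDPG optimizer are far richer:

```python
# Hypothetical weighted-sum objective: reward throughput, penalize
# packet-loss rate and delay. Weights are illustrative, not the paper's.
def objective(throughput_mbps, loss_rate, delay_ms, w=(1.0, 50.0, 0.5)):
    w_t, w_l, w_d = w
    return w_t * throughput_mbps - w_l * loss_rate - w_d * delay_ms

# Score two hypothetical assignments of flows to channels on one link:
score_a = objective(throughput_mbps=90.0, loss_rate=0.02, delay_ms=12.0)
score_b = objective(throughput_mbps=80.0, loss_rate=0.001, delay_ms=8.0)
winner = "A" if score_a > score_b else "B"
print(winner)   # assignment with the better composite score
```

An RL agent in this setting would be trained to maximize exactly such a scalarized reward, trading raw throughput against loss and latency.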
4 citations
TL;DR: In this article, a novel traffic control and resource allocation method based on deep Q-learning (DQL) is proposed, which reduces the end-to-end delay in cellular networks and in the mobile edge network.
Abstract: With the expansion of the communicative and perceptual capabilities of mobile devices in recent years, the number of complex and computationally demanding applications has also increased, rendering traditional methods of traffic management and resource allocation insufficient. Recently, mobile edge computing (MEC) has emerged as a viable solution to these problems. It provides additional computing capacity at the edge of the network, alleviating the resource limits of mobile devices while increasing the performance of critical applications, especially in terms of latency. In this work, we address the issue of reducing service delay by choosing the optimal path in the MEC network, which consists of multiple MEC servers with different capabilities, applying network load balancing where multiple requests must be handled simultaneously, and selecting routes with a deep Q-network (DQN) algorithm. A novel traffic control and resource allocation method based on deep Q-learning (DQL) is proposed, which reduces the end-to-end delay in cellular networks and in the mobile edge network. Real-life traffic scenarios with various types of user requests are considered, and a novel DQL resource allocation scheme that adaptively assigns computing and network resources is proposed. The algorithm optimizes traffic distribution between servers, reducing the total service time and balancing the use of available resources under varying environmental conditions.
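To make the Q-learning idea concrete, here is a deliberately simplified tabular sketch in which an agent learns to route requests to the MEC server with the lowest service delay. The single-state, bandit-style formulation, the server delays, and the hyperparameters are all illustrative assumptions, not the paper's DQN design:

```python
# Tabular Q-learning sketch (single state): learn which of three
# hypothetical MEC servers minimizes service delay. Reward is the
# negative observed delay; all numbers are invented for illustration.
import random

random.seed(0)
MEAN_DELAY = [40.0, 25.0, 60.0]   # mean service delay in ms per server
Q = [0.0, 0.0, 0.0]               # one Q-value per server (single state)
alpha, epsilon = 0.1, 0.1

for step in range(5000):
    if random.random() < epsilon:
        a = random.randrange(3)                  # explore a random server
    else:
        a = max(range(3), key=Q.__getitem__)     # exploit the best so far
    delay = random.gauss(MEAN_DELAY[a], 5.0)     # noisy observed delay
    Q[a] += alpha * (-delay - Q[a])              # incremental Q update

best = max(range(3), key=Q.__getitem__)
print(best)   # index of the server the agent learned to prefer
```

A DQN replaces the table with a neural network over a rich state (queue lengths, link loads, request types), but the update rule and the explore/exploit trade-off are the same in spirit.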
TL;DR: In this paper, the authors propose two link adaptation models, namely the downward cross-layer link adaptation (CLLA) model and the Markov decision process over the CLLA (MDP-CLLA) model, to improve the channel efficiency and throughput of LTE/LTE-A.
Abstract: Link adaptation (LA) is the ability to adapt the modulation scheme (MS) and the coding rate of the error correction in accordance with the quality of the radio link. The MS plays an important role in enhancing the performance of LTE/LTE-A, and its selection is typically dependent on the received signal-to-noise ratio (SNR). However, using the SNR alone to select the proper MS is not enough, given that adaptive MSs are sensitive to error. Meanwhile, non-optimal MS selection may seriously impair system performance and hence degrade LA. In LTE/LTE-A, the LA system must be designed and optimized in accordance with the characteristics of the physical layer (e.g., MSs) and MAC layer (e.g., packet loss) to enhance the channel efficiency and throughput. Accordingly, this study proposes two LA models to overcome the problem. The first model, named the cross-layer link adaptation (CLLA) model, is based on the downward cross-layer approach. This model is designed to overcome the accuracy issue of adaptive modulation in existing systems and improve the channel efficiency and throughput. The second model, named the Markov decision process over the CLLA (MDP-CLLA) model, is designed to improve the selection of modulation levels. Besides that, our previous contribution, namely the modified alpha-Shannon capacity formula, is adopted as part of the MDP-CLLA model to enhance the link adaptation of LTE/LTE-A. The effectiveness of the proposed models is evaluated in terms of throughput and packet loss for different packet sizes, using the MATLAB and Simulink environments, for the single input single output (SISO) mode over Rayleigh fading channels.
In addition, phase productivity, defined as the product of the total throughput for a specific modulation and the difference between adjacent modulation SNR threshold values, is used to determine the best model for specific packet sizes as well as the optimal packet size among the models. Results generally showed that, compared with the existing system, throughput using the CLLA model improved by 87.5-89.6% for the (QPSK → 16-QAM) modulation transition and by 0-43.3% for the (16-QAM → 64-QAM) transition. Moreover, throughput using the MDP-CLLA model improved by 87.5-88.6% and by 0-43.2% for the (QPSK → 16-QAM) and (16-QAM → 64-QAM) modulation transitions, respectively, when compared with the CLLA model and the existing system. Results for each model were also validated via the summation of the phase productivity of every modulation at specific packet sizes, followed by the application of one-way analysis of variance (ANOVA) with a post hoc test, to show that the MDP-CLLA model achieves higher efficiency than the CLLA model and the existing system.
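The phase productivity metric defined above (total throughput for a modulation times the width of its SNR operating interval) can be sketched directly. The thresholds and throughput figures below are invented for illustration, not taken from the paper:

```python
# Sketch of the "phase productivity" metric: throughput per modulation
# multiplied by the width of that modulation's SNR interval, then summed
# across modulations to score a model. All numbers are illustrative.

def phase_productivity(throughput, snr_thresholds):
    """throughput[i] pairs with the SNR interval
    [snr_thresholds[i], snr_thresholds[i+1])."""
    widths = [hi - lo for lo, hi in zip(snr_thresholds, snr_thresholds[1:])]
    return [t * w for t, w in zip(throughput, widths)]

# Example: QPSK over 5-12 dB, 16-QAM over 12-18 dB, 64-QAM over 18-25 dB
thr = [10.0, 20.0, 35.0]                      # Mbit/s per modulation
phases = phase_productivity(thr, [5, 12, 18, 25])
print(phases)        # per-modulation phase productivity
print(sum(phases))   # summed score used to compare models
```

Shifting a threshold leftward (activating a higher-order modulation earlier) widens the high-throughput interval, which is exactly how the metric rewards a better link-adaptation model.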
References
TL;DR: A packet length is suggested to reduce the effect of the IEEE 802.15.4 interference and obtain maximum throughput for the IEEE 802.11b network.
Abstract: This paper presents an interference model of an IEEE 802.11b wireless local area network (WLAN) affected by an IEEE 802.15.4 wireless personal area network (WPAN). The packet error rate (PER) of the IEEE 802.11b network under the interference of the IEEE 802.15.4 network is analyzed and obtained from the bit error rate (BER) and the collision time. The safe distance ratio can be obtained from the PER. Further, this paper suggests a packet length that reduces the effect of the IEEE 802.15.4 interference and achieves maximum throughput for the IEEE 802.11b network. The analytic results are validated using simulation.
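The PER-from-BER-and-collision-time relationship described above can be sketched as follows. The split of a packet into clean and collided bits, and all the rates used, are illustrative assumptions rather than the paper's analytic model:

```python
# Hedged sketch: a packet of L bits is received correctly only if every
# bit survives, with a higher BER over the fraction of the packet that
# overlaps an interfering transmission. All rates are illustrative.

def per(packet_bits, ber_clean, ber_collision, collision_fraction):
    clean_bits = round(packet_bits * (1.0 - collision_fraction))
    hit_bits = packet_bits - clean_bits
    p_ok = ((1.0 - ber_clean) ** clean_bits
            * (1.0 - ber_collision) ** hit_bits)
    return 1.0 - p_ok

# Longer packets overlap interference for longer, so PER grows with
# length -- the effect the suggested packet length trades against
# per-packet overhead:
print(per(1024, 1e-6, 1e-3, 0.1))
print(per(8192, 1e-6, 1e-3, 0.1))
```

This monotonic growth of PER with packet length is what makes an optimal packet length exist: shorter packets lose less to collisions but waste more airtime on headers.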
146 citations
"On the Optimizing of LTE System Per..." refers background in this paper
TL;DR: The goal of this article is to show that the proposed discrete event (DE) matrix formulation provides a simplified computer tool that allows efficient simulation and modeling of DE systems.
Abstract: Simulation schemes for discrete event (DE) systems based on a new DE matrix formulation are presented. This new formulation is a hybrid system with logical and algebraic components that allows fast, direct design and reconfiguration of rule-based controllers for manufacturing systems. It applies to general DE systems that include shared resources, dispatching, circular waits, and variable part routing. A certain DE matrix state equation together with the familiar Petri net marking transition equation yield a complete dynamical description of a DE system. Our goal in this article is to show that this provides a simplified computer tool that allows efficient simulation and modeling for DE systems.
114 citations
"On the Optimizing of LTE System Per..." refers methods in this paper
TL;DR: In this paper, the packet error rate (PER) of IEEE 802.15.4 under the interference of a saturated IEEE 802.11b network is evaluated using an analytic model, and the analytic results are validated using simulations.
Abstract: In this paper, the packet error rate (PER) of IEEE 802.15.4 under the interference of a saturated IEEE 802.11b network is evaluated using an analytic model for the case where IEEE 802.15.4 and IEEE 802.11b coexist. The PER is obtained from the bit error rate (BER) and the collision time, where the BER is obtained from the signal-to-interference-plus-noise ratio. The analytic results are validated using simulations.
48 citations
TL;DR: A comparative study among the different approaches to cross-layer design is presented based on cost, QoS parameters, throughput, and complexity factors, to help select an appropriate design for the problem to be solved.
Abstract: Wireless networks are designed to enable a variety of existing and emerging multimedia streaming applications. Multimedia streaming applications demand high quality of service (QoS), especially from wireless networks, because of the problems inherent in these environments, including interference, packet loss, delay, reduced wireless link utilization, and so on. QoS necessitates the utilization of available resources at different layers of the multimedia system components, such as the network, terminal, and content. Thus, a new design approach called cross-layer design has been developed and is now considered beneficial by the research community. The aim of the current paper is to present different cross-layer designs or approaches, classified into many categories: cross-layer design based on the direction of information (i.e., downward, upward, hybrid, MAC-centric, and joint adaptation) and on the information itself (i.e., channel state information, QoS-related parameters, resource information), ...
12 citations
"On the Optimizing of LTE System Per..." refers methods in this paper
TL;DR: The background and current situation of voice over LTE (VoLTE) technology in the LTE network are described; existing solutions and the operator situation are presented first, and then video over LTE technology is introduced.
Abstract: Long-term evolution (LTE) is a broadband wireless communication standard evolved from the Universal Mobile Telecommunications System (UMTS) network. A report from the GSA (Global mobile Suppliers Association) stated that 274 LTE networks had been launched in 101 countries as of February 2014 [1]. This paper describes the background and current situation of voice over LTE (VoLTE) technology in the LTE network. We first present existing solutions and the operator situation, then introduce video over LTE technology and elaborate on the challenges in moving from voice over LTE to video over LTE. Keywords—Long-term evolution (LTE), Voice over LTE (VoLTE)
5 citations
"On the Optimizing of LTE System Per..." refers background in this paper