scispace - formally typeset
Author

Hussein Al-Shatri

Other affiliations: University of Rostock
Bio: Hussein Al-Shatri is an academic researcher at Technische Universität Darmstadt. He has contributed to research in the topics of relaying and wireless networks, has an h-index of 13, and has co-authored 58 publications receiving 629 citations. His previous affiliations include the University of Rostock.

Papers published on a yearly basis

Papers
Proceedings ArticleDOI
22 May 2016
TL;DR: Numerical results show that the proposed approach, which requires only causal knowledge of the energy harvesting process and channel coefficients, suffers only a small performance degradation compared to the optimum case, which requires perfect non-causal knowledge.
Abstract: Energy harvesting point-to-point communications are considered. The transmitter harvests energy from the environment and stores it in a finite battery. It is assumed that the transmitter always has data to transmit and that the harvested energy is used exclusively for data transmission. Since prior knowledge about the energy harvesting process might not be available in practical scenarios, we assume that at each time instant only information about the current state of the transmitter is available, i.e., the harvested energy, the battery level and the channel coefficient. We model the scenario as a Markov decision process and implement reinforcement learning at the transmitter to find a power allocation policy that aims at maximizing the throughput. To overcome the limitations of traditional reinforcement learning algorithms, we apply the concept of function approximation and propose a set of binary functions to approximate the expected throughput given the state of the transmitter. Numerical results show that the proposed approach, which requires only causal knowledge of the energy harvesting process and channel coefficients, suffers only a small performance degradation compared to the optimum case, which requires perfect non-causal knowledge. Additionally, the proposed approach outperforms naive policies that assume only causal knowledge at the transmitter.
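The paper's exact algorithm and feature set are not reproduced here, but the general idea can be sketched as Q-learning with a linear approximation over binary indicator features. All discretizations and values below are hypothetical toy choices, not the paper's model:

```python
import math
import random

random.seed(0)

# Hypothetical discretization (not the paper's exact model).
BATTERY_LEVELS = 10          # battery capacity in energy units
POWERS = [0, 1, 2, 3]        # candidate transmit powers (energy units per slot)
CHANNELS = [0.5, 1.0, 2.0]   # possible channel gains
N_FEATURES = (BATTERY_LEVELS + 1) * len(CHANNELS) * len(POWERS)

def features(battery, ch_idx, power):
    """Binary features: one indicator per (battery, channel, power) triple."""
    vec = [0.0] * N_FEATURES
    idx = (battery * len(CHANNELS) + ch_idx) * len(POWERS) + POWERS.index(power)
    vec[idx] = 1.0
    return vec

def q_value(w, battery, ch_idx, power):
    """Linear function approximation of the expected return."""
    return sum(wi * fi for wi, fi in zip(w, features(battery, ch_idx, power)))

def learn(slots=2000, alpha=0.1, gamma=0.9, eps=0.1):
    w = [0.0] * N_FEATURES
    battery, ch_idx = 5, 1
    for _ in range(slots):
        feasible = [p for p in POWERS if p <= battery]   # cannot overdraw battery
        if random.random() < eps:
            power = random.choice(feasible)              # explore
        else:
            power = max(feasible, key=lambda p: q_value(w, battery, ch_idx, p))
        reward = math.log(1 + power * CHANNELS[ch_idx])  # per-slot rate
        # Causal knowledge only: the next harvest and channel are observed,
        # never predicted.
        harvest = random.choice([0, 1, 2])
        nxt_b = min(BATTERY_LEVELS, battery - power + harvest)
        nxt_c = random.randrange(len(CHANNELS))
        # TD(0) update of the linear weights.
        target = reward + gamma * max(
            q_value(w, nxt_b, nxt_c, p) for p in POWERS if p <= nxt_b)
        td_error = target - q_value(w, battery, ch_idx, power)
        f = features(battery, ch_idx, power)
        w = [wi + alpha * td_error * fi for wi, fi in zip(w, f)]
        battery, ch_idx = nxt_b, nxt_c
    return w

weights = learn()
```

With one indicator per state-action triple this reduces to tabular learning; the approximation pays off once features are shared across states, which is what the paper's binary functions are for.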

77 citations

Proceedings ArticleDOI
01 Jun 2017
TL;DR: This work considers a multi-user MECO system with a base station equipped with a single cloudlet server and parallel sharing of the cloudlet, where each user is allocated a certain fraction of the total computation power.
Abstract: Mobile-edge computation offloading (MECO) is a promising solution for enhancing the capabilities of mobile devices. For an optimal usage of offloading, a joint consideration of radio resources and computation resources is important, especially in multi-user scenarios where the resources must be shared between multiple users. We consider a multi-user MECO system with a base station equipped with a single cloudlet server. Each user can offload its entire task or part of its task. We consider parallel sharing of the cloudlet, where each user is allocated a certain fraction of the total computation power. The objective is to minimize the completion time of the users' tasks. Two different access schemes for the radio channel are considered: Time Division Multiple Access (TDMA) and Frequency Division Multiple Access (FDMA). For each access scheme, we formulate the corresponding joint optimization problem and propose efficient algorithms to solve it. Both algorithms use the bisection-search method, where each step requires solving a feasibility problem. For TDMA, the feasibility problem has a closed-form solution. Numerical results show that the performance of offloading is higher than that of local computing. In particular, MECO with FDMA outperforms MECO with TDMA, though only by a small margin.
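The bisection structure can be sketched as follows. This is a simplified model with hypothetical parameters, loosely following the TDMA case: uplink slots must fit within the deadline and cloudlet fractions must sum to at most one. Since the feasibility test is monotone in the deadline T, bisection finds the minimum completion time:

```python
def feasible(T, tasks, cloud_speed):
    """Can all tasks finish within deadline T? (simplified parallel-cloudlet model)

    Each task is (w, s, d, r): workload in CPU cycles, local CPU speed in
    cycles/s, upload size in bits if fully offloaded, uplink rate in bits/s.
    """
    if T <= 0:
        return False
    tx_time, cloud_share = 0.0, 0.0
    for w, s, d, r in tasks:
        # smallest offload fraction so the local part still meets the deadline
        x = max(0.0, 1.0 - T * s / w)
        tx_time += x * d / r                       # TDMA uplink slots must fit in T
        cloud_share += x * w / (cloud_speed * T)   # fraction of the cloudlet needed
    return tx_time <= T and cloud_share <= 1.0

def min_completion_time(tasks, cloud_speed, tol=1e-6):
    """Bisection on the completion time; feasible(T) is monotone in T."""
    lo, hi = 0.0, max(w / s for w, s, d, r in tasks)   # all-local upper bound
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid, tasks, cloud_speed):
            hi = mid
        else:
            lo = mid
    return hi

# Two hypothetical users: (cycles, local cycles/s, upload bits, uplink bits/s)
tasks = [(1e9, 1e9, 1e6, 1e7), (2e9, 1e9, 2e6, 1e7)]
T_opt = min_completion_time(tasks, cloud_speed=1e10)
```

Offloading pays off here: the all-local completion time would be 2 s (the slower user's task), while the joint solution finishes well under that.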

71 citations

Journal ArticleDOI
TL;DR: The results show that the proposed algorithm outperforms the known conventional suboptimum schemes and it is shown that the algorithm asymptotically converges to a globally optimum power allocation.
Abstract: Intercell interference is the major limitation to the performance of future cellular systems. While joint detection and joint transmission techniques aim at interference cancellation, they suffer from the limited cooperation possible among the nodes; power allocation is therefore a promising approach for optimizing the system performance. If the interference is treated as noise, the power allocation problem of maximizing the sum rate under a total power constraint is nonconvex and has so far remained open. In the present paper, the solution is found by reformulating the nonconvex objective function of the sum rate as a difference of two concave functions. A globally optimum power allocation is found by applying a branch-and-bound algorithm to the new formulation. In principle, the algorithm partitions the feasible region recursively into subregions, where for every subregion the objective function is upper and lower bounded. For each subregion, a linear program is applied for estimating the upper bound of the sum rate, which is derived from a convex maximization formulation of the problem with piecewise linearly approximated constraints. The performance is investigated by system-level simulations. The results show that the proposed algorithm outperforms the known conventional suboptimum schemes. Furthermore, it is shown that the algorithm asymptotically converges to a globally optimum power allocation.
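The branch-and-bound idea can be illustrated on a hypothetical two-link toy system (this is not the paper's algorithm: the linear-programming bounding step is replaced by a simpler monotonicity bound). The sum rate is written as a difference of two concave, increasing functions f = g - h, so on any box [lo, up] of power vectors, g(up) - h(lo) upper-bounds f, which lets whole boxes be pruned:

```python
import heapq
import math

# Hypothetical two-link toy system: H[i][j] is the channel gain from
# transmitter j to receiver i; interference is treated as noise.
H = [[1.0, 0.3],
     [0.2, 0.8]]
SIGMA = 0.1   # noise power
PMAX = 1.0    # per-link power budget

def g(p):
    """First concave, increasing term: log of noise + total received power."""
    return sum(math.log2(SIGMA + H[i][0] * p[0] + H[i][1] * p[1]) for i in range(2))

def h(p):
    """Second concave, increasing term: log of noise + interference."""
    return sum(math.log2(SIGMA + H[i][1 - i] * p[1 - i]) for i in range(2))

def sum_rate(p):
    """Sum rate as a difference of concave functions: f = g - h."""
    return g(p) - h(p)

def branch_and_bound(tol=1e-2, max_iter=50000):
    lo, up = (0.0, 0.0), (PMAX, PMAX)
    best_p, best_val = up, sum_rate(up)       # incumbent: full power on both links
    # g and h are increasing, so g(up) - h(lo) upper-bounds f on the box [lo, up].
    heap = [(-(g(up) - h(lo)), lo, up)]
    for _ in range(max_iter):
        if not heap:
            break
        neg_ub, blo, bup = heapq.heappop(heap)
        if -neg_ub <= best_val + tol:
            break                              # no remaining box can improve
        mid = tuple((l + u) / 2 for l, u in zip(blo, bup))
        for cand in (mid, bup):                # candidate feasible points
            val = sum_rate(cand)
            if val > best_val:
                best_val, best_p = val, cand
        d = 0 if bup[0] - blo[0] >= bup[1] - blo[1] else 1   # split widest edge
        for clo, cup in (
            (blo, tuple(mid[k] if k == d else bup[k] for k in range(2))),
            (tuple(mid[k] if k == d else blo[k] for k in range(2)), bup),
        ):
            ub = g(cup) - h(clo)
            if ub > best_val + tol:            # prune hopeless boxes
                heapq.heappush(heap, (-ub, clo, cup))
    return best_p, best_val

p_opt, rate_opt = branch_and_bound()
```

The recursive partitioning into subregions with upper and lower bounds per subregion mirrors the abstract's description; only the bounding rule differs.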

62 citations

Journal ArticleDOI
TL;DR: A method for capacity and coverage optimization using base station antenna electrical tilt in mobile networks is proposed, which has the potential to improve network performance while reducing operational costs and complexity, and to offer better quality of experience for the mobile users.
Abstract: One major factor influencing the coverage and capacity in mobile networks is the configuration of the antennas, especially the antenna tilt angle. By utilizing antenna tilt, signal reception within a cell can be improved and interference radiation towards other cells can be effectively reduced, which leads to a higher signal-to-interference-plus-noise ratio received by the users and an increased sum data rate in the network. In this work, a method for capacity and coverage optimization using base station antenna electrical tilt in mobile networks is proposed. It has the potential to improve network performance while reducing operational costs and complexity, and to offer a better quality of experience for mobile users. Our solution is based on the application of reinforcement learning, and the simulation results show that the algorithm significantly improves the overall data rate of the network compared to no antenna tilt optimization. The analysis in this paper focuses on the downlink of the cellular system. For the simulation experiments, a multicellular and sectorized mobile network in an urban environment with randomly distributed user terminals is considered. The main contribution of this work is the development of a learning algorithm for automated antenna tilting.
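The learning loop can be pictured with a deliberately minimal sketch: a single cell, a made-up rate curve with an assumed optimal tilt, and epsilon-greedy value estimation in place of the paper's full algorithm and multicell simulator:

```python
import math
import random

random.seed(1)

# Toy single-cell model (illustrative only; the paper uses a multicell,
# sectorized urban simulator). The mean user rate is assumed to peak at an
# unknown optimal electrical tilt.
TILTS = list(range(0, 16, 2))   # candidate tilt angles in degrees
OPT_TILT = 8.0                  # hypothetical optimum, unknown to the learner

def measured_rate(tilt):
    """Noisy network feedback: the rate degrades as the tilt moves off optimum."""
    return math.exp(-((tilt - OPT_TILT) / 5.0) ** 2) + random.gauss(0.0, 0.05)

def learn_tilt(steps=5000, eps=0.1, alpha=0.05):
    """Epsilon-greedy action-value learning over the discrete tilt settings."""
    q = {t: 0.0 for t in TILTS}
    for _ in range(steps):
        tilt = random.choice(TILTS) if random.random() < eps else max(q, key=q.get)
        q[tilt] += alpha * (measured_rate(tilt) - q[tilt])  # running average
    return max(q, key=q.get)

best_tilt = learn_tilt()
```

The appeal of this closed loop is that the network tunes its own tilt from measured feedback, with no manual site-by-site optimization.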

58 citations

Journal ArticleDOI
11 May 2017
TL;DR: The proposed approach exhibits only a small degradation compared to the offline optimum case, and the proposed feature functions achieve a better performance than standard approximation techniques.
Abstract: Energy harvesting (EH) two-hop communications are considered. The transmitter and the relay harvest energy from the environment and use it exclusively for transmitting data. A data arrival process is assumed at the transmitter. At the relay, a finite data buffer is used to store the received data. We consider a realistic scenario in which the EH nodes have only local causal knowledge, i.e., at any time instant, each EH node only knows the current value of its EH process, channel state, and data arrival process. Our goal is to find a power allocation policy that maximizes the throughput at the receiver. We show that because the EH nodes have only local causal knowledge, the two-hop communication problem can be separated into two point-to-point problems. Consequently, independent power allocation problems are solved at each EH node. To find the power allocation policy, reinforcement learning with linear function approximation is applied. Moreover, two feature functions that take the data arrival process into account are introduced to perform the function approximation. Numerical results show that the proposed approach exhibits only a small degradation compared to the offline optimum case. Furthermore, we show that the proposed feature functions achieve a better performance than standard approximation techniques.
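The separation result can be pictured with a small simulation sketch (hypothetical parameters, and a fixed placeholder policy in place of the learned one): each node acts only on its own causal state, and the two hops interact solely through the relay's finite data buffer.

```python
import math
import random

random.seed(2)

# Illustrative two-hop setup (simplified): source and relay each make a
# local, causal power decision; a finite relay buffer couples the hops.
BATT, BUF = 10, 20           # battery capacity (energy units), buffer (data units)

def local_policy(battery, channel):
    """Placeholder per-node policy: spend half the stored energy each slot.
    (The paper learns this policy via RL; a fixed rule keeps the sketch short.
    A learned policy would also use the channel argument.)"""
    return battery // 2

def simulate(slots=1000):
    b_src = b_rel = 0        # battery levels at source and relay
    buf = 0.0                # relay data buffer occupancy
    delivered = 0.0
    for _ in range(slots):
        # causal knowledge only: current harvest and channel at each node
        b_src = min(BATT, b_src + random.randint(0, 2))
        b_rel = min(BATT, b_rel + random.randint(0, 2))
        h1, h2 = random.uniform(0.5, 2.0), random.uniform(0.5, 2.0)
        # hop 1: source -> relay, limited by the free buffer space
        p1 = local_policy(b_src, h1)
        sent = min(math.log2(1 + p1 * h1), BUF - buf)
        b_src -= p1
        buf += sent
        # hop 2: relay -> destination, limited by the buffered data
        p2 = local_policy(b_rel, h2)
        out = min(math.log2(1 + p2 * h2), buf)
        b_rel -= p2
        buf -= out
        delivered += out
    return delivered / slots   # average throughput per slot

throughput = simulate()
```

Because each node's decision depends only on its own battery, channel, and the buffer state it can observe locally, the per-node power allocation problems can indeed be solved independently, as the abstract states.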

55 citations


Cited by
Book ChapterDOI
01 Jan 1998

552 citations

Book
31 Jan 2013
TL;DR: The use of multiple antennas at base stations is a key component in the design of cellular communication systems that can meet high-capacity demands in the downlink.
Abstract: The use of multiple antennas at base stations is a key component in the design of cellular communication systems that can meet high-capacity demands in the downlink. Under ideal conditions, the gai ...

456 citations

01 Jan 1997

423 citations

Journal ArticleDOI
TL;DR: In this article, the authors review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning and investigate their employment in the compelling applications of wireless networks, including heterogeneous networks, cognitive radios (CR), Internet of Things (IoT), machine to machine networks (M2M), and so on.
Abstract: Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in understanding the motivation and methodology of the various ML algorithms, so that they can be invoked for hitherto unexplored services and scenarios of future wireless networks.

413 citations