
Showing papers by "Hussein Al-Shatri published in 2019"


Proceedings ArticleDOI
15 Apr 2019
TL;DR: A non-linear central energy minimization problem is decomposed into subproblems, and a distributed algorithm is proposed that separates the allocation of shared communication and computation resources by the AP from the offloading decisions by the MUs.
Abstract: The considered hierarchical multi-level offloading scenario consists of multiple mobile units (MUs), an access point (AP) with an attached cloudlet for mobile edge computing (MEC), and a cloud server. Each user has an arbitrarily splittable task and three possible options for computing fractions of this task: local computation, offloading to the cloudlet, and offloading to the cloud server. We decompose a non-linear central energy minimization problem into subproblems and propose a distributed algorithm that separates the allocation of shared communication and computation resources by the AP from the offloading decisions by the MUs. The AP assigns fractions of the shared bandwidth of the radio access channel, the shared backhaul transmission link to the cloud server, and the shared computation frequency at the cloudlet according to the offloading decisions of the MUs by solving closed-form expressions which are derived in this paper. Given the available resources, each MU solves a linear optimization problem to calculate the optimal fractions of its task to be computed locally or offloaded. In numerical simulations, the algorithm is shown to be stable and to reach results close to the optimal policy.
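As an illustration of the per-MU step, the sketch below shows how a single MU could split its task among local computation, cloudlet, and cloud once the AP has fixed its resource shares. All parameter values, the deadline constraint, and the grid search are hypothetical stand-ins; the paper itself uses closed-form expressions and a linear program.

```python
# Illustrative sketch only (not the paper's algorithm): a single MU
# searches over task fractions (local, cloudlet, cloud) to minimize its
# energy, subject to a completion deadline. Costs and times per unit
# task are hypothetical inputs derived from the AP's resource shares.
from itertools import product

def mu_offloading_decision(e_loc, e_up, e_bh, t_loc, t_clet, t_cloud,
                           deadline, step=0.05):
    """Grid search over task fractions (x_loc, x_clet, x_cloud)."""
    best = None
    n = int(round(1 / step))
    for i, j in product(range(n + 1), repeat=2):
        x_loc, x_clet = i * step, j * step
        x_cloud = 1.0 - x_loc - x_clet
        if x_cloud < -1e-9:
            continue
        x_cloud = max(x_cloud, 0.0)
        # The three fractions run in parallel, so the completion
        # time is the maximum of the three partial times.
        t = max(x_loc * t_loc, x_clet * t_clet, x_cloud * t_cloud)
        if t > deadline:
            continue
        # Energy: local CPU energy, plus uplink energy for both
        # offloads, plus backhaul energy for the cloud fraction only.
        energy = x_loc * e_loc + x_clet * e_up + x_cloud * (e_up + e_bh)
        if best is None or energy < best[0]:
            best = (energy, x_loc, x_clet, x_cloud)
    return best
```

For example, with expensive, slow local computation (`e_loc=3.0, t_loc=10.0`) and a deadline of 5, the search pushes most of the task to the cloudlet and cloud. Since the true per-MU problem is linear, in practice it is solved directly rather than by grid search.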

11 citations


Journal ArticleDOI
TL;DR: This work provides a general framework which elicits users’ preferences for underlay networks (UUP) and active roles in multihop networks and defines an interface which translates the technical jargon related to the topic into non-technical terminology and introduces a virtual scenario which is also understandable for users with no technical background.
Abstract: Until now, user preferences remained widely unconsidered in the design process of underlay wireless networks. Yet, with new technologies, such as device-to-device (D2D) communications being contingent upon user acceptance and their participation, user preferences are the key ingredient for designing successful products and services. Following this notion, we provide a general framework which elicits users' preferences for underlay networks (UUP) and active roles in multihop networks. Furthermore, we define an interface which translates the technical jargon related to the topic into non-technical terminology and introduce a virtual scenario which is also understandable for users with no technical background. Subsequently, based on a choice-based conjoint study, we derive the corresponding UUPs, translate them back into technical relationships, and assess the system's performance and the user participation by incorporating the elicited UUPs into a suitable D2D scenario.

7 citations


Book ChapterDOI
28 Mar 2019
TL;DR: This paper proposes a hybrid DL architecture which aims at achieving a high recognition rate at low signal-to-noise ratio (SNR) and under various channel impairments including fading, as these are the relevant operating conditions of the CR.
Abstract: The increasing maturity of the concepts which would allow for the operation of a practical Cognitive Radio (CR) network requires functionalities derived through different methodologies from other fields. One such approach is Deep Learning (DL), which can be applied to diverse problems in CR to enhance its effectiveness by increasing the utilization of the unused radio spectrum. Using DL, the CR device can identify whether the signal comes from the Primary User (PU) transmitter or from an interferer. The method proposed in this paper is a hybrid DL architecture which aims at achieving a high recognition rate at low signal-to-noise ratio (SNR) and under various channel impairments including fading, as these are the relevant operating conditions of the CR. It consists of an autoencoder and a neural network (NN) structure, chosen for the good denoising qualities of the former and the recognition accuracy of the latter. The autoencoder aims to restore the original signal from the corrupted samples, which increases the accuracy of the classifier. Afterwards, its output is fed into the NN, which learns the characteristics of each modulation type and classifies the restored signal correctly with a certain probability. To determine the optimal classification DL model, several types of NN structures are examined and compared for input comprised of the IQ samples of the reconstructed signal. The performance of the proposed DL architecture is also analyzed in comparison to similar models for the relevant parameters in different channel impairment scenarios.
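The two-stage pipeline described above can be sketched as follows. This is a minimal structural illustration only: the layer sizes, the number of modulation classes, and the random (untrained) weights are all placeholders, whereas the paper's architecture is trained on labelled IQ samples.

```python
# Minimal sketch of the hybrid pipeline: a denoising autoencoder
# restores the signal, then a classifier assigns modulation-class
# probabilities. Dimensions (128 IQ samples, 32-dim bottleneck,
# 4 classes) and the random weights are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

W_enc = rng.normal(scale=0.1, size=(32, 128))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(128, 32))   # decoder weights
W_cls = rng.normal(scale=0.1, size=(4, 128))    # classifier weights

def classify(iq_samples):
    # Stage 1: autoencoder compresses and reconstructs (denoises)
    # the corrupted input samples.
    code = relu(W_enc @ iq_samples)
    restored = W_dec @ code
    # Stage 2: the classifier maps the restored signal to a
    # probability distribution over modulation types.
    return softmax(W_cls @ restored)

probs = classify(rng.normal(size=128))
```

With trained weights, `probs` would peak at the true modulation class of the restored signal; here it only demonstrates that the autoencoder output is what the classifier consumes.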

4 citations


11 Feb 2019
TL;DR: A trade-off between measurement accuracy and quantization precision for minimum Bayes risk is found and both parameters show great influence on the optimum time and energy resource allocation.
Abstract: In this work, we focus on the transmission of measurements to an estimator over a wireless communication channel with limited capacity. The process is divided into two phases, measurement and transmission. In the first phase, multiple noisy measurements of the sensor value, which is assumed to stay constant over those measurements, are taken. More measurements improve the accuracy, but also consume more time and energy. These measurements are aggregated into a sum value, which is quantized and transmitted over the communication channel with limited capacity. The measurement and transmission phases share the same time and energy budget, which limits the number of measurements and the number of bits that can be transmitted. At the receiver, the aggregated value is fed into an estimator optimized for a certain Bayes risk distortion function. Since the resources used for measurement and transmission are limited, a trade-off between measurement accuracy and quantization precision for minimum Bayes risk is found. Moreover, the influence of different resource limits for time and energy, as well as different ratios of the resources used by the measuring process and the transmission process, is investigated. Both parameters show great influence on the optimum time and energy resource allocation.
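The core trade-off can be sketched with a simple distortion model: averaging n noisy measurements shrinks the sensor-noise term like sigma^2/n, while spending b bits on a uniform quantizer shrinks the quantization term like Delta^2/(12·4^b), and both draw on one shared budget. The per-measurement and per-bit costs below are hypothetical, and this mean-squared-error model is a stand-in for the paper's Bayes risk.

```python
# Sketch of the measurement-accuracy vs. quantization-precision
# trade-off under a shared budget. sigma2 is the measurement noise
# variance, value_range the quantizer's input range; e_meas and e_bit
# are hypothetical costs per measurement and per transmitted bit.

def best_split(sigma2, value_range, budget, e_meas, e_bit):
    best = None
    max_n = int(budget // e_meas)
    for n in range(1, max_n + 1):
        # Bits affordable after paying for n measurements.
        b = int((budget - n * e_meas) // e_bit)
        if b < 1:
            break
        # Averaging error shrinks as 1/n; uniform-quantizer error
        # shrinks as 1/4^b (step halves per extra bit, error ~ step^2).
        distortion = sigma2 / n + value_range**2 / (12 * 4**b)
        if best is None or distortion < best[0]:
            best = (distortion, n, b)
    return best
```

For instance, `best_split(sigma2=1.0, value_range=1.0, budget=20.0, e_meas=1.0, e_bit=1.0)` lands at an interior split: a handful of bits already makes quantization error negligible, so most of the budget goes to extra measurements — mirroring the paper's finding that both parameters strongly influence the optimum allocation.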

2 citations


Proceedings ArticleDOI
15 Apr 2019
TL;DR: In this paper, formation control using consensus in which agents coordinate to form a circle is considered, which greatly reduces the convergence time as compared to conventional schemes.
Abstract: A scenario of multiple agents working together to accomplish a common task is considered. Consensus control facilitates the coordination among the agents over time until they accomplish the task. In this paper, we consider formation control using consensus in which agents coordinate to form a circle. To make the coordination possible, agents need to periodically exchange their positions using orthogonal transmissions through a band-limited wireless channel which encounters different transmission delays on different links. To guarantee stability and convergence over all agents, we optimize the bandwidth allocation for equal-rate transmission, which maintains the synchronization among agents. Moreover, we minimize the convergence time by optimizing the weights of the consensus algorithm. Ultra-reliable low-latency communication between agents is guaranteed by transmitting short packets. The simulation results show that jointly optimizing the confidence weights and bandwidth allocation greatly reduces the convergence time as compared to conventional schemes.
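The circle-formation idea can be sketched with a basic discrete-time consensus update: each agent subtracts its desired offset on the circle from its position and runs average consensus on the result, so all agents agree on a common centre and end up on the circle. The complete-graph topology, uniform weight, and delay-free updates below are simplifying assumptions; the paper optimizes the weights and bandwidth under per-link delays.

```python
# Sketch of circle formation via consensus. Positions are 2-D points
# stored as complex numbers. Each agent k has a desired offset on a
# circle of the given radius; consensus on (position - offset) drives
# all agents to a shared centre. Complete graph with uniform weight
# eps (needs eps < 2/n for stability) is an illustrative choice.
import cmath

def run_formation(positions, radius, eps=0.2, steps=100):
    n = len(positions)
    # Equally spaced target offsets on the circle.
    offsets = [radius * cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
    x = list(positions)
    for _ in range(steps):
        # Consensus variable: position minus desired offset.
        c = [x[i] - offsets[i] for i in range(n)]
        # Standard consensus step, then re-add the offset.
        x = [c[i] + eps * sum(c[j] - c[i] for j in range(n)) + offsets[i]
             for i in range(n)]
    return x, offsets
```

After convergence every agent sits at `centre + offsets[k]`, i.e. exactly on the circle around the average of the initial shifted positions. Increasing `eps` toward its stability limit speeds convergence, which is the lever the confidence-weight optimization in the paper tunes, jointly with the bandwidth allocation.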