Author

Rahul Thakur

Bio: Rahul Thakur is an academic researcher at the Indian Institute of Technology Madras. The author has contributed to research in the topics of Femtocell and Cellular network. The author has an h-index of 8 and has co-authored 19 publications receiving 188 citations. Previous affiliations of Rahul Thakur include the Indian Institute of Technology Roorkee and the Indian Institutes of Technology.

Papers
Journal ArticleDOI
TL;DR: A maiden approach to analyzing the performance of a vehicular network with cellular infrastructure as a backbone, using mobile Femto Access Points (FAPs) as relays in place of RSUs; the approach shows improvements in delay, throughput, and energy efficiency.
Abstract: A vehicular network with Road Side Units (RSUs) provides an efficient way to connect vehicles even on the move. However, due to the high deployment and maintenance cost of RSUs, it is necessary to use fewer RSUs so that the total cost is minimized. It is suggested that cellular networks, such as Long-Term Evolution (LTE), are capable of fulfilling the demands posed in vehicular network scenarios. Availability of high bandwidth, large coverage area, and low latency are some of the advantages of cellular networks, which help in overcoming the challenges of high-speed vehicular communication. In this paper, we propose a maiden approach to analyze the performance of a vehicular network with cellular infrastructure as a backbone. For this, we use mobile Femto Access Points (FAPs) as relays in place of RSUs. We model the network using an $M/M/m$ queue and compare the delay and throughput performance with traditional IEEE 802.11p vehicular networks. We also formulate an optimization problem and propose a subchannel power control algorithm to handle the increased co-channel interference that emerges due to the high mobility of vehicles in the network. Our suggested approach shows improvement in terms of delay, throughput, and energy efficiency. The results are verified using extensive simulations.
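
As a rough illustration of the queueing step described above (not the paper's own code), the sketch below computes the Erlang-C waiting probability and the mean time in system for an M/M/m queue; the arrival rate, service rate, and number of FAP relays are arbitrary placeholder values.

from math import factorial

def erlang_c(lam, mu, m):
    """Probability an arriving packet has to wait in an M/M/m queue (Erlang-C)."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / m                       # per-server utilisation, must be < 1
    assert rho < 1, "queue is unstable"
    s = sum(a**k / factorial(k) for k in range(m))
    top = a**m / (factorial(m) * (1 - rho))
    return top / (s + top)

def mm_m_sojourn_time(lam, mu, m):
    """Mean time in system (waiting + service) for an M/M/m queue."""
    pw = erlang_c(lam, mu, m)
    wq = pw / (m * mu - lam)          # mean waiting time in queue
    return wq + 1.0 / mu              # plus mean service time

# Illustrative numbers only: 3 FAP relays, 8 packets/s arriving, 4 packets/s served each.
print(mm_m_sojourn_time(lam=8.0, mu=4.0, m=3))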

70 citations

Journal ArticleDOI
TL;DR: This letter proposes a power control and resource block allocation scheme, called breathing, for an IoT network; it performs better than the traditional maximum power allocation scheme and a greedy power reduction scheme in terms of energy efficiency, system throughput, and system blocking.
Abstract: Undoubtedly, the Internet of Things (IoT) is the next big revolution in the field of wireless communication networks. IoT is an invisible network, which connects the physical world to the virtual world. Seamless Internet connectivity is essential between these two worlds for IoT to become a reality. In this aspect, long-term evolution advanced (LTE-A) is a promising technology, which meets the requirements of IoT. However, an exponential increase in the number of IoT devices will increase the energy consumption at base stations in LTE-A. Therefore, in this letter, we study the downlink energy efficiency aspect of LTE-A in the IoT networks. We propose a power control and resource block allocation scheme called breathing for an IoT network. Simulation results have shown that breathing performs better than the traditional maximum power allocation scheme and greedy power reduction scheme in terms of energy efficiency, system throughput, and system blocking.
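
The abstract does not spell out how the proposed breathing scheme works, so the sketch below only illustrates the baseline it is compared against: a greedy per-resource-block power reduction, with downlink energy efficiency measured in bits per Joule. All parameter values are hypothetical.

import math

def rb_rate(p_tx, gain, noise, bw_hz=180e3):
    """Shannon rate of one LTE resource block (illustrative, no fading model)."""
    return bw_hz * math.log2(1 + p_tx * gain / noise)

def greedy_power_reduction(p_max, gain, noise, rate_req, step=0.05):
    """Lower the RB transmit power while the device's rate requirement still holds.

    A stand-in for the 'greedy power reduction' baseline named in the abstract;
    the breathing scheme itself also reallocates resource blocks, which is not shown here.
    """
    p = p_max
    while p - step > 0 and rb_rate(p - step, gain, noise) >= rate_req:
        p -= step
    return p

# Hypothetical numbers: 1 W max RB power, path gain 1e-9, noise 1e-10 W, 50 kb/s demand.
p = greedy_power_reduction(p_max=1.0, gain=1e-9, noise=1e-10, rate_req=50e3)
rate = rb_rate(p, 1e-9, 1e-10)
print("power:", p, "W  energy efficiency:", rate / p, "bit/J")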

31 citations

Journal ArticleDOI
TL;DR: A stochastic geometry based framework to analyze the coverage probability and average data rate of a three-hop M2M network deployed along with User Equipments (UEs), with extensive simulations to study the system performance; the results show that the three-hop M2M network formed from out-of-range MTC devices and UEs can significantly improve the coverage and average rate of the entire network.
Abstract: With a wide range of applications, Machine-to-Machine (M2M) communication has become an emerging technology for connecting generic machines to the Internet. To ensure ubiquity in connections across all machines, it is necessary to have a standard infrastructure, such as the 3GPP LTE-A network infrastructure, that facilitates such type of communications. However, owing to the huge scale of machines to be deployed in the near future and the nature of data transactions, ensuring ubiquitous connections among all the machines will be difficult. Solutions that not only maintain connectivity but also route machine data in a cost-effective manner are the need of the hour. In this context, it has been suggested that Device-to-Device (D2D) communication can play a very important role in expanding network coverage and routing the data between source-destination machine pairs. In this paper, we conduct a feasibility study to highlight the impact of multi-hop D2D communication in increasing the network coverage and average rate of a Machine Type Communication (MTC) device. We present a stochastic geometry based framework to analyze the coverage probability and average data rate of a three-hop M2M network deployed along with User Equipments (UEs) and conduct extensive simulations to study the system performance. Our simulation results show that the three-hop M2M network formed from out-of-range MTC devices and UEs can significantly improve the coverage and average rate of the entire network. Due to the mobility of users in the network, the design of robust routing mechanisms in such a time-evolving network becomes difficult. Hence, we suggest the use of a space-time graph built from the predicted user locations to design a cost-efficient multi-hop D2D topology that enables routing of MTC data to its destination.
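
As a hedged illustration of the kind of quantity such a stochastic geometry framework tracks per hop, the Monte Carlo sketch below estimates the coverage probability of a single D2D hop with Rayleigh fading and Poisson-distributed interferers; the density, powers, and path loss exponent are placeholder values, not the paper's system parameters.

import numpy as np

rng = np.random.default_rng(0)

def coverage_prob(link_dist, intf_density, sinr_thresh_db, alpha=4.0,
                  noise=1e-12, p_tx=0.1, area_radius=500.0, trials=2000):
    """Monte Carlo estimate of P[SINR > threshold] for one D2D hop.

    Interferers form a Poisson point process of the given density (per m^2)
    inside a disc; all links see unit-mean Rayleigh fading and path loss d^-alpha.
    Illustrative model only, not the paper's exact setup.
    """
    thresh = 10 ** (sinr_thresh_db / 10)
    area = np.pi * area_radius**2
    covered = 0
    for _ in range(trials):
        n_intf = rng.poisson(intf_density * area)
        r = area_radius * np.sqrt(rng.random(n_intf))      # uniform locations in the disc
        fade_i = rng.exponential(1.0, n_intf)
        interference = np.sum(p_tx * fade_i * r ** (-alpha)) if n_intf else 0.0
        signal = p_tx * rng.exponential(1.0) * link_dist ** (-alpha)
        if signal / (interference + noise) > thresh:
            covered += 1
    return covered / trials

print(coverage_prob(link_dist=50.0, intf_density=1e-5, sinr_thresh_db=0.0))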

27 citations

Proceedings Article
13 May 2013
TL;DR: This work analyses the effects of cell biasing on a femtocell-based cellular network and provides improvements in the capacity and energy efficiency of the network through frequency reuse and subchannel power control.
Abstract: The future of cellular networks lies in heterogeneity. Heterogeneous cellular networks are characterized by an overlay of low power nodes such as microcells, picocells, and femtocells along with traditional macrocell base stations. These nodes help operators improve system capacity in a cost-effective manner while making the environment greener by reducing the carbon footprint. Research has shown that femtocells can be an effective solution to handle the increasing demands for indoor mobile traffic. However, low utilization of femtocell resources limits the gain obtained from their large-scale deployment. Also, the random placement of femtocells adds interference to macrocell users. In this paper, we introduce the concept of cell biasing for femtocells to improve user association and resource utilization. Our work analyses the effects of cell biasing on a femtocell-based cellular network and provides improvements in the capacity and energy efficiency of the network through frequency reuse and subchannel power control. The obtained analytical results are verified through simulation.
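
A minimal sketch of the cell biasing rule referred to above, assuming the usual formulation in which a positive bias (in dB) is added to the femtocell's measured signal before comparison with the macrocell; the bias value and measurements below are illustrative only.

def select_cell(macro_rsrp_dbm, femto_rsrp_dbm, bias_db=8.0):
    """Biased cell selection: a positive bias is added to the femtocell's
    measured signal before comparison, pushing more users onto femtocells.
    The 8 dB bias is an arbitrary illustrative value, not from the paper."""
    return "femto" if femto_rsrp_dbm + bias_db >= macro_rsrp_dbm else "macro"

# A user measuring -95 dBm from the macrocell and -100 dBm from a nearby femtocell
# would normally stay on the macrocell, but offloads once the bias is applied.
print(select_cell(macro_rsrp_dbm=-95.0, femto_rsrp_dbm=-100.0))   # -> femto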

22 citations

Proceedings ArticleDOI
25 Nov 2013
TL;DR: An enhanced cell selection scheme is proposed that considers the scheduling opportunities available at the femtocell by analyzing the current load and femtocell-specific constraints such as maximum user count and minimum signal strength.
Abstract: To improve the benefits of macrocell offloading in femto-assisted cellular networks, the concept of cell biasing has been proposed. Cell biasing attempts to offload users from the macrocell by modifying the cell selection criteria. This is done by adding a positive bias to the measured signal from femtocells before performing cell selection. While the macrocell-offloaded users may experience lower signal quality from femtocells, they benefit from receiving higher bandwidth. From the users' point of view, it is desirable that user equipments receive the highest possible bitrate from the target base station. However, cell biasing only considers received signal strength to make cell selection decisions. The bitrate received at a user equipment depends on the available bandwidth and the user load at the target base station. In this paper, we propose an enhanced cell selection scheme that considers the scheduling opportunities available at the femtocell by analyzing the current load and femtocell-specific constraints such as maximum user count and minimum signal strength. The obtained results show that our work provides the best performance in terms of both system throughput and energy efficiency among all compared cell selection schemes.
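
A small sketch of what a load-aware selection rule along these lines could look like: the femtocell is chosen only if it satisfies the minimum-signal and maximum-user-count constraints and offers a higher estimated per-user bitrate. Scoring by equally shared bandwidth is an illustrative interpretation of "scheduling opportunities", not the paper's exact criterion.

import math

def estimated_bitrate(bandwidth_hz, n_users, sinr_db):
    """Rough per-user bitrate if the cell shares its bandwidth equally
    among its current users plus the arriving one."""
    return (bandwidth_hz / (n_users + 1)) * math.log2(1 + 10 ** (sinr_db / 10))

def enhanced_select(macro, femto, min_rss_dbm=-105.0, max_femto_users=8):
    """Pick the cell offering the higher estimated bitrate, subject to the
    femtocell's minimum-signal and maximum-user-count constraints.
    Each argument is a dict with keys: rss_dbm, sinr_db, bandwidth_hz, n_users."""
    femto_ok = femto["rss_dbm"] >= min_rss_dbm and femto["n_users"] < max_femto_users
    r_macro = estimated_bitrate(macro["bandwidth_hz"], macro["n_users"], macro["sinr_db"])
    r_femto = estimated_bitrate(femto["bandwidth_hz"], femto["n_users"], femto["sinr_db"])
    return "femto" if femto_ok and r_femto >= r_macro else "macro"

macro = {"rss_dbm": -90, "sinr_db": 12, "bandwidth_hz": 20e6, "n_users": 60}
femto = {"rss_dbm": -98, "sinr_db": 6, "bandwidth_hz": 10e6, "n_users": 2}
print(enhanced_select(macro, femto))   # lightly loaded femtocell wins despite weaker signal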

17 citations


Cited by
Journal ArticleDOI
TL;DR: A deep recurrent neural network-based algorithm is proposed to solve the energy efficient resource allocation (RA) problem for the NOMA-based heterogeneous IoT with fast convergence and low computational complexity.
Abstract: The Internet of Things (IoT) has attracted significant attention in fifth generation mobile networks and smart cities. However, considering the large number of connectivity demands, it is vital to improve the spectrum efficiency (SE) of the IoT with an affordable power consumption. To improve the SE, the non-orthogonal multiple access (NOMA) technology has been proposed, accommodating multiple users in the same spectrum. As a result, in this paper, an energy efficient resource allocation (RA) problem is introduced for the NOMA-based heterogeneous IoT. First, we assume the successive interference cancelation (SIC) is imperfect, as in practical implementations. Then, based on the analysis method for cognitive radio networks, we present a stepwise RA scheme for the mobile users and the IoT users with mutual interference management. Third, we propose a deep recurrent neural network-based algorithm to solve the problem optimally and rapidly. Moreover, a priority and rate-demand based user scheduling method is supplemented to coordinate the access of the heterogeneous users with the limited radio resources. Finally, the simulation results verify that the deep learning-based scheme is able to provide optimal RA results for the NOMA heterogeneous IoT with fast convergence and low computational complexity. Compared with a conventional orthogonal frequency division multiple access system, the NOMA system with imperfect SIC yields better performance on the SE and the scale of connectivity, at the cost of higher power consumption and lower energy efficiency.
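
One building block of such an RA problem is the rate model for a NOMA pair under imperfect SIC; the sketch below computes the two users' achievable rates with a residual-interference factor. The residual factor, powers, and gains are assumed illustrative values, not taken from the paper.

import math

def noma_pair_rates(p_near, p_far, g_near, g_far, noise, sic_residual=0.05):
    """Achievable rates (bit/s/Hz) of a two-user downlink NOMA pair.

    The far (weak) user decodes its signal treating the near user's as noise.
    The near (strong) user first cancels the far user's signal, but a fraction
    'sic_residual' of that power remains because SIC is imperfect.
    Illustrative model only; the paper's heterogeneous-IoT setup is richer.
    """
    sinr_far = p_far * g_far / (p_near * g_far + noise)
    sinr_near = p_near * g_near / (sic_residual * p_far * g_near + noise)
    return math.log2(1 + sinr_near), math.log2(1 + sinr_far)

# Hypothetical powers/gains: the strong user gets less power, the weak user more.
print(noma_pair_rates(p_near=0.2, p_far=0.8, g_near=1e-7, g_far=1e-9, noise=1e-10))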

236 citations

Journal ArticleDOI
TL;DR: A profound view of IoT and NBIoT is presented, covering their technical features, resource allocation, energy-efficiency techniques, and applications; two novel energy-efficient techniques, "zonal thermal pattern analysis" and "energy-efficient adaptive health monitoring system", are proposed towards green IoT.
Abstract: The advancement of technologies over the years has positioned the Internet of Things (IoT) to exploit untapped information and communication technology opportunities. It is anticipated that IoT will handle a gigantic network of billions of devices to deliver a plethora of smart services to users. Undoubtedly, this will make our lives more resourceful, but at the cost of high energy consumption and carbon footprint. Consequently, there is a high demand for green communication to reduce energy consumption, which requires optimal resource availability and controlled power levels. In contrast to this, IoT devices are constrained in terms of resources: memory, power, and computation. Low power wide area (LPWA) technology is a response to the need for efficient utilization of the power resource, as it can provide low power connectivity to a huge number of devices spread over wide geographical areas at low cost. Various LPWA technologies, such as LoRa and SigFox, exist in the market, offering proficient solutions to users. However, in order to avoid the need for new infrastructure (such as base stations) required by proprietary technologies, a new cellular-based licensed technology, narrowband IoT (NBIoT), was introduced by 3GPP in Rel-13. NBIoT is a strong candidate for the LPWA market because of characteristics such as enhanced indoor coverage, low power consumption, latency insensitivity, and massive connection support. This survey presents a profound view of IoT and NBIoT, covering their technical features, resource allocation, energy-efficiency techniques, and applications. The challenges that hinder NBIoT's path to success are also identified and discussed. In this paper, two novel energy-efficient techniques, "zonal thermal pattern analysis" and "energy-efficient adaptive health monitoring system", are proposed towards green IoT.

214 citations

Journal ArticleDOI
TL;DR: This paper formulates an energy optimization problem for offloading, which aims at minimizing the overall energy consumption at all system entities and takes into account the constraints from both computation capabilities and the service delay requirement, and develops an artificial fish swarm algorithm based scheme to solve it.
Abstract: Mobile edge computing has been proposed in recent years to offload computation tasks from user equipments (UEs) to the network edge to break the hardware limitations and resource constraints at UEs. Although there have been some existing works on computation offloading in 5G, most of them fail to take into account the unique properties of 5G in their scheme design. In this paper, we consider a small-cell network architecture for task offloading. In order to achieve energy efficiency, we model the energy consumption of offloading from both the task computation and communication aspects. In addition, transmission scheduling is carried out over both the fronthaul and backhaul links. We first formulate an energy optimization problem for offloading, which aims at minimizing the overall energy consumption at all system entities and takes into account the constraints from both computation capabilities and the service delay requirement. We then develop an artificial fish swarm algorithm based scheme to solve the energy optimization problem. Furthermore, the global convergence property of our scheme is formally proven. Finally, various simulation results demonstrate the efficiency of our scheme.
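
For readers unfamiliar with the optimizer named above, the following is a generic, minimal artificial fish swarm sketch (prey, swarm, and follow behaviours) applied to a toy objective; the paper's actual scheme is tailored to its offloading energy model and constraints, which are not reproduced here.

import numpy as np

rng = np.random.default_rng(1)

def afsa_minimize(f, dim=2, n_fish=20, visual=1.0, step=0.3,
                  crowd=0.6, try_num=5, iters=200, bounds=(-5.0, 5.0)):
    """Minimal artificial fish swarm optimiser (prey / swarm / follow behaviours)."""
    lo, hi = bounds
    fish = rng.uniform(lo, hi, size=(n_fish, dim))
    cost = np.array([f(x) for x in fish])

    def move_towards(x, target):
        d = target - x
        n = np.linalg.norm(d)
        if n < 1e-12:
            return x
        return np.clip(x + step * rng.random() * d / n, lo, hi)

    def prey(i):
        # Try a few random points within visual range; move towards a better one.
        for _ in range(try_num):
            cand = np.clip(fish[i] + visual * rng.uniform(-1, 1, dim), lo, hi)
            if f(cand) < cost[i]:
                return move_towards(fish[i], cand)
        return np.clip(fish[i] + step * rng.uniform(-1, 1, dim), lo, hi)

    for _ in range(iters):
        for i in range(n_fish):
            dists = np.linalg.norm(fish - fish[i], axis=1)
            mates = np.where((dists < visual) & (dists > 0))[0]
            new_x = None
            if len(mates) > 0:
                center = fish[mates].mean(axis=0)
                best_mate = mates[np.argmin(cost[mates])]
                # Swarm: move to the neighbourhood centre if it is better and not crowded.
                if f(center) < cost[i] and len(mates) / n_fish < crowd:
                    new_x = move_towards(fish[i], center)
                # Follow: otherwise chase the best neighbour under the same condition.
                elif cost[best_mate] < cost[i] and len(mates) / n_fish < crowd:
                    new_x = move_towards(fish[i], fish[best_mate])
            if new_x is None:
                new_x = prey(i)
            fish[i], cost[i] = new_x, f(new_x)
    best = np.argmin(cost)
    return fish[best], cost[best]

# Toy quadratic objective standing in for the paper's offloading energy model.
x_best, e_best = afsa_minimize(lambda x: np.sum(x**2))
print(x_best, e_best)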

196 citations

Journal Article
TL;DR: The advances in photonic device technologies are bringing ultra-high-bit-rate networking, at speeds towards 100 Gb/s and beyond, much closer to practical reality, making it possible now to envisage the use of OTDM techniques not just in the highest layers of national and international networks, but also much closer to the user.
Abstract: The advances in photonic device technologies are bringing ultra-high-bit-rate networking, at speeds towards 100 Gb/s and beyond, much closer to practical reality. It is increasingly likely that in the longer term ultrafast optical time-division techniques, together with wavelength multiplexing, will be used in networks at all levels, from the transcontinental backbone to the desktop. Examples of devices include a subpicosecond clock source packaged inside a laptop personal computer and an OTDM switch on a single semiconductor chip, both produced at HHI. Advances similar to these make it possible now to envisage the use of OTDM techniques, not just in the highest layers of national and international networks, but also much closer to the user, such as the world-first demonstrations at BT Laboratories of a 40 Gb/s TDMA LAN and a 100 Gb/s packet self-routing switch for multiprocessor interconnection. Ultrafast networks might even provide the interconnection backplane inside future desktop routers and servers with massive throughput.

168 citations

Journal ArticleDOI
TL;DR: This paper studies the energy-efficient workload offloading problem and proposes a low-complexity distributed solution based on consensus alternating direction method of multipliers, which is validated based on a realistic road topology of Beijing, China.
Abstract: In vehicular networks, in-vehicle user equipment (UE) with limited battery capacity can achieve opportunistic energy saving by offloading energy-hungry workloads to vehicular edge computing nodes via vehicle-to-infrastructure links. However, how to determine the optimal portion of workload to be offloaded based on the dynamic states of energy consumption and latency in local computing, data transmission, workload execution and handover, is still an open issue. In this paper, we study the energy-efficient workload offloading problem and propose a low-complexity distributed solution based on consensus alternating direction method of multipliers. By incorporating a set of local variables for each UE, the original problem, in which the optimization variables of UEs are coupled together, is transformed into an equivalent general consensus problem with separable objectives and constraints. The consensus problem can be further decomposed into a bunch of subproblems, which are distributed across UEs and solved in parallel simultaneously. Finally, the proposed solution is validated based on a realistic road topology of Beijing, China. Simulation results have demonstrated that significant energy saving gain can be achieved by the proposed algorithm.
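
The consensus ADMM pattern described above can be illustrated on a toy problem: each UE's local objective is a simple quadratic standing in for its offloading energy cost, the local copies are updated in parallel, and the consensus and dual updates pull them together. The objectives and values below are placeholders, not the paper's model.

import numpy as np

def consensus_admm(a, b, rho=1.0, iters=100):
    """Solve min_z sum_i 0.5*a_i*(z - b_i)^2 by consensus ADMM.

    Each term plays the role of one UE's local objective; x_i are the local
    copies, z is the consensus variable, u_i the scaled dual variables.
    Placeholder quadratic objectives, not the paper's offloading energy model.
    """
    n = len(a)
    x = np.zeros(n)
    u = np.zeros(n)
    z = 0.0
    for _ in range(iters):
        # Local updates (closed form for a quadratic), solvable in parallel per UE.
        x = (a * b + rho * (z - u)) / (a + rho)
        # Consensus update: average of local copies plus duals.
        z = np.mean(x + u)
        # Dual updates push every local copy toward the consensus value.
        u = u + x - z
    return z

a = np.array([1.0, 2.0, 4.0])
b = np.array([0.0, 1.0, 3.0])
print(consensus_admm(a, b))          # ADMM estimate
print(np.sum(a * b) / np.sum(a))     # closed-form optimum for comparison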

159 citations