Author

Thomas Valerrian Pasca

Bio: Thomas Valerrian Pasca is an academic researcher from the Indian Institute of Technology, Hyderabad. The author has contributed to research in topics: Cellular network & Throughput. The author has an h-index of 4, and has co-authored 7 publications receiving 58 citations.

Papers
Journal ArticleDOI
TL;DR: This article examines two typical deployment scenarios of C-IoT with EC, identifies the crucial challenges, and provides solutions, including a fast RACH procedure that enables millions of devices to access the cellular network.
Abstract: The cellular Internet of Things (C-IoT) with edge computing (EC) is one of the most promising technologies in fifth-generation (5G) cellular systems, enabling everything to be connected to the Internet. C-IoT devices are low powered, support long transmission ranges, and are developed to enable IoT in both dense and remote areas. Machine-type communication (MTC)/C-IoT devices need ubiquitous connectivity, which is provided by the cellular infrastructure. As the density of these devices increases, it becomes more challenging to connect them to the cellular network within the stipulated amount of time. In this article, we look at two typical deployment scenarios of C-IoT with EC, identify the crucial challenges, and provide solutions. One such solution concerns the random access channel (RACH) procedure, which must allow millions of devices to access the cellular network. Our emphasis is on a fast RACH procedure that lets C-IoT devices complete a successful RACH in fewer attempts, thereby saving power.
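The power argument in the abstract can be made concrete with a toy Monte Carlo model: if each RACH attempt succeeds with some probability (collisions with other contending devices cause failures), the expected number of attempts, and hence the device's transmit energy, grows as that probability drops. This is an illustrative sketch, not the article's model; the per-attempt success probability and the retry cap are assumed parameters.

```python
import random

def rach_attempts(p_success, max_attempts=10, rng=random.random):
    """Count preamble attempts until the RACH succeeds.

    Each attempt succeeds independently with probability p_success
    (collisions with other contending devices make an attempt fail).
    Fewer attempts means less transmit energy spent by the device.
    """
    for attempt in range(1, max_attempts + 1):
        if rng() < p_success:
            return attempt
    return max_attempts  # device gives up after max_attempts

def mean_attempts(p_success, trials=100_000, seed=1):
    """Average attempt count over many devices (Monte Carlo)."""
    rng = random.Random(seed)
    return sum(rach_attempts(p_success, rng=rng.random)
               for _ in range(trials)) / trials
```

With a 50% per-attempt success probability the mean is close to the geometric-distribution value of 2 attempts; halving the success probability roughly doubles the attempts, and with them the energy spent on random access.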

32 citations

Proceedings ArticleDOI
03 Apr 2016
TL;DR: This paper proposes a novel RACH mechanism that allows millions of C-IoT devices to associate with the base station within the existing 4G LTE framework, and shows that the proposed mechanism is faster and saves power compared to the existing 3GPP extended access barring mechanism under perfect synchronization.
Abstract: Cellular Internet of Things (C-IoT) is one of the emerging 5G technologies. 4G LTE, the most mature cellular technology to date, is a natural candidate for serving C-IoT devices. In the future, millions of C-IoT devices will be deployed in the coverage of a single 4G LTE base station. However, the existing random access (RACH) mechanism in 4G LTE is not designed to connect millions of devices. Hence, in this paper, we propose a novel RACH mechanism that allows millions of C-IoT devices to associate with the base station within the existing 4G LTE framework. 3GPP has proposed an extended access barring mechanism to solve this problem for machine-type communication (MTC). We compare the performance of the proposed RACH mechanism with the existing 3GPP extended access barring mechanism through analysis. Further, through simulation results, we show that the proposed mechanism is faster and saves power compared to the existing 3GPP extended access barring mechanism under perfect synchronization.
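For context, the 3GPP access-barring baseline the paper compares against can be sketched as follows. This is a deliberately simplified, single-factor check in the style of ACB/EAB: the randomized back-off follows the barring-time rule broadcast in LTE system information, while the per-access-class EAB bitmap is omitted. The barring factor and barring time are assumed broadcast parameters, not values from the paper.

```python
import random

def access_barring_delay(barring_factor, barring_time_s, rng):
    """Simplified access-barring check in the style of 3GPP ACB/EAB.

    Returns 0.0 if the device may attempt RACH immediately, otherwise
    the randomized back-off (seconds) it must wait before re-checking.
    The back-off uses the (0.7 + 0.6 * rand) * barringTime rule from
    the LTE access-barring procedure.
    """
    if rng.random() < barring_factor:
        return 0.0  # draw below the barring factor: access allowed
    return (0.7 + 0.6 * rng.random()) * barring_time_s
```

Over many checks, roughly a `barring_factor` fraction of devices get through immediately, and every barred device waits between 0.7x and 1.3x the broadcast barring time; this spreading of retries in time is what trades access delay for reduced RACH congestion.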

12 citations

Journal ArticleDOI
TL;DR: An uplink packet scheduling algorithm, called the enhanced class-based dynamic priority (E-CBDP) algorithm, is proposed. It ensures the QoS of H2H traffic by giving it priority over M2M traffic, but optimizes the QoS of M2M traffic by pushing the scheduling of H2H traffic to its delay boundaries.

10 citations

Journal ArticleDOI
TL;DR: An enhanced version of the CBDP algorithm is proposed, named the Non-SDT-CBDP (NSDT-CBDP) algorithm, which schedules resources only to H2H and NSDT-M2M flows, while SDT-M2M flows are piggybacked with Message 3 (MSG-3) of a lightweight EPS bearer establishment procedure.
Abstract: Large coverage and global connectivity make cellular networks the preferred choice for the Internet of Things (IoT). Machine-to-machine (M2M) communications deal with the communication and networking aspects of IoT. Since cellular networks are optimized to support human-to-human (H2H) communication (e.g., voice calls, Internet), incorporating M2M communication may affect the QoS of the former. Also, the large number of M2M devices incurs significant signaling overhead on both the core network (CN) and the radio access network (RAN). In LTE-A networks, the EPS bearer establishment procedure to connect a device to the PDN gateway involves the exchange of several signaling messages between the device and the network. M2M devices mostly generate low-volume, infrequent traffic, so it is very uneconomical to have a rigorous signaling exchange to send a few bytes of data. In this paper, we first study the class-based dynamic priority (CBDP) algorithm of Giluka et al. (in: Proceedings of IEEE WF-IoT, 2014), which is a delay-aware radio resource scheduling algorithm to support uplink M2M traffic with minimal effect on the QoS of uplink H2H traffic. Further, we model the optimal behavior of the CBDP algorithm and compare it with its behavior in practical scenarios. Apart from this, we propose a lightweight EPS bearer establishment procedure to be followed by M2M devices sending small data, in which the M2M small data is piggybacked with a control message. Further, in the same procedure, redundant signaling messages for small data transmission (SDT) are carefully removed while preserving the security aspects of the system. To secure the small data transmitted, a new technique of replacing authentication with confidentiality is conceived.
With this, we propose an enhanced version of the CBDP algorithm, named the Non-SDT-CBDP (NSDT-CBDP) algorithm, which schedules resources only to H2H and NSDT-M2M flows, while SDT-M2M flows are piggybacked with Message 3 (MSG-3) of the lightweight EPS bearer establishment procedure. The simulation results show a performance gain of NSDT-CBDP over CBDP, especially for class-3 M2M and class-4 H2H: a 25% reduction in packet loss ratio for class-3 M2M; reductions in end-to-end delay of 11% and 19% for class-4 H2H and class-3 M2M, respectively; and throughput gains of 27% and 19% for class-4 H2H and class-3 M2M, respectively. Apart from this, the NSDT-CBDP algorithm is able to allocate 12% more RBs to H2H devices in comparison to the CBDP algorithm.
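The delay-aware scheduling idea behind CBDP-style algorithms can be illustrated with a minimal sketch. This is not the authors' exact CBDP/NSDT-CBDP metric: it simply ranks flows by how close each head-of-line packet is to its class delay boundary, which is the mechanism that lets H2H be served near its (tight) boundary while M2M with looser budgets fills the remaining resource blocks. Flow names, budgets, and the urgency ratio are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    hol_delay_ms: float      # waiting time of the head-of-line packet
    delay_budget_ms: float   # QoS delay boundary for this traffic class

def schedule(flows, n_rbs):
    """Illustrative delay-aware dynamic priority scheduler.

    Serve flows in decreasing order of urgency, defined as the fraction
    of the delay budget already consumed by the head-of-line packet,
    and grant resource blocks to the n_rbs most urgent flows.
    """
    ranked = sorted(flows,
                    key=lambda f: f.hol_delay_ms / f.delay_budget_ms,
                    reverse=True)
    return [f.name for f in ranked[:n_rbs]]
```

In this toy setup an M2M flow at 90% of its budget preempts an H2H flow at 40% of its budget, even though the H2H budget is tighter in absolute terms, which is the "push H2H toward its delay boundary" behavior described above.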

6 citations

Dissertation
01 Jan 2019
TL;DR: This thesis addresses problems pertaining to the dense deployment of LWIP nodes, and proposes a virtual congestion control mechanism that not only improves the throughput of a flow by reducing the number of unnecessary DUP-ACKs delivered to the TCP sender, but also sends Boost ACKs in order to rapidly grow the congestion window and reap the aggregation benefits of heterogeneous links.
Abstract: A smartphone generates approximately 1,614 MB of data per month, which is 48 times the data generated by a typical basic-feature cell phone. Cisco forecasts that mobile data traffic will continue to grow, reaching 49 exabytes per month by 2021. However, telecommunication service providers/operators face many challenges in improving cellular network capacity to match these ever-increasing data demands, due to low, almost flat Average Revenue Per User (ARPU) and low Return on Investment (RoI). The spectrum resource crunch and the licensing requirements for operation in cellular bands further complicate supporting and managing the network. One of the most vital solutions to these challenges is to leverage the integration benefits of cellular networks with the unlicensed operation of Wi-Fi networks. A closer level of cellular and Wi-Fi coupling/interworking improves Quality of Service (QoS) through unified connection management of user devices (UEs). It also offloads a significant portion of user traffic from the cellular Base Station (BS) to the Wi-Fi Access Point (AP). In this thesis, we consider the cellular network to be Long Term Evolution (LTE), popularly known as 4G-LTE, for interworking with Wi-Fi. The Third Generation Partnership Project (3GPP) defined various LTE and Wi-Fi interworking architectures from Rel-8 to Rel-11. Because of the limitations of these legacy LTE Wi-Fi interworking solutions, 3GPP proposed Radio Level Integration (RLI) architectures to enhance flow mobility and to react quickly to channel dynamics. An RLI node encompasses a link-level connection between LTE and Wi-Fi. The first problems addressed in this thesis concern (i) dense Small cell deployments, (ii) meeting Guaranteed Bit Rate (GBR) requirements of the users, including those experiencing poor Signal to Interference plus Noise Ratio (SINR), and (iii) dynamic steering of flows across LTE and Wi-Fi links to maximize system throughput. The second important problem addressed is uplink traffic steering.
To enable efficient uplink traffic steering in the LWIP system, this thesis proposes a Network Coordination Function (NCF). The NCF is realized at the LWIP node and encompasses four different uplink traffic steering algorithms for efficient utilization of Wi-Fi resources in the LWIP system. The NCF lets the network take intelligent decisions, rather than individual UEs deciding whether to steer uplink traffic onto the LTE link or the Wi-Fi link. The NCF algorithms work by leveraging the availability of LTE as the anchor to improve the channel utilization of Wi-Fi. The third important problem is enabling packet-level steering in LWIP. When the data rates of the LTE and Wi-Fi links are incomparable, steering packets across the links creates problems for TCP traffic. When packets are received Out-of-Order (OOO) at the TCP receiver due to the variation in delay experienced on each link, DUPlicate ACKnowledgements (DUP-ACKs) are generated. These unnecessary DUP-ACKs adversely affect TCP congestion window growth and thereby lead to poor TCP performance. This thesis addresses the problem by proposing a virtual congestion control mechanism (VIrtual congeStion control wIth Boost acknowLedgEment, VISIBLE). The proposed mechanism not only improves the throughput of a flow by reducing the number of unnecessary DUP-ACKs delivered to the TCP sender, but also sends Boost ACKs in order to rapidly grow the congestion window and reap the aggregation benefits of heterogeneous links. The fourth problem considered is the placement of LWIP nodes; this thesis addresses problems pertaining to their dense deployment. LWIP deployment can be realized in colocated and non-colocated fashion.
The placement of LWIP nodes is done with the following objectives: (i) minimizing the number of LWIP nodes deployed without any coverage holes, (ii) maximizing SINR in every sub-region of a building, and (iii) minimizing the energy spent by UEs and LWIP nodes. Finally, prototypes of the RLI architectures are presented (i.e., LWIP and LWA testbeds). The prototypes are developed using the open source LTE platform OpenAirInterface (OAI) and commercial off-the-shelf hardware components. The developed LWIP prototype is made to work with a commercial UE (Nexus 5). The LWA prototype requires modification of the UE protocol stack, hence it is realized using OAI-UE. The developed prototypes are coupled with the legacy multipath protocol MPTCP to investigate the coupling benefits.
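The DUP-ACK suppression idea behind mechanisms like VISIBLE can be sketched in a few lines. The details here are hypothetical (the thesis's actual mechanism also issues Boost ACKs, which this sketch omits): an in-network filter passes at most one duplicate of each cumulative ACK, so that out-of-order delivery across the LTE and Wi-Fi links never produces the three duplicate ACKs that trigger fast retransmit and halve the sender's congestion window.

```python
def filter_dupacks(acks, max_dup=2):
    """Suppress duplicate cumulative ACKs caused by reordering.

    Passes each new ACK number and at most one immediate duplicate of
    it; further duplicates are dropped, so the TCP sender never sees
    the 3 DUP-ACKs that would trigger a spurious fast retransmit.
    """
    out, last, dups = [], None, 0
    for ack in acks:
        if ack == last:
            dups += 1
            if dups >= max_dup:
                continue  # drop: would push sender toward fast retransmit
        else:
            last, dups = ack, 0
        out.append(ack)
    return out
```

A real implementation would also have to let genuine-loss DUP-ACKs through (e.g., by timing out the suppression), since dropping all duplicates would disable loss recovery entirely; this sketch only shows the reordering case.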

2 citations


Cited by
01 Jan 2013
TL;DR: Based on the experience of several industrial trials on smart grids with communication infrastructures, it is expected that traditional carbon-fuel-based power plants can cooperate with emerging distributed renewable energy sources such as wind and solar to reduce carbon fuel consumption and the consequent emission of greenhouse gases such as carbon dioxide.
Abstract: A communication infrastructure is essential to the success of the emerging smart grid. A scalable and pervasive communication infrastructure is crucial to both the construction and the operation of a smart grid. In this paper, we present the background and motivation of communication infrastructures in smart grid systems. We also summarize the major requirements that smart grid communications must meet. From the experience of several industrial trials on smart grids with communication infrastructures, we expect that traditional carbon-fuel-based power plants can cooperate with emerging distributed renewable energy sources such as wind and solar to reduce carbon fuel consumption and the consequent emission of greenhouse gases such as carbon dioxide. Consumers can minimize their energy expenses by adjusting their intelligent home appliance operations to avoid peak hours and utilize renewable energy instead. We further explore the challenges for a communication infrastructure as part of a complex smart grid system. Since a smart grid system may have millions of consumers and devices, the demands on its reliability and security are extremely critical. Through a communication infrastructure, a smart grid can improve power reliability and quality and help eliminate electricity blackouts. Security is a challenging issue, since ongoing smart grid systems face increasing vulnerabilities as more and more automation, remote monitoring/controlling, and supervision entities are interconnected.

1,036 citations

Journal ArticleDOI
TL;DR: This paper aims to validate the efficiency and resourcefulness of EC. After analyzing the different network properties in the system, the results show that EC systems perform better than cloud computing systems.
Abstract: A centralized infrastructure system carries out existing data analytics and decision-making processes from our current highly virtualized platform of wireless networks and Internet of Things (IoT) applications. There is a high possibility that these existing methods will encounter more challenges and issues in relation to network dynamics, resulting in high overhead in network response time and leading to latency and traffic. In order to avoid these problems and achieve an optimum level of resource utilization, a new paradigm called edge computing (EC) is proposed to pave the way for the evolution of new-age applications and services. With the integration of EC, processing capabilities are pushed to the edge of network devices such as smartphones, sensor nodes, wearables, and on-board units, where data analytics and knowledge generation are performed, removing the necessity for a centralized system. Many IoT applications, such as smart cities, the smart grid, smart traffic lights, and smart vehicles, are rapidly upgrading to EC, significantly improving response time as well as conserving network resources. Notwithstanding the fact that EC shifts the workload from a centralized cloud to the edge, the analogy between EC and the cloud pertaining to factors such as resource management and computation optimization is still open to research. Hence, this paper aims to validate the efficiency and resourcefulness of EC. We extensively survey edge systems and present a comparative study with cloud computing systems. After analyzing the different network properties in the system, the results show that EC systems perform better than cloud computing systems. Finally, the research challenges in implementing an EC system and future research directions are discussed.

327 citations

Journal ArticleDOI
Jiming Chen, Kang Hu, Wang Qi, Yuyi Sun, Zhiguo Shi, Shibo He
TL;DR: A system that includes NB devices, an IoT cloud platform, an application server, and a user app is designed that provides an easy approach to academic research as well as commercial applications.
Abstract: Recently, narrowband Internet of Things (NB-IoT), one of the most promising low power wide area (LPWA) technologies, has attracted much attention from both academia and industry. It has great potential to meet the huge demand for machine-type communications in the era of IoT. To facilitate research on and application of NB-IoT, in this paper, we design a system that includes NB devices, an IoT cloud platform, an application server, and a user app. The core component of the system is a development board that integrates an NB-IoT communication module, a subscriber identification module, a micro-controller unit, and power management modules. We also provide a firmware design for NB device wake-up, data sensing, computing, and communication, and the IoT cloud configuration for data storage and analysis. We further introduce a framework on how to apply the proposed system to specific applications. The proposed system provides an easy approach to academic research as well as commercial applications.

233 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide a traffic-aware spatio-temporal model to analyze RACH in cellular-based mIoT networks, where the physical layer network is modeled and analyzed based on stochastic geometry in the spatial domain, and the queue evolution is analyzed in the time domain.
Abstract: Massive Internet of Things (mIoT) has provided an auspicious opportunity to build powerful and ubiquitous connections that face a plethora of new challenges, where cellular networks are potential solutions due to their high scalability, reliability, and efficiency. The random access channel (RACH) procedure is the first step of connection establishment between IoT devices and base stations in the cellular-based mIoT network, where modeling the interactions between static properties of the physical layer network and dynamic properties of queue evolving in each IoT device are challenging. To tackle this, we provide a novel traffic-aware spatio-temporal model to analyze RACH in cellular-based mIoT networks, where the physical layer network is modeled and analyzed based on stochastic geometry in the spatial domain, and the queue evolution is analyzed based on probability theory in the time domain. For performance evaluation, we derive the exact expressions for the preamble transmission success probabilities of a randomly chosen IoT device with different RACH schemes in each time slot, which offer insights into the effectiveness of each RACH scheme. Our derived analytical results are verified by the realistic simulations capturing the evolution of packets in each IoT device. This mathematical model and the analytical framework can be applied to evaluate the performance of other types of RACH schemes in the cellular-based networks by simply integrating its preamble transmission principle.
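The collision component of the preamble transmission success probability discussed above can be illustrated with a toy model: each of N contending devices uniformly picks one of K orthogonal preambles, and a tagged device's attempt survives the contention step only if its choice is unique. This sketch covers only the collision term, with N and K as assumed parameters; the article's stochastic-geometry analysis additionally captures SINR, spatial distribution, and queue dynamics.

```python
import random

def preamble_success_prob(n_devices, n_preambles):
    """Closed-form collision-only success probability.

    A tagged device succeeds iff none of the other (n_devices - 1)
    contenders picks the same preamble out of n_preambles equally
    likely choices.
    """
    return (1.0 - 1.0 / n_preambles) ** (n_devices - 1)

def simulate(n_devices, n_preambles, trials=20_000, seed=7):
    """Monte Carlo check of the closed form above."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        choices = [rng.randrange(n_preambles) for _ in range(n_devices)]
        if choices.count(choices[0]) == 1:  # tagged device's pick is unique
            wins += 1
    return wins / trials
```

With the 54 contention preambles typical of an LTE cell and 10 simultaneous contenders, the collision-only success probability is about 0.84 per slot, and it decays geometrically as the number of contenders grows, which is why massive-IoT RACH analysis must track the arrival and queue dynamics rather than a single slot.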

93 citations

Journal ArticleDOI
TL;DR: A step-by-step tutorial discussing the development of MTC design across different releases of LTE and the newly introduced user equipment categories, namely, MTC category (CAT-M) and narrowband IoT category ( CAT-N).
Abstract: Human-generated information has been the main interest of wireless communication technology design for decades. However, we are currently witnessing the emergence of an entirely different paradigm of communication introduced by machines, hence the name machine-type communication (MTC). This paradigm arises as a result of the new applications included in the Internet-of-Things (IoT) framework. Among the enabling technologies of the IoT, cellular-based communication is the most promising and most efficient. This is justified by the currently well-developed and mature radio access networks, along with the large capacities and flexibility of the offered data rates to support a large variety of applications. On the other hand, several radio-access-network groups have put effort into optimizing the 3GPP LTE standard to accommodate the new challenges by introducing new communication categories, paving the way to support machine-to-machine communication within the IoT framework. In this paper, we provide a step-by-step tutorial discussing the development of MTC design across different releases of LTE and the newly introduced user equipment categories, namely, the MTC category (CAT-M) and the narrowband IoT category (CAT-N). We start by briefly discussing the different physical channels of legacy LTE. Then we provide a comprehensive and up-to-date background on the most recent standards activities specifying CAT-M and CAT-N technologies. We also emphasize some of the necessary concepts used in the new specifications, such as the narrowband concept used in CAT-M and frequency hopping. Finally, we identify and discuss some of the open research challenges related to the implementation of the new technologies in real-life scenarios.

87 citations