Author

Krishna P. Kadiyala

Bio: Krishna P. Kadiyala is an academic researcher from the University of Texas at Dallas. The author has contributed to research in the topics Edge computing and Cloudlet. The author has an h-index of 3 and has co-authored 4 publications receiving 542 citations.

Papers
Journal ArticleDOI
TL;DR: This paper provides a tutorial on fog computing and its related computing paradigms, including their similarities and differences, and provides a taxonomy of research topics in fog computing.

783 citations

Journal ArticleDOI
TL;DR: In this paper, the authors provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences, and provide a taxonomy of research topics in fog computing.
Abstract: With the Internet of Things (IoT) becoming part of our daily life and our environment, we expect rapid growth in the number of connected devices. IoT is expected to connect billions of devices and humans to bring promising advantages for us. With this growth, fog computing, along with its related edge computing paradigms, such as multi-access edge computing (MEC) and cloudlet, are seen as promising solutions for handling the large volume of security-critical and time-sensitive data that is being produced by the IoT. In this paper, we first provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences. Next, we provide a taxonomy of research topics in fog computing, and through a comprehensive survey, we summarize and categorize the efforts on fog computing and its related computing paradigms. Finally, we provide challenges and future directions for research in fog computing.

360 citations

Proceedings ArticleDOI
01 Nov 2017
TL;DR: This paper formulates the optimization problem, shows that it is related to the makespan scheduling problem on unrelated parallel machines, and proposes heuristics to solve it; simulations show that the maximum link utilization on the egress links can be lower than under traditional HPR.
Abstract: Egress selection in an Internet Service Provider (ISP) is the process of selecting an egress router to route interdomain traffic across the ISP such that a traffic engineering objective is achieved. In traditional ISP networks, traffic through the ISP is carried through the network and exits via an egress that is closest to the source in an attempt to minimize network resources for transit traffic. This exit strategy is known as Hot Potato Routing (HPR). The emerging field of Software-Defined Networking (SDN) has opened up many possibilities and promised to bring new flexibility to the rigid traditional paradigm of networking. In an ISP network, however, completely replacing legacy network devices with SDN nodes is neither simple nor straightforward. This has led to the idea of incremental and selective deployment of SDN nodes in an ISP network. Such a hybrid network gives us control over traffic flows that pass through the SDN nodes without requiring extensive changes to an existing ISP network. In this paper, we look at the problem of choosing an optimal set of egress routers to route inter-domain transit traffic in a hybrid SDN network such that the maximum link utilization of the egress links is minimized. We formulate the optimization problem, show that it is related to the makespan scheduling problem of unrelated parallel machines, and propose heuristics to solve it. We perform simulations to evaluate our heuristic on a real ISP topology and show that even with a small number of SDN nodes in the network, the maximum link utilization on the egress links can be lower than that of traditional HPR.

9 citations
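The paper above relates hybrid-SDN egress selection to makespan scheduling on unrelated parallel machines and attacks it with greedy heuristics. As a rough illustration of that flavor of heuristic (not the paper's actual algorithm), the Python sketch below assigns transit demands to egress routers one at a time, each to the egress whose link would remain least utilized; the demand and capacity structures are hypothetical placeholders.

```python
def greedy_egress_assignment(demands, capacity):
    """List-scheduling-style heuristic: place transit demands on egress routers
    so that the maximum egress-link utilization stays low.

    demands:  list of (volume, allowed_egresses); allowed_egresses is the set of
              egresses a demand may use (a single HPR egress for flows that never
              cross an SDN node, several egresses for flows that do).
    capacity: dict mapping each egress router to its egress-link capacity.
    """
    load = {e: 0.0 for e in capacity}
    assignment = {}
    # Place the largest demands first, each on the egress that ends up least utilized.
    order = sorted(range(len(demands)), key=lambda i: demands[i][0], reverse=True)
    for i in order:
        volume, allowed = demands[i]
        best = min(allowed, key=lambda e: (load[e] + volume) / capacity[e])
        load[best] += volume
        assignment[i] = best
    return assignment, max(load[e] / capacity[e] for e in capacity)
```

Sorting large demands first mirrors the longest-processing-time rule from makespan scheduling; the paper's heuristics additionally have to respect which flows actually traverse an SDN node on a real ISP topology.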

Proceedings ArticleDOI
25 Jun 2018
TL;DR: The focus of this paper is to identify a set of candidate nodes in an Autonomous System that can be upgraded to SDN nodes with the objective of minimizing the maximum link utilization at the inter-AS links of the service provider.
Abstract: The growth of Software Defined Networking (SDN) has made it appealing for Internet service providers to incorporate SDN into their existing legacy networks. This has led to the idea of incrementally introducing SDN elements into the existing legacy infrastructure. Transitioning to such a hybrid SDN network is no small task by itself, and requires careful planning of which nodes in the legacy network can take on the role of SDN elements. The focus of this paper is to identify a set of candidate nodes in an Autonomous System (AS) that can be upgraded to SDN nodes with the objective of minimizing the maximum link utilization at the inter-AS links of the service provider. To this end, we first introduce the SDN node selection problem. Due to its intractability, we propose different greedy heuristics to help select the SDN nodes, and show that by selecting the right set of candidate SDN nodes, a small fraction of SDN nodes helps achieve a significant reduction in link utilization.

2 citations
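The SDN node selection problem above is intractable, so the authors turn to greedy heuristics. The sketch below shows one generic greedy strategy of that kind, not the paper's specific heuristics: given an upgrade budget and a caller-supplied (hypothetical) evaluator that reports the worst inter-AS link utilization for a candidate upgrade set, it repeatedly upgrades the node that helps the most.

```python
def greedy_sdn_node_selection(candidates, budget, max_inter_as_utilization):
    """Greedily choose which legacy nodes to upgrade to SDN nodes.

    candidates: node ids eligible for an SDN upgrade.
    budget:     number of nodes that may be upgraded.
    max_inter_as_utilization(upgraded) -> float is a hypothetical, caller-supplied
    evaluator that reroutes the traffic controllable by the upgraded set and
    returns the resulting maximum inter-AS link utilization.
    """
    upgraded, remaining = set(), set(candidates)
    for _ in range(min(budget, len(remaining))):
        # Pick the node whose upgrade yields the lowest worst-case utilization.
        best = min(remaining, key=lambda n: max_inter_as_utilization(upgraded | {n}))
        upgraded.add(best)
        remaining.remove(best)
    return upgraded
```

Each step costs one evaluation per remaining candidate, so the whole selection needs on the order of |candidates| x budget evaluations of the underlying routing model.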

Proceedings ArticleDOI
22 Aug 2022
TL;DR: This paper proposes crypto externs for Netronome Agilio smartNICs that implement authentication and confidentiality (encryption/decryption) using the ChaCha stream cipher, and shows that the implementation satisfies the scalability requirements of popular applications such as serverless management functions and host in-band network telemetry.
Abstract: Control and management plane applications such as serverless function orchestration and 4G/5G control plane functions are offloaded to smartNICs to reduce communication and processing latency. Such applications involve multiple inter-host interactions that were traditionally secured using SSL/TLS gRPC-based communication channels. Offloading the applications to the smartNIC implies that we must also offload the security algorithms. Otherwise, we need to send the application messages to the host VM/container for crypto operations, negating the offload benefits. We propose crypto externs for Netronome Agilio smartNICs that implement authentication and confidentiality (encryption/decryption) using the ChaCha stream cipher algorithm. AES and ChaCha are two popular cipher suites, but we chose ChaCha since none of the smartNICs have ChaCha-based crypto accelerators. However, smartNICs have a restricted instruction set and limited memory, making it difficult to implement security algorithms. This paper identifies and addresses several challenges in implementing ChaCha crypto primitives successfully. Our evaluations show that our crypto extern implementation satisfies the scalability requirements of popular applications such as serverless management functions and host in-band network telemetry.

1 citation
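ChaCha is built only from 32-bit additions, XORs, and rotations, which is what makes it a plausible fit for a smartNIC that lacks an AES accelerator. For context, here is a plain Python rendering of the ChaCha20 block function from RFC 8439; it only illustrates the primitive the paper's crypto externs implement, since the actual externs target Netronome's toolchain and data-plane constraints rather than Python.

```python
import struct

MASK32 = 0xFFFFFFFF

def _rotl32(x, n):
    # 32-bit left rotation.
    return ((x << n) & MASK32) | (x >> (32 - n))

def _quarter_round(s, a, b, c, d):
    # The ChaCha quarter round: add, XOR, rotate on four state words.
    s[a] = (s[a] + s[b]) & MASK32; s[d] = _rotl32(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & MASK32; s[b] = _rotl32(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & MASK32; s[d] = _rotl32(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & MASK32; s[b] = _rotl32(s[b] ^ s[c], 7)

def chacha20_block(key: bytes, counter: int, nonce: bytes) -> bytes:
    """One 64-byte ChaCha20 keystream block (RFC 8439 state layout)."""
    assert len(key) == 32 and len(nonce) == 12
    constants = (0x61707865, 0x3320646E, 0x79622D32, 0x6B206574)  # "expand 32-byte k"
    state = list(constants) + list(struct.unpack("<8L", key)) \
            + [counter & MASK32] + list(struct.unpack("<3L", nonce))
    working = state[:]
    for _ in range(10):  # 20 rounds = 10 double rounds
        # Column rounds.
        _quarter_round(working, 0, 4, 8, 12)
        _quarter_round(working, 1, 5, 9, 13)
        _quarter_round(working, 2, 6, 10, 14)
        _quarter_round(working, 3, 7, 11, 15)
        # Diagonal rounds.
        _quarter_round(working, 0, 5, 10, 15)
        _quarter_round(working, 1, 6, 11, 12)
        _quarter_round(working, 2, 7, 8, 13)
        _quarter_round(working, 3, 4, 9, 14)
    out = [(w + s) & MASK32 for w, s in zip(working, state)]
    return struct.pack("<16L", *out)
```

Encryption XORs successive keystream blocks (incrementing the counter) with the plaintext; in RFC 8439 the authentication side comes from pairing the cipher with Poly1305, whereas the paper's externs realize authentication and confidentiality within the smartNIC's own constraints.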


Cited by
Journal ArticleDOI
TL;DR: This paper provides a tutorial on fog computing and its related computing paradigms, including their similarities and differences, and provides a taxonomy of research topics in fog computing.

783 citations

Journal ArticleDOI
TL;DR: By consolidating information scattered across the communication, networking, and DL areas, this survey can help readers to understand the connections between enabling technologies while promoting further discussions on the fusion of edge intelligence and intelligent edge, i.e., Edge DL.
Abstract: Ubiquitous sensors and smart devices from factories and communities are generating massive amounts of data, and ever-increasing computing power is driving the core of computation and services from the cloud to the edge of the network. As an important enabler broadly changing people’s lives, from face recognition to ambitious smart factories and cities, developments of artificial intelligence (especially deep learning, DL) based applications and services are thriving. However, due to efficiency and latency issues, the current cloud computing service architecture hinders the vision of “providing artificial intelligence for every person and every organization at everywhere”. Thus, unleashing DL services using resources at the network edge near the data sources has emerged as a desirable solution. Therefore, edge intelligence, aiming to facilitate the deployment of DL services by edge computing, has received significant attention. In addition, DL, as the representative technique of artificial intelligence, can be integrated into edge computing frameworks to build intelligent edge for dynamic, adaptive edge maintenance and management. With regard to mutually beneficial edge intelligence and intelligent edge, this paper introduces and discusses: 1) the application scenarios of both; 2) the practical implementation methods and enabling technologies, namely DL training and inference in the customized edge computing framework; 3) challenges and future trends of more pervasive and fine-grained intelligence. We believe that by consolidating information scattered across the communication, networking, and DL areas, this survey can help readers to understand the connections between enabling technologies while promoting further discussions on the fusion of edge intelligence and intelligent edge, i.e., Edge DL.

611 citations

Journal ArticleDOI
TL;DR: In this paper, a survey on the relationship between edge intelligence and intelligent edge is presented, covering the practical implementation methods and enabling technologies, namely DL training and inference in the customized edge computing framework, as well as challenges and future trends of more pervasive and fine-grained intelligence.
Abstract: Ubiquitous sensors and smart devices from factories and communities are generating massive amounts of data, and ever-increasing computing power is driving the core of computation and services from the cloud to the edge of the network. As an important enabler broadly changing people's lives, from face recognition to ambitious smart factories and cities, developments of artificial intelligence (especially deep learning, DL) based applications and services are thriving. However, due to efficiency and latency issues, the current cloud computing service architecture hinders the vision of "providing artificial intelligence for every person and every organization at everywhere". Thus, unleashing DL services using resources at the network edge near the data sources has emerged as a desirable solution. Therefore, edge intelligence, aiming to facilitate the deployment of DL services by edge computing, has received significant attention. In addition, DL, as the representative technique of artificial intelligence, can be integrated into edge computing frameworks to build intelligent edge for dynamic, adaptive edge maintenance and management. With regard to mutually beneficial edge intelligence and intelligent edge, this paper introduces and discusses: 1) the application scenarios of both; 2) the practical implementation methods and enabling technologies, namely DL training and inference in the customized edge computing framework; 3) challenges and future trends of more pervasive and fine-grained intelligence. We believe that by consolidating information scattered across the communication, networking, and DL areas, this survey can help readers to understand the connections between enabling technologies while promoting further discussions on the fusion of edge intelligence and intelligent edge, i.e., Edge DL.

518 citations

Journal ArticleDOI
TL;DR: In this paper, the authors provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences, and provide a taxonomy of research topics in fog computing.
Abstract: With the Internet of Things (IoT) becoming part of our daily life and our environment, we expect rapid growth in the number of connected devices. IoT is expected to connect billions of devices and humans to bring promising advantages for us. With this growth, fog computing, along with its related edge computing paradigms, such as multi-access edge computing (MEC) and cloudlet, are seen as promising solutions for handling the large volume of security-critical and time-sensitive data that is being produced by the IoT. In this paper, we first provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences. Next, we provide a taxonomy of research topics in fog computing, and through a comprehensive survey, we summarize and categorize the efforts on fog computing and its related computing paradigms. Finally, we provide challenges and future directions for research in fog computing.

360 citations

Posted Content
TL;DR: This survey provides a holistic overview of MEC technology and its potential use cases and applications, and outlines up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond.
Abstract: Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging 5G network will face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacities and finite processing capabilities, so how to run compute-intensive applications on resource-constrained devices has recently become a natural concern. Mobile edge computing (MEC), a key technology in the emerging fifth generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, process large data before sending it to the cloud, provide cloud computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offer context-aware services with the help of RAN information. Therefore, MEC enables a wide variety of applications where real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industry communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research works as well as discuss challenges and potential future directions for MEC research.

279 citations