Author

Satyabrata Swain

Bio: Satyabrata Swain is an academic researcher from VIT University. The author has contributed to research in topics: Backhaul (telecommunications) & Automated fingerprint identification. The author has an h-index of 3, co-authored 10 publications receiving 217 citations. Previous affiliations of Satyabrata Swain include National Institute of Technology, Rourkela.

Papers
Proceedings ArticleDOI
05 Mar 2015
TL;DR: This paper discusses the cloud computing architecture and the numerous services it offers, identifies several security issues in cloud computing based on its service layers, and highlights the available platforms for cloud research and development.
Abstract: Since the phenomenon of cloud computing was proposed, there has been unceasing research interest in it across the globe. Cloud computing is seen as one of the technologies driving the next-generation computing revolution and has rapidly become one of the hottest topics in IT. This fast move towards cloud computing has fuelled concerns on points fundamental to the success of information systems: communication, virtualization, data availability and integrity, public auditing, scientific applications, and information security. Therefore, cloud computing research has attracted tremendous interest in recent years. In this paper, we aim to pin down the current open challenges and issues of cloud computing. The discussion is three-fold: first, we present the cloud computing architecture and the numerous services it offers. Second, we highlight several security issues in cloud computing based on its service layers. Then we identify several open challenges from the cloud computing adoption perspective and their future implications. Finally, we highlight the platforms currently available for cloud research and development.

218 citations

Proceedings ArticleDOI
01 Dec 2016
TL;DR: A simple method for D2D traffic to operate in coexistence with legacy cellular traffic in ultra-dense 5G cellular networks is proposed, and initial results show that the access capacity of both traffic types is not affected by sharing the spectrum.
Abstract: In this paper, we investigate the spectrum-sharing problem for Device-to-Device (D2D) communications in ultra-dense 5G cellular networks. We propose a simple method for D2D traffic to operate in coexistence with legacy cellular traffic. We develop a two-step distributed algorithm for channel allocation and mode selection (i.e., cellular mode or D2D mode) for the mobile station (MS), with the objectives of maximizing capacity and minimizing interfering D2D links. In particular, taking the number of successfully served traffic flows as the performance measure, we assess the benefit of the proposed method. We implemented our scheme in a multicell scenario with a mmWave-enabled BS at the center. Our initial results show that the access capacity of both traffic types is not affected by sharing the spectrum.
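The abstract does not include the algorithm itself. The sketch below is only an illustration of a two-step mode-selection and channel-allocation pass of the kind described, with an assumed SINR model, an assumed D2D admission threshold, and a channel-load penalty standing in for the interference-minimization objective; none of these values come from the paper.

```python
import math
import random

# Illustrative sketch of a two-step mode-selection / channel-allocation pass.
# The SINR model, thresholds and data structures are assumptions, not the
# authors' implementation.

N_CHANNELS = 4
D2D_SINR_MIN_DB = 5.0        # assumed threshold below which D2D mode is rejected

def capacity_bps_hz(sinr_linear):
    """Shannon capacity per unit bandwidth."""
    return math.log2(1.0 + sinr_linear)

def db_to_linear(db):
    return 10.0 ** (db / 10.0)

def select_mode_and_channel(ms, channel_load):
    """Step 1: mode selection, Step 2: channel allocation.

    `ms` carries assumed per-channel SINR estimates for both modes;
    `channel_load` counts D2D links already active per channel, used as a
    proxy for the 'minimize interfering D2D links' objective.
    """
    # Step 1: prefer D2D mode only if its best SINR clears the threshold.
    best_d2d_db = max(ms["d2d_sinr_db"])
    mode = "d2d" if best_d2d_db >= D2D_SINR_MIN_DB else "cellular"

    # Step 2: on the chosen mode, pick the channel that maximizes capacity
    # penalized by the number of D2D links already sharing it.
    sinrs = ms["d2d_sinr_db"] if mode == "d2d" else ms["cell_sinr_db"]
    def score(ch):
        return capacity_bps_hz(db_to_linear(sinrs[ch])) / (1 + channel_load[ch])
    channel = max(range(N_CHANNELS), key=score)
    if mode == "d2d":
        channel_load[channel] += 1
    return mode, channel

if __name__ == "__main__":
    random.seed(1)
    load = [0] * N_CHANNELS
    for i in range(5):
        ms = {
            "cell_sinr_db": [random.uniform(0, 20) for _ in range(N_CHANNELS)],
            "d2d_sinr_db": [random.uniform(-5, 25) for _ in range(N_CHANNELS)],
        }
        print(f"MS{i}:", select_mode_and_channel(ms, load))
```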

16 citations

Proceedings ArticleDOI
05 Mar 2015
TL;DR: Packet scheduling schemes for LTE systems are studied with an emphasis on real-time services such as online video streaming and Voice over Internet Protocol (VoIP), providing results that will help researchers design more efficient scheduling schemes aimed at better overall system performance.
Abstract: The revolution in high-speed broadband networks is a requirement of the current time; in other words, there is an unceasing demand for high data rates and mobility. Both providers and customers see that Long Term Evolution (LTE) could be the promising technology for providing broadband, mobile Internet access. To provide better quality of service (QoS) to customers, resources must be utilized in the most effective way. Resource scheduling is one of the important functions for improving system performance. This paper studies recently proposed packet scheduling schemes for LTE systems. The study concentrates on real-time services such as online video streaming and Voice over Internet Protocol (VoIP). For the performance study, the LTE-Sim simulator is used. The primary objective of this paper is to provide results that will help researchers design more efficient scheduling schemes, aiming for better overall system performance. For the simulation study, two scenarios, one for video traffic and the other for VoIP, have been created. Various performance metrics such as packet loss, fairness, end-to-end (E2E) delay, cell throughput, and spectral efficiency have been measured for both scenarios with varying numbers of users. In the light of the simulation result analysis, the frame-level scheduler (FLS) algorithm outperforms the others by balancing the QoS requirements for multimedia services.
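For readers who want to reproduce the evaluation, the metrics named above (packet loss, fairness, E2E delay, cell throughput, spectral efficiency) can be computed from per-user traces along the following lines. The trace format here is hypothetical and does not correspond to LTE-Sim's output files.

```python
# Minimal sketch of the performance metrics named in the abstract, computed
# from hypothetical per-user trace data (not LTE-Sim output format).

def jain_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2), in (0, 1]."""
    n = len(throughputs)
    total = sum(throughputs)
    return (total * total) / (n * sum(x * x for x in throughputs)) if total else 0.0

def packet_loss_ratio(sent, delivered):
    return 1.0 - delivered / sent if sent else 0.0

def summarize(users, bandwidth_hz, duration_s):
    delivered_bits = sum(u["delivered_bits"] for u in users)
    cell_throughput_bps = delivered_bits / duration_s
    return {
        "packet_loss": packet_loss_ratio(sum(u["pkts_sent"] for u in users),
                                         sum(u["pkts_delivered"] for u in users)),
        "fairness": jain_fairness([u["delivered_bits"] / duration_s for u in users]),
        "mean_e2e_delay_ms": sum(u["mean_delay_ms"] for u in users) / len(users),
        "cell_throughput_mbps": cell_throughput_bps / 1e6,
        "spectral_efficiency_bps_hz": cell_throughput_bps / bandwidth_hz,
    }

if __name__ == "__main__":
    # Hypothetical traces for 3 users over a 10 s run in a 10 MHz cell.
    users = [
        {"pkts_sent": 1000, "pkts_delivered": 980, "delivered_bits": 8e6, "mean_delay_ms": 35.0},
        {"pkts_sent": 1000, "pkts_delivered": 950, "delivered_bits": 6e6, "mean_delay_ms": 48.0},
        {"pkts_sent": 1000, "pkts_delivered": 990, "delivered_bits": 9e6, "mean_delay_ms": 30.0},
    ]
    print(summarize(users, bandwidth_hz=10e6, duration_s=10.0))
```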

4 citations

Book ChapterDOI
01 Jan 2021
TL;DR: This paper proposes a directional beam-based power allocation/re-allocation scheme to guarantee quality of service (QoS) in a high-user-mobility scenario operating on the mmWave band, and simulation results show that the proposed scheme outperforms a baseline scheme without power allocation.
Abstract: The next-generation vehicular network will see an unprecedented amount of data exchanged, which is beyond the capacity of existing communication technologies for vehicular networks. The much-discussed millimeter-wave (mmWave) communication technology is a potential candidate to meet this growing demand for ultra-high data transmission and related services. However, the unfavorable signal characteristics of mmWave bands make guaranteeing quality of service more difficult under user mobility. In this paper, we propose a directional beam-based power allocation/re-allocation scheme to guarantee the quality of service (QoS) in a high-user-mobility scenario operating on the mmWave band. Simulation results show that our proposed scheme outperforms the baseline scheme without power allocation.
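A minimal sketch of the general idea of beam-wise power re-allocation toward a per-beam QoS target is given below. The channel-gain values, rate target, and greedy transfer rule are assumptions for illustration only and are not the scheme evaluated in the chapter.

```python
# Illustrative sketch of per-beam power re-allocation toward a QoS rate
# target; the channel-gain model and targets are assumptions, not the
# authors' scheme.
import math

TOTAL_POWER_W = 10.0
NOISE_W = 1e-9
TARGET_RATE_BPS_HZ = 2.0   # assumed QoS target per beam

def rate(power_w, gain):
    return math.log2(1.0 + power_w * gain / NOISE_W)

def reallocate(gains, iters=200, step=0.05):
    """Move power from beams with rate margin to beams below the target."""
    n = len(gains)
    power = [TOTAL_POWER_W / n] * n
    for _ in range(iters):
        rates = [rate(p, g) for p, g in zip(power, gains)]
        donors = [i for i, r in enumerate(rates) if r > TARGET_RATE_BPS_HZ]
        needy = [i for i, r in enumerate(rates) if r < TARGET_RATE_BPS_HZ]
        if not donors or not needy:
            break
        d = max(donors, key=lambda i: rates[i])   # beam with largest margin gives
        k = min(needy, key=lambda i: rates[i])    # worst beam receives
        delta = min(step, power[d])
        power[d] -= delta
        power[k] += delta
    return power

if __name__ == "__main__":
    # Gains drop as a user moves away from beam boresight (hypothetical values).
    gains = [5e-9, 2e-9, 8e-10, 4e-9]
    p = reallocate(gains)
    print([round(x, 2) for x in p])
    print([round(rate(pw, g), 2) for pw, g in zip(p, gains)])
```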

4 citations

Proceedings ArticleDOI
01 Dec 2018
TL;DR: This paper mainly deals with the analysis of phishing attacks in cyberspace and of malicious web content that is executed within the browser.
Abstract: Nowadays the Internet has become a very unsafe space to deal with. Hackers are constantly trying to obtain users' personal information and detailed credentials. Although many websites on the Internet are safe, this safety cannot be assured for all of them. These rule breakers avoid abiding by rules and try to employ methods like trickery and hacking to gain illegal access to private information. To overcome this problem, we first need to understand the intricacies of how the virus is designed. This paper mainly deals with the analysis of phishing attacks in cyberspace and of malicious web content that is executed within the browser. Files that are downloaded carrying a virus and that involve third-party applications on the PC cannot be checked for viruses. For instance, if a Word file is downloaded to the PC, it uses applications outside the browser's virtual machine (VM) and hence cannot be controlled by the VM.
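The abstract does not detail the analysis method. As a loose illustration of one common ingredient of browser-side phishing analysis, the sketch below scores a URL with a few lexical heuristics; the feature list and scoring are assumptions, and this is not the authors' approach.

```python
# Minimal, illustrative URL heuristics of the kind a browser-side phishing
# analysis might apply; the feature list and scoring are assumptions.
from urllib.parse import urlparse
import re

SUSPICIOUS_KEYWORDS = ("login", "verify", "update", "secure", "account")

def phishing_score(url):
    """Return a crude suspicion score in [0, 5]; higher is more suspicious."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    score = 0
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):   # raw IP instead of a domain
        score += 1
    if host.count(".") >= 3:                            # unusually many subdomains
        score += 1
    if "@" in url or "-" in host:                       # common obfuscation tricks
        score += 1
    if any(k in url.lower() for k in SUSPICIOUS_KEYWORDS):
        score += 1
    if parsed.scheme != "https":
        score += 1
    return score

if __name__ == "__main__":
    for u in ("https://example.com/home",
              "http://192.168.10.5/secure-login/verify@bank"):
        print(u, "->", phishing_score(u))
```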

3 citations


Cited by
Journal ArticleDOI
TL;DR: The fundamental data management techniques employed to ensure consistency, interoperability, granularity, and reusability of the data generated by the underlying IoT for smart cities are described.
Abstract: Integrating the various embedded devices and systems in our environment enables an Internet of Things (IoT) for a smart city. The IoT will generate a tremendous amount of data that can be leveraged for safety, efficiency, and infotainment applications and services for city residents. The management of this voluminous data through its lifecycle is fundamental to the realization of smart cities. Therefore, in contrast to existing surveys on smart cities, we provide a data-centric perspective, describing the fundamental data management techniques employed to ensure consistency, interoperability, granularity, and reusability of the data generated by the underlying IoT for smart cities. Essentially, the data lifecycle in a smart city depends on tightly coupled data management with cross-cutting layers of data security and privacy, and supporting infrastructure. Therefore, we further identify techniques employed for data security and privacy, and discuss the networking and computing technologies that enable smart cities. We highlight the achievements in realizing various aspects of smart cities, present the lessons learned, and identify limitations and research challenges.

390 citations

Journal ArticleDOI
TL;DR: This paper presents a method for minimizing Service Delay in a scenario with two cloudlet servers; the method has a dual focus on computation and communication, controlling Processing Delay through virtual machine migration and improving Transmission Delay with Transmission Power Control.
Abstract: Due to physical limitations, mobile devices are restricted in memory, battery, and processing power, among other characteristics. This results in many applications that cannot be run on such devices. This problem is addressed by Edge Cloud Computing, where users offload tasks they cannot run to cloudlet servers at the edge of the network. The main requirement of such a system is a low Service Delay, which corresponds to a high Quality of Service. This paper presents a method for minimizing Service Delay in a scenario with two cloudlet servers. The method has a dual focus on computation and communication elements, controlling Processing Delay through virtual machine migration and improving Transmission Delay with Transmission Power Control. The foundation of the proposal is a mathematical model of the scenario, whose analysis is used in a comparison between the proposed approach and two other conventional methods; these methods have a single focus and only make an effort to improve either Transmission Delay or Processing Delay, but not both. As expected, the proposal presents the lowest Service Delay in all study cases, corroborating our conclusion that a dual-focus approach is the best way to tackle the Service Delay problem in Edge Cloud Computing.
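A toy version of the dual-focus idea can be written as Service Delay = Transmission Delay + Processing Delay, minimized jointly over the serving cloudlet (a stand-in for VM migration) and the uplink transmit power. The numbers below are illustrative, the model ignores interference and energy constraints, and it is not the paper's mathematical model.

```python
# Sketch of the dual-focus idea: service delay = transmission delay +
# processing delay, jointly reduced by choosing the serving cloudlet (a
# stand-in for VM migration) and the uplink transmit power. All numbers
# are illustrative.
import math

BANDWIDTH_HZ = 1e6
NOISE_W = 1e-10
TASK_BITS = 2e6
TASK_CYCLES = 5e8

def tx_delay(power_w, gain):
    rate_bps = BANDWIDTH_HZ * math.log2(1.0 + power_w * gain / NOISE_W)
    return TASK_BITS / rate_bps

def proc_delay(cloudlet):
    # Queued load raises processing time on a busy cloudlet.
    return TASK_CYCLES / cloudlet["cpu_hz"] * (1 + cloudlet["load"])

def best_assignment(cloudlets, power_levels_w):
    """Pick the (cloudlet, tx power) pair minimizing total service delay."""
    options = [
        (tx_delay(p, c["gain"]) + proc_delay(c), c["name"], p)
        for c in cloudlets for p in power_levels_w
    ]
    return min(options)

if __name__ == "__main__":
    cloudlets = [
        {"name": "cloudlet_A", "cpu_hz": 3e9, "load": 0.8, "gain": 1e-7},
        {"name": "cloudlet_B", "cpu_hz": 2e9, "load": 0.1, "gain": 4e-8},
    ]
    delay, name, power = best_assignment(cloudlets, power_levels_w=[0.1, 0.5, 1.0])
    print(f"serve on {name} at {power} W -> {delay * 1e3:.1f} ms")
```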

335 citations

Journal ArticleDOI
TL;DR: This work shows the evolution of modern computing paradigms and related research interest, and extensively addresses Fog computing, highlighting its outstanding role as the glue between IoT, Cloud, and Edge computing.
Abstract: In the last few years, Internet of Things, Cloud computing, Edge computing, and Fog computing have gained a lot of attention in both industry and academia. However, a clear and neat definition of these computing paradigms and their correlation is hard to find in the literature. This makes it difficult for researchers new to this area to get a concrete picture of these paradigms. This work tackles this deficiency, representing a helpful resource for newcomers to the field. First, we show the evolution of modern computing paradigms and related research interest. Then, we address each paradigm, neatly delineating its key points and its relation with the others. Thereafter, we extensively address Fog computing, highlighting its outstanding role as the glue between IoT, Cloud, and Edge computing. In the end, we briefly present open challenges and future research directions for IoT, Cloud, Edge, and Fog computing.

210 citations

Journal Article
TL;DR: This work proposes using a trusted device to perform mutual authentication that eliminates reliance on perfect user behavior, thwarts Man-in-the-Middle attacks after setup, and protects a user's account even in the presence of keyloggers and most forms of spyware.
Abstract: Phishing, or web spoofing, is a growing problem: the Anti-Phishing Working Group (APWG) received almost 14,000 unique phishing reports in August 2005, a 56% jump over the number of reports in December 2004 [3]. For financial institutions, phishing is a particularly insidious problem, since trust forms the foundation for customer relationships, and phishing attacks undermine confidence in an institution. Phishing attacks succeed by exploiting a user's inability to distinguish legitimate sites from spoofed sites. Most prior research focuses on assisting the user in making this distinction; however, users must make the right security decision every time. Unfortunately, humans are ill-suited for performing the security checks necessary for secure site identification, and a single mistake may result in a total compromise of the user's online account. Fundamentally, users should be authenticated using information that they cannot readily reveal to malicious parties. Placing less reliance on the user during the authentication process will enhance security and eliminate many forms of fraud. We propose using a trusted device to perform mutual authentication that eliminates reliance on perfect user behavior, thwarts Man-in-the-Middle attacks after setup, and protects a user's account even in the presence of keyloggers and most forms of spyware. We demonstrate the practicality of our system with a prototype implementation.
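The core of the proposal is mutual authentication anchored in a trusted device rather than in user judgment. The sketch below shows only the generic HMAC challenge-response pattern over a key shared at setup; pairing, transport, and replay protection are omitted, and this is not the authors' protocol.

```python
# Minimal sketch of device-assisted mutual authentication via an HMAC
# challenge-response over a key shared at setup; pairing, transport and
# replay handling are omitted, and this is not the authors' protocol.
import hmac
import hashlib
import os

def respond(shared_key, challenge):
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def mutual_auth(device_key, server_key):
    """Each side proves knowledge of the shared key without revealing it."""
    server_challenge = os.urandom(32)
    device_challenge = os.urandom(32)

    # Device answers the server's challenge; the server verifies.
    device_proof = respond(device_key, server_challenge)
    server_ok = hmac.compare_digest(device_proof, respond(server_key, server_challenge))

    # Server answers the device's challenge; the device verifies (mutuality).
    server_proof = respond(server_key, device_challenge)
    device_ok = hmac.compare_digest(server_proof, respond(device_key, device_challenge))

    return server_ok and device_ok

if __name__ == "__main__":
    key = os.urandom(32)
    print("legitimate pair:", mutual_auth(key, key))            # True
    print("spoofed server:", mutual_auth(key, os.urandom(32)))  # False
```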

191 citations

Journal ArticleDOI
TL;DR: This paper proposes an algorithm that utilizes Virtual Machine Migration and Transmission Power Control, together with a mathematical model of delay in Mobile Edge Computing and a heuristic algorithm called Particle Swarm Optimization, to balance the workload between cloudlets and consequently maximize cost-effectiveness.
Abstract: Mobile devices have several restrictions due to design choices that guarantee their mobility. A way of surpassing such limitations is to utilize cloud servers called cloudlets on the edge of the network through Mobile Edge Computing. However, as the number of clients and devices grows, the service must also increase its scalability in order to guarantee a latency limit and quality threshold. This can be achieved by deploying and activating more cloudlets, but this solution is expensive due to the cost of the physical servers. The best choice is to optimize the resources of the cloudlets through an intelligent choice of configuration that lowers delay and raises scalability. Thus, in this paper we propose an algorithm that utilizes Virtual Machine Migration and Transmission Power Control, together with a mathematical model of delay in Mobile Edge Computing and a heuristic algorithm called Particle Swarm Optimization, to balance the workload between cloudlets and consequently maximize cost-effectiveness. Our proposal is the first to consider simultaneously communication, computation, and migration in our assumed scale and, due to that, manages to outperform other conventional methods in terms of number of serviced users.
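As a rough illustration of how Particle Swarm Optimization can balance workload between two cloudlets, the sketch below searches for the split of offloaded work that minimizes the worse of the two processing delays. The delay model and PSO constants are assumptions, not the paper's formulation, which also accounts for communication and migration.

```python
# Toy Particle Swarm Optimization over the fraction of workload placed on each
# of two cloudlets, minimizing the worse of the two delays; the delay model and
# PSO constants are assumptions for illustration only.
import random

CPU = (3e9, 2e9)            # cycles/s of cloudlet 1 and cloudlet 2
TOTAL_CYCLES = 4e9          # aggregate offloaded work per second

def cost(x):
    """x = fraction of work on cloudlet 1; cost = max per-cloudlet delay."""
    x = min(max(x, 0.0), 1.0)
    d1 = x * TOTAL_CYCLES / CPU[0]
    d2 = (1 - x) * TOTAL_CYCLES / CPU[1]
    return max(d1, d2)

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)
    pos = [random.random() for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
            pos[i] += vel[i]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=cost)
    return gbest, cost(gbest)

if __name__ == "__main__":
    split, delay = pso()
    print(f"optimal fraction on cloudlet 1: {split:.3f}, max delay: {delay:.3f} s")
```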

163 citations