scispace - formally typeset
Author

K. Naresh

Bio: K. Naresh is an academic researcher from VIT University. The author has contributed to research in the topics of Wireless networks and Static single assignment form. The author has an h-index of 2, and has co-authored 5 publications receiving 19 citations.

Papers
Journal ArticleDOI
TL;DR: This paper discusses large-scale data analysis using different implementations on the aforementioned tools, and then gives a performance analysis of those tools on implementations such as Cap3, HEP, and CloudBurst.

9 citations

Book ChapterDOI
01 Jan 2014
TL;DR: The proposed fusion-centric scheme controls node-level congestion by selectively concentrating on intermediate-level nodes, keeping the network running without consuming many resources.
Abstract: Data-centric wireless sensor networks comprise numerous autonomous tiny nodes that form a random topology. Application-oriented WSNs are widely used for monitoring harsh environments. Their unique constraints (energy, lifetime, fault tolerance, scalability, and computational power) distinguish them from traditional networks. When deployed randomly, the nodes sense and generate vast amounts of data, which is the purpose they serve. The network meets congestion if a huge volume of data is passed. To eradicate this, misbehaving nodes are identified and enabled to self-heal without human intervention. Existing approaches focus on controlling link-level congestion rather than node-level congestion. Our proposed fusion-centric scheme controls node congestion by selectively concentrating on the intermediate level. Misbehaving nodes are identified from their historic data, based on the fault level that occurred. Lifetime is determined by the allocation rate and radio signal usage. Our scheme keeps the network running without consuming many resources. Node-level control is needed for self-configuring WSNs.
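The self-healing behavior described above, identifying a misbehaving node from its fault history and throttling its rate, can be sketched as follows. The class name, history window, threshold, and rate-halving policy are illustrative assumptions, not the paper's actual algorithm.

```python
from collections import deque

class SensorNode:
    """Sketch of node-level congestion control: a node tracks its recent
    fault history and throttles its allocation rate when faults exceed
    a threshold, restoring the rate once it behaves again."""

    def __init__(self, node_id, history_len=10, fault_threshold=0.3):
        self.node_id = node_id
        self.history = deque(maxlen=history_len)  # 1 = fault, 0 = ok
        self.fault_threshold = fault_threshold
        self.allocation_rate = 1.0  # fraction of full sending rate

    def record(self, fault_occurred):
        self.history.append(1 if fault_occurred else 0)

    def fault_level(self):
        # Fraction of recent intervals in which a fault occurred.
        return sum(self.history) / len(self.history) if self.history else 0.0

    def self_heal(self):
        # Misbehaving node halves its own rate; healthy node recovers.
        if self.fault_level() > self.fault_threshold:
            self.allocation_rate = max(0.1, self.allocation_rate * 0.5)
            return "throttled"
        self.allocation_rate = min(1.0, self.allocation_rate + 0.1)
        return "normal"
```

Because the decision uses only local history, no human intervention or central coordinator is needed, matching the self-configuring goal in the abstract.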

7 citations

Proceedings ArticleDOI
TL;DR: This paper presents a compiler framework for reconfigurable computing that builds in flexibility and convenience in a way that allows the framework to be ported to different targets with minimal effort.
Abstract: This paper presents a compiler framework for reconfigurable computing. Our technique builds in flexibility and convenience in a way that allows the framework to be ported to different targets with minimal effort. Building on an existing compilation flow, we attempt to achieve a new tier of functionality by exploring and partitioning programs written in C at the highest feasible level of abstraction. We show that analysis at this level is more effective than at lower levels because of the more expressive constructs of the programming language. The improved analysis results, combined with a Static single assignment (SSA)-based algorithm for datapath generation, can lead to higher quality in the final architecture design.
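The SSA form that the datapath-generation step relies on can be illustrated for straight-line code: each re-definition of a variable gets a fresh versioned name, so every use refers to exactly one definition. This toy renamer is a generic sketch of SSA versioning, not the paper's algorithm.

```python
def to_ssa(statements):
    """Convert straight-line assignments into SSA form.

    Each statement is (target, [operand variables]). Every definition
    of a variable gets a fresh version number, and each use is renamed
    to the most recent version of that variable."""
    version = {}  # variable -> current version number
    out = []
    for target, operands in statements:
        # Rename uses to the latest defined version (0 = undefined input).
        renamed_ops = [f"{v}{version.get(v, 0)}" for v in operands]
        # Give the target a fresh version for this definition.
        version[target] = version.get(target, 0) + 1
        out.append((f"{target}{version[target]}", renamed_ops))
    return out
```

Because each SSA name has a single definition, def-use chains become explicit, which is what makes analyses like the datapath generation mentioned above simpler at this level.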

2 citations

Journal ArticleDOI
TL;DR: In this article, the authors review light fidelity, the technology that Harald Haas of the University of Edinburgh, UK coined and introduced to the world when he demonstrated a Li-Fi prototype at the TED Global conference in Edinburgh on 12 July 2011.
Abstract: Light fidelity (Li-Fi) is an emerging technology that transfers data through light, a complete transformation of the world of wireless data transfer. Harald Haas of the University of Edinburgh, UK coined the term and introduced light fidelity to the world when he demonstrated a Li-Fi prototype at the TED Global conference in Edinburgh on 12 July 2011. Li-Fi challenges the pre-existing wireless data transfer model, wireless fidelity (Wi-Fi), on parameters such as speed, safety, reliability, eco-friendliness, and efficiency. In a Li-Fi system, a light-emitting diode is the source of data transfer, using visible light as the communication medium. Because of the higher bandwidth of light, this can provide greater download capacity than existing wired or wireless networks. With such high potential, every everyday electronic commodity involving light could be considered a potential internet access point. Since light cannot penetrate walls, Li-Fi links have short range, but within the confines of their surroundings they are more secure than Wi-Fi, and they can overcome the looming shortage of radio-frequency bandwidth.
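A minimal way to picture LED-based data transfer is on-off keying (LED on = 1, LED off = 0), a common modulation for visible light communication; this sketch is purely illustrative and is not a description of Haas's actual prototype.

```python
def ook_encode(data: bytes):
    """Map each bit of the payload to an LED state for on-off keying:
    1 = LED on, 0 = LED off, most significant bit first."""
    bits = []
    for byte in data:
        for i in range(7, -1, -1):
            bits.append((byte >> i) & 1)
    return bits

def ook_decode(bits):
    """Recover bytes from a stream of sampled LED on/off states."""
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)
```

Real Li-Fi systems modulate the LED far faster than the eye can perceive, so the light appears constant while carrying the bit stream.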

1 citation


Cited by
Journal ArticleDOI
23 Apr 2020
TL;DR: This paper reviews in detail the cloud computing system, the technologies it uses, and the best technologies to use with it according to multiple factors and criteria, such as procedure cost, speed, and pros and cons.
Abstract: The cloud is the best method used for the utilization and organization of data. The cloud provides many resources to us via the internet. There are many technologies used in cloud computing systems; each one uses a different kind of protocol and method. Many tasks that cannot execute on a single computer can execute on different servers every second. The most popular technologies used in the cloud system are Hadoop, Dryad, and other MapReduce frameworks. Also, there are many tools used to evaluate the performance of the cloud system, such as Cap3, HEP, and CloudBurst. This paper reviews in detail the cloud computing system, the technologies it uses, and the best technologies to use with it according to multiple factors and criteria, such as procedure cost, speed, and pros and cons. Moreover, a comprehensive comparison of the tools used for the utilization of cloud computing systems is presented.

68 citations

Journal ArticleDOI
TL;DR: Results utilizing temperature measurements indicate that the proposed Data Traffic Management based on Compression and Minimum Description Length (MDL) Techniques outperforms common methods developed especially for WSNs in reducing the amount of data transmitted and saving energy, even though the suggested system does not reach the theoretical maximum.
Abstract: The agricultural sector faces numerous challenges in the proper utilization of its natural resources. For that reason, and owing to the growing risk of changing weather conditions, we must monitor soil conditions and meteorological data locally in order to accelerate the adoption of appropriate decisions that help the crop. In the era of the Internet of Things (IoT), one solution is to deploy a Wireless Sensor Network (WSN) as a low-cost remote monitoring and management system for these kinds of features. But WSNs suffer from the motes' limited energy supplies, which decrease the total network lifetime. Each mote periodically collects the tracked feature and transmits the data to the edge Gateway (GW) for further study. This method of transmitting massive volumes of data makes the sensor node use high energy and substantial network bandwidth. In this research, Data Traffic Management based on Compression and Minimum Description Length (MDL) Techniques is proposed, which works at the level of the sensor nodes (i.e., the Things level) and at the edge GW level. At the first level, a lightweight lossless compression algorithm based on Differential Encoding and Huffman techniques is applied, which is particularly beneficial for IoT nodes that monitor features of the environment, especially those with limited computing and memory resources. Instead of trying to formulate innovative ad hoc algorithms, we demonstrate that, given general awareness of the features to be monitored, classical Huffman coding can be used effectively to describe the same features measured at various time periods and locations. At the second level, the principle of MDL with hierarchical clustering is utilized to cluster the sets of data coming from the first level. The strategy used to minimize the data sets transmitted at this level is fairly simple: any pair of data sets that can be compressed according to the MDL principle is combined into one cluster. As a result of this strategy, the number of data sets gradually decreases, and the process of merging similar sets into a single cluster stops when no more pairs of sets can be compressed. Results utilizing temperature measurements indicate that the proposed system outperforms common methods developed especially for WSNs in reducing the amount of data transmitted and saving energy, even though it does not reach the theoretical maximum.
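The first level's pipeline, differential encoding followed by classical Huffman coding, can be sketched as follows. The function names and table-building details are illustrative, not the paper's implementation; the point is that slowly varying readings like temperature produce many repeated small deltas, which Huffman coding then represents with short codes.

```python
import heapq
from collections import Counter

def differential_encode(samples):
    """Replace each sample with its difference from the previous one.
    Slowly varying readings yield many small, repetitive deltas."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def huffman_code(symbols):
    """Build a Huffman code table {symbol: bitstring} from frequencies.
    More frequent symbols receive shorter codes."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: [total count, unique tiebreaker id, {symbol: code}]
    heap = [[count, i, {sym: ""}] for i, (sym, count) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], next_id, merged])
        next_id += 1
    return heap[0][2]
```

A node would transmit the short Huffman bitstrings instead of raw readings, which is where the bandwidth and energy savings come from.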

25 citations

Journal ArticleDOI
01 Mar 2021
TL;DR: A systematic review of data aggregating schemes in flat and hierarchical WSNs, which are compared in light of different factors, including the node heterogeneity, mobility of aggregator nodes, and types of networks and algorithms.
Abstract: A Wireless Sensor Network (WSN) consists of a number of sensor nodes which can sense, communicate, and store data, and which operate on limited battery capacity. Data aggregation can be defined as a procedure applied to eliminate redundant transmissions; it provides fused information to the base stations, which in turn improves energy efficiency and increases the lifespan of energy-constrained WSNs. The current article presents a systematic review of data aggregation schemes in flat and hierarchical WSNs, which are compared in light of different factors, including node heterogeneity, mobility of aggregator nodes, and the types of networks and algorithms. By comparing these remarkable techniques, a set of aspects is suggested to be dealt with in future works. Keywords: Data aggregation, WSNs, network lifetime, flat network, hierarchical network.
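The core idea of data aggregation, fusing a cluster's readings into one summary packet instead of forwarding every raw reading, can be sketched as below; the particular summary statistics chosen here are an illustrative assumption, not any specific scheme from the review.

```python
def aggregate_cluster(readings):
    """Fuse a cluster's raw sensor readings into a single summary
    packet, so the aggregator forwards one message to the base
    station instead of len(readings) separate transmissions."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }
```

Replacing n transmissions with one is what saves radio energy, since communication typically dominates a mote's power budget.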

15 citations

Journal ArticleDOI
TL;DR: A novel sensor fusion technique based on fuzzy theory is proposed for the earlier proposed Cognitive Radio-based Vehicular Ad Hoc and Sensor Networks (CR-VASNET), and is introduced as an applicable system for reducing the casualty rate of vehicle crashes.
Abstract: In wireless sensor networks, sensor fusion is employed to integrate the acquired data from diverse sensors to provide a unified interpretation. The best and most salient advantage of sensor fusion is obtaining high-level information in both statistical and definitive aspects, which cannot be attained by a single sensor. In this paper, we propose a novel sensor fusion technique based on fuzzy theory for our earlier proposed Cognitive Radio-based Vehicular Ad Hoc and Sensor Networks (CR-VASNET). In the proposed technique, we considered four input sensor readings (antecedents) and one output (consequent). The mobile nodes employed in CR-VASNET are assumed to be equipped with diverse sensors, which provide our antecedent variables: jerk, collision intensity, temperature, and inclination degree. Crash severity is considered the consequent variable. The processing and fusion of the diverse sensory signals are carried out by a fuzzy logic scenario. The accuracy and reliability of the proposed protocol, demonstrated by the simulation results, introduce it as an applicable system for reducing the casualty rate of vehicle crashes.
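A toy version of the fuzzy fusion step, four antecedents in, one consequent out, might look like the following; the triangular membership shapes, the normalization of inputs to 0..1, and the rule weights are illustrative guesses, not the paper's actual rule base.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside [a, c],
    rising linearly to 1 at the peak b, then falling back to 0."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def crash_severity(jerk, collision_intensity, temperature, inclination):
    """Toy fuzzy fusion: fuzzify each normalized reading (0..1 scale)
    into a 'high' membership degree, then defuzzify by a weighted
    average into a single crash-severity score in [0, 1]."""
    memberships = {
        "jerk": triangular(jerk, 0.2, 1.0, 1.8),
        "collision": triangular(collision_intensity, 0.2, 1.0, 1.8),
        "temperature": triangular(temperature, 0.2, 1.0, 1.8),
        "inclination": triangular(inclination, 0.2, 1.0, 1.8),
    }
    # Illustrative rule weights: collision intensity and jerk dominate.
    weights = {"jerk": 0.3, "collision": 0.4, "temperature": 0.1, "inclination": 0.2}
    return sum(weights[k] * memberships[k] for k in weights)
```

The appeal of the fuzzy approach is that noisy, heterogeneous sensor readings contribute gradually to the verdict rather than through brittle hard thresholds.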

9 citations

Journal ArticleDOI
TL;DR: This paper focuses on minimizing the task of the CH by using participatory devices as relay nodes; faulty nodes are identified via a Poisson distribution that models the failure probability, without affecting communication and with reduced resource consumption.
Abstract: A wireless sensor network, being a dominant prerequisite in the modern pervasive environment, has nodes connected by multi-hop links to transmit data and support continuous monitoring with real-time updates from the field environment. To achieve pervasiveness, integrating wireless and physical devices is unavoidable. Numerous self-organized tiny sensor nodes cooperate with each other to form clusters, and the most prominent node acts as cluster head (CH). The cluster head is chosen based on its battery strength, and its failure affects the rest of the communications. In this paper, we discuss the essentials of idle resource sharing using participatory devices as relay nodes, along with the node failure rate and node density, to achieve reliable communication. The earlier performances are observed and the results are revealed. Hence, we concentrate on minimizing the task of the CH by using participatory devices as relay nodes; the faulty nodes are identified via a Poisson distribution, which models the failure probability without affecting communication and with reduced resource consumption.
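The Poisson failure model mentioned above can be sketched as follows; the parameterization of `rate` as the expected number of node failures per monitoring interval is an assumption for illustration.

```python
import math

def poisson_failure_probability(rate, k):
    """P(X = k) for a Poisson-distributed number of node failures,
    where `rate` is the expected failure count per interval."""
    return math.exp(-rate) * rate ** k / math.factorial(k)

def probability_of_any_failure(rate):
    """P(at least one node fails) = 1 - P(no failures)."""
    return 1.0 - poisson_failure_probability(rate, 0)
```

A cluster could use such estimates to decide in advance how many relay nodes to keep on standby so that the CH's load stays covered even when some nodes fail.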

7 citations