scispace - formally typeset
Author

Avinash G. Keskar

Bio: Avinash G. Keskar is an academic researcher from Visvesvaraya National Institute of Technology. The author has contributed to research in topics: Discrete wavelet transform & Rough set. The author has an h-index of 12 and has co-authored 112 publications receiving 668 citations.


Papers
Journal ArticleDOI
19 Jun 2020
TL;DR: A novel approach based on a weighted classifier is introduced, which combines the weighted predictions from the state-of-the-art deep learning models such as ResNet18, Xception, InceptionV3, DenseNet121, and MobileNetV3 in an optimal way and is able to outperform all the individual models.
Abstract: Pneumonia causes the death of around 700,000 children every year and affects 7% of the global population. Chest X-rays are primarily used for the diagnosis of this disease. However, even for a trained radiologist, examining chest X-rays is a challenging task, and there is a need to improve diagnosis accuracy. In this work, an efficient model for the detection of pneumonia, trained on digital chest X-ray images, is proposed, which could aid radiologists in their decision-making process. A novel approach based on a weighted classifier is introduced, which combines the weighted predictions from state-of-the-art deep learning models such as ResNet18, Xception, InceptionV3, DenseNet121, and MobileNetV3 in an optimal way. This is a supervised learning approach in which the network predicts the result based on the quality of the dataset used. Transfer learning is used to fine-tune the deep learning models to obtain higher training and validation accuracy. Partial data augmentation techniques are employed to increase the training dataset in a balanced way. The proposed weighted classifier is able to outperform all the individual models. Finally, the model is evaluated not only in terms of test accuracy but also in terms of the AUC score. The final proposed weighted classifier model achieves a test accuracy of 98.43% and an AUC score of 99.76 on the unseen data from the Guangzhou Women and Children's Medical Center pneumonia dataset. Hence, the proposed model can be used for a quick diagnosis of pneumonia and can aid radiologists in the diagnosis process.
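The weighted-classifier idea can be sketched as follows. The per-model probabilities and weights below are illustrative stand-ins, not values from the paper, which tunes the weights so the ensemble beats every individual model:

```python
import numpy as np

# Hypothetical per-model softmax probabilities for 3 test images over
# 2 classes (normal vs. pneumonia); real model outputs would go here.
preds = {
    "ResNet18":    np.array([[0.9, 0.1], [0.2, 0.8], [0.4, 0.6]]),
    "DenseNet121": np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4]]),
    "InceptionV3": np.array([[0.7, 0.3], [0.1, 0.9], [0.5, 0.5]]),
}

# Illustrative weights, e.g. proportional to each model's validation accuracy.
weights = {"ResNet18": 0.4, "DenseNet121": 0.35, "InceptionV3": 0.25}

def weighted_classify(preds, weights):
    """Combine per-model class probabilities with normalised scalar weights."""
    total = sum(weights.values())
    combined = sum(w / total * preds[name] for name, w in weights.items())
    return combined.argmax(axis=1)  # predicted class per image

print(weighted_classify(preds, weights))  # → [0 1 1]
```

Note that the third image is decided by the weighted vote (0.495 vs. 0.505), where a simple majority over the three models' argmax predictions would have tied differently; this is exactly the case where learned weights help.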

155 citations

Proceedings ArticleDOI
01 Dec 2013
TL;DR: An algorithm for image-tamper detection based on the Discrete Wavelet Transform (DWT) is developed that detects whether image forgery has occurred and also localizes the forgery, i.e. it shows visually where the copy-move forgery has occurred.
Abstract: Powerful image editing tools like Adobe Photoshop are very common these days. However, due to such tools, tampering with images has become very easy. Such tampering with digital images is known as image forgery. The most common type of digital image forgery is copy-move forgery, wherein a part of an image is cut/copied and pasted into another area of the same image. The aim behind this type of forgery may be to hide particularly important details in the image. A method is proposed to detect copy-move forgery in images. We have developed an image-tamper detection algorithm based on the Discrete Wavelet Transform (DWT). DWT is used for dimension reduction, which in turn increases the accuracy of the results. First, DWT is applied to the given image to decompose it into four sub-bands: LL, LH, HL, and HH. Since the LL sub-band contains most of the image information, SIFT is applied to the LL part only to extract the key features, compute descriptor vectors for these key features, and then find similarities between the descriptor vectors to conclude whether the given image is forged. This method allows us to detect whether image forgery has occurred and also localizes the forgery, i.e. it shows visually where the copy-move forgery has occurred.
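The decomposition step can be sketched as a single-level 2-D wavelet transform. A Haar wavelet is assumed here (the abstract does not name the wavelet family), and the SIFT keypoint-matching stage is omitted:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: returns the LL, LH, HL, HH sub-bands,
    each a quarter the size of the input (dimensions must be even)."""
    a = img.astype(float)
    # transform along rows: averages (low-pass) and differences (high-pass)
    lo = (a[:, ::2] + a[:, 1::2]) / 2
    hi = (a[:, ::2] - a[:, 1::2]) / 2
    # transform along columns
    ll = (lo[::2] + lo[1::2]) / 2
    lh = (lo[::2] - lo[1::2]) / 2
    hl = (hi[::2] + hi[1::2]) / 2
    hh = (hi[::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)  # smooth toy "image"
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # → (4, 4): SIFT would run on this quarter-size band
```

For a smooth image like this one, almost all of the signal energy lands in LL, which is why keypoint extraction on LL alone both cuts the work to a quarter of the pixels and keeps the matchable content.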

78 citations

Journal ArticleDOI
TL;DR: A dedicated hardware-based HAR system for smart military wearables, which uses a multilayer perceptron (MLP) algorithm to perform activity classification, is proposed, which requires only 270 ns for classification and consumes 120 mW of power.
Abstract: Smartphone-based human activity recognition (HAR) systems are not capable of delivering high-end performance for challenging applications. We propose a dedicated hardware-based HAR system for smart military wearables, which uses a multilayer perceptron (MLP) algorithm to perform activity classification. To achieve a flexible and efficient hardware design, the inherently parallel MLP architecture is implemented on an FPGA. The system performance has been evaluated using the UCI human activity dataset with 7767 feature samples from 20 subjects. Three combinations of the dataset are trained, validated, and tested on ten different MLP models with distinct topologies. The MLP design with the 7-6-5 topology is selected based on its classification accuracy and cross-entropy performance. Five versions of the final MLP design (7-6-5) with different data precisions are implemented on the FPGA. The analysis shows that the MLP designed with 16-bit fixed-point data precision is the most efficient implementation in terms of classification accuracy, resource utilization, and power consumption. The proposed MLP design requires only 270 ns for classification and consumes 120 mW of power. The recognition accuracy and hardware performance achieved are better than those of many recently reported works.
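A minimal software sketch of the 7-6-5 forward pass in 16-bit fixed point. The Q8.8 split, the ReLU activation, and the weights are all assumptions for illustration; the abstract only states that 16-bit fixed point gave the best accuracy/resource trade-off:

```python
import numpy as np

FRAC_BITS = 8  # assumed Q8.8 split of the 16-bit words

def to_fixed(x):
    """Quantise floats to 16-bit fixed-point values (held in int32)."""
    return np.clip(np.round(x * (1 << FRAC_BITS)), -32768, 32767).astype(np.int32)

def fixed_layer(x_q, w_q, b_q):
    """One MLP layer in fixed point: multiply-accumulate, rescale, ReLU.
    Products carry 2*FRAC_BITS fractional bits, so the accumulator is
    shifted right by FRAC_BITS to return to Q8.8."""
    acc = x_q @ w_q + (b_q << FRAC_BITS)
    return np.maximum(acc >> FRAC_BITS, 0)

rng = np.random.default_rng(0)
# random stand-in weights for the 7-6-5 topology (7 inputs, 6 hidden, 5 classes)
w1, b1 = rng.normal(size=(7, 6)), rng.normal(size=6)
w2, b2 = rng.normal(size=(6, 5)), rng.normal(size=5)

x = rng.normal(size=7)                               # one feature sample
h = fixed_layer(to_fixed(x), to_fixed(w1), to_fixed(b1))
out = fixed_layer(h, to_fixed(w2), to_fixed(b2))
print(out.argmax())  # predicted activity class (0..4)
```

On the FPGA, each layer's multiply-accumulates run in parallel rather than as a matrix product, which is what makes the 270 ns classification latency possible.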

67 citations

Journal ArticleDOI
TL;DR: An exhaustive, categorical survey of the published research on ball tracking is presented, along with discussion of the work so far, the authors' views and opinions, and a modified block diagram of the tracking process.
Abstract: The growing number of sports lovers of games like football and cricket has created a need for mining, analyzing, and presenting more and more multidimensional information to them. Different classes of people require different kinds of information, and this expands the space and scale of the required information. Tracking ball movement is of utmost importance for extracting any information from ball-based sports video sequences. Based on the literature survey, we initially propose a block diagram depicting the different steps and flow of a general tracking process; the paper follows the same flow throughout. Detection is the first step of tracking. The dynamic and unpredictable nature of ball appearance and movement, together with a continuously changing background, makes detection and tracking challenging. These challenges have attracted many researchers to the problem, and good results have been produced under specific conditions. However, generalizing the published algorithms to different sports remains a distant goal. This paper is an effort to present an exhaustive survey of the published research on ball tracking in a categorical manner. The work also reviews the techniques used, their performance, advantages, limitations, and their suitability for particular sports. Finally, we present discussions on the published work so far, along with our views and opinions, followed by a modified block diagram of the tracking process. The paper concludes with final observations and suggestions on the scope of future work.

53 citations

Journal ArticleDOI
TL;DR: A novel deep learning approach for 2D ball detection and tracking (DLBT) in soccer videos posing various challenges is presented, which yields extraordinarily accurate and robust tracking results compared to other contemporary 2D trackers.
Abstract: Increasing interest and enthusiasm among sports lovers, and the economics involved, give high importance to sports video recording and analysis. Being crucial for decision making, ball detection and tracking in soccer has become a challenging research area. This paper presents a novel deep learning approach for 2D ball detection and tracking (DLBT) in soccer videos posing various challenges. A new 2-stage buffer median filtering background modelling is used for moving-object blob detection. A deep learning approach for classifying an image patch into three classes, i.e. ball, player, and background, is proposed first. A probabilistic bounding-box overlapping technique is then proposed for robust ball-track validation. Novel full and boundary grid concepts resume tracking in ball_track_lost and ball_out_of_frame situations. Unlike most published algorithms, DLBT does not require human intervention to identify the ball in the initial frames. DLBT yields extraordinarily accurate and robust tracking results compared to other contemporary 2D trackers, even in the presence of various challenges including very small ball size and fast movements.
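The buffer-median background modelling idea can be sketched as a per-pixel median over a frame buffer; the paper's 2-stage refinement and the deep-learning patch classifier are omitted here, and the toy sequence and threshold are invented for illustration:

```python
import numpy as np

def median_background(buffer):
    """Per-pixel median over a buffer of frames: a pixel occupied by a
    fast-moving object in only a minority of frames is voted out, leaving
    an estimate of the static background."""
    return np.median(buffer, axis=0)

def moving_blobs(frame, background, thresh=20):
    """Foreground mask: pixels that differ from the background model."""
    return np.abs(frame.astype(int) - background) > thresh

# Toy 5-frame grayscale sequence: a flat background of 50s with a bright
# "ball" pixel that moves one column per frame.
frames = np.full((5, 6, 6), 50, dtype=np.uint8)
for t in range(5):
    frames[t, 2, t] = 255

bg = median_background(frames)   # the moving ball is voted out of every pixel
mask = moving_blobs(frames[4], bg)
print(mask.sum())                # → 1: only the ball's current position is flagged
```

The blobs extracted from such a mask are what the patch classifier then labels as ball, player, or background.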

43 citations


Cited by
01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance, and describes numerous important application areas such as image-based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Users learn techniques that have proven useful through first-hand experience and a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. Appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

Posted Content
TL;DR: An exhaustive review of the research conducted in neuromorphic computing since the inception of the term is provided to motivate further work by illuminating gaps in the field where new research is needed.
Abstract: Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast with the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history. We begin with a 35-year review of the motivations and drivers of neuromorphic computing, then look at the major research areas of the field, which we define as neuro-inspired models, algorithms and learning approaches, hardware and devices, supporting systems, and finally applications. We conclude with a broad discussion on the major research topics that need to be addressed in the coming years to see the promise of neuromorphic computing fulfilled. The goals of this work are to provide an exhaustive review of the research conducted in neuromorphic computing since the inception of the term, and to motivate further work by illuminating gaps in the field where new research is needed.

570 citations

Journal ArticleDOI
TL;DR: A new NTC technique based on a combination of deep learning models, applicable to IoT traffic, provides better detection results than alternative algorithms without requiring any feature engineering, which is usually needed when applying other models.
Abstract: A network traffic classifier (NTC) is an important part of current network monitoring systems, whose task is to infer the network service currently used by a communication flow (e.g., HTTP and SIP). The detection is based on a number of features associated with the communication flow, for example, source and destination ports and bytes transmitted per packet. NTC is important because much information about a current network flow can be learned and anticipated just by knowing its network service (required latency, traffic volume, and possible duration). This is of particular interest for the management and monitoring of Internet of Things (IoT) networks, where NTC will help to segregate the traffic and behavior of heterogeneous devices and services. In this paper, we present a new technique for NTC based on a combination of deep learning models that can be used for IoT traffic. We show that a recurrent neural network (RNN) combined with a convolutional neural network (CNN) provides the best detection results. The natural domain for a CNN, which is image processing, has been extended to NTC in an easy and natural way. We show that the proposed method provides better detection results than alternative algorithms without requiring any feature engineering, which is usually needed when applying other models. A complete study is presented on several architectures that integrate a CNN and an RNN, including the impact of the features chosen and the length of the network flows used for training.
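The CNN-plus-RNN pipeline can be sketched in plain NumPy with random stand-in weights. The feature set, filter sizes, and class count below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical flow: 20 packets, each described by 4 per-packet features
# (e.g. size, inter-arrival time, direction, window size).
flow = rng.normal(size=(20, 4))

# CNN stage: a 1-D convolution over the packet sequence
# (8 filters of width 3 across all 4 feature channels), followed by ReLU.
conv_w = rng.normal(size=(8, 3, 4)) * 0.1
conv_out = np.stack([
    np.maximum([(flow[t:t + 3] * f).sum() for f in conv_w], 0)
    for t in range(len(flow) - 2)
])  # shape (18, 8): a shorter sequence of learned local patterns

# RNN stage: a simple Elman recurrence summarises the convolved sequence
# into a single hidden state.
w_xh = rng.normal(size=(8, 16)) * 0.1
w_hh = rng.normal(size=(16, 16)) * 0.1
h = np.zeros(16)
for x in conv_out:
    h = np.tanh(x @ w_xh + h @ w_hh)

w_out = rng.normal(size=(16, 5)) * 0.1   # 5 hypothetical service classes
scores = h @ w_out
print(scores.shape)  # → (5,): unnormalised class scores for the whole flow
```

The design point is the division of labour: the convolution detects local patterns across adjacent packets, while the recurrence integrates them over the whole flow, so no hand-crafted flow-level features are needed.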

469 citations

Journal ArticleDOI
TL;DR: This comprehensive survey focuses on the security architecture of IoT and provides a detailed taxonomy of major challenges associated with the field and the key technologies, including Radio Frequency Identification and Wireless Sensor Networks, that are enabling factors in the development of IoT.
Abstract: Understanding any computing environment requires familiarity with its underlying technologies. Internet of Things (IoT), being a new era of computing in the digital world, aims for the development of a large number of smart devices that would support a variety of applications and services. These devices are resource-constrained, and the services they provide impose specific requirements, among which security is the most prominent one. Therefore, in order to comprehend and conform to these requirements, there is a need to illuminate the underlying architecture of IoT and its associated elements. This comprehensive survey focuses on the security architecture of IoT and provides a detailed taxonomy of the major challenges associated with the field and the key technologies, including Radio Frequency Identification (RFID) and Wireless Sensor Networks (WSN), that are enabling factors in the development of IoT. The paper also discusses some of the protocols suitable for IoT infrastructure and open source tools and platforms for its development. Finally, a brief outline of major open issues, along with their potential solutions and future research directions, is given.

176 citations