scispace - formally typeset
Author

Sanjeev Sofat

Bio: Sanjeev Sofat is an academic researcher from PEC University of Technology. The author has contributed to research in topics: Malware & Malware analysis. The author has an h-index of 17 and has co-authored 89 publications receiving 1,303 citations.


Papers
Journal ArticleDOI
TL;DR: This survey paper provides an overview of techniques for analyzing and classifying malware, and finds that behavioral patterns, obtained either statically or dynamically, can be exploited to detect and classify unknown malware into known families using machine learning techniques.
Abstract: One of the major and serious threats on the Internet today is malicious software, often referred to as malware. The malware designed by attackers is polymorphic and metamorphic, able to change its code as it propagates. Moreover, the diversity and volume of malware variants severely undermine the effectiveness of traditional defenses, which typically rely on signature-based techniques and are unable to detect previously unknown malicious executables. The variants of a malware family share typical behavioral patterns reflecting their origin and purpose. These behavioral patterns, obtained either statically or dynamically, can be exploited to detect and classify unknown malware into known families using machine learning techniques. This survey paper provides an overview of techniques for analyzing and classifying malware.
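The family-classification idea in the abstract can be sketched with a minimal, hypothetical example: behavioral feature vectors compared against per-family profiles with a nearest-centroid rule. The family names and feature values below are illustrative only, not taken from the survey.

```python
# Hypothetical sketch: assign a sample to a known malware family by comparing
# its behavioral feature vector (e.g. counts of observed actions) against the
# centroid of each family's known vectors. All names/values are invented.

def centroid(vectors):
    """Element-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(sample, family_profiles):
    """Assign `sample` to the family whose centroid is nearest."""
    centroids = {name: centroid(vecs) for name, vecs in family_profiles.items()}
    return min(centroids, key=lambda name: squared_distance(sample, centroids[name]))

# Feature vectors: [file writes, registry edits, network connects] per run.
profiles = {
    "worm":   [[2, 1, 9], [3, 0, 8]],
    "trojan": [[7, 6, 1], [8, 5, 2]],
}
unknown = [3, 1, 7]  # behaves like the "worm" profiles
print(classify(unknown, profiles))  # -> worm
```

Real systems use far richer features (API-call sequences, n-grams) and stronger learners, but the detect-by-shared-behavior principle is the same.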

350 citations

Journal Article
TL;DR: This paper presents a review of vision-based hand gesture recognition; existing approaches are categorized into 3D model-based approaches and appearance-based approaches, highlighting their advantages and shortcomings and identifying open issues.
Abstract: With the development of ubiquitous computing, current user-interaction approaches based on keyboard, mouse and pen are no longer sufficient, and the limitations of these devices also limit the usable command set. Direct use of the hands as an input device is an attractive method for providing natural human-computer interaction, which has evolved from text-based interfaces through 2D graphical interfaces and multimedia-supported interfaces to fully fledged multi-participant Virtual Environment (VE) systems. Imagine the human-computer interaction of the future: a 3D application where you can move and rotate objects simply by moving and rotating your hand, all without touching any input device. This paper presents a review of vision-based hand gesture recognition. Existing approaches are categorized into 3D model-based approaches and appearance-based approaches, highlighting their advantages and shortcomings and identifying open issues.

296 citations

Journal ArticleDOI
TL;DR: A smartphone-based sensing and crowdsourcing technique that detects road surface conditions from motion-sensor data using the DTW2 technique, which had not previously been applied to such data, and shows better accuracy and efficiency than existing techniques.
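The paper matches motion-sensor traces with a DTW2 technique; its exact variant is not described here, but classic dynamic time warping, which such variants build on, can be sketched as follows. The accelerometer readings are invented for illustration.

```python
# Classic dynamic time warping (DTW) distance between two 1-D sequences,
# e.g. vertical-accelerometer traces recorded over the same stretch of road
# at different speeds. This is plain DTW, not the paper's DTW2 variant, and
# the sample readings below are made up.

def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = abs(a[i - 1] - b[j - 1])
            cost[i][j] = step + min(cost[i - 1][j],      # insertion
                                    cost[i][j - 1],      # deletion
                                    cost[i - 1][j - 1])  # match
    return cost[n][m]

smooth = [0.0, 0.1, 0.0, 0.1, 0.0]
pothole = [0.0, 0.1, 2.5, 0.2, 0.0]
# A pothole trace aligns poorly with a smooth-road trace:
print(dtw_distance(smooth, pothole) > dtw_distance(smooth, smooth))  # -> True
```

Because DTW warps the time axis, the same pothole signature is matched even when two drivers pass over it at different speeds, which is what makes it suitable for crowdsourced traces.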

100 citations

Proceedings ArticleDOI
01 Sep 2017
TL;DR: This paper presents a method for tuberculosis detection using deep learning that classifies chest X-ray (CXR) images into two categories: normal and abnormal.
Abstract: Tuberculosis (TB) is a major health threat in developing countries. Many patients die every year due to lack of treatment and errors in diagnosis. Developing a computer-aided diagnosis (CAD) system for TB detection can help in early diagnosis and in containing the disease. Most current CAD systems use handcrafted features; lately, however, there has been a shift towards deep-learning-based automatic feature extractors. In this paper, we present a method for tuberculosis detection using deep learning which classifies CXR images into two categories, normal and abnormal. We use a CNN architecture with 7 convolutional layers and 3 fully connected layers, and compare the performance of three different optimizers. Of these, the Adam optimizer performed best, with an overall accuracy of 94.73% and a validation accuracy of 82.09%. All results are obtained on the publicly available Montgomery and Shenzhen datasets.
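The abstract specifies 7 convolutional and 3 fully connected layers but not kernel sizes, strides, or pooling; the sketch below, with assumed hyper-parameters, merely traces how spatial dimensions shrink through such a stack.

```python
# Tracing feature-map sizes through a hypothetical 7-conv-layer stack on a
# 512x512 CXR input. Kernel size 3, stride 1, no padding, and 2x2 pooling
# after every second conv layer are ASSUMPTIONS; the paper does not state them.

def conv_out(size, kernel=3, stride=1, pad=0):
    """Output spatial size of a convolution (standard formula)."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, window=2):
    """Output spatial size of non-overlapping pooling."""
    return size // window

size = 512
for layer in range(1, 8):        # 7 convolutional layers
    size = conv_out(size)
    if layer % 2 == 0:           # pool after layers 2, 4, 6 (assumed)
        size = pool_out(size)
    print(f"after conv{layer}: {size}x{size}")
# Final map (58x58 under these assumptions) would be flattened and fed
# into the 3 fully connected layers ending in a 2-way normal/abnormal output.
```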

65 citations

Journal ArticleDOI
TL;DR: A deep learning-based fully convolutional encoder-decoder network whose unique design makes it especially suitable for segmenting lung fields from chest radiographs.
Abstract: Segmentation of lung fields is an important prerequisite step in chest radiographic computer-aided diagnosis systems, as it precisely defines the region of interest on which subsequent operations are applied. However, it is immensely challenging due to extreme variations in the shape and size of lungs, and manual segmentation is prone to large inter-observer and intra-observer variations. Thus, an automated method for lung field segmentation with sufficiently high accuracy is urgently needed. This paper presents a deep learning-based fully convolutional encoder-decoder network for segmenting lung fields from chest radiographs. The major contribution of this work is the unique design of the encoder-decoder network, which makes it especially suitable for lung field segmentation. The proposed network is trained, tested and evaluated on publicly available standard datasets. The evaluation results indicate that the performance of the proposed method, i.e. an accuracy of 98.73% and an overlap of 95.10%, is better than state-of-the-art methods.
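The quoted accuracy and overlap figures correspond to standard pixel-wise metrics on binary masks; a minimal sketch of how such numbers are computed, on toy 4x4 masks rather than real radiographs:

```python
# Pixel accuracy and Jaccard overlap between a predicted and a ground-truth
# binary lung mask -- the two metrics quoted in the abstract. The 4x4 masks
# below are toy data for illustration only.

def pixel_accuracy(pred, truth):
    """Fraction of pixels where prediction and ground truth agree."""
    pairs = list(zip(sum(pred, []), sum(truth, [])))  # flatten both masks
    return sum(p == t for p, t in pairs) / len(pairs)

def jaccard_overlap(pred, truth):
    """Intersection over union of the foreground (lung) pixels."""
    pairs = list(zip(sum(pred, []), sum(truth, [])))
    inter = sum(p == 1 and t == 1 for p, t in pairs)
    union = sum(p == 1 or t == 1 for p, t in pairs)
    return inter / union if union else 1.0

truth = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
pred  = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 1, 0, 0],   # one lung pixel missed
         [0, 0, 0, 0]]
print(pixel_accuracy(pred, truth))   # -> 0.9375 (15/16 pixels agree)
print(jaccard_overlap(pred, truth))  # -> 5/6 ~ 0.833
```

Note that overlap is the stricter metric: with lungs covering only a fraction of the radiograph, a trivial all-background prediction can score high accuracy but zero overlap.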

57 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
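The mail-filtering example in the fourth category above can be made concrete with a tiny learned filter: a minimal naive-Bayes-style word-count classifier trained on which messages a user kept or rejected. The messages and labels below are invented for illustration.

```python
# A minimal learned mail filter in the spirit of the fourth category above:
# it counts word frequencies in messages the user kept vs. rejected, then
# scores new mail under each label with add-one smoothing. Training data is
# invented; real filters use far larger vocabularies and corpora.
from collections import Counter
import math

def train(messages):
    """messages: list of (text, label) pairs, label 'keep' or 'reject'."""
    counts = {"keep": Counter(), "reject": Counter()}
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text, label):
    """Log-likelihood of `text` under `label`, with add-one smoothing."""
    vocab = set(counts["keep"]) | set(counts["reject"])
    total = sum(counts[label].values()) + len(vocab)
    return sum(math.log((counts[label][w] + 1) / total)
               for w in text.lower().split())

def classify(counts, text):
    return max(("keep", "reject"), key=lambda lbl: score(counts, text, lbl))

mail = [("meeting notes attached", "keep"),
        ("project schedule update", "keep"),
        ("win free money now", "reject"),
        ("free prize claim now", "reject")]
counts = train(mail)
print(classify(counts, "free money prize"))  # -> reject
```

As the user keeps rejecting new kinds of messages, retraining on the updated history adjusts the filter automatically, which is exactly the maintenance burden the passage says learning removes from the programmer.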

13,246 citations

01 Jan 2002

9,314 citations

Journal ArticleDOI
TL;DR: An analysis of comparative surveys in the field of gesture-based HCI, together with an analysis of existing literature on gesture recognition systems for human-computer interaction, categorized under different key parameters.
Abstract: As computers become more pervasive in society, facilitating natural human-computer interaction (HCI) will have a positive impact on their use. Hence, there has been growing interest in the development of new approaches and technologies for bridging the human-computer barrier. The ultimate aim is to bring HCI to a regime where interactions with computers are as natural as interactions between humans, and to this end, incorporating gestures in HCI is an important research area. Gestures have long been considered an interaction technique that can potentially deliver more natural, creative and intuitive methods for communicating with our computers. This paper provides an analysis of comparative surveys done in this area. The use of hand gestures as a natural interface serves as a motivating force for research in gesture taxonomies, representations and recognition techniques, and software platforms and frameworks, which are discussed briefly in this paper. It focuses on the three main phases of hand gesture recognition, i.e. detection, tracking and recognition. Different applications that employ hand gestures for efficient interaction are discussed under core and advanced application domains. This paper also provides an analysis of existing literature on gesture recognition systems for human-computer interaction, categorized under different key parameters, and discusses the advances needed to further improve present hand gesture recognition systems. The main goal of this survey is to provide researchers in the field of gesture-based HCI with a summary of progress achieved to date and to help identify areas where further research is needed.

1,338 citations

Proceedings ArticleDOI
01 Nov 2011
TL;DR: This work uses a state-of-the-art big and deep neural network combining convolution and max-pooling for supervised feature learning and classification of hand gestures given by humans to mobile robots using colored gloves.
Abstract: Automatic recognition of gestures using computer vision is important for many real-world applications such as sign language recognition and human-robot interaction (HRI). Our goal is a real-time hand gesture-based HRI interface for mobile robots. We use a state-of-the-art big and deep neural network (NN) combining convolution and max-pooling (MPCNN) for supervised feature learning and classification of hand gestures given by humans to mobile robots using colored gloves. The hand contour is retrieved by color segmentation, then smoothed by morphological image processing, which eliminates noisy edges. Our big and deep MPCNN classifies 6 gesture classes with 96% accuracy, nearly three times better than the nearest competitor. Experiments with mobile robots using an ARM 11 533MHz processor achieve real-time gesture recognition performance.
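The max-pooling step that gives the MPCNN its name can be sketched in isolation. The 4x4 feature map below is toy data, and the 2x2 window is the common choice rather than a detail confirmed by the paper.

```python
# 2x2 max-pooling over a single-channel feature map: each non-overlapping
# window is replaced by its maximum, halving each spatial dimension while
# keeping the strongest local activation. Toy data; window size is assumed.

def max_pool(fmap, window=2):
    h, w = len(fmap), len(fmap[0])
    return [[max(fmap[i + di][j + dj]
                 for di in range(window) for dj in range(window))
             for j in range(0, w, window)]
            for i in range(0, h, window)]

feature_map = [[1, 3, 2, 0],
               [4, 2, 1, 1],
               [0, 0, 5, 6],
               [1, 2, 7, 8]]
print(max_pool(feature_map))  # -> [[4, 2], [2, 8]]
```

Stacking convolutions with pooling like this makes the learned features increasingly translation-tolerant, which helps when a gloved hand appears at slightly different positions in the camera frame.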

555 citations