Dakshina Ranjan Kisku
Other affiliations: Dr. B.C. Roy Engineering College, Durgapur, Indian Institute of Technology Kanpur, Asansol Engineering College
Bio: Dakshina Ranjan Kisku is an academic researcher at the National Institute of Technology, Durgapur. He has contributed to research on topics including facial recognition systems and biometrics, has an h-index of 17, and has co-authored 120 publications receiving 1,032 citations. His previous affiliations include Dr. B.C. Roy Engineering College, Durgapur and the Indian Institute of Technology Kanpur.
Papers published on a yearly basis
12 Dec 2007
TL;DR: The aim of this paper is to study the fusion at feature extraction level for face and fingerprint biometrics by extracting independent feature pointsets from the two modalities, and making the two pointsets compatible for concatenation.
Abstract: The aim of this paper is to study fusion at the feature extraction level for face and fingerprint biometrics. The proposed approach fuses the two traits by extracting independent feature pointsets from the two modalities and making the two pointsets compatible for concatenation. Moreover, to handle the 'curse of dimensionality' problem, the feature pointsets are properly reduced in dimension. Different feature reduction techniques are implemented, both before and after the fusion of the feature pointsets, and the results are duly recorded. The fused feature pointsets of the database and query face and fingerprint images are matched using techniques based on either point pattern matching or Delaunay triangulation. Comparative experiments are conducted on chimeric and real databases to assess the actual advantage of fusion performed at the feature extraction level, in comparison to the matching score level.
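The reduce-then-concatenate idea can be sketched with a toy example. This is a minimal illustration, not the paper's actual pipeline: the random feature vectors, their dimensions, and the PCA-based reduction are all assumptions (the paper compares several reduction techniques and works with matched pointsets rather than plain vectors).

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_reduce(features, k):
    """Project row-vector features onto their top-k principal components."""
    centered = features - features.mean(axis=0)
    # SVD of the centered data; right singular vectors are the components
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

# Hypothetical stand-ins for face and fingerprint feature sets,
# flattened to fixed-length vectors (one row per sample).
face_feats = rng.normal(size=(50, 128))    # 50 samples, 128-D face features
finger_feats = rng.normal(size=(50, 64))   # same 50 samples, 64-D fingerprint features

# Reduce each modality before fusion to tame the curse of dimensionality,
# then concatenate into a single fused feature vector per sample.
fused = np.hstack([pca_reduce(face_feats, 16), pca_reduce(finger_feats, 16)])
print(fused.shape)  # (50, 32)
```

Reducing each modality before concatenation keeps the fused vector short; the paper also evaluates the reverse order, reducing after fusion.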
07 Jun 2007
TL;DR: The experimental results demonstrate the effectiveness of the proposed system for automatic face identification, based on a graph matching technique applied to SIFT features extracted from face images.
Abstract: This paper presents a new face identification system based on a graph matching technique applied to SIFT features extracted from face images. Although SIFT features have been successfully used for general object detection and recognition, they have only recently been applied to face recognition. This paper further investigates the performance of identification techniques based on a graph matching topology drawn on SIFT features, which are invariant to rotation, scaling and translation. Face projections on images, represented by a graph, can be matched onto new images by maximizing a similarity function that takes into account spatial distortions and the similarities of the local features. Two graph-based matching techniques have been investigated to deal with false pair assignment and to reduce the number of features when finding the optimal feature set between database and query face SIFT features. The experimental results, obtained on the BANCA database, demonstrate the effectiveness of the proposed system for automatic face identification.
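A common way to obtain candidate SIFT correspondences while suppressing false pair assignments is Lowe's ratio test. The sketch below uses synthetic descriptors and plain nearest-neighbour search rather than the paper's graph matching; the descriptor dimension, the 0.8 threshold, and the data are illustrative assumptions.

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B with Lowe's ratio test:
    accept a match only if the nearest neighbour is clearly closer than
    the second nearest, which suppresses false pair assignments."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:
            matches.append((i, int(order[0])))
    return matches

rng = np.random.default_rng(1)
db = rng.normal(size=(20, 128))                          # database face descriptors
query = db[:5] + rng.normal(scale=0.01, size=(5, 128))   # noisy copies of 5 of them
print(ratio_match(query, db))
```

A graph-based matcher would additionally check that accepted pairs preserve the spatial layout of the keypoints, which the ratio test alone does not.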
20 Jun 2009
TL;DR: A robust face recognition technique based on the extraction and matching of SIFT features related to independent face areas and the Dempster-Shafer decision theory is applied to fuse the two matching techniques.
Abstract: Faces are highly deformable objects which may easily change their appearance over time. Not all face areas are subject to the same variability, so decoupling the information from independent areas of the face is of paramount importance to improving the robustness of any face recognition technique. This paper presents a robust face recognition technique based on the extraction and matching of SIFT features related to independent face areas. Both a global and a local (recognition-from-parts) matching strategy are proposed. The local strategy matches individual salient facial SIFT features connected to facial landmarks such as the eyes and the mouth, while the global strategy combines all SIFT features into a single feature. To reduce identification errors, Dempster-Shafer decision theory is applied to fuse the two matching techniques. The proposed algorithms are evaluated on the ORL and IITK face databases. The experimental results demonstrate the effectiveness and potential of the proposed face recognition techniques, even for partially occluded faces or faces with missing information.
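Dempster's rule of combination, the mechanism used to fuse the local and global matchers, can be implemented in a few lines. The mass values below are hypothetical beliefs over two candidate identities {A} and {B}, not numbers from the paper.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over subsets (frozensets) of the frame
    of discernment using Dempster's rule, normalising out the conflict."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical masses from the local and global SIFT matchers over two
# candidate identities {A} and {B}; {A, B} expresses ignorance.
local_m = {frozenset('A'): 0.6, frozenset('B'): 0.1, frozenset('AB'): 0.3}
global_m = {frozenset('A'): 0.5, frozenset('B'): 0.2, frozenset('AB'): 0.3}
fused = dempster_combine(local_m, global_m)
print(fused[frozenset('A')])  # belief in A after fusion, ~0.759
```

Because both matchers lean toward A, the combined mass on A rises above either input, which is exactly the reinforcing behaviour wanted from the fusion step.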
15 Jul 2009
TL;DR: A robust and efficient ear recognition system is presented which uses the Scale Invariant Feature Transform (SIFT) as a feature descriptor for the structural representation of ear images; results show improved recognition accuracy when invariant features are extracted from color slice regions, maintaining the robustness of the system.
Abstract: Ear biometrics is considered one of the most reliable and invariant biometric characteristics, in line with iris and fingerprint characteristics. In many respects, ear biometrics can be compared with face biometrics regarding physiological and texture characteristics. In this paper, a robust and efficient ear recognition system is presented which uses the Scale Invariant Feature Transform (SIFT) as a feature descriptor for the structural representation of ear images. To make user authentication more robust, only the regions whose color probabilities fall in certain ranges are considered for invariant SIFT feature extraction, with the K-L divergence used to maintain color consistency. The ear skin color model is formed with a Gaussian mixture model (GMM), clustering the ear color pattern using vector quantization. Finally, K-L divergence is applied within the GMM framework to record the color similarity in the specified ranges, comparing the color similarity between a pair of reference model and probe ear images. After segmentation of the ear images into color slice regions, SIFT keypoints are extracted and an augmented vector of the extracted SIFT features is created for matching, which is accomplished between a pair of reference model and probe ear images. The proposed technique has been tested on the IITK Ear database, and the experimental results show improvements in recognition accuracy when invariant features are extracted from color slice regions to maintain the robustness of the system.
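The K-L divergence used to compare color distributions has a simple discrete form. The sketch below compares hypothetical three-bin color histograms rather than the paper's GMM-based skin model; the bin counts and values are assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q) between two
    normalised histograms, with a small epsilon to avoid log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical skin-color histograms of a reference ear and two probes:
# low divergence suggests the probe falls in the same color slice.
reference = [0.1, 0.6, 0.3]
probe_same = [0.12, 0.58, 0.30]   # similar color distribution
probe_other = [0.6, 0.1, 0.3]    # clearly different distribution
print(kl_divergence(reference, probe_same), kl_divergence(reference, probe_other))
```

Regions whose divergence from the reference model stays below a chosen threshold would be kept for SIFT extraction; everything else would be discarded as color-inconsistent.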
TL;DR: A convolutional neural network (CNN) based multi-image augmentation technique for detecting COVID-19 in chest X-Ray and chest CT scan images of coronavirus suspected individuals to assist traditional RT-PCR methodology for accurate clinical diagnosis.
Abstract: COVID-19 has, until recently, posed as a highly infectious and deadly pneumonia-type disease. The novel coronavirus, or SARS-CoV-2 strain, is responsible for COVID-19 and has already shown the deadly nature of respiratory disease by threatening the health of millions of people across the globe. Clinical studies reveal that a COVID-19 infected person may experience dry cough, muscle pain, headache, fever, sore throat and mild to moderate respiratory illness, while the infection badly affects the lungs. The lungs are therefore a prominent internal organ for diagnosing the gravity of COVID-19 infection using chest X-Ray and CT scan images. Despite its lengthy testing time, RT-PCR is a proven methodology for detecting coronavirus infection, but it can yield more false positive and false negative results than desired. Therefore, to assist the traditional RT-PCR methodology towards accurate clinical diagnosis, COVID-19 screening can be adopted using X-Ray and CT scan images of an individual's lungs. Such image-based diagnosis promises to simplify the detection of coronavirus infection while driving false positive and false negative rates toward zero. This paper reports a convolutional neural network (CNN) based multi-image augmentation technique for detecting COVID-19 in chest X-Ray and chest CT scan images of coronavirus-suspected individuals. Multi-image augmentation exploits discontinuity information obtained in filtered images to increase the number of effective examples for training the CNN model. With this approach, the proposed model exhibits classification accuracies of around 95.38% and 98.97% for CT scan and X-Ray images respectively. CT scan images with multi-image augmentation achieve a sensitivity of 94.78% and a specificity of 95.98%, whereas X-Ray images with multi-image augmentation achieve a sensitivity of 99.07% and a specificity of 98.88%. Evaluation was carried out on publicly available databases containing both chest X-Ray and CT scan images, and the experimental results are also compared with the ResNet-50 and VGG-16 models.
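The augmentation idea, expanding each training image with discontinuity-filtered copies, can be sketched roughly as below. Simple finite-difference gradients stand in for whatever filters the paper actually uses, and the array sizes are arbitrary placeholders for a CT slice.

```python
import numpy as np

def augment_with_filters(image):
    """Expand one training image into several variants by adding simple
    discontinuity (gradient) filtered copies alongside the original --
    a rough stand-in for the paper's multi-image augmentation idea."""
    gx = np.zeros_like(image)
    gy = np.zeros_like(image)
    gx[:, 1:] = np.abs(np.diff(image, axis=1))  # horizontal discontinuities
    gy[1:, :] = np.abs(np.diff(image, axis=0))  # vertical discontinuities
    return [image, gx, gy, gx + gy]

rng = np.random.default_rng(2)
scan = rng.random((64, 64))        # hypothetical normalised CT slice
batch = augment_with_filters(scan)
print(len(batch), batch[0].shape)  # 4 variants per input image
```

Each input image then contributes four training examples instead of one, which is the "increasing the number of effective examples" effect described in the abstract.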
01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance and describes numerous important application areas such as image based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Readers learn techniques that have proven useful in first-hand experience, along with a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, the book covers essential topics that either reflect practical significance or are of theoretical importance, discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. The book is appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.
01 Jan 1999
16 Nov 1998
01 Jan 2002
TL;DR: In this paper, discriminant correlation analysis (DCA) is proposed for feature fusion; it maximizes the pairwise correlations across the two feature sets while eliminating between-class correlations and restricting correlations to be within the classes.
Abstract: Information fusion is a key step in multimodal biometric systems. The fusion of information can occur at different levels of a recognition system, i.e., at the feature level, matching-score level, or decision level. However, feature-level fusion is believed to be more effective, because a feature set contains richer information about the input biometric data than the matching score or the output decision of a classifier. The goal of feature fusion for recognition is to combine relevant information from two or more feature vectors into a single vector with more discriminative power than any of the inputs. In pattern recognition problems, we are also interested in separating the classes. In this paper, we present discriminant correlation analysis (DCA), a feature-level fusion technique that incorporates the class associations into the correlation analysis of the feature sets. DCA performs an effective feature fusion by maximizing the pairwise correlations across the two feature sets while, at the same time, eliminating the between-class correlations and restricting the correlations to be within the classes. Our proposed method can be used in pattern recognition applications for fusing features extracted from multiple modalities or for combining different feature vectors extracted from a single modality. It is noteworthy that DCA is the first technique that considers class structure in feature fusion. Moreover, it has very low computational complexity and can be employed in real-time applications. Multiple sets of experiments, performed on various biometric databases using different feature extraction techniques, show the effectiveness of our proposed method, which outperforms other state-of-the-art approaches.
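The between-set decorrelation at the heart of a DCA-style fusion can be sketched with an SVD of the cross-covariance matrix. This is a simplified illustration: full DCA first whitens the between-class scatter of each set (omitted here), and the data and dimensions below are synthetic.

```python
import numpy as np

def decorrelate_pair(x, y):
    """Core step of a DCA-style fusion (simplified sketch): rotate two
    centred feature sets so their between-set covariance is diagonal,
    i.e. feature i of one set correlates only with feature i of the other.
    Full DCA additionally whitens the between-class scatter first."""
    xc = x - x.mean(axis=0)
    yc = y - y.mean(axis=0)
    u, _, vt = np.linalg.svd(xc.T @ yc, full_matrices=False)
    return xc @ u, yc @ vt.T

rng = np.random.default_rng(3)
x = rng.normal(size=(100, 6))   # modality 1 features (100 samples)
y = rng.normal(size=(100, 4))   # modality 2 features (same samples)
xp, yp = decorrelate_pair(x, y)

cross = xp.T @ yp               # should now be (nearly) diagonal
off_diag = cross - np.diag(np.diag(cross))
print(np.allclose(off_diag, 0))

# Fuse by concatenation (DCA also supports summation once dims match)
fused = np.hstack([xp, yp])
```

With the cross-covariance diagonalised, each fused coordinate pairs one component from each modality, so either concatenation or summation yields features whose modalities agree dimension by dimension.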