Author

Sruti Das Choudhury

Other affiliations: University of Warwick
Bio: Sruti Das Choudhury is an academic researcher at the University of Nebraska–Lincoln. The author has contributed to research in the topics of computer science and segmentation, has an h-index of 10, and has co-authored 24 publications receiving 423 citations. Previous affiliations of Sruti Das Choudhury include the University of Warwick.

Papers
Journal ArticleDOI
TL;DR: Experimental results show STM-SPP outperforms several silhouette-based gait recognition methods.

104 citations

Journal ArticleDOI
TL;DR: A framework for plant phenotyping in a multimodal, multi-view, time-lapsed, high-throughput imaging system and a taxonomy of phenotypes that may be derived by image analysis for better understanding of morphological structure and functional processes in plants are provided.
Abstract: The complex interaction between a genotype and its environment controls the biophysical properties of a plant, manifested in observable traits, i.e., the plant's phenome, which influences resource acquisition, performance, and yield. High-throughput automated image-based plant phenotyping refers to sensing and quantifying plant traits non-destructively by analyzing images captured at regular intervals and with precision. While phenomic research has drawn significant attention in the last decade, extracting meaningful and reliable numerical phenotypes from plant images, especially by considering a plant's individual components, e.g., leaves, stem, fruit, and flower, remains a critical bottleneck to translating advances in phenotyping technology into genetic insights, due to various challenges including lighting variations, plant rotations, and self-occlusions. The paper provides (1) a framework for plant phenotyping in a multimodal, multi-view, time-lapsed, high-throughput imaging system; (2) a taxonomy of phenotypes that may be derived by image analysis for better understanding of morphological structure and functional processes in plants; (3) a brief discussion of publicly available datasets to encourage algorithm development and uniform comparison with state-of-the-art methods; (4) an overview of the state-of-the-art image-based high-throughput plant phenotyping methods; and (5) open problems for the advancement of this research field.

94 citations
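The survey distinguishes holistic phenotypes (measures of the whole plant as one object) from component phenotypes (per-leaf, per-stem measures). A minimal sketch of how a holistic phenotype might be computed from a segmented binary mask; the function and the toy mask are illustrative assumptions, not code from the paper:

```python
import numpy as np

def holistic_phenotypes(mask):
    """Compute simple holistic phenotypes from a binary plant mask.

    mask: 2-D array, nonzero pixels belong to the plant (side view).
    Returns bounding-box height and width plus projected area, in pixels.
    """
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return {"height_px": 0, "width_px": 0, "area_px": 0}
    return {
        "height_px": int(rows.max() - rows.min() + 1),
        "width_px": int(cols.max() - cols.min() + 1),
        "area_px": int(rows.size),
    }

# Toy 6x6 mask standing in for a segmented side-view image.
mask = np.zeros((6, 6), dtype=np.uint8)
mask[1:5, 2] = 1   # "stem": 4 pixels tall in column 2
mask[2, 1:4] = 1   # a "leaf" crossing the stem in row 2
phenos = holistic_phenotypes(mask)
print(phenos)
```

In a real pipeline the mask would come from a segmentation step; pixel measures would then be converted to physical units via camera calibration.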

Journal ArticleDOI
TL;DR: A two-phase, view-invariant, multiscale gait recognition method that is robust to clothing variation and the presence of a carried item; a weighted random subspace learning-based classification exploits the high dimensionality of the feature space for improved identification.

82 citations
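The classification step above is a weighted random subspace scheme: many weak learners are trained on random subsets of the feature dimensions and their decisions are aggregated. The sketch below illustrates the plain (unweighted) random-subspace idea with nearest-class-mean base learners on synthetic data; the function name, the data, and the unweighted majority vote are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

def subspace_vote(train_X, train_y, test_x, n_subspaces=25, dim=8):
    """Random-subspace ensemble sketch: each weak learner sees a random
    subset of feature dimensions, classifies the test point by nearest
    class mean in that subspace, and the votes are tallied."""
    classes = np.unique(train_y)
    votes = np.zeros(classes.size)
    n_features = train_X.shape[1]
    for _ in range(n_subspaces):
        idx = rng.choice(n_features, size=dim, replace=False)
        dists = [np.linalg.norm(test_x[idx] -
                                train_X[train_y == c][:, idx].mean(axis=0))
                 for c in classes]
        votes[int(np.argmin(dists))] += 1
    return int(classes[np.argmax(votes)])

# Synthetic 20-D "gait features": two well-separated classes.
n_feat = 20
X = np.vstack([rng.normal(0.0, 0.3, size=(10, n_feat)),
               rng.normal(5.0, 0.3, size=(10, n_feat))])
y = np.array([0] * 10 + [1] * 10)
pred = subspace_vote(X, y, np.full(n_feat, 5.0))
print(pred)
```

The paper's weighting would replace the uniform vote tally with per-subspace weights; the high-dimensional feature space is what makes many informative random subspaces available.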

Journal ArticleDOI
TL;DR: A three-phase gait recognition method that analyzes the spatio-temporal shape and dynamic motion characteristics of a human subject's silhouettes to identify the subject in the presence of most of the challenging factors that affect existing gait recognition systems.

58 citations

Journal ArticleDOI
TL;DR: A novel computer vision based algorithm for automated detection of individual leaves and the stem to compute new component phenotypes along with a public release of a benchmark dataset, i.e., UNL-CPPD is introduced.
Abstract: Image-based plant phenotyping facilitates the extraction of traits noninvasively by analyzing a large number of plants in a relatively short period of time. It has the potential to compute advanced phenotypes by considering the whole plant as a single object (holistic phenotypes) or as individual components, i.e., leaves and the stem (component phenotypes), to investigate the biophysical characteristics of the plants. The emergence timing, the total number of leaves present at any point in time, and the growth of individual leaves during the vegetative stage of a maize plant's life cycle are significant phenotypic expressions that best contribute to assessing plant vigor. However, an automated image-based solution to this novel problem is yet to be explored. A set of new holistic and component phenotypes is introduced in this paper. To compute the component phenotypes, it is essential to detect the individual leaves and the stem. Thus, the paper introduces a novel method to reliably detect the leaves and the stem of maize plants by analyzing 2-dimensional visible-light image sequences captured from the side, using a graph-based approach. The total number of leaves is counted and the length of each leaf is measured for all images in the sequence to monitor leaf growth. To evaluate the performance of the proposed algorithm, we introduce the University of Nebraska–Lincoln Component Plant Phenotyping Dataset (UNL-CPPD) and provide ground truth to facilitate new algorithm development and uniform comparison. The temporal variation of the component phenotypes regulated by genotypes and environment (i.e., greenhouse) is experimentally demonstrated for the maize plants on UNL-CPPD. Statistical models are applied to analyze the greenhouse environment impact and demonstrate the genetic regulation of the temporal variation of the holistic phenotypes on the public dataset called Panicoid Phenomap-1.
The central contribution of the paper is a novel computer-vision-based algorithm for automated detection of individual leaves and the stem to compute new component phenotypes, along with the public release of a benchmark dataset, i.e., UNL-CPPD. Detailed experimental analyses are performed to demonstrate the temporal variation of the holistic and component phenotypes in maize regulated by environment and genetic variation, with a discussion of their significance in the context of plant science.

54 citations
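One common way to realize a graph-based leaf/stem decomposition — not necessarily this paper's exact algorithm — is to skeletonize the plant silhouette, treat skeleton pixels as graph nodes under 8-connectivity, and read off degree-1 nodes as candidate leaf tips or the stem base. A toy sketch on a hand-made T-shaped skeleton (the skeleton and helper are hypothetical):

```python
import numpy as np

def skeleton_graph_tips(skel):
    """Treat skeleton pixels as graph nodes (8-connectivity);
    return the degree-1 nodes, i.e., candidate leaf tips / stem ends."""
    pts = set(map(tuple, np.argwhere(skel)))

    def degree(p):
        r, c = p
        return sum((r + dr, c + dc) in pts
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    return sorted(p for p in pts if degree(p) == 1)

# A tiny T-shaped skeleton: a vertical "stem" with one horizontal "leaf".
skel = np.zeros((5, 5), dtype=np.uint8)
skel[0:5, 2] = 1   # stem: column 2, rows 0..4
skel[2, 2:5] = 1   # leaf: row 2, columns 2..4
tips = skeleton_graph_tips(skel)
print(tips)        # stem top, leaf tip, stem base
```

On real images the skeleton would come from a thinning step on the segmented silhouette, and leaf length could be measured as the path length from each tip back to the stem.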


Cited by
01 Jan 2016
Biomechanics and Motor Control of Human Movement.

1,689 citations

Journal ArticleDOI
TL;DR: A new finger vein recognition algorithm based on Band-Limited Phase-Only Correlation (BLPOC), and a new type of geometrical feature called Width-Centroid Contour Distance (WCCD) that improves the accuracy of finger geometry recognition.
Abstract: Highlights: a new finger vein recognition algorithm based on Band-Limited Phase-Only Correlation; finger width and Centroid Contour Distance for finger geometry recognition; the fusion of vein and geometry for a finger-based bimodal biometric system; a new infrared finger image database made publicly available on the web. In this paper, a new approach to multimodal finger biometrics based on the fusion of finger vein and finger geometry recognition is presented. In the proposed method, Band-Limited Phase-Only Correlation (BLPOC) is utilized to measure the similarity of finger vein images. Unlike previous methods, BLPOC is resilient to noise, occlusions, and rescaling factors, and thus can enhance the performance of finger vein recognition. For finger geometry recognition, a new type of geometrical feature called Width-Centroid Contour Distance (WCCD) is proposed. WCCD combines the finger width with the Centroid Contour Distance (CCD). Compared with a single type of feature, the fusion of W and CCD can improve the accuracy of finger geometry recognition. Finally, we integrate the finger vein and finger geometry recognition results by a score-level fusion method based on the weighted SUM rule. Experimental evaluation using our own database, collected from 123 volunteers, resulted in efficient recognition performance, with an equal error rate (EER) of 1.78% and a total processing time of 24.22 ms.

235 citations
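BLPOC correlates two images using only the phase of their cross-spectrum, restricted to a central low-frequency band where the image content is reliable; the height of the correlation peak serves as the similarity score. A rough numpy sketch — the `keep` band fraction and the random test images are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def poc_peak(img1, img2, keep=0.5):
    """Band-limited phase-only correlation sketch: normalize the
    cross-spectrum to unit magnitude (phase only), zero out the
    high-frequency band, and take the peak of the inverse transform."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12            # keep phase, drop magnitude
    cross = np.fft.fftshift(cross)
    h, w = cross.shape
    bh, bw = int(h * keep) // 2, int(w * keep) // 2
    band = np.zeros_like(cross)
    sl = (slice(h // 2 - bh, h // 2 + bh + 1),
          slice(w // 2 - bw, w // 2 + bw + 1))
    band[sl] = cross[sl]                      # band-limit around DC
    return np.fft.ifft2(np.fft.ifftshift(band)).real.max()

rng = np.random.default_rng(1)
a = rng.normal(size=(32, 32))
b = rng.normal(size=(32, 32))
same = poc_peak(a, a)   # identical images: sharp, high peak
diff = poc_peak(a, b)   # unrelated images: low, noisy floor
print(same, diff)
```

For a genuine match the peak is tall and sharp even under brightness changes, because magnitude information is discarded; the band limit suppresses the unreliable high-frequency phase.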

Posted Content
TL;DR: DistancePPG proposes a new method of combining skin-color change signals from different tracked regions of the face using a weighted average, where the weights depend on the blood perfusion and incident light intensity in each region, to improve the signal-to-noise ratio (SNR) of the camera-based estimate.
Abstract: Vital signs such as pulse rate and breathing rate are currently measured using contact probes. However, non-contact methods for measuring vital signs are desirable both in hospital settings (e.g., in the NICU) and for ubiquitous in-situ health tracking (e.g., on mobile phones and computers with webcams). Recently, camera-based non-contact vital sign monitoring has been shown to be feasible. However, camera-based vital sign monitoring is challenging for people with darker skin tones, under low lighting conditions, and/or during movement of an individual in front of the camera. In this paper, we propose distancePPG, a new camera-based vital sign estimation algorithm which addresses these challenges. DistancePPG proposes a new method of combining skin-color change signals from different tracked regions of the face using a weighted average, where the weights depend on the blood perfusion and incident light intensity in the region, to improve the signal-to-noise ratio (SNR) of the camera-based estimate. One of our key contributions is a new automatic method for determining the weights based only on the video recording of the subject. The gains in SNR of the camera-based PPG estimated using distancePPG translate into a reduction of the error in vital sign estimation, and thus expand the scope of camera-based vital sign monitoring to potentially challenging scenarios. Further, a dataset will be released, comprising synchronized video recordings of the face and pulse-oximeter-based ground truth recordings from the earlobe for people with different skin tones, under different lighting conditions, and for various motion scenarios.

225 citations
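The core of distancePPG is a weighted average of per-region skin-color signals, with weights reflecting each region's signal quality. A hedged sketch on synthetic signals — the inverse-variance weights and the FFT-based SNR proxy below stand in for the paper's perfusion- and lighting-based weights:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 30.0                                  # 30 fps camera
t = np.arange(0, 10, 1 / fs)               # 10 s of "video"
pulse = np.sin(2 * np.pi * 1.2 * t)        # underlying 72 bpm pulse

# Hypothetical per-region signals: same pulse, region-specific noise
# (standing in for regions with different perfusion / lighting).
noise_sd = np.array([0.2, 1.0, 3.0])
regions = np.stack([pulse + rng.normal(0, sd, t.size) for sd in noise_sd])

def combine(regions, weights):
    """Weighted average of region signals along the region axis."""
    w = np.asarray(weights, float)
    return (w[:, None] * regions).sum(axis=0) / w.sum()

def snr(sig):
    """SNR proxy: fraction of spectral power in the pulse band."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    band = (freqs > 1.0) & (freqs < 1.4)
    return spec[band].sum() / spec.sum()

uniform = combine(regions, [1, 1, 1])
weighted = combine(regions, 1 / noise_sd ** 2)  # favor clean regions
print(snr(uniform), snr(weighted))
```

Down-weighting noisy regions is what lets the method cope with darker skin tones and poor lighting: regions with weak pulsatile signal contribute little to the final estimate.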

Journal ArticleDOI
TL;DR: IDNet is the first system that exploits a deep learning approach as a universal feature extractor for gait recognition, and that combines classification results from subsequent walking cycles in a multi-stage decision-making framework.

209 citations