Other affiliations: Wright State University, Xerox, Indraprastha Institute of Information Technology
Bio: Himanshu Bhatt is an academic researcher from Birla Institute of Technology and Science. The author has contributed to research in topics including facial recognition systems and three-dimensional face recognition. The author has an h-index of 20 and has co-authored 54 publications receiving 1,512 citations. Previous affiliations of Himanshu Bhatt include Wright State University and Xerox.
TL;DR: The results on the plastic surgery database suggest that matching faces across plastic surgery is an arduous research challenge and that current state-of-the-art face recognition algorithms are unable to provide acceptable levels of identification performance; a concerted research effort is needed so that future face recognition systems can address this important problem.
Abstract: Advancement and affordability are leading to the popularity of plastic surgery procedures. Facial plastic surgery can be reconstructive, to correct facial feature anomalies, or cosmetic, to improve the appearance. Both corrective and cosmetic surgeries alter the original facial information to a large extent, thereby posing a great challenge for face recognition algorithms. The contributions of this research are 1) preparing a plastic surgery face database of 900 individuals, and 2) providing an analytical and experimental underpinning of the effect of plastic surgery on face recognition algorithms. The results on the plastic surgery database suggest that it is an arduous research challenge and that current state-of-the-art face recognition algorithms are unable to provide acceptable levels of identification performance. Therefore, it is imperative to initiate a research effort so that future face recognition systems will be able to address this important problem.
11 Nov 2010
TL;DR: A novel algorithm to recognize periocular images in the visible spectrum is proposed; the results show promise towards using the periocular region for recognition when the information is not sufficient for iris recognition.
Abstract: The performance of iris recognition is affected if the iris is captured at a distance. Further, images captured in the visible spectrum are more susceptible to noise than those captured in the near-infrared spectrum. This research proposes periocular biometrics as an alternative to iris recognition when iris images are captured at a distance. We propose a novel algorithm to recognize periocular images in the visible spectrum and study the effect of capture distance on the performance of periocular biometrics. The performance of the algorithm is evaluated on more than 11,000 images of the UBIRIS v2 database. The results show promise towards using the periocular region for recognition when the information is not sufficient for iris recognition.
TL;DR: A Quality by Design approach was applied to the development and optimization of a solid lipid nanoparticle (SLN) formulation of the hydrophilic drug rivastigmine (RHT); a histopathology study showed intact nasal mucosa with RHT SLN, indicating the safety of RHT SLN for intranasal administration.
Abstract: In the present investigation, a Quality by Design (QbD) approach was applied to the development and optimization of a solid lipid nanoparticle (SLN) formulation of the hydrophilic drug rivastigmine (RHT). RHT SLN were formulated by a homogenization and ultrasonication method using Compritol 888 ATO, Tween-80, and poloxamer-188 as lipid, surfactant, and stabilizer, respectively. The effects of the independent variables (X1 - drug:lipid ratio, X2 - surfactant concentration, and X3 - homogenization time) on the quality attributes of the SLN, i.e., the dependent variables (Y1 - size, Y2 - PDI, and Y3 - % entrapment efficiency (%EE)), were investigated using a 3^3 full factorial design. Multiple linear regression analysis and ANOVA were employed to identify and estimate the main, two-factor interaction (2FI), quadratic, and cubic effects. The optimized RHT SLN formula was derived from an overlay plot, on which the further effect of probe sonication was evaluated. The final RHT SLN showed a narrow size distribution (PDI 0.132±0.016) with a particle size of 82.5±4.07 nm and %EE of 66.84±2.49%. DSC and XRD studies showed incorporation of RHT into the imperfect crystal lattice of Compritol 888 ATO. In comparison to RHT solution, RHT SLN showed higher in-vitro and ex-vivo diffusion. The diffusion followed the Higuchi model, indicating drug diffusion from the lipid matrix due to erosion. A histopathology study showed intact nasal mucosa with RHT SLN, indicating the safety of RHT SLN for intranasal administration.
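A 3^3 full factorial design like the one described above pairs naturally with a main-effects regression. The following is a minimal, hypothetical sketch in Python; the factor names follow the abstract, but the synthetic response and coefficients are invented for illustration and are not the paper's data:

```python
# Hypothetical sketch of a 3^3 full factorial design for screening
# SLN formulation variables (synthetic response, illustrative only).
from itertools import product

LEVELS = (-1, 0, 1)  # coded low / mid / high for each factor

def full_factorial_3x3x3():
    """27-run design: X1 drug:lipid ratio, X2 surfactant %, X3 homogenization time."""
    return [list(run) for run in product(LEVELS, repeat=3)]

def fit_main_effects(X, y):
    """Ordinary least squares for y = b0 + b1*x1 + b2*x2 + b3*x3."""
    A = [[1.0] + [float(v) for v in row] for row in X]  # intercept column first
    n, p = len(A), 4
    # Normal equations: (A^T A) b = A^T y
    ata = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    aty = [sum(A[k][i] * y[k] for k in range(n)) for i in range(p)]
    # Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, p):
            f = ata[r][col] / ata[col][col]
            for c in range(col, p):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    b = [0.0] * p
    for r in range(p - 1, -1, -1):
        b[r] = (aty[r] - sum(ata[r][c] * b[c] for c in range(r + 1, p))) / ata[r][r]
    return b

design = full_factorial_3x3x3()
# Synthetic particle-size response: grows with lipid (x1), shrinks with
# surfactant (x2); the numbers are purely illustrative.
y = [100 + 8 * x1 - 6 * x2 + 2 * x3 for x1, x2, x3 in design]
b0, b1, b2, b3 = fit_main_effects(design, y)
```

Because the coded design is balanced and orthogonal, the least-squares fit recovers the main effects exactly; real formulation data would of course carry noise and higher-order terms.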
TL;DR: An automated algorithm to extract discriminating information from local regions of both sketches and digital face images is presented; it yields better identification performance than existing face recognition algorithms and two commercial face recognition systems.
Abstract: One of the important cues in solving crimes and apprehending criminals is matching sketches with digital face images. This paper presents an automated algorithm to extract discriminating information from local regions of both sketches and digital face images. Structural information along with minute details present in local facial regions are encoded using a multiscale circular Weber's local descriptor. Further, an evolutionary memetic optimization algorithm is proposed to assign an optimal weight to every local facial region to boost the identification performance. Since forensic sketches or digital face images can be of poor quality, a preprocessing technique is used to enhance the quality of images and improve the identification performance. Comprehensive experimental evaluation on different sketch databases shows that the proposed algorithm yields better identification performance than existing face recognition algorithms and two commercial face recognition systems.
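Weber's local descriptor, on which the paper's multiscale circular variant builds, encodes each pixel by the arctangent-bounded ratio of neighborhood intensity differences to the center intensity. A minimal sketch of the basic single-scale, square-neighborhood differential-excitation histogram follows; the bin count and epsilon guard are illustrative choices, and the paper's circular multiscale sampling and memetic weighting are not reproduced:

```python
# Illustrative sketch of the differential-excitation component of
# Weber's local descriptor (WLD) over 3x3 neighborhoods.
import math

def differential_excitation(patch):
    """patch: 3x3 list of intensities; returns the WLD excitation of the center."""
    center = patch[1][1]
    neighbors = [patch[r][c] for r in range(3) for c in range(3) if (r, c) != (1, 1)]
    diff_sum = sum(p - center for p in neighbors)
    # atan2 bounds the response to (-pi/2, pi/2); the epsilon guards a zero center.
    return math.atan2(diff_sum, center + 1e-9)

def wld_histogram(image, bins=8):
    """Normalized histogram of excitations over all interior 3x3 neighborhoods."""
    hist = [0] * bins
    h, w = len(image), len(image[0])
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = [row[c - 1:c + 2] for row in image[r - 1:r + 2]]
            xi = differential_excitation(patch)
            b = min(bins - 1, int((xi + math.pi / 2) / math.pi * bins))
            hist[b] += 1
    total = sum(hist) or 1
    return [v / total for v in hist]
```

On a flat image every excitation is zero, so the whole mass lands in the central bin; in a real pipeline such histograms would be computed per local facial region and then weighted, as the abstract describes.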
TL;DR: A multiobjective evolutionary granular algorithm is proposed to match face images before and after plastic surgery; it yields higher identification accuracy than existing algorithms and a commercial face recognition system.
Abstract: Widespread acceptability and use of biometrics for person authentication has instigated several techniques for evading identification. One such technique is altering facial appearance using surgical procedures, which has raised a challenge for face recognition algorithms. The increasing popularity of plastic surgery and its effect on automatic face recognition has attracted attention from the research community. However, the nonlinear variations introduced by plastic surgery remain difficult for existing face recognition systems to model. In this research, a multiobjective evolutionary granular algorithm is proposed to match face images before and after plastic surgery. The algorithm first generates non-disjoint face granules at multiple levels of granularity. The granular information is assimilated using a multiobjective genetic approach that simultaneously optimizes the selection of feature extractor for each face granule along with the weights of individual granules. On the plastic surgery face database, the proposed algorithm yields higher identification accuracy than existing algorithms and a commercial face recognition system.
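The weight-learning idea behind the granular fusion can be illustrated with a toy, single-objective stand-in: a plain elitist genetic algorithm searching for per-granule fusion weights that maximize rank-1 accuracy on synthetic match scores. Everything here (the score model, population sizes, mutation scale) is invented for the example; the paper's multiobjective approach and feature-extractor selection are not reproduced:

```python
# Toy GA learning per-granule fusion weights on synthetic match scores.
import random

random.seed(7)
GRANULES, GALLERY, PROBES = 4, 5, 5

def make_scores():
    """scores[g][p][q]: similarity of probe p to gallery subject q from granule g.
    Granules 0-1 are informative (true match p == q scores 1.0); 2-3 are noise."""
    scores = []
    for g in range(GRANULES):
        informative = g < 2
        grid = [[1.0 if (informative and p == q) else random.random() * 0.4
                 for q in range(GALLERY)] for p in range(PROBES)]
        scores.append(grid)
    return scores

def rank1_accuracy(weights, scores):
    """Fraction of probes whose weighted-sum fused score peaks at the true match."""
    hits = 0
    for p in range(PROBES):
        fused = [sum(w * scores[g][p][q] for g, w in enumerate(weights))
                 for q in range(GALLERY)]
        hits += fused.index(max(fused)) == p
    return hits / PROBES

def evolve(scores, pop_size=20, generations=30):
    """Plain elitist GA over nonnegative granule weights (single objective)."""
    pop = [[random.random() for _ in range(GRANULES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: -rank1_accuracy(w, scores))
        parents = pop[: pop_size // 2]          # elitism: keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 + random.gauss(0, 0.05) for x, y in zip(a, b)]
            children.append([max(0.0, w) for w in child])  # weights stay nonnegative
        pop = parents + children
    return max(pop, key=lambda w: rank1_accuracy(w, scores))

scores = make_scores()
best = evolve(scores)
acc = rank1_accuracy(best, scores)
```

With the informative granules scoring well above the noise ceiling, any weight vector that sufficiently emphasizes granules 0 and 1 achieves perfect rank-1 accuracy, which is what the GA converges to here.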
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
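The mail-filtering scenario in the fourth category is commonly handled with a naive Bayes classifier learned from each user's accept/reject decisions. A minimal illustrative sketch follows; the tiny corpus and whitespace word model are invented for the example:

```python
# Minimal naive Bayes mail filter learned from labeled examples.
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs with label 'spam' or 'ham'."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in messages:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    best_label, best_score = None, -math.inf
    for label in ("spam", "ham"):
        # log prior + log likelihood with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

corpus = [
    ("win money now", "spam"),
    ("cheap money offer", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch meeting tomorrow", "ham"),
]
wc, lc = train(corpus)
```

As new accept/reject examples arrive, retraining on the growing corpus updates the per-user filtering rules automatically, which is exactly the maintenance burden the passage says learning removes from the programmer.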
01 Jan 1997
TL;DR: The goal of this paper is to provide a comprehensive overview of the work that has been carried out over the last decade in the emerging field of antispoofing, with special attention to the mature and largely deployed face modality.
Abstract: In recent decades, we have witnessed the evolution of biometric technology from the first pioneering works in face and voice recognition to the current state of development wherein a wide spectrum of highly accurate systems may be found, ranging from largely deployed modalities, such as fingerprint, face, or iris, to more marginal ones, such as signature or hand. This path of technological evolution has naturally led to a critical issue that has only started to be addressed recently: the resistance of this rapidly emerging technology to external attacks and, in particular, to spoofing. Spoofing, referred to by the term presentation attack in current standards, is a purely biometric vulnerability that is not shared with other IT security solutions. It refers to the ability to fool a biometric system into recognizing an illegitimate user as a genuine one by means of presenting a synthetic forged version of the original biometric trait to the sensor. The entire biometric community, including researchers, developers, standardizing bodies, and vendors, has thrown itself into the challenging task of proposing and developing efficient protection methods against this threat. The goal of this paper is to provide a comprehensive overview of the work that has been carried out over the last decade in the emerging field of antispoofing, with special attention to the mature and largely deployed face modality. The work covers theories, methodologies, state-of-the-art techniques, and evaluation databases and also aims at providing an outlook into the future of this very active field of research.
TL;DR: A generic HFR framework is proposed in which both probe and gallery images are represented in terms of nonlinear similarities to a collection of prototype face images, and random sampling is introduced into the HFR framework to better handle challenges arising from the small sample size problem.
Abstract: Heterogeneous face recognition (HFR) involves matching two face images from alternate imaging modalities, such as an infrared image to a photograph or a sketch to a photograph. Accurate HFR systems are of great value in various applications (e.g., forensics and surveillance), where the gallery databases are populated with photographs (e.g., mug shot or passport photographs) but the probe images are often limited to some alternate modality. A generic HFR framework is proposed in which both probe and gallery images are represented in terms of nonlinear similarities to a collection of prototype face images. The prototype subjects (i.e., the training set) have an image in each modality (probe and gallery), and the similarity of an image is measured against the prototype images from the corresponding modality. The accuracy of this nonlinear prototype representation is improved by projecting the features into a linear discriminant subspace. Random sampling is introduced into the HFR framework to better handle challenges arising from the small sample size problem. The merits of the proposed approach, called prototype random subspace (P-RS), are demonstrated on four different heterogeneous scenarios: 1) near infrared (NIR) to photograph, 2) thermal to photograph, 3) viewed sketch to photograph, and 4) forensic sketch to photograph.
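The nonlinear prototype encoding at the heart of the framework can be sketched in a few lines: each image is re-encoded as its kernel similarities to the prototype subjects imaged in its own modality, and the encodings are then compared across modalities. The toy Python example below uses synthetic basis-vector features and a fixed linear distortion as a stand-in for the modality gap; the paper's discriminant projection and random sampling are omitted, and all names and parameters are illustrative:

```python
# Toy sketch of prototype-similarity encoding for cross-modality matching.
import math

DIM, PROTOTYPES, SUBJECTS = 8, 6, 4
GAMMA = 0.1

def basis(i, value):
    """Synthetic feature vector: `value` at position i, zeros elsewhere."""
    v = [0.0] * DIM
    v[i] = value
    return v

def rbf_similarity(a, b, gamma=GAMMA):
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def prototype_encode(feature, prototypes):
    """Nonlinear prototype representation: similarities to each prototype."""
    return [rbf_similarity(feature, p) for p in prototypes]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def gallery_view(f):
    """Toy stand-in for the modality gap (e.g. photo vs. NIR): a fixed
    linear distortion applied to every feature vector."""
    return [0.9 * x + 0.1 for x in f]

# Each prototype subject is seen in both modalities, as in the training set.
subjects = [basis(i, 4.0) for i in range(SUBJECTS)]
proto_probe = [basis(j, 3.0) for j in range(PROTOTYPES)]
proto_gallery = [gallery_view(p) for p in proto_probe]

# Encode every image against prototypes from its own modality, then match
# the encodings across modalities with cosine similarity.
probe_codes = [prototype_encode(s, proto_probe) for s in subjects]
gallery_codes = [prototype_encode(gallery_view(s), proto_gallery) for s in subjects]

def identify(probe_code):
    sims = [cosine(probe_code, g) for g in gallery_codes]
    return sims.index(max(sims))

rank1 = sum(identify(code) == i for i, code in enumerate(probe_codes))
```

The key property the sketch shows is that a subject's similarity profile against the prototypes is largely preserved across the modality distortion, so the encodings remain comparable even though raw features from the two modalities are not.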