Author

Bir Bhanu

Other affiliations: University of Utah, Ford Motor Company, University of California ...
Bio: Bir Bhanu is an academic researcher at the University of California, Riverside. His research focuses on feature extraction and the cognitive neuroscience of visual object recognition. He has an h-index of 56 and has co-authored 553 publications receiving 13,849 citations. His previous affiliations include the University of Utah and Ford Motor Company.


Papers
Journal ArticleDOI
TL;DR: Experimental results show that the proposed GEI is an effective and efficient gait representation for individual recognition, and the proposed approach achieves highly competitive performance with respect to the published gait recognition approaches.
Abstract: In this paper, we propose a new spatio-temporal gait representation, called the Gait Energy Image (GEI), to characterize human walking properties for individual recognition by gait. To address the lack of training templates, we also propose a novel approach for human recognition that combines statistical gait features from real and synthetic templates. We compute the real templates directly from training silhouette sequences, while we generate the synthetic templates from training sequences by simulating silhouette distortion. We use a statistical approach to learn effective features from real and synthetic templates. We compare the proposed GEI-based gait recognition approach with other gait recognition approaches on the USF HumanID Database. Experimental results show that the proposed GEI is an effective and efficient gait representation for individual recognition, and that the proposed approach achieves highly competitive performance with respect to published gait recognition approaches.
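The GEI itself is simple to compute: it is the pixel-wise average of size-normalized, aligned binary silhouettes over a gait cycle. A minimal NumPy sketch, assuming the silhouettes are already extracted and aligned (the tiny frame sequence below is an illustrative toy, not real gait data):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a sequence of aligned binary silhouettes into a single GEI.

    `silhouettes` is an iterable of same-shaped 2D arrays with values in
    {0, 1}; the GEI is their pixel-wise mean, so bright pixels mark body
    parts that stay still (torso) and gray pixels mark motion (limbs).
    """
    stack = np.stack([np.asarray(s, dtype=float) for s in silhouettes])
    return stack.mean(axis=0)

# Toy 3-frame sequence: a stable "torso" column plus a swinging "leg" pixel.
frames = [
    np.array([[1, 0], [1, 0]]),
    np.array([[1, 0], [0, 1]]),
    np.array([[1, 0], [1, 0]]),
]
gei = gait_energy_image(frames)  # [1, 0] row stays 1.0; lower row goes gray
```

Because the average suppresses frame-to-frame silhouette noise, the GEI is also where the paper's "synthetic templates" plug in: distorted silhouettes simply contribute different frames to the same mean.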

1,670 citations

Journal ArticleDOI
Bir Bhanu
TL;DR: A review of the techniques used to solve the automatic target recognition (ATR) problem is given, with emphasis on algorithmic and implementation approaches.
Abstract: In this paper, a review of the techniques used to solve the automatic target recognition (ATR) problem is given, with emphasis on algorithmic and implementation approaches. ATR algorithms such as target detection, segmentation, feature computation, and classification are evaluated, and several new quantitative criteria are presented. Evaluation approaches are discussed, and various problems encountered in the evaluation of algorithms are addressed. Strategies used in database design are outlined. New techniques, such as the use of contextual cues, semantic and structural information, hierarchical reasoning in classification, and the incorporation of multiple sensors in ATR systems, are also presented.

481 citations

Journal ArticleDOI
TL;DR: An integrated local surface descriptor for surface representation and object recognition is introduced and, in order to speed up the search process and deal with a large set of objects, model local surface patches are indexed into a hash table.
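The hash-table indexing mentioned above can be sketched as quantizing each descriptor into a coarse integer key and bucketing model patches by that key, so a recognition probe touches only a few buckets instead of scanning every model. The descriptors, bin width, and model names below are illustrative assumptions, not the paper's actual local surface patch descriptor:

```python
from collections import defaultdict

def quantize(descriptor, bin_width=0.25):
    """Map a real-valued descriptor to a coarse integer key for hashing."""
    return tuple(int(v // bin_width) for v in descriptor)

def build_index(model_patches):
    """Hash model patches by quantized descriptor to avoid a linear scan.

    `model_patches` is a list of (model_id, descriptor) pairs; nearby
    descriptors land in the same bucket and become match candidates.
    """
    table = defaultdict(list)
    for model_id, desc in model_patches:
        table[quantize(desc)].append(model_id)
    return table

index = build_index([("mug", (0.1, 0.9)), ("phone", (0.12, 0.88)), ("cup", (0.7, 0.2))])
candidates = index[quantize((0.11, 0.91))]  # both "mug" and "phone" are close
```

A real system would also probe neighboring bins to handle descriptors that fall near a bin boundary; that refinement is omitted here for brevity.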

456 citations

Journal ArticleDOI
TL;DR: The experimental results on the UCR data set of 155 subjects with 902 images under pose variations and the University of Notre Dame data set of 302 subjects with time-lapse gallery-probe pairs are presented to compare and demonstrate the effectiveness of the proposed algorithms and the system.
Abstract: The human ear is a new class of relatively stable biometrics that has recently drawn researchers' attention. In this paper, we propose a complete human recognition system using 3D ear biometrics. The system consists of 3D ear detection, 3D ear identification, and 3D ear verification. For ear detection, we propose a new approach which uses a single reference 3D ear shape model and locates the ear helix and the antihelix parts in registered 2D color and 3D range images. For ear identification and verification using range images, two new representations are proposed. These include the ear helix/antihelix representation obtained from the detection algorithm and the local surface patch (LSP) representation computed at feature points. A local surface descriptor is characterized by a centroid, a local surface type, and a 2D histogram. The 2D histogram shows the frequency of occurrence of shape index values versus the angles between the normal of the reference feature point and those of its neighbors. Both shape representations are used to estimate the initial rigid transformation between a gallery-probe pair. This transformation is applied to selected locations of ears in the gallery set, and a modified iterative closest point (ICP) algorithm is used to iteratively refine the transformation to bring the gallery ear and probe ear into the best alignment in the sense of the least root mean square error. The experimental results on the UCR data set of 155 subjects with 902 images under pose variations and the University of Notre Dame data set of 302 subjects with time-lapse gallery-probe pairs are presented to compare and demonstrate the effectiveness of the proposed algorithms and the system.
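The ICP refinement step described above alternates two operations: match each probe point to its nearest gallery point, then solve for the least-squares rigid transform (here via the Kabsch/SVD method) and apply it. A plain 2D sketch, without the paper's modifications, outlier handling, or 3D range data:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t aligning P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(probe, gallery, iters=20):
    """Plain ICP: alternate nearest-neighbor matching and rigid refitting.

    Returns the aligned probe and the final root mean square error.
    """
    P = probe.copy()
    for _ in range(iters):
        # match each probe point to its nearest gallery point
        d = np.linalg.norm(P[:, None, :] - gallery[None, :, :], axis=2)
        matched = gallery[d.argmin(axis=1)]
        R, t = best_rigid_transform(P, matched)
        P = P @ R.T + t
    rms = np.sqrt(((P - matched) ** 2).sum(axis=1).mean())
    return P, rms

# Recover a known rotation of a small 2D point set.
theta = 0.3
Rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
gallery = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 2.0]])
probe = gallery @ Rot.T
aligned, rms = icp(probe, gallery)
```

The good initial transform supplied by the helix/antihelix or LSP matching matters precisely because this loop only converges to the nearest local minimum of the RMS error.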

345 citations

Journal ArticleDOI
01 Dec 1995
TL;DR: In this paper, a closed-loop image segmentation system is presented which incorporates a genetic algorithm to adapt the segmentation process to changes in image characteristics caused by variable environmental conditions such as time of day, time of year, clouds, etc.
Abstract: We present the first closed-loop image segmentation system which incorporates a genetic algorithm to adapt the segmentation process to changes in image characteristics caused by variable environmental conditions such as time of day, time of year, clouds, etc. The segmentation problem is formulated as an optimization problem, and the genetic algorithm efficiently searches the hyperspace of segmentation parameter combinations to determine the parameter set which maximizes the segmentation quality criteria. The goals of our adaptive image segmentation system are to provide continuous adaptation to normal environmental variations, to exhibit learning capabilities, and to provide robust performance when interacting with a dynamic environment. We present experimental results which demonstrate learning and the ability to adapt the segmentation performance in outdoor color imagery.
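The optimization loop described above can be sketched as a small genetic algorithm over a segmentation parameter vector. The fitness function below is a stand-in quadratic with a known peak; in the actual system it would rerun the segmenter with the candidate parameters and score the resulting segmentation quality:

```python
import random

def fitness(params):
    """Stand-in segmentation quality: peaks at threshold=0.6, smoothing=0.3.

    Purely illustrative; the paper's criteria score real segmentations.
    """
    t, s = params
    return 1.0 - (t - 0.6) ** 2 - (s - 0.3) ** 2

def evolve(pop_size=30, generations=60, mut=0.05, seed=1):
    """Minimal GA: tournament selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.random(), rng.random()] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # 2-way tournament: keep the fitter of two random parents
            a, b = rng.sample(pop, 2)
            return a if fitness(a) > fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p, q = pick(), pick()
            child = [(x + y) / 2 + rng.gauss(0, mut) for x, y in zip(p, q)]
            nxt.append([min(1.0, max(0.0, v)) for v in child])  # clamp to [0, 1]
        pop = nxt
    return max(pop, key=fitness)

best = evolve()  # converges near the (0.6, 0.3) optimum
```

Because each fitness evaluation in the real system is a full segmentation run, the GA's appeal is that it needs no gradients and samples the parameter hyperspace far more sparsely than a grid search would.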

324 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
TL;DR: A new optimization algorithm based on the law of gravity and mass interactions is introduced and the obtained results confirm the high performance of the proposed method in solving various nonlinear functions.

5,501 citations

Journal ArticleDOI
TL;DR: 40 selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images, and the thresholding algorithms that perform uniformly better over nondestructive testing and document image applications are identified.
Abstract: We conduct an exhaustive survey of image thresholding methods, categorize them, express their formulas under a uniform notation, and finally carry out their performance comparison. The thresholding methods are categorized according to the information they exploit, such as histogram shape, measurement space clustering, entropy, object attributes, spatial correlation, and local gray-level surface. Forty selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images. The comparison is based on combined performance measures. We identify the thresholding algorithms that perform uniformly better over nondestructive testing and document image applications. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1631316)

4,543 citations

Book
17 May 2013
TL;DR: This research presents a novel and scalable approach called “Smartfitting” that automates the labor-intensive, time-consuming, and therefore expensive process of designing and implementing statistical models for regression.
Abstract: General Strategies.- Regression Models.- Classification Models.- Other Considerations.- Appendix.- References.- Indices.

3,672 citations

01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance, and describes numerous important application areas such as image-based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Users learn techniques that have proven useful through first-hand experience and a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. Appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations