Author

Mongi A. Abidi

Bio: Mongi A. Abidi is an academic researcher from the University of Tennessee. The author has contributed to research in topics including image processing and image segmentation, has an h-index of 42, and has co-authored 365 publications receiving 7,573 citations. Previous affiliations of Mongi A. Abidi include the Centre national de la recherche scientifique and Oak Ridge National Laboratory.


Papers
Journal ArticleDOI
Seong G. Kong, Jingu Heo, Besma Abidi, Joonki Paik, Mongi A. Abidi
TL;DR: This paper provides an up-to-date review of research efforts in face recognition techniques based on two-dimensional images in the visual and infrared (IR) spectra.

650 citations

Book
03 Jan 1992
TL;DR: This book surveys the state of the art in data fusion and sensor integration as of the early 1990s, covering techniques based on Bayesian reasoning, Dempster-Shafer belief accumulation, robust statistics, recursive fusion operators, Kalman filtering, least-squares estimation, regularization, possibility theory, and neural networks.
Abstract: Contents:
Data fusion and sensor integration - state-of-the-art 1990s, R.C. Luo and M.G. Kay
Multi-source spatial fusion using Bayesian reasoning, A. Elfes
Multi-sensor strategies using Dempster/Shafer belief accumulation, S.A. Hutchinson and A.C. Kak
Data fusion techniques using robust statistics, R. McKendall and M. Mintz
Recursive fusion operators - desirable properties and illustrations, Y. Chen and R.L. Kashyap
Distributed data fusion using Kalman filtering - a robotics application, C. Brown et al
Kinematic and statistical models for data fusion using Kalman filtering, T.J. Broida and S.S. Blackman
Least-squares fusion of multi-sensory data, R.O. Eason and R.C. Gonzalez
Fusion of multi-dimensional data using regularization, M.A. Abidi
Geometric fusion - minimizing uncertainty ellipsoid volumes, Y. Nakamura
Combination of fuzzy information in the framework of possibility theory, D. Dubois and H. Prade
Data fusion - a neural networks implementation, T.L. Huntsberger
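
Most of the chapters listed above share a common statistical core: combining redundant measurements so that the fused estimate is more certain than any single sensor reading. As a rough illustration only (not code from the book), here is a minimal sketch of least-squares, inverse-variance fusion of two independent measurements of the same scalar quantity; the function name and the numbers in the example are illustrative.

```python
def fuse_measurements(z1, var1, z2, var2):
    """Fuse two independent, unbiased measurements of the same scalar
    quantity by inverse-variance (least-squares) weighting. The fused
    variance is never larger than either input variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)

# Illustrative values: a laser range reading and a stereo estimate.
estimate, variance = fuse_measurements(10.2, 0.04, 9.8, 0.09)
print(f"fused range: {estimate:.2f} m, variance: {variance:.3f} m^2")
```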

555 citations

Journal ArticleDOI
TL;DR: The basic procedure is to first group the histogram components of a low-contrast image into a proper number of bins according to a selected criterion, then redistribute these bins uniformly over the grayscale, and finally ungroup the previously grouped gray-levels.
Abstract: This is Part II of the paper, "Gray-Level Grouping (GLG): an Automatic Method for Optimized Image Contrast Enhancement". Part I of this paper introduced a new automatic contrast enhancement technique: gray-level grouping (GLG). GLG is a general and powerful technique, which can be conveniently applied to a broad variety of low-contrast images and outperforms conventional contrast enhancement techniques. However, the basic GLG method still has limitations and cannot enhance certain classes of low-contrast images well, e.g., images with a noisy background. The basic GLG also cannot fulfill certain special application purposes, e.g., enhancing only part of an image which corresponds to a certain segment of the image histogram. In order to break through these limitations, this paper introduces an extension of the basic GLG algorithm, selective gray-level grouping (SGLG), which groups the histogram components in different segments of the grayscale using different criteria and, hence, is able to enhance different parts of the histogram to various extents. This paper also introduces two new preprocessing methods to eliminate background noise in noisy low-contrast images so that such images can be properly enhanced by the (S)GLG technique. The extension of (S)GLG to color images is also discussed in this paper. SGLG and its variations extend the capability of the basic GLG to a larger variety of low-contrast images, and can fulfill special application requirements. SGLG and its variations not only produce results superior to conventional contrast enhancement techniques, but are also fully automatic under most circumstances, and are applicable to a broad variety of images.
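
For concreteness, the sketch below implements the three-step outline above in a heavily simplified form: group the occupied histogram components, redistribute the groups uniformly over the grayscale, then ungroup within each interval. The fixed number of groups and the merge rule (always merge the smallest group into its smaller neighbor) are simplifications of this sketch; the actual GLG selects the number of groups automatically according to a quality criterion.

```python
import numpy as np

def glg_simplified(img, n_groups=32, levels=256):
    """Simplified gray-level grouping sketch for a uint8 grayscale image."""
    hist = np.bincount(img.ravel(), minlength=levels)
    # Step 1: each occupied gray level starts as its own group [start, end, count].
    groups = [[g, g, int(hist[g])] for g in range(levels) if hist[g] > 0]

    # Merge until only n_groups remain (simplified criterion).
    while len(groups) > n_groups:
        i = min(range(len(groups)), key=lambda k: groups[k][2])
        if i == 0:
            j = 1
        elif i == len(groups) - 1:
            j = i - 1
        else:
            j = i - 1 if groups[i - 1][2] <= groups[i + 1][2] else i + 1
        lo, hi = min(i, j), max(i, j)
        groups[lo] = [groups[lo][0], groups[hi][1], groups[lo][2] + groups[hi][2]]
        del groups[hi]

    # Steps 2-3: give each group an equal slice of the output grayscale
    # and spread (ungroup) its gray levels linearly within that slice.
    lut = np.zeros(levels, dtype=np.uint8)
    width = (levels - 1) / len(groups)
    for k, (start, end, _) in enumerate(groups):
        span = max(end - start, 1)
        for g in range(start, end + 1):
            lut[g] = round(k * width + (g - start) / span * width)
    return lut[img]
```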

303 citations

Journal ArticleDOI
TL;DR: In this paper, eyeglass regions are detected using an ellipse fitting method and replaced with eye template patterns to preserve the details useful for face recognition in the fused image.
Abstract: This paper describes a new software-based registration and fusion of visible and thermal infrared (IR) image data for face recognition in challenging operating environments that involve illumination variations. The combined use of visible and thermal IR imaging sensors offers a viable means for improving the performance of face recognition techniques based on a single imaging modality. Despite successes in indoor access control applications, imaging in the visible spectrum has difficulty recognizing faces under varying illumination conditions. Thermal IR sensors measure the energy radiated from the object, which is less sensitive to illumination changes, and are operable even in darkness. However, thermal images do not provide high-resolution data. Data fusion of visible and thermal images can produce face images robust to illumination variations. However, thermal face images with eyeglasses may fail to provide useful information around the eyes, since glass blocks a large portion of thermal energy. In this paper, eyeglass regions are detected using an ellipse fitting method and replaced with eye template patterns to preserve the details useful for face recognition in the fused image. Software registration of images replaces a special-purpose imaging sensor assembly and produces co-registered image pairs at a reasonable cost for large-scale deployment. Face recognition techniques using visible, thermal IR, and data-fused visible-thermal images are compared using commercial face recognition software (FaceIt®) and two visible-thermal face image databases (the NIST/Equinox and the UTK-IRIS databases). The proposed multiscale data-fusion technique improved recognition accuracy under a wide range of illumination changes. Experimental results showed that the eyeglass replacement increased the number of correct first-match subjects by 85% (NIST/Equinox) and 67% (UTK-IRIS).
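
The paper's fusion scheme is multiscale and includes the eyeglass detection and template replacement described above; the sketch below shows only the simplest conceivable pixel-level step, a fixed weighted average of co-registered images, as a rough illustration of visible-thermal fusion. The weight alpha and the assumption of pre-registered, equally sized grayscale inputs are simplifications of this sketch, not the paper's method.

```python
import numpy as np

def fuse_visible_thermal(vis, ir, alpha=0.6):
    """Weighted pixel-level fusion of co-registered visible and thermal
    images (both uint8 arrays of identical shape). alpha weights the
    visible channel; 1 - alpha weights the thermal channel."""
    fused = alpha * vis.astype(np.float32) + (1.0 - alpha) * ir.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```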

211 citations

Journal ArticleDOI
TL;DR: Various vector-valued techniques for detecting discontinuities in color images are discussed, mainly based on vector order statistics, followed by example results of color edge detection.
Abstract: Up to now, most color edge detection methods have been monochromatic-based techniques, which produce, in general, better results than traditional gray-value techniques. In this overview, we focus mainly on vector-valued techniques: it is easy to understand how to apply common edge detection schemes to every color component separately, whereas vector-valued techniques are new and different. The second part of the article addresses the topic of edge classification. While edges are often classified into step edges and ramp edges, we address the physical classification of edges based on their origin into shadow edges, reflectance edges, orientation edges, occlusion edges, and specular edges. In the rest of this article, we discuss various vector-valued techniques for detecting discontinuities in color images. Operators based on vector order statistics are then presented, followed by example results of color edge detection. We then discuss different approaches to a physical classification of edges by their origin.
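
As an illustration of the vector order statistics operators the article surveys, here is a sketch of a vector-range style detector: within each window, the color vectors are ranked by their aggregate distance to all other vectors in the window, and the edge response is the distance between the lowest-ranked vector (the vector median) and the highest-ranked one. The window size, Euclidean metric, and brute-force loop are illustrative choices, not the article's exact formulation.

```python
import numpy as np

def vector_range_edges(img, win=3):
    """Vector-range edge response for an H x W x 3 float color image.
    Large responses indicate color discontinuities."""
    h, w, _ = img.shape
    r = win // 2
    out = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = img[y - r:y + r + 1, x - r:x + r + 1].reshape(-1, 3)
            # Pairwise distances between all color vectors in the window.
            d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2)
            agg = d.sum(axis=1)  # aggregate distance = ranking key
            lo, hi = np.argmin(agg), np.argmax(agg)
            out[y, x] = np.linalg.norm(window[hi] - window[lo])
    return out
```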

201 citations


Cited by
MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, and covers planning under the differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.
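
Among the planning-under-uncertainty topics mentioned above, Markov decision processes admit a compact illustration. The sketch below is a generic value-iteration routine over a toy tabular MDP; the transition-tensor layout, reward convention, and the toy numbers are assumptions of this sketch, not the book's notation.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Value iteration for a finite MDP. P[a, s, s'] are transition
    probabilities, R[s, a] expected rewards. Returns the optimal value
    function and a greedy policy."""
    V = np.zeros(P.shape[1])
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = R + gamma * np.einsum("asn,n->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy 2-state, 2-action MDP (illustrative numbers only).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
              [[0.5, 0.5], [0.1, 0.9]]])  # transitions under action 1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])                # rewards R[s, a]
V, policy = value_iteration(P, R)
print("values:", V, "policy:", policy)
```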

6,340 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This work proposes to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network, and shows that this 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.

Abstract: 3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g., Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet, a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
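
The representation at the heart of the paper is an occupancy grid: each voxel is a binary variable indicating whether the shape occupies that cell (the paper models a probability distribution over such grids, sized 30x30x30 in their setup). As a rough illustration, here is a minimal voxelization of a point cloud onto a binary grid; the point-cloud input and the normalization are simplifications of this sketch.

```python
import numpy as np

def voxelize(points, grid=30):
    """Map an N x 3 point cloud onto a binary occupancy grid."""
    mins = points.min(axis=0)
    extent = (points.max(axis=0) - mins).max() + 1e-9  # uniform scale
    idx = ((points - mins) / extent * (grid - 1)).astype(int)
    vox = np.zeros((grid, grid, grid), dtype=bool)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return vox

# Example: voxelize points sampled on a unit sphere.
pts = np.random.randn(2000, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(voxelize(pts).sum(), "occupied voxels")
```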

4,266 citations

01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics of either practical significance or theoretical importance, and describes numerous important application areas such as image-based rendering and digital libraries.

Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Users learn techniques that have proven useful through first-hand experience and a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book covers essential topics of either practical significance or theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas, such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. The book is appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

Journal ArticleDOI
TL;DR: This survey reviews recent trends in video-based human motion capture and analysis, and discusses open problems for future research toward automatic visual analysis of human movement.

2,738 citations

Journal ArticleDOI
01 Jan 1997
TL;DR: This paper provides a tutorial on data fusion, introducing data fusion applications, process models, and identification of applicable techniques.
Abstract: Multisensor data fusion is an emerging technology applied to Department of Defense (DoD) areas such as automated target recognition, battlefield surveillance, and guidance and control of autonomous vehicles, and to non-DoD applications such as monitoring of complex machinery, medical diagnosis, and smart buildings. Techniques for multisensor data fusion are drawn from a wide range of areas, including artificial intelligence, pattern recognition, and statistical estimation. This paper provides a tutorial on data fusion, introducing data fusion applications, process models, and identification of applicable techniques. Comments are made on the state of the art in data fusion.

2,356 citations