
Showing papers by "Mongi A. Abidi published in 2016"


Journal ArticleDOI
TL;DR: An automatic selection framework for the optimal alignment method is presented to improve the performance of face recognition, built on two qualitative prediction models: a principal-curvature-map similarity index between sequential target bands and a reference band in the hyperspectral image cube (a full-reference metric), and the cumulative probability of target colors in the HSV color space for a rendered sRGB image (a no-reference metric).
Abstract: A fundamental limitation of hyperspectral imaging is the inter-band misalignment correlated with subject motion during data acquisition. One way of resolving this problem is to assess the alignment quality of hyperspectral image cubes derived from the state-of-the-art alignment methods. In this paper, we present an automatic selection framework for the optimal alignment method to improve the performance of face recognition. Specifically, we develop two qualitative prediction models based on: 1) a principal curvature map for evaluating the similarity index between sequential target bands and a reference band in the hyperspectral image cube as a full-reference metric; and 2) the cumulative probability of target colors in the HSV color space for evaluating the alignment index of a single sRGB image rendered using all of the bands of the hyperspectral image cube as a no-reference metric. We verify the efficacy of the proposed metrics on a new large-scale database, demonstrating a higher prediction accuracy in determining improved alignment compared to two full-reference and five no-reference image quality metrics. We also validate the ability of the proposed framework to improve hyperspectral face recognition.
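The second metric lends itself to a compact illustration. Below is a minimal sketch of an HSV-based no-reference cue in the spirit the abstract describes: take the rendered sRGB image, convert it to HSV, and measure the cumulative probability of colors inside a target range. The function name, target hue interval, and saturation gate are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np
import colorsys

def hsv_alignment_index(srgb, target_hue=(0.0, 0.1), sat_min=0.2):
    """Illustrative no-reference cue (hypothetical parameters): the
    fraction of pixels whose hue lies in a target range. Misaligned
    bands cause color fringing in the rendered sRGB image, shifting
    hue mass away from the expected (e.g., skin-tone) range."""
    rgb = srgb.astype(np.float64) / 255.0          # normalize to [0, 1]
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in rgb.reshape(-1, 3)])
    hue, sat = hsv[:, 0], hsv[:, 1]
    in_target = (hue >= target_hue[0]) & (hue <= target_hue[1]) & (sat >= sat_min)
    return in_target.mean()                        # higher = better aligned
```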

19 citations


Book ChapterDOI
01 Jan 2016
TL;DR: This chapter reviews four publicly available hyperspectral face databases (HFDs): the CMU, PolyU-HSFD, IRIS-M, and Stanford databases, toward providing information on the key points of each of the considered databases, and introduces a new large HFD, called IRIS-HFD-2014, which can serve as a benchmark for statistically evaluating the performance of current and future HFR algorithms.
Abstract: Spectral imaging (SI) enables us to collect various spectral information at specific wavelengths by dividing the spectrum into multiple bands. As such, SI offers a means to overcome several major challenges specific to current face recognition systems. However, the practical usage of hyperspectral face recognition (HFR) has, to date, been limited due to database restrictions in the public domain for comparatively evaluating HFR. In this chapter, we review four publicly available hyperspectral face databases (HFDs): the CMU, PolyU-HSFD, IRIS-M, and Stanford databases, toward providing information on the key points of each of the considered databases. In addition, a new large HFD, called IRIS-HFD-2014, is introduced. IRIS-HFD-2014 can serve as a benchmark for statistically evaluating the performance of current and future HFR algorithms and will be made publicly available.

18 citations


Book
23 Dec 2016
TL;DR: This volume summarizes and explains various optimization techniques as applied to image processing and computer vision, and describes regularized optimization, a special method used to solve a class of constrained optimization problems.
Abstract: This book presents practical optimization techniques used in image processing and computer vision problems. Ill-posed problems are introduced and used as examples to show how each type of problem is related to typical image processing and computer vision problems. Unconstrained optimization gives the best solution based on numerical minimization of a single, scalar-valued objective function or cost function. Unconstrained optimization problems have been intensively studied, and many algorithms and tools have been developed to solve them. Most practical optimization problems, however, arise with a set of constraints. Typical examples of constraints include: (i) pre-specified pixel intensity range, (ii) smoothness or correlation with neighboring information, (iii) existence on a certain contour of lines or curves, and (iv) given statistical or spectral characteristics of the solution. Regularized optimization is a special method used to solve a class of constrained optimization problems. The term regularization refers to the transformation of an objective function with constraints into a different objective function, automatically reflecting constraints in the unconstrained minimization process. Because of its simplicity and efficiency, regularized optimization has many application areas, such as image restoration, image reconstruction, optical flow estimation, etc. Optimization plays a major role in a wide variety of theories for image processing and computer vision. Various optimization techniques are used at different levels for these problems, and this volume summarizes and explains these techniques as applied to image processing and computer vision.
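As a concrete illustration of the regularization idea described above, the sketch below folds a smoothness constraint into an unconstrained objective, f(x) = ||y − x||² + λ||Dx||², and minimizes it by gradient descent. The operator choice, step size, and iteration count are illustrative, not taken from the book.

```python
import numpy as np

def regularized_denoise(y, lam=0.5, step=0.1, iters=200):
    """Minimize f(x) = ||y - x||^2 + lam * ||Dx||^2, where D is a
    horizontal finite-difference operator: the smoothness constraint is
    absorbed into the unconstrained objective as a penalty term. The
    operator, step size, and iteration count are illustrative choices."""
    x = y.astype(np.float64).copy()
    for _ in range(iters):
        dx = np.diff(x, axis=1)              # Dx (horizontal differences)
        dtd = np.zeros_like(x)               # will hold D^T D x
        dtd[:, :-1] -= dx
        dtd[:, 1:] += dx
        # Gradient of the objective: 2(x - y) + 2 lam * D^T D x.
        x -= step * (2.0 * (x - y) + 2.0 * lam * dtd)
    return x
```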

4 citations



BookDOI
01 Jan 2016
TL;DR: This chapter considers basic optimization theory and application as related to image processing; the focus of the examples will be on functions of one dimension, or line functions.
Abstract: Many engineering problems, particularly in image processing, can be expressed as optimization problems. Often, we must make approximations of our mathematical models in order to cast the problems into optimization form. This chapter considers basic optimization theory and application as related to image processing. The focus of the examples will be on functions of one dimension, or line functions. Later chapters will give emphasis to the multidimensional case.

3.1 Optimization Problems

Suppose we want to find a set of data $\{x_1, x_2, \ldots, x_N\}$ that minimizes an objective function $f(x_1, x_2, \ldots, x_N)$. If there is a constraint on the data, the optimization problem can be described as $$ \text{minimize } f(x), \quad \text{subject to } x \in U, \qquad (3.1) $$ where $x = [x_1, x_2, \ldots, x_N]$, $f(x): \mathbb{R}^N \to \mathbb{R}$, and $U \subset \mathbb{R}^N$ [chong96]. For example, we consider that x represents an image with N pixels, each of which has a continuous intensity value in the range [0, 255]. If the image is observed as y and is obtained by the relationship $$ y = Dx, \qquad (3.2) $$ then the original image can be estimated by $$ \text{minimizing } \left\Vert y - Dx \right\Vert, \quad \text{subject to } x \in \{x \mid 0 \le x_i \le 255,\ i = 1, \ldots, N\}. \qquad (3.3) $$ The problem described in Eq. (3.3) fits into the general optimization structure given in Eq. (3.1) if $f(x) = \left\Vert y - Dx \right\Vert$ and $U = \{x \mid 0 \le x_i \le 255,\ i = 1, \ldots, N\}$.

As another example, consider a 3 × 5 array of edges with gradient magnitude, as shown in Fig. 3.1a. By using the gradient magnitude table, we want to link edge points for boundary extraction. Note that we did not use gradient directions for simplicity; they should be considered in more practical edge-linking problems [jain89]. If we assume that the contour should pass an edge of nonzero gradient magnitude from left to right, then three possible contours are shown in Fig. 3.1b. Let $x_i$ represent the row position of the ith column in Fig. 3.1a. Then the vector for contour A in Fig. 3.1b can be described as $x_A = (2, 1, 2, 3, 2)$. Likewise, contours B and C, respectively, can be described as $x_B = (2, 3, 2, 3, 2)$ and $x_C = (2, 3, 3, 3, 2)$. To choose the best contour, we define the objective function as the sum of cumulative gradient magnitudes. We then have $g(x_A) = 5 + 7 + 1 + 8 + 6 = 27$, $g(x_B) = 23$, and $g(x_C) = 24$. Thus, to maximize g(x), the best contour is $x_A$, which gives the maximum cumulative gradient magnitude. This example can be described as an optimization problem for which $$ \text{maximize } g(x), \quad \text{subject to } x \in \{x_A, x_B, x_C\}. \qquad (3.4) $$ The problem described in Eq. (3.4) becomes equivalent to Eq. (3.1) if we define $f(x) = -g(x)$ and $U = \{x_A, x_B, x_C\}$.

The problems discussed above have the general form of a constrained optimization problem, since the variables are constrained to be in the constraint set U. If $U = \mathbb{R}^N$, where N is the number of variables, we refer to the problem as an unconstrained optimization problem. Generally, most practical problems have constraints on the solution. However, as a logical progression, starting in Chap. 4, we first discuss the basic unconstrained optimization problem. This approach is taken because a good mathematical description and analysis of unconstrained optimization problems can serve as a basic approach to solving the more general constrained optimization problem. Constrained optimization problems will be discussed in Chap. 5.
Many useful methods for solving constrained problems have been developed. However, to help simplify the numerical work, it is not uncommon to first transform the constrained optimization problem into an appropriate unconstrained optimization problem. Then the approximate solution is obtained by solving the unconstrained problem. This approach, called the regularization method, will be presented in Chap. 6.
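The contour-selection example above is small enough to run end to end. The sketch below uses a hypothetical 3 × 5 gradient-magnitude array standing in for Fig. 3.1a, chosen so that the three candidate contours reproduce the sums quoted in the text (27, 23, and 24).

```python
# Hypothetical 3x5 gradient-magnitude array standing in for Fig. 3.1a,
# chosen so the three candidate contours reproduce the sums in the text.
grad = [
    [0, 7, 0, 0, 0],   # row 1
    [5, 0, 1, 0, 6],   # row 2
    [0, 3, 2, 8, 0],   # row 3
]

# Candidate contours: the row position of each of the 5 columns (1-based).
contours = {
    "A": (2, 1, 2, 3, 2),
    "B": (2, 3, 2, 3, 2),
    "C": (2, 3, 3, 3, 2),
}

def g(x):
    """Objective: sum of gradient magnitudes along the contour."""
    return sum(grad[row - 1][col] for col, row in enumerate(x))

scores = {name: g(x) for name, x in contours.items()}
print(scores)                       # {'A': 27, 'B': 23, 'C': 24}
print(max(scores, key=scores.get))  # 'A' -- the maximizer over U = {xA, xB, xC}
```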

4 citations


Book ChapterDOI
01 Jan 2016
TL;DR: An adaptive regularized image interpolation algorithm, developed in a general framework of data fusion, enlarges noisy, blurred, low-resolution (LR) image sequences by minimizing the residual between the given LR image frame and the subsampled estimated solution under appropriate smoothness constraints.
Abstract: This chapter presents an adaptive regularized image interpolation algorithm, which is developed in a general framework of data fusion, to enlarge noisy, blurred, low-resolution (LR) image sequences. Initially, the assumption is made that each LR image frame is obtained by subsampling the corresponding original high-resolution (HR) image frame. Then the mathematical model of the subsampling process is obtained. Given a sequence of LR image frames and the mathematical model of subsampling, the general regularized image interpolation estimates HR image frames by minimizing the residual between the given LR image frame and the subsampled estimated solution with appropriate smoothness constraints.
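A minimal single-frame sketch of this residual-plus-smoothness formulation follows. It assumes plain pixel-decimation subsampling for S and a Laplacian smoothness penalty; the chapter's data-fusion formulation over an image sequence, and its handling of blur and noise, are richer than this.

```python
import numpy as np

def regularized_interpolation(y, scale=2, lam=0.1, step=0.2, iters=300):
    """Sketch for one LR frame: estimate an HR image x minimizing
    ||S x - y||^2 + lam * ||grad x||^2, where S is plain pixel
    decimation and the second term is the smoothness constraint.
    Operators and parameters here are illustrative simplifications."""
    x = np.kron(y.astype(np.float64), np.ones((scale, scale)))  # HR init
    for _ in range(iters):
        residual = x[::scale, ::scale] - y          # S x - y
        data_grad = np.zeros_like(x)
        data_grad[::scale, ::scale] = residual      # S^T (S x - y)
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)
        # Gradient of the objective: 2 S^T(Sx - y) - 2 lam * Laplacian(x).
        x -= step * (2.0 * data_grad - 2.0 * lam * lap)
    return x
```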

2 citations


Book ChapterDOI
01 Jan 2016
TL;DR: The L-curve method is widely used; however, this method is nonconvergent [leonov97, vogel96]; an example of image restoration using different values of regularization parameters is shown.
Abstract: The success of all currently available regularization techniques relies heavily on the proper choice of the regularization parameter. Although many regularization parameter selection methods (RPSMs) have been proposed, very few of them are used in engineering practice. This is due to the fact that theoretically justified methods often require unrealistic assumptions, while empirical methods do not guarantee a good regularization parameter for any set of data. Among the methods that found their way into engineering applications, the most common are Morozov’s Discrepancy Principle (abbreviated as MDP) [morozov84, phillips62], Mallows’ CL [mallows73], generalized cross validation (abbreviated as GCV) [wahba90], and the L-curve method [hansen98]. A high sensitivity of CL and MDP to an underestimation of the noise level has limited their application to cases in which the noise level can be estimated with high fidelity [hansen98]. On the other hand, noise-estimate-free GCV occasionally fails, presumably due to the presence of correlated noise [wahba90]. The L-curve method is widely used; however, this method is nonconvergent [leonov97, vogel96]. An example of image restoration using different values of regularization parameters is shown in Figs. 2.1, 2.2, 2.3, 2.4, and 2.5. The Matlab code for this example was provided by Dr. Curt Vogel of Montana State University in a personal communication. The original image is presented in Fig. 2.1, and the observed blurred image is in Fig. 2.2.
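For concreteness, here is a small sketch of how L-curve points are typically computed for a Tikhonov problem via the SVD; identity regularization (C = I) and the function name are assumptions made for brevity, not the chapter's exact setup.

```python
import numpy as np

def l_curve_points(H, y, lambdas):
    """Sketch of the L-curve: for each candidate lambda, solve the
    Tikhonov problem min ||y - Hx||^2 + lam * ||x||^2 via the SVD and
    record (residual norm, solution norm). Plotting the pairs on
    log-log axes gives the 'L'; the corner is the usual choice."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Uty = U.T @ y
    points = []
    for lam in lambdas:
        # Filtered SVD coefficients of the regularized solution.
        coeff = s * Uty / (s ** 2 + lam)
        x = Vt.T @ coeff
        points.append((np.linalg.norm(y - H @ x), np.linalg.norm(x)))
    return points
```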

2 citations


Book ChapterDOI
01 Jan 2016
TL;DR: Regularization methods play an important role in solving linear equations with prior knowledge about the solution; the corresponding regularization results in minimization of a penalized least-squares objective.
Abstract: Regularization methods play an important role in solving linear equations of the form $$ y=Hx, $$ with prior knowledge about the solution. The corresponding regularization results in minimization of $$ f(x)={\left\Vert y-Hx\right\Vert}^2+\lambda {\left\Vert Cx\right\Vert}^2. $$
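Setting the gradient of f(x) to zero yields the normal equations $(H^TH + \lambda C^TC)x = H^Ty$, which the following sketch solves directly for small problems (the function name is illustrative):

```python
import numpy as np

def tikhonov_solve(H, y, C, lam):
    """Setting the gradient of f(x) = ||y - Hx||^2 + lam*||Cx||^2 to
    zero gives the normal equations (H^T H + lam * C^T C) x = H^T y;
    here they are solved directly, which is fine for small problems."""
    A = H.T @ H + lam * (C.T @ C)
    return np.linalg.solve(A, H.T @ y)
```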

1 citation


Book ChapterDOI
01 Jan 2016
TL;DR: This chapter presents, as an example of optimization, a photo-realistic scene reconstruction procedure using laser range data and color photographs, for multimodal scene representation.
Abstract: Many applications require the use of 3D graphics to create models of real environments. These models are usually built from range or depth images. In the scene modeling process, the use of additional 2D digital sensorial information leads to multimodal scene representation, where an image acquired by a 2D sensor is used as a texture map for a geometric model of a scene. In this chapter we present, as an example of optimization, a photo-realistic scene reconstruction procedure using laser range data and color photographs.

Book ChapterDOI
01 Jan 2016
TL;DR: In this paper, a new surface smoothing method based on area decreasing flow is proposed, which can be used for preprocessing raw range data or postprocessing reconstructed surfaces, and the edge strength of each vertex on a triangle mesh is computed by fusing the tensor voting and the orientation check of the normal vector field inside a geodesic window.
Abstract: This chapter discusses a new surface smoothing method based on area decreasing flow, which can be used for preprocessing raw range data or postprocessing reconstructed surfaces. Although surface area minimization is mathematically equivalent to the mean curvature flow, area decreasing flow is far more efficient for smoothing the discrete surface on which the mean curvature is difficult to estimate. A general framework of regularization based on area decreasing flow is proposed and applied to smoothing range data and arbitrary triangle mesh. Crease edges are preserved by adaptively changing the regularization parameter. The edge strength of each vertex on a triangle mesh is computed by fusing the tensor voting and the orientation check of the normal vector field inside a geodesic window. Experimental results show that the proposed algorithm provides successful smoothing for both raw range data and surface meshes.
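As a rough stand-in for the flow described above, the sketch below performs umbrella-operator (graph Laplacian) smoothing on a vertex set, damping the update at vertices flagged as crease points. The chapter's area-decreasing flow derives its vertex update from surface-area minimization and computes edge strength by tensor voting inside a geodesic window; neither is reproduced here.

```python
import numpy as np

def umbrella_smooth(verts, neighbors, step=0.5, iters=10, edge_strength=None):
    """Stand-in sketch: umbrella-operator (graph Laplacian) smoothing
    of a triangle mesh. The chapter's area-decreasing flow plays a
    similar role but adapts the regularization to preserve creases;
    here an optional per-vertex edge_strength in [0, 1] simply scales
    down the motion of vertices flagged as crease points.
    `neighbors` maps a vertex index to the indices of its 1-ring."""
    v = verts.astype(np.float64).copy()
    for _ in range(iters):
        update = np.zeros_like(v)
        for i, ring in neighbors.items():
            update[i] = v[ring].mean(axis=0) - v[i]   # umbrella vector
        if edge_strength is not None:
            update *= (1.0 - edge_strength)[:, None]  # damp crease motion
        v += step * update
    return v
```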

Book ChapterDOI
01 Jan 2016
TL;DR: In many image processing problems, the original complete data set must be estimated from incomplete and, most often, degraded observations; a simplified example is estimating an original pixel intensity value that has been attenuated in the imaging system, without any correlation with neighboring pixels.
Abstract: In many image processing problems, we need to estimate the original complete data sets, generally from incomplete and, most often, from degraded observations. One simplified example is to estimate the original pixel intensity value, which has been attenuated in the imaging system, without any correlation with neighboring pixels. If we know the nonzero attenuation factor for the imaging system, we can easily estimate the original value by multiplying by the inverse of this attenuation factor. Figure 1.1 shows the corresponding attenuation and the restoration processes.
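This introductory example reduces to a one-line inversion, sketched below for concreteness (the function name is illustrative).

```python
def restore_attenuated(y, a):
    """The trivial inverse problem from the chapter's example: if the
    observation is y = a * x with a known nonzero attenuation factor a,
    the original value is recovered exactly by inverting the factor."""
    return y / a   # equivalently, multiply by the inverse factor 1/a

print(restore_attenuated(127.5, 0.5))  # 255.0
```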

Book ChapterDOI
01 Jan 2016
TL;DR: This chapter considers basic optimization theory and application as related to image processing; the focus of the examples will be on functions of one dimension, or line functions.
Abstract: Many engineering problems, particularly in image processing, can be expressed as optimization problems. Often, we must make approximations of our mathematical models in order to cast the problems into optimization form. This chapter considers basic optimization theory and application as related to image processing. The focus of the examples will be on functions of one dimension, or line functions. Later chapters will give emphasis to the multidimensional case.

Book ChapterDOI
01 Jan 2016
TL;DR: In this chapter, a parametric model is proposed as a coarse description of the volumetric part of 3D objects for object recognition using the parameterized part-based superquadric model, which has the advantage that it can describe various primitive shapes using a finite number of parameters.
Abstract: Three-dimensional (3D) object representation and shape reconstruction play an important role in the field of computer vision. In this chapter, a parametric model is proposed as a coarse description of the volumetric part of 3D objects for object recognition. The parameterized part-based superquadric model is used as a volumetric parameter model. This superquadric model has the advantage that it can describe various primitive shapes using a finite number of parameters, including translation, rotation, and global deformation. Shape recovery is performed by least-squares minimization of the objective function over all range points belonging to a single part of the object. The set of superquadric parameters representing each single part of the 3D object can then be obtained.
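A minimal version of this shape-recovery step can be written with a superquadric inside-outside function and a generic least-squares solver. The sketch below omits translation, rotation, and global deformation (which the chapter's full model includes) and uses a plain, unweighted residual; bounds and initial values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def superquadric_F(p, pts):
    """Inside-outside function of an axis-aligned superquadric with
    sizes (a1, a2, a3) and shape exponents (e1, e2). F = 1 on the
    surface, < 1 inside, > 1 outside. Pose and deformation terms are
    omitted here for brevity."""
    a1, a2, a3, e1, e2 = p
    x, y, z = np.abs(pts).T
    return ((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1) \
           + (z / a3) ** (2 / e1)

def fit_superquadric(pts, p0=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Least-squares shape recovery: drive F toward 1 for every range
    point assigned to the part. Bounds keep the sizes positive and the
    exponents in the usual well-behaved range (assumed values)."""
    resid = lambda p: superquadric_F(p, pts) - 1.0
    return least_squares(resid, p0,
                         bounds=([0.01] * 3 + [0.1, 0.1],
                                 [np.inf] * 3 + [2.0, 2.0])).x
```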

Book ChapterDOI
01 Jan 2016
TL;DR: In this chapter, a modified, regularized image restoration algorithm, useful in reducing blocking artifacts in predictive-coded (P) pictures of compressed video and based on the corresponding image degradation model, is presented.
Abstract: In this chapter, a modified, regularized image restoration algorithm useful in reducing blocking artifacts in predictive-coded (P) pictures of compressed video, based on the corresponding image degradation model, is presented. Since most video coding standards adopt a hybrid structure of macroblock-based motion compensation (MC) and block discrete cosine transform (BDCT), blocking artifacts occur at both the block boundary and the block interior, and the degradation process due to quantization acts on the differential images alone. Based on this observation, a new degradation model is needed for differential images, together with a corresponding restoration algorithm that directly processes the differential images before reconstructing decoded images. For further removal of both kinds of blocking artifacts, the restored differential image must satisfy two constraints: directional discontinuities on the block boundary and on the block interior. These constraints have been used for defining convex sets for restoring differential images. An in-depth analysis of differential-domain processing is presented in the appendix and serves as the theoretical basis for justifying differential-domain image processing. Experimental results also show significant improvement over conventional methods in terms of both objective and subjective criteria.
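As a toy illustration of the block-boundary constraint, the sketch below relaxes the pixel pairs straddling each 8 × 8 block boundary of a (differential) image toward their average. It is a simple stand-in for the paper's convex-set projections, which additionally handle block-interior discontinuities directionally.

```python
import numpy as np

def smooth_block_boundaries(img, block=8, alpha=0.5):
    """Toy stand-in for the boundary-smoothness constraint: pull the
    two pixels straddling each DCT block boundary toward their mean,
    reducing the step discontinuity that reads as a blocking artifact.
    Block size and relaxation factor are illustrative."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for j in range(block, w, block):          # vertical boundaries
        avg = 0.5 * (out[:, j - 1] + out[:, j])
        out[:, j - 1] += alpha * (avg - out[:, j - 1])
        out[:, j] += alpha * (avg - out[:, j])
    for i in range(block, h, block):          # horizontal boundaries
        avg = 0.5 * (out[i - 1, :] + out[i, :])
        out[i - 1, :] += alpha * (avg - out[i - 1, :])
        out[i, :] += alpha * (avg - out[i, :])
    return out
```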

Journal ArticleDOI
TL;DR: A robust blind deconvolution algorithm is proposed by adopting a penalty-weighted anisotropic diffusion prior that effectively eliminates the discontinuity in the blur kernel caused by the noisy input image during kernel estimation and reduces the speckle noise of the blur kernel, thus improving the quality of the restored image.
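The prior named in this summary builds on anisotropic diffusion; a plain Perona-Malik sketch of that building block is given below. The paper's penalty weighting is not reproduced, and all parameters are illustrative.

```python
import numpy as np

def perona_malik(img, iters=20, kappa=10.0, step=0.2):
    """Plain Perona-Malik anisotropic diffusion, the classical building
    block behind diffusion priors: regions with large gradients diffuse
    less, so discontinuities are preserved while speckle-like noise is
    smoothed. Borders are periodic via np.roll, a simplification."""
    u = img.astype(np.float64).copy()
    for _ in range(iters):
        dn = np.roll(u, 1, 0) - u    # four-neighbor differences
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        c = lambda d: np.exp(-(d / kappa) ** 2)   # conduction coefficient
        u += step * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u
```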

01 Jan 2016
TL;DR: A segmentation method that uses features to indicate boundaries or edges between regions is presented, incorporating features from multiple image types to obtain a more accurate segmentation of objects or object parts in the scene.
Abstract: Scene segmentation is a pre-processing step for many vision systems. We are concerned with segmentation as a precursor to 3-D scene modeling. Segmentation of a scene for this purpose usually involves dividing an image into areas that are relatively uniform in some value (e.g., intensity, range, or curvature). This single segmented image represents the analogous segmented scene. This paper presents a segmentation method that uses features to indicate boundaries or edges between regions. We incorporate features from multiple image types to obtain a more accurate segmentation of objects or object parts in the scene. Multiple features are not only combined directly to improve segmentation results, but they are also used to guide a smoothing operation. This smoothing technique preserves features representing edges while smoothing noise in the images. The segmentation method is based on applying a watershed algorithm to a fuzzy feature map, a fuzzy feature map being any image containing fuzzy values representing degree of membership in a particular feature class. The first step in obtaining the fuzzy feature map involves smoothing noise from the image pair. We apply an anisotropic diffusion algorithm to both images. This algorithm smooths noise in the images while preserving changes in range, intensity, and surface normal. We create three fuzzy feature maps from the smoothed range and intensity image pair: gradient of the range image, gradient of the intensity image, and gradient of the surface normal of the range image. We fuse these fuzzy feature maps to create a fuzzy feature map of edges. This map includes both step and
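The pipeline's overall shape can be sketched compactly with off-the-shelf tools: build fuzzy edge maps from image gradients, fuse them, and apply a watershed. The version below substitutes a simple per-pixel max for the paper's fusion, omits the anisotropic-diffusion pre-smoothing and the surface-normal map, and uses an illustrative marker heuristic.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.segmentation import watershed

def fuzzy_edge_watershed(range_img, intensity_img):
    """Sketch of the pipeline's shape: gradient maps of the range and
    intensity images are normalized to [0, 1] so each reads as a fuzzy
    membership in the 'edge' class, fused (max here), and segmented by
    flooding the fused map with a watershed."""
    g_range = sobel(range_img.astype(np.float64))
    g_intensity = sobel(intensity_img.astype(np.float64))
    fuzz = np.maximum(g_range / (g_range.max() + 1e-12),
                      g_intensity / (g_intensity.max() + 1e-12))
    # Seed regions where edge membership is low (heuristic threshold).
    markers, _ = ndimage.label(fuzz < 0.1)
    return watershed(fuzz, markers)
```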