Other affiliations: Orange S.A.
Bio: Claude Labit is an academic researcher at the French Institute for Research in Computer Science and Automation. The author has contributed to research in topics: Motion estimation & Data compression. The author has an h-index of 12 and has co-authored 54 publications receiving 516 citations. Previous affiliations of Claude Labit include Orange S.A.
23 Mar 1992
TL;DR: A new relaxation method for compact quantitative motion estimation using a hierarchy of motion models is presented and the advantages of region-based motion descriptions are illustrated and compared to those of the usual local ones.
Abstract: Motion analysis studies provide efficient schemes for estimating dense optical flow fields. However, for image sequence coding, where motion features have to be transmitted, an adaptive compact motion representation is required to extract temporal redundancy from an image sequence with a minimum amount of side information and, nevertheless, a high quality of reconstruction. The advantages of region-based motion descriptions are illustrated and compared to those of the usual local ones (i.e., only two purely translational components of apparent motion). A new relaxation method for compact quantitative motion estimation using a hierarchy of motion models is presented.
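The idea of a hierarchy of motion models can be sketched as follows: fit the cheapest model (purely translational, two parameters) to a region's flow vectors first, and only move up to a richer parametric model (here affine, six parameters) when the residual stays too high. This is a minimal numpy illustration, not the paper's relaxation algorithm; the model set, least-squares fitting, and threshold are illustrative assumptions.

```python
import numpy as np

def fit_motion_model(xs, ys, u, v, model):
    """Least-squares fit of a parametric motion model to a flow field
    (u, v) sampled at pixel coordinates (xs, ys); returns parameters
    and the RMS residual of the fit."""
    if model == "translational":          # 2 parameters: (tx, ty)
        params = (u.mean(), v.mean())
        pred_u = np.full_like(u, params[0])
        pred_v = np.full_like(v, params[1])
    else:                                 # "affine": 6 parameters
        A = np.column_stack([xs, ys, np.ones_like(xs)])
        px, *_ = np.linalg.lstsq(A, u, rcond=None)
        py, *_ = np.linalg.lstsq(A, v, rcond=None)
        params = (px, py)
        pred_u, pred_v = A @ px, A @ py
    resid = np.sqrt(np.mean((u - pred_u) ** 2 + (v - pred_v) ** 2))
    return params, resid

def select_model(xs, ys, u, v, tol=0.1):
    """Walk the hierarchy from cheapest to richest and keep the first
    model whose residual is acceptable (compactness first)."""
    for model in ("translational", "affine"):
        params, resid = fit_motion_model(xs, ys, u, v, model)
        if resid < tol:
            break
    return model, params
```

A region undergoing uniform translation is then described with two parameters, while a zooming or rotating region triggers the affine level, which is the compactness-versus-fidelity trade-off the abstract refers to.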
TL;DR: This paper presents two-dimensional motion estimation methods that take advantage of the intrinsic redundancies inside 3DTV stereoscopic image sequences, under the crucial assumption that an initial calibration of the stereoscopic sensors provides a geometric change of coordinates for matched features.
Abstract: This paper presents two-dimensional motion estimation methods that take advantage of the intrinsic redundancies inside 3DTV stereoscopic image sequences. Most previous studies extract either disparity vector fields, when they address stereovision, or apparent motion vector fields, to be applied in motion-compensated coding schemes. For 3DTV image sequence analysis and transmission, these two feature fields can be estimated jointly. Locally, the initial image data are grouped within two views (the left and right ones) at two successive time samples, and spatio-temporal coherence is used to enhance motion vector field estimation. Three different levels of ‘coherence’ have been experimented with, subject to the crucial assumption that an initial calibration of the stereoscopic sensors provides a geometric change of coordinates for matched features.
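The spatio-temporal coherence that links the four fields can be illustrated with a closure constraint: for a physical point seen in both views at two time samples, following disparity at time t and then right-view motion must land on the same point as following left-view motion and then disparity at time t+1. The sketch below only checks this closure error; how the paper's three coherence levels actually exploit it is not reproduced here, and the helper name is hypothetical.

```python
import numpy as np

def coherence_error(d_t, m_left, m_right, d_t1):
    """Closure error of the stereo-temporal loop for one matched feature.
    Going left(t) -> right(t) -> right(t+1) must reach the same point as
    left(t) -> left(t+1) -> right(t+1), i.e.
        d(t) + m_right == m_left + d(t+1).
    All arguments are 2-vectors (dx, dy) in image coordinates."""
    d_t, m_left, m_right, d_t1 = map(np.asarray, (d_t, m_left, m_right, d_t1))
    return float(np.linalg.norm((d_t + m_right) - (m_left + d_t1)))

# A consistent quadruple closes the loop (error ~ 0) ...
e_ok = coherence_error([10, 0], [2, 1], [3, 1], [11, 0])
# ... while an inconsistent disparity update does not.
e_bad = coherence_error([10, 0], [2, 1], [3, 1], [15, 0])
```

In a joint estimation scheme, candidate motion/disparity vectors with a large closure error can be penalized or rejected, which is one way coherence enhances the individual field estimates.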
TL;DR: A new image sequence coding technique which exploits the redundant information between two successive images of a sequence is presented, obtaining high compression ratios with high reconstruction quality for motion- and illumination-compensated sequences.
TL;DR: A statistical scheme based on a subspace method is described for detecting and tracking faces under varying poses for videophone sequences and the amplitude projections around the speaker's mouth are analyzed to describe the shape of the lips.
14 Apr 1991
TL;DR: A new method is proposed to estimate global motion parameters from an unusual initial dense velocity field and compute a spatiotemporal motion-based segmentation within a motion compensation loop for image sequence coding.
Abstract: A new method is proposed to estimate global motion parameters from an unusual initial dense velocity field. The authors first obtain a compact representation of a dense velocity field and compute a spatiotemporal motion-based segmentation; these parameters are then used as initial values of a cost-function minimization algorithm and applied within a motion compensation loop for image sequence coding. Promising results are obtained on real TV image sequences. A compact motion representation is generated at each frame, and an interpretable qualitative and quantitative motion field is synthesized. Moreover, high quality of reconstruction and motion interpretation is obtained using the minimization stage.
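The two-stage structure described above, a closed-form fit to the dense field that initializes an iterative cost minimization, can be sketched with a global affine model refined by iteratively reweighted least squares, so that vectors from independently moving objects lose influence. This is an assumed simplification: the paper's actual segmentation and cost function are not reproduced, and the Cauchy weighting is an illustrative choice.

```python
import numpy as np

def affine_predict(p, xs, ys):
    """6-parameter affine flow: u = p0 + p1*x + p2*y, v = p3 + p4*x + p5*y."""
    return p[0] + p[1] * xs + p[2] * ys, p[3] + p[4] * xs + p[5] * ys

def estimate_global_motion(xs, ys, u, v, iters=5, c=1.0):
    """Closed-form least-squares fit of a global affine model to a dense
    flow field (the initialization), followed by a few iteratively
    reweighted refinements (the minimization stage)."""
    A = np.column_stack([np.ones_like(xs), xs, ys])
    w = np.ones_like(xs)
    for _ in range(iters):
        sw = np.sqrt(w)
        px, *_ = np.linalg.lstsq(sw[:, None] * A, sw * u, rcond=None)
        py, *_ = np.linalg.lstsq(sw[:, None] * A, sw * v, rcond=None)
        p = np.concatenate([px, py])
        ru, rv = affine_predict(p, xs, ys)
        r = np.hypot(u - ru, v - rv)
        w = c**2 / (c**2 + r**2)   # Cauchy weights: downweight outlier vectors
    return p
```

The six recovered parameters are exactly the kind of compact per-frame motion representation the abstract mentions: they replace a dense field by a handful of coefficients sent as side information.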
TL;DR: In this article, the authors categorize and evaluate face detection algorithms and discuss relevant issues such as data collection, evaluation metrics and benchmarking, and conclude with several promising directions for future research.
Abstract: Images containing faces are essential to intelligent vision-based human-computer interaction, and research efforts in face processing include face recognition, face tracking, pose estimation and expression recognition. However, many reported methods assume that the faces in an image or an image sequence have been identified and localized. To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required. Given a single image, the goal of face detection is to identify all image regions which contain a face, regardless of its 3D position, orientation and lighting conditions. Such a problem is challenging because faces are non-rigid and have a high degree of variability in size, shape, color and texture. Numerous techniques have been developed to detect faces in a single image, and the purpose of this paper is to categorize and evaluate these algorithms. We also discuss relevant issues such as data collection, evaluation metrics and benchmarking. After analyzing these algorithms and identifying their limitations, we conclude with several promising directions for future research.
TL;DR: A comprehensive and critical survey of face detection algorithms, ranging from simple edge-based algorithms to composite high-level approaches utilizing advanced pattern recognition methods, is presented.
TL;DR: Two robust estimators in a multi-resolution framework are developed, and numerical results obtained on complex sequences validate the approach.
01 Jun 1995
TL;DR: This paper proposes a new locally adaptive multigrid block matching motion estimation technique that leads to robust motion field estimation, precise prediction along moving edges, and a decreased amount of side information in uniform areas.
Abstract: The key to high performance in image sequence coding lies in an efficient reduction of the temporal redundancies. For this purpose, motion estimation and compensation techniques have been successfully applied. This paper studies motion estimation algorithms in the context of first generation coding techniques commonly used in digital TV. In this framework, estimating the motion in the scene is not an intrinsic goal. Motion estimation should indeed provide good temporal prediction and simultaneously require low overhead information. More specifically, the aim is to minimize globally the bandwidth corresponding to both the prediction error information and the motion parameters. This paper first clarifies the notion of motion, reviews classical motion estimation techniques, and outlines new perspectives. Block matching techniques are shown to be the most appropriate in the framework of first generation coding. To overcome the drawbacks characteristic of most block matching techniques, this paper proposes a new locally adaptive multigrid block matching motion estimation technique. This algorithm has been designed taking into account the above aims. It leads to robust motion field estimation, precise prediction along moving edges, and a decreased amount of side information in uniform areas. Furthermore, the algorithm controls the accuracy of the motion estimation procedure in order to optimally balance the amount of information corresponding to the prediction error and to the motion parameters. Experimental results show that the technique results in greatly enhanced visual quality and significant saving in terms of bit rate when compared to classical block matching techniques.
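The classical block matching baseline that this multigrid technique improves on can be sketched in a few lines: for each block of the current frame, exhaustively search a small displacement window in the previous frame for the offset minimizing the sum of absolute differences (SAD). This is the plain full-search variant only; the paper's locally adaptive multigrid refinement (splitting blocks only where the prediction error stays high) is not implemented here.

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Full-search block matching: for each block of `curr`, find the
    displacement within +/- `search` pixels that minimizes the SAD
    against `prev`. Returns one (dy, dx) vector per block."""
    H, W = curr.shape
    flow = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y0, x0 = by * block, bx * block
            ref = curr[y0:y0 + block, x0:x0 + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > H or x1 + block > W:
                        continue  # candidate block falls outside the frame
                    sad = np.abs(ref - prev[y1:y1 + block, x1:x1 + block]).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            flow[by, bx] = best
    return flow
```

One vector per block is what keeps the side information low; the adaptive multigrid idea is to spend extra vectors only on blocks crossing moving edges, which is where this fixed-grid baseline predicts poorly.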