Author
Ioannis Pitas
Other affiliations: University of Bristol, University of York, University of Toronto …
Bio: Ioannis Pitas is an academic researcher from Aristotle University of Thessaloniki. The author has contributed to research in topics: Facial recognition system & Digital watermarking. The author has an h-index of 76 and has co-authored 795 publications receiving 24,787 citations. Previous affiliations of Ioannis Pitas include the University of Bristol and the University of York.
Papers published on a yearly basis
Papers
01 Jan 2005
TL;DR: An integrated FISH image analysis system is developed to automate the classification of FISH images from breast carcinomas to avoid interobserver variations in the assessment of HER2/neu status.
Abstract: HER2/neu gene amplification is evaluated by fluorescent in situ hybridization (FISH). To avoid interobserver variation in the assessment of HER2/neu status, an integrated FISH image analysis system was developed to automate the classification of FISH images from breast carcinomas. Using a two-stage algorithm for nuclei and dot detection, and combining results from multiple images taken from a slide for overall case classification, the FISH signal ratio per cell nucleus was measured and cases were classified as positive or negative. The system consists of functions for red spot detection, green spot detection, nuclei segmentation and FISH signal ratio computation. It also provides the capability to manually correct the resulting images after the analysis.
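The per-nucleus signal-ratio classification described above can be sketched as follows. This is an illustrative reading, not the authors' implementation; the amplification threshold of 2.0 is a common clinical FISH convention assumed here, not taken from the paper, and the spot counts are assumed to be already extracted by the detection stages.

```python
import numpy as np

def classify_her2_case(red_counts, green_counts, ratio_threshold=2.0):
    """Classify a FISH case as HER2/neu positive or negative.

    red_counts / green_counts: per-nucleus spot counts (HER2 and reference
    signals) pooled from multiple images of the same slide.
    The 2.0 amplification threshold is an assumed clinical convention.
    """
    red = np.asarray(red_counts, dtype=float)
    green = np.asarray(green_counts, dtype=float)
    valid = green > 0                     # skip nuclei with no reference signal
    ratios = red[valid] / green[valid]    # per-nucleus red/green signal ratio
    mean_ratio = ratios.mean()
    label = "positive" if mean_ratio >= ratio_threshold else "negative"
    return label, mean_ratio
```

In practice the per-nucleus counts would come from the red/green spot detection and nuclei segmentation functions; the sketch only shows how the measured ratios decide the case label.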
3 citations
10 Dec 2015
TL;DR: A novel subspace learning technique is introduced for facial image analysis that takes into account the symmetric nature of facial images by properly incorporating a symmetry constraint into the objective function of Two-Dimensional Linear Discriminant Analysis to determine symmetric projection vectors.
Abstract: In this paper, a novel subspace learning technique is introduced for facial image analysis. The proposed technique takes into account the symmetric nature of facial images. This information is exploited by properly incorporating a symmetry constraint into the objective function of Two-Dimensional Linear Discriminant Analysis (2DLDA) to determine symmetric projection vectors. The performance of the proposed Symmetric Two-Dimensional Linear Discriminant Analysis was evaluated on real face recognition databases. Experimental results highlight the superiority of the proposed technique in comparison to the standard approach.
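One reading of the symmetry constraint is that each projection vector should be mirror-symmetric about the face's vertical midline, so a face and its horizontal reflection project to the same features. A minimal sketch of that property (an assumed simplification via symmetrization, not the paper's actual constrained optimization):

```python
import numpy as np

def symmetrize(W):
    """Enforce a symmetry constraint on 2DLDA-style projection vectors:
    each column of W is averaged with its mirror (w[i] <- (w[i] + w[n-1-i]) / 2).
    Illustrative reading of the constraint, not the paper's optimization."""
    return 0.5 * (W + W[::-1, :])

# With symmetric columns, projecting an image X (rows x columns) onto W
# gives the same result for X and its horizontally mirrored copy X[:, ::-1],
# which is the invariance the symmetry constraint is meant to capture.
```

The point of the sketch is the invariance itself: for a symmetrized `W`, `X @ W` equals `X[:, ::-1] @ W`, so mirrored training faces carry no extra discriminant information to waste parameters on.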
3 citations
TL;DR: In this paper, a centralized vision-based method for robust, on-the-fly 3D localization and mapping of human crowds in large-scale outdoor environments, assuming their independent visual detection on the camera feed of multiple UAVs, is presented.
Abstract: This paper presents a centralized, vision-based method for robust, on-the-fly 3D localization and mapping of human crowds in large-scale outdoor environments, assuming their independent visual detection on the camera feed of multiple UAVs. The proposed method aims at enhancing vision-assisted human crowd avoidance, in line with common UAV safety regulations, since the resulting 3D crowd annotations may be employed by other algorithms for on-line mission/path replanning during deployment of a UAV fleet. Initially, 2D crowd heatmaps are assumed to be derived per video frame on-board each UAV separately, using deep neural human crowd detectors, which indicate the probability of each pixel depicting a human crowd. The UAV-mounted cameras are assumed to be covering the same large-scale outdoor area over time. The heatmaps of each time instance are transmitted to a central computer and back-projected onto the common 3D terrain/map of the navigation environment, utilizing the intrinsic and extrinsic camera parameters. The projected crowd heatmaps derived from the different drones/cameras are fused by exploiting a Bayesian filtering approach that favors newer crowd observations over older ones. Thus, during flight, an area is marked as crowded (therefore, a no-fly zone) if all, or most, individual UAV-mounted visual detectors have recently and confidently indicated crowd existence on it. In order to calculate prior probabilities for Bayesian fusion, the method also proposes and exploits a simple, but efficient image processing-based algorithm for identifying flat terrain areas (under the assumption that people do not gather on highly curved or inclined terrain), relying on a priori available ground elevation data for the mapped area. Evaluation on both synthetic and real-world multiview video sequences depicting human crowds in outdoor environments verifies the effectiveness of the proposed method.
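The recency-weighted Bayesian fusion of projected crowd heatmaps described above can be sketched with a simple exponentially decayed log-odds filter. This is an assumed stand-in for the paper's filtering scheme: the decay factor, grid representation and threshold are illustrative choices, not the authors' parameters.

```python
import numpy as np

def fuse_heatmaps(prior, observations, decay=0.8):
    """Fuse per-UAV crowd probability maps projected onto a common terrain
    grid, favouring newer observations over older ones.

    prior:        per-cell prior crowd probability (e.g. derived from the
                  flat-terrain analysis), shape (H, W)
    observations: list of probability maps on the same grid, oldest first
    decay:        weight applied to older evidence (assumption: a simple
                  exponentially decayed log-odds filter)
    """
    eps = 1e-6
    def logit(p):
        p = np.clip(p, eps, 1.0 - eps)
        return np.log(p / (1.0 - p))
    L = logit(prior)
    for obs in observations:            # oldest first
        L = decay * L + logit(obs)      # older evidence fades; newest dominates
    return 1.0 / (1.0 + np.exp(-L))    # back to probability

def no_fly_mask(posterior, threshold=0.5):
    """Mark cells as crowded (no-fly zones) when fused probability is high."""
    return posterior >= threshold
```

With this weighting, a cell is only flagged as a no-fly zone when recent observations confidently indicate a crowd; an old detection that newer frames contradict decays away.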
3 citations
20 Nov 2014
TL;DR: Efficient approaches for detecting three typical stereo artifacts, sharpness mismatch, synchronization mismatch and stereoscopic window violation, are presented; experimental results show that the algorithms are considerably robust in detecting 3D defects.
Abstract: This paper summarizes some common artifacts in stereo video content. These artifacts lead to a poor or even uncomfortable 3D viewing experience. Efficient approaches for detecting three typical artifacts, sharpness mismatch, synchronization mismatch and stereoscopic window violation, are presented in detail. Sharpness mismatch is estimated by measuring the width deviations of edge pairs in depth planes. Synchronization mismatch is detected based on the motion inconsistencies of feature points between the stereoscopic channels within a short time frame. Stereoscopic window violation is detected, using connected component analysis, when objects hit the vertical frame boundaries while being in front of the virtual screen. For the experiments, test sequences were created in a professional studio environment and state-of-the-art metrics were used to evaluate the proposed approaches. The experimental results show that our algorithms are considerably robust in detecting 3D defects.
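The edge-width idea behind sharpness mismatch detection can be sketched as follows: a sharp edge produces a narrow gradient transition, a blurred one a wide transition, and a large width deviation between paired left/right edges signals a mismatch. This is a simplified 1-D illustration under assumed thresholds; the paper additionally groups edge pairs by depth plane.

```python
import numpy as np

def edge_width(profile, frac=0.1):
    """Width of the transition region in a 1-D intensity profile crossing an
    edge: count samples whose gradient magnitude exceeds frac * peak gradient."""
    g = np.abs(np.diff(np.asarray(profile, dtype=float)))
    if g.max() == 0:
        return 0
    return int((g > frac * g.max()).sum())

def sharpness_mismatch(left_profiles, right_profiles, tol=1.5):
    """Flag a sharpness mismatch between stereo channels when the mean edge
    width of paired edges deviates by more than `tol` samples.
    (Illustrative sketch; `tol` is an assumed tolerance.)"""
    lw = float(np.mean([edge_width(p) for p in left_profiles]))
    rw = float(np.mean([edge_width(p) for p in right_profiles]))
    return abs(lw - rw) > tol, lw, rw
```

A step edge such as `[0, 0, 0, 1, 1, 1]` has width 1, while its blurred counterpart spreads the transition over several samples, so pairing profiles across the two channels exposes the mismatch.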
3 citations
01 Aug 2007
TL;DR: A novel method for the recognition of facial expressions in videos is proposed that extracts the deformed Candide facial grid corresponding to the facial expression depicted in the video sequence and embeds it in a new space via metric multidimensional scaling.
Abstract: In this paper, a novel method for the recognition of facial expressions in videos is proposed. The system first extracts the deformed Candide facial grid that corresponds to the facial expression depicted in the video sequence. The mean Euclidean distances between the deformed grids are then calculated and used to create a new space via metric multidimensional scaling. The classification of the sample under examination into one of the 7 possible classes of facial expressions, i.e., anger, disgust, fear, happiness, sadness, surprise and neutral, is performed using multiclass SVMs defined in the new space. The experiments were performed on the Cohn-Kanade database, and the results show that the proposed system achieves an accuracy of 95.6%.
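The embedding step, turning pairwise grid distances into coordinates in a new space, can be sketched with classical (metric) multidimensional scaling via double centering. This is a generic MDS sketch under the assumption of a precomputed distance matrix; the paper then trains multiclass SVMs in the resulting space, which is omitted here.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Embed samples in a low-dimensional space from their pairwise mean
    Euclidean grid distances D, via classical/metric MDS (double centering
    followed by an eigendecomposition). Sketch of the embedding step only."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dims]         # keep the largest eigenvalues
    w_top = np.clip(w[idx], 0.0, None)       # guard against numerical negatives
    return V[:, idx] * np.sqrt(w_top)        # coordinates in the new space
```

When the distances are exactly Euclidean, the recovered coordinates reproduce them, and a classifier (multiclass SVMs in the paper) can then operate on these low-dimensional points.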
3 citations
Cited by
TL;DR: There is, I think, something ethereal about i —the square root of minus one, which seems an odd beast at that time—an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality.
Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …
33,785 citations
TL;DR: In this paper, the authors provide an up-to-date critical survey of still- and video-based face recognition research and offer some insights into the studies of machine recognition of faces.
Abstract: As one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past several years. At least two reasons account for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. Even though current machine recognition systems have reached a certain level of maturity, their success is limited by the conditions imposed by many real applications. For example, recognition of face images acquired in an outdoor environment with changes in illumination and/or pose remains a largely unsolved problem. In other words, current systems are still far away from the capability of the human perception system. This paper provides an up-to-date critical survey of still- and video-based face recognition research. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the studies of machine recognition of faces. To provide a comprehensive survey, we not only categorize existing recognition techniques but also present detailed descriptions of representative methods within each category. In addition, relevant topics such as psychophysical studies, system evaluation, and issues of illumination and pose variation are covered.
6,384 citations
TL;DR: In this article, the authors categorize and evaluate face detection algorithms and discuss relevant issues such as data collection, evaluation metrics and benchmarking, and conclude with several promising directions for future research.
Abstract: Images containing faces are essential to intelligent vision-based human-computer interaction, and research efforts in face processing include face recognition, face tracking, pose estimation and expression recognition. However, many reported methods assume that the faces in an image or an image sequence have been identified and localized. To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required. Given a single image, the goal of face detection is to identify all image regions which contain a face, regardless of its 3D position, orientation and lighting conditions. Such a problem is challenging because faces are non-rigid and have a high degree of variability in size, shape, color and texture. Numerous techniques have been developed to detect faces in a single image, and the purpose of this paper is to categorize and evaluate these algorithms. We also discuss relevant issues such as data collection, evaluation metrics and benchmarking. After analyzing these algorithms and identifying their limitations, we conclude with several promising directions for future research.
3,894 citations