Author
Fan Liu
Bio: Fan Liu is an academic researcher from Nanyang Technological University. The author has contributed to research in the topics of feature detection (computer vision) and image retrieval. The author has an h-index of 2, having co-authored 3 publications that have received 18 citations.
Papers
02 Apr 2000
TL;DR: A region-based image retrieval (RBIR) approach is proposed, in which each image is represented by several feature vectors extracted from homogeneous color regions within the image, and similar images are retrieved based on these region features.
Abstract: Natural image retrieval using low-level visual features is a challenging problem for content-based image retrieval. In this paper, a region-based image retrieval (RBIR) approach is proposed. Each image is represented by several feature vectors extracted from homogeneous color regions within the image, and similar images are retrieved based on these region features. In the experimental image database, all images are grouped into 16 categories using a moment feature to speed up retrieval. Color mean, color histogram, and moments of regions are used as features. The experiments show that region-based retrieval returns more relevant images than retrieval using features extracted from the entire image.
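The core RBIR idea described above (per-region feature vectors compared across images) can be sketched as follows. This is a minimal illustration only: the region label map is assumed to come from an external color segmentation, and the 4x4x4 histogram bins, the greedy best-match distance, and the helper names region_features/image_distance are assumptions, not the paper's exact settings.

```python
import numpy as np

def region_features(image, labels):
    """One feature vector (color mean + coarse color histogram) per labeled region."""
    feats = []
    for r in np.unique(labels):
        pixels = image[labels == r].reshape(-1, 3).astype(float)
        mean_color = pixels.mean(axis=0)                       # color mean of the region
        hist, _ = np.histogramdd(pixels, bins=(4, 4, 4),
                                 range=((0, 256),) * 3)        # coarse color histogram
        feats.append(np.concatenate([mean_color / 255.0, hist.ravel() / hist.sum()]))
    return np.array(feats)

def image_distance(query_feats, db_feats):
    """Greedy region matching: each query region is paired with its closest database region."""
    d = np.linalg.norm(query_feats[:, None, :] - db_feats[None, :, :], axis=2)
    return d.min(axis=1).mean()
```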
9 citations
12 Mar 2000
TL;DR: In this paper, an image is segmented into "homogeneous" regions using a histogram clustering algorithm and each image is then represented by a set of regions with region descriptors.
Abstract: Representing general images using global features extracted from the entire image may be inappropriate because the images often contain several objects or regions that are totally different from each other in terms of visual image properties. These features cannot adequately represent the variations and hence fail to describe the image content correctly. We advocate the use of features extracted from image regions and represent the images by a set of regional features. In our work, an image is segmented into "homogeneous" regions using a histogram clustering algorithm. Each image is then represented by a set of regions with region descriptors. Region descriptors consist of feature vectors representing color, texture, area and location of regions. Image similarity is measured by a newly proposed Region Match Distance metric for comparing images by region similarity. Comparison of image retrieval using global and regional features is presented and the advantage of using regional representation is demonstrated.
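The abstract names the region descriptors (color, texture, area, location) and a Region Match Distance metric but does not spell out its formula. The area-weighted best-match distance below is a hedged illustration of how such a region-level comparison can work; it is not the paper's definition, and the toy descriptors in the usage example are invented.

```python
import numpy as np

def region_match_distance(regions_a, regions_b):
    """regions_*: list of (descriptor_vector, area_fraction) pairs describing one image."""
    total = 0.0
    for desc_a, area_a in regions_a:
        # match every region of image A to its most similar region of image B
        best = min(np.linalg.norm(desc_a - desc_b) for desc_b, _ in regions_b)
        total += area_a * best          # weight by the region's share of the image
    return total

# Usage with two toy images, each described by two regions (3-D descriptors):
img_a = [(np.array([0.2, 0.5, 0.1]), 0.7), (np.array([0.9, 0.1, 0.3]), 0.3)]
img_b = [(np.array([0.25, 0.45, 0.1]), 0.6), (np.array([0.8, 0.2, 0.4]), 0.4)]
print(region_match_distance(img_a, img_b))
```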
7 citations
TL;DR: A fusion method, detail-injection-based CAE (DiCAE), built on detail injection (Di) and a convolutional autoencoder (CAE), is proposed; it treats the injected MS details as panchromatic detail combined with an injection gain.
Abstract: The purpose of pansharpening is to generate high-resolution multispectral (MS) images from low-resolution MS images and high-resolution panchromatic images. Traditional remote sensing image fusion algorithms can be simplified to a unified detail injection (Di) context that treats the injected MS details as panchromatic detail combined with an injection gain. The injected details are derived from traditional fusion strategies with a clear physical interpretation and facilitate fast convergence of deep learning models for high-quality image fusion. The excellent ability of convolutional autoencoder (CAE) networks to retain image information enables their application to remote sensing image fusion. In this paper, a fusion method, Di-based CAE (DiCAE), is proposed, with Di as its theoretical foundation and a CAE network as the core of the algorithm. The method is evaluated through experiments on different satellite datasets, and the fusion results obtained by DiCAE achieve better objective evaluation metrics and better visual quality than other state-of-the-art methods.
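The "unified detail injection (Di) context" the abstract builds on can be written as fused band = upsampled MS band + gain x (PAN detail). The sketch below illustrates only that classical context; the CAE network that DiCAE trains on top of it is not reproduced, and the low-pass filter, the band-wise gain, and the function name detail_injection are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom, uniform_filter

def detail_injection(ms, pan, scale=4):
    """ms: (H/scale, W/scale, B) low-res multispectral; pan: (H, W) panchromatic."""
    ms_up = zoom(ms.astype(float), (scale, scale, 1), order=3)   # upsample MS to PAN size
    pan = pan.astype(float)
    pan_low = uniform_filter(pan, size=scale)                    # low-pass version of PAN
    detail = pan - pan_low                                       # spatial detail to be injected
    fused = np.empty_like(ms_up)
    for b in range(ms_up.shape[2]):
        gain = ms_up[..., b].std() / (pan_low.std() + 1e-8)      # simple band-wise injection gain
        fused[..., b] = ms_up[..., b] + gain * detail
    return fused
```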
2 citations
Journal Article
TL;DR: In this work, an image is segmented into homogeneous regions using a histogram clustering algorithm, each image is represented by a set of regions with region descriptors, and image similarity is measured by a newly proposed Region Match Distance metric.
Abstract: Representing general images using global features extracted from the entire image may be inappropriate because the images often contain several objects or regions that are totally different from each other in terms of visual image properties. These features cannot adequately represent the variations and hence fail to describe the image content correctly. We advocate the use of features extracted from image regions and represent the images by a set of regional features. In our work, an image is segmented into homogeneous regions using a histogram clustering algorithm. Each image is then represented by a set of regions with region descriptors. Region descriptors consist of feature vectors representing color, texture, area and location of regions. Image similarity is measured by a newly proposed Region Match Distance metric for comparing images by region similarity. Comparison of image retrieval using global and regional features is presented and the advantage of using regional representation is demonstrated.
2 citations
TL;DR: Based on the dark channel principle and Otsu's method, the dual-channel method can automatically select the image gray threshold, which is used to solve the marker recognition problem and achieve long-term monitoring, as discussed by the authors.
Abstract: In the actual bridge monitoring field, the change of illumination is an urgent problem to be solved in digital image technology. At present, when markers are used to measure the displacement of bridges, problems such as changes in image gray value, uneven distribution of light, and surface defects of the material cause identification difficulties. In this paper, a new recognition algorithm called the dual-channel method is proposed. Based on the dark channel principle and Otsu's method, the dual-channel method can automatically select the image gray threshold, which is used to solve the recognition problem and achieve long-term monitoring. Experimental results show that this method can accurately identify the markers under complex lighting conditions and greatly expand the range of usable gray thresholds. Compared with a traditional displacement meter, the measurement errors of the dual-channel method are less than 3%. The new method can achieve real-time, effective bridge monitoring, which provides a theoretical basis for practical engineering.
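The two ingredients the abstract names are both standard: the dark channel (per-pixel minimum over the color channels, eroded over a local window) and Otsu's automatic gray threshold. The sketch below shows those two pieces; how the paper's dual-channel rule actually combines them for marker identification is not given here, so the final thresholding step, the window size, and the function names are assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image_rgb, patch=15):
    """Per-pixel minimum over RGB, then a local minimum (erosion) over a patch."""
    return minimum_filter(image_rgb.min(axis=2), size=patch)

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:t] * levels[:t]).sum() / w0
        mu1 = (p[t:] * levels[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def detect_marker(image_rgb):
    """Assumed combination: dark marker pixels fall below Otsu's threshold of the dark channel."""
    dc = dark_channel(image_rgb)
    return dc < otsu_threshold(dc)
```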
Cited by
TL;DR: The feasibility of using the periocular region as a biometric trait is studied, including the effectiveness of incorporating the eyebrows and the use of side information (left or right) in matching.
Abstract: The term periocular refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric is expected to require less subject cooperation while permitting a larger depth of field compared to traditional ocular biometric traits (viz., iris, retina, and sclera). In this work, we study the feasibility of using the periocular region as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators resulting in a feature set for representing and matching this region. A number of aspects are studied in this work, including the 1) effectiveness of incorporating the eyebrows, 2) use of side information (left or right) in matching, 3) manual versus automatic segmentation schemes, 4) local versus global feature extraction schemes, 5) fusion of face and periocular biometrics, 6) use of the periocular biometric in partially occluded face images, 7) effect of disguising the eyebrows, 8) effect of pose variation and occlusion, 9) effect of masking the iris and eye region, and 10) effect of template aging on matching performance. Experimental results show a rank-one recognition accuracy of 87.32% using 1136 probe and 1136 gallery periocular images taken from 568 different subjects (2 images/subject) in the Face Recognition Grand Challenge (version 2.0) database with the fusion of three different matchers.
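The abstract describes extracting global texture information and local point-based information from the periocular region and matching on the resulting feature sets. The sketch below uses a uniform LBP histogram as a stand-in texture operator and ORB keypoints as a stand-in point operator; the paper's actual operators, matcher, and the chi-square scoring used here are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern, ORB

def periocular_features(gray):
    """Global texture descriptor plus local point descriptors for one periocular crop."""
    # global texture: uniform LBP histogram over the whole periocular region
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    global_feat, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # local points: ORB keypoints and binary descriptors
    orb = ORB(n_keypoints=100)
    orb.detect_and_extract(gray)
    return global_feat, orb.descriptors

def global_score(feat_a, feat_b):
    """Chi-square distance between the two texture histograms (smaller = more similar)."""
    return 0.5 * np.sum((feat_a - feat_b) ** 2 / (feat_a + feat_b + 1e-12))
```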
341 citations
28 Sep 2009
TL;DR: The feasibility of using periocular images of an individual as a biometric trait is studied; global and local information is extracted using texture and point operators, resulting in a feature set that can be used for matching.
Abstract: Periocular biometric refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric does not require high user cooperation and close capture distance unlike other ocular biometrics (e.g., iris, retina, and sclera). We study the feasibility of using periocular images of an individual as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators resulting in a feature set that can be used for matching. The effect of fusing these feature sets is also studied. The experimental results show a 77% rank-1 recognition accuracy using 958 images captured from 30 different subjects.
267 citations
TL;DR: A new and effective image indexing technique is presented that employs local uni-color and bi-color distributions and the local directional distribution of the intensity gradient, and introduces the histogram of directional changes in intensity gradient.
Abstract: In this paper, we present a new and effective image indexing technique that employs local uni-color and bi-color distributions and the local directional distribution of the intensity gradient. The image is divided into 4 by 4 nonoverlapping blocks. Each block, based on its gradient magnitude, is classified as uniform or non-uniform. Using the average of each color component over the pixels of a uniform block, its representative color is found. Then the histogram of uni-color uniform blocks of the image, HUCUB, is constructed. To each non-uniform block, two representative colors are assigned. Then the histogram of bi-color non-uniform blocks, HBCNB, is created. To represent the shape content of the image, the histogram of directional changes in intensity gradient, HDCIG, is introduced. Experimental results on a database of 2250 images are reported.
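The first step of the abstract, classifying 4 by 4 blocks as uniform or non-uniform from their gradient magnitude and accumulating the representative colors of uniform blocks into a HUCUB-style histogram, can be sketched as below. The gradient threshold, the 8-bins-per-channel quantization, and the function name block_index are assumptions; the HBCNB and HDCIG histograms would follow the same looping pattern.

```python
import numpy as np

def block_index(image_rgb, gray, block=4, grad_thresh=20.0, bins=8):
    """Build a histogram of the quantized mean colors of uniform 4x4 blocks."""
    gy, gx = np.gradient(gray.astype(float))
    grad_mag = np.hypot(gx, gy)
    hucub = np.zeros((bins, bins, bins))          # histogram of uni-color uniform blocks
    h, w = gray.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if grad_mag[y:y+block, x:x+block].mean() < grad_thresh:    # uniform block
                mean_color = image_rgb[y:y+block, x:x+block].reshape(-1, 3).mean(axis=0)
                q = np.minimum((mean_color / 256 * bins).astype(int), bins - 1)
                hucub[tuple(q)] += 1               # vote with the block's representative color
    return hucub / max(hucub.sum(), 1)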
74 citations
TL;DR: This chapter reviews important and complex aspects required to handle visual content in healthcare, including calls for file storage standardization, querying procedures, efficient image transmission, realistic databases, global availability, access simplicity, and Internet-based structures.
Abstract: Content-Based Image Retrieval (CBIR) locates, retrieves and displays images similar to one given as a query, using a set of features. It demands accessible data in medical archives and from medical equipment to infer meaning after some processing. A previously encountered problem similar in some sense to that in the target image can aid clinicians. CBIR complements text-based retrieval and improves evidence-based diagnosis, administration, teaching, and research in healthcare. It facilitates visual/automatic diagnosis and decision-making in real-time remote consultation/screening, store-and-forward tests, home care assistance and overall patient surveillance. Metrics help compare visual data and improve diagnosis. Specially designed architectures can benefit from the application scenario. CBIR use calls for file storage standardization, querying procedures, efficient image transmission, realistic databases, global availability, access simplicity, and Internet-based structures. This chapter discusses important and complex aspects required to handle visual content in healthcare.
26 citations
01 Jan 2009
TL;DR: The aim of this paper is to compare global features versus local features for Web image retrieval, and two methods for image retrieval based on visual similarity are proposed.
Abstract: The need for efficient content-based image retrieval has increased hugely. Two methods are recognized for describing the content of images: using global features and using local features. In this paper, we propose two methods for image retrieval based on visual similarity. The first one characterizes images by global features, while the second is based on local features. In the global descriptor, attributes are computed on the whole image, whereas in the local descriptor, attributes are computed on regions of the image. The aim of this paper is to compare global features versus local features for Web image retrieval.
20 citations