Author

K.-H. Lin

Bio: K.-H. Lin is an academic researcher. The author has contributed to research on fractal dimension and facial recognition systems. The author has an h-index of 1, having co-authored 1 publication receiving 57 citations.

Papers
Journal ArticleDOI
01 Dec 2001
TL;DR: A modified approach to estimating fractal dimensions that is less sensitive to lighting conditions and provides information about the orientation of an image under consideration is proposed.
Abstract: Facial feature extraction is an important step in many applications such as human face recognition, video conferencing, surveillance systems, and human-computer interfacing. The eye is the most important facial feature, so a reliable and fast method for locating the eye pairs in an image is vital to many practical applications. A new method for locating eye pairs based on valley field detection and measurement of fractal dimensions is proposed. Possible eye candidates in an image with a complex background are identified by valley field detection. The eye candidates are then grouped to form eye pairs if they satisfy the local properties expected of eyes. Two eyes are matched if they have similar roughness and orientation as represented by fractal dimensions. A modified approach to estimating fractal dimensions is proposed that is less sensitive to lighting conditions and provides information about the orientation of the image under consideration. Possible eye pairs are further verified by comparing the fractal dimensions of the eye-pair window and the corresponding face region with the respective means of the fractal dimensions of eye-pair windows and face regions. These means are obtained from a number of facial images in a database. Experiments have shown that this approach is fast and reliable.
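The paper's modified fractal-dimension estimator is not detailed in this summary, but the underlying idea — characterizing roughness by how occupied-box counts scale with box size — can be illustrated with the classical box-counting estimate on a binary image. This is a generic sketch, not the authors' modified method:

```python
import numpy as np

def box_counting_dimension(binary_img, box_sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a 2-D binary image.

    For each box size s, count the number N(s) of s-by-s boxes containing
    at least one foreground pixel, then fit the slope of
    log N(s) versus log(1/s).
    """
    counts = []
    for s in box_sizes:
        # Trim so the image tiles evenly into s x s boxes.
        h, w = (binary_img.shape[0] // s) * s, (binary_img.shape[1] // s) * s
        trimmed = binary_img[:h, :w]
        # Reduce each s x s box to a single "occupied?" flag.
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(max(boxes.sum(), 1))
    # The slope of log N(s) vs log(1/s) is the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square is 2-dimensional.
square = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(square), 2))  # → 2.0
```

A rough eye region (edges, lashes, iris texture) yields a higher estimated dimension than a smooth cheek region, which is what makes the measure usable for matching candidate eyes.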

57 citations


Cited by
Journal ArticleDOI
TL;DR: A new model is proposed to assign the smallest number of boxes needed to cover the entire image surface at each selected scale, thereby improving the accuracy of fractal dimension estimation.

456 citations

Journal ArticleDOI
TL;DR: Two modified Hausdorff distances, namely, SEWHD and SEW2HD, are proposed, which incorporate information about the location of important facial features such as the eyes, mouth, and face contour so that distances at those regions are emphasized.

80 citations

Journal ArticleDOI
TL;DR: In this paper, the acceleration signals acquired by a sensor were decomposed into a series of intrinsic mode functions (IMFs) by the adaptive analysis method named ensemble empirical mode decomposition (EEMD), and the IMFs which contain the feature information of the milling process were selected as the analyzed signals.
Abstract: Chatter is a kind of self-excited unstable vibration during machining, which leads to multiple negative effects such as poor surface quality, dimensional accuracy errors, excessive noise, and tool wear. To monitor the state of the milling process and detect chatter in a timely manner, a novel online chatter detection method was proposed. In the proposed method, the acceleration signals acquired by a sensor were decomposed into a series of intrinsic mode functions (IMFs) by the adaptive analysis method named ensemble empirical mode decomposition (EEMD), and the IMFs which contain the feature information of the milling process were selected as the analyzed signals. Two indicators, power spectral entropy and fractal dimension (the latter obtained by the morphological covering method), are introduced to detect chatter features, so that both the frequency characteristics and the morphological features of the extracted signals are reflected. To verify the approach, milling experiments were performed; the results show that the proposed method can detect chatter effectively and in time, which is important for improving milling quality. Finally, an online milling chatter monitoring system was developed.
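Of the two indicators above, power spectral entropy is straightforward to sketch: chatter concentrates vibration energy in a few frequency bins, lowering the entropy of the normalized power spectrum. The EEMD decomposition step is omitted here; this minimal illustration applies the indicator directly to a raw signal:

```python
import numpy as np

def power_spectral_entropy(signal):
    """Shannon entropy of the normalized power spectrum of a 1-D signal.

    A near-periodic (chatter-dominated) signal concentrates power in a
    few frequency bins, giving LOW entropy; broadband stable-cutting
    vibration spreads power across bins, giving HIGH entropy.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    p = spectrum / spectrum.sum()   # normalize into a distribution
    p = p[p > 0]                    # drop empty bins (guard against log 0)
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
tone = np.sin(2 * np.pi * 120 * t)   # chatter-like single-frequency vibration
noise = rng.standard_normal(1024)    # broadband, stable-cutting-like vibration
print(power_spectral_entropy(tone) < power_spectral_entropy(noise))  # → True
```

In a detection scheme, the entropy of each selected IMF would be tracked over time and a drop below a threshold flagged as chatter onset; the threshold itself must be calibrated per machine and tool.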

68 citations

Journal ArticleDOI
TL;DR: A new definition is given for computing the fractal dimension of a subset with respect to any fractal structure, which completes the theory of the classical box-counting dimension and allows one to classify and distinguish a much larger number of topological spaces than the classical definition.

66 citations

Journal ArticleDOI
TL;DR: The status of differential box counting methods is summarized, some of the state-of-the-art methods are implemented, and possible future directions are explored.
Abstract: Fractal dimension has been used extensively for many years as a feature in computer vision applications to characterize the roughness and self-similarity of objects in an image. It has been adopted successfully mainly in texture segmentation and classification. The differential box counting method is one of the widely accepted approaches in the literature for estimating the fractal dimension of an image. In this work, we comprehensively review the available differential box counting methods. First, the differential box counting method is discussed in detail along with its computer vision applications and drawbacks. Second, various variants of the differential box counting method are thoroughly studied and grouped according to its different parameters. Third, the synthetic and real-world databases used by state-of-the-art methods to demonstrate experimental results are presented. Fourth, some of the state-of-the-art methods are implemented and the corresponding results obtained in this study are reported. Fifth, three evaluation metrics are reviewed. These metrics, however, work only for synthetic fractional Brownian motion images, because the theoretical fractal dimension values for such images are known and serve as ground truth. Finally, we conclude on the status of differential box counting methods and explore possible future directions.
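The differential box counting (DBC) method reviewed above extends box counting to grayscale images by treating intensity as a third axis and counting, per grid cell, how many gray-level boxes the cell's intensity range spans. Exact box-height and rounding conventions differ between the variants the review surveys; the sketch below follows one common formulation:

```python
import numpy as np

def differential_box_counting(gray_img, sizes=(2, 4, 8, 16)):
    """Differential box counting (DBC) estimate of an image's fractal dimension.

    At each grid scale s, the M-by-M image is split into s-by-s cells and
    the gray-level axis into boxes of height h = s * G / M (G = number of
    gray levels).  Each cell contributes the number of boxes spanned
    between its minimum and maximum intensity; the dimension is the slope
    of log N(r) versus log(1/r), where r = s / M.
    """
    M = gray_img.shape[0]
    G = 256.0
    counts, scales = [], []
    for s in sizes:
        h = s * G / M               # box height along the gray-level axis
        n = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                cell = gray_img[i:i + s, j:j + s]
                # Number of gray-level boxes spanned by this cell (at least 1).
                n += int(np.ceil((cell.max() + 1) / h) - np.ceil(cell.min() / h)) or 1
        counts.append(n)
        scales.append(s / M)
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

# A smooth intensity ramp behaves like a plane: dimension near 2,
# while a rough texture would push the estimate toward 3.
img = np.tile(np.arange(64, dtype=float), (64, 1))
print(2.0 <= differential_box_counting(img) <= 3.0)  # → True
```

This naive implementation reproduces the drawbacks the review discusses — sensitivity to the box-height convention at cell boundaries and over- or under-counting for cells whose range straddles a box edge — which is precisely what the surveyed variants try to correct.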

56 citations