Author

Pan Bao-chang

Bio: Pan Bao-chang is an academic researcher from Guangdong University of Technology. The author has contributed to research in the topics of image segmentation and rough sets. The author has an h-index of 2 and has co-authored 7 publications receiving 9 citations.

Papers
Proceedings ArticleDOI
04 Jul 2009
TL;DR: The algorithm avoids over-segmenting the image and speeds up segmentation through fuzzy grid division; it has been applied to tongue image segmentation in Traditional Chinese Medicine (TCM).
Abstract: The paper studies a new theory of fuzzy rough sets and presents a method for segmenting tongue images, proposing an algorithm of Fuzzy Rough Clustering Based on Grid built on this theory. The algorithm extracts condensation points using fuzzy rough set theory, quarters the data space layer by layer, and softens the edges of dense blocks by drawing in condensation points along the borders. The algorithm has been applied to tongue image segmentation in Traditional Chinese Medicine (TCM). The application results indicate that it avoids over-segmentation and speeds up segmentation through fuzzy grid division.
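The abstract gives no pseudocode, but the quartering-and-condensation idea can be sketched roughly as follows. All names, the density threshold, and the recursion depth are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch: recursively quarter the data space and collect
# centroids of dense cells as "condensation points". The min_points
# threshold and depth are assumed values, not taken from the paper.

def quarter_grid(points, bounds, depth, min_points=4):
    """Recursively quarter `bounds` = (xmin, ymin, xmax, ymax) and
    return centroids of cells that stay dense at the finest level."""
    inside = [(x, y) for x, y in points
              if bounds[0] <= x < bounds[2] and bounds[1] <= y < bounds[3]]
    if len(inside) < min_points:
        return []  # sparse cell: contributes no condensation point
    if depth == 0:
        cx = sum(x for x, _ in inside) / len(inside)
        cy = sum(y for _, y in inside) / len(inside)
        return [(cx, cy)]  # centroid of a dense leaf cell
    xmid = (bounds[0] + bounds[2]) / 2
    ymid = (bounds[1] + bounds[3]) / 2
    quads = [(bounds[0], bounds[1], xmid, ymid),
             (xmid, bounds[1], bounds[2], ymid),
             (bounds[0], ymid, xmid, bounds[3]),
             (xmid, ymid, bounds[2], bounds[3])]
    out = []
    for q in quads:
        out.extend(quarter_grid(inside, q, depth - 1, min_points))
    return out

# Two well-separated dense blobs yield two condensation points.
pts = [(0.1 * i, 0.1 * j) for i in range(5) for j in range(5)]
pts += [(8 + 0.1 * i, 8 + 0.1 * j) for i in range(5) for j in range(5)]
centers = quarter_grid(pts, (0.0, 0.0, 10.0, 10.0), depth=2)
```

The paper's fuzzy rough softening of block edges is omitted here; the sketch only shows the layer-by-layer quartering that makes the grid approach fast.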

4 citations

Proceedings ArticleDOI
19 May 2009
TL;DR: The algorithm improved the speed, reliability, and accuracy of TCM tongue diagnosis, and met the requirements of intelligent, digitized diagnosis.
Abstract: The paper studies a new theory of fuzzy rough sets and presents a method for approximately estimating objects within a range, proposing an algorithm of Fuzzy Rough Clustering Based on Grid built on this theory. The algorithm extracts condensation points using fuzzy rough set theory, quarters the data space layer by layer, and softens the edges of dense blocks by drawing in condensation points along the borders. The tongue diagnosis system is large and complex: its data volume is great and the data clusters are uncertain. The algorithm has been applied to rule mining in a Traditional Chinese Medicine (TCM) tongue diagnosis system. The application results indicate that fuzzy grid division speeds up clustering, saving considerable time compared with traditional fuzzy clustering algorithms; the algorithm improved the speed, reliability, and accuracy of TCM tongue diagnosis while meeting the requirements of intelligent, digitized diagnosis.

2 citations

Journal Article
TL;DR: The background subtraction method based on a Gaussian mixture model is improved, achieving fast automatic image segmentation with high precision.
Abstract: Accurate segmentation of an infrared human body is difficult against a complicated background, especially when the gray values of the moving infrared human target and the background are very similar because their temperatures differ only slightly. The background subtraction method based on a Gaussian mixture model is therefore improved. Fine segmentation is implemented by a modified Pulse Coupled Neural Network (PCNN) in its binary stage, with the PCNN segmentation parameters determined by a multi-modal immune evolution algorithm (MIEA). The simulation results show that this algorithm achieves fast automatic segmentation and an ideal, high-precision segmentation effect.
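For orientation, the background-subtraction step can be sketched with a single Gaussian per pixel, a simplification of the mixture model the paper improves; the learning rate, initial variance, and threshold below are assumptions for illustration:

```python
# Simplified per-pixel Gaussian background model (a single Gaussian,
# not the paper's mixture). Frames are flat lists of gray values;
# alpha, k, and the initial variance are illustrative assumptions.

class GaussianBackground:
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = [float(p) for p in first_frame]  # per-pixel mean
        self.var = [225.0] * len(first_frame)        # per-pixel variance
        self.alpha = alpha                           # learning rate
        self.k = k                                   # threshold in std devs

    def apply(self, frame):
        """Return a binary foreground mask and update the model."""
        mask = []
        for i, p in enumerate(frame):
            d = p - self.mean[i]
            if d * d > (self.k ** 2) * self.var[i]:
                mask.append(1)  # far from the background model: foreground
            else:
                mask.append(0)
                # update background statistics only where no motion is seen
                self.mean[i] += self.alpha * d
                self.var[i] += self.alpha * (d * d - self.var[i])
        return mask

bg = GaussianBackground([50, 50, 50, 50])
mask = bg.apply([52, 49, 200, 51])  # pixel 2 jumps far above the model
```

The paper's contribution (PCNN fine segmentation with MIEA-tuned parameters) operates downstream of a mask like this one and is not reproduced here.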

1 citation

Journal Article
TL;DR: A new PSO algorithm uses mutation and self-adjusting parameters, which are dynamically tuned according to an evaluation of the swarm's overall capability, so the algorithm searches quickly in the early phase.
Abstract: Particle swarm optimization is an effective stochastic global optimization algorithm, but the classical PSO algorithm easily becomes trapped in local minima. The paper proposes a new PSO algorithm that uses mutation and self-adjusting parameters. By introducing a particle swarm evaluation, all parameters of the PSO algorithm can be dynamically adjusted according to the swarm's overall capability, so the algorithm searches quickly in the early phase. At the same time, the best result found by the particles is mutated with a dynamically adjustable probability, which preserves the diversity of the particles and prevents the algorithm from plunging into local minima. Experimental results on three common test functions show the validity of the algorithm.
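The general scheme described above can be sketched as follows. The specific adaptation rule (inertia weight tied to swarm spread) and the mutation probability are illustrative assumptions, not the paper's exact formulas:

```python
import random

# Sketch of PSO with a mutation step and an inertia weight adapted from
# a simple swarm-spread evaluation; the adaptation rule and mutation
# probability here are assumptions for illustration only.

def pso_minimize(f, dim=2, n=20, iters=100, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        # evaluate the swarm: shrink inertia as particles converge
        spread = sum(abs(pos[i][d] - gbest[d])
                     for i in range(n) for d in range(dim)) / (n * dim)
        w = 0.4 + min(0.5, 0.1 * spread)
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                # occasional mutation keeps diversity, fighting local minima
                if rng.random() < 0.01:
                    pos[i][d] = rng.uniform(-5, 5)
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, val = pso_minimize(lambda p: sum(x * x for x in p))  # sphere function
```

On the 2-D sphere function this sketch converges toward the origin; the paper's version additionally adapts the acceleration coefficients, which are fixed at 1.5 here.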

1 citation

Proceedings ArticleDOI
08 Dec 2008
TL;DR: The proposed algorithm locates key facial features in color images using the color information of the features; it is insensitive to variations in expression and pose, and its feature location accuracy is 95.52%.
Abstract: Accurately locating facial features is a critical step in face recognition. A new algorithm for locating key facial features in color images using the color information of the features is presented. Since the R, G, and B components of the eye region are very close, the sum of the pairwise differences of the RGB components is calculated as the eye feature, and the eye center points are then located using prior information. Combining redness with this RGB difference sum, the whole mouth is effectively extracted to locate the mouth center. Experimental results on four databases with 5649 images indicate that the proposed method is insensitive to variations in expression and pose, and that its feature location accuracy is 95.52%.
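The eye cue described above — that the R, G, and B components are nearly equal in eye regions — can be sketched directly. The threshold and example pixel values below are illustrative assumptions:

```python
# Sketch of the color cue from the abstract: eye-region pixels have
# nearly equal R, G, B components, so the summed pairwise differences
# are small there. The threshold is an assumed value.

def rgb_difference(pixel):
    """Sum of pairwise absolute differences of the RGB components."""
    r, g, b = pixel
    return abs(r - g) + abs(g - b) + abs(b - r)

def eye_candidate_mask(image, threshold=30):
    """Mark pixels whose RGB components are nearly equal."""
    return [[1 if rgb_difference(p) < threshold else 0 for p in row]
            for row in image]

# dark gray (eye-like), skin tone, reddish lip tone
row = [(40, 42, 41), (180, 120, 110), (200, 60, 70)]
mask = eye_candidate_mask([row])
```

The paper then refines such candidates with prior information on eye positions, and detects the mouth by combining redness with the same difference measure; those steps are not shown here.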

1 citation


Cited by
Journal ArticleDOI
01 Jul 2014
TL;DR: In this paper, previous rough-set-based image segmentation methods are summarized in detail and categorized; rough-set-based image segmentation provides a stable and better framework for image segmentation.
Abstract: In the domain of image processing, image segmentation has become one of the key applications involved in most image-based operations. Image segmentation refers to the process of breaking or partitioning an image. Like several other image processing operations, however, image segmentation faces problems and issues when the segmentation process becomes much more complicated. Previous work has shown that rough set theory can be a useful method for overcoming such complications during image segmentation. Rough set theory helps achieve very fast convergence and avoid the local minima problem, thereby enhancing the performance of EM and yielding better results. During rough-set-theoretic rule generation, each band is individualized using fuzzy-correlation-based gray-level thresholding. The use of rough sets in image segmentation can therefore be very useful. In this paper, previous rough-set-based image segmentation methods are summarized in detail and categorized accordingly. Rough-set-based image segmentation provides a stable and better framework for image segmentation.
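The rough-set machinery this survey builds on reduces to lower and upper approximations of a target set by equivalence classes; a minimal sketch (with an illustrative toy universe of pixels) is:

```python
# Minimal sketch of rough-set lower/upper approximations.
# The equivalence classes and target set below are illustrative only.

def approximations(equiv_classes, target):
    """Lower approximation: union of classes fully inside `target`.
    Upper approximation: union of classes intersecting `target`."""
    lower, upper = set(), set()
    for cls in equiv_classes:
        if cls <= target:
            lower |= cls   # certainly in the target concept
        if cls & target:
            upper |= cls   # possibly in the target concept
    return lower, upper

# Pixels 1..6 grouped (say, by gray level); target = "object" pixels.
classes = [{1, 2}, {3, 4}, {5, 6}]
target = {1, 2, 3}
lower, upper = approximations(classes, target)
boundary = upper - lower   # the ambiguous region a segmenter must resolve
```

In segmentation terms, the boundary region is exactly the set of ambiguous pixels where methods like the fuzzy-correlation thresholding mentioned above do their work.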

77 citations

Book ChapterDOI
01 Jan 2015
TL;DR: This chapter describes the state of the art in combinations of fuzzy and rough sets, divided into three parts.
Abstract: Fuzzy sets and rough sets are known as uncertainty models. They were proposed to treat different aspects of uncertainty, so it is natural to combine them to build more powerful mathematical tools for treating problems under uncertainty. In this chapter, we describe the state of the art in combinations of fuzzy and rough sets, divided into three parts.

16 citations

Journal ArticleDOI
TL;DR: A novel method is suggested that applies multi-objective greedy rules and fuses color and space information in order to extract the tongue image accurately.
Abstract: A tongue image with coating has important clinical diagnostic meaning, but traditional tongue image extraction methods are not competent for extracting tongue images with thick coating. In this paper, a novel method is suggested that applies multi-objective greedy rules and fuses color and space information in order to extract the tongue image accurately. A comparative study of several contemporary tongue image extraction methods is also made in terms of accuracy and efficiency. The experimental results show that the geodesic active contour is quite slow and inaccurate, and that the other three methods achieve fairly good segmentation results except in the case of a tongue with thick coating, whereas our method achieves ideal segmentation results for all types of tongue images, with efficiency acceptable for the application of quantitative tongue image examination.

9 citations

Posted ContentDOI
03 Jul 2015 - viXra
TL;DR: This paper discusses rough sets and fuzzy rough sets with their applications in data mining, which can handle uncertain and vague data so as to reach meaningful conclusions.
Abstract: Rough set theory is a new method for dealing with the vagueness and uncertainty emphasized in decision making. The theory provides a practical approach for extracting valid rules from data. This paper discusses rough sets and fuzzy rough sets with their applications in data mining, which can handle uncertain and vague data so as to reach meaningful conclusions.

8 citations

Dissertation
13 Sep 2010
TL;DR: A Bayesian Belief Network (BBN) is proposed for recognizing facial activities, such as facial expressions and facial action units, together with a morphable partial face model, named SFAM, based on Principal Component Analysis.
Abstract: This Ph.D. thesis is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Facial expression plays an important role both in verbal and non-verbal communication and in expressing emotions; automatic facial expression recognition thus has various purposes and applications, and in particular is at the heart of "intelligent" human-centered human/computer (robot) interfaces. Meanwhile, automatic landmarking provides a priori knowledge of the location of face landmarks, which is required by many face analysis methods, such as the face segmentation and feature extraction used, for instance, in expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, finally proposing an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are the following. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model learns both the global variations in face landmark configuration and the local variations in texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying the model parameters. Secondly, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces.
Thirdly, we have designed a Bayesian Belief Network whose structure describes the causal relationships among subjects, expressions, and facial features. Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the state with the maximum belief. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, which characterizes the geometric property of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE, and Bosphorus datasets for facial landmarking, as well as on the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.

5 citations