scispace - formally typeset
Author

Cheng-Tzu Wang

Other affiliations: Fo Guang University
Bio: Cheng-Tzu Wang is an academic researcher from National Taipei University of Education. The author has contributed to research in topics: Codebook & Cluster analysis. The author has an h-index of 3 and has co-authored 9 publications receiving 141 citations. Previous affiliations of Cheng-Tzu Wang include Fo Guang University.

Papers
Proceedings ArticleDOI
20 Aug 2006
TL;DR: Two detectors, one for faces and the other for license plates, are proposed, both based on a modified convolutional neural network (CNN) verifier; pyramid-based localization techniques fuse the candidates and identify the regions of faces or license plates.
Abstract: In this paper, two detectors, one for faces and the other for license plates, are proposed, both based on a modified convolutional neural network (CNN) verifier. In the proposed verifier, a single feature map and a fully connected MLP were trained by examples to classify the possible candidates. Pyramid-based localization techniques were applied to fuse the candidates and to identify the regions of faces or license plates. In addition, geometrical rules filter out false alarms in license plate detection. Experimental results are given to show the effectiveness of the approach. Keywords: face detection, license plate detection, convolutional neural network, feature map.

84 citations
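The pyramid-based candidate generation described in the abstract above can be illustrated with a toy sketch. This is not the paper's trained CNN verifier: the `verifier` argument below is a stand-in callable, and names such as `build_pyramid` and `detect` are my own. The sketch only shows the mechanics of scanning a fixed window over successively downscaled images and mapping hits back to original-image coordinates so candidates from all scales can be fused.

```python
import numpy as np

def build_pyramid(image, scale=0.5, min_size=8):
    """Return successively downscaled copies of a 2-D image (nearest neighbour)."""
    levels = [image]
    while min(levels[-1].shape) * scale >= min_size:
        prev = levels[-1]
        h, w = int(prev.shape[0] * scale), int(prev.shape[1] * scale)
        rows = (np.arange(h) / scale).astype(int)
        cols = (np.arange(w) / scale).astype(int)
        levels.append(prev[rows[:, None], cols])
    return levels

def detect(image, verifier, win=8, stride=4, scale=0.5):
    """Slide a fixed window over every pyramid level and collect accepted
    windows as (y, x, size) boxes in original-image coordinates."""
    candidates = []
    factor = 1.0                       # cumulative scale of the current level
    for level in build_pyramid(image, scale):
        for y in range(0, level.shape[0] - win + 1, stride):
            for x in range(0, level.shape[1] - win + 1, stride):
                if verifier(level[y:y + win, x:x + win]):
                    # rescale the box back to the original image
                    candidates.append((int(y / factor), int(x / factor),
                                       int(win / factor)))
        factor *= scale
    return candidates
```

A real system would replace the verifier with the trained CNN and then fuse overlapping boxes (and, for plates, apply the geometrical rules the abstract mentions); both steps are omitted here.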

Journal ArticleDOI
TL;DR: In this paper, the distance between a point and the nearest feature line (NFL) or the nearest feature space (NFS) is embedded in the transformation through discriminant analysis; three factors, namely class separability, neighborhood structure preservation, and the NFS measurement, are considered to find the most effective and discriminating transformation in eigenspaces.
Abstract: Face recognition algorithms often have to solve problems such as facial pose, illumination, and expression (PIE). To reduce these impacts, many researchers have tried to find the best discriminant transformation in eigenspaces, either linear or nonlinear, to obtain better recognition results. Various researchers have also designed novel matching algorithms to reduce the PIE effects. In this study, a nearest feature space embedding (called NFS embedding) algorithm is proposed for face recognition. The distance between a point and the nearest feature line (NFL) or the NFS is embedded in the transformation through discriminant analysis. Three factors, including class separability, neighborhood structure preservation, and the NFS measurement, were considered to find the most effective and discriminating transformation in eigenspaces. The proposed method was evaluated on several benchmark databases and compared with several state-of-the-art algorithms, which it outperformed in the reported comparisons.

40 citations
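The point-to-feature-line distance that the NFS embedding builds on has a standard closed form: project the query onto the line through two prototype points and measure the residual. The sketch below shows that geometry only; the function names are my own, and the full method's discriminant-analysis embedding is not reproduced here.

```python
import numpy as np
from itertools import combinations

def nfl_distance(x, f1, f2):
    """Distance from query point x to the feature line through prototypes f1, f2."""
    d = f2 - f1
    t = np.dot(x - f1, d) / np.dot(d, d)   # projection parameter along the line
    p = f1 + t * d                          # foot of the perpendicular
    return np.linalg.norm(x - p)

def nearest_feature_line(x, prototypes):
    """Smallest point-to-line distance over all prototype pairs of one class."""
    return min(nfl_distance(x, a, b) for a, b in combinations(prototypes, 2))
```

Note that the feature line extends the segment between the two prototypes, so a class with k prototypes contributes k(k-1)/2 candidate lines.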

Journal ArticleDOI
TL;DR: A hybrid approach to vector quantization (VQ) is proposed that combines centroid neural network adaptive resonance theory (CNN-ART) with the enhanced Linde-Buzo-Gray (LBG) algorithm to obtain a better codebook.

13 citations

Proceedings ArticleDOI
15 Jul 2012
TL;DR: The entropy ratio is proposed to represent the navigability of web pages, and experimental results show that the entropy ratio is closely related to the characteristics of a web page.
Abstract: Usability is critical to the success of a website, and good navigability enhances usability. Hence navigability is a central issue in designing websites, and many navigability measures have been proposed from different perspectives. Applying information theory, a stack-based Markov model is proposed to represent the structure of a website and to capture more surfing behavior. Dynamic user log data are used to evaluate the navigability of a web page, and the entropy ratio is proposed to represent it. Experimental results show that the entropy ratio is closely related to the characteristics of a web page; applying it, a web page can be classified as well or poorly designed for navigation.

3 citations
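The abstract does not give the exact formula for the entropy ratio, so the sketch below is one plausible reading, not the paper's definition: the Shannon entropy of a page's out-link click distribution, normalized by the maximum entropy log2(k) for k links. A ratio near 1 means user clicks spread evenly over the links; near 0 means one link dominates.

```python
import math

def entropy_ratio(click_counts):
    """Normalized Shannon entropy of a page's out-link click distribution.
    click_counts[i] = observed clicks on the page's i-th link.
    NOTE: an illustrative assumption, not the paper's exact measure."""
    total = sum(click_counts)
    probs = [c / total for c in click_counts if c > 0]
    h = -sum(p * math.log2(p) for p in probs)          # Shannon entropy (bits)
    k = len(click_counts)
    return h / math.log2(k) if k > 1 else 0.0          # normalize by max entropy
```

For example, a navigation hub whose four links are clicked equally gets ratio 1.0, while a page whose visitors all leave through one link gets 0.0.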

Proceedings ArticleDOI
15 Jul 2012
TL;DR: The design of an intelligent decision support system, based on Service-Oriented Architecture (SOA) guidelines, is proposed to help solve nurse rostering problems with high flexibility, efficiency, and effectiveness.
Abstract: The current crisis in hospital management is forcing hospital executives to operate their organizations in a more business-like manner. The nurse rostering problem (NRP) is an important ongoing staff-scheduling problem with multiple decision criteria that must be considered in order to provide high-quality healthcare service, and today's hospital administrations pay it great attention. In this research, the design of an intelligent decision support system, based on Service-Oriented Architecture (SOA) guidelines, is proposed to help solve nurse rostering problems with high flexibility, efficiency, and effectiveness. The design uses three evolutionary computation algorithms (AIS, GA, and PSO) as exchangeable intelligent planning and scheduling mechanisms for rostering nursing staff.

2 citations
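Of the three exchangeable mechanisms the abstract names, a genetic algorithm (GA) is the easiest to sketch. The toy below is my own illustration, not the paper's system: a roster is a shift letter per nurse per day, fitness counts coverage shortfalls plus night-into-day patterns, and evolution keeps the better half of the population while breeding mutated children. The constraint set and parameters are invented for illustration.

```python
import random

SHIFTS = ["D", "E", "N", "-"]          # day, evening, night, off
NURSES, DAYS = 4, 7
REQUIRED = {"D": 1, "E": 1, "N": 1}    # minimum staff per shift per day (assumed)

def fitness(roster):
    """Penalty count: unmet daily coverage plus night-followed-by-day shifts."""
    penalty = 0
    for d in range(DAYS):
        column = [roster[n][d] for n in range(NURSES)]
        for shift, need in REQUIRED.items():
            penalty += max(0, need - column.count(shift))
    for n in range(NURSES):
        for d in range(DAYS - 1):
            if roster[n][d] == "N" and roster[n][d + 1] == "D":
                penalty += 1           # forbid a day shift right after a night
    return penalty

def evolve(generations=300, pop_size=30, seed=0):
    """Elitist GA: keep the best half, fill up with crossover + point mutation."""
    rng = random.Random(seed)
    rand_roster = lambda: [[rng.choice(SHIFTS) for _ in range(DAYS)]
                           for _ in range(NURSES)]
    pop = [rand_roster() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            child = [[rng.choice((a[n][d], b[n][d])) for d in range(DAYS)]
                     for n in range(NURSES)]
            n, d = rng.randrange(NURSES), rng.randrange(DAYS)
            child[n][d] = rng.choice(SHIFTS)     # point mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```

Real rosters add many more constraints (contracted hours, weekend fairness, preferences), which is why the paper treats the solver as a swappable service behind an SOA interface.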


Cited by
Journal ArticleDOI
TL;DR: Deep Convolutional Neural Networks (CNNs) are a special type of neural network that has shown exemplary performance in several competitions related to computer vision and image processing.
Abstract: Deep Convolutional Neural Network (CNN) is a special type of Neural Networks, which has shown exemplary performance on several competitions related to Computer Vision and Image Processing. Some of the exciting application areas of CNN include Image Classification and Segmentation, Object Detection, Video Processing, Natural Language Processing, and Speech Recognition. The powerful learning ability of deep CNN is primarily due to the use of multiple feature extraction stages that can automatically learn representations from the data. The availability of a large amount of data and improvement in the hardware technology has accelerated the research in CNNs, and recently interesting deep CNN architectures have been reported. Several inspiring ideas to bring advancements in CNNs have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and architectural innovations. However, the significant improvement in the representational capacity of the deep CNN is achieved through architectural innovations. Notably, the ideas of exploiting spatial and channel information, depth and width of architecture, and multi-path information processing have gained substantial attention. Similarly, the idea of using a block of layers as a structural unit is also gaining popularity. This survey thus focuses on the intrinsic taxonomy present in the recently reported deep CNN architectures and, consequently, classifies the recent innovations in CNN architectures into seven different categories. These seven categories are based on spatial exploitation, depth, multi-path, width, feature-map exploitation, channel boosting, and attention. Additionally, the elementary understanding of CNN components, current challenges, and applications of CNN are also provided.

1,328 citations

Journal ArticleDOI
TL;DR: An overview of the mainstream deep learning approaches and research directions proposed over the past decade is provided and some perspective into how it may evolve is presented.
Abstract: This article provides an overview of the mainstream deep learning approaches and research directions proposed over the past decade. It is important to emphasize that each approach has strengths and weaknesses, depending on the application and context in which it is being used. Thus, this article presents a summary on the current state of the deep machine learning field and some perspective into how it may evolve. Convolutional Neural Networks (CNNs) and Deep Belief Networks (DBNs) (and their respective variations) are focused on primarily because they are well established in the deep learning field and show great promise for future work.

1,103 citations

Journal ArticleDOI
TL;DR: This paper offers to researchers a link to a public image database to define a common reference point for LPR algorithmic assessment and issues such as processing time, computational power, and recognition rate are addressed.
Abstract: License plate recognition (LPR) algorithms in images or videos are generally composed of the following three processing steps: 1) extraction of a license plate region; 2) segmentation of the plate characters; and 3) recognition of each character. This task is quite challenging due to the diversity of plate formats and the nonuniform outdoor illumination conditions during image acquisition. Therefore, most approaches work only under restricted conditions such as fixed illumination, limited vehicle speed, designated routes, and stationary backgrounds. Numerous techniques have been developed for LPR in still images or video sequences, and the purpose of this paper is to categorize and assess them. Issues such as processing time, computational power, and recognition rate are also addressed, when available. Finally, this paper offers to researchers a link to a public image database to define a common reference point for LPR algorithmic assessment.

575 citations
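The three-stage pipeline in the abstract above can be sketched with deliberately naive stand-ins: here stage 1 is reduced to binarization of an already-cropped plate, and stage 2 splits characters on empty columns of the vertical projection profile. Stage 3 (character recognition) is omitted. This is an illustration of the pipeline's shape, not any surveyed system, and all function names are my own.

```python
import numpy as np

def extract_plate(image, thresh=0.5):
    """Stage 1 (toy): binarize dark ink to 1 on a grayscale plate crop.
    A real system would first locate the plate region in the full frame."""
    return (image < thresh).astype(int)

def segment_chars(binary):
    """Stage 2: split characters on empty columns of the vertical projection."""
    cols = binary.sum(axis=0)          # ink pixels per column
    chars, start = [], None
    for x, c in enumerate(cols):
        if c > 0 and start is None:
            start = x                  # a character run begins
        elif c == 0 and start is not None:
            chars.append(binary[:, start:x])
            start = None               # the run ended at an empty column
    if start is not None:
        chars.append(binary[:, start:])
    return chars
```

Projection-profile segmentation is exactly the kind of step the survey notes breaks down under nonuniform illumination or touching characters, which motivates the more robust techniques it categorizes.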

Journal ArticleDOI
TL;DR: A new method based on the firefly algorithm, called the FF-LBG algorithm, is proposed to construct the codebook for vector quantization; the reconstructed images have higher quality than those generated by LBG, PSO, and QPSO, with no significant superiority over the HBMO algorithm.
Abstract: Vector quantization (VQ) is a powerful technique in digital image compression. Traditional, widely used methods such as the Linde-Buzo-Gray (LBG) algorithm tend to generate only locally optimal codebooks. Recently, particle swarm optimization (PSO) has been adopted to obtain near-globally optimal codebooks for vector quantization, and an alternative method, quantum particle swarm optimization (QPSO), was developed to improve on the original PSO algorithm. The honey bee mating optimization (HBMO) algorithm has also been applied to vector quantization. In this paper, we propose a new method based on the firefly (FF) algorithm to construct the codebook for vector quantization. The proposed method, called the FF-LBG algorithm, uses the LBG method to initialize the FF algorithm. The FF-LBG algorithm is compared with four other methods: LBG, particle swarm optimization, quantum particle swarm optimization, and honey bee mating optimization. Experimental results show that the proposed FF-LBG algorithm is faster than the other four methods. Furthermore, the reconstructed images have higher quality than those generated by LBG, PSO, and QPSO, though FF-LBG shows no significant superiority over the HBMO algorithm.

247 citations
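The LBG baseline that FF-LBG starts from is classic alternating optimization (k-means on the training vectors): assign each vector to its nearest codeword, then move each codeword to the centroid of its cell. The sketch below shows only that baseline, not the firefly refinement, and the function name is my own.

```python
import numpy as np

def lbg(vectors, k, iters=20, seed=0):
    """Train a k-codeword VQ codebook with the Linde-Buzo-Gray iteration."""
    rng = np.random.default_rng(seed)
    # initialize codewords from k distinct training vectors
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    for _ in range(iters):
        # assignment step: squared distance of every vector to every codeword
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # update step: move each codeword to the centroid of its cell
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, labels
```

Because each step only decreases the total distortion, LBG stalls at a local optimum, which is precisely the weakness the abstract says swarm-style methods (PSO, QPSO, HBMO, FF) try to escape.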