Author

Hung-Hsin Chang

Bio: Hung-Hsin Chang is an academic researcher from the University of Sydney. The author has contributed to research in topics: Skeletonization & Constrained Delaunay triangulation. The author has an h-index of 5, co-authored 5 publications receiving 156 citations.

Papers
Journal ArticleDOI
01 Feb 1999
TL;DR: Four new techniques are developed: 1) a new thinning algorithm based on Euclidean distance transformation and gradient-oriented tracing, 2) a new line approximation method based on curvature segmentation, 3) artifact removal strategies based on geometrical analysis, and 4) stroke segmentation rules based on splitting, merging and directional analysis.
Abstract: Most handwritten Chinese character recognition systems suffer from the variations in geometrical features for different writing styles. The stroke structures of different styles have proved to be more consistent than geometrical features. In an on-line recognition system, the stroke structure can be obtained according to the sequences of writing via a pen-based input device such as a tablet. But in an off-line recognition system, the input characters are scanned optically and saved as raster images, so the stroke structure information is not available. In this paper, we propose a method to extract strokes from an off-line handwritten Chinese character. We have developed four new techniques: 1) a new thinning algorithm based on Euclidean distance transformation and gradient-oriented tracing, 2) a new line approximation method based on curvature segmentation, 3) artifact removal strategies based on geometrical analysis, and 4) stroke segmentation rules based on splitting, merging and directional analysis. Using these techniques, we can extract and trace the strokes in an off-line handwritten Chinese character accurately and efficiently.
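As a rough illustration of the first technique, the sketch below (NumPy and SciPy assumed) computes a Euclidean distance transform and keeps its ridge pixels as skeleton candidates; the paper's gradient-oriented tracing, which links such ridges into strokes, is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def distance_ridge(binary: np.ndarray) -> np.ndarray:
    """Rough skeleton candidates: ridge pixels of the Euclidean DT."""
    dist = ndimage.distance_transform_edt(binary)
    # Keep foreground pixels whose distance value is maximal in their
    # 3x3 neighbourhood (a crude stand-in for gradient-oriented tracing).
    local_max = ndimage.maximum_filter(dist, size=3)
    return (dist > 0) & (dist >= local_max)

# Example: a thick horizontal bar collapses to a thin medial ridge.
img = np.zeros((20, 40), dtype=bool)
img[8:13, 5:35] = True
print(int(distance_ridge(img).sum()), "ridge pixels")
```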

65 citations

Journal ArticleDOI
TL;DR: A new curve-fitting algorithm for vectorizing hand-drawn key frames in a computer-aided cartooning system; the algorithm fits piecewise cubic Bezier curves and satisfies geometrical continuity at non-corner knots.
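A minimal sketch of the kind of fitting the TL;DR describes, namely fitting one cubic Bezier segment to sampled stroke points by least squares with fixed endpoints and chord-length parameterisation (NumPy assumed; the paper's continuity handling across knots is omitted):

```python
import numpy as np

def fit_cubic_bezier(pts: np.ndarray):
    """pts: (n, 2) array of points; returns the four control points."""
    # Chord-length parameter values in [0, 1].
    d = np.cumsum(np.r_[0.0, np.linalg.norm(np.diff(pts, axis=0), axis=1)])
    t = d / d[-1]
    p0, p3 = pts[0], pts[-1]
    b0 = (1 - t) ** 3
    b1 = 3 * (1 - t) ** 2 * t
    b2 = 3 * (1 - t) * t ** 2
    b3 = t ** 3
    # Solve A @ [P1, P2] = rhs in the least-squares sense.
    A = np.stack([b1, b2], axis=1)                   # (n, 2)
    rhs = pts - np.outer(b0, p0) - np.outer(b3, p3)  # (n, 2)
    (p1, p2), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return p0, p1, p2, p3

pts = np.column_stack([np.linspace(0, 1, 30),
                       np.sin(np.linspace(0, np.pi, 30)) * 0.3])
print(fit_cubic_bezier(pts))
```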

57 citations

Journal ArticleDOI
TL;DR: A new skeletonization method based on a novel concept, discrete local symmetry, is presented; it produces correct centre lines and junctions and is efficient and robust against noise.

24 citations

Journal ArticleDOI
TL;DR: This paper introduces an approach to Euclidean distance transformation that achieves better accuracy than D4, D8, or octagonal distance transformation, and develops a fast method to compute the Euclidean distance transformation.
Abstract: In this paper we present a new thinning algorithm based on distance transformation. Because the choice of a distance measure will influence the result of skeletonization, we introduce an approach to Euclidean distance transformation that achieves better accuracy than D4, D8, or octagonal distance transformation. We have developed a fast method to compute the Euclidean distance transformation. Using this technique, we can extract a reliable skeleton efficiently to represent a binary pattern. Our method works well on real images and compares favorably with other methods.
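For context, the following sketch (SciPy assumed) contrasts the exact Euclidean distance transform with the D4 (city-block) and D8 (chessboard) approximations the abstract mentions; it uses SciPy's stock transforms, not the paper's fast method.

```python
import numpy as np
from scipy import ndimage

# Foreground everywhere except a single background pixel at the centre.
img = np.ones((41, 41), dtype=bool)
img[20, 20] = False

edt = ndimage.distance_transform_edt(img)                      # exact Euclidean
d4  = ndimage.distance_transform_cdt(img, metric='taxicab')    # D4
d8  = ndimage.distance_transform_cdt(img, metric='chessboard') # D8

# Along the diagonal, D4 overestimates and D8 underestimates the true
# Euclidean distance, which is why the metric affects the extracted skeleton.
print(edt[10, 10], d4[10, 10], d8[10, 10])  # approx. 14.14, 20, 10
```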

10 citations

Proceedings ArticleDOI
22 Aug 1999
TL;DR: A new skeletonization algorithm based on the constrained Delaunay triangulation (CDT) is proposed, which ensures that the structural information at intersections of a shape is preserved in its skeleton.
Abstract: A new skeletonization algorithm based on the constrained Delaunay triangulation (CDT) is proposed in this paper. The CDT partitions a shape into a set of nonoverlapping triangles which represent the shape's local symmetry properties and interconnecting relationships between branches. The skeleton of the shape is generated from the skeletons of the triangles. Methods for removing skeletonization artefacts are provided. An outstanding feature of the algorithm is that the structural information at intersections of a shape is preserved in its skeleton.
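A hedged sketch of the CDT skeleton idea, assuming the third-party triangle package (a Python binding to Shewchuk's Triangle): triangulate the polygon, then join the midpoints of each triangle's internal (non-boundary) edges. The paper's artefact-removal methods are omitted.

```python
import numpy as np
import triangle  # third-party: pip install triangle

def cdt_skeleton_edges(poly: np.ndarray):
    """Join midpoints of internal CDT edges, triangle by triangle."""
    n = len(poly)
    segments = [(i, (i + 1) % n) for i in range(n)]
    cdt = triangle.triangulate({'vertices': poly, 'segments': segments}, 'p')
    verts = cdt['vertices']
    boundary = {tuple(sorted(s)) for s in segments}
    skeleton = []
    for a, b, c in cdt['triangles']:
        # Midpoints of this triangle's internal (non-boundary) edges.
        mids = [verts[[u, v]].mean(axis=0)
                for u, v in [(a, b), (b, c), (c, a)]
                if tuple(sorted((u, v))) not in boundary]
        # Two midpoints give a sleeve segment; three mark a junction.
        skeleton += [(mids[i], mids[j])
                     for i in range(len(mids)) for j in range(i + 1, len(mids))]
    return skeleton

# An L-shaped polygon: skeleton segments trace its two limbs.
poly = np.array([[0, 0], [4, 0], [4, 1], [1, 1], [1, 3], [0, 3]], float)
print(len(cdt_skeleton_edges(poly)), "skeleton segments")
```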

7 citations


Cited by
Journal ArticleDOI
11 Jul 2016
TL;DR: This paper presents a novel technique for simplifying sketch drawings based on learning a series of convolution operators; the model can process images of any dimensions and aspect ratio as input and outputs a simplified sketch with the same dimensions as the input image.
Abstract: In this paper, we present a novel technique to simplify sketch drawings based on learning a series of convolution operators. In contrast to existing approaches that require vector images as input, we allow the more general and challenging input of rough raster sketches such as those obtained from scanning pencil sketches. We convert the rough sketch into a simplified version which is then amenable to vectorization. This is all done in a fully automatic way without user intervention. Our model consists of a fully convolutional neural network which, unlike most existing convolutional neural networks, is able to process images of any dimensions and aspect ratio as input, and outputs a simplified sketch which has the same dimensions as the input image. In order to teach our model to simplify, we present a new dataset of pairs of rough and simplified sketch drawings. By leveraging convolution operators in combination with efficient use of our proposed dataset, we are able to train our sketch simplification model. Our approach naturally overcomes the limitations of existing methods, e.g., vector images as input and long computation time; and we show that meaningful simplifications can be obtained for many different test cases. Finally, we validate our results with a user study in which we greatly outperform similar approaches and establish the state of the art in sketch simplification of raster images.
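The key architectural point, that a fully convolutional network imposes no fixed input size, can be illustrated with a toy encoder-decoder (PyTorch assumed; this is an illustration, not the paper's network).

```python
import torch
import torch.nn as nn

simplifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),            # downsample
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # upsample
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
)

# Any input whose sides are divisible by 4 maps back to the same size.
rough = torch.rand(1, 1, 96, 128)
print(simplifier(rough).shape)  # torch.Size([1, 1, 96, 128])
```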

181 citations

Journal ArticleDOI
TL;DR: This work proposes a vectorization algorithm specialized for clean line drawings that analyzes the drawing's topology in order to overcome junction ambiguities; results are demonstrated on professional examples, and vectorization quality is evaluated through quantitative comparison with hand-traced centerlines and the results of leading commercial algorithms.
Abstract: Vectorization provides a link between raster scans of pencil-and-paper drawings and modern digital processing algorithms that require accurate vector representations. Even when input drawings are comprised of clean, crisp lines, inherent ambiguities near junctions make vectorization deceptively difficult. As a consequence, current vectorization approaches often fail to faithfully capture the junctions of drawn strokes. We propose a vectorization algorithm specialized for clean line drawings that analyzes the drawing's topology in order to overcome junction ambiguities. A gradient-based pixel clustering technique facilitates topology computation. This topological information is exploited during centerline extraction by a new “reverse drawing” procedure that reconstructs all possible drawing states prior to the creation of a junction and then selects the most likely stroke configuration. For cases where the automatic result does not match the artist's interpretation, our drawing analysis enables an efficient user interface to easily adjust the junction location. We demonstrate results on professional examples and evaluate the vectorization quality with quantitative comparison to hand-traced centerlines as well as the results of leading commercial algorithms.
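As a loose illustration of gradient-based pixel clustering (not the paper's technique), one could label ink pixels by quantised gradient orientation, so that differently oriented strokes near a junction fall into different groups (NumPy and SciPy assumed).

```python
import numpy as np
from scipy import ndimage

def orientation_clusters(gray: np.ndarray, n_bins: int = 4) -> np.ndarray:
    """Label ink pixels by connected components within orientation bins."""
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    theta = np.mod(np.arctan2(gy, gx), np.pi)    # orientation in [0, pi)
    bins = (theta / np.pi * n_bins).astype(int) % n_bins
    ink = gray < 0.5                             # dark stroke pixels
    labels = np.zeros(gray.shape, dtype=int)
    offset = 0
    for b in range(n_bins):
        comp, n = ndimage.label(ink & (bins == b))
        labels[comp > 0] = comp[comp > 0] + offset
        offset += n
    return labels

# Two crossing strokes: pixels near the junction split by orientation.
img = np.ones((32, 32))
img[15:17, :] = 0.0   # horizontal stroke
img[:, 15:17] = 0.0   # vertical stroke
print(orientation_clusters(img).max(), "orientation clusters")
```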

111 citations

Journal ArticleDOI
TL;DR: Experimental results show that the proposed wavelet-based scheme can accurately extract the skeleton of ribbon-like shapes of varying width and gray level, and is robust against noise and affine transformation.
Abstract: A wavelet-based scheme for extracting the skeleton of a ribbon-like shape is proposed in this paper. A novel wavelet function plays a key role in the scheme and possesses three significant characteristics: 1) the positions of the local maximum moduli of the wavelet transform with respect to the ribbon-like shape are independent of the gray-levels of the image; 2) when an appropriate scale of the wavelet transform is selected, the local maximum moduli of the wavelet transform of the ribbon-like shape produce two new parallel contours, located symmetrically on either side of the original one, with the same topological and geometric properties as the original shape; and 3) the distance between these two parallel contours equals the scale of the wavelet transform and is independent of the width of the shape. The scheme consists of two phases. 1) Generation of the wavelet skeleton: based on the desirable properties of the new wavelet function, a symmetry analysis of the maximum moduli of the wavelet transform is carried out, and the midpoints of all pairs of contour elements are connected to generate a skeleton of the shape, defined as the wavelet skeleton. 2) Modification of the wavelet skeleton: a set of techniques is applied to remove artifacts from the primary wavelet skeleton. The corresponding algorithm is also developed in this paper. Experimental results show that the proposed scheme can accurately extract the skeleton of ribbon-like shapes of varying width and gray level, and the skeleton representation is robust against noise and affine transformation.
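A hedged sketch of the modulus-maxima idea, substituting a derivative-of-Gaussian for the paper's custom wavelet (SciPy assumed): at a suitable scale, gradient-modulus maxima form two contours flanking a ribbon stroke, and the midpoints of facing maxima approximate the skeleton.

```python
import numpy as np
from scipy import ndimage

def ribbon_skeleton(gray: np.ndarray, sigma: float = 2.0):
    """Midpoints of paired gradient-modulus maxima, scanned row by row."""
    modulus = ndimage.gaussian_gradient_magnitude(gray, sigma)
    points = []
    for y in range(gray.shape[0]):
        row = modulus[y]
        # Column indices of strict local maxima along this row.
        peaks = np.where((row[1:-1] > row[:-2]) & (row[1:-1] > row[2:]))[0] + 1
        # Facing maxima flank the stroke; their midpoints trace the skeleton.
        for a, b in zip(peaks[::2], peaks[1::2]):
            points.append((y, (a + b) / 2))
    return points

# A vertical ribbon of uniform gray level yields a centred skeleton.
img = np.zeros((60, 30))
img[5:55, 12:19] = 0.7
print(ribbon_skeleton(img)[:3])
```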

109 citations

Journal ArticleDOI
TL;DR: This method obtains reliable stroke correspondence and enables structural interpretation; structural post-processing operations are applied to improve the stroke correspondence.

107 citations

Journal ArticleDOI
01 Jun 2001
TL;DR: Experiments show that skeletons obtained from the proposed indirect skeletonization method closely resemble human perceptions of the underlying shapes.
Abstract: A major problem with traditional skeletonization algorithms is that their results do not always conform to human perceptions since they often contain unwanted artifacts. This paper presents an indirect skeletonization method to reduce these artifacts. The method is based on analyzing regularities and singularities of shapes. A shape is first partitioned into a set of triangles using the constrained Delaunay triangulation technique. Then, regular and singular regions of the shape are identified from the partitioning. Finally, singular regions are stabilized to produce a better result. Experiments show that skeletons obtained from the proposed method closely resemble human perceptions of the underlying shapes.
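The regular/singular partition can be sketched by counting each CDT triangle's internal (non-boundary) edges, again assuming the third-party triangle package: one internal edge marks a terminal triangle, two a regular sleeve region, and three a singular junction region.

```python
import numpy as np
import triangle  # third-party: pip install triangle

def classify_triangles(poly: np.ndarray):
    """Classify CDT triangles by their number of internal edges."""
    n = len(poly)
    segments = [(i, (i + 1) % n) for i in range(n)]
    cdt = triangle.triangulate({'vertices': poly, 'segments': segments}, 'p')
    boundary = {tuple(sorted(s)) for s in segments}
    kind = {1: 'terminal', 2: 'sleeve (regular)', 3: 'junction (singular)'}
    return [kind[sum(tuple(sorted(e)) not in boundary
                     for e in [(a, b), (b, c), (c, a)])]
            for a, b, c in cdt['triangles']]

# Classify the CDT triangles of a T-shaped polygon.
poly = np.array([[0, 0], [6, 0], [6, 2], [4, 2], [4, 6],
                 [2, 6], [2, 2], [0, 2]], float)
print(classify_triangles(poly))
```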

88 citations