Journal ArticleDOI

Thinning methodologies-a comprehensive survey

TL;DR: A comprehensive survey of thinning methodologies, including iterative deletion of pixels and nonpixel-based methods, is presented and the relationships among them are explored.
Abstract: A comprehensive survey of thinning methodologies is presented. A wide range of thinning algorithms, including iterative deletion of pixels and nonpixel-based methods, is covered. Skeletonization algorithms based on medial axis and other distance transforms are not considered. An overview of the iterative thinning process and the pixel-deletion criteria needed to preserve the connectivity of the image pattern is given first. Thinning algorithms are then considered in terms of these criteria and their modes of operation. Nonpixel-based methods that usually produce a center line of the pattern directly in one pass without examining all the individual pixels are discussed. The algorithms are considered in great detail and scope, and the relationships among them are explored.
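The connectivity-preserving deletion criteria mentioned in the abstract are commonly expressed through the crossing number of a pixel's 8-neighborhood: a pixel may be safely deleted only if its removal cannot split or merge components. A minimal sketch of that test (function names and grid conventions are mine, not from the survey):

```python
def neighbors8(img, r, c):
    """Return the 8-neighborhood P2..P9 of pixel (r, c), clockwise from
    north (1 = object pixel). Assumes (r, c) is not on the array border."""
    return [img[r-1][c], img[r-1][c+1], img[r][c+1], img[r+1][c+1],
            img[r+1][c], img[r+1][c-1], img[r][c-1], img[r-1][c-1]]

def crossing_number(nb):
    """Count 0 -> 1 transitions in the circular neighbor sequence.
    A value of 1 is a common 'simple point' criterion: the pixel touches
    exactly one object component, so deleting it cannot disconnect the
    pattern; a value of 2 or more means deletion would split it."""
    return sum(1 for i in range(8) if nb[i] == 0 and nb[(i + 1) % 8] == 1)
```

For example, the interior pixel of a one-pixel-wide horizontal stroke has crossing number 2 (deleting it breaks the stroke), while an end point has crossing number 1.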
Citations
Journal ArticleDOI
TL;DR: The nature of handwritten language, how it is transduced into electronic data, and the basic concepts behind written language recognition algorithms are described.
Abstract: Handwriting has continued to persist as a means of communication and recording information in day-to-day life even with the introduction of new technologies. Given its ubiquity in human transactions, machine recognition of handwriting has practical significance, as in reading handwritten notes in a PDA, in postal addresses on envelopes, in amounts in bank checks, in handwritten fields in forms, etc. This overview describes the nature of handwritten language, how it is transduced into electronic data, and the basic concepts behind written language recognition algorithms. Both the online case (which pertains to the availability of trajectory data during writing) and the off-line case (which pertains to scanned images) are considered. Algorithms for preprocessing, character and word recognition, and performance with practical systems are indicated. Other fields of application, like signature verification, writer authentification, handwriting learning tools are also considered.

2,653 citations

Journal ArticleDOI
11 Dec 2015-Science
TL;DR: A computational model is described that learns in a similar fashion and does so better than current deep learning algorithms and can generate new letters of the alphabet that look “right” as judged by Turing-like tests of the model's output in comparison to what real humans produce.
Abstract: People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms-for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world's alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several "visual Turing tests" probing the model's creative generalization abilities, which in many cases are indistinguishable from human behavior.

2,364 citations


Cites background from "Thinning methodologies-a comprehens..."

  • ...S3a) that reduces the line width to one pixel (68) (Fig....


Journal ArticleDOI
TL;DR: This paper presents an overview of feature extraction methods for off-line recognition of segmented (isolated) characters in terms of invariance properties, reconstructability and expected distortions and variability of the characters.

1,376 citations


Additional excerpts

  • ...(6) contour profiles; (7) zoning; (8) geometric moment invariants;...


Proceedings ArticleDOI
Jing Yuan, Yu Zheng, Xing Xie
12 Aug 2012
TL;DR: This paper proposes a framework (titled DRoF) that Discovers Regions of different Functions in a city using both human mobility among regions and points of interest (POIs) located in a region.
Abstract: The development of a city gradually fosters different functional regions, such as educational areas and business districts. In this paper, we propose a framework (titled DRoF) that Discovers Regions of different Functions in a city using both human mobility among regions and points of interest (POIs) located in a region. Specifically, we segment a city into disjointed regions according to major roads, such as highways and urban expressways. We infer the functions of each region using a topic-based inference model, which regards a region as a document, a function as a topic, categories of POIs (e.g., restaurants and shopping malls) as metadata (like authors, affiliations, and key words), and human mobility patterns (when people reach/leave a region and where people come from and leave for) as words. As a result, a region is represented by a distribution of functions, and a function is featured by a distribution of mobility patterns. We further identify the intensity of each function in different locations. The results generated by our framework can benefit a variety of applications, including urban planning, location choosing for a business, and social recommendations. We evaluated our method using large-scale and real-world datasets, consisting of two POI datasets of Beijing (in 2010 and 2011) and two 3-month GPS trajectory datasets (representing human mobility) generated by over 12,000 taxicabs in Beijing in 2010 and 2011 respectively. The results justify the advantages of our approach over baseline methods solely using POIs or human mobility.
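The region-as-document analogy can be made concrete by sketching just the document-building step: each region collects mobility-pattern "words", over which a topic model is then fit. A toy illustration (field names and the 6-hour bucketing are hypothetical, not from the paper):

```python
from collections import Counter

def region_documents(trips):
    """Build one bag-of-words 'document' per region from taxi trips.
    Each trip contributes a mobility-pattern word to its destination
    region: (arrival-hour bucket, origin region). A topic model fit on
    these documents would then play the role of the function-inference
    step described in the abstract."""
    docs = {}
    for trip in trips:
        word = (trip["arrive_hour"] // 6, trip["from_region"])  # 6-hour buckets
        docs.setdefault(trip["to_region"], Counter())[word] += 1
    return docs
```

A region dominated by morning arrivals from residential areas would then carry a word distribution characteristic of a business-district "topic".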

1,050 citations


Cites methods from "Thinning methodologies-a comprehens..."

  • ...Second, we obtain the skeleton of the road networks by performing a thinning operation based on the algorithm proposed in [9], as depicted in Figure 3(c)....


01 Jan 2015
TL;DR: The authors presented a computational model that captures human learning abilities for a large class of simple visual concepts: handwritten characters from the world's alphabets, represented as simple programs that best explain observed examples under a Bayesian criterion.
Abstract: Not only do children learn effortlessly, they do so quickly and with a remarkable ability to use what they have learned as the raw material for creating new stuff. Lake et al. describe a computational model that learns in a similar fashion and does so better than current deep learning algorithms. The model classifies, parses, and recreates handwritten characters, and can generate new letters of the alphabet that look “right” as judged by Turing-like tests of the model's output in comparison to what real humans produce. Science, this issue p. 1332. Combining the capacity to handle noise with probabilistic learning yields humanlike performance in a computational model.

539 citations

References
Journal ArticleDOI
TL;DR: A fast parallel thinning algorithm that consists of two subiterations: one aimed at deleting the south-east boundary points and the north-west corner points, while the other is aimed at deleting the north-west boundary points and the south-east corner points.
Abstract: A fast parallel thinning algorithm is proposed in this paper. It consists of two subiterations: one aimed at deleting the south-east boundary points and the north-west corner points, while the other is aimed at deleting the north-west boundary points and the south-east corner points. End points and pixel connectivity are preserved. Each pattern is thinned down to a skeleton of unitary thickness. Experimental results show that this method is very effective.
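This two-subiteration scheme is widely known as Zhang–Suen thinning, and its deletion conditions are compact enough to sketch directly (variable names are mine; the grid is assumed to have a one-pixel background border):

```python
def zhang_suen_thin(img):
    """Iteratively thin a binary image (list of lists, 1 = object).
    Each pass runs two subiterations, as in the abstract: the first
    removes south-east boundary / north-west corner points, the second
    the complementary set. End points and connectivity are preserved."""
    def step(sub):
        to_delete = []
        for r in range(1, len(img) - 1):
            for c in range(1, len(img[0]) - 1):
                if img[r][c] != 1:
                    continue
                # P2..P9: 8-neighbors clockwise starting from north.
                p = [img[r-1][c], img[r-1][c+1], img[r][c+1], img[r+1][c+1],
                     img[r+1][c], img[r+1][c-1], img[r][c-1], img[r-1][c-1]]
                b = sum(p)                      # number of object neighbors
                a = sum(1 for i in range(8)     # 0 -> 1 transitions
                        if p[i] == 0 and p[(i + 1) % 8] == 1)
                if sub == 0:   # P2*P4*P6 == 0 and P4*P6*P8 == 0
                    cond = p[0] * p[2] * p[4] == 0 and p[2] * p[4] * p[6] == 0
                else:          # P2*P4*P8 == 0 and P2*P6*P8 == 0
                    cond = p[0] * p[2] * p[6] == 0 and p[0] * p[4] * p[6] == 0
                if 2 <= b <= 6 and a == 1 and cond:
                    to_delete.append((r, c))
        for r, c in to_delete:
            img[r][c] = 0
        return bool(to_delete)

    # Repeat until neither subiteration deletes anything ('|' runs both).
    while step(0) | step(1):
        pass
    return img
```

Marking all candidates before deleting them is what makes the subiteration parallel: every pixel is judged against the same snapshot of the image.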

2,243 citations

Book
01 Mar 1987

1,918 citations

Journal ArticleDOI
TL;DR: The relative merits of performing local operations on a digitized picture in parallel or sequentially are discussed, and some applications of the connected component and distance functions are presented.
Abstract: The relative merits of performing local operations on a digitized picture in parallel or sequentially are discussed. Sequential local operations are described which label the connected components of a given subset of the picture and compute a "distance" from every picture element to the subset. In terms of the "distance" function, a "skeleton" subset is defined which, in a certain sense, minimally determines the original subset. Some applications of the connected component and distance functions are also presented.
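The sequential distance computation described here is the classic two-pass city-block distance transform; a minimal sketch under the convention that pixels outside the array count as background:

```python
def cityblock_distance(img):
    """Two-pass sequential distance transform in the spirit of the
    sequential local operations described above: city-block distance
    from each object pixel (1) to the nearest background pixel (0)."""
    h, w = len(img), len(img[0])
    d = [[0] * w for _ in range(h)]
    # Forward pass: propagate distances from the top-left.
    for r in range(h):
        for c in range(w):
            if img[r][c] == 0:
                d[r][c] = 0
            else:
                up = d[r-1][c] if r > 0 else 0
                left = d[r][c-1] if c > 0 else 0
                d[r][c] = 1 + min(up, left)
    # Backward pass: propagate distances from the bottom-right.
    for r in range(h - 1, -1, -1):
        for c in range(w - 1, -1, -1):
            down = d[r+1][c] if r < h - 1 else 0
            right = d[r][c+1] if c < w - 1 else 0
            d[r][c] = min(d[r][c], 1 + min(down, right))
    return d
```

The "skeleton" of the abstract then falls out as the set of local maxima of this distance function, from which the original subset can be reconstructed.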

1,707 citations

Journal ArticleDOI
TL;DR: The fundamental concepts of digital topology are reviewed and the major theoretical results in the field are surveyed, with a bibliography of almost 140 references.
Abstract: Digital topology deals with the topological properties of digital images: or, more generally, of discrete arrays in two or more dimensions. It provides the theoretical foundations for important image processing operations such as connected component labeling and counting, border following, contour filling, and thinning—and their generalizations to three- (or higher-) dimensional “images.” This paper reviews the fundamental concepts of digital topology and surveys the major theoretical results in the field. A bibliography of almost 140 references is included.
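Connected component labeling, one of the operations this theory underpins, can be sketched as a breadth-first search over a chosen adjacency (a toy illustration, not from the paper; it also shows why the 4- vs 8-connectivity convention must be fixed before counting):

```python
from collections import deque

def count_components(img):
    """Count 8-connected components of the object pixels (1s) via BFS."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for r in range(h):
        for c in range(w):
            if img[r][c] == 1 and not seen[r][c]:
                count += 1
                seen[r][c] = True
                q = deque([(r, c)])
                while q:
                    y, x = q.popleft()
                    for dy in (-1, 0, 1):       # visit all 8 neighbors
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and img[ny][nx] == 1 and not seen[ny][nx]):
                                seen[ny][nx] = True
                                q.append((ny, nx))
    return count
```

Two diagonally touching pixels form one component under 8-adjacency but two under 4-adjacency, which is exactly the kind of convention digital topology makes precise.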

1,084 citations

Journal ArticleDOI
01 Mar 1977
TL;DR: A critical review of two kinds of Fourier descriptors (FD's) is given, and a distance measure in terms of FD's is proposed for measuring the difference between two boundary curves.
Abstract: Description or discrimination of boundary curves (shapes) is an important problem in picture processing and pattern recognition. Fourier descriptors (FD's) have interesting properties in this respect. First, a critical review is given of two kinds of FD's. Some properties of the FD's are given and a distance measure is proposed, in terms of FD's, that measures the difference between two boundary curves. It is shown how FD's can be used for obtaining skeletons of objects. Finally, experimental results are given in character recognition and machine parts recognition.
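The invariance properties that make FD's attractive can be sketched directly: treat boundary points as complex numbers and take their discrete Fourier transform (a minimal illustration with a hand-rolled DFT; not the paper's specific formulation):

```python
import cmath

def fourier_descriptors(contour):
    """Treat boundary points (x, y) as complex numbers z_j = x + iy and
    return their DFT coefficients Z_0..Z_{N-1}. Only Z_0 depends on the
    contour's position, so the magnitudes |Z_k| for k >= 1 are
    translation-invariant shape descriptors."""
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    return [sum(z[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n))
            for k in range(n)]
```

Rotation multiplies every coefficient by the same unit-magnitude phase, and scaling multiplies every magnitude by the same constant, so suitably normalized magnitudes yield rotation- and scale-invariant descriptors as well; distances between two such descriptor vectors then compare boundary curves.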

1,023 citations