Topic

Skeletonization

About: Skeletonization is a research topic. Over its lifetime, 1,266 publications have been published within this topic, receiving 23,589 citations.


Papers
01 Jan 1989
TL;DR: The basic idea is to iteratively train the network to a certain performance criterion, compute a measure of relevance that identifies which input or hidden units are most critical to performance, and automatically trim the least relevant units.
Abstract: This paper proposes a means of using the knowledge in a network to determine the functionality or relevance of individual units, both for the purpose of understanding the network's behavior and improving its performance. The basic idea is to iteratively train the network to a certain performance criterion, compute a measure of relevance that identifies which input or hidden units are most critical to performance, and automatically trim the least relevant units. This skeletonization technique can be used to simplify networks by eliminating units that convey redundant information; to improve learning performance by first learning with spare hidden units and then trimming the unnecessary ones away, thereby constraining generalization; and to understand the behavior of networks in terms of minimal "rules."

579 citations
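
The trimming loop described in this abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' implementation: the toy data, network size, and pruning count are assumptions, and the paper's derivative-based relevance estimate is replaced here by brute-force ablation (relevance = loss with the unit silenced minus loss with it active).

```python
# Minimal sketch of relevance-based skeletonization (assumed setup).
# Relevance of hidden unit i is estimated as
#   loss(network with unit i ablated) - loss(full network),
# then the least relevant units are trimmed.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 10)                           # toy inputs (assumption)
y = (X.sum(dim=1, keepdim=True) > 0).float()       # toy targets (assumption)

hidden = 16
net = nn.Sequential(nn.Linear(10, hidden), nn.Sigmoid(), nn.Linear(hidden, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(net.parameters(), lr=0.5)

# 1) Train to some performance criterion.
for _ in range(500):
    opt.zero_grad()
    loss_fn(net(X), y).backward()
    opt.step()

# 2) Compute a relevance measure for each hidden unit by ablating it.
with torch.no_grad():
    base_loss = loss_fn(net(X), y).item()
    out_w = net[2].weight                          # outgoing weights, shape (1, hidden)
    relevance = []
    for i in range(hidden):
        saved = out_w[:, i].clone()
        out_w[:, i] = 0.0                          # silence unit i's contribution
        relevance.append(loss_fn(net(X), y).item() - base_loss)
        out_w[:, i] = saved                        # restore

# 3) Trim the least relevant units (zero their incoming and outgoing weights).
k = 8                                              # units to remove (assumption)
with torch.no_grad():
    for i in sorted(range(hidden), key=lambda j: relevance[j])[:k]:
        net[2].weight[:, i] = 0.0
        net[0].weight[i] = 0.0
        net[0].bias[i] = 0.0

print("loss increase when each unit is ablated:", [round(r, 4) for r in relevance])
```

In the paper this cycle is iterative: the network is re-trained to criterion after trimming before relevance is measured again.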

Proceedings Article
01 Jan 1988
TL;DR: In this paper, the authors propose a means of using the knowledge in a network to determine the functionality or relevance of individual units, both for the purpose of understanding the network's behavior and improving its performance.
Abstract: This paper proposes a means of using the knowledge in a network to determine the functionality or relevance of individual units, both for the purpose of understanding the network's behavior and improving its performance. The basic idea is to iteratively train the network to a certain performance criterion, compute a measure of relevance that identifies which input or hidden units are most critical to performance, and automatically trim the least relevant units. This skeletonization technique can be used to simplify networks by eliminating units that convey redundant information; to improve learning performance by first learning with spare hidden units and then trimming the unnecessary ones away, thereby constraining generalization; and to understand the behavior of networks in terms of minimal "rules."

482 citations

Proceedings ArticleDOI
19 Oct 1998
TL;DR: A process is described for analysing the motion of a human target in a video stream, where a "star" skeleton is produced and two motion cues are determined: body posture, and cyclic motion of skeleton segments.
Abstract: In this paper a process is described for analysing the motion of a human target in a video stream. Moving targets are detected and their boundaries extracted. From these, a "star" skeleton is produced. Two motion cues are determined from this skeletonization: body posture, and cyclic motion of skeleton segments. These cues are used to determine human activities such as walking or running, and even potentially, the target's gait. Unlike other methods, this does not require an a priori human model, or a large number of "pixels on target". Furthermore, it is computationally inexpensive, and thus ideal for real-world video applications such as outdoor video surveillance.

464 citations
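
The construction described above can be sketched concisely, assuming a binary silhouette mask is already available; everything below (helper name, smoothing window, toy mask) is illustrative rather than the authors' code. The centroid-to-boundary distance is treated as a 1-D signal around the closed boundary, smoothed to suppress noise, and its local maxima become the extremal points of the "star" skeleton.

```python
# Illustrative "star" skeleton sketch: extremal points are local maxima of the
# smoothed distance from the silhouette centroid to its boundary.
import numpy as np
from scipy.ndimage import uniform_filter1d
from skimage.measure import find_contours

def star_skeleton(mask, smooth=15):
    """mask: 2-D boolean silhouette of the detected moving target."""
    ys, xs = np.nonzero(mask)
    centroid = np.array([ys.mean(), xs.mean()])

    # Ordered boundary of the silhouette (longest contour).
    boundary = max(find_contours(mask.astype(float), 0.5), key=len)

    # Distance from the centroid to each boundary point, low-pass filtered
    # around the closed curve to remove boundary noise.
    d = np.linalg.norm(boundary - centroid, axis=1)
    d = uniform_filter1d(d, size=smooth, mode='wrap')

    # Local maxima of the smoothed distance signal are the extremal points
    # (head, hands, feet); each is joined to the centroid by a segment.
    peaks = (d > np.roll(d, 1)) & (d > np.roll(d, -1))
    return centroid, boundary[peaks]

# Toy example: an upright rectangle standing in for a silhouette.
mask = np.zeros((120, 60), dtype=bool)
mask[20:100, 20:40] = True
centroid, extremal = star_skeleton(mask)
print("centroid:", centroid, "| extremal points:", len(extremal))
```

The posture and cyclic-motion cues in the paper are then read off these extremal points over time, for example from the angle of the lowest extremal point (taken as a leg) relative to the vertical.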

Journal ArticleDOI
TL;DR: Robust and time-efficient skeletonization of a (planar) shape can be achieved by first regularizing the Voronoi diagram of the shape's boundary points and then establishing a hierarchic organization of skeleton constituents.

422 citations
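
The Voronoi route to a skeleton summarized above can be outlined as follows. This is an illustrative sketch only: the paper's residual-based regularization and skeleton hierarchy are replaced by a crude clearance threshold, and the polygon, sampling density, and threshold are all assumed.

```python
# Illustrative Voronoi skeleton sketch: sample the boundary, build the Voronoi
# diagram of the samples, keep interior edges, and prune low-significance ones.
import numpy as np
from scipy.spatial import Voronoi
from matplotlib.path import Path

# Closed polygon standing in for a planar shape (assumption).
polygon = np.array([[0, 0], [6, 0], [6, 2], [2, 2], [2, 6], [0, 6]], float)

# Densely sample the boundary.
samples = []
for a, b in zip(polygon, np.roll(polygon, -1, axis=0)):
    t = np.linspace(0, 1, 40, endpoint=False)[:, None]
    samples.append(a + t * (b - a))
samples = np.vstack(samples)

vor = Voronoi(samples)
inside = Path(polygon).contains_points(vor.vertices)

# Keep finite Voronoi edges whose endpoints both lie inside the shape; these
# approximate the medial axis. A clearance threshold stands in (crudely) for
# the paper's regularization step that removes spurious branches.
min_clearance = 0.3
skeleton_edges = []
for (p, _), (v1, v2) in zip(vor.ridge_points, vor.ridge_vertices):
    if v1 == -1 or v2 == -1:
        continue
    if inside[v1] and inside[v2]:
        clearance = np.linalg.norm(vor.vertices[v1] - samples[p])
        if clearance > min_clearance:
            skeleton_edges.append((vor.vertices[v1], vor.vertices[v2]))

print(len(skeleton_edges), "skeleton edges retained")
```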

Journal ArticleDOI
01 Jan 1984
TL;DR: The morphological skeleton is shown to unify many previous approaches to skeletonization, and some of its theoretical properties are investigated.
Abstract: This paper presents the results of a study on the use of morphological set operations to represent and encode a discrete binary image by parts of its skeleton, a thinned version of the image containing complete information about its shape and size. Using morphological erosions and openings, a finite image can be uniquely decomposed into a finite number of skeleton subsets and then the image can be exactly reconstructed by dilating the skeleton subsets. The morphological skeleton is shown to unify many previous approaches to skeletonization, and some of its theoretical properties are investigated. Fast algorithms that reduce the original quadratic complexity to linear are developed for skeleton decomposition and reconstruction. Partial reconstructions of the image are quantified through the omission of subsets of skeleton points. The concepts of a globally and locally minimal skeleton are introduced and fast algorithms are developed for obtaining minimal skeletons. For images containing blobs and large areas, the skeleton subsets are much thinner than the original image. Therefore, encoding of the skeleton information results in lower information rates than optimum block-Huffman or optimum runlength-Huffman coding of the original image. The highest level of image compression was obtained by using Elias coding of the skeleton.

399 citations
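
The decomposition and exact reconstruction described in the abstract can be sketched directly: the n-th skeleton subset is the n-fold erosion of the image minus the opening of that erosion by the structuring element, and the image is recovered as the union of each subset dilated back n times. The toy image and 3x3 structuring element below are assumptions; scipy's binary morphology stands in for the paper's operators.

```python
# Sketch of morphological skeleton decomposition and exact reconstruction.
# S_n = (X eroded n times by B) minus its opening by B;
# X   = union over n of (S_n dilated n times by B).
import numpy as np
from scipy.ndimage import binary_erosion, binary_opening, binary_dilation

B = np.ones((3, 3), dtype=bool)     # structuring element (assumption)
X = np.zeros((40, 40), dtype=bool)
X[5:35, 10:30] = True               # toy blob (assumption)

# Decomposition into skeleton subsets.
subsets = []
eroded = X.copy()
while eroded.any():
    subsets.append(eroded & ~binary_opening(eroded, B))
    eroded = binary_erosion(eroded, B)

# Exact reconstruction by dilating each subset back.
recon = np.zeros_like(X)
for n, S in enumerate(subsets):
    recon |= binary_dilation(S, B, iterations=n) if n else S

print("skeleton points:", int(sum(s.sum() for s in subsets)),
      "| exact reconstruction:", bool(np.array_equal(recon, X)))
```

The skeleton itself is the union of the subsets, and, as the abstract notes, it is the sparsity of these subsets relative to the original blob that makes skeleton coding attractive for compression.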


Network Information
Related Topics (5)
Topic                           Papers     Citations    Relatedness
Convolutional neural network    74.7K      2M           86%
Image segmentation              79.6K      1.8M         85%
Feature (computer vision)       128.2K     1.7M         84%
Feature extraction              111.8K     2.1M         84%
Deep learning                   79.8K      2.1M         83%
Performance Metrics
Number of papers in the topic in previous years:

Year    Papers
2023    35
2022    65
2021    39
2020    54
2019    52
2018    42