Journal ArticleDOI

Algorithms for the reduction of the number of points required to represent a digitized line or its caricature

TL;DR: In this paper, two algorithms to reduce the number of points required to represent the line and, if desired, produce caricatures are presented and compared with the most promising methods so far suggested.
Abstract: All digitizing methods, as a general rule, record lines with far more data than is necessary for accurate graphic reproduction or for computer analysis. Two algorithms to reduce the number of points required to represent the line and, if desired, produce caricatures, are presented and compared with the most promising methods so far suggested. Line reduction will form a major part of automated generalization.
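The better known of the paper's two algorithms is now commonly called the Douglas-Peucker algorithm: keep the endpoints, find the intermediate point farthest from the segment joining them, and recurse if it exceeds a tolerance. The following is a minimal illustrative Python sketch, not the paper's original formulation; the tolerance name `epsilon` is an assumption.

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the line through start and end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    seg_len = math.hypot(dx, dy)
    if seg_len == 0:  # degenerate segment: fall back to point distance
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / seg_len

def douglas_peucker(points, epsilon):
    """Recursively keep the point farthest from the current segment
    whenever its offset exceeds the tolerance epsilon."""
    if len(points) < 3:
        return list(points)
    start, end = points[0], points[-1]
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], start, end)
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right  # drop the shared split point once
    return [start, end]
```

A larger `epsilon` discards more points, which is how progressively coarser "caricatures" of the same line can be produced.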
Citations
Book
30 Sep 2010
TL;DR: Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images and takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene.
Abstract: Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year-old remains elusive. Why is computer vision such a challenging problem, and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques. Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter with a heavy emphasis on testing algorithms and containing numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/.
Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.

4,146 citations

Journal ArticleDOI
TL;DR: In this article, the object-oriented image analysis software eCognition is used to link remote sensing imagery and GIS for mapping, environmental monitoring, disaster management, and civil and military intelligence.
Abstract: Remote sensing from airborne and spaceborne platforms provides valuable data for mapping, environmental monitoring, disaster management and civil and military intelligence. However, to explore the full value of these data, the appropriate information has to be extracted and presented in a standard format so it can be imported into geo-information systems and thus allow efficient decision processes. The object-oriented approach can contribute to powerful automatic and semi-automatic analysis for most remote sensing applications. Synergetic use with pixel-based or statistical signal-processing methods exploits the rich information content. Here, we explain principal strategies of object-oriented analysis, discuss how the combination with fuzzy methods allows expert knowledge to be implemented, and describe a representative example of the proposed workflow from remote sensing imagery to GIS. The strategies are demonstrated using the first object-oriented image analysis software on the market, eCognition, which provides an appropriate link between remote sensing imagery and GIS.

2,539 citations


Cites methods from "Algorithms for the reduction of the..."

  • ...The computation of base polygons is done by means of a Douglas Peucker (Douglas and Peucker, 1973) algorithm....


Journal ArticleDOI
TL;DR: A fiducial marker system especially suited for camera pose estimation in applications such as augmented reality and robot localization is presented, and an algorithm is proposed for generating configurable marker dictionaries following a criterion that maximizes the inter-marker distance and the number of bit transitions.

1,758 citations

Journal ArticleDOI
TL;DR: The primary objective of this paper is to serve as a glossary for interested researchers, giving them an overall picture of current time series data mining development and helping them identify potential directions for further investigation.

1,358 citations


Cites methods from "Algorithms for the reduction of the..."

  • ...The idea is similar to a technique proposed about 30 years ago for reducing the number of points required to represent a line by Douglas and Peucker (1973) (see also Hershberger and Snoeyink, 1992). Perng et al. (2000) use a landmark model to identify the important points in the time series for similarity measure. Man and Wong (2001) propose a lattice structure to represent the identified peaks and troughs (called control points) in the time...


Journal ArticleDOI
TL;DR: The concept of urban computing is introduced, discussing its general framework and key challenges from the perspective of computer science, and the typical technologies needed in urban computing are summarized into four categories.
Abstract: Urbanization's rapid progress has modernized many people's lives but also engendered big issues, such as traffic congestion, energy consumption, and pollution. Urban computing aims to tackle these issues by using the data that has been generated in cities (e.g., traffic flow, human mobility, and geographical data). Urban computing connects urban sensing, data management, data analytics, and service providing into a recurrent process for an unobtrusive and continuous improvement of people's lives, city operation systems, and the environment. Urban computing is an interdisciplinary field where computer sciences meet conventional city-related fields, like transportation, civil engineering, environment, economy, ecology, and sociology in the context of urban spaces. This article first introduces the concept of urban computing, discussing its general framework and key challenges from the perspective of computer sciences. Second, we classify the applications of urban computing into seven categories, consisting of urban planning, transportation, the environment, energy, social, economy, and public safety and security, presenting representative scenarios in each category. Third, we summarize the typical technologies that are needed in urban computing into four folds, which are about urban sensing, urban data management, knowledge fusion across heterogeneous data, and urban data visualization. Finally, we give an outlook on the future of urban computing, suggesting a few research topics that are somehow missing in the community.

1,290 citations


Cites methods from "Algorithms for the reduction of the..."

  • ...There are two major types of data reduction techniques running in a batch mode after the data is collected (e.g., Douglas-Peucker algorithm [Douglas and Peucker 1973]) or in an online mode as the data is being collected (such as the sliding window algorithm [Keogh et al. 2001; Maratnia 2004])....

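The quote above contrasts batch reduction (Douglas-Peucker, which needs the whole line) with online reduction that runs as points arrive. Below is an illustrative sliding-window sketch of the online idea, not the exact algorithm of Keogh et al.; the helper `_deviation` and the tolerance `epsilon` are assumptions for this example.

```python
import math

def _deviation(pt, a, b):
    """Perpendicular distance from pt to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    dx, dy = x2 - x1, y2 - y1
    seg = math.hypot(dx, dy)
    if seg == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / seg

def sliding_window(points, epsilon):
    """Online simplification: grow a window from an anchor point; as soon
    as any interior point deviates more than epsilon from the
    anchor-to-candidate segment, keep the previous point and restart the
    window there."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    anchor = 0
    for i in range(2, len(points)):
        a, b = points[anchor], points[i]
        if any(_deviation(points[j], a, b) > epsilon
               for j in range(anchor + 1, i)):
            kept.append(points[i - 1])  # close the current window
            anchor = i - 1
    kept.append(points[-1])
    return kept
```

Because it only ever looks at the current window, this variant can run while a trajectory is still being collected, at the cost of sometimes keeping more points than the batch algorithm would.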

References
Journal ArticleDOI
TL;DR: It is shown that one can determine through the use of relatively simple numerical techniques whether a given arbitrary plane curve is open or closed, whether it is singly or multiply connected, and what area it encloses.
Abstract: A method is described which permits the encoding of arbitrary geometric configurations so as to facilitate their analysis and manipulation by means of a digital computer. It is shown that one can determine through the use of relatively simple numerical techniques whether a given arbitrary plane curve is open or closed, whether it is singly or multiply connected, and what area it encloses. Further, one can cause a given figure to be expanded, contracted, elongated, or rotated by an arbitrary amount. It is shown that there are a number of ways of encoding arbitrary geometric curves to facilitate such manipulations, each having its own particular advantages and disadvantages. One method, the so-called rectangular-array type of encoding, is discussed in detail. In this method the slope function is quantized into a set of eight standard slopes. This particular representation is one of the simplest and one that is most readily utilized with present-day computing and display equipment.

1,751 citations
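The rectangular-array encoding described in that abstract, with the slope function quantized into eight standard directions, is what is now commonly called the Freeman chain code. A minimal sketch, assuming integer grid points and the conventional counter-clockwise direction numbering (0 = east, 2 = north, 4 = west, 6 = south):

```python
# Map each unit step between 8-connected grid points to a direction code.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode successive steps along an 8-connected grid path as codes."""
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def is_closed(points):
    """A curve is closed when it returns to its starting point."""
    return points[0] == points[-1]
```

Checking closure, as the abstract notes, becomes trivial in this representation, and rotating a figure by a multiple of 45 degrees amounts to adding a constant to every code modulo 8.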

Journal ArticleDOI
TL;DR: Any region can be regarded as a union of maximal neighborhoods of its points, and can be specified by the centers and radii of these neighborhoods; this set is a sort of "skeleton" of the region.
Abstract: Any region can be regarded as a union of maximal neighborhoods of its points, and can be specified by the centers and radii of these neighborhoods; this set is a sort of "skeleton" of the region. The storage required to represent a region in this way is comparable to that required when it is represented by encoding its boundary. Moreover, the skeleton representation seems to have advantages when it is necessary to determine repeatedly whether points are inside or outside the region, or to perform set-theoretic operations on regions.

151 citations

Journal ArticleDOI
TL;DR: An account of a series of experiments in computer generalisation, in which the outline of the Netherlands at 1:25 000 scale has been used to generate a series of generalisations between 1:600 000 and 1:3 500 000.
Abstract: This is an account of a series of experiments in computer generalisation, in which the outline of the Netherlands at 1:25 000 scale has been used to generate a series of generalisations between 1:600 000 and 1:3 500 000. Eight examples are reproduced: six compare the automatic generalisation with one taken from the Atlas of the Netherlands, and the other two compare automatic generalisation using all digitised coordinates with that derived from using mean values of successive coordinates. The article was first published in Tijdschrift voor Kadaster en Landmeetkunde, 1969, 6, Leiden, under the title "Toepassing van de reken-en-tekenautomaat bij structurele generalisatie" ("Application of the computing and plotting machine to structural generalisation") and is published here by kind permission of the Editors.

7 citations

DOI
01 Jan 1973
TL;DR: The system is designed to adapt to the individual user's preferences by having the user reduce several outlines by hand.
Abstract: Techniques from interactive graphics and pattern recognition are applied to the problem of reducing map outlines. Since the resulting generalized outlines are intended for use in interactive graphics systems, their data content should be considerably less than that of the original lines. It is also useful to have several levels of generalization for the same line, and an extension of the X-Y coordinate encoding scheme is introduced to represent such hierarchically reduced lines. Experiments are conducted that suggest that people look at outlines in different ways. To accommodate these differences in taste and purpose, the system is designed to adapt to the individual user's preferences. This is done by having the user reduce several outlines by hand. The system analyzes patterns in these lines and so learns to mimic the user's behaviour. Once enough has been learned, the system is given new lines to generalize on its own. Experiments are performed to measure the learning ability and the generalization performance. Other experiments are performed to show the potential feasibility of this approach. There is a review of work done in related fields.

1 citation