Book ChapterDOI

2. Algorithms for the Reduction of the Number of Points Required to Represent a Digitized Line or its Caricature

About: The article was published on 2011-03-18 and has received 320 citations to date. It focuses on the topics Line (text file) and Reduction (complexity).
Citations
Journal ArticleDOI
TL;DR: The primary objective of this paper is to serve as a glossary that gives interested researchers an overall picture of current time series data mining development and helps them identify potential directions for further investigation.

1,358 citations

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper leverages the latest developments in deep learning to obtain an initial segmentation of aerial images, and proposes an algorithm that reasons about missing connections in the extracted road topology as a shortest-path problem that can be solved efficiently.
Abstract: Creating road maps is essential for applications such as autonomous driving and city planning. Most approaches in industry focus on leveraging expensive sensors mounted on top of a fleet of cars. This results in very accurate estimates when exploiting a user in the loop. However, these solutions are very expensive and have small coverage. In contrast, in this paper we propose an approach that directly estimates road topology from aerial images. This provides us with an affordable solution with large coverage. Towards this goal, we take advantage of the latest developments in deep learning to have an initial segmentation of the aerial images. We then propose an algorithm that reasons about missing connections in the extracted road topology as a shortest path problem that can be solved efficiently. We demonstrate the effectiveness of our approach in the challenging TorontoCity dataset [23] and show very significant improvements over the state-of-the-art.
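The shortest-path reasoning mentioned in the abstract can be illustrated with a plain Dijkstra search over a weighted road graph. This is a generic sketch (the graph format and function names are ours, and the real system derives edge costs from the segmentation), not the paper's implementation:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over a weighted graph given as {node: [(neighbor, cost), ...]}.
    Returns (cost, path) to dst, or (inf, []) if dst is unreachable."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            # Walk predecessors back to the source to recover the path.
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return float("inf"), []
```

In the paper's setting, candidate connections between disconnected road segments would be scored by such a search and the cheapest completions added to the map.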

373 citations


Cites methods from "2. Algorithms for the Reduction of ..."

  • ...To simplify the graph we employ the Ramer–Douglas–Peucker algorithm [17, 6], which outputs a piecewise linear approximation of our road skeletons....

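The Ramer–Douglas–Peucker simplification cited above can be sketched in a few lines of Python. This is a generic illustration of the algorithm (names are ours), not the citing paper's implementation:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: recursively keep the point farthest from the
    chord joining the endpoints whenever its distance exceeds epsilon."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)

    def dist(p):
        # Perpendicular distance from p to the line through the endpoints
        # (falls back to endpoint distance for a degenerate chord).
        if norm == 0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs(dy * p[0] - dx * p[1] + x2 * y1 - y2 * x1) / norm

    idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                    key=lambda t: t[1])
    if dmax <= epsilon:
        return [points[0], points[-1]]         # run is "straight enough"
    left = rdp(points[:idx + 1], epsilon)      # recurse on both halves,
    return left[:-1] + rdp(points[idx:], epsilon)  # deduplicating the split point
```

Applied to a road skeleton polyline, this yields the piecewise linear approximation the citation describes.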

Proceedings ArticleDOI
06 Nov 2012
TL;DR: This paper presents an extensible map inference pipeline, designed to mitigate GPS error, admit less-frequently traveled roads, and scale to large datasets, and shows significant improvements over the current state of the art.
Abstract: This paper describes a process for automatically inferring maps from large collections of opportunistically collected GPS traces. In this type of dataset, there is often a great disparity in terms of coverage. For example, a freeway may be represented by thousands of trips, whereas a residential road may only have a handful of observations. Additionally, while modern GPS receivers typically produce high-quality location estimates, errors over 100 meters are not uncommon, especially near tall buildings or under dense tree coverage. Combined, GPS trace disparity and error present a formidable challenge for the current state of the art in map inference. By tuning the parameters of existing algorithms, a user may choose to remove spurious roads created by GPS noise, or admit less-frequently traveled roads, but not both. In this paper, we present an extensible map inference pipeline, designed to mitigate GPS error, admit less-frequently traveled roads, and scale to large datasets. We demonstrate and compare the performance of our proposed pipeline against existing methods, both qualitatively and quantitatively, using a real-world dataset that includes both high disparity and noise. Our results show significant improvements over the current state of the art.

193 citations

Proceedings ArticleDOI
10 Apr 2010
TL;DR: This paper presents the architecture and algorithms for route data inference and visualization in Biketastic, a platform designed to link information gathering, visualization, and bicycling practice.
Abstract: Bicycling is an affordable, environmentally friendly alternative transportation mode to motorized travel. A common task performed by bikers is to find good routes in an area, where the quality of a route is based on safety, efficiency, and enjoyment. Finding routes involves trial and error as well as exchanging information between members of a bike community. Biketastic is a platform that enriches this experimentation and route sharing process making it both easier and more effective. Using a mobile phone application and online map visualization, bikers are able to document and share routes, ride statistics, sensed information to infer route roughness and noisiness, and media that documents ride experience. Biketastic was designed to ensure the link between information gathering, visualization, and bicycling practices. In this paper, we present architecture and algorithms for route data inferences and visualization. We evaluate the system based on feedback from bicyclists provided during a two-week pilot.

192 citations

Journal ArticleDOI
TL;DR: This article proposes an effective and efficient approach to computing shape compactness based on the moment of inertia (MI), a well-known concept in physics, and conducts a number of experiments that demonstrate the superiority of the MI over the popular isoperimetric quotient approach.
Abstract: A measure of shape compactness is a numerical quantity representing the degree to which a shape is compact. Ways to provide an accurate measure have been given great attention due to its application in a broad range of GIS problems, such as detecting clustering patterns from remote-sensing images, understanding urban sprawl, and redrawing electoral districts to avoid gerrymandering. In this article, we propose an effective and efficient approach to computing shape compactness based on the moment of inertia (MI), a well-known concept in physics. The mathematical framework and the computer implementation for both raster and vector models are discussed in detail. In addition to computing compactness for a single shape, we propose a computational method that is capable of calculating the variations in compactness as a shape grows or shrinks, which is a typical application found in regionalization problems. We conducted a number of experiments that demonstrate the superiority of the MI over the popular isoperimetric quotient approach in terms of (1) computational efficiency; (2) tolerance of positional uncertainty and irregular boundaries; (3) ability to handle shapes with holes and multiple parts; and (4) applicability and efficacy in districting/zonation/regionalization problems.
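The moment-of-inertia idea can be illustrated for a raster shape made of unit cells: compute the second moment of area about the centroid and normalize by that of a disk of equal area (for which I = A²/(2π)), so a disk scores about 1 and elongated or fragmented shapes score lower. This is a sketch of the general idea under that standard normalization, not the authors' exact formulation:

```python
import math

def mi_compactness(cells):
    """Moment-of-inertia compactness for a raster shape given as a list of
    unit-cell (x, y) coordinates. A disk scores ~1; lower means less compact."""
    n = len(cells)
    cx = sum(x for x, _ in cells) / n
    cy = sum(y for _, y in cells) / n
    # Second moment of area about the centroid, one unit of area per cell.
    inertia = sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in cells)
    return n * n / (2 * math.pi * inertia)

def isoperimetric_quotient(area, perimeter):
    """The comparison measure from the abstract: 4*pi*A / P^2 (1 for a circle)."""
    return 4 * math.pi * area / perimeter ** 2
```

Unlike the perimeter-based quotient, the MI measure needs no boundary tracing, which is one reason the paper finds it more tolerant of irregular boundaries.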

151 citations

References
Journal ArticleDOI
TL;DR: Any region can be regarded as a union of maximal neighborhoods of its points, and can be specified by the centers and radii of these neighborhoods; this set is a sort of "skeleton" of the region.
Abstract: Any region can be regarded as a union of maximal neighborhoods of its points, and can be specified by the centers and radii of these neighborhoods; this set is a sort of "skeleton" of the region. The storage required to represent a region in this way is comparable to that required when it is represented by encoding its boundary. Moreover, the skeleton representation seems to have advantages when it is necessary to determine repeatedly whether points are inside or outside the region, or to perform set-theoretic operations on regions.
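The "union of maximal neighborhoods" representation can be sketched on a raster region using chessboard-distance (square) neighborhoods: record each cell's largest in-region neighborhood radius, keep only the locally maximal ones, and recover the region as their union. An illustrative sketch (the names and the local-maximum test are ours), not the paper's algorithm:

```python
def medial_axis(region):
    """Skeleton of a raster region (a set of (x, y) cells): the centers and
    radii of maximal square neighborhoods contained in the region."""
    region = set(region)

    def radius(x, y):
        # Largest r such that every cell within chessboard distance r
        # of (x, y) lies inside the region.
        r = 0
        while all((x + dx, y + dy) in region
                  for dx in range(-r - 1, r + 2)
                  for dy in range(-r - 1, r + 2)):
            r += 1
        return r

    rad = {(x, y): radius(x, y) for (x, y) in region}
    # A neighborhood is maximal iff no 8-neighbor's neighborhood contains it,
    # i.e. its radius is a local maximum of the distance map.
    return {c: r for c, r in rad.items()
            if all(rad.get((c[0] + dx, c[1] + dy), -1) <= r
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1))}

def reconstruct(skeleton):
    # The region is exactly the union of the stored maximal neighborhoods.
    return {(x + dx, y + dy)
            for (x, y), r in skeleton.items()
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)}
```

The skeleton typically holds far fewer entries than the region has cells, which is the storage advantage the abstract describes.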

151 citations

Journal ArticleDOI
TL;DR: An account of a series of experiments in computer generalisation, in which the outline of the Netherlands at 1:25 000 scale has been used to generate a series of generalisations between 1:600 000 and 1:3 500 000.
Abstract: This is an account of a series of experiments in computer generalisation, in which the outline of the Netherlands at 1:25 000 scale has been used to generate a series of generalisations between 1:600 000 and 1:3 500 000. Eight examples are reproduced, six of which compare the automatic generalisation with one taken from the Atlas of the Netherlands and the other two compare automatic generalisation using all digitised coordinates with that derived from using mean values of successive coordinates. The article was first published in Tijdschrift voor Kadaster en Landmeetkunde, 1969, 6, Leiden, under the title, "Toepassing van de reken-en-tekenautomaat bij structurele generalisatie" and is published here by kind permission of the Editors.

7 citations