
Showing papers in "Geoinformatica in 2011"


Journal ArticleDOI
TL;DR: A generic approach for building a single computing framework capable of handling different data formats and multiple algorithms that can be used in potential distribution modelling is described and an example use case illustrates potential distribution maps generated by the framework.
Abstract: Species' potential distribution modelling is the process of building a representation of the fundamental ecological requirements for a species and extrapolating these requirements into a geographical region. The importance of being able to predict the distribution of species is currently highlighted by issues like global climate change, public health problems caused by disease vectors, anthropogenic impacts that can lead to massive species extinction, among other challenges. There are several computational approaches that can be used to generate potential distribution models, each achieving optimal results under different conditions. However, the existing software packages available for this purpose typically implement a single algorithm, and each software package presents a new learning curve to the user. Whenever new software is developed for species' potential distribution modelling, significant duplication of effort results because many feature requirements are shared between the different packages. Additionally, data preparation and comparison between algorithms becomes difficult when using separate software applications, since each application has different data input and output capabilities. This paper describes a generic approach for building a single computing framework capable of handling different data formats and multiple algorithms that can be used in potential distribution modelling. The ideas described in this paper have been implemented in a free and open source software package called openModeller. The main concepts of species' potential distribution modelling are also explained and an example use case illustrates potential distribution maps generated by the framework.

294 citations
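
To make the framework idea concrete, here is a minimal sketch of a pluggable-algorithm interface with one classic algorithm (a BIOCLIM-style climate envelope). The class and method names are invented for illustration and do not reflect openModeller's actual API.

```python
# Hypothetical sketch of a pluggable distribution-modelling framework;
# names are illustrative, not openModeller's real interfaces.
from abc import ABC, abstractmethod

class DistributionModel(ABC):
    """Common interface every modelling algorithm implements."""

    @abstractmethod
    def fit(self, occurrences, env):
        """occurrences: list of (x, y); env: callable (x, y) -> env vector."""

    @abstractmethod
    def suitability(self, x, y, env):
        """Return a suitability score in [0, 1] for one location."""

class BioclimEnvelope(DistributionModel):
    """Classic envelope model: suitable where every environmental variable
    lies within the range observed at the occurrence points."""

    def fit(self, occurrences, env):
        samples = [env(x, y) for x, y in occurrences]
        self.lo = [min(v) for v in zip(*samples)]
        self.hi = [max(v) for v in zip(*samples)]

    def suitability(self, x, y, env):
        vals = env(x, y)
        inside = all(l <= v <= h for v, l, h in zip(vals, self.lo, self.hi))
        return 1.0 if inside else 0.0

# Usage: any algorithm obeying the interface can be swapped in unchanged.
env = lambda x, y: (20.0 + 0.1 * y, 800.0 - 2.0 * x)   # toy temperature/rainfall
model = BioclimEnvelope()
model.fit([(1, 2), (3, 4), (2, 3)], env)
print(model.suitability(2, 3, env))                     # 1.0 (inside envelope)
```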


Journal ArticleDOI
TL;DR: Experimental results show that the P2P spatial cloaking algorithm is scalable while guaranteeing the user’s location privacy protection, and a cloaked area adjustment scheme guarantees that the spatial cloaking algorithm is free from a “center-of-cloaked-area” privacy attack.
Abstract: This paper tackles a privacy breach in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. For example, a user who wants to issue a query asking about her nearest gas station has to report her exact location to an LBS provider. However, many recent research efforts have indicated that revealing private location information to potentially untrusted LBS providers may lead to major privacy breaches. To preserve user location privacy, spatial cloaking is the most commonly used privacy-enhancing technique in LBS. The basic idea of the spatial cloaking technique is to blur a user's exact location into a cloaked area that satisfies the user specified privacy requirements. Unfortunately, existing spatial cloaking algorithms designed for LBS rely on fixed communication infrastructure, e.g., base stations, and centralized/distributed servers. Thus, these algorithms cannot be applied to a mobile peer-to-peer (P2P) environment where mobile users can only communicate with other peers through P2P multi-hop routing without any support of fixed communication infrastructure or servers. In this paper, we propose a spatial cloaking algorithm for mobile P2P environments. As mobile P2P environments have many unique limitations, e.g., user mobility, limited transmission range, multi-hop communication, scarce communication resources, and network partitions, we propose three key features to enhance our algorithm: (1) An information sharing scheme enables mobile users to share their gathered peer location information to reduce communication overhead; (2) A historical location scheme allows mobile users to utilize stale peer location information to overcome the network partition problem; and (3) A cloaked area adjustment scheme guarantees that our spatial cloaking algorithm is free from a "center-of-cloaked-area" privacy attack. Experimental results show that our P2P spatial cloaking algorithm is scalable while guaranteeing the user's location privacy protection.

217 citations
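
The cloaking step can be illustrated with a small sketch: gather peer locations, take the bounding box of K users, then extend the box so the querier is displaced from its centre. The greedy peer selection and the adjustment rule below are assumptions for illustration, not the paper's algorithm.

```python
import random

def cloaked_area(user, peers, k):
    """user: (x, y); peers: list of (x, y) gathered via P2P; k: anonymity level."""
    nearest = sorted(peers, key=lambda p: (p[0] - user[0])**2 + (p[1] - user[1])**2)
    group = [user] + nearest[:k - 1]          # K users including the querier
    xs, ys = zip(*group)
    box = [min(xs), min(ys), max(xs), max(ys)]
    # Adjustment: randomly extend one side outward so the querier is displaced
    # from the box centre while the box still covers all K users.
    w, h = box[2] - box[0], box[3] - box[1]
    side = random.randrange(4)                # 0=min-x, 1=min-y, 2=max-x, 3=max-y
    pad = random.uniform(0.25, 1.0) * max(w if side % 2 == 0 else h, 1.0)
    box[side] += -pad if side < 2 else pad
    return tuple(box)

peers = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
print(cloaked_area((50.0, 50.0), peers, k=10))
```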


Journal ArticleDOI
TL;DR: It is shown how a 3D city model can be decomposed into such components which are either topologically equivalent to a disk, a sphere, or a torus, enabling the modeling of the terrain, of buildings and other constructions, and of bridges and tunnels.
Abstract: Consistency is a crucial prerequisite for a large number of relevant applications of 3D city models, which have become more and more important in GIS. Users need efficient and reliable consistency checking tools in order to be able to assess the suitability of spatial data for their applications. In this paper we provide the theoretical foundations for such tools by defining an axiomatic characterization of 3D city models. These axioms are effective and efficiently supported by recent spatial database management systems and methods of Computational Geometry or Computer Graphics. They are equivalent to the topological concept of the 3D city model presented in this paper, thereby guaranteeing the reliability of the method. Hence, each error is detected by the axioms, and each violation of the axioms is in fact an error. This property, which is proven formally, is not guaranteed by existing approaches. The efficiency of the method stems from its locality: in most cases, consistency checks can safely be restricted to single components, which are defined topologically. We show how a 3D city model can be decomposed into such components which are either topologically equivalent to a disk, a sphere, or a torus, enabling the modeling of the terrain, of buildings and other constructions, and of bridges and tunnels, which are handles from a mathematical point of view. This enables a modular design of the axioms by defining axioms for each topological component and for the aggregation of the components. Finally, a sound, consistent concept for aggregating features, i.e., semantic objects like buildings or rooms, to complex features is presented.

76 citations
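
The topological classification the axioms build on can be made concrete with the Euler characteristic: for a closed triangulated surface, V - E + F is 2 for a sphere and 0 for a torus (each handle, e.g. a bridge or tunnel, subtracts 2). A minimal worked example, not the paper's consistency checker:

```python
def euler_characteristic(faces):
    """faces: list of vertex-index triples describing a triangulated surface."""
    vertices = {v for f in faces for v in f}
    edges = {frozenset(e) for f in faces
             for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0]))}
    return len(vertices) - len(edges) + len(faces)

# A tetrahedron stands in for a closed building shell: V=4, E=6, F=4.
tetrahedron = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
chi = euler_characteristic(tetrahedron)
print(chi)                                     # 2 -> topologically a sphere
print({2: "sphere", 0: "torus"}.get(chi, "other"))
```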


Journal ArticleDOI
TL;DR: This paper starts with the standard assumption of time geography (no further knowledge), and develops the appropriate probability distribution by three equivalent approaches.
Abstract: Time geography uses space-time volumes to represent the possible locations of a mobile agent over time in an x-y-t space. A volume is a qualitative representation of the fact that the agent is, at a particular time t_i, inside of the volume's base at t_i. Space-time volumes enable qualitative analysis such as potential encounters between agents. In this paper the qualitative statements of time geography will be quantified. For this purpose an agent's possible locations are modeled from a stochastic perspective. It is shown that probability is not equally distributed in a space-time volume, i.e., a quantitative analysis cannot be based simply on proportions of intersections. The actual probability distribution depends on the degree of a priori knowledge about the agent's behavior. This paper starts with the standard assumption of time geography (no further knowledge), and develops the appropriate probability distribution by three equivalent approaches. With such a model, any analysis of the location of an agent, or of the relations between the locations of two agents, can be improved in expressiveness as well as accuracy.

70 citations
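
The geometry being quantified is the space-time prism: given a start point, an end point, and a maximum speed, a location is possible at time t only if both legs of the trip remain feasible. Below is a minimal membership test under assumed parameters; the paper's actual contribution, the probability distribution over this set, is not modelled here.

```python
import math

def in_prism(p, t, o, t0, d, t1, vmax):
    """True iff point p is reachable at time t when travelling from origin o
    (departure t0) to destination d (arrival deadline t1) at speed <= vmax."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(o, p) <= vmax * (t - t0) and dist(p, d) <= vmax * (t1 - t)

o, d, t0, t1, vmax = (0.0, 0.0), (10.0, 0.0), 0.0, 10.0, 2.0
print(in_prism((5.0, 4.0), 5.0, o, t0, d, t1, vmax))   # True: feasible midway
print(in_prism((5.0, 9.5), 5.0, o, t0, d, t1, vmax))   # False: outside the prism
```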


Journal ArticleDOI
TL;DR: Semantics-enhanced discovery for geospatial data, services/service chains, and process models is described and semantic search middleware that can support virtual data product materialization is developed for the geospatial catalogue service.
Abstract: A geospatial catalogue service provides a network-based meta-information repository and interface for advertising and discovering shared geospatial data and services. Descriptive information (i.e., metadata) for geospatial data and services is structured and organized in catalogue services. The approaches currently available for searching and using that information are often inadequate. Semantic Web technologies show promise for better discovery methods by exploiting the underlying semantics. Such development needs special attention from the Cyberinfrastructure perspective, so that the traditional focus on discovery of and access to geospatial data can be expanded to support the increased demand for processing of geospatial information and discovery of knowledge. Semantic descriptions for geospatial data, services, and geoprocessing service chains are structured, organized, and registered through extending elements in the ebXML Registry Information Model (ebRIM) of a geospatial catalogue service, which follows the interface specifications of the Open Geospatial Consortium (OGC) Catalogue Services for the Web (CSW). The process models for geoprocessing service chains, as a type of geospatial knowledge, are captured, registered, and discoverable. Semantics-enhanced discovery for geospatial data, services/service chains, and process models is described. Semantic search middleware that can support virtual data product materialization is developed for the geospatial catalogue service. The creation of such a semantics-enhanced geospatial catalogue service is important in meeting the demands for geospatial information discovery and analysis in Cyberinfrastructure.

67 citations


Journal ArticleDOI
TL;DR: A new location anonymization algorithm that is designed specifically for the road network environment, which relies on the commonly used concept of spatial cloaking and is more efficient and scalable than the state-of-the-art technique, in terms of both query execution cost and query quality.
Abstract: Recently, several techniques have been proposed to protect the user location privacy for location-based services in the Euclidean space. Applying these techniques directly to the road network environment would lead to privacy leakage and inefficient query processing. In this paper, we propose a new location anonymization algorithm that is designed specifically for the road network environment. Our algorithm relies on the commonly used concept of spatial cloaking, where a user location is cloaked into a set of connected road segments of a minimum total length $\mathcal{L}$ including at least $\mathcal{K}$ users. Our algorithm is "query-aware" as it takes into account the query execution cost at a database server and the query quality, i.e., the number of objects returned to users by the database server, during the location anonymization process. In particular, we develop a new cost function that balances between the query execution cost and the query quality. Then, we introduce two versions of our algorithm, namely, pure greedy and randomized greedy, that aim to minimize the developed cost function and satisfy the user-specified privacy requirements. To accommodate time intervals with a high workload, we introduce a shared execution paradigm that boosts the scalability of our location anonymization algorithm and the database server to support large numbers of queries received in a short time period. Extensive experimental results show that our algorithms are more efficient and scalable than the state-of-the-art technique, in terms of both query execution cost and query quality. The results also show that our algorithms have very strong resilience to two privacy attacks, namely, the replay attack and the center-of-cloaked-area attack.

61 citations
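
The cloaking requirement can be sketched as a greedy expansion over the road graph: grow a connected set of segments from the user's segment until it covers at least K users and total length L. The shortest-segment-first choice below is an assumption; the paper's algorithms minimize a query-aware cost function instead.

```python
import heapq

def cloak(start, segments, adjacency, k, min_len):
    """segments: id -> (length, n_users); adjacency: id -> neighbour ids."""
    chosen, frontier, seen = set(), [(segments[start][0], start)], {start}
    users = length = 0
    while frontier and (users < k or length < min_len):
        seg_len, seg = heapq.heappop(frontier)   # greedy: shortest segment next
        chosen.add(seg)
        length += seg_len
        users += segments[seg][1]
        for nxt in adjacency[seg]:
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (segments[nxt][0], nxt))
    return chosen

segments = {0: (1.0, 1), 1: (0.5, 2), 2: (2.0, 1), 3: (1.5, 3)}
adjacency = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(cloak(0, segments, adjacency, k=4, min_len=2.0))   # {0, 1, 3}
```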


Journal ArticleDOI
TL;DR: This work proposes hybrid, two-step approaches for private location-based queries which provide protection for both the users and the database, and introduces algorithms to efficiently support PIR on dynamic POI sub-sets.
Abstract: Mobile devices with global positioning capabilities allow users to retrieve points of interest (POI) in their proximity. To protect user privacy, it is important not to disclose exact user coordinates to untrusted entities that provide location-based services. Currently, there are two main approaches to protect the location privacy of users: (i) hiding locations inside cloaking regions (CRs) and (ii) encrypting location data using private information retrieval (PIR) protocols. Previous work focused on finding good trade-offs between privacy and performance of user protection techniques, but disregarded the important issue of protecting the POI dataset D. For instance, location cloaking requires large-sized CRs, leading to excessive disclosure of POIs (O(|D|) in the worst case). PIR, on the other hand, reduces this bound to $O(\sqrt{|D|})$, but at the expense of high processing and communication overhead. We propose hybrid, two-step approaches for private location-based queries which provide protection for both the users and the database. In the first step, user locations are generalized to coarse-grained CRs which provide strong privacy. Next, a PIR protocol is applied with respect to the obtained query CR. To protect against excessive disclosure of POI locations, we devise two cryptographic protocols that privately evaluate whether a point is enclosed inside a rectangular region or a convex polygon. We also introduce algorithms to efficiently support PIR on dynamic POI sub-sets. We provide solutions for both approximate and exact NN queries. In the approximate case, our method discloses O(1) POI, orders of magnitude fewer than CR- or PIR-based techniques. For the exact case, we obtain optimal disclosure of a single POI, although with slightly higher computational overhead. Experimental results show that the hybrid approaches are scalable in practice, and outperform the pure-PIR approach in terms of computational and communication overhead.

45 citations
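
The geometric predicate the paper evaluates privately is ordinary point-in-region containment. Here is its plaintext form for a convex polygon (same-side test for counter-clockwise vertices); the paper's contribution is wrapping an equivalent test in a cryptographic protocol, which is not shown.

```python
def in_convex_polygon(p, poly):
    """poly: CCW-ordered vertices of a convex polygon; p: (x, y)."""
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        # Cross-product sign tells which side of edge (x1,y1)->(x2,y2) p is on.
        if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
            return False
    return True

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(in_convex_polygon((2, 2), square))   # True
print(in_convex_polygon((5, 2), square))   # False
```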


Journal ArticleDOI
TL;DR: The results of extensive simulations show that the proposed algorithms are able to answer MRPSR queries effectively and efficiently with underlying road networks and the response time of the algorithms is significantly reduced while the distances of the computed routes are only slightly longer than the shortest route.
Abstract: In modern geographic information systems, route search represents an important class of queries. In route search related applications, users may want to define a number of traveling rules (traveling preferences) when they plan their trips. However, these traveling rules are not considered in most existing techniques. In this paper, we propose a novel spatial query type, the multi-rule partial sequenced route (MRPSR) query, which enables efficient trip planning with user defined traveling rules. The MRPSR query provides a unified framework that subsumes the well-known trip planning query (TPQ) and the optimal sequenced route (OSR) query. The difficulty in answering MRPSR queries lies in how to integrate multiple choices of points-of-interest (POI) with traveling rules when searching for satisfying routes. We prove that the MRPSR query is NP-hard and then provide three algorithms by mapping traveling rules to an activity on vertex network. Afterwards, we extend all the proposed algorithms to road networks. By utilizing both real and synthetic POI datasets, we investigate the performance of our algorithms. The results of extensive simulations show that our algorithms are able to answer MRPSR queries effectively and efficiently with underlying road networks. Compared to the Light Optimal Route Discoverer (LORD) based brute-force solution, the response time of our algorithms is significantly reduced while the distances of the computed routes are only slightly longer than the shortest route.

41 citations
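
A toy version of the greedy idea: visit one POI per required category, always moving to the nearest candidate of the next category. Real MRPSR queries allow partial orders and road-network distances; this sketch assumes a fixed order and Euclidean distance, and is not the paper's algorithm.

```python
import math

def greedy_route(start, category_order, pois):
    """pois: category -> list of (x, y). Returns the visited points in order."""
    route, cur = [start], start
    for cat in category_order:
        cur = min(pois[cat], key=lambda p: math.dist(cur, p))  # nearest candidate
        route.append(cur)
    return route

pois = {"gas": [(1, 1), (5, 5)], "food": [(2, 0), (6, 6)], "atm": [(3, 1)]}
print(greedy_route((0, 0), ["gas", "food", "atm"], pois))
# [(0, 0), (1, 1), (2, 0), (3, 1)]
```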


Journal ArticleDOI
TL;DR: A novel approach to express and evaluate a complex class of queries in moving object databases called spatiotemporal pattern queries (STP queries); that is, one can specify temporal order constraints on the fulfillment of several predicates.
Abstract: This paper presents a novel approach to express and evaluate the complex class of queries in moving object databases called spatiotemporal pattern queries (STP queries). That is, one can specify temporal order constraints on the fulfillment of several predicates. This is in contrast to a standard spatiotemporal query that is composed of a single predicate. We propose a language design for spatiotemporal pattern queries in the context of spatiotemporal DBMSs. The design builds on the well-established concept of lifted predicates. Hence, unlike previous approaches, patterns are neither restricted to specific sets of predicates, nor to specific moving object types. The proposed language can express arbitrarily complex patterns that involve various types of spatiotemporal operations such as range, metric, topological, set operations, aggregations, distance, direction, and boolean operations. This work covers the language integration in SQL, the evaluation of the queries, and the integration with the query optimizer. We also propose a simple language for defining the temporal constraints. The approach allows for queries that were not previously expressible. We provide a complete implementation in C++ and Prolog in the context of the Secondo platform. The implementation is made publicly available online as a Secondo Plugin, which also includes automatic scripts for repeating the experiments in this paper.

40 citations
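
The core of pattern evaluation can be sketched as follows: each lifted predicate yields the time intervals in which it holds for a trajectory, and a pattern holds if one interval per predicate can be chosen in the required temporal order. The simple "then" semantics and the interval inputs below are illustrative, not Secondo's operators.

```python
def matches_pattern(interval_lists):
    """interval_lists: per-predicate [(start, end), ...], in required order.
    True if one interval per predicate can be picked with non-decreasing
    start times (a simple 'then' ordering)."""
    t = float("-inf")
    for intervals in interval_lists:
        starts = [s for s, e in intervals if s >= t]
        if not starts:
            return False
        t = min(starts)            # greedy: earliest admissible fulfilment
    return True

inside_park = [(0, 3), (8, 9)]     # predicate 1 holds in these intervals
crosses_river = [(4, 5)]           # predicate 2
print(matches_pattern([inside_park, crosses_river]))   # True: 0..3 then 4..5
print(matches_pattern([[(6, 7)], crosses_river]))      # False: no crossing after t=6
```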


Journal ArticleDOI
TL;DR: This work develops a novel approach to the discovery of geoprocessing services (WPS) based on Logic Programming query containment checking between these descriptions, which is applicable in the Web Service Modeling Framework (WSMF), a state-of-the-art semantic web service framework.
Abstract: Discovery of suitable web services is a crucial task in Spatial Data Infrastructures (SDI). In this work, we develop a novel approach to the discovery of geoprocessing services (WPS). Discovery requests and Web Processing Services are annotated as conjunctive queries in a logic programming (LP) language and the discovery process is based on Logic Programming query containment checking between these descriptions. Besides the types of input and output, we explicitly formalise the relation between them and hence are able to capture the functionality of a WPS more precisely. The use of Logic Programming query containment allows for effective reasoning during discovery. Furthermore, the relative simplicity of the semantic descriptions is advantageous for their creation by non-logics experts. The developed approach is applicable in the Web Service Modeling Framework (WSMF), a state-of-the-art semantic web service framework.

37 citations
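
Query containment for conjunctive queries can be checked by searching for a homomorphism from the request's atoms into the service description (its frozen query). Below is a brute-force sketch with an invented atom vocabulary; real WSMF/WSML descriptions are considerably richer.

```python
from itertools import product

def contains(request, service):
    """Each query: list of atoms (predicate, arg1, arg2). Variables start '?'."""
    svals = {a for atom in service for a in atom[1:]}
    rvars = sorted({a for atom in request for a in atom[1:] if a.startswith("?")})
    for assign in product(svals, repeat=len(rvars)):
        m = dict(zip(rvars, assign))
        # Apply the candidate variable mapping; constants map to themselves.
        image = {(p, m.get(x, x), m.get(y, y)) for p, x, y in request}
        if image <= set(service):
            return True
    return False

service = [("input", "s", "DEM"), ("output", "s", "Viewshed"),
           ("derivedFrom", "Viewshed", "DEM")]
request = [("input", "?w", "DEM"), ("output", "?w", "?out")]
print(contains(request, service))   # True: the WPS satisfies the request
```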


Journal ArticleDOI
TL;DR: This paper presents an efficient algorithm to compute viewsheds on terrain stored in external memory that is much faster than existing published algorithms.
Abstract: The recent availability of detailed geographic data permits terrain applications to process large areas at high resolution. However, the required massive data processing presents significant challenges, demanding algorithms optimized for both data movement and computation. One such application is viewshed computation, that is, to determine all the points visible from a given point p. In this paper, we present an efficient algorithm to compute viewsheds on terrain stored in external memory. In the usual case where the observer's radius of interest is smaller than the terrain size, the algorithm complexity is Θ(scan(n²)), where n² is the number of points in an n × n DEM and scan(n²) is the minimum number of I/O operations required to read n² contiguous items from external memory. This is much faster than existing published algorithms.
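
A plain in-memory line-of-sight test makes the problem concrete: a cell is visible if no intermediate cell rises above the sight line to it. The paper's contribution, organizing this computation I/O-efficiently for terrains larger than memory, is deliberately ignored in this sketch.

```python
def visible(dem, obs, target, obs_height=1.0):
    """dem: grid of elevations; obs, target: (row, col) cells."""
    (r0, c0), (r1, c1) = obs, target
    steps = max(abs(r1 - r0), abs(c1 - c0))
    if steps == 0:
        return True
    z0 = dem[r0][c0] + obs_height
    for i in range(1, steps):
        f = i / steps
        r, c = round(r0 + f * (r1 - r0)), round(c0 + f * (c1 - c0))
        sight_z = z0 + f * (dem[r1][c1] - z0)   # height of the sight line here
        if dem[r][c] > sight_z:
            return False
    return True

dem = [[0, 0, 0, 0],
       [0, 5, 0, 0],    # a ridge at (1, 1)
       [0, 0, 0, 0]]
print(visible(dem, (0, 0), (2, 2)))   # False: the ridge blocks the view
print(visible(dem, (0, 0), (0, 3)))   # True: flat ground
```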

Journal ArticleDOI
TL;DR: This work proposes to model density change among spatial regions using a density tracing based approach that enables reasoning about large areal aggregated crime datasets and discovers patterns among datasets by finding those crime and spatial features that exhibit similar spatial distributions by measuring the dissimilarity of their density traces.
Abstract: Intelligent crime analysis allows for a greater understanding of the dynamics of unlawful activities, providing possible answers to where, when and why certain crimes are likely to happen. We propose to model density change among spatial regions using a density tracing based approach that enables reasoning about large areal aggregated crime datasets. We discover patterns among datasets by finding those crime and spatial features that exhibit similar spatial distributions by measuring the dissimilarity of their density traces. The proposed system incorporates both localized clusters (through the use of context sensitive weighting and clustering) and the global distribution trend. Experimental results validate and demonstrate the robustness of our approach.
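
The comparison can be sketched with density traces: represent each attribute as its share per region over a common ordering of regions, and measure dissimilarity between traces. The L1 distance below is an assumption; the paper's measure additionally weights localized clusters and the global distribution trend.

```python
def density_trace(counts):
    """Normalize per-region counts into a density trace over the regions."""
    total = sum(counts)
    return [c / total for c in counts]

def trace_distance(a, b):
    """L1 dissimilarity between two density traces (assumed measure)."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Counts over the same four ordered regions for three attributes.
burglary = density_trace([40, 10, 10, 40])
bars     = density_trace([35, 15, 10, 40])
assaults = density_trace([10, 40, 40, 10])
print(trace_distance(burglary, bars))       # 0.1: similar spatial distribution
print(trace_distance(burglary, assaults))   # 1.2: dissimilar
```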

Journal ArticleDOI
TL;DR: The results show that MOVIES outperforms state-of-the-art moving object indexes such as a main-memory adapted Bˣ-tree by orders of magnitude w.r.t. update rates and query rates.
Abstract: With the exponential growth of moving objects data to the Gigabyte range, it has become critical to develop effective techniques for indexing, updating, and querying these massive data sets. To meet the high update rate as well as low query response time requirements of moving object applications, this paper takes a novel approach in moving object indexing. In our approach, we do not require a sophisticated index structure that needs to be adjusted for each incoming update. Rather, we construct conceptually simple short-lived index images that we only keep for a very short period of time (sub-seconds) in main memory. As a consequence, the resulting technique MOVIES supports at the same time high query rates and high update rates, trading this property for query result staleness. Moreover, MOVIES is the first main memory method supporting time-parameterized predictive queries. To support this feature, we present two algorithms: non-predictive MOVIES and predictive MOVIES. We obtain the surprising result that a predictive indexing approach--considered state-of-the-art in an external-memory scenario--does not scale well in a main memory environment. In fact, our results show that MOVIES outperforms state-of-the-art moving object indexes such as a main-memory adapted Bˣ-tree by orders of magnitude w.r.t. update rates and query rates. In our experimental evaluation, we index the complete road network of Germany consisting of 40,000,000 road segments and 38,000,000 nodes. We scale our workload up to 100,000,000 moving objects, 58,000,000 updates per second and 10,000 queries per second, a scenario at a scale unmatched by any previous work.
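
The core idea can be sketched in a few lines: rather than maintaining one index under updates, bulk-build a throwaway grid index each epoch and answer queries from the latest finished image, accepting bounded staleness. The cell grid below is an assumption, not MOVIES' internal layout.

```python
from collections import defaultdict

def build_index(positions, cell=10.0):
    """positions: id -> (x, y). One bulk build, never updated in place."""
    grid = defaultdict(list)
    for oid, (x, y) in positions.items():
        grid[(int(x // cell), int(y // cell))].append(oid)
    return grid

def range_query(grid, xlo, ylo, xhi, yhi, positions, cell=10.0):
    hits = []
    for cx in range(int(xlo // cell), int(xhi // cell) + 1):
        for cy in range(int(ylo // cell), int(yhi // cell) + 1):
            for oid in grid.get((cx, cy), ()):
                x, y = positions[oid]
                if xlo <= x <= xhi and ylo <= y <= yhi:
                    hits.append(oid)
    return hits

positions = {1: (3.0, 4.0), 2: (15.0, 4.0), 3: (55.0, 40.0)}
index = build_index(positions)                # rebuilt from scratch each epoch
print(range_query(index, 0, 0, 20, 10, positions))   # [1, 2]
```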

Journal ArticleDOI
TL;DR: This work incorporates these edge configurations in spatially autoregressive models and demonstrates how the Bayesian Information Criterion (BIC) can be used to detect difference boundaries in the map.
Abstract: Statistical models for areal data are primarily used for smoothing maps revealing spatial trends. Subsequent interest often resides in the formal identification of "boundaries" on the map. Here boundaries refer to "difference boundaries", representing significant differences between adjacent regions. Recently, Lu and Carlin (Geogr Anal 37:265–285, 2005) discussed a Bayesian framework to carry out edge detection employing a spatial hierarchical model that is estimated using Markov chain Monte Carlo (MCMC) methods. Here we offer an alternative that avoids MCMC and is easier to implement. Our approach resembles a model comparison problem where the models correspond to different underlying edge configurations across which we wish to smooth (or not). We incorporate these edge configurations in spatially autoregressive models and demonstrate how the Bayesian Information Criterion (BIC) can be used to detect difference boundaries in the map. We illustrate our methods with a Minnesota Pneumonia and Influenza Hospitalization dataset to elicit boundaries detected from the different models.
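
The model-comparison mechanics can be illustrated directly: score each candidate edge configuration with BIC = k*ln(n) - 2*ln(L) and prefer the lower value. The two-configuration setup and the simple Gaussian likelihood below are illustrative only, not the paper's autoregressive models.

```python
import math

def bic(data, groups):
    """groups: lists of indices; each group smoothed to its own mean.
    Parameter count k is the number of group means (a simplification)."""
    loglik, k = 0.0, len(groups)
    for g in groups:
        vals = [data[i] for i in g]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals) or 1e-9
        loglik += sum(-0.5 * (math.log(2 * math.pi * var) + (v - mu) ** 2 / var)
                      for v in vals)
    return k * math.log(len(data)) - 2 * loglik

rates = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]      # two regimes of hospitalization rates
no_boundary = bic(rates, [[0, 1, 2, 3, 4, 5]])
boundary    = bic(rates, [[0, 1, 2], [3, 4, 5]])
print(no_boundary > boundary)   # True: BIC prefers the difference boundary
```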

Journal ArticleDOI
TL;DR: A reward-based region discovery framework that employs a divisive grid-based supervised clustering for region discovery and investigates the duality between regional association rules and regions where the associations are valid.
Abstract: The motivation for regional association rule mining and scoping is driven by the facts that global statistics seldom provide useful insight and that most relationships in spatial datasets are geographically regional, rather than global. Furthermore, when using traditional association rule mining, regional patterns frequently fail to be discovered due to insufficient global confidence and/or support. In this paper, we systematically study this problem and address the unique challenges of regional association mining and scoping: (1) region discovery: how to identify interesting regions from which novel and useful regional association rules can be extracted; (2) regional association rule scoping: how to determine the scope of regional association rules. We investigate the duality between regional association rules and regions where the associations are valid: interesting regions are identified to seek novel regional patterns, and a regional pattern has a scope of a set of regions in which the pattern is valid. In particular, we present a reward-based region discovery framework that employs a divisive grid-based supervised clustering for region discovery. We evaluate our approach in a real-world case study to identify spatial risk patterns from arsenic in the Texas water supply. Our experimental results confirm and validate research results in the study of arsenic contamination, and our work leads to the discovery of novel findings to be further explored by domain scientists.

Journal ArticleDOI
TL;DR: The research reported in this paper uses wireless sensor networks to provide salient information about spatially distributed dynamic fields, such as regional variations in temperature or concentration of a toxic gas, and develops a distributed qualitative change reporting approach that detects the qualitative changes simply based on the connectivity between the sensor nodes without location information.
Abstract: The research reported in this paper uses wireless sensor networks to provide salient information about spatially distributed dynamic fields, such as regional variations in temperature or concentration of a toxic gas. The focus is on deriving qualitative descriptions of salient changes to areas of high-activity that occur during the temporal evolution of the field. The changes reported include region merging or splitting, and hole formation or elimination. Such changes are formally characterized, and a distributed qualitative change reporting (QCR) approach is developed that detects the qualitative changes simply based on the connectivity between the sensor nodes without location information. The efficiency of the QCR approach is investigated using simulation experiments. The results show that the communication cost of the QCR approach in monitoring large-scale phenomena is an order of magnitude lower than that using the standard boundary-based data collection approach, where each node is assumed to have its location information.
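
The location-free detection can be sketched via connectivity alone: treat active sensors as a graph (edges between nodes in mutual range), compute connected components per snapshot, and compare counts across snapshots to flag merges and splits. The event rules here are simplified relative to the QCR approach.

```python
def components(nodes, edges):
    """Connected components of an undirected sensor-communication graph."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(adj[cur] - comp)
        seen |= comp
        comps.append(comp)
    return comps

before = components({1, 2, 3, 4}, [(1, 2), (3, 4)])          # two active regions
after = components({1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4)])   # link 2-3 appears
if len(after) < len(before):
    print("regions merged")    # printed: two components became one
```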

Journal ArticleDOI
TL;DR: This paper presents a data model that extends Piet, allowing the history of spatial data in the GIS layers to be tracked, and introduces a formal First-Order spatio-temporal query language, $\mathcal{L}_t$, able to express spatio-temporal queries over GIS, OLAP, and trajectory data.
Abstract: In recent years, applications aimed at exploring and analyzing spatial data have emerged, powered by the increasing need of software that integrates Geographic Information Systems (GIS) and On-Line Analytical Processing (OLAP). These applications have been called SOLAP (Spatial OLAP). In previous work, the authors have introduced Piet, a system based on a formal data model that integrates in a single framework GIS, OLAP (On-Line Analytical Processing), and Moving Object data. Real-world problems are inherently spatio-temporal. Thus, in this paper we present a data model that extends Piet, allowing the history of spatial data in the GIS layers to be tracked. We present a formal study of the two typical ways of introducing time into Piet: timestamping the thematic layers in the GIS, and timestamping the spatial objects in each layer. We denote these strategies snapshot-based and timestamp-based representations, respectively, following well-known terminology borrowed from temporal databases. We present and discuss the formal model for both alternatives. Based on the timestamp-based representation, we introduce a formal First-Order spatio-temporal query language, which we denote $\mathcal{L}_t$, able to express spatio-temporal queries over GIS, OLAP, and trajectory data. Finally, we discuss implementation issues, the update operators that must be supported by the model, and sketch a temporal extension to Piet-QL, the SQL-like query language that supports Piet.

Journal ArticleDOI
TL;DR: A hybrid semi-supervised learning algorithm is presented that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases and shows over 24% to 36% improvement in overall classification accuracy over conventional classification schemes.
Abstract: Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on spectral characteristics of thematic classes whose statistical distributions (class conditional probability densities) are often overlapping. The spectral response distributions of thematic classes are dependent on many factors including elevation, soil types, and ecological zones. A second problem with statistical classifiers is the requirement of a large number of accurate training samples (10 to 30 × |dimensions|), which are often costly and time-consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracies even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing a large number of easily available unlabeled training samples. Unfortunately, there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on Landsat satellite image datasets, and our new hybrid approach shows over 24% to 36% improvement in overall classification accuracy over conventional classification schemes.
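
The semi-supervised idea can be sketched with one spectral band and two classes: estimate per-class Gaussians from a few labeled pixels, then refine the class means with unlabeled pixels weighted by their current class probability (one soft, EM-style pass). None of this is the paper's actual multisource model.

```python
import math

def gaussian(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

labeled = {"water": [0.10, 0.12, 0.11], "forest": [0.55, 0.60, 0.52]}
unlabeled = [0.13, 0.09, 0.58, 0.50, 0.30]   # free, unlabeled pixel values

params = {c: (sum(v) / len(v), 0.01) for c, v in labeled.items()}  # (mean, var)

# One soft pass: class-membership weights for unlabeled pixels, then
# re-estimate each class mean from labeled + weighted unlabeled samples.
new_params = {}
for c, (mu, var) in params.items():
    w = [gaussian(x, mu, var) / sum(gaussian(x, *params[o]) for o in params)
         for x in unlabeled]
    num = sum(labeled[c]) + sum(wi * xi for wi, xi in zip(w, unlabeled))
    den = len(labeled[c]) + sum(w)
    new_params[c] = (num / den, var)

print({c: round(mu, 3) for c, (mu, _) in new_params.items()})
```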

Journal ArticleDOI
TL;DR: A cloud computing implementation of a scalable argumentation mapping tool to produce a participatory GeoWeb tool for deliberative democracy and illustrates the opportunities of applying a Web 2.0 model to PPGIS.
Abstract: Public participation geographic information systems (PPGIS) support collaborative decision-making in the public realm. PPGIS provide advanced communication, deliberation, and conflict resolution mechanisms to engage diverse stakeholder groups. Many of the functional characteristics of Web 2.0 echo basic PPGIS functions including the authoring, linking, and sharing of volunteered geographic information. However, with the increasing popularity of geospatial applications on the Web comes a need to develop concepts for scalable, reliable, and easy-to-maintain tools. In this paper, we propose a cloud computing implementation of a scalable argumentation mapping tool. The tool also illustrates the opportunities of applying a Web 2.0 model to PPGIS. The searching, linking, authoring, tagging, extension, and signalling (SLATES) functions are associated with PPGIS functionality to produce a participatory GeoWeb tool for deliberative democracy.

Journal ArticleDOI
TL;DR: The Soil Landscapes of Canada (SLC) as discussed by the authors is a national soil map and accompanying database of environmental information for all of Canada, produced and maintained by the Canadian Soil Information Service (CanSIS).
Abstract: The Soil Landscapes of Canada (SLC) is a national soil map and accompanying database of environmental information for all of Canada, produced and maintained by the Canadian Soil Information Service (CanSIS) which is a part of Agriculture and Agri-Food Canada. The SLC maps were originally published as a set of paper products for individual provinces and regions. The maps were digitized in CanSIS, using one of the first geographic information systems in the world, and linked to soil and landscape attribute tables to serve an evolving variety of spatial modelling applications. The SLCs form the lowest level of the National Ecological Framework for Canada. The latest public release of the SLC is version 3.2, which provides updated soil and landscape information for the agricultural areas of Canada. The SLC v3.2 digital coverage includes an extensive set of relational data tables. The component table lists the soil components in each agricultural polygon along with their predicted dominant slope, class, and ex...

Journal ArticleDOI
TL;DR: It is shown how the employment of formal grammars enables the interpretation of characteristic motion events of pairs of objects by composing motion patterns into specific qualitative features.
Abstract: When accumulating large quantities of positional data with ubiquitous positioning techniques, methods are required that can efficiently make use of these data. This work proposes a representation that approximates motion events of pairs of objects. It is shown how the employment of formal grammars enables the interpretation of such motion events. This is accomplished by composing motion patterns into specific qualitative features. In particular, the change of relative directions defines characteristic motion events.
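
A minimal sketch of the pipeline: quantize the relative position of one object with respect to the other into symbols per time step, then let a grammar (a regular expression stands in for it here) recognize a motion event. The alphabet and the "overtake" pattern are invented for illustration.

```python
import re

def direction_symbol(dx):
    """Sign of the along-track offset of object B relative to object A."""
    return "B" if dx < 0 else ("S" if dx == 0 else "F")   # Behind / Side / Front

offsets = [-3, -2, -1, 0, 1, 2, 3]           # B starts behind A and ends ahead
word = "".join(direction_symbol(d) for d in offsets)
print(word)                                  # BBBSFFF
print(bool(re.fullmatch(r"B+S?F+", word)))   # True: matches an 'overtake' event
```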

Journal ArticleDOI
TL;DR: This work presents a simple and efficient algorithm that computes the correct results, and proposes a fast approximation algorithm that returns a desirable subset of the skyline results.
Abstract: As more data-intensive applications emerge, advanced retrieval semantics, such as ranking and skylines, have attracted the attention of researchers. Geographic information systems are a good example of an application using a massive amount of spatial data. Our goal is to efficiently support exact and approximate skyline queries over massive spatial datasets. A spatial skyline query, consisting of multiple query points, retrieves data points that are not farther than any other data points from all query points. To achieve this goal, we present a simple and efficient algorithm that computes the correct results, and also propose a fast approximation algorithm that returns a desirable subset of the skyline results. In addition, we propose a continuous query algorithm to trace changes of skyline points while a query point moves. To validate the effectiveness and efficiency of our algorithm, we provide an extensive empirical comparison between our algorithms and the best known spatial skyline algorithms from several perspectives.
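
The skyline definition can be stated in a few lines: a data point survives unless some other point is at least as close to every query point and the distance vectors differ. The quadratic scan below is for clarity only; the paper's algorithms are far more efficient.

```python
import math

def spatial_skyline(points, queries):
    """Return points not dominated w.r.t. distances to all query points."""
    dvec = {p: tuple(math.dist(p, q) for q in queries) for p in points}
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    return [p for p in points
            if not any(dominates(dvec[o], dvec[p]) for o in points)]

points = [(0, 1), (2, 2), (5, 5), (6, 6)]
queries = [(1, 1), (4, 4)]
print(spatial_skyline(points, queries))   # (6, 6) is dominated by (5, 5)
```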

Journal ArticleDOI
TL;DR: The discussion explicitly addresses issues related to the iterative processes, at multiple scales, required to develop atlas projects within an academic research setting while using and creating open sour...
Abstract: Digital web atlases can incorporate perspectives derived from diverse participants or communities to create and present narratives using qualitative and quantitative information structured around a...

Journal ArticleDOI
TL;DR: This paper will offer a menu for users to choose hierarchical clustering algorithms on networks from a time complexity point of view.
Abstract: We present a general framework of hierarchical methods for point cluster analysis on networks, and then consider individual clustering procedures and their time complexities defined by typical variants of distances between clusters. The distances considered here are the closest-pair distance, the farthest-pair distance, the average distance, the median-pair distance and the radius distance. This paper will offer a menu for users to choose hierarchical clustering algorithms on networks from a time complexity point of view.
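
The framework can be sketched as agglomerative clustering with a pluggable inter-cluster distance. Euclidean distance stands in below for the network (shortest-path) distance, which is where the paper's complexity analysis actually lives.

```python
import math

# Three of the paper's inter-cluster distance variants, as plug-in linkages.
closest_pair  = lambda A, B: min(math.dist(a, b) for a in A for b in B)
farthest_pair = lambda A, B: max(math.dist(a, b) for a in A for b in B)
average_dist  = lambda A, B: (sum(math.dist(a, b) for a in A for b in B)
                              / (len(A) * len(B)))

def agglomerate(points, linkage, until=1):
    """Repeatedly merge the two clusters closest under the given linkage."""
    clusters = [[p] for p in points]
    while len(clusters) > until:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

pts = [(0, 0), (0, 1), (5, 5), (5, 6), (9, 9)]
print(agglomerate(pts, closest_pair, until=3))
print(agglomerate(pts, farthest_pair, until=3))
```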

Journal ArticleDOI
TL;DR: This work shows that addresses and positioning expressions, along with fragments such as postal codes or telephone area codes, provide satisfactory support for local search applications, since they are able to determine approximations to the physical location of services and activities named within Web pages.
Abstract: When users need to find something on the Web that is related to a place, chances are place names will be submitted along with some other keywords to a search engine. However, automatic recognition of geographic characteristics embedded in Web documents, which would allow for a better connection between documents and places, remains a difficult task. We propose an ontology-driven approach to facilitate the process of recognizing, extracting, and geocoding partial or complete references to places embedded in text. Our approach combines an extraction ontology with urban gazetteers and geocoding techniques. This ontology, called OnLocus, is used to guide the discovery of geospatial evidence from the contents of Web pages. We show that addresses and positioning expressions, along with fragments such as postal codes or telephone area codes, provide satisfactory support for local search applications, since they are able to determine approximations to the physical location of services and activities named within Web pages. Our experiments show the feasibility of performing automated address extraction and geocoding to identify locations associated with Web pages. Combining location identifiers with basic addresses improved the precision of extractions and reduced the number of false positive results.
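
The evidence-extraction step can be sketched with two extractors: pull postal codes and telephone area codes from page text and resolve the latter through a tiny gazetteer. The regexes (Brazilian-style CEP codes and area codes) and the gazetteer mapping are illustrative, not the OnLocus ontology.

```python
import re

CEP = re.compile(r"\b\d{5}-\d{3}\b")            # e.g. 31270-901
AREA_CODE = re.compile(r"\((\d{2})\)\s*\d{4}")  # e.g. (31) 3409...

gazetteer = {"31": "Belo Horizonte", "11": "Sao Paulo"}   # area code -> city

page = "Visit us at Av. Antonio Carlos 6627, 31270-901. Phone: (31) 3409-5860."
ceps = CEP.findall(page)
cities = [gazetteer.get(m) for m in AREA_CODE.findall(page)]
print(ceps, cities)     # ['31270-901'] ['Belo Horizonte']
```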

Journal ArticleDOI
TL;DR: In this article, a hierarchical image matching strategy and a multiple surface registration technique for 3D reconstruction of a scoliotic torso was proposed, where a low-cost, multi-camera photogrammetric system was used for semi-automated torso surface reconstruction with a sub-millimetre level accuracy.
Abstract: The focus of this research is a hierarchical image matching strategy and a multiple surface registration technique for 3D reconstruction of a scoliotic torso. Scoliosis is a deformity of the human spine most commonly occurring in children. After being detected, periodic examinations via x-rays are traditionally used to measure its progression. However, due to the increased risk of cancer, non-invasive and radiation-free scoliosis detection and progression monitoring methodologies are being researched. For example, quantifying the scoliotic deformity through the torso surface is a valid alternative because of its high correlation with the internal spine curvature. This work proposes a low-cost, multi-camera photogrammetric system for semi-automated 3D reconstruction of a torso surface with a sub-millimetre level accuracy. The paper first describes the system design and calibration for optimal accuracy. It then covers the reconstruction and registration procedures giving insights into the hierarchical image...

Journal ArticleDOI
TL;DR: This paper aims at creating a spatiotemporal version of the sliced representation of moving objects that supports efficient retrieval of snapshots of the past and that supports enforcing topological relationships.
Abstract: Several representations have been created to store topological information in normal spatial databases. Some work has also been done to represent topology for 3D objects, and such representations could be used to store topology for spatiotemporal objects. However, using 3D models has some disadvantages with regards to retrieving snapshots of the database. This paper aims at creating a spatiotemporal version of the sliced representation that supports efficient retrieval of snapshots of the past and that supports enforcing topological relationships. This paper aims to extend an earlier representation of moving objects so that it can also store and enforce some of the topological relationships between the objects. One use of such a representation is storing a changing spatial partition. As part of the effort to construct the model, an analysis of the topological relationships has been carried out to see which need to be stored explicitly and which can be computed from geometry. Both a basic time slice model and a 3D model are examined to determine how suitable they are for storing topological relationships. An extension of the time slice model is then proposed that solves some of the problems of the basic time slice model. Some algorithms for constructing the new model from snapshots of the objects along with an adjacency graph have been created. The paper also contains a short analysis on how to handle current time, as the time slice model is best at handling historical data, and on ways to speed up searches in a database in which objects of many types are connected to one another and many files therefore potentially need to be accessed.
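
The sliced representation the paper extends can be sketched simply: a moving object is a sorted list of time slices, each pairing a validity interval with the geometry that holds in it, so a snapshot query is one interval lookup. Topology enforcement, the paper's actual focus, is omitted.

```python
import bisect

class SlicedObject:
    def __init__(self, slices):
        """slices: sorted list of (t_start, t_end, geometry)."""
        self.slices = slices
        self.starts = [s[0] for s in slices]

    def snapshot(self, t):
        """Return the geometry valid at time t, or None if no slice covers t."""
        i = bisect.bisect_right(self.starts, t) - 1
        if i >= 0 and self.slices[i][0] <= t < self.slices[i][1]:
            return self.slices[i][2]
        return None

parcel = SlicedObject([(0, 5, "polygon A"), (5, 9, "polygon A'")])
print(parcel.snapshot(6))    # "polygon A'"
print(parcel.snapshot(12))   # None: no slice covers t = 12
```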

Journal ArticleDOI
TL;DR: This paper describes a basis for the robust geometrical construction of spatial objects in computer applications using a complex called the "Regular Polytope", and argues for a model with two- and three-dimensional objects, coded in Java, that implements a full set of topological and connectivity functions shown to be complete and rigorous.
Abstract: In order to be able to draw inferences about real world phenomena from a representation expressed in a digital computer, it is essential that the representation should have a rigorously correct algebraic structure. It is also desirable that the underlying algebra be familiar, and provide a close modelling of those phenomena. The fundamental problem addressed in this paper is that, since computers do not support real-number arithmetic, the algebraic behaviour of the representation may not be correct, and cannot directly model a mathematical abstraction of space based on real numbers. This paper describes a basis for the robust geometrical construction of spatial objects in computer applications using a complex called the "Regular Polytope". In contrast to most other spatial data types, this definition supports a rigorous logic within a finite digital arithmetic. The definition of connectivity proves to be non-trivial, and alternatives are investigated. It is shown that these alternatives satisfy the relations of a region connection calculus (RCC) as used for qualitative spatial reasoning, and thus introduce the rigor of that reasoning to geographical information systems. They also form what can reasonably be termed a "Finite Boolean Connection Algebra". The rigorous and closed nature of the algebra ensures that these primitive functions and predicates can be combined to any desired level of complexity, and thus provide a useful toolkit for data retrieval and analysis. The paper argues for a model with two- and three-dimensional objects, which have been coded in Java and implement a full set of topological and connectivity functions that is shown to be complete and rigorous.
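
The underlying problem can be demonstrated with a standard orientation predicate: over floats its sign sits within rounding noise near degeneracy, while exact rational arithmetic always yields a trustworthy sign. This illustrates the motivation only; the Regular Polytope construction itself is not reproduced here.

```python
from fractions import Fraction

def orient(a, b, c):
    """>0 if a->b->c turns left, <0 right, 0 collinear; exact over rationals."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

# Nearly collinear points: the float result is within rounding noise of zero.
a, b = (0.0, 0.0), (1.0, 1.0)
c = (0.5, 0.5000000000000001)
print(orient(a, b, c))                    # ~1.1e-16: magnitude near rounding error

F = lambda p: (Fraction(p[0]), Fraction(p[1]))
print(orient(F(a), F(b), F(c)))           # exact positive rational: reliable sign
```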

Journal ArticleDOI
TL;DR: This paper focuses on spatial partitioning of controlling variables that are attributed to a particular range of a response variable, based on the association analysis technique of identifying emerging patterns, which is extended in order to be applied more effectively to geospatial data sets.
Abstract: Modeling spatially distributed phenomena in terms of its controlling factors is a recurring problem in geoscience. Most efforts concentrate on predicting the value of response variable in terms of controlling variables either through a physical model or a regression model. However, many geospatial systems comprise complex, nonlinear, and spatially non-uniform relationships, making it difficult to even formulate a viable model. This paper focuses on spatial partitioning of controlling variables that are attributed to a particular range of a response variable. Thus, the presented method surveys spatially distributed relationships between predictors and response. The method is based on the association analysis technique of identifying emerging patterns, which is extended in order to be applied more effectively to geospatial data sets. The outcome of the method is a list of spatial footprints, each characterized by a unique "controlling pattern": a list of specific values of predictors that locally correlate with a specified value of response variable. Mapping the controlling footprints reveals geographic regionalization of relationship between predictors and response. The data mining underpinnings of the method are given and its application to a real world problem is demonstrated using an expository example focusing on determining a variety of environmental associations of high vegetation density across the continental United States.
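
The emerging-pattern machinery can be sketched directly: a "controlling pattern" is a combination of predictor values whose support among locations with the target response greatly exceeds its support elsewhere (its growth rate). Thresholds and the toy records below are invented.

```python
from itertools import combinations

records = [  # (predictor values, high vegetation density?)
    ({"wet", "warm"}, True), ({"wet", "warm"}, True), ({"wet", "cool"}, True),
    ({"dry", "warm"}, False), ({"dry", "cool"}, False), ({"wet", "warm"}, False),
]

def growth_rate(pattern):
    """Support among positive locations divided by support among negatives."""
    pos = [r for r, y in records if y]
    neg = [r for r, y in records if not y]
    sup_p = sum(pattern <= r for r in pos) / len(pos)
    sup_n = sum(pattern <= r for r in neg) / len(neg)
    return sup_p / sup_n if sup_n else float("inf")

items = set().union(*(r for r, _ in records))
for k in (1, 2):
    for pat in map(set, combinations(items, k)):
        if growth_rate(pat) >= 2:            # illustrative threshold
            print(sorted(pat), round(growth_rate(pat), 2))
```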

Journal ArticleDOI
TL;DR: In this paper, a simple and accurate model for extracting the optical characteristics (intrinsic parameters) as well as the 3D position and orientation of the camera based on image coordinate measurements for targets of known 3-D position is presented.
Abstract: Camera calibration is a process whereby the geometric characteristics of a specific camera are determined. It is performed so that the photograph obtained can be used to produce accurate measurements. This paper presents a simple and accurate model for extracting the optical characteristics (intrinsic parameters) as well as the 3D position and orientation of the camera (extrinsic parameters) based on image coordinate measurements for targets of known 3-D position. The proposed algorithm is divided into two major steps. First, the behaviour of a grid of projected lines is studied and the required local corrections are applied. Such corrections are calculated using Composed Cubic Splines. This function covers radial, decentering and prism distortions. It provides a more realistic model of distortion which is represented as a composed surface. Then, calibration parameters are estimated using a pinhole model based on the minimization of a linear criterion relating the 3D coordinates of the targets with their ...
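
The abstract's vocabulary can be grounded with a pinhole projection: intrinsic parameters (focal length, principal point) and extrinsic parameters (rotation, translation) map a known 3-D target to image coordinates. The numbers are illustrative, and the paper's spline-based distortion model and parameter estimation are not shown.

```python
def project(X, f, cx, cy, R, t):
    """X: 3-D point; f: focal length; (cx, cy): principal point;
    R: 3x3 rotation (row tuples); t: translation. Returns pixel (u, v)."""
    xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    return (f * xc[0] / xc[2] + cx, f * xc[1] / xc[2] + cy)

I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))          # camera looking down +Z
print(project((0.1, -0.2, 2.0), f=800, cx=320, cy=240, R=I3, t=(0, 0, 0)))
# (360.0, 160.0): u = 800*0.1/2 + 320, v = 800*(-0.2)/2 + 240
```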