Author
Simon Scheider
Other affiliations: ETH Zurich, University of Münster, University of California, Santa Barbara
Bio: Simon Scheider is an academic researcher from Utrecht University. The author has contributed to research in topics including Computer science and the Semantic Web. The author has an h-index of 18 and has co-authored 71 publications receiving 1071 citations. Previous affiliations of Simon Scheider include ETH Zurich and the University of Münster.
Papers published on a yearly basis
Papers
••
TL;DR: The research field of geospatial semantics is outlined, major research directions and trends are highlighted, and future challenges are surveyed.
Abstract: The Geosciences and Geography are not just yet another application area for semantic technologies. The vast heterogeneity of the involved disciplines ranging from the natural sciences to the social sciences introduces new challenges in terms of interoperability. Moreover, the inherent spatial and temporal information components also require distinct semantic approaches. For these reasons, geospatial semantics, geo-ontologies, and semantic interoperability have been active research areas over the last 20 years. The geospatial semantics community has been among the early adopters of the Semantic Web, contributing methods, ontologies, use cases, and datasets. Today, geographic information is a crucial part of many central hubs on the Linked Data Web. In this editorial, we outline the research field of geospatial semantics, highlight major research directions and trends, and glance at future challenges. We hope that this text will be valuable for geoscientists interested in semantics research as well as knowledge engineers interested in spatiotemporal data.
144 citations
••
02 Sep 2013
TL;DR: This paper introduces an ontology design pattern for semantic trajectories and discusses the formalization of the pattern using the Web Ontology Language (OWL) and applies the pattern to two different scenarios, personal travel and wildlife monitoring.
Abstract: Trajectory data have been used in a variety of studies, including human behavior analysis, transportation management, and wildlife tracking. While each study area introduces a different perspective, they share the need to integrate positioning data with domain-specific information. Semantic annotations are necessary to improve discovery, reuse, and integration of trajectory data from different sources. Consequently, it would be beneficial if the common structure encountered in trajectory data could be annotated based on a shared vocabulary, abstracting from domain-specific aspects. Ontology design patterns are an increasingly popular approach to define such flexible and self-contained building blocks of annotations. They appear more suitable for the annotation of interdisciplinary, multi-thematic, and multi-perspective data than the use of foundational and domain ontologies alone. In this paper, we introduce such an ontology design pattern for semantic trajectories. It was developed as a community effort across multiple disciplines and in a data-driven fashion. We discuss the formalization of the pattern using the Web Ontology Language (OWL) and apply the pattern to two different scenarios, personal travel and wildlife monitoring.
116 citations
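The core of the trajectory pattern described above can be sketched in plain Python. This is an illustrative simplification only, not the OWL formalization from the paper; the class and field names (Fix, Segment, annotations) are hypothetical stand-ins for the pattern's vocabulary.

```python
from dataclasses import dataclass, field

@dataclass
class Fix:
    """A time-stamped position along a trajectory (hypothetical name)."""
    t: float  # timestamp, e.g. seconds since epoch
    x: float  # longitude or easting
    y: float  # latitude or northing

@dataclass
class Segment:
    """A segment between consecutive fixes, carrying semantic annotations."""
    start: Fix
    end: Fix
    annotations: dict = field(default_factory=dict)  # domain-specific labels

def build_segments(fixes):
    """Chain fixes into segments: the shared, domain-neutral backbone."""
    return [Segment(a, b) for a, b in zip(fixes, fixes[1:])]

fixes = [Fix(0, 5.12, 52.09), Fix(60, 5.13, 52.10), Fix(120, 5.15, 52.11)]
segments = build_segments(fixes)
# Domain-specific aspects attach as annotations, mirroring the two scenarios:
segments[0].annotations["mode"] = "walking"       # personal travel
segments[1].annotations["behavior"] = "foraging"  # wildlife monitoring
```

The point of the sketch is the separation: the fix/segment backbone is shared across domains, while annotations carry the domain-specific perspective.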
••
26 May 2013
TL;DR: This work presents an ontology design pattern for map scaling using the Web Ontology Language (OWL) within a particular extension of the OWL RL profile, and proposes an axiomatization that imposes meaningful constraints on the pattern and thus goes beyond simple surface semantics.
Abstract: The concept of scale is at the core of cartographic abstraction and mapping. It defines which geographic phenomena should be displayed, which type of geometry and map symbol to use, which measures can be taken, and the degree to which features need to be exaggerated or spatially displaced. In this work, we present an ontology design pattern for map scaling using the Web Ontology Language (OWL) within a particular extension of the OWL RL profile. We explain how it can be used to describe scaling applications and to reason over scale levels and geometric representations. We propose an axiomatization that allows us to impose meaningful constraints on the pattern and, thus, to go beyond simple surface semantics. Interestingly, this includes several functional constraints currently not expressible in any of the OWL profiles. We show that for this specific scenario the addition of such constraints does not increase the reasoning complexity, which remains tractable.
45 citations
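One of the functional constraints the abstract mentions can be mimicked in plain Python: each (feature type, scale level) pair maps to exactly one geometry representation. This is only a sketch under assumptions; the scale levels and feature types below are hypothetical, and the paper's actual axiomatization lives in OWL, not Python.

```python
# Hypothetical scale levels, ordered coarse -> fine.
SCALE_LEVELS = ["overview", "regional", "detail"]

# A dict key can map to only one value, so this table is functional by
# construction: one geometry representation per (feature type, level).
REPRESENTATION = {
    ("city", "overview"): "point",
    ("city", "regional"): "point",
    ("city", "detail"):   "polygon",
}

def geometry_for(feature_type, level):
    """Resolve the single geometry used to render a feature at a level."""
    return REPRESENTATION[(feature_type, level)]
```

The dictionary enforces functionality trivially; expressing the same "exactly one" constraint declaratively is what requires going beyond the standard OWL profiles.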
••
TL;DR: This paper proposes a formal theory about relevant types of activities and their involved participants, and shows how place referents can be identified and localized by choosing locators and locatum among the participants.
Abstract: Reference to places is a central but largely underexposed problem of information science. Place has been a major object of research in many domains including Geography, Cognitive Science and Geographic Information Science. However, Geographic Information Systems (GIS) have been built solely on space reference systems creating a gap between human conceptualization and machine representation. While reference to space only partially captures reference to place, most existing definitions of place either reduce the latter to the former or lack a formal characterization of how places are constructed. In a spatial coordinate system, locations are referenced by angles and distances to other referents. In this paper, we suggest that place reference systems can be built based on localizing things (locatums) involved in simulated activities relative to other involved referents (locators). We propose a formal theory about relevant types of activities and their involved participants, and show how place referents can be identified and localized by choosing locators and locatum among the participants. We formally derive an ontology of places, publish a corresponding OWL version, and demonstrate how to compute a market place and a vantage place in a GIS.
44 citations
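The localization idea above can be illustrated numerically: just as a spatial coordinate system references locations by angles and distances to other referents, a locatum can be localized relative to chosen locators. The function and the referent names below are hypothetical illustrations, not the paper's formal theory.

```python
import math

def localize(locatum, locator):
    """Return (distance, bearing in degrees) of a locatum from a locator."""
    dx, dy = locatum[0] - locator[0], locatum[1] - locator[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx)) % 360

# Hypothetical participants of a market activity, in local map units:
market = (2.0, 2.0)                      # locatum: where trading happens
church, bridge = (0.0, 0.0), (4.0, 2.0)  # locators: other participants

from_church = localize(market, church)
from_bridge = localize(market, bridge)
```

Choosing different locators from among the activity's participants yields different, but equally valid, place references for the same locatum.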
••
21 Sep 2009
TL;DR: This work argues, first, that human or technical sensors implement semantic datums, and, second, that primitive symbols are definable from the meaningful environment, a formalized quality space established through such sensors.
Abstract: Ontologies are a common approach to improve semantic interoperability by explicitly specifying the vocabulary used by a particular information community. Complex expressions are defined in terms of primitive ones. This shifts the problem of semantic interoperability to the problem of how to ground primitive symbols. One approach is semantic datums, which determine reproducible mappings (measurement scales) from observable structures to symbols. Measurement theory offers a formal basis for such mappings. From an ontological point of view, this leaves two important questions unanswered. Which qualities provide semantic datums? How are these qualities related to the primitive entities in our ontology? Based on a scenario from hydrology, we first argue that human or technical sensors implement semantic datums, and second that primitive symbols are definable from the meaningful environment, a formalized quality space established through such sensors.
44 citations
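A semantic datum, as described above, fixes a reproducible mapping from an observed quality to symbols. A minimal sketch, assuming a hydrology-style scenario: the function name and thresholds below are hypothetical, chosen only to make the measurement-scale idea concrete.

```python
def water_level_scale(gauge_metres):
    """Map an observed gauge reading onto ordinal symbols.

    The thresholds are hypothetical; the point is that the mapping is
    fixed and reproducible, so it can serve as a semantic datum.
    """
    if gauge_metres < 1.0:
        return "low"
    if gauge_metres < 2.5:
        return "normal"
    return "flood"

# Two sensors implementing the same semantic datum produce the same
# symbol for the same quality value -- grounding the primitive terms.
readings = [0.4, 1.8, 3.1]
symbols = [water_level_scale(r) for r in readings]
```

Interoperability follows because any party that adopts the same datum maps the same observation to the same primitive symbol.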
Cited by
••
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
13,246 citations
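The mail-filtering example in the abstract can be sketched as a tiny learner: from messages the user has already accepted or rejected, learn per-word reject scores and apply them to new mail. This is a deliberately minimal frequency-based sketch with hypothetical function names, not a production spam filter.

```python
from collections import Counter

def train(messages):
    """messages: list of (text, rejected: bool). Return word score table."""
    rejected, accepted = Counter(), Counter()
    for text, is_rejected in messages:
        (rejected if is_rejected else accepted).update(text.lower().split())
    vocab = set(rejected) | set(accepted)
    # Laplace-smoothed fraction of rejected occurrences per word.
    return {w: (rejected[w] + 1) / (rejected[w] + accepted[w] + 2)
            for w in vocab}

def predict_reject(scores, text, threshold=0.5):
    """Reject a message when its known words lean toward rejected mail."""
    words = [w for w in text.lower().split() if w in scores]
    if not words:
        return False
    return sum(scores[w] for w in words) / len(words) > threshold

# The user's history replaces hand-written rules and can be relearned
# as the user's preferences change.
history = [
    ("win money now", True),
    ("cheap money offer", True),
    ("meeting agenda attached", False),
    ("project meeting notes", False),
]
scores = train(history)
```

Updating the filter is just retraining on the growing history, which is exactly the "constantly modifying and tuning a set of learned prediction rules" the text describes.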
••
2,276 citations
••
TL;DR: The sf package implements simple features in R, and has roughly the same capacity for spatial vector data as packages sp, rgeos and rgdal, and its place in the R package ecosystem, and the potential to connect R to other computer systems are described.
Abstract: Simple features are a standardized way of encoding spatial vector data (points, lines, polygons) in computers. The sf package implements simple features in R, and has roughly the same capacity for spatial vector data as packages sp, rgeos and rgdal. We describe the need for this package, its place in the R package ecosystem, and its potential to connect R to other computer systems. We illustrate this with examples of its use.
What are simple features? Features can be thought of as "things" or objects that have a spatial location or extent; they may be physical objects like a building, or social conventions like a political state. Feature geometry refers to the spatial properties (location or extent) of a feature, and can be described by a point, a point set, a linestring, a set of linestrings, a polygon, a set of polygons, or a combination of these. The adjective simple refers to the property that linestrings and polygons are built from points connected by straight line segments. Features typically also have other properties (temporal properties, color, name, measured quantity), which are called feature attributes. Not all spatial phenomena are easy to represent by "things or objects": continuous phenomena such as water temperature or elevation are better represented as functions mapping from continuous or sampled space (and time) to values (Scheider et al., 2016), and are often represented by raster data rather than vector (points, lines, polygons) data. Simple feature access (Herring, 2011) is an international standard for representing and encoding spatial data, dominantly represented by point, line and polygon geometries (ISO, 2004). It is widely used, e.g. by spatial databases (Herring, 2010), GeoJSON (Butler et al., 2016), GeoSPARQL (Perry and Herring, 2012), and open source libraries that empower the open source geospatial software landscape, including GDAL (Warmerdam, 2008), GEOS (GEOS Development Team, 2017) and liblwgeom (a PostGIS component, Obe and Hsu (2015)).
The need for a new package. Package sf (Pebesma, 2017) is an R package for reading, writing, handling and manipulating simple features in R, reimplementing the vector (points, lines, polygons) data handling functionality of packages sp (Pebesma and Bivand, 2005; Bivand et al., 2013), rgdal (Bivand et al., 2017) and rgeos (Bivand and Rundel, 2017). However, sp has some 400 direct reverse dependencies, and a few thousand indirect ones. Why was there a need to write a package with the potential to replace it? First, at the time sp was written (2003) there was no standard for simple features, and the ESRI shapefile was by far the dominant file format for exchanging vector data. The lack of a clear (open) standard for shapefiles, the omnipresence of "bad" or malformed shapefiles, and the many limitations of the ways it can represent spatial data adversely affected sp, for instance in the way it represents holes in polygons and its lack of discipline in registering holes with their enclosing outer ring. Such ambiguities could influence plotting of data, or communication with other systems or libraries. The simple feature access standard is now widely adopted, but the sp package family has to make assumptions and do conversions to load such data into R. This means that you cannot round-trip data, that is: load data into R, manipulate it, export it, and get the same geometries back. With sf, this is no longer a problem. A second reason was that external libraries heavily used by R packages for reading and writing spatial data (GDAL) and for geometrical operations (GEOS) have developed stronger support for the simple feature standard.
A third reason was that the package cluster now known as the tidyverse (Wickham, 2017, 2014), which includes popular packages such as dplyr (Wickham et al., 2017) and ggplot2 (Wickham, 2016), does not work well with the spatial classes of sp:
• tidyverse packages assume objects not only behave like data.frames (which sp objects do by providing methods), but are data.frames in the sense of being a list with equally sized column vectors, which sp objects are not;
• attempts to "tidy" polygon objects for plotting with ggplot2 ("fortify") by creating data.frame objects with records for each polygon node (vertex) were neither robust nor efficient.
A simple (S3) way to store geometries in data.frame or similar objects is to put them in a geometry list-column, where each list element contains the geometry object of the corresponding record, or data.frame "row"; this works well with the tidyverse package family.
1,625 citations
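The geometry list-column idea from the abstract can be mimicked in plain Python (the paper's implementation is in R; the table contents and helper below are illustrative assumptions): ordinary attribute columns sit alongside a list column whose elements are the per-row geometry objects.

```python
# A data.frame analogue: a dict of equally sized column vectors.
table = {
    "name":     ["station", "park"],
    "visitors": [1200, 300],
    # Geometry list-column: one geometry object per row, here encoded
    # as (simple-feature type, coordinates) tuples.
    "geometry": [
        ("POINT",   (5.11, 52.09)),
        ("POLYGON", [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]),
    ],
}

def geometry_type(table, i):
    """Look up the simple-feature geometry type of row i."""
    return table["geometry"][i][0]
```

Because the geometry column is just another equally sized column, row-wise operations (filtering, joining, grouping) carry the geometries along for free, which is what makes the design tidyverse-friendly in R.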
01 Jan 1974
TL;DR: In rural and small-town Nevada, brothels are legal or openly tolerated and strictly controlled by state statute, city and county ordinances, and local rules, as discussed by the authors; the legal and quasi-legal restrictions placed on prostitutes severely limit their activities outside brothels.
Abstract: Thirty-three brothels in rural and small-town Nevada, which contain between 225 and 250 prostitutes, are legal or openly tolerated and strictly controlled by state statute, city and county ordinances, and local rules. Twenty-two of the brothels are in places with populations between 500 and 8,000, and the remaining eleven are in rural areas. The legal and quasi-legal restrictions placed on prostitutes severely limit their activities outside brothels. These restrictions in conjunction with historical inertia, perceived benefits of crime and venereal disease control, and the good image of madams contribute to widespread positive local attitudes toward brothel prostitution. Interactions between clients and prostitutes in brothel parlors are also restricted and limited to a few basic types which are largely determined by entrepreneurial philosophy. KEY WORDS : Nevada, Political geography, Prostitution, Restricted activity spaces.
931 citations