Author

Yao-Yi Chiang

Other affiliations: Google, University of Minnesota
Bio: Yao-Yi Chiang is an academic researcher from the University of Southern California. He has contributed to research in topics including raster graphics and computer science, has an h-index of 19, and has co-authored 85 publications receiving 1,745 citations. Previous affiliations of Yao-Yi Chiang include Google and the University of Minnesota.


Papers
Journal ArticleDOI
TL;DR: This article presents an overview of existing map processing techniques, bringing together the past and current research efforts in this interdisciplinary field, to characterize the advances that have been made, and to identify future research directions and opportunities.
Abstract: Maps depict natural and human-induced changes on earth at a fine resolution for large areas and over long periods of time. In addition, maps—especially historical maps—are often the only information source about the earth as surveyed using geodetic techniques. In order to preserve these unique documents, increasing numbers of digital map archives have been established, driven by advances in software and hardware technologies. Since the early 1980s, researchers from a variety of disciplines, including computer science and geography, have been working on computational methods for the extraction and recognition of geographic features from archived images of maps (digital map processing). The typical result from map processing is geographic information that can be used in spatial and spatiotemporal analyses in a Geographic Information System environment, which benefits numerous research fields in the spatial, social, environmental, and health sciences. However, map processing literature is spread across a broad range of disciplines in which maps are included as a special type of image. This article presents an overview of existing map processing techniques, with the goal of bringing together the past and current research efforts in this interdisciplinary field, to characterize the advances that have been made, and to identify future research directions and opportunities.
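A core step the survey above describes is extracting geographic features from scanned map images. As a minimal, hypothetical sketch (the pixel values and tolerance below are invented), one common starting point is isolating the pixels of a single map layer by color similarity; real map-processing pipelines follow this with denoising, vectorization, and georeferencing:

```python
def extract_color_layer(pixels, target, tol=40):
    """Binary mask of RGB pixels within `tol` of `target` on every channel."""
    return [[all(abs(c - t) <= tol for c, t in zip(px, target)) for px in row]
            for row in pixels]

# Tiny synthetic "scanned map": a 4x4 RGB image with two blue line pixels.
blue, black = (20, 40, 200), (0, 0, 0)
img = [[black] * 4 for _ in range(4)]
img[1][1] = blue
img[2][3] = blue

mask = extract_color_layer(img, target=(0, 50, 210))
print(sum(sum(row) for row in mask))  # -> 2 (both blue pixels recovered)
```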

674 citations

Journal ArticleDOI
TL;DR: An overview of key concepts surrounding the evolving and interdisciplinary field of geoAI including spatial data science, machine learning, deep learning, and data mining; recent geoAI applications in research; and potential future directions for geoAI in environmental epidemiology are provided.
Abstract: Geospatial artificial intelligence (geoAI) is an emerging scientific discipline that combines innovations in spatial science, artificial intelligence methods in machine learning (e.g., deep learning), data mining, and high-performance computing to extract knowledge from spatial big data. In environmental epidemiology, exposure modeling is a commonly used approach to conduct exposure assessment to determine the distribution of exposures in study populations. geoAI technologies provide important advantages for exposure modeling in environmental epidemiology, including the ability to incorporate large amounts of big spatial and temporal data in a variety of formats; computational efficiency; flexibility in algorithms and workflows to accommodate relevant characteristics of spatial (environmental) processes including spatial nonstationarity; and scalability to model other environmental exposures across different geographic areas. The objectives of this commentary are to provide an overview of key concepts surrounding the evolving and interdisciplinary field of geoAI including spatial data science, machine learning, deep learning, and data mining; recent geoAI applications in research; and potential future directions for geoAI in environmental epidemiology.

102 citations

Proceedings ArticleDOI
06 Nov 2018
TL;DR: This paper presents an approach for forecasting short-term PM2.5 concentrations using a deep learning model, the geo-context based diffusion convolutional recurrent neural network, GC-DCRNN, and captures the temporal dependency leveraging the sequence to sequence encoder-decoder architecture.
Abstract: Forecasting spatially correlated time series data is challenging because of the linear and non-linear dependencies in the temporal and spatial dimensions. Air quality forecasting is one canonical example of such tasks. Existing work, e.g., auto-regressive integrated moving average (ARIMA) and artificial neural network (ANN), either fails to model the non-linear temporal dependency or cannot effectively consider spatial relationships between multiple spatial time series data. In this paper, we present an approach for forecasting short-term PM2.5 concentrations using a deep learning model, the geo-context based diffusion convolutional recurrent neural network, GC-DCRNN. The model describes the spatial relationship by constructing a graph based on the similarity of the built environment between the locations of air quality sensors. The similarity is computed using the surrounding "important" geographic features regarding their impact on air quality for each location (e.g., the area size of parks within a 1000-meter buffer, the number of factories within a 500-meter buffer). Also, the model captures the temporal dependency leveraging the sequence-to-sequence encoder-decoder architecture. We evaluate our model on two real-world air quality datasets and observe a consistent improvement of 5%-10% over baseline approaches.
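The "geo-context" graph construction described above can be illustrated with a small sketch: each sensor is summarized by a feature vector of surrounding built-environment measurements, and an edge connects sensors whose vectors are similar. The feature values, similarity measure (cosine), and threshold below are all invented for illustration; the actual GC-DCRNN feeds such a graph into diffusion convolutions:

```python
import math

# Invented built-environment features per sensor:
# [park area within 1000 m (km^2), number of factories within 500 m]
sensors = {
    "A": [0.8, 2],
    "B": [0.7, 3],
    "C": [0.1, 9],
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Connect sensors whose geo-context vectors are highly similar.
edges = [(i, j) for i in sensors for j in sensors
         if i < j and cosine(sensors[i], sensors[j]) > 0.98]
print(edges)  # -> [('A', 'B')]  (A and B share a similar built environment)
```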

96 citations

Patent
28 Jun 2005
TL;DR: In this article, vector-imagery conflation is used to align vector data and imagery over large geographical regions, which is applicable for GIS applications requiring alignment of vector data, raster maps, and imagery.
Abstract: Automatic conflation systems and techniques which provide vector-imagery conflation and map-imagery conflation. Vector-imagery conflation is an efficient approach that exploits knowledge from multiple data sources to identify a set of accurate control points. Vector-imagery conflation provides automatic and accurate alignment of various vector datasets and imagery, and is appropriate for GIS applications, for example, requiring alignment of vector data and imagery over large geographical regions. Map-imagery conflation utilizes common vector datasets as “glue” to automatically integrate street maps with imagery. This approach provides automatic, accurate, and intelligent images that combine the visual appeal and accuracy of imagery with the detailed attribution information often contained in such diverse maps. Both conflation approaches are applicable for GIS applications requiring, for example, alignment of vector data, raster maps, and imagery. If desired, the conflated data generated by such systems may be retrieved on-demand.
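The control-point idea at the heart of conflation can be sketched in a few lines: given matched control points (the same feature located in both the vector data and the imagery), estimate a transformation and apply it to the vector layer. The coordinates below are invented and the transformation is a single global translation; real conflation systems use many control points and local transformations such as rubber-sheeting:

```python
# Matched control points: the same road intersections located in both sources.
vector_pts = [(10.0, 20.0), (30.0, 40.0)]   # positions in the vector dataset
imagery_pts = [(12.0, 19.0), (32.0, 39.0)]  # same intersections in the imagery

# Estimate a translation as the average offset over all control-point pairs.
n = len(vector_pts)
dx = sum(ix - vx for (vx, _), (ix, _) in zip(vector_pts, imagery_pts)) / n
dy = sum(iy - vy for (_, vy), (_, iy) in zip(vector_pts, imagery_pts)) / n

# Shift the vector layer onto the imagery.
aligned = [(x + dx, y + dy) for x, y in vector_pts]
print(aligned)  # -> [(12.0, 19.0), (32.0, 39.0)]
```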

85 citations

Proceedings ArticleDOI
TL;DR: An information integration approach that utilizes common vector datasets as "glue" to automatically conflate imagery with street maps is described and efficient techniques to automatically extract road intersections from imagery and maps as control points are presented.
Abstract: Recent growth of the geospatial information on the web has made it possible to easily access various maps and orthoimagery. By integrating these maps and imagery, we can create intelligent images that combine the visual appeal and accuracy of imagery with the detailed attribution information often contained in diverse maps. However, accurately integrating maps and imagery from different data sources remains a challenging task. This is because spatial data obtained from various data sources may have different projections and different accuracy levels. Most of the existing algorithms only deal with vector-to-vector spatial data integration or require human intervention to accomplish imagery-to-map conflation. In this paper, we describe an information integration approach that utilizes common vector datasets as "glue" to automatically conflate imagery with street maps. We present efficient techniques to automatically extract road intersections from imagery and maps as control points. We also describe a specialized point pattern matching algorithm to align the two point sets and conflation techniques to align the imagery with maps. We show that these automatic conflation techniques can automatically and accurately align maps with images of the same area. In particular, using the approach described in this paper, our system automatically aligns a set of TIGER maps for an area in El Segundo, CA to the corresponding orthoimagery with an average error of 8.35 meters per pixel. This is a significant improvement considering that simply combining the TIGER maps with the corresponding imagery based on geographic coordinates provided by the sources results in an error of 27 meters per pixel.
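The point-pattern-matching step described above can be illustrated with a brute-force sketch: try the translation implied by each candidate pairing of a map intersection with an imagery intersection, and keep the offset that aligns the most points within a tolerance. The coordinates are invented, and the paper's algorithm is far more efficient than this exhaustive search:

```python
# Road intersections extracted from the map and from the imagery.
# The imagery contains the same pattern shifted by (3, 2) plus one spurious point.
map_pts = [(0, 0), (5, 0), (0, 5)]
img_pts = [(3, 2), (8, 2), (3, 7), (20, 20)]

def inliers(offset, src, dst, tol=0.5):
    """Count src points that land within `tol` of some dst point after shifting."""
    ox, oy = offset
    return sum(1 for sx, sy in src
               if any(abs(sx + ox - dx) <= tol and abs(sy + oy - dy) <= tol
                      for dx, dy in dst))

# Every pairing of a map point with an imagery point implies a candidate offset.
candidates = [(ix - ax, iy - ay) for ax, ay in map_pts for ix, iy in img_pts]
best = max(candidates, key=lambda o: inliers(o, map_pts, img_pts))
print(best)  # -> (3, 2): the true shift aligns all three map intersections
```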

74 citations


Cited by
01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance and describes numerous important application areas such as image based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and also offers sufficient detail to be able to build useful applications. Users learn techniques that have proven to be useful by first-hand experience and a wide range of mathematical methods. A CD-ROM with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image based rendering and digital libraries. Many important algorithms broken down and illustrated in pseudo code. Appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

01 Jan 2006

3,012 citations

Patent
14 Jun 2016
TL;DR: Newness and distinctiveness is claimed in the features of ornamentation shown inside the broken-line circle in the accompanying representation.
Abstract: Newness and distinctiveness is claimed in the features of ornamentation as shown inside the broken line circle in the accompanying representation.

1,500 citations

Proceedings ArticleDOI
04 Nov 2009
TL;DR: The results show that the ST-Matching algorithm significantly outperforms the incremental algorithm in matching accuracy for low-sampling-rate trajectories, and that, compared with the AFD-based global algorithm, ST-Matching improves both accuracy and running time.
Abstract: Map-matching is the process of aligning a sequence of observed user positions with the road network on a digital map. It is a fundamental pre-processing step for many applications, such as moving object management, traffic flow analysis, and driving directions. In practice there exist huge amounts of low-sampling-rate (e.g., one point every 2--5 minutes) GPS trajectories. Unfortunately, most current map-matching approaches only deal with high-sampling-rate (typically one point every 10--30 s) GPS data, and become less effective for low-sampling-rate points as the uncertainty in the data increases. In this paper, we propose a novel global map-matching algorithm called ST-Matching for low-sampling-rate GPS trajectories. ST-Matching considers (1) the spatial geometric and topological structures of the road network and (2) the temporal/speed constraints of the trajectories. Based on spatio-temporal analysis, a candidate graph is constructed from which the best matching path sequence is identified. We compare ST-Matching with the incremental algorithm and the Average-Frechet-Distance (AFD) based global map-matching algorithm. The experiments are performed on both synthetic and real datasets. The results show that our ST-Matching algorithm significantly outperforms the incremental algorithm in matching accuracy for low-sampling-rate trajectories. Meanwhile, compared with the AFD-based global algorithm, ST-Matching also improves accuracy as well as running time.
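The candidate-graph idea behind map-matching can be sketched as a small dynamic program: each GPS point gets candidate road positions scored by proximity (an observation score), consecutive candidates get a transition score, and Viterbi-style backtracking picks the best path. All coordinates and scoring functions below are toy stand-ins; ST-Matching's actual scores use road topology and speed constraints:

```python
import math

def observation(gps, cand, sigma=20.0):
    """Gaussian proximity score: closer candidates score higher."""
    d = math.dist(gps, cand)
    return math.exp(-(d * d) / (2 * sigma * sigma))

def transition(prev, cur):
    """Toy transition score favoring short hops between consecutive candidates."""
    return 1.0 / (1.0 + math.dist(prev, cur))

gps_track = [(0, 0), (10, 1)]
candidates = [[(0, 2), (0, 30)],    # road positions near the 1st GPS point
              [(10, 0), (10, 28)]]  # road positions near the 2nd GPS point

# Viterbi-style dynamic program over the candidate layers.
scores = [observation(gps_track[0], c) for c in candidates[0]]
back = []
for t in range(1, len(gps_track)):
    new, choices = [], []
    for c in candidates[t]:
        best_i = max(range(len(candidates[t - 1])),
                     key=lambda i: scores[i] * transition(candidates[t - 1][i], c))
        choices.append(best_i)
        new.append(scores[best_i] * transition(candidates[t - 1][best_i], c)
                   * observation(gps_track[t], c))
    scores, back = new, back + [choices]

# Backtrack from the best final candidate to recover the matched path.
path = [max(range(len(scores)), key=lambda i: scores[i])]
for choices in reversed(back):
    path.append(choices[path[-1]])
path = [candidates[t][i] for t, i in enumerate(reversed(path))]
print(path)  # -> [(0, 2), (10, 0)]: the near-road candidates win at both steps
```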

817 citations