Author

Oliver G. Staadt

Bio: Oliver G. Staadt is an academic researcher from the University of Rostock. The author has contributed to research in topics including Rendering (computer graphics) and Virtual reality. The author has an h-index of 21 and has co-authored 106 publications receiving 2,324 citations. Previous affiliations of Oliver G. Staadt include the University of California, Davis and ETH Zurich.


Papers
Proceedings ArticleDOI
01 Jul 2003
TL;DR: The blue-c portal as discussed by the authors combines simultaneous acquisition of multiple live video streams with advanced 3D projection technology in a CAVE-like environment, creating the impression of total immersion.
Abstract: We present blue-c, a new immersive projection and 3D video acquisition environment for virtual design and collaboration. It combines simultaneous acquisition of multiple live video streams with advanced 3D projection technology in a CAVE™-like environment, creating the impression of total immersion. The blue-c portal currently consists of three rectangular projection screens that are built from glass panels containing liquid crystal layers. These screens can be switched from a whitish opaque state (for projection) to a transparent state (for acquisition), which allows the video cameras to "look through" the walls. Our projection technology is based on active stereo using two LCD projectors per screen. The projectors are synchronously shuttered along with the screens, the stereo glasses, active illumination devices, and the acquisition hardware. From multiple video streams, we compute a 3D video representation of the user in real time. The resulting video inlays are integrated into a networked virtual environment. Our design is highly scalable, enabling blue-c to connect to portals with less sophisticated hardware.
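As an illustration of the time-multiplexing principle described in this abstract (not the actual blue-c control software), the following Python sketch steps through one display frame divided into synchronized projection and acquisition slots; the phase names, ordering, and device states are illustrative assumptions.

```python
# Minimal sketch (not blue-c firmware): one display frame time-multiplexed
# into projection and acquisition phases, as the abstract describes.
# Phase names and the slot ordering are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Phase:
    name: str                # e.g. "project_left"
    screen: str              # "opaque" for projection, "transparent" for acquisition
    projector_shutter: bool  # True = projector light reaches the screen
    glasses: str             # which stereo eye is open, or "closed"
    camera_exposes: bool     # True while cameras "look through" the wall

# Every device follows the same schedule, so projection light never
# contaminates the camera exposure.
FRAME_SCHEDULE = [
    Phase("project_left",  "opaque",      True,  "left",   False),
    Phase("acquire",       "transparent", False, "closed", True),
    Phase("project_right", "opaque",      True,  "right",  False),
    Phase("acquire",       "transparent", False, "closed", True),
]

def run_frame(schedule):
    """Print the synchronized device states for each slot of a single frame."""
    for slot, p in enumerate(schedule):
        projector = "on" if p.projector_shutter else "off"
        camera = "exposing" if p.camera_exposes else "idle"
        print(f"slot {slot}: {p.name}: screen={p.screen}, "
              f"projector={projector}, glasses={p.glasses}, camera={camera}")

if __name__ == "__main__":
    run_frame(FRAME_SCHEDULE)
```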

300 citations

Proceedings ArticleDOI
25 Mar 2006
TL;DR: Quantifying the effects of large high-resolution displays on human performance and other aspects is important as the authors look toward future advances in display technology and how it is applied in different situations.
Abstract: Continued advances in display hardware, computing power, networking, and rendering algorithms have all converged to dramatically improve large high-resolution display capabilities. We present a survey on prior research with large high-resolution displays. In the hardware configurations section we examine systems including multi-monitor workstations, reconfigurable projector arrays, and others. Rendering and the data pipeline are addressed with an overview of current technologies. We discuss many applications for large high-resolution displays such as automotive design, scientific visualization, control centers, and others. Quantifying the effects of large high-resolution displays on human performance and other aspects is important as we look toward future advances in display technology and how it is applied in different situations. Interacting with these displays brings a different set of challenges for HCI professionals, so an overview of some of this work is provided. Finally, we present our view of the top ten greatest challenges in large high-resolution displays.

240 citations

Journal ArticleDOI
TL;DR: The algorithm has low algorithmic complexity, so that surface meshing can be achieved at interactive rates, such as those required by flight simulators; other applications are possible as well.
Abstract: We present a method for adaptive surface meshing and triangulation which controls the local level of detail of the surface approximation by local spectral estimates. These estimates are determined by a wavelet representation of the surface data. The basic idea is to decompose the initial data set by means of an orthogonal or semi-orthogonal tensor product wavelet transform (WT) and to analyze the resulting coefficients. In surface regions where the partial energy of the resulting coefficients is low, the polygonal approximation of the surface can be performed with larger triangles without losing too much fine-grain detail. However, since the localization of the WT is bound by the Heisenberg principle, the meshing method has to be controlled by the detail signals rather than directly by the coefficients. The dyadic scaling of the WT stimulated us to build a hierarchical meshing algorithm which transforms the initially regular data grid into a quadtree representation by rejecting unimportant mesh vertices. The optimum triangulation of the resulting quadtree cells is carried out by selection from a look-up table. The tree grows recursively, as controlled by detail signals which are computed from a modified inverse WT. In order to control the local level of detail, we introduce a new class of wavelet space filters acting as "magnifying glasses" on the data. We show that our algorithm has low algorithmic complexity, so that surface meshing can be achieved at interactive rates, such as those required by flight simulators; other applications are possible as well.
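A minimal Python sketch of the core idea, assuming a Haar tensor-product wavelet transform and a plain energy threshold (the paper's detail-signal filters and look-up-table triangulation are not reproduced): decompose a height field, measure the detail energy of each candidate quadtree cell, and split only where that energy is high.

```python
# Minimal sketch of wavelet-driven quadtree refinement for a height field,
# assuming a Haar tensor-product transform and a simple energy threshold.
# This is not the paper's algorithm; names and thresholds are illustrative.
import numpy as np

def haar2d_detail_energy(block):
    """One Haar analysis step; return the energy in the detail subbands."""
    a = block[0::2, 0::2]; b = block[0::2, 1::2]
    c = block[1::2, 0::2]; d = block[1::2, 1::2]
    lh = (a - b + c - d) / 4.0   # horizontal details
    hl = (a + b - c - d) / 4.0   # vertical details
    hh = (a - b - c + d) / 4.0   # diagonal details
    return float(np.sum(lh**2 + hl**2 + hh**2))

def build_quadtree(height, x0, y0, size, threshold, leaves):
    """Recursively split cells whose wavelet detail energy is high."""
    block = height[y0:y0 + size, x0:x0 + size]
    if size <= 2 or haar2d_detail_energy(block) < threshold:
        leaves.append((x0, y0, size))          # flat enough: keep one big cell
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            build_quadtree(height, x0 + dx, y0 + dy, half, threshold, leaves)

if __name__ == "__main__":
    n = 64
    yy, xx = np.mgrid[0:n, 0:n] / n
    terrain = np.sin(6 * np.pi * xx) * np.exp(-8 * (yy - 0.5) ** 2)  # toy height field
    cells = []
    build_quadtree(terrain, 0, 0, n, threshold=1e-2, leaves=cells)
    print(f"{len(cells)} leaf cells instead of {(n // 2) ** 2} uniform cells")
```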

133 citations

Proceedings ArticleDOI
18 Oct 1998
TL;DR: The paper describes some fundamental issues for robust implementations of progressively refined tetrahedralizations generated through sequences of edge collapses and addresses the definition of appropriate cost functions.
Abstract: The paper describes some fundamental issues for robust implementations of progressively refined tetrahedralizations generated through sequences of edge collapses. We address the definition of appropriate cost functions and explain the various tests that are necessary to preserve the consistency of the mesh when collapsing edges. Although considered a special case of progressive simplicial complexes (J. Popovic and H. Hoppe, 1997), the results of our method are of high practical importance and can be used in many different applications, such as finite element meshing, scattered data interpolation, or rendering of unstructured volume data.
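The sketch below illustrates a single edge-collapse step on a tetrahedral mesh with one illustrative cost function (squared edge length) and one consistency test (rejecting collapses that invert the signed volume of a surviving tetrahedron); the paper's actual cost functions and its full set of validity tests are more involved.

```python
# Minimal sketch of one edge collapse in a tetrahedral mesh, assuming a
# simple cost (squared edge length) and one consistency test (no surviving
# tetrahedron may flip the sign of its volume). Not the paper's method.
import numpy as np

def signed_volume(p0, p1, p2, p3):
    """Signed volume of a tetrahedron; the sign flips if the element inverts."""
    return np.dot(np.cross(p1 - p0, p2 - p0), p3 - p0) / 6.0

def collapse_cost(verts, edge):
    """Illustrative cost: squared edge length (shorter edges collapse first)."""
    a, b = edge
    return float(np.sum((verts[a] - verts[b]) ** 2))

def try_collapse(verts, tets, edge):
    """Collapse edge (a, b) into its midpoint if no surviving tet inverts.

    Returns (new_verts, new_tets), or None when the collapse is rejected.
    """
    a, b = edge
    new_verts = verts.copy()
    new_verts[a] = 0.5 * (verts[a] + verts[b])   # a becomes the merged vertex

    new_tets = []
    for tet in tets:
        if a in tet and b in tet:
            continue                              # tet degenerates, drop it
        remapped = [a if v == b else v for v in tet]
        pts = [new_verts[v] for v in remapped]
        if signed_volume(*pts) <= 0.0:
            return None                           # inversion: reject collapse
        new_tets.append(remapped)
    return new_verts, new_tets

if __name__ == "__main__":
    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                      [1, 1, 1]], dtype=float)
    tets = [[0, 1, 2, 3], [1, 2, 3, 4]]
    result = try_collapse(verts, tets, edge=(3, 4))
    print("collapse accepted" if result else "collapse rejected")
```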

127 citations

Proceedings ArticleDOI
29 Oct 1995
TL;DR: A new method for adaptive surface meshing and triangulation which controls the local level-of-detail of the surface approximation by local spectral estimates, and introduces a new class of wavelet space filters acting as "magnifying glasses" on the data.
Abstract: Presents a new method for adaptive surface meshing and triangulation which controls the local level-of-detail of the surface approximation by local spectral estimates. These estimates are determined by a wavelet representation of the surface data. The basic idea is to decompose the initial data set by means of an orthogonal or semi-orthogonal tensor product wavelet transform (WT) and to analyze the resulting coefficients. In surface regions where the partial energy of the resulting coefficients is low, the polygonal approximation of the surface can be performed with larger triangles without losing too much fine-grain detail. However, since the localization of the WT is bound by the Heisenberg principle, the meshing method has to be controlled by the detail signals rather than directly by the coefficients. The dyadic scaling of the WT stimulated us to build a hierarchical meshing algorithm which transforms the initially regular data grid into a quadtree representation by rejection of unimportant mesh vertices. The optimum triangulation of the resulting quadtree cells is carried out by selection from a look-up table. The tree grows recursively, as controlled by the detail signals, which are computed from a modified inverse WT. In order to control the local level-of-detail, we introduce a new class of wavelet space filters acting as "magnifying glasses" on the data.

125 citations


Cited by
Journal ArticleDOI
01 Aug 2004
TL;DR: This paper shows how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms, and develops a novel temporal two-layer compressed representation that handles matting.
Abstract: The ability to interactively control viewpoint while watching a video is an exciting application of image-based rendering. The goal of our work is to render dynamic scenes with interactive viewpoint control using a relatively small number of video cameras. In this paper, we show how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms. Once these video streams have been processed, we can synthesize any intermediate view between cameras at any time, with the potential for space-time manipulation. In our approach, we first use a novel color segmentation-based stereo algorithm to generate high-quality photoconsistent correspondences across all camera views. Mattes for areas near depth discontinuities are then automatically extracted to reduce artifacts during view synthesis. Finally, a novel temporal two-layer compressed representation that handles matting is developed for rendering at interactive rates.
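As a toy illustration of one step mentioned in this abstract, the sketch below marks a matte region around depth discontinuities using a simple gradient threshold and dilation; the threshold and radius are assumptions, and the paper's segmentation-based stereo and two-layer compressed representation are not reproduced.

```python
# Minimal sketch: mark a matte region around depth discontinuities so view
# synthesis can treat boundary pixels separately. The jump threshold and
# dilation radius are illustrative assumptions.
import numpy as np

def depth_discontinuities(depth, jump=0.1):
    """Boolean mask of pixels where depth changes by more than `jump`."""
    dx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    dy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    return (dx > jump) | (dy > jump)

def dilate(mask, radius=2):
    """Grow a boolean mask by `radius` pixels using shifted copies (wraps at edges)."""
    out = mask.copy()
    for sy in range(-radius, radius + 1):
        for sx in range(-radius, radius + 1):
            out |= np.roll(np.roll(mask, sy, axis=0), sx, axis=1)
    return out

if __name__ == "__main__":
    # Toy depth map: a near square (depth 0.3) in front of a far plane (1.0).
    depth = np.full((64, 64), 1.0)
    depth[20:44, 20:44] = 0.3
    matte = dilate(depth_discontinuities(depth), radius=2)
    print(f"{matte.sum()} of {matte.size} pixels fall in the boundary matte")
```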

1,677 citations

Journal ArticleDOI
TL;DR: The field of AR is described, including a brief definition and development history, the enabling technologies and their characteristics, and some known limitations regarding human factors in the use of AR systems that developers will need to overcome.
Abstract: We are on the verge of ubiquitously adopting Augmented Reality (AR) technologies to enhance our perception and help us see, hear, and feel our environments in new and enriched ways. AR will support us in fields such as education, maintenance, design and reconnaissance, to name but a few. This paper describes the field of AR, including a brief definition and development history, the enabling technologies and their characteristics. It surveys the state of the art by reviewing some recent applications of AR technology as well as some known limitations regarding human factors in the use of AR systems that developers will need to overcome.

1,526 citations

Book
02 Jan 1991

1,377 citations

Journal ArticleDOI
TL;DR: In this paper, the main problems and the available solutions for the generation of 3D models from terrestrial images are addressed, and the full pipeline is presented for 3D modelling from terrestrial image data, considering the different approaches and analyzing all the steps involved.
Abstract: In this paper the main problems and the available solutions for the generation of 3D models from terrestrial images are addressed. Close range photogrammetry has dealt for many years with manual or automatic image measurements for precise 3D modelling. Nowadays 3D scanners are also becoming a standard source of input data in many application areas, but image-based modelling still remains the most complete, economical, portable, flexible and widely used approach. In this paper the full pipeline for 3D modelling from terrestrial image data is presented, considering the different approaches and analysing all the steps involved.
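To make such a pipeline concrete, here is a hedged sketch of its first stages (feature detection, matching, and relative orientation from the essential matrix) using OpenCV as one possible toolkit; the image paths and camera matrix K are placeholders, and dense matching, bundle adjustment, and surface generation are omitted.

```python
# Minimal sketch of the first stages of an image-based modelling pipeline
# (feature detection, matching, relative orientation) with OpenCV.
# Image paths and the camera matrix K are placeholder assumptions.
import cv2
import numpy as np

def relative_pose(img1_path, img2_path, K):
    """Estimate the relative rotation and translation between two calibrated views."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    if img1 is None or img2 is None:
        raise FileNotFoundError("replace the placeholder image paths with real images")

    # Detect and describe local features in both views.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching to reject ambiguous correspondences.
    matcher = cv2.BFMatcher()
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Essential matrix with RANSAC, then decompose into R and t.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t

if __name__ == "__main__":
    # Placeholder intrinsics and file names; replace with calibrated values.
    K = np.array([[1200.0, 0.0, 640.0],
                  [0.0, 1200.0, 480.0],
                  [0.0, 0.0, 1.0]])
    R, t = relative_pose("view_0.jpg", "view_1.jpg", K)
    print("R =", R, "\nt =", t.ravel())
```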

848 citations