Author

Klaus Engel

Bio: Klaus Engel is an academic researcher from Siemens. The author has contributed to research in the areas of volume rendering and rendering (computer graphics). The author has an h-index of 26 and has co-authored 127 publications receiving 3,615 citations. Previous affiliations of Klaus Engel include Philips and the University of Stuttgart.


Papers
Proceedings ArticleDOI
01 Aug 2001
TL;DR: A novel texture-based volume rendering approach that achieves the image quality of the best post-shading approaches with far fewer slices; it targets new, flexible consumer graphics hardware and is suited to interactive high-quality volume graphics.
Abstract: We introduce a novel texture-based volume rendering approach that achieves the image quality of the best post-shading approaches with far fewer slices. It is suitable for new, flexible consumer graphics hardware and provides high image quality even for low-resolution volume data and non-linear transfer functions with high frequencies, without the performance overhead caused by rendering additional interpolated slices. This is especially useful for volumetric effects in computer games and professional scientific volume visualization, which depend heavily on memory bandwidth and rasterization power. We present an implementation of the algorithm on current programmable consumer graphics hardware using multi-textures with advanced texture fetch and pixel shading operations. We implemented direct volume rendering, volume shading, an arbitrary number of isosurfaces, and mixed-mode rendering. The performance depends neither on the number of isosurfaces nor on the definition of the transfer functions, and the approach is therefore suited for interactive high-quality volume graphics.
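
The key idea behind avoiding extra interpolated slices is widely known as pre-integrated classification: the volume rendering integral across the slab between two adjacent slices is precomputed into a 2D lookup table addressed by the scalar values at the slab's front and back faces, so even high-frequency transfer functions are captured without additional sampling. The sketch below builds such a table in NumPy; the function name, the prefix-sum integration, and the neglect of self-attenuation inside a slab are simplifying assumptions, not the paper's exact construction.

```python
import numpy as np

def build_preintegration_table(tf_rgba, slab_width=1.0):
    """Tabulate the volume rendering integral over one slab for every
    pair (sf, sb) of front/back scalar values.  tf_rgba: (n, 4) RGBA
    transfer function with extinction per unit length in column 3.
    Simplified sketch: self-attenuation within a slab is ignored."""
    n = tf_rgba.shape[0]
    tau = tf_rgba[:, 3]                          # extinction per scalar value
    col = tf_rgba[:, :3] * tau[:, None]          # opacity-weighted color
    # Prefix sums give O(1) averages over any scalar interval [sf, sb].
    cum_tau = np.concatenate(([0.0], np.cumsum(tau)))
    cum_col = np.vstack(([0.0, 0.0, 0.0], np.cumsum(col, axis=0)))
    table = np.zeros((n, n, 4))
    for sf in range(n):
        for sb in range(n):
            lo, hi = min(sf, sb), max(sf, sb)
            m = hi - lo + 1                      # samples in the interval
            avg_tau = (cum_tau[hi + 1] - cum_tau[lo]) / m
            avg_col = (cum_col[hi + 1] - cum_col[lo]) / m
            table[sf, sb, 3] = 1.0 - np.exp(-avg_tau * slab_width)
            table[sf, sb, :3] = avg_col * slab_width
    return np.clip(table, 0.0, 1.0)
```

At render time, a fragment program would fetch table[sf, sb] with a dependent texture read, using the scalar values interpolated at the slab entry and exit points.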

590 citations

Book
21 Jul 2006
TL;DR: This course teaches techniques for harnessing the power of consumer graphics hardware and high-level shading languages for real-time rendering of volumetric data and effects, covering local and global illumination, scattering, pre-integration, implicit surfaces and non-polygonal isosurfaces, transfer function design, and deformation.
Abstract: The tremendous evolution of programmable graphics hardware has made high-quality real-time volume graphics a reality. In addition to the traditional application of rendering volume data in scientific visualization, the interest in applying these techniques for real-time rendering of atmospheric phenomena and participating media such as fire, smoke, and clouds is growing rapidly. This course covers both applications in scientific visualization, e.g., medical volume data, and real-time rendering, such as advanced effects and illumination in computer games, in detail. Course participants will learn techniques for harnessing the power of consumer graphics hardware and high-level shading languages for real-time rendering of volumetric data and effects. Beginning with basic texture-based approaches including hardware ray casting, the algorithms are improved and expanded incrementally, covering local and global illumination, scattering, pre-integration, implicit surfaces and non-polygonal isosurfaces, transfer function design, volume animation and deformation, dealing with large volumes, high-quality volume clipping, rendering segmented volumes, higher-order filtering, and non-photorealistic volume rendering. Course participants are provided with documented source code covering details usually omitted in publications.
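
All of the slice-based and ray-casting techniques covered by the course rest on the same core operation: front-to-back "over" compositing of classified RGBA samples along each viewing ray. For reference, a minimal Python sketch of that recurrence (the sample layout is an assumption):

```python
import numpy as np

def composite_front_to_back(samples):
    """Accumulate RGBA samples along one ray, nearest sample first:
        C += (1 - A) * c_i * a_i
        A += (1 - A) * a_i
    Opacity saturates toward 1 as the ray penetrates the volume."""
    color, alpha = np.zeros(3), 0.0
    for s in np.asarray(samples, dtype=float):   # each s = (r, g, b, a)
        color += (1.0 - alpha) * s[:3] * s[3]
        alpha += (1.0 - alpha) * s[3]
    return color, alpha
```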

575 citations

Proceedings ArticleDOI
01 Aug 2000
TL;DR: This paper proposes new rendering techniques that significantly improve both performance and image quality of the 2D-texture-based approach and demonstrates how multi-stage rasterization hardware can be used to efficiently render shaded isosurfaces and to compute diffuse illumination for semi-transparent volume rendering at interactive frame rates.
Abstract: Interactive direct volume rendering has so far been restricted to high-end graphics workstations and special-purpose hardware, due to the large number of trilinear interpolations necessary to obtain high image quality. Implementations that use the 2D-texture capabilities of standard PC hardware usually render object-aligned slices in order to substitute bilinear for trilinear interpolation. However, the resulting images often contain visual artifacts caused by the lack of spatial interpolation. In this paper we propose new rendering techniques that significantly improve both the performance and the image quality of the 2D-texture-based approach. We show how the multi-texturing capabilities of modern consumer PC graphics boards can be exploited to enable interactive, high-quality volume visualization on low-cost hardware. Furthermore, we demonstrate how multi-stage rasterization hardware can be used to efficiently render shaded isosurfaces and to compute diffuse illumination for semi-transparent volume rendering at interactive frame rates.
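
The multi-texturing trick at the heart of the paper can be summarized in one line of arithmetic: fetch two adjacent object-aligned slices (bilinearly filtered by the hardware) and blend them by the fractional slice position, which supplies the missing third interpolation axis. A NumPy stand-in with assumed names:

```python
import numpy as np

def intermediate_slice(slice_a, slice_b, frac):
    """Blend two adjacent object-aligned slices (each already
    bilinearly filtered in hardware) by the fractional slice
    position frac in [0, 1]; together with the two bilinear
    fetches this amounts to trilinear interpolation."""
    return (1.0 - frac) * np.asarray(slice_a) + frac * np.asarray(slice_b)
```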

336 citations

Journal ArticleDOI
TL;DR: Clipping methods capable of using complex geometries for volume clipping are proposed, and an optical model is introduced to merge aspects of surface-based and volume-based illumination in order to achieve consistent shading of the clipping surface.
Abstract: We propose clipping methods that are capable of using complex geometries for volume clipping. The clipping tests exploit per-fragment operations on the graphics hardware to achieve high frame rates. In combination with texture-based volume rendering, these techniques enable the user to interactively select and explore regions of the data set. We present depth-based clipping techniques that analyze the depth structure of the boundary representation of the clip geometry to decide which parts of the volume have to be clipped. In another approach, a voxelized clip object is used to identify the clipped regions. Furthermore, the combination of volume clipping and volume shading is considered. An optical model is introduced to merge aspects of surface-based and volume-based illumination in order to achieve a consistent shading of the clipping surface. It is demonstrated how this model can be efficiently incorporated in the aforementioned clipping techniques.
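
To illustrate the voxelized variant: clipping reduces to a per-sample mask lookup before compositing. The sketch below uses assumed names and a nearest-neighbor lookup where the actual technique would use filtered texture fetches per fragment on the GPU.

```python
import numpy as np

def clip_sample(sample_rgba, clip_volume, pos):
    """Voxelized clipping: a binary volume encodes the clip object;
    each volume sample is kept or discarded by a lookup at its
    position (1 = visible, 0 = clipped away)."""
    i, j, k = np.clip(np.round(pos).astype(int), 0,
                      np.array(clip_volume.shape) - 1)
    return sample_rgba * clip_volume[i, j, k]
```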

190 citations

Proceedings Article
01 Jan 2006
TL;DR: In this article, the authors present techniques for harnessing the power of consumer graphics hardware and high-level shading languages for real-time rendering of volumetric data and effects.

137 citations


Cited by
Journal ArticleDOI
TL;DR: This work studies the visual manifestations of different weather conditions, models the chromatic effects of atmospheric scattering and verifies the model for fog and haze, and derives several geometric constraints on scene color changes caused by varying atmospheric conditions.
Abstract: Current vision systems are designed to perform in clear weather. Needless to say, in any outdoor application, there is no escape from “bad” weather. Ultimately, computer vision systems must include mechanisms that enable them to function (even if somewhat less reliably) in the presence of haze, fog, rain, hail and snow. We begin by studying the visual manifestations of different weather conditions. For this, we draw on what is already known about atmospheric optics, and identify effects caused by bad weather that can be turned to our advantage. Since the atmosphere modulates the information carried from a scene point to the observer, it can be viewed as a mechanism of visual information coding. We exploit two fundamental scattering models and develop methods for recovering pertinent scene properties, such as three-dimensional structure, from one or two images taken under poor weather conditions. Next, we model the chromatic effects of the atmospheric scattering and verify it for fog and haze. Based on this chromatic model we derive several geometric constraints on scene color changes caused by varying atmospheric conditions. Finally, using these constraints we develop algorithms for computing fog or haze color, depth segmentation, extracting three-dimensional structure, and recovering “clear day” scene colors, from two or more images taken under different but unknown weather conditions.
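
The image formation behind this analysis is the classic attenuation-plus-airlight model, E(d) = R e^(-beta d) + A (1 - e^(-beta d)): direct transmission from the scene point decays with depth d while scattered airlight A fills in. A small sketch with assumed parameter names; because two images taken under different (unknown) beta give two such equations per pixel, depth structure and "clear day" colors become recoverable.

```python
import numpy as np

def weather_image(clear_radiance, airlight, beta, depth):
    """Attenuation + airlight model of a scene point viewed through
    fog or haze: direct transmission R * exp(-beta * d) plus
    scattered airlight A * (1 - exp(-beta * d))."""
    t = np.exp(-beta * depth)            # transmission along the view ray
    return clear_radiance * t + airlight * (1.0 - t)
```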

1,325 citations

Book ChapterDOI
TL;DR: The possibilities to collect and store data are growing faster than our ability to use it for making decisions; in most applications, raw data has no value in itself, and the goal is to extract the information it contains.
Abstract: We are living in a world which faces a rapidly increasing amount of data to be dealt with on a daily basis. In the last decade, the steady improvement of data storage devices and means to create and collect data along the way influenced our way of dealing with information: Most of the time, data is stored without filtering and refinement for later use. Virtually every branch of industry or business, and any political or personal activity nowadays generate vast amounts of data. Making matters worse, the possibilities to collect and store data increase at a faster rate than our ability to use it for making decisions. However, in most applications, raw data has no value in itself; instead we want to extract the information contained in it.

1,047 citations

Proceedings ArticleDOI
22 Oct 2003
TL;DR: This paper describes volume ray casting on programmable graphics hardware as an alternative to object-order approaches, exploiting the early z-test to terminate fragment processing once sufficient opacity has been accumulated and to skip empty space along the rays of sight.
Abstract: Nowadays, direct volume rendering via 3D textures has positioned itself as an efficient tool for the display and visual analysis of volumetric scalar fields. It is commonly accepted that, for reasonably sized data sets, appropriate quality at interactive rates can be achieved by means of this technique. However, despite these benefits, one important issue has received little attention throughout the ongoing discussion of texture-based volume rendering: the integration of acceleration techniques to reduce per-fragment operations. In this paper, we address the integration of early ray termination and empty-space skipping into texture-based volume rendering on graphics processing units (GPUs). To this end, we describe volume ray casting on programmable graphics hardware as an alternative to object-order approaches. We exploit the early z-test to terminate fragment processing once sufficient opacity has been accumulated, and to skip empty space along the rays of sight. We demonstrate performance gains of up to a factor of 3 for typical renditions of volumetric data sets on the ATI 9700 graphics card.
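
Both accelerations map directly onto a ray-marching loop: terminate a ray as soon as its accumulated opacity crosses a threshold (realized on the GPU through the early z-test), and jump over regions flagged as empty. A CPU-side Python sketch; sample_volume and empty_distance are hypothetical helpers standing in for the classified texture fetch and the empty-space data structure.

```python
import numpy as np

def raycast(origin, direction, step, max_t,
            sample_volume, empty_distance, cutoff=0.95):
    """March one ray front to back with early ray termination and
    empty-space skipping.  sample_volume(pos) -> (rgb, a) is the
    classified trilinear fetch; empty_distance(pos) -> t returns a
    distance guaranteed to contain nothing visible (0 if occupied)."""
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    color, alpha, t = np.zeros(3), 0.0, 0.0
    while t < max_t:
        pos = origin + t * direction
        skip = empty_distance(pos)
        if skip > 0.0:                   # empty-space skipping
            t += skip
            continue
        rgb, a = sample_volume(pos)
        color += (1.0 - alpha) * np.asarray(rgb) * a
        alpha += (1.0 - alpha) * a
        if alpha >= cutoff:              # early ray termination
            break
        t += step
    return color, alpha
```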

885 citations

Journal ArticleDOI
TL;DR: A CT system with energy detection capabilities is presented and used to demonstrate the feasibility of quantitative K-edge CT imaging experimentally, and a phenomenological model for the detector response and the energy bin sensitivities is derived.
Abstract: Theoretical considerations predicted the feasibility of K-edge x-ray computed tomography (CT) imaging using energy discriminating detectors with more than two energy bins. This technique enables material-specific imaging in CT, which in combination with high-Z element based contrast agents, opens up possibilities for new medical applications. In this paper, we present a CT system with energy detection capabilities, which was used to demonstrate the feasibility of quantitative K-edge CT imaging experimentally. A phantom was imaged containing PMMA, calcium-hydroxyapatite, water and two contrast agents based on iodine and gadolinium, respectively. Separate images of the attenuation by photoelectric absorption and Compton scattering were reconstructed from energy-resolved projection data using maximum-likelihood basis-component decomposition. The data analysis further enabled the display of images of the individual contrast agents and their concentrations, separated from the anatomical background. Measured concentrations of iodine and gadolinium were in good agreement with the actual concentrations. Prior to the tomographic measurements, the detector response functions for monochromatic illumination using synchrotron radiation were determined in the energy range 25 keV-60 keV. These data were used to calibrate the detector and derive a phenomenological model for the detector response and the energy bin sensitivities.
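
The decomposition step can be illustrated in its linearized form: with calibrated effective basis attenuations per energy bin, the negative log of the normalized bin counts is approximately linear in the basis line integrals and can be solved per detector reading by least squares. The paper itself uses a maximum-likelihood fit against the measured detector response; the sketch below, with assumed names, is only that simplified stand-in.

```python
import numpy as np

def basis_line_integrals(bin_counts, incident_counts, basis_matrix):
    """Linearized basis-material decomposition for energy-binned CT:
    solve  -ln(N_b / N0_b) ~ sum_m M[b, m] * a_m  for the basis line
    integrals a (e.g. photoelectric, Compton, K-edge contrast agent).
    basis_matrix M: effective attenuation of basis m in energy bin b."""
    y = -np.log(np.asarray(bin_counts, float) /
                np.asarray(incident_counts, float))
    a, *_ = np.linalg.lstsq(basis_matrix, y, rcond=None)
    return a
```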

807 citations

Journal ArticleDOI
TL;DR: 3D/2D registration methods are reviewed with respect to image modality, image dimensionality, registration basis, geometric transformation, user interaction, optimization procedure, subject, and object of registration.

744 citations