
Showing papers by "Niklas Peinecke published in 2008"


Proceedings ArticleDOI
09 Dec 2008
TL;DR: This work describes an approach for simulating Lidar sensors on modern computer graphics hardware, making heavy use of recent technologies such as vertex and fragment shaders, and presents a vertex shader solution written in GLSL, the OpenGL shading language.
Abstract: Modern enhanced and synthetic vision systems (EVS/SVS) often make use of the fusion of multi-sensor data. Thus there is a demand for simulated sensor data in order to test and evaluate those systems at an early stage. We describe an approach for simulating Lidar sensors based on modern computer graphics hardware, making heavy use of recent technologies like vertex and fragment shaders. This approach has been successfully used for simulating millimeter wave radar sensors before. It is shown that a multi-sensor simulation suite integrating sensors as different as millimeter wave radar, Lidar, or infrared can be realized using principally similar software techniques, thus allowing for a unified, comprehensive simulator. This approach allows us to use a single consistent database for multi-sensor fusion. Recent graphics hardware offers the possibility of carrying out a variety of tasks in the graphics processing unit (GPU) as opposed to the traditional approach of doing most computations in the computer's CPU. Using vertex and fragment shaders makes these tasks particularly easy. We present a vertex shader solution written in GLSL, the OpenGL shading language. The program computes all view transformations and shading information necessary for Lidar simulation in one pass. This allows high frame rates for real-time simulations of even complex flight scenes.
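The abstract does not reproduce the shader source; purely as an illustration, here is a minimal sketch of what a one-pass GLSL vertex shader for such a Lidar range/shading pass might look like (legacy GLSL built-ins of that era; all variable names are assumptions, not the authors' code):

```glsl
// Illustrative sketch only (assumed names, legacy GLSL): compute slant range
// and an eye-space normal per vertex in a single pass, as the paper describes.
varying float range;        // slant range from the sensor to the vertex
varying vec3  normal_eye;   // surface normal in eye space, for intensity shading

void main()
{
    vec4 pos_eye = gl_ModelViewMatrix * gl_Vertex;      // vertex in sensor (eye) space
    range        = length(pos_eye.xyz);                 // distance sensor -> vertex
    normal_eye   = normalize(gl_NormalMatrix * gl_Normal);
    gl_Position  = gl_ProjectionMatrix * pos_eye;       // standard projective transform
}
```

A matching fragment shader would then turn the interpolated range and normal into the simulated Lidar return for each pixel.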

40 citations


Proceedings ArticleDOI
03 Apr 2008
TL;DR: A new implementation of an imaging radar simulator based on modern computer graphics hardware, making heavy use of recent technologies such as vertex and fragment shaders, and generating radar shadows by implementing shadow map techniques in the programmable graphics hardware.
Abstract: Extending previous work by Doehler and Bollmeyer, we describe a new implementation of an imaging radar simulator. Our approach is based on modern computer graphics hardware, making heavy use of recent technologies like vertex and fragment shaders. Furthermore, to allow for a nearly realistic image, we generate radar shadows by implementing shadow map techniques in the programmable graphics hardware. The particular implementation is tailored to imitate millimeter wave (MMW) radar but could easily be extended to other types of radar systems.
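No code is given in the abstract; the following is only a rough sketch of how a shadow-map visibility test for radar shadowing could look in a GLSL fragment shader (the depth texture rendered from the antenna position, the varying names, and the power term are all assumptions):

```glsl
// Illustrative sketch only (assumed names, legacy GLSL): fragments that are
// occluded as seen from the radar antenna receive no power, i.e. radar shadow.
uniform sampler2DShadow shadowMap;   // depth map rendered from the antenna position
varying vec4  shadowCoord;           // fragment position projected into antenna space
varying float returnPower;           // simulated power return computed upstream

void main()
{
    // 1.0 if the fragment is visible from the antenna, 0.0 if it is shadowed.
    float visible = shadow2DProj(shadowMap, shadowCoord).r;
    gl_FragColor  = vec4(vec3(returnPower * visible), 1.0);
}
```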

32 citations


Proceedings ArticleDOI
02 Oct 2008
TL;DR: A novel approach for computing the mean normalized radar cross section for use in millimeter wave radar simulations based on Phong lighting is presented, which allows us to model radar power return in an intuitive way using categories of diffuse and specular reflections.
Abstract: Radar simulation involves the computation of a radar response based on the terrain's normalized radar cross-section (RCS). In the past, different models have been proposed for modeling the normalized RCS. While being accurate in most cases, they lack intuitive handling. We present a novel approach for computing the mean normalized radar cross-section for use in millimeter wave radar simulations based on Phong lighting. This allows us to model radar power return in an intuitive way using categories of diffuse and specular reflections. The model is computationally more efficient than previous approaches while using only a few parameters. Furthermore, we give example setups for different types of terrain. We show that our technique can accurately model data output from other approaches as well as real-world data.
Keywords: imaging radar, radar simulation, Phong lighting, millimeter wave radar, normalized radar cross-section, BRDF
1. INTRODUCTION: The task of radar simulation comprises at least two aspects. The first is to generate a radar image where each part of the scene geometry is mapped to the appropriate location on the radar screen. This involves carrying out non-linear transformations of the objects that are to be displayed. For a more comprehensive description the reader is referred to Peinecke et al.
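The abstract does not state the exact parameterization; as a hypothetical illustration of a Phong-style split into diffuse and specular contributions (the function and the parameters kd, ks, n are placeholders, not the paper's notation), such a term could be sketched in GLSL as:

```glsl
// Hypothetical Phong-inspired normalized RCS term (illustration only).
// N: surface normal, L: direction towards the radar, V: viewing direction;
// kd, ks, n would be chosen per terrain class (grass, asphalt, water, ...).
float sigma0(vec3 N, vec3 L, vec3 V, float kd, float ks, float n)
{
    float diffuse  = max(dot(N, L), 0.0);            // rough-surface (diffuse) return
    vec3  R        = reflect(-L, N);                 // mirror reflection direction
    float specular = pow(max(dot(R, V), 0.0), n);    // smooth-surface (specular) return
    return kd * diffuse + ks * specular;             // per-terrain-class mix
}
```

For a monostatic radar the illumination and viewing directions coincide, so L and V would be the same vector.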

15 citations


Proceedings ArticleDOI
03 Apr 2008
TL;DR: In this article, a "stereo radar" approach is used to reconstruct the missing elevation from a series of images, similar to reconstructing depth information from photographs taken from different viewpoints.
Abstract: To improve the situation awareness of an aircrew during poor visibility, different approaches have emerged during the past couple of years. Enhanced vision systems (EVS, based upon sensor images) are one of those. They improve situation awareness of the crew, but at the same time introduce certain operational deficits. EVS present sensor data which might be difficult to interpret, especially if the sensor used is a radar sensor. In particular, an unresolved problem of fast scanning forward looking radar systems in the millimeter waveband is the inability to measure the elevation of a target. In order to circumvent this problem, an effort was made to reconstruct the missing elevation from a series of images. This could be described as a "stereo radar" attempt and is similar to the reconstruction using photography (angle-angle images) from different viewpoints to rebuild the depth information. Two radar images (range-angle images) with different bank angles can be used to reconstruct the elevation of targets. This paper presents the fundamental idea and the methods of the reconstruction. Furthermore, experiences with real data from EADS's "HiVision" MMCW radar are discussed. Two different approaches are investigated: First, a fusion of images with variable bank angles is calculated for different elevation layers, and picture processing reveals identical objects in these layers. Those objects are compared regarding contrast and dimension to extract their elevation. The second approach compares short fusion pairs of two different flights with different, nearly constant bank angles. Accumulating those pairs with different offsets delivers the exact elevation.
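As a rough geometric illustration only (a simplified small-angle sketch, not the authors' actual derivation): banking the aircraft rotates the scan plane about the roll axis, so a target at azimuth $\alpha$ and elevation $\varepsilon$ in a level reference frame is measured at an azimuth that mixes both angles. Two images taken at bank angles $\varphi_1$ and $\varphi_2$ then give a small linear system from which the elevation can be recovered:

$$\alpha'_i \approx \alpha \cos\varphi_i + \varepsilon \sin\varphi_i \quad (i = 1, 2) \qquad\Rightarrow\qquad \varepsilon \approx \frac{\alpha'_2 \cos\varphi_1 - \alpha'_1 \cos\varphi_2}{\sin(\varphi_2 - \varphi_1)}$$

In practice the measured azimuths are noisy, which presumably is why the paper fuses series of images and elevation layers rather than solving a single image pair exactly.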

7 citations


Proceedings ArticleDOI
30 Oct 2008
TL;DR: In this paper, the accuracy and the resolution of the "Stereo Radar" algorithm are evaluated in a series of experiments using synthetic data, including the influence of size, elevation, shape, and different (radar) textures of a target on the reconstructed elevation.
Abstract: Enhanced vision systems (EVS) are a possibility to improve the situation awareness of an aircrew during poor visibility conditions. EVS are based on sensor data which might be difficult to interpret, especially for radar data. Fast scanning radar systems in the millimeter waveband (35 or 94 GHz) are commonly unable to measure the elevation of a target. Nevertheless, these elevation data can sometimes be reconstructed from a series of images or from images taken from different viewpoints or bank angles. In the case of forward looking millimeter wave radar it is more promising to use different bank angles. The authors have detailed these ideas using the term "Stereo Radar" in previous publications. In this paper we take a closer look at the accuracy and the resolution of the algorithm. For this, a series of experiments using synthetic data is performed. Furthermore, we show the influence of size, elevation, shape, and different (radar) textures of a target on the reconstructed elevation. Finally, some tests are carried out to demonstrate the robustness against different kinds of noise.

4 citations