Conference

Vision Modeling and Visualization 

About: Vision Modeling and Visualization is an academic conference. It publishes mainly in the areas of Rendering (computer graphics) and Visualization. Over its lifetime, the conference has published 720 papers, which have received 10,451 citations.


Papers
Proceedings Article
01 Jan 2003
TL;DR: The presented algorithm is integrated in a physically-based environment, which can be used in game engines and surgical simulators, and employs a hash function for compressing a potentially infinite regular spatial grid.
Abstract: We propose a new approach to collision and self-collision detection of dynamically deforming objects that consist of tetrahedrons. Tetrahedral meshes are commonly used to represent volumetric deformable models, and the presented algorithm is integrated in a physically-based environment, which can be used in game engines and surgical simulators. The proposed algorithm employs a hash function for compressing a potentially infinite regular spatial grid. Although the hash function does not always provide a unique mapping of grid cells, it can be generated very efficiently and does not require complex data structures, such as octrees or BSPs. We have investigated and optimized the parameters of the collision detection algorithm, such as the hash function, hash table size and spatial cell size. The algorithm can detect collisions and self-collisions in environments of up to 20k tetrahedrons in real time. Although the algorithm works with tetrahedral meshes, it can be easily adapted to other object primitives, such as triangles.

507 citations
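The hashing scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prime constants, table size, and cell size are illustrative assumptions, and the exact intersection test that would follow the broad phase is omitted.

```python
# Sketch of spatial hashing for broad-phase collision detection.
# The primes and table size below are illustrative choices, not
# necessarily the values used in the paper.
from collections import defaultdict

P1, P2, P3 = 73856093, 19349663, 83492791  # large primes for mixing

def hash_cell(x, y, z, cell_size, table_size):
    """Map a 3D point to a hash-table index via its grid cell, so the
    potentially infinite regular grid needs only a finite table."""
    i = int(x // cell_size)
    j = int(y // cell_size)
    k = int(z // cell_size)
    return ((i * P1) ^ (j * P2) ^ (k * P3)) % table_size

def find_candidate_pairs(points, cell_size=1.0, table_size=199):
    """Group points by hash bucket; points sharing a bucket are
    collision candidates. Since the mapping is not unique, hash
    collisions can give false positives, which an exact geometric
    test would filter out in a narrow phase."""
    table = defaultdict(list)
    for idx, p in enumerate(points):
        table[hash_cell(*p, cell_size, table_size)].append(idx)
    pairs = set()
    for bucket in table.values():
        for a in range(len(bucket)):
            for b in range(a + 1, len(bucket)):
                pairs.add((bucket[a], bucket[b]))
    return pairs
```

In the paper's setting the hashed primitives would be tetrahedron vertices and bounding cells rather than bare points, but the bucket-then-test structure is the same.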

Proceedings Article
21 Nov 2001
TL;DR: This paper uses the theoretical basis provided by Information Theory to define a new measure, viewpoint entropy, that allows good viewing positions to be computed automatically, and designs an algorithm that uses this measure to automatically explore objects or scenes.
Abstract: Computation of good viewpoints is important in several fields: computational geometry, visual servoing, robot motion, graph drawing, etc. In addition, selection of good views is rapidly becoming a key issue in computer graphics due to the new techniques of Image Based Rendering. Although there is no consensus about what a good view means in Computer Graphics, the quality of a viewpoint is intuitively related to how much information it gives us about a scene. In this paper we use the theoretical basis provided by Information Theory to define a new measure, viewpoint entropy, that allows us to compute good viewing positions automatically. We also show how it can be used to select a set of good views of a scene for scene understanding. Finally, we design an algorithm that uses this measure to automatically explore objects or scenes.

353 citations
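The measure described above is a Shannon entropy over projected face areas. A minimal sketch, assuming the per-face projected areas (plus background) for a viewpoint have already been computed by some renderer:

```python
import math

def viewpoint_entropy(projected_areas):
    """Shannon entropy of the distribution of visible projected areas.
    projected_areas[i] is the area face i (or the background) covers in
    the image from one viewpoint; a higher value means the view spreads
    its information more evenly across many faces."""
    total = sum(projected_areas)
    h = 0.0
    for a in projected_areas:
        if a > 0:
            p = a / total
            h -= p * math.log2(p)
    return h
```

A view-selection loop would evaluate this for a set of candidate camera positions and keep the highest-entropy ones; four equally visible faces give entropy 2 bits, while a view dominated by a single face approaches 0.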

Proceedings ArticleDOI
01 Jan 2013
TL;DR: A new benchmark database is presented to compare and evaluate existing and upcoming algorithms which are tailored to light field processing, characterised by a dense sampling of the light fields, which best fits current plenoptic cameras and is a characteristic property not found in current multi-view stereo benchmarks.
Abstract: We present a new benchmark database to compare and evaluate existing and upcoming algorithms which are tailored to light field processing. The data is characterised by a dense sampling of the light fields, which best fits current plenoptic cameras and is a characteristic property not found in current multi-view stereo benchmarks. It allows the disparity space to be treated as continuous, and enables algorithms based on epipolar plane image analysis without having to refocus first. All datasets provide ground truth depth for at least the center view, while some have additional segmentation data available. Some of the light fields are computer-generated; the rest were acquired with a gantry, with ground truth depth established by a previous scan of the imaged objects using a structured light scanner. In addition, we provide source code for an extensive evaluation of a number of previously published stereo, epipolar plane image analysis and segmentation algorithms on the database.

307 citations

Proceedings Article
01 Jan 2008
TL;DR: The proposed method is applied to accelerate the cleanup step of a real-time dense stereo method based on plane sweeping with multiple sweeping directions, where the label set directly corresponds to the employed directions.
Abstract: This work presents a real-time, data-parallel approach for global label assignment on regular grids. The labels are selected according to a Markov random field energy with a Potts prior term for binary interactions. We apply the proposed method to accelerate the cleanup step of a real-time dense stereo method based on plane sweeping with multiple sweeping directions, where the label set directly corresponds to the employed directions. In this setting the Potts smoothness model is suitable, since the set of labels does not possess an intrinsic metric or total order. The observed run-times are approximately 30 times faster than those obtained by graph-cut approaches.

188 citations
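The energy being minimized here is the standard Potts-model MRF energy: a per-pixel data cost plus a constant penalty for each pair of neighbouring pixels with different labels. A small sketch of that objective (the paper's data-parallel solver itself is not reproduced; the grid layout and cost arrays are assumptions for illustration):

```python
def potts_energy(labels, unary, lam=1.0):
    """Potts MRF energy on a 2D grid with 4-connectivity.
    labels[y][x] is an integer label; unary[y][x][l] is the data cost
    of assigning label l at (y, x); lam is the constant smoothness
    penalty paid whenever two neighbours disagree. No metric on the
    label set is assumed, matching the Potts prior."""
    h, w = len(labels), len(labels[0])
    e = 0.0
    for y in range(h):
        for x in range(w):
            e += unary[y][x][labels[y][x]]
            # right and down neighbours cover each pair exactly once
            if x + 1 < w and labels[y][x] != labels[y][x + 1]:
                e += lam
            if y + 1 < h and labels[y][x] != labels[y + 1][x]:
                e += lam
    return e
```

In the stereo-cleanup application described above, each label would index one plane-sweeping direction and the unary term its matching cost; the solver searches for the labeling with minimal energy.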

Proceedings ArticleDOI
01 Jan 2011
TL;DR: This work systematically evaluates the concurrent use of one to four Kinects, including calibration, error measures and analysis, and presents a time-multiplexing approach to reduce or mitigate the detrimental effects of multiple active-light emitters, thereby allowing motion capture from all angles.
Abstract: With the advent of the Microsoft Kinect, renewed focus has been put on monocular depth-based motion capturing. However, this approach is limited in that an actor has to move facing the camera. Due to the active-light nature of the sensor, no more than one device has been used for motion capture so far. In effect, any pose estimation must fail for poses occluded from the depth camera. Our work investigates reducing or mitigating the detrimental effects of multiple active-light emitters, thereby allowing motion capture from all angles. We systematically evaluate the concurrent use of one to four Kinects, including calibration, error measures and analysis, and present a time-multiplexing approach.

156 citations

Performance
Metrics
No. of papers from the Conference in previous years
Year  Papers
2021  2
2020  10
2019  19
2018  18
2017  20
2016  23