scispace - formally typeset
Author

Ivana Kolingerová

Bio: Ivana Kolingerová is an academic researcher from the University of West Bohemia. The author has contributed to research in topics including Delaunay triangulation and the Bowyer–Watson algorithm, has an h-index of 12, and has co-authored 98 publications receiving 539 citations.


Papers
Journal ArticleDOI
TL;DR: A new algorithm for solving the point-in-polygon problem, especially suitable when it is necessary to check whether many points lie inside or outside a polygon; it works with O(n) space complexity.

47 citations
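The paper's specific method is not reproduced in the listing, but the problem it accelerates can be illustrated with the standard even-odd (ray-crossing) test, which already uses O(n) space for a polygon with n vertices. A minimal sketch in Python (an illustrative baseline, not the authors' algorithm):

```python
def point_in_polygon(pt, poly):
    """Even-odd (ray crossing) test; poly is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does this edge straddle the horizontal line through pt?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

When many query points are tested against the same polygon, as in the paper's setting, a preprocessing structure amortizes the per-point cost below this O(n)-per-query baseline.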

Journal ArticleDOI
01 May 2005
TL;DR: This paper presents several parallel algorithms for the construction of the Delaunay triangulation in E^2 and E^3, one of the fundamental problems in computer graphics.
Abstract: This paper presents several parallel algorithms for the construction of the Delaunay triangulation in E^2 and E^3, one of the fundamental problems in computer graphics. The proposed algorithms are designed for parallel systems with shared memory and several processors. Such a hardware configuration (especially the two-processor case) has become widespread in the computer graphics area in the last few years. Some of the proposed algorithms are easy to implement but not very efficient, while others show the opposite characteristics. Some are usable in E^2 only; others work in E^3 as well. The algorithms themselves were already published in computer graphics venues, where computer graphics criteria were highlighted. This paper concentrates on the parallel and systematic point of view and gives the parallel and distributed computing community detailed information about the parallelization of a computational geometry application.

41 citations

Journal ArticleDOI
TL;DR: The proposed algorithm is the second fastest except for input points with a highly non-uniform distribution, and it represents an attractive alternative to other Delaunay triangulation algorithms used in practice.
Abstract: This paper introduces a new algorithm for constructing a 2D Delaunay triangulation. It belongs to the class of incremental insertion algorithms, which are known to be less demanding from the implementation point of view. The most time-consuming step of incremental insertion algorithms is locating the triangle containing the next point to be inserted. In this paper, this task is transformed into the nearest-point problem, which is solved by a two-level uniform subdivision acceleration technique. Dependence on the distribution of the input points is reduced using this technique. The algorithm is compared with other popular triangulation algorithms: two variants of Guibas, Knuth, and Sharir's incremental insertion algorithm, two different implementations of Mücke's algorithm, Fortune's sweep-line algorithm, and Lee and Schachter's divide-and-conquer algorithm. The following point distributions are used for the tests: uniform, regular, Gaussian, points arranged in clusters, and real data sets from a GIS database.

36 citations
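The point-location idea in the abstract, reducing triangle location to a nearest-point query over a uniform subdivision, can be sketched with a simplified one-level grid. The paper uses a two-level structure; the class below is an illustrative assumption, not the authors' implementation:

```python
import math

class UniformGrid:
    """One-level uniform subdivision for nearest-point queries
    (a simplified sketch of the paper's two-level structure)."""

    def __init__(self, points, cells_per_side):
        self.pts = points
        self.k = cells_per_side
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        self.x0, self.y0 = min(xs), min(ys)
        self.dx = (max(xs) - self.x0) / cells_per_side or 1.0
        self.dy = (max(ys) - self.y0) / cells_per_side or 1.0
        self.cells = {}  # (cx, cy) -> list of point indices
        for i, p in enumerate(points):
            self.cells.setdefault(self._cell(p), []).append(i)

    def _cell(self, p):
        cx = min(int((p[0] - self.x0) / self.dx), self.k - 1)
        cy = min(int((p[1] - self.y0) / self.dy), self.k - 1)
        return cx, cy

    def nearest(self, q):
        cx, cy = self._cell(q)
        best, best_d = None, float("inf")
        # Scan cells in growing rings around the query's cell.
        for r in range(self.k):
            for gx in range(cx - r, cx + r + 1):
                for gy in range(cy - r, cy + r + 1):
                    if max(abs(gx - cx), abs(gy - cy)) != r:
                        continue  # only the ring at radius r
                    for i in self.cells.get((gx, gy), []):
                        d = math.dist(q, self.pts[i])
                        if d < best_d:
                            best, best_d = i, d
            # Points in farther rings are at least r cell-widths away.
            if best is not None and best_d <= r * min(self.dx, self.dy):
                break
        return best
```

For well-distributed points the expected ring search terminates after a constant number of cells, which is what makes the point-location step of incremental insertion fast in practice.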

Journal ArticleDOI
TL;DR: This paper suggests two improvements to a Delaunay triangulation construction algorithm: the first speeds up the computation without increasing memory requirements, and the second decreases memory requirements, trading space for a small slowdown.

29 citations

Journal ArticleDOI
20 Jun 2016
TL;DR: A number of different approaches have been proposed in the past that tackle curvature estimation on polygonal surface approximations using various techniques, but a comprehensive comparison of the different approaches is usually missing.
Abstract: While it is usually not difficult to compute the principal curvatures of a smooth surface of sufficient differentiability, it is a rather difficult task when only a polygonal approximation of the surface is available, because of the inherent ambiguity of such a representation. A number of different approaches have been proposed in the past that tackle this problem using various techniques. Most papers tend to focus on a particular method, while a comprehensive comparison of the different approaches is usually missing. We present the results of a large experiment, involving both common and recently proposed curvature estimation techniques, applied to triangle meshes of varying properties. It turns out that none of the approaches provides reliable results under all circumstances. Motivated by this observation, we investigate mesh statistics, which can be computed from vertex positions and mesh connectivity information only, and which can help in deciding which estimator will work best for a particular case. Finally, we propose a meta-estimator, which makes a choice between existing algorithms based on the value of the mesh statistics, and we demonstrate that such a meta-estimator, despite its simplicity, provides considerably more robust results than any existing approach.

27 citations
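One classical member of the estimator family that such comparisons cover is the angle-deficit (Gauss-Bonnet) estimate of Gaussian curvature at a mesh vertex. The sketch below is an illustrative example of a single estimator, not the paper's meta-estimator:

```python
import math

def angle_deficit_curvature(vertex, ring):
    """Gaussian curvature at `vertex` from the angle deficit:
    K ~ (2*pi - sum of incident triangle angles) / (A / 3),
    where A is the total one-ring triangle area.
    `ring` is the ordered list of one-ring neighbours (closed fan)."""
    angle_sum = 0.0
    area = 0.0
    for a, b in zip(ring, ring[1:] + ring[:1]):
        u = [a[i] - vertex[i] for i in range(3)]
        v = [b[i] - vertex[i] for i in range(3)]
        dot = sum(ui * vi for ui, vi in zip(u, v))
        cross = [u[1] * v[2] - u[2] * v[1],
                 u[2] * v[0] - u[0] * v[2],
                 u[0] * v[1] - u[1] * v[0]]
        cross_norm = math.sqrt(sum(c * c for c in cross))
        angle_sum += math.atan2(cross_norm, dot)  # angle of the fan triangle
        area += 0.5 * cross_norm                  # triangle area
    return (2 * math.pi - angle_sum) / (area / 3)
```

On a flat one-ring the incident angles sum to exactly 2π and the estimate is zero, which matches the smooth-surface intuition; the paper's point is that no single such estimator is reliable on all mesh qualities.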


Cited by
01 Jan 2009
TL;DR: In this paper, a criterion is given for the convergence of numerical solutions of the Navier-Stokes equations in two dimensions under steady conditions; it applies to all cases of steady viscous flow in two dimensions.
Abstract: A criterion is given for the convergence of numerical solutions of the Navier-Stokes equations in two dimensions under steady conditions. The criterion applies to all cases of steady viscous flow in two dimensions and shows that if the local 'mesh Reynolds number', based on the size of the mesh used in the solution, exceeds a certain fixed value, the numerical solution will not converge.

1,568 citations
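The criterion is easy to state computationally: form the local mesh Reynolds number Re_h = |u| h / ν from the local velocity, mesh size, and kinematic viscosity, and compare it against a scheme-dependent limit. The value 2 below is the textbook threshold for 1D central differencing of the convection term, used here as an illustrative assumption rather than the paper's specific fixed value:

```python
def mesh_reynolds(u, h, nu):
    """Local mesh (cell) Reynolds number: Re_h = |u| * h / nu."""
    return abs(u) * h / nu

def central_difference_stable(u, h, nu, limit=2.0):
    """Convergence check in the spirit of the criterion: central
    differencing of the convective term behaves well only while the
    mesh Reynolds number stays below a fixed, scheme-dependent limit
    (limit=2 is the standard 1D central-difference value, an
    illustrative default, not taken from the cited paper)."""
    return mesh_reynolds(u, h, nu) < limit
```

In practice the check tells you how far to refine the mesh for a given velocity and viscosity: halving h halves Re_h.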

Book
02 Jan 1991

1,377 citations

Proceedings ArticleDOI
01 Aug 2000
TL;DR: Kinetic data structures (KDSs) as discussed by the authors are a formal framework for designing and analyzing sets of assertions to cache about the environment, so that these assertion sets are at once relatively stable and tailored to facilitate or trivialize the computation of the attribute of interest.
Abstract: Computer systems commonly cache the values of variables to gain efficiency. In applications where the goal is to track attributes of a continuously moving or deforming physical system over time, caching relations between variables works better than caching individual values. The reason is that, as the system evolves, such relationships are more stable than the values of individual variables. Kinetic data structures (KDSs) are a novel formal framework for designing and analyzing sets of assertions to cache about the environment, so that these assertion sets are at once relatively stable and tailored to facilitate or trivialize the computation of the attribute of interest. Formally, a KDS is a mathematical proof animated through time, proving the validity of a certain computation for the attribute of interest. KDSs have rigorous associated measures of performance, and their design shares many qualities with that of classical data structures. The KDS framework has led to many new and promising algorithms in applications where the efficient modeling of motion is essential. Among these are collision detection for moving rigid and deformable bodies, connectivity maintenance in ad-hoc networks, local environment tracking for mobile agents, and visibility/occlusion maintenance. This talk will survey the general ideas behind KDSs and illustrate their application to simple geometric problems that arise in virtual and physical environments.

300 citations
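The certificate-and-event cycle described in the abstract can be illustrated with the smallest classical KDS: kinetic sorting of points moving linearly on a line. Each certificate asserts that an adjacent pair is in order; an event fires when two trajectories cross, the "proof" is repaired by a swap, and only the affected certificates are rescheduled. This is a generic textbook sketch, not code from the talk:

```python
import heapq

def kinetic_sort(points, t_end):
    """Maintain the sorted order of points moving linearly on a line,
    x_i(t) = x0 + v * t.  `points` is a list of (x0, v); returns the
    order of point indices at time t_end."""
    order = sorted(range(len(points)), key=lambda i: points[i][0])

    def failure_time(i, now):
        # Time at which certificate i (order[i] < order[i+1]) fails.
        a, b = points[order[i]], points[order[i + 1]]
        dv = a[1] - b[1]
        if dv <= 0:
            return None  # trajectories never cross: certificate holds forever
        t = (b[0] - a[0]) / dv
        return t if t > now else None

    events = []
    for i in range(len(order) - 1):
        t = failure_time(i, 0.0)
        if t is not None:
            heapq.heappush(events, (t, i))
    now = 0.0
    while events and events[0][0] <= t_end:
        now, i = heapq.heappop(events)
        # Lazy invalidation: the event may be stale after earlier swaps.
        t = failure_time(i, now - 1e-12)
        if t is None or abs(t - now) > 1e-9:
            continue
        order[i], order[i + 1] = order[i + 1], order[i]  # repair the proof
        for j in (i - 1, i, i + 1):  # reschedule affected certificates only
            if 0 <= j < len(order) - 1:
                t2 = failure_time(j, now)
                if t2 is not None:
                    heapq.heappush(events, (t2, j))
    return order
```

The same pattern, local certificates plus an event queue of their failure times, underlies the kinetic collision-detection and connectivity structures the talk surveys.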

Journal ArticleDOI
TL;DR: The authors chose the roadmap approach and utilized the Voronoi diagram to obtain a path that is a close approximation of the shortest path satisfying the required clearance value set by the user.
Abstract: Path planning still remains one of the core problems in modern robotic applications, such as the design of autonomous vehicles and perceptive systems. The basic path-planning problem is concerned with finding a good-quality path from a source point to a destination point that does not result in collision with any obstacles. In this article, we chose the roadmap approach and utilized the Voronoi diagram to obtain a path that is a close approximation of the shortest path satisfying the required clearance value set by the user. The advantage of the proposed technique over alternative path-planning methods lies in its simplicity, versatility, and efficiency.

294 citations
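The filter-then-search structure of the roadmap approach can be sketched on a precomputed roadmap: drop Voronoi edges whose clearance falls below the user's threshold, then run Dijkstra on what survives. The edge-list input format here is a hypothetical simplification of the article's Voronoi-diagram roadmap:

```python
import heapq

def clearance_path(edges, source, target, min_clearance):
    """`edges` is a list of (u, v, length, clearance) for a precomputed
    Voronoi roadmap (hypothetical format).  Edges below the clearance
    threshold are dropped; Dijkstra finds the shortest surviving path."""
    graph = {}
    for u, v, length, clearance in edges:
        if clearance >= min_clearance:  # the clearance filter
            graph.setdefault(u, []).append((v, length))
            graph.setdefault(v, []).append((u, length))
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    if target not in dist:
        return None  # no path meets the clearance requirement
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Raising the clearance threshold prunes narrow passages first, so the returned path trades length for safety margin exactly as the abstract describes.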

BookDOI
01 Jan 2002

291 citations