# Showing papers in "International Journal of Computational Geometry and Applications" in 2012

••

TL;DR: It is NP-hard to decide whether a given set of segments admits an auto-partition that does not make any cuts, and an optimal restricted BSP makes at most 2 times as many cuts as an optimal free BSP for the same set of segments.

Abstract: An optimal BSP for a set S of disjoint line segments in the plane is a BSP for S that produces the minimum number of cuts. We study optimal BSPs for three classes of BSPs, which differ in the splitting lines that can be used when partitioning a set of fragments in the recursive partitioning process: free BSPs can use any splitting line, restricted BSPs can only use splitting lines through pairs of fragment endpoints, and auto-partitions can only use splitting lines containing a fragment. We obtain the following two results: • It is NP-hard to decide whether a given set of segments admits an auto-partition that does not make any cuts. • An optimal restricted BSP makes at most 2 times as many cuts as an optimal free BSP for the same set of segments.
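The auto-partition model above is concrete enough to sketch. The following toy implementation (our own illustration, not the paper's construction) builds a random auto-partition of disjoint segments and counts the cuts it makes; every splitting line is the supporting line of one of the current fragments:

```python
import random

def side(p, a, b):
    """Sign of the cross product: which side of line ab point p lies on."""
    return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])

def split(seg, a, b):
    """Split segment seg by the supporting line of (a, b).
    Returns (left_part, right_part); either may be None."""
    p, q = seg
    sp, sq = side(p, a, b), side(q, a, b)
    if sp >= 0 and sq >= 0:
        return seg, None
    if sp <= 0 and sq <= 0:
        return None, seg
    # Proper crossing: compute the intersection point (this is one cut).
    t = sp / (sp - sq)
    m = (p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1]))
    if sp > 0:
        return (p, m), (m, q)
    return (m, q), (p, m)

def auto_partition_cuts(segments, rng=random.Random(0)):
    """Number of cuts made by one random auto-partition of the segments."""
    if len(segments) <= 1:
        return 0
    segs = segments[:]
    rng.shuffle(segs)
    a, b = segs[0]                       # splitter: the line through this segment
    left, right, cuts = [], [], 0
    for s in segs[1:]:
        l, r = split(s, a, b)
        if l and r:
            cuts += 1                    # this fragment crossed the splitting line
        if l: left.append(l)
        if r: right.append(r)
    return cuts + auto_partition_cuts(left, rng) + auto_partition_cuts(right, rng)
```

Note that the paper's hardness result concerns *deciding* whether a zero-cut auto-partition exists, which this randomized sketch does not decide; it only evaluates one candidate partition.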

110 citations

••

TL;DR: A new representation for simplicial complexes, particularly well adapted to complexes close to flag complexes, is proposed: a simplicial complex K is encoded by the graph G of its edges together with the inclusion-minimal simplices in the set difference Flag(G) \ K.

Abstract: We study the simplification of simplicial complexes by repeated edge contractions. First, we extend to arbitrary simplicial complexes the statement that edges satisfying the link condition can be contracted while preserving the homotopy type. Our primary interest is to simplify flag complexes such as Rips complexes for which it was proved recently that they can provide topologically correct reconstructions of shapes. Flag complexes (sometimes called clique complexes) enjoy the nice property of being completely determined by the graph of their edges. But, as we simplify a flag complex by repeated edge contractions, the property that it is a flag complex is likely to be lost. Our second contribution is to propose a new representation for simplicial complexes particularly well adapted for complexes close to flag complexes. The idea is to encode a simplicial complex K by the graph G of its edges together with the inclusion-minimal simplices in the set difference Flag(G)\ K. We call these minimal simplices blockers. We prove that the link condition translates nicely in terms of blockers and give formulae for updating our data structure after an edge contraction. Finally, we observe in some simple cases that few blockers appear during the simplification of Rips complexes, demonstrating the efficiency of our representation in this context.

64 citations

••

TL;DR: A novel algorithm is presented that takes such a data set as input and outputs a metric graph that is homeomorphic to the underlying metric graph and has bounded distortion of distances.

Abstract: Many real-world data sets can be viewed as noisy samples of special types of metric spaces called metric graphs. Building on the notions of correspondence and Gromov-Hausdorff distance in metric geometry, we describe a model for such data sets as an approximation of an underlying metric graph. We present a novel algorithm that takes such a data set as input, and outputs a metric graph that is homeomorphic to the underlying metric graph and has bounded distortion of distances. We also implement the algorithm, and evaluate its performance on a variety of real-world data sets.

51 citations

••

TL;DR: An algorithm with constant approximation factor 18 is provided to solve the discrete unit disk cover problem, a geometric version of the general set cover problem which is NP-hard.

Abstract: Given a set P of n points and a set D of m unit disks on a 2-dimensional plane, the discrete unit disk cover (DUDC) problem is (i) to check whether each point in P is covered by at least one disk in D or not and (ii) if so, then find a minimum cardinality subset of D whose unit disks cover all the points in P. The discrete unit disk cover problem is a geometric version of the general set cover problem, which is NP-hard. The general set cover problem is not approximable within c log n, for some constant c, but the DUDC problem was shown to admit a constant factor approximation. In this paper, we provide an algorithm with constant approximation factor 18. The previous best known tractable solution for the same problem was a 22-factor approximation algorithm.
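The paper's 18-factor algorithm is not reproduced in the abstract, but the problem statement admits a simple baseline: the classical greedy set-cover heuristic, which gives only a logarithmic approximation factor for DUDC. A sketch (function names and the feasibility check are our own illustration):

```python
import math

def covers(disk, point, r=1.0):
    """Whether a unit disk centered at `disk` covers `point`."""
    return math.dist(disk, point) <= r + 1e-9

def greedy_dudc(points, disks, r=1.0):
    """Greedy set cover: repeatedly pick the disk covering the most
    uncovered points.  Returns indices of chosen disks, or None if
    some point is covered by no disk at all (infeasible instance)."""
    uncovered = set(range(len(points)))
    cover_sets = [{i for i in uncovered if covers(d, points[i], r)}
                  for d in disks]
    if uncovered - set().union(*cover_sets) if cover_sets else uncovered:
        return None                       # feasibility check, part (i)
    chosen = []
    while uncovered:
        j = max(range(len(disks)), key=lambda k: len(cover_sets[k] & uncovered))
        chosen.append(j)
        uncovered -= cover_sets[j]
    return chosen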

51 citations

••

TL;DR: The investigation starts with an analysis of the triangulation-based algorithm by Aichholzer and Aurenhammer and proves the existence of flip-event-free Steiner triangulations, which motivates a careful generalization of motorcycle graphs such that their intimate geometric connection to straight skeletons is maintained.

Abstract: This paper deals with the fast computation of straight skeletons of planar straight-line graphs (PSLGs) at an industrial-strength level. We discuss both the theoretical foundations of our algorithm and the engineering aspects of our implementation Bone. Our investigation starts with an analysis of the triangulation-based algorithm by Aichholzer and Aurenhammer, and we prove the existence of flip-event-free Steiner triangulations. This result motivates a careful generalization of motorcycle graphs such that their intimate geometric connection to straight skeletons is maintained. Based on the generalized motorcycle graph, we devise a non-procedural characterization of straight skeletons of PSLGs and we discuss how to obtain a discretized version of a straight skeleton by means of graphics rendering. Most importantly, this generalization allows us to present a fast and easy-to-implement straight-skeleton algorithm. We implemented our algorithm in C++ based on floating-point arithmetic. Extensive benchmarks of our code Bone on 22,300 datasets of diverse characteristics demonstrate a time complexity and memory footprint that are a linear factor better than those of the implementation provided by CGAL 4.0, which has been the only fully-functional straight-skeleton code so far. In particular, on datasets with ten thousand vertices, Bone requires about 0.2–0.6 seconds instead of the 4–7 minutes consumed by the CGAL code, and Bone uses only 20 MB of heap memory instead of several gigabytes. We conclude our paper with a discussion of the engineering aspects and principles that make Bone reliable enough to compute the straight skeleton of datasets comprising a few million vertices on a desktop computer.

38 citations

••

TL;DR: There is strong empirical evidence that human perception of a graph drawing is negatively correlated with the number of edge crossings, but recent experiments show that one can reduce the negative impact of crossings.

Abstract: There is strong empirical evidence that human perception of a graph drawing is negatively correlated with the number of edge crossings. However, recent experiments show that one can reduce the nega...

37 citations

••

TL;DR: An algorithm is presented that returns in 2^O(d²) m²n² log²(mn) time the minimum Frechet distance between two imprecise polygonal curves with n and m vertices, respectively, and efficient O(dmn)-time algorithms are given to approximate the maximum Frechet distance as well as the minimum and maximum Frechet distance under translation.

Abstract: We consider the problem of computing the discrete Frechet distance between two polygonal curves when their vertices are imprecise. An imprecise point is given by a region and this point could lie anywhere within this region. By modelling imprecise points as balls in dimension d, we present an algorithm for this problem that returns in 2^O(d²) m²n² log²(mn) time the minimum Frechet distance between two imprecise polygonal curves with n and m vertices, respectively. We give an improved algorithm for the planar case with running time O(mn log³(mn) + (m²+n²) log(mn)). In the d-dimensional orthogonal case, where points are modelled as axis-parallel boxes and we use the L∞ distance, we give an O(dmn log(dmn))-time algorithm. We also give efficient O(dmn)-time algorithms to approximate the maximum Frechet distance, as well as the minimum and maximum Frechet distance under translation. These algorithms achieve constant factor approximation ratios in "realistic" settings (such as when the radii of the balls modelling the imprecise points are roughly of the same size).
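For orientation, the exact-point version of the problem has a classical O(mn) dynamic program (the Eiter–Mannila recurrence for the discrete Frechet distance); the paper's algorithms handle the harder imprecise case. A sketch of the classical recurrence, not the paper's method:

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polygonal curves P and Q
    (lists of points), via the classical O(mn) dynamic program."""
    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j): best achievable "leash length" for the prefixes
        # P[0..i] and Q[0..j], where both frogs may only hop forward.
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j-1), d)
        if j == 0:
            return max(c(i-1, 0), d)
        return max(min(c(i-1, j), c(i-1, j-1), c(i, j-1)), d)
    return c(len(P)-1, len(Q)-1)
```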

26 citations

••

TL;DR: In this paper, it was shown that the Yao graph Y4 in the L2 metric is a spanner with stretch factor 8√2(29+23√2).

Abstract: We show that the Yao graph Y4 in the L2 metric is a spanner with stretch factor 8√2(29+23√2).
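The Yao graph construction referred to here is simple to state: around each point the plane is divided into k equal-angle cones, and the point is joined to a nearest other point in each nonempty cone. A sketch for Y4 (our own illustration; it uses axis-aligned cone boundaries, one of several equivalent conventions, and breaks ties arbitrarily):

```python
import math

def yao4(points):
    """Yao graph Y4: connect each point to the nearest other point in
    each of four 90-degree cones around it.  Returns undirected edges
    as pairs of point indices."""
    edges = set()
    for i, p in enumerate(points):
        best = {}                                # cone index -> (dist, point index)
        for j, q in enumerate(points):
            if i == j:
                continue
            ang = math.atan2(q[1]-p[1], q[0]-p[0]) % (2*math.pi)
            cone = int(ang // (math.pi/2))       # which of the 4 cones q falls in
            d = math.dist(p, q)
            if cone not in best or d < best[cone][0]:
                best[cone] = (d, j)
        for d, j in best.values():
            edges.add((min(i, j), max(i, j)))
    return edges
```

The paper's contribution is the spanner guarantee, i.e. that shortest paths in this sparse graph approximate Euclidean distances within the stated stretch factor; the construction itself is the easy part.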

26 citations

••

TL;DR: Graph-theoretic properties of certain proximity graphs defined on planar point sets are investigated and bounds on the same parameters of some of the higher order versions are given.

Abstract: Graph-theoretic properties of certain proximity graphs defined on planar point sets are investigated. We first consider some of the most common proximity graphs of the family of the Delaunay graph, and study their number of edges, minimum and maximum degree, clique number, and chromatic number. In the second part of the paper we focus on the higher order versions of some of these graphs and give bounds on the same parameters.

24 citations

••

TL;DR: Three results related to dynamic convex hulls are presented, including a fully dynamic data structure for maintaining a set of n points in the plane to support halfplane range reporting queries in O(log n+k) time with O(polylog n) expected amortized update time.

Abstract: We present three results related to dynamic convex hulls: • A fully dynamic data structure for maintaining a set of n points in the plane so that we can find the edges of the convex hull intersecting a query line, with expected query and amortized update time O(log^(1+ε) n) for an arbitrarily small constant ε > 0. This improves the previous bound of O(log^(3/2) n). • A fully dynamic data structure for maintaining a set of n points in the plane to support halfplane range reporting queries in O(log n + k) time with O(polylog n) expected amortized update time. A similar result holds for 3-dimensional orthogonal range reporting. For 3-dimensional halfspace range reporting, the query time increases to O(log² n / log log n + k). • A semi-online dynamic data structure for maintaining a set of n line segments in the plane, so that we can decide whether a query line segment lies completely above the lower envelope, with query time O(log n) and amortized update time O(n^ε). As a corollary, we can solve the following problem in O(n^(1+ε)) time: given a triangulated terrain in 3-d of size n, identify all faces that are partially visible from a fixed viewpoint.

18 citations

••

TL;DR: In this paper, a new data structure for point location queries in planar triangulations is presented, which is asymptotically as fast as the optimal structures, but it requires no prior information about the queries.

Abstract: Over the last decade, there have been several data structures that, given a planar subdivision and a probability distribution over the plane, provide a way for answering point location queries that is fine-tuned for the distribution. All these methods suffer from the requirement that the query distribution must be known in advance. We present a new data structure for point location queries in planar triangulations. Our structure is asymptotically as fast as the optimal structures, but it requires no prior information about the queries. This is a 2-D analogue of the jump from Knuth's optimum binary search trees (discovered in 1971) to the splay trees of Sleator and Tarjan in 1985. While the former need to know the query distribution, the latter are statically optimal. This means that we can adapt to the query sequence and achieve the same asymptotic performance as an optimum static structure, without needing any additional information.

••

TL;DR: This paper presents a linear-time 3-approximation algorithm based upon a novel partition of an orthogonal polygon into so-called o-star-shaped orthogonal polygons.

Abstract: The complexity status of the minimum r-star cover problem for orthogonal polygons had been open for many years, until 2004 when Ch. Worman and J. M. Keil proved it to be polynomially tractable (Polygon decomposition and the orthogonal art gallery problem, IJCGA 17(2) (2007), 105-138). However, since their algorithm has O(n^17)-time complexity, where the O(·) notation hides a polylogarithmic factor, and thus it is not practical, in this paper we present a linear-time 3-approximation algorithm. Our approach is based upon the novel partition of an orthogonal polygon into so-called o-star-shaped orthogonal polygons.

••

TL;DR: A framework for compressive sensing of images with local distinguishable objects, such as stars, is proposed and applied to solve a problem in celestial navigation, along with a comprehensive study of the application of the algorithm to attitude determination, or finding one's orientation in space.

Abstract: We propose a framework for compressive sensing of images with local distinguishable objects, such as stars, and apply it to solve a problem in celestial navigation. Specifically, let x ∈ ℝ^N be an N-pixel image, consisting of a small number of local distinguishable objects plus noise. Our goal is to design an m × N measurement matrix A with m ≪ N, such that we can recover an approximation to x from the measurements Ax. We construct a matrix A and recovery algorithm with the following properties: (i) if there are k objects, the number of measurements m is O((k log N)/(log k)), undercutting the best known bound of O(k log(N/k)); (ii) the matrix A is very sparse, which is important for hardware implementations of compressive sensing algorithms; and (iii) the recovery algorithm is empirically fast and runs in time polynomial in k and log(N). We also present a comprehensive study of the application of our algorithm to attitude determination, or finding one's orientation in space. Spacecraft typically use cameras...

••

TL;DR: In this article, the authors discuss six contemporary 'indigenous' Afrimation projects that have the potential to be highly innovative in the digital technology arena, as a medium for promoting the African Renaissance agenda.

Abstract: In spite of its longstanding history, Africa's animation industry's impact on the continent's socio-eco-cultural development has been inconsequential, to say the least. This article discusses six contemporary 'indigenous' Afrimation projects that have the potential to be highly innovative in the digital technology arena, as a medium for promoting the African Renaissance agenda. These projects include: Kabongo; Tinga Tinga Tales; Zambezia; The Lion of Judah; Magic Cellar and the Interactive Child Learning Aid Project (i-CLAP) Model. The paper also highlights key issues relevant to the development of African animation, such as design techniques, business models and partnership strategies, and the implications of these new digital technology trends for Africa's development and future.

••

TL;DR: In this paper, the searchlight scheduling problem was extended to 3-dimensional polyhedra, with the guards now boundary segments who rotate half-planes of illumination, and it was shown that deciding whether a given set of boundary guards has a successful search schedule is strongly NP-hard.

Abstract: The searchlight scheduling problem was first studied in 2-dimensional polygons, where the goal is for point guards in fixed positions to rotate searchlights to catch an evasive intruder. Here the problem is extended to 3-dimensional polyhedra, with the guards now boundary segments who rotate half-planes of illumination. After carefully detailing the 3-dimensional model, several results are established. The first is a nearly direct extension of the planar one-way sweep strategy using what we call filling guards, a generalization that succeeds despite there being no well-defined notion in 3-dimensional space of planar "clockwise rotation." Next follow two results: every polyhedron with r > 0 reflex edges can be searched by at most r² suitably placed boundary guards, whereas just r edge guards suffice if the polyhedron is orthogonal. (Minimizing the number of guards to search a given polyhedron is easily seen to be NP-hard.) Finally we show that deciding whether a given set of boundary guards has a successful search schedule is strongly NP-hard. A number of peripheral results are proved en route to these central theorems, and several open problems remain for future work.

••

TL;DR: The image deblocking algorithm presented has been successful in reducing blocky artifacts in an image and therefore increases the subjective as well as objective quality of the reconstructed image.

Abstract: JPEG, a block-transform-coded lossy image compression format, has been used to keep the storage and bandwidth requirements of digital images at practical levels. However, JPEG compression schemes may cause unwanted image artifacts to appear, such as the 'blocky' artifact found in smooth/monotone areas of an image, caused by the coarse quantization of DCT coefficients. A number of image filtering approaches incorporating value-averaging filters have been analyzed in the literature for smoothing out the discontinuities that appear across DCT block boundaries. Although some of these approaches are able to decrease the severity of these unwanted artifacts to some extent, others have limitations that cause excessive blurring of high-contrast edges in the image. The image deblocking algorithm presented in this paper aims to filter the blocked boundaries. This is accomplished by employing smoothing, detection of blocked edges, and then filtering the difference between the pixels containing the blocked edge. The deblocking algorithm presented has been successful in reducing blocky artifacts in an image and therefore increases the subjective as well as objective quality of the reconstructed image.
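The detect-then-filter idea can be illustrated with a deliberately simplified 1-D sketch (our own illustration, not the paper's algorithm): a small jump across an 8-pixel block boundary is treated as a blocking artifact and softened, while a large jump is assumed to be a genuine edge and left alone.

```python
def deblock_row(row, block=8, threshold=10):
    """Soften discontinuities at JPEG block boundaries in one image row.
    A boundary jump smaller than `threshold` is treated as a blocking
    artifact (a true high-contrast edge is left untouched), and the two
    pixels adjacent to the boundary are pulled toward their mean."""
    out = list(row)
    for b in range(block, len(row), block):
        left, right = out[b-1], out[b]
        diff = right - left
        if abs(diff) < threshold:
            out[b-1] = left + diff // 4      # quarter-step toward the mean
            out[b] = right - diff // 4
    return out
```

A real deblocking filter works in 2-D, adapts the threshold to local activity, and smooths more than one pixel on each side; this sketch only shows why edge detection must precede the averaging step.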

••

TL;DR: The Interactive Child Learning Aid Project (i-CLAP) model is initiated as a potential indigenous CAI model for application in the local pre-primary school curriculum, and recommendations for its integration into the educational curriculum are made, towards facilitating the attainment of the UBE and MDGs agendas.

Abstract: Developments in technology are more than ever before enabling the creation of remarkable Computer-Assisted Instruction (CAI) resources for enriching and transforming the educational environment in the 21st century. This progress is considered indispensable for Nigeria in the wake of declining school enrollment, high dropout rates and low learning achievement levels, especially if such a predominantly traditional (face-to-face) educational system is to be revolutionized to meet contemporary needs and techniques. Therefore, while this article argues for the integration of technology hardware and software into the local education environment, it emphasizes the need to develop custom instructional resources that integrate local folkloric contents pertinent to Nigeria's educational philosophy and cultural socialization. The Interactive Child Learning Aid Project (i-CLAP) model is initiated as a potential indigenous CAI model for application in the local pre-primary school curriculum. The impact of implementing the model's concept within (N=4) selected pre-primary schools in Zaria, Kaduna State is examined. The researcher used classroom observation for data gathering, and Pearson Product Moment Correlation (r) and t-tests for analyzing the on-task and off-task classroom behaviors of (N=80) pupils, revealing valuable lessons on the project's potential as a techno-cultural resource for reinforcing motivation and interest among pre-primary school children in Nigeria. Recommendations for its integration into the educational curriculum are made, towards facilitating the attainment of the UBE and MDGs agendas.

••

TL;DR: In this article, an intelligent video data visualization tool, based on semantic classification, is presented for retrieving and exploring a large scale corpus of videos.

Abstract: We present in this paper an intelligent video data visualization tool, based on semantic classification, for retrieving and exploring a large scale corpus of videos. Our work is based on the semantic classification resulting from semantic analysis of video. The obtained classes are projected into the visualization space. The graph is represented by nodes and edges: the nodes are the keyframes of video documents and the edges are the relations between documents and the classes of documents. Finally, we construct the user's profile, based on interaction with the system, to better adapt the system to the user's preferences.

••

TL;DR: It is shown that for a finite unit family of n squares which are given sorted by their side lengths, a packing into the rectangle can be found in linear time, which yields an O(n log n) time algorithm for the general case.

Abstract: In this paper, we prove that any finite or infinite family of squares with total area at most 1 can be packed into the rectangle with dimensions and that this rectangle is unique with this property and minimum area. Furthermore, we show that for a finite family of n squares which are given sorted by their side lengths, a packing into the rectangle can be found in linear time, which yields an O(n log n) time algorithm for the general case. With a restriction to finite families, the former statement was published by D. J. Kleitman and M. Krieger, who – as they state themselves – only provide "a general discussion of the methods used [throughout the proof] and an outline of the major cases". Although they refer to their proof as "being constructive", it is not clear from their presentation what a packing algorithm would look like. Given that there are some other results that rely on this statement – some of which even make use of a corresponding packing algorithm – it is important to have a complete and preferably constructive published proof of it. The proof presented here is constructive and uses an interesting technique based on quadratic programming which could be applicable to other packing problems as well.
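The linear-time packing itself is intricate; as a much weaker illustration of packing size-sorted squares into a fixed-width container, here is the classical next-fit-decreasing shelf heuristic (our own sketch, not the paper's algorithm, and with no optimality guarantee):

```python
def shelf_pack(sides, width):
    """Pack squares (side lengths given in decreasing order, each at
    most `width`) into a strip of the given width using the
    next-fit-decreasing shelf heuristic.  Returns the placements as
    (x, y, side) triples and the total strip height used."""
    placements = []
    shelf_y = 0.0            # y-coordinate of the bottom of the current shelf
    shelf_h = 0.0            # height of the current shelf
    x = 0.0                  # next free x-position on the current shelf
    for s in sides:
        if x + s > width:    # square does not fit: open a new shelf above
            shelf_y += shelf_h
            shelf_h = 0.0
            x = 0.0
        placements.append((x, shelf_y, s))
        shelf_h = max(shelf_h, s)
        x += s
    return placements, shelf_y + shelf_h
```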

••

TL;DR: An O(n² log n) time algorithm is presented for the problem of finding an obnoxious line that intersects the convex hull of P and maximizes the minimum weighted Euclidean distance to all points of P.

Abstract: Given a set P of n points in the plane such that each point has a positive weight, we study the problem of finding an obnoxious line that intersects the convex hull of P and maximizes the minimum weighted Euclidean distance to all points of P. We present an O(n² log n) time algorithm for the problem, improving the previously best-known O(n² log³ n) time solution. We also consider a variant of this problem whose input is a set of m polygons with a total of n vertices in the plane such that each polygon has a positive weight and whose goal is to locate an obnoxious line with respect to the weighted polygons. An O(mn + n log² n log m + m² log n log² m) time algorithm for this variant was known previously. We give an improved algorithm of O(mn + n log² n + m² log n) time. Further, we reduce the time bound of a previous algorithm for the case of the problem with unweighted polygons from O((m² + n log m) log n) to O(m² + n log m).

••

TL;DR: It turns out that a new geometric transformation based on a Voronoi-like tessellation well describes the proximity relations between given weighted points S and every line in ℝ2.

Abstract: Given a set S of weighted points in the plane, we consider two problems dealing with lines in ℝ2 under the weighted Euclidean distance: (1) Preprocess S into a data structure that efficiently finds a nearest point among S to a query line. (2) Find an optimal line that maximizes the minimum of the weighted distance to any point of S. The latter problem is also known as the obnoxious line location problem. We introduce a unified approach to both problems based on a new geometric transformation that maps lines in the plane to points in a line space. It turns out that our transformation, together with its target space, well describes the proximity relations between given weighted points S and every line in ℝ2. We define a Voronoi-like tessellation on the line space and investigate its geometric, combinatorial, and computational properties. As its direct applications, we obtain several new algorithmic results on the two problems. We also show that our approach extends to weighted line segments and weighted polygons.

••

TL;DR: In this article, the use of virtual interactive techniques for personalized product design is described, where the components of the equipment or product, such as the screen, buttons etc., are projected onto the physical model using a projector connected to the computer.

Abstract: The use of virtual interactive techniques for personalized product design is described in this paper. Usually products are designed and built by considering general usage patterns, and prototyping is used to mimic the static or working behaviour of an actual product before manufacturing it; the user does not have any control over the design of the product. Personalized design postpones design to a later stage. It allows for personalized selection of individual components by the user. This is implemented by displaying the individual components over a physical model constructed from cardboard or Thermocol in the actual size and shape of the original product. The components of the equipment or product, such as the screen, buttons etc., are then projected onto the physical model using a projector connected to the computer. Users can interact with the prototype like the original working equipment, and they can select, shape and position the individual components displayed on the interaction panel using simple hand gestures. Computer vision techniques as well as sound processing techniques are used to detect and recognize the user gestures captured using a web camera and microphone.

••

TL;DR: This paper extends a result of Jackson and Jordan, showing that the so-called "1-extension" operation preserves unique realizability, from dimension 2 to all dimensions.

Abstract: The problem of deciding the unique realizability of a graph in a Euclidean space with distance and/or direction constraints on the edges of the graph has applications in CAD (Computer-Aided Design) and localization in sensor networks. One approach, which has proved efficient for the similar problem on graphs with distance constraints, is to study operations that construct a uniquely realizable graph from a smaller one. In this paper, we extend a result of Jackson and Jordan, showing that the so-called "1-extension" operation preserves unique realizability, from dimension 2 to all dimensions.

••

TL;DR: This article proposes two methods that are combined with ray tracing for obtaining the colours of some pixels of the image plane, and shows that they are at least 50% faster than the spatial median approach for reasonably complex scenes with around 70k polygons, with about 0.2% quality degradation.

Abstract: Many computer graphics rendering algorithms and techniques use ray tracing for generating natural and photo-realistic images. Ray tracing is a method to convert 3D models into high-quality 2D images by complex computation; millions of rays must be simulated and traced to create a realistic image. One method for reducing render time is acceleration structures, and the kd-tree is the structure most commonly used to accelerate ray tracing algorithms. This paper focuses on reducing render time. We propose two methods that are combined with the ray tracing method for obtaining the colours of some pixels of the image plane. Our methods are linear algorithms and are faster than the ray tracing method. Our results show that our methods are at least 50% faster than the spatial median approach for reasonably complex scenes with around 70k polygons, with about 0.2% quality degradation. In this article, we show that these proposed methods can be combined with other ray tracing methods, such as SAH, to reduce render time.

••

TL;DR: This paper addresses the relative position of points, point set distance problems, and orthogonal range queries in the plane in the presence of geometric uncertainty, and presents efficient algorithms for relative points orientation, minimum and maximum pairwise distance, closest pair, diameter, and efficient algorithms for uncertain range queries.

Abstract: Classical computational geometry algorithms handle geometric constructs whose shapes and locations are exact. However, many real-world applications require modeling and computing with geometric uncertainties, which are often coupled and mutually dependent. In this paper we address the relative position of points, point set distance problems, and orthogonal range queries in the plane in the presence of geometric uncertainty. The uncertainty can be in the locations of the points, in the query range, or both, and is possibly coupled. Point coordinates and range uncertainties are modeled with the Linear Parametric Geometric Uncertainty Model (LPGUM), a general and computationally efficient worst-case, first-order linear approximation of geometric uncertainty that supports dependence among uncertainties. We present efficient algorithms for relative points orientation, minimum and maximum pairwise distance, closest pair, diameter, and efficient algorithms for uncertain range queries: uncertain range/nominal points, nominal range/uncertain points, uncertain range/uncertain points, with independent/dependent uncertainties. In most cases, the added complexity is sub-quadratic in the number of parameters and points, with higher complexities for dependent point uncertainties.

••

TL;DR: The stretch factor of a rectilinear path in the L1 plane has a lower bound of Ω(n log n) in the algebraic computation tree model, and a worst-case O(σn log² n) time algorithm is described for computing the stretch factor or maximum detour of a path embedded in the plane with a weighted fixed orientation metric.

Abstract: The stretch factor and maximum detour of a graph G embedded in a metric space measure how well G approximates the minimum complete graph containing G and the metric space, respectively. In this paper we show that computing the stretch factor of a rectilinear path in the L1 plane has a lower bound of Ω(n log n) in the algebraic computation tree model, and describe a worst-case O(σn log² n) time algorithm for computing the stretch factor or maximum detour of a path embedded in the plane with a weighted fixed orientation metric defined by σ ≥ 2 vectors, and a worst-case O(n log^d n) time algorithm for d ≥ 3 dimensions in the L1-metric. We generalize the algorithms to compute the stretch factor or maximum detour of trees and cycles in O(σn log^(d+1) n) time. We also obtain an optimal O(n) time algorithm for computing the maximum detour of a monotone rectilinear path in the L1 plane.
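The quantity being computed admits a simple quadratic-time reference implementation against which the faster algorithms can be checked; a sketch for a path under the L1 metric (our own illustration, not the paper's algorithm):

```python
def l1(p, q):
    """L1 (rectilinear) distance between two points."""
    return abs(p[0]-q[0]) + abs(p[1]-q[1])

def path_stretch_factor(path):
    """Stretch factor of a polygonal path under the L1 metric: the
    maximum over vertex pairs of (distance along the path) divided by
    (straight-line L1 distance).  O(n^2) brute-force reference."""
    n = len(path)
    # prefix[i] = L1 length of the path from vertex 0 to vertex i
    prefix = [0.0]
    for i in range(1, n):
        prefix.append(prefix[-1] + l1(path[i-1], path[i]))
    best = 1.0
    for i in range(n):
        for j in range(i+1, n):
            d = l1(path[i], path[j])
            if d > 0:
                best = max(best, (prefix[j] - prefix[i]) / d)
    return best
```

Note that on a monotone rectilinear path the L1 path length between any two vertices equals their L1 distance, so the stretch factor is 1, which is why the monotone special case mentioned at the end of the abstract is so much easier.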

••

TL;DR: A considerable execution speedup on CUDA as compared to the CPU is demonstrated through the formalization of parallel algorithms for fluid animation employing the Smoothed Particle Hydrodynamics (SPH) model on the Compute Unified Device Architecture (CUDA).

Abstract: Realistic fluid animation is an inherent part of special effects in the film and gaming industry. These animations are created through the simulation of a highly compute-intensive fluid model. The computations involved in executing the fluid model emphasize the need for a high-performance parallel system to achieve real-time animation. This paper is primarily devoted to the formalization of parallel algorithms for fluid animation employing the Smoothed Particle Hydrodynamics (SPH) model on the Compute Unified Device Architecture (CUDA). We demonstrate a considerable execution speedup on CUDA as compared to the CPU. The speedup is further improved by reducing the complexity of the SPH computations from O(N²) to O(N) by utilizing a spatial-grid-based particle neighbour lookup.
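The O(N²) to O(N) neighbour lookup mentioned at the end is typically realized with a uniform spatial grid whose cell size equals the SPH smoothing radius, so each particle only inspects the 27 surrounding cells. A CPU-side sketch of that idea (our own illustration; the paper's version runs on CUDA):

```python
from collections import defaultdict

def build_grid(positions, h):
    """Hash each particle into a uniform 3-D grid with cell size h."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        grid[(int(x // h), int(y // h), int(z // h))].append(i)
    return grid

def neighbours(i, positions, grid, h):
    """Particles within distance h of particle i.  Only the 27 cells
    around particle i's cell can contain such particles, so the work
    per particle is expected O(1) for uniform densities, instead of
    the O(N) of an all-pairs scan."""
    x, y, z = positions[i]
    cx, cy, cz = int(x // h), int(y // h), int(z // h)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((cx+dx, cy+dy, cz+dz), []):
                    if j != i:
                        px, py, pz = positions[j]
                        if (px-x)**2 + (py-y)**2 + (pz-z)**2 <= h*h:
                            out.append(j)
    return out
```

On the GPU the same structure is usually built by sorting particles by cell index so that each cell's contents are contiguous in memory, which suits CUDA's memory model better than per-cell lists.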

••

TL;DR: This paper presents an O(n²) time and space algorithm for solving the problem of reporting the set of segments of each color intersected by segments of the other color, and proves that this problem is 3-Sum hard.

Abstract: In this paper, we introduce a natural variation of the problem of computing all bichromatic intersections between two sets of segments. Given two sets R and B of n points in the plane defining two sets of segments, say red and blue, we present an O(n²) time and space algorithm for solving the problem of reporting the set of segments of each color intersected by segments of the other color. We also prove that this problem is 3-Sum hard and provide some illustrative examples of several point configurations.
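The reporting problem is easy to state as a brute-force O(n²) reference using the standard orientation predicate (our own illustration; degenerate cases such as shared endpoints and collinear overlaps are ignored in this sketch):

```python
def orient(a, b, c):
    """Twice the signed area of triangle abc; its sign tells which
    side of the directed line ab the point c lies on."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def segments_cross(s, t):
    """Proper-crossing test: the endpoints of each segment must lie
    strictly on opposite sides of the other segment's line."""
    p, q = s
    r, u = t
    d1, d2 = orient(p, q, r), orient(p, q, u)
    d3, d4 = orient(r, u, p), orient(r, u, q)
    return d1*d2 < 0 and d3*d4 < 0

def crossed_segments(red, blue):
    """Indices of red segments crossed by some blue segment, and of
    blue segments crossed by some red segment, by O(n^2) brute force."""
    red_hit = {i for i, s in enumerate(red)
                 for t in blue if segments_cross(s, t)}
    blue_hit = {j for j, t in enumerate(blue)
                  for s in red if segments_cross(s, t)}
    return red_hit, blue_hit
```

The 3-Sum hardness result shows this quadratic behaviour is likely unavoidable in the worst case, which is what makes the matching O(n²) upper bound interesting.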

••

TL;DR: For a finite set of points X on the unit hypersphere in ℝd, this article considers the iteration u_{i+1} = u_i + χ_i, where χ_i is the point of X farthest from u_i.

Abstract: For a finite set of points X on the unit hypersphere in ℝd we consider the iteration u_{i+1} = u_i + χ_i, where χ_i is the point of X farthest from u_i. Restricting to the case where the origin is contained in the convex hull of X, we study the maximal length of u_i. We give sharp upper bounds for the length of u_i independently of X: this upper bound is infinite for d ≥ 3 and finite for d = 2.
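The iteration itself is a few lines of code; a sketch (our own illustration) that records the lengths |u_i| along the trajectory:

```python
import math

def farthest_point_iteration(X, steps, u0=(0.0, 0.0)):
    """Run u_{i+1} = u_i + chi_i, where chi_i is the point of X
    farthest from u_i, and return the sequence of lengths |u_i|.
    Ties in the farthest-point choice are broken by list order."""
    u = u0
    norms = [math.hypot(*u)]
    for _ in range(steps):
        chi = max(X, key=lambda x: math.dist(u, x))
        u = (u[0] + chi[0], u[1] + chi[1])
        norms.append(math.hypot(*u))
    return norms
```

In this planar example with X = {(1,0), (-1,0)} the iterate oscillates and its length stays bounded; the paper's result is about how large |u_i| can get in the worst case over all valid X.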

••

TL;DR: Experimental comparisons with the images synthesized using the actual three-dimensional scene structure and camera poses show that the proposed method effectively describes scene changes caused by viewpoint movements without estimation of 3-D scene and camera information.

Abstract: This paper presents an uncalibrated view synthesis method using piecewise planar regions that are extracted from a given set of image pairs through planar segmentation. Our work concentrates on a view synthesis method that does not need estimation of camera parameters and scene structure. For our goal, we simply assume that images of the real world are composed of piecewise planar regions. Then, we perform view synthesis simply with planar regions and the homographies between them. Here, for accurate extraction of planar homographies and piecewise planar regions in images, the proposed method employs iterative homography estimation and color-segmentation-based planar region extraction. The proposed method synthesizes the virtual view image using a set of planar regions as well as a set of corresponding homographies. Experimental comparisons with the images synthesized using the actual three-dimensional (3-D) scene structure and camera poses show that the proposed method effectively describes scene changes caused by viewpoint movements without estimation of 3-D scene and camera information.