scispace - formally typeset

Showing papers on "Polygon published in 2018"


Book ChapterDOI
16 Sep 2018
TL;DR: This work proposes to localize cell nuclei via star-convex polygons, which are a much better shape representation as compared to bounding boxes and thus do not need shape refinement.
Abstract: Automatic detection and segmentation of cells and nuclei in microscopy images is important for many biological applications. Recent successful learning-based approaches include per-pixel cell segmentation with subsequent pixel grouping, or localization of bounding boxes with subsequent shape refinement. In situations of crowded cells, these can be prone to segmentation errors, such as falsely merging bordering cells or suppressing valid cell instances due to the poor approximation with bounding boxes. To overcome these issues, we propose to localize cell nuclei via star-convex polygons, which are a much better shape representation as compared to bounding boxes and thus do not need shape refinement. To that end, we train a convolutional neural network that predicts for every pixel a polygon for the cell instance at that position. We demonstrate the merits of our approach on two synthetic datasets and one challenging dataset of diverse fluorescence microscopy images.

521 citations
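The star-convex representation described above can be sketched in a few lines: a nucleus is encoded as radial distances from a center point along n equally spaced directions, and the polygon is recovered by placing one vertex per direction. This is only an illustration of the shape parameterization (the function names and the shoelace-area check are ours, not the paper's code); the paper's contribution is a CNN that predicts these radial distances per pixel.

```python
import math

def star_convex_polygon(center, radii):
    """Build the vertices of a star-convex polygon from n radial
    distances sampled at equally spaced angles around a center."""
    cx, cy = center
    n = len(radii)
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n))
            for k, r in enumerate(radii)]

def polygon_area(vertices):
    """Shoelace formula for the area of a simple polygon."""
    n = len(vertices)
    s = sum(vertices[i][0] * vertices[(i + 1) % n][1]
            - vertices[(i + 1) % n][0] * vertices[i][1]
            for i in range(n))
    return abs(s) / 2

# A regular octagon of radius 1 approximates a circular nucleus;
# its area is (1/2) * n * sin(2*pi/n) = 2*sqrt(2).
poly = star_convex_polygon((0.0, 0.0), [1.0] * 8)
print(round(polygon_area(poly), 4))
```

Because every vertex is visible from the center, this parameterization can fit crowded, roughly convex nuclei without the overlap problems of axis-aligned bounding boxes.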


Proceedings ArticleDOI
01 Jun 2018
TL;DR: This work presents a method combining Mask R-CNN with building boundary regularization, which produces better regularized polygons which are beneficial in many applications.
Abstract: The DeepGlobe Building Extraction Challenge poses the problem of localizing all building polygons in the given satellite images. We can create polygons using an existing instance segmentation algorithm based on Mask R-CNN. However, polygons produced from instance segmentation have irregular shapes, which are far different from real building footprint boundaries and therefore cannot be directly applied to many cartographic and engineering applications. Hence, we present a method combining Mask R-CNN with building boundary regularization. Through the experiments, we find that the proposed method and Mask R-CNN achieve almost equivalent performance in terms of accuracy and completeness. However, compared to Mask R-CNN, our method produces better regularized polygons, which are beneficial in many applications.

145 citations


Proceedings ArticleDOI
15 Oct 2018
TL;DR: A new database called ModaNet, a large-scale collection of images based on the Paperdoll dataset, which makes it possible to measure the performance of state-of-the-art algorithms for object detection, semantic segmentation and polygon prediction on street fashion images in detail.
Abstract: Understanding clothes from a single image would have huge commercial and cultural impacts on modern societies. However, this task remains a challenging computer vision problem due to wide variations in the appearance, style, brand and layering of clothing items. We present a new database called ModaNet, a large-scale collection of images based on the Paperdoll dataset. Our dataset provides 55,176 street images, fully annotated with polygons, on top of the 1 million weakly annotated street images in Paperdoll. ModaNet aims to provide a technical benchmark to fairly evaluate the progress of applying the latest computer vision techniques that rely on large data for fashion understanding. The rich annotation of the dataset makes it possible to measure the performance of state-of-the-art algorithms for object detection, semantic segmentation and polygon prediction on street fashion images in detail.

97 citations


Journal ArticleDOI
TL;DR: In the proposed method, polygon-featured holes are introduced as basic design primitives whose movements, deformations and intersections allow control of the structural topology.

94 citations


Book ChapterDOI
08 Sep 2018
TL;DR: The experimental results suggest that the proposed algorithm is able to find multiple Pareto optimal solution sets in the decision space, even if the diversity requirements in the objective and decision spaces are inconsistent or there exist local optimal areas in the decision space.
Abstract: Multi-modal multi-objective optimization problems are commonly seen in real-world applications. However, most existing researches focus on solving multi-objective optimization problems without multi-modal property or multi-modal optimization problems with single objective. In this paper, we propose a double-niched evolutionary algorithm for multi-modal multi-objective optimization. The proposed algorithm employs a niche sharing method to diversify the solution set in both the objective and decision spaces. We examine the behaviors of the proposed algorithm and its two variants as well as three other existing evolutionary optimizers on three types of polygon-based problems. Our experimental results suggest that the proposed algorithm is able to find multiple Pareto optimal solution sets in the decision space, even if the diversity requirements in the objective and decision spaces are inconsistent or there exist local optimal areas in the decision space.

76 citations


Journal ArticleDOI
21 Dec 2018-Science
TL;DR: It is discovered that 10-fold QC-SLs could self-organize from truncated tetrahedral quantum dots with anisotropic patchiness, which may spur the creation of various superstructures using anisotropic objects through an enthalpy-driven route.
Abstract: Quasicrystalline superlattices (QC-SLs) generated from single-component colloidal building blocks have been predicted by computer simulations but are challenging to reproduce experimentally. We discovered that 10-fold QC-SLs could self-organize from truncated tetrahedral quantum dots with anisotropic patchiness. Transmission electron microscopy and tomography measurements allow structural reconstruction of the QC-SL from the nanoscale packing to the atomic-scale orientation alignments. The unique QC order leads to a tiling concept, the “flexible polygon tiling rule,” that replicates the experimental observations. The keys for the single-component QC-SL formation were identified to be the anisotropic shape and patchiness of the building blocks and the assembly microscopic environment. Our discovery may spur the creation of various superstructures using anisotropic objects through an enthalpy-driven route.

72 citations


Journal ArticleDOI
TL;DR: A novel method for generating support-free elliptic hollowing for 3D shapes which can entirely avoid additional supporting structures is presented.
Abstract: 3D printing, also called additive manufacturing, has become increasingly popular and printing efficiency has become more critical. To print artifacts faster with less material, thus leading to lighter and cheaper printed products, various types of void structures have been designed and engineered inside of shape models. In this paper, we present a novel method for generating support-free elliptic hollowing for 3D shapes which can entirely avoid additional supporting structures. To achieve this, we perform the ellipse hollowing in one of the cross-sectional polygons and then extrude the hollowed ellipses to the other parallel cross sections. To efficiently pack the ellipses in the polygon, we construct the Voronoi diagram of ellipses to reason about the free space around the ellipses and other geometric features, taking advantage of the available algorithm for the efficient and robust construction of the Voronoi diagram of circles. We demonstrate the effectiveness and feasibility of our proposed method by designing and printing support-free hollows for various 3D shapes using Poretron, a program which computes the hollows by embedding the appropriate APIs of the Voronoi Diagram Machine library that is freely available from the Voronoi Diagram Research Center. It takes a 3D mesh model and produces an STL file which can be either fed into a 3D printer or postprocessed.

47 citations


Journal ArticleDOI
03 Jul 2018-Sensors
TL;DR: Comparing this method to previous techniques using a Monte Carlo simulation on randomised polygons shows a significant reduction in flight time; the gain is large because ignoring wind for small, slow, fixed-wing aircraft is a considerable oversight.
Abstract: In this paper, a new method for planning coverage paths for fixed-wing Unmanned Aerial Vehicle (UAV) aerial surveys is proposed. Instead of the more generic coverage path planning techniques presented in previous literature, this method specifically concentrates on decreasing the flight time of fixed-wing aircraft surveys. This is achieved in three ways: by adding wind to the survey flight time model, by accounting for the fact that fixed-wing aircraft are not constrained to fly within the polygon of the region of interest, and by an intelligent method for decomposing the region into convex polygons conducive to quick flight times. It is shown that wind can make a huge difference to survey time, and that flying perpendicular to the wind can confer a flight time advantage. Small UAVs, which have very slow airspeeds, can easily find themselves flying in wind that is 50% of their airspeed. This is why the technique is shown to be so effective: ignoring wind for small, slow, fixed-wing aircraft is a considerable oversight. Comparing this method to previous techniques using a Monte Carlo simulation on randomised polygons shows a significant reduction in flight time.

44 citations
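The abstract's point about wind can be checked with back-of-the-envelope arithmetic: on an out-and-back transect, the time lost flying upwind always exceeds the time gained flying downwind. The sketch below uses illustrative numbers (a 14 m/s airspeed UAV in a 7 m/s wind, i.e. 50% of airspeed), not the paper's flight time model:

```python
def leg_time(length, airspeed, wind_along_track):
    """Time to fly a straight leg at constant airspeed, given the
    tail-wind (+) or head-wind (-) component along the track."""
    ground_speed = airspeed + wind_along_track
    if ground_speed <= 0:
        raise ValueError("aircraft cannot make headway against this wind")
    return length / ground_speed

airspeed, wind, leg = 14.0, 7.0, 1000.0   # m/s, m/s, m
calm = 2 * leg_time(leg, airspeed, 0.0)
windy = leg_time(leg, airspeed, wind) + leg_time(leg, airspeed, -wind)
# With wind at 50% of airspeed the round trip takes exactly 4/3 as long:
# 2Lv / (v^2 - w^2) versus 2L / v.
print(round(calm, 1), round(windy, 1))
```

This is why the survey direction relative to the wind matters so much for small, slow aircraft: the penalty grows as v^2 / (v^2 - w^2) and diverges as wind approaches airspeed.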


Journal ArticleDOI
TL;DR: A fast calculation method to obtain the full-analytical frequency spectrum of a spatial triangle based on the three-dimensional (3D) affine transformation is presented and the efficiency of the proposed method is enhanced as compared to that of previous works.
Abstract: A fast calculation method to obtain the full-analytical frequency spectrum of a spatial triangle based on the three-dimensional (3D) affine transformation is presented. Computer-generated holograms (CGHs) of an object can then be generated rapidly using the angular spectrum for propagation. The derivation, which is more precise, differs from previous methods based on affine transformations (Appl. Opt. 47, 1567 (2008); Appl. Opt. 52, A290 (2013)). The proposed method to achieve the 3D transformation from an arbitrary triangle to a primitive triangle includes two steps: 3D rotation and 2D affine transformation. The overall transform matrix is given by the product of a rotation matrix and a 2D affine matrix. A modified back-face culling based on exterior normals is also introduced for correct occlusion relations. Several complex 3D objects are implemented successfully using the proposed method in numerical simulations and optical experiments. The resulting computation time demonstrates that the efficiency of the proposed method is enhanced as compared to that of previous works.

37 citations


Book
11 Feb 2018
TL;DR: A parallel method for triangulating a simple polygon by two (parallel) calls to the trapezoidal map computation is given, which obtains an interesting partition of one-sided monotone polygons.
Abstract: We give a parallel method for triangulating a simple polygon by two (parallel) calls to the trapezoidal map computation. The method is simpler and more elegant than previous methods. Along the way we obtain an interesting partition of one-sided monotone polygons. Using the best-known trapezoidal map algorithm, ours runs in time O(log n) using O(n) CREW PRAM processors.

Journal ArticleDOI
TL;DR: The scaled boundary finite element formulation enables accurate computation of the stress intensity factors directly from the stress solutions without any special post-processing techniques or local mesh refinement in the vicinity of the crack tip.

Book
08 Feb 2018
TL;DR: It is shown that 2n + k tactile probes are sufficient to determine the shape of a convex polygon of n sides selected from a known finite set of polygons.
Abstract: We show that 2n + k tactile probes are sufficient to determine the shape of a convex polygon of n sides selected from a known finite set of polygons. This result improves on the 3n probe algorithm of Cole and Yap (1983) in the finite case. We show k = 3 under the assumptions of Cole and Yap, k = 2 under slightly stronger assumptions, and k = −1 under the assumptions of Schwartz and Sharir (1984).

Journal ArticleDOI
TL;DR: In this article, the singular behavior of the Helmholtz equation set in a non-convex polygon was analyzed and it was shown that for high frequency problems, the dominant part of the solution is the regular part.
Abstract: We analyze the singular behaviour of the Helmholtz equation set in a non-convex polygon. Classically, the solution of the problem is split into a regular part and one singular function for each re-entrant corner. The originality of our work is that the “amplitude” of the singular parts is bounded explicitly in terms of frequency. We show that for high frequency problems, the “dominant” part of the solution is the regular part. As an application, we derive sharp error estimates for finite element discretizations. These error estimates show that the “pollution effect” is not changed by the presence of singularities. Furthermore, a consequence of our theory is that locally refined meshes are not needed for high-frequency problems, unless a very accurate solution is required. These results are illustrated with numerical examples that are in accordance with the developed theory.

Proceedings ArticleDOI
18 Jun 2018
TL;DR: This work proposes a kinetic approach that brings more flexibility on polygon shape and size and demonstrates that output partitions both contain less polygons and better capture geometric structures than those delivered by existing methods.
Abstract: Recent works showed that floating polygons can be an interesting alternative to traditional superpixels, especially for analyzing scenes with strong geometric signatures, as man-made environments. Existing algorithms produce homogeneously-sized polygons that fail to capture thin geometric structures and over-partition large uniform areas. We propose a kinetic approach that brings more flexibility on polygon shape and size. The key idea consists in progressively extending pre-detected line-segments until they meet each other. Our experiments demonstrate that output partitions both contain less polygons and better capture geometric structures than those delivered by existing methods. We also show the applicative potential of the method when used as preprocessing in object contouring.

Journal ArticleDOI
TL;DR: In this article, scaled boundary polygon equations for saturated soil are established by applying the Galerkin method; the method possesses extraordinary mesh flexibility and fast reconstruction, which will make it a promising tool in liquefaction analysis.
Abstract: In this paper, the polygon scaled boundary finite element method is extended to analyze saturated soil based on the generalized Biot's dynamic consolidation theory. The displacement shape functions of the polygon element are obtained from elastic static theory while the pore pressure shape functions are constructed from steady-state seepage theory. Scaled boundary polygon equations for saturated soil are established by applying the Galerkin method. Two sets of Gauss points are adopted: line Gauss points used to compute the shape functions and area Gauss points employed to capture material nonlinearity. In order to verify and assess the reliability and accuracy of the presented method, a saturated elastic half space subjected to a uniform cyclic dynamic loading is simulated and the results are compared with the analytical solution. Moreover, a liquefaction analysis of a breakwater built on saturated sand with a generalized plastic model is subsequently carried out. The results correspond well with those calculated by the finite element method (FEM), which indicates the significant capability of the current method in solving nonlinear problems. The proposed method possesses extraordinary mesh flexibility and fast reconstruction, which will make it a promising tool in liquefaction analysis.

Proceedings ArticleDOI
06 Nov 2018
TL;DR: The solution is, in a well-defined sense, a locally optimal solution to the problem of choosing centers in the plane and choosing an assignment of people to those 2-d centers so as to minimize the sum of squared distances subject to the assignment being balanced.
Abstract: We consider the problem of political redistricting: given the locations of people in a geographical area (e.g. a US state), the goal is to decompose the area into subareas, called districts, so that the populations of the districts are as close as possible and the districts are "compact" and "contiguous," to use the terms referred to in most US state constitutions and/or US Supreme Court rulings. We study a method that outputs a solution in which each district is the intersection of a convex polygon with the geographical area. The average number of sides per polygon is less than six. The polygons tend to be quite compact. Every two districts differ in population by at most one (so we call the solution balanced). In fact, the solution is a centroidal power diagram: each polygon has an associated center in R^3 such that • the projection of the center onto the plane z = 0 is the centroid of the locations of people assigned to the polygon, and • for each person assigned to that polygon, the polygon's center is closest among all centers. The polygons are convex because they are the intersections of 3D Voronoi cells with the plane. The solution is, in a well-defined sense, a locally optimal solution to the problem of choosing centers in the plane and choosing an assignment of people to those 2D centers so as to minimize the sum of squared distances subject to the assignment being balanced. A practical problem with this approach is that, in real-world redistricting, exact locations of people are unknown. Instead, the input consists of polygons (census blocks) and associated populations. A real redistricting must not split census blocks. We therefore propose a second phase that perturbs the solution slightly so it does not split census blocks. In our experiments, the second phase achieves this while preserving perfect population balance.
The district polygons are no longer convex at the fine scale because their boundaries must follow the boundaries of census blocks, but at a coarse scale they preserve the shape of the original polygons.
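The power diagram at the heart of this method assigns each person to the center minimizing a power distance |p - c|^2 - w, where the weight w lets a center's cell grow or shrink to balance population. A minimal sketch of that assignment rule (a brute-force illustration with made-up coordinates, not the authors' optimization code):

```python
def power_assign(points, centers, weights):
    """Assign each point to the center minimizing the power distance
    |p - c|^2 - w, i.e. the power diagram cell it falls in."""
    def d2(p, c):
        return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
    return [min(range(len(centers)),
                key=lambda i: d2(p, centers[i]) - weights[i])
            for p in points]

points = [(0, 0), (1, 0), (2, 0), (3, 0)]
centers = [(0.5, 0), (2.5, 0)]
# Equal weights give a plain Voronoi split of 2 points per cell;
# raising the second weight grows its cell to absorb a third point.
print(power_assign(points, centers, [0.0, 0.0]))
print(power_assign(points, centers, [0.0, 4.5]))
```

In the full method, the centers and weights are iterated until every cell holds the same population and each center projects onto its cell's centroid; the sketch only shows why adjusting a weight enlarges or shrinks a cell.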

Journal ArticleDOI
TL;DR: A new approach for hull shape modification is proposed, based on a combination of the Subdivision Surface technique for hull surface modelling and Free Form Deformation for shape variation, which introduces significant simplification to the definition of the transformation.
Abstract: Techniques for shape representation and further modification of a hull surface definitely play a key role in both the design of new buildings and in the optimisation of the existing ones. The simplification of the methods is the goal to reach in order to create useful tools for real applications. A new approach for hull shape modification is proposed. It is based on a combination of the Subdivision Surface technique for hull surface modelling and Free Form Deformation for shape variation. The formal relation between the two methods is established by the Free Form Deformation control volume and the Subdivision Surface control polygon, introducing significant simplification to the definition of the transformation. The new approach is described in detail highlighting its benefits. Its effectiveness is finally proved by an example of application on a real hull shape, where a combination of a local and global modification has been analysed.

Journal ArticleDOI
TL;DR: In this paper, a new method is presented for estimating the modal mass ratios of buildings from unscaled mode shapes identified from ambient vibrations. It is based on the Multi Rigid Polygons (MRP) model, in which each floor of the building is ideally divided into several non-deformable polygons that move independently of each other.

Journal ArticleDOI
TL;DR: This letter presents a novel approach for extracting accurate outlines of individual buildings from very high-resolution (0.1–0.4 m) optical images and demonstrates that the approach is robust to different shapes of building roofs and outperforms the state-of-the-art method.
Abstract: This letter presents a novel approach for extracting accurate outlines of individual buildings from very high-resolution (0.1–0.4 m) optical images. Building outlines are defined as polygons here. Our approach operates on a set of straight line segments that are detected by a line detector. It groups a subset of detected line segments and connects them to form a closed polygon. Particularly, a new grouping cost is defined first. Second, a weighted undirected graph G(V,E) is constructed based on the endpoints of those extracted line segments. The building outline extraction is then formulated as a problem of searching for a graph cycle with the minimal grouping cost. To solve the graph cycle searching problem, the bidirectional shortest path method is utilized. Our method is validated on a newly created data set that contains 123 images of various building roofs with different shapes, sizes, and intensities. The experimental results with an average intersection-over-union of 90.56% and an average alignment error of 6.56 pixels demonstrate that our approach is robust to different shapes of building roofs and outperforms the state-of-the-art method.

Journal ArticleDOI
05 Jun 2018
TL;DR: In this article, the effect of particle edge geometry on the descent motion of free falling planar particles is examined through experiments, and a new length scale that accounts for the frontal area of the particles and its edge geometry (i.e. perimeter) is proposed.
Abstract: The effect of particle edge geometry on the descent motion of free-falling planar particles is examined through experiments. Various planar particles, such as disks and polygons, with identical frontal areas (A_p) and different numbers of edges (or perimeters) are used. All particles are designed such that their values of Galileo number (G) and dimensionless moment of inertia (I*) correspond to the previously identified fluttering regime of particle motion. Several modes of secondary motion are observed for the same particle and conditions, and these are not equally probable. This probability depends on the particle shape. Disks and heptagons were found to prefer a 'planar zig-zag' behaviour. These planar motions are composed of gliding sweeps and turning sections. As the number of sides in the polygon decreases, i.e. for hexagons and pentagons, the trajectories transition to a more three-dimensional form. These trajectories were found to be restricted to one plane per swing, but the subsequent swings are in other planes. A further decrease in the number of sides, to a square, results in trajectories having a severe out-of-plane motion. These sub-regimes of particle motion within the fluttering regime are consistent with those reported for disks in previous studies. Based on this information, a new length scale that accounts for the frontal area of the particles and its edge geometry (i.e. perimeter) is proposed. This length scale represents a first approach to determining an equivalent disk for planar particles such that the phase diagram in the Reynolds number (Re) - dimensionless moment of inertia (I*) domain can be used to characterise the motion of planar particles with different frontal geometries. However, further experiments covering other domains of the regime map are needed to verify its universality.

Journal ArticleDOI
19 Oct 2018
TL;DR: In this article, the polygon cloud, a compressible representation of three-dimensional geometry (including attributes, such as color), intermediate between polygonal meshes and point clouds, is introduced.
Abstract: We introduce the polygon cloud, a compressible representation of three-dimensional geometry (including attributes, such as color), intermediate between polygonal meshes and point clouds. Dynamic polygon clouds, like dynamic polygonal meshes and dynamic point clouds, can take advantage of temporal redundancy for compression. In this paper, we propose methods for compressing both static and dynamic polygon clouds, specifically triangle clouds. We compare triangle clouds to both triangle meshes and point clouds in terms of compression, for live captured dynamic colored geometry. We find that triangle clouds can be compressed nearly as well as triangle meshes, while being more robust to noise and other structures typically found in live captures, which violate the assumption of a smooth surface manifold, such as lines, points, and ragged boundaries. We also find that triangle clouds can be used to compress point clouds with significantly better performance than previously demonstrated point cloud compression methods. For intra-frame coding of geometry, our method improves upon octree-based intra-frame coding by a factor of 5–10 in bit rate. Inter-frame coding improves this by another factor of 2–5. Overall, our proposed method improves over the previous state-of-the-art in dynamic point cloud compression by 33% or more.

Proceedings ArticleDOI
01 Dec 2018
TL;DR: This paper proposes a novel extrinsic calibration approach for a Lidar (Laser Range Finder) and a camera based only on a polygon board, together with the relevant tools; the outcome indicates high-precision extrinsic calibration performance.
Abstract: Fusion of heterogeneous exteroceptive sensors is the most efficient and effective path to representing the environment precisely, as it can compensate for the various drawbacks of each homogeneous sensor. The rigid transformation (aka extrinsic parameters) of heterogeneous sensory systems is the prerequisite for fusing multi-sensor information. Researchers have proposed several approaches to estimate the extrinsic parameters. However, these approaches either rely on human intervention or a specifically designed auxiliary object, or do not provide a library, which makes them hard to test or benchmark. In this paper, we propose a novel approach for the extrinsic calibration of a Lidar (Laser Range Finder) and a camera based only on a polygon board, and we offer the relevant tools. We first track and extract the target polygon from both the image and the point cloud. Then we match the polygon between the 2D and 3D feature spaces. With the associated polygon, we are able to obtain multiple constraints to optimize the extrinsic parameters. Finally, we validate our approach in four configurations, including simulation, 16/32-beam Lidar, and 100-line MEMS-Lidar. The outcome indicates high-precision extrinsic calibration performance.

Journal ArticleDOI
TL;DR: By defining the visibility constraint of geographical features, this method is especially suitable for simplifying water areas as it is aligned with people’s visual habits.
Abstract: Line simplification is an important component of map generalization. In recent years, algorithms for line simplification have been widely researched, and most of them are based on vector data. However, with the increasing development of computer vision, analysing and processing information from unstructured image data is both meaningful and challenging. Therefore, in this paper, we present a new line simplification approach based on image processing (BIP), which is specifically designed for raster data. First, the key corner points on a multi-scale image feature are detected and treated as candidate points. Then, to capture the essence of the shape within a given boundary using the fewest possible segments, the minimum-perimeter polygon (MPP) is calculated and the points of the MPP are defined as the approximate feature points. Finally, the points after simplification are selected from the candidate points by comparing the distances between the candidate points and the approximate feature points. An empirical example was used to test the applicability of the proposed method. The results showed that (1) when the key corner points are detected based on a multi-scale image feature, the local features of the line can be extracted and retained and the positional accuracy of the proposed method can be maintained well; and (2) by defining the visibility constraint of geographical features, this method is especially suitable for simplifying water areas as it is aligned with people’s visual habits.
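For contrast with this image-based approach, the vector-data baseline that most prior line simplification work builds on is Ramer-Douglas-Peucker, sketched here (a standard textbook algorithm, not the BIP method proposed in the paper):

```python
def douglas_peucker(points, eps):
    """Classic vector line simplification: keep the endpoints, and
    recursively keep the point farthest from the chord whenever its
    perpendicular distance exceeds the tolerance eps."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right   # drop the duplicated split point

# A noisy polyline with one sharp corner near x = 2..3.
line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, 1.0))
```

The BIP method differs in that its candidate points come from multi-scale image corner detection and the minimum-perimeter polygon rather than from perpendicular distances on vector data.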

Journal ArticleDOI
TL;DR: A new topology optimization method based on basis functions is proposed for the design of rotating machines; it is shown to outperform conventional shape optimization based on polygon morphing in both interior permanent magnet and synchronous motor models.
Abstract: This paper presents a new topology optimization method based on basis functions for design of rotating machines. In this method, the core shape of a given rotating machine is represented by the linear combination of the basis functions. The shape is then freely deformed by changing the weighting coefficients to the basis functions to find the optimal shape. The proposed method is compared with the conventional shape optimization based on polygon morphing. The former is shown to outperform the latter in both interior permanent magnet and synchronous motor models.

Journal ArticleDOI
TL;DR: In this paper, a polygonal scaled boundary finite element method (SBFEM) is proposed for linear elastodynamics in two dimensions, where the domain is divided into non-overlapping polygonal elements and the scaled boundary finite element approach is employed over each polygon.

Journal ArticleDOI
TL;DR: In this article, the evolution of the vortex filament equation (VFE) for a regular M-corner polygon as initial datum can be explained at infinitesimal times as the superposition of M one corner initial data.
Abstract: In this paper, we give evidence that the evolution of the vortex filament equation (VFE) for a regular M-corner polygon as initial datum can be explained at infinitesimal times as the superposition of M one-corner initial data. This fact is mainly sustained with the calculation of the speed of the center of mass; in particular, we show that several conjectures made at the numerical level are in agreement with the theoretical expectations. Moreover, due to the spatial periodicity, the evolution of VFE at later times can be understood as the nonlinear interaction of infinitely many filaments, one for each corner; and this interaction turns out to be some kind of nonlinear Talbot effect. We also give very strong numerical evidence of the transfer of energy and linear momentum for the M-corner case; and the numerical experiments carried out provide new arguments that support the multifractal character of the trajectory defined by one of the corners of the initial polygon.

Journal ArticleDOI
TL;DR: The experimental result demonstrates that the S3 method can generate a high-definition hologram with qualified occlusion effect, and the computing complexity of the S3 method is lower than that of previous methods.
Abstract: In a polygon-based computer-generated hologram (CGH), the three-dimensional (3D) model is represented as a polygon, which consists of numerous small facets. Lighting effect, material texture, and surface property can be included in the polygonal model, which enables polygon-based CGH to realize high-fidelity 3D display. On the other hand, the occlusion effect is an important depth cue for 3D display. In polygon-based CGH, however, occlusion processing is difficult and time-consuming work. In this paper, we propose a simple and fast occlusion processing method, the slice-by-slice silhouette (S3) method, for generating the occlusion effect in polygon-based CGH. In the S3 method, the polygonal model is sliced into multiple thin segments. For each segment, a silhouette mask is generated and located at the backside of the segment. The incident light is first shaded by the mask and superimposes on the light emitted from the facets of the evaluated segment. In this way, every segment can be processed sequentially to get the resulting object light. Our experimental result demonstrates that the S3 method can generate a high-definition hologram with qualified occlusion effect. The computing complexity of the S3 method is lower than that of previous methods. In addition, the S3 method can be parallelized easily, and thus can be further sped up by applying a parallel computing framework, such as multi-core CPU or GPU.
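The sequential shade-then-superimpose step described above can be sketched as a simple compositing loop. This is a minimal sketch under strong simplifying assumptions: each segment is reduced to an emitted field `emit[k]` and a binary silhouette mask `mask[k]` already sampled on a common plane, and the inter-slice propagation step of a real CGH pipeline is omitted; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_slices, h, w = 4, 32, 32
# Per-segment emitted light (complex field) and binary silhouette mask.
emit = rng.normal(size=(n_slices, h, w)) + 1j * rng.normal(size=(n_slices, h, w))
mask = rng.random((n_slices, h, w)) > 0.7   # True where the segment occludes

field = np.zeros((h, w), dtype=complex)     # light arriving from behind the model
for k in range(n_slices):                   # process segments back to front
    field = field * (~mask[k])              # shade incident light by the mask
    field = field + emit[k] * mask[k]       # superimpose this segment's light

# `field` is the resulting object light; a real implementation would insert
# an angular-spectrum (or similar) propagation between consecutive slices.
```

The loop body is embarrassingly parallel across pixels, which is consistent with the abstract's remark that the method maps well onto multi-core CPU or GPU frameworks.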

Posted Content
TL;DR: This article developed a theory of multiplicities of roots for polynomials over hyperfields and used this to provide a unified and conceptual proof of both Descartes' rule of signs and Newton's polygon rule.
Abstract: We develop a theory of multiplicities of roots for polynomials over hyperfields and use this to provide a unified and conceptual proof of both Descartes' rule of signs and Newton's "polygon rule".
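The classical statement unified by the paper is easy to illustrate: Descartes' rule of signs says the number of positive real roots of a real polynomial (counted with multiplicity) equals the number of sign changes in its coefficient sequence, or falls short of it by an even number. The helper below is a hypothetical illustration of the sign-change count, not code from the paper:

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient list, ignoring zero coefficients."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# p(x) = x^3 - 3x + 2 = (x - 1)^2 (x + 2): two positive roots counted with
# multiplicity, matching the two sign changes in [1, 0, -3, 2].
print(sign_changes([1, 0, -3, 2]))  # → 2
```

Newton's polygon rule plays the analogous role over valued fields, bounding root valuations by the slopes of the polygon of the coefficients; the hyperfield framework is what lets both statements be proved as instances of one multiplicity theorem.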

Journal ArticleDOI
TL;DR: This article found that participants make more appropriate ps judgments when polygons are presented in their natural context of radar images than when they are presented in isolation, and that gradient displays appear to provide no appreciable benefit.
Abstract: To better understand people's interpretations of National Weather Service's tornado warning polygons, 145 participants were shown 22 hypothetical scenarios in one of four displays—deterministic polygon, deterministic polygon + radar image, gradient polygon, and gradient polygon + radar image. Participants judged each polygon's numerical strike probability (ps) and reported the likelihood of taking seven different response actions. The deterministic polygon display produced ps that were highest at the polygon's centroid and declined in all directions from there. The deterministic polygon + radar display, the gradient polygon display, and the gradient polygon + radar display produced ps that were high at the polygon's centroid and also at its edge nearest the tornadic storm cell. Overall, ps values were negatively correlated with expectations of resuming normal activities, but positively correlated with expectations of seeking information from social sources, seeking shelter, and evacuating by car. These results replicate the finding that participants make more appropriate ps judgments when polygons are presented in their natural context of radar images than when the polygons are presented in isolation and that gradient displays appear to provide no appreciable benefit. The fact that ps judgments had moderately positive correlations with both sheltering (a generally appropriate response) and evacuation (a generally inappropriate response) provides experimental confirmation that people threatened by actual tornadoes are conflicted about which protective action to take.