
Showing papers on "Delaunay triangulation published in 2021"


Journal ArticleDOI
TL;DR: In this paper, a dynamic fusion pathfinding algorithm (DFPA) based on Delaunay triangulation and an improved A-star (A*) algorithm is designed to improve the efficiency of mobile robot path planning; the concept of the grid is used to extract obstacle edges and provide obstacle-avoidance strategies for robot pathfinding.
Abstract: Although many studies exist on mobile robot path planning, the disadvantages of complex algorithms and many path nodes in logistics warehouses and manufacturing workshops are obvious, mainly due to the inconsistency of map environment construction and pathfinding strategies. In this study, to improve the efficiency of mobile robot path planning, the Delaunay triangulation algorithm was used to process complex obstacles and generate Voronoi points as pathfinding priority nodes. The concept of the grid was used to extract obstacle edges to provide obstacle avoidance strategies for robot pathfinding. Subsequently, the search for priority and regular path nodes used the improved A-star (A*) algorithm. The dynamic fusion pathfinding algorithm (DFPA), based on Delaunay triangulation and improved A*, was designed, which realizes the path planning of mobile robots. MATLAB 2016a was used as the simulation software, first to verify the correctness of the DFPA and then to compare the algorithm with other methods. The results show that, under an experimental environment with the same start point, goal point, and number of obstacles, the map construction method and pathfinding strategy proposed in this paper reduce the planned path length of the mobile robot, the number of path nodes, and the cost of overall turn consumption, and increase the success rate of obtaining a path. The new dynamic map construction method and pathfinding strategy have important reference significance for processing chaotic maps, promoting intelligent navigation, and site selection planning.
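
Not the paper's implementation, but a minimal sketch of its first step under synthetic data: the Voronoi diagram (dual of the Delaunay triangulation) turns an obstacle set into candidate priority nodes for the subsequent A* search. The map bounds and obstacle count are illustrative assumptions.

```python
# Hedged sketch: Voronoi vertices of the obstacle set serve as
# pathfinding priority nodes; coordinates and map bounds are synthetic.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
obstacles = rng.random((30, 2)) * 100   # toy obstacle vertices in a 100 x 100 map
vor = Voronoi(obstacles)                # Voronoi diagram, dual of the Delaunay triangulation

# Keep only the Voronoi vertices that fall inside the map as priority nodes.
in_map = np.all((vor.vertices >= 0) & (vor.vertices <= 100), axis=1)
priority_nodes = vor.vertices[in_map]
print(len(priority_nodes), "candidate priority nodes")
```

An A* search over these nodes plus the start and goal points would then run on top of this candidate set.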

51 citations


Journal ArticleDOI
TL;DR: By employing VT and the largest standard cluster analysis (LSCA) to quantify the neighborhood between atoms in an MD-simulated cooling of liquid metal Ta, VNA cases are indeed found in the system, and it is shown that as the degree of disorder of a system increases, the probability of VNA increases.

25 citations


Journal ArticleDOI
TL;DR: A Delaunay-triangulation-based Variable Neighborhood Search (DVNS) algorithm is developed to solve large-scale GCTSPs and outperforms the existing four genetic algorithms and two VNS methods in terms of search ability and convergence rate.
Abstract: A colored traveling salesman problem (CTSP) is a generalization of the well-known multiple traveling salesman problem. It utilizes colors to differentiate the accessibility of its cities to its salesmen. In our prior work, CTSPs are formulated over graphs associated with a city-color matrix. This work redefines a general colored traveling salesman problem (GCTSP) in the framework of hypergraphs and reveals several important properties of GCTSP. In GCTSP, the setting of city colors is richer than that in CTSPs. As a result, it can be used to model and address various complex scheduling problems. Then, a Delaunay-triangulation-based Variable Neighborhood Search (DVNS) algorithm is developed to solve large-scale GCTSPs. At the beginning stage of DVNS, a divide-and-conquer algorithm is exploited to prepare a Delaunay candidate set for lean insertion. Next, the incumbent solution is perturbed by utilizing greedy multi-insertion and exchange mutation to obtain a variety of neighborhoods. Subsequently, 2-opt and 3-opt are used for local search in turn. Extensive experiments are conducted on many large-scale GCTSP cases, the two largest of which involve 33,000+ cities with 4 salesmen and 11,000+ cities with 240 salesmen, respectively. The results show that the proposed method outperforms the existing four genetic algorithms and two VNS methods in terms of search ability and convergence rate.
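
As a rough illustration of the Delaunay candidate set idea (not DVNS itself; the city coordinates are synthetic), insertion and exchange moves can be restricted to each city's Delaunay neighbors, keeping per-city neighborhoods roughly constant in size:

```python
# Sketch: build a Delaunay candidate set so that insertion/exchange
# moves only consider geometric neighbors (average Delaunay degree ~6).
import numpy as np
from collections import defaultdict
from scipy.spatial import Delaunay

cities = np.random.rand(1000, 2)             # synthetic city coordinates
tri = Delaunay(cities)

candidates = defaultdict(set)
for s in tri.simplices:                      # every triangle contributes 3 edges
    for i in range(3):
        a, b = int(s[i]), int(s[(i + 1) % 3])
        candidates[a].add(b)
        candidates[b].add(a)
```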

24 citations


Journal ArticleDOI
TL;DR: In this article, a mixed Deep Energy Method (mDEM) is proposed to resolve fine features of the stress and displacement fields, for example, concentration features in solid mechanics applications, by introducing stress measures as an additional output of the NN alongside the recently introduced pure displacement formulation.

23 citations


Journal ArticleDOI
Qiyue Lyu, Junwei Luo, Ke Liu, Xiaolin Yin, Jiarui Liu, Wei Lu
TL;DR: A novel CMFD method, comprising a double matching process and a region localizing process, is proposed; it offers good robustness and outperforms some state-of-the-art methods.

19 citations


Journal ArticleDOI
14 Nov 2021-Sensors
TL;DR: In this paper, the 3D coordinates of each feature point are calculated by using the parameters of the parallel binocular endoscope and the spatial geometric constraints, and the discrete feature points are divided into multiple triangles using the Delaunay triangulation method.
Abstract: At present, feature-based 3D reconstruction and tracking technology is widely applied in the medical field. In minimally invasive surgery, the surgeon can achieve three-dimensional reconstruction through the images obtained by the endoscope in the human body, restore the three-dimensional scene of the area to be operated on, and track the motion of the soft tissue surface. This enables doctors to have a clearer understanding of the location and depth of the surgical area, greatly reducing the negative impact of 2D image defects and ensuring smooth operation. In this study, firstly, the 3D coordinates of each feature point are calculated by using the parameters of the parallel binocular endoscope and the spatial geometric constraints. At the same time, the discrete feature points are divided into multiple triangles using the Delaunay triangulation method. Then, the 3D coordinates of the feature points and the division results of each triangle are combined to complete the 3D surface reconstruction. Combined with a feature matching method based on a convolutional neural network, feature tracking is realized by calculating the three-dimensional coordinate changes of the same feature point in different frames. Finally, experiments are carried out on endoscope images to complete the 3D surface reconstruction and feature tracking.
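
A minimal sketch of the two geometric steps, assuming illustrative camera parameters (focal length f, baseline B, principal point cx/cy) and already-matched feature points: depth follows from the parallel-stereo disparity, and the mesh comes from a 2D Delaunay triangulation of the image-plane features.

```python
# Hedged sketch of parallel-binocular reconstruction; intrinsics,
# matches, and disparities below are synthetic placeholders.
import numpy as np
from scipy.spatial import Delaunay

f, B = 800.0, 4.0                        # assumed focal length (px) and baseline (mm)
cx, cy = 320.0, 240.0                    # assumed principal point
uv = np.random.rand(50, 2) * (640, 480)  # matched feature pixels in the left image
disparity = np.random.rand(50) * 30 + 5  # matched horizontal disparities (px)

Z = f * B / disparity                    # depth from the parallel stereo constraint
X = (uv[:, 0] - cx) * Z / f
Y = (uv[:, 1] - cy) * Z / f
points3d = np.stack([X, Y, Z], axis=1)

tri = Delaunay(uv)                       # triangulate the discrete features in 2D
facets = points3d[tri.simplices]         # lift the triangles to the 3D surface mesh
```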

15 citations


Journal ArticleDOI
Weiwei Wang, Han Wang, Fei Shaohua, Haijin Wang, Huiyue Dong, Yinglin Ke
TL;DR: In this article, an adaptive fiber shaking module is developed on the basis of Delaunay triangulation and combined with the proposed algorithm to coordinate the fiber volume fraction and fiber spacing parameters, overcoming the jamming limit and eliminating the unreasonable resin-rich zones that may appear at the boundary of the microstructures.

14 citations


Posted Content
TL;DR: This paper describes an efficient algorithm for computing a conformally equivalent metric for a discrete surface, possibly with boundary, exhibiting prescribed Gaussian curvature at all interior vertices and prescribed geodesic curvature along the boundary, building on the theory of [Gu et al. 2018; Springborn 2020].
Abstract: We describe an efficient algorithm to compute a conformally equivalent metric for a discrete surface, possibly with boundary, exhibiting prescribed Gaussian curvature at all interior vertices and prescribed geodesic curvature along the boundary. Our construction is based on the theory developed in [Gu et al. 2018; Springborn 2020], and in particular relies on results on hyperbolic Delaunay triangulations. Generality is achieved by considering the surface's intrinsic triangulation as a degree of freedom, and particular attention is paid to the proper treatment of surface boundaries. While via a double cover approach the boundary case can be reduced to the closed case quite naturally, the implied symmetry of the setting causes additional challenges related to stable Delaunay-critical configurations that we address explicitly in this work.

13 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed matching area selection algorithm can extract more characteristics of the gravity field and select more effective matching areas than the traditional algorithms.
Abstract: The matching area selection algorithm is one of the key technologies for underwater gravity-aided inertial navigation systems. The positioning accuracy and matching rate of gravity matching can be improved in high-quality matching areas. Traditional matching area selection algorithms usually use the statistical characteristic parameters of the gravity field, which makes it difficult to reflect the spatial characteristics of the gravity field. In order to explore more vector characteristics and spatial relation characteristics of the gravity field and find more potential matching areas, a matching area selection algorithm based on Delaunay triangulation is proposed in this article. The proposed algorithm establishes a three-dimensional model and extracts spatial feature parameters to analyze the gravity field. A comprehensive characteristics matrix is used to select matching areas by feature extraction and cluster analysis. The experimental results show that the proposed algorithm can extract more characteristics of the gravity field and select more effective matching areas than the traditional algorithms.
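
One way to picture such spatial feature parameters: triangulate a gravity-anomaly grid and read per-facet quantities, such as slope, off the triangulation. The field, grid, and feature below are synthetic illustrations, not the article's parameters.

```python
# Sketch: per-triangle slope of a synthetic gravity-anomaly surface as
# one candidate spatial feature derived from a Delaunay triangulation.
import numpy as np
from scipy.spatial import Delaunay

x, y = np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30))
g = np.sin(5 * x) * np.cos(4 * y)             # synthetic gravity anomaly
pts = np.stack([x.ravel(), y.ravel()], axis=1)
tri = Delaunay(pts)

p = np.concatenate([pts, g.ravel()[:, None]], axis=1)
a, b, c = (p[tri.simplices[:, k]] for k in range(3))
n = np.cross(b - a, c - a)                    # facet normals of the 3D model
slope = np.degrees(np.arccos(np.abs(n[:, 2]) / np.linalg.norm(n, axis=1)))
print("mean facet slope:", slope.mean())      # one spatial feature parameter
```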

12 citations


Journal ArticleDOI
TL;DR: To find an optimal tool path guaranteeing minimal production time and high quality of the workpiece, a mixed-integer linear programming problem is derived that takes thermal conduction and radiation during the process into account and aims to minimize temperature gradients inside the material.
Abstract: We consider two mathematical problems that are connected and occur in the layer-wise production process of a workpiece using wire-arc additive manufacturing. As the first task, we consider the automatic construction of a honeycomb structure, given the boundary of a shape of interest. In doing this, we employ Lloyd’s algorithm in two different realizations. For computing the incorporated Voronoi tessellation we consider the use of a Delaunay triangulation or, alternatively, the eikonal equation. We compare and modify these approaches with the aim of combining their respective advantages. Then, in the second task, to find an optimal tool path guaranteeing minimal production time and high quality of the workpiece, a mixed-integer linear programming problem is derived. The model takes thermal conduction and radiation during the process into account and aims to minimize temperature gradients inside the material. Its solvability for standard mixed-integer solvers is demonstrated on several test instances. The results are compared with manufactured workpieces.
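
For intuition, here is a discretized Lloyd iteration on a toy square domain (a stand-in for both the Voronoi- and eikonal-based realizations discussed above; the domain, seed count, and iteration count are arbitrary):

```python
# Sketch of Lloyd's algorithm on a sampled domain: assign samples to
# their nearest seed (a discrete Voronoi cell), then move each seed to
# its cell centroid, and repeat until the tessellation relaxes.
import numpy as np

rng = np.random.default_rng(1)
seeds = rng.random((20, 2))                               # honeycomb cell seeds
xs, ys = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
samples = np.stack([xs.ravel(), ys.ravel()], axis=1)

for _ in range(15):
    d = np.linalg.norm(samples[:, None, :] - seeds[None, :, :], axis=2)
    label = d.argmin(axis=1)                              # discrete Voronoi cells
    for k in range(len(seeds)):
        cell = samples[label == k]
        if len(cell):
            seeds[k] = cell.mean(axis=0)                  # centroid update
```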

11 citations


Journal ArticleDOI
28 Feb 2021-Symmetry
TL;DR: A new simplification algorithm for scattered point clouds with feature preservation is proposed, which can reduce the amount of data while retaining the features of the data; the method can be applied to models with different curvatures and effectively avoids the hole phenomenon during simplification.
Abstract: High-precision and high-density three-dimensional point cloud models usually contain redundant data, which implies extra time and hardware costs in the subsequent data processing stage. To analyze and extract data more effectively, the point cloud must be simplified before data processing. Given that point cloud simplification must be sensitive to features to ensure that more valid information can be saved, in this paper, a new simplification algorithm for scattered point clouds with feature preservation, which can reduce the amount of data while retaining the features of data, is proposed. First, the Delaunay neighborhood of the point cloud is constructed, and then the edge points of the point cloud are extracted by the edge distribution characteristics of the point cloud. Second, the moving least-square method is used to obtain the normal vector of the point cloud and the valley ridge points of the model. Then, potential feature points are identified further and retained on the basis of the discrete gradient idea. Finally, non-feature points are extracted. Experimental results show that our method can be applied to models with different curvatures and effectively avoid the hole phenomenon in the simplification process. To further improve the robustness and anti-noise ability of the method, the neighborhood of the point cloud can be extended to multiple levels, and a balance between simplification speed and accuracy needs to be found.

Journal ArticleDOI
TL;DR: GeoGraph as discussed by the authors is a framework for parallel geometric graph construction and parallel graph processing within the same environment, which supports graph construction based on k-nearest neighbors, Delaunay triangulation, and b-skeleton graphs.
Abstract: In many applications of graph processing, the input data is often generated from an underlying geometric point data set. However, existing high-performance graph processing frameworks assume that the input data is given as a graph. Therefore, to use these frameworks, the user must write or use external programs based on computational geometry algorithms to convert their point data set to a graph, which requires more programming effort and can also lead to performance degradation. In this paper, we present our ongoing work on the GeoGraph framework for shared-memory multicore machines, which seamlessly supports routines for parallel geometric graph construction and parallel graph processing within the same environment. GeoGraph supports graph construction based on k-nearest neighbors, Delaunay triangulation, and b-skeleton graphs. It can then pass these generated graphs to over 25 graph algorithms. GeoGraph contains high-performance parallel primitives and algorithms implemented in C++, and includes a Python interface. We present four examples of using GeoGraph, and some experimental results showing good parallel speedups and improvements over the Higra library. We conclude with a vision of future directions for research in bridging graph and geometric data processing.
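
GeoGraph's own API is not quoted in the abstract; the pipeline it streamlines looks roughly like this serial scipy + networkx equivalent (the library choice here is an assumption for illustration):

```python
# Rough stand-in for the point-set -> graph -> graph-algorithm pipeline.
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

pts = np.random.rand(1000, 2)            # underlying geometric point data
tri = Delaunay(pts)                      # geometric graph construction step

G = nx.Graph()
for s in tri.simplices:
    G.add_edges_from([(int(s[0]), int(s[1])),
                      (int(s[1]), int(s[2])),
                      (int(s[2]), int(s[0]))])

print(nx.number_connected_components(G))  # any graph routine can follow
```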

Journal ArticleDOI
TL;DR: The experimental results showed that this method can better resist image noise and repeated texture, can effectively detect and separate moving targets, and can match stereo image points in repeatedly textured areas more accurately and stably.
Abstract: To obtain effective indoor moving target localization, a reliable and stable moving target localization method based on binocular stereo vision is proposed in this paper. A moving target recognition extraction algorithm, which integrates displacement pyramid Horn–Schunck (HS) optical flow, Delaunay triangulation and Otsu threshold segmentation, is presented to separate a moving target from a complex background, called the Otsu Delaunay HS (O-DHS) method. Additionally, a stereo matching algorithm based on deep matching and stereo vision is presented to obtain dense stereo matching point pairs, called stereo deep matching (S-DM). The stereo matching point pairs of the moving target were extracted from the moving target area and the stereo deep matching point pairs, then the three-dimensional coordinates of the points in the moving target area were reconstructed according to the principle of the binocular vision parallel structure. Finally, the moving target was located by the centroid method. The experimental results showed that this method can better resist image noise and repeated texture, can effectively detect and separate moving targets, and can match stereo image points in repeatedly textured areas more accurately and stably. This method can effectively improve the effectiveness, accuracy and robustness of three-dimensional moving target localization.

Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this article, the authors leverage the properties of 2D Delaunay triangulations to construct a mesh from manifold surface elements and synchronize the local 2D projections of neighboring elements to maximize the manifoldness of reconstructed mesh.
Abstract: We present a method for reconstructing triangle meshes from point clouds. Existing learning-based methods for mesh reconstruction mostly generate triangles individually, making it hard to create manifold meshes. We leverage the properties of 2D Delaunay triangulations to construct a mesh from manifold surface elements. Our method first estimates local geodesic neighborhoods around each point. We then perform a 2D projection of these neighborhoods using a learned logarithmic map. A Delaunay triangulation in this 2D domain is guaranteed to produce a manifold patch, which we call a Delaunay surface element. We synchronize the local 2D projections of neighboring elements to maximize the manifoldness of the reconstructed mesh. Our results show that we achieve better overall manifoldness of our reconstructed meshes than current methods to reconstruct meshes with arbitrary topology. Our code, data and pretrained models can be found online: https://github.com/mrakotosaon/dse-meshing
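
A sketch of a single Delaunay surface element without the learned logarithmic map (a PCA projection stands in for it; the neighborhood size is arbitrary):

```python
# Project one point's neighborhood to its PCA tangent plane and
# triangulate there; a 2D Delaunay triangulation yields a manifold patch.
import numpy as np
from scipy.spatial import Delaunay, cKDTree

pts = np.random.rand(2000, 3)            # synthetic input point cloud
tree = cKDTree(pts)
_, idx = tree.query(pts[0], k=16)        # approximate local neighborhood
nbhd = pts[idx] - pts[idx].mean(axis=0)

_, _, vt = np.linalg.svd(nbhd)           # vt[:2] spans the tangent plane
uv = nbhd @ vt[:2].T                     # 2D projection (stand-in for the log map)
patch = Delaunay(uv).simplices           # manifold Delaunay surface element
```

The learned log map and the synchronization of neighboring projections are precisely the parts this sketch omits.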

Journal ArticleDOI
TL;DR: An automatic route design algorithm for intelligent ships that considers the impact of environmental factors and navigation safety is proposed; the experimental results show that the algorithm can provide an adaptive route scheme based on ship size and tide height.

Journal ArticleDOI
18 Jan 2021
TL;DR: In this paper, it was shown that the logarithmic volume of a Poisson-Delaunay simplex satisfies a central limit theorem in high dimensions, that is, as $n\to\infty$.
Abstract: Typical weighted random simplices $Z_{\mu}$, $\mu\in(-2,\infty)$, in a Poisson-Delaunay tessellation in $\mathbb{R}^n$ are considered, where the weight is given by the $(\mu+1)$st power of the volume. As special cases this includes the typical ($\mu=-1$) and the usual volume-weighted ($\mu=0$) Poisson-Delaunay simplex. By proving sharp bounds on cumulants it is shown that the logarithmic volume of $Z_{\mu}$ satisfies a central limit theorem in high dimensions, that is, as $n\to\infty$. In addition, rates of convergence are provided. In parallel, concentration inequalities as well as moderate deviations are studied. The set-up allows the weight $\mu=\mu(n)$ to depend on the dimension $n$ as well. A number of special cases are discussed separately. For fixed $\mu$ also mod-$\phi$ convergence and the large deviations behaviour of the logarithmic volume of $Z_{\mu}$ are investigated.
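
Spelled out, the central limit theorem here asserts convergence in distribution of the standardized logarithmic volume to a standard normal; in generic self-normalized form (the paper's explicit centering and scaling sequences are not reproduced here):

$$\frac{\log \mathrm{Vol}(Z_{\mu}) - \mathbb{E}\left[\log \mathrm{Vol}(Z_{\mu})\right]}{\sqrt{\operatorname{Var}\left(\log \mathrm{Vol}(Z_{\mu})\right)}} \xrightarrow{d} \mathcal{N}(0,1), \qquad n\to\infty.$$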

Journal ArticleDOI
TL;DR: In this paper, a feature preserving surface reconstruction algorithm is proposed to produce a high fidelity triangulated mesh from an input point set, where the concept of local Delaunay triangulation is applied to speed up the reconstruction procedure and to preserve features.

Journal ArticleDOI
TL;DR: A mesh-offset-based method capable of robust delta volume generation despite point cloud noise is proposed, which consists of four steps: preprocessing, point cloud extraction, mesh extraction, and delta volume extraction.
Abstract: Three-dimensional (3D) printing technology is an excellent tool for implementing multi-item, small-scale production or for manufacturing objects of complex shape, and has been utilized in many areas of daily life. One typical application is parts maintenance. For a partially damaged part to be repaired using a 3D printer, it is essential to generate a delta volume for the damaged area. A typical method of delta volume generation is to create a mesh using Delaunay triangulation or Poisson surface reconstruction from the point cloud of a laser scan of the damaged part and to perform a Boolean subtraction operation with the mesh of the original part. However, when generating the delta volume, this method is prone to error due to noise, non-uniform sampling, and missing data in the point cloud. To address this problem, we propose a mesh-offset-based method capable of robust delta volume generation despite point cloud noise. This method consists of four steps: preprocessing, point cloud extraction, mesh extraction, and delta volume extraction. To experimentally validate the proposed method, a prototype system was developed and a numerical implementation was performed for a partially damaged ball valve.


Journal ArticleDOI
26 Apr 2021
TL;DR: In this article, the rigidity of the hexagonal Delaunay triangulated plane under Luo's PL conformality is shown, yielding a rigidity theorem for a particular type of locally finite convex ideal hyperbolic polyhedra.
Abstract: We show the rigidity of the hexagonal Delaunay triangulated plane under Luo’s PL conformality. As a consequence, we obtain a rigidity theorem for a particular type of locally finite convex ideal hyperbolic polyhedra.

Journal ArticleDOI
04 Mar 2021
TL;DR: The results demonstrate that while the proposed approach improves the scalability of DT-SVM on large low-dimensional datasets, it allows SVM algorithms to maintain accuracy within an acceptable range at considerably lower time in both the training and testing phases, while using noticeably fewer SVs.
Abstract: Training a support vector machine (SVM) on large datasets is a slow, daunting process. Further, SVM becomes slow in the testing phase due to its large number of support vectors (SVs). This paper proposes an effective geometric algorithm based on construction of a Delaunay triangulation (DT) using the Quickhull algorithm, with a novel strategy to exactly identify and extract the boundary data points lying between the two classes of a dataset; it later uses these most informative data points as a reduced dataset to solve various SVM algorithms, and proposes new DT-SVM algorithms. Two synthetic datasets with sizes from 1K incrementally up to 500K are generated to extensively verify the effectiveness of the proposed DT-SVM algorithms over various data sizes, and for further assessment, the most efficient version of the proposed DT-SVM is applied to well-known benchmark datasets from the UCI Machine Learning Repository. Two variants of sequential minimal optimization (SMO) decomposition methods, in addition to the least-squares form of SVM, are implemented to present the scalability of the new DT-SVM algorithms on linear/nonlinear, separable/non-separable, large low-dimensional datasets. Moreover, the most efficient version of the proposed algorithm is compared to RCH-SK as a known geometric approach in the SVM literature. The results demonstrate that while the proposed approach improves the scalability of DT-SVM on large low-dimensional datasets, it allows SVM algorithms to maintain accuracy within an acceptable range at considerably lower time in both the training and testing phases, while using noticeably fewer SVs.
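
A hedged sketch of the geometric reduction idea (not the paper's exact boundary-extraction strategy): points incident to Delaunay edges whose endpoints carry different labels approximate the inter-class boundary, and an SVM can then be trained on that reduced set.

```python
# Keep only points on class-straddling Delaunay edges, then fit an SVM.
import numpy as np
from scipy.spatial import Delaunay
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.random((5000, 2))                          # synthetic 2D dataset
y = (X[:, 0] + 0.2 * rng.standard_normal(5000) > 0.5).astype(int)

tri = Delaunay(X)
boundary = set()
for s in tri.simplices:
    for i in range(3):
        a, b = int(s[i]), int(s[(i + 1) % 3])
        if y[a] != y[b]:                           # edge crosses the class boundary
            boundary.update((a, b))

idx = sorted(boundary)
clf = SVC(kernel="rbf").fit(X[idx], y[idx])        # train on the reduced set
print(len(idx), "of", len(X), "points retained")
```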

Journal ArticleDOI
16 Jun 2021
TL;DR: In this paper, a novel algorithm is presented for automatically extracting utility poles and cables from MTLS point clouds in three consecutive phases: pre-processing, pole extraction, and cable extraction.
Abstract: Monitoring, maintaining, and organizing power line corridors are of great importance because they are a primary means of transferring generated electricity from power stations to surrounding areas. Mobile Terrestrial Laser Scanning (MTLS) systems have significant potential for efficiently creating a power line infrastructure inventory. In this paper, a novel algorithm is presented for automatically extracting utility poles and cables from MTLS point clouds in three consecutive phases: pre-processing, pole extraction, and cable extraction. In the pre-processing step, after dividing the MTLS data into several tiles or sections along the road and using trajectory data, noisy points and low-height points are eliminated from each section. Next, search areas containing lines are detected using a Hough Transform (HT) algorithm, and utility poles are identified based on horizontal and vertical density information. The search area for cables is estimated using a two-dimensional (2D) Delaunay Triangulation (DT) with the center points of the extracted poles as vertices. In each search area, high-density points are removed as non-cable points, and utility cables are eventually extracted by fitting cable points to polynomial equations. The algorithm was tested on three different MTLS point clouds from a 1371 m urban road section and 2800 m and 500 m non-urban road sections. Each of these datasets has unique challenges and was used to evaluate the efficiency of the proposed algorithm under different conditions. The algorithm was able to extract poles with an average correctness of 100% (no false positives) and a completeness of 97%. Similarly, an average correctness and completeness of 100% and 95.6% were attained for cables, respectively. These detection levels show that the proposed method for power line extraction from an MTLS point cloud is both reliable and feasible.
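
Illustrative sketch of the cable search-area step (pole coordinates are synthetic): once pole centers are known, the edges of their 2D Delaunay triangulation define the candidate pole-to-pole spans in which to search for cable points.

```python
# Enumerate candidate cable spans as Delaunay edges between pole centers.
import numpy as np
from scipy.spatial import Delaunay

poles = np.random.rand(40, 2) * 1000     # extracted pole centers (m)
tri = Delaunay(poles)

spans = set()
for s in tri.simplices:
    for i in range(3):
        spans.add(tuple(sorted((int(s[i]), int(s[(i + 1) % 3])))))
# Each edge defines a search corridor for cable points between two poles.
print(len(spans), "candidate spans")
```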

Journal ArticleDOI
TL;DR: The proposed content-based medical image retrieval (CBMIR) technique uses an automated image segmentation technique based on Delaunay triangulation (DT) to segment the lung (the region of interest) from the original medical image.
Abstract: This article presents a novel technique for retrieval of lung images from the collection of medical CT images. The proposed content-based medical image retrieval (CBMIR) technique uses an automated...

Journal ArticleDOI
TL;DR: This study reports on a developed approach for shape and defectiveness characterization that combines Delaunay triangulation of an object's contour with a uniformity analysis of the triangles in the generated mesh, using the developed metric "Implenarity".

Journal ArticleDOI
TL;DR: The effectiveness of the proposed methodology in simulations of quasi-brittle fracture is demonstrated using the edge-based smoothed finite element method (ES-FEM) and an isotropic damage model.
Abstract: In this study, an effective numerical framework for fracture simulations is proposed using the edge-based smoothed finite element method (ES-FEM) and isotropic damage model. The duality between the...

Journal ArticleDOI
TL;DR: A conflict-avoiding approach is proposed to predict pedestrians' trajectories based on the Delaunay triangulation graph, which can model the crowd hierarchically; an information selection mechanism for pedestrian motion, which updates the cell state of the LSTM with a new social conflict gate, is also added.
Abstract: Pedestrian trajectory prediction is important for understanding human social behavior. Given the complex nature of crowd dynamics, it remains a challenging task. Recent studies based on LSTM or GAN have made great progress in sequence prediction, but they still suffer from limitations in modeling neighborhoods and handling pedestrian interactions. To address these problems, we propose a conflict-avoiding approach to predict pedestrians’ trajectories based on the Delaunay triangulation graph, which can model the crowd hierarchically. Meanwhile, a middle-level semantic feature is adopted to represent pedestrians’ dynamic interactions in the Delaunay triangulation graph. Besides, to evaluate the effect of an additional semantic feature for the LSTM, we add an information selection mechanism for pedestrian motion which updates the cell state of the LSTM with a new social conflict gate. Furthermore, the results on two public datasets, BIWI and UCY, reveal that the proposed conflict-avoiding approach is excellent in terms of stability and validity. Our experimental results demonstrate that our method can predict over the same time span using a shorter observation period than state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: An algorithm based on Delaunay triangulation is proposed in which the maximum boundary angle threshold is introduced as the parameter for extracting the rough boundary, after which fine boundary extraction is conducted for local areas such as concave envelopes and holes.
Abstract: It is difficult to extract the boundary of complex planar point sets with a nonuniform distribution of point density, concave envelopes, and holes. To solve this problem, an algorithm is proposed in this paper. Based on Delaunay triangulation, the maximum boundary angle threshold is introduced as the parameter in the extraction of the rough boundary. Then, the point looseness threshold is introduced, and fine boundary extraction is conducted for local areas such as concave envelopes and holes. Finally, the complete boundary of the whole point set is obtained. The effectiveness of the proposed algorithm is verified by experiments on simulated and practically measured point sets. The experimental results indicate that it is more widely applicable and more effective in engineering applications than the state-of-the-art boundary construction algorithms based on Delaunay triangulation.
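
The paper's angle and looseness thresholds are not reproduced here; a common Delaunay-based surrogate conveys the flavor: drop triangles with an over-long edge, then keep edges bordered by exactly one surviving triangle, which exposes the outer boundary as well as hole boundaries.

```python
# Alpha-shape-style boundary surrogate on a synthetic planar point set.
import numpy as np
from collections import Counter
from scipy.spatial import Delaunay

pts = np.random.rand(800, 2)
tri = Delaunay(pts)

def longest_edge(s):
    return max(np.linalg.norm(pts[s[i]] - pts[s[(i + 1) % 3]]) for i in range(3))

keep = [s for s in tri.simplices if longest_edge(s) < 0.08]  # illustrative threshold
edges = Counter(tuple(sorted((int(s[i]), int(s[(i + 1) % 3]))))
                for s in keep for i in range(3))
boundary = [e for e, cnt in edges.items() if cnt == 1]       # hull and hole edges
```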

Journal ArticleDOI
Shihao Dong, Sen Han, Yuanyuan Yin, Zhuang Zhang, Tengfei Yao
TL;DR: In this paper, a 3D pavement macro-texture finite element (FE) model based on point cloud data obtained by a laser texture scanner was proposed for tire-pavement interaction FE analysis.

Journal ArticleDOI
Shankar Prasad Sastry
TL;DR: In this paper, the authors proposed a generalization of Chew's first algorithm for Delaunay mesh refinement by splitting the line segments of an input planar straight line graph such that the lengths of split segments are asymptotically proportional to the local feature size at their endpoints.
Abstract: I present a generalization of Chew's first algorithm for Delaunay mesh refinement. I split the line segments of an input planar straight line graph (PSLG) such that the lengths of split segments are asymptotically proportional to the local feature size at their endpoints. By employing prior algorithms, I then refine the truly or constrained Delaunay triangulation of the PSLG by inserting off-center Steiner vertices of "skinny" triangles while prioritizing triangles with the shortest edges first. This technique inserts Steiner vertices in an advancing front manner such that we obtain a size-optimal, truly or constrained Delaunay mesh if the desired minimum angle is less than 30° (in the absence of small input angles). This is an improvement over prior algorithms that produce size-optimal meshes with minimum angles of about 26.4° and 28.6° for truly and constrained Delaunay meshes, respectively. Even in the presence of small input angles, the upper bound on the maximum angle is an angle strictly greater than 120° (an improvement from about 137°). The lower bound on the minimum angle in the presence of small angles is identical to prior bounds.
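
The refinement trigger itself is simple to state: a triangle is "skinny" and warrants a Steiner insertion when its minimum angle falls below the target bound (the 30° figure above). A tiny helper, with the off-center insertion logic itself omitted:

```python
# Minimum-angle test for the "skinny triangle" refinement criterion.
import numpy as np

def min_angle_deg(a, b, c):
    # angles at each vertex via the dot-product (law of cosines) formula
    pts = [np.asarray(p, float) for p in (a, b, c)]
    angs = []
    for i in range(3):
        u = pts[(i + 1) % 3] - pts[i]
        v = pts[(i + 2) % 3] - pts[i]
        cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        angs.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return min(angs)

print(min_angle_deg((0, 0), (1, 0), (0.5, 0.05)) < 30)   # skinny -> True
```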

Posted Content
TL;DR: DeepDT, proposed in this paper, reconstructs a surface from the Delaunay triangulation of a point cloud: local geometry features are extracted from the input point cloud and aggregated into a graph derived from the Delaunay triangulation, after which graph filtering is applied to the aggregated features to add structural regularization to the label prediction.
Abstract: In this paper, a novel learning-based network, named DeepDT, is proposed to reconstruct a surface from the Delaunay triangulation of a point cloud. DeepDT learns to predict inside/outside labels of Delaunay tetrahedrons directly from a point cloud and the corresponding Delaunay triangulation. Local geometry features are first extracted from the input point cloud and aggregated into a graph derived from the Delaunay triangulation. Then graph filtering is applied to the aggregated features in order to add structural regularization to the label prediction of tetrahedrons. Due to the complicated spatial relations between tetrahedrons and triangles, it is impossible to directly generate ground truth labels of tetrahedrons from the ground truth surface. Therefore, we propose a multi-label supervision strategy which votes for the label of a tetrahedron with the labels of sampling locations inside it. The proposed DeepDT can maintain abundant geometric detail without generating overly complex surfaces, especially for the inner surfaces of open scenes. Meanwhile, the generalization ability and time consumption of the proposed method are acceptable and competitive compared with state-of-the-art methods. Experiments demonstrate the superior performance of the proposed DeepDT.
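
A toy rendering of the multi-label supervision strategy (the network itself is omitted; a unit sphere stands in for the ground truth surface, and the sample count is arbitrary):

```python
# Vote for a Delaunay tetrahedron's inside/outside label from sampled
# locations inside it, labeled against a synthetic ground-truth surface.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
pts = rng.standard_normal((300, 3))
tet = Delaunay(pts)                            # 3D: simplices are tetrahedrons

def tet_label(simplex, samples=32):
    # barycentric samples inside the tetrahedron, then a majority vote
    w = rng.dirichlet(np.ones(4), size=samples)
    loc = w @ pts[simplex]
    inside = np.linalg.norm(loc, axis=1) < 1.0  # "ground truth": unit sphere
    return int(inside.mean() > 0.5)

labels = np.array([tet_label(s) for s in tet.simplices])
```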