
Showing papers on "Distance transform published in 2004"


Proceedings ArticleDOI
19 Jul 2004
TL;DR: An algorithm that inverts the image formation process to recover a good-visibility image of the object is presented; it achieved a great improvement in scene contrast and color correction and nearly doubled the underwater visibility range.
Abstract: Underwater imaging is important for scientific research and technology, as well as for popular activities. We present a computer vision approach which easily removes degradation effects in underwater vision. We analyze the physical effects of visibility degradation. We show that the main degradation effects can be associated with partial polarization of light. We therefore present an algorithm which inverts the image formation process, to recover a good-visibility image of the object. The algorithm is based on a pair of images taken through a polarizer at different orientations. As a by-product, a distance map of the scene is derived as well. We successfully used our approach when experimenting in the sea using a system we built. We obtained a great improvement in scene contrast and color correction, and nearly doubled the underwater visibility range.
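The per-pixel recovery step can be sketched along the lines of the standard polarization-based model: estimate the backscatter from the difference of the two polarizer images, then compensate for it and for the transmittance of the water column. The variable names and the clamping of the transmittance below are illustrative assumptions, not the paper's exact implementation:

```python
def recover_underwater(i_max, i_min, p, b_inf):
    """Sketch of polarization-based visibility recovery.

    i_max, i_min: per-pixel intensities through the polarizer at the
    orientations of maximal and minimal backscatter.
    p: degree of polarization of the backscatter (estimated, 0 < p <= 1).
    b_inf: backscatter value at infinite distance.
    """
    total = i_max + i_min                  # total measured intensity
    backscatter = (i_max - i_min) / p      # estimated backscatter B
    # transmittance of the water column; a distance map follows from
    # distance ~ -log(t) / attenuation_coefficient (the by-product
    # mentioned in the abstract)
    t = max(1e-6, 1.0 - backscatter / b_inf)
    radiance = (total - backscatter) / t   # recovered object radiance
    return radiance, t
```

Applying the same two formulas to every pixel yields the dehazed image and the distance map together.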

283 citations


Journal ArticleDOI
01 Sep 2004
TL;DR: An algorithm for fast computation of discretized 3D distance fields of large models composed of tens of thousands of primitives on high resolution grids using graphics hardware and achieves an order of magnitude improvement in the running time.
Abstract: We present an algorithm for fast computation of discretized 3D distance fields using graphics hardware. Given a set of primitives and a distance metric, our algorithm computes the distance field for each slice of a uniform spatial grid by rasterizing the distance functions of the primitives. We compute bounds on the spatial extent of the Voronoi region of each primitive. These bounds are used to cull and clamp the distance functions rendered for each slice. Our algorithm is applicable to all geometric models and does not make any assumptions about connectivity or a manifold representation. We have used our algorithm to compute distance fields of large models composed of tens of thousands of primitives on high resolution grids. Moreover, we demonstrate its application to medial axis evaluation and proximity computations. As compared to earlier approaches, we are able to achieve an order of magnitude improvement in the running time.
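A rough CPU analogue of the slice-by-slice computation is to evaluate each primitive's distance function at every grid point and keep the minimum, which is what the GPU rasterization effectively does. The point primitives and the absence of Voronoi-based culling below are simplifications for illustration:

```python
import math

def distance_field(grid_shape, primitives, spacing=1.0):
    """Brute-force analogue of the slice-by-slice GPU computation:
    for each slice, evaluate every primitive's distance function at
    each grid point and keep the minimum.  Primitives here are just
    points (x, y, z); real primitives would supply their own distance
    functions, and Voronoi-region bounds would cull most of them."""
    nx, ny, nz = grid_shape
    field = [[[min(math.dist((x * spacing, y * spacing, z * spacing), p)
                   for p in primitives)
               for x in range(nx)]
              for y in range(ny)]
             for z in range(nz)]          # one 2D slice per z value
    return field
```

The culling described in the paper matters precisely because this naive version costs O(grid points × primitives).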

171 citations


Journal ArticleDOI
06 Oct 2004
TL;DR: The algorithm decomposes the polyhedral objects into convex pieces, generates pairwise convex Minkowski sums and computes their union by generating a voxel grid, computing signed distance on the grid points and performing isosurface extraction from the distance field.
Abstract: We present an algorithm to approximate the 3D Minkowski sum of polyhedral objects. Our algorithm decomposes the polyhedral objects into convex pieces, generates pairwise convex Minkowski sums and computes their union. We approximate the union by generating a voxel grid, computing signed distance on the grid points and performing isosurface extraction from the distance field. The accuracy of the algorithm is mainly governed by the resolution of the underlying volumetric grid. Insufficient resolution can result in unwanted handles or disconnected components in the approximation. We use an adaptive subdivision algorithm that overcomes these problems by generating a volumetric grid at an appropriate resolution. We guarantee that our approximation has the same topology as the exact Minkowski sum. We also provide a two-sided Hausdorff distance bound on the approximation. Our algorithm is relatively simple to implement and works well on complex models. We have used it for exact 3D translation motion planning, offset computation, mathematical morphological operations and bounded-error penetration depth estimation.

157 citations


Patent
Yong Rui1
21 Oct 2004
TL;DR: In this article, a hierarchical per-feature approach is used to compare images and a distance is calculated between query vectors and corresponding low-level feature vectors extracted from the particular image.
Abstract: An improved image retrieval process based on relevance feedback uses a hierarchical (per-feature) approach in comparing images. Multiple query vectors are generated for an initial image by extracting multiple low-level features from the initial image. When determining how closely a particular image in an image collection matches the initial image, a distance is calculated between the query vectors and corresponding low-level feature vectors extracted from the particular image. Once these individual distances are calculated, they are combined to generate an overall distance that represents how closely the two images match. According to other aspects, relevancy feedback received regarding previously retrieved images is used during the query vector generation and the distance determination to influence which images are subsequently retrieved.
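The per-feature comparison and its combination into an overall distance can be sketched as follows. A Euclidean per-feature distance and a weighted sum for the combination are assumptions; the patent leaves the exact combination open, noting only that relevance feedback influences it:

```python
import math

def feature_distance(q, v):
    """Euclidean distance between one query vector and the matching
    low-level feature vector (e.g. a color histogram)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(q, v)))

def overall_distance(query_vectors, image_vectors, weights):
    """Combine the per-feature distances into one overall distance.
    A weighted sum is assumed here; relevance feedback would adjust
    the weights to emphasize features users have marked as relevant."""
    return sum(w * feature_distance(q, v)
               for w, q, v in zip(weights, query_vectors, image_vectors))
```

Images in the collection would then be ranked by ascending overall distance to the query.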

113 citations


Proceedings ArticleDOI
08 Oct 2004
TL;DR: The proposed MetaMorph deformable models are efficient in convergence, have a large attraction range, and are robust to image noise and inhomogeneities; various examples demonstrate the potential of the proposed technique.
Abstract: We present a new class of deformable models, MetaMorphs, that consist of both shape and interior texture. The model deformations are derived from both boundary and region information in a common variational framework. This framework represents a generalization of previous model-based segmentation approaches. The shapes of the new models are represented implicitly as "images" in the higher dimensional space of distance transforms. The interior textures are captured using a nonparametric kernel-based approximation of the intensity probability density functions (p.d.f.s) inside the models. The deformations that MetaMorph models can undergo are defined using a space warping technique - the cubic B-spline based Free Form Deformations (FFD). When using the models for boundary finding in images, we derive the model dynamics from an energy functional consisting of both edge energy terms and intensity/texture energy terms. This way, the models deform under the influence of forces derived from both boundary and regional information. The proposed MetaMorph deformable models are efficient in convergence, have a large attraction range, and are robust to image noise and inhomogeneities. Various examples on finding object boundaries in noisy images with complex textures demonstrate the potential of the proposed technique.

105 citations


Journal ArticleDOI
TL;DR: A parametric and feature-based methodology for the design of solids with local composition control (LCC) that allows the designer to simultaneously edit geometry and composition by varying parameters until a satisfactory result is attained.
Abstract: This paper presents a parametric and feature-based methodology for the design of solids with local composition control (LCC). A suite of composition design features are conceptualized and implemented. The designer can use them singly or in combination, to specify the composition of complex components. Each material composition design feature relates directly to the geometry of the design, often relying on user interaction to specify critical aspects of the geometry. This approach allows the designer to simultaneously edit geometry and composition by varying parameters until a satisfactory result is attained. The identified LCC features are those based on volume, transition, pattern, and (user-defined) surface features. The material composition functions include functions parametrized with respect to distance or distances to user-defined geometric features; and functions that use Laplace's equation to blend smoothly various boundary conditions including values and gradients of the material composition on the boundaries. The Euclidean digital distance transform and the Boundary Element Method are adapted to the efficient computation of composition functions. Theoretical and experimental complexity, accuracy and convergence analyses are presented. The representations underlying the composition design features are analytic in nature and therefore concise. Evaluation for visualization and fabrication is performed only at the resolutions required for these purposes, thereby reducing the computational burden.

84 citations


Book ChapterDOI
01 Dec 2004
TL;DR: It is shown that the Euclidean distance squared transform requires fewer computations than the commonly used 5x5 chamfer transform.
Abstract: Within image analysis the distance transform has many applications. The distance transform measures the distance of each object point from the nearest boundary. For ease of computation, a commonly used approximate algorithm is the chamfer distance transform. This paper presents an efficient linear-time algorithm for calculating the true Euclidean distance-squared of each point from the nearest boundary. It works by performing a 1D distance transform on each row of the image, and then combines the results in each column. It is shown that the Euclidean distance squared transform requires fewer computations than the commonly used 5x5 chamfer transform.
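The row-then-column structure described above can be sketched directly. The row pass below matches the paper's 1D transform; for brevity the column pass uses a simple O(n²) scan per column rather than the paper's linear-time combine, so this is a sketch of the decomposition, not of the full complexity result:

```python
INF = float("inf")

def edt_squared(binary):
    """Two-pass squared Euclidean distance transform (sketch).
    binary[y][x] is True for object/boundary pixels.  Pass 1 finds,
    per row, the horizontal distance to the nearest object pixel;
    pass 2 combines the rows down each column."""
    h, w = len(binary), len(binary[0])
    # pass 1: 1D distance along each row (forward then backward scan)
    g = [[INF] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if binary[y][x]:
                g[y][x] = 0
            elif x > 0:
                g[y][x] = g[y][x - 1] + 1
        for x in range(w - 2, -1, -1):
            g[y][x] = min(g[y][x], g[y][x + 1] + 1)
    # pass 2: for each column, minimize g[j][x]^2 + (y - j)^2 over j
    dist2 = [[0] * w for _ in range(h)]
    for x in range(w):
        for y in range(h):
            dist2[y][x] = min(g[j][x] ** 2 + (y - j) ** 2
                              for j in range(h))
    return dist2
```

Because the row distances are exact, the column minimization yields the true squared Euclidean distance, unlike a chamfer approximation.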

80 citations


Journal ArticleDOI
TL;DR: This paper presents a method for particle picking based on shape feature detection, which has been successfully applied to detect particles with approximately circular or rectangular shapes and the extension of this approach to other types of particles with certain geometric features.

70 citations


Patent
12 Oct 2004
TL;DR: In this paper, a method of processing a digital image to produce an improved digital image was proposed, which includes determining a first vanishing point associated with the digital image, determining a second vanishing point corresponding to a direction orthogonal to the first vanishing point, and determining a transform for modifying the digital image based on the first and second vanishing points.
Abstract: A method of processing a digital image to produce an improved digital image, includes receiving the digital image captured with a camera; determining a first vanishing point associated with the digital image; determining a second vanishing point associated with the digital image corresponding to a direction orthogonal to the first vanishing point; determining a transform for modifying the digital image based on the first vanishing point and the second vanishing point; and applying the transform to the digital image to produce an improved digital image.

63 citations


Proceedings ArticleDOI
02 Oct 2004
TL;DR: This work evaluates and compares the performances of watershed segmentation for binary images with different distance transforms including Euclidean, City block and Chessboard.
Abstract: This work evaluates and compares the performances of watershed segmentation for binary images with different distance transforms including Euclidean, City block and Chessboard.
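The three distance transforms being compared differ only in the underlying metric. A brute-force version makes the distinction concrete (watershed segmentation would then flood the resulting distance map); fast two-pass algorithms exist for all three metrics, but the naive form below is enough to show their different shapes:

```python
def distance_transform(binary, metric):
    """Brute-force distance transform of a binary image under the
    three metrics compared in the paper.  binary[y][x] True marks
    object pixels; each pixel gets its distance to the nearest
    object pixel under the chosen metric."""
    metrics = {
        "euclidean":  lambda dy, dx: (dy * dy + dx * dx) ** 0.5,
        "cityblock":  lambda dy, dx: abs(dy) + abs(dx),
        "chessboard": lambda dy, dx: max(abs(dy), abs(dx)),
    }
    d = metrics[metric]
    h, w = len(binary), len(binary[0])
    obj = [(y, x) for y in range(h) for x in range(w) if binary[y][x]]
    return [[min(d(y - oy, x - ox) for oy, ox in obj)
             for x in range(w)]
            for y in range(h)]
```

The city-block and chessboard metrics produce diamond- and square-shaped isolines respectively, which is what changes the watershed lines in the comparison.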

62 citations


Patent
23 Mar 2004
TL;DR: In this paper, a method and system for automatically generating embroidery designs from a scanned image is described, where the scanned pattern is broken up into pixels, each pixel in the scanned image having a bitmap associated with the color of the pattern.
Abstract: A method and system are disclosed for automatically generating embroidery designs from a scanned image. An embroidery data generating mechanism generates accurate embroidery designs. The embroidery data generating mechanism first reads an image data file, which contains bitmapping information generated from a software scanning tool, the information being related to an embroidery pattern that has been scanned. The scanned pattern is broken up into pixels, each pixel in the scanned image having a bitmap associated with the color of the pattern. Each unique color in the scanned pattern has its own unique bitmap. The embroidery generating mechanism also includes a segmentation mechanism and a chain-encoding mechanism which perform operations to enhance the quality of the bitmapped information and to separate regions of the scanned image into objects. A distance transform evaluation mechanism classifies each object as being either a thick object or a thin, predominantly regular object. Additional mechanisms further interpret the objects into entities such as regular and singular regions and compute optimum sewing paths for embroidery data generation.

Patent
30 Aug 2004
TL;DR: In this paper, a system for automatic counting of non-overlapping objects irrespective of shape, size, and color is provided by imaging and computer subsystems, where objects to be counted are placed on a transparent surface disposed on a diffusing surface uniformly irradiated by electromagnetic radiation sources.
Abstract: A system for automatic counting of non-overlapping objects irrespective of shape, size, and color is provided by imaging and computer subsystems. Objects to be counted are placed on a transparent surface disposed on a diffusing surface uniformly irradiated by electromagnetic radiation sources. Low intensity object shadows and high intensity object background regions are digitally imaged. The digital image is converted by a computing unit to a binary image and subjected to the Distance Transform to determine a count of the objects. Object identification verification is provided by comparing identification information obtained from a bar code associated with a supply container of the objects and identification information obtained from a digitally imaged written request.

Journal ArticleDOI
TL;DR: This work addresses the issue of low-level segmentation for real-valued images in terms of an energy partition of the image domain using a framework based on measuring a pseudo-metric distance to a source point.
Abstract: We address the issue of low-level segmentation for real-valued images. The proposed approach relies on the formulation of the problem in terms of an energy partition of the image domain. In this framework, an energy is defined by measuring a pseudo-metric distance to a source point. Thus, the choice of an energy and a set of sources determines a tessellation of the domain. Each energy acts on the image at a different level of analysis; through the study of two types of energies, two stages of the segmentation process are addressed. The first energy considered, the path variation, belongs to the class of energies determined by minimal paths. Its application as a pre-segmentation method is proposed. In the second part, where the energy is induced by an ultrametric, the construction of hierarchical representations of the image is discussed.

Journal Article
TL;DR: In this paper, a new method capable of segmenting welding discontinuities using robust digital image processing techniques, which include noise attenuation filters, morphological mathematical operators and edge detection techniques such as the canny filter, the watershed transform and the distance transform, was presented.
Abstract: This work presents a new method capable of segmenting welding discontinuities using robust digital image processing techniques, which include noise attenuation filters, mathematical morphology operators and edge detection techniques such as the Canny filter, the watershed transform and the distance transform. In order to determine the quality of the segmentation generated by the algorithm, the segmented image is compared with an ideal binary image developed manually. The results of this study have led to the development of the following scheme: first, a median filter is used for noise reduction; second, a bottom-hat filter is used to separate hypothetical discontinuities from their background; third, the segmented regions are identified by means of binary thresholding; fourth, morphological filters are used to eliminate oversegmentation; and fifth, the watershed transform is used to separate internal regions. The study yielded an area under the receiver operating characteristic curve of 0.9358 on a set of ten images. The best operating point reached corresponds to a sensitivity of 87.83% and a 1-specificity of 9.40%.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the relationship between the distance parameter and the reference frame parameter in a speeded sentence/picture verification task, and compared performance across prime and probe trials for which the distance between the objects matched or mismatched.

Proceedings ArticleDOI
25 Jul 2004
TL;DR: The current study reports on the results of applying D as a similarity measure between the color histograms of two images, an extension of the Hamming distance for real-valued vectors.
Abstract: The performance of content-based image retrieval (CBIR) systems mainly depends on the image similarity measure that it uses. The fuzzy Hamming distance (D) is an extension of the Hamming distance for real-valued vectors. Because the feature space of each image is real-valued, the fuzzy Hamming distance can be successfully used as an image similarity measure. The current study reports on the results of applying D as a similarity measure between the color histograms of two images. The fuzzy Hamming distance is suitable for this application because it can take into account not only the number of different colors but also the magnitude of this difference.
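A sketch of the idea: each component pair contributes a degree of difference in [0, 1], and the fuzzy Hamming distance is the fuzzy cardinality of that difference set. The Gaussian-style membership function and the plain sum used for fuzzy cardinality below are common choices assumed for illustration, not necessarily the paper's exact formulation:

```python
import math

def fuzzy_hamming(u, v, alpha=1.0):
    """Sketch of a fuzzy Hamming distance between real-valued vectors
    (e.g. color histograms).  Identical components contribute 0;
    very different components approach 1, so the result accounts for
    both how many bins differ and by how much."""
    return sum(1.0 - math.exp(-alpha * (a - b) ** 2)
               for a, b in zip(u, v))
```

For binary vectors with a large `alpha` this reduces to the classical Hamming distance, which is the sense in which it is an extension.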

Journal ArticleDOI
TL;DR: An adaptive improvement algorithm is proposed, which essentially combines two basic techniques: a long-edge-based vertex insertion strategy, and a local improvement, which guarantees that the refined triangulation is related to features along the front and has elements with appropriate size and shape, which fit the front well.
Abstract: Level set methods offer highly robust and accurate methods for detecting interfaces of complex structures. Efficient techniques are required to transform an interface to a globally defined level set function. In this paper, a novel level set method based on an adaptive triangular mesh is proposed for segmentation of medical images. Special attention is paid to an adaptive mesh refinement and redistancing technique for level set propagation, in order to achieve higher resolution at the interface with minimum expense. First, a narrow band around the interface is built in an upwind fashion. An active square technique is used to determine the shortest distance correspondence (SDC) for each grid vertex. Simultaneously, we also give an efficient approach for signing the distance field. Then, an adaptive improvement algorithm is proposed, which essentially combines two basic techniques: a long-edge-based vertex insertion strategy, and a local improvement. These guarantee that the refined triangulation is related to features along the front and has elements with appropriate size and shape, which fit the front well. We propose a short-edge elimination scheme to coarsen the refined triangular mesh, in order to reduce the extra storage. Finally, we reformulate the general evolution equation by updating: 1) the velocities and 2) the gradient of level sets on the triangulated mesh. We give an approach for tracing contours from the level set on the triangulated mesh. Given a two-dimensional image with N grids along a side, the proposed algorithms run in O(kN) time at each iteration. Quantitative analysis shows that our algorithm is of first order accuracy; and when the interface-fitted property is involved in the mesh refinement, both the convergence speed and numerical accuracy are greatly improved. We also analyze the effect of redistancing frequency upon convergence speed and accuracy. 
Numerical examples include the extraction of inner and outer surfaces of the cerebral cortex from magnetic resonance imaging brain images.

Patent
12 May 2004
TL;DR: In this article, an image generation device for generating the map image of the periphery of a distance sensor for measuring an orientation and distance to an obstacle by using the distance sensor is configured so that the measured values of the orientations and distances to the obstacle are obtained before and after the sensor moves.
Abstract: PROBLEM TO BE SOLVED: To generate and/or display, as an image, a map of the peripheral environment through which a laser distance sensor moves, while the sensor is moving. SOLUTION: This image generation device generates a map image of the surroundings of a distance sensor that measures the orientation of, and distance to, obstacles. Measurements of the orientations and distances to obstacles are obtained before and after the sensor moves, and from each set of measurements a grid image is generated showing the locations occupied by obstacles around the sensor. Feature points are extracted from each grid image and collated between images; the grid images are first coarsely aligned by overlapping the best-matching feature points, and the relative positions and postures of the grid images are then refined from the measurements in order to generate the map image. COPYRIGHT: (C)2006, JPO&NCIPI

Journal ArticleDOI
TL;DR: An algorithm for computation of a colon centerline that is fast compared to the centerline algorithms presented in the reviewed literature, and that relies little on complete identification of colon segments, is developed.
Abstract: Although several methods for generating the centerline of a colon from CT colonographic scans have been proposed, in general they are time-consuming and do not take into account that the images of the colon may be of nonoptimal quality, with collapsed regions, and stool within the colon. Furthermore, the colonic lumen or wall, which is often used as a basis for computation of a centerline, is not always precisely segmented. In this study, we have developed an algorithm for computation of a colon centerline that is fast compared to the centerline algorithms presented in the reviewed literature, and that relies little on complete identification of colon segments. The proposed algorithm first extracts local maxima in a distance map of a segmented colonic lumen. The maxima are considered to be nodes in a set of graphs, and are iteratively linked together, based on a set of connection criteria, giving a minimum distance spanning tree. The connection criteria are computed from the distance from object boundary, the Euclidean distance between nodes and the voxel values on the pathway between pairs of nodes. After the last iteration, redundant branches are removed and end segments are recovered for each remaining graph. A subset of the initial maxima is used for distinguishing between the colon and noncolonic centerline segments among the set of graphs, giving the final centerline representation. A phantom study showed that, with respect to phantom variations, the algorithm achieved nearly constant computation time (2.3-2.9 s) except for the most extreme setting (20.2 s). The algorithm successfully found all, or most of, the centerline (93%-100%). Displacement from optimum varied with colon diameter (1.2-6.6 mm). By use of 40 CT colonographic scans, the computer-generated centerlines were compared with the centerlines generated by three radiologists. The similarity was measured based on percent coverage and average displacement.
The computer-generated centerlines, when compared with human-generated centerlines, had approximately the same displacement as when the human-generated centerlines were compared among each other (3.8 mm versus 4.0 mm). The coverage of the computer-generated centerlines was slightly less than that of the human-generated centerlines (92% versus 94%). The 40 centerlines were, on average, computed in 10.5 seconds, including computation time for the distance transform, with an Intel Pentium-based 800 MHz computer, as compared with 12-17 seconds or more (excluding computation time for the distance transform needed) per centerline as reported in other studies.
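The core linking step, building a minimum spanning tree over the distance-map maxima, can be sketched with Prim's algorithm. Plain Euclidean edge weights are assumed here; the paper additionally weights edges by distance from the boundary and by voxel values along the path between nodes:

```python
import math

def spanning_tree(nodes):
    """Prim's algorithm over distance-map maxima (sketch).  nodes are
    (x, y, z) points; the edge weight is the Euclidean distance
    between nodes.  Returns the tree as a list of index pairs."""
    in_tree = {0}
    edges = []
    while len(in_tree) < len(nodes):
        # pick the cheapest edge leaving the partial tree
        best = min((math.dist(nodes[i], nodes[j]), i, j)
                   for i in in_tree
                   for j in range(len(nodes)) if j not in in_tree)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges
```

Pruning redundant branches of this tree then leaves the centerline path, as described in the abstract.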

Patent
08 Sep 2004
TL;DR: In this paper, the authors propose a propagation-based distance transform which catalogs the achievable paths going from a goal point whose distance is to be estimated to a source point which is the origin of the distance measurements, and takes the distance of the goal point to be the length of the shortest achievable path or paths.
Abstract: This method allows the calculation, using a terrain elevation database, of a map of the distances of the points accessible to a mobile object subjected to dynamic constraints evolving with its time of travel, for example an aircraft having an imposed vertical flight profile, the distances being measured solely according to paths achievable by the mobile object. It implements a propagation-based distance transform which catalogs the achievable paths going from a goal point whose distance is to be estimated to a source point which is the origin of the distance measurements, and takes the distance of the goal point to be the length of the shortest achievable path or paths.
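A propagation-based distance transform of this kind can be sketched as Dijkstra's algorithm over a grid in which impassable cells are skipped, so each distance is the length of the shortest achievable path rather than the straight-line distance. The static passability mask below is a simplification; the patent's constraints evolve with the mobile object's time of travel:

```python
import heapq, math

def propagation_distances(passable, source):
    """Propagation-based distance map (sketch): Dijkstra from the
    source over 8-connected grid cells, skipping impassable cells
    (e.g. terrain above the flight profile).  Unreachable cells keep
    an infinite distance."""
    h, w = len(passable), len(passable[0])
    dist = [[math.inf] * w for _ in range(h)]
    dist[source[0]][source[1]] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y][x]:
            continue                      # stale heap entry
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w \
                        and passable[ny][nx]:
                    nd = d + math.hypot(dy, dx)
                    if nd < dist[ny][nx]:
                        dist[ny][nx] = nd
                        heapq.heappush(heap, (nd, (ny, nx)))
    return dist
```

With an obstacle between two points, the propagated distance exceeds the Euclidean one, which is exactly the property the method relies on.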

Journal ArticleDOI
TL;DR: It is quantitatively established that consistency can be significantly improved by using two-way measures in conjunction with high-order phase modulation and moderate association distances.

Patent
16 Mar 2004
TL;DR: In this paper, a region of a distance field representing an object is represented as a geometric element in a world coordinate system and each geometric element is associated with a texture map, where the texture map includes distance samples of the corresponding source cell of the geometric element.
Abstract: A method and apparatus render a region of a distance field representing an object. The distance field is partitioned into a set of cells, where each cell includes a set of distance samples and a method for reconstructing the distance field within the cell using the distance samples. A set of source cells is selected from the set of cells of the distance field to enable the rendering of the region. Each source cell is represented as a geometric element in a world coordinate system. Each geometric element is associated with a texture map, where the texture map includes distance samples of the corresponding source cell of the geometric element. Each geometric element is transformed from the world coordinate system to a pixel coordinate system and texture mapped, using the distance samples, to determine a distance for each component of each pixel associated with the geometric element. The distance of each component of each pixel is mapped to an antialiased intensity of the component of the pixel.
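Texture-mapping the distance samples amounts to bilinear reconstruction of the distance field inside each cell, followed by a mapping from distance to intensity. The linear-ramp mapping below is one common choice assumed for illustration; the patent does not pin down the exact mapping here:

```python
def reconstruct_distance(corners, u, v):
    """Bilinear reconstruction of a distance field inside a square
    cell from its four corner distance samples, which is what the
    texture-mapping hardware effectively performs.  corners holds
    samples at (0,0), (1,0), (0,1), (1,1); (u, v) lie in [0,1]."""
    d00, d10, d01, d11 = corners
    return ((1 - u) * (1 - v) * d00 + u * (1 - v) * d10
            + (1 - u) * v * d01 + u * v * d11)

def to_intensity(distance, filter_radius=1.0):
    """Map a signed distance to an antialiased coverage value in
    [0, 1] using a linear ramp across the filter footprint (an
    assumed mapping): 0.5 exactly on the edge."""
    t = 0.5 + distance / (2.0 * filter_radius)
    return min(1.0, max(0.0, t))
```

Evaluating these two steps per pixel component gives the antialiased rendering of the region.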

Journal ArticleDOI
TL;DR: This study proposes a method that is based on the Iterative Closest Point algorithm and a pre-computed closest point map obtained with a slight modification of the fast marching method proposed by Sethian and shows that on these data sets this registration method leads to accuracy numbers that are comparable to those obtained with voxel-based methods.

Patent
16 Mar 2004
TL;DR: In this paper, a set of pixels associated with the region is located and the set of components of each pixel is specified for each pixel, a corresponding distance for the component is determined for each distance field using the corresponding set of cells and the corresponding distances are combined to determine a combined distance.
Abstract: A method and apparatus antialias a region of a set of objects. The set of objects is represented by a set of two-dimensional distance fields. Each distance field is partitioned into cells, where each cell is associated with a method for reconstructing the distance field within the cell. For each distance field, a set of cells associated with the region is identified. A set of pixels associated with the region is located and a set of components is specified for each pixel. For each component of each pixel, a corresponding distance for the component is determined for each distance field using the corresponding set of cells and the corresponding distances are combined to determine a combined distance. The combined distance is mapped to an antialiased intensity of the component of the pixel.

Journal ArticleDOI
TL;DR: A straightforward modification to the Chamfer distance transform algorithm is presented that allows it to produce more accurate results without increasing the window size, and is loosely based on the concept of continual measurements and course correction that was employed by ocean-going vessel navigation in the past.

Patent
16 Mar 2004
TL;DR: In this paper, a set of boundary descriptors for the two-dimensional object is determined and partitioned into segments, and a method for reconstructing the distance field within the cell using the first and second sets of distance values is defined.
Abstract: A method generates a distance field for a region of a shape descriptor representing an object. The distance field includes a set of cells for which cell types are defined. A configuration of a set of cells for the region is generated. Each cell of the configuration includes a cell type and a method for reconstructing the distance field within the cell. The configuration of the set of cells is modified until an optimal configuration is reached. The modification is based on the shape descriptor, the region, and the set of cell types. The optimal configuration of the set of cells is stored in a memory to generate the distance field for the region. Another method generates a two-dimensional distance field within a cell associated with a two-dimensional object. A set of boundary descriptors for the two-dimensional object is determined and partitioned into a set of segments. The segments are delimited by a set of features of the boundary descriptors. A first and second segment associated with the cell are identified. First and second sets of distance values using the first and second segments are specified. A method for reconstructing the distance field within the cell, using the first and second sets of distance values, is defined. The first and second sets of distance values and the reconstruction method are stored to enable reconstruction of the distance field within the cell by applying the reconstruction method.

Patent
Rafael Wiemker1, Blaffert Thomas1
22 Mar 2004
TL;DR: In this article, an automated method and corresponding device and computer software are provided, which analyze a volume of interest around a singled-out tumor, and which, by virtue of a 3D distance transform and a region drawing scheme, allow a tumor to be automatically segmented out of a given volume.
Abstract: Volume measurement of for example a tumor in a 3D image dataset is an important and often performed task. The problem is to segment the tumor out of this volume in order to measure its dimensions. This problem is complicated by the fact that tumors are often connected to vessels and other organs. According to the present invention, an automated method and corresponding device and computer software are provided, which analyze a volume of interest around a singled-out tumor, and which, by virtue of a 3D distance transform and a region drawing scheme, allow a tumor to be automatically segmented out of a given volume.

Proceedings ArticleDOI
09 Jun 2004
TL;DR: This paper combines morphing with deformation theory from continuum mechanics by using strain energy, which reflects the magnitude of deformation, as an objective function, to convert the problem of path interpolation into an unconstrained optimization problem.
Abstract: When two topologically identical shapes are blended, various possible transformation paths exist from the source shape to the target shape. Which one is the most plausible? Here we propose that the transformation process should obey a quasi-physical law. This paper combines morphing with deformation theory from continuum mechanics. By using strain energy, which reflects the magnitude of deformation, as an objective function, we convert the problem of path interpolation into an unconstrained optimization problem. To reduce the number of variables in the optimization we adopt shape functions, as used in the finite element method (FEM). A point-to-point correspondence between the source and target shapes is naturally established using these polynomial functions plus a distance map.

Patent
Higaki Nobuo1, Shimada Takamichi1
25 Mar 2004
TL;DR: In this article, an edge image is generated from captured images taken by CCD cameras, and a moving object distance image indicative of a distance to a moving object is generated by extracting pixels corresponding thereto.
Abstract: In a moving object detection system, an edge image is generated from captured images taken by CCD cameras, and a moving object distance image indicative of a distance to a moving object (such as a human being) is generated by extracting pixels corresponding thereto. Then, pixels in the moving object distance image are summed to generate a histogram, and a profile extraction region is set in the moving object distance image with its center line placed at the position where the histogram is greatest, while the edge image is superposed on the moving object distance image to correct the center line of the profile extraction region, such that the moving object is detected by extracting its profile in the region. With this, it becomes possible to detect each moving object even when two or more objects are present in the neighborhood.

Proceedings Article
Mark W. Jones1
01 Jan 2004
TL;DR: This paper compares various techniques for compressing floating point distance fields with a new lossless technique that produces a lossless encoding at a third of the file size of entropy encoders, and equivalent to lossy wavelet transforms.
Abstract: This paper compares various techniques for compressing floating point distance fields. Both lossless and lossy techniques are compared against a new lossless technique. The new Vector Transform technique creates a predictor based upon a Vector Distance Transform and its suitability for distance field data sets is reported. The new technique produces a lossless encoding at a third of the file size of entropy encoders, and equivalent to lossy wavelet transforms, where around 75% of the coefficients have been set to zero. The algorithm predicts each voxel value linearly based upon two previous voxels chosen from one of 13 directions which have been previously computed. Those that cannot be predicted are explicitly stored.
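The prediction step can be sketched in one dimension: predict each value linearly from the two previous ones and store only the residual, which inverts exactly. A single fixed direction is assumed below, whereas the paper chooses among 13 previously computed directions per voxel:

```python
def encode(values):
    """Sketch of the linear prediction step: predict each value from
    the two previous ones and store the residual.  On smooth distance
    data the residuals cluster near zero, which is what makes the
    subsequent entropy coding stage effective."""
    residuals = list(values[:2])           # first two stored verbatim
    for i in range(2, len(values)):
        predicted = 2 * values[i - 1] - values[i - 2]
        residuals.append(values[i] - predicted)
    return residuals

def decode(residuals):
    """Invert the prediction exactly: lossless reconstruction."""
    values = list(residuals[:2])
    for i in range(2, len(residuals)):
        predicted = 2 * values[i - 1] - values[i - 2]
        values.append(residuals[i] + predicted)
    return values
```

Along a straight-line gradient of a distance field the residuals are exactly zero, which is why such a predictor suits distance data.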