
Showing papers in "ACM Transactions on Graphics in 1992"


Journal ArticleDOI
TL;DR: This paper deals with a two-dimensional space-filling approach in which each node is a rectangle whose area is proportional to some attribute such as node size.
Abstract: The traditional approach to representing tree structures is as a rooted, directed graph with the root node at the top of the page and children nodes below the parent node with lines connecting them (Figure 1). Knuth (1968, pp. 305-313) has a long discussion of this standard representation, especially why the root is at the top, and he offers several alternatives, including brief mention of a space-filling approach. However, the remainder of his presentation and most other discussions of trees focus on various node and edge representations. By contrast, this paper deals with a two-dimensional (2-d) space-filling approach in which each node is a rectangle whose area is proportional to some attribute such as node size.
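The rectangle-per-node idea can be made concrete with a short sketch. This is an illustrative slice-and-dice layout in the spirit of the paper's approach, not the paper's code; the `(size, children)` node encoding is an assumption.

```python
def treemap(node, x, y, w, h, depth=0):
    """Tile `node` into the rectangle (x, y, w, h); returns leaf rectangles.

    `node` is (size, children); each child's area is proportional to its
    size. Slicing direction alternates with depth (slice-and-dice).
    """
    size, children = node
    if not children:
        return [(x, y, w, h)]
    rects = []
    total = sum(c[0] for c in children)
    offset = 0.0
    for child in children:
        frac = child[0] / total
        if depth % 2 == 0:   # slice horizontally at even depths
            rects += treemap(child, x + offset * w, y, frac * w, h, depth + 1)
        else:                # slice vertically at odd depths
            rects += treemap(child, x, y + offset * h, w, frac * h, depth + 1)
        offset += frac
    return rects
```

Each leaf's area ends up proportional to its size attribute, and siblings tile their parent's rectangle exactly.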

1,573 citations


Journal ArticleDOI
TL;DR: A new space-efficient design is introduced for octree representations of volumes whose resolutions are not conveniently a power of two; octrees following this design are called branch-on-need octrees (BONOs).
Abstract: The large size of many volume data sets often prevents visualization algorithms from providing interactive rendering. The use of hierarchical data structures can ameliorate this problem by storing summary information to prevent useless exploration of regions of little or no current interest within the volume. This paper discusses research into the use of the octree hierarchical data structure when the regions of current interest can vary during the application, and are not known a priori. Octrees are well suited to the six-sided cell structure of many volumes. A new space-efficient design is introduced for octree representations of volumes whose resolutions are not conveniently a power of two; octrees following this design are called branch-on-need octrees (BONOs). Also, a caching method is described that essentially passes information between octree neighbors whose visitation times may be quite different, then discards it when its useful life is over. Using the application of octrees to isosurface generation as a focus, space and time comparisons for octree-based versus more traditional “marching” methods are presented.

550 citations


Journal ArticleDOI
TL;DR: A method that can handle sets of contours in which adjacent contours share a very contorted boundary is presented, and a new approach to solving the correspondence problem using a Minimum Spanning Tree generated from the contours is described.
Abstract: This paper is concerned with the problem of reconstructing the surfaces of three-dimensional objects, given a collection of planar contours representing cross-sections through the objects. This problem has important applications in biomedical research and instruction, solid modeling, and industrial inspection. The method we describe produces a triangulated mesh from the data points of the contours which is then used in conjunction with a piecewise parametric surface-fitting algorithm to produce a reconstructed surface. The problem can be broken into four subproblems: the correspondence problem (which contours should be connected by the surface?), the tiling problem (how should the contours be connected?), the branching problem (what do we do when there are branches in the surface?), and the surface-fitting problem (what is the precise geometry of the reconstructed surface?). We describe our system for surface reconstruction from sets of contours with respect to each of these subproblems. Special attention is given to the correspondence and branching problems. We present a method that can handle sets of contours in which adjacent contours share a very contorted boundary, and we describe a new approach to solving the correspondence problem using a Minimum Spanning Tree generated from the contours.
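As an illustration of the correspondence idea, the sketch below builds a Minimum Spanning Tree over contour centroids with Prim's algorithm. The centroid-distance metric is a stand-in assumption here; the paper's actual inter-contour measure may differ.

```python
import math

def centroid(contour):
    """Centroid of a planar contour given as a list of (x, y) points."""
    n = len(contour)
    return (sum(p[0] for p in contour) / n, sum(p[1] for p in contour) / n)

def mst_edges(points):
    """Prim's algorithm over a complete Euclidean graph; returns tree edges."""
    in_tree = {0}
    edges = []
    while len(in_tree) < len(points):
        best = min(
            ((i, j) for i in in_tree
             for j in range(len(points)) if j not in in_tree),
            key=lambda e: math.dist(points[e[0]], points[e[1]]),
        )
        edges.append(best)
        in_tree.add(best[1])
    return edges
```

Contours whose centroids are joined by an MST edge would then be candidates for connection by a surface tile.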

345 citations


Journal ArticleDOI
TL;DR: It is shown how the ordering algorithms can be used for domain decomposition of finite element meshes for parallel processing, and how the data structures used by these algorithms can be used to solve the spatial point location problem.
Abstract: A visibility-ordering of a set of objects from some viewpoint is an ordering such that if object a obstructs object b, then b precedes a in the ordering. An algorithm is presented that generates a visibility-ordering of an acyclic convex set of meshed convex polyhedra. This algorithm takes time linear in the size of the mesh. Modifications to this algorithm and/or preprocessing techniques are described that permit nonconvex cells, nonconvex meshes (meshes with cavities and/or voids), meshes with cycles, and sets of disconnected meshes to be ordered. Visibility-ordering of polyhedra is applicable to scientific visualization, particularly direct volume rendering. It is shown how the ordering algorithms can be used for domain decomposition of finite element meshes for parallel processing, and how the data structures used by these algorithms can be used to solve the spatial point location problem. The effects of cyclically obstructing polyhedra are discussed and methods for their elimination are described, including the use of the Delaunay triangulation. Methods for converting nonconvex meshes into convex meshes are described.
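The ordering requirement (if a obstructs b, then b precedes a) is exactly a topological sort of the obstruction graph. The generic sketch below illustrates the acyclic case; the paper's algorithm exploits mesh adjacency to run in linear time rather than building the graph explicitly.

```python
from collections import deque

def visibility_order(n, obstructs):
    """obstructs: list of (a, b) pairs meaning cell a obstructs cell b.

    Returns an ordering in which b precedes a whenever a obstructs b,
    i.e. back-to-front for compositing. Raises on a visibility cycle.
    """
    succ = {i: [] for i in range(n)}
    indeg = [0] * n
    for a, b in obstructs:          # edge b -> a: b must be emitted first
        succ[b].append(a)
        indeg[a] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    if len(order) != n:
        raise ValueError("obstruction graph has a cycle")
    return order
```

The cycle check corresponds to the cyclically obstructing polyhedra the abstract discusses eliminating.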

218 citations


Journal ArticleDOI
TL;DR: This paper shows that by ordering the N colors along their principal axis and partitioning the color space with respect to this ordering, the resulting constrained optimization problem can be solved in O(N) time by dynamic programming.
Abstract: Color quantization is the process of choosing a set of K representative colors to approximate the N colors of an image, K ≪ N. This paper orders the N colors along their principal axis and partitions the color space with respect to this ordering, in the principal direction of the input data. This new partitioning strategy leads to smaller quantization error and hence better image quality. Other algorithmic issues in color quantization such as efficient statistical computations and nearest-neighbor searching are also studied. The interplay between luminance and chromaticity in color quantization with and without color dithering is investigated. Our color quantization method allows the user to choose a balance between the image smoothness and hue accuracy for a given K.
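The core reduction can be sketched as follows: once colors are ordered along the principal axis, quantization becomes optimal 1-D interval partitioning, solvable by dynamic programming. This naive version is O(KN^2); the paper's constrained formulation reaches O(N). Scalar values stand in for projected colors, an illustrative simplification.

```python
def interval_sse(prefix, prefix2, i, j):
    """Sum of squared error of values[i:j] about their mean, via prefix sums."""
    n = j - i
    s = prefix[j] - prefix[i]
    s2 = prefix2[j] - prefix2[i]
    return s2 - s * s / n

def partition(values, k):
    """Split sorted values into k intervals minimizing total within-interval SSE."""
    values = sorted(values)
    n = len(values)
    prefix, prefix2 = [0.0], [0.0]
    for v in values:
        prefix.append(prefix[-1] + v)
        prefix2.append(prefix2[-1] + v * v)
    INF = float("inf")
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = cost[m - 1][i] + interval_sse(prefix, prefix2, i, j)
                if c < cost[m][j]:
                    cost[m][j], cut[m][j] = c, i
    cuts, j = [], n                 # walk the cut table back to recover breaks
    for m in range(k, 0, -1):
        cuts.append(cut[m][j])
        j = cut[m][j]
    return cost[k][n], sorted(cuts[:-1])
```

Each returned cut index is a cluster boundary in the principal-axis ordering; the interval means would serve as the K representative colors.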

159 citations


Journal ArticleDOI
TL;DR: An experiment was performed to psychophysically measure colorimetric tolerances for six images using paired comparison techniques and the necessary precision in number of bits per color channel was determined for both the CIELAB and the CRT rgb device color spaces.
Abstract: An environment was established to perform device-independent color reproduction of full-color pictorial images. In order to determine the required precision for this environment, an experiment was performed to psychophysically measure colorimetric tolerances for six images using paired comparison techniques. These images were manipulated using 10 linear and nonlinear functions in the CIELAB dimensions of lightness, chroma, and hue angle. Perceptibility tolerances were determined using probit analysis. From these results, the necessary precision in number of bits per color channel was determined for both the CIELAB and the CRT rgb device color spaces. For both the CIELAB color space and the CRT rgb device space, approximately eight color bits per channel were required for imperceptible color differences for pictorial images, and 10 bits per channel were required for computational precision.

142 citations


Journal ArticleDOI
TL;DR: The conversion of a device-independent representation to popular device spaces by means of trilinear interpolation requires substantially fewer lookup table entries with CCIR 601-2 YCbCr and CIELAB.
Abstract: Important standards for device-independent color allow many different color encodings. This freedom obliges users of these standards to choose the color space in which to represent their data. A device-independent interchange color space must exhibit an exact mapping to a colorimetric color representation, ability to encode all visible colors, compact representation for given accuracy, and low computational cost for transforms to and from device-dependent spaces. The performance of CIE 1931 XYZ, CIELUV, CIELAB, YES, CCIR 601-2 YCbCr, and SMPTE-C RGB is measured against these requirements. With extensions, all of these spaces can meet the first two requirements. Quantizing error dominates the representational errors of the tested color spaces. Spaces that offer low quantization error also have low gain for image noise. All linear spaces are less compact than nonlinear alternatives. The choice of nonlinearity is not critical; a wide range of gammas yields acceptable results. The choice of primaries for RGB representations is not critical, except that high-chroma primaries should be avoided. Quantizing the components of the candidate spaces with varying precision yields only small improvements. Compatibility with common image data compression techniques leads to the requirement for low luminance contamination, a property that compromises several otherwise acceptable spaces. The conversion of a device-independent representation to popular device spaces by means of trilinear interpolation requires substantially fewer lookup table entries with CCIR 601-2 YCbCr and CIELAB.
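The trilinear-interpolation conversion mentioned above can be sketched as follows. The table layout, its resolution, and the [0,1]^3 input domain are illustrative assumptions, not details from the paper.

```python
def trilerp(lut, n, x, y, z):
    """Interpolate in an n*n*n lookup table indexed over [0, 1]^3.

    lut[(i, j, k)] -> output color tuple for lattice point (i, j, k)/(n-1).
    """
    def locate(t):
        u = min(max(t, 0.0), 1.0) * (n - 1)   # clamp, then scale to lattice
        i = min(int(u), n - 2)                # lower cell corner
        return i, u - i                       # corner index and fraction
    (i, fx), (j, fy), (k, fz) = locate(x), locate(y), locate(z)
    out = [0.0, 0.0, 0.0]
    for di in (0, 1):                         # blend the 8 cell corners
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((fx if di else 1 - fx) *
                     (fy if dj else 1 - fy) *
                     (fz if dk else 1 - fz))
                corner = lut[(i + di, j + dj, k + dk)]
                for c in range(3):
                    out[c] += w * corner[c]
    return tuple(out)
```

The accuracy of such a table for a fixed entry count is what makes some encodings (per the abstract, CCIR 601-2 YCbCr and CIELAB) preferable as interchange spaces.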

127 citations


Journal ArticleDOI
TL;DR: Algorithms for generating NC tool paths for machining of arbitrarily shaped 2 1/2-dimensional pockets with arbitrary islands are described, based on a new offsetting algorithm presented in this paper.
Abstract: In this paper we describe algorithms for generating NC tool paths for machining of arbitrarily shaped 2 1/2-dimensional pockets with arbitrary islands. These pocketing algorithms are based on a new offsetting algorithm presented in this paper. Our offsetting algorithm avoids costly two-dimensional Boolean set operations, relatively expensive distance calculations, and the overhead of extraneous geometry, such as Voronoi diagrams, used in other pocketing algorithms.

124 citations


Journal ArticleDOI
TL;DR: Computational details of the Hermite interpolation algorithm are presented along with several illustrative applications of the interpolation technique to construction of joining or blending surfaces for solid models as well as fleshing surfaces for curved wire frame models.
Abstract: This paper presents an efficient algorithm, called Hermite interpolation, for constructing low-degree algebraic surfaces, which contain, with C1 or tangent plane continuity, any given collection of points and algebraic space curves having derivative information. Positional as well as derivative constraints on an implicitly defined algebraic surface are translated into a homogeneous linear system, where the unknowns are the coefficients of the polynomial defining the algebraic surface. Computational details of the Hermite interpolation algorithm are presented along with several illustrative applications of the interpolation technique to construction of joining or blending surfaces for solid models as well as fleshing surfaces for curved wire frame models. A heuristic approach to interactive shape control of implicit algebraic surfaces is also given, and open problems in algebraic surface design are discussed.

100 citations


Journal ArticleDOI
TL;DR: Theories of color matching with pigments are extended to determine reflectances for use in realistic image synthesis and the Kubelka-Munk theory of pigment mixing is developed and the relevant equations are derived.
Abstract: This article discusses and applies the Kubelka-Munk theory of pigment mixing to computer graphics in order to facilitate improved image synthesis. The theories of additive and subtractive color mixing are discussed and are shown to be insufficient for pigmented materials. The Kubelka-Munk theory of pigment mixing is developed and the relevant equations are derived. Pigment mixing experiments are performed and the results are displayed on color television monitors. A paint program that uses Kubelka-Munk theory to mix real pigments is presented. Theories of color matching with pigments are extended to determine reflectances for use in realistic image synthesis.
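The central Kubelka-Munk relations can be sketched at a single wavelength: K/S = (1 - R)^2 / (2R) is additive in pigment concentration, and reflectance is recovered by inverting it. The `mix` helper and its concentration weighting are an illustrative minimal form of the theory the paper develops.

```python
import math

def ks_from_reflectance(r):
    """Kubelka-Munk absorption/scattering ratio K/S from reflectance R."""
    return (1.0 - r) ** 2 / (2.0 * r)

def reflectance_from_ks(ks):
    """Invert K/S = (1 - R)^2 / (2R), taking the root in (0, 1]."""
    return 1.0 + ks - math.sqrt(ks * ks + 2.0 * ks)

def mix(reflectances, concentrations):
    """Reflectance of a mixture: K/S values blend linearly by concentration."""
    total = sum(concentrations)
    ks = sum(c / total * ks_from_reflectance(r)
             for r, c in zip(reflectances, concentrations))
    return reflectance_from_ks(ks)
```

Because K/S (not R) is what mixes linearly, this reproduces the subtractive behavior of real pigments that plain RGB averaging misses.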

98 citations


Journal ArticleDOI
TL;DR: A simple output-sensitive algorithm for hidden surface removal in a collection of n triangles in space for which a (partial) depth order is known is derived.
Abstract: We derive a simple output-sensitive algorithm for hidden surface removal in a collection of n triangles in space for which a (partial) depth order is known. If k is the combinatorial complexity of the output visibility map, the method runs in time O(n √k log n). The method is extended to work for other classes of objects as well, sometimes with even improved time bounds. For example, we obtain an algorithm that performs hidden surface removal for n (nonintersecting) balls in time O(n^(3/2) log n + k).

Journal ArticleDOI
TL;DR: This paper presents a new 2D polygon clipping method, based on an extension to the Sutherland-Cohen 2D line clipping method, which can use floating point or integer operations; this can be useful for fast or simple implementations.
Abstract: This paper presents a new 2D polygon clipping method, based on an extension to the Sutherland-Cohen 2D line clipping method. After discussing three basic polygon clipping algorithms, a different approach is proposed, explaining the principles of a new algorithm and presenting it step by step.An example implementation of the algorithm is given along with some results. A comparison between the proposed method, the Liang and Barsky algorithm, and the Sutherland-Hodgman algorithm is also given, showing performances up to eight times the speed of the Sutherland-Hodgman algorithm, and up to three times the Liang and Barsky algorithm. The algorithm proposed here can use floating point or integer operations; this can be useful for fast or simple implementations.
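For context, here is a compact version of one of the baselines the paper benchmarks against: Sutherland-Hodgman clipping, which clips the polygon against each window edge in turn. The paper's own Cohen-Sutherland-style extension differs; an axis-aligned clip rectangle is assumed here.

```python
def clip_polygon(poly, xmin, ymin, xmax, ymax):
    """Clip a polygon (list of (x, y), any winding) to an axis-aligned box."""
    def clip_edge(poly, inside, intersect):
        out = []
        for i, cur in enumerate(poly):
            prev = poly[i - 1]                 # wraps to last vertex at i = 0
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))
        return out

    def x_cross(p, q, x):                      # edge crossing a vertical line
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def y_cross(p, q, y):                      # edge crossing a horizontal line
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    for inside, cross in (
        (lambda p: p[0] >= xmin, lambda p, q: x_cross(p, q, xmin)),
        (lambda p: p[0] <= xmax, lambda p, q: x_cross(p, q, xmax)),
        (lambda p: p[1] >= ymin, lambda p, q: y_cross(p, q, ymin)),
        (lambda p: p[1] <= ymax, lambda p, q: y_cross(p, q, ymax)),
    ):
        poly = clip_edge(poly, inside, cross)
        if not poly:
            break
    return poly
```

Clipping edge-by-edge like this is simple but re-walks the whole vertex list four times, which is part of why faster single-pass methods can win the comparisons the abstract reports.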

Journal ArticleDOI
Joe Warren1
TL;DR: Properties of base points of the parameterization lead to a new understanding of incompatible edge twist methods such as Gregory's patch, and judiciously introducing base points creates parameterizations of four-, five- and six-sided surface patches using rational Bézier surfaces defined over triangular domains.
Abstract: Rational Bézier surfaces provide an effective tool for geometric design. One aspect of the theory of rational surfaces that is not well understood is what happens when a rational parameterization takes on the value (0/0, 0/0, 0/0) for some parameter value. Such parameter values are called base points of the parameterization. Base points can be introduced into a rational parameterization in Bézier form by setting weights of appropriate control points to zero. By judiciously introducing base points, one can create parameterizations of four-, five- and six-sided surface patches using rational Bézier surfaces defined over triangular domains. Subdivision techniques allow rendering and smooth meshing of such surfaces. Properties of base points also lead to a new understanding of incompatible edge twist methods such as Gregory's patch.

Journal ArticleDOI
TL;DR: The r- sets are viewed as a more appropriate choice for a modeling space: in particular, the r-sets provide closure with respect to regularized set operations and a complete set of generalized Euler operators for the manipulation of boundary representations, for graphics and other purposes.
Abstract: In this paper we study the relationship between manifold solids (r-sets whose boundaries are two-dimensional closed manifolds) and r-sets. We begin by showing that an r-set may be viewed as the limit of a certain sequence of manifold solids, where distance is measured using the Hausdorff metric. This permits us to introduce a minimal set of generalized Euler operators, sufficient for the construction and manipulation of r-sets. The completeness result for ordinary Euler operators carries over immediately to the generalized Euler operators on the r-sets and the modification of the usual boundary data structures, corresponding to our extension to nonmanifold r-sets, is straightforward. We in fact describe a modification of a well-known boundary data structure in order to illustrate how the extension can be used in typical solid modeling algorithms, and describe an implementation.The results described above largely eliminate what has been called an inherent mismatch between the modeling spaces defined by manifold solids and by r-sets. We view the r-sets as a more appropriate choice for a modeling space: in particular, the r-sets provide closure with respect to regularized set operations and a complete set of generalized Euler operators for the manipulation of boundary representations, for graphics and other purposes. It remains to formulate and prove a theorem on the soundness of the generalized Euler operators.

Journal ArticleDOI
TL;DR: An algorithm to estimate subdivision depths for rational curves and surfaces is presented, which has applications in surface rendering, surface/surface intersection, and mesh generation.
Abstract: An algorithm to estimate subdivision depths for rational curves and surfaces is presented. The subdivision depth is not estimated for the given curve/surface directly. The algorithm computes a subdivision depth for the polynomial curve/surface of which the given rational curve/surface is the image under the standard perspective projection. This subdivision depth, however, guarantees the required flatness of the given curve/surface after the subdivision. This work has applications in surface rendering, surface/surface intersection, and mesh generation.
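A hedged sketch of a subdivision-depth bound for a polynomial Bézier curve (per the abstract, the rational case is reduced to such a curve via perspective projection). It uses the standard estimate that the curve stays within n(n-1)/8 times the largest second difference of its control polygon, and that uniform subdivision shrinks second differences by a factor of 4 per level; the exact constants and the projection step in the paper may differ.

```python
import math

def subdivision_depth(control_points, tol):
    """Levels of uniform subdivision after which every piece of a degree-n
    polynomial Bezier curve is within `tol` of its control polygon."""
    n = len(control_points) - 1                       # curve degree
    dim = len(control_points[0])
    max_dd = 0.0
    for i in range(n - 1):                            # max second difference
        dd = [control_points[i][c] - 2 * control_points[i + 1][c]
              + control_points[i + 2][c] for c in range(dim)]
        max_dd = max(max_dd, math.hypot(*dd))
    bound = n * (n - 1) / 8 * max_dd                  # flatness before subdividing
    if bound <= tol:
        return 0
    return math.ceil(math.log(bound / tol, 4))        # each level divides bound by 4
```

A straight-line control polygon needs no subdivision at all, since its second differences vanish.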

Journal ArticleDOI
M. Douglas McIlroy1
TL;DR: A concise, incremental algorithm for raster approximations to ellipses in standard position produces approximations that are good to the last pixel even near octant boundaries or the thin ends of highly eccentric ellipses.
Abstract: A concise, incremental algorithm for raster approximations to ellipses in standard position produces approximations that are good to the last pixel even near octant boundaries or the thin ends of highly eccentric ellipses. The resulting approximations commute with reflection about the diagonal and are mathematically specifiable without reference to details of the algorithm.
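The incremental algorithm itself is in the paper; the brute-force reference below merely picks, in each column of the first quadrant, the pixel row nearest the true curve of an axis-aligned ellipse in standard position. Such an oracle is handy for checking an incremental implementation pixel by pixel (though a per-column scan alone does not reproduce the paper's care near octant boundaries).

```python
def ellipse_quadrant(a, b):
    """Integer pixels near x^2/a^2 + y^2/b^2 = 1 for x, y >= 0, one per column."""
    pixels = []
    for x in range(a + 1):
        y_true = b * (1 - x * x / (a * a)) ** 0.5   # exact curve height at x
        pixels.append((x, round(y_true)))           # nearest pixel row
    return pixels
```

The other three quadrants follow by reflecting these pixels across the axes.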

Journal ArticleDOI
TL;DR: This paper proposes a method for determining the contrast of colored areas displayed on a CRT that uses a contrast metric that is in wide use in visual psychophysics and shows that the metric can be approximated reasonably without display measurement.
Abstract: Luminance contrast is the basis of text legibility, and maintaining luminance contrast is essential for any color selection algorithm. In principle, it can be calculated precisely on a sufficiently well-calibrated display surface, but calibration is very expensive. Consequently, most current systems deal with contrast using heuristics. However, the usual CRT setup puts the display surface into a state that is relatively predictable. Luminance values can be estimated based on this state, and these luminance values have been used to calculate contrast using the Michelson definition. This paper proposes a method for determining the contrast of colored areas displayed on a CRT. It uses a contrast metric that is in wide use in visual psychophysics and shows that the metric can be approximated reasonably without display measurement, as long as it is possible to assume that the CRT has been adjusted according to usual CRT setup standards.
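The contrast estimate described above can be sketched as follows. The Rec. 601-style luminance weights and gamma of 2.2 are assumptions standing in for the unmeasured but predictably adjusted CRT, not values taken from the paper.

```python
def luminance(r, g, b, gamma=2.2):
    """Estimated relative luminance of an (r, g, b) color in [0, 1]^3
    on a typical, unmeasured CRT: undo gamma, then weight the channels."""
    lin = [c ** gamma for c in (r, g, b)]
    return 0.299 * lin[0] + 0.587 * lin[1] + 0.114 * lin[2]

def michelson_contrast(color1, color2):
    """Michelson contrast (Lmax - Lmin) / (Lmax + Lmin); assumes the two
    colors are not both black."""
    l1, l2 = luminance(*color1), luminance(*color2)
    return (max(l1, l2) - min(l1, l2)) / (l1 + l2)
```

Black on white yields the maximum contrast of 1; equal-luminance pairs, however different in hue, yield 0, which is exactly why hue alone cannot guarantee legible text.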

Journal ArticleDOI
TL;DR: The need to mark one's place arises because one is working at multiple places in the code at one time; the solution is to augment the typical scroll bar with bookmarks, an augmentation of scroll bars rather than of text editors, and thus more widely applicable.
Abstract: This marking requirement arises because one is working at multiple places in the code at one time. This phenomenon is present also when working on large documents. This problem of working in more than one place still exists in on-line environments. Some text editors provide the capability to set one or more marks or registers [1] in the text to which the editor can return. The problem with text editor “marks” is that they are not concrete and visual. Users frequently forget what marks have been set and what they mean. In principle, one can move a scroll bar back to some prior position, but in practice users are poor at remembering or hitting a prior position. The solution is to augment the typical scroll bar with bookmarks (Figure 1). Bookmarks are not an augmentation of text editors; they are an augmentation of scroll bars and thus are more widely applicable. Scrolling through long lists of items, scrolling across large drawings, columns of spreadsheets or historical time lines are all examples of where bookmarks can be applied. A bookmark is simply a numeric value that is displayed as a position consistent with the accompanying scroll bar. Clicking in the black bookmark area will create a new bookmark with the current value of the scroll bar. Clicking on a bookmark will set the scroll bar to the value of that bookmark.

Journal ArticleDOI
TL;DR: A two-step incremental linear interpolation algorithm is derived and it is shown that the algorithm is correct, that it is reversible, and that it is faster than previous single-step algorithms.
Abstract: A two-step incremental linear interpolation algorithm is derived and analyzed. It is shown that the algorithm is correct, that it is reversible, and that it is faster than previous single-step algorithms. An example is given of the execution of the algorithm.

Journal ArticleDOI
TL;DR: The class of totally consistent bounding functions is introduced, which have the desirable properties of allowing surprisingly good bounds to be built quickly and are applicable to relatively slow, exact operations, and to general Boolean algebras.
Abstract: In constructive solid geometry, geometric solids are represented as trees whose leaves are labeled by primitive solids and whose internal nodes are labeled by set-theoretic operations. A bounding function in this context is an upper or lower estimate on the extent of the constituent sets; such bounds are commonly used to speed up algorithms based on such trees. We introduce the class of totally consistent bounding functions, which have the desirable properties of allowing surprisingly good bounds to be built quickly. Both outer and inner bounds can be refined using a set of rewrite rules, for which we give some complexity and convergence results. We have implemented the refinement rules for outer bounds within a solid modeling system, where they have proved especially useful for intersection testing in three and four dimensions. Our implementations have used boxes as bounds, but different classes (shapes) of bounds are also explored. The rewrite rules are also applicable to relatively slow, exact operations, which we explore for their theoretical insight, and to general Boolean algebras. Results concerning the relationship between these bounds and active zones are also noted.
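Outer box bounds propagate up a CSG tree as sketched below: a union's box encloses the union of the child boxes, and an intersection's box is contained in the intersection of the child boxes. The tagged-tuple node encoding is illustrative, and this shows only the simplest outer-bound rules, not the paper's rewrite system or inner bounds.

```python
def outer_box(node):
    """node: ("box", (lo, hi)) | (op, left, right), where lo and hi are
    per-axis coordinate tuples. Returns an outer bound for the whole tree."""
    if node[0] == "box":
        return node[1]
    op, left, right = node
    (alo, ahi), (blo, bhi) = outer_box(left), outer_box(right)
    if op == "union":        # must enclose both children
        return (tuple(map(min, alo, blo)), tuple(map(max, ahi, bhi)))
    if op == "intersect":    # may shrink to the overlap of the children
        return (tuple(map(max, alo, blo)), tuple(map(min, ahi, bhi)))
    raise ValueError(op)
```

An intersection test against such a box can reject a whole subtree at once, which is the speedup the abstract describes.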

Journal ArticleDOI
TL;DR: The technique described below allows very fast generation of accurate shadows for a single convex polyhedron moving over an arbitrary 3D scene rendered in a depth-buffer.
Abstract: This note describes a technique for adding cast shadows to a solid 3D cursor rendered in a depth-buffer. Such a cursor can be used in a range of 3D positioning tasks where lack of depth perception cues is a recurring problem [1, 7, 8]. The simplest of these tasks is the basic placement of an object at a position in 3-space. However, the technique described here can be used with essentially any 3D-positioning task that uses a cursor for feedback. In addition, this technique can be combined with other interaction techniques such as 3D snap-dragging [2]. Shadows can dramatically enhance depth perception in computer-generated scenes. In addition, shadows can also aid in the perception of spatial relationships in complex scenes. For example, shadows can be particularly useful for positioning tasks where one object must be centered above another or where nonadjacent objects must be positioned with their edges aligned. Unfortunately, using traditional techniques, shadows of a moving object such as a cursor often cannot be created fast enough for interactive use (unless limited to the special case of casting shadows on a fixed ground plane [7]). The technique described below allows very fast generation of accurate shadows for a single convex polyhedron moving over an arbitrary 3D scene rendered in a depth-buffer. Figure 1 shows an example of this technique. Here a red cube acts as its own cursor, casting a shadow on a complex scene as it is being positioned.