Journal ArticleDOI

Display of surfaces from volume data

01 May 1988-IEEE Computer Graphics and Applications (IEEE)-Vol. 8, Iss: 3, pp 29-37
TL;DR: In this article, a volume-rendering technique for the display of surfaces from sampled scalar functions of three spatial dimensions is discussed. It is not necessary to fit geometric primitives to the sampled data; images are formed by directly shading each sample and projecting it onto the picture plane.
Abstract: The application of volume-rendering techniques to the display of surfaces from sampled scalar functions of three spatial dimensions is discussed. It is not necessary to fit geometric primitives to the sampled data; images are formed by directly shading each sample and projecting it onto the picture plane. Surface-shading calculations are performed at every voxel with local gradient vectors serving as surface normals. In a separate step, surface classification operators are applied to compute a partial opacity of every voxel. Operators that detect isovalue contour surfaces and region boundary surfaces are examined. The technique is simple and fast, yet displays surfaces exhibiting smooth silhouettes and few other aliasing artifacts. The use of selective blurring and supersampling to further improve image quality is described. Examples from molecular graphics and medical imaging are given.

Summary (2 min read)

1. Introduction

  • Visualization of scientific computations is a rapidly growing field within computer graphics.
  • The authors explore the use of isovalue contour surfaces to visualize electron density maps for molecular graphics, and the use of region boundary surfaces to visualize computed tomography (CT) data for medical imaging.
  • All of these techniques suffer from the common problem of having to make a binary classification decision: either a surface passes through the current voxel or it does not.
  • Its application to CT data has been demonstrated by PIXAR [11], but no details of their approach have been published.

2. Rendering pipeline

  • The first step is data preparation which may include correction for non-orthogonal sampling grids in electron density maps, correction for patient motion in CT data, contrast enhancement, and interpolation of additional samples.
  • Rays are then cast into these two arrays from the observer eyepoint.
  • The compositing calculations referred to above are simply linear interpolations.

3. Shading

  • Using the rendering pipeline presented above, the mapping from acquired data to color provides 3-D shape cues, but does not participate in the classification operation.
  • Accordingly, a shading model was selected that provides a satisfactory illusion of smooth surfaces at a reasonable cost.
  • It is not the main point of the paper and is presented mainly for completeness.

4. Classification

  • The mapping from acquired data to opacity performs the essential task of surface classification.
  • The authors will first consider the rendering of isovalue contour surfaces in electron density maps, i.e. surfaces defined by points of equal electron density.
  • The authors will then consider the rendering of region boundary surfaces in computed tomography (CT) data, i.e. surfaces bounding tissues of constant CT number.

4.1. Isovalue contour surfaces

  • These maps are obtained from X-ray diffraction studies of crystallized samples of the molecule.
  • One obvious way to display isovalue surfaces is to opaquely render all voxels having values greater than some threshold.
  • Unfortunately, this solution prevents display of multiple concentric semi-transparent surfaces, a very useful capability.
  • If the window is too narrow, holes appear.
  • The most pleasing image is obtained if the thickness of this transition region stays constant throughout the volume.

4.2. Region boundary surfaces

  • Clinicians are mostly interested in the boundaries between tissues, from which the sizes and spatial relationships of anatomical features can be inferred.
  • The reason can be explained briefly as follows.
  • The procedure employed in this study is based on the following simplified model of anatomical scenes and the CT scanning process.
  • Note that all voxels are typically mapped to some non-zero opacity and will thus contribute to the final image.
  • In order to obtain such effects using volume rendering, the authors would like to suppress the opacity of tissue interiors while enhancing the opacity of their bounding surfaces.

5.1. Computational complexity

  • One of the strengths of the rendering method presented in this paper is its modularity.
  • This implies that if the authors store gradient magnitudes for all voxels, computation of new opacities following a change in classification parameters entails only generation of a new lookup table followed by one table reference per voxel.
  • Effective rotation sequences can be generated, however, using a single set of colors.
  • The visual manifestation of fixing the shading is that light sources appear to travel around with the data as it rotates and highlights are incorrect.
  • Since the authors are visualizing imaginary or invisible phenomena anyway, observers are seldom troubled by this effect.

5.2. Image quality

  • The analogy is not exact, and the differences are fundamental.
  • Unless the authors reconstruct the 3-D scene that gave rise to their volume data, they cannot compute an accurate projection of it.
  • Blurry silhouettes have less visual impact, but they reflect the true imprecision in their knowledge of surface locations.
  • An alternative means for improving image quality is super-sampling.
  • If the interpolation method is a good one, the accuracy of the visibility calculations is improved, reducing some kinds of aliasing.

6. Implementation and results

  • The dataset used in the molecular graphics study is a 113 x 113 x 113 voxel portion of an electron density map for the protein Cytochrome B5.
  • Using the shading and classification calculations described in sections 3 and 4.1, colors and opacities were computed for each voxel in the expanded dataset.
  • These calculations required 5 minutes on a SUN 4/280 having 32MB of main memory.
  • Her skin and nose cartilage are rendered semi-transparently over the bone surface in the tissue-bone images.
  • Figure 10 was generated by expanding the dataset to 452 slices using a cubic B-spline in the vertical direction, then generating an image from the larger dataset by casting one ray per slice.

7. Conclusions

  • Volume rendering has been shown to be an effective modality for the display of surfaces from sampled scalar functions of three spatial dimensions.
  • As demonstrated by the figures, it can generate images with resolution comparable to, yet fewer interpretation errors than, techniques relying on geometric primitives.
  • This problem manifests itself as striping in images.
  • Alternatively, classical ray tracing of the geometry can be incorporated directly into the volume rendering pipeline.
  • Another useful tool would be the ability to perform a true 3-D merge of two or more visualizations, allowing, for example, the superimposition of radiation treatment planning isodose surfaces over CT data.

Display of Surfaces from Volume Data
Marc Levoy
June, 1987
(revised February, 1988)
Computer Science Department
University of North Carolina
Chapel Hill, NC 27514
Abstract
The application of volume rendering techniques to the display of surfaces from sampled scalar functions of three spatial dimensions is explored. Fitting of geometric primitives to the sampled data is not required. Images are formed by directly shading each sample and projecting it onto the picture plane. Surface shading calculations are performed at every voxel with local gradient vectors serving as surface normals. In a separate step, surface classification operators are applied to obtain a partial opacity for every voxel. Operators that detect isovalue contour surfaces and region boundary surfaces are presented. Independence of shading and classification calculations insures an undistorted visualization of 3-D shape. Non-binary classification operators insure that small or poorly defined features are not lost. The resulting colors and opacities are composited from back to front along viewing rays to form an image. The technique is simple and fast, yet displays surfaces exhibiting smooth silhouettes and few other aliasing artifacts. The use of selective blurring and super-sampling to further improve image quality is also described. Examples from two applications are given: molecular graphics and medical imaging.
1. Introduction
Visualization of scientific computations is a rapidly growing field within computer graphics. A large subset of these applications involves sampled functions of three spatial dimensions, also known as volume data. Surfaces are commonly used to visualize volume data because they succinctly present the 3-D configuration of complicated objects. In this paper, we explore the use of isovalue contour surfaces to visualize electron density maps for molecular graphics, and the use of region boundary surfaces to visualize computed tomography (CT) data for medical imaging.

The currently dominant techniques for displaying surfaces from volume data consist of applying a surface detector to the sample array, fitting geometric primitives to the detected surfaces, then rendering these primitives using conventional surface rendering algorithms. The techniques differ from one another mainly in the choice of primitives and the scale at which they are defined. In the medical imaging field, a common approach is to apply thresholding to the volume data. The resulting binary representation can be rendered by treating 1-voxels as opaque cubes having six polygonal faces [1]. If this binary representation is augmented with the local grayscale gradient at each voxel, substantial improvements in surface shading can be obtained [2-5]. Alternatively, edge tracking can be applied on each slice to yield a set of contours defining features of interest, then a mesh of polygons can be constructed connecting the contours on adjacent slices [6]. As the scale of voxels approaches that of display pixels, it becomes feasible to apply a local surface detector at each sample location. This yields a very large collection of voxel-sized polygons, which can be rendered using standard algorithms [7]. In the molecular graphics field, methods for visualizing electron density maps include stacks of isovalue contour lines, ridge lines arranged in 3-space so as to connect local maxima [8], and basket meshes representing isovalue contour surfaces [9].

All of these techniques suffer from the common problem of having to make a binary classification decision: either a surface passes through the current voxel or it does not. As a result, these methods often exhibit false positives (spurious surfaces) or false negatives (erroneous holes in surfaces), particularly in the presence of small or poorly defined features.

To avoid these problems, researchers have begun exploring the notion of volume rendering, wherein the intermediate geometric representation is omitted. Images are formed by shading all data samples and projecting them onto the picture plane. The lack of explicit geometry does not preclude the display of surfaces, as will be demonstrated in this paper. The key improvement offered by volume rendering is that it provides a mechanism for displaying weak or fuzzy surfaces. This capability allows us to relax the requirement, inherent when using geometric representations, that a surface be either present or absent at a given location. This in turn frees us from the necessity of making binary classification decisions. Another advantage of volume rendering is that it allows us to separate shading and classification operations. This separation implies that the accuracy of surface shading, hence the apparent orientation of surfaces, does not depend on the success or failure of classification. This robustness can be contrasted with rendering techniques in which only voxels lying on detected surfaces are shaded. In such systems, any errors in classification result in incorrectly oriented surfaces.

Smith has written an excellent introduction to volume rendering [10]. Its application to CT data has been demonstrated by PIXAR [11], but no details of their approach have been published. The technique described in this paper grew out of the author's earlier work on the use of points as a rendering primitive [12]. Its application to CT data was first reported in June, 1987 [13], and was presented at the SPIE Medical Imaging II conference in January, 1988 [14].
2. Rendering pipeline
The volume rendering pipeline used in this paper is summarized in figure 1. We begin with an array of acquired values f_0(x_i) at voxel locations x_i = (x_i, y_j, z_k). The first step is data preparation, which may include correction for non-orthogonal sampling grids in electron density maps, correction for patient motion in CT data, contrast enhancement, and interpolation of additional samples. The output of this step is an array of prepared values f_1(x_i). This array is used as input to the shading model described in section 3, yielding an array of voxel colors c_λ(x_i), λ = r,g,b. In a separate step, the array of prepared values is used as input to one of the classification procedures described in section 4, yielding an array of voxel opacities α(x_i). Rays are then cast into these two arrays from the observer eyepoint. For each ray, a vector of sample colors c_λ(x̃_i) and opacities α(x̃_i) is computed by re-sampling the voxel database at K evenly spaced locations x̃_i = (x̃_i, ỹ_j, z̃_k) along the ray and tri-linearly interpolating from the colors and opacities in the eight voxels closest to each sample location, as shown in figure 2. Finally, a fully opaque background of color c_bkg,λ is draped behind the dataset, and the re-sampled colors and opacities are merged with each other and with the background by compositing in back-to-front order to yield a single color C_λ(ũ_i) for the ray and, since only one ray is cast per image pixel, for the pixel location ũ_i = (ũ_i, ṽ_j) as well.
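The tri-linear re-sampling step just described can be sketched in code. This is a minimal illustration, not the paper's implementation: the nested-list volume, unit voxel spacing, and the function name `trilinear` are all assumptions, and the sample is assumed to lie strictly inside the grid.

```python
def trilinear(vol, x, y, z):
    """Tri-linearly interpolate a value from the 3-D grid vol
    (indexed vol[i][j][k]) at the continuous location (x, y, z),
    using the eight voxels closest to the sample location."""
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k

    def v(di, dj, dk):
        return vol[i + di][j + dj][k + dk]

    # Interpolate along x, then y, then z.
    c00 = v(0, 0, 0) * (1 - fx) + v(1, 0, 0) * fx
    c10 = v(0, 1, 0) * (1 - fx) + v(1, 1, 0) * fx
    c01 = v(0, 0, 1) * (1 - fx) + v(1, 0, 1) * fx
    c11 = v(0, 1, 1) * (1 - fx) + v(1, 1, 1) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

The same routine would be applied once to the color array and once to the opacity array at each of the K sample locations along a ray.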
The compositing calculations referred to above are simply linear interpolations. Specifically, the color C_out,λ(ũ_i) of the ray as it leaves each sample location is related to the color C_in,λ(ũ_i) of the ray as it enters and the color c_λ(x̃_i) and opacity α(x̃_i) at that sample location by the transparency formula

    C_out,λ(ũ_i) = C_in,λ(ũ_i) (1 − α(x̃_i)) + c_λ(x̃_i) α(x̃_i).

Solving for pixel color C_λ(ũ_i) in terms of the vector of sample colors c_λ(x̃_i) and opacities α(x̃_i) along the associated viewing ray gives

    C_λ(ũ_i) = C_λ(ũ_i, ṽ_j) = Σ_{k̃=0}^{K} [ c_λ(x̃_i, ỹ_j, z̃_k̃) α(x̃_i, ỹ_j, z̃_k̃) Π_{m̃=k̃+1}^{K} (1 − α(x̃_i, ỹ_j, z̃_m̃)) ]    (1)

where c_λ(x̃_i, ỹ_j, z̃_0) = c_bkg,λ and α(x̃_i, ỹ_j, z̃_0) = 1.
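Equation (1) is equivalent to applying the transparency formula once per sample in back-to-front order. A short sketch of that loop for one color component (the function name is an assumption; samples are ordered far to near, and the opaque background enters as the initial color):

```python
def composite_ray(colors, opacities, c_bkg):
    """Composite one ray back to front. colors[k] and opacities[k]
    are the re-sampled values at the K locations along the ray;
    c_bkg is the fully opaque background color (alpha = 1)."""
    C = c_bkg
    for c, a in zip(colors, opacities):  # far to near
        C = C * (1.0 - a) + c * a  # transparency formula
    return C
```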
3. Shading
Using the rendering pipeline presented above, the mapping from acquired data to color provides 3-D shape cues, but does not participate in the classification operation. Accordingly, a shading model was selected that provides a satisfactory illusion of smooth surfaces at a reasonable cost. It is not the main point of the paper and is presented mainly for completeness. The model chosen is due to Phong [15]:

    c_λ(x_i) = c_p,λ k_a,λ + [c_p,λ / (k_1 + k_2 d(x_i))] [ k_d,λ (N(x_i) · L) + k_s,λ (N(x_i) · H)^n ]    (2)

where

    c_λ(x_i) = λ'th component of color at voxel location x_i, λ = r,g,b,
    c_p,λ = λ'th component of color of parallel light source,
    k_a,λ = ambient reflection coefficient for λ'th color component,
    k_d,λ = diffuse reflection coefficient for λ'th color component,
    k_s,λ = specular reflection coefficient for λ'th color component,
    n = exponent used to approximate highlight,
    k_1, k_2 = constants used in linear approximation of depth-cueing,
    d(x_i) = perpendicular distance from picture plane to voxel location x_i,
    N(x_i) = surface normal at voxel location x_i,
    L = normalized vector in direction of light source,
    H = normalized vector in direction of maximum highlight.

Since a parallel light source is used, L is a constant. Furthermore,

    H = (V + L) / |V + L|

where

    V = normalized vector in direction of observer.

Since an orthographic projection is used, V and hence H are also constants. Finally, the surface normal is given by

    N(x_i) = ∇f(x_i) / |∇f(x_i)|

where the gradient vector ∇f(x_i) is approximated using the operator

    ∇f(x_i) = ∇f(x_i, y_j, z_k) ≈ ( ½ [f(x_{i+1}, y_j, z_k) − f(x_{i−1}, y_j, z_k)],
                                    ½ [f(x_i, y_{j+1}, z_k) − f(x_i, y_{j−1}, z_k)],
                                    ½ [f(x_i, y_j, z_{k+1}) − f(x_i, y_j, z_{k−1})] ).
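The shading step can be transcribed almost directly. In this sketch the function names and coefficient values are illustrative assumptions (the paper does not specify them), and the diffuse and specular dot products are clamped at zero, a common practical addition that equation (2) leaves implicit:

```python
import math

def gradient(f, i, j, k):
    """Central-difference approximation of the gradient of f at voxel (i, j, k)."""
    return (0.5 * (f[i + 1][j][k] - f[i - 1][j][k]),
            0.5 * (f[i][j + 1][k] - f[i][j - 1][k]),
            0.5 * (f[i][j][k + 1] - f[i][j][k - 1]))

def normalize(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v) if m > 0 else (0.0, 0.0, 0.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(N, L, H, d, cp=1.0, ka=0.1, kd=0.7, ks=0.2, n=10, k1=1.0, k2=0.01):
    """Equation (2) for one color component: ambient term plus
    depth-cued diffuse and specular terms (coefficients illustrative)."""
    diffuse = kd * max(dot(N, L), 0.0)
    specular = ks * max(dot(N, H), 0.0) ** n
    return cp * ka + cp / (k1 + k2 * d) * (diffuse + specular)
```

A voxel's color component would then be phong(normalize(gradient(f, i, j, k)), L, H, d) with the coefficients for that component.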
4. Classification
The mapping from acquired data to opacity performs the essential task of surface classification.
We will first consider the rendering of isovalue contour surfaces in electron density maps, i.e. surfaces
defined by points of equal electron density. We will then consider the rendering of region boundary
surfaces in computed tomography (CT) data, i.e. surfaces bounding tissues of constant CT number.
4.1. Isovalue contour surfaces
Determining the structure of large molecules is a difficult problem. The method most commonly used is ab initio interpretation of electron density maps, which represent the averaged density of a molecule's electrons as a function of position in 3-space. These maps are obtained from X-ray diffraction studies of crystallized samples of the molecule.

One obvious way to display isovalue surfaces is to opaquely render all voxels having values greater than some threshold. This produces 3-D regions of opaque voxels, the outermost layer of which is the desired isovalue surface. Unfortunately, this solution prevents display of multiple concentric semi-transparent surfaces, a very useful capability. Using a window in place of a threshold does not solve the problem. If the window is too narrow, holes appear. If it is too wide, display of multiple surfaces is constrained. In addition, the use of thresholds and windows introduces artifacts into the image that are not present in the data.

The classification procedure employed in this study begins by assigning an opacity α_v to voxels having selected value f_v, and assigning an opacity of zero to all other voxels. In order to avoid aliasing artifacts, we would also like voxels having values close to f_v to be assigned opacities close to α_v. The most pleasing image is obtained if the thickness of this transition region stays constant throughout the volume. We approximate this effect by having the opacity fall off as we move away from the selected value at a rate inversely proportional to the magnitude of the local gradient vector. This mapping is implemented using the expression
    α(x_i) = α_v × { 1,  if |∇f(x_i)| = 0 and f(x_i) = f_v;
                     1 − (1/r) |(f_v − f(x_i)) / |∇f(x_i)||,  if |∇f(x_i)| > 0 and f(x_i) − r|∇f(x_i)| ≤ f_v ≤ f(x_i) + r|∇f(x_i)|;
                     0,  otherwise }    (3)

where r is the desired thickness in voxels of the transition region, and the gradient vector is approximated using the operator given in section 3. A graph of α(x_i) as a function of f(x_i) and |∇f(x_i)| for typical values of f_v, α_v, and r is shown in figure 3.
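Equation (3) in code form (a sketch; the function name and argument order are assumptions):

```python
def iso_opacity(fval, grad_mag, f_v, alpha_v, r):
    """Opacity of a voxel near the isovalue surface f = f_v,
    with a transition region r voxels thick (equation 3).
    fval is f(x_i); grad_mag is |grad f(x_i)|."""
    if grad_mag == 0.0:
        # Flat region: opaque only exactly at the selected value.
        return alpha_v if fval == f_v else 0.0
    if abs(f_v - fval) <= r * grad_mag:
        # Opacity falls off linearly with distance from the selected
        # value, at a rate inversely proportional to |grad f|.
        return alpha_v * (1.0 - abs((f_v - fval) / grad_mag) / r)
    return 0.0
```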
If more than one isovalue surface is to be displayed in a single image, they can be classified separately and their opacities combined. Specifically, given selected values f_v_n, n = 1, ..., N, N ≥ 1, opacities α_v_n, and transition region thicknesses r_n, we can use equation (3) to compute α_n(x_i), then apply the relation

    α_tot(x_i) = 1 − Π_{n=1}^{N} (1 − α_n(x_i)).
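The combination rule treats each surface's opacity as an independent attenuator at the voxel; as a sketch (the function name is an assumption):

```python
def combined_opacity(alphas):
    """alpha_tot = 1 - prod(1 - alpha_n): total opacity of N
    independently classified isovalue surfaces at one voxel."""
    transparency = 1.0
    for a in alphas:
        transparency *= (1.0 - a)
    return 1.0 - transparency
```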
4.2. Region boundary surfaces
From a densitometric point of view, the human body is a complex arrangement of biological tissues, each of which is fairly homogeneous and of predictable density. Clinicians are mostly interested in the boundaries between tissues, from which the sizes and spatial relationships of anatomical features can be inferred.
Although many researchers use isovalue surfaces for the display of medical data, it is not clear that they are well suited for that purpose. The reason can be explained briefly as follows. Given an anatomical scene containing two tissue types A and B having values f_vA and f_vB where f_vA < f_vB, data acquisition will produce voxels having values f(x_i) such that f_vA ≤ f(x_i) ≤ f_vB. Thin features of tissue type B may be represented by regions in which all voxels bear values less than f_vB. Indeed, there is no threshold value greater than f_vA guaranteed to detect arbitrarily thin regions of type B, and thresholds close to f_vA are as likely to detect noise as signal.
The procedure employed in this study is based on the following simplified model of anatomical scenes and the CT scanning process. We assume that scenes contain an arbitrary number of tissue types bearing CT numbers falling within a small neighborhood of some known value. We further assume that tissues of each type touch tissues of at most two other types in a given scene. Finally, we assume that, if we order the types by CT number, then each type touches only types adjacent to it in the ordering. Formally, given N tissue types bearing CT numbers f_v_n, n = 1, ..., N, N ≥ 1, such that f_v_m < f_v_{m+1}, m = 1, ..., N − 1, then no tissue of CT number f_v_{n1} touches any tissue of CT number f_v_{n2} if |n1 − n2| > 1.
If these criteria are met, each tissue type can be assigned an opacity, and a piecewise linear mapping can be constructed that converts voxel value f_v_n to opacity α_v_n, voxel value f_v_{n+1} to opacity α_v_{n+1}, and intermediate voxel values to intermediate opacities. Note that all voxels are typically mapped to some non-zero opacity and will thus contribute to the final image. This scheme insures that thin regions of tissue will still appear in the image, even if only as faint wisps. Note also that violation of the adjacency criteria leads to voxels that cannot be unambiguously classified as belonging to one region boundary or another and hence cannot be rendered correctly using this method.

The superimposition of multiple semi-transparent surfaces such as skin and bone can substantially enhance the comprehension of CT data. In order to obtain such effects using volume rendering, we would like to suppress the opacity of tissue interiors while enhancing the opacity of their bounding surfaces.
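The piecewise linear value-to-opacity mapping described above can be sketched as follows; the function name and any example CT numbers and opacities are illustrative assumptions, not values from the paper:

```python
def boundary_opacity(fval, f_vs, alpha_vs):
    """Piecewise linear mapping from a voxel's CT value to opacity.
    f_vs: tissue CT numbers sorted so f_v1 < ... < f_vN;
    alpha_vs: the opacity assigned to each tissue type.
    Intermediate values get intermediate opacities, so every voxel
    typically receives some non-zero opacity."""
    if fval <= f_vs[0]:
        return alpha_vs[0]
    if fval >= f_vs[-1]:
        return alpha_vs[-1]
    for n in range(len(f_vs) - 1):
        if f_vs[n] <= fval <= f_vs[n + 1]:
            t = (fval - f_vs[n]) / (f_vs[n + 1] - f_vs[n])
            return alpha_vs[n] * (1 - t) + alpha_vs[n + 1] * t
```

To suppress tissue interiors relative to their bounding surfaces, as the authors call for, this value-based opacity would additionally be weighted by the local gradient magnitude; that refinement is omitted from this sketch.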

Citations
Journal ArticleDOI
TL;DR: This paper proposes a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms and analyzes the results obtained to date to draw a large number of conclusions.
Abstract: The quantitative evaluation of optical flow algorithms by Barron et al. (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex natural scenes, including nonrigid motion, real sensor noise, and motion discontinuities. We propose a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: (1) sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture, (2) realistic synthetic sequences, (3) high frame-rate video used to study interpolation error, and (4) modified stereo sequences of static scenes. In addition to the average angular error used by Barron et al., we compute the absolute flow endpoint error, measures for frame interpolation error, improved statistics, and results at motion discontinuities and in textureless regions. In October 2007, we published the performance of several well-known methods on a preliminary version of our data to establish the current state of the art. We also made the data freely available on the web at http://vision.middlebury.edu/flow/ . Subsequently a number of researchers have uploaded their results to our website and published papers using the data. A significant improvement in performance has already been achieved. In this paper we analyze the results obtained to date and draw a large number of conclusions from them.

2,534 citations

Journal ArticleDOI
01 Jun 1988
TL;DR: A technique for rendering images of volumes containing mixtures of materials is presented, which allows both the interior of a material and the boundary between materials to be colored.
Abstract: A technique for rendering images of volumes containing mixtures of materials is presented. The shading model allows both the interior of a material and the boundary between materials to be colored. Image projection is performed by simulating the absorption of light along the ray path to the eye. The algorithms used are designed to avoid artifacts caused by aliasing and quantization and can be efficiently implemented on an image computer. Images from a variety of applications are shown.

1,702 citations

Proceedings ArticleDOI
24 Jul 1994
TL;DR: A new object-order rendering algorithm based on a shear-warp factorization of the viewing transformation is described that is significantly faster than published algorithms with minimal loss of image quality; a shear-warp factorization for perspective viewing transformations is also introduced.
Abstract: Several existing volume rendering algorithms operate by factoring the viewing transformation into a 3D shear parallel to the data slices, a projection to form an intermediate but distorted image, and a 2D warp to form an undistorted final image. We extend this class of algorithms in three ways. First, we describe a new object-order rendering algorithm based on the factorization that is significantly faster than published algorithms with minimal loss of image quality. Shear-warp factorizations have the property that rows of voxels in the volume are aligned with rows of pixels in the intermediate image. We use this fact to construct a scanline-based algorithm that traverses the volume and the intermediate image in synchrony, taking advantage of the spatial coherence present in both. We use spatial data structures based on run-length encoding for both the volume and the intermediate image. Our implementation running on an SGI Indigo workstation renders a 2563 voxel medical data set in one second. Our second extension is a shear-warp factorization for perspective viewing transformations, and we show how our rendering algorithm can support this extension. Third, we introduce a data structure for encoding spatial coherence in unclassified volumes (i.e. scalar fields with no precomputed opacity). When combined with our shear-warp rendering algorithm this data structure allows us to classify and render a 2563 voxel volume in three seconds. The method extends to support mixed volumes and geometry and is parallelizable.

1,249 citations

Journal ArticleDOI
TL;DR: The usefulness of the method is demonstrated by the segmentation and visualization of brain vessels from magnetic resonance imaging and magnetic resonance angiography, bronchi from a chest CT, and liver vessels (portal veins) from an abdominal CT.

1,135 citations

Journal ArticleDOI
TL;DR: This paper presents a front-to-back image-order volume-rendering algorithm and discusses two techniques for improving its performance, which employs a pyramid of binary volumes to encode spatial coherence present in the data and uses an opacity threshold to adaptively terminate ray tracing.
Abstract: Volume rendering is a technique for visualizing sampled scalar or vector fields of three spatial dimensions without fitting geometric primitives to the data. A subset of these techniques generates images by computing 2-D projections of a colored semitransparent volume, where the color and opacity at each point are derived from the data using local operators. Since all voxels participate in the generation of each image, rendering time grows linearly with the size of the dataset. This paper presents a front-to-back image-order volume-rendering algorithm and discusses two techniques for improving its performance. The first technique employs a pyramid of binary volumes to encode spatial coherence present in the data, and the second technique uses an opacity threshold to adaptively terminate ray tracing. Although the actual time saved depends on the data, speedups of an order of magnitude have been observed for datasets of useful size and complexity. Examples from two applications are given: medical imaging and molecular graphics.

1,096 citations

References
Proceedings ArticleDOI
01 Aug 1987
TL;DR: In this paper, a divide-and-conquer approach is used to generate inter-slice connectivity, creating a case table that defines triangle topology; triangle vertices are calculated using linear interpolation.
Abstract: We present a new algorithm, called marching cubes, that creates triangle models of constant density surfaces from 3D medical data. Using a divide-and-conquer approach to generate inter-slice connectivity, we create a case table that defines triangle topology. The algorithm processes the 3D medical data in scan-line order and calculates triangle vertices using linear interpolation. We find the gradient of the original data, normalize it, and use it as a basis for shading the models. The detail in images produced from the generated surface models is the result of maintaining the inter-slice connectivity, surface data, and gradient information present in the original 3D data. Results from computed tomography (CT), magnetic resonance (MR), and single-photon emission computed tomography (SPECT) illustrate the quality and functionality of marching cubes. We also discuss improvements that decrease processing time and add solid modeling capabilities.

13,231 citations

Journal ArticleDOI
TL;DR: Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images.
Abstract: The quality of computer generated images of three-dimensional scenes depends on the shading technique used to paint the objects on the cathode-ray tube screen. The shading algorithm itself depends in part on the method for modeling the object, which also determines the hidden surface algorithm. The various methods of object modeling, shading, and hidden surface removal are thus strongly interconnected. Several shading techniques corresponding to different methods of object modeling and the related hidden surface algorithms are presented here. Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images.

3,393 citations

Proceedings ArticleDOI
01 Jan 1984
TL;DR: In this article, a matte component can be computed similarly to the color channels for four-channel pictures, and guidelines for the generation of elements and arithmetic for their arbitrary compositing are discussed.
Abstract: Most computer graphics pictures have been computed all at once, so that the rendering program takes care of all computations relating to the overlap of objects. There are several applications, however, where elements must be rendered separately, relying on compositing techniques for the anti-aliased accumulation of the full image. This paper presents the case for four-channel pictures, demonstrating that a matte component can be computed similarly to the color channels. The paper discusses guidelines for the generation of elements and the arithmetic for their arbitrary compositing.

1,328 citations

Journal ArticleDOI
TL;DR: Most computer graphics pictures have been computed all at once, so that the rendering program takes care of all computations relating to the overlap of objects.
Abstract: Most computer graphics pictures have been computed all at once, so that the rendering program takes care of all computations relating to the overlap of objects. There are several applications, however, where elements must be rendered separately, relying on compositing techniques for the anti-aliased accumulation of the full image.

546 citations

Journal ArticleDOI
TL;DR: Methods of hidden surface removal and shading are presented for computer-displayed surfaces approximated by a large number of square faces of restricted orientation; they work at least an order of magnitude faster than previously published methods.

438 citations