Journal Article
Illumination for computer generated pictures
Bui Tuong Phong
Communications of the ACM, Vol. 18, Iss. 6, pp. 311-317, 01 Jun 1975
TL;DR: Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images.
Abstract: The quality of computer generated images of three-dimensional scenes depends on the shading technique used to paint the objects on the cathode-ray tube screen. The shading algorithm itself depends in part on the method for modeling the object, which also determines the hidden surface algorithm. The various methods of object modeling, shading, and hidden surface removal are thus strongly interconnected. Several shading techniques corresponding to different methods of object modeling and the related hidden surface algorithms are presented here. Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images.

Summary

Methods of Object Modeling

  • Image quality depends directly on the effectiveness of the shading algorithm, which in turn depends on the method of modeling the object.
  • With these systems, exact information at each point of the surface can be obtained, and the resulting computer generated pictures are most realistic.
  • Higher-degree surfaces have not been taken into consideration because of the increase in computation time needed to remove hidden surfaces and to perform shading computations.
  • This type of representation has the advantage that it avoids the problem, posed by mathematically curved surface approaches, of solving higher order equations.

Influence of Hidden Surface Algorithms

  • The order in which a hidden surface algorithm computes visible information has a decided influence on the way shading is performed.
  • This made it difficult to perform effective shading on curved objects.
  • On each scan line he computes which polygons intersect the scan line, and then computes the visible segment of each polygon, where this segment is the visible strip of the polygon, one screen resolution unit in height, that lies on the scan line.
  • The hidden surface problem is solved by painting the farthest face first, and the nearest last.
  • From the shading aspect, the important attribute of these algorithms is that they both generate information scan line by scan line in order to display the faces of an object.

Shading with the Polyhedral Model

  • When planar polygons are used to model an object, it is customary to shade the object by using the normal vectors to the polygons.
  • The shading of each point on a polygon is then the product of a shading coefficient for the polygon and the cosine of the angle between the polygon normal and the direction of incident light.

2. Highlights created by specular reflection.

  • Frame-to-frame discontinuities of shade in a computer generated film are illustrated in the following situation.
  • A curved surface is approximated with planar facets.
  • When this surface is in motion, all the facets which are perpendicular to the direction of the light take on a uniform shade.
  • Thus the surface appears to change from one with highlights to one of uniform shade.
  • Moreover, the position of these highlights is not steady from frame to frame as the object rotates.

Mach Band Effect

  • Many of the shading problems associated with planar approximation of curved surfaces are the result of the discontinuities at polygon boundaries.
  • One might expect that these problems could be avoided by reducing the size of the polygons.
  • This would be undesirable, of course, since it would increase the number of polygons and hence would increase both the memory requirements for storing the model and the time for hidden surface removal.

  • Unfortunately, because of visual perception effects, the reduction of polygon size is not as beneficial as might be expected. The particular effect responsible is the Mach Band effect.
  • Therefore unless the size of the displayed facets is shrunk to a resolution point, increasing the number of facets does not solve the problem.
  • The subjective discontinuity of shade at the edges due to the Mach Band effect then destroys the smooth appearance of the curved surface.
  • This new technique requires the computation of the normal to the displayed surface at each point.

Specular Reflection

  • If the goal in shading a computer-synthesized image is to simulate a real physical object, then the shading model should in some way imitate real physical shading situations.
  • The simple cosine-law shading model of eq. (1) ignores both the position of the observer and the specular properties of the object.
  • Even with the improvements introduced by Gouraud, which provide remarkably better shading, these properties are still ignored.
  • The first step in accounting for the specular properties of objects and the position of the observer is to determine the normal to the surface at each point to be shaded, i.e. at each point where a picture element of the raster display projects onto the surface.
  • It is evident from the preceding discussion, however, that their polyhedral model provides information about normals only at the vertices of polygons.

Computation of the Normal at a Point on the Surface

  • The normal to the visible surface at a point located between two edges is the linear interpolation of the normals at the intersections of these two edges with a scan plane passing through the point under consideration.
  • Note that the general surface normal is quadratically related to the vertex normal.
  • From the approximated normal at a point, a shading function determines the shading value at that point.

The Shading Function Model

  • In computer graphics, a shading function is defined as a function which yields the intensity value of each point on the body of an object from the characteristics of the light source, the object, and the position of the observer.
  • The function W(i) and the power n express the specular reflection characteristics of a material.
  • These numbers are empirically adjusted for the picture, and no physical justifications are made.
  • In order to simplify the model, and thereby the computation of the terms cos(i) and cos(s) of formula (3), it is assumed that: 1. The light source is located at infinity; that is, the light rays are parallel.
  • For an incident angle greater than 90 degrees, this means that the light source is behind the surface being shaded.

Conclusion

  • The linear interpolation scheme used here to approximate the orientation of the normal does not guarantee a continuous first derivative of the shading function across an edge of a polygonal model.
  • Also, an interesting fact discussed previously on Mach Band effect shows 317 that this effect is visible whenever there is a great change in the slope of the intensity distribution curve, even if the curve has a continuous first derivative.
  • The Gouraud model needs one interpolator for the shading function.
  • It must compute a new shading value for each raster unit, and hence must be very high speed to drive a real time display.
  • In addition, since the results of the interpolation do not yield a unit vector, and since eqs. (6), (7), and (8) require a unit normal vector, some extra hardware is necessary to "normalize" the outputs of the interpolators.


Graphics and Image Processing
W. Newman, Editor

Illumination for Computer Generated Pictures

Bui Tuong Phong
University of Utah
The quality of computer generated images of three-
dimensional scenes depends on the shading technique
used to paint the objects on the cathode-ray tube screen.
The shading algorithm itself depends in part on the
method for modeling the object, which also determines
the hidden surface algorithm. The various methods of
object modeling, shading, and hidden surface removal
are thus strongly interconnected. Several shading tech-
niques corresponding to different methods of object
modeling and the related hidden surface algorithms are
presented here. Human visual perception and the funda-
mental laws of optics are considered in the development
of a shading rule that provides better quality and in-
creased realism in generated images.
Key Words and Phrases: computer graphics, graphic
display, shading, hidden surface removal.
CR Categories: 3.26, 3.41, 8.2
Introduction
This paper describes several approaches to the pro-
duction of shaded pictures of solid objects. In the past
decade, we have witnessed the development of a number
of systems for the rendering of solid objects by com-
puter. The two principal problems encountered in the
design of these systems are the elimination of the hidden
parts and the shading of the objects.

Copyright © 1975, Association for Computing Machinery, Inc. General permission to republish, but not for profit, all or part of this material is granted provided that ACM's copyright notice is given and that reference is made to the publication, to its date of issue, and to the fact that reprinting privileges were granted by permission of the Association for Computing Machinery. This research was supported in part by the University of Utah Computer Science Division and the Advanced Research Projects Agency of the U.S. Department of Defense, monitored by the Rome Air Development Center, Griffiss Air Force Base, NY 13440, under Contract F30602-70-C-0300. Author's address: Digital Systems Laboratory, Stanford University, Stanford, CA 94305.

Until now, most
effort has been spent in the search for fast hidden surface
removal algorithms. With the development of these
algorithms, the programs that produce pictures are
becoming remarkably fast, and we may now turn to the
search for algorithms to enhance the quality of these
pictures.
In trying to improve the quality of the synthetic
images, we do not expect to be able to display the object
exactly as it would appear in reality, with texture, over-
cast shadows, etc. We hope only to display an image
that approximates the real object closely enough to
provide a certain degree of realism. This involves some
understanding of the fundamental properties of the
human visual system. Unlike a photograph of a real
world scene, a computer generated shaded picture is
made from a numerical model, which is stored in the
computer as an objective description. When an image is
then generated from this model, the human visual sys-
tem makes the final subjective analysis. Obtaining a
close image correspondence to the eye's subjective
interpretation of the real object is then the goal. The
computer system can be compared to an artist who
paints an object from its description and not from direct
observation of the object. But unlike the artist, who can
correct the painting if it does not look right to him, the
computer that generates the picture does not receive
feedback about the quality of the synthetic images,
because the human visual system is the final receptor.
This is a subjective domain. We must at the outset
define the degree of realism we wish to attain, and fix
certain goals to be accomplished. Among these goals
are:
1. "Real time" display of dynamic color pictures of
three-dimensional objects. A real time display system
is one capable of generating pictures at the rate of at
least 30 frames a second.
2. Representation of objects made of smooth curved
surfaces.
3. Elimination or attenuation of the effects of digital
sampling techniques.
The most important consideration in trying to attain
these goals is the object modeling technique.
Existing Shading Techniques
Methods of Object Modeling
Image quality depends directly on the effectiveness
of the shading algorithm, which in turn depends on the
method of modeling the object. Two principal methods
of object description are commonly used:
1. Surface definition using mathematical equations.
2. Surface approximation by planar polygonal mosaic.
Several systems have been implemented to remove
hidden parts for mathematically defined curved surfaces
[1, 2, 3, 4, 5]. With these systems, exact information at
each point of the surface can be obtained, and the result-
ing computer generated pictures are most realistic. The
class of possible surfaces is restricted, however, and the
computation time needed to remove the hidden parts
and to perform shading is very large. Up to the present
time, these systems have usually considered the class of
surfaces represented by quadric patches. Although
higher degree surfaces are desirable and are sometimes
necessary to model an object, they have not been taken
into consideration due to an increase in computation
time to remove hidden surfaces and to perform shading
computations. Even when only quadric surfaces are
considered, the implementation of a real time display
system using this type of model is too expensive and
complex.
A simple method of representing curved surfaces and
objects of arbitrary shape is to approximate the surfaces
with small planar polygons; for example, a cone might
be represented as shown in Figure 1. This type of repre-
sentation has the advantage that it avoids the problem,
posed by mathematically curved surface approaches, of
solving higher order equations.
Planar approximation also offers the only means of
reducing hidden surface computation to within reason-
able bounds, without restricting the class of surfaces
that can be represented. For this reason, all recent
attempts to devise fast hidden surface algorithms have
been based on the use of this approximation for curved
surfaces; these algorithms have been summarized and
classified by Sutherland et al. [6]. The next section dis-
cusses their influence on the way shading is computed.
While planar approximation greatly simplifies
hidden surface removal, it introduces several major
problems in the generation of a realistic displayed
image. One of these is the contour edge problem: the
outline or silhouette of a polygonally approximated
object is itself a polygon, not a smooth curve. The other
problem is that of shading the polygons in a realistic
manner. This paper is concerned with the shading
problem; the contour edge problem is discussed by the
author and F.C. Crow in [7].
Influence of Hidden Surface Algorithms
The order in which a hidden surface algorithm com-
putes visible information has a decided influence on the
way shading is performed. For example Warnock, who
developed one of the first such algorithms [8], com-
puted display data by a binary subdivision process: this
meant that the order of generating display data was
largely independent both of the order of scanning the
display and of the order of the polygons in memory.
This made it difficult to perform effective shading on
curved objects.
The two major advances in the development of fast
hidden surface algorithms have been made by Watkins
[9] and by Newell, Newell, and Sancha [10]. Watkins
generates the displayed picture scan line by scan line.
On each scan line he computes which polygons intersect
the scan line, and then computes the visible segment of
each polygon, where this segment is the visible strip of the polygon, one screen resolution unit in height, that lies on the scan line.

Fig. 1. A cone represented by means of planar approximation.
Newell, Newell, and Sancha adopt a different ap-
proach, using a
frame buffer into which the object is
painted, face by face. The hidden surface problem is
solved by painting the farthest face first, and the nearest
last. Each face is painted scan line by scan line, starting
at the top of the face.
From the shading aspect, the important attribute of
these algorithms is that they both generate information
scan line by scan line in order to display the faces of an
object. This information is in the form of segments, one
screen resolution unit high, on which the shading com-
putation may then be performed. The main differences
between the algorithms, from the point of view of
shading, are (a) the order in which the segments are
generated, and (b) the fact that Watkins generates each
screen dot only once, whereas the Newell-Sancha al-
gorithm may overwrite the same dot several times.
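As an illustration (not from the paper), here is a minimal Python sketch of the painter's-style ordering used by the Newell-Sancha approach: faces are painted into a frame buffer from farthest to nearest, so nearer faces may overwrite dots already written. The face and frame-buffer representations are assumptions made only for this example.

```python
# Illustrative sketch: painter's-style ordering into a frame buffer.
def paint_scene(faces, frame_buffer):
    """faces: list of (depth, pixels) pairs, pixels being ((x, y), shade) tuples."""
    # Paint the farthest face first; nearer faces overwrite it afterwards.
    for depth, pixels in sorted(faces, key=lambda f: f[0], reverse=True):
        for (x, y), shade in pixels:
            frame_buffer[y][x] = shade
    return frame_buffer

# A 4x4 buffer and two one-pixel "faces" covering the same dot.
buf = [[0.0] * 4 for _ in range(4)]
faces = [(5.0, [((1, 1), 0.3)]),   # far face, painted first
         (2.0, [((1, 1), 0.9)])]   # near face, ends up visible
print(paint_scene(faces, buf)[1][1])   # 0.9
```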
Shading with the Polyhedral Model
When planar polygons are used to model an object,
it is customary to shade the object by using the normal
vectors
to the polygons. The shading of each point on a
polygon is then the product of a shading coefficient for
the polygon and the cosine of the angle between the
polygon normal and the direction of incident light. This
cosine relationship is known in optics as the "cosine
law," and allows us to compute the shading Sp for a
polygon p as
Sp = Cp cos(i),   (1)
where Cp is the reflection coefficient of the material of p
relative to the incident wavelength, and i is the incident
angle.
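As an illustration (not part of the paper), a small Python sketch of the cosine law of eq. (1); the vector representation and the clamp for polygons turned away from the light are assumptions of this example.

```python
import math

def cosine_law_shade(Cp, normal, light_dir):
    """Eq. (1): Sp = Cp cos(i), where i is the angle between the polygon
    normal and the direction of the incident light (both unit vectors)."""
    cos_i = sum(n * l for n, l in zip(normal, light_dir))
    return Cp * max(cos_i, 0.0)   # assumed clamp: faces turned away get no light

# A polygon whose normal makes a 60 degree angle with the light direction.
print(cosine_law_shade(0.8, (0.0, 0.0, 1.0),
                       (0.0, math.sin(math.pi / 3), math.cos(math.pi / 3))))  # ~0.4
```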

Fig. 2. An example of the use of Newell, Newell, and Sancha's
shading technique, showing transparency and highlight effects.
Fig. 3. Computation of the shading at point R using the Gouraud
method. There are two successive linear interpolations: (1) across
polygon edges, i.e. P between A and B, Q between A and D; and (2)
along the scan line, i.e. R between P and Q.
Fig. 4. Gouraud shading, applied to approximated cone of Fig. 1.
This shading offers only a very rough approximation
of the true physical effect. It does not allow for any of
the specular properties of the material, i.e. the ability of
the material to generate highlights by reflection from its
outer surface, and the position of the observer, which is
ignored. A more serious drawback to this method, how-
ever, is the poor effect when using it to display smooth
curved surfaces. The cosine law rule is appropriate for
objects that are properly modeled with planar surfaces,
such as boxes, buildings, etc., but it is inappropriate for
smoothly curved surfaces such as automobile bodies.
This does not mean, however, that we should abandon
the use of such a polygon-oriented shading rule and
search for a different rule for curved surfaces. Recent
research in shading techniques demonstrates that signifi-
cant results can be achieved by using the basic shading
rule of eq. (1) and modifying the results to reduce the
discontinuities in shading between adjacent polygons.
1. Warnock's shading. As three-dimensional objects
are projected onto the cathode-ray tube screen, the
depth sensation is lost, and the images of those objects
appear flat. In order to restore the depth sensation, two
effects were simulated by Warnock:
1. Decreasing intensity of the reflected light from the
object with the distance between the light source and the
object.
2. Highlights created by specular reflection.
Warnock placed the light source and the eye at the
same position, so that the shading function was the sum
of two terms, one for the normal "cosine" law, and the
other term for the specularly reflected light. The result-
ing pictures have several desirable attributes; for exam-
ple, identical parallel faces, located differently in space,
will be shaded at different intensities, and facets which
face directly toward the light source are brighter than
adjacent facets facing slightly away from the incident
light. However, the polygonal model gives a discontinu-
ity in shading between faces of an approximated curved
surface. When a curved surface is displayed, the smooth-
ness of the curved surface is destroyed by this discon-
tinuity. This is clearly visible in Figure 1.
2. Newell, Newell, and Sancha's shading. Newell,
Newell, and Sancha presented some ideas on creating
transparency and highlights. From observations in the
real world, they found that highlights are created not
only by the incident light source but also by the reflec-
tion of light from other objects in the scene; this is
especially true in the case of objects made of highly
reflective or transparent materials. In the Newell-
Sancha model, curved surfaces are approximated with
planar polygons. Unfortunately, the ability to generate
highlights is severely limited due to the inability to vary
light intensity over the surface of any single polygon.
This problem is apparent in Figure 2.
3. Gouraud's shading. While working on a technique
to represent curved objects made of "Coons surfaces"
or "Bezier patches," Gouraud [11] developed an al-
gorithm to shade curved surfaces. With his algorithm,
a surface represented by a patch is approximated by
polygonal planar facets. Gouraud computes information
about the curvature of the surface at each vertex of each
of these facets. From the curvature, a shade intensity is
computed and retained. For example, the shade intensity
may be computed for each vertex using eq. (1), with i as
the angle between the incident light and the normal to
the surface at this vertex. When the surface is displayed,
this shade intensity is linearly interpolated along the
edge between adjacent pairs of vertices of the object.
The shade at a point on the surface is also a linear inter-
polation of the shade along a scan line between inter-
sections of the edges with a plane passing through the
scan line (Figure 3). This very simple method gives a
continuous gradation of shade over the entire surface,
which in most cases restores the smooth appearance. An
example of Gouraud's shading is shown in Figure 4.
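As a small illustration (not from the paper), a Python sketch of the two successive linear interpolations of Figure 3, assuming the vertex intensities have already been computed, e.g. with eq. (1); the parameterization by t along each edge and along the scan line is an assumption of this example.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def gouraud_shade(I_A, I_B, I_D, t_P, t_Q, t_R):
    """Fig. 3: P lies on edge AB, Q on edge AD, and R on the scan line PQ."""
    I_P = lerp(I_A, I_B, t_P)   # first interpolation, across edge AB
    I_Q = lerp(I_A, I_D, t_Q)   # first interpolation, across edge AD
    return lerp(I_P, I_Q, t_R)  # second interpolation, along the scan line

# Vertex intensities 0.2, 0.8, 0.5; R halfway between P and Q.
print(gouraud_shade(0.2, 0.8, 0.5, t_P=0.5, t_Q=0.5, t_R=0.5))  # 0.425
```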
With the introduction of the Gouraud smooth shading
technique, the quality of computer-generated images
improved sufficiently to allow representation of a large
variety of objects with great realism. Problems still
exist, however, one of which is the apparent discon-
tinuity across polygon edges. On surfaces with a high
component of specular reflection, highlights are often
inappropriately shaped, since they depend upon the
disposition and shape of the polygons used to approxi-
mate a curved surface and not upon the curvature of the
object surface itself. The shading of a surface in motion
(in a computer generated film) has annoying frame to
frame discontinuities due to the changing orientation of
the polygons describing the surface. Also the shading
algorithms are not invariant under rotation.
Frame-to-frame discontinuities of shade in a com-
puter generated film are illustrated in the following
situation. A curved surface is approximated with planar
facets. When this surface is in motion, all the facets
which are perpendicular to the direction of the light take
on a uniform shade. In the next frame the motion of the
object brings these facets into a different orientation
toward the light, and the intensity of the shade across
their surfaces varies continuously from one end to the
other. Thus the surface appears to change from one with
highlights to one of uniform shade. Moreover, the
position of these highlights is not steady from frame to
frame as the object rotates.
Mach Band Effect
Many of the shading problems associated with
planar approximation of curved surfaces are the result
of the discontinuities at polygon boundaries. One might
expect that these problems could be avoided by reducing
the size of the polygons. This would be undesirable, of
course, since it would increase the number of polygons
and hence would increase both the memory require-
ments for storing the model and the time for hidden
surface removal.
Fig. 5. Normal at a point along an edge.
Fig. 6. Shading at a point.
Unfortunately, because of visual perception effects,
the reduction of polygon size is not as beneficial as
might be expected. The particular effect responsible is
the Mach Band effect. Mach established the following
principle:
Wherever the light-intensity curve of an illuminated surface (the
light intensity of which varies in only one direction) has a concave
or convex flection with respect to the axis of the abscissa, that
particular place appears brighter or darker, respectively, than its
surroundings [E. Mach, 1865].
Whenever the slope of the light intensity curve changes,
this effect appears. The extent to which it is noticeable
depends upon the magnitude of the curvature change,
but the effect itself is always present.
Without the Mach Band effect, one might hope to
achieve accurate shading by reducing the size of poly-
gons. Unfortunately the eye enhances the discontinuities
over polygon edges, creating undesired areas of appar-
ent brightness along the edges. Therefore unless the size
of the displayed facets is shrunk to a resolution point,
increasing the number of facets does not solve the
problem. Using the Gouraud method to interpolate the
shade linearly between vertices, the discontinuities of
the shading function disappear, but the Mach Band
effect is visible where the slope of the shading function
changes. This can be seen in Figure 4. The subjective
discontinuity of shade at the edges due to the Mach
Band effect then destroys the smooth appearance of the
curved surface.
A better shading rule is therefore proposed for dis-
playing curved surfaces described by planar polygons.
This new technique requires the computation of the
normal to the displayed surface at each point. It is
therefore more expensive in computation than
Gouraud's technique; but the quality of the resulting
picture, and the accuracy of the displayed highlights, is
much improved.
Using a Physical Model
Specular Reflection
If the goal in shading a computer-synthesized image
is to simulate a real physical object, then the shading
model should in some way imitate real physical shading
situations. Clearly the model of eq. (1) does not ac-
complish this. As mentioned before, it completely ignores both the position of the observer and the specular properties of the object. Even with the improvements introduced by Gouraud, which provide remarkably better shading, these properties are still ignored.

Fig. 7(a). Determination of the reflected light.
Fig. 7(b). Projections of the reflected light.
The first step in accounting for the specular proper-
ties of objects and the position of the observer is to
determine the normal to the surface at each point to be
shaded, i.e. at each point where a picture element of the
raster display projects onto the surface. It is only with
this knowledge that information about the direction of
reflected rays can be acquired, and only with this in-
formation can we model the specular properties of
objects. It is evident from the preceding discussion,
however, that our polyhedral model provides informa-
tion about normals only at the vertices of polygons.
Thus the first step in improving our shading model is to
devise a way to obtain the normal to the surface for each
raster unit.
Computation of the Normal at a Point on the Surface
The normal at each vertex can be approximated by
either one of the methods described by Gouraud [11].
It is now necessary to define the normal to the surface
along the edges and at a point on the surface of a poly-
gon.
The normal to the surface at a point along the edge
of a polygonal model is the result of a linear interpola-
tion to the normals at the two vertices of that edge. An
example is given in Figure 5: the normal Nt to the
surface at a point between the two vertices P0 and P1 is
computed as follows:
Nt = t N1 + (1 - t) N0,   (2)
where t = 0 at N0 and t = 1 at N1.
The determination of the normal at a point on the
surface of a polygon is achieved in the same way as the
computation of the shading at that point with the
Gouraud technique. The normal to the visible surface at
a point located between two edges is the linear inter-
polation of the normals at the intersections of these two
edges with a scan plane passing through the point under
consideration. Note that the general surface normal is
quadratically related to the vertex normal.
From the approximated normal at a point, a shading
function determines the shading value at that point.
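As an illustration (not from the paper), a Python sketch of eq. (2) used to obtain the normal between the two edges cut by the scan plane; the renormalization step reflects the later remark that the shading formulas require a unit normal vector, and all helper names are assumptions of this example.

```python
import math

def lerp_normal(N0, N1, t):
    """Eq. (2): Nt = t*N1 + (1 - t)*N0, applied componentwise."""
    return tuple(t * b + (1.0 - t) * a for a, b in zip(N0, N1))

def unit(v):
    """Interpolated normals are generally not unit length, so renormalize."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def normal_at_point(N_left_edge, N_right_edge, t):
    """Normal at a point between the two edges cut by the scan plane."""
    return unit(lerp_normal(N_left_edge, N_right_edge, t))

# Normals found on the two edges; the point lies halfway between them.
print(normal_at_point((0.0, 0.0, 1.0), (0.0, 1.0, 0.0), 0.5))
```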
The Shading Function Model
In computer graphics, a shading function is defined
as a function which yields the intensity value of each
point on the body of an object from the characteristics
of the light source, the object, and the position of the
observer.
Taking into consideration that the light received by
the eye is provided one part by the diffuse reflection and
one part by the specular reflection of the incident light,
the shading at point P (Figure 6) on an object can be
computed as:
Sp = Cp[cos(i)(1 - d) + d] + W(i)[cos(s)]^n,   (3)
where:
Cp is the reflection coefficient of the object at point P
for a certain wavelength.
i is the incident angle.
d is the environmental diffuse reflection coefficient.
W(i) is a function which gives the ratio of the specular
reflected light and the incident light as a function
of the incident angle i.
s is the angle between the direction of the reflected
light and the line of sight.
n is a power which models the specular reflected
light for each material.
The function W(i) and the power n express the
specular reflection characteristics of a material. For a
highly reflective material, the values of both W(i) and n
are large. The range of W(i) is between 10 and 80
percent, and n varies from 1 to 10. These numbers are
empirically adjusted for the picture, and no physical
justifications are made. In order to simplify the model,
and thereby the computation of the terms cos(i) and
cos(s) of formula (3), it is assumed that:
1. The light source is located at infinity; that is, the
light rays are parallel.
2. The eye is also removed to infinity.
With these two considerations, the values of cos(i)
and cos(s) of the shading function in (3) can be re-
written as:

cos(i) = k · Np / |Np|   and   cos(s) = u · Rp / |Rp|,

where k and u are respectively the unit vectors in the direction of the light and the line of sight, Np is the normal vector at P, and Rp is the reflected light vector at P.
The quantity k · Np / |Np| can be referred to as the projection of a normalized vector Np onto an axis parallel to the direction of the light. If |Np| is unity, the previous
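To tie the pieces together, here is an illustrative Python sketch (not from the paper) of the shading function of eq. (3) under the two simplifying assumptions above; W(i) is treated as a constant, and the mirror-reflection formula R = 2(N·L)N - L is standard vector algebra used in place of the paper's eqs. (6)-(8), which are not reproduced here.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def unit(v):
    length = math.sqrt(dot(v, v))
    return tuple(c / length for c in v)

def phong_shade(Cp, d, W, n, normal, to_light, to_eye):
    """Eq. (3): Sp = Cp[cos(i)(1 - d) + d] + W(i)[cos(s)]^n.
    W is used here as a constant stand-in for the function W(i)."""
    N, L, E = unit(normal), unit(to_light), unit(to_eye)
    cos_i = max(dot(N, L), 0.0)
    # Mirror reflection of the light direction about the normal.
    R = tuple(2.0 * cos_i * nc - lc for nc, lc in zip(N, L))
    cos_s = max(dot(R, E), 0.0)
    return Cp * (cos_i * (1.0 - d) + d) + W * cos_s ** n

# An eye direction close to the mirror direction produces a visible highlight.
print(phong_shade(Cp=0.7, d=0.2, W=0.3, n=8,
                  normal=(0.0, 0.0, 1.0),
                  to_light=(0.0, 0.6, 0.8),
                  to_eye=(0.0, -0.5, 0.9)))
```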

References

  • Report, 01 Jan 1974. A method for producing computer shaded pictures of curved surfaces using three-dimensional curved patches, as contrasted with conventional methods using polygons. A patch is subdivided into successively smaller subpatches until a subpatch is as small as a raster element, at which time it can be displayed; the bicubic patch is one class that can be subdivided very quickly. Photographs can be "mapped" onto patches, providing a means for putting texture on computer-generated pictures.

  • Journal article. Asserts that the hidden-surface problem is mainly one of sorting: the various surfaces of an object must be sorted to find out which ones are visible at various places on the screen, and the order of sorting and the types of sorting used form the differences among the existing hidden-surface algorithms.

  • Henri Gouraud, journal article. A procedure for computing shaded pictures of curved surfaces. The surface is approximated by small polygons in order to solve the hidden-parts problem easily, but the shading of each polygon is computed so that discontinuities of shade are eliminated across the surface and a smooth appearance is obtained. The technique developed by Watkins is used, which makes a hardware implementation of the algorithm possible.

  • Report, 01 Jan 1971. Describes a method for producing shaded pictures of curved surfaces: a small-polygon approximation of the surface solves hidden-parts detection efficiently, and the shading on each polygon is computed so that visual discontinuities between adjacent polygons disappear, restoring the apparent smoothness of the surface. The smooth shading technique has been used to produce pictures of airplanes, a car, a human face, and some mathematical surfaces.

  • Dissertation, 01 Jan 1970. Describes an algorithm designed for a hardware processor capable of displaying solid objects and generating pictures of fairly complicated objects at thirty frames per second, together with a FORTRAN 5 program for simulating the hardware processor.