A rendering framework for multiscale views of 3D models
TL;DR: Images that seamlessly combine views at different levels of detail are appealing, but creating such multiscale images is not a trivial task, and most such illustrations are handcrafted by skilled artists. This article presents a framework for rendering them directly.
Summary
1 Introduction
The work is motivated by an illustration that, as shown in Figure 3, depicts both macro and micro perspectives of the human circulatory system in a continuous landscape across multiple scales.
 The astonishment and fascination evoked by the illustration, along with its high educative value, won it first place in the illustration category of the 2008 U.S. National Science Foundation and Science Magazine Visualization Challenge.
 Static multiscale illustrations are frequently used to convey hierarchical structures, such as the anatomy of organ systems and the design of engineered architectures, as shown in Figure 2.
 Large terrain data, on the other hand, is usually encapsulated in explorable, navigable interactive systems [McCrae et al. 2009; Google 2010].
3 Multiscale Image Composition
A continuous multiscale image is composed of two types of regions: the first consists of the ordinary perspective views at each scale of interest, and the other consists of the transitions that smoothly connect views at different scales.
In order to render transitions between scales, a naive approach is to separately render each view and then blend them together using either an illustration metaphor, or a seamless image-stitching algorithm, such as the graph cut [Kwatra et al. 2003] or Poisson image editing [Pérez et al. 2003].
 Each camera produces an image of its view.
 Adjacent camera rays must be coherent so as to avoid too much distortion in the resulting image.
 The authors discuss these two issues and their solutions in the next section.
4 Camera Ray Generation
In order to generate camera rays that seamlessly blend the views specified by the user, the authors introduce a streamline-based approach.
 Rays are derived by tracing streamlines in this vector field.
The process of generating the camera rays is as follows: 1. Initialize a set of guide curves, which connect a sequence of camera planes via Bézier curves.
2. Derive the complete vector field that best matches these curves via minimization.
3. Trace streamlines from each pixel in the image by integrating the vector field.
4. Each streamline defines the path that light traverses to generate a multiscale image.
4.1 Camera Setup
 The first step in their design is to set up the cameras that generate the multiscale image.
The near clipping plane of each camera C_i is defined by the point E_i + V_i d_i and the normal vector V_i, and the range is determined by f_i and a_i. Each camera also has a user-defined binary mask I_{i,x,y} which indicates the preserved viewing regions in its pixel space.
4.2 Multiscale Frustum Construction
 As a preliminary step, the authors bind successive cameras so that the regions of interest are propagated smoothly along the multiple scales.
 Figure 6(a) illustrates a pinhole camera with its viewing frustum highlighted in red.
 Figure 6(c) depicts ray coupling with the application of image masks.
 Rays then continue to proceed linearly to capture the view of Cj+1.
4.3 Streamlines as Camera Rays
 The previous step defines a series of guiding curves that indicate how rays should be traced to realize the multiscale camera.
 The authors need to ensure that these rays do not intersect and that they are smoothly interpolated in the transition regions.
 Streamlines are a set of curves which depict the instantaneous tangents of an underlying vector field.
 A streamline shows the direction that the corresponding fluid element travels in the field at any point in space.
 One characteristic of streamlines is that any two given streamlines would never intersect each other as long as no critical points are present [Fay 1994].
4.4 Estimating The Vector Field
 To derive streamlines for use as multiscale camera rays, the authors must first construct the underlying vector field.
 Unfortunately, although the resulting vector field satisfies their requirements that streamlines pass through all views in preserved regions, and smoothly transition between preserved regions, streamlines in the transitional regions produced by this method may not preserve the characteristics of camera rays in a natural way.
 Thus, the neighboring vectors on top of the preserved views have false values that lead to a deviated perspective projection as shown in Figure 7(a).
4.5 Streamline Generation
The final step is to generate streamlines from the vector field Ŷ. Since streamlines are used to simulate camera rays, each streamline should pass through a pixel on the projection plane (the near-clipping plane of Camera 1).
Therefore, the authors take the pixel positions on the projection plane as seed points to trace streamlines using the Runge–Kutta method.
 Once the authors have all of the streamlines, they can render the final image using these streamlines as sampling rays.
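The summary gives no code, but the tracing step can be sketched with classic fourth-order Runge–Kutta integration. Everything here is a simplified stand-in, not the authors' implementation: `sample_field` is a hypothetical callable returning the interpolated field vector at a point, and the step size and stopping rule are illustrative choices.

```python
import numpy as np

def trace_streamline(seed, sample_field, step=0.05, max_steps=1000):
    """Trace one streamline (camera ray) from a seed pixel position
    using fourth-order Runge-Kutta integration of the vector field.
    `sample_field(p)` must return the (interpolated) field vector at p."""
    pts = [np.asarray(seed, dtype=float)]
    p = pts[0]
    for _ in range(max_steps):
        k1 = sample_field(p)
        k2 = sample_field(p + 0.5 * step * k1)
        k3 = sample_field(p + 0.5 * step * k2)
        k4 = sample_field(p + step * k3)
        v = (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        if np.linalg.norm(v) < 1e-8:   # vanishing field: stop tracing
            break
        p = p + step * v
        pts.append(p)
    return np.array(pts)

# Toy field: constant +z flow, so each ray marches straight ahead.
ray = trace_streamline([0.0, 0.0, 0.0],
                       lambda p: np.array([0.0, 0.0, 1.0]),
                       step=0.1, max_steps=10)
```

With the real smoothed field Ŷ in place of the toy lambda, each traced polyline would serve as one sampling ray for the renderer.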
5 Implementation Details
 To implement vector field smoothing and streamline generation, the authors need to construct a 3D volume of the vector field that encloses the entire space, including all objects and camera frusta.
 This means the authors only have a finite number of vector field samples.
 As a result, the resolution of the volume directly affects the accuracy of streamline integration.
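Because the field is stored at a finite number of voxel samples, integration needs values between samples. One common choice for this (the text does not specify the authors' exact scheme) is trilinear interpolation; a minimal sketch, assuming a hypothetical `field` array of shape (nx, ny, nz, 3) in voxel units:

```python
import numpy as np

def sample_trilinear(field, p):
    """Trilinearly interpolate a sampled vector field at continuous
    position p. `field` has shape (nx, ny, nz, 3); p is in voxel units."""
    nx, ny, nz, _ = field.shape
    # Clamp so the 8 surrounding samples stay inside the volume.
    p = np.clip(p, 0, [nx - 1.001, ny - 1.001, nz - 1.001])
    i, j, k = np.floor(p).astype(int)
    fx, fy, fz = p - (i, j, k)
    c = field[i:i+2, j:j+2, k:k+2]          # the 8 corner samples
    c = c[0] * (1 - fx) + c[1] * fx         # collapse along x
    c = c[0] * (1 - fy) + c[1] * fy         # collapse along y
    return c[0] * (1 - fz) + c[1] * fz      # collapse along z

# A field that varies linearly in x is reproduced exactly.
grid = np.zeros((4, 4, 4, 3))
for i in range(4):
    grid[i, :, :, 0] = i
v = sample_trilinear(grid, np.array([1.5, 2.0, 2.0]))
```

The interpolation order, like the volume resolution it compensates for, trades accuracy of streamline integration against cost.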
 Rendering 3D scenes with the generated nonlinear rays can be done at interactive frame rates thanks to GPU acceleration.
Depending on the desired resolution for calculating the vector field, the computation time varies from a few minutes to several hours.
6 Discussion
Figure 10 provides a direct comparison between the results using an image-based approach and their vector field and scalar field optimization.
The image-based result does not depict the true spatial relations of the human body and the inner organs.
 In their approach, the vector field is initialized from a set of Bézier curves, which smoothly connect the near clipping planes of each pair of successive cameras.
 But since the final streamlines are traced in this global vector field, the streamlines by definition should not intersect.
 In cases where the frusta of nonconsecutive cameras intersect, critical points could appear, which may distort the image dramatically.
6.1 Limitations
 The quality and beauty of the final image are highly dependent on users’ efforts in selecting image masks and camera viewpoints.
 As a result, the preserved regions in image masks should correspond to the relative positions of multiscale viewpoints.
Contradictory preserved views can result in undesirable images.
 Discontinuous vector fields at junctions between different resolutions also introduce slight streamline perturbations, sometimes resulting in perceivable artifacts in projected images.
Therefore, a custom ray tracer is required to fully apply their framework to geometric scenes.
6.2 Applications
The authors' current design demonstrates attractive results and suggests several interesting uses.
 Showing continuous multiscale views in the same image helps viewers comprehend the spatial relationship between scales.
 The authors believe their ray generation approach can support multiple magnified regions by slightly modifying the way that pinhole cameras are connected so that camera rays can be cast differently in different image regions.
The top three images (a)–(c) in Figure 11 are three different camera viewpoints showing different levels of detail in the scene.
Camera 1 provides a bird's-eye view directly above the stag beetle, while Camera 2 shows a close-up view of the front of the beetle.
A Rendering Framework for Multiscale Views of 3D Models
Wei-Hsien Hsu*   Kwan-Liu Ma†
University of California at Davis
Carlos Correa‡
Lawrence Livermore National Laboratory
Figure 1: A continuous multiscale view (right) of a volumetric human body dataset shows three different levels of detail (left three) in a single image. The image on the right is directly rendered with our multiscale framework.
Abstract

Images that seamlessly combine views at different levels of detail are appealing. However, creating such multiscale images is not a trivial task, and most such illustrations are handcrafted by skilled artists. This paper presents a framework for direct multiscale rendering of geometric and volumetric models. The basis of our approach is a set of nonlinearly bent camera rays that smoothly cast through multiple scales. We show that by properly setting up a sequence of conventional pinhole cameras to capture features of interest at different scales, along with image masks specifying the regions of interest for each scale on the projection plane, our rendering framework can generate nonlinear sampling rays that smoothly project objects in a scene at multiple levels of detail onto a single image. We address two important issues with nonlinear camera projection. First, our streamline-based ray generation algorithm avoids undesired camera ray intersections, which often result in unexpected images. Second, in order to maintain camera ray coherence and preserve aesthetic quality, we create an interpolated 3D field that defines the contribution of each pinhole camera for determining ray orientations. The resulting multiscale camera has three main applications: (1) presenting hierarchical structure in a compact and continuous manner, (2) achieving focus+context visualization, and (3) creating fascinating and artistic images.
CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation—Viewing algorithms;

Keywords: multiscale views, camera model, levels of detail, visualization
*email: whhsu@ucdavis.edu   †email: ma@cs.ucdavis.edu   ‡email: correac@llnl.gov
1 Introduction
This project is motivated by an illustration created by artists at the Exploratorium in San Francisco. As shown in Figure 3, this illustration depicts both macro and micro perspectives of the human circulatory system in a continuous landscape across multiple scales. The seamless continuity between scales vividly illustrates how molecules form blood cells, how blood cells are distributed in a blood vessel, how the blood vessel connects to a human heart, and where the heart is located in a human body. The astonishment and fascination evoked by the illustration, along with its high educative value, won it first place in the illustration category of the 2008 U.S. National Science Foundation and Science Magazine Visualization Challenge.

In scientific studies, it is often desirable to illustrate complex physical phenomena, organic structures, and man-made objects. Many of these physical structures are hierarchical in nature. Static multiscale illustrations are frequently used to convey hierarchical structures, such as the anatomy of organ systems and the design of engineered architectures, as shown in Figure 2. Large terrain data, on the other hand, is usually encapsulated in explorable, navigable interactive systems [McCrae et al. 2009; Google 2010]. Animations are also helpful for presenting extremely large datasets which are difficult for users to navigate directly, such as those consisting of the solar system and the universe [Cosmic Voyage 1996; The Known Universe 2009].

Figure 2: Examples of multiscale illustrations. (a) A hand-drawn illustration by Carol Donner [Bloom et al. 1988], depicting the hierarchical structure of the human nervous system. (b) A multiscale illustration of the Eiffel Tower using a zoom-in metaphor.

Figure 3: "Zoom Into the Human Bloodstream" by Linda Nye and the Exploratorium Visualization Laboratory [Nye 2008]. With permission from Exploratorium, San Francisco, CA, USA. All rights reserved.

Out of the above scenarios, the creation of multiscale illustrations is the most challenging because there is no direct way to project a complex hierarchical 3D scene to a 2D image. In this paper, we focus on generating continuous multiscale images. Unlike traditional multiscale illustrations, in which each scale is displayed separately from others (Figure 2(b)), a continuous multiscale image shows objects at several levels of detail with smooth object-space continuity between different scales. Although multiperspective rendering has received some attention [Yu et al. 2008], previous work has not specifically addressed seamless multiscale image rendering.

We introduce a rendering framework which generates and casts nonlinearly bent rays into a 3D scene, and projects multiple scales of interest onto a single image. Our multiscale rendering framework consists of a sequence of pinhole cameras, each of which captures a view of interest at a particular scale. The camera rays for each pixel in the final projected image are generated based on a user-defined image mask which specifies the interesting regions in each view. The rays are bent gradually from one scale to another to maintain object-space continuity as well as image-space smoothness. Our framework can be used on both polygon models and volumetric data. The resulting views are useful in many contexts. The most direct application is the presentation of complex hierarchical structures, as shown in Figure 1. Our multiscale camera can also achieve a focus+context effect, a technique frequently used in many visualization applications. Finally, we show that it can potentially create pictures that mimic artistic or impossible views of objects or scenes like the kind made by the artist M. C. Escher.
2 Related Work
Camera projection is fundamental to computer graphics, since every 3D scene uses a projection to form a 2D image. As a result, various camera models have been developed for different scene modeling and rendering purposes. The General Linear Camera Model (GLC) developed by Yu and McMillan [2004b] uses three rays to define the affine combination and to generate other sampling rays. GLC is capable of modeling most linear projections, including perspective, orthogonal, pushbroom, and crossed-slits projections. GLC was further extended in the Framework for Multiperspective Rendering [Yu and McMillan 2004a], which is achieved by combining piecewise GLCs that are constrained by an appropriate set of rules and interpolated rays that are weighted based on the distance to the closest fragment in a GLC region. Another type of camera model uses image surfaces to explicitly specify how sampling rays propagate in a scene [Glassner 2000; Brosz et al. 2007]. In Glassner's work, rays are defined by only two NURBS surfaces, whereas Brosz et al. used parametric surfaces to define a flexible viewing volume, and are thus able to accomplish nonlinear projections such as fisheye or cylindrical projections. However, although these camera models have employed different methods to construct view frusta to achieve either linear or nonlinear camera projections, their approaches focus on the manipulation of a single viewpoint and cannot achieve the multiscale projection that we desire.

Much work has been done on creating nonlinear camera projections. Wang et al. [2005] presented a camera lens technique based on geometric optics for volume rendering. Camera rays are refracted according to the selected lens type at the image plane before being shot into the 3D volume. But since the lens is put in front of the image plane, it can only achieve a limited magnification within a single viewing direction. Sudarsanam et al. [2008] created camera widgets that encapsulate specific aspects of nonlinear perspective changes and allow users to manipulate both the widgets and their image-space views. Instead of explicitly modifying camera rays, Mashio et al. [2010] introduced a technique that simulates non-perspective projection by deforming a scene based on camera parameters. But these two methods focus on revealing or magnifying multiple different interesting regions in a scene.

The Graph Camera developed by Popescu et al. [2009] introduces three basic construction operations on the frusta of planar pinhole cameras (PPC) to connect several viewing regions in a 3D scene. Similar to the Graph Camera, Cui et al. [2010] introduced the curved ray camera, which generates a bent view frustum based on a sequence of PPCs and provides C1 continuity at the connections between each PPC segment to circumvent occluders. Both the Graph Camera and the curved ray camera connect successive PPCs by first overlapping the frusta and then binding the piecewise trimmed frusta. However, since the PPCs are bound in a sequential way that one camera is placed after another, the piecewise viewing frustum is intrinsically diverging due to the nature of perspective projection. Although the Graph Camera supports frustum splitting and can possibly achieve a convergent viewing frustum by merging two PPCs with converging viewing directions, it is hard to set up PPCs in this way so as to obtain sufficient multiscale magnification; the curved ray camera only supports sequential frustum bending and can never achieve a convergent frustum, which is essential in creating multiscale effects. In short, their approaches focus on revealing hidden objects in a large 3D scene, whereas ours is designed for visualizing multiple levels of detail of the same object.

Figure 4: Dataflow of multiscale rendering. The process starts by setting up separate pinhole cameras for different scales of view and image masks which indicate interesting regions in each view. Image masks are merged into a single image and passed to the camera ray generator, along with camera information. Nonlinearly bent rays are generated accordingly and used to sample the scene.

Other types of multiscale or focus+context rendering include image-space or object-space deformation. Böttger et al. [2006] presented a technique for generating complex logarithmic views for visualizing flat vector data. A similar technique was later employed to visualize complex satellite and aerial imagery, showing details and context in a single seamless image [Böttger et al. 2008]. But their approaches are mainly designed for generating flat cartographic map projections. Focus+context effects for 3D data can also be achieved by distorting the object so as to magnify certain focal regions [Carpendale et al. 1996; Wang et al. 2008; Wang et al. 2011]. However, deforming objects can possibly lead to severe distortion if high magnification is demanded.
3 Multiscale Image Composition
A continuous multiscale image is composed of two types of regions. The first type consists of the ordinary perspective views at each scale of interest, and the other type consists of the transitions that smoothly connect views at different scales. An intuitive way to create views at multiple scales is to use several pinhole cameras, with each camera capturing a perspective view at a different scale. In order to render transitions between scales, a naive approach is to separately render each view and then blend them together using either an illustration metaphor (as shown in Figure 2(b)), or a seamless image-stitching algorithm, such as the graph cut [Kwatra et al. 2003] or Poisson image editing [Pérez et al. 2003]. Although the images created by these methods appear to be seamless, the underlying content is not continuous in object space, and thus can make it difficult for viewers to comprehend the true spatial relations between scales.

We introduce a multiscale rendering framework that generates camera rays nonlinearly cast through all scales of interest. Since the camera rays in our model are bent coherently and march consistently, the objects projected on the image are continuous in both image space and object space. Our approach starts by setting up several pinhole cameras for viewing different scales of interest, utilizing most users' ease and familiarity with manipulating ordinary pinhole cameras. Each camera produces an image of its view. In order to combine all such views to form a single multiscale image, we use a user-specified image mask to indicate regions of each view that the user would like to show in the final image. In other words, every camera projects only part of its view onto the final image space, based on a corresponding image mask as shown in Figure 4.

Figure 5: Through careful image mask and camera placement, camera rays can cast through multiple scales to capture features of interest.

In order to ensure consistency while projecting different camera viewpoints onto different parts of the image, we force all camera rays to originate from the first camera, which is the one that has the largest scale of view. The rest of the cameras define intermediate points that camera rays must pass through in order, from the largest to smallest scales. We use Bézier curves to connect the near clipping planes of two successive cameras, as described in the following section. Figure 5 depicts the resulting nonlinear viewing frustum based on the input pinhole cameras and image masks.

The gray regions between colored regions in Figure 5 are transition areas where camera rays need to gradually change between nearby colored regions to maintain ray coherence. Two things are worth noting when dealing with camera ray bending in transition regions:

Ray intersection. Camera rays that are bent at unequal degrees have a chance to intersect with one another. Simple ray-direction interpolation, such as linear interpolation between the two nearest preserved rays, can generate intersecting rays that might result in the same object being projected onto the image more than once.

Ray coherence. Adjacent camera rays must be coherent so as to avoid too much distortion in the resulting image. For example, camera rays which are emitted from the pixels in a horizontal scanline should maintain their spatial relationship after being bent.

The first issue is directly related to whether sampling rays can project the scene correctly. The second issue, if properly addressed, can minimize distortion and lead to an aesthetically appealing view. We discuss these two issues and our solutions in the next section.
4 Camera Ray Generation
In order to generate camera rays that seamlessly blend the views specified by the user, we introduce a streamline-based approach. A key step of this approach is the construction of a vector field based on the selected views. Rays are derived by tracing streamlines in this vector field. The process of generating the camera rays is as follows:

1. Initialize a set of guide curves, which connect a sequence of camera planes via Bézier curves.

2. Derive the complete vector field that best matches these curves via minimization.

3. Trace streamlines from each pixel in the image by integrating the vector field.

4. Each streamline defines the path that light traverses to generate a multiscale image.
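The four steps above can be sketched as a small driver. All names here are hypothetical stand-ins for the stages described in the text, not the authors' API; the degenerate lambdas merely exercise the control flow.

```python
def multiscale_rays(cameras, masks, init_curves, smooth_field, trace):
    """Outline of the four-step ray-generation process. The callables
    are stand-ins for the stages described in the text (all hypothetical):
      init_curves(cameras)        -> guide curves (step 1)
      smooth_field(curves, masks) -> vector field (step 2)
      trace(field, seed)          -> streamline for one pixel (steps 3-4)
    """
    curves = init_curves(cameras)
    field = smooth_field(curves, masks)
    # One streamline per pixel of camera 1's near-clipping plane.
    return [trace(field, seed) for seed in cameras[0]["pixels"]]

# Degenerate stand-ins just to exercise the control flow.
cams = [{"pixels": [(0, 0), (1, 0)]}]
rays = multiscale_rays(cams, masks=None,
                       init_curves=lambda c: "curves",
                       smooth_field=lambda c, m: "field",
                       trace=lambda f, s: [s])
```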
4.1 Camera Setup
The first step in our design is to set up the cameras that generate the multiscale image. As an input to our process, we define a series of cameras C_i, defined by a position E_i, a look-at vector V_i, a field of view f_i, an aspect ratio a_i and a near clipping distance d_i, for i ∈ {1, ..., N}. Thus, the near clipping plane of each camera C_i is defined by the point E_i + V_i d_i and the normal vector V_i, and the range is determined by the f_i and a_i. Each camera also has a user-defined binary mask I_{i,x,y} which indicates the preserved viewing regions in its pixel space. As a result, every single pixel has a position p_{i,x,y} and a ray vector defined by v_{i,x,y} = (p_{i,x,y} − E_i) / ‖p_{i,x,y} − E_i‖.
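A sketch of this camera setup, computing the near-plane pixel positions p_{i,x,y} and the normalized ray vectors v_{i,x,y} for one camera. The `up` vector and the grid resolution (nx, ny) are extra assumptions needed to lay out the pixel grid; they are not part of the paper's camera definition.

```python
import numpy as np

def camera_rays(E, V, up, fov_deg, aspect, d, nx, ny):
    """Per-pixel positions p on the near-clipping plane and unit ray
    directions v = (p - E)/||p - E|| for one pinhole camera, following
    the camera definition (E, V, f, a, d) in the text."""
    E, V = np.asarray(E, float), np.asarray(V, float)
    V = V / np.linalg.norm(V)
    right = np.cross(V, up); right = right / np.linalg.norm(right)
    upv = np.cross(right, V)
    half_h = d * np.tan(np.radians(fov_deg) / 2.0)   # near-plane half-height
    half_w = aspect * half_h
    center = E + V * d                               # near-plane center
    xs = np.linspace(-half_w, half_w, nx)
    ys = np.linspace(-half_h, half_h, ny)
    P = (center[None, None, :]
         + xs[None, :, None] * right[None, None, :]
         + ys[:, None, None] * upv[None, None, :])
    D = P - E
    D = D / np.linalg.norm(D, axis=-1, keepdims=True)
    return P, D

P, D = camera_rays(E=[0, 0, 0], V=[0, 0, 1], up=[0, 1, 0],
                   fov_deg=90, aspect=1.0, d=1.0, nx=3, ny=3)
```

The center pixel lands on the optical axis, so its ray direction equals the look-at vector.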
4.2 Multiscale Frustum Construction
As a preliminary step, we bind successive cameras so that the regions of interest are propagated smoothly along the multiple scales. In order to do that, we trace Bézier curves between two successive cameras. More specifically, given an ordering of the cameras C_1, ..., C_N, we bind two cameras C_j and C_{j+1} via Bézier curves B_{j,x,y}(t) for each pixel p_{j,x,y}, using the pixel positions and ray directions as their control points:

    B_{j,x,y}(t) = Σ_{k=0}^{3} b_{k,3}(t) Q_k    (1)

with

    Q_0 = p_{j,x,y}
    Q_1 = p_{j,x,y} + v_{j,x,y} β ‖p_{j,x,y} − p_{j+1,x,y}‖
    Q_2 = p_{j+1,x,y} − v_{j+1,x,y} β ‖p_{j,x,y} − p_{j+1,x,y}‖
    Q_3 = p_{j+1,x,y}

where β controls the distances between the first and latter two control points (according to our experiments, β = 0.2 gives sufficient smoothness) and b_{k,3} are the Bernstein polynomials of degree 3.
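Equation (1) with the control points above can be evaluated directly. This is a minimal sketch (the function name and the choice of straight-line test inputs are illustrative); β = 0.2 is the smoothness constant reported in the text.

```python
import numpy as np

def bezier_guide(p_j, v_j, p_j1, v_j1, beta=0.2):
    """Build the cubic Bezier curve B_{j,x,y}(t) of Eq. (1) from the pixel
    positions p and ray directions v of two successive cameras, using the
    control points Q_0..Q_3 defined in the text."""
    dist = np.linalg.norm(p_j - p_j1)
    Q = np.array([p_j,
                  p_j + v_j * beta * dist,
                  p_j1 - v_j1 * beta * dist,
                  p_j1])
    def B(t):
        # Bernstein polynomials of degree 3.
        b = np.array([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                      3 * t ** 2 * (1 - t), t ** 3])
        return b @ Q
    return B

# Collinear endpoints and directions give a straight guide curve.
B = bezier_guide(np.array([0., 0., 0.]), np.array([0., 0., 1.]),
                 np.array([0., 0., 5.]), np.array([0., 0., 1.]))
```

By construction the curve starts at p_{j,x,y}, ends at p_{j+1,x,y}, and leaves each near plane along the original ray directions, which is what makes the coupled frusta meet smoothly.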
Figure 6(a) illustrates a pinhole camera with its viewing frustum highlighted in red. Figure 6(b) shows the ray binding between C_j and C_{j+1}, where C_{j+1}'s rays are extended at the near-clipped end to couple with C_j's rays, ensuring that every ray originates from C_1. Figure 6(c) depicts ray coupling with the application of image masks. Note that a ray is curved to the next camera only when it originates in the region that is assigned to a descendant camera. In this figure, the red region indicates the preserved view for C_j, and the blue region indicates the preserved view for C_{j+1}. As a result, camera rays in the red region cast linearly, while rays in the blue region proceed from the near-clipping plane of C_j, march forward along Bézier curves, which are constructed based on the original ray directions, and arrive at C_{j+1}'s clipping plane. Rays then continue to proceed linearly to capture the view of C_{j+1}.
4.3 Streamlines as Camera Rays
The previous step defines a series of guiding curves that indicate how rays should be traced to realize the multiscale camera. However, we need to ensure that these rays do not intersect and that they are smoothly interpolated in the transition regions. To achieve this, we think of the problem of camera ray generation as tracing streamlines from an underlying vector field.

Figure 6: (a) A pinhole camera and its view frustum. (b) The rays of C_{j+1} are extended backward to couple with C_j using Bézier curves. Small dotted arrows illustrate control points, which are the original ray orientations of the two views. (c) Different portions of the rays are assigned to different views based on the image masks. The red region is marked as a preserved viewing region for C_j, blue is for C_{j+1}, and the gray region is the transition between the two. As a result, interesting features of the two views can be seamlessly shown in the same image.

Streamlines are a set of curves which depict the instantaneous tangents of an underlying vector field. A streamline shows the direction that the corresponding fluid element travels in the field at any point in space. One characteristic of streamlines is that any two given streamlines would never intersect each other as long as no critical points are present [Fay 1994]. If we treat camera rays as streamlines, we can make use of this characteristic to ensure that no camera ray intersections can occur.
4.4 Estimating The Vector Field
To derive streamlines for use as multiscale camera rays, we must first construct the underlying vector field. Since streamlines represent tracks along values in their vector field, the construction of the vector field determines the paths of the streamlines. The problem then becomes: given a set of pinhole cameras and an image mask, how can we construct a vector field whose streamlines are distributed identically to the original camera rays in preserved regions, and gradually transition between the interpolated regions?

To achieve this goal, we consider the Bézier curves as an initial set of streamlines corresponding to an initial guess of the underlying vector field. A complete curved ray R_{i,x,y} for camera C_i at pixel (x, y) is defined by a set of Bézier curves B_{j,x,y} where j = 1, ..., i−1, and a ray at p_{i,x,y} pointing toward v_{i,x,y}. These viewing rays are the guideline streamlines used to construct an intermediate ray field. This vector field X(x ∈ R^3) = (u(x), v(x), w(x)) is built by filling in the field with the tangent directions of R_{i,x,y} along the curves if the rays originate from the region assigned to camera C_i in its image mask (I_{i,x,y} = 1) for every set of curved rays R_1 to R_N, and zero elsewhere, i.e.,

    X(x) = { dR_{i,x,y}(t)/dt   if I_{i,x,y} = 1, where R_{i,x,y}(t) = x, for i = 1, ..., N
           { (0, 0, 0)          otherwise    (2)
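A discrete version of Eq. (2) might stamp the curve tangents into a voxel grid as follows. The polyline representation of rays, the finite-difference tangent, and the rounding to the nearest voxel are simplifying assumptions for illustration, not the paper's exact discretization.

```python
import numpy as np

def fill_initial_field(shape, rays, preserved):
    """Discrete sketch of Eq. (2): stamp the tangent dR/dt of every
    preserved curved ray into a voxel grid, zero elsewhere. `rays` is a
    list of polylines (k x 3 arrays in voxel coordinates); `preserved`
    flags which rays come from masked (I_{i,x,y} = 1) pixels."""
    X = np.zeros(shape + (3,))
    for ray, keep in zip(rays, preserved):
        if not keep:
            continue
        tangents = np.gradient(ray, axis=0)   # finite-difference dR/dt
        for p, t in zip(ray, tangents):
            i, j, k = np.round(p).astype(int)
            X[i, j, k] = t
    return X

# A single straight preserved ray marching along +z.
ray = np.array([[1.0, 1.0, z] for z in range(4)])
X = fill_initial_field((4, 4, 4), [ray], [True])
```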
The final vector field Y(x) can be derived by solving an optimization problem which ensures that the initial guesses are preserved, but also that the difference between neighboring points is minimal, resulting in smooth transitions across all the camera frusta.

There are two ways to formulate this optimization problem: as a direct vector field optimization problem, or as a two-step optimization, where we solve for a scalar field first and then fit a vector field using interpolation.
Figure 7: Comparison of the results from direct vector smoothing and with the use of the scalar field mask. Two camera views with the corresponding masks are shown in the upper left images. The mask for Camera 1 contains only a small portion of the image area, which means only the camera rays in this preserved region are used to fill the view vector fields. The side view in (a) and the rear view in (c) point out that the curved rays generated by using direct vector field smoothing deviate from the expected perspective projection around Camera 1 (highlighted in the blue circle) and cause undesirable distortion (highlighted in the blue arc). (b) and (d) illustrate the coherent rays generated by interpolating the vector fields based on the smoothed scalar field mask. The resulting images are shown in the bottom left.
Vector field optimization. In this case, we formulate the problem as solving an equation that minimizes the following energy function [Xu et al. 2010]:

    ε(Y) = ∫ ε_1(Y(x), X(x)) + µ ε_2(Y(x)) dx    (3)

where

    ε_1(Y(x), X(x)) = ‖X(x)‖² ‖Y(x) − X(x)‖²
    ε_2(Y = (u(x), v(x), w(x))) = ‖∇u(x)‖² + ‖∇v(x)‖² + ‖∇w(x)‖²

The term ε_1(Y(x), X(x)) guarantees that the resulting vector field Y has exactly the same value as the preliminary field X at points where X(x) is not zero, and the second term µ ε_2(Y(x)) is minimized when the neighboring vectors are identical, thus resulting in smooth transitions. The energy equation can be solved by the generalized diffusion equations described in the fluid flow literature [Hall and Porsching 1990], and is further discussed in [Ye et al. 2005; Xu et al. 2010].
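A much-simplified stand-in for such a diffusion solve on a voxel grid is Jacobi-style neighbor averaging with the nonzero samples of X held fixed; the cited solvers are considerably more sophisticated, so the sketch below only illustrates the idea of Eq. (3): keep the constrained vectors exact and smooth everything in between.

```python
import numpy as np

def smooth_field(X, iters=200):
    """Diffusion-style relaxation toward the minimizer of Eq. (3):
    repeatedly average each vector with its neighbors along the three
    axes (with wraparound via np.roll) while re-imposing the nonzero
    samples of the initial field X as hard constraints."""
    fixed = np.linalg.norm(X, axis=-1) > 0
    Y = X.copy()
    for _ in range(iters):
        avg = np.zeros_like(Y)
        for axis in range(3):
            avg += np.roll(Y, 1, axis) + np.roll(Y, -1, axis)
        Y = avg / 6.0
        Y[fixed] = X[fixed]          # keep preserved rays exact
    return Y

# Two fixed unit vectors; the gap between them is filled smoothly.
X = np.zeros((8, 1, 1, 3))
X[0, 0, 0] = [0, 0, 1.0]
X[7, 0, 0] = [0, 0, 1.0]
Y = smooth_field(X)
```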
Unfortunately, although the resulting vector field satisfies our requirements that streamlines pass through all views in preserved regions, and smoothly transition between preserved regions, streamlines in the transitional regions produced by this method may not preserve the characteristics of camera rays in a natural way. To take a simple example, suppose we use only one camera, but that we only assign a small portion of the image mask to the camera. Since only streamlines marked as originating from the image mask are used to fill the vector field, the remaining parts of the resulting smoothed vector field will have the same vectors as the boundary of the marked region and thus fail to project the expected view to the original camera. Figure 7 illustrates a case of two cameras. One can see that the preserved rays for Camera 1 in red only provide a small amount of view vector information. Thus, the neighboring vectors on top of the preserved views have false values that lead to a deviated perspective projection as shown in Figure 7(a). Figure 7(c) shows that the false vector values also cause a distorted distribution of the camera rays that often results in a twisted image.
Scalar Field Optimization. To avoid the distortions in the camera rays, we can alternatively pose the problem as fitting a membership function (scalar) and use this function to interpolate the initial vector fields. Similar to the construction of X, we construct an initial scalar field M(x ∈ ℝ³), as shown in Figure 8, where we fill in the mask values along each streamline:

M(x) = { i   if I_{i,x,y} = 1, where R_{i,x,y}(t) = x, for i = 1, …, N
       { 0   otherwise    (4)
The final scalar field N can be derived by optimizing a 1D version of Equation 3:

ε(N) = ∫ ε₁(N(x), M(x)) + µ ε₃(N(x)) dx    (5)

where

ε₃(N(x)) = |∇N(x)|²

and ε₁(N, M) is defined as before, but for a scalar field.
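Putting Equations (4) and (5) together, building and smoothing the scalar mask might be sketched as follows. This is hypothetical NumPy code, not the paper's implementation: preserved streamlines are assumed to be pre-rasterized into integer voxel coordinates, and the scalar diffusion is again approximated by Jacobi relaxation with periodic wrap at the borders.

```python
import numpy as np

def build_initial_mask(shape, preserved_voxels):
    """Equation (4): rasterize preserved streamlines into an initial mask M.

    preserved_voxels maps each 1-based camera index i to the integer voxel
    coordinates visited by its preserved rays (those with I_i,x,y = 1).
    Returns M and a boolean array marking the constrained cells.
    """
    M = np.zeros(shape)
    fixed = np.zeros(shape, dtype=bool)
    for i, voxels in preserved_voxels.items():
        for (x, y, z) in voxels:
            M[x, y, z] = i           # cells on a preserved ray hold index i
            fixed[x, y, z] = True
    return M, fixed

def smooth_scalar_mask(M, fixed, iters=200):
    """Equation (5): diffuse the camera-index mask into unconstrained cells,
    using the same Jacobi relaxation as for the vector field, but on a scalar."""
    N = M.astype(float).copy()
    for _ in range(iters):
        avg = np.zeros_like(N)
        for axis in (0, 1, 2):
            avg += np.roll(N, 1, axis=axis) + np.roll(N, -1, axis=axis)
        avg /= 6.0
        N[~fixed] = avg[~fixed]
    return N
```

Cells between two cameras end up with fractional indices (e.g. 1.5 midway between cameras 1 and 2), which is exactly what the interpolation in the next step relies on.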
To derive a final view vector field, we need to construct a set of intermediate vector fields X₁ to X_N representing the entire viewing frusta of C₁ to C_N, respectively, where each of the vector fields is defined by Xᵢ(x) = dR_{i,x,y}(t)/dt for R_{i,x,y}(t) = x. Based on the smoothed scalar field mask N and the intermediate view vector fields X₁ to X_N, we can construct a final view vector field using the following function:

Ŷ(x) = ( Σᵢ Xᵢ(x) ω(N(x), i) ) / ( Σᵢ ω(N(x), i) ),  i = 1, …, N    (6)
The term ω(N(x), i) is a weighting function that determines the weight of each view vector field according to the mask scalar field. For example, the following weighting function performs linear interpolation between the two neighboring view vectors:

ω(N(x), i) = { 1 − |N(x) − i|   if |N(x) − i| < 1
             { 0                otherwise

Other types of interpolation, such as monotonic cubic interpolation, may be applied to increase the smoothness at the boundaries of transitional regions, but increasing the smoothness at the boundaries implies a more rapid change in the interior of the regions. Figures 7(b) and (d) show the results of using a scalar field to compute the underlying vector field. Because it eliminates distortion and is more computationally efficient, we employ this method in our results.
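Equation (6) with the linear hat weights above can be sketched as follows. This is hypothetical NumPy code: the function names are assumptions, and `fields` is assumed to hold the pre-computed intermediate fields X₁ to X_N sampled on the same grid as the smoothed mask N.

```python
import numpy as np

def linear_weight(N, i):
    """omega(N(x), i): linear hat weight centered on camera index i."""
    d = np.abs(N - i)
    return np.where(d < 1.0, 1.0 - d, 0.0)

def blend_view_fields(N, fields):
    """Equation (6): blend the per-camera vector fields X_1..X_n using the
    smoothed scalar mask N (camera indices are 1-based, as in the paper)."""
    num = np.zeros(N.shape + (3,))
    den = np.zeros(N.shape)
    for i, X in enumerate(fields, start=1):
        w = linear_weight(N, i)
        num += X * w[..., None]       # weighted sum of view vectors
        den += w
    # guard against division by zero where no camera has support
    return num / np.maximum(den, 1e-12)[..., None]
```

At a cell with mask value 1.5, for instance, cameras 1 and 2 each receive weight 0.5, so the blended vector is the midpoint of the two view vectors, giving a smooth transition across the region between the two frusta.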
References
• TL;DR: Using generic interpolation machinery based on solving Poisson equations, a variety of novel tools are introduced for seamless editing of image regions as discussed by the authors, and the first set of tools permits the seamless...
Abstract: Using generic interpolation machinery based on solving Poisson equations, a variety of novel tools are introduced for seamless editing of image regions. The first set of tools permits the seamless ...
1,183 citations
• TL;DR: An information-theoretic framework for flow visualization with a special focus on streamline generation is presented, and it is shown that the framework can effectively visualize 2D and 3D flow data.
Abstract: The process of visualization can be seen as a visual communication channel where the input to the channel is the raw data, and the output is the result of a visualization algorithm. From this point of view, we can evaluate the effectiveness of visualization by measuring how much information in the original data is being communicated through the visual communication channel. In this paper, we present an information-theoretic framework for flow visualization with a special focus on streamline generation. In our framework, a vector field is modeled as a distribution of directions from which Shannon's entropy is used to measure the information content in the field. The effectiveness of the streamlines displayed in visualization can be measured by first constructing a new distribution of vectors derived from the existing streamlines, and then comparing this distribution with that of the original data set using the conditional entropy. The conditional entropy between these two distributions indicates how much information in the original data remains hidden after the selected streamlines are displayed. The quality of the visualization can be improved by progressively introducing new streamlines until the conditional entropy converges to a small value. We describe the key components of our framework with detailed analysis, and show that the framework can effectively visualize 2D and 3D flow data.
145 citations
• 11 May 2004 · TL;DR: The General Linear Camera (GLC) model as discussed by the authors unifies many previous camera models into a single representation and is capable of describing all perspective (pinhole), orthographic, and many multiperspective (including pushbroom and two-slit) cameras, as well as epipolar plane images.
Abstract: We present a General Linear Camera (GLC) model that unifies many previous camera models into a single representation. The GLC model is capable of describing all perspective (pinhole), orthographic, and many multiperspective (including pushbroom and two-slit) cameras, as well as epipolar plane images. It also includes three new and previously unexplored multiperspective linear cameras. Our GLC model is both general and linear in the sense that, given any vector space where rays are represented as points, it describes all 2D affine subspaces (planes) that can be formed by affine combinations of 3 rays. The incident radiance seen along the rays found on subregions of these 2D affine subspaces are a precise definition of a projected image of a 3D scene. The GLC model also provides an intuitive physical interpretation, which can be used to characterize real imaging systems. Finally, since the GLC model provides a complete description of all 2D affine subspaces, it can be used as a tool for first-order differential analysis of arbitrary (higher-order) multiperspective imaging systems.
133 citations
• Autodesk · TL;DR: The cubemap allows consistent navigation at various scales, as well as real-time collision detection without precomputation or prior knowledge of geometric structure, and is used to improve upon previous work on proximal object inspection (HoverCam).
Abstract: We present a comprehensive system for multiscale navigation of 3-dimensional scenes, and demonstrate our approach on multiscale datasets such as the Earth. Our system incorporates a novel image-based environment representation which we refer to as the cubemap. Our cubemap allows consistent navigation at various scales, as well as real-time collision detection without precomputation or prior knowledge of geometric structure. The cubemap is used to improve upon previous work on proximal object inspection (HoverCam), and we present an additional interaction technique for navigation which we call look-and-fly. We believe that our approach to the navigation of multiscale 3D environments offers greater flexibility and ease of use than mainstream applications such as Google Earth and Microsoft Virtual Earth, and we demonstrate our results with this system.
89 citations
• TL;DR: A feature-preserving data reduction and focus+context visualization method based on transfer function driven, continuous voxel repositioning and resampling techniques that avoids the need to smooth the transition between low- and high-resolution regions as often required by multiresolution methods.
Abstract: The growing sizes of volumetric data sets pose a great challenge for interactive visualization. In this paper, we present a feature-preserving data reduction and focus+context visualization method based on transfer function driven, continuous voxel repositioning and resampling techniques. Rendering reduced data can enhance interactivity. Focus+context visualization can show details of selected features in context on display devices with limited resolution. Our method utilizes the input transfer function to assign importance values to regularly partitioned regions of the volume data. According to user interaction, it can then magnify regions corresponding to the features of interest while compressing the rest by deforming the 3D mesh. The level of data reduction achieved is significant enough to improve overall efficiency. By using continuous deformation, our method avoids the need to smooth the transition between low- and high-resolution regions as often required by multiresolution methods. Furthermore, it is particularly attractive for focus+context visualization of multiple features. We demonstrate the effectiveness and efficiency of our method with several volume data sets from medical applications and scientific simulations.
59 citations
Frequently Asked Questions (2)
Q2. What have the authors stated for future works in "A rendering framework for multiscale views of 3d models" ?
There are a number of directions for future work especially to increase the usability and robustness of this new rendering technology.