Author

Shenchang Eric Chen

Bio: Shenchang Eric Chen is an academic researcher from Apple Inc. The author has contributed to research in topics: Rendering (computer graphics) & Global illumination. The author has an h-index of 6 and has co-authored 6 publications receiving 2,001 citations.

Papers
Proceedings ArticleDOI
Shenchang Eric Chen
15 Sep 1995
TL;DR: This paper presents a new approach which uses 360-degree cylindrical panoramic images to compose a virtual environment which includes viewing of an object from different directions and hit-testing through orientation-independent hot spots.
Abstract: Traditionally, virtual reality systems use 3D computer graphics to model and render virtual environments in real-time. This approach usually requires laborious modeling and expensive special purpose rendering hardware. The rendering quality and scene complexity are often limited because of the real-time constraint. This paper presents a new approach which uses 360-degree cylindrical panoramic images to compose a virtual environment. The panoramic image is digitally warped on-the-fly to simulate camera panning and zooming. The panoramic images can be created with computer rendering, specialized panoramic cameras or by "stitching" together overlapping photographs taken with a regular camera. Walking in a space is currently accomplished by "hopping" to different panoramic points. The image-based approach has been used in the commercial product QuickTime VR, a virtual reality extension to Apple Computer's QuickTime digital multimedia framework. The paper describes the architecture, the file format, the authoring process and the interactive players of the VR system. In addition to panoramic viewing, the system includes viewing of an object from different directions and hit-testing through orientation-independent hot spots.

1,515 citations
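
The on-the-fly warp at the heart of this approach can be illustrated compactly. The sketch below shows a generic cylinder-to-plane mapping, not QuickTime VR's actual implementation; `v_extent`, the cylinder's assumed half-height in units of its radius, is an illustrative parameter.

```python
import numpy as np

def pano_to_view(pano, pan, hfov, out_w, out_h, v_extent=0.6):
    """Nearest-neighbour warp of a 360-degree cylindrical panorama
    (H x W x 3 array) into a perspective view.  `pan` is the heading
    in radians and `hfov` the horizontal field of view."""
    ph, pw = pano.shape[:2]
    f = (out_w / 2) / np.tan(hfov / 2)               # focal length in pixels
    xs = np.arange(out_w) - (out_w - 1) / 2
    ys = np.arange(out_h) - (out_h - 1) / 2
    xv, yv = np.meshgrid(xs, ys)
    theta = (pan + np.arctan2(xv, f)) % (2 * np.pi)  # heading of each view ray
    h = yv / np.sqrt(xv**2 + f**2)                   # height where each ray meets the cylinder
    u = (theta / (2 * np.pi) * pw).astype(int) % pw
    v = np.clip(((h / v_extent + 1) / 2 * (ph - 1)).astype(int), 0, ph - 1)
    return pano[v, u]
```

Panning changes `pan`, zooming changes `hfov`, and "hopping" to another panoramic point simply swaps `pano`; a real player would interpolate pixel values rather than take nearest neighbours.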

Proceedings ArticleDOI
01 Jul 1991
TL;DR: A new progressive global illumination method is presented which produces approximate images quickly, and then continues to systematically produce more accurate images, combining the existing methods of progressive refinement radiosity, Monte Carlo path tracing and light ray tracing.
Abstract: A new progressive global illumination method is presented which produces approximate images quickly, and then continues to systematically produce more accurate images. The method combines the existing methods of progressive refinement radiosity, Monte Carlo path tracing and light ray tracing. The method does not place any limitation on surface properties such as ideal Lambertian or mirror-like. To increase efficiency and accuracy, the new concepts of light source reclassification, caustics reconstruction, Monte Carlo path tracing with a radiosity preprocess and an interruptible radiosity solution are introduced. The method presents the user with most useful information about the scene as early as possible by reorganizing the method into a radiosity pass, a high frequency refinement pass and a low frequency refinement pass. The implementation of the method is demonstrated, and sample images are presented.

192 citations
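
The radiosity pass that anchors this method is the classic progressive-refinement shooting loop, which is what makes the solution interruptible: every iteration improves the image a little. A minimal sketch follows, assuming equal patch areas and precomputed form factors; the light source reclassification, caustics reconstruction, and Monte Carlo refinement passes are beyond a short example.

```python
import numpy as np

def progressive_radiosity(emission, reflectance, F, max_shots=1000, eps=1e-6):
    """Progressive-refinement radiosity: repeatedly 'shoot' the patch
    holding the most unshot radiosity.  `F[i, j]` is the form factor
    from patch i to patch j; with equal patch areas the area ratio in
    the shooting step drops out."""
    radiosity = emission.astype(float).copy()
    unshot = emission.astype(float).copy()
    for _ in range(max_shots):
        i = np.argmax(unshot)                    # brightest unshot patch
        if unshot[i] < eps:
            break
        delta = reflectance * F[i] * unshot[i]   # radiosity each patch receives
        radiosity += delta
        unshot += delta
        unshot[i] = 0.0                          # patch i has now shot everything
    return radiosity
```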

Patent
13 Oct 1992
TL;DR: In this article, a method and apparatus for generating perspective views of a scene is presented; with a viewing position at the center of the cylindrical environment map, different views can be obtained by rotating the viewing direction either horizontally or vertically.
Abstract: A method and apparatus for generating perspective views of a scene. With a viewing position at the center of the cylindrical environment map, different views can be obtained by rotating the viewing direction either horizontally or vertically. The horizontal construction method of the present invention generally involves the steps of: determining the portion of the cylindrical map to be viewed; vertically interpolating pixel values in the portion of the cylindrical map to be viewed and mapping to a viewing plane; and displaying the viewing plane. The vertical construction method of the present invention generally involves the steps of: determining the portion of the cylindrical map to be viewed; vertically interpolating pixel values in the portion of the cylindrical map to be viewed and mapping to a vertical plane; horizontally interpolating pixel values in the vertical plane and mapping to the viewing plane; and displaying the viewing plane.

131 citations
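
The horizontal construction lends itself to a short sketch: a pure horizontal rotation maps each output column to a single cylinder column, so only a 1-D vertical interpolation down that column is needed. This is my reading of the claimed steps, not the patented implementation, and `v_extent` is again an assumed parameter.

```python
import numpy as np

def view_from_cylinder(cyl, pan, hfov, out_w, out_h, v_extent=0.6):
    """Horizontal-construction sketch: per output column, pick the
    matching column of the H x W x C cylindrical map `cyl`, then
    linearly interpolate it vertically onto the viewing plane."""
    ph, pw = cyl.shape[:2]
    f = (out_w / 2) / np.tan(hfov / 2)
    ys = np.arange(out_h) - (out_h - 1) / 2
    out = np.empty((out_h, out_w, cyl.shape[2]), dtype=float)
    for x in range(out_w):
        dx = x - (out_w - 1) / 2
        theta = (pan + np.arctan2(dx, f)) % (2 * np.pi)
        col = cyl[:, int(theta / (2 * np.pi) * pw) % pw].astype(float)
        h = ys / np.sqrt(dx * dx + f * f)        # heights hit on the cylinder
        rows = np.clip((h / v_extent + 1) / 2 * (ph - 1), 0, ph - 1)
        lo = rows.astype(int)                    # vertical linear interpolation
        hi = np.minimum(lo + 1, ph - 1)
        w = (rows - lo)[:, None]
        out[:, x] = (1 - w) * col[lo] + w * col[hi]
    return out
```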

Proceedings ArticleDOI
Shenchang Eric Chen
01 Sep 1990
TL;DR: In this article, the authors proposed a new radiosity algorithm to incrementally render scenes with changing geometry and surface attributes, turning the traditional paradigm of modeling-then-rendering into rendering-while-modeling.
Abstract: Traditional radiosity methods can compute the illumination for a scene independent of the view position. However, if any part of the scene geometry is changed, the radiosity process will need to be repeated from scratch. Since the radiosity methods are generally expensive computationally, the traditional methods do not lend themselves to interactive uses where the geometry is constantly changing. This paper presents a new radiosity algorithm to incrementally render scenes with changing geometry and surface attributes. In other words, the question to be asked is "What is the minimum recomputation I need to do if I turn off a light source, change the color of a surface, add or move an object?" Because a modeling change generally exhibits some coherence and affects only parts of an image, the proposed method may drastically reduce the rendering time and therefore allow interactive manipulation. In addition, since the method is conducted incrementally and view-independently, the rendering process can start before the modeling process is completed. The traditional paradigm of modeling-then-rendering is changed to rendering-while-modeling. This approach not only gives the user better visual feedback but also effectively utilizes CPU time otherwise wasted in the modeling process.

98 citations
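
The "minimum recomputation" question has a neat expression in the shooting framework: propagate only the change. The sketch below illustrates the idea under the same equal-area, fixed-geometry assumptions as the progressive sketch earlier; a single patch's emission delta, which may be negative, is re-shot instead of re-solving from scratch. Moving geometry would also require updating the affected form factors, which this sketch omits.

```python
import numpy as np

def incremental_update(radiosity, reflectance, F, patch, d_emission,
                       max_shots=500, eps=1e-6):
    """Re-solve a converged radiosity solution after one patch's
    emission changes by `d_emission` (negative deltas propagate
    'turning a light off'), shooting only the change."""
    radiosity = radiosity.astype(float).copy()
    unshot = np.zeros_like(radiosity)
    unshot[patch] = d_emission
    radiosity[patch] += d_emission
    for _ in range(max_shots):
        i = np.argmax(np.abs(unshot))            # largest pending change
        if abs(unshot[i]) < eps:
            break
        delta = reflectance * F[i] * unshot[i]   # signed change each patch receives
        radiosity += delta
        unshot += delta
        unshot[i] = 0.0
    return radiosity
```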

Journal ArticleDOI
TL;DR: A new method for navigating through a prerendered 3D space and interacting with objects in that space, called ‘virtual navigation’, has been developed; it employs real-time video decompression for the display of, and interaction with, high-quality computer animation.
Abstract: The Virtual Museum is an interactive, electronic museum where users can move from room to room, and select any exhibit in a room for more detailed examination. The exhibits in the museum are educational, encompassing topics such as medicine, plant growth, the environment, and space. To facilitate interaction with the museum, a new method for navigating through a prerendered 3D space, and interacting with objects in that space has been developed, called ‘virtual navigation’. Virtual navigation employs real-time video decompression for the display of, and interaction with, high-quality computer animation. In addition, a representation for 3D objects in animated sequences is used which permits pixel-accurate, frame-accurate object picking, so that a viewer can select any 3D object to trigger movement within the 3D space, to examine an exhibit in animated form, or to play a digital movie or soundtrack. The use of precomputed video permits 3D navigation in a realistic-looking space, without requiring special-purpose graphics hardware.

69 citations
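
One plausible realization of pixel-accurate, frame-accurate picking is an item buffer: for every prerendered frame, store a same-sized map of per-pixel object IDs rendered from the same viewpoint. This is an assumption about the representation, not the paper's actual encoding, and the IDs and actions below are hypothetical.

```python
# Hypothetical per-frame item buffers: id_frames[k][y, x] holds the ID of
# the 3D object visible at pixel (x, y) of prerendered frame k (0 = none).
def pick_object(id_frames, frame_index, x, y):
    """Map a click on frame `frame_index` at pixel (x, y) to an object ID."""
    return int(id_frames[frame_index][y, x])

# Hypothetical bindings: some objects trigger movement, others play media.
actions = {3: "hop:west-gallery", 7: "play:plant-growth.mov"}

def handle_click(id_frames, frame_index, x, y):
    obj = pick_object(id_frames, frame_index, x, y)
    return actions.get(obj)   # None when the click hits the background
```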


Cited by
Proceedings ArticleDOI
01 Aug 1996
TL;DR: This paper describes a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views, and describes a compression system that compresses the generated light fields by more than a factor of 100:1 with very little loss of fidelity.
Abstract: A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Techniques have also been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. In this paper, we describe a simple and robust method for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images. The key to this technique lies in interpreting the input images as 2D slices of a 4D function, the light field. This function completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. We describe a sampled representation for light fields that allows for both efficient creation and display of inward and outward looking views. We have created light fields from large arrays of both rendered and digitized images. The latter are acquired using a video camera mounted on a computer-controlled gantry. Once a light field has been created, new views may be constructed in real time by extracting slices in appropriate directions. Since the success of the method depends on having a high sample rate, we describe a compression system that is able to compress the light fields we have generated by more than a factor of 100:1 with very little loss of fidelity. We also address the issues of antialiasing during creation, and resampling during slice extraction. CR Categories: I.3.2 [Computer Graphics]: Picture/Image Generation — Digitizing and scanning, Viewing algorithms; I.4.2 [Computer Graphics]: Compression — Approximate methods. Additional keywords: image-based rendering, light field, holographic stereogram, vector quantization, epipolar analysis.

4,426 citations
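
The "2D slices of a 4D function" idea is easy to make concrete. In the sketch below the light field is a dense 5-D array (two plane coordinates each, plus color), a toy stand-in for the paper's vector-quantized representation; rendering a new view is then pure resampling. The quadrilinear interpolation and compression of the real system are omitted.

```python
import numpy as np

def sample_ray(lf, u, v, s, t):
    """Nearest-neighbour lookup of one ray in a discretized light field.
    lf has shape (U, V, S, T, 3): radiance indexed by where the ray
    crosses the camera (u, v) plane and the focal (s, t) plane, with
    each coordinate normalized to [0, 1]."""
    U, V, S, T, _ = lf.shape
    ui = min(int(u * U), U - 1)
    vi = min(int(v * V), V - 1)
    si = min(int(s * S), S - 1)
    ti = min(int(t * T), T - 1)
    return lf[ui, vi, si, ti]

def render_view(lf, cam_uv, out_res):
    """New pinhole view from camera-plane position (u, v): every output
    pixel corresponds to one (s, t) sample, so rendering is a slice."""
    u, v = cam_uv
    out = np.empty((out_res, out_res, 3))
    for j in range(out_res):
        for i in range(out_res):
            out[j, i] = sample_ray(lf, u, v, i / out_res, j / out_res)
    return out
```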

Book
30 Sep 2010
TL;DR: Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images and takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene.
Abstract: Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year-old remains elusive. Why is computer vision such a challenging problem and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques. Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter with a heavy emphasis on testing algorithms and containing numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/. Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.

4,146 citations

Proceedings ArticleDOI
01 Aug 1996
TL;DR: A new method is presented for capturing the complete appearance of both synthetic and real world objects and scenes, representing this information, and then using this representation to render images of the object from new camera positions.
Abstract: This paper discusses a new method for capturing the complete appearance of both synthetic and real world objects and scenes, representing this information, and then using this representation to render images of the object from new camera positions. Unlike the shape capture process traditionally used in computer vision and the rendering process traditionally used in computer graphics, our approach does not rely on geometric representations. Instead we sample and reconstruct a 4D function, which we call a Lumigraph. The Lumigraph is a subset of the complete plenoptic function that describes the flow of light at all positions in all directions. With the Lumigraph, new images of the object can be generated very quickly, independent of the geometric or illumination complexity of the scene or object. The paper discusses a complete working system including the capture of samples, the construction of the Lumigraph, and the subsequent rendering of images from this new representation.

2,986 citations

Proceedings ArticleDOI
03 Aug 1997
TL;DR: This work presents a method of recovering high dynamic range radiance maps from conventionally acquired photographs, discusses its applicability to many areas of computer graphics involving digitized photographs, including image-based modeling, image compositing, and image processing, and demonstrates a few applications of high dynamic range radiance maps.
Abstract: We present a method of recovering high dynamic range radiance maps from photographs taken with conventional imaging equipment. In our method, multiple photographs of the scene are taken with different amounts of exposure. Our algorithm uses these differently exposed photographs to recover the response function of the imaging process, up to a factor of scale, using the assumption of reciprocity. With the known response function, the algorithm can fuse the multiple photographs into a single, high dynamic range radiance map whose pixel values are proportional to the true radiance values in the scene. We demonstrate our method on images acquired with both photochemical and digital imaging processes. We discuss how this work is applicable in many areas of computer graphics involving digitized photographs, including image-based modeling, image compositing, and image processing. Lastly, we demonstrate a few applications of having high dynamic range radiance maps, such as synthesizing realistic motion blur and simulating the response of the human visual system.

2,967 citations
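
The fusion step has a compact closed form once the response function is known: ln E = Σ w(Z)(g(Z) − ln Δt) / Σ w(Z), averaged over the exposures. The sketch below implements that weighted average for a single channel; recovering g itself, the least-squares part of the method, is not shown, and the hat-shaped weighting is one common choice.

```python
import numpy as np

def fuse_exposures(images, exposure_times, g, w=None):
    """Fuse differently exposed photos into a radiance map, given an
    already-recovered response curve g, where g[z] is the log exposure
    producing pixel value z.  `images` is a list of uint8 arrays of the
    same shape; returns the radiance map E."""
    if w is None:  # hat weights: trust mid-range pixels, distrust extremes
        w = np.minimum(np.arange(256), 255 - np.arange(256)).astype(float)
    num = np.zeros(images[0].shape, dtype=float)
    den = np.zeros_like(num)
    for img, dt in zip(images, exposure_times):
        wz = w[img]
        num += wz * (g[img] - np.log(dt))   # per-pixel ln E contribution
        den += wz
    return np.exp(num / np.maximum(den, 1e-8))
```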

Journal ArticleDOI
TL;DR: This work formulates stitching as a multi-image matching problem, using invariant local features to find matches between all of the images, and is insensitive to the ordering, orientation, scale and illumination of the input images.
Abstract: This paper concerns the problem of fully automated panoramic image stitching. Though the 1D problem (single axis of rotation) is well studied, 2D or multi-row stitching is more difficult. Previous approaches have used human input or restrictions on the image sequence in order to establish matching images. In this work, we formulate stitching as a multi-image matching problem, and use invariant local features to find matches between all of the images. Because of this our method is insensitive to the ordering, orientation, scale and illumination of the input images. It is also insensitive to noise images that are not part of a panorama, and can recognise multiple panoramas in an unordered image dataset. In addition to providing more detail, this paper extends our previous work in the area (Brown and Lowe, 2003) by introducing gain compensation and automatic straightening steps.

2,550 citations
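
The pairwise matching at the core of this pipeline can be sketched with off-the-shelf components. The snippet below uses OpenCV's SIFT with Lowe's ratio test and a RANSAC homography, which mirrors the paper's feature-matching stage in spirit; gain compensation, bundle adjustment, straightening, and multi-panorama grouping are not shown, and the thresholds are illustrative.

```python
import cv2
import numpy as np

def match_images(img_a, img_b, ratio=0.75, min_matches=20):
    """Decide whether two images overlap and, if so, estimate the
    homography between them from invariant local feature matches."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < ratio * n.distance]       # Lowe's ratio test
    if len(good) < min_matches:
        return None          # likely a noise image or a different panorama
    src = np.float32([kp_a[m.queryIdx].pt for m in good])
    dst = np.float32([kp_b[m.trainIdx].pt for m in good])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```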