Proceedings ArticleDOI

Catadioptric omnidirectional camera

17 Jun 1997 - pp. 482-488
TL;DR: A new camera with a hemispherical field of view is presented, along with results on the software generation of pure perspective images from an omnidirectional image, given any user-selected viewing direction and magnification.
Abstract: Conventional video cameras have limited fields of view that make them restrictive in a variety of vision applications. There are several ways to enhance the field of view of an imaging system. However, the entire imaging system must have a single effective viewpoint to enable the generation of pure perspective images from a sensed image. A new camera with a hemispherical field of view is presented. Two such cameras can be placed back-to-back, without violating the single viewpoint constraint, to arrive at a truly omnidirectional sensor. Results are presented on the software generation of pure perspective images from an omnidirectional image, given any user-selected viewing direction and magnification. The paper concludes with a discussion on the spatial resolution of the proposed camera.
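
The single-viewpoint property is what makes perspective re-projection a pure software step: every omnidirectional pixel corresponds to a ray through one point, so a virtual pinhole view can be resampled directly. The sketch below illustrates this for a paraboloidal mirror viewed orthographically (the design class the paper proposes); the function and parameter names (`h`, `R`, `cx`, `cy`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def perspective_from_omni(omni, h, cx, cy, R, look, up, fov_deg, out_size):
    """Resample a parabolic catadioptric image `omni` into a pure
    perspective view. (cx, cy): centre of the mirror's image circle;
    R: pixels per world unit; h: paraboloid parameter (all illustrative)."""
    out_h, out_w = out_size
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)   # virtual focal length
    z = np.asarray(look, float); z /= np.linalg.norm(z)
    x = np.cross(np.asarray(up, float), z); x /= np.linalg.norm(x)
    y = np.cross(z, x)                                  # camera basis
    out = np.zeros((out_h, out_w) + omni.shape[2:], dtype=omni.dtype)
    for i in range(out_h):
        for j in range(out_w):
            d = (j - out_w / 2) * x + (i - out_h / 2) * y + f * z
            d /= np.linalg.norm(d)                      # ray from the single viewpoint
            # Forward projection for a paraboloid viewed orthographically:
            # a unit direction d lands at h * (dx, dy) / (1 + dz).
            u = cx + R * h * d[0] / (1 + d[2])
            v = cy + R * h * d[1] / (1 + d[2])
            ui, vi = int(round(u)), int(round(v))
            if 0 <= vi < omni.shape[0] and 0 <= ui < omni.shape[1]:
                out[i, j] = omni[vi, ui]                # nearest-neighbour resample
    return out
```
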
Citations
Book
05 Mar 2004
TL;DR: Bringing together all aspects of mobile robotics into one volume, Introduction to Autonomous Mobile Robots can serve as a textbook or a working tool for beginning practitioners.
Abstract: Mobile robots range from the Mars Pathfinder mission's teleoperated Sojourner to the cleaning robots in the Paris Metro. This text offers students and other interested readers an introduction to the fundamentals of mobile robotics, spanning the mechanical, motor, sensory, perceptual, and cognitive layers the field comprises. The text focuses on mobility itself, offering an overview of the mechanisms that allow a mobile robot to move through a real world environment to perform its tasks, including locomotion, sensing, localization, and motion planning. It synthesizes material from such fields as kinematics, control theory, signal analysis, computer vision, information theory, artificial intelligence, and probability theory. The book presents the techniques and technology that enable mobility in a series of interacting modules. Each chapter treats a different aspect of mobility, as the book moves from low-level to high-level details. It covers all aspects of mobile robotics, including software and hardware design considerations, related technologies, and algorithmic techniques. This second edition has been revised and updated throughout, with 130 pages of new material on such topics as locomotion, perception, localization, and planning and navigation. Problem sets have been added at the end of each chapter. Bringing together all aspects of mobile robotics into one volume, Introduction to Autonomous Mobile Robots can serve as a textbook or a working tool for beginning practitioners.

2,414 citations


Cites background from "Catadioptric omnidirectional camera..."

  • ...The catadioptric camera system, now very popular in mobile robotics, offers an extremely wide field of view [114]....


Journal ArticleDOI
TL;DR: This paper derives the complete class of single-lens single-mirror catadioptric sensors that have a single viewpoint, and describes all of the solutions in detail, including the degenerate ones, with reference to many of the catadioptric systems that have been proposed in the literature.
Abstract: Conventional video cameras have limited fields of view which make them restrictive for certain applications in computational vision. A catadioptric sensor uses a combination of lenses and mirrors placed in a carefully arranged configuration to capture a much wider field of view. One important design goal for catadioptric sensors is choosing the shapes of the mirrors in a way that ensures that the complete catadioptric system has a single effective viewpoint. The reason a single viewpoint is so desirable is that it is a requirement for the generation of pure perspective images from the sensed images. In this paper, we derive the complete class of single-lens single-mirror catadioptric sensors that have a single viewpoint. We describe all of the solutions in detail, including the degenerate ones, with reference to many of the catadioptric systems that have been proposed in the literature. In addition, we derive a simple expression for the spatial resolution of a catadioptric sensor in terms of the resolution of the cameras used to construct it. Moreover, we include detailed analysis of the defocus blur caused by the use of a curved mirror in a catadioptric sensor.
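
The defining constraint in this derivation is that every mirror in the class is a conic section with one focus at the effective viewpoint; for the hyperboloid, a pinhole camera placed at the second focus sees the world as if from the first. A quick way to convince oneself of this focal reflection property is to check it numerically, as in the hedged 2D sketch below (the parameters `a` and `b` are arbitrary choices, not values from the paper).

```python
import numpy as np

a, b = 1.0, 0.8                        # hyperbola z^2/a^2 - r^2/b^2 = 1 (illustrative)
c = np.hypot(a, b)                     # distance from centre to each focus
F_view = np.array([0.0,  c])           # effective single viewpoint (focus 1)
F_cam  = np.array([0.0, -c])           # pinhole camera (focus 2)

for r in np.linspace(-2.0, 2.0, 9):
    z = a * np.sqrt(1.0 + (r / b) ** 2)          # point on the mirror branch
    P = np.array([r, z])
    n = np.array([-2 * r / b**2, 2 * z / a**2])  # gradient of the implicit surface
    n /= np.linalg.norm(n)
    d_in = (F_view - P) / np.linalg.norm(F_view - P)   # scene ray aimed at F_view
    d_out = d_in - 2.0 * np.dot(d_in, n) * n           # specular reflection
    to_cam = (F_cam - P) / np.linalg.norm(F_cam - P)
    dev = abs(d_out[0] * to_cam[1] - d_out[1] * to_cam[0])  # collinearity test
    print(f"r = {r:+.2f}   deviation from the F_cam line: {dev:.2e}")
```

The printed deviations are at machine precision for every mirror point, which is exactly the single-viewpoint condition the paper formalizes.
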

684 citations


Cites background from "Catadioptric omnidirectional camera..."

  • ...Moreover, in that setting the paraboloid does yield a practical omnidirectional sensor with a number of advantageous properties (Nayar, 1997b)....


  • ...One way to convert a thin lens to produce orthographic projection is to place an aperture at the focal point behind the lens (Nayar, 1997b)....


  • ...(30) As shown in (Nayar, 1997b) and Fig....


  • ...See, for example, (Rees, 1970; Charles et al., 1987; Nayar, 1988; Yagi and Kawato, 1990; Hong, 1991; Goshtasby and Gruver, 1993; Yamazawa et al., 1993; Bogner, 1995; Nalwa, 1996; Nayar, 1997a; Chahl and Srinivasan, 1997)....


  • ...Keywords: image formation, sensor design, sensor resolution, defocus blur, omnidirectional imaging, panoramic imaging...


Proceedings ArticleDOI
24 Apr 2000
TL;DR: This paper presents a new appearance-based place recognition system for topological localization that uses a panoramic vision system to sense the environment; it correctly classified between 87% and 98% of the input color images.
Abstract: This paper presents a new appearance-based place recognition system for topological localization. The method uses a panoramic vision system to sense the environment. Color images are classified in real-time based on nearest-neighbor learning, image histogram matching, and a simple voting scheme. The system has been evaluated with eight cross-sequence tests in four unmodified environments, three indoors and one outdoors. In all eight cases, the system successfully tracked the mobile robot's position. The system correctly classified between 87% and 98% of the input color images. For the remaining images, the system was either momentarily confused or uncertain, but never classified an image incorrectly.
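
The pipeline the abstract describes (histogram features, nearest-neighbour matching, a voting scheme) is simple enough to sketch end to end. The following is a minimal illustration under assumed design choices (joint RGB histograms, L1 distance, majority vote over a sliding window); the paper's actual feature set, matching rule, and thresholds differ.

```python
import numpy as np
from collections import Counter, deque

def color_histogram(img, bins=8):
    """Normalized joint RGB histogram of an HxWx3 uint8 image."""
    h, _ = np.histogramdd(img.reshape(-1, 3).astype(float),
                          bins=(bins,) * 3, range=((0, 256),) * 3)
    return (h / h.sum()).ravel()

def l1_distance(p, q):
    return np.abs(p - q).sum()

class PlaceRecognizer:
    """Nearest-neighbour place classifier with majority voting."""
    def __init__(self, window=5):
        self.db = []                        # (histogram, place label) pairs
        self.recent = deque(maxlen=window)  # labels of the last few frames

    def train(self, img, label):
        self.db.append((color_histogram(img), label))

    def classify(self, img):
        h = color_histogram(img)
        label = min(self.db, key=lambda e: l1_distance(h, e[0]))[1]
        self.recent.append(label)
        winner, n = Counter(self.recent).most_common(1)[0]
        # Report a place only when a clear majority agrees; else "uncertain",
        # mirroring the paper's confused/uncertain outcomes.
        return winner if n > len(self.recent) // 2 else None
```
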

629 citations


Cites background from "Catadioptric omnidirectional camera..."

  • ...The next section outlines the basic approach of the algorithm and describes the motivation for the use of a panoramic color vision system....


Journal ArticleDOI
TL;DR: This paper presents the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map-maintenance, and goal-directed steering.
Abstract: An active approach to sensing can provide the focused measurement capability over a wide field of view which allows correctly formulated simultaneous localization and map-building (SLAM) to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map-maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
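
Uncertainty-based measurement selection, one of the issues the abstract names, amounts to asking which feature measurement is expected to be most informative before steering the head toward it. One common formulation, sketched below under assumed linearized models, scores each visible feature by its predicted innovation covariance; this is an illustrative stand-in, not the paper's exact criterion.

```python
import numpy as np

def select_feature(P, H_list, R_list):
    """P: state covariance. H_list[i], R_list[i]: measurement Jacobian and
    noise covariance of candidate feature i (assumed given by the filter).
    Returns the index of the feature to fixate next."""
    scores = []
    for H, R in zip(H_list, R_list):
        S = H @ P @ H.T + R               # predicted innovation covariance
        scores.append(np.linalg.det(S))   # volume of predicted uncertainty
    return int(np.argmax(scores))         # measure where uncertainty is largest
```
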

602 citations


Cites background from "Catadioptric omnidirectional camera..."

  • ...On the other hand fish-eye lenses and catadioptric mirrors [16] have the disadvantage of variable and sometimes low angular resolution....


Proceedings ArticleDOI
Heung-Yeung Shum, Li-wei He
01 Jul 1999
TL;DR: An image-based system and process for rendering novel views of a real or synthesized 3D scene, based on a series of concentric mosaics depicting the scene, using multiple ray directions from the novel viewpoint.
Abstract: An image based system and process for rendering novel views of a real or synthesized 3D scene based on a series of concentric mosaics depicting the scene. In one embodiment, each concentric mosaic represents a collection of consecutive slit images of the surrounding 3D scene taken from a different viewpoint tangent to a circle on a plane within the scene. Novel views from viewpoints within circular regions of the aforementioned circle plane defined by the concentric mosaics are rendered using these concentric mosaics. Specifically, a slit image can be identified by a ray originating at its viewpoint on the circle plane and extending toward the longitudinal midline of the slit image. Each of the rays associated with the slit images needed to construct a novel view will either coincide with one of the rays associated with a previously captured slit image, or it will pass between two of the concentric circles on the circle plane. If it coincides, then the previously captured slit image associated with the coinciding ray can be used directly to construct part of the novel view. If the ray passes between two of the concentric circles of the plane, then the needed slit image is interpolated using the two previously captured slit images associated with the rays originating from the adjacent concentric circles that are parallel to the non-coinciding ray. If the objects in the 3D scene are close to the camera, depth correction is applied to reduce image distortion for pixels located above and below the circle plane. In another embodiment, a single camera is used to capture a sequence of images. Each image includes image data that has a ray direction associated therewith. To render an image at a novel viewpoint, multiple ray directions from the novel viewpoint are chosen. Image data is combined from the sequence of images by selecting image data that has a ray direction substantially aligning with the ray direction from the novel viewpoint.
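
The rendering rule in the abstract reduces to indexing rays by direction and by perpendicular distance from the circle centre, since every captured slit is tangent to one of the concentric circles. A hedged sketch of that lookup-and-interpolate step follows; the data layout (`mosaics`, `radii`) and the single-tangent-side simplification are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def nearest_slit(mosaic, phi):
    """mosaic: (n_angles, slit_height, 3); slit j was captured at angle
    2*pi*j/n_angles, looking along the counterclockwise tangent."""
    n = mosaic.shape[0]
    j = int(round((phi % (2 * np.pi)) / (2 * np.pi) * n)) % n
    return mosaic[j].astype(float)

def render_view(mosaics, radii, viewpoint, thetas):
    """Render one column per viewing angle theta from `viewpoint`, a 2D
    point on the circle plane inside the captured region."""
    columns = []
    for theta in thetas:
        dx, dy = np.cos(theta), np.sin(theta)
        # Signed perpendicular distance from the centre to this ray; every
        # captured ray tangent to circle k has distance radii[k].
        s = viewpoint[0] * dy - viewpoint[1] * dx
        s = max(s, 0.0)          # sketch simplification: one tangent side only
        phi = theta - np.pi / 2  # capture angle of a matching tangent slit
        k = int(np.searchsorted(radii, s))
        k0, k1 = max(k - 1, 0), min(k, len(radii) - 1)
        w = 0.0 if k0 == k1 else (s - radii[k0]) / (radii[k1] - radii[k0])
        # Coinciding ray: w collapses to 0; otherwise blend the two slits
        # from the bracketing circles, as the abstract describes.
        col = ((1 - w) * nearest_slit(mosaics[k0], phi)
               + w * nearest_slit(mosaics[k1], phi))
        columns.append(col)
    return np.stack(columns, axis=1)     # (slit_height, len(thetas), 3)
```
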

564 citations


Cites background from "Catadioptric omnidirectional camera..."

  • ...Capturing panoramas is even easier if omnidirectional cameras [Nay97], mirrors [Nal96], or fisheye lenses are used....


References
Book
01 Jan 1959
TL;DR: In this book, the authors discuss various topics in optics, such as geometrical theories, image-forming instruments, and the optics of metals and crystals, including interference, interferometers, and diffraction.
Abstract: The book comprises 15 chapters that discuss various topics in optics, such as geometrical theories, image-forming instruments, and the optics of metals and crystals. The text covers the elements of the theories of interference, interferometers, and diffraction, and treats several behaviors of light, including its diffraction by ultrasonic waves.

19,815 citations

Proceedings ArticleDOI
15 Sep 1995
TL;DR: An image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function is presented, along with a novel visible-surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections.
Abstract: Image-based rendering is a powerful new approach for generating real-time photorealistic computer graphics. It can provide convincing animations without an explicit geometric representation. We use the “plenoptic function” of Adelson and Bergen to provide a concise problem statement for image-based rendering paradigms, such as morphing and view interpolation. The plenoptic function is a parameterized function for describing everything that is visible from a given point in space. We present an image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function. In addition, we introduce a novel visible surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections.
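
Since the plenoptic function assigns a radiance value to every viewpoint and viewing direction, a cylindrical panorama is simply a regular sampling of it at one centre of projection. The sketch below shows the pixel-to-ray mapping such a sampling implies; the conventions (columns span azimuth, vertical perspective with focal length `f`) are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def pixel_to_ray(u, v, width, height, f):
    """Panorama pixel -> unit ray direction. Columns span azimuth theta in
    [0, 2*pi); rows are perspective vertically with focal length f (pixels)."""
    theta = 2 * np.pi * u / width
    y = (v - height / 2) / f                  # vertical tangent on the cylinder
    d = np.array([np.cos(theta), y, np.sin(theta)])
    return d / np.linalg.norm(d)

def ray_to_pixel(d, width, height, f):
    """Inverse mapping: project a unit direction back into the panorama."""
    theta = np.arctan2(d[2], d[0]) % (2 * np.pi)
    r = np.hypot(d[0], d[2])                  # distance from the cylinder axis
    v = height / 2 + f * d[1] / r
    return theta / (2 * np.pi) * width, v
```
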

1,555 citations

Proceedings ArticleDOI
Shenchang Eric Chen
15 Sep 1995
TL;DR: This paper presents a new approach which uses 360-degree cylindrical panoramic images to compose a virtual environment which includes viewing of an object from different directions and hit-testing through orientation-independent hot spots.
Abstract: Traditionally, virtual reality systems use 3D computer graphics to model and render virtual environments in real-time. This approach usually requires laborious modeling and expensive special purpose rendering hardware. The rendering quality and scene complexity are often limited because of the real-time constraint. This paper presents a new approach which uses 360-degree cylindrical panoramic images to compose a virtual environment. The panoramic image is digitally warped on-the-fly to simulate camera panning and zooming. The panoramic images can be created with computer rendering, specialized panoramic cameras or by "stitching" together overlapping photographs taken with a regular camera. Walking in a space is currently accomplished by "hopping" to different panoramic points. The image-based approach has been used in the commercial product QuickTime VR, a virtual reality extension to Apple Computer's QuickTime digital multimedia framework. The paper describes the architecture, the file format, the authoring process and the interactive players of the VR system. In addition to panoramic viewing, the system includes viewing of an object from different directions and hit-testing through orientation-independent hot spots.
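
The on-the-fly warp that simulates panning and zooming maps each pixel of a planar perspective view to a point on the cylinder and resamples the panorama there. Below is a minimal sketch under assumed conventions (vertical cylinder axis, nearest-neighbour sampling, illustrative names); it is not QuickTime VR's actual renderer.

```python
import numpy as np

def warp_view(pano, pan, fov_deg, out_w, out_h, f_cyl):
    """Resample a cylindrical panorama `pano` (H x W, columns = azimuth,
    vertical focal length f_cyl in pixels) into a planar perspective view
    at azimuth `pan` (radians) with horizontal field of view `fov_deg`
    (the zoom)."""
    H, W = pano.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)   # output focal length
    out = np.zeros((out_h, out_w) + pano.shape[2:], dtype=pano.dtype)
    for i in range(out_h):
        for j in range(out_w):
            x = (j - out_w / 2) / f                 # view-plane coordinates
            y = (i - out_h / 2) / f
            theta = pan + np.arctan(x)              # azimuth of this pixel's ray
            hgt = y / np.sqrt(1 + x * x)            # height on the unit cylinder
            u = int((theta % (2 * np.pi)) / (2 * np.pi) * W) % W
            v = int(round(H / 2 + f_cyl * hgt))
            if 0 <= v < H:
                out[i, j] = pano[v, u]              # nearest-neighbour resample
    return out
```
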

1,515 citations

Journal ArticleDOI
Kenro Miyamoto

238 citations

Proceedings ArticleDOI
09 Apr 1991
TL;DR: The result of an experiment in a typical indoor environment shows that image-based navigation is a feasible alternative to approaches using 3-D models and more complex model-based vision algorithms.
Abstract: A system that allows a robot to acquire a model of its environment and to use this model to navigate is described. The system maps the environment as a set of snapshots of the world taken at target locations. The robot uses an image-based local homing algorithm to navigate between neighboring target locations. Features of the approach include an imaging system that acquires a compact, 360-degree representation of the environment and an image-based, qualitative homing algorithm that allows the robot to navigate without explicitly inferring three-dimensional structure from the image. The results of an experiment in a typical indoor environment are described, and it is argued that image-based navigation is a feasible alternative to approaches using three-dimensional models.
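
A minimal version of snapshot-based homing can be written as a one-dimensional alignment problem: compare the robot's current 360-degree signature with the snapshot stored at the target and act on the best-matching shift. The sketch below does this with circular cross-correlation; it is a simplified stand-in for the paper's qualitative homing algorithm, and all names are illustrative.

```python
import numpy as np

def heading_correction(current, snapshot):
    """current, snapshot: 1D arrays with one intensity sample per bearing
    bin around the full circle. Returns the rotation (radians) that best
    aligns the current view with the stored snapshot."""
    n = len(snapshot)
    # Circular cross-correlation via the FFT; its peak is the best shift.
    corr = np.fft.ifft(np.fft.fft(snapshot) * np.conj(np.fft.fft(current))).real
    shift = int(np.argmax(corr))
    if shift > n // 2:
        shift -= n                       # wrap to a signed shift
    return 2 * np.pi * shift / n
```
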

201 citations