Proceedings ArticleDOI

A fast approach for omnidirectional surveillance with multiple virtual perspective views

01 Jul 2013-pp 1578-1585
TL;DR: A method of extracting multiple perspective views from a single omnidirectional image for real-time environments is proposed, and a performance improvement strategy is both presented and evaluated.
Abstract: In recent years, video surveillance combined with computer vision algorithms like object detection, tracking or automated behaviour analysis has become an important research topic. However, most of these systems depend on either fixed or remotely controlled narrow-angle cameras. When using the former, the area of coverage is extremely limited, while utilizing the latter leads to high failure rates and difficulties in camera calibration. In this paper, a method of extracting multiple perspective views from a single omnidirectional image for real-time environments is proposed. An example application of a ceiling-mounted camera setup is used to show the functional principle. Furthermore, a performance improvement strategy is both presented and evaluated.
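The paper's own projection model and implementation are not reproduced on this page. The sketch below illustrates the general look-up-table idea under an assumed ideal equidistant fisheye model (r = f·θ): the mapping from each pixel of a virtual perspective view into the omnidirectional image is computed once per view, so rendering each video frame reduces to table lookups. All function names and parameters are illustrative.

```python
import math

def build_lut(view_w, view_h, fov_deg, yaw, pitch, f_fish, cx, cy):
    """Precompute, for every pixel of a virtual perspective view, the
    corresponding source coordinates in an (ideal) equidistant fisheye
    image (r = f_fish * theta).  Built once per view; per-frame
    rendering is then a plain table lookup."""
    f_persp = (view_w / 2) / math.tan(math.radians(fov_deg) / 2)
    lut = []
    for v in range(view_h):
        for u in range(view_w):
            # Ray of this view pixel in camera coordinates (z forward).
            x, y, z = u - view_w / 2, v - view_h / 2, f_persp
            # Rotate the ray by the view's pitch (about x) and yaw (about y).
            y, z = (y * math.cos(pitch) - z * math.sin(pitch),
                    y * math.sin(pitch) + z * math.cos(pitch))
            x, z = (x * math.cos(yaw) + z * math.sin(yaw),
                    -x * math.sin(yaw) + z * math.cos(yaw))
            theta = math.atan2(math.hypot(x, y), z)  # angle off optical axis
            r = f_fish * theta                       # equidistant projection
            phi = math.atan2(y, x)
            lut.append((cx + r * math.cos(phi), cy + r * math.sin(phi)))
    return lut

def render(fisheye, fw, fh, lut):
    """Nearest-neighbour remap of a row-major image driven by the table."""
    out = []
    for sx, sy in lut:
        xi, yi = int(round(sx)), int(round(sy))
        out.append(fisheye[yi * fw + xi] if 0 <= xi < fw and 0 <= yi < fh else 0)
    return out
```

Because the table stores only source coordinates, several such views can be extracted from the same frame at little extra cost, which matches the multi-view, real-time goal described above.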
Citations
Proceedings ArticleDOI
20 Mar 2014
TL;DR: An automated video-based real-time surveillance system based on an omnidirectional camera and a multiple-object tracking technique for applications in the field of AAL (Ambient Assisted Living).
Abstract: In this paper, an automated video-based real-time surveillance system is presented. The system is based on an omnidirectional camera and a multiple-object tracking technique for applications in the field of AAL (Ambient Assisted Living). It is able to monitor a complete room with a single camera and, in addition, to track the people entering and leaving this room. The software was implemented for an embedded platform which acts as a smart sensor.

41 citations


Cites methods from "A fast approach for omnidirectional..."

  • ...[5],[6],[7] We present an improved method using both a generic camera model and a look-up-table-based accelerating method [1]....


  • ...Since a spherical image cannot directly be used due to high distortional effects, we have developed a real-time algorithm to extract multiple perspective views from a single omnidirectional image [1]....


Proceedings ArticleDOI
20 Mar 2016
TL;DR: A corresponding test data set is introduced, consisting of synthetically generated fisheye sequences, ranging from simple patterns to more complex scenes, as well as fisheye video sequences captured with an actual fisheye camera, facilitating both verification and evaluation of any adapted algorithms.
Abstract: In video surveillance as well as automotive applications, so-called fisheye cameras are often employed to capture a very wide angle of view. As such cameras depend on projections quite different from the classical perspective projection, the resulting fisheye image and video data correspondingly exhibits non-rectilinear image characteristics. Typical image and video processing algorithms, however, are not designed for these fisheye characteristics. To be able to develop and evaluate algorithms specifically adapted to fisheye images and videos, a corresponding test data set is therefore introduced in this paper. The first of those sequences were generated during the authors' own work on motion estimation for fish-eye videos and further sequences have gradually been added to create a more extensive collection. The data set now comprises synthetically generated fisheye sequences, ranging from simple patterns to more complex scenes, as well as fisheye video sequences captured with an actual fisheye camera. For the synthetic sequences, exact information on the lens employed is available, thus facilitating both verification and evaluation of any adapted algorithms. For the real-world sequences, we provide calibration data as well as the settings used during acquisition. The sequences are freely available via www.lms.lnt.de/fisheyedataset/.

37 citations


Additional excerpts

  • ...The data set now comprises synthetically generated fisheye sequences, ranging from simple patterns to more complex scenes, as well as fisheye video sequences captured with an actual fisheye camera....


Proceedings ArticleDOI
01 Jan 2019
TL;DR: A rotation-invariant training method for omnidirectional pedestrian detection that uses only randomly rotated perspective images without any additional annotation, achieving state-of-the-art performance on four public benchmarks.
Abstract: Recently, much progress has been made in pedestrian detection by utilizing the learning ability of convolutional neural networks (CNNs). However, due to the lack of omnidirectional images to train CNNs, few CNN-based detectors have been proposed for omnidirectional pedestrian detection. One significant difference between omnidirectional images and perspective images is that the appearance of pedestrians is rotated in omnidirectional images. A previous method has dealt with this by transforming omnidirectional images into perspective images in the test phase. However, this method has significant drawbacks, namely, the computational cost and the performance degradation caused by the transformation. To address this issue, we propose a rotation-invariant training method, which only uses randomly rotated perspective images without any additional annotation. With this method, existing large-scale datasets can be utilized. In the test phase, omnidirectional images can be used without the transformation. To group predicted bounding boxes, we also develop a bounding box refinement, which works better for our detector than non-maximum suppression. The proposed detector achieved state-of-the-art performance on four public benchmarks.
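As a rough illustration of the augmentation idea (not the authors' code), the sketch below applies a random in-plane rotation to a perspective training annotation about the image centre: the rotated axis-aligned hull is the box that would accompany the correspondingly rotated image. Helper names and signatures are hypothetical.

```python
import math
import random

def rotate_point(x, y, cx, cy, angle):
    """Rotate (x, y) about (cx, cy) by `angle` radians."""
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(angle) - dy * math.sin(angle),
            cy + dx * math.sin(angle) + dy * math.cos(angle))

def rotate_bbox(box, cx, cy, angle):
    """Axis-aligned hull of a rotated (x1, y1, x2, y2) box -- the
    annotation that accompanies a rotated training image."""
    x1, y1, x2, y2 = box
    corners = [rotate_point(x, y, cx, cy, angle)
               for x in (x1, x2) for y in (y1, y2)]
    xs, ys = zip(*corners)
    return (min(xs), min(ys), max(xs), max(ys))

def random_rotation_sample(box, w, h, rng=random):
    """Draw one augmentation: a uniformly random in-plane rotation,
    standing in for the rotated pedestrian appearance seen in
    omnidirectional images."""
    angle = rng.uniform(0.0, 2.0 * math.pi)
    return angle, rotate_bbox(box, w / 2, h / 2, angle)
```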

34 citations


Cites methods from "A fast approach for omnidirectional..."

  • ...In this baseline, first, omnidirectional images are transformed into perspective images using calibrated camera parameters as described in [16]....


Proceedings ArticleDOI
19 Aug 2016
TL;DR: This paper presents a motion estimation method for real-world fisheye videos by combining perspective projection with knowledge about the underlying fisheye projection, and introduces a re-mapping for ultra-wide angles which would otherwise lead to wrong motion compensation results at the fisheye boundary.
Abstract: Fisheye cameras prove a convenient means in surveillance and automotive applications as they provide a very wide field of view for capturing their surroundings. Contrary to typical rectilinear imagery, however, fisheye video sequences follow a different mapping from the world coordinates to the image plane which is not considered in standard video processing techniques. In this paper, we present a motion estimation method for real-world fisheye videos by combining perspective projection with knowledge about the underlying fisheye projection. The latter is obtained by camera calibration since actual lenses rarely follow exact models. Furthermore, we introduce a re-mapping for ultra-wide angles which would otherwise lead to wrong motion compensation results for the fisheye boundary. Both concepts extend an existing hybrid motion estimation method for equisolid fisheye video sequences that decides between traditional and fisheye block matching in a block-based manner. Compared to that method, the proposed calibration and re-mapping extensions yield gains of up to 0.58 dB in luminance PSNR for real-world fisheye video sequences. Overall gains amount to up to 3.32 dB compared to traditional block matching.
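The equisolid-angle mapping the abstract refers to is the standard model r = 2f·sin(θ/2). The following minimal sketch (illustrative names, not the paper's implementation) converts between equisolid fisheye radii and ideal perspective radii (r = f·tan θ), and flags the ultra-wide-angle case where no perspective counterpart exists, which is the situation the paper's re-mapping addresses.

```python
import math

def equisolid_r(theta, f):
    """Equisolid-angle fisheye mapping: r = 2 f sin(theta / 2)."""
    return 2.0 * f * math.sin(theta / 2.0)

def equisolid_theta(r, f):
    """Inverse mapping, valid for r <= 2 f (i.e. theta <= pi)."""
    return 2.0 * math.asin(r / (2.0 * f))

def to_perspective(r_fish, f):
    """Map a fisheye radius to the radius in an ideal perspective image
    with the same focal length (r_p = f * tan(theta)).  Incidence angles
    at or beyond 90 degrees have no perspective image -- the case that
    requires a dedicated re-mapping."""
    theta = equisolid_theta(r_fish, f)
    if theta >= math.pi / 2:
        raise ValueError("ultra-wide angle: no perspective projection")
    return f * math.tan(theta)
```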

10 citations


Cites background from "A fast approach for omnidirectional..."

  • ...Applications can range from automotive [1, 2, 3, 4] over video surveillance [5, 6] to image-based virtual reality [7, 8], thus creating quite a number of potential image and video processing tasks....


Posted Content
TL;DR: In this article, a person detector on omnidirectional images is proposed, which adapts the qualitative detection performance of a convolutional neural network based method, namely YOLOv2 to fish-eye images.
Abstract: We propose a person detector on omnidirectional images, an accurate method to generate minimal enclosing rectangles of persons. The basic idea is to adapt the qualitative detection performance of a convolutional neural network based method, namely YOLOv2, to fish-eye images. The design of our approach picks up the idea of a state-of-the-art object detector and highly overlapping areas of images with their regions of interest. This overlap reduces the number of false negatives. Based on the raw bounding boxes of the detector, we fine-tuned overlapping bounding boxes by three approaches: non-maximum suppression, soft non-maximum suppression and soft non-maximum suppression with Gaussian smoothing. The evaluation was done on the PIROPO database and our own annotated Flat dataset, supplemented with bounding boxes on omnidirectional images. We achieve an average precision of 64.4 % with YOLOv2 for the class person on PIROPO and 77.6 % on Flat. For this purpose, we fine-tuned the soft non-maximum suppression with Gaussian smoothing.
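Soft non-maximum suppression with Gaussian smoothing, mentioned above, decays the scores of overlapping boxes by exp(-IoU²/σ) instead of discarding them outright. A self-contained sketch of that technique follows; the parameter values are illustrative defaults, not the ones tuned in the paper.

```python
import math

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def soft_nms_gaussian(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Soft-NMS with Gaussian weighting: repeatedly pick the highest
    scoring box and decay the scores of the remaining boxes by
    exp(-iou^2 / sigma) rather than removing them."""
    boxes, scores = list(boxes), list(scores)
    keep = []
    while boxes:
        m = max(range(len(scores)), key=scores.__getitem__)
        best_box, best_score = boxes.pop(m), scores.pop(m)
        if best_score < score_thresh:
            break
        keep.append((best_box, best_score))
        scores = [s * math.exp(-iou(best_box, b) ** 2 / sigma)
                  for b, s in zip(boxes, scores)]
    return keep
```

A duplicate detection of a highly overlapping box thus survives with a strongly reduced score instead of vanishing, which is why the method tends to recover fewer false negatives than hard NMS.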

10 citations

References
01 Jan 1997
TL;DR: A software system that has the capability to generate at video rate (30 Hz), a large number of perspective and panoramic video streams from a single omnidirectional video input, using no more than a PC.
Abstract: Existing software systems for visual exploration are limited in their capabilities in that they are only applicable to static omnidirectional images. We present a software system that has the capability to generate at video rate (30 Hz), a large number of perspective and panoramic video streams from a single omnidirectional video input, using no more than a PC. This permits a remote user to create multiple perspective and panoramic views of a dynamic scene, where the parameters of each view (viewing direction, field of view, and magnification) are controlled via an interactive device such as a mouse, joystick or a head-tracker.
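One common way to reach such video-rate panoramic generation on a plain PC is a precomputed polar look-up table: each panorama column corresponds to an azimuth and each row to a radius in the omnidirectional image. The sketch below is a generic illustration under assumed centre and radius parameters, not the system described above.

```python
import math

def panorama_lut(pano_w, pano_h, cx, cy, r_min, r_max):
    """Table mapping each panorama pixel to polar coordinates in the
    omnidirectional image: column -> azimuth, row -> radius.  Built
    once, so per-frame unwrapping is a pure lookup."""
    lut = []
    for v in range(pano_h):
        r = r_min + (r_max - r_min) * v / max(1, pano_h - 1)
        for u in range(pano_w):
            a = 2.0 * math.pi * u / pano_w
            lut.append((cx + r * math.cos(a), cy + r * math.sin(a)))
    return lut
```

Interactive view parameters (direction, zoom) then only require rebuilding or re-indexing the table for the affected view, not re-deriving the projection per pixel per frame.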

110 citations


Additional excerpts

  • ...[2] developed a software system, which is able to generate multiple perspective and panoramic views from an omnidirectional image....


Proceedings ArticleDOI
23 Aug 2004
TL;DR: A generic camera model for cameras equipped with fish-eye lenses and a method for calibration of such cameras is proposed and the obtained results are promising.
Abstract: Fish-eye lenses are convenient in such computer vision applications where a very wide angle of view is needed. However, their use for measurement purposes is limited by the lack of an accurate, generic, and easy-to-use calibration procedure. We hence propose a generic camera model for cameras equipped with fish-eye lenses and a method for calibration of such cameras. The calibration is possible by using only one view of a planar calibration object but more views should be used for better results. The proposed calibration method was evaluated with real images and the obtained results are promising. The calibration software becomes commonly available at the author's Web page.
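The generic model referred to here represents the image radius as an odd polynomial in the incidence angle, which subsumes perspective, equidistant and other classical projections as special cases. A minimal sketch in that spirit follows; the coefficient values would come from calibration, and the function names are illustrative.

```python
import math

def kb_radius(theta, k):
    """Generic radially symmetric projection (Kannala-Brandt style):
    r(theta) = k1*theta + k2*theta^3 + k3*theta^5 + ...
    The coefficients k are obtained by calibration."""
    return sum(ki * theta ** (2 * i + 1) for i, ki in enumerate(k))

def kb_project(x, y, z, k, cx, cy):
    """Project a 3D point through the generic model to pixel
    coordinates (cx, cy is the principal point)."""
    theta = math.atan2(math.hypot(x, y), z)  # angle from optical axis
    r = kb_radius(theta, k)
    phi = math.atan2(y, x)                   # azimuth is preserved
    return (cx + r * math.cos(phi), cy + r * math.sin(phi))
```

With k = [f] the model reduces to the equidistant projection r = f·θ; higher-order terms absorb the deviation of a real lens from that ideal.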

97 citations


"A fast approach for omnidirectional..." refers methods in this paper

  • ...The model parameters were determined by using the camera calibration toolbox Juho Kannala provides on his homepage [8]....


  • ...Therefore we use the generic camera model proposed by Juho Kannala [6], [7]....


Proceedings ArticleDOI
16 Aug 1998
TL;DR: A visual surveillance and monitoring system based on omnidirectional imaging and view-dependent image generation from omnidirectional video streams using a hyperboloidal mirror, which has the advantage of less latency in looking around a large field of view.
Abstract: This paper describes a visual surveillance and monitoring system which is based on omnidirectional imaging and view-dependent image generation from omnidirectional video streams. While conventional visual surveillance and monitoring systems usually consist of either a number of fixed regular cameras or a mechanically controlled camera, the proposed system has a single omnidirectional video camera using a hyperboloidal mirror. This approach has the advantage of less latency in looking around a large field of view. In a prototype system developed, the viewing direction is determined by the viewer's head tracking, by using a mouse, or by moving-object tracking in the omnidirectional image.

74 citations

Proceedings ArticleDOI
10 Dec 2012
TL;DR: In this article, a panoramic image acquisition system is combined with a head-mounted display (HMD) for real-time 360° vision of the environment, where the omnidirectional images are transformed to fit the characteristics of HMD screens.
Abstract: Have you ever dreamed of having eyes in the back of your head? In this paper we present a novel display device called FlyVIZ which enables humans to experience a real-time 360° vision of their surroundings for the first time. To do so, we combine a panoramic image acquisition system (positioned on top of the user's head) with a Head-Mounted Display (HMD). The omnidirectional images are transformed to fit the characteristics of HMD screens. As a result, the user can see his/her surroundings, in real-time, with 360° images mapped into the HMD field-of-view. We foresee potential applications in different fields where augmented human capacity (an extended field-of-view) could benefit, such as surveillance, security, or entertainment. FlyVIZ could also be used in novel perception and neuroscience studies.

56 citations

Reference EntryDOI
13 Jun 2008
TL;DR: This article gives a discussion about the camera models and calibration methods used in the field, with the emphasis on conventional calibration methods in which the parameters of the camera model are determined by using images of a calibration object whose geometric properties are known.
Abstract: Geometric camera calibration is a prerequisite for making accurate geometric measurements from image data, and hence it is a fundamental task in computer vision. This article gives a discussion about the camera models and calibration methods used in the field. The emphasis is on conventional calibration methods in which the parameters of the camera model are determined by using images of a calibration object whose geometric properties are known. The presented techniques are illustrated with real calibration examples in which several different kinds of cameras are calibrated using a planar calibration object. Keywords: camera calibration; camera model; computer vision; photogrammetry; central camera; omnidirectional vision; catadioptric camera; fish-eye camera

52 citations


"A fast approach for omnidirectional..." refers methods in this paper

  • ...Therefore we use the generic camera model proposed by Juho Kannala [6], [7]....
