Book Chapter

Camera Calibration and Navigation in Networks of Rotating Cameras

TL;DR: This paper introduces a method of camera calibration and navigation based on continuous tracking, which requires minimal human involvement and allows the camera pose to be calculated recursively in real time on the basis of the current and previous camera images and the previous pose.
Abstract: Camera calibration is one of the basic problems of intelligent video analysis in networks of multiple cameras with changeable pan and tilt (PT). Traditional calibration methods give satisfactory results, but are labour-intensive. In this paper we introduce a method of camera calibration and navigation based on continuous tracking, which requires minimal human involvement. After an initial pre-calibration, it allows the camera pose to be calculated recursively in real time from the current and previous camera images and the previous pose. The method is suitable when multiple coplanar points are shared between the views of neighbouring cameras, which is often the case in video surveillance systems.
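The recursive pose update can be illustrated with a minimal sketch. For a camera that only pans and tilts (pure rotation), the homography between consecutive frames satisfies H ~ K R K^-1 up to scale, so the relative rotation can be recovered and composed with the previous pose. This is a simplified stand-in, not the paper's full method: the function names and the pure-rotation assumption are ours, and estimating H from tracked coplanar points is not shown.

```python
import numpy as np

def relative_rotation(H, K):
    """For a purely rotating pan-tilt camera, the inter-frame homography
    is H ~ K R K^-1 (up to scale); recover R and re-orthogonalise."""
    R = np.linalg.inv(K) @ H @ K
    U, _, Vt = np.linalg.svd(R)   # nearest rotation in Frobenius norm
    R = U @ Vt
    if np.linalg.det(R) < 0:      # guard against a reflection
        R = -R
    return R

def update_pose(R_prev, H, K):
    """Recursive update: compose the relative rotation with the previous pose."""
    return relative_rotation(H, K) @ R_prev
```

Because each new pose depends only on the previous pose and the latest inter-frame homography, the update runs in real time, matching the recursive scheme described in the abstract.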
Citations
Book Chapter
19 Mar 2018
TL;DR: The aim of the study is to present an overview of the SAVA system, which enables real-time identification and classification of such behaviors as walking, running, sitting down, jumping, lying, getting up, bending, squatting, waving, and kicking.
Abstract: The intelligent video monitoring system SAVA has been implemented as a prototype at the 9th Technology Readiness Level. The source of data are video cameras located in public space that provide HD video streams. The aim of the study is to present an overview of the SAVA system, which enables real-time identification and classification of such behaviors as walking, running, sitting down, jumping, lying, getting up, bending, squatting, waving, and kicking. It can also identify interactions between persons, such as greeting, passing, hugging, pushing, and fighting. The system has a module-based architecture composed of the following modules: acquisition, compression, path detection, path analysis, motion description, and action recognition. The output of these modules is a recognized behavior or interaction. The system achieves a classification correctness of 80% when there are more than ten classes.

7 citations

Book Chapter
14 Mar 2016
TL;DR: This work proposes to solve the problem of crime suspect identification by reconstructing a three-dimensional mesh that can be presented to an observer, who can then identify the suspect from accumulated rather than fragmented information while choosing any angle of observation.
Abstract: The growing importance and prevalence of video surveillance systems bring new possibilities in the area of crime suspect identification. While suspects can be recognized in video recordings, it is often a difficult task, because in most cases parts of the suspect’s face are occluded. Even if there are multiple cameras, and the recordings are long enough to expose the entirety of the suspect’s face, it is challenging for an observer to accumulate information from different cameras and frames. We propose to solve this problem by reconstructing a three-dimensional mesh that can be presented to an observer, who can then identify the suspect from accumulated rather than fragmented information while choosing any angle of observation. Our approach is based on the extraction of anthropological features, so that even with imperfect recordings the most important features for facial recognition are preserved, while those not registered can be supplemented with a generic facial surface.

7 citations


Cites methods from "Camera Calibration and Navigation in Networks of Rotating Cameras"

  • ...To deal with such issues, as first step of suggested approach, we employ a navigation algorithm previously described in [2]....


Book Chapter
07 Apr 2021
TL;DR: In this paper, the authors propose a system for automatic detection of emergency landing sites for horizontally landing UAVs, which combines classic computer vision algorithms with novel deep-learning segmentation methods based on a U-Net inspired network architecture.
Abstract: This article proposes a system for automatic detection of emergency landing sites for horizontally landing unmanned aerial vehicles (UAVs). The presented solution combines classic computer vision algorithms with novel segmentation methods based on deep learning, using a U-Net inspired network architecture. The system uses a single nadir camera mounted on a UAV and an energy-efficient compute module capable of highly parallelized calculations.

2 citations
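The U-Net family of architectures mentioned above is characterised by an encoder-decoder shape with skip connections that concatenate encoder features into the decoder. The toy NumPy sketch below is illustrative only (it is not the article's network and has no learned weights); it shows just that shape logic:

```python
import numpy as np

def down(x):
    """2x2 max-pool: a toy stand-in for one U-Net encoder stage."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def up(x):
    """Nearest-neighbour 2x upsampling: a toy decoder stage."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_shapes(x):
    """Illustrate the U-shape: downsample, then upsample and concatenate
    the skip connection along the channel axis (H, W, C_dec + C_skip)."""
    skip = x                      # features saved before downsampling
    bottleneck = down(x)          # coarse, low-resolution representation
    decoded = up(bottleneck)      # back to the input resolution
    return np.concatenate([decoded, skip], axis=-1)
```

In a real network the pool/upsample stages are interleaved with learned convolutions; the skip connections are what let the decoder recover the fine spatial detail needed for pixel-accurate segmentation of landing sites.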

References
Journal Article
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form, providing the basis for an automatic system that can solve the Location Determination Problem under difficult viewing conditions.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing conditions.

23,396 citations
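The RANSAC loop itself is compact: repeatedly fit a model to a minimal random sample, then keep the hypothesis with the largest consensus set of inliers. A minimal sketch for robust 2D line fitting follows (the function name and thresholds are illustrative, not from the paper):

```python
import numpy as np

def ransac_line(points, iters=200, thresh=0.1, rng=None):
    """Fit y = a*x + b to an (N, 2) array of points containing gross
    outliers. Minimal sample size is 2 (the smallest set that defines
    a line); the best consensus set is refit by least squares."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue                      # vertical sample: skip hypothesis
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = resid < thresh          # consensus set of this hypothesis
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares refit on the winning consensus set
    a, b = np.polyfit(points[best_inliers, 0], points[best_inliers, 1], 1)
    return a, b, best_inliers
```

The same skeleton applies to the LDP: only the minimal sample size, the model-fitting step, and the residual change.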

Journal Article
Roger Y. Tsai
01 Aug 1987
TL;DR: In this paper, a two-stage technique for 3D camera calibration using TV cameras and lenses is described, aimed at efficient computation of camera external position and orientation relative to the object reference coordinate system, as well as the effective focal length, radial lens distortion, and image scanning parameters.
Abstract: A new technique for three-dimensional (3D) camera calibration for machine vision metrology using off-the-shelf TV cameras and lenses is described. The two-stage technique is aimed at efficient computation of camera external position and orientation relative to the object reference coordinate system, as well as the effective focal length, radial lens distortion, and image scanning parameters. The two-stage technique has advantages in terms of accuracy, speed, and versatility over the existing state of the art, of which a critical review is given in the beginning. A theoretical framework is established, supported by comprehensive proof in five appendixes, and may pave the way for future research on 3D robotics vision. Test results using real data are described; both accuracy and speed are reported, and the experimental results are analyzed and compared with theoretical prediction. Recent effort indicates that with slight modification, the two-stage calibration can be done in real time.

5,940 citations
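Tsai's model includes radial lens distortion, and the citing paper notes that the first-order coefficient k1 alone gives sufficient accuracy. A small sketch of that first-order model, with an iterative inverse, follows (function names and parameter values are illustrative):

```python
import numpy as np

def distort(xy, k1):
    """First-order radial distortion in normalised image coordinates:
    x_d = x * (1 + k1 * r^2), with r^2 = x^2 + y^2."""
    r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2)

def undistort(xy_d, k1, iters=10):
    """Invert the model by fixed-point iteration: repeatedly divide the
    distorted point by the distortion factor evaluated at the current
    estimate. Converges quickly for the small k1*r^2 typical of lenses."""
    xy = np.array(xy_d, dtype=float)
    for _ in range(iters):
        r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
        xy = xy_d / (1.0 + k1 * r2)
    return xy
```

There is no closed-form inverse of the polynomial distortion model, which is why undistortion is iterative here; higher-order terms (k2, k3, tangential coefficients) extend the same pattern.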

Book
03 Jan 1992

5,816 citations


"Camera Calibration and Navigation in Networks of Rotating Cameras" refers background or methods in this paper

  • ...It has been proven, however, that considering only first order distortion parameter k1 gives sufficient accuracy [10]....


  • ...Traditional camera calibration methods [10, 2] require placing 3D reference points or markers in the view of the cameras, which is labour intensive, especially in the case of exterior cameras....


Proceedings Article
23 Aug 2004
TL;DR: An efficient adaptive algorithm using a Gaussian mixture probability density is developed, with recursive equations used to constantly update the parameters and to simultaneously select the appropriate number of components for each pixel.
Abstract: Background subtraction is a common computer vision task. We analyze the usual pixel-level approach. We develop an efficient adaptive algorithm using a Gaussian mixture probability density. Recursive equations are used to constantly update the parameters and to simultaneously select the appropriate number of components for each pixel.

2,045 citations
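A toy, single-pixel rendition of the idea — recursive parameter updates plus pruning, so that the number of Gaussian components adapts per pixel — might look as follows. All constants are illustrative and this is not the paper's exact algorithm:

```python
import numpy as np

class PixelGMM:
    """Toy per-pixel adaptive Gaussian mixture (grayscale). Recursive
    weight/mean/variance updates; components with negligible weight are
    pruned, so the mixture chooses its own number of modes."""

    def __init__(self, alpha=0.05, var0=225.0, w0=0.05, max_k=4, prune=0.01):
        self.alpha, self.var0, self.w0 = alpha, var0, w0
        self.max_k, self.prune = max_k, prune
        self.w, self.mu, self.var = [], [], []

    def update(self, x):
        """Absorb sample x; return True if x matches an established mode."""
        matched, created = None, False
        for k in range(len(self.w)):
            if (x - self.mu[k]) ** 2 < 9.0 * self.var[k]:   # within 3 sigma
                matched = k
                break
        if matched is None and len(self.w) < self.max_k:
            self.w.append(self.w0)                 # spawn a new component
            self.mu.append(float(x))
            self.var.append(self.var0)
            matched, created = len(self.w) - 1, True
        for k in range(len(self.w)):               # recursive updates
            o = 1.0 if k == matched else 0.0
            self.w[k] += self.alpha * (o - self.w[k])
            if k == matched:
                rho = self.alpha / max(self.w[k], 1e-9)
                self.mu[k] += rho * (x - self.mu[k])
                self.var[k] += rho * ((x - self.mu[k]) ** 2 - self.var[k])
        total = sum(self.w)
        is_bg = matched is not None and not created and self.w[matched] / total > 0.3
        # prune negligible components: the mixture adapts its own size
        live = [k for k in range(len(self.w)) if self.w[k] > self.prune]
        total = sum(self.w[k] for k in live)
        self.w = [self.w[k] / total for k in live]
        self.mu = [self.mu[k] for k in live]
        self.var = [self.var[k] for k in live]
        return is_bg
```

The key property is that everything is recursive: each frame touches only O(K) state per pixel, which is what makes per-pixel mixtures feasible in real time.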

Book Chapter
01 Jan 2002
TL;DR: This paper presents a method that improves the adaptive background mixture model by reinvestigating the update equations at different phases, which allows the system to learn faster and more accurately and to adapt effectively to changing environments.
Abstract: Real-time segmentation of moving regions in image sequences is a fundamental step in many vision systems, including automated visual surveillance, human-machine interfaces, and very low-bandwidth telecommunications. A typical method is background subtraction. Many background models have been introduced to deal with different problems. One of the successful solutions to these problems is to use a multi-colour background model per pixel, proposed by Grimson et al. [1, 2, 3]. However, the method suffers from slow learning at the beginning, especially in busy environments. In addition, it cannot distinguish between moving shadows and moving objects. This paper presents a method which improves this adaptive background mixture model. By reinvestigating the update equations, we utilise different equations at different phases. This allows our system to learn faster and more accurately, as well as adapt effectively to changing environments. A shadow detection scheme is also introduced in this paper. It is based on a computational colour space that makes use of our background model. A comparison has been made between the two algorithms. The results show the speed of learning and the accuracy of the model using our update algorithm over Grimson et al.'s tracker. When incorporated with shadow detection, our method results in far better segmentation than that of Grimson et al.

1,638 citations
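The shadow-detection idea — a cast shadow is a darkened copy of the background colour, with lowered brightness but similar chromaticity — can be sketched as follows. The thresholds and the brightness/colour-distortion decomposition here are a common simplification, not necessarily the paper's exact scheme:

```python
import numpy as np

def is_shadow(pixel, bg, beta_low=0.4, beta_high=0.9, tau_c=0.1):
    """Label an RGB pixel as a cast shadow on the background colour bg:
    the brightness ratio must fall in [beta_low, beta_high] (darker than
    the background, but not black) and the colour distortion must be
    small. All threshold values are illustrative."""
    pixel = np.asarray(pixel, dtype=float)
    bg = np.asarray(bg, dtype=float)
    denom = float(bg @ bg)
    if denom == 0.0:
        return False
    # brightness ratio: projection of the pixel onto the background colour
    ratio = float(pixel @ bg) / denom
    # colour distortion: relative distance from the scaled background colour
    dist = np.linalg.norm(pixel - ratio * bg) / np.linalg.norm(bg)
    return bool(beta_low <= ratio <= beta_high and dist < tau_c)
```

Pixels classified as shadow are folded back into the background mask instead of the foreground, which prevents moving shadows from being tracked as objects.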


"Camera Calibration and Navigation in Networks of Rotating Cameras" refers methods in this paper

  • ...This is done with a method based on Gaussian mixture models [1, 11]....


  • ...The original method based on Gaussian mixture model [1] analyses consecutive frames and estimates background colour of a pixel on the basis of how frequently different colours are observed....
