Journal ArticleDOI

Accuracy of fish-eye lens models.

10 Jun 2010 - Applied Optics (Optical Society of America) - Vol. 49, Iss. 17, pp. 3338-3347
TL;DR: A method is presented by which the lens curve of a fish-eye camera can be extracted using well-founded assumptions and perspective methods, and several of the models from the literature are examined against this empirically derived curve.
Abstract: The majority of computer vision applications assume that the camera adheres to the pinhole camera model. However, most optical systems introduce undesirable effects. By far the most evident of these effects is radial lensing, which is particularly noticeable in fish-eye camera systems, where the effect is relatively extreme. Several authors have developed models of fish-eye lenses that can be used to describe the fish-eye displacement. Our aim is to evaluate the accuracy of several of these models. Thus, we present a method by which the lens curve of a fish-eye camera can be extracted using well-founded assumptions and perspective methods. Several of the models from the literature are examined against this empirically derived curve.
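As a point of reference for the models being evaluated, the sketch below (a minimal illustration, not code from the paper; the unit focal length is an assumed value) compares the radial image distance predicted by the pinhole projection with the four classical single-parameter fish-eye projections commonly cited in this literature.

```python
import numpy as np

def pinhole(theta, f):
    """Perspective (pinhole) projection: r = f * tan(theta)."""
    return f * np.tan(theta)

def equidistant(theta, f):
    """Equidistant fish-eye projection: r = f * theta."""
    return f * theta

def equisolid(theta, f):
    """Equisolid-angle fish-eye projection: r = 2 f * sin(theta / 2)."""
    return 2.0 * f * np.sin(theta / 2.0)

def orthographic(theta, f):
    """Orthographic fish-eye projection: r = f * sin(theta)."""
    return f * np.sin(theta)

def stereographic(theta, f):
    """Stereographic fish-eye projection: r = 2 f * tan(theta / 2)."""
    return 2.0 * f * np.tan(theta / 2.0)

if __name__ == "__main__":
    f = 1.0  # focal length in arbitrary units (assumed value)
    models = [pinhole, equidistant, equisolid, orthographic, stereographic]
    for deg in (10, 30, 60, 80):
        theta = np.deg2rad(deg)
        radii = ", ".join(f"{m.__name__}={m(theta, f):.3f}" for m in models)
        print(f"theta = {deg:2d} deg: {radii}")
```

The divergence between the pinhole curve and the fish-eye curves at large incident angles is the radial displacement the paper's models attempt to describe.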
Citations
Proceedings ArticleDOI
01 Oct 2019
TL;DR: The first extensive fisheye automotive dataset, WoodScape, named after Robert Wood, is released; it comprises four surround-view cameras and nine tasks including segmentation, depth estimation, 3D bounding box detection, and soiling detection.
Abstract: Fisheye cameras are commonly employed for obtaining a large field of view in surveillance, augmented reality and, in particular, automotive applications. In spite of their prevalence, there are few public datasets for detailed evaluation of computer vision algorithms on fisheye images. We release the first extensive fisheye automotive dataset, WoodScape, named after Robert Wood, who invented the fisheye camera in 1906. WoodScape comprises four surround-view cameras and nine tasks including segmentation, depth estimation, 3D bounding box detection, and soiling detection. Semantic annotation of 40 classes at the instance level is provided for over 10,000 images, and annotations for the other tasks are provided for over 100,000 images. With WoodScape, we would like to encourage the community to adapt computer vision models for fisheye cameras instead of using naive rectification.

196 citations


Cites background from "Accuracy of fish-eye lens models."

  • ...More detailed analysis of accuracy of various projection models is discussed in [25]....


Journal ArticleDOI
TL;DR: The aim of this paper is to compare different lens distortion models to determine which one obtains better results under given conditions and to explore whether any single model can represent both high and low distortion adequately.
Abstract: Many lens distortion models exist with several variations, and each distortion model is calibrated using a different technique. If someone wants to correct lens distortion, choosing the right model can be a very difficult task. Calibration depends on the chosen model, and some methods have unstable results. Normally, the distortion model containing radial, tangential, and prism distortion is used, but it does not represent high distortion accurately. The aim of this paper is to compare different lens distortion models to determine which one obtains better results under given conditions and to explore whether any single model can represent both high and low distortion adequately. Also, we propose a calibration technique to calibrate several models under stable conditions. Since performance is strongly conditioned by the calibration technique, the metric lens distortion calibration method is used to calibrate all the evaluated models.
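For context, the radial-plus-tangential model referred to above is commonly written in the Brown-Conrady form. The sketch below is a minimal illustration of that form, assuming normalized image coordinates; the coefficient values are invented and this is not the calibration technique proposed in the paper.

```python
def distort_brown_conrady(x, y, k1, k2, k3, p1, p2):
    """Apply radial (k1..k3) and tangential (p1, p2) distortion to
    normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# Example with illustrative (made-up) coefficients.
x_d, y_d = distort_brown_conrady(0.3, -0.2, k1=-0.25, k2=0.08, k3=0.0,
                                 p1=1e-3, p2=-5e-4)
print(x_d, y_d)
```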

85 citations

Journal ArticleDOI
Te Chen, Lu Liu, Bo Tu, Zhong Zheng, Weiwei Hu
TL;DR: This letter proposes an imaging receiver scheme for indoor multiple-input-multiple-output (MIMO) visible light communication (VLC), in which a fisheye lens with an ultrawide field of view is used for high-quality imaging, so it can realize omnidirectional reception and provide high spatial diversity for decoding of the MIMO signals.
Abstract: This letter proposes an imaging receiver scheme for indoor multiple-input-multiple-output (MIMO) visible light communication (VLC), in which a fisheye lens with an ultrawide field of view is used for high-quality imaging, so it can realize omnidirectional reception and provide high spatial diversity for decoding of the MIMO signals. In addition, the fisheye lens projects planar and small-sized images, which means that integration with a compact planar receiver array is feasible. Using the polynomial projection model, the optical intensity on the receiving plane is obtained; it is in accordance with the experimental result and shows that the images of the light-emitting diodes are clearly separated. The simulation results indicate that low correlations of the channel matrix are achieved, so high spectral efficiency is realized at various receiver positions under indoor circumstances. Consequently, this fisheye-lens-based imaging receiver is a potential candidate for high-performance indoor MIMO VLC applications.
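To make the polynomial projection model concrete, here is a small sketch assuming the commonly used odd-order, fifth-degree form r(θ) = k1·θ + k2·θ³ + k3·θ⁵; the coefficients and the example direction are invented for illustration and are not taken from the letter.

```python
import numpy as np

def poly_projection_radius(theta, k1, k2, k3):
    """Fifth-order odd polynomial projection: radial image distance as a
    function of the incident angle theta (radians)."""
    return k1 * theta + k2 * theta ** 3 + k3 * theta ** 5

def project_point(direction, k1, k2, k3):
    """Project a 3D direction (camera frame, z forward) to image-plane
    coordinates using the polynomial fisheye model."""
    x, y, z = direction
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                  # azimuth in the image plane
    r = poly_projection_radius(theta, k1, k2, k3)
    return r * np.cos(phi), r * np.sin(phi)

# Illustrative coefficients (made up); a source roughly 30 degrees off-axis.
print(project_point((0.5, 0.0, 0.866), k1=1.0, k2=-0.05, k3=0.002))
```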

73 citations


Cites background from "Accuracy of fish-eye lens models."

  • ...Besides, the focal length of the fisheye lens is short (several millimeters) [13], and the projected image is planar and small in size (tens of square millimeters), which can match the compact planar receiver array....


  • ...The coordinate of p′ can be determined by using various projection functions [12], and it has been suggested that a fifth-order form of the polynomial projection model is adequate to achieve high accuracy [13], given by...


Journal ArticleDOI
01 Jun 2016 - Sensors
TL;DR: This article presents a new approach to calculating the inverse of radial distortions based on a formal calculus involving a power series used to deduce a recursive formula for the new coefficients.
Abstract: This article presents a new approach to calculating the inverse of radial distortions. The method provides a model of reverse radial distortion, currently modeled by a polynomial expression, through another polynomial expression whose new coefficients are a function of the original ones. After describing the state of the art, the proposed method is developed. It is based on a formal calculus involving a power series, which is used to deduce a recursive formula for the new coefficients. We present several implementations of this method and describe the experiments conducted to assess the validity of the new approach. Such a non-iterative approach, using another polynomial expression that can be deduced from the first one, is attractive in terms of performance, reuse of existing software, and bridging between existing software tools that do not consider distortion from the same point of view.
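As a rough numerical illustration of the idea (not the paper's recursive formula itself), the sketch below reverses a fifth-order odd radial polynomial using the first terms of the classical power-series reversion and checks the result against an iterative Newton inversion. The distortion coefficients are invented.

```python
def distort(r_u, a, b):
    """Forward radial distortion: r_d = r_u + a*r_u**3 + b*r_u**5."""
    return r_u + a * r_u ** 3 + b * r_u ** 5

def undistort_series(r_d, a, b):
    """Approximate inverse via power-series reversion (first two terms):
    r_u ~= r_d - a*r_d**3 + (3*a**2 - b)*r_d**5."""
    return r_d - a * r_d ** 3 + (3.0 * a ** 2 - b) * r_d ** 5

def undistort_newton(r_d, a, b, iters=20):
    """Reference inverse obtained iteratively with Newton's method."""
    r_u = r_d
    for _ in range(iters):
        f = distort(r_u, a, b) - r_d
        df = 1.0 + 3.0 * a * r_u ** 2 + 5.0 * b * r_u ** 4
        r_u -= f / df
    return r_u

a, b = -0.2, 0.03          # made-up distortion coefficients
r_u_true = 0.4             # undistorted radius (normalized units)
r_d = distort(r_u_true, a, b)
print(undistort_series(r_d, a, b), undistort_newton(r_d, a, b), r_u_true)
```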

56 citations


Cites background from "Accuracy of fish-eye lens models."

  • ...A complete description of these models can be found in the review written by Hughes [32] and also in [33]....


Proceedings ArticleDOI
01 May 2020
TL;DR: This paper presents a novel self-supervised scale-aware framework for learning Euclidean distance and ego-motion from raw monocular fisheye videos without applying rectification and obtained state-of-the-art results comparable to other self- supervised monocular methods.
Abstract: Fisheye cameras are commonly used in applications like autonomous driving and surveillance to provide a large field of view (>180°). However, they come at the cost of strong non-linear distortions which require more complex algorithms. In this paper, we explore Euclidean distance estimation on fisheye cameras for automotive scenes. Obtaining accurate and dense depth supervision is difficult in practice, but self-supervised learning approaches show promising results and could potentially overcome the problem. We present a novel self-supervised scale-aware framework for learning Euclidean distance and ego-motion from raw monocular fisheye videos without applying rectification. While it is possible to perform piece-wise linear approximation of the fisheye projection surface and apply standard rectilinear models, it has its own set of issues like re-sampling distortion and discontinuities in transition regions. To encourage further research in this area, we will release our dataset as part of the WoodScape project [1]. We further evaluated the proposed algorithm on the KITTI dataset and obtained state-of-the-art results comparable to other self-supervised monocular methods. Qualitative results on an unseen fisheye video demonstrate impressive performance.

51 citations

References
Journal ArticleDOI

28,888 citations


"Accuracy of fish-eye lens models." refers methods in this paper

  • ...To achieve this, a nonlinear optimization algorithm can be used, such as Levenberg–Marquardt [20]....


  • ...Thus, to achieve the result with least error, the vanishing point must be chosen such that the following error function $\xi$ is minimized, where $s$ and $m$ are given in terms of angles against the x axis in the range $[-\pi/2, \pi/2)$: $\xi = \sum_{i=1}^{n} |\Delta s_i|$ (23), where $\Delta s$ is the acute angle formed by the two lines described by the slopes $s$ and $m$: $\Delta s = m - s$ if $|m - s| < \pi/2$, and $\Delta s = \pi - (m - s)$ if $|m - s| > \pi/2$ (24). To achieve this, a nonlinear optimization algorithm can be used, such as Levenberg–Marquardt [20]....


  • ...The Levenberg–Marquardt algorithm is used to complete the minimization....


  • ...Then, each fish-eye lens model is fitted to each of the extracted curves, using the Levenberg–Marquardt nonlinear least-mean-squares fit algorithm [20].... (A minimal fitting sketch follows this list.)

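Both uses of Levenberg–Marquardt quoted above are standard nonlinear least-squares fits. Below is a minimal sketch of the model-fitting step, assuming synthetic data in place of an empirically extracted lens curve and SciPy's LM implementation rather than the authors' own code.

```python
import numpy as np
from scipy.optimize import least_squares

def equisolid(theta, f):
    """Equisolid-angle fish-eye model: r = 2 f * sin(theta / 2)."""
    return 2.0 * f * np.sin(theta / 2.0)

def residuals(params, theta, r_measured):
    """Residuals between a candidate model curve and the extracted curve."""
    (f,) = params
    return equisolid(theta, f) - r_measured

# Synthetic stand-in for an empirically extracted lens curve (illustrative).
np.random.seed(0)
theta = np.linspace(0.05, 1.3, 40)                    # incident angles [rad]
r_measured = equisolid(theta, 1.02) + np.random.normal(0, 1e-3, theta.size)

fit = least_squares(residuals, x0=[1.0], args=(theta, r_measured), method="lm")
print("fitted focal length:", fit.x[0])
print("RMS residual:", np.sqrt(np.mean(fit.fun ** 2)))
```

The same fit can be repeated for each candidate lens model, and the residuals compared to judge which model best matches the extracted curve.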

Proceedings ArticleDOI
20 Sep 1999
TL;DR: Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
Abstract: An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.
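In modern terms this is the SIFT detector/descriptor; the sketch below is a minimal usage example with OpenCV, assuming an opencv-python build that ships SIFT and a placeholder image file name.

```python
import cv2

# Load a grayscale image (the path is a placeholder for a real file).
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# Detect scale-invariant keypoints and compute their local descriptors.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(len(keypoints), "keypoints; descriptor array shape:", descriptors.shape)
```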

16,989 citations


"Accuracy of fish-eye lens models." refers background in this paper

  • ...Future work would address this issue; perhaps via the adaptation of a scale-invariant feature transform [22] for this particular application (which may also overcome orientation and approximate affine transforms of features in these regions)....


01 Jan 2001
Multiple View Geometry in Computer Vision (Hartley and Zisserman), a standard textbook on projective geometry, camera models, and multi-view reconstruction.

14,282 citations

Journal ArticleDOI
Zhengyou Zhang
TL;DR: A flexible technique to easily calibrate a camera that only requires the camera to observe a planar pattern shown at a few (at least two) different orientations is proposed and advances 3D computer vision one more step from laboratory environments to real world use.
Abstract: We propose a flexible technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use.
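OpenCV's planar-pattern calibration follows essentially this approach (closed-form initialization plus nonlinear refinement, with radial distortion modeled). Below is a minimal sketch, assuming a set of checkerboard images; the file pattern and board size are placeholders.

```python
import glob
import cv2
import numpy as np

board = (9, 6)                                    # inner corners (placeholder)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):             # placeholder file pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Closed-form solution followed by maximum-likelihood refinement,
# with radial lens distortion modeled.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("intrinsic matrix:\n", K)
```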

13,200 citations


"Accuracy of fish-eye lens models." refers background in this paper

  • ...Several other researchers have made the assumption that other causes of tangential distortion are negligible [2,3,5,10]....


Journal ArticleDOI
Roger Y. Tsai
01 Aug 1987
TL;DR: In this paper, a two-stage technique for 3D camera calibration using TV cameras and lenses is described, aimed at efficient computation of camera external position and orientation relative to object reference coordinate system as well as the effective focal length, radial lens distortion, and image scanning parameters.
Abstract: A new technique for three-dimensional (3D) camera calibration for machine vision metrology using off-the-shelf TV cameras and lenses is described. The two-stage technique is aimed at efficient computation of the camera's external position and orientation relative to the object reference coordinate system, as well as the effective focal length, radial lens distortion, and image scanning parameters. The two-stage technique has advantages in terms of accuracy, speed, and versatility over the existing state of the art. A critical review of the state of the art is given at the beginning. A theoretical framework is established, supported by comprehensive proof in five appendixes, and may pave the way for future research on 3D robotics vision. Test results using real data are described. Both accuracy and speed are reported. The experimental results are analyzed and compared with theoretical predictions. Recent effort indicates that, with slight modification, the two-stage calibration can be done in real time.

5,940 citations