Calibrating and optimizing poses of visual sensors in distributed platforms
Citations
Computer vision: a modern approach
A convenient multicamera self-calibration for virtual environments
On the optimal placement of multiple visual sensors
The Coverage Problem in Video-Based Wireless Sensor Networks: A Survey
Optimal sensor placement for surveillance of large spaces
References
Distinctive Image Features from Scale-Invariant Keypoints
Multiple View Geometry in Computer Vision
A Combined Corner and Edge Detector
A flexible new technique for camera calibration
Frequently Asked Questions (11)
Q2. What future works have the authors mentioned in the paper "Calibrating and optimizing poses of visual sensors in distributed platforms" ?
As the change in viewpoint between the different cameras is restricted, future work is needed to improve the automatic extraction of point correspondences between images. Future work on this topic will include the investigation of how to handle large numbers of grid points.
Q3. How is the registration of triplets and subgroups achieved?
Registration of triplets and sub-groups is achieved by computing a homography of 3-space between the different metric structures.
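The paper does not spell out the estimator for this homography of 3-space, but one standard approach is a DLT-style linear solve over 3D point correspondences between the two metric structures; the sketch below illustrates that idea with synthetic data (the function name and the random test setup are assumptions for illustration).

```python
# Sketch of the registration step: estimating the 4x4 homography of 3-space
# that maps one reconstruction onto another, via a DLT-style linear solve.
import numpy as np

def homography_3d(X1, X2):
    """X1, X2: (N, 4) homogeneous 3D points with X2 ~ H @ X1, N >= 5."""
    rows = []
    for a, b in zip(X1, X2):
        # Homogeneous equivalence b ~ H a gives b_l (H a)_k - b_k (H a)_l = 0
        # for every index pair (k, l); stack these as linear rows in vec(H).
        for k in range(4):
            for l in range(k + 1, 4):
                r = np.zeros(16)
                r[4 * k:4 * k + 4] = b[l] * a
                r[4 * l:4 * l + 4] = -b[k] * a
                rows.append(r)
    # The null vector of the stacked system is H up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(4, 4)

# Synthetic check: recover a random 4x4 homography from 6 correspondences.
rng = np.random.default_rng(0)
H_true = rng.standard_normal((4, 4))
X1 = rng.standard_normal((6, 4))
X2 = (H_true @ X1.T).T
H_est = homography_3d(X1, X2)
# Homographies are defined only up to scale; normalize before comparing.
H_est /= H_est[0, 0]
print(np.allclose(H_est, H_true / H_true[0, 0], atol=1e-5))
```

In practice the correspondences come from the 3D points shared by the two reconstructions, and the linear estimate would be refined in the subsequent bundle adjustment.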
Q4. What is the effect of the use of feature matching in combination with a flat screen?
The use of SIFT-feature matching in combination with a flat screen displaying a known pattern enables the authors to easily and automatically detect the subset of image points.
Q5. What is the way to solve the camera positioning problem?
Considering N cameras that are calibrated, i.e. whose fields-of-view and positions in space are known, the authors formulate their camera positioning problem in terms of maximizing the coverage.
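The coverage objective can be illustrated with a toy 2-D model: given known camera positions and fields-of-view, coverage is the number of sample points of the region seen by at least one camera. The geometry, field-of-view, and range values below are made-up illustration, not the paper's setup.

```python
# Toy coverage evaluation: a sample point counts as covered if it lies
# within some camera's angular field-of-view and maximum range.
import numpy as np

def sees(cam_pos, cam_dir, half_fov, max_range, point):
    v = point - cam_pos
    dist = np.linalg.norm(v)
    if dist == 0 or dist > max_range:
        return False
    # Angle between viewing direction and the ray to the point.
    angle = np.arccos(np.clip(np.dot(v / dist, cam_dir), -1.0, 1.0))
    return angle <= half_fov

# Two cameras facing each other across a rectangular region (illustrative).
cams = [(np.array([0.0, 0.0]), np.array([1.0, 0.0])),
        (np.array([10.0, 0.0]), np.array([-1.0, 0.0]))]
grid = [np.array([x, y], float) for x in range(1, 10) for y in range(-2, 3)]
covered = sum(any(sees(p, d, np.deg2rad(30), 8.0, g) for p, d in cams)
              for g in grid)
print(f"coverage: {covered}/{len(grid)}")
```

Maximizing this count over the cameras' free parameters is the positioning problem; the paper restricts those parameters to pan and tilt once positions are fixed.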
Q6. What is the main reason for the failure of the optimization problem?
As the optimization problem of the final bundle adjustment is of very high dimension, a poor initial guess commonly causes the non-linear optimization to fail completely, i.e. to converge to a suboptimal solution or not to converge at all.
Q7. How many parameters are used for the camera matrices?
The dimension of the minimization problem then adds up to a total of 6(N−1) parameters for the camera matrices, plus a set of 3L parameters for the coordinates of the L reconstructed 3D points.
Q8. What is the basic optimization problem of the feature tracker?
The basic optimization problem solved by the feature tracker is:

  min_{d,D} Σ_{x=−ωx..ωx} Σ_{y=−ωy..ωy} ( I(x + u) − J((D + I_{2×2})x + d + u) )²   (2)

where I(u), J(u) represent the grey-scale values of the two images at location u, the vector d = [dx dy]^T is the optical flow at location u, and the matrix D denotes an affine deformation matrix characterized by the four coefficients dxx, dxy, dyx, dyy:

  D = [ dxx  dxy ; dyx  dyy ]   (3)

The objective of affine tracking is then to choose d and D in a way that minimizes the dissimilarity between feature windows of size 2ωx + 1 in the x direction and 2ωy + 1 in the y direction around the points u and v in I and J, respectively.
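Eq. (2) can be evaluated directly for a given candidate (d, D): sum the squared grey-value differences over the window, sampling J at the affinely warped location. The sketch below does exactly that with nearest-neighbour sampling on a synthetic image pair (a real tracker would use sub-pixel interpolation and iterate on d and D; the function and test data here are our own illustration).

```python
# Direct evaluation of the affine-tracking dissimilarity of Eq. (2):
# sum over a (2*wx+1) x (2*wy+1) window of squared differences between
# I at x + u and J at the warped location (D + I_2x2) x + d + u.
import numpy as np

def dissimilarity(I, J, u, d, D, wx=2, wy=2):
    eps = 0.0
    for x in range(-wx, wx + 1):
        for y in range(-wy, wy + 1):
            p = np.array([x, y], float)
            q = (D + np.eye(2)) @ p + d + u              # warped coordinate in J
            gx, gy = int(round(q[0])), int(round(q[1]))  # nearest-neighbour sample
            eps += (I[int(u[1]) + y, int(u[0]) + x] - J[gy, gx]) ** 2
    return eps

rng = np.random.default_rng(1)
I = rng.random((20, 20))
J = np.roll(I, shift=(0, 3), axis=(0, 1))   # J is I translated by 3 pixels in x
u = np.array([8.0, 8.0])
# The true flow d = (3, 0) with D = 0 drives the dissimilarity to zero.
print(dissimilarity(I, J, u, np.array([3.0, 0.0]), np.zeros((2, 2))))
```

The tracker minimizes this quantity over d and D, typically by Newton-style iterations on a linearization of Eq. (2).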
Q9. What is the way to determine the optimal poses of the multiple cameras?
Given the fixed positions, the authors develop a linear programming model that determines the optimal poses (pan and tilt angles) with respect to coverage while maintaining the required resolution (i.e. a minimal 'sampling frequency').
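A pose-selection model of this kind can be sketched as follows (this is a generic LP relaxation of the pose/coverage problem, not the paper's exact formulation): binary-relaxed variables pick one pan/tilt setting per camera, indicator variables mark covered sample points, and the objective maximizes the number of covered points. The coverage sets below are made-up toy data.

```python
# LP relaxation of pose selection: choose one orientation per camera so
# as to maximize the number of covered sample points.
import numpy as np
from scipy.optimize import linprog

# Hypothetical coverage data: cover[c][o] = sample points that camera c
# sees (at sufficient resolution) when using candidate orientation o.
cover = [
    [{0, 1}, {2}],   # camera 0: two candidate pan/tilt settings
    [{2, 3}, {0}],   # camera 1
]
num_points = 4
orients = [(c, o) for c, cs in enumerate(cover) for o in range(len(cs))]
n_x, n_y = len(orients), num_points   # x: orientation choices, y: coverage flags

# Maximize sum(y)  ->  minimize -sum(y).
c_vec = np.concatenate([np.zeros(n_x), -np.ones(n_y)])

# A point counts as covered only if some selected orientation sees it:
# y_p - sum_{(c,o) covering p} x_{c,o} <= 0.
A_ub = np.zeros((n_y, n_x + n_y))
for p in range(num_points):
    A_ub[p, n_x + p] = 1.0
    for j, (c, o) in enumerate(orients):
        if p in cover[c][o]:
            A_ub[p, j] = -1.0
b_ub = np.zeros(n_y)

# Each camera commits to exactly one orientation: sum_o x_{c,o} = 1.
A_eq = np.zeros((len(cover), n_x + n_y))
for j, (c, o) in enumerate(orients):
    A_eq[c, j] = 1.0
b_eq = np.ones(len(cover))

res = linprog(c_vec, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (n_x + n_y))
print("covered points (LP objective):", -res.fun)
```

In this instance the relaxation attains the integral optimum (all four points covered by pairing camera 0's first setting with camera 1's first setting); in general an integer program or rounding step would be needed.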
Q10. What are the constraints used to restrict the ambiguity?
Additional constraints arising from knowledge about the cameras’ parameters and/or the scene can be used to restrict this ambiguity up to an affine, metric or Euclidean transformation.
Q11. What is the way to extract point correspondences between images?
As the change in viewpoint between the different cameras is restricted, future work is needed to improve the automatic extraction of point correspondences between images.