Author

Suresh Kumar Pupala

Bio: Suresh Kumar Pupala is an academic researcher from the University of Salford. The author has contributed to research in the topics of 3D reconstruction and camera resectioning. The author has an h-index of 1, having co-authored 1 publication receiving 22 citations.

Papers
Journal ArticleDOI
TL;DR: The work presented in this paper is the implementation of a stereo vision system integrated into a humanoid robot, with one of the aims being to avoid expensive hardware investment when using robots for 3D perception.

28 citations


Cited by
01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance and describes numerous important application areas such as image based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Users learn techniques that have proven useful through first-hand experience, together with a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, the book covers essential topics of either practical significance or theoretical importance, discussed in substantial and increasing depth. Application surveys describe numerous important application areas, such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. Appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

Journal ArticleDOI
TL;DR: Compared with other related studies, the proposed method maintains better space-time similarity with the performer, besides ensuring smoother trajectory for multiframe sequential imitation.
Abstract: The ability of a humanoid robot to display human-like facial expressions is crucial to natural human–computer interaction. To fulfill this requirement for an imitative humanoid robot, XIN-REN, an automatic facial expression learning method is proposed. In this method, first, a forward kinematics model, designed to reflect the nonlinear mapping between servo displacement vectors and the corresponding expression shape vectors, is converted into a linear relationship between the mechanical energy of servo displacements and the potential energy of feature points, based on the energy conservation principle. Second, an improved inverse kinematics model is established under the constraints of instantaneous similarity and movement smoothness. Finally, online expression learning is employed to determine the optimal servo displacements for transferring the facial expressions of a human performer to the robot. To illustrate the performance of the proposed method, we conduct evaluation experiments on the forward and inverse kinematics models, based on data collected from the robot's random states as well as from fixed procedures designed by animators. Further, we evaluate the facial imitation ability with different values of the weighting factor, according to three sequential indicators (space similarity, time similarity, and movement smoothness). Experimental results indicate that the deviations in mean shape and position do not exceed 6 pixels and 3 pixels, respectively, and the average servo displacement deviation does not exceed 0.8%. Compared with other related studies, the proposed method maintains better space–time similarity with the performer, while ensuring a smoother trajectory for multiframe sequential imitation.

30 citations

Journal ArticleDOI
Cheng Lei1, Song Biao1, Yating Dai1, Huaiyu Wu1, Yang Chen1 
TL;DR: This study presents a novel navigation method designed to support real-time, efficient, and accurate indoor localisation for a mobile robot system, achieving localisation accuracy comparable with that of the IMU/SV fusion algorithm.

22 citations

Journal ArticleDOI
TL;DR: The guidelines suggest keeping a long baseline, observing the target from as close as possible, controlling the two active cameras with the same angular velocity, and keeping the target on the Z_w axis if possible, in order to reduce measurement errors.

Abstract: The depth measurement error of bionic eyes consisting of two active cameras is derived by considering the eyes' rotation angle errors and image feature extraction errors. The factors affecting the depth measurement are obtained from the calculation formula of the depth error. Based on an analysis of these factors, effective guidelines for bionic eyes are proposed to reduce measurement errors. The guidelines suggest keeping a long baseline, observing the target from as close as possible, controlling the two active cameras with the same angular velocity, and keeping the target on the Z_w axis if possible. Simulation experiments and practical experiments on a bionic eyes platform validate the effectiveness of the proposed guidelines.

16 citations
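The "long baseline, observe close" guideline follows from the standard pinhole stereo error model (depth Z = f·B/d, with focal length f in pixels, baseline B, and disparity d). A minimal sketch under that standard model — the focal length, baselines, and disparity-error values below are illustrative assumptions, not the paper's bionic-eye formulation:

```python
def stereo_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from the pinhole stereo model: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px: float, baseline_m: float, disparity_px: float,
                disparity_err_px: float = 1.0) -> float:
    """First-order depth error for a given disparity error:
    dZ ~= (Z^2 / (f * B)) * dd.
    Error grows quadratically with depth Z and shrinks as the
    baseline B grows -- the 'long baseline, observe close' guideline."""
    z = stereo_depth(f_px, baseline_m, disparity_px)
    return (z ** 2) / (f_px * baseline_m) * disparity_err_px

# Assumed focal length of 700 px; compare a 6 cm and a 12 cm baseline
# at the same 1 m target depth (disparity scales with the baseline).
f = 700.0
err_short = depth_error(f, 0.06, disparity_px=42.0)  # Z = 1.0 m
err_long = depth_error(f, 0.12, disparity_px=84.0)   # Z = 1.0 m
# Doubling the baseline halves the depth error at the same depth.
```

This is only the passive-stereo part of the error budget; the paper additionally accounts for the rotation-angle errors of the two active cameras, which the sketch above does not model.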

Journal ArticleDOI
14 Feb 2019-Sensors
TL;DR: This work proposes, for the first time, combining binocular vision with Quick Response (QR) code identification to improve robot P&N accuracy, and constructs an autonomous library robot for high-precision BAR operations.

Abstract: With rapid advancements in artificial intelligence and mobile robots, some of the tedious yet simple jobs in modern libraries, like book accessing and returning (BAR) operations that were previously performed manually, could be undertaken by robots. However, due to the limited accuracy of existing positioning and navigation (P&N) technologies and the operational errors accumulated during the robot P&N process, most current robots cannot perform such high-precision operations. To address these practical issues, we propose, for the first time (to the best of our knowledge), combining binocular vision and Quick Response (QR) code identification techniques to improve robot P&N accuracy, and then construct an autonomous library robot for high-precision BAR operations. Specifically, the binocular vision system is used for dynamic digital map construction and autonomous P&N, as well as obstacle identification and avoidance, while the QR code identification technique is responsible for both eliminating robot operational errors and determining robotic-arm BAR operations. Both simulations and experiments are conducted to verify the effectiveness of the proposed technique combination and of the constructed robot. Results show that the combination is effective and robust, and helps to significantly improve P&N and BAR operation accuracy while reducing BAR operation time. The implemented robot is fully autonomous and cost-effective, and, employing only well-established technologies, may find applications far beyond libraries.

14 citations