
Showing papers by "Gonzalo López-Nicolás published in 2017"


Journal ArticleDOI
TL;DR: A novel method to detect stairs with an RGB-D camera, designed to be wearable and aimed at assisting the visually impaired, that is robust enough to work in real time even under partial occlusions of the stairs.

46 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This work proposes using a head-mounted RGB-D camera to detect free space, obstacles, and scene direction in front of the user, together with a new approach to represent depth information and provide motion cues by means of particular phosphene patterns.
Abstract: Recent research demonstrates that visual prostheses are able to provide visual perception to people with certain types of blindness. In visual prostheses, image information from the scene is transformed into a phosphene pattern to be sent to the implant. This is a complex problem whose main challenge is the very limited spatial and intensity resolution. Moreover, depth perception, which is relevant for agile navigation, is lost, and encoding semantic information into phosphene patterns remains an open problem. In this work, we consider the framework of perception for navigation, where aspects such as obstacle avoidance are critical. We propose using a head-mounted RGB-D camera to detect free space, obstacles, and scene direction in front of the user. The main contribution is a new approach to represent depth information and provide motion cues by means of particular phosphene patterns. The effectiveness of this approach is tested in simulation with real data from indoor environments.
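As a rough illustration of the kind of depth-to-phosphene mapping described above, the sketch below downsamples a metric depth image to a coarse intensity grid in which nearby obstacles light up brighter phosphenes. It is not the authors' implementation; the grid size, depth range cap, and number of intensity levels are arbitrary assumptions.

```python
import numpy as np

def depth_to_phosphene_grid(depth_m, grid_shape=(20, 32), levels=4, max_range=5.0):
    """Downsample a metric depth image to a coarse phosphene grid.

    Closer obstacles are mapped to brighter phosphenes so that free space
    appears dark and nearby obstacles stand out. Grid size, number of
    intensity levels and the 5 m range cap are illustrative choices.
    """
    h, w = depth_m.shape
    gh, gw = grid_shape
    grid = np.zeros(grid_shape)
    for i in range(gh):
        for j in range(gw):
            cell = depth_m[i * h // gh:(i + 1) * h // gh,
                           j * w // gw:(j + 1) * w // gw]
            valid = cell[np.isfinite(cell) & (cell > 0)]
            if valid.size == 0:
                continue  # no depth data in this cell
            nearest = min(valid.min(), max_range)
            # invert: near -> bright, far/free -> dark, quantized to a few levels
            grid[i, j] = np.round((1.0 - nearest / max_range) * (levels - 1))
    return grid / (levels - 1)  # normalized intensities in [0, 1]

# Example: a synthetic 240x320 depth frame with an obstacle 1 m away on the right
depth = np.full((240, 320), 4.0)
depth[:, 200:] = 1.0
phosphenes = depth_to_phosphene_grid(depth)
```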

16 citations


Proceedings ArticleDOI
01 May 2017
TL;DR: A novel approach to characterize the motion of the robots in the formation is presented, which allows them to enclose and track the target while overcoming their limited field of view (FOV).
Abstract: An emerging application of multirobot systems is the monitoring of a dynamic event. Here, the goal is to enclose and track a moving target by attaining a desired geometric formation around it. By considering a circular pattern configuration for the target enclosing, the multirobot system is able to perform full perception of the target along its motion. In the proposed system, the robots rely only on their onboard vision sensors, without external input, to complete the task. The key problem resides in overcoming the motion and visual constraints of the agents. In particular, differential-drive robots with limited sensing, which must maintain visibility of the moving target as it navigates in the environment, are considered. A novel approach to characterize the motion of the robots in the formation is presented, which allows them to enclose and track the target while overcoming their limited field of view (FOV). The proposed approach is illustrated through simulations.
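One possible way to realize the enclosing behaviour described above, for unicycle (differential-drive) robots, is sketched below: each robot tracks a slowly rotating slot on a circle around the target. This is only a generic point-tracking law under the assumption that the target position is available, not the visibility-constrained controller of the paper; the gains, radius, and orbit rate are illustrative.

```python
import numpy as np

def enclosing_step(pose, target, idx, n_robots, t, radius=2.0, dt=0.05,
                   k_v=1.0, k_w=2.0, omega_orbit=0.3):
    """One control step for a unicycle robot converging to its slot on a
    circle of given radius around the target; the slots rotate slowly so the
    formation also orbits the target. pose = (x, y, theta). Gains, radius
    and orbit rate are illustrative, not values from the paper.
    """
    x, y, theta = pose
    slot = 2 * np.pi * idx / n_robots + omega_orbit * t
    goal = target + radius * np.array([np.cos(slot), np.sin(slot)])
    dx, dy = goal[0] - x, goal[1] - y
    rho = np.hypot(dx, dy)
    alpha = np.arctan2(dy, dx) - theta
    alpha = np.arctan2(np.sin(alpha), np.cos(alpha))  # wrap to [-pi, pi]
    v = k_v * rho * np.cos(alpha)                     # forward speed
    w = k_w * alpha                                   # turn rate
    return np.array([x + v * np.cos(theta) * dt,
                     y + v * np.sin(theta) * dt,
                     theta + w * dt])
```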

13 citations


Journal ArticleDOI
TL;DR: It is theoretically demonstrated and confirmed by simulation that exponential stability of the prescribed formation is achieved when time delays are constant and known, and results show that the global system performance is significantly improved with respect to the case of no delay compensation.

11 citations


Journal ArticleDOI
TL;DR: A multi-camera system configuration resembling the circular panoramic model is proposed, which results in a particular non-central projection that allows the stitching of a non-central panorama and, from a single panorama, a well-conditioned 3D reconstruction of lines, which are especially interesting in texture-less scenarios.

10 citations


Book ChapterDOI
01 Jan 2017
TL;DR: This chapter describes a novel visual homing methodology for robots moving in a planar environment that takes advantage of the strengths of omnidirectional vision, which provides a wide field of view and very precise angular information.
Abstract: The first problem addressed in the monograph is how to enable mobile robots to autonomously navigate toward specific positions in an environment. Vision sensors have often been used for this purpose, supporting a behavior known as visual homing, in which the robot’s target location is defined by an image. This chapter describes a novel visual homing methodology for robots moving in a planar environment. The employed visual information consists of a set of omnidirectional images acquired previously at different locations (including the goal position) in the environment and the current image taken by the robot. One of the contributions presented is an algorithm that calculates the relative angles between all these locations, using the computation of the 1D trifocal tensor between views and an indirect angle estimation procedure. The tensor is particularly well suited for planar motion scenarios and provides important robustness properties to the presented technique. A further contribution within the proposed methodology is a novel control law that uses the available angles, with no range information involved, to drive the robot to the goal. This way, the method takes advantage of the strengths of omnidirectional vision, which provides a wide field of view and very precise angular information. The chapter includes a formal proof of the stability of the proposed control law, and the performance of the visual navigation method is illustrated through simulations and different sets of experiments with real images captured by cameras on board robotic mobile platforms.
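A minimal sketch of a bearing-only homing step for a unicycle is given below, assuming the bearing to the goal has already been obtained (in the chapter it is recovered from the 1D trifocal tensor between omnidirectional views). The constant forward speed and gain are illustrative, and this is not the chapter's control law.

```python
import numpy as np

def homing_step(pose, bearing_to_goal, dt=0.05, v=0.2, k_w=1.5):
    """One step of a bearing-only homing law for a unicycle: drive forward at
    a constant speed while steering toward the goal bearing. No range
    information is used, only the angle. pose = (x, y, theta); the bearing is
    assumed given in the world frame.
    """
    x, y, theta = pose
    err = np.arctan2(np.sin(bearing_to_goal - theta),
                     np.cos(bearing_to_goal - theta))  # wrapped heading error
    w = k_w * err                                      # steer toward the goal
    return np.array([x + v * np.cos(theta) * dt,
                     y + v * np.sin(theta) * dt,
                     theta + w * dt])
```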

4 citations


Book ChapterDOI
20 Jun 2017
TL;DR: A system for Simulated Prosthetic Vision, based on a head-mounted display with an RGB-D camera, is presented together with two tools, one focused on human interaction and the other oriented to navigation, exploring different proposals of phosphene representations.
Abstract: Recent research on visual prostheses demonstrates the possibility of providing visual perception to people with certain types of blindness. Bypassing the damaged part of the visual path, electrical stimulation provokes spot percepts known as phosphenes. Due to physiological and technological limitations, the information received by patients has very low resolution and a reduced dynamic range. In this context, the inclusion of new computer vision techniques to improve the semantic content of this information channel is an active and open key topic. In this paper, we present a system for Simulated Prosthetic Vision based on a head-mounted display with an RGB-D camera, along with two tools, one focused on human interaction and the other oriented to navigation, exploring different proposals of phosphene representations.
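For context, simulated prosthetic vision is often rendered by drawing each phosphene as a Gaussian spot on a dark background. The sketch below follows that common convention; the spot size and output resolution are assumptions, and it is not the system described in the paper.

```python
import numpy as np

def render_phosphenes(grid, out_shape=(480, 640), sigma_px=6.0):
    """Render a coarse intensity grid as an image of Gaussian 'phosphene'
    spots, a common way to simulate prosthetic vision. Spot size and output
    resolution are illustrative parameters.
    """
    gh, gw = grid.shape
    H, W = out_shape
    ys, xs = np.mgrid[0:H, 0:W]
    img = np.zeros(out_shape)
    for i in range(gh):
        for j in range(gw):
            if grid[i, j] <= 0:
                continue  # dark phosphene, nothing to draw
            cy = (i + 0.5) * H / gh
            cx = (j + 0.5) * W / gw
            img += grid[i, j] * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                                       / (2 * sigma_px ** 2))
    return np.clip(img, 0.0, 1.0)
```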

3 citations


Book ChapterDOI
01 Jan 2017
TL;DR: The method is developed considering in particular a unicycle kinematic robot model, and its contribution is that sinusoids are used in such a way that the generated vehicle trajectories are feasible, smooth, and versatile, improving over previous sinusoid-based control works in terms of efficiency and flexibility.
Abstract: This chapter continues the study of methods for vision-based stabilization of mobile robots to desired locations in an environment, focusing on an aspect that is critical for successful real-world implementation but often tends to be overlooked in the literature: the control inputs employed must take into account the specific motion constraints of commercial robots, and should conform to feasibility, safety, and efficiency requirements. With this motivation, the chapter proposes a visual control approach based on sinusoidal inputs designed to stabilize the pose of a robot with nonholonomic motion constraints. All the information used in the control scheme is obtained from omnidirectional vision, in a robust manner, by means of the 1D trifocal tensor. The method is developed considering in particular a unicycle kinematic robot model, and its contribution is that sinusoids are used in such a way that the generated vehicle trajectories are feasible, smooth, and versatile, improving over previous sinusoid-based control works in terms of efficiency and flexibility. Furthermore, the analytical expressions for the evolution of the robot’s state are provided and used to propose a novel state-feedback control law. The stability of the proposed approach is analyzed in the chapter, which also reports on results from simulations and experiments with a real robot, carried out to validate the methodology.
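To illustrate the flavour of sinusoidal-input control for a unicycle, the sketch below simply integrates the kinematics under sinusoidal linear and angular velocities. It shows the kind of smooth trajectories such inputs generate; the chapter's actual input design, gains, and state-feedback law are not reproduced here.

```python
import numpy as np

def simulate_sinusoidal_inputs(x0, T=10.0, dt=0.01,
                               v_amp=0.4, w_amp=0.6, freq=0.5):
    """Integrate a unicycle (x, y, theta) under sinusoidal velocity inputs
    v(t) = v_amp*sin(2*pi*freq*t) and w(t) = w_amp*cos(2*pi*freq*t).
    Amplitudes and frequency are illustrative parameters.
    """
    x, y, theta = x0
    traj = [(x, y, theta)]
    for k in range(int(T / dt)):
        t = k * dt
        v = v_amp * np.sin(2 * np.pi * freq * t)   # linear velocity input
        w = w_amp * np.cos(2 * np.pi * freq * t)   # angular velocity input
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += w * dt
        traj.append((x, y, theta))
    return np.array(traj)

trajectory = simulate_sinusoidal_inputs((0.0, 0.0, 0.0))
```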

1 citation


Book ChapterDOI
01 Jan 2017
TL;DR: The developments in this chapter pave the way for novel vision-based implementations of control tasks involving teams of mobile robots, which is the leitmotif of the monograph.
Abstract: It is undoubtedly interesting, from a practical perspective, to solve the problem of multirobot formation stabilization in a decentralized fashion, while allowing the agents to rely only on their independent onboard sensors (e.g., cameras), and avoiding the use of leader robots or global reference frames. However, a key observation that serves as motivation for the work presented in this chapter is that the available controllers satisfying these conditions generally fail to provide global stability guarantees. In this chapter, we provide novel theoretical tools to address this issue; in particular, we propose coordinate-free formation stabilization algorithms that are globally convergent. The common elements of the control methods we describe are that they rely on relative position information expressed in each robot’s independent frame, and that the absence of a shared orientation reference is dealt with by introducing locally computed rotation matrices in the control laws. Specifically, three different nonlinear formation controllers for mobile robots are presented in the chapter. First, we propose an approach relying on global information of the team, implemented in a distributed networked fashion. Then, we present a purely distributed method based on each robot using only partial information from a set of formation neighbors. We finally explore formation stabilization applied to a target enclosing task in a 3D workspace. The developments in this chapter pave the way for novel vision-based implementations of control tasks involving teams of mobile robots, which is the leitmotif of the monograph. The controllers are formally studied and their performance is illustrated with simulations.
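The idea of introducing locally computed rotation matrices can be sketched as follows: each robot fits, via a 2D Procrustes alignment, the rotation that best maps the desired relative positions of its neighbors onto the measured ones (both expressed in its own frame), and moves to reduce the residual. This is a hypothetical illustration of the principle, not the chapter's controllers or their global convergence guarantees.

```python
import numpy as np

def local_formation_velocity(rel_meas, rel_des, k=0.5):
    """Velocity command for one robot from the relative positions of its
    neighbors expressed in its own frame (rel_meas, shape (m, 2)) and the
    corresponding desired relative positions (rel_des, shape (m, 2)).
    A rotation aligning the desired pattern with the measurements is computed
    locally (2D Procrustes/Kabsch), so no common orientation reference is needed.
    """
    # best-fit rotation R such that rel_meas_i ~ R @ rel_des_i
    H = rel_des.T @ rel_meas
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    # residual between measured and rotated desired relative positions
    error = rel_meas - rel_des @ R.T
    # move toward neighbors that are farther than desired (illustrative gain k)
    return k * error.mean(axis=0)
```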

1 citation


Book ChapterDOI
01 Jan 2017
TL;DR: This chapter presents a novel method that overcomes the latter issue by allowing the planar motion between two views to be computed from two different 1D homographies, and applies it to a multirobot control task in which multiple robots are driven to a desired formation with arbitrary rotation and translation in a two-dimensional workspace.
Abstract: As Chaps. 2 and 3 of the monograph have illustrated, an effective way to address vision-based control when the robots (and their attached cameras) move in a planar environment is to use omnidirectional vision and 1D multiview models. This provides interesting properties in terms of accuracy, simplicity, efficiency, and robustness. After exploring the use of the 1D trifocal tensor model, in this chapter we turn our attention to the 1D homography. This model can be computed from just two views but, compared with the trifocal constraint, presents additional challenges: namely, it depends on the structure of the scene and does not permit direct estimation of camera motion. The chapter presents a novel method that overcomes the latter issue by allowing the planar motion between two views to be computed from two different 1D homographies. Additionally, this motion estimation framework is applied to a multirobot control task in which multiple robots are driven to a desired formation with arbitrary rotation and translation in a two-dimensional workspace. In particular, each robot exchanges visual information with a set of predefined formation neighbors and performs a 1D homography-based estimation of the relative positions of these adjacent robots. Then, using a rigid 2D transformation computed from the relative positions, and the knowledge of the position of the group’s global centroid, each robot obtains its motion command. The robots’ individual motions within this distributed formation control scheme naturally result in the full team reaching the desired global configuration. Results from simulations and tests with real images are presented to illustrate the feasibility and effectiveness of the methodologies proposed throughout the chapter.
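Assuming the planar rotation of the rigid 2D transformation and the centroid position relative to the robot have already been estimated (in the chapter, from 1D homographies), a robot's motion command could be formed as in the sketch below. The variable names and the proportional law are illustrative, not the chapter's exact scheme.

```python
import numpy as np

def formation_command(p_des_i, centroid_rel, R, k=0.8):
    """Illustrative motion command for robot i: p_des_i is the robot's desired
    offset from the formation centroid in the reference layout, centroid_rel is
    the current centroid position expressed relative to the robot, and R is the
    2x2 planar rotation of the estimated rigid transformation. The robot steers
    toward the rotated desired offset around the centroid.
    """
    goal_rel = centroid_rel + R @ p_des_i  # desired position relative to robot
    return k * goal_rel                    # proportional velocity command
```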

Book ChapterDOI
01 Jan 2017
TL;DR: In this chapter, a system setup relying on external cameras and the two-view homography is proposed, to achieve the objective of driving a set of robots moving on the ground plane to a desired geometric formation.
Abstract: Cameras are versatile and relatively low-cost sensors that provide a lot of useful data. Thanks to these remarkable properties, it is possible to envision a range of different setups when considering vision-based multirobot control tasks. For instance, the vision sensors may be carried by the robots that are to be controlled, or external to them. In addition, cameras can be used in the context of both centralized and distributed control strategies. In this chapter, a system setup relying on external cameras and the two-view homography is proposed, to achieve the objective of driving a set of robots moving on the ground plane to a desired geometric formation. In particular, we propose to use multiple unmanned aerial vehicles (UAVs) as control units. Each of them carries a camera that observes a subset of the ground robotic team and is employed to control it. This gives rise to a partially distributed multirobot control method, which aims to combine the optimality and simplicity of centralized approaches with the scalability and robustness of distributed strategies. Relying on a homography computed for each of the UAV-mounted cameras, our method is purely image-based and has low computational cost. We formally study its stability for unicycle-type robots. In order for the multirobot system to converge to the target formation, certain intersections must be maintained between the sets of ground robots seen by the different cameras. To this end, we also propose a distributed strategy to coordinately control the motion of the cameras by using communication of their gathered information. The effectiveness of the proposed vision-based controller is illustrated via simulations and experiments with real robots.
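As a rough sketch of the image-based ingredient described above, the snippet below uses OpenCV's findHomography to relate the current image positions of the observed ground robots to their positions in a reference (desired) image, and measures how far that homography is from the identity. The error measure is an assumption for illustration; the chapter's actual controller and its stability analysis are not reproduced here.

```python
import numpy as np
import cv2

def image_based_formation_error(pts_current, pts_desired):
    """Estimate the two-view homography between the current image positions of
    the observed ground robots and their positions in a reference (desired)
    image, and return a simple image-space error.

    pts_current, pts_desired: (N, 2) pixel coordinates of the same robots in
    both images, with N >= 4 correspondences required by findHomography.
    """
    H, _ = cv2.findHomography(pts_current.astype(np.float32),
                              pts_desired.astype(np.float32), 0)
    # deviation of the normalized homography from the identity indicates how
    # far the observed team is from the desired configuration in this camera
    err = H / H[2, 2] - np.eye(3)
    return H, np.linalg.norm(err)
```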