
Proceedings ArticleDOI

A visualisation and simulation framework for local and remote HRI experimentation

01 Nov 2016-Vol. 2016, pp 1-8

TL;DR: This architecture has the purpose of extending the usability of a system devised in previous work by this research team during the CASIR (Coordinated Attention for Social Interaction with Robots) project, and was implemented using ROS.

Abstract: In this text, we will present work on the design and development of a ROS-based (Robot Operating System) remote 3D visualisation, control and simulation framework. This architecture has the purpose of extending the usability of a system devised in previous work by this research team during the CASIR (Coordinated Attention for Social Interaction with Robots) project. The proposed solution was implemented using ROS, and designed to address the needs of two user groups — local and remote users and developers. The framework consists of: (1) a fully functional simulator integrated with the ROS environment, including a faithful representation of a robotic platform, a human model with animation capabilities and enough features for enacting human robot interaction scenarios, and a virtual experimental setup with similar features as the real laboratory workspace; (2) a fully functional and intuitive user interface for monitoring and development; (3) a remote robotic laboratory that can connect remote users to the framework via a web browser. The proposed solution was thoroughly and systematically tested under operational conditions, so as to assess its qualities in terms of features, ease-of-use and performance. Finally, conclusions concerning the success and potential of this research and development effort are drawn, and the foundations for future work will be proposed.

Topics: User interface (55%), Usability (53%), Robot kinematics (52%), Human–robot interaction (51%)

Summary (2 min read)

Introduction

  • Fortunately, with the increase of computational power, now more than ever, simulation and remote access save time and resources (both physical and budget-related), increasing the productivity of a research team and allowing the community to seamlessly work on the same framework.
  • To meet this demand, a recent trend has been the development of remote robotic laboratories [2].
  • The combined set of desired features resulting from this demand and its relationship with potential user types is depicted in Fig.

A. ROS framework for the CASIR-IMPEP platform

  • ROS is a flexible framework for writing modular robot software, capable of creating complex and robust behaviour in different types of robotic platforms.
  • In rqt, a developer can build his/her own perspective from plugins of all the existing GUI tools in ROS, namely image viewer, terminal, 2D plot, node and package graphs, pose viewer and even Rviz itself [10].
  • In addition to movement, effort and velocity limits were also implemented, not only to emulate the safety mechanisms of the real IMPEP, but also to further approximate the behaviour between both versions of the robot.
  • The developer can abstract from the complexity of communication, seeing only sensor_msgs/Image-type messages.
  • A running instantiation of the GUI is presented in Fig. 9.

IV. RESULTS AND DISCUSSION

  • Exhaustive tests were also conducted to evaluate visualisation performance, either running the GUI directly in the main computer or passing topics to the visualisation computer, where they were shown using the rqt interface running in a local ROS installation.
  • Performance was found to be coherent with previous results: CPU usage drops significantly after removing the UI, and even further with remote visualisation.
  • In order to benchmark network resource usage, the remote lab was tested through three separate internet connections, specified in Table III.
  • Additionally, in all experimental runs the chosen browser was Google Chrome (the most optimised for web_video_server applications; see point 3, Latency, in [10]).


2016 23rd Encontro Português de Computação Gráfica e Interação (EPCGI)
A Visualisation and Simulation Framework for
Local and Remote HRI Experimentation
André Gradil and João Filipe Ferreira
Institute of Systems and Robotics (ISR)
Dept. of Electrical & Computer Eng.
University of Coimbra
Pinhal de Marrocos, Polo II
3030-290 COIMBRA, Portugal
Abstract—In this text, we will present work on the design
and development of a ROS-based (Robot Operating System¹)
remote 3D visualisation, control and simulation framework. This
architecture has the purpose of extending the usability of a
system devised in previous work by this research team during
the CASIR (Coordinated Attention for Social Interaction with
Robots) project. The proposed solution was implemented using
ROS, and designed to address the needs of two user groups:
local and remote users and developers. The framework consists
of: (1) a fully functional simulator integrated with the ROS
environment, including a faithful representation of a robotic
platform, a human model with animation capabilities and enough
features for enacting human robot interaction scenarios, and
a virtual experimental setup with similar features as the real
laboratory workspace; (2) a fully functional and intuitive user
interface for monitoring and development; (3) a remote robotic
laboratory that can connect remote users to the framework
via a web browser. The proposed solution was thoroughly and
systematically tested under operational conditions, so as to assess
its qualities in terms of features, ease-of-use and performance.
Finally, conclusions concerning the success and potential of this
research and development effort are drawn, and the foundations
for future work will be proposed.
Index Terms—Visualisation, Simulation, Remote, User Interface, ROS, Gazebo, Framework.
I. INTRODUCTION
Robots are often too big to transport, too expensive to
replicate, or simply not available to a researcher
or developer at a convenient moment in time. Fortunately,
with the increase of computational power, now more than
ever, simulation and remote access save time and resources
(both physical and budget-related), increasing the productivity
of a research team and allowing the community to seamlessly
work on the same framework. There are several advantages to
robotic simulation, the most important of which is the capability
to test new algorithms and routines, reproduce and repeat
experiments, generate data under different conditions,
neuroevolve robots and benchmark any of the robot characteristics,
without the risk of damaging the real robot [1]. In fact,
having the possibility to repeat complex experiments without
external variables that may influence their outcome, especially
in human-robot interaction (HRI) applications, which depend
critically on human subject availability and for which exact
repetition is impossible precisely due to this human factor,
is a definite advantage.

¹ In spite of its name, ROS is not an actual operating system in the traditional sense of process management and scheduling.

Fig. 1: Desired features for most contemporary robotic development frameworks.

Additionally, there is often a need to
open the project to the broader research community, or simply
give the development team access from anywhere outside the
laboratory. To meet this demand, a recent trend has been the
development of remote robotic laboratories [2]. On the other
hand, the increasing complexity of robotic systems, namely
resulting from the number of modules and functionalities it
comprises, can overwhelm a developer or user when trying
to monitor its operation, and therefore having all of the data
organized in a neat and clear fashion is also paramount.
The combined set of desired features resulting from this demand
and its relationship with potential user types is depicted
in Fig. 1. The overall objective of the work presented in this
text was to endow the robotic system developed during the
FCT-funded project CASIR, devoted to studying the effect of
artificial multisensory attention in human-robot interaction²,
with these features, as a follow-up on future work planned
in [3] (see Fig. 2). This system is supported by the IMPEP
infrastructure (acronym for Integrated Multimodal Perception
Experimental Platform) (see Fig. 3³). More specifically, the
work presented in this text had the following main goals: (1)
the development of IMPEP hardware and simulator access for
local users, with the support of an intuitive local GUI; (2) providing

² FCT Contract PTDC/EEI-AUT/3010/2012, which ran from 15-04-2013 until 31-07-2015. The motivations for this work can be found in [4], while conceptual and implementation details are reported in [5].

³ For more information about this platform please refer to [3], [6].

Fig. 2: CASIR-IMPEP system architecture overview [3]. Only the bottom part of this diagram was originally fully implemented during the duration of the
CASIR project, while the top part was developed as an expansion in the scope of the work presented in this text.
Fig. 3: The Integrated Multimodal Perception Experimental Platform [3],
including actuators and respective degrees of freedom, and mounted sensors.
access to remote users through a remote robotic lab.
II. RELATED WORK
As the effort of applying a systematic approach to meeting
the demand of implementing features such as those presented
in Fig. 1 is a recent trend, only a handful of related works exists;
these will be described in the following text.
The Care-O-Bot Research project [7] has a similar architec-
ture to the CASIR framework; however, it deals with a differ-
ent application scope via a mobile manipulation platform. The
iCub simulator was created to complement the iCub project. It
is a very specific simulator with a unique architecture: it uses
YARP (Yet Another Robot Platform [8]) instead of ROS, and
a network wrapper for remote access. Another project, “The
Construct Sim” [9], consists of a cloud based tool for remote
robotic simulation. It has a very limited free user experience,
both in simulation time and in computational resources, so in
order to properly simulate a scenario one has to resort to the
paid services.
The PR2 and Care-O-Bot were found to possess all of the
desired features displayed in Fig. 1, while the iCub lacks
a remote lab and Construct Sim has neither a GUI nor hardware
access. In terms of availability, while the PR2 and iCub
projects have their features freely accessible, hardware can
only be accessed via purchase, which in both cases is rather
expensive. Construct Sim has several payment options, but
does not make hardware available. Finally, for Care-O-Bot the
price of every module is provided by the company on request.
The contributions of this work, represented in Fig. 4, resulting
from the implementation of an integrated framework boasting
the features presented in Fig. 1, consist of providing the full
feature set with the widest availability possible. This will allow
the research team to access and develop the attention middle-
ware both locally and remotely, and also make a demonstrator
of the CASIR framework available to the wider scientific
community. Unlike related work, the framework described in
this paper was developed so as to provide all the features
of Fig. 1 freely and, in the case of remote lab access by
users external to the local research team, with reservation
of timeslots, at all times ensuring system and
hardware security.
III. IMPLEMENTATION
A. ROS framework for the CASIR-IMPEP platform
ROS is a flexible framework for writing modular robot
software, capable of creating complex and robust behaviour
in different types of robotic platforms. The ROS framework
involves several core concepts, such as packages, nodes,
topics, services and messages; please see [10] and [11]
for more information. ROS is both modular and language-independent;
in other words, users can create nodes in C++,
Python, Octave and Lisp without losing the possibility of
communication between them, as long as the messaging interface is
maintained.
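This decoupling through topics and messages can be sketched with a toy example. The following is not actual ROS code (which would use rospy or roscpp): it is a minimal, self-contained illustration of the publish/subscribe pattern described above, with a hypothetical topic name.

```python
# Toy illustration of the ROS topic/message pattern: publisher and
# subscriber "nodes" are coupled only by the topic name and message
# shape, never by each other's implementation (or language).
from collections import defaultdict

class TopicBus:
    """Minimal stand-in for the ROS master/transport layer."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []

# A "node" subscribing to a (hypothetical) camera topic.
bus.subscribe("/impep/left_camera/image_raw", received.append)

# Another "node" publishing to the same topic; as the text notes, only
# the messaging interface has to be maintained for them to communicate.
bus.publish("/impep/left_camera/image_raw", {"width": 640, "height": 480})

print(received)  # [{'width': 640, 'height': 480}]
```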
Virtual simulation is one of the most widely accepted
recent technologies in robot development. There are numerous
software tools used for simulation, with great diversity in features
(supporting a variety of robotic middleware, available sensors
and actuators, and compatible with several types of robots)

Fig. 4: Conceptual diagram for the IMPEP ROS framework for remote 3D visualisation, control and simulation. The modules in orange refer to the contributions
of the work presented herewith, namely the simulator represented by impep_simulation, the hardware access comprising not only the IMPEP itself but also
its connection through the common driver API, the GUI consisting of rqt-based software, and finally the remote lab supported by the CASIR-IMPEP
web service.
and also diversity in infrastructure (code quality, technical and
community support). According to [12], currently there are
about 40 simulation tools used by the scientific community.
However, this work follows the CASIR project, which is
supported by ROS, thereby narrowing the universe of
development frameworks of interest to Gazebo [13], MORSE [14],
V-Rep [15] and Webots [16]. Comparing these frameworks
in terms of features, Gazebo and Webots stand out among the
group; however, Gazebo is more interesting in terms of support
infrastructure. Moreover, only Gazebo provides the percentage
of coverage from function and branch testing (52.9% and
44.5% respectively), as seen on the Gazebo website [13]; this
means that 52.9% of functions (or subroutines) in the program
were called in tests, and 44.5% of branches were executed.
To build the models, several 3D modelling tools were com-
pared, namely Maya, 3ds Max and Blender. These solutions
are very similar in features; however, due to the simplicity of
the modelling demands of the work reported in this paper, and
without the use of complex animations, Blender was deemed
to be the most suitable solution.
Applying HMI to robotics is as important as the system
itself: it is critical that the user possesses, and is familiar with,
the right tools to work with the system. In order to organise all
of this information and give the desired control to the user, the
graphical user interface must be designed to be simple
and intuitive. In recent ROS distributions there is a tool named
rqt that is basically a framework for plugin development. In
rqt, a developer can build his/her own perspective from plugins
of all the existing GUI tools in ROS, namely image viewer,
terminal, 2D plot, node and package graphs, pose viewer and
even Rviz itself [10]. If the available plugins are not suitable
for the needs of a project, the developer can either edit an
existing plugin or even create his/her own plugin (either in
C++ or Python).
Remote experimental labs allow remotely sharing robot
middleware infrastructures in a modular way with the broader
scientific community, making it easier to compare and con-
tribute to the research of others. Many robotic researchers have
resorted to web technologies for remote robot experimentation,
data collection and HRI studies. There are examples of remote
access and control of robots from as early as 1995, in the
case of [17]. The arrival of new web technologies such as
HTML5 makes it possible for developers to create appealing
and sophisticated interfaces. With the use of protocols such
as rosbridge, the communication between a web browser and
ROS can be made through data messages contained in JSON
[18]. Besides displaying ROS information in the form of
images, we also need to transmit them over rosbridge to
this end, the ROS package named web video server was used.
Within this package there are two streaming options for
developers to use. The first option is based on the deprecated
package mjpeg_server, and consists in converting the video
stream from the desired ROS topic into an MJPEG stream (a
sequence of JPEG images); this stream can then be embedded

into any HTML <img> tag. The second option consists in
coding the video with the VP8 codec [19].
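As a sketch of what this looks like on the client side: the rosbridge protocol frames ROS traffic as JSON objects with an "op" field, and web_video_server serves an image topic at a stream URL that can be placed in an HTML <img> tag. The topic name, host and port below are hypothetical placeholders.

```python
# Sketch of the client-side messages described above. The "op"/"topic"/
# "type" fields follow the rosbridge JSON protocol; the stream URL
# follows web_video_server's query-parameter pattern. Hostnames and
# topic names are assumptions, not values from the paper.
import json

# A rosbridge subscription request, sent as JSON over a websocket:
subscribe_msg = json.dumps({
    "op": "subscribe",
    "topic": "/impep/left_camera/image_raw",
    "type": "sensor_msgs/Image",
})

# A web_video_server MJPEG stream URL for the same topic, suitable for
# embedding in an HTML <img> tag:
stream_url = (
    "http://example-lab-server:8080/stream"
    "?topic=/impep/left_camera/image_raw&type=mjpeg"
)

print(json.loads(subscribe_msg)["op"])  # subscribe
```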
The expected outcome of this work was a unified ROS-
supported framework designed so as to attain the objectives
laid down in section I, allowing the CASIR attention middle-
ware described in [5] to be used within the context defined
by those objectives and the use of the IMPEP platform.
Additionally, it is a desired property that this framework be
easily adaptable to conform with any robotic head with some
or all of the same characteristics as IMPEP, so as to be used
with any robotic platform with innate multisensory attention
capabilities.
In this system, either the simulated or the real version can be
running at any one time, both of them publishing sensor
information to the same ROS topics (a concept represented
by the Common Driver API module in Fig. 4). The published
topics can be subscribed by the attention middleware nodes
or seen directly by the remote and local users through the
respective GUIs. Commands, on the other hand, follow almost
the inverse path, the only difference being the non-existence
of a direct connection between the GUIs themselves and the
physical, as well as virtual, actuators. Manual control of both
versions of the robot can be made through a node in the
attention middleware using terminal commands, which can be
sent within the local GUI (see section III-D).
B. Implementation details for the Gazebo-based simulation
package
Three packages were developed in order to build
a complete robot model that is fully compatible with
ROS: impep_gazebo, impep_controller and
impep_description (see Fig. 5). The main package,
impep_gazebo, includes the world file, the avatar scripts
and the ROS launch file. The impep_description
package is responsible for the robot model itself and contains
the 3D meshes of each individual part (modelled using
Blender), which will be the links of our robot. Using the
meshes we can build the URDF (Unified Robot Description
Format) model, which is an XML format describing the links
and joints of the robot, defining the geometry, position and
collision mesh of each 3D component, and consequently
resulting in models such as represented in Fig. 6. Finally,
impep_controller includes the actuator models,
parameters and publishers.
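To make the URDF description concrete, a fragment of the kind the impep_description package might contain is sketched below. Link and joint names, mesh paths and all numeric values are hypothetical placeholders, not taken from the actual package; the revolute-joint limits illustrate the end-of-movement, effort and velocity restrictions discussed later.

```xml
<!-- Hypothetical URDF fragment: names, paths and values are
     illustrative placeholders, not the actual IMPEP model. -->
<robot name="impep">
  <link name="base_link">
    <visual>
      <geometry>
        <mesh filename="package://impep_description/meshes/base.dae"/>
      </geometry>
    </visual>
    <collision>
      <geometry>
        <box size="0.2 0.2 0.1"/>
      </geometry>
    </collision>
    <inertial>
      <mass value="1.5"/>
      <inertia ixx="0.01" ixy="0" ixz="0" iyy="0.01" iyz="0" izz="0.01"/>
    </inertial>
  </link>
  <link name="pan_link"/>
  <!-- Revolute joint with position, effort and velocity limits, of the
       kind used to emulate kinaesthetic sensors and safety mechanisms. -->
  <joint name="pan_joint" type="revolute">
    <parent link="base_link"/>
    <child link="pan_link"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="5.0" velocity="1.0"/>
  </joint>
</robot>
```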
1) IMPEP simulation sensors: The IMPEP has three
visual sensors: two RGB cameras, and a Microsoft Kinect
sensor. The RGB stereovision set-up, mounted so as to allow
pan, tilt and version using IMPEPs actuators, consists of a pair
of Guppy F-036 [20]. These were modelled as faithfully as
possible in the URDF IMPEP model, including their physical
characteristics (e.g. mass and body dimensions, the latter
also needing to match with the corresponding Blender model
characteristics) and technical specifications (e.g. frame rate,
resolution and bit depth).
In order to create a virtual camera with these specifications,
a Gazebo sensor of type "camera" was added and a Gazebo-ROS
plugin named libgazebo_ros_camera.so was attached
to both the right and left camera lens models. This plugin

Fig. 5: IMPEP model packages for simulation.

Fig. 6: IMPEP virtual model evolution. Model (1) was the pre-existing,
preliminary IMPEP model. Model (2) is the upgraded physical model of
IMPEP, completely to scale in terms of mass and dimensions. Finally, (3)
represents the final model, with the collision mesh and joint referentials.
is responsible for the publication of camera data to a rostopic
specified in its parameter definition. Additionally, the effect
of Gaussian noise was modelled in order to simulate residual
imperfections intrinsic to every real camera.
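A hedged sketch of what such a sensor definition could look like follows. Only the plugin filename comes from the text; the link name, topic names, resolution, noise parameters and update rate are illustrative assumptions.

```xml
<!-- Illustrative Gazebo sensor definition (values are placeholders):
     a "camera" sensor with Gaussian noise, exposed to ROS through
     the libgazebo_ros_camera.so plugin named in the text. -->
<gazebo reference="left_camera_link">
  <sensor type="camera" name="left_camera">
    <update_rate>30.0</update_rate>
    <camera>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <!-- Gaussian noise emulating residual imperfections of a real camera -->
      <noise>
        <type>gaussian</type>
        <mean>0.0</mean>
        <stddev>0.007</stddev>
      </noise>
    </camera>
    <plugin name="camera_controller" filename="libgazebo_ros_camera.so">
      <cameraName>impep/left_camera</cameraName>
      <imageTopicName>image_raw</imageTopicName>
    </plugin>
  </sensor>
</gazebo>
```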
For the depth camera, the Microsoft Kinect V1 RGB-D sensor,
a 3D model is natively available in Gazebo that follows
the body dimensions of a real Kinect; however, the remainder
of the parameters had to be inserted into the model by hand. For
the simulated depth camera to communicate with ROS, the
libgazebo_ros_openni_kinect.so plugin was used,
allowing us to define the camera namespace and topics.
In order to implement a virtual version of this feature, we
were forced to restrict the range of motion in certain joints;
as this relates also to the virtual actuators we will explain the
specifics of this implementation in the next section.
2) IMPEP simulation actuators: The IMPEP includes
different types of DC motors: two PMA-11A-100-01-E500ML
motors (one for pan, one for tilt) and two PMA-5A-80-01-E512ML
motors (one for each camera axis), all from
Harmonic Drive (further information about the motors in [21]).
The differentiation between fixed and revolute joints will

result from the low-level foundation implementing the virtual
actuators according to the technical specifications of each mo-
tor. The implementation of end of movement sensors consists
in creating an upper and lower movement limit in the revolute
joints, therefore emulating the function of the kinaesthetic
sensors of the real IMPEP. With these restrictions in place,
the virtual IMPEP will have the same range of motion as the
real one in every moving joint. In addition to movement, effort
and velocity limits were also implemented, not only to emulate
the safety mechanisms of the real IMPEP, but also to further
approximate the behaviour between both versions of the robot.
With all the limits and joint parameters defined, the
impep_controller ROS package was developed using the
libgazebo_ros_control.so plugin in order to allow
communication between Gazebo and ROS, similarly to the
camera plugins. This package is responsible for numerous
important tasks, namely implementing PID parameters, deal-
ing with publishing joint states, and converting them to TF
transforms for rviz and other ROS tools.
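The controller configuration such a package loads is typically a ros_control YAML file. The fragment below is an illustrative guess: the namespace, controller and joint names are hypothetical, and the PID gains are placeholders rather than the tuned CASIR values.

```yaml
# Illustrative ros_control configuration (names and gains are
# assumptions, not taken from the impep_controller package).
impep:
  joint_state_controller:
    type: joint_state_controller/JointStateController
    publish_rate: 50
  pan_position_controller:
    type: effort_controllers/JointPositionController
    joint: pan_joint
    pid: {p: 100.0, i: 0.01, d: 10.0}
```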
3) Environmental simulation: In Fig. 7 we have a direct
comparison between the work area of the simulated and real
IMPEP. Some key variables like distance to the table, table-top
and experimental object colour were approximated as much as
possible in the simulated environment. The rest of the simulated
laboratory was populated with roughly the same kind of static
objects (e.g. tables and bookshelves); some additional objects
in the room were purposely modelled as being red, so as to
add perceptually salient entities, which can be used as potential
distractors in attention studies [22].
4) Avatar and interaction simulation: As a preliminary step,
an animated scene of a simple walking skeleton controlling a male
3D model moving in a circular trajectory was implemented,
thus simulating a male subject walking in front of the IMPEP
set up. This was implemented in the human model XML
file itself and then included in the room_only.world file,
thus building the complete world where the IMPEP will be
inserted. More animations will be created in future work taking
this preliminary animation as a template, using more complex
coding and advanced technologies.
C. Implementation details for the rqt-based user interface
In most ROS frameworks, spatial visualisation is imple-
mented using Rviz. However, in spite of being a very complete
tool, using it standalone is not as simple or interactive as
required for our system. In order to capitalise on the advan-
tages of Rviz while adding increased flexibility in GUI design,
rqt_rviz was used [10]. This plugin embeds Rviz into an rqt
interface while keeping all of its features and functionalities;
however, unlike the rqt 2D visualisation plugin, it still has a
dependence on its ROS counterpart.
With the abundance of visual representations required to
monitor camera feeds or processing results from the attention
middleware (e.g. point clouds, 3D reconstructions, audio signal
waveforms, etc.), the developed GUI must be able to display
the greatest variety of information possible, while maintaining
an uncluttered dashboard so as to present a maximum level
of detail for each data visualisation, and all of this allowing
the greatest degree of on-the-fly reconfigurability possible.
For development and debugging purposes, the convenience of
not having to change windows in the Desktop to access text
terminals should be addressed. Therefore, the GUI dashboard
was configured so as to allow the display of text terminals in
embedded frames in the interface.
A GUI layout implementing these features is presented
in Fig. 8. The plugin used for 2D visualisation is called
rqt_image_view [10], an rqt version of ROS's image_view
[10], in which image_transport is used to provide
classes and nodes capable of transmitting images in arbitrary
over-the-wire representations, with no dependencies
between them. With this plugin, the developer can
abstract from the complexity of communication, seeing only
sensor_msgs/Image-type messages. Alas, image_view is not
very user-friendly, since the desired topic must be selected by
specifying it when running the tool in a terminal. Fortunately,
the rqt version sidesteps this issue by adding a dropdown
menu showing all of the sensor_msgs/Image messages
available. Two additional interesting features of this plugin
are save image and topic refresh buttons (relevant in case new
publisher nodes are launched). A third feature of the GUI is
the ability to embed a terminal in an interface frame, via the
Python GUI plugin rqt shell, which supports a fully functional
embedded XTerm [10]. An improvement to the terminal plugin
was made, allowing it to display two windows in the same
space (with the use of tabs). Finally, we implemented a user-
friendly package launcher using an experimental plugin named
rqt launch, allowing the user, among other things, to run and
stop selected launch files (and individual nodes from the active
launch file) chosen via a dropdown menu.
Using a .perspective configuration file, the user can
run the rqt interface on any computer with a ROS distribution
of Indigo or above and have it fully functional. We
were, therefore, able to meet the important requirement of
separating the computational workload resulting from the at-
tentional middleware processing and visualisation, as depicted
in Fig. 2.
A running instantiation of the GUI is presented in Fig. 9.
D. Implementation details for the web service supporting the
CASIR-IMPEP remote lab
The web service supporting the CASIR-IMPEP remote lab
uses a client-server architecture implemented with Rosbridge.
Additionally, since streams of image topics are to be displayed
in the HTML interface, thus requiring a sustained connection
with the appropriate bandwidth and upload/download
speeds, the Web Video Server tool was also used [10].
The first implementation step was to set up the server side.
As the laboratory has a firewalled LAN, a “tunnel” had to be
created in order to grant outside access to the main project
computer (that will be our server). After the connection was
configured, it was necessary to create and configure the video
stream as well using Web Video Server. This tool opens a
local port, and waits for incoming HTTP requests. When a

Citations

Book ChapterDOI
01 Jan 2018
TL;DR: An approach of how to implement such a software on the basis of the Robot Operating System (ROS) framework in order to enable a realistic simulation of the direct cooperation between human workers and robots is introduced.
Abstract: The idea of human-robot collaboration (HRC) in assembly follows the aim of wisely combining the special capabilities of human workers and of robots in order to increase productivity in flexible assembly processes and to reduce the physical strain on human workers. The high degree of cooperation goes along with the fact that the effort to introduce an HRC workstation is fairly high and HRC has hardly been implemented in current productions so far. A major reason for this is a lack of planning and simulation software for the HRC. Therefore, this paper introduces an approach of how to implement such a software on the basis of the Robot Operating System (ROS) framework in order to enable a realistic simulation of the direct cooperation between human workers and robots.

3 citations


References

Proceedings Article
01 Jan 2009
TL;DR: This paper discusses how ROS relates to existing robot software frameworks, and briefly overview some of the available application software which uses ROS.
Abstract: This paper gives an overview of ROS, an opensource robot operating system. ROS is not an operating system in the traditional sense of process management and scheduling; rather, it provides a structured communications layer above the host operating systems of a heterogenous compute cluster. In this paper, we discuss how ROS relates to existing robot software frameworks, and briefly overview some of the available application software which uses ROS.

7,367 citations


"A visualisation and simulation fram..." refers background in this paper

  • ...The ROS framework involves several core concepts, such as packages, nodes, topics, services and messages – please see [10] and [11] for more information....

    [...]


Journal ArticleDOI
TL;DR: The goal of YARP is to minimize the effort devoted to infrastructure-level software development by facilitating code reuse, modularity and so maximize research-level development and collaboration by encapsulating lessons from the experience in building humanoid robots.
Abstract: We describe YARP, Yet Another Robot Platform, an open-source project that encapsulates lessons from our experience in building humanoid robots. The goal of YARP is to minimize the effort devoted to infrastructure-level software development by facilitating code reuse, modularity and so maximize research-level development and collaboration. Humanoid robotics is a "bleeding edge" field of research, with constant flux in sensors, actuators, and processors. Code reuse and maintenance is therefore a significant challenge. We describe the main problems we faced and the solutions we adopted. In short, the main features of YARP include support for inter-process communication, image processing as well as a class hierarchy to ease code reuse across different hardware platforms. YARP is currently used and tested on Windows, Linux and QNX6 which are common operating systems used in robotics.

589 citations


"A visualisation and simulation fram..." refers methods in this paper

  • ...It is a very specific simulator with an unique architecture, it uses YARP (Yet Another Robot Platform [8]) instead of ROS and a network wrapper for remote access....

    [...]


Journal ArticleDOI
TL;DR: Some current trends and challenges of state-of-the-art technologies in the development of remote laboratories in several areas related with industrial electronics education are identified and discussed.
Abstract: Remote laboratories have been introduced during the last few decades into engineering education processes as well as integrated within e-learning frameworks offered to engineering and science students. Remote laboratories are also being used to support life-long learning and student's autonomous learning activities. In this paper, after a brief overview of state-of-the-art technologies in the development of remote laboratories and presentation of recent and interesting examples of remote laboratories in several areas related with industrial electronics education, some current trends and challenges are also identified and discussed.

398 citations


"A visualisation and simulation fram..." refers background in this paper

  • ...To meet this demand, a recent trend has been the development of remote robotic laboratories [2]....

    [...]


Proceedings ArticleDOI
19 Aug 2008
TL;DR: The prototype of a new computer simulator for the humanoid robot iCub, developed as part of a joint effort with the European project "ITALK" on the integration and transfer of action and language knowledge in cognitive robots.
Abstract: This paper presents the prototype of a new computer simulator for the humanoid robot iCub. The iCub is a new open-source humanoid robot developed as a result of the "RobotCub" project, a collaborative European project aiming at developing a new open-source cognitive robotics platform. The iCub simulator has been developed as part of a joint effort with the European project "ITALK" on the integration and transfer of action and language knowledge in cognitive robots. This is available open-source to all researchers interested in cognitive robotics experiments with the iCub humanoid platform.

177 citations


"A visualisation and simulation fram..." refers background in this paper

  • ...There are several advantages in robotic simulation, the most important of which the capability to test new algorithms and routines, reproduce and repeat experiments, generate data under different conditions, neuroevolve robots and benchmark any of the robot characteristics, without the risk of damaging the real robot [1]....

    [...]


Book ChapterDOI
01 Jan 2017
TL;DR: Rosbridge provides a simple, socket-based programmatic access to robot interfaces and algorithms provided by ROS, the open-source “Robot Operating System”, the current state-of-the-art in robot middleware.
Abstract: We present rosbridge, a middleware abstraction layer which provides robotics technology with a standard, minimalist applications development framework accessible to applications programmers who are not themselves roboticists. Rosbridge provides a simple, socket-based programmatic access to robot interfaces and algorithms provided (for now) by ROS, the open-source “Robot Operating System”, the current state-of-the-art in robot middleware. In particular, it facilitates the use of web technologies such as Javascript for the purpose of broadening the use and usefulness of robotic technology. We demonstrate potential applications in the interface design, education, human-robot interaction and remote laboratory environments.

140 citations