Proceedings ArticleDOI

A visualisation and simulation framework for local and remote HRI experimentation

01 Nov 2016 - Vol. 2016, pp 1-8
TL;DR: This architecture has the purpose of extending the usability of a system devised in previous work by this research team during the CASIR (Coordinated Attention for Social Interaction with Robots) project, and was implemented using ROS.
Abstract: In this text, we will present work on the design and development of a ROS-based (Robot Operating System) remote 3D visualisation, control and simulation framework. This architecture has the purpose of extending the usability of a system devised in previous work by this research team during the CASIR (Coordinated Attention for Social Interaction with Robots) project. The proposed solution was implemented using ROS, and designed to address the needs of two user groups — local and remote users and developers. The framework consists of: (1) a fully functional simulator integrated with the ROS environment, including a faithful representation of a robotic platform, a human model with animation capabilities and enough features for enacting human-robot interaction scenarios, and a virtual experimental setup with similar features to the real laboratory workspace; (2) a fully functional and intuitive user interface for monitoring and development; (3) a remote robotic laboratory that can connect remote users to the framework via a web browser. The proposed solution was thoroughly and systematically tested under operational conditions, so as to assess its qualities in terms of features, ease of use and performance. Finally, conclusions concerning the success and potential of this research and development effort are drawn, and foundations for future work are proposed.

Summary (2 min read)

Introduction

  • Fortunately, with the increase of computational power, now more than ever, simulation and remote access save time and resources (both physical and budget-related), increasing the productivity of a research team and allowing the community to seamlessly work on the same framework.
  • To meet this demand, a recent trend has been the development of remote robotic laboratories [2].
  • The combined set of desired features resulting from this demand and its relationship with potential user types is depicted in Fig. 1.

A. ROS framework for the CASIR-IMPEP platform

  • ROS is a flexible framework for writing modular robot software, capable of creating complex and robust behaviour in different types of robotic platforms.
  • In rqt, a developer can build his/her own perspective from plugins of all the existing GUI tools in ROS, namely image viewer, terminal, 2D plot, node and package graphs, pose viewer and even Rviz itself [10].
  • In addition to movement, effort and velocity limits were also implemented, not only to emulate the safety mechanisms of the real IMPEP, but also to further approximate the behaviour between both versions of the robot.
  • The developer can abstract away from the complexity of communication, seeing only sensor_msgs/Image-type messages.
  • A running instantiation of the GUI is presented in Fig. 9.

IV. RESULTS AND DISCUSSION

  • Exhaustive tests were also conducted to evaluate visualisation performance, either running the GUI directly on the main computer or passing topics to the visualisation computer, where they were shown using the rqt interface running on a local ROS installation.
  • Performance was found to be coherent with previous results: CPU load drops significantly after taking out the UI, and even further with remote visualisation.
  • In order to benchmark network resource usage, the remote lab was tested through three separate internet connections, specified in Table III.
  • Additionally, in all experimental runs the chosen browser was Google Chrome (found to be the most optimised browser for web_video_server applications; see point 3, Latency, of [10]).


2016 23º Encontro Português de Computação Gráfica e Interação (EPCGI)

A Visualisation and Simulation Framework for Local and Remote HRI Experimentation

André Gradil and João Filipe Ferreira
Institute of Systems and Robotics (ISR)
Dept. of Electrical & Computer Eng.
University of Coimbra
Pinhal de Marrocos, Polo II
3030-290 COIMBRA, Portugal
Abstract—In this text, we will present work on the design and development of a ROS-based (Robot Operating System¹) remote 3D visualisation, control and simulation framework. This architecture has the purpose of extending the usability of a system devised in previous work by this research team during the CASIR (Coordinated Attention for Social Interaction with Robots) project. The proposed solution was implemented using ROS, and designed to address the needs of two user groups — local and remote users and developers. The framework consists of: (1) a fully functional simulator integrated with the ROS environment, including a faithful representation of a robotic platform, a human model with animation capabilities and enough features for enacting human-robot interaction scenarios, and a virtual experimental setup with similar features to the real laboratory workspace; (2) a fully functional and intuitive user interface for monitoring and development; (3) a remote robotic laboratory that can connect remote users to the framework via a web browser. The proposed solution was thoroughly and systematically tested under operational conditions, so as to assess its qualities in terms of features, ease of use and performance. Finally, conclusions concerning the success and potential of this research and development effort are drawn, and foundations for future work are proposed.

Index Terms—Visualisation, Simulation, Remote, User Interface, ROS, Gazebo, Framework.
I. INTRODUCTION

Robots are often too big to transport, too expensive to replicate, or they may simply not be available to a researcher or developer at a convenient moment in time. Fortunately, with the increase of computational power, now more than ever, simulation and remote access save time and resources (both physical and budget-related), increasing the productivity of a research team and allowing the community to seamlessly work on the same framework. There are several advantages to robotic simulation, the most important of which is the capability to test new algorithms and routines, reproduce and repeat experiments, generate data under different conditions, neuro-evolve robots and benchmark any of the robot's characteristics, without the risk of damaging the real robot [1]. In fact, having the possibility to repeat complex experiments without external variables that may influence their outcome is a definite advantage, especially in human-robot interaction (HRI) applications, which depend critically on human subject availability and for which exact repetition is impossible precisely due to this human factor. Additionally, there is often a need to open the project to the broader research community, or simply to give the development team access from anywhere outside the laboratory. To meet this demand, a recent trend has been the development of remote robotic laboratories [2]. On the other hand, the increasing complexity of robotic systems, namely resulting from the number of modules and functionalities they comprise, can overwhelm a developer or user trying to monitor their operation; therefore, having all of the data organised in a neat and clear fashion is also paramount.

Fig. 1: Desired features for most contemporary robotic development frameworks.

The combined set of desired features resulting from this demand, and its relationship with potential user types, is depicted in Fig. 1. The overall objective of the work presented in this text was to endow the robotic system developed during the FCT-funded project CASIR, devoted to studying the effect of artificial multisensory attention in human-robot interaction², with these features, as a follow-up on future work planned in [3] (see Fig. 2). This system is supported by the IMPEP infrastructure (acronym for Integrated Multimodal Perception Experimental Platform); see Fig. 3³. More specifically, the work presented in this text had the following main goals: (1) the development of IMPEP hardware and simulator access for local users, supported by an intuitive local GUI; (2) providing access to remote users through a remote robotic lab.

¹ In spite of its name, ROS is not an actual operating system in the traditional sense of process management and scheduling.
² FCT Contract PTDC/EEI-AUT/3010/2012, which ran from 15-04-2013 until 31-07-2015. The motivations for this work can be found in [4], while conceptual and implementation details are reported in [5].
³ For more information about this platform, please refer to [3], [6].

Fig. 2: CASIR-IMPEP system architecture overview [3]. Only the bottom part of this diagram was originally fully implemented during the CASIR project, while the top part was developed as an expansion in the scope of the work presented in this text.

Fig. 3: The Integrated Multimodal Perception Experimental Platform [3], including actuators and respective degrees of freedom, and mounted sensors.
II. RELATED WORK

As the effort of applying a systematic approach to meeting the demand of implementing features such as those presented in Fig. 1 is a recent trend, only a handful of related works exists; these will be described in the following text.

The Care-O-Bot Research project [7] has a similar architecture to the CASIR framework; however, it deals with a different application scope via a mobile manipulation platform. The iCub simulator was created to complement the iCub project. It is a very specific simulator with a unique architecture; it uses YARP (Yet Another Robot Platform [8]) instead of ROS, and a network wrapper for remote access. Another project, "The Construct Sim" [9], consists of a cloud-based tool for remote robotic simulation. It has a very limited free user experience, both in simulation time and in computational resources, so in order to properly simulate a scenario one has to resort to the paid services.

The PR2 and Care-O-Bot were found to possess all of the desired features displayed in Fig. 1, while the iCub lacks a remote lab, and the Construct Sim has no GUI nor hardware access. In terms of availability, while the PR2 and iCub projects have their features freely accessible, hardware can only be accessed via purchase, which in both cases is rather expensive. The Construct Sim has several payment options, but does not make hardware available. Finally, for the Care-O-Bot, the price of every module is provided by the company on request.

The contributions of this work, represented in Fig. 4 and resulting from the implementation of an integrated framework boasting the features presented in Fig. 1, consist of providing the full feature set with the widest availability possible. This will allow the research team to access and develop the attention middleware both locally and remotely, and also make a demonstrator of the CASIR framework available to the wider scientific community. Unlike related work, the framework described in this paper will be developed so as to provide all the features of Fig. 1 freely, and, in the case of remote lab access by a user external to the local research team, with reservation of timeslots, at all times ensuring system and hardware security.
III. IMPLEMENTATION

A. ROS framework for the CASIR-IMPEP platform

ROS is a flexible framework for writing modular robot software, capable of creating complex and robust behaviour in different types of robotic platforms. The ROS framework involves several core concepts, such as packages, nodes, topics, services and messages; please see [10] and [11] for more information. ROS is both modular and language-independent; in other words, users can create nodes in C++, Python, Octave and Lisp without losing the possibility of communication between them, as long as the messaging interface is maintained.
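As an illustration of this modularity, consider the following minimal sketch of a ROS1 (rospy) publisher node; the node and topic names are hypothetical placeholders, and a subscriber written in C++ (or any other supported language) against the same std_msgs/String interface would interoperate with it transparently.

```python
#!/usr/bin/env python
# Minimal sketch of ROS's language-independent, topic-based messaging.
# Node and topic names are illustrative placeholders only.
import rospy
from std_msgs.msg import String

def talker():
    rospy.init_node('example_talker')                        # register with the ROS master
    pub = rospy.Publisher('chatter', String, queue_size=10)  # advertise a topic
    rate = rospy.Rate(10)                                    # 10 Hz publishing loop
    while not rospy.is_shutdown():
        # Any subscriber, in any language, that honours the std_msgs/String
        # message interface will receive these messages unchanged.
        pub.publish(String(data='hello from Python'))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```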
Virtual simulation is one of the most widely accepted recent technologies in robot development. There are numerous software tools used for simulation, with great diversity in features (supporting a variety of robotic middleware, available sensors and actuators, and compatibility with several types of robots) and also in infrastructure (code quality, technical and community support). According to [12], there are currently about 40 simulation tools used by the scientific community. However, this work follows the CASIR project, which is supported by ROS, thereby narrowing the universe of development frameworks of interest to Gazebo [13], MORSE [14], V-Rep [15] and Webots [16]. Comparing these frameworks in terms of features, Gazebo and Webots stand out among the group; however, Gazebo is more interesting in terms of support infrastructure. Moreover, only Gazebo provides the percentage of coverage from function and branch testing (52.9% and 44.5%, respectively), as seen on the Gazebo website [13]; this means that 52.9% of the functions (or subroutines) in the program were called in tests, and 44.5% of branches were executed.

Fig. 4: Conceptual diagram for the IMPEP ROS framework for remote 3D visualisation, control and simulation. The modules in orange refer to the contributions of the work presented herewith, namely the simulator represented by impep_simulation, the hardware access comprising not only the IMPEP but also its connection through the common driver API, the GUI consisting of rqt-based software, and finally the remote lab supported by the CASIR-IMPEP web service.
To build the models, several 3D modelling tools were compared, namely Maya, 3ds Max and Blender. These solutions are very similar in features; however, given the simplicity of the modelling demands of the work reported in this paper, which did not require complex animations, Blender was deemed to be the most suitable solution.
Applying HMI principles to robotics is as important as the system itself: it is critical that the user possesses, and is familiar with, the right tools to work with the system. In order to organise all of this information and give the desired control to the user, the graphical user interface must be designed to be simple and intuitive. Recent ROS distributions include a tool named rqt, which is essentially a framework for plugin development. In rqt, a developer can build his/her own perspective from plugins of all the existing GUI tools in ROS, namely image viewer, terminal, 2D plot, node and package graphs, pose viewer and even Rviz itself [10]. If the available plugins are not suitable for the needs of a project, the developer can either edit an existing plugin or create his/her own (in either C++ or Python), as sketched below.
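As a rough sketch of the latter option, a bare-bones Python rqt plugin might look as follows, assuming the standard rqt_gui_py and python_qt_binding APIs (on older Qt4-based distributions the QLabel import comes from QtGui instead of QtWidgets); the plugin and class names are hypothetical, and the plugin.xml export entry and package manifest that rqt needs for discovery are not shown.

```python
# Hypothetical skeleton of a custom rqt plugin in Python.
from rqt_gui_py.plugin import Plugin
from python_qt_binding.QtWidgets import QLabel

class ExamplePlugin(Plugin):
    def __init__(self, context):
        super(ExamplePlugin, self).__init__(context)
        self.setObjectName('ExamplePlugin')
        # Any QWidget can serve as the plugin's face; a real plugin would
        # build a richer widget (viewers, buttons, plots) here.
        self._widget = QLabel('Custom rqt plugin placeholder')
        self._widget.setWindowTitle('Example')
        # Embeds the widget into the current rqt perspective, alongside
        # rqt_image_view, rqt_rviz, rqt_shell and the other plugins.
        context.add_widget(self._widget)

    def shutdown_plugin(self):
        # Unregister any publishers/subscribers created by the plugin here.
        pass
```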
Remote experimental labs allow robot middleware infrastructures to be shared remotely, in a modular way, with the broader scientific community, making it easier to compare with and contribute to the research of others. Many robotics researchers have resorted to web technologies for remote robot experimentation, data collection and HRI studies; there are examples of remote access and control of robots from as early as 1995, as in the case of [17]. The arrival of new web technologies such as HTML5 makes it possible for developers to create appealing and sophisticated interfaces. With the use of protocols such as rosbridge, the communication between a web browser and ROS can be made through data messages contained in JSON [18]. Besides displaying ROS information in the form of images, we also need to transmit them over rosbridge; to this end, the ROS package named web_video_server was used. Within this package, there are two streaming options for developers to use. The first option is based on the deprecated package mjpeg_server, and consists in converting the video stream from the desired ROS topic into an MJPEG stream (a sequence of JPEG images); this stream can then be embedded into any HTML <img> tag. The second option consists in encoding the video with the VP8 codec [19].
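To make the rosbridge exchange concrete, the following client-side sketch speaks the JSON protocol directly, using the third-party websocket-client Python package; host, port and topic names are placeholders (rosbridge_server listens on ws://<host>:9090 by default), and image streams themselves would instead be fetched from web_video_server over HTTP, e.g. via a URL of the form http://<host>:8080/stream?topic=/camera/image_raw embedded in an <img> tag.

```python
# Minimal sketch of the rosbridge JSON protocol (pip install websocket-client).
import json
import websocket

ws = websocket.create_connection('ws://localhost:9090')  # default rosbridge port

# Subscribe: rosbridge forwards each ROS message as a JSON object.
ws.send(json.dumps({'op': 'subscribe',
                    'topic': '/joint_states',
                    'type': 'sensor_msgs/JointState'}))
incoming = json.loads(ws.recv())   # {'op': 'publish', 'topic': ..., 'msg': {...}}
print(incoming['msg'].get('position'))

# Publishing follows the same pattern: advertise the topic, then publish.
ws.send(json.dumps({'op': 'advertise',
                    'topic': '/cmd_text',
                    'type': 'std_msgs/String'}))
ws.send(json.dumps({'op': 'publish',
                    'topic': '/cmd_text',
                    'msg': {'data': 'hello'}}))
ws.close()
```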
The expected outcome of this work was a unified ROS-supported framework designed so as to attain the objectives laid down in section I, allowing the CASIR attention middleware described in [5] to be used within the context defined by those objectives and the use of the IMPEP platform. Additionally, it is a desired property that this framework be easily adaptable to conform to any robotic head with some or all of the same characteristics as IMPEP, so that it can be used with any robotic platform with innate multisensory attention capabilities.
In this system, either the simulated or the real version of the robot can be running at any one time, both publishing sensor information to the same ROS topics (a concept represented by the Common Driver API module in Fig. 4). The published topics can be subscribed to by the attention middleware nodes, or viewed directly by remote and local users through the respective GUIs. Commands, on the other hand, follow almost the inverse path, the only difference being the non-existence of a direct connection between the GUIs themselves and the physical (as well as virtual) actuators. Manual control of both versions of the robot can be performed through a node in the attention middleware using terminal commands, which can be sent from within the local GUI (see section III-D).
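The sketch below illustrates this common-driver concept from the consumer side: a node subscribing to an image topic is indifferent to whether the frames originate from the real camera driver or from the Gazebo camera plugin, since both publish the same message type on the same topic. The topic name is a hypothetical placeholder, not the actual CASIR-IMPEP namespace.

```python
# Consumer node agnostic to the real/simulated origin of its input.
import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    # Identical handling whether the frame came from a real camera driver
    # or from libgazebo_ros_camera.so publishing on the same topic.
    rospy.loginfo('frame %dx%d, encoding %s', msg.width, msg.height, msg.encoding)

rospy.init_node('impep_consumer')
rospy.Subscriber('/impep/left_camera/image_raw', Image, on_image)
rospy.spin()
```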
B. Implementation details for the Gazebo-based simulation package

Three packages were developed in order to build a complete robot model that is fully compatible with ROS: impep_gazebo, impep_controller and impep_description (see Fig. 5). The main package, impep_gazebo, includes the world file, the avatar scripts and the ROS launch file. The impep_description package is responsible for the robot model itself and contains the 3D meshes of each individual part (modelled using Blender), which constitute the links of our robot. Using the meshes, we can build the URDF (Unified Robot Description Format) model, an XML format describing the links and joints of the robot and defining the geometry, position and collision mesh of each 3D component, resulting in models such as the ones represented in Fig. 6. Finally, impep_controller includes the actuator models, parameters and publishers.
1) IMPEP simulation sensors: The IMPEP has three visual sensors: two RGB cameras and a Microsoft Kinect sensor. The RGB stereovision set-up, mounted so as to allow pan, tilt and version using IMPEP's actuators, consists of a pair of Guppy F-036 cameras [20]. These were modelled as faithfully as possible in the IMPEP URDF model, including their physical characteristics (e.g. mass and body dimensions, the latter also needing to match the corresponding Blender model) and technical specifications (e.g. frame rate, resolution and bit depth).

Fig. 5: IMPEP model packages for simulation.

Fig. 6: IMPEP virtual model evolution. Model (1) was the pre-existing, preliminary IMPEP model. Model (2) is the upgraded physical model of IMPEP, completely to scale in terms of mass and dimensions. Finally, (3) represents the final model, with the collision mesh and joint referentials.

In order to create a virtual camera with these specifications, a Gazebo sensor of type "camera" was added, and a Gazebo-ROS plugin named libgazebo_ros_camera.so was attached to both the right and left camera lens models. This plugin is responsible for publishing camera data to a ROS topic specified in its parameter definition. Additionally, the effect of Gaussian noise was modelled in order to simulate the residual imperfections intrinsic to every real camera.
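Gazebo's camera noise is configured in the sensor description rather than in user code, but conceptually it amounts to the additive model sketched below; the standard deviation used here is an arbitrary placeholder, and this numpy snippet is only a conceptual illustration of the effect, not the simulator's actual implementation.

```python
# Conceptual sketch of additive Gaussian noise on a rendered camera frame.
import numpy as np

def add_gaussian_noise(frame, sigma=2.0):
    """frame: HxWx3 uint8 image; returns a noisy copy clipped to [0, 255]."""
    noise = np.random.normal(loc=0.0, scale=sigma, size=frame.shape)
    return np.clip(frame.astype(np.float64) + noise, 0, 255).astype(np.uint8)

clean = np.full((480, 640, 3), 128, dtype=np.uint8)  # stand-in for a rendered frame
noisy = add_gaussian_noise(clean)
```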
For the depth camera, the Microsoft Kinect V1 RGB-D sensor, a Microsoft Kinect 3D model is natively available in Gazebo that follows the body dimensions of a real Kinect; however, the remainder of the parameters had to be inserted into the model by hand. For the simulated depth camera to communicate with ROS, the libgazebo_ros_openni_kinect.so plugin was used, allowing us to define the camera namespace and topics.

In order to implement a virtual version of this feature, we were forced to restrict the range of motion of certain joints; as this also relates to the virtual actuators, we will explain the specifics of this implementation in the next section.
2) IMPEP simulation actuators: The IMPEP includes different types of DC motors: two PMA-11A-100-01-E500ML motors (one for pan, one for tilt) and two PMA-5A-80-01-E512ML motors (one for each camera axis), all from Harmonic Drive (further information about the motors can be found in [21]). The differentiation between fixed and revolute joints results from the low-level foundation implementing the virtual actuators according to the technical specifications of each motor. The implementation of end-of-movement sensors consists in creating upper and lower movement limits in the revolute joints, thereby emulating the function of the kinaesthetic sensors of the real IMPEP. With these restrictions in place, the virtual IMPEP has the same range of motion as the real one in every moving joint. In addition to movement limits, effort and velocity limits were also implemented, not only to emulate the safety mechanisms of the real IMPEP, but also to further approximate the behaviour of both versions of the robot.

With all the limits and joint parameters defined, the impep_controller ROS package was developed using the libgazebo_ros_control.so plugin in order to allow communication between Gazebo and ROS, similarly to the camera plugins. This package is responsible for numerous important tasks, namely implementing PID parameters, publishing joint states, and converting them to TF transforms for rviz and other ROS tools.
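For illustration, commanding one of these virtual joints while honouring its limits could look like the sketch below; the controller topic follows the usual ros_control naming convention, but the topic name and limit values are hypothetical, not those of the actual impep_controller configuration.

```python
# Sketch: command a pan position, saturating at the virtual
# end-of-movement limits that emulate IMPEP's kinaesthetic sensors.
import rospy
from std_msgs.msg import Float64

PAN_LIMITS = (-1.0, 1.0)  # placeholder lower/upper bounds, in radians

def clamp(value, lower, upper):
    return max(lower, min(upper, value))

rospy.init_node('pan_commander')
pub = rospy.Publisher('/impep/pan_position_controller/command',
                      Float64, queue_size=1)
rospy.sleep(0.5)  # allow the publisher to connect
pub.publish(Float64(data=clamp(1.5, *PAN_LIMITS)))  # saturates to 1.0 rad
```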
3) Environmental simulation: Fig. 7 shows a direct comparison between the work areas of the simulated and real IMPEP. Some key variables, such as the distance to the table and the table-top and experimental object colours, were approximated as closely as possible in the simulated environment. The rest of the simulated laboratory was populated with roughly the same kinds of static objects (e.g. tables and bookshelves); some additional objects in the room were purposely modelled as being red, so as to add perceptually salient entities, which can be used as potential distractors in attention studies [22].
4) Avatar and interaction simulation: As a preliminary step, an animated scene was implemented in which a simple walking skeleton controls a male 3D model moving along a circular trajectory, thus simulating a male subject walking in front of the IMPEP set-up. This was implemented in the human model XML file itself and then included in the room_only.world file, thus building the complete world into which the IMPEP is inserted. More animations will be created in future work, taking this preliminary animation as a template and using more complex coding and advanced technologies.
C. Implementation details for the rqt-based user interface

In most ROS frameworks, spatial visualisation is implemented using Rviz. However, in spite of being a very complete tool, using it standalone is not as simple or interactive as required for our system. In order to capitalise on the advantages of Rviz while adding increased flexibility in GUI design, rqt_rviz was used [10]. This plugin embeds Rviz into an rqt interface while keeping all of its features and functionalities; however, unlike the rqt 2D visualisation plugin, it still depends on its ROS counterpart.

Given the abundance of visual representations required to monitor camera feeds or processing results from the attention middleware (e.g. point clouds, 3D reconstructions, audio signal waveforms, etc.), the developed GUI must be able to display the greatest possible variety of information while maintaining an uncluttered dashboard, so as to present a maximum level of detail for each data visualisation, all while allowing the greatest possible degree of on-the-fly reconfigurability. For development and debugging purposes, the convenience of not having to change windows in the desktop to access text terminals should also be addressed. Therefore, the GUI dashboard was configured so as to allow the display of text terminals in frames embedded in the interface.
A GUI layout implementing these features is presented in Fig. 8. The plugin used for 2D visualisation is called rqt_image_view [10], an rqt version of ROS's image_view [10], in which the system uses image_transport to provide classes and nodes capable of transmitting images in arbitrary over-the-wire representations, with no dependencies between them. With this plugin, the developer can abstract away from the complexity of communication, seeing only sensor_msgs/Image-type messages. Alas, image_view is not very user-friendly, since the desired topic must be specified when running the tool in a terminal. Fortunately, the rqt version sidesteps this issue by adding a dropdown menu showing all of the available sensor_msgs/Image messages. Two additional interesting features of this plugin are the save-image and topic-refresh buttons (the latter relevant in case new publisher nodes are launched). A third feature of the GUI is the ability to embed a terminal in an interface frame, via the Python GUI plugin rqt_shell, which supports a fully functional embedded XTerm [10]. An improvement to the terminal plugin was made, allowing it to display two windows in the same space (with the use of tabs). Finally, we implemented a user-friendly package launcher using an experimental plugin named rqt_launch, allowing the user, among other things, to run and stop selected launch files (and individual nodes from the active launch file) chosen via a dropdown menu.
Using the saved .perspective configuration file, the user can run the rqt interface, fully functional, on any computer with a ROS distribution version equal to or above Indigo. We were, therefore, able to meet the important requirement of separating the computational workload of attentional middleware processing from that of visualisation, as depicted in Fig. 2.
A running instantiation of the GUI is presented in Fig. 9.
D. Implementation details for the web service supporting the CASIR-IMPEP remote lab

The web service supporting the CASIR-IMPEP remote lab uses a client-server architecture implemented with rosbridge. Additionally, since streams of image topics are to be displayed in the HTML interface, therefore requiring a sustained connection with appropriate bandwidth and upload/download speeds, the web_video_server tool was also used [10].

The first implementation step was to set up the server side. As the laboratory has a firewalled LAN, a "tunnel" had to be created in order to grant outside access to the main project computer (which acts as our server). After the connection was configured, it was also necessary to create and configure the video stream using web_video_server. This tool opens a local port and waits for incoming HTTP requests. When a

Citations
Book ChapterDOI
01 Jan 2018
TL;DR: An approach is introduced for implementing such software on the basis of the Robot Operating System (ROS) framework, in order to enable a realistic simulation of the direct cooperation between human workers and robots.
Abstract: The idea of human-robot collaboration (HRC) in assembly follows the aim of wisely combining the special capabilities of human workers and of robots in order to increase productivity in flexible assembly processes and to reduce the physical strain on human workers. The high degree of cooperation goes along with the fact that the effort to introduce an HRC workstation is fairly high, and HRC has hardly been implemented in current productions so far. A major reason for this is a lack of planning and simulation software for HRC. Therefore, this paper introduces an approach for implementing such software on the basis of the Robot Operating System (ROS) framework, in order to enable a realistic simulation of the direct cooperation between human workers and robots.

3 citations

References
Proceedings ArticleDOI
01 Nov 2014
TL;DR: The state of the art in dynamics simulation is surveyed and the analysis of an online survey about the use of dynamics simulation in the robotics research community finds Gazebo emerges as the best choice among the open-source projects, while V-Rep is the preferred commercial simulator.
Abstract: The number of tools for dynamics simulation has grown substantially in the last few years. Humanoid robots, in particular, make extensive use of such tools for a variety of applications, from simulating contacts to planning complex motions. It is necessary for the humanoid robotics community to have a systematic evaluation to assist in choosing which of the available tools is best for their research. This paper surveys the state of the art in dynamics simulation and reports on the analysis of an online survey about the use of dynamics simulation in the robotics research community. The major requirements for robotics researchers are better physics engines and open-source software. Despite the numerous tools, there is not a general-purpose simulator which dominates the others in terms of performance or application. However, for humanoid robotics, Gazebo emerges as the best choice among the open-source projects, while V-Rep is the preferred commercial simulator. The survey report has been instrumental for choosing Gazebo as the base for the new simulator for the iCub humanoid robot.

69 citations


"A visualisation and simulation fram..." refers background in this paper

  • ...According to [12], currently there are about 40 simulation tools used by the scientific community....


Journal ArticleDOI
TL;DR: This review intends to provide an overview of the state of the art in the modeling and implementation of automatic attentional mechanisms for socially interactive robots by summarizing the contributions already made in these matters in robotic cognitive systems research, and drawing conclusions that may suggest a roadmap for future successful research efforts.
Abstract: This review intends to provide an overview of the state of the art in the modeling and implementation of automatic attentional mechanisms for socially interactive robots. Humans assess and exhibit intentionality by resorting to multisensory processes that are deeply rooted within low-level automatic attention-related mechanisms of the brain. For robots to engage with humans properly, they should also be equipped with similar capabilities. Joint attention, the precursor of many fundamental types of social interactions, has been an important focus of research in the past decade and a half, therefore providing the perfect backdrop for assessing the current status of state-of-the-art automatic attentional-based solutions. Consequently, we propose to review the influence of these mechanisms in the context of social interaction in cutting-edge research work on joint attention. This will be achieved by summarizing the contributions already made in these matters in robotic cognitive systems research, by identifying the main scientific issues to be addressed by these contributions and analyzing how successful they have been in this respect, and by consequently drawing conclusions that may suggest a roadmap for future successful research efforts.

47 citations


"A visualisation and simulation fram..." refers background in this paper

  • ...The motivations for this work can be found in [4], while conceptual and implementation details are reported in [5]....


Journal ArticleDOI
TL;DR: Modulation of the lateralized components revealed that the color red captured and later held the attention in both positive and negative conditions, but not in a neutral condition, indicating that an emotional context can alter color’s impact both on attention and motor behavior.
Abstract: The color red is known to influence psychological functioning, having both negative (e.g., blood, fire, danger) and positive (e.g., sex, food) connotations. The aim of our study was to assess the attentional capture by red-colored images, and to explore the modulatory role of emotional valence in this process, as postulated by Elliot and Maier's (2012) color-in-context theory. Participants completed a dot-probe task with each cue comprising two images of equal valence and arousal, one containing a prominent red object and the other an object of different coloration. Reaction times were measured, as well as the event-related lateralizations of the EEG. Modulation of the lateralized components revealed that the color red captured and later held the attention in both positive and negative conditions, but not in a neutral condition. An overt motor response to the target stimulus was affected mainly by attention lingering over the visual field where the red cue had been flashed. However, a weak influence of the valence could still be detected in reaction times. Therefore, red seems to guide attention, specifically in emotionally-valenced circumstances, indicating that an emotional context can alter color's impact both on attention and motor behavior.

47 citations


"A visualisation and simulation fram..." refers background in this paper

  • ...tables and bookshelves); some additional objects in the room were purposely modelled as being red, so as to add perceptually salient entities, which can be used as potential distractors in attention studies [22]....


Proceedings ArticleDOI
17 Dec 2015
TL;DR: The main components comprising the action-perception loop of an overarching framework implementing artificial attention are introduced, designed to fulfil the requirements of social interaction (i.e., reciprocity, and awareness), with strong inspiration on current theories in functional neuroscience.
Abstract: In this paper, we introduce the main components comprising the action-perception loop of an overarching framework implementing artificial attention, designed to fulfil the requirements of social interaction (i.e., reciprocity, and awareness), with strong inspiration on current theories in functional neuroscience. We demonstrate the potential of our framework, by showing how it exhibits coherent behaviour without any inbuilt prior expectations regarding the experimental scenario. Current research in cognitive systems for social robots has suggested that automatic attention mechanisms are essential to social interaction. In fact, we hypothesise that enabling artificial cognitive systems with middleware implementing these mechanisms will empower robots to perform adaptively and with a higher degree of autonomy in complex and social environments. However, this type of assumption is yet to be convincingly and systematically put to the test. The ultimate goal will be to test our working hypothesis and the role of attention in adaptive, social robotics.

24 citations


"A visualisation and simulation fram..." refers background or methods in this paper

  • ...The motivations for this work can be found in [4], while conceptual and implementation details are reported in [5]....


  • ...The expected outcome of this work was a unified ROSsupported framework designed so as to attain the objectives laid down in section I, allowing the CASIR attention middleware described in [5] to be used within the context defined by those objectives and the use of the IMPEP platform....


Dissertation
01 Sep 2016
TL;DR: This architecture has the purpose of extending the usability of a system devised in previous work by this research team during the CASIR (Coordinated Attention for Social Interaction with Robots) and BACS (Bayesian Approach to Cognitive Systems) projects.
Abstract: In this dissertation, work on the design and development of a ROS-based remote 3D visualisation, control and simulation framework is presented. This architecture has the purpose of extending the usability of a system devised in previous work by this research team during the CASIR (Coordinated Attention for Social Interaction with Robots) and BACS (Bayesian Approach to Cognitive Systems) projects. The proposed solution was implemented using ROS (Robot Operating System), and designed to address the needs of two user groups: local and remote users and developers. The framework consists of: (1) a fully functional simulator integrated with the ROS environment, including a faithful representation of a robotic platform, a human model with animation capabilities and enough features for enacting human robot interaction scenarios, and a virtual experimental setup with similar features to the real laboratory workspace; (2) a fully functional and intuitive user interface with 2D and 3D image representation capabilities, also allowing both common and advanced users or developers to launch specific sets of modules; (3) a remote robotic laboratory that can connect remote users to the rest of the framework via a web browser, providing them with basic control of the simulated platform via a virtual joystick controller. This solution's contributions are as follows: (1) access for the local research team to the CASIR-IMPEP attention middleware, both locally and remotely, in order to allow seamless development and research efforts by effectively and efficiently sharing the framework's resources; (2) access for the local research team to a user-friendly and flexible dashboard as a user interface that saves computational resources by running in a transparent fashion on a separate computer; (3) the opportunity for remote users which are not part of the local research team to openly and safely run the framework demonstrator, thereby opening CASIR-IMPEP research outcomes to the wider community; (4) the opportunity for these external researchers to access source code developed during this project so as to adapt its outcomes for their own purposes, consequently representing an example for replicating the systematic approach applied herewith. The proposed solution was thoroughly and systematically tested under operational conditions, so as to assess its qualities in terms of features, ease of use and performance.

1 citation


"A visualisation and simulation fram..." refers background in this paper

  • ...A detailed description of the work presented herewith can be found in [24]....
