Q2. What are the contributions mentioned in the paper "A skill-based approach towards hybrid assembly" ?
In this article, a hybrid assembly station is presented in which an industrial robot can learn new tasks from worker instructions. The workspace is monitored using multi-sensory perception to detect both persons and objects. The environmental data are processed by the collision avoidance module to ensure the safety of persons and equipment. The real-time capable software architecture and the orchestration of the involved modules by a knowledge-based system controller are also presented.
Q3. What future works have the authors mentioned in the paper "A skill-based approach towards hybrid assembly" ?
In addition, the combination of the shared workspace surveillance unit and the collision avoidance module makes it possible for the human and the robot to share the workspace at the same time. Future work will concentrate on an improved ergonomic view of the assembly process and on evaluating the presented safety concepts. The observation and interpretation of the worker's activities will therefore become one focus of the ongoing research, as suggested in [44]. After repeated observations of what was added at which position and at which time, the system will be able to collect knowledge about the process on a semantic level.
Q4. Why was a real-time capable software architecture implemented?
Due to the requirements of an on-line robot motion control in the hybrid assembly cell, a real-time capable software architecture was implemented.
Q5. What is the need for the robot to move autonomously within the shared working environment?
For more efficiency in the collaboration, it is necessary for the robot to move autonomously within the shared working environment to fulfill its current task with regard to the worker’s safety.
Q6. Why is it necessary to use only one feature point per box?
Because the content of the box is not of uniform color (cluttered content: screws, cables, etc.), it is sufficient to use only one feature point per box.
Q7. What are the main features of the multi-modal interaction channels?
In order to make the communication with the system more intuitive and ergonomic for the worker, multi-modal interaction channels were established.
Q8. What is the cost of using head-worn tracking glasses?
These glasses enable a higher precision in the gaze tracking compared to remote eye tracking at the cost of being less comfortable and more invasive.
Q9. What is the main challenge that arises here?
The main challenge that arises here is that the planned motion and the avoidance motion must be handled such that they do not interfere with each other.
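One common way to keep the two motions from conflicting is to shift authority gradually from the planned motion to the avoidance motion as an obstacle approaches. The sketch below illustrates that idea with a simple distance-based linear blend; the function name, the blending rule, and the threshold values are illustrative assumptions, not the paper's actual controller.

```python
def blend_motions(planned_vel, avoidance_vel, distance, d_safe=0.5, d_min=0.1):
    """Blend a planned velocity with an avoidance velocity.

    distance: closest obstacle distance in metres.
    d_safe:   beyond this distance the planned motion acts alone.
    d_min:    at or below this distance avoidance takes over fully.
    """
    if distance >= d_safe:
        w = 0.0                      # no obstacle influence
    elif distance <= d_min:
        w = 1.0                      # avoidance dominates
    else:
        w = (d_safe - distance) / (d_safe - d_min)  # linear hand-over
    return [(1.0 - w) * p + w * a for p, a in zip(planned_vel, avoidance_vel)]

print(blend_motions([0.2, 0.0], [-0.1, 0.1], distance=0.6))   # → [0.2, 0.0]
print(blend_motions([0.2, 0.0], [-0.1, 0.1], distance=0.05))  # → [-0.1, 0.1]
```

The continuous hand-over avoids abrupt switching between the two commands, which would otherwise cause jerky motion at the boundary of the safety zone.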
Q10. How much speed can an industrial robot achieve in the collaborating mode?
The industrial standard EN ISO 10218-1:2006 [41] limits the maximum speed of an industrial robot in the collaborating mode to 250 mm/s, in case the robot is not sufficiently limited in power and force by inherent design.
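A controller enforcing this limit can simply scale the commanded Cartesian velocity whenever its magnitude exceeds 250 mm/s. The following is a minimal sketch of such a clamp; the function name is an illustrative assumption.

```python
import math

MAX_COLLAB_SPEED_MM_S = 250.0  # EN ISO 10218-1:2006 collaborative-mode limit

def clamp_tcp_velocity(v_mm_s):
    """Scale a Cartesian TCP velocity vector (mm/s) so its magnitude
    never exceeds the collaborative-mode speed limit."""
    speed = math.sqrt(sum(c * c for c in v_mm_s))
    if speed <= MAX_COLLAB_SPEED_MM_S:
        return list(v_mm_s)
    scale = MAX_COLLAB_SPEED_MM_S / speed
    return [c * scale for c in v_mm_s]

print(clamp_tcp_velocity([300.0, 400.0, 0.0]))  # → [150.0, 200.0, 0.0]
```

Scaling the whole vector (rather than clamping each axis separately) preserves the direction of motion while respecting the speed limit.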
Q11. What are the skills that are used to register the robot?
These skills include several basic blocks with actions, e.g. move to position and open gripper, as well as several higher-level skills such as picking up an object from the table.
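The skill hierarchy described above can be sketched as basic action blocks that a higher-level skill composes in sequence. The function names and the dictionary-based state are illustrative assumptions, not the paper's actual skill API.

```python
# Basic blocks: each takes the current state and returns the updated state.
def move_to_position(state, pos):
    state["tcp"] = pos
    return state

def open_gripper(state):
    state["gripper"] = "open"
    return state

def close_gripper(state):
    state["gripper"] = "closed"
    return state

# Higher-level skill built by sequencing the basic blocks.
def pick_up_from_table(state, object_pos):
    state = move_to_position(state, (object_pos[0], object_pos[1], 0.2))  # approach
    state = open_gripper(state)
    state = move_to_position(state, object_pos)                           # descend
    state = close_gripper(state)
    return state

state = pick_up_from_table({"tcp": None, "gripper": "closed"}, (0.3, 0.1, 0.0))
print(state["gripper"])  # → closed
```

Composing higher-level skills from a small set of basic blocks keeps each new task teachable as a sequence of known, already-verified actions.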
Q12. How do you calculate the distances of the robot from the surrounding objects?
To compute the velocity that repels the robot from surrounding obstacles, the minimum distances of all objects in the environment model (including self-collision) to all body parts of the robot need to be calculated.
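A common way to make this distance computation tractable is to approximate robot body parts and obstacles by bounding spheres, so each pairwise distance reduces to a centre distance minus the two radii. The sphere approximation and the function names below are assumptions for illustration; the paper does not specify this particular geometry representation.

```python
import math

def sphere_distance(c1, r1, c2, r2):
    """Surface-to-surface distance between two spheres (negative = overlap)."""
    return math.dist(c1, c2) - r1 - r2

def min_obstacle_distance(links, obstacles):
    """Minimum distance over all link/obstacle pairs.

    links, obstacles: lists of (centre, radius) tuples.
    """
    return min(sphere_distance(lc, lr, oc, orad)
               for lc, lr in links
               for oc, orad in obstacles)

links = [((0.0, 0.0, 0.5), 0.10), ((0.0, 0.0, 1.0), 0.08)]   # robot body parts
obstacles = [((0.5, 0.0, 1.0), 0.20)]                         # environment model
print(round(min_obstacle_distance(links, obstacles), 3))      # → 0.22
```

In a full system the same pairwise check would also be run between non-adjacent links of the robot itself to cover the self-collision case mentioned above.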
Q13. How is the collision avoidance controller implemented?
In the developed collision avoidance controller, the avoidance is done in a reactive way using a dynamic internal 3D environment model as shown in Fig.
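A typical reactive scheme of this kind pushes the robot away from a nearby obstacle with a velocity whose magnitude grows as the distance shrinks, and which vanishes beyond an influence radius. The sketch below uses a classic potential-field-style repulsion; the gain and cut-off distance are illustrative assumptions, not the paper's actual parameters.

```python
import math

def repulsive_velocity(robot_point, obstacle_point, d_influence=0.5, gain=0.1):
    """Velocity pushing robot_point away from obstacle_point.

    Zero outside the influence radius; grows as the distance shrinks.
    """
    diff = [r - o for r, o in zip(robot_point, obstacle_point)]
    d = math.sqrt(sum(c * c for c in diff))
    if d >= d_influence or d == 0.0:
        return [0.0, 0.0, 0.0]      # too far (or coincident): no repulsion
    magnitude = gain * (1.0 / d - 1.0 / d_influence)
    return [magnitude * c / d for c in diff]  # directed away from the obstacle

v = repulsive_velocity((0.2, 0.0, 0.0), (0.0, 0.0, 0.0))
print([round(c, 3) for c in v])  # → [0.3, 0.0, 0.0]
```

Because the repulsion is computed directly from the current environment model in every control cycle, the avoidance reacts to moving obstacles without requiring replanning.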
Q14. What is the purpose of learning tasks?
The learned tasks are stored in an XML representation in a file to build up a persistent task database over time and enhance the system's capabilities.
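Persisting a learned task as XML might look like the following minimal sketch, which serializes a task as a sequence of skill steps using Python's standard library. The task/step schema shown here is an assumption; the paper does not specify its actual XML format.

```python
import xml.etree.ElementTree as ET

def task_to_xml(name, steps):
    """Serialize a learned task as XML.

    steps: list of (skill, target) tuples in execution order.
    """
    task = ET.Element("task", name=name)
    for i, (skill, target) in enumerate(steps):
        ET.SubElement(task, "step", index=str(i), skill=skill, target=target)
    return ET.tostring(task, encoding="unicode")

xml = task_to_xml("assemble_cover", [("pick", "cover"), ("place", "housing")])
print(xml)
```

Writing one such document per learned task yields exactly the kind of persistent, human-readable task database described above, and the tasks can later be reloaded and replayed by parsing the same structure.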