
Cognitive Coordination for Cooperative Multi-Robot Teamwork

12 May 2015
TL;DR: An agent-based cognitive robot architecture is proposed to bridge the gap between low-level robotic control and high-level cognitive reasoning, and a formal domain-independent graphical language that reflects the need for coordination in multi-agent teamwork is developed.
Abstract: Multi-robot teams have potential advantages over a single robot. Robots in a team can serve different functions, so a team of robots can be more efficient, robust, and reliable than a single robot. In this dissertation, we are particularly interested in multi-robot teams with human-level intelligence. Social deliberation should be taken into account in such a multi-robot system, which requires that the robots be capable of generating long-term plans to achieve a global or team goal, rather than just dealing with the problems at hand. Robots in a team have to cope with dynamic environments due to the presence of the others: a robot cannot foresee what its environment will be, because other robots may change it. Moreover, multiple robots may interfere with each other. The need for coordination in a robot team thus stems from interdependence relationships between the robots; more specifically, one robot performing an activity may influence another robot's activity. To achieve good team performance, the robots in a team all need to coordinate their activities well. This dissertation studies multi-robot teamwork in the context of search and retrieval, known in robotics as foraging. In a foraging task, a team of robots is required to search for targets of interest in the environment and deliver them back to a home base. Many practical applications, such as urban search and rescue, deep-sea mining, and autonomous warehouses, require search and retrieval. Requiring both searching and delivering makes a foraging task more complicated than a pure search, exploration, or coverage task: foraging robots have to consider not only where to explore but also when. Coordination for a foraging task concerns how to direct the movements of the robots and how to distribute the workload more evenly across the team.
In this dissertation, we first proposed an agent-based cognitive robot architecture that bridges the gap between low-level robotic control and high-level cognitive reasoning. Cognitive agents realized by means of the agent programming language GOAL are used to control both real and simulated robots. We carried out an empirical study to investigate the role of communication and its impact on team performance; the results and findings were used to study the multi-robot pathfinding and multi-robot task allocation problems. A novel, fully decentralized approach was proposed for the multi-robot pathfinding problem, which also reduces communication overhead compared with typical decentralized approaches. An auction-based approach and a prediction approach were proposed for the dynamic foraging task allocation problem; the prediction approach performs better with respect to completion time, while the auction-based approach performs better with respect to travel costs. Finally, to facilitate the identification of interdependence relationships between agents in the early design phase of a multi-agent system, we developed a formal, domain-independent graphical language that reflects the need for coordination in multi-agent teamwork.
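The flavor of auction-based task allocation can be illustrated with a minimal single-item auction sketch. This is not the dissertation's actual mechanism; the robot positions, task list, and straight-line travel-cost bids below are assumptions chosen purely for demonstration.

```python
# Minimal single-item auction for multi-robot task allocation (illustrative
# sketch only). Each task is auctioned in turn; every robot bids its travel
# cost to the task, the lowest bidder wins, and the winner's position is
# updated to the task site for subsequent bids.

import math

def travel_cost(robot_pos, task_pos):
    """Bid = straight-line (Euclidean) travel distance to the task."""
    return math.dist(robot_pos, task_pos)

def auction_allocate(robots, tasks):
    """Auction tasks one by one to the cheapest bidder.

    robots: dict mapping robot id -> (x, y) start position
    tasks:  list of (x, y) task positions
    returns: dict mapping robot id -> list of won tasks, in auction order
    """
    positions = dict(robots)                      # robot id -> current position
    assignment = {rid: [] for rid in positions}
    for task in tasks:
        # Each robot submits a bid; the lowest bid wins the task.
        winner = min(positions, key=lambda rid: travel_cost(positions[rid], task))
        assignment[winner].append(task)
        positions[winner] = task                  # winner ends up at the task site
    return assignment

robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0)}
tasks = [(1.0, 1.0), (9.0, 1.0), (2.0, 2.0)]
print(auction_allocate(robots, tasks))
# → {'r1': [(1.0, 1.0), (2.0, 2.0)], 'r2': [(9.0, 1.0)]}
```

Because bids reflect travel distance, this kind of greedy auction tends to keep individual travel costs low, which is consistent with the trade-off the abstract reports against the completion-time-oriented prediction approach.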


Citations
07 Feb 2018
TL;DR: This thesis addresses research challenges that require a multi-perspective view on processes and that look beyond the control-flow perspective, which defines the sequence of activities of a process.
Abstract: Process mining methods analyze an organization's processes by using process execution data. During the handling of a process instance, data about the execution of activities are recorded. Process mining uses such data to gain insights into the real execution of processes. In this thesis, we address research challenges that require a multi-perspective view on processes and that look beyond the control-flow perspective, which defines the sequence of activities of a process. We consider problems in which multiple interacting process perspectives — in particular control-flow, data, resources, time, and functions — are considered together. The contributed methods span several types of process mining: two are concerned with conformance checking, two are process discovery techniques, and one is a decision mining method. All methods have been implemented, evaluated, and applied in the context of four case studies.
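The control-flow side of conformance checking can be sketched in miniature: does an observed trace fit a process model? The toy order-handling model below, given as a set of allowed directly-follows pairs, is an assumption for illustration only; the thesis's multi-perspective methods also cover data, resources, time, and functions, which are not modeled here.

```python
# Toy control-flow conformance check: a trace conforms if it starts and ends
# with the model's start/end activities and every consecutive pair of
# activities is allowed by the model's directly-follows relation.

def fits_model(trace, allowed_follows, start, end):
    """Return True if the trace conforms to the directly-follows model."""
    if not trace or trace[0] != start or trace[-1] != end:
        return False
    return all((a, b) in allowed_follows for a, b in zip(trace, trace[1:]))

# Hypothetical model: register -> check -> (approve | reject) -> archive
model = {("register", "check"), ("check", "approve"),
         ("check", "reject"), ("approve", "archive"),
         ("reject", "archive")}

print(fits_model(["register", "check", "approve", "archive"],
                 model, "register", "archive"))   # → True
print(fits_model(["register", "approve", "archive"],
                 model, "register", "archive"))   # → False (skipped "check")
```

Real conformance-checking techniques work on richer models (e.g. Petri nets) and produce diagnostics rather than a single boolean, but the core question — does observed behavior match modeled behavior? — is the same.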

95 citations

01 Jan 2017
TL;DR: A repository take-down policy is stated: upon a reported copyright breach with supporting details, access to the work is removed immediately and the claim is investigated.
Abstract: Users may download and print one copy of any publication from the public portal for the purpose of private study or research. You may not further distribute the material or use it for any profit-making activity or commercial gain. You may freely distribute the URL identifying the publication in the public portal. Take-down policy: if you believe that this document breaches copyright, please contact us providing details, and we will remove access to the work immediately and investigate your claim.

80 citations

01 Jan 2018
TL;DR: In this article, a set of studies was conducted to investigate the effect of adding temporal information to local visual features of individual frames on the interpretation of human social signals.
Abstract: When people communicate with each other, the message often consists of more than just the spoken words. Through facial expressions, intonation, or body posture, for example, sender and receiver can, independently of the words they use and sometimes without even being aware of it, inform each other about their underlying feelings, social attitude, mental state, or other personal characteristics. The automatic interpretation of human visual social signals by computers is the focus of this research. To interpret visual social signals, computers use image-processing techniques. Earlier work in this field focused on techniques applied to images or individual frames from video clips and described the content in terms of local visual features. Based on these visual features, the computer attempted to recognize and interpret the human signal. A drawback of this earlier approach is that information about motion over time is disregarded. To overcome this shortcoming, we applied a technique that is capable not only of describing local visual characteristics of individual frames, such as features of the contour transitions present, but also of capturing temporal characteristics, such as the displacement of these contour transitions over time. To find out whether computers actually benefit from adding temporal information when recognizing human social signals, we conducted four studies in which, for four different social signals, we systematically compared the performance of predictive algorithms under two conditions: local visual features only, and local plus temporal features. We began our research with visual speech detection, that is, determining whether someone is speaking by looking at the image alone.
We then investigated whether we could determine whether children experienced difficulty while answering arithmetic problems. Next, we examined the distinction between posed and spontaneous smiles, and finally we investigated whether a person's gender could be determined from their gait. In the first three studies we looked at signals expressed in the face, and in the last study we considered the whole body. Based on the results of these studies, we can conclude that computers are indeed often better able to predict the social signal when they have additional temporal information at their disposal. This was mainly the case when the salient part of the signal was explicitly present in a specific part of the face or body. For subtle signals, or signals that cannot be linked to specific parts of the face, both the local and the temporal features performed only moderately in recognizing the signal, although the temporal features appeared to do slightly better. All in all, we can conclude that for recognizing social signals it is better to use temporal information than local information alone, and future work could focus on automatically learning the optimal temporal features.
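The distinction between local (per-frame) and temporal (across-frame) features can be made concrete with a toy sketch. Real systems use descriptors such as spatio-temporal gradients of contour transitions; the frames, the mean-intensity "local" feature, and the frame-difference "temporal" feature here are simplified stand-ins, not the thesis's actual method.

```python
# Toy local vs. temporal features on 2x2 grayscale "frames" (nested lists).

def local_feature(frame):
    """Appearance of a single frame: mean pixel intensity."""
    return sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))

def temporal_feature(prev_frame, frame):
    """Motion between two frames: mean absolute intensity change."""
    diffs = [abs(a - b)
             for prev_row, row in zip(prev_frame, frame)
             for a, b in zip(prev_row, row)]
    return sum(diffs) / len(diffs)

frame0 = [[2, 2], [2, 2]]
frame1 = [[2, 10], [2, 10]]   # the right column brightens over time

print(local_feature(frame1))             # → 6.0 (what the frame looks like)
print(temporal_feature(frame0, frame1))  # → 4.0 (how much it changed)
```

A classifier given only the local feature sees a single static value per frame; appending the temporal feature adds the motion information that, per the studies above, often improves recognition.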

72 citations

08 Jun 2020
TL;DR: Machine learning is impacting modern society at large, thanks to its increasing potential to efficiently and effectively model complex and heterogeneous phenomena, but it is not infallible and models can deliver unreasonable outcomes.
Abstract: Machine learning is impacting modern society at large, thanks to its increasing potential to efficiently and effectively model complex and heterogeneous phenomena. While machine learning models can achieve very accurate predictions in many applications, they are not infallible. In some cases, machine learning models can deliver unreasonable outcomes. For example, deep neural networks for self-driving cars have been found to provide wrong steering directions based on the lighting conditions of street lanes (e.g., due to cloudy weather). In other cases, models can capture and reflect unwanted biases that were concealed in the training data. For example, deep neural networks used to predict the likely jobs and social status of people based on their pictures were found to consistently discriminate based on gender and ethnicity; this was later attributed to human bias in the labels of the training data.

71 citations