✤ Using Coherence Inspectors to Determine the Reliability of Debriefed Epistemic Agents
from Unmonitored Autonomous Team Missions
(International Symposium on Cognition Science and Engineering, July 2013 Proceedings).
✤Cameron Hughes, Ctest Laboratories
✤Tracey Hughes, Ctest Laboratories
✤Stephen Rhoden, PhD, Youngstown State University
ABSTRACT
It is sometimes necessary to deploy autonomous multi-agent teams to perform analysis, interpretation, and threat or safety assessments in environments that are too remote, hazardous, or physically impractical for human beings. It may also be the case that synchronous data communications, along with any real-time video or audio monitoring of the team, are not possible or at the very least not dependable. In these situations, if we are able to debrief the team at some point, how can we rely on any analysis or assessment the team makes when we could not monitor its performance? On teams that perform assessment or analysis, there is always the possibility that one or more members will make misidentifications or report false positives. Without the ability to monitor the team, it may not be clear that the proper locations or substances were analyzed and assessed. Without real-time monitoring of and communication with the team, it is also possible that the team concluded its analysis and assessment without considering the entire environment. To these problems we could add the partial failure of any or all team members in terms of inter-team communication or subtask completion. In this paper we describe F.A.C.T., an experimental heterogeneous multi-agent system consisting of software agents and autonomous robot agents, and the Coherence Inspector, a concept consisting of a knowledge structure and a knowledge engineering technique that is used to determine the reliability of any analysis, interpretation, or assessment that F.A.C.T. makes during the debriefing process.