2018 I/ITSEC - 9250

Human-Agent Teaming: State of Assessments and Selected Issues (Room S320F)

27 Nov 18
3:00 PM - 3:30 PM
Progress in computing and robotics technologies has fueled research in Human-Agent Teaming (HAT). More than ever before, robots, machines, and systems are seen as viable agent teammates that work alongside humans as force-multipliers to enhance performance, ensure safety, and improve efficiency. However, even as the demand for HAT research and development from both the military and industry continues to increase, there is growing concern over current and foreseeable challenges with assessments in this relatively new domain. For instance, it is difficult to compare the results of assessments that purport to examine the same HAT construct relationships, such as that between reliability and workload, but use different definitions and measures for those constructs. How contextual factors moderate the effects of multiple HAT innovations is also not well understood, because many assessments have been conducted with varying tasks and testbeds. All of this constrains the extent to which study findings can be generalized and hinders the establishment of a knowledge base about the critical factors and relationships in HAT. In this paper, analyses were conducted on the metadata of 74 HAT research studies conducted by ten researchers at military laboratories. Results show patterns and trends in the metadata that illustrate which constructs tend to be examined together, which measures seem to cluster, and which constructs had the most and least diverse measures and definitions. Implications of these findings for assessment quality, and for the utility of assessment outcomes in informing a variety of critical decisions, are also discussed.
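To make the kind of metadata tabulation described above concrete, the sketch below shows one simple way to count how often pairs of constructs are examined in the same study once each study's metadata has been coded as a set of constructs. The study records, construct names, and counting approach here are illustrative assumptions for exposition only; they are not the paper's actual coding scheme or dataset.

```python
from collections import Counter
from itertools import combinations

# Hypothetical study metadata: each record lists the constructs a study examined.
# Study IDs and construct names are illustrative, not drawn from the 74 actual studies.
studies = {
    "S01": {"trust", "workload", "reliability"},
    "S02": {"trust", "situation awareness"},
    "S03": {"workload", "reliability"},
    "S04": {"trust", "workload", "situation awareness"},
}

# Count how often each pair of constructs appears together in the same study.
pair_counts = Counter()
for constructs in studies.values():
    for a, b in combinations(sorted(constructs), 2):
        pair_counts[(a, b)] += 1

# Report pairs from most to least frequently co-examined.
for (a, b), n in pair_counts.most_common():
    print(f"{a} & {b}: co-examined in {n} of {len(studies)} studies")
```

A tabulation of this kind surfaces which construct pairs (e.g., reliability and workload) are studied together most often, and by extension where comparable measures and definitions matter most for generalizing across studies.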