2017 MILCOM

Technical Panel: Can We Trust the Robots? (Room 316)

Automated agents are increasingly entering the Command-and-Control (C2) structures of both first responders and armed forces. As such agents become more sophisticated, they may become integral parts of mission teams and be tasked with commanding other agents and even humans. This raises critical issues associated with the allocation of decision rights to automated agents, and the levels of trust that will be required. Problems include the measurement of trust in the context of mixed human and robotic teams, and the propagation and updating of decision rights throughout such teams. What level of autonomy should be allowed? Which tasks might be better suited to agents and which to humans, both at a perceptual/computational level and at the social level? How does human trust in automation affect the command structure? For example, to what degree will humans trust giving open-ended commands to automated subordinates? Likewise, how will humans respond to taking commands from automated agents?