2017 I/ITSEC - 8250

Analyzing SLAM Algorithm Performance for Tracking in Augmented Reality Systems

Difficult issues can arise when developing augmented reality-based tutoring scenarios if the environment is not known in advance. Without pre-determined fiducial markers, tracking of the user’s position and orientation relative to their starting point can easily be lost. A potential solution lies within the robotics field: simultaneous localization and mapping (SLAM) algorithms, which rely on visual tracking methods to determine both the layout of the environment and the robot’s current position and orientation given the previous estimate. However, when applied to a human subject in an augmented reality environment, the agility of the subject’s movement during performance activities can cause these algorithms to fail. In this paper, we discuss the framework used to test a set of SLAM algorithms and determine their ability to track a human subject performing a variety of movements. We detail the SLAM algorithms analyzed and explain their potential usage given equipment combinations that may be developed in a lab environment. We also walk through each movement set, detailing the hardware used in the recording process and how the user’s movements are designed to test the limits of a SLAM algorithm. By building a networked framework, we show how the system can be adapted to test an algorithm with minimal changes to its code and how it may be used to evaluate future SLAM research. In the end, our framework reveals the capabilities and limits of SLAM algorithms when tracking a human user in an augmented reality system.
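The incremental pose estimation the abstract describes, where each new position and orientation is computed from the previous estimate, can be illustrated with a minimal sketch. This is not code from the paper: `update_pose` is a hypothetical helper showing only the 2D dead-reckoning step common to SLAM front-ends, where body-frame motion estimates are chained into a world-frame pose (and where a single lost or erroneous estimate corrupts all subsequent poses).

```python
import math

def update_pose(pose, delta):
    """Chain a relative motion estimate (dx, dy, dtheta), expressed in the
    body frame, onto the current world-frame pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    # Rotate the body-frame translation into the world frame, then add it.
    wx = x + dx * math.cos(th) - dy * math.sin(th)
    wy = y + dx * math.sin(th) + dy * math.cos(th)
    return (wx, wy, (th + dth) % (2 * math.pi))

# A square path: four 1 m forward moves, each followed by a 90-degree turn.
pose = (0.0, 0.0, 0.0)
for _ in range(4):
    pose = update_pose(pose, (1.0, 0.0, math.pi / 2))
# With perfect relative estimates, the chain returns close to the origin;
# any per-step error would instead accumulate without bound.
print(abs(pose[0]) < 1e-9, abs(pose[1]) < 1e-9)
```

Because errors compound multiplicatively through this chain, full SLAM systems add a map of landmarks and loop-closure corrections on top of it; the fast, erratic motion of a human subject is precisely what stresses that correction machinery.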