2017 I/ITSEC - 8250

Simulation – Dynamic Occlusion using Fixed Infrastructure for Augmented Reality (Room S320A)

The U.S. Army Research Laboratory-Human Research and Engineering Directorate, Advanced Simulation Technology Division (ARL-HRED-ATSD) performs research and development in augmented/mixed reality training technology. Within training, the Department of Defense (DoD) has a strong interest in augmented reality (AR) for its ability to combine live and virtual assets to reduce cost, increase safety, and mitigate the unavailability of needed live assets. The commercial sector is answering this interest by rapidly advancing a host of capabilities that support AR, such as head-mounted displays (HMDs).

An important capability of AR is realistic occlusion between live and virtual objects based on their respective depths in the augmented scene. Existing solutions use pre-scanned environmental depth information to provide this capability; however, that approach works only for static objects, i.e., objects that never move. It does not address dynamic objects (e.g., live Soldiers), which must be continuously scanned by one or more depth cameras to provide occlusion information to an AR-enabled system. Of the few commercial vendors that provide depth cameras on their HMDs, most lack the depth range to adequately support occlusion from the HMD: beyond roughly two meters, fidelity is greatly diminished.

This paper describes a solution that adds a network of fixed infrastructure cameras and a centralized occlusion server to merge the depth images from the various sources, producing depth images suitable for occlusion on the HMDs at any range with realistic fidelity. The paper reports the use of commercial off-the-shelf (COTS) computers and cameras to instrument an area such that it can be used for occlusion in a training system, and discusses the performance of fusing the data given the resolution and volume involved. Finally, the paper shows scenes in which virtual objects are added to a live view and correctly occluded by real objects.
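The two core operations described above, merging depth images from multiple sources and deciding per pixel whether a virtual object is occluded by a live one, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes all depth maps have already been registered and reprojected into the HMD's viewpoint (camera calibration and network transport are omitted), and it adopts the common convention that a depth value of zero means "no return" from the sensor. The function names are illustrative.

```python
import numpy as np

def fuse_depth_maps(depth_maps):
    """Merge registered depth maps per pixel: the nearest valid
    (non-zero) depth wins. Returns 0 where no sensor saw anything."""
    stack = np.stack(depth_maps).astype(float)
    stack[stack == 0] = np.inf          # treat missing returns as infinitely far
    fused = stack.min(axis=0)           # nearest surface across all sensors
    fused[np.isinf(fused)] = 0.0        # restore the no-return convention
    return fused

def occlusion_mask(real_depth, virtual_depth):
    """True where a live object is closer than the virtual object,
    i.e. where the virtual pixel should be hidden in the HMD view."""
    valid = real_depth > 0              # ignore pixels with no sensor return
    return valid & (real_depth < virtual_depth)

# Example: two 2x2 depth maps (meters), zeros are missing returns.
cam_a = np.array([[2.0, 0.0],
                  [3.0, 1.0]])
cam_b = np.array([[1.0, 5.0],
                  [0.0, 4.0]])
fused = fuse_depth_maps([cam_a, cam_b])
# A virtual object rendered at a uniform 2 m depth:
mask = occlusion_mask(fused, np.full((2, 2), 2.0))
```

In a deployed system, the per-pixel minimum would run on the centralized occlusion server after reprojecting each infrastructure camera's depth image into the requesting HMD's frame, so the HMD only receives a single fused depth image to test against its virtual scene.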