
Comparing Visual Assembly Aids for Augmented Reality Work Instructions

Increased product complexity and the focus on zero defects, especially when manufacturing complex engineered products, mean new tools are required to help workers conduct challenging assembly tasks. Augmented reality (AR) has shown considerable promise over traditional methods for delivering work instructions. Many proof-of-concept systems have demonstrated the feasibility of AR, but little work has been devoted to understanding how users perceive different AR work instruction interface elements. This paper presents a between-subjects study examining how interface elements for object depth placement in a scene affect a user’s ability to quickly and accurately assemble a mock aircraft wing in a standard work cell. For object depth placement, three modes with varying degrees of 3D model-based occlusion were tested: no occlusion (control), virtual occlusion, and occlusion by contours. Results for total assembly time and total errors indicated no statistically significant difference between interfaces, leading the authors to conclude that a performance floor has been reached for optimizing the current assembly when using AR for work instruction delivery. However, examining a handful of highly error-prone steps showed the impact that different types of occlusion have on helping users complete an assembly task correctly. The results of the study provide insight into how to construct an interface for delivering AR work instructions using occlusion. Based on these results, the authors recommend customizing the occlusion method based on the features of the required assembly task. The authors also identified a floor effect for the steps of the assembly process that involved picking the necessary parts from tables and bins. The authors recommend using vibrant outlines and large textual cues (e.g., numbers on parts bins) as interface elements to guide users during these types of “picking” steps.
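The "virtual occlusion" condition referenced above is commonly implemented with a depth-mask (or "phantom object") technique: an invisible proxy model of the real workpiece is rendered only into the depth buffer, so virtual content behind it is hidden. The sketch below is a minimal, illustrative per-pixel composite in Python/NumPy, not the paper's implementation; the function name composite_with_occlusion and the buffer layout are hypothetical, and it assumes the proxy-model and virtual-content depth buffers have already been rendered.

import numpy as np

def composite_with_occlusion(camera_rgb, virtual_rgb, virtual_depth,
                             occluder_depth):
    """Depth-mask ("phantom object") occlusion compositing (illustrative).

    camera_rgb:     HxWx3 real camera frame
    virtual_rgb:    HxWx3 rendered virtual content
    virtual_depth:  HxW depth of virtual content (np.inf where empty)
    occluder_depth: HxW depth of the invisible proxy model of the real
                    workpiece (np.inf where there is no occluder)
    """
    # A virtual pixel is shown only where it is nearer than the proxy
    # geometry; elsewhere the camera image shows through, so the real
    # part appears to occlude the virtual cue.
    visible = virtual_depth < occluder_depth
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]
    return out

# Toy example: a virtual cue partially hidden behind a proxy occluder.
h, w = 4, 4
camera = np.zeros((h, w, 3), dtype=np.uint8)       # black camera frame
virtual = np.full((h, w, 3), 255, dtype=np.uint8)  # white virtual cue
v_depth = np.full((h, w), 2.0)                     # cue at depth 2
o_depth = np.full((h, w), np.inf)
o_depth[:, :2] = 1.0                               # occluder covers left half
frame = composite_with_occlusion(camera, virtual, v_depth, o_depth)
print(frame[0, 0], frame[0, 3])  # [0 0 0] (occluded) vs. [255 255 255]

The "occlusion by contours" condition can be approximated in the same framework by rendering the hidden portion of the virtual geometry as silhouette outlines rather than discarding it entirely, preserving a sense of where the occluded cue lies.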