2019 I/ITSEC

Lean Scenes: Variable-Fidelity Models Reduce Machine-Learning Training Requirements (Room 320A)

05 Dec 19
11:30 AM - 12:00 PM

This paper describes an approach to avoiding waste in machine learning. Image-processing neural networks characterize objects from features learned in training data. In some applications, the amount of training data is constrained by expensive data collection or a limited number of physical images. In such contexts, it may be preferable to first train the network on simulated imagery and then fine-tune it with physical images. This "transfer learning" procedure helps networks meet performance requirements while reducing the amount of physical data consumed. Simulation-based transfer learning raises two key concerns. First, modeling highly realistic scenery with high-fidelity simulations typically involves significant computational expense, which may constrain the amount of generated imagery available for training and thereby reduce performance. Second, modeling artefacts may lead the network to learn and respond to artificial features that have no real-world equivalent. This paper examines several approaches that address these concerns in the context of an image-segmentation neural network used for calibrating camera pose. First, transfer learning is applied to a sequence of simulation results, beginning with a large pool of moderate-fidelity runs and then fine-tuning on fewer, higher-fidelity cases. Second, the influence of artificial features may be (A) mitigated by blending physical and synthetic imagery through object texturing, or (B) monitored through saliency-map diagnostics that show analysts which image regions are most responsible for network performance. Artefact-robustness methods remain an active research area. These approaches demonstrate that (A) simulation output need not be of the highest fidelity to be useful for early-phase training, and (B) overall computational expense can be reduced through training sequences that increase modeling realism while decreasing the number of generated samples. In these ways, variable-fidelity simulations dynamically provide the modeled realism appropriate to a machine-learning algorithm's evolving capabilities.
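
The staged, variable-fidelity training sequence lends itself to a simple loop over progressively smaller, more realistic datasets. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation; the TinySegNet model, the three dataset placeholders (moderate_fidelity_ds, high_fidelity_ds, physical_ds), and the epoch/learning-rate schedule are all assumptions made for the example.

```python
# Minimal sketch of staged transfer learning across fidelity levels.
# TinySegNet and the three dataset placeholders are illustrative
# assumptions only; they do not come from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset


class TinySegNet(nn.Module):
    """Stand-in segmentation network with an encoder/head split for freezing."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel class scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))


def train_stage(model: nn.Module, dataset: Dataset, epochs: int, lr: float,
                freeze_encoder: bool = False) -> None:
    """One training stage; later stages use fewer samples and a lower lr."""
    if freeze_encoder:
        # Freeze early feature-extraction layers so the small physical set
        # only adapts the segmentation head.
        for p in model.encoder.parameters():
            p.requires_grad = False
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()  # expects per-pixel integer labels
    model.train()
    for _ in range(epochs):
        for images, masks in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            opt.step()


model = TinySegNet()
# Stage 1: large pool of moderate-fidelity synthetic scenes (cheap to render).
train_stage(model, moderate_fidelity_ds, epochs=20, lr=1e-3)
# Stage 2: fewer, higher-fidelity synthetic scenes.
train_stage(model, high_fidelity_ds, epochs=5, lr=1e-4)
# Stage 3: the small pool of physical images, with the encoder frozen.
train_stage(model, physical_ds, epochs=3, lr=1e-5, freeze_encoder=True)
```

The schedule expresses the abstract's point directly: each later stage uses fewer samples and a smaller learning rate, so the most expensive imagery is consumed only once the network already has useful features.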
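For the saliency-map diagnostic, one common choice, assumed here since the abstract does not name a specific method, is a plain gradient-based map: back-propagate a chosen class score to the input pixels and inspect where the gradient magnitude concentrates.

```python
# Generic gradient-based saliency sketch; reuses the model/image shapes from
# the previous example. This is an assumed, illustrative method, not
# necessarily the diagnostic used in the paper.
import torch


def saliency_map(model: torch.nn.Module, image: torch.Tensor,
                 target_class: int) -> torch.Tensor:
    """Per-pixel sensitivity of the target class score to the input image.

    `image` has shape (3, H, W); the returned map has shape (H, W).
    """
    model.eval()
    image = image.clone().requires_grad_(True)  # track gradients w.r.t. pixels
    scores = model(image.unsqueeze(0))          # (1, n_classes, H, W)
    # Summing the target-class logits over all pixels lets one backward pass
    # attribute that class's response to the input.
    scores[0, target_class].sum().backward()
    # Collapse the color channels: keep the strongest sensitivity per pixel.
    return image.grad.abs().max(dim=0).values
```

Bright regions in the resulting map mark the pixels most responsible for the network's response; if those regions correspond to rendering artefacts with no physical counterpart, the diagnostic flags a feature the network should not be relying on.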