2018 I/ITSEC - 9250

Exterior Attribute Extraction and Interior Layout Speculation of 3D Structures (Room S320C)

27 Nov 18
4:30 PM - 5:00 PM
Automated collection-to-construction of terrain databases is a critical capability envisioned for future U.S. Army training systems. The challenge is to automatically produce terrain data that supports both visual rendering and simulated reasoning, with content sufficient to train ground forces in dense urban environments. The process of automated terrain construction begins with surface capture. Drones and ground robots are deployed to capture large amounts of raw surface data. Processing the surface data yields point clouds or 3D polygonal meshes, providing an initial 3D terrain model, typically with very high point/polygon densities and large raster memory requirements. While certain applications may be able to utilize these terrain models directly, most visualization applications require additional processing to generate well-formed model geometry, sharp textures, door and window apertures, and material classifications. This additional processing, performed on the point cloud or 3D polygonal mesh, extracts point, line, and polygon feature geometries along with descriptive feature attributes (e.g., height, roofline, roof type). A bare-earth elevation model is generated to provide a ground surface on which to place the extracted 3D features. The final enabler of the terrain construction process is the automated generation of 3D models from the feature and attribute data. This paper reports on research that expands automated extraction of attributes from images through deep-learning and image-processing techniques, identifying structural dimensions, apertures (e.g., doors, windows), appendages (e.g., A/C units, chimneys), colors, and materials. From this set of enhanced attributes, geo-representative 3D models are procedurally generated. In addition, from the same set of enhanced attributes, geo-representative building interiors are speculated and procedurally generated. This paper details these image-processing and deep-learning techniques and describes the enhanced attribute set.
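
To make the attribute-to-model step concrete, the Python sketch below illustrates one plausible form of the final stage under stated assumptions; it is not the authors' implementation. It assumes a deep-learning detector has already produced exterior attributes (footprint dimensions, eave height, and door/window boxes), and it speculates a simple grid-based interior layout consistent with that envelope. All class names, field names, and default parameters (Aperture, BuildingAttributes, speculate_interior, floor_height, target_room_size) are illustrative.

    """Illustrative sketch: procedurally speculate an interior layout from
    exterior attributes that a CNN-based extractor might report."""

    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class Aperture:
        kind: str        # "door" or "window"
        facade: int      # index of the facade wall (0..3)
        u: float         # horizontal offset along the facade (m)
        v: float         # vertical offset from ground (m)
        width: float
        height: float


    @dataclass
    class BuildingAttributes:
        footprint_w: float   # extracted footprint width (m)
        footprint_d: float   # extracted footprint depth (m)
        height: float        # extracted eave height (m)
        apertures: List[Aperture] = field(default_factory=list)


    def speculate_interior(attrs: BuildingAttributes,
                           floor_height: float = 3.0,
                           target_room_size: float = 4.0) -> List[dict]:
        """Speculate a regular grid of rooms consistent with the exterior.

        Floor count is inferred from building height; each floor is split
        into rooms sized close to target_room_size.
        """
        floors = max(1, int(attrs.height // floor_height))
        nx = max(1, round(attrs.footprint_w / target_room_size))
        ny = max(1, round(attrs.footprint_d / target_room_size))
        rooms = []
        for level in range(floors):
            for i in range(nx):
                for j in range(ny):
                    rooms.append({
                        "level": level,
                        "min": (i * attrs.footprint_w / nx,
                                j * attrs.footprint_d / ny,
                                level * floor_height),
                        "max": ((i + 1) * attrs.footprint_w / nx,
                                (j + 1) * attrs.footprint_d / ny,
                                (level + 1) * floor_height),
                    })
        return rooms


    if __name__ == "__main__":
        # Attributes as a detector might report them for a small two-story house.
        attrs = BuildingAttributes(
            footprint_w=10.0, footprint_d=8.0, height=6.5,
            apertures=[Aperture("door", 0, 4.0, 0.0, 1.0, 2.1),
                       Aperture("window", 0, 1.0, 1.0, 1.2, 1.4)])
        rooms = speculate_interior(attrs)
        print(f"{len(rooms)} speculated rooms across "
              f"{max(r['level'] for r in rooms) + 1} floors")

In a fuller pipeline, the speculated room boxes would be refined against the detected apertures (e.g., aligning doors with hallways) before procedural geometry is emitted; the sketch stops at the layout-speculation step the abstract describes.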