AUVSI's Unmanned Systems 2016

Trusted Autopilot Architecture for GPS-Denied and Experimental UAV Operations (Room 283-285)

03 May 16
11:00 AM - 11:30 AM

Tracks: Air, Defense, Research and Development, Tech Track: Developments in Autonomy

Unmanned and autonomous systems require a great deal of human trust, especially to gain acceptance in high-risk mission applications. In the aerial domain, loss of vehicle control, a crash, or a departure from the safety of the range can present a hazard to people as well as property. A robust autopilot with multiple redundant and failsafe navigation methods, such as the one presented here, offers a far more appealing platform for testing new flight packages and navigation methods under development. This failsafe autopilot system was developed for the NASA Langley Research Center to improve the safety of commercial and experimental flight operations. In these applications, the system must be able to detect failures in the vehicle control or navigation subsystems and switch to a failsafe autopilot. However, current failsafe methods, such as human operator override, do not provide sufficient robustness in many situations.

This paper presents an additional failsafe navigation approach in the form of a visual odometry (VO) system. VO is the problem of estimating the relative position (motion) of a vehicle using a camera sensor. Visual odometry approaches provide a useful method for vehicle trajectory estimation, provided there is sufficient light, texture, saliency, and overlap between successive images. In addition to a survey of state-of-the-art VO methods, this paper presents a novel VO-based failsafe autopilot method for use in an unmanned aerial vehicle (UAV). The method highlights the importance of a robust means of estimating the accuracy of visual odometry estimates in an unmanned system: an estimate of accuracy allows the autopilot to degrade gracefully between navigation methods in the presence of error.

One applicable case for GPS-denied navigation with a UAV occurs when the onboard GPS sensor is jammed or spoofed with false data. GPS currently represents a single point of failure for many UAV systems. A redundant positioning system is needed to augment the GPS sensor, both to detect spoofing (through diverging estimates of position) and to provide redundancy in the case of GPS signal loss. In the latter case, human operator control (the standard failsafe) is commonly unavailable as well, due to a concurrent communications loss (i.e., from jamming).

Current state-of-the-art VO methods perform well in feature-rich environments (e.g., urban, forested, coral reef), but can fail in more challenging environments with repetitive patterns or a lack of salient features (e.g., aerial flight above water, underwater, desert). In these more difficult cases, it is even more important that a VO system have a robust way of estimating accuracy along with the estimated odometry. The autopilot system should not rely on poorly estimated VO, as this can cause the vehicle to leave the range, crash, or encounter other control problems. To robustly leverage the technology available with visual odometry, the system must know when to use VO and when to reject it based on accuracy. In the case of a combined GPS and VO failure, the system should degrade gracefully to inertial navigation, the final redundancy.

With three positioning methods available to a UAV (GPS, VO, and inertial navigation), the vehicle must be able to leverage each of the methods optimally to obtain a robust navigation solution. This requires the system to have knowledge of the real-time accuracy (or confidence) of each independent navigation subsystem, and to be able to switch between them seamlessly and gracefully when needed, as illustrated in the sketch below.
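As an illustration of this layered failover logic, the following is a minimal Python sketch (not from the paper). The confidence scores, thresholds, and divergence test are hypothetical placeholders for whatever real-time accuracy measures the actual subsystems report.

from dataclasses import dataclass
from enum import Enum


class NavSource(Enum):
    GPS = "gps"
    VO = "vo"
    INERTIAL = "ins"


@dataclass
class NavEstimate:
    """A position estimate tagged with a real-time confidence in [0, 1]."""
    source: NavSource
    position: tuple       # (x, y, z) in a local frame
    confidence: float     # hypothetical per-subsystem accuracy measure


# Hypothetical acceptance thresholds; real values would be tuned per vehicle.
GPS_MIN_CONF = 0.9
VO_MIN_CONF = 0.7
DIVERGENCE_LIMIT_M = 15.0  # GPS/VO disagreement that suggests spoofing


def spoofing_suspected(gps: NavEstimate, vo: NavEstimate) -> bool:
    """Flag possible GPS spoofing when a confident VO estimate diverges."""
    dist = sum((g - v) ** 2 for g, v in zip(gps.position, vo.position)) ** 0.5
    return vo.confidence >= VO_MIN_CONF and dist > DIVERGENCE_LIMIT_M


def select_nav_source(gps: NavEstimate, vo: NavEstimate,
                      ins: NavEstimate) -> NavEstimate:
    """Degrade gracefully: GPS, then VO, then inertial navigation."""
    if gps.confidence >= GPS_MIN_CONF and not spoofing_suspected(gps, vo):
        return gps
    if vo.confidence >= VO_MIN_CONF:
        return vo
    # Low-confidence VO is rejected outright: a poorly estimated VO fix
    # is worse than falling back to inertial dead reckoning.
    return ins

The key design point is that each subsystem reports a confidence alongside its position, so switching is driven by quantified accuracy rather than by a binary sensor-failure flag.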
Such a design provides a robust method for autonomous navigation and position estimation using a redundant, multi-layered failsafe approach, presented here. The VO method presented here combines the C-FLOW approach to optical-flow image-point registration with the PEGUS robust pose estimation method. This yields a deterministic, high-speed VO system, as required for real-time visual servo control applications, and one capable of providing a quantified estimate of the accuracy, or confidence, of the visual odometry result at each frame. A novel strength of the C-FLOW optical flow method is its ability to flag false matches and outliers, which yields an estimate of accuracy and tracking precision; this supplies the accuracy measure needed to switch robustly between positioning methods. PEGUS is a novel approach for estimating relative pose from a set of pose hypotheses (generated from visual odometry) in the presence of outliers: a clustering method that averages many “low-noise” relative pose hypotheses for improved accuracy, resulting in a deterministic, high-speed robust pose estimator suitable for real-time visual servo control. This VO system is integrated into a robust failsafe autopilot system, detailed herein; a simplified sketch of the clustering-and-averaging idea follows.
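To make the clustering idea concrete, here is a deliberately simplified, translation-only Python sketch. It is not the PEGUS algorithm itself, which operates on full pose hypotheses and must average rotations properly (e.g., on SO(3)); the neighborhood radius and the inlier-fraction confidence score are illustrative assumptions.

import numpy as np


def robust_pose_from_hypotheses(hypotheses, radius=0.05):
    """Cluster pose hypotheses and average the densest cluster.

    hypotheses: (N, d) array, e.g. relative translation vectors generated
                from random subsets of feature correspondences.
    radius:     hypothetical neighborhood radius defining a cluster.
    Returns the averaged estimate and an inlier-fraction confidence.
    """
    # Pairwise distances between all hypotheses.
    dists = np.linalg.norm(hypotheses[:, None, :] - hypotheses[None, :, :],
                           axis=-1)

    # The hypothesis with the most neighbors anchors the densest cluster;
    # outliers from false matches land in sparse regions and are excluded.
    neighbor_counts = (dists < radius).sum(axis=1)
    anchor = np.argmax(neighbor_counts)
    cluster = hypotheses[dists[anchor] < radius]

    # Averaging the "low-noise" cluster members reduces noise relative to
    # any single hypothesis; the inlier fraction doubles as a confidence
    # score the autopilot can use when deciding whether to trust VO.
    confidence = len(cluster) / len(hypotheses)
    return cluster.mean(axis=0), confidence

Because the dominant cluster is found by counting neighbors rather than by random sampling, the result is deterministic for a given hypothesis set, consistent with the deterministic, real-time behavior the paper emphasizes.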