Deep Active Localization

Abstract

Active localization is the problem of generating robot actions that allow a robot to maximally disambiguate its pose within a reference map. Traditional approaches use an information-theoretic criterion for action selection together with hand-crafted perceptual models. In this work we propose an end-to-end differentiable method for learning to take informative actions that is trainable entirely in simulation and then transferable to real robot hardware with zero refinement. The system is composed of two modules: a convolutional neural network for perception, and a planning module trained with deep reinforcement learning. We introduce a multi-scale approach to the learned perceptual model, since the accuracy needed to perform action selection with reinforcement learning is much lower than the accuracy needed for robot control. We demonstrate that the resulting system outperforms systems that use a traditional approach for either perception or planning. We also demonstrate our approach's robustness to different map configurations and other nuisance parameters through the use of domain randomization during training. The released code is compatible with the OpenAI Gym framework as well as the Gazebo simulator.
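Because the released code follows the OpenAI Gym interface, an agent interacts with the localization environment through the standard reset/step loop. The environment class, observation layout, and action set below are illustrative assumptions rather than the paper's actual names; a mock gym-style environment is used to sketch the control flow under which a learned policy would select informative actions.

```python
import random


class ActiveLocalizationEnv:
    """Hypothetical gym-style active-localization environment (a sketch,
    not the paper's actual implementation).

    Observation: a belief distribution over discretized poses.
    Actions: a small discrete set of robot motions.
    """

    ACTIONS = ["forward", "turn_left", "turn_right"]

    def __init__(self, grid_size=5, seed=0):
        self.grid_size = grid_size
        self.rng = random.Random(seed)
        self.steps = 0

    def reset(self):
        self.steps = 0
        # Start from a uniform belief over all candidate poses
        # (maximal pose ambiguity).
        n = self.grid_size * self.grid_size
        return [1.0 / n] * n

    def step(self, action):
        assert 0 <= action < len(self.ACTIONS)
        self.steps += 1
        # A real environment would update the belief from odometry and the
        # learned perceptual model; here the belief simply collapses to
        # illustrate the interface.
        n = self.grid_size * self.grid_size
        belief = [0.0] * n
        belief[self.rng.randrange(n)] = 1.0
        reward = -1.0  # per-step penalty encourages fast disambiguation
        done = self.steps >= 10 or max(belief) > 0.9
        return belief, reward, done, {}


env = ActiveLocalizationEnv()
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = random.randrange(len(env.ACTIONS))  # stand-in for the learned policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

In the actual system, the random action choice above would be replaced by the reinforcement-learned planner acting on the multi-scale belief produced by the perception network.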

Publication
In Robotics and Automation Letters

Sai Krishna G.V.
Masters Student
Dhaivat Bhatt
Masters Student
Vincent Mai
PhD Student
Krishna Murthy Jatavallabhula
PhD Candidate

My research blends robotics, computer vision, graphics, and physics with deep learning.

Liam Paull
Assistant Professor

I lead the Montreal robotics and embodied AI lab. I am affiliated with Université de Montréal, Mila, and I hold a CIFAR AI chair.
