Vision-and-language navigation (VLN) requires an agent to understand natural language instructions, perceive the visual world, and perform navigation actions to arrive at a target location.
A recent paper on arXiv.org proposes the History Aware Multimodal Transformer (HAMT), a fully transformer-based architecture for multimodal decision making in VLN tasks.
It consists of unimodal transformers that encode the text, the history, and the current observation, plus a cross-modal transformer that captures long-range dependencies across the history sequence, the current observation, and the instruction. The model is first trained end-to-end with auxiliary proxy tasks, and reinforcement learning is then used to further improve the navigation policy.
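As a rough illustration of this design, the following PyTorch sketch fuses instruction, history, and observation tokens with a single self-attention encoder and scores candidate views as next actions. The class name, hidden size, layer counts, and the use of plain self-attention over concatenated tokens are simplifying assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of cross-modal fusion for next-action prediction.
# Sizes, layer counts, and names are illustrative assumptions, not HAMT's actual config.
import torch
import torch.nn as nn

class CrossModalDecisionSketch(nn.Module):
    def __init__(self, d_model=768, n_heads=12, n_layers=4):
        super().__init__()
        # We assume text, history, and observation tokens arrive already embedded
        # (e.g., by separate unimodal encoders, which are omitted here).
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.cross_encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Score each candidate view (navigable direction) as a possible next action.
        self.action_head = nn.Linear(d_model, 1)

    def forward(self, text_tokens, history_tokens, obs_tokens):
        # Concatenate the three streams so attention can relate the instruction,
        # the navigation history, and the current panoramic observation jointly.
        tokens = torch.cat([text_tokens, history_tokens, obs_tokens], dim=1)
        fused = self.cross_encoder(tokens)
        # Use the fused observation tokens to produce one logit per candidate action.
        obs_fused = fused[:, -obs_tokens.size(1):]
        return self.action_head(obs_fused).squeeze(-1)  # (batch, num_candidates)

# Usage with random features: 20 instruction tokens, 6 history steps, 8 candidate views.
model = CrossModalDecisionSketch()
logits = model(torch.randn(2, 20, 768), torch.randn(2, 6, 768), torch.randn(2, 8, 768))
print(logits.shape)  # torch.Size([2, 8])
```

In the paper the three streams are encoded by separate unimodal transformers and fused by a dedicated cross-modal transformer; the sketch above only shows where next-action scores come from once the tokens are available.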
Extensive experiments on various VLN tasks demonstrate that HAMT outperforms the state of the art in both seen and unseen environments across all tasks.
Vision-and-language navigation (VLN) aims to build autonomous visual agents that follow instructions and navigate in real scenes. To remember previously visited locations and actions taken, most approaches to VLN implement memory using recurrent states. Instead, we introduce a History Aware Multimodal Transformer (HAMT) to incorporate a long-horizon history into multimodal decision making. HAMT efficiently encodes all the past panoramic observations via a hierarchical vision transformer (ViT), which first encodes individual images with ViT, then models spatial relations between images in a panoramic observation, and finally takes into account temporal relations between panoramas in the history. It then jointly combines text, history, and current observation to predict the next action. We first train HAMT end-to-end using several proxy tasks including single-step action prediction and spatial relation prediction, and then use reinforcement learning to further improve the navigation policy. HAMT achieves new state of the art on a broad range of VLN tasks, including VLN with fine-grained instructions (R2R, RxR), high-level instructions (R2R-Last, REVERIE), dialogs (CVDN), as well as long-horizon VLN (R4R, R2R-Back). We demonstrate HAMT to be particularly effective for navigation tasks with longer trajectories.
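To make the hierarchical history encoding concrete, here is a minimal PyTorch sketch that applies a spatial encoder across the views of each panorama and a temporal encoder across the panoramas of the trajectory. Per-view ViT features are assumed to be precomputed, and the layer counts, hidden size, and mean pooling are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch of hierarchical history encoding: spatial attention within each panorama,
# then temporal attention across panoramas. Dimensions and pooling are assumptions.
import torch
import torch.nn as nn

class HierarchicalHistoryEncoderSketch(nn.Module):
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        spatial_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        temporal_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # Models relations between the views inside one panoramic observation.
        self.spatial_encoder = nn.TransformerEncoder(spatial_layer, num_layers=2)
        # Models relations between panoramas over time (the trajectory so far).
        self.temporal_encoder = nn.TransformerEncoder(temporal_layer, num_layers=2)

    def forward(self, view_feats):
        # view_feats: (batch, num_steps, num_views, d_model) precomputed ViT features.
        b, t, v, d = view_feats.shape
        # Spatial stage: attend across the views of each panorama independently.
        spatial = self.spatial_encoder(view_feats.reshape(b * t, v, d))
        # Pool each panorama to a single vector (mean pooling as a simple stand-in).
        panorama = spatial.mean(dim=1).reshape(b, t, d)
        # Temporal stage: attend across the panoramas of the history.
        return self.temporal_encoder(panorama)  # (batch, num_steps, d_model)

# Usage: 2 trajectories, 6 past steps, 36 views per panorama, 768-d ViT features.
enc = HierarchicalHistoryEncoderSketch()
history_tokens = enc(torch.randn(2, 6, 36, 768))
print(history_tokens.shape)  # torch.Size([2, 6, 768])
```

The resulting history tokens would then be fed, together with instruction and observation tokens, into the cross-modal stage sketched earlier.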
Research paper: Chen, S., Guhur, P.-L., Schmid, C., and Laptev, I., “History Aware Multimodal Transformer for Vision-and-Language Navigation”, 2021. Link: https://arxiv.org/abs/2110.13309