Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation

Ziad Al-Halah, Santhosh K. Ramakrishnan, Kristen Grauman

[Paper]   [Code]   [Data]  



We propose the first approach that generalizes to a diverse set of semantic navigation tasks (e.g., ImageNav, ObjectNav, RoomNav) and goal modalities (e.g., image, label, audio, sketch) in a novel zero-shot experience learning framework.



Abstract

In reinforcement learning for visual navigation, it is common to develop a model for each new task and train that model from scratch with task-specific interactions in 3D environments. However, this process is expensive; massive amounts of interactions are needed for the model to generalize well. Moreover, this process is repeated whenever there is a change in the task type or the goal modality. We present a unified approach to visual navigation using a novel modular transfer learning model. Our model can effectively leverage its experience from one source task and apply it to multiple target tasks (e.g., ObjectNav, RoomNav, ViewNav) with various goal modalities (e.g., image, sketch, audio, label). Furthermore, our model enables zero-shot experience learning, whereby it can solve the target tasks without receiving any task-specific interactive training. Our experiments on multiple photorealistic datasets and challenging tasks show that our approach learns faster, generalizes better, and outperforms state-of-the-art models by a significant margin.
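To make the "plug and play" structure concrete, below is a minimal PyTorch sketch of the idea: a task-agnostic navigation policy paired with swappable goal encoders that map different goal modalities into one shared embedding space. All module names, network sizes, and the embedding dimension here are assumptions for illustration, not the paper's actual architecture or training code.

# Minimal sketch of the plug-and-play modular idea (assumed names and
# dimensions; NOT the authors' implementation).
import torch
import torch.nn as nn

EMBED_DIM = 512  # assumed size of the shared goal-embedding space


class ImageGoalEncoder(nn.Module):
    """Source-task encoder: maps an image goal into the shared space."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, EMBED_DIM),
        )

    def forward(self, goal_image: torch.Tensor) -> torch.Tensor:
        return self.net(goal_image)


class LabelGoalEncoder(nn.Module):
    """Target-task encoder (e.g., ObjectNav category labels), trained
    offline to map into the same embedding space, so the navigation
    policy needs no new interactive training."""

    def __init__(self, num_labels: int = 21):  # hypothetical label count
        super().__init__()
        self.embed = nn.Embedding(num_labels, EMBED_DIM)

    def forward(self, goal_label: torch.Tensor) -> torch.Tensor:
        return self.embed(goal_label)


class ModularNavPolicy(nn.Module):
    """Task-agnostic navigation policy; the goal encoder is swappable."""

    def __init__(self, goal_encoder: nn.Module, num_actions: int = 4):
        super().__init__()
        self.goal_encoder = goal_encoder
        self.obs_encoder = nn.Sequential(  # encodes the agent's RGB view
            nn.Conv2d(3, 32, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, EMBED_DIM),
        )
        self.actor = nn.Linear(2 * EMBED_DIM, num_actions)

    def forward(self, obs: torch.Tensor, goal) -> torch.Tensor:
        z_obs = self.obs_encoder(obs)          # current observation
        z_goal = self.goal_encoder(goal)       # goal in the shared space
        return self.actor(torch.cat([z_obs, z_goal], dim=-1))


# Train once on the source task (e.g., ImageNav with image goals) ...
policy = ModularNavPolicy(ImageGoalEncoder())
# ... then transfer by plugging in a new goal encoder for a target task,
# reusing the navigation components without task-specific RL.
policy.goal_encoder = LabelGoalEncoder()

The key design point the sketch illustrates is that only the small goal encoder is task-specific; everything downstream of the shared embedding is reused across tasks and modalities.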


Paper

Zero Experience Required: Plug & Play Modular Transfer Learning for Semantic Visual Navigation
Ziad Al-Halah, Santhosh K. Ramakrishnan and Kristen Grauman
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
[paper]

@inproceedings{al-halah2022zsel,
  title = {Zero Experience Required: Plug \& Play Modular Transfer Learning for Semantic Visual Navigation},
  author = {Al-Halah, Ziad and Ramakrishnan, Santhosh K. and Grauman, Kristen},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2022}
}


Code

The code for this work is available here.


Data

The datasets used for this work can be downloaded from here.