Code for our paper GOAT-Bench: A Benchmark for Multi-Modal Lifelong Navigation.
Mukul Khanna*, Ram Ramrakhya*, Gunjan Chhablani, Sriram Yenamandra, Theophile Gervet, Matthew Chang, Zsolt Kira, Devendra Singh Chaplot, Dhruv Batra, Roozbeh Mottaghi
Sample episode from GOAT-Bench
GOAT-Bench is a benchmark for the Go to Any Thing (GOAT) task, where an agent is spawned randomly in an unseen indoor environment and tasked with sequentially navigating to a variable number (5-10) of goal objects, each described via the object's category name (e.g. `couch`), a language description (e.g. `a black leather couch next to a coffee table`), or an image uniquely identifying the goal instance in the environment. We refer to finding each goal in a GOAT episode as a subtask; each GOAT episode comprises 5 to 10 subtasks. The GOAT task is set up in an open-vocabulary setting: unlike many prior works, the agent is not restricted to navigating to a predetermined, closed set of object categories. The agent is expected to reach each goal object before moving on to the next subtask.
Create the conda environment and install all of the dependencies. Mamba is recommended for faster installation:
# Create conda environment. Mamba is recommended for faster installation.
conda_env_name=goat
mamba create -n $conda_env_name python=3.7 cmake=3.14.0 -y
mamba install -n $conda_env_name \
habitat-sim=0.2.3 headless pytorch cudatoolkit=11.3 \
-c pytorch -c nvidia -c conda-forge -c aihabitat -y
# Install this repo as a package
mamba activate $conda_env_name
git clone https://github.com/korayaykor/goat-bench.git
cd goat-bench
pip install -e .
# Install habitat-lab (clone over SSH or HTTPS, whichever you prefer)
git clone --branch v0.2.3 git@github.com:facebookresearch/habitat-lab.git
# or: git clone --branch v0.2.3 https://github.com/facebookresearch/habitat-lab.git
cd habitat-lab
pip install -e habitat-lab
pip install -e habitat-baselines
cd ..  # return to the goat-bench repository root
pip install -r requirements.txt
pip install git+https://github.com/openai/CLIP.git
pip install ftfy regex tqdm GPUtil trimesh seaborn timm scikit-learn einops transformers
git clone https://github.com/facebookresearch/eai-vc.git
cd eai-vc
pip install -e vc_models/
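# Optional sanity check (not part of the official setup): confirm that
# habitat-sim and habitat-lab import cleanly inside the environment.
python -c "import habitat_sim, habitat; print('habitat-sim', habitat_sim.__version__)"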
- Download the HM3D dataset using the instructions here (download the full HM3D dataset for use with Habitat).
- Move the HM3D scene dataset or create a symlink at `data/scene_datasets/hm3d` (see the example commands below).
- Download the GOAT-Bench episode dataset from here.
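For example, assuming the HM3D scenes were downloaded to `~/datasets/hm3d` (a hypothetical path; point it at wherever your copy lives), the symlink can be created with:

mkdir -p data/scene_datasets
ln -s ~/datasets/hm3d data/scene_datasets/hm3d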
The code expects the datasets under the `data` folder in the following layout:
├── goat-bench/
│ ├── data/
│ │ ├── scene_datasets/
│ │ │ ├── hm3d/
│ │ │ │ ├── JeFG25nYj2p.glb
│ │ │ │ └── JeFG25nYj2p.navmesh
│ │ ├── datasets/
│ │ │ ├── goat_bench/
│ │ │ │ ├── hm3d/
│ │ │ │ │ ├── v1/
│ │ │ │ │ │ ├── train/
│ │ │ │ │ │ ├── val_seen/
│ │ │ │ │ │ ├── val_seen_synonyms/
│ │ │ │ │ │ └── val_unseen/
To increase training throughput, we use frozen pretrained visual and text encoders (e.g., CLIP) to encode goals. Since the goal encoders are not finetuned during training, we cache the embeddings of all object categories, language instructions, and image goals on disk. You can download these cached embeddings from the following Hugging Face repo 🤗 using the command below:
git clone https://huggingface.co/datasets/axel81/goat-bench data/goat-assets/
This command downloads the cached goal embeddings and the model checkpoints for the SenseAct-NN monolithic policy.
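If the download succeeded, you should see the goal-embedding caches and policy checkpoints referenced by the evaluation commands later in this README. The directory names below are inferred from those commands and may differ slightly from the repo contents:

ls data/goat-assets/checkpoints/
ls data/goat-assets/goal_cache/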
Run the following command to train the monolithic GOAT policy, which uses goal embeddings generated with the CLIP text and image encoders:
TENSORBOARD_DIR="/path/to/tensorboard/dir/"
CHECKPOINT_DIR="/path/to/checkpoint/dir/"
python -um goat_bench.run --run-type train \
--exp-config config/experiments/ver_goat_monolithic.yaml \
habitat_baselines.num_environments=4 \
habitat_baselines.tensorboard_dir=${TENSORBOARD_DIR} \
habitat_baselines.checkpoint_folder=${CHECKPOINT_DIR}
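# Optional: training curves are written to ${TENSORBOARD_DIR}; a standard
# TensorBoard invocation can be used to monitor them.
tensorboard --logdir ${TENSORBOARD_DIR}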
To run distributed training on more than one GPU on a SLURM-managed compute cluster, use the following sbatch script:
sbatch scripts/train/2-goat-ver-monolithic.sh
Run the following command to evaluate the SenseAct-NN monolithic policy on the `val_seen` evaluation split:
DATA_PATH="data/datasets/goat_bench/hm3d/v1/"
eval_ckpt_path_dir="/path/to/goat-assets/checkpoints/sense_act_nn_monolithic/"
tensorboard_dir="/path/to/tensorboard/"
split="val_seen"
python -um goat_bench.run --run-type eval \
--exp-config config/experiments/ver_goat_monolithic.yaml \
habitat_baselines.num_environments=1 \
habitat_baselines.trainer_name="goat_ppo" \
habitat_baselines.tensorboard_dir=$tensorboard_dir \
habitat_baselines.eval_ckpt_path_dir=$eval_ckpt_path_dir \
habitat.dataset.data_path="${DATA_PATH}/${split}/${split}.json.gz" \
habitat_baselines.load_resume_state_config=False \
habitat_baselines.eval.use_ckpt_config=False \
habitat_baselines.eval.split=$split \
habitat.task.lab_sensors.goat_goal_sensor.image_cache=/path/to/image_goal_embeddings/${split}_embeddings/ \
habitat.task.lab_sensors.goat_goal_sensor.language_cache=/path/to/language_goal_embeddings/${split}_instruction_clip_embeddings.pkl
Similarly, you can evaluate the same checkpoint on the `val_seen_synonyms` and `val_unseen` splits by changing the value of the `split` environment variable in the command above (see the example loop below).
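For example, a simple way to cover all three splits is to loop over `split`; the command inside the loop is identical to the single-split command above (adjust the goal-embedding cache paths to your setup):

for split in val_seen val_seen_synonyms val_unseen; do
  python -um goat_bench.run --run-type eval \
    --exp-config config/experiments/ver_goat_monolithic.yaml \
    habitat_baselines.num_environments=1 \
    habitat_baselines.trainer_name="goat_ppo" \
    habitat_baselines.tensorboard_dir=$tensorboard_dir \
    habitat_baselines.eval_ckpt_path_dir=$eval_ckpt_path_dir \
    habitat.dataset.data_path="${DATA_PATH}/${split}/${split}.json.gz" \
    habitat_baselines.load_resume_state_config=False \
    habitat_baselines.eval.use_ckpt_config=False \
    habitat_baselines.eval.split=$split \
    habitat.task.lab_sensors.goat_goal_sensor.image_cache=/path/to/image_goal_embeddings/${split}_embeddings/ \
    habitat.task.lab_sensors.goat_goal_sensor.language_cache=/path/to/language_goal_embeddings/${split}_instruction_clip_embeddings.pkl
done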
To run the evaluation as a batch job on a SLURM cluster, use the following sbatch script:
sbatch scripts/eval/2-goat-eval.sh
Run the following command to evaluate the SenseAct-NN skill-chain policy, which chains together individual policies trained for each goal modality:
split="val_seen"
DATA_PATH="data/datasets/goat_bench/hm3d/v1/"
tensorboard_dir="/path/to/tensorboard/dir/"
eval_ckpt_path_dir="/path/to/goat-assets/checkpoints/sense_act_nn_skill_chain/"
python -um goat_bench.run \
--run-type eval \
--exp-config config/experiments/ver_goat_skill_chain.yaml \
habitat_baselines.num_environments=1 \
habitat_baselines.trainer_name="goat_ppo" \
habitat_baselines.rl.policy.name=GoatHighLevelPolicy \
habitat_baselines.tensorboard_dir=$tensorboard_dir \
habitat_baselines.eval_ckpt_path_dir=$eval_ckpt_path_dir \
habitat_baselines.checkpoint_folder=$eval_ckpt_path_dir \
habitat.dataset.data_path="${DATA_PATH}/${split}/${split}.json.gz" \
+habitat/task/lab_sensors@habitat.task.lab_sensors.clip_objectgoal_sensor=clip_objectgoal_sensor \
+habitat/task/lab_sensors@habitat.task.lab_sensors.language_goal_sensor=language_goal_sensor \
+habitat/task/lab_sensors@habitat.task.lab_sensors.cache_instance_imagegoal_sensor=cache_instance_imagegoal_sensor \
~habitat.task.lab_sensors.goat_goal_sensor \
habitat.task.lab_sensors.cache_instance_imagegoal_sensor.cache=data/goat-assets/goal_cache/iin/${split}_embeddings/ \
habitat.task.lab_sensors.language_goal_sensor.cache=data/goat-assets/goal_cache/language_nav/${split}_bert_embedding.pkl \
habitat_baselines.load_resume_state_config=False \
habitat_baselines.eval.use_ckpt_config=False \
habitat_baselines.eval.split=$split \
habitat_baselines.eval.should_load_ckpt=False \
habitat_baselines.should_load_agent_state=False
If you use this code or benchmark in your research, please consider citing:
@inproceedings{khanna2024goatbench,
title={GOAT-Bench: A Benchmark for Multi-Modal Lifelong Navigation},
author={Mukul Khanna* and Ram Ramrakhya* and Gunjan Chhablani and Sriram Yenamandra and Theophile Gervet and Matthew Chang and Zsolt Kira and Devendra Singh Chaplot and Dhruv Batra and Roozbeh Mottaghi},
year={2024},
booktitle={CVPR},
}