
Scene Graph Benchmark in Pytorch


Our latest paper on Real-Time Scene Graph Generation is finally available! Please have a look if you're interested: https://arxiv.org/abs/2405.16116. We dive into the current bottlenecks of SGG models under real-time constraints and propose a simple yet very efficient implementation based on YOLOV8. Here are the main results:

[Figure: main results of our real-time SGG implementation]

Background

This implementation is a new benchmark for the task of Scene Graph Generation, based on a fork of the SGG Benchmark by Kaihua Tang. Kaihua's implementation is a good starting point; however, it is quite outdated and misses much of the recent development in the field. My goal with this new codebase is to provide an up-to-date and easy-to-run implementation of common approaches in Scene Graph Generation. This codebase also focuses on real-time and real-world usage of Scene Graph Generation, with dedicated dataset tools and a large choice of object detection backbones. Please note that this codebase is a work in progress; do not expect everything to work properly on the first run. If you find any bugs, please feel free to open an issue or contribute with a PR.

Recent Updates

  • TODO: Change Dataloader to COCO format (in progress).
  • 28.05.2024: Official release of our Real-Time Scene Graph Generation implementation.
  • 23.05.2024: Added support for hyperparameter tuning with the RayTune library, please check it out: Hyperparameters Tuning.
  • 23.05.2024: Added support for the YOLOV10 backbone and the SQUAT relation head!
  • 23.05.2024: Added support for the YOLO-World backbone for open-vocabulary object detection!
  • 10.05.2024: Added support for the PSG Dataset.
  • 03.04.2024: Added support for the IETrans method for data augmentation on the Visual Genome dataset, please check it out: IETrans.
  • 03.04.2024: Updated the demo; it now works with any model, see DEMO.md.
  • 01.04.2024: Added support for Wandb for better visualization during training, tutorial coming soon.

Contents

  1. Overview
  2. Install the Requirements
  3. Prepare the Dataset
  4. Simple Webcam Demo
  5. Supported Models
  6. Metrics and Results for our Toolkit
  7. Training on Scene Graph Generation
  8. Hyperparameters Tuning
  9. Evaluation on Scene Graph Generation
  10. Other Options that May Improve the SGG
  11. Frequently Asked Questions
  12. Citations

Overview

Note from Kaihua Tang (kept for reference):

" This project aims to build a new CODEBASE of Scene Graph Generation (SGG), and it is also a Pytorch implementation of the paper Unbiased Scene Graph Generation from Biased Training. The previous widely adopted SGG codebase neural-motifs is detached from the recent development of Faster/Mask R-CNN. Therefore, I decided to build a scene graph benchmark on top of the well-known maskrcnn-benchmark project and define relationship prediction as an additional roi_head. By the way, thanks to their elegant framework, this codebase is much more novice-friendly and easier to read/modify for your own projects than previous neural-motifs framework (at least I hope so). It is a pity that when I was working on this project, the detectron2 had not been released, but I think we can consider maskrcnn-benchmark as a more stable version with less bugs, hahahaha. I also introduce all the old and new metrics used in SGG, and clarify two common misunderstandings in SGG metrics in METRICS.md, which cause abnormal results in some papers. "

Installation

Check INSTALL.md for installation instructions.

Dataset

Check DATASET.md for instructions regarding dataset preprocessing.

DEMO

I made a small demo to try SGDET with your webcam; you can find it in the demo folder, feel free to have a look! You will need a model trained in SGDET mode for the demo.

Supported Models

Scene Graph Generation approaches can be categorized into one-stage and two-stage approaches. Two-stage approaches are the original formulation of SGG: they decouple the training process into (1) training an object detection backbone and (2) using bounding box proposals and image features from the backbone to train a relation prediction model. One-stage approaches learn both the object and relation features in the same learning stage. This codebase focuses on two-stage approaches; a toy sketch of the decoupling is shown below.
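To make the decoupling concrete, here is a minimal, repo-agnostic sketch in PyTorch; TwoStageSGG and the tensor interfaces are illustrative, not the benchmark's actual API:

```python
import torch
from torch import nn

class TwoStageSGG(nn.Module):
    # Toy illustration of the two-stage decoupling, not the repo's API.
    def __init__(self, detector: nn.Module, relation_head: nn.Module):
        super().__init__()
        self.detector = detector            # stage 1: pretrained object detector
        self.relation_head = relation_head  # stage 2: relation predictor

    def forward(self, images: torch.Tensor):
        # Stage 1: box proposals, labels and pooled features from the backbone.
        boxes, labels, feats = self.detector(images)
        # Stage 2: predicate scores for subject-object pairs built from proposals.
        return self.relation_head(boxes, labels, feats)
```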

Object Detection Backbones

We provide different object detection backbones that can be plugged into any relation prediction head, depending on the use case.

🚀 NEW! No need to train a backbone anymore: we support YOLO-World for fast and easy open-vocabulary inference. Please check it out!

  • YOLOV10: New end-to-end YOLO architecture for SOTA real-time object detection.
  • YOLOV8-World: SOTA in real-time open-vocabulary object detection!
  • YOLOV9: SOTA in real-time object detection.
  • YOLOV8: SOTA in real-time object detection.
  • Faster-RCNN: This is the original backbone used in most SGG approaches. It is based on a ResNeXt-101 feature extractor and an RPN for regression and classification. See the original paper for reference. Performance is 38.52/26.35/28.14 mAP on the VG train/val/test sets respectively. You can find the original pretrained model by Kaihua here.

Relation Heads

We have tried to compile the main approaches for relation modeling in this codebase.

Debiasing methods

On top of relation heads, several debiasing methods have been proposed over the years with the aim of increasing the accuracy of baseline models on tail classes.

Data Augmentation methods

Due to severe biases in datasets, the task of Scene Graph Generation has also been tackled through data-centric approaches.

Model ZOO

We provide some of the pre-trained weights for evaluation or usage in downstream tasks, please see MODEL_ZOO.md.

Metrics and Results (IMPORTANT)

Explanations of the metrics in our toolkit and the reported results are given in METRICS.md.

Alternate links

Since OneDrive links might be broken in mainland China, we also provide the following alternate links for all the pretrained models and dataset annotations using BaiduNetDisk:

Link: https://pan.baidu.com/s/1oyPQBDHXMQ5Tsl0jy5OzgA Extraction code: 1234

Faster R-CNN pre-training

The following command can be used to train your own Faster R-CNN model:

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --master_port 10001 --nproc_per_node=4 tools/detector_pretrain_net.py --config-file "configs/e2e_relation_detector_X_101_32_8_FPN_1x.yaml" SOLVER.IMS_PER_BATCH 8 TEST.IMS_PER_BATCH 4 DTYPE "float16" SOLVER.MAX_ITER 50000 SOLVER.STEPS "(30000, 45000)" SOLVER.VAL_PERIOD 2000 SOLVER.CHECKPOINT_PERIOD 2000 MODEL.RELATION_ON False OUTPUT_DIR /home/kaihua/checkpoints/pretrained_faster_rcnn SOLVER.PRE_VAL False

where:

  • CUDA_VISIBLE_DEVICES and --nproc_per_node are the IDs and the number of the GPUs you use;
  • --config-file is the config we use, where you can change other parameters;
  • SOLVER.IMS_PER_BATCH and TEST.IMS_PER_BATCH are the training and testing batch sizes respectively;
  • DTYPE "float16" enables Automatic Mixed Precision;
  • SOLVER.MAX_ITER is the maximum number of iterations;
  • SOLVER.STEPS are the iterations at which we decay the learning rate;
  • SOLVER.VAL_PERIOD and SOLVER.CHECKPOINT_PERIOD are the intervals for running validation and saving checkpoints;
  • MODEL.RELATION_ON toggles the relationship head (since this is the pretraining phase for Faster R-CNN only, we turn it off);
  • OUTPUT_DIR is the output directory for checkpoints and logs (e.g., /home/username/checkpoints/pretrained_faster_rcnn);
  • SOLVER.PRE_VAL controls whether we run validation before training.

YOLOV8 Backbone

If you want to use YOLOV8 as a backbone instead of Faster-RCNN, you first need to train a model using the official ultralytics implementation. Once you have a model, you can modify this config file and change the path PRETRAINED_DETECTOR_CKPT to your model weights. Please note that you will also need to change the variables SIZE and OUT_CHANNELS accordingly if you use another variant of YOLOV8 (nano, small or large, for instance). To train an SGG model with YOLOV8 as a backbone, you need to set the META_ARCHITECTURE variable in the same config file to GeneralizedYOLO. A sketch of these overrides is shown below. You can then follow the standard procedure for PREDCLS, SGCLS or SGDET training below.
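For reference, here is a minimal sketch of those overrides done programmatically rather than by editing the YAML, assuming the yacs-style cfg object defined by sgg_benchmark/config/defaults.py; the import path, the config file name and the weights path are assumptions:

```python
# Hedged sketch: programmatic config overrides for a YOLOV8 backbone.
# Assumes the yacs-style cfg from sgg_benchmark/config/defaults.py.
from sgg_benchmark.config import cfg

cfg.merge_from_file("configs/e2e_relation_yolov8.yaml")  # hypothetical config path
cfg.merge_from_list([
    "MODEL.META_ARCHITECTURE", "GeneralizedYOLO",            # switch backbone wrapper
    "MODEL.PRETRAINED_DETECTOR_CKPT", "weights/yolov8m.pt",  # your ultralytics weights
])
# Remember to also adapt SIZE and OUT_CHANNELS in the YAML if you
# use another YOLOV8 variant (nano, small, large, ...).
```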

Perform training on Scene Graph Generation

There are three standard protocols: (1) Predicate Classification (PredCls): taking ground truth bounding boxes and labels as inputs; (2) Scene Graph Classification (SGCls): using ground truth bounding boxes without labels; (3) Scene Graph Detection (SGDet): detecting scene graphs from scratch. We use the argument --task to select the protocol.

For Predicate Classification (PredCls), we need to set:

--task predcls

For Scene Graph Classification (SGCls):

--task sgcls

For Scene Graph Detection (SGDet):

--task sgdet

Predefined Models

We abstract various SGG models as different relation-head predictors in the file roi_heads/relation_head/roi_relation_predictors.py. To select one of our predefined models, use MODEL.ROI_RELATION_HEAD.PREDICTOR.

For Neural-MOTIFS Model:

MODEL.ROI_RELATION_HEAD.PREDICTOR MotifPredictor

For Iterative-Message-Passing(IMP) Model (Note that SOLVER.BASE_LR should be changed to 0.001 in SGCls, or the model won't converge):

MODEL.ROI_RELATION_HEAD.PREDICTOR IMPPredictor

For VCTree Model:

MODEL.ROI_RELATION_HEAD.PREDICTOR VCTreePredictor

For Transformer Model (note that the Transformer model needs SOLVER.BASE_LR changed to 0.001, SOLVER.SCHEDULE.TYPE to WarmupMultiStepLR, SOLVER.MAX_ITER to 16000, SOLVER.IMS_PER_BATCH to 16, and SOLVER.STEPS to (10000, 16000)), which is provided by Jiaxin Shi:

MODEL.ROI_RELATION_HEAD.PREDICTOR TransformerPredictor

For Unbiased-Causal-TDE Model:

MODEL.ROI_RELATION_HEAD.PREDICTOR CausalAnalysisPredictor

The default settings are under configs/e2e_relation_X_101_32_8_FPN_1x.yaml and sgg_benchmark/config/defaults.py. The priority is: command line > yaml > defaults.py.

Customize Your Own Model

If you want to customize your own model, you can refer to sgg_benchmark/modeling/roi_heads/relation_head/model_XXXXX.py and sgg_benchmark/modeling/roi_heads/relation_head/utils_XXXXX.py. You also need to add the corresponding nn.Module in sgg_benchmark/modeling/roi_heads/relation_head/roi_relation_predictors.py. Sometimes you may also need to change the inputs & outputs of the module through sgg_benchmark/modeling/roi_heads/relation_head/relation_head.py. A hypothetical skeleton is sketched below.
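As a starting point, here is a hypothetical skeleton of such a predictor; the real predictors in roi_relation_predictors.py have richer constructor and forward signatures, so treat this purely as an outline:

```python
import torch
from torch import nn

class MyRelationPredictor(nn.Module):
    # Hypothetical skeleton; register it in roi_relation_predictors.py.
    def __init__(self, in_channels: int, hidden_dim: int, num_predicates: int):
        super().__init__()
        # Scores predicates from concatenated subject/object representations.
        self.classifier = nn.Sequential(
            nn.Linear(2 * in_channels, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, num_predicates),
        )

    def forward(self, head_rep: torch.Tensor, tail_rep: torch.Tensor) -> torch.Tensor:
        return self.classifier(torch.cat((head_rep, tail_rep), dim=-1))
```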

As to the Unbiased-Causal-TDE, there are some additional parameters you need to know. MODEL.ROI_RELATION_HEAD.CAUSAL.EFFECT_TYPE selects the causal effect analysis type during inference (test): "none" is the original likelihood, "TDE" is the total direct effect, "NIE" is the natural indirect effect, and "TE" is the total effect. MODEL.ROI_RELATION_HEAD.CAUSAL.FUSION_TYPE has two choices: "sum" or "gate". Since Unbiased Causal TDE Analysis is model-agnostic, we support Neural-MOTIFS, VCTree and VTransE: MODEL.ROI_RELATION_HEAD.CAUSAL.CONTEXT_LAYER selects among them, with three choices: motifs, vctree, vtranse.

Note that during training, we always set MODEL.ROI_RELATION_HEAD.CAUSAL.EFFECT_TYPE to be 'none', because causal effect analysis is only applicable to the inference/test phase.
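For intuition only, here is a hedged sketch of what the TDE computation amounts to at inference time; predict, feats and pairs are illustrative placeholders, the repo implements this logic inside CausalAnalysisPredictor, and the paper's actual intervention replaces features with a learned mean rather than zeros:

```python
import torch

def tde_scores(predict, feats, pairs):
    # Conceptual Total Direct Effect: factual logits minus counterfactual
    # logits obtained by intervening on the visual input.
    factual = predict(feats, pairs)                           # biased prediction
    counterfactual = predict(torch.zeros_like(feats), pairs)  # intervened input
    return factual - counterfactual                           # TDE scores
```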

Examples of the Training Command

NEW: I replaced training by iteration (steps) with training by epochs (one iteration over the whole dataset). Controlling the training loop by iterations is still possible, but epochs make it easier in my opinion; you can try it with the argument SOLVER.MAX_EPOCH (see below).

By default, only the last checkpoint will be saved, which is not very efficient. You can choose to save only the best checkpoint instead with the argument --save-best.

Training Example 1: (PredCls, Motif Model)

CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --master_port 10025 --nproc_per_node=2 tools/relation_train_net.py --task predcls --save-best --config-file "configs/e2e_relation_X_101_32_8_FPN_1x.yaml" MODEL.ROI_RELATION_HEAD.PREDICTOR MotifPredictor SOLVER.IMS_PER_BATCH 12 TEST.IMS_PER_BATCH 2 DTYPE "float16" SOLVER.MAX_EPOCH 20 MODEL.PRETRAINED_DETECTOR_CKPT ./checkpoints/pretrained_faster_rcnn/model_final.pth OUTPUT_DIR ./checkpoints/motif-precls-exmp

where MODEL.PRETRAINED_DETECTOR_CKPT is the pretrained Faster R-CNN model you want to load and OUTPUT_DIR is the output directory used to save checkpoints and the log. Since we use WarmupReduceLROnPlateau as the learning rate scheduler for SGG, SOLVER.STEPS is not required anymore.

Training Example 2: (SGCls, Causal, TDE, SUM Fusion, MOTIFS Model)

CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --master_port 10026 --nproc_per_node=2 tools/relation_train_net.py --task sgcls --save-best  --config-file "configs/e2e_relation_X_101_32_8_FPN_1x.yaml" MODEL.ROI_RELATION_HEAD.PREDICTOR CausalAnalysisPredictor MODEL.ROI_RELATION_HEAD.CAUSAL.EFFECT_TYPE none MODEL.ROI_RELATION_HEAD.CAUSAL.FUSION_TYPE sum MODEL.ROI_RELATION_HEAD.CAUSAL.CONTEXT_LAYER motifs  SOLVER.IMS_PER_BATCH 12 TEST.IMS_PER_BATCH 2 DTYPE "float16" SOLVER.MAX_EPOCH 20 MODEL.PRETRAINED_DETECTOR_CKPT ./checkpoints/pretrained_faster_rcnn/model_final.pth OUTPUT_DIR ./checkpoints/causal-motifs-sgcls-exmp

Hyperparameters Tuning

Required libraries: pip install "ray[data,train,tune]" optuna tensorboard

We provide a training loop for hyperparameter tuning in hyper_param_tuning.py. This script uses the Ray Tune library for an efficient hyperparameter search. You can define a search_space object with different values related to the optimizer (AdamW and SGD are supported for now) or directly customize the model structure with model parameters (for instance Linear layer or MLP dimensions). The ASHAScheduler scheduler is used for early stopping of bad trials. The default objective to optimize is the overall loss, but this can be customized to specific loss values or to standard metrics such as mean_recall; a sketch of a possible search space is shown below.
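For illustration, here is a minimal, hedged sketch of such a search_space with Ray Tune; the keys ("lr", "optimizer", "hidden_dim") are hypothetical and must match whatever hyper_param_tuning.py actually reads:

```python
from ray import tune
from ray.tune.schedulers import ASHAScheduler

# Hypothetical search space: the key names are illustrative and must
# match what hyper_param_tuning.py expects.
search_space = {
    "lr": tune.loguniform(1e-5, 1e-2),           # optimizer learning rate
    "optimizer": tune.choice(["AdamW", "SGD"]),   # the two supported optimizers
    "hidden_dim": tune.choice([256, 512, 1024]),  # e.g. Linear/MLP dimensions
}

# ASHA early-stops underperforming trials.
scheduler = ASHAScheduler(metric="loss", mode="min", grace_period=1)
```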

To launch the script, do as follows:

CUDA_VISIBLE_DEVICES=0 python tools/hyper_param_tuning.py --save-best --task sgdet --config-file "/home/maelic/SGG-Benchmark/configs/IndoorVG/e2e_relation_yolov10.yaml" MODEL.ROI_RELATION_HEAD.PREDICTOR PrototypeEmbeddingNetwork DTYPE "float16" SOLVER.PRE_VAL True GLOVE_DIR /home/maelic/glove OUTPUT_DIR /home/maelic/SGG-Benchmark/checkpoints/IndoorVG4/SGDET/penet-yolov10m SOLVER.IMS_PER_BATCH 8

The config and OUTPUT_DIR paths need to be absolute to allow faster loading. Many terminal outputs are disabled by default during tuning, through the cfg.VERBOSE variable.

To watch the results with TensorBoard:

tensorboard --logdir=/home/maelic/ray_results/train_relation_net_2024-06-23_15-28-01

Evaluation

Examples of the Test Command

Test Example 1: (PredCls, Motif Model)

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 10027 --nproc_per_node=1 tools/relation_test_net.py --config-file "configs/e2e_relation_X_101_32_8_FPN_1x.yaml" MODEL.ROI_RELATION_HEAD.USE_GT_BOX True MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL True MODEL.ROI_RELATION_HEAD.PREDICTOR MotifPredictor TEST.IMS_PER_BATCH 1 DTYPE "float16" GLOVE_DIR /home/kaihua/glove MODEL.PRETRAINED_DETECTOR_CKPT /home/kaihua/checkpoints/motif-precls-exmp OUTPUT_DIR /home/kaihua/checkpoints/motif-precls-exmp

Test Example 2: (SGCls, Causal, TDE, SUM Fusion, MOTIFS Model)

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 10028 --nproc_per_node=1 tools/relation_test_net.py --config-file "configs/e2e_relation_X_101_32_8_FPN_1x.yaml" MODEL.ROI_RELATION_HEAD.USE_GT_BOX True MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL False MODEL.ROI_RELATION_HEAD.PREDICTOR CausalAnalysisPredictor MODEL.ROI_RELATION_HEAD.CAUSAL.EFFECT_TYPE TDE MODEL.ROI_RELATION_HEAD.CAUSAL.FUSION_TYPE sum MODEL.ROI_RELATION_HEAD.CAUSAL.CONTEXT_LAYER motifs  TEST.IMS_PER_BATCH 1 DTYPE "float16" GLOVE_DIR /home/kaihua/glove MODEL.PRETRAINED_DETECTOR_CKPT /home/kaihua/checkpoints/causal-motifs-sgcls-exmp OUTPUT_DIR /home/kaihua/checkpoints/causal-motifs-sgcls-exmp

Other Options that May Improve the SGG

  • For some models (not all), turning MODEL.ROI_RELATION_HEAD.POOLING_ALL_LEVELS on or off will affect the performance of predicate prediction, e.g., turning it off improves VCTree PredCls but not the corresponding SGCls and SGGen. For the reported results of VCTree, we simply turn it on for all three protocols, as for the other models.

  • For some models (not all), a crazy fusion proposed by Learning to Count Objects significantly improves the results; it looks like f(x1, x2) = ReLU(x1 + x2) - (x1 - x2)**2. It can be used to combine the subject and object features in roi_heads/relation_head/roi_relation_predictors.py. For now, most of our models just concatenate them as torch.cat((head_rep, tail_rep), dim=-1); see the sketch after this list.

  • Not to mention the hidden dimensions in the models, e.g., MODEL.ROI_RELATION_HEAD.CONTEXT_HIDDEN_DIM. Due to limited time, we didn't fully explore all the settings in this project, so I won't be surprised if you improve our results by simply changing one of the hyper-parameters.
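As a concrete reference, here is a minimal PyTorch sketch of that fusion next to the default concatenation; the function name count_fusion and the toy tensors are ours, not the repo's:

```python
import torch
import torch.nn.functional as F

def count_fusion(x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
    # f(x1, x2) = ReLU(x1 + x2) - (x1 - x2)**2, from "Learning to Count Objects".
    return F.relu(x1 + x2) - (x1 - x2) ** 2

head_rep = torch.randn(4, 512)  # toy subject representations
tail_rep = torch.randn(4, 512)  # toy object representations

fused = count_fusion(head_rep, tail_rep)          # fusion variant, shape (4, 512)
concat = torch.cat((head_rep, tail_rep), dim=-1)  # current default, shape (4, 1024)
```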

Frequently Asked Questions

  1. Q: Fail to load the given checkpoints. A: The model to be loaded is based on the last_checkpoint file in the OUTPUT_DIR path. If you fail to load the given pretrained checkpoints, it is probably because the last_checkpoint file still contains the path from my workstation rather than your own path.

  2. Q: AssertionError on "assert len(fns) == 108073". A: If you are working on the VG dataset, it is probably caused by a wrong DATASETS (data path) in sgg_benchmark/config/paths_catalog.py. If you are working on your own custom dataset, just comment out the assertions.

  3. Q: AssertionError on "l_batch == 1" in model_motifs.py A: The original MOTIFS code only supports evaluation on 1 GPU. Since my reimplemented motifs is based on their code, I keep this assertion to make sure it won't cause any unexpected errors.

Citations

If you find this project helpful for your research, please kindly consider citing our project or papers in your publications.

@misc{neau2024realtime,
      title={Real-Time Scene Graph Generation}, 
      author={Maëlic Neau and Paulo E. Santos and Karl Sammut and Anne-Gwenn Bosser and Cédric Buche},
      year={2024},
      eprint={2405.16116},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}