The following is adapted from Danfei Xu and neural-motifs.

Note that our codebase intends to support an attribute head as well, so our `VG-SGG.h5` and `VG-SGG-dicts.json` differ from the original versions in Danfei Xu and neural-motifs. We add attribute information and rename them to `VG-SGG-with-attri.h5` and `VG-SGG-dicts-with-attri.json`. The code we use to generate them is located at `datasets/vg/generate_attribute_labels.py`. Although we encourage later researchers to explore the value of attribute features, in our paper "Unbiased Scene Graph Generation from Biased Training" we follow the conventional setting and turn off the attribute head in both the detector pretraining and the relationship prediction parts for fair comparison, as does the default setting of this codebase.
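If you want to sanity-check the generated file, here is a minimal inspection sketch (assuming `h5py` is installed; the exact dataset names inside the file are determined by the generation script, not by this README):

```python
# Minimal sketch: inspect the contents of VG-SGG-with-attri.h5.
# Assumes h5py is installed; the datasets printed here depend on the
# actual file and are not guaranteed by this README.
import h5py

with h5py.File("datasets/vg/VG-SGG-with-attri.h5", "r") as f:
    for name, item in f.items():
        if isinstance(item, h5py.Dataset):
            print(f"{name}: shape={item.shape}, dtype={item.dtype}")
        else:
            print(f"{name}: group with keys {list(item.keys())}")
```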
- Download the VG images part1 and part2. Extract these images to `datasets/vg/VG_100K`. If you want to use another directory, please link it in `DATASETS['VG_stanford_filtered']['img_dir']` of `pysgg/config/paths_catelog.py` (see the sketch after this list).
- Download the scene graphs and extract them to `datasets/vg/VG-SGG-with-attri.h5`, or edit the path in `DATASETS['VG_stanford_filtered_with_attribute']['roidb_file']` of `pysgg/config/paths_catelog.py`.
- Link the images into the project folder:

```bash
ln -s /path-to-vg/VG_100K datasets/vg/stanford_spilt/VG_100k_images
ln -s /path-to-vg/VG-SGG-with-attri.h5 datasets/vg/stanford_spilt/VG-SGG-with-attri.h5
```
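For reference, a simplified sketch of what the two entries mentioned above might look like in `pysgg/config/paths_catelog.py`; the real `DATASETS` dict contains many more entries and fields, so treat this as illustrative rather than the file's actual contents:

```python
# Illustrative sketch only: the real DATASETS dict in
# pysgg/config/paths_catelog.py has many more entries and fields.
DATASETS = {
    "VG_stanford_filtered": {
        # Point this at your extracted VG images if you did not use
        # the default datasets/vg/VG_100K location.
        "img_dir": "/path-to-vg/VG_100K",
    },
    "VG_stanford_filtered_with_attribute": {
        # Point this at your VG-SGG-with-attri.h5 if it lives elsewhere.
        "roidb_file": "/path-to-vg/VG-SGG-with-attri.h5",
    },
}
```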
The initial dataset (`oidv6/v4-train/test/validation-annotations-vrd.csv`) can be downloaded from the official website. Open Images is a very large dataset; however, most of its images do not have relationship annotations. To this end, we filter out the images without relationship annotations and keep the resulting subset (see the .ipynb for processing, and the sketch below). You can download the processed datasets: Openimage V6 (38GB) and Openimage V4 (28GB).
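The filtering idea itself is simple; here is a minimal sketch (assuming `pandas`; the `ImageID` column name follows the official VRD CSV format, and the actual processing lives in the .ipynb mentioned above):

```python
# Minimal sketch: find which training images actually carry
# relationship annotations, so the remaining images can be dropped.
# Assumes pandas; "ImageID" follows the official VRD CSV format.
import pandas as pd

ann = pd.read_csv("oidv6-train-annotations-vrd.csv")
ids_with_relationships = set(ann["ImageID"].unique())
print(f"{len(ids_with_relationships)} training images have relationship annotations")
```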
The dataset directory contains the `images` and `annotations` folders. Link the `open_image_v4` and `open_image_v6` directories into `datasets/openimages`, and you are ready to go.
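To double-check the layout before training, a small verification sketch (the folder names below are taken from this README and are assumptions about your local setup):

```python
# Minimal sketch: verify the expected Open Images directory layout.
# Folder names are assumptions based on this README.
from pathlib import Path

root = Path("datasets/openimages")
for version in ("open_image_v4", "open_image_v6"):
    for sub in ("images", "annotations"):
        p = root / version / sub
        print(f"{p}: {'ok' if p.is_dir() else 'MISSING'}")
```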