
Unable to run in google colab #288

Open
dheerajvarma24 opened this issue Feb 27, 2023 · 1 comment
Error 1: Failed to display the inference output.

qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/usr/local/lib/python3.8/dist-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem
Available platform plugins are: xcb.

Error 2: Failed to save the inference video (output video)

OpenCV: FFMPEG: tag 0x34363248/'H264' is not supported with codec id 27 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x31637661/'avc1'
[ERROR:[email protected]] global /io/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (2927) open Could not find encoder for codec_id=27, error: Encoder not found
[ERROR:[email protected]] global /io/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp (3002) open VIDEOIO/FFMPEG: Failed to initialize VideoWriter


dheerajvarma24 commented Feb 27, 2023

To overcome error 1:
Google Colab runs headless, so it has no display for OpenCV's Qt GUI windows to connect to. Check this Reference.
Therefore,

  1. Comment out the "cv2.imshow" call at line 80 in demo.py.
  2. Replace the "cv2.imshow" calls in the show_all_image function in debugger.py (/src/lib/utils/) with "pass".
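Instead of deleting the calls outright, you can guard them so the same code still works on a machine with a display. This is only a sketch; `display_available` is a hypothetical helper, not part of CenterTrack:

```python
import os

def display_available(env=None):
    """Return True only when an X display is attached.

    Colab runs headless, so DISPLAY is unset there and GUI calls
    like cv2.imshow raise the qt.qpa.xcb "could not connect to
    display" error shown above.
    """
    env = os.environ if env is None else env
    return bool(env.get("DISPLAY"))

# In demo.py / debugger.py, wrap each GUI call:
# if display_available():
#     cv2.imshow(window_name, img)
```

On a desktop with X running the guard is a no-op; in Colab it silently skips the GUI calls.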

To overcome error 2:

  1. At line 51 in src/demo.py the output is saved to the results folder. Create the results directory manually if Colab does not create it automatically. Also use
    fourcc = cv2.VideoWriter_fourcc('m', 'p', '4', 'v') in place of fourcc = cv2.VideoWriter_fourcc(*'H264')
  2. Always pass the video frame height and width (found in the video's properties) as arguments to demo.py when running inference:
    !python demo.py tracking --load_model ../models/coco_tracking.pth --demo ../videos/MOT16-13-raw-avi.avi --save_video --video_h 540 --video_w 960
    or, for any video of your choice:
    !python demo.py tracking --load_model ../models/mot17_half.pth --num_class 1 --demo ../videos/test_pedestrian.mp4 --save_video --video_h 1080 --video_w 1920
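For what it's worth, `cv2.VideoWriter_fourcc` just packs the four characters of the codec tag into a little-endian 32-bit integer, which is why the FFMPEG log above prints tags like 0x31637661/'avc1'. A pure-Python sketch (the helper name `fourcc_code` is my own, not an OpenCV API):

```python
def fourcc_code(tag):
    """Pack a 4-character codec tag into the 32-bit FOURCC integer,
    byte-for-byte what cv2.VideoWriter_fourcc(*tag) returns."""
    assert len(tag) == 4
    code = 0
    for i, ch in enumerate(tag):
        code |= ord(ch) << (8 * i)  # first char lands in the low byte
    return code

mp4v = fourcc_code("mp4v")  # MPEG-4 Part 2; the pip OpenCV wheel can encode this
h264 = fourcc_code("H264")  # needs an H.264 encoder, which Colab's cv2 build lacks
```

Swapping the tag works because the pip-installed OpenCV/FFMPEG build ships an MPEG-4 (mp4v) encoder but no H.264 encoder, which is exactly what the "Encoder not found" error reports.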

NOTE: If the DCNv2 and nuscenes-devkit packages are not cloned automatically by the --recursive flag, clone them manually into their respective directories as described in the installation guide.

NOTE: I have tested this for 2D object tracking with the coco_tracking and mot .pth files.

For reference, the complete Colab commands:

# allocate GPU
!pip install torch==1.4.0 torchvision==0.5.0
!pip install cython;
!pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'


!git clone --recursive https://github.com/xingyizhou/CenterTrack
%cd CenterTrack
!pip install -r requirements.txt

# install nuscenes-devkit
%cd src/tools/
!git clone https://github.com/nutonomy/nuscenes-devkit # clone if it is not automatically downloaded by `--recursive`.

# install DCNv2 
%cd ../lib/model/networks/
!git clone https://github.com/CharlesShang/DCNv2/ # clone if it is not automatically downloaded by `--recursive`.
%cd DCNv2
!./make.sh

# adding model zoo (since I saved them in my drive)
from google.colab import drive 
drive.mount('/content/drive')

%cd /content/CenterTrack
# symlink the model zoo (.pth files) in the mounted Drive into the CenterTrack models dir.
%%shell
ln -s /content/drive/MyDrive/models /content/CenterTrack/


######################
# Before running the inference, make sure to modify code as described in overcome errors 1 & 2 above.
%cd src/
# Here I have taken a sample video from MOT16 train dataset.
!python demo.py tracking --load_model ../models/coco_tracking.pth --demo ../videos/MOT16-13-raw-avi.avi --save_video --video_h 540 --video_w 960
# run on any video of your choice.
!python demo.py tracking --load_model ../models/mot17_half.pth --num_class 1 --demo ../videos/test_pedestrian.mp4 --save_video --video_h 1080 --video_w 1920

The output video is saved in the CenterTrack/results directory.
