
Make jetson_utils work with gstreamer and yolov8 api #209

Open
samthephantom opened this issue May 18, 2024 · 1 comment


@samthephantom

Hi dusty. I decode an H.264 RTSP camera stream with the GStreamer Python API (I built the pipeline around nvv4l2decoder and nvvidconv). Here are the details (the full pipeline string is sketched after the callback below):

  1. The pipeline converts the image from NVMM to CPU memory with nvvidconv.
  2. I pull the decoded image data from the appsink with sink.emit("pull-sample").
  3. The image data is read into a NumPy array like this:
import time

import cv2
import numpy as np

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def new_buffer(sink, data):
    # pull the decoded sample from the appsink
    sample = sink.emit("pull-sample")
    buf = sample.get_buffer()
    caps = sample.get_caps()

    # wrap the raw BGRx buffer in a (height, width, 4) numpy array
    arr = np.ndarray(
        (caps.get_structure(0).get_value('height'),
         caps.get_structure(0).get_value('width'),
         4),
        buffer=buf.extract_dup(0, buf.get_size()),
        dtype=np.uint8)

    # drop the padding channel: BGRx -> BGR for yolov8 inference
    data.img = cv2.cvtColor(arr, cv2.COLOR_BGRA2BGR)

    data.data_valid()
    data.frame_shape = data.img.shape
    data.frame_timestamp = time.time()

    # wrap the frame counter at 10000
    if data.frame < 10000:
        data.frame += 1
    else:
        data.frame = 0
    return Gst.FlowReturn.OK
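For context, the pipeline string itself isn't shown above; a typical form for this setup looks like the sketch below (the RTSP URL is a placeholder and the exact caps are an assumption), with the callback wired to the appsink's new-sample signal:

Gst.init(None)

# hypothetical pipeline: hardware H.264 decode, then nvvidconv copies NVMM -> CPU memory as BGRx
pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://192.168.1.10:554/stream "
    "! rtph264depay ! h264parse ! nvv4l2decoder "
    "! nvvidconv ! video/x-raw,format=BGRx "
    "! appsink name=sink emit-signals=true max-buffers=1 drop=true")

sink = pipeline.get_by_name("sink")
sink.connect("new-sample", new_buffer, data)  # data is the shared state object used in new_buffer
pipeline.set_state(Gst.State.PLAYING)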

Please share your experience on how to make cudaImage work with GStreamer, thanks. Also, I couldn't find out whether the yolov8 Python API accepts a cudaImage as input.
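
What I'm imagining is something along these lines (an untested sketch; the URL and model file are placeholders, and going through numpy via cudaToNumpy is my assumption, since whether ultralytics takes a cudaImage directly is exactly the open question):

import cv2
from jetson_utils import videoSource, cudaToNumpy, cudaDeviceSynchronize
from ultralytics import YOLO

# videoSource builds the hardware-decode GStreamer pipeline internally
source = videoSource("rtsp://192.168.1.10:554/stream")  # hypothetical URL
model = YOLO("yolov8n.pt")

while True:
    img = source.Capture()   # cudaImage in GPU/unified memory (rgb8 by default)
    if img is None:          # capture timeout, try again
        continue
    cudaDeviceSynchronize()  # make sure the frame is written before CPU access
    frame = cudaToNumpy(img) # maps the cudaImage into a numpy array (no copy on Jetson)
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)  # ultralytics expects BGR numpy input
    results = model(frame)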

@johnnynunez
Contributor

https://github.com/STCE-Detector/small-fast-detector/blob/main/tracker/track_recognize.py
