Hi dusty. I decode an H.264 RTSP camera stream with the GStreamer Python API (I built the pipeline with nvv4l2decoder and nvvidconv). Here are the details:
The pipeline copies frames from NVMM to CPU RAM with nvvidconv.
I pull the decoded frames from an appsink via sink.emit("pull-sample").
The image data is then read and converted to a numpy array.
Please share your experience on how to make cudaImage work with GStreamer, thanks. I also couldn't find out whether the YOLOv8 Python API accepts a cudaImage as input.