Significant accuracy drop when running Flux DiT in BF16 with TensorRT 10.5 on an A800, but not with PyTorch BF16
Description
When running Flux DiT inference with TensorRT 10.5 on an A800 using the BF16 data type, I observed a significant drop in accuracy. Running the same model in BF16 with PyTorch shows no comparable accuracy loss.
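To make the accuracy gap concrete, a minimal sketch of how the two backends' outputs could be compared, assuming the PyTorch BF16 output and the TensorRT BF16 output have each been dumped to NumPy arrays (the function name, tolerances, and metrics below are illustrative, not from the report):

```python
import numpy as np

def compare_outputs(ref, test, rtol=1e-2, atol=1e-2):
    """Compare a reference output (e.g. PyTorch BF16) against a test
    output (e.g. TensorRT BF16) and report common error metrics.
    Tolerances here are hypothetical starting points for BF16."""
    ref = np.asarray(ref, dtype=np.float32)
    test = np.asarray(test, dtype=np.float32)
    abs_err = np.abs(ref - test)
    cosine = np.dot(ref.ravel(), test.ravel()) / (
        np.linalg.norm(ref) * np.linalg.norm(test) + 1e-12
    )
    return {
        "max_abs_err": float(abs_err.max()),
        "mean_abs_err": float(abs_err.mean()),
        "cosine_sim": float(cosine),
        "allclose": bool(np.allclose(ref, test, rtol=rtol, atol=atol)),
    }
```

A large `max_abs_err` or a cosine similarity noticeably below 1.0 between the two BF16 runs would quantify the degradation described above.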
Environment
TensorRT Version: 10.5
NVIDIA GPU: A800
NVIDIA Driver Version: 535.54.03
CUDA Version: 12.2
CUDNN Version:
Operating System:
Python Version (if applicable):
Tensorflow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if so, version):
Relevant Files
Model link:
Steps To Reproduce
Commands or scripts:
Have you tried the latest release?:
Can this model run on other frameworks? For example, run the ONNX model with ONNX Runtime (`polygraphy run <model.onnx> --onnxrt`):