
Unable to use directml on NVIDIA GPU. UnimplementedError: Graph execution error. #362

Open
github-user-en opened this issue Jun 10, 2023 · 1 comment


@github-user-en

Hello,

I'm currently trying to run TensorFlow on a Windows computer using the tensorflow-directml-plugin as discussed in this guide.

My computer is equipped with an NVIDIA Quadro K1200 GPU, which supports DirectX12. You can check its capabilities here.

I'm using the NVIDIA Graphics Driver version 528.89.

The code I'm working with is located in this notebook. When I run the fit() method, I encounter an error message:
UnimplementedError: Graph execution error.
This issue is visible in the output of cell 20 in the notebook.

I am accessing this computer via Remote Desktop Protocol (RDP), and dxdiag doesn't list my NVIDIA GPU among its Display Adapters; instead it shows "Microsoft Remote Display Adapter." However, Device Manager correctly lists the NVIDIA GPU as an active device.

Here are my questions:

  1. Could the remote connection be causing this error?
  2. The machine also has an Intel i7-6700 CPU with built-in graphics. In this context, when tf.config.list_physical_devices('GPU') outputs GPU:0, is it referring to the built-in Intel graphics or the NVIDIA GPU? If it's the built-in Intel graphics, could this be the cause of the error?
  3. If none of the above factors are causing the error, do you have any ideas about what might be causing it?
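Regarding question 2, one way to check which physical adapter GPU:0 maps to is to query the device details. Below is a sketch using the standard TF 2.x API tf.config.experimental.get_device_details; the import guard is only there so the snippet also runs on machines where TensorFlow isn't installed:

```python
import importlib.util

# Only probe devices if TensorFlow is actually available on this machine.
if importlib.util.find_spec("tensorflow") is not None:
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    for gpu in gpus:
        # get_device_details returns a dict; "device_name" typically names the
        # underlying adapter, which shows whether GPU:0 is the Intel iGPU or
        # the NVIDIA Quadro.
        details = tf.config.experimental.get_device_details(gpu)
        print(gpu.name, details.get("device_name", "unknown"))
else:
    gpus = []
    print("tensorflow is not installed; run this on the target machine")
```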

I appreciate your assistance and guidance in resolving this issue.

Thank you.

@PatriceVignola
Collaborator

The plugin should be able to ignore the Remote Desktop Adapter, so it's unexpected that it is still being enumerated in your case. One thing you can do is set the DML_VISIBLE_DEVICES environment variable to only list the device that you're interested in (e.g. DML_VISIBLE_DEVICES="1").
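For reference, a minimal sketch of setting the variable from Python itself. The key point is that it must be set before tensorflow is imported, or it has no effect on device enumeration; the index "1" is just an example and depends on your adapter order:

```python
import os

# DML_VISIBLE_DEVICES must be set before TensorFlow is imported;
# setting it after the import has no effect on which adapters the
# DirectML plugin enumerates.
os.environ["DML_VISIBLE_DEVICES"] = "1"  # expose only adapter index 1 (example)

# Import TensorFlow only after the variable is set:
# import tensorflow as tf
# print(tf.config.list_physical_devices("GPU"))
```

Alternatively, set it in the shell before launching Python (e.g. `set DML_VISIBLE_DEVICES=1` in a Windows command prompt).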

Unfortunately, we had to pause development of this plugin until further notice, so it's not something we can fix at the moment. For the time being, all the latest DirectML features and performance improvements are going into onnxruntime for inference scenarios. We'll update this issue if/when things change.
