
operator 'aten::std_mean.correction' is not currently supported on the DML backend #536

Open
tinnet opened this issue Dec 7, 2023 · 8 comments
Labels
pytorch-directml Issues in PyTorch when using its DirectML backend

Comments

tinnet commented Dec 7, 2023

I've encountered another operator that has to fall back to the CPU under DML; I couldn't find it in the roadmap wiki.

...\Fooocus_win64_2-1-791\Fooocus\modules\anisotropic.py:132: UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at D:\a_work\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:17.)
s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)

For context, this is Fooocus running on a Radeon RX 7800 XT.
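Until the op lands in torch-directml, one possible workaround is to patch the call site in anisotropic.py so the composite op is never hit. The sketch below is an assumption, not an official fix: `std_mean_dml` is a hypothetical helper name, and it presumes the basic reductions (`mean`, `sum`, `pow`, `sqrt`) are supported on the DML backend. It mirrors the Fooocus call, which uses `keepdim=True` and the default `correction=1`.

```python
import torch

def std_mean_dml(x, dim=(1, 2, 3), correction=1):
    # Hypothetical replacement for torch.std_mean(x, dim=dim, keepdim=True),
    # built only from primitive ops (mean, sub, pow, sum, sqrt) to avoid the
    # unsupported composite aten::std_mean.correction on DML.
    m = x.mean(dim=dim, keepdim=True)
    n = 1
    for d in dim:
        n *= x.shape[d]
    # Sample variance with Bessel-style correction: sum((x - m)^2) / (n - correction)
    var = (x - m).pow(2).sum(dim=dim, keepdim=True) / (n - correction)
    return var.sqrt(), m

# Sanity check against the reference implementation on CPU.
g = torch.randn(2, 3, 4, 4)
s_ref, m_ref = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)
s, m = std_mean_dml(g)
```

In anisotropic.py the line `s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)` would then become `s, m = std_mean_dml(g)`. Whether this is actually faster than the CPU fallback on a given GPU is untested here.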

@rickfeldschau

Same issue here. I'm also requesting implementation of aten::std_mean.correction, since it should speed up model loading as currently implemented in Fooocus.

However, this is partly due to relatively recent changes in Fooocus' code to speed things up. For more information, I've opened an issue on the Fooocus repo.

@Thelionsfan

Anyone figure out how to fix this?

@moharnab123saikia

I ran into the same issue today.

@Thelionsfan

Still no fix?

Samkist commented Jan 5, 2024

also have this issue.

@Adele101 Adele101 added the pytorch-directml Issues in PyTorch when using its DirectML backend label Jan 8, 2024
f0n51 commented Jan 11, 2024

Same issue here

@patientx

RX 6600, same problem. If we cancel the generation, the warning doesn't appear on subsequent ones; I don't know whether the fallback is still in effect, though.

@virenchocha

Same issue here, on AMD Vega 8 integrated graphics. Should we overclock AMD GPUs to compensate?


10 participants