Add NHWC support for group normalization #126635
base: main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/126635
Note: Links to docs will display an error until the docs builds have been completed.
❌ 10 New Failures, 2 Pending, 1 Unrelated Failure as of commit c9565ce with merge base e3230f8.
NEW FAILURES — the following jobs have failed.
BROKEN TRUNK — the following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
auto cur_b = b[cur_sample * C + cur_channel];
Y[index] = (static_cast<T_ACC>(X[index]) + cur_b) * cur_a;
  }
}
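To make the per-element step above concrete, here is a minimal CPU reference sketch of the same scale/bias application over an NHWC-laid-out buffer. It assumes a linear index of `((n * H + h) * W + w) * C + c` and per-(sample, channel) coefficients `a` and `b` of shape [N, C]; the function name and signature are illustrative, not the PR's.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// CPU reference for the scale/bias step in the snippet above, assuming NHWC
// layout. For each element: Y = (X + b[n, c]) * a[n, c], matching the kernel
// body's arithmetic. Names here are hypothetical, not from the PR.
void apply_scale_bias_nhwc(const std::vector<float>& X, std::vector<float>& Y,
                           int64_t N, int64_t H, int64_t W, int64_t C,
                           const std::vector<float>& a,
                           const std::vector<float>& b) {
  for (int64_t n = 0; n < N; ++n) {
    for (int64_t h = 0; h < H; ++h) {
      for (int64_t w = 0; w < W; ++w) {
        for (int64_t c = 0; c < C; ++c) {
          // Channels are innermost in NHWC, so adjacent threads/iterations
          // over c touch adjacent memory.
          int64_t index = ((n * H + h) * W + w) * C + c;
          Y[index] = (X[index] + b[n * C + c]) * a[n * C + c];
        }
      }
    }
  }
}
```

In the CUDA kernel, the innermost `c` dimension maps to adjacent threads so that global memory accesses coalesce; the CPU loop above only demonstrates the indexing arithmetic.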
I'm actually not sure how to achieve this behavior using tensor iterators and gpu_kernel, so it may be the case that this is unneeded.
for (int64_t c = 0; c < group_channels; c++) {
  val = welford_op.reduce(val, static_cast<T_ACC>(X[index + c]), index + c);
}
This kernel uses a different indexing strategy that works for NHWC tensors, and it uses Welford's algorithm. Aside from that, the logic is very similar.
case MemoryFormat::ChannelsLast: {
  ApplyScaleBiasNHWCKernel<T><<<N * G, num_threads, 0, cuda_stream>>>(X_data, Y_data, N, height, width, C, D*HxW, a_data, b_data);
  C10_CUDA_KERNEL_LAUNCH_CHECK();
  break;
I do not know how to do this with a tensor iterator. If someone could show me how, that would be great.
I intend to add tests for this in
Really looking forward to this! I've found that, at least in some cases, a naive implementation is actually faster than the existing native group norm kernel when using channels-last format, and I'd love to see how much better a proper channels-last kernel does.
Interested to know the context. What GPU and architecture are you on? If possible, could you give me a minimal reproducible example of a naive implementation outperforming native, so I could take a look?
This was on a 3090, using Stable Diffusion 1.5 in inference mode. I'm not sure that it would be easy to make a minimal reproducible example, because I think it is at least partially dependent on having operations dispatched fairly far ahead of how fast they execute on the GPU. But to summarize, I first made sure that every
Fixes #111824
Currently, if the user specifies their group normalization as NHWC format, PyTorch defaults to NCHW tensors and converts. This conversion is not immediately obvious to the user unless they check the format themselves, which is not intuitive. This PR adds support for NHWC on CUDA by adding the necessary kernels.
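To illustrate why dedicated kernels are needed rather than a layout conversion: the same logical element (n, c, h, w) lives at a different linear offset under the two memory formats, so a kernel written for contiguous NCHW indexing reads the wrong elements from a channels-last buffer. A minimal sketch of the two offset computations (helper names are illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Linear offset of logical element (n, c, h, w) in a contiguous NCHW buffer:
// channels are outermost (after batch), spatial dims innermost.
int64_t offset_nchw(int64_t n, int64_t c, int64_t h, int64_t w,
                    int64_t C, int64_t H, int64_t W) {
  return ((n * C + c) * H + h) * W + w;
}

// Linear offset of the same element in a channels-last (NHWC) buffer:
// channels are innermost, so neighboring channels are adjacent in memory.
int64_t offset_nhwc(int64_t n, int64_t c, int64_t h, int64_t w,
                    int64_t C, int64_t H, int64_t W) {
  return ((n * H + h) * W + w) * C + c;
}
```

A channels-last-aware kernel uses the second mapping directly, which is what removes the hidden NHWC-to-NCHW conversion this PR addresses.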
cc: @mikaylagawarecki