Qwen2 error when loading from checkpoint #478
Labels: currently fixing
Works as expected when loading the base model, but when a LoRA checkpoint is loaded in place of the base model, unsloth returns:

Unsloth cannot patch Attention layers with our manual autograd engine since either LoRA adapters are not enabled or a bias term (like in Qwen) is used.

The model still loads, but performing inference returns gibberish. Currently using the latest git version of unsloth.

Thanks.

Comments

Also, if the trained model is merged/saved and then loaded, this is returned:

...and also generates gibberish during inference.

Hmmm, weird - I'll check this, sorry about the issue.
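For reference, loading a fine-tuned LoRA checkpoint with unsloth is typically done by pointing `FastLanguageModel.from_pretrained` at the adapter directory instead of the base model name. A minimal sketch of the load path that triggers the error above, assuming a hypothetical checkpoint directory (substitute your own paths and settings):

```python
from unsloth import FastLanguageModel

# Hypothetical checkpoint path for illustration -- the directory that
# holds adapter_config.json and the adapter weights from training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="outputs/checkpoint-500",
    max_seq_length=2048,
    dtype=None,          # auto-detect dtype
    load_in_4bit=True,
)

# Switch patched layers into inference mode before generating.
FastLanguageModel.for_inference(model)
```

With a Qwen2 base, this is the point at which the "cannot patch Attention layers ... a bias term (like in Qwen) is used" message appears, since Qwen's attention projections carry bias terms unlike Llama's.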