Issues: huggingface/peft
Fail to use zero_init to construct llama2 with deepspeed zero3 and qlora (#1844, opened Jun 11, 2024 by CHNRyan)
Tutorial notebook for applying PEFT with DNA Language models (#1837, opened Jun 9, 2024 by rahulbshrestha)
Different results when merging LORA weights into the base model vs. when not (#1836, opened Jun 9, 2024 by ankur6ue)
[BugReport] init_lora_weights with pissa is not compatible with deepspeed stage3 (#1826, opened Jun 5, 2024 by wsp317)
AdaLora: rank remains constant (to init_r value) across training (#1801, opened May 24, 2024 by geoffvdr)
How to finetune embeddings and LM head as a single layer when they are tied? (#1750, opened May 21, 2024 by GokulNC)
cannot import name 'get_peft_config' from 'peft' (unknown location) (#1748, opened May 20, 2024 by jiyuwangbupt)
Initialization for LoRA weights A and B initialized (#1728, opened May 13, 2024 by sanaullah-06)
TypeError: unsupported operand type(s) for *: 'Parameter' and 'NoneType' (#1721, opened May 9, 2024 by misonsky)
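The `TypeError` in issue #1721 above can be reproduced generically in plain Python; this sketch does not use PEFT and only illustrates the error class (the `scale` helper is a hypothetical stand-in, not a PEFT function): multiplying any value by `None`, e.g. a weight or scaling factor that was never initialized, raises exactly this message shape.

```python
# Hypothetical helper, not from PEFT: multiplies a value by a factor.
# If `factor` is still None (never initialized), Python raises
# "unsupported operand type(s) for *: ... and 'NoneType'".
def scale(value, factor):
    return value * factor


try:
    scale(3.0, None)  # factor was never set
except TypeError as exc:
    message = str(exc)

print(message)  # unsupported operand type(s) for *: 'float' and 'NoneType'
```

In the reported traceback the left operand is a `Parameter` rather than a `float`, but the failure mode is the same: some attribute expected to hold a tensor or scalar is still `None` at multiply time.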
RuntimeError: only Tensors of floating point dtype can require gradients for QLoRA since transformers 4.40 (#1720, opened May 9, 2024 by dipanjanS)
eval_loss showing NaN but train_loss decreases and goes to NaN after a couple of steps while fine-tuning gemma model with additional vocab (#1715, opened May 7, 2024 by sidtandon2014)