Issues: huggingface/peft
#1894: PeftModel.from_pretrained does not load modules_to_save from the config file (opened Jun 28, 2024 by double-fire-0)
#1893: LoRA fine-tuning gradients are scaled by an unknown constant factor (opened Jun 28, 2024 by goliaro)
#1890: ValueError: Trying to set a tensor of shape torch.Size([43176, 8192]) in "weight" (which has shape torch.Size([0])), this look incorrect. (opened Jun 27, 2024 by KarasZhang)
#1889: Local pytest run puts torch.allclose() out of range in test_4bit_lora_mixed_adapter_batches_lora (opened Jun 27, 2024 by kallewoof)
#1883: After fine-tuning, the model inference is abnormal (opened Jun 24, 2024 by mumu029)
#1881: eval_loss missing when using a PEFT model with SFTTrainer (opened Jun 21, 2024 by J4Q8)
#1876: Model with LoRA adapter errors when saving with safe_serialization=True (opened Jun 20, 2024 by mliu-aqtc)
#1863: Geometric Parametrization (GmP) [CLIP]: is it compatible with PEFT? (opened Jun 16, 2024 by zer0int)
#1860: Do we have to delete the PiSSA adapter after save_pissa_as_lora? (opened Jun 15, 2024 by hiyouga)
#1844: Failure to use zero_init to construct Llama 2 with DeepSpeed ZeRO-3 and QLoRA (opened Jun 11, 2024 by CHNRyan)
#1837: Tutorial notebook for applying PEFT to DNA language models (opened Jun 9, 2024 by rahulbshrestha)
#1826: [Bug report] init_lora_weights with PiSSA is not compatible with DeepSpeed stage 3 (opened Jun 5, 2024 by wsp317)
#1801: AdaLoRA: rank remains constant (at the init_r value) throughout training (opened May 24, 2024 by geoffvdr)
#1750: How to fine-tune embeddings and LM head as a single layer when they are tied? (opened May 21, 2024 by GokulNC)