Issues: huggingface/peft
- LLaMA Adapter fails during inference due to mixed-precision weights (#455), by HamidShojanazeri, closed Jun 1, 2023
- Bug when passing the subfolder parameter to the PeftModel.load_adapter method (#718), by ShayDuane, closed Jul 19, 2023
- Inference on multiple GPUs with LoRA weights raises a device-side assert triggered CUDA error (#709), by changyeli, closed Aug 31, 2023
- Prototype LoRA/Adapter/Prefix-tuning support - Arxiv (#648), by he20010515, closed Aug 9, 2023 (labeled "PRs welcome to address this")
- High memory consumption during LoRA training (#658), by DanielRoeder1, closed Aug 9, 2023
- Calling merge_and_unload then save_pretrained uploads weights twice (#692), by fozziethebeat, closed Aug 20, 2023
- P-tuning (GPT Understands, too) (#627), by MichelleHS777, closed Aug 4, 2023 (labeled "solved")
- Tuning a "shallow encoder and deep decoder" model (#689), by Zz-dong, closed Aug 20, 2023
- Matrix mismatch when trying to adapt Falcon with QLoRA, how to fix? (#685), by brando90, closed Jul 22, 2023
- Add dense layers to TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING (#735), by BramVanroy, closed Aug 27, 2023