Issues: huggingface/peft
#40 Better warning message or error when target module names are not available (by sayakpaul, closed Feb 2, 2023)
#41 Number of trainable parameters for a LoRA model w.r.t the original model (by sayakpaul, closed Feb 1, 2023)
#42 Add support for T-Few [PRs welcome: contributions from community members are welcome on this issue] (by lewtun, closed May 13, 2023)
#44 Add an example demoing the use of PEFT for ViT Image Classification (by sayakpaul, closed Feb 7, 2023)
#45 Add an example demoing the use of PEFT for SegFormer Semantic Segmentation (by sayakpaul, closed Feb 9, 2023)
#52 Is it possible to support multiple GPUs for distributed training at the same time? (by ScottishFold007, closed Feb 9, 2023)
#57 [lora] push_to_hub() and save_pretrained() errors and potential inconsistencies (by sayakpaul, closed Feb 7, 2023)
#61 Do I need to report the model file in bin format during the training process? (by ScottishFold007, closed Feb 9, 2023)
#62 Enhancement: detach dtype for prompt embeddings from the model itself (by mayank31398, closed Feb 13, 2023)
#65 Can I set the number of steps after which to save the model file? (by ScottishFold007, closed Feb 10, 2023)
#70 [Feature] Add support for Donut (Multimodal Model) [PRs welcome: contributions from community members are welcome on this issue] (by WaterKnight1998, closed May 7, 2023)
#74 CUDA Error when fine-tuning GPT-J for CausalLM [solved] (by JohnnyRacer, closed May 3, 2023)
#76 GPU and CPU memory utilization while running the peft_lora_clm_accelerate_ds_zero3_offload.py script (by karthikmurugadoss, closed Feb 13, 2023)
#83 model.named_parameters() giving tensors of shape 0 with DeepSpeed CPU offloading (by karthikmurugadoss, closed Feb 14, 2023)
#84 Inference with load_in_8bit=True after fine-tuning gives "mat1 and mat2 must have the same dtype" (by acul3, closed Apr 3, 2023)