System Info

PyTorch version: 2.2.1
peft version: 0.11.1
Reproduction

The following snippet can trigger this bug:

```python
import multiprocessing

import peft  # importing peft is what triggers the early cpp_extension import
from torch import nn


def func():
    m = nn.Linear(2, 3).cuda(0)


if __name__ == "__main__":
    proc = multiprocessing.Process(target=func)
    proc.start()
    proc.join()
```

This raises:

```
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
```
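As a user-side workaround, the `'spawn'` start method that the error message recommends avoids the fork problem, since the child starts as a fresh interpreter instead of inheriting the parent's CUDA state. A minimal sketch, with the CUDA call replaced by a CPU-only stand-in so it runs without a GPU:

```python
import multiprocessing


def func():
    # Stand-in for the workload in the repro (nn.Linear(2, 3).cuda(0));
    # kept CPU-only so this sketch runs on machines without a GPU.
    return None


if __name__ == "__main__":
    # 'spawn' launches a fresh interpreter for the child, so it does not
    # inherit a CUDA context from the parent process.
    ctx = multiprocessing.get_context("spawn")
    proc = ctx.Process(target=func)
    proc.start()
    proc.join()
    assert proc.exitcode == 0  # the child exits cleanly under spawn
```

This only sidesteps the symptom in user code, though; the root cause remains the eager import inside peft.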
Issue: `torch.utils.cpp_extension` is imported eagerly at module level in `peft/tuners/boft/layer.py`, so merely importing peft initializes the CUDA context in the main process.

How to fix: import the `cpp_extension` module inside the `get_fbd_cuda` function, so it is only loaded when actually needed.
Expected behavior

No RuntimeError. The CUDA context is not initialized in the main process.
Thanks for investigating. The proposed solution sounds reasonable. Are you interested in creating the PR?