Issues: bentoml/OpenLLM
#1026 How to deploy a model using a single-machine, multi-card approach? (opened Jun 25, 2024 by ttaop)
#1014 bug: Attempting to invoke OpenLLM from LangChain results in an error (opened Jun 14, 2024 by Said-Ikki)
#967 bug: Error when installing vLLM via pip install "openllm[vllm]" (opened Apr 25, 2024 by Developer-atomic-amardeep)
#934 Deploying an LLM on an on-premises server so users can access it locally from a work laptop's web browser (opened Mar 18, 2024 by sanket038)
#929 I'm having trouble getting started with OpenLLM; I don't want to use conda and I have WSL2 (opened Mar 11, 2024 by Lightwave234)
#904 bug: Error when sending a POST request to the BentoML container service (opened Feb 13, 2024 by hahmad2008)
#903 bug: Requests with "use_beam_search: true" fail with an unclear exception message (opened Feb 13, 2024 by yan-virin)
#895 How to update the prompt template without changing the openllm-core config (opened Feb 11, 2024 by hahmad2008)