Issues: xenova/transformers.js
Converted QA model answers in lower case, original model does not. What am I doing wrong?
question (Further information is requested)
#623 opened Mar 4, 2024 by MarceloEmmerich
Option to easily offload tokenizer to worker
enhancement (New feature or request)
#613 opened Feb 29, 2024 by josephrocca
Chunk-based caching in tokenizer encode/decode
enhancement (New feature or request)
#612 opened Feb 29, 2024 by josephrocca
Add Twitter/twhin-bert-large
new model (Request a new model)
#609 opened Feb 28, 2024 by do-me
DepthEstimationPipeline crashes with large images
bug (Something isn't working)
#593 opened Feb 20, 2024 by jparismorgan
TypeError: fetch failed at Object.fetch
bug (Something isn't working)
#591 opened Feb 19, 2024 by samlhuillier
Does WEBGPU Truly Enhance Inference Time Acceleration?
question (Further information is requested)
#586 opened Feb 14, 2024 by kishorekaruppusamy
Using a server backend to generate masks - doublelotus
question (Further information is requested)
#585 opened Feb 13, 2024 by jeremiahmark
dims undefined when converting own model to ONNX
bug (Something isn't working)
#584 opened Feb 12, 2024 by khromov
How can we use the sam-vit-huge in production?
question (Further information is requested)
#581 opened Feb 9, 2024 by moneyhotspring
Getting 'fs is not defined' when trying the latest "background removal" functionality in the browser?
question (Further information is requested)
#577 opened Feb 8, 2024 by lancejpollard
Can GPU acceleration be used when using this library in a node.js environment?
question (Further information is requested)
#575 opened Feb 7, 2024 by SchneeHertz
Add support for indictrans2
new model (Request a new model)
#571 opened Feb 6, 2024 by bil-ash
Does await pipeline() support multithreading? I've tried all kinds of multithreaded calls and it still returns the results one by one in order.
question (Further information is requested)
#567 opened Feb 5, 2024 by a414166402
Compatibility with the latest onnxruntime 1.17.0
enhancement (New feature or request)
#560 opened Feb 3, 2024 by nemphys
Segfault in node:alpine-18 Docker Container
bug (Something isn't working)
#555 opened Feb 1, 2024 by Marviel
Installation fails on Ubuntu 22.04 due to outdated version of the package sharp
bug (Something isn't working)
#552 opened Jan 30, 2024 by schkovich
Whisper model word-level timestamps broken
bug (Something isn't working)
#551 opened Jan 30, 2024 by BjoernRave
Converting a model to ONNX using the given script is hard (fails most of the time)
question (Further information is requested)
#543 opened Jan 27, 2024 by bajrangCoder
How can I use this model?
question (Further information is requested)
#539 opened Jan 25, 2024 by wfk007
YOLOS model extremely slow
bug (Something isn't working)
#533 opened Jan 23, 2024 by tarekziade
env.backends.onnx.logLevel = "fatal"; does not work
bug (Something isn't working)
#529 opened Jan 22, 2024 by sroussey