Issues: huggingface/text-generation-inference
#2864 · Tool Calling using Vercel's AI SDK not working as intended · opened Dec 23, 2024 by kldzj
#2854 · text-generation-inference:latest-trtllm is missing dependencies to run models · opened Dec 18, 2024 by selalipop
#2853 · Entire system crashes when get to warm up model · opened Dec 17, 2024 by ad-astra-video
#2851 · random text generation from Qwen2-VL-7B-Instruct with TGI3 · opened Dec 17, 2024 by DongyoungKim2
#2842 · TGI hangs when running two extremely long prompts at once · opened Dec 14, 2024 by JohnTheNerd
#2838 · Model warmup fails after adding Triton indexing kernels · opened Dec 13, 2024 by YaserJaradeh
#2835 · Server stucks at model warming phase for codestral-22b on 4xH100 · opened Dec 13, 2024 by phymbert
#2828 · TypeError: '>=' not supported between instances of 'NoneType' and 'int' · opened Dec 11, 2024 by KartDriver
#2823 · Error for Qwen2-VL-2B-Instruct using v3.0.0 · opened Dec 11, 2024 by tobiasvanderwerff
#2820 · [broken-compatibility] chat completion breaks base64 standard / openAI spec · opened Dec 11, 2024 by lucyknada
#2805 · text-generation-inference False make install exception · opened Dec 6, 2024 by tangliangwu
#2800 · NotImplementedError: 4bit quantization is not supported for AutoModel · opened Dec 4, 2024 by Jaimin-Nividous