Issues: intel-analytics/ipex-llm
The reference results are blank with the DeepSeek model and our generate example code
#12696 opened Jan 10, 2025 by K-Alex13
ipex-llm docker: SIGABRT on Lunar Lake (Intel Core Ultra 7 258V)
user issue
#12687 opened Jan 9, 2025 by raj-ritu17
[ipex-llm[cpp]][ollama] Low performance and GPU usage when running the minicpm3-4B model
#12675 opened Jan 8, 2025 by jianjungu
Performance degradation on dGPU Arc A770 after loading more than one LLM model
#12660 opened Jan 7, 2025 by qing-xu-intel
Fundamental issue in responses from IPEX-LLM with Ollama
user issue
#12636 opened Dec 30, 2024 by anandnandagiri
[LNL][npu_llm] LNL NPU doesn't support the codegeex4-all-9b model
user issue
#12633 opened Dec 29, 2024 by johnysh
Docker documentation is incorrect; none of the benchmarks work. Testing needed
user issue
#12629 opened Dec 27, 2024 by TimoGoetze
PI_ERROR_BUILD_PROGRAM_FAILURE when running Ollama using ipex-llm on a 12450H CPU
#12597 opened Dec 23, 2024 by qadzhang
WSL / Docker ipex-llm-inference-cpp-xpu:latest SIGSEGV on model load
#12592 opened Dec 21, 2024 by vladislavdonchev
[BMG dgfx][ipex-llm[cpp]] Low performance and GPU usage when running llama.cpp inference on B580
user issue
#12586 opened Dec 20, 2024 by jianjungu
RuntimeError: XPU out of memory on WSL2 vLLM running Qwen2.5-7B-Instruct, sym_int4, Arc A770
user issue
#12584 opened Dec 19, 2024 by nkt-dk