
ipex-llm

https://github.com/intel/ipex-llm
Updated 1/28/2026, 5:32:39 PM · Indexed 5/10/2026, 9:24:06 AM

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.
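As a rough illustration of the HuggingFace transformers integration mentioned above, here is a minimal sketch in the style of ipex-llm's documented API, assuming the `ipex-llm[xpu]` package and Intel GPU drivers are installed; the model ID, prompt, and generation settings below are placeholders, not part of this listing.

import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "Qwen/Qwen2-1.5B-Instruct"  # placeholder model ID

# Load the model with 4-bit (INT4) weight quantization applied by ipex-llm,
# then move it to the Intel GPU, exposed as the "xpu" device.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,
    trust_remote_code=True,
)
model = model.to("xpu")

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "What is an Intel XPU?"  # placeholder prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")

# Generate on the Intel GPU and decode the result.
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))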

Pinokio Apps Using This Repo
No Pinokio apps using this repo yet.
