- Groningen, The Netherlands
- 22:14 (UTC +01:00)
- https://danieldk.eu/
- @danieldekok
- @danieldk@mastodon.social
- https://bsky.app/profile/danieldk.eu
Stars
High performance AI inference stack. Built for production. @ziglang / @openxla / MLIR / @bazelbuild
Nix language server, based on nix libraries [maintainer=@inclyc,@Aleksanaa]
A smarter cd command. Supports all major shells.
💥 Blazing fast terminal file manager written in Rust, based on async I/O.
Parametric Key Caps
A split keyboard layout optimized for Portuguese, English, working with numbers, and software programming with Vim plugins.
Dynamic Memory Management for Serving LLMs without PagedAttention
Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server.
Argilla is a collaboration tool for AI engineers and domain experts to build high-quality datasets
Large Language Model Text Generation Inference
FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens.
TrailBlazer enables you to seamlessly move through important project marks as quickly and efficiently as possible to make your workflow blazingly fast ™.
Vim-fork focused on extensibility and usability
An interactive and powerful Git interface for Neovim, inspired by Magit
🚀⭐ Minimalistic, powerful and extremely customizable Zsh prompt
A secure, fast, and adaptable OS based on the seL4 microkernel
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU su…
The local version of the backend and UI for the gProfiler agent, featuring advanced flamegraph analysis tools. For the free cloud version, see https://profiler.granulate.io