Stars
LLaMA-Omni is a low-latency and high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve speech capabilities at the GPT-4o level.
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
Metadata driven Databricks Delta Live Tables framework for bronze/silver pipelines
The Clay Foundation Model (in development)
Tiny matrix multiplication ASIC with 4-bit math
Want a faster ML processor? Do it yourself! -- A framework for playing with custom opcodes to accelerate TensorFlow Lite for Microcontrollers (TFLM). Online tutorial: https://google.githu…
A guide on how to package HDL code (VHDL or Verilog) for PYNQ environments
A template project for beginning new Chisel work
Submission template for Tiny Tapeout 9 - Verilog HDL Projects
Generator Bootcamp Material: Learn Chisel the Right Way
Chisel: A Modern Hardware Design Language
Dataflow QNN inference accelerator examples on FPGAs
Installing and testing YOLOv8 on a Raspberry Pi 5 with a USB Coral TPU
Tiny status page generated by a Python script
Examples of using the Membrane Framework
Open source implementation of AlphaFold3
Elixir library used to capture MJPEG video on a Raspberry Pi using the camera module.
Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models
A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer
Learn to use WebGPU for native graphics applications in C++
Bringing stable diffusion models to web browsers. Everything runs inside the browser with no server support.
A lightweight library for portable low-level GPU computation using WebGPU.