Here's an overview of the features we intend to work on in the near future. Feel free to comment on this issue below to suggest new directions for us to improve the library!
## Models
We are always adding new models to the library. Here's what is currently on our radar.
- Vision models
- NLP models
  - Continue Gemma* releases.
    - As more models are released in the Gemma family of models, we will bring these to KerasHub on an ongoing basis.
  - Llama 3.1.
  - Qwen 2 (#1679).
  - Mixtral.
  - all-MiniLM.
- Audio models
  - Add a high-level Whisper speech-to-text task with `generate()` support (see the sketch after this list).
  - Audio Spectrogram Transformer.
  - wav2vec.
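The Whisper item above refers to the `generate()` API that KerasHub's existing text-generation tasks already expose; a new audio task would presumably follow the same pattern. Here is a minimal sketch of that pattern, assuming the `gemma_2b_en` preset is available in your environment (the preset name and access requirements are assumptions, not part of the roadmap):

```python
import keras_hub

# Load a causal LM task from a preset; weights and tokenizer are fetched
# automatically. "gemma_2b_en" is one example preset name.
causal_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma_2b_en")

# generate() handles preprocessing, sampling, and detokenization in one call.
print(causal_lm.generate("The Gemma family of models is", max_length=64))
```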
## Feature improvements
- Feature extractor task support.
- Improved generation.
- Improved quantization support (here and in core Keras).
  - int4 support, QAT, more quantization options (see the int8 sketch after this list).
- Improved multi-host training support on JAX.
- An option for Pythonic preprocessing decoupled from `tf.data` (see the preprocessing sketch after this list).
- Support RLHF and other instruction fine-tuning options beyond supervised fine-tuning.
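The quantization item in the list above builds on the post-training int8 path that already exists in core Keras via `Model.quantize()`; the planned work (int4, QAT) would extend that entry point. A minimal sketch, assuming Keras 3 and using a toy functional model in place of a real KerasHub backbone:

```python
import keras

# A toy model standing in for a KerasHub backbone.
inputs = keras.Input(shape=(16,))
x = keras.layers.Dense(32, activation="relu")(inputs)
outputs = keras.layers.Dense(8)(x)
model = keras.Model(inputs, outputs)

# Post-training int8 quantization as supported by core Keras today; the
# roadmap items above would add more modes (e.g. int4) and QAT workflows.
model.quantize("int8")
```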
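The preprocessing item refers to the fact that KerasHub tokenizers and preprocessors are Keras layers designed to run inside `tf.data` pipelines; a decoupled option would make the pure-Python path first class. They can already be called eagerly, as in this sketch (the preset name is an assumption):

```python
import keras_hub

# Tokenizers plug into tf.data pipelines, but can also be called directly
# on plain Python strings for quick, eager preprocessing.
tokenizer = keras_hub.models.GemmaTokenizer.from_preset("gemma_2b_en")

token_ids = tokenizer("The quick brown fox.")  # eager, single example
print(token_ids)
```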
## Integrations
- Continue to add conversion support for Hugging Face Transformers and timm checkpoints (see the sketch after this list).
  - See this blog post.
- Support JetStream (#1863).
  - Allow native, high-throughput JAX LLM inference on TPUs.
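Checkpoint conversion from the Hugging Face Hub is already wired into `from_preset()` via `hf://` handles for a growing set of architectures. A sketch of that flow, assuming the referenced repo is accessible to you and its architecture has a converter (the repo name is illustrative):

```python
import keras_hub

# Load a Transformers-format checkpoint straight from the Hugging Face Hub;
# KerasHub converts the safetensors weights on the fly.
causal_lm = keras_hub.models.GemmaCausalLM.from_preset("hf://google/gemma-2b")

print(causal_lm.generate("Why is the sky blue?", max_length=64))
```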