MiniCPM-V 2.6: A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone
Database for AI. Store vectors, images, texts, videos, etc. Use with LLMs/LangChain. Store, query, version, and visualize any AI data. Stream data in real time to PyTorch/TensorFlow. https://activeloop.ai
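A minimal sketch of that workflow, assuming the deeplake v3 Python API; the dataset path and sample file are illustrative, not from the project docs:

```python
import deeplake

# Create a local multimodal dataset with an image tensor and a text tensor.
ds = deeplake.empty("./my_multimodal_ds", overwrite=True)
ds.create_tensor("images", htype="image", sample_compression="jpeg")
ds.create_tensor("captions", htype="text")

# Append one sample; deeplake.read lazily loads the image file.
ds.append({"images": deeplake.read("cat.jpg"), "captions": "a cat on a sofa"})

# Stream the dataset into PyTorch, as the description advertises.
loader = ds.pytorch(batch_size=2, shuffle=True)
```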
ModelScope: bringing the notion of Model-as-a-Service to life.
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o | an open-source multimodal chat model approaching GPT-4o performance
A state-of-the-art open visual language model | multimodal pretrained model
An easier way to start building LLM-empowered multi-agent applications.
Implementation/replication of DALL-E, OpenAI's text-to-image transformer, in PyTorch
Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.
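A minimal retrieval sketch with Chinese-CLIP, following the cn_clip API shown in the project README (load_from_name, clip.tokenize, get_similarity); the image path and captions are illustrative:

```python
import torch
from PIL import Image
import cn_clip.clip as clip
from cn_clip.clip import load_from_name

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = load_from_name("ViT-B-16", device=device)
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
texts = clip.tokenize(["一只猫", "一只狗"]).to(device)  # "a cat", "a dog"

with torch.no_grad():
    # Image-text similarity logits; softmax gives per-caption probabilities.
    logits_per_image, logits_per_text = model.get_similarity(image, texts)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()
```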
Unified embedding generation and search engine. Also available on cloud - cloud.marqo.ai
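A minimal indexing-and-search sketch with Marqo's Python client, assuming a local Marqo server on its default port; the index name and documents are illustrative:

```python
import marqo

mq = marqo.Client(url="http://localhost:8882")
mq.create_index("multimodal-demo")

# tensor_fields selects which fields get embedded for vector search.
mq.index("multimodal-demo").add_documents(
    [
        {"Title": "Cat photo", "Description": "A small cat sitting on a sofa"},
        {"Title": "Bus schedule", "Description": "City bus routes and timetables"},
    ],
    tensor_fields=["Description"],
)

# Semantic search over the embedded field.
results = mq.index("multimodal-demo").search("animal on furniture")
print(results["hits"][0]["Title"])
```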
Open Source Routing Engine for OpenStreetMap
Chinese-English bilingual multimodal conversational language model
[EMNLP 2022] An Open Toolkit for Knowledge Graph Extraction and Construction
OmniGen: Unified Image Generation. https://arxiv.org/pdf/2409.11340
Making data higher-quality, juicier, and more digestible for foundation models! 🍎 🍋 🌽 ➡️ 🍸 🍹 🍷
[EMNLP 2024 🔥] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection
Represent, send, store and search multimodal data
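A minimal sketch of representing multimodal data with DocArray, assuming the 2.x BaseDoc/DocList API; the schema and URL are illustrative:

```python
from docarray import BaseDoc, DocList
from docarray.typing import ImageUrl

# A typed multimodal document: one text field, one image field.
class Photo(BaseDoc):
    caption: str
    url: ImageUrl

docs = DocList[Photo](
    [Photo(caption="a cat on a sofa", url="https://example.com/cat.jpg")]
)

# ImageUrl.load() downloads and decodes the image into an ndarray.
tensor = docs[0].url.load()
```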
GPT-4V-level open-source multi-modal model based on Llama3-8B
Mixture-of-Experts for Large Vision-Language Models
Project Page for "LISA: Reasoning Segmentation via Large Language Model"